Go 1.23: Interactive release notes
Go 1.23 is out, so it's a good time to explore what's new. The official release notes are pretty dry, so I prepared an interactive version with lots of examples showing what has changed and what the new behavior is.
Read on and see!
Iterators • Timers • Canonical values • Cookies • Copy directories • Slices • Atomics • Other stdlib changes • Tooling and runtime • Summary
I also provide links to the proposals (𝗣) and commits (𝗖𝗟) for the features described. Check them out for motivation and implementation details.
Iterators
Go 1.23 brings a lot of features related to sequence iteration. Let's go through them one by one.
Range iteration
An iterator is a function that passes successive elements of a sequence to a callback function. The function stops either when the sequence is finished or when the callback returns false, indicating to stop the iteration early.
A good example of an iterator in the pre-1.23 stdlib is the sync.Map.Range method, which iterates over a concurrent-safe map:
var m sync.Map
m.Store("alice", 11)
m.Store("bob", 12)
m.Store("cindy", 13)
m.Range(func(key, value any) bool {
fmt.Println(key, value)
return true
})
alice 11
bob 12
cindy 13
Starting with Go 1.23, you can use iterators in a for-range loop. So an explicit call:
// go 1.22
m.Range(func(key, val any) bool {
fmt.Println(key, val)
return true
})
alice 11
bob 12
cindy 13
Becomes an implicit call:
// go 1.23
for key, val := range m.Range {
fmt.Println(key, val)
}
alice 11
bob 12
cindy 13
The range clause in a for-range loop accepts any of the following function types:
func(func() bool)
func(func(K) bool)
func(func(K, V) bool)
See the spec for details.
Iterator types
The iterator types are formally defined in the new iter package:
type (
Seq[V any] func(yield func(V) bool)
Seq2[K, V any] func(yield func(K, V) bool)
)
Seq yields single values, while Seq2 yields pairs (as does sync.Map.Range).
The yield parameter name is just a convention: you can name it foo, bar, or whatever. The only important thing is the function signature.
Using the Seq and Seq2 types makes iterator definitions more concise. You can define a function that returns an iterator:
// Reversed returns an iterator that loops over a slice in reverse order.
func Reversed[V any](s []V) iter.Seq[V] {
return func(yield func(V) bool) {
for i := len(s) - 1; i >= 0; i-- {
if !yield(s[i]) {
return
}
}
}
}
And a function that consumes an iterator:
// PrintAll prints all elements in a sequence.
func PrintAll[V any](s iter.Seq[V]) {
for v := range s {
fmt.Print(v, " ")
}
fmt.Println()
}
And compose them in a convenient way:
func main() {
s := []int{1, 2, 3, 4, 5}
PrintAll(Reversed(s))
}
5 4 3 2 1
𝗣 61897 • 𝗖𝗟 543319, 557836, 565935, 565937, 591096
Pull iterators
Seq and Seq2 can be thought of as push iterators: they push values to the yield function.

Sometimes a range loop is not the preferred way to consume a sequence. In this case, you can convert a push iterator to a pull iterator using iter.Pull:
func main() {
s := []int{1, 2, 3, 4, 5}
// uses the Reversed iterator defined previously
next, stop := iter.Pull(Reversed(s))
defer stop()
for {
v, ok := next()
if !ok {
break
}
fmt.Print(v, " ")
}
}
5 4 3 2 1
Pull starts an iterator and returns a pair of functions, next and stop, which return the next value from the iterator and stop it, respectively. You call next to pull the next value from the iterator, hence the name.

If clients do not consume the sequence to completion, they must call stop, which allows the iterator function to finish and return. As shown in the example, the conventional way to ensure this is to use defer.
Slice iterators
The slices package adds several functions that work with iterators.
All returns an iterator over slice indexes and values:
s := []string{"a", "b", "c"}
for i, v := range slices.All(s) {
fmt.Printf("%d:%v ", i, v)
}
0:a 1:b 2:c
Values returns an iterator over slice elements:
s := []string{"a", "b", "c"}
for v := range slices.Values(s) {
fmt.Printf("%v ", v)
}
a b c
Backward returns an iterator that loops over a slice backward:
s := []string{"a", "b", "c"}
for i, v := range slices.Backward(s) {
fmt.Printf("%d:%v ", i, v)
}
2:c 1:b 0:a
Collect collects values from an iterator into a new slice:
s1 := []int{11, 12, 13}
s2 := slices.Collect(slices.Values(s1))
fmt.Println(s2)
[11 12 13]
AppendSeq appends values from an iterator to an existing slice:
s1 := []int{11, 12}
s2 := []int{13, 14}
s := slices.AppendSeq(s1, slices.Values(s2))
fmt.Println(s)
[11 12 13 14]
Sorted collects values from an iterator into a new slice, and then sorts the slice:
s1 := []int{13, 11, 12}
s2 := slices.Sorted(slices.Values(s1))
fmt.Println(s2)
[11 12 13]
SortedFunc is like Sorted but with a comparison function:
type person struct {
name string
age int
}
s1 := []person{{"cindy", 20}, {"alice", 25}, {"bob", 30}}
compare := func(p1, p2 person) int {
return cmp.Compare(p1.name, p2.name)
}
s2 := slices.SortedFunc(slices.Values(s1), compare)
fmt.Println(s2)
[{alice 25} {bob 30} {cindy 20}]
SortedStableFunc is like SortedFunc but uses a stable sort algorithm.
Chunk returns an iterator over consecutive sub-slices of up to n elements of a slice:
s := []int{1, 2, 3, 4, 5}
chunked := slices.Chunk(s, 2)
for v := range chunked {
fmt.Printf("%v ", v)
}
[1 2] [3 4] [5]
Map iterators
The maps package adds several functions that work with iterators:
All returns an iterator over key-value pairs from a map:
m := map[string]int{"a": 1, "b": 2, "c": 3}
for k, v := range maps.All(m) {
fmt.Printf("%v:%v ", k, v)
}
c:3 a:1 b:2
Keys returns an iterator over keys in a map:
m := map[string]int{"a": 1, "b": 2, "c": 3}
for k := range maps.Keys(m) {
fmt.Printf("%v ", k)
}
fmt.Println()
b c a
Values returns an iterator over values in a map:
m := map[string]int{"a": 1, "b": 2, "c": 3}
for v := range maps.Values(m) {
fmt.Printf("%v ", v)
}
fmt.Println()
2 3 1
Insert adds the key-value pairs from an iterator to an existing map (overwriting existing elements):
m1 := map[string]int{"a": 1, "b": 2}
m2 := map[string]int{"b": 12, "c": 3, "d": 4}
maps.Insert(m1, maps.All(m2))
fmt.Println(m1)
map[a:1 b:12 c:3 d:4]
Collect collects key-value pairs from an iterator into a new map and returns it:
m1 := map[string]int{"a": 1, "b": 2, "c": 3}
m2 := maps.Collect(maps.All(m1))
fmt.Println(m2)
map[a:1 b:2 c:3]
Timer changes
Go 1.23 makes two significant changes to the implementation of time.Timer and time.Ticker. The first is related to garbage collection, and the second to stop/reset behavior.
Garbage collection
Using time.After() in a loop in pre-1.23 Go can lead to significant memory usage. Consider this example:
// go 1.22
type token struct{}
func consumer(ctx context.Context, in <-chan token) {
for {
select {
case <-in:
// do stuff
case <-time.After(time.Hour):
// log warning
case <-ctx.Done():
return
}
}
}
The consumer reads tokens from the input channel and alerts if a value does not appear in a channel after an hour.
Let's write a client that measures the memory usage after 100K channel sends:
// go 1.22
func main() {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
tokens := make(chan token)
go consumer(ctx, tokens)
memBefore := getAlloc()
for range 100000 {
tokens <- token{}
}
memAfter := getAlloc()
memUsed := memAfter - memBefore
fmt.Printf("Memory used: %d KB\n", memUsed/1024)
}
Memory used: 20379 KB
What is getAlloc
// getAlloc returns the number of bytes of allocated
// heap objects (after garbage collection).
func getAlloc() uint64 {
var m runtime.MemStats
runtime.GC()
runtime.ReadMemStats(&m)
return m.Alloc
}
Behind the scenes, time.After creates a timer that is not freed until it expires. And since we are using a large timeout (one hour), the for loop essentially creates a myriad of timers that are not yet freed. Together, these timers use ≈20 MB of memory.
In Go 1.23, Timers and Tickers that are no longer referred to by the program become eligible for garbage collection immediately, even if their Stop methods have not been called. So a time.After in a loop will not pile up memory (for long):
// go 1.23
func consumer(ctx context.Context, in <-chan token) {
for {
select {
case <-in:
// do stuff
case <-time.After(time.Hour):
// log warning
case <-ctx.Done():
return
}
}
}
Memory used: 0 KB
Stop/reset behavior
Of course, the above time.After loop example still does a lot of allocations. So you might prefer to create a Timer once and Reset it on each iteration instead of creating new timers with time.After. The thing is, in pre-1.23 Go, the Reset method does not work as you might expect.
Consider this example:
// go 1.22
func main() {
const timeout = 10 * time.Millisecond
t := time.NewTimer(timeout)
time.Sleep(20 * time.Millisecond)
start := time.Now()
t.Reset(timeout)
<-t.C
fmt.Printf("Time elapsed: %dms\n", time.Since(start).Milliseconds())
// expected: Time elapsed: 10ms
// actual: Time elapsed: 0ms
}
Time elapsed: 0ms
The timer t has a timeout of 10ms, so after we've waited for 20ms, it has already expired and sent a value to the t.C channel. And since Reset does not drain the channel, <-t.C does not block and proceeds immediately. Also, since Reset restarted the timer, we'll see another value in t.C after 10ms.
Quoting the Go 1.22 stdlib documentation:
For a Timer created with NewTimer, Reset should be invoked only on stopped or expired timers with drained channels.
The reset problem is fixed in Go 1.23. Quoting the docs again:
The timer channel associated with a Timer or Ticker is now unbuffered, with capacity 0. The main effect of this change is that Go now guarantees that for any call to a Reset or Stop method, no stale values prepared before that call will be sent or received after the call.
Despite what the docs say, the timer channel is still buffered, but that's another story.
So Reset now works as you'd expect:
// go 1.23
func main() {
const timeout = 10 * time.Millisecond
t := time.NewTimer(timeout)
time.Sleep(20 * time.Millisecond)
start := time.Now()
t.Reset(timeout)
<-t.C
fmt.Printf("Time elapsed: %dms\n", time.Since(start).Milliseconds())
}
Time elapsed: 10ms
Both of the new behaviors (garbage collection and stop/reset) are only enabled when the main go.mod file specifies Go version 1.23 or later. When Go 1.23 builds older programs, the old behaviors remain in effect.
Canonical values
The new unique package provides facilities for canonicalizing values (also known as "interning" or "hash-consing").
Let's say we have a random word generator:
// wordGen returns a generator of random words of length wordLen
// from a set of nDistinct unique words.
func wordGen(nDistinct, wordLen int) func() string {
vocab := make([]string, nDistinct)
for i := range nDistinct {
word := randomString(wordLen)
vocab[i] = word
}
return func() string {
word := vocab[rand.Intn(nDistinct)]
return strings.Clone(word)
}
}
// randomString returns a random string of length n.
func randomString(n int) string {
// omitted for brevity
}
And we use it to generate 10,000 words from a 100-word vocabulary:
var words []string
func main() {
const nWords = 10000
const nDistinct = 100
const wordLen = 40
generate := wordGen(nDistinct, wordLen)
memBefore := getAlloc()
// store words
words = make([]string, nWords)
for i := range nWords {
words[i] = generate()
}
memAfter := getAlloc()
memUsed := memAfter - memBefore
fmt.Printf("Memory used: %d KB\n", memUsed/1024)
}
Memory used: 622 KB
10K words take up about 600 KB in the heap.
Now let's use unique.Handle to assign a "handle" (descriptor) to each unique word, and store these handles instead of the words themselves:
var words []unique.Handle[string]
func main() {
const nWords = 10000
const nDistinct = 100
const wordLen = 40
generate := wordGen(nDistinct, wordLen)
memBefore := getAlloc()
// store word handles
words = make([]unique.Handle[string], nWords)
for i := range nWords {
words[i] = unique.Make(generate())
}
memAfter := getAlloc()
memUsed := memAfter - memBefore
fmt.Printf("Memory used: %d KB\n", memUsed/1024)
}
Memory used: 95 KB
Memory consumption is reduced to 100 KB — a 6x improvement! As you can imagine, the more words we generate, the greater the gains (given a fixed vocabulary).
The Make function can canonicalize any value of a comparable type. It creates a reference to a canonical copy of the value in the form of a Handle.

Two Handles are equal if and only if the values used to create them are equal, allowing programs to deduplicate values and reduce their memory footprint. Comparing two Handle values is efficient, reducing down to a simple pointer comparison.
Internally, unique maintains a global concurrent-safe cache of all added values, guaranteeing their uniqueness and efficient reuse.
HTTP Cookies
There are a number of changes in the net/http package related to cookie handling.
The ParseCookie function parses a Cookie header value and returns all the cookies set within it:
line := "session_id=abc123; dnt=1; lang=en; lang=de"
cookies, err := http.ParseCookie(line)
if err != nil {
panic(err)
}
for _, cookie := range cookies {
fmt.Printf("%s: %s\n", cookie.Name, cookie.Value)
}
session_id: abc123
dnt: 1
lang: en
lang: de
Since the same cookie name can appear multiple times, the returned slice can contain more than one cookie with a given name.
The ParseSetCookie function parses a Set-Cookie header value and returns a cookie:
line := "session_id=abc123; SameSite=None; Secure; Partitioned; Path=/; Domain=.example.com"
cookie, err := http.ParseSetCookie(line)
if err != nil {
panic(err)
}
fmt.Println("Name:", cookie.Name)
fmt.Println("Value:", cookie.Value)
fmt.Println("Path:", cookie.Path)
fmt.Println("Domain:", cookie.Domain)
fmt.Println("Secure:", cookie.Secure)
fmt.Println("Partitioned:", cookie.Partitioned)
Name: session_id
Value: abc123
Path: /
Domain: .example.com
Secure: true
Partitioned: true
The Partitioned field identifies cookies with the Partitioned attribute, which restricts the cookie's scope to a specific partition of the browsing context, such as a top-level site or a specific subdomain.
The Quoted field indicates whether the value was originally quoted:
line := `name="Alice Zakas";`
cookie, err := http.ParseSetCookie(line)
if err != nil {
panic(err)
}
fmt.Println("Name:", cookie.Name)
fmt.Println("Value:", cookie.Value)
fmt.Println("Quoted:", cookie.Quoted)
Name: name
Value: Alice Zakas
Quoted: true
The Request.CookiesNamed method retrieves all cookies that match the given name:
func handler(w http.ResponseWriter, r *http.Request) {
cookies := r.CookiesNamed("session")
if len(cookies) > 0 {
fmt.Fprintf(w, "session cookie = %s", cookies[0].Value)
} else {
fmt.Fprint(w, "session cookie not found")
}
}
func main() {
req := httptest.NewRequest("GET", "/", nil)
req.AddCookie(&http.Cookie{Name: "session", Value: "abc123"})
w := httptest.NewRecorder()
handler(w, req)
resp := w.Result()
body, _ := io.ReadAll(resp.Body)
fmt.Println(string(body))
}
session cookie = abc123
Copy directories
The os.CopyFS function copies files and directories recursively with a single call:
src := os.DirFS("/etc/cron.d")
dst := "/tmp/cron.d"
err := os.CopyFS(dst, src)
if err != nil {
panic(err)
}
fmt.Printf("copied %s to %s\n", src, dst)
copied /etc/cron.d to /tmp/cron.d
Slices
The slices.Repeat function returns a new slice that repeats the provided slice the given number of times:
s := []int{1, 2}
r := slices.Repeat(s, 3)
fmt.Println(r)
[1 2 1 2 1 2]
Atomics
The new And and Or methods on atomic types atomically apply a bitwise AND or OR to the stored value, returning the old value:
const (
modeRead = 0b100
modeWrite = 0b010
modeExec = 0b001
)
var mode atomic.Int32
mode.Store(modeRead)
old := mode.Or(modeWrite)
fmt.Printf("mode: %b -> %b\n", old, mode.Load())
mode: 100 -> 110
Other stdlib changes
Tooling and runtime
Tools
The Go toolchain can collect telemetry: usage and breakage statistics that help the Go team understand how the toolchain is being used. Telemetry is opt-in, controlled by the go telemetry command. By default, the toolchain only collects statistics in local files (go telemetry local).
- The new go env -changed flag causes the command to print only those settings whose effective value differs from the default value.
- The new go mod tidy -diff flag causes the command not to modify the files, but to print the necessary changes as a unified diff.
- The go list -m -json command now includes Sum and GoModSum fields, similar to go mod download -json.
- The new godebug directive in go.mod and go.work declares a GODEBUG setting to apply for the work module or workspace in use.
- The GOROOT_FINAL environment variable no longer has any effect.
The trace tool now better tolerates partially broken traces by attempting to recover what trace data it can.
Vet: the go vet subcommand now flags references to symbols that are too new for the Go version in effect in the referring file.
Cgo: cmd/cgo supports the new -ldflags flag for passing flags to the C linker. The go command uses it automatically.
Runtime
The traceback printed by the runtime after an unhandled panic or other fatal error now indents the second and subsequent lines of the error message (e.g., the argument to the panic) with a single tab to distinguish it from the stack trace of the first goroutine.
Before:
// go 1.22
panic("what\nhave\nI\ndone")
panic: what
have
I
done
goroutine 1 [running]:
main.main()
/sandbox/src/main.go:14 +0x25 (exit status 2)
After:
// go 1.23
panic("what\nhave\nI\ndone")
panic: what
	have
	I
	done
goroutine 1 [running]:
main.main()
/sandbox/src/main.go:14 +0x25 (exit status 2)
Compiler
The build time overhead to building with Profile Guided Optimization has been reduced significantly. Previously, large builds could see 100%+ build time increase from enabling PGO. In Go 1.23, overhead should be in the single digit percentages.
Summary
Go 1.23 is all-in on iterators, the feature intended to provide a standard way of working with sequences of values in both stdlib and third-party packages. It also fixes a long-standing problem with stopping and resetting timers. Value interning can be helpful when working with a limited number of unique values, and other features such as cookie handling and filesystem copying will also come in handy.
Here are the major features we covered: range-over-function iterators, the timer garbage collection and reset changes, and canonical values with the unique package. And the minor ones: cookie parsing in net/http, directory copying with os.CopyFS, slices.Repeat, and the atomic And and Or methods.
──
P.S. Interactive examples in this post are powered by codapi — an open source tool I'm building. Use it to embed live code snippets into your product docs, online course or blog.