Go 1.25 interactive tour
Go 1.25 is scheduled for release in August, so it's a good time to explore what's new. The official release notes are pretty dry, so I prepared an interactive version with lots of examples showing what has changed and what the new behavior is.
Read on and see!
synctest • json/v2 • GOMAXPROCS • New GC • Anti-CSRF • WaitGroup.Go • FlightRecorder • os.Root • reflect.TypeAssert • T.Attr • slog.GroupAttrs • hash.Cloner
This article is based on the official release notes from The Go Authors, licensed under the BSD-3-Clause license. This is not an exhaustive list; see the official release notes for that.
I provide links to the proposals (𝗣) and commits (𝗖𝗟) for the features described. Check them out for motivation and implementation details.
Error handling is often skipped to keep things simple. Don't do this in production ツ
# Synthetic time for testing
Suppose we have a function that waits for a value from a channel for one minute, then times out:
// Read reads a value from the input channel and returns it.
// Times out after 60 seconds.
func Read(in chan int) (int, error) {
select {
case v := <-in:
return v, nil
case <-time.After(60 * time.Second):
return 0, errors.New("timeout")
}
}
We use it like this:
func main() {
ch := make(chan int)
go func() {
ch <- 42
}()
val, err := Read(ch)
fmt.Printf("val=%v, err=%v\n", val, err)
}
val=42, err=<nil>
How do we test the timeout situation? Surely we don't want the test to actually wait 60 seconds. We could make the timeout a parameter (we probably should), but let's say we prefer not to.
The new synctest package to the rescue! The synctest.Test function runs a function in an isolated "bubble". Within the bubble, the time package uses a fake clock, allowing our test to pass instantly:
func TestReadTimeout(t *testing.T) {
synctest.Test(t, func(t *testing.T) {
ch := make(chan int)
_, err := Read(ch)
if err == nil {
t.Fatal("expected timeout error, got nil")
}
})
}
PASS
The initial time in the bubble is midnight UTC 2000-01-01. Time advances when every goroutine in the bubble is blocked. In our example, when the only goroutine is blocked on select in Read, the bubble's clock advances 60 seconds, triggering the timeout case.
Keep in mind that the t passed to the inner function isn't exactly the usual testing.T. In particular, you should never call T.Run, T.Parallel, or T.Deadline on it:
func TestSubtest(t *testing.T) {
synctest.Test(t, func(t *testing.T) {
t.Run("subtest", func(t *testing.T) {
t.Log("ok")
})
})
}
panic: testing: t.Run called inside synctest bubble [recovered, repanicked]
So, no inner tests inside the bubble.
Another useful function is synctest.Wait. It waits for all goroutines in the bubble to block, then resumes execution:
func TestWait(t *testing.T) {
synctest.Test(t, func(t *testing.T) {
var innerStarted bool
done := make(chan struct{})
go func() {
innerStarted = true
time.Sleep(time.Second)
close(done)
}()
// Wait for the inner goroutine to block on time.Sleep.
synctest.Wait()
// innerStarted is guaranteed to be true here.
fmt.Printf("inner started: %v\n", innerStarted)
<-done
})
}
inner started: true
Try commenting out the Wait() call and see how the innerStarted value changes.
The testing/synctest package was first introduced as experimental in version 1.24. It's now considered stable and ready to use. Note that the Run function, which was added in 1.24, is now deprecated. You should use Test instead.
𝗣 67434, 73567 • 𝗖𝗟 629735, 629856, 671961
# JSON v2
The second version of the json package is a big update, and it has a lot of breaking changes. That's why I wrote a separate post with a detailed review of what's changed and lots of interactive examples.
Here, I'll just show one of the most impressive features.
With json/v2 you're no longer limited to just one way of marshaling a specific type. Now you can use custom marshalers and unmarshalers whenever you need, with the generic MarshalToFunc and UnmarshalFromFunc functions.
For example, we can marshal boolean values (true/false) and "boolean-like" strings (on/off) to ✓ or ✗ — all without creating a single custom type!
First we define a custom marshaler for bool values:
// Marshals boolean values to ✓ or ✗.
boolMarshaler := json.MarshalToFunc(
func(enc *jsontext.Encoder, val bool) error {
if val {
return enc.WriteToken(jsontext.String("✓"))
}
return enc.WriteToken(jsontext.String("✗"))
},
)
Then we define a custom marshaler for bool-like strings:
// Marshals boolean-like strings to ✓ or ✗.
strMarshaler := json.MarshalToFunc(
func(enc *jsontext.Encoder, val string) error {
if val == "on" || val == "true" {
return enc.WriteToken(jsontext.String("✓"))
}
if val == "off" || val == "false" {
return enc.WriteToken(jsontext.String("✗"))
}
// SkipFunc is a special type of error that tells Go to skip
// the current marshaler and move on to the next one. In our case,
// the next one will be the default marshaler for strings.
return json.SkipFunc
},
)
Finally, we combine marshalers with JoinMarshalers and pass them to the marshaling function using the WithMarshalers option:
// Combine custom marshalers with JoinMarshalers.
marshalers := json.JoinMarshalers(boolMarshaler, strMarshaler)
// Marshal some values.
vals := []any{true, "off", "hello"}
data, err := json.Marshal(vals, json.WithMarshalers(marshalers))
fmt.Println(string(data), err)
["✓","✗","hello"] <nil>
Isn't that cool?
There are plenty of other goodies, like support for I/O readers and writers, nested objects inlining, a plethora of options, and a huge performance boost. So, again, I encourage you to check out the post dedicated to v2 changes.
# Container-aware GOMAXPROCS
The GOMAXPROCS runtime setting controls the maximum number of operating system threads the Go scheduler can use to execute goroutines concurrently. Since Go 1.5, it defaults to the value of runtime.NumCPU, which is the number of logical CPUs on the machine (strictly speaking, this is either the total number of logical CPUs or the number allowed by the CPU affinity mask, whichever is lower).
For example, on my 8-core laptop, the default value of GOMAXPROCS is also 8:
maxProcs := runtime.GOMAXPROCS(0) // returns the current value
fmt.Println("NumCPU:", runtime.NumCPU())
fmt.Println("GOMAXPROCS:", maxProcs)
NumCPU: 8
GOMAXPROCS: 8
Go programs often run in containers, like those managed by Docker or Kubernetes. These systems let you limit the CPU resources for a container using a Linux feature called cgroups.
A cgroup (control group) in Linux lets you group processes together and control how much CPU, memory, and network I/O they can use by setting limits and priorities.
For example, here's how you can limit a Docker container to use only four CPUs:
docker run --cpus=4 golang:1.24-alpine go run /app/nproc.go
Before version 1.25, the Go runtime didn't consider the CPU quota when setting the GOMAXPROCS value. No matter how you limited CPU resources, GOMAXPROCS was always set to the number of logical CPUs on the host machine:
docker run --cpus=4 golang:1.24-alpine go run /app/nproc.go
NumCPU: 8
GOMAXPROCS: 8
Starting with version 1.25, the Go runtime respects the CPU quota:
docker run --cpus=4 golang:1.25rc1-alpine go run /app/nproc.go
NumCPU: 8
GOMAXPROCS: 4
The default value of GOMAXPROCS is now set to either the number of logical CPUs or the CPU limit enforced by cgroup settings for the process, whichever is lower.
Fractional CPU limits are rounded up:
docker run --cpus=2.3 golang:1.25rc1-alpine go run /app/nproc.go
NumCPU: 8
GOMAXPROCS: 3
On a machine with multiple CPUs, the minimum default value for GOMAXPROCS is 2, even if the CPU limit is set lower:
docker run --cpus=1 golang:1.25rc1-alpine go run /app/nproc.go
NumCPU: 8
GOMAXPROCS: 2
The Go runtime automatically updates GOMAXPROCS if the CPU limit changes. The current implementation updates up to once per second (less often if the application is idle).
Note on CPU limits
Cgroups actually offer not just one, but two ways to limit CPU resources:
- CPU quota — the maximum CPU time the cgroup may use within some period window.
- CPU shares — relative CPU priorities given to the kernel scheduler.
Docker's --cpus and --cpu-period/--cpu-quota set the quota, while --cpu-shares sets the shares.
Kubernetes' CPU limit sets the quota, while CPU request sets the shares.
Go's runtime GOMAXPROCS only takes the CPU quota into account, not the shares.
You can set GOMAXPROCS manually using the runtime.GOMAXPROCS function. In this case, the runtime will use the value you provide and won't try to change it:
runtime.GOMAXPROCS(4)
fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))
GOMAXPROCS: 4
You can also undo any manual changes you made — whether by setting the GOMAXPROCS environment variable or calling runtime.GOMAXPROCS() — and return to the default value chosen by the runtime. To do this, use the new runtime.SetDefaultGOMAXPROCS function:
GOMAXPROCS=2 go1.25rc1 run nproc.go
// Using the environment variable.
fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))
// Using the manual setting.
runtime.GOMAXPROCS(4)
fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))
// Back to the default value.
runtime.SetDefaultGOMAXPROCS()
fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))
GOMAXPROCS: 2
GOMAXPROCS: 4
GOMAXPROCS: 8
To provide backward compatibility, the new GOMAXPROCS behavior only takes effect if your program requires Go 1.25 or higher in its go.mod. You can also turn it off manually using these GODEBUG settings:
- containermaxprocs=0 to ignore the cgroup CPU quota.
- updatemaxprocs=0 to prevent GOMAXPROCS from updating automatically.
𝗣 73193 • 𝗖𝗟 668638, 670497, 672277, 677037
# Green Tea garbage collector
Some of us Go folks used to joke about Java and its many garbage collectors, each worse than the last. Now the joke's on us — there's a new experimental garbage collector coming to Go.
Green Tea is a garbage collecting algorithm optimized for programs that create lots of small objects and run on modern computers with many CPU cores.
The current GC scans memory in a way that jumps around a lot, which is slow because it causes many delays waiting for memory. The problem gets even worse on high-performance systems with many cores and non-uniform memory architectures, where each CPU or group of CPUs has its own "local" RAM.
Green Tea takes a different approach. Instead of scanning individual small objects, it scans memory in much larger, contiguous blocks — spans. Each span contains many small objects of the same size. By working with bigger blocks, the GC can scan more efficiently and make better use of the CPU's memory cache.
Benchmark results vary, but the Go team expects a 10–40% reduction in garbage collection overhead in real-world programs that heavily use the GC.
I ran an informal test by doing 1,000,000 reads and writes to Redka (my Redis clone written in Go), and observed similar GC pause times with both the old and new GC algorithms. But Redka probably isn't the best example here because it relies heavily on SQLite, and the Go part is pretty minimal.
The new garbage collector is experimental and can be enabled by setting GOEXPERIMENT=greenteagc at build time. The design and implementation of the GC may change in future releases. For more information or to give feedback, see the proposal.
# CSRF protection
The new http.CrossOriginProtection type implements protection against cross-site request forgery (CSRF) by rejecting non-safe cross-origin browser requests.
It detects cross-origin requests in these ways:
- By checking the Sec-Fetch-Site header, if it's present.
- By comparing the hostname in the Origin header to the one in the Host header.
Here's an example where we enable CrossOriginProtection and explicitly allow some extra origins:
// Register some handlers.
mux := http.NewServeMux()
mux.HandleFunc("GET /get", func(w http.ResponseWriter, req *http.Request) {
io.WriteString(w, "ok\n")
})
mux.HandleFunc("POST /post", func(w http.ResponseWriter, req *http.Request) {
io.WriteString(w, "ok\n")
})
// Configure protection against CSRF attacks.
antiCSRF := http.NewCrossOriginProtection()
antiCSRF.AddTrustedOrigin("https://example.com")
antiCSRF.AddTrustedOrigin("https://*.example.com")
// Add CSRF protection to all handlers.
srv := http.Server{
Addr: ":8080",
Handler: antiCSRF.Handler(mux),
}
log.Fatal(srv.ListenAndServe())
Now, if the browser sends a request from the same domain the server is using, the server will allow it:
curl --data "ok" -H "sec-fetch-site:same-origin" localhost:8080/post
ok
If the browser sends a cross-origin request, the server will reject it:
curl --data "ok" -H "sec-fetch-site:cross-site" localhost:8080/post
cross-origin request detected from Sec-Fetch-Site header
If the Origin header doesn't match the Host header, the server will reject the request:
curl --data "ok" \
-H "origin:https://evil.com" \
-H "host:antonz.org" \
localhost:8080/post
cross-origin request detected, and/or browser is out of date:
Sec-Fetch-Site is missing, and Origin does not match Host
If the request comes from a trusted origin, the server will allow it:
curl --data "ok" \
-H "origin:https://example.com" \
-H "host:antonz.org" \
localhost:8080/post
ok
The server will always allow GET, HEAD, and OPTIONS methods because they are safe:
curl -H "origin:https://evil.com" localhost:8080/get
ok
The server will always allow requests without Sec-Fetch-Site or Origin headers (by design):
curl --data "ok" localhost:8080/post
ok
# Go wait group, go!
Everyone knows how to run a goroutine with a wait group:
var wg sync.WaitGroup
wg.Add(1)
go func() {
defer wg.Done()
fmt.Println("go is awesome")
}()
wg.Add(1)
go func() {
defer wg.Done()
fmt.Println("cats are cute")
}()
wg.Wait()
fmt.Println("done")
cats are cute
go is awesome
done
The new WaitGroup.Go method automatically increments the wait group counter, runs a function in a goroutine, and decrements the counter when it's done. This means we can rewrite the example above without using wg.Add() and wg.Done():
var wg sync.WaitGroup
wg.Go(func() {
fmt.Println("go is awesome")
})
wg.Go(func() {
fmt.Println("cats are cute")
})
wg.Wait()
fmt.Println("done")
cats are cute
go is awesome
done
The implementation is just what you'd expect:
// https://github.com/golang/go/blob/master/src/sync/waitgroup.go
func (wg *WaitGroup) Go(f func()) {
wg.Add(1)
go func() {
defer wg.Done()
f()
}()
}
It's curious that it took the Go team 13 years to add a simple "Add+Done" wrapper. But hey, better late than never!
# Flight recording
Flight recording is a tracing technique that collects execution data, such as function calls and memory allocations, within a sliding window that's limited by size or duration. It helps to record traces of interesting program behavior, even if you don't know in advance when it will happen.
The new trace.FlightRecorder type implements a flight recorder in Go. It tracks a moving window over the execution trace produced by the runtime, always containing the most recent trace data.
Here's an example of how you might use it.
First, configure the sliding window:
// Configure the flight recorder to keep
// at least 5 seconds of trace data,
// with a maximum buffer size of 3MB.
// Both of these are hints, not strict limits.
cfg := trace.FlightRecorderConfig{
MinAge: 5 * time.Second,
MaxBytes: 3 << 20, // 3MB
}
Then create the recorder and start it:
// Create and start the flight recorder.
rec := trace.NewFlightRecorder(cfg)
rec.Start()
defer rec.Stop()
Continue with the application code as usual:
// Simulate some workload.
done := make(chan struct{})
go func() {
defer close(done)
const n = 1 << 20
var s []int
for range n {
s = append(s, rand.IntN(n))
}
fmt.Printf("done filling slice of %d elements\n", len(s))
}()
<-done
Finally, save the trace snapshot to a file when an important event occurs:
// Save the trace snapshot to a file.
file, _ := os.Create("/tmp/trace.out")
defer file.Close()
n, _ := rec.WriteTo(file)
fmt.Printf("wrote %dB to trace file\n", n)
done filling slice of 1048576 elements
wrote 8441B to trace file
Use the go tool to view the trace in the browser:
go1.25rc1 tool trace /tmp/trace.out
# More Root methods
The os.Root type, which limits filesystem operations to a specific directory, now supports several new methods similar to functions already found in the os package.
Chmod changes the mode of a file:
root, _ := os.OpenRoot("data")
root.Chmod("01.txt", 0600)
finfo, _ := root.Stat("01.txt")
fmt.Println(finfo.Mode().Perm())
-rw-------
Chown changes the numeric user ID (uid) and group ID (gid) of a file:
root, _ := os.OpenRoot("data")
root.Chown("01.txt", 1000, 1000)
finfo, _ := root.Stat("01.txt")
stat := finfo.Sys().(*syscall.Stat_t)
fmt.Printf("uid=%d, gid=%d\n", stat.Uid, stat.Gid)
uid=1000, gid=1000
Chtimes changes the access and modification times of a file:
root, _ := os.OpenRoot("data")
mtime := time.Date(2020, 1, 1, 0, 0, 0, 0, time.UTC)
atime := time.Now()
root.Chtimes("01.txt", atime, mtime)
finfo, _ := root.Stat("01.txt")
fmt.Println(finfo.ModTime())
2020-01-01 00:00:00 +0000 UTC
Link creates a hard link to a file:
root, _ := os.OpenRoot("data")
root.Link("01.txt", "hardlink.txt")
finfo, _ := root.Stat("hardlink.txt")
fmt.Println(finfo.Name())
hardlink.txt
MkdirAll creates a new directory and any parent directories that don't already exist:
const dname = "path/to/secret"
root, _ := os.OpenRoot("data")
root.MkdirAll(dname, 0750)
finfo, _ := root.Stat(dname)
fmt.Println(dname, finfo.Mode())
path/to/secret drwxr-x---
RemoveAll removes a file or a directory and any children it contains:
root, _ := os.OpenRoot("data")
root.RemoveAll("01.txt")
finfo, err := root.Stat("01.txt")
fmt.Println(finfo, err)
<nil> statat 01.txt: no such file or directory
Rename renames (moves) a file or a directory:
const oldname = "01.txt"
const newname = "go.txt"
root, _ := os.OpenRoot("data")
root.Rename(oldname, newname)
_, err := root.Stat(oldname)
fmt.Println(err)
finfo, _ := root.Stat(newname)
fmt.Println(finfo.Name())
statat 01.txt: no such file or directory
go.txt
Symlink creates a symbolic link to a file. Readlink returns the destination of the symbolic link:
const lname = "symlink.txt"
root, _ := os.OpenRoot("data")
root.Symlink("01.txt", lname)
lpath, _ := root.Readlink(lname)
fmt.Println(lname, "->", lpath)
symlink.txt -> 01.txt
WriteFile writes data to a file, creating it if necessary. ReadFile reads the file and returns its contents:
const fname = "go.txt"
root, _ := os.OpenRoot("data")
root.WriteFile(fname, []byte("go is awesome"), 0644)
content, _ := root.ReadFile(fname)
fmt.Printf("%s: %s\n", fname, content)
go.txt: go is awesome
Since there are now plenty of os.Root methods, you probably don't need the file-related os functions most of the time. This can make working with files much safer.
Speaking of file systems, the ones returned by os.DirFS() (a file system rooted at the given directory) and os.Root.FS() (a file system for the tree of files in the root) both implement the new fs.ReadLinkFS interface. It has two methods — ReadLink and Lstat.
ReadLink returns the destination of the symbolic link:
const lname = "symlink.txt"
root, _ := os.OpenRoot("data")
root.Symlink("01.txt", lname)
fsys := root.FS().(fs.ReadLinkFS)
lpath, _ := fsys.ReadLink(lname)
fmt.Println(lname, "->", lpath)
symlink.txt -> 01.txt
I have to say, the inconsistent naming between os.Root.Readlink and fs.ReadLinkFS.ReadLink is quite surprising. Maybe it's not too late to fix?
Lstat returns the information about a file or a symbolic link:
fsys := os.DirFS("data").(fs.ReadLinkFS)
finfo, _ := fsys.Lstat("01.txt")
fmt.Printf("name: %s\n", finfo.Name())
fmt.Printf("size: %dB\n", finfo.Size())
fmt.Printf("mode: %s\n", finfo.Mode())
fmt.Printf("mtime: %s\n", finfo.ModTime().Format(time.DateOnly))
name: 01.txt
size: 11B
mode: -rw-r--r--
mtime: 2025-06-22
That's it for the os package!
𝗣 49580, 67002, 73126 • 𝗖𝗟 645718, 648295, 649515, 649536, 658995, 659416, 659757, 660635, 661595, 674116, 674315, 676135
# Reflective type assertion
To convert a reflect.Value back to a specific type, you typically use the Value.Interface() method combined with a type assertion:
alice := &Person{"Alice", 25}
// Given a reflection Value...
aliceVal := reflect.ValueOf(alice).Elem()
// ...convert it back to the Person type.
person, _ := aliceVal.Interface().(Person)
fmt.Printf("Name: %s, Age: %d\n", person.Name, person.Age)
Name: Alice, Age: 25
Now you can use the new generic reflect.TypeAssert function instead:
alice := &Person{"Alice", 25}
// Given a reflection Value...
aliceVal := reflect.ValueOf(alice).Elem()
// ...convert it back to the Person type.
person, _ := reflect.TypeAssert[Person](aliceVal)
fmt.Printf("Name: %s, Age: %d\n", person.Name, person.Age)
Name: Alice, Age: 25
It's more idiomatic and avoids unnecessary memory allocations, since the value is never boxed in an interface.
# Test attributes and friends
With the new T.Attr method, you can add extra test information, like a link to an issue, a test description, or anything else you need to analyze the test results:
func TestAttrs(t *testing.T) {
t.Attr("issue", "demo-1234")
t.Attr("description", "Testing for the impossible")
if 21*2 != 42 {
t.Fatal("What in the world happened to math?")
}
}
=== RUN TestAttrs
=== ATTR TestAttrs issue demo-1234
=== ATTR TestAttrs description Testing for the impossible
--- PASS: TestAttrs (0.00s)
Attributes can be especially useful in JSON format if you send the test output to a CI or other system for automatic processing:
go1.25rc1 test -json -run=.
...
{
"Time":"2025-06-25T20:34:16.831401+00:00",
"Action":"attr",
"Package":"sandbox",
"Test":"TestAttrs",
"Key":"issue",
"Value":"demo-1234"
}
...
{
"Time":"2025-06-25T20:34:16.831415+00:00",
"Action":"attr",
"Package":"sandbox",
"Test":"TestAttrs",
"Key":"description",
"Value":"Testing for the impossible"
}
...
The output is formatted to make it easier to read.
The same Attr method is also available on testing.B and testing.F.
The new T.Output method lets you access the output stream (io.Writer) used by the test. This can be helpful if you want to send your application log to the test log stream, making it easier to read or analyze automatically:
func TestLog(t *testing.T) {
t.Log("test message 1")
t.Log("test message 2")
appLog := slog.New(slog.NewTextHandler(t.Output(), nil))
appLog.Info("app message")
}
=== RUN TestLog
main_test.go:12: test message 1
main_test.go:13: test message 2
time=2025-06-25T16:14:34.085Z level=INFO msg="app message"
--- PASS: TestLog (0.00s)
The same Output method is also available on testing.B and testing.F.
Last but not least, the testing.AllocsPerRun function will now panic if parallel tests are running.
Compare 1.24 behavior:
// go 1.24
func TestAllocs(t *testing.T) {
t.Parallel()
allocs := testing.AllocsPerRun(100, func() {
var s []int
// Do some allocations.
for i := range 1024 {
s = append(s, i)
}
})
t.Log("Allocations per run:", allocs)
}
=== RUN TestAllocs
=== PAUSE TestAllocs
=== CONT TestAllocs
main_test.go:21: Allocations per run: 12
--- PASS: TestAllocs (0.00s)
With 1.25:
// go 1.25
func TestAllocs(t *testing.T) {
t.Parallel()
allocs := testing.AllocsPerRun(100, func() {
var s []int
// Do some allocations.
for i := range 1024 {
s = append(s, i)
}
})
t.Log("Allocations per run:", allocs)
}
=== RUN TestAllocs
=== PAUSE TestAllocs
=== CONT TestAllocs
--- FAIL: TestAllocs (0.00s)
panic: testing: AllocsPerRun called during parallel test [recovered, repanicked]
The thing is, the result of AllocsPerRun is inherently flaky if other tests are running in parallel. That's why there's the new panicking behavior — it should help catch these kinds of bugs.
# Grouped attributes for logging
With structured logging, you often group related attributes under a single key:
logger.Info("deposit",
slog.Bool("ok", true),
slog.Group("amount",
slog.Int("value", 1000),
slog.String("currency", "USD"),
),
)
msg=deposit ok=true amount.value=1000 amount.currency=USD
It works just fine — unless you want to gather the attributes first:
attrs := []slog.Attr{
slog.Int("value", 1000),
slog.String("currency", "USD"),
}
logger.Info("deposit",
slog.Bool("ok", true),
slog.Group("amount", attrs...),
)
cannot use attrs (variable of type []slog.Attr)
as []any value in argument to slog.Group
(exit status 1)
slog.Group expects a slice of any values, so it doesn't accept a slice of slog.Attrs.
The new slog.GroupAttrs function fixes this issue by creating a group from the given slog.Attrs:
attrs := []slog.Attr{
slog.Int("value", 1000),
slog.String("currency", "USD"),
}
logger.Info("deposit",
slog.Bool("ok", true),
slog.GroupAttrs("amount", attrs...),
)
msg=deposit ok=true amount.value=1000 amount.currency=USD
Not a big deal, but can be quite handy.
# Hash cloner
The new hash.Cloner interface defines a hash function that can return a copy of its current state:
// https://github.com/golang/go/blob/master/src/hash/hash.go
type Cloner interface {
Hash
Clone() (Cloner, error)
}
All standard library hash.Hash implementations now provide the Clone method, including MD5, SHA-1, SHA-3, FNV-1, CRC-64, and others:
h1 := sha3.New256()
h1.Write([]byte("hello"))
clone, _ := h1.Clone()
h2 := clone.(*sha3.SHA3)
// h2 has the same state as h1, so it will produce
// the same hash after writing the same data.
h1.Write([]byte("world"))
h2.Write([]byte("world"))
fmt.Printf("h1: %x\n", h1.Sum(nil))
fmt.Printf("h2: %x\n", h2.Sum(nil))
fmt.Printf("h1 == h2: %t\n", reflect.DeepEqual(h1, h2))
h1: 92dad9443e4dd6d70a7f11872101ebff87e21798e4fbb26fa4bf590eb440e71b
h2: 92dad9443e4dd6d70a7f11872101ebff87e21798e4fbb26fa4bf590eb440e71b
h1 == h2: true
# Final thoughts
Go 1.25 finalizes support for testing concurrent code, introduces a major experimental JSON package, and improves the runtime with a new GOMAXPROCS design and garbage collector. It also adds a flight recorder, modern CSRF protection, a long-awaited wait group shortcut, and several other improvements.
All in all, a great release!
P.S. To catch up on other Go releases, check out the Go features by version list or explore the interactive tours for Go 1.24 and 1.23.
P.P.S. Want to learn more about Go? Check out my interactive book on concurrency.
★ Subscribe to keep up with new posts.