Go 1.26 interactive tour
Go 1.26 is coming out in February, so it's a good time to explore what's new. The official release notes are pretty dry, so I prepared an interactive version with lots of examples showing what has changed and what the new behavior is.
Read on and see!
new(expr) • Recursive type constraints • Type-safe error checking • Green Tea GC • Faster cgo and syscalls • Faster memory allocation • Vectorized operations • Secret mode • Reader-less cryptography • Hybrid public key encryption • Goroutine leak profile • Goroutine metrics • Reflective iterators • Peek into a buffer • Process handle • Signal as cause • Compare IP subnets • Context-aware dialing • Fake example.com • Optimized fmt.Errorf • Optimized io.ReadAll • Multiple log handlers • Test artifacts • Modernized go fix • Final thoughts
This article is based on the official release notes from The Go Authors and the Go source code, licensed under the BSD-3-Clause license. This is not an exhaustive list; see the official release notes for that.
I provide links to the documentation (𝗗), proposals (𝗣), commits (𝗖𝗟), and authors (𝗔) for the features described. Check them out for motivation, usage, and implementation details. I also have dedicated guides (𝗚) for some of the features.
Error handling is often skipped to keep things simple. Don't do this in production ツ
# new(expr)
Previously, you could only use the new built-in with types:
p := new(int)
*p = 42
fmt.Println(*p)
42
Now you can also use it with expressions:
// Pointer to an int variable with the value 42.
p := new(42)
fmt.Println(*p)
42
If the argument expr is an expression of type T, then new(expr) allocates a variable of type T, initializes it to the value of expr, and returns its address, a value of type *T.
This feature is especially helpful if you use pointer fields in a struct to represent optional values that you marshal to JSON or Protobuf:
type Cat struct {
Name string `json:"name"`
Fed *bool `json:"is_fed"` // you can never be sure
}
cat := Cat{Name: "Mittens", Fed: new(true)}
data, _ := json.Marshal(cat)
fmt.Println(string(data))
{"name":"Mittens","is_fed":true}
You can also use new with composite literals:
s := new([]int{11, 12, 13})
fmt.Println(*s)
type Person struct{ name string }
p := new(Person{name: "alice"})
fmt.Println(*p)
[11 12 13]
{alice}
And function calls:
f := func() string { return "go" }
p := new(f())
fmt.Println(*p)
go
Passing nil is still not allowed:
p := new(nil)
// compilation error
𝗗 spec • 𝗣 45624 • 𝗖𝗟 704935, 704737, 704955, 705157 • 𝗔 Alan Donovan
# Recursive type constraints
Generic functions and types take types as parameters:
// A list of values.
type List[T any] struct {}
// Reverses a slice in-place.
func Reverse[T any](s []T)
We can further restrict these type parameters by using type constraints:
// The map key must have a comparable type.
type Map[K comparable, V any] struct {}
// S is a slice with values of a comparable type,
// or a type derived from such a slice (e.g., type MySlice []int).
func Compact[S ~[]E, E comparable](s S) S
Previously, type constraints couldn't directly or indirectly refer back to the generic type:
type T[P T[P]] struct{}
// compile error:
// invalid recursive type: T refers to itself
Now they can:
type T[P T[P]] struct{}
ok
A typical use case is a generic type that supports operations with arguments or results of the same type as itself:
// A value that can be compared to other values
// of the same type using the less-than operation.
type Ordered[T Ordered[T]] interface {
Less(T) bool
}
Now we can create a generic container with Ordered values and use it with any type that implements Less:
// A tree stores comparable values.
type Tree[T Ordered[T]] struct {
nodes []T
}
// netip.Addr has a Less method with the right signature,
// so it meets the requirements for Ordered[netip.Addr].
t := Tree[netip.Addr]{}
_ = t
ok
This makes Go's generics a bit more expressive.
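For comparison, a constraint that refers only to the type parameter (not to the constraint itself) was already legal, so earlier code often spelled it out inline at each use site. A small sketch with a hypothetical Version type (this compiles on earlier Go versions too):

```go
package main

import "fmt"

// Version implements Less against its own type.
type Version struct{ major, minor int }

func (v Version) Less(other Version) bool {
	if v.major != other.major {
		return v.major < other.major
	}
	return v.minor < other.minor
}

// Min spells the constraint inline; with recursive constraints
// it could use a named constraint like Ordered[T] instead.
func Min[T interface{ Less(T) bool }](a, b T) T {
	if a.Less(b) {
		return a
	}
	return b
}

func main() {
	fmt.Println(Min(Version{1, 26}, Version{1, 25}))
}
```

{1 25}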
𝗣 68162, 75883 • 𝗖𝗟 711420, 711422 • 𝗔 Robert Griesemer
# Type-safe error checking
The new errors.AsType function is a generic version of errors.As:
// go 1.13+
func As(err error, target any) bool
// go 1.26+
func AsType[E error](err error) (E, bool)
It's type-safe and easier to use:
// using errors.As
var target *AppError
if errors.As(err, &target) {
fmt.Println("application error:", target)
}
application error: database is down
// using errors.AsType
if target, ok := errors.AsType[*AppError](err); ok {
fmt.Println("application error:", target)
}
application error: database is down
AsType is especially handy when checking for multiple types of errors. It makes the code shorter and keeps error variables scoped to their if blocks:
if connErr, ok := errors.AsType[*net.OpError](err); ok {
fmt.Println("Network operation failed:", connErr.Op)
} else if dnsErr, ok := errors.AsType[*net.DNSError](err); ok {
fmt.Println("DNS resolution failed:", dnsErr.Name)
} else {
fmt.Println("Unknown error")
}
DNS resolution failed: does.not.exist
Another issue with As is that it uses reflection and can cause runtime panics if used incorrectly (like if you pass a non-pointer or a type that doesn't implement error):
// using errors.As
var target AppError
if errors.As(err, &target) {
fmt.Println("application error:", target)
}
panic: errors: *target must be interface or implement error
AsType doesn't cause a runtime panic; it gives a clear compile-time error instead:
// using errors.AsType
if target, ok := errors.AsType[AppError](err); ok {
fmt.Println("application error:", target)
}
./main.go:24:32: AppError does not satisfy error (method Error has pointer receiver)
AsType doesn't use reflect, executes faster, and allocates less than As:
goos: darwin
goarch: arm64
cpu: Apple M1
BenchmarkAs-8 12606744 95.62 ns/op 40 B/op 2 allocs/op
BenchmarkAsType-8 37961869 30.26 ns/op 24 B/op 1 allocs/op
Since AsType can handle everything that As does, it's a recommended drop-in replacement for new code.
𝗗 errors.AsType • 𝗣 51945 • 𝗖𝗟 707235 • 𝗔 Julien Cretel
# Green Tea garbage collector
The new garbage collector (first introduced as experimental in 1.25) is designed to make memory management more efficient on modern computers with many CPU cores.
Motivation
Go's traditional garbage collector algorithm operates on a graph, treating objects as nodes and pointers as edges, without considering their physical location in memory. The scanner jumps between distant memory locations, causing frequent cache misses.
As a result, the CPU spends too much time waiting for data to arrive from memory. More than 35% of the time spent scanning memory is wasted just stalling while waiting for memory accesses. As computers get more CPU cores, this problem gets even worse.
Implementation
Green Tea shifts the focus from being processor-centered to being memory-aware. Instead of scanning individual objects, it scans memory in contiguous 8 KiB blocks called spans. The algorithm focuses on small objects (up to 512 bytes) because they are the most common and hardest to scan efficiently.
Each span is divided into equal slots based on its assigned size class, and it only contains objects of that size class. For example, if a span is assigned to the 32-byte size class, the whole block is split into 32-byte slots, and objects are placed directly into these slots, each starting at the beginning of its slot. Because of this fixed layout, the garbage collector can easily find an object's metadata using simple address arithmetic, without checking the size of each object it finds.
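The address arithmetic can be illustrated with a simplified sketch (my illustration with made-up addresses, not the runtime's actual code):

```go
package main

import "fmt"

// slotIndex shows the kind of arithmetic a fixed-layout span enables:
// given the span's base address and its size class, an object's slot
// is a single subtraction and division away.
func slotIndex(spanBase, objAddr, slotSize uintptr) uintptr {
	return (objAddr - spanBase) / slotSize
}

func main() {
	const spanBase = 0x100000 // hypothetical start of an 8 KiB span
	const slotSize = 32       // span assigned to the 32-byte size class

	// An object 96 bytes into the span lives in slot 3.
	fmt.Println(slotIndex(spanBase, spanBase+96, slotSize))
}
```

3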
When the algorithm finds an object that needs to be scanned, it marks the object's location in its span but doesn't scan it immediately. Instead, it waits until there are several objects in the same span that need scanning. Then, when the garbage collector processes that span, it scans multiple objects at once. This is much faster than going over the same area of memory multiple times.
To make better use of CPU cores, GC workers share the workload by stealing tasks from each other. Each worker has its own local queue of spans to scan, and if a worker is idle, it can grab tasks from the queues of other busy workers. This decentralized approach removes the need for a central global list, prevents delays, and reduces contention between CPU cores.
Green Tea uses vectorized CPU instructions (only on amd64 architectures) to process memory spans in bulk when there are enough objects.
Benchmarks
Benchmark results vary, but the Go team expects a 10–40% reduction in garbage collection overhead in real-world programs that rely heavily on the garbage collector. Plus, with vectorized implementation, an extra 10% reduction in GC overhead when running on CPUs like Intel Ice Lake or AMD Zen 4 and newer.
Unfortunately, I couldn't find any public benchmark results from the Go team for the latest version of Green Tea, and I wasn't able to create a good synthetic benchmark myself. So, no details this time :(
The new garbage collector is enabled by default. To use the old garbage collector, set GOEXPERIMENT=nogreenteagc at build time (this option is expected to be removed in Go 1.27).
𝗣 73581 • 𝗔 Michael Knyszek
# Faster cgo and syscalls
In the Go runtime, a processor (often referred to as a P) is a resource required to run the code. For a thread (a machine or M) to execute a goroutine (G), it must first acquire a processor.
Processors move through different states. They can be _Prunning (executing code), _Pidle (waiting for work), or _Pgcstop (paused for garbage collection).
Previously, processors had a state called _Psyscall used when a goroutine is making a system or cgo call. Now, this state has been removed. Instead of using a separate processor state, the system now checks the status of the goroutine assigned to the processor to see if it's involved in a system call.
This reduces internal runtime overhead and simplifies code paths for cgo and syscalls. The Go release notes say -30% in cgo runtime overhead, and the commit mentions an 18% sec/op improvement:
goos: linux
goarch: amd64
pkg: internal/runtime/cgobench
cpu: AMD EPYC 7B13
│ before.out │ after.out │
│ sec/op │ sec/op vs base │
CgoCall-64 43.69n ± 1% 35.83n ± 1% -17.99% (p=0.002 n=6)
CgoCallParallel-64 5.306n ± 1% 5.338n ± 1% ~ (p=0.132 n=6)
I decided to run the CgoCall benchmarks locally as well:
goos: darwin
goarch: arm64
cpu: Apple M1
│ go1_25.txt │ go1_26.txt │
│ sec/op │ sec/op vs base │
CgoCall-8 28.55n ± 4% 19.02n ± 2% -33.40% (p=0.000 n=10)
CgoCallWithCallback-8 72.76n ± 5% 57.38n ± 2% -21.14% (p=0.000 n=10)
geomean 45.58n 33.03n -27.53%
Either way, both a 20% and a 30% improvement are pretty impressive.
And here are the results from a local syscall benchmark:
goos: darwin
goarch: arm64
cpu: Apple M1
│ go1_25.txt │ go1_26.txt │
│ sec/op │ sec/op vs base │
Syscall-8 195.6n ± 4% 178.1n ± 1% -8.95% (p=0.000 n=10)
source
func BenchmarkSyscall(b *testing.B) {
for b.Loop() {
_, _, _ = syscall.Syscall(syscall.SYS_GETPID, 0, 0, 0)
}
}
That's pretty good too.
𝗖𝗟 646198 • 𝗔 Michael Knyszek
# Faster memory allocation
The Go runtime now has specialized versions of its memory allocation function for small objects (from 1 to 512 bytes). It uses jump tables to quickly choose the right function for each size, instead of relying on a single general-purpose implementation.
The Go release notes say "the compiler will now generate calls to size-specialized memory allocation routines". But based on the code, that's not completely accurate: the compiler still emits calls to the general-purpose mallocgc function. Then, at runtime, mallocgc dispatches those calls to the new specialized allocation functions.
This change reduces the cost of small object memory allocations by up to 30%. The Go team expects the overall improvement to be ~1% in real allocation-heavy programs.
I couldn't find any existing benchmarks, so I came up with my own. And indeed, running it on Go 1.25 compared to 1.26 shows a significant improvement:
goos: darwin
goarch: arm64
cpu: Apple M1
│ go1_25.txt │ go1_26.txt │
│ sec/op │ sec/op vs base │
Alloc1-8 8.190n ± 6% 6.594n ± 28% -19.48% (p=0.011 n=10)
Alloc8-8 8.648n ± 16% 7.522n ± 4% -13.02% (p=0.000 n=10)
Alloc64-8 15.70n ± 15% 12.57n ± 4% -19.88% (p=0.000 n=10)
Alloc128-8 56.80n ± 4% 17.56n ± 4% -69.08% (p=0.000 n=10)
Alloc512-8 81.50n ± 10% 55.24n ± 5% -32.23% (p=0.000 n=10)
geomean 21.99n 14.33n -34.83%
source
var sink *byte
func benchmarkAlloc(b *testing.B, size int) {
b.ReportAllocs()
for b.Loop() {
obj := make([]byte, size)
sink = &obj[0]
}
}
func BenchmarkAlloc1(b *testing.B) { benchmarkAlloc(b, 1) }
func BenchmarkAlloc8(b *testing.B) { benchmarkAlloc(b, 8) }
func BenchmarkAlloc64(b *testing.B) { benchmarkAlloc(b, 64) }
func BenchmarkAlloc128(b *testing.B) { benchmarkAlloc(b, 128) }
func BenchmarkAlloc512(b *testing.B) { benchmarkAlloc(b, 512) }
The new implementation is enabled by default. You can disable it by setting GOEXPERIMENT=nosizespecializedmalloc at build time (this option is expected to be removed in Go 1.27).
𝗖𝗟 665835 • 𝗔 Michael Matloob
# Vectorized operations (experimental)
The new simd/archsimd package provides access to architecture-specific vectorized operations (SIMD — single instruction, multiple data). This is a low-level package that exposes hardware-specific functionality. It currently only supports amd64 platforms.
Because different CPU architectures have very different SIMD operations, it's hard to create a single portable API that works for all of them. So the Go team decided to start with a low-level, architecture-specific API first, giving "power users" immediate access to SIMD features on the most common server platform — amd64.
The package defines vector types as structs, like Int8x16 (a 128-bit SIMD vector with sixteen 8-bit integers) and Float64x8 (a 512-bit SIMD vector with eight 64-bit floats). These match the hardware's vector registers. The package supports vectors that are 128, 256, or 512 bits wide.
Most operations are defined as methods on vector types. They usually map directly to hardware instructions with zero overhead.
To give you a taste, here's a custom function that uses SIMD instructions to add 32-bit float vectors:
func Add(a, b []float32) []float32 {
if len(a) != len(b) {
panic("slices of different length")
}
// If AVX-512 isn't supported, fall back to scalar addition,
// since the Float32x16.Add method needs the AVX-512 instruction set.
if !archsimd.X86.AVX512() {
return fallbackAdd(a, b)
}
res := make([]float32, len(a))
n := len(a)
i := 0
// 1. SIMD loop: Process 16 elements at a time.
for i <= n-16 {
// Load 16 elements from a and b vectors.
va := archsimd.LoadFloat32x16Slice(a[i : i+16])
vb := archsimd.LoadFloat32x16Slice(b[i : i+16])
// Add all 16 elements in a single instruction
// and store the results in the result vector.
vSum := va.Add(vb) // translates to VADDPS asm instruction
vSum.StoreSlice(res[i : i+16])
i += 16
}
// 2. Scalar tail: Process any remaining elements (0-15).
for ; i < n; i++ {
res[i] = a[i] + b[i]
}
return res
}
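The fallbackAdd helper isn't shown above; a minimal scalar version might look like this (my sketch, matching the semantics of the SIMD path):

```go
package main

import "fmt"

// fallbackAdd is a plain scalar implementation used when the CPU
// lacks AVX-512 support; it produces the same results as the SIMD path.
func fallbackAdd(a, b []float32) []float32 {
	res := make([]float32, len(a))
	for i := range a {
		res[i] = a[i] + b[i]
	}
	return res
}

func main() {
	fmt.Println(fallbackAdd([]float32{1, 2}, []float32{3, 4}))
}
```

[4 6]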
Let's try it on two vectors:
func main() {
a := []float32{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17}
b := []float32{17, 16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1}
res := Add(a, b)
fmt.Println(res)
}
[18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18]
Common operations in the archsimd package include:
- Load a vector from an array/slice, or Store a vector to an array/slice.
- Arithmetic: Add, Sub, Mul, Div, DotProduct.
- Bitwise: And, Or, Not, Xor, Shift.
- Comparison: Equal, Greater, Less, Min, Max.
- Conversion: As, SaturateTo, TruncateTo.
- Masking: Compress, Masked, Merge.
- Rearrangement: Permute.
The package uses only AVX instructions, not SSE.
Here's a simple benchmark for adding two vectors (both the "plain" and SIMD versions use pre-allocated slices):
goos: linux
goarch: amd64
cpu: AMD EPYC 9575F 64-Core Processor
BenchmarkAddPlain/1k-2 1517698 889.9 ns/op 13808.74 MB/s
BenchmarkAddPlain/65k-2 23448 52613 ns/op 14947.46 MB/s
BenchmarkAddPlain/1m-2 2047 1005628 ns/op 11932.84 MB/s
BenchmarkAddSIMD/1k-2 36594340 33.58 ns/op 365949.74 MB/s
BenchmarkAddSIMD/65k-2 410742 3199 ns/op 245838.52 MB/s
BenchmarkAddSIMD/1m-2 12955 94228 ns/op 127351.33 MB/s
The package is experimental and can be enabled by setting GOEXPERIMENT=simd at build time.
𝗗 simd/archsimd • 𝗣 73787 • 𝗖𝗟 701915, 712880, 729900, 732020 • 𝗔 Junyang Shao, Sean Liao, Tom Thorogood
# Secret mode (experimental)
Cryptographic protocols like WireGuard or TLS have a property called "forward secrecy". This means that even if an attacker gains access to long-term secrets (like a private key in TLS), they shouldn't be able to decrypt past communication sessions. To make this work, ephemeral keys (temporary keys used to negotiate the session) need to be erased from memory immediately after the handshake. If there's no reliable way to clear this memory, these keys could stay there indefinitely. An attacker who finds them later could re-derive the session key and decrypt past traffic, breaking forward secrecy.
In Go, the runtime manages memory, and it doesn't guarantee when or how memory is cleared. Sensitive data might remain in heap allocations or stack frames, potentially exposed in core dumps or through memory attacks. Developers often have to use unreliable "hacks" with reflection to try to zero out internal buffers in cryptographic libraries. Even so, some data might still stay in memory where the developer can't reach or control it.
The Go team's solution to this problem is the new runtime/secret package. It lets you run a function in secret mode. After the function finishes, it immediately erases (zeroes out) the registers and stack it used. Heap allocations made by the function are erased as soon as the garbage collector decides they are no longer reachable.
secret.Do(func() {
// Generate an ephemeral key and
// use it to negotiate the session.
})
This helps make sure sensitive information doesn't stay in memory longer than needed, lowering the risk of attackers getting to it.
Here's an example that shows how secret.Do might be used in a more or less realistic setting. Let's say you want to generate a session key while keeping the ephemeral private key and shared secret safe:
// DeriveSessionKey does an ephemeral key exchange to create a session key.
func DeriveSessionKey(peerPublicKey *ecdh.PublicKey) (*ecdh.PublicKey, []byte, error) {
var pubKey *ecdh.PublicKey
var sessionKey []byte
var err error
// Use secret.Do to contain the sensitive data during the handshake.
// The ephemeral private key and the raw shared secret will be
// wiped out when this function finishes.
secret.Do(func() {
// 1. Generate an ephemeral private key.
// This is highly sensitive; if leaked later, forward secrecy is broken.
privKey, e := ecdh.P256().GenerateKey(rand.Reader)
if e != nil {
err = e
return
}
// 2. Compute the shared secret (ECDH).
// This raw secret is also highly sensitive.
sharedSecret, e := privKey.ECDH(peerPublicKey)
if e != nil {
err = e
return
}
// 3. Derive the final session key (e.g., using HKDF).
// We copy the result out; the inputs (privKey, sharedSecret)
// will be destroyed by secret.Do when they become unreachable.
sessionKey = performHKDF(sharedSecret)
pubKey = privKey.PublicKey()
})
// The session key is returned for use, but the "recipe" to recreate it
// is destroyed. Additionally, because the session key was allocated
// inside the secret block, the runtime will automatically zero it out
// when the application is finished using it.
return pubKey, sessionKey, err
}
Here, the ephemeral private key and the raw shared secret are effectively "toxic waste" — they are necessary to create the final session key, but dangerous to keep around.
If these values stay in the heap and an attacker later gets access to the application's memory (for example, via a core dump or a vulnerability like Heartbleed), they could use these intermediates to re-derive the session key and decrypt past conversations.
By wrapping the calculation in secret.Do, we make sure that as soon as the session key is created, the "ingredients" used to make it are permanently destroyed. This means that even if the server is compromised in the future, this specific past session can't be exposed, which ensures forward secrecy.
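The performHKDF helper is left undefined in the example above. Here's a simplified stand-in that just hashes the secret; a real implementation would use a proper KDF, such as the crypto/hkdf package:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// performHKDF stands in for a real HKDF derivation step.
// It simply hashes the shared secret into a 32-byte key;
// production code should use an actual HKDF with a salt and info string.
func performHKDF(sharedSecret []byte) []byte {
	sum := sha256.Sum256(sharedSecret)
	return sum[:]
}

func main() {
	key := performHKDF([]byte("example shared secret"))
	fmt.Println(len(key))
}
```

32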
func main() {
// Generate a dummy peer public key.
priv, _ := ecdh.P256().GenerateKey(nil)
peerPubKey := priv.PublicKey()
// Derive the session key.
pubKey, sessionKey, err := DeriveSessionKey(peerPubKey)
fmt.Printf("public key = %x...\n", pubKey.Bytes()[:16])
fmt.Printf("error = %v\n", err)
var _ = sessionKey
}
public key = 04288d5ade66bab4320a86d80993f628...
error = <nil>
The current secret.Do implementation only supports Linux (amd64 and arm64). On unsupported platforms, Do invokes the function directly. Also, trying to start a goroutine within the function causes a panic (this will be fixed in Go 1.27).
The runtime/secret package is mainly for developers who work on cryptographic libraries. Most apps should use higher-level libraries that use secret.Do behind the scenes.
The package is experimental and can be enabled by setting GOEXPERIMENT=runtimesecret at build time.
𝗗 runtime/secret • 𝗣 21865 • 𝗖𝗟 704615 • 𝗔 Daniel Morsing
# Reader-less cryptography
Current cryptographic APIs, like ecdsa.GenerateKey or rand.Prime, often accept an io.Reader as the source of random data:
// Generate a new ECDSA private key for the specified curve.
key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
fmt.Println(key.D)
// Generate a 64-bit integer that is prime with high probability.
prim, _ := rand.Prime(rand.Reader, 64)
fmt.Println(prim)
31253152889057471714062019675387570049552680140182252615946165331094890182019
17433987073571224703
These APIs don't commit to a specific way of using random bytes from the reader. Any change to the underlying cryptographic algorithms can change the sequence or number of bytes read. Because of this, if application code (mistakenly) relies on a specific implementation in Go version X, it might fail or behave differently in version X+1.
The Go team chose a pretty bold solution to this problem. Now, most crypto APIs will just ignore the random io.Reader parameter and always use the system random source (crypto/internal/sysrand.Read).
// The reader parameter is no longer used, so you can just pass nil.
// Generate a new ECDSA private key for the specified curve.
key, _ := ecdsa.GenerateKey(elliptic.P256(), nil)
fmt.Println(key.D)
// Generate a 64-bit integer that is prime with high probability.
prim, _ := rand.Prime(nil, 64)
fmt.Println(prim)
16265662996876675161677719946085651215874831846675169870638460773593241527197
14874320216361938581
The change applies to the following crypto subpackages:
// crypto/dsa
func GenerateKey(priv *PrivateKey, rand io.Reader) error
// crypto/ecdh
type Curve interface {
// ...
GenerateKey(rand io.Reader) (*PrivateKey, error)
}
// crypto/ecdsa
func GenerateKey(c elliptic.Curve, rand io.Reader) (*PrivateKey, error)
func SignASN1(rand io.Reader, priv *PrivateKey, hash []byte) ([]byte, error)
func Sign(rand io.Reader, priv *PrivateKey, hash []byte) (r, s *big.Int, err error)
func (priv *PrivateKey) Sign(rand io.Reader, digest []byte, opts crypto.SignerOpts) ([]byte, error)
// crypto/rand
func Prime(rand io.Reader, bits int) (*big.Int, error)
// crypto/rsa
func GenerateKey(random io.Reader, bits int) (*PrivateKey, error)
func GenerateMultiPrimeKey(random io.Reader, nprimes int, bits int) (*PrivateKey, error)
func EncryptPKCS1v15(random io.Reader, pub *PublicKey, msg []byte) ([]byte, error)
ed25519.GenerateKey(rand) still uses the random reader if provided. But if rand is nil, it uses an internal secure source of random bytes instead of crypto/rand.Reader (which could be overridden).
To support deterministic testing, there's a new testing/cryptotest package with a single SetGlobalRandom function. It sets a global, deterministic cryptographic randomness source for the duration of the given test:
func Test(t *testing.T) {
cryptotest.SetGlobalRandom(t, 42)
// All test runs will generate the same numbers.
p1, _ := rand.Prime(nil, 32)
p2, _ := rand.Prime(nil, 32)
p3, _ := rand.Prime(nil, 32)
got := [3]int64{p1.Int64(), p2.Int64(), p3.Int64()}
want := [3]int64{3713413729, 3540452603, 4293217813}
if got != want {
t.Errorf("got %v, want %v", got, want)
}
}
PASS
SetGlobalRandom affects crypto/rand and all implicit sources of cryptographic randomness in the crypto/* packages:
func Test(t *testing.T) {
cryptotest.SetGlobalRandom(t, 42)
t.Run("rand.Read", func(t *testing.T) {
var got [4]byte
rand.Read(got[:])
want := [4]byte{34, 48, 31, 184}
if got != want {
t.Errorf("got %v, want %v", got, want)
}
})
t.Run("rand.Int", func(t *testing.T) {
got, _ := rand.Int(rand.Reader, big.NewInt(10000))
const want = 6185
if got.Int64() != want {
t.Errorf("got %v, want %v", got.Int64(), want)
}
})
}
PASS
To temporarily restore the old reader-respecting behavior, set GODEBUG=cryptocustomrand=1 (this option will be removed in a future release).
𝗗 testing/cryptotest • 𝗣 70942 • 𝗖𝗟 724480 • 𝗔 Filippo Valsorda, qiulaidongfeng
# Hybrid public key encryption
The new crypto/hpke package implements Hybrid Public Key Encryption (HPKE) as specified in RFC 9180.
HPKE is a relatively new IETF standard for hybrid encryption. Traditional public-key encryption methods, like RSA, are slow and can only handle small amounts of data. HPKE improves on this by combining two types of encryption: it uses asymmetric cryptography (public/private keys) to safely establish a shared secret, then uses fast symmetric encryption to protect the actual data. This lets you securely and quickly encrypt large files or messages while retaining the security benefits of public-key systems.
The "asymmetric" part of HPKE (called Key Encapsulation Mechanism or KEM) can use both traditional algorithms, such as those using elliptic curves, and new post-quantum algorithms, like ML-KEM. ML-KEM is designed to remain secure even against future quantum computers that could break traditional cryptography.
I'm not going to pretend I'm an expert in cryptography, so here's an example I took straight from the Go standard library documentation. It uses ML-KEM-768 combined with X25519 for key encapsulation, AES-256-GCM for symmetric encryption, and HKDF-SHA256 for key derivation:
// Encrypt a single message from a sender to a recipient using the one-shot API.
kem, kdf, aead := hpke.MLKEM768X25519(), hpke.HKDFSHA256(), hpke.AES256GCM()
// Recipient side
var (
recipientPrivateKey hpke.PrivateKey
publicKeyBytes []byte
)
{
k, err := kem.GenerateKey()
if err != nil {
panic(err)
}
recipientPrivateKey = k
publicKeyBytes = k.PublicKey().Bytes()
}
// Sender side
var ciphertext []byte
{
publicKey, err := kem.NewPublicKey(publicKeyBytes)
if err != nil {
panic(err)
}
message := []byte("secret message")
ct, err := hpke.Seal(publicKey, kdf, aead, []byte("public"), message)
if err != nil {
panic(err)
}
ciphertext = ct
}
// Recipient side
{
plaintext, err := hpke.Open(recipientPrivateKey, kdf, aead, []byte("public"), ciphertext)
if err != nil {
panic(err)
}
fmt.Printf("Decrypted: %s\n", plaintext)
}
Decrypted: secret message
As Filippo Valsorda (the cryptography engineer who maintains Go's crypto packages) says, HPKE is now the right way to do public key encryption.
𝗗 crypto/hpke • 𝗣 75300 • 𝗔 Filippo Valsorda
# Goroutine leak profile (experimental)
A leak occurs when one or more goroutines are indefinitely blocked on synchronization primitives like channels, while other goroutines continue running and the program as a whole keeps functioning. Here's a simple example:
func leak() <-chan int {
out := make(chan int)
go func() {
out <- 42 // leaks if nobody reads from out
}()
return out
}
If we call leak and don't read from the output channel, the inner leak goroutine will stay blocked trying to send to the channel for the rest of the program:
func main() {
leak()
// ...
}
ok
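For reference, one way to make this particular example leak-free is to give the channel a buffer, so the send completes even if nobody ever reads from it (a minimal sketch):

```go
package main

import "fmt"

// noLeak is a variant of leak where the channel has capacity 1,
// so the goroutine's send never blocks.
func noLeak() <-chan int {
	out := make(chan int, 1)
	go func() {
		out <- 42 // never blocks: the buffer has room
	}()
	return out
}

func main() {
	ch := noLeak()
	fmt.Println(<-ch)
}
```

42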
Unlike deadlocks, leaks do not cause panics, so they are much harder to spot. Also, unlike data races, Go's tooling did not address them for a long time.
Things started to change in Go 1.24 with the introduction of the synctest package. Not many people talk about it, but synctest is a great tool for catching leaks during testing.
Go 1.26 adds a new experimental goroutineleak profile designed to report leaked goroutines in production. Here's how we can use it in the example above:
func main() {
prof := pprof.Lookup("goroutineleak")
leak()
time.Sleep(50 * time.Millisecond)
prof.WriteTo(os.Stdout, 2)
// ...
}
goroutine 7 [chan send (leaked)]:
main.leak.func1()
/tmp/sandbox/main.go:16 +0x1e
created by main.leak in goroutine 1
/tmp/sandbox/main.go:15 +0x67
As you can see, we have a nice goroutine stack trace that shows exactly where the leak happens.
The goroutineleak profile finds leaks by using the garbage collector's marking phase to check which blocked goroutines are still connected to active code. It starts with runnable goroutines, marks all sync objects they can reach, and keeps adding any blocked goroutines waiting on those objects. When it can't add any more, any blocked goroutines left are waiting on resources that can't be reached — so they're considered leaked.
Tell me more
Here's the gist of it:
[ Start: GC mark phase ]
│
│ 1. Collect live goroutines
v
┌───────────────────────┐
│ Initial roots │ <────────────────┐
│ (runnable goroutines) │ │
└───────────────────────┘ │
│ │
│ 2. Mark reachable memory │
v │
┌───────────────────────┐ │
│ Reachable objects │ │
│ (channels, mutexes) │ │
└───────────────────────┘ │
│ │
│ 3a. Check blocked goroutines │
v │
┌───────────────────────┐ (Yes) │
│ Is blocked G waiting │ ─────────────────┘
│ on a reachable obj? │ 3b. Add G to roots
└───────────────────────┘
│
│ (No - repeat until no new Gs found)
v
┌───────────────────────┐
│ Remaining blocked │
│ goroutines │
└───────────────────────┘
│
│ 5. Report the leaks
v
[ LEAKED! ]
(Blocked on unreachable
synchronization objects)
- Collect live goroutines. Start with currently active (runnable or running) goroutines as roots. Ignore blocked goroutines for now.
- Mark reachable memory. Trace pointers from roots to find which synchronization objects (like channels or wait groups) are currently reachable by these roots.
- Resurrect blocked goroutines. Check all currently blocked goroutines. If a blocked goroutine is waiting for a synchronization resource that was just marked as reachable — add that goroutine to the roots.
- Iterate. Repeat steps 2 and 3 until there are no more new goroutines blocked on reachable objects.
- Report the leaks. Any goroutines left in the blocked state are waiting for resources that no active part of the program can access. They're considered leaked.
For even more details, see the paper by Saioc et al.
If you want to see how goroutineleak (and synctest) can catch typical leaks that often happen in production — check out my article on goroutine leaks.
The goroutineleak profile is experimental and can be enabled by setting GOEXPERIMENT=goroutineleakprofile at build time. Enabling the experiment also makes the profile available as a net/http/pprof endpoint, /debug/pprof/goroutineleak.
According to the authors, the implementation is already production-ready. It's only marked as experimental so they can get feedback on the API, especially about making it a new profile.
𝗗 runtime/pprof • 𝗚 Detecting leaks • 𝗣 74609, 75280 • 𝗖𝗟 688335 • 𝗔 Vlad Saioc
# Goroutine metrics
New metrics in the runtime/metrics package give better insight into goroutine scheduling:
- Total number of goroutines since the program started.
- Number of goroutines in each state.
- Number of active threads.
Here's the full list:
/sched/goroutines-created:goroutines
Count of goroutines created since program start.
/sched/goroutines/not-in-go:goroutines
Approximate count of goroutines running
or blocked in a system call or cgo call.
/sched/goroutines/runnable:goroutines
Approximate count of goroutines ready to execute,
but not executing.
/sched/goroutines/running:goroutines
Approximate count of goroutines executing.
Always less than or equal to /sched/gomaxprocs:threads.
/sched/goroutines/waiting:goroutines
Approximate count of goroutines waiting
on a resource (I/O or sync primitives).
/sched/threads/total:threads
The current count of live threads
that are owned by the Go runtime.
Per-state goroutine metrics can be linked to common production issues. For example, an increasing waiting count can show a lock contention problem. A high not-in-go count means goroutines are stuck in syscalls or cgo. A growing runnable backlog suggests the CPUs can't keep up with demand.
You can read the new metric values using the regular metrics.Read function:
func main() {
    go work() // omitted for brevity
    time.Sleep(100 * time.Millisecond)
    fmt.Println("Goroutine metrics:")
    printMetric("/sched/goroutines-created:goroutines", "Created")
    printMetric("/sched/goroutines:goroutines", "Live")
    printMetric("/sched/goroutines/not-in-go:goroutines", "Syscall/CGO")
    printMetric("/sched/goroutines/runnable:goroutines", "Runnable")
    printMetric("/sched/goroutines/running:goroutines", "Running")
    printMetric("/sched/goroutines/waiting:goroutines", "Waiting")
    fmt.Println("Thread metrics:")
    printMetric("/sched/gomaxprocs:threads", "Max")
    printMetric("/sched/threads/total:threads", "Live")
}

func printMetric(name string, descr string) {
    sample := []metrics.Sample{{Name: name}}
    metrics.Read(sample)
    // Assuming a uint64 value; don't do this in production.
    // Instead, check sample[0].Value.Kind and handle accordingly.
    fmt.Printf(" %s: %v\n", descr, sample[0].Value.Uint64())
}
Goroutine metrics:
Created: 57
Live: 21
Syscall/CGO: 0
Runnable: 0
Running: 1
Waiting: 20
Thread metrics:
Max: 2
Live: 4
The per-state numbers (not-in-go + runnable + running + waiting) are not guaranteed to add up to the live goroutine count (/sched/goroutines:goroutines, available since Go 1.16).
All new metrics use uint64 counters.
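For production code, the kind check mentioned in the comments above can look like this. It's a sketch that uses only a metric available in earlier Go versions, so it runs today:

```go
package main

import (
	"fmt"
	"runtime/metrics"
)

// readMetric reads one metric and handles the value kind properly,
// instead of assuming uint64.
func readMetric(name string) (string, bool) {
	s := []metrics.Sample{{Name: name}}
	metrics.Read(s)
	switch s[0].Value.Kind() {
	case metrics.KindUint64:
		return fmt.Sprint(s[0].Value.Uint64()), true
	case metrics.KindFloat64:
		return fmt.Sprint(s[0].Value.Float64()), true
	default:
		// KindBad: the metric is unknown or unsupported here.
		return "", false
	}
}

func main() {
	if v, ok := readMetric("/sched/gomaxprocs:threads"); ok {
		fmt.Println("GOMAXPROCS:", v)
	}
}
```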
𝗗 runtime/metrics • 𝗣 15490 • 𝗖𝗟 690397, 690398, 690399 • 𝗔 Michael Knyszek
# Reflective iterators
The new Type.Fields and Type.Methods methods in the reflect package return iterators for a type's fields and methods:
// List the fields of a struct type.
typ := reflect.TypeFor[http.Client]()
for f := range typ.Fields() {
    fmt.Println(f.Name, f.Type)
}
Transport http.RoundTripper
CheckRedirect func(*http.Request, []*http.Request) error
Jar http.CookieJar
Timeout time.Duration
// List the methods of a struct type.
typ := reflect.TypeFor[*http.Client]()
for m := range typ.Methods() {
    fmt.Println(m.Name, m.Type)
}
CloseIdleConnections func(*http.Client)
Do func(*http.Client, *http.Request) (*http.Response, error)
Get func(*http.Client, string) (*http.Response, error)
Head func(*http.Client, string) (*http.Response, error)
Post func(*http.Client, string, string, io.Reader) (*http.Response, error)
PostForm func(*http.Client, string, url.Values) (*http.Response, error)
The new methods Type.Ins and Type.Outs return iterators for the input and output parameters of a function type:
typ := reflect.TypeFor[filepath.WalkFunc]()
fmt.Println("Inputs:")
for par := range typ.Ins() {
    fmt.Println("-", par.Name())
}
fmt.Println("Outputs:")
for par := range typ.Outs() {
    fmt.Println("-", par.Name())
}
Inputs:
- string
- FileInfo
- error
Outputs:
- error
The new methods Value.Fields and Value.Methods return iterators for a value's fields and methods. Each iteration yields both the type information (StructField or Method) and the value:
client := &http.Client{}
val := reflect.ValueOf(client)
fmt.Println("Fields:")
for f, v := range val.Elem().Fields() {
    fmt.Printf("- name=%s kind=%s\n", f.Name, v.Kind())
}
fmt.Println("Methods:")
for m, v := range val.Methods() {
    fmt.Printf("- name=%s kind=%s\n", m.Name, v.Kind())
}
Fields:
- name=Transport kind=interface
- name=CheckRedirect kind=func
- name=Jar kind=interface
- name=Timeout kind=int64
Methods:
- name=CloseIdleConnections kind=func
- name=Do kind=func
- name=Get kind=func
- name=Head kind=func
- name=Post kind=func
- name=PostForm kind=func
Previously, you could get all this information by using a for-range loop with NumX methods (which is what iterators do internally):
// go 1.25
typ := reflect.TypeFor[http.Client]()
for i := range typ.NumField() {
    field := typ.Field(i)
    fmt.Println(field.Name, field.Type)
}
Transport http.RoundTripper
CheckRedirect func(*http.Request, []*http.Request) error
Jar http.CookieJar
Timeout time.Duration
Using an iterator is more concise. I hope it justifies the increased API surface.
𝗗 reflect • 𝗣 66631 • 𝗖𝗟 707356 • 𝗔 Quentin Quaadgras
# Peek into a buffer
The new Buffer.Peek method in the bytes package returns the next N bytes from the buffer without advancing it:
buf := bytes.NewBufferString("I love bytes")
sample, err := buf.Peek(1)
fmt.Printf("peek=%s err=%v\n", sample, err)
buf.Next(2)
sample, err = buf.Peek(4)
fmt.Printf("peek=%s err=%v\n", sample, err)
peek=I err=<nil>
peek=love err=<nil>
If Peek returns fewer than N bytes, it also returns io.EOF:
buf := bytes.NewBufferString("hello")
sample, err := buf.Peek(10)
fmt.Printf("peek=%s err=%v\n", sample, err)
peek=hello err=EOF
The slice returned by Peek points to the buffer's content and stays valid until the buffer is changed. So, if you change the slice right away, it will affect future reads:
buf := bytes.NewBufferString("car")
sample, err := buf.Peek(3)
fmt.Printf("peek=%s err=%v\n", sample, err)
sample[2] = 't' // changes the underlying buffer
data, err := buf.ReadBytes(0)
fmt.Printf("data=%s err=%v\n", data, err)
peek=car err=<nil>
data=cat err=EOF
The slice returned by Peek is only valid until the next call to a read or write method.
𝗗 Buffer.Peek • 𝗣 73794 • 𝗖𝗟 674415 • 𝗔 Ilia Choly
# Process handle
After you start a process in Go, you can access its ID:
attr := &os.ProcAttr{Files: []*os.File{os.Stdin, os.Stdout, os.Stderr}}
proc, _ := os.StartProcess("/bin/echo", []string{"echo", "hello"}, attr)
defer proc.Wait()
fmt.Println("pid =", proc.Pid)
pid = 41
hello
Internally, the os.Process type uses a process handle instead of the PID (which is just an integer), if the operating system supports it. Specifically, in Linux it uses pidfd, which is a file descriptor that refers to a process. Using the handle instead of the PID makes sure that Process methods always work with the same OS process, and not a different process that just happens to have the same ID.
Previously, you couldn't access the process handle. Now you can, thanks to the new Process.WithHandle method:
func (p *Process) WithHandle(f func(handle uintptr)) error
WithHandle calls a specified function and passes a process handle as an argument:
attr := &os.ProcAttr{Files: []*os.File{os.Stdin, os.Stdout, os.Stderr}}
proc, _ := os.StartProcess("/bin/echo", []string{"echo", "hello"}, attr)
defer proc.Wait()
fmt.Println("pid =", proc.Pid)
proc.WithHandle(func(handle uintptr) {
    fmt.Println("handle =", handle)
})
pid = 49
handle = 6
hello
The handle is guaranteed to refer to the process until the callback function returns, even if the process has already terminated. That's why it's implemented as a callback instead of a Process.Handle field or method.
WithHandle is only supported on Linux 5.4+ and Windows. On other operating systems, it doesn't execute the callback and returns an os.ErrNoHandle error.
𝗗 Process.WithHandle • 𝗣 70352 • 𝗖𝗟 699615 • 𝗔 Kir Kolyshkin
# Signal as cause
signal.NotifyContext returns a context that gets canceled when any of the specified signals is received. Previously, the canceled context only showed the standard "context canceled" cause:
// go 1.25
// The context will be canceled on SIGINT signal.
ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt)
defer stop()
// Send SIGINT to self.
p, _ := os.FindProcess(os.Getpid())
_ = p.Signal(syscall.SIGINT)
// Wait for SIGINT.
<-ctx.Done()
fmt.Println("err =", ctx.Err())
fmt.Println("cause =", context.Cause(ctx))
err = context canceled
cause = context canceled
Now the context's cause shows exactly which signal was received:
// go 1.26
// The context will be canceled on SIGINT signal.
ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt)
defer stop()
// Send SIGINT to self.
p, _ := os.FindProcess(os.Getpid())
_ = p.Signal(syscall.SIGINT)
// Wait for SIGINT.
<-ctx.Done()
fmt.Println("err =", ctx.Err())
fmt.Println("cause =", context.Cause(ctx))
err = context canceled
cause = interrupt signal received
The returned type, signal.signalError, is based on string, so it doesn't provide the actual os.Signal value — just its string representation.
𝗗 signal.NotifyContext • 𝗣 60756 • 𝗖𝗟 721700 • 𝗔 Filippo Valsorda
# Compare IP subnets
An IP address prefix represents an IP subnet. These prefixes are usually written in CIDR notation:
10.0.0.0/16
127.0.0.0/8
169.254.0.0/16
203.0.113.0/24
In Go, an IP prefix is represented by the netip.Prefix type.
The new Prefix.Compare method lets you compare two IP prefixes, making it easy to sort them without having to write your own comparison code:
prefixes := []netip.Prefix{
    netip.MustParsePrefix("10.1.0.0/16"),
    netip.MustParsePrefix("203.0.113.0/24"),
    netip.MustParsePrefix("10.0.0.0/16"),
    netip.MustParsePrefix("169.254.0.0/16"),
    netip.MustParsePrefix("203.0.113.0/8"),
}
slices.SortFunc(prefixes, netip.Prefix.Compare)
for _, p := range prefixes {
    fmt.Println(p.String())
}
10.0.0.0/16
10.1.0.0/16
169.254.0.0/16
203.0.113.0/8
203.0.113.0/24
Compare orders two prefixes as follows:
- First by validity (invalid before valid).
- Then by address family (IPv4 before IPv6): 10.0.0.0/8 < ::/8
- Then by masked IP address (network IP): 10.0.0.0/16 < 10.1.0.0/16
- Then by prefix length: 10.0.0.0/8 < 10.0.0.0/16
- Then by unmasked address (original IP): 10.0.0.0/8 < 10.0.0.1/8
This follows the same order as Python's netaddr.IPNetwork and the standard IANA (Internet Assigned Numbers Authority) convention.
𝗗 Prefix.Compare • 𝗣 61642 • 𝗖𝗟 700355 • 𝗔 database64128
# Context-aware dialing
The net package has top-level functions for connecting to an address using different networks (protocols) — DialTCP, DialUDP, DialIP, and DialUnix. They were made before context.Context was introduced, so they don't support cancellation:
raddr, _ := net.ResolveTCPAddr("tcp", "127.0.0.1:12345")
conn, err := net.DialTCP("tcp", nil, raddr)
fmt.Printf("connected, err=%v\n", err)
defer conn.Close()
connected, err=<nil>
There's also a net.Dialer type with a general-purpose DialContext method. It supports cancellation and can be used to connect to any of the known networks:
var d net.Dialer
ctx := context.Background()
conn, err := d.DialContext(ctx, "tcp", "127.0.0.1:12345")
fmt.Printf("connected, err=%v\n", err)
defer conn.Close()
connected, err=<nil>
However, DialContext is a bit less efficient than network-specific functions like net.DialTCP because of the extra overhead of address resolution and network type dispatching.
So, network-specific functions in the net package are more efficient, but they don't support cancellation. The Dialer type supports cancellation, but it's less efficient. The Go team decided to resolve this contradiction.
The new context-aware Dialer methods (DialTCP, DialUDP, DialIP, and DialUnix) combine the efficiency of the existing network-specific net functions with the cancellation capabilities of Dialer.DialContext:
var d net.Dialer
ctx := context.Background()
raddr := netip.MustParseAddrPort("127.0.0.1:12345")
conn, err := d.DialTCP(ctx, "tcp", netip.AddrPort{}, raddr)
fmt.Printf("connected, err=%v\n", err)
defer conn.Close()
connected, err=<nil>
I wouldn't say that having three different ways to dial is very convenient, but that's the price of backward compatibility.
𝗗 net.Dialer • 𝗣 49097 • 𝗖𝗟 490975 • 𝗔 Michael Fraenkel
# Fake example.com
The default httptest.Server certificate already lists example.com in its DNSNames (a list of hostnames or domain names that the certificate is authorized to secure). Because of this, Server.Client doesn't trust responses from the real example.com:
// go 1.25
func Test(t *testing.T) {
    handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        w.Write([]byte("hello"))
    })
    srv := httptest.NewTLSServer(handler)
    defer srv.Close()
    _, err := srv.Client().Get("https://example.com")
    if err != nil {
        t.Fatal(err)
    }
}
--- FAIL: Test (0.29s)
main_test.go:19: Get "https://example.com":
tls: failed to verify certificate:
x509: certificate signed by unknown authority
To fix this issue, the HTTP client returned by httptest.Server.Client now redirects requests for example.com and its subdomains to the test server:
// go 1.26
func Test(t *testing.T) {
    handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        w.Write([]byte("hello"))
    })
    srv := httptest.NewTLSServer(handler)
    defer srv.Close()
    resp, err := srv.Client().Get("https://example.com")
    if err != nil {
        t.Fatal(err)
    }
    body, _ := io.ReadAll(resp.Body)
    resp.Body.Close()
    if string(body) != "hello" {
        t.Errorf("Unexpected response body: %s", body)
    }
}
PASS
𝗗 Server.Client • 𝗖𝗟 666855 • 𝗔 Sean Liao
# Optimized fmt.Errorf
People often point out that using fmt.Errorf("x") for plain strings causes more memory allocations than errors.New("x"). Because of this, some suggest switching code from fmt.Errorf to errors.New when formatting isn't needed.
The Go team disagrees. Here's a quote from Russ Cox:
Using fmt.Errorf("foo") is completely fine, especially in a program where all the errors are constructed with fmt.Errorf. Having to mentally switch between two functions based on the argument is unnecessary noise.
With the new Go release, this debate should finally be settled. For unformatted strings, fmt.Errorf now allocates less and generally matches the allocations for errors.New.
Specifically, fmt.Errorf goes from 2 allocations to 0 allocations for a non-escaping error, and from 2 allocations to 1 allocation for an escaping error:
_ = fmt.Errorf("foo") // non-escaping error
sink = fmt.Errorf("foo") // escaping error
This matches the allocations for errors.New in both cases.
The difference in CPU cost is also much smaller now. Previously, it was ~64ns vs. ~21ns for fmt.Errorf vs. errors.New for escaping errors, now it's ~25ns vs. ~21ns.
Tell me more
Here are the "before and after" benchmarks for the fmt.Errorf change. The non-escaping case is called local, and the escaping case is called sink. If there's just a plain error string, it's no-args. If the error includes formatting, it's int-arg.
Seconds per operation:
goos: linux
goarch: amd64
pkg: fmt
cpu: AMD EPYC 7B13
│ old.txt │ new.txt │
│ sec/op │ sec/op vs base │
Errorf/no-args/local-16 63.76n ± 1% 4.874n ± 0% -92.36% (n=120)
Errorf/no-args/sink-16 64.25n ± 1% 25.81n ± 0% -59.83% (n=120)
Errorf/int-arg/local-16 90.86n ± 1% 90.97n ± 1% ~ (p=0.713 n=120)
Errorf/int-arg/sink-16 91.81n ± 1% 91.10n ± 1% -0.76% (p=0.036 n=120)
Bytes per operation:
│ old.txt │ new.txt │
│ B/op │ B/op vs base │
Errorf/no-args/local-16 19.00 ± 0% 0.00 ± 0% -100.00% (n=120)
Errorf/no-args/sink-16 19.00 ± 0% 16.00 ± 0% -15.79% (n=120)
Errorf/int-arg/local-16 24.00 ± 0% 24.00 ± 0% ~ (p=1.000 n=120)
Errorf/int-arg/sink-16 24.00 ± 0% 24.00 ± 0% ~ (p=1.000 n=120)
Allocations per operation:
│ old.txt │ new.txt │
│ allocs/op │ allocs/op vs base │
Errorf/no-args/local-16 2.000 ± 0% 0.000 ± 0% -100.00% (n=120)
Errorf/no-args/sink-16 2.000 ± 0% 1.000 ± 0% -50.00% (n=120)
Errorf/int-arg/local-16 2.000 ± 0% 2.000 ± 0% ~ (p=1.000 n=120)
Errorf/int-arg/sink-16 2.000 ± 0% 2.000 ± 0% ~ (p=1.000 n=120)
If you're interested in the details, I highly recommend reading the CL — it's perfectly written.
𝗗 fmt.Errorf • 𝗖𝗟 708836 • 𝗔 thepudds
# Optimized io.ReadAll
Previously, io.ReadAll allocated a lot of intermediate memory as it grew its result slice to the size of the input data. Now, it uses intermediate slices of exponentially growing size, and then copies them into a final perfectly-sized slice at the end.
The new implementation is about twice as fast and uses roughly half the memory for a 65KiB input; it's even more efficient with larger inputs. Here are the geomean results comparing the old and new versions for different input sizes:
│ old │ new vs base │
sec/op 132.2µ 66.32µ -49.83%
B/op 645.4Ki 324.6Ki -49.70%
final-capacity 178.3k 151.3k -15.10%
excess-ratio 1.216 1.033 -15.10%
See the full benchmark results in the commit. Unfortunately, the author didn't provide the benchmark source code.
Ensuring the final slice is minimally sized is also quite helpful. The slice might persist for a long time, and the unused capacity in a backing array (as in the old version) would just waste memory.
As with the fmt.Errorf optimization, I recommend reading the CL — it's very good. Both changes come from thepudds, whose change descriptions are every reviewer's dream come true.
𝗗 io.ReadAll • 𝗖𝗟 722500 • 𝗔 thepudds
# Multiple log handlers
The log/slog package, introduced in version 1.21, offers a reliable, production-ready logging solution. Since its release, many projects have switched from third-party logging packages to use it. However, it was missing one key feature: the ability to send log records to multiple handlers, such as stdout or a log file.
The new MultiHandler type solves this problem. It implements the standard Handler interface and calls all the handlers you set up.
For example, we can create a log handler that writes to stdout:
stdoutHandler := slog.NewTextHandler(os.Stdout, nil)
And another handler that writes to a file:
const flags = os.O_CREATE | os.O_WRONLY | os.O_APPEND
file, _ := os.OpenFile("/tmp/app.log", flags, 0644)
defer file.Close()
fileHandler := slog.NewJSONHandler(file, nil)
Finally, combine them using a MultiHandler:
// MultiHandler that writes to both stdout and app.log.
multiHandler := slog.NewMultiHandler(stdoutHandler, fileHandler)
logger := slog.New(multiHandler)
// Log a sample message.
logger.Info("login",
slog.String("name", "whoami"),
slog.Int("id", 42),
)
time=2025-12-31T11:46:14.521Z level=INFO msg=login name=whoami id=42
{"time":"2025-12-31T11:46:14.521126342Z","level":"INFO","msg":"login","name":"whoami","id":42}
I'm also printing the file contents here to show the results.
When the MultiHandler receives a log record, it sends it to each enabled handler one by one. If any handler returns an error, MultiHandler doesn't stop; instead, it combines all the errors using errors.Join:
hInfo := slog.NewTextHandler(
    os.Stdout, &slog.HandlerOptions{Level: slog.LevelInfo},
)
hErrorsOnly := slog.NewTextHandler(
    os.Stdout, &slog.HandlerOptions{Level: slog.LevelError},
)
// BrokenHandler wraps a handler and always fails with err
// (its definition is omitted for brevity).
hBroken := &BrokenHandler{
    Handler: hInfo,
    err:     fmt.Errorf("broken handler"),
}
handler := slog.NewMultiHandler(hBroken, hInfo, hErrorsOnly)
rec := slog.NewRecord(time.Now(), slog.LevelInfo, "hello", 0)
// Calls hInfo and hBroken, skips hErrorsOnly.
// Returns an error from hBroken.
err := handler.Handle(context.Background(), rec)
fmt.Println(err)
time=2025-12-31T13:32:52.110Z level=INFO msg=hello
broken handler
The Enabled method reports whether any of the configured handlers is enabled for the given level:
hInfo := slog.NewTextHandler(
    os.Stdout, &slog.HandlerOptions{Level: slog.LevelInfo},
)
hErrors := slog.NewTextHandler(
    os.Stdout, &slog.HandlerOptions{Level: slog.LevelError},
)
handler := slog.NewMultiHandler(hInfo, hErrors)
// hInfo is enabled.
enabled := handler.Enabled(context.Background(), slog.LevelInfo)
fmt.Println(enabled)
true
The other methods, WithAttrs and WithGroup, call the corresponding method on each of the configured handlers.
𝗗 slog.MultiHandler • 𝗣 65954 • 𝗖𝗟 692237 • 𝗔 Jes Cok
# Test artifacts
Test artifacts are files created by tests or benchmarks, such as execution logs, memory dumps, or analysis reports. They are important for debugging failures in remote environments (like CI), where developers can't step through the code manually.
Previously, the Go test framework and tools didn't support test artifacts. Now they do.
The new methods T.ArtifactDir, B.ArtifactDir, and F.ArtifactDir return a directory where you can write test output files:
func TestFunc(t *testing.T) {
    dir := t.ArtifactDir()
    logFile := filepath.Join(dir, "app.log")
    content := []byte("Loading user_id=123...\nERROR: Connection failed\n")
    os.WriteFile(logFile, content, 0644)
    t.Log("Saved app.log")
}
If you use go test with -artifacts, this directory will be inside the output directory (specified by -outputdir, or the current directory by default):
go1.26rc1 test -v -artifacts -outputdir=/tmp/output
=== RUN TestFunc
=== ARTIFACTS TestFunc /tmp/output/_artifacts/2933211134
artifacts_test.go:14: Saved app.log
--- PASS: TestFunc (0.00s)
As you can see, the first time ArtifactDir is called, it writes the directory location to the test log, which is quite handy.
If you don't use -artifacts, artifacts are stored in a temporary directory which is deleted after the test completes.
Each test or subtest within each package has its own unique artifact directory. Subtest outputs are not stored inside the parent test's output directory — all artifact directories for a given package are created at the same level:
func TestFunc(t *testing.T) {
    t.ArtifactDir()
    t.Run("subtest 1", func(t *testing.T) {
        t.ArtifactDir()
    })
    t.Run("subtest 2", func(t *testing.T) {
        t.ArtifactDir()
    })
}
=== RUN TestFunc
=== ARTIFACTS TestFunc /tmp/output/_artifacts/2878232317
=== RUN TestFunc/subtest_1
=== ARTIFACTS TestFunc/subtest_1 /tmp/output/_artifacts/1651881503
=== RUN TestFunc/subtest_2
=== ARTIFACTS TestFunc/subtest_2 /tmp/output/_artifacts/3341607601
The artifact directory path normally looks like this:
<output dir>/_artifacts/<test package>/<test name>/<random>
But if this path can't be safely converted into a local file path (which, for some reason, always happens on my machine), the path will simply be:
<output dir>/_artifacts/<random>
(which is what happens in the examples above)
Repeated calls to ArtifactDir in the same test or subtest return the same directory.
𝗗 T.ArtifactDir • 𝗣 71287 • 𝗖𝗟 696399 • 𝗔 Damien Neil
# Modernized go fix
Over the years, the go fix command became a sad, neglected bag of rewrites for very ancient Go features. But now, it's making a comeback.
The new go fix is re-implemented using the Go analysis framework — the same one go vet uses.
While go fix and go vet now use the same infrastructure, they have different purposes and use different sets of analyzers:
- Vet is for reporting problems. Its analyzers describe actual issues, but they don't always suggest fixes, and the fixes aren't always safe to apply.
- Fix is (mostly) for modernizing the code to use newer language and library features. Its analyzers produce fixes that are always safe to apply, but they don't necessarily indicate problems with the code.
usage: go fix [build flags] [-fixtool prog] [fix flags] [packages]
Fix runs the Go fix tool (cmd/fix) on the named packages
and applies suggested fixes.
It supports these flags:
-diff
instead of applying each fix, print the patch as a unified diff
The -fixtool=prog flag selects a different analysis tool with
alternative or additional fixers.
By default, go fix runs a full set of analyzers (currently, there are more than 20). To choose specific analyzers, use the -NAME flag for each one, or use -NAME=false to run all analyzers except the ones you turned off.
For example, here we only enable the forvar analyzer:
go fix -forvar .
And here, we enable all analyzers except omitzero:
go fix -omitzero=false .
Currently, there's no way to suppress specific analyzers for certain files or sections of code.
To give you a taste of go fix analyzers, here's one of them in action. It replaces loops with slices.Contains or slices.ContainsFunc:
// before go fix
func find(s []int, x int) bool {
    for _, v := range s {
        if x == v {
            return true
        }
    }
    return false
}

// after go fix
func find(s []int, x int) bool {
    return slices.Contains(s, x)
}
If you're interested, check out the dedicated blog post for the full list of analyzers with examples.
𝗗 cmd/fix • 𝗚 go fix • 𝗣 71859 • 𝗔 Alan Donovan
# Final thoughts
Go 1.26 is incredibly big — it's the largest release I've ever seen, and for good reason:
- It brings a lot of useful updates, like the improved new builtin, type-safe error checking, and the goroutine leak detector.
- There are also many performance upgrades, including the new garbage collector, faster cgo and memory allocation, and optimized fmt.Errorf and io.ReadAll.
- On top of that, it adds quality-of-life features like multiple log handlers, test artifacts, and the updated go fix tool.
- Finally, there are two specialized experimental packages: one with SIMD support and another with protected mode for forward secrecy.
All in all, a great release!
You might be wondering about the json/v2 package that was introduced as experimental in 1.25. It's still experimental and available with the GOEXPERIMENT=jsonv2 flag.
P.S. To catch up on other Go releases, check out the Go features by version list or explore the interactive tours for Go 1.25 and 1.24.
P.P.S. Want to learn more about Go? Check out my interactive book on concurrency
★ Subscribe to keep up with new posts.