Commit message | Author | Age

We want to match the underlying system socket buffer: filling it minimizes the number of syscalls we make, and a larger buffer would be a waste.
Also changed the parser to use enums that more closely match the NATS message types.
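The buffer-sizing idea above can be sketched in Python (the server itself is Zig): `SO_RCVBUF` is the standard socket option for asking the kernel how large a socket's receive buffer actually is, which is the size worth matching in userspace.

```python
import socket

# Ask the kernel how large this socket's receive buffer is. Sizing the
# userspace read buffer to match means a single read() can drain everything
# the kernel has queued (fewer syscalls); a larger buffer would go unused.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
rcvbuf = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
sock.close()
print("kernel receive buffer:", rcvbuf, "bytes")
```

Note that the exact value is platform-dependent (Linux, for example, reports an internally adjusted number), so querying at startup beats hard-coding a guess.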

Like 150 Mbps now

Making it easier to use the server as a library

Adding tests for everything

Was freeing the wrong element before.

This can be computed (as it is now)

Since the queue was being set in an async task and we were then calling send, which asserted that the queue was set, we could have triggered a panic. I didn't run into it, but it seemed likely to cause issues in the future.
Also compute the buffer size for operators at comptime.
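The race described above can be sketched in Python (the server itself is Zig; `Client`, `start`, and `send` are hypothetical names, not the server's API): when the queue is created by a separately scheduled task, `send` has to treat "queue not set yet" as a recoverable error rather than asserting.

```python
import queue

class Client:
    def __init__(self):
        self.out = None  # created later, when the client's task starts

    def start(self):
        # In the buggy version this ran in an async task, so a send could
        # race ahead of it and observe self.out still unset.
        self.out = queue.Queue()

    def send(self, msg):
        q = self.out
        if q is None:
            # The task that creates the queue may not have run yet; this is
            # an expected ordering, not an invariant worth panicking over.
            raise RuntimeError("client not started")
        q.put(msg)
```

Returning an error here matches the commit below that distinguishes assert (panic) from expect (error).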

Support HPUB in general, and properly support reply subjects

clean up some tests

coder@08714a4174bb:~$ nats bench sub foo -s localhost:4223
14:33:02 Starting Core NATS subscriber benchmark [clients=1, msg-size=128 B, msgs=100,000, multi-subject=false, subject=foo]
14:33:02 [1] Starting Core NATS subscriber, expecting 100,000 messages
Finished 0s [===============================================================================================================================] 100%
NATS Core NATS subscriber stats: 934,205 msgs/sec ~ 114 MiB/sec ~ 1.07us

Not sure why; it seems like I'm using the right allocators everywhere?
Need to take another pass at this later.

simplify code
clean up dead code

Doesn't flush every message: pulls batches from the queue to send and flushes at the end of each batch.
Batches are a minimum of one message, but may be more.
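The batching strategy above reads as: block for one message, opportunistically drain whatever else is already queued, write everything, then flush once. A minimal Python sketch (the server itself is Zig; `write` and `flush` are hypothetical callbacks standing in for the buffered connection):

```python
import queue

def send_batch(q, write, flush):
    # Block for the first message: a batch is at least one message.
    batch = [q.get()]
    # Drain whatever else is already queued, without blocking again.
    while True:
        try:
            batch.append(q.get_nowait())
        except queue.Empty:
            break
    for msg in batch:
        write(msg)  # buffered write, no flush per message
    flush()         # a single flush per batch
    return len(batch)
```

Under load the batch grows and the flush cost is amortized over many messages; when idle, it degrades gracefully to one flush per message.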

Should probably make a copy instead of doing IO while holding the mutex
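The copy-then-write pattern suggested above can be sketched in Python (hypothetical names; the server itself is Zig): snapshot the shared buffer while holding the lock, then perform the slow IO after releasing it, so other threads block only for the duration of a memcpy rather than a syscall.

```python
import threading

lock = threading.Lock()
pending = bytearray()  # shared outgoing buffer

def queue_bytes(data):
    with lock:
        pending.extend(data)

def flush_pending(write):
    # Copy under the lock, then do the (potentially slow) IO outside it.
    with lock:
        snapshot = bytes(pending)
        pending.clear()
    write(snapshot)
```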

Works against the latest Zig master branch again :)

Way faster than before, even??
coder@08714a4174bb:~$ nats bench pub foo -s localhost:4223
05:12:23 Starting Core NATS publisher benchmark [clients=1, msg-size=128 B, msgs=100,000, multi-subject=false, multi-subject-max=100,000, sleep=0s, subject=foo]
05:12:23 [1] Starting Core NATS publisher, publishing 100,000 messages
Finished 0s [====================================================================================] 100%
NATS Core NATS publisher stats: 574,666 msgs/sec ~ 70 MiB/sec ~ 1.74us
So cool.

This is quite slow

coder@08714a4174bb:~$ nats bench sub foo -s localhost:4223
03:28:04 Starting Core NATS subscriber benchmark [clients=1, msg-size=128 B, msgs=100,000, multi-subject=false, subject=foo]
03:28:04 [1] Starting Core NATS subscriber, expecting 100,000 messages
Finished 6s [====================================================================================] 100%
NATS Core NATS subscriber stats: 14,691 msgs/sec ~ 1.8 MiB/sec ~ 68.06us

Break up creating and starting the client process.
I think this should simplify storing the std.Io.Queue on the stack.
Before, I stored it on the heap because it was hard to keep pointers to the same location when the client was initialized on the stack.

Assert implies panic, expect implies error

Was accidentally consuming one more byte than expected when reaching the end of the second term.
This caused the parser to work properly when a queue group was specified, but to fail otherwise, consuming the start of the next message (usually a PING) as the SID.
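For context, the NATS `SUB` operation is `SUB <subject> [queue group] <sid>\r\n`: the queue group term is optional, so the parser must decide from the term count where the SID is, without reading past the line. A minimal Python sketch of that decision (`parse_sub` is a hypothetical helper, not the server's actual Zig parser):

```python
def parse_sub(line):
    # NATS protocol: SUB <subject> [queue group] <sid>\r\n
    parts = line.rstrip(b"\r\n").split(b" ")
    if parts[0] != b"SUB":
        raise ValueError("not a SUB operation")
    if len(parts) == 3:
        # No queue group: the second term is already the SID, and the
        # parser must stop here rather than consume the next message.
        _, subject, sid = parts
        group = None
    elif len(parts) == 4:
        _, subject, group, sid = parts
    else:
        raise ValueError("malformed SUB")
    return subject, group, sid
```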

This lets me dev-cycle faster.
Shouldn't have to do this, though; I should be cleaning up properly.