Commit message | Author | Age

Use juicy main ;)
Fixed an issue where not all data was being sent.
Request-reply has a performance issue but technically works.
Seems to not flush the last message
also change internal name to match public name
Use a Mutex to wait for the signal handler to fire instead of checking
an atomic boolean over and over again.
This makes things much easier to use as a library
Based on this conversation with Andrew
https://ziggit.dev/t/am-i-canceling-my-std-io-group-incorrectly/13836
Add a bunch of tests for the client
This was a temporary workaround for when I was not cleanly exiting.
Now that I am, this is not necessary.
This will try to take better advantage of the buffered reading.
Instead of pushing one byte at a time to the array list for each
section, find the end index for each section, then allocate the array
list and copy the data into it all at once.
Also correctly move resetting the task to the end instead of using defer.
We don't want to reset the task in the case of an error, so we shouldn't
use defer.
Stores short message buffers in a colocated array, overflowing to an
allocated slice when needed.
We want to match the underlying system socket buffer.
Filling this buffer minimizes the number of syscalls we do.
Larger would be a waste.
Also changed parser to use enums that more closely match the NATS
message types.
like 150 mbps now
Making it easier to use the server as a library
Adding tests for everything
was freeing the wrong element before.
This can be computed (as it is now)
Since the queue was being set in an async task and we were then calling
send, asserting that the queue was set, we could have triggered a panic.
Didn't run into it, but it seemed likely to cause issues in the future.
Also compute the buffer size for operators at comptime.
Support HPUB in general, and properly support reply subjects
clean up some tests
coder@08714a4174bb:~$ nats bench sub foo -s localhost:4223
14:33:02 Starting Core NATS subscriber benchmark [clients=1, msg-size=128 B, msgs=100,000, multi-subject=false, subject=foo]
14:33:02 [1] Starting Core NATS subscriber, expecting 100,000 messages
Finished 0s [===============================================================================================================================] 100%
NATS Core NATS subscriber stats: 934,205 msgs/sec ~ 114 MiB/sec ~ 1.07us
Not sure why; it seems like I'm using the right allocators everywhere?
Need to take another pass at this later.