This rate limits new connections to prevent DoS attacks.
To rate limit QUIC connections effectively, we now gate QUIC connection attempts before the handshake, so that we don't spend compute on handshakes for connections that will eventually be rejected.
We can only set a single ConnContext per quic-go Transport, since there's only one listener per quic-go Transport. This means we cannot set a different ConnContext for each listener on the same address.
As we're now gating QUIC connections before the handshake, we use source address verification to ensure that spoofed IPs cannot DoS new connections from a particular IP. This is done by ensuring that some of the connection attempts always verify the source address. We get DoS protection at the expense of the added latency of source address verification.
This rate limits identify pushes from peers to one every five seconds, with an allowed burst of 10 pushes. This should be enough for all but malfunctioning or malicious peers.
We can use the exact same code for autonat, autonatv2, circuit v2, etc.
Introducing limits to identify separately to get some feedback for #3265. For this PR, I'd like to ignore questions about where this piece of code should go, and focus on how specifically it should behave. See the long comment in rateLimiter.allow for an example.
Part of: #3265
In experiments with js we've found that increasing the message size
increases throughput. See: libp2p/specs#628 (comment)
for details.
This changes the protobuf reader for the stream to read 256kB messages.
This also requires increasing the connection's SCTP read buffer to
about 2.5 MB, to support one message being buffered for each of 10
streams.
This isn't enough to support larger messages. We most likely need to
change the server's inferred SDP to use a 256kB maxMessageSize, and we
need some backwards-compatible mechanism in the handshake to opt in to
large messages. See libp2p/specs#628 for details.
This also removes the go-leveldb-datastore dependency. There's no
reason to test with LevelDB; this code should work with any compliant
go-datastore.
Bumps go-datastore to the latest version, as it removes the go-process dependency.
Fixes: #3250
This option, `WithEmergencyTrim`, was intended to trim connections during
a memory emergency. The API was very confusing: to use it correctly you
had to set the `WithEmergencyTrim` option and then run
`watchdog.HeapDriven(...)` to start the goroutine that would trigger the
trim in a memory emergency.
As there's no correct usage of this API in the wild
(https://github.com/search?q=WithEmergencyTrim&type=code&p=1),
I'm removing it and exporting a ForceTrim method that users can call
from any watchdog-style memory tracking implementation.
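A sketch of the intended usage: the user's own memory tracker decides when pressure is high and calls the exported trim method. Everything here except the name ForceTrim is hypothetical; `trimmer` stands in for the connection manager, and `onMemoryPressure` stands in for a watchdog-style callback, not the actual go-watchdog API.

```go
package main

import (
	"fmt"
	"runtime"
)

// trimmer stands in for the connection manager; ForceTrim is the
// exported method users call to trim connections on demand.
type trimmer struct{ trims int }

func (t *trimmer) ForceTrim() { t.trims++ }

// onMemoryPressure is a stand-in for a watchdog callback: it checks
// current heap usage against a threshold and trims when exceeded.
func onMemoryPressure(threshold uint64, tr *trimmer) bool {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	if m.HeapAlloc > threshold {
		tr.ForceTrim()
		return true
	}
	return false
}

func main() {
	tr := &trimmer{}
	// A 1-byte threshold always fires, simulating an emergency.
	onMemoryPressure(1, tr)
	fmt.Println(tr.trims) // 1
}
```

Because the trigger now lives entirely in user code, any memory-tracking scheme works, not just `watchdog.HeapDriven`.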
* chore: bump deps
* chore: bump deps in test-plans
* Downgrade quic-go to v0.47.0 while we update WebTransport
* Fork webtransport-go
This lets us update quic-go to v0.48.0
* Revert "Fork webtransport-go"
This reverts commit fddad1c95b.
* Revert "Downgrade quic-go to v0.47.0 while we update WebTransport"
This reverts commit 094221dddb.
* Update webtransport-go
* Update webtransport-go usage
* Update go generate
* Bump go-multiaddr dep
* Add support for http-path
* Support redirects
* Don't split at p2p, split at P_HTTP_PATH
* fixup
* Fill in host if missing
* mod tidy
* Fix test
* Add MultiaddrURIRedirect
* Only alloc err once
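The "split at P_HTTP_PATH" change above can be illustrated by splitting a multiaddr at its http-path component: everything before it addresses the transport, everything after is the (percent-encoded) HTTP path. This sketch works on the string form to avoid the go-multiaddr dependency; `splitHTTPPath` is a hypothetical helper, not the go-multiaddr API.

```go
package main

import (
	"fmt"
	"strings"
)

// splitHTTPPath splits a multiaddr string at the http-path component.
// The part before it is the transport address; the part after is the
// percent-encoded HTTP path. If there is no http-path component, the
// whole input is the transport address.
func splitHTTPPath(maddr string) (transport, httpPath string) {
	const comp = "/http-path/"
	i := strings.Index(maddr, comp)
	if i < 0 {
		return maddr, ""
	}
	return maddr[:i], maddr[i+len(comp):]
}

func main() {
	transport, path := splitHTTPPath("/dns4/example.com/tcp/443/tls/http/http-path/api%2Fv1")
	fmt.Println(transport) // /dns4/example.com/tcp/443/tls/http
	fmt.Println(path)      // api%2Fv1
}
```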
This change has some history. Originally there was v1.5.0, then the
project stalled and eventually the repo got archived. Some new
maintainers stepped up and released v1.5.1. That version had some
controversial changes including excessive logging (see
https://github.com/gorilla/websocket/issues/880). This caused us to
downgrade this dep back to v1.5.0 (see #2762). The change was
short-lived, as I bumped this dep back up to v1.5.1 without remembering this
context.
Since then the maintainers of gorilla/websocket have released a new
version v1.5.3 that brings the project back to the state of when it got
archived (minus a README edit). Bumping to this version should solve our
issues with v1.5.1 without having to downgrade back to v1.5.0.