Quinn — Futures-based QUIC implementation in Rust

Quinn is an implementation of the QUIC transport protocol undergoing standardization by the IETF. It is suitable for experimental use. This repository contains the following crates:

  • quinn contains a high-level async API based on tokio; see quinn/examples/ for usage. This is the crate most Rust developers will want to use. (Basic benchmarks are included.)
  • quinn-proto contains a deterministic state machine of the protocol which performs no I/O internally and is suitable for use with custom event loops (and potentially a C or C++ API).
  • quinn-h3 contains an implementation of HTTP/3 and QPACK. It is split internally into a deterministic state machine and a tokio-based high-level async API.
  • bench contains some extra benchmarks without any framework.
  • interop contains tooling that helps the Quinn team run interoperability tests.

Quinn is the subject of a RustFest Paris (May 2018) presentation; you can also get the slides (and the animation about head-of-line blocking). Video of the talk is available on YouTube. Since this presentation, Quinn has been merged with quicr, another Rust implementation.

All feedback is welcome. Feel free to file bugs, documentation requests, and any other feedback in the issue tracker.

Quinn was created and is maintained by Dirkjan Ochtman and Benjamin Saunders.

Features

  • Simultaneous client/server operation
  • Ordered and unordered stream reads for improved performance
  • Works on stable Rust, tested on Linux, macOS and Windows
  • Pluggable cryptography, with a standard implementation backed by rustls and ring
  • Application-layer datagrams for small, unreliable messages

Status

  • QUIC draft 27 with TLS 1.3
  • Cryptographic handshake
  • Stream data w/ flow control and congestion control
  • Connection close
  • Stateless retry
  • Explicit congestion notification
  • Migration
  • 0-RTT data
  • Session resumption
  • HTTP over QUIC

Usage Notes

Buffers

A Quinn endpoint corresponds to a single UDP socket, no matter how many connections are in use. Handling high aggregate data rates on a single endpoint can require a larger UDP buffer than is configured by default in most environments. If you observe erratic latency and/or throughput over a stable network link, consider increasing the buffer sizes used. For example, you could adjust the SO_SNDBUF and SO_RCVBUF options of the UDP socket to be used before passing it in to Quinn. Note that some platforms (e.g. Linux) require elevated privileges or modified system configuration for a process to increase its UDP buffer sizes.
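
As a rough sizing guide, the buffer should cover at least the link's bandwidth-delay product. A back-of-the-envelope sketch (the numbers are illustrative, not Quinn defaults):

```rust
/// Rough bandwidth-delay product: the minimum buffering needed to keep a
/// link of the given rate busy across one round trip.
fn bandwidth_delay_product(bandwidth_bits_per_sec: u64, rtt_ms: u64) -> u64 {
    bandwidth_bits_per_sec / 8 * rtt_ms / 1000
}

fn main() {
    // A 1 Gbit/s link with a 50 ms RTT keeps ~6.25 MB in flight, well above
    // typical default UDP socket buffer sizes (often a few hundred KB).
    let bytes = bandwidth_delay_product(1_000_000_000, 50);
    println!("{bytes}"); // prints 6250000
}
```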

Certificates

By default, Quinn clients validate the cryptographic identity of servers they connect to. This prevents an active, on-path attacker from intercepting messages, but requires trusting some certificate authority. For many purposes, this can be accomplished by using certificates from Let's Encrypt for servers, and relying on the default configuration for clients.

For some cases, including peer-to-peer, trust-on-first-use, deliberately insecure applications, or any case where servers are not identified by domain name, this isn't practical. Arbitrary certificate validation logic can be implemented by enabling the dangerous_configuration feature of rustls and constructing a Quinn ClientConfig with an overridden certificate verifier by hand.

When operating your own certificate authority doesn't make sense, rcgen can be used to generate self-signed certificates on demand. To support trust-on-first-use, servers that automatically generate self-signed certificates should write their generated certificate to persistent storage and reuse it on future runs.
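
The persist-and-reuse pattern might look like the following sketch, where generate_self_signed_der is a placeholder standing in for real certificate generation (e.g. rcgen's generate_simple_self_signed):

```rust
use std::fs;
use std::io;
use std::path::Path;

/// Load a previously generated certificate, or generate and persist a new
/// one on first run, so trust-on-first-use clients see a stable identity.
fn load_or_generate_cert(path: &Path) -> io::Result<Vec<u8>> {
    if path.exists() {
        return fs::read(path);
    }
    let der = generate_self_signed_der();
    fs::write(path, &der)?;
    Ok(der)
}

// Placeholder so the sketch compiles; a real server would produce DER here,
// e.g. via the rcgen crate.
fn generate_self_signed_der() -> Vec<u8> {
    vec![0x30, 0x82] // not a real certificate
}

fn main() -> io::Result<()> {
    let path = std::env::temp_dir().join("quinn_example_cert.der");
    let first = load_or_generate_cert(&path)?;
    let second = load_or_generate_cert(&path)?; // reused, not regenerated
    assert_eq!(first, second);
    Ok(())
}
```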

Running the Examples

$ cargo run --example server ./
$ cargo run --example client https://localhost:4433/Cargo.toml

This launches an HTTP 0.9 server on the loopback address serving the current working directory, with the client fetching ./Cargo.toml. By default, the server generates a self-signed certificate and stores it to disk, where the client will automatically find and trust it.

Development

The quinn-proto test suite uses simulated IO for reproducibility and to avoid long sleeps in certain timing-sensitive tests. If the SSLKEYLOGFILE environment variable is set, the tests will emit UDP packets for inspection using external protocol analyzers like Wireshark, and NSS-compatible key logs for the client side of each connection will be written to the path specified in the variable.

Comments

  • Encrypt retry token

    May 24, 2020

    When stateless retries are enabled, Quinn embeds some state in the retry token and authenticates it with an HMAC. This information is not secret, so the current approach works, but it would be more in the spirit of QUIC to use strong encryption, preventing other parties from coming to rely on that data and creating a stability hazard.

    The stateless nature of retry tokens requires some care if we want to expose absolutely zero information to other parties, as there's no way to bundle an AEAD nonce. One solution would be:

    • Generate a unique AEAD key for each retry token by running an HKDF, keyed by a fixed master key, over a fixed-length sequence of random data.
    • Encrypt the token payload using a nonce of 0. This is safe because we never reuse a key.
    • Send the concatenation of the random data, the encrypted payload, and the AEAD tag as the token. An outside party cannot distinguish any of this from random data.
    • Recover the key for decryption by re-running the HKDF on the random data prefix.
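
    A toy round-trip sketch of the scheme above (not cryptographic code: std's DefaultHasher stands in for the HKDF, a XOR keystream stands in for the AEAD, and the tag is omitted; only the token layout and the key-recovery flow are the point):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// NON-cryptographic stand-in for the HKDF step.
fn derive_key(master_key: u64, random_prefix: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    master_key.hash(&mut h);
    random_prefix.hash(&mut h);
    h.finish()
}

// NON-cryptographic stand-in for the AEAD with nonce 0.
fn xor_with_keystream(key: u64, data: &[u8]) -> Vec<u8> {
    data.iter()
        .zip(key.to_le_bytes().iter().cycle())
        .map(|(b, k)| b ^ k)
        .collect()
}

/// token = random_prefix || payload encrypted under a per-token key.
fn seal(master_key: u64, random_prefix: &[u8], payload: &[u8]) -> Vec<u8> {
    let key = derive_key(master_key, random_prefix);
    let mut token = random_prefix.to_vec();
    token.extend(xor_with_keystream(key, payload));
    token
}

/// Recover the key by re-deriving it from the random prefix, then decrypt.
fn open(master_key: u64, token: &[u8], prefix_len: usize) -> Vec<u8> {
    let (prefix, ciphertext) = token.split_at(prefix_len);
    xor_with_keystream(derive_key(master_key, prefix), ciphertext)
}

fn main() {
    let token = seal(42, b"rand8byt", b"client address + timestamp");
    assert_eq!(open(42, &token, 8), b"client address + timestamp");
}
```
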
    enhancement good first issue
  • Simplify unordered read state tracking

    May 29, 2020

    For unordered reads of receive streams, we currently track a RangeSet (aka BTreeMap<u64, u64>) of received data plus a BinaryHeap of received data. Both of these data structures serve to sort the received ranges of stream data. We could reduce duplicated effort by replacing them with a BTreeMap<u64, UnorderedChunk>, where UnorderedChunk is either a Bytes of buffered data or a u64 representing a range of data that's already been read.
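
    A sketch of the proposed structure (names are hypothetical, and Vec<u8> stands in for Bytes):

```rust
use std::collections::BTreeMap;

// One ordered map from stream offset to either buffered-but-unread data
// or the length of a range the application has already consumed.
enum UnorderedChunk {
    Buffered(Vec<u8>), // stand-in for Bytes
    Read(u64),         // length of an already-read range
}

fn main() {
    let mut chunks: BTreeMap<u64, UnorderedChunk> = BTreeMap::new();
    chunks.insert(0, UnorderedChunk::Read(4)); // bytes 0..4 already read
    chunks.insert(4, UnorderedChunk::Buffered(b"data".to_vec())); // 4..8 buffered

    // A single ordered walk replaces the separate RangeSet + BinaryHeap:
    let end: u64 = chunks
        .iter()
        .map(|(off, c)| match c {
            UnorderedChunk::Read(len) => off + len,
            UnorderedChunk::Buffered(b) => off + b.len() as u64,
        })
        .max()
        .unwrap_or(0);
    assert_eq!(end, 8);
}
```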

    enhancement 
  • H3: make clients check server authority against its certificates

    Jun 5, 2020

    Regarding this extract from the draft:

    Once a connection exists to a server endpoint, this connection MAY be reused for requests with multiple different URI authority components. In general, a server is considered authoritative for all URIs with the "https" scheme for which the hostname in the URI is present in the authenticated certificate provided by the server, either as the CN field of the certificate subject or as a dNSName in the subjectAltName field of the certificate; see [RFC6125]. For a host that is an IP address, the client MUST verify that the address appears as an iPAddress in the subjectAltName field of the certificate. If the hostname or address is not present in the certificate, the client MUST NOT consider the server authoritative for origins containing that hostname or address. See Section 5.4 of [SEMANTICS] for more detail on authoritative access.

    h3 help wanted 
  • 0-RTT is sometimes unexpectedly rejected

    Jun 6, 2020

    Both the quant maintainer and quic-tracker have recently reported 0-RTT sometimes being rejected when it shouldn't be. Manual local testing with quant and the interop server has so far failed to reproduce this. The server logs "dropping unexpected 0-RTT packet" instead of "0-RTT enabled", indicating that the call to rustls' get_0rtt_keys() failed, so something's happening at that layer: either SessionCommon::get_suite() is returning None, the PSK isn't being successfully decoded in CompleteClientHelloHandling::handle_client_hello, or ExtensionProcessing::process_common is unsetting the quic.early_secret due to some condition that requires early data to be rejected.

    If we can manage to reproduce this locally, it should be straightforward to identify which of these cases is responsible by stepping through with a debugger and/or adding diagnostic prints inside rustls. The responsible case can then be sanity-checked against the TLS 1.3 spec.

    bug 
  • Update to draft 29

    Jun 12, 2020

  • Initial support for PLPMTUD

    Jun 15, 2020

    I have not added any tests so far. I did check the benchmarks and this seems to result in a ~3% improvement in the large streams benchmark.

  • Relation between Connection and IncomingStreams

    Mar 20, 2019

    On a successful connection in either direction (i.e., either actively via connect or passively via listen), I eventually get two things: one Connection and one IncomingStreams. As a user I expect these to be related at some level, since they denote a connection to the peer. However, it seems that closing the connection does nothing for the IncomingStreams.

    Details:

    1. I get a Connection and IncomingStreams from a peer while I'm listening
    2. Something happens and I decide I don't want this peer any more
    3. I get the Connection object and do Connection::close(...).
    4. Nothing happens to the IncomingStreams - it continues to sit indefinitely in the tokio event loop. I would have expected it to resolve to completion/failure at this point and destroy itself.
    5. The remote peer still tries to send something to us over their connection (which is still alive for them) by opening a new stream. The peer keeps getting a "ConnectionAborted - closed by remote peer" error, which is fine and expected.

    So the IncomingStreams stream (and possibly any other stream obtained from it which the remote hasn't shutdown/closed/destroyed yet - though I haven't checked this part) uselessly remains unresolved in tokio. I now have to keep extra knowledge about closing these streams when I close the Connection to the peer.

    Wouldn't the better/expected design be that when I close (or drop/destroy) the Connection, all the related objects resolve into an error (or anything, as long as they resolve), so that all resources are gracefully collected?

  • Update `tokio`, `futures` and `bytes`

    Nov 27, 2019

    I had to pin `tracing` to `0.1.9` until https://github.com/tokio-rs/mio/pull/1170 is resolved.

    Also https://github.com/carllerche/string/pull/17 and https://github.com/carllerche/string/pull/18 need to pass, but there are probably workarounds for those. This is also an option: https://github.com/carllerche/string/pull/20.

    Examples and tests aren't done yet, ~~I'm still trying to get the server <-> client example to work~~.

    Any help is appreciated!

    This updates the following:

    • tokio: 0.2.0-alpha.6 -> 0.2.2
    • futures: 0.3.0-alpha.18 -> 0.3.1
    • bytes: 0.4.7 -> 0.5.2
    • string: 0.2 -> master
    • http: a3a8fcb213bc456e0b7a42cf0e2bd57afa49851b -> 43dffa1eb79f6801e5e07f3338fa56191dc454bb

    Tests, examples and benchmarks are minimally changed to keep the PR small.

  • Allow connection to return peer certificates

    Dec 17, 2019

    This is an experiment in implementing libp2p-tls, which needs the peer's certificates. rustls has a get_peer_certificates function on its Session trait; consider adding it to crypto::Session in quinn-proto and to Connection in quinn.

  • Manage InnerEndpoint timers with a DelayQueue

    Oct 29, 2018

  • Do not return `ReadError::Blocked` when reading from a stream in a closed connection

    Jan 17, 2020

    Currently, reading from a stream in a closed connection can sometimes return Blocked, which makes no sense.

  • Intermittent timeout with git master

    Mar 17, 2020

    Logs at https://gist.github.com/DemiMarie-parity/c83746d3f95f94861a446207f58e0752
