
smoltcp

smoltcp is a standalone, event-driven TCP/IP stack that is designed for bare-metal, real-time systems. Its design goals are simplicity and robustness. Its design anti-goals include complicated compile-time computations, such as macro or type tricks, even at the cost of performance degradation.

smoltcp does not need heap allocation at all, is extensively documented, and compiles on stable Rust 1.28 and later.

smoltcp achieves ~Gbps of throughput when tested against the Linux TCP stack in loopback mode.

Features

smoltcp is missing many widely deployed features, usually because no one has implemented them yet. To set expectations right, both implemented and omitted features are listed.

Media layer

The only supported medium is Ethernet.

  • Regular Ethernet II frames are supported.
  • Unicast, broadcast and multicast packets are supported.
  • ARP packets (including gratuitous requests and replies) are supported.
  • ARP requests are sent at a rate not exceeding one per second.
  • Cached ARP entries expire after one minute.
  • 802.3 frames and 802.1Q are not supported.
  • Jumbo frames are not supported.
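The ARP timing rules above (requests rate-limited to one per second, cached entries expiring after one minute) can be illustrated with a small standalone sketch. This is not smoltcp's neighbor cache; all names are illustrative.

```rust
/// Illustrative constants matching the behavior described above.
const REQUEST_RATE_LIMIT_MS: u64 = 1_000; // at most one ARP request per second
const ENTRY_LIFETIME_MS: u64 = 60_000;    // cached entries expire after one minute

struct ArpEntry {
    resolved_at_ms: u64,
}

impl ArpEntry {
    /// A cached entry is stale once a minute has passed since resolution.
    fn is_expired(&self, now_ms: u64) -> bool {
        now_ms.saturating_sub(self.resolved_at_ms) >= ENTRY_LIFETIME_MS
    }
}

/// Decide whether another ARP request may be sent for a still-unresolved address.
fn may_send_request(last_request_ms: u64, now_ms: u64) -> bool {
    now_ms.saturating_sub(last_request_ms) >= REQUEST_RATE_LIMIT_MS
}

fn main() {
    let entry = ArpEntry { resolved_at_ms: 0 };
    println!("expired after 61 s: {}", entry.is_expired(61_000)); // true
    println!("may re-request after 500 ms: {}", may_send_request(0, 500)); // false
}
```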

IP layer

IPv4

  • IPv4 header checksum is generated and validated.
  • IPv4 time-to-live value is configurable per socket, set to 64 by default.
  • IPv4 default gateway is supported.
  • Routing outgoing IPv4 packets is supported, through a default gateway or a CIDR route table.
  • IPv4 fragmentation is not supported.
  • IPv4 options are not supported and are silently ignored.
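The header checksum in the first bullet is the standard Internet one's-complement checksum: sum the header as 16-bit big-endian words, fold the carries back in, and complement the result. A minimal standalone sketch (not smoltcp's implementation):

```rust
/// One's-complement checksum over 16-bit big-endian words, as used by the
/// IPv4 header checksum. A sketch for illustration, not smoltcp's code.
fn ipv4_checksum(header: &[u8]) -> u16 {
    let mut sum: u32 = 0;
    for chunk in header.chunks(2) {
        let word = if chunk.len() == 2 {
            u16::from_be_bytes([chunk[0], chunk[1]]) as u32
        } else {
            (chunk[0] as u32) << 8 // pad a trailing odd byte with zero
        };
        sum += word;
    }
    // Fold the carries back into the low 16 bits.
    while sum > 0xffff {
        sum = (sum & 0xffff) + (sum >> 16);
    }
    !(sum as u16)
}

fn main() {
    // A 20-byte IPv4 header with the checksum field (bytes 10-11) zeroed
    // before computation, as the sender does.
    let header = [
        0x45, 0x00, 0x00, 0x73, 0x00, 0x00, 0x40, 0x00,
        0x40, 0x11, 0x00, 0x00, 0xc0, 0xa8, 0x00, 0x01,
        0xc0, 0xa8, 0x00, 0xc7,
    ];
    println!("{:#06x}", ipv4_checksum(&header)); // prints 0xb861
}
```

A receiver validates by summing the header with the checksum field included; a valid header folds to 0xffff before the final complement.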

IPv6

  • IPv6 hop-limit value is configurable per socket, set to 64 by default.
  • Routing outgoing IPv6 packets is supported, through a default gateway or a CIDR route table.
  • IPv6 hop-by-hop header is supported.
  • ICMPv6 parameter problem message is generated in response to an unrecognized IPv6 next header.
  • ICMPv6 parameter problem message is not generated in response to an unknown IPv6 hop-by-hop option.

IP multicast

IGMP

The IGMPv1 and IGMPv2 protocols are supported, and IPv4 multicast is available.

  • Membership reports are sent in response to membership queries, at intervals equal to the maximum response time divided by the number of groups to be reported.
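The report spacing described above is a simple division: the query's maximum response time is spread evenly over the groups to be reported. As a sketch of that scheduling rule (illustrative only, not smoltcp's code):

```rust
/// Interval between successive IGMP membership reports: the query's maximum
/// response time spread evenly over the groups to be reported.
/// A sketch of the rule described above, not smoltcp's implementation.
fn report_interval_ms(max_resp_time_ms: u64, groups_to_report: u64) -> u64 {
    assert!(groups_to_report > 0, "at least one group must be reported");
    max_resp_time_ms / groups_to_report
}

fn main() {
    // A query with a 10 s maximum response time and 4 joined groups
    // yields one report every 2.5 s.
    println!("{} ms", report_interval_ms(10_000, 4)); // prints 2500 ms
}
```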

ICMP layer

ICMPv4

The ICMPv4 protocol is supported, and ICMP sockets are available.

  • ICMPv4 header checksum is supported.
  • ICMPv4 echo replies are generated in response to echo requests.
  • ICMP sockets can listen to ICMPv4 Port Unreachable messages, or any ICMPv4 messages with a given IPv4 identifier field.
  • ICMPv4 protocol unreachable messages are not passed to higher layers when received.
  • ICMPv4 parameter problem messages are not generated.

ICMPv6

The ICMPv6 protocol is supported, but is not available via ICMP sockets.

  • ICMPv6 header checksum is supported.
  • ICMPv6 echo replies are generated in response to echo requests.
  • ICMPv6 protocol unreachable messages are not passed to higher layers when received.

NDISC

  • Neighbor Advertisement messages are generated in response to Neighbor Solicitations.
  • Router Advertisement messages are not generated or read.
  • Router Solicitation messages are not generated or read.
  • Redirected Header messages are not generated or read.

UDP layer

The UDP protocol is supported over IPv4 and IPv6, and UDP sockets are available.

  • Header checksum is always generated and validated.
  • In response to a packet arriving at a port without a listening socket, an ICMP destination unreachable message is generated.

TCP layer

The TCP protocol is supported over IPv4 and IPv6, and server and client TCP sockets are available.

  • Header checksum is generated and validated.
  • Maximum segment size is negotiated.
  • Window scaling is negotiated.
  • Multiple packets are transmitted without waiting for an acknowledgement.
  • Reassembly of out-of-order segments is supported, with no more than 4 or 32 gaps in sequence space.
  • Keep-alive packets may be sent at a configurable interval.
  • Retransmission timeout starts at a fixed interval of 100 ms and doubles every time.
  • Time-wait timeout has a fixed interval of 10 s.
  • User timeout has a configurable interval.
  • Selective acknowledgements are not implemented.
  • Delayed acknowledgements are not implemented.
  • Silly window syndrome avoidance is not implemented.
  • Nagle's algorithm is not implemented.
  • Congestion control is not implemented.
  • Timestamping is not supported.
  • Urgent pointer is ignored.
  • Probing Zero Windows is not implemented.
  • Packetization Layer Path MTU Discovery (PLPMTUD) is not implemented.
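The retransmission schedule in the list above is a fixed 100 ms initial timeout that doubles on every retransmission (plain exponential backoff, with no RTT estimation). A small illustrative helper, not smoltcp's internals:

```rust
/// Retransmission timeout after `n` consecutive retransmissions, following
/// the fixed-start, doubling scheme described above: 100 ms, 200 ms, 400 ms, ...
/// Illustrative sketch only; smoltcp's internals may differ in detail.
fn rto_ms(retransmissions: u32) -> u64 {
    // Doubling is a left shift; callers are expected to keep `retransmissions`
    // small, so overflow is not handled here.
    100u64 << retransmissions
}

fn main() {
    for n in 0..5 {
        println!("after {} retransmissions: {} ms", n, rto_ms(n));
    }
}
```

Since no RTT measurement is involved, the schedule is the same on a LAN and over a satellite link; that is one of the simplicity trade-offs the introduction mentions.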

Installation

To use the smoltcp library in your project, add the following to Cargo.toml:

[dependencies]
smoltcp = "0.5"

The default configuration assumes a hosted environment, for ease of evaluation. You probably want to disable default features and configure them one by one:

[dependencies]
smoltcp = { version = "0.5", default-features = false, features = ["log"] }
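For instance, a bare-metal build that only needs IPv4 and TCP could enable just the features used by the loopback example described later in this document (the exact selection is illustrative; pick the features your application needs):

```toml
[dependencies]
smoltcp = { version = "0.5", default-features = false, features = ["log", "proto-ipv4", "socket-tcp", "alloc"] }
```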

Feature std

The std feature enables use of objects and slices owned by the networking stack through a dependency on std::boxed::Box and std::vec::Vec.

This feature is enabled by default.

Feature alloc

The alloc feature enables use of objects owned by the networking stack through a dependency on collections from the alloc crate. This only works on nightly rustc.

This feature is disabled by default.

Feature log

The log feature enables logging of events within the networking stack through the log crate. Normal events (e.g. buffer level or TCP state changes) are emitted with the TRACE log level. Exceptional events (e.g. malformed packets) are emitted with the DEBUG log level.

This feature is enabled by default.

Feature verbose

The verbose feature enables logging of events where the logging itself may incur very high overhead. For example, emitting a log line every time an application reads or writes as little as 1 octet from a socket is likely to overwhelm the application logic unless a BufReader or BufWriter is used, which are of course not available on heap-less systems.

This feature is disabled by default.

Features phy-raw_socket and phy-tap_interface

Enable smoltcp::phy::RawSocket and smoltcp::phy::TapInterface, respectively.

These features are enabled by default.

Features socket-raw, socket-udp, and socket-tcp

Enable smoltcp::socket::RawSocket, smoltcp::socket::UdpSocket, and smoltcp::socket::TcpSocket, respectively.

These features are enabled by default.

Features proto-ipv4 and proto-ipv6

Enable IPv4 and IPv6 respectively.

Hosted usage examples

smoltcp, being a freestanding networking stack, needs to be able to transmit and receive raw frames. For testing purposes, we will use a regular OS, and run smoltcp in a userspace process. Only Linux is supported (right now).

On *nix OSes, transmitting and receiving raw frames normally requires superuser privileges, but on Linux it is possible to create a persistent tap interface that can be manipulated by a specific user:

sudo ip tuntap add name tap0 mode tap user $USER
sudo ip link set tap0 up
sudo ip addr add 192.168.69.100/24 dev tap0
sudo ip -6 addr add fe80::100/64 dev tap0
sudo ip -6 addr add fdaa::100/64 dev tap0
sudo ip -6 route add fe80::/64 dev tap0
sudo ip -6 route add fdaa::/64 dev tap0

It's possible to let smoltcp access the Internet by enabling routing for the tap interface:

sudo iptables -t nat -A POSTROUTING -s 192.168.69.0/24 -j MASQUERADE
sudo sysctl net.ipv4.ip_forward=1
sudo ip6tables -t nat -A POSTROUTING -s fdaa::/64 -j MASQUERADE
sudo sysctl -w net.ipv6.conf.all.forwarding=1

Fault injection

In order to demonstrate the response of smoltcp to adverse network conditions, all examples implement fault injection, available through command-line options:

  • The --drop-chance option randomly drops packets, with the given probability in percent.
  • The --corrupt-chance option randomly mutates one octet in a packet, with the given probability in percent.
  • The --size-limit option drops packets larger than the specified size.
  • The --tx-rate-limit and --rx-rate-limit options set the amount of tokens for a token bucket rate limiter, in packets per bucket.
  • The --shaping-interval option sets the refill interval of a token bucket rate limiter, in milliseconds.

A good starting value for --drop-chance and --corrupt-chance is 15%. Good starting values are 4 for --tx-rate-limit and --rx-rate-limit, and 50 ms for --shaping-interval.
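The rate limiter behind the last two options is a token bucket: the bucket refills to its limit every shaping interval, and each packet consumes one token. A minimal standalone sketch of that scheme (not the examples' actual code; all names are illustrative):

```rust
/// A minimal token-bucket shaper in the style of --tx-rate-limit /
/// --rx-rate-limit and --shaping-interval. Illustrative sketch only.
struct TokenBucket {
    limit: u32,       // tokens per bucket (the rate-limit option)
    interval_ms: u64, // refill interval (the shaping-interval option)
    tokens: u32,
    last_refill_ms: u64,
}

impl TokenBucket {
    fn new(limit: u32, interval_ms: u64) -> TokenBucket {
        TokenBucket { limit, interval_ms, tokens: limit, last_refill_ms: 0 }
    }

    /// Returns true if a packet may pass at time `now_ms`; each admitted
    /// packet consumes one token, and the bucket refills to `limit` once
    /// an interval has elapsed.
    fn admit(&mut self, now_ms: u64) -> bool {
        if now_ms.saturating_sub(self.last_refill_ms) >= self.interval_ms {
            self.tokens = self.limit;
            self.last_refill_ms = now_ms;
        }
        if self.tokens > 0 {
            self.tokens -= 1;
            true
        } else {
            false
        }
    }
}

fn main() {
    // 4 packets per 50 ms bucket: the fifth packet in a burst is held back.
    let mut bucket = TokenBucket::new(4, 50);
    let admitted = (0..5).filter(|_| bucket.admit(10)).count();
    println!("admitted {} of 5", admitted); // prints "admitted 4 of 5"
}
```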

Note that packets dropped by the fault injector still get traced; the "rx: randomly dropping a packet" message indicates that the packet above it was dropped, and the "tx: randomly dropping a packet" message indicates that the packet below it was dropped.

Packet dumps

All examples provide a --pcap option that writes a libpcap file containing a view of every packet as it is seen by smoltcp.

examples/tcpdump.rs

examples/tcpdump.rs is a tiny clone of the tcpdump utility.

Unlike the rest of the examples, it uses raw sockets, and so it can be used on regular interfaces, e.g. eth0 or wlan0, as well as the tap0 interface we've created above.

Read its source code, then run it as:

cargo build --example tcpdump
sudo ./target/debug/examples/tcpdump eth0

examples/httpclient.rs

examples/httpclient.rs emulates a network host that can initiate HTTP requests.

The host is assigned the hardware address 02-00-00-00-00-02, IPv4 address 192.168.69.1, and IPv6 address fdaa::1.

Read its source code, then run it as:

cargo run --example httpclient -- tap0 ADDRESS URL

For example:

cargo run --example httpclient -- tap0 93.184.216.34 http://example.org/

or:

cargo run --example httpclient -- tap0 2606:2800:220:1:248:1893:25c8:1946 http://example.org/

It connects to the given address (not a hostname) and URL, and prints any returned response data. The TCP socket buffers are limited to 1024 bytes to make packet traces more interesting.

examples/ping.rs

examples/ping.rs implements a minimal version of the ping utility using raw sockets.

The host is assigned the hardware address 02-00-00-00-00-02 and IPv4 address 192.168.69.1.

Read its source code, then run it as:

cargo run --example ping -- tap0 ADDRESS

It sends a series of 4 ICMP ECHO_REQUEST packets to the given address at one second intervals and prints out a status line on each valid ECHO_RESPONSE received.

The first ECHO_REQUEST packet is expected to be lost since arp_cache is empty after startup; the ECHO_REQUEST packet is dropped and an ARP request is sent instead.

Currently, netmasks are not implemented, and so the only address this example can reach is the other endpoint of the tap interface, 192.168.69.100. It cannot reach itself because packets entering a tap interface do not loop back.

examples/server.rs

examples/server.rs emulates a network host that can respond to basic requests.

The host is assigned the hardware address 02-00-00-00-00-01 and IPv4 address 192.168.69.1.

Read its source code, then run it as:

cargo run --example server -- tap0

It responds to:

  • pings (ping 192.168.69.1);
  • UDP packets on port 6969 (socat stdio udp4-connect:192.168.69.1:6969 <<<"abcdefg"), where it will respond "hello" to any incoming packet;
  • TCP connections on port 6969 (socat stdio tcp4-connect:192.168.69.1:6969), where it will respond "hello" to any incoming connection and immediately close it;
  • TCP connections on port 6970 (socat stdio tcp4-connect:192.168.69.1:6970 <<<"abcdefg"), where it will respond with reversed chunks of the input indefinitely;
  • TCP connections on port 6971 (socat stdio tcp4-connect:192.168.69.1:6971 </dev/urandom), which will sink data. Also, keep-alive packets (every 1 s) and a user timeout (at 2 s) are enabled on this port; try to trigger them using fault injection.
  • TCP connections on port 6972 (socat stdio tcp4-connect:192.168.69.1:6972 >/dev/null), which will source data.

Except for the socket on port 6971, the buffers are only 64 bytes long, for convenience of testing resource exhaustion conditions.

examples/client.rs

examples/client.rs emulates a network host that can initiate basic requests.

The host is assigned the hardware address 02-00-00-00-00-02 and IPv4 address 192.168.69.2.

Read its source code, then run it as:

cargo run --example client -- tap0 ADDRESS PORT

It connects to the given address (not a hostname) and port (e.g. socat stdio tcp4-listen:1234), and will respond with reversed chunks of the input indefinitely.

examples/benchmark.rs

examples/benchmark.rs implements a simple throughput benchmark.

Read its source code, then run it as:

cargo run --release --example benchmark -- tap0 [reader|writer]

It establishes a connection to itself from a different thread and reads or writes a large amount of data in one direction.

A typical result (achieved on an Intel Core i7-7500U CPU and a Linux 4.9.65 x86_64 kernel running on a Dell XPS 13 9360 laptop) is as follows:

$ cargo run -q --release --example benchmark tap0 reader
throughput: 2.556 Gbps
$ cargo run -q --release --example benchmark tap0 writer
throughput: 5.301 Gbps

Bare-metal usage examples

Examples that use no services from the host OS are necessarily less illustrative than examples that do. Because of this, only one such example is provided.

examples/loopback.rs

examples/loopback.rs sets up smoltcp to talk with itself via a loopback interface. Although it does not require std, this example still requires the alloc feature to run, as well as log, proto-ipv4 and socket-tcp.

Read its source code, then run it without std:

cargo run --example loopback --no-default-features --features="log proto-ipv4 socket-tcp alloc"

... or with std (in this case the features don't have to be explicitly listed):

cargo run --example loopback -- --pcap loopback.pcap

It opens a server and a client TCP socket, and transfers a chunk of data. You can examine the packet exchange by opening loopback.pcap in Wireshark.

If the std feature is enabled, it will print logs and packet dumps, and fault injection is possible; otherwise, nothing at all will be displayed and no options are accepted.

License

smoltcp is distributed under the terms of the 0-clause BSD license.

See LICENSE-0BSD for details.

Comments

  • Add support for IP mediums, v1

    Apr 16, 2020

    Opening this PR so I can get feedback earlier in case there are problems with the approach I'm taking.

    This is a first draft of #334. The goal is to split EthernetInterface into Ethernet and IP parts, so the IP parts can be reused in other non-ethernet interface types (linux tun, VPNs, PPP).

    The IP module defines the following structs:

    • Config: contains IP configuration (addresses, routes) that doesn't change when processing IP packets.
    • State: contains state that can change when processing IP packets.
    • Processor: contains a &Config and a &mut State. Most of the functions moved from ethernet are here. This is just a convenience wrapper, as most functions need the config and the state, so they don't have to be passed around as parameters all the time.

    The motivation for separating Config and State is that all the ethernet code needs read-only access to Config. This way Config can be immutably borrowed many times and shared everywhere it's needed, at the same time State is mutably borrowed to process packets. Without this I was getting borrow issues with both the ethernet and ip code needing to access the ip config.

    For packet egress, the Ethernet layer passes a struct implementing a Dispatcher trait to the IP layer, which is used to pass IP packets back to the Ethernet layer.

    Missing items:

    • Tests. I'll update them as soon as we've confirmed the code structure is OK
    • IPv6 NDISC. I don't know what to do, I have 2 ideas:
      • Have ethernet::Interface inject a Socket that handles NDISC. The socket would have to mut borrow ethernet::InnerInterface so it would have to be created on the fly every time, not sure if it's even possible.
      • Pass a process_ndisc callback to process_ipv6... ugly but should work?
    • Some random FIXMEs in the code.

    My questions:

    1. any general feedback on the taken approach, or possible improvements?
    2. Most of the functions in the Ethernet code take as param the ip::Config, passing it around is somewhat annoying. This could benefit from an abstraction similar to ip::Processor: ethernet::InterfaceInner would become ethernet::State, and there would be an ethernet::Processor with ip_config, ethernet_state. Pros: no passing around ip all the time, design is consistent with IP. Cons: it's an extra struct. What do you think?
    3. Should the IP structs be pub or pub(crate)? I guess the latter, unless we want people to implement their interfaces out-of-tree?
    4. Is the route table an ethernet concept or an IP concept? To me it looks like it is an Ethernet concept. The only use in the IP code is for AnyIP. To me it feels a bit strange that AnyIP reads the route table instead of just using the CIDRs in ip_addrs. (Also AFAICT these CIDRs aren't used for anything?). If we change it to use just ip_addrs, then routes could be moved to ethernet.
  • Information about compilation on mobile + capabilities of this stack

    May 7, 2020

    Hi! I want to integrate OpenVPN3 (official C++ OpenVPN client) into my app, which targets Android, iOS, Windows, Linux, macOS. I already did the OpenVPN3 part; I can send and receive IP packets by hand programmatically (that is, I don't use sockets at all, I simply call OpenVPN3 as a library).

    Because of that, a userspace TCP/IP stack is needed. The only ones I found for C are ports of DPDK or Linux, which are not portable at all. C++ is even worse; I could only find one that was good, but it used a LOT of preprocessing (almost everything in the lib is templated and uses lots of object composition and unnecessary C++17 things). I found a good one in OCaml, which is the MirageOS TCP/IP stack, but compiling OCaml as a library has proven to be very difficult, even more so for Android and iOS.

    Rust is very C-friendly (I guess) and so I'd be happy to be able to integrate this stack into my project, and I'm even going to release the OpenVPN3 client + the userspace stack as an open source library for anyone to use.

    What I want to know is:

    • Can this stack be easily compiled, as a library with C interface, for the devices I cited? I'm planning to call it from C++ code.

    • Can I use this stack in a completely programmatic way? I mean, I don't want to open a socket and talk with this stack. I want to be able to feed it my IP packets as uint8_t buffers and have the stack output a buffer with the TCP payload, and vice versa. In summary: no system calls to open pipes or anything like that.

    I know I can get this information from the source code, but since these questions are easy for someone familiar with the stack to answer, and since I don't know Rust yet (but I'm willing to learn once this proves viable for my project), I'm kindly asking here.

    Thank you so much and have a great day!

  • Understanding packet representations and emit

    May 24, 2020

    I'm working on @Dirbaio's pull request to make an example that uses Tun (in my case a virtual tun, not a file descriptor based tun) since he already separated the IP from Ethernet.

    Here are my commits: https://github.com/Dirbaio/smoltcp/compare/ip-interface...lucaszanella:ip-interface?expand=1 I made it compile at least, now I'm working on how to dispatch to the tun interface.

    I've come up with a dispatch_ip function that should do what @Dirbaio's dispatch_ip does but for a tun device, in my example tun.rs, not a tap one (ethernet.rs in his). Here's the function: https://github.com/lucaszanella/smoltcp/blob/c2050c3023f5e78e5f6b3dca4d20bba52d90e109/src/iface/tun.rs#L689 or simply:

    fn dispatch_ip<Tx: TxToken>(&mut self, _tx_token: Tx, _timestamp: Instant,
                          _packet: ip::Packet) -> Result<()> {
        // ...
    }

    My question is simple: can I assume that the ip packet _packet already has the payload and everything inside it? Then in order to create a buffer with the entire IP packet, I'd do:

    ip_repr.emit(my_buffer, &caps.checksum);
    let payload = &mut frame.payload_mut()[ip_repr.buffer_len()..];
    packet.emit_payload(ip_repr, payload, &caps);
    

    and then simply send my_buffer to the tun interface?

    I'm asking because on dispatch_ip for ethernet.rs:

    fn dispatch_ip<Tx: TxToken>(&mut self, tx_token: Tx, timestamp: Instant,
                              packet: ip::Packet) -> Result<()> {
            let ip_repr = packet.ip_repr().lower(&self.ip_config.ip_addrs)?;
            let caps = self.state.device_capabilities.clone();
    
            let (dst_hardware_addr, tx_token) =
                self.lookup_hardware_addr(tx_token, timestamp,
                                          &ip_repr.src_addr(), &ip_repr.dst_addr())?;
    
            self.dispatch_ethernet(tx_token, timestamp, ip_repr.total_len(), |mut frame| {
                frame.set_dst_addr(dst_hardware_addr);
                match ip_repr {
                    #[cfg(feature = "proto-ipv4")]
                    IpRepr::Ipv4(_) => frame.set_ethertype(EthernetProtocol::Ipv4),
                    #[cfg(feature = "proto-ipv6")]
                    IpRepr::Ipv6(_) => frame.set_ethertype(EthernetProtocol::Ipv6),
                    _ => return
                }
    
                ip_repr.emit(frame.payload_mut(), &caps.checksum);
    
                let payload = &mut frame.payload_mut()[ip_repr.buffer_len()..];
                packet.emit_payload(ip_repr, payload, &caps);
            })
        }
    

    it does a series of emits. I don't understand what these representations and emit things are. It looks like it's a way to fill the ethernet packet's payload with the IP stuff, then the UDP stuff, for example?

    I'm trying to understand the design of the stack. For example, if the IP packet is a UDP packet, then this packet can be emitted into a buffer. This emit process will first emit the IP header, then will emit the UDP payload. Like this:

            Packet::Udp((_, udp_repr)) =>
                    udp_repr.emit(&mut UdpPacket::new_unchecked(payload),
                                  &_ip_repr.src_addr(), &_ip_repr.dst_addr(), &caps.checksum),
    

    Why was this emit design chosen?

  • Add support for IP mediums, v2

    Jun 4, 2020

    This is a new attempt at implementing #334, addressing the shortcomings found in the first version (#336) detailed in https://github.com/smoltcp-rs/smoltcp/pull/336#issuecomment-638520316.

    So far I'm quite pleased by how it turned out. The only big structural change in Interface is checking for Device.medium() in send and receive paths and adjusting the behavior accordingly. The rest of the changes are mostly boilerplate and updating the tests.

    This includes the Linux tun work by @lucaszanella from Dirbaio/smoltcp#1, squashed and updated for the v2 changes.

    Things that need fixing:

    • The prettyprinter is broken (it doesn't take medium into account, always interprets as Ethernet)

    Possible future work:

    • Adding a medium_ip feature
    • Renaming ethernet feature to medium_ethernet for consistency?
    • Doing a PoC of instantiating an Interface with dynamic dispatch to the device, check how much improvement in binary size is there vs static dispatch.
  • The impl From<std::Instant> for smoltcp::Instant seems buggy

    Jun 7, 2020

    https://github.com/smoltcp-rs/smoltcp/blob/master/src/time.rs#L71

    While reviewing the crate a bit, I noticed this implementation, which seems a bit suspicious to me. It creates a smoltcp::time::Instant from an std::time::Instant by looking at how much time has elapsed since said instant was created? This seems super wrong to me.

    smoltcp::time::Instant is supposed to represent the number of milliseconds passed since an arbitrary point in time (e.g. be monotonically increasing, with a frequency of 1000 Hz). Calling smoltcp::Instant::from(std::Instant) twice with the same argument should always return the same value. This cannot be done with elapsed, since it returns the amount of time since the previous instant. See this playground.

    I expected From<Instant> to be implemented along the lines of:

    
    #[cfg(feature = "std")]
    impl From<::std::time::Instant> for Instant {
        fn from(other: ::std::time::Instant) -> Instant {
            lazy_static! {
                static ref REFERENTIAL: Instant = Instant::now();
            }
            let elapsed = other - *REFERENTIAL;
            Instant::from_millis((elapsed.as_secs() * 1_000) as i64 + (elapsed.subsec_nanos() / 1_000_000) as i64)
        }
    }
    
  • Possible lifetime bug for TcpSocket vs UdpSocket

    Jun 11, 2020

    I'm new to Rust, so I'm still understanding lifetimes. I had a discussion about the TcpSocket::into usage here: https://users.rust-lang.org/t/understanding-an-unconventional-lifetime-error-stuck-on-this-for-days/44093/13 but if you don't want to read it, here's the summary:

    impl<'a, 'b: 'a, 'c: 'a + 'b> TunSmolStack<'a, 'b, 'c> {
        pub fn new(interface_name: String) -> Result<TunSmolStack<'a, 'b, 'c>, u32> {
            let device = TunDevice::new("tun").unwrap();
            
            let neighbor_cache = NeighborCache::new(BTreeMap::new());
            let socket_set = SocketSet::new(vec![]);
            let mut interface = InterfaceBuilder::new(device)
                .neighbor_cache(neighbor_cache)
                .finalize();
            Ok(TunSmolStack {
                sockets: socket_set,
            })
        }
    
        pub fn add_socket(&mut self, socket_type: SocketType) -> usize {
            match socket_type {
                SocketType::TCP => {
                    let rx_buffer = TcpSocketBuffer::new(vec![0; 1024]);
                    let tx_buffer = TcpSocketBuffer::new(vec![0; 1024]);
                    let socket = TcpSocket::new(rx_buffer, tx_buffer);
                    self.sockets.add(Socket::Tcp(socket));      
                }
    

    This code was giving the following error on the line self.sockets.add(socket);:

    error[E0495]: cannot infer an appropriate lifetime for lifetime parameter `'b` due to conflicting requirements
      --> src/virtual_tun/smol_stack.rs:51:30
       |
    51 |                 self.sockets.add(socket);        
       |                              ^^^
       |
    note: first, the lifetime cannot outlive the lifetime `'b` as defined on the impl at 23:10...
      --> src/virtual_tun/smol_stack.rs:23:10
       |
    23 | impl<'a, 'b, 'c> TunSmolStack<'a, 'b, 'c> {
       |          ^^
    note: ...but the lifetime must also be valid for the lifetime `'c` as defined on the impl at 23:14...
      --> src/virtual_tun/smol_stack.rs:23:14
       |
    23 | impl<'a, 'b, 'c> TunSmolStack<'a, 'b, 'c> {
       |              ^^
    note: ...so that the types are compatible
      --> src/virtual_tun/smol_stack.rs:51:30
       |
    51 |                 self.sockets.add(socket);        
       |                              ^^^
       = note: expected  `&mut virtual_tun::smoltcp::socket::SocketSet<'_, '_, '_>`
                  found  `&mut virtual_tun::smoltcp::socket::SocketSet<'a, 'b, 'c>`
    
    

    It turns out it's because of

    impl<'a> Into<Socket<'a, 'a>> for TcpSocket<'a> {
        fn into(self) -> Socket<'a, 'a> {
            Socket::Tcp(self)
        }
    }
    

    which generates a Socket with equal lifetimes for 'b and 'c on my struct.

    I noticed UdpSocket does not have this problem:

    impl<'a, 'b> Into<Socket<'a, 'b>> for UdpSocket<'a, 'b> {
        fn into(self) -> Socket<'a, 'b> {
            Socket::Udp(self)
        }
    }
    

    Is there a reason why you did Socket<'a, 'a> instead of Socket<'a, 'b>?

    For now I'm doing this to avoid the problem:

    self.sockets.add(Socket::Tcp(socket));

  • rewrite ::phy::raw_socket

    Jan 28, 2018

    @whitequark

    Changes:

    1. Add cfg-if crate.
    2. Update libc version to latest.
    3. Make the examples httpclient/ping/server/client/benchmark work only on the Linux platform.
    4. Update example tcpdump.
    5. Add phy::LinkLayer.
    6. Rewrite phy::RawSocket and phy::sys::RawSocket.

    Problems:

    1. BPF buffer reader ( see code )
  • IGMP processing

    Mar 5, 2018

    This is on top of #177

    Once you start review, please keep in mind that this requires a rust-managed release with the ManagedMap.iter() pull request.

  • Add GC & threshold to the ARP cache.

    Jun 12, 2018

    Implementation of solution discussed in: https://github.com/m-labs/smoltcp/issues/83

    Main things to look at:

    1. I figured this only really makes sense for the ManagedMap::Owned case, so I conditionally compile for the alloc case. Let me know if this is wrong

    2. There weren't any tests in the neighbor.rs file that use the Owned case, so I had to start using BTreeMap. I'm not totally sure what the downsides of this are, so let me know if there's anything I should be aware of.

    3. The reason I needed to upgrade the version of managed is so that we could have len operator on ManagedMap.

    Manual Testing

    I did manually test this with the httpclient example (I changed around where run_gc gets run in order to make it do something) and checked that the sizes were correct.

    Thanks for taking a look at this!

  • Rename `new` method on Packet types to `new_checked`

    Jul 10, 2018

    Fixes #195.

    r? @dlrobertson, did I miss anything?

  • Add raw sockets

    Jun 16, 2017

    Problem: smoltcp doesn't support raw sockets, so low-level utilities like ping and custom IP protocols are impossible to implement.

    Solution: implement raw sockets, provide an example of their usage.

    Changes introduced by this pull request:

    • socket::udp::SocketBuffer has been factored out as generic common::RingBuffer.
    • socket::raw module has been implemented.
    • iface::EthernetInterface has been changed to support raw sockets, giving them priority over the built-in handlers.
    • The ping example has been implemented.

    Status: the ping example works as expected.

    TODOs:

    • Currently, smoltcp uses iteration over the list of sockets for packet dispatch, which is highly inefficient on platforms where standard containers like maps and hash tables are available. The existing SocketSet container should be complemented with lookup tables for raw and tcp/udp sockets.

    Other:

    • I'm using the current version of rustfmt for my code, sometimes it looks weird.
    • I've used some recent stable Rust features like field shorthands and pub(crate), so I've bumped Rust version in Readme.md to 1.18.
  • DHCPv4 client

    Apr 5, 2018

    Hi

    This is an attempt at creating a DHCP client for IPv4. Please give feedback.

    Cc: @phil-opp, this PR uses the work merged in #75.

    I've taken a completely different approach compared to #63. UDP sockets don't allow us to send and receive on unspecified/broadcast IPv4 addresses. Therefore IPv4/UDP is parsed/serialized on top of a RawSocket.

    ~~There are still two problems with RawSocketBuffer:~~

    • ~~How do I let the user provide storage? I have tried but failed to accept that as parameters in Client::new().~~ I've found the proper lifetime relation by now.
    • ~~The RawSocket is getting stuck with Error::Exhausted when contig_window is too small for the DHCP egress packet. Somehow it appears to behave like sent packets don't get dequeued from the tx_buffer. Is that known? What is my code doing wrong regarding RawSocket?~~ See #187