How the simulation is done
Constant experiment parameters
Simulations:
Same constant simulation parameters, but varying message sizes and latency:
- Message sizes: 100 bytes, 1KB, 50KB
- Latency: added with
tc qdisc add dev eth0 root netem delay 100ms 30ms distribution normal
- Every outgoing packet has a mean latency of 100ms with a std of 30ms.
- Note that, since this delay applies to every outgoing packet, it accumulates 3 times over a full exchange: IHAVE → IWANT → MESSAGE.
- Delays to receive a complete message have been observed between roughly 90ms and 1 second, depending on the message size.
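The accumulated delay can be sketched numerically. This is an illustrative calculation, not part of the simulation code, assuming each of the 3 hops (IHAVE → IWANT → MESSAGE) draws an independent normally distributed delay as configured with netem:

```python
import random

# Per-packet latency as configured with tc netem: mean 100ms, std 30ms.
MEAN_MS, STD_MS = 100.0, 30.0
HOPS = 3  # IHAVE -> IWANT -> MESSAGE

def end_to_end_delay_ms(rng: random.Random) -> float:
    """Sum of 3 independent per-hop delays (clamped at 0: a packet
    cannot be delivered before it is sent)."""
    return sum(max(0.0, rng.gauss(MEAN_MS, STD_MS)) for _ in range(HOPS))

rng = random.Random(42)
samples = [end_to_end_delay_ms(rng) for _ in range(100_000)]
mean = sum(samples) / len(samples)
# Expected analytically: mean ~ 300ms, std ~ 30 * sqrt(3) ~ 52ms
print(f"mean end-to-end delay: {mean:.1f}ms")
```

For large messages the actual transfer time adds on top of this control-message overhead, which is consistent with delays approaching 1 second for the bigger sizes.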
Variable parameters:
- nim-libp2p versions:
v1.1, v1.2, v1.3, v1.4, v1.5, v1.6, v1.7, v1.7.1, v1.8.0, v1.9.0
- multiplexers:
yamux, mplex
Simulation workflow:
- Deploy 1000 nim-libp2p nodes
- Messages are injected into the network at a rate of 1 message per second, each from a different node.
- After roughly 30-35 minutes, the simulation is stopped and data is gathered.
- For each node, we compute its average over that 30-35 minute window.
- The following plots are built from those 1000 per-node averages.
- The number shown in each plot is the median of those averages.
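The aggregation described above can be sketched as follows. This is an illustrative outline with made-up per-node measurements, not the actual data pipeline:

```python
import statistics

# Hypothetical raw data: for each of the 1000 nodes, a list of bandwidth
# samples collected during the 30-35 minute window.
raw = {f"node-{i}": [float(i % 50 + s) for s in range(10)] for i in range(1000)}

# Step 1: one average per node over its window.
per_node_avg = {node: sum(xs) / len(xs) for node, xs in raw.items()}

# Step 2: the number shown in each plot is the median of the 1000 averages.
plot_value = statistics.median(per_node_avg.values())
print(f"median of per-node averages: {plot_value}")
```

Using the median rather than the mean keeps the plotted number robust against a few outlier nodes (e.g. ones affected by worker misconfiguration).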
Notes:
IDONTWANT was introduced in nim-libp2p 1.2. The effect of IDONTWANT control messages is visible in the 50KB and 500KB plots, where bandwidth from 1.2 onwards decreases relative to 1.1.
The bandwidth increase in 1.6 is due to a Kubernetes worker misconfiguration.
nim-libp2p bandwidth plots: