How the simulation is done
Constant experiment parameters
Same constant simulation parameters as before, except for message sizes and latency:
- Message sizes: 100 bytes, 1KB, 50KB, 500KB
- For 500KB messages, the rate has been lowered to 1 message per 10 seconds
- Latency: added with
  tc qdisc add dev eth0 root netem delay 100ms 30ms distribution normal
- Every outgoing packet gets a mean latency of 100ms with a standard deviation of 30ms.
- Note that, since this applies to every outgoing packet, the delay accumulates up to 3 times for messages fetched via gossip (IHAVE → IWANT → MESSAGE); see the sketch after this list.
- Delays to receive a complete message have been observed between ~90ms and ~1 second, depending on the message size.
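As a rough sanity check on that accumulation, here is a minimal Python sketch (not part of the simulation code; the 3-leg assumption mirrors the IHAVE → IWANT → MESSAGE exchange above) that samples three independent netem delays and sums them:

```python
import random

# Each leg (IHAVE, IWANT, MESSAGE) pays one netem delay of N(100ms, 30ms),
# matching the tc configuration above.
MEAN_MS, STD_MS, LEGS, TRIALS = 100.0, 30.0, 3, 100_000

totals = []
for _ in range(TRIALS):
    # One sample per leg; clamp at 0, since netem never delivers early.
    totals.append(sum(max(0.0, random.gauss(MEAN_MS, STD_MS))
                      for _ in range(LEGS)))

totals.sort()
print(f"mean ~ {sum(totals) / TRIALS:.0f} ms")        # ~300 ms
print(f"p99  ~ {totals[int(0.99 * TRIALS)]:.0f} ms")  # ~420 ms
```

Three independent legs give a mean around 300ms with a standard deviation of about √3 · 30 ≈ 52ms. A direct mesh delivery pays only a single leg, which is consistent with the ~90ms lower bound observed, while transmission time of the larger payloads accounts for the upper end near 1 second.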
Variable parameters:
- nim-libp2p versions:
  v1.1, v1.2, v1.3, v1.4, v1.5 (commit 6f53e21d12eb98c8d0e9405fa30ef53098529b8b)
- multiplexers:
yamux, mplex
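Each run is one combination of these variables with one of the message sizes above. Assuming the full cross product is exercised, a minimal sketch of enumerating the run matrix (the run naming here is illustrative):

```python
from itertools import product

VERSIONS = ["v1.1", "v1.2", "v1.3", "v1.4",
            "v1.5-6f53e21d12eb98c8d0e9405fa30ef53098529b8b"]
MUXERS = ["yamux", "mplex"]
SIZES = [100, 1_000, 50_000, 500_000]  # bytes

# One simulation run per (version, multiplexer, message size) tuple.
for version, muxer, size in product(VERSIONS, MUXERS, SIZES):
    # 500KB runs use the lowered rate of 1 message per 10 seconds.
    rate = "1 msg/10 sec" if size == 500_000 else "1 msg/sec"
    print(f"run: {version}, {muxer}, {size} B, {rate}")
```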
Simulation workflow:
- Deploy 1000 nim-libp2p nodes.
- Messages are injected into the network at a rate of 1 message per second, from different nodes.
- After ~30-35 minutes, the simulation is stopped and the data is gathered.
- For each node, we compute its average bandwidth over those 30-35 minutes.
- The plots below are made of those 1000 per-node averages.
- The number that appears in each plot is the median of those averages (see the sketch after this list).
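A minimal sketch of that aggregation step, assuming the per-node samples have already been parsed out of the gathered data (samples_by_node and its contents are hypothetical placeholders):

```python
from statistics import mean, median

def plot_stats(samples_by_node: dict[str, list[float]]) -> tuple[list[float], float]:
    """Collapse each node's samples over the 30-35 minute window into one
    average, then take the median across nodes (the number shown on each plot)."""
    per_node_avg = [mean(samples) for samples in samples_by_node.values()]
    return per_node_avg, median(per_node_avg)

# Toy usage with fake numbers; the real input comes from 1000 nodes.
averages, med = plot_stats({
    "node-0": [1.2, 1.4, 1.3],  # e.g. bandwidth samples in MB/s
    "node-1": [0.9, 1.1, 1.0],
})
print(f"{len(averages)} per-node averages, median = {med:.2f}")
```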
Notes:
IDONTWANT was introduced in nim-libp2p v1.2. We can see the effect of IDONTWANT control messages in the 50KB and 500KB plots, where bandwidth from v1.2 onwards decreases with respect to v1.1.
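For intuition about why the effect mostly shows at larger sizes, here is a toy back-of-envelope model (this is not nim-libp2p's implementation; the mesh degree, control-message size, and suppression probabilities are all assumed numbers): an IDONTWANT can only cancel a duplicate if it reaches a peer before that peer sends the payload, a race that is easier to win the longer the payload takes to transmit.

```python
# Toy model: expected bytes a node receives per message in a mesh of
# degree D, with and without IDONTWANT. All constants are assumptions.
D = 6                 # typical gossipsub mesh degree
IDONTWANT_BYTES = 40  # assumed size of one IDONTWANT control message

# message size -> assumed chance a duplicate is cancelled in time
SUPPRESS = {100: 0.0, 1_000: 0.05, 50_000: 0.5, 500_000: 0.8}

def rx_bytes(size: int, idontwant: bool) -> float:
    dups = D - 1  # potential copies beyond the first useful delivery
    if not idontwant:
        return size * (1 + dups)
    p = SUPPRESS[size]
    # Unsuppressed duplicates still arrive in full, plus the
    # IDONTWANT control messages received from mesh peers.
    return size * (1 + dups * (1 - p)) + IDONTWANT_BYTES * dups

for size in SUPPRESS:
    before, after = rx_bytes(size, False), rx_bytes(size, True)
    print(f"{size:>7} B: {before:>10.0f} -> {after:>10.0f} ({after / before:.0%})")
```

With these made-up numbers the duplicate traffic barely moves at 100 bytes (it can even grow slightly from control overhead) but drops sharply at 50KB and 500KB, matching the direction seen in the plots.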
The initial thought was that, at 100 bytes, the difference in bandwidth was due to the small payload plus the randomness of control messages. Now that we can see the v1.2 results are a bit different, this simulation will be repeated.
nim-libp2p bandwidth plots:

Size: 100 bytes, Mesh: 1000 nodes

Size: 1KB, Mesh: 1000 nodes