How the simulation is done
Constant simulation parameters
Same constant simulation parameters as in the previous simulations, but with the following message sizes and latency:
- Message sizes: 100 bytes, 1KB, 50KB, and 500KB
- For 500KB messages, the message rate was lowered to 1 message per 10 seconds.
- Latency: added with
  tc qdisc add dev eth0 root netem delay 100ms 30ms distribution normal
- Every outgoing packet is delayed with a mean of 100ms and a standard deviation of 30ms.
- Note that since this applies to every outgoing packet, the delay accumulates three times for a gossip-pulled message: IHAVE → IWANT → MESSAGE (a back-of-envelope model of this accumulation follows the list).
- Delays to receive a complete message have been observed between roughly 90ms and ~1 second, depending on the message size.
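A minimal sketch of that accumulation, assuming independent, normally distributed per-packet delays matching the netem settings above and exactly three packets for a pulled message versus one for an eager push (the real simulator and multi-packet large messages are not modeled here):

```python
# Back-of-envelope model of how the netem delay accumulates per gossip exchange.
# Assumptions (not from the simulator): delays are independent per packet, and a
# pulled message needs exactly three packets (IHAVE -> IWANT -> MESSAGE).
import random

MEAN_MS, STD_MS = 100.0, 30.0  # matches the tc netem settings above

def one_way_delay() -> float:
    """One netem-delayed packet, clamped at 0 ms."""
    return max(0.0, random.gauss(MEAN_MS, STD_MS))

def pull_latency() -> float:
    """IHAVE -> IWANT -> MESSAGE: three delayed packets in sequence."""
    return sum(one_way_delay() for _ in range(3))

def push_latency() -> float:
    """Eager push within the mesh: a single delayed packet."""
    return one_way_delay()

samples = [pull_latency() for _ in range(10_000)]
print(f"mean pull latency ~ {sum(samples) / len(samples):.0f} ms")  # ~300 ms
```

Under these assumptions a pulled message averages around 300ms and an eager push around 100ms, which is consistent with the observed 90ms-to-1-second range once multi-packet large messages are taken into account.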
Variable parameters:
- nim-libp2p versions: v1.2, v1.3
- With 500KB, v1.1 was also used
- multiplexers: yamux, mplex
Simulation workflow:
- Deploy 1000 nim-libp2p nodes
- Messages are injected into the network at a rate of 1 message per second, from different nodes.
- After roughly 30-35 minutes, the simulation is stopped and data is gathered.
- For each node, we compute the average over that 30-35 minute window.
- The following plots are built from those 1000 per-node averages (see the aggregation sketch after this list).
- The number that appears in each plot is the median of those averages.
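A sketch of that aggregation step; the input format below is hypothetical, not the simulator's actual export:

```python
# Reduce each node's samples to one average, then take the median across nodes.
from statistics import mean, median

def summarize(per_node_samples: dict[str, list[float]]) -> float:
    """per_node_samples maps node id -> bandwidth samples over the 30-35 min run.
    Each node is reduced to its average; the plots report the median of those
    per-node averages."""
    node_averages = [mean(samples) for samples in per_node_samples.values()]
    return median(node_averages)

# Toy example with 3 nodes instead of 1000 (values are made up):
print(summarize({
    "node-0": [12.0, 15.0, 11.0],
    "node-1": [9.5, 10.5],
    "node-2": [20.0, 18.0],
}))
```

Reporting the median rather than the mean keeps a handful of outlier nodes from dominating the headline number.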
Analysis:
- In both v1.2 and v1.3, yamux tends to use more bandwidth than mplex. This is independent of the message size.
- v1.3 uses less bandwidth than v1.2 when the message size is smaller. Shouldn't this be the opposite, thanks to IDONTWANT?
- IDONTWANT was introduced in nim-libp2p v1.2. We can see the effect of IDONTWANT control messages in the 50KB and 500KB plots (a rough cost estimate follows this list).
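A rough, illustrative calculation of why IDONTWANT only pays off for larger messages; the mesh degree, number of suppressed duplicates, and IDONTWANT size below are assumptions, not measured values:

```python
# Rough arithmetic for why IDONTWANT only shows up in the 50KB/500KB plots.
D = 8                      # assumed gossipsub mesh degree
IDONTWANT_BYTES = 40       # assumed size of one IDONTWANT control message (~a message ID)
SUPPRESSED_DUPLICATES = 3  # assumed duplicate deliveries avoided per message per node

def net_savings(message_bytes: int) -> int:
    """Bytes saved per message per node: suppressed duplicate payloads minus
    the IDONTWANT messages sent to mesh peers."""
    return SUPPRESSED_DUPLICATES * message_bytes - D * IDONTWANT_BYTES

for size in (100, 1_000, 50_000, 500_000):
    print(f"{size:>7} B payload -> net savings ~ {net_savings(size):>9} B")
# Negative or negligible for 100 B payloads, large and positive for 50KB / 500KB.
```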
nim-libp2p bandwidth plots:
Size: 100 Bytes, Mesh: 1000 nodes