Simulation parameters:
Deployment process:
We first use KubeVirt to deploy between 50 and 80 virtualised Kubernetes workers. The exact number we will use going forward is still being worked out; for now we are using 75 worker nodes.
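To make the KubeVirt step concrete, here is a minimal sketch of creating the virtualised workers as KubeVirt VirtualMachine resources through the Kubernetes Python client. The namespace, image and VM sizing below are illustrative assumptions, not our actual manifests.

```python
# Sketch only: one KubeVirt VirtualMachine per virtualised worker.
# Namespace, image and sizing are assumptions for illustration.
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when run inside the cluster
api = client.CustomObjectsApi()

def worker_vm(name: str) -> dict:
    """Minimal VirtualMachine spec for one virtualised Kubernetes worker."""
    return {
        "apiVersion": "kubevirt.io/v1",
        "kind": "VirtualMachine",
        "metadata": {"name": name, "namespace": "dst-workers"},
        "spec": {
            "running": True,
            "template": {"spec": {
                "domain": {
                    "cpu": {"cores": 4},                            # assumed sizing
                    "resources": {"requests": {"memory": "16Gi"}},  # assumed sizing
                    "devices": {"disks": [{"name": "root", "disk": {"bus": "virtio"}}]},
                },
                "volumes": [{"name": "root",
                             "containerDisk": {"image": "example/k8s-worker:latest"}}],
            }},
        },
    }

# 75 workers for the current runs; the 50-80 range is still being tuned.
for i in range(75):
    api.create_namespaced_custom_object(
        group="kubevirt.io", version="v1", namespace="dst-workers",
        plural="virtualmachines", body=worker_vm(f"worker-{i:02d}"))
```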
We deploy 3 initial bootstrap nodes, followed by 30 of what we call “midstrap” nodes, followed by the remaining nodes (1K, 2K or 3K). The idea behind this staged deployment is to speed up stabilizing the mesh into a healthy state before we start injecting traffic.
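A sketch of that staged rollout, assuming each stage (bootstrap, midstrap, bulk) is a workload we can scale independently; the StatefulSet names and the wait helper are hypothetical.

```python
# Staged rollout sketch: bootstrap -> midstrap -> remaining nodes.
# StatefulSet names and the health-wait helper are hypothetical.
import time
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

def scale(name: str, replicas: int, namespace: str = "dst-nodes") -> None:
    """Scale one stage's StatefulSet to the requested replica count."""
    apps.patch_namespaced_stateful_set_scale(
        name, namespace, {"spec": {"replicas": replicas}})

def wait_until_mesh_stable() -> None:
    """Placeholder: block until every node reports the topic as healthy."""
    time.sleep(60)

STAGES = [
    ("bootstrap", 3),   # 3 initial bootstrap nodes
    ("midstrap", 30),   # 30 "midstrap" nodes
    ("bulk", 1000),     # remaining nodes; 2K and 3K runs follow the same pattern
]

for name, replicas in STAGES:
    scale(name, replicas)
    wait_until_mesh_stable()  # let the mesh settle before the next wave
```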
Once the mesh is stable (all nodes report the topic as healthy), we start the injection. The publisher injects traffic into random nodes through the headless service.
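A sketch of the stability gate and the injection loop, assuming the nodes expose an HTTP health endpoint and a relay publish endpoint behind the headless service; the service name, port, endpoint paths, topic and injection rate are all assumptions for illustration.

```python
# Health gate + traffic injection through the headless service (sketch).
# Service name, port, endpoints and topic are assumptions.
import base64, random, socket, time
import requests

SERVICE = "waku-nodes.dst-nodes.svc.cluster.local"  # assumed headless service
PORT = 8645                                         # assumed REST port
PUBSUB_TOPIC = "/waku/2/default-waku/proto"         # assumed pubsub topic

def node_ips() -> list[str]:
    """A headless service resolves to every pod IP behind it."""
    return sorted({info[4][0] for info in socket.getaddrinfo(SERVICE, PORT)})

def mesh_stable() -> bool:
    """True once every node answers its health endpoint with 200."""
    for ip in node_ips():
        try:
            if requests.get(f"http://{ip}:{PORT}/health", timeout=2).status_code != 200:
                return False
        except requests.RequestException:
            return False
    return True

while not mesh_stable():
    time.sleep(10)

# Inject messages to random nodes for the 15-minute window.
deadline = time.time() + 15 * 60
while time.time() < deadline:
    target = random.choice(node_ips())
    msg = {"payload": base64.b64encode(b"load-test").decode(),
           "contentTopic": "/dst/1/load/plain",
           "timestamp": time.time_ns()}
    requests.post(f"http://{target}:{PORT}/relay/v1/messages/"
                  f"{requests.utils.quote(PUBSUB_TOPIC, safe='')}",
                  json=msg, timeout=5)
    time.sleep(1)  # assumed rate of roughly one message per second
```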
After 15 minutes, the data is collected, saved and plotted.
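A sketch of the collect-and-plot step, assuming the run metrics are scraped by Prometheus; the Prometheus address and the metric name are placeholders.

```python
# Collect, save and plot one metric over the injection window (sketch).
# Prometheus address and metric name are placeholders.
import time
import requests
import pandas as pd
import matplotlib.pyplot as plt

PROM = "http://prometheus.monitoring:9090"   # assumed Prometheus address
QUERY = "libp2p_peers"                       # placeholder metric name

end = time.time()
start = end - 15 * 60                        # the 15-minute injection window
resp = requests.get(f"{PROM}/api/v1/query_range",
                    params={"query": QUERY, "start": start, "end": end, "step": "15s"},
                    timeout=30).json()

# Flatten the per-node series into one table and persist it.
rows = [{"instance": s["metric"].get("instance", "unknown"),
         "ts": float(ts), "value": float(val)}
        for s in resp["data"]["result"] for ts, val in s["values"]]
df = pd.DataFrame(rows)
df.to_csv("run-metrics.csv", index=False)

# One faint line per node over the window.
for _, grp in df.groupby("instance"):
    plt.plot(grp["ts"], grp["value"], alpha=0.3)
plt.xlabel("unix time (s)")
plt.ylabel(QUERY)
plt.savefig("run-metrics.png")
```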
Notes:
floodpublish: we would like to further investigate this setting.
--discv5-bootstrap-node: after further investigation, we discovered that setting --peer-exchange immensely helped network stability, as well as producing much shorter response times from the Waku nodes when responding to traffic injection. This is not represented in the experiments in the previous graph.
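For reference, a sketch of how the two flags above could appear on a node's launch command, assuming nwaku's wakunode2 binary; the ENR value and any flag other than --peer-exchange and --discv5-bootstrap-node are illustrative.

```python
# Launch-argument sketch for one node; the ENR is a placeholder.
import subprocess

BOOTSTRAP_ENR = "enr:-..."  # placeholder: the bootstrap node's ENR

args = [
    "wakunode2",
    "--relay=true",
    "--peer-exchange=true",                      # the setting that helped stability
    f"--discv5-bootstrap-node={BOOTSTRAP_ENR}",  # point discv5 at a bootstrap node
]
subprocess.run(args, check=True)
```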
Current lab specs: 4 physical machines, 448 CPU threads, RAM: 2 TiB, 140 TiB SSD
Directly participating in this test: 3 machines, 320 CPU threads, RAM: 1.5 TiB, 140 TiB SSD
(Machine “inferno” is excluded from running Waku nodes, but it does help by running the metrics, storage and query workloads.)