Analyzing nwaku:v0.35.0-rc.1, we noticed again that the deployment sometimes took too long (10 minutes) to reach a healthy state. This has been a persistent “issue” since the beginning of the nwaku experiments, but it was partially mitigated by increasing `max-connections` or the number of bootstrap nodes.
Upon further investigation, with the following setup:
/usr/bin/wakunode --rendezvous=false --relay=false --rest=true --rest-address=0.0.0.0 --max-connections=1500 --discv5-discovery=true --discv5-enr-auto-update=True --log-level=INFO --metrics-server=True --metrics-server-address=0.0.0.0 --nat=extip:$IP --cluster-id=2
/usr/bin/wakunode --filter=true --relay=true --max-connections=100 --rest=true --rest-admin=true --rest-address=0.0.0.0 --discv5-discovery=true --discv5-enr-auto-update=True --log-level=INFO --metrics-server=True --metrics-server-address=0.0.0.0 --discv5-bootstrap-node=$ENR1 --discv5-bootstrap-node=$ENR2 --discv5-bootstrap-node=$ENR3 --nat=extip:${IP} --cluster-id=2 --shard=0 --rendezvous=false
Taking into account that the `max-connections` parameter is set to 100, and that relay connections are further split into 2/3 for in and 1/3 for out, we have the following distribution:
|  | reserved | total |
|---|---|---|
| max-connections | 100 | 100 |
| relay-connections | 60 | 100 |
| service-connections | 40 | 100 |

|  | reserved | total |
|---|---|---|
| inRelayConns (from relay-connections) | 40 | 60 |
| outRelayConns (from relay-connections) | 20 | 60 |
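For reference, here is a minimal sketch of how these limits relate to each other. The 60/40 relay/service split and the 2/3 in, 1/3 out relay split are inferred from the numbers above, not taken from nwaku's source, so treat the ratios as assumptions:

```python
# Sketch of the connection-limit split implied by the tables above.
# The ratios (60/40 relay/service, 2/3 in and 1/3 out for relay) are
# inferred from this post's numbers, not from nwaku's peer manager code.
def connection_limits(max_connections: int) -> dict:
    relay = max_connections * 60 // 100      # relay-connections
    service = max_connections - relay        # service-connections
    in_relay = relay * 2 // 3                # inRelayConns
    out_relay = relay - in_relay             # outRelayConns
    return {
        "relay-connections": relay,
        "service-connections": service,
        "inRelayConns": in_relay,
        "outRelayConns": out_relay,
    }

print(connection_limits(100))
# {'relay-connections': 60, 'service-connections': 40, 'inRelayConns': 40, 'outRelayConns': 20}
```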
The nodes that cannot reach a healthy state report the following:
INF 2025-02-18 15:26:30.324+00:00 Relay peer connections topics="waku node peer_manager" tid=7 file=peer_manager.nim:767 inRelayConns=0/40 outRelayConns=20/20 totalConnections=21/100 notConnectedPeers=79 outsideBackoffPeers=79
We confirmed that the peer does not have enough full-message peers using the following metrics (see the sketch below for a quick way to inspect them):

- `libp2p_gossipsub_peers_per_topic_mesh`
- `libp2p_gossipsub_healthy_peers_topics`
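The sketch below scrapes the node's Prometheus endpoint and prints the relevant metric lines. The metrics server is enabled in the setup above, but the port used here (8008) is an assumption and should be adjusted to the actual deployment:

```python
# Print the gossipsub mesh / healthy-topic metrics from the node's
# Prometheus endpoint. Assumes --metrics-server is enabled; the port
# below is an assumption, not taken from the setup above.
import urllib.request

METRICS_URL = "http://127.0.0.1:8008/metrics"

wanted = (
    "libp2p_gossipsub_peers_per_topic_mesh",
    "libp2p_gossipsub_healthy_peers_topics",
)

with urllib.request.urlopen(METRICS_URL) as resp:
    for line in resp.read().decode().splitlines():
        if line.startswith(wanted):
            print(line)
```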
Also, we could confirm that the peer is aware of all the other peers with the following command against the node's REST API:

curl -X GET "http://127.0.0.1:8645/admin/v1/peers" -H "accept: application/json"
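To turn that response into a simple count (for example, to compare the number of known peers against the number of active connections reported in the log above), a small sketch, assuming the endpoint returns a JSON array with one entry per known peer:

```python
# Count the peers returned by the admin endpoint above. Assumes the
# response body is a JSON array with one entry per known peer.
import json
import urllib.request

ADMIN_URL = "http://127.0.0.1:8645/admin/v1/peers"

with urllib.request.urlopen(ADMIN_URL) as resp:
    peers = json.load(resp)

print(f"peers known by the node: {len(peers)}")
```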
It looks like the situation is the following: