Troubleshooting

Need help with setup? This guide assumes you've followed the Quick Start guide. For configuration issues, also check Configuration.

Comprehensive solutions for common Optimum Gateway issues, organized by category.

Is My Gateway Running?

sh
# Check if gateway responds (should show your gateway_id)
curl http://localhost:48123/metrics | grep gateway_id

# View recent logs
docker logs optimum-gateway --tail=10

If these work, your gateway is likely fine. If not, check the issues below.

Gateway Startup Issues

Container Won't Start

Problem: Gateway container exits immediately or won't start

Solution:

sh
# Check if required ports are available
lsof -i :33212 -i :33213 -i :48123

# Verify config file syntax
python -c "import yaml; yaml.safe_load(open('config/app_conf.yml'))"

# Check detailed startup logs
docker logs optimum-gateway --timestamps

Common causes:

  • Port conflicts: Another service using gateway ports
  • Invalid YAML syntax in config file
  • Missing config file or wrong volume mount path
  • Insufficient Docker resources (memory/CPU)
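
If insufficient resources are the suspect, a quick look at what Docker reports can help. This is a generic sketch using standard Docker commands; the container name optimum-gateway is taken from the examples above:

sh
# Check how much memory/CPU Docker has available
docker info | grep -iE "total memory|cpus"

# Snapshot of resource usage for running containers
docker stats --no-stream

# If the container exited immediately, inspect its exit code
docker inspect optimum-gateway --format '{{.State.ExitCode}} {{.State.Error}}'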

Config File Issues

Problem: Gateway logs show "config file not found" or "invalid config"

Solution:

sh
# Verify config file exists and has correct permissions
ls -la config/app_conf.yml

# Confirm the file is readable inside a container (checks the mount, not YAML syntax)
docker run --rm -v $(pwd)/config:/config alpine:latest cat /config/app_conf.yml

# Check volume mounting
docker inspect optimum-gateway | grep -A 5 "Mounts"

Fix:

  • Ensure config file path is correct in docker run command
  • Verify file permissions (should be readable)
  • Check YAML indentation and syntax
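
For example, a run command that satisfies these points might look like the sketch below. The in-container path /config/app_conf.yml is an assumption; match it to whatever path your gateway image actually reads:

sh
# Illustrative: mount the config file read-only at the expected path (path assumed)
docker run -d --name optimum-gateway \
  -v $(pwd)/config/app_conf.yml:/config/app_conf.yml:ro \
  getoptimum/gateway:v0.0.1-rc1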

Identity Directory Issues

Problem: Gateway recreates peer identity on every restart

Solution:

sh
# Check if identity directories are properly mounted
docker exec optimum-gateway ls -la /tmp/libp2p /tmp/optp2p

# Verify host directories exist and are writable
mkdir -p identity/libp2p identity/optp2p

Fix: Ensure identity directories are mounted as volumes to persist across container restarts
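
A minimal sketch of the corresponding mounts, assuming the gateway stores its identities under /tmp/libp2p and /tmp/optp2p as shown in the check above:

sh
# Mount host directories over the identity paths so keys survive restarts
docker run -d --name optimum-gateway \
  -v $(pwd)/identity/libp2p:/tmp/libp2p \
  -v $(pwd)/identity/optp2p:/tmp/optp2p \
  getoptimum/gateway:v0.0.1-rc1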

Normal Errors (Ignore These)

You'll see these errors in the logs from time to time; they're expected:

text
failed to connect to bootstrap node... i/o timeout
failed to send handshake for peer... connection closed

These don't affect gateway functionality.

Real Connection Issues

Missing gateway_id in config:

yaml
gateway_id: "your_unique_identifier"  # Must be set

Test proxy connectivity:

sh
# Test with actual proxy hosts (will be shared privately)
nc -zv <proxy_host> 50051  # Should connect

CL Client Connection Issues

Prysm Connection Problems

Problem: Prysm beacon node not connecting to gateway

Solution:

sh
# Count peers Prysm reports as connected (requires jq)
curl -s "http://localhost:3500/eth/v1/node/peers" | jq '[.data[] | select(.state == "connected")] | length'

# Verify Prysm is trying to connect to correct peer
curl -s "http://localhost:3500/eth/v1/node/identity"

# Check Prysm logs for connection attempts
docker logs prysm-beacon --tail 50 | grep -i peer

Common fixes:

  • Verify --peer flag has correct gateway multiaddr format
  • Ensure gateway and Prysm can reach each other on port 33212
  • Check if both are on same Docker network or use --network host
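
As a sketch, the --peer value combines the gateway's IP, port 33212, and its peer ID into a single multiaddr. The values below are placeholders taken from the example self_info output further down; substitute your own:

sh
# Illustrative Prysm flag; check your Prysm version's documentation for exact syntax
--peer=/ip4/172.17.0.4/tcp/33212/p2p/16Uiu2HAmBz6gXxrF69zgQ6YHDn2bQepLQYh3LZLGtaSb3Hfkb353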

Lighthouse Connection Problems

Problem: Lighthouse beacon node not connecting to gateway

Solution:

sh
# Check Lighthouse peer status
curl -s "http://localhost:5052/eth/v1/node/peers"

# Verify lighthouse peer configuration
curl -s "http://localhost:5052/eth/v1/node/identity"

# Check Lighthouse logs
docker logs lighthouse-beacon --tail 50 | grep -i "trusted\|peer"

Common fixes:

  • Use both --trusted-peers PEER_ID and --libp2p-addresses /ip4/IP/tcp/PORT
  • Verify peer ID matches gateway's peer ID exactly
  • Ensure Lighthouse has network access to gateway host
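
A corresponding sketch of the two flags together, using placeholder values (substitute your gateway's actual peer ID and IP):

sh
# Illustrative Lighthouse flags; check your Lighthouse version's documentation for exact syntax
--trusted-peers 16Uiu2HAmBz6gXxrF69zgQ6YHDn2bQepLQYh3LZLGtaSb3Hfkb353 \
--libp2p-addresses /ip4/172.17.0.4/tcp/33212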

Getting Correct Peer Information

Problem: You don't know the gateway's peer ID or IP address to configure your CL client

Solution:

sh
# Get gateway peer information
curl -s http://localhost:48123/api/v1/self_info

# Expected output format:
# {
#   "peer_id": "16Uiu2HAmBz6gXxrF69zgQ6YHDn2bQepLQYh3LZLGtaSb3Hfkb353",
#   "multiaddrs": ["/ip4/172.17.0.4/tcp/33212"]
# }

# Extract just the peer ID
curl -s http://localhost:48123/api/v1/self_info | jq -r '.peer_id'

Use this information to construct the correct peer multiaddr for your CL client.
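
For example, a small sketch that assembles the multiaddr from the self_info response (assumes jq is installed and that the first entry in multiaddrs is the address your CL client can reach):

sh
# Build the full peer multiaddr from the gateway's self_info endpoint
SELF_INFO=$(curl -s http://localhost:48123/api/v1/self_info)
PEER_ID=$(echo "$SELF_INFO" | jq -r '.peer_id')
ADDR=$(echo "$SELF_INFO" | jq -r '.multiaddrs[0]')
echo "${ADDR}/p2p/${PEER_ID}"
# Example result: /ip4/172.17.0.4/tcp/33212/p2p/16Uiu2HAmBz6gXxrF...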

Network & Connectivity Issues

Port Access Problems

Problem: Gateway or monitoring services can't bind to required ports

Solution:

sh
# Check what's using required ports
sudo lsof -i :33212 -i :33213 -i :48123 -i :9090 -i :3000

# Kill processes on specific ports if needed
sudo kill $(lsof -t -i:PORT_NUMBER)

# Verify ports are now available
nc -zv localhost 33212

Required open ports:

  • 33212 - CL client connections to gateway
  • 33213 - Gateway internal mumP2P protocol communication
  • 48123 - Gateway metrics endpoint
  • 9090 - Prometheus web interface
  • 3000 - Grafana web interface
  • 50051 - Outbound connections to Optimum proxies
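
To check the locally bound ports in one pass, a small loop like this can help (assumes the services run on localhost; 50051 is outbound, so test it against the proxy host instead):

sh
# Quick reachability check for each required local port
for port in 33212 33213 48123 9090 3000; do
  nc -zv localhost "$port" || echo "port $port not reachable"
done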

Docker Network Configuration

Problem: Containers can't communicate or reach each other

Solution:

sh
# Use host networking for simplest setup (recommended)
docker run --network host getoptimum/gateway:v0.0.1-rc1

# Or create custom bridge network
docker network create optimum-network
docker run --network optimum-network --name gateway getoptimum/gateway:v0.0.1-rc1

# Check container networking
docker inspect gateway | grep -A 10 "NetworkSettings"

Best practice: Use --network host to avoid networking complexity

Firewall Issues

Problem: External connections blocked by firewall

Solution:

sh
# Check firewall status
sudo ufw status
sudo iptables -L

# Allow required ports (Ubuntu/Debian)
sudo ufw allow 33212/tcp
sudo ufw allow 48123/tcp

# For monitoring (if accessing from other machines)
sudo ufw allow 3000/tcp
sudo ufw allow 9090/tcp

Wrong Fork Digest

Check your network's fork digest:

sh
# For Hoodi testnet, should be: 82556a32
curl -s "http://localhost:3500/eth/v1/beacon/genesis" | jq -r '.data.genesis_fork_version'

Update config if different:

yaml
eth_topics_subscribe:
  - /eth2/YOUR_FORK_DIGEST/beacon_block/ssz_snappy
  - /eth2/YOUR_FORK_DIGEST/beacon_aggregate_and_proof/ssz_snappy
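
If you prefer to patch the file in place, a hedged sketch like the following swaps the digest in the subscribed topics (the config path is assumed from earlier sections; -i.bak keeps a backup copy):

sh
# Illustrative: replace the existing 8-hex-character fork digest in the topic names
sed -i.bak 's#/eth2/[0-9a-f]\{8\}/#/eth2/YOUR_FORK_DIGEST/#g' config/app_conf.yml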

Telemetry & Monitoring Issues

Grafana Dashboard Not Loading

Problem: Dashboard shows "No data" or panels are empty

Solution:

sh
# 1. Check if Prometheus is scraping gateway metrics (target health via the API)
curl -s http://localhost:9090/api/v1/targets | jq '.data.activeTargets[].health'

# 2. Verify gateway is exposing metrics
curl http://localhost:48123/metrics | grep optp2p_gateway

# 3. Check Prometheus configuration
docker logs prometheus-container

Common causes:

  • Gateway telemetry disabled: Ensure telemetry_enable: true in config
  • Prometheus can't reach gateway: Check network connectivity
  • Wrong gateway IP in prometheus.yml targets
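
If the scrape target itself is the problem, the relevant prometheus.yml entry typically looks like the sketch below. The job name and target host are assumptions; point the target at wherever the gateway's port 48123 is reachable from the Prometheus container:

yaml
scrape_configs:
  - job_name: "optimum-gateway"            # illustrative name
    static_configs:
      - targets: ["<gateway_host>:48123"]  # must be reachable from Prometheus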

Prometheus "Connection Refused"

Problem: Cannot access Prometheus at localhost:9090

Solution:

sh
# Check if Prometheus container is running
docker ps | grep prometheus

# Verify port mapping
docker port prometheus-container

# Check Prometheus logs for startup errors
docker logs prometheus-container --tail 20

Fix:

  • Ensure Prometheus container has correct port mapping: 9090:9090
  • Check docker-compose.yml syntax is valid
  • Verify no firewall blocking port 9090
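
A minimal docker-compose service entry with the correct port mapping might look like this (service name, image tag, and volume path are assumptions; adjust to your compose file):

yaml
services:
  prometheus:
    image: prom/prometheus:latest
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml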

Gateway Metrics Missing Specific Data

Problem: Some panels show data, others don't (like peer composition or latency)

Solution:

sh
# Check which metrics are actually being exposed
curl http://localhost:48123/metrics | grep -E "(latency|peer|aggregation)"

# Verify gateway is connected to peers
curl http://localhost:48123/api/v1/self_info

Common causes:

  • Gateway not connected to Hoodi testnet peers yet
  • mumP2P protocol network not fully initialized
  • Missing topic subscriptions in gateway config
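
To confirm the topic subscriptions are actually present, a quick check of the config can help (the path config/app_conf.yml is assumed from earlier sections):

sh
# Show the subscribed eth topics from the gateway config
grep -A 5 "eth_topics_subscribe" config/app_conf.yml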

Grafana Login Issues

Problem: Cannot login to Grafana or forgot password

Solution:

sh
# Reset Grafana admin password
docker exec -it grafana-container grafana-cli admin reset-admin-password admin

# Or recreate Grafana container (will lose custom settings)
docker-compose down
docker-compose up grafana

Default credentials: admin/admin (can skip password change prompt)

Dashboard Panels Show "Query Error"

Problem: Red error boxes instead of charts

Solution:

sh
# Check Prometheus data source in Grafana
# 1. Go to Configuration > Data Sources
# 2. Test Prometheus connection
# 3. URL should be: http://prometheus:9090

# Verify metric names haven't changed
curl http://localhost:48123/metrics | head -20

Fix: Update dashboard queries if metric names differ from expected
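
To compare what the dashboard queries expect against what Prometheus has actually ingested, you can list the metric names through the Prometheus HTTP API (assumes jq is installed):

sh
# List ingested metric names and filter for gateway metrics
curl -s 'http://localhost:9090/api/v1/label/__name__/values' | jq -r '.data[]' | grep optp2p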

Quick Commands

sh
# Restart gateway
docker restart optimum-gateway

# View live logs  
docker logs optimum-gateway -f

# Check status
curl http://localhost:48123/metrics | grep gateway_id

# Test connectivity
nc -zv <gateway_ip> 33212

# Check monitoring stack
docker-compose -f docker-compose-monitoring.yml ps

Next Steps

Issues resolved? Continue with the rest of your setup in the Quick Start guide.

Still having issues? Double-check your Configuration settings and verify all prerequisites from Quick Start are met.