Ethereum Mainnet Node Setup Guide

This guide will help you set up an Ethereum full node on the mainnet using automated Ansible playbooks. The setup includes both execution layer (Geth) and consensus layer (Lighthouse/Prysm) clients with MEV-Boost support.

Prerequisites

System Requirements

  • Operating System: Ubuntu 20.04/22.04 LTS x64
  • CPU: 4+ cores (8+ cores recommended)
  • Memory: 32GB RAM minimum (64GB recommended for mainnet)
  • Storage:
    • Execution client: 2TB+ NVMe SSD
    • Consensus client: 500GB+ SSD
  • Network: Stable internet connection with 25+ Mbps
  • Access: Root or sudo privileges
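Before installing anything, it can help to verify the host against these minimums. The sketch below assumes GNU coreutils (`nproc`, `free`, `df`) on Ubuntu; `DATA_DIR` is a hypothetical variable you should point at your node's data mount once it exists.

```shell
#!/usr/bin/env bash
# Preflight check against the minimums listed above (a sketch, not exhaustive).
MIN_CORES=4        # 8+ recommended
MIN_RAM_GB=32      # 64 recommended for mainnet
MIN_DISK_GB=2000   # the execution client alone needs ~2TB

check() {  # check <label> <actual> <minimum>
  if [ "$2" -ge "$3" ]; then
    echo "OK   $1: $2 (minimum $3)"
  else
    echo "WARN $1: $2 is below the minimum of $3"
  fi
}

DATA_DIR=${DATA_DIR:-/}   # point this at your node's data mount, e.g. /data

check "CPU cores" "$(nproc)" "$MIN_CORES"
check "RAM (GB)"  "$(free -g | awk '/^Mem:/{print $2}')" "$MIN_RAM_GB"
check "Disk (GB)" "$(df -BG --output=avail "$DATA_DIR" | tail -1 | tr -dc 0-9)" "$MIN_DISK_GB"
```

A WARN line does not necessarily block the install, but anything under spec will show up later as slow sync or disk exhaustion.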

Network Information

| Component        | Value                            | Description                |
|------------------|----------------------------------|----------------------------|
| Network          | Ethereum Mainnet                 | Production network         |
| Execution Client | Geth v1.17.2                     | Go Ethereum implementation |
| Consensus Client | Lighthouse v8.1.3 / Prysm v7.1.3 | Beacon chain clients       |
| MEV-Boost        | v1.12                            | MEV relay sidecar          |
| Sync Mode        | Snap Sync                        | Fast synchronization       |
| Storage          | ~2TB+                            | Current mainnet size       |
tip

This guide uses automated Ansible playbooks for easy deployment and management. The setup is production-ready and includes monitoring, security, and backup configurations.

info

Client versions were checked against official GitHub releases on 2026-05-10. Keep the Ansible roles pinned to the versions above unless a client team publishes an urgent mainnet advisory.

tip

The following commands are executed as root by default. If you are not running as root, prepend the commands with sudo.

Installation Method: Ansible Automation

This method uses the ethnode-infra project for automated deployment.

Step 1: Install Prerequisites

sudo -i

# Update system packages
apt update
apt install -y git python3 python3-pip ansible openssh-server curl jq wget aria2

# Install required Ansible collections
ansible-galaxy collection install community.general
ansible-galaxy collection install ansible.posix
ansible-galaxy collection install community.docker

# Verify installations
python3 --version
ansible --version
ssh -V

Step 2: Clone the Repository

# Clone the ethnode-infra repository
git clone https://github.com/ronnynth/ethnode-infra.git
cd ethnode-infra

# Set environment variables
export ETH_NODE_DIR=$(pwd)
export NODE_NAME="ethereum-mainnet-node"
info

The ethnode-infra project provides production-ready Ansible playbooks for deploying Ethereum nodes with best practices for security, monitoring, and maintenance.

Step 3: Configure Inventory

Create an inventory file for your setup:

# Create inventory file
cat > hosts << EOF
[ethereum_nodes]
eth-node-01 ansible_host=localhost ansible_connection=local ansible_user=${USER}

[qcloud]
qcloud-eth-01 ansible_host=localhost ansible_connection=local ansible_user=${USER}
EOF

Step 4: Configure Deployment Variables

Update the deploy.yml file to match your environment:

# Edit deploy.yml
vars:
  execution_disk: /dev/sdb   # Your actual disk device
  consensus_disk: /dev/sdc   # Secondary disk, or the same device as execution
  network: mainnet           # Network selection
  checkpoint: https://mainnet.checkpoint.sigp.io

Step 5: Configure Fee Recipient (Important)

Update the fee recipient address in role variables:

# For Lighthouse
cat > roles/lighthouse/vars/main.yml << EOF
---
recipient: "0xYOUR_ETHEREUM_ADDRESS_HERE"
EOF

# For Prysm (if using)
cat > roles/prysm/vars/main.yml << EOF
---
recipient: "0xYOUR_ETHEREUM_ADDRESS_HERE"
EOF
warning

Critical: Replace 0xYOUR_ETHEREUM_ADDRESS_HERE with your actual Ethereum address to receive block rewards and MEV fees.

Step 6: Run the Deployment

Option 1: Full Automated Setup

# Deploy everything
ansible-playbook -i hosts deploy.yml

# This will:
# 1. Install system dependencies (Docker, Go, monitoring tools)
# 2. Configure disk management and mounting
# 3. Install and configure Geth (execution client)
# 4. Install and configure Lighthouse/Prysm (consensus client)
# 5. Set up MEV-Boost with multiple relays
# 6. Configure systemd services
# 7. Set up JWT authentication
# 8. Optimize system parameters

Option 2: Step-by-Step Deployment

# Deploy base system components
ansible-playbook -i hosts deploy.yml --tags "base"

# Deploy Geth execution client
ansible-playbook -i hosts deploy.yml --tags "geth"

# Deploy Lighthouse consensus client
ansible-playbook -i hosts deploy.yml --tags "lighthouse"

# Deploy MEV-Boost
ansible-playbook -i hosts deploy.yml --tags "mev"

Step 7: Start the Services

# Start Geth (execution layer)
systemctl start geth
systemctl enable geth

# Wait for Geth to sync (this can take several hours)
journalctl -u geth -f

# Start Lighthouse (consensus layer)
systemctl start lighthouse
systemctl enable lighthouse

# Start MEV-Boost
systemctl start mev-boost
systemctl enable mev-boost

# Check service status
systemctl status geth lighthouse mev-boost

Step 8: Monitor Synchronization

# Check Geth sync status
curl -X POST -H "Content-Type: application/json" \
--data '{"jsonrpc":"2.0","method":"eth_syncing","params":[],"id":1}' \
http://localhost:8545

# Check Lighthouse sync status
curl http://localhost:5052/eth/v1/node/syncing

# Check MEV-Boost status
curl http://localhost:18550/eth/v1/builder/status

# View logs
journalctl -u geth -f
journalctl -u lighthouse -f
journalctl -u mev-boost -f
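The raw eth_syncing response reports block numbers as hex strings, which is hard to read at a glance. A small helper can turn it into a progress line; this is a sketch assuming jq (installed in Step 1) and bash, and `sync_progress` is a hypothetical name:

```shell
# Convert an eth_syncing JSON-RPC response into a readable progress line.
sync_progress() {  # sync_progress '<json response>'
  local cur high
  cur=$(echo "$1" | jq -r '.result.currentBlock? // "null"')
  high=$(echo "$1" | jq -r '.result.highestBlock? // "null"')
  if [ "$cur" = "null" ]; then
    # A fully synced Geth answers with "result": false.
    echo "synced (eth_syncing returned false)"
    return 0
  fi
  # Bash arithmetic understands the 0x prefix, so the hex converts directly.
  printf 'block %d of %d (%d%%)\n' "$((cur))" "$((high))" "$(( cur * 100 / high ))"
}

# Live usage against the local node:
# sync_progress "$(curl -s -X POST -H 'Content-Type: application/json' \
#   --data '{"jsonrpc":"2.0","method":"eth_syncing","params":[],"id":1}' \
#   http://localhost:8545)"
```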

Installation Method: Docker Compose

If you prefer a lightweight, infrastructure-as-code approach without Ansible, you can deploy an Ethereum full node using Docker Compose directly. This method is ideal for operators who want full control over the container lifecycle and configuration.

Prerequisites

  • Docker Engine 24+ and Docker Compose V2 installed
  • All system requirements met
  • Ports 30303 (P2P), 9000 (P2P), and any RPC/metrics ports you expose are open in your firewall

Step 1: Prepare Project Directory

sudo -i

mkdir -p /data/ethnode/{geth,lighthouse,.jwt}
cd /data/ethnode

Step 2: Generate JWT Secret

The JWT secret is used for authenticated communication between the Execution Layer (Geth) and the Consensus Layer (Lighthouse) via the Engine API. Both clients must share the same secret.

openssl rand -hex 32 > /data/ethnode/.jwt/jwt.hex
chmod 600 /data/ethnode/.jwt/jwt.hex
warning

Do not lose or regenerate this file while both clients are running. If the JWT secret is mismatched, the consensus client will fail to communicate with the execution client, and your node will stop following the chain.
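A quick sanity check before starting the stack: the secret must be exactly 32 bytes hex-encoded, i.e. 64 hex characters. A minimal sketch (`jwt_ok` is a hypothetical helper name):

```shell
# Validate a JWT secret file: 32 random bytes hex-encoded = 64 hex characters.
jwt_ok() {  # jwt_ok <file> -> prints "valid" or "invalid"
  if grep -qiE '^[0-9a-f]{64}$' "$1" 2>/dev/null; then
    echo "valid"
  else
    echo "invalid"
  fi
}

# usage: jwt_ok /data/ethnode/.jwt/jwt.hex
```

Since both containers mount the same `/data/ethnode/.jwt` directory read-only, a valid file here guarantees both sides present the same secret.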

Step 3: Create Environment File

Create a .env file to centralize all version pins and configurable parameters:

cat > .env << 'EOF'
# ─── Client Versions ───────────────────────────────────────
EL_GETH_VERSION=v1.17.2
LIGHTHOUSE_VERSION=v8.1.3
MEV_MEVBOOST_VERSION=1.12

# ─── Network ───────────────────────────────────────────────
NETWORK=mainnet

# ─── Port Mapping (host:container) ─────────────────────────
# Expose only what you need; keep RPC behind firewall
EL_PORT_HTTP=8545 # Geth JSON-RPC HTTP
EL_PORT_WS=8546 # Geth JSON-RPC WebSocket
CL_PORT_API=5052 # Lighthouse Beacon API

# ─── Lighthouse ────────────────────────────────────────────
LIGHTHOUSE_CHECKPOINT_SYNC_URL=https://mainnet.checkpoint.sigp.io

# ─── MEV-Boost Relays ─────────────────────────────────────
MEV_RELAYS=https://0xac6e77dfe25ecd6110b8e780608cce0dab71fdd5ebea22a16c0205200f2f8e2e3ad3b71d3499c54ad14d6c21b41a37ae@boost-relay.flashbots.net,https://0xa7ab7a996c8584251c8f925da3170bdfd6ebc75d50f5ddc4050a6fdc77f2a3b5fce2cc750d0865e05d7228af97d69561@agnostic-relay.net,https://0xa55c1285d84ba83a5ad26420cd5ad3091e49c55a813eee651cd467db38a8c8e63192f47955e9376f6b42f6d190571cb5@builder-relay-mainnet.blocknative.com

# ─── Fee Recipient ─────────────────────────────────────────
FEE_RECIPIENT=0xYOUR_ETHEREUM_ADDRESS_HERE
EOF
warning

Critical: Replace 0xYOUR_ETHEREUM_ADDRESS_HERE with your actual Ethereum address to receive block rewards and MEV tips.

Environment Variables Reference

| Variable                       | Description                                             | Example                            |
|--------------------------------|---------------------------------------------------------|------------------------------------|
| EL_GETH_VERSION                | Geth Docker image tag                                   | v1.17.2                            |
| LIGHTHOUSE_VERSION             | Lighthouse Docker image tag                             | v8.1.3                             |
| MEV_MEVBOOST_VERSION           | MEV-Boost Docker image tag                              | 1.12                               |
| NETWORK                        | Target Ethereum network                                 | mainnet / holesky / sepolia        |
| EL_PORT_HTTP                   | Host port mapped to Geth HTTP RPC (8545)                | 8545                               |
| EL_PORT_WS                     | Host port mapped to Geth WebSocket RPC (8546)           | 8546                               |
| CL_PORT_API                    | Host port mapped to Lighthouse Beacon API (5052)        | 5052                               |
| LIGHTHOUSE_CHECKPOINT_SYNC_URL | Trusted checkpoint sync endpoint for fast beacon sync   | https://mainnet.checkpoint.sigp.io |
| MEV_RELAYS                     | Comma-separated list of MEV relay URLs                  | See .env above                     |
| FEE_RECIPIENT                  | Ethereum address to receive priority fees & MEV rewards | 0x...                              |
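Because a wrong fee recipient silently sends rewards elsewhere, it is worth validating the value before launch. A minimal sketch (`valid_fee_recipient` is a hypothetical name; it only checks the shape of the address, not the EIP-55 checksum casing):

```shell
# Sanity-check a fee recipient address: 0x plus exactly 40 hex characters,
# and not the unreplaced template placeholder.
valid_fee_recipient() {  # prints "ok", "placeholder", or "malformed"
  if [ "$1" = "0xYOUR_ETHEREUM_ADDRESS_HERE" ]; then
    echo "placeholder"
  elif echo "$1" | grep -qiE '^0x[0-9a-f]{40}$'; then
    echo "ok"
  else
    echo "malformed"
  fi
}

# usage: valid_fee_recipient "$(grep '^FEE_RECIPIENT=' .env | cut -d= -f2)"
```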

Step 4: Create Docker Compose File

docker-compose.yml

x-logging: &default-logging
  driver: "json-file"
  options:
    max-size: "500m"
    max-file: "5"

services:
  # ──────────────────────────────────────────────
  # Execution Layer — Geth
  # ──────────────────────────────────────────────
  el-geth:
    image: ethereum/client-go:${EL_GETH_VERSION}
    container_name: el-geth
    logging: *default-logging
    restart: unless-stopped
    ports:
      - "${EL_PORT_HTTP}:8545"   # JSON-RPC HTTP
      - "${EL_PORT_WS}:8546"     # JSON-RPC WebSocket
      - "30303:30303/tcp"        # P2P TCP
      - "30303:30303/udp"        # P2P UDP (discovery)
      - "6004:6060"              # Metrics (Prometheus)
    volumes:
      - /data/ethnode/geth:/geth/data
      - /data/ethnode/.jwt:/root/jwt:ro
    networks:
      - eth-node-net
    command: >
      --${NETWORK}
      --datadir=/geth/data
      --authrpc.jwtsecret=/root/jwt/jwt.hex
      --authrpc.addr=0.0.0.0
      --authrpc.port=8551
      --authrpc.vhosts="*"
      --http
      --http.addr=0.0.0.0
      --http.port=8545
      --http.api="eth,net,web3"
      --http.vhosts="*"
      --http.corsdomain="*"
      --ws
      --ws.addr=0.0.0.0
      --ws.port=8546
      --ws.api="eth,net,web3"
      --metrics
      --metrics.addr=0.0.0.0
      --metrics.port=6060

  # ──────────────────────────────────────────────
  # Consensus Layer — Lighthouse
  # ──────────────────────────────────────────────
  cl-lighthouse:
    image: sigp/lighthouse:${LIGHTHOUSE_VERSION}
    container_name: cl-lighthouse
    logging: *default-logging
    restart: unless-stopped
    depends_on:
      - el-geth
      - mev-mevboost
    ports:
      - "${CL_PORT_API}:5052"    # Beacon HTTP API
      - "9000:9000/tcp"          # P2P TCP
      - "9000:9000/udp"          # P2P UDP (discovery)
      - "6005:5054"              # Metrics (Prometheus)
    volumes:
      - /data/ethnode/lighthouse:/opt/app/beacon
      - /data/ethnode/.jwt:/opt/jwt:ro
    networks:
      - eth-node-net
    command: >
      lighthouse bn
      --network=${NETWORK}
      --checkpoint-sync-url=${LIGHTHOUSE_CHECKPOINT_SYNC_URL}
      --checkpoint-sync-url-timeout=600
      --execution-endpoint=http://el-geth:8551
      --execution-jwt=/opt/jwt/jwt.hex
      --datadir=/opt/app/beacon/
      --builder=http://mev-mevboost:18550
      --http
      --http-address=0.0.0.0
      --http-port=5052
      --metrics
      --metrics-address=0.0.0.0
      --metrics-port=5054
      --metrics-allow-origin="*"
      --suggested-fee-recipient=${FEE_RECIPIENT}

  # ──────────────────────────────────────────────
  # MEV-Boost — Relay Sidecar
  # ──────────────────────────────────────────────
  mev-mevboost:
    image: flashbots/mev-boost:${MEV_MEVBOOST_VERSION}
    container_name: mev-mevboost
    logging: *default-logging
    restart: unless-stopped
    networks:
      - eth-node-net
    command: >
      -${NETWORK}
      -loglevel=info
      -addr=0.0.0.0:18550
      -relay-check
      -relays=${MEV_RELAYS}

networks:
  eth-node-net:
    driver: bridge
    name: shared-eth-net

Service Architecture Explanation

Geth (Execution Layer)

Geth is the Go implementation of the Ethereum execution client. It handles:

  • Transaction processing and smart contract execution
  • State management — maintaining the world state trie
  • P2P networking — peer discovery and block propagation on port 30303
  • JSON-RPC API — provides eth, net, web3 namespaces for external queries
  • Engine API (authrpc on port 8551) — the JWT-authenticated channel used by the consensus client to deliver payloads

| Port  | Protocol  | Purpose                 | Exposure                          |
|-------|-----------|-------------------------|-----------------------------------|
| 8545  | HTTP      | JSON-RPC queries        | Host-mapped, firewall-restrict    |
| 8546  | WebSocket | Streaming subscriptions | Host-mapped, firewall-restrict    |
| 8551  | HTTP      | Engine API (JWT auth)   | Internal only (container network) |
| 30303 | TCP+UDP   | P2P peer-to-peer        | Public, required for peering      |
| 6060  | HTTP      | Prometheus metrics      | Host 6004, for monitoring         |
tip

--http.vhosts="*" and --http.corsdomain="*" are permissive for development. In production, restrict these to trusted origins or keep RPC behind a firewall / reverse proxy.

Lighthouse (Consensus Layer)

Lighthouse is a Rust-based beacon chain client developed by Sigma Prime. It handles:

  • Beacon chain finality tracking, attestation, and block proposal
  • Checkpoint sync — fast-syncs from a trusted checkpoint to avoid syncing from genesis (saves days of time)
  • Builder API — connects to MEV-Boost to request blocks from external builders for MEV extraction
  • Engine API client — sends execution payloads to Geth via the JWT-authenticated endpoint

| Port | Protocol | Purpose            | Exposure                       |
|------|----------|--------------------|--------------------------------|
| 5052 | HTTP     | Beacon API         | Host-mapped, firewall-restrict |
| 9000 | TCP+UDP  | P2P peer-to-peer   | Public, required for peering   |
| 5054 | HTTP     | Prometheus metrics | Host 6005, for monitoring      |

Key flags explained:

  • --checkpoint-sync-url — Skips syncing from genesis; downloads a recent finalized state from a trusted provider
  • --checkpoint-sync-url-timeout=600 — Allows up to 10 minutes for the checkpoint download (large on mainnet)
  • --execution-endpoint=http://el-geth:8551 — Connects to Geth via Docker internal DNS
  • --builder=http://mev-mevboost:18550 — Enables MEV-Boost for external block building
  • --suggested-fee-recipient — Default address for priority fees when proposing blocks

MEV-Boost (Relay Sidecar)

MEV-Boost is middleware that connects your validator to external block builders through relay networks, enabling maximal extractable value (MEV) optimization:

  • Receives block bids from multiple relays
  • Selects the most profitable block for your validator to propose
  • -relay-check — verifies relay connectivity at startup
  • Does not expose any ports to the host; only accessible within the eth-node-net Docker network

Networking

All three services share a single Docker bridge network shared-eth-net:

┌─ shared-eth-net (bridge) ──────────────────────────────────┐
│                                                            │
│   el-geth:8551 ◄──── Engine API (JWT) ──── cl-lighthouse   │
│                                                            │
│   mev-mevboost:18550 ◄── Builder API ──── cl-lighthouse    │
│                                                            │
└────────────────────────────────────────────────────────────┘

Inter-container communication uses Docker DNS (e.g. http://el-geth:8551), so no host ports are needed for internal APIs. Only P2P and optional RPC/metrics are mapped to the host.

Logging

The x-logging anchor configures JSON file logging for all containers:

max-size: "500m"   # Rotate after 500 MB per log file
max-file: "5"      # Keep at most 5 rotated files (≈ 2.5 GB max per container)

This prevents disk exhaustion from verbose client logs during syncing.

Step 5: Launch the Stack

cd /data/ethnode

# Start all services in detached mode
docker compose up -d

# Verify all containers are running
docker compose ps

Expected output:

NAME            IMAGE                        STATUS          PORTS
el-geth         ethereum/client-go:v1.17.2   Up 10 seconds   0.0.0.0:8545->8545/tcp, ...
cl-lighthouse   sigp/lighthouse:v8.1.3       Up 8 seconds    0.0.0.0:5052->5052/tcp, ...
mev-mevboost    flashbots/mev-boost:1.12     Up 9 seconds

Step 6: Monitor Sync Progress

# Geth sync status
curl -s -X POST -H "Content-Type: application/json" \
--data '{"jsonrpc":"2.0","method":"eth_syncing","params":[],"id":1}' \
http://localhost:8545 | jq .

# Lighthouse sync status
curl -s http://localhost:5052/eth/v1/node/syncing | jq .

# MEV-Boost health
curl -s http://localhost:18550/eth/v1/builder/status

# Follow real-time logs
docker compose logs -f el-geth
docker compose logs -f cl-lighthouse
docker compose logs -f mev-mevboost
info

Initial sync times: Geth snap sync typically takes 6–12 hours. Lighthouse checkpoint sync takes 5–30 minutes to reach head, but will backfill historical blocks in the background.
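To watch the beacon node's distance from head without eyeballing raw JSON, the syncing response can be summarized with a helper. This sketch assumes jq is installed; the field names follow the standard beacon node API that Lighthouse serves, and `beacon_sync_status` is a hypothetical name:

```shell
# Summarize a /eth/v1/node/syncing response from the beacon API.
beacon_sync_status() {  # beacon_sync_status '<json response>'
  local syncing distance
  syncing=$(echo "$1" | jq -r '.data.is_syncing')
  distance=$(echo "$1" | jq -r '.data.sync_distance')
  if [ "$syncing" = "true" ]; then
    echo "syncing, ${distance} slots behind head"
  else
    echo "synced"
  fi
}

# usage: beacon_sync_status "$(curl -s http://localhost:5052/eth/v1/node/syncing)"
```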

Docker Compose Operations

# Stop all services (preserves data)
docker compose down

# Restart a single service
docker compose restart el-geth

# Upgrade a client (update version in .env first)
docker compose pull el-geth
docker compose up -d el-geth

# View resource usage
docker stats --no-stream

# Prune old images after upgrades
docker image prune -f
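The upgrade flow above (edit .env, then pull and recreate one service) can be wrapped in a small helper. `bump_version` is a hypothetical name; the variable and service names are the ones from this guide's .env and docker-compose.yml:

```shell
# Rewrite a version pin in an env file — the first step of a client upgrade.
bump_version() {  # bump_version <env-file> <variable> <new-version>
  sed -i "s|^$2=.*|$2=$3|" "$1"
}

# usage:
# bump_version .env EL_GETH_VERSION v1.17.3
# docker compose pull el-geth && docker compose up -d el-geth
```

Keeping the pin in .env (rather than editing the compose file) means the upgrade is a one-line diff that is easy to review and roll back.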

Directory Structure (Docker Compose)

/data/ethnode/
├── .env                   # Environment variables
├── docker-compose.yml     # Service definitions
├── .jwt/
│   └── jwt.hex            # Shared JWT secret (EL ↔ CL auth)
├── geth/                  # Geth data directory
│   ├── geth/
│   │   ├── chaindata/     # Blockchain state
│   │   └── nodes/         # Peer database
│   └── keystore/          # Account keystores
└── lighthouse/            # Lighthouse data directory
    └── beacon/            # Beacon chain database
tip

To switch to Holesky testnet, simply update your .env:

NETWORK=holesky
LIGHTHOUSE_CHECKPOINT_SYNC_URL=https://holesky.checkpoint.sigp.io

Then recreate the containers: docker compose up -d --force-recreate


Architecture Overview

The deployment creates a complete Ethereum node infrastructure:

┌─────────────────────────────────────────────────────────────┐
│                Ethereum Node Infrastructure                 │
├─────────────────────────────────────────────────────────────┤
│  ┌─────────────┐   ┌──────────────┐   ┌─────────────────┐   │
│  │    Geth     │   │  Lighthouse  │   │    MEV-Boost    │   │
│  │ (Execution) │◄──┤ (Consensus)  │◄──┤     (Relay)     │   │
│  │             │   │              │   │                 │   │
│  │ Port: 8545  │   │ Port: 5052   │   │ Port: 18550     │   │
│  │ Port: 8546  │   │              │   │                 │   │
│  │ Port: 8551  │   │              │   │                 │   │
│  └─────────────┘   └──────────────┘   └─────────────────┘   │
│                                                             │
│  ┌───────────────────────────────────────────────────────┐  │
│  │                Base System Components                 │  │
│  │  • Docker                                             │  │
│  │  • Go language runtime                                │  │
│  │  • System optimization (kernel params, limits)        │  │
│  │  • Disk management and mounting                       │  │
│  │  • Monitoring tools (htop, iotop, prometheus-exporter)│  │
│  └───────────────────────────────────────────────────────┘  │
└─────────────────────────────────────────────────────────────┘

Directory Structure

After deployment, your node will have this structure:

/data/
├── execution/             # Geth execution client data
│   ├── chaindata/         # Blockchain data
│   ├── nodes/             # Network peers
│   ├── geth.ipc           # IPC socket
│   └── .jwt.hex           # JWT secret for Engine API
├── consensus/             # Lighthouse consensus client data
│   ├── beacon/            # Beacon chain data
│   ├── validators/        # Validator keys (if staking)
│   ├── logs/              # Client logs
│   └── .jwt.hex           # JWT secret for Engine API
└── mev-boost/             # MEV-Boost data
    ├── logs/              # MEV logs
    └── .jwt.hex           # JWT secret

MEV-Boost Configuration

The setup includes multiple MEV relays for optimal performance:

  • Flashbots - Most popular MEV relay
  • Titan Relay - High performance relay
  • Agnostic Relay - Decentralized relay
  • Ultra Sound Money - Community relay
  • Aestus - Professional relay
  • SecureRPC - Secure relay
  • BloxRoute - Max Profit & Regulated relays
  • Eden Network - Alternative relay

Configuration Customization

Execution Client Options

# The playbook supports multiple execution clients:
# - Geth (default) - Most popular, well-tested
# - Nethermind - Fast sync, .NET ecosystem
# - Besu - Enterprise features, Java-based
# - Erigon - Efficient storage, fast sync

Consensus Client Options

# Switch to Prysm instead of Lighthouse:
# Edit deploy.yml and replace lighthouse role with:
- role: prysm
  vars:
    base_dir: "{{ consensus_data_dir }}"
  tags: ['prysm', 'consensus']

Network Configuration

# For Mainnet (default)
network: mainnet
checkpoint: https://mainnet.checkpoint.sigp.io

# For Holesky testnet
network: holesky
checkpoint: https://holesky.checkpoint.sigp.io

# For Sepolia testnet
network: sepolia
checkpoint: https://sepolia.checkpoint.sigp.io

Service Management

Basic Operations

# Check service status
systemctl status geth
systemctl status lighthouse
systemctl status mev-boost

# View logs
journalctl -u geth -f
journalctl -u lighthouse -f
journalctl -u mev-boost -f

# Restart services
systemctl restart geth
systemctl restart lighthouse
systemctl restart mev-boost

# Stop services
systemctl stop geth lighthouse mev-boost

Updates

# Update Geth
ansible-playbook -i hosts deploy.yml --tags "update.geth"

# Update Lighthouse
ansible-playbook -i hosts deploy.yml --tags "update.lighthouse"

# Update MEV-Boost
ansible-playbook -i hosts deploy.yml --tags "update.mev"

Monitoring and Maintenance

Built-in Monitoring Tools

The playbook installs monitoring tools:

  • htop - Process monitoring
  • iotop - I/O monitoring
  • iftop - Network monitoring
  • prometheus-node-exporter - Metrics export

Performance Monitoring

# Monitor resource usage
htop
iotop
iftop

# Check disk usage
df -h /data/execution
df -h /data/consensus

# Monitor sync progress
journalctl -u geth -f | grep "Imported new chain segment"
journalctl -u lighthouse -f | grep "Synced"
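Disk exhaustion is the most common way a node silently falls out of sync, so a simple threshold check run from cron is worth having. A sketch assuming GNU `df`; `disk_alert` is a hypothetical helper name:

```shell
# Print an alert when a mount point crosses a usage threshold.
disk_alert() {  # disk_alert <mountpoint> <threshold-percent>
  local used
  used=$(df --output=pcent "$1" | tail -1 | tr -dc 0-9)
  if [ "$used" -ge "$2" ]; then
    echo "ALERT: $1 is ${used}% full (threshold $2%)"
  else
    echo "OK: $1 is ${used}% full"
  fi
}

# usage (e.g. from a cron job): disk_alert /data/execution 90
```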

Backup Considerations

Critical data to backup:

# JWT tokens
/data/execution/.jwt.hex
/data/consensus/.jwt.hex
/data/mev-boost/.jwt.hex

# Validator keys (if running validators)
/data/consensus/validators/

# Configuration files
/etc/systemd/system/geth.service
/etc/systemd/system/lighthouse.service
/etc/systemd/system/mev-boost.service
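The files above can be bundled into a dated archive with a helper like the one below (`backup_files` is a hypothetical name; pass it the paths from the list). Remember that chain data can always be re-synced, but validator keys cannot be regenerated.

```shell
# Archive a list of files/directories into a timestamped tarball.
backup_files() {  # backup_files <dest-dir> <path...>
  local dest="$1/eth-node-backup-$(date +%Y%m%d-%H%M%S).tar.gz"
  mkdir -p "$1"
  shift
  tar -czf "$dest" "$@" && echo "$dest"
}

# usage (Ansible layout from this guide):
# backup_files /backup \
#   /data/execution/.jwt.hex /data/consensus/.jwt.hex \
#   /data/consensus/validators \
#   /etc/systemd/system/geth.service \
#   /etc/systemd/system/lighthouse.service \
#   /etc/systemd/system/mev-boost.service
```

Store the resulting archive off-host; a backup on the same disk as the node does not survive a disk failure.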

Troubleshooting

Common Issues

  1. Disk not found: Update execution_disk and consensus_disk variables in deploy.yml
  2. Permission denied: Ensure SSH key authentication is working
  3. Port conflicts: Check if ports 8545, 8546, 8551, 5052, 9000, 18550 are available
  4. Sync issues: Verify network connectivity and disk space
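Port conflicts (issue 3) can be checked up front. A sketch using `ss` from iproute2; it only inspects listening TCP sockets, so a UDP-only conflict on 30303/9000 would additionally need `ss -uln`:

```shell
# Report whether a TCP port is already bound by a listening socket.
port_free() {  # port_free <port> -> prints "free" or "in use"
  if ss -tln | awk '{print $4}' | grep -qE "[:.]$1\$"; then
    echo "in use"
  else
    echo "free"
  fi
}

for p in 8545 8546 8551 5052 9000 18550 30303; do
  echo "$p: $(port_free "$p")"
done
```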

Log Analysis

# Check for errors in logs
journalctl -u geth --since "1 hour ago" | grep -i error
journalctl -u lighthouse --since "1 hour ago" | grep -i error
journalctl -u mev-boost --since "1 hour ago" | grep -i error

# Monitor sync progress
journalctl -u geth -f | grep "Imported new chain segment"
journalctl -u lighthouse -f | grep "Synced"

Performance Tuning

  1. I/O Performance: Use NVMe/SSD storage and mount the data disks with the noatime option
  2. Network: Optimize peer connections, use fast internet
  3. Memory: Consider increasing if experiencing OOM issues
  4. CPU: Modern multi-core processor recommended

Security Features

JWT Authentication

  • Engine API traffic between the execution and consensus clients is authenticated with a shared JWT secret
  • Secrets are randomly generated and unique to each deployment
  • Execution ↔ Consensus client authentication

System Hardening

  • Optimized kernel parameters for network performance
  • Increased file descriptor limits
  • Docker daemon security configuration
  • Proper file permissions and ownership

Support Resources

For help and support:


info

This setup creates a production-ready Ethereum node with MEV-Boost support, monitoring, security, and maintenance automation. The node will automatically stay synchronized with the Ethereum mainnet.

warning

Important Notes:

  • Initial sync can take 6-24 hours depending on hardware and network
  • Ensure you have adequate storage space (2TB+ recommended)
  • Monitor resource usage during initial sync
  • Keep your system updated for security
  • Always test on testnets before deploying to mainnet