Compare commits


6 Commits

Author SHA1 Message Date
6d7101bd85 Use existing L1 accounts for deployment if found (#7)
Part of [Create bridge channel in go-nitro](https://www.notion.so/Create-bridge-channel-in-go-nitro-22ce80a0d8ae4edb80020a8f250ea270)
- Use existing accounts from a volume if present instead of creating new accounts and funding them

Reviewed-on: cerc-io/fixturenet-optimism-stack#7
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
2024-09-10 13:44:49 +00:00
ab0e42a54b Add extra_hosts for l2 config generation service (#5)
Part of [Create bridge channel in go-nitro](https://www.notion.so/Create-bridge-channel-in-go-nitro-22ce80a0d8ae4edb80020a8f250ea270)

Co-authored-by: Adw8 <adwaitgharpure@gmail.com>
Reviewed-on: cerc-io/fixturenet-optimism-stack#5
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
2024-09-04 14:04:26 +00:00
c5f97dda13 Add configuration for L1 beacon endpoint and funding amounts for Optimism accounts (#4)
Part of [Create a public laconicd testnet](https://www.notion.so/Create-a-public-laconicd-testnet-896a11bdd8094eff8f1b49c0be0ca3b8)

Co-authored-by: Adw8 <adwaitgharpure@gmail.com>
Reviewed-on: cerc-io/fixturenet-optimism-stack#4
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
2024-09-03 09:42:08 +00:00
9ca108e877 Upgrade Optimism to latest release (#3)
Part of [laconicd testnet validator enrollment](https://www.notion.so/laconicd-testnet-validator-enrollment-6fc1d3cafcc64fef8c5ed3affa27c675)

- Updated Dockerfiles
- Added a separate service to generate L2 config files
  - Updated service dependency for `op-geth` and `op-node`
- Passed lighthouse beacon API endpoint to the `op-node` command
- Updated path for artifact generated by L1 contracts deployment

Co-authored-by: Adwait Gharpure <69599306+Adw8@users.noreply.github.com>
Reviewed-on: cerc-io/fixturenet-optimism-stack#3
2024-07-18 15:02:52 +00:00
89b2a7d82c Upgrade foundry version for compatibility with geth v1.14 (#2)
Part of [Update Optimism stack to use Bedrock release](https://www.notion.so/Update-Optimism-stack-to-use-Bedrock-release-e44a490247724a6095a9fbc19fba3bcd)

- Upgrade foundry version after geth version was upgraded in `fixturenet-eth-stacks`
  - cerc-io/fixturenet-eth-stacks#7

Reviewed-on: cerc-io/fixturenet-optimism-stack#2
Co-authored-by: Nabarun <nabarun@deepstacksoft.com>
Co-committed-by: Nabarun <nabarun@deepstacksoft.com>
2024-07-09 08:36:33 +00:00
002d334b98 Port over optimism stack and update to use latest releases (#1)
Part of [Update Optimism stack to use Bedrock release](https://www.notion.so/Update-Optimism-stack-to-use-Bedrock-release-e44a490247724a6095a9fbc19fba3bcd)
Requires [Add an option to allow unprotected txs in geth](cerc-io/fixturenet-eth-stacks#8)

- Port over existing `fixturenet-optimism` stack components from SO
- Use external [fixturenet-eth](https://git.vdb.to/cerc-io/fixturenet-eth-stacks/src/branch/main/stack-orchestrator/stacks/fixturenet-eth) stack for L1
- Use latest Optimism releases ([optimism@v1.7.4](https://github.com/ethereum-optimism/optimism/releases/tag/v1.7.4) and [op-geth@v1.101311.0](https://github.com/ethereum-optimism/op-geth/releases/tag/v1.101311.0))
- Update Optimism L1 contracts deployment script
- Update L2 genesis generation
- Remove override on L1 script to allow unprotected txs and unlock an account
  - `fixturenet-eth` stack itself now has an option to allow unprotected txs; the raw tx bytes for create2 proxy contract deployment from Optimism docs is not `EIP155` compatible
  - Use pk of funded account for txs instead of unlocking it
- Add updated instructions
- Use upstream [foundry](https://github.com/foundry-rs/foundry/pkgs/container/foundry/209507574?tag=nightly-267e14fab654d9ce955dce64c0eb09f01c8538ee) as base image for `cerc/optimism-contracts` container
  - Support for arm64/Apple Silicon to be handled in a follow-on PR

Reviewed-on: cerc-io/fixturenet-optimism-stack#1
Co-authored-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
Co-committed-by: Prathamesh Musale <prathamesh.musale0@gmail.com>
2024-05-07 04:18:02 +00:00
23 changed files with 1346 additions and 0 deletions

4
.gitignore vendored Normal file

@ -0,0 +1,4 @@
.idea
venv
.vscode
__pycache__


@ -1 +1,5 @@
# fixturenet-optimism-stack
L1+L2 fixturenet stack with [fixturenet-eth](https://git.vdb.to/cerc-io/fixturenet-eth-stacks/src/branch/main/stack-orchestrator/stacks/fixturenet-eth) (L1) and [Optimism](https://stack.optimism.io) (L2)
Stack documentation: [stack/fixturenet-optimism](./stack/fixturenet-optimism/README.md)


@ -0,0 +1,177 @@
version: '3.7'
services:
# Generates and funds the accounts required when setting up the L2 chain (outputs to volume l2_accounts)
# Creates / updates the configuration for L1 contracts deployment
# Deploys the L1 smart contracts (outputs to volume l1_deployment)
fixturenet-optimism-contracts:
restart: on-failure
image: cerc/optimism-contracts:local
hostname: fixturenet-optimism-contracts
env_file:
- ../config/fixturenet-optimism/l1-params.env
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_L1_CHAIN_ID: ${CERC_L1_CHAIN_ID}
CERC_L1_RPC: ${CERC_L1_RPC}
CERC_L1_ACCOUNTS_CSV_URL: ${CERC_L1_ACCOUNTS_CSV_URL}
CERC_L1_ADDRESS: ${CERC_L1_ADDRESS}
CERC_L1_PRIV_KEY: ${CERC_L1_PRIV_KEY}
CERC_PROPOSER_AMOUNT: ${CERC_PROPOSER_AMOUNT}
CERC_BATCHER_AMOUNT: ${CERC_BATCHER_AMOUNT}
volumes:
- ../config/network/wait-for-it.sh:/app/packages/contracts-bedrock/wait-for-it.sh
- ../config/fixturenet-optimism/optimism-contracts/deploy-contracts.sh:/app/packages/contracts-bedrock/deploy-contracts.sh
- l2_accounts:/l2-accounts
- l1_deployment:/l1-deployment
- l2_config:/l2-config
# Waits for L1 endpoint to be up before running the contract deploy script
command: |
"./wait-for-it.sh -h ${CERC_L1_HOST:-$${DEFAULT_CERC_L1_HOST}} -p ${CERC_L1_PORT:-$${DEFAULT_CERC_L1_PORT}} -s -t 60 -- ./deploy-contracts.sh"
extra_hosts:
- "host.docker.internal:host-gateway"
l2-config-generation:
restart: on-failure
image: cerc/optimism-op-node:local
depends_on:
fixturenet-optimism-contracts:
condition: service_completed_successfully
env_file:
- ../config/fixturenet-optimism/l1-params.env
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_L1_CHAIN_ID: ${CERC_L1_CHAIN_ID}
CERC_L1_RPC: ${CERC_L1_RPC}
volumes:
- ../config/fixturenet-optimism/generate-l2-config.sh:/generate-l2-config.sh
- l1_deployment:/l1-deployment:ro
- l2_config:/l2-config
entrypoint: "bash"
command: "/generate-l2-config.sh"
extra_hosts:
- "host.docker.internal:host-gateway"
# Initializes and runs the L2 execution client (outputs to volume l2_geth_data)
op-geth:
restart: always
image: cerc/optimism-l2geth:local
hostname: op-geth
depends_on:
l2-config-generation:
condition: service_completed_successfully
volumes:
- ../config/fixturenet-optimism/run-op-geth.sh:/run-op-geth.sh
- l2_config:/l2-config:ro
- l2_accounts:/l2-accounts:ro
- l2_geth_data:/datadir
entrypoint: "sh"
command: "/run-op-geth.sh"
ports:
- "8545"
- "8546"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost:8545"]
interval: 30s
timeout: 10s
retries: 100
start_period: 10s
extra_hosts:
- "host.docker.internal:host-gateway"
# Runs the L2 consensus client (Sequencer node)
# Reads the L2 config files generated by the l2-config-generation service (volume l2_config)
op-node:
restart: always
image: cerc/optimism-op-node:local
hostname: op-node
depends_on:
op-geth:
condition: service_healthy
env_file:
- ../config/fixturenet-optimism/l1-params.env
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_L1_RPC: ${CERC_L1_RPC}
CERC_L1_BEACON: ${CERC_L1_BEACON}
volumes:
- ../config/fixturenet-optimism/run-op-node.sh:/run-op-node.sh
- l1_deployment:/l1-deployment:ro
- l2_config:/l2-config
- l2_accounts:/l2-accounts:ro
entrypoint: "sh"
command: "/run-op-node.sh"
ports:
- "8547"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost:8547"]
interval: 30s
timeout: 10s
retries: 100
start_period: 10s
extra_hosts:
- "host.docker.internal:host-gateway"
# Runs the batcher (takes transactions from the Sequencer and publishes them to L1)
op-batcher:
restart: always
image: cerc/optimism-op-batcher:local
hostname: op-batcher
depends_on:
op-node:
condition: service_healthy
op-geth:
condition: service_healthy
env_file:
- ../config/fixturenet-optimism/l1-params.env
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_L1_RPC: ${CERC_L1_RPC}
volumes:
- ../config/network/wait-for-it.sh:/wait-for-it.sh
- ../config/fixturenet-optimism/run-op-batcher.sh:/run-op-batcher.sh
- l2_accounts:/l2-accounts:ro
entrypoint: ["sh", "-c"]
# Waits for L1 endpoint to be up before running the batcher
command: |
"/wait-for-it.sh -h ${CERC_L1_HOST:-$${DEFAULT_CERC_L1_HOST}} -p ${CERC_L1_PORT:-$${DEFAULT_CERC_L1_PORT}} -s -t 60 -- /run-op-batcher.sh"
ports:
- "8548"
extra_hosts:
- "host.docker.internal:host-gateway"
# Runs the proposer (periodically submits new state roots to L1)
op-proposer:
restart: always
image: cerc/optimism-op-proposer:local
hostname: op-proposer
depends_on:
op-node:
condition: service_healthy
op-geth:
condition: service_healthy
env_file:
- ../config/fixturenet-optimism/l1-params.env
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_L1_RPC: ${CERC_L1_RPC}
CERC_L1_CHAIN_ID: ${CERC_L1_CHAIN_ID}
volumes:
- ../config/network/wait-for-it.sh:/wait-for-it.sh
- ../config/fixturenet-optimism/run-op-proposer.sh:/run-op-proposer.sh
- l1_deployment:/l1-deployment:ro
- l2_accounts:/l2-accounts:ro
entrypoint: ["sh", "-c"]
# Waits for L1 endpoint to be up before running the proposer
command: |
"/wait-for-it.sh -h ${CERC_L1_HOST:-$${DEFAULT_CERC_L1_HOST}} -p ${CERC_L1_PORT:-$${DEFAULT_CERC_L1_PORT}} -s -t 60 -- /run-op-proposer.sh"
ports:
- "8560"
extra_hosts:
- "host.docker.internal:host-gateway"
volumes:
l1_deployment:
l2_accounts:
l2_config:
l2_geth_data:


@ -0,0 +1,43 @@
#!/bin/bash
set -e
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
CERC_L1_CHAIN_ID="${CERC_L1_CHAIN_ID:-${DEFAULT_CERC_L1_CHAIN_ID}}"
CERC_L1_RPC="${CERC_L1_RPC:-${DEFAULT_CERC_L1_RPC}}"
DEPLOYMENT_CONTEXT="$CERC_L1_CHAIN_ID"
deploy_config_file="/l2-config/$DEPLOYMENT_CONTEXT.json"
l1_deployment_file="/l1-deployment/$DEPLOYMENT_CONTEXT-deploy.json"
l2_allocs_file="/l2-config/allocs-l2.json"
genesis_outfile="/l2-config/genesis.json"
rollup_outfile="/l2-config/rollup.json"
jwt_file="/l2-config/l2-jwt.txt"
# Create a JWT secret at shared path if not found
if [ ! -f "$jwt_file" ]; then
openssl rand -hex 32 > $jwt_file
echo "Generated JWT secret at $jwt_file"
fi
# Check if genesis.json and rollup.json already exist
if [ -f "$genesis_outfile" ] && [ -f "$rollup_outfile" ]; then
echo "Skipping generation, files already exist: $genesis_outfile, $rollup_outfile"
exit 0
fi
# If files don't exist, proceed with generation
echo "Generating l2 config files..."
op-node genesis l2 \
--deploy-config "$deploy_config_file" \
--l1-deployments "$l1_deployment_file" \
--l2-allocs "$l2_allocs_file" \
--outfile.l2 "$genesis_outfile" \
--outfile.rollup "$rollup_outfile" \
--l1-rpc "$CERC_L1_RPC"
echo "Generated the l2 config files successfully"
echo "Exiting"


@ -0,0 +1,15 @@
# Defaults
# L1 endpoint
DEFAULT_CERC_L1_CHAIN_ID=1212
DEFAULT_CERC_L1_RPC="http://fixturenet-eth-geth-1:8545"
DEFAULT_CERC_L1_HOST="fixturenet-eth-geth-1"
DEFAULT_CERC_L1_PORT=8545
# URL to get CSV with credentials for accounts on L1
# that are used to send balance to Optimism Proxy contract
# (enables them to do transactions on L2)
DEFAULT_CERC_L1_ACCOUNTS_CSV_URL="http://fixturenet-eth-bootnode-geth:9898/accounts.csv"
DEFAULT_CERC_PROPOSER_AMOUNT="0.2ether"
DEFAULT_CERC_BATCHER_AMOUNT="0.1ether"
DEFAULT_CERC_L1_BEACON=http://fixturenet-eth-lighthouse-1:8001
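# Note (illustrative): the DEFAULT_CERC_* values above are fallbacks only; each
# service script reads the corresponding CERC_* variable first, e.g.
#   CERC_L1_RPC="${CERC_L1_RPC:-${DEFAULT_CERC_L1_RPC}}"
# so overrides are supplied as CERC_* variables in the deployment's config.env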


@ -0,0 +1,201 @@
#!/bin/bash
set -e
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
CERC_L1_CHAIN_ID="${CERC_L1_CHAIN_ID:-${DEFAULT_CERC_L1_CHAIN_ID}}"
CERC_L1_RPC="${CERC_L1_RPC:-${DEFAULT_CERC_L1_RPC}}"
CERC_L1_ACCOUNTS_CSV_URL="${CERC_L1_ACCOUNTS_CSV_URL:-${DEFAULT_CERC_L1_ACCOUNTS_CSV_URL}}"
CERC_PROPOSER_AMOUNT="${CERC_PROPOSER_AMOUNT:-${DEFAULT_CERC_PROPOSER_AMOUNT}}"
CERC_BATCHER_AMOUNT="${CERC_BATCHER_AMOUNT:-${DEFAULT_CERC_BATCHER_AMOUNT}}"
export DEPLOYMENT_CONTEXT="$CERC_L1_CHAIN_ID"
# Optional create2 salt for deterministic deployment of contract implementations
export IMPL_SALT=$(openssl rand -hex 32)
echo "Using L1 RPC endpoint ${CERC_L1_RPC}"
# Exit if a deployment already exists (on restarts)
if [ -f "/l1-deployment/$DEPLOYMENT_CONTEXT-deploy.json" ]; then
echo "Deployment directory /l1-deployment/$DEPLOYMENT_CONTEXT-deploy.json, checking OptimismPortal deployment"
OPTIMISM_PORTAL_ADDRESS=$(cat /l1-deployment/$DEPLOYMENT_CONTEXT-deploy.json | jq -r .OptimismPortal)
contract_code=$(cast code $OPTIMISM_PORTAL_ADDRESS --rpc-url $CERC_L1_RPC)
if [ -z "${contract_code#0x}" ]; then
echo "Error: A deployment directory was found in the volume, but no contract code was found on-chain at the associated address. Please clear L1 deployment volume before restarting."
exit 1
else
echo "Deployment found, exiting (successfully)."
exit 0
fi
fi
wait_for_block() {
local block="$1" # Block to wait for
local timeout="$2" # Max time to wait in seconds
echo "Waiting for block $block."
i=0
loops=$(($timeout/10))
while [ -z "$block_result" ] && [[ "$i" -lt "$loops" ]]; do
sleep 10
echo "Checking..."
block_result=$(cast block $block --rpc-url $CERC_L1_RPC | grep -E "(timestamp|hash|number)" || true)
i=$(($i + 1))
done
}
# We need four accounts and their private keys for the deployment: Admin, Proposer, Batcher, and Sequencer
# Check if accounts file already exists
l2_accounts_file="/l2-accounts/accounts.json"
if [ -f $l2_accounts_file ]; then
echo "Using existing accounts from $l2_accounts_file."
ADMIN=$(jq -r .Admin $l2_accounts_file)
ADMIN_KEY=$(jq -r .AdminKey $l2_accounts_file)
# Proposer
PROPOSER=$(jq -r .Proposer $l2_accounts_file)
PROPOSER_KEY=$(jq -r .ProposerKey $l2_accounts_file)
# Batcher
BATCHER=$(jq -r .Batcher $l2_accounts_file)
BATCHER_KEY=$(jq -r .BatcherKey $l2_accounts_file)
# Sequencer
SEQ=$(jq -r .Seq $l2_accounts_file)
SEQ_KEY=$(jq -r .SeqKey $l2_accounts_file)
else
# If $CERC_L1_ADDRESS and $CERC_L1_PRIV_KEY have been set, we'll assign it to Admin and generate/fund the remaining three accounts from it
# If not, we'll assume the L1 is the stack's own fixturenet-eth and use the pre-funded accounts/keys from $CERC_L1_ACCOUNTS_CSV_URL
if [ -n "$CERC_L1_ADDRESS" ] && [ -n "$CERC_L1_PRIV_KEY" ]; then
echo "Creating new accounts for Optimism deployment."
wallet1=$(cast wallet new)
wallet2=$(cast wallet new)
wallet3=$(cast wallet new)
# Admin
ADMIN=$CERC_L1_ADDRESS
ADMIN_KEY=$CERC_L1_PRIV_KEY
# Proposer
PROPOSER=$(echo "$wallet1" | awk '/Address:/{print $2}')
PROPOSER_KEY=$(echo "$wallet1" | awk '/Private key:/{print $3}')
# Batcher
BATCHER=$(echo "$wallet2" | awk '/Address:/{print $2}')
BATCHER_KEY=$(echo "$wallet2" | awk '/Private key:/{print $3}')
# Sequencer
SEQ=$(echo "$wallet3" | awk '/Address:/{print $2}')
SEQ_KEY=$(echo "$wallet3" | awk '/Private key:/{print $3}')
echo "Funding accounts..."
wait_for_block 1 300
cast send --from $ADMIN --rpc-url $CERC_L1_RPC --value $CERC_PROPOSER_AMOUNT $PROPOSER --private-key $ADMIN_KEY
cast send --from $ADMIN --rpc-url $CERC_L1_RPC --value $CERC_BATCHER_AMOUNT $BATCHER --private-key $ADMIN_KEY
else
curl -o accounts.csv $CERC_L1_ACCOUNTS_CSV_URL
# Admin
ADMIN=$(awk -F ',' 'NR == 1 {print $2}' accounts.csv)
ADMIN_KEY=$(awk -F ',' 'NR == 1 {print $3}' accounts.csv)
# Proposer
PROPOSER=$(awk -F ',' 'NR == 2 {print $2}' accounts.csv)
PROPOSER_KEY=$(awk -F ',' 'NR == 2 {print $3}' accounts.csv)
# Batcher
BATCHER=$(awk -F ',' 'NR == 3 {print $2}' accounts.csv)
BATCHER_KEY=$(awk -F ',' 'NR == 3 {print $3}' accounts.csv)
# Sequencer
SEQ=$(awk -F ',' 'NR == 4 {print $2}' accounts.csv)
SEQ_KEY=$(awk -F ',' 'NR == 4 {print $3}' accounts.csv)
fi
# These accounts will be needed by other containers, so write them to a shared volume
echo "Writing accounts/private keys to volume l2_accounts."
accounts_json=$(jq -n \
--arg Admin "$ADMIN" --arg AdminKey "$ADMIN_KEY" \
--arg Proposer "$PROPOSER" --arg ProposerKey "$PROPOSER_KEY" \
--arg Batcher "$BATCHER" --arg BatcherKey "$BATCHER_KEY" \
--arg Seq "$SEQ" --arg SeqKey "$SEQ_KEY" \
'{Admin: $Admin, AdminKey: $AdminKey, Proposer: $Proposer, ProposerKey: $ProposerKey, Batcher: $Batcher, BatcherKey: $BatcherKey, Seq: $Seq, SeqKey: $SeqKey}')
echo "$accounts_json" > $l2_accounts_file
fi
echo "Using accounts:"
echo -e "Admin: $ADMIN\nProposer: $PROPOSER\nBatcher: $BATCHER\nSequencer: $SEQ"
# Get a finalized L1 block to set as the starting point for the L2 deployment
# If the chain is a freshly created fixturenet-eth, a finalized block won't be available for many minutes; rather than wait, we can use block 1
echo "Checking L1 for finalized block..."
finalized=$(cast block finalized --rpc-url $CERC_L1_RPC | grep -E "(timestamp|hash|number)" || true)
config_build_script="scripts/getting-started/config.sh"
if [ -z "$finalized" ]; then
# assume fresh chain and use block 1 instead
echo "No finalized block. Using block 1 instead."
# wait for 20 or so blocks to be safe
wait_for_block 24 300
# Replace how block is calculated in the config building script
sed -i 's/block=.*/block=$(cast block 1 --rpc-url $L1_RPC_URL)/g' $config_build_script
fi
# Generate the deploy-config/getting-started.json file
GS_ADMIN_ADDRESS=$ADMIN \
GS_BATCHER_ADDRESS=$BATCHER \
GS_PROPOSER_ADDRESS=$PROPOSER \
GS_SEQUENCER_ADDRESS=$SEQ \
L1_RPC_URL=$CERC_L1_RPC \
L1_CHAIN_ID=$CERC_L1_CHAIN_ID \
L2_CHAIN_ID=42069 \
L1_BLOCK_TIME=12 \
L2_BLOCK_TIME=2 \
$config_build_script
echo "Writing deployment config."
deploy_config_file="deploy-config/$DEPLOYMENT_CONTEXT.json"
cp deploy-config/getting-started.json $deploy_config_file
# Update generated config
# 1. Update L1 chain id
# 2. Add missing faultGameWithdrawalDelay field
# (see issue: https://github.com/ethereum-optimism/optimism/issues/9773#issuecomment-2080224969)
mkdir -p deployments/$DEPLOYMENT_CONTEXT
# Deployment requires the create2 deterministic proxy contract be published on L1 at address 0x4e59b44847b379578588920ca78fbf26c0b4956c
# See: https://github.com/Arachnid/deterministic-deployment-proxy
create2CodeSize=$(cast codesize 0x4e59b44847b379578588920cA78FbF26c0B4956C --rpc-url $CERC_L1_RPC)
if [ "$create2CodeSize" -eq 0 ]; then
echo "Deploying create2 proxy contract..."
echo "Funding deployment signer address"
deployment_signer="0x3fab184622dc19b6109349b94811493bf2a45362"
cast send --from $ADMIN --rpc-url $CERC_L1_RPC --value 0.5ether $deployment_signer --private-key $ADMIN_KEY
raw_bytes="0xf8a58085174876e800830186a08080b853604580600e600039806000f350fe7fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffe03601600081602082378035828234f58015156039578182fd5b8082525050506014600cf31ba02222222222222222222222222222222222222222222222222222222222222222a02222222222222222222222222222222222222222222222222222222222222222"
cast publish --rpc-url $CERC_L1_RPC $raw_bytes
fi
# Create the L2 deployment
# Writes out artifact to /app/packages/contracts-bedrock/deployments/$DEPLOYMENT_CONTEXT-deploy.json
echo "Deploying L1 Optimism contracts..."
DEPLOY_CONFIG_PATH=$deploy_config_file forge script scripts/Deploy.s.sol:Deploy --private-key $ADMIN_KEY --broadcast --rpc-url $CERC_L1_RPC
echo "Done deploying contracts."
echo "Generating L2 genesis allocs..."
L2_CHAIN_ID=$(jq ".l2ChainID" $deploy_config_file)
DEPLOY_CONFIG_PATH=$deploy_config_file \
CONTRACT_ADDRESSES_PATH="deployments/$DEPLOYMENT_CONTEXT-deploy.json" \
forge script --chain-id $L2_CHAIN_ID scripts/L2Genesis.s.sol:L2Genesis --sig 'runWithAllUpgrades()' --private-key $ADMIN_KEY
cp /app/packages/contracts-bedrock/state-dump-$L2_CHAIN_ID.json allocs-l2.json
echo "Done."
echo "*************************************"
# Copy files needed by other containers to the appropriate shared volumes
echo "Copying deployment artifacts to volume l1_deployment and deploy-config to volume l2_config"
cp /app/packages/contracts-bedrock/deployments/$DEPLOYMENT_CONTEXT-deploy.json /l1-deployment
cp /app/packages/contracts-bedrock/deploy-config/$DEPLOYMENT_CONTEXT.json /l2-config
cp allocs-l2.json /l2-config
echo "Deployment successful. Exiting"


@ -0,0 +1,27 @@
#!/bin/sh
set -e
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
CERC_L1_RPC="${CERC_L1_RPC:-${DEFAULT_CERC_L1_RPC}}"
# Start op-batcher
L2_RPC="http://op-geth:8545"
ROLLUP_RPC="http://op-node:8547"
BATCHER_KEY=$(cat /l2-accounts/accounts.json | jq -r .BatcherKey)
op-batcher \
--l2-eth-rpc=$L2_RPC \
--rollup-rpc=$ROLLUP_RPC \
--poll-interval=1s \
--sub-safety-margin=6 \
--num-confirmations=1 \
--safe-abort-nonce-too-low-count=3 \
--resubmission-timeout=30s \
--rpc.addr=0.0.0.0 \
--rpc.port=8548 \
--rpc.enable-admin \
--max-channel-duration=1 \
--l1-eth-rpc=$CERC_L1_RPC \
--private-key="${BATCHER_KEY#0x}"


@ -0,0 +1,56 @@
#!/bin/sh
set -e
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
l2_genesis_file="/l2-config/genesis.json"
# Check for genesis file; if necessary, wait on op-node to generate
timeout=300 # 5 minutes
start_time=$(date +%s)
elapsed_time=0
echo "Checking for L2 genesis file at location $l2_genesis_file"
while [ ! -f "$l2_genesis_file" ] && [ $elapsed_time -lt $timeout ]; do
echo "Waiting for L2 genesis file to be generated..."
sleep 10
current_time=$(date +%s)
elapsed_time=$((current_time - start_time))
done
if [ ! -f "$l2_genesis_file" ]; then
echo "L2 genesis file not found after timeout of $timeout seconds. Exiting..."
exit 1
fi
# Initialize geth from our generated L2 genesis file (if not already initialized)
data_dir="/datadir"
if [ ! -d "$datadir/geth" ]; then
geth init --datadir=$data_dir $l2_genesis_file
fi
# Start op-geth
jwt_file="/l2-config/l2-jwt.txt"
geth \
--datadir=$data_dir \
--http \
--http.corsdomain="*" \
--http.vhosts="*" \
--http.addr=0.0.0.0 \
--http.api=web3,debug,eth,txpool,net,engine \
--ws \
--ws.addr=0.0.0.0 \
--ws.port=8546 \
--ws.origins="*" \
--ws.api=debug,eth,txpool,net,engine \
--syncmode=full \
--gcmode=archive \
--nodiscover \
--maxpeers=0 \
--networkid=42069 \
--authrpc.vhosts="*" \
--authrpc.addr=0.0.0.0 \
--authrpc.port=8551 \
--authrpc.jwtsecret=$jwt_file \
--rollup.disabletxpoolgossip=true


@ -0,0 +1,38 @@
#!/bin/sh
set -e
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
CERC_L1_CHAIN_ID="${CERC_L1_CHAIN_ID:-${DEFAULT_CERC_L1_CHAIN_ID}}"
CERC_L1_RPC="${CERC_L1_RPC:-${DEFAULT_CERC_L1_RPC}}"
DEPLOYMENT_CONTEXT="$CERC_L1_CHAIN_ID"
CERC_L1_BEACON="${CERC_L1_BEACON:-${DEFAULT_CERC_L1_BEACON}}"
deploy_config_file="/l2-config/$DEPLOYMENT_CONTEXT.json"
l1_deployment_file="/l1-deployment/$DEPLOYMENT_CONTEXT-deploy.json"
l2_allocs_file="/l2-config/allocs-l2.json"
genesis_outfile="/l2-config/genesis.json"
rollup_outfile="/l2-config/rollup.json"
# Start op-node
SEQ_KEY=$(cat /l2-accounts/accounts.json | jq -r .SeqKey)
jwt_file=/l2-config/l2-jwt.txt
L2_AUTH="http://op-geth:8551"
RPC_KIND=any # this can optionally be set to a preset for common node providers like Infura, Alchemy, etc.
op-node \
--l2=$L2_AUTH \
--l2.jwt-secret=$jwt_file \
--sequencer.enabled \
--sequencer.l1-confs=5 \
--verifier.l1-confs=4 \
--rollup.config=$rollup_outfile \
--rpc.addr=0.0.0.0 \
--rpc.port=8547 \
--p2p.disable \
--rpc.enable-admin \
--p2p.sequencer.key="${SEQ_KEY#0x}" \
--l1=$CERC_L1_RPC \
--l1.rpckind=$RPC_KIND \
--l1.beacon=$CERC_L1_BEACON


@ -0,0 +1,22 @@
#!/bin/sh
set -e
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
CERC_L1_RPC="${CERC_L1_RPC:-${DEFAULT_CERC_L1_RPC}}"
CERC_L1_CHAIN_ID="${CERC_L1_CHAIN_ID:-${DEFAULT_CERC_L1_CHAIN_ID}}"
DEPLOYMENT_CONTEXT="$CERC_L1_CHAIN_ID"
# Start op-proposer
ROLLUP_RPC="http://op-node:8547"
PROPOSER_KEY=$(cat /l2-accounts/accounts.json | jq -r .ProposerKey)
L2OO_ADDR=$(cat /l1-deployment/$DEPLOYMENT_CONTEXT-deploy.json | jq -r .L2OutputOracleProxy)
op-proposer \
--poll-interval=12s \
--rpc.port=8560 \
--rollup-rpc=$ROLLUP_RPC \
--l2oo-address="${L2OO_ADDR#0x}" \
--private-key="${PROPOSER_KEY#0x}" \
--l1-eth-rpc=$CERC_L1_RPC

182
config/network/wait-for-it.sh Executable file

@ -0,0 +1,182 @@
#!/usr/bin/env bash
# Use this script to test if a given TCP host/port are available
WAITFORIT_cmdname=${0##*/}
echoerr() { if [[ $WAITFORIT_QUIET -ne 1 ]]; then echo "$@" 1>&2; fi }
usage()
{
cat << USAGE >&2
Usage:
$WAITFORIT_cmdname host:port [-s] [-t timeout] [-- command args]
-h HOST | --host=HOST Host or IP under test
-p PORT | --port=PORT TCP port under test
Alternatively, you specify the host and port as host:port
-s | --strict Only execute subcommand if the test succeeds
-q | --quiet Don't output any status messages
-t TIMEOUT | --timeout=TIMEOUT
Timeout in seconds, zero for no timeout
-- COMMAND ARGS Execute command with args after the test finishes
USAGE
exit 1
}
wait_for()
{
if [[ $WAITFORIT_TIMEOUT -gt 0 ]]; then
echoerr "$WAITFORIT_cmdname: waiting $WAITFORIT_TIMEOUT seconds for $WAITFORIT_HOST:$WAITFORIT_PORT"
else
echoerr "$WAITFORIT_cmdname: waiting for $WAITFORIT_HOST:$WAITFORIT_PORT without a timeout"
fi
WAITFORIT_start_ts=$(date +%s)
while :
do
if [[ $WAITFORIT_ISBUSY -eq 1 ]]; then
nc -z $WAITFORIT_HOST $WAITFORIT_PORT
WAITFORIT_result=$?
else
(echo -n > /dev/tcp/$WAITFORIT_HOST/$WAITFORIT_PORT) >/dev/null 2>&1
WAITFORIT_result=$?
fi
if [[ $WAITFORIT_result -eq 0 ]]; then
WAITFORIT_end_ts=$(date +%s)
echoerr "$WAITFORIT_cmdname: $WAITFORIT_HOST:$WAITFORIT_PORT is available after $((WAITFORIT_end_ts - WAITFORIT_start_ts)) seconds"
break
fi
sleep 1
done
return $WAITFORIT_result
}
wait_for_wrapper()
{
# In order to support SIGINT during timeout: http://unix.stackexchange.com/a/57692
if [[ $WAITFORIT_QUIET -eq 1 ]]; then
timeout $WAITFORIT_BUSYTIMEFLAG $WAITFORIT_TIMEOUT $0 --quiet --child --host=$WAITFORIT_HOST --port=$WAITFORIT_PORT --timeout=$WAITFORIT_TIMEOUT &
else
timeout $WAITFORIT_BUSYTIMEFLAG $WAITFORIT_TIMEOUT $0 --child --host=$WAITFORIT_HOST --port=$WAITFORIT_PORT --timeout=$WAITFORIT_TIMEOUT &
fi
WAITFORIT_PID=$!
trap "kill -INT -$WAITFORIT_PID" INT
wait $WAITFORIT_PID
WAITFORIT_RESULT=$?
if [[ $WAITFORIT_RESULT -ne 0 ]]; then
echoerr "$WAITFORIT_cmdname: timeout occurred after waiting $WAITFORIT_TIMEOUT seconds for $WAITFORIT_HOST:$WAITFORIT_PORT"
fi
return $WAITFORIT_RESULT
}
# process arguments
while [[ $# -gt 0 ]]
do
case "$1" in
*:* )
WAITFORIT_hostport=(${1//:/ })
WAITFORIT_HOST=${WAITFORIT_hostport[0]}
WAITFORIT_PORT=${WAITFORIT_hostport[1]}
shift 1
;;
--child)
WAITFORIT_CHILD=1
shift 1
;;
-q | --quiet)
WAITFORIT_QUIET=1
shift 1
;;
-s | --strict)
WAITFORIT_STRICT=1
shift 1
;;
-h)
WAITFORIT_HOST="$2"
if [[ $WAITFORIT_HOST == "" ]]; then break; fi
shift 2
;;
--host=*)
WAITFORIT_HOST="${1#*=}"
shift 1
;;
-p)
WAITFORIT_PORT="$2"
if [[ $WAITFORIT_PORT == "" ]]; then break; fi
shift 2
;;
--port=*)
WAITFORIT_PORT="${1#*=}"
shift 1
;;
-t)
WAITFORIT_TIMEOUT="$2"
if [[ $WAITFORIT_TIMEOUT == "" ]]; then break; fi
shift 2
;;
--timeout=*)
WAITFORIT_TIMEOUT="${1#*=}"
shift 1
;;
--)
shift
WAITFORIT_CLI=("$@")
break
;;
--help)
usage
;;
*)
echoerr "Unknown argument: $1"
usage
;;
esac
done
if [[ "$WAITFORIT_HOST" == "" || "$WAITFORIT_PORT" == "" ]]; then
echoerr "Error: you need to provide a host and port to test."
usage
fi
WAITFORIT_TIMEOUT=${WAITFORIT_TIMEOUT:-15}
WAITFORIT_STRICT=${WAITFORIT_STRICT:-0}
WAITFORIT_CHILD=${WAITFORIT_CHILD:-0}
WAITFORIT_QUIET=${WAITFORIT_QUIET:-0}
# Check to see if timeout is from busybox?
WAITFORIT_TIMEOUT_PATH=$(type -p timeout)
WAITFORIT_TIMEOUT_PATH=$(realpath $WAITFORIT_TIMEOUT_PATH 2>/dev/null || readlink -f $WAITFORIT_TIMEOUT_PATH)
WAITFORIT_BUSYTIMEFLAG=""
if [[ $WAITFORIT_TIMEOUT_PATH =~ "busybox" ]]; then
WAITFORIT_ISBUSY=1
# Check if busybox timeout uses -t flag
# (recent Alpine versions don't support -t anymore)
if timeout &>/dev/stdout | grep -q -e '-t '; then
WAITFORIT_BUSYTIMEFLAG="-t"
fi
else
WAITFORIT_ISBUSY=0
fi
if [[ $WAITFORIT_CHILD -gt 0 ]]; then
wait_for
WAITFORIT_RESULT=$?
exit $WAITFORIT_RESULT
else
if [[ $WAITFORIT_TIMEOUT -gt 0 ]]; then
wait_for_wrapper
WAITFORIT_RESULT=$?
else
wait_for
WAITFORIT_RESULT=$?
fi
fi
if [[ $WAITFORIT_CLI != "" ]]; then
if [[ $WAITFORIT_RESULT -ne 0 && $WAITFORIT_STRICT -eq 1 ]]; then
echoerr "$WAITFORIT_cmdname: strict mode, refusing to execute subprocess"
exit $WAITFORIT_RESULT
fi
exec "${WAITFORIT_CLI[@]}"
else
exit $WAITFORIT_RESULT
fi


@ -0,0 +1,24 @@
# TODO: Make work for arm64/Apple Silicon
FROM ghcr.io/foundry-rs/foundry:nightly-c4a984fbf2c48b793c8cd53af84f56009dd1070c
RUN apk update
# Install node (use edge repo to get latest version)
RUN apk add --update --no-cache curl wget bash git busybox jq openssl \
&& apk add nodejs --update-cache --repository http://dl-cdn.alpinelinux.org/alpine/edge/main --allow-untrusted \
&& apk add npm \
&& node -v
# Add corepack for yarn and pnpm
RUN npm install -g corepack && corepack enable \
&& yarn --version
WORKDIR /app
# Copy optimism repo contents
COPY . .
RUN echo "Building optimism" && \
pnpm install && pnpm build
WORKDIR /app/packages/contracts-bedrock


@ -0,0 +1,9 @@
#!/usr/bin/env bash
# Build cerc/optimism-contracts
source ${CERC_CONTAINER_BASE_DIR}/build-base.sh
# See: https://stackoverflow.com/a/246128/1701505
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
docker build -t cerc/optimism-contracts:local -f ${SCRIPT_DIR}/Dockerfile ${build_command_args} ${CERC_REPO_BASE_DIR}/optimism


@ -0,0 +1,7 @@
#!/usr/bin/env bash
# Build cerc/optimism-l2geth
source ${CERC_CONTAINER_BASE_DIR}/build-base.sh
docker build -t cerc/optimism-l2geth:local ${build_command_args} ${CERC_REPO_BASE_DIR}/op-geth


@ -0,0 +1,31 @@
FROM golang:1.21.0-alpine3.18 as builder
ARG VERSION=v0.0.0
RUN apk add --no-cache make gcc musl-dev linux-headers git jq bash
# build op-batcher with the shared go.mod & go.sum files
COPY ./op-batcher /app/op-batcher
COPY ./op-node /app/op-node
COPY ./op-service /app/op-service
COPY ./op-plasma /app/op-plasma
COPY ./go.mod /app/go.mod
COPY ./go.sum /app/go.sum
COPY ./.git /app/.git
WORKDIR /app/op-batcher
RUN go mod download
ARG TARGETOS TARGETARCH
RUN make op-batcher VERSION="$VERSION" GOOS=$TARGETOS GOARCH=$TARGETARCH
FROM alpine:3.18
RUN apk add --no-cache jq bash
COPY --from=builder /app/op-batcher/bin/op-batcher /usr/local/bin
ENTRYPOINT ["op-batcher"]


@ -0,0 +1,9 @@
#!/usr/bin/env bash
# Build cerc/optimism-op-batcher
# TODO: use upstream Dockerfile once its buildx-specific content has been removed
source ${CERC_CONTAINER_BASE_DIR}/build-base.sh
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
docker build -t cerc/optimism-op-batcher:local -f ${SCRIPT_DIR}/Dockerfile ${build_command_args} ${CERC_REPO_BASE_DIR}/optimism


@ -0,0 +1,31 @@
FROM golang:1.21.0-alpine3.18 as builder
ARG VERSION=v0.0.0
RUN apk add --no-cache make gcc musl-dev linux-headers git jq bash
# build op-node with the shared go.mod & go.sum files
COPY ./op-node /app/op-node
COPY ./op-chain-ops /app/op-chain-ops
COPY ./op-service /app/op-service
COPY ./op-plasma /app/op-plasma
COPY ./op-conductor /app/op-conductor
COPY ./go.mod /app/go.mod
COPY ./go.sum /app/go.sum
COPY ./.git /app/.git
WORKDIR /app/op-node
RUN go mod download
ARG TARGETOS TARGETARCH
RUN make op-node VERSION="$VERSION" GOOS=$TARGETOS GOARCH=$TARGETARCH
FROM alpine:3.18
RUN apk add --no-cache openssl jq bash curl
COPY --from=builder /app/op-node/bin/op-node /usr/local/bin
CMD ["op-node"]


@ -0,0 +1,9 @@
#!/usr/bin/env bash
# Build cerc/optimism-op-node
# TODO: use upstream Dockerfile once its buildx-specific content has been removed
source ${CERC_CONTAINER_BASE_DIR}/build-base.sh
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
docker build -t cerc/optimism-op-node:local -f ${SCRIPT_DIR}/Dockerfile ${build_command_args} ${CERC_REPO_BASE_DIR}/optimism


@ -0,0 +1,30 @@
FROM golang:1.21.0-alpine3.18 as builder
ARG VERSION=v0.0.0
RUN apk add --no-cache make gcc musl-dev linux-headers git jq bash
# build op-proposer with the shared go.mod & go.sum files
COPY ./op-proposer /app/op-proposer
COPY ./op-node /app/op-node
COPY ./op-service /app/op-service
COPY ./op-plasma /app/op-plasma
COPY ./go.mod /app/go.mod
COPY ./go.sum /app/go.sum
COPY ./.git /app/.git
WORKDIR /app/op-proposer
RUN go mod download
ARG TARGETOS TARGETARCH
RUN make op-proposer VERSION="$VERSION" GOOS=$TARGETOS GOARCH=$TARGETARCH
FROM alpine:3.18
RUN apk add --no-cache jq bash
COPY --from=builder /app/op-proposer/bin/op-proposer /usr/local/bin
CMD ["op-proposer"]


@ -0,0 +1,8 @@
#!/usr/bin/env bash
# Build cerc/optimism-op-proposer
source ${CERC_CONTAINER_BASE_DIR}/build-base.sh
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
docker build -t cerc/optimism-op-proposer:local -f ${SCRIPT_DIR}/Dockerfile ${build_command_args} ${CERC_REPO_BASE_DIR}/optimism


@ -0,0 +1,263 @@
# fixturenet-optimism
Instructions to set up and deploy an end-to-end L1+L2 stack with [fixturenet-eth](https://git.vdb.to/cerc-io/fixturenet-eth-stacks/src/branch/main/stack-orchestrator/stacks/fixturenet-eth) (L1) and [Optimism](https://stack.optimism.io) (L2)
Running just the L2 part of the stack against an external L1 endpoint is also supported; see the [L2-only doc](./l2-only.md).
## Setup
Clone the stack repo:
```bash
laconic-so fetch-stack git.vdb.to/cerc-io/fixturenet-eth-stacks
laconic-so fetch-stack git.vdb.to/cerc-io/fixturenet-optimism-stack
```
Clone required repositories:
```bash
# L1 (fixturenet-eth)
laconic-so --stack ~/cerc/fixturenet-eth-stacks/stack-orchestrator/stacks/fixturenet-eth setup-repositories
# L2 (optimism)
laconic-so --stack ~/cerc/fixturenet-optimism-stack/stack/fixturenet-optimism setup-repositories
# If this throws an error because a repo is already checked out to a branch/tag, remove the repositories concerned and re-run the command (an example follows this block)
# The repositories are located in $HOME/cerc by default
```
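For instance, if `setup-repositories` fails because a repo is already on a different branch/tag, the clones can be removed and re-fetched (a sketch assuming the default checkout layout under `~/cerc`; directory names are taken from the repo names in `stack.yml`):

```bash
# Remove the previously cloned L2 repositories (default ~/cerc layout assumed)
rm -rf ~/cerc/optimism ~/cerc/op-geth

# Re-run repository setup for the Optimism stack
laconic-so --stack ~/cerc/fixturenet-optimism-stack/stack/fixturenet-optimism setup-repositories
```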
Build the container images:
```bash
# L1 (fixturenet-eth)
laconic-so --stack ~/cerc/fixturenet-eth-stacks/stack-orchestrator/stacks/fixturenet-eth build-containers
# L2 (optimism)
laconic-so --stack ~/cerc/fixturenet-optimism-stack/stack/fixturenet-optimism build-containers
# If redeploying with changes in the stack containers
laconic-so --stack ~/cerc/fixturenet-eth-stacks/stack-orchestrator/stacks/fixturenet-eth build-containers --force-rebuild
laconic-so --stack ~/cerc/fixturenet-optimism-stack/stack/fixturenet-optimism build-containers --force-rebuild
# If errors are thrown during build, old images used by this stack may need to be deleted first
```
Note: this will take >10 mins depending on the specs of your machine, and **requires** 16GB of memory or greater.
This should create the required docker images in the local image registry:
* cerc/lighthouse
* cerc/lighthouse-cli
* cerc/fixturenet-eth-genesis-premerge
* cerc/fixturenet-eth-geth
* cerc/fixturenet-eth-lighthouse
* cerc/optimism-contracts
* cerc/optimism-op-node
* cerc/optimism-l2geth
* cerc/optimism-op-batcher
* cerc/optimism-op-proposer
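To confirm the images are present, list the locally built images (a quick check; the stack images are typically tagged `local`):

```bash
# Should list all of the cerc/* images named above
docker images --format '{{.Repository}}:{{.Tag}}' | grep '^cerc/' | sort
```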
## Create a deployment
First, create a spec file for the deployment, which will map the stack's ports and volumes to the host:
```bash
laconic-so --stack ~/cerc/fixturenet-eth-stacks/stack-orchestrator/stacks/fixturenet-eth deploy init --output fixturenet-eth.yml
laconic-so --stack ~/cerc/fixturenet-optimism-stack/stack/fixturenet-optimism deploy init --map-ports-to-host any-fixed-random --output fixturenet-optimism-spec.yml
```
### Ports
It is usually necessary to expose certain container ports on one or more of the host's addresses to allow incoming connections.
Any ports defined in the Docker compose file are exposed by default with random port assignments, bound to "any" interface (IP address 0.0.0.0), but the port mappings can be customized by editing the "spec" file generated by `laconic-so deploy init`.
In addition, a stack-wide port mapping "recipe" can be applied at the time the
`laconic-so deploy init` command is run, by supplying the desired recipe with the `--map-ports-to-host` option. The following recipes are supported:
| Recipe | Host Port Mapping |
|--------|-------------------|
| any-variable-random | Bind to 0.0.0.0 using a random port assigned at start time (default) |
| localhost-same | Bind to 127.0.0.1 using the same port number as exposed by the containers |
| any-same | Bind to 0.0.0.0 using the same port number as exposed by the containers |
| localhost-fixed-random | Bind to 127.0.0.1 using a random port number selected at the time the command is run (not checked for already in use)|
| any-fixed-random | Bind to 0.0.0.0 using a random port number selected at the time the command is run (not checked for already in use) |
For example, you may wish to use `any-fixed-random` to generate the initial mappings and then edit the spec file to set the `fixturenet-eth-geth-1` RPC to port 8545 and the `op-geth` RPC to port 9545 on the host.
Or, you may wish to use `any-same` for the initial mappings -- in which case you'll have to edit the spec file to ensure the various geth instances aren't all trying to publish to host ports 8545/8546 at once.
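To review what a recipe generated before editing, the port mappings can be inspected directly in the spec files (a sketch; the exact YAML layout may vary between `laconic-so` versions):

```bash
# Show the port-mapping sections of the generated spec files
grep -A 8 'ports' fixturenet-eth.yml
grep -A 8 'ports' fixturenet-optimism-spec.yml
```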
### Data volumes
Container data volumes are bind-mounted to specified paths in the host filesystem.
The default setup (generated by `laconic-so deploy init`) places the volumes in the `./data` subdirectory of the deployment directory. The default mappings can be customized by editing the "spec" file generated by `laconic-so deploy init`.
---
Once you've made any needed changes to the spec file, create a deployment from it:
```bash
laconic-so --stack ~/cerc/fixturenet-eth-stacks/stack-orchestrator/stacks/fixturenet-eth deploy create --spec-file fixturenet-eth.yml --deployment-dir fixturenet-eth-deployment
laconic-so --stack ~/cerc/fixturenet-optimism-stack/stack/fixturenet-optimism deploy create --spec-file fixturenet-optimism-spec.yml --deployment-dir fixturenet-optimism-deployment
# Place them both in the same namespace (cluster)
cp fixturenet-eth-deployment/deployment.yml fixturenet-optimism-deployment/deployment.yml
```
### Env configuration
Inside the `fixturenet-eth-deployment` deployment directory, open the `config.env` file and set the following env variables:
```bash
# Allow unprotected txs for Optimism contracts deployment
CERC_ALLOW_UNPROTECTED_TXS=true
```
Inside the `fixturenet-optimism-deployment` deployment directory, open the `config.env` file and set the following env variables:
```bash
# Optional
# Funding amounts for proposer and batcher accounts on L1
CERC_PROPOSER_AMOUNT= # Default 0.2ether
CERC_BATCHER_AMOUNT= # Default 0.1ether
```
## Start the stack
Start the deployment:
```bash
laconic-so deployment --dir fixturenet-eth-deployment start
laconic-so deployment --dir fixturenet-optimism-deployment start
```
1. The `fixturenet-eth` L1 chain will start up first and begin producing blocks
2. The `fixturenet-optimism-contracts` service will configure and deploy the Optimism contracts to L1, exiting when complete. This may take several minutes; you can monitor progress via the container's logs (see below)
3. The `op-node` and `op-geth` services will initialize themselves (if not already initialized) and start
4. The remaining services, `op-batcher` and `op-proposer` will start
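Once everything is up, a quick sanity check is to query both RPC endpoints for their latest block number (a sketch assuming the geth RPCs were mapped to host ports 8545 and 9545 as suggested above; substitute the ports from your spec file):

```bash
# L1 (fixturenet-eth-geth-1) latest block
curl -s -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
  http://localhost:8545

# L2 (op-geth) latest block
curl -s -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
  http://localhost:9545
```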
### Logs
To list and monitor the running containers:
```bash
laconic-so --stack ~/cerc/fixturenet-eth-stacks/stack-orchestrator/stacks/fixturenet-eth deploy ps
laconic-so --stack ~/cerc/fixturenet-optimism-stack/stack/fixturenet-optimism deploy ps
# With status
docker ps
# Check logs for a container
docker logs -f <CONTAINER_ID>
```
## Example: bridge some ETH from L1 to L2
Send some ETH from the desired account to the `L1StandardBridgeProxy` contract on L1 to test bridging to L2.
We can use the testing account `0xe6CE22afe802CAf5fF7d3845cec8c736ecc8d61F`, which is pre-funded and unlocked, and the `cerc/optimism-contracts:local` container to make use of the `cast` CLI.
1. Note the docker network the stack is running on:
```bash
docker network ls
# The network name will be something like laconic-[some_hash]_default
```
2. Set some variables:
```bash
L1_RPC=http://fixturenet-eth-geth-1:8545
L2_RPC=http://op-geth:8545
NETWORK=<the network name found above>
DEPLOYMENT_CONTEXT=<L1 chain-id; 1212 by default>
ACCOUNT=0xe6CE22afe802CAf5fF7d3845cec8c736ecc8d61F
```
If you need to check the L1 chain-id, you can use:
```bash
docker run --rm --network $NETWORK cerc/optimism-contracts:local "cast chain-id --rpc-url $L1_RPC"
```
3. Check the account starting balance on L2 (it should be 0):
```bash
docker run --rm --network $NETWORK cerc/optimism-contracts:local "cast balance $ACCOUNT --rpc-url $L2_RPC"
# 0
```
4. Read the bridge contract address from the L1 deployment records in the `op-node` container:
```bash
# get the container id for op-node
NODE_CONTAINER=$(docker ps --filter "name=op-node" -q)
BRIDGE=$(docker exec $NODE_CONTAINER jq -r .L1StandardBridgeProxy /l1-deployment/$DEPLOYMENT_CONTEXT-deploy.json)
# get the funded account's pk
ACCOUNT_PK=$(docker exec $NODE_CONTAINER jq -r '.AdminKey' /l2-accounts/accounts.json)
```
5. Use cast to send some ETH to the bridge contract:
```bash
docker run --rm --network $NETWORK cerc/optimism-contracts:local "cast send --from $ACCOUNT --value 1ether $BRIDGE --rpc-url $L1_RPC --private-key $ACCOUNT_PK"
```
6. Allow a couple of minutes for the bridge deposit to complete (a polling sketch follows this list)
7. Check the L2 balance again (it should show the bridged funds):
```bash
docker run --rm --network $NETWORK cerc/optimism-contracts:local "cast balance $ACCOUNT --rpc-url $L2_RPC"
# 1000000000000000000
```
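Rather than re-checking manually, the L2 balance can be polled until the deposit arrives (a minimal sketch reusing the variables set above):

```bash
# Poll the L2 balance every 15s until the bridged funds show up
while true; do
  BAL=$(docker run --rm --network $NETWORK cerc/optimism-contracts:local "cast balance $ACCOUNT --rpc-url $L2_RPC")
  echo "L2 balance: $BAL"
  [ "$BAL" != "0" ] && break
  sleep 15
done
```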
## Clean up
To stop all services running in the background, while preserving chain data:
```bash
laconic-so deployment --dir fixturenet-optimism-deployment stop
laconic-so deployment --dir fixturenet-eth-deployment stop
```
To stop all services and also delete chain data:
```bash
laconic-so deployment --dir fixturenet-optimism-deployment stop --delete-volumes
laconic-so deployment --dir fixturenet-eth-deployment stop --delete-volumes
```
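To verify the chain-data volumes were removed (a quick check; volume names carry the deployment's cluster prefix, so substring filters are used):

```bash
# Both commands should print nothing once the volumes are gone
docker volume ls -q --filter name=l1_deployment
docker volume ls -q --filter name=l2_
```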
## Troubleshooting
* If the `op-geth` service aborts or is restarted, the following error might occur in the `op-node` service:
```bash
WARN [02-16|21:22:02.868] Derivation process temporary error attempts=14 err="stage 0 failed resetting: temp: failed to find the L2 Heads to start from: failed to fetch L2 block by hash 0x0000000000000000000000000000000000000000000000000000000000000000: failed to determine block-hash of hash 0x0000000000000000000000000000000000000000000000000000000000000000, could not get payload: not found"
```
* This means that the data directory that `op-geth` is using is corrupted and needs to be reinitialized; the containers `op-geth`, `op-node` and `op-batcher` need to be started afresh:
WARNING: This will reset the L2 chain; consequently, all the data on it will be lost
* Stop and remove the concerned containers:
```bash
# List the containers
docker ps -f "name=op-geth|op-node|op-batcher"
# Force stop and remove the listed containers
docker rm -f $(docker ps -qf "name=op-geth|op-node|op-batcher")
```
* Remove the concerned volume:
```bash
# List the volume
docker volume ls -q --filter name=l2_geth_data
# Remove the listed volume
docker volume rm $(docker volume ls -q --filter name=l2_geth_data)
```
* Re-run the start command from [Start the stack](#start-the-stack) to recreate and restart the stopped containers (shown below for reference)
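For reference, the same command used in [Start the stack](#start-the-stack):

```bash
laconic-so deployment --dir fixturenet-optimism-deployment start
```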


@ -0,0 +1,140 @@
# fixturenet-optimism (L2-only)
Instructions to set up and deploy an L2 fixturenet using [Optimism](https://stack.optimism.io)
Prerequisite: An L1 Ethereum RPC endpoint (with unprotected txs enabled)
## Setup
Clone the stack repo:
```bash
laconic-so fetch-stack git.vdb.to/cerc-io/fixturenet-optimism-stack
```
Clone required repositories:
```bash
laconic-so --stack ~/cerc/fixturenet-optimism-stack/stack/fixturenet-optimism setup-repositories
# If this throws an error because a repo is already checked out to a branch/tag, remove the repositories concerned and re-run the command
```
Build the container images:
```bash
laconic-so --stack ~/cerc/fixturenet-optimism-stack/stack/fixturenet-optimism build-containers
```
This should create the required docker images in the local image registry:
* `cerc/optimism-contracts`
* `cerc/optimism-l2geth`
* `cerc/optimism-op-node`
* `cerc/optimism-op-batcher`
* `cerc/optimism-op-proposer`
## Create a deployment
First, create a spec file for the deployment, which will map the stack's ports and volumes to the host:
```bash
laconic-so --stack ~/cerc/fixturenet-optimism-stack/stack/fixturenet-optimism deploy init --map-ports-to-host any-fixed-random --output fixturenet-optimism-spec.yml
```
### Ports and Data volumes
The default port and volume mappings to host can be customized by editing the "spec" file generated by `laconic-so deploy init`
---
Once you've made any needed changes to the spec file, create a deployment from it:
```bash
laconic-so --stack ~/cerc/fixturenet-optimism-stack/stack/fixturenet-optimism deploy create --spec-file fixturenet-optimism-spec.yml --deployment-dir fixturenet-optimism-deployment
```
## Set chain env variables
Inside the deployment directory, open the file `config.env` and add the following variables to point the stack at your L1 RPC endpoint and provide account credentials ([defaults](../../config/fixturenet-optimism/l1-params.env)):
```bash
# External L1 endpoint
CERC_L1_CHAIN_ID=
CERC_L1_RPC=
CERC_L1_HOST=
CERC_L1_PORT=
# External L1 Beacon node endpoint
CERC_L1_BEACON=
# URL to get CSV with credentials for accounts on L1
# that are used to send balance to Optimism Proxy contract
# (enables them to do transactions on L2)
CERC_L1_ACCOUNTS_CSV_URL=
# OR
# Specify the required account credentials for the Admin account
# Other generated accounts will be funded from this account, so it should contain ~20 Eth
CERC_L1_ADDRESS=
CERC_L1_PRIV_KEY=
# Optional
# Funding amounts for proposer and batcher accounts on L1
# (Accounts funded using the Admin account)
CERC_PROPOSER_AMOUNT=
CERC_BATCHER_AMOUNT=
```
* NOTE: If L1 is running on the host machine, use `host.docker.internal` as the hostname to access the host port, or use the `ip a` command to find the IP address of the `docker0` interface (this will usually be something like `172.17.0.1` or `172.18.0.1`)
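For example, if the L1 node runs directly on the host machine, the endpoint variables might look like this (illustrative values only; the chain id, ports and beacon endpoint must match your actual L1):

```bash
# External L1 running on the docker host -- illustrative values
CERC_L1_CHAIN_ID=1337
CERC_L1_HOST=host.docker.internal
CERC_L1_PORT=8545
CERC_L1_RPC=http://host.docker.internal:8545
CERC_L1_BEACON=http://host.docker.internal:8001
```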
## Start the stack
Start the deployment:
```bash
laconic-so deployment --dir fixturenet-optimism-deployment start
```
1. The stack will check for a response from the L1 endpoint specified in your env file
2. The `fixturenet-optimism-contracts` service will configure and deploy the Optimism contracts to L1, exiting when complete. This may take several minutes; you can monitor progress via the container's logs (see below)
3. The `op-node` and `op-geth` services will initialize themselves (if not already initialized) and start
4. The remaining services, `op-batcher` and `op-proposer` will start
### Logs
To list and monitor the running containers:
```bash
laconic-so --stack ~/cerc/fixturenet-optimism-stack/stack/fixturenet-optimism deploy ps
# With status
docker ps
# Check logs for a container
docker logs -f <CONTAINER_ID>
```
## Example
Try out the [example](./README.md#example-bridge-some-eth-from-l1-to-l2) to bridge ETH from L1 to L2
Note: use the external L1 endpoint as `L1_RPC`, and add the flag `--add-host="host.docker.internal:host-gateway"` to the `docker run` commands if necessary (see the sketch below)
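For instance, the chain-id check from that example becomes (a sketch; `$L1_RPC` here is the external L1 endpoint):

```bash
docker run --rm --network $NETWORK --add-host="host.docker.internal:host-gateway" \
  cerc/optimism-contracts:local "cast chain-id --rpc-url $L1_RPC"
```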
## Clean up
To stop all L2 services running in the background, while preserving chain data:
```bash
laconic-so deployment --dir fixturenet-optimism-deployment stop
```
To stop all L2 services and also delete chain data:
```bash
laconic-so deployment --dir fixturenet-optimism-deployment stop --delete-volumes
```
## Troubleshooting
See [Troubleshooting](./README.md#troubleshooting)


@ -0,0 +1,16 @@
version: "1.1"
name: fixturenet-optimism
description: "Optimism Fixturenet"
repos:
# L2 (optimism)
- github.com/ethereum-optimism/optimism@v1.7.7
- github.com/ethereum-optimism/op-geth@v1.101315.2
containers:
# L2 (optimism)
- cerc/optimism-contracts
- cerc/optimism-op-node
- cerc/optimism-l2geth
- cerc/optimism-op-batcher
- cerc/optimism-op-proposer
pods:
- fixturenet-optimism