# fixturenet-optimism

Instructions to set up and deploy an end-to-end L1+L2 stack with fixturenet-eth (L1) and Optimism (L2).

We also support running just the L2 part of the stack against an external L1 endpoint; follow the L2-only doc for that.
## Setup
Clone the stack repositories:

```bash
laconic-so fetch-stack git.vdb.to/cerc-io/fixturenet-eth-stacks
laconic-so fetch-stack git.vdb.to/cerc-io/fixturenet-optimism-stack
```
Clone the required repositories:

```bash
# L1 (fixturenet-eth)
laconic-so --stack ~/cerc/fixturenet-eth-stacks/stack-orchestrator/stacks/fixturenet-eth setup-repositories

# L2 (optimism)
laconic-so --stack ~/cerc/fixturenet-optimism-stack/stack/fixturenet-optimism setup-repositories

# If this throws an error because a repository is already checked out to a branch/tag, remove the repositories concerned and re-run the command
# The repositories are located in $HOME/cerc by default
```
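To confirm what was cloned, list the default checkout location (the exact directory names depend on each stack's repository list):

```bash
# Repositories cloned by setup-repositories live under $HOME/cerc by default
ls ~/cerc
```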
Build the container images:

```bash
# L1 (fixturenet-eth)
laconic-so --stack ~/cerc/fixturenet-eth-stacks/stack-orchestrator/stacks/fixturenet-eth build-containers

# L2 (optimism)
laconic-so --stack ~/cerc/fixturenet-optimism-stack/stack/fixturenet-optimism build-containers

# If redeploying with changes in the stack containers
laconic-so --stack ~/cerc/fixturenet-eth-stacks/stack-orchestrator/stacks/fixturenet-eth build-containers --force-rebuild
laconic-so --stack ~/cerc/fixturenet-optimism-stack/stack/fixturenet-optimism build-containers --force-rebuild

# If errors are thrown during the build, delete the old images used by this stack before rebuilding
```
Note: this will take more than 10 minutes, depending on the specs of your machine, and requires at least 16 GB of memory.
This should create the required docker images in the local image registry:
- cerc/lighthouse
- cerc/lighthouse-cli
- cerc/fixturenet-eth-genesis-premerge
- cerc/fixturenet-eth-geth
- cerc/fixturenet-eth-lighthouse
- cerc/optimism-contracts
- cerc/optimism-op-node
- cerc/optimism-l2geth
- cerc/optimism-op-batcher
- cerc/optimism-op-proposer
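To verify the build, you can filter the local image list for the `cerc/` prefix (a plain Docker check, not part of the stack tooling); all of the images above should appear:

```bash
docker images | grep "cerc/"
```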
## Create a deployment
First, create a spec file for each deployment; these map the stack's ports and volumes to the host:

```bash
laconic-so --stack ~/cerc/fixturenet-eth-stacks/stack-orchestrator/stacks/fixturenet-eth deploy init --output fixturenet-eth.yml
laconic-so --stack ~/cerc/fixturenet-optimism-stack/stack/fixturenet-optimism deploy init --map-ports-to-host any-fixed-random --output fixturenet-optimism-spec.yml
```
### Ports
It is usually necessary to expose certain container ports on one or more of the host's addresses to allow incoming connections. Any ports defined in the Docker compose file are exposed by default with random port assignments, bound to "any" interface (IP address 0.0.0.0), but the port mappings can be customized by editing the "spec" file generated by `laconic-so deploy init`.

In addition, a stack-wide port mapping "recipe" can be applied at the time the `laconic-so deploy init` command is run, by supplying the desired recipe with the `--map-ports-to-host` option. The following recipes are supported:
| Recipe | Host Port Mapping |
| --- | --- |
| `any-variable-random` | Bind to 0.0.0.0 using a random port assigned at start time (default) |
| `localhost-same` | Bind to 127.0.0.1 using the same port number as exposed by the containers |
| `any-same` | Bind to 0.0.0.0 using the same port number as exposed by the containers |
| `localhost-fixed-random` | Bind to 127.0.0.1 using a random port number selected at the time the command is run (not checked for already being in use) |
| `any-fixed-random` | Bind to 0.0.0.0 using a random port number selected at the time the command is run (not checked for already being in use) |
For example, you may wish to use `any-fixed-random` to generate the initial mappings and then edit the spec files to set the `fixturenet-eth-geth-1` RPC to port 8545 and the `op-geth` RPC to port 9545 on the host. Or, you may wish to use `any-same` for the initial mappings, in which case you'll have to edit the spec files to ensure the various geth instances aren't all trying to publish to host ports 8545/8546 at once.
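As an illustration of the kind of edit described above, port mappings live in the spec files generated by `deploy init`. The snippet below is a hypothetical excerpt only; the exact keys and layout of your generated specs may differ, so verify against your own files (the `fixturenet-eth.yml` edit for `fixturenet-eth-geth-1` is analogous):

```yaml
# Hypothetical excerpt from fixturenet-optimism-spec.yml -- illustrative only,
# check the structure of your generated file before editing
network:
  ports:
    op-geth:
      - '9545:8545'   # host port : container port
```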
### Data volumes
Container data volumes are bind-mounted to specified paths in the host filesystem. The default setup (generated by `laconic-so deploy init`) places the volumes in the `./data` subdirectory of the deployment directory. The default mappings can be customized by editing the "spec" file generated by `laconic-so deploy init`.
Once you've made any needed changes to the spec files, create the deployments from them:

```bash
laconic-so --stack ~/cerc/fixturenet-eth-stacks/stack-orchestrator/stacks/fixturenet-eth deploy create --spec-file fixturenet-eth.yml --deployment-dir fixturenet-eth-deployment
laconic-so --stack ~/cerc/fixturenet-optimism-stack/stack/fixturenet-optimism deploy create --spec-file fixturenet-optimism-spec.yml --deployment-dir fixturenet-optimism-deployment

# Place them both in the same namespace (cluster)
cp fixturenet-eth-deployment/deployment.yml fixturenet-optimism-deployment/deployment.yml
```
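Because the second deployment's `deployment.yml` is overwritten with a copy of the first one's, both deployments now share a cluster; a quick sanity check with plain `diff` (nothing stack-specific) should produce no output:

```bash
diff fixturenet-eth-deployment/deployment.yml fixturenet-optimism-deployment/deployment.yml
```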
## Env configuration
Inside the `fixturenet-eth-deployment` deployment directory, open the `config.env` file and set the following env variables:

```
# Allow unprotected txs for Optimism contracts deployment
CERC_ALLOW_UNPROTECTED_TXS=true
```
Inside the `fixturenet-optimism-deployment` deployment directory, open the `config.env` file and set the following env variables:

```
# Optional
# Funding amounts for proposer and batcher accounts on L1
CERC_PROPOSER_AMOUNT= # Default: 0.2ether
CERC_BATCHER_AMOUNT=  # Default: 0.1ether
```
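For example, to override the defaults explicitly (illustrative values; leaving the variables empty keeps the defaults noted above):

```
# fixturenet-optimism-deployment/config.env
CERC_PROPOSER_AMOUNT=0.5ether
CERC_BATCHER_AMOUNT=0.2ether
```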
## Start the stack
Start the deployments:

```bash
laconic-so deployment --dir fixturenet-eth-deployment start
laconic-so deployment --dir fixturenet-optimism-deployment start
```
- The `fixturenet-eth` L1 chain will start up first and begin producing blocks
- The `fixturenet-optimism-contracts` service will configure and deploy the Optimism contracts to L1, exiting when complete. This may take several minutes; you can follow the progress via the container's logs (see below)
- The `op-node` and `op-geth` services will initialize themselves (if not already initialized) and start
- The remaining services, `op-batcher` and `op-proposer`, will start
## Logs
To list and monitor the running containers:

```bash
laconic-so --stack ~/cerc/fixturenet-eth-stacks/stack-orchestrator/stacks/fixturenet-eth deploy ps
laconic-so --stack ~/cerc/fixturenet-optimism-stack/stack/fixturenet-optimism deploy ps

# With status
docker ps

# Check logs for a container
docker logs -f <CONTAINER_ID>
```
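For example, to watch the Optimism contracts deployment mentioned above (assuming a single container whose name contains `fixturenet-optimism-contracts`, matching the service name):

```bash
# Follow the contracts deployment logs until the container exits
docker logs -f $(docker ps -aq --filter "name=fixturenet-optimism-contracts")
```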
## Example: bridge some ETH from L1 to L2
Send some ETH from the desired account to the `L1StandardBridgeProxy` contract on L1 to test bridging to L2.

We can use the testing account `0xe6CE22afe802CAf5fF7d3845cec8c736ecc8d61F`, which is pre-funded and unlocked, and the `cerc/optimism-contracts:local` container to make use of the `cast` CLI.
- Note the Docker network the stack is running on:

  ```bash
  docker network ls
  # The network name will be something like laconic-[some_hash]_default
  ```
- Set some variables:

  ```bash
  L1_RPC=http://fixturenet-eth-geth-1:8545
  L2_RPC=http://op-geth:8545
  NETWORK=<the network name found above>
  DEPLOYMENT_CONTEXT=<L1 chain-id; 1212 by default>
  ACCOUNT=0xe6CE22afe802CAf5fF7d3845cec8c736ecc8d61F
  ```

  You can create an alias for running `cast` on the network:

  ```bash
  alias op-cast="docker run --rm --network $NETWORK --entrypoint cast cerc/optimism-contracts:local"
  ```

  If you need to check the L1 chain-id, you can use:

  ```bash
  op-cast chain-id --rpc-url $L1_RPC
  ```
- Check the account's starting balance on L2 (it should be 0):

  ```bash
  op-cast balance $ACCOUNT --rpc-url $L2_RPC
  # 0
  ```
- Read the bridge contract address from the L1 deployment records in the `op-node` container:

  ```bash
  # Get the container id for op-node
  NODE_CONTAINER=$(docker ps --filter "name=op-node" -q)

  # Get the bridge contract address
  BRIDGE=$(docker exec $NODE_CONTAINER cat /l1-deployment/$DEPLOYMENT_CONTEXT-deploy.json | jq -r .L1StandardBridgeProxy)

  # Get the funded account's private key
  ACCOUNT_PK=$(docker exec $NODE_CONTAINER jq -r '.AdminKey' /l2-accounts/accounts.json)
  ```
- Use `cast` to send some ETH to the bridge contract:

  ```bash
  op-cast send --from $ACCOUNT --value 1ether $BRIDGE --rpc-url $L1_RPC --private-key $ACCOUNT_PK
  ```
- Allow a couple of minutes for the bridge to complete.

- Check the L2 balance again (it should show the bridged funds):

  ```bash
  op-cast balance $ACCOUNT --rpc-url $L2_RPC
  # 1000000000000000000
  ```
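If you prefer not to check by hand, a small shell loop (reusing the `op-cast` alias and the variables set earlier in this example) can poll until the deposit lands on L2:

```bash
# Re-check the L2 balance every 10 seconds until the bridged funds show up
while [ "$(op-cast balance $ACCOUNT --rpc-url $L2_RPC)" = "0" ]; do
  echo "Waiting for the bridge deposit to arrive on L2..."
  sleep 10
done
op-cast balance $ACCOUNT --rpc-url $L2_RPC
```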
## Clean up
To stop all services running in the background, while preserving chain data:

```bash
laconic-so deployment --dir fixturenet-optimism-deployment stop
laconic-so deployment --dir fixturenet-eth-deployment stop
```
To stop all services and also delete chain data:

```bash
laconic-so deployment --dir fixturenet-optimism-deployment stop --delete-volumes
laconic-so deployment --dir fixturenet-eth-deployment stop --delete-volumes
```
## Troubleshooting
- If the `op-geth` service aborts or is restarted, the following error might occur in the `op-node` service:

  ```
  WARN [02-16|21:22:02.868] Derivation process temporary error attempts=14 err="stage 0 failed resetting: temp: failed to find the L2 Heads to start from: failed to fetch L2 block by hash 0x0000000000000000000000000000000000000000000000000000000000000000: failed to determine block-hash of hash 0x0000000000000000000000000000000000000000000000000000000000000000, could not get payload: not found"
  ```

- This means that the data directory `op-geth` is using is corrupted and needs to be reinitialized; the `op-geth`, `op-node` and `op-batcher` containers need to be started afresh.

  WARNING: This will reset the L2 chain; consequently, all the data on it will be lost.
  - Stop and remove the affected containers:

    ```bash
    # List the containers
    docker ps -f "name=op-geth|op-node|op-batcher"

    # Force stop and remove the listed containers
    docker rm -f $(docker ps -qf "name=op-geth|op-node|op-batcher")
    ```

  - Remove the affected volume:

    ```bash
    # List the volume
    docker volume ls -q --filter name=l2_geth_data

    # Remove the listed volume
    docker volume rm $(docker volume ls -q --filter name=l2_geth_data)
    ```

  - Re-run the start command (see "Start the stack" above) to restart the stopped containers.
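For this stack, that corresponds to re-running the start command for the Optimism deployment created earlier (the `op-*` services belong to it):

```bash
laconic-so deployment --dir fixturenet-optimism-deployment start
```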