# testnet-nitro-node
## Prerequisites

- Local:
  - Ansible: see installation
  - yq: see installation
- On deployment machine:
  - laconic-so: see installation
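A quick sanity check that the prerequisites are available on your PATH (this only verifies the tools are present; it does not pin specific versions):

```bash
# On your local machine
ansible --version
yq --version

# On the deployment machine
which laconic-so
```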
## Setup
- On your local machine, clone the `cerc-io/testnet-ops` repository:

  ```bash
  git clone git@git.vdb.to:cerc-io/testnet-ops.git

  cd testnet-ops/nitro-nodes-setup
  ```
- Fetch the required Nitro node config:

  ```bash
  wget -O nitro-vars.yml https://git.vdb.to/cerc-io/testnet-laconicd-stack/raw/branch/main/ops/stage2/nitro-node-config.yml

  # Expected variables in the fetched config file:

  # nitro_l1_chain_url: ""
  # nitro_l2_chain_url: ""
  # na_address: ""
  # ca_address: ""
  # vpa_address: ""
  # l1_asset_address: ""
  # bridge_contract_address: ""
  # bridge_nitro_address: ""
  # nitro_l1_bridge_multiaddr: ""
  # nitro_l2_bridge_multiaddr: ""
  ```
- TODO: Get L1 tokens on your address
- Edit `nitro-vars.yml` and add the following variables:

  ```yaml
  # Private key for your Nitro account (same as the one used in stage0 onboarding)
  # Export the key from Laconic wallet (https://wallet.laconic.com)
  nitro_sc_pk: ""

  # Private key for a funded account on L1
  # This account should have L1 tokens for funding your Nitro channels
  nitro_chain_pk: ""

  # Multiaddr with publicly accessible IP address / DNS for your L1 nitro node
  # Use port 3007
  # Example: "/ip4/192.168.x.y/tcp/3007"
  # Example: "/dns4/example.com/tcp/3007"
  nitro_l1_ext_multiaddr: ""

  # Multiaddr with publicly accessible IP address / DNS for your L2 nitro node
  # Use port 3009
  # Example: "/ip4/192.168.x.y/tcp/3009"
  # Example: "/dns4/example.com/tcp/3009"
  nitro_l2_ext_multiaddr: ""
  ```
- Update the target dir in `setup-vars.yml`:

  ```bash
  # Set path to desired deployments dir
  DEPLOYMENTS_DIR=<path-to-deployments-dir>

  sed -i "s|^nitro_directory:.*|nitro_directory: $DEPLOYMENTS_DIR/nitro-node|" setup-vars.yml

  # Will create deployments at $DEPLOYMENTS_DIR/nitro-node/l1-nitro-deployment and $DEPLOYMENTS_DIR/nitro-node/l2-nitro-deployment
  ```
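  To confirm the substitution took effect, you can read the value back with yq (already listed in the prerequisites); the output should show the path you set above:

  ```bash
  yq eval '.nitro_directory' setup-vars.yml

  # Expected output:
  # <path-to-deployments-dir>/nitro-node
  ```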
## Run Nitro Nodes

Nitro nodes can be run using Ansible either locally or on a remote machine; follow the steps corresponding to your setup.
### On Local Host
- Set up and run a Nitro node (L1 + L2) by executing the `run-nitro-nodes.yml` Ansible playbook:

  ```bash
  LANG=en_US.utf8 ansible-playbook -i localhost, --connection=local run-nitro-nodes.yml --extra-vars='{ "target_host": "localhost"}' --user $USER
  ```
### On Remote Host
- Create a new `hosts.ini` file:

  ```bash
  cp ../hosts.example.ini hosts.ini
  ```
- Edit the `hosts.ini` file to run the playbook on a remote machine:

  ```ini
  [deployment_host]
  <host_name> ansible_host=<target_ip> ansible_user=<ssh_user> ansible_ssh_common_args='-o ForwardAgent=yes'
  ```

  - Replace `<deployment_host>` with `nitro_host`
  - Replace `<host_name>` with the alias of your choice
  - Replace `<target_ip>` with the IP address or hostname of the target machine
  - Replace `<ssh_user>` with the SSH username (e.g., dev, ubuntu)
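  For reference, a filled-in `hosts.ini` might look like the following; the alias, IP address, and SSH user here are placeholder values, so substitute your own:

  ```ini
  [nitro_host]
  my-nitro-host ansible_host=203.0.113.10 ansible_user=ubuntu ansible_ssh_common_args='-o ForwardAgent=yes'
  ```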
- Verify that you are able to connect to the host using the following command:

  ```bash
  ansible all -m ping -i hosts.ini -k

  # Expected output:

  # <host_name> | SUCCESS => {
  #   "ansible_facts": {
  #     "discovered_interpreter_python": "/usr/bin/python3.10"
  #   },
  #   "changed": false,
  #   "ping": "pong"
  # }
  ```
- Execute the `run-nitro-nodes.yml` Ansible playbook for remote deployment:

  ```bash
  LANG=en_US.utf8 ansible-playbook -i hosts.ini run-nitro-nodes.yml --extra-vars='{ "target_host": "nitro_host"}' --user $USER -kK
  ```
## Check Deployment Status
- Run the following commands in the directory where the deployments are created:

  ```bash
  cd $DEPLOYMENTS_DIR/nitro-node

  # Check the logs, ensure that the nodes are running
  laconic-so deployment --dir l1-nitro-deployment logs nitro-node -f
  laconic-so deployment --dir l2-nitro-deployment logs nitro-node -f

  # Let L1 node sync up with the chain
  # Expected logs after sync:
  # nitro-node-1 | 2:04PM INF Initializing Http RPC transport...
  # nitro-node-1 | 2:04PM INF Completed RPC server initialization url=127.0.0.1:4005/api/v1
  ```
## Create Channels

Create a ledger channel with the bridge on L1 that is mirrored on L2.
- Run the following commands from the directory where the deployments are created

- Set required variables:

  ```bash
  cd $DEPLOYMENTS_DIR/nitro-node

  # Fetch the required Nitro node config
  wget https://git.vdb.to/cerc-io/testnet-laconicd-stack/raw/branch/main/ops/stage2/nitro-node-config.yml

  export BRIDGE_NITRO_ADDRESS=$(yq eval '.bridge_nitro_address' nitro-node-config.yml)
  export L1_ASSET_ADDRESS=$(yq eval '.l1_asset_address' nitro-node-config.yml)
  ```
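  To confirm the variables were populated from the fetched config, you can echo them; both should print non-empty addresses:

  ```bash
  echo "BRIDGE_NITRO_ADDRESS: $BRIDGE_NITRO_ADDRESS"
  echo "L1_ASSET_ADDRESS: $L1_ASSET_ADDRESS"
  ```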
- Check that you have no existing channels on L1 or L2:

  ```bash
  laconic-so deployment --dir l1-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-all-ledger-channels -p 4005 -h nitro-node"
  laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-all-ledger-channels -p 4005 -h nitro-node"

  # Expected output:
  # []
  ```
- Create a ledger channel between your L1 Nitro node and the bridge with the custom asset:

  ```bash
  # Set amount to ledger
  LEDGER_AMOUNT=1000000

  laconic-so deployment --dir l1-nitro-deployment exec nitro-rpc-client "nitro-rpc-client direct-fund $BRIDGE_NITRO_ADDRESS --assetAddress $L1_ASSET_ADDRESS --alphaAmount $LEDGER_AMOUNT --betaAmount $LEDGER_AMOUNT -p 4005 -h nitro-node"

  # Follow your L1 Nitro node logs for progress
  # Expected output:
  # Objective started DirectFunding-0x161d289a50222caa781db215bb82a3ede4f557217742245525b8e8cbff04ec21
  # Channel Open 0x161d289a50222caa781db215bb82a3ede4f557217742245525b8e8cbff04ec21

  # Set the resulting ledger channel id in a variable
  export LEDGER_CHANNEL_ID=
  ```

  - Check the Troubleshooting section if the command to create a ledger channel fails or gets stuck
- Once the direct-fund objective is complete, the bridge will create a mirrored channel on L2
- Check the L2 Nitro node's logs to see that a bridged-fund objective completed:

  ```bash
  laconic-so deployment --dir l2-nitro-deployment logs nitro-node -f --tail 30

  # Expected output:
  # nitro-node-1 | 5:01AM INF INFO Objective cranked address=0xaaa6628ec44a8a742987ef3a114ddfe2d4f7adce objective-id=bridgedfunding-0x6a9f5ccf1fa802525d794f4a899897f947615f6acc7141e61e056a8bfca29179 waiting-for=WaitingForNothing
  # nitro-node-1 | 5:01AM INF INFO Objective is complete & returned to API address=0xaaa6628ec44a8a742987ef3a114ddfe2d4f7adce objective-id=bridgedfunding-0x6a9f5ccf1fa802525d794f4a899897f947615f6acc7141e61e056a8bfca29179
  ```
- Check the status of the L1 ledger channel with the bridge using the channel id:

  ```bash
  laconic-so deployment --dir l1-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-ledger-channel $LEDGER_CHANNEL_ID -p 4005 -h nitro-node"

  # Expected output:
  # {
  #   ID: '0x161d289a50222caa781db215bb82a3ede4f557217742245525b8e8cbff04ec21',
  #   Status: 'Open',
  #   Balance: {
  #     AssetAddress: '<l1-asset-address>',
  #     Me: '<your-nitro-address>',
  #     Them: '<bridge-nitro-address>',
  #     MyBalance: <ledger-amount>n,
  #     TheirBalance: <ledger-amount>n
  #   },
  #   ChannelMode: 'Open'
  # }
  ```
- Check the status of the mirrored channel on L2:

  ```bash
  laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-all-ledger-channels -p 4005 -h nitro-node"

  # Expected output:
  # [
  #   {
  #     "ID": "0x6a9f5ccf1fa802525d794f4a899897f947615f6acc7141e61e056a8bfca29179",
  #     "Status": "Open",
  #     "Balance": {
  #       "AssetAddress": "<l2-asset-address>",
  #       "Me": "<your-nitro-address>",
  #       "Them": "<bridge-nitro-address>",
  #       "MyBalance": <ledger-amount>n,
  #       "TheirBalance": <ledger-amount>n
  #     },
  #     "ChannelMode": "Open"
  #   }
  # ]
  ```
## Clean up
- Switch to the deployments dir:

  ```bash
  cd $DEPLOYMENTS_DIR/nitro-node
  ```
- Stop all Nitro services running in the background:

  ```bash
  laconic-so deployment --dir l1-nitro-deployment stop
  laconic-so deployment --dir l2-nitro-deployment stop
  ```
- To stop all services and also delete data:

  ```bash
  laconic-so deployment --dir l1-nitro-deployment stop --delete-volumes
  laconic-so deployment --dir l2-nitro-deployment stop --delete-volumes

  # Remove deployment directories (deployments will have to be recreated for a re-run)
  sudo rm -r l1-nitro-deployment
  sudo rm -r l2-nitro-deployment
  ```
## Troubleshooting

- Stop (`Ctrl+C`) the direct-fund command if it is stuck

- Restart the L1 Nitro node:

  - Stop the deployment:

    ```bash
    cd $DEPLOYMENTS_DIR/nitro-node

    laconic-so deployment --dir l1-nitro-deployment stop
    ```

  - Reset the node's durable store:

    ```bash
    sudo rm -rf l1-nitro-deployment/data/nitro_node_data

    mkdir l1-nitro-deployment/data/nitro_node_data
    ```

  - Restart the deployment:

    ```bash
    laconic-so deployment --dir l1-nitro-deployment start
    ```
- Retry the ledger channel creation command
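  The retry is the same direct-fund call from the Create Channels section above; it assumes `BRIDGE_NITRO_ADDRESS`, `L1_ASSET_ADDRESS`, and `LEDGER_AMOUNT` are still set in your shell:

  ```bash
  laconic-so deployment --dir l1-nitro-deployment exec nitro-rpc-client "nitro-rpc-client direct-fund $BRIDGE_NITRO_ADDRESS --assetAddress $L1_ASSET_ADDRESS --alphaAmount $LEDGER_AMOUNT --betaAmount $LEDGER_AMOUNT -p 4005 -h nitro-node"
  ```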