diff --git a/README.md b/README.md
index 388d85b..ac60865 100644
--- a/README.md
+++ b/README.md
@@ -15,3 +15,7 @@ Stacks to run a node for laconic testnet
 ## Join LORO testnet
 
 Follow steps in [testnet-onboarding-validator.md](./testnet-onboarding-validator.md) to onboard your participant and join as a validator on the LORO testnet
+
+## Run testnet Nitro Node
+
+Follow steps in [testnet-nitro-node.md](./testnet-nitro-node.md) to run your Nitro node for the testnet
diff --git a/ops/deployments-from-scratch.md b/ops/deployments-from-scratch.md
index 58dcf12..d933896 100644
--- a/ops/deployments-from-scratch.md
+++ b/ops/deployments-from-scratch.md
@@ -10,6 +10,515 @@
 cd /srv
 ```
 
+## Prerequisites
+
+* Local:
+
+  * Clone the `cerc-io/testnet-ops` repository:
+
+    ```bash
+    git clone git@git.vdb.to:cerc-io/testnet-ops.git
+    ```
+
+  * Ansible: see [installation](https://git.vdb.to/cerc-io/testnet-ops#installation)
+
+* On deployments VM(s):
+
+  * laconic-so: see [installation](https://git.vdb.to/cerc-io/testnet-ops/src/branch/main/stack-orchestrator-setup/README.md)
+
+<details><summary>L2 Optimism</summary>
+
+## L2 Optimism
+
+* Stack:
+
+* Source repos:
+  *
+  *
+
+* Target dir: `/srv/op-sepolia/optimism-deployment`
+
+* Clean up an existing deployment on the VM if required:
+
+  ```bash
+  cd /srv/op-sepolia
+
+  # Stop the deployment
+  laconic-so deployment --dir optimism-deployment stop --delete-volumes
+
+  # Remove the deployment dir
+  sudo rm -rf optimism-deployment
+  ```
+
+### Setup
+
+* Switch to the `testnet-ops/l2-setup` directory on your local machine:
+
+  ```bash
+  cd testnet-ops/l2-setup
+  ```
+
+* Copy the `l2-vars-example.yml` vars file:
+
+  ```bash
+  cp l2-vars-example.yml l2-vars.yml
+  ```
+
+* Edit `l2-vars.yml` with the required values:
+
+  ```bash
+  # L1 chain ID (Sepolia: 11155111)
+  l1_chain_id: "11155111"
+
+  # L1 RPC endpoint
+  l1_rpc: "http://host.docker.internal:8545"
+
+  # L1 RPC endpoint host or IP address
+  l1_host: "host.docker.internal"
+
+  # L1 RPC endpoint port number
+  l1_port: "8545"
+
+  # L1 Beacon endpoint
+  l1_beacon: "http://host.docker.internal:8001"
+
+  # Account credentials for the Admin account
+  # Used for Optimism contracts deployment and funding other generated accounts
+  l1_address: ""
+  l1_priv_key: ""
+  ```
+
+* Update the target dir in `setup-vars.yml`:
+
+  ```bash
+  sed -i 's|^l2_directory:.*|l2_directory: /srv/op-sepolia|' setup-vars.yml
+
+  # Will create deployment at /srv/op-sepolia/optimism-deployment
+  ```
+
+### Run
+
+* Set up and run L2 on the remote host by executing the `run-optimism.yml` Ansible playbook on your local machine:
+
+  * Create a new `hosts.ini` file:
+
+    ```bash
+    cp ../hosts.example.ini hosts.ini
+    ```
+
+  * Edit the [`hosts.ini`](./hosts.ini) file to run the playbook on a remote machine:
+
+    ```ini
+    [l2_host]
+    <host_name> ansible_host=<target_ip> ansible_user=<ssh_user> ansible_ssh_common_args='-o ForwardAgent=yes'
+    ```
+
+    - Replace `<host_name>` with the alias of your choice
+    - Replace `<target_ip>` with the IP address or hostname of the target machine
+    - Replace `<ssh_user>` with the SSH username (e.g., dev, ubuntu)
+
+  * Verify that you are able to connect to the host using the following command:
+
+    ```bash
+    ansible all -m ping -i hosts.ini -k
+
+    # Expected output:
+    # <host_name> | SUCCESS => {
+    #   "ansible_facts": {
+    #     "discovered_interpreter_python": "/usr/bin/python3.10"
+    #   },
+    #   "changed": false,
+    #   "ping": "pong"
+    # }
+    ```
+
+  * Execute the `run-optimism.yml` Ansible playbook for remote deployment:
+
+    ```bash
+    LANG=en_US.utf8 ansible-playbook -i hosts.ini run-optimism.yml --extra-vars='{ "target_host": "l2_host"}' --user $USER -kK
+    ```
+
+* Bridge funds on L2:
+
+  * On the deployment VM, set the following variables:
+
+    ```bash
+    cd /srv/op-sepolia
+
+    L1_RPC=http://host.docker.internal:8545
+    L2_RPC=http://host.docker.internal:9545
+
+    NETWORK=$(grep 'cluster-id' optimism-deployment/deployment.yml | sed 's/cluster-id: //')_default
+
+    DEPLOYMENT_CONTEXT=11155111
+    ACCOUNT=
+    ```
+
+  * Read the bridge contract address from the L1 deployment records in the `op-node` container:
+
+    ```bash
+    BRIDGE=$(laconic-so deployment --dir optimism-deployment exec op-node "cat /l1-deployment/$DEPLOYMENT_CONTEXT-deploy.json" | jq -r .L1StandardBridgeProxy)
+
+    # Get the funded account's pk
+    ACCOUNT_PK=$(laconic-so deployment --dir optimism-deployment exec op-node "jq -r '.AdminKey' /l2-accounts/accounts.json")
+    ```
+
+  * Check that the starting balance for the account on L2 is 0:
+
+    ```bash
+    docker run --rm --network $NETWORK cerc/optimism-contracts:local "cast balance $ACCOUNT --rpc-url $L2_RPC"
+
+    # 0
+    ```
+
+  * Use cast to send ETH to the bridge contract:
+
+    ```bash
+    docker run --rm cerc/optimism-contracts:local "cast send --from $ACCOUNT --value 1ether $BRIDGE --rpc-url $L1_RPC --private-key $ACCOUNT_PK"
+    ```
+
+  * Allow a couple of minutes for the bridge to complete
+
+  * Check the balance on L2:
+
+    ```bash
+    docker run --rm --network $NETWORK cerc/optimism-contracts:local "cast balance $ACCOUNT --rpc-url $L2_RPC"
+
+    # 100000000000000000
+    ```
+
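+  * Optionally, poll the L2 balance until the bridged funds arrive instead of re-running the check by hand (a sketch; uses the same `$NETWORK`, `$ACCOUNT`, and `$L2_RPC` variables set above):
+
+    ```bash
+    # Poll every 15s until the L2 balance becomes non-zero
+    while true; do
+      BAL=$(docker run --rm --network $NETWORK cerc/optimism-contracts:local "cast balance $ACCOUNT --rpc-url $L2_RPC")
+      [ "$BAL" != "0" ] && break
+      sleep 15
+    done
+    echo "Bridged balance on L2: $BAL"
+    ```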
+
+</details>
+
+<details><summary>L1 Nitro Contracts Deployment</summary>
+
+## L1 Nitro Contracts Deployment
+
+* Stack:
+
+* Source repo:
+
+* Target dir: `/srv/bridge/nitro-contracts-deployment`
+
+* Clean up an existing deployment on the VM if required:
+
+  ```bash
+  cd /srv/bridge
+
+  # Stop the deployment
+  laconic-so deployment --dir nitro-contracts-deployment stop --delete-volumes
+
+  # Remove the deployment dir
+  sudo rm -rf nitro-contracts-deployment
+  ```
+
+### Setup
+
+* Switch to the `testnet-ops/nitro-contracts-setup` directory on your local machine:
+
+  ```bash
+  cd testnet-ops/nitro-contracts-setup
+  ```
+
+* Copy the `contract-vars.example.yml` vars file:
+
+  ```bash
+  cp contract-vars.example.yml contract-vars.yml
+  ```
+
+* Edit [`contract-vars.yml`](./contract-vars.yml) and fill in the following values:
+
+  ```bash
+  # L1 RPC endpoint
+  geth_url: "https://sepolia.laconic.com"
+
+  # L1 chain ID (Sepolia: 11155111)
+  geth_chain_id: "11155111"
+
+  # Private key for a funded L1 account, to be used for contract deployment on L1
+  # Must also be funded on L2 for deploying contracts
+  # Required since this private key will be utilized by both L1 and L2 nodes of the bridge
+  geth_deployer_pk: ""
+
+  # Custom L1 token to be deployed
+  token_name: ""
+  token_symbol: ""
+  intial_token_supply: ""
+  ```
+
+* Update the target dir in `setup-vars.yml`:
+
+  ```bash
+  sed -i 's|^nitro_directory:.*|nitro_directory: /srv/bridge|' setup-vars.yml
+
+  # Will create deployment at /srv/bridge/nitro-contracts-deployment
+  ```
+
+### Run
+
+* Deploy the Nitro contracts on the remote host by executing the `deploy-contracts.yml` Ansible playbook on your local machine:
+
+  * Create a new `hosts.ini` file:
+
+    ```bash
+    cp ../hosts.example.ini hosts.ini
+    ```
+
+  * Edit the [`hosts.ini`](./hosts.ini) file to run the playbook on a remote machine:
+
+    ```ini
+    [nitro_host]
+    <host_name> ansible_host=<target_ip> ansible_user=<ssh_user> ansible_ssh_common_args='-o ForwardAgent=yes'
+    ```
+
+    - Replace `<host_name>` with the alias of your choice
+    - Replace `<target_ip>` with the IP address or hostname of the target machine
+    - Replace `<ssh_user>` with the SSH username (e.g., dev, ubuntu)
+
+  * Verify that you are able to connect to the host using the following command:
+
+    ```bash
+    ansible all -m ping -i hosts.ini -k
+
+    # Expected output:
+    # <host_name> | SUCCESS => {
+    #   "ansible_facts": {
+    #     "discovered_interpreter_python": "/usr/bin/python3.10"
+    #   },
+    #   "changed": false,
+    #   "ping": "pong"
+    # }
+    ```
+
+  * Execute the `deploy-contracts.yml` Ansible playbook for remote deployment:
+
+    ```bash
+    LANG=en_US.utf8 ansible-playbook -i hosts.ini deploy-contracts.yml --extra-vars='{ "target_host": "nitro_host"}' --user $USER -kK
+    ```
+
+* Check logs for the deployment on the virtual machine:
+
+  ```bash
+  cd /srv/bridge
+
+  # Check the L1 nitro contract deployments
+  laconic-so deployment --dir nitro-contracts-deployment logs nitro-contracts -f
+  ```
+
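+* Optionally, read back a single deployed contract address from the deployment records (a sketch; uses the same `nitro-addresses.json` file and `jq` filter as the bridge setup below, with `11155111` as the Sepolia chain ID):
+
+  ```bash
+  laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "jq -r '.\"11155111\"[0].contracts.NitroAdjudicator.address' /app/deployment/nitro-addresses.json"
+  ```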
+
+</details>
+
+<details><summary>Nitro Bridge</summary>
+
+## Nitro Bridge
+
+* Stack:
+
+* Source repo:
+
+* Target dir: `/srv/bridge/bridge-deployment`
+
+* Clean up an existing deployment on the VM if required:
+
+  ```bash
+  cd /srv/bridge
+
+  # Stop the deployment
+  laconic-so deployment --dir bridge-deployment stop --delete-volumes
+
+  # Remove the deployment dir
+  sudo rm -rf bridge-deployment
+  ```
+
+### Setup
+
+* Run the following command on the deployment VM to get the deployed L1 Nitro contract addresses along with the L1 asset address:
+
+  ```bash
+  cd /srv/bridge
+
+  laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "cat /app/deployment/nitro-addresses.json"
+
+  # Expected output:
+  # {
+  #   "11155111": [
+  #     {
+  #       "name": "geth",
+  #       "chainId": "11155111",
+  #       "contracts": {
+  #         "ConsensusApp": {
+  #           "address": "0xC98aD0B41B9224dad0605be32A9241dB9c67E2e8"
+  #         },
+  #         "NitroAdjudicator": {
+  #           "address": "0x7C22fdA703Cdf09eB8D3B5Adc81F723526713D0e"
+  #         },
+  #         "VirtualPaymentApp": {
+  #           "address": "0x778e4e6297E8BF04C67a20Ec989618d72eB4a19E"
+  #         },
+  #         "Token": {
+  #           "address": "0x02ebfB2706527C7310F2a7d9098b2BC61014C5F2"
+  #         }
+  #       }
+  #     }
+  #   ]
+  # }
+  ```
+
+* Switch to the `testnet-ops/nitro-bridge-setup` directory on your local machine:
+
+  ```bash
+  cd testnet-ops/nitro-bridge-setup
+  ```
+
+* Create the required vars file:
+
+  ```bash
+  cp bridge-vars.example.yml bridge-vars.yml
+  ```
+
+* Edit `bridge-vars.yml` with the required values:
+
+  ```bash
+  # L1 WS endpoint
+  nitro_l1_chain_url: "wss://sepolia.laconic.com"
+
+  # L2 WS endpoint
+  nitro_l2_chain_url: "wss://optimism.laconic.com"
+
+  # Private key for the bridge Nitro address
+  nitro_sc_pk: ""
+
+  # Private key should correspond to a funded account on both L1 and L2, and this account must own the Nitro contracts on L1
+  # It also needs to hold L1 tokens to fund Nitro channels and will be used for deploying contracts on L2
+  nitro_chain_pk: ""
+
+  # L2 chain ID (Optimism: 42069)
+  optimism_chain_id: "42069"
+
+  # L2 RPC endpoint
+  optimism_url: "https://optimism.laconic.com"
+
+  # Custom L2 token to be deployed
+  token_name: ""
+  token_symbol: ""
+  intial_token_supply: ""
+
+  # Deployed L1 Nitro contract addresses
+  na_address: ""
+  vpa_address: ""
+  ca_address: ""
+
+  # Deployed L1 token address
+  l1_asset_address: ""
+  ```
+
+* Update the target dir in `setup-vars.yml`:
+
+  ```bash
+  sed -i 's|^nitro_directory:.*|nitro_directory: /srv/bridge|' setup-vars.yml
+
+  # Will create deployment at /srv/bridge/nitro-contracts-deployment and /srv/bridge/bridge-deployment
+  ```
+
+### Run
+
+* Deploy the L2 contracts and start the bridge on the remote host by executing the `run-nitro-bridge.yml` Ansible playbook on your local machine:
+
+  * Create a new `hosts.ini` file:
+
+    ```bash
+    cp ../hosts.example.ini hosts.ini
+    ```
+
+  * Edit the [`hosts.ini`](./hosts.ini) file to run the playbook on a remote machine:
+
+    ```ini
+    [nitro_host]
+    <host_name> ansible_host=<target_ip> ansible_user=<ssh_user> ansible_ssh_common_args='-o ForwardAgent=yes'
+    ```
+
+    - Replace `<host_name>` with the alias of your choice
+    - Replace `<target_ip>` with the IP address or hostname of the target machine
+    - Replace `<ssh_user>` with the SSH username (e.g., dev, ubuntu)
+
+  * Verify that you are able to connect to the host using the following command:
+
+    ```bash
+    ansible all -m ping -i hosts.ini -k
+
+    # Expected output:
+    # <host_name> | SUCCESS => {
+    #   "ansible_facts": {
+    #     "discovered_interpreter_python": "/usr/bin/python3.10"
+    #   },
+    #   "changed": false,
+    #   "ping": "pong"
+    # }
+    ```
+
+  * Execute the `run-nitro-bridge.yml` Ansible playbook for remote deployment:
+
+    ```bash
+    LANG=en_US.utf8 ansible-playbook -i hosts.ini run-nitro-bridge.yml --extra-vars='{ "target_host": "nitro_host"}' --user $USER -kK
+    ```
+
+* Check logs for the deployments on the virtual machine:
+
+  ```bash
+  cd /srv/bridge
+
+  # Check the L2 nitro contract deployments
+  laconic-so deployment --dir bridge-deployment logs l2-nitro-contracts -f
+
+  # Check bridge logs, ensure that the node is running
+  laconic-so deployment --dir bridge-deployment logs nitro-bridge -f
+  ```
+
+* Create the Nitro node config for users:
+
+  ```bash
+  cd /srv/bridge
+
+  # Create required variables
+  GETH_CHAIN_ID="11155111"
+  OPTIMISM_CHAIN_ID="42069"
+
+  export NA_ADDRESS=$(laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "jq -r '.\"$GETH_CHAIN_ID\"[0].contracts.NitroAdjudicator.address' /app/deployment/nitro-addresses.json")
+  export CA_ADDRESS=$(laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "jq -r '.\"$GETH_CHAIN_ID\"[0].contracts.ConsensusApp.address' /app/deployment/nitro-addresses.json")
+  export VPA_ADDRESS=$(laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "jq -r '.\"$GETH_CHAIN_ID\"[0].contracts.VirtualPaymentApp.address' /app/deployment/nitro-addresses.json")
+
+  export L1_ASSET_ADDRESS=$(laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "jq -r '.\"$GETH_CHAIN_ID\"[0].contracts.Token.address' /app/deployment/nitro-addresses.json")
+
+  export BRIDGE_CONTRACT_ADDRESS=$(laconic-so deployment --dir bridge-deployment exec nitro-bridge "jq -r '.\"$OPTIMISM_CHAIN_ID\"[0].contracts.Bridge.address' /app/deployment/nitro-addresses.json")
+
+  export BRIDGE_NITRO_ADDRESS=$(laconic-so deployment --dir bridge-deployment exec nitro-rpc-client "nitro-rpc-client get-node-info -p 4006 -h nitro-bridge" | jq -r '.SCAddress')
+
+  export BRIDGE_PEER_ID=$(laconic-so deployment --dir bridge-deployment exec nitro-rpc-client "nitro-rpc-client get-node-info -p 4006 -h nitro-bridge" | jq -r '.MessageServicePeerId')
+
+  export L1_BRIDGE_MULTIADDR="/dns4/bridge.laconic.com/tcp/3005/p2p/$BRIDGE_PEER_ID"
+  export L2_BRIDGE_MULTIADDR="/dns4/bridge.laconic.com/tcp/3006/p2p/$BRIDGE_PEER_ID"
+
+  # Create the required config file
+  cat <<EOF > nitro-node-config.yml
+  nitro_l1_chain_url: "wss://sepolia.laconic.com"
+  nitro_l2_chain_url: "wss://optimism.laconic.com"
+  na_address: "$NA_ADDRESS"
+  ca_address: "$CA_ADDRESS"
+  vpa_address: "$VPA_ADDRESS"
+  l1_asset_address: "${L1_ASSET_ADDRESS}"
+  bridge_contract_address: "$BRIDGE_CONTRACT_ADDRESS"
+  bridge_nitro_address: "$BRIDGE_NITRO_ADDRESS"
+  nitro_l1_bridge_multiaddr: "$L1_BRIDGE_MULTIADDR"
+  nitro_l2_bridge_multiaddr: "$L2_BRIDGE_MULTIADDR"
+  EOF
+  ```
+
+  * The required config file should be generated at `/srv/bridge/nitro-node-config.yml`
+
+  * Check in the generated file at `ops/stage2/nitro-node-config.yml` within this repository
+
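+  * Optionally, sanity-check the generated file before checking it in (a sketch; uses `yq`, which node operators are already expected to have installed):
+
+    ```bash
+    # All values should be non-empty
+    yq eval '.' /srv/bridge/nitro-node-config.yml
+    ```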
+</details>
 
 stage0 laconicd
@@ -669,10 +1178,22 @@
 ## Domains / Port Mappings
 
 ```bash
-laconicd.laconic.com -> 26657
-laconicd.laconic.com/api -> 9473/api
-faucet.laconic.com -> 4000
-loro-signup.laconic.com -> 3000
-wallet.laconic.com -> 5000
-loro-console.laconic.com -> 4001
+# Machine 1
+https://laconicd.laconic.com -> 26657
+https://laconicd.laconic.com/api -> 9473/api
+https://faucet.laconic.com -> 4000
+https://loro-signup.laconic.com -> 3000
+https://wallet.laconic.com -> 5000
+https://loro-console.laconic.com -> 4001
+
+# Machine 2
+https://sepolia.laconic.com -> 8545
+wss://sepolia.laconic.com -> 8546
+https://optimism.laconic.com -> 9545
+wss://optimism.laconic.com -> 9546
+
+bridge.laconic.com
+Open ports:
+3005 (L1 side)
+3006 (L2 side)
 ```
diff --git a/scripts/package.json b/scripts/package.json
index fb8c8f1..c9c5d7e 100644
--- a/scripts/package.json
+++ b/scripts/package.json
@@ -22,7 +22,8 @@
   },
   "scripts": {
     "build": "tsc",
-    "map-subscribers-to-participants": "node dist/map-subscribers-to-participants.js"
+    "map-subscribers-to-participants": "node dist/map-subscribers-to-participants.js",
+    "participants-with-filtered-validators": "node dist/participants-with-filtered-validators.js"
   },
   "packageManager": "yarn@1.22.19+sha1.4ba7fc5c6e704fce2066ecbfb0b0d8976fe62447"
 }
diff --git a/testnet-nitro-node.md b/testnet-nitro-node.md
new file mode 100644
index 0000000..5c83184
--- /dev/null
+++ b/testnet-nitro-node.md
@@ -0,0 +1,305 @@
+# testnet-nitro-node
+
+## Prerequisites
+
+* Ansible: see [installation](https://git.vdb.to/cerc-io/testnet-ops#installation)
+
+* yq: see [installation](https://github.com/mikefarah/yq#install)
+
+* laconic-so: see [installation](https://git.vdb.to/cerc-io/testnet-ops/src/branch/main/stack-orchestrator-setup/README.md)
+
+* Check versions to verify installation:
+
+  ```bash
+  laconic-so version
+
+  ansible --version
+
+  yq --version
+  ```
+
+## Setup
+
+* Clone the `cerc-io/testnet-ops` repository:
+
+  ```bash
+  git clone git@git.vdb.to:cerc-io/testnet-ops.git
+
+  cd testnet-ops/nitro-node-setup
+  ```
+
+* Fetch the required Nitro node config:
+
+  ```bash
+  wget -O nitro-vars.yml https://git.vdb.to/cerc-io/testnet-laconicd-stack/raw/branch/main/ops/stage2/nitro-node-config.yml
+
+  # Expected variables in the fetched config file:
+
+  # nitro_l1_chain_url: ""
+  # nitro_l2_chain_url: ""
+  # na_address: ""
+  # ca_address: ""
+  # vpa_address: ""
+  # l1_asset_address: ""
+  # bridge_contract_address: ""
+  # bridge_nitro_address: ""
+  # nitro_l1_bridge_multiaddr: ""
+  # nitro_l2_bridge_multiaddr: ""
+  ```
+
+* TODO: Get L1 tokens on your address
+
+* Edit `nitro-vars.yml` and add the following variables:
+
+  ```bash
+  # Private key for your Nitro account (same as the one used in stage0 onboarding)
+  # Export the key from the Laconic wallet (https://wallet.laconic.com)
+  nitro_sc_pk: ""
+
+  # Private key for a funded account on L1
+  # This account should have L1 tokens for funding your Nitro channels
+  nitro_chain_pk: ""
+
+  # Multiaddr with a publicly accessible IP address / DNS for your L1 nitro node
+  # Use port 3007
+  # Example: "/ip4/192.168.x.y/tcp/3007"
+  # Example: "/dns4/example.com/tcp/3007"
+  nitro_l1_ext_multiaddr: ""
+
+  # Multiaddr with a publicly accessible IP address / DNS for your L2 nitro node
+  # Use port 3009
+  # Example: "/ip4/192.168.x.y/tcp/3009"
+  # Example: "/dns4/example.com/tcp/3009"
+  nitro_l2_ext_multiaddr: ""
+  ```
+
+* Update the target dir in `setup-vars.yml`:
+
+  ```bash
+  # Set the path to the desired deployments dir
+  DEPLOYMENTS_DIR=
+
+  sed -i "s|^nitro_directory:.*|nitro_directory: $DEPLOYMENTS_DIR/nitro-node|" setup-vars.yml
+
+  # Will create deployments at $DEPLOYMENTS_DIR/nitro-node/l1-nitro-deployment and $DEPLOYMENTS_DIR/nitro-node/l2-nitro-deployment
+  ```
+
+## Run Nitro Nodes
+
+### On Local Host
+
+* Set up and run a Nitro node (L1+L2) by executing the `run-nitro-nodes.yml` Ansible playbook:
+
+  ```bash
+  LANG=en_US.utf8 ansible-playbook -i localhost, --connection=local run-nitro-nodes.yml --extra-vars='{ "target_host": "localhost"}' --user $USER
+  ```
+
+### On Remote Host (optional)
+
+* Create a new `hosts.ini` file:
+
+  ```bash
+  cp ../hosts.example.ini hosts.ini
+  ```
+
+* Edit the [`hosts.ini`](./hosts.ini) file to run the playbook on a remote machine:
+
+  ```ini
+  [nitro_host]
+  <host_name> ansible_host=<target_ip> ansible_user=<ssh_user> ansible_ssh_common_args='-o ForwardAgent=yes'
+  ```
+
+  - Replace `<host_name>` with the alias of your choice
+  - Replace `<target_ip>` with the IP address or hostname of the target machine
+  - Replace `<ssh_user>` with the SSH username (e.g., dev, ubuntu)
+
+* Verify that you are able to connect to the host using the following command:
+
+  ```bash
+  ansible all -m ping -i hosts.ini -k
+
+  # Expected output:
+
+  # <host_name> | SUCCESS => {
+  #   "ansible_facts": {
+  #     "discovered_interpreter_python": "/usr/bin/python3.10"
+  #   },
+  #   "changed": false,
+  #   "ping": "pong"
+  # }
+  ```
+
+* Execute the `run-nitro-nodes.yml` Ansible playbook for remote deployment:
+
+  ```bash
+  LANG=en_US.utf8 ansible-playbook -i hosts.ini run-nitro-nodes.yml --extra-vars='{ "target_host": "nitro_host"}' --user $USER -kK
+  ```
+
+### Check Deployment Status
+
+* Run the following commands in the directory where the deployments were created:
+
+  ```bash
+  cd $DEPLOYMENTS_DIR/nitro-node
+
+  # Check the logs, ensure that the nodes are running
+  laconic-so deployment --dir l1-nitro-deployment logs nitro-node -f
+  laconic-so deployment --dir l2-nitro-deployment logs nitro-node -f
+  ```
+
+## Create Channels
+
+Create a ledger channel with the bridge on L1 that is mirrored on L2
+
+* Run the following commands from the directory where the deployments were created
+
+* Set the required variables:
+
+  ```bash
+  cd $DEPLOYMENTS_DIR/nitro-node
+
+  # Fetch the required Nitro node config
+  wget https://git.vdb.to/cerc-io/testnet-laconicd-stack/raw/branch/main/ops/stage2/nitro-node-config.yml
+
+  export BRIDGE_NITRO_ADDRESS=$(yq eval '.bridge_nitro_address' nitro-node-config.yml)
+  export L1_ASSET_ADDRESS=$(yq eval '.l1_asset_address' nitro-node-config.yml)
+  ```
+
+* Check that you have no existing channels on L1 or L2:
+
+  ```bash
+  laconic-so deployment --dir l1-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-all-ledger-channels -p 4005 -h nitro-node"
+  laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-all-ledger-channels -p 4005 -h nitro-node"
+
+  # Expected output:
+  # []
+  ```
+
+* Create a ledger channel between your L1 Nitro node and the bridge with the custom asset:
+
+  ```bash
+  # Set the amount to ledger
+  LEDGER_AMOUNT=1000000
+
+  laconic-so deployment --dir l1-nitro-deployment exec nitro-rpc-client "nitro-rpc-client direct-fund $BRIDGE_NITRO_ADDRESS --assetAddress $L1_ASSET_ADDRESS --alphaAmount $LEDGER_AMOUNT --betaAmount $LEDGER_AMOUNT -p 4005 -h nitro-node"
+
+  # Follow your L1 Nitro node logs for progress
+
+  # Expected output:
+  # Objective started DirectFunding-0x161d289a50222caa781db215bb82a3ede4f557217742245525b8e8cbff04ec21
+  # Channel Open 0x161d289a50222caa781db215bb82a3ede4f557217742245525b8e8cbff04ec21
+
+  # Set the resulting ledger channel id in a variable
+  export LEDGER_CHANNEL_ID=
+  ```
+
+  * Check the [Troubleshooting](#troubleshooting) section if the ledger channel creation command fails or gets stuck
+
+* Once the direct-fund objective is complete, the bridge will create a mirrored channel on L2
+
+* Check the L2 Nitro node's logs to see that a bridged-fund objective completed:
+
+  ```bash
+  laconic-so deployment --dir l2-nitro-deployment logs nitro-node -f --tail 30
+
+  # Expected output:
+  # nitro-node-1  | 5:01AM INF INFO Objective cranked address=0xaaa6628ec44a8a742987ef3a114ddfe2d4f7adce objective-id=bridgedfunding-0x6a9f5ccf1fa802525d794f4a899897f947615f6acc7141e61e056a8bfca29179 waiting-for=WaitingForNothing
+  # nitro-node-1  | 5:01AM INF INFO Objective is complete & returned to API address=0xaaa6628ec44a8a742987ef3a114ddfe2d4f7adce objective-id=bridgedfunding-0x6a9f5ccf1fa802525d794f4a899897f947615f6acc7141e61e056a8bfca29179
+  ```
+
+* Check the status of the L1 ledger channel with the bridge using the channel id:
+
+  ```bash
+  laconic-so deployment --dir l1-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-ledger-channel $LEDGER_CHANNEL_ID -p 4005 -h nitro-node"
+
+  # Expected output:
+  # {
+  #   ID: '0x161d289a50222caa781db215bb82a3ede4f557217742245525b8e8cbff04ec21',
+  #   Status: 'Open',
+  #   Balance: {
+  #     AssetAddress: '',
+  #     Me: '',
+  #     Them: '',
+  #     MyBalance: n,
+  #     TheirBalance: n
+  #   },
+  #   ChannelMode: 'Open'
+  # }
+  ```
+
+* Check the status of the mirrored channel on L2:
+
+  ```bash
+  laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-all-ledger-channels -p 4005 -h nitro-node"
+
+  # Expected output:
+  # [
+  #   {
+  #     "ID": "0x6a9f5ccf1fa802525d794f4a899897f947615f6acc7141e61e056a8bfca29179",
+  #     "Status": "Open",
+  #     "Balance": {
+  #       "AssetAddress": "",
+  #       "Me": "",
+  #       "Them": "",
+  #       "MyBalance": n,
+  #       "TheirBalance": n
+  #     },
+  #     "ChannelMode": "Open"
+  #   }
+  # ]
+  ```
+
+## Clean up
+
+* Switch to the deployments dir:
+
+  ```bash
+  cd $DEPLOYMENTS_DIR/nitro-node
+  ```
+
+* Stop all Nitro services running in the background:
+
+  ```bash
+  laconic-so deployment --dir l1-nitro-deployment stop
+  laconic-so deployment --dir l2-nitro-deployment stop
+  ```
+
+* To stop all services and also delete data:
+
+  ```bash
+  laconic-so deployment --dir l1-nitro-deployment stop --delete-volumes
+  laconic-so deployment --dir l2-nitro-deployment stop --delete-volumes
+
+  # Remove deployment directories (deployments will have to be recreated for a re-run)
+  sudo rm -r l1-nitro-deployment
+  sudo rm -r l2-nitro-deployment
+  ```
+
+## Troubleshooting
+
+* Stop (`Ctrl+C`) the direct-fund command if it is stuck
+
+* Restart the L1 Nitro node:
+
+  * Stop the deployment:
+
+    ```bash
+    cd $DEPLOYMENTS_DIR/nitro-node
+
+    laconic-so deployment --dir l1-nitro-deployment stop
+    ```
+
+  * Reset the node's durable store:
+
+    ```bash
+    sudo rm -rf l1-nitro-deployment/data/nitro_node_data
+
+    mkdir l1-nitro-deployment/data/nitro_node_data
+    ```
+
+  * Restart the deployment:
+
+    ```bash
+    laconic-so deployment --dir l1-nitro-deployment start
+    ```
+
+* Retry the ledger channel creation command
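+
+* After a successful retry, verify that the ledger channel was created (same command as in the Create Channels section):
+
+  ```bash
+  laconic-so deployment --dir l1-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-all-ledger-channels -p 4005 -h nitro-node"
+  ```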