Add Nitro node config for users and payments instructions #28
@@ -2,7 +2,7 @@

## Login

* Log in as `dev` user on the deployments VM
* Log in as `dev` user on the deployments machine

* All the deployments are placed in the `/srv` directory:

@@ -22,9 +22,9 @@

* Ansible: see [installation](https://git.vdb.to/cerc-io/testnet-ops#installation)

* On deployments VM(s):
* On deployments machine(s):

* laconic-so: see [installation](https://git.vdb.to/cerc-io/testnet-ops/src/branch/main/stack-orchestrator-setup/README.md)
* laconic-so: see [installation](https://git.vdb.to/cerc-io/testnet-ops/src/branch/main/stack-orchestrator-setup/README.md#setup-stack-orchestrator)

<details open>
<summary>L2 Optimism</summary>
@@ -39,7 +39,7 @@

* Target dir: `/srv/op-sepolia/optimism-deployment`

* Cleanup an existing deployment on VM if required:
* Cleanup an existing deployment if required:

```bash
cd /srv/op-sepolia
@@ -114,10 +114,10 @@
<host_name> ansible_host=<target_ip> ansible_user=<ssh_user> ansible_ssh_common_args='-o ForwardAgent=yes'
```

- Replace `<deployment_host>` with `l2_host`
- Replace `<host_name>` with the alias of your choice
- Replace `<target_ip>` with the IP address or hostname of the target machine
- Replace `<ssh_user>` with the SSH username (e.g., dev, ubuntu)
* Replace `<deployment_host>` with `l2_host`
* Replace `<host_name>` with the alias of your choice
* Replace `<target_ip>` with the IP address or hostname of the target machine
* Replace `<ssh_user>` with the SSH username (e.g., dev, ubuntu)

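For illustration, a filled-in `hosts.ini` for this step might look as follows (hypothetical alias, address, and SSH user; substitute your own values):

```bash
# Example only: write a hosts.ini entry for a hypothetical remote L2 deployment host
cat > hosts.ini <<'EOF'
[l2_host]
op-sepolia-vm ansible_host=192.0.2.10 ansible_user=dev ansible_ssh_common_args='-o ForwardAgent=yes'
EOF
```
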
* Verify that you are able to connect to the host using the following command:

@@ -142,12 +142,12 @@

* Bridge funds on L2:

* On the deployment VM, set the following variables:
* On the deployment machine, set the following variables:

```bash
cd /srv/op-sepolia

L1_RPC=http://host.docker.internal:8545
L1_RPC=http://localhost:8545
L2_RPC=http://op-geth:8545

NETWORK=$(grep 'cluster-id' optimism-deployment/deployment.yml | sed 's/cluster-id: //')_default
@@ -176,7 +176,7 @@
* Use cast to send ETH to the bridge contract:

```bash
docker run --rm cerc/optimism-contracts:local "cast send --from $ACCOUNT --value 1ether $BRIDGE --rpc-url $L1_RPC --private-key $ACCOUNT_PK"
docker run --rm --network host cerc/optimism-contracts:local "cast send --from $ACCOUNT --value 1ether $BRIDGE --rpc-url $L1_RPC --private-key $ACCOUNT_PK"
```

* Allow a couple minutes for the bridge to complete

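Optionally, once the deposit has been picked up, the bridged balance can be verified from the L2 side with `cast`; a minimal sketch, assuming the `NETWORK`, `L2_RPC`, and `ACCOUNT` variables from the surrounding steps are set:

```bash
# Query the account's ETH balance on L2 through the deployment's docker network
docker run --rm --network "$NETWORK" cerc/optimism-contracts:local "cast balance $ACCOUNT --rpc-url $L2_RPC"
```
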
@@ -202,7 +202,7 @@

* Target dir: `/srv/bridge/nitro-contracts-deployment`

* Cleanup an existing deployment on VM if required:
* Cleanup an existing deployment if required:

```bash
cd /srv/bridge
@@ -272,10 +272,10 @@
<host_name> ansible_host=<target_ip> ansible_user=<ssh_user> ansible_ssh_common_args='-o ForwardAgent=yes'
```

- Replace `<deployment_host>` with `nitro_host`
- Replace `<host_name>` with the alias of your choice
- Replace `<target_ip>` with the IP address or hostname of the target machine
- Replace `<ssh_user>` with the SSH username (e.g., dev, ubuntu)
* Replace `<deployment_host>` with `nitro_host`
* Replace `<host_name>` with the alias of your choice
* Replace `<target_ip>` with the IP address or hostname of the target machine
* Replace `<ssh_user>` with the SSH username (e.g., dev, ubuntu)

* Verify that you are able to connect to the host using the following command:

@@ -298,7 +298,7 @@
LANG=en_US.utf8 ansible-playbook -i hosts.ini deploy-contracts.yml --extra-vars='{ "target_host": "nitro_host"}' --user $USER -kK
```

* Check logs for deployment on the virtual machine:
* Check logs for deployment on the remote machine:

```bash
cd /srv/bridge
@@ -320,7 +320,7 @@

* Target dir: `/srv/bridge/bridge-deployment`

* Cleanup an existing deployment on VM if required:
* Cleanup an existing deployment if required:

```bash
cd /srv/bridge
@@ -334,7 +334,7 @@

### Setup

* Execute the command on the deployment VM to get the deployed L1 Nitro contract addresses along with the L1 asset address:
* Execute the following command on the deployment machine to get the deployed L1 Nitro contract addresses along with the L1 asset address:

```bash
cd /srv/bridge
@@ -439,10 +439,10 @@
<host_name> ansible_host=<target_ip> ansible_user=<ssh_user> ansible_ssh_common_args='-o ForwardAgent=yes'
```

- Replace `<deployment_host>` with `nitro_host`
- Replace `<host_name>` with the alias of your choice
- Replace `<target_ip>` with the IP address or hostname of the target machine
- Replace `<ssh_user>` with the SSH username (e.g., dev, ubuntu)
* Replace `<deployment_host>` with `nitro_host`
* Replace `<host_name>` with the alias of your choice
* Replace `<target_ip>` with the IP address or hostname of the target machine
* Replace `<ssh_user>` with the SSH username (e.g., dev, ubuntu)

* Verify that you are able to connect to the host using the following command:

@@ -465,7 +465,7 @@
LANG=en_US.utf8 ansible-playbook -i hosts.ini run-nitro-bridge.yml --extra-vars='{ "target_host": "nitro_host"}' --user $USER -kK
```

* Check logs for deployments on the virtual machine:
* Check logs for deployments on the remote machine:

```bash
cd /srv/bridge
@@ -494,9 +494,9 @@

export BRIDGE_CONTRACT_ADDRESS=$(laconic-so deployment --dir bridge-deployment exec nitro-bridge "jq -r '.\"$OPTIMISM_CHAIN_ID\"[0].contracts.Bridge.address' /app/deployment/nitro-addresses.json")

export BRIDGE_NITRO_ADDRESS=$(laconic-so deployment --dir bridge-deployment exec nitro-rpc-client "nitro-rpc-client get-node-info -p 4006 -h nitro-bridge" | jq -r '.SCAddress')
export BRIDGE_NITRO_ADDRESS=$(laconic-so deployment --dir bridge-deployment exec nitro-rpc-client "nitro-rpc-client get-node-info -p 4005 -h nitro-bridge" | jq -r '.SCAddress')

export BRIDGE_PEER_ID=$(laconic-so deployment --dir bridge-deployment exec nitro-rpc-client "nitro-rpc-client get-node-info -p 4006 -h nitro-bridge" | jq -r '.MessageServicePeerId')
export BRIDGE_PEER_ID=$(laconic-so deployment --dir bridge-deployment exec nitro-rpc-client "nitro-rpc-client get-node-info -p 4005 -h nitro-bridge" | jq -r '.MessageServicePeerId')

export L1_BRIDGE_MULTIADDR="/dns4/bridge.laconic.com/tcp/3005/p2p/$BRIDGE_PEER_ID"
export L2_BRIDGE_MULTIADDR="/dns4/bridge.laconic.com/tcp/3006/p2p/$BRIDGE_PEER_ID"
@@ -520,6 +520,12 @@

* Check in the generated file at location `ops/stage2/nitro-node-config.yml` within this repository

* List the L2 channels created by the bridge:

```bash
laconic-so deployment --dir bridge-deployment exec nitro-rpc-client "nitro-rpc-client get-all-l2-channels -p 4006 -h nitro-bridge"
```

</details>

<details open>

ops/stage2/nitro-node-config.yml (new file)
@@ -0,0 +1,10 @@
nitro_l1_chain_url: "wss://sepolia.laconic.com"
nitro_l2_chain_url: "wss://optimism.laconic.com"
na_address: "0xfD5276DDfE0E7738Af5F3dA0dE58D36560BbE544"
ca_address: "0xC71F47d58d521aE24FDf5e324969aCD6f83b6Ff8"
vpa_address: "0xEA55dEab3718CF4d084a94Fe4C0D750a80Eb1F2C"
l1_asset_address: "0xa4351114dAE1aBEb2d552d441C9733c72682a45D"
bridge_contract_address: "0x0fCC47652bd8Fa5ED4192DD6238B4d523B34D724"
bridge_nitro_address: "0xf0E6a85C6D23AcA9ff1b83477D426ed26F218185"
nitro_l1_bridge_multiaddr: "/dns4/bridge.laconic.com/tcp/3005/p2p/16Uiu2HAky2PYTfBNHpytybz4ADY9n7boiLgK5btJpTrGVbWC3diZ"
nitro_l2_bridge_multiaddr: "/dns4/bridge.laconic.com/tcp/3006/p2p/16Uiu2HAky2PYTfBNHpytybz4ADY9n7boiLgK5btJpTrGVbWC3diZ"

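This is the file that node operators consume in the setup steps further below; for example, it can be fetched and inspected with `yq` (same URL and pattern as used later in this PR):

```bash
# Fetch the published Nitro node config and read a couple of fields with yq
wget https://git.vdb.to/cerc-io/testnet-laconicd-stack/raw/branch/main/ops/stage2/nitro-node-config.yml
yq eval '.bridge_nitro_address' nitro-node-config.yml
yq eval '.nitro_l1_chain_url' nitro-node-config.yml
```
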
@@ -2,30 +2,26 @@

## Prerequisites

* Ansible: see [installation](https://git.vdb.to/cerc-io/testnet-ops#installation)
* Local:

* yq: see [installation](https://github.com/mikefarah/yq#install)
* Clone the `cerc-io/testnet-ops` repository:

* laconic-so: see [installation](https://git.vdb.to/cerc-io/testnet-ops/src/branch/main/stack-orchestrator-setup/README.md)
```bash
git clone git@git.vdb.to:cerc-io/testnet-ops.git
```

* Check versions to verify installation:
* Ansible: see [installation](https://git.vdb.to/cerc-io/testnet-ops#installation)

```bash
laconic-so version
* On deployment machine:

ansible --version

yq --version
```
* laconic-so: see [installation](https://git.vdb.to/cerc-io/testnet-ops/src/branch/main/stack-orchestrator-setup/README.md#setup-stack-orchestrator)

## Setup

* Clone the `cerc-io/testnet-ops` repository:
* Move to `nitro-nodes-setup`:

```bash
git clone git@git.vdb.to:cerc-io/testnet-ops.git

cd testnet-ops/nitro-node-setup
cd testnet-ops/nitro-nodes-setup
```

* Fetch the required Nitro node config:
@@ -76,7 +72,7 @@
* Update the target dir in `setup-vars.yml`:

```bash
# Set path to desired deployments dir
# Set path to desired deployments dir (under your user)
DEPLOYMENTS_DIR=<path-to-deployments-dir>

sed -i "s|^nitro_directory:.*|nitro_directory: $DEPLOYMENTS_DIR/nitro-node|" setup-vars.yml
@@ -86,6 +82,8 @@

## Run Nitro Nodes

Nitro nodes can be run using Ansible either locally or on a remote machine; follow the corresponding steps for your setup

### On Local Host

* Set up and run a Nitro node (L1+L2) by executing the `run-nitro-nodes.yml` Ansible playbook:
@@ -94,9 +92,9 @@
LANG=en_US.utf8 ansible-playbook -i localhost, --connection=local run-nitro-nodes.yml --extra-vars='{ "target_host": "localhost"}' --user $USER
```

### On Remote Host (optional)
### On Remote Host

* Create a new `hosts.ini` file:
* In `testnet-ops/nitro-nodes-setup`, create a new `hosts.ini` file:

```bash
cp ../hosts.example.ini hosts.ini
@@ -105,20 +103,22 @@
* Edit the [`hosts.ini`](./hosts.ini) file to run the playbook on a remote machine:

```ini
[deployment_host]
[<deployment_host>]
<host_name> ansible_host=<target_ip> ansible_user=<ssh_user> ansible_ssh_common_args='-o ForwardAgent=yes'
```

- Replace `<deployment_host>` with `nitro_host`
- Replace `<host_name>` with the alias of your choice
- Replace `<target_ip>` with the IP address or hostname of the target machine
- Replace `<ssh_user>` with the SSH username (e.g., dev, ubuntu)
* Replace `<deployment_host>` with `nitro_host`
* Replace `<host_name>` with the alias of your choice
* Replace `<target_ip>` with the IP address or hostname of the target machine
* Replace `<ssh_user>` with the SSH username (e.g., dev, ubuntu)

* Verify that you are able to connect to the host using the following command:

```bash
ansible all -m ping -i hosts.ini -k

# If using password based authentication, enter the ssh password on prompt; otherwise, leave it blank

# Expected output:

# <host_name> | SUCCESS => {
@@ -134,32 +134,56 @@
```bash
LANG=en_US.utf8 ansible-playbook -i hosts.ini run-nitro-nodes.yml --extra-vars='{ "target_host": "nitro_host"}' --user $USER -kK

# If using password based authentication, enter the ssh password on prompt; otherwise, leave it blank
# Enter the sudo password as "BECOME password" on prompt
```

### Check Deployment Status

* Run the following command in the directory where the deployments are created:
* Run the following commands on the deployment machine:

```bash
DEPLOYMENTS_DIR=<path-to-deployments-dir>

cd $DEPLOYMENTS_DIR/nitro-node

# Check the logs, ensure that the nodes are running
laconic-so deployment --dir l1-nitro-deployment logs nitro-node -f
laconic-so deployment --dir l2-nitro-deployment logs nitro-node -f

# Let L1 node sync up with the chain
# Expected logs after sync:
# nitro-node-1 | 2:04PM INF Initializing Http RPC transport...
# nitro-node-1 | 2:04PM INF Completed RPC server initialization url=127.0.0.1:4005/api/v1
```

* Get your Nitro node's info:

```bash
laconic-so deployment --dir l1-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-node-info -p 4005 -h nitro-node"

# Expected output:
# {
# "SCAddress": "0xd0eA8b27591b1D070cCcD4D30b8D408fe794FDfc",
# "MessageServicePeerId": "16Uiu2HAmSHRjoxveaPmJipzmdq69U8zme8BMnFjSBPferj1E5XAd"
# }

# SCAddress -> nitro address, MessageServicePeerId -> libp2p peer id
```

## Create Channels

Create a ledger channel with the bridge on L1 which is mirrored on L2

* Run the following commands from the directory where the deployments are created
* Run the following commands on the deployment machine

* Set required variables:

```bash
cd $DEPLOYMENTS_DIR/nitro-node
DEPLOYMENTS_DIR=<path-to-deployments-dir>

# Fetch the required Nitro node config
wget https://git.vdb.to/cerc-io/testnet-laconicd-stack/raw/branch/main/ops/stage2/nitro-node-config.yml
cd $DEPLOYMENTS_DIR/nitro-node

export BRIDGE_NITRO_ADDRESS=$(yq eval '.bridge_nitro_address' nitro-node-config.yml)
export L1_ASSET_ADDRESS=$(yq eval '.l1_asset_address' nitro-node-config.yml)
@@ -249,6 +273,150 @@ Create a ledger channel with the bridge on L1 which is mirrored on L2
# ]
```

## Payments On L2 Channel

Perform payments using a virtual payment channel created with another Nitro node over the mirrored L2 channel with the bridge as an intermediary

* Run the following commands on the deployment machine

* Switch to the `nitro-node` directory:

```bash
DEPLOYMENTS_DIR=<path-to-deployments-dir>

cd $DEPLOYMENTS_DIR/nitro-node
```

* Check the status of the mirrored channel on L2:

```bash
laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-all-ledger-channels -p 4005 -h nitro-node"

# Expected output:
# [
# {
# "ID": "0x6a9f5ccf1fa802525d794f4a899897f947615f6acc7141e61e056a8bfca29179",
# "Status": "Open",
# "Balance": {
# "AssetAddress": "<l2-asset-address>",
# "Me": "<your-nitro-address>",
# "Them": "<bridge-nitro-address>",
# "MyBalance": Xn,
# "TheirBalance": Yn
# },
# "ChannelMode": "Open"
# }
# ]
```

* Set required variables:

```bash
export BRIDGE_NITRO_ADDRESS=$(yq eval '.bridge_nitro_address' nitro-node-config.yml)

# Counterparty to create the payment channel with
export COUNTER_PARTY_ADDRESS=<counterparty-nitro-address>

# Mirrored channel on L2
export L2_CHANNEL_ID=<l2-channel-id>

# Amount to create the payment channel with
export PAYMENT_CHANNEL_AMOUNT=10000
```

* Check for existing payment channels for the L2 channel:

```bash
laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-payment-channels-by-ledger $L2_CHANNEL_ID -p 4005 -h nitro-node"
```

* Create a virtual payment channel:

```bash
laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client virtual-fund $COUNTER_PARTY_ADDRESS $BRIDGE_NITRO_ADDRESS --amount $PAYMENT_CHANNEL_AMOUNT -p 4005 -h nitro-node"

# Follow your L2 Nitro node logs for progress

# Expected Output:
# Objective started VirtualFund-0x43db45a101658387263b36d613322cc952d8ce5b70de51e3a495513c256bef4d
# Channel Open 0x43db45a101658387263b36d613322cc952d8ce5b70de51e3a495513c256bef4d

# Set the resulting payment channel id in a variable
PAYMENT_CHANNEL_ID=<payment-channel-id>
```

Multiple virtual payment channels can be created at once

* Check the payment channel's status:

```bash
laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-payment-channel $PAYMENT_CHANNEL_ID -p 4005 -h nitro-node"

# Expected output:
# {
# ID: '0xb29aeb32c9495a793ebf7bd116232075d1e7bfe89fc82281c7d498e3ffd3e3bf',
# Status: 'Open',
# Balance: {
# AssetAddress: '0x0000000000000000000000000000000000000000',
# Payee: '<your-nitro-address>',
# Payer: '<counterparty-nitro-address>',
# PaidSoFar: 0n,
# RemainingFunds: <payment-channel-amount>n
# }
# }
```

* Send payments using the virtual payment channel:

```bash
export PAY_AMOUNT=200
laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client pay $PAYMENT_CHANNEL_ID $PAY_AMOUNT -p 4005 -h nitro-node"

# Expected output
# {
# Amount: <pay-amount>,
# Channel: '<payment-channel-id>'
# }

# This can be done multiple times until the payment channel balance is exhausted
```

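As the comment above notes, payments can be repeated until the channel balance is exhausted; a minimal scripted sketch, assuming the variables set in the previous steps:

```bash
# Illustrative only: send five payments of $PAY_AMOUNT each over the same payment channel
for i in $(seq 1 5); do
  laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client pay $PAYMENT_CHANNEL_ID $PAY_AMOUNT -p 4005 -h nitro-node"
done
```
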
* Check the payment channel's status again to view the updated channel state

* Close the payment channel to settle on the L2 mirrored channel:

```bash
laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client virtual-defund $PAYMENT_CHANNEL_ID -p 4005 -h nitro-node"

# Expected output:
# Objective started VirtualDefund-0x43db45a101658387263b36d613322cc952d8ce5b70de51e3a495513c256bef4d
# Channel complete 0x43db45a101658387263b36d613322cc952d8ce5b70de51e3a495513c256bef4d
```

* Check the L2 mirrored channel's status after the virtual payment channel is closed:

```bash
laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-all-ledger-channels -p 4005 -h nitro-node"

# Expected output:
# [
# {
# "ID": "0x6a9f5ccf1fa802525d794f4a899897f947615f6acc7141e61e056a8bfca29179",
# "Status": "Open",
# "Balance": {
# "AssetAddress": "<l2-asset-address>",
# "Me": "<your-nitro-address>",
# "Them": "<bridge-nitro-address>",
# "MyBalance": <your-updated-balance>n,
# "TheirBalance": <bridge-updated-balance>n
# },
# "ChannelMode": "Open"
# }
# ]
```

Your balance on the L2 channel should be reduced by the total amount paid on the virtual payment channel (for example, two payments of 200 reduce it by 400)

## Clean up

* Switch to the deployments dir:
@@ -879,7 +879,7 @@
- Open new terminal, check that no channels exist on L2

```bash
laconic-so deployment --dir bridge-deployment exec nitro-rpc-client "nitro-rpc-client get-all-l2-channels -p 4006 -h nitro-bridge"
laconic-so deployment --dir bridge-deployment exec nitro-rpc-client "nitro-rpc-client get-all-l2-channels -p 4005 -h nitro-bridge"
```

- Set address of bridge and address of custom token on L1 in the current terminal
@@ -952,7 +952,7 @@
- Check status of all L2 mirrored ledger channels

```bash
laconic-so deployment --dir bridge-deployment exec nitro-rpc-client "nitro-rpc-client get-all-l2-channels -p 4006 -h nitro-bridge"
laconic-so deployment --dir bridge-deployment exec nitro-rpc-client "nitro-rpc-client get-all-l2-channels -p 4005 -h nitro-bridge"

# Expected output:
# {"ID":"0x15dbe6b996e4e46fdd6ea3e2074cbca58014dbb07368e3e7ba286df5c7b9da0d","Status":"Open","Balance":{"AssetAddress":"<Token_address_on_L2>","Me":"0xbbb676f9cff8d242e9eac39d063848807d3d1d94","Them":"0xa8d2d06ace9c7ffc24ee785c2695678aecdfd7a0","MyBalance":1000000,"TheirBalance":1000000},"ChannelMode":"Open"}