Nitro Stack Demo

Setup

L1 Ethereum and L2 Optimism Stacks

  • Clone the stack repo:

    laconic-so fetch-stack git.vdb.to/cerc-io/fixturenet-eth-stacks --pull
    laconic-so fetch-stack git.vdb.to/cerc-io/fixturenet-optimism-stack --pull
    
  • Clone required repositories:

    # L1 (fixturenet-eth)
    laconic-so --stack ~/cerc/fixturenet-eth-stacks/stack-orchestrator/stacks/fixturenet-eth setup-repositories --pull
    
    # L2 (optimism)
    laconic-so --stack ~/cerc/fixturenet-optimism-stack/stack/fixturenet-optimism setup-repositories --pull
    
    # If this fails because a repository is already checked out to a branch/tag, remove that stack's repositories and re-run the command
    # The repositories are located in $HOME/cerc by default
    
  • Build the container images:

    # Remove any older foundry image with `latest` tag
    docker rmi ghcr.io/foundry-rs/foundry:latest
    
    # L1 (fixturenet-eth)
    laconic-so --stack ~/cerc/fixturenet-eth-stacks/stack-orchestrator/stacks/fixturenet-eth build-containers --force-rebuild
    
    # L2 (optimism)
    laconic-so --stack ~/cerc/fixturenet-optimism-stack/stack/fixturenet-optimism build-containers --force-rebuild
    
    # If errors are thrown during the build, old images used by this stack may need to be deleted
    
    • NOTE: this will take more than 10 minutes depending on the specs of your machine, and requires at least 16 GB of memory.

    • Remove any dangling Docker images (to clear up space):

      docker image prune
      
  • Create spec files for deployments, which will map the stack's ports and volumes to the host:

    laconic-so --stack ~/cerc/fixturenet-eth-stacks/stack-orchestrator/stacks/fixturenet-eth deploy init --output fixturenet-eth-spec.yml
    
    laconic-so --stack ~/cerc/fixturenet-optimism-stack/stack/fixturenet-optimism deploy init --output fixturenet-optimism-spec.yml
    
  • Configure ports:

    • fixturenet-eth-spec.yml

      ...
      network:
        ports:
          fixturenet-eth-bootnode-geth:
            - '9898:9898'
            - '30303'
          fixturenet-eth-geth-1:
            - '8545:8545'
            - '8546:8546'
            - '40000'
            - '6060'
          fixturenet-eth-lighthouse-1:
            - '8001'
      ...
      
    • fixturenet-optimism-spec.yml

      ...
      network:
        ports:
          op-geth:
            - '9545:8545'
            - '9546:8546'
          ...
      
  • Create deployments: Once you've made any needed changes to the spec files, create deployments from them:

    laconic-so --stack ~/cerc/fixturenet-eth-stacks/stack-orchestrator/stacks/fixturenet-eth deploy create --spec-file fixturenet-eth-spec.yml --deployment-dir fixturenet-eth-deployment
    
    laconic-so --stack ~/cerc/fixturenet-optimism-stack/stack/fixturenet-optimism deploy create --spec-file fixturenet-optimism-spec.yml --deployment-dir fixturenet-optimism-deployment
    
    # Place them both in the same namespace (cluster)
    cp fixturenet-eth-deployment/deployment.yml fixturenet-optimism-deployment/deployment.yml
    
  • Set the env configuration for the L1 deployment:

    cat <<EOF > fixturenet-eth-deployment/config.env
    CERC_ALLOW_UNPROTECTED_TXS=true
    EOF
    

Go-nitro

  • Clone the stack repo:

    laconic-so fetch-stack git.vdb.to/cerc-io/nitro-stack --git-ssh --pull
    
  • Clone required repositories:

    laconic-so --stack ~/cerc/nitro-stack/stack-orchestrator/stacks/nitro-node setup-repositories --git-ssh --pull
    
  • Build the container images:

    laconic-so --stack ~/cerc/nitro-stack/stack-orchestrator/stacks/nitro-node build-containers --force-rebuild
    
  • Create a deployment spec-file for Alice's L1 nitro-node:

    • Create spec file for the deployment:

      laconic-so --stack ~/cerc/nitro-stack/stack-orchestrator/stacks/nitro-node deploy init --output l1alice-nitro-spec.yml
      
    • Edit network in the spec file to map container ports to host ports as required:

      # l1alice-nitro-spec.yml
      ...
      network:
        ports:
          nitro-node:
            - 3007:3005
            - 4007:4005
      
  • Create a deployment spec-file for Charlie's L1 nitro-node:

    • Create spec file for the deployment:

      laconic-so --stack ~/cerc/nitro-stack/stack-orchestrator/stacks/nitro-node deploy init --output l1charlie-nitro-spec.yml
      
    • Edit network in the spec file to map container ports to host ports as required:

      # l1charlie-nitro-spec.yml
      ...
      network:
        ports:
          nitro-node:
            - 3008:3005
            - 4008:4005
      
  • Create a deployment spec-file for Alice's L2 nitro-node:

    • Create spec file for the deployment:

      laconic-so --stack ~/cerc/nitro-stack/stack-orchestrator/stacks/nitro-node deploy init --output l2alice-nitro-spec.yml
      
    • Edit network in the spec file to map container ports to host ports as required:

      # l2alice-nitro-spec.yml
      ...
      network:
        ports:
          nitro-node:
            - 3009:3005
            - 4009:4005
      
  • Create a deployment spec-file for Charlie's L2 nitro-node:

    • Create spec file for the deployment:

      laconic-so --stack ~/cerc/nitro-stack/stack-orchestrator/stacks/nitro-node deploy init --output l2charlie-nitro-spec.yml
      
    • Edit network in the spec file to map container ports to host ports as required:

      # l2charlie-nitro-spec.yml
      ...
      network:
        ports:
          nitro-node:
            - 3010:3005
            - 4010:4005
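
  For reference, the host port mappings configured above (in this demo, container port 3005 is used as the node's p2p port and 4005 as its RPC port, per the env and rpc-client settings later in this doc):

    Node          p2p (3005)   RPC (4005)
    Alice (L1)    3007         4007
    Charlie (L1)  3008         4008
    Alice (L2)    3009         4009
    Charlie (L2)  3010         4010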
      

Run

  • Start the L1 and L2 stacks (run these steps in the directory where the stack deployments were created):

    • Start fixturenet-eth-deployment deployment:

      laconic-so deployment --dir fixturenet-eth-deployment start
      
      • Check status of L1

        • Run status check:

          laconic-so deployment --dir fixturenet-eth-deployment exec fixturenet-eth-bootnode-lighthouse "/scripts/status-internal.sh"
          
        • Check geth logs to ensure that new blocks are being created:

          laconic-so deployment --dir fixturenet-eth-deployment logs -f fixturenet-eth-geth-1
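
          You can also query the mapped L1 RPC endpoint directly from the host; a quick sketch using plain JSON-RPC (assuming the 8545 port mapping from fixturenet-eth-spec.yml above), where the returned hex block number should increase between calls:

          curl -s -X POST -H 'Content-Type: application/json' \
            --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
            http://localhost:8545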
          
    • Start fixturenet-optimism-deployment deployment:

      laconic-so deployment --dir fixturenet-optimism-deployment start
      

      NOTE: The fixturenet-optimism-contracts service will configure and deploy the Optimism contracts to L1, exiting when complete. This may take several minutes; you can track its progress in the container's logs:

      • Follow optimism contracts deployment logs:

        laconic-so deployment --dir fixturenet-optimism-deployment logs -f fixturenet-optimism-contracts
        
      • Check L2 logs:

        laconic-so deployment --dir fixturenet-optimism-deployment logs -f op-geth
        
        # Ensure new blocks are being created
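
        # Optionally, query the mapped L2 RPC endpoint from the host (assuming the
        # 9545:8545 mapping from fixturenet-optimism-spec.yml above); the hex block
        # number returned should increase between calls
        curl -s -X POST -H 'Content-Type: application/json' \
          --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
          http://localhost:9545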
        
  • Send ETH from L1 to L2 (run these steps in the directory where the stack deployments were created)

    • Get information about funded accounts on L1

      curl 127.0.0.1:9898/accounts.csv
      
    • Send some ETH from the desired account to the L1StandardBridgeProxy contract on L1 to bridge it to L2:

      • Set the following variables:

        # RPC endpoints as seen from inside the deployment containers
        L1_RPC=http://fixturenet-eth-geth-1:8545
        L2_RPC=http://op-geth:8545

        # L1 chain id and the account to bridge funds from
        DEPLOYMENT_CONTEXT=1212
        ACCOUNT=0xe6CE22afe802CAf5fF7d3845cec8c736ecc8d61F
        
      • Read the bridge contract address from the L1 deployment records in the op-node container:

        BRIDGE=$(laconic-so deployment --dir fixturenet-optimism-deployment exec op-node "cat /l1-deployment/$DEPLOYMENT_CONTEXT-deploy.json" | jq -r .L1StandardBridgeProxy)
        
        # Get the account's private key (the AdminKey from the L2 accounts file)
        ACCOUNT_PK=$(laconic-so deployment --dir fixturenet-optimism-deployment exec op-node "jq -r '.AdminKey' /l2-accounts/accounts.json")
        
      • Use cast to send ETH to the bridge contract:

        laconic-so deployment --dir fixturenet-eth-deployment exec foundry "cast send --from $ACCOUNT --value 1ether $BRIDGE --rpc-url $L1_RPC --private-key $ACCOUNT_PK"
        

        NOTE: This sends funds to the contracts deployer account (which is also the bridge node's account) on L2

      • Allow a couple of minutes for the bridge deposit to complete
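
        A simple way to wait is to poll the L2 balance until the deposit shows up; this sketch reuses the foundry container, $ACCOUNT and $L2_RPC from the steps above:

        # Poll the L2 balance every ~15s (Ctrl-C once it reflects the bridged amount)
        while true; do
          laconic-so deployment --dir fixturenet-eth-deployment exec foundry "cast balance $ACCOUNT --rpc-url $L2_RPC"
          sleep 15
        done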

    • Check balance on L2

      laconic-so deployment --dir fixturenet-eth-deployment exec foundry "cast balance $ACCOUNT --rpc-url $L2_RPC"
      
      # Example output (balance in wei):
      # 100000000000000000
      
  • Run the bridge:

    • Create a spec-file for the deployment, map container ports to host ports and set env variables:

      laconic-so --stack ~/cerc/nitro-stack/stack-orchestrator/stacks/bridge deploy init --map-ports-to-host any-same --output bridge-nitro-spec.yml --config "NITRO_L1_CHAIN_URL=ws://host.docker.internal:8546,NITRO_L2_CHAIN_URL=ws://host.docker.internal:9546,NITRO_CHAIN_PK=888814df89c4358d7ddb3fa4b0213e7331239a80e1f013eaa7b2deca2a41a218,NITRO_SC_PK=0279651921cd800ac560c21ceea27aab0107b67daf436cdd25ce84cad30159b4,GETH_URL=http://host.docker.internal:8545,OPTIMISM_URL=http://host.docker.internal:9545,GETH_DEPLOYER_PK=$ACCOUNT_PK,OPTIMISM_DEPLOYER_PK=$ACCOUNT_PK,TOKEN_NAME=LaconicNetworkToken,TOKEN_SYMBOL=LNT,INITIAL_TOKEN_SUPPLY=129600"
      
    • Create a deployment from the spec file:

      laconic-so --stack ~/cerc/nitro-stack/stack-orchestrator/stacks/bridge deploy create --spec-file bridge-nitro-spec.yml --deployment-dir bridge-deployment
      
    • Start the nitro bridge

      laconic-so deployment --dir bridge-deployment start
      
      # Check the logs, ensure that the node is running
      laconic-so deployment --dir bridge-deployment logs nitro-bridge -f
      
  • Send custom tokens to Alice and Charlie on L1

    • Export variables for the L1 token address and for Alice's and Charlie's chain addresses

      export L1_ASSET_ADDRESS="$(laconic-so deployment --dir bridge-deployment exec nitro-contracts "jq -r '.\"1212\"[0].contracts.Token.address' /app/deployment/nitro-addresses.json")"
      
      export A_CHAIN_ADDRESS="0xe22AD83A0dE117bA0d03d5E94Eb4E0d80a69C62a"
      export C_CHAIN_ADDRESS="0xf1ac8Dd1f6D6F5c0dA99097c57ebF50CD99Ce293"
      
    • Send tokens to Alice and Charlie

      # Send tokens to Alice
      laconic-so deployment --dir bridge-deployment exec nitro-contracts "cd packages/nitro-protocol && yarn hardhat transfer --contract $L1_ASSET_ADDRESS --to $A_CHAIN_ADDRESS --amount 1000 --network geth"
      
      # Send tokens to Charlie
      laconic-so deployment --dir bridge-deployment exec nitro-contracts "cd packages/nitro-protocol && yarn hardhat transfer --contract $L1_ASSET_ADDRESS --to $C_CHAIN_ADDRESS --amount 1000 --network geth"
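
      # (Optional) Verify the transfers by querying token balances with cast from the foundry
      # container; this sketch assumes the token is a standard ERC-20 (balanceOf) and that
      # $L1_RPC, $L1_ASSET_ADDRESS, $A_CHAIN_ADDRESS and $C_CHAIN_ADDRESS are still set
      laconic-so deployment --dir fixturenet-eth-deployment exec foundry "cast call $L1_ASSET_ADDRESS 'balanceOf(address)(uint256)' $A_CHAIN_ADDRESS --rpc-url $L1_RPC"
      laconic-so deployment --dir fixturenet-eth-deployment exec foundry "cast call $L1_ASSET_ADDRESS 'balanceOf(address)(uint256)' $C_CHAIN_ADDRESS --rpc-url $L1_RPC"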
      

Demo

  • Get the deployed nitro contract addresses (run in the directory where the deployments were created):

    # Nitro contract addresses
    export NA_ADDRESS=$(laconic-so deployment --dir bridge-deployment exec nitro-bridge "jq -r '.\"1212\"[0].contracts.NitroAdjudicator.address' /app/deployment/nitro-addresses.json")
    export CA_ADDRESS=$(laconic-so deployment --dir bridge-deployment exec nitro-bridge "jq -r '.\"1212\"[0].contracts.ConsensusApp.address' /app/deployment/nitro-addresses.json")
    export VPA_ADDRESS=$(laconic-so deployment --dir bridge-deployment exec nitro-bridge "jq -r '.\"1212\"[0].contracts.VirtualPaymentApp.address' /app/deployment/nitro-addresses.json")
    
    # Contract address of bridge
    export BRIDGE_ADDRESS=$(laconic-so deployment --dir bridge-deployment exec nitro-bridge "jq -r '.\"42069\"[0].contracts.Bridge.address' /app/deployment/nitro-addresses.json")
    
    export A_PRIVATE_KEY=0x9aebbd42f3044295411e3631fcb6aa834ed5373a6d3bf368bfa09e5b74f4f6d1
    export C_PRIVATE_KEY=0x19242258fc60ec7488db0163b20ed1c32f2d27dc49e4d427a461e20a6656de20
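
    # Sanity check: each of the address variables above should hold a non-empty 0x... value
    echo "NA=$NA_ADDRESS CA=$CA_ADDRESS VPA=$VPA_ADDRESS BRIDGE=$BRIDGE_ADDRESS"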
    
  • Prepare deployments for the nodes

    • Create a deployment l1alice-nitro-deployment from the spec file

      laconic-so --stack ~/cerc/nitro-stack/stack-orchestrator/stacks/nitro-node deploy create --spec-file l1alice-nitro-spec.yml --deployment-dir l1alice-nitro-deployment
      
    • Set the env variables for L1 Alice's nitro-node:

      cat <<EOF > l1alice-nitro-deployment/config.env
      NITRO_CHAIN_URL=ws://host.docker.internal:8546
      NITRO_SC_PK=$A_PRIVATE_KEY
      NITRO_CHAIN_PK=570b909da9669b2f35a0b1ac70b8358516d55ae1b5b3710e95e9a94395090597
      NA_ADDRESS=$NA_ADDRESS
      VPA_ADDRESS=$VPA_ADDRESS
      CA_ADDRESS=$CA_ADDRESS
      BRIDGE_ADDRESS=$BRIDGE_ADDRESS
      NITRO_BOOTPEERS=/dns4/host.docker.internal/tcp/3005/p2p/16Uiu2HAmJDxLM8rSybX78FH51iZq9PdrwCoCyyHRBCndNzcAYMes
      NITRO_EXT_MULTIADDR=/dns4/host.docker.internal/tcp/3007
      EOF
      
    • Create a deployment l1charlie-nitro-deployment from the spec file

      laconic-so --stack ~/cerc/nitro-stack/stack-orchestrator/stacks/nitro-node deploy create --spec-file l1charlie-nitro-spec.yml --deployment-dir l1charlie-nitro-deployment
      
    • Set the env variables for L1 Charlie's nitro-node:

      cat <<EOF > l1charlie-nitro-deployment/config.env
      NITRO_CHAIN_URL=ws://host.docker.internal:8546
      NITRO_SC_PK=$C_PRIVATE_KEY
      NITRO_CHAIN_PK=111b7500bdce494d6f4bcfe8c2a0dde2ef92f751d9070fac6475dbd6d8021b3f
      NA_ADDRESS=$NA_ADDRESS
      VPA_ADDRESS=$VPA_ADDRESS
      CA_ADDRESS=$CA_ADDRESS
      BRIDGE_ADDRESS=$BRIDGE_ADDRESS
      NITRO_BOOTPEERS=/dns4/host.docker.internal/tcp/3005/p2p/16Uiu2HAmJDxLM8rSybX78FH51iZq9PdrwCoCyyHRBCndNzcAYMes
      NITRO_EXT_MULTIADDR=/dns4/host.docker.internal/tcp/3008
      EOF
      
    • Create a deployment l2alice-nitro-deployment from the spec file:

      laconic-so --stack ~/cerc/nitro-stack/stack-orchestrator/stacks/nitro-node deploy create --spec-file l2alice-nitro-spec.yml --deployment-dir l2alice-nitro-deployment
      
    • Set the env variables for L2 Alice's nitro-node:

      cat <<EOF > l2alice-nitro-deployment/config.env
      NITRO_CHAIN_URL=ws://host.docker.internal:9546
      NITRO_SC_PK=$A_PRIVATE_KEY
      NITRO_CHAIN_PK=570b909da9669b2f35a0b1ac70b8358516d55ae1b5b3710e95e9a94395090597
      NA_ADDRESS=$NA_ADDRESS
      VPA_ADDRESS=$VPA_ADDRESS
      CA_ADDRESS=$CA_ADDRESS
      BRIDGE_ADDRESS=$BRIDGE_ADDRESS
      NITRO_BOOTPEERS=/dns4/host.docker.internal/tcp/3006/p2p/16Uiu2HAmJDxLM8rSybX78FH51iZq9PdrwCoCyyHRBCndNzcAYMes
      NITRO_EXT_MULTIADDR=/dns4/host.docker.internal/tcp/3009
      NITRO_L2=true
      EOF
      
    • Create a deployment l2charlie-nitro-deployment from the spec file:

      laconic-so --stack ~/cerc/nitro-stack/stack-orchestrator/stacks/nitro-node deploy create --spec-file l2charlie-nitro-spec.yml --deployment-dir l2charlie-nitro-deployment
      
    • Set the env variables for L2 Charlie's nitro-node:

      cat <<EOF > l2charlie-nitro-deployment/config.env
      NITRO_CHAIN_URL=ws://host.docker.internal:9546
      NITRO_SC_PK=$C_PRIVATE_KEY
      NITRO_CHAIN_PK=111b7500bdce494d6f4bcfe8c2a0dde2ef92f751d9070fac6475dbd6d8021b3f
      NA_ADDRESS=$NA_ADDRESS
      VPA_ADDRESS=$VPA_ADDRESS
      CA_ADDRESS=$CA_ADDRESS
      BRIDGE_ADDRESS=$BRIDGE_ADDRESS
      NITRO_BOOTPEERS=/dns4/host.docker.internal/tcp/3006/p2p/16Uiu2HAmJDxLM8rSybX78FH51iZq9PdrwCoCyyHRBCndNzcAYMes
      NITRO_EXT_MULTIADDR=/dns4/host.docker.internal/tcp/3010
      NITRO_L2=true
      EOF
      
  • Start nitro nodes for Alice and Charlie on L1 and L2:

    • Start the deployment for Alice's L1 node

      laconic-so deployment --dir l1alice-nitro-deployment start
      
      # Check the logs, ensure that the node is running
      laconic-so deployment --dir l1alice-nitro-deployment logs nitro-node -f
      
    • Start the deployment for Charlie's L1 node

      laconic-so deployment --dir l1charlie-nitro-deployment start
      
      # Check the logs, ensure that the node is running
      laconic-so deployment --dir l1charlie-nitro-deployment logs nitro-node -f
      
    • Start the deployment for Alice's L2 node

      laconic-so deployment --dir l2alice-nitro-deployment start
      
      # Check the logs, ensure that the node is running
      laconic-so deployment --dir l2alice-nitro-deployment logs nitro-node -f
      
    • Start the deployment for Charlie's L2 node

      laconic-so deployment --dir l2charlie-nitro-deployment start
      
      # Check the logs, ensure that the node is running
      laconic-so deployment --dir l2charlie-nitro-deployment logs nitro-node -f
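
    • Optionally, confirm that all four nitro nodes are up; this assumes the generated container names contain "nitro-node" (docker's name filter matches substrings):

      docker ps --filter "name=nitro-node"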
      
  • Create ledger channels on L1 and mirrored channels on L2

    • Open a new terminal and check that no channels exist on L2 yet

      laconic-so deployment --dir bridge-deployment exec nitro-rpc-client "nitro-rpc-client get-all-l2-channels -p 4006 -h nitro-bridge"
      
    • Set the bridge's nitro address and the L1 address of the custom token in the current terminal

      export BRIDGE_NITRO_ADDRESS=0xBBB676f9cFF8D242e9eaC39D063848807d3D1D94
      export L1_ASSET_ADDRESS="$(laconic-so deployment --dir bridge-deployment exec nitro-contracts "jq -r '.\"1212\"[0].contracts.Token.address' /app/deployment/nitro-addresses.json")"
      
    • Create ledger channel between A and Bridge with custom token

      laconic-so deployment --dir l1alice-nitro-deployment exec nitro-rpc-client "nitro-rpc-client direct-fund $BRIDGE_NITRO_ADDRESS --assetAddress $L1_ASSET_ADDRESS --alphaAmount 1000000 --betaAmount 1000000 -p 4005 -h nitro-node"
      
      • Once the direct-fund objective is complete, the bridge will create a mirrored channel on L2
      • Check node A' (Alice's L2 node) logs to see that the bridged-fund objective has completed
      • Check the Troubleshooting section if the command to create a ledger channel fails or gets stuck
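
      For example, node A' logs can be followed with the same logs command used earlier:

      laconic-so deployment --dir l2alice-nitro-deployment logs nitro-node -f
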
    • Check status of L1 ledger channel between A and Bridge

      laconic-so deployment --dir l1alice-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-ledger-channel <ledger channel ID> -p 4005 -h nitro-node"
      
      # Expected output:
      # {
      #   ID: '0x161d289a50222caa781db215bb82a3ede4f557217742245525b8e8cbff04ec21',
      #   Status: 'Open',
      #   Balance: {
      #     AssetAddress: '<Token address on L1>',
      #     Me: '0xaaa6628ec44a8a742987ef3a114ddfe2d4f7adce',
      #     Them: '0xbbb676f9cff8d242e9eac39d063848807d3d1d94',
      #     MyBalance: 1000000n,
      #     TheirBalance: 1000000n
      #   },
      #   ChannelMode: 'Open'
      # }
      
    • Create ledger channel between C and Bridge with custom token

      laconic-so deployment --dir l1charlie-nitro-deployment exec nitro-rpc-client "nitro-rpc-client direct-fund $BRIDGE_NITRO_ADDRESS --assetAddress $L1_ASSET_ADDRESS --alphaAmount 1000000 --betaAmount 1000000 -p 4005 -h nitro-node"
      
      • Once the direct-fund objective is complete, the bridge will create a mirrored channel on L2
      • Check node C' (Charlie's L2 node) logs to see that the bridged-fund objective has completed
      • Check the Troubleshooting section if the command to create a ledger channel fails or gets stuck
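
      Similarly, node C' logs can be followed with:

      laconic-so deployment --dir l2charlie-nitro-deployment logs nitro-node -f
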
    • Check status of L1 ledger channel between C and Bridge

      laconic-so deployment --dir l1charlie-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-ledger-channel <ledger channel ID> -p 4005 -h nitro-node"
      
      # Expected output:
      # {
      #   ID: '0x69a3f09b6f4f94f033cf084e6e4a9453438c45b43606e9a95f5434f4c6527543',
      #   Status: 'Open',
      #   Balance: {
      #     AssetAddress: '<Token address on L1>',
      #     Me: '0xa8d2d06ace9c7ffc24ee785c2695678aecdfd7a0',
      #     Them: '0xbbb676f9cff8d242e9eac39d063848807d3d1d94',
      #     MyBalance: 1000000n,
      #     TheirBalance: 1000000n
      #   },
      #   ChannelMode: 'Open'
      # }
      
    • Check status of all L2 mirrored ledger channels

      laconic-so deployment --dir bridge-deployment exec nitro-rpc-client "nitro-rpc-client get-all-l2-channels -p 4006 -h nitro-bridge"
      
      # Expected output:
      # {"ID":"0x15dbe6b996e4e46fdd6ea3e2074cbca58014dbb07368e3e7ba286df5c7b9da0d","Status":"Open","Balance":{"AssetAddress":"<Token_address_on_L2>","Me":"0xbbb676f9cff8d242e9eac39d063848807d3d1d94","Them":"0xa8d2d06ace9c7ffc24ee785c2695678aecdfd7a0","MyBalance":1000000,"TheirBalance":1000000},"ChannelMode":"Open"}
      # {"ID":"0x6a9f5ccf1fa802525d794f4a899897f947615f6acc7141e61e056a8bfca29179","Status":"Open","Balance":{"AssetAddress":"<Token_address_on_L2>","Me":"0xbbb676f9cff8d242e9eac39d063848807d3d1d94","Them":"0xaaa6628ec44a8a742987ef3a114ddfe2d4f7adce","MyBalance":1000000,"TheirBalance":1000000},"ChannelMode":"Open"}
      

      In the above expected output, note the following:

      • In the ledger channel with ID 0x15dbe6b996e4e46fdd6ea3e2074cbca58014dbb07368e3e7ba286df5c7b9da0d
        • Charlie is the other participant, since the Them address is 0xa8d2d06ace9c7ffc24ee785c2695678aecdfd7a0
        • Charlie's amount is 1000000, since TheirBalance corresponds to his amount
      • In the ledger channel with ID 0x6a9f5ccf1fa802525d794f4a899897f947615f6acc7141e61e056a8bfca29179
        • Alice is the other participant, since the Them address is 0xaaa6628ec44a8a742987ef3a114ddfe2d4f7adce
        • Alice's amount is 1000000, since TheirBalance corresponds to her amount

Payments Demo on L2

  • Create virtual channel on L2 from A' to C' via Bridge' as intermediary

    • Get Nitro (Ethereum) addresses for Alice and Charlie from the wallet:

      export A_ADDRESS=<Alice Nitro address>
      export C_ADDRESS=<Charlie Nitro address>
      
      # Bridge's Nitro address
      export BRIDGE_NITRO_ADDRESS=0xBBB676f9cFF8D242e9eaC39D063848807d3D1D94
      
    • Create a virtual channel:

      # Start a virtual-fund objective on L2 to create a virtual channel from A' to C'
      laconic-so deployment --dir l2alice-nitro-deployment exec nitro-rpc-client "nitro-rpc-client virtual-fund $C_ADDRESS $BRIDGE_NITRO_ADDRESS --amount 1000 -p 4005 -h nitro-node"
      
      # Set the payment channel id (from the virtual-fund output) in a variable
      PAYMENT_CHANNEL_ID=<payment channel id>
      
  • Check payment channel between A' and C'

    laconic-so deployment --dir l2alice-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-payment-channel $PAYMENT_CHANNEL_ID -p 4005 -h nitro-node"
    
    # Expected output:
    # {
    #   ID: '0xb29aeb32c9495a793ebf7bd116232075d1e7bfe89fc82281c7d498e3ffd3e3bf',
    #   Status: 'Open',
    #   Balance: {
    #     AssetAddress: '0x0000000000000000000000000000000000000000',
    #     Payee: '0xa8d2d06ace9c7ffc24ee785c2695678aecdfd7a0',
    #     Payer: '0xaaa6628ec44a8a742987ef3a114ddfe2d4f7adce',
    #     PaidSoFar: 0n,
    #     RemainingFunds: 1000n
    #   }
    # }
    
  • After the virtual-fund objective is complete, make a payment:

    laconic-so deployment --dir l2alice-nitro-deployment exec nitro-rpc-client "nitro-rpc-client pay $PAYMENT_CHANNEL_ID 200 -p 4005 -h nitro-node"
    
    # Expected output:
    # {
    #   Amount: 200,
    #   Channel: '0xb29aeb32c9495a793ebf7bd116232075d1e7bfe89fc82281c7d498e3ffd3e3bf'
    # }
    
  • Check the payment channel status again to view the updated channel state:
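
    laconic-so deployment --dir l2alice-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-payment-channel $PAYMENT_CHANNEL_ID -p 4005 -h nitro-node"

    # After the 200 payment above, PaidSoFar should show 200n and RemainingFunds 800n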

  • Close payment channel after payments

    laconic-so deployment --dir l2alice-nitro-deployment exec nitro-rpc-client "nitro-rpc-client virtual-defund $PAYMENT_CHANNEL_ID -p 4005 -h nitro-node"
    
  • Check the status of the L2 mirrored channels after virtual-defund is complete:

    • Note balance change in A' node:

      laconic-so deployment --dir l2alice-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-all-ledger-channels -p 4005 -h nitro-node"
      
      # Expected output:
      # {"ID":"0x6a9f5ccf1fa802525d794f4a899897f947615f6acc7141e61e056a8bfca29179","Status":"Open","Balance":{"AssetAddress":"<Token_address_on_L2>","Me":"0xaaa6628ec44a8a742987ef3a114ddfe2d4f7adce","Them":"0xbbb676f9cff8d242e9eac39d063848807d3d1d94","MyBalance":999800,"TheirBalance":1000200},"ChannelMode":"Open"}
      
    • Note balance change in C' node:

      laconic-so deployment --dir l2charlie-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-all-ledger-channels -p 4005 -h nitro-node"
      
      # Expected output:
      # {"ID":"0x15dbe6b996e4e46fdd6ea3e2074cbca58014dbb07368e3e7ba286df5c7b9da0d","Status":"Open","Balance":{"AssetAddress":"<Token_address_on_L2>","Me":"0xa8d2d06ace9c7ffc24ee785c2695678aecdfd7a0","Them":"0xbbb676f9cff8d242e9eac39d063848807d3d1d94","MyBalance":1000200,"TheirBalance":999800},"ChannelMode":"Open"}
      

Demo cleanup

  • Reset nitro-node deployments:

    • Stop nitro-node deployments and remove volumes:

      # Run where deployments are created
      laconic-so deployment --dir l1alice-nitro-deployment stop --delete-volumes
      laconic-so deployment --dir l1charlie-nitro-deployment stop --delete-volumes
      laconic-so deployment --dir l2alice-nitro-deployment stop --delete-volumes
      laconic-so deployment --dir l2charlie-nitro-deployment stop --delete-volumes
      
    • Clear nitro-node deployments:

      # Run where deployments are created
      sudo rm -rf l1alice-nitro-deployment
      sudo rm -rf l1charlie-nitro-deployment
      sudo rm -rf l2alice-nitro-deployment
      sudo rm -rf l2charlie-nitro-deployment
      

Re-run

  • After running the Demo cleanup steps above, follow the steps in the Demo section to re-run the demo

Cleanup

  • Reset nitro-node and bridge deployments:

    • Stop nitro-node and bridge deployments and remove volumes:

      # Run where deployments are created
      laconic-so deployment --dir l1alice-nitro-deployment stop --delete-volumes
      laconic-so deployment --dir l1charlie-nitro-deployment stop --delete-volumes
      laconic-so deployment --dir l2alice-nitro-deployment stop --delete-volumes
      laconic-so deployment --dir l2charlie-nitro-deployment stop --delete-volumes
      laconic-so deployment --dir bridge-deployment stop --delete-volumes
      
    • Clear nitro-node and bridge deployments:

      # Run where deployments are created
      sudo rm -rf l1alice-nitro-deployment
      sudo rm -rf l1charlie-nitro-deployment
      sudo rm -rf l2alice-nitro-deployment
      sudo rm -rf l2charlie-nitro-deployment
      sudo rm -rf bridge-deployment
      
  • Clean up L1 and L2 deployments:

    • Stop deployment and remove volumes:

      # Run where deployments are created
      laconic-so deployment --dir fixturenet-optimism-deployment stop --delete-volumes
      laconic-so deployment --dir fixturenet-eth-deployment stop --delete-volumes
      
    • Clear deployments:

      # Run where deployments are created
      sudo rm -rf fixturenet-optimism-deployment
      sudo rm -rf fixturenet-eth-deployment
      

Troubleshooting

  • If the ledger channel creation fails, follow these steps:

    • Check whether the cerc/fixturenet-eth-geth container's status is unhealthy:

      docker ps
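
      # Or filter directly on container health (assumes the image defines a healthcheck)
      docker ps --filter "health=unhealthy"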
      
      • If the chain is not producing new blocks, restart the chain.

        laconic-so deployment --dir fixturenet-eth-deployment stop
        
        laconic-so deployment --dir fixturenet-eth-deployment start
        
    • Stop the nitro-rpc-client direct-fund command if it is stuck

    • Restart the failed node (for example, Charlie's node):

      • Stop the failed nitro node

        laconic-so deployment --dir l1charlie-nitro-deployment stop
        
      • Remove the node's durable store and recreate the data directory

        sudo rm -rf l1charlie-nitro-deployment/data/nitro_node_data
        
        mkdir l1charlie-nitro-deployment/data/nitro_node_data
        
      • Restart the node and create ledger channel again

        laconic-so deployment --dir l1charlie-nitro-deployment start
        
    • Retry the ledger channel creation command