nitro-bridge-setup

Prerequisites

  • Set up Ansible: To get started, follow the installation guide to set up Ansible on your machine (a minimal pip-based install is sketched after this list).

  • Set up a user with passwordless sudo: Follow the steps in Setup a user to set up a new user on the target machine
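
A common way to install Ansible is with pip; the following is only a minimal sketch and assumes Python 3 and pip are already available on your machine:

    # Install Ansible for the current user (assumes python3 and pip are present)
    python3 -m pip install --user ansible

    # Verify the installation
    ansible --version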

Setup

The following commands have to be executed in the nitro-bridge-setup directory:
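
For example, assuming the testnet-ops repository has already been cloned on the machine you are running Ansible from:

    # Path assumes a local clone of the testnet-ops repository
    cd testnet-ops/nitro-bridge-setup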

  • Copy the bridge-vars.example.yml vars file:

    cp bridge-vars.example.yml bridge-vars.yml
    
  • Edit bridge-vars.yml and fill in the required values (a quick connectivity check for the L1 endpoint is sketched after this example):

    # L1 WS endpoint
    nitro_chain_url: ""
    
    # Private key for the bridge's nitro address
    nitro_sc_pk: ""
    
    # Private key for a funded account on L1
    # This account should have tokens for funding Nitro channels
    nitro_chain_pk: ""
    
    # Custom token to be deployed
    token_name: "LaconicNetworkToken"
    token_symbol: "LNT"
    initial_token_supply: "129600"
    
    # Addresses of the deployed nitro contracts
    na_address: ""
    vpa_address: ""
    ca_address: ""
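
Before running the playbook, you can optionally verify that the configured nitro_chain_url WebSocket endpoint is reachable. The sketch below uses wscat, which is not part of this setup and assumes Node.js and npm are available:

    # Install wscat (assumes npm is available)
    npm install -g wscat

    # Open a WebSocket connection to the configured L1 endpoint
    wscat -c "<your nitro_chain_url>"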
    

Run Nitro Bridge

  • Create a new hosts.ini file:

    cp ../hosts.example.ini hosts.ini
    
  • Edit the hosts.ini file (a filled-in example is shown at the end of this section):

    [<deployment_host>]
    <host_name> ansible_host=<target_ip> ansible_user=<ssh_user> ansible_ssh_common_args='-o ForwardAgent=yes'
    
    • Replace <deployment_host> with nitro_host
    • Replace <host_name> with the alias of your choice
    • Replace <target_ip> with the IP address or hostname of the target machine
    • Replace <ssh_user> with the username of the user that you set up on the target machine (e.g. dev, ubuntu)
  • Verify that you are able to connect to the host using the following command:

    ansible all -m ping -i hosts.ini
    
    # Expected output:
    
    # <host_name> | SUCCESS => {
    #  "ansible_facts": {
    #      "discovered_interpreter_python": "/usr/bin/python3.10"
    #  },
    #  "changed": false,
    #  "ping": "pong"
    # }
    
  • Execute the run-nitro-bridge.yml Ansible playbook to deploy the nitro bridge:

    LANG=en_US.utf8 ansible-playbook -i hosts.ini run-nitro-bridge.yml --extra-vars='{ "target_host": "nitro_host" }' --user $USER
    

    NOTE: By default, deployments are created in an out directory. To change this location, update the nitro_directory variable in the setup-vars.yml file

  • To skip the container build, run with "skip_container_build": true in the --extra-vars parameter:

    LANG=en_US.utf8 ansible-playbook -i hosts.ini run-nitro-bridge.yml --extra-vars='{ "target_host": "nitro_host", "skip_container_build": true }' --user $USER
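
For reference, a filled-in hosts.ini from the edit step above might look like the following; the host alias, IP address, and username are placeholder values:

    [nitro_host]
    my-nitro-host ansible_host=203.0.113.10 ansible_user=dev ansible_ssh_common_args='-o ForwardAgent=yes'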
    

Check Deployment Status

Run the following command in the directory where the bridge-deployment was created (the out directory by default):

  • Check logs for deployments:

    # Check the bridge deployment logs and ensure that the node is running
    laconic-so deployment --dir bridge-deployment logs nitro-bridge -f
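
  • Optionally, check that the bridge container is running. The sketch below uses plain Docker and assumes the container name contains nitro-bridge, which may differ in your deployment:

    # List running containers whose names match nitro-bridge
    docker ps --filter "name=nitro-bridge"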