nitro-nodes-setup

Prerequisites

  • Set up Ansible: To get started, follow the installation guide to set up Ansible on your machine.

  • Set up a user with passwordless sudo: Follow the steps in Setup a user to create a new user with passwordless sudo.
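
Both prerequisites can be verified up front; a minimal sketch, assuming Ansible is installed on your control machine and the new user is the one you log in as on the target:

    # On the control machine: confirm Ansible is available
    ansible --version

    # On the target machine: succeeds silently if passwordless sudo works
    sudo -n true && echo "passwordless sudo OK"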

Setup

The following commands have to be executed in the nitro-nodes-setup directory

  • Copy the nitro-vars.example.yml vars file

    cp nitro-vars.example.yml nitro-vars.yml
    
  • Edit nitro-vars.yml and fill in the following values; a filled-in example follows the list

    # L1 WS endpoint
    nitro_chain_url: ""
    
    # Private key for your nitro address
    nitro_sc_pk: ""
    
    # Private key of the account on chain that is used for funding channels in Nitro node
    nitro_chain_pk: ""
    
    # Contract address of NitroAdjudicator
    na_address: ""
    
    # Contract address of VirtualPaymentApp
    vpa_address: ""
    
    # Contract address of ConsensusApp
    ca_address: ""
    
    # Multiaddr of the L1 bridge node
    nitro_l1_bridge_multiaddr: ""
    
    # Multiaddr of the L2 bridge node
    nitro_l2_bridge_multiaddr: ""
    
    # Multiaddr with publicly accessible IP address / DNS for your L1 nitro node
    # Example: "/ip4/192.168.x.y/tcp/3009"
    # Example: "/dns4/example.com/tcp/3009"
    nitro_l1_ext_multiaddr: ""
    
    # Multiaddr with publicly accessible IP address / DNS for your L2 nitro node
    nitro_l2_ext_multiaddr: ""
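
A filled-in nitro-vars.yml might look like the following. Every value is hypothetical (made-up endpoints, placeholder keys, and zero-padded addresses) and only illustrates the expected format:

    # Hypothetical example values; replace with your own
    nitro_chain_url: "wss://eth-sepolia.example.com/ws"
    nitro_sc_pk: "<64-char-hex-private-key>"
    nitro_chain_pk: "<64-char-hex-private-key>"
    na_address: "0x0000000000000000000000000000000000000001"
    vpa_address: "0x0000000000000000000000000000000000000002"
    ca_address: "0x0000000000000000000000000000000000000003"
    nitro_l1_bridge_multiaddr: "/dns4/bridge.example.com/tcp/3005"
    nitro_l2_bridge_multiaddr: "/dns4/bridge.example.com/tcp/3006"
    nitro_l1_ext_multiaddr: "/dns4/nitro.example.com/tcp/3009"
    nitro_l2_ext_multiaddr: "/dns4/nitro.example.com/tcp/3010"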
    

Run Nitro Nodes

  • Create a new hosts.ini file:

    cp ../hosts.example.ini hosts.ini
    
  • Edit the hosts.ini file (a filled-in example follows these steps):

    [<deployment_host>]
    <host_name> ansible_host=<target_ip> ansible_user=<ssh_user> ansible_ssh_common_args='-o ForwardAgent=yes'
    
    • Replace <deployment_host> with nitro_host
    • Replace <host_name> with the alias of your choice
    • Replace <target_ip> with the IP address or hostname of the target machine
    • Replace <ssh_user> with the username of the user that you set up on the target machine (e.g. dev, ubuntu)
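
    A filled-in hosts.ini might look like this; the alias and IP below are hypothetical:

    [nitro_host]
    my-nitro-node ansible_host=203.0.113.10 ansible_user=dev ansible_ssh_common_args='-o ForwardAgent=yes'
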
  • Verify that you are able to connect to the host using the following command:

    ansible all -m ping -i hosts.ini
    
    # Expected output:
    
    # <host_name> | SUCCESS => {
    #  "ansible_facts": {
    #      "discovered_interpreter_python": "/usr/bin/python3.10"
    #  },
    #  "changed": false,
    #  "ping": "pong"
    # }
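
    If the ping fails, re-run it with Ansible's standard verbose flag to debug the SSH connection:

    ansible all -m ping -i hosts.ini -vvv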
    
  • Copy and edit the nitro-vars.yml file as described in the Setup section above

  • Execute the run-nitro-nodes.yml Ansible playbook to deploy nitro nodes:

    LANG=en_US.utf8 ansible-playbook -i hosts.ini run-nitro-nodes.yml --extra-vars='{ "target_host": "nitro_host"}' --user $USER
    

    NOTE: By default, deployments are created in an out directory. To change this location, update the nitro_directory variable in the setup-vars.yml file.
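
    For example, to place deployments under /srv/nitro (a hypothetical path) instead:

    # setup-vars.yml
    nitro_directory: /srv/nitro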

  • To skip the container build, run with "skip_container_build": true in the --extra-vars parameter:

    LANG=en_US.utf8 ansible-playbook -i hosts.ini run-nitro-nodes.yml --extra-vars='{ "target_host": "nitro_host", "skip_container_build": true }' --user $USER
    

Check Deployment Status

Run the following commands in the directory where the deployments were created

  • Check L1 nitro node logs:

    laconic-so deployment --dir l1-nitro-deployment logs nitro-node -f
    
  • Check L2 nitro node logs:

    laconic-so deployment --dir l2-nitro-deployment logs nitro-node -f
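
To scan for problems non-interactively, the log output can be piped through grep; this assumes the logs subcommand prints and exits when -f is omitted:

    laconic-so deployment --dir l1-nitro-deployment logs nitro-node | grep -i error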