# nitro-nodes-setup

## Setup Ansible

To get started, follow the installation guide to set up Ansible on your machine.
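If Ansible is not already installed, one common way to get it is via pip, sketched below; the installation guide may recommend a different method (for example pipx or a distro package), so treat this only as a convenience.

```bash
# One common way to install Ansible (the installation guide may recommend another method)
python3 -m pip install --user ansible

# Verify that Ansible is available
ansible --version
```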
## Setup for Remote Host

To run the playbook on a remote host:

- Follow the steps from setup remote hosts
- Update / append the `hosts.ini` file for your remote host with `<deployment_host>` set as `nitro_host`, as shown in the example below
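For example, the appended entry might look like the following; the host alias, IP address, and SSH user here are placeholder values, and the line format matches the `hosts.example.ini` template used later in this README.

```ini
; Illustrative hosts.ini entry; replace the alias, IP address and user with your own
[nitro_host]
nitro-node-1 ansible_host=192.0.2.10 ansible_user=ubuntu ansible_ssh_common_args='-o ForwardAgent=yes'
```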
## Setup

The following commands have to be executed in the `nitro-nodes-setup` directory.

- Copy the `nitro-vars.example.yml` vars file:

  ```bash
  cp nitro-vars.example.yml nitro-vars.yml
  ```
- Edit `nitro-vars.yml` and fill in the following values:

  ```yaml
  # L1 WS endpoint
  nitro_l1_chain_url: ""

  # L2 WS endpoint
  nitro_l2_chain_url: ""

  # Private key for your nitro address
  nitro_sc_pk: ""

  # Private key of the account on chain that is used for funding channels in the Nitro node
  nitro_chain_pk: ""

  # Contract address of NitroAdjudicator
  na_address: ""

  # Contract address of VirtualPaymentApp
  vpa_address: ""

  # Contract address of ConsensusApp
  ca_address: ""

  # Address of the bridge node
  bridge_contract_address: ""

  # Multiaddr of the L1 bridge node
  nitro_l1_bridge_multiaddr: ""

  # Multiaddr of the L2 bridge node
  nitro_l2_bridge_multiaddr: ""

  # Multiaddr with publicly accessible IP address / DNS for your L1 nitro node
  # Example: "/ip4/192.168.x.y/tcp/3009"
  # Example: "/dns4/example.com/tcp/3009"
  nitro_l1_ext_multiaddr: ""

  # Multiaddr with publicly accessible IP address / DNS for your L2 nitro node
  nitro_l2_ext_multiaddr: ""
  ```
## Run Nitro Node

### On Local Host

- To run a nitro node, execute the `run-nitro-nodes.yml` Ansible playbook by running the following command:

  ```bash
  LANG=en_US.utf8 ansible-playbook run-nitro-nodes.yml --extra-vars='{ "target_host": "localhost"}' --user $USER -kK
  ```

  NOTE: By default, deployments are created in an `out` directory. To change this location, update the `nitro_directory` variable in the `setup-vars.yml` file (see the illustrative snippet after this list).
- To skip the container build, run with `"skip_container_build": true` in the `--extra-vars` parameter:

  ```bash
  LANG=en_US.utf8 ansible-playbook run-nitro-nodes.yml --extra-vars='{ "target_host": "localhost", "skip_container_build": true }' --user $USER -kK
  ```
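As a minimal sketch of the note above, and assuming `setup-vars.yml` holds the deployment output path in the `nitro_directory` variable it mentions, changing the location could look like this (the path shown is only an example):

```yaml
# setup-vars.yml (illustrative snippet; only the relevant variable is shown)
# Directory where nitro deployments are created ("out" by default)
nitro_directory: ./nitro-deployments
```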
### On Remote Host

To run the playbook on a remote host:

- Create a new `hosts.ini` file:

  ```bash
  cp ../hosts.example.ini hosts.ini
  ```

- Edit the `hosts.ini` file to run the playbook on a remote machine:

  ```ini
  [<deployment_host>]
  <host_name> ansible_host=<target_ip> ansible_user=<ssh_user> ansible_ssh_common_args='-o ForwardAgent=yes'
  ```
  - Replace `<deployment_host>` with `nitro_host`
  - Replace `<host_name>` with the alias of your choice
  - Replace `<target_ip>` with the IP address or hostname of the target machine
  - Replace `<ssh_user>` with the SSH username (e.g. dev, ubuntu)
- Verify that you are able to connect to the host using the following command:

  ```bash
  ansible all -m ping -i hosts.ini -k

  # Expected output:
  # <host_name> | SUCCESS => {
  #   "ansible_facts": {
  #     "discovered_interpreter_python": "/usr/bin/python3.10"
  #   },
  #   "changed": false,
  #   "ping": "pong"
  # }
  ```
- Copy and edit the `nitro-vars.yml` file as described in the local setup section

- Execute the `run-nitro-nodes.yml` Ansible playbook for remote deployment:

  ```bash
  LANG=en_US.utf8 ansible-playbook -i hosts.ini run-nitro-nodes.yml --extra-vars='{ "target_host": "nitro_host"}' --user $USER -kK
  ```
- To skip the container build, run with `"skip_container_build": true` in the `--extra-vars` parameter:

  ```bash
  LANG=en_US.utf8 ansible-playbook -i hosts.ini run-nitro-nodes.yml --extra-vars='{ "target_host": "nitro_host", "skip_container_build": true }' --user $USER -kK
  ```
## Check Deployment Status

- Run the following commands in the directory where the deployments are created:

  - Check L1 nitro node logs:

    ```bash
    laconic-so deployment --dir l1-nitro-deployment logs nitro-node -f
    ```

  - Check L2 nitro node logs:

    ```bash
    laconic-so deployment --dir l2-nitro-deployment logs nitro-node -f
    ```