Add Nitro node config for users and payments instructions #28

Merged
nabarun merged 8 commits from pm-add-nitro-node-config into main 2024-09-17 13:55:56 +00:00
2 changed files with 48 additions and 41 deletions
Showing only changes of commit cac9018059
@@ -2,7 +2,7 @@
 ## Login
-* Log in as `dev` user on the deployments VM
+* Log in as `dev` user on the deployments machine
 * All the deployments are placed in the `/srv` directory:
@ -22,7 +22,7 @@
* Ansible: see [installation](https://git.vdb.to/cerc-io/testnet-ops#installation) * Ansible: see [installation](https://git.vdb.to/cerc-io/testnet-ops#installation)
* On deployments VM(s): * On deployments machine(s):
* laconic-so: see [installation](https://git.vdb.to/cerc-io/testnet-ops/src/branch/main/stack-orchestrator-setup/README.md) * laconic-so: see [installation](https://git.vdb.to/cerc-io/testnet-ops/src/branch/main/stack-orchestrator-setup/README.md)
@@ -39,7 +39,7 @@
 * Target dir: `/srv/op-sepolia/optimism-deployment`
-* Cleanup an existing deployment on VM if required:
+* Cleanup an existing deployment if required:
 ```bash
 cd /srv/op-sepolia
@@ -114,10 +114,10 @@
 <host_name> ansible_host=<target_ip> ansible_user=<ssh_user> ansible_ssh_common_args='-o ForwardAgent=yes'
 ```
-- Replace `<deployment_host>` with `l2_host`
-- Replace `<host_name>` with the alias of your choice
-- Replace `<target_ip>` with the IP address or hostname of the target machine
-- Replace `<ssh_user>` with the SSH username (e.g., dev, ubuntu)
+* Replace `<deployment_host>` with `l2_host`
+* Replace `<host_name>` with the alias of your choice
+* Replace `<target_ip>` with the IP address or hostname of the target machine
+* Replace `<ssh_user>` with the SSH username (e.g., dev, ubuntu)
 * Verify that you are able to connect to the host using the following command:
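Filled in, the inventory entry above might look like the following; the host alias, IP address, and user here are hypothetical placeholders, not values from the original document:

```shell
# Hypothetical hosts.ini — alias, IP, and user are examples only
cat > hosts.ini <<'EOF'
[l2_host]
my-l2-box ansible_host=192.0.2.10 ansible_user=dev ansible_ssh_common_args='-o ForwardAgent=yes'
EOF

# Sanity-check that the inventory entry was written
grep 'ansible_host' hosts.ini
```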
@@ -142,12 +142,12 @@
 * Bridge funds on L2:
-* On the deployment VM, set the following variables:
+* On the deployment machine, set the following variables:
 ```bash
 cd /srv/op-sepolia
-L1_RPC=http://host.docker.internal:8545
+L1_RPC=http://localhost:8545
 L2_RPC=http://op-geth:8545
 NETWORK=$(grep 'cluster-id' optimism-deployment/deployment.yml | sed 's/cluster-id: //')_default
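The `NETWORK` line above derives the Docker compose network name from the deployment's cluster id. A minimal sketch of what it evaluates to, using a sample `deployment.yml` (the cluster id value here is invented for illustration):

```shell
# Sketch: reproduce the NETWORK derivation against a sample deployment.yml
mkdir -p optimism-deployment
printf 'cluster-id: laconic-abc123\n' > optimism-deployment/deployment.yml

# grep picks the cluster-id line, sed strips the key, "_default" is appended
NETWORK=$(grep 'cluster-id' optimism-deployment/deployment.yml | sed 's/cluster-id: //')_default
echo "$NETWORK"   # laconic-abc123_default
```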
@@ -176,7 +176,7 @@
 * Use cast to send ETH to the bridge contract:
 ```bash
-docker run --rm cerc/optimism-contracts:local "cast send --from $ACCOUNT --value 1ether $BRIDGE --rpc-url $L1_RPC --private-key $ACCOUNT_PK"
+docker run --rm --network host cerc/optimism-contracts:local "cast send --from $ACCOUNT --value 1ether $BRIDGE --rpc-url $L1_RPC --private-key $ACCOUNT_PK"
 ```
 * Allow a couple minutes for the bridge to complete
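The wait above can be made explicit with a polling loop. A hedged sketch: the `check_balance` stub below stands in for a real query (e.g. a `cast balance` call against the L2 RPC) and is not from the original document:

```shell
# Poll until the bridged funds show up on L2 (stub predicate shown;
# swap check_balance for a real balance query against the L2 RPC)
check_balance() { true; }   # placeholder — always succeeds here
for i in $(seq 1 30); do
  if check_balance; then
    echo "funds bridged"
    break
  fi
  sleep 0   # use e.g. `sleep 10` against a real chain
done
```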
@@ -202,7 +202,7 @@
 * Target dir: `/srv/bridge/nitro-contracts-deployment`
-* Cleanup an existing deployment on VM if required:
+* Cleanup an existing deployment if required:
 ```bash
 cd /srv/bridge
@@ -272,10 +272,10 @@
 <host_name> ansible_host=<target_ip> ansible_user=<ssh_user> ansible_ssh_common_args='-o ForwardAgent=yes'
 ```
-- Replace `<deployment_host>` with `nitro_host`
-- Replace `<host_name>` with the alias of your choice
-- Replace `<target_ip>` with the IP address or hostname of the target machine
-- Replace `<ssh_user>` with the SSH username (e.g., dev, ubuntu)
+* Replace `<deployment_host>` with `nitro_host`
+* Replace `<host_name>` with the alias of your choice
+* Replace `<target_ip>` with the IP address or hostname of the target machine
+* Replace `<ssh_user>` with the SSH username (e.g., dev, ubuntu)
 * Verify that you are able to connect to the host using the following command:
@@ -298,7 +298,7 @@
 LANG=en_US.utf8 ansible-playbook -i hosts.ini deploy-contracts.yml --extra-vars='{ "target_host": "nitro_host"}' --user $USER -kK
 ```
-* Check logs for deployment on the virtual machine:
+* Check logs for deployment on the remote machine:
 ```bash
 cd /srv/bridge
@@ -320,7 +320,7 @@
 * Target dir: `/srv/bridge/bridge-deployment`
-* Cleanup an existing deployment on VM if required:
+* Cleanup an existing deployment if required:
 ```bash
 cd /srv/bridge
@@ -334,7 +334,7 @@
 ### Setup
-* Execute the command on the deployment VM to get the deployed L1 Nitro contract addresses along with the L1 asset address:
+* Execute the following command on deployment machine to get the deployed L1 Nitro contract addresses along with the L1 asset address:
 ```bash
 cd /srv/bridge
@@ -439,10 +439,10 @@
 <host_name> ansible_host=<target_ip> ansible_user=<ssh_user> ansible_ssh_common_args='-o ForwardAgent=yes'
 ```
-- Replace `<deployment_host>` with `nitro_host`
-- Replace `<host_name>` with the alias of your choice
-- Replace `<target_ip>` with the IP address or hostname of the target machine
-- Replace `<ssh_user>` with the SSH username (e.g., dev, ubuntu)
+* Replace `<deployment_host>` with `nitro_host`
+* Replace `<host_name>` with the alias of your choice
+* Replace `<target_ip>` with the IP address or hostname of the target machine
+* Replace `<ssh_user>` with the SSH username (e.g., dev, ubuntu)
 * Verify that you are able to connect to the host using the following command:
@@ -465,7 +465,7 @@
 LANG=en_US.utf8 ansible-playbook -i hosts.ini run-nitro-bridge.yml --extra-vars='{ "target_host": "nitro_host"}' --user $USER -kK
 ```
-* Check logs for deployments on the virtual machine:
+* Check logs for deployments on the remote machine:
 ```bash
 cd /srv/bridge
@@ -520,6 +520,12 @@
 * Check in the generated file at location `ops/stage2/nitro-node-config.yml` within this repository
+* List down L2 channels created by the bridge:
+  ```bash
+  laconic-so deployment --dir bridge-deployment exec nitro-rpc-client "nitro-rpc-client get-all-l2-channels -p 4006 -h nitro-bridge"
+  ```
 </details>
 <details open>


@@ -2,30 +2,24 @@
 ## Prerequisites
-* Ansible: see [installation](https://git.vdb.to/cerc-io/testnet-ops#installation)
-* yq: see [installation](https://github.com/mikefarah/yq#install)
-* laconic-so: see [installation](https://git.vdb.to/cerc-io/testnet-ops/src/branch/main/stack-orchestrator-setup/README.md)
-* Check versions to verify installation:
-  ```bash
-  laconic-so version
-  ansible --version
-  yq --version
-  ```
+* Local:
+  * Ansible: see [installation](https://git.vdb.to/cerc-io/testnet-ops#installation)
+  * yq: see [installation](https://github.com/mikefarah/yq#install)
+* On deployment machine:
+  * laconic-so: see [installation](https://git.vdb.to/cerc-io/testnet-ops/src/branch/main/stack-orchestrator-setup/README.md)
 ## Setup
-* Clone the `cerc-io/testnet-ops` repository:
+* On your local machine, clone the `cerc-io/testnet-ops` repository:
 ```bash
 git clone git@git.vdb.to:cerc-io/testnet-ops.git
-cd testnet-ops/nitro-node-setup
+cd testnet-ops/nitro-nodes-setup
 ```
 * Fetch the required Nitro node config:
@@ -86,6 +80,8 @@
 ## Run Nitro Nodes
+Nitro nodes can be run using Ansible either locally or on a remote machine; follow corresponding steps for your setup
 ### On Local Host
 * Setup and run a Nitro node (L1+L2) by executing the `run-nitro-nodes.yml` Ansible playbook:
@@ -94,7 +90,7 @@
 LANG=en_US.utf8 ansible-playbook -i localhost, --connection=local run-nitro-nodes.yml --extra-vars='{ "target_host": "localhost"}' --user $USER
 ```
-### On Remote Host (optional)
+### On Remote Host
 * Create a new `hosts.ini` file:
@@ -109,10 +105,10 @@
 <host_name> ansible_host=<target_ip> ansible_user=<ssh_user> ansible_ssh_common_args='-o ForwardAgent=yes'
 ```
-- Replace `<deployment_host>` with `nitro_host`
-- Replace `<host_name>` with the alias of your choice
-- Replace `<target_ip>` with the IP address or hostname of the target machine
-- Replace `<ssh_user>` with the SSH username (e.g., dev, ubuntu)
+* Replace `<deployment_host>` with `nitro_host`
+* Replace `<host_name>` with the alias of your choice
+* Replace `<target_ip>` with the IP address or hostname of the target machine
+* Replace `<ssh_user>` with the SSH username (e.g., dev, ubuntu)
 * Verify that you are able to connect to the host using the following command
@@ -146,6 +142,11 @@
 # Check the logs, ensure that the nodes are running
 laconic-so deployment --dir l1-nitro-deployment logs nitro-node -f
 laconic-so deployment --dir l2-nitro-deployment logs nitro-node -f
+
+# Let L1 node sync up with the chain
+# Expected logs after sync:
+# nitro-node-1 | 2:04PM INF Initializing Http RPC transport...
+# nitro-node-1 | 2:04PM INF Completed RPC server initialization url=127.0.0.1:4005/api/v1
 ```
 ## Create Channels
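The sync check above can also be scripted. A hedged sketch that greps captured logs for the RPC-ready line (the sample log lines are copied from the expected output above; writing them to a temp file stands in for capturing real `logs nitro-node` output):

```shell
# Sketch: detect L1 node readiness by grepping captured logs
LOGFILE=$(mktemp)
cat > "$LOGFILE" <<'EOF'
nitro-node-1 | 2:04PM INF Initializing Http RPC transport...
nitro-node-1 | 2:04PM INF Completed RPC server initialization url=127.0.0.1:4005/api/v1
EOF

# The "Completed RPC server initialization" line signals the node is up
if grep -q 'Completed RPC server initialization' "$LOGFILE"; then
  echo "node ready"
fi
rm -f "$LOGFILE"
```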