diff --git a/README.md b/README.md
index 2400fed..842ccc7 100644
--- a/README.md
+++ b/README.md
@@ -18,6 +18,8 @@ Stacks to run a node for laconic testnet
Follow steps in [testnet-onboarding-validator.md](./testnet-onboarding-validator.md) to onboard your participant and join as a validator on the LORO testnet
+Follow steps in [Upgrade to testnet2](./testnet-onboarding-validator.md#upgrade-to-testnet2) to upgrade your testnet node for testnet2
+
## Run testnet Nitro Node
Follow steps in [testnet-nitro-node.md](./testnet-nitro-node.md) to run your Nitro node for the testnet
diff --git a/ops/deployments-from-scratch.md b/ops/deployments-from-scratch.md
index cc90d1f..04a7e02 100644
--- a/ops/deployments-from-scratch.md
+++ b/ops/deployments-from-scratch.md
@@ -26,441 +26,6 @@
* laconic-so: see [installation](https://git.vdb.to/cerc-io/testnet-ops/src/branch/main/stack-orchestrator-setup/README.md#setup-stack-orchestrator)
-
- Fixturenet Eth
-
-## Fixturenet Eth
-
-* Stack:
-
-* Target dir: `/srv/fixturenet-eth/fixturenet-eth-deployment`
-
-* Cleanup an existing deployment if required:
-
- ```bash
- cd /srv/fixturenet-eth
-
- # Stop the deployment
- laconic-so deployment --dir fixturenet-eth-deployment stop --delete-volumes
-
- # Remove the deployment dir
- sudo rm -rf fixturenet-eth-deployment
- ```
-
-### Setup
-
-* Create a `fixturenet-eth` dir if not present already and cd into it
-
- ```bash
- mkdir /srv/fixturenet-eth
-
- cd /srv/fixturenet-eth
- ```
-
-* Clone the stack repo:
-
- ```bash
- laconic-so fetch-stack git.vdb.to/cerc-io/fixturenet-eth-stacks --pull
- ```
-
-* Clone required repositories:
-
- ```bash
- laconic-so --stack ~/cerc/fixturenet-eth-stacks/stack-orchestrator/stacks/fixturenet-eth setup-repositories --pull
-
- # If this throws an error as a result of being already checked out to a branch/tag in a repo, remove all repositories from that stack and re-run the command
- # The repositories are located in $HOME/cerc by default
- ```
-
-* Build the container images:
-
- ```bash
- # Remove any older foundry image with `latest` tag
- docker rmi ghcr.io/foundry-rs/foundry:latest
-
- laconic-so --stack ~/cerc/fixturenet-eth-stacks/stack-orchestrator/stacks/fixturenet-eth build-containers --force-rebuild
-
- # If errors are thrown during build, old images used by this stack would have to be deleted
- ```
-
- * NOTE: this will take >10 mins depending on the specs of your machine, and **requires** 16GB of memory or greater.
-
- * Remove any dangling Docker images (to clear up space):
-
- ```bash
- docker image prune
- ```
-
-* Create spec files for deployments, which will map the stack's ports and volumes to the host:
-
- ```bash
- laconic-so --stack ~/cerc/fixturenet-eth-stacks/stack-orchestrator/stacks/fixturenet-eth deploy init --output fixturenet-eth-spec.yml
- ```
-
-* Configure ports:
- * `fixturenet-eth-spec.yml`
-
- ```yml
- ...
- network:
- ports:
- fixturenet-eth-bootnode-geth:
- - '9898:9898'
- - '30303'
- fixturenet-eth-geth-1:
- - '7545:8545'
- - '7546:8546'
- - '40000'
- - '6060'
- fixturenet-eth-lighthouse-1:
- - '8001'
- ...
- ```
-
-* Create deployments:
- Once you've made any needed changes to the spec files, create deployments from them:
-
- ```bash
- laconic-so --stack ~/cerc/fixturenet-eth-stacks/stack-orchestrator/stacks/fixturenet-eth deploy create --spec-file fixturenet-eth-spec.yml --deployment-dir fixturenet-eth-deployment
- ```
-
-### Run
-
-* Start `fixturenet-eth-deployment` deployment:
-
- ```bash
- laconic-so deployment --dir fixturenet-eth-deployment start
- ```
-
-
-
-
- Nitro Contracts Deployment
-
-## Nitro Contracts Deployment
-
-* Stack:
-
-* Source repo:
-
-* Target dir: `/srv/bridge/nitro-contracts-deployment`
-
-* Cleanup an existing deployment if required:
-
- ```bash
- cd /srv/bridge
-
- # Stop the deployment
- laconic-so deployment --dir nitro-contracts-deployment stop --delete-volumes
-
- # Remove the deployment dir
- sudo rm -rf nitro-contracts-deployment
- ```
-
-### Setup
-
-* Switch to `testnet-ops/nitro-contracts-setup` directory on your local machine:
-
- ```bash
- cd testnet-ops/nitro-contracts-setup
- ```
-
-* Copy the `contract-vars-example.yml` vars file:
-
- ```bash
- cp contract-vars.example.yml contract-vars.yml
- ```
-
-* Edit [`contract-vars.yml`](./contract-vars.yml) and fill in the following values:
-
- ```bash
- # RPC endpoint
- geth_url: "https://fixturenet-eth.laconic.com"
-
- # Chain ID (Fixturenet-eth: 1212)
- geth_chain_id: "1212"
-
- # Private key for a funded L1 account, to be used for contract deployment on L1
- # Required since this private key will be utilized by both L1 and L2 nodes of the bridge
-
- geth_deployer_pk: "888814df89c4358d7ddb3fa4b0213e7331239a80e1f013eaa7b2deca2a41a218"
-
- # Custom token to be deployed
- token_name: "TestToken"
- token_symbol: "TST"
- initial_token_supply: "129600"
- ```
-
-* Update the target dir in `setup-vars.yml`:
-
- ```bash
- sed -i 's|^nitro_directory:.*|nitro_directory: /srv/bridge|' setup-vars.yml
-
- # Will create deployment at /srv/bridge/nitro-contracts-deployment
- ```
-
-### Run
-
-* Deploy nitro contracts on remote host by executing `deploy-contracts.yml` Ansible playbook on your local machine:
-
- * Create a new `hosts.ini` file:
-
- ```bash
- cp ../hosts.example.ini hosts.ini
- ```
-
- * Edit the [`hosts.ini`](./hosts.ini) file to run the playbook on a remote machine:
-
- ```ini
- [deployment_host]
- ansible_host= ansible_user= ansible_ssh_common_args='-o ForwardAgent=yes'
- ```
-
- * Replace `` with `nitro_host`
- * Replace `` with the alias of your choice
- * Replace `` with the IP address or hostname of the target machine
- * Replace `` with the SSH username (e.g., dev, ubuntu)
-
- * Verify that you are able to connect to the host using the following command:
-
- ```bash
- ansible all -m ping -i hosts.ini -k
-
- # Expected output:
- # | SUCCESS => {
- # "ansible_facts": {
- # "discovered_interpreter_python": "/usr/bin/python3.10"
- # },
- # "changed": false,
- # "ping": "pong"
- # }
- ```
-
- * Execute the `deploy-contracts.yml` Ansible playbook for remote deployment:
-
- ```bash
- LANG=en_US.utf8 ansible-playbook -i hosts.ini deploy-contracts.yml --extra-vars='{ "target_host": "nitro_host"}' --user $USER -kK
- ```
-
-* Check logs for deployment on the remote machine:
-
- ```bash
- cd /srv/bridge
-
- # Check the nitro contract deployments
- laconic-so deployment --dir nitro-contracts-deployment logs nitro-contracts -f
- ```
-
-* To deploy a new token and transfer it to another account, refer to this [doc](./nitro-token-ops.md)
-
-
-
-
- Nitro Bridge
-
-## Nitro Bridge
-
-* Stack:
-
-* Source repo:
-
-* Target dir: `/srv/bridge/bridge-deployment`
-
-* Cleanup an existing deployment if required:
-
- ```bash
- cd /srv/bridge
-
- # Stop the deployment
- laconic-so deployment --dir bridge-deployment stop --delete-volumes
-
- # Remove the deployment dir
- sudo rm -rf bridge-deployment
- ```
-
-### Setup
-
-* Execute the following command on deployment machine to get the deployed Nitro contract addresses along with the asset address:
-
- ```bash
- cd /srv/bridge
-
- laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "cat /app/deployment/nitro-addresses.json"
-
- # Expected output:
- # {
- # "1212": [
- # {
- # "name": "geth",
- # "chainId": "1212",
- # "contracts": {
- # "ConsensusApp": {
- # "address": "0xC98aD0B41B9224dad0605be32A9241dB9c67E2e8"
- # },
- # "NitroAdjudicator": {
- # "address": "0x7C22fdA703Cdf09eB8D3B5Adc81F723526713D0e"
- # },
- # "VirtualPaymentApp": {
- # "address": "0x778e4e6297E8BF04C67a20Ec989618d72eB4a19E"
- # },
- # "TestToken": {
- # "address": "0x02ebfB2706527C7310F2a7d9098b2BC61014C5F2"
- # }
- # }
- # }
- # ]
- # }
- ```
-
-* Switch to `testnet-ops/nitro-bridge-setup` directory on your local machine:
-
- ```bash
- cd testnet-ops/nitro-bridge-setup
- ```
-
-* Create the required vars file:
-
- ```bash
- cp bridge-vars.example.yml bridge-vars.yml
- ```
-
-* Edit `bridge-vars.yml` with required values:
-
- ```bash
- # WS endpoint
- nitro_chain_url: "wss://fixturenet-eth.laconic.com"
-
- # Private key for bridge Nitro address
- nitro_sc_pk: ""
-
- # Private key should correspond to a funded account on L1 and this account must own the Nitro contracts
- # It also needs to hold L1 tokens to fund Nitro channels
- nitro_chain_pk: "888814df89c4358d7ddb3fa4b0213e7331239a80e1f013eaa7b2deca2a41a218"
-
- # Deployed Nitro contract addresses
- na_address: ""
- vpa_address: ""
- ca_address: ""
- ```
-
-* Update the target dir in `setup-vars.yml`:
-
- ```bash
- sed -i 's|^nitro_directory:.*|nitro_directory: /srv/bridge|' setup-vars.yml
-
- # Will create deployment at /srv/bridge/nitro-contracts-deployment and /srv/bridge/bridge-deployment
- ```
-
-### Run
-
-* Start the bridge on remote host by executing `run-nitro-bridge.yml` Ansible playbook on your local machine:
-
- * Create a new `hosts.ini` file:
-
- ```bash
- cp ../hosts.example.ini hosts.ini
- ```
-
- * Edit the [`hosts.ini`](./hosts.ini) file to run the playbook on a remote machine:
-
- ```ini
- [deployment_host]
- ansible_host= ansible_user= ansible_ssh_common_args='-o ForwardAgent=yes'
- ```
-
- * Replace `` with `nitro_host`
- * Replace `` with the alias of your choice
- * Replace `` with the IP address or hostname of the target machine
- * Replace `` with the SSH username (e.g., dev, ubuntu)
-
- * Verify that you are able to connect to the host using the following command:
-
- ```bash
- ansible all -m ping -i hosts.ini -k
-
- # Expected output:
- # | SUCCESS => {
- # "ansible_facts": {
- # "discovered_interpreter_python": "/usr/bin/python3.10"
- # },
- # "changed": false,
- # "ping": "pong"
- # }
- ```
-
- * Execute the `run-nitro-bridge.yml` Ansible playbook for remote deployment:
-
- ```bash
- LANG=en_US.utf8 ansible-playbook -i hosts.ini run-nitro-bridge.yml --extra-vars='{ "target_host": "nitro_host"}' --user $USER -kK
- ```
-
-* Check logs for deployments on the remote machine:
-
- ```bash
- cd /srv/bridge
-
- # Check bridge logs, ensure that the node is running
- laconic-so deployment --dir bridge-deployment logs nitro-bridge -f
- ```
-
-* Create Nitro node config for users:
-
- ```bash
- cd /srv/bridge
-
- # Create required variables
- GETH_CHAIN_ID="1212"
-
- export NA_ADDRESS=$(laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "jq -r '.\"$GETH_CHAIN_ID\"[0].contracts.NitroAdjudicator.address' /app/deployment/nitro-addresses.json")
- export CA_ADDRESS=$(laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "jq -r '.\"$GETH_CHAIN_ID\"[0].contracts.ConsensusApp.address' /app/deployment/nitro-addresses.json")
- export VPA_ADDRESS=$(laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "jq -r '.\"$GETH_CHAIN_ID\"[0].contracts.VirtualPaymentApp.address' /app/deployment/nitro-addresses.json")
-
- export BRIDGE_NITRO_ADDRESS=$(laconic-so deployment --dir bridge-deployment exec nitro-rpc-client "nitro-rpc-client get-node-info -p 4005 -h nitro-bridge" | jq -r '.SCAddress')
-
- export BRIDGE_PEER_ID=$(laconic-so deployment --dir bridge-deployment exec nitro-rpc-client "nitro-rpc-client get-node-info -p 4005 -h nitro-bridge" | jq -r '.MessageServicePeerId')
-
- export L1_BRIDGE_MULTIADDR="/dns4/bridge.laconic.com/tcp/3005/p2p/$BRIDGE_PEER_ID"
- export L2_BRIDGE_MULTIADDR="/dns4/bridge.laconic.com/tcp/3006/p2p/$BRIDGE_PEER_ID"
-
- # Create the required config files
- cat < nitro-node-config.yml
- nitro_chain_url: "wss://fixturenet-eth.laconic.com"
- na_address: "$NA_ADDRESS"
- ca_address: "$CA_ADDRESS"
- vpa_address: "$VPA_ADDRESS"
- bridge_nitro_address: "$BRIDGE_NITRO_ADDRESS"
- nitro_l1_bridge_multiaddr: "$L1_BRIDGE_MULTIADDR"
- nitro_l2_bridge_multiaddr: "$L2_BRIDGE_MULTIADDR"
- EOF
-
- laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "jq --arg chainId \"$GETH_CHAIN_ID\" '{
- (\$chainId): [
- {
- \"name\": .[\$chainId][0].name,
- \"chainId\": .[\$chainId][0].chainId,
- \"contracts\": (
- .[\$chainId][0].contracts
- | to_entries
- | map(select(.key | in({\"ConsensusApp\":1, \"NitroAdjudicator\":1, \"VirtualPaymentApp\":1}) | not))
- | from_entries
- )
- }
- ]
- }' /app/deployment/nitro-addresses.json" > assets.json
- ```
-
- * The required config files should be generated at `/srv/bridge/nitro-node-config.yml` and `/srv/bridge/assets.json`
-
- * Check in the generated files at locations `ops/stage2/nitro-node-config.yml` and `ops/stage2/assets.json` within this repository respectively
-
-* List down L2 channels created by the bridge:
-
- ```bash
- laconic-so deployment --dir bridge-deployment exec nitro-rpc-client "nitro-rpc-client get-all-l2-channels -p 4005 -h nitro-bridge"
- ```
-
-
-
stage0 laconicd
@@ -955,7 +520,7 @@
### Setup
-* Same as that for [stage0 laconicd](#setup), not required if already done for stage0
+* Same as that for [stage0-laconicd](#stage0-laconicd), not required if already done for stage0
### Deployment
@@ -1145,29 +710,13 @@
### Setup
-* Clone the stack repo:
+* Create a tag for existing stage1 laconicd image:
```bash
- laconic-so fetch-stack git.vdb.to/cerc-io/fixturenet-laconicd-stack --pull
-
- # This should clone the fixturenet-laconicd-stack repo at `/home/dev/cerc/fixturenet-laconicd-stack`
+ docker tag cerc/laconicd:local cerc/laconicd-stage1:local
```
-* Clone required repositories:
-
- ```bash
- laconic-so --stack ~/cerc/fixturenet-laconicd-stack/stack-orchestrator/stacks/fixturenet-laconicd setup-repositories --pull
-
- # This should clone the laconicd repo at `/home/dev/cerc/laconicd`
- ```
-
-* Build the container images:
-
- ```bash
- laconic-so --stack ~/cerc/fixturenet-laconicd-stack/stack-orchestrator/stacks/fixturenet-laconicd build-containers --force-rebuild
-
- # This should create the "cerc/laconicd" Docker image
- ```
+* Same as that for [stage0-laconicd](#stage0-laconicd)
### Deployment
@@ -1206,6 +755,537 @@
+
+
+## laconic-console-testnet2
+
+* Stack:
+
+* Source repos:
+ *
+ *
+
+* Target dir: `/srv/console/laconic-console-testnet2-deployment`
+
+* Cleanup an existing deployment if required:
+
+ ```bash
+ cd /srv/console
+
+ # Stop the deployment
+ laconic-so deployment --dir laconic-console-testnet2-deployment stop --delete-volumes
+
+ # Remove the deployment dir
+ sudo rm -rf laconic-console-testnet2-deployment
+
+ # Remove the existing spec file
+ rm laconic-console-testnet2-spec.yml
+ ```
+
+### Setup
+
+* Create tags for existing stage1 images:
+
+ ```bash
+ docker tag cerc/laconic-console-host:local cerc/laconic-console-host-stage1:local
+
+ docker tag cerc/laconic-registry-cli:local cerc/laconic-registry-cli-stage1:local
+ ```
+
+* Same as that for [laconic-console](#laconic-console)
+
+### Deployment
+
+* Create a spec file for the deployment:
+
+ ```bash
+ cd /srv/console
+
+ laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-console deploy init --output laconic-console-testnet2-spec.yml
+ ```
+
+* Edit network in the spec file to map container ports to host ports:
+
+  ```yml
+ network:
+ ports:
+ console:
+ - '127.0.0.1:4002:80'
+ ```
+
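The `'127.0.0.1:4002:80'` entry follows Docker's `host_ip:host_port:container_port` convention, so the console is reachable only on the host's loopback interface (typically fronted by a reverse proxy). A minimal shell sketch of how such a mapping decomposes:

```shell
# Sketch: decompose a Docker-style port mapping from the spec file
mapping='127.0.0.1:4002:80'
host_ip=${mapping%%:*}          # interface to bind on the host: 127.0.0.1
rest=${mapping#*:}
host_port=${rest%%:*}           # port exposed on the host: 4002
container_port=${rest#*:}       # port inside the console container: 80
echo "http://$host_ip:$host_port -> container:$container_port"
```

Entries with a single port (e.g. `'30303'` in other spec files) let Docker pick an ephemeral host port instead.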
+* Create a deployment from the spec file:
+
+ ```bash
+ laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-console deploy create --spec-file laconic-console-testnet2-spec.yml --deployment-dir laconic-console-testnet2-deployment
+ ```
+
+* Update the configuration:
+
+ ```bash
+  cat <<EOF > laconic-console-testnet2-deployment/config.env
+ # Laconicd (hosted) GQL endpoint
+ LACONIC_HOSTED_ENDPOINT=https://laconicd-testnet2.laconic.com
+ EOF
+ ```
+
+### Start
+
+* Start the deployment:
+
+ ```bash
+ laconic-so deployment --dir laconic-console-testnet2-deployment start
+ ```
+
+* Check status:
+
+ ```bash
+  # List the console container
+ docker ps -a | grep console
+
+ # Follow logs for console container
+ laconic-so deployment --dir laconic-console-testnet2-deployment logs console -f
+ ```
+
+* The laconic console can now be viewed at
+
+
+
+
+
+## Fixturenet Eth
+
+* Stack:
+
+* Target dir: `/srv/fixturenet-eth/fixturenet-eth-deployment`
+
+* Cleanup an existing deployment if required:
+
+ ```bash
+ cd /srv/fixturenet-eth
+
+ # Stop the deployment
+ laconic-so deployment --dir fixturenet-eth-deployment stop --delete-volumes
+
+ # Remove the deployment dir
+ sudo rm -rf fixturenet-eth-deployment
+ ```
+
+### Setup
+
+* Create a `fixturenet-eth` dir if not present already and cd into it
+
+ ```bash
+  mkdir -p /srv/fixturenet-eth
+
+ cd /srv/fixturenet-eth
+ ```
+
+* Clone the stack repo:
+
+ ```bash
+ laconic-so fetch-stack git.vdb.to/cerc-io/fixturenet-eth-stacks --pull
+ ```
+
+* Clone required repositories:
+
+ ```bash
+ laconic-so --stack ~/cerc/fixturenet-eth-stacks/stack-orchestrator/stacks/fixturenet-eth setup-repositories --pull
+
+  # If this throws an error because a repository is already checked out to a branch/tag, remove all of that stack's repositories and re-run the command
+ # The repositories are located in $HOME/cerc by default
+ ```
+
+* Build the container images:
+
+ ```bash
+ # Remove any older foundry image with `latest` tag
+ docker rmi ghcr.io/foundry-rs/foundry:latest
+
+ laconic-so --stack ~/cerc/fixturenet-eth-stacks/stack-orchestrator/stacks/fixturenet-eth build-containers --force-rebuild
+
+  # If errors are thrown during the build, delete the old images used by this stack and retry
+ ```
+
+  * NOTE: this will take more than 10 minutes depending on the specs of your machine, and **requires** 16 GB of memory or more.
+
+ * Remove any dangling Docker images (to clear up space):
+
+ ```bash
+ docker image prune
+ ```
+
+* Create spec files for deployments, which will map the stack's ports and volumes to the host:
+
+ ```bash
+ laconic-so --stack ~/cerc/fixturenet-eth-stacks/stack-orchestrator/stacks/fixturenet-eth deploy init --output fixturenet-eth-spec.yml
+ ```
+
+* Configure ports:
+ * `fixturenet-eth-spec.yml`
+
+ ```yml
+ ...
+ network:
+ ports:
+ fixturenet-eth-bootnode-geth:
+ - '9898:9898'
+ - '30303'
+ fixturenet-eth-geth-1:
+ - '7545:8545'
+ - '7546:8546'
+ - '40000'
+ - '6060'
+ fixturenet-eth-lighthouse-1:
+ - '8001'
+ ...
+ ```
+
+* Create deployments:
+ Once you've made any needed changes to the spec files, create deployments from them:
+
+ ```bash
+ laconic-so --stack ~/cerc/fixturenet-eth-stacks/stack-orchestrator/stacks/fixturenet-eth deploy create --spec-file fixturenet-eth-spec.yml --deployment-dir fixturenet-eth-deployment
+ ```
+
+### Run
+
+* Start `fixturenet-eth-deployment` deployment:
+
+ ```bash
+ laconic-so deployment --dir fixturenet-eth-deployment start
+ ```
+
+
+
+
+
+## Nitro Contracts Deployment
+
+* Stack:
+
+* Source repo:
+
+* Target dir: `/srv/bridge/nitro-contracts-deployment`
+
+* Cleanup an existing deployment if required:
+
+ ```bash
+ cd /srv/bridge
+
+ # Stop the deployment
+ laconic-so deployment --dir nitro-contracts-deployment stop --delete-volumes
+
+ # Remove the deployment dir
+ sudo rm -rf nitro-contracts-deployment
+ ```
+
+### Setup
+
+* Switch to `testnet-ops/nitro-contracts-setup` directory on your local machine:
+
+ ```bash
+ cd testnet-ops/nitro-contracts-setup
+ ```
+
+* Copy the `contract-vars-example.yml` vars file:
+
+ ```bash
+ cp contract-vars.example.yml contract-vars.yml
+ ```
+
+* Edit [`contract-vars.yml`](./contract-vars.yml) and fill in the following values:
+
+  ```yml
+ # RPC endpoint
+ geth_url: "https://fixturenet-eth.laconic.com"
+
+ # Chain ID (Fixturenet-eth: 1212)
+ geth_chain_id: "1212"
+
+ # Private key for a funded L1 account, to be used for contract deployment on L1
+ # Required since this private key will be utilized by both L1 and L2 nodes of the bridge
+
+ geth_deployer_pk: "888814df89c4358d7ddb3fa4b0213e7331239a80e1f013eaa7b2deca2a41a218"
+
+ # Custom token to be deployed
+ token_name: "TestToken"
+ token_symbol: "TST"
+ initial_token_supply: "129600"
+ ```
+
+* Update the target dir in `setup-vars.yml`:
+
+ ```bash
+ sed -i 's|^nitro_directory:.*|nitro_directory: /srv/bridge|' setup-vars.yml
+
+ # Will create deployment at /srv/bridge/nitro-contracts-deployment
+ ```
+
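As a sanity check, the substitution above can be exercised on a throwaway file first; this sketch assumes GNU `sed` (BSD/macOS `sed -i` requires a suffix argument, e.g. `sed -i ''`):

```shell
# Sketch: the same in-place substitution on a minimal setup-vars.yml
tmpfile=$(mktemp)
printf 'target_host: nitro_host\nnitro_directory: /tmp/somewhere-else\n' > "$tmpfile"

# Rewrite whatever value nitro_directory had to /srv/bridge
sed -i 's|^nitro_directory:.*|nitro_directory: /srv/bridge|' "$tmpfile"

result=$(grep '^nitro_directory:' "$tmpfile")
echo "$result"
rm -f "$tmpfile"
```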
+### Run
+
+* Deploy nitro contracts on remote host by executing `deploy-contracts.yml` Ansible playbook on your local machine:
+
+ * Create a new `hosts.ini` file:
+
+ ```bash
+ cp ../hosts.example.ini hosts.ini
+ ```
+
+ * Edit the [`hosts.ini`](./hosts.ini) file to run the playbook on a remote machine:
+
+ ```ini
+ [deployment_host]
+    <host_name> ansible_host=<target_ip> ansible_user=<ssh_user> ansible_ssh_common_args='-o ForwardAgent=yes'
+ ```
+
+  * Replace `deployment_host` with `nitro_host`
+  * Replace `<host_name>` with the alias of your choice
+  * Replace `<target_ip>` with the IP address or hostname of the target machine
+  * Replace `<ssh_user>` with the SSH username (e.g., dev, ubuntu)
+
+ * Verify that you are able to connect to the host using the following command:
+
+ ```bash
+ ansible all -m ping -i hosts.ini -k
+
+ # Expected output:
+ # | SUCCESS => {
+ # "ansible_facts": {
+ # "discovered_interpreter_python": "/usr/bin/python3.10"
+ # },
+ # "changed": false,
+ # "ping": "pong"
+ # }
+ ```
+
+ * Execute the `deploy-contracts.yml` Ansible playbook for remote deployment:
+
+ ```bash
+ LANG=en_US.utf8 ansible-playbook -i hosts.ini deploy-contracts.yml --extra-vars='{ "target_host": "nitro_host"}' --user $USER -kK
+ ```
+
+* Check logs for deployment on the remote machine:
+
+ ```bash
+ cd /srv/bridge
+
+ # Check the nitro contract deployments
+ laconic-so deployment --dir nitro-contracts-deployment logs nitro-contracts -f
+ ```
+
+* To deploy a new token and transfer it to another account, refer to this [doc](./nitro-token-ops.md)
+
+
+
+
+
+## Nitro Bridge
+
+* Stack:
+
+* Source repo:
+
+* Target dir: `/srv/bridge/bridge-deployment`
+
+* Cleanup an existing deployment if required:
+
+ ```bash
+ cd /srv/bridge
+
+ # Stop the deployment
+ laconic-so deployment --dir bridge-deployment stop --delete-volumes
+
+ # Remove the deployment dir
+ sudo rm -rf bridge-deployment
+ ```
+
+### Setup
+
+* Execute the following command on the deployment machine to get the deployed Nitro contract addresses along with the asset address:
+
+ ```bash
+ cd /srv/bridge
+
+ laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "cat /app/deployment/nitro-addresses.json"
+
+ # Expected output:
+ # {
+ # "1212": [
+ # {
+ # "name": "geth",
+ # "chainId": "1212",
+ # "contracts": {
+ # "ConsensusApp": {
+ # "address": "0xC98aD0B41B9224dad0605be32A9241dB9c67E2e8"
+ # },
+ # "NitroAdjudicator": {
+ # "address": "0x7C22fdA703Cdf09eB8D3B5Adc81F723526713D0e"
+ # },
+ # "VirtualPaymentApp": {
+ # "address": "0x778e4e6297E8BF04C67a20Ec989618d72eB4a19E"
+ # },
+ # "TestToken": {
+ # "address": "0x02ebfB2706527C7310F2a7d9098b2BC61014C5F2"
+ # }
+ # }
+ # }
+ # ]
+ # }
+ ```
+
+* Switch to `testnet-ops/nitro-bridge-setup` directory on your local machine:
+
+ ```bash
+ cd testnet-ops/nitro-bridge-setup
+ ```
+
+* Create the required vars file:
+
+ ```bash
+ cp bridge-vars.example.yml bridge-vars.yml
+ ```
+
+* Edit `bridge-vars.yml` with required values:
+
+  ```yml
+ # WS endpoint
+ nitro_chain_url: "wss://fixturenet-eth.laconic.com"
+
+ # Private key for bridge Nitro address
+ nitro_sc_pk: ""
+
+ # Private key should correspond to a funded account on L1 and this account must own the Nitro contracts
+ # It also needs to hold L1 tokens to fund Nitro channels
+ nitro_chain_pk: "888814df89c4358d7ddb3fa4b0213e7331239a80e1f013eaa7b2deca2a41a218"
+
+ # Deployed Nitro contract addresses
+ na_address: ""
+ vpa_address: ""
+ ca_address: ""
+ ```
+
+* Update the target dir in `setup-vars.yml`:
+
+ ```bash
+ sed -i 's|^nitro_directory:.*|nitro_directory: /srv/bridge|' setup-vars.yml
+
+ # Will create deployment at /srv/bridge/nitro-contracts-deployment and /srv/bridge/bridge-deployment
+ ```
+
+### Run
+
+* Start the bridge on remote host by executing `run-nitro-bridge.yml` Ansible playbook on your local machine:
+
+ * Create a new `hosts.ini` file:
+
+ ```bash
+ cp ../hosts.example.ini hosts.ini
+ ```
+
+ * Edit the [`hosts.ini`](./hosts.ini) file to run the playbook on a remote machine:
+
+ ```ini
+ [deployment_host]
+    <host_name> ansible_host=<target_ip> ansible_user=<ssh_user> ansible_ssh_common_args='-o ForwardAgent=yes'
+ ```
+
+  * Replace `deployment_host` with `nitro_host`
+  * Replace `<host_name>` with the alias of your choice
+  * Replace `<target_ip>` with the IP address or hostname of the target machine
+  * Replace `<ssh_user>` with the SSH username (e.g., dev, ubuntu)
+
+ * Verify that you are able to connect to the host using the following command:
+
+ ```bash
+ ansible all -m ping -i hosts.ini -k
+
+ # Expected output:
+ # | SUCCESS => {
+ # "ansible_facts": {
+ # "discovered_interpreter_python": "/usr/bin/python3.10"
+ # },
+ # "changed": false,
+ # "ping": "pong"
+ # }
+ ```
+
+ * Execute the `run-nitro-bridge.yml` Ansible playbook for remote deployment:
+
+ ```bash
+ LANG=en_US.utf8 ansible-playbook -i hosts.ini run-nitro-bridge.yml --extra-vars='{ "target_host": "nitro_host"}' --user $USER -kK
+ ```
+
+* Check logs for deployments on the remote machine:
+
+ ```bash
+ cd /srv/bridge
+
+ # Check bridge logs, ensure that the node is running
+ laconic-so deployment --dir bridge-deployment logs nitro-bridge -f
+ ```
+
+* Create Nitro node config for users:
+
+ ```bash
+ cd /srv/bridge
+
+ # Create required variables
+ GETH_CHAIN_ID="1212"
+
+ export NA_ADDRESS=$(laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "jq -r '.\"$GETH_CHAIN_ID\"[0].contracts.NitroAdjudicator.address' /app/deployment/nitro-addresses.json")
+ export CA_ADDRESS=$(laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "jq -r '.\"$GETH_CHAIN_ID\"[0].contracts.ConsensusApp.address' /app/deployment/nitro-addresses.json")
+ export VPA_ADDRESS=$(laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "jq -r '.\"$GETH_CHAIN_ID\"[0].contracts.VirtualPaymentApp.address' /app/deployment/nitro-addresses.json")
+
+ export BRIDGE_NITRO_ADDRESS=$(laconic-so deployment --dir bridge-deployment exec nitro-rpc-client "nitro-rpc-client get-node-info -p 4005 -h nitro-bridge" | jq -r '.SCAddress')
+
+ export BRIDGE_PEER_ID=$(laconic-so deployment --dir bridge-deployment exec nitro-rpc-client "nitro-rpc-client get-node-info -p 4005 -h nitro-bridge" | jq -r '.MessageServicePeerId')
+
+ export L1_BRIDGE_MULTIADDR="/dns4/bridge.laconic.com/tcp/3005/p2p/$BRIDGE_PEER_ID"
+ export L2_BRIDGE_MULTIADDR="/dns4/bridge.laconic.com/tcp/3006/p2p/$BRIDGE_PEER_ID"
+
+ # Create the required config files
+  cat <<EOF > nitro-node-config.yml
+ nitro_chain_url: "wss://fixturenet-eth.laconic.com"
+ na_address: "$NA_ADDRESS"
+ ca_address: "$CA_ADDRESS"
+ vpa_address: "$VPA_ADDRESS"
+ bridge_nitro_address: "$BRIDGE_NITRO_ADDRESS"
+ nitro_l1_bridge_multiaddr: "$L1_BRIDGE_MULTIADDR"
+ nitro_l2_bridge_multiaddr: "$L2_BRIDGE_MULTIADDR"
+ EOF
+
+ laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "jq --arg chainId \"$GETH_CHAIN_ID\" '{
+ (\$chainId): [
+ {
+ \"name\": .[\$chainId][0].name,
+ \"chainId\": .[\$chainId][0].chainId,
+ \"contracts\": (
+ .[\$chainId][0].contracts
+ | to_entries
+ | map(select(.key | in({\"ConsensusApp\":1, \"NitroAdjudicator\":1, \"VirtualPaymentApp\":1}) | not))
+ | from_entries
+ )
+ }
+ ]
+ }' /app/deployment/nitro-addresses.json" > assets.json
+ ```
+
+ * The required config files should be generated at `/srv/bridge/nitro-node-config.yml` and `/srv/bridge/assets.json`
+
+  * Commit the generated files into this repository at `ops/stage2/nitro-node-config.yml` and `ops/stage2/assets.json` respectively
+
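The `jq` program embedded in the command above is easier to follow without the shell escaping. Run standalone (assuming `jq` is installed locally) against a hypothetical minimal `nitro-addresses.json`, it strips the three Nitro framework contracts and keeps only asset contracts such as `TestToken`:

```shell
# Sketch: the same jq filter, unescaped, on a hypothetical minimal input
sample=$(mktemp)
cat > "$sample" <<'EOF'
{
  "1212": [
    {
      "name": "geth",
      "chainId": "1212",
      "contracts": {
        "ConsensusApp":      { "address": "0x01" },
        "NitroAdjudicator":  { "address": "0x02" },
        "VirtualPaymentApp": { "address": "0x03" },
        "TestToken":         { "address": "0x04" }
      }
    }
  ]
}
EOF

assets=$(jq --arg chainId "1212" '{
  ($chainId): [
    {
      "name": .[$chainId][0].name,
      "chainId": .[$chainId][0].chainId,
      "contracts": (
        .[$chainId][0].contracts
        | to_entries
        | map(select(.key | in({"ConsensusApp":1, "NitroAdjudicator":1, "VirtualPaymentApp":1}) | not))
        | from_entries
      )
    }
  ]
}' "$sample")

# Only the asset contract should remain
remaining=$(echo "$assets" | jq -r '."1212"[0].contracts | keys | join(",")')
echo "$remaining"
rm -f "$sample"
```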
+* List down L2 channels created by the bridge:
+
+ ```bash
+ laconic-so deployment --dir bridge-deployment exec nitro-rpc-client "nitro-rpc-client get-all-l2-channels -p 4005 -h nitro-bridge"
+ ```
+
+
+
## Domains / Port Mappings
```bash
diff --git a/ops/stage1-to-stage2.md b/ops/stage1-to-stage2.md
index 42d6df5..e97a622 100644
--- a/ops/stage1-to-stage2.md
+++ b/ops/stage1-to-stage2.md
@@ -48,14 +48,6 @@
sudo chown dev:dev /srv/laconicd/stage1-laconicd-export.tar.gz
```
-* Get the exports locally:
-
- ```bash
- scp dev@:/srv/laconicd/stage1-laconicd-export.tar.gz
-
- # These files are to be used in the next initialization step, scp them over to the stage2 deploment machine
- ```
-
## Initialize stage2
* Copy over the stage1 state and node export archive to stage2 deployment machine
@@ -98,6 +90,12 @@
* Skips staking and validator data
+* Copy the genesis file out of the data directory:
+
+ ```bash
+ cp stage2-deployment/data/laconicd-data/config/genesis.json stage2-deployment
+ ```
+
## Start stage2
* Start the stage2 deployment:
@@ -127,10 +125,10 @@
* Get the genesis file:
```bash
- scp dev@:/srv/laconicd/stage2-deployment/data/laconicd-data/config/genesis.json
+ scp dev@:/srv/laconicd/stage2-deployment/genesis.json
```
-* Now users can follow the steps to [Upgrade to testnet stage2](../testnet-onboarding-validator.md#upgrade-to-testnet-stage2)
+* Now users can follow the steps to [Upgrade to testnet2](../testnet-onboarding-validator.md#upgrade-to-testnet2)
## Bank Transfer
diff --git a/ops/stage2/upgrade-node-to-stage2.sh b/ops/stage2/upgrade-node-to-stage2.sh
deleted file mode 100755
index de334cf..0000000
--- a/ops/stage2/upgrade-node-to-stage2.sh
+++ /dev/null
@@ -1,25 +0,0 @@
-#!/bin/bash
-
-# Exit on error
-set -e
-set -u
-
-NODE_HOME="$HOME/.laconicd"
-stage2_genesis="$NODE_HOME/tmp-stage2/genesis.json"
-
-if [ ! -f ${stage2_genesis} ]; then
- echo "stage2 genesis file not found, exiting..."
- exit 1
-fi
-
-# Remove data but keep keys
-laconicd cometbft unsafe-reset-all
-
-# Use provided genesis config
-cp $stage2_genesis $NODE_HOME/config/genesis.json
-
-# Set chain id in config
-chain_id=$(jq -r '.chain_id' $stage2_genesis)
-laconicd config set client chain-id $chain_id --home $NODE_HOME
-
-echo "Node data reset and ready for stage2!"
diff --git a/ops/stage2/upgrade-node-to-testnet2.sh b/ops/stage2/upgrade-node-to-testnet2.sh
new file mode 100755
index 0000000..2cb41e0
--- /dev/null
+++ b/ops/stage2/upgrade-node-to-testnet2.sh
@@ -0,0 +1,25 @@
+#!/bin/bash
+
+# Exit on error
+set -e
+set -u
+
+NODE_HOME="$HOME/.laconicd"
+testnet2_genesis="$NODE_HOME/tmp-testnet2/genesis.json"
+
+if [ ! -f "${testnet2_genesis}" ]; then
+ echo "testnet2 genesis file not found, exiting..."
+ exit 1
+fi
+
+# Remove data but keep keys
+laconicd cometbft unsafe-reset-all
+
+# Use provided genesis config
+cp "$testnet2_genesis" "$NODE_HOME/config/genesis.json"
+
+# Set chain id in config
+chain_id=$(jq -r '.chain_id' "$testnet2_genesis")
+laconicd config set client chain-id "$chain_id" --home "$NODE_HOME"
+
+echo "Node data reset and ready for testnet2!"
diff --git a/testnet-onboarding-validator.md b/testnet-onboarding-validator.md
index bb8588f..e01571c 100644
--- a/testnet-onboarding-validator.md
+++ b/testnet-onboarding-validator.md
@@ -280,18 +280,28 @@ laconic-so deployment --dir testnet-laconicd-deployment start
sudo rm -r testnet-laconicd-deployment
```
-## Upgrade to testnet stage2
+## Upgrade to testnet2
### Prerequisites
-* Testnet stage2 genesis file and peer node address
+* testnet2 genesis file and peer node address
* Mnemonic from the [wallet](https://wallet.laconic.com)
-* Have a testnet stage1 node running
+* A testnet stage1 node
### Setup
+* Stop the stage1 node:
+
+ ```bash
+ # In dir where stage1 deployment (`testnet-laconicd-deployment`) exists
+
+ TESTNET_DEPLOYMENT=$(pwd)/testnet-laconicd-deployment
+
+ laconic-so deployment --dir testnet-laconicd-deployment stop --delete-volumes
+ ```
+
* Clone / pull the stack repo:
```bash
@@ -318,20 +328,17 @@ laconic-so deployment --dir testnet-laconicd-deployment start
### Create a deployment
-* The existing deployment used for stage1 can be used for stage2
+* The existing stage1 deployment can be used for testnet2
-* Copy over the published testnet genesis file (`.json`) to data directory in deployment (`testnet-laconicd-deployment/data/laconicd-data/tmp-stage2`):
+* Copy over the published testnet2 genesis file (`.json`) to data directory in deployment (`testnet-laconicd-deployment/data/laconicd-data/tmp-testnet2`):
```bash
- # In dir where stage1 deployment (`testnet-laconicd-deployment`) exists
- TESTNET_DEPLOYMENT=$(pwd)/testnet-laconicd-deployment
-
# Example
- mkdir -p $TESTNET_DEPLOYMENT/data/laconicd-data/tmp-stage2
- cp genesis.json $TESTNET_DEPLOYMENT/data/laconicd-data/tmp-stage2/genesis.json
+ mkdir -p $TESTNET_DEPLOYMENT/data/laconicd-data/tmp-testnet2
+ cp genesis.json $TESTNET_DEPLOYMENT/data/laconicd-data/tmp-testnet2/genesis.json
```
-* Run script to reset node data and upgrade for stage2:
+* Run script to reset node data and upgrade for testnet2:
```bash
cd ~/cerc/testnet-laconicd-stack
@@ -339,7 +346,7 @@ laconic-so deployment --dir testnet-laconicd-deployment start
docker run -it \
-v $TESTNET_DEPLOYMENT/data/laconicd-data:/root/.laconicd \
-v ./ops/stage2:/scripts \
- cerc/laconicd:local bash -c "/scripts/upgrade-node-to-stage2.sh"
+ cerc/laconicd:local bash -c "/scripts/upgrade-node-to-testnet2.sh"
cd $TESTNET_DEPLOYMENT
```
@@ -352,7 +359,7 @@ laconic-so deployment --dir testnet-laconicd-deployment start
CERC_CHAIN_ID=laconic-testnet-2
# Comma separated list of nodes to keep persistent connections to
- # Example: "node-1-id@laconicd-testnet2.laconic.com:26656"
+ # Example: "node-1-id@laconicd-testnet2.laconic.com:36656"
# Use the provided node id
CERC_PEERS=""