diff --git a/README.md b/README.md
index ea70127..6c1775e 100644
--- a/README.md
+++ b/README.md
@@ -10,6 +10,7 @@ Stacks to run a node for laconic testnet
- [Update deployments after code changes](./ops/update-deployments.md)
- [Halt stage0 and start stage1](./ops/stage0-to-stage1.md)
+- [Halt stage1 and start stage2](./ops/stage1-to-stage2.md)
- [Create deployments from scratch (for reference only)](./ops/deployments-from-scratch.md)
- [Deploy and transfer new tokens for nitro operations](./ops/nitro-token-ops.md)
@@ -17,6 +18,10 @@ Stacks to run a node for laconic testnet
Follow steps in [testnet-onboarding-validator.md](./testnet-onboarding-validator.md) to onboard your participant and join as a validator on the LORO testnet
+## SAPO testnet
+
+Follow steps in [Upgrade to SAPO testnet](./testnet-onboarding-validator.md#upgrade-to-sapo-testnet) to upgrade your LORO testnet node to the SAPO testnet
+
## Run testnet Nitro Node
Follow steps in [testnet-nitro-node.md](./testnet-nitro-node.md) to run your Nitro node for the testnet
diff --git a/ops/deployments-from-scratch.md b/ops/deployments-from-scratch.md
index 96a939a..e4b821d 100644
--- a/ops/deployments-from-scratch.md
+++ b/ops/deployments-from-scratch.md
@@ -26,445 +26,6 @@
* laconic-so: see [installation](https://git.vdb.to/cerc-io/testnet-ops/src/branch/main/stack-orchestrator-setup/README.md#setup-stack-orchestrator)
-
- Fixturenet Eth
-
-## Fixturenet Eth
-
-* Stack:
-
-* Target dir: `/srv/fixturenet-eth/fixturenet-eth-deployment`
-
-* Cleanup an existing deployment if required:
-
- ```bash
- cd /srv/fixturenet-eth
-
- # Stop the deployment
- laconic-so deployment --dir fixturenet-eth-deployment stop --delete-volumes
-
- # Remove the deployment dir
- sudo rm -rf fixturenet-eth-deployment
- ```
-
-### Setup
-
-* Create a `fixturenet-eth` dir if not present already and cd into it
-
- ```bash
- mkdir /srv/fixturenet-eth
-
- cd /srv/fixturenet-eth
- ```
-
-* Clone the stack repo:
-
- ```bash
- laconic-so fetch-stack git.vdb.to/cerc-io/fixturenet-eth-stacks --pull
- ```
-
-* Clone required repositories:
-
- ```bash
- laconic-so --stack ~/cerc/fixturenet-eth-stacks/stack-orchestrator/stacks/fixturenet-eth setup-repositories --pull
-
- # If this throws an error as a result of being already checked out to a branch/tag in a repo, remove all repositories from that stack and re-run the command
- # The repositories are located in $HOME/cerc by default
- ```
-
-* Build the container images:
-
- ```bash
- # Remove any older foundry image with `latest` tag
- docker rmi ghcr.io/foundry-rs/foundry:latest
-
- laconic-so --stack ~/cerc/fixturenet-eth-stacks/stack-orchestrator/stacks/fixturenet-eth build-containers --force-rebuild
-
- # If errors are thrown during build, old images used by this stack would have to be deleted
- ```
-
- * NOTE: this will take >10 mins depending on the specs of your machine, and **requires** 16GB of memory or greater.
-
- * Remove any dangling Docker images (to clear up space):
-
- ```bash
- docker image prune
- ```
-
-* Create spec files for deployments, which will map the stack's ports and volumes to the host:
-
- ```bash
- laconic-so --stack ~/cerc/fixturenet-eth-stacks/stack-orchestrator/stacks/fixturenet-eth deploy init --output fixturenet-eth-spec.yml
- ```
-
-* Configure ports:
- * `fixturenet-eth-spec.yml`
-
- ```yml
- ...
- network:
- ports:
- fixturenet-eth-bootnode-geth:
- - '9898:9898'
- - '30303'
- fixturenet-eth-geth-1:
- - '7545:8545'
- - '7546:8546'
- - '40000'
- - '6060'
- fixturenet-eth-lighthouse-1:
- - '8001'
- ...
- ```
-
-* Create deployments:
- Once you've made any needed changes to the spec files, create deployments from them:
-
- ```bash
- laconic-so --stack ~/cerc/fixturenet-eth-stacks/stack-orchestrator/stacks/fixturenet-eth deploy create --spec-file fixturenet-eth-spec.yml --deployment-dir fixturenet-eth-deployment
- ```
-
-### Run
-
-* Start `fixturenet-eth-deployment` deployment:
-
- ```bash
- laconic-so deployment --dir fixturenet-eth-deployment start
- ```
-
-
-
-
- Nitro Contracts Deployment
-
-## Nitro Contracts Deployment
-
-* Stack:
-
-* Source repo:
-
-* Target dir: `/srv/bridge/nitro-contracts-deployment`
-
-* Cleanup an existing deployment if required:
-
- ```bash
- cd /srv/bridge
-
- # Stop the deployment
- laconic-so deployment --dir nitro-contracts-deployment stop --delete-volumes
-
- # Remove the deployment dir
- sudo rm -rf nitro-contracts-deployment
- ```
-
-### Setup
-
-* Switch to `testnet-ops/nitro-contracts-setup` directory on your local machine:
-
- ```bash
- cd testnet-ops/nitro-contracts-setup
- ```
-
-* Copy the `contract-vars-example.yml` vars file:
-
- ```bash
- cp contract-vars.example.yml contract-vars.yml
- ```
-
-* Edit [`contract-vars.yml`](./contract-vars.yml) and fill in the following values:
-
- ```bash
- # RPC endpoint
- geth_url: "https://fixturenet-eth.laconic.com"
-
- # Chain ID (Fixturenet-eth: 1212)
- geth_chain_id: "1212"
-
- # Private key for a funded L1 account, to be used for contract deployment on L1
- # Required since this private key will be utilized by both L1 and L2 nodes of the bridge
-
- geth_deployer_pk: "888814df89c4358d7ddb3fa4b0213e7331239a80e1f013eaa7b2deca2a41a218"
-
- # Custom token to be deployed
- token_name: "TestToken"
- token_symbol: "TST"
- initial_token_supply: "129600"
- ```
-
-* Edit the `setup-vars.yml` to update the target directory:
-
- ```bash
- ...
- nitro_directory: /srv/bridge
- ...
-
- # Will create deployment at /srv/bridge/nitro-contracts-deployment
- ```
-
-### Run
-
-* Deploy nitro contracts on remote host by executing `deploy-contracts.yml` Ansible playbook on your local machine:
-
- * Create a new `hosts.ini` file:
-
- ```bash
- cp ../hosts.example.ini hosts.ini
- ```
-
- * Edit the [`hosts.ini`](./hosts.ini) file to run the playbook on a remote machine:
-
- ```ini
- [deployment_host]
- ansible_host= ansible_user= ansible_ssh_common_args='-o ForwardAgent=yes'
- ```
-
- * Replace `` with `nitro_host`
- * Replace `` with the alias of your choice
- * Replace `` with the IP address or hostname of the target machine
- * Replace `` with the SSH username (e.g., dev, ubuntu)
-
- * Verify that you are able to connect to the host using the following command:
-
- ```bash
- ansible all -m ping -i hosts.ini -k
-
- # Expected output:
- # | SUCCESS => {
- # "ansible_facts": {
- # "discovered_interpreter_python": "/usr/bin/python3.10"
- # },
- # "changed": false,
- # "ping": "pong"
- # }
- ```
-
- * Execute the `deploy-contracts.yml` Ansible playbook for remote deployment:
-
- ```bash
- LANG=en_US.utf8 ansible-playbook -i hosts.ini deploy-contracts.yml --extra-vars='{ "target_host": "nitro_host"}' --user $USER -kK
- ```
-
-* Check logs for deployment on the remote machine:
-
- ```bash
- cd /srv/bridge
-
- # Check the nitro contract deployments
- laconic-so deployment --dir nitro-contracts-deployment logs nitro-contracts -f
- ```
-
-* To deploy a new token and transfer it to another account, refer to this [doc](./nitro-token-ops.md)
-
-
-
-
- Nitro Bridge
-
-## Nitro Bridge
-
-* Stack:
-
-* Source repo:
-
-* Target dir: `/srv/bridge/bridge-deployment`
-
-* Cleanup an existing deployment if required:
-
- ```bash
- cd /srv/bridge
-
- # Stop the deployment
- laconic-so deployment --dir bridge-deployment stop --delete-volumes
-
- # Remove the deployment dir
- sudo rm -rf bridge-deployment
- ```
-
-### Setup
-
-* Execute the following command on deployment machine to get the deployed Nitro contract addresses along with the asset address:
-
- ```bash
- cd /srv/bridge
-
- laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "cat /app/deployment/nitro-addresses.json"
-
- # Expected output:
- # {
- # "1212": [
- # {
- # "name": "geth",
- # "chainId": "1212",
- # "contracts": {
- # "ConsensusApp": {
- # "address": "0xC98aD0B41B9224dad0605be32A9241dB9c67E2e8"
- # },
- # "NitroAdjudicator": {
- # "address": "0x7C22fdA703Cdf09eB8D3B5Adc81F723526713D0e"
- # },
- # "VirtualPaymentApp": {
- # "address": "0x778e4e6297E8BF04C67a20Ec989618d72eB4a19E"
- # },
- # "TestToken": {
- # "address": "0x02ebfB2706527C7310F2a7d9098b2BC61014C5F2"
- # }
- # }
- # }
- # ]
- # }
- ```
-
-* Switch to `testnet-ops/nitro-bridge-setup` directory on your local machine:
-
- ```bash
- cd testnet-ops/nitro-bridge-setup
- ```
-
-* Create the required vars file:
-
- ```bash
- cp bridge-vars.example.yml bridge-vars.yml
- ```
-
-* Edit `bridge-vars.yml` with required values:
-
- ```bash
- # WS endpoint
- nitro_chain_url: "wss://fixturenet-eth.laconic.com"
-
- # Private key for bridge Nitro address
- nitro_sc_pk: ""
-
- # Private key should correspond to a funded account on L1 and this account must own the Nitro contracts
- # It also needs to hold L1 tokens to fund Nitro channels
- nitro_chain_pk: "888814df89c4358d7ddb3fa4b0213e7331239a80e1f013eaa7b2deca2a41a218"
-
- # Deployed Nitro contract addresses
- na_address: ""
- vpa_address: ""
- ca_address: ""
- ```
-
-* Edit the `setup-vars.yml` to update the target directory:
-
- ```bash
- ...
- nitro_directory: /srv/bridge
- ...
-
- # Will create deployment at /srv/bridge/nitro-contracts-deployment and /srv/bridge/bridge-deployment
- ```
-
-### Run
-
-* Start the bridge on remote host by executing `run-nitro-bridge.yml` Ansible playbook on your local machine:
-
- * Create a new `hosts.ini` file:
-
- ```bash
- cp ../hosts.example.ini hosts.ini
- ```
-
- * Edit the [`hosts.ini`](./hosts.ini) file to run the playbook on a remote machine:
-
- ```ini
- [deployment_host]
- ansible_host= ansible_user= ansible_ssh_common_args='-o ForwardAgent=yes'
- ```
-
- * Replace `` with `nitro_host`
- * Replace `` with the alias of your choice
- * Replace `` with the IP address or hostname of the target machine
- * Replace `` with the SSH username (e.g., dev, ubuntu)
-
- * Verify that you are able to connect to the host using the following command:
-
- ```bash
- ansible all -m ping -i hosts.ini -k
-
- # Expected output:
- # | SUCCESS => {
- # "ansible_facts": {
- # "discovered_interpreter_python": "/usr/bin/python3.10"
- # },
- # "changed": false,
- # "ping": "pong"
- # }
- ```
-
- * Execute the `run-nitro-bridge.yml` Ansible playbook for remote deployment:
-
- ```bash
- LANG=en_US.utf8 ansible-playbook -i hosts.ini run-nitro-bridge.yml --extra-vars='{ "target_host": "nitro_host"}' --user $USER -kK
- ```
-
-* Check logs for deployments on the remote machine:
-
- ```bash
- cd /srv/bridge
-
- # Check bridge logs, ensure that the node is running
- laconic-so deployment --dir bridge-deployment logs nitro-bridge -f
- ```
-
-* Create Nitro node config for users:
-
- ```bash
- cd /srv/bridge
-
- # Create required variables
- GETH_CHAIN_ID="1212"
-
- export NA_ADDRESS=$(laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "jq -r '.\"$GETH_CHAIN_ID\"[0].contracts.NitroAdjudicator.address' /app/deployment/nitro-addresses.json")
- export CA_ADDRESS=$(laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "jq -r '.\"$GETH_CHAIN_ID\"[0].contracts.ConsensusApp.address' /app/deployment/nitro-addresses.json")
- export VPA_ADDRESS=$(laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "jq -r '.\"$GETH_CHAIN_ID\"[0].contracts.VirtualPaymentApp.address' /app/deployment/nitro-addresses.json")
-
- export BRIDGE_NITRO_ADDRESS=$(laconic-so deployment --dir bridge-deployment exec nitro-rpc-client "nitro-rpc-client get-node-info -p 4005 -h nitro-bridge" | jq -r '.SCAddress')
-
- export BRIDGE_PEER_ID=$(laconic-so deployment --dir bridge-deployment exec nitro-rpc-client "nitro-rpc-client get-node-info -p 4005 -h nitro-bridge" | jq -r '.MessageServicePeerId')
-
- export L1_BRIDGE_MULTIADDR="/dns4/bridge.laconic.com/tcp/3005/p2p/$BRIDGE_PEER_ID"
- export L2_BRIDGE_MULTIADDR="/dns4/bridge.laconic.com/tcp/3006/p2p/$BRIDGE_PEER_ID"
-
- # Create the required config files
- cat < nitro-node-config.yml
- nitro_chain_url: "wss://fixturenet-eth.laconic.com"
- na_address: "$NA_ADDRESS"
- ca_address: "$CA_ADDRESS"
- vpa_address: "$VPA_ADDRESS"
- bridge_nitro_address: "$BRIDGE_NITRO_ADDRESS"
- nitro_l1_bridge_multiaddr: "$L1_BRIDGE_MULTIADDR"
- nitro_l2_bridge_multiaddr: "$L2_BRIDGE_MULTIADDR"
- EOF
-
- laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "jq --arg chainId \"$GETH_CHAIN_ID\" '{
- (\$chainId): [
- {
- \"name\": .[\$chainId][0].name,
- \"chainId\": .[\$chainId][0].chainId,
- \"contracts\": (
- .[\$chainId][0].contracts
- | to_entries
- | map(select(.key | in({\"ConsensusApp\":1, \"NitroAdjudicator\":1, \"VirtualPaymentApp\":1}) | not))
- | from_entries
- )
- }
- ]
- }' /app/deployment/nitro-addresses.json" > assets.json
- ```
-
- * The required config files should be generated at `/srv/bridge/nitro-node-config.yml` and `/srv/bridge/assets.json`
-
- * Check in the generated files at locations `ops/stage2/nitro-node-config.yml` and `ops/stage2/assets.json` within this repository respectively
-
-* List down L2 channels created by the bridge:
-
- ```bash
- laconic-so deployment --dir bridge-deployment exec nitro-rpc-client "nitro-rpc-client get-all-l2-channels -p 4005 -h nitro-bridge"
- ```
-
-
-
stage0 laconicd
@@ -718,123 +279,6 @@
-
- Shopify
-
-## Shopify
-
-* Stack:
-
-* Source repo:
-
-* Target dir: `/srv/shopify/laconic-shopify-deployment`
-
-* Cleanup an existing deployment if required:
-
- ```bash
- cd /srv/shopify
-
- # Stop the deployment
- laconic-so deployment --dir laconic-shopify-deployment stop
-
- # Remove the deployment dir
- sudo rm -rf laconic-shopify-deployment
-
- # Remove the existing spec file
- rm laconic-shopify-spec.yml
- ```
-
-### Setup
-
-* Clone the stack repo:
-
- ```bash
- laconic-so fetch-stack git.vdb.to/cerc-io/testnet-laconicd-stack --pull
-
- # This should clone the testnet-laconicd-stack repo at `/home/dev/cerc/testnet-laconicd-stack`
- ```
-
-* Clone required repositories:
-
- ```bash
- laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-shopify setup-repositories --pull
-
- # This should clone the laconicd repos at `/home/dev/cerc/shopify` and `/home/dev/cerc/laconic-faucet`
- ```
-
-* Build the container images:
-
- ```bash
- laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-shopify build-containers --force-rebuild
-
- # This should create the "cerc/laconic-shopify" and "cerc/laconic-shopify-faucet" Docker images
- ```
-
-### Deployment
-
-* Create a spec file for the deployment:
-
- ```bash
- cd /srv/shopify
-
- laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-shopify deploy init --output laconic-shopify-spec.yml
- ```
-
-* Create a deployment from the spec file:
-
- ```bash
- laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-shopify deploy create --spec-file laconic-shopify-spec.yml --deployment-dir laconic-shopify-deployment
- ```
-
-* Inside the `laconic-shopify-deployment` deployment directory, open `config.env` file and set following env variables:
-
- ```bash
- # Shopify GraphQL URL
- CERC_SHOPIFY_GRAPHQL_URL='https://6h071x-zw.myshopify.com/admin/api/2024-10/graphql.json'
-
- # Access token for Shopify API
- CERC_SHOPIFY_ACCESS_TOKEN=
-
- # Delay for fetching orders in milliseconds
- CERC_FETCH_ORDER_DELAY=10000
-
- # Number of line items per order in Get Orders GraphQL query
- CERC_ITEMS_PER_ORDER=10
-
- # Private key of a funded faucet account
- CERC_FAUCET_KEY=
-
- # laconicd RPC endpoint
- CERC_LACONICD_RPC_ENDPOINT='https://laconicd-sapo.laconic.com'
-
- # laconicd chain id
- CERC_LACONICD_CHAIN_ID=laconic-testnet-2
-
- # laconicd address prefix
- CERC_LACONICD_PREFIX=laconic
-
- # laconicd gas price
- CERC_LACONICD_GAS_PRICE=0.001
- ```
-
-### Start
-
-* Start the deployment:
-
- ```bash
- laconic-so deployment --dir laconic-shopify-deployment start
- ```
-
-* Check status:
-
- ```bash
- # Check logs for faucet and shopify containers
- laconic-so deployment --dir laconic-shopify-deployment logs shopify -f
- laconic-so deployment --dir laconic-shopify-deployment logs faucet -f
- ```
-
-
-
testnet-onboarding-app
@@ -1076,7 +520,7 @@
### Setup
-* Same as that for [stage0 laconicd](#setup), not required if already done for stage0
+* Same as that for [stage0-laconicd](#stage0-laconicd), not required if already done for stage0
### Deployment
@@ -1238,6 +682,300 @@
+---
+
+
+## stage2 laconicd
+
+* Stack:
+
+* Source repo:
+
+* Target dir: `/srv/laconicd/stage2-deployment`
+
+* Cleanup an existing deployment if required:
+
+ ```bash
+ cd /srv/laconicd
+
+ # Stop the deployment
+ laconic-so deployment --dir stage2-deployment stop --delete-volumes
+
+ # Remove the deployment dir
+ sudo rm -rf stage2-deployment
+
+ # Remove the existing spec file
+ rm stage2-spec.yml
+ ```
+
+### Setup
+
+* Create a tag for the existing stage1 laconicd image:
+
+ ```bash
+ docker tag cerc/laconicd:local cerc/laconicd-stage1:local
+ ```
+
+* Same as that for [stage0-laconicd](#stage0-laconicd)
+
+### Deployment
+
+* Create a spec file for the deployment:
+
+ ```bash
+ cd /srv/laconicd
+
+ laconic-so --stack ~/cerc/fixturenet-laconicd-stack/stack-orchestrator/stacks/fixturenet-laconicd deploy init --output stage2-spec.yml
+ ```
+
+* Edit network in the spec file to map container ports to host ports:
+
+    ```yml
+    # stage2-spec.yml
+    network:
+      ports:
+        laconicd:
+          - '6060'
+          - '36657:26657'
+          - '36656:26656'
+          - '3473:9473'
+          - '127.0.0.1:3090:9090'
+          - '127.0.0.1:3317:1317'
+    ```
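Each entry above uses Docker's `'HOST:CONTAINER'` publish syntax: for example, `'36657:26657'` exposes the container's RPC port 26657 on host port 36657, while a bare entry like `'6060'` lets Docker pick an ephemeral host port. A minimal shell sketch of how a plain two-part mapping splits (sample value taken from the spec above):

```shell
# Split a 'HOST:CONTAINER' port mapping into its two halves.
mapping='36657:26657'
host_port=${mapping%%:*}       # text before the first ':' -> host port
container_port=${mapping##*:}  # text after the last ':'  -> container port
echo "$host_port -> $container_port"   # prints "36657 -> 26657"
```

Three-part entries such as `'127.0.0.1:3090:9090'` additionally pin the host bind address, so this two-way split does not apply to them.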
+
+* Create a deployment from the spec file:
+
+ ```bash
+ laconic-so --stack ~/cerc/fixturenet-laconicd-stack/stack-orchestrator/stacks/fixturenet-laconicd deploy create --spec-file stage2-spec.yml --deployment-dir stage2-deployment
+ ```
+
+### Start
+
+* Follow [stage1-to-stage2.md](./stage1-to-stage2.md) to halt the stage1 deployment, initialize the stage2 chain, and start the stage2 deployment
+
+
+
+
+## laconic-console-testnet2
+
+* Stack:
+
+* Source repos:
+ *
+ *
+
+* Target dir: `/srv/console/laconic-console-testnet2-deployment`
+
+* Cleanup an existing deployment if required:
+
+ ```bash
+ cd /srv/console
+
+ # Stop the deployment
+ laconic-so deployment --dir laconic-console-testnet2-deployment stop --delete-volumes
+
+ # Remove the deployment dir
+ sudo rm -rf laconic-console-testnet2-deployment
+
+ # Remove the existing spec file
+ rm laconic-console-testnet2-spec.yml
+ ```
+
+### Setup
+
+* Create tags for the existing stage1 images:
+
+ ```bash
+ docker tag cerc/laconic-console-host:local cerc/laconic-console-host-stage1:local
+
+ docker tag cerc/laconic-registry-cli:local cerc/laconic-registry-cli-stage1:local
+ ```
+
+* Same as that for [laconic-console](#laconic-console)
+
+### Deployment
+
+* Create a spec file for the deployment:
+
+ ```bash
+ cd /srv/console
+
+ laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-console deploy init --output laconic-console-testnet2-spec.yml
+ ```
+
+* Edit network in the spec file to map container ports to host ports:
+
+    ```yml
+    network:
+      ports:
+        console:
+          - '127.0.0.1:4002:80'
+    ```
+
+* Create a deployment from the spec file:
+
+ ```bash
+ laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-console deploy create --spec-file laconic-console-testnet2-spec.yml --deployment-dir laconic-console-testnet2-deployment
+
+ # Place console deployment in the same cluster as stage2 deployment
+ cp ../stage2-deployment/deployment.yml laconic-console-testnet2-deployment/deployment.yml
+ ```
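The `cp` above makes the console deployment reuse the stage2 deployment's cluster name, so both land in the same Docker Compose cluster. A quick sanity check, sketched against throwaway files (assuming `deployment.yml` holds the cluster id; on the host, point the paths at the real `stage2-deployment` and `laconic-console-testnet2-deployment` dirs instead):

```shell
# Sketch: verify two deployment dirs reference the same cluster.
# The cluster id below is hypothetical; substitute real deployment.yml paths.
tmp=$(mktemp -d)
echo 'cluster-id: laconic-abc123' > "$tmp/stage2-deployment.yml"
cp "$tmp/stage2-deployment.yml" "$tmp/console-deployment.yml"
if diff -q "$tmp/stage2-deployment.yml" "$tmp/console-deployment.yml" > /dev/null; then
  echo "cluster config matches"
fi
```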
+
+* Update the configuration:
+
+    ```bash
+    cat <<EOF > laconic-console-testnet2-deployment/config.env
+    # Laconicd (hosted) GQL endpoint
+    LACONIC_HOSTED_ENDPOINT=https://laconicd-sapo.laconic.com
+
+    # laconicd chain id
+    CERC_LACONICD_CHAIN_ID=laconic-testnet-2
+    EOF
+    ```
+
+### Start
+
+* Start the deployment:
+
+ ```bash
+ laconic-so deployment --dir laconic-console-testnet2-deployment start
+ ```
+
+* Check status:
+
+ ```bash
+    # List the console container
+ docker ps -a | grep console
+
+ # Follow logs for console container
+ laconic-so deployment --dir laconic-console-testnet2-deployment logs console -f
+ ```
+
+* The laconic console can now be viewed at
+
+
+
+
+## Shopify
+
+* Stack:
+
+* Source repo:
+
+* Target dir: `/srv/shopify/laconic-shopify-deployment`
+
+* Cleanup an existing deployment if required:
+
+ ```bash
+ cd /srv/shopify
+
+ # Stop the deployment
+ laconic-so deployment --dir laconic-shopify-deployment stop
+
+ # Remove the deployment dir
+ sudo rm -rf laconic-shopify-deployment
+
+ # Remove the existing spec file
+ rm laconic-shopify-spec.yml
+ ```
+
+### Setup
+
+* Clone the stack repo:
+
+ ```bash
+ laconic-so fetch-stack git.vdb.to/cerc-io/testnet-laconicd-stack --pull
+
+ # This should clone the testnet-laconicd-stack repo at `/home/dev/cerc/testnet-laconicd-stack`
+ ```
+
+* Clone required repositories:
+
+ ```bash
+ laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-shopify setup-repositories --pull
+
+ # This should clone the laconicd repos at `/home/dev/cerc/shopify` and `/home/dev/cerc/laconic-faucet`
+ ```
+
+* Build the container images:
+
+ ```bash
+ laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-shopify build-containers --force-rebuild
+
+ # This should create the "cerc/laconic-shopify" and "cerc/laconic-shopify-faucet" Docker images
+ ```
+
+### Deployment
+
+* Create a spec file for the deployment:
+
+ ```bash
+ cd /srv/shopify
+
+ laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-shopify deploy init --output laconic-shopify-spec.yml
+ ```
+
+* Create a deployment from the spec file:
+
+ ```bash
+ laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-shopify deploy create --spec-file laconic-shopify-spec.yml --deployment-dir laconic-shopify-deployment
+ ```
+
+* Inside the `laconic-shopify-deployment` deployment directory, open the `config.env` file and set the following env variables:
+
+ ```bash
+ # Shopify GraphQL URL
+ CERC_SHOPIFY_GRAPHQL_URL='https://6h071x-zw.myshopify.com/admin/api/2024-10/graphql.json'
+
+ # Access token for Shopify API
+ CERC_SHOPIFY_ACCESS_TOKEN=
+
+ # Delay for fetching orders in milliseconds
+ CERC_FETCH_ORDER_DELAY=10000
+
+ # Number of line items per order in Get Orders GraphQL query
+ CERC_ITEMS_PER_ORDER=10
+
+ # Private key of a funded faucet account
+ CERC_FAUCET_KEY=
+
+ # laconicd RPC endpoint
+ CERC_LACONICD_RPC_ENDPOINT='https://laconicd-sapo.laconic.com'
+
+ # laconicd chain id
+ CERC_LACONICD_CHAIN_ID=laconic-testnet-2
+
+ # laconicd address prefix
+ CERC_LACONICD_PREFIX=laconic
+
+ # laconicd gas price
+ CERC_LACONICD_GAS_PRICE=0.001
+ ```
+
+### Start
+
+* Start the deployment:
+
+ ```bash
+ laconic-so deployment --dir laconic-shopify-deployment start
+ ```
+
+* Check status:
+
+ ```bash
+ # Check logs for faucet and shopify containers
+ laconic-so deployment --dir laconic-shopify-deployment logs shopify -f
+ laconic-so deployment --dir laconic-shopify-deployment logs faucet -f
+ ```
+
+
+
deploy-backend
@@ -1338,8 +1076,9 @@
# This should create the deployment directory at `/srv/deploy-backend/backend-deployment`
```
-* Modify file `backend-deployment/kubeconfig.yml` if required
- ```
+* Modify file `backend-deployment/kubeconfig.yml` if required:
+
+    ```yml
apiVersion: v1
...
contexts:
@@ -1349,6 +1088,7 @@
name: default
...
```
+
NOTE: `context.name` must be default to use with SO
* Fetch the config template file for the snowball backend:
@@ -1423,7 +1163,7 @@
* Verify status after the auction ends. It should list a completed status and a winner
- ```
+ ```bash
laconic-so deployment --dir laconic-console-testnet2-deployment exec cli "laconic registry auction get 73e0b082a198c396009ce748804a9060c674a10045365d262c1584f99d2771c1 -txKey $deployKey"
```
@@ -1440,7 +1180,6 @@
laconic-so deployment --dir laconic-console-testnet2-deployment exec cli "laconic registry authority whois deploy-vaasl --txKey $deployKey"
```
-
* Update `/srv/snowball/snowball-deployment/data/config/prod.toml`. Replace `` with your credentials. Use the `userKey`, `bondId` and `authority` that you set up
### Start
@@ -1502,6 +1241,7 @@
DEPLOYER_LRN=lrn://vaasl-provider/deployers/webapp-deployer-api.apps.vaasl.io
AUTHORITY=vaasl
```
+
Note: The bond id should be set to the `vaasl` authority
* Update required laconic config. You can use the same `userKey` and `bondId` used for deploying backend:
@@ -1519,6 +1259,7 @@
gasPrice: 0.001alnt
EOF
```
+
Note: The `userKey` account should own the authority `vaasl`
### Run
@@ -1533,16 +1274,484 @@
+
+## Fixturenet Eth
+
+* Stack:
+
+* Target dir: `/srv/fixturenet-eth/fixturenet-eth-deployment`
+
+* Cleanup an existing deployment if required:
+
+ ```bash
+ cd /srv/fixturenet-eth
+
+ # Stop the deployment
+ laconic-so deployment --dir fixturenet-eth-deployment stop --delete-volumes
+
+ # Remove the deployment dir
+ sudo rm -rf fixturenet-eth-deployment
+ ```
+
+### Setup
+
+* Create a `fixturenet-eth` dir if not present already and cd into it
+
+ ```bash
+ mkdir /srv/fixturenet-eth
+
+ cd /srv/fixturenet-eth
+ ```
+
+* Clone the stack repo:
+
+ ```bash
+ laconic-so fetch-stack git.vdb.to/cerc-io/fixturenet-eth-stacks --pull
+ ```
+
+* Clone required repositories:
+
+ ```bash
+ laconic-so --stack ~/cerc/fixturenet-eth-stacks/stack-orchestrator/stacks/fixturenet-eth setup-repositories --pull
+
+ # If this throws an error as a result of being already checked out to a branch/tag in a repo, remove all repositories from that stack and re-run the command
+ # The repositories are located in $HOME/cerc by default
+ ```
+
+* Build the container images:
+
+ ```bash
+ # Remove any older foundry image with `latest` tag
+ docker rmi ghcr.io/foundry-rs/foundry:latest
+
+ laconic-so --stack ~/cerc/fixturenet-eth-stacks/stack-orchestrator/stacks/fixturenet-eth build-containers --force-rebuild
+
+ # If errors are thrown during build, old images used by this stack would have to be deleted
+ ```
+
+ * NOTE: this will take >10 mins depending on the specs of your machine, and **requires** 16GB of memory or greater.
+
+ * Remove any dangling Docker images (to clear up space):
+
+ ```bash
+ docker image prune
+ ```
+
+* Create spec files for deployments, which will map the stack's ports and volumes to the host:
+
+ ```bash
+ laconic-so --stack ~/cerc/fixturenet-eth-stacks/stack-orchestrator/stacks/fixturenet-eth deploy init --output fixturenet-eth-spec.yml
+ ```
+
+* Configure ports:
+ * `fixturenet-eth-spec.yml`
+
+ ```yml
+ ...
+ network:
+ ports:
+ fixturenet-eth-bootnode-geth:
+ - '9898:9898'
+ - '30303'
+ fixturenet-eth-geth-1:
+ - '7545:8545'
+ - '7546:8546'
+ - '40000'
+ - '6060'
+ fixturenet-eth-lighthouse-1:
+ - '8001'
+ ...
+ ```
+
+* Create deployments:
+ Once you've made any needed changes to the spec files, create deployments from them:
+
+ ```bash
+ laconic-so --stack ~/cerc/fixturenet-eth-stacks/stack-orchestrator/stacks/fixturenet-eth deploy create --spec-file fixturenet-eth-spec.yml --deployment-dir fixturenet-eth-deployment
+ ```
+
+### Run
+
+* Start `fixturenet-eth-deployment` deployment:
+
+ ```bash
+ laconic-so deployment --dir fixturenet-eth-deployment start
+ ```
+
+
+
+
+## Nitro Contracts Deployment
+
+* Stack:
+
+* Source repo:
+
+* Target dir: `/srv/bridge/nitro-contracts-deployment`
+
+* Cleanup an existing deployment if required:
+
+ ```bash
+ cd /srv/bridge
+
+ # Stop the deployment
+ laconic-so deployment --dir nitro-contracts-deployment stop --delete-volumes
+
+ # Remove the deployment dir
+ sudo rm -rf nitro-contracts-deployment
+ ```
+
+### Setup
+
+* Switch to `testnet-ops/nitro-contracts-setup` directory on your local machine:
+
+ ```bash
+ cd testnet-ops/nitro-contracts-setup
+ ```
+
+* Copy the `contract-vars-example.yml` vars file:
+
+ ```bash
+ cp contract-vars.example.yml contract-vars.yml
+ ```
+
+* Get a funded account's private key from fixturenet-eth deployment:
+
+ ```bash
+ FUNDED_ACCOUNT_PK=$(curl --silent localhost:9898/accounts.csv | awk -F',' 'NR==1 {gsub(/^0x/, "", $NF); print $NF}')
+ echo $FUNDED_ACCOUNT_PK
+ ```
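The fixturenet-eth bootnode serves a CSV of pre-funded accounts on port 9898; the `awk` above takes the last comma-separated field of the first row and strips any leading `0x`. A self-contained sketch of that extraction, using a made-up row (the CSV layout is assumed from the command above):

```shell
# Hypothetical accounts.csv row: <address>,<password>,<private key>
row='0xabcd,password,0x888814df89c4358d7ddb3fa4b0213e7331239a80e1f013eaa7b2deca2a41a218'
pk=$(printf '%s\n' "$row" | awk -F',' 'NR==1 {gsub(/^0x/, "", $NF); print $NF}')
echo "$pk"   # the private key, without its 0x prefix
```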
+
+* Edit [`contract-vars.yml`](./contract-vars.yml) and fill in the following values:
+
+ ```bash
+ # RPC endpoint
+ geth_url: "https://fixturenet-eth.laconic.com"
+
+ # Chain ID (Fixturenet-eth: 1212)
+ geth_chain_id: "1212"
+
+ # Private key for a funded L1 account, to be used for contract deployment on L1
+ # Required since this private key will be utilized by both L1 and L2 nodes of the bridge
+
+ geth_deployer_pk: ""
+
+ # Custom token to be deployed
+ token_name: "TestToken"
+ token_symbol: "TST"
+ initial_token_supply: "129600"
+ ```
+
+* Edit the `setup-vars.yml` to update the target directory:
+
+ ```bash
+ ...
+ nitro_directory: /srv/bridge
+
+ # Will create deployment at /srv/bridge/nitro-contracts-deployment
+ ```
+
+### Run
+
+* Deploy the Nitro contracts on the remote host by executing the `deploy-contracts.yml` Ansible playbook on your local machine:
+
+ * Create a new `hosts.ini` file:
+
+ ```bash
+ cp ../hosts.example.ini hosts.ini
+ ```
+
+ * Edit the [`hosts.ini`](./hosts.ini) file to run the playbook on a remote machine:
+
+ ```ini
+ [deployment_host]
+ ansible_host= ansible_user= ansible_ssh_common_args='-o ForwardAgent=yes'
+ ```
+
+ * Replace `` with `nitro_host`
+ * Replace `` with the alias of your choice
+ * Replace `` with the IP address or hostname of the target machine
+ * Replace `` with the SSH username (e.g., dev, ubuntu)
+
+ * Verify that you are able to connect to the host using the following command:
+
+ ```bash
+ ansible all -m ping -i hosts.ini -k
+
+ # Expected output:
+ # | SUCCESS => {
+ # "ansible_facts": {
+ # "discovered_interpreter_python": "/usr/bin/python3.10"
+ # },
+ # "changed": false,
+ # "ping": "pong"
+ # }
+ ```
+
+ * Execute the `deploy-contracts.yml` Ansible playbook for remote deployment:
+
+ ```bash
+ LANG=en_US.utf8 ansible-playbook -i hosts.ini deploy-contracts.yml --extra-vars='{ "target_host": "nitro_host"}' --user $USER -kK
+ ```
+
+* Check logs for deployment on the remote machine:
+
+ ```bash
+ cd /srv/bridge
+
+ # Check the nitro contract deployments
+ laconic-so deployment --dir nitro-contracts-deployment logs nitro-contracts -f
+ ```
+
+* To deploy a new token and transfer it to another account, refer to this [doc](./nitro-token-ops.md)
+
+
+
+
+## Nitro Bridge
+
+* Stack:
+
+* Source repo:
+
+* Target dir: `/srv/bridge/bridge-deployment`
+
+* Cleanup an existing deployment if required:
+
+ ```bash
+ cd /srv/bridge
+
+ # Stop the deployment
+ laconic-so deployment --dir bridge-deployment stop --delete-volumes
+
+ # Remove the deployment dir
+ sudo rm -rf bridge-deployment
+ ```
+
+### Setup
+
+* Execute the following command on the deployment machine to get the deployed Nitro contract addresses along with the asset address:
+
+ ```bash
+ cd /srv/bridge
+
+ laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "cat /app/deployment/nitro-addresses.json"
+
+ # Expected output:
+ # {
+ # "1212": [
+ # {
+ # "name": "geth",
+ # "chainId": "1212",
+ # "contracts": {
+ # "ConsensusApp": {
+ # "address": "0xC98aD0B41B9224dad0605be32A9241dB9c67E2e8"
+ # },
+ # "NitroAdjudicator": {
+ # "address": "0x7C22fdA703Cdf09eB8D3B5Adc81F723526713D0e"
+ # },
+ # "VirtualPaymentApp": {
+ # "address": "0x778e4e6297E8BF04C67a20Ec989618d72eB4a19E"
+ # },
+ # "TestToken": {
+ # "address": "0x02ebfB2706527C7310F2a7d9098b2BC61014C5F2"
+ # }
+ # }
+ # }
+ # ]
+ # }
+ ```
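The addresses in that JSON feed the `na_address`, `vpa_address` and `ca_address` fields below. A hedged sketch of pulling one out with plain `grep`/`sed` (the sample file reproduces a fragment of the expected output above; on the host you would read `/app/deployment/nitro-addresses.json` via the `exec` command instead):

```shell
# Recreate a fragment of nitro-addresses.json, then extract the
# NitroAdjudicator address for use as na_address.
tmp=$(mktemp -d)
cat > "$tmp/nitro-addresses.json" <<'EOF'
{
  "1212": [
    {
      "contracts": {
        "NitroAdjudicator": {
          "address": "0x7C22fdA703Cdf09eB8D3B5Adc81F723526713D0e"
        }
      }
    }
  ]
}
EOF
na_address=$(grep -A 1 '"NitroAdjudicator"' "$tmp/nitro-addresses.json" \
  | sed -n 's/.*"address": *"\(0x[0-9a-fA-F]*\)".*/\1/p')
echo "$na_address"
```

`jq` (used elsewhere in these docs) does the same more robustly if it is available inside the container.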
+
+* Get a funded account's private key from fixturenet-eth deployment:
+
+ ```bash
+ FUNDED_ACCOUNT_PK=$(curl --silent localhost:9898/accounts.csv | awk -F',' 'NR==1 {gsub(/^0x/, "", $NF); print $NF}')
+ echo $FUNDED_ACCOUNT_PK
+ ```
+
+* Switch to `testnet-ops/nitro-bridge-setup` directory on your local machine:
+
+ ```bash
+ cd testnet-ops/nitro-bridge-setup
+ ```
+
+* Create the required vars file:
+
+ ```bash
+ cp bridge-vars.example.yml bridge-vars.yml
+ ```
+
+* Edit `bridge-vars.yml` with required values:
+
+ ```bash
+ # WS endpoint
+ nitro_chain_url: "wss://fixturenet-eth.laconic.com"
+
+ # Private key for bridge Nitro address
+ nitro_sc_pk: ""
+
+ # Private key should correspond to a funded account on L1 and this account must own the Nitro contracts
+ # It also needs to hold L1 tokens to fund Nitro channels
+ nitro_chain_pk: ""
+
+ # Deployed Nitro contract addresses
+ na_address: ""
+ vpa_address: ""
+ ca_address: ""
+ ```
+
+* Edit the `setup-vars.yml` to update the target directory:
+
+ ```bash
+ ...
+ nitro_directory: /srv/bridge
+
+ # Will create deployment at /srv/bridge/bridge-deployment
+ ```
+
+### Run
+
+* Start the bridge on the remote host by executing the `run-nitro-bridge.yml` Ansible playbook on your local machine:
+
+ * Create a new `hosts.ini` file:
+
+ ```bash
+ cp ../hosts.example.ini hosts.ini
+ ```
+
+ * Edit the [`hosts.ini`](./hosts.ini) file to run the playbook on a remote machine:
+
+ ```ini
+ [<deployment_host>]
+ <host_name> ansible_host=<target_ip> ansible_user=<ssh_user> ansible_ssh_common_args='-o ForwardAgent=yes'
+ ```
+
+ * Replace `<deployment_host>` with `nitro_host`
+ * Replace `<host_name>` with the alias of your choice
+ * Replace `<target_ip>` with the IP address or hostname of the target machine
+ * Replace `<ssh_user>` with the SSH username (e.g., dev, ubuntu)
+
+ * Verify that you are able to connect to the host using the following command:
+
+ ```bash
+ ansible all -m ping -i hosts.ini -k
+
+ # Expected output:
+ # | SUCCESS => {
+ # "ansible_facts": {
+ # "discovered_interpreter_python": "/usr/bin/python3.10"
+ # },
+ # "changed": false,
+ # "ping": "pong"
+ # }
+ ```
+
+ * Execute the `run-nitro-bridge.yml` Ansible playbook for remote deployment:
+
+ ```bash
+ LANG=en_US.utf8 ansible-playbook -i hosts.ini run-nitro-bridge.yml --extra-vars='{ "target_host": "nitro_host"}' --user $USER -kK
+ ```
+
+* Check logs for deployments on the remote machine:
+
+ ```bash
+ cd /srv/bridge
+
+ # Check bridge logs, ensure that the node is running
+ laconic-so deployment --dir bridge-deployment logs nitro-bridge -f
+ ```
+
+* Create Nitro node config for users:
+
+ ```bash
+ cd /srv/bridge
+
+ # Create required variables
+ GETH_CHAIN_ID="1212"
+
+ export NA_ADDRESS=$(laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "jq -r '.\"$GETH_CHAIN_ID\"[0].contracts.NitroAdjudicator.address' /app/deployment/nitro-addresses.json")
+ export CA_ADDRESS=$(laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "jq -r '.\"$GETH_CHAIN_ID\"[0].contracts.ConsensusApp.address' /app/deployment/nitro-addresses.json")
+ export VPA_ADDRESS=$(laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "jq -r '.\"$GETH_CHAIN_ID\"[0].contracts.VirtualPaymentApp.address' /app/deployment/nitro-addresses.json")
+
+ export BRIDGE_NITRO_ADDRESS=$(laconic-so deployment --dir bridge-deployment exec nitro-rpc-client "nitro-rpc-client get-node-info -p 4005 -h nitro-bridge" | jq -r '.SCAddress')
+
+ export BRIDGE_PEER_ID=$(laconic-so deployment --dir bridge-deployment exec nitro-rpc-client "nitro-rpc-client get-node-info -p 4005 -h nitro-bridge" | jq -r '.MessageServicePeerId')
+
+ export L1_BRIDGE_MULTIADDR="/dns4/bridge.laconic.com/tcp/3005/p2p/$BRIDGE_PEER_ID"
+ export L2_BRIDGE_MULTIADDR="/dns4/bridge.laconic.com/tcp/3006/p2p/$BRIDGE_PEER_ID"
+
+ # Create the required config files
+ cat <<EOF > nitro-node-config.yml
+ nitro_chain_url: "wss://fixturenet-eth.laconic.com"
+ na_address: "$NA_ADDRESS"
+ ca_address: "$CA_ADDRESS"
+ vpa_address: "$VPA_ADDRESS"
+ bridge_nitro_address: "$BRIDGE_NITRO_ADDRESS"
+ nitro_l1_bridge_multiaddr: "$L1_BRIDGE_MULTIADDR"
+ nitro_l2_bridge_multiaddr: "$L2_BRIDGE_MULTIADDR"
+ EOF
+
+ laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "jq --arg chainId \"$GETH_CHAIN_ID\" '{
+ (\$chainId): [
+ {
+ \"name\": .[\$chainId][0].name,
+ \"chainId\": .[\$chainId][0].chainId,
+ \"contracts\": (
+ .[\$chainId][0].contracts
+ | to_entries
+ | map(select(.key | in({\"ConsensusApp\":1, \"NitroAdjudicator\":1, \"VirtualPaymentApp\":1}) | not))
+ | from_entries
+ )
+ }
+ ]
+ }' /app/deployment/nitro-addresses.json" > assets.json
+ ```
+
+ * The required config files should be generated at `/srv/bridge/nitro-node-config.yml` and `/srv/bridge/assets.json`
+
+ * Check in the generated files at locations `ops/stage2/nitro-node-config.yml` and `ops/stage2/assets.json` within this repository respectively
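Applied to the sample `nitro-addresses.json` shown earlier, the `jq` filter above strips the three Nitro protocol contracts (`ConsensusApp`, `NitroAdjudicator`, `VirtualPaymentApp`), so the generated `assets.json` would contain only the remaining asset entries:

```json
{
  "1212": [
    {
      "name": "geth",
      "chainId": "1212",
      "contracts": {
        "TestToken": {
          "address": "0x02ebfB2706527C7310F2a7d9098b2BC61014C5F2"
        }
      }
    }
  ]
}
```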
+
+* List the L2 channels created by the bridge:
+
+ ```bash
+ laconic-so deployment --dir bridge-deployment exec nitro-rpc-client "nitro-rpc-client get-all-l2-channels -p 4005 -h nitro-bridge"
+ ```
+
+
+
## Domains / Port Mappings
```bash
# Machine 1
-https://laconicd.laconic.com -> 26657
-https://laconicd.laconic.com/api -> 9473/api
-https://faucet.laconic.com -> 4000
-https://loro-signup.laconic.com -> 3000
-https://wallet.laconic.com -> 5000
-https://loro-console.laconic.com -> 4001
+
+# LORO testnet
+https://laconicd.laconic.com -> 26657
+https://laconicd.laconic.com/api -> 9473/api
+https://laconicd.laconic.com/console -> 9473/console
+https://laconicd.laconic.com/graphql -> 9473/graphql
+https://faucet.laconic.com -> 4000
+https://loro-signup.laconic.com -> 3000
+https://wallet.laconic.com -> 5000
+https://loro-console.laconic.com -> 4001
+
+Open p2p ports:
+26656
+
+# SAPO testnet
+https://laconicd-sapo.laconic.com -> 36657
+https://laconicd-sapo.laconic.com/api -> 3473/api
+https://laconicd-sapo.laconic.com/console -> 3473/console
+https://laconicd-sapo.laconic.com/graphql -> 3473/graphql
+https://console-sapo.laconic.com -> 4002
+
+Open p2p ports:
+36656
# Machine 2
https://sepolia.laconic.com -> 8545
diff --git a/ops/stage0-to-stage1.md b/ops/stage0-to-stage1.md
index c76e615..89bf12f 100644
--- a/ops/stage0-to-stage1.md
+++ b/ops/stage0-to-stage1.md
@@ -93,7 +93,7 @@ Once all the participants have completed their onboarding, stage0 laconicd chain
scp dev@:/srv/laconicd/stage1-deployment/data/laconicd-data/config/genesis.json
```
-* Now users can follow the steps to [Join as a validator on stage1](https://git.vdb.to/cerc-io/testnet-laconicd-stack/src/branch/main/testnet-onboarding-validator.md#join-as-a-validator-on-stage1)
+* Now users can follow the steps to [Join as a validator on stage1](../testnet-onboarding-validator.md#join-as-a-validator-on-stage1)
## Bank Transfer
diff --git a/ops/stage1-to-stage2.md b/ops/stage1-to-stage2.md
new file mode 100644
index 0000000..d3453a7
--- /dev/null
+++ b/ops/stage1-to-stage2.md
@@ -0,0 +1,150 @@
+# Halt stage1 and start stage2
+
+## Login
+
+* Log in as `dev` user on the deployments VM
+
+* All the deployments are placed in the `/srv` directory:
+
+ ```bash
+ cd /srv
+ ```
+
+## Halt stage1
+
+* Confirm that the currently running node is running the stage1 chain:
+
+ ```bash
+ # On stage1 deployment machine
+ STAGE1_DEPLOYMENT=/srv/laconicd/testnet-laconicd-deployment
+
+ laconic-so deployment --dir $STAGE1_DEPLOYMENT logs laconicd -f --tail 30
+
+ # Note: stage1 node on deployments VM has been changed to run from /srv/laconicd/testnet-laconicd-deployment instead of /srv/laconicd/stage1-deployment
+ ```
+
+* Stop the stage1 deployment:
+
+ ```bash
+ laconic-so deployment --dir $STAGE1_DEPLOYMENT stop
+
+ # Stopping this deployment marks the end of testnet stage1
+ ```
+
+## Export stage1 state
+
+* Export the chain state:
+
+ ```bash
+ docker run -it \
+ -v $STAGE1_DEPLOYMENT/data/laconicd-data:/root/.laconicd \
+ cerc/laconicd-stage1:local bash -c "laconicd export | jq > /root/.laconicd/stage1-state.json"
+ ```
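A quick sanity check on the export is to list the modules captured under `app_state`; the sketch below runs the `jq` filter on a toy file (a real `stage1-state.json` carries far more modules):

```bash
# Toy exported state standing in for stage1-state.json (illustrative only)
printf '{"chain_id":"laconicd-1","app_state":{"auth":{},"bank":{},"registry":{}}}' \
  > /tmp/stage1-state-demo.json

# List the top-level modules present in the export
jq -r '.app_state | keys[]' /tmp/stage1-state-demo.json
# auth
# bank
# registry
```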
+
+* Archive the state along with the node config and keys:
+
+ ```bash
+ sudo tar -czf /srv/laconicd/stage1-laconicd-export.tar.gz --exclude="./data" --exclude="./tmp" -C $STAGE1_DEPLOYMENT/data/laconicd-data .
+ sudo chown dev:dev /srv/laconicd/stage1-laconicd-export.tar.gz
+ ```
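The `--exclude` flags above drop the bulky chain data while keeping config and keys; a throwaway demo of the same flags on made-up paths:

```bash
# Build a tiny stand-in for the laconicd-data dir (all paths made up)
demo=$(mktemp -d)
mkdir -p "$demo/src/config" "$demo/src/data" "$demo/src/tmp"
touch "$demo/src/config/genesis.json" "$demo/src/data/blockstore.db" "$demo/src/tmp/cache"

# Same exclude flags as the export command above
tar -czf "$demo/export.tar.gz" --exclude="./data" --exclude="./tmp" -C "$demo/src" .

# Only ./config/genesis.json survives; ./data and ./tmp are skipped
listing=$(tar -tzf "$demo/export.tar.gz")
echo "$listing"
rm -rf "$demo"
```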
+
+## Initialize stage2
+
+* Copy over the stage1 state and node export archive to the stage2 deployment machine
+
+* Extract the stage1 state and node config to stage2 deployment dir:
+
+ ```bash
+ # On stage2 deployment machine
+ cd /srv/laconicd
+
+ # Unarchive
+ tar -xzf stage1-laconicd-export.tar.gz -C stage2-deployment/data/laconicd-data
+
+ # Verify contents
+ ll stage2-deployment/data/laconicd-data
+ ```
+
+* Initialize stage2 chain:
+
+ ```bash
+ DEPLOYMENT_DIR=$(pwd)
+
+ cd ~/cerc/fixturenet-laconicd-stack/stack-orchestrator/stacks/fixturenet-laconicd
+
+ STAGE2_CHAIN_ID=laconic-testnet-2
+ ./scripts/initialize-stage2.sh $DEPLOYMENT_DIR/stage2-deployment $STAGE2_CHAIN_ID LaconicStage2 os 1000000000000000
+
+ # Enter the keyring passphrase for account from stage1 when prompted
+
+ cd $DEPLOYMENT_DIR
+ ```
+
+ * Resets the node data (`unsafe-reset-all`)
+
+ * Initializes the `stage2-deployment` node
+
+ * Generates the genesis file for stage2 with stage1 state
+
+ * Carries over accounts, balances and laconicd modules from stage1
+
+ * Skips staking and validator data
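As a hypothetical illustration of that carry-over (not the actual `initialize-stage2.sh` logic), the shape of the transformation is roughly: keep `app_state`, swap in the stage2 chain id, and drop validator data:

```bash
# Toy stage1 export; a real export carries the full app_state (illustrative only)
cat <<'EOF' > /tmp/stage1-export-demo.json
{
  "chain_id": "laconic_9000-1",
  "app_state": {
    "bank": {"balances": [{"address": "laconic1xyz", "coins": [{"denom": "alnt", "amount": "100"}]}]},
    "staking": {"validators": [{"moniker": "stage1-validator"}]}
  }
}
EOF

# Carry over accounts/balances, set the stage2 chain id, reset validators
jq '.chain_id = "laconic-testnet-2"
    | .app_state.staking.validators = []' /tmp/stage1-export-demo.json > /tmp/stage2-genesis-demo.json
```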
+
+* Copy the genesis file out of the data directory:
+
+ ```bash
+ cp stage2-deployment/data/laconicd-data/config/genesis.json stage2-deployment
+ ```
+
+## Start stage2
+
+* Start the stage2 deployment:
+
+ ```bash
+ laconic-so deployment --dir stage2-deployment start
+ ```
+
+* Check status of stage2 laconicd:
+
+ ```bash
+ # List the containers and check health status
+ docker ps -a | grep laconicd
+
+ # Follow logs for laconicd container, check that new blocks are getting created
+ laconic-so deployment --dir stage2-deployment logs laconicd -f
+ ```
+
+* Get the node's peer address and the stage2 genesis file to share with the participants:
+
+ * Get the node id:
+
+ ```bash
+ echo $(laconic-so deployment --dir stage2-deployment exec laconicd "laconicd cometbft show-node-id")@laconicd-sapo.laconic.com:36656
+ ```
+
+ * Get the genesis file:
+
+ ```bash
+ scp dev@:/srv/laconicd/stage2-deployment/genesis.json
+ ```
+
+* Now users can follow the steps to [Upgrade to SAPO testnet](../testnet-onboarding-validator.md#upgrade-to-sapo-testnet)
+
+## Bank Transfer
+
+* Transfer tokens to an address:
+
+ ```bash
+ cd /srv/laconicd
+
+ RECEIVER_ADDRESS=
+ AMOUNT=
+
+ laconic-so deployment --dir stage2-deployment exec laconicd "laconicd tx bank send alice ${RECEIVER_ADDRESS} ${AMOUNT}alnt --from alice --fees 1000alnt"
+ ```
+
+* Check balance:
+
+ ```bash
+ laconic-so deployment --dir stage2-deployment exec laconicd "laconicd query bank balances ${RECEIVER_ADDRESS}"
+ ```
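The balance query prints output of roughly this shape (standard Cosmos SDK yaml output; amounts illustrative and exact fields may vary by SDK version):

```yaml
balances:
- amount: "<AMOUNT>"
  denom: alnt
pagination:
  next_key: null
  total: "0"
```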
diff --git a/ops/stage2/upgrade-node-to-testnet2.sh b/ops/stage2/upgrade-node-to-testnet2.sh
new file mode 100755
index 0000000..2cb41e0
--- /dev/null
+++ b/ops/stage2/upgrade-node-to-testnet2.sh
@@ -0,0 +1,25 @@
+#!/bin/bash
+
+# Exit on error
+set -e
+set -u
+
+NODE_HOME="$HOME/.laconicd"
+testnet2_genesis="$NODE_HOME/tmp-testnet2/genesis.json"
+
+if [ ! -f ${testnet2_genesis} ]; then
+ echo "testnet2 genesis file not found, exiting..."
+ exit 1
+fi
+
+# Remove data but keep keys
+laconicd cometbft unsafe-reset-all
+
+# Use provided genesis config
+cp $testnet2_genesis $NODE_HOME/config/genesis.json
+
+# Set chain id in config
+chain_id=$(jq -r '.chain_id' $testnet2_genesis)
+laconicd config set client chain-id $chain_id --home $NODE_HOME
+
+echo "Node data reset and ready for testnet2!"
diff --git a/stack-orchestrator/compose/docker-compose-laconic-console.yml b/stack-orchestrator/compose/docker-compose-laconic-console.yml
index 54769d9..30a11e9 100644
--- a/stack-orchestrator/compose/docker-compose-laconic-console.yml
+++ b/stack-orchestrator/compose/docker-compose-laconic-console.yml
@@ -9,7 +9,8 @@ services:
CERC_LACONICD_USER_KEY: ${CERC_LACONICD_USER_KEY}
CERC_LACONICD_BOND_ID: ${CERC_LACONICD_BOND_ID}
CERC_LACONICD_GAS: ${CERC_LACONICD_GAS:-200000}
- CERC_LACONICD_FEES: ${CERC_LACONICD_FEES:-200000alnt}
+ CERC_LACONICD_FEES: ${CERC_LACONICD_FEES:-200alnt}
+ CERC_LACONICD_GASPRICE: ${CERC_LACONICD_GASPRICE:-0.001alnt}
volumes:
- ../config/laconic-console/cli/create-config.sh:/app/create-config.sh
- laconic-registry-data:/laconic-registry-data
diff --git a/stack-orchestrator/compose/docker-compose-testnet-laconicd.yml b/stack-orchestrator/compose/docker-compose-testnet-laconicd.yml
index fc7c70e..5a1a22f 100644
--- a/stack-orchestrator/compose/docker-compose-testnet-laconicd.yml
+++ b/stack-orchestrator/compose/docker-compose-testnet-laconicd.yml
@@ -7,6 +7,7 @@ services:
CERC_MONIKER: ${CERC_MONIKER:-TestnetNode}
CERC_CHAIN_ID: ${CERC_CHAIN_ID:-laconic_9000-1}
CERC_PEERS: ${CERC_PEERS}
+ MIN_GAS_PRICE: ${MIN_GAS_PRICE:-0.001}
CERC_LOGLEVEL: ${CERC_LOGLEVEL:-info}
volumes:
- laconicd-data:/root/.laconicd
diff --git a/stack-orchestrator/config/laconic-console/cli/create-config.sh b/stack-orchestrator/config/laconic-console/cli/create-config.sh
index 1442eff..ea50ec2 100755
--- a/stack-orchestrator/config/laconic-console/cli/create-config.sh
+++ b/stack-orchestrator/config/laconic-console/cli/create-config.sh
@@ -18,6 +18,7 @@ services:
chainId: ${CERC_LACONICD_CHAIN_ID}
gas: ${CERC_LACONICD_GAS}
fees: ${CERC_LACONICD_FEES}
+ gasPrice: ${CERC_LACONICD_GASPRICE}
EOF
echo "Exported config to $config_file"
diff --git a/stack-orchestrator/config/laconicd/run-laconicd.sh b/stack-orchestrator/config/laconicd/run-laconicd.sh
index 3ae415f..a5d70c9 100755
--- a/stack-orchestrator/config/laconicd/run-laconicd.sh
+++ b/stack-orchestrator/config/laconicd/run-laconicd.sh
@@ -21,6 +21,7 @@ echo "Env:"
echo "Moniker: $CERC_MONIKER"
echo "Chain Id: $CERC_CHAIN_ID"
echo "Persistent peers: $CERC_PEERS"
+echo "Min gas price: $MIN_GAS_PRICE"
echo "Log level: $CERC_LOGLEVEL"
NODE_HOME=/root/.laconicd
@@ -40,12 +41,16 @@ else
echo "Node data dir $NODE_HOME/data already exists, skipping initialization..."
fi
+# Enable CORS
+sed -i 's/cors_allowed_origins.*$/cors_allowed_origins = ["*"]/' $NODE_HOME/config/config.toml
+
# Update config with persistent peers
sed -i "s/^persistent_peers *=.*/persistent_peers = \"$CERC_PEERS\"/g" $NODE_HOME/config/config.toml
echo "Starting laconicd node..."
laconicd start \
--api.enable \
+ --minimum-gas-prices=${MIN_GAS_PRICE}alnt \
--rpc.laddr="tcp://0.0.0.0:26657" \
--gql-playground --gql-server \
--log_level $CERC_LOGLEVEL \
diff --git a/stack-orchestrator/stacks/laconic-console/README.md b/stack-orchestrator/stacks/laconic-console/README.md
index 7f7bd64..0a7793f 100644
--- a/stack-orchestrator/stacks/laconic-console/README.md
+++ b/stack-orchestrator/stacks/laconic-console/README.md
@@ -83,9 +83,14 @@ Instructions for running laconic registry CLI and console
# Gas limit for txs (default: 200000)
CERC_LACONICD_GAS=
- # Max fees for txs (default: 200000alnt)
+ # Max fees for txs (default: 200alnt)
CERC_LACONICD_FEES=
+ # Gas price to use for txs (default: 0.001alnt)
+ # Used for automatic fee calculation; gas and fees need not be set in that case
+ # Reference: https://git.vdb.to/cerc-io/laconic-registry-cli#gas-and-fees
+ CERC_LACONICD_GASPRICE=
+
# Console configuration
# Laconicd (hosted) GQL endpoint (default: http://localhost:9473)
diff --git a/stack-orchestrator/stacks/testnet-laconicd/README.md b/stack-orchestrator/stacks/testnet-laconicd/README.md
index 8edc70f..b9ffd90 100644
--- a/stack-orchestrator/stacks/testnet-laconicd/README.md
+++ b/stack-orchestrator/stacks/testnet-laconicd/README.md
@@ -122,6 +122,9 @@ Instructions for running a laconicd testnet full node and joining as a validator
# Output log level (default: info)
CERC_LOGLEVEL=
+
+ # Minimum gas price in alnt to accept for transactions (default: "0.001")
+ MIN_GAS_PRICE=
```
* Inside the `laconic-console-deployment` deployment directory, open `config.env` file and set following env variables:
@@ -143,9 +146,14 @@ Instructions for running a laconicd testnet full node and joining as a validator
# Gas limit for txs (default: 200000)
CERC_LACONICD_GAS=
- # Max fees for txs (default: 200000alnt)
+ # Max fees for txs (default: 200alnt)
CERC_LACONICD_FEES=
+ # Gas price to use for txs (default: 0.001alnt)
+ # Used for automatic fee calculation; gas and fees need not be set in that case
+ # Reference: https://git.vdb.to/cerc-io/laconic-registry-cli#gas-and-fees
+ CERC_LACONICD_GASPRICE=
+
# Console configuration
# Laconicd (hosted) GQL endpoint (default: http://localhost:9473)
diff --git a/testnet-onboarding-validator.md b/testnet-onboarding-validator.md
index 6f94e9c..9e2c9e9 100644
--- a/testnet-onboarding-validator.md
+++ b/testnet-onboarding-validator.md
@@ -63,6 +63,8 @@
```bash
laconic-so fetch-stack git.vdb.to/cerc-io/testnet-laconicd-stack
+
+ # See stack documentation stack-orchestrator/stacks/testnet-laconicd/README.md for more details
```
* Clone required repositories:
@@ -126,6 +128,8 @@
* Inside the `testnet-laconicd-deployment` deployment directory, open `config.env` file and set following env variables:
```bash
+ CERC_CHAIN_ID=laconic_9000-1
+
# Comma separated list of nodes to keep persistent connections to
# Example: "node-1-id@laconicd.laconic.com:26656"
# Use the provided node id
@@ -197,12 +201,15 @@ laconic-so deployment --dir testnet-laconicd-deployment start
* From wallet, approve and send transaction to stage1 laconicd chain
+
+
* Alternatively, create a validator using the laconicd CLI:
* Import a key pair:
```bash
KEY_NAME=alice
+ CHAIN_ID=laconic_9000-1
# Restore existing key with mnemonic seed phrase
# You will be prompted to enter mnemonic seed
@@ -243,7 +250,7 @@ laconic-so deployment --dir testnet-laconicd-deployment start
```bash
laconic-so deployment --dir testnet-laconicd-deployment exec laconicd "laconicd tx staking create-validator my-validator.json \
--fees 500000alnt \
- --chain-id=laconic_9000-1 \
+ --chain-id=$CHAIN_ID \
--from $KEY_NAME"
```
@@ -280,6 +287,109 @@ laconic-so deployment --dir testnet-laconicd-deployment start
sudo rm -r testnet-laconicd-deployment
```
+## Upgrade to SAPO testnet
+
+### Prerequisites
+
+* SAPO testnet (testnet2) genesis file and peer node address
+
+* A testnet stage1 node
+
+ * For setting up a fresh testnet2 node, follow [Join as a validator](#join-as-a-validator-on-stage1) instead, but use the testnet2 chain id (`laconic-testnet-2`)
+
+### Setup
+
+* Stop the stage1 node:
+
+ ```bash
+ # In dir where stage1 node deployment (`testnet-laconicd-deployment`) exists
+
+ TESTNET_DEPLOYMENT=$(pwd)/testnet-laconicd-deployment
+
+ laconic-so deployment --dir testnet-laconicd-deployment stop --delete-volumes
+ ```
+
+* Clone / pull the stack repo:
+
+ ```bash
+ laconic-so fetch-stack git.vdb.to/cerc-io/testnet-laconicd-stack --pull
+ ```
+
+* Clone / pull the required repositories:
+
+ ```bash
+ laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/testnet-laconicd setup-repositories --pull
+
+ # If this fails because a repo is already checked out to a branch/tag, remove the repositories and re-run the command
+ ```
+
+ Note: Make sure the latest `cerc-io/laconicd` changes have been pulled
+
+* Build the container images:
+
+ ```bash
+ laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/testnet-laconicd build-containers --force-rebuild
+ ```
+
+ This should create the following docker images locally with the latest changes:
+
+ * `cerc/laconicd`
+
+### Create a deployment
+
+* The existing stage1 deployment can be used for testnet2
+
+* Copy the published testnet2 genesis file (`.json`) to the data directory in the deployment (`testnet-laconicd-deployment/data/laconicd-data/tmp-testnet2`):
+
+ ```bash
+ # Example
+ mkdir -p $TESTNET_DEPLOYMENT/data/laconicd-data/tmp-testnet2
+ cp genesis.json $TESTNET_DEPLOYMENT/data/laconicd-data/tmp-testnet2/genesis.json
+ ```
+
+* Run script to reset node data and upgrade for testnet2:
+
+ ```bash
+ cd ~/cerc/testnet-laconicd-stack
+
+ docker run -it \
+ -v $TESTNET_DEPLOYMENT/data/laconicd-data:/root/.laconicd \
+ -v ./ops/stage2:/scripts \
+ cerc/laconicd:local bash -c "/scripts/upgrade-node-to-testnet2.sh"
+
+ cd -
+ ```
+
+### Configuration
+
+* Inside the `testnet-laconicd-deployment` deployment directory, open `config.env` file and set following env variables:
+
+ ```bash
+ CERC_CHAIN_ID=laconic-testnet-2
+
+ # Comma separated list of nodes to keep persistent connections to
+ # Example: "node-1-id@laconicd-sapo.laconic.com:36656"
+ # Use the provided node id
+ CERC_PEERS=""
+
+ # A custom human readable name for this node
+ CERC_MONIKER=
+ ```
+
+### Start the deployment
+
+```bash
+laconic-so deployment --dir testnet-laconicd-deployment start
+```
+
+See [Check status](#check-status) to follow the sync status of your node
+
+See [Join as testnet validator](#create-validator-using-cli) to join as a validator using laconicd CLI (use chain id `laconic-testnet-2`)
+
+### Clean up
+
+* Same as [Clean up](#clean-up)
+
## Troubleshooting
* If you face any issues in the onboarding app or the web-wallet, clear your browser cache and reload