# Create deployments from scratch (for reference only)
## Login

- Log in as `dev` user on the deployments machine

- All the deployments are placed in the `/srv` directory:

  ```bash
  cd /srv
  ```
## Prerequisites

- Local:

  - Clone the `cerc-io/testnet-ops` repository:

    ```bash
    git clone git@git.vdb.to:cerc-io/testnet-ops.git
    ```

  - Ansible: see installation instructions

- On deployments machine(s):

  - laconic-so: see installation instructions
## stage0 laconicd
- Source repo: https://git.vdb.to/cerc-io/laconicd

- Target dir: `/srv/laconicd/stage0-deployment`

- Cleanup an existing deployment if required:

  ```bash
  cd /srv/laconicd

  # Stop the deployment
  laconic-so deployment --dir stage0-deployment stop --delete-volumes

  # Remove the deployment dir
  sudo rm -rf stage0-deployment

  # Remove the existing spec file
  rm stage0-spec.yml
  ```
### Setup

- Clone the stack repo:

  ```bash
  laconic-so fetch-stack git.vdb.to/cerc-io/fixturenet-laconicd-stack --pull

  # This should clone the fixturenet-laconicd-stack repo at `/home/dev/cerc/fixturenet-laconicd-stack`
  ```

- Clone required repositories:

  ```bash
  laconic-so --stack ~/cerc/fixturenet-laconicd-stack/stack-orchestrator/stacks/fixturenet-laconicd setup-repositories --pull

  # This should clone the laconicd repo at `/home/dev/cerc/laconicd`
  ```

- Build the container images:

  ```bash
  laconic-so --stack ~/cerc/fixturenet-laconicd-stack/stack-orchestrator/stacks/fixturenet-laconicd build-containers --force-rebuild

  # This should create the "cerc/laconicd" Docker image
  ```
### Deployment

- Create a spec file for the deployment:

  ```bash
  cd /srv/laconicd

  laconic-so --stack ~/cerc/fixturenet-laconicd-stack/stack-orchestrator/stacks/fixturenet-laconicd deploy init --output stage0-spec.yml
  ```

- Edit `network` in the spec file to map container ports to host ports:

  ```yml
  # stage0-spec.yml
  network:
    ports:
      laconicd:
        - '6060'
        - '127.0.0.1:26657:26657'
        - '26656'
        - '127.0.0.1:9473:9473'
        - '127.0.0.1:9090:9090'
        - '127.0.0.1:1317:1317'
  ```
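  For reference, these entries use standard Docker Compose port-mapping syntax:

  ```yml
  # '<host-ip>:<host-port>:<container-port>' binds the container port to one host interface:
  #   - '127.0.0.1:26657:26657'   # reachable only from the host (e.g. behind a reverse proxy)
  # A bare container port is published on an ephemeral host port on all interfaces:
  #   - '26656'                   # p2p port; host port assigned by Docker
  ```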
- Create a deployment from the spec file:

  ```bash
  laconic-so --stack ~/cerc/fixturenet-laconicd-stack/stack-orchestrator/stacks/fixturenet-laconicd deploy create --spec-file stage0-spec.yml --deployment-dir stage0-deployment
  ```

- Update the configuration:

  ```bash
  cat <<EOF > stage0-deployment/config.env
  # Set to true to enable adding participants functionality of the onboarding module
  ONBOARDING_ENABLED=true

  # A custom human readable name for this node
  MONIKER=LaconicStage0
  EOF
  ```
### Start

- Start the deployment:

  ```bash
  laconic-so deployment --dir stage0-deployment start
  ```

- Check status:

  ```bash
  # List the containers and check health status
  docker ps -a | grep laconicd

  # Follow logs for the laconicd container, check that new blocks are getting created
  laconic-so deployment --dir stage0-deployment logs laconicd -f
  ```
- Verify that the endpoint is now publicly accessible:

  - https://laconicd.laconic.com is pointed to the node's RPC endpoint

  - Check the status query:

    ```bash
    curl https://laconicd.laconic.com/status | jq

    # Expected output:
    # JSON with `node_info`, `sync_info` and `validator_info`
    ```
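    Beyond eyeballing the JSON, the stock CometBFT `/status` response can be queried for sync state (assuming the standard response shape):

    ```bash
    # `false` means the node is fully synced
    curl -s https://laconicd.laconic.com/status | jq -r '.result.sync_info.catching_up'
    ```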
## faucet
- Source repo: https://git.vdb.to/cerc-io/laconic-faucet

- Target dir: `/srv/faucet/laconic-faucet-deployment`

- Cleanup an existing deployment if required:

  ```bash
  cd /srv/faucet

  # Stop the deployment
  laconic-so deployment --dir laconic-faucet-deployment stop

  # Remove the deployment dir
  sudo rm -rf laconic-faucet-deployment

  # Remove the existing spec file
  rm laconic-faucet-spec.yml
  ```
### Setup

- Clone the stack repo:

  ```bash
  laconic-so fetch-stack git.vdb.to/cerc-io/testnet-laconicd-stack --pull

  # This should clone the testnet-laconicd-stack repo at `/home/dev/cerc/testnet-laconicd-stack`
  ```

- Clone required repositories:

  ```bash
  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-faucet setup-repositories --pull

  # This should clone the laconic-faucet repo at `/home/dev/cerc/laconic-faucet`
  ```

- Build the container images:

  ```bash
  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-faucet build-containers --force-rebuild

  # This should create the "cerc/laconic-faucet" Docker image
  ```
### Deployment

- Create a spec file for the deployment:

  ```bash
  cd /srv/faucet

  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-faucet deploy init --output laconic-faucet-spec.yml
  ```

- Edit `network` in the spec file to map container ports to host ports:

  ```yml
  # laconic-faucet-spec.yml
  network:
    ports:
      faucet:
        - '127.0.0.1:4000:3000'
  ```
- Create a deployment from the spec file:

  ```bash
  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-faucet deploy create --spec-file laconic-faucet-spec.yml --deployment-dir laconic-faucet-deployment

  # Place in the same namespace as stage0
  cp /srv/laconicd/stage0-deployment/deployment.yml laconic-faucet-deployment/deployment.yml
  ```

- Update the configuration:

  ```bash
  # Get the faucet account key from stage0 deployment
  export FAUCET_ACCOUNT_PK=$(laconic-so deployment --dir /srv/laconicd/stage0-deployment exec laconicd "echo y | laconicd keys export alice --keyring-backend test --unarmored-hex --unsafe")

  cat <<EOF > laconic-faucet-deployment/config.env
  CERC_FAUCET_KEY=$FAUCET_ACCOUNT_PK
  EOF
  ```
### Start

- Start the deployment:

  ```bash
  laconic-so deployment --dir laconic-faucet-deployment start
  ```

- Check status:

  ```bash
  # List the containers and check health status
  docker ps -a | grep faucet

  # Check logs for the faucet container
  laconic-so deployment --dir laconic-faucet-deployment logs faucet -f
  ```
- Verify that the endpoint is now publicly accessible:

  - https://faucet.laconic.com is pointed to the faucet endpoint

  - Check the faucet:

    ```bash
    curl -X POST https://faucet.laconic.com/faucet

    # Expected output:
    # {"error":"address is required"}
    ```
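    A follow-up request with an account address should then trigger a send. The JSON body below is an assumption based on the error above; check the laconic-faucet README for the actual request schema:

    ```bash
    # `laconic1...` is a placeholder address
    curl -X POST -H 'Content-Type: application/json' \
      -d '{"address": "laconic1..."}' \
      https://faucet.laconic.com/faucet
    ```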
## testnet-onboarding-app
- Source repo: https://git.vdb.to/cerc-io/testnet-onboarding-app

- Target dir: `/srv/app/onboarding-app-deployment`

- Cleanup an existing deployment if required:

  ```bash
  cd /srv/app

  # Stop the deployment
  laconic-so deployment --dir onboarding-app-deployment stop

  # Remove the deployment dir
  sudo rm -rf onboarding-app-deployment

  # Remove the existing spec file
  rm onboarding-app-spec.yml
  ```
### Setup

- Clone the stack repo:

  ```bash
  laconic-so fetch-stack git.vdb.to/cerc-io/testnet-onboarding-app-stack --pull

  # This should clone the testnet-onboarding-app-stack repo at `/home/dev/cerc/testnet-onboarding-app-stack`
  ```

- Clone required repositories:

  ```bash
  laconic-so --stack ~/cerc/testnet-onboarding-app-stack/stack-orchestrator/stacks/onboarding-app setup-repositories --pull

  # This should clone the testnet-onboarding-app repo at `/home/dev/cerc/testnet-onboarding-app`
  ```

- Build the container images:

  ```bash
  laconic-so --stack ~/cerc/testnet-onboarding-app-stack/stack-orchestrator/stacks/onboarding-app build-containers --force-rebuild

  # This should create the Docker image "cerc/testnet-onboarding-app" locally
  ```
### Deployment

- Create a spec file for the deployment:

  ```bash
  cd /srv/app

  laconic-so --stack ~/cerc/testnet-onboarding-app-stack/stack-orchestrator/stacks/onboarding-app deploy init --output onboarding-app-spec.yml
  ```

- Edit `network` in the spec file to map container ports to host ports:

  ```yml
  network:
    ports:
      testnet-onboarding-app:
        - '127.0.0.1:3000:80'
  ```
- Create a deployment from the spec file:

  ```bash
  laconic-so --stack ~/cerc/testnet-onboarding-app-stack/stack-orchestrator/stacks/onboarding-app deploy create --spec-file onboarding-app-spec.yml --deployment-dir onboarding-app-deployment
  ```

- Update the configuration:

  ```bash
  cat <<EOF > onboarding-app-deployment/config.env
  WALLET_CONNECT_ID=63...

  CERC_REGISTRY_GQL_ENDPOINT="https://laconicd.laconic.com/api"
  CERC_LACONICD_RPC_ENDPOINT="https://laconicd.laconic.com"

  CERC_FAUCET_ENDPOINT="https://faucet.laconic.com"

  CERC_WALLET_META_URL="https://loro-signup.laconic.com"
  EOF
  ```
### Start

- Start the deployment:

  ```bash
  laconic-so deployment --dir onboarding-app-deployment start
  ```

- Check status:

  ```bash
  # List the container
  docker ps -a | grep testnet-onboarding-app

  # Follow logs for the testnet-onboarding-app container, wait for the build to finish
  laconic-so deployment --dir onboarding-app-deployment logs testnet-onboarding-app -f
  ```
- The onboarding app can now be viewed at https://loro-signup.laconic.com
## laconic-wallet-web
- Source repo: https://git.vdb.to/cerc-io/laconic-wallet-web

- Target dir: `/srv/wallet/laconic-wallet-web-deployment`

- Cleanup an existing deployment if required:

  ```bash
  cd /srv/wallet

  # Stop the deployment
  laconic-so deployment --dir laconic-wallet-web-deployment stop

  # Remove the deployment dir
  sudo rm -rf laconic-wallet-web-deployment

  # Remove the existing spec file
  rm laconic-wallet-web-spec.yml
  ```
### Setup

- Clone the stack repo:

  ```bash
  laconic-so fetch-stack git.vdb.to/cerc-io/laconic-wallet-web --pull

  # This should clone the laconic-wallet-web repo at `/home/dev/cerc/laconic-wallet-web`
  ```

- Build the container images:

  ```bash
  laconic-so --stack ~/cerc/laconic-wallet-web/stack/stack-orchestrator/stack/laconic-wallet-web build-containers --force-rebuild

  # This should create the Docker image "cerc/laconic-wallet-web" locally
  ```
### Deployment

- Create a spec file for the deployment:

  ```bash
  cd /srv/wallet

  laconic-so --stack ~/cerc/laconic-wallet-web/stack/stack-orchestrator/stack/laconic-wallet-web deploy init --output laconic-wallet-web-spec.yml
  ```

- Edit `network` in the spec file to map container ports to host ports:

  ```yml
  network:
    ports:
      laconic-wallet-web:
        - '127.0.0.1:5000:80'
  ```
- Create a deployment from the spec file:

  ```bash
  laconic-so --stack ~/cerc/laconic-wallet-web/stack/stack-orchestrator/stack/laconic-wallet-web deploy create --spec-file laconic-wallet-web-spec.yml --deployment-dir laconic-wallet-web-deployment
  ```

- Update the configuration:

  ```bash
  cat <<EOF > laconic-wallet-web-deployment/config.env
  WALLET_CONNECT_ID=63...
  EOF
  ```
### Start

- Start the deployment:

  ```bash
  laconic-so deployment --dir laconic-wallet-web-deployment start
  ```

- Check status:

  ```bash
  # List the container
  docker ps -a | grep laconic-wallet-web

  # Follow logs for the laconic-wallet-web container, wait for the build to finish
  laconic-so deployment --dir laconic-wallet-web-deployment logs laconic-wallet-web -f
  ```
- The web wallet can now be viewed at https://wallet.laconic.com
## stage1 laconicd
- Source repo: https://git.vdb.to/cerc-io/laconicd

- Target dir: `/srv/laconicd/stage1-deployment`

- Cleanup an existing deployment if required:

  ```bash
  cd /srv/laconicd

  # Stop the deployment
  laconic-so deployment --dir stage1-deployment stop --delete-volumes

  # Remove the deployment dir
  sudo rm -rf stage1-deployment

  # Remove the existing spec file
  rm stage1-spec.yml
  ```
### Setup

- Same as that for stage0 laconicd; not required if already done for stage0
### Deployment

- Create a spec file for the deployment:

  ```bash
  cd /srv/laconicd

  laconic-so --stack ~/cerc/fixturenet-laconicd-stack/stack-orchestrator/stacks/fixturenet-laconicd deploy init --output stage1-spec.yml
  ```

- Edit `network` in the spec file to map container ports to host ports:

  ```yml
  # stage1-spec.yml
  network:
    ports:
      laconicd:
        - '6060'
        - '127.0.0.1:26657:26657'
        - '26656:26656'
        - '127.0.0.1:9473:9473'
        - '127.0.0.1:9090:9090'
        - '127.0.0.1:1317:1317'
  ```
- Create a deployment from the spec file:

  ```bash
  laconic-so --stack ~/cerc/fixturenet-laconicd-stack/stack-orchestrator/stacks/fixturenet-laconicd deploy create --spec-file stage1-spec.yml --deployment-dir stage1-deployment
  ```

- Update the configuration:

  ```bash
  cat <<EOF > stage1-deployment/config.env
  AUTHORITY_AUCTION_ENABLED=true
  AUTHORITY_AUCTION_COMMITS_DURATION=3600
  AUTHORITY_AUCTION_REVEALS_DURATION=3600
  AUTHORITY_GRACE_PERIOD=7200

  MONIKER=LaconicStage1
  EOF
  ```
### Start

- Follow stage0-to-stage1.md to halt the stage0 deployment, generate the genesis file for stage1, and start the stage1 deployment
## laconic-console
- Source repos:

  - https://git.vdb.to/cerc-io/laconic-console
  - https://git.vdb.to/cerc-io/laconic-registry-cli

- Target dir: `/srv/console/laconic-console-deployment`

- Cleanup an existing deployment if required:

  ```bash
  cd /srv/console

  # Stop the deployment
  laconic-so deployment --dir laconic-console-deployment stop --delete-volumes

  # Remove the deployment dir
  sudo rm -rf laconic-console-deployment

  # Remove the existing spec file
  rm laconic-console-spec.yml
  ```
### Setup

- Clone the stack repo:

  ```bash
  laconic-so fetch-stack git.vdb.to/cerc-io/testnet-laconicd-stack --pull

  # This should clone the testnet-laconicd-stack repo at `/home/dev/cerc/testnet-laconicd-stack`
  ```

- Clone required repositories:

  ```bash
  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-console setup-repositories --pull

  # This should clone the laconic-registry-cli repo at `/home/dev/cerc/laconic-registry-cli` and the laconic-console repo at `/home/dev/cerc/laconic-console`
  ```

- Build the container images:

  ```bash
  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-console build-containers --force-rebuild

  # This should create the Docker images: "cerc/laconic-registry-cli", "cerc/webapp-base", "cerc/laconic-console-host"
  ```
### Deployment

- Create a spec file for the deployment:

  ```bash
  cd /srv/console

  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-console deploy init --output laconic-console-spec.yml
  ```

- Edit `network` in the spec file to map container ports to host ports:

  ```yml
  network:
    ports:
      console:
        - '127.0.0.1:4001:80'
  ```
- Create a deployment from the spec file:

  ```bash
  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-console deploy create --spec-file laconic-console-spec.yml --deployment-dir laconic-console-deployment
  ```

- Update the configuration:

  ```bash
  cat <<EOF > laconic-console-deployment/config.env
  # Laconicd (hosted) GQL endpoint
  LACONIC_HOSTED_ENDPOINT=https://laconicd.laconic.com
  EOF
  ```
### Start

- Start the deployment:

  ```bash
  laconic-so deployment --dir laconic-console-deployment start
  ```

- Check status:

  ```bash
  # List the container
  docker ps -a | grep console

  # Follow logs for the console container
  laconic-so deployment --dir laconic-console-deployment logs console -f
  ```
- The laconic console can now be viewed at https://loro-console.laconic.com
## stage2 laconicd
- Source repo: https://git.vdb.to/cerc-io/laconicd

- Target dir: `/srv/laconicd/stage2-deployment`

- Cleanup an existing deployment if required:

  ```bash
  cd /srv/laconicd

  # Stop the deployment
  laconic-so deployment --dir stage2-deployment stop --delete-volumes

  # Remove the deployment dir
  sudo rm -rf stage2-deployment

  # Remove the existing spec file
  rm stage2-spec.yml
  ```
### Setup

- Create a tag for the existing stage1 laconicd image:

  ```bash
  docker tag cerc/laconicd:local cerc/laconicd-stage1:local
  ```

- Otherwise, same as that for stage0 laconicd
### Deployment

- Create a spec file for the deployment:

  ```bash
  cd /srv/laconicd

  laconic-so --stack ~/cerc/fixturenet-laconicd-stack/stack-orchestrator/stacks/fixturenet-laconicd deploy init --output stage2-spec.yml
  ```

- Edit `network` in the spec file to map container ports to host ports:

  ```yml
  # stage2-spec.yml
  network:
    ports:
      laconicd:
        - '6060'
        - '36657:26657'
        - '36656:26656'
        - '3473:9473'
        - '127.0.0.1:3090:9090'
        - '127.0.0.1:3317:1317'
  ```
- Create a deployment from the spec file:

  ```bash
  laconic-so --stack ~/cerc/fixturenet-laconicd-stack/stack-orchestrator/stacks/fixturenet-laconicd deploy create --spec-file stage2-spec.yml --deployment-dir stage2-deployment
  ```
### Start

- Follow stage1-to-stage2.md to halt the stage1 deployment, initialize the stage2 chain, and start the stage2 deployment
## laconic-console-testnet2
- Source repos:

  - https://git.vdb.to/cerc-io/laconic-console
  - https://git.vdb.to/cerc-io/laconic-registry-cli

- Target dir: `/srv/console/laconic-console-testnet2-deployment`

- Cleanup an existing deployment if required:

  ```bash
  cd /srv/console

  # Stop the deployment
  laconic-so deployment --dir laconic-console-testnet2-deployment stop --delete-volumes

  # Remove the deployment dir
  sudo rm -rf laconic-console-testnet2-deployment

  # Remove the existing spec file
  rm laconic-console-testnet2-spec.yml
  ```
### Setup

- Create tags for the existing stage1 images:

  ```bash
  docker tag cerc/laconic-console-host:local cerc/laconic-console-host-stage1:local

  docker tag cerc/laconic-registry-cli:local cerc/laconic-registry-cli-stage1:local
  ```

- Otherwise, same as that for laconic-console
### Deployment

- Create a spec file for the deployment:

  ```bash
  cd /srv/console

  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-console deploy init --output laconic-console-testnet2-spec.yml
  ```

- Edit `network` in the spec file to map container ports to host ports:

  ```yml
  network:
    ports:
      console:
        - '127.0.0.1:4002:80'
  ```
- Create a deployment from the spec file:

  ```bash
  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-console deploy create --spec-file laconic-console-testnet2-spec.yml --deployment-dir laconic-console-testnet2-deployment

  # Place console deployment in the same cluster as stage2 deployment
  cp /srv/laconicd/stage2-deployment/deployment.yml laconic-console-testnet2-deployment/deployment.yml
  ```

- Update the configuration:

  ```bash
  cat <<EOF > laconic-console-testnet2-deployment/config.env
  # Laconicd (hosted) GQL endpoint
  LACONIC_HOSTED_ENDPOINT=https://laconicd-sapo.laconic.com

  # laconicd chain id
  CERC_LACONICD_CHAIN_ID=laconic-testnet-2
  EOF
  ```
### Start

- Start the deployment:

  ```bash
  laconic-so deployment --dir laconic-console-testnet2-deployment start
  ```

- Check status:

  ```bash
  # List the container
  docker ps -a | grep console

  # Follow logs for the console container
  laconic-so deployment --dir laconic-console-testnet2-deployment logs console -f
  ```
- The laconic console can now be viewed at https://console-sapo.laconic.com
## Shopify
- Source repo: https://git.vdb.to/cerc-io/shopify

- Target dir: `/srv/shopify/laconic-shopify-deployment`

- Cleanup an existing deployment if required:

  ```bash
  cd /srv/shopify

  # Stop the deployment
  laconic-so deployment --dir laconic-shopify-deployment stop

  # Remove the deployment dir
  sudo rm -rf laconic-shopify-deployment

  # Remove the existing spec file
  rm laconic-shopify-spec.yml
  ```
### Setup

- Clone the stack repo:

  ```bash
  laconic-so fetch-stack git.vdb.to/cerc-io/testnet-laconicd-stack --pull

  # This should clone the testnet-laconicd-stack repo at `/home/dev/cerc/testnet-laconicd-stack`
  ```

- Clone required repositories:

  ```bash
  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-shopify setup-repositories --pull

  # This should clone the shopify and laconic-faucet repos at `/home/dev/cerc/shopify` and `/home/dev/cerc/laconic-faucet`
  ```

- Build the container images:

  ```bash
  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-shopify build-containers --force-rebuild

  # This should create the "cerc/laconic-shopify" and "cerc/laconic-shopify-faucet" Docker images
  ```
### Deployment

- Create a spec file for the deployment:

  ```bash
  cd /srv/shopify

  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-shopify deploy init --output laconic-shopify-spec.yml
  ```

- Create a deployment from the spec file:

  ```bash
  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-shopify deploy create --spec-file laconic-shopify-spec.yml --deployment-dir laconic-shopify-deployment
  ```
- Inside the `laconic-shopify-deployment` deployment directory, open the `config.env` file and set the following env variables:

  ```bash
  # Shopify GraphQL URL
  CERC_SHOPIFY_GRAPHQL_URL='https://6h071x-zw.myshopify.com/admin/api/2024-10/graphql.json'

  # Access token for Shopify API
  CERC_SHOPIFY_ACCESS_TOKEN=

  # Delay for fetching orders in milliseconds
  CERC_FETCH_ORDER_DELAY=10000

  # Number of line items per order in Get Orders GraphQL query
  CERC_ITEMS_PER_ORDER=10

  # Private key of a funded faucet account
  CERC_FAUCET_KEY=

  # laconicd RPC endpoint
  CERC_LACONICD_RPC_ENDPOINT='https://laconicd-sapo.laconic.com'

  # laconicd chain id
  CERC_LACONICD_CHAIN_ID=laconic-testnet-2

  # laconicd address prefix
  CERC_LACONICD_PREFIX=laconic

  # laconicd gas price
  CERC_LACONICD_GAS_PRICE=0.001
  ```
### Start

- Start the deployment:

  ```bash
  laconic-so deployment --dir laconic-shopify-deployment start
  ```

- Check status:

  ```bash
  # Check logs for the shopify and faucet containers
  laconic-so deployment --dir laconic-shopify-deployment logs shopify -f

  laconic-so deployment --dir laconic-shopify-deployment logs faucet -f
  ```
## Deploy Backend
- Source repo: https://git.vdb.to/cerc-io/snowballtools-base

- Target dir: `/srv/deploy-backend/backend-deployment`

- Cleanup an existing deployment if required:

  ```bash
  cd /srv/deploy-backend

  # Stop the deployment
  laconic-so deployment --dir backend-deployment stop --delete-volumes

  # Remove the deployment dir
  sudo rm -rf backend-deployment

  # Remove the existing spec file
  rm backend-deployment-spec.yml
  ```
### Setup

- Clone the stack repo:

  ```bash
  laconic-so fetch-stack git.vdb.to/cerc-io/snowballtools-base-api-stack --pull

  # This should clone the snowballtools-base-api-stack repo at `/home/dev/cerc/snowballtools-base-api-stack`
  ```

- Clone required repositories:

  ```bash
  laconic-so --stack ~/cerc/snowballtools-base-api-stack/stack-orchestrator/stacks/snowballtools-base-backend setup-repositories --git-ssh --pull

  # This should clone the snowballtools-base repo at `/home/dev/cerc/snowballtools-base`
  ```

- Build the container images:

  ```bash
  laconic-so --stack ~/cerc/snowballtools-base-api-stack/stack-orchestrator/stacks/snowballtools-base-backend build-containers --force-rebuild

  # This should create the Docker images: "cerc/snowballtools-base-backend" and "cerc/snowballtools-base-backend-base"
  ```

- Push the images to the container registry. The container registry will be set up while setting up a service provider:

  ```bash
  laconic-so deployment --dir backend-deployment push-images
  ```
### Deployment

- Create a spec file for the deployment:

  ```bash
  cd /srv/deploy-backend

  laconic-so --stack ~/cerc/snowballtools-base-api-stack/stack-orchestrator/stacks/snowballtools-base-backend deploy init --output backend-deployment-spec.yml --config SNOWBALL_BACKEND_CONFIG_FILE_PATH=/config/prod.toml
  ```

- Edit the spec file to deploy the stack to k8s:

  ```yml
  stack: /home/dev/cerc/snowballtools-base-api-stack/stack-orchestrator/stacks/snowballtools-base-backend
  deploy-to: k8s
  kube-config: /home/dev/.kube/config-vs-narwhal.yaml
  image-registry: container-registry.apps.vaasl.io/laconic-registry
  config:
    SNOWBALL_BACKEND_CONFIG_FILE_PATH: /config/prod.toml
  network:
    ports:
      deploy-backend:
        - '8000'
    http-proxy:
      - host-name: deploy-backend.apps.vaasl.io
        routes:
          - path: '/'
            proxy-to: deploy-backend:8000
  volumes:
    data:
  configmaps:
    config: ./configmaps/config
  ```
- Create a deployment from the spec file:

  ```bash
  laconic-so --stack ~/cerc/snowballtools-base-api-stack/stack-orchestrator/stacks/snowballtools-base-backend deploy create --deployment-dir backend-deployment --spec-file backend-deployment-spec.yml

  # This should create the deployment directory at `/srv/deploy-backend/backend-deployment`
  ```
- Modify the file `backend-deployment/kubeconfig.yml` if required:

  ```yml
  apiVersion: v1
  ...
  contexts:
    - context:
        cluster: ***
        user: ***
      name: default
  ...
  ```

  NOTE: `context.name` must be `default` to use with SO
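  To confirm the kubeconfig is usable and that the context is named `default`, standard kubectl commands can be run against it (an added check, not part of the original steps):

  ```bash
  # Should print `default`
  KUBECONFIG=backend-deployment/kubeconfig.yml kubectl config current-context

  # Should list the cluster's nodes without error
  KUBECONFIG=backend-deployment/kubeconfig.yml kubectl get nodes
  ```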
- Fetch the config template file for the snowball backend:

  ```bash
  # Place in snowball deployment directory
  wget -O /srv/deploy-backend/backend-deployment/configmaps/config/prod.toml https://git.vdb.to/cerc-io/testnet-laconicd-stack/raw/branch/main/ops/configs/backend-deployment.toml
  ```
- Set up a private key and bond. If not already set up, execute the following commands in the directory containing `stage2-deployment`:

  - Create a new account and fetch the private key:

    ```bash
    laconic-so deployment --dir stage2-deployment exec laconicd "laconicd keys add deploy"

    # - address: laconic1yr758d5vkg28text073vlzdjdgd7ud6w729tww
    # ...

    export deployKey=$(laconic-so deployment --dir stage2-deployment exec laconicd "echo y | laconicd keys export deploy --keyring-backend test --unarmored-hex --unsafe")
    ```

  - Send tokens to this account:

    ```bash
    laconic-so deployment --dir stage2-deployment exec laconicd "laconicd tx bank send alice laconic1yr758d5vkg28text073vlzdjdgd7ud6w729tww 1000000000000000000alnt --from alice --fees 200000alnt -y"

    # ...
    # txhash: 262D380259AC06024F87C909EB0BF7814CEC26CDF527B003C4C10631E1DB5893
    ```

  - Create a bond using this account:

    ```bash
    laconic-so deployment --dir laconic-console-testnet2-deployment exec cli "laconic registry bond create --type alnt --quantity 1000000000000 --user-key $deployKey" | jq -r '.bondId'

    # 15e5bc37c40f67adc9ab498fa3fa50b090770f9bb56b27d71714a99138df9a22
    ```

  - Set the bond id:

    ```bash
    export bondId=15e5bc37c40f67adc9ab498fa3fa50b090770f9bb56b27d71714a99138df9a22
    ```
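  - Optionally, verify that the bond exists and holds the expected balance. This assumes the registry CLI's `bond get` subcommand is available in the `cli` container:

    ```bash
    laconic-so deployment --dir laconic-console-testnet2-deployment exec cli "laconic registry bond get --id $bondId"
    ```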
- Register an authority. Execute the following commands in the directory containing `laconic-console-testnet2-deployment`:

  - Reserve an authority:

    ```bash
    laconic-so deployment --dir laconic-console-testnet2-deployment exec cli "laconic registry authority reserve deploy-vaasl --txKey $deployKey"
    ```

  - Obtain the auction ID:

    ```bash
    laconic-so deployment --dir laconic-console-testnet2-deployment exec cli "laconic registry authority whois deploy-vaasl --txKey $deployKey"

    # "auction": {
    #   "id": "73e0b082a198c396009ce748804a9060c674a10045365d262c1584f99d2771c1"
    ```

  - Commit a bid using the auction ID. A reveal file will be generated:

    ```bash
    laconic-so deployment --dir laconic-console-testnet2-deployment exec cli "laconic registry auction bid commit 73e0b082a198c396009ce748804a9060c674a10045365d262c1584f99d2771c1 5000000 alnt --chain-id laconic-testnet-2 --txKey $deployKey"

    # {"reveal_file":"/app/out/bafyreiewi4osqyvrnljwwcb36fn6sr5iidfpuznqkz52gxc5ztt3jt4zmy.json"}
    ```

  - Reveal the bid using the auction ID and the reveal file generated by the commit:

    ```bash
    laconic-so deployment --dir laconic-console-testnet2-deployment exec cli "laconic registry auction bid reveal 73e0b082a198c396009ce748804a9060c674a10045365d262c1584f99d2771c1 /app/out/bafyreiewi4osqyvrnljwwcb36fn6sr5iidfpuznqkz52gxc5ztt3jt4zmy.json --chain-id laconic-testnet-2 --txKey $deployKey"

    # {"success": true}
    ```

  - Verify the status after the auction ends. It should list a completed status and a winner:

    ```bash
    laconic-so deployment --dir laconic-console-testnet2-deployment exec cli "laconic registry auction get 73e0b082a198c396009ce748804a9060c674a10045365d262c1584f99d2771c1 --txKey $deployKey"
    ```

  - Set the authority using the bond ID:

    ```bash
    laconic-so deployment --dir laconic-console-testnet2-deployment exec cli "laconic registry authority bond set deploy-vaasl $bondId --txKey $deployKey"

    # {"success": true}
    ```

  - Verify the authority has been registered:

    ```bash
    laconic-so deployment --dir laconic-console-testnet2-deployment exec cli "laconic registry authority whois deploy-vaasl --txKey $deployKey"
    ```
- Update `/srv/deploy-backend/backend-deployment/configmaps/config/prod.toml`. Replace `<redacted>` with your credentials. Use the `userKey`, `bondId` and `authority` that you set up
### Start

- Start the deployment:

  ```bash
  laconic-so deployment --dir backend-deployment start
  ```

- Check status:

  ```bash
  # Follow logs for the snowball container
  laconic-so deployment --dir backend-deployment logs snowballtools-base-backend -f
  ```
## Deploy Frontend

- Source repo: https://git.vdb.to/cerc-io/snowballtools-base
### Prerequisites

- Node.js
- Yarn
### Setup

- On your local machine, clone the `snowballtools-base` repo:

  ```bash
  git clone git@git.vdb.to:cerc-io/snowballtools-base.git
  ```

- Install dependencies:

  ```bash
  cd snowballtools-base
  yarn install
  ```

- In the deployer package, create the required env file:

  ```bash
  cd packages/deployer
  cp .env.example .env
  ```

  Set the required variables:

  ```bash
  REGISTRY_BOND_ID=<bond-id>
  DEPLOYER_LRN=lrn://vaasl-provider/deployers/webapp-deployer-api.apps.vaasl.io
  AUTHORITY=vaasl
  ```
  Note: The bond id should be set to the bond associated with the `vaasl` authority

- Update the required laconic config. You can use the same `userKey` and `bondId` used for deploying the backend:

  ```bash
  # Replace <user-pk> and <bond-id>
  cat <<EOF > config.yml
  services:
    registry:
      rpcEndpoint: https://laconicd-sapo.laconic.com
      gqlEndpoint: https://laconicd-sapo.laconic.com/api
      userKey: <user-pk>
      bondId: <bond-id>
      chainId: laconic-testnet-2
      gasPrice: 0.001alnt
  EOF
  ```
  Note: The `userKey` account should own the authority `vaasl`
### Run

- Run the frontend deployment script:

  ```bash
  ./deploy-frontend.sh
  ```

  Follow the deployment logs on the deployer UI
## Fixturenet Eth
- Target dir: `/srv/fixturenet-eth/fixturenet-eth-deployment`

- Cleanup an existing deployment if required:

  ```bash
  cd /srv/fixturenet-eth

  # Stop the deployment
  laconic-so deployment --dir fixturenet-eth-deployment stop --delete-volumes

  # Remove the deployment dir
  sudo rm -rf fixturenet-eth-deployment
  ```
### Setup

- Create a `fixturenet-eth` dir if not present already and cd into it:

  ```bash
  mkdir -p /srv/fixturenet-eth

  cd /srv/fixturenet-eth
  ```
- Clone the stack repo:

  ```bash
  laconic-so fetch-stack git.vdb.to/cerc-io/fixturenet-eth-stacks --pull
  ```

- Clone required repositories:

  ```bash
  laconic-so --stack ~/cerc/fixturenet-eth-stacks/stack-orchestrator/stacks/fixturenet-eth setup-repositories --pull

  # If this throws an error as a result of being already checked out to a branch/tag in a repo, remove all repositories from that stack and re-run the command
  # The repositories are located in $HOME/cerc by default
  ```

- Build the container images:

  ```bash
  # Remove any older foundry image with `latest` tag
  docker rmi ghcr.io/foundry-rs/foundry:latest

  laconic-so --stack ~/cerc/fixturenet-eth-stacks/stack-orchestrator/stacks/fixturenet-eth build-containers --force-rebuild

  # If errors are thrown during build, old images used by this stack would have to be deleted
  ```

  - NOTE: this will take >10 mins depending on the specs of your machine, and requires 16GB of memory or greater

  - Remove any dangling Docker images (to clear up space):

    ```bash
    docker image prune
    ```
- Create a spec file for the deployment, which will map the stack's ports and volumes to the host:

  ```bash
  laconic-so --stack ~/cerc/fixturenet-eth-stacks/stack-orchestrator/stacks/fixturenet-eth deploy init --output fixturenet-eth-spec.yml
  ```

- Configure ports:

  - `fixturenet-eth-spec.yml`

    ```yml
    ...
    network:
      ports:
        fixturenet-eth-bootnode-geth:
          - '9898:9898'
          - '30303'
        fixturenet-eth-geth-1:
          - '7545:8545'
          - '7546:8546'
          - '40000'
          - '6060'
        fixturenet-eth-lighthouse-1:
          - '8001'
    ...
    ```
- Create the deployment: once you've made any needed changes to the spec file, create a deployment from it:

  ```bash
  laconic-so --stack ~/cerc/fixturenet-eth-stacks/stack-orchestrator/stacks/fixturenet-eth deploy create --spec-file fixturenet-eth-spec.yml --deployment-dir fixturenet-eth-deployment
  ```
### Run

- Start the `fixturenet-eth-deployment` deployment:

  ```bash
  laconic-so deployment --dir fixturenet-eth-deployment start
  ```
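  To verify the geth node is serving RPC on the mapped host port (7545 in the spec above), a standard Ethereum JSON-RPC call can be made; this check is an addition to the original steps:

  ```bash
  # Returns the latest block number (hex) once blocks are being produced
  curl -s -X POST -H 'Content-Type: application/json' \
    -d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
    http://localhost:7545
  ```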
## Nitro Contracts Deployment
- Stack: https://git.vdb.to/cerc-io/nitro-stack/src/branch/main/stack-orchestrator/stacks/nitro-contracts

- Source repo: https://github.com/cerc-io/go-nitro

- Target dir: `/srv/bridge/nitro-contracts-deployment`

- Cleanup an existing deployment if required:

  ```bash
  cd /srv/bridge

  # Stop the deployment
  laconic-so deployment --dir nitro-contracts-deployment stop --delete-volumes

  # Remove the deployment dir
  sudo rm -rf nitro-contracts-deployment
  ```
### Setup

- Switch to the `testnet-ops/nitro-contracts-setup` directory on your local machine:

  ```bash
  cd testnet-ops/nitro-contracts-setup
  ```

- Copy the `contract-vars.example.yml` vars file:

  ```bash
  cp contract-vars.example.yml contract-vars.yml
  ```

- Get a funded account's private key from the fixturenet-eth deployment:

  ```bash
  FUNDED_ACCOUNT_PK=$(curl --silent localhost:9898/accounts.csv | awk -F',' 'NR==1 {gsub(/^0x/, "", $NF); print $NF}')
  echo $FUNDED_ACCOUNT_PK
  ```
- Edit `contract-vars.yml` and fill in the following values:

  ```yml
  # RPC endpoint
  geth_url: "https://fixturenet-eth.laconic.com"

  # Chain ID (Fixturenet-eth: 1212)
  geth_chain_id: "1212"

  # Private key for a funded L1 account, to be used for contract deployment on L1
  # Required since this private key will be utilized by both L1 and L2 nodes of the bridge
  geth_deployer_pk: "<funded-account-pk>"

  # Custom token to be deployed
  token_name: "TestToken"
  token_symbol: "TST"
  initial_token_supply: "129600"
  ```

- Edit `setup-vars.yml` to update the target directory:

  ```yml
  ...
  nitro_directory: /srv/bridge
  ...

  # Will create deployment at /srv/bridge/nitro-contracts-deployment
  ```
### Run

- Deploy nitro contracts on the remote host by executing the `deploy-contracts.yml` Ansible playbook on your local machine:

  - Create a new `hosts.ini` file:

    ```bash
    cp ../hosts.example.ini hosts.ini
    ```

  - Edit the `hosts.ini` file to run the playbook on a remote machine:

    ```ini
    [deployment_host]
    <host_name> ansible_host=<target_ip> ansible_user=<ssh_user> ansible_ssh_common_args='-o ForwardAgent=yes'
    ```

    - Replace `<deployment_host>` with `nitro_host`
    - Replace `<host_name>` with the alias of your choice
    - Replace `<target_ip>` with the IP address or hostname of the target machine
    - Replace `<ssh_user>` with the SSH username (e.g. dev, ubuntu)

    A filled-in example is shown below.
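    For illustration, a filled-in `hosts.ini` might look like this (host alias and IP are hypothetical):

    ```ini
    [nitro_host]
    bridge-machine ansible_host=192.0.2.10 ansible_user=dev ansible_ssh_common_args='-o ForwardAgent=yes'
    ```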
  - Verify that you are able to connect to the host using the following command:

    ```bash
    ansible all -m ping -i hosts.ini -k

    # Expected output:
    # <host_name> | SUCCESS => {
    #   "ansible_facts": {
    #     "discovered_interpreter_python": "/usr/bin/python3.10"
    #   },
    #   "changed": false,
    #   "ping": "pong"
    # }
    ```
  - Execute the `deploy-contracts.yml` Ansible playbook for remote deployment:

    ```bash
    LANG=en_US.utf8 ansible-playbook -i hosts.ini deploy-contracts.yml --extra-vars='{ "target_host": "nitro_host"}' --user $USER -kK
    ```
- Check logs for the deployment on the remote machine:

  ```bash
  cd /srv/bridge

  # Check the nitro contract deployments
  laconic-so deployment --dir nitro-contracts-deployment logs nitro-contracts -f
  ```
- To deploy a new token and transfer it to another account, refer to this doc
## Nitro Bridge
- Stack: https://git.vdb.to/cerc-io/nitro-stack/src/branch/main/stack-orchestrator/stacks/bridge

- Source repo: https://github.com/cerc-io/go-nitro

- Target dir: `/srv/bridge/bridge-deployment`

- Cleanup an existing deployment if required:

  ```bash
  cd /srv/bridge

  # Stop the deployment
  laconic-so deployment --dir bridge-deployment stop --delete-volumes

  # Remove the deployment dir
  sudo rm -rf bridge-deployment
  ```
### Setup

- Execute the following command on the deployment machine to get the deployed Nitro contract addresses along with the asset address:

  ```bash
  cd /srv/bridge

  laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "cat /app/deployment/nitro-addresses.json"

  # Expected output:
  # {
  #   "1212": [
  #     {
  #       "name": "geth",
  #       "chainId": "1212",
  #       "contracts": {
  #         "ConsensusApp": {
  #           "address": "0xC98aD0B41B9224dad0605be32A9241dB9c67E2e8"
  #         },
  #         "NitroAdjudicator": {
  #           "address": "0x7C22fdA703Cdf09eB8D3B5Adc81F723526713D0e"
  #         },
  #         "VirtualPaymentApp": {
  #           "address": "0x778e4e6297E8BF04C67a20Ec989618d72eB4a19E"
  #         },
  #         "TestToken": {
  #           "address": "0x02ebfB2706527C7310F2a7d9098b2BC61014C5F2"
  #         }
  #       }
  #     }
  #   ]
  # }
  ```
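  To pull a single address out of that file (for example when filling in `bridge-vars.yml` below), the same jq pattern used later in this section applies:

  ```bash
  # e.g. the NitroAdjudicator address for chain id 1212
  laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts \
    "jq -r '.\"1212\"[0].contracts.NitroAdjudicator.address' /app/deployment/nitro-addresses.json"
  ```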
- Get a funded account's private key from the fixturenet-eth deployment:

  ```bash
  FUNDED_ACCOUNT_PK=$(curl --silent localhost:9898/accounts.csv | awk -F',' 'NR==1 {gsub(/^0x/, "", $NF); print $NF}')
  echo $FUNDED_ACCOUNT_PK
  ```
- Switch to the `testnet-ops/nitro-bridge-setup` directory on your local machine:

  ```bash
  cd testnet-ops/nitro-bridge-setup
  ```

- Create the required vars file:

  ```bash
  cp bridge-vars.example.yml bridge-vars.yml
  ```
- Edit `bridge-vars.yml` with the required values:

  ```yml
  # WS endpoint
  nitro_chain_url: "wss://fixturenet-eth.laconic.com"

  # Private key for bridge Nitro address
  nitro_sc_pk: ""

  # Private key should correspond to a funded account on L1 and this account must own the Nitro contracts
  # It also needs to hold L1 tokens to fund Nitro channels
  nitro_chain_pk: "<funded-account-pk>"

  # Deployed Nitro contract addresses
  na_address: ""
  vpa_address: ""
  ca_address: ""
  ```

- Edit `setup-vars.yml` to update the target directory:

  ```yml
  ...
  nitro_directory: /srv/bridge
  ...

  # Will create deployment at /srv/bridge/bridge-deployment
  ```
### Run

- Start the bridge on the remote host by executing the `run-nitro-bridge.yml` Ansible playbook on your local machine:

  - Create a new `hosts.ini` file:

    ```bash
    cp ../hosts.example.ini hosts.ini
    ```

  - Edit the `hosts.ini` file to run the playbook on a remote machine:

    ```ini
    [deployment_host]
    <host_name> ansible_host=<target_ip> ansible_user=<ssh_user> ansible_ssh_common_args='-o ForwardAgent=yes'
    ```

    - Replace `<deployment_host>` with `nitro_host`
    - Replace `<host_name>` with the alias of your choice
    - Replace `<target_ip>` with the IP address or hostname of the target machine
    - Replace `<ssh_user>` with the SSH username (e.g. dev, ubuntu)
  - Verify that you are able to connect to the host using the following command:

    ```bash
    ansible all -m ping -i hosts.ini -k

    # Expected output:
    # <host_name> | SUCCESS => {
    #   "ansible_facts": {
    #     "discovered_interpreter_python": "/usr/bin/python3.10"
    #   },
    #   "changed": false,
    #   "ping": "pong"
    # }
    ```
  - Execute the `run-nitro-bridge.yml` Ansible playbook for remote deployment:

    ```bash
    LANG=en_US.utf8 ansible-playbook -i hosts.ini run-nitro-bridge.yml --extra-vars='{ "target_host": "nitro_host"}' --user $USER -kK
    ```
- Check logs for deployments on the remote machine:

  ```bash
  cd /srv/bridge

  # Check bridge logs, ensure that the node is running
  laconic-so deployment --dir bridge-deployment logs nitro-bridge -f
  ```
- Create Nitro node config for users:

  ```bash
  cd /srv/bridge

  # Create required variables
  GETH_CHAIN_ID="1212"

  export NA_ADDRESS=$(laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "jq -r '.\"$GETH_CHAIN_ID\"[0].contracts.NitroAdjudicator.address' /app/deployment/nitro-addresses.json")
  export CA_ADDRESS=$(laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "jq -r '.\"$GETH_CHAIN_ID\"[0].contracts.ConsensusApp.address' /app/deployment/nitro-addresses.json")
  export VPA_ADDRESS=$(laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "jq -r '.\"$GETH_CHAIN_ID\"[0].contracts.VirtualPaymentApp.address' /app/deployment/nitro-addresses.json")

  export BRIDGE_NITRO_ADDRESS=$(laconic-so deployment --dir bridge-deployment exec nitro-rpc-client "nitro-rpc-client get-node-info -p 4005 -h nitro-bridge" | jq -r '.SCAddress')

  export BRIDGE_PEER_ID=$(laconic-so deployment --dir bridge-deployment exec nitro-rpc-client "nitro-rpc-client get-node-info -p 4005 -h nitro-bridge" | jq -r '.MessageServicePeerId')

  export L1_BRIDGE_MULTIADDR="/dns4/bridge.laconic.com/tcp/3005/p2p/$BRIDGE_PEER_ID"
  export L2_BRIDGE_MULTIADDR="/dns4/bridge.laconic.com/tcp/3006/p2p/$BRIDGE_PEER_ID"

  # Create the required config files
  cat <<EOF > nitro-node-config.yml
  nitro_chain_url: "wss://fixturenet-eth.laconic.com"
  na_address: "$NA_ADDRESS"
  ca_address: "$CA_ADDRESS"
  vpa_address: "$VPA_ADDRESS"
  bridge_nitro_address: "$BRIDGE_NITRO_ADDRESS"
  nitro_l1_bridge_multiaddr: "$L1_BRIDGE_MULTIADDR"
  nitro_l2_bridge_multiaddr: "$L2_BRIDGE_MULTIADDR"
  EOF

  laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "jq --arg chainId \"$GETH_CHAIN_ID\" '{
    (\$chainId): [
      {
        \"name\": .[\$chainId][0].name,
        \"chainId\": .[\$chainId][0].chainId,
        \"contracts\": (
          .[\$chainId][0].contracts
          | to_entries
          | map(select(.key | in({\"ConsensusApp\":1, \"NitroAdjudicator\":1, \"VirtualPaymentApp\":1}) | not))
          | from_entries
        )
      }
    ]
  }' /app/deployment/nitro-addresses.json" > assets.json
  ```
  - The required config files should be generated at `/srv/bridge/nitro-node-config.yml` and `/srv/bridge/assets.json`

  - Check in the generated files at locations `ops/stage2/nitro-node-config.yml` and `ops/stage2/assets.json` within this repository respectively
- List the L2 channels created by the bridge:

  ```bash
  laconic-so deployment --dir bridge-deployment exec nitro-rpc-client "nitro-rpc-client get-all-l2-channels -p 4005 -h nitro-bridge"
  ```
## Domains / Port Mappings

```
# Machine 1

# LORO testnet
https://laconicd.laconic.com          -> 26657
https://laconicd.laconic.com/api      -> 9473/api
https://laconicd.laconic.com/console  -> 9473/console
https://laconicd.laconic.com/graphql  -> 9473/graphql
https://faucet.laconic.com            -> 4000
https://loro-signup.laconic.com       -> 3000
https://wallet.laconic.com            -> 5000
https://loro-console.laconic.com      -> 4001

Open p2p ports:
26656

# SAPO testnet
https://laconicd-sapo.laconic.com         -> 36657
https://laconicd-sapo.laconic.com/api     -> 3473/api
https://laconicd-sapo.laconic.com/console -> 3473/console
https://laconicd-sapo.laconic.com/graphql -> 3473/graphql
https://console-sapo.laconic.com          -> 4002

Open p2p ports:
36656

# Machine 2
https://sepolia.laconic.com        -> 8545
wss://sepolia.laconic.com          -> 8546
https://fixturenet-eth.laconic.com -> 7545
wss://fixturenet-eth.laconic.com   -> 7546

bridge.laconic.com
Open ports:
3005 (L1 side)
3006 (L2 side)
```
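As a quick smoke test of the mappings above, each public HTTPS endpoint can be probed for a response code; this loop is an added convenience, not part of the original runbook:

```bash
# Any HTTP status code confirms the proxy mapping is live; 000 means unreachable
for url in \
  https://laconicd.laconic.com/status \
  https://faucet.laconic.com \
  https://loro-signup.laconic.com \
  https://wallet.laconic.com \
  https://loro-console.laconic.com \
  https://laconicd-sapo.laconic.com/status \
  https://console-sapo.laconic.com \
  https://fixturenet-eth.laconic.com; do
  echo "$url -> $(curl -s -o /dev/null -w '%{http_code}' "$url")"
done
```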