# Create deployments from scratch (for reference only)
## Login

- Log in as the `dev` user on the deployments VM

- All the deployments are placed in the `/srv` directory:

  ```bash
  cd /srv
  ```
## Prerequisites

- On deployments VM(s):

  - laconic-so: see installation

- Local:

  - Clone the `cerc-io/testnet-ops` repository:

    ```bash
    git clone git@git.vdb.to:cerc-io/testnet-ops.git
    ```

  - Ansible: see installation
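If Ansible is not already present on your local machine, one common route is `pipx` (an assumption here, not part of the original setup; any supported install method works):

```bash
# Install Ansible for the current user via pipx (assumes Python 3 and pipx are available)
pipx install --include-deps ansible

# Confirm the install
ansible --version
ansible-playbook --version
```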
## L2 Optimism

- Source repos:

- Target dir: `/srv/op-sepolia/optimism-deployment`

- Cleanup an existing deployment if required:

  ```bash
  cd /srv/op-sepolia

  # Stop the deployment
  laconic-so deployment --dir optimism-deployment stop --delete-volumes

  # Remove the deployment dir
  sudo rm -rf optimism-deployment
  ```
### Setup

- Switch to the `testnet-ops/l2-setup` directory on your local machine:

  ```bash
  cd testnet-ops/l2-setup
  ```

- Copy the `l2-vars-example.yml` vars file:

  ```bash
  cp l2-vars-example.yml l2-vars.yml
  ```

- Edit `l2-vars.yml` with the required values:

  ```yaml
  # L1 chain ID (Sepolia: 11155111)
  l1_chain_id: "11155111"

  # L1 RPC endpoint
  l1_rpc: "http://host.docker.internal:8545"

  # L1 RPC endpoint host or IP address
  l1_host: "host.docker.internal"

  # L1 RPC endpoint port number
  l1_port: "8545"

  # L1 Beacon endpoint
  l1_beacon: "http://host.docker.internal:8001"

  # Account credentials for the Admin account
  # Used for Optimism contracts deployment and funding other generated accounts
  l1_address: ""
  l1_priv_key: ""
  ```

- Update the target dir in `setup-vars.yml`:

  ```bash
  sed -i 's|^l2_directory:.*|l2_directory: /srv/op-sepolia|' setup-vars.yml

  # Will create deployment at /srv/op-sepolia/optimism-deployment
  ```
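Optionally, confirm the target dir was updated as expected:

```bash
grep '^l2_directory' setup-vars.yml
# Expected: l2_directory: /srv/op-sepolia
```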
### Run

- Setup and run L2 by executing the `run-optimism.yml` Ansible playbook:

  ```bash
  LANG=en_US.utf8 ansible-playbook -i localhost, --connection=local run-optimism.yml --extra-vars='{ "target_host": "localhost"}' --user $USER
  ```

- To skip the container build, run with `"skip_container_build": true`:

  ```bash
  LANG=en_US.utf8 ansible-playbook -i localhost, --connection=local run-optimism.yml --extra-vars='{"target_host" : "localhost", "skip_container_build": true}' -kK --user $USER
  ```
- Bridge funds on L2:

  - Set the following variables (a sketch for locating the Docker network name follows this list):

    ```bash
    cd /srv/op-sepolia

    L1_RPC=http://host.docker.internal:8545
    L2_RPC=http://host.docker.internal:9545

    NETWORK=<the network name found above>

    DEPLOYMENT_CONTEXT=11155111
    ACCOUNT=<admin-account-address>
    ```

  - Read the bridge contract address from the L1 deployment records in the `op-node` container:

    ```bash
    BRIDGE=$(laconic-so deployment --dir optimism-deployment exec op-node "cat /l1-deployment/$DEPLOYMENT_CONTEXT-deploy.json" | jq -r .L1StandardBridgeProxy)

    # Get the funded account's pk
    ACCOUNT_PK=$(laconic-so deployment --dir optimism-deployment exec op-node "jq -r '.AdminKey' /l2-accounts/accounts.json")
    ```

  - Check that the starting balance for the account on L2 is 0:

    ```bash
    docker run --rm --network $NETWORK cerc/optimism-contracts:local "cast balance $ACCOUNT --rpc-url $L2_RPC"

    # 0
    ```

  - Use cast to send ETH to the bridge contract:

    ```bash
    docker run --rm cerc/optimism-contracts:local "cast send --from $ACCOUNT --value 1ether $BRIDGE --rpc-url $L1_RPC --private-key $ACCOUNT_PK"
    ```
  - Allow a couple of minutes for the bridge deposit to complete (a balance-polling sketch follows this list)

  - Check the balance on L2:

    ```bash
    docker run --rm --network $NETWORK cerc/optimism-contracts:local "cast balance $ACCOUNT --rpc-url $L2_RPC"

    # 1000000000000000000
    ```
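The Docker network referenced above is the one created for the Optimism deployment; listing networks is one way to find it. The short loop below is a convenience sketch reusing the balance command above, not part of the original flow; it polls the L2 balance until the bridged funds are credited:

```bash
# List Docker networks; pick the one created by the Optimism deployment for $NETWORK
docker network ls

# Poll the L2 balance until the bridge deposit is credited (typically a couple of minutes)
while true; do
  BALANCE=$(docker run --rm --network $NETWORK cerc/optimism-contracts:local "cast balance $ACCOUNT --rpc-url $L2_RPC")
  echo "L2 balance: $BALANCE"
  [ "$BALANCE" != "0" ] && break
  sleep 10
done
```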
## L1 Nitro Contracts Deployment

- Source repo: https://github.com/cerc-io/go-nitro

- Target dir: `/srv/bridge/nitro-contracts-deployment`

- Cleanup an existing deployment on the virtual machine if required:

  ```bash
  cd /srv/bridge

  # Stop the deployment
  laconic-so deployment --dir nitro-contracts-deployment stop --delete-volumes

  # Remove the deployment dir
  sudo rm -rf nitro-contracts-deployment
  ```
### Setup

- The following commands have to be executed in the `testnet-ops/nitro-contracts-setup` directory on your local machine:

  ```bash
  cd testnet-ops/nitro-contracts-setup
  ```

- Copy the `contract-vars-example.yml` vars file:

  ```bash
  cp contract-vars-example.yml contract-vars.yml
  ```

- Edit `contract-vars.yml` and fill in the following values:

  ```yaml
  # L1 RPC endpoint
  geth_url: "https://sepolia.laconic.com"

  # L1 chain ID (Sepolia: 11155111)
  geth_chain_id: "11155111"

  # Private key for a funded account on L1 to use for contracts deployment on L1
  geth_deployer_pk: ""

  # Custom L1 token to be deployed
  token_name: ""
  token_symbol: ""
  intial_token_supply: ""
  ```
- Update the target dir in `setup-vars.yml`:

  ```bash
  sed -i 's|^nitro_directory:.*|nitro_directory: /srv/bridge|' setup-vars.yml

  # Will create deployment at /srv/bridge/nitro-contracts-deployment
  ```
### Run

- Execute the `deploy-contracts.yml` Ansible playbook on the remote host to deploy the Nitro contracts:

  - Create a new `hosts.ini` file:

    ```bash
    cp ../hosts.example.ini hosts.ini
    ```

  - Edit the `hosts.ini` file to run the playbook on a remote machine:

    ```ini
    [<deployment_host>]
    <host_name> ansible_host=<target_ip> ansible_user=<ssh_user> ansible_ssh_common_args='-o ForwardAgent=yes'
    ```

    - Replace `<deployment_host>` with `nitro_host`
    - Replace `<host_name>` with the alias of your choice
    - Replace `<target_ip>` with the IP address or hostname of the target machine
    - Replace `<ssh_user>` with the SSH username (e.g., dev, ubuntu)

  - Verify that you are able to connect to the host using the following command:

    ```bash
    ansible all -m ping -i hosts.ini -k

    # Expected output:

    # <host_name> | SUCCESS => {
    #   "ansible_facts": {
    #     "discovered_interpreter_python": "/usr/bin/python3.10"
    #   },
    #   "changed": false,
    #   "ping": "pong"
    # }
    ```
  - Execute the `deploy-contracts.yml` Ansible playbook for remote deployment:

    ```bash
    LANG=en_US.utf8 ansible-playbook -i hosts.ini deploy-contracts.yml --extra-vars='{ "target_host": "nitro_host"}' --user $USER -kK
    ```

  - To skip the container build, add `"skip_container_build": true` in the `--extra-vars` parameter:

    ```bash
    LANG=en_US.utf8 ansible-playbook -i hosts.ini deploy-contracts.yml --extra-vars='{ "target_host": "nitro_host", "skip_container_build": true }' --user $USER -kK
    ```

- Check the logs for the deployments on the virtual machine:

  ```bash
  cd /srv/bridge

  # Check the L1 Nitro contract deployments
  laconic-so deployment --dir nitro-contracts-deployment logs nitro-contracts -f
  ```
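Once the playbook finishes, the deployed contract addresses can be inspected on the VM; this reuses the `nitro-contracts` service and the `/app/deployment/nitro-addresses.json` path referenced later in this guide:

```bash
cd /srv/bridge

# Print the recorded Nitro contract addresses (keyed by chain ID)
laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "cat /app/deployment/nitro-addresses.json" | jq .
```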
## Nitro Bridge

- Source repo: https://github.com/cerc-io/go-nitro

- Target dir: `/srv/bridge/bridge-deployment`

- Cleanup an existing deployment on the virtual machine if required:

  ```bash
  cd /srv/bridge

  # Stop the deployment
  laconic-so deployment --dir bridge-deployment stop --delete-volumes

  # Remove the deployment dir
  sudo rm -rf bridge-deployment
  ```
### Setup

- The following commands have to be executed in the `testnet-ops/nitro-bridge-setup` directory on your local machine:

  ```bash
  cd testnet-ops/nitro-bridge-setup
  ```

- Create the required vars file:

  ```bash
  cp bridge-vars-example.yml bridge-vars.yml
  ```

- Edit `bridge-vars.yml` with the required values:

  ```yaml
  # L1 WS endpoint
  nitro_l1_chain_url: "wss://sepolia.laconic.com"

  # L2 WS endpoint
  nitro_l2_chain_url: "wss://optimism.laconic.com"

  # Private key for the bridge Nitro address
  nitro_sc_pk: ""

  # Private key for a funded account on L1
  # This account should have L1 tokens for funding Nitro channels
  nitro_chain_pk: ""

  # L1 chain ID (Sepolia: 11155111)
  geth_chain_id: "11155111"

  # L1 RPC endpoint
  geth_url: "https://sepolia.laconic.com"

  # L2 RPC endpoint
  optimism_url: "https://optimism.laconic.com"

  # Private key for a funded account for L1 contracts deployment
  geth_deployer_pk: ""

  # Private key for a funded account for L2 contracts deployment
  # Use the same account for L1 and L2 deployments
  optimism_deployer_pk: ""
  ```
- Update the target dir in `setup-vars.yml`:

  ```bash
  sed -i 's|^nitro_directory:.*|nitro_directory: /srv/bridge|' setup-vars.yml

  # Will create deployments at /srv/bridge/nitro-contracts-deployment and /srv/bridge/bridge-deployment
  ```
### Run

- Execute the `run-nitro-bridge.yml` Ansible playbook on the remote host to deploy the L2 contracts and start the bridge:

  - Create a new `hosts.ini` file:

    ```bash
    cp ../hosts.example.ini hosts.ini
    ```

  - Edit the `hosts.ini` file to run the playbook on a remote machine:

    ```ini
    [<deployment_host>]
    <host_name> ansible_host=<target_ip> ansible_user=<ssh_user> ansible_ssh_common_args='-o ForwardAgent=yes'
    ```

    - Replace `<deployment_host>` with `nitro_host`
    - Replace `<host_name>` with the alias of your choice
    - Replace `<target_ip>` with the IP address or hostname of the target machine
    - Replace `<ssh_user>` with the SSH username (e.g., dev, ubuntu)

  - Verify that you are able to connect to the host using the following command:

    ```bash
    ansible all -m ping -i hosts.ini -k

    # Expected output:

    # <host_name> | SUCCESS => {
    #   "ansible_facts": {
    #     "discovered_interpreter_python": "/usr/bin/python3.10"
    #   },
    #   "changed": false,
    #   "ping": "pong"
    # }
    ```
  - Execute the `run-nitro-bridge.yml` Ansible playbook for remote deployment:

    ```bash
    LANG=en_US.utf8 ansible-playbook -i hosts.ini run-nitro-bridge.yml --extra-vars='{ "target_host": "nitro_host"}' --user $USER -kK
    ```

  - To skip the container build, add `"skip_container_build": true` in the `--extra-vars` parameter:

    ```bash
    LANG=en_US.utf8 ansible-playbook -i hosts.ini run-nitro-bridge.yml --extra-vars='{ "target_host": "nitro_host", "skip_container_build": true }' --user $USER -kK
    ```

- Check the logs for the deployments on the virtual machine:

  ```bash
  cd /srv/bridge

  # Check the L2 Nitro contract deployments
  laconic-so deployment --dir bridge-deployment logs l2-nitro-contracts -f

  # Check bridge logs, ensure that the node is running
  laconic-so deployment --dir bridge-deployment logs nitro-bridge -f
  ```
- Create the Nitro node config for users:

  ```bash
  cd /srv/bridge

  # Create required variables
  GETH_CHAIN_ID="11155111"
  OPTIMISM_CHAIN_ID="42069"

  export NA_ADDRESS=$(laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "jq -r '.\"$GETH_CHAIN_ID\"[0].contracts.NitroAdjudicator.address' /app/deployment/nitro-addresses.json")
  export CA_ADDRESS=$(laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "jq -r '.\"$GETH_CHAIN_ID\"[0].contracts.ConsensusApp.address' /app/deployment/nitro-addresses.json")
  export VPA_ADDRESS=$(laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "jq -r '.\"$GETH_CHAIN_ID\"[0].contracts.VirtualPaymentApp.address' /app/deployment/nitro-addresses.json")
  export L1_ASSET_ADDRESS=$(laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "jq -r '.\"$GETH_CHAIN_ID\"[0].contracts.Token.address' /app/deployment/nitro-addresses.json")
  export BRIDGE_CONTRACT_ADDRESS=$(laconic-so deployment --dir bridge-deployment exec nitro-bridge "jq -r '.\"$OPTIMISM_CHAIN_ID\"[0].contracts.Bridge.address' /app/deployment/nitro-addresses.json")

  export BRIDGE_NITRO_ADDRESS=$(laconic-so deployment --dir bridge-deployment exec nitro-rpc-client "nitro-rpc-client get-node-info -p 4006 -h nitro-bridge" | jq -r '.SCAddress')
  export BRIDGE_PEER_ID=$(laconic-so deployment --dir bridge-deployment exec nitro-rpc-client "nitro-rpc-client get-node-info -p 4006 -h nitro-bridge" | jq -r '.MessageServicePeerId')

  export L1_BRIDGE_MULTIADDR="/dns4/bridge.laconic.com/tcp/3005/p2p/$BRIDGE_PEER_ID"
  export L2_BRIDGE_MULTIADDR="/dns4/bridge.laconic.com/tcp/3006/p2p/$BRIDGE_PEER_ID"

  # Create the required config file
  cat <<EOF > nitro-node-config.yml
  nitro_l1_chain_url: "wss://sepolia.laconic.com"
  nitro_l2_chain_url: "wss://optimism.laconic.com"
  na_address: "$NA_ADDRESS"
  ca_address: "$CA_ADDRESS"
  vpa_address: "$VPA_ADDRESS"
  l1_asset_address: "${L1_ASSET_ADDRESS}"
  bridge_contract_address: "$BRIDGE_CONTRACT_ADDRESS"
  bridge_nitro_address: "$BRIDGE_NITRO_ADDRESS"
  nitro_l1_bridge_multiaddr: "$L1_BRIDGE_MULTIADDR"
  nitro_l2_bridge_multiaddr: "$L2_BRIDGE_MULTIADDR"
  EOF
  ```

  The required config file should be generated at `/srv/bridge/nitro-node-config.yml`

  Check in the file at location `ops/stage2/nitro-node-config.yml`
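Optionally, sanity-check the generated file: every substituted value should be non-empty (an empty string or `null` means the corresponding lookup above failed and should be re-run):

```bash
cat /srv/bridge/nitro-node-config.yml

# Flag any values that did not get substituted
grep -En ': ""$|null' /srv/bridge/nitro-node-config.yml && echo "WARNING: empty values found"
```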
## stage0 laconicd

- Source repo: https://git.vdb.to/cerc-io/laconicd

- Target dir: `/srv/laconicd/stage0-deployment`

- Cleanup an existing deployment if required:

  ```bash
  cd /srv/laconicd

  # Stop the deployment
  laconic-so deployment --dir stage0-deployment stop --delete-volumes

  # Remove the deployment dir
  sudo rm -rf stage0-deployment

  # Remove the existing spec file
  rm stage0-spec.yml
  ```
### Setup

- Clone the stack repo:

  ```bash
  laconic-so fetch-stack git.vdb.to/cerc-io/fixturenet-laconicd-stack --pull

  # This should clone the fixturenet-laconicd-stack repo at `/home/dev/cerc/fixturenet-laconicd-stack`
  ```

- Clone required repositories:

  ```bash
  laconic-so --stack ~/cerc/fixturenet-laconicd-stack/stack-orchestrator/stacks/fixturenet-laconicd setup-repositories --pull

  # This should clone the laconicd repo at `/home/dev/cerc/laconicd`
  ```

- Build the container images:

  ```bash
  laconic-so --stack ~/cerc/fixturenet-laconicd-stack/stack-orchestrator/stacks/fixturenet-laconicd build-containers --force-rebuild

  # This should create the "cerc/laconicd" Docker image
  ```
### Deployment

- Create a spec file for the deployment:

  ```bash
  cd /srv/laconicd

  laconic-so --stack ~/cerc/fixturenet-laconicd-stack/stack-orchestrator/stacks/fixturenet-laconicd deploy init --output stage0-spec.yml
  ```

- Edit network in the spec file to map container ports to host ports:

  ```yaml
  # stage0-spec.yml
  network:
    ports:
      laconicd:
        - '6060'
        - '127.0.0.1:26657:26657'
        - '26656'
        - '127.0.0.1:9473:9473'
        - '127.0.0.1:9090:9090'
        - '127.0.0.1:1317:1317'
  ```

- Create a deployment from the spec file:

  ```bash
  laconic-so --stack ~/cerc/fixturenet-laconicd-stack/stack-orchestrator/stacks/fixturenet-laconicd deploy create --spec-file stage0-spec.yml --deployment-dir stage0-deployment
  ```

- Update the configuration:

  ```bash
  cat <<EOF > stage0-deployment/config.env
  # Set to true to enable adding participants functionality of the onboarding module
  ONBOARDING_ENABLED=true

  # A custom human readable name for this node
  MONIKER=LaconicStage0
  EOF
  ```
### Start

- Start the deployment:

  ```bash
  laconic-so deployment --dir stage0-deployment start
  ```

- Check status:

  ```bash
  # List the containers and check health status
  docker ps -a | grep laconicd

  # Follow logs for the laconicd container, check that new blocks are getting created
  laconic-so deployment --dir stage0-deployment logs laconicd -f
  ```

- Verify that the endpoint is now publicly accessible:

  - https://laconicd.laconic.com is pointed to the node's RPC endpoint

  - Check the status query:

    ```bash
    curl https://laconicd.laconic.com/status | jq

    # Expected output:
    # JSON with `node_info`, `sync_info` and `validator_info`
    ```
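To confirm the chain is producing blocks, the reported height should increase between successive queries (the field paths below assume the standard CometBFT `/status` response shown above):

```bash
# Run twice a few seconds apart; the height should increase
curl -s https://laconicd.laconic.com/status | jq -r '.result.sync_info.latest_block_height'
sleep 5
curl -s https://laconicd.laconic.com/status | jq -r '.result.sync_info.latest_block_height'
```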
## faucet

- Source repo: https://git.vdb.to/cerc-io/laconic-faucet

- Target dir: `/srv/faucet/laconic-faucet-deployment`

- Cleanup an existing deployment if required:

  ```bash
  cd /srv/faucet

  # Stop the deployment
  laconic-so deployment --dir laconic-faucet-deployment stop

  # Remove the deployment dir
  sudo rm -rf laconic-faucet-deployment

  # Remove the existing spec file
  rm laconic-faucet-spec.yml
  ```
### Setup

- Clone the stack repo:

  ```bash
  laconic-so fetch-stack git.vdb.to/cerc-io/testnet-laconicd-stack --pull

  # This should clone the testnet-laconicd-stack repo at `/home/dev/cerc/testnet-laconicd-stack`
  ```

- Clone required repositories:

  ```bash
  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-faucet setup-repositories --pull

  # This should clone the laconic-faucet repo at `/home/dev/cerc/laconic-faucet`
  ```

- Build the container images:

  ```bash
  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-faucet build-containers --force-rebuild

  # This should create the "cerc/laconic-faucet" Docker image
  ```
### Deployment

- Create a spec file for the deployment:

  ```bash
  cd /srv/faucet

  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-faucet deploy init --output laconic-faucet-spec.yml
  ```

- Edit network in the spec file to map container ports to host ports:

  ```yaml
  # laconic-faucet-spec.yml
  network:
    ports:
      faucet:
        - '127.0.0.1:4000:3000'
  ```

- Create a deployment from the spec file:

  ```bash
  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-faucet deploy create --spec-file laconic-faucet-spec.yml --deployment-dir laconic-faucet-deployment

  # Place in the same namespace as stage0
  cp /srv/laconicd/stage0-deployment/deployment.yml laconic-faucet-deployment/deployment.yml
  ```

- Update the configuration:

  ```bash
  # Get the faucet account key from the stage0 deployment
  export FAUCET_ACCOUNT_PK=$(laconic-so deployment --dir /srv/laconicd/stage0-deployment exec laconicd "echo y | laconicd keys export alice --keyring-backend test --unarmored-hex --unsafe")

  cat <<EOF > laconic-faucet-deployment/config.env
  CERC_FAUCET_KEY=$FAUCET_ACCOUNT_PK
  EOF
  ```
### Start

- Start the deployment:

  ```bash
  laconic-so deployment --dir laconic-faucet-deployment start
  ```

- Check status:

  ```bash
  # List the containers and check health status
  docker ps -a | grep faucet

  # Check logs for the faucet container
  laconic-so deployment --dir laconic-faucet-deployment logs faucet -f
  ```

- Verify that the endpoint is now publicly accessible:

  - https://faucet.laconic.com is pointed to the faucet endpoint

  - Check the faucet:

    ```bash
    curl -X POST https://faucet.laconic.com/faucet

    # Expected output:
    # {"error":"address is required"}
    ```
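To actually request funds, the POST body needs an address. The exact payload format depends on the faucet build, so the `address` field below is inferred from the error message above and should be treated as a sketch only:

```bash
# Hypothetical request format; replace the placeholder with a real laconicd address
curl -X POST https://faucet.laconic.com/faucet \
  -H 'Content-Type: application/json' \
  -d '{"address": "<laconicd-address>"}'
```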
## testnet-onboarding-app

- Source repo: https://git.vdb.to/cerc-io/testnet-onboarding-app

- Target dir: `/srv/app/onboarding-app-deployment`

- Cleanup an existing deployment if required:

  ```bash
  cd /srv/app

  # Stop the deployment
  laconic-so deployment --dir onboarding-app-deployment stop

  # Remove the deployment dir
  sudo rm -rf onboarding-app-deployment

  # Remove the existing spec file
  rm onboarding-app-spec.yml
  ```
### Setup

- Clone the stack repo:

  ```bash
  laconic-so fetch-stack git.vdb.to/cerc-io/testnet-onboarding-app-stack --pull

  # This should clone the testnet-onboarding-app-stack repo at `/home/dev/cerc/testnet-onboarding-app-stack`
  ```

- Clone required repositories:

  ```bash
  laconic-so --stack ~/cerc/testnet-onboarding-app-stack/stack-orchestrator/stacks/onboarding-app setup-repositories --pull

  # This should clone the testnet-onboarding-app repo at `/home/dev/cerc/testnet-onboarding-app`
  ```

- Build the container images:

  ```bash
  laconic-so --stack ~/cerc/testnet-onboarding-app-stack/stack-orchestrator/stacks/onboarding-app build-containers --force-rebuild

  # This should create the Docker image "cerc/testnet-onboarding-app" locally
  ```
### Deployment

- Create a spec file for the deployment:

  ```bash
  cd /srv/app

  laconic-so --stack ~/cerc/testnet-onboarding-app-stack/stack-orchestrator/stacks/onboarding-app deploy init --output onboarding-app-spec.yml
  ```

- Edit network in the spec file to map container ports to host ports:

  ```yaml
  network:
    ports:
      testnet-onboarding-app:
        - '127.0.0.1:3000:80'
  ```

- Create a deployment from the spec file:

  ```bash
  laconic-so --stack ~/cerc/testnet-onboarding-app-stack/stack-orchestrator/stacks/onboarding-app deploy create --spec-file onboarding-app-spec.yml --deployment-dir onboarding-app-deployment
  ```

- Update the configuration:

  ```bash
  cat <<EOF > onboarding-app-deployment/config.env
  WALLET_CONNECT_ID=63...

  CERC_REGISTRY_GQL_ENDPOINT="https://laconicd.laconic.com/api"
  CERC_LACONICD_RPC_ENDPOINT="https://laconicd.laconic.com"

  CERC_FAUCET_ENDPOINT="https://faucet.laconic.com"

  CERC_WALLET_META_URL="https://loro-signup.laconic.com"
  EOF
  ```
### Start

- Start the deployment:

  ```bash
  laconic-so deployment --dir onboarding-app-deployment start
  ```

- Check status:

  ```bash
  # List the container
  docker ps -a | grep testnet-onboarding-app

  # Follow logs for the testnet-onboarding-app container, wait for the build to finish
  laconic-so deployment --dir onboarding-app-deployment logs testnet-onboarding-app -f
  ```

- The onboarding app can now be viewed at https://loro-signup.laconic.com
## laconic-wallet-web

- Source repo: https://git.vdb.to/cerc-io/laconic-wallet-web

- Target dir: `/srv/wallet/laconic-wallet-web-deployment`

- Cleanup an existing deployment if required:

  ```bash
  cd /srv/wallet

  # Stop the deployment
  laconic-so deployment --dir laconic-wallet-web-deployment stop

  # Remove the deployment dir
  sudo rm -rf laconic-wallet-web-deployment

  # Remove the existing spec file
  rm laconic-wallet-web-spec.yml
  ```
### Setup

- Clone the stack repo:

  ```bash
  laconic-so fetch-stack git.vdb.to/cerc-io/laconic-wallet-web --pull

  # This should clone the laconic-wallet-web repo at `/home/dev/cerc/laconic-wallet-web`
  ```

- Build the container images:

  ```bash
  laconic-so --stack ~/cerc/laconic-wallet-web/stack/stack-orchestrator/stack/laconic-wallet-web build-containers --force-rebuild

  # This should create the Docker image "cerc/laconic-wallet-web" locally
  ```
### Deployment

- Create a spec file for the deployment:

  ```bash
  cd /srv/wallet

  laconic-so --stack ~/cerc/laconic-wallet-web/stack/stack-orchestrator/stack/laconic-wallet-web deploy init --output laconic-wallet-web-spec.yml
  ```

- Edit network in the spec file to map container ports to host ports:

  ```yaml
  network:
    ports:
      laconic-wallet-web:
        - '127.0.0.1:5000:80'
  ```

- Create a deployment from the spec file:

  ```bash
  laconic-so --stack ~/cerc/laconic-wallet-web/stack/stack-orchestrator/stack/laconic-wallet-web deploy create --spec-file laconic-wallet-web-spec.yml --deployment-dir laconic-wallet-web-deployment
  ```

- Update the configuration:

  ```bash
  cat <<EOF > laconic-wallet-web-deployment/config.env
  WALLET_CONNECT_ID=63...
  EOF
  ```
### Start

- Start the deployment:

  ```bash
  laconic-so deployment --dir laconic-wallet-web-deployment start
  ```

- Check status:

  ```bash
  # List the container
  docker ps -a | grep laconic-wallet-web

  # Follow logs for the laconic-wallet-web container, wait for the build to finish
  laconic-so deployment --dir laconic-wallet-web-deployment logs laconic-wallet-web -f
  ```

- The web wallet can now be viewed at https://wallet.laconic.com
## stage1 laconicd

- Source repo: https://git.vdb.to/cerc-io/laconicd

- Target dir: `/srv/laconicd/stage1-deployment`

- Cleanup an existing deployment if required:

  ```bash
  cd /srv/laconicd

  # Stop the deployment
  laconic-so deployment --dir stage1-deployment stop --delete-volumes

  # Remove the deployment dir
  sudo rm -rf stage1-deployment

  # Remove the existing spec file
  rm stage1-spec.yml
  ```
### Setup

- Same as for stage0 laconicd; not required if already done for stage0
### Deployment

- Create a spec file for the deployment:

  ```bash
  cd /srv/laconicd

  laconic-so --stack ~/cerc/fixturenet-laconicd-stack/stack-orchestrator/stacks/fixturenet-laconicd deploy init --output stage1-spec.yml
  ```

- Edit network in the spec file to map container ports to host ports:

  ```yaml
  # stage1-spec.yml
  network:
    ports:
      laconicd:
        - '6060'
        - '127.0.0.1:26657:26657'
        - '26656:26656'
        - '127.0.0.1:9473:9473'
        - '127.0.0.1:9090:9090'
        - '127.0.0.1:1317:1317'
  ```

- Create a deployment from the spec file:

  ```bash
  laconic-so --stack ~/cerc/fixturenet-laconicd-stack/stack-orchestrator/stacks/fixturenet-laconicd deploy create --spec-file stage1-spec.yml --deployment-dir stage1-deployment
  ```

- Update the configuration:

  ```bash
  cat <<EOF > stage1-deployment/config.env
  AUTHORITY_AUCTION_ENABLED=true
  AUTHORITY_AUCTION_COMMITS_DURATION=3600
  AUTHORITY_AUCTION_REVEALS_DURATION=3600
  AUTHORITY_GRACE_PERIOD=7200

  MONIKER=LaconicStage1
  EOF
  ```
### Start

- Follow stage0-to-stage1.md to halt the stage0 deployment, generate the genesis file for stage1, and start the stage1 deployment
## laconic-console

- Source repos:

- Target dir: `/srv/console/laconic-console-deployment`

- Cleanup an existing deployment if required:

  ```bash
  cd /srv/console

  # Stop the deployment
  laconic-so deployment --dir laconic-console-deployment stop --delete-volumes

  # Remove the deployment dir
  sudo rm -rf laconic-console-deployment

  # Remove the existing spec file
  rm laconic-console-spec.yml
  ```
### Setup

- Clone the stack repo:

  ```bash
  laconic-so fetch-stack git.vdb.to/cerc-io/testnet-laconicd-stack --pull

  # This should clone the testnet-laconicd-stack repo at `/home/dev/cerc/testnet-laconicd-stack`
  ```

- Clone required repositories:

  ```bash
  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-console setup-repositories --pull

  # This should clone the laconic-registry-cli repo at `/home/dev/cerc/laconic-registry-cli`
  # and the laconic-console repo at `/home/dev/cerc/laconic-console`
  ```

- Build the container images:

  ```bash
  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-console build-containers --force-rebuild

  # This should create the Docker images: "cerc/laconic-registry-cli", "cerc/webapp-base", "cerc/laconic-console-host"
  ```
### Deployment

- Create a spec file for the deployment:

  ```bash
  cd /srv/console

  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-console deploy init --output laconic-console-spec.yml
  ```

- Edit network in the spec file to map container ports to host ports:

  ```yaml
  network:
    ports:
      console:
        - '127.0.0.1:4001:80'
  ```

- Create a deployment from the spec file:

  ```bash
  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-console deploy create --spec-file laconic-console-spec.yml --deployment-dir laconic-console-deployment
  ```

- Update the configuration:

  ```bash
  cat <<EOF > laconic-console-deployment/config.env
  # Laconicd (hosted) GQL endpoint
  LACONIC_HOSTED_ENDPOINT=https://laconicd.laconic.com
  EOF
  ```
### Start

- Start the deployment:

  ```bash
  laconic-so deployment --dir laconic-console-deployment start
  ```

- Check status:

  ```bash
  # List the container
  docker ps -a | grep console

  # Follow logs for the console container
  laconic-so deployment --dir laconic-console-deployment logs console -f
  ```

- The laconic console can now be viewed at https://loro-console.laconic.com
## Domains / Port Mappings

```
# Machine 1
https://laconicd.laconic.com      -> 26657
https://laconicd.laconic.com/api  -> 9473/api
https://faucet.laconic.com        -> 4000
https://loro-signup.laconic.com   -> 3000
https://wallet.laconic.com        -> 5000
https://loro-console.laconic.com  -> 4001

# Machine 2
https://sepolia.laconic.com       -> 8545
wss://sepolia.laconic.com         -> 8546
https://optimism.laconic.com      -> 9545
wss://optimism.laconic.com        -> 9546

bridge.laconic.com
Open ports: 3005 (L1 side), 3006 (L2 side)
```
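Once DNS and the reverse proxy are in place, the HTTP mappings above can be spot-checked from any machine (a convenience sketch, not part of the original runbook; the jq field path assumes the standard CometBFT `/status` response):

```bash
# Chain RPC: should print the network/chain ID
curl -s https://laconicd.laconic.com/status | jq -r '.result.node_info.network'

# Faucet: expected response is {"error":"address is required"}
curl -s -X POST https://faucet.laconic.com/faucet

# Web apps: each should return HTTP 200
for url in https://loro-signup.laconic.com https://wallet.laconic.com https://loro-console.laconic.com; do
  printf '%s -> ' "$url"
  curl -s -o /dev/null -w '%{http_code}\n' "$url"
done
```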