# Create deployments from scratch (for reference only)

## Login

* Log in as the `dev` user on the deployments machine

* All the deployments are placed in the `/srv` directory:

  ```bash
  cd /srv
  ```

## Prerequisites

* Local:

  * Clone the `cerc-io/testnet-ops` repository:

    ```bash
    git clone git@git.vdb.to:cerc-io/testnet-ops.git
    ```

  * Ansible: see [installation](https://git.vdb.to/cerc-io/testnet-ops#installation)

* On deployments machine(s):

  * laconic-so: see [installation](https://git.vdb.to/cerc-io/testnet-ops/src/branch/main/stack-orchestrator-setup/README.md#setup-stack-orchestrator)
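* Before starting, it can help to confirm the required tooling is actually available. A quick sanity check (assuming `laconic-so` and Ansible were installed per the links above):

  ```bash
  # On the deployments machine: confirm laconic-so is on PATH
  laconic-so version

  # On your local machine: confirm Ansible is installed
  ansible --version
  ```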
## Nitro Contracts Deployment

* Stack:

* Source repo:

* Target dir: `/srv/bridge/nitro-contracts-deployment`

* Cleanup an existing deployment if required:

  ```bash
  cd /srv/bridge

  # Stop the deployment
  laconic-so deployment --dir nitro-contracts-deployment stop --delete-volumes

  # Remove the deployment dir
  sudo rm -rf nitro-contracts-deployment
  ```

### Setup

* Switch to the `testnet-ops/nitro-contracts-setup` directory on your local machine:

  ```bash
  cd testnet-ops/nitro-contracts-setup
  ```

* Copy the `contract-vars.example.yml` vars file:

  ```bash
  cp contract-vars.example.yml contract-vars.yml
  ```

* Edit [`contract-vars.yml`](./contract-vars.yml) and fill in the following values:

  ```bash
  # L1 RPC endpoint
  geth_url: "https://sepolia.laconic.com"

  # L1 chain ID (Sepolia: 11155111)
  geth_chain_id: "11155111"

  # Private key for a funded L1 account, to be used for contract deployment on L1
  # Must also be funded on L2 for deploying contracts
  # Required since this private key will be utilized by both L1 and L2 nodes of the bridge
  geth_deployer_pk: ""

  # Custom L1 token to be deployed
  token_name: ""
  token_symbol: ""
  initial_token_supply: ""
  ```

* Update the target dir in `setup-vars.yml`:

  ```bash
  sed -i 's|^nitro_directory:.*|nitro_directory: /srv/bridge|' setup-vars.yml

  # Will create deployment at /srv/bridge/nitro-contracts-deployment
  ```

### Run

* Deploy nitro contracts on the remote host by executing the `deploy-contracts.yml` Ansible playbook on your local machine:

  * Create a new `hosts.ini` file:

    ```bash
    cp ../hosts.example.ini hosts.ini
    ```

  * Edit the [`hosts.ini`](./hosts.ini) file to run the playbook on a remote machine:

    ```ini
    [<deployment_host>]
    <host_name> ansible_host=<target_ip> ansible_user=<ssh_user> ansible_ssh_common_args='-o ForwardAgent=yes'
    ```

    * Replace `<deployment_host>` with `nitro_host`
    * Replace `<host_name>` with the alias of your choice
    * Replace `<target_ip>` with the IP address or hostname of the target machine
    * Replace `<ssh_user>` with the SSH username (e.g. `dev`, `ubuntu`)

  * Verify that you are able to connect to the host using the following command:

    ```bash
    ansible all -m ping -i hosts.ini -k

    # Expected output:

    # <host_name> | SUCCESS => {
    #   "ansible_facts": {
    #     "discovered_interpreter_python": "/usr/bin/python3.10"
    #   },
    #   "changed": false,
    #   "ping": "pong"
    # }
    ```

  * Execute the `deploy-contracts.yml` Ansible playbook for remote deployment:

    ```bash
    LANG=en_US.utf8 ansible-playbook -i hosts.ini deploy-contracts.yml --extra-vars='{ "target_host": "nitro_host"}' --user $USER -kK
    ```

* Check logs for deployment on the remote machine:

  ```bash
  cd /srv/bridge

  # Check the nitro contract deployments
  laconic-so deployment --dir nitro-contracts-deployment logs nitro-contracts -f
  ```
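* Once the playbook completes, the contracts can optionally be verified directly against L1 by checking that bytecode exists at the deployed addresses. A minimal sketch using standard JSON-RPC; the address below is the sample `NitroAdjudicator` address from the next section's expected output, so substitute your own:

  ```bash
  # eth_getCode returns "0x" when no contract exists at the address
  curl -s -X POST https://sepolia.laconic.com \
    -H 'Content-Type: application/json' \
    -d '{"jsonrpc":"2.0","id":1,"method":"eth_getCode","params":["0x7C22fdA703Cdf09eB8D3B5Adc81F723526713D0e","latest"]}' \
    | jq -r 'if .result == "0x" then "WARN: no code at address" else "OK: contract deployed" end'
  ```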
## Nitro Bridge

* Stack:

* Source repo:

* Target dir: `/srv/bridge/bridge-deployment`

* Cleanup an existing deployment if required:

  ```bash
  cd /srv/bridge

  # Stop the deployment
  laconic-so deployment --dir bridge-deployment stop --delete-volumes

  # Remove the deployment dir
  sudo rm -rf bridge-deployment
  ```

### Setup

* Execute the following command on the deployment machine to get the deployed Nitro contract addresses along with the asset address:

  ```bash
  cd /srv/bridge

  laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "cat /app/deployment/nitro-addresses.json"

  # Expected output:
  # {
  #   "11155111": [
  #     {
  #       "name": "geth",
  #       "chainId": "11155111",
  #       "contracts": {
  #         "ConsensusApp": {
  #           "address": "0xC98aD0B41B9224dad0605be32A9241dB9c67E2e8"
  #         },
  #         "NitroAdjudicator": {
  #           "address": "0x7C22fdA703Cdf09eB8D3B5Adc81F723526713D0e"
  #         },
  #         "VirtualPaymentApp": {
  #           "address": "0x778e4e6297E8BF04C67a20Ec989618d72eB4a19E"
  #         },
  #         "Token": {
  #           "address": "0x02ebfB2706527C7310F2a7d9098b2BC61014C5F2"
  #         }
  #       }
  #     }
  #   ]
  # }
  ```

* Get the start block for Nitro Adjudicator events by executing the following command:

  ```bash
  laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "jq '.receipt.blockNumber' /app/deployment/hardhat-deployments/geth/NitroAdjudicator.json"

  # Expected output:
  # 101
  ```

* Switch to the `testnet-ops/nitro-bridge-setup` directory on your local machine:

  ```bash
  cd testnet-ops/nitro-bridge-setup
  ```

* Create the required vars file:

  ```bash
  cp bridge-vars.example.yml bridge-vars.yml
  ```

* Edit `bridge-vars.yml` with the required values:

  ```bash
  # WS endpoint
  nitro_chain_url: "wss://sepolia.laconic.com"

  # Private key for the bridge Nitro address
  nitro_sc_pk: ""

  # Private key should correspond to a funded account on L1 and this account must own the Nitro contracts
  # It also needs to hold L1 tokens to fund Nitro channels
  nitro_chain_pk: ""

  # Deployed L1 Nitro contract addresses
  na_address: ""
  vpa_address: ""
  ca_address: ""

  # Specifies the block number to start looking for nitro adjudicator events
  nitro_chain_start_block: 0
  ```

* Update the target dir in `setup-vars.yml`:

  ```bash
  sed -i 's|^nitro_directory:.*|nitro_directory: /srv/bridge|' setup-vars.yml

  # Will create deployment at /srv/bridge/nitro-contracts-deployment and /srv/bridge/bridge-deployment
  ```

### Run

* Start the bridge on the remote host by executing the `run-nitro-bridge.yml` Ansible playbook on your local machine:

  * Create a new `hosts.ini` file:

    ```bash
    cp ../hosts.example.ini hosts.ini
    ```

  * Edit the [`hosts.ini`](./hosts.ini) file to run the playbook on a remote machine:

    ```ini
    [<deployment_host>]
    <host_name> ansible_host=<target_ip> ansible_user=<ssh_user> ansible_ssh_common_args='-o ForwardAgent=yes'
    ```

    * Replace `<deployment_host>` with `nitro_host`
    * Replace `<host_name>` with the alias of your choice
    * Replace `<target_ip>` with the IP address or hostname of the target machine
    * Replace `<ssh_user>` with the SSH username (e.g. `dev`, `ubuntu`)

  * Verify that you are able to connect to the host using the following command:

    ```bash
    ansible all -m ping -i hosts.ini -k

    # Expected output:

    # <host_name> | SUCCESS => {
    #   "ansible_facts": {
    #     "discovered_interpreter_python": "/usr/bin/python3.10"
    #   },
    #   "changed": false,
    #   "ping": "pong"
    # }
    ```

  * Execute the `run-nitro-bridge.yml` Ansible playbook for remote deployment:

    ```bash
    LANG=en_US.utf8 ansible-playbook -i hosts.ini run-nitro-bridge.yml --extra-vars='{ "target_host": "nitro_host"}' --user $USER -kK
    ```

* Check logs for deployments on the remote machine:

  ```bash
  cd /srv/bridge

  # Check bridge logs, ensure that the node is running
  laconic-so deployment --dir bridge-deployment logs nitro-bridge -f
  ```

* Create the Nitro node config for users:

  ```bash
  cd /srv/bridge

  # Create required variables
  GETH_CHAIN_ID="11155111"

  export NA_ADDRESS=$(laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "jq -r '.\"$GETH_CHAIN_ID\"[0].contracts.NitroAdjudicator.address' /app/deployment/nitro-addresses.json")
  export CA_ADDRESS=$(laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "jq -r '.\"$GETH_CHAIN_ID\"[0].contracts.ConsensusApp.address' /app/deployment/nitro-addresses.json")
  export VPA_ADDRESS=$(laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "jq -r '.\"$GETH_CHAIN_ID\"[0].contracts.VirtualPaymentApp.address' /app/deployment/nitro-addresses.json")
  export ASSET_ADDRESS=$(laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "jq -r '.\"$GETH_CHAIN_ID\"[0].contracts.Token.address' /app/deployment/nitro-addresses.json")

  export BRIDGE_NITRO_ADDRESS=$(laconic-so deployment --dir bridge-deployment exec nitro-rpc-client "nitro-rpc-client get-node-info -p 4005 -h nitro-bridge" | jq -r '.SCAddress')
  export BRIDGE_PEER_ID=$(laconic-so deployment --dir bridge-deployment exec nitro-rpc-client "nitro-rpc-client get-node-info -p 4005 -h nitro-bridge" | jq -r '.MessageServicePeerId')

  export L1_BRIDGE_MULTIADDR="/dns4/bridge.laconic.com/tcp/3005/p2p/$BRIDGE_PEER_ID"
  export L2_BRIDGE_MULTIADDR="/dns4/bridge.laconic.com/tcp/3006/p2p/$BRIDGE_PEER_ID"

  export NITRO_CHAIN_START_BLOCK=$(laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "jq '.receipt.blockNumber' /app/deployment/hardhat-deployments/geth/NitroAdjudicator.json")

  # Create the required config file
  cat <<EOF > nitro-node-config.yml
  nitro_chain_url: "wss://sepolia.laconic.com"
  na_address: "$NA_ADDRESS"
  ca_address: "$CA_ADDRESS"
  vpa_address: "$VPA_ADDRESS"
  asset_address: "${ASSET_ADDRESS}"
  nitro_chain_start_block: $NITRO_CHAIN_START_BLOCK
  bridge_nitro_address: "$BRIDGE_NITRO_ADDRESS"
  nitro_l1_bridge_multiaddr: "$L1_BRIDGE_MULTIADDR"
  nitro_l2_bridge_multiaddr: "$L2_BRIDGE_MULTIADDR"
  EOF
  ```

* The required config file should be generated at `/srv/bridge/nitro-node-config.yml`

* Check in the generated file at `ops/stage2/nitro-node-config.yml` within this repository

* List the L2 channels created by the bridge:

  ```bash
  laconic-so deployment --dir bridge-deployment exec nitro-rpc-client "nitro-rpc-client get-all-l2-channels -p 4006 -h nitro-bridge"
  ```
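* Since every field in the generated config comes from a separate `jq` lookup, a quick validation pass can catch a failed lookup before the file is shared with users (a minimal sketch, not part of the original flow):

  ```bash
  # Empty quotes or a literal "null" indicate one of the lookups above failed
  grep -nE '""|null' /srv/bridge/nitro-node-config.yml \
    && echo "WARN: some fields are empty; re-check the export commands" \
    || echo "OK: all fields populated"
  ```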
## stage0 laconicd

* Stack:

* Source repo:

* Target dir: `/srv/laconicd/stage0-deployment`

* Cleanup an existing deployment if required:

  ```bash
  cd /srv/laconicd

  # Stop the deployment
  laconic-so deployment --dir stage0-deployment stop --delete-volumes

  # Remove the deployment dir
  sudo rm -rf stage0-deployment

  # Remove the existing spec file
  rm stage0-spec.yml
  ```

### Setup

* Clone the stack repo:

  ```bash
  laconic-so fetch-stack git.vdb.to/cerc-io/fixturenet-laconicd-stack --pull

  # This should clone the fixturenet-laconicd-stack repo at `/home/dev/cerc/fixturenet-laconicd-stack`
  ```

* Clone required repositories:

  ```bash
  laconic-so --stack ~/cerc/fixturenet-laconicd-stack/stack-orchestrator/stacks/fixturenet-laconicd setup-repositories --pull

  # This should clone the laconicd repo at `/home/dev/cerc/laconicd`
  ```

* Build the container images:

  ```bash
  laconic-so --stack ~/cerc/fixturenet-laconicd-stack/stack-orchestrator/stacks/fixturenet-laconicd build-containers --force-rebuild

  # This should create the "cerc/laconicd" Docker image
  ```

### Deployment

* Create a spec file for the deployment:

  ```bash
  cd /srv/laconicd

  laconic-so --stack ~/cerc/fixturenet-laconicd-stack/stack-orchestrator/stacks/fixturenet-laconicd deploy init --output stage0-spec.yml
  ```

* Edit network in the spec file to map container ports to host ports:

  ```bash
  # stage0-spec.yml
  network:
    ports:
      laconicd:
        - '6060'
        - '127.0.0.1:26657:26657'
        - '26656'
        - '127.0.0.1:9473:9473'
        - '127.0.0.1:9090:9090'
        - '127.0.0.1:1317:1317'
  ```

* Create a deployment from the spec file:

  ```bash
  laconic-so --stack ~/cerc/fixturenet-laconicd-stack/stack-orchestrator/stacks/fixturenet-laconicd deploy create --spec-file stage0-spec.yml --deployment-dir stage0-deployment
  ```

* Update the configuration:

  ```bash
  cat <<EOF > stage0-deployment/config.env
  # Set to true to enable adding participants functionality of the onboarding module
  ONBOARDING_ENABLED=true

  # A custom human-readable name for this node
  MONIKER=LaconicStage0
  EOF
  ```

### Start

* Start the deployment:

  ```bash
  laconic-so deployment --dir stage0-deployment start
  ```

* Check status:

  ```bash
  # List the containers and check health status
  docker ps -a | grep laconicd

  # Follow logs for the laconicd container; check that new blocks are being created
  laconic-so deployment --dir stage0-deployment logs laconicd -f
  ```

* Verify that the endpoint is now publicly accessible:

  * https://laconicd.laconic.com is pointed to the node's RPC endpoint

  * Check the status query:

    ```bash
    curl https://laconicd.laconic.com/status | jq

    # Expected output:
    # JSON with `node_info`, `sync_info` and `validator_info`
    ```
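* Rather than eyeballing the logs for new blocks, the CometBFT RPC can be sampled twice and the heights compared (a minimal sketch; assumes the `26657` port mapping above and `jq` installed):

  ```bash
  # Sample the latest block height twice, 10 seconds apart
  H1=$(curl -s http://127.0.0.1:26657/status | jq -r '.result.sync_info.latest_block_height')
  sleep 10
  H2=$(curl -s http://127.0.0.1:26657/status | jq -r '.result.sync_info.latest_block_height')

  echo "height: $H1 -> $H2"
  [ "$H2" -gt "$H1" ] && echo "OK: blocks are being produced" || echo "WARN: height is not advancing"
  ```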
## faucet

* Stack:

* Source repo:

* Target dir: `/srv/faucet/laconic-faucet-deployment`

* Cleanup an existing deployment if required:

  ```bash
  cd /srv/faucet

  # Stop the deployment
  laconic-so deployment --dir laconic-faucet-deployment stop

  # Remove the deployment dir
  sudo rm -rf laconic-faucet-deployment

  # Remove the existing spec file
  rm laconic-faucet-spec.yml
  ```

### Setup

* Clone the stack repo:

  ```bash
  laconic-so fetch-stack git.vdb.to/cerc-io/testnet-laconicd-stack --pull

  # This should clone the testnet-laconicd-stack repo at `/home/dev/cerc/testnet-laconicd-stack`
  ```

* Clone required repositories:

  ```bash
  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-faucet setup-repositories --pull

  # This should clone the laconic-faucet repo at `/home/dev/cerc/laconic-faucet`
  ```

* Build the container images:

  ```bash
  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-faucet build-containers --force-rebuild

  # This should create the "cerc/laconic-faucet" Docker image
  ```

### Deployment

* Create a spec file for the deployment:

  ```bash
  cd /srv/faucet

  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-faucet deploy init --output laconic-faucet-spec.yml
  ```

* Edit network in the spec file to map container ports to host ports:

  ```bash
  # laconic-faucet-spec.yml
  network:
    ports:
      faucet:
        - '127.0.0.1:4000:3000'
  ```

* Create a deployment from the spec file:

  ```bash
  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-faucet deploy create --spec-file laconic-faucet-spec.yml --deployment-dir laconic-faucet-deployment

  # Place in the same namespace as stage0
  cp /srv/laconicd/stage0-deployment/deployment.yml laconic-faucet-deployment/deployment.yml
  ```

* Update the configuration:

  ```bash
  # Get the faucet account key from stage0 deployment
  export FAUCET_ACCOUNT_PK=$(laconic-so deployment --dir /srv/laconicd/stage0-deployment exec laconicd "echo y | laconicd keys export alice --keyring-backend test --unarmored-hex --unsafe")

  cat <<EOF > laconic-faucet-deployment/config.env
  CERC_FAUCET_KEY=$FAUCET_ACCOUNT_PK
  EOF
  ```

### Start

* Start the deployment:

  ```bash
  laconic-so deployment --dir laconic-faucet-deployment start
  ```

* Check status:

  ```bash
  # List the containers and check health status
  docker ps -a | grep faucet

  # Check logs for the faucet container
  laconic-so deployment --dir laconic-faucet-deployment logs faucet -f
  ```

* Verify that the endpoint is now publicly accessible:

  * https://faucet.laconic.com is pointed to the faucet endpoint

  * Check the faucet:

    ```bash
    curl -X POST https://faucet.laconic.com/faucet

    # Expected output:
    # {"error":"address is required"}
    ```
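* The error response above only proves the service is reachable. For an actual token request, the `address` field name is inferred from that error message rather than confirmed from the faucet source, so treat this as a hedged example:

  ```bash
  # Hypothetical request shape; check the laconic-faucet repo for the actual API contract
  curl -X POST https://faucet.laconic.com/faucet \
    -H 'Content-Type: application/json' \
    -d '{"address": "laconic1..."}'
  ```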
## testnet-onboarding-app

* Stack:

* Source repo:

* Target dir: `/srv/app/onboarding-app-deployment`

* Cleanup an existing deployment if required:

  ```bash
  cd /srv/app

  # Stop the deployment
  laconic-so deployment --dir onboarding-app-deployment stop

  # Remove the deployment dir
  sudo rm -rf onboarding-app-deployment

  # Remove the existing spec file
  rm onboarding-app-spec.yml
  ```

### Setup

* Clone the stack repo:

  ```bash
  laconic-so fetch-stack git.vdb.to/cerc-io/testnet-onboarding-app-stack --pull

  # This should clone the testnet-onboarding-app-stack repo at `/home/dev/cerc/testnet-onboarding-app-stack`
  ```

* Clone required repositories:

  ```bash
  laconic-so --stack ~/cerc/testnet-onboarding-app-stack/stack-orchestrator/stacks/onboarding-app setup-repositories --pull

  # This should clone the testnet-onboarding-app repo at `/home/dev/cerc/testnet-onboarding-app`
  ```

* Build the container images:

  ```bash
  laconic-so --stack ~/cerc/testnet-onboarding-app-stack/stack-orchestrator/stacks/onboarding-app build-containers --force-rebuild

  # This should create the Docker image "cerc/testnet-onboarding-app" locally
  ```

### Deployment

* Create a spec file for the deployment:

  ```bash
  cd /srv/app

  laconic-so --stack ~/cerc/testnet-onboarding-app-stack/stack-orchestrator/stacks/onboarding-app deploy init --output onboarding-app-spec.yml
  ```

* Edit network in the spec file to map container ports to host ports:

  ```bash
  network:
    ports:
      testnet-onboarding-app:
        - '127.0.0.1:3000:80'
  ```

* Create a deployment from the spec file:

  ```bash
  laconic-so --stack ~/cerc/testnet-onboarding-app-stack/stack-orchestrator/stacks/onboarding-app deploy create --spec-file onboarding-app-spec.yml --deployment-dir onboarding-app-deployment
  ```

* Update the configuration:

  ```bash
  cat <<EOF > onboarding-app-deployment/config.env
  WALLET_CONNECT_ID=63...

  CERC_REGISTRY_GQL_ENDPOINT="https://laconicd.laconic.com/api"
  CERC_LACONICD_RPC_ENDPOINT="https://laconicd.laconic.com"

  CERC_FAUCET_ENDPOINT="https://faucet.laconic.com"

  CERC_WALLET_META_URL="https://loro-signup.laconic.com"
  EOF
  ```

### Start

* Start the deployment:

  ```bash
  laconic-so deployment --dir onboarding-app-deployment start
  ```

* Check status:

  ```bash
  # List the container
  docker ps -a | grep testnet-onboarding-app

  # Follow logs for the testnet-onboarding-app container, wait for the build to finish
  laconic-so deployment --dir onboarding-app-deployment logs testnet-onboarding-app -f
  ```

* The onboarding app can now be viewed at https://loro-signup.laconic.com
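* Before DNS is pointed at the machine, the app can be probed directly on the mapped host port (based on the `127.0.0.1:3000:80` mapping above):

  ```bash
  # Expect "HTTP/1.1 200 OK" once the in-container build has finished
  curl -sI http://127.0.0.1:3000 | head -n1
  ```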
## laconic-wallet-web

* Stack:

* Source repo:

* Target dir: `/srv/wallet/laconic-wallet-web-deployment`

* Cleanup an existing deployment if required:

  ```bash
  cd /srv/wallet

  # Stop the deployment
  laconic-so deployment --dir laconic-wallet-web-deployment stop

  # Remove the deployment dir
  sudo rm -rf laconic-wallet-web-deployment

  # Remove the existing spec file
  rm laconic-wallet-web-spec.yml
  ```

### Setup

* Clone the stack repo:

  ```bash
  laconic-so fetch-stack git.vdb.to/cerc-io/laconic-wallet-web --pull

  # This should clone the laconic-wallet-web repo at `/home/dev/cerc/laconic-wallet-web`
  ```

* Build the container images:

  ```bash
  laconic-so --stack ~/cerc/laconic-wallet-web/stack/stack-orchestrator/stack/laconic-wallet-web build-containers --force-rebuild

  # This should create the Docker image "cerc/laconic-wallet-web" locally
  ```

### Deployment

* Create a spec file for the deployment:

  ```bash
  cd /srv/wallet

  laconic-so --stack ~/cerc/laconic-wallet-web/stack/stack-orchestrator/stack/laconic-wallet-web deploy init --output laconic-wallet-web-spec.yml
  ```

* Edit network in the spec file to map container ports to host ports:

  ```bash
  network:
    ports:
      laconic-wallet-web:
        - '127.0.0.1:5000:80'
  ```

* Create a deployment from the spec file:

  ```bash
  laconic-so --stack ~/cerc/laconic-wallet-web/stack/stack-orchestrator/stack/laconic-wallet-web deploy create --spec-file laconic-wallet-web-spec.yml --deployment-dir laconic-wallet-web-deployment
  ```

* Update the configuration:

  ```bash
  cat <<EOF > laconic-wallet-web-deployment/config.env
  WALLET_CONNECT_ID=63...
  EOF
  ```

### Start

* Start the deployment:

  ```bash
  laconic-so deployment --dir laconic-wallet-web-deployment start
  ```

* Check status:

  ```bash
  # List the container
  docker ps -a | grep laconic-wallet-web

  # Follow logs for the laconic-wallet-web container, wait for the build to finish
  laconic-so deployment --dir laconic-wallet-web-deployment logs laconic-wallet-web -f
  ```

* The web wallet can now be viewed at https://wallet.laconic.com
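* Since the in-container build takes a while, a small wait loop against the mapped port avoids re-checking logs by hand (a sketch based on the `127.0.0.1:5000:80` mapping above):

  ```bash
  # Poll until the wallet starts serving
  until curl -sf -o /dev/null http://127.0.0.1:5000; do
    echo "waiting for laconic-wallet-web..."
    sleep 5
  done
  echo "OK: wallet is serving"
  ```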
## stage1 laconicd

* Stack:

* Source repo:

* Target dir: `/srv/laconicd/stage1-deployment`

* Cleanup an existing deployment if required:

  ```bash
  cd /srv/laconicd

  # Stop the deployment
  laconic-so deployment --dir stage1-deployment stop --delete-volumes

  # Remove the deployment dir
  sudo rm -rf stage1-deployment

  # Remove the existing spec file
  rm stage1-spec.yml
  ```

### Setup

* Same as that for [stage0 laconicd](#setup); not required if already done for stage0

### Deployment

* Create a spec file for the deployment:

  ```bash
  cd /srv/laconicd

  laconic-so --stack ~/cerc/fixturenet-laconicd-stack/stack-orchestrator/stacks/fixturenet-laconicd deploy init --output stage1-spec.yml
  ```

* Edit network in the spec file to map container ports to host ports:

  ```bash
  # stage1-spec.yml
  network:
    ports:
      laconicd:
        - '6060'
        - '127.0.0.1:26657:26657'
        - '26656:26656'
        - '127.0.0.1:9473:9473'
        - '127.0.0.1:9090:9090'
        - '127.0.0.1:1317:1317'
  ```

* Create a deployment from the spec file:

  ```bash
  laconic-so --stack ~/cerc/fixturenet-laconicd-stack/stack-orchestrator/stacks/fixturenet-laconicd deploy create --spec-file stage1-spec.yml --deployment-dir stage1-deployment
  ```

* Update the configuration:

  ```bash
  cat <<EOF > stage1-deployment/config.env
  AUTHORITY_AUCTION_ENABLED=true
  AUTHORITY_AUCTION_COMMITS_DURATION=3600
  AUTHORITY_AUCTION_REVEALS_DURATION=3600
  AUTHORITY_GRACE_PERIOD=7200

  MONIKER=LaconicStage1
  EOF
  ```

### Start

* Follow [stage0-to-stage1.md](./stage0-to-stage1.md) to halt the stage0 deployment, generate the genesis file for stage1, and start the stage1 deployment
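* Once stage1 is running (per the linked guide), one way to confirm the auction parameters took effect is to inspect the container environment, reusing the `exec` pattern used elsewhere in this doc (a sketch; the variable names are the ones set in `config.env` above):

  ```bash
  laconic-so deployment --dir stage1-deployment exec laconicd "env | grep -E 'AUTHORITY|MONIKER'"

  # Expect the values from config.env, e.g. AUTHORITY_AUCTION_ENABLED=true
  ```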
## laconic-console

* Stack:

* Source repos:

  *

  *

* Target dir: `/srv/console/laconic-console-deployment`

* Cleanup an existing deployment if required:

  ```bash
  cd /srv/console

  # Stop the deployment
  laconic-so deployment --dir laconic-console-deployment stop --delete-volumes

  # Remove the deployment dir
  sudo rm -rf laconic-console-deployment

  # Remove the existing spec file
  rm laconic-console-spec.yml
  ```

### Setup

* Clone the stack repo:

  ```bash
  laconic-so fetch-stack git.vdb.to/cerc-io/testnet-laconicd-stack --pull

  # This should clone the testnet-laconicd-stack repo at `/home/dev/cerc/testnet-laconicd-stack`
  ```

* Clone required repositories:

  ```bash
  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-console setup-repositories --pull

  # This should clone the laconic-registry-cli repo at `/home/dev/cerc/laconic-registry-cli`
  # and the laconic-console repo at `/home/dev/cerc/laconic-console`
  ```

* Build the container images:

  ```bash
  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-console build-containers --force-rebuild

  # This should create the Docker images: "cerc/laconic-registry-cli", "cerc/webapp-base", "cerc/laconic-console-host"
  ```

### Deployment

* Create a spec file for the deployment:

  ```bash
  cd /srv/console

  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-console deploy init --output laconic-console-spec.yml
  ```

* Edit network in the spec file to map container ports to host ports:

  ```bash
  network:
    ports:
      console:
        - '127.0.0.1:4001:80'
  ```

* Create a deployment from the spec file:

  ```bash
  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-console deploy create --spec-file laconic-console-spec.yml --deployment-dir laconic-console-deployment
  ```

* Update the configuration:

  ```bash
  cat <<EOF > laconic-console-deployment/config.env
  # Laconicd (hosted) GQL endpoint
  LACONIC_HOSTED_ENDPOINT=https://laconicd.laconic.com
  EOF
  ```

### Start

* Start the deployment:

  ```bash
  laconic-so deployment --dir laconic-console-deployment start
  ```

* Check status:

  ```bash
  # List the container
  docker ps -a | grep console

  # Follow logs for the console container
  laconic-so deployment --dir laconic-console-deployment logs console -f
  ```

* The laconic console can now be viewed at https://loro-console.laconic.com
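* The console is only useful if the GQL endpoint it is configured against is reachable, so both can be probed together (a sketch; the `getStatus` query is assumed from the registry GQL schema, not confirmed here):

  ```bash
  # Console UI on the mapped host port
  curl -sI http://127.0.0.1:4001 | head -n1

  # Registry GQL endpoint the console points at
  curl -s -X POST https://laconicd.laconic.com/api \
    -H 'Content-Type: application/json' \
    -d '{"query": "{ getStatus { version } }"}' | jq
  ```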
## Domains / Port Mappings

```bash
# Machine 1
https://laconicd.laconic.com      -> 26657
https://laconicd.laconic.com/api  -> 9473/api
https://faucet.laconic.com        -> 4000
https://loro-signup.laconic.com   -> 3000
https://wallet.laconic.com        -> 5000
https://loro-console.laconic.com  -> 4001

# Machine 2
https://sepolia.laconic.com       -> 8545
wss://sepolia.laconic.com         -> 8546

bridge.laconic.com
Open ports: 3005 (L1 side), 3006 (L2 side)
```
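These mappings imply a reverse proxy in front of each service. A minimal sketch of how the first mapping might be wired (illustrative only; nginx, the file path, and the certbot-managed certificate locations are assumptions, not taken from the actual servers):

```bash
cat <<'EOF' | sudo tee /etc/nginx/sites-available/laconicd.laconic.com
server {
    listen 443 ssl;
    server_name laconicd.laconic.com;

    # TLS cert/key paths as laid out by certbot (assumed)
    ssl_certificate     /etc/letsencrypt/live/laconicd.laconic.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/laconicd.laconic.com/privkey.pem;

    # GQL API -> 9473/api
    location /api {
        proxy_pass http://127.0.0.1:9473/api;
    }

    # CometBFT RPC -> 26657
    location / {
        proxy_pass http://127.0.0.1:26657;
    }
}
EOF
```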