# Create deployments from scratch (for reference only)

## Login

* Log in as the `dev` user on the deployments machine

* All the deployments are placed in the `/srv` directory:

  ```bash
  cd /srv
  ```

## Prerequisites

* Local:

  * Clone the `cerc-io/testnet-ops` repository:

    ```bash
    git clone git@git.vdb.to:cerc-io/testnet-ops.git
    ```

  * Ansible: see [installation](https://git.vdb.to/cerc-io/testnet-ops#installation)

* On deployments machine(s):

  * laconic-so: see [installation](https://git.vdb.to/cerc-io/testnet-ops/src/branch/main/stack-orchestrator-setup/README.md#setup-stack-orchestrator)
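
* Verify the installations (a quick sanity check; both commands print version info):

  ```bash
  # Locally
  ansible --version

  # On the deployments machine(s)
  laconic-so version
  ```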

<details open>
  <summary>Fixturenet Eth</summary>

## Fixturenet Eth

* Stack: <https://git.vdb.to/cerc-io/fixturenet-eth-stacks>

* Target dir: `/srv/fixturenet-eth/fixturenet-eth-deployment`

* Clean up an existing deployment if required:

  ```bash
  cd /srv/fixturenet-eth

  # Stop the deployment
  laconic-so deployment --dir fixturenet-eth-deployment stop --delete-volumes

  # Remove the deployment dir
  sudo rm -rf fixturenet-eth-deployment
  ```

### Setup

* Create a `fixturenet-eth` dir if not already present and cd into it:

  ```bash
  mkdir -p /srv/fixturenet-eth

  cd /srv/fixturenet-eth
  ```

* Clone the stack repo:

  ```bash
  laconic-so fetch-stack git.vdb.to/cerc-io/fixturenet-eth-stacks --pull
  ```

* Clone required repositories:

  ```bash
  laconic-so --stack ~/cerc/fixturenet-eth-stacks/stack-orchestrator/stacks/fixturenet-eth setup-repositories --pull

  # If this errors because a repo is already checked out on a branch/tag,
  # remove all of that stack's repositories and re-run the command
  # The repositories are located in $HOME/cerc by default
  ```
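
  For example, to reset a conflicting checkout and re-fetch (the repository name here is illustrative; remove whichever repos belong to this stack):

  ```bash
  # Remove the conflicting repo from $HOME/cerc, then re-run the clone step
  rm -rf ~/cerc/go-ethereum

  laconic-so --stack ~/cerc/fixturenet-eth-stacks/stack-orchestrator/stacks/fixturenet-eth setup-repositories --pull
  ```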

* Build the container images:

  ```bash
  # Remove any older foundry image with `latest` tag
  docker rmi ghcr.io/foundry-rs/foundry:latest

  laconic-so --stack ~/cerc/fixturenet-eth-stacks/stack-orchestrator/stacks/fixturenet-eth build-containers --force-rebuild

  # If errors are thrown during the build, delete the old images used by this stack and re-run
  ```

  * NOTE: this will take >10 mins depending on the specs of your machine, and **requires** 16GB of memory or greater.

  * Remove any dangling Docker images (to clear up space):

    ```bash
    docker image prune
    ```

* Create a spec file for the deployment, which maps the stack's ports and volumes to the host:

  ```bash
  laconic-so --stack ~/cerc/fixturenet-eth-stacks/stack-orchestrator/stacks/fixturenet-eth deploy init --output fixturenet-eth-spec.yml
  ```

* Configure ports (`host:container` entries publish a fixed host port; entries with only a container port let Docker assign an ephemeral host port):
  * `fixturenet-eth-spec.yml`

    ```yml
    ...
    network:
      ports:
        fixturenet-eth-bootnode-geth:
          - '9898:9898'
          - '30303'
        fixturenet-eth-geth-1:
          - '7545:8545'
          - '7546:8546'
          - '40000'
          - '6060'
        fixturenet-eth-lighthouse-1:
          - '8001'
    ...
    ```

* Create the deployment:
  Once you've made any needed changes to the spec file, create a deployment from it:

  ```bash
  laconic-so --stack ~/cerc/fixturenet-eth-stacks/stack-orchestrator/stacks/fixturenet-eth deploy create --spec-file fixturenet-eth-spec.yml --deployment-dir fixturenet-eth-deployment
  ```

### Run

* Start the `fixturenet-eth-deployment` deployment:

  ```bash
  laconic-so deployment --dir fixturenet-eth-deployment start
  ```
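
  Once started, the geth RPC can be sanity-checked from the host (a sketch, assuming the `7545:8545` port mapping above):

  ```bash
  # Query the current block number over JSON-RPC; the result should be non-empty and increasing
  curl -s -X POST -H "Content-Type: application/json" \
    --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
    http://localhost:7545
  ```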

</details>

<details open>
  <summary>Nitro Contracts Deployment</summary>

## Nitro Contracts Deployment

* Stack: <https://git.vdb.to/cerc-io/nitro-stack/src/branch/main/stack-orchestrator/stacks/nitro-contracts>

* Source repo: <https://github.com/cerc-io/go-nitro>

* Target dir: `/srv/bridge/nitro-contracts-deployment`

* Clean up an existing deployment if required:

  ```bash
  cd /srv/bridge

  # Stop the deployment
  laconic-so deployment --dir nitro-contracts-deployment stop --delete-volumes

  # Remove the deployment dir
  sudo rm -rf nitro-contracts-deployment
  ```

### Setup

* Switch to the `testnet-ops/nitro-contracts-setup` directory on your local machine:

  ```bash
  cd testnet-ops/nitro-contracts-setup
  ```

* Copy the `contract-vars.example.yml` vars file:

  ```bash
  cp contract-vars.example.yml contract-vars.yml
  ```

* Edit [`contract-vars.yml`](./contract-vars.yml) and fill in the following values:

  ```yml
  # RPC endpoint
  geth_url: "https://fixturenet-eth.laconic.com"

  # Chain ID (Fixturenet-eth: 1212)
  geth_chain_id: "1212"

  # Private key for a funded L1 account, to be used for contract deployment on L1
  # Required since this private key will be utilized by both L1 and L2 nodes of the bridge

  geth_deployer_pk: "888814df89c4358d7ddb3fa4b0213e7331239a80e1f013eaa7b2deca2a41a218"

  # Custom token to be deployed
  token_name: "TestToken"
  token_symbol: "TST"
  initial_token_supply: "129600"
  ```

* Edit `setup-vars.yml` to update the target directory:

  ```yml
  ...
  nitro_directory: /srv/bridge
  ...

  # Will create deployment at /srv/bridge/nitro-contracts-deployment
  ```

### Run

* Deploy the nitro contracts on the remote host by executing the `deploy-contracts.yml` Ansible playbook on your local machine:

  * Create a new `hosts.ini` file:

    ```bash
    cp ../hosts.example.ini hosts.ini
    ```

  * Edit the [`hosts.ini`](./hosts.ini) file to run the playbook on a remote machine:

    ```ini
    [deployment_host]
    <host_name> ansible_host=<target_ip> ansible_user=<ssh_user> ansible_ssh_common_args='-o ForwardAgent=yes'
    ```

    * Replace `deployment_host` with `nitro_host`
    * Replace `<host_name>` with the alias of your choice
    * Replace `<target_ip>` with the IP address or hostname of the target machine
    * Replace `<ssh_user>` with the SSH username (e.g., dev, ubuntu)

  * Verify that you are able to connect to the host using the following command:

    ```bash
    ansible all -m ping -i hosts.ini -k

    # Expected output:
    # <host_name> | SUCCESS => {
    #  "ansible_facts": {
    #      "discovered_interpreter_python": "/usr/bin/python3.10"
    #  },
    #  "changed": false,
    #  "ping": "pong"
    # }
    ```

  * Execute the `deploy-contracts.yml` Ansible playbook for remote deployment:

    ```bash
    LANG=en_US.utf8 ansible-playbook -i hosts.ini deploy-contracts.yml --extra-vars='{ "target_host": "nitro_host"}'  --user $USER -kK
    ```

* Check logs for deployment on the remote machine:

  ```bash
  cd /srv/bridge

  # Check the nitro contract deployments
  laconic-so deployment --dir nitro-contracts-deployment logs nitro-contracts  -f
  ```

* To deploy a new token and transfer it to another account, refer to this [doc](./nitro-token-ops.md)

</details>

<details open>
  <summary>Nitro Bridge</summary>

## Nitro Bridge

* Stack: <https://git.vdb.to/cerc-io/nitro-stack/src/branch/main/stack-orchestrator/stacks/bridge>

* Source repo: <https://github.com/cerc-io/go-nitro>

* Target dir: `/srv/bridge/bridge-deployment`

* Clean up an existing deployment if required:

  ```bash
  cd /srv/bridge

  # Stop the deployment
  laconic-so deployment --dir bridge-deployment stop --delete-volumes

  # Remove the deployment dir
  sudo rm -rf bridge-deployment
  ```

### Setup

* Execute the following command on the deployment machine to get the deployed Nitro contract addresses along with the asset address:

  ```bash
  cd /srv/bridge

  laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "cat /app/deployment/nitro-addresses.json"

  # Expected output:
  # {
  #   "1212": [
  #     {
  #       "name": "geth",
  #       "chainId": "1212",
  #       "contracts": {
  #         "ConsensusApp": {
  #           "address": "0xC98aD0B41B9224dad0605be32A9241dB9c67E2e8"
  #         },
  #         "NitroAdjudicator": {
  #           "address": "0x7C22fdA703Cdf09eB8D3B5Adc81F723526713D0e"
  #         },
  #         "VirtualPaymentApp": {
  #           "address": "0x778e4e6297E8BF04C67a20Ec989618d72eB4a19E"
  #         },
  #         "TestToken": {
  #           "address": "0x02ebfB2706527C7310F2a7d9098b2BC61014C5F2"
  #         }
  #       }
  #     }
  #   ]
  # }
  ```

* Switch to the `testnet-ops/nitro-bridge-setup` directory on your local machine:

  ```bash
  cd testnet-ops/nitro-bridge-setup
  ```

* Create the required vars file:

  ```bash
  cp bridge-vars.example.yml bridge-vars.yml
  ```

* Edit `bridge-vars.yml` with the required values:

  ```yml
  # WS endpoint
  nitro_chain_url: "wss://fixturenet-eth.laconic.com"

  # Private key for bridge Nitro address
  nitro_sc_pk: ""

  # Private key should correspond to a funded account on L1 and this account must own the Nitro contracts
  # It also needs to hold L1 tokens to fund Nitro channels
  nitro_chain_pk: "888814df89c4358d7ddb3fa4b0213e7331239a80e1f013eaa7b2deca2a41a218"

  # Deployed Nitro contract addresses
  na_address: ""
  vpa_address: ""
  ca_address: ""
  ```
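
  The `na_address`, `vpa_address` and `ca_address` values can be read from `nitro-addresses.json` on the deployments machine; a minimal sketch (mirrors the `jq` commands used later in this section):

  ```bash
  # On the deployments machine; assumes chain ID 1212 as above
  cd /srv/bridge

  laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts \
    "jq -r '.\"1212\"[0].contracts.NitroAdjudicator.address' /app/deployment/nitro-addresses.json"
  ```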

* Edit `setup-vars.yml` to update the target directory:

  ```yml
  ...
  nitro_directory: /srv/bridge
  ...

  # Will create deployment at /srv/bridge/nitro-contracts-deployment and /srv/bridge/bridge-deployment
  ```

### Run

* Start the bridge on the remote host by executing the `run-nitro-bridge.yml` Ansible playbook on your local machine:

  * Create a new `hosts.ini` file:

    ```bash
    cp ../hosts.example.ini hosts.ini
    ```

  * Edit the [`hosts.ini`](./hosts.ini) file to run the playbook on a remote machine:

    ```ini
    [deployment_host]
    <host_name> ansible_host=<target_ip> ansible_user=<ssh_user> ansible_ssh_common_args='-o ForwardAgent=yes'
    ```

    * Replace `deployment_host` with `nitro_host`
    * Replace `<host_name>` with the alias of your choice
    * Replace `<target_ip>` with the IP address or hostname of the target machine
    * Replace `<ssh_user>` with the SSH username (e.g., dev, ubuntu)

  * Verify that you are able to connect to the host using the following command:

    ```bash
    ansible all -m ping -i hosts.ini -k

    # Expected output:
    # <host_name> | SUCCESS => {
    #  "ansible_facts": {
    #      "discovered_interpreter_python": "/usr/bin/python3.10"
    #  },
    #  "changed": false,
    #  "ping": "pong"
    # }
    ```

  * Execute the `run-nitro-bridge.yml` Ansible playbook for remote deployment:

    ```bash
    LANG=en_US.utf8 ansible-playbook -i hosts.ini run-nitro-bridge.yml --extra-vars='{ "target_host": "nitro_host"}'  --user $USER -kK
    ```

* Check logs for the deployment on the remote machine:

  ```bash
  cd /srv/bridge

  # Check bridge logs, ensure that the node is running
  laconic-so deployment --dir bridge-deployment logs nitro-bridge -f
  ```

* Create Nitro node config for users:

  ```bash
  cd /srv/bridge

  # Create required variables
  GETH_CHAIN_ID="1212"

  export NA_ADDRESS=$(laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "jq -r '.\"$GETH_CHAIN_ID\"[0].contracts.NitroAdjudicator.address' /app/deployment/nitro-addresses.json")
  export CA_ADDRESS=$(laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "jq -r '.\"$GETH_CHAIN_ID\"[0].contracts.ConsensusApp.address' /app/deployment/nitro-addresses.json")
  export VPA_ADDRESS=$(laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "jq -r '.\"$GETH_CHAIN_ID\"[0].contracts.VirtualPaymentApp.address' /app/deployment/nitro-addresses.json")

  export BRIDGE_NITRO_ADDRESS=$(laconic-so deployment --dir bridge-deployment exec nitro-rpc-client "nitro-rpc-client get-node-info -p 4005 -h nitro-bridge" | jq -r '.SCAddress')

  export BRIDGE_PEER_ID=$(laconic-so deployment --dir bridge-deployment exec nitro-rpc-client "nitro-rpc-client get-node-info -p 4005 -h nitro-bridge" | jq -r '.MessageServicePeerId')

  export L1_BRIDGE_MULTIADDR="/dns4/bridge.laconic.com/tcp/3005/p2p/$BRIDGE_PEER_ID"
  export L2_BRIDGE_MULTIADDR="/dns4/bridge.laconic.com/tcp/3006/p2p/$BRIDGE_PEER_ID"

  # Create the required config files
  cat <<EOF > nitro-node-config.yml
  nitro_chain_url: "wss://fixturenet-eth.laconic.com"
  na_address: "$NA_ADDRESS"
  ca_address: "$CA_ADDRESS"
  vpa_address: "$VPA_ADDRESS"
  bridge_nitro_address: "$BRIDGE_NITRO_ADDRESS"
  nitro_l1_bridge_multiaddr: "$L1_BRIDGE_MULTIADDR"
  nitro_l2_bridge_multiaddr: "$L2_BRIDGE_MULTIADDR"
  EOF

  laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "jq --arg chainId \"$GETH_CHAIN_ID\" '{
    (\$chainId): [
      {
        \"name\": .[\$chainId][0].name,
        \"chainId\": .[\$chainId][0].chainId,
        \"contracts\": (
          .[\$chainId][0].contracts
          | to_entries
          | map(select(.key | in({\"ConsensusApp\":1, \"NitroAdjudicator\":1, \"VirtualPaymentApp\":1}) | not))
          | from_entries
        )
      }
    ]
  }' /app/deployment/nitro-addresses.json" > assets.json
  ```

  * The required config files should be generated at `/srv/bridge/nitro-node-config.yml` and `/srv/bridge/assets.json` (the latter lists only the custom token contracts; the three Nitro contracts are filtered out)

  * Check in the generated files to this repository, at `ops/stage2/nitro-node-config.yml` and `ops/stage2/assets.json` respectively

* List the L2 channels created by the bridge:

  ```bash
  laconic-so deployment --dir bridge-deployment exec nitro-rpc-client "nitro-rpc-client get-all-l2-channels -p 4005 -h nitro-bridge"
  ```

</details>

<details open>
  <summary>stage0 laconicd</summary>

## stage0 laconicd

* Stack: <https://git.vdb.to/cerc-io/fixturenet-laconicd-stack/src/branch/main/stack-orchestrator/stacks/fixturenet-laconicd>

* Source repo: <https://git.vdb.to/cerc-io/laconicd>

* Target dir: `/srv/laconicd/stage0-deployment`

* Clean up an existing deployment if required:

  ```bash
  cd /srv/laconicd

  # Stop the deployment
  laconic-so deployment --dir stage0-deployment stop --delete-volumes

  # Remove the deployment dir
  sudo rm -rf stage0-deployment

  # Remove the existing spec file
  rm stage0-spec.yml
  ```

### Setup

* Clone the stack repo:

  ```bash
  laconic-so fetch-stack git.vdb.to/cerc-io/fixturenet-laconicd-stack --pull

  # This should clone the fixturenet-laconicd-stack repo at `/home/dev/cerc/fixturenet-laconicd-stack`
  ```

* Clone required repositories:

  ```bash
  laconic-so --stack ~/cerc/fixturenet-laconicd-stack/stack-orchestrator/stacks/fixturenet-laconicd setup-repositories --pull

  # This should clone the laconicd repo at `/home/dev/cerc/laconicd`
  ```

* Build the container images:

  ```bash
  laconic-so --stack ~/cerc/fixturenet-laconicd-stack/stack-orchestrator/stacks/fixturenet-laconicd build-containers --force-rebuild

  # This should create the "cerc/laconicd" Docker image
  ```

### Deployment

* Create a spec file for the deployment:

  ```bash
  cd /srv/laconicd

  laconic-so --stack ~/cerc/fixturenet-laconicd-stack/stack-orchestrator/stacks/fixturenet-laconicd deploy init --output stage0-spec.yml
  ```

* Edit network in the spec file to map container ports to host ports:

  ```yml
  # stage0-spec.yml
  network:
    ports:
      laconicd:
        - '6060'
        - '127.0.0.1:26657:26657'
        - '26656'
        - '127.0.0.1:9473:9473'
        - '127.0.0.1:9090:9090'
        - '127.0.0.1:1317:1317'
  ```

* Create a deployment from the spec file:

  ```bash
  laconic-so --stack ~/cerc/fixturenet-laconicd-stack/stack-orchestrator/stacks/fixturenet-laconicd deploy create --spec-file stage0-spec.yml --deployment-dir stage0-deployment
  ```

* Update the configuration:

  ```bash
  cat <<EOF > stage0-deployment/config.env
  # Set to true to enable the onboarding module's add-participant functionality
  ONBOARDING_ENABLED=true

  # A custom human readable name for this node
  MONIKER=LaconicStage0
  EOF
  ```

### Start

* Start the deployment:

  ```bash
  laconic-so deployment --dir stage0-deployment start
  ```

* Check status:

  ```bash
  # List the containers and check their health status
  docker ps -a | grep laconicd

  # Follow logs for the laconicd container; check that new blocks are being created
  laconic-so deployment --dir stage0-deployment logs laconicd -f
  ```
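
  The RPC can also be verified locally before checking the public endpoint (assumes the `127.0.0.1:26657:26657` mapping above):

  ```bash
  # Query the local CometBFT RPC; the block height should increase between calls
  curl -s http://localhost:26657/status | jq '.result.sync_info.latest_block_height'
  ```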

* Verify that the endpoint is now publicly accessible:

  * <https://laconicd.laconic.com> is pointed to the node's RPC endpoint

  * Check status query:

    ```bash
    curl https://laconicd.laconic.com/status | jq

    # Expected output:
    # JSON with `node_info`, `sync_info` and `validator_info`
    ```

</details>

<details open>
  <summary>faucet</summary>

## faucet

* Stack: <https://git.vdb.to/cerc-io/testnet-laconicd-stack/src/branch/main/stack-orchestrator/stacks/laconic-faucet>

* Source repo: <https://git.vdb.to/cerc-io/laconic-faucet>

* Target dir: `/srv/faucet/laconic-faucet-deployment`

* Clean up an existing deployment if required:

  ```bash
  cd /srv/faucet

  # Stop the deployment
  laconic-so deployment --dir laconic-faucet-deployment stop

  # Remove the deployment dir
  sudo rm -rf laconic-faucet-deployment

  # Remove the existing spec file
  rm laconic-faucet-spec.yml
  ```

### Setup

* Clone the stack repo:

  ```bash
  laconic-so fetch-stack git.vdb.to/cerc-io/testnet-laconicd-stack --pull

  # This should clone the testnet-laconicd-stack repo at `/home/dev/cerc/testnet-laconicd-stack`
  ```

* Clone required repositories:

  ```bash
  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-faucet setup-repositories --pull

  # This should clone the laconic-faucet repo at `/home/dev/cerc/laconic-faucet`
  ```

* Build the container images:

  ```bash
  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-faucet build-containers --force-rebuild

  # This should create the "cerc/laconic-faucet" Docker image
  ```

### Deployment

* Create a spec file for the deployment:

  ```bash
  cd /srv/faucet

  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-faucet deploy init --output laconic-faucet-spec.yml
  ```

* Edit network in the spec file to map container ports to host ports:

  ```yml
  # laconic-faucet-spec.yml
  network:
    ports:
      faucet:
        - '127.0.0.1:4000:3000'
  ```

* Create a deployment from the spec file:

  ```bash
  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-faucet deploy create --spec-file laconic-faucet-spec.yml --deployment-dir laconic-faucet-deployment

  # Place in the same namespace as stage0
  cp /srv/laconicd/stage0-deployment/deployment.yml laconic-faucet-deployment/deployment.yml
  ```

* Update the configuration:

  ```bash
  # Get the faucet account key from stage0 deployment
  export FAUCET_ACCOUNT_PK=$(laconic-so deployment --dir /srv/laconicd/stage0-deployment exec laconicd "echo y | laconicd keys export alice --keyring-backend test --unarmored-hex --unsafe")

  cat <<EOF > laconic-faucet-deployment/config.env
  CERC_FAUCET_KEY=$FAUCET_ACCOUNT_PK
  EOF
  ```

### Start

* Start the deployment:

  ```bash
  laconic-so deployment --dir laconic-faucet-deployment start
  ```

* Check status:

  ```bash
  # List the containers and check their health status
  docker ps -a | grep faucet

  # Check logs for the faucet container
  laconic-so deployment --dir laconic-faucet-deployment logs faucet -f
  ```

* Verify that the endpoint is now publicly accessible:

  * <https://faucet.laconic.com> is pointed to the faucet endpoint

  * Check faucet:

    ```bash
    curl -X POST https://faucet.laconic.com/faucet

    # Expected output:
    # {"error":"address is required"}
    ```
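
    To actually request funds, the endpoint expects an address; a hypothetical request (the exact payload format is assumed here, check the faucet API):

    ```bash
    # Replace the address value with a valid laconic address
    curl -X POST https://faucet.laconic.com/faucet \
      -H "Content-Type: application/json" \
      -d '{"address": "laconic1..."}'
    ```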

</details>

<details open>
  <summary>testnet-onboarding-app</summary>

## testnet-onboarding-app

* Stack: <https://git.vdb.to/cerc-io/testnet-onboarding-app-stack/src/branch/main/stack-orchestrator/stacks/onboarding-app>

* Source repo: <https://git.vdb.to/cerc-io/testnet-onboarding-app>

* Target dir: `/srv/app/onboarding-app-deployment`

* Clean up an existing deployment if required:

  ```bash
  cd /srv/app

  # Stop the deployment
  laconic-so deployment --dir onboarding-app-deployment stop

  # Remove the deployment dir
  sudo rm -rf onboarding-app-deployment

  # Remove the existing spec file
  rm onboarding-app-spec.yml
  ```

### Setup

* Clone the stack repo:

  ```bash
  laconic-so fetch-stack git.vdb.to/cerc-io/testnet-onboarding-app-stack --pull

  # This should clone the testnet-onboarding-app-stack repo at `/home/dev/cerc/testnet-onboarding-app-stack`
  ```

* Clone required repositories:

  ```bash
  laconic-so --stack ~/cerc/testnet-onboarding-app-stack/stack-orchestrator/stacks/onboarding-app setup-repositories --pull

  # This should clone the testnet-onboarding-app repo at `/home/dev/cerc/testnet-onboarding-app`
  ```

* Build the container images:

  ```bash
  laconic-so --stack ~/cerc/testnet-onboarding-app-stack/stack-orchestrator/stacks/onboarding-app build-containers --force-rebuild

  # This should create the Docker image "cerc/testnet-onboarding-app" locally
  ```

### Deployment

* Create a spec file for the deployment:

  ```bash
  cd /srv/app

  laconic-so --stack ~/cerc/testnet-onboarding-app-stack/stack-orchestrator/stacks/onboarding-app deploy init --output onboarding-app-spec.yml
  ```

* Edit network in the spec file to map container ports to host ports:

  ```yml
  network:
    ports:
      testnet-onboarding-app:
        - '127.0.0.1:3000:80'
  ```

* Create a deployment from the spec file:

  ```bash
  laconic-so --stack ~/cerc/testnet-onboarding-app-stack/stack-orchestrator/stacks/onboarding-app deploy create --spec-file onboarding-app-spec.yml --deployment-dir onboarding-app-deployment
  ```

* Update the configuration:

  ```bash
  cat <<EOF > onboarding-app-deployment/config.env
  WALLET_CONNECT_ID=63...

  CERC_REGISTRY_GQL_ENDPOINT="https://laconicd.laconic.com/api"
  CERC_LACONICD_RPC_ENDPOINT="https://laconicd.laconic.com"

  CERC_FAUCET_ENDPOINT="https://faucet.laconic.com"

  CERC_WALLET_META_URL="https://loro-signup.laconic.com"
  EOF
  ```

### Start

* Start the deployment:

  ```bash
  laconic-so deployment --dir onboarding-app-deployment start
  ```

* Check status:

  ```bash
  # List the container
  docker ps -a | grep testnet-onboarding-app

  # Follow logs for the testnet-onboarding-app container; wait for the build to finish
  laconic-so deployment --dir onboarding-app-deployment logs testnet-onboarding-app -f
  ```
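
  A local check before relying on DNS (assumes the `127.0.0.1:3000:80` mapping above; the same check applies to the other web apps, e.g. the wallet on port 5000 and the console on 4001):

  ```bash
  # Should report HTTP 200 once the build has finished
  curl -sI http://localhost:3000 | head -n1
  ```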

* The onboarding app can now be viewed at <https://loro-signup.laconic.com>

</details>

<details open>
  <summary>laconic-wallet-web</summary>

## laconic-wallet-web

* Stack: <https://git.vdb.to/cerc-io/laconic-wallet-web/src/branch/main/stack/stack-orchestrator/stack/laconic-wallet-web>

* Source repo: <https://git.vdb.to/cerc-io/laconic-wallet-web>

* Target dir: `/srv/wallet/laconic-wallet-web-deployment`

* Clean up an existing deployment if required:

  ```bash
  cd /srv/wallet

  # Stop the deployment
  laconic-so deployment --dir laconic-wallet-web-deployment stop

  # Remove the deployment dir
  sudo rm -rf laconic-wallet-web-deployment

  # Remove the existing spec file
  rm laconic-wallet-web-spec.yml
  ```

### Setup

* Clone the stack repo:

  ```bash
  laconic-so fetch-stack git.vdb.to/cerc-io/laconic-wallet-web --pull

  # This should clone the laconic-wallet-web repo at `/home/dev/cerc/laconic-wallet-web`
  ```

* Build the container images:

  ```bash
  laconic-so --stack ~/cerc/laconic-wallet-web/stack/stack-orchestrator/stack/laconic-wallet-web build-containers --force-rebuild

  # This should create the Docker image "cerc/laconic-wallet-web" locally
  ```

### Deployment

* Create a spec file for the deployment:

  ```bash
  cd /srv/wallet

  laconic-so --stack ~/cerc/laconic-wallet-web/stack/stack-orchestrator/stack/laconic-wallet-web deploy init --output laconic-wallet-web-spec.yml
  ```

* Edit network in the spec file to map container ports to host ports:

  ```yml
  network:
    ports:
      laconic-wallet-web:
        - '127.0.0.1:5000:80'
  ```

* Create a deployment from the spec file:

  ```bash
  laconic-so --stack ~/cerc/laconic-wallet-web/stack/stack-orchestrator/stack/laconic-wallet-web deploy create --spec-file laconic-wallet-web-spec.yml --deployment-dir laconic-wallet-web-deployment
  ```

* Update the configuration:

  ```bash
  cat <<EOF > laconic-wallet-web-deployment/config.env
  WALLET_CONNECT_ID=63...
  EOF
  ```

### Start

* Start the deployment:

  ```bash
  laconic-so deployment --dir laconic-wallet-web-deployment start
  ```

* Check status:

  ```bash
  # List the container
  docker ps -a | grep laconic-wallet-web

  # Follow logs for the laconic-wallet-web container; wait for the build to finish
  laconic-so deployment --dir laconic-wallet-web-deployment logs laconic-wallet-web -f
  ```

* The web wallet can now be viewed at <https://wallet.laconic.com>

</details>

<details open>
  <summary>stage1 laconicd</summary>

## stage1 laconicd

* Stack: <https://git.vdb.to/cerc-io/fixturenet-laconicd-stack/src/branch/main/stack-orchestrator/stacks/fixturenet-laconicd>

* Source repo: <https://git.vdb.to/cerc-io/laconicd>

* Target dir: `/srv/laconicd/stage1-deployment`

* Clean up an existing deployment if required:

  ```bash
  cd /srv/laconicd

  # Stop the deployment
  laconic-so deployment --dir stage1-deployment stop --delete-volumes

  # Remove the deployment dir
  sudo rm -rf stage1-deployment

  # Remove the existing spec file
  rm stage1-spec.yml
  ```

### Setup

* Same as for [stage0 laconicd](#setup); not required if already done for stage0

### Deployment

* Create a spec file for the deployment:

  ```bash
  cd /srv/laconicd

  laconic-so --stack ~/cerc/fixturenet-laconicd-stack/stack-orchestrator/stacks/fixturenet-laconicd deploy init --output stage1-spec.yml
  ```

* Edit network in the spec file to map container ports to host ports:

  ```yml
  # stage1-spec.yml
  network:
    ports:
      laconicd:
        - '6060'
        - '127.0.0.1:26657:26657'
        - '26656:26656'
        - '127.0.0.1:9473:9473'
        - '127.0.0.1:9090:9090'
        - '127.0.0.1:1317:1317'
  ```

* Create a deployment from the spec file:

  ```bash
  laconic-so --stack ~/cerc/fixturenet-laconicd-stack/stack-orchestrator/stacks/fixturenet-laconicd deploy create --spec-file stage1-spec.yml --deployment-dir stage1-deployment
  ```

* Update the configuration:

  ```bash
  cat <<EOF > stage1-deployment/config.env
  AUTHORITY_AUCTION_ENABLED=true
  AUTHORITY_AUCTION_COMMITS_DURATION=3600
  AUTHORITY_AUCTION_REVEALS_DURATION=3600
  AUTHORITY_GRACE_PERIOD=7200

  MONIKER=LaconicStage1
  EOF
  ```

### Start

* Follow [stage0-to-stage1.md](./stage0-to-stage1.md) to halt stage0 deployment, generate the genesis file for stage1 and start the stage1 deployment

</details>

<details open>
  <summary>laconic-console</summary>

## laconic-console

* Stack: <https://git.vdb.to/cerc-io/testnet-laconicd-stack/src/branch/main/stack-orchestrator/stacks/laconic-console>

* Source repos:
  * <https://git.vdb.to/cerc-io/laconic-registry-cli>
  * <https://git.vdb.to/cerc-io/laconic-console>

* Target dir: `/srv/console/laconic-console-deployment`

* Clean up an existing deployment if required:

  ```bash
  cd /srv/console

  # Stop the deployment
  laconic-so deployment --dir laconic-console-deployment stop --delete-volumes

  # Remove the deployment dir
  sudo rm -rf laconic-console-deployment

  # Remove the existing spec file
  rm laconic-console-spec.yml
  ```

### Setup

* Clone the stack repo:

  ```bash
  laconic-so fetch-stack git.vdb.to/cerc-io/testnet-laconicd-stack --pull

  # This should clone the testnet-laconicd-stack repo at `/home/dev/cerc/testnet-laconicd-stack`
  ```

* Clone required repositories:

  ```bash
  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-console setup-repositories --pull

  # This should clone the laconic-registry-cli repo at `/home/dev/cerc/laconic-registry-cli` and the laconic-console repo at `/home/dev/cerc/laconic-console`
  ```

* Build the container images:

  ```bash
  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-console build-containers --force-rebuild

  # This should create the Docker images: "cerc/laconic-registry-cli", "cerc/webapp-base", "cerc/laconic-console-host"
  ```

### Deployment

* Create a spec file for the deployment:

  ```bash
  cd /srv/console

  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-console deploy init --output laconic-console-spec.yml
  ```

* Edit network in the spec file to map container ports to host ports:

  ```yml
  network:
    ports:
      console:
        - '127.0.0.1:4001:80'
  ```

* Create a deployment from the spec file:

  ```bash
  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-console deploy create --spec-file laconic-console-spec.yml --deployment-dir laconic-console-deployment
  ```

* Update the configuration:

  ```bash
  cat <<EOF > laconic-console-deployment/config.env
  # Laconicd (hosted) GQL endpoint
  LACONIC_HOSTED_ENDPOINT=https://laconicd.laconic.com
  EOF
  ```

### Start

* Start the deployment:

  ```bash
  laconic-so deployment --dir laconic-console-deployment start
  ```

* Check status:

  ```bash
  # List the container
  docker ps -a | grep console

  # Follow logs for the console container
  laconic-so deployment --dir laconic-console-deployment logs console -f
  ```

* The laconic console can now be viewed at <https://loro-console.laconic.com>

</details>

<details open>
  <summary>deploy-backend</summary>

## Deploy Backend

* Stack: <https://git.vdb.to/cerc-io/snowballtools-base-api-stack/src/branch/main/stack-orchestrator/stacks/snowballtools-base-backend>

* Source repo: <https://git.vdb.to/cerc-io/snowballtools-base>

* Target dir: `/srv/deploy-backend/backend-deployment`

* Clean up an existing deployment if required:

  ```bash
  cd /srv/deploy-backend

  # Stop the deployment
  laconic-so deployment --dir backend-deployment stop --delete-volumes

  # Remove the deployment dir
  sudo rm -rf backend-deployment

  # Remove the existing spec file
  rm backend-deployment-spec.yml
  ```

### Setup

* Clone the stack repo:

  ```bash
  laconic-so fetch-stack git.vdb.to/cerc-io/snowballtools-base-api-stack --pull

  # This should clone the snowballtools-base-api-stack repo at `/home/dev/cerc/snowballtools-base-api-stack`
  ```

* Clone required repositories:

  ```bash
  laconic-so --stack ~/cerc/snowballtools-base-api-stack/stack-orchestrator/stacks/snowballtools-base-backend setup-repositories --git-ssh --pull

  # This should clone the snowballtools-base repo at `/home/dev/cerc/snowballtools-base`
  ```

* Build the container images:

  ```bash
  laconic-so --stack ~/cerc/snowballtools-base-api-stack/stack-orchestrator/stacks/snowballtools-base-backend build-containers --force-rebuild

  # This should create the Docker images: "cerc/snowballtools-base-backend" and "cerc/snowballtools-base-backend-base"
  ```

* Push the images to the container registry (the registry itself is set up as part of the service provider setup):

  ```bash
  laconic-so deployment --dir backend-deployment push-images
  ```

### Deployment

* Create a spec file for the deployment:

  ```bash
  cd /srv/deploy-backend

  laconic-so --stack ~/cerc/snowballtools-base-api-stack/stack-orchestrator/stacks/snowballtools-base-backend deploy init --output backend-deployment-spec.yml --config SNOWBALL_BACKEND_CONFIG_FILE_PATH=/config/prod.toml
  ```

* Edit the spec file to deploy the stack to k8s:

  ```yml
  stack:
    /home/dev/cerc/snowballtools-base-api-stack/stack-orchestrator/stacks/snowballtools-base-backend
  deploy-to: k8s
  kube-config: /home/dev/.kube/config-vs-narwhal.yaml
  image-registry: container-registry.apps.vaasl.io/laconic-registry
  config:
    SNOWBALL_BACKEND_CONFIG_FILE_PATH: /config/prod.toml
  network:
    ports:
      deploy-backend:
      - '8000'
    http-proxy:
      - host-name: deploy-backend.apps.vaasl.io
        routes:
          - path: '/'
            proxy-to: deploy-backend:8000
  volumes:
    data:
  configmaps:
    config: ./configmaps/config
  ```

* Create a deployment from the spec file:

  ```bash
  laconic-so --stack ~/cerc/snowballtools-base-api-stack/stack-orchestrator/stacks/snowballtools-base-backend deploy create --deployment-dir backend-deployment --spec-file backend-deployment-spec.yml
  # This should create the deployment directory at `/srv/deploy-backend/backend-deployment`
  ```

* Modify the file `backend-deployment/kubeconfig.yml` if required:

  ```yml
  apiVersion: v1
  ...
  contexts:
    - context:
        cluster: ***
        user: ***
      name: default
  ...
  ```
  NOTE: `context.name` must be `default` to use with SO
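
  The context name can be confirmed with kubectl (assuming kubectl is available on the machine):

  ```bash
  # The NAME column should read "default"
  kubectl --kubeconfig backend-deployment/kubeconfig.yml config get-contexts
  ```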

* Fetch the config template file for the snowball backend:

  ```bash
  # Place in snowball deployment directory
  wget -O /srv/deploy-backend/backend-deployment/configmaps/config/prod.toml https://git.vdb.to/cerc-io/testnet-laconicd-stack/raw/branch/main/ops/configs/backend-deployment.toml
  ```

* Set up a private key and bond. If not already set up, execute the following commands in the directory containing `stage2-deployment`:

  * Create a new account and fetch the private key

    ```bash
    laconic-so deployment --dir stage2-deployment exec laconicd "laconicd keys add deploy"
    # - address: laconic1yr758d5vkg28text073vlzdjdgd7ud6w729tww
    # ...

    export deployKey=$(laconic-so deployment --dir stage2-deployment exec laconicd "echo y | laconicd keys export deploy --keyring-backend test --unarmored-hex --unsafe")
    ```

  * Send tokens to this account

    ```bash
    laconic-so deployment --dir stage2-deployment exec laconicd "laconicd tx bank send alice laconic1yr758d5vkg28text073vlzdjdgd7ud6w729tww 1000000000000000000alnt --from alice --fees 200000alnt -y"
    # Expected output includes:
    # txhash: 262D380259AC06024F87C909EB0BF7814CEC26CDF527B003C4C10631E1DB5893
    ```

  * Create a bond using this account

    ```bash
    laconic-so deployment --dir laconic-console-testnet2-deployment exec cli "laconic registry bond create --type alnt --quantity 1000000000000 --user-key $deployKey" | jq -r '.bondId'
    # 15e5bc37c40f67adc9ab498fa3fa50b090770f9bb56b27d71714a99138df9a22
    ```

  * Set bond id

    ```bash
    export bondId=15e5bc37c40f67adc9ab498fa3fa50b090770f9bb56b27d71714a99138df9a22
    ```
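
  * Optionally verify the bond (the `bond get` command pattern is assumed from the registry CLI's bond commands):

    ```bash
    laconic-so deployment --dir laconic-console-testnet2-deployment exec cli "laconic registry bond get --id $bondId"
    ```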

* Register the authority. Execute the following commands in the directory containing `laconic-console-testnet2-deployment`:

  * Reserve an authority

    ```bash
    laconic-so deployment --dir laconic-console-testnet2-deployment exec cli "laconic registry authority reserve deploy-vaasl --txKey $deployKey"
    ```

  * Obtain the auction ID

    ```bash
    laconic-so deployment --dir laconic-console-testnet2-deployment exec cli "laconic registry authority whois deploy-vaasl --txKey $deployKey"
    # "auction": {
    #   "id": "73e0b082a198c396009ce748804a9060c674a10045365d262c1584f99d2771c1"
    ```

  * Commit a bid using the auction ID. A reveal file will be generated

    ```bash
    laconic-so deployment --dir laconic-console-testnet2-deployment exec cli "laconic registry auction bid commit 73e0b082a198c396009ce748804a9060c674a10045365d262c1584f99d2771c1 5000000 alnt --chain-id laconic-testnet-2 --txKey $deployKey"

    # {"reveal_file":"/app/out/bafyreiewi4osqyvrnljwwcb36fn6sr5iidfpuznqkz52gxc5ztt3jt4zmy.json"}
    ```

  * Reveal a bid using the auction ID and the reveal file generated from the bid commit

    ```bash
    laconic-so deployment --dir laconic-console-testnet2-deployment exec cli "laconic registry auction bid reveal 73e0b082a198c396009ce748804a9060c674a10045365d262c1584f99d2771c1 /app/out/bafyreiewi4osqyvrnljwwcb36fn6sr5iidfpuznqkz52gxc5ztt3jt4zmy.json --chain-id laconic-testnet-2 --txKey $deployKey"
    # {"success": true}
    ```

  * Verify the status after the auction ends; it should show a completed status and a winner

    ```bash
    laconic-so deployment --dir laconic-console-testnet2-deployment exec cli "laconic registry auction get 73e0b082a198c396009ce748804a9060c674a10045365d262c1584f99d2771c1 --txKey $deployKey"
    ```

  * Set the authority using a bond ID.

    ```bash
    laconic-so deployment --dir laconic-console-testnet2-deployment exec cli "laconic registry authority bond set deploy-vaasl $bondId --txKey $deployKey"
    # {"success": true}
    ```

  * Verify the authority has been registered.

    ```bash
    laconic-so deployment --dir laconic-console-testnet2-deployment exec cli "laconic registry authority whois deploy-vaasl --txKey $deployKey"
    ```


* Update `/srv/deploy-backend/backend-deployment/configmaps/config/prod.toml`. Replace `<redacted>` with your credentials. Use the `userKey`, `bondId` and `authority` that you set up above

### Start

* Start the deployment:

  ```bash
  laconic-so deployment --dir backend-deployment start
  ```

* Check status:

  ```bash
  # Follow logs for the backend container
  laconic-so deployment --dir backend-deployment logs snowballtools-base-backend -f
  ```

</details>

<details open>
  <summary>deploy-frontend</summary>

## Deploy Frontend

* Source repo: <https://git.vdb.to/cerc-io/snowballtools-base>

### Prerequisites

* Node.js

* Yarn

### Setup

* On your local machine, clone the `snowballtools-base` repo:

  ```bash
  git clone git@git.vdb.to:cerc-io/snowballtools-base.git
  ```

* Install dependencies:

  ```bash
  cd snowballtools-base
  yarn install
  ```

* In the deployer package, create the required env file:

  ```bash
  cd packages/deployer
  cp .env.example .env
  ```

  Set the required variables:

  ```bash
  REGISTRY_BOND_ID=<bond-id>
  DEPLOYER_LRN=lrn://vaasl-provider/deployers/webapp-deployer-api.apps.vaasl.io
  AUTHORITY=vaasl
  ```
  Note: The bond ID should be the one associated with the `vaasl` authority

* Update the required laconic config. You can use the same `userKey` and `bondId` used for deploying the backend:

  ```bash
  # Replace <user-pk> and <bond-id>
  cat <<EOF > config.yml
  services:
    registry:
      rpcEndpoint: https://laconicd-sapo.laconic.com
      gqlEndpoint: https://laconicd-sapo.laconic.com/api
      userKey: <user-pk>
      bondId: <bond-id>
      chainId: laconic-testnet-2
      gasPrice: 0.001alnt
  EOF
  ```
  Note: The `userKey` account should own the authority `vaasl`

### Run

* Run frontend deployment script:

  ```bash
  ./deploy-frontend.sh
  ```

  Follow deployment logs on the deployer UI

</details>

## Domains / Port Mappings

```bash
# Machine 1
https://laconicd.laconic.com        -> 26657
https://laconicd.laconic.com/api    -> 9473/api
https://faucet.laconic.com          -> 4000
https://loro-signup.laconic.com     -> 3000
https://wallet.laconic.com          -> 5000
https://loro-console.laconic.com    -> 4001

# Machine 2
https://sepolia.laconic.com           -> 8545
wss://sepolia.laconic.com             -> 8546
https://fixturenet-eth.laconic.com    -> 7545
wss://fixturenet-eth.laconic.com      -> 7546

bridge.laconic.com
Open ports:
3005 (L1 side)
3006 (L2 side)
```