Compare commits


9 Commits

Author SHA1 Message Date
Neeraj
fcfda24172 Start nodes and bridge from deployed contract's block number 2024-10-16 15:15:07 +05:30
f597e5dfc7 Add result with endpoints to service-provider-setup README (#11)
Part of [Service Provider setup](https://www.notion.so/Service-provider-setup-a09e2207e1f34f3a847f7ce9713b7ac5)
- Add tasks to create DNS records for daemon
- Add DNS resolution check with retries for daemon URL

Co-authored-by: Adw8 <adwaitgharpure@gmail.com>
Reviewed-on: cerc-io/testnet-ops#11
2024-10-08 12:41:36 +00:00
18df60a291 Add ansible playbook to automate service provider setup (#10)
Part of [Service Provider setup](https://www.notion.so/Service-provider-setup-a09e2207e1f34f3a847f7ce9713b7ac5)
- Added ansible playbooks for:
  - Adding a new user with passwordless sudo
  - Configuring DNS records
  - Setting up the system with required packages and gpg key
  - Deploying k8s
  - Setting up container registry
  - Setting up laconicd and laconic-console
  - Setting up and starting webapp-deployer-api and webapp-deployer-ui
- TODOs:
  - Mount gpg keys in webapp-deployer-api container

Co-authored-by: Adw8 <adwaitgharpure@gmail.com>
Reviewed-on: cerc-io/testnet-ops#10
2024-10-01 12:17:10 +00:00
631a859080 Remove use of L2 chain & contracts for running nitro nodes with playbook (#9)
Part of [Create bridge channel in go-nitro](https://www.notion.so/Create-bridge-channel-in-go-nitro-22ce80a0d8ae4edb80020a8f250ea270)

Co-authored-by: Neeraj <neeraj.rtly@gmail.com>
Reviewed-on: cerc-io/testnet-ops#9
2024-10-01 03:47:46 +00:00
40428cdaa3 Add Ansible task to fetch nitro node config file (#8)
Part of [Automate testnet nitro deployments using Ansible](https://www.notion.so/Automate-testnet-nitro-deployments-using-Ansible-0d15579430204b8daba9a8aa31e07568)
- Added ansible tasks to:
  - Install yq
  - Download nitro node config file
- Modified playbooks to handle prompts when cloning repositories

Co-authored-by: Adw8 <adwaitgharpure@gmail.com>
Reviewed-on: cerc-io/testnet-ops#8
2024-09-17 13:55:18 +00:00
bcbd175f00 Update playbook for Nitro bridge setup (#7)
Part of [Automate testnet nitro deployments using Ansible](https://www.notion.so/Automate-testnet-nitro-deployments-using-Ansible-0d15579430204b8daba9a8aa31e07568)

Co-authored-by: Neeraj <neeraj.rtly@gmail.com>
Reviewed-on: cerc-io/testnet-ops#7
Co-authored-by: Prathamesh Musale <prathamesh@noreply.git.vdb.to>
Co-committed-by: Prathamesh Musale <prathamesh@noreply.git.vdb.to>
2024-09-16 06:49:46 +00:00
cd314d2fdf Add Ansible playbooks to setup Docker and stack orchestrator (#6)
Part of [Automate testnet nitro deployments using Ansible](https://www.notion.so/Automate-testnet-nitro-deployments-using-Ansible-0d15579430204b8daba9a8aa31e07568)

Add Ansible playbooks to:
- Setup Docker if not present
- Setup Stack Orchestrator if not present

Co-authored-by: Adw8 <adwaitgharpure@gmail.com>
Co-authored-by: Neeraj <neeraj.rtly@gmail.com>
Reviewed-on: cerc-io/testnet-ops#6
Co-authored-by: Prathamesh Musale <prathamesh@noreply.git.vdb.to>
Co-committed-by: Prathamesh Musale <prathamesh@noreply.git.vdb.to>
2024-09-12 04:43:53 +00:00
23f3a4c8ed Add instructions to run Ansible playbooks on remote machines (#5)
Part of [Automate testnet nitro deployments using Ansible](https://www.notion.so/Automate-testnet-nitro-deployments-using-Ansible-0d15579430204b8daba9a8aa31e07568)

Co-authored-by: Adw8 <adwaitgharpure@gmail.com>
Reviewed-on: cerc-io/testnet-ops#5
Co-authored-by: Prathamesh Musale <prathamesh@noreply.git.vdb.to>
Co-committed-by: Prathamesh Musale <prathamesh@noreply.git.vdb.to>
2024-09-09 13:37:41 +00:00
88e0b48540 Add ansible playbook to setup and run Nitro bridge (#4)
Part of [Automate testnet nitro deployments using Ansible](https://www.notion.so/Automate-testnet-nitro-deployments-using-Ansible-0d15579430204b8daba9a8aa31e07568)
Implement Ansible playbook to:
  - Deploy nitro contracts on L1
  - Deploy bridge contract on L2
  - Setup and run nitro bridge

Co-authored-by: Adw8 <adwaitgharpure@gmail.com>
Reviewed-on: cerc-io/testnet-ops#4
Co-authored-by: Prathamesh Musale <prathamesh@noreply.git.vdb.to>
Co-committed-by: Prathamesh Musale <prathamesh@noreply.git.vdb.to>
2024-09-09 06:11:31 +00:00
67 changed files with 2301 additions and 245 deletions

.gitignore (vendored)

@@ -1,3 +1 @@
-nitro-nodes-setup/out/
-nitro-nodes-setup/nitro-vars.yml
-l2-setup/out
+hosts.ini

README.md

@@ -8,7 +8,7 @@
 - Set Locale Encoding to `UTF-8`
-Ansible requires the locale encoding to be `UTF-8`. You can either use the `LANG` prefix when running Ansible commands or set the system-wide locale.
+Ansible requires the locale encoding to be `UTF-8`. You can either use the `LANG` prefix when running Ansible commands or set the system-wide locale
 - Option 1: Use `LANG` Prefix in Commands
@@ -32,11 +32,15 @@
 LANG="en_US.UTF-8"
 ```
-- Reboot your system or log out and log back in to apply the changes.
+- Reboot your system or log out and log back in to apply the changes
 - Reference: <https://udhayakumarc.medium.com/error-ansible-requires-the-locale-encoding-to-be-utf-8-detected-iso8859-1-6da808387f7d>
 ## Playbooks
+- [stack-orchestrator-setup](./stack-orchestrator-setup/README.md)
 - [l2-setup](./l2-setup/README.md)
 - [nitro-node-setup](./nitro-nodes-setup/README.md)
+- [nitro-bridge-setup](./nitro-bridge-setup/README.md)
+- [nitro-contracts-setup](./nitro-contracts-setup/README.md)
+- [service-provider-setup](./service-provider-setup/README.md)

hosts.example.ini (new file)

@@ -0,0 +1,2 @@
[deployment_host]
<host_name> ansible_host=<target_ip> ansible_user=<ssh_user> ansible_ssh_common_args='-o ForwardAgent=yes'
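For reference, a filled-in inventory entry might look like the following (the alias, IP address, and username below are illustrative values, not taken from the repo):

```ini
[nitro_host]
nitro-1 ansible_host=203.0.113.10 ansible_user=ubuntu ansible_ssh_common_args='-o ForwardAgent=yes'
```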

l2-setup/README.md (deleted)

@@ -1,77 +0,0 @@
# l2-setup
## Setup Ansible
To get started, follow the [installation](../README.md#installation) guide to setup ansible on your machine
## Setup and Run Optimism
The following commands have to be executed in [`l2-setup`](./) directory
- Edit [`l2-vars.yml`](./l2-vars.yml) with the required values
```bash
# L1 chain ID
l1_chain_id: ""
# L1 RPC endpoint
l1_rpc: ""
# L1 RPC endpoint host or IP address
l1_host: ""
# L1 RPC endpoint port number
l1_port: ""
# L1 Beacon endpoint
l1_beacon: ""
# Address of the funded account on L1
# Used for optimism contracts deployment
l1_address: ""
# Private key of the funded account on L1
l1_priv_key: ""
```
- To setup and run L2, execute the `run-optimism.yml` Ansible playbook by running the following command.
NOTE: By default, deployments are created in the `l2-setup/out` directory. To change this location, update the `l2_directory` variable in the [setup-vars.yml](./setup-vars.yml) file.
```bash
LANG=en_US.utf8 ansible-playbook -i localhost, --connection=local run-optimism.yml --extra-vars='{ "target_host": "localhost"}' -kK --user $USER
```
- For skipping container build, set `"skip_container_build" : true` in the `--extra-vars` parameter:
```bash
LANG=en_US.utf8 ansible-playbook -i localhost, --connection=local run-optimism.yml --extra-vars='{"target_host" : "localhost", "skip_container_build": true}' -kK --user $USER
```
- To run using existing contracts deployment
- Update `artifact_path` in [`setup-vars.yml`](./setup-vars.yml) file with path to data directory of the existing deployment
- Run the ansible playbook with `"existing_contracts_deployment": true` in the `--extra-vars` parameter:
```bash
LANG=en_US.utf8 ansible-playbook -i localhost, --connection=local run-optimism.yml --extra-vars='{"target_host" : "localhost", "existing_contracts_deployment": true}' -kK --user $USER
```
## Check Deployment Status
- Run the following command in the directory where the optimism-deployment is created
- Follow optimism contracts deployment logs:
```bash
laconic-so deployment --dir optimism-deployment logs -f fixturenet-optimism-contracts
```
- Check L2 logs:
```bash
laconic-so deployment --dir optimism-deployment logs -f op-geth
# Ensure new blocks are getting created
```

l2-setup/l2-vars.yml (deleted)

@@ -1,9 +0,0 @@
l1_chain_id: ""
l1_rpc: ""
l1_host: ""
l1_port: ""
l1_beacon: ""
l1_address: ""
l1_priv_key: ""
proposer_amount: "0.2ether"
batcher_amount: "0.1ether"

l2-setup/run-optimism.yml (deleted)

@@ -1,89 +0,0 @@
- name: Setup L2 on host
  hosts: "{{ target_host }}"
  vars_files:
    - setup-vars.yml
    - l2-vars.yml
  environment:
    PATH: "{{ ansible_env.PATH }}:/home/{{ansible_user}}/bin"
  tasks:
    - name: Create directory for L2
      file:
        path: "{{ l2_directory }}"
        state: directory

    - name: Change owner of l2-directory
      file:
        path: "{{ l2_directory }}"
        owner: "{{ansible_user}}"
        group: "{{ansible_user}}"
        state: directory
        recurse: yes
      become: yes

    - name: Clone fixturenet-optimism-stack
      command: laconic-so fetch-stack git.vdb.to/cerc-io/fixturenet-optimism-stack --pull
      ignore_errors: yes

    - name: Clone required repositories for fixturenet-optimism
      command: laconic-so --stack ~/cerc/fixturenet-optimism-stack/stack/fixturenet-optimism setup-repositories --pull

    - name: Build container images for L2
      command: laconic-so --stack ~/cerc/fixturenet-optimism-stack/stack/fixturenet-optimism build-containers --force-rebuild
      when: not skip_container_build

    - name: Generate spec file for L2 deployment
      template:
        src: "./templates/specs/l2-spec.yml.j2"
        dest: "{{ l2_directory }}/optimism-spec.yml"

    - name: Check if the deployment directory exists for L2
      stat:
        path: "{{ l2_directory }}/optimism-deployment"
      register: l2_deployment_dir

    - name: Create a deployment from the spec file for L2
      command: laconic-so --stack ~/cerc/fixturenet-optimism-stack/stack/fixturenet-optimism deploy create --spec-file optimism-spec.yml --deployment-dir optimism-deployment
      args:
        chdir: "{{ l2_directory }}"
      when: not l2_deployment_dir.stat.exists

    - name: Generate config.env for L2 deployment
      template:
        src: "./templates/configs/l2-config.env.j2"
        dest: "{{ l2_directory }}/optimism-deployment/config.env"

    - name: Copy deployed contract addresses and configuration files
      block:
        - name: Copy l1 deployment file
          copy:
            src: "{{ artifact_path }}/l1_deployment/{{ l1_chain_id }}-deploy.json"
            dest: "{{ l2_directory }}/optimism-deployment/data/l1_deployment"
            remote_src: "{{ target_host != 'localhost' }}"
        - name: Copy l2 configuration file
          copy:
            src: "{{ artifact_path }}/l2_config/{{ l1_chain_id }}.json"
            dest: "{{ l2_directory }}/optimism-deployment/data/l2_config"
            remote_src: "{{ target_host != 'localhost' }}"
        - name: Copy allocs-l2 file
          copy:
            src: "{{ artifact_path }}/l2_config/allocs-l2.json"
            dest: "{{ l2_directory }}/optimism-deployment/data/l2_config"
            remote_src: "{{ target_host != 'localhost' }}"
        - name: Copy l2 accounts file
          copy:
            src: "{{ artifact_path }}/l2_accounts/accounts.json"
            dest: "{{ l2_directory }}/optimism-deployment/data/l2_accounts"
            remote_src: "{{ target_host != 'localhost' }}"
      when: existing_contracts_deployment

    - name: Start L2-deployment
      command: laconic-so deployment --dir optimism-deployment start
      args:
        chdir: "{{ l2_directory }}"

l2-setup/setup-vars.yml (deleted)

@@ -1,4 +0,0 @@
skip_container_build: false
l2_directory: "./out"
existing_contracts_deployment: false
artifact_path: ""

l2-setup/templates/configs/l2-config.env.j2 (deleted)

@@ -1,9 +0,0 @@
CERC_L1_CHAIN_ID={{ l1_chain_id }}
CERC_L1_RPC={{ l1_rpc }}
CERC_L1_HOST={{ l1_host }}
CERC_L1_PORT={{ l1_port }}
CERC_L1_BEACON={{ l1_beacon }}
CERC_L1_ADDRESS={{ l1_address }}
CERC_L1_PRIV_KEY={{ l1_priv_key }}
CERC_PROPOSER_AMOUNT={{ proposer_amount }}
CERC_BATCHER_AMOUNT={{ batcher_amount }}

l2-setup/templates/specs/l2-spec.yml.j2 (deleted)

@@ -1,18 +0,0 @@
stack: /home/{{ansible_user}}/cerc/fixturenet-optimism-stack/stack/fixturenet-optimism
deploy-to: compose
network:
  ports:
    op-geth:
      - '9545:8545'
      - '9546:8546'
    op-node:
      - '8547'
    op-batcher:
      - '8548'
    op-proposer:
      - '8560'
volumes:
  l1_deployment: ./data/l1_deployment
  l2_accounts: ./data/l2_accounts
  l2_config: ./data/l2_config
  l2_geth_data: ./data/l2_geth_data

nitro-bridge-setup/.gitignore (vendored, new file)

@@ -0,0 +1,3 @@
out
bridge-vars.yml
hosts.ini

nitro-bridge-setup/README.md (new file)

@@ -0,0 +1,121 @@
# nitro-bridge-setup
## Setup Ansible
To get started, follow the [installation](../README.md#installation) guide to setup ansible on your machine
## Setup
The following commands have to be executed in the [`nitro-bridge-setup`](./) directory:
- Copy the `bridge-vars.example.yml` vars file:
```bash
cp bridge-vars.example.yml bridge-vars.yml
```
- Edit [`bridge-vars.yml`](./bridge-vars.yml) with the required values:
```yaml
# L1 WS endpoint
nitro_chain_url: ""
# Private key for the bridge's nitro address
nitro_sc_pk: ""
# Private key for a funded account on L1
# This account should have tokens for funding Nitro channels
nitro_chain_pk: ""
# Specifies the block number to start looking for nitro adjudicator events
nitro_chain_start_block: ""
# Custom L2 token to be deployed
token_name: "LaconicNetworkToken"
token_symbol: "LNT"
initial_token_supply: "129600"
# Addresses of the deployed nitro contracts
na_address: ""
vpa_address: ""
ca_address: ""
```
## Run Nitro Bridge
### On Local Host
- To setup and run nitro bridge locally, execute the `run-nitro-bridge.yml` Ansible playbook:
```bash
LANG=en_US.utf8 ansible-playbook run-nitro-bridge.yml --extra-vars='{ "target_host": "localhost"}' --user $USER -kK
```
NOTE: By default, deployments are created in an `out` directory. To change this location, update the `nitro_directory` variable in the [setup-vars.yml](./setup-vars.yml) file
- For skipping container build, set `"skip_container_build" : true` in the `--extra-vars` parameter:
```bash
LANG=en_US.utf8 ansible-playbook run-nitro-bridge.yml --extra-vars='{"target_host" : "localhost", "skip_container_build": true}' --user $USER -kK
```
### On Remote Host
To run the playbook on a remote host:
- Create a new `hosts.ini` file:
```bash
cp ../hosts.example.ini hosts.ini
```
- Edit the [`hosts.ini`](./hosts.ini) file to run the playbook on a remote machine:
```ini
[<deployment_host>]
<host_name> ansible_host=<target_ip> ansible_user=<ssh_user> ansible_ssh_common_args='-o ForwardAgent=yes'
```
- Replace `<deployment_host>` with `nitro_host`
- Replace `<host_name>` with the alias of your choice
- Replace `<target_ip>` with the IP address or hostname of the target machine
- Replace `<ssh_user>` with the SSH username (e.g., dev, ubuntu)
- Verify that you are able to connect to the host using the following command
```bash
ansible all -m ping -i hosts.ini -k
# Expected output:
# <host_name> | SUCCESS => {
# "ansible_facts": {
# "discovered_interpreter_python": "/usr/bin/python3.10"
# },
# "changed": false,
# "ping": "pong"
# }
```
- Execute the `run-nitro-bridge.yml` Ansible playbook for remote deployment:
```bash
LANG=en_US.utf8 ansible-playbook -i hosts.ini run-nitro-bridge.yml --extra-vars='{ "target_host": "nitro_host"}' --user $USER -kK
```
- For skipping container build, run with `"skip_container_build" : true` in the `--extra-vars` parameter:
```bash
LANG=en_US.utf8 ansible-playbook -i hosts.ini run-nitro-bridge.yml --extra-vars='{ "target_host": "nitro_host", "skip_container_build": true }' --user $USER -kK
```
## Check Deployment Status
- Run the following command in the directory where the bridge-deployment is created:
- Check logs for deployments:
```bash
# Check the bridge deployment logs, ensure that the node is running
laconic-so deployment --dir bridge-deployment logs nitro-bridge -f
```

nitro-bridge-setup/bridge-vars.example.yml (new file)

@@ -0,0 +1,7 @@
nitro_chain_url: ""
nitro_chain_pk: ""
nitro_chain_start_block: 0
nitro_sc_pk: ""
na_address: ""
vpa_address: ""
ca_address: ""

nitro-bridge-setup/run-nitro-bridge.yml (new file)

@@ -0,0 +1,65 @@
- name: Setup go-nitro on host
  hosts: "{{ target_host }}"
  vars_files:
    - setup-vars.yml
    - bridge-vars.yml
  environment:
    PATH: "{{ ansible_env.PATH }}:/home/{{ansible_user}}/bin"
  tasks:
    - name: Create directory for nitro bridge deployment
      file:
        path: "{{ nitro_directory }}"
        state: directory

    - name: Change owner of nitro-directory
      file:
        path: "{{ nitro_directory }}"
        owner: "{{ansible_user}}"
        group: "{{ansible_user}}"
        state: directory
        recurse: yes

    - name: Clone go-nitro stack repo
      expect:
        command: laconic-so fetch-stack git.vdb.to/cerc-io/nitro-stack --git-ssh --pull
        responses:
          "Are you sure you want to continue connecting \\(yes/no/\\[fingerprint\\]\\)\\?": "yes"
        timeout: 300
      ignore_errors: yes

    - name: Clone repositories required for nitro-stack
      expect:
        command: laconic-so --stack {{ ansible_env.HOME }}/cerc/nitro-stack/stack-orchestrator/stacks/bridge setup-repositories --git-ssh --pull
        responses:
          "Are you sure you want to continue connecting \\(yes/no/\\[fingerprint\\]\\)\\?": "yes"
        timeout: 300
      ignore_errors: yes

    - name: Build containers
      command: laconic-so --stack ~/cerc/nitro-stack/stack-orchestrator/stacks/bridge build-containers --force-rebuild
      when: not skip_container_build

    - name: Check if deployment exists for bridge node
      stat:
        path: "{{ nitro_directory }}/bridge-deployment"
      register: bridge_deployment

    - name: Generate spec file for bridge deployment
      template:
        src: "./templates/specs/bridge-nitro-spec.yml.j2"
        dest: "{{ nitro_directory }}/bridge-nitro-spec.yml"
      when: not bridge_deployment.stat.exists

    - name: Create a deployment for the bridge node
      command: laconic-so --stack ~/cerc/nitro-stack/stack-orchestrator/stacks/bridge deploy create --spec-file bridge-nitro-spec.yml --deployment-dir bridge-deployment
      args:
        chdir: "{{ nitro_directory }}"
      when: not bridge_deployment.stat.exists

    - name: Start the nitro bridge
      command: laconic-so deployment --dir bridge-deployment start
      args:
        chdir: "{{ nitro_directory }}"

nitro-bridge-setup/setup-vars.yml (new file)

@@ -0,0 +1,3 @@
target_host: "localhost"
nitro_directory: out
skip_container_build: false

nitro-bridge-setup/templates/specs/bridge-nitro-spec.yml.j2 (new file)

@@ -0,0 +1,21 @@
stack: /home/{{ ansible_user }}/cerc/nitro-stack/stack-orchestrator/stacks/bridge
deploy-to: compose
config:
  NITRO_CHAIN_URL: {{ nitro_chain_url }}
  NITRO_CHAIN_PK: {{ nitro_chain_pk }}
  NITRO_CHAIN_START_BLOCK: {{ nitro_chain_start_block }}
  NITRO_SC_PK: {{ nitro_sc_pk }}
  NA_ADDRESS: "{{ na_address }}"
  VPA_ADDRESS: "{{ vpa_address }}"
  CA_ADDRESS: "{{ ca_address }}"
network:
  ports:
    nitro-bridge:
      - 0.0.0.0:3005:3005
      - 0.0.0.0:3006:3006
      - 0.0.0.0:4006:4006
volumes:
  nitro_bridge_data: ./data/nitro_bridge_data
  nitro_bridge_tls: ./data/nitro_bridge_tls
  nitro_node_caroot: ./data/nitro_node_caroot
  nitro_deployment: ./data/nitro_deployment

nitro-contracts-setup/.gitignore (vendored, new file)

@@ -0,0 +1,3 @@
out
contract-vars.yml
hosts.ini

nitro-contracts-setup/README.md (new file)

@@ -0,0 +1,122 @@
# nitro-contracts-setup
## Setup Ansible
To get started, follow the [installation](../README.md#installation) guide to setup ansible on your machine
## Setup
The following commands have to be executed in the [`nitro-contracts-setup`](./) directory:
- Copy the `contract-vars.example.yml` vars file
```bash
cp contract-vars.example.yml contract-vars.yml
```
- Edit [`contract-vars.yml`](./contract-vars.yml) and fill in the following values
```bash
# L1 RPC endpoint
geth_url: ""
# L1 chain ID
geth_chain_id: ""
# Private key for a funded account on L1 to use for contracts deployment on L1
geth_deployer_pk: ""
# Custom L1 token to be deployed
token_name: "LaconicNetworkToken"
token_symbol: "LNT"
initial_token_supply: "129600"
```
## Deploy Contracts
### On Local Host
- To deploy nitro contracts locally, execute the `deploy-contracts.yml` Ansible playbook:
```bash
LANG=en_US.utf8 ansible-playbook deploy-contracts.yml --extra-vars='{ "target_host": "localhost"}' --user $USER -kK
```
NOTE: By default, deployments are created in an `out` directory. To change this location, update the `nitro_directory` variable in the [setup-vars.yml](./setup-vars.yml) file
- For skipping container build, set `"skip_container_build" : true` in the `--extra-vars` parameter:
```bash
LANG=en_US.utf8 ansible-playbook deploy-contracts.yml --extra-vars='{"target_host" : "localhost", "skip_container_build": true}' --user $USER -kK
```
### On Remote Host
To run the playbook on a remote host:
- Create a new `hosts.ini` file:
```bash
cp ../hosts.example.ini hosts.ini
```
- Edit the [`hosts.ini`](./hosts.ini) file to run the playbook on a remote machine:
```ini
[<deployment_host>]
<host_name> ansible_host=<target_ip> ansible_user=<ssh_user> ansible_ssh_common_args='-o ForwardAgent=yes'
```
- Replace `<deployment_host>` with `nitro_host`
- Replace `<host_name>` with the alias of your choice
- Replace `<target_ip>` with the IP address or hostname of the target machine
- Replace `<ssh_user>` with the SSH username (e.g., dev, ubuntu)
- Verify that you are able to connect to the host using the following command
```bash
ansible all -m ping -i hosts.ini -k
# Expected output:
# <host_name> | SUCCESS => {
# "ansible_facts": {
# "discovered_interpreter_python": "/usr/bin/python3.10"
# },
# "changed": false,
# "ping": "pong"
# }
```
- Execute the `deploy-contracts.yml` Ansible playbook for remote deployment:
```bash
LANG=en_US.utf8 ansible-playbook -i hosts.ini deploy-contracts.yml --extra-vars='{ "target_host": "nitro_host"}' --user $USER -kK
```
- For skipping container build, run with `"skip_container_build" : true` in the `--extra-vars` parameter:
```bash
LANG=en_US.utf8 ansible-playbook -i hosts.ini deploy-contracts.yml --extra-vars='{ "target_host": "nitro_host", "skip_container_build": true }' --user $USER -kK
```
## Check Deployment Status
- Run the following command in the directory where the nitro-contracts-deployment is created:
- Check logs for deployments:
```bash
# Check the L2 nitro contract deployment logs
laconic-so deployment --dir nitro-contracts-deployment logs l2-nitro-contracts -f
```
## Get Contract Addresses
- Run the following commands in the directory where the deployments are created:
- Get addresses of L1 nitro contracts:
```bash
laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "cat /app/deployment/nitro-addresses.json"
```
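Judging from the `jq` queries in `deploy-contracts.yml` below, `nitro-addresses.json` appears to be keyed by chain ID, with each entry listing the deployed contracts; a sketch of the assumed shape (addresses are placeholders):

```json
{
  "<geth_chain_id>": [
    {
      "contracts": {
        "NitroAdjudicator": { "address": "0x..." },
        "VirtualPaymentApp": { "address": "0x..." },
        "ConsensusApp": { "address": "0x..." },
        "Token": { "address": "0x..." }
      }
    }
  ]
}
```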

nitro-contracts-setup/contract-vars.example.yml (new file)

@@ -0,0 +1,6 @@
geth_url: ""
geth_chain_id: ""
geth_deployer_pk: ""
token_name: ""
token_symbol: ""
initial_token_supply: ""

nitro-contracts-setup/deploy-contracts.yml (new file)

@@ -0,0 +1,110 @@
- name: Deploy nitro contracts
  hosts: "{{ target_host }}"
  vars_files:
    - setup-vars.yml
    - contract-vars.yml
  environment:
    PATH: "{{ ansible_env.PATH }}:/home/{{ansible_user}}/bin"
  tasks:
    - name: Create directory for nitro bridge deployment
      file:
        path: "{{ nitro_directory }}"
        state: directory

    - name: Change owner of nitro-directory
      file:
        path: "{{ nitro_directory }}"
        owner: "{{ansible_user}}"
        group: "{{ansible_user}}"
        state: directory
        recurse: yes

    - name: Clone go-nitro stack repo
      expect:
        command: laconic-so fetch-stack git.vdb.to/cerc-io/nitro-stack --git-ssh --pull
        responses:
          "Are you sure you want to continue connecting \\(yes/no/\\[fingerprint\\]\\)\\?": "yes"
        timeout: 300
      ignore_errors: yes

    - name: Clone repositories required for nitro-stack
      expect:
        command: laconic-so --stack ~/cerc/nitro-stack/stack-orchestrator/stacks/bridge setup-repositories --git-ssh --pull
        responses:
          "Are you sure you want to continue connecting \\(yes/no/\\[fingerprint\\]\\)\\?": "yes"
        timeout: 300
      ignore_errors: yes

    - name: Build containers
      command: laconic-so --stack ~/cerc/nitro-stack/stack-orchestrator/stacks/bridge build-containers --force-rebuild
      when: not skip_container_build

    - name: Generate spec file for nitro contracts deployment
      template:
        src: "./templates/specs/nitro-contracts-spec.yml.j2"
        dest: "{{ nitro_directory }}/nitro-contracts-spec.yml"

    - name: Check if deployment exists for nitro contracts
      stat:
        path: "{{ nitro_directory }}/nitro-contracts-deployment"
      register: nitro_contracts_deployment

    - name: Create a deployment for nitro contracts
      command: laconic-so --stack ~/cerc/nitro-stack/stack-orchestrator/stacks/nitro-contracts deploy create --spec-file nitro-contracts-spec.yml --deployment-dir nitro-contracts-deployment
      args:
        chdir: "{{ nitro_directory }}"
      when: not nitro_contracts_deployment.stat.exists

    - name: Start deployment for nitro-contracts
      command: laconic-so deployment --dir nitro-contracts-deployment start
      args:
        chdir: "{{ nitro_directory }}"

    - name: Wait for the contracts to be deployed
      wait_for:
        path: "{{ nitro_directory }}/nitro-contracts-deployment/data/nitro_deployment/nitro-addresses.json"
        timeout: 300

    - name: Export NA_ADDRESS
      shell: laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "jq -r '.\"{{ geth_chain_id }}\"[0].contracts.NitroAdjudicator.address' /app/deployment/nitro-addresses.json"
      args:
        chdir: "{{ nitro_directory }}"
      register: na_address

    - debug:
        msg: "NA_ADDRESS: {{ na_address.stdout }}"

    - name: Export CA_ADDRESS
      shell: laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "jq -r '.\"{{ geth_chain_id }}\"[0].contracts.ConsensusApp.address' /app/deployment/nitro-addresses.json"
      args:
        chdir: "{{ nitro_directory }}"
      register: ca_address

    - debug:
        msg: "CA_ADDRESS: {{ ca_address.stdout }}"

    - name: Export VPA_ADDRESS
      shell: laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "jq -r '.\"{{ geth_chain_id }}\"[0].contracts.VirtualPaymentApp.address' /app/deployment/nitro-addresses.json"
      args:
        chdir: "{{ nitro_directory }}"
      register: vpa_address

    - debug:
        msg: "VPA_ADDRESS: {{ vpa_address.stdout }}"

    - name: Export ASSET_ADDRESS
      shell: laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "jq -r '.\"{{ geth_chain_id }}\"[0].contracts.Token.address' /app/deployment/nitro-addresses.json"
      args:
        chdir: "{{ nitro_directory }}"
      register: asset_address

    - debug:
        msg: "ASSET_ADDRESS: {{ asset_address.stdout }}"

    - name: Export NITRO_CHAIN_START_BLOCK
      shell: laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "jq '.receipt.blockNumber' /app/deployment/hardhat-deployments/geth/NitroAdjudicator.json"
      args:
        chdir: "{{ nitro_directory }}"
      register: nitro_chain_start_block

    - debug:
        msg: "NITRO_CHAIN_START_BLOCK: {{ nitro_chain_start_block.stdout }}"

nitro-contracts-setup/setup-vars.yml (new file)

@@ -0,0 +1,3 @@
target_host: "localhost"
nitro_directory: out
skip_container_build: false

nitro-contracts-setup/templates/specs/nitro-contracts-spec.yml.j2 (new file)

@@ -0,0 +1,13 @@
stack: /home/{{ ansible_user }}/cerc/nitro-stack/stack-orchestrator/stacks/nitro-contracts
deploy-to: compose
config:
  GETH_URL: {{ geth_url }}
  GETH_CHAIN_ID: {{ geth_chain_id }}
  GETH_DEPLOYER_PK: {{ geth_deployer_pk }}
  TOKEN_NAME: {{ token_name }}
  TOKEN_SYMBOL: {{ token_symbol }}
  INITIAL_TOKEN_SUPPLY: {{ initial_token_supply }}
network:
  ports: {}
volumes:
  nitro_deployment: ./data/nitro_deployment

nitro-nodes-setup/.gitignore (vendored, new file)

@@ -0,0 +1,3 @@
out
nitro-vars.yml
hosts.ini

nitro-nodes-setup/README.md

@@ -4,24 +4,29 @@
 To get started, follow the [installation](../README.md#installation) guide to setup ansible on your machine
-## Run a nitro node
+## Setup for Remote Host
+To run the playbook on a remote host:
+- Follow steps from [setup remote hosts](../README.md#setup-remote-hosts)
+- Update / append the [`hosts.ini`](../hosts.ini) file for your remote host with `<deployment_host>` set as `nitro_host`
+## Setup
 The following commands have to be executed in [`nitro-nodes-setup`](./) directory
-- Copy the `nitro-vars-example.yml` vars file
+- Copy the `nitro-vars.example.yml` vars file
 ```bash
-cp nitro-vars-example.yml nitro-vars.yml
+cp nitro-vars.example.yml nitro-vars.yml
 ```
 - Edit [`nitro-vars.yml`](./nitro-vars.yml) and fill in the following values
 ```bash
 # L1 WS endpoint
-nitro_l1_chain_url: ""
-# L2 WS endpoint
-nitro_l2_chain_url: ""
+nitro_chain_url: ""
 # Private key for your nitro address
 nitro_sc_pk: ""
@@ -29,6 +34,9 @@ The following commands have to be executed in [`nitro-nodes-setup`](./) director
 # Private key of the account on chain that is used for funding channels in Nitro node
 nitro_chain_pk: ""
+# Specifies the block number to start looking for nitro adjudicator events
+nitro_chain_start_block: ""
 # Contract address of NitroAdjudicator
 na_address: ""
@@ -38,9 +46,6 @@ The following commands have to be executed in [`nitro-nodes-setup`](./) director
 # Contract address of ConsensusApp
 ca_address: ""
-# Address of the bridge node
-bridge_contract_address: ""
 # Multiaddr of the L1 bridge node
 nitro_l1_bridge_multiaddr: ""
@@ -56,19 +61,76 @@ The following commands have to be executed in [`nitro-nodes-setup`](./) director
 nitro_l2_ext_multiaddr: ""
 ```
-- To run a nitro node, execute the `run-nitro-nodes.yml` Ansible playbook by running the following command.
-NOTE: By default, deployments are created in the `nitro-nodes-setup/out` directory. To change this location, update the `nitro_directory` variable in the [setup-vars.yml](./setup-vars.yml) file.
+## Run Nitro Node
+### On Local Host
+- To run a nitro node, execute the `run-nitro-nodes.yml` Ansible playbook by running the following command:
 ```bash
-LANG=en_US.utf8 ansible-playbook -i localhost, --connection=local run-nitro-nodes.yml --extra-vars='{ "target_host": "localhost"}' -kK --user $USER
+LANG=en_US.utf8 ansible-playbook run-nitro-nodes.yml --extra-vars='{ "target_host": "localhost"}' --user $USER -kK
 ```
-- For skipping container build, run with `"skip_container_build" : true` in the `--extra-vars` parameter:
+NOTE: By default, deployments are created in a `out` directory. To change this location, update the `nitro_directory` variable in the [setup-vars.yml](./setup-vars.yml) file
+- For skipping container build, run with `"skip_container_build" : true` in the `--extra-vars` parameter:
+```bash
+LANG=en_US.utf8 ansible-playbook -i hosts.ini run-nitro-nodes.yml --extra-vars='{ "target_host": "localhost", "skip_container_build": true }' --user $USER -kK
+```
+### On Remote Host
+To run the playbook on a remote host:
+- Create a new `hosts.ini` file:
+```bash
+cp ../hosts.example.ini hosts.ini
+```
+- Edit the [`hosts.ini`](./hosts.ini) file to run the playbook on a remote machine:
+```ini
+[<deployment_host>]
+<host_name> ansible_host=<target_ip> ansible_user=<ssh_user> ansible_ssh_common_args='-o ForwardAgent=yes'
+```
+- Replace `<deployment_host>` with `nitro_host`
+- Replace `<host_name>` with the alias of your choice
+- Replace `<target_ip>` with the IP address or hostname of the target machine
+- Replace `<ssh_user>` with the SSH username (e.g., dev, ubuntu)
+- Verify that you are able to connect to the host using the following command
+```bash
+ansible all -m ping -i hosts.ini -k
+# Expected output:
+# <host_name> | SUCCESS => {
+#   "ansible_facts": {
+#     "discovered_interpreter_python": "/usr/bin/python3.10"
+#   },
+#   "changed": false,
+#   "ping": "pong"
+# }
+```
+- Copy and edit the [`nitro-vars.yml`](./nitro-vars.yml) file as described in the [local setup](./README.md#run-nitro-node-on-local-host) section
+- Execute the `run-nitro-nodes.yml` Ansible playbook for remote deployment:
+```bash
+LANG=en_US.utf8 ansible-playbook -i hosts.ini run-nitro-nodes.yml --extra-vars='{ "target_host": "nitro_host"}' --user $USER -kK
+```
+- For skipping container build, run with `"skip_container_build" : true` in the `--extra-vars` parameter:
+```bash
+LANG=en_US.utf8 ansible-playbook -i hosts.ini run-nitro-nodes.yml --extra-vars='{ "target_host": "nitro_host", "skip_container_build": true }' --user $USER -kK
+```
-```bash
-LANG=en_US.utf8 ansible-playbook -i localhost, --connection=local run-nitro-nodes.yml --extra-vars='{ "target_host": "localhost", "skip_container_build": true }' -kK --user $USER
-```
 ## Check Deployment Status

nitro-nodes-setup/nitro-vars.example.yml

@@ -1,11 +1,10 @@
-nitro_l1_chain_url: ""
-nitro_l2_chain_url: ""
+nitro_chain_url: ""
 nitro_sc_pk: ""
 nitro_chain_pk: ""
+nitro_chain_start_block: 0
 na_address: ""
 vpa_address: ""
 ca_address: ""
-bridge_contract_address: ""
 nitro_l1_bridge_multiaddr: ""
 nitro_l2_bridge_multiaddr: ""
 nitro_l1_ext_multiaddr: ""

nitro-nodes-setup/run-nitro-nodes.yml

@@ -9,6 +9,13 @@
     PATH: "{{ ansible_env.PATH }}:/home/{{ansible_user}}/bin"
   tasks:
+    - name: Install yq
+      get_url:
+        url: https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64
+        dest: /usr/bin/yq
+        mode: '0755'
+      become: yes
     - name: Create directory for nitro-stack
       file:
         path: "{{ nitro_directory }}"
@@ -23,15 +30,23 @@
         recurse: yes
     - name: Clone go-nitro stack repo
-      command: laconic-so fetch-stack git.vdb.to/cerc-io/nitro-stack --git-ssh --pull
+      expect:
+        command: laconic-so fetch-stack git.vdb.to/cerc-io/nitro-stack --git-ssh --pull
+        responses:
+          "Are you sure you want to continue connecting \\(yes/no/\\[fingerprint\\]\\)\\?": "yes"
+        timeout: 300
       ignore_errors: yes
     - name: Clone repositories required for nitro-stack
-      command: laconic-so --stack ~/cerc/nitro-stack/stack-orchestrator/stacks/nitro-node setup-repositories --git-ssh --pull
+      expect:
+        command: laconic-so --stack {{ ansible_env.HOME }}/cerc/nitro-stack/stack-orchestrator/stacks/nitro-node setup-repositories --git-ssh --pull
+        responses:
+          "Are you sure you want to continue connecting \\(yes/no/\\[fingerprint\\]\\)\\?": "yes"
+        timeout: 300
      ignore_errors: yes
     - name: Build containers
-      command: laconic-so --stack ~/cerc/nitro-stack/stack-orchestrator/stacks/nitro-node build-containers --force-rebuild
+      command: laconic-so --stack {{ ansible_env.HOME }}/cerc/nitro-stack/stack-orchestrator/stacks/nitro-node build-containers --force-rebuild
       when: not skip_container_build
     - name: Generate spec file for L1 nitro node
@@ -103,3 +118,8 @@
       command: laconic-so deployment --dir l2-nitro-deployment start
       args:
         chdir: "{{ nitro_directory }}"
+    - name: Fetch the nitro-node-config file
+      get_url:
+        url: https://git.vdb.to/cerc-io/testnet-laconicd-stack/raw/branch/main/ops/stage2/nitro-node-config.yml
+        dest: "{{ nitro_directory }}"

nitro-nodes-setup/setup-vars.yml

@@ -1,3 +1,3 @@
 target_host: "localhost"
-nitro_directory: ./out
+nitro_directory: out
 skip_container_build: false

(L1 nitro node config.env template)

@@ -1,8 +1,9 @@
-NITRO_CHAIN_URL={{ nitro_l1_chain_url }}
+NITRO_CHAIN_URL={{ nitro_chain_url }}
 NITRO_SC_PK={{ nitro_sc_pk }}
 NITRO_CHAIN_PK={{ nitro_chain_pk }}
-NA_ADDRESS={{ na_address }}
-VPA_ADDRESS={{ vpa_address }}
-CA_ADDRESS={{ ca_address }}
+NA_ADDRESS="{{ na_address }}"
+VPA_ADDRESS="{{ vpa_address }}"
+CA_ADDRESS="{{ ca_address }}"
 NITRO_BOOTPEERS={{ nitro_l1_bridge_multiaddr }}
 NITRO_EXT_MULTIADDR={{ nitro_l1_ext_multiaddr }}
+NITRO_CHAIN_START_BLOCK: {{ nitro_chain_start_block }}

(L2 nitro node config.env template)

@@ -1,10 +1,6 @@
-NITRO_CHAIN_URL={{ nitro_l2_chain_url }}
 NITRO_SC_PK={{ nitro_sc_pk }}
-NITRO_CHAIN_PK={{ nitro_chain_pk }}
-NA_ADDRESS={{ na_address }}
-VPA_ADDRESS={{ vpa_address }}
-CA_ADDRESS={{ ca_address }}
-BRIDGE_ADDRESS={{ bridge_contract_address }}
+VPA_ADDRESS="{{ vpa_address }}"
+CA_ADDRESS="{{ ca_address }}"
 NITRO_BOOTPEERS={{ nitro_l2_bridge_multiaddr }}
 NITRO_EXT_MULTIADDR={{ nitro_l2_ext_multiaddr }}
 NITRO_L2=true

service-provider-setup/.gitignore (vendored, new file)

@@ -0,0 +1,2 @@
vars/*.yml
!vars/*.example.yml

service-provider-setup/README.md (new file)

@@ -0,0 +1,174 @@
# service-provider-setup
## Setup Ansible
To get started, follow the [installation](../README.md#installation) guide to setup ansible on your machine
## Prerequisites
- Set up a DigitalOcean Droplet with passwordless SSH access
- Buy a domain and configure [nameservers pointing to DigitalOcean](https://docs.digitalocean.com/products/networking/dns/getting-started/dns-registrars/)
- Generate a DigitalOcean access token, used for API authentication and managing cloud resources
## Setup a new User
- Create a new `hosts.ini` file:
```bash
cp ../hosts.example.ini hosts.ini
```
- Edit the [`hosts.ini`](./hosts.ini) file to run the playbook on a remote machine:
```ini
[root_host]
<host_name> ansible_host=<target_ip> ansible_user=<ssh_user> ansible_ssh_common_args='-o ForwardAgent=yes'
```
- Replace `<host_name>` with the desired `hostname` of the remote machine
- Replace `<target_ip>` with the IP address or hostname of the target machine
- Replace `<ssh_user>` with `root`
- Verify that you are able to connect to the host using the following command:
```bash
ansible all -m ping -i hosts.ini
# Expected output:
# <host_name> | SUCCESS => {
# "ansible_facts": {
# "discovered_interpreter_python": "/usr/bin/python3.10"
# },
# "changed": false,
# "ping": "pong"
# }
```
- Setup `user-vars.yml` using the example file
```bash
cp vars/user-vars.example.yml vars/user-vars.yml
```
- Edit the `user-vars.yml` file:
```bash
# name of the user you want to setup on the target host
username: ""
# password of the user you want to setup on the target host
password: ""
# path to the ssh key on your machine, eg: "/home/user/.ssh/id_rsa.pub"
path_to_ssh_key: ""
```
- Execute the `setup-user.yml` Ansible playbook to create a user with passwordless sudo permissions:
```bash
LANG=en_US.utf8 ansible-playbook setup-user.yml -i hosts.ini --extra-vars='{ "target_host": "deployment_host" }'
```
## Become a Service Provider
### Setup
- Copy the vars files:
```bash
cd vars
cp dns-vars.example.yml dns-vars.yml
cp gpg-vars.example.yml gpg-vars.yml
cp k8s-vars.example.yml k8s-vars.yml
cp container-vars.example.yml container-vars.yml
cp webapp-vars.example.yml webapp-vars.yml
cd -
```
- Update the following values in the respective variable files:
```bash
# vars/dns-vars.yml
full_domain: "" # eg: laconic.com
subdomain_prefix: "" # eg: lcn-cad
service_provider_ip: "" # eg: 23.111.78.179
do_api_token: "" # Digital Ocean access token that you generated, eg: dop_v1...
# vars/gpg-vars.yml
gpg_user_name: "" # Full name of the user for the GPG key
gpg_user_email: "" # Email address associated with the GPG key
gpg_passphrase: "" # Passphrase for securing the GPG key
# vars/k8s-vars.yml
target_host: "deployment_host"
org_id: "" # eg: lcn
location_id: "" # eg: cad
base_domain: "" # eg: laconic
support_email: "" # eg: support@laconic.com
# vars/container-vars.yml
container_registry_username: "" # username to login to the container registry
container_registry_password: "" # password to login to the container registry
# vars/webapp-vars.yml
authority_name: "" # eg: my-org-name
cpu_reservation: "" # Minimum number of cpu cores to be used, eg: 2
memory_reservation: "" # Minimum amount of memory in GB to be used, eg: 4G
cpu_limit: "" # Maximum number of cpu cores to be used, eg: 6
memory_limit: "" # Maximum amount of memory in GB to be used, eg: 8G
deployer_gpg_passphrase: "" # passphrase for creating GPG key used by webapp-deployer, eg: SECRET
```
- Update the [`hosts.ini`](./hosts.ini) file:
```ini
[root_host]
<host_name> ansible_host=<target_ip> ansible_user=root ansible_ssh_common_args='-o ForwardAgent=yes'
[deployment_host]
<host_name> ansible_host=<target_ip> ansible_user=<new_username> ansible_ssh_common_args='-o ForwardAgent=yes'
```
- Replace `<host_name>` with the desired `hostname` of the remote machine
- Replace `<target_ip>` with the IP address or hostname of the target machine
- Under `deployment_host`, replace `<new_username>` with the name of the user you have created
- Verify that you are able to connect to the host using the following command:
```bash
ansible all -m ping -i hosts.ini
# Expected output:
# <host_name> | SUCCESS => {
# "ansible_facts": {
# "discovered_interpreter_python": "/usr/bin/python3.10"
# },
# "changed": false,
# "ping": "pong"
# }
```
- Run the `service-provider-setup.yml` ansible-playbook to:
- Create DNS records
- Deploy k8s
- Setup laconicd and laconic console
- Setup container registry
- Deploy the webapp-deployer API and webapp-deployer UI
```bash
LANG=en_US.utf8 ansible-playbook service-provider-setup.yml -i hosts.ini --extra-vars='{ target_host: "deployment_host" }' --user $USER
```
### Result
After the playbook finishes executing, the following services will be deployed (your setup should look similar to the example below):
- laconicd chain RPC endpoint: http://lcn-daemon.laconic.com:26657
- laconic console: http://lcn-daemon.laconic.com:8080/registry
- laconicd GQL endpoint: http://lcn-daemon.laconic.com:9473/api
- webapp deployer API: https://webapp-deployer-api.pwa.laconic.com
- webapp deployer UI: https://webapp-deployer-ui.pwa.laconic.com
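Assuming laconicd exposes the standard CometBFT RPC on port 26657, the endpoints above can be sanity-checked with something like the following (the hostname is the example one listed above):

```bash
# Standard CometBFT /status route; expect JSON with node and sync info
curl http://lcn-daemon.laconic.com:26657/status
```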

service-provider-setup/deploy-backend.yml (new file)

@@ -0,0 +1,127 @@
- name: Deploy webapp-deployer backend
  hosts: "{{ target_host }}"
  environment:
    PATH: "{{ ansible_env.PATH }}:/home/{{ansible_user}}/bin"
    KUBECONFIG: "{{ ansible_env.HOME }}/.kube/config-default.yaml"
  vars_files:
    - vars/webapp-vars.yml
    - vars/container-vars.yml
    - vars/k8s-vars.yml
    - vars/dns-vars.yml
  tasks:
    - name: Ensure gpg-keys directory exists
      file:
        path: ~/gpg-keys
        state: directory
        mode: '0700'

    - name: Create a GPG key
      shell: gpg --batch --passphrase "{{ deployer_gpg_passphrase }}" --quick-generate-key webapp-deployer-api.{{ full_domain }} default default never

    - name: Export the public key
      shell: gpg --export webapp-deployer-api.{{ full_domain }} > ~/gpg-keys/webapp-deployer-api.{{ full_domain }}.pgp.pub
      args:
        creates: ~/gpg-keys/webapp-deployer-api.{{ full_domain }}.pgp.pub

    - name: Export the GPG private key with passphrase
      shell: gpg --pinentry-mode=loopback --passphrase "{{ deployer_gpg_passphrase }}" --export-secret-keys webapp-deployer-api.{{ full_domain }} > ~/gpg-keys/webapp-deployer-api.{{ full_domain }}.pgp.key

    - name: Setup repositories for webapp-deployer-backend
      command: laconic-so --stack webapp-deployer-backend setup-repositories

    - name: Build containers for webapp-deployer-backend
      command: laconic-so --stack webapp-deployer-backend build-containers

    - name: Ensure the config directory exists
      file:
        path: "{{ ansible_env.HOME }}/config"
        state: directory

    - name: Create laconic config file
      template:
        src: "./templates/laconic.yml.j2"
        dest: "config/laconic.yml"

    - name: Copy the gpg private key file to config dir
      copy:
        src: "gpg-keys/webapp-deployer-api.{{ full_domain }}.pgp.key"
        dest: "config"
        remote_src: true

    - name: Copy the gpg public key file to config dir
      copy:
        src: "gpg-keys/webapp-deployer-api.{{ full_domain }}.pgp.pub"
        dest: "config"
        remote_src: true

    - name: Publish the webapp-deployer record using laconic-so
      shell: |
        docker run -i -t \
          -v /home/{{ ansible_user }}/config:/home/root/config \
          cerc/webapp-deployer-backend:local laconic-so publish-deployer-to-registry \
          --laconic-config /home/root/config/laconic.yml \
          --api-url https://webapp-deployer-api.pwa.{{ full_domain }} \
          --public-key-file /home/root/config/webapp-deployer-api.{{ full_domain }}.pgp.pub \
          --lrn lrn://{{ authority_name }}/deployers/webapp-deployer-api.{{ full_domain }} \
          --min-required-payment 0
      register: publish_output

    - name: Display publish output
      debug:
        var: publish_output.stdout

    - name: Generate spec file for webapp-deployer-backend
      template:
        src: "./templates/specs/webapp-deployer.spec.j2"
        dest: "webapp-deployer.spec"

    - name: Create the deployment directory from the spec file
      command: >
        laconic-so --stack webapp-deployer-backend deploy create
        --deployment-dir webapp-deployer --spec-file webapp-deployer.spec

    - name: Update config for webapp-deployer-backend
      template:
        src: "./templates/configs/webapp-deployer-config.env.j2"
        dest: "webapp-deployer/config.env"

    - name: Copy the kube config file to webapp-deployer directory
      copy:
        src: "{{ansible_env.HOME}}/.kube/config-default.yaml"
        dest: "webapp-deployer/data/config/kube.yml"
        remote_src: true

    - name: Create laconic config file
      template:
        src: "./templates/laconic.yml.j2"
        dest: "webapp-deployer/data/config/laconic.yml"

    - name: login to the container registry
      command: "docker login container-registry.pwa.{{ full_domain }} --username {{ container_registry_username }} --password {{ container_registry_password}}"

    - name: Push images to container registry
      command: laconic-so deployment --dir webapp-deployer push-images

    - name: Start the webapp deployer
      command: laconic-so deployment --dir webapp-deployer start

    - name: Get the most recent pod for the deployment
      shell: kubectl get pods --sort-by=.metadata.creationTimestamp -o jsonpath='{.items[-1].metadata.name}'
      register: webapp_deployer_pod

    - name: Set pod ID to a variable
      set_fact:
        pod_id: "{{ webapp_deployer_pod.stdout }}"

    - name: Wait for the recent pod to be ready
      command: kubectl wait --for=condition=Ready pod/{{ pod_id }} --timeout=300s
      register: wait_result

    - name: Copy gpg private key file to webapp deployer pod
      shell: kubectl cp gpg-keys/webapp-deployer-api.{{ full_domain }}.pgp.key {{ pod_id }}:/app

    - name: Copy gpg public key file to webapp deployer pod
      shell: kubectl cp gpg-keys/webapp-deployer-api.{{ full_domain }}.pgp.pub {{ pod_id }}:/app

service-provider-setup/deploy-frontend.yml (new file)

@@ -0,0 +1,43 @@
- name: Deploy webapp-deployer ui
  hosts: "{{ target_host }}"
  environment:
    PATH: "{{ ansible_env.PATH }}:/home/{{ansible_user}}/bin"
  vars_files:
    - vars/webapp-vars.yml
    - vars/dns-vars.yml
    - vars/k8s-vars.yml
  tasks:
    - name: Clone webapp-deployment-status-ui repository
      git:
        repo: "https://git.vdb.to/cerc-io/webapp-deployment-status-ui.git"
        dest: "{{ ansible_env.HOME }}/cerc/webapp-deployment-status-ui"
        update: yes

    - name: Build webapp-deployer-status-ui
      command: laconic-so build-webapp --source-repo {{ ansible_env.HOME }}/cerc/webapp-deployment-status-ui

    - name: Create a deployment for webapp-ui
      command: |
        laconic-so deploy-webapp create --kube-config {{ ansible_env.HOME }}/.kube/config-default.yaml
        --image-registry container-registry.pwa.{{ full_domain }} --deployment-dir webapp-ui
        --image cerc/webapp-deployment-status-ui:local --url https://webapp-deployer-ui.pwa.{{ full_domain }}
        --env-file ~/cerc/webapp-deployment-status-ui/.env

    - name: Push image to container registry
      command: laconic-so deployment --dir webapp-ui push-images

    - name: Update config file for webapp ui
      template:
        src: "./templates/configs/webapp-ui-config.env.j2"
        dest: "webapp-ui/config.env"

    - name: Start the deployer ui
      command: laconic-so deployment --dir webapp-ui start

    - name: Create .out file
      file:
        path: "{{ ansible_env.HOME }}/.out"
        state: touch

service-provider-setup/run-laconic-console.yml (new file)

@@ -0,0 +1,90 @@
- name: Setup and run laconic console
  hosts: "{{target_host}}"
  environment:
    PATH: "{{ ansible_env.PATH }}:/home/{{ansible_user}}/bin"
  vars_files:
    - vars/webapp-vars.yml
    - vars/dns-vars.yml
    - vars/k8s-vars.yml
  tasks:
    - name: Clone the stack repo
      command: laconic-so fetch-stack git.vdb.to/cerc-io/testnet-laconicd-stack --pull
      ignore_errors: yes

    - name: Clone required repositories for laconic-console
      command: laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-console setup-repositories --pull

    - name: Build container images
      command: laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-console build-containers --force-rebuild

    - name: Generate spec file for laconic console deployment
      template:
        src: "./templates/specs/laconic-console-spec.yml.j2"
        dest: "laconic-console-spec.yml"

    - name: Check if the deployment directory exists
      stat:
        path: laconic-console-deployment
      register: deployment_dir

    - name: Create a deployment from the spec file
      command: laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-console deploy create --spec-file laconic-console-spec.yml --deployment-dir laconic-console-deployment
      when: not deployment_dir.stat.exists

    - name: Place deployment in the same namespace as fixturenet-laconicd
      copy:
        src: "fixturenet-laconicd-deployment/deployment.yml"
        dest: "laconic-console-deployment/deployment.yml"
        remote_src: yes

    - name: Fetch user key from laconicd
      command: laconic-so deployment --dir fixturenet-laconicd-deployment exec laconicd "echo y | laconicd keys export alice --unarmored-hex --unsafe"
      register: alice_pk

    - name: Set Private key for console deployment
      set_fact:
        ALICE_PK: "{{ alice_pk.stdout }}"

    - name: Check if DNS resolves for daemon
      command: getent ahosts {{ org_id }}-daemon.{{ full_domain }}
      register: dns_check
      retries: 5
      delay: 5
      until: dns_check.rc == 0
      ignore_errors: yes

    - name: Fail if DNS does not resolve after retries
      fail:
        msg: "DNS resolution failed for {{ org_id }}-daemon.{{ full_domain }} after 5 retries"
      when: dns_check.rc != 0

    - name: Start the laconic console deployment
      command: laconic-so deployment --dir laconic-console-deployment start

    - name: Create a bond using cli
      shell: laconic-so deployment --dir laconic-console-deployment exec cli "laconic registry bond create --type alnt --quantity 1000000000000 --user-key {{ALICE_PK}}" | jq -r '.bondId'
      register: bond_id

    - name: Set Bond ID for console deployment
      set_fact:
        BOND_ID: "{{ bond_id.stdout }}"

    - name: Stop the console deployment
      command: laconic-so deployment --dir laconic-console-deployment stop

    - name: Modify the console config with alice_pk and bond_id
      template:
        src: "./templates/configs/console-config.env.j2"
        dest: "laconic-console-deployment/config.env"

    - name: Start the laconic console deployment with updated config
      command: laconic-so deployment --dir laconic-console-deployment start

    - name: Reserve an authority
      command: laconic-so deployment --dir laconic-console-deployment exec cli "laconic registry authority reserve {{authority_name}}"

    - name: Set authority using bond id
      command: laconic-so deployment --dir laconic-console-deployment exec cli "laconic registry authority bond set {{authority_name}} {{BOND_ID}}"

service-provider-setup/run-laconicd.yml (new file)

@@ -0,0 +1,33 @@
- name: Setup and run fixturenet-laconicd-stack
  hosts: "{{ target_host }}"
  environment:
    PATH: "{{ ansible_env.PATH }}:/home/{{ansible_user}}/bin"
  tasks:
    - name: Clone the fixturenet-laconicd-stack repo
      command: laconic-so fetch-stack git.vdb.to/cerc-io/fixturenet-laconicd-stack --pull
      ignore_errors: yes

    - name: Setup repos for fixturenet-laconicd
      command: laconic-so --stack ~/cerc/fixturenet-laconicd-stack/stack-orchestrator/stacks/fixturenet-laconicd setup-repositories

    - name: Build container images
      command: laconic-so --stack ~/cerc/fixturenet-laconicd-stack/stack-orchestrator/stacks/fixturenet-laconicd build-containers --force-rebuild

    - name: Generate spec file for laconicd deployment
      template:
        src: "./templates/specs/fixturenet-laconicd-spec.yml.j2"
        dest: "fixturenet-laconicd-spec.yml"

    - name: Check if the deployment directory exists
      stat:
        path: "fixturenet-laconicd-deployment"
      register: deployment_dir

    - name: Create the deployment from the spec file
      command: laconic-so --stack ~/cerc/fixturenet-laconicd-stack/stack-orchestrator/stacks/fixturenet-laconicd deploy create --spec-file fixturenet-laconicd-spec.yml --deployment-dir fixturenet-laconicd-deployment
      when: not deployment_dir.stat.exists

    - name: Start the deployment
      command: laconic-so deployment --dir fixturenet-laconicd-deployment start

service-provider-setup/service-provider-setup.yml (new file)

@@ -0,0 +1,20 @@
- hosts: "{{ target_host }}"
  tasks:
    - name: Check if .out file exists
      stat:
        path: "{{ ansible_env.HOME }}/.out"
      register: out_file

    - name: Exit playbook if .out file exists
      fail:
        msg: ".out file exists, exiting playbook."
      when: out_file.stat.exists

- import_playbook: setup-dns.yml
- import_playbook: setup-system.yml
- import_playbook: setup-k8s.yml
- import_playbook: setup-container-registry.yml
- import_playbook: run-laconicd.yml
- import_playbook: run-laconic-console.yml
- import_playbook: deploy-backend.yml
- import_playbook: deploy-frontend.yml
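Because the stages are chained with plain `import_playbook` statements, a single stage can presumably be re-run on its own with the same invocation pattern the README uses (a hypothetical example):

```bash
# Re-run only the DNS stage of the service provider setup
LANG=en_US.utf8 ansible-playbook setup-dns.yml -i hosts.ini --extra-vars='{ "target_host": "deployment_host" }'
```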

View File

@ -0,0 +1,161 @@
- name: Setup container registry
hosts: "{{ target_host }}"
environment:
PATH: "{{ ansible_env.PATH }}:/home/{{ansible_user}}/bin"
vars_files:
- vars/k8s-vars.yml
- vars/container-vars.yml
- vars/dns-vars.yml
tasks:
- name: Generate spec file for the container-registry stack
template:
src: "./templates/specs/container-registry.spec.j2"
dest: "{{ansible_env.HOME}}/container-registry.spec"
- name: Create a deployment for the container-registry stack
command: laconic-so --stack container-registry deploy create --deployment-dir container-registry --spec-file container-registry.spec
- name: Base64 encode the container registry credentials
set_fact:
b64_encoded_cred: "{{ (container_registry_username + ':' + container_registry_password) | b64encode }}"
- name: Encrypt the container registry credentials to create an htpasswd file
command: >
htpasswd -bB -c container-registry/configmaps/config/htpasswd
{{ container_registry_username }} {{ container_registry_password }}
register: htpasswd_file
- name: Read the htpasswd file
slurp:
src: "container-registry/configmaps/config/htpasswd"
register: htpasswd_file_content
- name: Extract the hashed password (after the colon)
set_fact:
hashed_password: "{{ (htpasswd_file_content.content | b64decode).split(':')[1] | trim }}"
- name: Create container-registry/my_password.json file
template:
src: "./templates/my_password.json.j2"
dest: "container-registry/my_password.json"
- name: Configure the file container-registry/config.env
copy:
dest: "container-registry/config.env"
content: |
REGISTRY_AUTH=htpasswd
REGISTRY_AUTH_HTPASSWD_REALM="{{org_id}} Service Provider Image Registry"
REGISTRY_AUTH_HTPASSWD_PATH="/config/htpasswd"
REGISTRY_HTTP_SECRET='{{ hashed_password }}'
- name: Set KUBECONFIG environment variable
set_fact:
kubeconfig_path: "{{ ansible_env.HOME }}/.kube/config-default.yaml"
- name: Add the container registry credentials as a secret available to the cluster
command: >
kubectl create secret generic laconic-registry
--from-file=.dockerconfigjson=container-registry/my_password.json
--type=kubernetes.io/dockerconfigjson
environment:
KUBECONFIG: "{{ kubeconfig_path }}"
# TODO: Investigate why container registry throws error if started immediately
- name: Wait for 90 seconds
pause:
seconds: 90
- block:
- name: Get Kubernetes nodes with wide output
command: kubectl get nodes -o wide
environment:
KUBECONFIG: "{{ kubeconfig_path }}"
register: nodes_output
- name: Print output of 'kubectl get nodes -o wide'
debug:
var: nodes_output.stdout
- name: Get all secrets from all namespaces
command: kubectl get secrets --all-namespaces
environment:
KUBECONFIG: "{{ kubeconfig_path }}"
register: secrets_output
- name: Print output of 'kubectl get secrets --all-namespaces'
debug:
var: secrets_output.stdout
- name: Get cluster issuers
command: kubectl get clusterissuer
environment:
KUBECONFIG: "{{ kubeconfig_path }}"
register: clusterissuer_output
- name: Print output of 'kubectl get clusterissuer'
debug:
var: clusterissuer_output.stdout
- name: Get certificates
command: kubectl get certificates
environment:
KUBECONFIG: "{{ kubeconfig_path }}"
register: certificates_output
- name: Print output of 'kubectl get certificates'
debug:
var: certificates_output.stdout
- name: Get DaemonSets in all namespaces
command: kubectl get ds --all-namespaces
environment:
KUBECONFIG: "{{ kubeconfig_path }}"
register: daemonsets_output
- name: Print output of 'kubectl get ds --all-namespaces'
debug:
var: daemonsets_output.stdout
ignore_errors: yes
- name: Deploy the container registry
command: >
laconic-so deployment --dir container-registry start
- name: Get cluster_id from container-registry-deployment
slurp:
src: container-registry/deployment.yml
register: deployment_file
- name: Decode and extract cluster-id
set_fact:
extracted_cluster_id: "{{ deployment_file.content | b64decode | regex_search('cluster-id: (.+)', '\\1') }}"
- name: Set modified cluster-id
set_fact:
formatted_cluster_id: "{{ extracted_cluster_id | replace('[', '') | replace(']', '') | replace(\"'\", '') }}"
- name: Display the cluster ID
debug:
msg: "The cluster ID is: {{ formatted_cluster_id }}"
- name: Annotate ingress for proxy body size
command: >
kubectl annotate ingress {{ formatted_cluster_id }}-ingress nginx.ingress.kubernetes.io/proxy-body-size=0
environment:
KUBECONFIG: "{{ ansible_env.HOME }}/.kube/config-default.yaml"
- name: Annotate ingress for proxy read timeout
command: >
kubectl annotate ingress {{ formatted_cluster_id }}-ingress nginx.ingress.kubernetes.io/proxy-read-timeout=600
environment:
KUBECONFIG: "{{ ansible_env.HOME }}/.kube/config-default.yaml"
- name: Annotate ingress for proxy send timeout
command: >
kubectl annotate ingress {{ formatted_cluster_id }}-ingress nginx.ingress.kubernetes.io/proxy-send-timeout=600
environment:
KUBECONFIG: "{{ ansible_env.HOME }}/.kube/config-default.yaml"
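# An illustrative follow-up check (not part of the original playbook) to
# confirm the annotations were applied:
# - name: Show ingress annotations
#   command: kubectl get ingress {{ formatted_cluster_id }}-ingress -o jsonpath='{.metadata.annotations}'
#   environment:
#     KUBECONFIG: "{{ ansible_env.HOME }}/.kube/config-default.yaml"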

@@ -0,0 +1,124 @@
- name: Configure DNS records
hosts: localhost
vars_files:
- vars/dns-vars.yml
- vars/k8s-vars.yml
tasks:
- name: Create a domain
community.digitalocean.digital_ocean_domain:
state: present
oauth_token: "{{ do_api_token }}"
name: "{{ full_domain }}"
ip: "{{ service_provider_ip }}"
- name: Create record for cluster control machine
community.digitalocean.digital_ocean_domain_record:
state: present
oauth_token: "{{ do_api_token }}"
domain: "{{ full_domain }}"
type: A
name: "{{ subdomain_prefix }}-cluster-control"
data: "{{ service_provider_ip }}"
- name: Create record for daemon machine
community.digitalocean.digital_ocean_domain_record:
state: present
oauth_token: "{{ do_api_token }}"
domain: "{{ full_domain }}"
type: A
name: "{{ org_id }}-daemon"
data: "{{ service_provider_ip }}"
- name: Create CNAME record for www
community.digitalocean.digital_ocean_domain_record:
state: present
oauth_token: "{{ do_api_token }}"
data: "{{ full_domain }}"
domain: "{{ full_domain }}"
type: CNAME
name: www
ttl: 43200
- name: Create CNAME record for subdomain
community.digitalocean.digital_ocean_domain_record:
state: present
oauth_token: "{{ do_api_token }}"
data: "{{ subdomain_cluster_control }}.{{ full_domain }}"
domain: "{{ full_domain }}"
type: CNAME
name: "{{ subdomain_prefix }}.{{ full_domain }}"
ttl: 43200
- name: Create CNAME record for laconicd endpoint
community.digitalocean.digital_ocean_domain_record:
state: present
oauth_token: "{{ do_api_token }}"
data: "{{ org_id }}-daemon.{{ full_domain }}"
domain: "{{ full_domain }}"
type: CNAME
name: "laconicd.{{ full_domain }}"
ttl: 43200
- name: Create CNAME record for backend
community.digitalocean.digital_ocean_domain_record:
state: present
oauth_token: "{{ do_api_token }}"
data: "{{ org_id }}-daemon.{{ full_domain }}"
domain: "{{ full_domain }}"
type: CNAME
name: "{{ org_id }}-backend.{{ full_domain }}"
ttl: 43200
- name: Create CNAME record for console
community.digitalocean.digital_ocean_domain_record:
state: present
oauth_token: "{{ do_api_token }}"
data: "{{ org_id }}-daemon.{{ full_domain }}"
domain: "{{ full_domain }}"
type: CNAME
name: "{{ org_id }}-console.{{ full_domain }}"
ttl: 43200
- name: Create CNAME record for org and location
community.digitalocean.digital_ocean_domain_record:
state: present
oauth_token: "{{ do_api_token }}"
data: "{{ org_id }}-daemon.{{ full_domain }}"
domain: "{{ full_domain }}"
type: CNAME
name: "{{ subdomain_prefix }}"
ttl: 43200
- name: Create wildcard A record for subdomain
community.digitalocean.digital_ocean_domain_record:
state: present
oauth_token: "{{ do_api_token }}"
domain: "{{ full_domain }}"
type: A
name: "*.{{ subdomain_prefix }}"
data: "{{ service_provider_ip }}"
ttl: 43200
- name: Create CNAME record for pwa
community.digitalocean.digital_ocean_domain_record:
state: present
oauth_token: "{{ do_api_token }}"
data: "{{ subdomain_cluster_control }}.{{ full_domain }}"
domain: "{{ full_domain }}"
type: CNAME
name: "pwa"
ttl: 43200
- name: Create wildcard A record for pwa
community.digitalocean.digital_ocean_domain_record:
state: present
oauth_token: "{{ do_api_token }}"
domain: "{{ full_domain }}"
type: A
name: "*.pwa"
data: "{{ service_provider_ip }}"
ttl: 43200
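# A hedged verification sketch (assumes the dnspython package is installed for
# the dig lookup; not part of the original playbook):
# - name: Check that the daemon record resolves
#   debug:
#     msg: "{{ lookup('community.general.dig', org_id + '-daemon.' + full_domain) }}"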

@@ -0,0 +1,184 @@
- name: Install Stack Orchestrator if it isn't present
import_playbook: ../stack-orchestrator-setup/setup-laconic-so.yml
- name: Setup k8s
hosts: "{{ target_host }}"
environment:
PATH: "{{ ansible_env.PATH }}:/home/{{ansible_user}}/.local/bin"
VAULT_KEY: "{{ vault_passphrase }}"
vars_files:
- vars/dns-vars.yml
- vars/gpg-vars.yml
- vars/k8s-vars.yml
tasks:
- name: Install Python and pip
apt:
name: "{{ item }}"
state: present
become: yes
loop:
- python3
- python3-pip
- name: Add user to docker group
user:
name: "{{ ansible_user }}"
groups: docker
append: true
become: yes
- name: Install Ansible on remote host
pip:
name: ansible
extra_args: --user
when: target_host != "localhost"
- name: Ensure ~/.local/bin is in PATH in .bashrc
lineinfile:
path: ~/.bashrc
line: 'export PATH="$HOME/.local/bin:$PATH"'
state: present
create: yes
- name: Ensure ~/.local/bin is in PATH in .zshrc
lineinfile:
path: ~/.zshrc
line: 'export PATH="$HOME/.local/bin:$PATH"'
state: present
create: yes
- name: Clone the service provider template repo
git:
repo: "https://git.vdb.to/cerc-io/service-provider-template.git"
dest: "{{ ansible_env.HOME }}/service-provider-template"
- name: Update .vault/vault-keys file
lineinfile:
path: "service-provider-template/.vault/vault-keys"
regexp: '^.*$'
line: "{{ gpg_key_id }}"
create: yes
- name: Start GPG agent
command: gpg-agent --daemon
ignore_errors: yes
- name: Sign a dummy string using gpg-key
shell: echo "This is a dummy string." | gpg --batch --yes --local-user "{{ gpg_key_id }}" --passphrase "{{ vault_passphrase }}" --pinentry-mode loopback --sign -
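# Signing a throwaway string verifies the key is usable and primes gpg-agent
# non-interactively; --pinentry-mode loopback lets the passphrase be supplied
# on the command line instead of through a pinentry prompt.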
- name: Run vault-rekey.sh
shell: bash .vault/vault-rekey.sh
args:
chdir: "service-provider-template"
register: rekey_result
until: rekey_result.stderr == ""
retries: 5
delay: 5
- name: Ensure the target directory exists
file:
path: "{{ ansible_env.HOME }}/service-provider-template"
state: directory
mode: '0755'
- name: Change directory name in group_vars
command: mv lcn_cad {{ org_id }}_{{ location_id }}
args:
chdir: "{{ ansible_env.HOME }}/service-provider-template/group_vars"
- name: Change control directory name in host_vars
command: mv lcn-cad-cluster-control {{ org_id }}-{{ location_id }}-cluster-control
args:
chdir: "{{ ansible_env.HOME }}/service-provider-template/host_vars"
- name: Change daemon directory name in host_vars
command: mv lcn-daemon {{ org_id }}-daemon
args:
chdir: "{{ ansible_env.HOME }}/service-provider-template/host_vars"
- name: Copy control-firewalld.yml to the remote VM
template:
src: ./templates/control-firewalld.yml.j2
dest: "{{ ansible_env.HOME }}/service-provider-template/host_vars/{{ org_id }}-{{ location_id }}-cluster-control/firewalld.yml"
- name: Copy daemon-firewalld.yml to the remote VM
template:
src: ./templates/daemon-firewalld.yml.j2
dest: "{{ ansible_env.HOME }}/service-provider-template/host_vars/{{ org_id }}-daemon/firewalld.yml"
- name: Copy nginx.yml to the remote VM
template:
src: ./templates/nginx.yml.j2
dest: "{{ ansible_env.HOME }}/service-provider-template/host_vars/{{ org_id }}-daemon/nginx.yml"
- name: Copy hosts file to the remote VM
template:
src: ./templates/hosts.j2
dest: "{{ ansible_env.HOME }}/service-provider-template/hosts"
- name: Copy k8s.yml to the remote VM
template:
src: ./templates/k8s.yml.j2
dest: "{{ ansible_env.HOME }}/service-provider-template/group_vars/{{ org_id }}_{{ location_id }}/k8s.yml"
- name: Copy wildcard-pwa-{{ base_domain }}.yaml to the remote VM
template:
src: ./templates/wildcard-pwa-example.yml.j2
dest: "{{ ansible_env.HOME }}/service-provider-template/files/manifests/wildcard-pwa-{{ base_domain }}.yaml"
- name: Delete old wildcard-pwa file
file:
path: "{{ ansible_env.HOME }}/service-provider-template/files/manifests/wildcard-pwa-laconic.yaml"
state: absent
- name: Install required ansible roles
shell: ansible-galaxy install -f -p roles -r roles/requirements.yml
args:
chdir: "{{ ansible_env.HOME }}/service-provider-template"
- name: Install Kubernetes helper tools
shell: ./roles/k8s/files/scripts/get-kube-tools.sh
args:
chdir: "{{ ansible_env.HOME }}/service-provider-template"
become: yes
- name: Update group_vars/all/vault.yml with support email using template
template:
src: ./templates/vault.yml.j2
dest: "{{ ansible_env.HOME }}/service-provider-template/group_vars/all/vault.yml"
- name: Base64 encode DigitalOcean token
set_fact:
b64_encoded_token: "{{ do_api_token | b64encode }}"
- name: Update secret-digitalocean-dns.yaml with encoded token
template:
src: ./templates/secret-digitalocean-dns.yml.j2
dest: "{{ ansible_env.HOME }}/service-provider-template/files/manifests/secret-digitalocean-dns.yaml"
vars:
b64_encoded_token: "{{ b64_encoded_token }}"
- name: Remove k8s-vault.yml file
file:
path: "{{ ansible_env.HOME }}/service-provider-template/group_vars/{{ org_id }}_{{ location_id }}/k8s-vault.yml"
state: absent
- name: Generate token for the cluster
command: ./roles/k8s/files/scripts/token-vault.sh ./group_vars/{{ org_id }}_{{ location_id }}/k8s-vault.yml
args:
chdir: "{{ ansible_env.HOME }}/service-provider-template"
- name: Configure firewalld and nginx
command: ansible-playbook -i hosts site.yml --tags=firewalld,nginx
args:
chdir: "{{ ansible_env.HOME }}/service-provider-template"
environment:
ANSIBLE_HOST_KEY_CHECKING: "False"
- name: Deploy Kubernetes
command: ansible-playbook -i hosts site.yml --tags=k8s --limit={{ org_id }}_{{ location_id }} --user {{ ansible_user }}
args:
chdir: "{{ ansible_env.HOME }}/service-provider-template"
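# After this playbook completes, the cluster can be inspected with the
# generated kubeconfig, e.g. (illustrative, not part of the original playbook):
# - name: Inspect cluster nodes
#   command: kubectl get nodes -o wide
#   environment:
#     KUBECONFIG: "{{ ansible_env.HOME }}/.kube/config-default.yaml"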

@@ -0,0 +1,137 @@
- name: Set up the system for the service provider
hosts: "{{ target_host }}"
environment:
GNUPGHOME: /home/{{ ansible_user }}/.gnupg
vars_files:
- vars/k8s-vars.yml
- vars/dns-vars.yml
- vars/gpg-vars.yml
tasks:
- name: Install required packages
apt:
name:
- doas
- zsh
- tmux
- git
- jq
- acl
- curl
- wget
- netcat-traditional
- fping
- rsync
- htop
- iotop
- iftop
- tar
- less
- firewalld
- sshguard
- wireguard
- iproute2
- iperf3
- zfsutils-linux
- net-tools
- ca-certificates
- gnupg
- sshpass
- apache2-utils
state: latest
update_cache: true
become: yes
- name: Set unique hostname
hostname:
name: "{{ inventory_hostname }}"
when: ansible_hostname != inventory_hostname
become: yes
- name: Enable and start firewalld and sshguard
systemd:
name: "{{ item }}"
enabled: yes
state: started
loop:
- firewalld
- sshguard
ignore_errors: yes
- name: Disable and remove snapd
block:
- name: Disable snapd services
systemd:
name: "{{ item }}"
enabled: no
state: stopped
loop:
- snapd.service
- snapd.socket
- snapd.seeded
- snapd.snap-repair.timer
ignore_errors: yes
become: yes
- name: Purge snapd
apt:
name: snapd
state: absent
become: yes
- name: Remove snap directories
file:
path: "{{ item }}"
state: absent
loop:
- "{{ ansible_env.HOME }}/snap"
- /snap
- /var/snap
- /var/lib/snapd
become: yes
ignore_errors: yes
- name: Ensure GPG directory exists
file:
path: "{{ ansible_env.HOME }}/.gnupg"
state: directory
mode: '0700'
- name: Create GPG key parameters file
copy:
dest: /tmp/gpg_key_params.txt
content: |
Key-Type: RSA
Key-Length: 4096
Subkey-Type: RSA
Name-Real: {{ gpg_user_name }}
Name-Email: {{ gpg_user_email }}
Expire-Date: 0
Passphrase: {{ gpg_passphrase }}
%no-protection
%commit
mode: '0600'
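# Note: %no-protection generates the key without passphrase protection, which
# makes the Passphrase parameter above redundant.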
- name: Generate GPG key using the parameter file
command: gpg --batch --gen-key /tmp/gpg_key_params.txt
become_user: "{{ ansible_user }}"
register: gpg_keygen_output
ignore_errors: yes
- name: Show GPG key generation output
debug:
var: gpg_keygen_output.stdout
- name: Fetch the Key ID of the most recently created GPG key
shell: gpg --list-secret-keys --keyid-format=long | grep 'sec' | tail -n 1 | awk -F'/' '{print $2}' | awk '{print $1}'
register: gpg_key_output
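# A more script-friendly alternative (sketch): gpg's --with-colons output is
# stable across versions, and field 5 of the "sec" line is the key ID:
# shell: gpg --list-secret-keys --with-colons | awk -F: '/^sec/ {print $5}' | tail -n 1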
- name: Set the GPG key ID to a variable
set_fact:
sec_key_id: "{{ gpg_key_output.stdout }}"
- name: Show GPG Key ID
debug:
msg: "GPG Key ID: {{ sec_key_id }}"

@@ -0,0 +1,46 @@
- name: Configure system
hosts: root_host
become: yes
vars_files:
- vars/user-vars.yml
tasks:
- name: Create a user
user:
name: "{{ username }}"
password: "{{ password | password_hash('sha512') }}"
shell: /bin/bash
state: present
- name: Add user to sudoers group
user:
name: "{{ username }}"
groups: sudo
append: yes
- name: Ensure .ssh directory exists for user
file:
path: /home/{{ username }}/.ssh
state: directory
owner: "{{ username }}"
group: "{{ username }}"
mode: '0700'
- name: Append SSH public key to authorized_keys
lineinfile:
path: /home/{{ username }}/.ssh/authorized_keys
line: "{{ lookup('file', path_to_ssh_key) }}"
create: yes
owner: "{{ username }}"
group: "{{ username }}"
mode: '0600'
state: present
- name: Add user to sudoers for passwordless sudo
lineinfile:
path: /etc/sudoers
state: present
regexp: '^{{ username }} ALL=\(ALL\) NOPASSWD:ALL'
line: '{{ username }} ALL=(ALL) NOPASSWD:ALL'
validate: 'visudo -cf %s'
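# An illustrative check (not in the original playbook) that passwordless sudo
# works for the new user:
# - name: Verify passwordless sudo
#   command: su - {{ username }} -c 'sudo -n true'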

@@ -0,0 +1,5 @@
CERC_LACONICD_USER_KEY={{ ALICE_PK }}
CERC_LACONICD_BOND_ID={{ BOND_ID }}
CERC_LACONICD_RPC_ENDPOINT=http://{{ org_id }}-daemon.{{ full_domain }}:26657
CERC_LACONICD_GQL_ENDPOINT=http://{{ org_id }}-daemon.{{ full_domain }}:9473/api
LACONIC_HOSTED_ENDPOINT=http://{{ org_id }}-daemon.{{ full_domain }}:9473

@@ -0,0 +1,28 @@
DEPLOYMENT_DNS_SUFFIX="pwa.{{ full_domain }}"
# Name of reserved authority
DEPLOYMENT_RECORD_NAMESPACE="{{ authority_name }}"
# url of the deployed docker image registry
IMAGE_REGISTRY="container-registry.pwa.{{ full_domain }}"
# htpasswd credentials
IMAGE_REGISTRY_USER="{{ container_registry_username }}"
IMAGE_REGISTRY_CREDS="{{ container_registry_password }}"
# configs
CLEAN_DEPLOYMENTS=false
CLEAN_LOGS=false
CLEAN_CONTAINERS=false
SYSTEM_PRUNE=false
WEBAPP_IMAGE_PRUNE=true
CHECK_INTERVAL=5
FQDN_POLICY="allow"
# lrn of the webapp deployer
LRN="lrn://{{ authority_name }}/deployers/webapp-deployer-api.{{ full_domain }}"
export OPENPGP_PRIVATE_KEY_FILE="webapp-deployer-api.{{ full_domain }}.pgp.key"
export OPENPGP_PASSPHRASE="{{ deployer_gpg_passphrase }}"
export DEPLOYER_STATE="srv-test/deployments/autodeploy.state"
export UNDEPLOYER_STATE="srv-test/deployments/autoundeploy.state"
export UPLOAD_DIRECTORY="srv-test/uploads"

@@ -0,0 +1,3 @@
CERC_WEBAPP_DEBUG=0.1.0
LACONIC_HOSTED_CONFIG_app_api_url=https://webapp-deployer-api.pwa.{{ full_domain }}
LACONIC_HOSTED_CONFIG_app_console_link=http://{{ org_id }}-daemon.{{ full_domain }}:9473/console?query=%0A%20%20fragment%20ValueParts%20on%20Value%20%7B%0A%20%20%20%20...%20on%20BooleanValue%20%7B%0A%20%20%20%20%20%20bool%3A%20value%0A%20%20%20%20%7D%0A%20%20%20%20...%20on%20IntValue%20%7B%0A%20%20%20%20%20%20int%3A%20value%0A%20%20%20%20%7D%0A%20%20%20%20...%20on%20FloatValue%20%7B%0A%20%20%20%20%20%20float%3A%20value%0A%20%20%20%20%7D%0A%20%20%20%20...%20on%20StringValue%20%7B%0A%20%20%20%20%20%20string%3A%20value%0A%20%20%20%20%7D%0A%20%20%20%20...%20on%20BytesValue%20%7B%0A%20%20%20%20%20%20bytes%3A%20value%0A%20%20%20%20%7D%0A%20%20%20%20...%20on%20LinkValue%20%7B%0A%20%20%20%20%20%20link%3A%20value%0A%20%20%20%20%7D%0A%20%20%7D%0A%0A%20%20fragment%20AttrParts%20on%20Attribute%20%7B%0A%20%20%20%20key%0A%20%20%20%20value%20%7B%0A%20%20%20%20%20%20...ValueParts%0A%20%20%20%20%20%20...%20on%20ArrayValue%20%7B%0A%20%20%20%20%20%20%20%20value%20%7B%0A%20%20%20%20%20%20%20%20%20%20...ValueParts%0A%20%20%20%20%20%20%20%20%7D%0A%20%20%20%20%20%20%7D%0A%20%20%20%20%7D%0A%20%20%7D%0A%0A%20%20%7B%0A%20%20%20%20getRecordsByIds(ids%3A%20%5B%22#RQID#%22%5D)%20%7B%0A%20%20%20%20%20%20id%0A%20%20%20%20%20%20names%0A%20%20%20%20%20%20bondId%0A%20%20%20%20%20%20createTime%0A%20%20%20%20%20%20expiryTime%0A%20%20%20%20%20%20owners%0A%20%20%20%20%20%20attributes%20%7B%0A%20%20%20%20%20%20%20%20...AttrParts%0A%20%20%20%20%20%20%20%20value%20%7B%0A%20%20%20%20%20%20%20%20%20%20...%20on%20MapValue%20%7B%0A%20%20%20%20%20%20%20%20%20%20%20%20map%3A%20value%20%7B%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20...AttrParts%0A%20%20%20%20%20%20%20%20%20%20%20%20%7D%0A%20%20%20%20%20%20%20%20%20%20%7D%0A%20%20%20%20%20%20%20%20%7D%0A%20%20%20%20%20%20%7D%0A%20%20%20%20%7D%0A%20%20%7D%0A

@@ -0,0 +1,16 @@
---
firewalld_add:
- name: public
interfaces:
- enp9s0
services:
- http
- https
ports:
- 6443/tcp
- name: trusted
sources:
- 10.42.0.0/16
- 10.43.0.0/16
- {{ service_provider_ip }}

@@ -0,0 +1,16 @@
---
firewalld_add:
- name: public
interfaces:
- ens3
services:
- http
- https
ports:
- 26657/tcp
- 26656/tcp
- 1317/tcp
- name: trusted
sources:
- {{ service_provider_ip }}

@@ -0,0 +1,12 @@
[all]
{{ org_id }}-daemon ansible_host={{ service_provider_ip }}
{{ org_id }}-{{ location_id }}-cluster-control ansible_host={{ service_provider_ip }}
[so]
{{ org_id }}-daemon
[{{ org_id }}_{{ location_id }}]
{{ org_id }}-{{ location_id }}-cluster-control k8s_node_type=bootstrap k8s_pod_limit=1024 k8s_external_ip={{ service_provider_ip }}
[k8s:children]
{{ org_id }}_{{ location_id }}

@@ -0,0 +1,55 @@
---
# the default context is used for stack orchestrator deployments; for testing, a custom context name can be useful
#k8s_cluster_name: {{ org_id }}-{{ location_id }}-cluster
k8s_cluster_name: default
k8s_cluster_url: {{ org_id }}-{{ location_id }}-cluster-control.{{ full_domain }}
k8s_taint_servers: false
k8s_acme_email: "{{ support_email }}"
# k3s bundles traefik as the default ingress controller; we disable it and use nginx instead
k8s_disable:
- traefik
# secrets can be stored in a file or as a template; template secrets are base64-encoded dynamically, while file-based secrets must be encoded by hand
k8s_secrets:
- name: digitalocean-dns
type: file
source: secret-digitalocean-dns.yaml
k8s_manifests:
# ingress controller, replaces traefik which is explicitly disabled
- name: ingress-nginx
type: url
source: https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.10.1/deploy/static/provider/cloud/deploy.yaml
# cert-manager, required for letsencrypt
- name: cert-manager
type: url
source: https://github.com/cert-manager/cert-manager/releases/download/v1.15.1/cert-manager.yaml
# issuer for basic http certs
- name: letsencrypt-prod
type: template
source: shared/clusterissuer-acme.yaml
server: https://acme-v02.api.letsencrypt.org/directory
solvers:
- type: http
ingress: nginx
# issuer for wildcard dns certs
- name: letsencrypt-prod-wild
type: template
source: shared/clusterissuer-acme.yaml
server: https://acme-v02.api.letsencrypt.org/directory
solvers:
- type: dns
provider: digitalocean
tokenref: tokenSecretRef
secret_name: digitalocean-dns
secret_key: access-token
# initiate wildcard cert
- name: pwa.{{ full_domain }}
type: file
source: wildcard-pwa-{{ base_domain }}.yaml

@@ -0,0 +1,9 @@
services:
registry:
rpcEndpoint: 'http://{{ org_id }}-daemon.{{ full_domain }}:26657'
gqlEndpoint: 'http://{{ org_id }}-daemon.{{ full_domain }}:9473/api'
userKey: "{{ ALICE_PK }}"
bondId: "{{ BOND_ID }}"
chainId: lorotestnet-1
gas: 200000
fees: 200000alnt

@@ -0,0 +1,9 @@
{
"auths": {
"{{container_registry_domain}}": {
"username": "{{ container_registry_username }}",
"password": "{{ hashed_password }}",
"auth": "{{ b64_encoded_cred }}"
}
}
}

@@ -0,0 +1,21 @@
---
nginx_packages_install: false
nginx_server_name_hash: 64
nginx_proxy_read_timeout: 1200
nginx_proxy_send_timeout: 1200
nginx_proxy_connection_timeout: 75
nginx_sites:
- name: {{ org_id }}-console
url: {{ org_id }}-console.{{ full_domain }}
upstream: http://localhost:8080
template: basic-proxy
ssl: true
- name: {{ org_id }}-daemon
url: {{ org_id }}-daemon.{{ full_domain }}
upstream: http://localhost:9473
configs:
- rewrite ^/deployer(/.*)? https://webapp-deployer.pwa.{{full_domain}} permanent
template: websocket-proxy
ssl: true

@@ -0,0 +1,12 @@
apiVersion: v1
kind: Namespace
metadata:
name: cert-manager
---
apiVersion: v1
data:
access-token: {{ b64_encoded_token }}
kind: Secret
metadata:
name: digitalocean-dns
namespace: cert-manager

@@ -0,0 +1,16 @@
stack: container-registry
deploy-to: k8s
kube-config: /home/{{ ansible_user }}/.kube/config-default.yaml
network:
ports:
registry:
- '5000'
http-proxy:
- host-name: container-registry.pwa.{{full_domain}}
routes:
- path: '/'
proxy-to: registry:5000
volumes:
registry-data:
configmaps:
config: ./configmaps/config

@@ -0,0 +1,15 @@
stack: /home/{{ansible_user}}/cerc/fixturenet-laconicd-stack/stack-orchestrator/stacks/fixturenet-laconicd
deploy-to: compose
network:
ports:
laconicd:
- '6060:6060'
- '26657:26657'
- '26656:26656'
- '9473:9473'
- '9090:9090'
- '1317:1317'
volumes:
laconicd-data: ./data/laconicd-data
genesis-config: ./data/genesis-config

@@ -0,0 +1,9 @@
stack: /home/{{ansible_user}}/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-console
deploy-to: compose
network:
ports:
console:
- '8080:80'
volumes:
laconic-registry-data: ./data/laconic-registry-data

@@ -0,0 +1,35 @@
stack: webapp-deployer-backend
deploy-to: k8s
kube-config: {{ansible_env.HOME}}/.kube/config-default.yaml
image-registry: container-registry.pwa.{{full_domain}}/laconic-registry
network:
ports:
server:
- '9555'
http-proxy:
- host-name: webapp-deployer-api.pwa.{{ full_domain }}
routes:
- path: '/'
proxy-to: server:9555
volumes:
srv:
configmaps:
config: ./data/config
annotations:
container.apparmor.security.beta.kubernetes.io/{name}: unconfined
labels:
container.kubeaudit.io/{name}.allow-disabled-apparmor: "podman"
security:
privileged: true
resources:
containers:
reservations:
cpus: {{ cpu_reservation }}
memory: {{ memory_reservation }}
limits:
cpus: {{ cpu_limit }}
memory: {{ memory_limit }}
volumes:
reservations:
storage: 200G

@@ -0,0 +1,2 @@
---
support_email: {{ support_email }}

@@ -0,0 +1,15 @@
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: pwa.{{ full_domain }}
namespace: default
spec:
secretName: pwa.{{ full_domain }}
issuerRef:
name: letsencrypt-prod-wild
kind: ClusterIssuer
group: cert-manager.io
commonName: "*.pwa.{{ full_domain }}"
dnsNames:
- pwa.{{ full_domain }}
- "*.pwa.{{ full_domain }}"

@@ -0,0 +1,3 @@
container_registry_username: ""
container_registry_password: ""
container_registry_domain: "container-registry.pwa.{{ full_domain }}"

@@ -0,0 +1,5 @@
full_domain: ""
subdomain_prefix: ""
subdomain_cluster_control: "{{ subdomain_prefix }}-cluster-control"
service_provider_ip: ""
do_api_token: ""

@@ -0,0 +1,3 @@
gpg_user_name: ""
gpg_user_email: ""
gpg_passphrase: ""

@@ -0,0 +1,8 @@
target_host: "deployment_host"
gpg_key_id: "{{ sec_key_id }}"
vault_passphrase: "{{ gpg_passphrase }}"
org_id: ""
location_id: ""
base_domain: ""
support_email: ""
ansible_ssh_extra_args: '-o StrictHostKeyChecking=no'

@@ -0,0 +1,3 @@
username: ""
password: ""
path_to_ssh_key: ""

@@ -0,0 +1,8 @@
ALICE_PK: "{{ ALICE_PK }}"
BOND_ID: "{{ BOND_ID }}"
authority_name: ""
cpu_reservation: ""
memory_reservation: ""
cpu_limit: "6"
memory_limit: "8G"
deployer_gpg_passphrase: ""

stack-orchestrator-setup/.gitignore
@@ -0,0 +1 @@
hosts.ini

@@ -0,0 +1,102 @@
# stack-orchestrator-setup
## Setup Ansible
To get started, follow the [installation](../README.md#installation) guide to set up Ansible on your machine.
## Setup Stack Orchestrator
This playbook will install Docker and Stack Orchestrator (laconic-so) on the machine if they aren't already present.
Run the following commands in the [`stack-orchestrator-setup`](./) directory.
### On Local Host
To set up Stack Orchestrator and Docker locally, execute the `setup-laconic-so.yml` Ansible playbook:
```bash
LANG=en_US.utf8 ansible-playbook setup-laconic-so.yml --user $USER -kK
```
### On Remote Host
To run the playbook on a remote host:
- Create a new `hosts.ini` file:
```bash
cp ../hosts.example.ini hosts.ini
```
- Edit the [`hosts.ini`](./hosts.ini) file to run the playbook on a remote machine:
```ini
[deployment_host]
<host_name> ansible_host=<target_ip> ansible_user=<ssh_user> ansible_ssh_common_args='-o ForwardAgent=yes'
```
- Replace `<host_name>` with the alias of your choice
- Replace `<target_ip>` with the IP address or hostname of the target machine
- Replace `<ssh_user>` with the SSH username (e.g., dev, ubuntu)
- Verify that you are able to connect to the host using the following command:
```bash
ansible all -m ping -i hosts.ini -k
# Expected output:
# <host_name> | SUCCESS => {
# "ansible_facts": {
# "discovered_interpreter_python": "/usr/bin/python3.10"
# },
# "changed": false,
# "ping": "pong"
# }
```
- Execute the `setup-laconic-so.yml` Ansible playbook to set up Stack Orchestrator and Docker on the remote machine:
```bash
LANG=en_US.utf8 ansible-playbook setup-laconic-so.yml -i hosts.ini --extra-vars='{ "target_host": "deployment_host"}' --user $USER -kK
```
## Verify Installation
- After the installation is complete, check whether `$HOME/bin` is already included in your PATH by running:
```bash
echo $PATH | grep -q "$HOME/bin" && echo "$HOME/bin is already in PATH" || echo "$HOME/bin is not in PATH"
```
If the command outputs `"$HOME/bin is not in PATH"`, you'll need to add it to your `PATH`.
- To add `$HOME/bin` to your PATH, run the following command:
```bash
export PATH="$HOME/bin:$PATH"
```
- To make this change permanent, add the following line to your shell configuration file (`~/.bashrc` or `~/.zshrc`, depending on your shell):
```bash
# For bash users
echo 'export PATH="$HOME/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc
# For zsh users
echo 'export PATH="$HOME/bin:$PATH"' >> ~/.zshrc
source ~/.zshrc
```
- Once the PATH is set, verify the installation by running the following commands:
```bash
# Check version of docker
docker --version
# Check version of docker compose
docker compose version
# Check version of Stack Orchestrator
laconic-so version
```

@@ -0,0 +1,91 @@
- name: Set up Docker
hosts: "{{ target_host }}"
become: yes
vars:
target_host: "localhost"
docker_gpg_key_url: "https://download.docker.com/linux/ubuntu/gpg"
docker_gpg_key_path: "/etc/apt/keyrings/docker.asc"
tasks:
- name: Check if Docker is installed
command: which docker
register: is_docker_present
ignore_errors: yes
- block:
- name: Exit if docker is present
debug:
msg: "Docker already on host, ending play"
- meta: end_play
when: is_docker_present.rc == 0
- name: Update apt cache
apt:
update_cache: yes
- name: Install prerequisites
apt:
name:
- ca-certificates
- curl
state: present
- name: Ensure keyrings directory exists
file:
path: "/etc/apt/keyrings"
state: directory
mode: '0755'
- name: Download Docker GPG key
get_url:
url: "{{ docker_gpg_key_url }}"
dest: "{{ docker_gpg_key_path }}"
mode: '0644'
- name: Get system architecture
shell: dpkg --print-architecture
register: system_arch
- name: Add Docker repository
apt_repository:
repo: "deb [arch={{ system_arch.stdout }} signed-by={{ docker_gpg_key_path }}] https://download.docker.com/linux/ubuntu {{ ansible_lsb.codename }} stable"
state: present
filename: docker
- name: Update apt cache
apt:
update_cache: yes
- name: Install Docker packages
apt:
name:
- docker-ce
- docker-ce-cli
- containerd.io
- docker-buildx-plugin
- docker-compose-plugin
state: latest
notify: Restart Docker
- name: Add user to docker group
user:
name: "{{ ansible_user }}"
groups: docker
append: true
- name: Verify Docker installation by running the hello world image
command: docker run hello-world
register: hello_world_output
ignore_errors: true
become: no
- name: Display hello-world output
debug:
var: hello_world_output.stdout_lines
handlers:
- name: Restart Docker
service:
name: docker
state: restarted
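# Note: membership in the docker group only takes effect on a new login
# session, which is likely why the hello-world verification above sets
# ignore_errors: true.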

@@ -0,0 +1,57 @@
- name: Install Docker if it isn't present
import_playbook: setup-docker.yml
- name: Set up Stack Orchestrator
hosts: "{{ target_host }}"
vars:
target_host: "localhost"
environment:
PATH: "{{ ansible_env.PATH }}:/home/{{ansible_user}}/bin"
tasks:
- name: Check if Stack Orchestrator is installed
shell: which laconic-so
register: is_so_present
ignore_errors: yes
- block:
- name: Exit if Stack Orchestrator is present
debug:
msg: "Stack Orchestrator already on host, ending play"
- meta: end_play
when: is_so_present.rc == 0
- name: Install jq
apt:
name: jq
state: present
update_cache: yes
become: yes
- name: Ensure that directory ~/bin exists and is writable
file:
path: "{{ ansible_env.HOME }}/bin"
state: directory
mode: '0755'
- name: Download the laconic-so binary
get_url:
url: https://git.vdb.to/cerc-io/stack-orchestrator/releases/download/latest/laconic-so
dest: "{{ ansible_env.HOME }}/bin/laconic-so"
mode: '0755'
force: yes
- name: Ensure ~/.laconic-so directory exists
file:
path: "{{ ansible_env.HOME }}/.laconic-so"
state: directory
mode: '0755'
- name: Save the distribution url to ~/.laconic-so directory
copy:
dest: "{{ ansible_env.HOME }}/.laconic-so/config.yml"
content: |
distribution-url: https://git.vdb.to/cerc-io/stack-orchestrator/releases/download/latest/laconic-so
mode: '0644'