Compare commits


1 Commit

| Author | SHA1 | Message | Date |
|--------|------|---------|------|
| Neeraj | fcfda24172 | Start nodes and bridge from deployed contract's block number | 2024-10-16 15:15:07 +05:30 |
38 changed files with 336 additions and 370 deletions

---

@@ -36,22 +36,10 @@
 - Reference: <https://udhayakumarc.medium.com/error-ansible-requires-the-locale-encoding-to-be-utf-8-detected-iso8859-1-6da808387f7d>
-- Verify ansible installation by running the following command:
-```bash
-ansible --version
-# ansible [core 2.17.2]
-```
-- Install `sshpass` used for automating SSH password authentication
-```bash
-sudo apt-get install sshpass
-```
 ## Playbooks
 - [stack-orchestrator-setup](./stack-orchestrator-setup/README.md)
+- [l2-setup](./l2-setup/README.md)
 - [nitro-node-setup](./nitro-nodes-setup/README.md)
 - [nitro-bridge-setup](./nitro-bridge-setup/README.md)
 - [nitro-contracts-setup](./nitro-contracts-setup/README.md)

---

@@ -1,10 +1,8 @@
 # nitro-bridge-setup
-## Prerequisites
-- Setup Ansible: To get started, follow the [installation](../README.md#installation) guide to setup ansible on your machine.
-- Setup user with passwordless sudo: Follow steps from [Setup a user](../user-setup/README.md#setup-a-user) to setup a new user
+## Setup Ansible
+To get started, follow the [installation](../README.md#installation) guide to setup ansible on your machine
 ## Setup
@@ -29,6 +27,9 @@ The following commands have to be executed in the [`nitro-bridge-setup`](./) directory
 # This account should have tokens for funding Nitro channels
 nitro_chain_pk: ""
+# Specifies the block number to start looking for nitro adjudicator events
+nitro_chain_start_block: ""
 # Custom L2 token to be deployed
 token_name: "LaconicNetworkToken"
 token_symbol: "LNT"
@@ -42,13 +43,33 @@ The following commands have to be executed in the [`nitro-bridge-setup`](./) directory
 ## Run Nitro Bridge
+### On Local Host
+- To setup and run nitro bridge locally, execute the `run-nitro-bridge.yml` Ansible playbook:
+```bash
+LANG=en_US.utf8 ansible-playbook run-nitro-bridge.yml --extra-vars='{ "target_host": "localhost"}' --user $USER -kK
+```
+NOTE: By default, deployments are created in an `out` directory. To change this location, update the `nitro_directory` variable in the [setup-vars.yml](./setup-vars.yml) file
+- For skipping container build, set `"skip_container_build" : true` in the `--extra-vars` parameter:
+```bash
+LANG=en_US.utf8 ansible-playbook run-nitro-bridge.yml --extra-vars='{"target_host" : "localhost", "skip_container_build": true}' --user $USER -kK
+```
+### On Remote Host
+To run the playbook on a remote host:
 - Create a new `hosts.ini` file:
 ```bash
 cp ../hosts.example.ini hosts.ini
 ```
-- Edit the [`hosts.ini`](./hosts.ini) file:
+- Edit the [`hosts.ini`](./hosts.ini) file to run the playbook on a remote machine:
 ```ini
 [<deployment_host>]
@@ -58,12 +79,12 @@ The following commands have to be executed in the [`nitro-bridge-setup`](./) directory
 - Replace `<deployment_host>` with `nitro_host`
 - Replace `<host_name>` with the alias of your choice
 - Replace `<target_ip>` with the IP address or hostname of the target machine
-- Replace `<ssh_user>` with the username of the user that you set up on target machine (e.g. dev, ubuntu)
+- Replace `<ssh_user>` with the SSH username (e.g., dev, ubuntu)
-- Verify that you are able to connect to the host using the following command:
+- Verify that you are able to connect to the host using the following command
 ```bash
-ansible all -m ping -i hosts.ini
+ansible all -m ping -i hosts.ini -k
 # Expected output:
@@ -76,23 +97,21 @@ The following commands have to be executed in the [`nitro-bridge-setup`](./) directory
 # }
 ```
-- Execute the `run-nitro-bridge.yml` Ansible playbook for deploying nitro bridge:
+- Execute the `run-nitro-bridge.yml` Ansible playbook for remote deployment:
 ```bash
-LANG=en_US.utf8 ansible-playbook -i hosts.ini run-nitro-bridge.yml --extra-vars='{ "target_host": "nitro_host"}' --user $USER
+LANG=en_US.utf8 ansible-playbook -i hosts.ini run-nitro-bridge.yml --extra-vars='{ "target_host": "nitro_host"}' --user $USER -kK
 ```
-NOTE: By default, deployments are created in an `out` directory. To change this location, update the `nitro_directory` variable in the [setup-vars.yml](./setup-vars.yml) file
 - For skipping container build, run with `"skip_container_build" : true` in the `--extra-vars` parameter:
 ```bash
-LANG=en_US.utf8 ansible-playbook -i hosts.ini run-nitro-bridge.yml --extra-vars='{ "target_host": "nitro_host", "skip_container_build": true }' --user $USER
+LANG=en_US.utf8 ansible-playbook -i hosts.ini run-nitro-bridge.yml --extra-vars='{ "target_host": "nitro_host", "skip_container_build": true }' --user $USER -kK
 ```
 ## Check Deployment Status
-Run the following command in the directory where the bridge-deployment is created:
+- Run the following command in the directory where the bridge-deployment is created:
 - Check logs for deployments:
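The `--extra-vars` arguments in the playbook commands above begin with `{`, which Ansible parses as JSON, so the payload must be valid JSON. A minimal sketch checking the exact strings used in this diff (the parsing here stands in for Ansible's own; the payload strings are copied from the commands above):

```python
import json

# --extra-vars payloads from the ansible-playbook invocations above;
# Ansible treats an argument starting with '{' as JSON, so each must parse.
payloads = [
    '{ "target_host": "localhost"}',
    '{"target_host" : "localhost", "skip_container_build": true}',
    '{ "target_host": "nitro_host", "skip_container_build": true }',
]

for raw in payloads:
    vars_dict = json.loads(raw)  # raises json.JSONDecodeError if malformed
    # target_host is always present; skip_container_build defaults to false
    print(vars_dict["target_host"], vars_dict.get("skip_container_build", False))
```

The odd spacing inside the quotes (`"skip_container_build" : true`) is irrelevant to JSON, which is why both spellings in the document work.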

---

@@ -1,5 +1,6 @@
 nitro_chain_url: ""
 nitro_chain_pk: ""
+nitro_chain_start_block: 0
 nitro_sc_pk: ""
 na_address: ""
 vpa_address: ""

---

@@ -30,6 +30,14 @@
     timeout: 300
   ignore_errors: yes
+- name: Clone repositories required for nitro-stack
+  expect:
+    command: laconic-so --stack {{ ansible_env.HOME }}/cerc/nitro-stack/stack-orchestrator/stacks/bridge setup-repositories --git-ssh --pull
+    responses:
+      "Are you sure you want to continue connecting \\(yes/no/\\[fingerprint\\]\\)\\?": "yes"
+    timeout: 300
+  ignore_errors: yes
 - name: Build containers
   command: laconic-so --stack ~/cerc/nitro-stack/stack-orchestrator/stacks/bridge build-containers --force-rebuild
   when: not skip_container_build
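The `expect` task added above answers the SSH host-key prompt automatically. The keys under `responses:` are regular expressions, which is why the parentheses, brackets, and `?` are escaped; the double backslashes exist because the YAML double-quoted string consumes one level of escaping before the regex engine sees the pattern. A sketch verifying that the resulting pattern matches the prompt (the prompt wording is the standard OpenSSH text, assumed here):

```python
import re

# Pattern as the expect module sees it, after YAML turns "\\(" into "\("
pattern = r"Are you sure you want to continue connecting \(yes/no/\[fingerprint\]\)\?"

# Typical OpenSSH first-connection prompt (wording assumed for illustration)
prompt = (
    "The authenticity of host 'git.vdb.to (203.0.113.7)' can't be established.\n"
    "Are you sure you want to continue connecting (yes/no/[fingerprint])? "
)

# When this matches, the task sends the configured response: "yes"
match = re.search(pattern, prompt)
print(bool(match))
```

Without the escapes, `(yes/no/[fingerprint])?` would be read as a regex group and character class rather than literal text, and the prompt would never match.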

---

@@ -1,3 +1,3 @@
-target_host: "nitro_host"
+target_host: "localhost"
 nitro_directory: out
 skip_container_build: false

---

@@ -3,6 +3,7 @@ deploy-to: compose
 config:
   NITRO_CHAIN_URL: {{ nitro_chain_url }}
   NITRO_CHAIN_PK: {{ nitro_chain_pk }}
+  NITRO_CHAIN_START_BLOCK: {{ nitro_chain_start_block }}
   NITRO_SC_PK: {{ nitro_sc_pk }}
   NA_ADDRESS: "{{ na_address }}"
   VPA_ADDRESS: "{{ vpa_address }}"

---

@@ -1,10 +1,8 @@
 # nitro-contracts-setup
-## Prerequisites
-- Setup Ansible: To get started, follow the [installation](../README.md#installation) guide to setup ansible on your machine.
-- Setup user with passwordless sudo: Follow steps from [Setup a user](../user-setup/README.md#setup-a-user) to setup a new user with passwordless sudo
+## Setup Ansible
+To get started, follow the [installation](../README.md#installation) guide to setup ansible on your machine
 ## Setup
@@ -29,20 +27,40 @@ The following commands have to be executed in the [`nitro-contracts-setup`](./) directory
 geth_deployer_pk: ""
 # Custom L1 token to be deployed
-token_name: "TestToken"
-token_symbol: "TST"
+token_name: "LaconicNetworkToken"
+token_symbol: "LNT"
 initial_token_supply: "129600"
 ```
 ## Deploy Contracts
+### On Local Host
+- To deploy nitro contracts locally, execute the `deploy-contracts.yml` Ansible playbook:
+```bash
+LANG=en_US.utf8 ansible-playbook deploy-contracts.yml --extra-vars='{ "target_host": "localhost"}' --user $USER -kK
+```
+NOTE: By default, deployments are created in an `out` directory. To change this location, update the `nitro_directory` variable in the [setup-vars.yml](./setup-vars.yml) file
+- For skipping container build, set `"skip_container_build" : true` in the `--extra-vars` parameter:
+```bash
+LANG=en_US.utf8 ansible-playbook deploy-contracts.yml --extra-vars='{"target_host" : "localhost", "skip_container_build": true}' --user $USER -kK
+```
+### On Remote Host
+To run the playbook on a remote host:
 - Create a new `hosts.ini` file:
 ```bash
 cp ../hosts.example.ini hosts.ini
 ```
-- Edit the [`hosts.ini`](./hosts.ini) file:
+- Edit the [`hosts.ini`](./hosts.ini) file to run the playbook on a remote machine:
 ```ini
 [<deployment_host>]
@@ -52,12 +70,12 @@ The following commands have to be executed in the [`nitro-contracts-setup`](./) directory
 - Replace `<deployment_host>` with `nitro_host`
 - Replace `<host_name>` with the alias of your choice
 - Replace `<target_ip>` with the IP address or hostname of the target machine
-- Replace `<ssh_user>` with the username of the user that you set up on target machine (e.g. dev, ubuntu)
+- Replace `<ssh_user>` with the SSH username (e.g., dev, ubuntu)
 - Verify that you are able to connect to the host using the following command
 ```bash
-ansible all -m ping -i hosts.ini
+ansible all -m ping -i hosts.ini -k
 # Expected output:
@@ -70,23 +88,21 @@ The following commands have to be executed in the [`nitro-contracts-setup`](./) directory
 # }
 ```
-- Execute the `deploy-contracts.yml` Ansible playbook to deploy nitro contracts:
+- Execute the `deploy-contracts.yml` Ansible playbook for remote deployment:
 ```bash
-LANG=en_US.utf8 ansible-playbook -i hosts.ini deploy-contracts.yml --extra-vars='{ "target_host": "nitro_host"}' --user $USER
+LANG=en_US.utf8 ansible-playbook -i hosts.ini deploy-contracts.yml --extra-vars='{ "target_host": "nitro_host"}' --user $USER -kK
 ```
-NOTE: By default, deployments are created in an `out` directory. To change this location, update the `nitro_directory` variable in the [setup-vars.yml](./setup-vars.yml) file
 - For skipping container build, run with `"skip_container_build" : true` in the `--extra-vars` parameter:
 ```bash
-LANG=en_US.utf8 ansible-playbook -i hosts.ini deploy-contracts.yml --extra-vars='{ "target_host": "nitro_host", "skip_container_build": true }' --user $USER
+LANG=en_US.utf8 ansible-playbook -i hosts.ini deploy-contracts.yml --extra-vars='{ "target_host": "nitro_host", "skip_container_build": true }' --user $USER -kK
 ```
 ## Check Deployment Status
-Run the following command in the directory where the nitro-contracts-deployment is created:
+- Run the following command in the directory where the nitro-contracts-deployment is created:
 - Check logs for deployments:
@@ -97,7 +113,7 @@ Run the following command in the directory where the nitro-contracts-deployment is created
 ## Get Contract Addresses
-Run the following commands in the directory where the deployments are created:
+- Run the following commands in the directory where the deployments are created:
 - Get addresses of L1 nitro contracts:

---

@@ -14,6 +14,7 @@
     path: "{{ nitro_directory }}"
     state: directory
 - name: Change owner of nitro-directory
   file:
     path: "{{ nitro_directory }}"
@@ -22,7 +23,7 @@
     state: directory
     recurse: yes
-- name: Clone nitro stack repo
+- name: Clone go-nitro stack repo
   expect:
     command: laconic-so fetch-stack git.vdb.to/cerc-io/nitro-stack --git-ssh --pull
     responses:
@@ -32,14 +33,14 @@
 - name: Clone repositories required for nitro-stack
   expect:
-    command: laconic-so --stack ~/cerc/nitro-stack/stack-orchestrator/stacks/nitro-contracts setup-repositories --git-ssh --pull
+    command: laconic-so --stack ~/cerc/nitro-stack/stack-orchestrator/stacks/bridge setup-repositories --git-ssh --pull
     responses:
       "Are you sure you want to continue connecting \\(yes/no/\\[fingerprint\\]\\)\\?": "yes"
     timeout: 300
   ignore_errors: yes
 - name: Build containers
-  command: laconic-so --stack ~/cerc/nitro-stack/stack-orchestrator/stacks/nitro-contracts build-containers --force-rebuild
+  command: laconic-so --stack ~/cerc/nitro-stack/stack-orchestrator/stacks/bridge build-containers --force-rebuild
   when: not skip_container_build
 - name: Generate spec file for nitro contracts deployment
@@ -93,9 +94,17 @@
     msg: "VPA_ADDRESS: {{ vpa_address.stdout }}"
 - name: Export ASSET_ADDRESS
-  shell: laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "jq -r '.\"{{ geth_chain_id }}\"[0].contracts.{{ token_name }}.address' /app/deployment/nitro-addresses.json"
+  shell: laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "jq -r '.\"{{ geth_chain_id }}\"[0].contracts.Token.address' /app/deployment/nitro-addresses.json"
   args:
     chdir: "{{ nitro_directory }}"
   register: asset_address
 - debug:
     msg: "ASSET_ADDRESS: {{ asset_address.stdout }}"
+- name: Export NITRO_CHAIN_START_BLOCK
+  shell: laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "jq '.receipt.blockNumber' /app/deployment/hardhat-deployments/geth/NitroAdjudicator.json"
+  args:
+    chdir: "{{ nitro_directory }}"
+  register: nitro_chain_start_block
+- debug:
+    msg: "NITRO_CHAIN_START_BLOCK: {{ nitro_chain_start_block.stdout }}"
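The two export tasks above read deployment artifacts with `jq`. Assuming file layouts like the ones sketched below (the values and the chain id are illustrative, not actual deployment output), the jq filters `."<chain-id>"[0].contracts.Token.address` and `.receipt.blockNumber` behave like this Python equivalent:

```python
import json

# Illustrative stand-in for /app/deployment/nitro-addresses.json (layout assumed)
nitro_addresses = json.loads("""
{
  "1212121": [
    { "contracts": { "Token": { "address": "0x00000000000000000000000000000000000000AA" } } }
  ]
}
""")
geth_chain_id = "1212121"
# jq: ."<chain-id>"[0].contracts.Token.address
asset_address = nitro_addresses[geth_chain_id][0]["contracts"]["Token"]["address"]

# Illustrative stand-in for hardhat-deployments/geth/NitroAdjudicator.json:
# hardhat-deploy records the deployment receipt, whose blockNumber is the
# block the NitroAdjudicator was deployed in
adjudicator = {"receipt": {"blockNumber": 42}}
# jq: .receipt.blockNumber
start_block = adjudicator["receipt"]["blockNumber"]

print(asset_address, start_block)
```

This is why the exported `NITRO_CHAIN_START_BLOCK` lets nodes skip scanning the chain for adjudicator events before the contract existed.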

---

@@ -1,3 +1,3 @@
-target_host: "nitro_host"
+target_host: "localhost"
 nitro_directory: out
 skip_container_build: false

---

@@ -1,10 +1,16 @@
 # nitro-nodes-setup
-## Prerequisites
-- Setup Ansible: To get started, follow the [installation](../README.md#installation) guide to setup ansible on your machine.
-- Setup user with passwordless sudo: Follow steps from [Setup a user](../user-setup/README.md#setup-a-user) to setup a new user with passwordless sudo
+## Setup Ansible
+To get started, follow the [installation](../README.md#installation) guide to setup ansible on your machine
+## Setup for Remote Host
+To run the playbook on a remote host:
+- Follow steps from [setup remote hosts](../README.md#setup-remote-hosts)
+- Update / append the [`hosts.ini`](../hosts.ini) file for your remote host with `<deployment_host>` set as `nitro_host`
 ## Setup
@@ -28,6 +34,9 @@ The following commands have to be executed in [`nitro-nodes-setup`](./) directory
 # Private key of the account on chain that is used for funding channels in Nitro node
 nitro_chain_pk: ""
+# Specifies the block number to start looking for nitro adjudicator events
+nitro_chain_start_block: ""
 # Contract address of NitroAdjudicator
 na_address: ""
@@ -54,13 +63,33 @@ The following commands have to be executed in [`nitro-nodes-setup`](./) directory
 ## Run Nitro Node
+### On Local Host
+- To run a nitro node, execute the `run-nitro-nodes.yml` Ansible playbook by running the following command:
+```bash
+LANG=en_US.utf8 ansible-playbook run-nitro-nodes.yml --extra-vars='{ "target_host": "localhost"}' --user $USER -kK
+```
+NOTE: By default, deployments are created in an `out` directory. To change this location, update the `nitro_directory` variable in the [setup-vars.yml](./setup-vars.yml) file
+- For skipping container build, run with `"skip_container_build" : true` in the `--extra-vars` parameter:
+```bash
+LANG=en_US.utf8 ansible-playbook -i hosts.ini run-nitro-nodes.yml --extra-vars='{ "target_host": "localhost", "skip_container_build": true }' --user $USER -kK
+```
+### On Remote Host
+To run the playbook on a remote host:
 - Create a new `hosts.ini` file:
 ```bash
 cp ../hosts.example.ini hosts.ini
 ```
-- Edit the [`hosts.ini`](./hosts.ini) file:
+- Edit the [`hosts.ini`](./hosts.ini) file to run the playbook on a remote machine:
 ```ini
 [<deployment_host>]
@@ -70,12 +99,12 @@ The following commands have to be executed in [`nitro-nodes-setup`](./) directory
 - Replace `<deployment_host>` with `nitro_host`
 - Replace `<host_name>` with the alias of your choice
 - Replace `<target_ip>` with the IP address or hostname of the target machine
-- Replace `<ssh_user>` with the username of the user that you set up on target machine (e.g. dev, ubuntu)
+- Replace `<ssh_user>` with the SSH username (e.g., dev, ubuntu)
 - Verify that you are able to connect to the host using the following command
 ```bash
-ansible all -m ping -i hosts.ini
+ansible all -m ping -i hosts.ini -k
 # Expected output:
@@ -90,23 +119,22 @@ The following commands have to be executed in [`nitro-nodes-setup`](./) directory
 - Copy and edit the [`nitro-vars.yml`](./nitro-vars.yml) file as described in the [local setup](./README.md#run-nitro-node-on-local-host) section
-- Execute the `run-nitro-nodes.yml` Ansible playbook to deploy nitro nodes:
+- Execute the `run-nitro-nodes.yml` Ansible playbook for remote deployment:
 ```bash
-LANG=en_US.utf8 ansible-playbook -i hosts.ini run-nitro-nodes.yml --extra-vars='{ "target_host": "nitro_host"}' --user $USER
+LANG=en_US.utf8 ansible-playbook -i hosts.ini run-nitro-nodes.yml --extra-vars='{ "target_host": "nitro_host"}' --user $USER -kK
 ```
-NOTE: By default, deployments are created in an `out` directory. To change this location, update the `nitro_directory` variable in the [setup-vars.yml](./setup-vars.yml) file
 - For skipping container build, run with `"skip_container_build" : true` in the `--extra-vars` parameter:
 ```bash
-LANG=en_US.utf8 ansible-playbook -i hosts.ini run-nitro-nodes.yml --extra-vars='{ "target_host": "nitro_host", "skip_container_build": true }' --user $USER
+LANG=en_US.utf8 ansible-playbook -i hosts.ini run-nitro-nodes.yml --extra-vars='{ "target_host": "nitro_host", "skip_container_build": true }' --user $USER -kK
 ```
 ## Check Deployment Status
-Run the following command in the directory where the deployments are created
+- Run the following command in the directory where the deployments are created
 - Check L1 nitro node logs:

---

@@ -1,6 +1,7 @@
 nitro_chain_url: ""
 nitro_sc_pk: ""
 nitro_chain_pk: ""
+nitro_chain_start_block: 0
 na_address: ""
 vpa_address: ""
 ca_address: ""

---

@@ -29,7 +29,7 @@
     state: directory
     recurse: yes
-- name: Clone nitro-stack repo
+- name: Clone go-nitro stack repo
   expect:
     command: laconic-so fetch-stack git.vdb.to/cerc-io/nitro-stack --git-ssh --pull
     responses:
@@ -37,6 +37,14 @@
     timeout: 300
   ignore_errors: yes
+- name: Clone repositories required for nitro-stack
+  expect:
+    command: laconic-so --stack {{ ansible_env.HOME }}/cerc/nitro-stack/stack-orchestrator/stacks/nitro-node setup-repositories --git-ssh --pull
+    responses:
+      "Are you sure you want to continue connecting \\(yes/no/\\[fingerprint\\]\\)\\?": "yes"
+    timeout: 300
+  ignore_errors: yes
 - name: Build containers
   command: laconic-so --stack {{ ansible_env.HOME }}/cerc/nitro-stack/stack-orchestrator/stacks/nitro-node build-containers --force-rebuild
   when: not skip_container_build
@@ -115,8 +123,3 @@
   get_url:
     url: https://git.vdb.to/cerc-io/testnet-laconicd-stack/raw/branch/main/ops/stage2/nitro-node-config.yml
     dest: "{{ nitro_directory }}"
-- name: Fetch required asset addresses
-  get_url:
-    url: https://git.vdb.to/cerc-io/testnet-laconicd-stack/raw/branch/main/ops/stage2/assets.json
-    dest: "{{ nitro_directory }}"

---

@@ -1,3 +1,3 @@
-target_host: "nitro_host"
+target_host: "localhost"
 nitro_directory: out
 skip_container_build: false

---

@@ -6,3 +6,4 @@ VPA_ADDRESS="{{ vpa_address }}"
 CA_ADDRESS="{{ ca_address }}"
 NITRO_BOOTPEERS={{ nitro_l1_bridge_multiaddr }}
 NITRO_EXT_MULTIADDR={{ nitro_l1_ext_multiaddr }}
+NITRO_CHAIN_START_BLOCK={{ nitro_chain_start_block }}

---

@@ -1,18 +1,76 @@
 # service-provider-setup
-This setup has been tested on digitalocean droplets running ubuntu 22.04 LTS
+## Setup Ansible
+To get started, follow the [installation](../README.md#installation) guide to setup ansible on your machine
 ## Prerequisites
-- Setup Ansible: follow the [installation](../README.md#installation) guide to setup ansible on your machine
 - Set up a DigitalOcean Droplet with passwordless SSH access
 - Buy a domain and configure [nameservers pointing to DigitalOcean](https://docs.digitalocean.com/products/networking/dns/getting-started/dns-registrars/)
 - Generate a DigitalOcean access token, used for API authentication and managing cloud resources
-- Setup a user: Follow steps from [Setup a user](../user-setup/README.md#setup-a-user) to setup a new user with passwordless sudo
+## Setup a new User
+- Create a new `hosts.ini` file:
+```bash
+cp ../hosts.example.ini hosts.ini
+```
+- Edit the [`hosts.ini`](./hosts.ini) file to run the playbook on a remote machine:
+```ini
+[root_host]
+<host_name> ansible_host=<target_ip> ansible_user=<ssh_user> ansible_ssh_common_args='-o ForwardAgent=yes'
+```
+- Replace `<host_name>` with the desired `hostname` of the remote machine
+- Replace `<target_ip>` with the IP address or hostname of the target machine
+- Replace `<ssh_user>` with `root`
+- Verify that you are able to connect to the host using the following command:
+```bash
+ansible all -m ping -i hosts.ini
+# Expected output:
+# <host_name> | SUCCESS => {
+# "ansible_facts": {
+# "discovered_interpreter_python": "/usr/bin/python3.10"
+# },
+# "changed": false,
+# "ping": "pong"
+# }
+```
+- Setup `user-vars.yml` using the example file
+```bash
+cp vars/user-vars.example.yml vars/user-vars.yml
+```
+- Edit the `user-vars.yml` file:
+```bash
+# name of the user you want to setup on the target host
+username: ""
+# password of the user you want to setup on the target host
+password: ""
+# path to the ssh key on your machine, eg: "/home/user/.ssh/id_rsa.pub"
+path_to_ssh_key: ""
+```
+- Execute the `setup-user.yml` Ansible playbook to create a user with passwordless sudo permissions:
+```bash
+LANG=en_US.utf8 ansible-playbook setup-user.yml -i hosts.ini --extra-vars='{ "target_host": "deployment_host" }'
+```
 ## Become a Service Provider
@@ -26,7 +84,6 @@ This setup has been tested on digitalocean droplets running ubuntu 22.04 LTS
 cp gpg-vars.example.yml gpg-vars.yml
 cp k8s-vars.example.yml k8s-vars.yml
 cp container-vars.example.yml container-vars.yml
-cp laconicd-vars.example.yml laconicd-vars.yml
 cp webapp-vars.example.yml webapp-vars.yml
 cd -
 ```
@@ -36,46 +93,41 @@ This setup has been tested on digitalocean droplets running ubuntu 22.04 LTS
 ```bash
 # vars/dns-vars.yml
 full_domain: "" # eg: laconic.com
+subdomain_prefix: "" # eg: lcn-cad
 service_provider_ip: "" # eg: 23.111.78.179
 do_api_token: "" # Digital Ocean access token that you generated, eg: dop_v1...
 # vars/gpg-vars.yml
-gpg_user_name: "" # full name of the user for the GPG key
-gpg_user_email: "" # email address associated with the GPG key
-gpg_passphrase: "" # passphrase for securing the GPG key
+gpg_user_name: "" # Full name of the user for the GPG key
+gpg_user_email: "" # Email address associated with the GPG key
+gpg_passphrase: "" # Passphrase for securing the GPG key
 # vars/k8s-vars.yml
+target_host: "deployment_host"
 org_id: "" # eg: lcn
 location_id: "" # eg: cad
-base_domain: "" # eg: laconic
 support_email: "" # eg: support@laconic.com
 # vars/container-vars.yml
 container_registry_username: "" # username to login to the container registry
 container_registry_password: "" # password to login to the container registry
-# vars/laconicd-vars.yml
-chain_id: "" # chain id to use for the Laconic chain
 # vars/webapp-vars.yml
-authority_name: "" # eg: laconic-authority
-cpu_reservation: "1" # minimum number of cpu cores to be used, eg: 1
-memory_reservation: "2G" # minimum amount of memory in GB to be used, eg: 2G
-cpu_limit: "6" # maximum number of cpu cores to be used, eg: 6
-memory_limit: "8G" # maximum amount of memory in GB to be used, eg: 8G
+authority_name: "" # eg: my-org-name
+cpu_reservation: "" # Minimum number of cpu cores to be used, eg: 2
+memory_reservation: "" # Minimum amount of memory in GB to be used, eg: 4G
+cpu_limit: "" # Maximum number of cpu cores to be used, eg: 6
+memory_limit: "" # Maximum amount of memory in GB to be used, eg: 8G
 deployer_gpg_passphrase: "" # passphrase for creating GPG key used by webapp-deployer, eg: SECRET
+handle_auction_requests: "true" # whether the webapp deployer should handle deployment auction requests, eg: true
+auction_bid_amount: "500000" # bid amount for deployment auctions in alnt, eg: 500000
 ```
-- Create a new `hosts.ini` file:
-```bash
-cp ../hosts.example.ini hosts.ini
-```
-- Edit the [`hosts.ini`](./hosts.ini) file:
+- Update the [`hosts.ini`](./hosts.ini) file:
 ```ini
-[root_host]
-<host_name> ansible_host=<target_ip> ansible_user=root ansible_ssh_common_args='-o ForwardAgent=yes'
 [deployment_host]
 <host_name> ansible_host=<target_ip> ansible_user=<new_username> ansible_ssh_common_args='-o ForwardAgent=yes'
 ```
@ -115,87 +167,8 @@ This setup has been tested on digitalocean droplets running ubuntu 22.04 LTS
After the playbook finishes executing, the following services will be deployed (your setup should look similar to the example below): After the playbook finishes executing, the following services will be deployed (your setup should look similar to the example below):
- laconicd chain RPC endpoint: <http://lcn-daemon.laconic.com:26657> - laconicd chain RPC endpoint: http://lcn-daemon.laconic.com:26657
- laconicd GQL endpoint: <http://lcn-daemon.laconic.com:9473/api> - laconic console: http://lcn-daemon.laconic.com:8080/registry
- laconic console: <http://lcn-console.laconic.com:8080/registry> - laconicd GQL endpoint: http://lcn-daemon.laconic.com:9473/api
- webapp deployer API: <https://webapp-deployer-api.pwa.laconic.com> - webapp deployer API: https://webapp-deployer-api.pwa.laconic.com
- webapp deployer UI: <https://webapp-deployer-ui.pwa.laconic.com> - webapp deployer UI: https://webapp-deployer-ui.pwa.laconic.com
## Cleanup
Run the following steps on the target machine to stop the webapp-deployer, container-registry, fixturenet-laconicd and laconic-console-deployment, undeploy k8s, remove GPG keys and DNS records
- Stop deployments
```
$ laconic-so deployment --dir webapp-ui stop
$ laconic-so deployment --dir webapp-deployer
$ laconic-so deployment --dir container-registry stop
$ laconic-so deployment --dir laconic-console-deployment stop --delete-volumes
$ laconic-so deployment --dir fixturenet-laconicd-deployment stop --delete-volumes
```
- Remove deployment directories
```
sudo rm -rf webapp-ui
sudo rm -rf webapp-deployer
sudo rm -rf container-registry
sudo rm -rf laconic-console-deployment
sudo rm -rf fixturenet-laconicd-deployment
```
- Remove spec files
```
rm webapp-deployer.spec
rm container-registry.spec
rm laconic-console-spec.yml
rm fixturenet-laconicd-spec.yml
```
- Undeploy the k8s
```
$ cd service-provider-template
$ export VAULT_KEY=<gpg_passphrase>
$ bash .vault/vault-rekey.sh
$ ansible-playbook -i hosts site.yml --tags=k8s --limit=<org_id>_<location_id> --user <user> --extra-vars 'k8s_action=destroy'
```
- Remove service-provider-template repo
```
$ rm -rf service-provider-template
```
- Remove any existing GPG keys
```
$ rm -rf gpg-keys/
$ gpg --list-secret-keys --keyid-format=long
/home/dev/.gnupg/pubring.kbx
----------------------------
sec rsa4096/DA9E3D638930A699 2024-10-15 [SCEA]
69A3200727091E72B773BBEBDA9E3D638930A699
uid [ultimate] deepstack <support@deepstacksoft.com>
ssb rsa3072/2B5D80CF44753EFD 2024-10-15 [SEA]
sec rsa3072/2449A62C838440AB 2024-10-15 [SC]
646A42164F978DC1415C11F12449A62C838440AB
uid [ultimate] webapp-deployer-api.deepstack.com
ssb rsa3072/67576558A2F2FE91 2024-10-15 [E]
$ gpg --delete-secret-key 69A3200727091E72B773BBEBDA9E3D638930A699
$ gpg --delete-key 69A3200727091E72B773BBEBDA9E3D638930A699
$ gpg --delete-secret-key 646A42164F978DC1415C11F12449A62C838440AB
$ gpg --delete-key 646A42164F978DC1415C11F12449A62C838440AB
```
- Remove the user if required
```bash
$ userdel <user>
# If userdel reports that the user is still in use, e.g.
# "userdel: user <user> is currently used by process 1639",
# kill that process first and retry:
# $ kill -9 1639
```
- Remove DNS records using DigitalOcean's API:
- <https://docs.digitalocean.com/reference/api/api-try-it-now/#/Domain%20Records/domains_delete_record>
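For bulk cleanup, the record deletions can be scripted against the API linked above. A hedged sketch — the endpoints are from DigitalOcean's v2 API, but the token, domain, and record ID below are placeholders you must replace with your own values (look record IDs up with the list call first):

```bash
# Placeholders: substitute your own token, domain, and record ID.
DO_API_TOKEN="dop_v1_example"
DOMAIN="laconic.com"
RECORD_ID="123456789"

# List records to find the IDs to delete:
# curl -s -H "Authorization: Bearer $DO_API_TOKEN" \
#   "https://api.digitalocean.com/v2/domains/$DOMAIN/records?per_page=200"

# Delete a single record by ID:
# curl -s -X DELETE -H "Authorization: Bearer $DO_API_TOKEN" \
#   "https://api.digitalocean.com/v2/domains/$DOMAIN/records/$RECORD_ID"

echo "would DELETE https://api.digitalocean.com/v2/domains/$DOMAIN/records/$RECORD_ID"
```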
```diff
@@ -10,7 +10,6 @@
     - vars/container-vars.yml
     - vars/k8s-vars.yml
     - vars/dns-vars.yml
-    - vars/laconicd-vars.yml

   tasks:
     - name: Ensure gpg-keys directory exists
@@ -44,7 +43,7 @@
     - name: Create laconic config file
       template:
         src: "./templates/laconic.yml.j2"
-        dest: "{{ ansible_env.HOME }}/config/laconic.yml"
+        dest: "config/laconic.yml"

     - name: Copy the gpg private key file to config dir
       copy:
@@ -66,7 +65,7 @@
         --laconic-config /home/root/config/laconic.yml \
         --api-url https://webapp-deployer-api.pwa.{{ full_domain }} \
         --public-key-file /home/root/config/webapp-deployer-api.{{ full_domain }}.pgp.pub \
-        --lrn lrn://{{ authority_name }}/deployers/webapp-deployer-api.pwa.{{ full_domain }} \
+        --lrn lrn://{{ authority_name }}/deployers/webapp-deployer-api.{{ full_domain }} \
         --min-required-payment 0
       register: publish_output
@@ -79,7 +78,7 @@
         src: "./templates/specs/webapp-deployer.spec.j2"
         dest: "webapp-deployer.spec"

-    - name: Create deployment directory for webapp-deployer
+    - name: Create the deployment directory from the spec file
       command: >
         laconic-so --stack webapp-deployer-backend deploy create
         --deployment-dir webapp-deployer --spec-file webapp-deployer.spec
```
```diff
@@ -26,7 +26,7 @@
         --image cerc/webapp-deployment-status-ui:local --url https://webapp-deployer-ui.pwa.{{ full_domain }}
         --env-file ~/cerc/webapp-deployment-status-ui/.env

-    - name: Push webapp-ui images to container registry
+    - name: Push image to container registry
       command: laconic-so deployment --dir webapp-ui push-images

     - name: Update config file for webapp ui
```
```diff
@@ -8,7 +8,6 @@
     - vars/webapp-vars.yml
     - vars/dns-vars.yml
     - vars/k8s-vars.yml
-    - vars/laconicd-vars.yml

   tasks:
     - name: Clone the stack repo
```
```diff
@@ -4,9 +4,6 @@
   environment:
     PATH: "{{ ansible_env.PATH }}:/home/{{ansible_user}}/bin"

-  vars_files:
-    - vars/laconicd-vars.yml

   tasks:
     - name: Clone the fixturenet-laconicd-stack repo
       command: laconic-so fetch-stack git.vdb.to/cerc-io/fixturenet-laconicd-stack --pull
@@ -18,7 +15,7 @@
     - name: Build container images
       command: laconic-so --stack ~/cerc/fixturenet-laconicd-stack/stack-orchestrator/stacks/fixturenet-laconicd build-containers --force-rebuild

-    - name: Generate spec file for laconicd deployment
+    - name: Generate over spec file for laconicd deployment
       template:
         src: "./templates/specs/fixturenet-laconicd-spec.yml.j2"
         dest: "fixturenet-laconicd-spec.yml"
@@ -32,10 +29,5 @@
       command: laconic-so --stack ~/cerc/fixturenet-laconicd-stack/stack-orchestrator/stacks/fixturenet-laconicd deploy create --spec-file fixturenet-laconicd-spec.yml --deployment-dir fixturenet-laconicd-deployment
       when: not deployment_dir.stat.exists

-    - name: Create laconicd config
-      template:
-        src: "./templates/configs/laconicd-config.env.j2"
-        dest: "fixturenet-laconicd-deployment/config.env"

     - name: Start the deployment
       command: laconic-so deployment --dir fixturenet-laconicd-deployment start
```
```diff
@@ -6,16 +6,6 @@
     - vars/k8s-vars.yml

   tasks:
-    - name: Check if domain exists
-      community.digitalocean.digital_ocean_domain_facts:
-        oauth_token: "{{ do_api_token }}"
-      register: existing_domains
-
-    - name: Fail if domain already exists
-      fail:
-        msg: "Domain {{ full_domain }} already exists."
-      when: full_domain in existing_domains.data | map(attribute='name') | list
-
     - name: Create a domain
       community.digitalocean.digital_ocean_domain:
         state: present
@@ -58,7 +48,7 @@
         data: "{{ subdomain_cluster_control }}.{{ full_domain }}"
         domain: "{{ full_domain }}"
         type: CNAME
-        name: "{{ subdomain_prefix }}"
+        name: "{{ subdomain_prefix }}.{{ full_domain }}"
         ttl: 43200

     - name: Create CNAME record for laconicd endpoint
@@ -68,7 +58,7 @@
         data: "{{ org_id }}-daemon.{{ full_domain }}"
         domain: "{{ full_domain }}"
         type: CNAME
-        name: "laconicd"
+        name: "laconicd.{{ full_domain }}"
         ttl: 43200

     - name: Create CNAME record for backend
@@ -78,7 +68,7 @@
         data: "{{ org_id }}-daemon.{{ full_domain }}"
         domain: "{{ full_domain }}"
         type: CNAME
-        name: "{{ org_id }}-backend"
+        name: "{{ org_id }}-backend.{{ full_domain }}"
         ttl: 43200

     - name: Create CNAME record for console
```
```diff
@@ -88,35 +78,47 @@
         data: "{{ org_id }}-daemon.{{ full_domain }}"
         domain: "{{ full_domain }}"
         type: CNAME
-        name: "{{ org_id }}-console"
+        name: "{{ org_id }}-console.{{ full_domain }}"
         ttl: 43200

-    - name: Create wildcard CNAME record for subdomain
+    - name: Create CNAME record for org and location
       community.digitalocean.digital_ocean_domain_record:
         state: present
         oauth_token: "{{ do_api_token }}"
-        name: "*.{{ subdomain_prefix }}"
-        data: "{{ subdomain_prefix }}-cluster-control.{{ full_domain }}"
+        data: "{{ org_id }}-daemon.{{ full_domain }}"
         domain: "{{ full_domain }}"
         type: CNAME
+        name: "{{ subdomain_prefix }}"
+        ttl: 43200
+
+    - name: Create wildcard A record for subdomain
+      community.digitalocean.digital_ocean_domain_record:
+        state: present
+        oauth_token: "{{ do_api_token }}"
+        name: "{{ subdomain_cluster_control }}.{{ full_domain }}"
+        data: "{{ service_provider_ip }}"
+        domain: "{{ full_domain }}"
+        type: A
+        name: "*.{{ subdomain_prefix }}"
         ttl: 43200

     - name: Create CNAME record for pwa
       community.digitalocean.digital_ocean_domain_record:
         state: present
         oauth_token: "{{ do_api_token }}"
-        name: "pwa"
-        data: "{{ subdomain_prefix }}-cluster-control.{{ full_domain }}"
+        data: "{{ subdomain_cluster_control }}.{{ full_domain }}"
         domain: "{{ full_domain }}"
         type: CNAME
+        name: "pwa"
         ttl: 43200

-    - name: Create wildcard CNAME record for pwa
+    - name: Create wildcard A record for pwa
       community.digitalocean.digital_ocean_domain_record:
         state: present
         oauth_token: "{{ do_api_token }}"
-        name: "*.pwa"
-        data: "{{ subdomain_prefix }}-cluster-control.{{ full_domain }}"
+        name: "{{ subdomain_cluster_control }}.{{ full_domain }}"
+        data: "{{ service_provider_ip }}"
         domain: "{{ full_domain }}"
-        type: CNAME
+        type: A
+        name: "*.pwa"
         ttl: 43200
```
```diff
@@ -66,7 +66,6 @@
       command: gpg-agent --daemon
       ignore_errors: yes

-    # Cache GPG passphrase by signing a dummy string to avoid passphrase prompts in later steps
     - name: Sign a dummy string using gpg-key
       shell: echo "This is a dummy string." | gpg --batch --yes --local-user "{{ gpg_key_id }}" --passphrase "{{ vault_passphrase }}" --pinentry-mode loopback --sign -
@@ -125,10 +124,10 @@
         src: ./templates/k8s.yml.j2
         dest: "{{ ansible_env.HOME }}/service-provider-template/group_vars/{{ org_id }}_{{ location_id }}/k8s.yml"

-    - name: Copy wildcard template to the remote VM
+    - name: Copy wildcard-pwa-{{ base_domain }}.yaml to the remote VM
       template:
         src: ./templates/wildcard-pwa-example.yml.j2
-        dest: "{{ ansible_env.HOME }}/service-provider-template/files/manifests/wildcard-pwa-{{ full_domain | replace('.', '-') }}.yaml"
+        dest: "{{ ansible_env.HOME }}/service-provider-template/files/manifests/wildcard-pwa-{{ base_domain }}.yaml"

     - name: Delete old wildcard-pwa file
       file:
```
```diff
@@ -1,9 +1,9 @@
 - name: Configure system
-  hosts: deployment_host
+  hosts: root_host
   become: yes
   vars_files:
-    - user-vars.yml
+    - vars/user-vars.yml

   tasks:
     - name: Create a user
```
```diff
@@ -2,5 +2,4 @@ CERC_LACONICD_USER_KEY={{ALICE_PK}}
 CERC_LACONICD_BOND_ID={{BOND_ID}}
 CERC_LACONICD_RPC_ENDPOINT=http://{{ org_id }}-daemon.{{ full_domain }}:26657
 CERC_LACONICD_GQL_ENDPOINT=http://{{ org_id }}-daemon.{{ full_domain }}:9473/api
-CERC_LACONICD_CHAIN_ID={{ chain_id }}
 LACONIC_HOSTED_ENDPOINT=http://{{ org_id }}-daemon.{{ full_domain }}:9473
```
```diff
@@ -1 +0,0 @@
-CHAINID={{ chain_id }}
```
```diff
@@ -20,11 +20,9 @@ CHECK_INTERVAL=5
 FQDN_POLICY="allow"

 # lrn of the webapp deployer
-LRN="lrn://{{ authority_name }}/deployers/webapp-deployer-api.pwa.{{ full_domain }}"
+LRN="lrn://{{ authority_name }}/deployers/webapp-deployer-api.{{ full_domain }}"

 export OPENPGP_PRIVATE_KEY_FILE="webapp-deployer-api.{{ full_domain }}.pgp.key"
 export OPENPGP_PASSPHRASE="{{ deployer_gpg_passphrase }}"
 export DEPLOYER_STATE="srv-test/deployments/autodeploy.state"
 export UNDEPLOYER_STATE="srv-test/deployments/autoundeploy.state"
 export UPLOAD_DIRECTORY="srv-test/uploads"
-export HANDLE_AUCTION_REQUESTS={{ handle_auction_requests }}
-export AUCTION_BID_AMOUNT={{ auction_bid_amount }}
```
```diff
@@ -52,4 +52,4 @@ k8s_manifests:
   # initiate wildcard cert
   - name: pwa.{{ full_domain }}
     type: file
-    source: wildcard-pwa-{{ full_domain | replace('.', '-') }}.yaml
+    source: wildcard-pwa-{{ base_domain }}.yaml
```
```diff
@@ -4,6 +4,6 @@ services:
     gqlEndpoint: 'http://{{ org_id }}-daemon.{{ full_domain }}:9473/api'
     userKey: "{{ ALICE_PK }}"
     bondId: "{{ BOND_ID }}"
-    chainId: {{ chain_id }}
+    chainId: lorotestnet-1
     gas: 200000
     fees: 200000alnt
```
```diff
@@ -9,7 +9,7 @@ spec:
     name: letsencrypt-prod-wild
     kind: ClusterIssuer
     group: cert-manager.io
-  commonName: "*.pwa.{{ full_domain }}"
+  commonName: *.pwa.{{ full_domain }}
   dnsNames:
-    - "pwa.{{ full_domain }}"
-    - "*.pwa.{{ full_domain }}"
+    - pwa.{{ full_domain }}
+    - *.pwa.{{ full_domain }}
```
```diff
@@ -1,5 +1,5 @@
 full_domain: ""
-subdomain_prefix: "{{ org_id }}-{{ location_id }}"
+subdomain_prefix: ""
 subdomain_cluster_control: "{{ subdomain_prefix }}-cluster-control"
 service_provider_ip: ""
 do_api_token: ""
```
```diff
@@ -1,6 +1,8 @@
+target_host: "deployment_host"
 gpg_key_id: "{{ sec_key_id }}"
 vault_passphrase: "{{ gpg_passphrase }}"
 org_id: ""
 location_id: ""
+base_domain: ""
 support_email: ""
 ansible_ssh_extra_args: '-o StrictHostKeyChecking=no'
```
```diff
@@ -1 +0,0 @@
-chain_id: "laconic_9000-1"
```
```diff
@@ -1,10 +1,8 @@
 ALICE_PK: "{{ ALICE_PK }}"
 BOND_ID: "{{ BOND_ID }}"
 authority_name: ""
-cpu_reservation: "1"
-memory_reservation: "2G"
+cpu_reservation: ""
+memory_reservation: ""
 cpu_limit: "6"
 memory_limit: "8G"
 deployer_gpg_passphrase: ""
-handle_auction_requests: "true"
-auction_bid_amount: "500000"
```
````diff
@@ -1,10 +1,8 @@
 # stack-orchestrator-setup

-## Prerequisites
+## Setup Ansible

-- Setup Ansible: To get started, follow the [installation](../README.md#installation) guide to setup ansible on your machine.
-- Setup user with passwordless sudo: Follow steps from [Setup a user](../user-setup/README.md#setup-a-user) to setup a new user with passwordless sudo
+To get started, follow the [installation](../README.md#installation) guide to setup ansible on your machine.

 ## Setup Stack Orchestrator
@@ -12,6 +10,18 @@ This playbook will install Docker and Stack Orchestrator (laconic-so) on the machine
 Run the following commands in the [`stack-orchestrator-setup`](./) directory.

+### On Local Host
+
+To setup stack orchestrator and docker locally, execute the `setup-laconic-so.yml` Ansible playbook:
+
+```bash
+LANG=en_US.utf8 ansible-playbook setup-laconic-so.yml --user $USER -kK
+```
+
+### On Remote Host
+
+To run the playbook on a remote host:
+
 - Create a new `hosts.ini` file:

 ```bash
@@ -27,12 +37,12 @@ Run the following commands in the [`stack-orchestrator-setup`](./) directory.
   - Replace `<host_name>` with the alias of your choice
   - Replace `<target_ip>` with the IP address or hostname of the target machine
-  - Replace `<ssh_user>` with the username of the user that you set up on target machine (e.g. dev, ubuntu)
+  - Replace `<ssh_user>` with the SSH username (e.g., dev, ubuntu)

 - Verify that you are able to connect to the host using the following command

 ```bash
-ansible all -m ping -i hosts.ini
+ansible all -m ping -i hosts.ini -k

 # Expected output:
@@ -45,24 +55,22 @@
 # }
 ```

-- Execute the `setup-laconic-so.yml` Ansible playbook for setting up stack orchestrator and docker on the target machine:
+- Execute the `setup-laconic-so.yml` Ansible playbook for setting up stack orchestrator and docker on a remote machine:

 ```bash
-LANG=en_US.utf8 ansible-playbook setup-laconic-so.yml -i hosts.ini --extra-vars='{ "target_host": "deployment_host"}' --user $USER
+LANG=en_US.utf8 ansible-playbook setup-laconic-so.yml -i hosts.ini --extra-vars='{ "target_host": "deployment_host"}' --user $USER -kK
 ```

 ## Verify Installation

-Run the following commands on your target machine:
-
-- After the installation is complete, verify if `$HOME/bin` is already included in the `PATH` by running:
+- After the installation is complete, verify if `$HOME/bin` is already included in your PATH by running:

 ```bash
 echo $PATH | grep -q "$HOME/bin" && echo "$HOME/bin is already in PATH" || echo "$HOME/bin is not in PATH"
 ```

 If the command outputs `"$HOME/bin is not in PATH"`, you'll need to add it to your `PATH`.

-- To add `$HOME/bin` to your `PATH`, run the following command:
+- To add `$HOME/bin` to your PATH, run the following command:

 ```bash
 export PATH="$HOME/bin:$PATH"
````
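Note that `export PATH="$HOME/bin:$PATH"` only affects the current shell session. A small sketch (assuming bash and `~/.bashrc`; adjust for your shell) to persist it idempotently:

```bash
# Add $HOME/bin to PATH in ~/.bashrc only if the line is not already there.
LINE='export PATH="$HOME/bin:$PATH"'
touch "$HOME/.bashrc"
grep -qxF "$LINE" "$HOME/.bashrc" || echo "$LINE" >> "$HOME/.bashrc"

# Apply to the current session too:
export PATH="$HOME/bin:$PATH"
```

Running this twice appends the line only once, since `grep -qxF` matches the exact literal line before writing.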
```diff
@@ -1 +0,0 @@
-user-vars.yml
```
@@ -1,75 +0,0 @@
# user-setup
## Prerequisites
- Setup Ansible: follow the [installation](../README.md#installation) guide to setup ansible on your machine.
- Setup a remote machine with passwordless SSH login for the root user
- Install `passlib` used for handling encrypted passwords when setting up a user
```bash
pip install passlib
```
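`passlib` is what Ansible uses to turn the plaintext password into the SHA-512 crypt hash written to `/etc/shadow`. The same hash format can be previewed with `openssl` (a sketch: the password is a throwaway example, and `openssl passwd -6` requires OpenSSL 1.1.1+):

```bash
# Generate a sha512-crypt hash like the one the playbook stores for the user.
HASH=$(openssl passwd -6 'example-password')

# Hashes are salted, so the value differs on every run;
# the "$6$" prefix identifies the sha512-crypt scheme.
echo "$HASH" | cut -c1-3
```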
## Setup a user
Execute the following commands in the `user-setup` directory:
- Create a new `hosts.ini` file:
```bash
cp ../hosts.example.ini hosts.ini
```
- Edit the [`hosts.ini`](./hosts.ini) file:
```ini
[deployment_host]
<host_name> ansible_host=<target_ip> ansible_user=<ssh_user> ansible_ssh_common_args='-o ForwardAgent=yes'
```
- Replace `<host_name>` with the desired `hostname` of the remote machine
- Replace `<target_ip>` with the IP address or hostname of the target machine
- Replace `<ssh_user>` with `root`
- Verify that you are able to connect to the host using the following command:
```bash
ansible all -m ping -i hosts.ini
# Expected output:
# <host_name> | SUCCESS => {
# "ansible_facts": {
# "discovered_interpreter_python": "/usr/bin/python3.10"
# },
# "changed": false,
# "ping": "pong"
# }
```
- Setup `user-vars.yml` using the example file
```bash
cp user-vars.example.yml user-vars.yml
```
- Edit the `user-vars.yml` file:
```bash
# name of the user you want to setup on the target host
username: ""
# password of the user you want to setup on the target host
password: ""
# path to the ssh key on your machine, e.g. "/home/user/.ssh/id_rsa.pub"
path_to_ssh_key: ""
```
- Execute the `setup-user.yml` Ansible playbook to create a user with passwordless sudo permissions:
```bash
LANG=en_US.utf8 ansible-playbook setup-user.yml -i hosts.ini
```