Merge remote-tracking branch 'upstream/main' into lotus

This commit is contained in:
iskay 2023-05-08 16:06:33 +00:00
commit b755d28700
54 changed files with 1372 additions and 83 deletions

View File

@@ -35,58 +35,28 @@ curl -L -o ~/bin/laconic-so https://github.com/cerc-io/stack-orchestrator/releas
```
Give it execute permissions:
```bash
chmod +x ~/bin/laconic-so
```
Ensure `laconic-so` is on the [`PATH`](https://unix.stackexchange.com/a/26059).
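If `~/bin` is not already on your `PATH`, one common way to add it (bash; adjust for your shell):
```bash
# Prepend ~/bin for the current session; add this line to ~/.bashrc or ~/.profile to persist
export PATH="$HOME/bin:$PATH"
```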
Verify operation (your version will probably be different, just check here that you see some version outut and not an error):
Verify operation (your version will probably be different, just check here that you see some version output and not an error):
```
laconic-so version
Version: v1.0.27-7831078
Version: 1.1.0-7a607c2-202304260513
```
## Usage
Three sub-commands: `setup-repositories`, `build-containers` and `deploy-system` are generally run in order. The following is a slim example for standing up the `erc20-watcher`. Go further with the [erc20 watcher demo](/app/data/stacks/erc20) and other pieces of the stack, within the [`stacks` directory](/app/data/stacks).
The various [stacks](/app/data/stacks) each contain instructions for running different stacks based on your use case. For example:
### Setup Repositories
Clone the set of git repositories necessary to build a system:
```bash
laconic-so --stack erc20 setup-repositories
```
This will default to cloning git repositories into `~/cerc` or, if set, the directory given by the environment variable `CERC_REPO_BASE_DIR`.
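For example, to clone into a different location (path here is illustrative):
```bash
CERC_REPO_BASE_DIR=~/projects/cerc laconic-so --stack erc20 setup-repositories
```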
### Build Containers
Build the set of docker container images required to run a system. It takes around 10 minutes to build all the containers from scratch.
```bash
laconic-so --stack erc20 build-containers
```
### Deploy System
Uses `docker compose` to deploy a system (with the most recently built container images).
```bash
laconic-so --stack erc20 deploy-system up
```
Check out the GraphQL playground here: [http://localhost:3002/graphql](http://localhost:3002/graphql)
See the [erc20 watcher demo](/app/data/stacks/erc20) to continue further.
### Cleanup
```bash
laconic-so --stack erc20 deploy-system down
```
- [self-hosted Gitea](/app/data/stacks/build-support)
- [an Optimism Fixturenet](/app/data/stacks/fixturenet-optimism)
- [laconicd with console and CLI](/app/data/stacks/fixturenet-laconic-loaded)
- [kubo (IPFS)](/app/data/stacks/kubo)
## Contributing

View File

@@ -49,6 +49,8 @@ services:
timeout: 10s
retries: 10
start_period: 3s
environment:
CERC_KEEP_RUNNING_AFTER_GETH_EXIT: "true"
env_file:
- ../config/fixturenet-eth/fixturenet-eth.env
image: cerc/fixturenet-eth-geth:local
@@ -62,8 +64,6 @@ services:
environment:
RUN_BOOTNODE: "true"
image: cerc/fixturenet-eth-lighthouse:local
volumes:
- fixturenet_eth_bootnode_lighthouse_data:/opt/testnet/build/cl
fixturenet-eth-lighthouse-1:
hostname: fixturenet-eth-lighthouse-1
@@ -118,6 +118,5 @@ volumes:
fixturenet_eth_bootnode_geth_data:
fixturenet_eth_geth_1_data:
fixturenet_eth_geth_2_data:
fixturenet_eth_bootnode_lighthouse_data:
fixturenet_eth_lighthouse_1_data:
fixturenet_eth_lighthouse_2_data:

View File

@@ -122,6 +122,8 @@ services:
# Waits for L1 endpoint to be up before running the batcher
command: |
"/wait-for-it.sh -h ${CERC_L1_HOST:-$${DEFAULT_CERC_L1_HOST}} -p ${CERC_L1_PORT:-$${DEFAULT_CERC_L1_PORT}} -s -t 60 -- /run-op-batcher.sh"
ports:
- "127.0.0.1:8548:8548"
extra_hosts:
- "host.docker.internal:host-gateway"
@@ -145,6 +147,8 @@ services:
# Waits for L1 endpoint to be up before running the proposer
command: |
"/wait-for-it.sh -h ${CERC_L1_HOST:-$${DEFAULT_CERC_L1_HOST}} -p ${CERC_L1_PORT:-$${DEFAULT_CERC_L1_PORT}} -s -t 60 -- /run-op-proposer.sh"
ports:
- "127.0.0.1:8560:8560"
extra_hosts:
- "host.docker.internal:host-gateway"

View File

@@ -0,0 +1,18 @@
version: "3.2"
services:
pocket:
restart: unless-stopped
image: cerc/pocket:local
# command: ["sh", "/docker-entrypoint-scripts.d/create-fixturenet.sh"]
entrypoint: ["sh", "/docker-entrypoint-scripts.d/create-fixturenet.sh"]
volumes:
# TODO: look at folding these scripts into the container
- ../config/fixturenet-pocket/create-fixturenet.sh:/docker-entrypoint-scripts.d/create-fixturenet.sh
- ../config/fixturenet-pocket/chains.json:/home/app/pocket-configs/chains.json
- ../config/fixturenet-pocket/genesis.json:/home/app/pocket-configs/genesis.json
ports:
- "8081:8081" # pocket relay rpc
networks:
net1:
name: fixturenet-eth_default
external: true
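Note that the pod joins an external network created by the `fixturenet-eth` stack, so that stack must already be up. A quick sanity check:
```bash
# Verify the fixturenet-eth network exists before bringing up the pocket pod
docker network inspect fixturenet-eth_default --format '{{.Name}}'
```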

View File

@@ -13,6 +13,7 @@ services:
CERC_DEPLOYED_CONTRACT: ${CERC_DEPLOYED_CONTRACT}
CERC_APP_WATCHER_URL: ${CERC_APP_WATCHER_URL}
CERC_RELAY_NODES: ${CERC_RELAY_NODES}
CERC_DENY_MULTIADDRS: ${CERC_DENY_MULTIADDRS}
CERC_BUILD_DIR: "@cerc-io/mobymask-ui/build"
working_dir: /scripts
command: ["sh", "mobymask-app-start.sh"]
@@ -44,6 +45,7 @@ services:
CERC_DEPLOYED_CONTRACT: ${CERC_DEPLOYED_CONTRACT}
CERC_APP_WATCHER_URL: ${CERC_APP_WATCHER_URL}
CERC_RELAY_NODES: ${CERC_RELAY_NODES}
CERC_DENY_MULTIADDRS: ${CERC_DENY_MULTIADDRS}
CERC_BUILD_DIR: "@cerc-io/mobymask-ui-lxdao/build"
working_dir: /scripts
command: ["sh", "mobymask-app-start.sh"]

View File

@@ -10,6 +10,7 @@ services:
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_RELAY_NODES: ${CERC_RELAY_NODES}
CERC_DENY_MULTIADDRS: ${CERC_DENY_MULTIADDRS}
command: ["sh", "test-app-start.sh"]
volumes:
- ../config/wait-for-it.sh:/scripts/wait-for-it.sh

View File

@@ -0,0 +1,304 @@
version: '3.2'
services:
# Starts the PostgreSQL database for watchers
watcher-db:
restart: unless-stopped
image: postgres:14-alpine
environment:
- POSTGRES_USER=vdbm
- POSTGRES_MULTIPLE_DATABASES=azimuth-watcher,azimuth-watcher-job-queue,censures-watcher,censures-watcher-job-queue,claims-watcher,claims-watcher-job-queue,conditional-star-release-watcher,conditional-star-release-watcher-job-queue,delegated-sending-watcher,delegated-sending-watcher-job-queue,ecliptic-watcher,ecliptic-watcher-job-queue,linear-star-release-watcher,linear-star-release-watcher-job-queue,polls-watcher,polls-watcher-job-queue
- POSTGRES_EXTENSION=azimuth-watcher-job-queue:pgcrypto,censures-watcher-job-queue:pgcrypto,claims-watcher-job-queue:pgcrypto,conditional-star-release-watcher-job-queue:pgcrypto,delegated-sending-watcher-job-queue:pgcrypto,ecliptic-watcher-job-queue:pgcrypto,linear-star-release-watcher-job-queue:pgcrypto,polls-watcher-job-queue:pgcrypto,
- POSTGRES_PASSWORD=password
volumes:
- ../config/postgresql/multiple-postgressql-databases.sh:/docker-entrypoint-initdb.d/multiple-postgressql-databases.sh
- watcher_db_data:/var/lib/postgresql/data
ports:
- "0.0.0.0:15432:5432"
healthcheck:
test: ["CMD", "nc", "-v", "localhost", "5432"]
interval: 20s
timeout: 5s
retries: 15
start_period: 10s
# Starts the azimuth-watcher server
azimuth-watcher-server:
image: cerc/watcher-azimuth:local
restart: unless-stopped
depends_on:
watcher-db:
condition: service_healthy
env_file:
- ../config/watcher-azimuth/watcher-params.env
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_IPLD_ETH_RPC: ${CERC_IPLD_ETH_RPC}
CERC_IPLD_ETH_GQL: ${CERC_IPLD_ETH_GQL}
working_dir: /app/packages/azimuth-watcher
command: "./start-server.sh"
volumes:
- ../config/watcher-azimuth/watcher-config-template.toml:/app/packages/azimuth-watcher/environments/watcher-config-template.toml
- ../config/watcher-azimuth/merge-toml.js:/app/packages/azimuth-watcher/merge-toml.js
- ../config/watcher-azimuth/start-server.sh:/app/packages/azimuth-watcher/start-server.sh
ports:
- "3001"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "3001"]
interval: 20s
timeout: 5s
retries: 15
start_period: 5s
extra_hosts:
- "host.docker.internal:host-gateway"
# Starts the censures-watcher server
censures-watcher-server:
image: cerc/watcher-azimuth:local
restart: unless-stopped
depends_on:
watcher-db:
condition: service_healthy
env_file:
- ../config/watcher-azimuth/watcher-params.env
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_IPLD_ETH_RPC: ${CERC_IPLD_ETH_RPC}
CERC_IPLD_ETH_GQL: ${CERC_IPLD_ETH_GQL}
working_dir: /app/packages/censures-watcher
command: "./start-server.sh"
volumes:
- ../config/watcher-azimuth/watcher-config-template.toml:/app/packages/censures-watcher/environments/watcher-config-template.toml
- ../config/watcher-azimuth/merge-toml.js:/app/packages/censures-watcher/merge-toml.js
- ../config/watcher-azimuth/start-server.sh:/app/packages/censures-watcher/start-server.sh
ports:
- "3002"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "3002"]
interval: 20s
timeout: 5s
retries: 15
start_period: 5s
extra_hosts:
- "host.docker.internal:host-gateway"
# Starts the claims-watcher server
claims-watcher-server:
image: cerc/watcher-azimuth:local
restart: unless-stopped
depends_on:
watcher-db:
condition: service_healthy
env_file:
- ../config/watcher-azimuth/watcher-params.env
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_IPLD_ETH_RPC: ${CERC_IPLD_ETH_RPC}
CERC_IPLD_ETH_GQL: ${CERC_IPLD_ETH_GQL}
working_dir: /app/packages/claims-watcher
command: "./start-server.sh"
volumes:
- ../config/watcher-azimuth/watcher-config-template.toml:/app/packages/claims-watcher/environments/watcher-config-template.toml
- ../config/watcher-azimuth/merge-toml.js:/app/packages/claims-watcher/merge-toml.js
- ../config/watcher-azimuth/start-server.sh:/app/packages/claims-watcher/start-server.sh
ports:
- "3003"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "3003"]
interval: 20s
timeout: 5s
retries: 15
start_period: 5s
extra_hosts:
- "host.docker.internal:host-gateway"
# Starts the conditional-star-release-watcher server
conditional-star-release-watcher-server:
image: cerc/watcher-azimuth:local
restart: unless-stopped
depends_on:
watcher-db:
condition: service_healthy
env_file:
- ../config/watcher-azimuth/watcher-params.env
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_IPLD_ETH_RPC: ${CERC_IPLD_ETH_RPC}
CERC_IPLD_ETH_GQL: ${CERC_IPLD_ETH_GQL}
working_dir: /app/packages/conditional-star-release-watcher
command: "./start-server.sh"
volumes:
- ../config/watcher-azimuth/watcher-config-template.toml:/app/packages/conditional-star-release-watcher/environments/watcher-config-template.toml
- ../config/watcher-azimuth/merge-toml.js:/app/packages/conditional-star-release-watcher/merge-toml.js
- ../config/watcher-azimuth/start-server.sh:/app/packages/conditional-star-release-watcher/start-server.sh
ports:
- "3004"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "3004"]
interval: 20s
timeout: 5s
retries: 15
start_period: 5s
extra_hosts:
- "host.docker.internal:host-gateway"
# Starts the delegated-sending-watcher server
delegated-sending-watcher-server:
image: cerc/watcher-azimuth:local
restart: unless-stopped
depends_on:
watcher-db:
condition: service_healthy
env_file:
- ../config/watcher-azimuth/watcher-params.env
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_IPLD_ETH_RPC: ${CERC_IPLD_ETH_RPC}
CERC_IPLD_ETH_GQL: ${CERC_IPLD_ETH_GQL}
working_dir: /app/packages/delegated-sending-watcher
command: "./start-server.sh"
volumes:
- ../config/watcher-azimuth/watcher-config-template.toml:/app/packages/delegated-sending-watcher/environments/watcher-config-template.toml
- ../config/watcher-azimuth/merge-toml.js:/app/packages/delegated-sending-watcher/merge-toml.js
- ../config/watcher-azimuth/start-server.sh:/app/packages/delegated-sending-watcher/start-server.sh
ports:
- "3005"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "3005"]
interval: 20s
timeout: 5s
retries: 15
start_period: 5s
extra_hosts:
- "host.docker.internal:host-gateway"
# Starts the ecliptic-watcher server
ecliptic-watcher-server:
image: cerc/watcher-azimuth:local
restart: unless-stopped
depends_on:
watcher-db:
condition: service_healthy
env_file:
- ../config/watcher-azimuth/watcher-params.env
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_IPLD_ETH_RPC: ${CERC_IPLD_ETH_RPC}
CERC_IPLD_ETH_GQL: ${CERC_IPLD_ETH_GQL}
working_dir: /app/packages/ecliptic-watcher
command: "./start-server.sh"
volumes:
- ../config/watcher-azimuth/watcher-config-template.toml:/app/packages/ecliptic-watcher/environments/watcher-config-template.toml
- ../config/watcher-azimuth/merge-toml.js:/app/packages/ecliptic-watcher/merge-toml.js
- ../config/watcher-azimuth/start-server.sh:/app/packages/ecliptic-watcher/start-server.sh
ports:
- "3006"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "3006"]
interval: 20s
timeout: 5s
retries: 15
start_period: 5s
extra_hosts:
- "host.docker.internal:host-gateway"
# Starts the linear-star-release-watcher server
linear-star-release-watcher-server:
image: cerc/watcher-azimuth:local
restart: unless-stopped
depends_on:
watcher-db:
condition: service_healthy
env_file:
- ../config/watcher-azimuth/watcher-params.env
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_IPLD_ETH_RPC: ${CERC_IPLD_ETH_RPC}
CERC_IPLD_ETH_GQL: ${CERC_IPLD_ETH_GQL}
working_dir: /app/packages/linear-star-release-watcher
command: "./start-server.sh"
volumes:
- ../config/watcher-azimuth/watcher-config-template.toml:/app/packages/linear-star-release-watcher/environments/watcher-config-template.toml
- ../config/watcher-azimuth/merge-toml.js:/app/packages/linear-star-release-watcher/merge-toml.js
- ../config/watcher-azimuth/start-server.sh:/app/packages/linear-star-release-watcher/start-server.sh
ports:
- "3007"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "3007"]
interval: 20s
timeout: 5s
retries: 15
start_period: 5s
extra_hosts:
- "host.docker.internal:host-gateway"
# Starts the polls-watcher server
polls-watcher-server:
image: cerc/watcher-azimuth:local
restart: unless-stopped
depends_on:
watcher-db:
condition: service_healthy
env_file:
- ../config/watcher-azimuth/watcher-params.env
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_IPLD_ETH_RPC: ${CERC_IPLD_ETH_RPC}
CERC_IPLD_ETH_GQL: ${CERC_IPLD_ETH_GQL}
working_dir: /app/packages/polls-watcher
command: "./start-server.sh"
volumes:
- ../config/watcher-azimuth/watcher-config-template.toml:/app/packages/polls-watcher/environments/watcher-config-template.toml
- ../config/watcher-azimuth/merge-toml.js:/app/packages/polls-watcher/merge-toml.js
- ../config/watcher-azimuth/start-server.sh:/app/packages/polls-watcher/start-server.sh
ports:
- "3008"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "3008"]
interval: 20s
timeout: 5s
retries: 15
start_period: 5s
extra_hosts:
- "host.docker.internal:host-gateway"
# Starts the gateway-server for proxying queries
gateway-server:
image: cerc/watcher-azimuth:local
restart: unless-stopped
depends_on:
azimuth-watcher-server:
condition: service_healthy
censures-watcher-server:
condition: service_healthy
claims-watcher-server:
condition: service_healthy
conditional-star-release-watcher-server:
condition: service_healthy
delegated-sending-watcher-server:
condition: service_healthy
ecliptic-watcher-server:
condition: service_healthy
linear-star-release-watcher-server:
condition: service_healthy
polls-watcher-server:
condition: service_healthy
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
working_dir: /app/packages/gateway-server
command: "yarn server"
volumes:
- ../config/watcher-azimuth/gateway-watchers.json:/app/packages/gateway-server/dist/watchers.json
ports:
- "0.0.0.0:4000:4000"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "4000"]
interval: 20s
timeout: 5s
retries: 15
start_period: 5s
extra_hosts:
- "host.docker.internal:host-gateway"
volumes:
watcher_db_data:
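Once the individual watchers report healthy, the gateway proxies all of them on port 4000. A minimal smoke test using GraphQL introspection (the `/graphql` path is assumed here, mirroring the proxied watcher endpoints):
```bash
curl -s http://localhost:4000/graphql \
  -H 'Content-Type: application/json' \
  --data '{"query": "{ __schema { queryType { name } } }"}'
```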

View File

@@ -83,6 +83,7 @@ services:
CERC_L1_ACCOUNTS_CSV_URL: ${CERC_L1_ACCOUNTS_CSV_URL}
CERC_PRIVATE_KEY_PEER: ${CERC_PRIVATE_KEY_PEER}
CERC_RELAY_PEERS: ${CERC_RELAY_PEERS}
CERC_DENY_MULTIADDRS: ${CERC_DENY_MULTIADDRS}
CERC_RELAY_ANNOUNCE_DOMAIN: ${CERC_RELAY_ANNOUNCE_DOMAIN}
CERC_ENABLE_PEER_L2_TXS: ${CERC_ENABLE_PEER_L2_TXS}
CERC_DEPLOYED_CONTRACT: ${CERC_DEPLOYED_CONTRACT}

View File

@@ -0,0 +1,18 @@
[
{
"id": "0001",
"url": "http://127.0.0.1:8081/",
"basic_auth": {
"username": "",
"password": ""
}
},
{
"id": "0021",
"url": "http://fixturenet-eth-geth-1:8545/",
"basic_auth": {
"username": "",
"password": ""
}
}
]

View File

@@ -0,0 +1,65 @@
#!/bin/bash
# TODO: we should have a mechanism to bundle it inside the container rather than link from here
# at deploy time.
CHAINID="pocketlocal-1"
MONIKER="localtestnet"
SERVICE_URL="http://127.0.0.1:8081"
PASSWORD="mypassword" # wallet password, required by cli
# check if jq is installed; install if necessary
# command -v jq > /dev/null 2>&1 || { echo >&2 "jq not installed. More info: https://stedolan.github.io/jq/download/"; exit 1; }
if ! command -v jq > /dev/null 2>&1; then
echo "jq not installed, downloading..."
mkdir -p /home/app/bin
wget -O /home/app/bin/jq https://github.com/stedolan/jq/releases/download/jq-1.6/jq-linux64
chmod +x /home/app/bin/jq
export PATH=$PATH:/home/app/bin
fi
# remove existing daemon and client
rm -rf ~/.pocket*
# create a wallet with password "mypassword" and save the address for later
address=$(pocket accounts create --pwd $PASSWORD | awk '/Address:/ {print $2}')
# set this address as the validator address for the node
pocket accounts set-validator $address --pwd $PASSWORD
# save the public key for later
pubkey=$(pocket accounts show $address | awk '/Public Key:/ {print $3}')
# set node's moniker
echo $(pocket util print-configs) | jq '.tendermint_config.Moniker = "'"$MONIKER"'"' | jq . > $HOME/.pocket/config/config.json
# pocket mainnet has block time of 15 minutes, set closer to 1 minute instead
cat $HOME/.pocket/config/config.json | jq '.tendermint_config.Consensus.TimeoutPropose = 8000000000' | jq . > $HOME/.pocket/config/tmp_config.json && mv $HOME/.pocket/config/tmp_config.json $HOME/.pocket/config/config.json
cat $HOME/.pocket/config/config.json | jq '.tendermint_config.Consensus.TimeoutProposeDelta = 600000000' | jq . > $HOME/.pocket/config/tmp_config.json && mv $HOME/.pocket/config/tmp_config.json $HOME/.pocket/config/config.json
cat $HOME/.pocket/config/config.json | jq '.tendermint_config.Consensus.TimeoutPrevote = 4000000000' | jq . > $HOME/.pocket/config/tmp_config.json && mv $HOME/.pocket/config/tmp_config.json $HOME/.pocket/config/config.json
cat $HOME/.pocket/config/config.json | jq '.tendermint_config.Consensus.TimeoutPrevoteDelta = 600000000' | jq . > $HOME/.pocket/config/tmp_config.json && mv $HOME/.pocket/config/tmp_config.json $HOME/.pocket/config/config.json
cat $HOME/.pocket/config/config.json | jq '.tendermint_config.Consensus.TimeoutPrecommit = 4000000000' | jq . > $HOME/.pocket/config/tmp_config.json && mv $HOME/.pocket/config/tmp_config.json $HOME/.pocket/config/config.json
cat $HOME/.pocket/config/config.json | jq '.tendermint_config.Consensus.TimeoutPrecommitDelta = 6000000006' | jq . > $HOME/.pocket/config/tmp_config.json && mv $HOME/.pocket/config/tmp_config.json $HOME/.pocket/config/config.json
cat $HOME/.pocket/config/config.json | jq '.tendermint_config.Consensus.TimeoutCommit = 52000000000' | jq . > $HOME/.pocket/config/tmp_config.json && mv $HOME/.pocket/config/tmp_config.json $HOME/.pocket/config/config.json
cat $HOME/.pocket/config/config.json | jq '.tendermint_config.Consensus.CreateEmptyBlocksInterval = 60000000000' | jq . > $HOME/.pocket/config/tmp_config.json && mv $HOME/.pocket/config/tmp_config.json $HOME/.pocket/config/config.json
cat $HOME/.pocket/config/config.json | jq '.tendermint_config.Consensus.PeerGossipSleepDuration = 2000000000' | jq . > $HOME/.pocket/config/tmp_config.json && mv $HOME/.pocket/config/tmp_config.json $HOME/.pocket/config/config.json
cat $HOME/.pocket/config/config.json | jq '.tendermint_config.Consensus.PeerQueryMaj23SleepDuration = 1200000000' | jq . > $HOME/.pocket/config/tmp_config.json && mv $HOME/.pocket/config/tmp_config.json $HOME/.pocket/config/config.json
# include genesis.json and chains.json
cp $HOME/pocket-configs/genesis.json $HOME/.pocket/config/genesis.json
cp $HOME/pocket-configs/chains.json $HOME/.pocket/config/chains.json
# set chain-id and add node to genesis.json as a validator
cat $HOME/.pocket/config/genesis.json | jq '.chain_id="'"$CHAINID"'"' > $HOME/.pocket/config/tmp_genesis.json && mv $HOME/.pocket/config/tmp_genesis.json $HOME/.pocket/config/genesis.json
cat $HOME/.pocket/config/genesis.json | jq '.app_state.auth.accounts[0].value.address="'"$address"'"' > $HOME/.pocket/config/tmp_genesis.json && mv $HOME/.pocket/config/tmp_genesis.json $HOME/.pocket/config/genesis.json
cat $HOME/.pocket/config/genesis.json | jq '.app_state.auth.accounts[0].value.public_key.value="'"$pubkey"'"' > $HOME/.pocket/config/tmp_genesis.json && mv $HOME/.pocket/config/tmp_genesis.json $HOME/.pocket/config/genesis.json
cat $HOME/.pocket/config/genesis.json | jq '.app_state.pos.validators[0].address="'"$address"'"' > $HOME/.pocket/config/tmp_genesis.json && mv $HOME/.pocket/config/tmp_genesis.json $HOME/.pocket/config/genesis.json
cat $HOME/.pocket/config/genesis.json | jq '.app_state.pos.validators[0].public_key="'"$pubkey"'"' > $HOME/.pocket/config/tmp_genesis.json && mv $HOME/.pocket/config/tmp_genesis.json $HOME/.pocket/config/genesis.json
cat $HOME/.pocket/config/genesis.json | jq '.app_state.pos.validators[0].service_url="'"$SERVICE_URL"'"' > $HOME/.pocket/config/tmp_genesis.json && mv $HOME/.pocket/config/tmp_genesis.json $HOME/.pocket/config/genesis.json
# if [[ $1 == "pending" ]]; then
# echo "pending mode is on, please wait for the first block committed."
# fi
# Start the node
pocket start --simulateRelay
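The long run of `cat … | jq … > tmp && mv` lines above all perform the same in-place jq edit; a small helper could collapse them, sketched here (hypothetical, not part of the script):
```bash
# Hypothetical helper: apply a jq filter to a JSON file in place
jq_edit() {
  local filter=$1 file=$2
  jq "$filter" "$file" > "${file}.tmp" && mv "${file}.tmp" "$file"
}
jq_edit '.tendermint_config.Consensus.TimeoutCommit = 52000000000' "$HOME/.pocket/config/config.json"
```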

View File

@@ -0,0 +1,272 @@
{
"genesis_time": "2020-07-28T15:00:00.000000Z",
"chain_id": "testnet",
"consensus_params": {
"block": {
"max_bytes": "4000000",
"max_gas": "-1",
"time_iota_ms": "1"
},
"evidence": {
"max_age": "120000000000"
},
"validator": {
"pub_key_types": [
"ed25519"
]
}
},
"app_hash": "",
"app_state": {
"application": {
"params": {
"unstaking_time": "1814000000000000",
"max_applications": "9223372036854775807",
"app_stake_minimum": "1000000",
"base_relays_per_pokt": "167",
"stability_adjustment": "0",
"participation_rate_on": false,
"maximum_chains": "15"
},
"applications": [],
"exported": false
},
"auth": {
"params": {
"max_memo_characters": "75",
"tx_sig_limit": "8",
"fee_multipliers": {
"fee_multiplier": [],
"default": "1"
}
},
"accounts": [
{
"type": "posmint/Account",
"value": {
"address": "!validator-address",
"coins": [
{
"amount": "0",
"denom": "upokt"
}
],
"public_key": {
"type": "crypto/ed25519_public_key",
"value": "!validator-pubkey"
}
}
}
],
"supply": []
},
"gov": {
"params": {
"acl": [
{
"acl_key": "application/ApplicationStakeMinimum",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "application/AppUnstakingTime",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "application/BaseRelaysPerPOKT",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "application/MaxApplications",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "application/MaximumChains",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "application/ParticipationRateOn",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "application/StabilityAdjustment",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "auth/MaxMemoCharacters",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "auth/TxSigLimit",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "gov/acl",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "gov/daoOwner",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "gov/upgrade",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pocketcore/ClaimExpiration",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "auth/FeeMultipliers",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pocketcore/ReplayAttackBurnMultiplier",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pos/ProposerPercentage",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pocketcore/ClaimSubmissionWindow",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pocketcore/MinimumNumberOfProofs",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pocketcore/SessionNodeCount",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pocketcore/SupportedBlockchains",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pos/BlocksPerSession",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pos/DAOAllocation",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pos/DowntimeJailDuration",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pos/MaxEvidenceAge",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pos/MaximumChains",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pos/MaxJailedBlocks",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pos/MaxValidators",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pos/MinSignedPerWindow",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pos/RelaysToTokensMultiplier",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pos/SignedBlocksWindow",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pos/SlashFractionDoubleSign",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pos/SlashFractionDowntime",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pos/StakeDenom",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pos/StakeMinimum",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pos/UnstakingTime",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
}
],
"dao_owner": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4",
"upgrade": {
"Height": "0",
"Version": "0"
}
},
"DAO_Tokens": "50000000000000"
},
"pos": {
"params": {
"relays_to_tokens_multiplier": "10000",
"unstaking_time": "1814000000000000",
"max_validators": "5000",
"stake_denom": "upokt",
"stake_minimum": "15000000000",
"session_block_frequency": "4",
"dao_allocation": "10",
"proposer_allocation": "1",
"maximum_chains": "15",
"max_jailed_blocks": "37960",
"max_evidence_age": "120000000000",
"signed_blocks_window": "10",
"min_signed_per_window": "0.60",
"downtime_jail_duration": "3600000000000",
"slash_fraction_double_sign": "0.05",
"slash_fraction_downtime": "0.000001"
},
"prevState_total_power": "0",
"prevState_validator_powers": null,
"validators": [
{
"address": "!validator-address",
"public_key": "!validator-pubkey",
"jailed": false,
"status": 2,
"tokens": "5000000000000",
"service_url": "!validator-url",
"chains": [
"0001",
"0021"
],
"unstaking_time": "2021-05-15T00:00:00Z"
}
],
"exported": false,
"signing_infos": {},
"missed_blocks": {},
"previous_proposer": ""
},
"pocketcore": {
"params": {
"session_node_count": "5",
"proof_waiting_period": "3",
"supported_blockchains": [
"0001",
"0021"
],
"claim_expiration": "120",
"replay_attack_burn_multiplier": "3",
"minimum_number_of_proofs": "10"
},
"receipts": null,
"claims": null
}
}
}

View File

@@ -0,0 +1,34 @@
[
{
"endpoint": "http://azimuth-watcher-server:3001/graphql",
"prefix": "azimuth"
},
{
"endpoint": "http://censures-watcher-server:3002/graphql",
"prefix": "censures"
},
{
"endpoint": "http://claims-watcher-server:3003/graphql",
"prefix": "claims"
},
{
"endpoint": "http://conditional-star-release-watcher-server:3004/graphql",
"prefix": "conditionalStarRelease"
},
{
"endpoint": "http://delegated-sending-watcher-server:3005/graphql",
"prefix": "delegatedSending"
},
{
"endpoint": "http://ecliptic-watcher-server:3006/graphql",
"prefix": "ecliptic"
},
{
"endpoint": "http://linear-star-release-watcher-server:3007/graphql",
"prefix": "linearStarRelease"
},
{
"endpoint": "http://polls-watcher-server:3008/graphql",
"prefix": "polls"
}
]

View File

@@ -0,0 +1,31 @@
const fs = require('fs');
const tomlJS = require('toml-js');
const toml = require('toml');
const { merge } = require('lodash')
const main = () => {
const overrideConfigString = fs.readFileSync('environments/watcher-config.toml', 'utf-8');
const configString = fs.readFileSync('environments/local.toml', 'utf-8');
const overrideConfig = toml.parse(overrideConfigString)
const config = toml.parse(configString)
// Merge configs
const updatedConfig = merge(config, overrideConfig);
// Form dbConnectionString for jobQueue DB
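// e.g. "postgres://user:password@host:5432/db-name": swap in the override
// username, password and host while keeping the database name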
const parts = config.jobQueue.dbConnectionString.split("://");
const credsAndDB = parts[1].split("@");
const creds = credsAndDB[0].split(":");
creds[0] = overrideConfig.database.username;
creds[1] = overrideConfig.database.password;
credsAndDB[0] = creds.join(":");
const dbName = credsAndDB[1].split("/")[1]
credsAndDB[1] = [overrideConfig.database.host, dbName].join("/");
parts[1] = credsAndDB.join("@");
updatedConfig.jobQueue.dbConnectionString = parts.join("://");
fs.writeFileSync('environments/local.toml', tomlJS.dump(updatedConfig), 'utf-8');
}
main();

View File

@@ -0,0 +1,27 @@
#!/bin/sh
set -e
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
CERC_IPLD_ETH_RPC="${CERC_IPLD_ETH_RPC:-${DEFAULT_CERC_IPLD_ETH_RPC}}"
CERC_IPLD_ETH_GQL="${CERC_IPLD_ETH_GQL:-${DEFAULT_CERC_IPLD_ETH_GQL}}"
echo "Using IPLD ETH RPC endpoint ${CERC_IPLD_ETH_RPC}"
echo "Using IPLD GQL endpoint ${CERC_IPLD_ETH_GQL}"
# Replace env variables in template TOML file
# Read in the config template TOML file and modify it
WATCHER_CONFIG_TEMPLATE=$(cat environments/watcher-config-template.toml)
WATCHER_CONFIG=$(echo "$WATCHER_CONFIG_TEMPLATE" | \
sed -E "s|REPLACE_WITH_CERC_IPLD_ETH_RPC|${CERC_IPLD_ETH_RPC}|g; \
s|REPLACE_WITH_CERC_IPLD_ETH_GQL|${CERC_IPLD_ETH_GQL}| ")
# Write the modified content to a new file
echo "$WATCHER_CONFIG" > environments/watcher-config.toml
# Merge SO watcher config with existing config file
node merge-toml.js
echo 'yarn server'
yarn server
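Since the script enables `set -x` whenever `CERC_SCRIPT_DEBUG` is non-empty (see the check at the top), tracing a failing watcher start-up only requires setting that variable in the deployment env file, e.g.:
```bash
# Any non-empty value turns on shell tracing in the watcher start scripts
CERC_SCRIPT_DEBUG=1
```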

View File

@@ -0,0 +1,14 @@
[server]
host = "0.0.0.0"
maxSimultaneousRequests = -1
[database]
host = "watcher-db"
port = 5432
username = "vdbm"
password = "password"
[upstream]
[upstream.ethServer]
gqlApiEndpoint = "REPLACE_WITH_CERC_IPLD_ETH_GQL"
rpcProviderEndpoint = "REPLACE_WITH_CERC_IPLD_ETH_RPC"

View File

@@ -0,0 +1,5 @@
# Defaults
# ipld-eth-server endpoints
DEFAULT_CERC_IPLD_ETH_RPC=
DEFAULT_CERC_IPLD_ETH_GQL=

View File

@@ -7,6 +7,7 @@ fi
CERC_CHAIN_ID="${CERC_CHAIN_ID:-${DEFAULT_CERC_CHAIN_ID}}"
CERC_DEPLOYED_CONTRACT="${CERC_DEPLOYED_CONTRACT:-${DEFAULT_CERC_DEPLOYED_CONTRACT}}"
CERC_RELAY_NODES="${CERC_RELAY_NODES:-${DEFAULT_CERC_RELAY_NODES}}"
CERC_DENY_MULTIADDRS="${CERC_DENY_MULTIADDRS:-${DEFAULT_CERC_DENY_MULTIADDRS}}"
CERC_APP_WATCHER_URL="${CERC_APP_WATCHER_URL:-${DEFAULT_CERC_APP_WATCHER_URL}}"
# If not set (or []), check the mounted volume for relay peer id
@@ -37,5 +38,6 @@ yq -n ".address = env(CERC_DEPLOYED_CONTRACT)" > /config/config.yml
yq ".watcherUrl = env(CERC_APP_WATCHER_URL)" -i /config/config.yml
yq ".chainId = env(CERC_CHAIN_ID)" -i /config/config.yml
yq ".relayNodes = strenv(CERC_RELAY_NODES)" -i /config/config.yml
yq ".denyMultiaddrs = strenv(CERC_DENY_MULTIADDRS)" -i /config/config.yml
/scripts/start-serving-app.sh

View File

@@ -24,3 +24,6 @@ DEFAULT_CERC_CHAIN_ID=42069
# Set of relay nodes to be used by web-apps
DEFAULT_CERC_RELAY_NODES=[]
# Set of multiaddrs to be avoided while dialling
DEFAULT_CERC_DENY_MULTIADDRS=[]
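The expected value is a JSON array of multiaddr strings, in the same format as `CERC_RELAY_NODES`; for example (multiaddr borrowed from the relay-nodes example elsewhere in these docs):
```bash
CERC_DENY_MULTIADDRS='["/dns4/example.com/tcp/443/wss/p2p/12D3KooWGHmDDCc93XUWL16FMcTPCGu2zFaMkf67k8HZ4gdQbRDr"]'
```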

View File

@@ -8,13 +8,20 @@ CERC_L2_GETH_RPC="${CERC_L2_GETH_RPC:-${DEFAULT_CERC_L2_GETH_RPC}}"
CERC_L1_ACCOUNTS_CSV_URL="${CERC_L1_ACCOUNTS_CSV_URL:-${DEFAULT_CERC_L1_ACCOUNTS_CSV_URL}}"
CERC_RELAY_PEERS="${CERC_RELAY_PEERS:-${DEFAULT_CERC_RELAY_PEERS}}"
CERC_DENY_MULTIADDRS="${CERC_DENY_MULTIADDRS:-${DEFAULT_CERC_DENY_MULTIADDRS}}"
CERC_RELAY_ANNOUNCE_DOMAIN="${CERC_RELAY_ANNOUNCE_DOMAIN:-${DEFAULT_CERC_RELAY_ANNOUNCE_DOMAIN}}"
CERC_ENABLE_PEER_L2_TXS="${CERC_ENABLE_PEER_L2_TXS:-${DEFAULT_CERC_ENABLE_PEER_L2_TXS}}"
CERC_DEPLOYED_CONTRACT="${CERC_DEPLOYED_CONTRACT:-${DEFAULT_CERC_DEPLOYED_CONTRACT}}"
echo "Using L2 RPC endpoint ${CERC_L2_GETH_RPC}"
# Use public domain for relay multiaddr in peer config if specified
# Otherwise, use the docker container's host IP
if [ -n "$CERC_RELAY_ANNOUNCE_DOMAIN" ]; then
CERC_RELAY_MULTIADDR="/dns4/${CERC_RELAY_ANNOUNCE_DOMAIN}/tcp/443/wss/p2p/$(jq -r '.id' /app/peers/relay-id.json)"
else
CERC_RELAY_MULTIADDR="/dns4/mobymask-watcher-server/tcp/9090/ws/p2p/$(jq -r '.id' /app/peers/relay-id.json)"
fi
# Use contract address from environment variable or set from config.json in mounted volume
if [ -n "$CERC_DEPLOYED_CONTRACT" ]; then
@@ -42,6 +49,7 @@ fi
WATCHER_CONFIG_TEMPLATE=$(cat environments/watcher-config-template.toml)
WATCHER_CONFIG=$(echo "$WATCHER_CONFIG_TEMPLATE" | \
sed -E "s|REPLACE_WITH_CERC_RELAY_PEERS|${CERC_RELAY_PEERS}|g; \
s|REPLACE_WITH_CERC_DENY_MULTIADDRS|${CERC_DENY_MULTIADDRS}|g; \
s/REPLACE_WITH_CERC_RELAY_ANNOUNCE_DOMAIN/${CERC_RELAY_ANNOUNCE_DOMAIN}/g; \
s|REPLACE_WITH_CERC_RELAY_MULTIADDR|${CERC_RELAY_MULTIADDR}|g; \
s/REPLACE_WITH_CERC_ENABLE_PEER_L2_TXS/${CERC_ENABLE_PEER_L2_TXS}/g; \
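Note the mixed `sed` delimiters above: substitutions whose replacement values may contain `/` (peer lists, multiaddrs) use `|` as the delimiter, while plain flags use the usual `/`. A minimal illustration:
```bash
# Using '|' as the sed delimiter lets the replacement contain '/' unescaped
echo 'relayMultiaddr = REPLACE_WITH_CERC_RELAY_MULTIADDR' \
  | sed -E 's|REPLACE_WITH_CERC_RELAY_MULTIADDR|/dns4/example.com/tcp/443/wss|'
```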

View File

@@ -1,6 +1,7 @@
{
"relayNodes": [],
"peer": {
"denyMultiaddrs": [],
"enableDebugInfo": true
}
}

View File

@@ -5,6 +5,7 @@ if [ -n "$CERC_SCRIPT_DEBUG" ]; then
fi
CERC_RELAY_NODES="${CERC_RELAY_NODES:-${DEFAULT_CERC_RELAY_NODES}}"
CERC_DENY_MULTIADDRS="${CERC_DENY_MULTIADDRS:-${DEFAULT_CERC_DENY_MULTIADDRS}}"
# If not set (or []), check the mounted volume for relay peer id
if [ -z "$CERC_RELAY_NODES" ] || [ "$CERC_RELAY_NODES" = "[]" ]; then
@@ -16,5 +17,6 @@ echo "Using CERC_RELAY_NODES $CERC_RELAY_NODES"
# Use yq to create config.yml with environment variables
yq -n ".relayNodes = strenv(CERC_RELAY_NODES)" > /config/config.yml
yq ".denyMultiaddrs = strenv(CERC_DENY_MULTIADDRS)" -i /config/config.yml
/scripts/start-serving-app.sh

View File

@@ -27,6 +27,7 @@
host = "0.0.0.0"
port = 9090
relayPeers = REPLACE_WITH_CERC_RELAY_PEERS
denyMultiaddrs = REPLACE_WITH_CERC_DENY_MULTIADDRS
peerIdFile = './peers/relay-id.json'
announce = 'REPLACE_WITH_CERC_RELAY_ANNOUNCE_DOMAIN'
enableDebugInfo = true
@@ -34,6 +35,7 @@
[server.p2p.peer]
relayMultiaddr = 'REPLACE_WITH_CERC_RELAY_MULTIADDR'
pubSubTopic = 'mobymask'
denyMultiaddrs = REPLACE_WITH_CERC_DENY_MULTIADDRS
peerIdFile = './peers/peer-id.json'
enableDebugInfo = true
enableL2Txs = REPLACE_WITH_CERC_ENABLE_PEER_L2_TXS

View File

@@ -127,3 +127,9 @@ else
fi
wait $geth_pid
if [ "true" == "$CERC_KEEP_RUNNING_AFTER_GETH_EXIT" ]; then
while [ 1 -eq 1 ]; do
sleep 60
done
fi

View File

@@ -1,4 +1,4 @@
FROM sigp/lcli:v3.2.1 AS lcli
FROM sigp/lcli:v4.1.0 AS lcli
FROM skylenet/ethereum-genesis-generator@sha256:210353ce7c898686bc5092f16c61220a76d357f51eff9c451e9ad1b9ad03d4d3 AS ethgen
FROM cerc/fixturenet-eth-geth:local AS fnetgeth

View File

@@ -21,21 +21,13 @@ if [ ! -f "$DATADIR/bootnode/enr.dat" ]; then
--udp-port $BOOTNODE_PORT \
--tcp-port $BOOTNODE_PORT \
--genesis-fork-version $GENESIS_FORK_VERSION \
--output-dir $DATADIR/bootnode-temp
# Output ENR to a temp dir and mv as "lcli generate-bootnode-enr" will not overwrite an empty dir (mounted volume)
mkdir -p $DATADIR/bootnode
mv $DATADIR/bootnode-temp/* $DATADIR/bootnode
rm -r $DATADIR/bootnode-temp
echo "Generated bootnode enr"
else
echo "Found existing bootnode enr"
fi
--output-dir $DATADIR/bootnode
bootnode_enr=`cat $DATADIR/bootnode/enr.dat`
echo "- $bootnode_enr" > $TESTNET_DIR/boot_enr.yaml
echo "Written bootnode enr to $TESTNET_DIR/boot_enr.yaml"
echo "Generated bootnode enr and written to $TESTNET_DIR/boot_enr.yaml"
fi
exec lighthouse boot_node \
--testnet-dir $TESTNET_DIR \

View File

@@ -0,0 +1,47 @@
#!/bin/bash
# Exports the complete fixturenet-eth ethdb data to a tarball (default, ./ethdb.tgz), waiting for a minimum
# block height (default 1000) to be reached before exporting.
# Usage: export-ethdb.sh [min_block_number=1000] [output_file=./ethdb.tgz]
if [[ -n "$CERC_SCRIPT_DEBUG" ]]; then
set -x
fi
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
GETH_EXPORT_MIN_BLOCK=${1:-${GETH_EXPORT_MIN_BLOCK:-1000}}
# Wait for block.
${SCRIPT_DIR}/status.sh $GETH_EXPORT_MIN_BLOCK
if [[ $? -ne 0 ]]; then
echo "Unable to export ethdb." 1>&2
exit 1
fi
# Locate the geth container.
echo -n "Exporting ethdb.... "
GETH_CONTAINER=`docker ps -q -f "name=${CERC_SO_COMPOSE_PROJECT}-fixturenet-eth-geth-2-1"`
if [[ -z "$GETH_CONTAINER" ]]; then
echo "not found"
exit 1
fi
docker exec $GETH_CONTAINER sh -c 'rm -rf /root/tmp && mkdir -p /root/tmp/export'
docker exec $GETH_CONTAINER sh -c 'ln -s /opt/testnet/build/el/geth.json /root/tmp/export/genesis.json && ln -s /root/ethdata /root/tmp/export/'
docker exec $GETH_CONTAINER sh -c 'cat /root/tmp/export/genesis.json | jq ".config" > /root/tmp/export/genesis.config.json'
# Stop geth and zip up ethdb.
docker exec $GETH_CONTAINER sh -c 'curl -s --location "localhost:8545" --header "Content-Type: application/json" --data "{\"jsonrpc\": \"2.0\", \"id\": 1, \"method\": \"eth_getBlockByNumber\", \"params\": [\"0x0\", false]}" > /root/tmp/export/eth_getBlockByNumber_0x0.json'
docker exec $GETH_CONTAINER sh -c 'curl -s --location "localhost:8545" --header "Content-Type: application/json" --data "{\"jsonrpc\": \"2.0\", \"id\": 1, \"method\": \"eth_blockNumber\", \"params\": []}" > /root/tmp/export/eth_blockNumber.json'
docker exec $GETH_CONTAINER sh -c "killall geth && sleep 2 && tar chzf /root/tmp/ethdb.tgz -C /root/tmp/export ."
# Copy ethdb to host.
GETH_EXPORT_FILE=${2:-${GETH_EXPORT_FILE:-./ethdb.tgz}}
docker cp $GETH_CONTAINER:/root/tmp/ethdb.tgz $GETH_EXPORT_FILE
echo "$GETH_EXPORT_FILE"
docker exec $GETH_CONTAINER sh -c "rm -rf /root/tmp"
# Restart the container to get geth back up and running.
docker restart $GETH_CONTAINER >/dev/null

View File

@@ -2,9 +2,10 @@
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
STATUSES=("geth to generate DAG" "beacon phase0" "beacon altair" "beacon bellatrix pre-merge" "beacon bellatrix merge")
STATUS=0
MIN_BLOCK_NUM=${1:-${MIN_BLOCK_NUM:-3}}
STATUSES=("geth to generate DAG" "beacon phase0" "beacon altair" "beacon bellatrix pre-merge" "beacon bellatrix merge" "block number $MIN_BLOCK_NUM")
STATUS=0
LIGHTHOUSE_BASE_URL=${LIGHTHOUSE_BASE_URL}
GETH_BASE_URL=${GETH_BASE_URL}
@@ -13,18 +14,29 @@ GETH_BASE_URL=${GETH_BASE_URL}
# or some execution environment-neutral mechanism.
if [ -z "$LIGHTHOUSE_BASE_URL" ]; then
LIGHTHOUSE_CONTAINER=`docker ps -q -f "name=fixturenet-eth-lighthouse-1-1"`
if [ -z "$LIGHTHOUSE_CONTAINER" ]; then
echo "Lighthouse container not found." 1>&2
exit 1
fi
LIGHTHOUSE_PORT=`docker port $LIGHTHOUSE_CONTAINER 8001 | cut -d':' -f2`
LIGHTHOUSE_BASE_URL="http://localhost:${LIGHTHOUSE_PORT}"
fi
if [ -z "$GETH_BASE_URL" ]; then
GETH_CONTAINER=`docker ps -q -f "name=fixturenet-eth-geth-1-1"`
if [ -z "$GETH_CONTAINER" ]; then
echo "Lighthouse container not found." 1>&2
exit 1
fi
GETH_PORT=`docker port $GETH_CONTAINER 8545 | cut -d':' -f2`
GETH_BASE_URL="http://localhost:${GETH_PORT}"
fi
MARKER="."
function inc_status() {
echo " done"
MARKEr="."
STATUS=$((STATUS + 1))
if [ $STATUS -lt ${#STATUSES[@]} ]; then
echo -n "Waiting for ${STATUSES[$STATUS]}..."
@@ -34,7 +46,7 @@ function inc_status() {
echo -n "Waiting for ${STATUSES[$STATUS]}..."
while [ $STATUS -lt ${#STATUSES[@]} ]; do
sleep 1
echo -n "."
echo -n "$MARKER"
case $STATUS in
0)
result=`wget --no-check-certificate --quiet -O - --method POST --header 'Content-Type: application/json' \
@@ -67,5 +79,13 @@ while [ $STATUS -lt ${#STATUSES[@]} ]; do
inc_status
fi
;;
5)
result=`wget --no-check-certificate --quiet -O - "$LIGHTHOUSE_BASE_URL/eth/v2/beacon/blocks/head" | jq -r '.data.message.body.execution_payload.block_number'`
if [ ! -z "$result" ] && [ $result -gt $MIN_BLOCK_NUM ]; then
inc_status
else
MARKER="$result "
fi
;;
esac
done

View File

@@ -1,4 +1,4 @@
FROM sigp/lighthouse:v4.0.1-modern
FROM sigp/lighthouse:v4.1.0-modern
RUN apt-get update; apt-get install bash netcat curl less jq -y;

View File

@@ -50,9 +50,9 @@ RUN yarn global add http-server
# Globally install both versions of the payload web app package
# Install old version of MobyMask web app
RUN yarn global add @cerc-io/mobymask-ui@0.1.3
RUN yarn global add @cerc-io/mobymask-ui@0.1.4
# Install the LXDAO version of MobyMask web app
RUN yarn global add @cerc-io/mobymask-ui-lxdao@npm:@cerc-io/mobymask-ui@0.1.3-lxdao-0.1.1
RUN yarn global add @cerc-io/mobymask-ui-lxdao@npm:@cerc-io/mobymask-ui@0.1.4-lxdao-0.1.1
# Expose port for http
EXPOSE 80

View File

@@ -33,7 +33,7 @@ do
echo "Substituting: ${template_string_to_replace} = ${template_value_to_substitute}"
# TODO: Pass keys to be replaced without double quotes
if [[ "$template_string_to_replace" =~ ^${config_prefix}_(relayNodes|chainId)$ ]]; then
if [[ "$template_string_to_replace" =~ ^${config_prefix}_(relayNodes|chainId|denyMultiaddrs)$ ]]; then
find ${webapp_files_dir} -type f -exec sed -i 's#"'"${template_string_to_replace}"'"#'"${template_value_to_substitute}"'#g' {} +
else
# Note: we do not escape our strings, on the expectation they do not container the '#' char.

View File

@@ -0,0 +1,3 @@
#!/usr/bin/env bash
# Build cerc/pocket
docker build -t cerc/pocket:local ${CERC_REPO_BASE_DIR}/pocket-core-deployments/docker

View File

@@ -21,7 +21,7 @@ RUN mkdir -p /config
RUN yarn global add http-server
# Globally install the payload web app package
RUN yarn global add @cerc-io/test-app@0.2.33
RUN yarn global add @cerc-io/test-app@0.2.34
# Expose port for http
EXPOSE 80

View File

@@ -33,7 +33,7 @@ do
echo "Substituting: ${template_string_to_replace} = ${template_value_to_substitute}"
# TODO: Pass keys to be replaced without double quotes
if [[ "$template_string_to_replace" == "${config_prefix}_relayNodes" ]]; then
if [[ "$template_string_to_replace" =~ ^${config_prefix}_(relayNodes|denyMultiaddrs)$ ]]; then
find ${webapp_files_dir} -type f -exec sed -i 's#"'"${template_string_to_replace}"'"#'"${template_value_to_substitute}"'#g' {} +
else
# Note: we do not escape our strings, on the expectation they do not container the '#' char.

View File

@@ -0,0 +1,13 @@
FROM node:18.16.0-alpine3.16
RUN apk --update --no-cache add git python3 alpine-sdk
WORKDIR /app
COPY . .
RUN echo "Building azimuth-watcher-ts" && \
yarn && yarn build
RUN echo "Install toml-js to update watcher config files" && \
yarn add --dev --ignore-workspace-root-check toml-js

View File

@@ -0,0 +1,9 @@
#!/usr/bin/env bash
# Build cerc/watcher-azimuth
source ${CERC_CONTAINER_BASE_DIR}/build-base.sh
# See: https://stackoverflow.com/a/246128/1701505
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
docker build -t cerc/watcher-azimuth:local -f ${SCRIPT_DIR}/Dockerfile ${build_command_args} ${CERC_REPO_BASE_DIR}/azimuth-watcher-ts

View File

@@ -36,4 +36,6 @@ cerc/optimism-l2geth
cerc/optimism-op-batcher
cerc/optimism-op-node
cerc/optimism-op-proposer
cerc/pocket
cerc/watcher-azimuth
cerc/lotus

View File

@@ -24,4 +24,6 @@ tx-spammer
kubo
foundry
fixturenet-optimism
fixturenet-pocket
watcher-azimuth
fixturenet-lotus

View File

@@ -27,4 +27,7 @@ lirewine/sdk
telackey/act_runner
ethereum-optimism/op-geth
ethereum-optimism/optimism
pokt-network/pocket-core
pokt-network/pocket-core-deployments
cerc-io/azimuth-watcher-ts
filecoin-project/lotus

View File

@@ -0,0 +1,72 @@
# Azimuth Watcher
Instructions to set up and deploy the Azimuth Watcher stack
## Setup
Prerequisite: ipld-eth-server RPC and GQL endpoints
Clone required repositories:
```bash
laconic-so --stack azimuth setup-repositories
```
NOTE: If the repository already exists and is checked out to a different version, the `setup-repositories` command will throw an error.
To get around this, remove the `azimuth-watcher-ts` repository and re-run the command.
Check out the required versions and branches in the repos:
```bash
# azimuth-watcher-ts
cd ~/cerc/azimuth-watcher-ts
git checkout v0.1.0
```
Build the container images:
```bash
laconic-so --stack azimuth build-containers
```
This should create the required docker images in the local image registry.
### Configuration
* Create and update an env file to be used in the next step:
```bash
# External ipld-eth-server endpoints
CERC_IPLD_ETH_RPC=
CERC_IPLD_ETH_GQL=
```
* NOTE: If ipld-eth-server is running on the host machine, use `host.docker.internal` as the hostname to access host ports
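  For instance, if both endpoints are served from the host (ports are illustrative and depend on your ipld-eth-server configuration):
```bash
CERC_IPLD_ETH_RPC=http://host.docker.internal:8081
CERC_IPLD_ETH_GQL=http://host.docker.internal:8082/graphql
```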
### Deploy the stack
* Deploy the containers:
```bash
laconic-so --stack azimuth deploy-system --env-file <PATH_TO_ENV_FILE> up
```
* List and check the health status of all the containers using `docker ps` and wait for them to be `healthy`
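* One way to watch them converge to `healthy` (the name filter is illustrative):
```bash
watch 'docker ps --filter "name=watcher" --format "table {{.Names}}\t{{.Status}}"'
```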
## Clean up
To stop all the services running in the background, run:
```bash
laconic-so --stack azimuth deploy-system down
```
Clear volumes created by this stack:
```bash
# List all relevant volumes
docker volume ls -q --filter "name=.*watcher_db_data"
# Remove all the listed volumes
docker volume rm $(docker volume ls -q --filter "name=.*watcher_db_data")
```

View File

@@ -0,0 +1,8 @@
version: "1.0"
name: azimuth
repos:
- cerc-io/azimuth-watcher-ts
containers:
- cerc/watcher-azimuth
pods:
- watcher-azimuth

View File

@@ -16,7 +16,7 @@ Leave `CERC_NPM_REGISTRY_URL` un-set to use the local gitea registry.
Note: the scheme/gerbil container is excluded as it isn't currently required for the package registry.
```
$ laconic-so --stack build-support build-containers --exclude cerc/builder-gerbil
$ laconic-so --stack build-support build-containers
```
### 2. Deploy Gitea Package Registry

View File

@@ -1,6 +1,5 @@
version: "1.1"
version: "1.2"
name: build-support
decription: "Build Support Components"
containers:
- cerc/builder-js
- cerc/builder-gerbil

View File

@@ -0,0 +1,37 @@
# fixturenet-eth-tx
A variation of `fixturenet-eth` that automatically generates transactions using `tx-spammer`.
See `stacks/fixturenet-eth/README.md` for more information.
## Containers
* cerc/go-ethereum
* cerc/lighthouse
* cerc/fixturenet-eth-geth
* cerc/fixturenet-eth-lighthouse
* cerc/tx-spammer
## Deploy the stack
```
$ laconic-so --stack fixturenet-eth-tx setup-repositories
$ laconic-so --stack fixturenet-eth-tx build-containers
$ laconic-so --stack fixturenet-eth-tx deploy up
```
## Export the ethdb (optional)
It is easy to export data from the fixturenet for offline processing of the raw ethdb files (e.g., by eth-statediff-service) using the `export-ethdb.sh` script.
For example:
```
$ app/data/container-build/cerc-fixturenet-eth-lighthouse/scripts/export-ethdb.sh 500
Waiting for geth to generate DAG.... done
Waiting for beacon phase0.... done
Waiting for beacon altair.... done
Waiting for beacon bellatrix pre-merge.... done
Waiting for beacon bellatrix merge.... done
Waiting for block number 500.... done
Exporting ethdb.... ./ethdb.tgz
```

View File

@@ -0,0 +1,18 @@
version: "1.2"
name: fixturenet-eth-tx
decription: "Ethereum Fixturenet w/ tx-spammer"
repos:
- cerc-io/go-ethereum
- cerc-io/tx-spammer
- dboreham/foundry
containers:
- cerc/go-ethereum
- cerc/lighthouse
- cerc/fixturenet-eth-geth
- cerc/fixturenet-eth-lighthouse
- cerc/tx-spammer
- cerc/foundry
pods:
- fixturenet-eth
- foundry
- tx-spammer

View File

@@ -117,8 +117,8 @@ Clear volumes created by this stack:
```bash
# List all relevant volumes
$ docker volume ls -q --filter "name=.*fixturenet_eth_bootnode_geth_data|.*fixturenet_eth_bootnode_lighthouse_data|.*fixturenet_eth_geth_1_data|.*fixturenet_eth_geth_2_data|.*fixturenet_eth_lighthouse_1_data|.*fixturenet_eth_lighthouse_2_data"
$ docker volume ls -q --filter "name=.*fixturenet_eth_bootnode_geth_data|.*fixturenet_eth_geth_1_data|.*fixturenet_eth_geth_2_data|.*fixturenet_eth_lighthouse_1_data|.*fixturenet_eth_lighthouse_2_data"
# Remove all the listed volumes
$ docker volume rm $(docker volume ls -q --filter "name=.*fixturenet_eth_bootnode_geth_data|.*fixturenet_eth_bootnode_lighthouse_data|.*fixturenet_eth_geth_1_data|.*fixturenet_eth_geth_2_data|.*fixturenet_eth_lighthouse_1_data|.*fixturenet_eth_lighthouse_2_data")
$ docker volume rm $(docker volume ls -q --filter "name=.*fixturenet_eth_bootnode_geth_data|.*fixturenet_eth_geth_1_data|.*fixturenet_eth_geth_2_data|.*fixturenet_eth_lighthouse_1_data|.*fixturenet_eth_lighthouse_2_data")
```

View File

@@ -2,7 +2,7 @@
Instructions to setup and deploy an end-to-end L1+L2 stack with [fixturenet-eth](../fixturenet-eth/) (L1) and [Optimism](https://stack.optimism.io) (L2)
We support running just the L2 part of stack, given an external L1 endpoint. Follow [l2-only](./l2-only.md) for the same.
We support running just the L2 part of stack, given an external L1 endpoint. Follow the [L2 only doc](./l2-only.md) for the same.
## Setup
@@ -28,6 +28,8 @@ Build the container images:
laconic-so --stack fixturenet-optimism build-containers
```
Note: this will take >10 mins depending on the specs of your machine, and **requires** 16GB of memory or greater.
This should create the required docker images in the local image registry:
* `cerc/go-ethereum`
* `cerc/lighthouse`
@@ -48,12 +50,14 @@ Deploy the stack:
laconic-so --stack fixturenet-optimism deploy up
```
The `fixturenet-optimism-contracts` service may take a while (`~15 mins`) to complete running as it:
If you get the error `service "fixturenet-optimism-contracts" didn't complete successfully: exit 1` with ~25 lines of Traceback, wait 15-20 minutes, then re-run the command.
The `fixturenet-optimism-contracts` service takes a while to complete running as it:
1. waits for the 'Merge' to happen on L1
2. waits for a finalized block to exist on L1 (so that it can be taken as a starting block for roll ups)
3. deploys the L1 contracts
To list down and monitor the running containers:
To list and monitor the running containers:
```bash
laconic-so --stack fixturenet-optimism deploy ps

View File

@@ -0,0 +1,59 @@
# Pocket Fixturenet
Instructions for deploying a local single-node Pocket chain alongside a geth + lighthouse blockchain "fixturenet" for development and testing purposes using laconic-stack-orchestrator.
## 1. Build Laconic Stack Orchestrator
Build this fork of Laconic Stack Orchestrator which includes the fixturenet-pocket stack:
```
$ scripts/build_shiv_package.sh
$ cd package
$ mv laconic-so-{version} /usr/local/bin/laconic-so # Or move laconic-so to ~/bin or your favorite on-path directory
```
## 2. Clone required repositories
```
$ laconic-so --stack fixturenet-pocket setup-repositories
```
## 3. Build the stack's containers
```
$ laconic-so --stack fixturenet-pocket build-containers
```
## 4. Deploy the stack
```
$ laconic-so --stack fixturenet-pocket deploy up
```
It may take up to 10 minutes for the Eth Fixturenet to fully come online and start producing blocks.
## 5. Check status
**Eth Fixturenet:**
```
$ laconic-so --stack fixturenet-pocket deploy exec fixturenet-eth-bootnode-lighthouse /scripts/status-internal.sh
Waiting for geth to generate DAG.... done
Waiting for beacon phase0.... done
Waiting for beacon altair.... done
Waiting for beacon bellatrix pre-merge.... done
Waiting for beacon bellatrix merge.... done
```
**Pocket node:**
```
$ laconic-so --stack fixturenet-pocket deploy exec pocket "pocket query height"
2023/04/20 08:07:46 Initializing Pocket Datadir
2023/04/20 08:07:46 datadir = /home/app/.pocket
http://localhost:8081/v1/query/height
{
"height": 4
}
```
or
```
$ laconic-so --stack fixturenet-pocket deploy logs pocket
```
## 6. Send a relay request to Pocket node
The Pocket node serves relay requests at `http://localhost:8081/v1/client/sim`
**Example request:**
```
$ curl -X POST --data '{"relay_network_id":"0021","payload":{"data":"{\"jsonrpc\": \"2.0\",\"id\": 1,\"method\": \"eth_blockNumber\",\"params\": []}","method":"POST","path":"","headers":{}}}' http://localhost:8081/v1/client/sim
```
**Response:**
```
"{\"jsonrpc\":\"2.0\",\"id\":1,\"result\":\"0x6fe\"}\n"
```

View File

@@ -0,0 +1,16 @@
version: "1.0"
name: fixturenet-pocket
description: "A single node pocket chain that can serve relays from the geth-1 node in eth-fixturenet"
repos:
- cerc-io/go-ethereum
- pokt-network/pocket-core
- pokt-network/pocket-core-deployments # contains the dockerfile
containers:
- cerc/go-ethereum
- cerc/lighthouse
- cerc/fixturenet-eth-geth
- cerc/fixturenet-eth-lighthouse
- cerc/pocket
pods:
- fixturenet-pocket
- fixturenet-eth

View File

@@ -23,11 +23,11 @@ Checkout to the required versions and branches in repos
```bash
# watcher-ts
cd ~/cerc/watcher-ts
git checkout v0.2.39
git checkout v0.2.41
# mobymask-v2-watcher-ts
cd ~/cerc/mobymask-v2-watcher-ts
git checkout v0.1.0
git checkout v0.1.1
# MobyMask
cd ~/cerc/MobyMask

View File

@@ -19,11 +19,11 @@ Checkout to the required versions and branches in repos:
```bash
# watcher-ts
cd ~/cerc/watcher-ts
git checkout v0.2.39
git checkout v0.2.41
# mobymask-v2-watcher-ts
cd ~/cerc/mobymask-v2-watcher-ts
git checkout v0.1.0
git checkout v0.1.1
# MobyMask
cd ~/cerc/MobyMask
@@ -67,11 +67,14 @@ Create and update an env file to be used in the next step ([defaults](../../conf
# (used for generating a root invite link after deploying the contract)
CERC_MOBYMASK_APP_BASE_URI="http://127.0.0.1:3002/#"
# (Optional) Domain to be used in the relay node's announce address
CERC_RELAY_ANNOUNCE_DOMAIN=
# (Optional) Set of relay peers to connect to from the relay node
CERC_RELAY_PEERS=[]
# (Optional) Domain to be used in the relay node's announce address
CERC_RELAY_ANNOUNCE_DOMAIN=
# (Optional) Set of multiaddrs to be avoided while dialling
CERC_DENY_MULTIADDRS=[]
# Set to false for disabling watcher peer to send txs to L2
CERC_ENABLE_PEER_L2_TXS=true

View File

@@ -35,11 +35,11 @@ Checkout to the required versions and branches in repos:
```bash
# watcher-ts
cd ~/cerc/watcher-ts
git checkout v0.2.39
git checkout v0.2.41
# mobymask-v2-watcher-ts
cd ~/cerc/mobymask-v2-watcher-ts
git checkout v0.1.0
git checkout v0.1.1
# MobyMask
cd ~/cerc/MobyMask

View File

@@ -26,6 +26,9 @@ Create and update an env file to be used in the next step ([defaults](../../conf
# Eg. CERC_RELAY_NODES=["/dns4/example.com/tcp/443/wss/p2p/12D3KooWGHmDDCc93XUWL16FMcTPCGu2zFaMkf67k8HZ4gdQbRDr"]
CERC_RELAY_NODES=[]
# Set of multiaddrs to be avoided while dialling
CERC_DENY_MULTIADDRS=[]
# Also add if running MobyMask app:
# Watcher endpoint used by the app for GQL queries

View File

@@ -26,7 +26,7 @@ In addition to the pre-requisites listed in the [README](/README.md), the follow
1. Clone this repository:
```
$ git clone (https://github.com/cerc-io/stack-orchestrator.git
$ git clone https://github.com/cerc-io/stack-orchestrator.git
```
2. Enter the project directory:
@@ -87,20 +87,27 @@ Use shiv to build a single file Python executable zip archive of laconic-so:
```
$ cp stack-orchestrator/laconic-so ~/bin
$ laconic-so
Usage: python -m laconic-so [OPTIONS] COMMAND [ARGS]...
Usage: laconic-so [OPTIONS] COMMAND [ARGS]...
Laconic Stack Orchestrator
Options:
--stack TEXT specify a stack to build/deploy
--quiet
--verbose
--dry-run
--local-stack
--debug
--continue-on-error
-h, --help Show this message and exit.
Commands:
build-containers build the set of containers required for a complete...
build-npms build the set of npm packages required for a...
deploy deploy a stack
deploy-system deploy a stack
setup-repositories git clone the set of repositories required to build...
version print tool version
```
For cutting releases, use the [shiv build script](/scripts/build_shiv_package.sh).

docs/laconicd-fixturenet.md (new file, 183 lines)
View File

@ -0,0 +1,183 @@
# Running a laconicd fixturenet with console
The following tutorial explains the steps to run a laconicd fixturenet with CLI and web console that displays records in the registry. It is designed as an introduction to Stack Orchestrator and to showcase one component of the Laconic Stack. Prior to Stack Orchestrator, the following 4 repositories had to be cloned and set up manually:
- https://github.com/cerc-io/laconicd
- https://github.com/cerc-io/laconic-sdk
- https://github.com/cerc-io/laconic-registry-cli
- https://github.com/cerc-io/laconic-console
Now, with Stack Orchestrator, it is a few quick commands. Additionally, the `docker` and `docker compose` integration on the back-end allows the stack to easily persist, facilitating workflows.
## Setup laconic-so
To avoid hiccups on Mac M1/M2 and any local machine nuances that may affect the user experience, this tutorial is focused on using a fresh Digital Ocean (DO) droplet with similar specs:
16 GB Memory / 8 Intel vCPUs / 160 GB Disk.
1. Log in to the droplet as root (either by SSH key or a password set in the DO console)
```
ssh root@IP
```
2. Get the install script, give it executable permissions, and run it:
```
curl -o install.sh https://raw.githubusercontent.com/cerc-io/stack-orchestrator/main/scripts/quick-install-ubuntu.sh
```
```
chmod +x install.sh
```
```
bash install.sh
```
3. Confirm docker was installed and activate the changes in `~/.profile`:
```
docker run hello-world
```
```
source ~/.profile
```
4. Verify installation:
```
laconic-so version
```
## Setup the laconic fixturenet stack
1. Get the repositories
```
laconic-so --stack fixturenet-laconic-loaded setup-repositories --include cerc-io/laconicd,cerc-io/laconic-sdk,cerc-io/laconic-registry-cli,cerc-io/laconic-console
```
2. Set this environment variable to the Laconic self-hosted Gitea instance:
```
export CERC_NPM_REGISTRY_URL=https://git.vdb.to/api/packages/cerc-io/npm/
```
3. Build the containers:
```
laconic-so --stack fixturenet-laconic-loaded build-containers
```
It's possible to run into an `ESOCKETTIMEDOUT` error, e.g., `error An unexpected error occurred: "https://registry.yarnpkg.com/@material-ui/icons/-/icons-4.11.3.tgz: ESOCKETTIMEDOUT"`. This may happen even if you have a great internet connection. In that case, re-run the `build-containers` command.
4. Set this environment variable to your droplet's IP address:
```
export LACONIC_HOSTED_ENDPOINT=http://<your-IP>
```
5. Deploy the stack:
```
laconic-so --stack fixturenet-laconic-loaded deploy up
```
6. Check the logs:
```
laconic-so --stack fixturenet-laconic-loaded deploy logs
```
You'll see output from `laconicd` and the block height should be >1 to confirm it is running:
```
laconic-5cd0a80c1442c3044c8b295d26426bae-laconicd-1 | 9:29PM INF indexed block exents height=12 module=txindex server=node
laconic-5cd0a80c1442c3044c8b295d26426bae-laconicd-1 | 9:30PM INF Timed out dur=4976.960115 height=13 module=consensus round=0 server=node step=1
laconic-5cd0a80c1442c3044c8b295d26426bae-laconicd-1 | 9:30PM INF received proposal module=consensus proposal={"Type":32,"block_id":{"hash":"D26C088A711F912ADB97888C269F628DA33153795621967BE44DCB43C3D03CA4","parts":{"hash":"22411A20B7F14CDA33244420FBDDAF24450C0628C7A06034FF22DAC3699DDCC8","total":1}},"height":13,"pol_round":-1,"round":0,"signature":"DEuqnaQmvyYbUwckttJmgKdpRu6eVm9i+9rQ1pIrV2PidkMNdWRZBLdmNghkIrUzGbW8Xd7UVJxtLRmwRASgBg==","timestamp":"2023-04-18T21:30:01.49450663Z"} server=node
laconic-5cd0a80c1442c3044c8b295d26426bae-laconicd-1 | 9:30PM INF received complete proposal block hash=D26C088A711F912ADB97888C269F628DA33153795621967BE44DCB43C3D03CA4 height=13 module=consensus server=node
laconic-5cd0a80c1442c3044c8b295d26426bae-laconicd-1 | 9:30PM INF finalizing commit of block hash={} height=13 module=consensus num_txs=0 root=1A8CA1AF139CCC80EC007C6321D8A63A46A793386EE2EDF9A5CA0AB2C90728B7 server=node
laconic-5cd0a80c1442c3044c8b295d26426bae-laconicd-1 | 9:30PM INF minted coins from module account amount=2059730459416582643aphoton from=mint module=x/bank
laconic-5cd0a80c1442c3044c8b295d26426bae-laconicd-1 | 9:30PM INF executed block height=13 module=state num_invalid_txs=0 num_valid_txs=0 server=node
laconic-5cd0a80c1442c3044c8b295d26426bae-laconicd-1 | 9:30PM INF commit synced commit=436F6D6D697449447B5B363520313037203630203232372039352038352032303820313334203231392032303520313433203130372031343920313431203139203139322038362031323720362031383520323533203137362031333820313735203135392031383620323334203135382031323120313431203230342037335D3A447D
laconic-5cd0a80c1442c3044c8b295d26426bae-laconicd-1 | 9:30PM INF committed state app_hash=416B3CE35F55D086DBCD8F6B958D13C0567F06B9FDB08AAF9FBAEA9E798DCC49 height=13 module=state num_txs=0 server=node
laconic-5cd0a80c1442c3044c8b295d26426bae-laconicd-1 | 9:30PM INF indexed block exents height=13 module=txindex server=node
```
7. Confirm operation of the registry CLI:
```
laconic-so --stack fixturenet-laconic-loaded deploy exec cli "laconic cns status"
```
## Configure Digital Ocean firewall
Let's open some ports.
1. In the Digital Ocean web console, navigate to your droplet's main page. Select the "Networking" tab and scroll down to "Firewall".
2. Get the port for the running console:
```
echo http://IP:$(laconic-so --stack fixturenet-laconic-loaded deploy port laconic-console 80 | cut -d ':' -f 2)
```
```
http://IP:32778
```
3. Go back to the Digital Ocean web console and set an Inbound Rule for Custom TCP of the above port:
- `32778` in this example, but yours will be different.
- do the same for port `9473`
Additional ports will need to be opened depending on your application. Ensure you add your droplet to this new Firewall and wait a minute or so for the update to propagate.
4. Navigate to http://IP:port and ensure laconic-console is functioning as expected:
- ensure you are connected to `laconicd`; no error message should pop up;
- the wifi symbol in the bottom right should have a green check mark beside it
- navigate to the status tab; it should display similar/identical information
- navigate to the config tab, you'll see something like (with your IP):
```
wns
webui http://68.183.195.210:9473/console
server http://68.183.195.210:9473/api
```
## Publish and query a sample record to the registry
1. The following command will create a bond and publish a record:
```
laconic-so --stack fixturenet-laconic-loaded deploy exec cli ./scripts/create-demo-records.sh
```
You'll get an output like:
```
Balance is: 99998999999999998999600000
Created bond with id: dd88e8d6f9567b32b28e70552aea4419c5dd3307ebae85a284d1fe38904e301a
Published demo-record-1.yml with id: bafyreierh3xnfivexlscdwubvczmddsnf46uytyfvrbdhkjzztvsz6ruly
```
The sample record we deployed looks like:
```
TODO
```
2. Return to the laconic-console
- the published record should now be viewable
- explore it for more information
- click on the link that opens the GraphQL console
- the query is pre-loaded, click the button to run it
- inspect the output
3. Try out additional CLI commands
- these are documented [here](https://github.com/cerc-io/laconic-registry-cli#readme) and updates are forthcoming
- e.g.:
```
laconic-so --stack fixturenet-laconic-loaded deploy exec cli "laconic cns record list"
```