Merge branch 'main' into ci-test

Former-commit-id: 966539faa46bf33a3b6471a3289a4fb7c4685cc7
David Boreham 2023-05-18 13:55:29 -06:00
commit e9e1d8bcd4
86 changed files with 1998 additions and 86 deletions

.gitignore vendored
View File

@ -7,3 +7,4 @@ __pycache__
*~
package
app/data/build_tag.txt
build

View File

@ -35,58 +35,28 @@ curl -L -o ~/bin/laconic-so https://github.com/cerc-io/stack-orchestrator/releas
```
Give it execute permissions:
```bash
chmod +x ~/bin/laconic-so
```
Ensure `laconic-so` is on the [`PATH`](https://unix.stackexchange.com/a/26059)
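If needed, a minimal way to do that for the current shell session (a sketch, assuming a bash-like shell and that you placed the binary in `~/bin` as above):
```bash
# Put ~/bin ahead of the existing PATH; add this line to ~/.bashrc (or equivalent) to make it permanent
export PATH="$HOME/bin:$PATH"
```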
Verify operation (your version will probably be different, just check here that you see some version outut and not an error):
Verify operation (your version will probably be different, just check here that you see some version output and not an error):
```
laconic-so version
Version: v1.0.27-7831078
Version: 1.1.0-7a607c2-202304260513
```
## Usage
Three sub-commands: `setup-repositories`, `build-containers` and `deploy-system` are generally run in order. The following is a slim example for standing up the `erc20-watcher`. Go further with the [erc20 watcher demo](/app/data/stacks/erc20) and other pieces of the stack, within the [`stacks` directory](/app/data/stacks).
The various [stacks](/app/data/stacks) each contain instructions for running different stacks based on your use case. For example:
### Setup Repositories
Clone the set of git repositories necessary to build a system:
```bash
laconic-so --stack erc20 setup-repositories
```
This will default to cloning git repositories into: `~/cerc` or - if set - the environment variable `CERC_REPO_BASE_DIR`
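For example, to clone into a different base directory (a sketch; `~/projects/cerc` here is just an illustrative path):
```bash
# Clone the stack's repositories under ~/projects/cerc instead of the default ~/cerc
CERC_REPO_BASE_DIR=~/projects/cerc laconic-so --stack erc20 setup-repositories
```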
### Build Containers
Build the set of docker container images required to run a system. It takes around 10 minutes to build all the containers from scratch.
```bash
laconic-so --stack erc20 build-containers
```
### Deploy System
Uses `docker compose` to deploy a system (with most recently built container images).
```bash
laconic-so --stack erc20 deploy-system up
```
Check out the GraphQL playground here: [http://localhost:3002/graphql](http://localhost:3002/graphql)
See the [erc20 watcher demo](/app/data/stacks/erc20) to continue further.
### Cleanup
```bash
laconic-so --stack erc20 deploy-system down
```
- [self-hosted Gitea](/app/data/stacks/build-support)
- [an Optimism Fixturenet](/app/data/stacks/fixturenet-optimism)
- [laconicd with console and CLI](app/data/stacks/fixturenet-laconic-loaded)
- [kubo (IPFS)](app/data/stacks/kubo)
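The same three-step flow shown above (`setup-repositories`, `build-containers`, `deploy-system up`) applies to these stacks as well; as a sketch for the laconicd stack (assuming it accepts the same sub-commands as the `erc20` example):
```bash
laconic-so --stack fixturenet-laconic-loaded setup-repositories
laconic-so --stack fixturenet-laconic-loaded build-containers
laconic-so --stack fixturenet-laconic-loaded deploy-system up
```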
## Contributing

View File

@ -115,7 +115,7 @@ def command(ctx, include, exclude, force_rebuild, extra_build_args):
# TODO: make this less of a hack -- should be specified in some metadata somewhere
# Check if we have a repo for this container. If not, set the context dir to the container-build subdir
repo_full_path = os.path.join(dev_root_path, repo_dir)
repo_dir_or_build_dir = repo_dir if os.path.exists(repo_full_path) else build_dir
repo_dir_or_build_dir = repo_full_path if os.path.exists(repo_full_path) else build_dir
build_command = os.path.join(container_build_dir, "default-build.sh") + f" {container}:local {repo_dir_or_build_dir}"
if not dry_run:
if verbose:

View File

@ -64,8 +64,6 @@ services:
environment:
RUN_BOOTNODE: "true"
image: cerc/fixturenet-eth-lighthouse:local
volumes:
- fixturenet_eth_bootnode_lighthouse_data:/opt/testnet/build/cl
fixturenet-eth-lighthouse-1:
hostname: fixturenet-eth-lighthouse-1
@ -120,6 +118,5 @@ volumes:
fixturenet_eth_bootnode_geth_data:
fixturenet_eth_geth_1_data:
fixturenet_eth_geth_2_data:
fixturenet_eth_bootnode_lighthouse_data:
fixturenet_eth_lighthouse_1_data:
fixturenet_eth_lighthouse_2_data:

View File

@ -1,10 +1,11 @@
version: "3.2"
services:
laconicd:
restart: unless-stopped
image: cerc/laconicd:local
command: ["sh", "/docker-entrypoint-scripts.d/create-fixturenet.sh"]
volumes:
# The cosmos-sdk node's database directory:
- laconicd-data:/root/.laconicd/data
# TODO: look at folding these scripts into the container
- ../config/fixturenet-laconicd/create-fixturenet.sh:/docker-entrypoint-scripts.d/create-fixturenet.sh
- ../config/fixturenet-laconicd/export-mykey.sh:/docker-entrypoint-scripts.d/export-mykey.sh

View File

@ -0,0 +1,68 @@
version: "3.8"
services:
lotus-miner:
hostname: lotus-miner
env_file:
- ../config/fixturenet-lotus/lotus-env.env
image: cerc/lotus:local
volumes:
- ../config/fixturenet-lotus/setup-miner.sh:/docker-entrypoint-scripts.d/setup-miner.sh
- ../config/fixturenet-lotus/genesis/devgen.car:/devgen.car
- $HOME/stack-orchestrator/app/data/config/fixturenet-lotus/genesis/.genesis-sectors:/root/.genesis-sectors
- lotus-shared:/root/.lotus-shared
healthcheck:
# test: ["CMD-SHELL", "grep 'started ChainNotify channel' /var/log/lotus.log"]
# test: ["CMD-SHELL", "[ -f /root/.lotus-shared/miner.addr ]"]
test: ["CMD-SHELL", "[ -d /root/.lotus-miner-local-net ]"]
interval: 10s
timeout: 10s
retries: 10
start_period: 60s
entrypoint: ["sh", "/docker-entrypoint-scripts.d/setup-miner.sh"]
ports:
- "1234"
- "2345"
- "3456"
- "1777"
lotus-node-1:
hostname: lotus-node-1
env_file:
- ../config/fixturenet-lotus/lotus-env.env
image: cerc/lotus:local
volumes:
- ../config/fixturenet-lotus/setup-node.sh:/docker-entrypoint-scripts.d/setup-node.sh
- ../config/fixturenet-lotus/genesis/devgen.car:/devgen.car
- lotus-shared:/root/.lotus-shared
depends_on:
lotus-miner:
condition: service_healthy
entrypoint: ["sh", "/docker-entrypoint-scripts.d/setup-node.sh"]
ports:
- "1234"
- "2345"
- "3456"
- "1777"
lotus-node-2:
hostname: lotus-node-2
env_file:
- ../config/fixturenet-lotus/lotus-env.env
image: cerc/lotus:local
volumes:
- ../config/fixturenet-lotus/setup-node.sh:/docker-entrypoint-scripts.d/setup-node.sh
- ../config/fixturenet-lotus/genesis/devgen.car:/devgen.car
- lotus-shared:/root/.lotus-shared
depends_on:
lotus-miner:
condition: service_healthy
entrypoint: ["sh", "/docker-entrypoint-scripts.d/setup-node.sh"]
ports:
- "1234"
- "2345"
- "3456"
- "1777"
volumes:
lotus-shared:

View File

@ -0,0 +1,18 @@
version: "3.2"
services:
pocket:
restart: unless-stopped
image: cerc/pocket:local
# command: ["sh", "/docker-entrypoint-scripts.d/create-fixturenet.sh"]
entrypoint: ["sh", "/docker-entrypoint-scripts.d/create-fixturenet.sh"]
volumes:
# TODO: look at folding these scripts into the container
- ../config/fixturenet-pocket/create-fixturenet.sh:/docker-entrypoint-scripts.d/create-fixturenet.sh
- ../config/fixturenet-pocket/chains.json:/home/app/pocket-configs/chains.json
- ../config/fixturenet-pocket/genesis.json:/home/app/pocket-configs/genesis.json
ports:
- "8081:8081" # pocket relay rpc
networks:
net1:
name: fixturenet-eth_default
external: true

View File

@ -13,6 +13,7 @@ services:
CERC_DEPLOYED_CONTRACT: ${CERC_DEPLOYED_CONTRACT}
CERC_APP_WATCHER_URL: ${CERC_APP_WATCHER_URL}
CERC_RELAY_NODES: ${CERC_RELAY_NODES}
CERC_DENY_MULTIADDRS: ${CERC_DENY_MULTIADDRS}
CERC_BUILD_DIR: "@cerc-io/mobymask-ui/build"
working_dir: /scripts
command: ["sh", "mobymask-app-start.sh"]
@ -44,6 +45,7 @@ services:
CERC_DEPLOYED_CONTRACT: ${CERC_DEPLOYED_CONTRACT}
CERC_APP_WATCHER_URL: ${CERC_APP_WATCHER_URL}
CERC_RELAY_NODES: ${CERC_RELAY_NODES}
CERC_DENY_MULTIADDRS: ${CERC_DENY_MULTIADDRS}
CERC_BUILD_DIR: "@cerc-io/mobymask-ui-lxdao/build"
working_dir: /scripts
command: ["sh", "mobymask-app-start.sh"]

View File

@ -10,6 +10,7 @@ services:
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_RELAY_NODES: ${CERC_RELAY_NODES}
CERC_DENY_MULTIADDRS: ${CERC_DENY_MULTIADDRS}
command: ["sh", "test-app-start.sh"]
volumes:
- ../config/wait-for-it.sh:/scripts/wait-for-it.sh

View File

@ -1,7 +1,13 @@
version: "3.2"
services:
test:
image: cerc/test-container:local
restart: always
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
volumes:
- test-data:/var
ports:
- "80"
volumes:
test-data:

View File

@ -0,0 +1,304 @@
version: '3.2'
services:
# Starts the PostgreSQL database for watchers
watcher-db:
restart: unless-stopped
image: postgres:14-alpine
environment:
- POSTGRES_USER=vdbm
- POSTGRES_MULTIPLE_DATABASES=azimuth-watcher,azimuth-watcher-job-queue,censures-watcher,censures-watcher-job-queue,claims-watcher,claims-watcher-job-queue,conditional-star-release-watcher,conditional-star-release-watcher-job-queue,delegated-sending-watcher,delegated-sending-watcher-job-queue,ecliptic-watcher,ecliptic-watcher-job-queue,linear-star-release-watcher,linear-star-release-watcher-job-queue,polls-watcher,polls-watcher-job-queue
- POSTGRES_EXTENSION=azimuth-watcher-job-queue:pgcrypto,censures-watcher-job-queue:pgcrypto,claims-watcher-job-queue:pgcrypto,conditional-star-release-watcher-job-queue:pgcrypto,delegated-sending-watcher-job-queue:pgcrypto,ecliptic-watcher-job-queue:pgcrypto,linear-star-release-watcher-job-queue:pgcrypto,polls-watcher-job-queue:pgcrypto,
- POSTGRES_PASSWORD=password
volumes:
- ../config/postgresql/multiple-postgressql-databases.sh:/docker-entrypoint-initdb.d/multiple-postgressql-databases.sh
- watcher_db_data:/var/lib/postgresql/data
ports:
- "0.0.0.0:15432:5432"
healthcheck:
test: ["CMD", "nc", "-v", "localhost", "5432"]
interval: 20s
timeout: 5s
retries: 15
start_period: 10s
# Starts the azimuth-watcher server
azimuth-watcher-server:
image: cerc/watcher-azimuth:local
restart: unless-stopped
depends_on:
watcher-db:
condition: service_healthy
env_file:
- ../config/watcher-azimuth/watcher-params.env
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_IPLD_ETH_RPC: ${CERC_IPLD_ETH_RPC}
CERC_IPLD_ETH_GQL: ${CERC_IPLD_ETH_GQL}
working_dir: /app/packages/azimuth-watcher
command: "./start-server.sh"
volumes:
- ../config/watcher-azimuth/watcher-config-template.toml:/app/packages/azimuth-watcher/environments/watcher-config-template.toml
- ../config/watcher-azimuth/merge-toml.js:/app/packages/azimuth-watcher/merge-toml.js
- ../config/watcher-azimuth/start-server.sh:/app/packages/azimuth-watcher/start-server.sh
ports:
- "3001"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "3001"]
interval: 20s
timeout: 5s
retries: 15
start_period: 5s
extra_hosts:
- "host.docker.internal:host-gateway"
# Starts the censures-watcher server
censures-watcher-server:
image: cerc/watcher-azimuth:local
restart: unless-stopped
depends_on:
watcher-db:
condition: service_healthy
env_file:
- ../config/watcher-azimuth/watcher-params.env
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_IPLD_ETH_RPC: ${CERC_IPLD_ETH_RPC}
CERC_IPLD_ETH_GQL: ${CERC_IPLD_ETH_GQL}
working_dir: /app/packages/censures-watcher
command: "./start-server.sh"
volumes:
- ../config/watcher-azimuth/watcher-config-template.toml:/app/packages/censures-watcher/environments/watcher-config-template.toml
- ../config/watcher-azimuth/merge-toml.js:/app/packages/censures-watcher/merge-toml.js
- ../config/watcher-azimuth/start-server.sh:/app/packages/censures-watcher/start-server.sh
ports:
- "3002"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "3002"]
interval: 20s
timeout: 5s
retries: 15
start_period: 5s
extra_hosts:
- "host.docker.internal:host-gateway"
# Starts the claims-watcher server
claims-watcher-server:
image: cerc/watcher-azimuth:local
restart: unless-stopped
depends_on:
watcher-db:
condition: service_healthy
env_file:
- ../config/watcher-azimuth/watcher-params.env
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_IPLD_ETH_RPC: ${CERC_IPLD_ETH_RPC}
CERC_IPLD_ETH_GQL: ${CERC_IPLD_ETH_GQL}
working_dir: /app/packages/claims-watcher
command: "./start-server.sh"
volumes:
- ../config/watcher-azimuth/watcher-config-template.toml:/app/packages/claims-watcher/environments/watcher-config-template.toml
- ../config/watcher-azimuth/merge-toml.js:/app/packages/claims-watcher/merge-toml.js
- ../config/watcher-azimuth/start-server.sh:/app/packages/claims-watcher/start-server.sh
ports:
- "3003"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "3003"]
interval: 20s
timeout: 5s
retries: 15
start_period: 5s
extra_hosts:
- "host.docker.internal:host-gateway"
# Starts the conditional-star-release-watcher server
conditional-star-release-watcher-server:
image: cerc/watcher-azimuth:local
restart: unless-stopped
depends_on:
watcher-db:
condition: service_healthy
env_file:
- ../config/watcher-azimuth/watcher-params.env
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_IPLD_ETH_RPC: ${CERC_IPLD_ETH_RPC}
CERC_IPLD_ETH_GQL: ${CERC_IPLD_ETH_GQL}
working_dir: /app/packages/conditional-star-release-watcher
command: "./start-server.sh"
volumes:
- ../config/watcher-azimuth/watcher-config-template.toml:/app/packages/conditional-star-release-watcher/environments/watcher-config-template.toml
- ../config/watcher-azimuth/merge-toml.js:/app/packages/conditional-star-release-watcher/merge-toml.js
- ../config/watcher-azimuth/start-server.sh:/app/packages/conditional-star-release-watcher/start-server.sh
ports:
- "3004"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "3004"]
interval: 20s
timeout: 5s
retries: 15
start_period: 5s
extra_hosts:
- "host.docker.internal:host-gateway"
# Starts the delegated-sending-watcher server
delegated-sending-watcher-server:
image: cerc/watcher-azimuth:local
restart: unless-stopped
depends_on:
watcher-db:
condition: service_healthy
env_file:
- ../config/watcher-azimuth/watcher-params.env
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_IPLD_ETH_RPC: ${CERC_IPLD_ETH_RPC}
CERC_IPLD_ETH_GQL: ${CERC_IPLD_ETH_GQL}
working_dir: /app/packages/delegated-sending-watcher
command: "./start-server.sh"
volumes:
- ../config/watcher-azimuth/watcher-config-template.toml:/app/packages/delegated-sending-watcher/environments/watcher-config-template.toml
- ../config/watcher-azimuth/merge-toml.js:/app/packages/delegated-sending-watcher/merge-toml.js
- ../config/watcher-azimuth/start-server.sh:/app/packages/delegated-sending-watcher/start-server.sh
ports:
- "3005"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "3005"]
interval: 20s
timeout: 5s
retries: 15
start_period: 5s
extra_hosts:
- "host.docker.internal:host-gateway"
# Starts the ecliptic-watcher server
ecliptic-watcher-server:
image: cerc/watcher-azimuth:local
restart: unless-stopped
depends_on:
watcher-db:
condition: service_healthy
env_file:
- ../config/watcher-azimuth/watcher-params.env
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_IPLD_ETH_RPC: ${CERC_IPLD_ETH_RPC}
CERC_IPLD_ETH_GQL: ${CERC_IPLD_ETH_GQL}
working_dir: /app/packages/ecliptic-watcher
command: "./start-server.sh"
volumes:
- ../config/watcher-azimuth/watcher-config-template.toml:/app/packages/ecliptic-watcher/environments/watcher-config-template.toml
- ../config/watcher-azimuth/merge-toml.js:/app/packages/ecliptic-watcher/merge-toml.js
- ../config/watcher-azimuth/start-server.sh:/app/packages/ecliptic-watcher/start-server.sh
ports:
- "3006"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "3006"]
interval: 20s
timeout: 5s
retries: 15
start_period: 5s
extra_hosts:
- "host.docker.internal:host-gateway"
# Starts the linear-star-release-watcher server
linear-star-release-watcher-server:
image: cerc/watcher-azimuth:local
restart: unless-stopped
depends_on:
watcher-db:
condition: service_healthy
env_file:
- ../config/watcher-azimuth/watcher-params.env
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_IPLD_ETH_RPC: ${CERC_IPLD_ETH_RPC}
CERC_IPLD_ETH_GQL: ${CERC_IPLD_ETH_GQL}
working_dir: /app/packages/linear-star-release-watcher
command: "./start-server.sh"
volumes:
- ../config/watcher-azimuth/watcher-config-template.toml:/app/packages/linear-star-release-watcher/environments/watcher-config-template.toml
- ../config/watcher-azimuth/merge-toml.js:/app/packages/linear-star-release-watcher/merge-toml.js
- ../config/watcher-azimuth/start-server.sh:/app/packages/linear-star-release-watcher/start-server.sh
ports:
- "3007"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "3007"]
interval: 20s
timeout: 5s
retries: 15
start_period: 5s
extra_hosts:
- "host.docker.internal:host-gateway"
# Starts the polls-watcher server
polls-watcher-server:
image: cerc/watcher-azimuth:local
restart: unless-stopped
depends_on:
watcher-db:
condition: service_healthy
env_file:
- ../config/watcher-azimuth/watcher-params.env
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_IPLD_ETH_RPC: ${CERC_IPLD_ETH_RPC}
CERC_IPLD_ETH_GQL: ${CERC_IPLD_ETH_GQL}
working_dir: /app/packages/polls-watcher
command: "./start-server.sh"
volumes:
- ../config/watcher-azimuth/watcher-config-template.toml:/app/packages/polls-watcher/environments/watcher-config-template.toml
- ../config/watcher-azimuth/merge-toml.js:/app/packages/polls-watcher/merge-toml.js
- ../config/watcher-azimuth/start-server.sh:/app/packages/polls-watcher/start-server.sh
ports:
- "3008"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "3008"]
interval: 20s
timeout: 5s
retries: 15
start_period: 5s
extra_hosts:
- "host.docker.internal:host-gateway"
# Starts the gateway-server for proxying queries
gateway-server:
image: cerc/watcher-azimuth:local
restart: unless-stopped
depends_on:
azimuth-watcher-server:
condition: service_healthy
censures-watcher-server:
condition: service_healthy
claims-watcher-server:
condition: service_healthy
conditional-star-release-watcher-server:
condition: service_healthy
delegated-sending-watcher-server:
condition: service_healthy
ecliptic-watcher-server:
condition: service_healthy
linear-star-release-watcher-server:
condition: service_healthy
polls-watcher-server:
condition: service_healthy
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
working_dir: /app/packages/gateway-server
command: "yarn server"
volumes:
- ../config/watcher-azimuth/gateway-watchers.json:/app/packages/gateway-server/dist/watchers.json
ports:
- "0.0.0.0:4000:4000"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost", "4000"]
interval: 20s
timeout: 5s
retries: 15
start_period: 5s
extra_hosts:
- "host.docker.internal:host-gateway"
volumes:
watcher_db_data:

View File

@ -0,0 +1,91 @@
version: '3.2'
services:
# Starts the PostgreSQL database for watcher
gelato-watcher-db:
restart: unless-stopped
image: postgres:14-alpine
environment:
- POSTGRES_USER=vdbm
- POSTGRES_MULTIPLE_DATABASES=gelato-watcher,gelato-watcher-job-queue
- POSTGRES_EXTENSION=gelato-watcher-job-queue:pgcrypto
- POSTGRES_PASSWORD=password
volumes:
- ../config/postgresql/multiple-postgressql-databases.sh:/docker-entrypoint-initdb.d/multiple-postgressql-databases.sh
- gelato_watcher_db_data:/var/lib/postgresql/data
ports:
- "0.0.0.0:15432:5432"
healthcheck:
test: ["CMD", "nc", "-v", "localhost", "5432"]
interval: 10s
timeout: 5s
retries: 15
start_period: 10s
# Starts the gelato-watcher job runner
gelato-watcher-job-runner:
image: cerc/watcher-gelato:local
restart: unless-stopped
depends_on:
gelato-watcher-db:
condition: service_healthy
env_file:
- ../config/watcher-gelato/watcher-params.env
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_IPLD_ETH_RPC: ${CERC_IPLD_ETH_RPC}
CERC_IPLD_ETH_GQL: ${CERC_IPLD_ETH_GQL}
command: ["./start-job-runner.sh"]
volumes:
- ../config/watcher-gelato/watcher-config-template.toml:/app/environments/watcher-config-template.toml
- ../config/watcher-gelato/start-job-runner.sh:/app/start-job-runner.sh
ports:
- "0.0.0.0:9000:9000"
healthcheck:
test: ["CMD", "nc", "-v", "localhost", "9000"]
interval: 10s
timeout: 5s
retries: 15
start_period: 10s
extra_hosts:
- "host.docker.internal:host-gateway"
# Starts the gelato-watcher server
gelato-watcher-server:
image: cerc/watcher-gelato:local
restart: unless-stopped
depends_on:
gelato-watcher-db:
condition: service_healthy
gelato-watcher-job-runner:
condition: service_healthy
env_file:
- ../config/watcher-gelato/watcher-params.env
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_IPLD_ETH_RPC: ${CERC_IPLD_ETH_RPC}
CERC_IPLD_ETH_GQL: ${CERC_IPLD_ETH_GQL}
CERC_USE_STATE_SNAPSHOT: ${CERC_USE_STATE_SNAPSHOT}
CERC_SNAPSHOT_GQL_ENDPOINT: ${CERC_SNAPSHOT_GQL_ENDPOINT}
CERC_SNAPSHOT_BLOCKHASH: ${CERC_SNAPSHOT_BLOCKHASH}
command: ["./start-server.sh"]
volumes:
- ../config/watcher-gelato/watcher-config-template.toml:/app/environments/watcher-config-template.toml
- ../config/watcher-gelato/start-server.sh:/app/start-server.sh
- ../config/watcher-gelato/create-and-import-checkpoint.sh:/app/create-and-import-checkpoint.sh
- gelato_watcher_state_gql:/app/state_checkpoint
ports:
- "0.0.0.0:3008:3008"
- "0.0.0.0:9001:9001"
healthcheck:
test: ["CMD", "nc", "-v", "localhost", "3008"]
interval: 20s
timeout: 5s
retries: 15
start_period: 5s
extra_hosts:
- "host.docker.internal:host-gateway"
volumes:
gelato_watcher_db_data:
gelato_watcher_state_gql:

View File

@ -83,6 +83,7 @@ services:
CERC_L1_ACCOUNTS_CSV_URL: ${CERC_L1_ACCOUNTS_CSV_URL}
CERC_PRIVATE_KEY_PEER: ${CERC_PRIVATE_KEY_PEER}
CERC_RELAY_PEERS: ${CERC_RELAY_PEERS}
CERC_DENY_MULTIADDRS: ${CERC_DENY_MULTIADDRS}
CERC_RELAY_ANNOUNCE_DOMAIN: ${CERC_RELAY_ANNOUNCE_DOMAIN}
CERC_ENABLE_PEER_L2_TXS: ${CERC_ENABLE_PEER_L2_TXS}
CERC_DEPLOYED_CONTRACT: ${CERC_DEPLOYED_CONTRACT}

View File

@ -0,0 +1 @@
Binary file not shown.

View File

@ -0,0 +1 @@
Binary file not shown.

View File

@ -0,0 +1,71 @@
{
"t01000": {
"ID": "t01000",
"Owner": "t3spusn5ia57qezc3fwpe3n2lhb4y4xt67xoflqbqy2muliparw2uktevletuv7gl4qakjpafgcl7jk2s2er3q",
"Worker": "t3spusn5ia57qezc3fwpe3n2lhb4y4xt67xoflqbqy2muliparw2uktevletuv7gl4qakjpafgcl7jk2s2er3q",
"PeerId": "12D3KooWG5q6pWJVdPBhDBv9AjWVbUh4xxTAZ7xvgZSjczWuD2Z9",
"MarketBalance": "0",
"PowerBalance": "0",
"SectorSize": 2048,
"Sectors": [
{
"CommR": {
"/": "bagboea4b5abcboxypcewlkmrat2myu4vthk3ii2pcomak7nhqmdbb6sxlolp2wdf"
},
"CommD": {
"/": "baga6ea4seaqn3jfixthmdgksv4vhfeuyvr6upw6tvaqbmzmsyxnzosm4pwgnmlq"
},
"SectorID": 0,
"Deal": {
"PieceCID": {
"/": "baga6ea4seaqn3jfixthmdgksv4vhfeuyvr6upw6tvaqbmzmsyxnzosm4pwgnmlq"
},
"PieceSize": 2048,
"VerifiedDeal": false,
"Client": "t3spusn5ia57qezc3fwpe3n2lhb4y4xt67xoflqbqy2muliparw2uktevletuv7gl4qakjpafgcl7jk2s2er3q",
"Provider": "t01000",
"Label": "0",
"StartEpoch": 0,
"EndEpoch": 9001,
"StoragePricePerEpoch": "0",
"ProviderCollateral": "0",
"ClientCollateral": "0"
},
"DealClientKey": {
"Type": "bls",
"PrivateKey": "tFvSRiSg2G3Ssgg0PSYy23XyjaIMXpsmdyG2B7UFLT4="
},
"ProofType": 5
},
{
"CommR": {
"/": "bagboea4b5abcb6krzypqcczhcnbeyjcqkeo6omfergm336o3kitugh3jgjog2yqq"
},
"CommD": {
"/": "baga6ea4seaqhondpb2373hjasjplxvbjzi5n5mm4fbbhjxp5ptnbq4cibapkeii"
},
"SectorID": 1,
"Deal": {
"PieceCID": {
"/": "baga6ea4seaqhondpb2373hjasjplxvbjzi5n5mm4fbbhjxp5ptnbq4cibapkeii"
},
"PieceSize": 2048,
"VerifiedDeal": false,
"Client": "t3spusn5ia57qezc3fwpe3n2lhb4y4xt67xoflqbqy2muliparw2uktevletuv7gl4qakjpafgcl7jk2s2er3q",
"Provider": "t01000",
"Label": "1",
"StartEpoch": 0,
"EndEpoch": 9001,
"StoragePricePerEpoch": "0",
"ProviderCollateral": "0",
"ClientCollateral": "0"
},
"DealClientKey": {
"Type": "bls",
"PrivateKey": "tFvSRiSg2G3Ssgg0PSYy23XyjaIMXpsmdyG2B7UFLT4="
},
"ProofType": 5
}
]
}
}

View File

@ -0,0 +1 @@
7b2254797065223a22626c73222c22507269766174654b6579223a227446765352695367324733537367673050535979323358796a61494d5870736d64794732423755464c54343d227d

View File

@ -0,0 +1,11 @@
{
"ID": "f355523e-69d0-4984-bd0e-9588487c6231",
"Weight": 0,
"CanSeal": false,
"CanStore": false,
"MaxStorage": 0,
"Groups": null,
"AllowTo": null,
"AllowTypes": null,
"DenyTypes": null
}

Binary file not shown.

View File

@ -0,0 +1,108 @@
{
"NetworkVersion": 18,
"Accounts": [
{
"Type": "account",
"Balance": "50000000000000000000000000",
"Meta": {
"Owner": "t3spusn5ia57qezc3fwpe3n2lhb4y4xt67xoflqbqy2muliparw2uktevletuv7gl4qakjpafgcl7jk2s2er3q"
}
}
],
"Miners": [
{
"ID": "t01000",
"Owner": "t3spusn5ia57qezc3fwpe3n2lhb4y4xt67xoflqbqy2muliparw2uktevletuv7gl4qakjpafgcl7jk2s2er3q",
"Worker": "t3spusn5ia57qezc3fwpe3n2lhb4y4xt67xoflqbqy2muliparw2uktevletuv7gl4qakjpafgcl7jk2s2er3q",
"PeerId": "12D3KooWG5q6pWJVdPBhDBv9AjWVbUh4xxTAZ7xvgZSjczWuD2Z9",
"MarketBalance": "0",
"PowerBalance": "0",
"SectorSize": 2048,
"Sectors": [
{
"CommR": {
"/": "bagboea4b5abcboxypcewlkmrat2myu4vthk3ii2pcomak7nhqmdbb6sxlolp2wdf"
},
"CommD": {
"/": "baga6ea4seaqn3jfixthmdgksv4vhfeuyvr6upw6tvaqbmzmsyxnzosm4pwgnmlq"
},
"SectorID": 0,
"Deal": {
"PieceCID": {
"/": "baga6ea4seaqn3jfixthmdgksv4vhfeuyvr6upw6tvaqbmzmsyxnzosm4pwgnmlq"
},
"PieceSize": 2048,
"VerifiedDeal": false,
"Client": "t3spusn5ia57qezc3fwpe3n2lhb4y4xt67xoflqbqy2muliparw2uktevletuv7gl4qakjpafgcl7jk2s2er3q",
"Provider": "t01000",
"Label": "0",
"StartEpoch": 0,
"EndEpoch": 9001,
"StoragePricePerEpoch": "0",
"ProviderCollateral": "0",
"ClientCollateral": "0"
},
"DealClientKey": {
"Type": "bls",
"PrivateKey": "tFvSRiSg2G3Ssgg0PSYy23XyjaIMXpsmdyG2B7UFLT4="
},
"ProofType": 5
},
{
"CommR": {
"/": "bagboea4b5abcb6krzypqcczhcnbeyjcqkeo6omfergm336o3kitugh3jgjog2yqq"
},
"CommD": {
"/": "baga6ea4seaqhondpb2373hjasjplxvbjzi5n5mm4fbbhjxp5ptnbq4cibapkeii"
},
"SectorID": 1,
"Deal": {
"PieceCID": {
"/": "baga6ea4seaqhondpb2373hjasjplxvbjzi5n5mm4fbbhjxp5ptnbq4cibapkeii"
},
"PieceSize": 2048,
"VerifiedDeal": false,
"Client": "t3spusn5ia57qezc3fwpe3n2lhb4y4xt67xoflqbqy2muliparw2uktevletuv7gl4qakjpafgcl7jk2s2er3q",
"Provider": "t01000",
"Label": "1",
"StartEpoch": 0,
"EndEpoch": 9001,
"StoragePricePerEpoch": "0",
"ProviderCollateral": "0",
"ClientCollateral": "0"
},
"DealClientKey": {
"Type": "bls",
"PrivateKey": "tFvSRiSg2G3Ssgg0PSYy23XyjaIMXpsmdyG2B7UFLT4="
},
"ProofType": 5
}
]
}
],
"NetworkName": "localnet-6d52dae5-ff29-4bac-a45d-f84e6c07564c",
"VerifregRootKey": {
"Type": "multisig",
"Balance": "0",
"Meta": {
"Signers": [
"t1ceb34gnsc6qk5dt6n7xg6ycwzasjhbxm3iylkiy"
],
"Threshold": 1,
"VestingDuration": 0,
"VestingStart": 0
}
},
"RemainderAccount": {
"Type": "multisig",
"Balance": "0",
"Meta": {
"Signers": [
"t1ceb34gnsc6qk5dt6n7xg6ycwzasjhbxm3iylkiy"
],
"Threshold": 1,
"VestingDuration": 0,
"VestingStart": 0
}
}
}

View File

@ -0,0 +1,5 @@
LOTUS_PATH=~/.lotus-local-net
LOTUS_MINER_PATH=~/.lotus-miner-local-net
LOTUS_SKIP_GENESIS_CHECK=_yes_
CGO_CFLAGS_ALLOW="-D__BLST_PORTABLE__"
CGO_CFLAGS="-D__BLST_PORTABLE__"

View File

@ -0,0 +1,39 @@
#!/bin/bash
lotus --version
# # remove old bootnode peer info if present
# [ -f /root/.lotus-shared/miner.addr ] && rm /root/.lotus-shared/miner.addr
##TODO: generate genesis files inside container instead of bundling in config dir
##something like commands below should work, other scripts/compose will have to be updated to corresponding directories
# lotus fetch-params 2048
# lotus-seed pre-seal --sector-size 2KiB --num-sectors 2
# lotus-seed genesis new localnet.json
# lotus-seed genesis add-miner localnet.json ~/.genesis-sectors/pre-seal-t01000.json
# start daemon
nohup lotus daemon --genesis=/devgen.car --profile=bootstrapper --bootstrap=false > /var/log/lotus.log 2>&1 &
# Loop until the daemon is started
echo "Waiting for daemon to start..."
while ! grep -q "started ChainNotify channel" /var/log/lotus.log ; do
sleep 5
done
echo "Daemon started."
# publish bootnode peer info to shared volume
lotus net listen | awk 'NR==1{print}' > /root/.lotus-shared/miner.addr
# if miner not already initialized
if [ ! -d /root/.lotus-miner-local-net ]; then
# initialize miner
lotus wallet import --as-default ~/.genesis-sectors/pre-seal-t01000.key
lotus-miner init --genesis-miner --actor=t01000 --sector-size=2KiB --pre-sealed-sectors=~/.genesis-sectors --pre-sealed-metadata=~/.genesis-sectors/pre-seal-t01000.json --nosync
fi
# start miner
nohup lotus-miner run --nosync &
tail -f /dev/null

View File

@ -0,0 +1,24 @@
#!/bin/bash
lotus --version
##TODO: paths can use values from lotus-env.env file
# if not already initialized
if [ ! -f /root/.lotus-local-net/config.toml ]; then
# init node config
mkdir $HOME/.lotus-local-net
lotus config default > $HOME/.lotus-local-net/config.toml
# add bootstrap peer info if available
if [ -f /root/.lotus-shared/miner.addr ]; then
MINER_ADDR=\"$(cat /root/.lotus-shared/miner.addr)\"
# add bootstrap peer id to config file
sed -i "/^\[Libp2p\]/a \ \ BootstrapPeers = [$MINER_ADDR]" $HOME/.lotus-local-net/config.toml
else
echo "Bootstrap peer info not found, unable to configure. Manual peering will be required."
fi
fi
# start node
lotus daemon --genesis=/devgen.car

View File

@ -0,0 +1,18 @@
[
{
"id": "0001",
"url": "http://127.0.0.1:8081/",
"basic_auth": {
"username": "",
"password": ""
}
},
{
"id": "0021",
"url": "http://fixturenet-eth-geth-1:8545/",
"basic_auth": {
"username": "",
"password": ""
}
}
]

View File

@ -0,0 +1,65 @@
#!/bin/bash
# TODO: we should have a mechanism to bundle it inside the container rather than link from here
# at deploy time.
CHAINID="pocketlocal-1"
MONIKER="localtestnet"
SERVICE_URL="http://127.0.0.1:8081"
PASSWORD="mypassword" # wallet password, required by cli
# check if jq is installed; install if necessary
# command -v jq > /dev/null 2>&1 || { echo >&2 "jq not installed. More info: https://stedolan.github.io/jq/download/"; exit 1; }
if ! command -v jq > /dev/null 2>&1; then
echo "jq not installed, downloading..."
mkdir -p /home/app/bin
wget -O /home/app/bin/jq https://github.com/stedolan/jq/releases/download/jq-1.6/jq-linux64
chmod +x /home/app/bin/jq
export PATH=$PATH:/home/app/bin
fi
# remove existing daemon and client
rm -rf ~/.pocket*
# create a wallet with password "mypassword" and save the address for later
address=$(pocket accounts create --pwd $PASSWORD | awk '/Address:/ {print $2}')
# set this address as the validator address for the node
pocket accounts set-validator $address --pwd $PASSWORD
# save the public key for later
pubkey=$(pocket accounts show $address | awk '/Public Key:/ {print $3}')
# set node's moniker
echo $(pocket util print-configs) | jq '.tendermint_config.Moniker = "'"$MONIKER"'"' | jq . > $HOME/.pocket/config/config.json
# pocket mainnet has block time of 15 minutes, set closer to 1 minute instead
cat $HOME/.pocket/config/config.json | jq '.tendermint_config.Consensus.TimeoutPropose = 8000000000' | jq . > $HOME/.pocket/config/tmp_config.json && mv $HOME/.pocket/config/tmp_config.json $HOME/.pocket/config/config.json
cat $HOME/.pocket/config/config.json | jq '.tendermint_config.Consensus.TimeoutProposeDelta = 600000000' | jq . > $HOME/.pocket/config/tmp_config.json && mv $HOME/.pocket/config/tmp_config.json $HOME/.pocket/config/config.json
cat $HOME/.pocket/config/config.json | jq '.tendermint_config.Consensus.TimeoutPrevote = 4000000000' | jq . > $HOME/.pocket/config/tmp_config.json && mv $HOME/.pocket/config/tmp_config.json $HOME/.pocket/config/config.json
cat $HOME/.pocket/config/config.json | jq '.tendermint_config.Consensus.TimeoutPrevoteDelta = 600000000' | jq . > $HOME/.pocket/config/tmp_config.json && mv $HOME/.pocket/config/tmp_config.json $HOME/.pocket/config/config.json
cat $HOME/.pocket/config/config.json | jq '.tendermint_config.Consensus.TimeoutPrecommit = 4000000000' | jq . > $HOME/.pocket/config/tmp_config.json && mv $HOME/.pocket/config/tmp_config.json $HOME/.pocket/config/config.json
cat $HOME/.pocket/config/config.json | jq '.tendermint_config.Consensus.TimeoutPrecommitDelta = 6000000006' | jq . > $HOME/.pocket/config/tmp_config.json && mv $HOME/.pocket/config/tmp_config.json $HOME/.pocket/config/config.json
cat $HOME/.pocket/config/config.json | jq '.tendermint_config.Consensus.TimeoutCommit = 52000000000' | jq . > $HOME/.pocket/config/tmp_config.json && mv $HOME/.pocket/config/tmp_config.json $HOME/.pocket/config/config.json
cat $HOME/.pocket/config/config.json | jq '.tendermint_config.Consensus.CreateEmptyBlocksInterval = 60000000000' | jq . > $HOME/.pocket/config/tmp_config.json && mv $HOME/.pocket/config/tmp_config.json $HOME/.pocket/config/config.json
cat $HOME/.pocket/config/config.json | jq '.tendermint_config.Consensus.PeerGossipSleepDuration = 2000000000' | jq . > $HOME/.pocket/config/tmp_config.json && mv $HOME/.pocket/config/tmp_config.json $HOME/.pocket/config/config.json
cat $HOME/.pocket/config/config.json | jq '.tendermint_config.Consensus.PeerQueryMaj23SleepDuration = 1200000000' | jq . > $HOME/.pocket/config/tmp_config.json && mv $HOME/.pocket/config/tmp_config.json $HOME/.pocket/config/config.json
# include genesis.json and chains.json
cp $HOME/pocket-configs/genesis.json $HOME/.pocket/config/genesis.json
cp $HOME/pocket-configs/chains.json $HOME/.pocket/config/chains.json
# set chain-id and add node to genesis.json as a validator
cat $HOME/.pocket/config/genesis.json | jq '.chain_id="'"$CHAINID"'"' > $HOME/.pocket/config/tmp_genesis.json && mv $HOME/.pocket/config/tmp_genesis.json $HOME/.pocket/config/genesis.json
cat $HOME/.pocket/config/genesis.json | jq '.app_state.auth.accounts[0].value.address="'"$address"'"' > $HOME/.pocket/config/tmp_genesis.json && mv $HOME/.pocket/config/tmp_genesis.json $HOME/.pocket/config/genesis.json
cat $HOME/.pocket/config/genesis.json | jq '.app_state.auth.accounts[0].value.public_key.value="'"$pubkey"'"' > $HOME/.pocket/config/tmp_genesis.json && mv $HOME/.pocket/config/tmp_genesis.json $HOME/.pocket/config/genesis.json
cat $HOME/.pocket/config/genesis.json | jq '.app_state.pos.validators[0].address="'"$address"'"' > $HOME/.pocket/config/tmp_genesis.json && mv $HOME/.pocket/config/tmp_genesis.json $HOME/.pocket/config/genesis.json
cat $HOME/.pocket/config/genesis.json | jq '.app_state.pos.validators[0].public_key="'"$pubkey"'"' > $HOME/.pocket/config/tmp_genesis.json && mv $HOME/.pocket/config/tmp_genesis.json $HOME/.pocket/config/genesis.json
cat $HOME/.pocket/config/genesis.json | jq '.app_state.pos.validators[0].service_url="'"$SERVICE_URL"'"' > $HOME/.pocket/config/tmp_genesis.json && mv $HOME/.pocket/config/tmp_genesis.json $HOME/.pocket/config/genesis.json
# if [[ $1 == "pending" ]]; then
# echo "pending mode is on, please wait for the first block committed."
# fi
# Start the node
pocket start --simulateRelay

View File

@ -0,0 +1,272 @@
{
"genesis_time": "2020-07-28T15:00:00.000000Z",
"chain_id": "testnet",
"consensus_params": {
"block": {
"max_bytes": "4000000",
"max_gas": "-1",
"time_iota_ms": "1"
},
"evidence": {
"max_age": "120000000000"
},
"validator": {
"pub_key_types": [
"ed25519"
]
}
},
"app_hash": "",
"app_state": {
"application": {
"params": {
"unstaking_time": "1814000000000000",
"max_applications": "9223372036854775807",
"app_stake_minimum": "1000000",
"base_relays_per_pokt": "167",
"stability_adjustment": "0",
"participation_rate_on": false,
"maximum_chains": "15"
},
"applications": [],
"exported": false
},
"auth": {
"params": {
"max_memo_characters": "75",
"tx_sig_limit": "8",
"fee_multipliers": {
"fee_multiplier": [],
"default": "1"
}
},
"accounts": [
{
"type": "posmint/Account",
"value": {
"address": "!validator-address",
"coins": [
{
"amount": "0",
"denom": "upokt"
}
],
"public_key": {
"type": "crypto/ed25519_public_key",
"value": "!validator-pubkey"
}
}
}
],
"supply": []
},
"gov": {
"params": {
"acl": [
{
"acl_key": "application/ApplicationStakeMinimum",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "application/AppUnstakingTime",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "application/BaseRelaysPerPOKT",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "application/MaxApplications",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "application/MaximumChains",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "application/ParticipationRateOn",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "application/StabilityAdjustment",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "auth/MaxMemoCharacters",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "auth/TxSigLimit",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "gov/acl",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "gov/daoOwner",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "gov/upgrade",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pocketcore/ClaimExpiration",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "auth/FeeMultipliers",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pocketcore/ReplayAttackBurnMultiplier",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pos/ProposerPercentage",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pocketcore/ClaimSubmissionWindow",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pocketcore/MinimumNumberOfProofs",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pocketcore/SessionNodeCount",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pocketcore/SupportedBlockchains",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pos/BlocksPerSession",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pos/DAOAllocation",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pos/DowntimeJailDuration",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pos/MaxEvidenceAge",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pos/MaximumChains",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pos/MaxJailedBlocks",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pos/MaxValidators",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pos/MinSignedPerWindow",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pos/RelaysToTokensMultiplier",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pos/SignedBlocksWindow",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pos/SlashFractionDoubleSign",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pos/SlashFractionDowntime",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pos/StakeDenom",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pos/StakeMinimum",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
},
{
"acl_key": "pos/UnstakingTime",
"address": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4"
}
],
"dao_owner": "a83172b67b5ffbfcb8acb95acc0fd0466a9d4bc4",
"upgrade": {
"Height": "0",
"Version": "0"
}
},
"DAO_Tokens": "50000000000000"
},
"pos": {
"params": {
"relays_to_tokens_multiplier": "10000",
"unstaking_time": "1814000000000000",
"max_validators": "5000",
"stake_denom": "upokt",
"stake_minimum": "15000000000",
"session_block_frequency": "4",
"dao_allocation": "10",
"proposer_allocation": "1",
"maximum_chains": "15",
"max_jailed_blocks": "37960",
"max_evidence_age": "120000000000",
"signed_blocks_window": "10",
"min_signed_per_window": "0.60",
"downtime_jail_duration": "3600000000000",
"slash_fraction_double_sign": "0.05",
"slash_fraction_downtime": "0.000001"
},
"prevState_total_power": "0",
"prevState_validator_powers": null,
"validators": [
{
"address": "!validator-address",
"public_key": "!validator-pubkey",
"jailed": false,
"status": 2,
"tokens": "5000000000000",
"service_url": "!validator-url",
"chains": [
"0001",
"0021"
],
"unstaking_time": "2021-05-15T00:00:00Z"
}
],
"exported": false,
"signing_infos": {},
"missed_blocks": {},
"previous_proposer": ""
},
"pocketcore": {
"params": {
"session_node_count": "5",
"proof_waiting_period": "3",
"supported_blockchains": [
"0001",
"0021"
],
"claim_expiration": "120",
"replay_attack_burn_multiplier": "3",
"minimum_number_of_proofs": "10"
},
"receipts": null,
"claims": null
}
}
}

View File

@ -0,0 +1,34 @@
[
{
"endpoint": "http://azimuth-watcher-server:3001/graphql",
"prefix": "azimuth"
},
{
"endpoint": "http://censures-watcher-server:3002/graphql",
"prefix": "censures"
},
{
"endpoint": "http://claims-watcher-server:3003/graphql",
"prefix": "claims"
},
{
"endpoint": "http://conditional-star-release-watcher-server:3004/graphql",
"prefix": "conditionalStarRelease"
},
{
"endpoint": "http://delegated-sending-watcher-server:3005/graphql",
"prefix": "delegatedSending"
},
{
"endpoint": "http://ecliptic-watcher-server:3006/graphql",
"prefix": "ecliptic"
},
{
"endpoint": "http://linear-star-release-watcher-server:3007/graphql",
"prefix": "linearStarRelease"
},
{
"endpoint": "http://polls-watcher-server:3008/graphql",
"prefix": "polls"
}
]

View File

@ -0,0 +1,31 @@
const fs = require('fs');
const tomlJS = require('toml-js');
const toml = require('toml');
const { merge } = require('lodash')
const main = () => {
const overrideConfigString = fs.readFileSync('environments/watcher-config.toml', 'utf-8');
const configString = fs.readFileSync('environments/local.toml', 'utf-8');
const overrideConfig = toml.parse(overrideConfigString)
const config = toml.parse(configString)
// Merge configs
const updatedConfig = merge(config, overrideConfig);
// Form dbConnectionString for jobQueue DB
const parts = config.jobQueue.dbConnectionString.split("://");
const credsAndDB = parts[1].split("@");
const creds = credsAndDB[0].split(":");
creds[0] = overrideConfig.database.username;
creds[1] = overrideConfig.database.password;
credsAndDB[0] = creds.join(":");
const dbName = credsAndDB[1].split("/")[1]
credsAndDB[1] = [overrideConfig.database.host, dbName].join("/");
parts[1] = credsAndDB.join("@");
updatedConfig.jobQueue.dbConnectionString = parts.join("://");
fs.writeFileSync('environments/local.toml', tomlJS.dump(updatedConfig), 'utf-8');
}
main();

View File

@ -0,0 +1,27 @@
#!/bin/sh
set -e
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
CERC_IPLD_ETH_RPC="${CERC_IPLD_ETH_RPC:-${DEFAULT_CERC_IPLD_ETH_RPC}}"
CERC_IPLD_ETH_GQL="${CERC_IPLD_ETH_GQL:-${DEFAULT_CERC_IPLD_ETH_GQL}}"
echo "Using IPLD ETH RPC endpoint ${CERC_IPLD_ETH_RPC}"
echo "Using IPLD GQL endpoint ${CERC_IPLD_ETH_GQL}"
# Replace env variables in template TOML file
# Read in the config template TOML file and modify it
WATCHER_CONFIG_TEMPLATE=$(cat environments/watcher-config-template.toml)
WATCHER_CONFIG=$(echo "$WATCHER_CONFIG_TEMPLATE" | \
sed -E "s|REPLACE_WITH_CERC_IPLD_ETH_RPC|${CERC_IPLD_ETH_RPC}|g; \
s|REPLACE_WITH_CERC_IPLD_ETH_GQL|${CERC_IPLD_ETH_GQL}| ")
# Write the modified content to a new file
echo "$WATCHER_CONFIG" > environments/watcher-config.toml
# Merge SO watcher config with existing config file
node merge-toml.js
echo 'yarn server'
yarn server

View File

@ -0,0 +1,14 @@
[server]
host = "0.0.0.0"
maxSimultaneousRequests = -1
[database]
host = "watcher-db"
port = 5432
username = "vdbm"
password = "password"
[upstream]
[upstream.ethServer]
gqlApiEndpoint = "REPLACE_WITH_CERC_IPLD_ETH_GQL"
rpcProviderEndpoint = "REPLACE_WITH_CERC_IPLD_ETH_RPC"

View File

@ -0,0 +1,5 @@
# Defaults
# ipld-eth-server endpoints
DEFAULT_CERC_IPLD_ETH_RPC=
DEFAULT_CERC_IPLD_ETH_GQL=

View File

@ -0,0 +1,28 @@
#!/bin/bash
set -e
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
CERC_SNAPSHOT_GQL_ENDPOINT="${CERC_SNAPSHOT_GQL_ENDPOINT:-${DEFAULT_CERC_SNAPSHOT_GQL_ENDPOINT}}"
CERC_SNAPSHOT_BLOCKHASH="${CERC_SNAPSHOT_BLOCKHASH:-${DEFAULT_CERC_SNAPSHOT_BLOCKHASH}}"
CHECKPOINT_FILE_PATH="./state_checkpoint/state-gql-${CERC_SNAPSHOT_BLOCKHASH}"
if [ -f "${CHECKPOINT_FILE_PATH}" ]; then
# Skip checkpoint creation if the file already exists
echo "File at ${CHECKPOINT_FILE_PATH} already exists, skipping checkpoint creation..."
else
# Create a checkpoint using GQL endpoint
echo "Creating a state checkpoint using GQL endpoint..."
yarn create-state-gql \
--snapshot-block-hash "${CERC_SNAPSHOT_BLOCKHASH}" \
--gql-endpoint "${CERC_SNAPSHOT_GQL_ENDPOINT}" \
--output "${CHECKPOINT_FILE_PATH}"
fi
echo "Initializing watcher using a state snapshot..."
# Import the state checkpoint
# (skips if snapshot block is already indexed)
yarn import-state --import-file "${CHECKPOINT_FILE_PATH}"

View File

@ -0,0 +1,23 @@
#!/bin/bash
set -e
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
CERC_IPLD_ETH_RPC="${CERC_IPLD_ETH_RPC:-${DEFAULT_CERC_IPLD_ETH_RPC}}"
CERC_IPLD_ETH_GQL="${CERC_IPLD_ETH_GQL:-${DEFAULT_CERC_IPLD_ETH_GQL}}"
echo "Using ETH server RPC endpoint ${CERC_IPLD_ETH_RPC}"
echo "Using ETH server GQL endpoint ${CERC_IPLD_ETH_GQL}"
# Read in the config template TOML file and modify it
WATCHER_CONFIG_TEMPLATE=$(cat environments/watcher-config-template.toml)
WATCHER_CONFIG=$(echo "$WATCHER_CONFIG_TEMPLATE" | \
sed -E "s|REPLACE_WITH_CERC_IPLD_ETH_GQL|${CERC_IPLD_ETH_GQL}|g; \
s|REPLACE_WITH_CERC_IPLD_ETH_RPC|${CERC_IPLD_ETH_RPC}| ")
# Write the modified content to a new file
echo "$WATCHER_CONFIG" > environments/local.toml
echo "Running job-runner"
DEBUG=vulcanize:* exec node --enable-source-maps dist/job-runner.js

View File

@ -0,0 +1,32 @@
#!/bin/bash
set -e
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
CERC_IPLD_ETH_RPC="${CERC_IPLD_ETH_RPC:-${DEFAULT_CERC_IPLD_ETH_RPC}}"
CERC_IPLD_ETH_GQL="${CERC_IPLD_ETH_GQL:-${DEFAULT_CERC_IPLD_ETH_GQL}}"
CERC_USE_STATE_SNAPSHOT="${CERC_USE_STATE_SNAPSHOT:-${DEFAULT_CERC_USE_STATE_SNAPSHOT}}"
echo "Using ETH server RPC endpoint ${CERC_IPLD_ETH_RPC}"
echo "Using ETH server GQL endpoint ${CERC_IPLD_ETH_GQL}"
# Read in the config template TOML file and modify it
WATCHER_CONFIG_TEMPLATE=$(cat environments/watcher-config-template.toml)
WATCHER_CONFIG=$(echo "$WATCHER_CONFIG_TEMPLATE" | \
sed -E "s|REPLACE_WITH_CERC_IPLD_ETH_GQL|${CERC_IPLD_ETH_GQL}|g; \
s|REPLACE_WITH_CERC_IPLD_ETH_RPC|${CERC_IPLD_ETH_RPC}| ")
# Write the modified content to a new file
echo "$WATCHER_CONFIG" > environments/local.toml
if [ "$CERC_USE_STATE_SNAPSHOT" = true ] ; then
./create-and-import-checkpoint.sh
else
echo "Initializing watcher using fill..."
yarn fill --start-block $DEFAULT_CERC_GELATO_START_BLOCK --end-block $DEFAULT_CERC_GELATO_START_BLOCK
fi
echo "Running active server"
DEBUG=vulcanize:* exec node --enable-source-maps dist/server.js

View File

@ -0,0 +1,75 @@
[server]
host = "0.0.0.0"
port = 3008
kind = "active"
# Checkpointing state.
checkpointing = true
# Checkpoint interval in number of blocks.
checkpointInterval = 2000
# Enable state creation
# CAUTION: Disable only if state creation is not desired or can be filled subsequently
enableState = true
subgraphPath = "./subgraph"
# Interval to restart wasm instance periodically
wasmRestartBlocksInterval = 20
# Interval in number of blocks at which to clear entities cache.
clearEntitiesCacheInterval = 1000
# Boolean to filter logs by contract.
filterLogs = true
# Max block range for which to return events in eventsInRange GQL query.
# Use -1 for skipping check on block range.
maxEventsBlockRange = 1000
# GQL cache settings
[server.gqlCache]
enabled = true
# Max in-memory cache size (in bytes) (default 8 MB)
# maxCacheSize
# GQL cache-control max-age settings (in seconds)
maxAge = 15
timeTravelMaxAge = 86400 # 1 day
[metrics]
host = "0.0.0.0"
port = 9000
[metrics.gql]
port = 9001
[database]
type = "postgres"
host = "gelato-watcher-db"
port = 5432
database = "gelato-watcher"
username = "vdbm"
password = "password"
synchronize = true
logging = false
[upstream]
[upstream.ethServer]
gqlApiEndpoint = "REPLACE_WITH_CERC_IPLD_ETH_GQL"
rpcProviderEndpoint = "REPLACE_WITH_CERC_IPLD_ETH_RPC"
[upstream.cache]
name = "requests"
enabled = false
deleteOnStart = false
[jobQueue]
dbConnectionString = "postgres://vdbm:password@gelato-watcher-db/gelato-watcher-job-queue"
maxCompletionLagInSecs = 300
jobDelayInMilliSecs = 100
eventsInBatch = 50
blockDelayInMilliSecs = 2000
prefetchBlocksInMem = true
prefetchBlockCount = 10

View File

@ -0,0 +1,13 @@
# ipld-eth-server endpoints
DEFAULT_CERC_IPLD_ETH_RPC="http://ipld-eth-server:8082"
DEFAULT_CERC_IPLD_ETH_GQL="http://ipld-eth-server:8083/graphql"
# Gelato start block
DEFAULT_CERC_GELATO_START_BLOCK=11361987
# Whether to use a state snapshot to initialize the watcher
DEFAULT_CERC_USE_STATE_SNAPSHOT=false
# State snapshot params
DEFAULT_CERC_SNAPSHOT_GQL_ENDPOINT=
DEFAULT_CERC_SNAPSHOT_BLOCKHASH=

View File

@ -7,6 +7,7 @@ fi
CERC_CHAIN_ID="${CERC_CHAIN_ID:-${DEFAULT_CERC_CHAIN_ID}}"
CERC_DEPLOYED_CONTRACT="${CERC_DEPLOYED_CONTRACT:-${DEFAULT_CERC_DEPLOYED_CONTRACT}}"
CERC_RELAY_NODES="${CERC_RELAY_NODES:-${DEFAULT_CERC_RELAY_NODES}}"
CERC_DENY_MULTIADDRS="${CERC_DENY_MULTIADDRS:-${DEFAULT_CERC_DENY_MULTIADDRS}}"
CERC_APP_WATCHER_URL="${CERC_APP_WATCHER_URL:-${DEFAULT_CERC_APP_WATCHER_URL}}"
# If not set (or []), check the mounted volume for relay peer id
@ -37,5 +38,6 @@ yq -n ".address = env(CERC_DEPLOYED_CONTRACT)" > /config/config.yml
yq ".watcherUrl = env(CERC_APP_WATCHER_URL)" -i /config/config.yml
yq ".chainId = env(CERC_CHAIN_ID)" -i /config/config.yml
yq ".relayNodes = strenv(CERC_RELAY_NODES)" -i /config/config.yml
yq ".denyMultiaddrs = strenv(CERC_DENY_MULTIADDRS)" -i /config/config.yml
/scripts/start-serving-app.sh

View File

@ -24,3 +24,6 @@ DEFAULT_CERC_CHAIN_ID=42069
# Set of relay nodes to be used by web-apps
DEFAULT_CERC_RELAY_NODES=[]
# Set of multiaddrs to be avoided while dialling
DEFAULT_CERC_DENY_MULTIADDRS=[]

View File

@ -8,13 +8,20 @@ CERC_L2_GETH_RPC="${CERC_L2_GETH_RPC:-${DEFAULT_CERC_L2_GETH_RPC}}"
CERC_L1_ACCOUNTS_CSV_URL="${CERC_L1_ACCOUNTS_CSV_URL:-${DEFAULT_CERC_L1_ACCOUNTS_CSV_URL}}"
CERC_RELAY_PEERS="${CERC_RELAY_PEERS:-${DEFAULT_CERC_RELAY_PEERS}}"
CERC_DENY_MULTIADDRS="${CERC_DENY_MULTIADDRS:-${DEFAULT_CERC_DENY_MULTIADDRS}}"
CERC_RELAY_ANNOUNCE_DOMAIN="${CERC_RELAY_ANNOUNCE_DOMAIN:-${DEFAULT_CERC_RELAY_ANNOUNCE_DOMAIN}}"
CERC_ENABLE_PEER_L2_TXS="${CERC_ENABLE_PEER_L2_TXS:-${DEFAULT_CERC_ENABLE_PEER_L2_TXS}}"
CERC_DEPLOYED_CONTRACT="${CERC_DEPLOYED_CONTRACT:-${DEFAULT_CERC_DEPLOYED_CONTRACT}}"
echo "Using L2 RPC endpoint ${CERC_L2_GETH_RPC}"
CERC_RELAY_MULTIADDR="/dns4/mobymask-watcher-server/tcp/9090/ws/p2p/$(jq -r '.id' /app/peers/relay-id.json)"
# Use public domain for relay multiaddr in peer config if specified
# Otherwise, use the docker container's host IP
if [ -n "$CERC_RELAY_ANNOUNCE_DOMAIN" ]; then
CERC_RELAY_MULTIADDR="/dns4/${CERC_RELAY_ANNOUNCE_DOMAIN}/tcp/443/wss/p2p/$(jq -r '.id' /app/peers/relay-id.json)"
else
CERC_RELAY_MULTIADDR="/dns4/mobymask-watcher-server/tcp/9090/ws/p2p/$(jq -r '.id' /app/peers/relay-id.json)"
fi
# Use contract address from environment variable or set from config.json in mounted volume
if [ -n "$CERC_DEPLOYED_CONTRACT" ]; then
@ -42,6 +49,7 @@ fi
WATCHER_CONFIG_TEMPLATE=$(cat environments/watcher-config-template.toml)
WATCHER_CONFIG=$(echo "$WATCHER_CONFIG_TEMPLATE" | \
sed -E "s|REPLACE_WITH_CERC_RELAY_PEERS|${CERC_RELAY_PEERS}|g; \
s|REPLACE_WITH_CERC_DENY_MULTIADDRS|${CERC_DENY_MULTIADDRS}|g; \
s/REPLACE_WITH_CERC_RELAY_ANNOUNCE_DOMAIN/${CERC_RELAY_ANNOUNCE_DOMAIN}/g; \
s|REPLACE_WITH_CERC_RELAY_MULTIADDR|${CERC_RELAY_MULTIADDR}|g; \
s/REPLACE_WITH_CERC_ENABLE_PEER_L2_TXS/${CERC_ENABLE_PEER_L2_TXS}/g; \

View File

@ -1,6 +1,7 @@
{
"relayNodes": [],
"peer": {
"denyMultiaddrs": [],
"enableDebugInfo": true
}
}

View File

@ -5,6 +5,7 @@ if [ -n "$CERC_SCRIPT_DEBUG" ]; then
fi
CERC_RELAY_NODES="${CERC_RELAY_NODES:-${DEFAULT_CERC_RELAY_NODES}}"
CERC_DENY_MULTIADDRS="${CERC_DENY_MULTIADDRS:-${DEFAULT_CERC_DENY_MULTIADDRS}}"
# If not set (or []), check the mounted volume for relay peer id
if [ -z "$CERC_RELAY_NODES" ] || [ "$CERC_RELAY_NODES" = "[]" ]; then
@ -16,5 +17,6 @@ echo "Using CERC_RELAY_NODES $CERC_RELAY_NODES"
# Use yq to create config.yml with environment variables
yq -n ".relayNodes = strenv(CERC_RELAY_NODES)" > /config/config.yml
yq ".denyMultiaddrs = strenv(CERC_DENY_MULTIADDRS)" -i /config/config.yml
/scripts/start-serving-app.sh

View File

@ -27,6 +27,7 @@
host = "0.0.0.0"
port = 9090
relayPeers = REPLACE_WITH_CERC_RELAY_PEERS
denyMultiaddrs = REPLACE_WITH_CERC_DENY_MULTIADDRS
peerIdFile = './peers/relay-id.json'
announce = 'REPLACE_WITH_CERC_RELAY_ANNOUNCE_DOMAIN'
enableDebugInfo = true
@ -34,6 +35,7 @@
[server.p2p.peer]
relayMultiaddr = 'REPLACE_WITH_CERC_RELAY_MULTIADDR'
pubSubTopic = 'mobymask'
denyMultiaddrs = REPLACE_WITH_CERC_DENY_MULTIADDRS
peerIdFile = './peers/peer-id.json'
enableDebugInfo = true
enableL2Txs = REPLACE_WITH_CERC_ENABLE_PEER_L2_TXS

View File

@ -1,4 +1,4 @@
FROM sigp/lcli:v3.2.1 AS lcli
FROM sigp/lcli:v4.1.0 AS lcli
FROM skylenet/ethereum-genesis-generator@sha256:210353ce7c898686bc5092f16c61220a76d357f51eff9c451e9ad1b9ad03d4d3 AS ethgen
FROM cerc/fixturenet-eth-geth:local AS fnetgeth

View File

@ -21,22 +21,14 @@ if [ ! -f "$DATADIR/bootnode/enr.dat" ]; then
--udp-port $BOOTNODE_PORT \
--tcp-port $BOOTNODE_PORT \
--genesis-fork-version $GENESIS_FORK_VERSION \
--output-dir $DATADIR/bootnode-temp
--output-dir $DATADIR/bootnode
# Output ENR to a temp dir and mv as "lcli generate-bootnode-enr" will not overwrite an empty dir (mounted volume)
mkdir -p $DATADIR/bootnode
mv $DATADIR/bootnode-temp/* $DATADIR/bootnode
rm -r $DATADIR/bootnode-temp
bootnode_enr=`cat $DATADIR/bootnode/enr.dat`
echo "- $bootnode_enr" > $TESTNET_DIR/boot_enr.yaml
echo "Generated bootnode enr"
else
echo "Found existing bootnode enr"
echo "Generated bootnode enr and written to $TESTNET_DIR/boot_enr.yaml"
fi
bootnode_enr=`cat $DATADIR/bootnode/enr.dat`
echo "- $bootnode_enr" > $TESTNET_DIR/boot_enr.yaml
echo "Written bootnode enr to $TESTNET_DIR/boot_enr.yaml"
exec lighthouse boot_node \
--testnet-dir $TESTNET_DIR \
--port $BOOTNODE_PORT \

View File

@ -1,4 +1,4 @@
FROM sigp/lighthouse:v4.0.1-modern
FROM sigp/lighthouse:v4.1.0-modern
RUN apt-get update; apt-get install bash netcat curl less jq -y;

View File

@ -0,0 +1,138 @@
#####################################
FROM golang:1.19.7-buster AS lotus-builder
MAINTAINER Lotus Development Team
RUN apt-get update && apt-get install -y ca-certificates build-essential clang ocl-icd-opencl-dev ocl-icd-libopencl1 jq libhwloc-dev
ENV XDG_CACHE_HOME="/tmp"
### taken from https://github.com/rust-lang/docker-rust/blob/master/1.63.0/buster/Dockerfile
ENV RUSTUP_HOME=/usr/local/rustup \
CARGO_HOME=/usr/local/cargo \
PATH=/usr/local/cargo/bin:$PATH \
RUST_VERSION=1.63.0
RUN set -eux; \
dpkgArch="$(dpkg --print-architecture)"; \
case "${dpkgArch##*-}" in \
amd64) rustArch='x86_64-unknown-linux-gnu'; rustupSha256='5cc9ffd1026e82e7fb2eec2121ad71f4b0f044e88bca39207b3f6b769aaa799c' ;; \
arm64) rustArch='aarch64-unknown-linux-gnu'; rustupSha256='e189948e396d47254103a49c987e7fb0e5dd8e34b200aa4481ecc4b8e41fb929' ;; \
*) echo >&2 "unsupported architecture: ${dpkgArch}"; exit 1 ;; \
esac; \
url="https://static.rust-lang.org/rustup/archive/1.25.1/${rustArch}/rustup-init"; \
wget "$url"; \
echo "${rustupSha256} *rustup-init" | sha256sum -c -; \
chmod +x rustup-init; \
./rustup-init -y --no-modify-path --profile minimal --default-toolchain $RUST_VERSION --default-host ${rustArch}; \
rm rustup-init; \
chmod -R a+w $RUSTUP_HOME $CARGO_HOME; \
rustup --version; \
cargo --version; \
rustc --version;
COPY ./ /opt/filecoin
WORKDIR /opt/filecoin
#RUN scripts/docker-git-state-check.sh
### make configurable filecoin-ffi build
ARG FFI_BUILD_FROM_SOURCE=0
ENV FFI_BUILD_FROM_SOURCE=${FFI_BUILD_FROM_SOURCE}
RUN make clean deps
ARG RUSTFLAGS=""
ARG GOFLAGS=""
#RUN make buildall
RUN make 2k
#####################################
FROM ubuntu:20.04 AS lotus-base
MAINTAINER Lotus Development Team
# Base resources
COPY --from=lotus-builder /etc/ssl/certs /etc/ssl/certs
COPY --from=lotus-builder /lib/*/libdl.so.2 /lib/
COPY --from=lotus-builder /lib/*/librt.so.1 /lib/
COPY --from=lotus-builder /lib/*/libgcc_s.so.1 /lib/
COPY --from=lotus-builder /lib/*/libutil.so.1 /lib/
COPY --from=lotus-builder /usr/lib/*/libltdl.so.7 /lib/
COPY --from=lotus-builder /usr/lib/*/libnuma.so.1 /lib/
COPY --from=lotus-builder /usr/lib/*/libhwloc.so.5 /lib/
COPY --from=lotus-builder /usr/lib/*/libOpenCL.so.1 /lib/
RUN useradd -r -u 532 -U fc \
&& mkdir -p /etc/OpenCL/vendors \
&& echo "libnvidia-opencl.so.1" > /etc/OpenCL/vendors/nvidia.icd
#####################################
FROM lotus-base AS lotus
MAINTAINER Lotus Development Team
COPY --from=lotus-builder /opt/filecoin/lotus /usr/local/bin/
COPY --from=lotus-builder /opt/filecoin/lotus-shed /usr/local/bin/
#COPY scripts/docker-lotus-entrypoint.sh /
#COPY myscripts/setup-node.sh /docker-entrypoint-scripts.d/setup-node.sh
ARG DOCKER_LOTUS_IMPORT_SNAPSHOT https://snapshots.mainnet.filops.net/minimal/latest
ENV DOCKER_LOTUS_IMPORT_SNAPSHOT ${DOCKER_LOTUS_IMPORT_SNAPSHOT}
ENV FILECOIN_PARAMETER_CACHE /var/tmp/filecoin-proof-parameters
ENV LOTUS_PATH /var/lib/lotus
ENV DOCKER_LOTUS_IMPORT_WALLET ""
RUN mkdir /var/lib/lotus /var/tmp/filecoin-proof-parameters
RUN chown fc: /var/lib/lotus /var/tmp/filecoin-proof-parameters
VOLUME /var/lib/lotus
VOLUME /var/tmp/filecoin-proof-parameters
USER fc
EXPOSE 1234
ENTRYPOINT ["/docker-lotus-entrypoint.sh"]
CMD ["-help"]
#####################################
FROM lotus-base AS lotus-all-in-one
ENV FILECOIN_PARAMETER_CACHE /var/tmp/filecoin-proof-parameters
ENV LOTUS_MINER_PATH /var/lib/lotus-miner
ENV LOTUS_PATH /var/lib/lotus
ENV LOTUS_WORKER_PATH /var/lib/lotus-worker
ENV WALLET_PATH /var/lib/lotus-wallet
COPY --from=lotus-builder /opt/filecoin/lotus /usr/local/bin/
COPY --from=lotus-builder /opt/filecoin/lotus-seed /usr/local/bin/
COPY --from=lotus-builder /opt/filecoin/lotus-shed /usr/local/bin/
#COPY --from=lotus-builder /opt/filecoin/lotus-wallet /usr/local/bin/
#COPY --from=lotus-builder /opt/filecoin/lotus-gateway /usr/local/bin/
COPY --from=lotus-builder /opt/filecoin/lotus-miner /usr/local/bin/
COPY --from=lotus-builder /opt/filecoin/lotus-worker /usr/local/bin/
#COPY --from=lotus-builder /opt/filecoin/lotus-stats /usr/local/bin/
#COPY --from=lotus-builder /opt/filecoin/lotus-fountain /usr/local/bin/
RUN mkdir /var/tmp/filecoin-proof-parameters
RUN mkdir /var/lib/lotus
RUN mkdir /var/lib/lotus-miner
RUN mkdir /var/lib/lotus-worker
RUN mkdir /var/lib/lotus-wallet
RUN chown fc: /var/tmp/filecoin-proof-parameters
RUN chown fc: /var/lib/lotus
RUN chown fc: /var/lib/lotus-miner
RUN chown fc: /var/lib/lotus-worker
RUN chown fc: /var/lib/lotus-wallet
#VOLUME /var/tmp/filecoin-proof-parameters
#VOLUME /var/lib/lotus
#VOLUME /var/lib/lotus-miner
#VOLUME /var/lib/lotus-worker
#VOLUME /var/lib/lotus-wallet
EXPOSE 1234
EXPOSE 2345
EXPOSE 3456
EXPOSE 1777

View File

@ -0,0 +1,12 @@
#!/usr/bin/env bash
# Build cerc/lotus
source ${CERC_CONTAINER_BASE_DIR}/build-base.sh
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
# Per lotus docs, 'releases' branch always contains latest stable release
git -C ${CERC_REPO_BASE_DIR}/lotus checkout releases
# Replace repo's Dockerfile with modified one
cp ${SCRIPT_DIR}/Dockerfile ${CERC_REPO_BASE_DIR}/lotus/Dockerfile
docker build -t cerc/lotus:local ${build_command_args} ${CERC_REPO_BASE_DIR}/lotus

View File

@ -50,9 +50,9 @@ RUN yarn global add http-server
# Globally install both versions of the payload web app package
# Install old version of MobyMask web app
RUN yarn global add @cerc-io/mobymask-ui@0.1.3
RUN yarn global add @cerc-io/mobymask-ui@0.1.4
# Install the LXDAO version of MobyMask web app
RUN yarn global add @cerc-io/mobymask-ui-lxdao@npm:@cerc-io/mobymask-ui@0.1.3-lxdao-0.1.1
RUN yarn global add @cerc-io/mobymask-ui-lxdao@npm:@cerc-io/mobymask-ui@0.1.4-lxdao-0.1.1
# Expose port for http
EXPOSE 80

View File

@ -33,7 +33,7 @@ do
echo "Substituting: ${template_string_to_replace} = ${template_value_to_substitute}"
# TODO: Pass keys to be replaced without double quotes
if [[ "$template_string_to_replace" =~ ^${config_prefix}_(relayNodes|chainId)$ ]]; then
if [[ "$template_string_to_replace" =~ ^${config_prefix}_(relayNodes|chainId|denyMultiaddrs)$ ]]; then
find ${webapp_files_dir} -type f -exec sed -i 's#"'"${template_string_to_replace}"'"#'"${template_value_to_substitute}"'#g' {} +
else
# Note: we do not escape our strings, on the expectation they do not contain the '#' char.
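# Illustration (hypothetical placeholder and key names): for an array-valued key such as relayNodes or
# denyMultiaddrs, the quotes around the placeholder are included in the sed match, so the substitution turns
#   {"relayNodes": "MOBYMASK_CONFIG_relayNodes"}
# into
#   {"relayNodes": ["/dns4/relay.example.com/tcp/443/wss/p2p/12D3KooW..."]}
# whereas plain string values keep their surrounding quotes.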

View File

@ -0,0 +1,3 @@
#!/usr/bin/env bash
# Build cerc/pocket
docker build -t cerc/pocket:local ${CERC_REPO_BASE_DIR}/pocket-core-deployments/docker

View File

@ -21,7 +21,7 @@ RUN mkdir -p /config
RUN yarn global add http-server
# Globally install the payload web app package
RUN yarn global add @cerc-io/test-app@0.2.33
RUN yarn global add @cerc-io/test-app@0.2.34
# Expose port for http
EXPOSE 80

View File

@ -33,7 +33,7 @@ do
echo "Substituting: ${template_string_to_replace} = ${template_value_to_substitute}"
# TODO: Pass keys to be replaced without double quotes
if [[ "$template_string_to_replace" == "${config_prefix}_relayNodes" ]]; then
if [[ "$template_string_to_replace" =~ ^${config_prefix}_(relayNodes|denyMultiaddrs)$ ]]; then
find ${webapp_files_dir} -type f -exec sed -i 's#"'"${template_string_to_replace}"'"#'"${template_value_to_substitute}"'#g' {} +
else
# Note: we do not escape our strings, on the expectation they do not contain the '#' char.

View File

@ -1,11 +1,14 @@
#!/bin/sh
#!/usr/bin/env bash
set -e
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
# Test if the container's filesystem is old (run previously) or new
EXISTSFILENAME=/var/exists
echo "Test container starting"
if [[ -f "$EXISTSFILENAME" ]];
then
TIMESTAMP = `cat $EXISTSFILENAME`
TIMESTAMP=`cat $EXISTSFILENAME`
echo "Filesystem is old, created: $TIMESTAMP"
else
echo "Filesystem is fresh"

View File

@ -0,0 +1,13 @@
FROM node:18.16.0-alpine3.16
RUN apk --update --no-cache add git python3 alpine-sdk
WORKDIR /app
COPY . .
RUN echo "Building azimuth-watcher-ts" && \
yarn && yarn build
RUN echo "Install toml-js to update watcher config files" && \
yarn add --dev --ignore-workspace-root-check toml-js

View File

@ -0,0 +1,9 @@
#!/usr/bin/env bash
# Build cerc/watcher-azimuth
source ${CERC_CONTAINER_BASE_DIR}/build-base.sh
# See: https://stackoverflow.com/a/246128/1701505
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
docker build -t cerc/watcher-azimuth:local -f ${SCRIPT_DIR}/Dockerfile ${build_command_args} ${CERC_REPO_BASE_DIR}/azimuth-watcher-ts

View File

@ -0,0 +1,12 @@
FROM node:18.15.0-alpine3.16
RUN apk --update --no-cache add git python3 alpine-sdk bash
WORKDIR /app
COPY . .
RUN echo "Building gelato-watcher-ts" && \
yarn && yarn build
WORKDIR /app

View File

@ -0,0 +1,9 @@
#!/usr/bin/env bash
# Build cerc/watcher-gelato
source ${CERC_CONTAINER_BASE_DIR}/build-base.sh
# See: https://stackoverflow.com/a/246128/1701505
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
docker build -t cerc/watcher-gelato:local -f ${SCRIPT_DIR}/Dockerfile ${build_command_args} ${CERC_REPO_BASE_DIR}/gelato-watcher-ts

View File

@ -15,6 +15,6 @@ WORKDIR /app
COPY . .
RUN echo "Building mobymask-v2-watcher-ts" && \
yarn && yarn build
yarn && yarn build
WORKDIR /app

View File

@ -36,3 +36,8 @@ cerc/optimism-l2geth
cerc/optimism-op-batcher
cerc/optimism-op-node
cerc/optimism-op-proposer
cerc/pocket
cerc/watcher-azimuth
cerc/ipld-eth-state-snapshot
cerc/watcher-gelato
cerc/lotus

View File

@ -24,3 +24,7 @@ tx-spammer
kubo
foundry
fixturenet-optimism
fixturenet-pocket
watcher-azimuth
watcher-gelato
fixturenet-lotus

View File

@ -27,3 +27,9 @@ lirewine/sdk
telackey/act_runner
ethereum-optimism/op-geth
ethereum-optimism/optimism
pokt-network/pocket-core
pokt-network/pocket-core-deployments
cerc-io/azimuth-watcher-ts
cerc-io/ipld-eth-state-snapshot
cerc-io/gelato-watcher-ts
filecoin-project/lotus

View File

@ -0,0 +1,72 @@
# Azimuth Watcher
Instructions to set up and deploy the Azimuth Watcher stack
## Setup
Prerequisite: `ipld-eth-server` RPC and GQL endpoints
Clone required repositories:
```bash
laconic-so --stack azimuth setup-repositories
```
NOTE: If the repository already exists and is checked out to a different version, the `setup-repositories` command will throw an error.
To get around this, remove the `azimuth-watcher-ts` repository and re-run the command.
Check out the required versions and branches in the repos:
```bash
# azimuth-watcher-ts
cd ~/cerc/azimuth-watcher-ts
git checkout v0.1.0
```
Build the container images:
```bash
laconic-so --stack azimuth build-containers
```
This should create the required docker images in the local image registry.
### Configuration
* Create and update an env file to be used in the next step:
```bash
# External ipld-eth-server endpoints
CERC_IPLD_ETH_RPC=
CERC_IPLD_ETH_GQL=
```
* NOTE: If `ipld-eth-server` is running on the host machine, use `host.docker.internal` as the hostname to access host ports
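For example, with `ipld-eth-server` running on the host machine, the env file might look like this (hostnames and port numbers are illustrative and depend on your `ipld-eth-server` configuration):
```bash
# Example env file (illustrative values)
CERC_IPLD_ETH_RPC=http://host.docker.internal:8081
CERC_IPLD_ETH_GQL=http://host.docker.internal:8082/graphql
```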
### Deploy the stack
* Deploy the containers:
```bash
laconic-so --stack azimuth deploy-system --env-file <PATH_TO_ENV_FILE> up
```
* List and check the health status of all the containers using `docker ps` and wait for them to be `healthy`
## Clean up
To stop all the services running in the background, run:
```bash
laconic-so --stack azimuth deploy-system down
```
Clear volumes created by this stack:
```bash
# List all relevant volumes
docker volume ls -q --filter "name=.*watcher_db_data"
# Remove all the listed volumes
docker volume rm $(docker volume ls -q --filter "name=.*watcher_db_data")
```

View File

@ -0,0 +1,8 @@
version: "1.0"
name: azimuth
repos:
- cerc-io/azimuth-watcher-ts
containers:
- cerc/watcher-azimuth
pods:
- watcher-azimuth

View File

@ -16,7 +16,7 @@ Leave `CERC_NPM_REGISTRY_URL` un-set to use the local gitea registry.
Note: the scheme/gerbil container is excluded as it isn't currently required for the package registry.
```
$ laconic-so --stack build-support build-containers --exclude cerc/builder-gerbil
$ laconic-so --stack build-support build-containers
```
### 2. Deploy Gitea Package Registry
@ -30,7 +30,7 @@ $ laconic-so --stack package-registry deploy up
⠿ Container laconic-aecc4a21d3a502b14522db97d427e850-server-1 Started 1.9s
New user 'gitea_admin' has been successfully created!
This is your gitea access token: 84fe66a73698bf11edbdccd0a338236b7d1d5c45. Keep it safe and secure, it can not be fetched again from gitea.
To use with laconic-so set this environment variable: export CERC_NPM_AUTH_TOKEN=3e493e77b3e83fe9e882f7e3a79dd4d5441c308b
To use with laconic-so set this environment variable: export CERC_NPM_AUTH_TOKEN=84fe66a73698bf11edbdccd0a338236b7d1d5c45
Created the organization cerc-io
Gitea was configured to use host name: gitea.local, ensure that this resolves to localhost, e.g. with sudo vi /etc/hosts
Success, gitea is properly initialized
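For the host name requirement mentioned above, a minimal `/etc/hosts` entry is sufficient:
```
127.0.0.1   gitea.local
```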

View File

View File

@ -0,0 +1,13 @@
version: "1.0"
name: chain-chunker
description: "Stack to build containers for chain-chunker"
repos:
- cerc-io/ipld-eth-state-snapshot
- cerc-io/eth-statediff-service
- cerc-io/ipld-eth-db
- cerc-io/ipld-eth-server
containers:
- cerc/ipld-eth-state-snapshot
- cerc/eth-statediff-service
- cerc/ipld-eth-db
- cerc/ipld-eth-server

View File

@ -117,8 +117,8 @@ Clear volumes created by this stack:
```bash
# List all relevant volumes
$ docker volume ls -q --filter "name=.*fixturenet_eth_bootnode_geth_data|.*fixturenet_eth_bootnode_lighthouse_data|.*fixturenet_eth_geth_1_data|.*fixturenet_eth_geth_2_data|.*fixturenet_eth_lighthouse_1_data|.*fixturenet_eth_lighthouse_2_data"
$ docker volume ls -q --filter "name=.*fixturenet_eth_bootnode_geth_data|.*fixturenet_eth_geth_1_data|.*fixturenet_eth_geth_2_data|.*fixturenet_eth_lighthouse_1_data|.*fixturenet_eth_lighthouse_2_data"
# Remove all the listed volumes
$ docker volume rm $(docker volume ls -q --filter "name=.*fixturenet_eth_bootnode_geth_data|.*fixturenet_eth_bootnode_lighthouse_data|.*fixturenet_eth_geth_1_data|.*fixturenet_eth_geth_2_data|.*fixturenet_eth_lighthouse_1_data|.*fixturenet_eth_lighthouse_2_data")
$ docker volume rm $(docker volume ls -q --filter "name=.*fixturenet_eth_bootnode_geth_data|.*fixturenet_eth_geth_1_data|.*fixturenet_eth_geth_2_data|.*fixturenet_eth_lighthouse_1_data|.*fixturenet_eth_lighthouse_2_data")
```

View File

@ -0,0 +1,28 @@
# Lotus Fixturenet
Instructions for deploying a local Lotus (Filecoin) chain for development and testing purposes using laconic-stack-orchestrator.
## 1. Clone required repositories
```
$ laconic-so --stack fixturenet-lotus setup-repositories
```
## 2. Build the stack's packages and containers
```
$ laconic-so --stack fixturenet-lotus build-containers
```
## 3. Deploy the stack
```
$ laconic-so --stack fixturenet-lotus deploy up
```
Correct operation should be verified by checking the container logs with:
```
$ laconic-so --stack fixturenet-lotus deploy logs lotus-miner
$ laconic-so --stack fixturenet-lotus deploy logs lotus-node-1
$ laconic-so --stack fixturenet-lotus deploy logs lotus-node-2
```
or by checking the chain status on each node:
```
$ laconic-so --stack fixturenet-lotus deploy exec lotus-miner "lotus status"
$ laconic-so --stack fixturenet-lotus deploy exec lotus-node-1 "lotus status"
$ laconic-so --stack fixturenet-lotus deploy exec lotus-node-2 "lotus status"
```

View File

@ -0,0 +1,9 @@
version: "1.0"
name: fixturenet-lotus
description: "A lotus fixturenet"
repos:
- filecoin-project/lotus
containers:
- cerc/lotus
pods:
- fixturenet-lotus

View File

@ -0,0 +1,59 @@
# Pocket Fixturenet
Instructions for deploying a local single-node Pocket chain alongside a geth + lighthouse blockchain "fixturenet" for development and testing purposes using laconic-stack-orchestrator.
## 1. Build Laconic Stack Orchestrator
Build this fork of Laconic Stack Orchestrator which includes the fixturenet-pocket stack:
```
$ scripts/build_shiv_package.sh
$ cd package
$ mv laconic-so-{version} /usr/local/bin/laconic-so # Or move laconic-so to ~/bin or your favorite on-path directory
```
## 2. Clone required repositories
```
$ laconic-so --stack fixturenet-pocket setup-repositories
```
## 3. Build the stack's containers
```
$ laconic-so --stack fixturenet-pocket build-containers
```
## 4. Deploy the stack
```
$ laconic-so --stack fixturenet-pocket deploy up
```
It may take up to 10 minutes for the Eth Fixturenet to fully come online and start producing blocks.
## 5. Check status
**Eth Fixturenet:**
```
$ laconic-so --stack fixturenet-pocket deploy exec fixturenet-eth-bootnode-lighthouse /scripts/status-internal.sh
Waiting for geth to generate DAG.... done
Waiting for beacon phase0.... done
Waiting for beacon altair.... done
Waiting for beacon bellatrix pre-merge.... done
Waiting for beacon bellatrix merge.... done
```
**Pocket node:**
```
$ laconic-so --stack fixturenet-pocket deploy exec pocket "pocket query height"
2023/04/20 08:07:46 Initializing Pocket Datadir
2023/04/20 08:07:46 datadir = /home/app/.pocket
http://localhost:8081/v1/query/height
{
"height": 4
}
```
or
```
$ laconic-so --stack fixturenet-pocket deploy logs pocket
```
## 6. Send a relay request to Pocket node
The Pocket node serves relay requests at `http://localhost:8081/v1/client/sim`
**Example request:**
```
$ curl -X POST --data '{"relay_network_id":"0021","payload":{"data":"{\"jsonrpc\": \"2.0\",\"id\": 1,\"method\": \"eth_blockNumber\",\"params\": []}","method":"POST","path":"","headers":{}}}' http://localhost:8081/v1/client/sim
```
**Response:**
```
"{\"jsonrpc\":\"2.0\",\"id\":1,\"result\":\"0x6fe\"}\n"
```

View File

@ -0,0 +1,16 @@
version: "1.0"
name: fixturenet-pocket
description: "A single node pocket chain that can serve relays from the geth-1 node in eth-fixturenet"
repos:
- cerc-io/go-ethereum
- pokt-network/pocket-core
- pokt-network/pocket-core-deployments # contains the dockerfile
containers:
- cerc/go-ethereum
- cerc/lighthouse
- cerc/fixturenet-eth-geth
- cerc/fixturenet-eth-lighthouse
- cerc/pocket
pods:
- fixturenet-pocket
- fixturenet-eth

View File

@ -0,0 +1,98 @@
# Gelato watcher
Instructions to set up and deploy the Gelato watcher using [laconic-stack-orchestrator](/README.md#install)
## Setup
Prerequisite: `ipld-eth-server` RPC and GQL endpoints
Clone required repositories:
```bash
laconic-so --stack gelato setup-repositories
# If this throws an error because a repo is already checked out to a different branch/tag, remove the repositories mentioned below and re-run the command
```
Check out the required version in the repo:
```bash
# gelato-watcher-ts
cd ~/cerc/gelato-watcher-ts
git checkout v0.1.0
```
Build the container images:
```bash
laconic-so --stack gelato build-containers
```
This should create the required docker images in the local image registry.
## Deploy
### Configuration
Create and update an env file to be used in the next step ([defaults](../../config/watcher-gelato/watcher-params.env)):
```bash
# External ipld-eth-server endpoints
CERC_IPLD_ETH_RPC=
CERC_IPLD_ETH_GQL=
# Whether to use a state snapshot to initialize the watcher
CERC_USE_STATE_SNAPSHOT=false
# State snapshot params
# Required if CERC_USE_STATE_SNAPSHOT is set to true
CERC_SNAPSHOT_GQL_ENDPOINT=
CERC_SNAPSHOT_BLOCKHASH=
```
* NOTE: If `ipld-eth-server` is running on the host machine, use `host.docker.internal` as the hostname to access the host port(s)
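As an illustration, an env file pointing at an `ipld-eth-server` on the host and initializing from a snapshot might look like this (all values below are placeholders, not stack defaults):
```bash
# Example env file (placeholder values)
CERC_IPLD_ETH_RPC=http://host.docker.internal:8081
CERC_IPLD_ETH_GQL=http://host.docker.internal:8082/graphql

# Initialize the watcher from a state snapshot instead of syncing from scratch
CERC_USE_STATE_SNAPSHOT=true
CERC_SNAPSHOT_GQL_ENDPOINT=http://host.docker.internal:8082/graphql
CERC_SNAPSHOT_BLOCKHASH=<block hash at which the snapshot was taken>
```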
### Deploy the stack
```bash
laconic-so --stack gelato deploy --env-file <PATH_TO_ENV_FILE> up
```
To list and monitor the running containers:
```bash
laconic-so --stack gelato deploy ps
# With status
docker ps
# Check logs for a container
docker logs -f <CONTAINER_ID>
```
The stack runs an active watcher with the following endpoints exposed on host ports (a quick smoke test for these is sketched after the list):
* `3008`: watcher endpoint
* `9000`: watcher metrics
* `9001`: watcher GQL metrics
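Assuming the watcher serves GraphQL at `/graphql` on its server port and the metrics ports return Prometheus-format text (these paths are assumptions, not confirmed by this stack's docs), a quick smoke test might look like:
```bash
# Hypothetical smoke tests against the exposed host ports; adjust paths if your build differs
curl -s -X POST http://localhost:3008/graphql \
  -H 'Content-Type: application/json' \
  --data '{"query":"{ __typename }"}'
curl -s http://localhost:9000/metrics | head
```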
## Web Apps
TODO
## Clean up
Stop all services running in the background:
```bash
laconic-so --stack gelato deploy down
```
Clear volumes created by this stack:
```bash
# List all relevant volumes
docker volume ls -q --filter "name=.*gelato_watcher_db_data|.*gelato_watcher_state_gql"
# Remove all the listed volumes
docker volume rm $(docker volume ls -q --filter "name=.*gelato_watcher_db_data|.*gelato_watcher_state_gql")
```

View File

@ -0,0 +1,8 @@
version: "1.0"
name: gelato
repos:
- cerc-io/gelato-watcher-ts
containers:
- cerc/watcher-gelato
pods:
- watcher-gelato

View File

@ -12,10 +12,10 @@ If running in the cloud, visit `IP:5001/webui` and you'll likely see this error:
1. Get the container name with `docker ps`:
2. Go into the container (replace with your container name):
2. Go into the container:
```
docker exec -it laconic-dbbf5498fd7d322930b9484121a6a5f4-ipfs-1 sh
laconic-so --stack kubo deploy exec ipfs sh
```
3. Enable CORS as described in point 2 of the error message. Copy/paste/run each line in sequence, then run `exit` to exit the container.

View File

@ -23,11 +23,11 @@ Checkout to the required versions and branches in repos
```bash
# watcher-ts
cd ~/cerc/watcher-ts
git checkout v0.2.39
git checkout v0.2.41
# mobymask-v2-watcher-ts
cd ~/cerc/mobymask-v2-watcher-ts
git checkout v0.1.0
git checkout v0.1.1
# MobyMask
cd ~/cerc/MobyMask

View File

@ -19,11 +19,11 @@ Checkout to the required versions and branches in repos:
```bash
# watcher-ts
cd ~/cerc/watcher-ts
git checkout v0.2.39
git checkout v0.2.41
# mobymask-v2-watcher-ts
cd ~/cerc/mobymask-v2-watcher-ts
git checkout v0.1.0
git checkout v0.1.1
# MobyMask
cd ~/cerc/MobyMask
@ -67,11 +67,14 @@ Create and update an env file to be used in the next step ([defaults](../../conf
# (used for generating a root invite link after deploying the contract)
CERC_MOBYMASK_APP_BASE_URI="http://127.0.0.1:3002/#"
# (Optional) Domain to be used in the relay node's announce address
CERC_RELAY_ANNOUNCE_DOMAIN=
# (Optional) Set of relay peers to connect to from the relay node
CERC_RELAY_PEERS=[]
# (Optional) Domain to be used in the relay node's announce address
CERC_RELAY_ANNOUNCE_DOMAIN=
# (Optional) Set of multiaddrs to be avoided while dialling
CERC_DENY_MULTIADDRS=[]
# Set to false for disabling watcher peer to send txs to L2
CERC_ENABLE_PEER_L2_TXS=true
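# (Illustrative) A populated deny list uses the same JSON array-of-multiaddrs form as CERC_RELAY_PEERS above, e.g.:
# CERC_DENY_MULTIADDRS=["/dns4/bad-peer.example.com/tcp/443/wss/p2p/12D3KooWGHmDDCc93XUWL16FMcTPCGu2zFaMkf67k8HZ4gdQbRDr"]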

View File

@ -35,11 +35,11 @@ Checkout to the required versions and branches in repos:
```bash
# watcher-ts
cd ~/cerc/watcher-ts
git checkout v0.2.39
git checkout v0.2.41
# mobymask-v2-watcher-ts
cd ~/cerc/mobymask-v2-watcher-ts
git checkout v0.1.0
git checkout v0.1.1
# MobyMask
cd ~/cerc/MobyMask

View File

@ -26,6 +26,9 @@ Create and update an env file to be used in the next step ([defaults](../../conf
# Eg. CERC_RELAY_NODES=["/dns4/example.com/tcp/443/wss/p2p/12D3KooWGHmDDCc93XUWL16FMcTPCGu2zFaMkf67k8HZ4gdQbRDr"]
CERC_RELAY_NODES=[]
# Set of multiaddrs to be avoided while dialling
CERC_DENY_MULTIADDRS=[]
# Also add if running MobyMask app:
# Watcher endpoint used by the app for GQL queries

View File

@ -26,7 +26,7 @@ In addition to the pre-requisites listed in the [README](/README.md), the follow
1. Clone this repository:
```
$ git clone (https://github.com/cerc-io/stack-orchestrator.git
$ git clone https://github.com/cerc-io/stack-orchestrator.git
```
2. Enter the project directory:
@ -87,20 +87,27 @@ Use shiv to build a single file Python executable zip archive of laconic-so:
```
$ cp stack-orchestrator/laconic-so ~/bin
$ laconic-so
Usage: python -m laconic-so [OPTIONS] COMMAND [ARGS]...
Usage: laconic-so [OPTIONS] COMMAND [ARGS]...
Laconic Stack Orchestrator
Options:
--stack TEXT specify a stack to build/deploy
--quiet
--verbose
--dry-run
-h, --help Show this message and exit.
--local-stack
--debug
--continue-on-error
-h, --help Show this message and exit.
Commands:
build-containers build the set of containers required for a complete...
build-npms build the set of npm packages required for a...
deploy deploy a stack
deploy-system deploy a stack
setup-repositories git clone the set of repositories required to build...
version print tool version
```
For cutting releases, use the [shiv build script](/scripts/build_shiv_package.sh).

View File

@ -162,7 +162,13 @@ Published demo-record-1.yml with id: bafyreierh3xnfivexlscdwubvczmddsnf46uytyfvr
The sample record we deployed looks like:
```
TODO
record:
type: WebsiteRegistrationRecord
url: 'https://cerc.io'
repo_registration_record_cid: QmSnuWmxptJZdLJpKRarxBMS2Ju2oANVrgbr2xWbie9b2D
build_artifact_cid: QmP8jTG1m9GSDJLCbeWhVSVgEzCPPwXRdCRuJtQ5Tz9Kc9
tls_cert_cid: QmbWqxBEKC3P8tqsKc98xmWNzrzDtRLMiMPL8wBuTGsMnR
version: 1.0.23
```
2. Return to the laconic-console