WIP: run console without local gitea #276

Closed
zramsay wants to merge 228 commits from allow-registry-config into main
135 changed files with 4564 additions and 556 deletions

249
README.md
View File

@ -1,218 +1,93 @@
# Stack Orchestrator
Stack Orchestrator allows building and deployment of a Laconic stack on a single machine with minimal prerequisites.
Stack Orchestrator allows building and deployment of a Laconic Stack on a single machine with minimal prerequisites. It is a Python3 CLI tool that runs on any OS with Python3 and Docker. The following diagram summarizes the relevant repositories in the Laconic Stack - and their relationship to Stack Orchestrator.
## Setup
### Prerequisites
Stack Orchestrator is a Python3 CLI tool that runs on any OS with Python3 and Docker. Tested on: Ubuntu 20/22.
![The Stack](/docs/images/laconic-stack.png)
## Install
Ensure that the following are already installed:
1. Python3 (the stock Python3 version available in Ubuntu 20 and 22 is suitable)
```
$ python3 --version
Python 3.8.10
```
2. Docker (Install a current version from docker.com; don't use the version from any Linux distro)
```
$ docker --version
Docker version 20.10.17, build 100c701
```
3. If Docker was installed from a regular package repository (not Docker Desktop), be aware that the compose plugin may need to be installed as well.
```
DOCKER_CONFIG=${DOCKER_CONFIG:-$HOME/.docker}
mkdir -p $DOCKER_CONFIG/cli-plugins
curl -SL https://github.com/docker/compose/releases/download/v2.11.2/docker-compose-linux-x86_64 -o $DOCKER_CONFIG/cli-plugins/docker-compose
chmod +x ~/.docker/cli-plugins/docker-compose
# see https://docs.docker.com/compose/install/linux/#install-the-plugin-manually for further details
# or to install for all users.
```
- [Python3](https://wiki.python.org/moin/BeginnersGuide/Download): `python3 --version` >= `3.8.10` (the Python3 shipped in Ubuntu 20+ is good to go)
- [Docker](https://docs.docker.com/get-docker/): `docker --version` >= `20.10.21`
- [jq](https://stedolan.github.io/jq/download/): `jq --version` >= `1.5`
### User Mode Install
Note: if installing docker-compose via package manager on Linux (as opposed to Docker Desktop), you must [install the plugin](https://docs.docker.com/compose/install/linux/#install-the-plugin-manually), e.g.:
User mode runs the orchestrator from a "binary" single-file release and does not require special Python environment setup. Use this mode unless you plan to make changes to the orchestrator source code.
```bash
mkdir -p ~/.docker/cli-plugins
curl -SL https://github.com/docker/compose/releases/download/v2.11.2/docker-compose-linux-x86_64 -o ~/.docker/cli-plugins/docker-compose
chmod +x ~/.docker/cli-plugins/docker-compose
```
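After installing the plugin, confirm that docker can see it (your version will differ):
```bash
docker compose version
```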
*NOTE: User Mode is currently broken, use "Developer mode" described below for now.*
Next decide on a directory where you would like to put the stack-orchestrator program. Typically this would be
a "user" binary directory such as `~/bin` or perhaps `/usr/local/laconic` or possibly just the current working directory.
1. Download the latest release from [this page](https://github.com/cerc-io/stack-orchestrator/tags), into a suitable directory (e.g. `~/bin`):
```
$ cd ~/bin
$ curl -L https://github.com/cerc-io/stack-orchestrator/releases/download/v1.0.3-alpha/laconic-so
```
1. Ensure `laconic-so` is on the `PATH`
1. Verify operation:
```
$ ~/bin/laconic-so --help
Usage: python -m laconic-so [OPTIONS] COMMAND [ARGS]...
Now, having selected that directory, download the latest release from [this page](https://github.com/cerc-io/stack-orchestrator/tags) into it (we're using `~/bin` below for concreteness but edit to suit if you selected a different directory). Also be sure that the destination directory exists and is writable:
Laconic Stack Orchestrator
```bash
curl -L -o ~/bin/laconic-so https://github.com/cerc-io/stack-orchestrator/releases/latest/download/laconic-so
```
Options:
--quiet
--verbose
--dry-run
--local-stack
-h, --help Show this message and exit.
Give it execute permissions:
```bash
chmod +x ~/bin/laconic-so
```
Commands:
build-containers build the set of containers required for a complete...
deploy-system deploy a stack
setup-repositories git clone the set of repositories required to build...
```
### Developer mode Install
Suitable for developers either modifying or debugging the orchestrator Python code:
#### Prerequisites
In addition to the binary install prerequisites listed above, the following are required:
1. Python venv package
This may or may not be already installed depending on the host OS and version. Check by running:
```
$ python3 -m venv
usage: venv [-h] [--system-site-packages] [--symlinks | --copies] [--clear] [--upgrade] [--without-pip] [--prompt PROMPT] ENV_DIR [ENV_DIR ...]
venv: error: the following arguments are required: ENV_DIR
```
If the venv package is missing you should see a message indicating how to install it, for example with:
```
$ apt install python3.8-venv
```
#### Install
1. Clone this repository:
```
$ git clone https://github.com/cerc-io/stack-orchestrator.git
```
2. Enter the project directory:
```
$ cd stack-orchestrator
```
3. Create and activate a venv:
```
$ python3 -m venv venv
$ source ./venv/bin/activate
(venv) $
```
4. Install the cli in edit mode:
```
$ pip install --editable .
```
5. Verify installation:
```
(venv) $ laconic-so
Usage: laconic-so [OPTIONS] COMMAND [ARGS]...
Ensure `laconic-so` is on the [`PATH`](https://unix.stackexchange.com/a/26059)
Laconic Stack Orchestrator
Verify operation (your version will probably be different, just check that you see some version output and not an error):
Options:
--quiet
--verbose
--dry-run
-h, --help Show this message and exit.
Commands:
build-containers build the set of containers required for a complete...
deploy-system deploy a stack
setup-repositories git clone the set of repositories required to build...
```
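For the `PATH` step above, one way to make `~/bin` visible (bash shown; adjust for your shell):
```bash
export PATH="$HOME/bin:$PATH"
# persist it for future sessions
echo 'export PATH="$HOME/bin:$PATH"' >> ~/.bashrc
```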
#### Build a zipapp (single file distributable script)
Use shiv to build a single file Python executable zip archive of laconic-so:
1. Install [shiv](https://github.com/linkedin/shiv):
```
(venv) $ pip install shiv
(venv) $ pip install wheel
```
1. Run shiv to create a zipapp file:
```
(venv) $ shiv -c laconic-so -o laconic-so .
```
This creates a file `./laconic-so` that is executable outside of any venv, and on other machines, OSes and architectures, requiring only the system Python3:
1. Verify it works:
```
$ cp stack-orchestrator/laconic-so ~/bin
$ laconic-so
Usage: python -m laconic-so [OPTIONS] COMMAND [ARGS]...
Laconic Stack Orchestrator
Options:
--quiet
--verbose
--dry-run
-h, --help Show this message and exit.
Commands:
build-containers build the set of containers required for a complete...
deploy-system deploy a stack
setup-repositories git clone the set of repositories required to build...
```
### CI Mode
_write-me_
```
laconic-so version
Version: v1.0.27-7831078
```
## Usage
There are three sub-commands: `setup-repositories`, `build-containers` and `deploy-system` that are generally run in order:
Note: `laconic-so` will run the version installed to `~/bin`, while `./laconic-so` can be invoked to run a locally built version in a checkout.
Three sub-commands: `setup-repositories`, `build-containers` and `deploy-system` are generally run in order. The following is a slim example for standing up the `erc20-watcher`. Go further with the [erc20 watcher demo](/app/data/stacks/erc20) and other pieces of the stack, within the [`stacks` directory](/app/data/stacks).
### Setup Repositories
Clones the set of git repositories necessary to build a system.
Note: the use of `ssh-agent` is recommended in order to avoid entering your ssh key passphrase for each repository.
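For example, to start an agent for the current shell and add your key (the key path below is illustrative):
```bash
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519
```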
```
$ laconic-so --verbose setup-repositories #this will default to ~/cerc or CERC_REPO_BASE_DIR from an env file
#$ ./laconic-so --verbose --local-stack setup-repositories #this will use cwd ../ as dev_root_path
```
Clone the set of git repositories necessary to build a system:
```bash
laconic-so --stack erc20 setup-repositories
```
This will default to cloning git repositories into `~/cerc`, or into the directory specified by the `CERC_REPO_BASE_DIR` environment variable if set.
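For example, to clone into a custom location instead of `~/cerc`:
```bash
export CERC_REPO_BASE_DIR=~/projects/cerc
laconic-so --stack erc20 setup-repositories
```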
### Build Containers
Builds the set of docker container images required to run a system. It takes around 10 minutes to build all the containers from cold.
```
$ laconic-so --verbose build-containers #this will default to ~/cerc or CERC_REPO_BASE_DIR from an env file
#$ ./laconic-so --verbose --local-stack build-containers #this will use cwd ../ as dev_root_path
```
Build the set of docker container images required to run a system. It takes around 10 minutes to build all the containers from scratch.
```bash
laconic-so --stack erc20 build-containers
```
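As with the other sub-commands, `--include`/`--exclude` can limit the operation to a subset of containers (the container name below is illustrative):
```bash
laconic-so --stack erc20 build-containers --include cerc/go-ethereum
```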
### Deploy System
Uses `docker compose` to deploy a system.
Use `--include <list of components>` to deploy a subset of all containers:
Uses `docker compose` to deploy a system (with most recently built container images).
```bash
laconic-so --stack erc20 deploy-system up
```
```
$ laconic-so --verbose deploy-system --include db-sharding,contract,ipld-eth-server,go-ethereum-foundry up
```
Check out the GraphQL playground here: [http://localhost:3002/graphql](http://localhost:3002/graphql)
See the [erc20 watcher demo](/app/data/stacks/erc20) to continue further.
### Cleanup
```bash
laconic-so --stack erc20 deploy-system down
```
```
$ laconic-so --verbose deploy-system --include db-sharding,contract,ipld-eth-server,go-ethereum-foundry down
```
Note: the `deploy-system` command interacts with the most recently built container images.
## Contributing
See the [CONTRIBUTING.md](/docs/CONTRIBUTING.md) for developer mode install.
## Platform Support
Native aarch64 is _not_ currently supported. x64 emulation on ARM64 macOS should work (not yet tested).
## Implementation
The orchestrator's operation is driven by the files shown below. `repository-list.txt` contains the list of git repositories; `container-image-list.txt` contains
the list of container image names, while `pod-list.txt` specifies the set of compose components (corresponding to individual docker-compose-xxx.yml files, which may in turn specify more than one container).
Files required to build each container image are stored under `./container-build/<container-name>`
Files required at deploy-time are stored under `./config/<component-name>`
```
├── pod-list.txt
├── compose
│   ├── docker-compose-contract.yml
│   ├── docker-compose-db-sharding.yml
│   ├── docker-compose-db.yml
│   ├── docker-compose-eth-statediff-fill-service.yml
│   ├── docker-compose-go-ethereum-foundry.yml
│   ├── docker-compose-ipld-eth-beacon-db.yml
│   ├── docker-compose-ipld-eth-beacon-indexer.yml
│   ├── docker-compose-ipld-eth-server.yml
│   ├── docker-compose-lighthouse.yml
│   └── docker-compose-prometheus-grafana.yml
├── config
│   └── ipld-eth-server
├── container-build
│   ├── cerc-eth-statediff-fill-service
│   ├── cerc-go-ethereum
│   ├── cerc-go-ethereum-foundry
│   ├── cerc-ipld-eth-beacon-db
│   ├── cerc-ipld-eth-beacon-indexer
│   ├── cerc-ipld-eth-db
│   ├── cerc-ipld-eth-server
│   ├── cerc-lighthouse
│   └── cerc-test-contract
├── container-image-list.txt
├── repository-list.txt
```
_write-more-of-me_
Native aarch64 is _not_ currently supported. x64 emulation on ARM64 macOS should work (not yet tested).

71
app/base.py Normal file
View File

@ -0,0 +1,71 @@
# Copyright © 2022, 2023 Cerc
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import os
from abc import ABC, abstractmethod
from .deploy_system import get_stack_status
def get_stack(config, stack):
if stack == "package-registry":
return package_registry_stack(config, stack)
else:
return base_stack(config, stack)
class base_stack(ABC):
def __init__(self, config, stack):
self.config = config
self.stack = stack
@abstractmethod
def ensure_available(self):
pass
@abstractmethod
def get_url(self):
pass
class package_registry_stack(base_stack):
def ensure_available(self):
self.url = "<no registry url set>"
# Check if we were given an external registry URL
url_from_environment = os.environ.get("CERC_NPM_REGISTRY_URL")
if url_from_environment:
if self.config.verbose:
print(f"Using package registry url from CERC_NPM_REGISTRY_URL: {url_from_environment}")
self.url = url_from_environment
else:
# Otherwise we expect to use the local package-registry stack
# First check if the stack is up
registry_running = get_stack_status(self.config, "package-registry")
if registry_running:
# If it is available, get its mapped port and construct its URL
if self.config.debug:
print("Found local package registry stack is up")
# TODO: get url from deploy-stack
self.url = "http://gitea.local:3000/api/packages/cerc-io/npm/"
else:
# If not, print a message about how to start it and return fail to the caller
print("ERROR: The package-registry stack is not running, and no external registry specified with CERC_NPM_REGISTRY_URL")
print("ERROR: Start the local package registry with: laconic-so --stack package-registry deploy-system up")
return False
return True
def get_url(self):
return self.url
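As sketched above, builds can target an external npm registry instead of the local gitea `package-registry` stack by setting `CERC_NPM_REGISTRY_URL` (URL and token below are placeholders):
```bash
export CERC_NPM_REGISTRY_URL=https://my-registry.example.com/api/packages/my-org/npm/
export CERC_NPM_AUTH_TOKEN=my-token
laconic-so build-npms
```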

View File

@ -1,4 +1,4 @@
# Copyright © 2022 Cerc
# Copyright © 2022, 2023 Cerc
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
@ -21,12 +21,13 @@
# TODO: display the available list of containers; allow re-build of either all or specific containers
import os
import sys
from decouple import config
import subprocess
import click
import importlib.resources
from pathlib import Path
from .util import include_exclude_check
from .util import include_exclude_check, get_parsed_stack_config
# TODO: find a place for this
# epilog="Config provided either in .env or settings.ini or env vars: CERC_REPO_BASE_DIR (defaults to ~/cerc)"
@ -43,6 +44,8 @@ def command(ctx, include, exclude):
verbose = ctx.obj.verbose
dry_run = ctx.obj.dry_run
local_stack = ctx.obj.local_stack
stack = ctx.obj.stack
continue_on_error = ctx.obj.continue_on_error
# See: https://stackoverflow.com/questions/25389095/python-get-path-of-root-project-structure
container_build_dir = Path(__file__).absolute().parent.joinpath("data", "container-build")
@ -62,16 +65,27 @@ def command(ctx, include, exclude):
# See: https://stackoverflow.com/a/20885799/1701505
from . import data
with importlib.resources.open_text(data, "container-image-list.txt") as container_list_file:
containers = container_list_file.read().splitlines()
all_containers = container_list_file.read().splitlines()
containers_in_scope = []
if stack:
stack_config = get_parsed_stack_config(stack)
containers_in_scope = stack_config['containers']
else:
containers_in_scope = all_containers
if verbose:
print(f'Containers: {containers}')
print(f'Containers: {containers_in_scope}')
if stack:
print(f"Stack: {stack}")
# TODO: make this configurable
container_build_env = {
"CERC_NPM_URL": "http://gitea.local:3000/api/packages/cerc-io/npm/",
"CERC_NPM_URL": config("CERC_NPM_URL", default="http://gitea.local:3000/api/packages/cerc-io/npm/"),
"CERC_NPM_AUTH_TOKEN": config("CERC_NPM_AUTH_TOKEN", default="<token-not-supplied>"),
"CERC_REPO_BASE_DIR": dev_root_path
"CERC_REPO_BASE_DIR": dev_root_path,
"CERC_HOST_UID": f"{os.getuid()}",
"CERC_HOST_GID": f"{os.getgid()}",
"DOCKER_BUILDKIT": "0"
}
def process_container(container):
@ -91,17 +105,24 @@ def command(ctx, include, exclude):
# Check if we have a repo for this container. If not, set the context dir to the container-build subdir
repo_full_path = os.path.join(dev_root_path, repo_dir)
repo_dir_or_build_dir = repo_dir if os.path.exists(repo_full_path) else build_dir
build_command = os.path.join(container_build_dir, "default-build.sh") + f" {container} {repo_dir_or_build_dir}"
build_command = os.path.join(container_build_dir, "default-build.sh") + f" {container}:local {repo_dir_or_build_dir}"
if not dry_run:
if verbose:
print(f"Executing: {build_command}")
print(f"Executing: {build_command} with environment: {container_build_env}")
build_result = subprocess.run(build_command, shell=True, env=container_build_env)
# TODO: check result in build_result.returncode
print(f"Result is: {build_result}")
if verbose:
print(f"Return code is: {build_result.returncode}")
if build_result.returncode != 0:
print(f"Error running build for {container}")
if not continue_on_error:
print("FATAL Error: container build failed and --continue-on-error not set, exiting")
sys.exit(1)
else:
print("****** Container Build Error, continuing because --continue-on-error is set")
else:
print("Skipped")
for container in containers:
for container in containers_in_scope:
if include_exclude_check(container, include, exclude):
process_container(container)
else:
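Since the build environment values above are read via `python-decouple`'s `config()`, they can be supplied as environment variables or from a `.env` file; for example (values illustrative):
```bash
CERC_NPM_URL=http://gitea.local:3000/api/packages/cerc-io/npm/ \
CERC_NPM_AUTH_TOKEN=my-token \
laconic-so build-containers
```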

View File

@ -1,4 +1,4 @@
# Copyright © 2022 Cerc
# Copyright © 2022, 2023 Cerc
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
@ -19,11 +19,16 @@
# CERC_REPO_BASE_DIR defaults to ~/cerc
import os
import sys
from shutil import rmtree, copytree
from decouple import config
import click
import importlib.resources
from python_on_whales import docker
from .util import include_exclude_check
from python_on_whales import docker, DockerException
from .base import get_stack
from .util import include_exclude_check, get_parsed_stack_config
builder_js_image_name = "cerc/builder-js:local"
@click.command()
@click.option('--include', help="only build these packages")
@ -37,6 +42,23 @@ def command(ctx, include, exclude):
dry_run = ctx.obj.dry_run
local_stack = ctx.obj.local_stack
debug = ctx.obj.debug
stack = ctx.obj.stack
continue_on_error = ctx.obj.continue_on_error
_ensure_prerequisites()
# build-npms depends on having access to a writable package registry
# so we check here that it is available
package_registry_stack = get_stack(ctx.obj, "package-registry")
registry_available = package_registry_stack.ensure_available()
if not registry_available:
print("FATAL: no npm registry available for build-npms command")
sys.exit(1)
npm_registry_url = package_registry_stack.get_url()
npm_registry_url_token = config("CERC_NPM_AUTH_TOKEN", default=None)
if not npm_registry_url_token:
print("FATAL: CERC_NPM_AUTH_TOKEN is not defined")
sys.exit(1)
if local_stack:
dev_root_path = os.getcwd()[0:os.getcwd().rindex("stack-orchestrator")]
@ -44,49 +66,100 @@ def command(ctx, include, exclude):
else:
dev_root_path = os.path.expanduser(config("CERC_REPO_BASE_DIR", default="~/cerc"))
if not quiet:
build_root_path = os.path.join(dev_root_path, "build-trees")
if verbose:
print(f'Dev Root is: {dev_root_path}')
if not os.path.isdir(dev_root_path):
print('Dev root directory doesn\'t exist, creating')
os.makedirs(dev_root_path)
    if not os.path.isdir(build_root_path):
        print('Build root directory doesn\'t exist, creating')
        os.makedirs(build_root_path)
# See: https://stackoverflow.com/a/20885799/1701505
from . import data
with importlib.resources.open_text(data, "npm-package-list.txt") as package_list_file:
packages = package_list_file.read().splitlines()
all_packages = package_list_file.read().splitlines()
packages_in_scope = []
if stack:
stack_config = get_parsed_stack_config(stack)
# TODO: syntax check the input here
packages_in_scope = stack_config['npms']
else:
packages_in_scope = all_packages
if verbose:
print(f'Packages: {packages}')
print(f'Packages: {packages_in_scope}')
def build_package(package):
if not quiet:
print(f"Building npm package: {package}")
repo_dir = package
repo_full_path = os.path.join(dev_root_path, repo_dir)
# TODO: make the npm registry url configurable.
build_command = ["sh", "-c", "cd /workspace && build-npm-package-local-dependencies.sh http://gitea.local:3000/api/packages/cerc-io/npm/"]
# Copy the repo and build that to avoid propagating JS tooling file changes back into the cloned repo
repo_copy_path = os.path.join(build_root_path, repo_dir)
# First delete any old build tree
if os.path.isdir(repo_copy_path):
if verbose:
print(f"Deleting old build tree: {repo_copy_path}")
if not dry_run:
rmtree(repo_copy_path)
# Now copy the repo into the build tree location
if verbose:
print(f"Copying build tree from: {repo_full_path} to: {repo_copy_path}")
if not dry_run:
copytree(repo_full_path, repo_copy_path)
build_command = ["sh", "-c", f"cd /workspace && build-npm-package-local-dependencies.sh {npm_registry_url}"]
if not dry_run:
if verbose:
print(f"Executing: {build_command}")
envs = {"CERC_NPM_AUTH_TOKEN": os.environ["CERC_NPM_AUTH_TOKEN"]} | ({"CERC_SCRIPT_DEBUG": "true"} if debug else {})
build_result = docker.run("cerc/builder-js",
remove=True,
interactive=True,
tty=True,
user=f"{os.getuid()}:{os.getgid()}",
envs=envs,
add_hosts=[("gitea.local", "host-gateway")],
volumes=[(repo_full_path, "/workspace")],
command=build_command
)
# TODO: check result in build_result.returncode
print(f"Result is: {build_result}")
# Originally we used the PEP 584 merge operator:
# envs = {"CERC_NPM_AUTH_TOKEN": npm_registry_url_token} | ({"CERC_SCRIPT_DEBUG": "true"} if debug else {})
# but that isn't available in Python 3.8 (default in Ubuntu 20) so for now we use dict.update:
envs = {"CERC_NPM_AUTH_TOKEN": npm_registry_url_token}
envs.update({"CERC_SCRIPT_DEBUG": "true"} if debug else {})
try:
docker.run(builder_js_image_name,
remove=True,
interactive=True,
tty=True,
user=f"{os.getuid()}:{os.getgid()}",
envs=envs,
# TODO: detect this host name in npm_registry_url rather than hard-wiring it
add_hosts=[("gitea.local", "host-gateway")],
volumes=[(repo_copy_path, "/workspace")],
command=build_command
)
# Note that although the docs say that build_result should contain
# the command output as a string, in reality it is always the empty string.
# Since we detect errors via catching exceptions below, we can safely ignore it here.
except DockerException as e:
print(f"Error executing build for {package} in container:\n {e}")
if not continue_on_error:
print("FATAL Error: build failed and --continue-on-error not set, exiting")
sys.exit(1)
else:
print("****** Build Error, continuing because --continue-on-error is set")
else:
print("Skipped")
for package in packages:
for package in packages_in_scope:
if include_exclude_check(package, include, exclude):
build_package(package)
else:
if verbose:
print(f"Excluding: {package}")
def _ensure_prerequisites():
# Check that the builder-js container is available and
# Tell the user how to build it if not
images = docker.image.list(builder_js_image_name)
if len(images) == 0:
print(f"FATAL: builder image: {builder_js_image_name} is required but was not found")
print("Please run this command to create it: laconic-so --stack build-support build-containers")
sys.exit(1)
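Per the prerequisite checks above, a typical first-time sequence when relying on the local registry would be (commands taken from the error messages above):
```bash
# build the cerc/builder-js image required by build-npms
laconic-so --stack build-support build-containers
# start the local package registry
laconic-so --stack package-registry deploy-system up
```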

View File

@ -0,0 +1,23 @@
version: "3.2"
services:
prometheus:
restart: always
image: prom/prometheus
depends_on:
fixturenet-eth-geth-1:
condition: service_healthy
volumes:
- ../config/fixturenet-eth-metrics/prometheus/etc:/etc/prometheus
ports:
- "9090"
grafana:
restart: always
image: grafana/grafana
environment:
- GF_SECURITY_ADMIN_PASSWORD=changeme6325
volumes:
- ../config/fixturenet-eth-metrics/grafana/etc/provisioning/dashboards:/etc/grafana/provisioning/dashboards
- ../config/fixturenet-eth-metrics/grafana/etc/provisioning/datasources:/etc/grafana/provisioning/datasources
- ../config/fixturenet-eth-metrics/grafana/etc/dashboards:/etc/grafana/dashboards
ports:
- "3000"

View File

@ -20,9 +20,12 @@ services:
CERC_REMOTE_DEBUG: "true"
CERC_RUN_STATEDIFF: "detect"
CERC_STATEDIFF_DB_NODE_ID: 1
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
env_file:
- ../config/fixturenet-eth/fixturenet-eth.env
image: cerc/fixturenet-eth-geth:local
volumes:
- fixturenet_geth_accounts:/opt/testnet/build/el
healthcheck:
test: ["CMD", "nc", "-v", "localhost", "8545"]
interval: 30s
@ -34,6 +37,7 @@ services:
ports:
- "8545"
- "40000"
- "6060"
fixturenet-eth-geth-2:
hostname: fixturenet-eth-geth-2
@ -54,6 +58,12 @@ services:
environment:
RUN_BOOTNODE: "true"
image: cerc/fixturenet-eth-lighthouse:local
healthcheck:
test: ["CMD", "/scripts/status-internal.sh"]
interval: 10s
timeout: 100s
retries: 3
start_period: 15s
fixturenet-eth-lighthouse-1:
hostname: fixturenet-eth-lighthouse-1
@ -77,7 +87,7 @@ services:
condition: service_healthy
ports:
- "8001"
fixturenet-eth-lighthouse-2:
hostname: fixturenet-eth-lighthouse-2
healthcheck:
@ -99,3 +109,6 @@ services:
condition: service_started
fixturenet-eth-geth-2:
condition: service_healthy
volumes:
fixturenet_geth_accounts:

View File

@ -0,0 +1,6 @@
services:
laconic-console:
restart: unless-stopped
image: cerc/laconic-console-host:local
ports:
- "80"

View File

@ -5,17 +5,21 @@ services:
image: cerc/laconicd:local
command: ["sh", "/docker-entrypoint-scripts.d/create-fixturenet.sh"]
volumes:
# TODO: look at folding this script into the container
# TODO: look at folding these scripts into the container
- ../config/fixturenet-laconicd/create-fixturenet.sh:/docker-entrypoint-scripts.d/create-fixturenet.sh
- ../config/fixturenet-laconicd/export-mykey.sh:/docker-entrypoint-scripts.d/export-mykey.sh
# TODO: determine which of the ports below is really needed
ports:
- "6060"
- "26657"
- "26656"
- "9473"
- "9473:9473"
- "8545"
- "8546"
- "9090"
- "9091"
- "1317"
cli:
image: cerc/laconic-registry-cli:local
volumes:
- ../config/fixturenet-laconicd/registry-cli-config-template.yml:/registry-cli-config-template.yml
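With `9473:9473` now published to the host (the other ports remain randomly mapped), the laconicd GQL API is reachable directly; a quick smoke test, assuming the endpoint path used by the registry CLI config:
```bash
curl -s http://localhost:9473/api
```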

View File

@ -0,0 +1,99 @@
version: '3.7'
services:
fixturenet-optimism-contracts:
hostname: fixturenet-optimism-contracts
image: cerc/optimism-contracts:local
depends_on:
fixturenet-eth-geth-1:
condition: service_healthy
fixturenet-eth-bootnode-lighthouse:
condition: service_healthy
environment:
CHAIN_ID: 1212
L1_RPC: "http://fixturenet-eth-geth-1:8545"
command: "./run.sh"
volumes:
- ../config/fixturenet-optimism/optimism-contracts/rekey-json.ts:/app/packages/contracts-bedrock/tasks/rekey-json.ts
- ../config/fixturenet-optimism/optimism-contracts/send-balance.ts:/app/packages/contracts-bedrock/tasks/send-balance.ts
- ../config/fixturenet-optimism/optimism-contracts/update-config.js:/app/packages/contracts-bedrock/update-config.js
- ../config/fixturenet-optimism/optimism-contracts/run.sh:/app/packages/contracts-bedrock/run.sh
- fixturenet_geth_accounts:/geth-accounts:ro
- l2_accounts:/l2-accounts
- l1_deployment:/app/packages/contracts-bedrock
op-node-l2-config-gen:
image: cerc/optimism-op-node:local
depends_on:
fixturenet-optimism-contracts:
condition: service_completed_successfully
environment:
L1_RPC: "http://fixturenet-eth-geth-1:8545"
volumes:
- ../config/fixturenet-optimism/generate-l2-config.sh:/app/generate-l2-config.sh
- l1_deployment:/contracts-bedrock:ro
- op_node_data:/app
command: ["sh", "/app/generate-l2-config.sh"]
op-geth:
image: cerc/optimism-l2geth:local
depends_on:
op-node-l2-config-gen:
condition: service_started
volumes:
- ../config/fixturenet-optimism/run-op-geth.sh:/run-op-geth.sh
- op_node_data:/op-node:ro
- l2_accounts:/l2-accounts:ro
entrypoint: "sh"
command: "/run-op-geth.sh"
ports:
- "8545"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost:8545"]
interval: 30s
timeout: 10s
retries: 10
start_period: 10s
op-node:
environment:
L1_RPC: "http://fixturenet-eth-geth-1:8545"
depends_on:
op-geth:
condition: service_healthy
image: cerc/optimism-op-node:local
volumes:
- ../config/fixturenet-optimism/run-op-node.sh:/app/run-op-node.sh
- op_node_data:/op-node-data:ro
- l2_accounts:/l2-accounts:ro
command: ["sh", "/app/run-op-node.sh"]
ports:
- "8547"
healthcheck:
test: ["CMD", "nc", "-vz", "localhost:8547"]
interval: 30s
timeout: 10s
retries: 10
start_period: 10s
op-batcher:
environment:
L1_RPC: "http://fixturenet-eth-geth-1:8545"
depends_on:
fixturenet-eth-geth-1:
condition: service_healthy
op-node:
condition: service_healthy
op-geth:
condition: service_healthy
image: cerc/optimism-op-batcher:local
volumes:
- ../config/fixturenet-optimism/run-op-batcher.sh:/run-op-batcher.sh
- l2_accounts:/l2-accounts:ro
entrypoint: "sh"
command: "/run-op-batcher.sh"
volumes:
op_node_data:
l2_accounts:
l1_deployment:

View File

@ -0,0 +1,8 @@
# Add-on pod to include foundry tooling within a fixturenet
services:
foundry:
image: cerc/foundry:local
command: ["while :; do sleep 600; done"]
volumes:
- ../config/foundry/foundry.toml:/foundry.toml
- ./foundry/workspace:/workspace

View File

@ -8,7 +8,7 @@ services:
condition: service_healthy
image: cerc/go-ethereum-foundry:local
healthcheck:
test: ["CMD", "nc", "-v", "localhost", "8545"]
test: ["CMD", "nc", "-vz", "localhost", "8545"]
interval: 30s
timeout: 3s
retries: 10

View File

@ -20,16 +20,24 @@ services:
DATABASE_USER: "vdbm"
DATABASE_PASSWORD: "password"
ETH_CHAIN_ID: 99
ETH_FORWARD_ETH_CALLS: $eth_forward_eth_calls
ETH_PROXY_ON_ERROR: $eth_proxy_on_error
ETH_HTTP_PATH: $eth_http_path
ETH_FORWARD_ETH_CALLS: "false"
ETH_FORWARD_GET_STORAGE_AT: "false"
ETH_PROXY_ON_ERROR: "false"
METRICS: "true"
PROM_HTTP: "true"
PROM_HTTP_ADDR: "0.0.0.0"
PROM_HTTP_PORT: "8090"
LOGRUS_LEVEL: "debug"
CERC_REMOTE_DEBUG: "true"
volumes:
- type: bind
source: ../config/ipld-eth-server/chain.json
target: /tmp/chain.json
ports:
- "127.0.0.1:8081:8081"
- "127.0.0.1:8082:8082"
- "8081"
- "8082"
- "8090"
- "40000"
healthcheck:
test: ["CMD", "nc", "-v", "localhost", "8081"]
interval: 20s

View File

@ -29,9 +29,17 @@ services:
condition: service_healthy
keycloak-nginx:
image: nginx:1.23-alpine
restart: always
volumes:
- ../config/keycloak/nginx:/etc/nginx/conf.d
ports:
- 80
depends_on:
- keycloak
keycloak-nginx-prometheus-exporter:
image: nginx/nginx-prometheus-exporter
restart: always
environment:
- SCRAPE_URI=http://keycloak-nginx:80/stub_status
depends_on:
- keycloak-nginx
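The exporter polls nginx's `stub_status` endpoint (enabled in the nginx config elsewhere in this diff); a manual check from the deployment directory, assuming busybox `wget` in the alpine nginx image:
```bash
docker compose exec keycloak-nginx wget -qO- http://localhost/stub_status
```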

View File

@ -0,0 +1,13 @@
version: "3.2"
# See: https://docs.ipfs.tech/install/run-ipfs-inside-docker/#set-up
services:
ipfs:
image: ipfs/kubo:master-2023-02-20-714a968
restart: always
volumes:
- ./ipfs/import:/import
- ./ipfs/data:/data/ipfs
ports:
- "8080"
- "4001"
- "5001"

View File

@ -0,0 +1,25 @@
version: "3.2"
services:
laconicd:
restart: unless-stopped
image: cerc/laconicd:local
command: ["sh", "/docker-entrypoint-scripts.d/create-fixturenet.sh"]
volumes:
- ../config/fixturenet-laconicd/create-fixturenet.sh:/docker-entrypoint-scripts.d/create-fixturenet.sh
ports:
- "9473"
- "8545"
- "8546"
- "1317"
healthcheck:
test: ["CMD", "nc", "-v", "localhost", "8545"]
interval: 20s
timeout: 5s
retries: 15
start_period: 10s
networks:
# https://docs.docker.com/compose/networking/#configure-the-default-network
default:
name: mobymask-v2-network

View File

@ -1,21 +0,0 @@
version: "3.2"
services:
# If you want prometheus to work, you must update the following file in the ops repo locally.
# localhost:6060 --> go-ethereum:6060
prometheus:
restart: always
user: "987"
image: prom/prometheus
volumes:
- ${cerc_ops}/metrics/etc:/etc/prometheus
- ./prometheus-data:/prometheus
ports:
- "127.0.0.1:9090:9090"
grafana:
restart: always
user: "472"
image: grafana/grafana
volumes:
- ./grafana-data:/var/lib/grafana
ports:
- "127.0.0.1:3000:3000"

View File

@ -3,3 +3,5 @@ services:
test:
image: cerc/test-container:local
restart: always
ports:
- "80"

View File

@ -39,7 +39,7 @@ services:
- "0.0.0.0:3002:3001"
- "0.0.0.0:9002:9001"
healthcheck:
test: ["CMD", "nc", "-v", "localhost", "3002"]
test: ["CMD", "nc", "-vz", "localhost", "3001"]
interval: 20s
timeout: 5s
retries: 15

View File

@ -0,0 +1,127 @@
version: '3.2'
services:
mobymask-watcher-db:
restart: unless-stopped
image: postgres:14-alpine
environment:
- POSTGRES_USER=vdbm
- POSTGRES_MULTIPLE_DATABASES=mobymask-watcher,mobymask-watcher-job-queue
- POSTGRES_EXTENSION=mobymask-watcher-job-queue:pgcrypto
- POSTGRES_PASSWORD=password
volumes:
- ../config/postgresql/multiple-postgressql-databases.sh:/docker-entrypoint-initdb.d/multiple-postgressql-databases.sh
- mobymask_watcher_db_data:/var/lib/postgresql/data
ports:
- "0.0.0.0:15432:5432"
healthcheck:
test: ["CMD", "nc", "-v", "localhost", "5432"]
interval: 20s
timeout: 5s
retries: 15
start_period: 10s
mobymask:
restart: unless-stopped
image: cerc/mobymask:local
working_dir: /app/packages/server
depends_on:
op-node:
condition: service_healthy
op-geth:
condition: service_healthy
# TODO: Configure env file for ETH RPC URL & private key
environment:
- ENV=PROD
command: ["sh", "./deploy-invite.sh"]
volumes:
# TODO: add a script to set rpc endpoint from env
# add manually if running separately
- ../config/watcher-mobymask-v2/secrets-template.json:/app/packages/server/secrets-template.json
- ../config/watcher-mobymask-v2/deploy-invite.sh:/app/packages/server/deploy-invite.sh
- moby_data_server:/app/packages/server
- fixturenet_geth_accounts:/geth-accounts:ro
healthcheck:
test: ["CMD", "nc", "-v", "localhost", "3330"]
interval: 20s
timeout: 5s
retries: 15
start_period: 10s
mobymask-watcher-server:
# TODO: pass optimism rpc endpoint
restart: unless-stopped
depends_on:
mobymask-watcher-db:
condition: service_healthy
mobymask:
condition: service_healthy
image: cerc/watcher-mobymask-v2:local
command: ["sh", "server-start.sh"]
volumes:
# TODO: add a script to set rpc endpoint from env
# add manually if running separately
- ../config/watcher-mobymask-v2/watcher-config-template.toml:/app/packages/mobymask-v2-watcher/environments/watcher-config-template.toml
- ../config/watcher-mobymask-v2/peer.env:/app/packages/peer/.env
- ../config/watcher-mobymask-v2/relay-id.json:/app/packages/mobymask-v2-watcher/relay-id.json
- ../config/watcher-mobymask-v2/peer-id.json:/app/packages/mobymask-v2-watcher/peer-id.json
- ../config/watcher-mobymask-v2/server-start.sh:/app/packages/mobymask-v2-watcher/server-start.sh
- moby_data_server:/server
- fixturenet_geth_accounts:/geth-accounts:ro
ports:
- "0.0.0.0:3001:3001"
- "0.0.0.0:9001:9001"
- "0.0.0.0:9090:9090"
healthcheck:
test: ["CMD", "busybox", "nc", "localhost", "9090"]
interval: 20s
timeout: 5s
retries: 15
start_period: 5s
# TODO: Move to a separate pod
mobymask-app:
depends_on:
mobymask-watcher-server:
condition: service_healthy
mobymask:
condition: service_healthy
image: cerc/mobymask-ui:local
command: ["sh", "mobymask-app-start.sh"]
volumes:
- ../config/watcher-mobymask-v2/mobymask-app.env:/app/.env
- ../config/watcher-mobymask-v2/mobymask-app-config.json:/app/src/mobymask-app-config.json
- ../config/watcher-mobymask-v2/mobymask-app-start.sh:/app/mobymask-app-start.sh
- moby_data_server:/server
ports:
- "0.0.0.0:3002:3000"
healthcheck:
test: ["CMD", "nc", "-v", "localhost", "3000"]
interval: 20s
timeout: 5s
retries: 15
start_period: 10s
shm_size: '1GB'
peer-test-app:
depends_on:
mobymask-watcher-server:
condition: service_healthy
image: cerc/react-peer:local
working_dir: /app/packages/test-app
command: ["sh", "-c", "yarn build && serve -s build"]
volumes:
- ../config/watcher-mobymask-v2/test-app-config.json:/app/packages/test-app/src/config.json
ports:
- "0.0.0.0:3003:3000"
healthcheck:
test: ["CMD", "nc", "-v", "localhost", "3000"]
interval: 20s
timeout: 5s
retries: 15
start_period: 10s
volumes:
mobymask_watcher_db_data:
moby_data_server:
fixturenet_geth_accounts:
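Given the host port mappings above, the main endpoints land on the host as follows (the watcher URI matches the mobymask-app env file elsewhere in this diff):
```bash
curl -s http://localhost:3001/graphql   # mobymask-watcher-server GQL
# http://localhost:3002  mobymask-app
# http://localhost:3003  peer-test-app
```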

View File

@ -0,0 +1,9 @@
apiVersion: 1
providers:
- name: dashboards
type: file
updateIntervalSeconds: 10
options:
path: /etc/grafana/dashboards
foldersFromFilesStructure: true

View File

@ -0,0 +1,19 @@
apiVersion: 1
datasources:
- id: 1
uid: jZUuGao4k
orgId: 1
name: Prometheus
type: prometheus
typeName: Prometheus
typeLogoUrl: public/app/plugins/datasource/prometheus/img/prometheus_logo.svg
access: proxy
url: http://prometheus:9090
user: ""
database: ""
basicAuth: false
isDefault: true
jsonData:
httpMethod: POST
readOnly: false

View File

@ -0,0 +1,34 @@
global:
scrape_interval: 5s
evaluation_interval: 15s
scrape_configs:
# ipld-eth-server
- job_name: 'ipld-eth-server'
metrics_path: /metrics
scrape_interval: 5s
static_configs:
- targets: ['ipld-eth-server:8090']
# geth
- job_name: 'geth'
metrics_path: /debug/metrics/prometheus
scheme: http
static_configs:
- targets: ['fixturenet-eth-geth-1:6060']
# nginx
- job_name: 'nginx'
scrape_interval: 5s
metrics_path: /metrics
scheme: http
static_configs:
- targets: ['keycloak-nginx-prometheus-exporter:9113']
# keycloak
- job_name: 'keycloak'
scrape_interval: 5s
metrics_path: /auth/realms/cerc/metrics
scheme: http
static_configs:
- targets: ['keycloak:8080']
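To verify that the four scrape jobs above are healthy, query prometheus's HTTP API (substitute the host port mapped to the container's 9090):
```bash
curl -s http://localhost:9090/api/v1/targets | jq -r '.data.activeTargets[] | .labels.job + ": " + .health'
```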

View File

@ -19,3 +19,5 @@ CERC_STATEDIFF_DB_USER="vdbm"
CERC_STATEDIFF_DB_PASSWORD="password"
CERC_STATEDIFF_DB_GOOSE_MIN_VER=23
CERC_STATEDIFF_DB_LOG_STATEMENTS="false"
CERC_GETH_VMODULE="statediff/*=5,rpc/*=5"

View File

@ -1,8 +1,8 @@
#!/bin/sh
# Originally from: https://github.com/cerc-io/laconicd/blob/main/init.sh
# TODO: fold this back into the laconicd repo
#!/bin/bash
# TODO: this file is now an unmodified copy of cerc-io/laconicd/init.sh
# so we should have a mechanism to bundle it inside the container rather than link from here
# at deploy time.
KEY="mykey"
CHAINID="laconic_9000-1"
@ -10,7 +10,7 @@ MONIKER="localtestnet"
KEYRING="test"
KEYALGO="eth_secp256k1"
LOGLEVEL="info"
# to trace evm
# trace evm
TRACE="--trace"
# TRACE=""
@ -28,7 +28,7 @@ laconicd config chain-id $CHAINID
# if $KEY exists it should be deleted
laconicd keys add $KEY --keyring-backend $KEYRING --algo $KEYALGO
# Set moniker and chain-id for laconic (Moniker can be anything, chain-id must be an integer)
# Set moniker and chain-id for Ethermint (Moniker can be anything, chain-id must be an integer)
laconicd init $MONIKER --chain-id $CHAINID
# Change parameter token denominations to aphoton
@ -37,28 +37,28 @@ cat $HOME/.laconicd/config/genesis.json | jq '.app_state["crisis"]["constant_fee
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["gov"]["deposit_params"]["min_deposit"][0]["denom"]="aphoton"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["mint"]["params"]["mint_denom"]="aphoton"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
# Custom modules
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["nameservice"]["params"]["record_rent"]["denom"]="aphoton"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["nameservice"]["params"]["authority_rent"]["denom"]="aphoton"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["nameservice"]["params"]["authority_auction_commit_fee"]["denom"]="aphoton"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["nameservice"]["params"]["authority_auction_reveal_fee"]["denom"]="aphoton"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["nameservice"]["params"]["authority_auction_minimum_bid"]["denom"]="aphoton"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["registry"]["params"]["record_rent"]["denom"]="aphoton"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["registry"]["params"]["authority_rent"]["denom"]="aphoton"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["registry"]["params"]["authority_auction_commit_fee"]["denom"]="aphoton"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["registry"]["params"]["authority_auction_reveal_fee"]["denom"]="aphoton"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["registry"]["params"]["authority_auction_minimum_bid"]["denom"]="aphoton"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
if [[ "$TEST_NAMESERVICE_EXPIRY" == "true" ]]; then
if [[ "$TEST_REGISTRY_EXPIRY" == "true" ]]; then
echo "Setting timers for expiry tests."
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["nameservice"]["params"]["record_rent_duration"]="60s"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["nameservice"]["params"]["authority_grace_period"]="60s"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["nameservice"]["params"]["authority_rent_duration"]="60s"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["registry"]["params"]["record_rent_duration"]="60s"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["registry"]["params"]["authority_grace_period"]="60s"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["registry"]["params"]["authority_rent_duration"]="60s"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
fi
if [[ "$TEST_AUCTION_ENABLED" == "true" ]]; then
echo "Enabling auction and setting timers."
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["nameservice"]["params"]["authority_auction_enabled"]=true' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["nameservice"]["params"]["authority_rent_duration"]="60s"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["nameservice"]["params"]["authority_grace_period"]="300s"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["nameservice"]["params"]["authority_auction_commits_duration"]="60s"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["nameservice"]["params"]["authority_auction_reveals_duration"]="60s"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["registry"]["params"]["authority_auction_enabled"]=true' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["registry"]["params"]["authority_rent_duration"]="60s"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["registry"]["params"]["authority_grace_period"]="300s"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["registry"]["params"]["authority_auction_commits_duration"]="60s"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
cat $HOME/.laconicd/config/genesis.json | jq '.app_state["registry"]["params"]["authority_auction_reveals_duration"]="60s"' > $HOME/.laconicd/config/tmp_genesis.json && mv $HOME/.laconicd/config/tmp_genesis.json $HOME/.laconicd/config/genesis.json
fi
# increase block time (?)

View File

@ -0,0 +1,2 @@
#!/bin/sh
echo y | laconicd keys export mykey --unarmored-hex --unsafe

View File

@ -0,0 +1,7 @@
services:
cns:
restEndpoint: 'http://laconicd:1317'
gqlEndpoint: 'http://laconicd:9473/api'
userKey: REPLACE_WITH_MYKEY
bondId:
chainId: laconic_9000-1

View File

@ -0,0 +1,11 @@
#!/bin/sh
set -e
op-node genesis l2 \
--deploy-config /contracts-bedrock/deploy-config/getting-started.json \
--deployment-dir /contracts-bedrock/deployments/getting-started/ \
--outfile.l2 /app/genesis.json \
--outfile.rollup /app/rollup.json \
--l1-rpc $L1_RPC
openssl rand -hex 32 > /app/jwt.txt

View File

@ -0,0 +1,28 @@
import fs from 'fs'
import { task } from 'hardhat/config'
import { hdkey } from 'ethereumjs-wallet'
import * as bip39 from 'bip39'
task('rekey-json', 'Generates a new set of keys for a test network')
.addParam('output', 'JSON file to output accounts to')
.setAction(async ({ output: outputFile }) => {
const mnemonic = bip39.generateMnemonic()
const pathPrefix = "m/44'/60'/0'/0"
const labels = ['Admin', 'Proposer', 'Batcher', 'Sequencer']
const hdwallet = hdkey.fromMasterSeed(await bip39.mnemonicToSeed(mnemonic))
const output = {}
for (let i = 0; i < labels.length; i++) {
const label = labels[i]
const wallet = hdwallet.derivePath(`${pathPrefix}/${i}`).getWallet()
const addr = '0x' + wallet.getAddress().toString('hex')
const pk = wallet.getPrivateKey().toString('hex')
output[label] = { address: addr, privateKey: pk }
}
fs.writeFileSync(outputFile, JSON.stringify(output, null, 2))
console.log(`L2 account keys written to ${outputFile}`)
})
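Usage, as invoked by the contracts run script later in this diff:
```bash
yarn hardhat rekey-json --output /l2-accounts/keys.json
```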

View File

@ -0,0 +1,88 @@
#!/bin/bash
set -e
# TODO Support restarts; fixturenet-eth-geth currently starts fresh on a restart
# Exit if a deployment already exists (on restarts)
# if [ -d "deployments/getting-started" ]; then
# echo "Deployment directory deployments/getting-started already exists, exiting"
# exit 0
# fi
# Append task imports to tasks/index.ts
echo "import './rekey-json'" >> tasks/index.ts
echo "import './send-balance'" >> tasks/index.ts
# Update the chainId in the hardhat config
sed -i "/getting-started/ {n; s/.*chainId.*/ chainId: $CHAIN_ID,/}" hardhat.config.ts
# Generate the L2 account addresses
yarn hardhat rekey-json --output /l2-accounts/keys.json
# Read JSON file into variable
KEYS_JSON=$(cat /l2-accounts/keys.json)
# Parse JSON into variables
ADMIN_ADDRESS=$(echo "$KEYS_JSON" | jq -r '.Admin.address')
ADMIN_PRIV_KEY=$(echo "$KEYS_JSON" | jq -r '.Admin.privateKey')
PROPOSER_ADDRESS=$(echo "$KEYS_JSON" | jq -r '.Proposer.address')
BATCHER_ADDRESS=$(echo "$KEYS_JSON" | jq -r '.Batcher.address')
SEQUENCER_ADDRESS=$(echo "$KEYS_JSON" | jq -r '.Sequencer.address')
# Read the private key of L1 accounts
# TODO: Take from env if /geth-accounts volume doesn't exist to allow using separately running L1
L1_ADDRESS=$(head -n 1 /geth-accounts/accounts.csv | cut -d ',' -f 2)
L1_PRIV_KEY=$(head -n 1 /geth-accounts/accounts.csv | cut -d ',' -f 3)
L1_ADDRESS_2=$(awk -F, 'NR==2{print $(NF-1)}' /geth-accounts/accounts.csv)
L1_PRIV_KEY_2=$(awk -F, 'NR==2{print $NF}' /geth-accounts/accounts.csv)
# Send balances to the above L2 addresses
yarn hardhat send-balance --to "${ADMIN_ADDRESS}" --amount 2 --private-key "${L1_PRIV_KEY}" --network getting-started
yarn hardhat send-balance --to "${PROPOSER_ADDRESS}" --amount 5 --private-key "${L1_PRIV_KEY}" --network getting-started
yarn hardhat send-balance --to "${BATCHER_ADDRESS}" --amount 1000 --private-key "${L1_PRIV_KEY}" --network getting-started
echo "Balances sent to L2 accounts"
# Select a finalized L1 block as the starting point for rollups
# TODO Use web3.js to get the latest finalized block
until CAST_OUTPUT=$(cast block finalized --rpc-url "$L1_RPC"); do
echo "Waiting for a finalized L1 block to exist, retrying after 10s"
sleep 10
done
L1_BLOCKHASH=$(echo "$CAST_OUTPUT" | awk '/hash/{print $2}')
L1_BLOCKTIMESTAMP=$(echo "$CAST_OUTPUT" | awk '/timestamp/{print $2}')
# Update the deployment config
sed -i 's/"l2OutputOracleStartingTimestamp": TIMESTAMP/"l2OutputOracleStartingTimestamp": '"$L1_BLOCKTIMESTAMP"'/g' deploy-config/getting-started.json
jq --arg chainid "$CHAIN_ID" '.l1ChainID = ($chainid | tonumber)' deploy-config/getting-started.json > tmp.json && mv tmp.json deploy-config/getting-started.json
node update-config.js deploy-config/getting-started.json "$ADMIN_ADDRESS" "$PROPOSER_ADDRESS" "$BATCHER_ADDRESS" "$SEQUENCER_ADDRESS" "$L1_BLOCKHASH"
echo "Updated the deployment config"
# Create a .env file
echo "L1_RPC=$L1_RPC" > .env
echo "PRIVATE_KEY_DEPLOYER=$ADMIN_PRIV_KEY" >> .env
echo "Deploying the L1 smart contracts, this will take a while..."
# Deploy the L1 smart contracts
yarn hardhat deploy --network getting-started
echo "Deployed the L1 smart contracts"
# Read Proxy contract's JSON and get the address
PROXY_JSON=$(cat deployments/getting-started/Proxy__OVM_L1StandardBridge.json)
PROXY_ADDRESS=$(echo "$PROXY_JSON" | jq -r '.address')
# Send balance to the above Proxy contract on L1 so that it is reflected in L2
# First account
yarn hardhat send-balance --to "${PROXY_ADDRESS}" --amount 1 --private-key "${L1_PRIV_KEY}" --network getting-started
# Second account
yarn hardhat send-balance --to "${PROXY_ADDRESS}" --amount 1 --private-key "${L1_PRIV_KEY_2}" --network getting-started
echo "Balance sent to Proxy L2 contract"
echo "Use following accounts for transactions in L2:"
echo "${L1_ADDRESS}"
echo "${L1_ADDRESS_2}"
echo "Done"

View File

@ -0,0 +1,22 @@
import { task } from 'hardhat/config'
import '@nomiclabs/hardhat-ethers'
import { ethers } from 'ethers'
task('send-balance', 'Sends Ether to a specified Ethereum account')
.addParam('to', 'The Ethereum address to send Ether to')
.addParam('amount', 'The amount of Ether to send, in Ether')
.addParam('privateKey', 'The private key of the sender')
.setAction(async ({ to, amount, privateKey }, {}) => {
// Open the wallet using sender's private key
const provider = new ethers.providers.JsonRpcProvider(`${process.env.L1_RPC}`)
const wallet = new ethers.Wallet(privateKey, provider)
// Send amount to the specified address
const tx = await wallet.sendTransaction({
to,
value: ethers.utils.parseEther(amount),
})
console.log(`Balance sent to: ${to}, from: ${wallet.address}`)
console.log(`Transaction hash: ${tx.hash}`)
})
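Usage, as invoked from the contracts run script (amount is in Ether; `L1_RPC` must be set in the environment):
```bash
yarn hardhat send-balance --to "${ADMIN_ADDRESS}" --amount 2 --private-key "${L1_PRIV_KEY}" --network getting-started
```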

View File

@ -0,0 +1,36 @@
const fs = require('fs')
// Get the command-line argument
const configFile = process.argv[2]
const adminAddress = process.argv[3]
const proposerAddress = process.argv[4]
const batcherAddress = process.argv[5]
const sequencerAddress = process.argv[6]
const blockHash = process.argv[7]
// Read the JSON file
const configData = fs.readFileSync(configFile)
const configObj = JSON.parse(configData)
// Update the finalSystemOwner property with the ADMIN_ADDRESS value
configObj.finalSystemOwner =
configObj.portalGuardian =
configObj.controller =
configObj.l2OutputOracleChallenger =
configObj.proxyAdminOwner =
configObj.baseFeeVaultRecipient =
configObj.l1FeeVaultRecipient =
configObj.sequencerFeeVaultRecipient =
configObj.governanceTokenOwner =
adminAddress
configObj.l2OutputOracleProposer = proposerAddress
configObj.batchSenderAddress = batcherAddress
configObj.p2pSequencerAddress = sequencerAddress
configObj.l1StartingBlockTag = blockHash
// Write the updated JSON object back to the file
fs.writeFileSync(configFile, JSON.stringify(configObj, null, 2))
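Invocation, as used by the contracts run script (positional args: config file, then admin, proposer, batcher and sequencer addresses, then the L1 starting block hash):
```bash
node update-config.js deploy-config/getting-started.json "$ADMIN_ADDRESS" "$PROPOSER_ADDRESS" "$BATCHER_ADDRESS" "$SEQUENCER_ADDRESS" "$L1_BLOCKHASH"
```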

View File

@ -0,0 +1,21 @@
#!/bin/sh
set -e
# Get BATCHER_KEY from keys.json
BATCHER_KEY=$(jq -r '.Batcher.privateKey' /l2-accounts/keys.json | tr -d '"')
op-batcher \
--l2-eth-rpc=http://op-geth:8545 \
--rollup-rpc=http://op-node:8547 \
--poll-interval=1s \
--sub-safety-margin=6 \
--num-confirmations=1 \
--safe-abort-nonce-too-low-count=3 \
--resubmission-timeout=30s \
--rpc.addr=0.0.0.0 \
--rpc.port=8548 \
--rpc.enable-admin \
--max-channel-duration=1 \
--target-l1-tx-size-bytes=2048 \
--l1-eth-rpc=$L1_RPC \
--private-key=$BATCHER_KEY

View File

@ -0,0 +1,58 @@
#!/bin/sh
set -e
mkdir datadir
echo "pwd" > datadir/password
# TODO: Add in container build or use other tool
echo "installing jq"
apk update && apk add jq
# Get SEQUENCER KEY from keys.json
SEQUENCER_KEY=$(jq -r '.Sequencer.privateKey' /l2-accounts/keys.json | tr -d '"')
echo $SEQUENCER_KEY > datadir/block-signer-key
geth account import --datadir=datadir --password=datadir/password datadir/block-signer-key
while [ ! -f "/op-node/jwt.txt" ]
do
echo "Config files not created. Checking after 5 seconds."
sleep 5
done
echo "Config files created by op-node, proceeding with script..."
cp /op-node/genesis.json ./
geth init --datadir=datadir genesis.json
SEQUENCER_ADDRESS=$(jq -r '.Sequencer.address' /l2-accounts/keys.json | tr -d '"')
echo "SEQUENCER_ADDRESS: ${SEQUENCER_ADDRESS}"
cp /op-node/jwt.txt ./
geth \
--datadir ./datadir \
--http \
--http.corsdomain="*" \
--http.vhosts="*" \
--http.addr=0.0.0.0 \
--http.api=web3,debug,eth,txpool,net,engine \
--ws \
--ws.addr=0.0.0.0 \
--ws.port=8546 \
--ws.origins="*" \
--ws.api=debug,eth,txpool,net,engine \
--syncmode=full \
--gcmode=full \
--nodiscover \
--maxpeers=0 \
--networkid=42069 \
--authrpc.vhosts="*" \
--authrpc.addr=0.0.0.0 \
--authrpc.port=8551 \
--authrpc.jwtsecret=./jwt.txt \
--rollup.disabletxpoolgossip=true \
--password=./datadir/password \
--allow-insecure-unlock \
--mine \
--miner.etherbase=$SEQUENCER_ADDRESS \
--unlock=$SEQUENCER_ADDRESS

View File

@ -0,0 +1,20 @@
#!/bin/sh
set -e
# Get SEQUENCER KEY from keys.json
SEQUENCER_KEY=$(jq -r '.Sequencer.privateKey' /l2-accounts/keys.json | tr -d '"')
op-node \
--l2=http://op-geth:8551 \
--l2.jwt-secret=/op-node-data/jwt.txt \
--sequencer.enabled \
--sequencer.l1-confs=3 \
--verifier.l1-confs=3 \
--rollup.config=/op-node-data/rollup.json \
--rpc.addr=0.0.0.0 \
--rpc.port=8547 \
--p2p.disable \
--rpc.enable-admin \
--p2p.sequencer.key=$SEQUENCER_KEY \
--l1=$L1_RPC \
--l1.rpckind=any

View File

@ -0,0 +1,2 @@
[profile.default]
eth-rpc-url = "http://fixturenet-eth-geth-1:8545"

View File

@ -20,16 +20,19 @@ server {
proxy_pass http://fixturenet-eth-geth-1:8545;
}
### ipld-eth-server
## ipld-eth-server
# location ~ ^/ipld/eth/([^/]*)$ {
# set $apiKey $1;
# if ($apiKey = '') {
# set $apiKey $http_X_API_KEY;
# }
# auth_request /auth;
# auth_request_set $user_id $sent_http_x_user_id;
# proxy_buffering off;
# rewrite /.*$ / break;
# proxy_pass http://ipld-eth-server:8081;
# proxy_set_header X-Original-Remote-Addr $remote_addr;
# proxy_set_header X-User-Id $user_id;
# }
#
# location ~ ^/ipld/gql/([^/]*)$ {
@ -42,14 +45,14 @@ server {
# rewrite /.*$ / break;
# proxy_pass http://ipld-eth-server:8082;
# }
#
### lighthouse
# location /beacon/ {
# set $apiKey $http_X_API_KEY;
# auth_request /auth;
# proxy_buffering off;
# proxy_pass http://fixturenet-eth-lighthouse-1:8001/;
# }
## lighthouse
location /beacon/ {
set $apiKey $http_X_API_KEY;
auth_request /auth;
proxy_buffering off;
proxy_pass http://fixturenet-eth-lighthouse-1:8001/;
}
location = /auth {
internal;
@ -63,7 +66,7 @@ server {
proxy_set_header X-Original-Host $host;
}
# location = /basic_status {
# stub_status;
# }
location = /stub_status {
stub_status;
}
}

View File

@ -0,0 +1,11 @@
#!/bin/sh
set -e
# Read the private key of the L1 account used to deploy the contract
# TODO: Take from env if /geth-accounts volume doesn't exist to allow using separately running L1
L1_PRIV_KEY=$(head -n 1 /geth-accounts/accounts.csv | cut -d ',' -f 3)
# Set the private key
jq --arg privateKey "$L1_PRIV_KEY" '.privateKey = ($privateKey)' secrets-template.json > secrets.json
npm start

View File

@ -0,0 +1,8 @@
{
"relayNodes": [
"/ip4/127.0.0.1/tcp/9090/ws/p2p/12D3KooWSPCsVkHVyLQoCqhu2YRPvvM7o6r6NRYyLM5zeA6Uig5t"
],
"peer": {
"enableDebugInfo": true
}
}

View File

@ -0,0 +1,9 @@
#!/bin/sh
set -e
# Merging config files to get deployed contract address
jq -s '.[0] * .[1]' /app/src/mobymask-app-config.json /server/config.json > /app/src/config.json
npm run build
serve -s build
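
For reference, `jq -s '.[0] * .[1]'` slurps both files into a two-element array and applies jq's recursive object merge, with keys from the second file winning on conflict:

```bash
echo '{"a":1,"b":{"c":2}} {"b":{"d":3}}' | jq -s '.[0] * .[1]'
# => { "a": 1, "b": { "c": 2, "d": 3 } }
```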

View File

@ -0,0 +1 @@
REACT_APP_WATCHER_URI=http://localhost:3001/graphql

View File

@ -0,0 +1,5 @@
{
"id": "12D3KooWK6myjc8r1KBnfP9igp31qJkPaVfsKDjKrjoSefV5SDEo",
"privKey": "CAESQJMHbMaH+UEOtjGOzXYtoPO/cdHakCtN1hcnknIWzx/6ie1lxb+8kfzBjwt7apfj8fHlTCYSIVK8Q2AWu9a2h3g=",
"pubKey": "CAESIIntZcW/vJH8wY8Le2qX4/Hx5UwmEiFSvENgFrvWtod4"
}

View File

@ -0,0 +1 @@
RELAY="/ip4/127.0.0.1/tcp/9090/ws/p2p/12D3KooWSPCsVkHVyLQoCqhu2YRPvvM7o6r6NRYyLM5zeA6Uig5t"

View File

@ -0,0 +1,5 @@
{
"id": "12D3KooWSPCsVkHVyLQoCqhu2YRPvvM7o6r6NRYyLM5zeA6Uig5t",
"privKey": "CAESQGsqG5o4VlWJZM9XlA3MjabyZOXWQ2MLZU5AhBQsjXGt9iSlGtTuNOrHX5xSRgLBxLuMoqWsjGxE/dDB9c46RI8=",
"pubKey": "CAESIPYkpRrU7jTqx1+cUkYCwcS7jKKlrIxsRP3QwfXOOkSP"
}

View File

@ -0,0 +1,5 @@
{
"rpcUrl": "http://op-geth:8545",
"privateKey": "",
"baseURI": "http://127.0.0.1:3002/#"
}

View File

@ -0,0 +1,11 @@
#!/bin/sh
# Assign deployed contract address from server config
CONTRACT_ADDRESS=$(jq -r '.address' /server/config.json | tr -d '"')
L1_PRIV_KEY_2=$(awk -F, 'NR==2{print $NF}' /geth-accounts/accounts.csv)
sed "s/REPLACE_WITH_PRIVATE_KEY/${L1_PRIV_KEY_2}/" environments/watcher-config-template.toml > environments/local.toml
sed -i "s/REPLACE_WITH_CONTRACT_ADDRESS/${CONTRACT_ADDRESS}/" environments/local.toml
echo 'yarn server'
yarn server

View File

@ -0,0 +1,8 @@
{
"relayNodes": [
"/ip4/127.0.0.1/tcp/9090/ws/p2p/12D3KooWSPCsVkHVyLQoCqhu2YRPvvM7o6r6NRYyLM5zeA6Uig5t"
],
"peer": {
"enableDebugInfo": true
}
}

View File

@ -0,0 +1,74 @@
[server]
host = "0.0.0.0"
port = 3001
kind = "lazy"
# Checkpointing state.
checkpointing = true
# Checkpoint interval in number of blocks.
checkpointInterval = 2000
# Enable state creation
enableState = true
# Boolean to filter logs by contract.
filterLogs = true
# Max block range for which to return events in eventsInRange GQL query.
# Use -1 for skipping check on block range.
maxEventsBlockRange = -1
[server.p2p]
enableRelay = true
enablePeer = true
[server.p2p.relay]
host = "0.0.0.0"
port = 9090
relayPeers = []
peerIdFile = './relay-id.json'
enableDebugInfo = true
[server.p2p.peer]
relayMultiaddr = '/dns4/mobymask-watcher-server/tcp/9090/ws/p2p/12D3KooWSPCsVkHVyLQoCqhu2YRPvvM7o6r6NRYyLM5zeA6Uig5t'
pubSubTopic = 'mobymask'
peerIdFile = './peer-id.json'
enableDebugInfo = true
[server.p2p.peer.l2TxConfig]
privateKey = 'REPLACE_WITH_PRIVATE_KEY'
contractAddress = 'REPLACE_WITH_CONTRACT_ADDRESS'
[metrics]
host = "0.0.0.0"
port = 9000
[metrics.gql]
port = 9001
[database]
type = "postgres"
host = "mobymask-watcher-db"
port = 5432
database = "mobymask-watcher"
username = "vdbm"
password = "password"
synchronize = true
logging = false
[upstream]
[upstream.ethServer]
gqlApiEndpoint = "http://ipld-eth-server:8083/graphql"
rpcProviderEndpoint = "http://op-geth:8545"
blockDelayInMilliSecs = 60000
[upstream.cache]
name = "requests"
enabled = false
deleteOnStart = false
[jobQueue]
dbConnectionString = "postgres://vdbm:password@mobymask-watcher-db/mobymask-watcher-job-queue"
maxCompletionLagInSecs = 300
jobDelayInMilliSecs = 100
eventsInBatch = 50
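
Once the watcher is up, the GQL endpoint configured in the `[server]` section above can be probed; a minimal liveness check, assuming port 3001 is published locally:

```bash
# Any syntactically valid query will do
curl -s -X POST -H 'Content-Type: application/json' \
  --data '{"query":"{ __typename }"}' \
  http://localhost:3001/graphql
```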

View File

@ -0,0 +1,3 @@
#!/usr/bin/env bash
# Build a local version of the task executor for act-runner
docker build -t cerc/act-runner-task-executor:local -f ${CERC_REPO_BASE_DIR}/act_runner/Dockerfile.task-executor ${CERC_REPO_BASE_DIR}/act_runner

View File

@ -0,0 +1,3 @@
#!/usr/bin/env bash
# Build a local version of the act-runner image
docker build -t cerc/act-runner:local -f ${CERC_REPO_BASE_DIR}/act_runner/Dockerfile ${CERC_REPO_BASE_DIR}/act_runner

View File

@ -0,0 +1,31 @@
# From: https://github.com/vyzo/gerbil/blob/master/docker/Dockerfile
FROM gerbil/ubuntu
# Install the Solidity compiler (latest stable version)
# and guile
# and libsecp256k1-dev
RUN apt-get update && export DEBIAN_FRONTEND=noninteractive && export DEBCONF_NOWARNINGS="yes" && \
apt-get install -y software-properties-common && \
add-apt-repository ppa:ethereum/ethereum && \
apt-get update && \
apt-get install -y solc && \
apt-get install -y guile-3.0 && \
apt-get install -y libsecp256k1-dev && \
apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
RUN mkdir /scripts
COPY install-dependencies.sh /scripts
# Override the definition of GERBIL_PATH in the base image; this is safe because
# (at present) no gerbil packages are installed in the base image.
# We do this in order to allow a set of pre-installed packages from the container
# to be used with an arbitrary, potentially different set of projects bind mounted
# at /src
ENV GERBIL_PATH=/.gerbil
RUN bash /scripts/install-dependencies.sh
# Needed to prevent git from raging about /src
RUN git config --global --add safe.directory /src
COPY entrypoint.sh /scripts
ENTRYPOINT ["/scripts/entrypoint.sh"]

View File

@ -0,0 +1,21 @@
## Gerbil Scheme Builder
This container is designed to be used as a simple "build runner" environment for building and running Scheme projects using Gerbil and gerbil-ethereum. Its primary purpose is to allow build/test/run of gerbil code without the need to install and configure all the necessary prerequisites and dependencies on the host system.
### Usage
First build the container with:
```
$ laconic-so build-containers --include cerc/builder-gerbil
```
Now, assuming a gerbil project located at `~/projects/my-project`, run bash in the container mounting the project with:
```
$ docker run -it -v $HOME/projects/my-project:/src cerc/builder-gerbil:latest bash
root@7c4124bb09e3:/src#
```
Now gerbil commands can be run.

View File

@ -0,0 +1,2 @@
#!/bin/sh
exec "$@"

View File

@ -0,0 +1,16 @@
DEPS=(github.com/fare/gerbil-utils
github.com/fare/gerbil-poo
github.com/fare/gerbil-crypto
github.com/fare/gerbil-persist
github.com/fare/gerbil-ethereum
github.com/drewc/gerbil-swank
github.com/drewc/drewc-r7rs-swank
github.com/drewc/smug-gerbil
github.com/drewc/ftw
github.com/vyzo/gerbil-libp2p
) ;
for i in ${DEPS[@]} ; do
echo "Installing gerbil package: $i"
gxpkg install $i &&
gxpkg build $i
done

View File

@ -1,14 +1,34 @@
# Originally from: https://github.com/devcontainers/images/blob/main/src/javascript-node/.devcontainer/Dockerfile
# Which depends on: https://github.com/nodejs/docker-node/blob/main/Dockerfile-debian.template
# [Choice] Node.js version (use -bullseye variants on local arm64/Apple Silicon): 18, 16, 14, 18-bullseye, 16-bullseye, 14-bullseye, 18-buster, 16-buster, 14-buster
ARG VARIANT=16-bullseye
ARG VARIANT=18-bullseye
FROM node:${VARIANT}
# Set these args to change the uid/gid for the base container's "node" user to match that of the host user (so bind mounts work as expected).
ARG CERC_HOST_UID=1000
ARG CERC_HOST_GID=1000
# Make these values available at runtime to allow a consistency check.
ENV HOST_UID=${CERC_HOST_UID}
ENV HOST_GID=${CERC_HOST_GID}
ARG USERNAME=node
ARG NPM_GLOBAL=/usr/local/share/npm-global
# Add NPM global to PATH.
ENV PATH=${NPM_GLOBAL}/bin:${PATH}
SHELL ["/bin/bash", "-c"]
RUN \
# Don't switch container uid/gid if the host uid/gid is 1000 (which means it's already correct),
# or root (which won't work anyway) or <= 100 (which also won't work).
if [[ ${CERC_HOST_GID} -ne 1000 && ${CERC_HOST_GID} -ne 0 && ${CERC_HOST_GID} -gt 100 ]]; then \
groupmod -g ${CERC_HOST_GID} ${USERNAME}; \
fi \
&& if [[ ${CERC_HOST_UID} -ne 1000 && ${CERC_HOST_UID} -ne 0 && ${CERC_HOST_UID} -gt 100 ]]; then \
usermod -u ${CERC_HOST_UID} -g ${CERC_HOST_GID} ${USERNAME} && chown ${CERC_HOST_UID}:${CERC_HOST_GID} /home/${USERNAME}; \
fi
RUN \
# Configure global npm install location, use group to adapt to UID/GID changes
if ! cat /etc/group | grep -e "^npm:" > /dev/null 2>&1; then groupadd -r npm; fi \
@ -39,6 +59,7 @@ RUN mkdir /scripts
COPY build-npm-package.sh /scripts
COPY yarn-local-registry-fixup.sh /scripts
COPY build-npm-package-local-dependencies.sh /scripts
COPY check-uid.sh /scripts
ENV PATH="${PATH}:/scripts"
COPY entrypoint.sh .
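
When building this image by hand (rather than via `default-build.sh`, which now passes these args automatically, as shown later in this diff), the uid/gid args can be supplied like so; a sketch, run from the directory containing this Dockerfile:

```bash
docker build -t cerc/builder-js:local \
  --build-arg CERC_HOST_UID=$(id -u) \
  --build-arg CERC_HOST_GID=$(id -g) .
```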

View File

@ -1,7 +1,7 @@
#!/bin/bash
# Usage: build-npm-package-local-dependencies.sh <registry-url> <publish-with-this-version>
# Runs build-npm-package.sh after first fixing up yarn.lock to use a local
# npm registry for all packages in a specific scope (currently @cerc-io)
# npm registry for all packages in a specific scope (currently @cerc-io and @lirewine)
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
@ -13,20 +13,25 @@ if [[ -z "${CERC_NPM_AUTH_TOKEN}" ]]; then
echo "CERC_NPM_AUTH_TOKEN is not set" >&2
exit 1
fi
# Exit on error
set -e
local_npm_registry_url=$1
package_publish_version=$2
# TODO: make this a parameter and allow a list of scopes
npm_scope_for_local="@cerc-io"
# We need to configure the local registry
npm config set ${npm_scope_for_local}:registry ${local_npm_registry_url}
npm config set -- ${local_npm_registry_url}:_authToken ${CERC_NPM_AUTH_TOKEN}
# Find the set of dependencies from the specified scope
mapfile -t dependencies_from_scope < <(cat package.json | jq -r '.dependencies | with_entries(if (.key|test("^'${npm_scope_for_local}'/.*$")) then ( {key: .key, value: .value } ) else empty end ) | keys[]')
echo "Fixing up dependencies"
for package in "${dependencies_from_scope[@]}"
# If we need to handle an additional scope, add it to the list below:
npm_scopes_to_handle=("@cerc-io" "@lirewine")
for npm_scope_for_local in ${npm_scopes_to_handle[@]}
do
echo "Fixing up package ${package}"
yarn-local-registry-fixup.sh $package ${local_npm_registry_url}
# We need to configure the local registry
npm config set ${npm_scope_for_local}:registry ${local_npm_registry_url}
npm config set -- ${local_npm_registry_url}:_authToken ${CERC_NPM_AUTH_TOKEN}
# Find the set of dependencies from the specified scope
mapfile -t dependencies_from_scope < <(cat package.json | jq -r '.dependencies | with_entries(if (.key|test("^'${npm_scope_for_local}'/.*$")) then ( {key: .key, value: .value } ) else empty end ) | keys[]')
echo "Fixing up dependencies in scope ${npm_scope_for_local}"
for package in "${dependencies_from_scope[@]}"
do
echo "Fixing up package ${package}"
yarn-local-registry-fixup.sh $package ${local_npm_registry_url}
done
done
echo "Running build"
build-npm-package.sh ${local_npm_registry_url} ${package_publish_version}
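
An illustrative invocation (the registry URL and version here are assumptions, not values from this repo):

```bash
export CERC_NPM_AUTH_TOKEN=<your-token>
build-npm-package-local-dependencies.sh http://gitea.local:3000/api/packages/cerc-io/npm/ 0.1.0
```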

View File

@ -17,14 +17,16 @@ if [[ $# -eq 2 ]]; then
else
package_publish_version=$( cat package.json | jq -r .version )
fi
# Exit on error
set -e
# Get the name of this package from package.json since we weren't passed that
package_name=$( cat package.json | jq -r .name )
local_npm_registry_url=$1
npm config set @lirewine:registry ${local_npm_registry_url}
npm config set @cerc-io:registry ${local_npm_registry_url}
npm config set @lirewine:registry ${local_npm_registry_url}
npm config set -- ${local_npm_registry_url}:_authToken ${CERC_NPM_AUTH_TOKEN}
# First check if the version of this package we're trying to build already exists in the registry
package_exists=$( yarn info --json ${package_name}@${package_publish_version} | jq -r .data.dist.tarball )
package_exists=$( yarn info --json ${package_name}@${package_publish_version} 2>/dev/null | jq -r .data.dist.tarball )
if [[ ! -z "$package_exists" && "$package_exists" != "null" ]]; then
echo "${package_publish_version} of ${package_name} already exists in the registry, skipping build"
exit 0

View File

@ -0,0 +1,21 @@
#!/bin/bash
# Make the container usable for uid/gid != 1000
if [[ -n "$CERC_SCRIPT_DEBUG" ]]; then
set -x
fi
current_uid=$(id -u)
current_gid=$(id -g)
# Don't check if running as root
if [[ ${current_uid} == 0 ]]; then
exit 0
fi
# Check the current uid/gid vs the uid/gid used to build the container.
# We do this because both bind mounts and npm tooling require the uid/gid to match.
if [[ ${current_gid} != ${HOST_GID} ]]; then
echo "Warning: running with gid: ${current_gid} which is not the gid for which this container was built (${HOST_GID})"
exit 0
fi
if [[ ${current_uid} != ${HOST_UID} ]]; then
echo "Warning: running with gid: ${current_uid} which is not the uid for which this container was built (${HOST_UID})"
exit 0
fi

View File

@ -1,2 +1,3 @@
#!/bin/sh
/scripts/check-uid.sh
exec "$@"

View File

@ -14,14 +14,24 @@ if [[ $# -ne 2 ]]; then
echo "Illegal number of parameters" >&2
exit 1
fi
# Exit on error
set -e
target_package=$1
local_npm_registry_url=$2
# TODO: use jq rather than sed here:
versioned_target_package=$(grep ${target_package} package.json | sed -e 's#[[:space:]]\{1,\}\"\('${target_package}'\)\":[[:space:]]\{1,\}\"\(.*\)\",#\1@\2#' )
# Extract the actual version pinned in yarn.lock
# See: https://stackoverflow.com/questions/60454251/how-to-know-the-version-of-currently-installed-package-from-yarn-lock
versioned_target_package=$(yarn list --pattern ${target_package} --depth=0 --json --non-interactive --no-progress | jq -r '.data.trees[].name')
# Use yarn info to get URL checksums etc from the new registry
yarn_info_output=$(yarn info --json $versioned_target_package 2>/dev/null)
# Code below parses out the values we need
# First check if the target version actually exists.
# If it doesn't exist there will be no .data.dist.tarball element,
# and jq will output the string "null"
package_tarball=$(echo $yarn_info_output | jq -r .data.dist.tarball)
if [[ $package_tarball == "null" ]]; then
echo "FATAL: Target package version ($versioned_target_package) not found" >&2
exit 1
fi
# Code below parses out the values we need
# When running inside a container, the registry can return a URL with the wrong host name due to proxying
# so we need to check if that has happened and fix the URL if so.
if ! [[ "${package_tarball}" =~ ^${local_npm_registry_url}.* ]]; then
@ -33,6 +43,7 @@ package_integrity=$(echo $yarn_info_output | jq -r .data.dist.integrity)
package_shasum=$(echo $yarn_info_output | jq -r .data.dist.shasum)
package_resolved=${package_tarball}#${package_shasum}
# Some strings need to be escaped so they work when passed to sed later
escaped_package_integrity=$(printf '%s\n' "$package_integrity" | sed -e 's/[\/&]/\\&/g')
escaped_package_resolved=$(printf '%s\n' "$package_resolved" | sed -e 's/[\/&]/\\&/g')
escaped_target_package=$(printf '%s\n' "$target_package" | sed -e 's/[\/&]/\\&/g')
if [ -n "$CERC_SCRIPT_VERBOSE" ]; then
@ -44,4 +55,4 @@ fi
# Use magic sed regex to replace the values in yarn.lock
# Note: yarn.lock is not json so we can not use jq for this
sed -i -e '/^\"'${escaped_target_package}'.*\":$/ , /^\".*$/ s/^\([[:space:]]\{1,\}resolved \).*$/\1'\"${escaped_package_resolved}\"'/' yarn.lock
sed -i -e '/^\"'${escaped_target_package}'.*\":$/ , /^\".*$/ s/^\([[:space:]]\{1,\}integrity \).*$/\1'${package_integrity}'/' yarn.lock
sed -i -e '/^\"'${escaped_target_package}'.*\":$/ , /^\".*$/ s/^\([[:space:]]\{1,\}integrity \).*$/\1'${escaped_package_integrity}'/' yarn.lock
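
An illustrative standalone run (the package and registry URL are assumptions):

```bash
# Repoint the @cerc-io/laconic-sdk entries in the current yarn.lock at the local registry
yarn-local-registry-fixup.sh @cerc-io/laconic-sdk http://gitea.local:3000/api/packages/cerc-io/npm/
```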

View File

@ -1,5 +1,9 @@
#!/bin/bash
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
ETHERBASE=`cat /opt/testnet/build/el/accounts.csv | head -1 | cut -d',' -f2`
NETWORK_ID=`cat /opt/testnet/el/el-config.yaml | grep 'chain_id' | awk '{ print $2 }'`
NETRESTRICT=`ip addr | grep inet | grep -v '127.0' | awk '{print $2}'`
@ -28,7 +32,9 @@ else
echo -n "$JWT" > /opt/testnet/build/el/jwtsecret
if [ "$CERC_RUN_STATEDIFF" == "detect" ] && [ -n "$CERC_STATEDIFF_DB_HOST" ]; then
if [ -n "$(dig $CERC_STATEDIFF_DB_HOST +short)" ]; then
dig_result=$(dig $CERC_STATEDIFF_DB_HOST +short)
dig_status_code=$?
if [[ $dig_status_code = 0 && -n $dig_result ]]; then
echo "Statediff DB at $CERC_STATEDIFF_DB_HOST"
CERC_RUN_STATEDIFF="true"
else
@ -62,6 +68,7 @@ else
--statediff.db.port=$CERC_STATEDIFF_DB_PORT \
--statediff.db.user=$CERC_STATEDIFF_DB_USER \
--statediff.db.logstatements=${CERC_STATEDIFF_DB_LOG_STATEMENTS:-false} \
--statediff.db.copyfrom=${CERC_STATEDIFF_DB_COPY_FROM:-true} \
--statediff.waitforsync=true \
--statediff.writing=true"
fi
@ -72,11 +79,16 @@ else
--http \
--http.addr="0.0.0.0" \
--http.vhosts="*" \
--http.api="eth,web3,net,admin,personal,debug,statediff" \
--http.api="${CERC_GETH_HTTP_APIS:-eth,web3,net,admin,personal,debug,statediff}" \
--http.corsdomain="*" \
--authrpc.addr="0.0.0.0" \
--authrpc.vhosts="*" \
--authrpc.jwtsecret="/opt/testnet/build/el/jwtsecret" \
--ws \
--ws.addr="0.0.0.0" \
--ws.origins="*" \
--ws.api="${CERC_GETH_WS_APIS:-eth,web3,net,admin,personal,debug,statediff}" \
--networkid="${NETWORK_ID}" \
--netrestrict="${NETRESTRICT}" \
--gcmode archive \
@ -86,6 +98,7 @@ else
--mine \
--miner.threads=1 \
--metrics \
--metrics.addr="0.0.0.0" \
--verbosity=${CERC_GETH_VERBOSITY:-3} \
--vmodule="${CERC_GETH_VMODULE:-statediff/*=5}" \
--miner.etherbase="${ETHERBASE}" ${STATEDIFF_OPTS}
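
The environment variables introduced in this script can be exported before deploying the fixturenet to override the defaults shown above; example values (assumptions, not project defaults):

```bash
export CERC_GETH_HTTP_APIS="eth,net,web3"    # trim the HTTP API surface
export CERC_GETH_WS_APIS="eth,net,web3"      # trim the WS API surface
export CERC_STATEDIFF_DB_COPY_FROM="false"   # disable the COPY-FROM write path
```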

View File

@ -27,4 +27,8 @@ RUN cd /opt/testnet && make genesis-cl
# Work around some bugs in lcli where the default path is always used.
RUN mkdir -p /root/.lighthouse && cd /root/.lighthouse && ln -s /opt/testnet/build/cl/testnet
RUN mkdir -p /scripts
COPY scripts/status-internal.sh /scripts
COPY scripts/status.sh /scripts
ENTRYPOINT ["/opt/testnet/run.sh"]

View File

@ -0,0 +1,10 @@
#!/usr/bin/env bash
# Wrapper to facilitate using status.sh inside the container
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
export LIGHTHOUSE_BASE_URL="http://fixturenet-eth-lighthouse-1:8001"
export GETH_BASE_URL="http://fixturenet-eth-geth-1:8545"
# See: https://stackoverflow.com/a/246128/1701505
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
$SCRIPT_DIR/status.sh

View File

@ -1,5 +1,7 @@
#!/bin/bash
#!/usr/bin/env bash
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
STATUSES=("geth to generate DAG" "beacon phase0" "beacon altair" "beacon bellatrix pre-merge" "beacon bellatrix merge")
STATUS=0
@ -7,18 +9,22 @@ STATUS=0
LIGHTHOUSE_BASE_URL=${LIGHTHOUSE_BASE_URL}
GETH_BASE_URL=${GETH_BASE_URL}
# TODO: Docker commands below should be replaced by some interface into stack orchestrator
# or some execution environment-neutral mechanism.
if [ -z "$LIGHTHOUSE_BASE_URL" ]; then
LIGHTHOUSE_PORT=`docker ps -f "name=fixturenet-eth-lighthouse-1-1" --format "{{.Ports}}" | head -1 | cut -d':' -f2 | cut -d'-' -f1`
LIGHTHOUSE_CONTAINER=`docker ps -q -f "name=fixturenet-eth-lighthouse-1-1"`
LIGHTHOUSE_PORT=`docker port $LIGHTHOUSE_CONTAINER 8001 | cut -d':' -f2`
LIGHTHOUSE_BASE_URL="http://localhost:${LIGHTHOUSE_PORT}"
fi
if [ -z "$GETH_BASE_URL" ]; then
GETH_PORT=`docker ps -f "name=fixturenet-eth-geth-1-1" --format "{{.Ports}}" | head -1 | cut -d':' -f2 | cut -d'-' -f1`
GETH_CONTAINER=`docker ps -q -f "name=fixturenet-eth-geth-1-1"`
GETH_PORT=`docker port $GETH_CONTAINER 8545 | cut -d':' -f2`
GETH_BASE_URL="http://localhost:${GETH_PORT}"
fi
function inc_status() {
echo " DONE!"
echo " done"
STATUS=$((STATUS + 1))
if [ $STATUS -lt ${#STATUSES[@]} ]; then
echo -n "Waiting for ${STATUSES[$STATUS]}..."

View File

@ -0,0 +1,3 @@
#!/usr/bin/env bash
# Build a local version of the foundry-rs/foundry image
docker build -t cerc/foundry:local -f ${CERC_REPO_BASE_DIR}/foundry/Dockerfile-debian ${CERC_REPO_BASE_DIR}/foundry

View File

@ -1,7 +1,9 @@
FROM ghcr.io/foundry-rs/foundry
# Note: cerc/foundry is Debian based
FROM cerc/foundry:local
RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
&& apt-get -y install --no-install-recommends jq curl netcat
RUN apk update ; apk add --no-cache --allow-untrusted ca-certificates curl bash git jq
RUN apk add --no-cache --upgrade grep
WORKDIR /root
ARG GENESIS_FILE_PATH=genesis.json

View File

@ -1,4 +1,4 @@
FROM quay.io/keycloak/keycloak:20.0
WORKDIR /opt/keycloak/providers
RUN curl -L https://github.com/aerogear/keycloak-metrics-spi/releases/download/2.5.3/keycloak-metrics-spi-2.5.3.jar --output keycloak-metrics-spi.jar
RUN curl -L https://github.com/cerc-io/keycloak-api-key-demo/releases/download/v0.1/api-key-module-0.1.jar --output api-key-module.jar
RUN curl -L https://github.com/cerc-io/keycloak-api-key-demo/releases/download/v0.3/api-key-module-0.3.jar --output api-key-module.jar

View File

@ -0,0 +1,67 @@
# Originally from: https://github.com/devcontainers/images/blob/main/src/javascript-node/.devcontainer/Dockerfile
# [Choice] Node.js version (use -bullseye variants on local arm64/Apple Silicon): 18, 16, 14, 18-bullseye, 16-bullseye, 14-bullseye, 18-buster, 16-buster, 14-buster
ARG VARIANT=16-bullseye
FROM node:${VARIANT}
ARG USERNAME=node
ARG NPM_GLOBAL=/usr/local/share/npm-global
# This container pulls npm packages from a local registry configured via these env vars
ARG CERC_NPM_URL
ARG CERC_NPM_AUTH_TOKEN
# Add NPM global to PATH.
ENV PATH=${NPM_GLOBAL}/bin:${PATH}
RUN \
# Configure global npm install location, use group to adapt to UID/GID changes
if ! cat /etc/group | grep -e "^npm:" > /dev/null 2>&1; then groupadd -r npm; fi \
&& usermod -a -G npm ${USERNAME} \
&& umask 0002 \
&& mkdir -p ${NPM_GLOBAL} \
&& touch /usr/local/etc/npmrc \
&& chown ${USERNAME}:npm ${NPM_GLOBAL} /usr/local/etc/npmrc \
&& chmod g+s ${NPM_GLOBAL} \
&& npm config -g set prefix ${NPM_GLOBAL} \
&& su ${USERNAME} -c "npm config -g set prefix ${NPM_GLOBAL}" \
# Install eslint
&& su ${USERNAME} -c "umask 0002 && npm install -g eslint" \
&& npm cache clean --force > /dev/null 2>&1
# [Optional] Uncomment this section to install additional OS packages.
RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
&& apt-get -y install --no-install-recommends jq
# [Optional] Uncomment if you want to install an additional version of node using nvm
# ARG EXTRA_NODE_VERSION=10
# RUN su node -c "source /usr/local/share/nvm/nvm.sh && nvm install ${EXTRA_NODE_VERSION}"
# Copy the yq binary from its published container so we get the right binary for the architecture we're building here
COPY --from=docker.io/mikefarah/yq:latest /usr/bin/yq /usr/local/bin/yq
RUN mkdir -p /scripts
COPY ./apply-webapp-config.sh /scripts
COPY ./start-serving-app.sh /scripts
# [Optional] Uncomment if you want to install more global node modules
# RUN su node -c "npm install -g <your-package-list-here>"
# Configure the local npm registry
RUN npm config set @cerc-io:registry ${CERC_NPM_URL} \
&& npm config set @lirewine:registry ${CERC_NPM_URL} \
&& npm config set -- ${CERC_NPM_URL}:_authToken ${CERC_NPM_AUTH_TOKEN}
RUN mkdir -p /config
COPY ./config.yml /config
# Install a simple web server for now (perhaps switch to nginx later)
RUN yarn global add http-server
# Globally install the payload web app package
RUN yarn global add @cerc-io/console-app
# Expose port for http
EXPOSE 80
# Default command: apply the config and serve the web app
CMD ["/scripts/start-serving-app.sh"]

View File

@ -0,0 +1,34 @@
#!/usr/bin/env bash
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
if [[ $# -ne 2 ]]; then
echo "Illegal number of parameters" >&2
exit 1
fi
config_file_name=$1
webapp_files_dir=$2
if [[ ! -f ${config_file_name} ]]; then
echo "Config file ${config_file_name} does not exist" >&2
exit 1
fi
if [[ ! -d ${webapp_files_dir} ]]; then
echo "Webapp directory ${webapp_files_dir} does not exist" >&2
exit 1
fi
# First some magic using yq to translate our yaml config file into an array of key value pairs like:
# LACONIC_HOSTED_CONFIG_<path-through-objects>=<value>
readarray -t config_kv_pair_array < <( yq '.. | select(length > 2) | ([path | join("_"), .] | join("=") )' ${config_file_name} | sed 's/^/LACONIC_HOSTED_CONFIG_/' )
declare -p config_kv_pair_array
# Then iterate over that kv array making the template substitution in our web app files
for kv_pair_string in "${config_kv_pair_array[@]}"
do
kv_pair=(${kv_pair_string//=/ })
template_string_to_replace=${kv_pair[0]}
template_value_to_substitute=${kv_pair[1]}
# Run find and sed to do the substitution of one variable over all files
# See: https://stackoverflow.com/a/21479607/1701505
echo "Substituting: ${template_string_to_replace} = ${template_value_to_substitute}"
# Note: we do not escape our strings, on the expectation that they do not contain the '#' char.
find ${webapp_files_dir} -type f -exec sed -i 's#'${template_string_to_replace}'#'${template_value_to_substitute}'#g' {} +
done
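
As a sketch of what the yq expression produces (using the same mikefarah yq the console-host Dockerfile copies in), given a config like the laconic-console one shown later in this diff:

```bash
printf 'services:\n  wns:\n    server: http://localhost:9473/api\n' > /tmp/config.yml
yq '.. | select(length > 2) | ([path | join("_"), .] | join("=") )' /tmp/config.yml \
  | sed 's/^/LACONIC_HOSTED_CONFIG_/'
# => LACONIC_HOSTED_CONFIG_services_wns_server=http://localhost:9473/api
```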

View File

@ -0,0 +1,9 @@
#!/usr/bin/env bash
# Build cerc/laconic-registry-cli
# See: https://stackoverflow.com/a/246128/1701505
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
docker build -t cerc/laconic-console-host:local -f ${SCRIPT_DIR}/Dockerfile \
--add-host gitea.local:host-gateway \
--build-arg CERC_NPM_AUTH_TOKEN --build-arg CERC_NPM_URL ${SCRIPT_DIR}

View File

@ -0,0 +1,6 @@
# Config for laconic-console running in a fixturenet with laconicd
services:
wns:
server: 'http://localhost:9473/api'
webui: 'http://localhost:9473/console'

View File

@ -0,0 +1,8 @@
#!/usr/bin/env bash
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
# TODO: Don't hard wire this:
webapp_files_dir=/usr/local/share/.config/yarn/global/node_modules/@cerc-io/console-app/dist/production
/scripts/apply-webapp-config.sh /config/config.yml ${webapp_files_dir}
http-server -p 80 ${webapp_files_dir}

View File

@ -1,6 +1,6 @@
# Originally from: https://github.com/devcontainers/images/blob/main/src/javascript-node/.devcontainer/Dockerfile
# [Choice] Node.js version (use -bullseye variants on local arm64/Apple Silicon): 18, 16, 14, 18-bullseye, 16-bullseye, 14-bullseye, 18-buster, 16-buster, 14-buster
ARG VARIANT=16-bullseye
ARG VARIANT=18-bullseye
FROM node:${VARIANT}
ARG USERNAME=node
@ -40,18 +40,20 @@ RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
# RUN su node -c "npm install -g <your-package-list-here>"
# Configure the local npm registry
RUN npm config set @lirewine:registry ${CERC_NPM_URL} \
&& npm config set @cerc-io:registry ${CERC_NPM_URL} \
RUN npm config set @cerc-io:registry ${CERC_NPM_URL} \
&& npm config set @lirewine:registry ${CERC_NPM_URL} \
&& npm config set -- ${CERC_NPM_URL}:_authToken ${CERC_NPM_AUTH_TOKEN}
# TODO: the image at this point could be made a base image for several different CLI images
# that install different Node-based CLI commands
# DEBUG, remove
RUN yarn info @cerc-io/laconic-registry-cli
# Globally install the cli package
RUN yarn global add @cerc-io/laconic-registry-cli
ENTRYPOINT ["laconic"]
# Add scripts
RUN mkdir /scripts
ENV PATH="${PATH}:/scripts"
COPY ./import-key.sh /scripts
# Default command sleeps forever so docker doesn't kill it
CMD ["sh", "-c", "while :; do sleep 600; done"]

View File

@ -0,0 +1,2 @@
#!/bin/sh
sed 's/REPLACE_WITH_MYKEY/'${1}'/' registry-cli-config-template.yml > config.yml
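
Assumed usage inside the registry CLI container (the key value is a placeholder):

```bash
# Writes config.yml from registry-cli-config-template.yml with the key filled in
import-key.sh <your-private-key>
```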

View File

@ -0,0 +1,14 @@
FROM node:18.15.0-alpine3.16
RUN apk --update --no-cache add make git jq
WORKDIR /app
COPY . .
RUN npm install -g serve
RUN echo "Building mobymask-ui" && \
npm install
WORKDIR /app

View File

@ -0,0 +1,7 @@
#!/usr/bin/env bash
# Build cerc/mobymask-ui
# See: https://stackoverflow.com/a/246128/1701505
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
docker build -t cerc/mobymask-ui:local -f ${SCRIPT_DIR}/Dockerfile ${CERC_REPO_BASE_DIR}/mobymask-ui

View File

@ -0,0 +1,13 @@
FROM node:16.17.1-alpine3.16
RUN apk --update --no-cache add python3 alpine-sdk jq
WORKDIR /app
COPY . .
RUN yarn
# Add scripts
RUN mkdir /scripts
ENV PATH="${PATH}:/scripts"

View File

@ -0,0 +1,7 @@
#!/usr/bin/env bash
# Build cerc/mobymask
# See: https://stackoverflow.com/a/246128/1701505
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
docker build -t cerc/mobymask:local -f ${SCRIPT_DIR}/Dockerfile ${CERC_REPO_BASE_DIR}/MobyMask

View File

@ -0,0 +1,23 @@
# TODO: Use a node alpine image
FROM cerc/foundry:local
# Install node (local foundry is a debian based image)
RUN apt-get update \
&& apt-get install -y curl \
&& curl --silent --location https://deb.nodesource.com/setup_16.x | bash - \
&& apt-get update \
&& apt-get install -y nodejs git busybox jq \
&& node -v
RUN corepack enable \
&& yarn --version
WORKDIR /app
# Copy optimism repo contents
COPY . .
RUN echo "Building optimism" && \
yarn && yarn build
WORKDIR /app/packages/contracts-bedrock

View File

@ -0,0 +1,7 @@
#!/usr/bin/env bash
# Build cerc/optimism-contracts
# See: https://stackoverflow.com/a/246128/1701505
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
docker build -t cerc/optimism-contracts:local -f ${SCRIPT_DIR}/Dockerfile ${CERC_REPO_BASE_DIR}/optimism

View File

@ -0,0 +1,3 @@
#!/usr/bin/env bash
# Build cerc/optimism-l2geth
docker build -t cerc/optimism-l2geth:local ${CERC_REPO_BASE_DIR}/op-geth

View File

@ -0,0 +1,32 @@
FROM golang:1.19.0-alpine3.15 as builder
ARG VERSION=v0.0.0
RUN apk add --no-cache make gcc musl-dev linux-headers git jq bash
# build op-batcher with the shared go.mod & go.sum files
COPY ./op-batcher /app/op-batcher
COPY ./op-bindings /app/op-bindings
COPY ./op-node /app/op-node
COPY ./op-service /app/op-service
COPY ./op-signer /app/op-signer
COPY ./go.mod /app/go.mod
COPY ./go.sum /app/go.sum
COPY ./.git /app/.git
WORKDIR /app/op-batcher
RUN go mod download
ARG TARGETOS TARGETARCH
RUN make op-batcher VERSION="$VERSION" GOOS=$TARGETOS GOARCH=$TARGETARCH
FROM alpine:3.15
RUN apk add --no-cache jq
COPY --from=builder /app/op-batcher/bin/op-batcher /usr/local/bin
ENTRYPOINT ["op-batcher"]

View File

@ -0,0 +1,5 @@
#!/usr/bin/env bash
# Build cerc/optimism-op-batcher
# TODO: use upstream Dockerfile once its buildx-specific content has been removed
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
docker build -t cerc/optimism-op-batcher:local -f ${SCRIPT_DIR}/Dockerfile ${CERC_REPO_BASE_DIR}/optimism

View File

@ -0,0 +1,30 @@
FROM golang:1.19.0-alpine3.15 as builder
ARG VERSION=v0.0.0
RUN apk add --no-cache make gcc musl-dev linux-headers git jq bash
# build op-node with the shared go.mod & go.sum files
COPY ./op-node /app/op-node
COPY ./op-chain-ops /app/op-chain-ops
COPY ./op-service /app/op-service
COPY ./op-bindings /app/op-bindings
COPY ./go.mod /app/go.mod
COPY ./go.sum /app/go.sum
COPY ./.git /app/.git
WORKDIR /app/op-node
RUN go mod download
ARG TARGETOS TARGETARCH
RUN make op-node VERSION="$VERSION" GOOS=$TARGETOS GOARCH=$TARGETARCH
FROM alpine:3.15
RUN apk add --no-cache openssl jq
COPY --from=builder /app/op-node/bin/op-node /usr/local/bin
CMD ["op-node"]

View File

@ -0,0 +1,5 @@
#!/usr/bin/env bash
# Build cerc/optimism-op-node
# TODO: use upstream Dockerfile once its buildx-specific content has been removed
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
docker build -t cerc/optimism-op-node:local -f ${SCRIPT_DIR}/Dockerfile ${CERC_REPO_BASE_DIR}/optimism

View File

@ -0,0 +1,14 @@
FROM node:18.15.0-alpine3.16
RUN apk --update --no-cache add make git python3
WORKDIR /app
COPY . .
RUN yarn global add serve
RUN echo "Building react-peer" && \
yarn install --ignore-scripts && yarn build --ignore @cerc-io/test-app
WORKDIR /app

View File

@ -0,0 +1,7 @@
#!/usr/bin/env bash
# Build cerc/react-peer
# See: https://stackoverflow.com/a/246128/1701505
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
docker build -t cerc/react-peer:local -f ${SCRIPT_DIR}/Dockerfile ${CERC_REPO_BASE_DIR}/react-peer

View File

@ -1,4 +1,11 @@
FROM alpine:latest
FROM ubuntu:latest
RUN apt-get update && export DEBIAN_FRONTEND=noninteractive && export DEBCONF_NOWARNINGS="yes" && \
apt-get install -y software-properties-common && \
apt-get install -y nginx && \
apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
EXPOSE 80
COPY run.sh /app/run.sh

View File

@ -12,5 +12,5 @@ else
echo `date` > $EXISTSFILENAME
fi
# Sleep forever to keep docker happy
while true; do sleep 10; done
# Run nginx which will block here forever
/usr/sbin/nginx -g "daemon off;"

View File

@ -0,0 +1,20 @@
FROM ubuntu:22.04
RUN apt-get update \
&& apt-get install -y curl gnupg build-essential \
&& curl --silent --location https://deb.nodesource.com/setup_18.x | bash - \
&& apt-get update \
&& apt-get install -y nodejs git busybox jq \
&& node -v
RUN corepack enable \
&& yarn --version
WORKDIR /app
COPY . .
RUN echo "Building watcher-ts" && \
yarn && yarn build
WORKDIR /app/packages/mobymask-v2-watcher

View File

@ -0,0 +1,7 @@
#!/usr/bin/env bash
# Build cerc/watcher-mobymask-v2
# See: https://stackoverflow.com/a/246128/1701505
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
docker build -t cerc/watcher-mobymask-v2:local -f ${SCRIPT_DIR}/Dockerfile ${CERC_REPO_BASE_DIR}/watcher-ts

View File

@ -1,6 +1,9 @@
#!/usr/bin/env bash
# Usage: default-build.sh <image-tag> [<repo-relative-path>]
# if <repo-relative-path> is not supplied, the context is the directory where the Dockerfile lives
if [[ -n "$CERC_SCRIPT_DEBUG" ]]; then
set -x
fi
if [[ $# -ne 2 ]]; then
echo "Illegal number of parameters" >&2
exit 1
@ -8,4 +11,4 @@ fi
image_tag=$1
build_dir=$2
echo "Building ${image_tag} in ${build_dir}"
docker build -t ${image_tag} ${build_dir}
docker build -t ${image_tag} --build-arg CERC_HOST_UID=${CERC_HOST_UID} --build-arg CERC_HOST_GID=${CERC_HOST_GID} ${build_dir}

View File

@ -1,3 +1,4 @@
cerc/foundry
cerc/test-contract
cerc/eth-statediff-fill-service
cerc/eth-statediff-service
@ -10,6 +11,7 @@ cerc/ipld-eth-beacon-indexer
cerc/ipld-eth-server
cerc/laconicd
cerc/laconic-registry-cli
cerc/laconic-console-host
cerc/fixturenet-eth-geth
cerc/fixturenet-eth-lighthouse
cerc/watcher-mobymask
@ -17,8 +19,18 @@ cerc/watcher-erc20
cerc/watcher-erc721
cerc/watcher-uniswap-v3
cerc/uniswap-v3-info
cerc/watcher-mobymask-v2
cerc/react-peer
cerc/mobymask-ui
cerc/mobymask
cerc/test-container
cerc/eth-probe
cerc/builder-js
cerc/keycloak
cerc/tx-spammer
cerc/builder-gerbil
cerc/act-runner
cerc/act-runner-task-executor
cerc/optimism-l2geth
cerc/optimism-op-batcher
cerc/optimism-op-node

View File

@ -1,4 +1,7 @@
gem
laconic-sdk
debug
laconic-registry-cli
laconic-console
debug
crypto
sdk
gem

View File

@ -6,15 +6,20 @@ ipld-eth-beacon-db
ipld-eth-beacon-indexer
ipld-eth-server
lighthouse
prometheus-grafana
laconicd
fixturenet-laconicd
fixturenet-eth
fixturenet-eth-metrics
watcher-mobymask
watcher-erc20
watcher-erc721
watcher-uniswap-v3
watcher-mobymask-v2
mobymask-laconicd
test
eth-probe
keycloak
tx-spammer
kubo
foundry
fixturenet-optimism

View File

@ -9,10 +9,22 @@ cerc-io/ipld-eth-beacon-db
cerc-io/laconicd
cerc-io/laconic-sdk
cerc-io/laconic-registry-cli
cerc-io/laconic-console
cerc-io/mobymask-watcher
cerc-io/watcher-ts
cerc-io/react-peer
cerc-io/mobymask-ui
cerc-io/MobyMask
vulcanize/uniswap-watcher-ts
vulcanize/uniswap-v3-info
vulcanize/assemblyscript
cerc-io/eth-probe
cerc-io/tx-spammer
dboreham/foundry
lirewine/gem
lirewine/debug
lirewine/crypto
lirewine/sdk
telackey/act_runner
ethereum-optimism/op-geth
ethereum-optimism/optimism

View File

@ -0,0 +1,59 @@
# Build Support Stack
## Instructions
JS/TS/NPM builds need an npm registry to store intermediate package artifacts.
This can be supplied by the user (e.g. using a hosted registry or even npmjs.com), or a local registry using gitea can be deployed by stack orchestrator.
To use a user-supplied registry, set the environment variables `CERC_NPM_REGISTRY_URL` and `CERC_NPM_AUTH_TOKEN`.
Leave `CERC_NPM_REGISTRY_URL` unset to use the local gitea registry; a user-supplied configuration might look like this (placeholder values):
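
```
export CERC_NPM_REGISTRY_URL=https://registry.npmjs.org
export CERC_NPM_AUTH_TOKEN=<your-auth-token>
```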
### 1. Build support containers
```
$ laconic-so --stack build-support build-containers
```
Note that the scheme/gerbil builder container can take a while to build, so if you aren't going to build scheme projects it can be skipped with:
```
$ laconic-so --stack build-support build-containers --exclude cerc/builder-gerbil
```
### 2. Deploy Gitea Package Registry
```
$ laconic-so --stack package-registry setup-repositories
$ laconic-so --stack package-registry deploy up
[+] Running 3/3
⠿ Network laconic-aecc4a21d3a502b14522db97d427e850_gitea Created 0.0s
⠿ Container laconic-aecc4a21d3a502b14522db97d427e850-db-1 Started 1.2s
⠿ Container laconic-aecc4a21d3a502b14522db97d427e850-server-1 Started 1.9s
New user 'gitea_admin' has been successfully created!
This is your gitea access token: 84fe66a73698bf11edbdccd0a338236b7d1d5c45. Keep it safe and secure, it can not be fetched again from gitea.
To use with laconic-so set this environment variable: export CERC_NPM_AUTH_TOKEN=84fe66a73698bf11edbdccd0a338236b7d1d5c45
Created the organization cerc-io
Gitea was configured to use host name: gitea.local, ensure that this resolves to localhost, e.g. with sudo vi /etc/hosts
Success, gitea is properly initialized
$
```
### 3. Configure the hostname gitea.local
How to do this is OS-dependent but usually involves editing a `hosts` file. For example on Linux add this line to the file `/etc/hosts` (needs sudo):
```
127.0.0.1 gitea.local
```
Test with:
```
$ ping gitea.local
PING gitea.local (127.0.0.1) 56(84) bytes of data.
64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.147 ms
64 bytes from localhost (127.0.0.1): icmp_seq=2 ttl=64 time=0.033 ms
```
Although not necessary in order to build and publish packages, you can now access the Gitea web interface at [http://gitea.local:3000](http://gitea.local:3000) using the credentials `gitea_admin`/`admin1234`. (Note: properly secure Gitea if public internet access is allowed.)
### 4. Build npm Packages
Ensure that `CERC_NPM_AUTH_TOKEN` is set with the token printed above when the package-registry stack was deployed (the actual token value will be different than shown in this example):
```
$ export CERC_NPM_AUTH_TOKEN=84fe66a73698bf11edbdccd0a338236b7d1d5c45
$ laconic-so build-npms --include laconic-sdk
```

View File

@ -0,0 +1,6 @@
version: "1.1"
name: build-support
description: "Build Support Components"
containers:
- cerc/builder-js
- cerc/builder-gerbil

View File

@ -1,157 +1,157 @@
# ERC20 Watcher
Instructions to deploy a local ERC20 watcher stack (core + watcher) for demonstration and testing purposes using [laconic-stack-orchestrator](../../README.md#setup)
Instructions to deploy a local ERC20 watcher stack (core + watcher) for demonstration and testing purposes using [stack orchestrator](/README.md#install)
## Setup
* Clone / pull required repositories:
Clone required repositories:
```bash
$ laconic-so setup-repositories --include cerc-io/go-ethereum,cerc-io/ipld-eth-db,cerc-io/ipld-eth-server,cerc-io/watcher-ts --pull
```
```bash
laconic-so --stack erc20 setup-repositories
```
* Build the core and watcher container images:
Build the core and watcher container images:
```bash
$ laconic-so build-containers --include cerc/go-ethereum,cerc/go-ethereum-foundry,cerc/ipld-eth-db,cerc/ipld-eth-server,cerc/watcher-erc20
```
```bash
laconic-so --stack erc20 build-containers
```
This should create the required docker images in the local image registry.
This should create the required docker images in the local image registry.
* Deploy the stack:
Deploy the stack:
```bash
$ laconic-so deploy-system --include db,go-ethereum-foundry,ipld-eth-server,watcher-erc20 up
```
```bash
laconic-so --stack erc20 deploy-system up
```
## Demo
* Find the watcher container's id using `docker ps` and export it for later use:
Find the watcher container's id using `docker ps` and export it for later use:
```bash
$ export CONTAINER_ID=<CONTAINER_ID>
```
```bash
export CONTAINER_ID=<CONTAINER_ID>
```
* Deploy an ERC20 token:
Deploy an ERC20 token:
```bash
$ docker exec $CONTAINER_ID yarn token:deploy:docker
```
```bash
docker exec $CONTAINER_ID yarn token:deploy:docker
```
Export the address of the deployed token to a shell variable for later use:
Export the address of the deployed token to a shell variable for later use:
```bash
$ export TOKEN_ADDRESS=<TOKEN_ADDRESS>
```
```bash
export TOKEN_ADDRESS=<TOKEN_ADDRESS>
```
* Open `http://localhost:3002/graphql` (GraphQL Playground) in a browser window
Open `http://localhost:3002/graphql` (GraphQL Playground) in a browser window
* Connect MetaMask to `http://localhost:8545` (with chain ID `99`)
Connect MetaMask to `http://localhost:8545` (with chain ID `99`)
* Add the deployed token as an asset in MetaMask and check that the initial balance is zero
Add the deployed token as an asset in MetaMask and check that the initial balance is zero
* Export your MetaMask account (second account) address to a shell variable for later use:
Export your MetaMask account (second account) address to a shell variable for later use:
```bash
$ export RECIPIENT_ADDRESS=<RECIPIENT_ADDRESS>
```
```bash
export RECIPIENT_ADDRESS=<RECIPIENT_ADDRESS>
```
* To get the primary account's address, run:
To get the primary account's address, run:
```bash
$ docker exec $CONTAINER_ID yarn account:docker
```
```bash
docker exec $CONTAINER_ID yarn account:docker
```
* To get the current block hash at any time, run:
To get the current block hash at any time, run:
```bash
$ docker exec $CONTAINER_ID yarn block:latest:docker
```
```bash
docker exec $CONTAINER_ID yarn block:latest:docker
```
* Fire a GQL query in the playground to get the name, symbol and total supply of the deployed token:
Fire a GQL query in the playground to get the name, symbol and total supply of the deployed token:
```graphql
query {
name(
blockHash: "LATEST_BLOCK_HASH"
token: "TOKEN_ADDRESS"
) {
value
proof {
data
}
}
symbol(
blockHash: "LATEST_BLOCK_HASH"
token: "TOKEN_ADDRESS"
) {
value
proof {
data
}
}
totalSupply(
blockHash: "LATEST_BLOCK_HASH"
token: "TOKEN_ADDRESS"
) {
value
proof {
data
}
```graphql
query {
name(
blockHash: "LATEST_BLOCK_HASH"
token: "TOKEN_ADDRESS"
) {
value
proof {
data
}
}
```
* Fire the following query to get balances for the primary and the recipient account at the latest block hash:
```graphql
query {
fromBalanceOf: balanceOf(
blockHash: "LATEST_BLOCK_HASH"
token: "TOKEN_ADDRESS",
# primary account having all the balance initially
owner: "PRIMARY_ADDRESS"
) {
value
proof {
data
}
}
toBalanceOf: balanceOf(
blockHash: "LATEST_BLOCK_HASH"
token: "TOKEN_ADDRESS",
owner: "RECIPIENT_ADDRESS"
) {
value
proof {
data
}
symbol(
blockHash: "LATEST_BLOCK_HASH"
token: "TOKEN_ADDRESS"
) {
value
proof {
data
}
}
```
* The initial balance for the primary account should be `1000000000000000000000`
* The initial balance for the recipient should be `0`
totalSupply(
blockHash: "LATEST_BLOCK_HASH"
token: "TOKEN_ADDRESS"
) {
value
proof {
data
}
}
}
```
* Transfer tokens to the recipient account:
Fire the following query to get balances for the primary and the recipient account at the latest block hash:
```bash
$ docker exec $CONTAINER_ID yarn token:transfer:docker --token $TOKEN_ADDRESS --to $RECIPIENT_ADDRESS --amount 100
```
```graphql
query {
fromBalanceOf: balanceOf(
blockHash: "LATEST_BLOCK_HASH"
token: "TOKEN_ADDRESS",
# primary account having all the balance initially
owner: "PRIMARY_ADDRESS"
) {
value
proof {
data
}
}
toBalanceOf: balanceOf(
blockHash: "LATEST_BLOCK_HASH"
token: "TOKEN_ADDRESS",
owner: "RECIPIENT_ADDRESS"
) {
value
proof {
data
}
}
}
```
* Fire the above GQL query again with the latest block hash to get updated balances for the primary (`from`) and the recipient (`to`) account:
- The initial balance for the primary account should be `1000000000000000000000`
- The initial balance for the recipient should be `0`
* The balance for the primary account should be reduced by the transfer amount (`100`)
* The balance for the recipient account should be equal to the transfer amount (`100`)
Transfer tokens to the recipient account:
* Transfer funds between different accounts using MetaMask and use the playground to query the balance before and after the transfer.
```bash
docker exec $CONTAINER_ID yarn token:transfer:docker --token $TOKEN_ADDRESS --to $RECIPIENT_ADDRESS --amount 100
```
Fire the above GQL query again with the latest block hash to get updated balances for the primary (`from`) and the recipient (`to`) account:
- The balance for the primary account should be reduced by the transfer amount (`100`)
- The balance for the recipient account should be equal to the transfer amount (`100`)
Transfer funds between different accounts using MetaMask and use the playground to query the balance before and after the transfer.
## Clean up
* To stop all the services running in the background, run:
To stop all the services running in the background, run:
```bash
$ laconic-so deploy-system --include db,go-ethereum-foundry,ipld-eth-server,watcher-erc20 down
```
```bash
laconic-so --stack erc20 deploy-system down
```

View File

@ -5,7 +5,9 @@ repos:
- cerc-io/ipld-eth-db
- cerc-io/ipld-eth-server
- cerc-io/watcher-ts
- dboreham/foundry
containers:
- cerc/foundry
- cerc/go-ethereum
- cerc/go-ethereum-foundry
- cerc/ipld-eth-db
@ -13,6 +15,6 @@ containers:
- cerc/watcher-erc20
pods:
- go-ethereum-foundry
- db
- ipld-eth-db
- ipld-eth-server
- watcher-erc20

Some files were not shown because too many files have changed in this diff.