Ping pub integration #190

README.md
@@ -1,49 +1,42 @@
# Stack Orchestrator

Stack Orchestrator allows building and deployment of a Laconic stack on a single machine with minimal prerequisites.

Stack Orchestrator allows building and deployment of a Laconic Stack on a single machine with minimal prerequisites. It is a Python3 CLI tool that runs on any OS with Python3 and Docker. The following diagram summarizes the relevant repositories in the Laconic Stack and their relationship to Stack Orchestrator.

## Setup

### Prerequisites

Stack Orchestrator is a Python3 CLI tool that runs on any OS with Python3 and Docker. Tested on: Ubuntu 20/22.

![The Stack](/docs/images/laconic-stack.png)
## Install

Ensure that the following are already installed:

1. Python3 (the stock Python3 version available in Ubuntu 20 and 22 is suitable)

```
$ python3 --version
Python 3.8.10
```

2. Docker (install a current version from docker.com; don't use the version from any Linux distro)

```
$ docker --version
Docker version 20.10.17, build 100c701
```

3. If Docker was installed from a regular package repository (not Docker Desktop), be aware that the compose plugin may need to be installed as well:

```
DOCKER_CONFIG=${DOCKER_CONFIG:-$HOME/.docker}
mkdir -p $DOCKER_CONFIG/cli-plugins
curl -SL https://github.com/docker/compose/releases/download/v2.11.2/docker-compose-linux-x86_64 -o $DOCKER_CONFIG/cli-plugins/docker-compose
```

- [Python3](https://wiki.python.org/moin/BeginnersGuide/Download): `python3 --version` >= `3.10.8`
- [Docker](https://docs.docker.com/get-docker/): `docker --version` >= `20.10.21`
- [Docker Compose](https://docs.docker.com/compose/install/): `docker-compose --version` >= `2.13.0`

Note: if installing docker-compose via package manager (as opposed to Docker Desktop), you must [install the plugin](https://docs.docker.com/compose/install/linux/#install-the-plugin-manually), e.g., on Linux:

```bash
mkdir -p ~/.docker/cli-plugins
curl -SL https://github.com/docker/compose/releases/download/v2.11.2/docker-compose-linux-x86_64 -o ~/.docker/cli-plugins/docker-compose
chmod +x ~/.docker/cli-plugins/docker-compose

# see https://docs.docker.com/compose/install/linux/#install-the-plugin-manually for further details
# or to install for all users.
```
### User Mode Install

User mode runs the orchestrator from a "binary" single-file release and does not require special Python environment setup. Use this mode unless you plan to make changes to the orchestrator source code.

Next, download the latest release from [this page](https://github.com/cerc-io/stack-orchestrator/tags) into a suitable directory (e.g. `~/bin`):

```bash
curl -L -o ~/bin/laconic-so https://github.com/cerc-io/stack-orchestrator/releases/latest/download/laconic-so
```

*NOTE: User Mode is currently broken; use "Developer mode" described below for now.*

Give it execute permissions:

```bash
chmod +x ~/bin/laconic-so
```

Ensure `laconic-so` is on the [`PATH`](https://unix.stackexchange.com/a/26059).

Verify operation:

1. Download the latest release from [this page](https://github.com/cerc-io/stack-orchestrator/tags) into a suitable directory (e.g. `~/bin`):

```
$ cd ~/bin
$ curl -L https://github.com/cerc-io/stack-orchestrator/releases/download/v1.0.3-alpha/laconic-so
```

1. Ensure `laconic-so` is on the `PATH`
1. Verify operation:

```
$ ~/bin/laconic-so --help
Usage: python -m laconic-so [OPTIONS] COMMAND [ARGS]...

  Laconic Stack Orchestrator
```

@@ -57,162 +50,56 @@ User mode runs the orchestrator from a "binary" single-file release and does not

```
Commands:
  build-containers    build the set of containers required for a complete...
  build-npms          build the set of npm packages required for a...
  deploy-system       deploy a stack
  setup-repositories  git clone the set of repositories required to build...
```
### Developer Mode Install

Suitable for developers either modifying or debugging the orchestrator Python code:

#### Prerequisites

In addition to the binary install prerequisites listed above, the following are required:

1. Python venv package

This may or may not be already installed depending on the host OS and version. Check by running:

```
$ python3 -m venv
usage: venv [-h] [--system-site-packages] [--symlinks | --copies] [--clear] [--upgrade] [--without-pip] [--prompt PROMPT] ENV_DIR [ENV_DIR ...]
venv: error: the following arguments are required: ENV_DIR
```

If the venv package is missing you should see a message indicating how to install it, for example with:

```
$ apt install python3.8-venv
```

#### Install

1. Clone this repository:

```
$ git clone https://github.com/cerc-io/stack-orchestrator.git
```

2. Enter the project directory:

```
$ cd stack-orchestrator
```

3. Create and activate a venv:

```
$ python3 -m venv venv
$ source ./venv/bin/activate
(venv) $
```

4. Install the CLI in edit mode:

```
$ pip install --editable .
```

5. Verify installation:

```
(venv) $ laconic-so
Usage: laconic-so [OPTIONS] COMMAND [ARGS]...

  Laconic Stack Orchestrator

Options:
  --quiet
  --verbose
  --dry-run
  -h, --help  Show this message and exit.

Commands:
  build-containers    build the set of containers required for a complete...
  deploy-system       deploy a stack
  setup-repositories  git clone the set of repositories required to build...
```
#### Build a zipapp (single-file distributable script)

Use shiv to build a single-file Python executable zip archive of laconic-so:

1. Install [shiv](https://github.com/linkedin/shiv):

```
(venv) $ pip install shiv
(venv) $ pip install wheel
```

2. Run shiv to create a zipapp file:

```
(venv) $ shiv -c laconic-so -o laconic-so .
```

This creates a file `./laconic-so` that is executable outside of any venv, on other machines, OSes and architectures, requiring only the system Python3.

3. Verify it works:

```
$ cp stack-orchestrator/laconic-so ~/bin
$ laconic-so
Usage: python -m laconic-so [OPTIONS] COMMAND [ARGS]...

  Laconic Stack Orchestrator

Options:
  --quiet
  --verbose
  --dry-run
  -h, --help  Show this message and exit.

Commands:
  build-containers    build the set of containers required for a complete...
  deploy-system       deploy a stack
  setup-repositories  git clone the set of repositories required to build...
```
### CI Mode

_write-me_

## Usage

There are three sub-commands: `setup-repositories`, `build-containers` and `deploy-system` that are generally run in order:

Note: `$ laconic-so` will run the version installed to `~/bin`, while `./laconic-so` can be invoked to run a locally built version in a checkout.

Three sub-commands: `setup-repositories`, `build-containers` and `deploy-system` are generally run in order. The following is a slim example for standing up the `erc20-watcher`. Go further with the [erc20 watcher demo](/app/data/stacks/erc20) and other pieces of the stack, within the [`stacks` directory](/app/data/stacks).

### Setup Repositories

Clones the set of git repositories necessary to build a system.

Note: the use of `ssh-agent` is recommended in order to avoid entering your ssh key passphrase for each repository.

```
$ laconic-so --verbose setup-repositories # this will default to ~/cerc or CERC_REPO_BASE_DIR from an env file
#$ ./laconic-so --verbose --local-stack setup-repositories # this will use cwd ../ as dev_root_path
```

Clone the set of git repositories necessary to build a system:

```bash
laconic-so --verbose setup-repositories --include cerc-io/go-ethereum,cerc-io/ipld-eth-db,cerc-io/ipld-eth-server,cerc-io/watcher-ts
```

This will default to `~/cerc` or, if set, the environment variable `CERC_REPO_BASE_DIR`.
### Build Containers

Builds the set of docker container images required to run a system. It takes around 10 minutes to build all the containers from cold.

```
$ laconic-so --verbose build-containers # this will default to ~/cerc or CERC_REPO_BASE_DIR from an env file
#$ ./laconic-so --verbose --local-stack build-containers # this will use cwd ../ as dev_root_path
```

Build the set of docker container images required to run a system. It takes around 10 minutes to build all the containers from scratch.

```bash
laconic-so --verbose build-containers --include cerc/go-ethereum,cerc/go-ethereum-foundry,cerc/ipld-eth-db,cerc/ipld-eth-server,cerc/watcher-erc20
```
### Deploy System

Uses `docker compose` to deploy a system.

Use `--include <list of components>` to deploy a subset of all containers:

Uses `docker-compose` to deploy a system (with the most recently built container images).

```bash
laconic-so --verbose deploy-system --include ipld-eth-db,go-ethereum-foundry,ipld-eth-server,watcher-erc20 up
```

```
$ laconic-so --verbose deploy-system --include db-sharding,contract,ipld-eth-server,go-ethereum-foundry up
```

Check out the GraphQL playground here: [http://localhost:3002/graphql](http://localhost:3002/graphql)

See the [erc20 watcher demo](/app/data/stacks/erc20) to continue further.

### Cleanup

```bash
laconic-so --verbose deploy-system --include ipld-eth-db,go-ethereum-foundry,ipld-eth-server,watcher-erc20 down
```

```
$ laconic-so --verbose deploy-system --include db-sharding,contract,ipld-eth-server,go-ethereum-foundry down
```

Note: the deploy-system command interacts with the most recently built container images.
## Contributing

See [CONTRIBUTING.md](/docs/CONTRIBUTING.md) for developer mode install.

## Platform Support

Native aarch64 is _not_ currently supported. x64 emulation on ARM64 macos should work (not yet tested).

## Implementation

The orchestrator's operation is driven by the files shown below. `repository-list.txt` contains the list of git repositories; `container-image-list.txt` contains the list of container image names, while `pod-list.txt` specifies the set of compose components (corresponding to individual docker-compose-xxx.yml files, which may in turn specify more than one container).

Files required to build each container image are stored under `./container-build/<container-name>`

Files required at deploy time are stored under `./config/<component-name>`
```
├── pod-list.txt
├── compose
│   ├── docker-compose-contract.yml
│   ├── docker-compose-db-sharding.yml
│   ├── docker-compose-db.yml
│   ├── docker-compose-eth-statediff-fill-service.yml
│   ├── docker-compose-go-ethereum-foundry.yml
│   ├── docker-compose-ipld-eth-beacon-db.yml
│   ├── docker-compose-ipld-eth-beacon-indexer.yml
│   ├── docker-compose-ipld-eth-server.yml
│   ├── docker-compose-lighthouse.yml
│   └── docker-compose-prometheus-grafana.yml
├── config
│   └── ipld-eth-server
├── container-build
│   ├── cerc-eth-statediff-fill-service
│   ├── cerc-go-ethereum
│   ├── cerc-go-ethereum-foundry
│   ├── cerc-ipld-eth-beacon-db
│   ├── cerc-ipld-eth-beacon-indexer
│   ├── cerc-ipld-eth-db
│   ├── cerc-ipld-eth-server
│   ├── cerc-lighthouse
│   └── cerc-test-contract
├── container-image-list.txt
├── repository-list.txt
```

_write-more-of-me_
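The list-file-driven operation described above can be sketched in a few lines. This is a hypothetical helper for illustration only (the real tool loads its packaged copies of these files via `importlib.resources`); the file contents shown are sample entries, not the full lists:

```python
from pathlib import Path
import tempfile

def read_list_file(path):
    # Read one of the driving list files (e.g. container-image-list.txt):
    # one item per line, blank lines skipped.
    return [line for line in Path(path).read_text().splitlines() if line.strip()]

# Stand-in file playing the role of container-image-list.txt
tmp = Path(tempfile.mkdtemp()) / "container-image-list.txt"
tmp.write_text("cerc/go-ethereum\ncerc/ipld-eth-db\n")
print(read_list_file(tmp))  # -> ['cerc/go-ethereum', 'cerc/ipld-eth-db']
```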
app/base.py
Normal file
@@ -0,0 +1,71 @@
# Copyright © 2022, 2023 Cerc

# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.

# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU Affero General Public License for more details.

# You should have received a copy of the GNU Affero General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.

import os
from abc import ABC, abstractmethod
from .deploy_system import get_stack_status


def get_stack(config, stack):
    if stack == "package-registry":
        return package_registry_stack(config, stack)
    else:
        return base_stack(config, stack)


class base_stack(ABC):

    def __init__(self, config, stack):
        self.config = config
        self.stack = stack

    @abstractmethod
    def ensure_available(self):
        pass

    @abstractmethod
    def get_url(self):
        pass


class package_registry_stack(base_stack):

    def ensure_available(self):
        self.url = "<no registry url set>"
        # Check if we were given an external registry URL
        url_from_environment = os.environ.get("CERC_NPM_REGISTRY_URL")
        if url_from_environment:
            if self.config.verbose:
                print(f"Using package registry url from CERC_NPM_REGISTRY_URL: {url_from_environment}")
            self.url = url_from_environment
        else:
            # Otherwise we expect to use the local package-registry stack
            # First check if the stack is up
            registry_running = get_stack_status(self.config, "package-registry")
            if registry_running:
                # If it is available, get its mapped port and construct its URL
                if self.config.debug:
                    print("Found local package registry stack is up")
                # TODO: get url from deploy-stack
                self.url = "http://gitea.local:3000/api/packages/cerc-io/npm/"
            else:
                # If not, print a message about how to start it and return fail to the caller
                print("ERROR: The package-registry stack is not running, and no external registry specified with CERC_NPM_REGISTRY_URL")
                print("ERROR: Start the local package registry with: laconic-so --stack package-registry deploy-system up")
                return False
        return True

    def get_url(self):
        return self.url
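The factory-plus-abstract-base pattern in `app/base.py` can be exercised standalone. This sketch re-implements only the environment-variable branch; the class names, example URL, and the `SimpleNamespace` config are illustrative stand-ins, not the module's real API (which also falls back to probing the local gitea registry via `get_stack_status`):

```python
import os
from abc import ABC, abstractmethod
from types import SimpleNamespace

class BaseStack(ABC):
    def __init__(self, config, stack):
        self.config = config
        self.stack = stack

    @abstractmethod
    def ensure_available(self): ...

    @abstractmethod
    def get_url(self): ...

class PackageRegistryStack(BaseStack):
    def ensure_available(self):
        # Prefer an external registry URL from the environment, as base.py does
        self.url = os.environ.get("CERC_NPM_REGISTRY_URL", "<no registry url set>")
        return self.url != "<no registry url set>"

    def get_url(self):
        return self.url

def get_stack(config, stack):
    # Factory: special-case the package-registry stack, as in app/base.py
    if stack == "package-registry":
        return PackageRegistryStack(config, stack)
    raise ValueError(f"no specialized stack class for {stack}")

config = SimpleNamespace(verbose=False, debug=False)
os.environ["CERC_NPM_REGISTRY_URL"] = "http://registry.example/npm/"
s = get_stack(config, "package-registry")
assert s.ensure_available()
print(s.get_url())  # -> http://registry.example/npm/
```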
@@ -1,4 +1,4 @@
# Copyright © 2022 Cerc
# Copyright © 2022, 2023 Cerc

# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
@@ -21,12 +21,13 @@
# TODO: display the available list of containers; allow re-build of either all or specific containers

import os
import sys
from decouple import config
import subprocess
import click
import importlib.resources
from pathlib import Path
from .util import include_exclude_check
from .util import include_exclude_check, get_parsed_stack_config

# TODO: find a place for this
# epilog="Config provided either in .env or settings.ini or env vars: CERC_REPO_BASE_DIR (defaults to ~/cerc)"

@@ -43,6 +44,8 @@ def command(ctx, include, exclude):
    verbose = ctx.obj.verbose
    dry_run = ctx.obj.dry_run
    local_stack = ctx.obj.local_stack
    stack = ctx.obj.stack
    continue_on_error = ctx.obj.continue_on_error

    # See: https://stackoverflow.com/questions/25389095/python-get-path-of-root-project-structure
    container_build_dir = Path(__file__).absolute().parent.joinpath("data", "container-build")

@@ -62,10 +65,19 @@ def command(ctx, include, exclude):
    # See: https://stackoverflow.com/a/20885799/1701505
    from . import data
    with importlib.resources.open_text(data, "container-image-list.txt") as container_list_file:
        containers = container_list_file.read().splitlines()
        all_containers = container_list_file.read().splitlines()

    containers_in_scope = []
    if stack:
        stack_config = get_parsed_stack_config(stack)
        containers_in_scope = stack_config['containers']
    else:
        containers_in_scope = all_containers

    if verbose:
        print(f'Containers: {containers}')
        print(f'Containers: {containers_in_scope}')
        if stack:
            print(f"Stack: {stack}")

    # TODO: make this configurable
    container_build_env = {

@@ -91,17 +103,24 @@ def command(ctx, include, exclude):
        # Check if we have a repo for this container. If not, set the context dir to the container-build subdir
        repo_full_path = os.path.join(dev_root_path, repo_dir)
        repo_dir_or_build_dir = repo_dir if os.path.exists(repo_full_path) else build_dir
        build_command = os.path.join(container_build_dir, "default-build.sh") + f" {container} {repo_dir_or_build_dir}"
        build_command = os.path.join(container_build_dir, "default-build.sh") + f" {container}:local {repo_dir_or_build_dir}"
        if not dry_run:
            if verbose:
                print(f"Executing: {build_command}")
            build_result = subprocess.run(build_command, shell=True, env=container_build_env)
            # TODO: check result in build_result.returncode
            print(f"Result is: {build_result}")
            if verbose:
                print(f"Return code is: {build_result.returncode}")
            if build_result.returncode != 0:
                print(f"Error running build for {container}")
                if not continue_on_error:
                    print("FATAL Error: container build failed and --continue-on-error not set, exiting")
                    sys.exit(1)
                else:
                    print("****** Container Build Error, continuing because --continue-on-error is set")
        else:
            print("Skipped")

    for container in containers:
    for container in containers_in_scope:
        if include_exclude_check(container, include, exclude):
            process_container(container)
        else:
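The loop above filters containers with `include_exclude_check` from `.util`, which is not shown in this diff. A plausible sketch of its semantics, assuming comma-separated `--include`/`--exclude` values (hypothetical re-implementation, not the real function):

```python
def include_exclude_check(item, include, exclude):
    # Hypothetical semantics: with no filters, everything is in scope;
    # --include restricts scope to the listed items; --exclude removes items.
    if include:
        return item in include.split(",")
    if exclude:
        return item not in exclude.split(",")
    return True

# With no filters, every container is processed
assert include_exclude_check("cerc/ipld-eth-db", None, None)
# --include restricts the build to the named containers
assert include_exclude_check("cerc/ipld-eth-db", "cerc/ipld-eth-db,cerc/watcher-erc20", None)
# --exclude drops containers from scope
assert not include_exclude_check("cerc/lighthouse", None, "cerc/lighthouse")
```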
@@ -1,4 +1,4 @@
# Copyright © 2022 Cerc
# Copyright © 2022, 2023 Cerc

# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
|
||||
# CERC_REPO_BASE_DIR defaults to ~/cerc
|
||||
|
||||
import os
|
||||
import sys
|
||||
from shutil import rmtree, copytree
|
||||
from decouple import config
|
||||
import click
|
||||
import importlib.resources
|
||||
from python_on_whales import docker
|
||||
from .util import include_exclude_check
|
||||
from python_on_whales import docker, DockerException
|
||||
from .base import get_stack
|
||||
from .util import include_exclude_check, get_parsed_stack_config
|
||||
|
||||
builder_js_image_name = "cerc/builder-js:local"
|
||||
|
||||
@click.command()
|
||||
@click.option('--include', help="only build these packages")
|
||||
@ -37,6 +42,23 @@ def command(ctx, include, exclude):
|
||||
dry_run = ctx.obj.dry_run
|
||||
local_stack = ctx.obj.local_stack
|
||||
debug = ctx.obj.debug
|
||||
stack = ctx.obj.stack
|
||||
continue_on_error = ctx.obj.continue_on_error
|
||||
|
||||
_ensure_prerequisites()
|
||||
|
||||
# build-npms depends on having access to a writable package registry
|
||||
# so we check here that it is available
|
||||
package_registry_stack = get_stack(ctx.obj, "package-registry")
|
||||
registry_available = package_registry_stack.ensure_available()
|
||||
if not registry_available:
|
||||
print("FATAL: no npm registry available for build-npms command")
|
||||
sys.exit(1)
|
||||
npm_registry_url = package_registry_stack.get_url()
|
||||
npm_registry_url_token = config("CERC_NPM_AUTH_TOKEN", default=None)
|
||||
if not npm_registry_url_token:
|
||||
print("FATAL: CERC_NPM_AUTH_TOKEN is not defined")
|
||||
sys.exit(1)
|
||||
|
||||
if local_stack:
|
||||
dev_root_path = os.getcwd()[0:os.getcwd().rindex("stack-orchestrator")]
|
||||
@ -44,49 +66,96 @@ def command(ctx, include, exclude):
|
||||
else:
|
||||
dev_root_path = os.path.expanduser(config("CERC_REPO_BASE_DIR", default="~/cerc"))
|
||||
|
||||
if not quiet:
|
||||
build_root_path = os.path.join(dev_root_path, "build-trees")
|
||||
|
||||
if verbose:
|
||||
print(f'Dev Root is: {dev_root_path}')
|
||||
|
||||
if not os.path.isdir(dev_root_path):
|
||||
print('Dev root directory doesn\'t exist, creating')
|
||||
os.makedirs(dev_root_path)
|
||||
if not os.path.isdir(dev_root_path):
|
||||
print('Build root directory doesn\'t exist, creating')
|
||||
os.makedirs(build_root_path)
|
||||
|
||||
# See: https://stackoverflow.com/a/20885799/1701505
|
||||
from . import data
|
||||
with importlib.resources.open_text(data, "npm-package-list.txt") as package_list_file:
|
||||
packages = package_list_file.read().splitlines()
|
||||
all_packages = package_list_file.read().splitlines()
|
||||
|
||||
packages_in_scope = []
|
||||
if stack:
|
||||
stack_config = get_parsed_stack_config(stack)
|
||||
# TODO: syntax check the input here
|
||||
packages_in_scope = stack_config['npms']
|
||||
else:
|
||||
packages_in_scope = all_packages
|
||||
|
||||
if verbose:
|
||||
print(f'Packages: {packages}')
|
||||
print(f'Packages: {packages_in_scope}')
|
||||
|
||||
def build_package(package):
|
||||
if not quiet:
|
||||
print(f"Building npm package: {package}")
|
||||
repo_dir = package
|
||||
repo_full_path = os.path.join(dev_root_path, repo_dir)
|
||||
# TODO: make the npm registry url configurable.
|
||||
build_command = ["sh", "-c", "cd /workspace && build-npm-package-local-dependencies.sh http://gitea.local:3000/api/packages/cerc-io/npm/"]
|
||||
# Copy the repo and build that to avoid propagating JS tooling file changes back into the cloned repo
|
||||
repo_copy_path = os.path.join(build_root_path, repo_dir)
|
||||
# First delete any old build tree
|
||||
if os.path.isdir(repo_copy_path):
|
||||
if verbose:
|
||||
print(f"Deleting old build tree: {repo_copy_path}")
|
||||
if not dry_run:
|
||||
rmtree(repo_copy_path)
|
||||
# Now copy the repo into the build tree location
|
||||
if verbose:
|
||||
print(f"Copying build tree from: {repo_full_path} to: {repo_copy_path}")
|
||||
if not dry_run:
|
||||
copytree(repo_full_path, repo_copy_path)
|
||||
build_command = ["sh", "-c", f"cd /workspace && build-npm-package-local-dependencies.sh {npm_registry_url}"]
|
||||
if not dry_run:
|
||||
if verbose:
|
||||
print(f"Executing: {build_command}")
|
||||
envs = {"CERC_NPM_AUTH_TOKEN": os.environ["CERC_NPM_AUTH_TOKEN"]} | ({"CERC_SCRIPT_DEBUG": "true"} if debug else {})
|
||||
build_result = docker.run("cerc/builder-js",
|
||||
envs = {"CERC_NPM_AUTH_TOKEN": npm_registry_url_token} | ({"CERC_SCRIPT_DEBUG": "true"} if debug else {})
|
||||
try:
|
||||
docker.run(builder_js_image_name,
|
||||
remove=True,
|
||||
interactive=True,
|
||||
tty=True,
|
||||
user=f"{os.getuid()}:{os.getgid()}",
|
||||
envs=envs,
|
||||
# TODO: detect this host name in npm_registry_url rather than hard-wiring it
|
||||
add_hosts=[("gitea.local", "host-gateway")],
|
||||
volumes=[(repo_full_path, "/workspace")],
|
||||
volumes=[(repo_copy_path, "/workspace")],
|
||||
command=build_command
|
||||
)
|
||||
# TODO: check result in build_result.returncode
|
||||
print(f"Result is: {build_result}")
|
||||
# Note that although the docs say that build_result should contain
|
||||
# the command output as a string, in reality it is always the empty string.
|
||||
# Since we detect errors via catching exceptions below, we can safely ignore it here.
|
||||
except DockerException as e:
|
||||
print(f"Error executing build for {package} in container:\n {e}")
|
||||
if not continue_on_error:
|
||||
print("FATAL Error: build failed and --continue-on-error not set, exiting")
|
||||
sys.exit(1)
|
||||
else:
|
||||
print("****** Build Error, continuing because --continue-on-error is set")
|
||||
|
||||
else:
|
||||
print("Skipped")
|
||||
|
||||
for package in packages:
|
||||
for package in packages_in_scope:
|
||||
if include_exclude_check(package, include, exclude):
|
||||
build_package(package)
|
||||
else:
|
||||
if verbose:
|
||||
print(f"Excluding: {package}")
|
||||
|
||||
|
||||
def _ensure_prerequisites():
|
||||
# Check that the builder-js container is available and
|
||||
# Tell the user how to build it if not
|
||||
images = docker.image.list(builder_js_image_name)
|
||||
if len(images) == 0:
|
||||
print(f"FATAL: builder image: {builder_js_image_name} is required but was not found")
|
||||
print("Please run this command to create it: laconic-so --stack build-support build-containers")
|
||||
sys.exit(1)
|
||||
|
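The `envs` assembly in the build-npms diff relies on the dict-union operator (`|`, available since Python 3.9) to merge an optional debug flag into the base environment. A minimal sketch, with an illustrative helper name:

```python
def build_envs(auth_token, debug):
    # Dict union merges the optional CERC_SCRIPT_DEBUG flag into the base
    # environment; with debug off, the right-hand operand is an empty dict.
    return {"CERC_NPM_AUTH_TOKEN": auth_token} | ({"CERC_SCRIPT_DEBUG": "true"} if debug else {})

print(build_envs("tok", False))  # -> {'CERC_NPM_AUTH_TOKEN': 'tok'}
print(build_envs("tok", True))   # -> {'CERC_NPM_AUTH_TOKEN': 'tok', 'CERC_SCRIPT_DEBUG': 'true'}
```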
app/data/compose/docker-compose-fixturenet-eth-metrics.yml
Normal file
@@ -0,0 +1,23 @@
version: "3.2"
services:
  prometheus:
    restart: always
    image: prom/prometheus
    depends_on:
      fixturenet-eth-geth-1:
        condition: service_healthy
    volumes:
      - ../config/fixturenet-eth-metrics/prometheus/etc:/etc/prometheus
    ports:
      - "9090"
  grafana:
    restart: always
    image: grafana/grafana
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=changeme6325
    volumes:
      - ../config/fixturenet-eth-metrics/grafana/etc/provisioning/dashboards:/etc/grafana/provisioning/dashboards
      - ../config/fixturenet-eth-metrics/grafana/etc/provisioning/datasources:/etc/grafana/provisioning/datasources
      - ../config/fixturenet-eth-metrics/grafana/etc/dashboards:/etc/grafana/dashboards
    ports:
      - "3000"
@@ -20,6 +20,7 @@ services:
      CERC_REMOTE_DEBUG: "true"
      CERC_RUN_STATEDIFF: "detect"
      CERC_STATEDIFF_DB_NODE_ID: 1
      CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
    env_file:
      - ../config/fixturenet-eth/fixturenet-eth.env
    image: cerc/fixturenet-eth-geth:local
@@ -34,6 +35,7 @@ services:
    ports:
      - "8545"
      - "40000"
      - "6060"

  fixturenet-eth-geth-2:
    hostname: fixturenet-eth-geth-2
@@ -20,16 +20,24 @@ services:
      DATABASE_USER: "vdbm"
      DATABASE_PASSWORD: "password"
      ETH_CHAIN_ID: 99
      ETH_FORWARD_ETH_CALLS: $eth_forward_eth_calls
      ETH_PROXY_ON_ERROR: $eth_proxy_on_error
      ETH_HTTP_PATH: $eth_http_path
      ETH_FORWARD_ETH_CALLS: "false"
      ETH_FORWARD_GET_STORAGE_AT: "false"
      ETH_PROXY_ON_ERROR: "false"
      METRICS: "true"
      PROM_HTTP: "true"
      PROM_HTTP_ADDR: "0.0.0.0"
      PROM_HTTP_PORT: "8090"
      LOGRUS_LEVEL: "debug"
      CERC_REMOTE_DEBUG: "true"
    volumes:
      - type: bind
        source: ../config/ipld-eth-server/chain.json
        target: /tmp/chain.json
    ports:
      - "127.0.0.1:8081:8081"
      - "127.0.0.1:8082:8082"
      - "8081"
      - "8082"
      - "8090"
      - "40000"
    healthcheck:
      test: ["CMD", "nc", "-v", "localhost", "8081"]
      interval: 20s
@@ -29,9 +29,17 @@ services:
        condition: service_healthy
  keycloak-nginx:
    image: nginx:1.23-alpine
    restart: always
    volumes:
      - ../config/keycloak/nginx:/etc/nginx/conf.d
    ports:
      - 80
    depends_on:
      - keycloak
  keycloak-nginx-prometheus-exporter:
    image: nginx/nginx-prometheus-exporter
    restart: always
    environment:
      - SCRAPE_URI=http://keycloak-nginx:80/stub_status
    depends_on:
      - keycloak-nginx
app/data/compose/docker-compose-kubo.yml
Normal file
@@ -0,0 +1,13 @@
version: "3.2"
# See: https://docs.ipfs.tech/install/run-ipfs-inside-docker/#set-up
services:
  ipfs:
    image: ipfs/kubo:master-2023-02-20-714a968
    restart: always
    volumes:
      - ./ipfs/import:/import
      - ./ipfs/data:/data/ipfs
    ports:
      - "8080"
      - "4001"
      - "5001"
app/data/compose/docker-compose-laconic-explorer-test.yml
Normal file
@@ -0,0 +1,13 @@
version: '3.8'

services:
  laconic-explorer:
    image: cerc/laconic-explorer:local
    #env_file:
    #  - ../config/laconic-explorer/TODO?
    #volumes:
    #  - ../config/keycloak/import:/import
    ports:
      - "8080:8080"
    # command: ["yarn serve"]
    stdin_open: true
app/data/compose/docker-compose-laconic-explorer.yml
Normal file
@@ -0,0 +1,22 @@
version: '3.8'

services:
  laconic-explorer:
    image: cerc/laconic-explorer:local
    #env_file:
    #  - ../config/laconic-explorer/TODO?
    #volumes:
    #  - ../config/keycloak/import:/import
    ports:
      - "8080:8080"
    command: ["yarn serve"]
    depends_on:
      explorer-nginx:
        condition: service_healthy
  explorer-nginx:
    image: nginx:1.23-alpine
    restart: always
    volumes:
      - ../config/laconic-explorer/default.conf:/etc/nginx/conf.d/default.conf
    ports:
      - "80:80"
@@ -1,21 +0,0 @@
version: "3.2"
services:
  # If you want prometheus to work, you must update the following file in the ops repo locally.
  # localhost:6060 --> go-ethereum:6060
  prometheus:
    restart: always
    user: "987"
    image: prom/prometheus
    volumes:
      - ${cerc_ops}/metrics/etc:/etc/prometheus
      - ./prometheus-data:/prometheus
    ports:
      - "127.0.0.1:9090:9090"
  grafana:
    restart: always
    user: "472"
    image: grafana/grafana
    volumes:
      - ./grafana-data:/var/lib/grafana
    ports:
      - "127.0.0.1:3000:3000"
@@ -3,3 +3,5 @@ services:
  test:
    image: cerc/test-container:local
    restart: always
    ports:
      - "80"

File diff suppressed because it is too large.
@@ -0,0 +1,9 @@
apiVersion: 1

providers:
  - name: dashboards
    type: file
    updateIntervalSeconds: 10
    options:
      path: /etc/grafana/dashboards
      foldersFromFilesStructure: true
@@ -0,0 +1,19 @@
apiVersion: 1

datasources:
  - id: 1
    uid: jZUuGao4k
    orgId: 1
    name: Prometheus
    type: prometheus
    typeName: Prometheus
    typeLogoUrl: public/app/plugins/datasource/prometheus/img/prometheus_logo.svg
    access: proxy
    url: http://prometheus:9090
    user: ""
    database: ""
    basicAuth: false
    isDefault: true
    jsonData:
      httpMethod: POST
    readOnly: false
@ -0,0 +1,34 @@
|
||||
global:
|
||||
scrape_interval: 5s
|
||||
evaluation_interval: 15s
|
||||
|
||||
scrape_configs:
|
||||
# ipld-eth-server
|
||||
- job_name: 'ipld-eth-server'
|
||||
metrics_path: /metrics
|
||||
scrape_interval: 5s
|
||||
static_configs:
|
||||
- targets: ['ipld-eth-server:8090']
|
||||
|
||||
# geth
|
||||
- job_name: 'geth'
|
||||
metrics_path: /debug/metrics/prometheus
|
||||
scheme: http
|
||||
static_configs:
|
||||
- targets: ['fixturenet-eth-geth-1:6060']
|
||||
|
||||
# nginx
|
||||
- job_name: 'nginx'
|
||||
scrape_interval: 5s
|
||||
metrics_path: /metrics
|
||||
scheme: http
|
||||
static_configs:
|
||||
- targets: ['keycloak-nginx-prometheus-exporter:9113']
|
||||
|
||||
# keycloak
|
||||
- job_name: 'keycloak'
|
||||
scrape_interval: 5s
|
||||
metrics_path: /auth/realms/cerc/metrics
|
||||
scheme: http
|
||||
static_configs:
|
||||
- targets: ['keycloak:8080']
|
@@ -19,3 +19,5 @@ CERC_STATEDIFF_DB_USER="vdbm"
CERC_STATEDIFF_DB_PASSWORD="password"
CERC_STATEDIFF_DB_GOOSE_MIN_VER=23
CERC_STATEDIFF_DB_LOG_STATEMENTS="false"

CERC_GETH_VMODULE="statediff/*=5,rpc/*=5"
@@ -20,16 +20,19 @@ server {
        proxy_pass http://fixturenet-eth-geth-1:8545;
    }

    ### ipld-eth-server
    ## ipld-eth-server
    # location ~ ^/ipld/eth/([^/]*)$ {
    #     set $apiKey $1;
    #     if ($apiKey = '') {
    #         set $apiKey $http_X_API_KEY;
    #     }
    #     auth_request /auth;
    #     auth_request_set $user_id $sent_http_x_user_id;
    #     proxy_buffering off;
    #     rewrite /.*$ / break;
    #     proxy_pass http://ipld-eth-server:8081;
    #     proxy_set_header X-Original-Remote-Addr $remote_addr;
    #     proxy_set_header X-User-Id $user_id;
    # }
    #
    # location ~ ^/ipld/gql/([^/]*)$ {
@@ -42,14 +45,14 @@ server {
    #     rewrite /.*$ / break;
    #     proxy_pass http://ipld-eth-server:8082;
    # }
    #
    ### lighthouse
    # location /beacon/ {
    #     set $apiKey $http_X_API_KEY;
    #     auth_request /auth;
    #     proxy_buffering off;
    #     proxy_pass http://fixturenet-eth-lighthouse-1:8001/;
    # }

    ## lighthouse
    location /beacon/ {
        set $apiKey $http_X_API_KEY;
        auth_request /auth;
        proxy_buffering off;
        proxy_pass http://fixturenet-eth-lighthouse-1:8001/;
    }

    location = /auth {
        internal;
@@ -63,7 +66,7 @@ server {
        proxy_set_header X-Original-Host $host;
    }

    # location = /basic_status {
    #     stub_status;
    # }
    location = /stub_status {
        stub_status;
    }
}
app/data/config/laconic-explorer/default.conf (new file, 33 lines)

@@ -0,0 +1,33 @@
## copied from: https://github.com/gateway-fm/laconic-explorer/blob/master/ping.conf
server {
    listen 80;
    listen [::]:80;
    server_name _;

    #access_log /var/log/nginx/host.access.log main;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
    }

    #error_page 404 /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    gzip on;
    gzip_proxied any;
    gzip_static on;
    gzip_min_length 1024;
    gzip_buffers 4 16k;
    gzip_comp_level 2;
    gzip_types text/plain application/javascript application/x-javascript text/css application/xml text/javascript application/x-httpd-php application/vnd.ms-fontobject font/ttf font/opentype font/x-woff image/svg+xml;
    gzip_vary off;
    gzip_disable "MSIE [1-6]\.";
}
app/data/container-build/cerc-builder-gerbil/Dockerfile (new file, 31 lines)

@@ -0,0 +1,31 @@
# From: https://github.com/vyzo/gerbil/blob/master/docker/Dockerfile
FROM gerbil/ubuntu

# Install the Solidity compiler (latest stable version)
# and guile
# and libsecp256k1-dev
RUN apt-get update && export DEBIAN_FRONTEND=noninteractive && export DEBCONF_NOWARNINGS="yes" && \
    apt-get install -y software-properties-common && \
    add-apt-repository ppa:ethereum/ethereum && \
    apt-get update && \
    apt-get install -y solc && \
    apt-get install -y guile-3.0 && \
    apt-get install -y libsecp256k1-dev && \
    apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

RUN mkdir /scripts
COPY install-dependencies.sh /scripts

# Override the definition of GERBIL_PATH in the base image; this is safe
# because (at present) no gerbil packages are installed in the base image.
# We do this in order to allow a set of pre-installed packages from the container
# to be used with an arbitrary, potentially different set of projects bind mounted
# at /src
ENV GERBIL_PATH=/.gerbil
RUN bash /scripts/install-dependencies.sh

# Needed to prevent git from raging about /src
RUN git config --global --add safe.directory /src

COPY entrypoint.sh /scripts
ENTRYPOINT ["/scripts/entrypoint.sh"]
app/data/container-build/cerc-builder-gerbil/README.md (new file, 21 lines)

@@ -0,0 +1,21 @@
## Gerbil Scheme Builder

This container is designed to be used as a simple "build runner" environment for building and running Scheme projects using Gerbil and gerbil-ethereum. Its primary purpose is to allow build/test/run of gerbil code without the need to install and configure all the necessary prerequisites and dependencies on the host system.

### Usage

First build the container with:

```
$ laconic-so build-containers --include cerc/builder-gerbil
```

Now, assuming a gerbil project located at `~/projects/my-project`, run bash in the container mounting the project with:

```
$ docker run -it -v $HOME/projects/my-project:/src cerc/builder-gerbil:latest bash
root@7c4124bb09e3:/src#
```

Now gerbil commands can be run.
app/data/container-build/cerc-builder-gerbil/entrypoint.sh (new executable file, 2 lines)

@@ -0,0 +1,2 @@
#!/bin/sh
exec "$@"
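The two lines above are the entire entrypoint: `exec "$@"` replaces the shell with whatever command `docker run` supplies, so that command becomes the container's main process and receives signals directly. A standalone sketch of the idiom (the temp script here is hypothetical, not part of the repo):

```shell
# Sketch of the exec "$@" pass-through idiom (throwaway temp script)
tmp_script=$(mktemp)
printf '#!/bin/sh\nexec "$@"\n' > "$tmp_script"
chmod +x "$tmp_script"
# The script replaces itself with the command it is given:
"$tmp_script" echo hello from the entrypoint
# → hello from the entrypoint
```

Because of `exec`, no intermediate shell lingers between the container runtime and the command, which is why this pattern is common for Docker entrypoints.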
app/data/container-build/cerc-builder-gerbil/install-dependencies.sh (new executable file, 16 lines)

@@ -0,0 +1,16 @@
DEPS=(github.com/fare/gerbil-utils
      github.com/fare/gerbil-poo
      github.com/fare/gerbil-crypto
      github.com/fare/gerbil-persist
      github.com/fare/gerbil-ethereum
      github.com/drewc/gerbil-swank
      github.com/drewc/drewc-r7rs-swank
      github.com/drewc/smug-gerbil
      github.com/drewc/ftw
      github.com/vyzo/gerbil-libp2p
     ) ;
for i in ${DEPS[@]} ; do
    echo "Installing gerbil package: $i"
    gxpkg install $i &&
    gxpkg build $i
done
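The install script follows a common bash pattern: iterate over an array of package paths and run a command per entry. A standalone sketch with dummy package names (no `gxpkg` calls, so it runs anywhere):

```shell
# Standalone sketch of the array-iteration pattern in install-dependencies.sh
# (the package paths are dummies; the real script runs gxpkg install/build per entry)
DEPS=(github.com/example/pkg-one
      github.com/example/pkg-two
      github.com/example/pkg-three
     )
for i in "${DEPS[@]}" ; do
    echo "Installing gerbil package: $i"
done
```

Note the quoted `"${DEPS[@]}"`: it preserves entries containing whitespace, whereas the unquoted `${DEPS[@]}` in the original relies on entries never containing spaces.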
@@ -13,6 +13,8 @@ if [[ -z "${CERC_NPM_AUTH_TOKEN}" ]]; then
    echo "CERC_NPM_AUTH_TOKEN is not set" >&2
    exit 1
fi
# Exit on error
set -e
local_npm_registry_url=$1
package_publish_version=$2
# TODO: make this a parameter and allow a list of scopes
@@ -17,6 +17,8 @@ if [[ $# -eq 2 ]]; then
else
    package_publish_version=$( cat package.json | jq -r .version )
fi
# Exit on error
set -e
# Get the name of this package from package.json since we weren't passed that
package_name=$( cat package.json | jq -r .name )
local_npm_registry_url=$1
@@ -24,7 +26,7 @@ npm config set @lirewine:registry ${local_npm_registry_url}
npm config set @cerc-io:registry ${local_npm_registry_url}
npm config set -- ${local_npm_registry_url}:_authToken ${CERC_NPM_AUTH_TOKEN}
# First check if the version of this package we're trying to build already exists in the registry
package_exists=$( yarn info --json ${package_name}@${package_publish_version} | jq -r .data.dist.tarball )
package_exists=$( yarn info --json ${package_name}@${package_publish_version} 2>/dev/null | jq -r .data.dist.tarball )
if [[ ! -z "$package_exists" && "$package_exists" != "null" ]]; then
    echo "${package_publish_version} of ${package_name} already exists in the registry, skipping build"
    exit 0
@@ -14,14 +14,23 @@ if [[ $# -ne 2 ]]; then
    echo "Illegal number of parameters" >&2
    exit 1
fi
# Exit on error
set -e
target_package=$1
local_npm_registry_url=$2
# TODO: use jq rather than sed here:
versioned_target_package=$(grep ${target_package} package.json | sed -e 's#[[:space:]]\{1,\}\"\('${target_package}'\)\":[[:space:]]\{1,\}\"\(.*\)\",#\1@\2#' )
# Use yarn info to get URL checksums etc from the new registry
yarn_info_output=$(yarn info --json $versioned_target_package 2>/dev/null)
# Code below parses out the values we need
# First check if the target version actually exists.
# If it doesn't exist there will be no .data.dist.tarball element,
# and jq will output the string "null"
package_tarball=$(echo $yarn_info_output | jq -r .data.dist.tarball)
if [[ $package_tarball == "null" ]]; then
    echo "FATAL: Target package version ($versioned_target_package) not found" >&2
    exit 1
fi
# Code below parses out the values we need
# When running inside a container, the registry can return a URL with the wrong host name due to proxying
# so we need to check if that has happened and fix the URL if so.
if ! [[ "${package_tarball}" =~ ^${local_npm_registry_url}.* ]]; then
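The `grep | sed` step above turns a `package.json` dependency line into a `name@version` spec. A standalone sketch applied to a hypothetical dependency line (the package name is an example, not taken from the PR):

```shell
# Standalone sketch of the grep/sed extraction, run on a sample line
# rather than a real package.json (name and version are made up)
target_package="laconic-sdk"
sample_line='  "laconic-sdk": "1.2.3",'
versioned=$(echo "$sample_line" | sed -e 's#[[:space:]]\{1,\}\"\('${target_package}'\)\":[[:space:]]\{1,\}\"\(.*\)\",#\1@\2#')
echo "$versioned"
# → laconic-sdk@1.2.3
```

The capture groups pick up the name and version, and `\1@\2` reassembles them; as the script's own TODO notes, `jq` would be a more robust way to do this.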
@@ -33,6 +42,7 @@ package_integrity=$(echo $yarn_info_output | jq -r .data.dist.integrity)
package_shasum=$(echo $yarn_info_output | jq -r .data.dist.shasum)
package_resolved=${package_tarball}#${package_shasum}
# Some strings need to be escaped so they work when passed to sed later
escaped_package_integrity=$(printf '%s\n' "$package_integrity" | sed -e 's/[\/&]/\\&/g')
escaped_package_resolved=$(printf '%s\n' "$package_resolved" | sed -e 's/[\/&]/\\&/g')
escaped_target_package=$(printf '%s\n' "$target_package" | sed -e 's/[\/&]/\\&/g')
if [ -n "$CERC_SCRIPT_VERBOSE" ]; then
@@ -44,4 +54,4 @@ fi
# Use magic sed regex to replace the values in yarn.lock
# Note: yarn.lock is not json so we can not use jq for this
sed -i -e '/^\"'${escaped_target_package}'.*\":$/ , /^\".*$/ s/^\([[:space:]]\{1,\}resolved \).*$/\1'\"${escaped_package_resolved}\"'/' yarn.lock
sed -i -e '/^\"'${escaped_target_package}'.*\":$/ , /^\".*$/ s/^\([[:space:]]\{1,\}integrity \).*$/\1'${package_integrity}'/' yarn.lock
sed -i -e '/^\"'${escaped_target_package}'.*\":$/ , /^\".*$/ s/^\([[:space:]]\{1,\}integrity \).*$/\1'${escaped_package_integrity}'/' yarn.lock
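The escaping step matters because `/` and `&` are special on the right-hand side of a `sed` substitution (`/` as the delimiter, `&` as "the whole match"). A standalone sketch with a made-up integrity string:

```shell
# Standalone sketch of the escaping step: backslash-escape `/` and `&`
# so a value is safe as a sed replacement (sample string is hypothetical)
package_integrity='sha512-Ab/Cd&Ef=='
escaped_package_integrity=$(printf '%s\n' "$package_integrity" | sed -e 's/[\/&]/\\&/g')
echo "$escaped_package_integrity"
# → sha512-Ab\/Cd\&Ef==
```

This is exactly why the PR's fix replaces `${package_integrity}` with `${escaped_package_integrity}` in the final `sed -i` line: an unescaped `/` inside the integrity hash would otherwise terminate the substitution early.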
@@ -1,5 +1,9 @@
#!/bin/bash

if [ -n "$CERC_SCRIPT_DEBUG" ]; then
    set -x
fi

ETHERBASE=`cat /opt/testnet/build/el/accounts.csv | head -1 | cut -d',' -f2`
NETWORK_ID=`cat /opt/testnet/el/el-config.yaml | grep 'chain_id' | awk '{ print $2 }'`
NETRESTRICT=`ip addr | grep inet | grep -v '127.0' | awk '{print $2}'`
@@ -28,7 +32,9 @@ else
    echo -n "$JWT" > /opt/testnet/build/el/jwtsecret

if [ "$CERC_RUN_STATEDIFF" == "detect" ] && [ -n "$CERC_STATEDIFF_DB_HOST" ]; then
    if [ -n "$(dig $CERC_STATEDIFF_DB_HOST +short)" ]; then
    dig_result=$(dig $CERC_STATEDIFF_DB_HOST +short)
    dig_status_code=$?
    if [[ $dig_status_code = 0 && -n $dig_result ]]; then
        echo "Statediff DB at $CERC_STATEDIFF_DB_HOST"
        CERC_RUN_STATEDIFF="true"
    else
@@ -86,6 +92,7 @@ else
    --mine \
    --miner.threads=1 \
    --metrics \
    --metrics.addr="0.0.0.0" \
    --verbosity=${CERC_GETH_VERBOSITY:-3} \
    --vmodule="${CERC_GETH_VMODULE:-statediff/*=5}" \
    --miner.etherbase="${ETHERBASE}" ${STATEDIFF_OPTS}
@@ -27,4 +27,8 @@ RUN cd /opt/testnet && make genesis-cl
# Work around some bugs in lcli where the default path is always used.
RUN mkdir -p /root/.lighthouse && cd /root/.lighthouse && ln -s /opt/testnet/build/cl/testnet

RUN mkdir -p /scripts
COPY scripts/status-internal.sh /scripts
COPY scripts/status.sh /scripts

ENTRYPOINT ["/opt/testnet/run.sh"]
@@ -0,0 +1,10 @@
#!/usr/bin/env bash
# Wrapper to facilitate using status.sh inside the container
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
    set -x
fi
export LIGHTHOUSE_BASE_URL="http://fixturenet-eth-lighthouse-1:8001"
export GETH_BASE_URL="http://fixturenet-eth-geth-1:8545"
# See: https://stackoverflow.com/a/246128/1701505
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
$SCRIPT_DIR/status.sh
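The `SCRIPT_DIR` line resolves the directory containing the script itself rather than the caller's working directory, so the wrapper can find `status.sh` no matter where it is invoked from. A standalone sketch using a hypothetical throwaway script in a temp directory:

```shell
# Sketch of the SCRIPT_DIR idiom: the script prints its own directory,
# regardless of the caller's cwd (the temp script is hypothetical)
tmpdir=$(mktemp -d)
cat > "$tmpdir/whereami.sh" <<'EOF'
#!/usr/bin/env bash
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
echo "$SCRIPT_DIR"
EOF
chmod +x "$tmpdir/whereami.sh"
cd / && "$tmpdir/whereami.sh"   # prints the temp directory, not "/"
```

`BASH_SOURCE[0]` is the path the script was invoked as; `cd` + `pwd` then normalizes a possibly relative path into an absolute one.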
@@ -1,5 +1,7 @@
#!/bin/bash

#!/usr/bin/env bash
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
    set -x
fi
STATUSES=("geth to generate DAG" "beacon phase0" "beacon altair" "beacon bellatrix pre-merge" "beacon bellatrix merge")
STATUS=0

@@ -7,13 +9,17 @@ STATUS=0
LIGHTHOUSE_BASE_URL=${LIGHTHOUSE_BASE_URL}
GETH_BASE_URL=${GETH_BASE_URL}

# TODO: Docker commands below should be replaced by some interface into stack orchestrator
# or some execution environment-neutral mechanism.
if [ -z "$LIGHTHOUSE_BASE_URL" ]; then
    LIGHTHOUSE_PORT=`docker ps -f "name=fixturenet-eth-lighthouse-1-1" --format "{{.Ports}}" | head -1 | cut -d':' -f2 | cut -d'-' -f1`
    LIGHTHOUSE_CONTAINER=`docker ps -q -f "name=fixturenet-eth-lighthouse-1-1"`
    LIGHTHOUSE_PORT=`docker port $LIGHTHOUSE_CONTAINER 8001 | cut -d':' -f2`
    LIGHTHOUSE_BASE_URL="http://localhost:${LIGHTHOUSE_PORT}"
fi

if [ -z "$GETH_BASE_URL" ]; then
    GETH_PORT=`docker ps -f "name=fixturenet-eth-geth-1-1" --format "{{.Ports}}" | head -1 | cut -d':' -f2 | cut -d'-' -f1`
    GETH_CONTAINER=`docker ps -q -f "name=fixturenet-eth-geth-1-1"`
    GETH_PORT=`docker port $GETH_CONTAINER 8545 | cut -d':' -f2`
    GETH_BASE_URL="http://localhost:${GETH_PORT}"
fi
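The new approach asks `docker port <container> <port>` for the published mapping and takes the part after the colon. A standalone sketch with a hypothetical sample of that command's output (no docker daemon needed):

```shell
# Sketch of the port-parsing step: `docker port <id> 8545` prints lines
# like "0.0.0.0:49153"; cut takes the part after the colon
# (the sample output below is made up)
sample_docker_port_output="0.0.0.0:49153"
GETH_PORT=$(echo "$sample_docker_port_output" | cut -d':' -f2)
echo "http://localhost:${GETH_PORT}"
# → http://localhost:49153
```

This is simpler and less fragile than the old `docker ps --format "{{.Ports}}"` parsing, which had to split on both `:` and `-`.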
@@ -1,4 +1,4 @@
FROM quay.io/keycloak/keycloak:20.0
WORKDIR /opt/keycloak/providers
RUN curl -L https://github.com/aerogear/keycloak-metrics-spi/releases/download/2.5.3/keycloak-metrics-spi-2.5.3.jar --output keycloak-metrics-spi.jar
RUN curl -L https://github.com/cerc-io/keycloak-api-key-demo/releases/download/v0.1/api-key-module-0.1.jar --output api-key-module.jar
RUN curl -L https://github.com/cerc-io/keycloak-api-key-demo/releases/download/v0.3/api-key-module-0.3.jar --output api-key-module.jar
app/data/container-build/cerc-laconic-explorer/Dockerfile (new file, 17 lines)

@@ -0,0 +1,17 @@
# copied and modified from erc20 Dockerfile

FROM node:16.17.1-alpine3.16

RUN apk --update --no-cache add git python3 alpine-sdk

WORKDIR /

COPY . .

RUN echo "Building Laconic Explorer" && \
    git checkout master && \
    yarn

EXPOSE 8080

CMD ["yarn", "serve"]
app/data/container-build/cerc-laconic-explorer/build.sh (new executable file, 7 lines)

@@ -0,0 +1,7 @@
#!/usr/bin/env bash
# Build cerc/laconic-explorer

# See: https://stackoverflow.com/a/246128/1701505
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )

docker build -t cerc/laconic-explorer:local -f ${SCRIPT_DIR}/Dockerfile ${CERC_REPO_BASE_DIR}/laconic-explorer
@@ -1,4 +1,11 @@
FROM alpine:latest
FROM ubuntu:latest

RUN apt-get update && export DEBIAN_FRONTEND=noninteractive && export DEBCONF_NOWARNINGS="yes" && \
    apt-get install -y software-properties-common && \
    apt-get install -y nginx && \
    apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

EXPOSE 80

COPY run.sh /app/run.sh

@@ -12,5 +12,5 @@ else
    echo `date` > $EXISTSFILENAME
fi

# Sleep forever to keep docker happy
while true; do sleep 10; done
# Run nginx which will block here forever
/usr/sbin/nginx -g "daemon off;"
@@ -1,6 +1,9 @@
#!/usr/bin/env bash
# Usage: default-build.sh <image-tag> [<repo-relative-path>]
# if <repo-relative-path> is not supplied, the context is the directory where the Dockerfile lives
if [[ -n "$CERC_SCRIPT_DEBUG" ]]; then
    set -x
fi
if [[ $# -ne 2 ]]; then
    echo "Illegal number of parameters" >&2
    exit 1
app/data/container-build/foundry-rs-foundry/build.sh (new executable file, 4 lines)

@@ -0,0 +1,4 @@
#!/usr/bin/env bash
# Build foundry-rs/foundry
# HACK below: TARGETARCH needs to be derived from the host environment
docker build -t foundry-rs/foundry:local ${CERC_REPO_BASE_DIR}/foundry
@@ -1,3 +1,4 @@
foundry-rs/foundry
cerc/test-contract
cerc/eth-statediff-fill-service
cerc/eth-statediff-service
@@ -22,3 +23,5 @@ cerc/eth-probe
cerc/builder-js
cerc/keycloak
cerc/tx-spammer
cerc/builder-gerbil
cerc/laconic-explorer
@@ -6,10 +6,10 @@ ipld-eth-beacon-db
ipld-eth-beacon-indexer
ipld-eth-server
lighthouse
prometheus-grafana
laconicd
fixturenet-laconicd
fixturenet-eth
fixturenet-eth-metrics
watcher-mobymask
watcher-erc20
watcher-erc721
@@ -18,3 +18,5 @@ test
eth-probe
keycloak
tx-spammer
kubo
laconic-explorer
@@ -16,3 +16,5 @@ vulcanize/uniswap-v3-info
vulcanize/assemblyscript
cerc-io/eth-probe
cerc-io/tx-spammer
foundry-rs/foundry
gateway-fm/laconic-explorer
30
app/data/stacks/build-support/README.md
Normal file
30
app/data/stacks/build-support/README.md
Normal file
@ -0,0 +1,30 @@
|
||||
# Build Support Stack
|
||||
|
||||
## Instructions
|
||||
|
||||
JS/TS/NPM builds need an npm registry to store intermediate package artifacts.
|
||||
This can be supplied by the user (e.g. using a hosted registry or even npmjs.com), or a local registry using gitea can be deployed by stack orchestrator.
|
||||
To use a user-supplied registry set these environment variables:
|
||||
|
||||
`CERC_NPM_REGISTRY_URL` and
|
||||
`CERC_NPM_AUTH_TOKEN`
|
||||
|
||||
Leave `CERC_NPM_REGISTRY_URL` un-set to use the local gitea registry.
|
||||
|
||||
### Build support containers
|
||||
```
|
||||
$ laconic-so --stack build-support build-containers
|
||||
```
|
||||
### Deploy Gitea Package Registry
|
||||
|
||||
```
|
||||
$ laconic-so --stack package-registry setup-repositories
|
||||
$ laconic-so --stack package-registry deploy-system up
|
||||
This is your gitea access token: 84fe66a73698bf11edbdccd0a338236b7d1d5c45. Keep it safe and secure, it can not be fetched again from gitea.
|
||||
$ export CERC_NPM_AUTH_TOKEN=84fe66a73698bf11edbdccd0a338236b7d1d5c45
|
||||
```
|
||||
Now npm packages can be built:
|
||||
### Build npm Packages
|
||||
```
|
||||
$ laconic-so build-npms --include laconic-sdk
|
||||
```
|
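For the user-supplied-registry path, the setup amounts to exporting the two variables before running the build commands. A minimal sketch (the URL and token below are placeholders, not real values):

```shell
# Hypothetical example: point builds at a user-supplied registry
# instead of the local gitea one (placeholder URL and token)
export CERC_NPM_REGISTRY_URL="https://npm.example.com"
export CERC_NPM_AUTH_TOKEN="replace-with-your-token"
echo "Publishing to: ${CERC_NPM_REGISTRY_URL}"
# → Publishing to: https://npm.example.com
```

Leaving `CERC_NPM_REGISTRY_URL` unset instead selects the gitea registry deployed by the package-registry stack above.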
app/data/stacks/build-support/stack.yml (new file, 6 lines)

@@ -0,0 +1,6 @@
version: "1.1"
name: build-support
description: "Build Support Components"
containers:
  - cerc/builder-js
  - cerc/builder-gerbil
@@ -1,74 +1,74 @@
# ERC20 Watcher

Instructions to deploy a local ERC20 watcher stack (core + watcher) for demonstration and testing purposes using [laconic-stack-orchestrator](../../README.md#setup)
Instructions to deploy a local ERC20 watcher stack (core + watcher) for demonstration and testing purposes using [stack orchestrator](/README.md#install)

## Setup

* Clone / pull required repositories:
Clone required repositories:

```bash
$ laconic-so setup-repositories --include cerc-io/go-ethereum,cerc-io/ipld-eth-db,cerc-io/ipld-eth-server,cerc-io/watcher-ts --pull
laconic-so --stack erc20 setup-repositories
```

* Build the core and watcher container images:
Build the core and watcher container images:

```bash
$ laconic-so build-containers --include cerc/go-ethereum,cerc/go-ethereum-foundry,cerc/ipld-eth-db,cerc/ipld-eth-server,cerc/watcher-erc20
laconic-so --stack erc20 build-containers
```

This should create the required docker images in the local image registry.

* Deploy the stack:
Deploy the stack:

```bash
$ laconic-so deploy-system --include db,go-ethereum-foundry,ipld-eth-server,watcher-erc20 up
laconic-so --stack erc20 deploy-system up
```

## Demo

* Find the watcher container's id using `docker ps` and export it for later use:
Find the watcher container's id using `docker ps` and export it for later use:

```bash
$ export CONTAINER_ID=<CONTAINER_ID>
export CONTAINER_ID=<CONTAINER_ID>
```

* Deploy an ERC20 token:
Deploy an ERC20 token:

```bash
$ docker exec $CONTAINER_ID yarn token:deploy:docker
docker exec $CONTAINER_ID yarn token:deploy:docker
```

Export the address of the deployed token to a shell variable for later use:

```bash
$ export TOKEN_ADDRESS=<TOKEN_ADDRESS>
export TOKEN_ADDRESS=<TOKEN_ADDRESS>
```

* Open `http://localhost:3002/graphql` (GraphQL Playground) in a browser window
Open `http://localhost:3002/graphql` (GraphQL Playground) in a browser window

* Connect MetaMask to `http://localhost:8545` (with chain ID `99`)
Connect MetaMask to `http://localhost:8545` (with chain ID `99`)

* Add the deployed token as an asset in MetaMask and check that the initial balance is zero
Add the deployed token as an asset in MetaMask and check that the initial balance is zero

* Export your MetaMask account (second account) address to a shell variable for later use:
Export your MetaMask account (second account) address to a shell variable for later use:

```bash
$ export RECIPIENT_ADDRESS=<RECIPIENT_ADDRESS>
export RECIPIENT_ADDRESS=<RECIPIENT_ADDRESS>
```

* To get the primary account's address, run:
To get the primary account's address, run:

```bash
$ docker exec $CONTAINER_ID yarn account:docker
docker exec $CONTAINER_ID yarn account:docker
```

* To get the current block hash at any time, run:
To get the current block hash at any time, run:

```bash
$ docker exec $CONTAINER_ID yarn block:latest:docker
docker exec $CONTAINER_ID yarn block:latest:docker
```

* Fire a GQL query in the playground to get the name, symbol and total supply of the deployed token:
Fire a GQL query in the playground to get the name, symbol and total supply of the deployed token:

```graphql
query {
@@ -104,7 +104,7 @@ Instructions to deploy a local ERC20 watcher stack (core + watcher) for demonstr
}
```

* Fire the following query to get balances for the primary and the recipient account at the latest block hash:
Fire the following query to get balances for the primary and the recipient account at the latest block hash:

```graphql
query {
@@ -132,26 +132,26 @@ Instructions to deploy a local ERC20 watcher stack (core + watcher) for demonstr
}
```

* The initial balance for the primary account should be `1000000000000000000000`
* The initial balance for the recipient should be `0`
- The initial balance for the primary account should be `1000000000000000000000`
- The initial balance for the recipient should be `0`

* Transfer tokens to the recipient account:
Transfer tokens to the recipient account:

```bash
$ docker exec $CONTAINER_ID yarn token:transfer:docker --token $TOKEN_ADDRESS --to $RECIPIENT_ADDRESS --amount 100
docker exec $CONTAINER_ID yarn token:transfer:docker --token $TOKEN_ADDRESS --to $RECIPIENT_ADDRESS --amount 100
```

* Fire the above GQL query again with the latest block hash to get updated balances for the primary (`from`) and the recipient (`to`) account:
Fire the above GQL query again with the latest block hash to get updated balances for the primary (`from`) and the recipient (`to`) account:

* The balance for the primary account should be reduced by the transfer amount (`100`)
* The balance for the recipient account should be equal to the transfer amount (`100`)
- The balance for the primary account should be reduced by the transfer amount (`100`)
- The balance for the recipient account should be equal to the transfer amount (`100`)

* Transfer funds between different accounts using MetaMask and use the playground to query the balance before and after the transfer.
Transfer funds between different accounts using MetaMask and use the playground to query the balance before and after the transfer.

## Clean up

* To stop all the services running in background run:
To stop all the services running in background run:

```bash
$ laconic-so deploy-system --include db,go-ethereum-foundry,ipld-eth-server,watcher-erc20 down
laconic-so --stack erc20 deploy-system down
```
@@ -13,6 +13,6 @@ containers:
  - cerc/watcher-erc20
pods:
  - go-ethereum-foundry
  - db
  - ipld-eth-db
  - ipld-eth-server
  - watcher-erc20
@@ -7,13 +7,13 @@ Instructions to deploy a local ERC721 watcher stack (core + watcher) for demonst
* Clone / pull required repositories:

```bash
$ laconic-so setup-repositories --include cerc-io/go-ethereum,cerc-io/ipld-eth-db,cerc-io/ipld-eth-server,cerc-io/watcher-ts --pull
laconic-so --stack erc721 setup-repositories
```

* Build the core and watcher container images:

```bash
$ laconic-so build-containers --include cerc/go-ethereum,cerc/go-ethereum-foundry,cerc/ipld-eth-db,cerc/ipld-eth-server,cerc/watcher-erc721
laconic-so --stack erc721 build-containers
```

This should create the required docker images in the local image registry.
@@ -21,7 +21,7 @@ Instructions to deploy a local ERC721 watcher stack (core + watcher) for demonst
* Deploy the stack:

```bash
$ laconic-so deploy-system --include db,go-ethereum-foundry,ipld-eth-server,watcher-erc721 up
laconic-so --stack erc721 deploy-system up
```

## Demo
@@ -210,5 +210,5 @@ Instructions to deploy a local ERC721 watcher stack (core + watcher) for demonst
* To stop all the services running in background:

```bash
$ laconic-so deploy-system --include db,go-ethereum-foundry,ipld-eth-server,watcher-erc721 down
laconic-so --stack erc721 deploy-system down
```
@@ -13,6 +13,6 @@ containers:
  - cerc/watcher-erc721
pods:
  - go-ethereum-foundry
  - db
  - ipld-eth-db
  - ipld-eth-server
  - watcher-erc721
app/data/stacks/fixturenet-eth-loaded/README.md (new file, 6 lines)

@@ -0,0 +1,6 @@
# fixturenet-eth

A "loaded" version of fixturenet-eth, with all the bells and whistles enabled.

TODO: write me
app/data/stacks/fixturenet-eth-loaded/stack.yml (new file, 24 lines)

@@ -0,0 +1,24 @@
version: "1.0"
name: fixturenet-eth-loaded
description: "Loaded Ethereum Fixturenet"
repos:
  - cerc-io/go-ethereum
  - cerc-io/tx-spammer
  - cerc-io/ipld-eth-server
  - cerc-io/ipld-eth-db
  - cerc/go-ethereum
containers:
  - cerc/lighthouse
  - cerc/fixturenet-eth-geth
  - cerc/fixturenet-eth-lighthouse
  - cerc/ipld-eth-server
  - cerc/ipld-eth-db
  - cerc/keycloak
  - cerc/tx-spammer
pods:
  - fixturenet-eth
  - tx-spammer
  - fixturenet-eth-metrics
  - keycloak
  - ipld-eth-server
  - ipld-eth-db
@@ -4,12 +4,12 @@ Instructions for deploying a local a geth + lighthouse blockchain "fixturenet" f

## Clone required repositories
```
$ laconic-so setup-repositories --include cerc-io/go-ethereum
$ laconic-so --stack fixturenet-eth setup-repositories
```

## Build the fixturenet-eth containers
```
$ laconic-so build-containers --include cerc/go-ethereum,cerc/lighthouse,cerc/fixturenet-eth-geth,cerc/fixturenet-eth-lighthouse
$ laconic-so --stack fixturenet-eth build-containers
```
This should create several container images in the local image registry:

@@ -20,7 +20,7 @@ This should create several container images in the local image registry:

## Deploy the stack
```
$ laconic-so deploy-system --include fixturenet-eth up
$ laconic-so --stack fixturenet-eth deploy-system up
```

## Check status
@@ -1,8 +1,9 @@
version: "1.0"
name: fixturenet-eth
description: "Ethereum Fixturenet"
repos:
  - cerc-io/go-ethereum
  - cerc-io/lighthouse
  - cerc-io/tx-spammer
containers:
  - cerc/go-ethereum
  - cerc/lighthouse
@@ -1,9 +1,13 @@
version: "1.0"
name: laconicd-fixturenet
name: fixturenet-laconicd
description: "A laconicd fixturenet"
repos:
  - cerc-io/laconicd
  - cerc-io/laconic-sdk
  - cerc-io/laconic-registry-cli
npms:
  - laconic-sdk
  - laconic-registry-cli
containers:
  - cerc/laconicd
  - cerc/laconic-registry-cli
app/data/stacks/package-registry/stack.yml (new file, 11 lines)

@@ -0,0 +1,11 @@
version: "1.1"
name: package-registry
description: "Local Package Registry"
repos:
  - cerc-io/hosting
pods:
  - name: gitea
    repository: cerc-io/hosting
    path: gitea
    pre_start_command: "run-this-first.sh"
    post_start_command: "initialize-gitea.sh"
app/data/stacks/test/README.md (new file, 3 lines)

@@ -0,0 +1,3 @@
# Test Stack

A stack for test/demo purposes.
app/data/stacks/test/stack.yml (new file, 9 lines)
@@ -0,0 +1,9 @@
version: "1.0"
name: test
description: "A test stack"
repos:
  - cerc-io/laconicd
containers:
  - cerc/test-container
pods:
  - test
@@ -1,2 +1,2 @@
# This file should be re-generated by running the scripts/update-version-file.sh script
-v1.0.7-alpha-7edfa1b
+v1.0.21-c52f9e6
@@ -1,4 +1,4 @@
-# Copyright © 2022 Cerc
+# Copyright © 2022, 2023 Cerc

# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
@@ -16,13 +16,16 @@
# Deploys the system components using docker-compose

import hashlib
+import copy
import os
import sys
from decouple import config
+import subprocess
from python_on_whales import DockerClient
import click
import importlib.resources
from pathlib import Path
-from .util import include_exclude_check
+from .util import include_exclude_check, get_parsed_stack_config


@click.command()
@@ -30,65 +33,64 @@ from .util import include_exclude_check
@click.option("--exclude", help="don't start these components")
@click.option("--cluster", help="specify a non-default cluster name")
@click.argument('command', required=True)  # help: command: up|down|ps
-@click.argument('services', nargs=-1)  # help: command: up|down|ps <service1> <service2>
+@click.argument('extra_args', nargs=-1)  # help: command: up|down|ps <service1> <service2>
@click.pass_context
-def command(ctx, include, exclude, cluster, command, services):
+def command(ctx, include, exclude, cluster, command, extra_args):
    '''deploy a stack'''

    # TODO: implement option exclusion and command value constraint lost with the move from argparse to click

    debug = ctx.obj.debug
    quiet = ctx.obj.quiet
    verbose = ctx.obj.verbose
    local_stack = ctx.obj.local_stack
    dry_run = ctx.obj.dry_run
+    stack = ctx.obj.stack

-    # See: https://stackoverflow.com/questions/25389095/python-get-path-of-root-project-structure
-    compose_dir = Path(__file__).absolute().parent.joinpath("data", "compose")
-
-    if cluster is None:
-        # Create default unique, stable cluster name from config file path
-        # TODO: change this to the config file path
-        path = os.path.realpath(sys.argv[0])
-        hash = hashlib.md5(path.encode()).hexdigest()
-        cluster = f"laconic-{hash}"
-        if verbose:
-            print(f"Using cluster name: {cluster}")
-
-    # See: https://stackoverflow.com/a/20885799/1701505
-    from . import data
-    with importlib.resources.open_text(data, "pod-list.txt") as pod_list_file:
-        pods = pod_list_file.read().splitlines()
-
-    if verbose:
-        print(f"Pods: {pods}")
-
-    # Construct a docker compose command suitable for our purpose
-
-    compose_files = []
-    for pod in pods:
-        if include_exclude_check(pod, include, exclude):
-            compose_file_name = os.path.join(compose_dir, f"docker-compose-{pod}.yml")
-            compose_files.append(compose_file_name)
-        else:
-            if verbose:
-                print(f"Excluding: {pod}")
-
-    if verbose:
-        print(f"files: {compose_files}")
+    cluster_context = _make_cluster_context(ctx.obj, include, exclude, cluster)

    # See: https://gabrieldemarmiesse.github.io/python-on-whales/sub-commands/compose/
-    docker = DockerClient(compose_files=compose_files, compose_project_name=cluster)
+    docker = DockerClient(compose_files=cluster_context.compose_files, compose_project_name=cluster_context.cluster)

-    services_list = list(services) or None
+    extra_args_list = list(extra_args) or None

    if not dry_run:
        if command == "up":
            if debug:
                os.environ["CERC_SCRIPT_DEBUG"] = "true"
            if verbose:
-                print(f"Running compose up for services: {services_list}")
-            docker.compose.up(detach=True, services=services_list)
+                print(f"Running compose up for extra_args: {extra_args_list}")
+            for pre_start_command in cluster_context.pre_start_commands:
+                _run_command(ctx.obj, cluster_context.cluster, pre_start_command)
+            docker.compose.up(detach=True, services=extra_args_list)
+            for post_start_command in cluster_context.post_start_commands:
+                _run_command(ctx.obj, cluster_context.cluster, post_start_command)
        elif command == "down":
            if verbose:
                print("Running compose down")
            docker.compose.down()
+        elif command == "exec":
+            if extra_args_list is None or len(extra_args_list) < 2:
+                print("Usage: exec <service> <cmd>")
+                sys.exit(1)
+            service_name = extra_args_list[0]
+            command_to_exec = extra_args_list[1:]
+            container_exec_env = {
+                "CERC_SCRIPT_DEBUG": "true"
+            } if debug else {}
+            if verbose:
+                print(f"Running compose exec {service_name} {command_to_exec}")
+            docker.compose.execute(service_name, command_to_exec, envs=container_exec_env)
+        elif command == "port":
+            if extra_args_list is None or len(extra_args_list) < 2:
+                print("Usage: port <service> <exposed-port>")
+                sys.exit(1)
+            service_name = extra_args_list[0]
+            exposed_port = extra_args_list[1]
+            if verbose:
+                print(f"Running compose port {service_name} {exposed_port}")
+            mapped_port_data = docker.compose.port(service_name, exposed_port)
+            print(f"{mapped_port_data[0]}:{mapped_port_data[1]}")
        elif command == "ps":
            if verbose:
                print("Running compose ps")
@@ -114,3 +116,133 @@ def command(ctx, include, exclude, cluster, command, services):
        if verbose:
            print("Running compose logs")
        docker.compose.logs()
+
+
+def get_stack_status(ctx, stack):
+
+    ctx_copy = copy.copy(ctx)
+    ctx_copy.stack = stack
+
+    cluster_context = _make_cluster_context(ctx_copy, None, None, None)
+    docker = DockerClient(compose_files=cluster_context.compose_files, compose_project_name=cluster_context.cluster)
+    # TODO: refactor to avoid duplicating this code above
+    if ctx.verbose:
+        print("Running compose ps")
+    container_list = docker.compose.ps()
+    if len(container_list) > 0:
+        if ctx.debug:
+            print(f"Container list from compose ps: {container_list}")
+        return True
+    else:
+        if ctx.debug:
+            print("No containers found from compose ps")
+        return False
+
+
+def _make_cluster_context(ctx, include, exclude, cluster):
+
+    if ctx.local_stack:
+        dev_root_path = os.getcwd()[0:os.getcwd().rindex("stack-orchestrator")]
+        print(f'Local stack dev_root_path (CERC_REPO_BASE_DIR) overridden to: {dev_root_path}')
+    else:
+        dev_root_path = os.path.expanduser(config("CERC_REPO_BASE_DIR", default="~/cerc"))
+
+    # See: https://stackoverflow.com/questions/25389095/python-get-path-of-root-project-structure
+    compose_dir = Path(__file__).absolute().parent.joinpath("data", "compose")
+
+    if cluster is None:
+        # Create default unique, stable cluster name from config file path
+        # TODO: change this to the config file path
+        path = os.path.realpath(sys.argv[0])
+        hash = hashlib.md5(path.encode()).hexdigest()
+        cluster = f"laconic-{hash}"
+        if ctx.verbose:
+            print(f"Using cluster name: {cluster}")
+
+    # See: https://stackoverflow.com/a/20885799/1701505
+    from . import data
+    with importlib.resources.open_text(data, "pod-list.txt") as pod_list_file:
+        all_pods = pod_list_file.read().splitlines()
+
+    pods_in_scope = []
+    if ctx.stack:
+        stack_config = get_parsed_stack_config(ctx.stack)
+        # TODO: syntax check the input here
+        pods_in_scope = stack_config['pods']
+    else:
+        pods_in_scope = all_pods
+
+    # Convert all pod definitions to v1.1 format
+    pods_in_scope = _convert_to_new_format(pods_in_scope)
+
+    if ctx.verbose:
+        print(f"Pods: {pods_in_scope}")
+
+    # Construct a docker compose command suitable for our purpose
+
+    compose_files = []
+    pre_start_commands = []
+    post_start_commands = []
+    for pod in pods_in_scope:
+        pod_name = pod["name"]
+        pod_repository = pod["repository"]
+        pod_path = pod["path"]
+        if include_exclude_check(pod_name, include, exclude):
+            if pod_repository is None or pod_repository == "internal":
+                compose_file_name = os.path.join(compose_dir, f"docker-compose-{pod_path}.yml")
+            else:
+                pod_root_dir = os.path.join(dev_root_path, pod_repository.split("/")[-1], pod["path"])
+                compose_file_name = os.path.join(pod_root_dir, "docker-compose.yml")
+                pod_pre_start_command = pod["pre_start_command"]
+                pod_post_start_command = pod["post_start_command"]
+                if pod_pre_start_command is not None:
+                    pre_start_commands.append(os.path.join(pod_root_dir, pod_pre_start_command))
+                if pod_post_start_command is not None:
+                    post_start_commands.append(os.path.join(pod_root_dir, pod_post_start_command))
+            compose_files.append(compose_file_name)
+        else:
+            if ctx.verbose:
+                print(f"Excluding: {pod_name}")
+
+    if ctx.verbose:
+        print(f"files: {compose_files}")
+
+    return cluster_context(cluster, compose_files, pre_start_commands, post_start_commands)
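As an aside, the default cluster-name scheme used above is worth calling out: hashing a stable filesystem path gives every installation its own deterministic compose project name. A minimal standalone sketch (the function name here is ours, not the PR's):

```python
import hashlib

def default_cluster_name(path: str) -> str:
    # Stable, unique name derived from a path, mirroring the
    # "laconic-<md5>" scheme in _make_cluster_context above.
    digest = hashlib.md5(path.encode()).hexdigest()
    return f"laconic-{digest}"
```

The same input always yields the same name, so repeated `up`/`down` invocations address the same compose project.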
+
+
+class cluster_context:
+    def __init__(self, cluster, compose_files, pre_start_commands, post_start_commands) -> None:
+        self.cluster = cluster
+        self.compose_files = compose_files
+        self.pre_start_commands = pre_start_commands
+        self.post_start_commands = post_start_commands
+def _convert_to_new_format(old_pod_array):
+    new_pod_array = []
+    for old_pod in old_pod_array:
+        if isinstance(old_pod, dict):
+            new_pod_array.append(old_pod)
+        else:
+            new_pod = {
+                "name": old_pod,
+                "repository": "internal",
+                "path": old_pod
+            }
+            new_pod_array.append(new_pod)
+    return new_pod_array
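For illustration, the v1.0-to-v1.1 normalization above can be exercised standalone: v1.0 stacks name pods with bare strings, v1.1 uses dicts, and both are normalized to the dict form, with `"internal"` marking pods shipped inside the orchestrator. This sketch is a free-function copy of the logic (the name `convert_to_new_format` is ours):

```python
def convert_to_new_format(old_pod_array):
    # Normalize a mixed v1.0/v1.1 pod list to the v1.1 dict format.
    new_pod_array = []
    for old_pod in old_pod_array:
        if isinstance(old_pod, dict):
            # Already v1.1: pass through unchanged.
            new_pod_array.append(old_pod)
        else:
            # v1.0 bare string: pod lives in the orchestrator itself.
            new_pod_array.append({
                "name": old_pod,
                "repository": "internal",
                "path": old_pod,
            })
    return new_pod_array
```

So `["geth"]` becomes `[{"name": "geth", "repository": "internal", "path": "geth"}]`, while an already-v1.1 entry like the gitea pod passes through untouched.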
+
+
+def _run_command(ctx, cluster_name, command):
+    if ctx.verbose:
+        print(f"Running command: {command}")
+    command_dir = os.path.dirname(command)
+    command_file = os.path.join(".", os.path.basename(command))
+    command_env = os.environ.copy()
+    command_env["CERC_SO_COMPOSE_PROJECT"] = cluster_name
+    if ctx.debug:
+        command_env["CERC_SCRIPT_DEBUG"] = "true"
+    command_result = subprocess.run(command_file, shell=True, env=command_env, cwd=command_dir)
+    if command_result.returncode != 0:
+        print(f"FATAL Error running command: {command}")
+        sys.exit(1)
@@ -23,6 +23,8 @@ import git
from tqdm import tqdm
import click
import importlib.resources
+from pathlib import Path
+import yaml
from .util import include_exclude_check


@@ -64,9 +66,11 @@ def command(ctx, include, exclude, git_ssh, check_only, pull, branches_file):
    quiet = ctx.obj.quiet
    verbose = ctx.obj.verbose
    dry_run = ctx.obj.dry_run
+    stack = ctx.obj.stack

    branches = []

    # TODO: branches file needs to be re-worked in the context of stacks
    if branches_file:
        if verbose:
            print(f"loading branches from: {branches_file}")
@@ -96,11 +100,25 @@ def command(ctx, include, exclude, git_ssh, check_only, pull, branches_file):
    with importlib.resources.open_text(data, "repository-list.txt") as repository_list_file:
        all_repos = repository_list_file.read().splitlines()

+    repos_in_scope = []
+    if stack:
+        # In order to be compatible with Python 3.8 we need to use this hack to get the path:
+        # See: https://stackoverflow.com/questions/25389095/python-get-path-of-root-project-structure
+        stack_file_path = Path(__file__).absolute().parent.joinpath("data", "stacks", stack, "stack.yml")
+        with open(stack_file_path, "r") as stack_file:
+            stack_config = yaml.safe_load(stack_file)
+        # TODO: syntax check the input here
+        repos_in_scope = stack_config['repos']
+    else:
+        repos_in_scope = all_repos
+
    if verbose:
-        print(f"Repos: {all_repos}")
+        print(f"Repos: {repos_in_scope}")
+        if stack:
+            print(f"Stack: {stack}")

    repos = []
-    for repo in all_repos:
+    for repo in repos_in_scope:
        if include_exclude_check(repo, include, exclude):
            repos.append(repo)
        else:
app/util.py (28 changed lines)
@@ -1,4 +1,4 @@
-# Copyright © 2022 Cerc
+# Copyright © 2022, 2023 Cerc

# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
@@ -13,6 +13,12 @@
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

+import os.path
+import sys
+import yaml
+from pathlib import Path


def include_exclude_check(s, include, exclude):
    if include is None and exclude is None:
        return True
@@ -22,3 +28,23 @@ def include_exclude_check(s, include, exclude):
    if exclude is not None:
        exclude_list = exclude.split(",")
        return s not in exclude_list
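The include/exclude filter above gates whether a named item (repo, container, or pod) is in scope, given the comma-separated `--include`/`--exclude` option strings. A self-contained sketch of the whole function (the `include` branch is folded out of the hunk above, so its body here is our assumption based on the symmetric `exclude` branch):

```python
def include_exclude_check(s, include, exclude):
    # No filters: everything is in scope.
    if include is None and exclude is None:
        return True
    # --include acts as an allow-list.
    if include is not None:
        return s in include.split(",")
    # --exclude acts as a deny-list.
    if exclude is not None:
        return s not in exclude.split(",")
```

For example, with `--include geth,lighthouse`, only those two names pass; with `--exclude geth`, everything but `geth` passes.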
+
+
+def get_parsed_stack_config(stack):
+    # In order to be compatible with Python 3.8 we need to use this hack to get the path:
+    # See: https://stackoverflow.com/questions/25389095/python-get-path-of-root-project-structure
+    stack_file_path = Path(__file__).absolute().parent.joinpath("data", "stacks", stack, "stack.yml")
+    try:
+        with open(stack_file_path, "r") as stack_file:
+            stack_config = yaml.safe_load(stack_file)
+        return stack_config
+    except FileNotFoundError as error:
+        # We try here to generate a useful diagnostic error
+        # First check if the stack directory is present
+        stack_directory = stack_file_path.parent
+        if os.path.exists(stack_directory):
+            print(f"Error: stack.yml file is missing from stack: {stack}")
+        else:
+            print(f"Error: stack: {stack} does not exist")
+        print(f"Exiting, error: {error}")
+        sys.exit(1)
cli.py (12 changed lines)
@@ -24,26 +24,32 @@ from app import version
CONTEXT_SETTINGS = dict(help_option_names=['-h', '--help'])


# TODO: this seems kind of weird and heavy on boilerplate -- check it is
# the best Python can do for us.
class Options(object):
-    def __init__(self, quiet, verbose, dry_run, local_stack, debug):
+    def __init__(self, stack, quiet, verbose, dry_run, local_stack, debug, continue_on_error):
+        self.stack = stack
        self.quiet = quiet
        self.verbose = verbose
        self.dry_run = dry_run
        self.local_stack = local_stack
        self.debug = debug
+        self.continue_on_error = continue_on_error


@click.group(context_settings=CONTEXT_SETTINGS)
+@click.option('--stack', help="specify a stack to build/deploy")
@click.option('--quiet', is_flag=True, default=False)
@click.option('--verbose', is_flag=True, default=False)
@click.option('--dry-run', is_flag=True, default=False)
@click.option('--local-stack', is_flag=True, default=False)
@click.option('--debug', is_flag=True, default=False)
+@click.option('--continue-on-error', is_flag=True, default=False)
# See: https://click.palletsprojects.com/en/8.1.x/complex/#building-a-git-clone
@click.pass_context
-def cli(ctx, quiet, verbose, dry_run, local_stack, debug):
+def cli(ctx, stack, quiet, verbose, dry_run, local_stack, debug, continue_on_error):
    """Laconic Stack Orchestrator"""
-    ctx.obj = Options(quiet, verbose, dry_run, local_stack, debug)
+    ctx.obj = Options(stack, quiet, verbose, dry_run, local_stack, debug, continue_on_error)


cli.add_command(setup_repositories.command, "setup-repositories")
docs/CONTRIBUTING.md (new file, 104 lines)
@@ -0,0 +1,104 @@
# Contributing

Thank you for taking the time to make a contribution to Stack Orchestrator.

## Install (developer mode)

Suitable for developers either modifying or debugging the orchestrator Python code:

### Prerequisites

In addition to the prerequisites listed in the [README](/README.md), the following are required:

1. Python venv package

   This may or may not be already installed depending on the host OS and version. Check by running:
   ```
   $ python3 -m venv
   usage: venv [-h] [--system-site-packages] [--symlinks | --copies] [--clear] [--upgrade] [--without-pip] [--prompt PROMPT] ENV_DIR [ENV_DIR ...]
   venv: error: the following arguments are required: ENV_DIR
   ```
   If the venv package is missing you should see a message indicating how to install it, for example with:
   ```
   $ apt install python3.10-venv
   ```

### Install

1. Clone this repository:
   ```
   $ git clone https://github.com/cerc-io/stack-orchestrator.git
   ```

2. Enter the project directory:
   ```
   $ cd stack-orchestrator
   ```

3. Create and activate a venv:
   ```
   $ python3 -m venv venv
   $ source ./venv/bin/activate
   (venv) $
   ```

4. Install the cli in edit mode:
   ```
   $ pip install --editable .
   ```

5. Verify installation:
   ```
   (venv) $ laconic-so
   Usage: laconic-so [OPTIONS] COMMAND [ARGS]...

     Laconic Stack Orchestrator

   Options:
     --quiet
     --verbose
     --dry-run
     -h, --help  Show this message and exit.

   Commands:
     build-containers    build the set of containers required for a complete...
     deploy-system       deploy a stack
     setup-repositories  git clone the set of repositories required to build...
   ```

## Build a zipapp (single file distributable script)

Use shiv to build a single-file Python executable zip archive of laconic-so:

1. Install [shiv](https://github.com/linkedin/shiv):
   ```
   (venv) $ pip install shiv
   (venv) $ pip install wheel
   ```

2. Run shiv to create a zipapp file:
   ```
   (venv) $ shiv -c laconic-so -o laconic-so .
   ```
   This creates a file `./laconic-so` that is executable outside of any venv, on other machines, OSes, and architectures, requiring only the system Python3.

3. Verify it works:
   ```
   $ cp stack-orchestrator/laconic-so ~/bin
   $ laconic-so
   Usage: python -m laconic-so [OPTIONS] COMMAND [ARGS]...

     Laconic Stack Orchestrator

   Options:
     --quiet
     --verbose
     --dry-run
     -h, --help  Show this message and exit.

   Commands:
     build-containers    build the set of containers required for a complete...
     deploy-system       deploy a stack
     setup-repositories  git clone the set of repositories required to build...
   ```

For cutting releases, use the [shiv build script](/scripts/build_shiv_package.sh).
docs/cli.md (new file, 3 lines)
@@ -0,0 +1,3 @@
# laconic-so

Sub-commands and flags
docs/images/laconic-stack.png (new binary file, 999 KiB)
docs/release-process.md (new file, 29 lines)
@@ -0,0 +1,29 @@
# Release Process

## Manually publish to github releases

In order to build, the shiv and wheel packages must be installed:
```
$ pip install shiv
$ pip install wheel
```

Then:

1. Define `CERC_GH_RELEASE_SCRIPTS_DIR`
1. Define `CERC_PACKAGE_RELEASE_GITHUB_TOKEN`
1. Run `./scripts/tag_new_release.sh <major> <minor> <patch>`
1. Run `./scripts/build_shiv_package.sh`
1. Run `./scripts/publish_shiv_package_github.sh <major> <minor> <patch>`
1. Commit the new version file.

e.g.

```
$ export CERC_GH_RELEASE_SCRIPTS_DIR=~/projects/cerc/github-release-api/
$ export CERC_PACKAGE_RELEASE_GITHUB_TOKEN=github_pat_xxxxxx
$ ./scripts/tag_new_release.sh 1 0 17
$ ./scripts/build_shiv_package.sh
$ ./scripts/publish_shiv_package_github.sh 1 0 17
```
docs/spec.md (new file, 82 lines)
@@ -0,0 +1,82 @@
# Specification

## Implementation

The orchestrator's operation is driven by the files shown below.

- `repository-list.txt` contains the list of git repositories
- `container-image-list.txt` contains the list of container image names
- `pod-list.txt` specifies the set of compose components (corresponding to individual docker-compose-xxx.yml files, which may in turn specify more than one container)
- `container-build/` contains the files required to build each container image
- `config/` contains the files required at deploy time

```
├── container-image-list.txt
├── pod-list.txt
├── repository-list.txt
├── compose
│   ├── docker-compose-contract.yml
│   ├── docker-compose-eth-probe.yml
│   ├── docker-compose-eth-statediff-fill-service.yml
│   ├── docker-compose-fixturenet-eth.yml
│   ├── docker-compose-fixturenet-laconicd.yml
│   ├── docker-compose-go-ethereum-foundry.yml
│   ├── docker-compose-ipld-eth-beacon-db.yml
│   ├── docker-compose-ipld-eth-beacon-indexer.yml
│   ├── docker-compose-ipld-eth-db.yml
│   ├── docker-compose-ipld-eth-server.yml
│   ├── docker-compose-keycloak.yml
│   ├── docker-compose-laconicd.yml
│   ├── docker-compose-prometheus-grafana.yml
│   ├── docker-compose-test.yml
│   ├── docker-compose-tx-spammer.yml
│   ├── docker-compose-watcher-erc20.yml
│   ├── docker-compose-watcher-erc721.yml
│   ├── docker-compose-watcher-mobymask.yml
│   └── docker-compose-watcher-uniswap-v3.yml
├── config
│   ├── fixturenet-eth
│   ├── fixturenet-laconicd
│   ├── ipld-eth-beacon-indexer
│   ├── ipld-eth-server
│   ├── keycloak
│   ├── postgresql
│   ├── tx-spammer
│   ├── watcher-erc20
│   ├── watcher-erc721
│   ├── watcher-mobymask
│   └── watcher-uniswap-v3
├── container-build
│   ├── cerc-builder-js
│   ├── cerc-eth-probe
│   ├── cerc-eth-statediff-fill-service
│   ├── cerc-eth-statediff-service
│   ├── cerc-fixturenet-eth-geth
│   ├── cerc-fixturenet-eth-lighthouse
│   ├── cerc-go-ethereum
│   ├── cerc-go-ethereum-foundry
│   ├── cerc-ipld-eth-beacon-db
│   ├── cerc-ipld-eth-beacon-indexer
│   ├── cerc-ipld-eth-db
│   ├── cerc-ipld-eth-server
│   ├── cerc-keycloak
│   ├── cerc-laconic-registry-cli
│   ├── cerc-laconicd
│   ├── cerc-lighthouse
│   ├── cerc-test-container
│   ├── cerc-test-contract
│   ├── cerc-tx-spammer
│   ├── cerc-uniswap-v3-info
│   ├── cerc-watcher-erc20
│   ├── cerc-watcher-erc721
│   ├── cerc-watcher-mobymask
│   └── cerc-watcher-uniswap-v3
└── stacks
    ├── erc20
    ├── erc721
    ├── fixturenet-eth
    ├── fixturenet-laconicd
    ├── mobymask
    └── uniswap-v3
```
@@ -1,5 +1,6 @@
python-decouple>=3.6
GitPython>=3.1.27
tqdm>=4.64.0
-python-on-whales>=0.52.0
+python-on-whales>=0.58.0
click>=8.1.3
+pyyaml>=6.0
scripts/publish_shiv_package_github.sh (new executable file, 40 lines)
@@ -0,0 +1,40 @@
#!/usr/bin/env bash
# Usage: publish_shiv_package_github.sh <major> <minor> <patch>
# Uses this script package to publish a new release:
# https://github.com/cerc-io/github-release-api
# User must define: CERC_GH_RELEASE_SCRIPTS_DIR
# pointing to the location of that cloned repository
# e.g.
# cd ~/projects
# git clone https://github.com/cerc-io/github-release-api
# cd ./stack-orchestrator
# export CERC_GH_RELEASE_SCRIPTS_DIR=~/projects/github-release-api
# ./scripts/publish_shiv_package_github.sh
# In addition, a valid GitHub token must be defined in
# CERC_PACKAGE_RELEASE_GITHUB_TOKEN
if [[ -z "${CERC_PACKAGE_RELEASE_GITHUB_TOKEN}" ]]; then
    echo "CERC_PACKAGE_RELEASE_GITHUB_TOKEN is not set" >&2
    exit 1
fi
# TODO: check args and env vars
major=$1
minor=$2
patch=$3
export PATH=$CERC_GH_RELEASE_SCRIPTS_DIR:$PATH
github_org="cerc-io"
github_repository="stack-orchestrator"
latest_package=$(ls -1t ./package/* | head -1)
uploaded_package="./package/laconic-so"
# Remove any old package
rm -f ${uploaded_package}
cp ${latest_package} ${uploaded_package}
github_release_manager.sh \
    -l notused -t ${CERC_PACKAGE_RELEASE_GITHUB_TOKEN} \
    -o ${github_org} -r ${github_repository} \
    -d v${major}.${minor}.${patch} \
    -c create -m "Release v${major}.${minor}.${patch}"
github_release_manager.sh \
    -l notused -t ${CERC_PACKAGE_RELEASE_GITHUB_TOKEN} \
    -o ${github_org} -r ${github_repository} \
    -d v${major}.${minor}.${patch} \
    -c upload ${uploaded_package}
scripts/tag_new_release.sh (new executable file, 17 lines)
@@ -0,0 +1,17 @@
#!/usr/bin/env bash
# Usage: tag_new_release.sh <major> <minor> <patch>
# Uses this script package to tag a new release:
# User must define: CERC_GH_RELEASE_SCRIPTS_DIR
# pointing to the location of that cloned repository
# e.g.
# cd ~/projects
# git clone https://github.com/cerc-io/github-release-api
# cd ./stack-orchestrator
# export CERC_GH_RELEASE_SCRIPTS_DIR=~/projects/github-release-api
# ./scripts/tag_new_release.sh
# TODO: check args and env vars
major=$1
minor=$2
patch=$3
export PATH=$CERC_GH_RELEASE_SCRIPTS_DIR:$PATH
git_tag_manager.sh -M ${major} -m ${minor} -p ${patch} -t "Release ${major}.${minor}.${patch}"
setup.py (2 changed lines)
@@ -6,7 +6,7 @@ with open("requirements.txt", "r", encoding="utf-8") as fh:
    requirements = fh.read()
setup(
    name='laconic-stack-orchestrator',
-    version='0.0.5',
+    version='1.0.12',
    author='Cerc',
    author_email='info@cerc.io',
    license='GNU Affero General Public License',
@@ -13,13 +13,15 @@ rm -rf $CERC_REPO_BASE_DIR
mkdir -p $CERC_REPO_BASE_DIR
# Pull an example small public repo to test we can pull a repo
$TEST_TARGET_SO setup-repositories --include cerc-io/laconic-sdk
+# TODO: test building the repo into a container
-# Build two example containers
-$TEST_TARGET_SO build-containers --include cerc/builder-js,cerc/test-container
+# Test pulling a stack
+$TEST_TARGET_SO --stack test setup-repositories
+# Test building a stack container
+$TEST_TARGET_SO --stack test build-containers
+# Build one example container
+$TEST_TARGET_SO build-containers --include cerc/builder-js
# Deploy the test container
-$TEST_TARGET_SO deploy-system --include test up
+$TEST_TARGET_SO --stack test deploy-system up
# TODO: test that we can use the deployed container somehow
# Clean up
-$TEST_TARGET_SO deploy-system --include test down
+$TEST_TARGET_SO --stack test deploy-system down
echo "Test passed"