Compare commits

..

2 Commits

A. F. Dudley · 8426d99ed9 · 2026-01-20 04:08:16 -05:00

Add kind cluster reuse and list command

- Add get_kind_cluster() to detect existing kind clusters
- Modify create_cluster() to reuse existing clusters automatically
- Add 'laconic-so deploy k8s list cluster' command
- Skip --stack requirement for k8s subcommand

This allows multiple deployments to share the same kind cluster,
simplifying local development workflows.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

A. F. Dudley · 3606b5dd90 · 2026-01-20 02:39:01 -05:00

Add Caddy ingress controller support for kind deployments

Replace nginx with Caddy as the default ingress controller for kind
deployments. Caddy provides automatic HTTPS via Let's Encrypt without
requiring cert-manager.

Changes:
- Add ingress-caddy-kind-deploy.yaml manifest with full RBAC setup
- Modify helpers.py to support configurable ingress_type parameter
- Update cluster_info.py to use caddy ingress class
- Add port 443 mapping for HTTPS support in kind config

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
147 changed files with 1380 additions and 5796 deletions


@@ -2,8 +2,7 @@ name: Deploy Test
 on:
   pull_request:
-    branches:
-      - main
+    branches: '*'
   push:
     branches:
       - main


@@ -2,8 +2,7 @@ name: K8s Deploy Test
 on:
   pull_request:
-    branches:
-      - main
+    branches: '*'
   push:
     branches: '*'
     paths:


@@ -2,8 +2,7 @@ name: K8s Deployment Control Test
 on:
   pull_request:
-    branches:
-      - main
+    branches: '*'
   push:
     branches: '*'
     paths:


@@ -2,8 +2,7 @@ name: Webapp Test
 on:
   pull_request:
-    branches:
-      - main
+    branches: '*'
   push:
     branches:
       - main


@@ -1,3 +1 @@
 Change this file to trigger running the test-container-registry CI job
-Triggered: 2026-01-21
-Triggered: 2026-01-21 19:28:29


@@ -1,2 +1,2 @@
Change this file to trigger running the test-database CI job
Trigger test run


@@ -1 +1,2 @@
Change this file to trigger running the fixturenet-eth-test CI job


@@ -1,34 +0,0 @@
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v5.0.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-yaml
        args: ['--allow-multiple-documents']
      - id: check-json
      - id: check-merge-conflict
      - id: check-added-large-files
  - repo: https://github.com/psf/black
    rev: 23.12.1
    hooks:
      - id: black
        language_version: python3
  - repo: https://github.com/PyCQA/flake8
    rev: 7.1.1
    hooks:
      - id: flake8
        args: ['--max-line-length=88', '--extend-ignore=E203,W503,E402']
  - repo: https://github.com/RobertCraigie/pyright-python
    rev: v1.1.345
    hooks:
      - id: pyright
  - repo: https://github.com/adrienverge/yamllint
    rev: v1.35.1
    hooks:
      - id: yamllint
        args: [-d, relaxed]

CLAUDE.md (121 deletions)

@@ -1,121 +0,0 @@
# CLAUDE.md
This file provides guidance to Claude Code when working with the stack-orchestrator project.
## Some rules to follow
NEVER speculate about the cause of something
NEVER assume your hypotheses are true without evidence
ALWAYS clearly state when something is a hypothesis
ALWAYS use evidence from the systems you're interacting with to support your claims and hypotheses
ALWAYS run `pre-commit run --all-files` before committing changes
## Key Principles
### Development Guidelines
- **Single responsibility** - Each component has one clear purpose
- **Fail fast** - Let errors propagate, don't hide failures
- **DRY/KISS** - Minimize duplication and complexity
## Development Philosophy: Conversational Literate Programming
### Approach
This project follows principles inspired by literate programming, where development happens through explanatory conversation rather than code-first implementation.
### Core Principles
- **Documentation-First**: All changes begin with discussion of intent and reasoning
- **Narrative-Driven**: Complex systems are explained through conversational exploration
- **Justification Required**: Every coding task must have a corresponding TODO.md item explaining the "why"
- **Iterative Understanding**: Architecture and implementation evolve through dialogue
### Working Method
1. **Explore and Understand**: Read existing code to understand current state
2. **Discuss Architecture**: Workshop complex design decisions through conversation
3. **Document Intent**: Update TODO.md with clear justification before coding
4. **Explain Changes**: Each modification includes reasoning and context
5. **Maintain Narrative**: Conversations serve as living documentation of design evolution
### Implementation Guidelines
- Treat conversations as primary documentation
- Explain architectural decisions before implementing
- Use TODO.md as the "literate document" that justifies all work
- Maintain clear narrative threads across sessions
- Workshop complex ideas before coding
This approach treats the human-AI collaboration as a form of **conversational literate programming** where understanding emerges through dialogue before code implementation.
## External Stacks Preferred
When creating new stacks for any reason, **use the external stack pattern** rather than adding stacks directly to this repository.
External stacks follow this structure:
```
my-stack/
└── stack-orchestrator/
├── stacks/
│ └── my-stack/
│ ├── stack.yml
│ └── README.md
├── compose/
│ └── docker-compose-my-stack.yml
└── config/
└── my-stack/
└── (config files)
```
### Usage
```bash
# Fetch external stack
laconic-so fetch-stack github.com/org/my-stack
# Use external stack
STACK_PATH=~/cerc/my-stack/stack-orchestrator/stacks/my-stack
laconic-so --stack $STACK_PATH deploy init --output spec.yml
laconic-so --stack $STACK_PATH deploy create --spec-file spec.yml --deployment-dir deployment
laconic-so deployment --dir deployment start
```
### Examples
- `zenith-karma-stack` - Karma watcher deployment
- `urbit-stack` - Fake Urbit ship for testing
- `zenith-desk-stack` - Desk deployment stack
## Architecture: k8s-kind Deployments
### One Cluster Per Host
One Kind cluster per host by design. Never request or expect separate clusters.
- `create_cluster()` in `helpers.py` reuses any existing cluster
- `cluster-id` in deployment.yml is an identifier, not a cluster request
- All deployments share: ingress controller, etcd, certificates
### Stack Resolution
- External stacks detected via `Path(stack).exists()` in `util.py`
- Config/compose resolution: external path first, then internal fallback
- External path structure: `stack_orchestrator/data/stacks/<name>/stack.yml`
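The external-first lookup can be sketched as follows (hypothetical function and parameter names, not the actual `util.py` API):

```python
from pathlib import Path

def resolve_stack_yml(stack: str, internal_data_dir: Path) -> Path:
    """Resolve a stack argument to its stack.yml, external path first."""
    external = Path(stack)
    if external.exists():                        # external stack directory
        return external / "stack.yml"
    # Internal fallback: stack bundled under stack_orchestrator/data/stacks/
    return internal_data_dir / "stacks" / stack / "stack.yml"
```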
### Secret Generation Implementation
- `GENERATE_TOKEN_PATTERN` in `deployment_create.py` matches `$generate:type:length$`
- `_generate_and_store_secrets()` creates K8s Secret
- `cluster_info.py` adds `envFrom` with `secretRef` to containers
- Non-secret config written to `config.env`
### Repository Cloning
`setup-repositories --git-ssh` clones repos defined in stack.yml's `repos:` field. Requires SSH agent.
### Key Files (for codebase navigation)
- `repos/setup_repositories.py`: `setup-repositories` command (git clone)
- `deployment_create.py`: `deploy create` command, secret generation
- `deployment.py`: `deployment start/stop/restart` commands
- `deploy_k8s.py`: K8s deployer, cluster management calls
- `helpers.py`: `create_cluster()`, etcd cleanup, kind operations
- `cluster_info.py`: K8s resource generation (Deployment, Service, Ingress)
## Insights and Observations
### Design Principles
- **When something times out, that doesn't mean it needs a longer timeout. It means something that was expected never happened, not that we need to wait longer for it.**
- **NEVER change a timeout because you believe output was truncated; do not edit timeouts unless the user explicitly tells you to.**


@@ -658,4 +658,4 @@
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU AGPL, see
<http://www.gnu.org/licenses/>.


@@ -26,7 +26,7 @@ curl -SL https://github.com/docker/compose/releases/download/v2.11.2/docker-comp
chmod +x ~/.docker/cli-plugins/docker-compose
```
Next decide on a directory where you would like to put the stack-orchestrator program. Typically this would be
a "user" binary directory such as `~/bin` or perhaps `/usr/local/laconic` or possibly just the current working directory.
Now, having selected that directory, download the latest release from [this page](https://git.vdb.to/cerc-io/stack-orchestrator/tags) into it (we're using `~/bin` below for concreteness but edit to suit if you selected a different directory). Also be sure that the destination directory exists and is writable:
@@ -71,59 +71,6 @@ The various [stacks](/stack_orchestrator/data/stacks) each contain instructions
 - [laconicd with console and CLI](stack_orchestrator/data/stacks/fixturenet-laconic-loaded)
 - [kubo (IPFS)](stack_orchestrator/data/stacks/kubo)
-## Deployment Types
-- **compose**: Docker Compose on local machine
-- **k8s**: External Kubernetes cluster (requires kubeconfig)
-- **k8s-kind**: Local Kubernetes via Kind - one cluster per host, shared by all deployments
-## External Stacks
-Stacks can live in external git repositories. Required structure:
-```
-<repo>/
-  stack_orchestrator/data/
-    stacks/<stack-name>/stack.yml
-    compose/docker-compose-<pod-name>.yml
-    deployment/spec.yml
-```
-## Deployment Commands
-```bash
-# Create deployment from spec
-laconic-so --stack <path> deploy create --spec-file <spec.yml> --deployment-dir <dir>
-# Start (creates cluster on first run)
-laconic-so deployment --dir <dir> start
-# GitOps restart (git pull + redeploy, preserves data)
-laconic-so deployment --dir <dir> restart
-# Stop
-laconic-so deployment --dir <dir> stop
-```
-## spec.yml Reference
-```yaml
-stack: stack-name-or-path
-deploy-to: k8s-kind
-network:
-  http-proxy:
-    - host-name: app.example.com
-      routes:
-        - path: /
-          proxy-to: service-name:port
-acme-email: admin@example.com
-config:
-  ENV_VAR: value
-  SECRET_VAR: $generate:hex:32$  # Auto-generated, stored in K8s Secret
-volumes:
-  volume-name:
-```
 ## Contributing
 See the [CONTRIBUTING.md](/docs/CONTRIBUTING.md) for developer mode install.
@@ -131,3 +78,5 @@ See the [CONTRIBUTING.md](/docs/CONTRIBUTING.md) for developer mode install.
 ## Platform Support
 Native aarch64 is _not_ currently supported. x64 emulation on ARM64 macOS should work (not yet tested).

TODO.md (16 deletions)

@@ -1,16 +0,0 @@
# TODO
## Features Needed
### Update Stack Command
We need an "update stack" command in stack orchestrator and cleaner documentation regarding how to do continuous deployment with and without payments.
**Context**: Currently, `deploy init` generates a spec file and `deploy create` creates a deployment directory. The `deployment update` command (added by Thomas Lackey) only syncs env vars and restarts - it doesn't regenerate configurations. There's a gap in the workflow for updating stack configurations after initial deployment.
## Architecture Refactoring
### Separate Deployer from Stack Orchestrator CLI
The deployer logic should be decoupled from the CLI tool to allow independent development and reuse.
### Separate Stacks from Stack Orchestrator Repo
Stacks should live in their own repositories, not bundled with the orchestrator tool. This allows stacks to evolve independently and be maintained by different teams.


@@ -65,71 +65,3 @@ Force full rebuild of packages:
 ```
 $ laconic-so build-npms --include <package-name> --force-rebuild
 ```
-## deploy
-The `deploy` command group manages persistent deployments. The general workflow is `deploy init` to generate a spec file, then `deploy create` to create a deployment directory from the spec, then runtime commands like `deploy up` and `deploy down`.
-### deploy init
-Generate a deployment spec file from a stack definition:
-```
-$ laconic-so --stack <stack-name> deploy init --output <spec-file>
-```
-Options:
-- `--output` (required): write spec file here
-- `--config`: provide config variables for the deployment
-- `--config-file`: provide config variables in a file
-- `--kube-config`: provide a config file for a k8s deployment
-- `--image-registry`: provide a container image registry url for this k8s cluster
-- `--map-ports-to-host`: map ports to the host (`any-variable-random`, `localhost-same`, `any-same`, `localhost-fixed-random`, `any-fixed-random`)
-### deploy create
-Create a deployment directory from a spec file:
-```
-$ laconic-so --stack <stack-name> deploy create --spec-file <spec-file> --deployment-dir <dir>
-```
-Update an existing deployment in-place (preserving data volumes and env file):
-```
-$ laconic-so --stack <stack-name> deploy create --spec-file <spec-file> --deployment-dir <dir> --update
-```
-Options:
-- `--spec-file` (required): spec file to use
-- `--deployment-dir`: target directory for deployment files
-- `--update`: update an existing deployment directory, preserving data volumes and env file. Changed files are backed up with a `.bak` suffix. The deployment's `config.env` and `deployment.yml` are also preserved.
-- `--network-dir`: network configuration supplied in this directory
-- `--initial-peers`: initial set of persistent peers
-### deploy up
-Start a deployment:
-```
-$ laconic-so deployment --dir <deployment-dir> up
-```
-### deploy down
-Stop a deployment:
-```
-$ laconic-so deployment --dir <deployment-dir> down
-```
-Use `--delete-volumes` to also remove data volumes.
-### deploy ps
-Show running services:
-```
-$ laconic-so deployment --dir <deployment-dir> ps
-```
-### deploy logs
-View service logs:
-```
-$ laconic-so deployment --dir <deployment-dir> logs
-```
-Use `-f` to follow and `-n <count>` to tail.


@@ -1,202 +0,0 @@
# Deployment Patterns
## GitOps Pattern
For production deployments, we recommend a GitOps approach where your deployment configuration is tracked in version control.
### Overview
- **spec.yml is your source of truth**: Maintain it in your operator repository
- **Don't regenerate on every restart**: Run `deploy init` once, then customize and commit
- **Use restart for updates**: The restart command respects your git-tracked spec.yml
### Workflow
1. **Initial setup**: Run `deploy init` once to generate a spec.yml template
2. **Customize and commit**: Edit spec.yml with your configuration (hostnames, resources, etc.) and commit to your operator repo
3. **Deploy from git**: Use the committed spec.yml for deployments
4. **Update via git**: Make changes in git, then restart to apply
```bash
# Initial setup (run once)
laconic-so --stack my-stack deploy init --output spec.yml
# Customize for your environment
vim spec.yml # Set hostname, resources, etc.
# Commit to your operator repository
git add spec.yml
git commit -m "Add my-stack deployment configuration"
git push
# On deployment server: deploy from git-tracked spec
laconic-so deploy create \
--spec-file /path/to/operator-repo/spec.yml \
--deployment-dir my-deployment
laconic-so deployment --dir my-deployment start
```
### Updating Deployments
When you need to update a deployment:
```bash
# 1. Make changes in your operator repo
vim /path/to/operator-repo/spec.yml
git commit -am "Update configuration"
git push
# 2. On deployment server: pull and restart
cd /path/to/operator-repo && git pull
laconic-so deployment --dir my-deployment restart
```
The `restart` command:
- Pulls latest code from the stack repository
- Uses your git-tracked spec.yml (does NOT regenerate from defaults)
- Syncs the deployment directory
- Restarts services
### Anti-patterns
**Don't do this:**
```bash
# BAD: Regenerating spec on every deployment
laconic-so --stack my-stack deploy init --output spec.yml
laconic-so deploy create --spec-file spec.yml ...
```
This overwrites your customizations with defaults from the stack's `commands.py`.
**Do this instead:**
```bash
# GOOD: Use your git-tracked spec
git pull # Get latest spec.yml from your operator repo
laconic-so deployment --dir my-deployment restart
```
## Private Registry Authentication
For deployments using images from private container registries (e.g., GitHub Container Registry), configure authentication in your spec.yml:
### Configuration
Add a `registry-credentials` section to your spec.yml:
```yaml
registry-credentials:
server: ghcr.io
username: your-org-or-username
token-env: REGISTRY_TOKEN
```
**Fields:**
- `server`: The registry hostname (e.g., `ghcr.io`, `docker.io`, `gcr.io`)
- `username`: Registry username (for GHCR, use your GitHub username or org name)
- `token-env`: Name of the environment variable containing your API token/PAT
### Token Environment Variable
The `token-env` pattern keeps credentials out of version control. Set the environment variable when running `deployment start`:
```bash
export REGISTRY_TOKEN="your-personal-access-token"
laconic-so deployment --dir my-deployment start
```
For GHCR, create a Personal Access Token (PAT) with `read:packages` scope.
### Ansible Integration
When using Ansible for deployments, pass the token from a credentials file:
```yaml
- name: Start deployment
ansible.builtin.command:
cmd: laconic-so deployment --dir {{ deployment_dir }} start
environment:
REGISTRY_TOKEN: "{{ lookup('file', '~/.credentials/ghcr_token') }}"
```
### How It Works
1. laconic-so reads the `registry-credentials` config from spec.yml
2. Creates a Kubernetes `docker-registry` secret named `{deployment}-registry`
3. The deployment's pods reference this secret for image pulls
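The secret created in step 2 is an ordinary `kubernetes.io/dockerconfigjson` Secret; a sketch of how its data could be assembled (illustrative only, not the actual laconic-so code — only the dockerconfigjson format itself is standard Kubernetes):

```python
import base64
import json
import os

def registry_secret_data(server: str, username: str, token_env: str) -> dict:
    """Build the `data` field of a kubernetes.io/dockerconfigjson Secret."""
    token = os.environ[token_env]                # e.g. REGISTRY_TOKEN
    auth = base64.b64encode(f"{username}:{token}".encode()).decode()
    dockerconfig = {"auths": {server: {"username": username,
                                       "password": token,
                                       "auth": auth}}}
    # Secret data values are base64-encoded strings
    return {".dockerconfigjson":
            base64.b64encode(json.dumps(dockerconfig).encode()).decode()}
```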
## Cluster and Volume Management
### Stopping Deployments
The `deployment stop` command has two important flags:
```bash
# Default: stops deployment, deletes cluster, PRESERVES volumes
laconic-so deployment --dir my-deployment stop
# Explicitly delete volumes (USE WITH CAUTION)
laconic-so deployment --dir my-deployment stop --delete-volumes
```
### Volume Persistence
Volumes persist across cluster deletion by design. This is important because:
- **Data survives cluster recreation**: Ledger data, databases, and other state are preserved
- **Faster recovery**: No need to re-sync or rebuild data after cluster issues
- **Safe cluster upgrades**: Delete and recreate cluster without data loss
**Only use `--delete-volumes` when:**
- You explicitly want to start fresh with no data
- The user specifically requests volume deletion
- You're cleaning up a test/dev environment completely
### Shared Cluster Architecture
In kind deployments, multiple stacks share a single cluster:
- First `deployment start` creates the cluster
- Subsequent deployments reuse the existing cluster
- `deployment stop` on ANY deployment deletes the shared cluster
- Other deployments will fail until cluster is recreated
To stop a single deployment without affecting the cluster:
```bash
laconic-so deployment --dir my-deployment stop --skip-cluster-management
```
## Volume Persistence in k8s-kind
k8s-kind has 3 storage layers:
- **Docker Host**: The physical server running Docker
- **Kind Node**: A Docker container simulating a k8s node
- **Pod Container**: Your workload
For k8s-kind, volumes with paths are mounted from Docker Host → Kind Node → Pod via extraMounts.
| spec.yml volume | Storage Location | Survives Pod Restart | Survives Cluster Restart |
|-----------------|------------------|---------------------|-------------------------|
| `vol:` (empty) | Kind Node PVC | ✅ | ❌ |
| `vol: ./data/x` | Docker Host | ✅ | ✅ |
| `vol: /abs/path`| Docker Host | ✅ | ✅ |
**Recommendation**: Always use paths for data you want to keep. Relative paths
(e.g., `./data/rpc-config`) resolve to `$DEPLOYMENT_DIR/data/rpc-config` on the
Docker Host.
### Example
```yaml
# In spec.yml
volumes:
rpc-config: ./data/rpc-config # Persists to $DEPLOYMENT_DIR/data/rpc-config
chain-data: ./data/chain # Persists to $DEPLOYMENT_DIR/data/chain
temp-cache: # Empty = Kind Node PVC (lost on cluster delete)
```
### The Antipattern
Empty-path volumes appear persistent because they survive pod restarts (data lives
in Kind Node container). However, this data is lost when the kind cluster is
recreated. This "false persistence" has caused data loss when operators assumed
their data was safe.


@@ -1,9 +1,9 @@
# Fetching pre-built container images
When Stack Orchestrator deploys a stack containing a suite of one or more containers it expects images for those containers to be on the local machine with a tag of the form `<image-name>:local`. Images for these containers can be built from source (and optionally base container images from public registries) with the `build-containers` subcommand.
However, the task of building a large number of containers from source may consume considerable time and machine resources. This is where the `fetch-containers` subcommand steps in. It is designed to work exactly like `build-containers` but instead the images, pre-built, are fetched from an image registry then re-tagged for deployment. It can be used in place of `build-containers` for any stack provided the necessary containers, built for the local machine architecture (e.g. arm64 or x86-64), have already been published in an image registry.
## Usage
To use `fetch-containers`, provide an image registry path, a username and token/password with read access to the registry, and optionally specify `--force-local-overwrite`. If this argument is not specified and there is already a locally built or previously fetched image for a stack container on the machine, it will not be overwritten and a warning is issued.
```
$ laconic-so --stack mobymask-v3-demo fetch-containers --image-registry git.vdb.to/cerc-io --registry-username <registry-user> --registry-token <registry-token> --force-local-overwrite
```


@@ -7,7 +7,7 @@ Deploy a local Gitea server, publish NPM packages to it, then use those packages
```bash
laconic-so --stack build-support build-containers
laconic-so --stack package-registry setup-repositories
laconic-so --stack package-registry build-containers
laconic-so --stack package-registry deploy up
```


@@ -1,113 +0,0 @@
# Helm Chart Generation
Generate Kubernetes Helm charts from stack compose files using Kompose.
## Prerequisites
Install Kompose:
```bash
# Linux
curl -L https://github.com/kubernetes/kompose/releases/download/v1.34.0/kompose-linux-amd64 -o kompose
chmod +x kompose
sudo mv kompose /usr/local/bin/
# macOS
brew install kompose
# Verify
kompose version
```
## Usage
### 1. Create spec file
```bash
laconic-so --stack <stack-name> deploy --deploy-to k8s init \
--kube-config ~/.kube/config \
--output spec.yml
```
### 2. Generate Helm chart
```bash
laconic-so --stack <stack-name> deploy create \
--spec-file spec.yml \
--deployment-dir my-deployment \
--helm-chart
```
### 3. Deploy to Kubernetes
```bash
helm install my-release my-deployment/chart
kubectl get pods -n zenith
```
## Output Structure
```bash
my-deployment/
├── spec.yml # Reference
├── stack.yml # Reference
└── chart/ # Helm chart
├── Chart.yaml
├── README.md
└── templates/
└── *.yaml
```
## Example
```bash
# Generate chart for stage1-zenithd
laconic-so --stack stage1-zenithd deploy --deploy-to k8s init \
--kube-config ~/.kube/config \
--output stage1-spec.yml
laconic-so --stack stage1-zenithd deploy create \
--spec-file stage1-spec.yml \
--deployment-dir stage1-deployment \
--helm-chart
# Deploy
helm install stage1-zenithd stage1-deployment/chart
```
## Production Deployment (TODO)
### Local Development
```bash
# Access services using port-forward
kubectl port-forward service/zenithd 26657:26657
kubectl port-forward service/nginx-api-proxy 1317:80
kubectl port-forward service/cosmos-explorer 4173:4173
```
### Production Access Options
- Option 1: Ingress + cert-manager (Recommended)
- Install ingress-nginx + cert-manager
- Point DNS to cluster LoadBalancer IP
- Auto-provisions Let's Encrypt TLS certs
- Access: `https://api.zenith.example.com`
- Option 2: Cloud LoadBalancer
- Use cloud provider's LoadBalancer service type
- Point DNS to assigned external IP
- Manual TLS cert management
- Option 3: Bare Metal (MetalLB + Ingress)
- MetalLB provides LoadBalancer IPs from local network
- Same Ingress setup as cloud
- Option 4: NodePort + External Proxy
- Expose services on 30000-32767 range
- External nginx/Caddy proxies 80/443 → NodePort
- Manual cert management
### Changes Needed
- Add Ingress template to charts
- Add TLS configuration to values.yaml
- Document cert-manager setup
- Add production deployment guide


@@ -24,3 +24,4 @@ node-tolerations:
value: typeb
```
This example denotes that the stack's pods will tolerate a taint: `nodetype=typeb` This example denotes that the stack's pods will tolerate a taint: `nodetype=typeb`


@@ -26,3 +26,4 @@ $ ./scripts/tag_new_release.sh 1 0 17
$ ./scripts/build_shiv_package.sh
$ ./scripts/publish_shiv_package_github.sh 1 0 17
```


@@ -4,9 +4,9 @@ Note: this page is out of date (but still useful) - it will no longer be useful
## Implementation
The orchestrator's operation is driven by files shown below.
- `repository-list.txt` contains the list of git repositories;
- `container-image-list.txt` contains the list of container image names
- `pod-list.txt` specifies the set of compose components (corresponding to individual docker-compose-xxx.yml files which may in turn specify more than one container).
- `container-build/` contains the files required to build each container image


@@ -7,7 +7,7 @@ compilation and static page generation are separated in the `build-webapp` and `
This offers much more flexibility than standard Next.js build methods, since any environment variables accessed
via `process.env`, whether for pages or for API, will have values drawn from their runtime deployment environment,
not their build environment.
## Building


@@ -1,110 +0,0 @@
[build-system]
requires = ["setuptools>=61.0", "wheel"]
build-backend = "setuptools.build_meta"
[project]
name = "laconic-stack-orchestrator"
version = "1.1.0"
description = "Orchestrates deployment of the Laconic stack"
readme = "README.md"
license = {text = "GNU Affero General Public License"}
authors = [
{name = "Cerc", email = "info@cerc.io"}
]
requires-python = ">=3.8"
classifiers = [
"Programming Language :: Python :: 3.8",
"Operating System :: OS Independent",
]
dependencies = [
"python-decouple>=3.8",
"python-dotenv==1.0.0",
"GitPython>=3.1.32",
"tqdm>=4.65.0",
"python-on-whales>=0.64.0",
"click>=8.1.6",
"PyYAML>=6.0.1",
"ruamel.yaml>=0.17.32",
"pydantic==1.10.9",
"tomli==2.0.1",
"validators==0.22.0",
"kubernetes>=28.1.0",
"humanfriendly>=10.0",
"python-gnupg>=0.5.2",
"requests>=2.3.2",
]
[project.optional-dependencies]
dev = [
"pytest>=7.0.0",
"pytest-cov>=4.0.0",
"black>=22.0.0",
"flake8>=5.0.0",
"pyright>=1.1.0",
"yamllint>=1.28.0",
"pre-commit>=3.0.0",
]
[project.scripts]
laconic-so = "stack_orchestrator.main:cli"
[project.urls]
Homepage = "https://git.vdb.to/cerc-io/stack-orchestrator"
[tool.setuptools.packages.find]
where = ["."]
[tool.setuptools.package-data]
"*" = ["data/**"]
[tool.black]
line-length = 88
target-version = ['py38']
[tool.flake8]
max-line-length = 88
extend-ignore = ["E203", "W503", "E402"]
[tool.pyright]
pythonVersion = "3.9"
typeCheckingMode = "basic"
reportMissingImports = "none"
reportMissingModuleSource = "none"
reportUnusedImport = "error"
include = ["stack_orchestrator/**/*.py", "tests/**/*.py"]
exclude = ["**/build/**", "**/__pycache__/**"]
[tool.mypy]
python_version = "3.8"
warn_return_any = true
warn_unused_configs = true
disallow_untyped_defs = true
[tool.pytest.ini_options]
testpaths = ["tests"]
python_files = ["test_*.py"]
python_classes = ["Test*"]
python_functions = ["test_*"]
markers = [
"slow: marks tests as slow (deselect with '-m \"not slow\"')",
"e2e: marks tests as end-to-end (requires real infrastructure)",
]
addopts = [
"--cov",
"--cov-report=term-missing",
"--cov-report=html",
"--strict-markers",
]
asyncio_default_fixture_loop_scope = "function"
[tool.coverage.run]
source = ["stack_orchestrator"]
disable_warnings = ["couldnt-parse"]
[tool.coverage.report]
exclude_lines = [
"pragma: no cover",
"def __repr__",
"raise AssertionError",
"raise NotImplementedError",
]
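The `dependencies` list above mixes exact pins (`python-dotenv==1.0.0`, `pydantic==1.10.9`, `tomli==2.0.1`, `validators==0.22.0`) with minimum-version ranges. The two styles can be told apart from the specifier operator; the helper below is a hypothetical sketch for illustration, not part of the repo:

```python
import re

def classify_requirement(spec: str) -> str:
    """Return 'pin' for '==' specifiers, 'range' for '>=' ones, else 'other'."""
    # Package name (letters, digits, '.', '_', '-'), optional extras, then operator.
    match = re.match(r"^[A-Za-z0-9_.\-]+(\[[^\]]+\])?(==|>=)", spec)
    if not match:
        return "other"
    return "pin" if match.group(2) == "==" else "range"

deps = ["pydantic==1.10.9", "kubernetes>=28.1.0", "ruamel.yaml>=0.17.32"]
print([classify_requirement(d) for d in deps])  # ['pin', 'range', 'range']
```

Exact pins like these make deterministic installs easier but also mean dependabot-style updates require an explicit edit to `pyproject.toml`.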

View File

@ -1,9 +0,0 @@
{
"pythonVersion": "3.9",
"typeCheckingMode": "basic",
"reportMissingImports": "none",
"reportMissingModuleSource": "none",
"reportUnusedImport": "error",
"include": ["stack_orchestrator/**/*.py", "tests/**/*.py"],
"exclude": ["**/build/**", "**/__pycache__/**"]
}

View File

@ -4,7 +4,7 @@
 # https://github.com/cerc-io/github-release-api
 # User must define: CERC_GH_RELEASE_SCRIPTS_DIR
 # pointing to the location of that cloned repository
 # e.g.
 # cd ~/projects
 # git clone https://github.com/cerc-io/github-release-api
 # cd ./stack-orchestrator

View File

@ -94,7 +94,7 @@ sudo apt -y install jq
 # laconic-so depends on git
 sudo apt -y install git
 # curl used below
 sudo apt -y install curl
 # docker repo add depends on gnupg and updated ca-certificates
 sudo apt -y install ca-certificates gnupg

View File

@ -3,7 +3,7 @@
 # Uses this script package to tag a new release:
 # User must define: CERC_GH_RELEASE_SCRIPTS_DIR
 # pointing to the location of that cloned repository
 # e.g.
 # cd ~/projects
 # git clone https://github.com/cerc-io/github-release-api
 # cd ./stack-orchestrator
# cd ./stack-orchestrator # cd ./stack-orchestrator

View File

@ -1,7 +1,5 @@
-# See
-# https://medium.com/nerd-for-tech/how-to-build-and-distribute-a-cli-tool-with-python-537ae41d9d78
+# See https://medium.com/nerd-for-tech/how-to-build-and-distribute-a-cli-tool-with-python-537ae41d9d78
 from setuptools import setup, find_packages
 with open("README.md", "r", encoding="utf-8") as fh:
     long_description = fh.read()
 with open("requirements.txt", "r", encoding="utf-8") as fh:
@ -9,26 +7,26 @@ with open("requirements.txt", "r", encoding="utf-8") as fh:
 with open("stack_orchestrator/data/version.txt", "r", encoding="utf-8") as fh:
     version = fh.readlines()[-1].strip(" \n")
 setup(
-    name="laconic-stack-orchestrator",
+    name='laconic-stack-orchestrator',
     version=version,
-    author="Cerc",
-    author_email="info@cerc.io",
-    license="GNU Affero General Public License",
-    description="Orchestrates deployment of the Laconic stack",
+    author='Cerc',
+    author_email='info@cerc.io',
+    license='GNU Affero General Public License',
+    description='Orchestrates deployment of the Laconic stack',
     long_description=long_description,
     long_description_content_type="text/markdown",
-    url="https://git.vdb.to/cerc-io/stack-orchestrator",
-    py_modules=["stack_orchestrator"],
+    url='https://git.vdb.to/cerc-io/stack-orchestrator',
+    py_modules=['stack_orchestrator'],
     packages=find_packages(),
     install_requires=[requirements],
-    python_requires=">=3.7",
+    python_requires='>=3.7',
     include_package_data=True,
-    package_data={"": ["data/**"]},
+    package_data={'': ['data/**']},
     classifiers=[
         "Programming Language :: Python :: 3.8",
         "Operating System :: OS Independent",
     ],
     entry_points={
-        "console_scripts": ["laconic-so=stack_orchestrator.main:cli"],
-    },
+        'console_scripts': ['laconic-so=stack_orchestrator.main:cli'],
+    }
 )

View File

@ -23,10 +23,11 @@ def get_stack(config, stack):
     if stack == "package-registry":
         return package_registry_stack(config, stack)
     else:
-        return default_stack(config, stack)
+        return base_stack(config, stack)
 class base_stack(ABC):
     def __init__(self, config, stack):
         self.config = config
         self.stack = stack
@ -40,27 +41,15 @@ class base_stack(ABC):
         pass
-class default_stack(base_stack):
-    """Default stack implementation for stacks without specific handling."""
-    def ensure_available(self):
-        return True
-    def get_url(self):
-        return None
 class package_registry_stack(base_stack):
     def ensure_available(self):
         self.url = "<no registry url set>"
         # Check if we were given an external registry URL
         url_from_environment = os.environ.get("CERC_NPM_REGISTRY_URL")
         if url_from_environment:
             if self.config.verbose:
-                print(
-                    f"Using package registry url from CERC_NPM_REGISTRY_URL: "
-                    f"{url_from_environment}"
-                )
+                print(f"Using package registry url from CERC_NPM_REGISTRY_URL: {url_from_environment}")
             self.url = url_from_environment
         else:
             # Otherwise we expect to use the local package-registry stack
@ -73,16 +62,10 @@ class package_registry_stack(base_stack):
                 # TODO: get url from deploy-stack
                 self.url = "http://gitea.local:3000/api/packages/cerc-io/npm/"
             else:
-                # If not, print a message about how to start it and return fail to the
-                # caller
-                print(
-                    "ERROR: The package-registry stack is not running, "
-                    "and no external registry specified with CERC_NPM_REGISTRY_URL"
-                )
-                print(
-                    "ERROR: Start the local package registry with: "
-                    "laconic-so --stack package-registry deploy-system up"
-                )
+                # If not, print a message about how to start it and return fail to the caller
+                print("ERROR: The package-registry stack is not running, and no external registry "
+                      "specified with CERC_NPM_REGISTRY_URL")
+                print("ERROR: Start the local package registry with: laconic-so --stack package-registry deploy-system up")
                 return False
         return True
@ -93,9 +76,7 @@ class package_registry_stack(base_stack):
 def get_npm_registry_url():
     # If an auth token is not defined, we assume the default should be the cerc registry
     # If an auth token is defined, we assume the local gitea should be used.
-    default_npm_registry_url = (
-        "http://gitea.local:3000/api/packages/cerc-io/npm/"
-        if config("CERC_NPM_AUTH_TOKEN", default=None)
-        else "https://git.vdb.to/api/packages/cerc-io/npm/"
-    )
+    default_npm_registry_url = "http://gitea.local:3000/api/packages/cerc-io/npm/" if config(
+        "CERC_NPM_AUTH_TOKEN", default=None
+    ) else "https://git.vdb.to/api/packages/cerc-io/npm/"
     return config("CERC_NPM_REGISTRY_URL", default=default_npm_registry_url)
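The `get_npm_registry_url` logic is identical on both sides of this hunk, only reflowed: an explicit `CERC_NPM_REGISTRY_URL` always wins, otherwise the default depends on whether `CERC_NPM_AUTH_TOKEN` is set. A standalone sketch of that precedence, with the environment passed in as a plain dict in place of decouple's `config`:

```python
GITEA_LOCAL_NPM = "http://gitea.local:3000/api/packages/cerc-io/npm/"
CERC_PUBLIC_NPM = "https://git.vdb.to/api/packages/cerc-io/npm/"

def npm_registry_url(env):
    # An explicitly configured registry URL always wins.
    if env.get("CERC_NPM_REGISTRY_URL"):
        return env["CERC_NPM_REGISTRY_URL"]
    # Otherwise: an auth token present implies the local gitea registry.
    return GITEA_LOCAL_NPM if env.get("CERC_NPM_AUTH_TOKEN") else CERC_PUBLIC_NPM
```

For example, `npm_registry_url({"CERC_NPM_AUTH_TOKEN": "x"})` selects the local gitea URL, while an empty environment falls back to the public registry.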

View File

@ -18,8 +18,7 @@
 # env vars:
 # CERC_REPO_BASE_DIR defaults to ~/cerc
-# TODO: display the available list of containers;
-# allow re-build of either all or specific containers
+# TODO: display the available list of containers; allow re-build of either all or specific containers
 import os
 import sys
@ -35,17 +34,14 @@ from stack_orchestrator.build.publish import publish_image
 from stack_orchestrator.build.build_util import get_containers_in_scope
 # TODO: find a place for this
-# epilog="Config provided either in .env or settings.ini or env vars:
-#   CERC_REPO_BASE_DIR (defaults to ~/cerc)"
+# epilog="Config provided either in .env or settings.ini or env vars: CERC_REPO_BASE_DIR (defaults to ~/cerc)"
-def make_container_build_env(
-    dev_root_path: str,
-    container_build_dir: str,
-    debug: bool,
-    force_rebuild: bool,
-    extra_build_args: str,
-):
+def make_container_build_env(dev_root_path: str,
+                             container_build_dir: str,
+                             debug: bool,
+                             force_rebuild: bool,
+                             extra_build_args: str):
     container_build_env = {
         "CERC_NPM_REGISTRY_URL": get_npm_registry_url(),
         "CERC_GO_AUTH_TOKEN": config("CERC_GO_AUTH_TOKEN", default=""),
@ -54,15 +50,11 @
         "CERC_CONTAINER_BASE_DIR": container_build_dir,
         "CERC_HOST_UID": f"{os.getuid()}",
         "CERC_HOST_GID": f"{os.getgid()}",
-        "DOCKER_BUILDKIT": config("DOCKER_BUILDKIT", default="0"),
+        "DOCKER_BUILDKIT": config("DOCKER_BUILDKIT", default="0")
     }
     container_build_env.update({"CERC_SCRIPT_DEBUG": "true"} if debug else {})
     container_build_env.update({"CERC_FORCE_REBUILD": "true"} if force_rebuild else {})
-    container_build_env.update(
-        {"CERC_CONTAINER_EXTRA_BUILD_ARGS": extra_build_args}
-        if extra_build_args
-        else {}
-    )
+    container_build_env.update({"CERC_CONTAINER_EXTRA_BUILD_ARGS": extra_build_args} if extra_build_args else {})
     docker_host_env = os.getenv("DOCKER_HOST")
     if docker_host_env:
         container_build_env.update({"DOCKER_HOST": docker_host_env})
@ -75,18 +67,12 @@ def process_container(build_context: BuildContext) -> bool:
         print(f"Building: {build_context.container}")
     default_container_tag = f"{build_context.container}:local"
-    build_context.container_build_env.update(
-        {"CERC_DEFAULT_CONTAINER_IMAGE_TAG": default_container_tag}
-    )
+    build_context.container_build_env.update({"CERC_DEFAULT_CONTAINER_IMAGE_TAG": default_container_tag})
     # Check if this is in an external stack
     if stack_is_external(build_context.stack):
-        container_parent_dir = Path(build_context.stack).parent.parent.joinpath(
-            "container-build"
-        )
-        temp_build_dir = container_parent_dir.joinpath(
-            build_context.container.replace("/", "-")
-        )
+        container_parent_dir = Path(build_context.stack).parent.parent.joinpath("container-build")
+        temp_build_dir = container_parent_dir.joinpath(build_context.container.replace("/", "-"))
         temp_build_script_filename = temp_build_dir.joinpath("build.sh")
         # Now check if the container exists in the external stack.
         if not temp_build_script_filename.exists():
@ -104,34 +90,21 @@ def process_container(build_context: BuildContext) -> bool:
         build_command = build_script_filename.as_posix()
     else:
         if opts.o.verbose:
-            print(
-                f"No script file found: {build_script_filename}, "
-                "using default build script"
-            )
-        repo_dir = build_context.container.split("/")[1]
-        # TODO: make this less of a hack -- should be specified in
-        # some metadata somewhere. Check if we have a repo for this
-        # container. If not, set the context dir to container-build subdir
+            print(f"No script file found: {build_script_filename}, using default build script")
+        repo_dir = build_context.container.split('/')[1]
+        # TODO: make this less of a hack -- should be specified in some metadata somewhere
+        # Check if we have a repo for this container. If not, set the context dir to the container-build subdir
         repo_full_path = os.path.join(build_context.dev_root_path, repo_dir)
-        repo_dir_or_build_dir = (
-            repo_full_path if os.path.exists(repo_full_path) else build_dir
-        )
-        build_command = (
-            os.path.join(build_context.container_build_dir, "default-build.sh")
-            + f" {default_container_tag} {repo_dir_or_build_dir}"
-        )
+        repo_dir_or_build_dir = repo_full_path if os.path.exists(repo_full_path) else build_dir
+        build_command = os.path.join(build_context.container_build_dir,
+                                     "default-build.sh") + f" {default_container_tag} {repo_dir_or_build_dir}"
     if not opts.o.dry_run:
         # No PATH at all causes failures with podman.
         if "PATH" not in build_context.container_build_env:
             build_context.container_build_env["PATH"] = os.environ["PATH"]
         if opts.o.verbose:
-            print(
-                f"Executing: {build_command} with environment: "
-                f"{build_context.container_build_env}"
-            )
-        build_result = subprocess.run(
-            build_command, shell=True, env=build_context.container_build_env
-        )
+            print(f"Executing: {build_command} with environment: {build_context.container_build_env}")
+        build_result = subprocess.run(build_command, shell=True, env=build_context.container_build_env)
         if opts.o.verbose:
             print(f"Return code is: {build_result.returncode}")
         if build_result.returncode != 0:
@ -144,61 +117,33 @@ def process_container(build_context: BuildContext) -> bool:
 @click.command()
-@click.option("--include", help="only build these containers")
-@click.option("--exclude", help="don't build these containers")
-@click.option(
-    "--force-rebuild",
-    is_flag=True,
-    default=False,
-    help="Override dependency checking -- always rebuild",
-)
+@click.option('--include', help="only build these containers")
+@click.option('--exclude', help="don\'t build these containers")
+@click.option("--force-rebuild", is_flag=True, default=False, help="Override dependency checking -- always rebuild")
 @click.option("--extra-build-args", help="Supply extra arguments to build")
-@click.option(
-    "--publish-images",
-    is_flag=True,
-    default=False,
-    help="Publish the built images in the specified image registry",
-)
-@click.option(
-    "--image-registry", help="Specify the image registry for --publish-images"
-)
+@click.option("--publish-images", is_flag=True, default=False, help="Publish the built images in the specified image registry")
+@click.option("--image-registry", help="Specify the image registry for --publish-images")
 @click.pass_context
-def command(
-    ctx,
-    include,
-    exclude,
-    force_rebuild,
-    extra_build_args,
-    publish_images,
-    image_registry,
-):
-    """build the set of containers required for a complete stack"""
+def command(ctx, include, exclude, force_rebuild, extra_build_args, publish_images, image_registry):
+    '''build the set of containers required for a complete stack'''
     local_stack = ctx.obj.local_stack
     stack = ctx.obj.stack
-    # See: https://stackoverflow.com/questions/25389095/
-    # python-get-path-of-root-project-structure
-    container_build_dir = (
-        Path(__file__).absolute().parent.parent.joinpath("data", "container-build")
-    )
+    # See: https://stackoverflow.com/questions/25389095/python-get-path-of-root-project-structure
+    container_build_dir = Path(__file__).absolute().parent.parent.joinpath("data", "container-build")
     if local_stack:
-        dev_root_path = os.getcwd()[0 : os.getcwd().rindex("stack-orchestrator")]
-        print(
-            f"Local stack dev_root_path (CERC_REPO_BASE_DIR) overridden to: "
-            f"{dev_root_path}"
-        )
+        dev_root_path = os.getcwd()[0:os.getcwd().rindex("stack-orchestrator")]
+        print(f'Local stack dev_root_path (CERC_REPO_BASE_DIR) overridden to: {dev_root_path}')
     else:
-        dev_root_path = os.path.expanduser(
-            config("CERC_REPO_BASE_DIR", default="~/cerc")
-        )
+        dev_root_path = os.path.expanduser(config("CERC_REPO_BASE_DIR", default="~/cerc"))
     if not opts.o.quiet:
-        print(f"Dev Root is: {dev_root_path}")
+        print(f'Dev Root is: {dev_root_path}')
     if not os.path.isdir(dev_root_path):
-        print("Dev root directory doesn't exist, creating")
+        print('Dev root directory doesn\'t exist, creating')
     if publish_images:
         if not image_registry:
@ -206,22 +151,21 @@ def command(
     containers_in_scope = get_containers_in_scope(stack)
-    container_build_env = make_container_build_env(
-        dev_root_path,
-        container_build_dir,
-        opts.o.debug,
-        force_rebuild,
-        extra_build_args,
-    )
+    container_build_env = make_container_build_env(dev_root_path,
+                                                   container_build_dir,
+                                                   opts.o.debug,
+                                                   force_rebuild,
+                                                   extra_build_args)
     for container in containers_in_scope:
         if include_exclude_check(container, include, exclude):
             build_context = BuildContext(
                 stack,
                 container,
                 container_build_dir,
                 container_build_env,
-                dev_root_path,
+                dev_root_path
             )
             result = process_container(build_context)
             if result:
@ -230,16 +174,10 @@ def command(
             else:
                 print(f"Error running build for {build_context.container}")
                 if not opts.o.continue_on_error:
-                    error_exit(
-                        "container build failed and --continue-on-error "
-                        "not set, exiting"
-                    )
+                    error_exit("container build failed and --continue-on-error not set, exiting")
                     sys.exit(1)
                 else:
-                    print(
-                        "****** Container Build Error, continuing because "
-                        "--continue-on-error is set"
-                    )
+                    print("****** Container Build Error, continuing because --continue-on-error is set")
         else:
             if opts.o.verbose:
                 print(f"Excluding: {container}")
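The build loop above gates each container on `include_exclude_check`. A plausible reimplementation of such a filter is sketched below; the comma-separated semantics and include-beats-exclude precedence are assumptions for illustration, not taken from the repo:

```python
from typing import Optional

def include_exclude_check(item: str, include: Optional[str], exclude: Optional[str]) -> bool:
    # No filters given: everything is in scope.
    if include is None and exclude is None:
        return True
    # An include list acts as a whitelist and takes precedence.
    if include is not None:
        return item in include.split(",")
    # Otherwise an exclude list acts as a blacklist.
    return item not in exclude.split(",")
```

Under these assumed semantics, `--include cerc/foo` builds only `cerc/foo`, while `--exclude cerc/foo` builds everything else.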

View File

@ -32,18 +32,14 @@ builder_js_image_name = "cerc/builder-js:local"
 @click.command()
-@click.option("--include", help="only build these packages")
-@click.option("--exclude", help="don't build these packages")
-@click.option(
-    "--force-rebuild",
-    is_flag=True,
-    default=False,
-    help="Override existing target package version check -- force rebuild",
-)
+@click.option('--include', help="only build these packages")
+@click.option('--exclude', help="don\'t build these packages")
+@click.option("--force-rebuild", is_flag=True, default=False,
+              help="Override existing target package version check -- force rebuild")
 @click.option("--extra-build-args", help="Supply extra arguments to build")
 @click.pass_context
 def command(ctx, include, exclude, force_rebuild, extra_build_args):
-    """build the set of npm packages required for a complete stack"""
+    '''build the set of npm packages required for a complete stack'''
     quiet = ctx.obj.quiet
     verbose = ctx.obj.verbose
@ -69,54 +65,45 @@ def command(ctx, include, exclude, force_rebuild, extra_build_args):
         sys.exit(1)
     if local_stack:
-        dev_root_path = os.getcwd()[0 : os.getcwd().rindex("stack-orchestrator")]
-        print(
-            f"Local stack dev_root_path (CERC_REPO_BASE_DIR) overridden to: "
-            f"{dev_root_path}"
-        )
+        dev_root_path = os.getcwd()[0:os.getcwd().rindex("stack-orchestrator")]
+        print(f'Local stack dev_root_path (CERC_REPO_BASE_DIR) overridden to: {dev_root_path}')
     else:
-        dev_root_path = os.path.expanduser(
-            config("CERC_REPO_BASE_DIR", default="~/cerc")
-        )
+        dev_root_path = os.path.expanduser(config("CERC_REPO_BASE_DIR", default="~/cerc"))
     build_root_path = os.path.join(dev_root_path, "build-trees")
     if verbose:
-        print(f"Dev Root is: {dev_root_path}")
+        print(f'Dev Root is: {dev_root_path}')
     if not os.path.isdir(dev_root_path):
-        print("Dev root directory doesn't exist, creating")
+        print('Dev root directory doesn\'t exist, creating')
         os.makedirs(dev_root_path)
     if not os.path.isdir(dev_root_path):
-        print("Build root directory doesn't exist, creating")
+        print('Build root directory doesn\'t exist, creating')
         os.makedirs(build_root_path)
     # See: https://stackoverflow.com/a/20885799/1701505
     from stack_orchestrator import data
-    with importlib.resources.open_text(
-        data, "npm-package-list.txt"
-    ) as package_list_file:
+    with importlib.resources.open_text(data, "npm-package-list.txt") as package_list_file:
         all_packages = package_list_file.read().splitlines()
     packages_in_scope = []
     if stack:
         stack_config = get_parsed_stack_config(stack)
         # TODO: syntax check the input here
-        packages_in_scope = stack_config["npms"]
+        packages_in_scope = stack_config['npms']
     else:
         packages_in_scope = all_packages
     if verbose:
-        print(f"Packages: {packages_in_scope}")
+        print(f'Packages: {packages_in_scope}')
     def build_package(package):
         if not quiet:
             print(f"Building npm package: {package}")
         repo_dir = package
         repo_full_path = os.path.join(dev_root_path, repo_dir)
-        # Copy the repo and build that to avoid propagating
-        # JS tooling file changes back into the cloned repo
+        # Copy the repo and build that to avoid propagating JS tooling file changes back into the cloned repo
         repo_copy_path = os.path.join(build_root_path, repo_dir)
         # First delete any old build tree
         if os.path.isdir(repo_copy_path):
@ -129,63 +116,41 @@ def command(ctx, include, exclude, force_rebuild, extra_build_args):
             print(f"Copying build tree from: {repo_full_path} to: {repo_copy_path}")
         if not dry_run:
             copytree(repo_full_path, repo_copy_path)
-        build_command = [
-            "sh",
-            "-c",
-            "cd /workspace && "
-            f"build-npm-package-local-dependencies.sh {npm_registry_url}",
-        ]
+        build_command = ["sh", "-c", f"cd /workspace && build-npm-package-local-dependencies.sh {npm_registry_url}"]
         if not dry_run:
             if verbose:
                 print(f"Executing: {build_command}")
             # Originally we used the PEP 584 merge operator:
-            # envs = {"CERC_NPM_AUTH_TOKEN": npm_registry_url_token} |
-            #   ({"CERC_SCRIPT_DEBUG": "true"} if debug else {})
-            # but that isn't available in Python 3.8 (default in Ubuntu 20)
-            # so for now we use dict.update:
-            envs = {
-                "CERC_NPM_AUTH_TOKEN": npm_registry_url_token,
-                # Convention used by our web app packages
-                "LACONIC_HOSTED_CONFIG_FILE": "config-hosted.yml",
-            }
+            # envs = {"CERC_NPM_AUTH_TOKEN": npm_registry_url_token} | ({"CERC_SCRIPT_DEBUG": "true"} if debug else {})
+            # but that isn't available in Python 3.8 (default in Ubuntu 20) so for now we use dict.update:
+            envs = {"CERC_NPM_AUTH_TOKEN": npm_registry_url_token,
+                    "LACONIC_HOSTED_CONFIG_FILE": "config-hosted.yml"  # Convention used by our web app packages
+                    }
             envs.update({"CERC_SCRIPT_DEBUG": "true"} if debug else {})
             envs.update({"CERC_FORCE_REBUILD": "true"} if force_rebuild else {})
-            envs.update(
-                {"CERC_CONTAINER_EXTRA_BUILD_ARGS": extra_build_args}
-                if extra_build_args
-                else {}
-            )
+            envs.update({"CERC_CONTAINER_EXTRA_BUILD_ARGS": extra_build_args} if extra_build_args else {})
             try:
-                docker.run(
-                    builder_js_image_name,
-                    remove=True,
-                    interactive=True,
-                    tty=True,
-                    user=f"{os.getuid()}:{os.getgid()}",
-                    envs=envs,
-                    # TODO: detect this host name in npm_registry_url
-                    # rather than hard-wiring it
-                    add_hosts=[("gitea.local", "host-gateway")],
-                    volumes=[(repo_copy_path, "/workspace")],
-                    command=build_command,
-                )
-                # Note that although the docs say that build_result should
-                # contain the command output as a string, in reality it is
-                # always the empty string. Since we detect errors via catching
-                # exceptions below, we can safely ignore it here.
+                docker.run(builder_js_image_name,
+                           remove=True,
+                           interactive=True,
+                           tty=True,
+                           user=f"{os.getuid()}:{os.getgid()}",
+                           envs=envs,
+                           # TODO: detect this host name in npm_registry_url rather than hard-wiring it
+                           add_hosts=[("gitea.local", "host-gateway")],
+                           volumes=[(repo_copy_path, "/workspace")],
+                           command=build_command
+                           )
+                # Note that although the docs say that build_result should contain
+                # the command output as a string, in reality it is always the empty string.
+                # Since we detect errors via catching exceptions below, we can safely ignore it here.
             except DockerException as e:
                 print(f"Error executing build for {package} in container:\n {e}")
                 if not continue_on_error:
-                    print(
-                        "FATAL Error: build failed and --continue-on-error "
-                        "not set, exiting"
-                    )
+                    print("FATAL Error: build failed and --continue-on-error not set, exiting")
                     sys.exit(1)
                 else:
-                    print(
-                        "****** Build Error, continuing because "
-                        "--continue-on-error is set"
-                    )
+                    print("****** Build Error, continuing because --continue-on-error is set")
         else:
             print("Skipped")
@ -203,12 +168,6 @@ def _ensure_prerequisites():
     # Tell the user how to build it if not
     images = docker.image.list(builder_js_image_name)
     if len(images) == 0:
-        print(
-            f"FATAL: builder image: {builder_js_image_name} is required "
-            "but was not found"
-        )
-        print(
-            "Please run this command to create it: "
-            "laconic-so --stack build-support build-containers"
-        )
+        print(f"FATAL: builder image: {builder_js_image_name} is required but was not found")
+        print("Please run this command to create it: laconic-so --stack build-support build-containers")
         sys.exit(1)
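The comment in the hunk above about the PEP 584 merge operator is easy to verify: the `dict.update` form used in `build_npms.py` builds the same dict the 3.9+ `|` operator would, which is why it is a safe fallback for Python 3.8. A minimal illustration, spelled with `{**a, **b}` unpacking so it also runs on 3.8:

```python
base = {"CERC_NPM_AUTH_TOKEN": "token123"}
debug_extra = {"CERC_SCRIPT_DEBUG": "true"}

# Python 3.8-compatible form used in the code:
envs = dict(base)
envs.update(debug_extra)

# Equivalent merge (PEP 584 would be `base | debug_extra` on 3.9+):
merged = {**base, **debug_extra}
assert envs == merged
```

When keys collide, both forms give the right-hand operand's value precedence, matching the later `envs.update(...)` calls overriding earlier entries.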

View File

@ -24,5 +24,6 @@ class BuildContext:
     stack: str
     container: str
     container_build_dir: Path
-    container_build_env: Mapping[str, str]
+    container_build_env: Mapping[str,str]
     dev_root_path: str

View File

@ -20,23 +20,21 @@ from stack_orchestrator.util import get_parsed_stack_config, warn_exit
 def get_containers_in_scope(stack: str):
     containers_in_scope = []
     if stack:
         stack_config = get_parsed_stack_config(stack)
         if "containers" not in stack_config or stack_config["containers"] is None:
             warn_exit(f"stack {stack} does not define any containers")
-        containers_in_scope = stack_config["containers"]
+        containers_in_scope = stack_config['containers']
     else:
         # See: https://stackoverflow.com/a/20885799/1701505
         from stack_orchestrator import data
-        with importlib.resources.open_text(
-            data, "container-image-list.txt"
-        ) as container_list_file:
+        with importlib.resources.open_text(data, "container-image-list.txt") as container_list_file:
             containers_in_scope = container_list_file.read().splitlines()
     if opts.o.verbose:
-        print(f"Containers: {containers_in_scope}")
+        print(f'Containers: {containers_in_scope}')
         if stack:
             print(f"Stack: {stack}")

View File

@ -18,8 +18,7 @@
 # env vars:
 # CERC_REPO_BASE_DIR defaults to ~/cerc
-# TODO: display the available list of containers;
-# allow re-build of either all or specific containers
+# TODO: display the available list of containers; allow re-build of either all or specific containers
 import os
 import sys
@ -33,55 +32,40 @@ from stack_orchestrator.build.build_types import BuildContext
 @click.command()
-@click.option("--base-container")
-@click.option(
-    "--source-repo", help="directory containing the webapp to build", required=True
-)
-@click.option(
-    "--force-rebuild",
-    is_flag=True,
-    default=False,
-    help="Override dependency checking -- always rebuild",
-)
+@click.option('--base-container')
+@click.option('--source-repo', help="directory containing the webapp to build", required=True)
+@click.option("--force-rebuild", is_flag=True, default=False, help="Override dependency checking -- always rebuild")
 @click.option("--extra-build-args", help="Supply extra arguments to build")
 @click.option("--tag", help="Container tag (default: cerc/<app_name>:local)")
 @click.pass_context
 def command(ctx, base_container, source_repo, force_rebuild, extra_build_args, tag):
-    """build the specified webapp container"""
+    '''build the specified webapp container'''
     logger = TimedLogger()
-    quiet = ctx.obj.quiet
     debug = ctx.obj.debug
     verbose = ctx.obj.verbose
     local_stack = ctx.obj.local_stack
     stack = ctx.obj.stack
-    # See: https://stackoverflow.com/questions/25389095/
-    # python-get-path-of-root-project-structure
-    container_build_dir = (
-        Path(__file__).absolute().parent.parent.joinpath("data", "container-build")
-    )
+    # See: https://stackoverflow.com/questions/25389095/python-get-path-of-root-project-structure
+    container_build_dir = Path(__file__).absolute().parent.parent.joinpath("data", "container-build")
     if local_stack:
-        dev_root_path = os.getcwd()[0 : os.getcwd().rindex("stack-orchestrator")]
-        logger.log(
-            f"Local stack dev_root_path (CERC_REPO_BASE_DIR) overridden to: "
-            f"{dev_root_path}"
-        )
+        dev_root_path = os.getcwd()[0:os.getcwd().rindex("stack-orchestrator")]
+        logger.log(f'Local stack dev_root_path (CERC_REPO_BASE_DIR) overridden to: {dev_root_path}')
     else:
-        dev_root_path = os.path.expanduser(
-            config("CERC_REPO_BASE_DIR", default="~/cerc")
-        )
+        dev_root_path = os.path.expanduser(config("CERC_REPO_BASE_DIR", default="~/cerc"))
     if verbose:
-        logger.log(f"Dev Root is: {dev_root_path}")
+        logger.log(f'Dev Root is: {dev_root_path}')
     if not base_container:
         base_container = determine_base_container(source_repo)
     # First build the base container.
-    container_build_env = build_containers.make_container_build_env(
-        dev_root_path, container_build_dir, debug, force_rebuild, extra_build_args
-    )
+    container_build_env = build_containers.make_container_build_env(dev_root_path, container_build_dir, debug,
+                                                                    force_rebuild, extra_build_args)
     if verbose:
         logger.log(f"Building base container: {base_container}")
@@ -101,13 +85,12 @@ def command(ctx, base_container, source_repo, force_rebuild, extra_build_args, t
     if verbose:
         logger.log(f"Base container {base_container} build finished.")
-    # Now build the target webapp. We use the same build script,
-    # but with a different Dockerfile and work dir.
+    # Now build the target webapp. We use the same build script, but with a different Dockerfile and work dir.
     container_build_env["CERC_WEBAPP_BUILD_RUNNING"] = "true"
     container_build_env["CERC_CONTAINER_BUILD_WORK_DIR"] = os.path.abspath(source_repo)
-    container_build_env["CERC_CONTAINER_BUILD_DOCKERFILE"] = os.path.join(
-        container_build_dir, base_container.replace("/", "-"), "Dockerfile.webapp"
-    )
+    container_build_env["CERC_CONTAINER_BUILD_DOCKERFILE"] = os.path.join(container_build_dir,
+                                                                          base_container.replace("/", "-"),
+                                                                          "Dockerfile.webapp")
     if not tag:
         webapp_name = os.path.abspath(source_repo).split(os.path.sep)[-1]
         tag = f"cerc/{webapp_name}:local"

View File

@@ -52,8 +52,7 @@ def _local_tag_for(container: str):
 # See: https://docker-docs.uclv.cu/registry/spec/api/
 # Emulate this:
-# $ curl -u "my-username:my-token" -X GET \
-#     "https://<container-registry-hostname>/v2/cerc-io/cerc/test-container/tags/list"
+# $ curl -u "my-username:my-token" -X GET "https://<container-registry-hostname>/v2/cerc-io/cerc/test-container/tags/list"
 # {"name":"cerc-io/cerc/test-container","tags":["202402232130","202402232208"]}
 def _get_tags_for_container(container: str, registry_info: RegistryInfo) -> List[str]:
     # registry looks like: git.vdb.to/cerc-io
@@ -61,9 +60,7 @@ def _get_tags_for_container(container: str, registry_info: RegistryInfo) -> List
     url = f"https://{registry_parts[0]}/v2/{registry_parts[1]}/{container}/tags/list"
     if opts.o.debug:
         print(f"Fetching tags from: {url}")
-    response = requests.get(
-        url, auth=(registry_info.registry_username, registry_info.registry_token)
-    )
+    response = requests.get(url, auth=(registry_info.registry_username, registry_info.registry_token))
     if response.status_code == 200:
         tag_info = response.json()
         if opts.o.debug:
@@ -71,10 +68,7 @@ def _get_tags_for_container(container: str, registry_info: RegistryInfo) -> List
             tags_array = tag_info["tags"]
         return tags_array
     else:
-        error_exit(
-            f"failed to fetch tags from image registry, "
-            f"status code: {response.status_code}"
-        )
+        error_exit(f"failed to fetch tags from image registry, status code: {response.status_code}")
 def _find_latest(candidate_tags: List[str]):
@@ -85,9 +79,9 @@ def _find_latest(candidate_tags: List[str]):
     return sorted_candidates[-1]
-def _filter_for_platform(
-    container: str, registry_info: RegistryInfo, tag_list: List[str]
-) -> List[str]:
+def _filter_for_platform(container: str,
+                         registry_info: RegistryInfo,
+                         tag_list: List[str]) -> List[str] :
     filtered_tags = []
     this_machine = platform.machine()
     # Translate between Python and docker platform names
@@ -104,7 +98,7 @@ def _filter_for_platform(
     manifest = manifest_cmd.inspect_verbose(remote_tag)
     if opts.o.debug:
         print(f"manifest: {manifest}")
     image_architecture = manifest["Descriptor"]["platform"]["architecture"]
     if opts.o.debug:
         print(f"image_architecture: {image_architecture}")
     if this_machine == image_architecture:
@@ -143,44 +137,21 @@ def _add_local_tag(remote_tag: str, registry: str, local_tag: str):
 @click.command()
-@click.option("--include", help="only fetch these containers")
-@click.option("--exclude", help="don't fetch these containers")
+@click.option('--include', help="only fetch these containers")
+@click.option('--exclude', help="don\'t fetch these containers")
-@click.option(
-    "--force-local-overwrite",
-    is_flag=True,
-    default=False,
-    help="Overwrite a locally built image, if present",
-)
-@click.option(
-    "--image-registry", required=True, help="Specify the image registry to fetch from"
-)
-@click.option(
-    "--registry-username", required=True, help="Specify the image registry username"
-)
-@click.option(
-    "--registry-token", required=True, help="Specify the image registry access token"
-)
+@click.option("--force-local-overwrite", is_flag=True, default=False, help="Overwrite a locally built image, if present")
+@click.option("--image-registry", required=True, help="Specify the image registry to fetch from")
+@click.option("--registry-username", required=True, help="Specify the image registry username")
+@click.option("--registry-token", required=True, help="Specify the image registry access token")
 @click.pass_context
-def command(
-    ctx,
-    include,
-    exclude,
-    force_local_overwrite,
-    image_registry,
-    registry_username,
-    registry_token,
-):
-    """EXPERIMENTAL: fetch the images for a stack from remote registry"""
+def command(ctx, include, exclude, force_local_overwrite, image_registry, registry_username, registry_token):
+    '''EXPERIMENTAL: fetch the images for a stack from remote registry'''
     registry_info = RegistryInfo(image_registry, registry_username, registry_token)
     docker = DockerClient()
     if not opts.o.quiet:
         print("Logging into container registry:")
-    docker.login(
-        registry_info.registry,
-        registry_info.registry_username,
-        registry_info.registry_token,
-    )
+    docker.login(registry_info.registry, registry_info.registry_username, registry_info.registry_token)
     # Generate list of target containers
     stack = ctx.obj.stack
     containers_in_scope = get_containers_in_scope(stack)
@@ -201,24 +172,19 @@ def command(
             print(f"Fetching: {image_to_fetch}")
             _fetch_image(image_to_fetch, registry_info)
             # Now check if the target container already exists exists locally already
-            if _exists_locally(container):
+            if (_exists_locally(container)):
                 if not opts.o.quiet:
                     print(f"Container image {container} already exists locally")
                 # if so, fail unless the user specified force-local-overwrite
-                if force_local_overwrite:
+                if (force_local_overwrite):
                     # In that case remove the existing :local tag
                     if not opts.o.quiet:
-                        print(
-                            f"Warning: overwriting local tag from this image: "
-                            f"{container} because --force-local-overwrite was specified"
-                        )
+                        print(f"Warning: overwriting local tag from this image: {container} because "
+                              "--force-local-overwrite was specified")
                 else:
                     if not opts.o.quiet:
-                        print(
-                            f"Skipping local tagging for this image: {container} "
-                            "because that would overwrite an existing :local tagged "
-                            "image, use --force-local-overwrite to do so."
-                        )
+                        print(f"Skipping local tagging for this image: {container} because that would "
+                              "overwrite an existing :local tagged image, use --force-local-overwrite to do so.")
                     continue
             # Tag the fetched image with the :local tag
             _add_local_tag(image_to_fetch, image_registry, local_tag)
@@ -226,7 +192,4 @@ def command(
         if opts.o.verbose:
             print(f"Excluding: {container}")
     if not all_containers_found:
-        print(
-            "Warning: couldn't find usable images for one or more containers, "
-            "this stack will not deploy"
-        )
+        print("Warning: couldn't find usable images for one or more containers, this stack will not deploy")

View File

@@ -39,9 +39,3 @@ node_affinities_key = "node-affinities"
 node_tolerations_key = "node-tolerations"
 kind_config_filename = "kind-config.yml"
 kube_config_filename = "kubeconfig.yml"
-cri_base_filename = "cri-base.json"
-unlimited_memlock_key = "unlimited-memlock"
-runtime_class_key = "runtime-class"
-high_memlock_runtime = "high-memlock"
-high_memlock_spec_filename = "high-memlock-spec.json"
-acme_email_key = "acme-email"

View File

@@ -20,7 +20,7 @@ services:
    depends_on:
      generate-jwt:
        condition: service_completed_successfully
    env_file:
      - ../config/fixturenet-blast/${NETWORK:-fixturenet}.config
  blast-geth:
    image: blastio/blast-geth:${NETWORK:-testnet-sepolia}
@@ -51,7 +51,7 @@ services:
      --nodiscover
      --maxpeers=0
      --rollup.disabletxpoolgossip=true
    env_file:
      - ../config/fixturenet-blast/${NETWORK:-fixturenet}.config
    depends_on:
      geth-init:
@@ -73,7 +73,7 @@ services:
      --rollup.config="/blast/rollup.json"
    depends_on:
      - blast-geth
    env_file:
      - ../config/fixturenet-blast/${NETWORK:-fixturenet}.config
    volumes:

View File

@@ -14,3 +14,4 @@ services:
      - "9090"
      - "9091"
      - "1317"

View File

@@ -19,7 +19,7 @@ services:
    depends_on:
      generate-jwt:
        condition: service_completed_successfully
    env_file:
      - ../config/mainnet-blast/${NETWORK:-mainnet}.config
  blast-geth:
    image: blastio/blast-geth:${NETWORK:-mainnet}
@@ -53,7 +53,7 @@ services:
      --nodiscover
      --maxpeers=0
      --rollup.disabletxpoolgossip=true
    env_file:
      - ../config/mainnet-blast/${NETWORK:-mainnet}.config
    depends_on:
      geth-init:
@@ -76,7 +76,7 @@ services:
      --rollup.config="/blast/rollup.json"
    depends_on:
      - blast-geth
    env_file:
      - ../config/mainnet-blast/${NETWORK:-mainnet}.config
    volumes:

View File

@@ -17,3 +17,4 @@ services:
      - URL_NEUTRON_TEST_REST=https://rest-palvus.pion-1.ntrn.tech
      - URL_NEUTRON_TEST_RPC=https://rpc-palvus.pion-1.ntrn.tech
      - WALLET_CONNECT_ID=0x0x0x0x0x0x0x0x0x0x0x0x0x0x0x0x0x0x0x0x

View File

@@ -32,4 +32,4 @@ services:
volumes:
  reth_data:
  lighthouse_data:
  shared_data:

View File

@@ -12,7 +12,7 @@ services:
      POSTGRES_INITDB_ARGS: "-E UTF8 --locale=C"
    ports:
      - "5432"
  test-client:
    image: cerc/test-database-client:local

View File

@@ -8,8 +8,6 @@ services:
       CERC_TEST_PARAM_2: "CERC_TEST_PARAM_2_VALUE"
       CERC_TEST_PARAM_3: ${CERC_TEST_PARAM_3:-FAILED}
     volumes:
-      - ../config/test/script.sh:/opt/run.sh
-      - ../config/test/settings.env:/opt/settings.env
       - test-data-bind:/data
       - test-data-auto:/data2
       - test-config:/config:ro

View File

@@ -1,2 +1,2 @@
GETH_ROLLUP_SEQUENCERHTTP=https://sequencer.s2.testblast.io
OP_NODE_P2P_BOOTNODES=enr:-J-4QM3GLUFfKMSJQuP1UvuKQe8DyovE7Eaiit0l6By4zjTodkR4V8NWXJxNmlg8t8rP-Q-wp3jVmeAOml8cjMj__ROGAYznzb_HgmlkgnY0gmlwhA-cZ_eHb3BzdGFja4X947FQAIlzZWNwMjU2azGhAiuDqvB-AsVSRmnnWr6OHfjgY8YfNclFy9p02flKzXnOg3RjcIJ2YYN1ZHCCdmE,enr:-J-4QDCVpByqQ8nFqCS9aHicqwUfXgzFDslvpEyYz19lvkHLIdtcIGp2d4q5dxHdjRNTO6HXCsnIKxUeuZSPcEbyVQCGAYznzz0RgmlkgnY0gmlwhANiQfuHb3BzdGFja4X947FQAIlzZWNwMjU2azGhAy3AtF2Jh_aPdOohg506Hjmtx-fQ1AKmu71C7PfkWAw9g3RjcIJ2YYN1ZHCCdmE

View File

@@ -1411,4 +1411,4 @@
  "uid": "nT9VeZoVk",
  "version": 2,
  "weekStart": ""
}

View File

@@ -65,7 +65,7 @@ if [ -n "$CERC_L1_ADDRESS" ] && [ -n "$CERC_L1_PRIV_KEY" ]; then
    # Sequencer
    SEQ=$(echo "$wallet3" | awk '/Address:/{print $2}')
    SEQ_KEY=$(echo "$wallet3" | awk '/Private key:/{print $3}')
    echo "Funding accounts."
    wait_for_block 1 300
    cast send --from $ADMIN --rpc-url $CERC_L1_RPC --value 5ether $PROPOSER --private-key $ADMIN_KEY

View File

@@ -56,7 +56,7 @@
              "value": "!validator-pubkey"
            }
          }
        }
      ],
      "supply": []
    },
@@ -269,4 +269,4 @@
      "claims": null
    }
  }
}

View File

@@ -2084,4 +2084,4 @@
  "clientPolicies": {
    "policies": []
  }
}

View File

@@ -2388,4 +2388,4 @@
  "clientPolicies": {
    "policies": []
  }
}

View File

@@ -29,3 +29,4 @@
  "l1_system_config_address": "0x5531dcff39ec1ec727c4c5d2fc49835368f805a9",
  "protocol_versions_address": "0x0000000000000000000000000000000000000000"
}

View File

@@ -2388,4 +2388,4 @@
  "clientPolicies": {
    "policies": []
  }
}

View File

@@ -12,10 +12,7 @@ from fabric import Connection
 def dump_src_db_to_file(db_host, db_port, db_user, db_password, db_name, file_name):
-    command = (
-        f"pg_dump -h {db_host} -p {db_port} -U {db_user} "
-        f"-d {db_name} -c --inserts -f {file_name}"
-    )
+    command = f"pg_dump -h {db_host} -p {db_port} -U {db_user} -d {db_name} -c --inserts -f {file_name}"
     my_env = os.environ.copy()
     my_env["PGPASSWORD"] = db_password
     print(f"Exporting from {db_host}:{db_port}/{db_name} to {file_name}... ", end="")

View File

@@ -1901,4 +1901,4 @@
  "uid": "b54352dd-35f6-4151-97dc-265bab0c67e9",
  "version": 18,
  "weekStart": ""
}

View File

@@ -849,7 +849,7 @@ groups:
        annotations:
          summary: Watcher {{ index $labels "instance" }} of group {{ index $labels "job" }} is falling behind external head by {{ index $values "diff" }}
        isPaused: false
    # Secured Finance
    - uid: secured_finance_diff_external
      title: secured_finance_watcher_head_tracking

View File

@@ -14,7 +14,7 @@ echo ACCOUNT_PRIVATE_KEY=${CERC_PRIVATE_KEY_DEPLOYER} >> .env
if [ -f ${erc20_address_file} ]; then
  echo "${erc20_address_file} already exists, skipping ERC20 contract deployment"
  cat ${erc20_address_file}
  # Keep the container running
  tail -f
fi

View File

@@ -1,3 +0,0 @@
-#!/bin/sh
-echo "Hello"

View File

@@ -1 +0,0 @@
-ANSWER=42

View File

@@ -940,3 +940,4 @@ ALTER TABLE ONLY public.state
--
-- PostgreSQL database dump complete
--

View File

@@ -18,3 +18,4 @@ root@7c4124bb09e3:/src#
```
Now gerbil commands can be run.

View File

@@ -23,7 +23,7 @@ local_npm_registry_url=$2
versioned_target_package=$(yarn list --pattern ${target_package} --depth=0 --json --non-interactive --no-progress | jq -r '.data.trees[].name')
# Use yarn info to get URL checksums etc from the new registry
yarn_info_output=$(yarn info --json $versioned_target_package 2>/dev/null)
# First check if the target version actually exists.
# If it doesn't exist there will be no .data.dist.tarball element,
# and jq will output the string "null"
package_tarball=$(echo $yarn_info_output | jq -r .data.dist.tarball)

View File

@@ -11,8 +11,6 @@ if len(sys.argv) > 1:
 with open(testnet_config_path) as stream:
     data = yaml.safe_load(stream)
-for key, value in data["el_premine"].items():
-    acct = w3.eth.account.from_mnemonic(
-        data["mnemonic"], account_path=key, passphrase=""
-    )
+for key, value in data['el_premine'].items():
+    acct = w3.eth.account.from_mnemonic(data['mnemonic'], account_path=key, passphrase='')
     print("%s,%s,%s" % (key, acct.address, acct.key.hex()))

View File

@@ -4,4 +4,4 @@ out = 'out'
libs = ['lib']
remappings = ['ds-test/=lib/ds-test/src/']
# See more config options https://github.com/gakonst/foundry/tree/master/config

View File

@@ -20,4 +20,4 @@ contract Stateful {
  function inc() public {
    x = x + 1;
  }
}

View File

@@ -11,4 +11,4 @@ record:
  foo: bar
  tags:
    - a
    - b

View File

@@ -9,4 +9,4 @@ record:
  foo: bar
  tags:
    - a
    - b

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env bash
# Build cerc/laconicd
source ${CERC_CONTAINER_BASE_DIR}/build-base.sh
docker build -t cerc/laconicd:local ${build_command_args} ${CERC_REPO_BASE_DIR}/laconicd

View File

@@ -36,7 +36,7 @@ if [ -f "./run-webapp.sh" ]; then
  ./run-webapp.sh &
  tpid=$!
  wait $tpid
else
  "$SCRIPT_DIR/apply-runtime-env.sh" "`pwd`" .next .next-r
  mv .next .next.old
  mv .next-r/.next .

View File

@@ -5,3 +5,4 @@ WORKDIR /app
COPY . .
RUN yarn

View File

@@ -22,7 +22,7 @@ fi
# infers the directory from which to load chain configuration files
# by the presence or absense of the substring "testnet" in the host name
# (browser side -- the host name of the host in the address bar of the browser)
# Accordingly we configure our network in both directories in order to
# subvert this lunacy.
explorer_mainnet_config_dir=/app/chains/mainnet
explorer_testnet_config_dir=/app/chains/testnet

View File

@@ -1,6 +1,9 @@
-FROM alpine:latest
+FROM ubuntu:latest
-RUN apk add --no-cache nginx
+RUN apt-get update && export DEBIAN_FRONTEND=noninteractive && export DEBCONF_NOWARNINGS="yes" && \
+    apt-get install -y software-properties-common && \
+    apt-get install -y nginx && \
+    apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
 EXPOSE 80

View File

@@ -1,4 +1,4 @@
-#!/usr/bin/env sh
+#!/usr/bin/env bash
 set -e
 if [ -n "$CERC_SCRIPT_DEBUG" ]; then
@@ -8,14 +8,14 @@ fi
 echo "Test container starting"
 DATA_DEVICE=$(df | grep "/data$" | awk '{ print $1 }')
-if [ -n "$DATA_DEVICE" ]; then
+if [[ -n "$DATA_DEVICE" ]]; then
   echo "/data: MOUNTED dev=${DATA_DEVICE}"
 else
   echo "/data: not mounted"
 fi
 DATA2_DEVICE=$(df | grep "/data2$" | awk '{ print $1 }')
-if [ -n "$DATA_DEVICE" ]; then
+if [[ -n "$DATA_DEVICE" ]]; then
   echo "/data2: MOUNTED dev=${DATA2_DEVICE}"
 else
   echo "/data2: not mounted"
@@ -23,7 +23,7 @@ fi
 # Test if the container's filesystem is old (run previously) or new
 for d in /data /data2; do
-  if [ -f "$d/exists" ];
+  if [[ -f "$d/exists" ]];
   then
     TIMESTAMP=`cat $d/exists`
     echo "$d filesystem is old, created: $TIMESTAMP"
@@ -52,7 +52,7 @@ fi
 if [ -d "/config" ]; then
   echo "/config: EXISTS"
   for f in /config/*; do
-    if [ -f "$f" ] || [ -L "$f" ]; then
+    if [[ -f "$f" ]] || [[ -L "$f" ]]; then
       echo "$f:"
       cat "$f"
       echo ""
@@ -64,4 +64,4 @@ else
 fi
 # Run nginx which will block here forever
-nginx -g "daemon off;"
+/usr/sbin/nginx -g "daemon off;"

View File

@@ -2,4 +2,4 @@
# Build cerc/test-container
source ${CERC_CONTAINER_BASE_DIR}/build-base.sh
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
docker build -t cerc/test-database-client:local -f ${SCRIPT_DIR}/Dockerfile ${build_command_args} $SCRIPT_DIR

View File

@@ -8,7 +8,7 @@ CERC_WEBAPP_FILES_DIR="${CERC_WEBAPP_FILES_DIR:-/data}"
CERC_ENABLE_CORS="${CERC_ENABLE_CORS:-false}"
CERC_SINGLE_PAGE_APP="${CERC_SINGLE_PAGE_APP}"
if [ -z "${CERC_SINGLE_PAGE_APP}" ]; then
  # If there is only one HTML file, assume an SPA.
  if [ 1 -eq $(find "${CERC_WEBAPP_FILES_DIR}" -name '*.html' | wc -l) ]; then
    CERC_SINGLE_PAGE_APP=true

View File

@@ -33,23 +33,13 @@ rules:
       - endpoints
       - nodes
       - pods
-      - secrets
       - namespaces
       - services
     verbs:
       - list
       - watch
       - get
-  - apiGroups:
-      - ""
-    resources:
-      - secrets
-    verbs:
-      - list
-      - watch
-      - get
-      - create
-      - update
-      - delete
   - apiGroups:
       - ""
     resources:
@@ -93,7 +83,6 @@ rules:
       - get
       - create
       - update
-      - delete
 ---
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRoleBinding
@@ -186,7 +175,7 @@ spec:
         operator: Equal
       containers:
         - name: caddy-ingress-controller
-          image: caddy/ingress:latest
+          image: ghcr.io/caddyserver/ingress:latest
           imagePullPolicy: IfNotPresent
           ports:
             - name: http

View File

@@ -6,7 +6,7 @@ JS/TS/NPM builds need an npm registry to store intermediate package artifacts.
This can be supplied by the user (e.g. using a hosted registry or even npmjs.com), or a local registry using gitea can be deployed by stack orchestrator.
To use a user-supplied registry set these environment variables:
`CERC_NPM_REGISTRY_URL` and
`CERC_NPM_AUTH_TOKEN`
Leave `CERC_NPM_REGISTRY_URL` un-set to use the local gitea registry.
@@ -22,7 +22,7 @@ $ laconic-so --stack build-support build-containers
```
$ laconic-so --stack package-registry setup-repositories
$ laconic-so --stack package-registry build-containers
$ laconic-so --stack package-registry deploy up
[+] Running 3/3
 ⠿ Network laconic-aecc4a21d3a502b14522db97d427e850_gitea  Created  0.0s

View File

@@ -14,3 +14,4 @@ containers:
pods:
  - fixturenet-blast
  - foundry

View File

@@ -3,3 +3,4 @@
A "loaded" version of fixturenet-eth, with all the bells and whistles enabled.
TODO: write me

View File

@@ -12,7 +12,7 @@ $ chmod +x ./laconic-so
$ export PATH=$PATH:$(pwd) # Or move laconic-so to ~/bin or your favorite on-path directory
```
## 2. Prepare the local build environment
Note that this step needs only to be done once on a new machine.
Detailed instructions can be found [here](../build-support/README.md). For the impatient run these commands:
```
$ laconic-so --stack build-support build-containers --exclude cerc/builder-gerbil

View File

@@ -52,7 +52,7 @@ laconic-so --stack fixturenet-optimism deploy init --map-ports-to-host any-fixed
It is usually necessary to expose certain container ports on one or more the host's addresses to allow incoming connections.
Any ports defined in the Docker compose file are exposed by default with random port assignments, bound to "any" interface (IP address 0.0.0.0), but the port mappings can be customized by editing the "spec" file generated by `laconic-so deploy init`.
In addition, a stack-wide port mapping "recipe" can be applied at the time the
`laconic-so deploy init` command is run, by supplying the desired recipe with the `--map-ports-to-host` option. The following recipes are supported:
| Recipe | Host Port Mapping |
|--------|-------------------|
@@ -62,11 +62,11 @@ In addition, a stack-wide port mapping "recipe" can be applied at the time the
| localhost-fixed-random | Bind to 127.0.0.1 using a random port number selected at the time the command is run (not checked for already in use)|
| any-fixed-random | Bind to 0.0.0.0 using a random port number selected at the time the command is run (not checked for already in use) |
For example, you may wish to use `any-fixed-random` to generate the initial mappings and then edit the spec file to set the `fixturenet-eth-geth-1` RPC to port 8545 and the `op-geth` RPC to port 9545 on the host.
Or, you may wish to use `any-same` for the initial mappings -- in which case you'll have to edit the spec to file to ensure the various geth instances aren't all trying to publish to host ports 8545/8546 at once.
### Data volumes
Container data volumes are bind-mounted to specified paths in the host filesystem.
The default setup (generated by `laconic-so deploy init`) places the volumes in the `./data` subdirectory of the deployment directory. The default mappings can be customized by editing the "spec" file generated by `laconic-so deploy init`.
@@ -101,7 +101,7 @@ docker logs -f <CONTAINER_ID>
## Example: bridge some ETH from L1 to L2
Send some ETH from the desired account to the `L1StandardBridgeProxy` contract on L1 to test bridging to L2.
We can use the testing account `0xe6CE22afe802CAf5fF7d3845cec8c736ecc8d61F` which is pre-funded and unlocked, and the `cerc/foundry:local` container to make use of the `cast` cli.

View File

@@ -17,21 +17,16 @@ from stack_orchestrator.deploy.deployment_context import DeploymentContext
 def create(context: DeploymentContext, extra_args):
-    # Slightly modify the base fixturenet-eth compose file to replace the
-    # startup script for fixturenet-eth-geth-1
-    # We need to start geth with the flag to allow non eip-155 compliant
-    # transactions in order to publish the
-    # deterministic-deployment-proxy contract, which itself is a prereq for
-    # Optimism contract deployment
-    fixturenet_eth_compose_file = context.deployment_dir.joinpath(
-        "compose", "docker-compose-fixturenet-eth.yml"
-    )
+    # Slightly modify the base fixturenet-eth compose file to replace the startup script for fixturenet-eth-geth-1
+    # We need to start geth with the flag to allow non eip-155 compliant transactions in order to publish the
+    # deterministic-deployment-proxy contract, which itself is a prereq for Optimism contract deployment
+    fixturenet_eth_compose_file = context.deployment_dir.joinpath('compose', 'docker-compose-fixturenet-eth.yml')
-    new_script = "../config/fixturenet-optimism/run-geth.sh:/opt/testnet/run.sh"
+    new_script = '../config/fixturenet-optimism/run-geth.sh:/opt/testnet/run.sh'
     def add_geth_volume(yaml_data):
-        if new_script not in yaml_data["services"]["fixturenet-eth-geth-1"]["volumes"]:
-            yaml_data["services"]["fixturenet-eth-geth-1"]["volumes"].append(new_script)
+        if new_script not in yaml_data['services']['fixturenet-eth-geth-1']['volumes']:
+            yaml_data['services']['fixturenet-eth-geth-1']['volumes'].append(new_script)
     context.modify_yaml(fixturenet_eth_compose_file, add_geth_volume)
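The `add_geth_volume` guard above is an idempotency pattern worth noting: re-running `create` must not duplicate the bind mount. A standalone sketch (the compose dict below is a made-up stand-in for the parsed YAML):

```python
# Minimal sketch of the idempotent volume-append pattern used above.
# The compose structure is a plain-dict stand-in for the parsed YAML file.
new_script = "../config/fixturenet-optimism/run-geth.sh:/opt/testnet/run.sh"

compose = {
    "services": {
        "fixturenet-eth-geth-1": {
            "volumes": ["fixturenet_eth_geth_1_data:/root/ethdata"]
        }
    }
}

def add_geth_volume(yaml_data):
    # Append only if absent, so repeated invocations don't duplicate the mount
    volumes = yaml_data["services"]["fixturenet-eth-geth-1"]["volumes"]
    if new_script not in volumes:
        volumes.append(new_script)

add_geth_volume(compose)
add_geth_volume(compose)  # second call is a no-op
```

The same check-before-append shape applies to any list-valued compose key that a deployment hook rewrites in place.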

View File

@@ -38,7 +38,7 @@ laconic-so --stack fixturenet-optimism deploy init --map-ports-to-host any-fixed
It is usually necessary to expose certain container ports on one or more of the host's addresses to allow incoming connections.
Any ports defined in the Docker compose file are exposed by default with random port assignments, bound to "any" interface (IP address 0.0.0.0), but the port mappings can be customized by editing the "spec" file generated by `laconic-so deploy init`.
In addition, a stack-wide port mapping "recipe" can be applied at the time the
`laconic-so deploy init` command is run, by supplying the desired recipe with the `--map-ports-to-host` option. The following recipes are supported:
| Recipe | Host Port Mapping |
|--------|-------------------|
@@ -48,9 +48,9 @@ In addition, a stack-wide port mapping "recipe" can be applied at the time the
| localhost-fixed-random | Bind to 127.0.0.1 using a random port number selected at the time the command is run (not checked for already in use) |
| any-fixed-random | Bind to 0.0.0.0 using a random port number selected at the time the command is run (not checked for already in use) |
For example, you may wish to use `any-fixed-random` to generate the initial mappings and then edit the spec file to set the `op-geth` RPC to an easy-to-remember port like 8545 or 9545 on the host.
### Data volumes
Container data volumes are bind-mounted to specified paths in the host filesystem.
The default setup (generated by `laconic-so deploy init`) places the volumes in the `./data` subdirectory of the deployment directory. The default mappings can be customized by editing the "spec" file generated by `laconic-so deploy init`.

View File

@@ -128,7 +128,7 @@ Stack components:
          removed
          topics
          transactionHash
          transactionIndex
        }
        getEthBlock(
@@ -211,14 +211,14 @@ Stack components:
            hash
          }
          log {
            id
          }
          block {
            number
          }
        }
        metadata {
          pageEndsAtTimestamp
          isLastPage
        }
      }
@@ -227,7 +227,7 @@ Stack components:
* Open watcher Ponder app endpoint http://localhost:42069
* Try GQL query to see transfer events
  ```graphql
  {
    transferEvents (orderBy: "timestamp", orderDirection: "desc") {
@@ -251,9 +251,9 @@ Stack components:
  ```bash
  export TOKEN_ADDRESS=$(docker exec payments-ponder-er20-contracts-1 jq -r '.address' ./deployment/erc20-address.json)
  ```
* Transfer token
  ```bash
  docker exec -it payments-ponder-er20-contracts-1 bash -c "yarn token:transfer:docker --token ${TOKEN_ADDRESS} --to 0xe22AD83A0dE117bA0d03d5E94Eb4E0d80a69C62a --amount 5000"
  ```

View File

@@ -48,7 +48,7 @@ or see the full logs:
$ laconic-so --stack fixturenet-pocket deploy logs pocket
```
## 5. Send a relay request to Pocket node
The Pocket node serves relay requests at `http://localhost:8081/v1/client/sim`
Example request:
```

View File

@@ -154,12 +154,12 @@ http://127.0.0.1:<HOST_PORT>/subgraphs/name/sushiswap/v3-lotus/graphql
      deployment
      hasIndexingErrors
    }
    factories {
      poolCount
      id
    }
    pools {
      id
      token0 {

View File

@@ -7,7 +7,7 @@ We will use the [ethereum-gravatar](https://github.com/graphprotocol/graph-tooli
- Clone the repo
  ```bash
  git clone git@github.com:graphprotocol/graph-tooling.git
  cd graph-tooling
  ```
@@ -54,11 +54,11 @@ The following steps should be similar for every subgraph
- Create and deploy the subgraph
  ```bash
  pnpm graph create example --node <GRAPH_NODE_DEPLOY_ENDPOINT>
  pnpm graph deploy example --ipfs <GRAPH_NODE_IPFS_ENDPOINT> --node <GRAPH_NODE_DEPLOY_ENDPOINT>
  ```
  - `GRAPH_NODE_DEPLOY_ENDPOINT` and `GRAPH_NODE_IPFS_ENDPOINT` will be available after graph-node has been deployed
  - More details can be seen in [Create a deployment](./README.md#create-a-deployment) section
- The subgraph GQL endpoint will be seen after deploy command runs successfully

View File

@@ -1,7 +1,7 @@
version: "1.0"
name: kubo
description: "Run kubo (IPFS)"
repos:
containers:
pods:
  - kubo

View File

@@ -2,7 +2,7 @@
```
laconic-so --stack laconic-dot-com setup-repositories
laconic-so --stack laconic-dot-com build-containers
laconic-so --stack laconic-dot-com deploy init --output laconic-website-spec.yml --map-ports-to-host localhost-same
laconic-so --stack laconic-dot-com deploy create --spec-file laconic-website-spec.yml --deployment-dir lx-website
laconic-so deployment --dir lx-website start

View File

@@ -2,6 +2,6 @@
```
laconic-so --stack lasso setup-repositories
laconic-so --stack lasso build-containers
laconic-so --stack lasso deploy up
```

View File

@@ -22,24 +22,18 @@ import yaml
 def create(context, extra_args):
     # Our goal here is just to copy the json files for blast
     yml_path = context.deployment_dir.joinpath("spec.yml")
-    with open(yml_path, "r") as file:
+    with open(yml_path, 'r') as file:
         data = yaml.safe_load(file)
-    mount_point = data["volumes"]["blast-data"]
+    mount_point = data['volumes']['blast-data']
     if mount_point[0] == "/":
         deploy_dir = Path(mount_point)
     else:
         deploy_dir = context.deployment_dir.joinpath(mount_point)
     command_context = extra_args[2]
-    compose_file = [
-        f for f in command_context.cluster_context.compose_files if "mainnet-blast" in f
-    ][0]
-    source_config_file = Path(compose_file).parent.parent.joinpath(
-        "config", "mainnet-blast", "genesis.json"
-    )
+    compose_file = [f for f in command_context.cluster_context.compose_files if "mainnet-blast" in f][0]
+    source_config_file = Path(compose_file).parent.parent.joinpath("config", "mainnet-blast", "genesis.json")
     copy(source_config_file, deploy_dir)
-    source_config_file = Path(compose_file).parent.parent.joinpath(
-        "config", "mainnet-blast", "rollup.json"
-    )
+    source_config_file = Path(compose_file).parent.parent.joinpath("config", "mainnet-blast", "rollup.json")
     copy(source_config_file, deploy_dir)
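The path arithmetic in `create` above (pick the `mainnet-blast` compose file, then resolve config files two levels up under `config/`) can be exercised standalone; everything below is throwaway scaffolding, not the stack's real layout:

```python
import shutil
import tempfile
from pathlib import Path

# Sketch of the config-copy step above, using a temporary tree that mimics
# the compose/ and config/ sibling-directory layout the code relies on.
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    config_dir = root / "config" / "mainnet-blast"
    config_dir.mkdir(parents=True)
    (config_dir / "genesis.json").write_text('{"config": {}}')
    (config_dir / "rollup.json").write_text('{"genesis": {}}')

    # Select the blast compose file by substring, as the hook does
    compose_files = [str(root / "compose" / "docker-compose-mainnet-blast.yml")]
    compose_file = [f for f in compose_files if "mainnet-blast" in f][0]

    deploy_dir = root / "deployment"
    deploy_dir.mkdir()
    for name in ("genesis.json", "rollup.json"):
        # parent.parent hops from compose/<file> back to the repo root
        source = Path(compose_file).parent.parent.joinpath("config", "mainnet-blast", name)
        shutil.copy(source, deploy_dir)
    copied = sorted(p.name for p in deploy_dir.iterdir())
```

Note that `Path(compose_file).parent.parent` works even though the compose file itself is never opened; only the config files need to exist.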

View File

@@ -92,7 +92,7 @@ volumes:
  mainnet_eth_plugeth_geth_1_data: ./data/mainnet_eth_plugeth_geth_1_data
  mainnet_eth_plugeth_lighthouse_1_data: ./data/mainnet_eth_plugeth_lighthouse_1_data
```
In addition, a stack-wide port mapping "recipe" can be applied at the time the
`laconic-so deploy init` command is run, by supplying the desired recipe with the `--map-ports-to-host` option. The following recipes are supported:
| Recipe | Host Port Mapping |
|--------|-------------------|

View File

@@ -27,8 +27,6 @@ def setup(ctx):
 def create(ctx, extra_args):
     # Generate the JWT secret and save to its config file
     secret = token_hex(32)
-    jwt_file_path = ctx.deployment_dir.joinpath(
-        "data", "mainnet_eth_plugeth_config_data", "jwtsecret"
-    )
-    with open(jwt_file_path, "w+") as jwt_file:
+    jwt_file_path = ctx.deployment_dir.joinpath("data", "mainnet_eth_plugeth_config_data", "jwtsecret")
+    with open(jwt_file_path, 'w+') as jwt_file:
         jwt_file.write(secret)

View File

@@ -92,7 +92,7 @@ volumes:
  mainnet_eth_geth_1_data: ./data/mainnet_eth_geth_1_data
  mainnet_eth_lighthouse_1_data: ./data/mainnet_eth_lighthouse_1_data
```
In addition, a stack-wide port mapping "recipe" can be applied at the time the
`laconic-so deploy init` command is run, by supplying the desired recipe with the `--map-ports-to-host` option. The following recipes are supported:
| Recipe | Host Port Mapping |
|--------|-------------------|

View File

@@ -27,8 +27,6 @@ def setup(ctx):
 def create(ctx, extra_args):
     # Generate the JWT secret and save to its config file
     secret = token_hex(32)
-    jwt_file_path = ctx.deployment_dir.joinpath(
-        "data", "mainnet_eth_config_data", "jwtsecret"
-    )
-    with open(jwt_file_path, "w+") as jwt_file:
+    jwt_file_path = ctx.deployment_dir.joinpath("data", "mainnet_eth_config_data", "jwtsecret")
+    with open(jwt_file_path, 'w+') as jwt_file:
         jwt_file.write(secret)
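For context on what `token_hex(32)` produces here: 32 random bytes rendered as 64 hexadecimal characters, the format execution and consensus clients expect in the `jwtsecret` file. A standalone sketch using the stdlib `secrets` module and a temporary directory in place of the deployment dir:

```python
import secrets
import tempfile
from pathlib import Path

# token_hex(n) returns 2*n hex characters drawn from a CSPRNG, so
# token_hex(32) yields the 64-character shared secret both clients read.
secret = secrets.token_hex(32)

with tempfile.TemporaryDirectory() as tmp:
    jwt_file_path = Path(tmp) / "jwtsecret"
    jwt_file_path.write_text(secret)
    stored = jwt_file_path.read_text()
```

Writing the secret once and mounting the same file into both containers is what keeps the geth/lighthouse Engine API handshake in sync.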

View File

@@ -36,9 +36,9 @@ laconic-f028f14527b95e2eb97f0c0229d00939-go-opera-1 | 'mainnet-109331-no-histor
laconic-f028f14527b95e2eb97f0c0229d00939-go-opera-1 | INFO [06-20|13:32:33.034] Maximum peer count total=50
laconic-f028f14527b95e2eb97f0c0229d00939-go-opera-1 | INFO [06-20|13:32:33.034] Smartcard socket not found, disabling err="stat /run/pcscd/pcscd.comm: no such file or directory"
laconic-f028f14527b95e2eb97f0c0229d00939-go-opera-1 | INFO [06-20|13:32:33.034] Genesis file is a known preset name="Mainnet-109331 without history"
laconic-f028f14527b95e2eb97f0c0229d00939-go-opera-1 | INFO [06-20|13:32:33.052] Applying genesis state
laconic-f028f14527b95e2eb97f0c0229d00939-go-opera-1 | INFO [06-20|13:32:33.052] - Reading epochs unit 0
laconic-f028f14527b95e2eb97f0c0229d00939-go-opera-1 | INFO [06-20|13:32:33.054] - Reading blocks unit 0
laconic-f028f14527b95e2eb97f0c0229d00939-go-opera-1 | INFO [06-20|13:32:33.530] Applied genesis state name=main id=250 genesis=0x4a53c5445584b3bfc20dbfb2ec18ae20037c716f3ba2d9e1da768a9deca17cb4
laconic-f028f14527b95e2eb97f0c0229d00939-go-opera-1 | INFO [06-20|13:32:33.531] Regenerated local transaction journal transactions=0 accounts=0
laconic-f028f14527b95e2eb97f0c0229d00939-go-opera-1 | INFO [06-20|13:32:33.532] Starting peer-to-peer node instance=go-opera/v1.1.2-rc.5-50cd051d-1677276206/linux-amd64/go1.19.10
@@ -47,7 +47,7 @@ laconic-f028f14527b95e2eb97f0c0229d00939-go-opera-1 | INFO [06-20|13:32:33.537]
laconic-f028f14527b95e2eb97f0c0229d00939-go-opera-1 | INFO [06-20|13:32:33.537] IPC endpoint opened url=/root/.opera/opera.ipc
laconic-f028f14527b95e2eb97f0c0229d00939-go-opera-1 | INFO [06-20|13:32:33.538] HTTP server started endpoint=[::]:18545 prefix= cors=* vhosts=localhost
laconic-f028f14527b95e2eb97f0c0229d00939-go-opera-1 | INFO [06-20|13:32:33.538] WebSocket enabled url=ws://[::]:18546
laconic-f028f14527b95e2eb97f0c0229d00939-go-opera-1 | INFO [06-20|13:32:33.538] Rebuilding state snapshot
laconic-f028f14527b95e2eb97f0c0229d00939-go-opera-1 | INFO [06-20|13:32:33.538] EVM snapshot module=gossip-store at=000000..000000 generating=true
laconic-f028f14527b95e2eb97f0c0229d00939-go-opera-1 | INFO [06-20|13:32:33.538] Resuming state snapshot generation accounts=0 slots=0 storage=0.00B elapsed="189.74µs"
laconic-f028f14527b95e2eb97f0c0229d00939-go-opera-1 | INFO [06-20|13:32:33.538] Generated state snapshot accounts=0 slots=0 storage=0.00B elapsed="265.061µs"

View File

@@ -1 +1,2 @@
# Laconic Mainnet Deployment (experimental)

View File

@@ -14,10 +14,7 @@
 # along with this program. If not, see <http:#www.gnu.org/licenses/>.
 from stack_orchestrator.util import get_yaml
-from stack_orchestrator.deploy.deploy_types import (
-    DeployCommandContext,
-    LaconicStackSetupCommand,
-)
+from stack_orchestrator.deploy.deploy_types import DeployCommandContext, LaconicStackSetupCommand
 from stack_orchestrator.deploy.deployment_context import DeploymentContext
 from stack_orchestrator.deploy.stack_state import State
 from stack_orchestrator.deploy.deploy_util import VolumeMapping, run_container_command
@@ -78,12 +75,7 @@ def _copy_gentx_files(network_dir: Path, gentx_file_list: str):
     gentx_files = _comma_delimited_to_list(gentx_file_list)
     for gentx_file in gentx_files:
         gentx_file_path = Path(gentx_file)
-        copyfile(
-            gentx_file_path,
-            os.path.join(
-                network_dir, "config", "gentx", os.path.basename(gentx_file_path)
-            ),
-        )
+        copyfile(gentx_file_path, os.path.join(network_dir, "config", "gentx", os.path.basename(gentx_file_path)))
 def _remove_persistent_peers(network_dir: Path):
@@ -94,13 +86,8 @@ def _remove_persistent_peers(network_dir: Path):
     with open(config_file_path, "r") as input_file:
         config_file_content = input_file.read()
     persistent_peers_pattern = '^persistent_peers = "(.+?)"'
-    replace_with = 'persistent_peers = ""'
-    config_file_content = re.sub(
-        persistent_peers_pattern,
-        replace_with,
-        config_file_content,
-        flags=re.MULTILINE,
-    )
+    replace_with = "persistent_peers = \"\""
+    config_file_content = re.sub(persistent_peers_pattern, replace_with, config_file_content, flags=re.MULTILINE)
     with open(config_file_path, "w") as output_file:
         output_file.write(config_file_content)
@@ -113,13 +100,8 @@ def _insert_persistent_peers(config_dir: Path, new_persistent_peers: str):
     with open(config_file_path, "r") as input_file:
         config_file_content = input_file.read()
     persistent_peers_pattern = r'^persistent_peers = ""'
-    replace_with = f'persistent_peers = "{new_persistent_peers}"'
-    config_file_content = re.sub(
-        persistent_peers_pattern,
-        replace_with,
-        config_file_content,
-        flags=re.MULTILINE,
-    )
+    replace_with = f"persistent_peers = \"{new_persistent_peers}\""
+    config_file_content = re.sub(persistent_peers_pattern, replace_with, config_file_content, flags=re.MULTILINE)
     with open(config_file_path, "w") as output_file:
         output_file.write(config_file_content)
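The `_insert_persistent_peers` rewrite above relies on `re.MULTILINE` so that `^` anchors at the start of each line of the TOML file rather than only at the start of the string. A self-contained sketch with made-up config content and peer address:

```python
import re

# Standalone sketch of the config rewrite above: a MULTILINE-anchored
# re.sub swaps in the persistent_peers value on its own line.
config_file_content = 'log_level = "info"\npersistent_peers = ""\n'
new_persistent_peers = "abc123@10.0.0.1:26656"

persistent_peers_pattern = r'^persistent_peers = ""'
replace_with = f'persistent_peers = "{new_persistent_peers}"'
config_file_content = re.sub(
    persistent_peers_pattern, replace_with, config_file_content, flags=re.MULTILINE
)
```

Without `re.MULTILINE` the pattern would only match if `persistent_peers` were the very first line of the file; with it, the surrounding keys are left untouched.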
@@ -131,11 +113,9 @@ def _enable_cors(config_dir: Path):
         sys.exit(1)
     with open(config_file_path, "r") as input_file:
         config_file_content = input_file.read()
-    cors_pattern = r"^cors_allowed_origins = \[]"
+    cors_pattern = r'^cors_allowed_origins = \[]'
     replace_with = 'cors_allowed_origins = ["*"]'
-    config_file_content = re.sub(
-        cors_pattern, replace_with, config_file_content, flags=re.MULTILINE
-    )
+    config_file_content = re.sub(cors_pattern, replace_with, config_file_content, flags=re.MULTILINE)
     with open(config_file_path, "w") as output_file:
         output_file.write(config_file_content)
     app_file_path = config_dir.joinpath("app.toml")
@@ -144,11 +124,9 @@ def _enable_cors(config_dir: Path):
         sys.exit(1)
     with open(app_file_path, "r") as input_file:
         app_file_content = input_file.read()
-    cors_pattern = r"^enabled-unsafe-cors = false"
+    cors_pattern = r'^enabled-unsafe-cors = false'
     replace_with = "enabled-unsafe-cors = true"
-    app_file_content = re.sub(
-        cors_pattern, replace_with, app_file_content, flags=re.MULTILINE
-    )
+    app_file_content = re.sub(cors_pattern, replace_with, app_file_content, flags=re.MULTILINE)
     with open(app_file_path, "w") as output_file:
         output_file.write(app_file_content)
@@ -163,9 +141,7 @@ def _set_listen_address(config_dir: Path):
     existing_pattern = r'^laddr = "tcp://127.0.0.1:26657"'
     replace_with = 'laddr = "tcp://0.0.0.0:26657"'
     print(f"Replacing in: {config_file_path}")
-    config_file_content = re.sub(
-        existing_pattern, replace_with, config_file_content, flags=re.MULTILINE
-    )
+    config_file_content = re.sub(existing_pattern, replace_with, config_file_content, flags=re.MULTILINE)
     with open(config_file_path, "w") as output_file:
         output_file.write(config_file_content)
     app_file_path = config_dir.joinpath("app.toml")
@@ -176,14 +152,10 @@ def _set_listen_address(config_dir: Path):
         app_file_content = input_file.read()
     existing_pattern1 = r'^address = "tcp://localhost:1317"'
     replace_with1 = 'address = "tcp://0.0.0.0:1317"'
-    app_file_content = re.sub(
-        existing_pattern1, replace_with1, app_file_content, flags=re.MULTILINE
-    )
+    app_file_content = re.sub(existing_pattern1, replace_with1, app_file_content, flags=re.MULTILINE)
     existing_pattern2 = r'^address = "localhost:9090"'
     replace_with2 = 'address = "0.0.0.0:9090"'
-    app_file_content = re.sub(
-        existing_pattern2, replace_with2, app_file_content, flags=re.MULTILINE
-    )
+    app_file_content = re.sub(existing_pattern2, replace_with2, app_file_content, flags=re.MULTILINE)
     with open(app_file_path, "w") as output_file:
         output_file.write(app_file_content)
@@ -192,10 +164,7 @@ def _phase_from_params(parameters):
     phase = SetupPhase.ILLEGAL
     if parameters.initialize_network:
         if parameters.join_network or parameters.create_network:
-            print(
-                "Can't supply --join-network or --create-network "
-                "with --initialize-network"
-            )
+            print("Can't supply --join-network or --create-network with --initialize-network")
             sys.exit(1)
         if not parameters.chain_id:
             print("--chain-id is required")
@@ -207,36 +176,24 @@ def _phase_from_params(parameters):
         phase = SetupPhase.INITIALIZE
     elif parameters.join_network:
         if parameters.initialize_network or parameters.create_network:
-            print(
-                "Can't supply --initialize-network or --create-network "
-                "with --join-network"
-            )
+            print("Can't supply --initialize-network or --create-network with --join-network")
             sys.exit(1)
         phase = SetupPhase.JOIN
     elif parameters.create_network:
         if parameters.initialize_network or parameters.join_network:
-            print(
-                "Can't supply --initialize-network or --join-network "
-                "with --create-network"
-            )
+            print("Can't supply --initialize-network or --join-network with --create-network")
             sys.exit(1)
         phase = SetupPhase.CREATE
     elif parameters.connect_network:
         if parameters.initialize_network or parameters.join_network:
-            print(
-                "Can't supply --initialize-network or --join-network "
-                "with --connect-network"
-            )
+            print("Can't supply --initialize-network or --join-network with --connect-network")
             sys.exit(1)
         phase = SetupPhase.CONNECT
     return phase
-def setup(
-    command_context: DeployCommandContext,
-    parameters: LaconicStackSetupCommand,
-    extra_args,
-):
+def setup(command_context: DeployCommandContext, parameters: LaconicStackSetupCommand, extra_args):
     options = opts.o
     currency = "alnt"  # Does this need to be a parameter?
@@ -248,9 +205,12 @@ def setup(
     network_dir = Path(parameters.network_dir).absolute()
     laconicd_home_path_in_container = "/laconicd-home"
-    mounts = [VolumeMapping(str(network_dir), laconicd_home_path_in_container)]
+    mounts = [
+        VolumeMapping(network_dir, laconicd_home_path_in_container)
+    ]
     if phase == SetupPhase.INITIALIZE:
         # We want to create the directory so if it exists that's an error
         if os.path.exists(network_dir):
             print(f"Error: network directory {network_dir} already exists")
@@ -260,18 +220,13 @@ def setup(
         output, status = run_container_command(
             command_context,
-            "laconicd",
-            f"laconicd init {parameters.node_moniker} "
-            f"--home {laconicd_home_path_in_container} "
-            f"--chain-id {parameters.chain_id} --default-denom {currency}",
-            mounts,
-        )
+            "laconicd", f"laconicd init {parameters.node_moniker} --home {laconicd_home_path_in_container}\
+            --chain-id {parameters.chain_id} --default-denom {currency}", mounts)
         if options.debug:
             print(f"Command output: {output}")
     elif phase == SetupPhase.JOIN:
-        # In the join phase (alternative to connect) we are participating in a
-        # genesis ceremony for the chain
+        # In the join phase (alternative to connect) we are participating in a genesis ceremony for the chain
         if not os.path.exists(network_dir):
             print(f"Error: network directory {network_dir} doesn't exist")
             sys.exit(1)
@@ -279,72 +234,52 @@ def setup(
         chain_id = _get_chain_id_from_config(network_dir)
         output1, status1 = run_container_command(
-            command_context,
-            "laconicd",
-            f"laconicd keys add {parameters.key_name} "
-            f"--home {laconicd_home_path_in_container} --keyring-backend test",
-            mounts,
-        )
+            command_context, "laconicd", f"laconicd keys add {parameters.key_name} --home {laconicd_home_path_in_container}\
+            --keyring-backend test", mounts)
         if options.debug:
             print(f"Command output: {output1}")
         output2, status2 = run_container_command(
             command_context,
             "laconicd",
-            f"laconicd genesis add-genesis-account {parameters.key_name} "
-            f"12900000000000000000000{currency} "
-            f"--home {laconicd_home_path_in_container} --keyring-backend test",
-            mounts,
-        )
+            f"laconicd genesis add-genesis-account {parameters.key_name} 12900000000000000000000{currency}\
+            --home {laconicd_home_path_in_container} --keyring-backend test",
+            mounts)
         if options.debug:
             print(f"Command output: {output2}")
         output3, status3 = run_container_command(
             command_context,
             "laconicd",
-            f"laconicd genesis gentx {parameters.key_name} "
-            f"90000000000{currency} --home {laconicd_home_path_in_container} "
-            f"--chain-id {chain_id} --keyring-backend test",
-            mounts,
-        )
+            f"laconicd genesis gentx {parameters.key_name} 90000000000{currency} --home {laconicd_home_path_in_container}\
+            --chain-id {chain_id} --keyring-backend test",
+            mounts)
         if options.debug:
             print(f"Command output: {output3}")
         output4, status4 = run_container_command(
             command_context,
             "laconicd",
-            f"laconicd keys show {parameters.key_name} -a "
-            f"--home {laconicd_home_path_in_container} --keyring-backend test",
-            mounts,
-        )
+            f"laconicd keys show {parameters.key_name} -a --home {laconicd_home_path_in_container} --keyring-backend test",
+            mounts)
         print(f"Node account address: {output4}")
     elif phase == SetupPhase.CONNECT:
-        # In the connect phase (named to not conflict with join) we are
-        # making a node that syncs a chain with existing genesis.json
-        # but not with validator role. We need this kind of node in order to
-        # bootstrap it into a validator after it syncs
+        # In the connect phase (named to not conflict with join) we are making a node that syncs a chain with existing genesis.json
+        # but not with validator role. We need this kind of node in order to bootstrap it into a validator after it syncs
        output1, status1 = run_container_command(
-            command_context,
-            "laconicd",
-            f"laconicd keys add {parameters.key_name} "
-            f"--home {laconicd_home_path_in_container} --keyring-backend test",
-            mounts,
-        )
+            command_context, "laconicd", f"laconicd keys add {parameters.key_name} --home {laconicd_home_path_in_container}\
+            --keyring-backend test", mounts)
         if options.debug:
             print(f"Command output: {output1}")
         output2, status2 = run_container_command(
             command_context,
             "laconicd",
-            f"laconicd keys show {parameters.key_name} -a "
-            f"--home {laconicd_home_path_in_container} --keyring-backend test",
-            mounts,
-        )
+            f"laconicd keys show {parameters.key_name} -a --home {laconicd_home_path_in_container} --keyring-backend test",
+            mounts)
         print(f"Node account address: {output2}")
         output3, status3 = run_container_command(
             command_context,
             "laconicd",
-            f"laconicd cometbft show-validator "
-            f"--home {laconicd_home_path_in_container}",
-            mounts,
-        )
+            f"laconicd cometbft show-validator --home {laconicd_home_path_in_container}",
+            mounts)
         print(f"Node validator address: {output3}")
     elif phase == SetupPhase.CREATE:
@@ -352,74 +287,42 @@ def setup(
             print(f"Error: network directory {network_dir} doesn't exist")
             sys.exit(1)
-        # In the CREATE phase, we are either a "coordinator" node,
-        # generating the genesis.json file ourselves
-        # OR we are a "not-coordinator" node, consuming a genesis file from
-        # the coordinator node.
+        # In the CREATE phase, we are either a "coordinator" node, generating the genesis.json file ourselves
+        # OR we are a "not-coordinator" node, consuming a genesis file we got from the coordinator node.
         if parameters.genesis_file:
             # We got the genesis file from elsewhere
             # Copy it into our network dir
             genesis_file_path = Path(parameters.genesis_file)
             if not os.path.exists(genesis_file_path):
-                print(
-                    f"Error: supplied genesis file: {parameters.genesis_file} "
-                    "does not exist."
-                )
+                print(f"Error: supplied genesis file: {parameters.genesis_file} does not exist.")
                 sys.exit(1)
-            copyfile(
-                genesis_file_path,
-                os.path.join(
-                    network_dir, "config", os.path.basename(genesis_file_path)
-                ),
-            )
+            copyfile(genesis_file_path, os.path.join(network_dir, "config", os.path.basename(genesis_file_path)))
         else:
             # We're generating the genesis file
             # First look in the supplied gentx files for the other nodes' keys
-            other_node_keys = _get_node_keys_from_gentx_files(
-                parameters.gentx_address_list
-            )
+            other_node_keys = _get_node_keys_from_gentx_files(parameters.gentx_address_list)
             # Add those keys to our genesis, with balances we determine here (why?)
-            outputk = None
             for other_node_key in other_node_keys:
                 outputk, statusk = run_container_command(
-                    command_context,
-                    "laconicd",
-                    f"laconicd genesis add-genesis-account {other_node_key} "
-                    f"12900000000000000000000{currency} "
-                    f"--home {laconicd_home_path_in_container} "
-                    "--keyring-backend test",
-                    mounts,
-                )
-            if options.debug and outputk is not None:
+                    command_context, "laconicd", f"laconicd genesis add-genesis-account {other_node_key} \
+                    12900000000000000000000{currency}\
+                    --home {laconicd_home_path_in_container} --keyring-backend test", mounts)
+            if options.debug:
                 print(f"Command output: {outputk}")
             # Copy the gentx json files into our network dir
             _copy_gentx_files(network_dir, parameters.gentx_file_list)
             # Now we can run collect-gentxs
             output1, status1 = run_container_command(
-                command_context,
-                "laconicd",
-                f"laconicd genesis collect-gentxs "
-                f"--home {laconicd_home_path_in_container}",
-                mounts,
-            )
+                command_context, "laconicd", f"laconicd genesis collect-gentxs --home {laconicd_home_path_in_container}", mounts)
             if options.debug:
                 print(f"Command output: {output1}")
-            genesis_path = os.path.join(network_dir, "config", "genesis.json")
-            print(
-                f"Generated genesis file, please copy to other nodes "
-                f"as required: {genesis_path}"
-            )
-            # Last thing, collect-gentxs puts a likely bogus set of persistent_peers
-            # in config.toml so we remove that now
+            print(f"Generated genesis file, please copy to other nodes as required: \
+            {os.path.join(network_dir, 'config', 'genesis.json')}")
+            # Last thing, collect-gentxs puts a likely bogus set of persistent_peers in config.toml so we remove that now
             _remove_persistent_peers(network_dir)
         # In both cases we validate the genesis file now
         output2, status1 = run_container_command(
-            command_context,
-            "laconicd",
-            f"laconicd genesis validate-genesis "
-            f"--home {laconicd_home_path_in_container}",
-            mounts,
-        )
+            command_context, "laconicd", f"laconicd genesis validate-genesis --home {laconicd_home_path_in_container}", mounts)
         print(f"validate-genesis result: {output2}")
     else:
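The `_remove_persistent_peers` helper called in the hunk above is not part of this diff; per the comment, it clears the likely-bogus `persistent_peers` value that `collect-gentxs` leaves in `config.toml`. A hypothetical sketch of such a helper, assuming the standard CometBFT `config/config.toml` layout:

```python
import re
from pathlib import Path


def remove_persistent_peers(network_dir: str) -> None:
    # Hypothetical helper (name and behavior assumed, not taken from this
    # diff): reset the persistent_peers entry that
    # `laconicd genesis collect-gentxs` writes into config.toml.
    config_path = Path(network_dir) / "config" / "config.toml"
    text = config_path.read_text()
    text = re.sub(r'(?m)^persistent_peers = ".*"$',
                  'persistent_peers = ""', text)
    config_path.write_text(text)
```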
@@ -438,23 +341,15 @@ def create(deployment_context: DeploymentContext, extra_args):
         sys.exit(1)
     config_dir_path = network_dir_path.joinpath("config")
     if not (config_dir_path.exists() and config_dir_path.is_dir()):
-        print(
-            f"Error: supplied network directory does not contain "
-            f"a config directory: {config_dir_path}"
-        )
+        print(f"Error: supplied network directory does not contain a config directory: {config_dir_path}")
         sys.exit(1)
     data_dir_path = network_dir_path.joinpath("data")
     if not (data_dir_path.exists() and data_dir_path.is_dir()):
-        print(
-            f"Error: supplied network directory does not contain "
-            f"a data directory: {data_dir_path}"
-        )
+        print(f"Error: supplied network directory does not contain a data directory: {data_dir_path}")
         sys.exit(1)
     # Copy the network directory contents into our deployment
     # TODO: change this to work with non local paths
-    deployment_config_dir = deployment_context.deployment_dir.joinpath(
-        "data", "laconicd-config"
-    )
+    deployment_config_dir = deployment_context.deployment_dir.joinpath("data", "laconicd-config")
     copytree(config_dir_path, deployment_config_dir, dirs_exist_ok=True)
     # If supplied, add the initial persistent peers to the config file
     if extra_args[1]:
@@ -465,9 +360,7 @@ def create(deployment_context: DeploymentContext, extra_args):
     _set_listen_address(deployment_config_dir)
     # Copy the data directory contents into our deployment
     # TODO: change this to work with non local paths
-    deployment_data_dir = deployment_context.deployment_dir.joinpath(
-        "data", "laconicd-data"
-    )
+    deployment_data_dir = deployment_context.deployment_dir.joinpath("data", "laconicd-data")
     copytree(data_dir_path, deployment_data_dir, dirs_exist_ok=True)
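A general note on the reformatting throughout this file: the new style collapses multi-line f-strings into backslash-continued literals. A backslash continuation inside a string literal removes the newline but keeps the continuation line's leading indentation in the resulting string, so commands built this way contain runs of spaces; shell word-splitting usually collapses these, but they are visible in logs. A quick sketch contrasting the two styles (hypothetical command):

```python
home = "/laconicd-home"

# Backslash continuation inside an f-string: the newline is removed, but
# the continuation line's leading spaces become part of the string.
cmd = f"laconicd keys add mykey --home {home}\
    --keyring-backend test"

# Implicit concatenation of adjacent literals keeps single spaces.
cmd2 = (
    f"laconicd keys add mykey "
    f"--home {home} --keyring-backend test"
)
```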

View File

@@ -5,4 +5,4 @@ repos:
 containers:
   - cerc/watcher-mobymask
 pods:
   - watcher-mobymask

View File

@@ -180,7 +180,7 @@ Set the following env variables in the deployment env config file (`monitoring-d
 # (Optional, default: http://localhost:3000)
 GF_SERVER_ROOT_URL=
 # RPC endpoint used by graph-node for upstream head metric
 # (Optional, default: https://mainnet.infura.io/v3)
 GRAPH_NODE_RPC_ENDPOINT=

View File

@@ -1,3 +1,3 @@
 # Test Database Stack
 A stack with a database for test/demo purposes.

View File

@@ -1,3 +1,3 @@
 # Test Stack
 A stack for test/demo purposes.

Some files were not shown because too many files have changed in this diff.