Compare commits

..

2 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
|  | 077cfe12a6 | Use alnt token in bond creation example | 2024-08-14 17:37:08 +05:30 |
| IshaVenikar | 8f750bf329 | Add instructions to re-publish records in stage1 | 2024-08-14 17:13:50 +05:30 |
45 changed files with 77 additions and 835741 deletions

.gitignore vendored

@@ -1,2 +0,0 @@
*-deployment
*-spec.yml


@@ -10,26 +10,8 @@ Stacks to run a node for laconic testnet
- [Update deployments after code changes](./ops/update-deployments.md)
- [Halt stage0 and start stage1](./ops/stage0-to-stage1.md)
- [Halt stage1 and start stage2](./ops/stage1-to-stage2.md)
- [Create deployments from scratch (for reference only)](./ops/deployments-from-scratch.md)
- [Deploy and transfer new tokens for nitro operations](./ops/nitro-token-ops.md)
## Join LORO testnet
Follow steps in [testnet-onboarding-validator.md](./testnet-onboarding-validator.md) to onboard your participant and join as a validator on the LORO testnet
## SAPO testnet
Follow steps in [Upgrade to SAPO testnet](./testnet-onboarding-validator.md#upgrade-to-sapo-testnet) for upgrading your LORO testnet node to SAPO testnet
## Setup a Service Provider
Follow steps in [service-provider.md](./service-provider.md) to setup / update your service provider
### Support Custom Domains
Follow steps to [Update service provider for custom domains](./service-provider.md#update-service-provider-for-custom-domains)
## Run testnet Nitro Node
Follow steps in [testnet-nitro-node.md](./testnet-nitro-node.md) to run your Nitro node for the testnet


@@ -1,40 +0,0 @@
[server]
host = "0.0.0.0"
port = 8000
gqlPath = "/graphql"
[server.session]
secret = "<redacted>"
# Frontend webapp URL origin
appOriginUrl = "https://deploy.apps.vaasl.io"
# Set to true if server running behind proxy
trustProxy = true
# Backend URL hostname
domain = "deploy-backend.apps.vaasl.io"
[database]
dbPath = "/data/db/deploy-backend"
[gitHub]
webhookUrl = "https://deploy-backend.apps.vaasl.io"
[gitHub.oAuth]
clientId = "<redacted>"
clientSecret = "<redacted>"
[registryConfig]
fetchDeploymentRecordDelay = 5000
checkAuctionStatusDelay = 5000
restEndpoint = "https://laconicd-sapo.laconic.com"
gqlEndpoint = "https://laconicd-sapo.laconic.com/api"
chainId = "laconic-testnet-2"
privateKey = "<redacted>"
bondId = "<redacted>"
authority = "vaasl"
[registryConfig.fee]
gasPrice = "0.001alnt"
[auction]
commitFee = "1000"
commitsDuration = "60s"
revealFee = "1000"
revealsDuration = "60s"
denom = "alnt"

File diff suppressed because it is too large


@@ -1,83 +0,0 @@
# Nitro Token Ops
## Deploy and transfer custom tokens
### Setup
* Go to the directory where `nitro-contracts-deployment` is present:
```bash
cd /srv/bridge
```
### Deploy new token
* To deploy another token:
```bash
# These values can be changed to deploy another token with different name and symbol
export TOKEN_NAME="TestToken2"
export TOKEN_SYMBOL="TST2"
# Note: Token supply denotes the actual number of tokens and not the supply in Wei (see the example at the end of this section)
export INITIAL_TOKEN_SUPPLY="129600"
laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "TOKEN_NAME=$TOKEN_NAME TOKEN_SYMBOL=$TOKEN_SYMBOL INITIAL_TOKEN_SUPPLY=$INITIAL_TOKEN_SUPPLY /app/deploy-l1-tokens.sh"
```
* Recreate `assets.json` to include newly deployed token address:
```bash
export GETH_CHAIN_ID="1212"
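# Keep only the deployed token entries: the filter below drops the core Nitro protocol
# contracts (ConsensusApp, NitroAdjudicator, VirtualPaymentApp) from nitro-addresses.json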
laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "jq --arg chainId \"$GETH_CHAIN_ID\" '{
(\$chainId): [
{
\"name\": .[\$chainId][0].name,
\"chainId\": .[\$chainId][0].chainId,
\"contracts\": (
.[\$chainId][0].contracts
| to_entries
| map(select(.key | in({\"ConsensusApp\":1, \"NitroAdjudicator\":1, \"VirtualPaymentApp\":1}) | not))
| from_entries
)
}
]
}' /app/deployment/nitro-addresses.json" > assets.json
```
* The required config file should be generated at `/srv/bridge/assets.json`
* Check in the generated file at location `ops/stage2/assets.json` within this repository
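* To illustrate the supply note above: `INITIAL_TOKEN_SUPPLY` is a count of whole tokens; assuming the deployed ERC-20 uses the standard 18 decimals (an assumption, check the token contract), the corresponding on-chain base-unit ("Wei") supply is larger by a factor of 10^18:
```bash
# Assuming 18 decimals: base-unit supply = whole tokens * 10^18
python3 -c 'print(129600 * 10**18)'
# 129600000000000000000000
```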
### Transfer deployed tokens to given address
* To transfer a token to an account:
```bash
export GETH_CHAIN_ID=1212
export TOKEN_NAME="<name-of-token-to-be-transferred>"
export ASSET_ADDRESS=$(laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "jq -r '.\"$GETH_CHAIN_ID\"[0].contracts.$TOKEN_NAME.address' /app/deployment/nitro-addresses.json")
export ACCOUNT="<target-account-address>"
laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "cd packages/nitro-protocol && yarn hardhat transfer --contract $ASSET_ADDRESS --to $ACCOUNT --amount 100 --network geth"
```
## Transfer ETH
* Go to the directory where `fixturenet-eth-deployment` is present:
```bash
cd /srv/fixturenet-eth
```
* To transfer ETH to an account:
```bash
export FUNDED_ADDRESS="0xe6CE22afe802CAf5fF7d3845cec8c736ecc8d61F"
export FUNDED_PK="888814df89c4358d7ddb3fa4b0213e7331239a80e1f013eaa7b2deca2a41a218"
export TO_ADDRESS="<target-account-address>"
laconic-so deployment --dir fixturenet-eth-deployment exec foundry "cast send $TO_ADDRESS --value 1ether --from $FUNDED_ADDRESS --private-key $FUNDED_PK"
```


@@ -1,465 +0,0 @@
# Service Provider deployments from scratch
## container-registry
* Reference: <https://github.com/LaconicNetwork/loro-testnet/blob/main/docs/service-provider-setup.md#deploy-docker-image-container-registry>
* Target dir: `/srv/service-provider/container-registry`
* Cleanup an existing deployment if required:
```bash
cd /srv/service-provider/container-registry
# Stop the deployment
laconic-so deployment --dir container-registry stop --delete-volumes
# Remove the deployment dir
sudo rm -rf container-registry
# Remove the existing spec file
rm container-registry.spec
```
### Setup
- Generate the spec file for the container-registry stack
```bash
laconic-so --stack container-registry deploy init --output container-registry.spec
```
- Modify the `container-registry.spec` as shown below
```
stack: container-registry
deploy-to: k8s
kube-config: /home/dev/.kube/config-vs-narwhal.yaml
network:
ports:
registry:
- '5000'
http-proxy:
- host-name: container-registry.apps.vaasl.io
routes:
- path: '/'
proxy-to: registry:5000
volumes:
registry-data:
configmaps:
config: ./configmaps/config
```
- Create the deployment directory for the `container-registry` stack
```bash
laconic-so --stack container-registry deploy create --deployment-dir container-registry --spec-file container-registry.spec
```
- Modify file `container-registry/kubeconfig.yml` if required
```
apiVersion: v1
...
contexts:
- context:
cluster: ***
user: ***
name: default
...
```
NOTE: `context.name` must be default to use with SO
- Base64 encode the container registry credentials
NOTE: Use actual credentials for container registry (credentials set in `container-registry/credentials.txt`)
```bash
echo -n "so-reg-user:pXDwO5zLU7M88x3aA" | base64 -w0
# Output: c28tcmVnLXVzZXI6cFhEd081ekxVN004OHgzYUE=
```
- Install `apache2-utils` for next step
```bash
sudo apt install apache2-utils
```
- Encrypt the container registry credentials to create an `htpasswd` file
```bash
htpasswd -bB -c container-registry/configmaps/config/htpasswd so-reg-user pXDwO5zLU7M88x3aA
```
Resulting file should look like this
```
cat container-registry/configmaps/config/htpasswd
# so-reg-user:$2y$05$6EdxIwwDNlJfNhhQxZRr4eNd.aYrdmbBjAdw422w0u2j3TihQXgd2
```
- Using the credentials from the previous steps, create a `container-registry/my_password.json` file
```json
{
"auths": {
"container-registry.apps.vaasl.io": {
"username": "so-reg-user",
"password": "$2y$05$6EdxIwwDNlJfNhhQxZRr4eNd.aYrdmbBjAdw422w0u2j3TihQXgd2",
"auth": "c28tcmVnLXVzZXI6cFhEd081ekxVN004OHgzYUE="
}
}
}
```
- Configure the file `container-registry/config.env` as follows
```env
REGISTRY_AUTH=htpasswd
REGISTRY_AUTH_HTPASSWD_REALM="VSL Service Provider Image Registry"
REGISTRY_AUTH_HTPASSWD_PATH="/config/htpasswd"
REGISTRY_HTTP_SECRET='$2y$05$6EdxIwwDNlJfNhhQxZRr4eNd.aYrdmbBjAdw422w0u2j3TihQXgd2'
```
- Load context for k8s
```bash
kubie ctx vs-narwhal
```
- Add the container registry credentials as a secret available to the cluster
```bash
kubectl create secret generic laconic-registry --from-file=.dockerconfigjson=container-registry/my_password.json --type=kubernetes.io/dockerconfigjson
```
### Run
- Deploy the container registry
```bash
laconic-so deployment --dir container-registry start
```
- Check the logs
```bash
laconic-so deployment --dir container-registry logs
```
- Check status and await successful deployment:
```bash
laconic-so deployment --dir container-registry status
```
- Confirm deployment by logging in:
```
docker login container-registry.apps.vaasl.io --username so-reg-user --password pXDwO5zLU7M88x3aA
```
- Set ingress annotations
- Set the `cluster-id` found in `container-registry/deployment.yml` and then run the following commands:
```
export CLUSTER_ID=<cluster-id>
# Example
# export CLUSTER_ID=laconic-26cc70be8a3db3f4
kubectl annotate ingress $CLUSTER_ID-ingress nginx.ingress.kubernetes.io/proxy-body-size=0
kubectl annotate ingress $CLUSTER_ID-ingress nginx.ingress.kubernetes.io/proxy-read-timeout=600
kubectl annotate ingress $CLUSTER_ID-ingress nginx.ingress.kubernetes.io/proxy-send-timeout=600
```
## webapp-deployer
### Backend
* Reference: <https://github.com/LaconicNetwork/loro-testnet/blob/main/docs/service-provider-setup.md#deploy-backend>
* Target dir: `/srv/service-provider/webapp-deployer`
* Cleanup an existing deployment if required:
```bash
cd /srv/service-provider/webapp-deployer
# Stop the deployment
laconic-so deployment --dir webapp-deployer stop
# Remove the deployment dir
sudo rm -rf webapp-deployer
# Remove the existing spec file
rm webapp-deployer.spec
```
#### Setup
- Initialize a spec file for the deployer backend.
```bash
laconic-so --stack webapp-deployer-backend setup-repositories
laconic-so --stack webapp-deployer-backend build-containers
laconic-so --stack webapp-deployer-backend deploy init --output webapp-deployer.spec
```
- Modify the contents of `webapp-deployer.spec`:
```
stack: webapp-deployer-backend
deploy-to: k8s
kube-config: /home/dev/.kube/config-vs-narwhal.yaml
image-registry: container-registry.apps.vaasl.io/laconic-registry
network:
ports:
server:
- '9555'
http-proxy:
- host-name: webapp-deployer-api.apps.vaasl.io
routes:
- path: '/'
proxy-to: server:9555
volumes:
srv:
configmaps:
config: ./data/config
annotations:
container.apparmor.security.beta.kubernetes.io/{name}: unconfined
labels:
container.kubeaudit.io/{name}.allow-disabled-apparmor: "podman"
security:
privileged: true
resources:
containers:
reservations:
cpus: 3
memory: 8G
limits:
cpus: 7
memory: 16G
volumes:
reservations:
storage: 200G
```
- Create the deployment directory from the spec file.
```
laconic-so --stack webapp-deployer-backend deploy create --deployment-dir webapp-deployer --spec-file webapp-deployer.spec
```
- Modify file `webapp-deployer/kubeconfig.yml` if required
```
apiVersion: v1
...
contexts:
- context:
cluster: ***
user: ***
name: default
...
```
NOTE: `context.name` must be default to use with SO
- Copy `webapp-deployer/kubeconfig.yml` from the k8s cluster creation step to `webapp-deployer/data/config/kube.yml`
```bash
cp webapp-deployer/kubeconfig.yml webapp-deployer/data/config/kube.yml
```
- Create `webapp-deployer/data/config/laconic.yml`, it should look like this:
```
services:
registry:
# Using public endpoint does not work inside machine where laconicd chain is deployed
rpcEndpoint: 'http://host.docker.internal:36657'
gqlEndpoint: 'http://host.docker.internal:3473/api'
# Set user key of account with balance and bond owned by the user
userKey:
bondId:
chainId: laconic-testnet-2
gasPrice: 1alnt
```
NOTE: Modify the user key and bond ID according to your configuration
* Publish a `WebappDeployer` record for the deployer backend by following the steps below:
* Setup GPG keys by following [these steps to create and export a key](https://git.vdb.to/cerc-io/webapp-deployment-status-api#keys)
```
cd webapp-deployer
# Create a key
gpg --batch --passphrase "SECRET" --quick-generate-key webapp-deployer-api.apps.vaasl.io default default never
# Export the public key
gpg --export webapp-deployer-api.apps.vaasl.io > webapp-deployer-api.apps.vaasl.io.pgp.pub
# Export the private key
gpg --export-secret-keys webapp-deployer-api.apps.vaasl.io > webapp-deployer-api.apps.vaasl.io.pgp.key
cd -
```
NOTE: Use "SECRET" for passphrase prompt
* Copy the GPG pub key file generated above to `webapp-deployer/data/config` directory. This ensures the Docker container has access to the key during the publish process
```bash
cp webapp-deployer/webapp-deployer-api.apps.vaasl.io.pgp.pub webapp-deployer/data/config
```
* Publish the webapp deployer record using the `publish-deployer-to-registry` command
```
docker run -i -t \
--add-host=host.docker.internal:host-gateway \
-v /srv/service-provider/webapp-deployer/data/config:/config \
cerc/webapp-deployer-backend:local laconic-so publish-deployer-to-registry \
--laconic-config /config/laconic.yml \
--api-url https://webapp-deployer-api.apps.vaasl.io \
--public-key-file /config/webapp-deployer-api.apps.vaasl.io.pgp.pub \
--lrn lrn://vaasl-provider/deployers/webapp-deployer-api.apps.vaasl.io \
--min-required-payment 10000
```
- Modify the contents of `webapp-deployer/config.env`:
```
DEPLOYMENT_DNS_SUFFIX="apps.vaasl.io"
# this should match the name authority reserved above
DEPLOYMENT_RECORD_NAMESPACE="vaasl-provider"
# url of the deployed docker image registry
IMAGE_REGISTRY="container-registry.apps.vaasl.io"
# credentials from the htpasswd section above in container-registry setup
IMAGE_REGISTRY_USER=
IMAGE_REGISTRY_CREDS=
# configs
CLEAN_DEPLOYMENTS=false
CLEAN_LOGS=false
CLEAN_CONTAINERS=false
SYSTEM_PRUNE=false
WEBAPP_IMAGE_PRUNE=true
CHECK_INTERVAL=10
FQDN_POLICY="allow"
# lrn of the webapp deployer
LRN="lrn://vaasl-provider/deployers/webapp-deployer-api.apps.vaasl.io"
# Path to the GPG key file inside the webapp-deployer container
OPENPGP_PRIVATE_KEY_FILE="webapp-deployer-api.apps.vaasl.io.pgp.key"
# Passphrase used when creating the GPG key
OPENPGP_PASSPHRASE="SECRET"
DEPLOYER_STATE="srv-test/deployments/autodeploy.state"
UNDEPLOYER_STATE="srv-test/deployments/autoundeploy.state"
UPLOAD_DIRECTORY="srv-test/uploads"
HANDLE_AUCTION_REQUESTS=true
AUCTION_BID_AMOUNT=10000
# Minimum payment amount required for single webapp deployment
MIN_REQUIRED_PAYMENT=10000
```
- Push the image to the container registry
```
laconic-so deployment --dir webapp-deployer push-images
```
- Modify `webapp-deployer/data/config/laconic.yml`:
```
services:
registry:
rpcEndpoint: 'https://laconicd-sapo.laconic.com/'
gqlEndpoint: 'https://laconicd-sapo.laconic.com/api'
# Set user key of account with balance and bond owned by the user
userKey:
bondId:
chainId: laconic-testnet-2
gasPrice: 1alnt
```
#### Run
- Start the deployer.
```
laconic-so deployment --dir webapp-deployer start
```
- Load context for k8s
```bash
kubie ctx vs-narwhal
```
- Copy the GPG key file to the webapp-deployer container
```bash
# Get the webapp-deployer pod id
laconic-so deployment --dir webapp-deployer ps
# Expected output
# Running containers:
# id: default/laconic-096fed46af974a47-deployment-644db859c7-snbq6, name: laconic-096fed46af974a47-deployment-644db859c7-snbq6, ports: 10.42.2.11:9555->9555
# Set pod id
export POD_ID=
# Example:
# export POD_ID=laconic-096fed46af974a47-deployment-644db859c7-snbq6
# Copy GPG key files to the pod
kubectl cp webapp-deployer/webapp-deployer-api.apps.vaasl.io.pgp.key $POD_ID:/app
kubectl cp webapp-deployer/webapp-deployer-api.apps.vaasl.io.pgp.pub $POD_ID:/app
```
- Publishing records to the registry will now trigger deployments in the backend
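A rough sketch of requesting a deployment against this deployer follows; the record fields and file name are illustrative assumptions rather than the authoritative schema, so check the webapp-deployer documentation for the exact format:
```bash
# Hypothetical example: request a deployment by publishing a record that targets
# this deployer's LRN. Field names below are assumptions, not the official schema.
cat <<EOF > deployment-request.yml
record:
  type: ApplicationDeploymentRequest
  version: 1.0.0
  name: example-app@0.1.0
  application: lrn://example-authority/applications/example-app@0.1.0
  deployer: lrn://vaasl-provider/deployers/webapp-deployer-api.apps.vaasl.io
  dns: example-app
  meta:
    note: Example deployment request
EOF
# Publish the record with the registry CLI (run inside the cli container in practice)
laconic registry record publish --filename deployment-request.yml
```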
### Frontend
* Target dir: `/srv/service-provider/webapp-ui`
* Cleanup an existing deployment if required:
```bash
cd /srv/service-provider/webapp-ui
# Stop the deployment
laconic-so deployment --dir webapp-ui stop
# Remove the deployment dir
sudo rm -rf webapp-ui
# Remove the existing spec file
rm webapp-ui.spec
```
#### Setup
* Clone and build the deployer UI
```
git clone https://git.vdb.to/cerc-io/webapp-deployment-status-ui.git ~/cerc/webapp-deployment-status-ui
laconic-so build-webapp --source-repo ~/cerc/webapp-deployment-status-ui
```
* Create a deployment
```bash
export KUBECONFIG_PATH=/home/dev/.kube/config-vs-narwhal.yaml
# NOTE: Use actual kubeconfig path
laconic-so deploy-webapp create --kube-config $KUBECONFIG_PATH --image-registry container-registry.apps.vaasl.io --deployment-dir webapp-ui --image cerc/webapp-deployment-status-ui:local --url https://webapp-deployer-ui.apps.vaasl.io --env-file ~/cerc/webapp-deployment-status-ui/.env
```
* Modify file `webapp-ui/kubeconfig.yml` if required
```yml
apiVersion: v1
...
contexts:
- context:
cluster: ***
user: ***
name: default
...
```
NOTE: `context.name` must be default to use with SO
- Push the image to the container registry.
```
laconic-so deployment --dir webapp-ui push-images
```
- Modify `webapp-ui/config.env` like [this Pull Request](https://git.vdb.to/cerc-io/webapp-deployment-status-ui/pulls/6) but with your host details.
#### Run
- Start the deployer UI
```bash
laconic-so deployment --dir webapp-ui start
```
- Wait a moment, then go to https://webapp-deployer-ui.apps.vaasl.io for the status and logs of each deployment


@@ -22,6 +22,12 @@ Once all the participants have completed their onboarding, stage0 laconicd chain
laconic-so deployment --dir stage0-deployment logs laconicd -f --tail 30
```
* List the participants on stage0:
```bash
laconic-so deployment --dir stage0-deployment exec laconicd "laconicd query onboarding list"
```
* Stop the stage0 deployment:
```bash
@@ -30,31 +36,67 @@ Once all the participants have completed their onboarding, stage0 laconicd chain
## Start stage1
* Rebuild laconicd container with `>=v0.1.7` to enable `slashing` module:
* Use the scripts in fixturenet-laconicd stack to generate genesis file for stage1 using onboarding participants from stage0 chain with token allocations:
```bash
# laconicd source
cd ~/cerc/laconicd
# Pull latest changes
git pull
# Confirm the latest commit hash
git log
# Rebuild the containers
cd /srv/laconicd
laconic-so --stack ~/cerc/fixturenet-laconicd-stack/stack-orchestrator/stacks/fixturenet-laconicd build-containers --force-rebuild
# Set current working dir path in a variable
DEPLOYMENTS_DIR=$(pwd)
cd ~/cerc/fixturenet-laconicd-stack/stack-orchestrator/stacks/fixturenet-laconicd
# Generate the genesis file
# Participant allocation: 1000000000000 (10^12)
# Validator allocation: 2000000000000000 (10^15)
./scripts/generate-stage1-genesis-using-allocations.sh $DEPLOYMENTS_DIR/stage0-deployment 1000000000000 2000000000000000
# Expected output:
# Genesis file for stage1 written to output/genesis.json
# Remove the temporary data directory
sudo rm -rf stage1-genesis
# Go back to the directory where deployments are created
cd $DEPLOYMENTS_DIR
```
* Fetch the generated genesis file with stage1 participants and token allocations:
* Copy over the generated genesis file (`.json`) containing the onboarding module state with funded participants to data directory in stage1 deployment (`stage1-deployment/data/genesis-config`):
```bash
# Place in stage1 deployment directory
wget -O /srv/laconicd/stage1-deployment/data/genesis-config/genesis.json https://git.vdb.to/cerc-io/testnet-laconicd-stack/raw/branch/main/ops/stage1/genesis-accounts.json
cd /srv/laconicd
cp ~/cerc/fixturenet-laconicd-stack/stack-orchestrator/stacks/fixturenet-laconicd/output/genesis.json stage1-deployment/data/genesis-config/genesis.json
```
* Republish records in stage1:
* Create bond
```bash
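# Export alice's private key (hex) from the stage1 node; this account funds and owns the new bond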
ALICE_PK=$(echo y | laconic-so deployment --dir /srv/laconicd/stage1-deployment exec laconicd "laconicd keys export alice --unarmored-hex --unsafe")
laconic-so deployment --dir laconic-console-deployment exec cli "laconic registry bond create --type alnt --quantity 1000000000000 --user-key $ALICE_PK" | jq -r '.bondId'
```
* Update CLI config
```bash
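# Set to the bond id returned by the bond create command above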
BOND_ID=c5ce12710a0ba7bf01f085a24b23dc78936e8d63ba3ce9d4b9d8f789e5b2f265
laconic-so deployment --dir laconic-console-deployment exec cli "CERC_LACONICD_USER_KEY=${ALICE_PK} CERC_LACONICD_BOND_ID=${BOND_ID} CERC_LACONICD_GAS=500000 CERC_LACONICD_FEES=500000alnt /app/create-config.sh"
```
* Copy records
```bash
cp -r /srv/laconicd/testnet-repo-records laconic-console-deployment/data/laconic-registry-data/testnet-repo-records
```
* Publish
```bash
laconic-so deployment --dir laconic-console-deployment exec cli "yarn ts-node demo/scripts/publish-records.ts --config config.yml --records /laconic-registry-data/testnet-repo-records"
```
* Start the stage1 deployment:
```bash
@@ -93,79 +135,4 @@ Once all the participants have completed their onboarding, stage0 laconicd chain
scp dev@<deployments-server-hostname>:/srv/laconicd/stage1-deployment/data/laconicd-data/config/genesis.json </path/to/local/directory>
```
* Now users can follow the steps to [Join as a validator on stage1](../testnet-onboarding-validator.md#join-as-a-validator-on-stage1)
## Bank Transfer
* Transfer tokens to an address:
```bash
cd /srv/laconicd
RECEIVER_ADDRESS=
AMOUNT=
laconic-so deployment --dir stage1-deployment exec laconicd "laconicd tx bank send alice ${RECEIVER_ADDRESS} ${AMOUNT}alnt --from alice --fees 1000000alnt"
```
* Check balance:
```bash
laconic-so deployment --dir stage1-deployment exec laconicd "laconicd query bank balances ${RECEIVER_ADDRESS}"
```
---
## Generating stage1 genesis
* Following steps to be run on a local machine
* Clone repos:
```bash
git clone git@git.vdb.to:cerc-io/testnet-laconicd-stack.git
git clone git@git.vdb.to:cerc-io/fixturenet-laconicd-stack.git
```
* Create stage1 participants and allocations using provided validators list:
* Prerequisite: `validators.csv` file with list of laconic addresses, example:
```csv
laconic13ftz0c6cg6ttfda7ct4r6pf2j976zsey7l4wmj
laconic1he4wjpfm5atwfvqurpg57ctp8chmxt9swf02dx
laconic1wpsdkwz0t4ejdm7gcl7kn8989z88dd6wwy04np
...
```
* Build
```bash
# Change to scripts dir
cd testnet-laconicd-stack/scripts
# Install dependencies and build
yarn && yarn build
```
* Run script
```bash
yarn participants-with-filtered-validators --validators-csv ./validators.csv --participant-alloc 200000000000 --validator-alloc 1000200000000000 --output stage1-participants-$(date +"%Y-%m-%dT%H%M%S").json --output-allocs stage1-allocs-$(date +"%Y-%m-%dT%H%M%S").json
# This should create two json files with stage1 participants and allocations
```
* Create stage1 genesis file:
```bash
# Change to fixturenet-laconicd stack dir
cd fixturenet-laconicd-stack/stack-orchestrator/stacks/fixturenet-laconicd
# Generate genesis file
# Provide absolute paths to generated stage1-participants and stage1-allocs files
./scripts/generate-stage1-genesis-from-json.sh /path/to/testnet-laconicd-stack/scripts/stage1-participants.json /path/to/testnet-laconicd-stack/scripts/stage1-allocs.json
# This should generate the required genesis file at output/genesis.json
```
* Now users can follow the steps to [Join as a validator on stage1](https://git.vdb.to/cerc-io/testnet-laconicd-stack/src/branch/main/testnet-onboarding-validator.md#join-as-a-validator-on-stage1)


@@ -1,150 +0,0 @@
# Halt stage1 and start stage2
## Login
* Log in as `dev` user on the deployments VM
* All the deployments are placed in the `/srv` directory:
```bash
cd /srv
```
## Halt stage1
* Confirm that the currently running node is for the stage1 chain:
```bash
# On stage1 deployment machine
STAGE1_DEPLOYMENT=/srv/laconicd/testnet-laconicd-deployment
laconic-so deployment --dir $STAGE1_DEPLOYMENT logs laconicd -f --tail 30
# Note: stage1 node on deployments VM has been changed to run from /srv/laconicd/testnet-laconicd-deployment instead of /srv/laconicd/stage1-deployment
```
* Stop the stage1 deployment:
```bash
laconic-so deployment --dir $STAGE1_DEPLOYMENT stop
# Stopping this deployment marks the end of testnet stage1
```
## Export stage1 state
* Export the chain state:
```bash
docker run -it \
-v $STAGE1_DEPLOYMENT/data/laconicd-data:/root/.laconicd \
cerc/laconicd-stage1:local bash -c "laconicd export | jq > /root/.laconicd/stage1-state.json"
```
* Archive the state and node config and keys:
```bash
sudo tar -czf /srv/laconicd/stage1-laconicd-export.tar.gz --exclude="./data" --exclude="./tmp" -C $STAGE1_DEPLOYMENT/data/laconicd-data .
sudo chown dev:dev /srv/laconicd/stage1-laconicd-export.tar.gz
```
## Initialize stage2
* Copy over the stage1 state and node export archive to stage2 deployment machine
* Extract the stage1 state and node config to stage2 deployment dir:
```bash
# On stage2 deployment machine
cd /srv/laconicd
# Unarchive
tar -xzf stage1-laconicd-export.tar.gz -C stage2-deployment/data/laconicd-data
# Verify contents
ll stage2-deployment/data/laconicd-data
```
* Initialize stage2 chain:
```bash
DEPLOYMENT_DIR=$(pwd)
cd ~/cerc/fixturenet-laconicd-stack/stack-orchestrator/stacks/fixturenet-laconicd
STAGE2_CHAIN_ID=laconic-testnet-2
./scripts/initialize-stage2.sh $DEPLOYMENT_DIR/stage2-deployment $STAGE2_CHAIN_ID LaconicStage2 os 1000000000000000
# Enter the keyring passphrase for account from stage1 when prompted
cd $DEPLOYMENT_DIR
```
* Resets the node data (`unsafe-reset-all`)
* Initializes the `stage2-deployment` node
* Generates the genesis file for stage2 with stage1 state
* Carries over accounts, balances and laconicd modules from stage1
* Skips staking and validator data (a rough, illustrative sketch of these steps is shown after this list)
* Copy over the genesis file outside data directory:
```bash
cp stage2-deployment/data/laconicd-data/config/genesis.json stage2-deployment
```
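* For reference, a rough sketch of what these initialization steps amount to is shown below; this is illustrative only (paths, jq filters and chain id are assumptions), the actual logic lives in `initialize-stage2.sh`:
```bash
# Illustrative sketch only -- not the contents of initialize-stage2.sh.
# Paths and jq filters are assumptions; the real script is the source of truth.
NODE_HOME=stage2-deployment/data/laconicd-data
STAGE1_EXPORT=$NODE_HOME/stage1-state.json

# Reset node data while keeping the node keys
laconicd cometbft unsafe-reset-all --home $NODE_HOME

# Build the stage2 genesis from the exported stage1 state:
# keep accounts, balances and laconicd module state, set the new chain id,
# and drop staking / validator data
jq '.chain_id = "laconic-testnet-2"
    | .validators = []
    | .app_state.staking.validators = []
    | .app_state.staking.delegations = []' \
  "$STAGE1_EXPORT" > $NODE_HOME/config/genesis.json
```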
## Start stage2
* Start the stage2 deployment:
```bash
laconic-so deployment --dir stage2-deployment start
```
* Check status of stage2 laconicd:
```bash
# List down the container and check health status
docker ps -a | grep laconicd
# Follow logs for laconicd container, check that new blocks are getting created
laconic-so deployment --dir stage2-deployment logs laconicd -f
```
* Get the node's peer address and stage2 genesis file to share with the participants:
* Get the node id:
```bash
echo $(laconic-so deployment --dir stage2-deployment exec laconicd "laconicd cometbft show-node-id")@laconicd-sapo.laconic.com:36656
```
* Get the genesis file:
```bash
scp dev@<deployments-server-hostname>:/srv/laconicd/stage2-deployment/genesis.json </path/to/local/directory>
```
* Now users can follow the steps to [Upgrade to SAPO testnet](../testnet-onboarding-validator.md#upgrade-to-sapo-testnet)
## Bank Transfer
* Transfer tokens to an address:
```bash
cd /srv/laconicd
RECEIVER_ADDRESS=
AMOUNT=
laconic-so deployment --dir stage2-deployment exec laconicd "laconicd tx bank send alice ${RECEIVER_ADDRESS} ${AMOUNT}alnt --from alice --fees 1000alnt"
```
* Check balance:
```bash
laconic-so deployment --dir stage2-deployment exec laconicd "laconicd query bank balances ${RECEIVER_ADDRESS}"
```

File diff suppressed because it is too large


@@ -1,16 +0,0 @@
{
"1212": [
{
"name": "geth",
"chainId": "1212",
"contracts": {
"TestToken": {
"address": "0x2b79F4a92c177B4E61F5c4AC37b1B8A623c665A4"
},
"TestToken2": {
"address": "0xa6B4B8b84576047A53255649b4994743d9C83A71"
}
}
}
]
}

File diff suppressed because one or more lines are too long


@@ -1,7 +0,0 @@
nitro_chain_url: "wss://fixturenet-eth.laconic.com"
na_address: "0x2B6AFbd4F479cE4101Df722cF4E05F941523EaD9"
ca_address: "0xBca48057Da826cB2eb1258E2C679678b269dC262"
vpa_address: "0xCf5207018766587b8cBad4B8B1a1a38c225ebA7A"
bridge_nitro_address: "0xf0E6a85C6D23AcA9ff1b83477D426ed26F218185"
nitro_l1_bridge_multiaddr: "/dns4/bridge.laconic.com/tcp/3005/p2p/16Uiu2HAky2PYTfBNHpytybz4ADY9n7boiLgK5btJpTrGVbWC3diZ"
nitro_l2_bridge_multiaddr: "/dns4/bridge.laconic.com/tcp/3006/p2p/16Uiu2HAky2PYTfBNHpytybz4ADY9n7boiLgK5btJpTrGVbWC3diZ"


@@ -1,25 +0,0 @@
#!/bin/bash
# Exit on error
set -e
set -u
NODE_HOME="$HOME/.laconicd"
testnet2_genesis="$NODE_HOME/tmp-testnet2/genesis.json"
if [ ! -f ${testnet2_genesis} ]; then
echo "testnet2 genesis file not found, exiting..."
exit 1
fi
# Remove data but keep keys
laconicd cometbft unsafe-reset-all
# Use provided genesis config
cp $testnet2_genesis $NODE_HOME/config/genesis.json
# Set chain id in config
chain_id=$(jq -r '.chain_id' $testnet2_genesis)
laconicd config set client chain-id $chain_id --home $NODE_HOME
echo "Node data reset and ready for testnet2!"


@@ -254,250 +254,3 @@ Instructions to reset / update the deployments
```
* The laconic console can now be viewed at <https://loro-console.laconic.com>
---
## stage2 laconicd
* Deployment dir: `/srv/laconicd/stage2-deployment`
* If code has changed, fetch and build with updated source code:
```bash
# laconicd source
cd ~/cerc/laconicd
# Pull latest changes, or checkout to the required branch
git pull
# Confirm the latest commit hash
git log
# Rebuild the containers
cd /srv/laconicd
laconic-so --stack ~/cerc/fixturenet-laconicd-stack/stack-orchestrator/stacks/fixturenet-laconicd build-containers --force-rebuild
```
* Optionally, reset the data directory:
```bash
# Stop the deployment
laconic-so deployment --dir stage2-deployment stop --delete-volumes
# Remove and recreate the required data dirs
sudo rm -rf stage2-deployment/data/laconicd-data stage2-deployment/data/genesis-config
mkdir stage2-deployment/data/laconicd-data
mkdir stage2-deployment/data/genesis-config
```
* Follow [stage1-to-stage2.md](./stage1-to-stage2.md) to reinitialize stage2 and start the deployment
## laconic-console-testnet2
* Deployment dir: `/srv/console/laconic-console-testnet2-deployment`
* Steps to update the deployment similar as in [laconic-console](#laconic-console)
## Laconic Shopify
* Deployment dir: `/srv/shopify/laconic-shopify-deployment`
* If code has changed, fetch and build with updated source code:
```bash
laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-shopify setup-repositories --git-ssh --pull
# rebuild containers
laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-shopify build-containers --force-rebuild
```
* Update the configuration if required in `laconic-shopify-deployment/config.env`
* Restart the deployment:
```bash
cd /srv/shopify
laconic-so deployment --dir laconic-shopify-deployment stop
laconic-so deployment --dir laconic-shopify-deployment start
```
## webapp-deployer
### Backend
* Deployment dir: `/srv/service-provider/webapp-deployer`
* If code has changed, fetch and build with updated source code:
```bash
laconic-so --stack webapp-deployer-backend setup-repositories --git-ssh --pull
laconic-so --stack webapp-deployer-backend build-containers --force-rebuild
```
* Update the configuration, if required, in:
* `/srv/service-provider/webapp-deployer/data/config/laconic.yml`
* `/srv/service-provider/webapp-deployer/config.env`
* Restart the deployment:
```bash
laconic-so deployment --dir webapp-deployer stop
laconic-so deployment --dir webapp-deployer start
```
* Load context for k8s
```bash
kubie ctx vs-narwhal
```
* Copy the GPG key file to the webapp-deployer container
```bash
# Get the webapp-deployer pod id
laconic-so deployment --dir webapp-deployer ps
# Expected output
# Running containers:
# id: default/laconic-096fed46af974a47-deployment-644db859c7-snbq6, name: laconic-096fed46af974a47-deployment-644db859c7-snbq6, ports: 10.42.2.11:9555->9555
# Set pod id
export POD_ID=
# Example:
# export POD_ID=laconic-096fed46af974a47-deployment-644db859c7-snbq6
# Copy GPG key files to the pod
kubectl cp webapp-deployer/webapp-deployer-api.apps.vaasl.io.pgp.key $POD_ID:/app
kubectl cp webapp-deployer/webapp-deployer-api.apps.vaasl.io.pgp.pub $POD_ID:/app
```
### Frontend
* Deployment dir: `/srv/service-provider/webapp-ui`
* If code has changed, fetch and build with updated source code:
```bash
cd ~/cerc/webapp-deployment-status-ui
# Pull latest changes, or checkout to the required branch
git pull
# Confirm the latest commit hash
git log
laconic-so build-webapp --source-repo ~/cerc/webapp-deployment-status-ui
```
* Modify `/srv/service-provider/webapp-ui/config.env` like [this Pull Request](https://git.vdb.to/cerc-io/webapp-deployment-status-ui/pulls/6) but with your host details.
* Restart the deployment:
```bash
laconic-so deployment --dir webapp-ui stop
laconic-so deployment --dir webapp-ui start
```
## Deploy Backend
* Deployment dir: `/srv/deploy-backend/laconic-backend-deployment`
* If code has changed, fetch and build with updated source code:
```bash
laconic-so --stack ~/cerc/snowballtools-base-api-stack/stack-orchestrator/stacks/snowballtools-base-backend setup-repositories --git-ssh --pull
# rebuild containers
laconic-so --stack ~/cerc/snowballtools-base-api-stack/stack-orchestrator/stacks/snowballtools-base-backend build-containers --force-rebuild
```
* Push updated images to the container registry:
```bash
cd /srv/deploy-backend
# login to container registry
CONTAINER_REGISTRY_URL=container-registry.apps.vaasl.io
# For credentials: "cat /srv/service-provider/webapp-deployer/config.env | grep IMAGE_REGISTRY"
CONTAINER_REGISTRY_USERNAME=
CONTAINER_REGISTRY_PASSWORD=
docker login $CONTAINER_REGISTRY_URL --username $CONTAINER_REGISTRY_USERNAME --password $CONTAINER_REGISTRY_PASSWORD
# Push backend images
laconic-so deployment --dir laconic-backend-deployment push-images
```
* Update the configuration if required in `laconic-backend-deployment/configmaps/config/prod.toml`
* Restart the deployment:
```bash
laconic-so deployment --dir laconic-backend-deployment stop
laconic-so deployment --dir laconic-backend-deployment start
```
## Deploy Frontend
* Follow steps similar to [deployments-from-scratch.md](./deployments-from-scratch.md#deploy-frontend) to deploy the snowball frontend
* Use authority `laconic-deploy` and the script `deploy-frontend.sh` instead
## Fixturenet Eth
* Deployment dir: `/srv/fixturenet-eth/fixturenet-eth-deployment`
* If code has changed, fetch and build with updated source code:
```bash
laconic-so --stack ~/cerc/fixturenet-eth-stacks/stack-orchestrator/stacks/fixturenet-eth setup-repositories --git-ssh --pull
# Rebuild the containers
laconic-so --stack ~/cerc/fixturenet-eth-stacks/stack-orchestrator/stacks/fixturenet-eth build-containers --force-rebuild
```
* Update the configuration if required in `fixturenet-eth-deployment/config.env`:
```bash
CERC_ALLOW_UNPROTECTED_TXS=true
```
* Restart the deployment:
```bash
cd /srv/fixturenet-eth
laconic-so deployment --dir fixturenet-eth-deployment stop
laconic-so deployment --dir fixturenet-eth-deployment start
```
## Nitro Bridge
* Deployment dir: `/srv/bridge/bridge-deployment`
* Rebuild containers:
```bash
# Rebuild the containers
laconic-so --stack ~/cerc/nitro-stack/stack-orchestrator/stacks/bridge build-containers --force-rebuild
```
* Update the configuration if required in `bridge-deployment/config.env`
* Restart the bridge deployment:
```bash
cd /srv/bridge
laconic-so deployment --dir bridge-deployment stop
laconic-so deployment --dir bridge-deployment start
```


@@ -1,8 +0,0 @@
# Default: https://laconicd.laconic.com/api
LACONICD_GQL_ENDPOINT=
# Default: https://laconicd.laconic.com
LACONICD_RPC_ENDPOINT=
# Default: laconic_9000-1
LACONICD_CHAIN_ID=

scripts/.gitignore vendored

@@ -1,3 +0,0 @@
node_modules
dist
.env


@@ -1 +0,0 @@
@cerc-io:registry=https://git.vdb.to/api/packages/cerc-io/npm/


@@ -1,45 +0,0 @@
# scripts
## Prerequisites
- NodeJS >= `v18.17.x`
## Instructions
- Change to scripts dir:
```bash
cd scripts
```
- Install dependencies and build:
```bash
yarn && yarn build
```
- Create required env configuration:
```bash
# Update the values as required
# By default, live laconicd testnet (laconicd.laconic.com) endpoint is configured
cp .env.example .env
```
- Generate a list of onboarded participants and allocations with given list of validators:
```bash
yarn participants-with-filtered-validators --validators-csv <validators-csv-file> --participant-alloc <participant-alloc-amount> --validator-alloc <validator-alloc-amount> --output <output-json-file> --output-allocs <output-allocs-json-file>
# Example:
# yarn participants-with-filtered-validators --validators-csv ./validators.csv --participant-alloc 200000000000 --validator-alloc 1000200000000000 --output stage1-participants-$(date +"%Y-%m-%dT%H%M%S").json --output-allocs stage1-allocs-$(date +"%Y-%m-%dT%H%M%S").json
```
- Map subscribers to onboarded participants:
```bash
yarn map-subscribers-to-participants --subscribers-csv <subscribers-csv-file> --output <output-csv-file>
# Example:
# yarn map-subscribers-to-participants --subscribers-csv subscribers.csv --output result-$(date +"%Y-%m-%dT%H%M%S").csv
```


@@ -1,29 +0,0 @@
{
"name": "testnet-laconicd-stack",
"version": "0.1.0",
"main": "index.js",
"repository": "git@git.vdb.to:cerc-io/testnet-laconicd-stack.git",
"license": "UNLICENSED",
"private": true,
"devDependencies": {
"@types/cli-progress": "^3.11.6",
"@types/node": "^22.2.0",
"@types/yargs": "^17.0.33",
"typescript": "^5.5.4"
},
"dependencies": {
"@cerc-io/registry-sdk": "^0.2.6",
"@cosmjs/stargate": "^0.32.4",
"csv-parse": "^5.5.6",
"csv-parser": "^3.0.0",
"csv-writer": "^1.6.0",
"dotenv": "^16.4.5",
"yargs": "^17.7.2"
},
"scripts": {
"build": "tsc",
"map-subscribers-to-participants": "node dist/map-subscribers-to-participants.js",
"participants-with-filtered-validators": "node dist/participants-with-filtered-validators.js"
},
"packageManager": "yarn@1.22.19+sha1.4ba7fc5c6e704fce2066ecbfb0b0d8976fe62447"
}


@@ -1,171 +0,0 @@
import * as fs from 'fs';
import * as crypto from 'crypto';
import * as path from 'path';
import yargs from 'yargs';
import { hideBin } from 'yargs/helpers';
import { parse as csvParse } from 'csv-parse';
import * as csvWriter from 'csv-writer';
import dotenv from 'dotenv';
import { StargateClient } from '@cosmjs/stargate';
import { Registry } from '@cerc-io/registry-sdk';
import { decodeTxRaw, decodePubkey } from '@cosmjs/proto-signing';
dotenv.config();
const LACONICD_GQL_ENDPOINT = process.env.LACONICD_GQL_ENDPOINT || 'https://laconicd.laconic.com/api';
const LACONICD_RPC_ENDPOINT = process.env.LACONICD_RPC_ENDPOINT || 'https://laconicd.laconic.com';
const LACONICD_CHAIN_ID = process.env.LACONICD_CHAIN_ID || 'laconic_9000-1';
async function main(): Promise<void> {
const argv = _getArgv();
const registry = new Registry(LACONICD_GQL_ENDPOINT, LACONICD_RPC_ENDPOINT, LACONICD_CHAIN_ID);
const client = await StargateClient.connect(LACONICD_RPC_ENDPOINT);
console.time('time_taken_getParticipants');
const participants = await registry.getParticipants();
console.timeEnd('time_taken_getParticipants');
const subscribers = await readSubscribers(argv.subscribersCsv);
console.log('Read subscribers, count:', subscribers.length);
await processSubscribers(client, participants, subscribers, argv.output);
}
async function readSubscribers(subscribersCsvPath: string): Promise<any> {
const fileContent = fs.readFileSync(path.resolve(subscribersCsvPath), { encoding: 'utf-8' });
const headers = ['subscriber_id', 'email', 'status', 'premium?', 'created_at', 'api_subscription_id'];
return csvParse(fileContent, { delimiter: ',', columns: headers }).toArray();
}
function hashSubscriberId(subscriberId: string): string {
return '0x' + crypto.createHash('sha256').update(subscriberId).digest('hex');
}
async function processSubscribers(client: StargateClient, participants: any[], subscribers: any[], outputPath: string) {
// Map kyc_id to participant data
const kycMap: Record<string, any> = {};
participants.forEach((participant: any) => {
kycMap[participant.kycId] = participant;
});
const onboardingTxsHeightMap: Record<string, { txHeight: number, pubkey: string }> = {};
console.time('time_taken_searchTx');
const onboardingTxs = await client.searchTx(`message.action='/cerc.onboarding.v1.MsgOnboardParticipant'`);
console.timeEnd('time_taken_searchTx');
console.log('Fetched onboardingTxs, count:', onboardingTxs.length);
console.time('time_taken_decodingTxs');
onboardingTxs.forEach(onboardingTx => {
const rawPubkey = decodeTxRaw(onboardingTx.tx).authInfo.signerInfos[0].publicKey;
if (!rawPubkey) {
console.error('pubkey not found in tx', onboardingTx.hash);
return;
}
const pubkey = decodePubkey(rawPubkey).value;
// Determine sender
const onboardParticipantEvent = onboardingTx.events.find(event => event.type === 'onboard_participant');
if (!onboardParticipantEvent) {
console.error('onboard_participant event not found in tx', onboardingTx.hash);
return;
}
const sender = onboardParticipantEvent.attributes.find(attr => attr.key === 'signer')?.value;
if (!sender) {
console.error('sender not found in onboard_participant event for tx', onboardingTx.hash)
return;
}
// Update if already exists
let latesTxHeight = onboardingTx.height;
if (onboardingTxsHeightMap[sender]) {
latesTxHeight = latesTxHeight > onboardingTxsHeightMap[sender].txHeight ? latesTxHeight : onboardingTxsHeightMap[sender].txHeight;
}
onboardingTxsHeightMap[sender] = { txHeight: latesTxHeight, pubkey };
});
console.timeEnd('time_taken_decodingTxs');
const onboardedSubscribers: any[] = [];
for (let i = 0; i < subscribers.length; i++) {
const subscriber = subscribers[i];
const hashedSubscriberId = hashSubscriberId(subscriber['subscriber_id']);
const participant = kycMap[hashedSubscriberId];
if (!participant) {
continue;
}
const participantAddresss = participant['cosmosAddress'];
// Skip participant if an onboarding tx not found
if (!onboardingTxsHeightMap[participantAddresss]) {
continue;
}
const onboardedSubscriber = {
subscriber_id: subscriber['subscriber_id'],
email: subscriber['email'],
status: subscriber['status'],
'premium?': subscriber['premium?'],
created_at: subscriber['created_at'],
laconic_address: participantAddresss,
nitro_address: participant['nitroAddress'],
role: participant['role'],
hashed_subscriber_id: participant['kycId'],
laconic_pubkey: onboardingTxsHeightMap[participantAddresss].pubkey,
onboarding_height: onboardingTxsHeightMap[participantAddresss].txHeight
};
onboardedSubscribers.push(onboardedSubscriber);
}
const writer = csvWriter.createObjectCsvWriter({
path: path.resolve(outputPath),
header: [
{ id: 'subscriber_id', title: 'subscriber_id' },
{ id: 'email', title: 'email' },
{ id: 'status', title: 'status' },
{ id: 'premium?', title: 'premium?' },
{ id: 'created_at', title: 'created_at' },
{ id: 'laconic_address', title: 'laconic_address' },
{ id: 'nitro_address', title: 'nitro_address' },
{ id: 'role', title: 'role' },
{ id: 'hashed_subscriber_id', title: 'hashed_subscriber_id' },
{ id: 'laconic_pubkey', title: 'laconic_pubkey' },
{ id: 'onboarding_height', title: 'onboarding_height' },
],
alwaysQuote: true
});
await writer.writeRecords(onboardedSubscribers);
console.log(`Data has been written to ${path.resolve(outputPath)}`);
}
function _getArgv (): any {
return yargs(hideBin(process.argv))
.option('subscribersCsv', {
alias: 's',
type: 'string',
demandOption: true,
describe: 'Path to the subscribers CSV file',
})
.option('output', {
alias: 'o',
type: 'string',
demandOption: true,
describe: 'Path to the output CSV file',
})
.help()
.argv;
}
main().catch(err => {
console.log(err);
});


@@ -1,114 +0,0 @@
import * as fs from 'fs';
import * as path from 'path';
import yargs from 'yargs';
import { hideBin } from 'yargs/helpers';
import dotenv from 'dotenv';
import { Registry } from '@cerc-io/registry-sdk';
dotenv.config();
const LACONICD_GQL_ENDPOINT = process.env.LACONICD_GQL_ENDPOINT || 'https://laconicd.laconic.com/api';
const LACONICD_RPC_ENDPOINT = process.env.LACONICD_RPC_ENDPOINT || 'https://laconicd.laconic.com';
const LACONICD_CHAIN_ID = process.env.LACONICD_CHAIN_ID || 'laconic_9000-1';
async function main(): Promise<void> {
const argv = _getArgv();
const registry = new Registry(LACONICD_GQL_ENDPOINT, LACONICD_RPC_ENDPOINT, LACONICD_CHAIN_ID);
console.time('time_taken_getParticipants');
const participants = await registry.getParticipants();
console.log('Fetched participants, count:', participants.length);
console.timeEnd('time_taken_getParticipants');
let validators: Array<string> = await readValidators(argv.validatorsCsv);
console.log('Read validators, count:', validators.length);
let stage1Allocations: Array<{ 'cosmos_address': string, balance: string }> = [];
const stage1Participants = participants.map((participant: any) => {
const outputParticipant: any = {
'cosmos_address': participant.cosmosAddress,
'nitro_address': participant.nitroAddress,
'kyc_id': participant.kycId
};
if (validators.includes(participant.cosmosAddress)) {
outputParticipant.role = 'validator';
stage1Allocations.push({
cosmos_address: participant.cosmosAddress,
balance: argv.validatorAlloc
});
// Remove processed participant from validators list
validators = validators.filter(val => val !== participant.cosmosAddress);
} else {
outputParticipant.role = 'participant';
stage1Allocations.push({
cosmos_address: participant.cosmosAddress,
balance: argv.participantAlloc
});
}
return outputParticipant;
});
// Provide allocs for remaining validators
validators.forEach(val => {
stage1Allocations.push({
cosmos_address: val,
balance: argv.validatorAlloc
});
});
const participantsOutputFilePath = path.resolve(argv.output);
fs.writeFileSync(participantsOutputFilePath, JSON.stringify(stage1Participants, null, 2));
console.log(`Onboarded participants with filtered validators written to ${participantsOutputFilePath}`);
const allocsOutputFilePath = path.resolve(argv.outputAllocs);
fs.writeFileSync(allocsOutputFilePath, JSON.stringify(stage1Allocations, null, 2));
console.log(`Stage1 allocations written to ${allocsOutputFilePath}`);
}
async function readValidators(subscribersCsvPath: string): Promise<any> {
const fileContent = fs.readFileSync(path.resolve(subscribersCsvPath), { encoding: 'utf-8' });
return fileContent.split('\r\n').map(address => address.trim());
}
function _getArgv (): any {
return yargs(hideBin(process.argv))
.option('validatorsCsv', {
type: 'string',
demandOption: true,
describe: 'Path to a CSV file with validators list',
})
.option('participantAlloc', {
type: 'string',
demandOption: true,
describe: 'Participant stage1 balance allocation',
})
.option('validatorAlloc', {
type: 'string',
demandOption: true,
describe: 'Validator stage1 balance allocation',
})
.option('output', {
type: 'string',
demandOption: true,
describe: 'Path to the output JSON file',
})
.option('outputAllocs', {
type: 'string',
demandOption: true,
describe: 'Path to the output JSON file with allocs',
})
.help()
.argv;
}
main().catch(err => {
console.log(err);
});


@@ -1,110 +0,0 @@
{
"compilerOptions": {
/* Visit https://aka.ms/tsconfig to read more about this file */
/* Projects */
// "incremental": true, /* Save .tsbuildinfo files to allow for incremental compilation of projects. */
// "composite": true, /* Enable constraints that allow a TypeScript project to be used with project references. */
// "tsBuildInfoFile": "./.tsbuildinfo", /* Specify the path to .tsbuildinfo incremental compilation file. */
// "disableSourceOfProjectReferenceRedirect": true, /* Disable preferring source files instead of declaration files when referencing composite projects. */
// "disableSolutionSearching": true, /* Opt a project out of multi-project reference checking when editing. */
// "disableReferencedProjectLoad": true, /* Reduce the number of projects loaded automatically by TypeScript. */
/* Language and Environment */
"target": "es2016", /* Set the JavaScript language version for emitted JavaScript and include compatible library declarations. */
// "lib": [], /* Specify a set of bundled library declaration files that describe the target runtime environment. */
// "jsx": "preserve", /* Specify what JSX code is generated. */
// "experimentalDecorators": true, /* Enable experimental support for legacy experimental decorators. */
// "emitDecoratorMetadata": true, /* Emit design-type metadata for decorated declarations in source files. */
// "jsxFactory": "", /* Specify the JSX factory function used when targeting React JSX emit, e.g. 'React.createElement' or 'h'. */
// "jsxFragmentFactory": "", /* Specify the JSX Fragment reference used for fragments when targeting React JSX emit e.g. 'React.Fragment' or 'Fragment'. */
// "jsxImportSource": "", /* Specify module specifier used to import the JSX factory functions when using 'jsx: react-jsx*'. */
// "reactNamespace": "", /* Specify the object invoked for 'createElement'. This only applies when targeting 'react' JSX emit. */
// "noLib": true, /* Disable including any library files, including the default lib.d.ts. */
// "useDefineForClassFields": true, /* Emit ECMAScript-standard-compliant class fields. */
// "moduleDetection": "auto", /* Control what method is used to detect module-format JS files. */
/* Modules */
"module": "commonjs", /* Specify what module code is generated. */
// "rootDir": "./", /* Specify the root folder within your source files. */
// "moduleResolution": "node10", /* Specify how TypeScript looks up a file from a given module specifier. */
// "baseUrl": "./", /* Specify the base directory to resolve non-relative module names. */
// "paths": {}, /* Specify a set of entries that re-map imports to additional lookup locations. */
// "rootDirs": [], /* Allow multiple folders to be treated as one when resolving modules. */
// "typeRoots": [], /* Specify multiple folders that act like './node_modules/@types'. */
// "types": [], /* Specify type package names to be included without being referenced in a source file. */
// "allowUmdGlobalAccess": true, /* Allow accessing UMD globals from modules. */
// "moduleSuffixes": [], /* List of file name suffixes to search when resolving a module. */
// "allowImportingTsExtensions": true, /* Allow imports to include TypeScript file extensions. Requires '--moduleResolution bundler' and either '--noEmit' or '--emitDeclarationOnly' to be set. */
// "resolvePackageJsonExports": true, /* Use the package.json 'exports' field when resolving package imports. */
// "resolvePackageJsonImports": true, /* Use the package.json 'imports' field when resolving imports. */
// "customConditions": [], /* Conditions to set in addition to the resolver-specific defaults when resolving imports. */
"resolveJsonModule": true, /* Enable importing .json files. */
// "allowArbitraryExtensions": true, /* Enable importing files with any extension, provided a declaration file is present. */
// "noResolve": true, /* Disallow 'import's, 'require's or '<reference>'s from expanding the number of files TypeScript should add to a project. */
/* JavaScript Support */
// "allowJs": true, /* Allow JavaScript files to be a part of your program. Use the 'checkJS' option to get errors from these files. */
// "checkJs": true, /* Enable error reporting in type-checked JavaScript files. */
// "maxNodeModuleJsDepth": 1, /* Specify the maximum folder depth used for checking JavaScript files from 'node_modules'. Only applicable with 'allowJs'. */
/* Emit */
"declaration": true, /* Generate .d.ts files from TypeScript and JavaScript files in your project. */
// "declarationMap": true, /* Create sourcemaps for d.ts files. */
// "emitDeclarationOnly": true, /* Only output d.ts files and not JavaScript files. */
"sourceMap": true, /* Create source map files for emitted JavaScript files. */
// "inlineSourceMap": true, /* Include sourcemap files inside the emitted JavaScript. */
// "outFile": "./", /* Specify a file that bundles all outputs into one JavaScript file. If 'declaration' is true, also designates a file that bundles all .d.ts output. */
"outDir": "dist", /* Specify an output folder for all emitted files. */
// "removeComments": true, /* Disable emitting comments. */
// "noEmit": true, /* Disable emitting files from a compilation. */
// "importHelpers": true, /* Allow importing helper functions from tslib once per project, instead of including them per-file. */
// "downlevelIteration": true, /* Emit more compliant, but verbose and less performant JavaScript for iteration. */
// "sourceRoot": "", /* Specify the root path for debuggers to find the reference source code. */
// "mapRoot": "", /* Specify the location where debugger should locate map files instead of generated locations. */
// "inlineSources": true, /* Include source code in the sourcemaps inside the emitted JavaScript. */
// "emitBOM": true, /* Emit a UTF-8 Byte Order Mark (BOM) in the beginning of output files. */
// "newLine": "crlf", /* Set the newline character for emitting files. */
// "stripInternal": true, /* Disable emitting declarations that have '@internal' in their JSDoc comments. */
// "noEmitHelpers": true, /* Disable generating custom helper functions like '__extends' in compiled output. */
// "noEmitOnError": true, /* Disable emitting files if any type checking errors are reported. */
// "preserveConstEnums": true, /* Disable erasing 'const enum' declarations in generated code. */
// "declarationDir": "./", /* Specify the output directory for generated declaration files. */
/* Interop Constraints */
// "isolatedModules": true, /* Ensure that each file can be safely transpiled without relying on other imports. */
// "verbatimModuleSyntax": true, /* Do not transform or elide any imports or exports not marked as type-only, ensuring they are written in the output file's format based on the 'module' setting. */
// "isolatedDeclarations": true, /* Require sufficient annotation on exports so other tools can trivially generate declaration files. */
// "allowSyntheticDefaultImports": true, /* Allow 'import x from y' when a module doesn't have a default export. */
"esModuleInterop": true, /* Emit additional JavaScript to ease support for importing CommonJS modules. This enables 'allowSyntheticDefaultImports' for type compatibility. */
// "preserveSymlinks": true, /* Disable resolving symlinks to their realpath. This correlates to the same flag in node. */
"forceConsistentCasingInFileNames": true, /* Ensure that casing is correct in imports. */
/* Type Checking */
"strict": true, /* Enable all strict type-checking options. */
// "noImplicitAny": true, /* Enable error reporting for expressions and declarations with an implied 'any' type. */
// "strictNullChecks": true, /* When type checking, take into account 'null' and 'undefined'. */
// "strictFunctionTypes": true, /* When assigning functions, check to ensure parameters and the return values are subtype-compatible. */
// "strictBindCallApply": true, /* Check that the arguments for 'bind', 'call', and 'apply' methods match the original function. */
// "strictPropertyInitialization": true, /* Check for class properties that are declared but not set in the constructor. */
// "noImplicitThis": true, /* Enable error reporting when 'this' is given the type 'any'. */
// "useUnknownInCatchVariables": true, /* Default catch clause variables as 'unknown' instead of 'any'. */
// "alwaysStrict": true, /* Ensure 'use strict' is always emitted. */
// "noUnusedLocals": true, /* Enable error reporting when local variables aren't read. */
// "noUnusedParameters": true, /* Raise an error when a function parameter isn't read. */
// "exactOptionalPropertyTypes": true, /* Interpret optional property types as written, rather than adding 'undefined'. */
// "noImplicitReturns": true, /* Enable error reporting for codepaths that do not explicitly return in a function. */
// "noFallthroughCasesInSwitch": true, /* Enable error reporting for fallthrough cases in switch statements. */
// "noUncheckedIndexedAccess": true, /* Add 'undefined' to a type when accessed using an index. */
// "noImplicitOverride": true, /* Ensure overriding members in derived classes are marked with an override modifier. */
// "noPropertyAccessFromIndexSignature": true, /* Enforces using indexed accessors for keys declared using an indexed type. */
// "allowUnusedLabels": true, /* Disable error reporting for unused labels. */
// "allowUnreachableCode": true, /* Disable error reporting for unreachable code. */
/* Completeness */
// "skipDefaultLibCheck": true, /* Skip type checking .d.ts files that are included with TypeScript. */
"skipLibCheck": true /* Skip type checking all .d.ts files. */
},
"include": ["src"],
"exclude": ["dist"],
}

File diff suppressed because it is too large


@@ -1,510 +0,0 @@
# Service Provider
* Follow [Set up a new service provider](#set-up-a-new-service-provider) to setup a new service provider (SP)
* If you already have a SP setup for stage1, follow [Update service provider for SAPO testnet](#update-service-provider-for-sapo-testnet) to update it for testnet2
## Set up a new service provider
Follow steps from [service-provider-setup](<https://git.vdb.to/cerc-io/testnet-ops/src/branch/main/service-provider-setup#service-provider-setup>). After setup, you will have the following services running (your configuration will look similar to the examples listed below):
* laconicd chain RPC endpoint: <http://lcn-daemon.laconic.com:26657>
* laconicd GQL endpoint: <http://lcn-daemon.laconic.com:9473/api>
* laconic console: <http://lcn-console.laconic.com:8080/registry>
* webapp deployer API: <https://webapp-deployer-api.pwa.laconic.com>
* webapp deployer UI: <https://webapp-deployer-ui.pwa.laconic.com>
Follow the steps below to point your deployer to the SAPO testnet
## Update service provider for SAPO testnet
* On a successful webapp-deployer setup with the SAPO testnet, your deployer will be available on <https://deploy.laconic.com>
* When creating a project, users can either start a deployment auction that your deployer will bid on, or perform a targeted deployment using your deployer's LRN
### Prerequisites
* A SAPO testnet node (see [Join SAPO testnet](./README.md#join-sapo-testnet))
### Stop services
* Stop a laconic-console deployment:
```bash
# In directory where laconic-console deployment was created
laconic-so deployment --dir laconic-console-deployment stop --delete-volumes
```
* Stop webapp deployer:
```bash
# In directory where webapp-deployer deployment was created
laconic-so deployment --dir webapp-deployer stop
laconic-so deployment --dir webapp-ui stop
```
### Update laconic console
* Remove an existing console deployment:
```bash
# In directory where laconic-console deployment was created
# Backup the config if required
rm -rf laconic-console-deployment
```
* Follow the [laconic-console](stack-orchestrator/stacks/laconic-console/README.md) stack instructions to set up a new laconic-console deployment
* Example configuration:
```bash
# CLI configuration
# laconicd RPC endpoint (can be pointed to your node)
CERC_LACONICD_RPC_ENDPOINT=https://laconicd-sapo.laconic.com
# laconicd GQL endpoint (can be pointed to your node)
CERC_LACONICD_GQL_ENDPOINT=https://laconicd-sapo.laconic.com/api
CERC_LACONICD_CHAIN_ID=laconic-testnet-2
# your private key
CERC_LACONICD_USER_KEY=
# your bond id (optional)
CERC_LACONICD_BOND_ID=
# Gas price to use for txs (default: 0.001alnt)
# Use for auto fees calculation, gas and fees not required to be set in that case
# Reference: https://git.vdb.to/cerc-io/laconic-registry-cli#gas-and-fees
CERC_LACONICD_GASPRICE=
# Console configuration
# Laconicd (hosted) GQL endpoint (can be pointed to your node)
LACONIC_HOSTED_ENDPOINT=https://laconicd-sapo.laconic.com
```
### Check authority and deployer record
* The stage1 testnet state has been carried over to testnet2; if you had an authority and records on stage1, they should be present in testnet2 as well
* Check authority:
```bash
# In directory where laconic-console deployment was created
AUTHORITY=<your-authority>
laconic-so deployment --dir laconic-console-deployment exec cli "laconic registry authority whois $AUTHORITY"
```
* Check deployer record:
```bash
PAYMENT_ADDRESS=<your-deployers-payment-address>
laconic-so deployment --dir laconic-console-deployment exec cli "laconic registry record list --all --type WebappDeployer --paymentAddress $PAYMENT_ADDRESS"
```
### (Optional) Reserve a new authority
* Follow these steps if you want to reserve a new authority
* Create a bond:
```bash
# An existing bond can also be used
laconic-so deployment --dir laconic-console-deployment exec cli "laconic registry bond create --type alnt --quantity 100000000000"
# {"bondId":"a742489e5817ef274187611dadb0e4284a49c087608b545ab6bd990905fb61f3"}
# Set bond id
BOND_ID=
```
* Reserve an authority:
```bash
AUTHORITY=<your-authority>
laconic-so deployment --dir laconic-console-deployment exec cli "laconic registry authority reserve $AUTHORITY"
# Triggers an authority auction
```
* Obtain the authority auction id:
```bash
laconic-so deployment --dir laconic-console-deployment exec cli "laconic registry authority whois $AUTHORITY"
# "auction": {
# "id": "73e0b082a198c396009ce748804a9060c674a10045365d262c1584f99d2771c1"
# Set auction id
AUCTION_ID=
```
* Commit a bid to the auction (the auction uses a commit-reveal scheme; keep the generated reveal file for the reveal phase):
```bash
laconic-so deployment --dir laconic-console-deployment exec cli "laconic registry auction bid commit $AUCTION_ID 5000000 alnt --chain-id laconic-testnet-2"
# {"reveal_file":"/app/out/bafyreiewi4osqyvrnljwwcb36fn6sr5iidfpuznqkz52gxc5ztt3jt4zmy.json"}
# Set reveal file
REVEAL_FILE=
# Wait for the auction to move from commit to reveal phase
laconic-so deployment --dir laconic-console-deployment exec cli "laconic registry auction get $AUCTION_ID"
```
* Reveal your bid using the reveal file generated while committing the bid:
```bash
laconic-so deployment --dir laconic-console-deployment exec cli "laconic registry auction bid reveal $AUCTION_ID $REVEAL_FILE --chain-id laconic-testnet-2"
# {"success": true}
```
* Verify auction status and winner address after auction completion:
```bash
laconic-so deployment --dir laconic-console-deployment exec cli "laconic registry auction get $AUCTION_ID"
```
* Set the authority with a bond:
```bash
laconic-so deployment --dir laconic-console-deployment exec cli "laconic registry authority bond set $AUTHORITY $BOND_ID"
# {"success": true}
```
* Verify the authority has been registered:
```bash
laconic-so deployment --dir laconic-console-deployment exec cli "laconic registry authority whois $AUTHORITY"
```
* Update the laconic-console-deployment config (`laconic-console-deployment/config.env`) with the created bond:
```bash
...
CERC_LACONICD_BOND_ID=<bond-id>
...
```
* Restart the console deployment:
```bash
laconic-so deployment --dir laconic-console-deployment stop && laconic-so deployment --dir laconic-console-deployment start
```
### Update webapp deployer
* Fetch latest stack repos:
```bash
# In directory where webapp-deployer deployment was created
laconic-so --stack webapp-deployer-backend setup-repositories --pull
# Confirm latest commit hash in the ~/cerc/webapp-deployment-status-api repo
```
* Rebuild container images:
```bash
laconic-so --stack webapp-deployer-backend build-containers --force-rebuild
```
* Push stack images to the container registry:
* Login to the container registry:
```bash
# Set required variables
# eg: container-registry.pwa.laconic.com
CONTAINER_REGISTRY_URL=
CONTAINER_REGISTRY_USERNAME=
CONTAINER_REGISTRY_PASSWORD=
# login to container registry
docker login $CONTAINER_REGISTRY_URL --username $CONTAINER_REGISTRY_USERNAME --password $CONTAINER_REGISTRY_PASSWORD
# WARNING! Using --password via the CLI is insecure. Use --password-stdin.
# WARNING! Your password will be stored unencrypted in /home/dev2/.docker/config.json.
# Configure a credential helper to remove this warning. See
# https://docs.docker.com/engine/reference/commandline/login/#credential-stores
# Login Succeeded
```
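* Optionally, log in without passing the password on the command line (a minimal alternative using the same variables; `--password-stdin` is what the warning above recommends):
```bash
# Read the registry password from stdin instead of a CLI argument
echo "$CONTAINER_REGISTRY_PASSWORD" | docker login $CONTAINER_REGISTRY_URL --username $CONTAINER_REGISTRY_USERNAME --password-stdin
```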
* Push images:
```bash
laconic-so deployment --dir webapp-deployer push-images
```
* Overwrite an existing compose file with the latest one:
```bash
# In directory where webapp-deployer deployment folder exists
cp ~/cerc/webapp-deployment-status-api/docker-compose.yml webapp-deployer/compose/docker-compose-webapp-deployer-backend.yml
```
* Update deployer laconic registry config (`webapp-deployer/data/config/laconic.yml`) with new endpoints:
```bash
services:
registry:
rpcEndpoint: "<your-sapo-rpc-endpoint>" # Eg. https://laconicd-sapo.laconic.com
gqlEndpoint: "<your-sapo-gql-endpoint>" # Eg. https://laconicd-sapo.laconic.com/api
userKey: "<userKey>"
bondId: "<bondId"
chainId: laconic-testnet-2
gasPrice: 0.001alnt
```
Note: Existing `userKey` and `bondId` can be used since they are carried over from laconicd stage1 to testnet2
* Publish a new webapp deployer record:
* Required if it doesn't already exist or some attribute needs to be updated
* Set the following variables:
```bash
# Path to the webapp-deployer directory
# eg: /home/dev/webapp-deployer
DEPLOYER_DIR=
# Deployer LRN (logical resource name)
# eg: "lrn://laconic/deployers/webapp-deployer-api.laconic.com"
DEPLOYER_LRN=
# Deployer API URL
# eg: "https://webapp-deployer-api.pwa.laconic.com"
API_URL=
# Deployer GPG public key file path
# eg: "/home/dev/webapp-deployer-api.laconic.com.pgp.pub"
GPG_PUB_KEY_FILE_PATH=
GPG_PUB_KEY_FILE=$(basename $GPG_PUB_KEY_FILE_PATH)
```
* Delete the LRN if it currently resolves to an existing record:
```bash
# In directory where laconic-console deployment was created
laconic-so deployment --dir laconic-console-deployment exec cli "laconic registry name resolve $DEPLOYER_LRN"
# Delete the name
laconic-so deployment --dir laconic-console-deployment exec cli "laconic registry name delete $DEPLOYER_LRN"
# Confirm deletion
laconic-so deployment --dir laconic-console-deployment exec cli "laconic registry name resolve $DEPLOYER_LRN"
```
* Copy over the GPG pub key file to webapp-deployer:
```bash
cp $GPG_PUB_KEY_FILE_PATH webapp-deployer/data/config
```
* Publish the deployer record:
```bash
docker run -it \
-v $DEPLOYER_DIR/data/config:/home/root/config \
cerc/webapp-deployer-backend:local laconic-so publish-deployer-to-registry \
--laconic-config /home/root/config/laconic.yml \
--api-url $API_URL \
--public-key-file /home/root/config/$GPG_PUB_KEY_FILE \
--lrn $DEPLOYER_LRN \
--min-required-payment 9500
```
* Update deployer config (`webapp-deployer/config.env`):
```bash
# Update the deployer LRN if it has changed
export LRN=
# Min payment to require for performing deployments
export MIN_REQUIRED_PAYMENT=9500
# Handle deployment auction requests
export HANDLE_AUCTION_REQUESTS=true
# Amount that the deployer will bid on deployment auctions
export AUCTION_BID_AMOUNT=9500
```
* Start the webapp deployer:
```bash
laconic-so deployment --dir webapp-deployer start
```
* Get the webapp-deployer pod id:
```bash
laconic-so deployment --dir webapp-deployer ps
# Expected output
# Running containers:
# id: default/laconic-096fed46af974a47-deployment-644db859c7-snbq6, name: laconic-096fed46af974a47-deployment-644db859c7-snbq6, ports: 10.42.2.11:9555->9555
# Set pod id
export POD_ID=
# Example:
# export POD_ID=laconic-096fed46af974a47-deployment-644db859c7-snbq6
```
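* Alternatively, a hypothetical one-liner to set `POD_ID` from the `ps` output shown above (assumes that exact output format; verify before relying on it):
```bash
# Grab the pod id from the first "id: default/<pod-id>, ..." line
export POD_ID=$(laconic-so deployment --dir webapp-deployer ps | sed -n 's|^[[:space:]]*id: default/\([^,]*\),.*|\1|p' | head -1)
echo $POD_ID
```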
* Copy over GPG keys files to the webapp-deployer container:
```bash
kubie ctx default
# Copy the GPG key files to the pod
kubectl cp <path-to-your-gpg-private-key> $POD_ID:/app
kubectl cp <path-to-your-gpg-public-key> $POD_ID:/app
# Required every time you stop and start the deployer
```
* Check logs:
```bash
# Deployer
kubectl logs -f $POD_ID
# Deployer auction handler
kubectl logs -f $POD_ID -c cerc-webapp-auction-handler
```
* Update deployer UI config (`webapp-ui/config.env`):
```bash
# URL of the webapp deployer backend API
# eg: https://webapp-deployer-api.pwa.laconic.com
LACONIC_HOSTED_CONFIG_app_api_url=
# URL of the laconic console
LACONIC_HOSTED_CONFIG_app_console_link=https://console-sapo.laconic.com
```
* Start the webapp UI:
```bash
laconic-so deployment --dir webapp-ui start
```
* Check logs:
```bash
laconic-so deployment --dir webapp-ui logs webapp
```
## Update service provider for custom domains
### Stop webapp deployer
```bash
# In directory where webapp-deployer deployment was created
laconic-so deployment --dir webapp-deployer stop
```
### Update webapp deployer setup
* Fetch latest stack repos:
```bash
# In directory where webapp-deployer deployment was created
laconic-so --stack webapp-deployer-backend setup-repositories --pull
# Confirm latest commit hash in the ~/cerc/webapp-deployment-status-api repo
```
* Rebuild container images:
```bash
laconic-so --stack webapp-deployer-backend build-containers --force-rebuild
```
* Push stack images to the container registry:
* Login to the container registry:
```bash
# Set required variables
# eg: container-registry.pwa.laconic.com
CONTAINER_REGISTRY_URL=
CONTAINER_REGISTRY_USERNAME=
CONTAINER_REGISTRY_PASSWORD=
# login to container registry
docker login $CONTAINER_REGISTRY_URL --username $CONTAINER_REGISTRY_USERNAME --password $CONTAINER_REGISTRY_PASSWORD
# WARNING! Using --password via the CLI is insecure. Use --password-stdin.
# WARNING! Your password will be stored unencrypted in /home/dev2/.docker/config.json.
# Configure a credential helper to remove this warning. See
# https://docs.docker.com/engine/reference/commandline/login/#credential-stores
# Login Succeeded
```
* Push images:
```bash
laconic-so deployment --dir webapp-deployer push-images
```
* Update deployer config (`webapp-deployer/config.env`):
```bash
# Update FQDN policy to "allow" for allowing custom domains
FQDN_POLICY="allow"
# Set IP of your k8s cluster (IP set in DNS records for users)
DEPLOYMENT_IP="a.b.c.d"
```
### Start webapp deployer
* Start the webapp deployer:
```bash
laconic-so deployment --dir webapp-deployer start
```
* Get the webapp-deployer pod id:
```bash
laconic-so deployment --dir webapp-deployer ps
# Expected output
# Running containers:
# id: default/laconic-096fed46af974a47-deployment-644db859c7-snbq6, name: laconic-096fed46af974a47-deployment-644db859c7-snbq6, ports: 10.42.2.11:9555->9555
# Set pod id
export POD_ID=
# Example:
# export POD_ID=laconic-096fed46af974a47-deployment-644db859c7-snbq6
```
* Copy over GPG keys files to the webapp-deployer container:
```bash
kubie ctx default
# Copy the GPG key files to the pod
kubectl cp <path-to-your-gpg-private-key> $POD_ID:/app
kubectl cp <path-to-your-gpg-public-key> $POD_ID:/app
# Required every time you stop and start the deployer
```
* Check logs:
```bash
# Deployer
kubectl logs -f $POD_ID
# Deployer auction handler
kubectl logs -f $POD_ID -c cerc-webapp-auction-handler
```

View File

@ -9,8 +9,7 @@ services:
CERC_LACONICD_USER_KEY: ${CERC_LACONICD_USER_KEY}
CERC_LACONICD_BOND_ID: ${CERC_LACONICD_BOND_ID}
CERC_LACONICD_GAS: ${CERC_LACONICD_GAS:-200000}
CERC_LACONICD_FEES: ${CERC_LACONICD_FEES:-200alnt}
CERC_LACONICD_GASPRICE: ${CERC_LACONICD_GASPRICE:-0.001alnt}
CERC_LACONICD_FEES: ${CERC_LACONICD_FEES:-200000alnt}
volumes:
- ../config/laconic-console/cli/create-config.sh:/app/create-config.sh
- laconic-registry-data:/laconic-registry-data

View File

@ -1,49 +0,0 @@
services:
shopify:
restart: unless-stopped
image: cerc/laconic-shopify:local
depends_on:
faucet:
condition: service_healthy
command: ["bash", "-c", "./start-shopify.sh"]
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_SHOPIFY_GRAPHQL_URL: ${CERC_SHOPIFY_GRAPHQL_URL}
CERC_SHOPIFY_ACCESS_TOKEN: ${CERC_SHOPIFY_ACCESS_TOKEN}
CERC_FETCH_ORDER_DELAY: ${CERC_FETCH_ORDER_DELAY:-10000}
CERC_FAUCET_URL: http://faucet:3000/
CERC_ITEMS_PER_ORDER: ${CERC_ITEMS_PER_ORDER:-10}
volumes:
- shopify-data:/app/data
- ../config/laconic-shopify/start-shopify.sh:/app/start-shopify.sh
- ../config/laconic-shopify/product_pricings.json:/app/config/product_pricings.json
extra_hosts:
- "host.docker.internal:host-gateway"
faucet:
restart: unless-stopped
image: cerc/laconic-shopify-faucet:local
command: ["bash", "-c", "./start-faucet.sh"]
environment:
CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
CERC_LACONICD_RPC_ENDPOINT: ${CERC_LACONICD_RPC_ENDPOINT:-https://laconicd-sapo.laconic.com}
CERC_FAUCET_KEY: ${CERC_FAUCET_KEY}
CERC_LACONICD_CHAIN_ID: ${CERC_LACONICD_CHAIN_ID:-laconic-testnet-2}
CERC_LACONICD_PREFIX: ${CERC_LACONICD_PREFIX:-laconic}
CERC_LACONICD_GAS_PRICE: ${CERC_LACONICD_GAS_PRICE:-0.001}
volumes:
- faucet-data:/app/db
- ../config/laconic-shopify/start-faucet.sh:/app/start-faucet.sh
- ../config/laconic-shopify/config-template.toml:/app/environments/config-template.toml
healthcheck:
test: ["CMD", "nc", "-vz", "127.0.0.1", "3000"]
interval: 10s
timeout: 5s
retries: 10
start_period: 5s
extra_hosts:
- "host.docker.internal:host-gateway"
volumes:
shopify-data:
faucet-data:

View File

@ -7,7 +7,6 @@ services:
CERC_MONIKER: ${CERC_MONIKER:-TestnetNode}
CERC_CHAIN_ID: ${CERC_CHAIN_ID:-laconic_9000-1}
CERC_PEERS: ${CERC_PEERS}
MIN_GAS_PRICE: ${MIN_GAS_PRICE:-0.001}
CERC_LOGLEVEL: ${CERC_LOGLEVEL:-info}
volumes:
- laconicd-data:/root/.laconicd

View File

@ -18,7 +18,6 @@ services:
chainId: ${CERC_LACONICD_CHAIN_ID}
gas: ${CERC_LACONICD_GAS}
fees: ${CERC_LACONICD_FEES}
gasPrice: ${CERC_LACONICD_GASPRICE}
EOF
echo "Exported config to $config_file"

View File

@ -1,12 +0,0 @@
[upstream]
rpcEndpoint = "REPLACE_WITH_CERC_LACONICD_RPC_ENDPOINT"
chainId = "REPLACE_WITH_CERC_LACONICD_CHAIN_ID"
prefix = "REPLACE_WITH_CERC_LACONICD_PREFIX"
gasPrice = "REPLACE_WITH_CERC_LACONICD_GAS_PRICE"
faucetKey = "REPLACE_WITH_CERC_FAUCET_KEY"
denom = "alnt"
[server]
port=3000
enableRateLimit=false
dbDir = "db"

View File

@ -1,6 +0,0 @@
{
"10 pre-paid webapp deployments": "100000",
"100 webapp deployments": "1000000",
"500 webapp deployments": "5000000",
"1000 webapp deployments": "10000000"
}

View File

@ -1,29 +0,0 @@
#!/bin/bash
set -e
set -u
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
config_template=$(cat environments/config-template.toml)
target_config="./environments/local.toml"
# Check if faucet key is set
if [ -z "${CERC_FAUCET_KEY:-}" ]; then
echo "Error: CERC_FAUCET_KEY is not set. Exiting..."
exit 1
fi
echo "Using laconicd RPC endpoint: $CERC_LACONICD_RPC_ENDPOINT"
FAUCET_CONFIG=$(echo "$config_template" | \
sed -E "s|REPLACE_WITH_CERC_FAUCET_KEY|${CERC_FAUCET_KEY}|g; \
s|REPLACE_WITH_CERC_LACONICD_CHAIN_ID|${CERC_LACONICD_CHAIN_ID}|g; \
s|REPLACE_WITH_CERC_LACONICD_PREFIX|${CERC_LACONICD_PREFIX}|g; \
s|REPLACE_WITH_CERC_LACONICD_GAS_PRICE|${CERC_LACONICD_GAS_PRICE}|g; \
s|REPLACE_WITH_CERC_LACONICD_RPC_ENDPOINT|${CERC_LACONICD_RPC_ENDPOINT}|g; ")
echo "$FAUCET_CONFIG" > $target_config
echo "Starting faucet..."
node dist/index.js

View File

@ -1,21 +0,0 @@
#!/bin/bash
set -e
set -u
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
set -x
fi
echo "Shopify GraphQL URL: $CERC_SHOPIFY_GRAPHQL_URL"
echo "Shopify access token: $CERC_SHOPIFY_ACCESS_TOKEN"
echo "Fetch order delay: $CERC_FETCH_ORDER_DELAY"
echo "Faucet URL: $CERC_FAUCET_URL"
echo "Number of line items per order: $CERC_ITEMS_PER_ORDER"
export SHOPIFY_GRAPHQL_URL=$CERC_SHOPIFY_GRAPHQL_URL
export SHOPIFY_ACCESS_TOKEN=$CERC_SHOPIFY_ACCESS_TOKEN
export FETCH_ORDER_DELAY=$CERC_FETCH_ORDER_DELAY
export FAUCET_URL=$CERC_FAUCET_URL
export ITEMS_PER_ORDER=$CERC_ITEMS_PER_ORDER
yarn start

View File

@ -21,7 +21,6 @@ echo "Env:"
echo "Moniker: $CERC_MONIKER"
echo "Chain Id: $CERC_CHAIN_ID"
echo "Persistent peers: $CERC_PEERS"
echo "Min gas price: $MIN_GAS_PRICE"
echo "Log level: $CERC_LOGLEVEL"
NODE_HOME=/root/.laconicd
@ -41,16 +40,12 @@ else
echo "Node data dir $NODE_HOME/data already exists, skipping initialization..."
fi
# Enable cors
sed -i 's/cors_allowed_origins.*$/cors_allowed_origins = ["*"]/' $HOME/.laconicd/config/config.toml
# Update config with persistent peers
sed -i "s/^persistent_peers *=.*/persistent_peers = \"$CERC_PEERS\"/g" $NODE_HOME/config/config.toml
echo "Starting laconicd node..."
laconicd start \
--api.enable \
--minimum-gas-prices=${MIN_GAS_PRICE}alnt \
--rpc.laddr="tcp://0.0.0.0:26657" \
--gql-playground --gql-server \
--log_level $CERC_LOGLEVEL \

View File

@ -1,5 +0,0 @@
#!/usr/bin/env bash
# Build cerc/laconic-shopify-faucet
source ${CERC_CONTAINER_BASE_DIR}/build-base.sh
docker build -t cerc/laconic-shopify-faucet:local ${build_command_args} ${CERC_REPO_BASE_DIR}/laconic-faucet

View File

@ -1,9 +0,0 @@
FROM node:20-bullseye
WORKDIR /app
COPY . .
RUN yarn install
CMD ["yarn", "start"]

View File

@ -1,8 +0,0 @@
#!/usr/bin/env bash
# Build cerc/laconic-faucet
source ${CERC_CONTAINER_BASE_DIR}/build-base.sh
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
docker build -t cerc/laconic-shopify:local ${build_command_args} -f ${SCRIPT_DIR}/Dockerfile ${CERC_REPO_BASE_DIR}/shopify

View File

@ -17,13 +17,13 @@ Instructions for running laconic registry CLI and console
* Clone required repositories:
```bash
laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-console setup-repositories --pull
laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-console setup-repositories
```
* Build the container images:
```bash
laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-console build-containers --force-rebuild
laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-console build-containers
```
This should create the following docker images locally:
@ -83,14 +83,9 @@ Instructions for running laconic registry CLI and console
# Gas limit for txs (default: 200000)
CERC_LACONICD_GAS=
# Max fees for txs (default: 200alnt)
# Max fees for txs (default: 200000alnt)
CERC_LACONICD_FEES=
# Gas price to use for txs (default: 0.001alnt)
# Use for auto fees calculation, gas and fees not required to be set in that case
# Reference: https://git.vdb.to/cerc-io/laconic-registry-cli#gas-and-fees
CERC_LACONICD_GASPRICE=
# Console configuration
# Laconicd (hosted) GQL endpoint (default: http://localhost:9473)

View File

@ -2,8 +2,8 @@ version: "1.0"
name: laconic-console
description: "Laconic registry CLI and console"
repos:
- git.vdb.to/cerc-io/laconic-registry-cli@v0.2.10
- git.vdb.to/cerc-io/laconic-console@v0.2.5
- git.vdb.to/cerc-io/laconic-registry-cli
- git.vdb.to/cerc-io/laconic-console
containers:
- cerc/laconic-registry-cli
- cerc/webapp-base

View File

@ -1,109 +0,0 @@
# laconic-shopify
Instructions for running the Laconic Shopify service
## Setup
* Clone the stack repo:
```bash
laconic-so fetch-stack git.vdb.to/cerc-io/testnet-laconicd-stack
```
* Clone the laconic-shopify repository:
```bash
laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-shopify setup-repositories --git-ssh
```
* Build the container image:
```bash
laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-shopify build-containers
```
This should create the `cerc/laconic-shopify` and `cerc/laconic-shopify-faucet` images locally
## Create a deployment
* Create a spec file for the deployment:
```bash
laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-shopify deploy init --output laconic-shopify-spec.yml
```
* Create a deployment from the spec file:
```bash
laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-shopify deploy create --spec-file laconic-shopify-spec.yml --deployment-dir laconic-shopify-deployment
```
## Configuration
* Inside the `laconic-shopify-deployment` deployment directory, open the `config.env` file and set the following env variables:
```bash
# Shopify GraphQL URL (default: 'https://6h071x-zw.myshopify.com/admin/api/2024-10/graphql.json')
CERC_SHOPIFY_GRAPHQL_URL=
# Access token for Shopify API
CERC_SHOPIFY_ACCESS_TOKEN=
# Delay for fetching orders in milliseconds (default: 10000)
CERC_FETCH_ORDER_DELAY=
# Number of line items per order in Get Orders GraphQL query (default: 10)
CERC_ITEMS_PER_ORDER=
# Private key of a funded faucet account
CERC_FAUCET_KEY=
# laconicd RPC endpoint (default: https://laconicd-sapo.laconic.com/)
CERC_LACONICD_RPC_ENDPOINT=
# laconicd chain id (default: laconic-testnet-2)
CERC_LACONICD_CHAIN_ID=
# laconicd address prefix (default: laconic)
CERC_LACONICD_PREFIX=
# laconicd gas price (default: 0.001)
CERC_LACONICD_GAS_PRICE=
```
## Start the deployment
```bash
laconic-so deployment --dir laconic-shopify-deployment start
```
## Check status
* To list and monitor the running containers:
```bash
# With status
docker ps
# Check logs for a container
docker logs -f <CONTAINER_ID>
```
## Clean up
* Stop the `laconic-shopify-faucet` service running in the background:
```bash
# Stop the docker container
laconic-so deployment --dir laconic-shopify-deployment stop
```
* To stop the service and also delete data:
```bash
# Stop the docker containers
laconic-so deployment --dir laconic-shopify-deployment stop --delete-volumes
# Remove deployment directory (deployment will have to be recreated for a re-run)
rm -r laconic-shopify-deployment
```

View File

@ -1,11 +0,0 @@
version: "1.0"
name: laconic-shopify
description: "Service that integrates a Shopify app with the Laconic wallet."
repos:
- git.vdb.to/cerc-io/shopify@v0.1.1
- git.vdb.to/cerc-io/laconic-faucet@v0.1.0-shopify
containers:
- cerc/laconic-shopify
- cerc/laconic-shopify-faucet
pods:
- laconic-shopify

View File

@ -122,9 +122,6 @@ Instructions for running a laconicd testnet full node and joining as a validator
# Output log level (default: info)
CERC_LOGLEVEL=
# Minimum gas price in alnt to accept for transactions (default: "0.001")
MIN_GAS_PRICE
```
* Inside the `laconic-console-deployment` deployment directory, open `config.env` file and set following env variables:
@ -146,14 +143,9 @@ Instructions for running a laconicd testnet full node and joining as a validator
# Gas limit for txs (default: 200000)
CERC_LACONICD_GAS=
# Max fees for txs (default: 200alnt)
# Max fees for txs (default: 200000alnt)
CERC_LACONICD_FEES=
# Gas price to use for txs (default: 0.001alnt)
# Use for auto fees calculation, gas and fees not required to be set in that case
# Reference: https://git.vdb.to/cerc-io/laconic-registry-cli#gas-and-fees
CERC_LACONICD_GASPRICE=
# Console configuration
# Laconicd (hosted) GQL endpoint (default: http://localhost:9473)

View File

@ -2,7 +2,7 @@ version: "1.0"
name: testnet-laconicd
description: "Laconicd full node"
repos:
- git.vdb.to/cerc-io/laconicd@v0.1.11
- git.vdb.to/cerc-io/laconicd
containers:
- cerc/laconicd
pods:

View File

@ -1,946 +0,0 @@
# testnet-nitro-node
## Prerequisites
* Local:
* Clone the `cerc-io/testnet-ops` repository:
```bash
git clone git@git.vdb.to:cerc-io/testnet-ops.git
```
* Ansible: see [installation](https://git.vdb.to/cerc-io/testnet-ops#installation)
* On deployment machine:
* User with passwordless sudo: see [setup](https://git.vdb.to/cerc-io/testnet-ops/src/branch/main/user-setup/README.md#user-setup)
* laconic-so: see [installation](https://git.vdb.to/cerc-io/testnet-ops/src/branch/main/stack-orchestrator-setup/README.md#setup-stack-orchestrator)
## Setup
* Move to `nitro-nodes-setup`:
```bash
cd testnet-ops/nitro-nodes-setup
```
* Fetch the required Nitro node config:
```bash
wget -O nitro-vars.yml https://git.vdb.to/cerc-io/testnet-laconicd-stack/raw/branch/main/ops/stage2/nitro-node-config.yml
```
* Fetch required asset addresses:
```bash
wget -O assets.json https://git.vdb.to/cerc-io/testnet-laconicd-stack/raw/branch/main/ops/stage2/assets.json
```
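* The `jq` commands below assume `assets.json` is keyed by chain id, with token addresses nested under `contracts`; a quick way to inspect it (structure shown is illustrative, actual values will differ):
```bash
jq . assets.json
# Illustrative output:
# {
#   "1212": [
#     {
#       "name": "<chain-name>",
#       "chainId": "1212",
#       "contracts": {
#         "TestToken": { "address": "0x..." },
#         "TestToken2": { "address": "0x..." }
#       }
#     }
#   ]
# }
```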
* Ask the testnet operator to send L1 tokens and ETH to your chain address
* [README for transferring tokens](./ops/nitro-token-ops.md#transfer-deployed-tokens-to-given-address)
* [README for transferring ETH](./ops/nitro-token-ops.md#transfer-eth)
* Check balance of your tokens once they are transferred:
```bash
# Note: Account address should include the "0x" prefix
export ACCOUNT_ADDRESS="<account-address>"
export GETH_CHAIN_ID="1212"
export GETH_CHAIN_URL="https://fixturenet-eth.laconic.com"
export ASSET_ADDRESS_1=$(jq -r --arg chainId "$GETH_CHAIN_ID" '.[$chainId][0].contracts.TestToken.address' assets.json)
export ASSET_ADDRESS_2=$(jq -r --arg chainId "$GETH_CHAIN_ID" '.[$chainId][0].contracts.TestToken2.address' assets.json)
# Check balance of eth account
curl -X POST $GETH_CHAIN_URL \
-H "Content-Type: application/json" \
-d '{
"jsonrpc":"2.0",
"method":"eth_getBalance",
"params":["'"$ACCOUNT_ADDRESS"'", "latest"],
"id":1
}'
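# The "data" field in the eth_call requests below is a raw ERC-20 balanceOf(address) call:
# 0x70a08231 is the balanceOf(address) function selector, followed by the account
# address left-padded to 32 bytes (so its "0x" prefix must be stripped).
# The returned "result" is the token balance as a hex-encoded integer.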
# Check balance of first asset address
curl -X POST $GETH_CHAIN_URL \
-H "Content-Type: application/json" \
-d '{
"jsonrpc":"2.0",
"method":"eth_call",
"params":[{
"to": "'"$ASSET_ADDRESS_1"'",
"data": "0x70a08231000000000000000000000000'"$ACCOUNT_ADDRESS"'"
}, "latest"],
"id":1
}'
# Check balance of second asset address
curl -X POST $GETH_CHAIN_URL \
-H "Content-Type: application/json" \
-d '{
"jsonrpc":"2.0",
"method":"eth_call",
"params":[{
"to": "'"$ASSET_ADDRESS_2"'",
"data": "0x70a08231000000000000000000000000'"$ACCOUNT_ADDRESS"'"
}, "latest"],
"id":1
}'
```
* Edit `nitro-vars.yml` and add the following variables:
```bash
# Private key for your Nitro account (same as the one used in stage0 onboarding)
# Export the key from Laconic wallet (https://wallet.laconic.com)
nitro_sc_pk: ""
# Private key for a funded account on L1
# This account should have L1 tokens for funding your Nitro channels
nitro_chain_pk: ""
# Multiaddr with a publicly accessible IP address / DNS for your L1 nitro node
# Use port 3007
# Example: "/ip4/192.168.x.y/tcp/3007"
# Example: "/dns4/example.com/tcp/3007"
nitro_l1_ext_multiaddr: ""
# Multiaddr with a publicly accessible IP address / DNS for your L2 nitro node
# Use port 3009
# Example: "/ip4/192.168.x.y/tcp/3009"
# Example: "/dns4/example.com/tcp/3009"
nitro_l2_ext_multiaddr: ""
```
* Edit the `setup-vars.yml` to update the target directory:
```bash
# Set absolute path to desired deployments directory (under your user)
# Example: /home/dev/nitro-node-deployments
...
nitro_directory: <path-to-deployments-dir>
...
# Will create deployments at <path-to-deployments-dir>/l1-nitro-deployment and <path-to-deployments-dir>/l2-nitro-deployment
```
## Run Nitro Nodes
**NOTE**: When following this setup, Nitro nodes from two parties won't be able to communicate with each other if they are on the same network
Nitro nodes can be set up on a target machine using Ansible:
* In `testnet-ops/nitro-nodes-setup`, create a new `hosts.ini` file:
```bash
cp ../hosts.example.ini hosts.ini
```
* Edit the [`hosts.ini`](./hosts.ini) file to run the playbook on a remote machine:
```ini
[<deployment_host>]
<host_name> ansible_host=<target_ip> ansible_user=<ssh_user> ansible_ssh_common_args='-o ForwardAgent=yes'
```
* Replace `<deployment_host>` with `nitro_host`
* Replace `<host_name>` with the alias of your choice
* Replace `<target_ip>` with the IP address or hostname of the target machine
* Replace `<ssh_user>` with the username of the user that you set up on target machine (e.g. dev, ubuntu)
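* For example, a filled-in `hosts.ini` might look like this (hypothetical host alias, IP and user):
```ini
[nitro_host]
my-nitro-node ansible_host=203.0.113.10 ansible_user=dev ansible_ssh_common_args='-o ForwardAgent=yes'
```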
* Verify that you are able to connect to the host using the following command:
```bash
ansible all -m ping -i hosts.ini
# If using password-based authentication, enter the ssh password when prompted; otherwise, leave it blank
# Expected output:
# <host_name> | SUCCESS => {
# "ansible_facts": {
# "discovered_interpreter_python": "/usr/bin/python3.10"
# },
# "changed": false,
# "ping": "pong"
# }
```
* Execute the `run-nitro-nodes.yml` Ansible playbook to set up and run a Nitro node (L1+L2):
```bash
LANG=en_US.utf8 ansible-playbook -i hosts.ini run-nitro-nodes.yml --extra-vars='{ "target_host": "nitro_host"}' --user $USER
```
### Check Deployment Status
* Run the following commands on the deployment machine:
```bash
cd <path-to-deployments-dir>
# Check the logs, ensure that the nodes are running
laconic-so deployment --dir l1-nitro-deployment logs nitro-node -f
laconic-so deployment --dir l2-nitro-deployment logs nitro-node -f
# Let L1 node sync up with the chain
# Expected logs after sync:
# nitro-node-1 | 2:04PM INF Initializing Http RPC transport...
# nitro-node-1 | 2:04PM INF Completed RPC server initialization url=127.0.0.1:4005/api/v1
```
* Get your Nitro node's info:
```bash
laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-node-info -p 4005 -h nitro-node"
# Expected output:
# {
# "SCAddress": "0xd0eA8b27591b1D070cCcD4D30b8D408fe794FDfc",
# "MessageServicePeerId": "16Uiu2HAmSHRjoxveaPmJipzmdq69U8zme8BMnFjSBPferj1E5XAd"
# }
# SCAddress -> nitro address, MessageServicePeerId -> libp2p peer id
```
## Create Channels
Create a ledger channel with the bridge on L1, which is then mirrored on L2
* Run the following commands on the deployment machine
* Set required variables:
```bash
cd <path-to-deployments-dir>
export BRIDGE_NITRO_ADDRESS=$(yq eval '.bridge_nitro_address' nitro-node-config.yml)
export GETH_CHAIN_ID="1212"
# Get asset addresses from assets.json file
export ASSET_ADDRESS_1=$(jq -r --arg chainId "$GETH_CHAIN_ID" '.[$chainId][0].contracts.TestToken.address' assets.json)
export ASSET_ADDRESS_2=$(jq -r --arg chainId "$GETH_CHAIN_ID" '.[$chainId][0].contracts.TestToken2.address' assets.json)
```
* Check that you have no existing channels on L1 or L2:
```bash
laconic-so deployment --dir l1-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-all-ledger-channels -p 4005 -h nitro-node"
laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-all-ledger-channels -p 4005 -h nitro-node"
# Expected output:
# []
```
* Ensure that your account has enough balance of tokens from `assets.json`
* Create a ledger channel between your L1 Nitro node and the bridge with custom assets (each `--asset` value is `<asset-address>:<amount>,<amount>`, the two funding amounts that show up as `MyBalance` / `TheirBalance` below):
```bash
laconic-so deployment --dir l1-nitro-deployment exec nitro-rpc-client "nitro-rpc-client direct-fund $BRIDGE_NITRO_ADDRESS --asset "$ASSET_ADDRESS_1:1000,1000" --asset "$ASSET_ADDRESS_2:1000,1000" -p 4005 -h nitro-node"
# Follow your L1 Nitro node logs for progress
# Expected Output:
# Objective started DirectFunding-0x161d289a50222caa781db215bb82a3ede4f557217742245525b8e8cbff04ec21
# Channel Open 0x161d289a50222caa781db215bb82a3ede4f557217742245525b8e8cbff04ec21
# Set the resulting ledger channel id in a variable
export LEDGER_CHANNEL_ID=
```
* Check the [Troubleshooting](#troubleshooting) section if the command to create a ledger channel fails or gets stuck
* Once the direct-fund objective is complete, the bridge will create a mirrored channel on L2
* Check the L2 Nitro node's logs to see that a bridged-fund objective has completed:
```bash
laconic-so deployment --dir l2-nitro-deployment logs nitro-node -f --tail 30
# Expected Output:
# nitro-node-1 | 5:01AM INF INFO Objective cranked address=0xaaa6628ec44a8a742987ef3a114ddfe2d4f7adce objective-id=bridgedfunding-0x6a9f5ccf1fa802525d794f4a899897f947615f6acc7141e61e056a8bfca29179 waiting-for=WaitingForNothing
# nitro-node-1 | 5:01AM INF INFO Objective is complete & returned to API address=0xaaa6628ec44a8a742987ef3a114ddfe2d4f7adce objective-id=bridgedfunding-0x6a9f5ccf1fa802525d794f4a899897f947615f6acc7141e61e056a8bfca29179
```
* Check the status of the L1 ledger channel with the bridge using the channel id:
```bash
laconic-so deployment --dir l1-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-ledger-channel $LEDGER_CHANNEL_ID -p 4005 -h nitro-node"
# Example output:
# {
# ID: '0xbb28acc2e1543f4b41eb1ab9eb2e354b18554aefe4e7f0fa5f20046869d8553f',
# Status: 'Open',
# Balances: [
# {
# AssetAddress: '0xa6b4b8b84576047a53255649b4994743d9c83a71',
# Me: '0xdaaa6ef3bc03f9c7dabc9a02847387d2c19107f5',
# Them: '0xf0e6a85c6d23aca9ff1b83477d426ed26f218185',
# MyBalance: 1000n,
# TheirBalance: 1000n
# },
# {
# AssetAddress: '0x0000000000000000000000000000000000000000',
# Me: '0xdaaa6ef3bc03f9c7dabc9a02847387d2c19107f5',
# Them: '0xf0e6a85c6d23aca9ff1b83477d426ed26f218185',
# MyBalance: 1000n,
# TheirBalance: 1000n
# }
# ],
# ChannelMode: 'Open'
# }
```
* Check status of the mirrored channel on L2:
```bash
laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-all-ledger-channels -p 4005 -h nitro-node"
# Example output:
# [
# {
# "ID": "0xb34210b763d4fdd534190ba11886ad1daa1e411c87be6fd20cff74cd25077c46",
# "Status": "Open",
# "Balances": [
# {
# "AssetAddress": "0xa4351114dae1abeb2d552d441c9733c72682a45d",
# "Me": "0x075400039e303b3fb46c0cff0404c5fa61947c05",
# "Them": "0xf0e6a85c6d23aca9ff1b83477d426ed26f218185",
# "MyBalance": 1000,
# "TheirBalance": 1000
# },
# {
# "AssetAddress": "0x314e43f9825b10961859c2a62c2de6a765c1c1f1",
# "Me": "0x075400039e303b3fb46c0cff0404c5fa61947c05",
# "Them": "0xf0e6a85c6d23aca9ff1b83477d426ed26f218185",
# "MyBalance": 1000,
# "TheirBalance": 1000
# }
# ],
# "ChannelMode": "Open"
# }
# ]
```
## Payments On L2 Channel
Perform payments using a virtual payment channel created with another Nitro node over the mirrored L2 channel, with the bridge as an intermediary
* Prerequisite: A ledger channel is required to create a payment channel
* Note: Currently the payment channel is created from the first asset present in the ledger channel
* Run the following commands on the deployment machine
* Switch to the `nitro-node` directory:
```bash
cd <path-to-deployments-dir>
```
* Check status of the mirrored channel on L2:
```bash
laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-all-ledger-channels -p 4005 -h nitro-node"
# Example output:
# [
# {
# "ID": "0xb34210b763d4fdd534190ba11886ad1daa1e411c87be6fd20cff74cd25077c46",
# "Status": "Open",
# "Balances": [
# {
# "AssetAddress": "0xa4351114dae1abeb2d552d441c9733c72682a45d",
# "Me": "0x075400039e303b3fb46c0cff0404c5fa61947c05",
# "Them": "0xf0e6a85c6d23aca9ff1b83477d426ed26f218185",
# "MyBalance": 1000,
# "TheirBalance": 1000
# },
# {
# "AssetAddress": "0x314e43f9825b10961859c2a62c2de6a765c1c1f1",
# "Me": "0x075400039e303b3fb46c0cff0404c5fa61947c05",
# "Them": "0xf0e6a85c6d23aca9ff1b83477d426ed26f218185",
# "MyBalance": 1000,
# "TheirBalance": 1000
# }
# ],
# "ChannelMode": "Open"
# }
# ]
```
* Set required variables:
```bash
export BRIDGE_NITRO_ADDRESS=$(yq eval '.bridge_nitro_address' nitro-node-config.yml)
# Mirrored channel on L2
export L2_CHANNEL_ID=<l2-channel-id>
# Amount to create the payment channel with
export PAYMENT_CHANNEL_AMOUNT=500
```
* Set the counterparty address:
```bash
export COUNTER_PARTY_ADDRESS=<counterparty-nitro-address>
```
* Get the nitro address of the counterparty's node with whom you want to create the payment channel
* To get the nitro address of your own node:
```bash
laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-node-info -p 4005 -h nitro-node"
# `SCAddress` -> nitro address
```
* Check for existing payment channels for the L2 channel:
```bash
laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-payment-channels-by-ledger $L2_CHANNEL_ID -p 4005 -h nitro-node"
```
* Create a virtual payment channel:
```bash
laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client virtual-fund $COUNTER_PARTY_ADDRESS $BRIDGE_NITRO_ADDRESS --amount $PAYMENT_CHANNEL_AMOUNT -p 4005 -h nitro-node"
# Follow your L2 Nitro node logs for progress
# Expected Output:
# Objective started VirtualFund-0x43db45a101658387263b36d613322cc952d8ce5b70de51e3a495513c256bef4d
# Channel Open 0x43db45a101658387263b36d613322cc952d8ce5b70de51e3a495513c256bef4d
# Set the resulting payment channel id in a variable
PAYMENT_CHANNEL_ID=<payment-channel-id>
```
Multiple virtual payment channels can be created at once
* Check the payment channel's status:
```bash
laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-payment-channel $PAYMENT_CHANNEL_ID -p 4005 -h nitro-node"
# Expected output:
# {
# ID: '0xb29aeb32c9495a793ebf7bd116232075d1e7bfe89fc82281c7d498e3ffd3e3bf',
# Status: 'Open',
# Balance: {
# AssetAddress: '0x0000000000000000000000000000000000000000',
# Payee: '<your-nitro-address>',
# Payer: '<counterparty-nitro-address>',
# PaidSoFar: 0n,
# RemainingFunds: <payment-channel-amount>n
# }
# }
```
* Send payments using the virtual payment channel:
```bash
export PAY_AMOUNT=200
laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client pay $PAYMENT_CHANNEL_ID $PAY_AMOUNT -p 4005 -h nitro-node"
# Expected output
# {
# Amount: <pay-amount>,
# Channel: '<payment-channel-id>'
# }
# This can be done multiple times until the payment channel balance is exhausted
```
* Check the payment channel's status again to view the updated channel state
* Close the payment channel to settle on the L2 mirrored channel:
```bash
laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client virtual-defund $PAYMENT_CHANNEL_ID -p 4005 -h nitro-node"
# Expected output:
# Objective started VirtualDefund-0x43db45a101658387263b36d613322cc952d8ce5b70de51e3a495513c256bef4d
# Channel complete 0x43db45a101658387263b36d613322cc952d8ce5b70de51e3a495513c256bef4d
```
* Check L2 mirrored channel's status after the virtual payment channel is closed:
* This can be checked by both nodes
```bash
laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-all-ledger-channels -p 4005 -h nitro-node"
# Example output:
# [
# {
# "ID": "0xb34210b763d4fdd534190ba11886ad1daa1e411c87be6fd20cff74cd25077c46",
# "Status": "Open",
# "Balances": [
# {
# "AssetAddress": "0xa4351114dae1abeb2d552d441c9733c72682a45d",
# "Me": "0x075400039e303b3fb46c0cff0404c5fa61947c05",
# "Them": "0xf0e6a85c6d23aca9ff1b83477d426ed26f218185",
# "MyBalance": <updated balance>,
# "TheirBalance": <updated balance>
# },
# {
# "AssetAddress": "0x314e43f9825b10961859c2a62c2de6a765c1c1f1",
# "Me": "0x075400039e303b3fb46c0cff0404c5fa61947c05",
# "Them": "0xf0e6a85c6d23aca9ff1b83477d426ed26f218185",
# "MyBalance": <updated balance>,
# "TheirBalance": <updated balance>
# }
# ],
# "ChannelMode": "Open"
# }
# ]
```
Your balance on the L2 channel should be reduced by the total payments made on the virtual payment channel
## Swaps on L2
Perform swaps using a swap channel created with another Nitro node over the mirrored L2 channel, with the bridge as an intermediary
* Prerequisite: A ledger channel is required to create a swap channel
* Run the following commands on the deployment machine
* Switch to the `nitro-node` directory:
```bash
cd <path-to-deployments-dir>
```
* Check status of the mirrored channel on L2:
```bash
laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-all-ledger-channels -p 4005 -h nitro-node"
# Example output:
# [
# {
# "ID": "0xb34210b763d4fdd534190ba11886ad1daa1e411c87be6fd20cff74cd25077c46",
# "Status": "Open",
# "Balances": [
# {
# "AssetAddress": "0xa4351114dae1abeb2d552d441c9733c72682a45d",
# "Me": "0x075400039e303b3fb46c0cff0404c5fa61947c05",
# "Them": "0xf0e6a85c6d23aca9ff1b83477d426ed26f218185",
# "MyBalance": 1000,
# "TheirBalance": 1000
# },
# {
# "AssetAddress": "0x314e43f9825b10961859c2a62c2de6a765c1c1f1",
# "Me": "0x075400039e303b3fb46c0cff0404c5fa61947c05",
# "Them": "0xf0e6a85c6d23aca9ff1b83477d426ed26f218185",
# "MyBalance": 1000,
# "TheirBalance": 1000
# }
# ],
# "ChannelMode": "Open"
# }
# ]
```
* Set required variables:
```bash
export BRIDGE_NITRO_ADDRESS=$(yq eval '.bridge_nitro_address' nitro-node-config.yml)
export GETH_CHAIN_ID="1212"
# Get asset addresses from assets.json file
export ASSET_ADDRESS_1=$(jq -r --arg chainId "$GETH_CHAIN_ID" '.[$chainId][0].contracts.TestToken.address' assets.json)
export ASSET_ADDRESS_2=$(jq -r --arg chainId "$GETH_CHAIN_ID" '.[$chainId][0].contracts.TestToken2.address' assets.json)
```
* Set the counterparty address:
```bash
export COUNTER_PARTY_ADDRESS=<counterparty-nitro-address>
```
* Get the nitro address of the counterparty's node with whom you want to create the swap channel
* To get the nitro address of your own node:
```bash
laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-node-info -p 4005 -h nitro-node"
# `SCAddress` -> nitro address
```
* Create a swap channel:
```bash
laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client swap-fund $COUNTER_PARTY_ADDRESS $BRIDGE_NITRO_ADDRESS --asset "$ASSET_ADDRESS_1:100,100" --asset "$ASSET_ADDRESS_2:100,100" -p 4005 -h nitro-node"
# Expected output
# Objective started SwapFund-0x1dbd58d314f123f4b0f4147eee7fd92fa523ba7082d8a75b846f6d1189e2f0e9
# Channel open 0x1dbd58d314f123f4b0f4147eee7fd92fa523ba7082d8a75b846f6d1189e2f0e9
```
* Export swap channel ID:
```bash
export SWAP_CHANNEL_ID=
```
* Check swap channel:
```bash
laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-swap-channel $SWAP_CHANNEL_ID -p 4005 -h nitro-node"
# Expected output:
# {
# ID: '0x1dbd58d314f123f4b0f4147eee7fd92fa523ba7082d8a75b846f6d1189e2f0e9',
# Status: 'Open',
# Balances: [
# {
# AssetAddress: '0xa4351114dae1abeb2d552d441c9733c72682a45d',
# Me: '0x075400039e303b3fb46c0cff0404c5fa61947c05',
# Them: '0xd0ea8b27591b1d070cccd4d30b8d408fe794fdfc',
# MyBalance: 100n,
# TheirBalance: 100n
# },
# {
# AssetAddress: '0x314e43f9825b10961859c2a62c2de6a765c1c1f1',
# Me: '0x075400039e303b3fb46c0cff0404c5fa61947c05',
# Them: '0xd0ea8b27591b1d070cccd4d30b8d408fe794fdfc',
# MyBalance: 100n,
# TheirBalance: 100n
# }
# ]
# }
```
### Performing swaps
* Ensure that environment variables for asset addresses are set (should be done by both parties):
```bash
export GETH_CHAIN_ID="1212"
# Get asset addresses from assets.json file
export ASSET_ADDRESS_1=$(jq -r --arg chainId "$GETH_CHAIN_ID" '.[$chainId][0].contracts.TestToken.address' assets.json)
export ASSET_ADDRESS_2=$(jq -r --arg chainId "$GETH_CHAIN_ID" '.[$chainId][0].contracts.TestToken2.address' assets.json)
```
* Get all active swap channels for a specific mirrored ledger channel (should be done by both parties)
* To get mirrored ledger channels:
```bash
laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-all-ledger-channels -p 4005 -h nitro-node"
# Example output:
# [
# {
# "ID": "0xb34210b763d4fdd534190ba11886ad1daa1e411c87be6fd20cff74cd25077c46",
# "Status": "Open",
# "Balances": [
# {
# "AssetAddress": "0xa4351114dae1abeb2d552d441c9733c72682a45d",
# "Me": "0x075400039e303b3fb46c0cff0404c5fa61947c05",
# "Them": "0xf0e6a85c6d23aca9ff1b83477d426ed26f218185",
# "MyBalance": 1000n,
# "TheirBalance": 1000n
# },
# {
# "AssetAddress": "0x314e43f9825b10961859c2a62c2de6a765c1c1f1",
# "Me": "0x075400039e303b3fb46c0cff0404c5fa61947c05",
# "Them": "0xf0e6a85c6d23aca9ff1b83477d426ed26f218185",
# "MyBalance": 1000n,
# "TheirBalance": 1000n
# }
# ],
# "ChannelMode": "Open"
# }
# ]
```
* Export ledger channel ID:
```bash
export LEDGER_CHANNEL_ID=
```
* To get swap channels for a ledger channel:
```bash
laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-swap-channels-by-ledger $LEDGER_CHANNEL_ID -p 4005 -h nitro-node"
# Example Output:
# [
# {
# ID: '0x1dbd58d314f123f4b0f4147eee7fd92fa523ba7082d8a75b846f6d1189e2f0e9',
# Status: 'Open',
# Balances: [
# {
# AssetAddress: '0xa4351114dae1abeb2d552d441c9733c72682a45d',
# Me: '0x075400039e303b3fb46c0cff0404c5fa61947c05',
# Them: '0xd0ea8b27591b1d070cccd4d30b8d408fe794fdfc',
# MyBalance: 100,
# TheirBalance: 100n
# },
# {
# AssetAddress: '0x314e43f9825b10961859c2a62c2de6a765c1c1f1',
# Me: '0x075400039e303b3fb46c0cff0404c5fa61947c05',
# Them: '0xd0ea8b27591b1d070cccd4d30b8d408fe794fdfc',
# MyBalance: 100,
# TheirBalance: 100
# }
# ]
# }
# ]
```
* Export swap channel ID:
```bash
export SWAP_CHANNEL_ID=
```
* One of the participants can initiate the swap, and the other will either accept it or reject it
* For initiating the swap:
```bash
laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client swap-initiate $SWAP_CHANNEL_ID --AssetIn "$ASSET_ADDRESS_1:20" --AssetOut "$ASSET_ADDRESS_2:10" -p 4005 -h nitro-node"
# Expected output:
# {
# SwapAssetsData: {
# TokenIn: '0xa4351114dae1abeb2d552d441c9733c72682a45d',
# TokenOut: '0x314e43f9825b10961859c2a62c2de6a765c1c1f1',
# AmountIn: 20,
# AmountOut: 10
# },
# Channel: '0x1dbd58d314f123f4b0f4147eee7fd92fa523ba7082d8a75b846f6d1189e2f0e9'
# }
```
OR
* For receiving the swap:
* Get the pending swap:
```bash
laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-pending-swap $SWAP_CHANNEL_ID -p 4005 -h nitro-node"
# Expected output:
# {
# Id: '0x7d582020753335cfd2f2af14127c9b51c7ed7a5d547a674d9cb04fe62de6ddf3',
# ChannelId: '0x1dbd58d314f123f4b0f4147eee7fd92fa523ba7082d8a75b846f6d1189e2f0e9',
# Exchange: {
# TokenIn: '0xa4351114dae1abeb2d552d441c9733c72682a45d',
# TokenOut: '0x314e43f9825b10961859c2a62c2de6a765c1c1f1',
# AmountIn: 20,
# AmountOut: 10
# },
# Sigs: {
# '0': '0x0a018de18a091f7bfb400d9bc64fe958d298882e569c1668c5b1c853b5493221576b2d72074ef6e1899b79e60eaa9934afac5c1e07b7000746bac5b3b1da93311b'
# },
# Nonce: 2840594896360394000
# }
```
* Export swap ID:
```bash
export SWAP_ID=
```
* Either accept or reject the swap
* To accept:
```bash
laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client swap-accept $SWAP_ID -p 4005 -h nitro-node"
# Expected output:
# Confirming Swap with accepted
# Objective complete Swap-0x7d582020753335cfd2f2af14127c9b51c7ed7a5d547a674d9cb04fe62de6ddf3
```
OR
* To reject:
```bash
laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client swap-reject $SWAP_ID -p 4005 -h nitro-node"
# Expected output:
# Confirming Swap with accepted
# Objective complete Swap-0x7d582020753335cfd2f2af14127c9b51c7ed7a5d547a674d9cb04fe62de6ddf3
```
* Check swap channel:
```bash
laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-swap-channel $SWAP_CHANNEL_ID -p 4005 -h nitro-node"
# Example output:
# {
# ID: '0x1dbd58d314f123f4b0f4147eee7fd92fa523ba7082d8a75b846f6d1189e2f0e9',
# Status: 'Open',
# Balances: [
# {
# AssetAddress: '0xa4351114dae1abeb2d552d441c9733c72682a45d',
# Me: '0xd0ea8b27591b1d070cccd4d30b8d408fe794fdfc',
# Them: '0x075400039e303b3fb46c0cff0404c5fa61947c05',
# MyBalance: 120n,
# TheirBalance: 80n
# },
# {
# AssetAddress: '0x314e43f9825b10961859c2a62c2de6a765c1c1f1',
# Me: '0xd0ea8b27591b1d070cccd4d30b8d408fe794fdfc',
# Them: '0x075400039e303b3fb46c0cff0404c5fa61947c05',
# MyBalance: 90n,
# TheirBalance: 110n
# }
# ]
# }
```
* Close swap channel:
```bash
laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client swap-defund $SWAP_CHANNEL_ID -p 4005 -h nitro-node"
# Expected output:
# Objective started SwapDefund-0x1dbd58d314f123f4b0f4147eee7fd92fa523ba7082d8a75b846f6d1189e2f0e9
# Objective complete SwapDefund-0x1dbd58d314f123f4b0f4147eee7fd92fa523ba7082d8a75b846f6d1189e2f0e9
```
* Check L2 mirrored channel status:
```bash
laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-all-ledger-channels -p 4005 -h nitro-node"
# Example output:
# [
# {
# "ID": "0xb34210b763d4fdd534190ba11886ad1daa1e411c87be6fd20cff74cd25077c46",
# "Status": "Open",
# "Balances": [
# {
# "AssetAddress": "0xa4351114dae1abeb2d552d441c9733c72682a45d",
# "Me": "0x075400039e303b3fb46c0cff0404c5fa61947c05",
# "Them": "0xf0e6a85c6d23aca9ff1b83477d426ed26f218185",
# "MyBalance": <updated balance>,
# "TheirBalance": <updated balance>
# },
# {
# "AssetAddress": "0x314e43f9825b10961859c2a62c2de6a765c1c1f1",
# "Me": "0x075400039e303b3fb46c0cff0404c5fa61947c05",
# "Them": "0xf0e6a85c6d23aca9ff1b83477d426ed26f218185",
# "MyBalance": <updated balance>,
# "TheirBalance": <updated balance>
# }
# ],
# "ChannelMode": "Open"
# }
# ]
```
## Update nitro nodes
* Switch to deployments dir:
```bash
cd $DEPLOYMENTS_DIR/nitro-node
```
* Rebuild containers:
```bash
laconic-so --stack ~/cerc/nitro-stack/stack-orchestrator/stacks/nitro-node build-containers --force-rebuild
```
* Restart the nodes:
```bash
laconic-so deployment --dir l1-nitro-deployment stop
laconic-so deployment --dir l1-nitro-deployment start
laconic-so deployment --dir l2-nitro-deployment stop
laconic-so deployment --dir l2-nitro-deployment start
```
## Clean up
* Switch to deployments dir:
```bash
cd <path-to-deployments-dir>
```
* Stop all Nitro services running in the background:
```bash
laconic-so deployment --dir l1-nitro-deployment stop
laconic-so deployment --dir l2-nitro-deployment stop
```
* To stop all services and also delete data:
```bash
laconic-so deployment --dir l1-nitro-deployment stop --delete-volumes
laconic-so deployment --dir l2-nitro-deployment stop --delete-volumes
# Remove deployment directories (deployments will have to be recreated for a re-run)
sudo rm -r l1-nitro-deployment
sudo rm -r l2-nitro-deployment
```
## Troubleshooting
* Check the logs of the Nitro node to see if the objective has completed
```bash
# To check logs of L1 nitro-node
laconic-so deployment --dir l1-nitro-deployment logs nitro-node -f --tail 30
# To check logs of L2 nitro-node
laconic-so deployment --dir l2-nitro-deployment logs nitro-node -f --tail 30
```
* If the objective has completed, you can safely stop (`Ctrl+C`) the running CLI command and continue with the remaining instructions
* Stop (`Ctrl+C`) the direct-fund command if it is stuck
* Restart the L1 Nitro node:
* Stop the deployment:
```bash
cd <path-to-deployments-dir>
laconic-so deployment --dir l1-nitro-deployment stop
```
* Reset the node's durable store:
```bash
sudo rm -rf l1-nitro-deployment/data/nitro_node_data
mkdir l1-nitro-deployment/data/nitro_node_data
```
* Restart the deployment:
```bash
laconic-so deployment --dir l1-nitro-deployment start
```
* Retry the ledger channel creation command

View File

@ -879,7 +879,7 @@
- Open new terminal, check that no channels exist on L2
```bash
laconic-so deployment --dir bridge-deployment exec nitro-rpc-client "nitro-rpc-client get-all-l2-channels -p 4005 -h nitro-bridge"
laconic-so deployment --dir bridge-deployment exec nitro-rpc-client "nitro-rpc-client get-all-l2-channels -p 4006 -h nitro-bridge"
```
- Set address of bridge and address of custom token on L1 in the current terminal
@ -952,7 +952,7 @@
- Check status of all L2 mirrored ledger channels
```bash
laconic-so deployment --dir bridge-deployment exec nitro-rpc-client "nitro-rpc-client get-all-l2-channels -p 4005 -h nitro-bridge"
laconic-so deployment --dir bridge-deployment exec nitro-rpc-client "nitro-rpc-client get-all-l2-channels -p 4006 -h nitro-bridge"
# Expected output:
# {"ID":"0x15dbe6b996e4e46fdd6ea3e2074cbca58014dbb07368e3e7ba286df5c7b9da0d","Status":"Open","Balance":{"AssetAddress":"<Token_address_on_L2>","Me":"0xbbb676f9cff8d242e9eac39d063848807d3d1d94","Them":"0xa8d2d06ace9c7ffc24ee785c2695678aecdfd7a0","MyBalance":1000000,"TheirBalance":1000000},"ChannelMode":"Open"}

View File

@ -63,8 +63,6 @@
```bash
laconic-so fetch-stack git.vdb.to/cerc-io/testnet-laconicd-stack
# See stack documentation stack-orchestrator/stacks/testnet-laconicd/README.md for more details
```
* Clone required repositories:
@ -128,8 +126,6 @@
* Inside the `testnet-laconicd-deployment` deployment directory, open `config.env` file and set following env variables:
```bash
CERC_CHAIN_ID=laconic_9000-1
# Comma separated list of nodes to keep persistent connections to
# Example: "node-1-id@laconicd.laconic.com:26656"
# Use the provided node id
@ -201,15 +197,12 @@ laconic-so deployment --dir testnet-laconicd-deployment start
* From wallet, approve and send transaction to stage1 laconicd chain
<a name="create-validator-using-cli"></a>
* Alternatively, create a validator using the laconicd CLI:
* Import a key pair:
```bash
KEY_NAME=alice
CHAIN_ID=laconic_9000-1
# Restore existing key with mnemonic seed phrase
# You will be prompted to enter mnemonic seed
@ -250,7 +243,7 @@ laconic-so deployment --dir testnet-laconicd-deployment start
```bash
laconic-so deployment --dir testnet-laconicd-deployment exec laconicd "laconicd tx staking create-validator my-validator.json \
--fees 500000alnt \
--chain-id=$CHAIN_ID \
--chain-id=laconic_9000-1 \
--from $KEY_NAME"
```
@ -287,95 +280,6 @@ laconic-so deployment --dir testnet-laconicd-deployment start
sudo rm -r testnet-laconicd-deployment
```
## Upgrade to SAPO testnet
### Prerequisites
* SAPO testnet (testnet2) [genesis file](./ops/stage2/genesis.json) and peers (see below)
### Setup
* If running, stop the stage1 node:
```bash
# In dir where stage1 node deployment (`testnet-laconicd-deployment`) exists
TESTNET_DEPLOYMENT=$(pwd)/testnet-laconicd-deployment
laconic-so deployment --dir testnet-laconicd-deployment stop --delete-volumes
```
* Clone / pull the stack repo:
```bash
laconic-so fetch-stack git.vdb.to/cerc-io/testnet-laconicd-stack --pull
```
* Ensure you are on tag v0.1.10
* Clone / pull the required repositories:
```bash
laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/testnet-laconicd setup-repositories --pull
# If this throws an error as a result of being already checked out to a branch/tag in a repo, remove the repositories and re-run the command
```
Note: Make sure the latest `cerc-io/laconicd` changes have been pulled
* Build the container images:
```bash
laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/testnet-laconicd build-containers --force-rebuild
```
This should create the following docker images locally with latest changes:
* `cerc/laconicd`
### Create a deployment
* Create a deployment from the spec file:
```bash
laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/testnet-laconicd deploy create --spec-file testnet-laconicd-spec.yml --deployment-dir stage-2-testnet-laconicd-deployment
```
* Copy over the published testnet2 genesis file (`.json`) to the data directory in the deployment (`stage-2-testnet-laconicd-deployment/data/laconicd-data/tmp-testnet2`):
```bash
# Example
mkdir -p stage-2-testnet-laconicd-deployment/data/laconicd-data/tmp-testnet2
cp genesis.json stage-2-testnet-laconicd-deployment/data/laconicd-data/tmp-testnet2/genesis.json
```
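* Optionally, sanity-check the copied genesis file (a generic `jq` check, assuming the standard Cosmos SDK genesis layout):
```bash
jq -r '.chain_id' stage-2-testnet-laconicd-deployment/data/laconicd-data/tmp-testnet2/genesis.json
# Should print: laconic-testnet-2
```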
### Configuration
* Inside the `stage-2-testnet-laconicd-deployment` deployment directory, open the `config.env` file and set the following env variables:
```bash
CERC_CHAIN_ID=laconic-testnet-2
CERC_PEERS="bd56622c525a4dfce1e388a7b8c0cb072200797b@5.9.80.214:26103,289f10e94156f47c67bc26a8af747a58e8014f15@148.251.49.108:26656,72cd2f50dff154408cc2c7650a94c2141624b657@65.21.237.194:26656,21322e4fa90c485ff3cb9617438deec4acfa1f0b@143.198.37.25:26656"
# A custom human readable name for this node
CERC_MONIKER="my-node"
```
### Start the deployment
```bash
laconic-so deployment --dir stage-2-testnet-laconicd-deployment start
```
See [Check status](#check-status) to follow sync status of your node
See [Join as testnet validator](#create-validator-using-cli) to join as a validator using laconicd CLI (use chain id `laconic-testnet-2`)
### Clean up
* Same as [Clean up](#clean-up)
## Troubleshooting
* If you face any issues in the onboarding app or the web-wallet, clear your browser cache and reload