Compare commits

2 commits: main...pm-full-no

| Author | SHA1 | Date |
|---|---|---|
| | 7adddacc72 | |
| | 3bee6e4941 | |
.gitignore (vendored, 2 changes)

@@ -1,2 +0,0 @@
-*-deployment
-*-spec.yml
README.md (30 changes)

@@ -1,31 +1,5 @@
 # testnet-laconicd-stack
 
-Stacks to run a node for laconic testnet
+Stacks to run nodes for laconic-testnet
 
-- [testnet-laconicd stack documentation](stack-orchestrator/stacks/testnet-laconicd/README.md)
-- [laconic-console stack documentation](stack-orchestrator/stacks/laconic-console/README.md) (to run laconic registry CLI and console standalone)
-- [laconic-faucet stack documentation](stack-orchestrator/stacks/laconic-faucet/README.md)
-
-## ops
-
-- [Update deployments after code changes](./ops/update-deployments.md)
-- [Halt stage0 and start stage1](./ops/stage0-to-stage1.md)
-- [Halt stage1 and start stage2](./ops/stage1-to-stage2.md)
-- [Create deployments from scratch (for reference only)](./ops/deployments-from-scratch.md)
-- [Deploy and transfer new tokens for nitro operations](./ops/nitro-token-ops.md)
-
-## Join LORO testnet
-
-Follow steps in [testnet-onboarding-validator.md](./testnet-onboarding-validator.md) to onboard your participant and join as a validator on the LORO testnet
-
-## SAPO testnet
-
-Follow steps in [Upgrade to SAPO testnet](./testnet-onboarding-validator.md#upgrade-to-sapo-testnet) for upgrading your LORO testnet node to SAPO testnet
-
-## Setup a Service Provider
-
-Follow steps in [service-provider.md](./service-provider.md) to setup / update your service provider
-
-## Run testnet Nitro Node
-
-Follow steps in [testnet-nitro-node.md](./testnet-nitro-node.md) to run your Nitro node for the testnet
+- [Full node stack documentation](stack-orchestrator/stacks/laconicd-full-node/README.md)
@@ -1,40 +0,0 @@
[server]
host = "0.0.0.0"
port = 8000
gqlPath = "/graphql"

[server.session]
secret = "<redacted>"
# Frontend webapp URL origin
appOriginUrl = "https://deploy.apps.vaasl.io"
# Set to true if the server is running behind a proxy
trustProxy = true
# Backend URL hostname
domain = "deploy-backend.apps.vaasl.io"

[database]
dbPath = "/data/db/deploy-backend"

[gitHub]
webhookUrl = "https://deploy-backend.apps.vaasl.io"

[gitHub.oAuth]
clientId = "<redacted>"
clientSecret = "<redacted>"

[registryConfig]
fetchDeploymentRecordDelay = 5000
checkAuctionStatusDelay = 5000
restEndpoint = "https://laconicd-sapo.laconic.com"
gqlEndpoint = "https://laconicd-sapo.laconic.com/api"
chainId = "laconic-testnet-2"
privateKey = "<redacted>"
bondId = "<redacted>"
authority = "vaasl"

[registryConfig.fee]
gasPrice = "0.001alnt"

[auction]
commitFee = "1000"
commitsDuration = "60s"
revealFee = "1000"
revealsDuration = "60s"
denom = "alnt"
File diff suppressed because it is too large
@@ -1,83 +0,0 @@
# Nitro Token Ops

## Deploy and transfer custom tokens

### Setup

* Go to the directory where `nitro-contracts-deployment` is present:

  ```bash
  cd /srv/bridge
  ```

### Deploy new token

* To deploy another token:

  ```bash
  # These values can be changed to deploy another token with a different name and symbol
  export TOKEN_NAME="TestToken2"
  export TOKEN_SYMBOL="TST2"

  # Note: Token supply denotes the actual number of tokens and not the supply in Wei
  export INITIAL_TOKEN_SUPPLY="129600"

  laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "TOKEN_NAME=$TOKEN_NAME TOKEN_SYMBOL=$TOKEN_SYMBOL INITIAL_TOKEN_SUPPLY=$INITIAL_TOKEN_SUPPLY /app/deploy-l1-tokens.sh"
  ```
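The distinction the note draws matters when comparing the supply against on-chain balances: assuming the deploy script uses the standard 18 ERC-20 decimals (an assumption here; confirm in `/app/deploy-l1-tokens.sh`), the Wei-scale figure is the whole-token count with 18 zeros appended. A quick sketch:

```shell
# Whole-token count, as passed to the deploy script above
INITIAL_TOKEN_SUPPLY="129600"

# Assuming 18 decimals (standard ERC-20 -- an assumption here),
# append 18 zeros to get the Wei-scale on-chain figure;
# printf '%018d' 0 prints exactly 18 zeros
SUPPLY_WEI="${INITIAL_TOKEN_SUPPLY}$(printf '%018d' 0)"
echo "$SUPPLY_WEI"   # 129600000000000000000000
```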
* Recreate `assets.json` to include the newly deployed token address:

  ```bash
  export GETH_CHAIN_ID="1212"

  laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "jq --arg chainId \"$GETH_CHAIN_ID\" '{
    (\$chainId): [
      {
        \"name\": .[\$chainId][0].name,
        \"chainId\": .[\$chainId][0].chainId,
        \"contracts\": (
          .[\$chainId][0].contracts
          | to_entries
          | map(select(.key | in({\"ConsensusApp\":1, \"NitroAdjudicator\":1, \"VirtualPaymentApp\":1}) | not))
          | from_entries
        )
      }
    ]
  }' /app/deployment/nitro-addresses.json" > assets.json
  ```
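The `jq` filter above just drops the three Nitro protocol contracts and keeps any deployed tokens. It can be sanity-checked locally against a small stand-in for `nitro-addresses.json` (the file below and its addresses are made up for illustration):

```shell
# Hypothetical stand-in for /app/deployment/nitro-addresses.json
cat > /tmp/nitro-addresses-sample.json <<'EOF'
{
  "1212": [
    {
      "name": "geth",
      "chainId": "1212",
      "contracts": {
        "ConsensusApp": { "address": "0x01" },
        "NitroAdjudicator": { "address": "0x02" },
        "VirtualPaymentApp": { "address": "0x03" },
        "TestToken": { "address": "0x04" }
      }
    }
  ]
}
EOF

# Same filter as above, run directly: only TestToken should survive
jq --arg chainId "1212" '{
  ($chainId): [
    {
      "name": .[$chainId][0].name,
      "chainId": .[$chainId][0].chainId,
      "contracts": (
        .[$chainId][0].contracts
        | to_entries
        | map(select(.key | in({"ConsensusApp":1, "NitroAdjudicator":1, "VirtualPaymentApp":1}) | not))
        | from_entries
      )
    }
  ]
}' /tmp/nitro-addresses-sample.json
```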
* The required config file should be generated at `/srv/bridge/assets.json`

* Check in the generated file at location `ops/stage2/assets.json` within this repository

### Transfer deployed tokens to a given address

* To transfer a token to an account:

  ```bash
  export GETH_CHAIN_ID=1212
  export TOKEN_NAME="<name-of-token-to-be-transferred>"
  export ASSET_ADDRESS=$(laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "jq -r '.\"$GETH_CHAIN_ID\"[0].contracts.$TOKEN_NAME.address' /app/deployment/nitro-addresses.json")
  export ACCOUNT="<target-account-address>"

  laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "cd packages/nitro-protocol && yarn hardhat transfer --contract $ASSET_ADDRESS --to $ACCOUNT --amount 100 --network geth"
  ```

## Transfer ETH

* Go to the directory where `fixturenet-eth-deployment` is present:

  ```bash
  cd /srv/fixturenet-eth
  ```

* To transfer ETH to an account:

  ```bash
  export FUNDED_ADDRESS="0xe6CE22afe802CAf5fF7d3845cec8c736ecc8d61F"
  export FUNDED_PK="888814df89c4358d7ddb3fa4b0213e7331239a80e1f013eaa7b2deca2a41a218"

  export TO_ADDRESS="<target-account-address>"

  laconic-so deployment --dir fixturenet-eth-deployment exec foundry "cast send $TO_ADDRESS --value 1ether --from $FUNDED_ADDRESS --private-key $FUNDED_PK"
  ```
@@ -1,465 +0,0 @@
# Service Provider deployments from scratch

## container-registry

* Reference: <https://github.com/LaconicNetwork/loro-testnet/blob/main/docs/service-provider-setup.md#deploy-docker-image-container-registry>

* Target dir: `/srv/service-provider/container-registry`

* Cleanup an existing deployment if required:

  ```bash
  cd /srv/service-provider/container-registry

  # Stop the deployment
  laconic-so deployment --dir container-registry stop --delete-volumes

  # Remove the deployment dir
  sudo rm -rf container-registry

  # Remove the existing spec file
  rm container-registry.spec
  ```
### Setup

- Generate the spec file for the container-registry stack

  ```bash
  laconic-so --stack container-registry deploy init --output container-registry.spec
  ```

- Modify the `container-registry.spec` as shown below

  ```
  stack: container-registry
  deploy-to: k8s
  kube-config: /home/dev/.kube/config-vs-narwhal.yaml
  network:
    ports:
      registry:
        - '5000'
    http-proxy:
      - host-name: container-registry.apps.vaasl.io
        routes:
          - path: '/'
            proxy-to: registry:5000
  volumes:
    registry-data:
  configmaps:
    config: ./configmaps/config
  ```

- Create the deployment directory for the `container-registry` stack

  ```bash
  laconic-so --stack container-registry deploy create --deployment-dir container-registry --spec-file container-registry.spec
  ```

- Modify file `container-registry/kubeconfig.yml` if required

  ```
  apiVersion: v1
  ...
  contexts:
  - context:
      cluster: ***
      user: ***
    name: default
  ...
  ```

  NOTE: `context.name` must be `default` to use with SO

- Base64 encode the container registry credentials

  NOTE: Use actual credentials for the container registry (credentials set in `container-registry/credentials.txt`)

  ```bash
  echo -n "so-reg-user:pXDwO5zLU7M88x3aA" | base64 -w0

  # Output: c28tcmVnLXVzZXI6cFhEd081ekxVN004OHgzQUE=
  ```
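The encoded value can be verified by round-tripping it through `base64 -d`; decoding must give back the exact `user:password` string (the sample credentials from above are used here):

```shell
# Encode (same command as above) and decode to verify the round trip
ENCODED=$(echo -n "so-reg-user:pXDwO5zLU7M88x3aA" | base64 -w0)
echo "$ENCODED"
echo "$ENCODED" | base64 -d     # so-reg-user:pXDwO5zLU7M88x3aA
```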
- Install `apache2-utils` for the next step

  ```bash
  sudo apt install apache2-utils
  ```

- Hash the container registry credentials to create an `htpasswd` file

  ```bash
  htpasswd -bB -c container-registry/configmaps/config/htpasswd so-reg-user pXDwO5zLU7M88x3aA
  ```

  The resulting file should look like this:

  ```
  cat container-registry/configmaps/config/htpasswd
  # so-reg-user:$2y$05$6EdxIwwDNlJfNhhQxZRr4eNd.aYrdmbBjAdw422w0u2j3TihQXgd2
  ```
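The `-B` flag selects bcrypt hashing, so each line takes the form `user:$2y$<cost>$<hash>`. A quick format check on the sample entry above:

```shell
# Sample entry from above; bcrypt hashes start with $2y$ followed by a two-digit cost
ENTRY='so-reg-user:$2y$05$6EdxIwwDNlJfNhhQxZRr4eNd.aYrdmbBjAdw422w0u2j3TihQXgd2'
echo "$ENTRY" | grep -Eq '^[^:]+:\$2y\$[0-9]{2}\$' && echo "looks like a bcrypt htpasswd entry"
```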
- Using the credentials from the previous steps, create a `container-registry/my_password.json` file

  ```json
  {
    "auths": {
      "container-registry.apps.vaasl.io": {
        "username": "so-reg-user",
        "password": "$2y$05$6EdxIwwDNlJfNhhQxZRr4eNd.aYrdmbBjAdw422w0u2j3TihQXgd2",
        "auth": "c28tcmVnLXVzZXI6cFhEd081ekxVN004OHgzYUE="
      }
    }
  }
  ```

- Configure the file `container-registry/config.env` as follows

  ```env
  REGISTRY_AUTH=htpasswd
  REGISTRY_AUTH_HTPASSWD_REALM="VSL Service Provider Image Registry"
  REGISTRY_AUTH_HTPASSWD_PATH="/config/htpasswd"
  REGISTRY_HTTP_SECRET='$2y$05$6EdxIwwDNlJfNhhQxZRr4eNd.aYrdmbBjAdw422w0u2j3TihQXgd2'
  ```

- Load context for k8s

  ```bash
  kubie ctx vs-narwhal
  ```

- Add the container registry credentials as a secret available to the cluster

  ```bash
  kubectl create secret generic laconic-registry --from-file=.dockerconfigjson=container-registry/my_password.json --type=kubernetes.io/dockerconfigjson
  ```

### Run

- Deploy the container registry

  ```bash
  laconic-so deployment --dir container-registry start
  ```

- Check the logs

  ```bash
  laconic-so deployment --dir container-registry logs
  ```

- Check status and await successful deployment:

  ```bash
  laconic-so deployment --dir container-registry status
  ```

- Confirm deployment by logging in:

  ```
  docker login container-registry.apps.vaasl.io --username so-reg-user --password pXDwO5zLU7M88x3aA
  ```

- Set ingress annotations

  - Set the `cluster-id` found in `container-registry/deployment.yml` and then run the following commands:

    ```
    export CLUSTER_ID=<cluster-id>
    # Example
    # export CLUSTER_ID=laconic-26cc70be8a3db3f4

    kubectl annotate ingress $CLUSTER_ID-ingress nginx.ingress.kubernetes.io/proxy-body-size=0
    kubectl annotate ingress $CLUSTER_ID-ingress nginx.ingress.kubernetes.io/proxy-read-timeout=600
    kubectl annotate ingress $CLUSTER_ID-ingress nginx.ingress.kubernetes.io/proxy-send-timeout=600
    ```
## webapp-deployer

### Backend

* Reference: <https://github.com/LaconicNetwork/loro-testnet/blob/main/docs/service-provider-setup.md#deploy-backend>

* Target dir: `/srv/service-provider/webapp-deployer`

* Cleanup an existing deployment if required:

  ```bash
  cd /srv/service-provider/webapp-deployer

  # Stop the deployment
  laconic-so deployment --dir webapp-deployer stop

  # Remove the deployment dir
  sudo rm -rf webapp-deployer

  # Remove the existing spec file
  rm webapp-deployer.spec
  ```

#### Setup

- Initialize a spec file for the deployer backend.

  ```bash
  laconic-so --stack webapp-deployer-backend setup-repositories
  laconic-so --stack webapp-deployer-backend build-containers
  laconic-so --stack webapp-deployer-backend deploy init --output webapp-deployer.spec
  ```

- Modify the contents of `webapp-deployer.spec`:

  ```
  stack: webapp-deployer-backend
  deploy-to: k8s
  kube-config: /home/dev/.kube/config-vs-narwhal.yaml
  image-registry: container-registry.apps.vaasl.io/laconic-registry
  network:
    ports:
      server:
        - '9555'
    http-proxy:
      - host-name: webapp-deployer-api.apps.vaasl.io
        routes:
          - path: '/'
            proxy-to: server:9555
  volumes:
    srv:
  configmaps:
    config: ./data/config
  annotations:
    container.apparmor.security.beta.kubernetes.io/{name}: unconfined
  labels:
    container.kubeaudit.io/{name}.allow-disabled-apparmor: "podman"
  security:
    privileged: true

  resources:
    containers:
      reservations:
        cpus: 3
        memory: 8G
      limits:
        cpus: 7
        memory: 16G
    volumes:
      reservations:
        storage: 200G
  ```

- Create the deployment directory from the spec file.

  ```
  laconic-so --stack webapp-deployer-backend deploy create --deployment-dir webapp-deployer --spec-file webapp-deployer.spec
  ```

- Modify file `webapp-deployer/kubeconfig.yml` if required

  ```
  apiVersion: v1
  ...
  contexts:
  - context:
      cluster: ***
      user: ***
    name: default
  ...
  ```

  NOTE: `context.name` must be `default` to use with SO

- Copy `webapp-deployer/kubeconfig.yml` from the k8s cluster creation step to `webapp-deployer/data/config/kube.yml`

  ```bash
  cp webapp-deployer/kubeconfig.yml webapp-deployer/data/config/kube.yml
  ```

- Create `webapp-deployer/data/config/laconic.yml`; it should look like this:

  ```
  services:
    registry:
      # Using the public endpoint does not work inside the machine where the laconicd chain is deployed
      rpcEndpoint: 'http://host.docker.internal:36657'
      gqlEndpoint: 'http://host.docker.internal:3473/api'

      # Set user key of an account with balance and a bond owned by the user
      userKey:
      bondId:

      chainId: laconic-testnet-2
      gasPrice: 1alnt
  ```

  NOTE: Modify the user key and bond ID according to your configuration

* Publish a `WebappDeployer` record for the deployer backend by following the steps below:

  * Setup GPG keys by following [these steps to create and export a key](https://git.vdb.to/cerc-io/webapp-deployment-status-api#keys)

    ```
    cd webapp-deployer

    # Create a key
    gpg --batch --passphrase "SECRET" --quick-generate-key webapp-deployer-api.apps.vaasl.io default default never

    # Export the public key
    gpg --export webapp-deployer-api.apps.vaasl.io > webapp-deployer-api.apps.vaasl.io.pgp.pub

    # Export the private key
    gpg --export-secret-keys webapp-deployer-api.apps.vaasl.io > webapp-deployer-api.apps.vaasl.io.pgp.key

    cd -
    ```

    NOTE: Use "SECRET" for the passphrase prompt

  * Copy the GPG pub key file generated above to the `webapp-deployer/data/config` directory. This ensures the Docker container has access to the key during the publish process

    ```bash
    cp webapp-deployer/webapp-deployer-api.apps.vaasl.io.pgp.pub webapp-deployer/data/config
    ```

  * Publish the webapp deployer record using the `publish-deployer-to-registry` command

    ```
    docker run -i -t \
      --add-host=host.docker.internal:host-gateway \
      -v /srv/service-provider/webapp-deployer/data/config:/config \
      cerc/webapp-deployer-backend:local laconic-so publish-deployer-to-registry \
      --laconic-config /config/laconic.yml \
      --api-url https://webapp-deployer-api.apps.vaasl.io \
      --public-key-file /config/webapp-deployer-api.apps.vaasl.io.pgp.pub \
      --lrn lrn://vaasl-provider/deployers/webapp-deployer-api.apps.vaasl.io \
      --min-required-payment 10000
    ```

- Modify the contents of `webapp-deployer/config.env`:

  ```
  DEPLOYMENT_DNS_SUFFIX="apps.vaasl.io"

  # This should match the name authority reserved above
  DEPLOYMENT_RECORD_NAMESPACE="vaasl-provider"

  # URL of the deployed docker image registry
  IMAGE_REGISTRY="container-registry.apps.vaasl.io"

  # Credentials from the htpasswd section above in the container-registry setup
  IMAGE_REGISTRY_USER=
  IMAGE_REGISTRY_CREDS=

  # Configs
  CLEAN_DEPLOYMENTS=false
  CLEAN_LOGS=false
  CLEAN_CONTAINERS=false
  SYSTEM_PRUNE=false
  WEBAPP_IMAGE_PRUNE=true
  CHECK_INTERVAL=10
  FQDN_POLICY="allow"

  # LRN of the webapp deployer
  LRN="lrn://vaasl-provider/deployers/webapp-deployer-api.apps.vaasl.io"

  # Path to the GPG key file inside the webapp-deployer container
  OPENPGP_PRIVATE_KEY_FILE="webapp-deployer-api.apps.vaasl.io.pgp.key"
  # Passphrase used when creating the GPG key
  OPENPGP_PASSPHRASE="SECRET"

  DEPLOYER_STATE="srv-test/deployments/autodeploy.state"
  UNDEPLOYER_STATE="srv-test/deployments/autoundeploy.state"
  UPLOAD_DIRECTORY="srv-test/uploads"
  HANDLE_AUCTION_REQUESTS=true
  AUCTION_BID_AMOUNT=10000

  # Minimum payment amount required for a single webapp deployment
  MIN_REQUIRED_PAYMENT=10000
  ```

- Push the image to the container registry

  ```
  laconic-so deployment --dir webapp-deployer push-images
  ```

- Modify `webapp-deployer/data/config/laconic.yml`:

  ```
  services:
    registry:
      rpcEndpoint: 'https://laconicd-sapo.laconic.com/'
      gqlEndpoint: 'https://laconicd-sapo.laconic.com/api'

      # Set user key of an account with balance and a bond owned by the user
      userKey:
      bondId:

      chainId: laconic-testnet-2
      gasPrice: 1alnt
  ```

#### Run

- Start the deployer.

  ```
  laconic-so deployment --dir webapp-deployer start
  ```

- Load context for k8s

  ```bash
  kubie ctx vs-narwhal
  ```

- Copy the GPG key files to the webapp-deployer container

  ```bash
  # Get the webapp-deployer pod id
  laconic-so deployment --dir webapp-deployer ps

  # Expected output
  # Running containers:
  # id: default/laconic-096fed46af974a47-deployment-644db859c7-snbq6, name: laconic-096fed46af974a47-deployment-644db859c7-snbq6, ports: 10.42.2.11:9555->9555

  # Set pod id
  export POD_ID=
  # Example:
  # export POD_ID=laconic-096fed46af974a47-deployment-644db859c7-snbq6

  # Copy GPG key files to the pod
  kubectl cp webapp-deployer/webapp-deployer-api.apps.vaasl.io.pgp.key $POD_ID:/app
  kubectl cp webapp-deployer/webapp-deployer-api.apps.vaasl.io.pgp.pub $POD_ID:/app
  ```

- Publishing records to the registry will now trigger deployments in the backend

### Frontend

* Target dir: `/srv/service-provider/webapp-ui`

* Cleanup an existing deployment if required:

  ```bash
  cd /srv/service-provider/webapp-ui

  # Stop the deployment
  laconic-so deployment --dir webapp-ui stop

  # Remove the deployment dir
  sudo rm -rf webapp-ui

  # Remove the existing spec file
  rm webapp-ui.spec
  ```

#### Setup

* Clone and build the deployer UI

  ```
  git clone https://git.vdb.to/cerc-io/webapp-deployment-status-ui.git ~/cerc/webapp-deployment-status-ui

  laconic-so build-webapp --source-repo ~/cerc/webapp-deployment-status-ui
  ```

* Create a deployment

  ```bash
  export KUBECONFIG_PATH=/home/dev/.kube/config-vs-narwhal.yaml
  # NOTE: Use actual kubeconfig path

  laconic-so deploy-webapp create --kube-config $KUBECONFIG_PATH --image-registry container-registry.apps.vaasl.io --deployment-dir webapp-ui --image cerc/webapp-deployment-status-ui:local --url https://webapp-deployer-ui.apps.vaasl.io --env-file ~/cerc/webapp-deployment-status-ui/.env
  ```

* Modify file `webapp-ui/kubeconfig.yml` if required

  ```yml
  apiVersion: v1
  ...
  contexts:
  - context:
      cluster: ***
      user: ***
    name: default
  ...
  ```

  NOTE: `context.name` must be `default` to use with SO

- Push the image to the container registry.

  ```
  laconic-so deployment --dir webapp-ui push-images
  ```

- Modify `webapp-ui/config.env` like [this Pull Request](https://git.vdb.to/cerc-io/webapp-deployment-status-ui/pulls/6) but with your host details.

#### Run

- Start the deployer UI

  ```bash
  laconic-so deployment --dir webapp-ui start
  ```

- Wait a moment, then go to https://webapp-deployer-ui.apps.vaasl.io for the status and logs of each deployment
@@ -1,171 +0,0 @@
# Halt stage0 and start stage1

Once all the participants have completed their onboarding, the stage0 laconicd chain can be halted and the stage1 chain can be initialized and started

## Login

* Log in as the `dev` user on the deployments VM

* All the deployments are placed in the `/srv` directory:

  ```bash
  cd /srv
  ```

## Halt stage0

* Confirm that the currently running node is for the stage0 chain:

  ```bash
  cd /srv/laconicd

  laconic-so deployment --dir stage0-deployment logs laconicd -f --tail 30
  ```

* Stop the stage0 deployment:

  ```bash
  laconic-so deployment --dir stage0-deployment stop
  ```

## Start stage1

* Rebuild the laconicd container with `>=v0.1.7` to enable the `slashing` module:

  ```bash
  # laconicd source
  cd ~/cerc/laconicd

  # Pull latest changes
  git pull

  # Confirm the latest commit hash
  git log

  # Rebuild the containers
  cd /srv/laconicd

  laconic-so --stack ~/cerc/fixturenet-laconicd-stack/stack-orchestrator/stacks/fixturenet-laconicd build-containers --force-rebuild
  ```

* Fetch the generated genesis file with stage1 participants and token allocations:

  ```bash
  # Place in stage1 deployment directory
  wget -O /srv/laconicd/stage1-deployment/data/genesis-config/genesis.json https://git.vdb.to/cerc-io/testnet-laconicd-stack/raw/branch/main/ops/stage1/genesis-accounts.json
  ```

* Start the stage1 deployment:

  ```bash
  laconic-so deployment --dir stage1-deployment start
  ```

* Check status of stage1 laconicd:

  ```bash
  # List the container and check its health status
  docker ps -a | grep laconicd

  # Follow logs for the laconicd container, check that new blocks are getting created
  laconic-so deployment --dir stage1-deployment logs laconicd -f
  ```

* Query the list of registered participants in stage1 laconicd:

  ```bash
  laconic-so deployment --dir stage1-deployment exec laconicd "laconicd query onboarding list"

  # Confirm that all onboarded participants on stage0 appear in the list
  ```

* Get the node's peer address and the stage1 genesis file to share with the participants:

  * Get the node id:

    ```bash
    echo $(laconic-so deployment --dir stage1-deployment exec laconicd "laconicd cometbft show-node-id")@laconicd.laconic.com:26656
    ```

  * Get the genesis file:

    ```bash
    scp dev@<deployments-server-hostname>:/srv/laconicd/stage1-deployment/data/laconicd-data/config/genesis.json </path/to/local/directory>
    ```

* Now users can follow the steps to [Join as a validator on stage1](../testnet-onboarding-validator.md#join-as-a-validator-on-stage1)

## Bank Transfer

* Transfer tokens to an address:

  ```bash
  cd /srv/laconicd

  RECEIVER_ADDRESS=
  AMOUNT=

  laconic-so deployment --dir stage1-deployment exec laconicd "laconicd tx bank send alice ${RECEIVER_ADDRESS} ${AMOUNT}alnt --from alice --fees 1000000alnt"
  ```

* Check balance:

  ```bash
  laconic-so deployment --dir stage1-deployment exec laconicd "laconicd query bank balances ${RECEIVER_ADDRESS}"
  ```

---

## Generating stage1 genesis

* The following steps are to be run on a local machine

* Clone repos:

  ```bash
  git clone git@git.vdb.to:cerc-io/testnet-laconicd-stack.git

  git clone git@git.vdb.to:cerc-io/fixturenet-laconicd-stack.git
  ```

* Create stage1 participants and allocations using the provided validators list:

  * Prerequisite: `validators.csv` file with a list of laconic addresses, example:

    ```csv
    laconic13ftz0c6cg6ttfda7ct4r6pf2j976zsey7l4wmj
    laconic1he4wjpfm5atwfvqurpg57ctp8chmxt9swf02dx
    laconic1wpsdkwz0t4ejdm7gcl7kn8989z88dd6wwy04np
    ...
    ```

  * Build

    ```bash
    # Change to scripts dir
    cd testnet-laconicd-stack/scripts

    # Install dependencies and build
    yarn && yarn build
    ```

  * Run script

    ```bash
    yarn participants-with-filtered-validators --validators-csv ./validators.csv --participant-alloc 200000000000 --validator-alloc 1000200000000000 --output stage1-participants-$(date +"%Y-%m-%dT%H%M%S").json --output-allocs stage1-allocs-$(date +"%Y-%m-%dT%H%M%S").json

    # This should create two json files with stage1 participants and allocations
    ```

* Create the stage1 genesis file:

  ```bash
  # Change to fixturenet-laconicd stack dir
  cd fixturenet-laconicd-stack/stack-orchestrator/stacks/fixturenet-laconicd

  # Generate genesis file
  # Provide absolute paths to the generated stage1-participants and stage1-allocs files
  ./scripts/generate-stage1-genesis-from-json.sh /path/to/testnet-laconicd-stack/scripts/stage1-participants.json /path/to/testnet-laconicd-stack/scripts/stage1-allocs.json

  # This should generate the required genesis file at output/genesis.json
  ```
@@ -1,150 +0,0 @@
# Halt stage1 and start stage2

## Login

* Log in as the `dev` user on the deployments VM

* All the deployments are placed in the `/srv` directory:

  ```bash
  cd /srv
  ```

## Halt stage1

* Confirm that the currently running node is for the stage1 chain:

  ```bash
  # On stage1 deployment machine
  STAGE1_DEPLOYMENT=/srv/laconicd/testnet-laconicd-deployment

  laconic-so deployment --dir $STAGE1_DEPLOYMENT logs laconicd -f --tail 30

  # Note: the stage1 node on the deployments VM has been changed to run from /srv/laconicd/testnet-laconicd-deployment instead of /srv/laconicd/stage1-deployment
  ```

* Stop the stage1 deployment:

  ```bash
  laconic-so deployment --dir $STAGE1_DEPLOYMENT stop

  # Stopping this deployment marks the end of testnet stage1
  ```

## Export stage1 state

* Export the chain state:

  ```bash
  docker run -it \
    -v $STAGE1_DEPLOYMENT/data/laconicd-data:/root/.laconicd \
    cerc/laconicd-stage1:local bash -c "laconicd export | jq > /root/.laconicd/stage1-state.json"
  ```

* Archive the state and the node config and keys:

  ```bash
  sudo tar -czf /srv/laconicd/stage1-laconicd-export.tar.gz --exclude="./data" --exclude="./tmp" -C $STAGE1_DEPLOYMENT/data/laconicd-data .

  sudo chown dev:dev /srv/laconicd/stage1-laconicd-export.tar.gz
  ```
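The `--exclude` flags keep the archive down to config and keys only (block data and temp files stay behind). The behaviour can be checked with throwaway paths; all names below are made up for illustration:

```shell
# Throwaway directory tree mimicking laconicd-data (hypothetical names)
mkdir -p /tmp/laconicd-data-demo/config
mkdir -p /tmp/laconicd-data-demo/data
mkdir -p /tmp/laconicd-data-demo/tmp
touch /tmp/laconicd-data-demo/config/node_key.json
touch /tmp/laconicd-data-demo/data/blockstore.db

# Same exclude pattern as the export above
tar -czf /tmp/export-demo.tar.gz --exclude="./data" --exclude="./tmp" -C /tmp/laconicd-data-demo .

# Listing should show config/ entries but nothing under data/ or tmp/
tar -tzf /tmp/export-demo.tar.gz
```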
|
||||
|
||||
## Initialize stage2
|
||||
|
||||
* Copy over the stage1 state and node export archive to stage2 deployment machine
|
||||
|
||||
* Extract the stage1 state and node config to stage2 deployment dir:
|
||||
|
||||
```bash
|
||||
# On stage2 deployment machine
|
||||
cd /srv/laconicd
|
||||
|
||||
# Unarchive
|
||||
tar -xzf stage1-laconicd-export.tar.gz -C stage2-deployment/data/laconicd-data
|
||||
|
||||
# Verify contents
|
||||
ll stage2-deployment/data/laconicd-data
|
||||
```
|
||||
|
||||
* Initialize stage2 chain:
|
||||
|
||||
```bash
|
||||
DEPLOYMENT_DIR=$(pwd)
|
||||
|
||||
cd ~/cerc/fixturenet-laconicd-stack/stack-orchestrator/stacks/fixturenet-laconicd
|
||||
|
||||
STAGE2_CHAIN_ID=laconic-testnet-2
|
||||
./scripts/initialize-stage2.sh $DEPLOYMENT_DIR/stage2-deployment $STAGE2_CHAIN_ID LaconicStage2 os 1000000000000000
|
||||
|
||||
# Enter the keyring passphrase for account from stage1 when prompted
|
||||
|
||||
cd $DEPLOYMENT_DIR
|
||||
```
|
||||
|
||||
* Resets the node data (`unsafe-reset-all`)
|
||||
|
||||
* Initializes the `stage2-deployment` node
|
||||
|
||||
* Generates the genesis file for stage2 with stage1 state
|
||||
|
||||
* Carries over accounts, balances and laconicd modules from stage1
|
||||
|
||||
* Skips staking and validator data
|
||||
|
||||
* Copy over the genesis file outside data directory:
|
||||
|
||||
```bash
|
||||
cp stage2-deployment/data/laconicd-data/config/genesis.json stage2-deployment
|
||||
```
|
||||
|
||||
## Start stage2
|
||||
|
||||
* Start the stage2 deployment:
|
||||
|
||||
```bash
|
||||
laconic-so deployment --dir stage2-deployment start
|
||||
```
|
||||
|
||||
* Check status of stage2 laconicd:
|
||||
|
||||
```bash
|
||||
# List down the container and check health status
|
||||
docker ps -a | grep laconicd
|
||||
|
||||
# Follow logs for laconicd container, check that new blocks are getting created
|
||||
laconic-so deployment --dir stage2-deployment logs laconicd -f
|
||||
```
|
||||
|
||||
* Get the node's peer adddress and stage2 genesis file to share with the participants:
|
||||
|
||||
* Get the node id:
|
||||
|
||||
```bash
|
||||
echo $(laconic-so deployment --dir stage2-deployment exec laconicd "laconicd cometbft show-node-id")@laconicd-sapo.laconic.com:36656
|
||||
```
|
||||
|
||||
* Get the genesis file:
|
||||
|
||||
```bash
|
||||
scp dev@<deployments-server-hostname>:/srv/laconicd/stage2-deployment/genesis.json </path/to/local/directory>
|
||||
```
|
||||
|
* Now users can follow the steps to [Upgrade to SAPO testnet](../testnet-onboarding-validator.md#upgrade-to-sapo-testnet)

## Bank Transfer

* Transfer tokens to an address:

  ```bash
  cd /srv/laconicd

  RECEIVER_ADDRESS=
  AMOUNT=

  laconic-so deployment --dir stage2-deployment exec laconicd "laconicd tx bank send alice ${RECEIVER_ADDRESS} ${AMOUNT}alnt --from alice --fees 1000alnt"
  ```

* Check balance:

  ```bash
  laconic-so deployment --dir stage2-deployment exec laconicd "laconicd query bank balances ${RECEIVER_ADDRESS}"
  ```
File diff suppressed because it is too large
@@ -1,16 +0,0 @@
{
  "1212": [
    {
      "name": "geth",
      "chainId": "1212",
      "contracts": {
        "TestToken": {
          "address": "0x2b79F4a92c177B4E61F5c4AC37b1B8A623c665A4"
        },
        "TestToken2": {
          "address": "0xa6B4B8b84576047A53255649b4994743d9C83A71"
        }
      }
    }
  ]
}
711201 ops/stage2/genesis.json
File diff suppressed because one or more lines are too long
@@ -1,7 +0,0 @@
nitro_chain_url: "wss://fixturenet-eth.laconic.com"
na_address: "0x2B6AFbd4F479cE4101Df722cF4E05F941523EaD9"
ca_address: "0xBca48057Da826cB2eb1258E2C679678b269dC262"
vpa_address: "0xCf5207018766587b8cBad4B8B1a1a38c225ebA7A"
bridge_nitro_address: "0xf0E6a85C6D23AcA9ff1b83477D426ed26F218185"
nitro_l1_bridge_multiaddr: "/dns4/bridge.laconic.com/tcp/3005/p2p/16Uiu2HAky2PYTfBNHpytybz4ADY9n7boiLgK5btJpTrGVbWC3diZ"
nitro_l2_bridge_multiaddr: "/dns4/bridge.laconic.com/tcp/3006/p2p/16Uiu2HAky2PYTfBNHpytybz4ADY9n7boiLgK5btJpTrGVbWC3diZ"
@@ -1,25 +0,0 @@
#!/bin/bash

# Exit on error
set -e
set -u

NODE_HOME="$HOME/.laconicd"
testnet2_genesis="$NODE_HOME/tmp-testnet2/genesis.json"

if [ ! -f ${testnet2_genesis} ]; then
  echo "testnet2 genesis file not found, exiting..."
  exit 1
fi

# Remove data but keep keys
laconicd cometbft unsafe-reset-all

# Use provided genesis config
cp $testnet2_genesis $NODE_HOME/config/genesis.json

# Set chain id in config
chain_id=$(jq -r '.chain_id' $testnet2_genesis)
laconicd config set client chain-id $chain_id --home $NODE_HOME

echo "Node data reset and ready for testnet2!"
@@ -1,500 +0,0 @@
# Update deployments after code changes

Instructions to reset / update the deployments

## Login

* Log in as the `dev` user on the deployments VM

* All the deployments are placed in the `/srv` directory:

  ```bash
  cd /srv
  ```
## stage0 laconicd

* Deployment dir: `/srv/laconicd/stage0-deployment`

* If code has changed, fetch and build with updated source code:

  ```bash
  # laconicd source
  cd ~/cerc/laconicd

  # Pull latest changes, or checkout to the required branch
  git pull

  # Confirm the latest commit hash
  git log

  # Rebuild the containers
  cd /srv/laconicd

  laconic-so --stack ~/cerc/fixturenet-laconicd-stack/stack-orchestrator/stacks/fixturenet-laconicd build-containers --force-rebuild
  ```

* Optionally, reset the data directory (this will remove all stage0 data!):

  ```bash
  # Stop the deployment
  laconic-so deployment --dir stage0-deployment stop --delete-volumes

  # Remove and recreate the required data dir
  sudo rm -rf stage0-deployment/data/laconicd-data
  mkdir stage0-deployment/data/laconicd-data
  ```

* Start the deployment:

  ```bash
  laconic-so deployment --dir stage0-deployment start

  # Follow logs for the laconicd container, check that new blocks are getting created
  laconic-so deployment --dir stage0-deployment logs laconicd -f
  ```
* If the stage0 laconicd chain has been reset, reset the faucet deployment too with the new faucet key:

  ```bash
  cd /srv/faucet

  export FAUCET_ACCOUNT_PK=$(laconic-so deployment --dir /srv/laconicd/stage0-deployment exec laconicd "echo y | laconicd keys export alice --keyring-backend test --unarmored-hex --unsafe")

  cat <<EOF > laconic-faucet-deployment/config.env
  CERC_FAUCET_KEY=$FAUCET_ACCOUNT_PK
  EOF

  # Stop the deployment
  laconic-so deployment --dir laconic-faucet-deployment stop --delete-volumes

  # Remove and recreate the required data dir
  sudo rm -rf laconic-faucet-deployment/data/faucet-data
  mkdir laconic-faucet-deployment/data/faucet-data

  # Start the deployment
  laconic-so deployment --dir laconic-faucet-deployment start
  ```
## testnet-onboarding-app

* Deployment dir: `/srv/app/onboarding-app-deployment`

* If code has changed, fetch and build with updated source code:

  ```bash
  # testnet-onboarding-app source
  cd ~/cerc/testnet-onboarding-app

  # Pull latest changes, or checkout to the required branch
  git pull

  # Confirm the latest commit hash
  git log

  # Rebuild the containers
  cd /srv/app

  laconic-so --stack ~/cerc/testnet-onboarding-app-stack/stack-orchestrator/stacks/onboarding-app build-containers --force-rebuild
  ```

* Update the configuration, if required, in `onboarding-app-deployment/config.env`:

  ```bash
  WALLET_CONNECT_ID=63...

  CERC_REGISTRY_GQL_ENDPOINT="https://laconicd.laconic.com/api"
  CERC_LACONICD_RPC_ENDPOINT="https://laconicd.laconic.com"

  CERC_FAUCET_ENDPOINT="https://faucet.laconic.com"

  CERC_WALLET_META_URL="https://loro-signup.laconic.com"

  CERC_STAKING_AMOUNT=1000000000000000
  ```

* Restart the deployment:

  ```bash
  laconic-so deployment --dir onboarding-app-deployment stop

  laconic-so deployment --dir onboarding-app-deployment start

  # Follow logs for the testnet-onboarding-app container, wait for the build to finish
  laconic-so deployment --dir onboarding-app-deployment logs testnet-onboarding-app -f
  ```

* The updated onboarding app can now be viewed at <https://loro-signup.laconic.com>
## laconic-wallet-web

* Deployment dir: `/srv/wallet/laconic-wallet-web-deployment`

* If code has changed, fetch and build with updated source code:

  ```bash
  # laconic-wallet-web source
  cd ~/cerc/laconic-wallet-web

  # Pull latest changes, or checkout to the required branch
  git pull

  # Confirm the latest commit hash
  git log

  # Rebuild the containers
  cd /srv/wallet

  laconic-so --stack ~/cerc/laconic-wallet-web/stack/stack-orchestrator/stack/laconic-wallet-web build-containers --force-rebuild
  ```

* Update the configuration, if required, in `laconic-wallet-web-deployment/config.env`:

  ```bash
  WALLET_CONNECT_ID=63...
  ```

* Restart the deployment:

  ```bash
  laconic-so deployment --dir laconic-wallet-web-deployment stop

  laconic-so deployment --dir laconic-wallet-web-deployment start

  # Follow logs for the laconic-wallet-web container, wait for the build to finish
  laconic-so deployment --dir laconic-wallet-web-deployment logs laconic-wallet-web -f
  ```

* The web wallet can now be viewed at <https://wallet.laconic.com>
## stage1 laconicd

* Deployment dir: `/srv/laconicd/stage1-deployment`

* If code has changed, fetch and build with updated source code:

  ```bash
  # laconicd source
  cd ~/cerc/laconicd

  # Pull latest changes, or checkout to the required branch
  git pull

  # Confirm the latest commit hash
  git log

  # Rebuild the containers
  cd /srv/laconicd

  laconic-so --stack ~/cerc/fixturenet-laconicd-stack/stack-orchestrator/stacks/fixturenet-laconicd build-containers --force-rebuild
  ```

* Optionally, reset the data directory:

  ```bash
  # Stop the deployment
  laconic-so deployment --dir stage1-deployment stop --delete-volumes

  # Remove and recreate the required data dirs
  sudo rm -rf stage1-deployment/data/laconicd-data stage1-deployment/data/genesis-config

  mkdir stage1-deployment/data/laconicd-data
  mkdir stage1-deployment/data/genesis-config
  ```

* Update the configuration, if required, in `stage1-deployment/config.env`:

  ```bash
  AUTHORITY_AUCTION_ENABLED=true
  AUTHORITY_AUCTION_COMMITS_DURATION=3600
  AUTHORITY_AUCTION_REVEALS_DURATION=3600
  AUTHORITY_GRACE_PERIOD=7200
  ```

* Follow [stage0-to-stage1.md](./stage0-to-stage1.md) to generate the genesis file for stage1 and start the deployment
## laconic-console

* Deployment dir: `/srv/console/laconic-console-deployment`

* If code has changed, fetch and build with updated source code:

  ```bash
  # laconic-console source
  cd ~/cerc/laconic-console

  # Pull latest changes, or checkout to the required branch
  git pull

  # Confirm the latest commit hash
  git log

  # Rebuild the containers
  cd /srv/console

  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-console build-containers --force-rebuild
  ```

* Update the configuration, if required, in `laconic-console-deployment/config.env`:

  ```bash
  # Laconicd (hosted) GQL endpoint
  LACONIC_HOSTED_ENDPOINT=https://laconicd.laconic.com
  ```

* Restart the deployment:

  ```bash
  laconic-so deployment --dir laconic-console-deployment stop

  laconic-so deployment --dir laconic-console-deployment start

  # Follow logs for the console container
  laconic-so deployment --dir laconic-console-deployment logs console -f
  ```

* The laconic console can now be viewed at <https://loro-console.laconic.com>

---
## stage2 laconicd

* Deployment dir: `/srv/laconicd/stage2-deployment`

* If code has changed, fetch and build with updated source code:

  ```bash
  # laconicd source
  cd ~/cerc/laconicd

  # Pull latest changes, or checkout to the required branch
  git pull

  # Confirm the latest commit hash
  git log

  # Rebuild the containers
  cd /srv/laconicd

  laconic-so --stack ~/cerc/fixturenet-laconicd-stack/stack-orchestrator/stacks/fixturenet-laconicd build-containers --force-rebuild
  ```

* Optionally, reset the data directory:

  ```bash
  # Stop the deployment
  laconic-so deployment --dir stage2-deployment stop --delete-volumes

  # Remove and recreate the required data dirs
  sudo rm -rf stage2-deployment/data/laconicd-data stage2-deployment/data/genesis-config

  mkdir stage2-deployment/data/laconicd-data
  mkdir stage2-deployment/data/genesis-config
  ```

* Follow [stage1-to-stage2.md](./stage1-to-stage2.md) to reinitialize stage2 and start the deployment
## laconic-console-testnet2

* Deployment dir: `/srv/console/laconic-console-testnet2-deployment`

* The steps to update this deployment are the same as for [laconic-console](#laconic-console), using the `laconic-console-testnet2-deployment` directory
## Laconic Shopify

* Deployment dir: `/srv/shopify/laconic-shopify-deployment`

* If code has changed, fetch and build with updated source code:

  ```bash
  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-shopify setup-repositories --git-ssh --pull

  # Rebuild the containers
  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-shopify build-containers --force-rebuild
  ```

* Update the configuration, if required, in `laconic-shopify-deployment/config.env`

* Restart the deployment:

  ```bash
  cd /srv/shopify

  laconic-so deployment --dir laconic-shopify-deployment stop

  laconic-so deployment --dir laconic-shopify-deployment start
  ```
## webapp-deployer

### Backend

* Deployment dir: `/srv/service-provider/webapp-deployer`

* If code has changed, fetch and build with updated source code:

  ```bash
  laconic-so --stack webapp-deployer-backend setup-repositories --git-ssh --pull

  laconic-so --stack webapp-deployer-backend build-containers --force-rebuild
  ```

* Update the configuration, if required, in:

  * `/srv/service-provider/webapp-deployer/data/config/laconic.yml`
  * `/srv/service-provider/webapp-deployer/config.env`

* Restart the deployment:

  ```bash
  laconic-so deployment --dir webapp-deployer stop

  laconic-so deployment --dir webapp-deployer start
  ```
* Load the context for k8s:

  ```bash
  kubie ctx vs-narwhal
  ```

* Copy the GPG key files to the webapp-deployer container:

  ```bash
  # Get the webapp-deployer pod id
  laconic-so deployment --dir webapp-deployer ps

  # Expected output:
  # Running containers:
  # id: default/laconic-096fed46af974a47-deployment-644db859c7-snbq6, name: laconic-096fed46af974a47-deployment-644db859c7-snbq6, ports: 10.42.2.11:9555->9555

  # Set pod id
  export POD_ID=
  # Example:
  # export POD_ID=laconic-096fed46af974a47-deployment-644db859c7-snbq6

  # Copy GPG key files to the pod
  kubectl cp webapp-deployer/webapp-deployer-api.apps.vaasl.io.pgp.key $POD_ID:/app
  kubectl cp webapp-deployer/webapp-deployer-api.apps.vaasl.io.pgp.pub $POD_ID:/app
  ```
### Frontend

* Deployment dir: `/srv/service-provider/webapp-ui`

* If code has changed, fetch and build with updated source code:

  ```bash
  cd ~/cerc/webapp-deployment-status-ui

  # Pull latest changes, or checkout to the required branch
  git pull

  # Confirm the latest commit hash
  git log

  laconic-so build-webapp --source-repo ~/cerc/webapp-deployment-status-ui
  ```

* Modify `/srv/service-provider/webapp-ui/config.env` like [this Pull Request](https://git.vdb.to/cerc-io/webapp-deployment-status-ui/pulls/6) but with your host details

* Restart the deployment:

  ```bash
  laconic-so deployment --dir webapp-ui stop

  laconic-so deployment --dir webapp-ui start
  ```
## Deploy Backend

* Deployment dir: `/srv/deploy-backend/backend-deployment`

* If code has changed, fetch and build with updated source code:

  ```bash
  laconic-so --stack ~/cerc/snowballtools-base-api-stack/stack-orchestrator/stacks/snowballtools-base-backend setup-repositories --git-ssh --pull

  # Rebuild the containers
  laconic-so --stack ~/cerc/snowballtools-base-api-stack/stack-orchestrator/stacks/snowballtools-base-backend build-containers --force-rebuild
  ```

* Push updated images to the container registry:

  ```bash
  cd /srv/deploy-backend

  # Log in to the container registry
  CONTAINER_REGISTRY_URL=container-registry.apps.vaasl.io
  CONTAINER_REGISTRY_USERNAME=
  CONTAINER_REGISTRY_PASSWORD=

  docker login $CONTAINER_REGISTRY_URL --username $CONTAINER_REGISTRY_USERNAME --password $CONTAINER_REGISTRY_PASSWORD

  # Push backend images
  laconic-so deployment --dir backend-deployment push-images
  ```

* Update the configuration, if required, in `backend-deployment/configmaps/config/prod.toml`

* Restart the deployment:

  ```bash
  laconic-so deployment --dir backend-deployment stop

  laconic-so deployment --dir backend-deployment start
  ```
## Deploy Frontend

* Follow the steps in [deployments-from-scratch.md](./deployments-from-scratch.md#deploy-frontend) to deploy the snowball frontend
## Fixturenet Eth

* Deployment dir: `/srv/fixturenet-eth/fixturenet-eth-deployment`

* If code has changed, fetch and build with updated source code:

  ```bash
  laconic-so --stack ~/cerc/fixturenet-eth-stacks/stack-orchestrator/stacks/fixturenet-eth setup-repositories --git-ssh --pull

  # Rebuild the containers
  laconic-so --stack ~/cerc/fixturenet-eth-stacks/stack-orchestrator/stacks/fixturenet-eth build-containers --force-rebuild
  ```

* Update the configuration, if required, in `fixturenet-eth-deployment/config.env`:

  ```bash
  CERC_ALLOW_UNPROTECTED_TXS=true
  ```

* Restart the deployment:

  ```bash
  cd /srv/fixturenet-eth

  laconic-so deployment --dir fixturenet-eth-deployment stop

  laconic-so deployment --dir fixturenet-eth-deployment start
  ```
## Nitro Bridge

* Deployment dir: `/srv/bridge/bridge-deployment`

* Rebuild the containers:

  ```bash
  laconic-so --stack ~/cerc/nitro-stack/stack-orchestrator/stacks/bridge build-containers --force-rebuild
  ```

* Update the configuration, if required, in `bridge-deployment/config.env`

* Restart the bridge deployment:

  ```bash
  cd /srv/bridge

  laconic-so deployment --dir bridge-deployment stop

  laconic-so deployment --dir bridge-deployment start
  ```
@@ -1,8 +0,0 @@
# Default: https://laconicd.laconic.com/api
LACONICD_GQL_ENDPOINT=

# Default: https://laconicd.laconic.com
LACONICD_RPC_ENDPOINT=

# Default: laconic_9000-1
LACONICD_CHAIN_ID=
3 scripts/.gitignore vendored
@@ -1,3 +0,0 @@
node_modules
dist
.env
@@ -1 +0,0 @@
@cerc-io:registry=https://git.vdb.to/api/packages/cerc-io/npm/
@@ -1,45 +0,0 @@
# scripts

## Prerequisites

- NodeJS >= `v18.17.x`

## Instructions

- Change to the scripts dir:

  ```bash
  cd scripts
  ```

- Install dependencies and build:

  ```bash
  yarn && yarn build
  ```

- Create the required env configuration:

  ```bash
  # Update the values as required
  # By default, the live laconicd testnet (laconicd.laconic.com) endpoint is configured
  cp .env.example .env
  ```

- Generate a list of onboarded participants and allocations with a given list of validators:

  ```bash
  yarn participants-with-filtered-validators --validators-csv <validators-csv-file> --participant-alloc <participant-alloc-amount> --validator-alloc <validator-alloc-amount> --output <output-json-file> --output-allocs <output-allocs-json-file>

  # Example:
  # yarn participants-with-filtered-validators --validators-csv ./validators.csv --participant-alloc 200000000000 --validator-alloc 1000200000000000 --output stage1-participants-$(date +"%Y-%m-%dT%H%M%S").json --output-allocs stage1-allocs-$(date +"%Y-%m-%dT%H%M%S").json
  ```

- Map subscribers to onboarded participants:

  ```bash
  yarn map-subscribers-to-participants --subscribers-csv <subscribers-csv-file> --output <output-csv-file>

  # Example:
  # yarn map-subscribers-to-participants --subscribers-csv subscribers.csv --output result-$(date +"%Y-%m-%dT%H%M%S").csv
  ```
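The matching rule the mapping script relies on: a participant's `kycId` is the `0x`-prefixed SHA-256 hex digest of the `subscriber_id` (see `hashSubscriberId` in `map-subscribers-to-participants.ts`). A minimal sketch of that rule, with a made-up subscriber id:

```bash
# Sketch of the matching rule; "sub_123" is a made-up subscriber id
subscriber_id="sub_123"
hashed_kyc_id="0x$(printf '%s' "$subscriber_id" | sha256sum | awk '{print $1}')"

# This value is compared against each participant's kycId
echo "$hashed_kyc_id"  # 0x followed by 64 hex chars
```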
@@ -1,29 +0,0 @@
{
  "name": "testnet-laconicd-stack",
  "version": "0.1.0",
  "main": "index.js",
  "repository": "git@git.vdb.to:cerc-io/testnet-laconicd-stack.git",
  "license": "UNLICENSED",
  "private": true,
  "devDependencies": {
    "@types/cli-progress": "^3.11.6",
    "@types/node": "^22.2.0",
    "@types/yargs": "^17.0.33",
    "typescript": "^5.5.4"
  },
  "dependencies": {
    "@cerc-io/registry-sdk": "^0.2.6",
    "@cosmjs/stargate": "^0.32.4",
    "csv-parse": "^5.5.6",
    "csv-parser": "^3.0.0",
    "csv-writer": "^1.6.0",
    "dotenv": "^16.4.5",
    "yargs": "^17.7.2"
  },
  "scripts": {
    "build": "tsc",
    "map-subscribers-to-participants": "node dist/map-subscribers-to-participants.js",
    "participants-with-filtered-validators": "node dist/participants-with-filtered-validators.js"
  },
  "packageManager": "yarn@1.22.19+sha1.4ba7fc5c6e704fce2066ecbfb0b0d8976fe62447"
}
@@ -1,171 +0,0 @@
import * as fs from 'fs';
import * as crypto from 'crypto';
import * as path from 'path';
import yargs from 'yargs';
import { hideBin } from 'yargs/helpers';
import { parse as csvParse } from 'csv-parse';
import * as csvWriter from 'csv-writer';
import dotenv from 'dotenv';

import { StargateClient } from '@cosmjs/stargate';
import { Registry } from '@cerc-io/registry-sdk';
import { decodeTxRaw, decodePubkey } from '@cosmjs/proto-signing';

dotenv.config();

const LACONICD_GQL_ENDPOINT = process.env.LACONICD_GQL_ENDPOINT || 'https://laconicd.laconic.com/api';
const LACONICD_RPC_ENDPOINT = process.env.LACONICD_RPC_ENDPOINT || 'https://laconicd.laconic.com';
const LACONICD_CHAIN_ID = process.env.LACONICD_CHAIN_ID || 'laconic_9000-1';

async function main(): Promise<void> {
  const argv = _getArgv();

  const registry = new Registry(LACONICD_GQL_ENDPOINT, LACONICD_RPC_ENDPOINT, LACONICD_CHAIN_ID);
  const client = await StargateClient.connect(LACONICD_RPC_ENDPOINT);

  console.time('time_taken_getParticipants');
  const participants = await registry.getParticipants();
  console.timeEnd('time_taken_getParticipants');

  const subscribers = await readSubscribers(argv.subscribersCsv);
  console.log('Read subscribers, count:', subscribers.length);

  await processSubscribers(client, participants, subscribers, argv.output);
}

async function readSubscribers(subscribersCsvPath: string): Promise<any> {
  const fileContent = fs.readFileSync(path.resolve(subscribersCsvPath), { encoding: 'utf-8' });
  const headers = ['subscriber_id', 'email', 'status', 'premium?', 'created_at', 'api_subscription_id'];

  return csvParse(fileContent, { delimiter: ',', columns: headers }).toArray();
}

function hashSubscriberId(subscriberId: string): string {
  return '0x' + crypto.createHash('sha256').update(subscriberId).digest('hex');
}

async function processSubscribers(client: StargateClient, participants: any[], subscribers: any[], outputPath: string) {
  // Map kyc_id to participant data
  const kycMap: Record<string, any> = {};
  participants.forEach((participant: any) => {
    kycMap[participant.kycId] = participant;
  });

  const onboardingTxsHeightMap: Record<string, { txHeight: number, pubkey: string }> = {};
  console.time('time_taken_searchTx');
  const onboardingTxs = await client.searchTx(`message.action='/cerc.onboarding.v1.MsgOnboardParticipant'`);
  console.timeEnd('time_taken_searchTx');

  console.log('Fetched onboardingTxs, count:', onboardingTxs.length);

  console.time('time_taken_decodingTxs');
  onboardingTxs.forEach(onboardingTx => {
    const rawPubkey = decodeTxRaw(onboardingTx.tx).authInfo.signerInfos[0].publicKey;
    if (!rawPubkey) {
      console.error('pubkey not found in tx', onboardingTx.hash);
      return;
    }

    const pubkey = decodePubkey(rawPubkey).value;

    // Determine sender
    const onboardParticipantEvent = onboardingTx.events.find(event => event.type === 'onboard_participant');
    if (!onboardParticipantEvent) {
      console.error('onboard_participant event not found in tx', onboardingTx.hash);
      return;
    }

    const sender = onboardParticipantEvent.attributes.find(attr => attr.key === 'signer')?.value;
    if (!sender) {
      console.error('sender not found in onboard_participant event for tx', onboardingTx.hash);
      return;
    }

    // Update if an entry already exists
    let latestTxHeight = onboardingTx.height;
    if (onboardingTxsHeightMap[sender]) {
      latestTxHeight = latestTxHeight > onboardingTxsHeightMap[sender].txHeight ? latestTxHeight : onboardingTxsHeightMap[sender].txHeight;
    }

    onboardingTxsHeightMap[sender] = { txHeight: latestTxHeight, pubkey };
  });
  console.timeEnd('time_taken_decodingTxs');

  const onboardedSubscribers: any[] = [];
  for (let i = 0; i < subscribers.length; i++) {
    const subscriber = subscribers[i];

    const hashedSubscriberId = hashSubscriberId(subscriber['subscriber_id']);
    const participant = kycMap[hashedSubscriberId];
    if (!participant) {
      continue;
    }

    const participantAddress = participant['cosmosAddress'];

    // Skip participant if an onboarding tx is not found
    if (!onboardingTxsHeightMap[participantAddress]) {
      continue;
    }

    const onboardedSubscriber = {
      subscriber_id: subscriber['subscriber_id'],
      email: subscriber['email'],
      status: subscriber['status'],
      'premium?': subscriber['premium?'],
      created_at: subscriber['created_at'],
      laconic_address: participantAddress,
      nitro_address: participant['nitroAddress'],
      role: participant['role'],
      hashed_subscriber_id: participant['kycId'],
      laconic_pubkey: onboardingTxsHeightMap[participantAddress].pubkey,
      onboarding_height: onboardingTxsHeightMap[participantAddress].txHeight
    };

    onboardedSubscribers.push(onboardedSubscriber);
  }

  const writer = csvWriter.createObjectCsvWriter({
    path: path.resolve(outputPath),
    header: [
      { id: 'subscriber_id', title: 'subscriber_id' },
      { id: 'email', title: 'email' },
      { id: 'status', title: 'status' },
      { id: 'premium?', title: 'premium?' },
      { id: 'created_at', title: 'created_at' },
      { id: 'laconic_address', title: 'laconic_address' },
      { id: 'nitro_address', title: 'nitro_address' },
      { id: 'role', title: 'role' },
      { id: 'hashed_subscriber_id', title: 'hashed_subscriber_id' },
      { id: 'laconic_pubkey', title: 'laconic_pubkey' },
      { id: 'onboarding_height', title: 'onboarding_height' },
    ],
    alwaysQuote: true
  });

  await writer.writeRecords(onboardedSubscribers);

  console.log(`Data has been written to ${path.resolve(outputPath)}`);
}

function _getArgv (): any {
  return yargs(hideBin(process.argv))
    .option('subscribersCsv', {
      alias: 's',
      type: 'string',
      demandOption: true,
      describe: 'Path to the subscribers CSV file',
    })
    .option('output', {
      alias: 'o',
      type: 'string',
      demandOption: true,
      describe: 'Path to the output CSV file',
    })
    .help()
    .argv;
}

main().catch(err => {
  console.log(err);
});
@ -1,114 +0,0 @@
|
||||
import * as fs from 'fs';
|
||||
import * as path from 'path';
|
||||
import yargs from 'yargs';
|
||||
import { hideBin } from 'yargs/helpers';
|
||||
import dotenv from 'dotenv';
|
||||
|
||||
import { Registry } from '@cerc-io/registry-sdk';
|
||||
|
||||
dotenv.config();
|
||||
|
||||
const LACONICD_GQL_ENDPOINT = process.env.LACONICD_GQL_ENDPOINT || 'https://laconicd.laconic.com/api';
|
||||
const LACONICD_RPC_ENDPOINT = process.env.LACONICD_RPC_ENDPOINT || 'https://laconicd.laconic.com';
|
||||
const LACONICD_CHAIN_ID = process.env.LACONICD_CHAIN_ID || 'laconic_9000-1';
|
||||
|
||||
async function main(): Promise<void> {
|
||||
const argv = _getArgv();
|
||||
|
||||
const registry = new Registry(LACONICD_GQL_ENDPOINT, LACONICD_RPC_ENDPOINT, LACONICD_CHAIN_ID);
|
||||
|
||||
console.time('time_taken_getParticipants');
|
||||
const participants = await registry.getParticipants();
|
||||
console.log('Fetched participants, count:', participants.length);
|
||||
console.timeEnd('time_taken_getParticipants');
|
||||
|
||||
let validators: Array<string> = await readValidators(argv.validatorsCsv);
|
||||
console.log('Read validators, count:', validators.length);
|
||||
|
||||
let stage1Allocations: Array<{ 'cosmos_address': string, balance: string }> = [];
|
||||
|
||||
const stage1Participants = participants.map((participant: any) => {
|
||||
const outputParticipant: any = {
|
||||
'cosmos_address': participant.cosmosAddress,
|
||||
'nitro_address': participant.nitroAddress,
|
||||
'kyc_id': participant.kycId
|
||||
};
|
||||
|
||||
if (validators.includes(participant.cosmosAddress)) {
|
||||
outputParticipant.role = 'validator';
|
||||
|
||||
stage1Allocations.push({
|
||||
cosmos_address: participant.cosmosAddress,
|
||||
        balance: argv.validatorAlloc
      });

      // Remove processed participant from validators list
      validators = validators.filter(val => val !== participant.cosmosAddress);
    } else {
      outputParticipant.role = 'participant';

      stage1Allocations.push({
        cosmos_address: participant.cosmosAddress,
        balance: argv.participantAlloc
      });
    }

    return outputParticipant;
  });

  // Provide allocs for remaining validators
  validators.forEach(val => {
    stage1Allocations.push({
      cosmos_address: val,
      balance: argv.validatorAlloc
    });
  });

  const participantsOutputFilePath = path.resolve(argv.output);
  fs.writeFileSync(participantsOutputFilePath, JSON.stringify(stage1Participants, null, 2));
  console.log(`Onboarded participants with filtered validators written to ${participantsOutputFilePath}`);

  const allocsOutputFilePath = path.resolve(argv.outputAllocs);
  fs.writeFileSync(allocsOutputFilePath, JSON.stringify(stage1Allocations, null, 2));
  console.log(`Stage1 allocations written to ${allocsOutputFilePath}`);
}

async function readValidators(subscribersCsvPath: string): Promise<string[]> {
  const fileContent = fs.readFileSync(path.resolve(subscribersCsvPath), { encoding: 'utf-8' });

  // Note: assumes CRLF line endings in the CSV
  return fileContent.split('\r\n').map(address => address.trim());
}

function _getArgv (): any {
  return yargs(hideBin(process.argv))
    .option('validatorsCsv', {
      type: 'string',
      demandOption: true,
      describe: 'Path to a CSV file with validators list',
    })
    .option('participantAlloc', {
      type: 'string',
      demandOption: true,
      describe: 'Participant stage1 balance allocation',
    })
    .option('validatorAlloc', {
      type: 'string',
      demandOption: true,
      describe: 'Validator stage1 balance allocation',
    })
    .option('output', {
      type: 'string',
      demandOption: true,
      describe: 'Path to the output JSON file',
    })
    .option('outputAllocs', {
      type: 'string',
      demandOption: true,
      describe: 'Path to the output JSON file with allocs',
    })
    .help()
    .argv;
}

main().catch(err => {
  console.log(err);
});
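The script above reads validator addresses from a CRLF-delimited CSV and removes each onboarded participant's address so only the remaining validators receive the default validator allocation. A minimal shell sketch of that CSV handling, using hypothetical addresses (`laconic1aaa`, `laconic1bbb`):

```bash
# Hypothetical validators CSV with CRLF line endings, as readValidators expects
printf 'laconic1aaa\r\nlaconic1bbb\r\n' > /tmp/validators.csv

# Strip carriage returns, then drop an already-onboarded validator (laconic1aaa)
tr -d '\r' < /tmp/validators.csv | grep -v '^laconic1aaa$'
# laconic1bbb
```

If the CSV were saved with plain LF endings, the `split('\r\n')` in `readValidators` would return a single element, so the CRLF assumption matters.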
@ -1,110 +0,0 @@
{
  "compilerOptions": {
    /* Visit https://aka.ms/tsconfig to read more about this file */

    /* Projects */
    // "incremental": true, /* Save .tsbuildinfo files to allow for incremental compilation of projects. */
    // "composite": true, /* Enable constraints that allow a TypeScript project to be used with project references. */
    // "tsBuildInfoFile": "./.tsbuildinfo", /* Specify the path to .tsbuildinfo incremental compilation file. */
    // "disableSourceOfProjectReferenceRedirect": true, /* Disable preferring source files instead of declaration files when referencing composite projects. */
    // "disableSolutionSearching": true, /* Opt a project out of multi-project reference checking when editing. */
    // "disableReferencedProjectLoad": true, /* Reduce the number of projects loaded automatically by TypeScript. */

    /* Language and Environment */
    "target": "es2016", /* Set the JavaScript language version for emitted JavaScript and include compatible library declarations. */
    // "lib": [], /* Specify a set of bundled library declaration files that describe the target runtime environment. */
    // "jsx": "preserve", /* Specify what JSX code is generated. */
    // "experimentalDecorators": true, /* Enable experimental support for legacy experimental decorators. */
    // "emitDecoratorMetadata": true, /* Emit design-type metadata for decorated declarations in source files. */
    // "jsxFactory": "", /* Specify the JSX factory function used when targeting React JSX emit, e.g. 'React.createElement' or 'h'. */
    // "jsxFragmentFactory": "", /* Specify the JSX Fragment reference used for fragments when targeting React JSX emit e.g. 'React.Fragment' or 'Fragment'. */
    // "jsxImportSource": "", /* Specify module specifier used to import the JSX factory functions when using 'jsx: react-jsx*'. */
    // "reactNamespace": "", /* Specify the object invoked for 'createElement'. This only applies when targeting 'react' JSX emit. */
    // "noLib": true, /* Disable including any library files, including the default lib.d.ts. */
    // "useDefineForClassFields": true, /* Emit ECMAScript-standard-compliant class fields. */
    // "moduleDetection": "auto", /* Control what method is used to detect module-format JS files. */

    /* Modules */
    "module": "commonjs", /* Specify what module code is generated. */
    // "rootDir": "./", /* Specify the root folder within your source files. */
    // "moduleResolution": "node10", /* Specify how TypeScript looks up a file from a given module specifier. */
    // "baseUrl": "./", /* Specify the base directory to resolve non-relative module names. */
    // "paths": {}, /* Specify a set of entries that re-map imports to additional lookup locations. */
    // "rootDirs": [], /* Allow multiple folders to be treated as one when resolving modules. */
    // "typeRoots": [], /* Specify multiple folders that act like './node_modules/@types'. */
    // "types": [], /* Specify type package names to be included without being referenced in a source file. */
    // "allowUmdGlobalAccess": true, /* Allow accessing UMD globals from modules. */
    // "moduleSuffixes": [], /* List of file name suffixes to search when resolving a module. */
    // "allowImportingTsExtensions": true, /* Allow imports to include TypeScript file extensions. Requires '--moduleResolution bundler' and either '--noEmit' or '--emitDeclarationOnly' to be set. */
    // "resolvePackageJsonExports": true, /* Use the package.json 'exports' field when resolving package imports. */
    // "resolvePackageJsonImports": true, /* Use the package.json 'imports' field when resolving imports. */
    // "customConditions": [], /* Conditions to set in addition to the resolver-specific defaults when resolving imports. */
    "resolveJsonModule": true, /* Enable importing .json files. */
    // "allowArbitraryExtensions": true, /* Enable importing files with any extension, provided a declaration file is present. */
    // "noResolve": true, /* Disallow 'import's, 'require's or '<reference>'s from expanding the number of files TypeScript should add to a project. */

    /* JavaScript Support */
    // "allowJs": true, /* Allow JavaScript files to be a part of your program. Use the 'checkJS' option to get errors from these files. */
    // "checkJs": true, /* Enable error reporting in type-checked JavaScript files. */
    // "maxNodeModuleJsDepth": 1, /* Specify the maximum folder depth used for checking JavaScript files from 'node_modules'. Only applicable with 'allowJs'. */

    /* Emit */
    "declaration": true, /* Generate .d.ts files from TypeScript and JavaScript files in your project. */
    // "declarationMap": true, /* Create sourcemaps for d.ts files. */
    // "emitDeclarationOnly": true, /* Only output d.ts files and not JavaScript files. */
    "sourceMap": true, /* Create source map files for emitted JavaScript files. */
    // "inlineSourceMap": true, /* Include sourcemap files inside the emitted JavaScript. */
    // "outFile": "./", /* Specify a file that bundles all outputs into one JavaScript file. If 'declaration' is true, also designates a file that bundles all .d.ts output. */
    "outDir": "dist", /* Specify an output folder for all emitted files. */
    // "removeComments": true, /* Disable emitting comments. */
    // "noEmit": true, /* Disable emitting files from a compilation. */
    // "importHelpers": true, /* Allow importing helper functions from tslib once per project, instead of including them per-file. */
    // "downlevelIteration": true, /* Emit more compliant, but verbose and less performant JavaScript for iteration. */
    // "sourceRoot": "", /* Specify the root path for debuggers to find the reference source code. */
    // "mapRoot": "", /* Specify the location where debugger should locate map files instead of generated locations. */
    // "inlineSources": true, /* Include source code in the sourcemaps inside the emitted JavaScript. */
    // "emitBOM": true, /* Emit a UTF-8 Byte Order Mark (BOM) in the beginning of output files. */
    // "newLine": "crlf", /* Set the newline character for emitting files. */
    // "stripInternal": true, /* Disable emitting declarations that have '@internal' in their JSDoc comments. */
    // "noEmitHelpers": true, /* Disable generating custom helper functions like '__extends' in compiled output. */
    // "noEmitOnError": true, /* Disable emitting files if any type checking errors are reported. */
    // "preserveConstEnums": true, /* Disable erasing 'const enum' declarations in generated code. */
    // "declarationDir": "./", /* Specify the output directory for generated declaration files. */

    /* Interop Constraints */
    // "isolatedModules": true, /* Ensure that each file can be safely transpiled without relying on other imports. */
    // "verbatimModuleSyntax": true, /* Do not transform or elide any imports or exports not marked as type-only, ensuring they are written in the output file's format based on the 'module' setting. */
    // "isolatedDeclarations": true, /* Require sufficient annotation on exports so other tools can trivially generate declaration files. */
    // "allowSyntheticDefaultImports": true, /* Allow 'import x from y' when a module doesn't have a default export. */
    "esModuleInterop": true, /* Emit additional JavaScript to ease support for importing CommonJS modules. This enables 'allowSyntheticDefaultImports' for type compatibility. */
    // "preserveSymlinks": true, /* Disable resolving symlinks to their realpath. This correlates to the same flag in node. */
    "forceConsistentCasingInFileNames": true, /* Ensure that casing is correct in imports. */

    /* Type Checking */
    "strict": true, /* Enable all strict type-checking options. */
    // "noImplicitAny": true, /* Enable error reporting for expressions and declarations with an implied 'any' type. */
    // "strictNullChecks": true, /* When type checking, take into account 'null' and 'undefined'. */
    // "strictFunctionTypes": true, /* When assigning functions, check to ensure parameters and the return values are subtype-compatible. */
    // "strictBindCallApply": true, /* Check that the arguments for 'bind', 'call', and 'apply' methods match the original function. */
    // "strictPropertyInitialization": true, /* Check for class properties that are declared but not set in the constructor. */
    // "noImplicitThis": true, /* Enable error reporting when 'this' is given the type 'any'. */
    // "useUnknownInCatchVariables": true, /* Default catch clause variables as 'unknown' instead of 'any'. */
    // "alwaysStrict": true, /* Ensure 'use strict' is always emitted. */
    // "noUnusedLocals": true, /* Enable error reporting when local variables aren't read. */
    // "noUnusedParameters": true, /* Raise an error when a function parameter isn't read. */
    // "exactOptionalPropertyTypes": true, /* Interpret optional property types as written, rather than adding 'undefined'. */
    // "noImplicitReturns": true, /* Enable error reporting for codepaths that do not explicitly return in a function. */
    // "noFallthroughCasesInSwitch": true, /* Enable error reporting for fallthrough cases in switch statements. */
    // "noUncheckedIndexedAccess": true, /* Add 'undefined' to a type when accessed using an index. */
    // "noImplicitOverride": true, /* Ensure overriding members in derived classes are marked with an override modifier. */
    // "noPropertyAccessFromIndexSignature": true, /* Enforces using indexed accessors for keys declared using an indexed type. */
    // "allowUnusedLabels": true, /* Disable error reporting for unused labels. */
    // "allowUnreachableCode": true, /* Disable error reporting for unreachable code. */

    /* Completeness */
    // "skipDefaultLibCheck": true, /* Skip type checking .d.ts files that are included with TypeScript. */
    "skipLibCheck": true /* Skip type checking all .d.ts files. */
  },
  "include": ["src"],
  "exclude": ["dist"]
}
scripts/yarn.lock (1999 lines): file diff suppressed because it is too large
@ -1,400 +0,0 @@
# Service Provider

* Follow [Set up a new service provider](#set-up-a-new-service-provider) to set up a new service provider (SP)
* If you already have an SP set up for stage1, follow [Update service provider for SAPO testnet](#update-service-provider-for-sapo-testnet) to update it for testnet2

## Set up a new service provider

Follow the steps from [service-provider-setup](https://git.vdb.to/cerc-io/testnet-ops/src/branch/main/service-provider-setup#service-provider-setup). After setup, you will have the following services running (your configuration will look similar to the examples listed below):

* laconicd chain RPC endpoint: <http://lcn-daemon.laconic.com:26657>
* laconicd GQL endpoint: <http://lcn-daemon.laconic.com:9473/api>
* laconic console: <http://lcn-console.laconic.com:8080/registry>
* webapp deployer API: <https://webapp-deployer-api.pwa.laconic.com>
* webapp deployer UI: <https://webapp-deployer-ui.pwa.laconic.com>

Follow the steps below to point your deployer to the SAPO testnet.

## Update service provider for SAPO testnet

* After a successful webapp-deployer setup with the SAPO testnet, your deployer will be available on <https://deploy.laconic.com>
* To create a project, users can either create a deployment auction that your deployer will bid on, or perform a targeted deployment using your deployer LRN

### Prerequisites

* A SAPO testnet node (see [Join SAPO testnet](./README.md#join-sapo-testnet))

### Stop services

* Stop the laconic-console deployment:

  ```bash
  # In the directory where the laconic-console deployment was created
  laconic-so deployment --dir laconic-console-deployment stop --delete-volumes
  ```

* Stop the webapp deployer:

  ```bash
  # In the directory where the webapp-deployer deployment was created
  laconic-so deployment --dir webapp-deployer stop
  laconic-so deployment --dir webapp-ui stop
  ```

### Update laconic console

* Remove the existing console deployment:

  ```bash
  # In the directory where the laconic-console deployment was created
  # Back up the config if required
  rm -rf laconic-console-deployment
  ```

* Follow the [laconic-console](stack-orchestrator/stacks/laconic-console/README.md) stack instructions to set up a new laconic-console deployment

* Example configuration:

  ```bash
  # CLI configuration

  # laconicd RPC endpoint (can be pointed to your node)
  CERC_LACONICD_RPC_ENDPOINT=https://laconicd-sapo.laconic.com

  # laconicd GQL endpoint (can be pointed to your node)
  CERC_LACONICD_GQL_ENDPOINT=https://laconicd-sapo.laconic.com/api

  CERC_LACONICD_CHAIN_ID=laconic-testnet-2

  # Your private key
  CERC_LACONICD_USER_KEY=

  # Your bond id (optional)
  CERC_LACONICD_BOND_ID=

  # Gas price to use for txs (default: 0.001alnt)
  # Used for automatic fee calculation; gas and fees need not be set in that case
  # Reference: https://git.vdb.to/cerc-io/laconic-registry-cli#gas-and-fees
  CERC_LACONICD_GASPRICE=

  # Console configuration

  # Laconicd (hosted) GQL endpoint (can be pointed to your node)
  LACONIC_HOSTED_ENDPOINT=https://laconicd-sapo.laconic.com
  ```

### Check authority and deployer record

* The stage1 testnet state has been carried over to testnet2; if you had an authority and records on stage1, they should be present in testnet2 as well

* Check your authority:

  ```bash
  # In the directory where the laconic-console deployment was created
  AUTHORITY=<your-authority>
  laconic-so deployment --dir laconic-console-deployment exec cli "laconic registry authority whois $AUTHORITY"
  ```

* Check your deployer record:

  ```bash
  PAYMENT_ADDRESS=<your-deployers-payment-address>
  laconic-so deployment --dir laconic-console-deployment exec cli "laconic registry record list --all --type WebappDeployer --paymentAddress $PAYMENT_ADDRESS"
  ```

### (Optional) Reserve a new authority

* Follow these steps if you want to reserve a new authority

* Create a bond:

  ```bash
  # An existing bond can also be used
  laconic-so deployment --dir laconic-console-deployment exec cli "laconic registry bond create --type alnt --quantity 100000000000"
  # {"bondId":"a742489e5817ef274187611dadb0e4284a49c087608b545ab6bd990905fb61f3"}

  # Set bond id
  BOND_ID=
  ```

* Reserve an authority:

  ```bash
  AUTHORITY=<your-authority>
  laconic-so deployment --dir laconic-console-deployment exec cli "laconic registry authority reserve $AUTHORITY"

  # Triggers an authority auction
  ```

* Obtain the authority auction id:

  ```bash
  laconic-so deployment --dir laconic-console-deployment exec cli "laconic registry authority whois $AUTHORITY"
  # "auction": {
  #   "id": "73e0b082a198c396009ce748804a9060c674a10045365d262c1584f99d2771c1"

  # Set auction id
  AUCTION_ID=
  ```

* Commit a bid to the auction:

  ```bash
  laconic-so deployment --dir laconic-console-deployment exec cli "laconic registry auction bid commit $AUCTION_ID 5000000 alnt --chain-id laconic-testnet-2"
  # {"reveal_file":"/app/out/bafyreiewi4osqyvrnljwwcb36fn6sr5iidfpuznqkz52gxc5ztt3jt4zmy.json"}

  # Set reveal file
  REVEAL_FILE=

  # Wait for the auction to move from the commit phase to the reveal phase
  laconic-so deployment --dir laconic-console-deployment exec cli "laconic registry auction get $AUCTION_ID"
  ```

* Reveal your bid using the reveal file generated while committing the bid:

  ```bash
  laconic-so deployment --dir laconic-console-deployment exec cli "laconic registry auction bid reveal $AUCTION_ID $REVEAL_FILE --chain-id laconic-testnet-2"
  # {"success": true}
  ```

* Verify the auction status and winner address after the auction completes:

  ```bash
  laconic-so deployment --dir laconic-console-deployment exec cli "laconic registry auction get $AUCTION_ID"
  ```

* Set the authority with a bond:

  ```bash
  laconic-so deployment --dir laconic-console-deployment exec cli "laconic registry authority bond set $AUTHORITY $BOND_ID"
  # {"success": true}
  ```

* Verify that the authority has been registered:

  ```bash
  laconic-so deployment --dir laconic-console-deployment exec cli "laconic registry authority whois $AUTHORITY"
  ```

* Update the laconic-console-deployment config (`laconic-console-deployment/config.env`) with the created bond:

  ```bash
  ...
  CERC_LACONICD_BOND_ID=<bond-id>
  ...
  ```

* Restart the console deployment:

  ```bash
  laconic-so deployment --dir laconic-console-deployment stop && laconic-so deployment --dir laconic-console-deployment start
  ```

### Update webapp deployer

* Fetch the latest stack repos:

  ```bash
  # In the directory where the webapp-deployer deployment was created
  laconic-so --stack webapp-deployer-backend setup-repositories --pull

  # Confirm the latest commit hash in the ~/cerc/webapp-deployment-status-api repo
  ```

* Rebuild container images:

  ```bash
  laconic-so --stack webapp-deployer-backend build-containers --force-rebuild
  ```

* Push stack images to the container registry:

  * Log in to the container registry:

    ```bash
    # Set required variables
    # eg: container-registry.pwa.laconic.com
    CONTAINER_REGISTRY_URL=
    CONTAINER_REGISTRY_USERNAME=
    CONTAINER_REGISTRY_PASSWORD=

    # Log in to the container registry
    docker login $CONTAINER_REGISTRY_URL --username $CONTAINER_REGISTRY_USERNAME --password $CONTAINER_REGISTRY_PASSWORD

    # WARNING! Using --password via the CLI is insecure. Use --password-stdin.
    # WARNING! Your password will be stored unencrypted in /home/dev2/.docker/config.json.
    # Configure a credential helper to remove this warning. See
    # https://docs.docker.com/engine/reference/commandline/login/#credential-stores

    # Login Succeeded
    ```

  * Push images:

    ```bash
    laconic-so deployment --dir webapp-deployer push-images
    ```

* Overwrite the existing compose file with the latest one:

  ```bash
  # In the directory where the webapp-deployer deployment folder exists
  cp ~/cerc/webapp-deployment-status-api/docker-compose.yml webapp-deployer/compose/docker-compose-webapp-deployer-backend.yml
  ```

* Update the deployer laconic registry config (`webapp-deployer/data/config/laconic.yml`) with the new endpoints:

  ```yaml
  services:
    registry:
      rpcEndpoint: "<your-sapo-rpc-endpoint>" # Eg. https://laconicd-sapo.laconic.com
      gqlEndpoint: "<your-sapo-gql-endpoint>" # Eg. https://laconicd-sapo.laconic.com/api
      userKey: "<userKey>"
      bondId: "<bondId>"
      chainId: laconic-testnet-2
      gasPrice: 0.001alnt
  ```

  Note: The existing `userKey` and `bondId` can be used since they are carried over from laconicd stage1 to testnet2

* Publish a new webapp deployer record:

  * Required if it doesn't already exist or some attribute needs to be updated

  * Set the following variables:

    ```bash
    # Path to the webapp-deployer directory
    # eg: /home/dev/webapp-deployer
    DEPLOYER_DIR=

    # Deployer LRN (logical resource name)
    # eg: "lrn://laconic/deployers/webapp-deployer-api.laconic.com"
    DEPLOYER_LRN=

    # Deployer API URL
    # eg: "https://webapp-deployer-api.pwa.laconic.com"
    API_URL=

    # Deployer GPG public key file path
    # eg: "/home/dev/webapp-deployer-api.laconic.com.pgp.pub"
    GPG_PUB_KEY_FILE_PATH=

    GPG_PUB_KEY_FILE=$(basename $GPG_PUB_KEY_FILE_PATH)
    ```

  * Delete the LRN if it currently resolves to an existing record:

    ```bash
    # In the directory where the laconic-console deployment was created
    laconic-so deployment --dir laconic-console-deployment exec cli "laconic registry name resolve $DEPLOYER_LRN"

    # Delete the name
    laconic-so deployment --dir laconic-console-deployment exec cli "laconic registry name delete $DEPLOYER_LRN"

    # Confirm deletion
    laconic-so deployment --dir laconic-console-deployment exec cli "laconic registry name resolve $DEPLOYER_LRN"
    ```

  * Copy the GPG pub key file over to webapp-deployer:

    ```bash
    cp $GPG_PUB_KEY_FILE_PATH webapp-deployer/data/config
    ```

  * Publish the deployer record:

    ```bash
    docker run -it \
      -v $DEPLOYER_DIR/data/config:/home/root/config \
      cerc/webapp-deployer-backend:local laconic-so publish-deployer-to-registry \
      --laconic-config /home/root/config/laconic.yml \
      --api-url $API_URL \
      --public-key-file /home/root/config/$GPG_PUB_KEY_FILE \
      --lrn $DEPLOYER_LRN \
      --min-required-payment 9500
    ```

* Update the deployer config (`webapp-deployer/config.env`):

  ```bash
  # Update the deployer LRN if it has changed
  export LRN=

  # Minimum payment to require for performing deployments
  export MIN_REQUIRED_PAYMENT=9500

  # Handle deployment auction requests
  export HANDLE_AUCTION_REQUESTS=true

  # Amount that the deployer will bid on deployment auctions
  export AUCTION_BID_AMOUNT=9500
  ```

* Start the webapp deployer:

  ```bash
  laconic-so deployment --dir webapp-deployer start
  ```

* Get the webapp-deployer pod id:

  ```bash
  laconic-so deployment --dir webapp-deployer ps

  # Expected output:
  # Running containers:
  # id: default/laconic-096fed46af974a47-deployment-644db859c7-snbq6, name: laconic-096fed46af974a47-deployment-644db859c7-snbq6, ports: 10.42.2.11:9555->9555

  # Set pod id
  export POD_ID=

  # Example:
  # export POD_ID=laconic-096fed46af974a47-deployment-644db859c7-snbq6
  ```

* Copy the GPG key files over to the webapp-deployer container:

  ```bash
  kubie ctx default

  # Copy the GPG key files to the pod
  kubectl cp <path-to-your-gpg-private-key> $POD_ID:/app
  kubectl cp <path-to-your-gpg-public-key> $POD_ID:/app

  # Required every time you stop and start the deployer
  ```

* Check logs:

  ```bash
  # Deployer
  kubectl logs -f $POD_ID

  # Deployer auction handler
  kubectl logs -f $POD_ID -c cerc-webapp-auction-handler
  ```

* Update the deployer UI config (`webapp-ui/config.env`):

  ```bash
  # URL of the webapp deployer backend API
  # eg: https://webapp-deployer-api.pwa.laconic.com
  LACONIC_HOSTED_CONFIG_app_api_url=

  # URL of the laconic console
  LACONIC_HOSTED_CONFIG_app_console_link=https://console-sapo.laconic.com
  ```

* Start the webapp UI:

  ```bash
  laconic-so deployment --dir webapp-ui start
  ```

* Check logs:

  ```bash
  laconic-so deployment --dir webapp-ui logs webapp
  ```
@ -1,33 +1,9 @@
 services:
-  cli:
-    image: cerc/laconic-registry-cli:local
-    command: ["bash", "-c", "/app/create-config.sh && tail -f /dev/null"]
-    environment:
-      CERC_LACONICD_RPC_ENDPOINT: ${CERC_LACONICD_RPC_ENDPOINT:-http://laconicd:26657}
-      CERC_LACONICD_GQL_ENDPOINT: ${CERC_LACONICD_GQL_ENDPOINT:-http://laconicd:9473/api}
-      CERC_LACONICD_CHAIN_ID: ${CERC_LACONICD_CHAIN_ID:-laconic_9000-1}
-      CERC_LACONICD_USER_KEY: ${CERC_LACONICD_USER_KEY}
-      CERC_LACONICD_BOND_ID: ${CERC_LACONICD_BOND_ID}
-      CERC_LACONICD_GAS: ${CERC_LACONICD_GAS:-200000}
-      CERC_LACONICD_FEES: ${CERC_LACONICD_FEES:-200alnt}
-      CERC_LACONICD_GASPRICE: ${CERC_LACONICD_GASPRICE:-0.001alnt}
-    volumes:
-      - ../config/laconic-console/cli/create-config.sh:/app/create-config.sh
-      - laconic-registry-data:/laconic-registry-data
-    extra_hosts:
-      - "host.docker.internal:host-gateway"
-
-  console:
+  laconic-console:
     restart: unless-stopped
     image: cerc/laconic-console-host:local
     environment:
-      CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
-      CERC_WEBAPP_FILES_DIR: ${CERC_WEBAPP_FILES_DIR:-/usr/local/share/.config/yarn/global/node_modules/@cerc-io/console-app/dist/production}
-      LACONIC_HOSTED_ENDPOINT: ${LACONIC_HOSTED_ENDPOINT:-http://localhost:9473}
-    volumes:
-      - ../config/laconic-console/console/config.yml:/config/config.yml
+      - CERC_WEBAPP_FILES_DIR=${CERC_WEBAPP_FILES_DIR:-/usr/local/share/.config/yarn/global/node_modules/@cerc-io/console-app/dist/production}
+      - LACONIC_HOSTED_ENDPOINT=${LACONIC_HOSTED_ENDPOINT:-http://localhost:9473}
     ports:
       - "80"
-
-volumes:
-  laconic-registry-data:
@ -1,29 +0,0 @@
services:
  faucet:
    restart: unless-stopped
    image: cerc/laconic-faucet:local
    command: ["bash", "-c", "./start-faucet.sh"]
    environment:
      CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
      CERC_LACONICD_RPC_ENDPOINT: ${CERC_LACONICD_RPC_ENDPOINT:-http://laconicd:26657}
      CERC_FAUCET_KEY: ${CERC_FAUCET_KEY}
      CERC_LACONICD_CHAIN_ID: ${CERC_LACONICD_CHAIN_ID:-laconic_9000-1}
      CERC_TRANSFER_AMOUNT: ${CERC_TRANSFER_AMOUNT:-10000000000}
      CERC_PERIOD_TRANSFER_LIMIT: ${CERC_PERIOD_TRANSFER_LIMIT:-30000000000}
    volumes:
      - faucet-data:/app/db
      - ../config/laconic-faucet/start-faucet.sh:/app/start-faucet.sh
      - ../config/laconic-faucet/config-template.toml:/app/environments/config-template.toml
    ports:
      - 3000
    healthcheck:
      test: ["CMD", "nc", "-vz", "127.0.0.1", "3000"]
      interval: 10s
      timeout: 5s
      retries: 10
      start_period: 5s
    extra_hosts:
      - "host.docker.internal:host-gateway"

volumes:
  faucet-data:
@ -1,49 +0,0 @@
services:
  shopify:
    restart: unless-stopped
    image: cerc/laconic-shopify:local
    depends_on:
      faucet:
        condition: service_healthy
    command: ["bash", "-c", "./start-shopify.sh"]
    environment:
      CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
      CERC_SHOPIFY_GRAPHQL_URL: ${CERC_SHOPIFY_GRAPHQL_URL}
      CERC_SHOPIFY_ACCESS_TOKEN: ${CERC_SHOPIFY_ACCESS_TOKEN}
      CERC_FETCH_ORDER_DELAY: ${CERC_FETCH_ORDER_DELAY:-10000}
      CERC_FAUCET_URL: http://faucet:3000/
      CERC_ITEMS_PER_ORDER: ${CERC_ITEMS_PER_ORDER:-10}
    volumes:
      - shopify-data:/app/data
      - ../config/laconic-shopify/start-shopify.sh:/app/start-shopify.sh
      - ../config/laconic-shopify/product_pricings.json:/app/config/product_pricings.json
    extra_hosts:
      - "host.docker.internal:host-gateway"

  faucet:
    restart: unless-stopped
    image: cerc/laconic-shopify-faucet:local
    command: ["bash", "-c", "./start-faucet.sh"]
    environment:
      CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
      CERC_LACONICD_RPC_ENDPOINT: ${CERC_LACONICD_RPC_ENDPOINT:-https://laconicd-sapo.laconic.com}
      CERC_FAUCET_KEY: ${CERC_FAUCET_KEY}
      CERC_LACONICD_CHAIN_ID: ${CERC_LACONICD_CHAIN_ID:-laconic-testnet-2}
      CERC_LACONICD_PREFIX: ${CERC_LACONICD_PREFIX:-laconic}
      CERC_LACONICD_GAS_PRICE: ${CERC_LACONICD_GAS_PRICE:-0.001}
    volumes:
      - faucet-data:/app/db
      - ../config/laconic-shopify/start-faucet.sh:/app/start-faucet.sh
      - ../config/laconic-shopify/config-template.toml:/app/environments/config-template.toml
    healthcheck:
      test: ["CMD", "nc", "-vz", "127.0.0.1", "3000"]
      interval: 10s
      timeout: 5s
      retries: 10
      start_period: 5s
    extra_hosts:
      - "host.docker.internal:host-gateway"

volumes:
  shopify-data:
  faucet-data:
@ -0,0 +1,30 @@
services:
  laconicd:
    # Quoted so YAML does not parse it as the boolean false
    restart: "no"
    image: cerc/laconic2d:local
    command: ["/bin/sh", "-c", "/opt/run-laconicd.sh"]
    volumes:
      - laconicd-data:/root/.laconicd/data
      - laconicd-config:/root/.laconicd/config
      - laconicd-keyring:/root/.laconicd/keyring-test
      - ../config/laconicd/scripts/run-laconicd.sh:/opt/run-laconicd.sh
      - ../config/laconicd/scripts/export-mykey.sh:/docker-entrypoint-scripts.d/export-mykey.sh
      - ../config/laconicd/scripts/export-myaddress.sh:/docker-entrypoint-scripts.d/export-myaddress.sh
    # TODO: determine which of the ports below is really needed
    ports:
      - "6060"
      - "26657"
      - "26656"
      - "9473"
      - "9090"
      - "9091"
      - "1317"

  cli:
    image: cerc/laconic-registry-cli:local
    volumes:
      - ../config/laconicd/registry-cli-config-template.yml:/registry-cli-config-template.yml

volumes:
  laconicd-data:
  laconicd-config:
  laconicd-keyring:
@ -1,32 +0,0 @@
services:
  laconicd:
    restart: unless-stopped
    image: cerc/laconicd:local
    command: ["bash", "-c", "/opt/run-laconicd.sh"]
    environment:
      CERC_MONIKER: ${CERC_MONIKER:-TestnetNode}
      CERC_CHAIN_ID: ${CERC_CHAIN_ID:-laconic_9000-1}
      CERC_PEERS: ${CERC_PEERS}
      MIN_GAS_PRICE: ${MIN_GAS_PRICE:-0.001}
      CERC_LOGLEVEL: ${CERC_LOGLEVEL:-info}
    volumes:
      - laconicd-data:/root/.laconicd
      - ../config/laconicd/run-laconicd.sh:/opt/run-laconicd.sh
    ports:
      - "6060"
      - "26657"
      - "26656"
      - "9473"
      - "9090"
      - "1317"
    healthcheck:
      test: ["CMD", "nc", "-vz", "127.0.0.1", "26657"]
      interval: 30s
      timeout: 10s
      retries: 10
      start_period: 10s
    extra_hosts:
      - "host.docker.internal:host-gateway"

volumes:
  laconicd-data:
@ -1,24 +0,0 @@
|
||||
#!/bin/bash
|
||||
|
||||
if [[ -n "$CERC_SCRIPT_DEBUG" ]]; then
|
||||
set -x
|
||||
fi
|
||||
|
||||
set -e
|
||||
|
||||
# Create the required config file
|
||||
config_file="/app/config.yml"
|
||||
cat <<EOF > $config_file
|
||||
services:
|
||||
registry:
|
||||
rpcEndpoint: ${CERC_LACONICD_RPC_ENDPOINT}
|
||||
gqlEndpoint: ${CERC_LACONICD_GQL_ENDPOINT}
|
||||
userKey: "${CERC_LACONICD_USER_KEY}"
|
||||
bondId: ${CERC_LACONICD_BOND_ID}
|
||||
chainId: ${CERC_LACONICD_CHAIN_ID}
|
||||
gas: ${CERC_LACONICD_GAS}
|
||||
fees: ${CERC_LACONICD_FEES}
|
||||
gasPrice: ${CERC_LACONICD_GASPRICE}
|
||||
EOF
|
||||
|
||||
echo "Exported config to $config_file"
|
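With the defaults documented for this stack (RPC `http://laconicd:26657`, GQL `http://laconicd:9473/api`, gas `200000`, fees `200alnt`, gas price `0.001alnt`), the generated `/app/config.yml` would look roughly like the fragment below; the user key and bond id shown are placeholders:

```yaml
# Illustrative result of the heredoc above; real values come from CERC_LACONICD_* env vars
services:
  registry:
    rpcEndpoint: http://laconicd:26657
    gqlEndpoint: http://laconicd:9473/api
    userKey: "<funded-account-private-key>"
    bondId: <bond-id>
    chainId: laconic_9000-1
    gas: 200000
    fees: 200alnt
    gasPrice: 0.001alnt
```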
@ -1,13 +0,0 @@
|
||||
[upstream]
|
||||
rpcEndpoint = "REPLACE_WITH_CERC_LACONICD_RPC_ENDPOINT"
|
||||
chainId = "laconic_9000-1"
|
||||
denom = "alnt"
|
||||
prefix = "laconic"
|
||||
gasPrice = "1"
|
||||
faucetKey = "REPLACE_WITH_CERC_FAUCET_KEY"
|
||||
|
||||
[server]
|
||||
port=3000
|
||||
transferAmount = "REPLACE_WITH_CERC_TRANSFER_AMOUNT"
|
||||
periodTransferLimit = "REPLACE_WITH_CERC_PERIOD_TRANSFER_LIMIT"
|
||||
dbDir = "db"
|
@ -1,30 +0,0 @@
|
||||
#!/bin/bash
|
||||
|
||||
set -e
|
||||
set -u
|
||||
if [ -n "${CERC_SCRIPT_DEBUG:-}" ]; then
|
||||
set -x
|
||||
fi
|
||||
|
||||
config_template=$(cat environments/config-template.toml)
|
||||
target_config="./environments/local.toml"
|
||||
|
||||
# Check if faucet key is set
|
||||
if [ -z "${CERC_FAUCET_KEY:-}" ]; then
|
||||
echo "Error: CERC_FAUCET_KEY is not set. Exiting..."
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo "Using laconicd RPC endpoint: $CERC_LACONICD_RPC_ENDPOINT"
|
||||
echo "Transfer amount per request: $CERC_TRANSFER_AMOUNT"
|
||||
echo "Transfer limit for an address within a period: $CERC_PERIOD_TRANSFER_LIMIT"
|
||||
|
||||
FAUCET_CONFIG=$(echo "$config_template" | \
|
||||
sed -E "s|REPLACE_WITH_CERC_FAUCET_KEY|${CERC_FAUCET_KEY}|g; \
|
||||
s|REPLACE_WITH_CERC_LACONICD_RPC_ENDPOINT|${CERC_LACONICD_RPC_ENDPOINT}|g; \
|
||||
s|REPLACE_WITH_CERC_TRANSFER_AMOUNT|${CERC_TRANSFER_AMOUNT}|g; \
|
||||
s|REPLACE_WITH_CERC_PERIOD_TRANSFER_LIMIT|${CERC_PERIOD_TRANSFER_LIMIT}|; ")
|
||||
|
||||
echo "$FAUCET_CONFIG" > $target_config
|
||||
echo "Starting faucet..."
|
||||
node dist/index.js
|
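The `sed` pipeline above swaps each `REPLACE_WITH_*` placeholder token in the template for an environment value. A minimal sketch of the same pattern (the template string and variable below are illustrative, not taken from the real template):

```shell
#!/bin/sh
# Illustrative one-placeholder version of the template substitution above
template='rpcEndpoint = "REPLACE_WITH_RPC_ENDPOINT"'
RPC_ENDPOINT="http://laconicd:26657"
# Using | as the sed delimiter avoids having to escape the slashes in the URL
result=$(echo "$template" | sed -E "s|REPLACE_WITH_RPC_ENDPOINT|${RPC_ENDPOINT}|g")
echo "$result"
# prints: rpcEndpoint = "http://laconicd:26657"
```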
@ -1,12 +0,0 @@
|
||||
[upstream]
|
||||
rpcEndpoint = "REPLACE_WITH_CERC_LACONICD_RPC_ENDPOINT"
|
||||
chainId = "REPLACE_WITH_CERC_LACONICD_CHAIN_ID"
|
||||
prefix = "REPLACE_WITH_CERC_LACONICD_PREFIX"
|
||||
gasPrice = "REPLACE_WITH_CERC_LACONICD_GAS_PRICE"
|
||||
faucetKey = "REPLACE_WITH_CERC_FAUCET_KEY"
|
||||
denom = "alnt"
|
||||
|
||||
[server]
|
||||
port=3000
|
||||
enableRateLimit=false
|
||||
dbDir = "db"
|
@ -1,6 +0,0 @@
|
||||
{
|
||||
"10 pre-paid webapp deployments": "100000",
|
||||
"100 webapp deployments": "1000000",
|
||||
"500 webapp deployments": "5000000",
|
||||
"1000 webapp deployments": "10000000"
|
||||
}
|
@ -1,29 +0,0 @@
|
||||
#!/bin/bash
|
||||
|
||||
set -e
|
||||
set -u
|
||||
if [ -n "${CERC_SCRIPT_DEBUG:-}" ]; then
|
||||
set -x
|
||||
fi
|
||||
|
||||
config_template=$(cat environments/config-template.toml)
|
||||
target_config="./environments/local.toml"
|
||||
|
||||
# Check if faucet key is set
|
||||
if [ -z "${CERC_FAUCET_KEY:-}" ]; then
|
||||
echo "Error: CERC_FAUCET_KEY is not set. Exiting..."
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo "Using laconicd RPC endpoint: $CERC_LACONICD_RPC_ENDPOINT"
|
||||
|
||||
FAUCET_CONFIG=$(echo "$config_template" | \
|
||||
sed -E "s|REPLACE_WITH_CERC_FAUCET_KEY|${CERC_FAUCET_KEY}|g; \
|
||||
s|REPLACE_WITH_CERC_LACONICD_CHAIN_ID|${CERC_LACONICD_CHAIN_ID}|g; \
|
||||
s|REPLACE_WITH_CERC_LACONICD_PREFIX|${CERC_LACONICD_PREFIX}|g; \
|
||||
s|REPLACE_WITH_CERC_LACONICD_GAS_PRICE|${CERC_LACONICD_GAS_PRICE}|g; \
|
||||
s|REPLACE_WITH_CERC_LACONICD_RPC_ENDPOINT|${CERC_LACONICD_RPC_ENDPOINT}|g; ")
|
||||
|
||||
echo "$FAUCET_CONFIG" > $target_config
|
||||
echo "Starting faucet..."
|
||||
node dist/index.js
|
@ -1,21 +0,0 @@
|
||||
#!/bin/bash
|
||||
|
||||
set -e
|
||||
set -u
|
||||
if [ -n "${CERC_SCRIPT_DEBUG:-}" ]; then
|
||||
set -x
|
||||
fi
|
||||
|
||||
echo "Shopify GraphQL URL: $CERC_SHOPIFY_GRAPHQL_URL"
|
||||
echo "Shopify access token: $CERC_SHOPIFY_ACCESS_TOKEN"
|
||||
echo "Fetch order delay: $CERC_FETCH_ORDER_DELAY"
|
||||
echo "Faucet URL: $CERC_FAUCET_URL"
|
||||
echo "Number of line items per order: $CERC_ITEMS_PER_ORDER"
|
||||
|
||||
export SHOPIFY_GRAPHQL_URL=$CERC_SHOPIFY_GRAPHQL_URL
|
||||
export SHOPIFY_ACCESS_TOKEN=$CERC_SHOPIFY_ACCESS_TOKEN
|
||||
export FETCH_ORDER_DELAY=$CERC_FETCH_ORDER_DELAY
|
||||
export FAUCET_URL=$CERC_FAUCET_URL
|
||||
export ITEMS_PER_ORDER=$CERC_ITEMS_PER_ORDER
|
||||
|
||||
yarn start
|
@ -0,0 +1,9 @@
|
||||
services:
|
||||
cns:
|
||||
restEndpoint: 'http://laconicd:1317'
|
||||
gqlEndpoint: 'http://laconicd:9473/api'
|
||||
userKey: REPLACE_WITH_MYKEY
|
||||
bondId:
|
||||
chainId: laconic_9000-1
|
||||
gas: 250000
|
||||
fees: 200000aphoton
|
@ -1,57 +0,0 @@
|
||||
#!/bin/bash
|
||||
|
||||
if [[ -n "$CERC_SCRIPT_DEBUG" ]]; then
|
||||
set -x
|
||||
fi
|
||||
|
||||
set -e
|
||||
|
||||
input_genesis_file=/root/.laconicd/tmp/genesis.json
|
||||
if [ ! -f ${input_genesis_file} ]; then
|
||||
echo "Genesis file not provided, exiting..."
|
||||
exit 1
|
||||
fi
|
||||
|
||||
if [ -z "$CERC_PEERS" ]; then
|
||||
echo "Persistent peers not provided, exiting..."
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo "Env:"
|
||||
echo "Moniker: $CERC_MONIKER"
|
||||
echo "Chain Id: $CERC_CHAIN_ID"
|
||||
echo "Persistent peers: $CERC_PEERS"
|
||||
echo "Min gas price: $MIN_GAS_PRICE"
|
||||
echo "Log level: $CERC_LOGLEVEL"
|
||||
|
||||
NODE_HOME=/root/.laconicd
|
||||
|
||||
# Set chain id in config
|
||||
laconicd config set client chain-id $CERC_CHAIN_ID --home $NODE_HOME
|
||||
|
||||
# Check if node data dir already exists
|
||||
if [ -z "$(ls -A "$NODE_HOME/data")" ]; then
|
||||
# Init node
|
||||
echo "Initializing a new laconicd node with moniker $CERC_MONIKER and chain id $CERC_CHAIN_ID"
|
||||
laconicd init $CERC_MONIKER --chain-id=$CERC_CHAIN_ID --home $NODE_HOME
|
||||
|
||||
# Use provided genesis config
|
||||
cp $input_genesis_file $NODE_HOME/config/genesis.json
|
||||
else
|
||||
echo "Node data dir $NODE_HOME/data already exists, skipping initialization..."
|
||||
fi
|
||||
|
||||
# Enable cors
|
||||
sed -i 's/cors_allowed_origins.*$/cors_allowed_origins = ["*"]/' $HOME/.laconicd/config/config.toml
|
||||
|
||||
# Update config with persistent peers
|
||||
sed -i "s/^persistent_peers *=.*/persistent_peers = \"$CERC_PEERS\"/g" $NODE_HOME/config/config.toml
|
||||
|
||||
echo "Starting laconicd node..."
|
||||
laconicd start \
|
||||
--api.enable \
|
||||
--minimum-gas-prices=${MIN_GAS_PRICE}alnt \
|
||||
--rpc.laddr="tcp://0.0.0.0:26657" \
|
||||
--gql-playground --gql-server \
|
||||
--log_level $CERC_LOGLEVEL \
|
||||
--home $NODE_HOME
|
@ -0,0 +1,2 @@
|
||||
#!/bin/sh
|
||||
laconicd keys show mykey | grep address | cut -d ' ' -f 3
|
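The grep/cut pipeline above extracts the third space-separated field from the `address:` line of the `laconicd keys show` output. A standalone sketch with a sample line standing in for the real command output (the address shown is a made-up placeholder):

```shell
#!/bin/sh
# Sample line mimicking one line of `laconicd keys show mykey` output (illustrative)
line='- address: laconic1exampleaddress'
# Field 1 is "-", field 2 is "address:", field 3 is the bech32 address
echo "$line" | grep address | cut -d ' ' -f 3
# prints: laconic1exampleaddress
```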
@ -0,0 +1,2 @@
|
||||
#!/bin/sh
|
||||
echo y | laconicd keys export mykey --unarmored-hex --unsafe
|
47
stack-orchestrator/config/laconicd/scripts/run-laconicd.sh
Executable file
@ -0,0 +1,47 @@
|
||||
#!/bin/sh
|
||||
if [ -n "$CERC_SCRIPT_DEBUG" ]; then
|
||||
set -x
|
||||
fi
|
||||
|
||||
# TODO: Read from env
|
||||
MONIKER=MyNode
|
||||
CHAIN_ID=laconic_9000-1
|
||||
GENESIS_FILE_URL="/root/.laconicd/config/genesis.json"
|
||||
PEERS=""
|
||||
LOGLEVEL="info"
|
||||
|
||||
if [ -z "$PEERS" ]; then
|
||||
echo "Persistent peers not provided, exiting..."
|
||||
exit 1
|
||||
else
|
||||
echo "Using persistent peers $PEERS"
|
||||
fi
|
||||
|
||||
echo "Env:"
|
||||
echo "Moniker: $MONIKER"
|
||||
echo "Chain Id: $CHAIN_ID"
|
||||
echo "Genesis file: $GENESIS_FILE_URL"
|
||||
echo "Persistent peers: $PEERS"
|
||||
|
||||
NODE_HOME=/root/.laconicd
|
||||
|
||||
# Set chain id in config
|
||||
laconicd config set client chain-id $CHAIN_ID --home $NODE_HOME
|
||||
|
||||
# Check if node data dir already exists
|
||||
if [ -z "$(ls -A "$NODE_HOME/data")" ]; then
|
||||
# Init node
|
||||
echo "Initializing a new laconicd node with moniker $MONIKER and chain id $CHAIN_ID"
|
||||
laconicd init $MONIKER --chain-id=$CHAIN_ID --home $NODE_HOME
|
||||
|
||||
# Fetch genesis config
|
||||
echo "Fetching genesis file from $GENESIS_FILE_URL"
|
||||
curl -o $NODE_HOME/config/genesis.json $GENESIS_FILE_URL
|
||||
else
|
||||
echo "Node data dir $NODE_HOME/data already exists, skipping initialization..."
|
||||
fi
|
||||
|
||||
# Update config with persistent peers
|
||||
sed -i "s/^persistent_peers *=.*/persistent_peers = \"$PEERS\"/g" $NODE_HOME/config/config.toml
|
||||
|
||||
laconicd start --gql-playground --gql-server --log_level $LOGLEVEL --home $NODE_HOME
|
@ -1,16 +1,15 @@
|
||||
FROM cerc/webapp-base:local
|
||||
|
||||
# Configure the cerc-io npm registry
|
||||
RUN npm config set @cerc-io:registry https://git.vdb.to/api/packages/cerc-io/npm/
|
||||
RUN npm config set @lirewine:registry https://git.vdb.to/api/packages/cerc-io/npm/
|
||||
# This container pulls npm packages from a local registry configured via these env vars
|
||||
ARG CERC_NPM_REGISTRY_URL
|
||||
ARG CERC_NPM_AUTH_TOKEN
|
||||
|
||||
WORKDIR /app
|
||||
# Configure the local npm registry
|
||||
RUN npm config set @cerc-io:registry ${CERC_NPM_REGISTRY_URL} \
|
||||
&& npm config set @lirewine:registry ${CERC_NPM_REGISTRY_URL} \
|
||||
&& npm config set -- ${CERC_NPM_REGISTRY_URL}:_authToken ${CERC_NPM_AUTH_TOKEN}
|
||||
|
||||
COPY . .
|
||||
# Globally install the payload web app package
|
||||
RUN yarn global add @cerc-io/console-app
|
||||
|
||||
RUN echo "Installing dependencies" && yarn
|
||||
RUN echo "Building" && LACONIC_HOSTED_CONFIG_FILE=config-hosted.yml yarn dist
|
||||
|
||||
RUN npm config set @cerc-io:registry https://git.vdb.to/api/packages/cerc-io/npm/
|
||||
|
||||
RUN yarn global add file:$PWD
|
||||
COPY ./config.yml /config
|
||||
|
@ -1,10 +1,11 @@
|
||||
#!/usr/bin/env bash
|
||||
|
||||
# Build cerc/laconic-console-host
|
||||
# Build cerc/laconic-registry-cli
|
||||
|
||||
source ${CERC_CONTAINER_BASE_DIR}/build-base.sh
|
||||
|
||||
# See: https://stackoverflow.com/a/246128/1701505
|
||||
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
|
||||
|
||||
docker build -t cerc/laconic-console-host:local ${build_command_args} -f ${SCRIPT_DIR}/Dockerfile ${CERC_REPO_BASE_DIR}/laconic-console
|
||||
docker build -t cerc/laconic-console-host:local ${build_command_args} -f ${SCRIPT_DIR}/Dockerfile \
|
||||
--add-host gitea.local:host-gateway \
|
||||
--build-arg CERC_NPM_AUTH_TOKEN --build-arg CERC_NPM_REGISTRY_URL ${SCRIPT_DIR}
|
||||
|
@ -1,4 +1,4 @@
|
||||
# Config for laconic-console
|
||||
# Config for laconic-console running in a fixturenet with laconicd
|
||||
|
||||
services:
|
||||
wns:
|
@ -1,5 +0,0 @@
|
||||
#!/usr/bin/env bash
|
||||
|
||||
# Build cerc/laconic-faucet
|
||||
source ${CERC_CONTAINER_BASE_DIR}/build-base.sh
|
||||
docker build -t cerc/laconic-faucet:local ${build_command_args} ${CERC_REPO_BASE_DIR}/laconic-faucet
|
@ -1,20 +1,65 @@
|
||||
# Node.js version (use -bullseye variants on local arm64/Apple Silicon): 18, 16, 14, 18-bullseye, 16-bullseye, 14-bullseye, 18-buster, 16-buster, 14-buster
|
||||
# Originally from: https://github.com/devcontainers/images/blob/main/src/javascript-node/.devcontainer/Dockerfile
|
||||
# [Choice] Node.js version (use -bullseye variants on local arm64/Apple Silicon): 18, 16, 14, 18-bullseye, 16-bullseye, 14-bullseye, 18-buster, 16-buster, 14-buster
|
||||
ARG VARIANT=18-bullseye
|
||||
FROM node:${VARIANT}
|
||||
|
||||
RUN apt-get update \
|
||||
&& apt-get -y install --no-install-recommends python3 jq bash curl nano
|
||||
ARG USERNAME=node
|
||||
ARG NPM_GLOBAL=/usr/local/share/npm-global
|
||||
|
||||
WORKDIR /app
|
||||
# This container pulls npm packages from a local registry configured via these env vars
|
||||
ARG CERC_NPM_REGISTRY_URL
|
||||
ARG CERC_NPM_AUTH_TOKEN
|
||||
|
||||
COPY . .
|
||||
# Add NPM global to PATH.
|
||||
ENV PATH=${NPM_GLOBAL}/bin:${PATH}
|
||||
# Prevents npm from printing version warnings
|
||||
ENV NPM_CONFIG_UPDATE_NOTIFIER=false
|
||||
|
||||
RUN echo "Installing dependencies and building laconic-registry-cli" && \
|
||||
yarn && yarn build
|
||||
RUN \
|
||||
# Configure global npm install location, use group to adapt to UID/GID changes
|
||||
if ! cat /etc/group | grep -e "^npm:" > /dev/null 2>&1; then groupadd -r npm; fi \
|
||||
&& usermod -a -G npm ${USERNAME} \
|
||||
&& umask 0002 \
|
||||
&& mkdir -p ${NPM_GLOBAL} \
|
||||
&& touch /usr/local/etc/npmrc \
|
||||
&& chown ${USERNAME}:npm ${NPM_GLOBAL} /usr/local/etc/npmrc \
|
||||
&& chmod g+s ${NPM_GLOBAL} \
|
||||
&& npm config -g set prefix ${NPM_GLOBAL} \
|
||||
&& su ${USERNAME} -c "npm config -g set prefix ${NPM_GLOBAL}" \
|
||||
# Install eslint
|
||||
&& su ${USERNAME} -c "umask 0002 && npm install -g eslint" \
|
||||
&& npm cache clean --force > /dev/null 2>&1
|
||||
|
||||
# Globally install the cli binary
|
||||
RUN npm config set @cerc-io:registry https://git.vdb.to/api/packages/cerc-io/npm/
|
||||
RUN yarn global add file:$PWD
|
||||
# [Optional] Uncomment this section to install additional OS packages.
|
||||
RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
|
||||
&& apt-get -y install --no-install-recommends jq
|
||||
|
||||
# [Optional] Uncomment if you want to install an additional version of node using nvm
|
||||
# ARG EXTRA_NODE_VERSION=10
|
||||
# RUN su node -c "source /usr/local/share/nvm/nvm.sh && nvm install ${EXTRA_NODE_VERSION}"
|
||||
|
||||
# [Optional] Uncomment if you want to install more global node modules
|
||||
# RUN su node -c "npm install -g <your-package-list-here>"
|
||||
|
||||
# Configure the local npm registry
|
||||
RUN npm config set @cerc-io:registry ${CERC_NPM_REGISTRY_URL} \
|
||||
&& npm config set @lirewine:registry ${CERC_NPM_REGISTRY_URL} \
|
||||
&& npm config set -- ${CERC_NPM_REGISTRY_URL}:_authToken ${CERC_NPM_AUTH_TOKEN}
|
||||
|
||||
# TODO: the image at this point could be made a base image for several different CLI images
|
||||
# that install different Node-based CLI commands
|
||||
|
||||
# Globally install the cli package
|
||||
RUN yarn global add @cerc-io/laconic-registry-cli
|
||||
|
||||
# Add scripts
|
||||
RUN mkdir /scripts
|
||||
RUN mkdir /scripts/demo-records
|
||||
ENV PATH="${PATH}:/scripts"
|
||||
COPY ./create-demo-records.sh /scripts
|
||||
COPY ./demo-records /scripts/demo-records
|
||||
COPY ./import-key.sh /scripts
|
||||
COPY ./import-address.sh /scripts
|
||||
|
||||
# Default command sleeps forever so docker doesn't kill it
|
||||
CMD ["bash", "-c", "while :; do sleep 600; done"]
|
||||
CMD ["sh", "-c", "while :; do sleep 600; done"]
|
||||
|
@ -1,9 +1,11 @@
|
||||
#!/usr/bin/env bash
|
||||
|
||||
# Build cerc/laconic-registry-cli
|
||||
|
||||
source ${CERC_CONTAINER_BASE_DIR}/build-base.sh
|
||||
|
||||
# See: https://stackoverflow.com/a/246128/1701505
|
||||
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
|
||||
|
||||
docker build -t cerc/laconic-registry-cli:local ${build_command_args} -f ${SCRIPT_DIR}/Dockerfile ${CERC_REPO_BASE_DIR}/laconic-registry-cli
|
||||
docker build -t cerc/laconic-registry-cli:local ${build_command_args} -f ${SCRIPT_DIR}/Dockerfile \
|
||||
--add-host gitea.local:host-gateway \
|
||||
--build-arg CERC_NPM_AUTH_TOKEN --build-arg CERC_NPM_REGISTRY_URL ${SCRIPT_DIR}
|
||||
|
@ -0,0 +1,2 @@
|
||||
#!/bin/sh
|
||||
echo "${1}" > my-address.txt
|
@ -0,0 +1,2 @@
|
||||
#!/bin/sh
|
||||
sed "s/REPLACE_WITH_MYKEY/${1}/" registry-cli-config-template.yml > config.yml
|
@ -1,5 +0,0 @@
|
||||
#!/usr/bin/env bash
|
||||
|
||||
# Build cerc/laconic-shopify-faucet
|
||||
source ${CERC_CONTAINER_BASE_DIR}/build-base.sh
|
||||
docker build -t cerc/laconic-shopify-faucet:local ${build_command_args} ${CERC_REPO_BASE_DIR}/laconic-faucet
|
@ -1,9 +0,0 @@
|
||||
FROM node:20-bullseye
|
||||
|
||||
WORKDIR /app
|
||||
|
||||
COPY . .
|
||||
|
||||
RUN yarn install
|
||||
|
||||
CMD ["yarn", "start"]
|
@ -1,8 +0,0 @@
|
||||
#!/usr/bin/env bash
|
||||
|
||||
# Build cerc/laconic-faucet
|
||||
source ${CERC_CONTAINER_BASE_DIR}/build-base.sh
|
||||
|
||||
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
|
||||
|
||||
docker build -t cerc/laconic-shopify:local ${build_command_args} -f ${SCRIPT_DIR}/Dockerfile ${CERC_REPO_BASE_DIR}/shopify
|
@ -2,4 +2,4 @@
|
||||
|
||||
# Build cerc/laconicd
|
||||
source ${CERC_CONTAINER_BASE_DIR}/build-base.sh
|
||||
docker build -t cerc/laconicd:local ${build_command_args} ${CERC_REPO_BASE_DIR}/laconicd
|
||||
docker build -t cerc/laconic2d:local ${build_command_args} ${CERC_REPO_BASE_DIR}/laconic2d
|
@ -8,27 +8,21 @@ CERC_WEBAPP_FILES_DIR="${CERC_WEBAPP_FILES_DIR:-/data}"
|
||||
CERC_ENABLE_CORS="${CERC_ENABLE_CORS:-false}"
|
||||
CERC_SINGLE_PAGE_APP="${CERC_SINGLE_PAGE_APP}"
|
||||
|
||||
if [ -z "${CERC_SINGLE_PAGE_APP}" ]; then
|
||||
# If there is only one HTML file, assume an SPA.
|
||||
if [ 1 -eq $(find "${CERC_WEBAPP_FILES_DIR}" -name '*.html' | wc -l) ]; then
|
||||
if [ -z "${CERC_SINGLE_PAGE_APP}" ]; then
|
||||
if [ 1 -eq $(find "${CERC_WEBAPP_FILES_DIR}" -name '*.html' | wc -l) ] && [ -d "${CERC_WEBAPP_FILES_DIR}/static" ]; then
|
||||
CERC_SINGLE_PAGE_APP=true
|
||||
else
|
||||
CERC_SINGLE_PAGE_APP=false
|
||||
fi
|
||||
fi
|
||||
|
||||
# ${var,,} is a lower-case comparison
|
||||
if [ "true" == "${CERC_ENABLE_CORS,,}" ]; then
|
||||
if [ "true" == "$CERC_ENABLE_CORS" ]; then
|
||||
CERC_HTTP_EXTRA_ARGS="$CERC_HTTP_EXTRA_ARGS --cors"
|
||||
fi
|
||||
|
||||
# ${var,,} is a lower-case comparison
|
||||
if [ "true" == "${CERC_SINGLE_PAGE_APP,,}" ]; then
|
||||
echo "Serving content as single-page app. If this is wrong, set 'CERC_SINGLE_PAGE_APP=false'"
|
||||
if [ "true" == "$CERC_SINGLE_PAGE_APP" ]; then
|
||||
# Create a catchall redirect back to /
|
||||
CERC_HTTP_EXTRA_ARGS="$CERC_HTTP_EXTRA_ARGS --proxy http://localhost:${CERC_LISTEN_PORT}?"
|
||||
else
|
||||
echo "Serving content normally. If this is a single-page app, set 'CERC_SINGLE_PAGE_APP=true'"
|
||||
fi
|
||||
|
||||
LACONIC_HOSTED_CONFIG_FILE=${LACONIC_HOSTED_CONFIG_FILE}
|
||||
@ -45,4 +39,4 @@ if [ -f "${LACONIC_HOSTED_CONFIG_FILE}" ]; then
|
||||
fi
|
||||
|
||||
/scripts/apply-runtime-env.sh ${CERC_WEBAPP_FILES_DIR}
|
||||
http-server $CERC_HTTP_EXTRA_ARGS -p ${CERC_LISTEN_PORT} "${CERC_WEBAPP_FILES_DIR}"
|
||||
http-server $CERC_HTTP_EXTRA_ARGS -p ${CERC_LISTEN_PORT} "${CERC_WEBAPP_FILES_DIR}"
|
@ -1,146 +0,0 @@
|
||||
# laconic-console
|
||||
|
||||
Instructions for running the laconic registry CLI and console
|
||||
|
||||
## Prerequisites
|
||||
|
||||
* laconicd RPC and GQL endpoints
|
||||
|
||||
## Setup
|
||||
|
||||
* Clone the stack repo:
|
||||
|
||||
```bash
|
||||
laconic-so fetch-stack git.vdb.to/cerc-io/testnet-laconicd-stack
|
||||
```
|
||||
|
||||
* Clone required repositories:
|
||||
|
||||
```bash
|
||||
laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-console setup-repositories --pull
|
||||
```
|
||||
|
||||
* Build the container images:
|
||||
|
||||
```bash
|
||||
laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-console build-containers --force-rebuild
|
||||
```
|
||||
|
||||
This should create the following docker images locally:
|
||||
|
||||
* `cerc/laconic-registry-cli`
|
||||
* `cerc/webapp-base`
|
||||
* `cerc/laconic-console-host`
|
||||
|
||||
## Create a deployment
|
||||
|
||||
* Create a spec file for the deployment:
|
||||
|
||||
```bash
|
||||
laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-console deploy init --output laconic-console-spec.yml
|
||||
```
|
||||
|
||||
* Edit `network` in the spec file to map container ports to host ports as required:
|
||||
|
||||
```bash
|
||||
...
|
||||
network:
|
||||
ports:
|
||||
laconic-console:
|
||||
- '8080:80'
|
||||
```
|
||||
|
||||
* Create a deployment from the spec file:
|
||||
|
||||
```bash
|
||||
laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-console deploy create --spec-file laconic-console-spec.yml --deployment-dir laconic-console-deployment
|
||||
```
|
||||
|
||||
## Configuration
|
||||
|
||||
* Inside the deployment directory, open the `config.env` file and set the following env variables:
|
||||
|
||||
```bash
|
||||
# All optional
|
||||
|
||||
# CLI configuration
|
||||
|
||||
# laconicd RPC endpoint (default: http://laconicd:26657)
|
||||
CERC_LACONICD_RPC_ENDPOINT=
|
||||
|
||||
# laconicd GQL endpoint (default: http://laconicd:9473/api)
|
||||
CERC_LACONICD_GQL_ENDPOINT=
|
||||
|
||||
# laconicd chain id (default: laconic_9000-1)
|
||||
CERC_LACONICD_CHAIN_ID=
|
||||
|
||||
# laconicd user private key for txs
|
||||
CERC_LACONICD_USER_KEY=
|
||||
|
||||
# laconicd bond id for txs
|
||||
CERC_LACONICD_BOND_ID=
|
||||
|
||||
# Gas limit for txs (default: 200000)
|
||||
CERC_LACONICD_GAS=
|
||||
|
||||
# Max fees for txs (default: 200alnt)
|
||||
CERC_LACONICD_FEES=
|
||||
|
||||
# Gas price to use for txs (default: 0.001alnt)
|
||||
# Used for automatic fee calculation; gas and fees need not be set in that case
|
||||
# Reference: https://git.vdb.to/cerc-io/laconic-registry-cli#gas-and-fees
|
||||
CERC_LACONICD_GASPRICE=
|
||||
|
||||
# Console configuration
|
||||
|
||||
# Laconicd (hosted) GQL endpoint (default: http://localhost:9473)
|
||||
LACONIC_HOSTED_ENDPOINT=
|
||||
```
|
||||
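A filled-in `config.env` using only the defaults documented above might look like this (every value shown is the stated default; a real deployment would at least set the user key and bond id):

```bash
# Illustrative config.env; all values are the documented defaults
CERC_LACONICD_RPC_ENDPOINT=http://laconicd:26657
CERC_LACONICD_GQL_ENDPOINT=http://laconicd:9473/api
CERC_LACONICD_CHAIN_ID=laconic_9000-1
CERC_LACONICD_GASPRICE=0.001alnt
LACONIC_HOSTED_ENDPOINT=http://localhost:9473
```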
|
||||
## Run
|
||||
|
||||
* Start the deployment:
|
||||
|
||||
```bash
|
||||
laconic-so deployment --dir laconic-console-deployment start
|
||||
```
|
||||
|
||||
* View the laconic console at <http://localhost:8080>
|
||||
|
||||
* Use the `cli` service for registry CLI operations:
|
||||
|
||||
```bash
|
||||
# Example
|
||||
laconic-so deployment --dir laconic-console-deployment exec cli "laconic registry status"
|
||||
```
|
||||
|
||||
## Check status
|
||||
|
||||
* To list and monitor the running containers:
|
||||
|
||||
```bash
|
||||
# With status
|
||||
docker ps -a
|
||||
|
||||
# Follow logs for console container
|
||||
laconic-so deployment --dir laconic-console-deployment logs console -f
|
||||
```
|
||||
|
||||
## Clean up
|
||||
|
||||
* Stop all services running in the background:
|
||||
|
||||
```bash
|
||||
# Stop the docker containers
|
||||
laconic-so deployment --dir laconic-console-deployment stop
|
||||
```
|
||||
|
||||
* To stop all services and also delete data:
|
||||
|
||||
```bash
|
||||
# Stop the docker containers
|
||||
laconic-so deployment --dir laconic-console-deployment stop --delete-volumes
|
||||
|
||||
# Remove deployment directory (deployment will have to be recreated for a re-run)
|
||||
rm -r laconic-console-deployment
|
||||
```
|
@ -1,12 +0,0 @@
|
||||
version: "1.0"
|
||||
name: laconic-console
|
||||
description: "Laconic registry CLI and console"
|
||||
repos:
|
||||
- git.vdb.to/cerc-io/laconic-registry-cli@v0.2.10
|
||||
- git.vdb.to/cerc-io/laconic-console@v0.2.5
|
||||
containers:
|
||||
- cerc/laconic-registry-cli
|
||||
- cerc/webapp-base
|
||||
- cerc/laconic-console-host
|
||||
pods:
|
||||
- laconic-console
|
@ -1,121 +0,0 @@
|
||||
# laconic-faucet
|
||||
|
||||
Instructions for running the laconic faucet
|
||||
|
||||
## Setup
|
||||
|
||||
* Clone the stack repo:
|
||||
|
||||
```bash
|
||||
laconic-so fetch-stack git.vdb.to/cerc-io/testnet-laconicd-stack
|
||||
```
|
||||
|
||||
* Clone the laconic-faucet repository:
|
||||
|
||||
```bash
|
||||
laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-faucet setup-repositories
|
||||
```
|
||||
|
||||
* Build the container image:
|
||||
|
||||
```bash
|
||||
laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-faucet build-containers
|
||||
```
|
||||
|
||||
This should create the `cerc/laconic-faucet` image locally
|
||||
|
||||
## Create a deployment
|
||||
|
||||
* Create a spec file for the deployment:
|
||||
|
||||
```bash
|
||||
laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-faucet deploy init --output laconic-faucet-spec.yml
|
||||
```
|
||||
|
||||
* Edit `network` in the spec file to map container ports to host ports as required:
|
||||
|
||||
```bash
|
||||
network:
|
||||
ports:
|
||||
faucet:
|
||||
- '3000:3000'
|
||||
```
|
||||
|
||||
* Create a deployment from the spec file:
|
||||
|
||||
```bash
|
||||
laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-faucet deploy create --spec-file laconic-faucet-spec.yml --deployment-dir laconic-faucet-deployment
|
||||
```
|
||||
|
||||
## Configuration
|
||||
|
||||
* Inside the `laconic-faucet-deployment` deployment directory, open the `config.env` file and set the following env variables:
|
||||
|
||||
```bash
|
||||
# Private key of a funded faucet account
|
||||
CERC_FAUCET_KEY=
|
||||
|
||||
# Optional
|
||||
|
||||
# laconicd RPC endpoint (default: http://laconicd:26657)
|
||||
CERC_LACONICD_RPC_ENDPOINT=
|
||||
|
||||
# laconicd chain id (default: laconic_9000-1)
|
||||
CERC_LACONICD_CHAIN_ID=
|
||||
|
||||
# Amount of tokens to transfer on a single request (default: 1000000)
|
||||
CERC_TRANSFER_AMOUNT=
|
||||
|
||||
# Transfer limit for an address within a period (default: 3000000)
|
||||
CERC_PERIOD_TRANSFER_LIMIT=
|
||||
```
|
||||
|
||||
## Start the deployment
|
||||
|
||||
```bash
|
||||
laconic-so deployment --dir laconic-faucet-deployment start
|
||||
```
|
||||
|
||||
## Check status
|
||||
|
||||
* To list and monitor the running container:
|
||||
|
||||
```bash
|
||||
# With status
|
||||
docker ps
|
||||
|
||||
# Check logs for a container
|
||||
docker logs -f <CONTAINER_ID>
|
||||
```
|
||||
|
||||
## Run
|
||||
|
||||
* Request tokens from the faucet for an account:
|
||||
|
||||
```bash
|
||||
curl -X POST http://localhost:3000/faucet \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{"address": "<laconicd_address>"}'
|
||||
|
||||
# Expected output:
|
||||
# {"success":true,"txHash":"<tx_hash>"}
|
||||
```
|
||||
|
||||
## Clean up
|
||||
|
||||
* Stop the `laconic-faucet` service running in the background:
|
||||
|
||||
```bash
|
||||
# Stop the docker container
|
||||
laconic-so deployment --dir laconic-faucet-deployment stop
|
||||
```
|
||||
|
||||
* To stop the service and also delete data:
|
||||
|
||||
```bash
|
||||
# Stop the docker containers
|
||||
laconic-so deployment --dir laconic-faucet-deployment stop --delete-volumes
|
||||
|
||||
# Remove deployment directory (deployment will have to be recreated for a re-run)
|
||||
rm -r laconic-faucet-deployment
|
||||
```
|
@ -1,9 +0,0 @@
|
||||
version: "1.0"
|
||||
name: laconic-faucet
|
||||
description: "Faucet for laconicd"
|
||||
repos:
|
||||
- git.vdb.to/cerc-io/laconic-faucet
|
||||
containers:
|
||||
- cerc/laconic-faucet
|
||||
pods:
|
||||
- laconic-faucet
|
@ -1,109 +0,0 @@
|
||||
# laconic-shopify
|
||||
|
||||
Instructions for running the laconic-shopify service
|
||||
|
||||
## Setup
|
||||
|
||||
* Clone the stack repo:
|
||||
|
||||
```bash
|
||||
laconic-so fetch-stack git.vdb.to/cerc-io/testnet-laconicd-stack
|
||||
```
|
||||
|
||||
* Clone the laconic-shopify repository:
|
||||
|
||||
```bash
|
||||
laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-shopify setup-repositories
|
||||
```
|
||||
|
||||
* Build the container image:
|
||||
|
||||
```bash
|
||||
laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-shopify build-containers
|
||||
```
|
||||
|
||||
This should create the `cerc/laconic-shopify` and `cerc/laconic-shopify-faucet` images locally
|
||||
|
||||
## Create a deployment
|
||||
|
||||
* Create a spec file for the deployment:
|
||||
|
||||
```bash
|
||||
laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-shopify deploy init --output laconic-shopify-spec.yml
|
||||
```
|
||||
|
||||
* Create a deployment from the spec file:
|
||||
|
||||
```bash
|
||||
laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-shopify deploy create --spec-file laconic-shopify-spec.yml --deployment-dir laconic-shopify-deployment
|
||||
```
|
||||
|
||||
## Configuration
|
||||
|
||||
* Inside the `laconic-shopify-deployment` deployment directory, open the `config.env` file and set the following env variables:
|
||||
|
||||
```bash
|
||||
# Shopify GraphQL URL (default: 'https://6h071x-zw.myshopify.com/admin/api/2024-10/graphql.json')
|
||||
CERC_SHOPIFY_GRAPHQL_URL=
|
||||
|
||||
# Access token for Shopify API
|
||||
CERC_SHOPIFY_ACCESS_TOKEN=
|
||||
|
||||
# Delay for fetching orders in milliseconds (default: 10000)
|
||||
CERC_FETCH_ORDER_DELAY=
|
||||
|
||||
# Number of line items per order in Get Orders GraphQL query (default: 10)
|
||||
CERC_ITEMS_PER_ORDER=
|
||||
|
||||
# Private key of a funded faucet account
|
||||
CERC_FAUCET_KEY=
|
||||
|
||||
# laconicd RPC endpoint (default: https://laconicd-sapo.laconic.com/)
|
||||
CERC_LACONICD_RPC_ENDPOINT=
|
||||
|
||||
# laconicd chain id (default: laconic-testnet-2)
|
||||
CERC_LACONICD_CHAIN_ID=
|
||||
|
||||
# laconicd address prefix (default: laconic)
|
||||
CERC_LACONICD_PREFIX=
|
||||
|
||||
# laconicd gas price (default: 0.001)
|
||||
CERC_LACONICD_GAS_PRICE=
|
||||
```
|
||||
|
||||
## Start the deployment
|
||||
|
||||
```bash
|
||||
laconic-so deployment --dir laconic-shopify-deployment start
|
||||
```
|
||||
|
||||
## Check status
|
||||
|
||||
* To list and monitor the running container:
|
||||
|
||||
```bash
|
||||
# With status
|
||||
docker ps
|
||||
|
||||
# Check logs for a container
|
||||
docker logs -f <CONTAINER_ID>
|
||||
```
|
||||
|
||||
## Clean up
|
||||
|
||||
* Stop the `laconic-shopify-faucet` service running in the background:
|
||||
|
||||
```bash
|
||||
# Stop the docker container
|
||||
laconic-so deployment --dir laconic-shopify-deployment stop
|
||||
```
|
||||
|
||||
* To stop the service and also delete data:
|
||||
|
||||
```bash
|
||||
# Stop the docker containers
|
||||
laconic-so deployment --dir laconic-shopify-deployment stop --delete-volumes
|
||||
|
||||
# Remove deployment directory (deployment will have to be recreated for a re-run)
|
||||
rm -r laconic-shopify-deployment
|
||||
```
|
@ -1,11 +0,0 @@
|
||||
version: "1.0"
|
||||
name: laconic-shopify
|
||||
description: "Service that integrates a Shopify app with the Laconic wallet."
|
||||
repos:
|
||||
- git.vdb.to/cerc-io/shopify@v0.1.0
|
||||
- git.vdb.to/cerc-io/laconic-faucet@v0.1.0-shopify
|
||||
containers:
|
||||
- cerc/laconic-shopify
|
||||
- cerc/laconic-shopify-faucet
|
||||
pods:
|
||||
- laconic-shopify
|
62
stack-orchestrator/stacks/laconicd-full-node/README.md
Normal file
@ -0,0 +1,62 @@
|
||||
# laconicd-full-node
|
||||
|
||||
Instructions for deploying a laconicd full node, along with steps to join the testnet as a validator after genesis
|
||||
|
||||
Minimum hardware requirements:
|
||||
|
||||
- RAM: 8-16GB
|
||||
- Disk space: 200GB
|
||||
- CPU: 2 cores
|
||||
|
||||
## Clone the stack repo
|
||||
|
||||
```bash
|
||||
laconic-so fetch-stack git.vdb.to/cerc-io/testnet-laconicd-stack
|
||||
```
|
||||
|
||||
## Clone required repositories
|
||||
|
||||
```bash
|
||||
laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconicd-full-node setup-repositories
|
||||
```
|
||||
|
||||
## Build the containers
|
||||
|
||||
```bash
|
||||
laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconicd-full-node build-containers
|
||||
```
|
||||
|
||||
This should create several container images in the local image registry:
|
||||
|
||||
* cerc/laconic2d
|
||||
* cerc/laconic-registry-cli
|
||||
* cerc/webapp-base
|
||||
* cerc/laconic-console-host
|
||||
|
||||
## Deploy the stack
|
||||
|
||||
```bash
|
||||
laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconicd-full-node deploy up
|
||||
```
|
||||
|
||||
## Check status
|
||||
|
||||
<!-- TODO -->
|
||||
|
||||
## Join as testnet validator
|
||||
|
||||
<!-- TODO import a funded account / create an account and get it funded -->
|
||||
|
||||
<!-- TODO create a validator-->
|
||||
|
||||
## Clean up
|
||||
|
||||
Stop all services running in the background:
|
||||
|
||||
```bash
|
||||
laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconicd-full-node deploy down
|
||||
```
|
||||
|
||||
Clear volumes created by this stack:
|
||||
|
||||
<!-- TODO -->
|
stack-orchestrator/stacks/laconicd-full-node/stack.yml (new file, 19 lines)
@@ -0,0 +1,19 @@

```yaml
version: "1.1"
name: laconicd-full-node
description: "Laconicd full node"
repos:
  - cerc-io/laconic2d
  - cerc-io/laconic-registry-cli@laconic2
  - cerc-io/laconic-console@laconic2
containers:
  - cerc/laconic2d
  - cerc/laconic-registry-cli
  - cerc/webapp-base
  - cerc/laconic-console-host
pods:
  - laconicd-full-node
  - laconic-console
config:
  cli:
    key: laconicd.mykey
    address: laconicd.myaddress
```
@@ -1,310 +0,0 @@

# testnet-laconicd

Instructions for running a laconicd testnet full node and joining as a validator

## Prerequisites

* Minimum hardware requirements:

  ```bash
  RAM: 8-16GB
  Disk space: 200GB
  CPU: 2 cores
  ```

* Testnet genesis file (file or a URL) and peer node addresses

## Setup

* Clone the stack repo:

  ```bash
  laconic-so fetch-stack git.vdb.to/cerc-io/testnet-laconicd-stack
  ```

* Clone required repositories:

  ```bash
  # laconicd
  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/testnet-laconicd setup-repositories --pull

  # laconic cli and console
  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-console setup-repositories --pull

  # If this throws an error because a repo is already checked out to a branch/tag, remove the repositories and re-run the command
  ```

* Build the container images:

  ```bash
  # laconicd
  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/testnet-laconicd build-containers

  # laconic cli and console
  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-console build-containers
  ```

  This should create the following docker images locally:

  * `cerc/laconicd`
  * `cerc/laconic-registry-cli`
  * `cerc/webapp-base`
  * `cerc/laconic-console-host`

## Create a deployment

* Create a spec file for each deployment:

  ```bash
  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/testnet-laconicd deploy init --output testnet-laconicd-spec.yml

  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-console deploy init --output laconic-console-spec.yml
  ```

* Edit `network` in both spec files to map container ports to host ports as required:

  ```yaml
  # testnet-laconicd-spec.yml
  ...
  network:
    ports:
      laconicd:
        - '6060:6060'
        - '26657:26657'
        - '26656:26656'
        - '9473:9473'
        - '9090:9090'
        - '1317:1317'

  # laconic-console-spec.yml
  ...
  network:
    ports:
      console:
        - '8080:80'
  ```

* Create deployments from the spec files:

  ```bash
  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/testnet-laconicd deploy create --spec-file testnet-laconicd-spec.yml --deployment-dir testnet-laconicd-deployment

  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-console deploy create --spec-file laconic-console-spec.yml --deployment-dir laconic-console-deployment

  # Place them both in the same namespace (cluster)
  cp testnet-laconicd-deployment/deployment.yml laconic-console-deployment/deployment.yml
  ```

* Copy the published testnet genesis file (`.json`) to the data directory in the deployment (`testnet-laconicd-deployment/data/laconicd-data/tmp`):

  ```bash
  # Example
  mkdir -p testnet-laconicd-deployment/data/laconicd-data/tmp
  cp genesis.json testnet-laconicd-deployment/data/laconicd-data/tmp/genesis.json
  ```

## Configuration

* Inside the `testnet-laconicd-deployment` deployment directory, open the `config.env` file and set the following env variables:

  ```bash
  # Comma-separated list of nodes to keep persistent connections to
  # Example: "node-1-id@node-1-host:26656,node-2-id@node-2-host:26656"
  CERC_PEERS=""

  # Optional

  # A custom human-readable name for this node (default: TestnetNode)
  CERC_MONIKER=

  # Network chain ID (default: laconic_9000-1)
  CERC_CHAIN_ID=

  # Output log level (default: info)
  CERC_LOGLEVEL=

  # Minimum gas price in alnt to accept for transactions (default: "0.001")
  MIN_GAS_PRICE=
  ```
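The `CERC_PEERS` value is one comma-separated string. A minimal sketch of assembling it in the shell before pasting it into `config.env` (the peer IDs and hosts below are placeholders, not real testnet peers):

```shell
# Hypothetical peer entries; substitute real node IDs and hosts
PEER_1="node-1-id@node-1-host:26656"
PEER_2="node-2-id@node-2-host:26656"

# CERC_PEERS is a single comma-separated string
CERC_PEERS="${PEER_1},${PEER_2}"
echo "$CERC_PEERS"
# node-1-id@node-1-host:26656,node-2-id@node-2-host:26656
```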

* Inside the `laconic-console-deployment` deployment directory, open the `config.env` file and set the following env variables:

  ```bash
  # All optional

  # CLI configuration

  # laconicd chain id (default: laconic_9000-1)
  CERC_LACONICD_CHAIN_ID=

  # laconicd user private key for txs
  CERC_LACONICD_USER_KEY=

  # laconicd bond id for txs
  CERC_LACONICD_BOND_ID=

  # Gas limit for txs (default: 200000)
  CERC_LACONICD_GAS=

  # Max fees for txs (default: 200alnt)
  CERC_LACONICD_FEES=

  # Gas price to use for txs (default: 0.001alnt)
  # Used for auto fees calculation; gas and fees need not be set in that case
  # Reference: https://git.vdb.to/cerc-io/laconic-registry-cli#gas-and-fees
  CERC_LACONICD_GASPRICE=

  # Console configuration

  # Laconicd (hosted) GQL endpoint (default: http://localhost:9473)
  LACONIC_HOSTED_ENDPOINT=
  ```

  Note: Use `host.docker.internal` as the host to access ports on the host machine

## Start the deployments

```bash
laconic-so deployment --dir testnet-laconicd-deployment start
laconic-so deployment --dir laconic-console-deployment start
```

## Check status

* To list and monitor the running containers:

  ```bash
  # With status
  docker ps -a

  # Follow logs for the laconicd container
  laconic-so deployment --dir testnet-laconicd-deployment logs laconicd -f
  ```

* Check the sync status of your node:

  ```bash
  laconic-so deployment --dir testnet-laconicd-deployment exec laconicd "laconicd status | jq .sync_info"

  # `catching_up: false` indicates that the node is fully synced
  ```
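The sync check above can be scripted by matching `catching_up` in the returned JSON. A minimal sketch, using a captured string as a stand-in for the real `laconicd status` output:

```shell
# Stand-in for the .sync_info JSON returned by the status command above
sync_info='{"latest_block_height": "123456", "catching_up": false}'

# A synced node reports catching_up: false
if echo "$sync_info" | grep -q '"catching_up": *false'; then
  STATE="synced"
else
  STATE="catching up"
fi
echo "$STATE"
# synced
```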

* View the laconic console at <http://localhost:8080>

* Use the cli service for registry CLI operations:

  ```bash
  # Example
  laconic-so deployment --dir laconic-console-deployment exec cli "laconic registry status"
  ```

## Join as testnet validator

* Create / import a new key pair:

  ```bash
  # Create a new keypair
  laconic-so deployment --dir testnet-laconicd-deployment exec laconicd "laconicd keys add <key-name>"

  # OR
  # Restore an existing key from its mnemonic seed phrase
  # You will be prompted to enter the mnemonic seed
  laconic-so deployment --dir testnet-laconicd-deployment exec laconicd "laconicd keys add <key-name> --recover"

  # Query the keystore for your account's address
  laconic-so deployment --dir testnet-laconicd-deployment exec laconicd "laconicd keys show <key-name> -a"
  ```

* Request tokens from the testnet faucet for your account if required

* Check the balance of your account:

  ```bash
  laconic-so deployment --dir testnet-laconicd-deployment exec laconicd "laconicd query bank balances <account-address>"
  ```

* Create the required validator configuration:

  ```bash
  # Note:
  # Edit the staking amount and other fields as required
  # Replace <your-node-moniker> with your node's moniker in the command below

  laconic-so deployment --dir testnet-laconicd-deployment exec laconicd 'cat <<EOF > my-validator.json
  {
    "pubkey": $(laconicd cometbft show-validator),
    "amount": "900000000alnt",
    "moniker": "<your-node-moniker>",
    "commission-rate": "0.1",
    "commission-max-rate": "0.2",
    "commission-max-change-rate": "0.01",
    "min-self-delegation": "1"
  }
  EOF'
  ```
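The heredoc above relies on an unquoted `EOF` delimiter, so the `$(laconicd cometbft show-validator)` substitution is expanded inside the container when the file is written. A minimal sketch of the same mechanism, with a placeholder value standing in for the real command substitution:

```shell
# Stand-in for $(laconicd cometbft show-validator); hypothetical value
pubkey='{"@type":"example"}'

# Unquoted EOF: $pubkey is expanded as the heredoc is read
json=$(cat <<EOF
{"pubkey": $pubkey, "amount": "900000000alnt"}
EOF
)
echo "$json"
# {"pubkey": {"@type":"example"}, "amount": "900000000alnt"}
```

Quoting the delimiter (`<<'EOF'`) would suppress the expansion and write the literal `$(...)` text instead.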

* Create a validator:

  ```bash
  laconic-so deployment --dir testnet-laconicd-deployment exec laconicd 'laconicd tx staking create-validator my-validator.json \
    --fees 500000alnt \
    --chain-id=laconic_9000-1 \
    --from <key-name>'
  ```

* View validators:

  ```bash
  laconic-so deployment --dir testnet-laconicd-deployment exec laconicd "laconicd query staking validators"
  ```

* Check that `<your-node-moniker>` appears in the list of validators

## Perform operations

* To perform txs against the chain using the registry CLI, set your private key in the config in the CLI container:

  ```bash
  # (Optional) Get the PK from your node
  laconic-so deployment --dir testnet-laconicd-deployment exec laconicd "laconicd keys export <key-name> --unarmored-hex --unsafe"

  # Set your PK as 'userKey' in the config file
  laconic-so deployment --dir laconic-console-deployment exec cli "nano config.yml"

  # services:
  #   registry:
  #     ...
  #     userKey: "<your-private-key>"
  #     ...

  # Note: any changes made to the config will be lost when the cli Docker container is brought down
  # So set / update the values in 'laconic-console-deployment/config.env' accordingly before restarting
  ```

* Adjust / set other config values (`bondId`, `gas`, `fees`) as required and perform txs:

  ```bash
  # Example
  laconic-so deployment --dir laconic-console-deployment exec cli "laconic registry bond create --type alnt --quantity 1000000000000"
  ```

## Clean up

* Stop all `testnet-laconicd` services running in the background:

  ```bash
  # Stop the docker containers
  laconic-so deployment --dir testnet-laconicd-deployment stop
  ```

* To stop all services and also delete data:

  ```bash
  # Stop the docker containers
  laconic-so deployment --dir testnet-laconicd-deployment stop --delete-volumes

  # Remove deployment directory (the deployment will have to be recreated for a re-run)
  rm -r testnet-laconicd-deployment
  ```

* For `laconic-console`, see [laconic-console clean up](../laconic-console/README.md#clean-up)
@@ -1,9 +0,0 @@

```yaml
version: "1.0"
name: testnet-laconicd
description: "Laconicd full node"
repos:
  - git.vdb.to/cerc-io/laconicd@v0.1.9
containers:
  - cerc/laconicd
pods:
  - testnet-laconicd
```
@@ -1,944 +0,0 @@

# testnet-nitro-node

## Prerequisites

* Local:

  * Clone the `cerc-io/testnet-ops` repository:

    ```bash
    git clone git@git.vdb.to:cerc-io/testnet-ops.git
    ```

  * Ansible: see [installation](https://git.vdb.to/cerc-io/testnet-ops#installation)

* On the deployment machine:

  * User with passwordless sudo: see [setup](https://git.vdb.to/cerc-io/testnet-ops/src/branch/main/user-setup/README.md#user-setup)

  * laconic-so: see [installation](https://git.vdb.to/cerc-io/testnet-ops/src/branch/main/stack-orchestrator-setup/README.md#setup-stack-orchestrator)

## Setup

* Move to `nitro-nodes-setup`:

  ```bash
  cd testnet-ops/nitro-nodes-setup
  ```

* Fetch the required Nitro node config:

  ```bash
  wget -O nitro-vars.yml https://git.vdb.to/cerc-io/testnet-laconicd-stack/raw/branch/main/ops/stage2/nitro-node-config.yml
  ```

* Fetch the required asset addresses:

  ```bash
  wget -O assets.json https://git.vdb.to/cerc-io/testnet-laconicd-stack/raw/branch/main/ops/stage2/assets.json
  ```

* Ask the testnet operator to send L1 tokens and ETH to your chain address

  * [README for transferring tokens](./ops/nitro-token-ops.md#transfer-deployed-tokens-to-given-address)

  * [README for transferring ETH](./ops/nitro-token-ops.md#transfer-eth)

* Check the balance of your tokens once they are transferred:

  ```bash
  # Note: Account address should be without "0x"
  export ACCOUNT_ADDRESS="<account-address>"

  export GETH_CHAIN_ID="1212"
  export GETH_CHAIN_URL="https://fixturenet-eth.laconic.com"

  export ASSET_ADDRESS_1=$(jq -r --arg chainId "$GETH_CHAIN_ID" '.[$chainId][0].contracts.TestToken.address' assets.json)
  export ASSET_ADDRESS_2=$(jq -r --arg chainId "$GETH_CHAIN_ID" '.[$chainId][0].contracts.TestToken2.address' assets.json)

  # Check balance of the eth account
  curl -X POST $GETH_CHAIN_URL \
    -H "Content-Type: application/json" \
    -d '{
      "jsonrpc":"2.0",
      "method":"eth_getBalance",
      "params":["0x'"$ACCOUNT_ADDRESS"'", "latest"],
      "id":1
    }'

  # Check balance of the first asset address
  curl -X POST $GETH_CHAIN_URL \
    -H "Content-Type: application/json" \
    -d '{
      "jsonrpc":"2.0",
      "method":"eth_call",
      "params":[{
        "to": "'"$ASSET_ADDRESS_1"'",
        "data": "0x70a08231000000000000000000000000'"$ACCOUNT_ADDRESS"'"
      }, "latest"],
      "id":1
    }'

  # Check balance of the second asset address
  curl -X POST $GETH_CHAIN_URL \
    -H "Content-Type: application/json" \
    -d '{
      "jsonrpc":"2.0",
      "method":"eth_call",
      "params":[{
        "to": "'"$ASSET_ADDRESS_2"'",
        "data": "0x70a08231000000000000000000000000'"$ACCOUNT_ADDRESS"'"
      }, "latest"],
      "id":1
    }'
  ```
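The `data` field in the `eth_call` requests above is the ERC-20 `balanceOf(address)` calldata: the 4-byte selector `0x70a08231` followed by the account address left-padded with zeros to 32 bytes, which is why the address must be given without the `0x` prefix. A minimal sketch of that construction (the address is the example account from the outputs below, used as a placeholder):

```shell
# balanceOf(address) function selector
SELECTOR="70a08231"

# Placeholder account address, without the "0x" prefix (40 hex chars)
ACCOUNT_ADDRESS="daaa6ef3bc03f9c7dabc9a02847387d2c19107f5"

# Left-pad the address to 32 bytes (64 hex chars), then prepend the selector
PADDED=$(printf '%64s' "$ACCOUNT_ADDRESS" | tr ' ' '0')
CALLDATA="0x${SELECTOR}${PADDED}"

echo "$CALLDATA"
# 0x70a08231000000000000000000000000daaa6ef3bc03f9c7dabc9a02847387d2c19107f5
```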

* Edit `nitro-vars.yml` and add the following variables:

  ```yaml
  # Private key for your Nitro account (same as the one used in stage0 onboarding)
  # Export the key from the Laconic wallet (https://wallet.laconic.com)
  nitro_sc_pk: ""

  # Private key for a funded account on L1
  # This account should have L1 tokens for funding your Nitro channels
  nitro_chain_pk: ""

  # Multiaddr with a publicly accessible IP address / DNS for your L1 nitro node
  # Use port 3007
  # Example: "/ip4/192.168.x.y/tcp/3007"
  # Example: "/dns4/example.com/tcp/3007"
  nitro_l1_ext_multiaddr: ""

  # Multiaddr with a publicly accessible IP address / DNS for your L2 nitro node
  # Use port 3009
  # Example: "/ip4/192.168.x.y/tcp/3009"
  # Example: "/dns4/example.com/tcp/3009"
  nitro_l2_ext_multiaddr: ""
  ```
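The multiaddr values above follow the `/<ip4|dns4>/<host>/tcp/<port>` shape. A minimal sketch that sanity-checks a value before writing it into `nitro-vars.yml` (the hostname is a placeholder):

```shell
# Placeholder multiaddr for an L1 node reachable via DNS
nitro_l1_ext_multiaddr="/dns4/example.com/tcp/3007"

# Accept /ip4/<addr>/tcp/<port> or /dns4/<name>/tcp/<port>
if echo "$nitro_l1_ext_multiaddr" | grep -Eq '^/(ip4|dns4)/[^/]+/tcp/[0-9]+$'; then
  RESULT="ok"
else
  RESULT="invalid multiaddr"
fi
echo "$RESULT"
# ok
```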

* Edit `setup-vars.yml` to update the target directory:

  ```yaml
  # Set the absolute path to the desired deployments directory (under your user)
  # Example: /home/dev/nitro-node-deployments
  ...
  nitro_directory: <path-to-deployments-dir>
  ...

  # Deployments will be created at <path-to-deployments-dir>/l1-nitro-deployment and <path-to-deployments-dir>/l2-nitro-deployment
  ```

## Run Nitro Nodes

Nitro nodes can be set up on a target machine using Ansible:

* In `testnet-ops/nitro-nodes-setup`, create a new `hosts.ini` file:

  ```bash
  cp ../hosts.example.ini hosts.ini
  ```

* Edit the [`hosts.ini`](./hosts.ini) file to run the playbook on a remote machine:

  ```ini
  [<deployment_host>]
  <host_name> ansible_host=<target_ip> ansible_user=<ssh_user> ansible_ssh_common_args='-o ForwardAgent=yes'
  ```

  * Replace `<deployment_host>` with `nitro_host`
  * Replace `<host_name>` with an alias of your choice
  * Replace `<target_ip>` with the IP address or hostname of the target machine
  * Replace `<ssh_user>` with the username of the user that you set up on the target machine (e.g. dev, ubuntu)
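Putting those substitutions together, a filled-in `hosts.ini` might look like the following (the alias, IP address, and user are hypothetical examples, not values from this testnet):

```ini
[nitro_host]
my-nitro-node ansible_host=203.0.113.10 ansible_user=dev ansible_ssh_common_args='-o ForwardAgent=yes'
```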

* Verify that you are able to connect to the host using the following command:

  ```bash
  ansible all -m ping -i hosts.ini

  # If using password-based authentication, enter the ssh password when prompted; otherwise, leave it blank

  # Expected output:

  # <host_name> | SUCCESS => {
  #   "ansible_facts": {
  #     "discovered_interpreter_python": "/usr/bin/python3.10"
  #   },
  #   "changed": false,
  #   "ping": "pong"
  # }
  ```

* Execute the `run-nitro-nodes.yml` Ansible playbook to set up and run a Nitro node (L1+L2):

  ```bash
  LANG=en_US.utf8 ansible-playbook -i hosts.ini run-nitro-nodes.yml --extra-vars='{ "target_host": "nitro_host"}' --user $USER
  ```

### Check Deployment Status

* Run the following commands on the deployment machine:

  ```bash
  cd <path-to-deployments-dir>

  # Check the logs, ensure that the nodes are running
  laconic-so deployment --dir l1-nitro-deployment logs nitro-node -f
  laconic-so deployment --dir l2-nitro-deployment logs nitro-node -f

  # Let the L1 node sync up with the chain
  # Expected logs after sync:
  # nitro-node-1  | 2:04PM INF Initializing Http RPC transport...
  # nitro-node-1  | 2:04PM INF Completed RPC server initialization url=127.0.0.1:4005/api/v1
  ```

* Get your Nitro node's info:

  ```bash
  laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-node-info -p 4005 -h nitro-node"

  # Expected output:
  # {
  #   "SCAddress": "0xd0eA8b27591b1D070cCcD4D30b8D408fe794FDfc",
  #   "MessageServicePeerId": "16Uiu2HAmSHRjoxveaPmJipzmdq69U8zme8BMnFjSBPferj1E5XAd"
  # }

  # SCAddress -> nitro address, MessageServicePeerId -> libp2p peer id
  ```

## Create Channels

Create a ledger channel with the bridge on L1 that is mirrored on L2

* Run the following commands on the deployment machine

* Set the required variables:

  ```bash
  cd <path-to-deployments-dir>

  export BRIDGE_NITRO_ADDRESS=$(yq eval '.bridge_nitro_address' nitro-node-config.yml)

  export GETH_CHAIN_ID="1212"

  # Get asset addresses from the assets.json file
  export ASSET_ADDRESS_1=$(jq -r --arg chainId "$GETH_CHAIN_ID" '.[$chainId][0].contracts.TestToken.address' assets.json)
  export ASSET_ADDRESS_2=$(jq -r --arg chainId "$GETH_CHAIN_ID" '.[$chainId][0].contracts.TestToken2.address' assets.json)
  ```

* Check that you have no existing channels on L1 or L2:

  ```bash
  laconic-so deployment --dir l1-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-all-ledger-channels -p 4005 -h nitro-node"
  laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-all-ledger-channels -p 4005 -h nitro-node"

  # Expected output:
  # []
  ```

* Ensure that your account has a sufficient balance of the tokens from `assets.json`

* Create a ledger channel between your L1 Nitro node and the bridge with custom assets:

  ```bash
  laconic-so deployment --dir l1-nitro-deployment exec nitro-rpc-client "nitro-rpc-client direct-fund $BRIDGE_NITRO_ADDRESS --asset "$ASSET_ADDRESS_1:1000,1000" --asset "$ASSET_ADDRESS_2:1000,1000" -p 4005 -h nitro-node"

  # Follow your L1 Nitro node logs for progress

  # Expected output:
  # Objective started DirectFunding-0x161d289a50222caa781db215bb82a3ede4f557217742245525b8e8cbff04ec21
  # Channel Open 0x161d289a50222caa781db215bb82a3ede4f557217742245525b8e8cbff04ec21

  # Set the resulting ledger channel id in a variable
  export LEDGER_CHANNEL_ID=
  ```

* Check the [Troubleshooting](#troubleshooting) section if the command to create a ledger channel fails or gets stuck

* Once the direct-fund objective is complete, the bridge will create a mirrored channel on L2

* Check the L2 Nitro node's logs to see that a bridged-fund objective completed:

  ```bash
  laconic-so deployment --dir l2-nitro-deployment logs nitro-node -f --tail 30

  # Expected output:
  # nitro-node-1  | 5:01AM INF INFO Objective cranked address=0xaaa6628ec44a8a742987ef3a114ddfe2d4f7adce objective-id=bridgedfunding-0x6a9f5ccf1fa802525d794f4a899897f947615f6acc7141e61e056a8bfca29179 waiting-for=WaitingForNothing
  # nitro-node-1  | 5:01AM INF INFO Objective is complete & returned to API address=0xaaa6628ec44a8a742987ef3a114ddfe2d4f7adce objective-id=bridgedfunding-0x6a9f5ccf1fa802525d794f4a899897f947615f6acc7141e61e056a8bfca29179
  ```

* Check the status of the L1 ledger channel with the bridge using the channel id:

  ```bash
  laconic-so deployment --dir l1-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-ledger-channel $LEDGER_CHANNEL_ID -p 4005 -h nitro-node"

  # Example output:
  # {
  #   ID: '0xbb28acc2e1543f4b41eb1ab9eb2e354b18554aefe4e7f0fa5f20046869d8553f',
  #   Status: 'Open',
  #   Balances: [
  #     {
  #       AssetAddress: '0xa6b4b8b84576047a53255649b4994743d9c83a71',
  #       Me: '0xdaaa6ef3bc03f9c7dabc9a02847387d2c19107f5',
  #       Them: '0xf0e6a85c6d23aca9ff1b83477d426ed26f218185',
  #       MyBalance: 1000n,
  #       TheirBalance: 1000n
  #     },
  #     {
  #       AssetAddress: '0x0000000000000000000000000000000000000000',
  #       Me: '0xdaaa6ef3bc03f9c7dabc9a02847387d2c19107f5',
  #       Them: '0xf0e6a85c6d23aca9ff1b83477d426ed26f218185',
  #       MyBalance: 1000n,
  #       TheirBalance: 1000n
  #     }
  #   ],
  #   ChannelMode: 'Open'
  # }
  ```

* Check the status of the mirrored channel on L2:

  ```bash
  laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-all-ledger-channels -p 4005 -h nitro-node"

  # Example output:
  # [
  #   {
  #     "ID": "0xb34210b763d4fdd534190ba11886ad1daa1e411c87be6fd20cff74cd25077c46",
  #     "Status": "Open",
  #     "Balances": [
  #       {
  #         "AssetAddress": "0xa4351114dae1abeb2d552d441c9733c72682a45d",
  #         "Me": "0x075400039e303b3fb46c0cff0404c5fa61947c05",
  #         "Them": "0xf0e6a85c6d23aca9ff1b83477d426ed26f218185",
  #         "MyBalance": 1000,
  #         "TheirBalance": 1000
  #       },
  #       {
  #         "AssetAddress": "0x314e43f9825b10961859c2a62c2de6a765c1c1f1",
  #         "Me": "0x075400039e303b3fb46c0cff0404c5fa61947c05",
  #         "Them": "0xf0e6a85c6d23aca9ff1b83477d426ed26f218185",
  #         "MyBalance": 1000,
  #         "TheirBalance": 1000
  #       }
  #     ],
  #     "ChannelMode": "Open"
  #   }
  # ]
  ```

## Payments On L2 Channel

Perform payments using a virtual payment channel created with another Nitro node over the mirrored L2 channel, with the bridge as an intermediary

* Prerequisite: a ledger channel is required to create a payment channel

* Note: currently the payment channel is created from the first asset present in the ledger channel

* Run the following commands on the deployment machine

* Switch to the deployments directory:

  ```bash
  cd <path-to-deployments-dir>
  ```

* Check the status of the mirrored channel on L2:

  ```bash
  laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-all-ledger-channels -p 4005 -h nitro-node"

  # Example output:
  # [
  #   {
  #     "ID": "0xb34210b763d4fdd534190ba11886ad1daa1e411c87be6fd20cff74cd25077c46",
  #     "Status": "Open",
  #     "Balances": [
  #       {
  #         "AssetAddress": "0xa4351114dae1abeb2d552d441c9733c72682a45d",
  #         "Me": "0x075400039e303b3fb46c0cff0404c5fa61947c05",
  #         "Them": "0xf0e6a85c6d23aca9ff1b83477d426ed26f218185",
  #         "MyBalance": 1000,
  #         "TheirBalance": 1000
  #       },
  #       {
  #         "AssetAddress": "0x314e43f9825b10961859c2a62c2de6a765c1c1f1",
  #         "Me": "0x075400039e303b3fb46c0cff0404c5fa61947c05",
  #         "Them": "0xf0e6a85c6d23aca9ff1b83477d426ed26f218185",
  #         "MyBalance": 1000,
  #         "TheirBalance": 1000
  #       }
  #     ],
  #     "ChannelMode": "Open"
  #   }
  # ]
  ```

* Set the required variables:

  ```bash
  export BRIDGE_NITRO_ADDRESS=$(yq eval '.bridge_nitro_address' nitro-node-config.yml)

  # Mirrored channel on L2
  export L2_CHANNEL_ID=<l2-channel-id>

  # Amount to create the payment channel with
  export PAYMENT_CHANNEL_AMOUNT=500
  ```

* Set the counterparty address:

  ```bash
  export COUNTER_PARTY_ADDRESS=<counterparty-nitro-address>
  ```

  * Get the nitro address of the counterparty's node with whom you want to create the payment channel

  * To get the nitro address of your own node:

    ```bash
    laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-node-info -p 4005 -h nitro-node"
    # `SCAddress` -> nitro address
    ```

* Check for existing payment channels for the L2 channel:

  ```bash
  laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-payment-channels-by-ledger $L2_CHANNEL_ID -p 4005 -h nitro-node"
  ```

* Create a virtual payment channel:

  ```bash
  laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client virtual-fund $COUNTER_PARTY_ADDRESS $BRIDGE_NITRO_ADDRESS --amount $PAYMENT_CHANNEL_AMOUNT -p 4005 -h nitro-node"

  # Follow your L2 Nitro node logs for progress

  # Expected output:
  # Objective started VirtualFund-0x43db45a101658387263b36d613322cc952d8ce5b70de51e3a495513c256bef4d
  # Channel Open 0x43db45a101658387263b36d613322cc952d8ce5b70de51e3a495513c256bef4d

  # Set the resulting payment channel id in a variable
  export PAYMENT_CHANNEL_ID=<payment-channel-id>
  ```

  Multiple virtual payment channels can be created at once

* Check the payment channel's status:

  ```bash
  laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-payment-channel $PAYMENT_CHANNEL_ID -p 4005 -h nitro-node"

  # Expected output:
  # {
  #   ID: '0xb29aeb32c9495a793ebf7bd116232075d1e7bfe89fc82281c7d498e3ffd3e3bf',
  #   Status: 'Open',
  #   Balance: {
  #     AssetAddress: '0x0000000000000000000000000000000000000000',
  #     Payee: '<your-nitro-address>',
  #     Payer: '<counterparty-nitro-address>',
  #     PaidSoFar: 0n,
  #     RemainingFunds: <payment-channel-amount>n
  #   }
  # }
  ```

* Send payments using the virtual payment channel:

  ```bash
  export PAY_AMOUNT=200
  laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client pay $PAYMENT_CHANNEL_ID $PAY_AMOUNT -p 4005 -h nitro-node"

  # Expected output:
  # {
  #   Amount: <pay-amount>,
  #   Channel: '<payment-channel-id>'
  # }

  # This can be done multiple times until the payment channel balance is exhausted
  ```
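The bookkeeping behind repeated `pay` calls is simple subtraction: each payment moves value from `RemainingFunds` to `PaidSoFar` until the channel is exhausted. A minimal arithmetic sketch of that invariant, using the example amounts from above:

```shell
# Channel funded with PAYMENT_CHANNEL_AMOUNT=500, as in the example above
remaining=500
paid=0

# Two pay calls of 200 each
for amount in 200 200; do
  remaining=$((remaining - amount))
  paid=$((paid + amount))
done

echo "PaidSoFar=$paid RemainingFunds=$remaining"
# PaidSoFar=400 RemainingFunds=100
```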
||||
|
||||
* Check payment channel's status again to view updated channel state
|
||||
|
||||
* Close the payment channel to settle on the L2 mirrored channel:
|
||||
|
||||
```bash
|
||||
laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client virtual-defund $PAYMENT_CHANNEL_ID -p 4005 -h nitro-node"
|
||||
|
||||
# Expected output:
|
||||
# Objective started VirtualDefund-0x43db45a101658387263b36d613322cc952d8ce5b70de51e3a495513c256bef4d
|
||||
# Channel complete 0x43db45a101658387263b36d613322cc952d8ce5b70de51e3a495513c256bef4d
|
||||
```
|
||||
|
||||
* Check L2 mirrored channel's status after the virtual payment channel is closed:
|
||||
|
||||
* This can be checked by both nodes
|
||||
|
||||
```bash
|
||||
laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-all-ledger-channels -p 4005 -h nitro-node"
|
||||
|
||||
# Example output:
|
||||
# [
|
||||
# {
|
||||
# "ID": "0xb34210b763d4fdd534190ba11886ad1daa1e411c87be6fd20cff74cd25077c46",
|
||||
# "Status": "Open",
|
||||
# "Balances": [
|
||||
# {
|
||||
# "AssetAddress": "0xa4351114dae1abeb2d552d441c9733c72682a45d",
|
||||
# "Me": "0x075400039e303b3fb46c0cff0404c5fa61947c05",
|
||||
# "Them": "0xf0e6a85c6d23aca9ff1b83477d426ed26f218185",
|
||||
# "MyBalance": <updated balance>,
|
||||
# "TheirBalance": <updated balance>
|
||||
# },
|
||||
# {
|
||||
# "AssetAddress": "0x314e43f9825b10961859c2a62c2de6a765c1c1f1",
|
||||
# "Me": "0x075400039e303b3fb46c0cff0404c5fa61947c05",
|
||||
# "Them": "0xf0e6a85c6d23aca9ff1b83477d426ed26f218185",
|
||||
# "MyBalance": <updated balance>,
|
||||
# "TheirBalance": <updated balance>
|
||||
# }
|
||||
# ],
|
||||
# "ChannelMode": "Open"
|
||||
# }
|
||||
# ]
|
||||
```
|
||||
|
||||
Your balance on the L2 channel should be reduced by total payments done on the virtual payment channel
|
||||
|
## Swaps on L2

Perform swaps using a swap channel created with another Nitro node over the mirrored L2 channel, with the bridge as an intermediary

* Prerequisite: A ledger channel is required to create a swap channel

* Run the following commands on the deployment machine

* Switch to the `nitro-node` directory:

  ```bash
  cd <path-to-deployments-dir>
  ```

* Check the status of the mirrored channel on L2:

  ```bash
  laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-all-ledger-channels -p 4005 -h nitro-node"

  # Example output:
  # [
  #   {
  #     "ID": "0xb34210b763d4fdd534190ba11886ad1daa1e411c87be6fd20cff74cd25077c46",
  #     "Status": "Open",
  #     "Balances": [
  #       {
  #         "AssetAddress": "0xa4351114dae1abeb2d552d441c9733c72682a45d",
  #         "Me": "0x075400039e303b3fb46c0cff0404c5fa61947c05",
  #         "Them": "0xf0e6a85c6d23aca9ff1b83477d426ed26f218185",
  #         "MyBalance": 1000,
  #         "TheirBalance": 1000
  #       },
  #       {
  #         "AssetAddress": "0x314e43f9825b10961859c2a62c2de6a765c1c1f1",
  #         "Me": "0x075400039e303b3fb46c0cff0404c5fa61947c05",
  #         "Them": "0xf0e6a85c6d23aca9ff1b83477d426ed26f218185",
  #         "MyBalance": 1000,
  #         "TheirBalance": 1000
  #       }
  #     ],
  #     "ChannelMode": "Open"
  #   }
  # ]
  ```

* Set the required variables:

  ```bash
  export BRIDGE_NITRO_ADDRESS=$(yq eval '.bridge_nitro_address' nitro-node-config.yml)

  export GETH_CHAIN_ID="1212"

  # Get asset addresses from the assets.json file
  export ASSET_ADDRESS_1=$(jq -r --arg chainId "$GETH_CHAIN_ID" '.[$chainId][0].contracts.TestToken.address' assets.json)
  export ASSET_ADDRESS_2=$(jq -r --arg chainId "$GETH_CHAIN_ID" '.[$chainId][0].contracts.TestToken2.address' assets.json)
  ```

* Get the nitro address of the counterparty's node with whom you want to create the swap channel

  * To get the nitro address of your own node:

    ```bash
    laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-node-info -p 4005 -h nitro-node"

    # `SCAddress` -> nitro address
    ```

* Set the counterparty address:

  ```bash
  export COUNTER_PARTY_ADDRESS=<counterparty-nitro-address>
  ```

* Create the swap channel:

  ```bash
  laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client swap-fund $COUNTER_PARTY_ADDRESS $BRIDGE_NITRO_ADDRESS --asset "$ASSET_ADDRESS_1:100,100" --asset "$ASSET_ADDRESS_2:100,100" -p 4005 -h nitro-node"

  # Expected output:
  # Objective started SwapFund-0x1dbd58d314f123f4b0f4147eee7fd92fa523ba7082d8a75b846f6d1189e2f0e9
  # Channel open 0x1dbd58d314f123f4b0f4147eee7fd92fa523ba7082d8a75b846f6d1189e2f0e9
  ```
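The channel ID can also be captured from the `swap-fund` output instead of being copied by hand. A sketch, assuming the two-line output format shown above:

```bash
# Output captured from the swap-fund command (format as in the expected output above)
SWAP_FUND_OUTPUT='Objective started SwapFund-0x1dbd58d314f123f4b0f4147eee7fd92fa523ba7082d8a75b846f6d1189e2f0e9
Channel open 0x1dbd58d314f123f4b0f4147eee7fd92fa523ba7082d8a75b846f6d1189e2f0e9'

# The channel ID is the third field of the "Channel open" line
SWAP_CHANNEL_ID=$(printf '%s\n' "$SWAP_FUND_OUTPUT" | awk '/^Channel open/ {print $3}')
echo "$SWAP_CHANNEL_ID"
# 0x1dbd58d314f123f4b0f4147eee7fd92fa523ba7082d8a75b846f6d1189e2f0e9
```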
* Export swap channel ID:

  ```bash
  export SWAP_CHANNEL_ID=
  ```
* Check swap channel:

  ```bash
  laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-swap-channel $SWAP_CHANNEL_ID -p 4005 -h nitro-node"

  # Expected output:
  # {
  #   ID: '0x1dbd58d314f123f4b0f4147eee7fd92fa523ba7082d8a75b846f6d1189e2f0e9',
  #   Status: 'Open',
  #   Balances: [
  #     {
  #       AssetAddress: '0xa4351114dae1abeb2d552d441c9733c72682a45d',
  #       Me: '0x075400039e303b3fb46c0cff0404c5fa61947c05',
  #       Them: '0xd0ea8b27591b1d070cccd4d30b8d408fe794fdfc',
  #       MyBalance: 100n,
  #       TheirBalance: 100n
  #     },
  #     {
  #       AssetAddress: '0x314e43f9825b10961859c2a62c2de6a765c1c1f1',
  #       Me: '0x075400039e303b3fb46c0cff0404c5fa61947c05',
  #       Them: '0xd0ea8b27591b1d070cccd4d30b8d408fe794fdfc',
  #       MyBalance: 100n,
  #       TheirBalance: 100n
  #     }
  #   ]
  # }
  ```
### Performing swaps

* Ensure that environment variables for asset addresses are set (should be done by both parties):

  ```bash
  export GETH_CHAIN_ID="1212"

  # Get asset addresses from the assets.json file
  export ASSET_ADDRESS_1=$(jq -r --arg chainId "$GETH_CHAIN_ID" '.[$chainId][0].contracts.TestToken.address' assets.json)
  export ASSET_ADDRESS_2=$(jq -r --arg chainId "$GETH_CHAIN_ID" '.[$chainId][0].contracts.TestToken2.address' assets.json)
  ```

* Get all active swap channels for a specific mirrored ledger channel (should be done by both parties)

  * To get the mirrored ledger channels:

    ```bash
    laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-all-ledger-channels -p 4005 -h nitro-node"

    # Example output:
    # [
    #   {
    #     "ID": "0xb34210b763d4fdd534190ba11886ad1daa1e411c87be6fd20cff74cd25077c46",
    #     "Status": "Open",
    #     "Balances": [
    #       {
    #         "AssetAddress": "0xa4351114dae1abeb2d552d441c9733c72682a45d",
    #         "Me": "0x075400039e303b3fb46c0cff0404c5fa61947c05",
    #         "Them": "0xf0e6a85c6d23aca9ff1b83477d426ed26f218185",
    #         "MyBalance": 1000n,
    #         "TheirBalance": 1000n
    #       },
    #       {
    #         "AssetAddress": "0x314e43f9825b10961859c2a62c2de6a765c1c1f1",
    #         "Me": "0x075400039e303b3fb46c0cff0404c5fa61947c05",
    #         "Them": "0xf0e6a85c6d23aca9ff1b83477d426ed26f218185",
    #         "MyBalance": 1000n,
    #         "TheirBalance": 1000n
    #       }
    #     ],
    #     "ChannelMode": "Open"
    #   }
    # ]
    ```

  * Export the ledger channel ID:

    ```bash
    export LEDGER_CHANNEL_ID=
    ```

  * To get swap channels for a ledger channel:

    ```bash
    laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-swap-channels-by-ledger $LEDGER_CHANNEL_ID -p 4005 -h nitro-node"

    # Example output:
    # [
    #   {
    #     ID: '0x1dbd58d314f123f4b0f4147eee7fd92fa523ba7082d8a75b846f6d1189e2f0e9',
    #     Status: 'Open',
    #     Balances: [
    #       {
    #         AssetAddress: '0xa4351114dae1abeb2d552d441c9733c72682a45d',
    #         Me: '0x075400039e303b3fb46c0cff0404c5fa61947c05',
    #         Them: '0xd0ea8b27591b1d070cccd4d30b8d408fe794fdfc',
    #         MyBalance: 100n,
    #         TheirBalance: 100n
    #       },
    #       {
    #         AssetAddress: '0x314e43f9825b10961859c2a62c2de6a765c1c1f1',
    #         Me: '0x075400039e303b3fb46c0cff0404c5fa61947c05',
    #         Them: '0xd0ea8b27591b1d070cccd4d30b8d408fe794fdfc',
    #         MyBalance: 100n,
    #         TheirBalance: 100n
    #       }
    #     ]
    #   }
    # ]
    ```

* Export the swap channel ID:

  ```bash
  export SWAP_CHANNEL_ID=
  ```
* One of the participants initiates the swap and the other one either accepts or rejects it

  * To initiate the swap:

    ```bash
    laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client swap-initiate $SWAP_CHANNEL_ID --AssetIn "$ASSET_ADDRESS_1:20" --AssetOut "$ASSET_ADDRESS_2:10" -p 4005 -h nitro-node"

    # Expected output:
    # {
    #   SwapAssetsData: {
    #     TokenIn: '0xa4351114dae1abeb2d552d441c9733c72682a45d',
    #     TokenOut: '0x314e43f9825b10961859c2a62c2de6a765c1c1f1',
    #     AmountIn: 20,
    #     AmountOut: 10
    #   },
    #   Channel: '0x1dbd58d314f123f4b0f4147eee7fd92fa523ba7082d8a75b846f6d1189e2f0e9'
    # }
    ```

  OR

  * To receive the swap:

    * Get the pending swap:

      ```bash
      laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-pending-swap $SWAP_CHANNEL_ID -p 4005 -h nitro-node"

      # Expected output:
      # {
      #   Id: '0x7d582020753335cfd2f2af14127c9b51c7ed7a5d547a674d9cb04fe62de6ddf3',
      #   ChannelId: '0x1dbd58d314f123f4b0f4147eee7fd92fa523ba7082d8a75b846f6d1189e2f0e9',
      #   Exchange: {
      #     TokenIn: '0xa4351114dae1abeb2d552d441c9733c72682a45d',
      #     TokenOut: '0x314e43f9825b10961859c2a62c2de6a765c1c1f1',
      #     AmountIn: 20,
      #     AmountOut: 10
      #   },
      #   Sigs: {
      #     '0': '0x0a018de18a091f7bfb400d9bc64fe958d298882e569c1668c5b1c853b5493221576b2d72074ef6e1899b79e60eaa9934afac5c1e07b7000746bac5b3b1da93311b'
      #   },
      #   Nonce: 2840594896360394000
      # }
      ```

    * Export the swap ID:

      ```bash
      export SWAP_ID=
      ```

    * Either accept or reject the swap

      * To accept:

        ```bash
        laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client swap-accept $SWAP_ID -p 4005 -h nitro-node"

        # Expected output:
        # Confirming Swap with accepted
        # Objective complete Swap-0x7d582020753335cfd2f2af14127c9b51c7ed7a5d547a674d9cb04fe62de6ddf3
        ```

      OR

      * To reject:

        ```bash
        laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client swap-reject $SWAP_ID -p 4005 -h nitro-node"

        # Expected output:
        # Confirming Swap with rejected
        # Objective complete Swap-0x7d582020753335cfd2f2af14127c9b51c7ed7a5d547a674d9cb04fe62de6ddf3
        ```

* Check swap channel:

  ```bash
  laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-swap-channel $SWAP_CHANNEL_ID -p 4005 -h nitro-node"

  # Example output:
  # {
  #   ID: '0x1dbd58d314f123f4b0f4147eee7fd92fa523ba7082d8a75b846f6d1189e2f0e9',
  #   Status: 'Open',
  #   Balances: [
  #     {
  #       AssetAddress: '0xa4351114dae1abeb2d552d441c9733c72682a45d',
  #       Me: '0xd0ea8b27591b1d070cccd4d30b8d408fe794fdfc',
  #       Them: '0x075400039e303b3fb46c0cff0404c5fa61947c05',
  #       MyBalance: 120n,
  #       TheirBalance: 80n
  #     },
  #     {
  #       AssetAddress: '0x314e43f9825b10961859c2a62c2de6a765c1c1f1',
  #       Me: '0xd0ea8b27591b1d070cccd4d30b8d408fe794fdfc',
  #       Them: '0x075400039e303b3fb46c0cff0404c5fa61947c05',
  #       MyBalance: 90n,
  #       TheirBalance: 110n
  #     }
  #   ]
  # }
  ```
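The updated balances follow directly from the swap terms. A sketch of the arithmetic from the accepting party's perspective (note that `Me` in the example output above is the counterparty's address), using the figures from this walkthrough: 100 of each asset at funding time, `AmountIn` 20, `AmountOut` 10:

```bash
# Figures from the example swap above
INITIAL=100     # per-asset balance at swap channel funding
AMOUNT_IN=20    # TokenIn: paid by the initiator, received by the accepting party
AMOUNT_OUT=10   # TokenOut: paid by the accepting party to the initiator

# Accepting party's balances after the swap completes
MY_TOKEN_IN_BALANCE=$((INITIAL + AMOUNT_IN))
MY_TOKEN_OUT_BALANCE=$((INITIAL - AMOUNT_OUT))

echo "TokenIn: $MY_TOKEN_IN_BALANCE, TokenOut: $MY_TOKEN_OUT_BALANCE"
# TokenIn: 120, TokenOut: 90
```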
* Close swap channel:

  ```bash
  laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client swap-defund $SWAP_CHANNEL_ID -p 4005 -h nitro-node"

  # Expected output:
  # Objective started SwapDefund-0x1dbd58d314f123f4b0f4147eee7fd92fa523ba7082d8a75b846f6d1189e2f0e9
  # Objective complete SwapDefund-0x1dbd58d314f123f4b0f4147eee7fd92fa523ba7082d8a75b846f6d1189e2f0e9
  ```

* Check L2 mirrored channel status:

  ```bash
  laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-all-ledger-channels -p 4005 -h nitro-node"

  # Example output:
  # [
  #   {
  #     "ID": "0xb34210b763d4fdd534190ba11886ad1daa1e411c87be6fd20cff74cd25077c46",
  #     "Status": "Open",
  #     "Balances": [
  #       {
  #         "AssetAddress": "0xa4351114dae1abeb2d552d441c9733c72682a45d",
  #         "Me": "0x075400039e303b3fb46c0cff0404c5fa61947c05",
  #         "Them": "0xf0e6a85c6d23aca9ff1b83477d426ed26f218185",
  #         "MyBalance": <updated balance>,
  #         "TheirBalance": <updated balance>
  #       },
  #       {
  #         "AssetAddress": "0x314e43f9825b10961859c2a62c2de6a765c1c1f1",
  #         "Me": "0x075400039e303b3fb46c0cff0404c5fa61947c05",
  #         "Them": "0xf0e6a85c6d23aca9ff1b83477d426ed26f218185",
  #         "MyBalance": <updated balance>,
  #         "TheirBalance": <updated balance>
  #       }
  #     ],
  #     "ChannelMode": "Open"
  #   }
  # ]
  ```
## Update nitro nodes

* Switch to deployments dir:

  ```bash
  cd $DEPLOYMENTS_DIR/nitro-node
  ```

* Rebuild containers:

  ```bash
  laconic-so --stack ~/cerc/nitro-stack/stack-orchestrator/stacks/nitro-node build-containers --force-rebuild
  ```

* Restart the nodes:

  ```bash
  laconic-so deployment --dir l1-nitro-deployment stop
  laconic-so deployment --dir l1-nitro-deployment start

  laconic-so deployment --dir l2-nitro-deployment stop
  laconic-so deployment --dir l2-nitro-deployment start
  ```
## Clean up

* Switch to deployments dir:

  ```bash
  cd <path-to-deployments-dir>
  ```

* Stop all Nitro services running in the background:

  ```bash
  laconic-so deployment --dir l1-nitro-deployment stop
  laconic-so deployment --dir l2-nitro-deployment stop
  ```

* To stop all services and also delete data:

  ```bash
  laconic-so deployment --dir l1-nitro-deployment stop --delete-volumes
  laconic-so deployment --dir l2-nitro-deployment stop --delete-volumes

  # Remove deployment directories (deployments will have to be recreated for a re-run)
  sudo rm -r l1-nitro-deployment
  sudo rm -r l2-nitro-deployment
  ```
## Troubleshooting

* Check the logs of the nitro node to see whether the objective has completed:

  ```bash
  # To check logs of the L1 nitro-node
  laconic-so deployment --dir l1-nitro-deployment logs nitro-node -f --tail 30

  # To check logs of the L2 nitro-node
  laconic-so deployment --dir l2-nitro-deployment logs nitro-node -f --tail 30
  ```

* If the objective has completed, you can safely stop (`Ctrl+C`) the running CLI command and continue with the remaining instructions

* If the direct-fund command is stuck, stop it (`Ctrl+C`) and restart the L1 Nitro node:

  * Stop the deployment:

    ```bash
    cd <path-to-deployments-dir>

    laconic-so deployment --dir l1-nitro-deployment stop
    ```

  * Reset the node's durable store:

    ```bash
    sudo rm -rf l1-nitro-deployment/data/nitro_node_data

    mkdir l1-nitro-deployment/data/nitro_node_data
    ```

  * Restart the deployment:

    ```bash
    laconic-so deployment --dir l1-nitro-deployment start
    ```

  * Retry the ledger channel creation command
# testnet-onboarding-validator

## Onboard as a participant / validator on stage0

* Visit <https://wallet.laconic.com/> and click on `Create wallet`

  * Save the mnemonic for further usage

* Register your laconic address as a participant using the [Onboarding App](https://loro-signup.laconic.com)

  * Read and accept the `Terms and Conditions`

  * On the next page, enter your email to register to join the LORO testnet

  * Visit the confirmation link sent to the registered email (email delivery might take a few minutes)

    * It should take you to the `Testnet Onboarding` app

    * Note: The confirmation link only works the first time; visit <https://loro-signup.laconic.com/sign-with-nitro-key> for further attempts if required

* Connect the testnet-onboarding app to the wallet:

  * Click on the `CONNECT WALLET` button in the testnet-onboarding app

  * Click on the WalletConnect icon in the top right corner of the wallet

  * If using the wallet website, enter the WalletConnect URI for pairing

* In the onboarding app, choose your nitro and laconicd account to onboard

  * Sign using the nitro key

  * Approve the sign request in the wallet

* Select the desired participant role (`Validator` or `Participant`) and accept the onboarding terms and conditions

* In the next step, fund your laconic account by clicking on the `REQUEST TOKENS FROM FAUCET` button; ensure that the displayed balance is updated

* Send the transaction request to the wallet

  * From the wallet, approve and send the transaction to the stage0 laconicd chain
## Join as a validator on stage1

### Prerequisites

* Minimum hardware requirements:

  ```bash
  RAM: 8-16GB
  Disk space: 200GB
  CPU: 2 cores
  ```

* Testnet genesis file and peer node address

* Mnemonic from the [wallet](https://wallet.laconic.com)

* Participant onboarded on stage0

### Setup

* Clone the stack repo:

  ```bash
  laconic-so fetch-stack git.vdb.to/cerc-io/testnet-laconicd-stack

  # See stack documentation stack-orchestrator/stacks/testnet-laconicd/README.md for more details
  ```
* Clone required repositories:

  ```bash
  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/testnet-laconicd setup-repositories --pull

  # If this throws an error as a result of being already checked out to a branch/tag in a repo, remove the repositories and re-run the command
  ```

* Build the container images:

  ```bash
  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/testnet-laconicd build-containers
  ```

  This should create the following docker images locally:

  * `cerc/laconicd`

### Create a deployment

* Create a spec file for the deployment:

  ```bash
  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/testnet-laconicd deploy init --output testnet-laconicd-spec.yml
  ```

* Edit `network` in the spec file to map container ports to host ports as required:

  ```bash
  # testnet-laconicd-spec.yml
  ...
  network:
    ports:
      laconicd:
        - '6060:6060'
        - '26657:26657'
        - '26656:26656'
        - '9473:9473'
        - '9090:9090'
        - '1317:1317'
  ```
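Before starting the deployment it can help to confirm the mapped host ports are not already in use. A best-effort sketch (assumes bash, which provides the `/dev/tcp` pseudo-device; a connection that succeeds means something is already listening on that port):

```bash
# Report whether each host port from the spec file above is already bound
check_port() {
  if (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null; then
    echo "port $1: in use"
  else
    echo "port $1: free"
  fi
}

for p in 6060 26657 26656 9473 9090 1317; do
  check_port "$p"
done
```

If a port is reported in use, change the host-side mapping in the spec file before creating the deployment.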
* Create the deployment from the spec file:

  ```bash
  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/testnet-laconicd deploy create --spec-file testnet-laconicd-spec.yml --deployment-dir testnet-laconicd-deployment
  ```

* Copy over the published testnet genesis file (`.json`) to the data directory in the deployment (`testnet-laconicd-deployment/data/laconicd-data/tmp`):

  ```bash
  # Example
  mkdir -p testnet-laconicd-deployment/data/laconicd-data/tmp
  cp genesis.json testnet-laconicd-deployment/data/laconicd-data/tmp/genesis.json
  ```

### Configuration

* Inside the `testnet-laconicd-deployment` deployment directory, open the `config.env` file and set the following env variables:

  ```bash
  CERC_CHAIN_ID=laconic_9000-1

  # Comma-separated list of nodes to keep persistent connections to
  # Example: "node-1-id@laconicd.laconic.com:26656"
  # Use the provided node id
  CERC_PEERS=""

  # A custom human-readable name for this node
  CERC_MONIKER=
  ```
### Start the deployment

```bash
laconic-so deployment --dir testnet-laconicd-deployment start
```

### Check status

* To list and monitor the running containers:

  ```bash
  # With status
  docker ps -a

  # Follow logs for laconicd container
  laconic-so deployment --dir testnet-laconicd-deployment logs laconicd -f
  ```

* Check the sync status of your node:

  ```bash
  laconic-so deployment --dir testnet-laconicd-deployment exec laconicd "laconicd status | jq .sync_info"

  # `catching_up: false` indicates that the node is fully synced
  ```

* After the node has caught up, view the current list of validators:

  ```bash
  laconic-so deployment --dir testnet-laconicd-deployment exec laconicd "laconicd query staking validators"
  ```

* Confirm that your node's moniker does not yet appear in the list of validators
### Join as testnet validator

* Open the wallet: <https://wallet.laconic.com/>

* Create a validator from the onboarding app:

  * Visit the [validator creation](https://loro-signup.laconic.com/validator) page

  * If required, connect the testnet-onboarding app to the wallet with which onboarding was done on stage0

  * Select the Laconic account (the same one used while onboarding) from which you wish to send the create validator request

    * This should display the details of your onboarded participant

    * You can proceed if the participant has the `validator` role

  * Enter your node's moniker (use the same one used while [configuring](#configuration) the `testnet-laconicd-deployment`)

  * Fetch and enter your validator's pubkey:

    ```bash
    laconic-so deployment --dir testnet-laconicd-deployment exec laconicd "laconicd cometbft show-validator" | jq -r .key
    ```

  * Send the transaction request to the wallet

  * From the wallet, approve and send the transaction to the stage1 laconicd chain

<a name="create-validator-using-cli"></a>
* Alternatively, create a validator using the laconicd CLI:

  * Import a key pair:

    ```bash
    KEY_NAME=alice
    CHAIN_ID=laconic_9000-1

    # Restore existing key with mnemonic seed phrase
    # You will be prompted to enter mnemonic seed
    laconic-so deployment --dir testnet-laconicd-deployment exec laconicd "laconicd keys add $KEY_NAME --recover"

    # Query the keystore for your account's address
    laconic-so deployment --dir testnet-laconicd-deployment exec laconicd "laconicd keys show $KEY_NAME -a"
    ```

  * Check balance for your account:

    ```bash
    laconic-so deployment --dir testnet-laconicd-deployment exec laconicd "laconicd query bank balances <account-address>"
    ```

  * Create required validator configuration:

    ```bash
    # Note:
    # Edit the staking amount and other fields as required
    # Replace <your-node-moniker> with your node's moniker in the command below

    laconic-so deployment --dir testnet-laconicd-deployment exec laconicd 'cat <<EOF > my-validator.json
    {
      "pubkey": $(laconicd cometbft show-validator),
      "amount": "1000000000000000alnt",
      "moniker": "<your-node-moniker>",
      "commission-rate": "0.1",
      "commission-max-rate": "0.2",
      "commission-max-change-rate": "0.01",
      "min-self-delegation": "1"
    }
    EOF'
    ```
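Since `my-validator.json` is built via a heredoc with command substitution, a malformed file (for example, a missing comma) is easy to produce. A local sanity check is to run a stand-in copy through a JSON parser before submitting; a sketch, assuming `python3` is available, with `"<pubkey>"` as a placeholder for the real `laconicd cometbft show-validator` output:

```bash
# Build a stand-in config locally; the pubkey object is a placeholder for
# whatever `laconicd cometbft show-validator` actually prints
cat > /tmp/my-validator.json <<'EOF'
{
  "pubkey": {"@type": "/cosmos.crypto.ed25519.PubKey", "key": "<pubkey>"},
  "amount": "1000000000000000alnt",
  "moniker": "my-node",
  "commission-rate": "0.1",
  "commission-max-rate": "0.2",
  "commission-max-change-rate": "0.01",
  "min-self-delegation": "1"
}
EOF

# Fails with a parse error if the JSON is malformed
python3 -m json.tool /tmp/my-validator.json > /dev/null && echo "my-validator.json: OK"
# my-validator.json: OK
```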
* Create a validator:

  ```bash
  laconic-so deployment --dir testnet-laconicd-deployment exec laconicd "laconicd tx staking create-validator my-validator.json \
    --fees 500000alnt \
    --chain-id=$CHAIN_ID \
    --from $KEY_NAME"
  ```

* View validators:

  ```bash
  laconic-so deployment --dir testnet-laconicd-deployment exec laconicd "laconicd query staking validators"
  ```

* Check that `<your-node-moniker>` exists in the list of validators

* View validator set:

  ```bash
  laconic-so deployment --dir testnet-laconicd-deployment exec laconicd "laconicd query consensus comet validator-set"
  ```

### Clean up

* Stop all `testnet-laconicd` services running in the background:

  ```bash
  # Stop the docker containers
  laconic-so deployment --dir testnet-laconicd-deployment stop
  ```

* To stop all services and also delete data:

  ```bash
  # Stop the docker containers
  laconic-so deployment --dir testnet-laconicd-deployment stop --delete-volumes

  # Remove deployment directory (deployment will have to be recreated for a re-run)
  sudo rm -r testnet-laconicd-deployment
  ```
## Upgrade to SAPO testnet

### Prerequisites

* SAPO testnet (testnet2) [genesis file](./ops/stage2/genesis.json) and peers (see below)

### Setup

* If running, stop the stage1 node:

  ```bash
  # In the dir where the stage1 node deployment (`testnet-laconicd-deployment`) exists

  TESTNET_DEPLOYMENT=$(pwd)/testnet-laconicd-deployment

  laconic-so deployment --dir testnet-laconicd-deployment stop --delete-volumes
  ```

* Clone / pull the stack repo:

  ```bash
  laconic-so fetch-stack git.vdb.to/cerc-io/testnet-laconicd-stack --pull
  ```

  * Ensure you are on tag `v0.1.10`

* Clone / pull the required repositories:

  ```bash
  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/testnet-laconicd setup-repositories --pull

  # If this throws an error as a result of being already checked out to a branch/tag in a repo, remove the repositories and re-run the command
  ```

  Note: Make sure the latest `cerc-io/laconicd` changes have been pulled

* Build the container images:

  ```bash
  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/testnet-laconicd build-containers --force-rebuild
  ```

  This should create the following docker images locally with the latest changes:

  * `cerc/laconicd`

### Create a deployment

* Create a deployment from the spec file:

  ```bash
  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/testnet-laconicd deploy create --spec-file testnet-laconicd-spec.yml --deployment-dir stage-2-testnet-laconicd-deployment
  ```

* Copy over the published testnet2 genesis file (`.json`) to the data directory in the deployment (`stage-2-testnet-laconicd-deployment/data/laconicd-data/tmp-testnet2`):

  ```bash
  # Example
  mkdir -p stage-2-testnet-laconicd-deployment/data/laconicd-data/tmp-testnet2
  cp genesis.json stage-2-testnet-laconicd-deployment/data/laconicd-data/tmp-testnet2/genesis.json
  ```

### Configuration

* Inside the `stage-2-testnet-laconicd-deployment` deployment directory, open the `config.env` file and set the following env variables:

  ```bash
  CERC_CHAIN_ID=laconic-testnet-2

  CERC_PEERS="bd56622c525a4dfce1e388a7b8c0cb072200797b@5.9.80.214:26103,289f10e94156f47c67bc26a8af747a58e8014f15@148.251.49.108:26656,72cd2f50dff154408cc2c7650a94c2141624b657@65.21.237.194:26656,21322e4fa90c485ff3cb9617438deec4acfa1f0b@143.198.37.25:26656"

  # A custom human-readable name for this node
  CERC_MONIKER="my-node"
  ```
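A malformed `CERC_PEERS` entry is a common reason a node finds no peers. Each comma-separated entry should look like `<40-hex-char-node-id>@<host>:<port>`; a quick format check using only `tr` and `grep`:

```bash
# The peers value from config.env above
CERC_PEERS="bd56622c525a4dfce1e388a7b8c0cb072200797b@5.9.80.214:26103,289f10e94156f47c67bc26a8af747a58e8014f15@148.251.49.108:26656,72cd2f50dff154408cc2c7650a94c2141624b657@65.21.237.194:26656,21322e4fa90c485ff3cb9617438deec4acfa1f0b@143.198.37.25:26656"

# Count entries matching <node-id>@<host>:<port>; should equal the number of peers (4 here)
VALID_PEERS=$(echo "$CERC_PEERS" | tr ',' '\n' | grep -Ec '^[0-9a-f]{40}@[^@:]+:[0-9]+$')
echo "$VALID_PEERS valid peer entries"
# 4 valid peer entries
```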
### Start the deployment

```bash
laconic-so deployment --dir stage-2-testnet-laconicd-deployment start
```

See [Check status](#check-status) to follow the sync status of your node

See [Join as testnet validator](#create-validator-using-cli) to join as a validator using the laconicd CLI (use chain id `laconic-testnet-2`)

### Clean up

* Same as [Clean up](#clean-up)

## Troubleshooting

* If you face any issues in the onboarding app or the web wallet, clear your browser cache and reload