Compare commits: main...nv-update- (7 commits)

Commits: 475e688589, cf670cc971, ea8d157fb0, 7ac1f1178d, c9658fe34e, bc6e800407, f6be678bd5

README.md (11 changes)
@@ -10,7 +10,6 @@ Stacks to run a node for laconic testnet
- [Update deployments after code changes](./ops/update-deployments.md)
- [Halt stage0 and start stage1](./ops/stage0-to-stage1.md)
- [Halt stage1 and start stage2](./ops/stage1-to-stage2.md)
- [Create deployments from scratch (for reference only)](./ops/deployments-from-scratch.md)
- [Deploy and transfer new tokens for nitro operations](./ops/nitro-token-ops.md)
@@ -18,14 +17,6 @@ Stacks to run a node for laconic testnet
Follow steps in [testnet-onboarding-validator.md](./testnet-onboarding-validator.md) to onboard your participant and join as a validator on the LORO testnet

## SAPO testnet

Follow steps in [Upgrade to SAPO testnet](./testnet-onboarding-validator.md#upgrade-to-sapo-testnet) for upgrading your LORO testnet node to SAPO testnet

## Setup a Service Provider

Follow steps in [service-provider.md](./service-provider.md) to set up / update your service provider

## Run testnet Nitro Node

Follow steps in [testnet-nitro-node.md](./testnet-nitro-node.md) to run your Nitro node for the testnet
@@ -1,40 +0,0 @@
[server]
host = "0.0.0.0"
port = 8000
gqlPath = "/graphql"

[server.session]
secret = "<redacted>"
# Frontend webapp URL origin
appOriginUrl = "https://deploy.apps.vaasl.io"
# Set to true if server running behind proxy
trustProxy = true
# Backend URL hostname
domain = "deploy-backend.apps.vaasl.io"

[database]
dbPath = "/data/db/deploy-backend"

[gitHub]
webhookUrl = "https://deploy-backend.apps.vaasl.io"

[gitHub.oAuth]
clientId = "<redacted>"
clientSecret = "<redacted>"

[registryConfig]
fetchDeploymentRecordDelay = 5000
checkAuctionStatusDelay = 5000
restEndpoint = "https://laconicd-sapo.laconic.com"
gqlEndpoint = "https://laconicd-sapo.laconic.com/api"
chainId = "laconic-testnet-2"
privateKey = "<redacted>"
bondId = "<redacted>"
authority = "vaasl"

[registryConfig.fee]
gasPrice = "0.001alnt"

[auction]
commitFee = "1000"
commitsDuration = "60s"
revealFee = "1000"
revealsDuration = "60s"
denom = "alnt"
File diff suppressed because it is too large
@@ -1,8 +1,6 @@
# Nitro Token Ops

## Setup

* Go to the directory where `nitro-contracts-deployment` is present:
@@ -10,7 +8,7 @@
  ```bash
  cd /srv/bridge
  ```

## Deploy new token

* To deploy another token:
@@ -50,7 +48,7 @@

* Check in the generated file at `ops/stage2/assets.json` within this repository
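Checking in the generated file is an ordinary git commit; a minimal sketch, demonstrated in a throwaway repository (in practice run the `add`/`commit` inside your checkout of this repository, and the commit message is an assumption):

```shell
# Scratch repo standing in for the real checkout
cd "$(mktemp -d)"
git init -q .
mkdir -p ops/stage2
echo '{"assets": []}' > ops/stage2/assets.json   # stand-in for the generated file

# Stage and commit the generated assets file
git add ops/stage2/assets.json
git -c user.name=dev -c user.email=dev@example.com commit -qm "Check in generated stage2 assets"
git log --oneline -1
```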
## Transfer deployed tokens to given address

* To transfer a token to an account:
@@ -59,25 +57,7 @@
  ```bash
  export TOKEN_NAME="<name-of-token-to-be-transferred>"
  export ASSET_ADDRESS=$(laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "jq -r '.\"$GETH_CHAIN_ID\"[0].contracts.$TOKEN_NAME.address' /app/deployment/nitro-addresses.json")
  export ACCOUNT="<target-account-address>"
  export AMOUNT="<transfer-amount>"

  laconic-so deployment --dir nitro-contracts-deployment exec nitro-contracts "cd packages/nitro-protocol && yarn hardhat transfer --contract $ASSET_ADDRESS --to $ACCOUNT --amount 1000 --network geth"
  ```

## Transfer ETH

* Go to the directory where `fixturenet-eth-deployment` is present:

  ```bash
  cd /srv/fixturenet-eth
  ```

* To transfer ETH to an account:

  ```bash
  export FUNDED_ADDRESS="0xe6CE22afe802CAf5fF7d3845cec8c736ecc8d61F"
  export FUNDED_PK="888814df89c4358d7ddb3fa4b0213e7331239a80e1f013eaa7b2deca2a41a218"

  export TO_ADDRESS="<target-account-address>"

  laconic-so deployment --dir fixturenet-eth-deployment exec foundry "cast send $TO_ADDRESS --value 1ether --from $FUNDED_ADDRESS --private-key $FUNDED_PK"
  ```
@@ -1,465 +0,0 @@
# Service Provider deployments from scratch

## container-registry

* Reference: <https://github.com/LaconicNetwork/loro-testnet/blob/main/docs/service-provider-setup.md#deploy-docker-image-container-registry>

* Target dir: `/srv/service-provider/container-registry`

* Cleanup an existing deployment if required:

  ```bash
  cd /srv/service-provider/container-registry

  # Stop the deployment
  laconic-so deployment --dir container-registry stop --delete-volumes

  # Remove the deployment dir
  sudo rm -rf container-registry

  # Remove the existing spec file
  rm container-registry.spec
  ```
### Setup

- Generate the spec file for the container-registry stack

  ```bash
  laconic-so --stack container-registry deploy init --output container-registry.spec
  ```

- Modify the `container-registry.spec` as shown below

  ```
  stack: container-registry
  deploy-to: k8s
  kube-config: /home/dev/.kube/config-vs-narwhal.yaml
  network:
    ports:
      registry:
        - '5000'
    http-proxy:
      - host-name: container-registry.apps.vaasl.io
        routes:
          - path: '/'
            proxy-to: registry:5000
  volumes:
    registry-data:
  configmaps:
    config: ./configmaps/config
  ```
- Create the deployment directory for the `container-registry` stack

  ```bash
  laconic-so --stack container-registry deploy create --deployment-dir container-registry --spec-file container-registry.spec
  ```

- Modify file `container-registry/kubeconfig.yml` if required

  ```
  apiVersion: v1
  ...
  contexts:
  - context:
      cluster: ***
      user: ***
    name: default
  ...
  ```

  NOTE: `context.name` must be `default` to use with SO
- Base64 encode the container registry credentials

  NOTE: Use the actual credentials for the container registry (the credentials set in `container-registry/credentials.txt`)

  ```bash
  echo -n "so-reg-user:pXDwO5zLU7M88x3aA" | base64 -w0

  # Output: c28tcmVnLXVzZXI6cFhEd081ekxVN004OHgzYUE=
  ```

- Install `apache2-utils` for the next step

  ```bash
  sudo apt install apache2-utils
  ```

- Encrypt the container registry credentials to create an `htpasswd` file

  ```bash
  htpasswd -bB -c container-registry/configmaps/config/htpasswd so-reg-user pXDwO5zLU7M88x3aA
  ```

  The resulting file should look like this:

  ```
  cat container-registry/configmaps/config/htpasswd
  # so-reg-user:$2y$05$6EdxIwwDNlJfNhhQxZRr4eNd.aYrdmbBjAdw422w0u2j3TihQXgd2
  ```
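For reference, the `auth` value used in the Docker config JSON below is simply the base64 of `username:password` (the same value produced by the `base64` command earlier); it can be reproduced with any Python 3 if the `base64` CLI is unavailable:

```shell
# Encode "username:password" the way Docker's config.json expects in its "auth" field
python3 -c 'import base64; print(base64.b64encode(b"so-reg-user:pXDwO5zLU7M88x3aA").decode())'
# → c28tcmVnLXVzZXI6cFhEd081ekxVN004OHgzYUE=
```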
- Using the credentials from the previous steps, create a `container-registry/my_password.json` file

  ```json
  {
    "auths": {
      "container-registry.apps.vaasl.io": {
        "username": "so-reg-user",
        "password": "$2y$05$6EdxIwwDNlJfNhhQxZRr4eNd.aYrdmbBjAdw422w0u2j3TihQXgd2",
        "auth": "c28tcmVnLXVzZXI6cFhEd081ekxVN004OHgzYUE="
      }
    }
  }
  ```
- Configure the file `container-registry/config.env` as follows

  ```env
  REGISTRY_AUTH=htpasswd
  REGISTRY_AUTH_HTPASSWD_REALM="VSL Service Provider Image Registry"
  REGISTRY_AUTH_HTPASSWD_PATH="/config/htpasswd"
  REGISTRY_HTTP_SECRET='$2y$05$6EdxIwwDNlJfNhhQxZRr4eNd.aYrdmbBjAdw422w0u2j3TihQXgd2'
  ```

- Load the context for k8s

  ```bash
  kubie ctx vs-narwhal
  ```

- Add the container registry credentials as a secret available to the cluster

  ```bash
  kubectl create secret generic laconic-registry --from-file=.dockerconfigjson=container-registry/my_password.json --type=kubernetes.io/dockerconfigjson
  ```
### Run

- Deploy the container registry

  ```bash
  laconic-so deployment --dir container-registry start
  ```

- Check the logs

  ```bash
  laconic-so deployment --dir container-registry logs
  ```

- Check status and await successful deployment:

  ```bash
  laconic-so deployment --dir container-registry status
  ```

- Confirm the deployment by logging in:

  ```
  docker login container-registry.apps.vaasl.io --username so-reg-user --password pXDwO5zLU7M88x3aA
  ```

- Set ingress annotations

  - Set the `cluster-id` found in `container-registry/deployment.yml` and then run the following commands:

    ```
    export CLUSTER_ID=<cluster-id>
    # Example:
    # export CLUSTER_ID=laconic-26cc70be8a3db3f4

    kubectl annotate ingress $CLUSTER_ID-ingress nginx.ingress.kubernetes.io/proxy-body-size=0
    kubectl annotate ingress $CLUSTER_ID-ingress nginx.ingress.kubernetes.io/proxy-read-timeout=600
    kubectl annotate ingress $CLUSTER_ID-ingress nginx.ingress.kubernetes.io/proxy-send-timeout=600
    ```
## webapp-deployer

### Backend

* Reference: <https://github.com/LaconicNetwork/loro-testnet/blob/main/docs/service-provider-setup.md#deploy-backend>

* Target dir: `/srv/service-provider/webapp-deployer`

* Cleanup an existing deployment if required:

  ```bash
  cd /srv/service-provider/webapp-deployer

  # Stop the deployment
  laconic-so deployment --dir webapp-deployer stop

  # Remove the deployment dir
  sudo rm -rf webapp-deployer

  # Remove the existing spec file
  rm webapp-deployer.spec
  ```
#### Setup

- Initialize a spec file for the deployer backend

  ```bash
  laconic-so --stack webapp-deployer-backend setup-repositories

  laconic-so --stack webapp-deployer-backend build-containers

  laconic-so --stack webapp-deployer-backend deploy init --output webapp-deployer.spec
  ```

- Modify the contents of `webapp-deployer.spec`:

  ```
  stack: webapp-deployer-backend
  deploy-to: k8s
  kube-config: /home/dev/.kube/config-vs-narwhal.yaml
  image-registry: container-registry.apps.vaasl.io/laconic-registry
  network:
    ports:
      server:
        - '9555'
    http-proxy:
      - host-name: webapp-deployer-api.apps.vaasl.io
        routes:
          - path: '/'
            proxy-to: server:9555
  volumes:
    srv:
  configmaps:
    config: ./data/config
  annotations:
    container.apparmor.security.beta.kubernetes.io/{name}: unconfined
  labels:
    container.kubeaudit.io/{name}.allow-disabled-apparmor: "podman"
  security:
    privileged: true

  resources:
    containers:
      reservations:
        cpus: 3
        memory: 8G
      limits:
        cpus: 7
        memory: 16G
    volumes:
      reservations:
        storage: 200G
  ```
- Create the deployment directory from the spec file

  ```
  laconic-so --stack webapp-deployer-backend deploy create --deployment-dir webapp-deployer --spec-file webapp-deployer.spec
  ```

- Modify file `webapp-deployer/kubeconfig.yml` if required

  ```
  apiVersion: v1
  ...
  contexts:
  - context:
      cluster: ***
      user: ***
    name: default
  ...
  ```

  NOTE: `context.name` must be `default` to use with SO

- Copy `webapp-deployer/kubeconfig.yml` from the k8s cluster creation step to `webapp-deployer/data/config/kube.yml`

  ```bash
  cp webapp-deployer/kubeconfig.yml webapp-deployer/data/config/kube.yml
  ```
- Create `webapp-deployer/data/config/laconic.yml`; it should look like this:

  ```
  services:
    registry:
      # Using the public endpoint does not work inside the machine where the laconicd chain is deployed
      rpcEndpoint: 'http://host.docker.internal:36657'
      gqlEndpoint: 'http://host.docker.internal:3473/api'

      # Set the user key of an account with balance and a bond owned by the user
      userKey:
      bondId:

      chainId: laconic-testnet-2
      gasPrice: 1alnt
  ```

  NOTE: Modify the user key and bond ID according to your configuration
* Publish a `WebappDeployer` record for the deployer backend by following the steps below:

  * Setup GPG keys by following [these steps to create and export a key](https://git.vdb.to/cerc-io/webapp-deployment-status-api#keys)

    ```
    cd webapp-deployer

    # Create a key
    gpg --batch --passphrase "SECRET" --quick-generate-key webapp-deployer-api.apps.vaasl.io default default never

    # Export the public key
    gpg --export webapp-deployer-api.apps.vaasl.io > webapp-deployer-api.apps.vaasl.io.pgp.pub

    # Export the private key
    gpg --export-secret-keys webapp-deployer-api.apps.vaasl.io > webapp-deployer-api.apps.vaasl.io.pgp.key

    cd -
    ```

    NOTE: Use "SECRET" for the passphrase prompt

  * Copy the GPG pub key file generated above to the `webapp-deployer/data/config` directory. This ensures the Docker container has access to the key during the publish process

    ```bash
    cp webapp-deployer/webapp-deployer-api.apps.vaasl.io.pgp.pub webapp-deployer/data/config
    ```
* Publish the webapp deployer record using the `publish-deployer-to-registry` command

  ```
  docker run -i -t \
    --add-host=host.docker.internal:host-gateway \
    -v /srv/service-provider/webapp-deployer/data/config:/config \
    cerc/webapp-deployer-backend:local laconic-so publish-deployer-to-registry \
    --laconic-config /config/laconic.yml \
    --api-url https://webapp-deployer-api.apps.vaasl.io \
    --public-key-file /config/webapp-deployer-api.apps.vaasl.io.pgp.pub \
    --lrn lrn://vaasl-provider/deployers/webapp-deployer-api.apps.vaasl.io \
    --min-required-payment 10000
  ```
- Modify the contents of `webapp-deployer/config.env`:

  ```
  DEPLOYMENT_DNS_SUFFIX="apps.vaasl.io"

  # This should match the name authority reserved above
  DEPLOYMENT_RECORD_NAMESPACE="vaasl-provider"

  # URL of the deployed docker image registry
  IMAGE_REGISTRY="container-registry.apps.vaasl.io"

  # Credentials from the htpasswd section above in the container-registry setup
  IMAGE_REGISTRY_USER=
  IMAGE_REGISTRY_CREDS=

  # Configs
  CLEAN_DEPLOYMENTS=false
  CLEAN_LOGS=false
  CLEAN_CONTAINERS=false
  SYSTEM_PRUNE=false
  WEBAPP_IMAGE_PRUNE=true
  CHECK_INTERVAL=10
  FQDN_POLICY="allow"

  # LRN of the webapp deployer
  LRN="lrn://vaasl-provider/deployers/webapp-deployer-api.apps.vaasl.io"

  # Path to the GPG key file inside the webapp-deployer container
  OPENPGP_PRIVATE_KEY_FILE="webapp-deployer-api.apps.vaasl.io.pgp.key"
  # Passphrase used when creating the GPG key
  OPENPGP_PASSPHRASE="SECRET"

  DEPLOYER_STATE="srv-test/deployments/autodeploy.state"
  UNDEPLOYER_STATE="srv-test/deployments/autoundeploy.state"
  UPLOAD_DIRECTORY="srv-test/uploads"
  HANDLE_AUCTION_REQUESTS=true
  AUCTION_BID_AMOUNT=10000

  # Minimum payment amount required for a single webapp deployment
  MIN_REQUIRED_PAYMENT=10000
  ```
- Push the image to the container registry

  ```
  laconic-so deployment --dir webapp-deployer push-images
  ```

- Modify `webapp-deployer/data/config/laconic.yml`:

  ```
  services:
    registry:
      rpcEndpoint: 'https://laconicd-sapo.laconic.com/'
      gqlEndpoint: 'https://laconicd-sapo.laconic.com/api'

      # Set the user key of an account with balance and a bond owned by the user
      userKey:
      bondId:

      chainId: laconic-testnet-2
      gasPrice: 1alnt
  ```
#### Run

- Start the deployer

  ```
  laconic-so deployment --dir webapp-deployer start
  ```

- Load the context for k8s

  ```bash
  kubie ctx vs-narwhal
  ```

- Copy the GPG key files to the webapp-deployer container

  ```bash
  # Get the webapp-deployer pod id
  laconic-so deployment --dir webapp-deployer ps

  # Expected output:
  # Running containers:
  # id: default/laconic-096fed46af974a47-deployment-644db859c7-snbq6, name: laconic-096fed46af974a47-deployment-644db859c7-snbq6, ports: 10.42.2.11:9555->9555

  # Set the pod id
  export POD_ID=
  # Example:
  # export POD_ID=laconic-096fed46af974a47-deployment-644db859c7-snbq6

  # Copy the GPG key files to the pod
  kubectl cp webapp-deployer/webapp-deployer-api.apps.vaasl.io.pgp.key $POD_ID:/app
  kubectl cp webapp-deployer/webapp-deployer-api.apps.vaasl.io.pgp.pub $POD_ID:/app
  ```

- Publishing records to the registry will now trigger deployments in the backend
### Frontend

* Target dir: `/srv/service-provider/webapp-ui`

* Cleanup an existing deployment if required:

  ```bash
  cd /srv/service-provider/webapp-ui

  # Stop the deployment
  laconic-so deployment --dir webapp-ui stop

  # Remove the deployment dir
  sudo rm -rf webapp-ui

  # Remove the existing spec file
  rm webapp-ui.spec
  ```
#### Setup

* Clone and build the deployer UI

  ```
  git clone https://git.vdb.to/cerc-io/webapp-deployment-status-ui.git ~/cerc/webapp-deployment-status-ui

  laconic-so build-webapp --source-repo ~/cerc/webapp-deployment-status-ui
  ```

* Create a deployment

  ```bash
  export KUBECONFIG_PATH=/home/dev/.kube/config-vs-narwhal.yaml
  # NOTE: Use the actual kubeconfig path

  laconic-so deploy-webapp create --kube-config $KUBECONFIG_PATH --image-registry container-registry.apps.vaasl.io --deployment-dir webapp-ui --image cerc/webapp-deployment-status-ui:local --url https://webapp-deployer-ui.apps.vaasl.io --env-file ~/cerc/webapp-deployment-status-ui/.env
  ```
* Modify file `webapp-ui/kubeconfig.yml` if required

  ```yml
  apiVersion: v1
  ...
  contexts:
  - context:
      cluster: ***
      user: ***
    name: default
  ...
  ```

  NOTE: `context.name` must be `default` to use with SO

- Push the image to the container registry

  ```
  laconic-so deployment --dir webapp-ui push-images
  ```

- Modify `webapp-ui/config.env` like [this Pull Request](https://git.vdb.to/cerc-io/webapp-deployment-status-ui/pulls/6) but with your host details

#### Run

- Start the deployer UI

  ```bash
  laconic-so deployment --dir webapp-ui start
  ```

- Wait a moment, then go to <https://webapp-deployer-ui.apps.vaasl.io> for the status and logs of each deployment
@@ -93,7 +93,7 @@ Once all the participants have completed their onboarding, stage0 laconicd chain
  ```bash
  scp dev@<deployments-server-hostname>:/srv/laconicd/stage1-deployment/data/laconicd-data/config/genesis.json </path/to/local/directory>
  ```

* Now users can follow the steps to [Join as a validator on stage1](https://git.vdb.to/cerc-io/testnet-laconicd-stack/src/branch/main/testnet-onboarding-validator.md#join-as-a-validator-on-stage1)

## Bank Transfer
@@ -1,150 +0,0 @@
# Halt stage1 and start stage2

## Login

* Log in as the `dev` user on the deployments VM

* All the deployments are placed in the `/srv` directory:

  ```bash
  cd /srv
  ```

## Halt stage1

* Confirm that the currently running node is for the stage1 chain:

  ```bash
  # On the stage1 deployment machine
  STAGE1_DEPLOYMENT=/srv/laconicd/testnet-laconicd-deployment

  laconic-so deployment --dir $STAGE1_DEPLOYMENT logs laconicd -f --tail 30

  # Note: the stage1 node on the deployments VM has been changed to run from /srv/laconicd/testnet-laconicd-deployment instead of /srv/laconicd/stage1-deployment
  ```
* Stop the stage1 deployment:

  ```bash
  laconic-so deployment --dir $STAGE1_DEPLOYMENT stop

  # Stopping this deployment marks the end of testnet stage1
  ```

## Export stage1 state

* Export the chain state:

  ```bash
  docker run -it \
    -v $STAGE1_DEPLOYMENT/data/laconicd-data:/root/.laconicd \
    cerc/laconicd-stage1:local bash -c "laconicd export | jq > /root/.laconicd/stage1-state.json"
  ```

* Archive the state and the node config and keys:

  ```bash
  sudo tar -czf /srv/laconicd/stage1-laconicd-export.tar.gz --exclude="./data" --exclude="./tmp" -C $STAGE1_DEPLOYMENT/data/laconicd-data .

  sudo chown dev:dev /srv/laconicd/stage1-laconicd-export.tar.gz
  ```
## Initialize stage2

* Copy over the stage1 state and node export archive to the stage2 deployment machine

* Extract the stage1 state and node config to the stage2 deployment dir:

  ```bash
  # On the stage2 deployment machine
  cd /srv/laconicd

  # Unarchive
  tar -xzf stage1-laconicd-export.tar.gz -C stage2-deployment/data/laconicd-data

  # Verify contents
  ll stage2-deployment/data/laconicd-data
  ```

* Initialize the stage2 chain:

  ```bash
  DEPLOYMENT_DIR=$(pwd)

  cd ~/cerc/fixturenet-laconicd-stack/stack-orchestrator/stacks/fixturenet-laconicd

  STAGE2_CHAIN_ID=laconic-testnet-2
  ./scripts/initialize-stage2.sh $DEPLOYMENT_DIR/stage2-deployment $STAGE2_CHAIN_ID LaconicStage2 os 1000000000000000

  # Enter the keyring passphrase for the account from stage1 when prompted

  cd $DEPLOYMENT_DIR
  ```
  The `initialize-stage2.sh` script:

  * Resets the node data (`unsafe-reset-all`)

  * Initializes the `stage2-deployment` node

  * Generates the genesis file for stage2 with stage1 state

  * Carries over accounts, balances and laconicd modules from stage1

  * Skips staking and validator data
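The state carry-over can be pictured as a jq transformation over the exported stage1 state. This is an illustrative sketch only; the actual logic lives in `scripts/initialize-stage2.sh`, and the field names below are simplified assumptions:

```shell
# Build a tiny stand-in for the exported stage1 state
cat > /tmp/stage1-state.json <<'EOF'
{"chain_id": "laconicd-1", "app_state": {"bank": {"balances": [{"address": "laconic1example", "coins": [{"denom": "alnt", "amount": "100"}]}]}, "staking": {"validators": [{"moniker": "stage1-validator"}]}}}
EOF

# Keep accounts/balances, drop staking/validator data, set the stage2 chain id
jq '.chain_id = "laconic-testnet-2" | .app_state.staking.validators = []' \
  /tmp/stage1-state.json > /tmp/stage2-genesis.json

jq -r '.chain_id' /tmp/stage2-genesis.json
# → laconic-testnet-2
```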
* Copy the genesis file outside the data directory:

  ```bash
  cp stage2-deployment/data/laconicd-data/config/genesis.json stage2-deployment
  ```

## Start stage2

* Start the stage2 deployment:

  ```bash
  laconic-so deployment --dir stage2-deployment start
  ```

* Check the status of stage2 laconicd:
  ```bash
  # List the container and check its health status
  docker ps -a | grep laconicd

  # Follow logs for the laconicd container; check that new blocks are getting created
  laconic-so deployment --dir stage2-deployment logs laconicd -f
  ```

* Get the node's peer address and the stage2 genesis file to share with the participants:

  * Get the node id:

    ```bash
    echo $(laconic-so deployment --dir stage2-deployment exec laconicd "laconicd cometbft show-node-id")@laconicd-sapo.laconic.com:36656
    ```

  * Get the genesis file:

    ```bash
    scp dev@<deployments-server-hostname>:/srv/laconicd/stage2-deployment/genesis.json </path/to/local/directory>
    ```
* Now users can follow the steps to [Upgrade to SAPO testnet](../testnet-onboarding-validator.md#upgrade-to-sapo-testnet)

## Bank Transfer

* Transfer tokens to an address:

  ```bash
  cd /srv/laconicd

  RECEIVER_ADDRESS=
  AMOUNT=

  laconic-so deployment --dir stage2-deployment exec laconicd "laconicd tx bank send alice ${RECEIVER_ADDRESS} ${AMOUNT}alnt --from alice --fees 1000alnt"
  ```

* Check the balance:

  ```bash
  laconic-so deployment --dir stage2-deployment exec laconicd "laconicd query bank balances ${RECEIVER_ADDRESS}"
  ```
ops/stage2/genesis.json (711201 lines): file diff suppressed because one or more lines are too long
@@ -1,25 +0,0 @@
#!/bin/bash

# Exit on error
set -e
set -u

NODE_HOME="$HOME/.laconicd"
testnet2_genesis="$NODE_HOME/tmp-testnet2/genesis.json"

if [ ! -f "${testnet2_genesis}" ]; then
  echo "testnet2 genesis file not found, exiting..."
  exit 1
fi

# Remove data but keep keys
laconicd cometbft unsafe-reset-all

# Use the provided genesis config
cp "$testnet2_genesis" "$NODE_HOME/config/genesis.json"

# Set the chain id in the client config
chain_id=$(jq -r '.chain_id' "$testnet2_genesis")
laconicd config set client chain-id "$chain_id" --home "$NODE_HOME"

echo "Node data reset and ready for testnet2!"
@@ -254,247 +254,3 @@ Instructions to reset / update the deployments

* The laconic console can now be viewed at <https://loro-console.laconic.com>

---
## stage2 laconicd

* Deployment dir: `/srv/laconicd/stage2-deployment`

* If the code has changed, fetch and build with the updated source code:

  ```bash
  # laconicd source
  cd ~/cerc/laconicd

  # Pull the latest changes, or checkout the required branch
  git pull

  # Confirm the latest commit hash
  git log

  # Rebuild the containers
  cd /srv/laconicd

  laconic-so --stack ~/cerc/fixturenet-laconicd-stack/stack-orchestrator/stacks/fixturenet-laconicd build-containers --force-rebuild
  ```
* Optionally, reset the data directory:

  ```bash
  # Stop the deployment
  laconic-so deployment --dir stage2-deployment stop --delete-volumes

  # Remove and recreate the required data dirs
  sudo rm -rf stage2-deployment/data/laconicd-data stage2-deployment/data/genesis-config

  mkdir stage2-deployment/data/laconicd-data
  mkdir stage2-deployment/data/genesis-config
  ```

* Follow [stage1-to-stage2.md](./stage1-to-stage2.md) to reinitialize stage2 and start the deployment
## laconic-console-testnet2

* Deployment dir: `/srv/console/laconic-console-testnet2-deployment`

* Steps to update the deployment are similar to those in [laconic-console](#laconic-console)
## Laconic Shopify

* Deployment dir: `/srv/shopify/laconic-shopify-deployment`

* If code has changed, fetch and build with updated source code:

  ```bash
  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-shopify setup-repositories --git-ssh --pull

  # Rebuild containers
  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-shopify build-containers --force-rebuild
  ```

* Update the configuration if required in `laconic-shopify-deployment/config.env`

* Restart the deployment:

  ```bash
  cd /srv/shopify

  laconic-so deployment --dir laconic-shopify-deployment stop

  laconic-so deployment --dir laconic-shopify-deployment start
  ```
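Every deployment in this document restarts with the same stop/start pair, so the pair can be wrapped once and reused. A sketch in dry-run form (each command is echoed rather than executed, so it reads without `laconic-so` installed); drop the `echo` to run for real:

```shell
# Print (dry-run) the restart sequence for a given deployment dir
restart_deployment() {
  dir=$1
  echo "laconic-so deployment --dir $dir stop"
  echo "laconic-so deployment --dir $dir start"
}

restart_deployment laconic-shopify-deployment
```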
## webapp-deployer

### Backend

* Deployment dir: `/srv/service-provider/webapp-deployer`

* If code has changed, fetch and build with updated source code:

  ```bash
  laconic-so --stack webapp-deployer-backend setup-repositories --git-ssh --pull

  laconic-so --stack webapp-deployer-backend build-containers --force-rebuild
  ```

* Update the configuration if required in:

  * `/srv/service-provider/webapp-deployer/data/config/laconic.yml`
  * `/srv/service-provider/webapp-deployer/config.env`

* Restart the deployment:

  ```bash
  laconic-so deployment --dir webapp-deployer stop

  laconic-so deployment --dir webapp-deployer start
  ```

* Load the context for k8s:

  ```bash
  kubie ctx vs-narwhal
  ```

* Copy the GPG key files to the webapp-deployer container:

  ```bash
  # Get the webapp-deployer pod id
  laconic-so deployment --dir webapp-deployer ps

  # Expected output
  # Running containers:
  # id: default/laconic-096fed46af974a47-deployment-644db859c7-snbq6, name: laconic-096fed46af974a47-deployment-644db859c7-snbq6, ports: 10.42.2.11:9555->9555

  # Set pod id
  export POD_ID=
  # Example:
  # export POD_ID=laconic-096fed46af974a47-deployment-644db859c7-snbq6

  # Copy GPG key files to the pod
  kubectl cp webapp-deployer/webapp-deployer-api.apps.vaasl.io.pgp.key $POD_ID:/app
  kubectl cp webapp-deployer/webapp-deployer-api.apps.vaasl.io.pgp.pub $POD_ID:/app
  ```
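Rather than pasting the pod id by hand, it can be parsed out of the `ps` output. A sketch against the sample output line shown above (hard-coded here for illustration; in practice capture the output of `laconic-so deployment --dir webapp-deployer ps`):

```shell
# Sample `ps` output line, hard-coded for illustration
ps_line='id: default/laconic-096fed46af974a47-deployment-644db859c7-snbq6, name: laconic-096fed46af974a47-deployment-644db859c7-snbq6, ports: 10.42.2.11:9555->9555'

# Take the text between "id: default/" and the first comma
POD_ID=$(printf '%s\n' "$ps_line" | sed -n 's/^id: default\/\([^,]*\),.*/\1/p')
echo "$POD_ID"
```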
### Frontend

* Deployment dir: `/srv/service-provider/webapp-ui`

* If code has changed, fetch and build with updated source code:

  ```bash
  cd ~/cerc/webapp-deployment-status-ui

  # Pull latest changes, or checkout to the required branch
  git pull

  # Confirm the latest commit hash
  git log

  laconic-so build-webapp --source-repo ~/cerc/webapp-deployment-status-ui
  ```

* Modify `/srv/service-provider/webapp-ui/config.env` like [this Pull Request](https://git.vdb.to/cerc-io/webapp-deployment-status-ui/pulls/6) but with your host details

* Restart the deployment:

  ```bash
  laconic-so deployment --dir webapp-ui stop

  laconic-so deployment --dir webapp-ui start
  ```
## Deploy Backend

* Deployment dir: `/srv/deploy-backend/backend-deployment`

* If code has changed, fetch and build with updated source code:

  ```bash
  laconic-so --stack ~/cerc/snowballtools-base-api-stack/stack-orchestrator/stacks/snowballtools-base-backend setup-repositories --git-ssh --pull

  # Rebuild containers
  laconic-so --stack ~/cerc/snowballtools-base-api-stack/stack-orchestrator/stacks/snowballtools-base-backend build-containers --force-rebuild
  ```

* Push updated images to the container registry:

  ```bash
  cd /srv/deploy-backend

  # Login to container registry
  CONTAINER_REGISTRY_URL=container-registry.apps.vaasl.io
  CONTAINER_REGISTRY_USERNAME=
  CONTAINER_REGISTRY_PASSWORD=

  docker login $CONTAINER_REGISTRY_URL --username $CONTAINER_REGISTRY_USERNAME --password $CONTAINER_REGISTRY_PASSWORD

  # Push backend images
  laconic-so deployment --dir backend-deployment push-images
  ```

* Update the configuration if required in `backend-deployment/configmaps/config/prod.toml`

* Restart the deployment:

  ```bash
  laconic-so deployment --dir backend-deployment stop

  laconic-so deployment --dir backend-deployment start
  ```
## Deploy Frontend

* Follow steps from [deployments-from-scratch.md](./deployments-from-scratch.md#deploy-frontend) to deploy the snowball frontend
## Fixturenet Eth

* Deployment dir: `/srv/fixturenet-eth/fixturenet-eth-deployment`

* If code has changed, fetch and build with updated source code:

  ```bash
  laconic-so --stack ~/cerc/fixturenet-eth-stacks/stack-orchestrator/stacks/fixturenet-eth setup-repositories --git-ssh --pull

  # Rebuild the containers
  laconic-so --stack ~/cerc/fixturenet-eth-stacks/stack-orchestrator/stacks/fixturenet-eth build-containers --force-rebuild
  ```

* Update the configuration if required in `fixturenet-eth-deployment/config.env`:

  ```bash
  CERC_ALLOW_UNPROTECTED_TXS=true
  ```

* Restart the deployment:

  ```bash
  cd /srv/fixturenet-eth

  laconic-so deployment --dir fixturenet-eth-deployment stop

  laconic-so deployment --dir fixturenet-eth-deployment start
  ```
## Nitro Bridge

* Deployment dir: `/srv/bridge/bridge-deployment`

* Rebuild containers:

  ```bash
  # Rebuild the containers
  laconic-so --stack ~/cerc/nitro-stack/stack-orchestrator/stacks/bridge build-containers --force-rebuild
  ```

* Update the configuration if required in `bridge-deployment/config.env`

* Restart the bridge deployment:

  ```bash
  cd /srv/bridge

  laconic-so deployment --dir bridge-deployment stop

  laconic-so deployment --dir bridge-deployment start
  ```
@ -1,400 +0,0 @@

# Service Provider

* Follow [Set up a new service provider](#set-up-a-new-service-provider) to set up a new service provider (SP)
* If you already have an SP set up for stage1, follow [Update service provider for SAPO testnet](#update-service-provider-for-sapo-testnet) to update it for testnet2

## Set up a new service provider

Follow the steps in [service-provider-setup](<https://git.vdb.to/cerc-io/testnet-ops/src/branch/main/service-provider-setup#service-provider-setup>). After setup, you will have the following services running (your configuration will look similar to the examples listed below):

* laconicd chain RPC endpoint: <http://lcn-daemon.laconic.com:26657>
* laconicd GQL endpoint: <http://lcn-daemon.laconic.com:9473/api>
* laconic console: <http://lcn-console.laconic.com:8080/registry>
* webapp deployer API: <https://webapp-deployer-api.pwa.laconic.com>
* webapp deployer UI: <https://webapp-deployer-ui.pwa.laconic.com>

Follow the steps below to point your deployer to the SAPO testnet.

## Update service provider for SAPO testnet

* On a successful webapp-deployer setup with the SAPO testnet, your deployer will be available at <https://deploy.laconic.com>
* To create a project, users can either create a deployment auction that your deployer bids on, or perform a targeted deployment using your deployer's LRN
### Prerequisites

* A SAPO testnet node (see [Join SAPO testnet](./README.md#join-sapo-testnet))

### Stop services

* Stop the laconic-console deployment:

  ```bash
  # In directory where laconic-console deployment was created
  laconic-so deployment --dir laconic-console-deployment stop --delete-volumes
  ```

* Stop the webapp deployer:

  ```bash
  # In directory where webapp-deployer deployment was created
  laconic-so deployment --dir webapp-deployer stop
  laconic-so deployment --dir webapp-ui stop
  ```
### Update laconic console

* Remove the existing console deployment:

  ```bash
  # In directory where laconic-console deployment was created
  # Back up the config if required
  rm -rf laconic-console-deployment
  ```

* Follow the [laconic-console](stack-orchestrator/stacks/laconic-console/README.md) stack instructions to set up a new laconic-console deployment

* Example configuration:

  ```bash
  # CLI configuration

  # laconicd RPC endpoint (can be pointed to your node)
  CERC_LACONICD_RPC_ENDPOINT=https://laconicd-sapo.laconic.com

  # laconicd GQL endpoint (can be pointed to your node)
  CERC_LACONICD_GQL_ENDPOINT=https://laconicd-sapo.laconic.com/api

  CERC_LACONICD_CHAIN_ID=laconic-testnet-2

  # Your private key
  CERC_LACONICD_USER_KEY=

  # Your bond id (optional)
  CERC_LACONICD_BOND_ID=

  # Gas price to use for txs (default: 0.001alnt)
  # Used for auto fees calculation; gas and fees need not be set in that case
  # Reference: https://git.vdb.to/cerc-io/laconic-registry-cli#gas-and-fees
  CERC_LACONICD_GASPRICE=

  # Console configuration

  # Laconicd (hosted) GQL endpoint (can be pointed to your node)
  LACONIC_HOSTED_ENDPOINT=https://laconicd-sapo.laconic.com
  ```
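In the example configuration above, the GQL endpoint is just the RPC endpoint with `/api` appended, so both values can be derived from a single base URL when templating the config:

```shell
# Derive both endpoints from one base URL (values from the example above)
RPC_ENDPOINT=https://laconicd-sapo.laconic.com
GQL_ENDPOINT="$RPC_ENDPOINT/api"

echo "$RPC_ENDPOINT"
echo "$GQL_ENDPOINT"
```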
### Check authority and deployer record

* The stage1 testnet state has been carried over to testnet2; if you had an authority and records on stage1, they should be present on testnet2 as well

* Check authority:

  ```bash
  # In directory where laconic-console deployment was created
  AUTHORITY=<your-authority>
  laconic-so deployment --dir laconic-console-deployment exec cli "laconic registry authority whois $AUTHORITY"
  ```

* Check deployer record:

  ```bash
  PAYMENT_ADDRESS=<your-deployers-payment-address>
  laconic-so deployment --dir laconic-console-deployment exec cli "laconic registry record list --all --type WebappDeployer --paymentAddress $PAYMENT_ADDRESS"
  ```
### (Optional) Reserve a new authority

* Follow these steps if you want to reserve a new authority

* Create a bond:

  ```bash
  # An existing bond can also be used
  laconic-so deployment --dir laconic-console-deployment exec cli "laconic registry bond create --type alnt --quantity 100000000000"
  # {"bondId":"a742489e5817ef274187611dadb0e4284a49c087608b545ab6bd990905fb61f3"}

  # Set bond id
  BOND_ID=
  ```
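The bond id can be captured from the command's JSON output instead of being copied by hand. A sketch against the sample output shown above (hard-coded here for illustration; in practice substitute the real command's output):

```shell
# Sample output of the bond create command, hard-coded for illustration
create_output='{"bondId":"a742489e5817ef274187611dadb0e4284a49c087608b545ab6bd990905fb61f3"}'

# Pull the value of the "bondId" field
BOND_ID=$(printf '%s\n' "$create_output" | sed -n 's/.*"bondId":"\([^"]*\)".*/\1/p')
echo "$BOND_ID"
```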
* Reserve an authority:

  ```bash
  AUTHORITY=<your-authority>
  laconic-so deployment --dir laconic-console-deployment exec cli "laconic registry authority reserve $AUTHORITY"

  # Triggers an authority auction
  ```

* Obtain the authority auction id:

  ```bash
  laconic-so deployment --dir laconic-console-deployment exec cli "laconic registry authority whois $AUTHORITY"
  # "auction": {
  #   "id": "73e0b082a198c396009ce748804a9060c674a10045365d262c1584f99d2771c1"

  # Set auction id
  AUCTION_ID=
  ```

* Commit a bid to the auction:

  ```bash
  laconic-so deployment --dir laconic-console-deployment exec cli "laconic registry auction bid commit $AUCTION_ID 5000000 alnt --chain-id laconic-testnet-2"

  # {"reveal_file":"/app/out/bafyreiewi4osqyvrnljwwcb36fn6sr5iidfpuznqkz52gxc5ztt3jt4zmy.json"}

  # Set reveal file
  REVEAL_FILE=

  # Wait for the auction to move from commit to reveal phase
  laconic-so deployment --dir laconic-console-deployment exec cli "laconic registry auction get $AUCTION_ID"
  ```
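Waiting for the commit-to-reveal transition can be scripted as a poll loop. A sketch with a stub standing in for the real status query (which would run `laconic registry auction get $AUCTION_ID` and inspect the phase field); the loop shape is the point, and the stub's phases are made up:

```shell
# Stub: pretend the auction reaches the reveal phase on the third poll
get_auction_status() {
  if [ "$1" -ge 3 ]; then echo "reveal"; else echo "commit"; fi
}

attempt=1
while [ "$(get_auction_status "$attempt")" != "reveal" ]; do
  attempt=$((attempt + 1))
  # a real loop would `sleep` between polls of the registry
done
echo "reveal phase reached after $attempt polls"
```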
* Reveal your bid using the reveal file generated while committing the bid:

  ```bash
  laconic-so deployment --dir laconic-console-deployment exec cli "laconic registry auction bid reveal $AUCTION_ID $REVEAL_FILE --chain-id laconic-testnet-2"
  # {"success": true}
  ```

* Verify auction status and winner address after auction completion:

  ```bash
  laconic-so deployment --dir laconic-console-deployment exec cli "laconic registry auction get $AUCTION_ID"
  ```

* Set the authority with a bond:

  ```bash
  laconic-so deployment --dir laconic-console-deployment exec cli "laconic registry authority bond set $AUTHORITY $BOND_ID"
  # {"success": true}
  ```

* Verify the authority has been registered:

  ```bash
  laconic-so deployment --dir laconic-console-deployment exec cli "laconic registry authority whois $AUTHORITY"
  ```

* Update the laconic-console-deployment config (`laconic-console-deployment/config.env`) with the created bond:

  ```bash
  ...
  CERC_LACONICD_BOND_ID=<bond-id>
  ...
  ```

* Restart the console deployment:

  ```bash
  laconic-so deployment --dir laconic-console-deployment stop && laconic-so deployment --dir laconic-console-deployment start
  ```
### Update webapp deployer

* Fetch the latest stack repos:

  ```bash
  # In directory where webapp-deployer deployment was created
  laconic-so --stack webapp-deployer-backend setup-repositories --pull

  # Confirm the latest commit hash in the ~/cerc/webapp-deployment-status-api repo
  ```

* Rebuild container images:

  ```bash
  laconic-so --stack webapp-deployer-backend build-containers --force-rebuild
  ```

* Push stack images to the container registry:

  * Login to the container registry:

    ```bash
    # Set required variables
    # eg: container-registry.pwa.laconic.com
    CONTAINER_REGISTRY_URL=
    CONTAINER_REGISTRY_USERNAME=
    CONTAINER_REGISTRY_PASSWORD=

    # Login to container registry
    docker login $CONTAINER_REGISTRY_URL --username $CONTAINER_REGISTRY_USERNAME --password $CONTAINER_REGISTRY_PASSWORD

    # WARNING! Using --password via the CLI is insecure. Use --password-stdin.
    # WARNING! Your password will be stored unencrypted in /home/dev2/.docker/config.json.
    # Configure a credential helper to remove this warning. See
    # https://docs.docker.com/engine/reference/commandline/login/#credential-stores

    # Login Succeeded
    ```

  * Push images:

    ```bash
    laconic-so deployment --dir webapp-deployer push-images
    ```

* Overwrite the existing compose file with the latest one:

  ```bash
  # In directory where webapp-deployer deployment folder exists
  cp ~/cerc/webapp-deployment-status-api/docker-compose.yml webapp-deployer/compose/docker-compose-webapp-deployer-backend.yml
  ```
* Update the deployer laconic registry config (`webapp-deployer/data/config/laconic.yml`) with the new endpoints:

  ```yml
  services:
    registry:
      rpcEndpoint: "<your-sapo-rpc-endpoint>" # Eg. https://laconicd-sapo.laconic.com
      gqlEndpoint: "<your-sapo-gql-endpoint>" # Eg. https://laconicd-sapo.laconic.com/api
      userKey: "<userKey>"
      bondId: "<bondId>"
      chainId: laconic-testnet-2
      gasPrice: 0.001alnt
  ```

  Note: The existing `userKey` and `bondId` can be used since they are carried over from laconicd stage1 to testnet2
* Publish a new webapp deployer record:

  * Required if it doesn't already exist or some attribute needs to be updated

  * Set the following variables:

    ```bash
    # Path to the webapp-deployer directory
    # eg: /home/dev/webapp-deployer
    DEPLOYER_DIR=

    # Deployer LRN (logical resource name)
    # eg: "lrn://laconic/deployers/webapp-deployer-api.laconic.com"
    DEPLOYER_LRN=

    # Deployer API URL
    # eg: "https://webapp-deployer-api.pwa.laconic.com"
    API_URL=

    # Deployer GPG public key file path
    # eg: "/home/dev/webapp-deployer-api.laconic.com.pgp.pub"
    GPG_PUB_KEY_FILE_PATH=

    GPG_PUB_KEY_FILE=$(basename $GPG_PUB_KEY_FILE_PATH)
    ```

  * Delete the LRN if it currently resolves to an existing record:

    ```bash
    # In directory where laconic-console deployment was created
    laconic-so deployment --dir laconic-console-deployment exec cli "laconic registry name resolve $DEPLOYER_LRN"

    # Delete the name
    laconic-so deployment --dir laconic-console-deployment exec cli "laconic registry name delete $DEPLOYER_LRN"

    # Confirm deletion
    laconic-so deployment --dir laconic-console-deployment exec cli "laconic registry name resolve $DEPLOYER_LRN"
    ```

  * Copy the GPG pub key file to webapp-deployer:

    ```bash
    cp $GPG_PUB_KEY_FILE_PATH webapp-deployer/data/config
    ```

  * Publish the deployer record:

    ```bash
    docker run -it \
      -v $DEPLOYER_DIR/data/config:/home/root/config \
      cerc/webapp-deployer-backend:local laconic-so publish-deployer-to-registry \
      --laconic-config /home/root/config/laconic.yml \
      --api-url $API_URL \
      --public-key-file /home/root/config/$GPG_PUB_KEY_FILE \
      --lrn $DEPLOYER_LRN \
      --min-required-payment 9500
    ```
* Update the deployer config (`webapp-deployer/config.env`):

  ```bash
  # Update the deployer LRN if it has changed
  export LRN=

  # Min payment to require for performing deployments
  export MIN_REQUIRED_PAYMENT=9500

  # Handle deployment auction requests
  export HANDLE_AUCTION_REQUESTS=true

  # Amount that the deployer will bid on deployment auctions
  export AUCTION_BID_AMOUNT=9500
  ```
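Before starting the deployer, a quick guard can confirm the required variables above are non-empty. A minimal sketch (the variable names come from the config above; the sample values here are placeholders):

```shell
# Fail fast if any named variable is empty or unset
check_required() {
  for var in "$@"; do
    eval "val=\${$var:-}"
    if [ -z "$val" ]; then
      echo "missing: $var"
      return 1
    fi
  done
  echo "all set"
}

# Placeholder values for illustration
LRN="lrn://example/deployers/webapp-deployer-api.example.com"
MIN_REQUIRED_PAYMENT=9500
check_required LRN MIN_REQUIRED_PAYMENT
```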
* Start the webapp deployer:

  ```bash
  laconic-so deployment --dir webapp-deployer start
  ```

* Get the webapp-deployer pod id:

  ```bash
  laconic-so deployment --dir webapp-deployer ps

  # Expected output
  # Running containers:
  # id: default/laconic-096fed46af974a47-deployment-644db859c7-snbq6, name: laconic-096fed46af974a47-deployment-644db859c7-snbq6, ports: 10.42.2.11:9555->9555

  # Set pod id
  export POD_ID=

  # Example:
  # export POD_ID=laconic-096fed46af974a47-deployment-644db859c7-snbq6
  ```

* Copy the GPG key files to the webapp-deployer container:

  ```bash
  kubie ctx default

  # Copy the GPG key files to the pod
  kubectl cp <path-to-your-gpg-private-key> $POD_ID:/app
  kubectl cp <path-to-your-gpg-public-key> $POD_ID:/app

  # Required every time you stop and start the deployer
  ```
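Since the key copy must be repeated after every restart, the steps can be kept together in one helper. A dry-run sketch (commands are echoed, not executed, so it reads without a live cluster; drop the `echo`s to run for real):

```shell
# Dry-run: print the restart-then-copy-keys sequence for the deployer
restart_deployer_with_keys() {
  pod_id=$1; priv_key=$2; pub_key=$3
  echo "laconic-so deployment --dir webapp-deployer stop"
  echo "laconic-so deployment --dir webapp-deployer start"
  echo "kubectl cp $priv_key $pod_id:/app"
  echo "kubectl cp $pub_key $pod_id:/app"
}

restart_deployer_with_keys my-pod-id deployer.pgp.key deployer.pgp.pub
```

In practice the pod id changes after a restart, so it would be re-fetched from `laconic-so deployment --dir webapp-deployer ps` between the start and the copy steps.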
* Check logs:

  ```bash
  # Deployer
  kubectl logs -f $POD_ID

  # Deployer auction handler
  kubectl logs -f $POD_ID -c cerc-webapp-auction-handler
  ```

* Update the deployer UI config (`webapp-ui/config.env`):

  ```bash
  # URL of the webapp deployer backend API
  # eg: https://webapp-deployer-api.pwa.laconic.com
  LACONIC_HOSTED_CONFIG_app_api_url=

  # URL of the laconic console
  LACONIC_HOSTED_CONFIG_app_console_link=https://console-sapo.laconic.com
  ```

* Start the webapp UI:

  ```bash
  laconic-so deployment --dir webapp-ui start
  ```

* Check logs:

  ```bash
  laconic-so deployment --dir webapp-ui logs webapp
  ```
```diff
@@ -9,8 +9,7 @@ services:
 CERC_LACONICD_USER_KEY: ${CERC_LACONICD_USER_KEY}
 CERC_LACONICD_BOND_ID: ${CERC_LACONICD_BOND_ID}
 CERC_LACONICD_GAS: ${CERC_LACONICD_GAS:-200000}
-CERC_LACONICD_FEES: ${CERC_LACONICD_FEES:-200alnt}
-CERC_LACONICD_GASPRICE: ${CERC_LACONICD_GASPRICE:-0.001alnt}
+CERC_LACONICD_FEES: ${CERC_LACONICD_FEES:-200000alnt}
 volumes:
 - ../config/laconic-console/cli/create-config.sh:/app/create-config.sh
 - laconic-registry-data:/laconic-registry-data
```
```diff
@@ -7,7 +7,6 @@ services:
 CERC_MONIKER: ${CERC_MONIKER:-TestnetNode}
 CERC_CHAIN_ID: ${CERC_CHAIN_ID:-laconic_9000-1}
 CERC_PEERS: ${CERC_PEERS}
-MIN_GAS_PRICE: ${MIN_GAS_PRICE:-0.001}
 CERC_LOGLEVEL: ${CERC_LOGLEVEL:-info}
 volumes:
 - laconicd-data:/root/.laconicd
```
```diff
@@ -18,7 +18,6 @@ services:
 chainId: ${CERC_LACONICD_CHAIN_ID}
 gas: ${CERC_LACONICD_GAS}
 fees: ${CERC_LACONICD_FEES}
-gasPrice: ${CERC_LACONICD_GASPRICE}
 EOF
 
 echo "Exported config to $config_file"
```
```diff
@@ -1,5 +1,5 @@
 {
-  "10 pre-paid webapp deployments": "100000",
+  "10 webapp deployments": "100000",
   "100 webapp deployments": "1000000",
   "500 webapp deployments": "5000000",
   "1000 webapp deployments": "10000000"
```
```diff
@@ -21,7 +21,6 @@ echo "Env:"
 echo "Moniker: $CERC_MONIKER"
 echo "Chain Id: $CERC_CHAIN_ID"
 echo "Persistent peers: $CERC_PEERS"
-echo "Min gas price: $MIN_GAS_PRICE"
 echo "Log level: $CERC_LOGLEVEL"
 
 NODE_HOME=/root/.laconicd
```
```diff
@@ -41,16 +40,12 @@ else
   echo "Node data dir $NODE_HOME/data already exists, skipping initialization..."
 fi
 
-# Enable cors
-sed -i 's/cors_allowed_origins.*$/cors_allowed_origins = ["*"]/' $HOME/.laconicd/config/config.toml
-
 # Update config with persistent peers
 sed -i "s/^persistent_peers *=.*/persistent_peers = \"$CERC_PEERS\"/g" $NODE_HOME/config/config.toml
 
 echo "Starting laconicd node..."
 laconicd start \
   --api.enable \
-  --minimum-gas-prices=${MIN_GAS_PRICE}alnt \
   --rpc.laddr="tcp://0.0.0.0:26657" \
   --gql-playground --gql-server \
   --log_level $CERC_LOGLEVEL \
```
````diff
@@ -17,13 +17,13 @@ Instructions for running laconic registry CLI and console
 * Clone required repositories:
 
   ```bash
-  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-console setup-repositories --pull
+  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-console setup-repositories
   ```
 
 * Build the container images:
 
   ```bash
-  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-console build-containers --force-rebuild
+  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/laconic-console build-containers
   ```
 
 This should create the following docker images locally:
````
```diff
@@ -83,14 +83,9 @@ Instructions for running laconic registry CLI and console
 # Gas limit for txs (default: 200000)
 CERC_LACONICD_GAS=
 
-# Max fees for txs (default: 200alnt)
+# Max fees for txs (default: 200000alnt)
 CERC_LACONICD_FEES=
 
-# Gas price to use for txs (default: 0.001alnt)
-# Use for auto fees calculation, gas and fees not required to be set in that case
-# Reference: https://git.vdb.to/cerc-io/laconic-registry-cli#gas-and-fees
-CERC_LACONICD_GASPRICE=
-
 # Console configuration
 
 # Laconicd (hosted) GQL endpoint (default: http://localhost:9473)
```
```diff
@@ -2,8 +2,8 @@ version: "1.0"
 name: laconic-console
 description: "Laconic registry CLI and console"
 repos:
-  - git.vdb.to/cerc-io/laconic-registry-cli@v0.2.10
-  - git.vdb.to/cerc-io/laconic-console@v0.2.5
+  - git.vdb.to/cerc-io/laconic-registry-cli
+  - git.vdb.to/cerc-io/laconic-console
 containers:
   - cerc/laconic-registry-cli
   - cerc/webapp-base
```
```diff
@@ -2,8 +2,8 @@ version: "1.0"
 name: laconic-shopify
 description: "Service that integrates a Shopify app with the Laconic wallet."
 repos:
-  - git.vdb.to/cerc-io/shopify@v0.1.0
-  - git.vdb.to/cerc-io/laconic-faucet@v0.1.0-shopify
+  - git.vdb.to/cerc-io/shopify
+  - git.vdb.to/cerc-io/laconic-faucet@shopify
 containers:
   - cerc/laconic-shopify
   - cerc/laconic-shopify-faucet
```
````diff
@@ -122,9 +122,6 @@ Instructions for running a laconicd testnet full node and joining as a validator
 # Output log level (default: info)
 CERC_LOGLEVEL=
 
-# Minimum gas price in alnt to accept for transactions (default: "0.001")
-MIN_GAS_PRICE=
-
 ```
 
 * Inside the `laconic-console-deployment` deployment directory, open `config.env` file and set following env variables:
````
```diff
@@ -146,14 +143,9 @@ Instructions for running a laconicd testnet full node and joining as a validator
 # Gas limit for txs (default: 200000)
 CERC_LACONICD_GAS=
 
-# Max fees for txs (default: 200alnt)
+# Max fees for txs (default: 200000alnt)
 CERC_LACONICD_FEES=
 
-# Gas price to use for txs (default: 0.001alnt)
-# Use for auto fees calculation, gas and fees not required to be set in that case
-# Reference: https://git.vdb.to/cerc-io/laconic-registry-cli#gas-and-fees
-CERC_LACONICD_GASPRICE=
-
 # Console configuration
 
 # Laconicd (hosted) GQL endpoint (default: http://localhost:9473)
```
@@ -2,7 +2,7 @@ version: "1.0"
 name: testnet-laconicd
 description: "Laconicd full node"
 repos:
-  - git.vdb.to/cerc-io/laconicd@v0.1.9
+  - git.vdb.to/cerc-io/laconicd
 containers:
   - cerc/laconicd
 pods:
@@ -14,8 +14,6 @@
 
 * On deployment machine:
 
-  * User with passwordless sudo: see [setup](https://git.vdb.to/cerc-io/testnet-ops/src/branch/main/user-setup/README.md#user-setup)
-
   * laconic-so: see [installation](https://git.vdb.to/cerc-io/testnet-ops/src/branch/main/stack-orchestrator-setup/README.md#setup-stack-orchestrator)
 
 ## Setup
@@ -30,68 +28,43 @@
   ```bash
   wget -O nitro-vars.yml https://git.vdb.to/cerc-io/testnet-laconicd-stack/raw/branch/main/ops/stage2/nitro-node-config.yml
+
+  # Expected variables in the fetched config file:
+
+  # nitro_chain_url: ""
+  # na_address: ""
+  # ca_address: ""
+  # vpa_address: ""
+  # bridge_nitro_address: ""
+  # nitro_l1_bridge_multiaddr: ""
+  # nitro_l2_bridge_multiaddr: ""
   ```
 
 * Fetch required asset addresses:
 
   ```bash
   wget -O assets.json https://git.vdb.to/cerc-io/testnet-laconicd-stack/raw/branch/main/ops/stage2/assets.json
+
+  # Example output:
+  # {
+  #   "1212": [
+  #     {
+  #       "name": "geth",
+  #       "chainId": "1212",
+  #       "contracts": {
+  #         "TestToken": {
+  #           "address": "0xCf7Ed3AccA5a467e9e704C703E8D87F634fB0Fc9"
+  #         },
+  #         "TestToken2": {
+  #           "address": "0xDc64a140Aa3E981100a9becA4E685f962f0cF6C9"
+  #         }
+  #       }
+  #     }
+  #   ]
+  # }
   ```
 
-* Ask testnet operator to send L1 tokens and ETH to your chain address
-
-  * [README for transferring tokens](./ops/nitro-token-ops.md#transfer-deployed-tokens-to-given-address)
-
-  * [README for transferring ETH](./ops/nitro-token-ops.md#transfer-eth)
-
-* Check balance of your tokens once they are transferred:
-
-  ```bash
-  # Note: Account address should be without "0x"
-  export ACCOUNT_ADDRESS="<account-address>"
-
-  export GETH_CHAIN_ID="1212"
-  export GETH_CHAIN_URL="https://fixturenet-eth.laconic.com"
-
-  export ASSET_ADDRESS_1=$(jq -r --arg chainId "$GETH_CHAIN_ID" '.[$chainId][0].contracts.TestToken.address' assets.json)
-  export ASSET_ADDRESS_2=$(jq -r --arg chainId "$GETH_CHAIN_ID" '.[$chainId][0].contracts.TestToken2.address' assets.json)
-
-  # Check balance of eth account
-  curl -X POST $GETH_CHAIN_URL \
-    -H "Content-Type: application/json" \
-    -d '{
-      "jsonrpc":"2.0",
-      "method":"eth_getBalance",
-      "params":["'"$ACCOUNT_ADDRESS"'", "latest"],
-      "id":1
-    }'
-
-  # Check balance of first asset address
-  curl -X POST $GETH_CHAIN_URL \
-    -H "Content-Type: application/json" \
-    -d '{
-      "jsonrpc":"2.0",
-      "method":"eth_call",
-      "params":[{
-        "to": "'"$ASSET_ADDRESS_1"'",
-        "data": "0x70a08231000000000000000000000000'"$ACCOUNT_ADDRESS"'"
-      }, "latest"],
-      "id":1
-    }'
-
-  # Check balance of second asset address
-  curl -X POST $GETH_CHAIN_URL \
-    -H "Content-Type: application/json" \
-    -d '{
-      "jsonrpc":"2.0",
-      "method":"eth_call",
-      "params":[{
-        "to": "'"$ASSET_ADDRESS_2"'",
-        "data": "0x70a08231000000000000000000000000'"$ACCOUNT_ADDRESS"'"
-      }, "latest"],
-      "id":1
-    }'
-  ```
+* TODO: Get L1 tokens on your address
 
 * Edit `nitro-vars.yml` and add the following variables:
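The `data` payload in the removed `eth_call` snippets above is the 4-byte ERC-20 `balanceOf(address)` selector (`0x70a08231`) followed by the 20-byte account address left-padded to 32 bytes. A minimal sketch of assembling that calldata (the address value is illustrative):

```shell
#!/bin/sh
# Assemble ERC-20 balanceOf(address) calldata for eth_call.
# Note: the account address is used without the "0x" prefix, as in the docs above.
ACCOUNT_ADDRESS="f0e6a85c6d23aca9ff1b83477d426ed26f218185"   # illustrative

SELECTOR="0x70a08231"                # first 4 bytes of keccak256("balanceOf(address)")
PADDING="000000000000000000000000"   # left-pads the 20-byte address to 32 bytes

CALLDATA="${SELECTOR}${PADDING}${ACCOUNT_ADDRESS}"

echo "$CALLDATA"
```

The resulting string is always 74 characters: `0x`, 8 selector hex chars, 24 padding zeros, and 40 address hex chars.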
@@ -117,27 +90,36 @@
   nitro_l2_ext_multiaddr: ""
   ```
 
-* Edit the `setup-vars.yml` to update the target directory:
+* Update the target dir in `setup-vars.yml`:
 
   ```bash
-  # Set absolute path to desired deployments directory (under your user)
-  # Example: /home/dev/nitro-node-deployments
-  ...
-  nitro_directory: <path-to-deployments-dir>
-  ...
+  # Set path to desired deployments dir (under your user)
+  DEPLOYMENTS_DIR=<path-to-deployments-dir>
+
+  sed -i "s|^nitro_directory:.*|nitro_directory: $DEPLOYMENTS_DIR/nitro-node|" setup-vars.yml
+
+  # Will create deployments at $DEPLOYMENTS_DIR/nitro-node/l1-nitro-deployment and $DEPLOYMENTS_DIR/nitro-node/l2-nitro-deployment
   ```
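The `sed` edit above can be dry-run against a scratch copy of `setup-vars.yml` to confirm the value it writes (the path and file contents here are illustrative):

```shell
#!/bin/sh
# Dry-run of the documented sed edit on a scratch setup-vars.yml (GNU sed -i)
DEPLOYMENTS_DIR=/home/dev/nitro-node-deployments   # illustrative path

cat > /tmp/setup-vars.yml <<'EOF'
target_host: "localhost"
nitro_directory: ""
EOF

sed -i "s|^nitro_directory:.*|nitro_directory: $DEPLOYMENTS_DIR/nitro-node|" /tmp/setup-vars.yml

grep '^nitro_directory:' /tmp/setup-vars.yml
```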
 
 ## Run Nitro Nodes
 
-Nitro nodes can be set up on a target machine using Ansible:
+Nitro nodes can be run using Ansible either locally or on a remote machine; follow the corresponding steps for your setup
+
+### On Local Host
+
+* Setup and run a Nitro node (L1+L2) by executing the `run-nitro-nodes.yml` Ansible playbook:
+
+  ```bash
+  LANG=en_US.utf8 ansible-playbook -i localhost, --connection=local run-nitro-nodes.yml --extra-vars='{ "target_host": "localhost"}' --user $USER
+  ```
+
+### On Remote Host
 * In `testnet-ops/nitro-nodes-setup`, create a new `hosts.ini` file:
 
   ```bash
   cp ../hosts.example.ini hosts.ini
   ```
 
 * Edit the [`hosts.ini`](./hosts.ini) file to run the playbook on a remote machine:
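After the substitutions described below, the inventory is a standard Ansible INI file; a filled-in sketch (the alias, IP, and user are placeholders, and the exact template lives in `hosts.example.ini`):

```ini
; Illustrative hosts.ini; all values are placeholders
[nitro_host]
my-nitro-host ansible_host=192.0.2.10 ansible_user=dev
```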
@@ -149,12 +131,12 @@ Nitro nodes can be set up on a target machine using Ansible:
   * Replace `<deployment_host>` with `nitro_host`
   * Replace `<host_name>` with the alias of your choice
   * Replace `<target_ip>` with the IP address or hostname of the target machine
-  * Replace `<ssh_user>` with the username of the user that you set up on target machine (e.g. dev, ubuntu)
+  * Replace `<ssh_user>` with the SSH username (e.g., dev, ubuntu)
 
 * Verify that you are able to connect to the host using the following command
 
   ```bash
-  ansible all -m ping -i hosts.ini
+  ansible all -m ping -i hosts.ini -k
 
   # If using password based authentication, enter the ssh password on prompt; otherwise, leave it blank
@@ -169,10 +151,13 @@ Nitro nodes can be set up on a target machine using Ansible:
   # }
   ```
 
-* Execute the `run-nitro-nodes.yml` Ansible playbook to setup and run a Nitro node (L1+L2):
+* Execute the `run-nitro-nodes.yml` Ansible playbook for remote deployment:
 
   ```bash
-  LANG=en_US.utf8 ansible-playbook -i hosts.ini run-nitro-nodes.yml --extra-vars='{ "target_host": "nitro_host"}' --user $USER
+  LANG=en_US.utf8 ansible-playbook -i hosts.ini run-nitro-nodes.yml --extra-vars='{ "target_host": "nitro_host"}' --user $USER -kK
+
+  # If using password based authentication, enter the ssh password on prompt; otherwise, leave it blank
+  # Enter the sudo password as "BECOME password" on prompt
   ```
 
 ### Check Deployment Status
@@ -180,7 +165,9 @@ Nitro nodes can be set up on a target machine using Ansible:
 * Run the following commands on the deployment machine:
 
   ```bash
-  cd <path-to-deployments-dir>
+  DEPLOYMENTS_DIR=<path-to-deployments-dir>
+
+  cd $DEPLOYMENTS_DIR/nitro-node
 
   # Check the logs, ensure that the nodes are running
   laconic-so deployment --dir l1-nitro-deployment logs nitro-node -f
@@ -195,7 +182,7 @@ Nitro nodes can be set up on a target machine using Ansible:
 * Get your Nitro node's info:
 
   ```bash
-  laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-node-info -p 4005 -h nitro-node"
+  laconic-so deployment --dir l1-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-node-info -p 4005 -h nitro-node"
 
   # Expected output:
   # {
@@ -215,18 +202,18 @@ Create a ledger channel with the bridge on L1 which is mirrored on L2
 * Set required variables:
 
   ```bash
-  cd <path-to-deployments-dir>
+  DEPLOYMENTS_DIR=<path-to-deployments-dir>
+
+  cd $DEPLOYMENTS_DIR/nitro-node
 
   export BRIDGE_NITRO_ADDRESS=$(yq eval '.bridge_nitro_address' nitro-node-config.yml)
 
-  export GETH_CHAIN_ID="1212"
+  export CHAIN_ID="1212"
 
   # Get asset addresses from assets.json file
-  export ASSET_ADDRESS_1=$(jq -r --arg chainId "$GETH_CHAIN_ID" '.[$chainId][0].contracts.TestToken.address' assets.json)
-  export ASSET_ADDRESS_2=$(jq -r --arg chainId "$GETH_CHAIN_ID" '.[$chainId][0].contracts.TestToken2.address' assets.json)
+  export ASSET_ADDRESS_1=$(jq -r --arg chainId "$CHAIN_ID" '.[$chainId][0].contracts.TestToken.address' assets.json)
+  export ASSET_ADDRESS_2=$(jq -r --arg chainId "$CHAIN_ID" '.[$chainId][0].contracts.TestToken2.address' assets.json)
   ```
 
 * Check that you have no existing channels on L1 or L2:
 
   ```bash
   laconic-so deployment --dir l1-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-all-ledger-channels -p 4005 -h nitro-node"
@@ -236,8 +223,6 @@ Create a ledger channel with the bridge on L1 which is mirrored on L2
   # []
   ```
 
-* Ensure that your account has enough balance of tokens from `assets.json`
-
 * Create a ledger channel between your L1 Nitro node and Bridge with custom asset:
 
   ```bash
@@ -340,7 +325,9 @@ Perform payments using a virtual payment channel created with another Nitro node
 * Switch to the `nitro-node` directory:
 
   ```bash
-  cd <path-to-deployments-dir>
+  DEPLOYMENTS_DIR=<path-to-deployments-dir>
+
+  cd $DEPLOYMENTS_DIR/nitro-node
   ```
 
 * Check status of the mirrored channel on L2:
@@ -379,6 +366,9 @@ Perform payments using a virtual payment channel created with another Nitro node
   ```bash
   export BRIDGE_NITRO_ADDRESS=$(yq eval '.bridge_nitro_address' nitro-node-config.yml)
 
+  # Counterparty to create the payment channel with
+  export COUNTER_PARTY_ADDRESS=<counterparty-nitro-address>
+
   # Mirrored channel on L2
   export L2_CHANNEL_ID=<l2-channel-id>
 
@@ -386,21 +376,6 @@ Perform payments using a virtual payment channel created with another Nitro node
   export PAYMENT_CHANNEL_AMOUNT=500
   ```
 
-* Set counterparty address
-
-  ```bash
-  export COUNTER_PARTY_ADDRESS=<counterparty-nitro-address>
-  ```
-
-* Get the nitro address of the counterparty's node with whom you want to create the payment channel
-
-  * To get the nitro address of your own node:
-
-    ```bash
-    laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-node-info -p 4005 -h nitro-node"
-
-    # `SCAddress` -> nitro address
-    ```
-
 * Check for existing payment channels for the L2 channel:
 
   ```bash
@@ -472,8 +447,6 @@ Perform payments using a virtual payment channel created with another Nitro node
 
 * Check L2 mirrored channel's status after the virtual payment channel is closed:
 
-  * This can be checked by both nodes
-
   ```bash
   laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-all-ledger-channels -p 4005 -h nitro-node"
@@ -516,7 +489,9 @@ Perform swaps using a swap channel created with another Nitro node over the mirrored channel
 * Switch to the `nitro-node` directory:
 
   ```bash
-  cd <path-to-deployments-dir>
+  DEPLOYMENTS_DIR=<path-to-deployments-dir>
+
+  cd $DEPLOYMENTS_DIR/nitro-node
   ```
 
 * Check status of the mirrored channel on L2:
@@ -555,28 +530,14 @@ Perform swaps using a swap channel created with another Nitro node over the mirrored channel
   ```bash
   export BRIDGE_NITRO_ADDRESS=$(yq eval '.bridge_nitro_address' nitro-node-config.yml)
 
-  export GETH_CHAIN_ID="1212"
+  # Counterparty to create the swap channel with
+  export COUNTER_PARTY_ADDRESS=<counterparty-nitro-address>
+
+  export CHAIN_ID="1212"
 
   # Get asset addresses from assets.json file
-  export ASSET_ADDRESS_1=$(jq -r --arg chainId "$GETH_CHAIN_ID" '.[$chainId][0].contracts.TestToken.address' assets.json)
-  export ASSET_ADDRESS_2=$(jq -r --arg chainId "$GETH_CHAIN_ID" '.[$chainId][0].contracts.TestToken2.address' assets.json)
+  export ASSET_ADDRESS_1=$(jq -r --arg chainId "$CHAIN_ID" '.[$chainId][0].contracts.TestToken.address' assets.json)
+  export ASSET_ADDRESS_2=$(jq -r --arg chainId "$CHAIN_ID" '.[$chainId][0].contracts.TestToken2.address' assets.json)
   ```
 
-* Set counterparty address
-
-  ```bash
-  export COUNTER_PARTY_ADDRESS=<counterparty-nitro-address>
-  ```
-
-* Get the nitro address of the counterparty's node with whom you want to create the swap channel
-
-  * To get the nitro address of your own node:
-
-    ```bash
-    laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-node-info -p 4005 -h nitro-node"
-
-    # `SCAddress` -> nitro address
-    ```
-
 * Create swap channel:
 
   ```bash
@@ -623,90 +584,6 @@ Perform swaps using a swap channel created with another Nitro node over the mirrored channel
 
 ### Performing swaps
 
-* Ensure that environment variables for asset addresses are set (should be done by both parties):
-
-  ```bash
-  export GETH_CHAIN_ID="1212"
-
-  # Get asset addresses from assets.json file
-  export ASSET_ADDRESS_1=$(jq -r --arg chainId "$GETH_CHAIN_ID" '.[$chainId][0].contracts.TestToken.address' assets.json)
-  export ASSET_ADDRESS_2=$(jq -r --arg chainId "$GETH_CHAIN_ID" '.[$chainId][0].contracts.TestToken2.address' assets.json)
-  ```
-
-* Get all active swap channels for a specific mirrored ledger channel (should be done by both parties)
-
-  * To get mirrored ledger channels:
-
-    ```bash
-    laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-all-ledger-channels -p 4005 -h nitro-node"
-
-    # Example output:
-    # [
-    #   {
-    #     "ID": "0xb34210b763d4fdd534190ba11886ad1daa1e411c87be6fd20cff74cd25077c46",
-    #     "Status": "Open",
-    #     "Balances": [
-    #       {
-    #         "AssetAddress": "0xa4351114dae1abeb2d552d441c9733c72682a45d",
-    #         "Me": "0x075400039e303b3fb46c0cff0404c5fa61947c05",
-    #         "Them": "0xf0e6a85c6d23aca9ff1b83477d426ed26f218185",
-    #         "MyBalance": 1000n,
-    #         "TheirBalance": 1000n
-    #       },
-    #       {
-    #         "AssetAddress": "0x314e43f9825b10961859c2a62c2de6a765c1c1f1",
-    #         "Me": "0x075400039e303b3fb46c0cff0404c5fa61947c05",
-    #         "Them": "0xf0e6a85c6d23aca9ff1b83477d426ed26f218185",
-    #         "MyBalance": 1000n,
-    #         "TheirBalance": 1000n
-    #       }
-    #     ],
-    #     "ChannelMode": "Open"
-    #   }
-    # ]
-    ```
-
-  * Export ledger channel ID:
-
-    ```bash
-    export LEDGER_CHANNEL_ID=
-    ```
-
-  * To get swap channels for a ledger channel:
-
-    ```bash
-    laconic-so deployment --dir l2-nitro-deployment exec nitro-rpc-client "nitro-rpc-client get-swap-channels-by-ledger $LEDGER_CHANNEL_ID -p 4005 -h nitro-node"
-
-    # Example Output:
-    # [
-    #   {
-    #     ID: '0x1dbd58d314f123f4b0f4147eee7fd92fa523ba7082d8a75b846f6d1189e2f0e9',
-    #     Status: 'Open',
-    #     Balances: [
-    #       {
-    #         AssetAddress: '0xa4351114dae1abeb2d552d441c9733c72682a45d',
-    #         Me: '0x075400039e303b3fb46c0cff0404c5fa61947c05',
-    #         Them: '0xd0ea8b27591b1d070cccd4d30b8d408fe794fdfc',
-    #         MyBalance: 100,
-    #         TheirBalance: 100n
-    #       },
-    #       {
-    #         AssetAddress: '0x314e43f9825b10961859c2a62c2de6a765c1c1f1',
-    #         Me: '0x075400039e303b3fb46c0cff0404c5fa61947c05',
-    #         Them: '0xd0ea8b27591b1d070cccd4d30b8d408fe794fdfc',
-    #         MyBalance: 100,
-    #         TheirBalance: 100
-    #       }
-    #     ]
-    #   }
-    # ]
-    ```
-
-  * Export swap channel ID:
-
-    ```bash
-    export SWAP_CHANNEL_ID=
-    ```
-
 * One of the participants can initiate the swap and the other one will either accept it or reject it
 
 * For initiating the swap:
@@ -851,36 +728,12 @@ Perform swaps using a swap channel created with another Nitro node over the mirrored channel
   # ]
   ```
 
-## Update nitro nodes
-
-* Switch to deployments dir:
-
-  ```bash
-  cd $DEPLOYMENTS_DIR/nitro-node
-  ```
-
-* Rebuild containers:
-
-  ```bash
-  laconic-so --stack ~/cerc/nitro-stack/stack-orchestrator/stacks/nitro-node build-containers --force-rebuild
-  ```
-
-* Restart the nodes:
-
-  ```bash
-  laconic-so deployment --dir l1-nitro-deployment stop
-  laconic-so deployment --dir l1-nitro-deployment start
-
-  laconic-so deployment --dir l2-nitro-deployment stop
-  laconic-so deployment --dir l2-nitro-deployment start
-  ```
-
 ## Clean up
 
 * Switch to deployments dir:
 
   ```bash
-  cd <path-to-deployments-dir>
+  cd $DEPLOYMENTS_DIR/nitro-node
   ```
 
 * Stop all Nitro services running in the background:
@@ -922,7 +775,7 @@ Perform swaps using a swap channel created with another Nitro node over the mirrored channel
 * Stop the deployment:
 
   ```bash
-  cd <path-to-deployments-dir>
+  cd $DEPLOYMENTS_DIR/nitro-node
 
   laconic-so deployment --dir l1-nitro-deployment stop
   ```
@@ -63,8 +63,6 @@
 
   ```bash
   laconic-so fetch-stack git.vdb.to/cerc-io/testnet-laconicd-stack
-
-  # See stack documentation stack-orchestrator/stacks/testnet-laconicd/README.md for more details
   ```
 
 * Clone required repositories:
@@ -128,8 +126,6 @@
 * Inside the `testnet-laconicd-deployment` deployment directory, open `config.env` file and set following env variables:
 
   ```bash
-  CERC_CHAIN_ID=laconic_9000-1
-
   # Comma separated list of nodes to keep persistent connections to
   # Example: "node-1-id@laconicd.laconic.com:26656"
   # Use the provided node id
@@ -201,15 +197,12 @@ laconic-so deployment --dir testnet-laconicd-deployment start
 
 * From wallet, approve and send transaction to stage1 laconicd chain
 
-<a name="create-validator-using-cli"></a>
-
 * Alternatively, create a validator using the laconicd CLI:
 
   * Import a key pair:
 
     ```bash
     KEY_NAME=alice
-    CHAIN_ID=laconic_9000-1
 
     # Restore existing key with mnemonic seed phrase
     # You will be prompted to enter mnemonic seed
 
@@ -250,7 +243,7 @@ laconic-so deployment --dir testnet-laconicd-deployment start
     ```bash
     laconic-so deployment --dir testnet-laconicd-deployment exec laconicd "laconicd tx staking create-validator my-validator.json \
     --fees 500000alnt \
-    --chain-id=$CHAIN_ID \
+    --chain-id=laconic_9000-1 \
     --from $KEY_NAME"
     ```
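For reference, `my-validator.json` follows the upstream cosmos-sdk `tx staking create-validator` JSON format; a hedged sketch with placeholder values (laconicd is cosmos-sdk based, so the consensus pubkey would typically come from `laconicd tendermint show-validator`):

```json
{
  "pubkey": {"@type": "/cosmos.crypto.ed25519.PubKey", "key": "<base64-consensus-pubkey>"},
  "amount": "1000000000000alnt",
  "moniker": "my-node",
  "commission-rate": "0.1",
  "commission-max-rate": "0.2",
  "commission-max-change-rate": "0.01",
  "min-self-delegation": "1"
}
```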
@@ -287,95 +280,6 @@ laconic-so deployment --dir testnet-laconicd-deployment start
   sudo rm -r testnet-laconicd-deployment
   ```
 
-## Upgrade to SAPO testnet
-
-### Prerequisites
-
-* SAPO testnet (testnet2) [genesis file](./ops/stage2/genesis.json) and peers (see below)
-
-### Setup
-
-* If running, stop the stage1 node:
-
-  ```bash
-  # In dir where stage1 node deployment (`testnet-laconicd-deployment`) exists
-  TESTNET_DEPLOYMENT=$(pwd)/testnet-laconicd-deployment
-
-  laconic-so deployment --dir testnet-laconicd-deployment stop --delete-volumes
-  ```
-
-* Clone / pull the stack repo:
-
-  ```bash
-  laconic-so fetch-stack git.vdb.to/cerc-io/testnet-laconicd-stack --pull
-  ```
-
-  * Ensure you are on tag v0.1.10
-
-* Clone / pull the required repositories:
-
-  ```bash
-  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/testnet-laconicd setup-repositories --pull
-
-  # If this throws an error as a result of being already checked out to a branch/tag in a repo, remove the repositories and re-run the command
-  ```
-
-  Note: Make sure the latest `cerc-io/laconicd` changes have been pulled
-
-* Build the container images:
-
-  ```bash
-  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/testnet-laconicd build-containers --force-rebuild
-  ```
-
-  This should create the following docker images locally with latest changes:
-
-  * `cerc/laconicd`
-
-### Create a deployment
-
-* Create a deployment from spec file:
-
-  ```bash
-  laconic-so --stack ~/cerc/testnet-laconicd-stack/stack-orchestrator/stacks/testnet-laconicd deploy create --spec-file testnet-laconicd-spec.yml --deployment-dir stage-2-testnet-laconicd-deployment
-  ```
-
-* Copy over the published testnet2 genesis file (`.json`) to the data directory in the deployment (`stage-2-testnet-laconicd-deployment/data/laconicd-data/tmp-testnet2`):
-
-  ```bash
-  # Example
-  mkdir -p stage-2-testnet-laconicd-deployment/data/laconicd-data/tmp-testnet2
-  cp genesis.json stage-2-testnet-laconicd-deployment/data/laconicd-data/tmp-testnet2/genesis.json
-  ```
-
-### Configuration
-
-* Inside the `stage-2-testnet-laconicd-deployment` deployment directory, open the `config.env` file and set the following env variables:
-
-  ```bash
-  CERC_CHAIN_ID=laconic-testnet-2
-
-  CERC_PEERS="bd56622c525a4dfce1e388a7b8c0cb072200797b@5.9.80.214:26103,289f10e94156f47c67bc26a8af747a58e8014f15@148.251.49.108:26656,72cd2f50dff154408cc2c7650a94c2141624b657@65.21.237.194:26656,21322e4fa90c485ff3cb9617438deec4acfa1f0b@143.198.37.25:26656"
-
-  # A custom human readable name for this node
-  CERC_MONIKER="my-node"
-  ```
-
-### Start the deployment
-
-```bash
-laconic-so deployment --dir stage-2-testnet-laconicd-deployment start
-```
-
-See [Check status](#check-status) to follow sync status of your node
-
-See [Join as testnet validator](#create-validator-using-cli) to join as a validator using the laconicd CLI (use chain id `laconic-testnet-2`)
-
-### Clean up
-
-* Same as [Clean up](#clean-up)
-
 ## Troubleshooting
 
 * If you face any issues in the onboarding app or the web-wallet, clear your browser cache and reload