Separate out preconfigured watcher dashboards and add instructions
This commit is contained in:
parent
4e3142d0b7
commit
b72e69e8ef
@@ -10,6 +10,12 @@ services:
       - grafana_storage:/var/lib/grafana
     ports:
       - "3000"
+    healthcheck:
+      test: ["CMD", "nc", "-vz", "localhost", "3000"]
+      interval: 30s
+      timeout: 5s
+      retries: 10
+      start_period: 3s
 
 volumes:
   grafana_storage:
@@ -9,6 +9,12 @@ services:
       - prometheus_data:/prometheus
     ports:
       - "9090"
+    healthcheck:
+      test: ["CMD", "nc", "-vz", "localhost", "9090"]
+      interval: 30s
+      timeout: 5s
+      retries: 10
+      start_period: 3s
     extra_hosts:
       - "host.docker.internal:host-gateway"
@@ -10,44 +10,3 @@ scrape_configs:
   - job_name: prometheus
     static_configs:
       - targets: ['localhost:9090']
-
-  - job_name: azimuth
-    metrics_path: /metrics
-    scheme: http
-    static_configs:
-      - targets: ['host.docker.internal:9000']
-        labels:
-          instance: 'azimuth'
-      - targets: ['host.docker.internal:9002']
-        labels:
-          instance: 'censures'
-      - targets: ['host.docker.internal:9004']
-        labels:
-          instance: 'claims'
-      - targets: ['host.docker.internal:9006']
-        labels:
-          instance: 'conditional_star_release'
-      - targets: ['host.docker.internal:9008']
-        labels:
-          instance: 'delegated_sending_watcher'
-      - targets: ['host.docker.internal:9010']
-        labels:
-          instance: 'ecliptic'
-      - targets: ['host.docker.internal:9012']
-        labels:
-          instance: 'linear_star_release'
-      - targets: ['host.docker.internal:9014']
-        labels:
-          instance: 'polls'
-
-  - job_name: sushi
-    metrics_path: /metrics
-    scheme: http
-    static_configs:
-      # TODO: Replace address programmatically
-      - targets: ['host.docker.internal:9016']
-        labels:
-          instance: 'sushiswap'
-      - targets: ['host.docker.internal:9018']
-        labels:
-          instance: 'merkl_sushiswap'
@@ -1 +1,94 @@
# monitoring

* Instructions to set up and run a Prometheus server and a Grafana dashboard
* See [monitoring-watchers.md](./monitoring-watchers.md) for an example usage of the stack with pre-configured dashboards for watchers

## Create a deployment

First, create a spec file for the deployment, which will map the stack's ports and volumes to the host:

```bash
laconic-so --stack monitoring deploy init --output monitoring-spec.yml
```

### Ports

Edit the `network` section of the spec file to map the container ports to the same ports on the host:

```yml
...
network:
  ports:
    prometheus:
      - '9090:9090'
    grafana:
      - '3000:3000'
...
```

### Data volumes

Container data volumes are bind-mounted to specified paths in the host filesystem.
The default setup (generated by `laconic-so deploy init`) places the volumes in the `./data` subdirectory of the deployment directory. The default mappings can be customized by editing the "spec" file generated by `laconic-so deploy init`.
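As a rough sketch, the volume mappings in the spec file might look like the following (the exact spec-file schema is an assumption; the volume names come from the stack's compose files and the paths assume the default `./data` layout):

```yml
...
volumes:
  grafana_storage: ./data/grafana_storage
  prometheus_data: ./data/prometheus_data
...
```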

---

Once you've made any needed changes to the spec file, create a deployment from it:

```bash
laconic-so --stack monitoring deploy create --spec-file monitoring-spec.yml --deployment-dir monitoring-deployment
```

## Configure

### Prometheus Config

Add the desired scrape configs to the Prometheus config file (`monitoring-deployment/config/monitoring/prometheus/prometheus.yml`) in the deployment folder; for example:

```yml
...
  - job_name: <JOB_NAME>
    metrics_path: /metrics/path
    scheme: http
    static_configs:
      - targets: ['<METRICS_ENDPOINT_HOST>:<METRICS_ENDPOINT_PORT>']
```

Note: Use `host.docker.internal` as the host to access ports on the host machine.

### Grafana Config

Place the dashboard config files (JSON) in the Grafana dashboards config directory (`monitoring-deployment/config/monitoring/grafana/dashboards`) in the deployment folder.
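For example, a custom dashboard could be staged like this (`my-dashboard.json` and its contents are placeholders for illustration; the directory path is the default deployment layout):

```shell
# Create the dashboards config directory if it doesn't exist yet
mkdir -p monitoring-deployment/config/monitoring/grafana/dashboards

# Write a minimal (hypothetical) dashboard definition
cat > my-dashboard.json <<'EOF'
{
  "title": "My Watcher Dashboard",
  "panels": []
}
EOF

# Copy it into the Grafana dashboards config directory
cp my-dashboard.json monitoring-deployment/config/monitoring/grafana/dashboards/
```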

## Start the stack

Start the deployment:

```bash
laconic-so deployment --dir monitoring-deployment start
```

* List and check the health status of all the containers using `docker ps` and wait for them to be `healthy`
* Grafana should now be visible at http://localhost:3000 with the configured dashboards

## Clean up

To stop the monitoring services running in the background, while preserving data:

```bash
# Only stop the docker containers
laconic-so deployment --dir monitoring-deployment stop

# Run 'start' to restart the deployment
```

To stop the monitoring services and also delete data:

```bash
# Stop the docker containers
laconic-so deployment --dir monitoring-deployment stop --delete-volumes

# Remove deployment directory (deployment will have to be recreated for a re-run)
rm -rf monitoring-deployment
```
stack_orchestrator/data/stacks/monitoring/monitoring-watchers.md (new file, 124 lines)
@@ -0,0 +1,124 @@
# Monitoring Watchers

Instructions to set up and run the monitoring stack with pre-configured watcher dashboards

## Create a deployment

First, create a spec file for the deployment, which will map the stack's ports and volumes to the host:

```bash
laconic-so --stack monitoring deploy init --output monitoring-watchers-spec.yml
```

### Ports

Edit the `network` section of the spec file to map the container ports to the same ports on the host:

```yml
...
network:
  ports:
    prometheus:
      - '9090:9090'
    grafana:
      - '3000:3000'
...
```

---

Once you've made any needed changes to the spec file, create a deployment from it:

```bash
laconic-so --stack monitoring deploy create --spec-file monitoring-watchers-spec.yml --deployment-dir monitoring-watchers-deployment
```

## Configure

### Prometheus Config

Add the following scrape configs to the Prometheus config file (`monitoring-watchers-deployment/config/monitoring/prometheus/prometheus.yml`) in the deployment folder, replacing the placeholder hosts and ports with the actual watcher endpoints:

```yml
...
  - job_name: azimuth
    metrics_path: /metrics
    scheme: http
    static_configs:
      - targets: ['AZIMUTH_WATCHER_HOST:AZIMUTH_WATCHER_PORT']
        labels:
          instance: 'azimuth'
      - targets: ['CENSURES_WATCHER_HOST:CENSURES_WATCHER_PORT']
        labels:
          instance: 'censures'
      - targets: ['CLAIMS_WATCHER_HOST:CLAIMS_WATCHER_PORT']
        labels:
          instance: 'claims'
      - targets: ['CONDITIONAL_STAR_RELEASE_WATCHER_HOST:CONDITIONAL_STAR_RELEASE_WATCHER_PORT']
        labels:
          instance: 'conditional_star_release'
      - targets: ['DELEGATED_SENDING_WATCHER_HOST:DELEGATED_SENDING_WATCHER_PORT']
        labels:
          instance: 'delegated_sending_watcher'
      - targets: ['ECLIPTIC_WATCHER_HOST:ECLIPTIC_WATCHER_PORT']
        labels:
          instance: 'ecliptic'
      - targets: ['LINEAR_STAR_WATCHER_HOST:LINEAR_STAR_WATCHER_PORT']
        labels:
          instance: 'linear_star_release'
      - targets: ['POLLS_WATCHER_HOST:POLLS_WATCHER_PORT']
        labels:
          instance: 'polls'

  - job_name: sushi
    metrics_path: /metrics
    scheme: http
    static_configs:
      - targets: ['SUSHISWAP_WATCHER_HOST:SUSHISWAP_WATCHER_PORT']
        labels:
          instance: 'sushiswap'
      - targets: ['MERKLE_SUSHISWAP_WATCHER_HOST:MERKLE_SUSHISWAP_WATCHER_PORT']
        labels:
          instance: 'merkl_sushiswap'
```
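One way to fill in a placeholder is with `sed`. This is a sketch only: the substituted `host.docker.internal:9000` endpoint is an assumed example value, and the block creates a minimal stand-in config file so the command is self-contained:

```shell
# Path of the Prometheus config in the default deployment layout
CONFIG=monitoring-watchers-deployment/config/monitoring/prometheus/prometheus.yml

# (For illustration only: create a minimal config containing the placeholder.
# In a real deployment this file already exists.)
mkdir -p "$(dirname "$CONFIG")"
printf "      - targets: ['AZIMUTH_WATCHER_HOST:AZIMUTH_WATCHER_PORT']\n" > "$CONFIG"

# Replace the azimuth placeholder with an actual endpoint (assumed value)
sed -i.bak \
  "s|AZIMUTH_WATCHER_HOST:AZIMUTH_WATCHER_PORT|host.docker.internal:9000|" \
  "$CONFIG"
```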
|
||||||
|
|
||||||
|
### Grafana Config
|
||||||
|
|
||||||
|
In the deployment folder, copy over the pre-configured watcher dashboard JSON files to grafana dashboards config directory:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
cp -r monitoring-watchers-deployment/config/monitoring/grafana/watcher-dashboards/* monitoring-watchers-deployment/config/monitoring/grafana/dashboards/
|
||||||
|
```

## Start the stack

Start the deployment:

```bash
laconic-so deployment --dir monitoring-watchers-deployment start
```

* List and check the health status of all the containers using `docker ps` and wait for them to be `healthy`
* Grafana should now be visible at http://localhost:3000 with the configured watcher dashboards

## Clean up

To stop the monitoring services running in the background, while preserving data:

```bash
# Only stop the docker containers
laconic-so deployment --dir monitoring-watchers-deployment stop

# Run 'start' to restart the deployment
```

To stop the monitoring services and also delete data:

```bash
# Stop the docker containers
laconic-so deployment --dir monitoring-watchers-deployment stop --delete-volumes

# Remove deployment directory (deployment will have to be recreated for a re-run)
rm -rf monitoring-watchers-deployment
```