## Issue Addressed
Related to https://github.com/sigp/lighthouse/issues/4676.
Deneb-specific CI code to be removed before merging to `unstable`. Do not merge until we're ready to merge into `unstable`, as we may need to release deneb docker images before merging.
Keep in mind that most of the changes in the PR below (to `unstable`) have already
been merged to `deneb-free-blobs`, so merging `deneb-free-blobs` into `unstable` would include those changes. That is fine if the release runners are ready; otherwise we may want to exclude them before merging.
- https://github.com/sigp/lighthouse/pull/4592
## Issue Addressed
We're OOM'ing on Docker builds on the Deneb branch https://github.com/sigp/lighthouse/issues/3929
Are we ok to self-host automated docker builds?
Co-authored-by: realbigsean <seananderson33@gmail.com>
Co-authored-by: realbigsean <sean@sigmaprime.io>
Co-authored-by: antondlr <anton@delaruelle.net>
## Issue Addressed
The documentation on building from source doesn't match what the GitHub workflow actually uses.
aa5b7ef783/book/src/installation-source.md (L118-L120)
## Proposed Changes
Because the GitHub workflow uses `cross` to build from source, and that build reads a different env variable (`CROSS_FEATURES`), the feature list needs to be passed through it at compile time.
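A rough sketch of the intended invocation (the feature name and Makefile target here are illustrative, not necessarily what the workflow ends up using):
```bash
# Cross builds read CROSS_FEATURES rather than FEATURES, so the feature list has
# to be supplied through that variable at compile time.
CROSS_FEATURES=spec-minimal make build-x86_64
```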
## Additional Info
Verified that the existing `-dev` builds do not have the `minimal` spec enabled.
```bash
> docker run --rm --name node-5-cl-lighthouse sigp/lighthouse:latest-amd64-unstable-dev lighthouse --version
Lighthouse v3.4.0-aa5b7ef
BLS library: blst-portable
SHA256 hardware acceleration: true
Allocator: jemalloc
Specs: mainnet (true), minimal (false), gnosis (true)
```
## Proposed Changes
Some features are enabled/disabled with the `FEATURES` env variable. This PR introduces a pattern for building docker images based on those features, which could be useful later for publishing images with specific experimental features enabled.
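For illustration, assuming the `Dockerfile` forwards a `FEATURES` build argument into the `make` invocation (names here are placeholders):
```bash
# Build an image with the minimal spec enabled alongside the defaults.
docker build --build-arg FEATURES=spec-minimal -t lighthouse:minimal-dev .
```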
## Additional Info
We at Lodestar need `minimal` spec support for some cross-client network testing. To make it efficient on CI, we tend to use the minimal preset.
## Issue Addressed
Closes https://github.com/sigp/lighthouse/issues/3656
## Proposed Changes
* Replace `set-output` with `$GITHUB_OUTPUT` usage (see the sketch after this list)
* Avoid rate-limits when installing `protoc` by making authenticated requests (continuation of https://github.com/sigp/lighthouse/pull/3621)
* Upgrade all Ubuntu 18.04 usage to 22.04 (18.04 is end of life)
* Upgrade macOS-latest to explicit macOS-12 to silence warning
* Use `actions/checkout@v3` and `actions/cache@v3` to avoid deprecated NodeJS v12
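For reference, the `set-output` replacement looks roughly like this inside a workflow `run` step (the output name and value are made up):
```bash
# Deprecated form:
echo "::set-output name=VERSION::v4.0.1"

# Replacement: append key=value pairs to the file referenced by $GITHUB_OUTPUT.
echo "VERSION=v4.0.1" >> "$GITHUB_OUTPUT"
```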
## Additional Info
Can't silence the NodeJS warnings entirely due to https://github.com/sigp/lighthouse/issues/3705. Can fix that in future.
## Proposed Changes
Add a new Cargo compilation profile called `maxperf` which enables more aggressive compiler optimisations at the expense of compilation time.
Some rough initial benchmarks show that this can provide up to a 25% reduction to run time for CPU bound tasks like block processing: https://docs.google.com/spreadsheets/d/15jHuZe7lLHhZq9Nw8kc6EL0Qh_N_YAYqkW2NQ_Afmtk/edit
The numbers in that spreadsheet compare the `consensus-context` branch from #3604 to the same branch compiled with the `maxperf` profile using:
```
PROFILE=maxperf make install-lcli
```
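For anyone not going through the Makefile, that should be roughly equivalent to the following (assuming the profile is defined as `[profile.maxperf]` in the workspace `Cargo.toml`, typically fat LTO plus a single codegen unit per the Rust Performance Book):
```bash
# Install lcli with the custom maxperf profile, trading compile time for runtime speed.
cargo install --path lcli --profile maxperf --locked --force
```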
## Additional Info
The downsides of the maxperf profile are:
- It increases compile times substantially, which will particularly impact low-spec hardware. Compiling `lcli` is about 3x slower. Compiling Lighthouse is about 5x slower on my 5950X: 17m 38s rather than 3m 28s.
As a result I think we should not enable this everywhere by default.
- **Option 1**: enable by default for our released binaries. This gives the majority of users the fastest version of `lighthouse` possible, at the expense of slowing down our release CI. Source builds will continue to use the default `release` profile unless users opt-in to `maxperf`.
- **Option 2**: enable by default for source builds. This gives users building from source an edge, but makes them pay for it with compilation time.
I think I would prefer Option 1. I'll try doing some benchmarking to see how long a maxperf build of Lighthouse would take on GitHub actions.
Credit to Nicholas Nethercote for documenting these options in the Rust Performance Book: https://nnethercote.github.io/perf-book/build-configuration.html.
## Issue Addressed
Closes #2938
## Proposed Changes
* Build and publish images with a `-modern` suffix which enable CPU optimizations for modern hardware (see the example after this list).
* Add docs for the plethora of available images!
* Unify all the Docker workflows in `docker.yml` (including for tagged releases).
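For example (tag name illustrative), a user on a recent x86_64 CPU would pull the optimised image with:
```bash
docker pull sigp/lighthouse:latest-modern
```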
## Additional Info
The `Dockerfile` is no longer used by our Docker Hub builds, as we use `cross` and a generic approach for ARM and x86. There's a new CI job `docker-build-from-source` which tests the `Dockerfile` without publishing anything.
## Issue Addressed
Resolves: #2087
## Proposed Changes
- Add a `Dockerfile` to the `lcli` directory
- Add a GitHub Actions job to build and push an `lcli` docker image on pushes to `unstable` and `stable`
## Additional Info
It's a little awkward but `lcli` requires the full project scope so must be built:
- from the `lighthouse` dir with: `docker build -f ./lcli/Dockerfile .`
- from the `lcli` dir with: `docker build -f ./Dockerfile ../`
Didn't include `libssl-dev` or `ca-certificates`; `lcli` doesn't need these, right?
Co-authored-by: realbigsean <seananderson33@gmail.com>
Co-authored-by: Michael Sproul <micsproul@gmail.com>
Co-authored-by: Michael Sproul <michael@sigmaprime.io>
## Issue Addressed
N/A
## Proposed Changes
On any tag formatted `v*`, a full multi-arch docker build will be kicked off and automatically pushed to docker hub with the version tag.
This is a bit repetitive, because the image built will usually be the same as the image built on pushes to `stable`, but it seems like the simplest way to go about it and this will also work if we incorporate a workflow with `vX.X.X-rc` tags.
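For example, cutting a release would just be a matter of pushing a tag (version number made up):
```bash
git tag v1.1.0
git push origin v1.1.0   # kicks off the multi-arch build, pushed to Docker Hub with the version tag
```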
## Additional Info
This may also need to wait for env variable updates: https://github.com/sigp/lighthouse/pull/2135#issuecomment-754977433
Co-authored-by: realbigsean <seananderson33@gmail.com>
## Issue Addressed
Resolves #1674
## Proposed Changes
- Whenever a tag is pushed with the prefix `v` this workflow is triggered
- creates portable and non-portable binaries for linux x86_64, linux aarch64, macOS
- an attempt at using github actions caching
- signs each binary using GPG (see the sketch after this list)
- auto-generates full changelog based on commit messages since the last release
- creates a **draft** release
- hot new formatting (preview [here](https://github.com/realbigsean/lighthouse/releases/tag/v0.9.23))
- has been taking around 35 minutes
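Roughly what the signing and changelog steps do (flags and filenames here are illustrative, not the exact workflow steps):
```bash
# Detached, ASCII-armoured signature for each release artefact, using the
# GPG_PASSPHRASE secret exported into the step's environment.
echo "$GPG_PASSPHRASE" | gpg --batch --yes --pinentry-mode loopback \
    --passphrase-fd 0 --armor --detach-sign lighthouse-x86_64-unknown-linux-gnu.tar.gz

# Changelog body: commit subjects since the previous tag.
git log --pretty=format:'- %s' "$(git describe --tags --abbrev=0 HEAD^)..HEAD"
```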
## Additional Info
TODOs:
- Figure out how we should automate dockerhub's version tag.
- It'd be quickest just to tag `latest`, but we'd need to make sure the docker workflow completes before this starts
- we do the same cross-compile in the `docker` workflow, so we could try to use the same binary
- integrate a similar flow for unstable binaries (`-rc` tag?)
- improve caching, potentially use sccache
- if we start using a self-hosted runner this'll require some re-working
Need to add the following secrets to Github:
- `GPG_PASSPHRASE`
- ~~`GPG_PUBLIC_KEY`~~ hard-coded this, because it was tough to manage as a secret
- `GPG_SIGNING_KEY`
Co-authored-by: realbigsean <seananderson33@gmail.com>
## Issue Addressed
N/A
## Proposed Changes
I didn't realize the `PORTABLE` env variable is only picked up by `install` in the `Makefile`, so we are still getting `SIGILL`s:
https://github.com/sigp/lighthouse/runs/1565004525?check_suite_focus=true
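To illustrate the difference (the target and feature names are assumptions about how the portable build is selected, not a prescription):
```bash
# Picked up: the Makefile's `install` target reads PORTABLE and enables the
# portable BLST backend.
PORTABLE=true make install

# Not picked up: a direct cross build ignores PORTABLE, so the feature has to be
# requested explicitly or the resulting binary can SIGILL on older CPUs.
cross build --release -p lighthouse --features portable \
    --target aarch64-unknown-linux-gnu
```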
## Additional Info
Co-authored-by: realbigsean <seananderson33@gmail.com>
## Issue Addressed
N/A
## Proposed Changes
Add some caching to the test suite and to the aarch64 cross-compile in the docker build.
## Additional Info
Cache hits only occur if the `Cargo.lock` file is unchanged, the GitHub Actions runner OS matches, and the cache is "in scope". Some documentation on GitHub Actions cache scoping is here:
https://docs.github.com/en/free-pro-team@latest/actions/guides/caching-dependencies-to-speed-up-workflows#matching-a-cache-key
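Conceptually, the cache key behaves like this (a sketch, not the exact expression used in the workflow):
```bash
# Any change to Cargo.lock, or a different runner OS, yields a new key and
# therefore a cache miss.
CACHE_KEY="${RUNNER_OS}-cargo-$(sha256sum Cargo.lock | cut -d ' ' -f 1)"
echo "cache key: ${CACHE_KEY}"
```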
I'm not sure how frequently we'll get cache hits; I imagine only on smaller PRs or updates to the same PR. There is a cache size limit that we may end up reaching quickly, but GitHub Actions handles evictions if we go over it.
Not sure how much of an impact this will end up having but I don't really see a downside to trying it out.
Co-authored-by: realbigsean <seananderson33@gmail.com>
## Issue Addressed
N/A
## Proposed Changes
- hardcode `ubuntu-18.04` -- I don't think this was causing us issues, but GitHub Actions is in the process of migrating `ubuntu-latest` from Ubuntu 18 to 20, so just in case
- different source of emulation dependencies -> https://github.com/tonistiigi/binfmt
- this one is explicitly referenced in the `buildx` github docs
- install emulation dependencies and run `docker buildx` in the same `run` command (sketched after this list)
- enable `buildx` with `DOCKER_CLI_EXPERIMENTAL: enabled` rather than re-building it
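The combined step looks roughly like this (image tag illustrative; registry login and push omitted):
```bash
export DOCKER_CLI_EXPERIMENTAL=enabled                           # enables buildx without rebuilding the CLI
docker run --privileged --rm tonistiigi/binfmt --install arm64   # register QEMU emulation
docker buildx build --platform linux/arm64 -t sigp/lighthouse:latest-arm64-unstable .
```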
## Additional Info
N/A
Co-authored-by: realbigsean <seananderson33@gmail.com>
## Issue Addressed
Resolves #1512
## Proposed Changes
- Adds a new docker Github Actions workflow
- Removes the Dockerhub hook
- Adds a new Dockerfile for use with pre-existing cross-compiled binaries
- on pushes to `unstable`
- builds an ARM64 image and tags it `latest-arm64-unstable`
- builds an AMD64 image and tags it `latest-amd64-unstable`
- builds a multiarch image by creating a manifest list referencing the prior two images and tags it `latest-unstable`
- on pushes to `stable`
- builds an ARM64 image and tags it `latest-arm64`
- builds an AMD64 image and tags it `latest-amd64`
- builds a multiarch image by creating a manifest list referencing the prior two images and tags it `latest` (sketched below)
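The manifest-list step is essentially the following (shown for the `stable` case, with the tag names described above):
```bash
docker manifest create sigp/lighthouse:latest \
    sigp/lighthouse:latest-amd64 \
    sigp/lighthouse:latest-arm64
docker manifest push sigp/lighthouse:latest
```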
## Additional Info
- for ARM64, first `cross` is used to cross compile the `lighthouse` and `lcli` binaries, then `docker buildx` is installed to actually build the docker image for the correct target platform. The image build pretty much just copies the binaries from local into the docker image (thanks @michaelsproul :) )
- The AMD64 and ARM64 builds run in parallel; in total it's been taking around 45 minutes on a local runner
- This PR does **not** cover version tags on docker images at the moment
Co-authored-by: realbigsean <seananderson33@gmail.com>
This reverts commit 2627463366.
## Issue Addressed
This is a temporary fix for #1589, by reverting #1574. The Docker image needs to be built with `--build-arg PORTABLE=true`, and we could probably integrate that into the multi-arch build, but in the interests of expediting a fix, this PR opts for a revert.
## Issue Addressed
#1512
## Proposed Changes
Use Github Actions to automate the Docker image build, so that we can make a multi-arch image.
## Additional Info
This change will require adding the `DOCKER_USERNAME` and `DOCKER_PASSWORD` secrets in GitHub (one way to do that is sketched below). It will also require disabling the Docker Hub automated build.
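For reference, the secrets can be added via the GitHub CLI (values here are placeholders):
```bash
gh secret set DOCKER_USERNAME --body "sigp-bot"
gh secret set DOCKER_PASSWORD --body "********"
```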