- Adds `Dockerfile_alpine`, which copies the compiler from the scratch
  step into an Alpine image (a sketch of the idea follows the tag list)
- Updates `docker_build.sh` to build the Alpine image
- Updates `docker_deploy.sh` to tag and push the Alpine images
- Updates `docker_deploy_manual.sh` to tag and push the Alpine images
The tags that are pushed are as follows:
- Scratch
  - ethereum/solc:stable
  - ethereum/solc:0.5.1
  - ethereum/solc:nightly
  - ethereum/solc:nightly-0.5.1-bc7cb301e3d71756c8fbefe888aca53433302117
- Alpine
  - ethereum/solc:stable-alpine
  - ethereum/solc:0.5.1-alpine
  - ethereum/solc:nightly-alpine
  - ethereum/solc:nightly-alpine-0.5.1-bc7cb301e3d71756c8fbefe888aca53433302117
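For illustration, a minimal sketch of what such a `Dockerfile_alpine` could look like, assuming the statically linked compiler sits at `/usr/bin/solc` in the scratch-based image and that image is available under a local tag such as `ethereum/solc:build` (both the path and the tag are assumptions, not taken from the actual scripts):

```dockerfile
# Name the previously built scratch-based image as a stage to copy from
# (the tag is hypothetical).
FROM ethereum/solc:build AS compiler

# Minimal Alpine runtime image that contains only the compiler binary.
FROM alpine:latest
COPY --from=compiler /usr/bin/solc /usr/bin/solc
ENTRYPOINT ["/usr/bin/solc"]
```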
* Implicitly also allows concurrent runs.
* Properly clean up any working files created during runtime.
* Properly clean up upon signals.
* Allow early-abort during cmdline tests without leaking processes.
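A rough sketch of the cleanup pattern this describes, assuming a dedicated temporary working directory per run; the variable names and trap wiring are illustrative rather than the script's actual code:

```bash
#!/usr/bin/env bash
set -e

# A unique working directory per run, so concurrent runs do not interfere.
WORKDIR=$(mktemp -d)

cleanup() {
    # Remove any working files created during runtime.
    rm -rf "$WORKDIR"
}

# Run the cleanup on normal exit as well as on signals (e.g. an early Ctrl-C),
# so that aborted runs do not leave files behind.
trap cleanup EXIT INT TERM
```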
Run codespell against the documentation and during the Linux test run.
Use the codespell_whitelist.txt dictionary to whitelist words that
should not be considered misspelled.
The whitelist currently contains "iff" and "nd".
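An invocation along these lines would apply the whitelist; the exact location of the whitelist file in the repository is an assumption here:

```bash
# --ignore-words points codespell at the list of words to accept as correctly spelled.
codespell --ignore-words scripts/codespell_whitelist.txt docs/
```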
Refs: #4442
In the documentation, the usage examples for isolate_tests.py are shown with single files, but that currently does not work. The script only works for folders or for wildcards that expand to more than one file, since it relies on os.walk inside a loop, which only covers those cases.
This proposes a simple fix.
The core functionality for extracting tests from files is factored out into a new function called `extract_and_write`.
If the program receives a single file, `extract_and_write` is called once; this also works when `docs` is specified.
If the program receives a path or a wildcard, it works as before.
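A rough sketch of the refactoring described here, handling one path argument for simplicity; the extraction logic and all helper names other than `extract_and_write` are placeholders, not the script's actual implementation:

```python
import os
import sys


def extract_and_write(path, destination):
    # Placeholder for the core extraction logic: read a single source file
    # and write the test cases found in it to the destination directory.
    with open(path, encoding="utf8") as f:
        content = f.read()
    print(f"extracted tests from {path} ({len(content)} bytes) into {destination}")


def main(argv):
    path = argv[1]
    destination = argv[2] if len(argv) > 2 else "."

    if os.path.isfile(path):
        # A single file was given on the command line: handle it directly
        # instead of relying on os.walk (which only covers directories).
        extract_and_write(path, destination)
    else:
        # A directory (or expanded wildcard) was given: walk it as before
        # and call the same helper for every file found.
        for root, _, filenames in os.walk(path):
            for name in filenames:
                extract_and_write(os.path.join(root, name), destination)


if __name__ == "__main__":
    main(sys.argv)
```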
This greatly reduces the size of the final docker image with the help of
multi-stage docker builds.
With that change, we can also make the build stage readable and maintainable
again by splitting it up into multiple RUN statements, without needing to
clean up temporary objects.
The dependency installation has been placed above the primary COPY statement
so that the dependencies are not rebuilt each time the docker image is
rebuilt (for example due to code changes).
The solc compilation itself is now parallelized to the CPU core count, which
speeds up builds on docker build systems with more cores available.
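A condensed sketch of the layering described above; the base image, dependency packages, build commands, and paths are stand-ins, not the actual Dockerfile content:

```dockerfile
# --- Build stage ---
FROM alpine AS build

# Install build dependencies before copying the sources, so this layer
# stays cached when only the source code changes.
RUN apk add --no-cache build-base cmake boost-dev

# Copy the sources afterwards; code changes only invalidate the layers
# from this point on.
COPY . /solidity
WORKDIR /solidity/build

# Separate RUN statements keep the stage readable; temporary build objects
# never reach the final image, so no manual cleanup is needed.
RUN cmake ..
# Parallelize compilation to the number of available CPU cores.
RUN make -j"$(nproc)"

# --- Final stage ---
# Only the compiled binary is copied over, keeping the final image small.
FROM scratch
COPY --from=build /solidity/build/solc/solc /usr/bin/solc
ENTRYPOINT ["/usr/bin/solc"]
```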
Future Notes:
-------------
We could further improve the Dockerfile by explicitly adding only the
directories this docker build is interested in (such as the solc source code
exclusively).
One may also want to use the build stage for automated testing (CI) by
running soltest and cmdlineTests.sh right before finalizing the image.