#!/usr/bin/env python3

# The purpose of this script is to compare a list of file names that were accessed during testing
# against all the file names in the consensus-spec-tests repository. It then checks to see which files
# were not accessed and returns an error if any non-intentionally-ignored files are detected.
#
# The ultimate goal is to detect any accidentally-missed spec tests.

import os
import re
import sys

# First argument should be the path to a file which contains a list of accessed file names.
accessed_files_filename = sys.argv[1]

# Second argument should be the path to the consensus-spec-tests directory.
tests_dir_filename = sys.argv[2]
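
# Typical invocation (a sketch; the concrete file names below are illustrative assumptions,
# not defined by this script):
#
#   python3 <this_script> accessed_files.txt ./consensus-spec-tests
#
# Each line of the accessed-files list is expected to be a (possibly quoted) absolute path
# containing a "consensus-spec-tests/" component, for example:
#
#   "/home/ci/consensus-spec-tests/tests/mainnet/phase0/ssz_static/Checkpoint/ssz_random/case_0/roots.yaml"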

# If any of the file names found in the consensus-spec-tests directory *starts with* one of the
# following regular expressions, we will assume they are to be ignored (i.e., we are purposefully
# *not* running the spec tests).
excluded_paths = [
    # Eth1Block and PowBlock
    #
    # Intentionally omitted, as per https://github.com/sigp/lighthouse/issues/1835
    "tests/.*/.*/ssz_static/Eth1Block/",
    "tests/.*/.*/ssz_static/PowBlock/",
    # light_client
    "tests/.*/.*/light_client",
    # LightClientStore
    "tests/.*/.*/ssz_static/LightClientStore",
    # LightClientUpdate
    "tests/.*/.*/ssz_static/LightClientUpdate",
    # LightClientSnapshot
    "tests/.*/.*/ssz_static/LightClientSnapshot",
    # LightClientBootstrap
    "tests/.*/.*/ssz_static/LightClientBootstrap",
    # LightClientOptimistic
    "tests/.*/.*/ssz_static/LightClientOptimistic",
    # LightClientFinalityUpdate
    "tests/.*/.*/ssz_static/LightClientFinalityUpdate",
    # LightClientHeader
    "tests/.*/.*/ssz_static/LightClientHeader",
    # One of the EF researchers likes to pack the tarballs on a Mac
    ".*\.DS_Store.*",
    # More Mac weirdness.
    "tests/mainnet/bellatrix/operations/deposit/pyspec_tests/deposit_with_previous_fork_version__valid_ineffective/._meta.yaml",
    # bls tests are moved to bls12-381-tests directory
    "tests/general/phase0/bls",
    # some bls tests are not included now
    "bls12-381-tests/deserialization_G1",
    "bls12-381-tests/deserialization_G2",
    "bls12-381-tests/hash_to_G2",
    # FIXME(sean)
    "tests/mainnet/capella/light_client/single_merkle_proof/BeaconBlockBody/*",
    "tests/mainnet/deneb/light_client/single_merkle_proof/BeaconBlockBody/*",
    "tests/.*/eip6110"
]
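
# Note: re.match() only matches at the beginning of the string, so the patterns above behave
# like path prefixes. For example (illustrative path), a file named
# "tests/mainnet/deneb/light_client/single_merkle_proof/BeaconBlockBody/proof_0/proof.yaml"
# would be counted as intentionally excluded via the "tests/.*/.*/light_client" pattern.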
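
# For reference, an illustrative call (the absolute prefix here is an assumption):
#
#   normalize_path("/tmp/consensus-spec-tests/tests/mainnet/phase0/ssz_static/Attestation/ssz_random/case_0/roots.yaml")
#       == "tests/mainnet/phase0/ssz_static/Attestation/ssz_random/case_0/roots.yaml"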
def normalize_path(path):
    return path.split("consensus-spec-tests/")[1]


# Determine the list of filenames which were accessed during tests.
passed = set()
for line in open(accessed_files_filename, 'r').readlines():
    file = normalize_path(line.strip().strip('"'))
    passed.add(file)

missed = set()
accessed_files = 0
excluded_files = 0

# Iterate over all files in the tests directory and ensure that each one was either accessed
# during testing or intentionally excluded.
for root, dirs, files in os.walk(tests_dir_filename):
    for name in files:
        name = normalize_path(os.path.join(root, name))
        if name not in passed:
            excluded = False
            for excluded_path_regex in excluded_paths:
                if re.match(excluded_path_regex, name):
                    excluded = True
                    break
            if excluded:
                excluded_files += 1
            else:
                print(name)
                missed.add(name)
        else:
            accessed_files += 1

# Exit with an error if any files were missed.
assert len(missed) == 0, "{} missed files".format(len(missed))

print("Accessed {} files ({} intentionally excluded)".format(
    accessed_files, excluded_files))
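
# On success, the script prints a single summary line, e.g. (numbers are illustrative):
#
#   Accessed 24892 files (1032 intentionally excluded)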