- fetches logs from all three price feeds in one query (see the sketch after this group)
- assumes the eth/usd price feed will be updated to include the LogValue event
- updates transformers to run separately from header sync
- currently not validating price feeds if the underlying header already exists
and is valid, since price feeds should have been added when the initial header
was added
- removes assertions against data with timestamps to facilitate running
the tests against a freshly set up local Ganache instance
- also applies a few `go vet` and `go fmt` changes
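The one-query log fetch above might look roughly like this sketch using go-ethereum's `ethclient`; the endpoint, feed addresses, and `LogValue(bytes32)` signature are placeholders, not the project's actual configuration:

```go
package main

import (
	"context"
	"log"

	"github.com/ethereum/go-ethereum"
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/crypto"
	"github.com/ethereum/go-ethereum/ethclient"
)

func main() {
	// Hypothetical endpoint; any JSON-RPC node works.
	client, err := ethclient.Dial("https://mainnet.infura.io/v3/<key>")
	if err != nil {
		log.Fatal(err)
	}

	// Placeholder price feed contract addresses (the real ones live in config).
	feeds := []common.Address{
		common.HexToAddress("0x0000000000000000000000000000000000000001"), // eth/usd
		common.HexToAddress("0x0000000000000000000000000000000000000002"), // feed 2
		common.HexToAddress("0x0000000000000000000000000000000000000003"), // feed 3
	}
	// Topic hash for the LogValue event, assuming signature LogValue(bytes32).
	logValueTopic := crypto.Keccak256Hash([]byte("LogValue(bytes32)"))

	// One FilterQuery lists all three feed addresses, so a single
	// eth_getLogs round trip replaces three separate requests.
	query := ethereum.FilterQuery{
		Addresses: feeds,
		Topics:    [][]common.Hash{{logValueTopic}},
	}
	logs, err := client.FilterLogs(context.Background(), query)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("fetched %d price feed logs in one query", len(logs))
}
```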
- Migrate various mocks of core namespaces to shared version in `fakes` pkg
- Err on the side of making test doubles less sophisticated
- Don't pull over mocks of namespaces that are only used in example code
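A hypothetical example of the style these shared fakes aim for: record the arguments, return canned values, and carry no logic that could drift from production code:

```go
package fakes

import "errors"

// ErrDefault is a reusable canned error for tests to assign.
var ErrDefault = errors.New("default fake error")

// MockBlockRepository (hypothetical) records what it was called with and
// returns whatever the test assigned; nothing more sophisticated.
type MockBlockRepository struct {
	CreateOrUpdateBlockCalled    bool
	CreateOrUpdateBlockPassedNum int64
	CreateOrUpdateBlockReturnErr error
}

func (m *MockBlockRepository) CreateOrUpdateBlock(blockNumber int64) error {
	m.CreateOrUpdateBlockCalled = true
	m.CreateOrUpdateBlockPassedNum = blockNumber
	return m.CreateOrUpdateBlockReturnErr
}
```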
- Only syncs block headers (excludes block bodies, transactions, receipts, and logs)
- Modifies validation window to include the most recent block (sketched below)
- Isolates the validation window to the variable defined in the cmd directory (blocks
have a separate variable, defined in the block_repository, for determining when
to mark a block as final)
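A minimal sketch of an inclusive window, assuming a package-level `validationWindow` constant in the cmd directory (all names hypothetical):

```go
package cmd

// validationWindow is the number of recent headers to re-check.
const validationWindow = 15

// windowToValidate returns the inclusive range of block numbers to
// revalidate; the upper bound is the head itself, so the most recent
// block is included.
func windowToValidate(headBlockNumber int64) (low, high int64) {
	low = headBlockNumber - validationWindow + 1
	if low < 0 {
		low = 0
	}
	return low, headBlockNumber
}
```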
- Add an eth_node_fingerprint to blocks that can be imitated by both hot and cold imports
- Only sync missing blocks on cold import (blocks that are absent or don't share the fingerprint)
- Set block is_final status after import
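Read together, those three bullets suggest logic along these lines; the sketch below uses sqlx, and the table and column names are assumptions:

```go
package coldimport

import "github.com/jmoiron/sqlx"

// missingBlockNumbers returns numbers in [start, end] that are absent
// or were written with a different eth_node_fingerprint, so cold import
// can skip blocks this node already has. Schema details are assumed.
func missingBlockNumbers(db *sqlx.DB, start, end int64, fingerprint string) ([]int64, error) {
	var numbers []int64
	err := db.Select(&numbers,
		`SELECT series.n
		 FROM generate_series($1::bigint, $2::bigint) AS series(n)
		 LEFT JOIN blocks ON blocks.block_number = series.n
		   AND blocks.eth_node_fingerprint = $3
		 WHERE blocks.id IS NULL`,
		start, end, fingerprint)
	return numbers, err
}

// markFinal sets is_final after the import, once blocks sit deep enough
// behind the head (the 20-block depth here is an assumption).
func markFinal(db *sqlx.DB, headNumber int64) error {
	_, err := db.Exec(
		`UPDATE blocks SET is_final = TRUE
		 WHERE is_final = FALSE AND block_number <= $1 - 20`,
		headNumber)
	return err
}
```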
* Rename geth package structs so they are not prefixed with the package name
* No longer need to dump schema since Travis uses migrate
* Rearrange history package
* Remove duplicate receipt request from block reward calculation
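A guess at the shape of that fix, with hypothetical stand-in types: receipts fetched once during sync are passed in and reused rather than requested again per transaction:

```go
package rewards

import "math/big"

// Hypothetical stand-ins for the real core types.
type Transaction struct {
	Hash     string
	GasPrice *big.Int
}

type Receipt struct{ GasUsed uint64 }

type Block struct {
	Number       int64
	Transactions []Transaction
}

// staticBlockReward is a placeholder for the protocol reward schedule.
func staticBlockReward(blockNumber int64) *big.Int {
	return big.NewInt(5e18)
}

// calcBlockReward takes the receipts that were already fetched during
// sync, instead of issuing a second request for each one.
func calcBlockReward(block Block, receipts map[string]Receipt) *big.Int {
	reward := staticBlockReward(block.Number)
	for _, tx := range block.Transactions {
		gasUsed := new(big.Int).SetUint64(receipts[tx.Hash].GasUsed)
		reward.Add(reward, new(big.Int).Mul(gasUsed, tx.GasPrice))
	}
	return reward
}
```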
* Remove Listener + Observers and replace w/ polling the head (see the polling sketch below)
* Potential short-term issue w/ Infura (ignore these tests for now)
* Add block categorization (is_final flag)
* Add godo task for vulcanizeDB (example of how everything could work together)
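The polling replacement could look like the following sketch; the interval and handling are placeholders, and `HeaderByNumber` with a nil number returns the latest header in go-ethereum:

```go
package history

import (
	"context"
	"log"
	"time"

	"github.com/ethereum/go-ethereum/ethclient"
)

// pollHead replaces the subscription-based Listener: it asks for the
// latest header on a ticker, which also avoids relying on subscription
// support from providers such as Infura.
func pollHead(client *ethclient.Client, interval time.Duration) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	var lastSeen int64
	for range ticker.C {
		// Passing nil asks for the latest header.
		header, err := client.HeaderByNumber(context.Background(), nil)
		if err != nil {
			log.Printf("polling head failed: %v", err)
			continue
		}
		if n := header.Number.Int64(); n > lastSeen {
			lastSeen = n
			log.Printf("new head: %d", n) // real code would hand the block to the sync
		}
	}
}
```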
* Add unique constraint on block_number and node
* Add index on block_id for transactions_table
* Add node_id index on blocks table
* Sort transactions returned from FindBlock by tx_hash
* Lowercase tx_to and tx_from, as Etherscan does
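Approximations of those schema and query tweaks, embedded as SQL in Go constants (table, column, and constraint names are assumptions):

```go
package repositories

// Sketch of the schema changes; names are assumptions, not the
// project's actual migration.
const schemaChanges = `
ALTER TABLE blocks
  ADD CONSTRAINT blocks_block_number_node_uc UNIQUE (block_number, node_id);
CREATE INDEX transactions_block_id_index ON transactions (block_id);
CREATE INDEX blocks_node_id_index ON blocks (node_id);
`

// FindBlock's transaction query: rows come back in deterministic tx_hash
// order, with addresses lowercased the way Etherscan displays them.
const findBlockTransactions = `
SELECT tx_hash, lower(tx_to) AS tx_to, lower(tx_from) AS tx_from
FROM transactions
WHERE block_id = $1
ORDER BY tx_hash ASC
`
```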
* The command populates up to the highest known block number (illustrated after this group)
* The anticipated use case is that the listener will be running
in parallel to the populateBlocks command
* This means that the listener is responsible for picking up
new blocks, and the populateBlocks command is responsible for
historical blocks
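A sketch of that division of labor, with hypothetical stand-ins for the real fetcher and repository interfaces:

```go
package cmd

import "log"

// Block, blockFetcher, and blockStore are hypothetical stand-ins for the
// real core types and repositories.
type Block struct{ Number int64 }

type blockFetcher interface {
	GetBlockByNumber(n int64) (Block, error)
}

type blockStore interface {
	MissingBlockNumbers(start, end int64) []int64
	CreateOrUpdateBlock(b Block) error
}

// populateBlocks backfills history up to the highest block number the
// node currently knows about; the listener running in parallel picks up
// anything newer than that.
func populateBlocks(fetcher blockFetcher, store blockStore, start, highestKnown int64) {
	for _, n := range store.MissingBlockNumbers(start, highestKnown) {
		block, err := fetcher.GetBlockByNumber(n)
		if err != nil {
			log.Printf("fetch block %d: %v", n, err)
			continue
		}
		if err := store.CreateOrUpdateBlock(block); err != nil {
			log.Printf("persist block %d: %v", n, err)
		}
	}
}
```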
* Reformat SQL statements