The SQL query was anchoring the result set on the `blocks` table, which
includes every block seen via SyncIncomingBlocks; those blocks aren't
always available in the chainstate through the API. To prevent them from
leaking into normal processing (which errors out anyway), the join was
changed so that `blocks_synced` anchors the set, as originally intended.
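
For illustration, a minimal sketch of the corrected join shape, assuming a hypothetical `cid` column and standard `database/sql` plumbing; this is not the actual chainwatch query:

```go
package chainwatch

import (
	"context"
	"database/sql"
)

// selectUnprocessed sketches the corrected shape of the query: blocks_synced
// anchors the join, so blocks that exist only in `blocks` (e.g. those seen
// via SyncIncomingBlocks) can never enter the result set. The cid column
// name is an assumption; the real schema may differ.
func selectUnprocessed(ctx context.Context, db *sql.DB) (*sql.Rows, error) {
	return db.QueryContext(ctx, `
		SELECT bs.cid
		FROM blocks_synced bs
		INNER JOIN blocks b ON b.cid = bs.cid
		WHERE bs.is_processed = false
	`)
}
```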
- When chainwatch is run, it first starts a Syncer that continuously collects blocks from the
ChainNotify channel and persists them to the `blocks_synced` table. Once the Syncer has caught the
`blocks_synced` table up to the lotus daemon's current head, a Processor is started. The Processor
selects a batch of contiguous blocks, then extracts and stores their data, doing as much of the
work as it can in parallel. When a block has been processed, its corresponding `processed_at` and
`is_processed` fields in the `blocks_synced` table are filled in (see the sketch after this list).
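
A compact sketch of that Syncer/Processor handoff; the `Block` type, the `store` helpers, and the channel wiring are hypothetical stand-ins for the real chainwatch types, not the actual implementation:

```go
package chainwatch

import (
	"context"
	"log"
	"sync"
)

// Block is a stand-in for the real chain block type.
type Block struct{ CID string }

// store is a hypothetical view of the blocks_synced table accessors.
type store interface {
	persistSynced(b Block) error         // insert into blocks_synced
	nextBatch(size int) ([]Block, error) // contiguous unprocessed blocks
	markProcessed(b Block) error         // set processed_at / is_processed
}

// runSyncer drains the ChainNotify-style channel into blocks_synced and
// signals once on caughtUp when it reaches the daemon's current head
// (head comparison is simplified here to a single CID).
func runSyncer(ctx context.Context, notify <-chan Block, head string, s store, caughtUp chan<- struct{}) {
	var once sync.Once
	for {
		select {
		case <-ctx.Done():
			return
		case b := <-notify:
			if err := s.persistSynced(b); err != nil {
				log.Printf("persist %s: %v", b.CID, err)
				continue
			}
			if b.CID == head {
				once.Do(func() { close(caughtUp) })
			}
		}
	}
}

// runProcessor waits for the Syncer to catch up, then repeatedly takes a
// contiguous batch, extracts each block's data in parallel, and marks each
// block processed once its extraction succeeds.
func runProcessor(ctx context.Context, s store, caughtUp <-chan struct{}, extract func(Block) error) {
	<-caughtUp
	for ctx.Err() == nil {
		batch, err := s.nextBatch(50)
		if err != nil || len(batch) == 0 {
			return
		}
		var wg sync.WaitGroup
		for _, b := range batch {
			wg.Add(1)
			go func(b Block) { // one goroutine per block in the batch
				defer wg.Done()
				if err := extract(b); err != nil {
					log.Printf("extract %s: %v", b.CID, err)
					return
				}
				if err := s.markProcessed(b); err != nil {
					log.Printf("mark %s: %v", b.CID, err)
				}
			}(b)
		}
		wg.Wait()
	}
}
```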