Separate out logs into own table #79
Reference: cerc-io/go-ethereum#79
Currently, logs don't have their own table; they are stored in additional columns in the receipts table. It's desirable to create a separate table and IPLD schema for them, allowing more efficient querying and distribution of individual logs.
DEPENDENCY: https://github.com/vulcanize/go-codec-dageth/issues/25.
Once that issue is finished, we will know precisely how we want to represent logs as IPLD objects.
When we create the new logs table, we need to also create a process to backfill the existing data.
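The backfill could be sketched roughly as follows. This is a hedged illustration only: it uses Python with sqlite3 as a stand-in for Postgres, and the old-schema shape (logs serialized in a column of the receipts table, here as JSON) is a hypothetical simplification, not the actual receipt_cids layout.

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Stand-in for the old schema: logs embedded alongside the receipt (hypothetical).
cur.execute("CREATE TABLE receipts (id INTEGER PRIMARY KEY, logs_json TEXT)")
# New dedicated logs table: one row per individual log.
cur.execute("""CREATE TABLE logs (
    id INTEGER PRIMARY KEY,
    receipt_id INTEGER REFERENCES receipts(id),
    address TEXT,
    topic0 TEXT, topic1 TEXT, topic2 TEXT, topic3 TEXT,
    data BLOB)""")

cur.execute("INSERT INTO receipts VALUES (1, ?)",
            (json.dumps([{"address": "0xabc", "topics": ["0x01"], "data": "00"}]),))

# Backfill: walk existing receipts and fan each embedded log out into its own row.
for rct_id, logs_json in cur.execute("SELECT id, logs_json FROM receipts").fetchall():
    for log in json.loads(logs_json):
        topics = (log["topics"] + [None] * 4)[:4]  # pad to the four topic columns
        cur.execute(
            "INSERT INTO logs (receipt_id, address, topic0, topic1, topic2, topic3, data)"
            " VALUES (?, ?, ?, ?, ?, ?, ?)",
            (rct_id, log["address"], *topics, bytes.fromhex(log["data"])))
conn.commit()
```

The key property of the migration is that it is a pure fan-out: each receipt row yields zero or more log rows, each carrying a foreign key back to its parent receipt.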
Had a meeting with Ian, Rick, and Ashwin. We agreed on the following structure for the new table holding the log data:
- an auto-index id,
- receipt_cids.id,
- log.address (name the column Address),
- log.Topics 0 (VARCHAR(66)),
- log.Topics 1 (VARCHAR(66)),
- log.Topics 2 (VARCHAR(66)),
- log.Topics 3 (VARCHAR(66)),
- Data.
Additionally: add B-tree indexes for the Topic columns and the Data column.
Also: when we create the new logs table, we need to also create a process to backfill the existing data.
Worked on the coding. Next, I have to parse the logs in the receipt.
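That parsing step might look roughly like this. A sketch only: the dict shapes below are hypothetical stand-ins for go-ethereum's receipt and log types, used to show how each log flattens into one row of the agreed table structure.

```python
# Hypothetical in-memory shapes standing in for go-ethereum's receipt/log types.
receipt = {
    "id": 7,
    "logs": [
        {"address": "0x" + "ab" * 20,
         "topics": ["0x" + "01" * 32, "0x" + "02" * 32],
         "data": b"\x00\x01"},
    ],
}

def rows_for_receipt(rct):
    """Flatten each log into one row matching the Logs table columns."""
    for log in rct["logs"]:
        # A log carries 0-4 topics; pad with NULLs to fill Topics0..Topics3.
        topics = (log["topics"] + [None] * 4)[:4]
        yield (rct["id"], log["address"], *topics, log["data"])

rows = list(rows_for_receipt(receipt))
```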
Created a local Logs table with the structure below using the following SQL statements:
CREATE TABLE Logs (
    id SERIAL,
    CONSTRAINT receipt_id
        FOREIGN KEY (id)
        REFERENCES eth.receipt_cids (id),
    address CHAR(42),
    Topics0 VARCHAR(66),
    Topics1 VARCHAR(66),
    Topics2 VARCHAR(66),
    Topics3 VARCHAR(66),
    Data BYTEA);
CREATE UNIQUE INDEX topic0_idx ON Logs (Topics0);
CREATE UNIQUE INDEX topic1_idx ON Logs (Topics1);
CREATE UNIQUE INDEX topic2_idx ON Logs (Topics2);
CREATE UNIQUE INDEX topic3_idx ON Logs (Topics3);
CREATE UNIQUE INDEX address_idx ON Logs (address);
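With indexes on the topic columns, filtering logs by event signature becomes an index lookup instead of a scan over receipt columns. A minimal illustration, again using sqlite3 as a stand-in for Postgres, with hypothetical sample values:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE logs (
    id INTEGER PRIMARY KEY,
    address TEXT,
    topic0 TEXT, topic1 TEXT, topic2 TEXT, topic3 TEXT,
    data BLOB)""")
# Index on topic0, the event-signature topic (non-unique: many logs share a signature).
cur.execute("CREATE INDEX topic0_idx ON logs (topic0)")

transfer_sig = "0xddf2"  # hypothetical truncated event-signature hash
cur.executemany("INSERT INTO logs (address, topic0) VALUES (?, ?)",
                [("0xaaa", transfer_sig), ("0xbbb", "0x1234")])

# Fetch every log emitted with that event signature; served by topic0_idx.
rows = cur.execute("SELECT address FROM logs WHERE topic0 = ?",
                   (transfer_sig,)).fetchall()
print(rows)  # → [('0xaaa',)]
```

Note the index here is deliberately non-unique; topic values repeat across logs, which is why unique indexes on topic columns would reject real chain data.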
Table "public.logs"
Column | Type | Collation | Nullable | Default
---------+-----------------------+-----------+----------+----------------------------------
id | integer | | not null | nextval('logs_id_seq'::regclass)
address | character(42) | | |
topics0 | character varying(66) | | |
topics1 | character varying(66) | | |
topics2 | character varying(66) | | |
topics3 | character varying(66) | | |
data | bytea | | |
Indexes:
"address_idx" UNIQUE, btree (address)
"topic0_idx" UNIQUE, btree (topics0)
"topic1_idx" UNIQUE, btree (topics1)
"topic2_idx" UNIQUE, btree (topics2)
"topic3_idx" UNIQUE, btree (topics3)
Foreign-key constraints:
"receipt_id" FOREIGN KEY (id) REFERENCES eth.receipt_cids(id)
Related issue: https://github.com/vulcanize/go-codec-dageth/issues/25
We need to:
- Add a new VARCHAR(66) row named log_root to the receipt_cids table (should eventually be NOT NULL, but we need to backfill the old missing values first).
- Add a new table, log_cids, for indexing log IPLDs; rows in this table reference the parent receipt by fk.
- Store the root of each receipt's log trie in the log_root row of the receipt_cids table.
- Use 0x99 as the multicodec type for the log trie node CIDs; it's not official yet.
- Use 0x9a as the multicodec type for the log CIDs; it's not official yet.
- Link the log IPLD objects to the log_cids table by referencing them by multihash fk.

Linking to https://github.com/vulcanize/statediff-migrations/pull/7
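The proposed multicodec codes would slot into the standard CIDv1 byte layout (version varint, codec varint, multihash). A sketch of that layout, using sha2-256 purely for illustration since Ethereum objects are actually hashed with keccak-256:

```python
import hashlib

def varint(n: int) -> bytes:
    # Minimal unsigned LEB128 varint encoder, as used in CIDs and multihashes.
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        out.append(b | (0x80 if n else 0))
        if not n:
            return bytes(out)

ETH_RECEIPT_LOG_TRIE = 0x99  # proposed log-trie-node codec; not yet official
ETH_RECEIPT_LOG = 0x9A       # proposed log codec; not yet official
SHA2_256 = 0x12              # illustration only; Ethereum objects use keccak-256 (0x1b)

def cid_v1(codec: int, payload: bytes) -> bytes:
    """Build a CIDv1: version varint + codec varint + multihash."""
    digest = hashlib.sha256(payload).digest()
    multihash = varint(SHA2_256) + varint(len(digest)) + digest
    return varint(1) + varint(codec) + multihash

cid = cid_v1(ETH_RECEIPT_LOG, b"rlp-encoded log bytes")
print(cid[:3].hex())  # → '019a01': CIDv1, then the two-byte varint for codec 0x9a
```

Because 0x9a exceeds 0x7f, its varint encoding takes two bytes, which is why the codec appears as 9a 01 in the CID's raw bytes.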
Linking to https://github.com/vulcanize/go-ethereum/pull/102