Compare commits


202 Commits
v2.0.0 ... v5

Author SHA1 Message Date
89d08ac1b9 Add eth.blob_hashes table (#8)
Adds `eth.blob_hashes`, indexed by blob transaction hash.

Currently based on #7

Reviewed-on: #8
2024-08-01 00:21:06 +00:00
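Since EIP-4844 transactions can carry several blobs, a table indexed by blob transaction hash plausibly has a composite shape like the following. This is a hedged sketch only; the column names and types are assumptions, and the actual DDL lives in the migration for PR #8:

```sql
-- Hypothetical sketch; actual definition lives in db/migrations
CREATE TABLE eth.blob_hashes (
    tx_hash VARCHAR(66) NOT NULL, -- hash of the blob-carrying transaction
    index INTEGER NOT NULL,       -- position of the blob within the transaction
    blob_hash BYTEA NOT NULL      -- versioned hash of the blob (EIP-4844)
);
CREATE INDEX blob_hashes_tx_hash_index ON eth.blob_hashes USING btree (tx_hash);
```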
d24142a301 Update schema.sql 2024-07-24 07:13:22 +00:00
9e0b69d23c Add CI sanity check
+ compose healthcheck
2024-07-24 07:13:22 +00:00
825a0bc235 Simplify startup script 2024-07-24 07:13:22 +00:00
e90bc38fdb Remove unused Makefile, workflows 2024-07-24 07:13:22 +00:00
ccfc2dbc84 Simplify Dockerfile (#9)
We don't actually need a Go based image, since this just installs the `goose` binary.

Reviewed-on: #9
2024-07-24 07:05:47 +00:00
fdd56e9803 Add withdrawals (EIP-4895) (#7)
Support for validator withdrawal objects:
- new table `eth.withdrawal_cids`
- new column `withdrawals_root` in `eth.header_cids`

Reviewed-on: #7
2024-06-25 11:24:00 +00:00
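A validator-withdrawal table as described above might be sketched as follows. This is a hedged illustration under assumed names and types, not the repository's actual DDL (which is in the migration for PR #7):

```sql
-- Hypothetical sketch; the real DDL is in the migration for PR #7
CREATE TABLE eth.withdrawal_cids (
    block_number BIGINT NOT NULL,
    header_id VARCHAR(66) NOT NULL, -- hash of the enclosing block header
    index INTEGER NOT NULL,         -- withdrawal index within the block
    validator INTEGER NOT NULL,
    address VARCHAR(66) NOT NULL,
    amount NUMERIC NOT NULL,
    cid TEXT NOT NULL,              -- CID of the withdrawal IPLD block
    PRIMARY KEY (block_number, header_id, index)
);
-- And the corresponding header column:
ALTER TABLE eth.header_cids ADD COLUMN withdrawals_root VARCHAR(66);
```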
68a347e38d Add .gitea specific workflows. (#6)
Reviewed-on: #6
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-01-22 19:16:56 +00:00
097804b1e9 Merge pull request 'update Dockerfile for ARM compatibility' (#4) from arm-dockerfile into v5
Reviewed-on: #4
2023-10-19 13:26:05 +00:00
925abd314b update Dockerfile for ARM compatibility 2023-10-18 09:16:16 -04:00
Ian Norden
aaa4459655
Merge pull request #145 from cerc-io/example_queries
example queries
2023-09-21 07:05:45 -05:00
i-norden
8f360133ed example queries 2023-09-20 22:24:23 -05:00
Ian Norden
f6da2ce571
Merge pull request #143 from cerc-io/uml
v5 uml
2023-09-20 13:47:58 -05:00
i-norden
245e6d07b2 consolidate 2023-09-20 13:32:40 -05:00
i-norden
402d2790e0 update uml 2023-09-20 12:14:16 -05:00
29c5d04159
Fix typo in CI image build, simplify step (#142) 2023-09-01 00:55:10 +08:00
Ian Norden
9196c449f0
Merge pull request #138 from roysc/v5-dev
Fix hypertable down-migration
2023-08-01 08:35:41 -05:00
Ian Norden
146282fb7f
Merge pull request #139 from roysc/v5-cleanup
Clean up
2023-08-01 08:35:25 -05:00
323ed2da00 Remove get_child function
Was only used in canonical_header_from_array
2023-07-23 01:27:00 +08:00
5b92e02ebd Docker compose cleanup
- preferred naming convention is compose.yaml
- test compose was redundant
- uncomment build section for convenience, since it's only used as a fallback
2023-07-23 01:27:00 +08:00
dcd911a8ff Simplify docker publish workflow 2023-07-23 01:27:00 +08:00
59d01ad30b Drop version tag in docker-compose 2023-07-23 01:26:08 +08:00
d2d8020856 Rename indexes back on hypertable downgrade 2023-07-22 12:03:24 +08:00
9618c0710e
Switch to GITEA_PUBLISH_TOKEN for publishing. (#140) 2023-07-21 14:03:41 -05:00
8779bb2b86
Do not make eth.header_cids a hypertable. (#137) 2023-07-21 13:27:26 -05:00
1b922dbff3
Add canonical column to eth.header_cids (#136)
* Add canonical column to eth.header_cids

* NOT NULL

* Update stored procedures for new schema.

* Switch to sql syntax, since it can be inlined.

* Fix indent

* Fix indent
2023-07-18 12:29:27 -05:00
66cd1d9e69
Merge pull request #135 from cerc-io/roy/v5-dev 2023-05-15 21:23:32 +08:00
af9910d381 Update schema dump 2023-05-15 14:17:46 +08:00
e4e1b0ea1f Upgrade github action 2023-05-15 12:03:51 +08:00
050c1ae3df Startup script tweaks
* forward goose command
* don't idle migrations container
2023-05-15 12:03:30 +08:00
2afa6e9fa5
Merge pull request #134 from cerc-io/roy/v5-dev
Schema fixes for v5
2023-04-24 22:20:10 +08:00
3a475b4de4 independent header_result interface type 2023-04-18 17:46:33 +08:00
bf7b4eb627 add index col to uncle_cids index 2023-04-18 14:55:37 +08:00
Ian Norden
167cfbfb20
Merge pull request #131 from cerc-io/ian_dev
cherry-pick dockerfile, docker-compose, actions, and Makefile adjustments onto v5
2023-03-21 15:08:50 -05:00
i-norden
fcb93d17ed remove vestigial action 2023-03-21 15:08:04 -05:00
i-norden
1bb4fe04f0 fix after cherry-pick 2023-03-21 12:50:37 -05:00
i-norden
10c84b589f comments for using local build 2023-03-21 12:43:29 -05:00
i-norden
527ff11328 use v5.0.0-alpha 2023-03-21 12:42:14 -05:00
a91f44773c Clean up Dockerfile (#130) 2023-03-21 12:39:41 -05:00
0595f3dc15 Renames and clean up (#129)
* Cerc refactor contd.

* consistent local docker tag
2023-03-21 12:38:43 -05:00
Ian Norden
9f07d4f4d4
Merge pull request #128 from cerc-io/ian/v5
fix bug due to block_number argument being overridden by the optimist…
2023-03-09 09:03:55 -06:00
i-norden
c9edf6c832 fix bug due to block_number argument being overridden by the optimistic path 2023-03-08 12:42:37 -06:00
Ian Norden
67dc84205a
Merge pull request #127 from cerc-io/ian/v5
Modified `get_storage_at` procedures for v5
2023-03-08 12:02:13 -06:00
i-norden
22dcf5c72e review fixes 2023-03-08 11:10:23 -06:00
i-norden
be28c2ab79 fix 2023-03-07 20:00:01 -06:00
i-norden
42803af51a we don't need to join on state_cids in the pessimistic case anymore 2023-03-07 18:44:12 -06:00
i-norden
afc47af045 return val directly since it is now present in storage_cids 2023-03-07 18:31:26 -06:00
i-norden
2695d9e353 updated get_storage_at procedures for v5 2023-03-07 18:18:20 -06:00
71dede2031 Update behavior back to a comprehensive JOIN on state_path if we came up empty in our optimized check. (#118) 2023-03-07 18:18:20 -06:00
Ian Norden
df352ffd1a
Merge pull request #126 from cerc-io/ian/v5
Drop `access_list_elements` table
2023-02-27 10:31:00 -06:00
i-norden
dd35277c86 updated schema 2023-02-21 20:07:28 -06:00
i-norden
92cc8fbea3 remove access_list_elements table and associated indexes and hypertable functions 2023-02-21 20:07:28 -06:00
Ian Norden
05db3c697f
Merge pull request #124 from cerc-io/ian/v5
v5 part 5
2023-02-20 15:05:02 -06:00
i-norden
a710db0284 BRIN => BTREE 2023-02-20 13:47:23 -06:00
i-norden
5e153c601f updated schema 2023-02-17 14:26:30 -06:00
i-norden
802cfe7180 consolidate version migrations; reorder 2023-02-17 14:26:21 -06:00
i-norden
b06b4f2cfb remove partial_path and contract_hash columns and indexes 2023-02-17 14:25:40 -06:00
i-norden
3fd1638ff6 remove eth_probe table definitions, these migrations will continue to be defined in the eth_probe repo 2023-02-17 14:23:42 -06:00
i-norden
1e5cbfd184 remove unused postgraphile triggers; can add back as needed 2023-02-17 14:22:55 -06:00
Ian Norden
40b1709c2c
Merge pull request #123 from cerc-io/ian/v5
v5 part 4
2023-02-10 13:58:58 -06:00
i-norden
92a9f5856b minor fixes 2023-02-10 10:38:31 -06:00
i-norden
9f060ff0bf updated schema 2023-02-08 18:35:16 -06:00
i-norden
a8440e4ded public.blocks => ipld.blocks 2023-02-08 18:35:07 -06:00
i-norden
c402e5c285 drop mh_keys and use cids for blockstore keys and linking aka revert to v0 ipfs blockstore format. in addition to saving space, this format is closer to the CAR format used in filecoin deals 2023-02-08 18:22:40 -06:00
i-norden
85bc243896 drop tx_data and log_data; data can still be accessed in referenced ipld blocks in public.blocks 2023-02-08 18:13:15 -06:00
i-norden
f67f03481b renaming some columns; remove log_root column 2023-02-08 18:08:08 -06:00
Ian Norden
27e923f70d
Merge pull request #120 from cerc-io/ian_v5
v5 part 3
2023-02-01 21:12:34 -06:00
i-norden
73a66dae8b updated schema 2023-02-01 20:22:22 -06:00
i-norden
29e2bd4e7b remove 00023 get storage functions; they were broken by these changes and would need to be refactored, but with the v5 changes we shouldn't need them anymore 2023-02-01 20:20:01 -06:00
986ce1ead8 Update behavior back to a comprehensive JOIN on state_path if we came up empty in our optimized check. (#118) 2023-02-01 20:17:06 -06:00
268a282eac Draft: Split get_storage_at into two paths: one optimistic and simple, the other more comprehensive but slower. (#117)
* Split get_storage_at into two paths: one optimistic and simple, the other more exhaustive but slower.

* Remove is null check

* Fix name

* Update comment
2023-02-01 20:16:35 -06:00
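The optimistic/exhaustive split described in #117 and #118 can be sketched roughly as below. Table, column, and function names here are assumptions inferred from context, not the repository's actual `get_storage_at` definition:

```sql
-- Hedged sketch of the two-path lookup; not the actual repo function.
CREATE OR REPLACE FUNCTION get_storage_at_sketch(
    v_state_key TEXT, v_storage_key TEXT, v_block BIGINT
) RETURNS BYTEA AS $$
DECLARE
    result BYTEA;
BEGIN
    -- Optimistic path: cheap direct lookup against storage_cids.
    SELECT s.val INTO result
    FROM eth.storage_cids s
    WHERE s.storage_leaf_key = v_storage_key
      AND s.block_number <= v_block
    ORDER BY s.block_number DESC
    LIMIT 1;

    -- Comprehensive path: slower JOIN through state_cids, used only
    -- when the fast path came up empty.
    IF result IS NULL THEN
        SELECT s.val INTO result
        FROM eth.storage_cids s
        INNER JOIN eth.state_cids st
                ON st.state_path = s.state_path
               AND st.block_number = s.block_number
        WHERE st.state_leaf_key = v_state_key
          AND s.storage_leaf_key = v_storage_key
          AND st.block_number <= v_block
        ORDER BY st.block_number DESC
        LIMIT 1;
    END IF;
    RETURN result;
END;
$$ LANGUAGE plpgsql;
```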
53461a0996 Add new indexes and functions to improve eth_getStorageAt performance. (#116)
* New indexes and functions to implement get_storage_at as a function.

* Update schema.sql.
2023-02-01 20:16:11 -06:00
i-norden
26d970ed2f remove times_validated field 2023-02-01 20:03:12 -06:00
i-norden
c35cda7b5e updated schema 2023-02-01 19:56:57 -06:00
i-norden
f3c58e39ca node_id => node_ids 2023-02-01 19:56:36 -06:00
i-norden
2165b316fa remove snapshot functions (they wont work without intermediate node indexing); adjust remaining functions 2023-01-23 17:31:36 -06:00
i-norden
713f6a9208 add removed flag; embed value in storage_cids 2023-01-23 17:30:33 -06:00
i-norden
241bb281eb state_cids and storage_cids only index leaf nodes and related adjustments 2023-01-23 17:08:20 -06:00
Ian Norden
bb57b4a033
Merge pull request #105 from vulcanize/release-v5.0.0
v5 updates part 2
2022-09-06 19:45:03 -05:00
i-norden
a9755c6ecc update schema 2022-08-30 12:19:48 -05:00
i-norden
7f8247cb4f revert hash indexes; add eth probes tables 2022-08-30 12:05:51 -05:00
i-norden
701f9c2729 update schema.sql 2022-08-15 11:15:48 -05:00
i-norden
475ead282b remove known_gaps table 2022-08-15 11:11:19 -05:00
i-norden
42f46fc397 update schema.sql 2022-08-08 12:42:54 -05:00
i-norden
2bf5c82150 replace btree with hash index where it makes sense 2022-08-08 12:40:47 -05:00
Ian Norden
0c62ccc552
Merge pull request #103 from vulcanize/release-v5.0.0
v5 updates part 1
2022-08-08 11:16:24 -05:00
i-norden
13f0ff3933 update schema 2022-08-08 11:08:09 -05:00
i-norden
0856168b92 misc adjustments 2022-08-08 11:06:00 -05:00
erikdies
a8395d1413 Create issues-notion-sync.yml 2022-08-08 10:53:29 -05:00
prathamesh0
be345e0733
Use a specific tag while building migration tool (#101) 2022-07-22 11:44:07 +05:30
prathamesh0
b59505eab2
Add block hash to primary keys in transactions, receipts and logs tables (#100)
* Add block hash to primary keys in transactions, receipts and logs tables

* Add block hash in postgraphile triggers for transactions, receipts and logs tables

* Make indexes on transaction cid and mh_key non-unique
2022-07-07 16:21:06 +05:30
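Conceptually the change above amounts to widening the primary keys so that rows from non-canonical blocks sharing a transaction hash can coexist. The constraint and column names below are illustrative, not the exact migration:

```sql
-- Illustrative only; the actual migration is in PR #100.
ALTER TABLE eth.transaction_cids
    DROP CONSTRAINT transaction_cids_pkey,
    ADD PRIMARY KEY (tx_hash, header_id); -- header_id = block hash
-- Unique indexes on transaction cid / mh_key become plain indexes so
-- duplicate entries from non-canonical blocks can coexist.
```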
Abdul Rabbani
4e948c58ce
Merge pull request #93 from vulcanize/feature/update-go
Update the go-version in build container
2022-06-20 09:28:10 -04:00
Abdul Rabbani
65b7bee7a6 Update the go-version in build container 2022-06-20 09:23:39 -04:00
48eb594ea9
Fix docker-compose for publishing image (#91)
* Fix docker-compose for publishing image

* Add docker build in pr workflow
2022-06-14 13:03:22 +05:30
prathamesh0
a44724c3d7
Fix stored functions for in-place snapshot (#87) 2022-06-14 11:00:54 +05:30
prathamesh0
60074a945c
Update instructions and docker-compose files for simplified db setup (#88) 2022-06-07 14:46:47 +05:30
Abdul Rabbani
91d30b9ea1
Merge pull request #84 from vulcanize/feature/startup-script
Update startup_script.sh
2022-05-31 12:25:55 -04:00
Abdul Rabbani
3f034d40ce Update startup_script.sh 2022-05-27 09:23:48 -04:00
Ashwin Phatak
14762a119f
Merge pull request #79 from deep-stack/pm-single-node
Run migrations on a single-node TimescaleDB setup
2022-05-17 16:19:34 +05:30
1c4759da05 Clean up README 2022-05-17 16:01:48 +05:30
49daf851cc Run migrations on a single-node TimescaleDB setup 2022-05-17 15:50:55 +05:30
Ashwin Phatak
b64ad6e3f7
Merge pull request #78 from deep-stack/pm-docker-job
Merge changes for docker process to build and push a docker image
2022-05-17 11:19:52 +05:30
0ad88f6126 Merge changes for docker process to build and push a docker image 2022-05-17 10:27:00 +05:30
Abdul Rabbani
3014e51326
Merge pull request #77 from vulcanize/feature/docker-cicd
Use a single file to build and push a docker image
2022-05-16 11:46:27 -04:00
Abdul Rabbani
972ab2f102 Allow edited release tags to run the pipeline 2022-05-13 08:03:26 -04:00
Abdul Rabbani
47d4961ea6 Require the push job to wait. 2022-05-13 07:59:13 -04:00
Abdul Rabbani
31f115540f Combine docker build and push for releases 2022-05-12 11:18:08 -04:00
Ashwin Phatak
00897cef2c
Merge pull request #76 from vulcanize/pm-v4-merge
Merge latest changes from main into sharding branch
2022-05-11 15:22:53 +05:30
bc68969abb Fix migrations naming after merge 2022-05-10 19:34:01 +05:30
Ashwin Phatak
916af4f832
Merge pull request #75 from deep-stack/pm-image-migrations
Update Dockerfile to run migrations
2022-05-10 16:58:51 +05:30
526bda7090 Temporarily skip on-pr CI checks 2022-05-10 16:43:52 +05:30
d0f5110dec Update Dockerfile to run migrations 2022-05-10 16:22:35 +05:30
Ashwin Phatak
7312b330cd
Merge pull request #74 from deep-stack/pm-cleanup-migrations
Create distributed hypertables directly, skipping hypertables
2022-05-10 10:07:46 +05:30
0ccb54c770 Skip migration to create hypertables 2022-05-10 10:05:28 +05:30
33b293f2bb Fix migrations naming 2022-05-09 17:22:19 +05:30
Ashwin Phatak
045120ce20
Merge pull request #72 from deep-stack/pm-v4-multi-node
Multi-node setup to run the migrations
2022-05-04 16:04:10 +05:30
70cf01ff27 Create stored functions after creating distributed hypertables 2022-05-04 15:48:20 +05:30
06d9ef96e7 Multi-node setup to run the migrations 2022-04-29 10:51:40 +05:30
Ashwin Phatak
96bd0b5d9a
Merge pull request #70 from deep-stack/pm-v4-schema-fixes
Remove foreign keys to hypertables and fix schema dump
2022-04-26 09:55:37 +05:30
9b917746e4 Remove triggers blocking inserts 2022-04-22 18:54:43 +05:30
ee2f43a849 Remove foreign keys to hypertables 2022-04-22 18:53:43 +05:30
Ian Norden
b8f713d518
Merge pull request #67 from vulcanize/release-v4.0.0-alpha
Fixes + new schema dump
2022-04-18 09:34:45 -05:00
i-norden
c38394ad49 updated schema 2022-04-12 21:56:43 -05:00
i-norden
673ae7b265 fixes, PKs need to include partition key 2022-04-12 21:56:27 -05:00
Ian Norden
aa73e30b32
Merge pull request #65 from vulcanize/release-v4.0.0-alpha
[v4] TimescaleDB support
2022-04-12 21:12:18 -05:00
i-norden
3b4e097f00 remove pre and post migrations sets 2022-04-12 21:12:03 -05:00
i-norden
caf99f7194 use state leaf key in storage_cids FK 2022-04-12 21:11:36 -05:00
i-norden
b3832dd23d script for generating/updating the migration file for adding and attaching data nodes for distributed hypertables 2022-04-06 19:08:09 -05:00
i-norden
2816de15a3 migrations for hypertables and distributed hypertables 2022-04-06 18:44:19 -05:00
Ian Norden
f89ea6134f
Merge pull request #56 from vulcanize/release-v4.0.0-alpha
v4.0.0 alpha
2022-04-05 19:12:30 -05:00
Ian Norden
05600e51d2
Merge pull request #59 from vulcanize/release-v3.2.0
update the pre- and post- batch sets with new meta schema and tables
2022-03-31 12:47:32 -05:00
i-norden
201cadbd49 update the pre- and post- batch sets with new meta schema and tables 2022-03-31 12:45:27 -05:00
Ian Norden
82f28ae6ba
Merge pull request #53 from vulcanize/feature/known_table
Add `known_gaps` table
2022-03-31 11:12:54 -05:00
Abdul Rabbani
63e3d66cc1 Merge branch 'main' into feature/known_table 2022-03-31 12:07:28 -04:00
Abdul Rabbani
2ffb98c6f7 Change file name 2022-03-31 12:06:11 -04:00
Abdul Rabbani
b2f8f63a65 Update file name 2022-03-31 10:47:29 -04:00
Abdul Rabbani
e99787242e Add Migration file 2022-03-31 10:45:11 -04:00
i-norden
2162e73524 update schema 2022-03-28 18:23:30 -05:00
i-norden
da8d0af6df updates order of columns in compound PKs, update indexes e.g. we don't need a btree index on a column if it is the first column in the compound PK index but we do need a btree index for the later columns in a compound PK (searches on first column of a compound index are just as fast as searches on a btree index for that column alone, but searches on the 2nd or 3rd column in a compound index are significantly slower than on dedicated indexes) 2022-03-28 18:22:57 -05:00
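The rule in the commit above can be shown with a toy table: a compound PK on (block_number, tx_hash) already serves lookups on block_number alone, but a lookup on tx_hash alone still needs a dedicated btree index (names here are illustrative, not the repo's schema):

```sql
-- Toy illustration of the compound-index rule; not the repo's schema.
CREATE TABLE example_cids (
    block_number BIGINT NOT NULL,
    tx_hash VARCHAR(66) NOT NULL,
    PRIMARY KEY (block_number, tx_hash) -- serves WHERE block_number = $1
);
-- Needed: the PK index is much slower for WHERE tx_hash = $1 alone.
CREATE INDEX example_cids_tx_hash_index ON example_cids USING btree (tx_hash);
```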
i-norden
82de252160 stored functions for creating state and storage snapshots from the set of all diffs in Postgres 2022-03-28 18:20:29 -05:00
Ian Norden
d8dbd14af8
Merge pull request #54 from deep-stack/ng-watched-addresses-v3
Add a table for watched addresses (v3)
2022-03-23 07:08:59 -05:00
Ian Norden
20c320ac68
Merge pull request #51 from vulcanize/release-v4.0.0-alpha
denormalize tables by block_number
2022-03-21 20:02:44 -05:00
i-norden
ba2550cc01 update pre- and post- sets 2022-03-21 19:53:48 -05:00
679f3e8d79 Remove GO111MODULE flag when installing goose in makefile
Passing GO111MODULE=off with go get throws error. The CI fails due to this when using the makefile.
2022-03-21 17:07:46 +05:30
Abdul Rabbani
a1b75c31e9 Update schema.sql 2022-03-19 09:17:48 -04:00
Abdul Rabbani
35b40e6ff7 Add known_gaps table - DO NOT MERGE YET!!
We will probably need to change the `modulus_block_number` column name
2022-03-18 14:35:07 -04:00
a500c1a49a Add table for watched addresses in eth_meta schema 2022-03-17 19:02:05 +05:30
i-norden
1dc12460dc denormalize tables by block_number so that we can partition all tables by block_number for purposes of sharding 2022-03-15 16:06:13 -05:00
Ian Norden
bba8a410f8
Merge pull request #50 from vulcanize/release-v3.0.7
Release v3.0.7
2022-03-07 11:12:09 -06:00
i-norden
f59582ab42 indexes on receipt cid and mh_key should not be unique, as it is possible (but improbable) that two receipts can be identical 2022-02-16 14:11:23 -06:00
Ian Norden
16e17abb7e
Merge pull request #47 from vulcanize/release-v3.0.6
split pk application into two parts
2022-01-26 13:09:24 -06:00
i-norden
9e7e0377a5 split pk application into two parts 2022-01-26 13:01:41 -06:00
Ian Norden
36de257357
Merge pull request #46 from vulcanize/release-v3.0.5
drop un-unique indexes
2022-01-25 12:33:34 -06:00
i-norden
0710ed1bb1 drop un-unique indexes 2022-01-25 12:31:59 -06:00
Ian Norden
c4254bdff1
Merge pull request #45 from vulcanize/release-v3.0.3
prep for integrated node of v3
2022-01-25 09:29:26 -06:00
i-norden
1088313ab7 prep for integrated node of v3 2022-01-25 08:29:25 -06:00
Ian Norden
ffdf0a0d4d
Merge pull request #43 from vulcanize/release-v3.0.2
v3.0.2
2022-01-19 15:31:13 -06:00
i-norden
edabfcc9c9 migration for applying log_cids.leaf_mh_key FK constraint to check that repair process was completed 2022-01-19 15:21:29 -06:00
Ian Norden
6612edabbe
Merge pull request #41 from vulcanize/release-v3.0.1
disable and re-enable indexes correctly
2022-01-10 21:52:25 -06:00
i-norden
ae1a6e9d31 disable and re-enable indexes correctly 2022-01-10 21:51:04 -06:00
Ian Norden
35a1ba7218
Merge pull request #40 from vulcanize/release-v3.0.0
Release v3.0.0
2022-01-10 12:49:57 -06:00
i-norden
4699385a21 new make targets for rolling back pre and post-batch migrations 2022-01-10 12:13:52 -06:00
i-norden
a2c98550b7 mig to upgrade schema version in db; mig to remove temporary indexes for logTrie fix 2022-01-10 12:13:49 -06:00
i-norden
4ad1250ea1 split out new post-batch migration for indexes that will accelerate our logTrie repair process 2022-01-10 11:53:55 -06:00
Ian Norden
3fb695c1e6
Merge pull request #39 from vulcanize/release-v0.3.3
Release v0.3.3
2022-01-07 13:44:36 -06:00
i-norden
b5b9950f1d default val for GOPATH in Makefile 2022-01-07 13:41:40 -06:00
i-norden
858d6ee85a set GO111MODULE=off when go getting 2022-01-07 13:07:57 -06:00
i-norden
d313d4823e new make targets for running up migrations one at a time 2022-01-07 12:56:19 -06:00
i-norden
8b64621f46 move public.blocks PK application into own migration to be applied alone 2022-01-07 12:52:45 -06:00
i-norden
70d571e03e rename file, no schema changes 2022-01-07 12:51:29 -06:00
Ian Norden
0e865ad735
Merge pull request #37 from vulcanize/version_table
singleton table for internally recording the database schema version
2021-12-28 23:00:01 -06:00
i-norden
e2f261caa9 include timestamp for when version is updated 2021-12-27 13:52:52 -06:00
i-norden
28be11187c updated uml 2021-12-27 11:41:30 -06:00
i-norden
3833fb85e2 singleton table for internally recording the database schema version 2021-12-27 11:35:32 -06:00
Ian Norden
762bc3607a
Merge pull request #36 from vulcanize/release-v0.3.1
Release v0.3.1
2021-12-20 12:50:56 -06:00
Ian Norden
6c75ec7196
Merge pull request #34 from vulcanize/postgres_refactor
New DB schema
2021-12-20 12:16:53 -06:00
i-norden
ba962f67f4 updated UML 2021-12-20 12:14:03 -06:00
Ian Norden
e64e6bad13
Merge pull request #29 from vulcanize/impl-postgraphile-trigger
Implement postgraphile subscriptions for the updated schema
2021-12-20 11:37:17 -06:00
Arijit Das
e532192a6d Fix CI test. 2021-12-20 21:20:50 +05:30
Arijit Das
db48f344f0 Fix failing CI and update incremental migration. 2021-12-20 14:07:48 +05:30
Arijit Das
852b5da40c Address comments and add subscription for state_accounts and access_list_elements 2021-12-14 15:57:54 +05:30
Arijit Das
35f72bc438 Fix canonical_header_id function. 2021-12-08 13:31:40 +05:30
Arijit Das
ed183ed182 Implement postgraphile trigger. 2021-12-08 13:30:59 +05:30
Ian Norden
8fe193be69
Merge pull request #27 from vulcanize/schema_updates
Schema updates
2021-11-26 09:23:15 -06:00
i-norden
925dba5759 reorder columns to match order of values written to .sql files so that when we convert to .csv and load using COPY FROM it doesn't need to sort 2021-11-25 11:33:39 -06:00
i-norden
36dc668976 proposed schema changes 2021-11-25 11:33:39 -06:00
Ian Norden
6c27203579
Merge pull request #26 from vulcanize/schema_updates
revert my data type mistake
2021-11-24 17:56:20 -06:00
i-norden
636fa35b84 revert my data type mistake 2021-11-24 17:52:21 -06:00
Ian Norden
1c1826773b
Merge pull request #25 from vulcanize/schema_updates
make tables unlogged for batch import
2021-11-23 11:04:37 -06:00
i-norden
217bbf9523 make tables unlogged for batch import 2021-11-22 23:41:32 -06:00
Ian Norden
65be24efaa
Merge pull request #24 from vulcanize/schema_updates
use BIGINT not NUMERIC, it is faster for arithmetic operations and us…
2021-11-20 14:30:10 -06:00
i-norden
4392505b16 use BIGINT not NUMERIC, it is faster for arithmetic operations and uses less space 2021-11-20 13:49:03 -06:00
Ian Norden
86ccdcd7e7
Merge pull request #23 from vulcanize/add_back_indexes
add back indexes
2021-11-18 19:11:34 -06:00
i-norden
1b4a6fc2e3 split pre and post batch processing migrations apart; create make targets for running each 2021-11-18 19:03:37 -06:00
i-norden
cb4d0a1275 fix access list and log pks 2021-11-18 18:53:55 -06:00
Ian Norden
827de908f4
Merge pull request #22 from vulcanize/strip_pks_and_fks
Migrations to add back pks and fks
2021-11-18 18:12:33 -06:00
i-norden
b8f0c194b8 remove pks; migrations to add back pks and fks 2021-11-18 17:46:50 -06:00
i-norden
0fb07a0a77 use node_id as PK/FK 2021-11-18 08:19:36 -06:00
i-norden
6105603585 remove postgraphile trigger, it needs to be reworked after PK updates 2021-11-17 18:56:04 -06:00
i-norden
8e644ebdf4 set of migrations for the parallel batch processing 2021-11-15 11:09:17 -06:00
Ian Norden
68e2b1e5cd
Merge pull request #18 from vulcanize/schema_update
new natural primary key scheme
2021-11-15 11:05:26 -06:00
i-norden
7cc9201a94 new natural primary key scheme 2021-11-14 18:23:26 -06:00
45 changed files with 1557 additions and 1726 deletions


@@ -0,0 +1,23 @@
name: Basic test
on: [pull_request]
jobs:
  basic-test:
    name: Build and sanity check
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build Docker images
        run: docker compose build
      - name: Run Docker containers
        run: docker compose up -d
      - name: Check migration version
        timeout-minutes: 1
        run: |
          MIGRATION_VERSION=$(ls db/migrations/*.sql | wc -l)
          while
            version=$(docker compose run --rm migrations version 2>&1 | tail -1 | awk '{print $(NF)}')
            [[ $version != $MIGRATION_VERSION ]]; do
            echo "Incorrect version: $version"
            echo "Retrying..."
          done


@@ -0,0 +1,26 @@
name: Publish Docker image
on:
  release:
    types: [published, edited]
jobs:
  build:
    name: Build and publish image
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - id: vars
        name: Output SHA and version tag
        run: |
          echo "sha=${GITHUB_SHA:0:7}" >> $GITHUB_OUTPUT
          echo "tag=${GITHUB_REF#refs/tags/}" >> $GITHUB_OUTPUT
      - name: Build and tag image
        run: |
          docker build . \
            -t cerc-io/ipld-eth-db \
            -t git.vdb.to/cerc-io/ipld-eth-db/ipld-eth-db:${{steps.vars.outputs.sha}} \
            -t git.vdb.to/cerc-io/ipld-eth-db/ipld-eth-db:${{steps.vars.outputs.tag}}
      - name: Push image tags
        run: |
          echo ${{ secrets.CICD_PUBLISH_TOKEN }} | docker login https://git.vdb.to -u cerccicd --password-stdin
          docker push git.vdb.to/cerc-io/ipld-eth-db/ipld-eth-db:${{steps.vars.outputs.sha}}
          docker push git.vdb.to/cerc-io/ipld-eth-db/ipld-eth-db:${{steps.vars.outputs.tag}}


@@ -1,25 +0,0 @@
name: Docker Compose Build
on:
  push:
    branches:
      - main
jobs:
  build:
    name: Run docker build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Get the version
        id: vars
        run: echo ::set-output name=sha::$(echo ${GITHUB_SHA:0:7})
      - name: Run docker build
        run: make docker-build
      - name: Tag docker image
        run: docker tag vulcanize/ipld-eth-db docker.pkg.github.com/vulcanize/ipld-eth-db/ipld-eth-db:${{steps.vars.outputs.sha}}
      - name: Docker Login
        run: echo ${{ secrets.GITHUB_TOKEN }} | docker login https://docker.pkg.github.com -u vulcanize --password-stdin
      - name: Docker Push
        run: docker push docker.pkg.github.com/vulcanize/ipld-eth-db/ipld-eth-db:${{steps.vars.outputs.sha}}


@@ -1,63 +0,0 @@
name: Docker Build
on: [pull_request]
jobs:
  concise_migration_diff:
    name: Verify concise migration and generated schema
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Run docker concise migration build
        run: make docker-concise-migration-build
      - name: Run database
        run: docker-compose -f docker-compose.test.yml up -d test-db
      - name: Test concise migration
        run: |
          sleep 10
          docker run --rm --network host -e DATABASE_USER=vdbm -e DATABASE_PASSWORD=password \
            -e DATABASE_HOSTNAME=127.0.0.1 -e DATABASE_PORT=8066 -e DATABASE_NAME=vulcanize_testing \
            vulcanize/concise-migration-build
      - name: Verify schema is latest
        run: |
          PGPASSWORD="password" pg_dump -h localhost -p 8066 -U vdbm vulcanize_testing --no-owner --schema-only > ./db/migration_schema.sql
          ./scripts/check_diff.sh ./db/migration_schema.sql db/schema.sql
  incremental_migration_diff:
    name: Compare concise migration schema with incremental migration.
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Run database
        run: docker-compose -f docker-compose.test.yml up -d test-db statediff-migrations
      - name: Test incremental migration
        run: |
          sleep 10
          docker run --rm --network host -e DATABASE_USER=vdbm -e DATABASE_PASSWORD=password \
            -e DATABASE_HOSTNAME=127.0.0.1 -e DATABASE_PORT=8066 -e DATABASE_NAME=vulcanize_testing \
            vulcanize/statediff-migrations:v0.9.0
      - name: Verify schema is latest
        run: |
          PGPASSWORD="password" pg_dump -h localhost -p 8066 -U vdbm vulcanize_testing --no-owner --schema-only > ./db/migration_schema.sql
          ./scripts/check_diff.sh db/schema.sql ./db/migration_schema.sql
  migration:
    name: Compare up and down migration
    env:
      GOPATH: /tmp/go
    strategy:
      matrix:
        go-version: [ 1.16.x ]
        os: [ ubuntu-latest ]
    runs-on: ${{ matrix.os }}
    steps:
      - name: Create GOPATH
        run: mkdir -p /tmp/go
      - name: Install Go
        uses: actions/setup-go@v2
        with:
          go-version: ${{ matrix.go-version }}
      - uses: actions/checkout@v2
      - name: Test migration
        run: |
          timeout 5m make test-migrations


@@ -1,24 +0,0 @@
name: Publish Docker image
on:
  release:
    types: [published]
jobs:
  push_to_registries:
    name: Push Docker image to Docker Hub
    runs-on: ubuntu-latest
    steps:
      - name: Get the version
        id: vars
        run: |
          echo ::set-output name=sha::$(echo ${GITHUB_SHA:0:7})
          echo ::set-output name=tag::$(echo ${GITHUB_REF#refs/tags/})
      - name: Docker Login to Github Registry
        run: echo ${{ secrets.GITHUB_TOKEN }} | docker login https://docker.pkg.github.com -u vulcanize --password-stdin
      - name: Docker Pull
        run: docker pull docker.pkg.github.com/vulcanize/ipld-eth-db/ipld-eth-db:${{steps.vars.outputs.sha}}
      - name: Docker Login to Docker Registry
        run: echo ${{ secrets.VULCANIZEJENKINS_PAT }} | docker login -u vulcanizejenkins --password-stdin
      - name: Tag docker image
        run: docker tag docker.pkg.github.com/vulcanize/ipld-eth-db/ipld-eth-db:${{steps.vars.outputs.sha}} vulcanize/ipld-eth-db:${{steps.vars.outputs.tag}}
      - name: Docker Push to Docker Hub
        run: docker push vulcanize/ipld-eth-db:${{steps.vars.outputs.tag}}

.gitignore vendored

@@ -1 +1,2 @@
 .idea/
+.vscode


@@ -1,3 +1,19 @@
-FROM postgres:12-alpine
-COPY ./schema.sql /docker-entrypoint-initdb.d/init.sql
+FROM alpine as builder
+
+# Get migration tool
+WORKDIR /
+ARG GOOSE_VERSION="v3.6.1"
+RUN arch=$(arch | sed s/aarch64/arm64/) && \
+    wget -O ./goose https://github.com/pressly/goose/releases/download/${GOOSE_VERSION}/goose_linux_${arch}
+RUN chmod +x ./goose
+
+# app container
+FROM alpine
+WORKDIR /app
+
+COPY --from=builder /goose goose
+ADD scripts/startup_script.sh .
+ADD db/migrations migrations
+
+ENTRYPOINT ["/app/startup_script.sh"]


@@ -1,94 +0,0 @@
BIN = $(GOPATH)/bin

# Tools
## Migration tool
GOOSE = $(BIN)/goose
$(BIN)/goose:
	go get -u github.com/pressly/goose/cmd/goose

.PHONY: installtools
installtools: | $(GOOSE)
	echo "Installing tools"

#Database
HOST_NAME = localhost
PORT = 5432
NAME =
USER = postgres
PASSWORD = password
CONNECT_STRING=postgresql://$(USER):$(PASSWORD)@$(HOST_NAME):$(PORT)/$(NAME)?sslmode=disable

# Parameter checks
## Check that DB variables are provided
.PHONY: checkdbvars
checkdbvars:
	test -n "$(HOST_NAME)" # $$HOST_NAME
	test -n "$(PORT)" # $$PORT
	test -n "$(NAME)" # $$NAME
	@echo $(CONNECT_STRING)

## Check that the migration variable (id/timestamp) is provided
.PHONY: checkmigration
checkmigration:
	test -n "$(MIGRATION)" # $$MIGRATION

# Check that the migration name is provided
.PHONY: checkmigname
checkmigname:
	test -n "$(NAME)" # $$NAME

# Migration operations
## Rollback the last migration
.PHONY: rollback
rollback: $(GOOSE) checkdbvars
	$(GOOSE) -dir db/migrations postgres "$(CONNECT_STRING)" down
	pg_dump -O -s $(CONNECT_STRING) > schema.sql

## Rollback to a select migration (id/timestamp)
.PHONY: rollback_to
rollback_to: $(GOOSE) checkmigration checkdbvars
	$(GOOSE) -dir db/migrations postgres "$(CONNECT_STRING)" down-to "$(MIGRATION)"

## Apply all migrations not already run
.PHONY: migrate
migrate: $(GOOSE) checkdbvars
	$(GOOSE) -dir db/migrations postgres "$(CONNECT_STRING)" up
	pg_dump -O -s $(CONNECT_STRING) > schema.sql

## Create a new migration file
.PHONY: new_migration
new_migration: $(GOOSE) checkmigname
	$(GOOSE) -dir db/migrations create $(NAME) sql

## Check which migrations are applied at the moment
.PHONY: migration_status
migration_status: $(GOOSE) checkdbvars
	$(GOOSE) -dir db/migrations postgres "$(CONNECT_STRING)" status

# Convert timestamped migrations to versioned (to be run in CI);
# merge timestamped files to prevent conflict
.PHONY: version_migrations
version_migrations:
	$(GOOSE) -dir db/migrations fix

# Import a psql schema to the database
.PHONY: import
import:
	test -n "$(NAME)" # $$NAME
	psql $(NAME) < schema.sql

## Build docker image with schema
.PHONY: docker-build
docker-build:
	docker-compose build

## Build docker image for migration
.PHONY: docker-concise-migration-build
docker-concise-migration-build:
	docker build -t vulcanize/concise-migration-build -f ./db/Dockerfile .

.PHONY: test-migrations
test-migrations: $(GOOSE)
	./scripts/test_migration.sh

README.md

@@ -2,4 +2,262 @@
Schemas and utils for IPLD ETH Postgres database
## Database UML
![](vulcanize_db.png)
## Run
* Remove any existing containers / volumes:
```bash
docker-compose down -v --remove-orphans
```
* Spin up `ipld-eth-db` using an existing image:
* Update image source used for running the migrations in [docker-compose.yml](./docker-compose.yml) (if required).
* Run:
```
docker-compose -f docker-compose.yml up
```
* Spin up `ipld-eth-db` using a locally built image:
* Update [Dockerfile](./Dockerfile) (if required).
* Update build context used for running the migrations in [docker-compose.test.yml](./docker-compose.test.yml) (if required).
* Run:
```
docker-compose -f docker-compose.test.yml up --build
```
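Once the stack is up, Postgres listens on `127.0.0.1:8077` (per the compose file). A minimal sketch of waiting until that port accepts connections before running migrations or queries against it (the helper name and timeout are arbitrary choices, not part of this repo):

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 30.0) -> bool:
    """Poll until a TCP connection to host:port succeeds, or timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return True
        except OSError:
            # Not accepting connections yet; back off briefly and retry.
            time.sleep(0.5)
    return False

# e.g. wait_for_port("127.0.0.1", 8077) before connecting a client
```

Note this only confirms the port is open; the compose healthcheck (`pg_isready`) is what gates the migrations container on the database actually being ready.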
## Example queries
In the queries below, filtering by block_number in addition to block_hash is optional where both are provided.
However, since the tables are partitioned by block_number, including it improves query performance by telling
the query planner which partition it needs to search.
### Headers
Retrieve header RLP (IPLD block) and CID for a given block hash
```sql
SELECT header_cids.cid,
blocks.data
FROM ipld.blocks,
eth.header_cids
WHERE header_cids.block_hash = {block_hash}
AND header_cids.block_number = {block_number}
AND header_cids.canonical
AND blocks.key = header_cids.cid
AND blocks.block_number = header_cids.block_number
LIMIT 1
```
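The query above is straightforward to drive from a client. A minimal sketch of templating it with both predicates (the `%s` placeholder style follows psycopg-like drivers and is an assumption, as is the helper name):

```python
# Parameterize the header lookup with BOTH block_hash and block_number,
# so the query planner can prune to a single partition.
HEADER_QUERY = """
SELECT header_cids.cid,
       blocks.data
FROM ipld.blocks,
     eth.header_cids
WHERE header_cids.block_hash = %s
  AND header_cids.block_number = %s
  AND header_cids.canonical
  AND blocks.key = header_cids.cid
  AND blocks.block_number = header_cids.block_number
LIMIT 1
"""

def header_lookup(block_hash: str, block_number: int):
    """Return (sql, params) ready to pass to cursor.execute()."""
    return HEADER_QUERY, (block_hash, block_number)
```

Passing the number alongside the hash is what lets Postgres skip every partition except the one holding that block.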
### Uncles
Retrieve the uncle list RLP (IPLD block) and CID for a given block hash
```sql
SELECT uncle_cids.cid,
blocks.data
FROM eth.uncle_cids
INNER JOIN eth.header_cids ON (
uncle_cids.header_id = header_cids.block_hash
AND uncle_cids.block_number = header_cids.block_number)
INNER JOIN ipld.blocks ON (
uncle_cids.cid = blocks.key
AND uncle_cids.block_number = blocks.block_number)
WHERE header_cids.block_hash = {block_hash}
AND header_cids.block_number = {block_number}
ORDER BY uncle_cids.parent_hash
LIMIT 1
```
### Transactions
Retrieve an ordered list of all the RLP encoded transactions (IPLD blocks) and their CIDs for a given block hash
```sql
SELECT transaction_cids.cid,
blocks.data
FROM eth.transaction_cids,
eth.header_cids,
ipld.blocks
WHERE header_cids.block_hash = {block_hash}
AND header_cids.block_number = {block_number}
AND header_cids.canonical
AND transaction_cids.block_number = header_cids.block_number
AND transaction_cids.header_id = header_cids.block_hash
AND blocks.block_number = header_cids.block_number
AND blocks.key = transaction_cids.cid
ORDER BY eth.transaction_cids.index ASC
```
Retrieve an RLP encoded transaction (IPLD block), the hash and number of the block it belongs to, and its index
in that block's transaction list, for a provided transaction hash
```sql
SELECT blocks.data,
transaction_cids.header_id,
transaction_cids.block_number,
transaction_cids.index
FROM eth.transaction_cids,
ipld.blocks,
eth.header_cids
WHERE transaction_cids.tx_hash = {transaction_hash}
AND header_cids.block_hash = transaction_cids.header_id
AND header_cids.block_number = transaction_cids.block_number
AND header_cids.canonical
AND blocks.key = transaction_cids.cid
AND blocks.block_number = transaction_cids.block_number
```
### Receipts
Retrieve an ordered list of all the RLP encoded receipts (IPLD blocks), their CIDs, and their corresponding transaction
hashes for a given block hash
```sql
SELECT receipt_cids.cid,
blocks.data,
eth.transaction_cids.tx_hash
FROM eth.receipt_cids,
eth.transaction_cids,
eth.header_cids,
ipld.blocks
WHERE header_cids.block_hash = {block_hash}
AND header_cids.block_number = {block_number}
AND header_cids.canonical
AND receipt_cids.block_number = header_cids.block_number
AND receipt_cids.header_id = header_cids.block_hash
AND receipt_cids.tx_id = transaction_cids.tx_hash
AND transaction_cids.block_number = header_cids.block_number
AND transaction_cids.header_id = header_cids.block_hash
AND blocks.block_number = header_cids.block_number
AND blocks.key = receipt_cids.cid
ORDER BY eth.transaction_cids.index ASC
```
Retrieve the RLP encoded receipt (IPLD) and CID corresponding to a provided transaction hash
```sql
SELECT receipt_cids.cid,
blocks.data
FROM eth.receipt_cids
INNER JOIN eth.transaction_cids ON (
receipt_cids.tx_id = transaction_cids.tx_hash
AND receipt_cids.block_number = transaction_cids.block_number)
INNER JOIN ipld.blocks ON (
receipt_cids.cid = blocks.key
AND receipt_cids.block_number = blocks.block_number)
WHERE transaction_cids.tx_hash = {transaction_hash}
```
### Logs
Retrieve all the logs and their associated transaction hashes at a given block that were emitted from
any of the provided contract addresses and which match on any of the provided topics
```sql
SELECT blocks.data,
eth.transaction_cids.tx_hash
FROM eth.log_cids
INNER JOIN eth.transaction_cids ON (
log_cids.rct_id = transaction_cids.tx_hash
AND log_cids.header_id = transaction_cids.header_id
AND log_cids.block_number = transaction_cids.block_number)
INNER JOIN ipld.blocks ON (
log_cids.cid = blocks.key
AND log_cids.block_number = blocks.block_number)
WHERE log_cids.header_id = {block_hash}
AND log_cids.block_number = {block_number}
AND eth.log_cids.address = ANY ({list,of,addresses})
AND eth.log_cids.topic0 = ANY ({list,of,topic0s})
AND eth.log_cids.topic1 = ANY ({list,of,topic1s})
AND eth.log_cids.topic2 = ANY ({list,of,topic2s})
AND eth.log_cids.topic3 = ANY ({list,of,topic3s})
ORDER BY eth.transaction_cids.index, eth.log_cids.index
```
Retrieve all the logs and their associated transaction hashes within a provided block range that were emitted from
any of the provided contract addresses and which match on any of the provided topics
```sql
SELECT blocks.data,
eth.transaction_cids.tx_hash
FROM eth.log_cids
INNER JOIN eth.transaction_cids ON (
log_cids.rct_id = transaction_cids.tx_hash
AND log_cids.header_id = transaction_cids.header_id
AND log_cids.block_number = transaction_cids.block_number)
INNER JOIN eth.header_cids ON (
transaction_cids.header_id = header_cids.block_hash
AND transaction_cids.block_number = header_cids.block_number)
INNER JOIN ipld.blocks ON (
log_cids.cid = blocks.key
AND log_cids.block_number = blocks.block_number)
WHERE eth.header_cids.block_number >= {range_start} AND eth.header_cids.block_number <= {range_stop}
AND eth.header_cids.canonical
AND eth.log_cids.address = ANY ({list,of,addresses})
AND eth.log_cids.topic0 = ANY ({list,of,topic0s})
AND eth.log_cids.topic1 = ANY ({list,of,topic1s})
AND eth.log_cids.topic2 = ANY ({list,of,topic2s})
AND eth.log_cids.topic3 = ANY ({list,of,topic3s})
ORDER BY eth.header_cids.block_number, eth.transaction_cids.index, eth.log_cids.index
```
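Both log queries take Postgres array parameters for the address and topic filters. A sketch of assembling those predicates and parameters on the client side (the helper name and the psycopg-style `%s` placeholders are illustrative assumptions; an empty list for a position means "don't filter on it", so the corresponding predicate is omitted):

```python
def log_filter_params(addresses, topics):
    """Build ANY(...) predicates and array params for the log queries.

    `topics` is a list of up to four lists (topic0..topic3); an empty or
    missing list for a position means that position matches anything, so no
    predicate is emitted for it.
    """
    # Pad to exactly four topic positions.
    topics = (list(topics) + [[], [], [], []])[:4]
    predicates, params = [], []
    if addresses:
        predicates.append("log_cids.address = ANY(%s)")
        params.append(addresses)
    for i, topic_list in enumerate(topics):
        if topic_list:
            predicates.append(f"log_cids.topic{i} = ANY(%s)")
            params.append(topic_list)
    return predicates, params
```

The resulting predicate strings are joined with `AND` onto the base query; drivers like psycopg adapt the Python lists to Postgres arrays for `ANY`.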
### State and storage
Retrieve the state account for a given address hash at a provided block hash. If `state_cids.removed == true` then
the account is empty.
```sql
SELECT state_cids.nonce,
state_cids.balance,
state_cids.storage_root,
state_cids.code_hash,
state_cids.removed
FROM eth.state_cids,
eth.header_cids
WHERE state_cids.state_leaf_key = {address_hash}
AND state_cids.block_number <=
(SELECT block_number
FROM eth.header_cids
WHERE block_hash = {block_hash}
LIMIT 1)
AND header_cids.canonical
AND state_cids.header_id = header_cids.block_hash
AND state_cids.block_number = header_cids.block_number
ORDER BY state_cids.block_number DESC
LIMIT 1
```
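As noted, `removed = true` means the account does not exist at that height. A sketch of mapping the result row accordingly (the row layout mirrors the query's select list; the type and function names are illustrative):

```python
from typing import NamedTuple, Optional

class Account(NamedTuple):
    nonce: int
    balance: int
    storage_root: str
    code_hash: str

def account_from_row(row) -> Optional[Account]:
    """Return the account, or None if the state leaf was removed.

    `row` mirrors the select list: (nonce, balance, storage_root,
    code_hash, removed).
    """
    nonce, balance, storage_root, code_hash, removed = row
    if removed:
        return None  # account is empty at this block
    return Account(nonce, balance, storage_root, code_hash)
```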
Retrieve a storage value, as well as the RLP encoded leaf node that stores it, for a given contract address hash and
storage leaf key (storage slot hash) at a provided block hash. If `state_leaf_removed == true`
or `storage_cids.removed == true` then the slot is empty
```sql
SELECT storage_cids.cid,
storage_cids.val,
storage_cids.block_number,
storage_cids.removed,
was_state_leaf_removed_by_number(storage_cids.state_leaf_key, storage_cids.block_number) AS state_leaf_removed,
blocks.data
FROM eth.storage_cids,
eth.header_cids,
ipld.blocks
WHERE header_cids.block_number <= (SELECT block_number FROM eth.header_cids WHERE block_hash = {block_hash} LIMIT 1)
AND header_cids.canonical
AND storage_cids.block_number = header_cids.block_number
AND storage_cids.header_id = header_cids.block_hash
AND storage_cids.storage_leaf_key = {storage_slot_hash}
AND storage_cids.state_leaf_key = {contract_address_hash}
AND blocks.key = storage_cids.cid
AND blocks.block_number = storage_cids.block_number
ORDER BY storage_cids.block_number DESC LIMIT 1
```
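Interpreting the result follows the rule stated above: the slot is empty if either removal flag is set. A sketch of that client-side post-processing (the function name is illustrative; the row layout mirrors the query's select list):

```python
def storage_slot_value(row):
    """Map a storage query result row to the slot's value.

    `row` mirrors the select list: (cid, val, block_number, removed,
    state_leaf_removed, data). Returns None for an empty slot, i.e. when
    either the enclosing state leaf or the storage node itself was removed.
    """
    cid, val, block_number, removed, state_leaf_removed, data = row
    if removed or state_leaf_removed:
        return None
    return val
```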

compose.yml
@@ -0,0 +1,34 @@
services:
migrations:
restart: on-failure
depends_on:
ipld-eth-db:
condition: service_healthy
# Use local build
build:
context: .
dockerfile: Dockerfile
# Use an existing image
image: cerc/ipld-eth-db:local
environment:
DATABASE_USER: "vdbm"
DATABASE_NAME: "cerc_testing"
DATABASE_PASSWORD: "password"
DATABASE_HOSTNAME: "ipld-eth-db"
DATABASE_PORT: 5432
ipld-eth-db:
image: timescale/timescaledb:latest-pg14
restart: always
command: ["postgres", "-c", "log_statement=all"]
environment:
POSTGRES_USER: "vdbm"
POSTGRES_DB: "cerc_testing"
POSTGRES_PASSWORD: "password"
ports:
- "127.0.0.1:8077:5432"
healthcheck:
test: ["CMD", "pg_isready", "-U", "vdbm"]
interval: 2s
timeout: 1s
retries: 3

@@ -1,23 +0,0 @@
FROM golang:1.16-alpine as builder
RUN apk --update --no-cache add make git g++ linux-headers
ADD . /go/src/github.com/vulcanize/ipld-eth-db
# Build migration tool
WORKDIR /go/src/github.com/pressly
RUN git clone https://github.com/pressly/goose.git
WORKDIR /go/src/github.com/pressly/goose/cmd/goose
RUN GCO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -ldflags '-extldflags "-static"' -tags='no_mysql no_sqlite' -o goose .
# app container
FROM alpine
WORKDIR /app
COPY --from=builder /go/src/github.com/vulcanize/ipld-eth-db/scripts/startup_script.sh .
COPY --from=builder /go/src/github.com/pressly/goose/cmd/goose/goose goose
COPY --from=builder /go/src/github.com/vulcanize/ipld-eth-db/db/migrations migrations/vulcanizedb
ENTRYPOINT ["/app/startup_script.sh"]

@@ -1,8 +1,13 @@
 -- +goose Up
-CREATE TABLE IF NOT EXISTS public.blocks (
-    key TEXT UNIQUE NOT NULL,
-    data BYTEA NOT NULL
+CREATE SCHEMA ipld;
+
+CREATE TABLE IF NOT EXISTS ipld.blocks (
+    block_number BIGINT NOT NULL,
+    key TEXT NOT NULL,
+    data BYTEA NOT NULL,
+    PRIMARY KEY (key, block_number)
 );
 -- +goose Down
-DROP TABLE public.blocks;
+DROP TABLE ipld.blocks;
+DROP SCHEMA ipld;

@@ -1,12 +1,10 @@
 -- +goose Up
-CREATE TABLE nodes (
-    id SERIAL PRIMARY KEY,
-    client_name VARCHAR,
-    genesis_block VARCHAR(66),
-    network_id VARCHAR,
-    node_id VARCHAR(128),
-    chain_id INTEGER DEFAULT 1,
-    CONSTRAINT node_uc UNIQUE (genesis_block, network_id, node_id, chain_id)
+CREATE TABLE IF NOT EXISTS nodes (
+    genesis_block VARCHAR(66),
+    network_id VARCHAR,
+    node_id VARCHAR(128) PRIMARY KEY,
+    client_name VARCHAR,
+    chain_id INTEGER DEFAULT 1
 );
 -- +goose Down

@@ -1,24 +1,23 @@
 -- +goose Up
-CREATE TABLE eth.header_cids (
-    id SERIAL PRIMARY KEY,
-    block_number BIGINT NOT NULL,
-    block_hash VARCHAR(66) NOT NULL,
-    parent_hash VARCHAR(66) NOT NULL,
-    cid TEXT NOT NULL,
-    mh_key TEXT NOT NULL REFERENCES public.blocks (key) ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED,
-    td NUMERIC NOT NULL,
-    node_id INTEGER NOT NULL REFERENCES nodes (id) ON DELETE CASCADE,
-    reward NUMERIC NOT NULL,
-    state_root VARCHAR(66) NOT NULL,
-    tx_root VARCHAR(66) NOT NULL,
-    receipt_root VARCHAR(66) NOT NULL,
-    uncle_root VARCHAR(66) NOT NULL,
-    bloom BYTEA NOT NULL,
-    timestamp NUMERIC NOT NULL,
-    times_validated INTEGER NOT NULL DEFAULT 1,
-    base_fee BIGINT,
-    UNIQUE (block_number, block_hash)
+CREATE TABLE IF NOT EXISTS eth.header_cids (
+    block_number BIGINT NOT NULL,
+    block_hash VARCHAR(66) NOT NULL,
+    parent_hash VARCHAR(66) NOT NULL,
+    cid TEXT NOT NULL,
+    td NUMERIC NOT NULL,
+    node_ids VARCHAR(128)[] NOT NULL,
+    reward NUMERIC NOT NULL,
+    state_root VARCHAR(66) NOT NULL,
+    tx_root VARCHAR(66) NOT NULL,
+    receipt_root VARCHAR(66) NOT NULL,
+    uncles_hash VARCHAR(66) NOT NULL,
+    bloom BYTEA NOT NULL,
+    timestamp BIGINT NOT NULL,
+    coinbase VARCHAR(66) NOT NULL,
+    canonical BOOLEAN NOT NULL DEFAULT TRUE,
+    withdrawals_root VARCHAR(66) NOT NULL,
+    PRIMARY KEY (block_hash, block_number)
 );
 -- +goose Down
 DROP TABLE eth.header_cids;

@@ -1,14 +1,14 @@
 -- +goose Up
-CREATE TABLE eth.uncle_cids (
-    id SERIAL PRIMARY KEY,
-    header_id INTEGER NOT NULL REFERENCES eth.header_cids (id) ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED,
-    block_hash VARCHAR(66) NOT NULL,
-    parent_hash VARCHAR(66) NOT NULL,
-    cid TEXT NOT NULL,
-    mh_key TEXT NOT NULL REFERENCES public.blocks (key) ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED,
-    reward NUMERIC NOT NULL,
-    UNIQUE (header_id, block_hash)
+CREATE TABLE IF NOT EXISTS eth.uncle_cids (
+    block_number BIGINT NOT NULL,
+    block_hash VARCHAR(66) NOT NULL,
+    header_id VARCHAR(66) NOT NULL,
+    parent_hash VARCHAR(66) NOT NULL,
+    cid TEXT NOT NULL,
+    reward NUMERIC NOT NULL,
+    index INT NOT NULL,
+    PRIMARY KEY (block_hash, block_number)
 );
 -- +goose Down
 DROP TABLE eth.uncle_cids;

@@ -1,16 +1,15 @@
 -- +goose Up
-CREATE TABLE eth.transaction_cids (
-    id SERIAL PRIMARY KEY,
-    header_id INTEGER NOT NULL REFERENCES eth.header_cids (id) ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED,
-    tx_hash VARCHAR(66) NOT NULL,
-    index INTEGER NOT NULL,
-    cid TEXT NOT NULL,
-    mh_key TEXT NOT NULL REFERENCES public.blocks (key) ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED,
-    dst VARCHAR(66) NOT NULL,
-    src VARCHAR(66) NOT NULL,
-    tx_data BYTEA,
-    tx_type BYTEA,
-    UNIQUE (header_id, tx_hash)
+CREATE TABLE IF NOT EXISTS eth.transaction_cids (
+    block_number BIGINT NOT NULL,
+    header_id VARCHAR(66) NOT NULL,
+    tx_hash VARCHAR(66) NOT NULL,
+    cid TEXT NOT NULL,
+    dst VARCHAR(66),
+    src VARCHAR(66) NOT NULL,
+    index INTEGER NOT NULL,
+    tx_type INTEGER,
+    value NUMERIC,
+    PRIMARY KEY (tx_hash, header_id, block_number)
 );
 -- +goose Down

@@ -1,16 +1,14 @@
 -- +goose Up
-CREATE TABLE eth.receipt_cids (
-    id SERIAL PRIMARY KEY,
-    tx_id INTEGER NOT NULL REFERENCES eth.transaction_cids (id) ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED,
-    leaf_cid TEXT NOT NULL,
-    leaf_mh_key TEXT NOT NULL REFERENCES public.blocks (key) ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED,
-    contract VARCHAR(66),
-    contract_hash VARCHAR(66),
-    post_state VARCHAR(66),
-    post_status INTEGER,
-    log_root VARCHAR(66),
-    UNIQUE (tx_id)
+CREATE TABLE IF NOT EXISTS eth.receipt_cids (
+    block_number BIGINT NOT NULL,
+    header_id VARCHAR(66) NOT NULL,
+    tx_id VARCHAR(66) NOT NULL,
+    cid TEXT NOT NULL,
+    contract VARCHAR(66),
+    post_state VARCHAR(66),
+    post_status SMALLINT,
+    PRIMARY KEY (tx_id, header_id, block_number)
 );
 -- +goose Down
 DROP TABLE eth.receipt_cids;

@@ -1,15 +1,17 @@
 -- +goose Up
-CREATE TABLE eth.state_cids (
-    id BIGSERIAL PRIMARY KEY,
-    header_id INTEGER NOT NULL REFERENCES eth.header_cids (id) ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED,
-    state_leaf_key VARCHAR(66),
-    cid TEXT NOT NULL,
-    mh_key TEXT NOT NULL REFERENCES public.blocks (key) ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED,
-    state_path BYTEA,
-    node_type INTEGER NOT NULL,
-    diff BOOLEAN NOT NULL DEFAULT FALSE,
-    UNIQUE (header_id, state_path)
+CREATE TABLE IF NOT EXISTS eth.state_cids (
+    block_number BIGINT NOT NULL,
+    header_id VARCHAR(66) NOT NULL,
+    state_leaf_key VARCHAR(66) NOT NULL,
+    cid TEXT NOT NULL,
+    diff BOOLEAN NOT NULL DEFAULT FALSE,
+    balance NUMERIC, -- NULL if "removed"
+    nonce BIGINT, -- NULL if "removed"
+    code_hash VARCHAR(66), -- NULL if "removed"
+    storage_root VARCHAR(66), -- NULL if "removed"
+    removed BOOLEAN NOT NULL,
+    PRIMARY KEY (state_leaf_key, header_id, block_number)
 );
 -- +goose Down
 DROP TABLE eth.state_cids;

@@ -1,15 +1,15 @@
 -- +goose Up
-CREATE TABLE eth.storage_cids (
-    id BIGSERIAL PRIMARY KEY,
-    state_id BIGINT NOT NULL REFERENCES eth.state_cids (id) ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED,
-    storage_leaf_key VARCHAR(66),
-    cid TEXT NOT NULL,
-    mh_key TEXT NOT NULL REFERENCES public.blocks (key) ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED,
-    storage_path BYTEA,
-    node_type INTEGER NOT NULL,
-    diff BOOLEAN NOT NULL DEFAULT FALSE,
-    UNIQUE (state_id, storage_path)
+CREATE TABLE IF NOT EXISTS eth.storage_cids (
+    block_number BIGINT NOT NULL,
+    header_id VARCHAR(66) NOT NULL,
+    state_leaf_key VARCHAR(66) NOT NULL,
+    storage_leaf_key VARCHAR(66) NOT NULL,
+    cid TEXT NOT NULL,
+    diff BOOLEAN NOT NULL DEFAULT FALSE,
+    val BYTEA, -- NULL if "removed"
+    removed BOOLEAN NOT NULL,
+    PRIMARY KEY (storage_leaf_key, state_leaf_key, header_id, block_number)
 );
 -- +goose Down
 DROP TABLE eth.storage_cids;

@@ -0,0 +1,18 @@
-- +goose Up
CREATE TABLE IF NOT EXISTS eth.log_cids (
block_number BIGINT NOT NULL,
header_id VARCHAR(66) NOT NULL,
cid TEXT NOT NULL,
rct_id VARCHAR(66) NOT NULL,
address VARCHAR(66) NOT NULL,
index INTEGER NOT NULL,
topic0 VARCHAR(66),
topic1 VARCHAR(66),
topic2 VARCHAR(66),
topic3 VARCHAR(66),
PRIMARY KEY (rct_id, index, header_id, block_number)
);
-- +goose Down
-- log indexes
DROP TABLE eth.log_cids;

@@ -1,13 +0,0 @@
-- +goose Up
CREATE TABLE eth.state_accounts (
id SERIAL PRIMARY KEY,
state_id BIGINT NOT NULL REFERENCES eth.state_cids (id) ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED,
balance NUMERIC NOT NULL,
nonce INTEGER NOT NULL,
code_hash BYTEA NOT NULL,
storage_root VARCHAR(66) NOT NULL,
UNIQUE (state_id)
);
-- +goose Down
DROP TABLE eth.state_accounts;

@@ -3,7 +3,7 @@ COMMENT ON TABLE public.nodes IS E'@name NodeInfo';
 COMMENT ON TABLE eth.transaction_cids IS E'@name EthTransactionCids';
 COMMENT ON TABLE eth.header_cids IS E'@name EthHeaderCids';
 COMMENT ON COLUMN public.nodes.node_id IS E'@name ChainNodeID';
-COMMENT ON COLUMN eth.header_cids.node_id IS E'@name EthNodeID';
+COMMENT ON COLUMN eth.header_cids.node_ids IS E'@name EthNodeIDs';
 -- +goose Down
@@ -11,4 +11,4 @@ COMMENT ON TABLE public.nodes IS NULL;
 COMMENT ON TABLE eth.transaction_cids IS NULL;
 COMMENT ON TABLE eth.header_cids IS NULL;
 COMMENT ON COLUMN public.nodes.node_id IS NULL;
-COMMENT ON COLUMN eth.header_cids.node_id IS NULL;
+COMMENT ON COLUMN eth.header_cids.node_ids IS NULL;

@@ -0,0 +1,101 @@
-- +goose Up
-- header indexes
CREATE INDEX header_block_number_index ON eth.header_cids USING btree (block_number);
CREATE UNIQUE INDEX header_cid_block_number_index ON eth.header_cids USING btree (cid, block_number);
CREATE INDEX state_root_index ON eth.header_cids USING btree (state_root);
CREATE INDEX timestamp_index ON eth.header_cids USING btree (timestamp);
-- uncle indexes
CREATE INDEX uncle_block_number_index ON eth.uncle_cids USING btree (block_number);
CREATE UNIQUE INDEX uncle_cid_block_number_index ON eth.uncle_cids USING btree (cid, block_number, index);
CREATE INDEX uncle_header_id_index ON eth.uncle_cids USING btree (header_id);
-- transaction indexes
CREATE INDEX tx_block_number_index ON eth.transaction_cids USING btree (block_number);
CREATE INDEX tx_header_id_index ON eth.transaction_cids USING btree (header_id);
CREATE INDEX tx_cid_block_number_index ON eth.transaction_cids USING btree (cid, block_number);
CREATE INDEX tx_dst_index ON eth.transaction_cids USING btree (dst);
CREATE INDEX tx_src_index ON eth.transaction_cids USING btree (src);
-- receipt indexes
CREATE INDEX rct_block_number_index ON eth.receipt_cids USING btree (block_number);
CREATE INDEX rct_header_id_index ON eth.receipt_cids USING btree (header_id);
CREATE INDEX rct_cid_block_number_index ON eth.receipt_cids USING btree (cid, block_number);
CREATE INDEX rct_contract_index ON eth.receipt_cids USING btree (contract);
-- state node indexes
CREATE INDEX state_block_number_index ON eth.state_cids USING btree (block_number);
CREATE INDEX state_cid_block_number_index ON eth.state_cids USING btree (cid, block_number);
CREATE INDEX state_header_id_index ON eth.state_cids USING btree (header_id);
CREATE INDEX state_removed_index ON eth.state_cids USING btree (removed);
CREATE INDEX state_code_hash_index ON eth.state_cids USING btree (code_hash); -- could be useful for e.g. selecting all the state accounts with the same contract bytecode deployed
CREATE INDEX state_leaf_key_block_number_index ON eth.state_cids(state_leaf_key, block_number DESC);
-- storage node indexes
CREATE INDEX storage_block_number_index ON eth.storage_cids USING btree (block_number);
CREATE INDEX storage_state_leaf_key_index ON eth.storage_cids USING btree (state_leaf_key);
CREATE INDEX storage_cid_block_number_index ON eth.storage_cids USING btree (cid, block_number);
CREATE INDEX storage_header_id_index ON eth.storage_cids USING btree (header_id);
CREATE INDEX storage_removed_index ON eth.storage_cids USING btree (removed);
CREATE INDEX storage_leaf_key_block_number_index ON eth.storage_cids(storage_leaf_key, block_number DESC);
-- log indexes
CREATE INDEX log_block_number_index ON eth.log_cids USING btree (block_number);
CREATE INDEX log_header_id_index ON eth.log_cids USING btree (header_id);
CREATE INDEX log_cid_block_number_index ON eth.log_cids USING btree (cid, block_number);
CREATE INDEX log_address_index ON eth.log_cids USING btree (address);
CREATE INDEX log_topic0_index ON eth.log_cids USING btree (topic0);
CREATE INDEX log_topic1_index ON eth.log_cids USING btree (topic1);
CREATE INDEX log_topic2_index ON eth.log_cids USING btree (topic2);
CREATE INDEX log_topic3_index ON eth.log_cids USING btree (topic3);
-- +goose Down
-- log indexes
DROP INDEX eth.log_topic3_index;
DROP INDEX eth.log_topic2_index;
DROP INDEX eth.log_topic1_index;
DROP INDEX eth.log_topic0_index;
DROP INDEX eth.log_address_index;
DROP INDEX eth.log_cid_block_number_index;
DROP INDEX eth.log_header_id_index;
DROP INDEX eth.log_block_number_index;
-- storage node indexes
DROP INDEX eth.storage_removed_index;
DROP INDEX eth.storage_header_id_index;
DROP INDEX eth.storage_cid_block_number_index;
DROP INDEX eth.storage_state_leaf_key_index;
DROP INDEX eth.storage_block_number_index;
DROP INDEX eth.storage_leaf_key_block_number_index;
-- state node indexes
DROP INDEX eth.state_code_hash_index;
DROP INDEX eth.state_removed_index;
DROP INDEX eth.state_header_id_index;
DROP INDEX eth.state_cid_block_number_index;
DROP INDEX eth.state_block_number_index;
DROP INDEX eth.state_leaf_key_block_number_index;
-- receipt indexes
DROP INDEX eth.rct_contract_index;
DROP INDEX eth.rct_cid_block_number_index;
DROP INDEX eth.rct_header_id_index;
DROP INDEX eth.rct_block_number_index;
-- transaction indexes
DROP INDEX eth.tx_src_index;
DROP INDEX eth.tx_dst_index;
DROP INDEX eth.tx_cid_block_number_index;
DROP INDEX eth.tx_header_id_index;
DROP INDEX eth.tx_block_number_index;
-- uncle indexes
DROP INDEX eth.uncle_block_number_index;
DROP INDEX eth.uncle_cid_block_number_index;
DROP INDEX eth.uncle_header_id_index;
-- header indexes
DROP INDEX eth.timestamp_index;
DROP INDEX eth.state_root_index;
DROP INDEX eth.header_cid_block_number_index;
DROP INDEX eth.header_block_number_index;

@@ -1,69 +0,0 @@
-- +goose Up
-- +goose StatementBegin
CREATE FUNCTION eth.graphql_subscription() returns TRIGGER as $$
declare
table_name text = TG_ARGV[0];
attribute text = TG_ARGV[1];
id text;
begin
execute 'select $1.' || quote_ident(attribute)
using new
into id;
perform pg_notify('postgraphile:' || table_name,
json_build_object(
'__node__', json_build_array(
table_name,
id
)
)::text
);
return new;
end;
$$ language plpgsql;
-- +goose StatementEnd
CREATE TRIGGER header_cids_ai
after INSERT ON eth.header_cids
for each row
execute procedure eth.graphql_subscription('header_cids', 'id');
CREATE TRIGGER receipt_cids_ai
after INSERT ON eth.receipt_cids
for each row
execute procedure eth.graphql_subscription('receipt_cids', 'id');
CREATE TRIGGER state_accounts_ai
after INSERT ON eth.state_accounts
for each row
execute procedure eth.graphql_subscription('state_accounts', 'id');
CREATE TRIGGER state_cids_ai
after INSERT ON eth.state_cids
for each row
execute procedure eth.graphql_subscription('state_cids', 'id');
CREATE TRIGGER storage_cids_ai
after INSERT ON eth.storage_cids
for each row
execute procedure eth.graphql_subscription('storage_cids', 'id');
CREATE TRIGGER transaction_cids_ai
after INSERT ON eth.transaction_cids
for each row
execute procedure eth.graphql_subscription('transaction_cids', 'id');
CREATE TRIGGER uncle_cids_ai
after INSERT ON eth.uncle_cids
for each row
execute procedure eth.graphql_subscription('uncle_cids', 'id');
-- +goose Down
DROP TRIGGER uncle_cids_ai ON eth.uncle_cids;
DROP TRIGGER transaction_cids_ai ON eth.transaction_cids;
DROP TRIGGER storage_cids_ai ON eth.storage_cids;
DROP TRIGGER state_cids_ai ON eth.state_cids;
DROP TRIGGER state_accounts_ai ON eth.state_accounts;
DROP TRIGGER receipt_cids_ai ON eth.receipt_cids;
DROP TRIGGER header_cids_ai ON eth.header_cids;
DROP FUNCTION eth.graphql_subscription();

@@ -1,112 +0,0 @@
-- +goose Up
-- header indexes
CREATE INDEX block_number_index ON eth.header_cids USING brin (block_number);
CREATE INDEX block_hash_index ON eth.header_cids USING btree (block_hash);
CREATE INDEX header_cid_index ON eth.header_cids USING btree (cid);
CREATE INDEX header_mh_index ON eth.header_cids USING btree (mh_key);
CREATE INDEX state_root_index ON eth.header_cids USING btree (state_root);
CREATE INDEX timestamp_index ON eth.header_cids USING brin (timestamp);
-- transaction indexes
CREATE INDEX tx_header_id_index ON eth.transaction_cids USING btree (header_id);
CREATE INDEX tx_hash_index ON eth.transaction_cids USING btree (tx_hash);
CREATE INDEX tx_cid_index ON eth.transaction_cids USING btree (cid);
CREATE INDEX tx_mh_index ON eth.transaction_cids USING btree (mh_key);
CREATE INDEX tx_dst_index ON eth.transaction_cids USING btree (dst);
CREATE INDEX tx_src_index ON eth.transaction_cids USING btree (src);
-- receipt indexes
CREATE INDEX rct_tx_id_index ON eth.receipt_cids USING btree (tx_id);
CREATE INDEX rct_leaf_cid_index ON eth.receipt_cids USING btree (leaf_cid);
CREATE INDEX rct_leaf_mh_index ON eth.receipt_cids USING btree (leaf_mh_key);
CREATE INDEX rct_contract_index ON eth.receipt_cids USING btree (contract);
CREATE INDEX rct_contract_hash_index ON eth.receipt_cids USING btree (contract_hash);
-- state node indexes
CREATE INDEX state_header_id_index ON eth.state_cids USING btree (header_id);
CREATE INDEX state_leaf_key_index ON eth.state_cids USING btree (state_leaf_key);
CREATE INDEX state_cid_index ON eth.state_cids USING btree (cid);
CREATE INDEX state_mh_index ON eth.state_cids USING btree (mh_key);
CREATE INDEX state_path_index ON eth.state_cids USING btree (state_path);
CREATE INDEX state_node_type_index ON eth.state_cids USING btree (node_type);
-- storage node indexes
CREATE INDEX storage_state_id_index ON eth.storage_cids USING btree (state_id);
CREATE INDEX storage_leaf_key_index ON eth.storage_cids USING btree (storage_leaf_key);
CREATE INDEX storage_cid_index ON eth.storage_cids USING btree (cid);
CREATE INDEX storage_mh_index ON eth.storage_cids USING btree (mh_key);
CREATE INDEX storage_path_index ON eth.storage_cids USING btree (storage_path);
CREATE INDEX storage_node_type_index ON eth.storage_cids USING btree (node_type);
-- state accounts indexes
CREATE INDEX account_state_id_index ON eth.state_accounts USING btree (state_id);
CREATE INDEX storage_root_index ON eth.state_accounts USING btree (storage_root);
-- +goose Down
-- state account indexes
DROP INDEX eth.storage_root_index;
DROP INDEX eth.account_state_id_index;
-- storage node indexes
DROP INDEX eth.storage_node_type_index;
DROP INDEX eth.storage_path_index;
DROP INDEX eth.storage_mh_index;
DROP INDEX eth.storage_cid_index;
DROP INDEX eth.storage_leaf_key_index;
DROP INDEX eth.storage_state_id_index;
-- state node indexes
DROP INDEX eth.state_node_type_index;
DROP INDEX eth.state_path_index;
DROP INDEX eth.state_mh_index;
DROP INDEX eth.state_cid_index;
DROP INDEX eth.state_leaf_key_index;
DROP INDEX eth.state_header_id_index;
-- receipt indexes
DROP INDEX eth.rct_contract_hash_index;
DROP INDEX eth.rct_contract_index;
DROP INDEX eth.rct_leaf_mh_index;
DROP INDEX eth.rct_leaf_cid_index;
DROP INDEX eth.rct_tx_id_index;
-- transaction indexes
DROP INDEX eth.tx_src_index;
DROP INDEX eth.tx_dst_index;
DROP INDEX eth.tx_mh_index;
DROP INDEX eth.tx_cid_index;
DROP INDEX eth.tx_hash_index;
DROP INDEX eth.tx_header_id_index;
-- header indexes
DROP INDEX eth.timestamp_index;
DROP INDEX eth.state_root_index;
DROP INDEX eth.header_mh_index;
DROP INDEX eth.header_cid_index;
DROP INDEX eth.block_hash_index;
DROP INDEX eth.block_number_index;

@@ -0,0 +1,12 @@
-- +goose Up
CREATE TABLE IF NOT EXISTS public.db_version (
singleton BOOLEAN NOT NULL DEFAULT TRUE UNIQUE CHECK (singleton),
version TEXT NOT NULL,
tstamp TIMESTAMP WITHOUT TIME ZONE DEFAULT NOW()
);
INSERT INTO public.db_version (singleton, version) VALUES (true, 'v5.0.0')
ON CONFLICT (singleton) DO UPDATE SET (version, tstamp) = ('v5.0.0', NOW());
-- +goose Down
DROP TABLE public.db_version;

@@ -0,0 +1,5 @@
-- +goose Up
CREATE SCHEMA eth_meta;
-- +goose Down
DROP SCHEMA eth_meta;

@@ -1,138 +0,0 @@
-- +goose Up
-- +goose StatementBegin
-- returns if a state leaf node was removed within the provided block number
CREATE OR REPLACE FUNCTION was_state_leaf_removed(key character varying, hash character varying)
RETURNS boolean AS $$
SELECT state_cids.node_type = 3
FROM eth.state_cids
INNER JOIN eth.header_cids ON (state_cids.header_id = header_cids.id)
WHERE state_leaf_key = key
AND block_number <= (SELECT block_number
FROM eth.header_cids
WHERE block_hash = hash)
ORDER BY block_number DESC LIMIT 1;
$$
language sql;
-- +goose StatementEnd
-- +goose StatementBegin
CREATE TYPE child_result AS (
has_child BOOLEAN,
children eth.header_cids[]
);
CREATE OR REPLACE FUNCTION has_child(hash VARCHAR(66), height BIGINT) RETURNS child_result AS
$BODY$
DECLARE
child_height INT;
temp_child eth.header_cids;
new_child_result child_result;
BEGIN
child_height = height + 1;
-- short circuit if there are no children
SELECT exists(SELECT 1
FROM eth.header_cids
WHERE parent_hash = hash
AND block_number = child_height
LIMIT 1)
INTO new_child_result.has_child;
-- collect all the children for this header
IF new_child_result.has_child THEN
FOR temp_child IN
SELECT * FROM eth.header_cids WHERE parent_hash = hash AND block_number = child_height
LOOP
new_child_result.children = array_append(new_child_result.children, temp_child);
END LOOP;
END IF;
RETURN new_child_result;
END
$BODY$
LANGUAGE 'plpgsql';
-- +goose StatementEnd
-- +goose StatementBegin
CREATE OR REPLACE FUNCTION canonical_header_from_array(headers eth.header_cids[]) RETURNS eth.header_cids AS
$BODY$
DECLARE
canonical_header eth.header_cids;
canonical_child eth.header_cids;
header eth.header_cids;
current_child_result child_result;
child_headers eth.header_cids[];
current_header_with_child eth.header_cids;
has_children_count INT DEFAULT 0;
BEGIN
-- for each header in the provided set
FOREACH header IN ARRAY headers
LOOP
-- check if it has any children
current_child_result = has_child(header.block_hash, header.block_number);
IF current_child_result.has_child THEN
-- if it does, take note
has_children_count = has_children_count + 1;
current_header_with_child = header;
-- and add the children to the growing set of child headers
child_headers = array_cat(child_headers, current_child_result.children);
END IF;
END LOOP;
-- if none of the headers had children, none is more canonical than the other
IF has_children_count = 0 THEN
-- return the first one selected
SELECT * INTO canonical_header FROM unnest(headers) LIMIT 1;
-- if only one header had children, it can be considered the heaviest/canonical header of the set
ELSIF has_children_count = 1 THEN
-- return the only header with a child
canonical_header = current_header_with_child;
-- if there are multiple headers with children
ELSE
-- find the canonical header from the child set
canonical_child = canonical_header_from_array(child_headers);
-- the header that is parent to this header, is the canonical header at this level
SELECT * INTO canonical_header FROM unnest(headers)
WHERE block_hash = canonical_child.parent_hash;
END IF;
RETURN canonical_header;
END
$BODY$
LANGUAGE 'plpgsql';
-- +goose StatementEnd
-- +goose StatementBegin
CREATE OR REPLACE FUNCTION canonical_header_id(height BIGINT) RETURNS INTEGER AS
$BODY$
DECLARE
canonical_header eth.header_cids;
headers eth.header_cids[];
header_count INT;
temp_header eth.header_cids;
BEGIN
-- collect all headers at this height
FOR temp_header IN
SELECT * FROM eth.header_cids WHERE block_number = height
LOOP
headers = array_append(headers, temp_header);
END LOOP;
-- count the number of headers collected
header_count = array_length(headers, 1);
-- if we have no headers, return NULL
IF header_count IS NULL OR header_count < 1 THEN
RETURN NULL;
-- if we have one header, return its id
ELSIF header_count = 1 THEN
RETURN headers[1].id;
-- if we have multiple headers we need to determine which one is canonical
ELSE
canonical_header = canonical_header_from_array(headers);
RETURN canonical_header.id;
END IF;
END;
$BODY$
LANGUAGE 'plpgsql';
-- +goose StatementEnd
-- +goose Down
DROP FUNCTION was_state_leaf_removed;
DROP FUNCTION canonical_header_id;
DROP FUNCTION canonical_header_from_array;
DROP FUNCTION has_child;
DROP TYPE child_result;
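For orientation, the fork-resolution entry point above can be exercised directly once the migration is applied. A minimal sketch (the block height is illustrative; the result depends on the headers actually indexed):

```sql
-- Hypothetical call: resolve the canonical header id at a contested height.
-- Returns NULL when no header has been indexed at that height.
SELECT canonical_header_id(1000000);
```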

@ -1,15 +0,0 @@
-- +goose Up
CREATE TABLE eth.access_list_element (
id SERIAL PRIMARY KEY,
tx_id INTEGER NOT NULL REFERENCES eth.transaction_cids (id) ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED,
index INTEGER NOT NULL,
address VARCHAR(66),
storage_keys VARCHAR(66)[],
UNIQUE (tx_id, index)
);
CREATE INDEX access_list_element_address_index ON eth.access_list_element USING btree (address);
-- +goose Down
DROP INDEX eth.access_list_element_address_index;
DROP TABLE eth.access_list_element;

@ -0,0 +1,10 @@
-- +goose Up
CREATE TABLE eth_meta.watched_addresses (
address VARCHAR(66) PRIMARY KEY,
created_at BIGINT NOT NULL,
watched_at BIGINT NOT NULL,
last_filled_at BIGINT NOT NULL DEFAULT 0
);
-- +goose Down
DROP TABLE eth_meta.watched_addresses;

@ -1,61 +0,0 @@
-- +goose Up
CREATE TABLE eth.log_cids (
id SERIAL PRIMARY KEY,
leaf_cid TEXT NOT NULL,
leaf_mh_key TEXT NOT NULL REFERENCES public.blocks (key) ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED,
receipt_id INTEGER NOT NULL REFERENCES eth.receipt_cids (id) ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED,
address VARCHAR(66) NOT NULL,
log_data BYTEA,
index INTEGER NOT NULL,
topic0 VARCHAR(66),
topic1 VARCHAR(66),
topic2 VARCHAR(66),
topic3 VARCHAR(66),
UNIQUE (receipt_id, index)
);
CREATE INDEX log_mh_index ON eth.log_cids USING btree (leaf_mh_key);
CREATE INDEX log_cid_index ON eth.log_cids USING btree (leaf_cid);
CREATE INDEX log_rct_id_index ON eth.log_cids USING btree (receipt_id);
--
-- Name: log_topic0_index; Type: INDEX; Schema: eth; Owner: -
--
CREATE INDEX log_topic0_index ON eth.log_cids USING btree (topic0);
--
-- Name: log_topic1_index; Type: INDEX; Schema: eth; Owner: -
--
CREATE INDEX log_topic1_index ON eth.log_cids USING btree (topic1);
--
-- Name: log_topic2_index; Type: INDEX; Schema: eth; Owner: -
--
CREATE INDEX log_topic2_index ON eth.log_cids USING btree (topic2);
--
-- Name: log_topic3_index; Type: INDEX; Schema: eth; Owner: -
--
CREATE INDEX log_topic3_index ON eth.log_cids USING btree (topic3);
-- +goose Down
-- log indexes
DROP INDEX eth.log_mh_index;
DROP INDEX eth.log_cid_index;
DROP INDEX eth.log_rct_id_index;
DROP INDEX eth.log_topic0_index;
DROP INDEX eth.log_topic1_index;
DROP INDEX eth.log_topic2_index;
DROP INDEX eth.log_topic3_index;
DROP TABLE eth.log_cids;

@ -0,0 +1,43 @@
-- +goose Up
-- +goose StatementBegin
-- returns whether the state leaf key is vacated (previously existed but now is empty) at the provided block hash
CREATE OR REPLACE FUNCTION was_state_leaf_removed(v_key VARCHAR(66), v_hash VARCHAR)
RETURNS boolean AS $$
SELECT state_cids.removed = true
FROM eth.state_cids
INNER JOIN eth.header_cids ON (state_cids.header_id = header_cids.block_hash)
WHERE state_leaf_key = v_key
AND state_cids.block_number <= (SELECT block_number
FROM eth.header_cids
WHERE block_hash = v_hash)
ORDER BY state_cids.block_number DESC LIMIT 1;
$$
language sql;
-- +goose StatementEnd
-- +goose StatementBegin
-- returns whether the state leaf key is vacated (previously existed but now is empty) at the provided block height
CREATE OR REPLACE FUNCTION public.was_state_leaf_removed_by_number(v_key VARCHAR(66), v_block_no BIGINT)
RETURNS BOOLEAN AS $$
SELECT state_cids.removed = true
FROM eth.state_cids
INNER JOIN eth.header_cids ON (state_cids.header_id = header_cids.block_hash)
WHERE state_leaf_key = v_key
AND state_cids.block_number <= v_block_no
ORDER BY state_cids.block_number DESC LIMIT 1;
$$
language sql;
-- +goose StatementEnd
-- +goose StatementBegin
CREATE OR REPLACE FUNCTION canonical_header_hash(height BIGINT) RETURNS character varying AS
$BODY$
SELECT block_hash from eth.header_cids WHERE block_number = height AND canonical = true LIMIT 1;
$BODY$
LANGUAGE sql;
-- +goose StatementEnd
-- +goose Down
DROP FUNCTION was_state_leaf_removed;
DROP FUNCTION was_state_leaf_removed_by_number;
DROP FUNCTION canonical_header_hash;
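As a quick illustration of these helpers, a hypothetical session might look like the following (the leaf key and height are zero-valued placeholders, not real data):

```sql
-- Was this account's state leaf removed as of block 1000000?
SELECT was_state_leaf_removed_by_number(
    '0x0000000000000000000000000000000000000000000000000000000000000000',
    1000000);
-- Which header is canonical at that height?
SELECT canonical_header_hash(1000000);
```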

@ -0,0 +1,110 @@
-- +goose Up
-- +goose StatementBegin
CREATE OR REPLACE FUNCTION public.get_storage_at_by_number(v_state_leaf_key text, v_storage_leaf_key text, v_block_no bigint)
RETURNS TABLE
(
cid TEXT,
val BYTEA,
block_number BIGINT,
removed BOOL,
state_leaf_removed BOOL
)
AS
$BODY$
DECLARE
v_state_path BYTEA;
v_header TEXT;
v_canonical_header TEXT;
BEGIN
CREATE TEMP TABLE tmp_tt_stg2
(
header_id TEXT,
cid TEXT,
val BYTEA,
block_number BIGINT,
removed BOOL,
state_leaf_removed BOOL
) ON COMMIT DROP;
-- in best case scenario, the latest record we find for the provided keys is for a canonical block
INSERT INTO tmp_tt_stg2
SELECT storage_cids.header_id,
storage_cids.cid,
storage_cids.val,
storage_cids.block_number,
storage_cids.removed,
was_state_leaf_removed_by_number(v_state_leaf_key, v_block_no) AS state_leaf_removed
FROM eth.storage_cids
WHERE storage_leaf_key = v_storage_leaf_key
AND storage_cids.state_leaf_key = v_state_leaf_key -- can lookup directly on the leaf key in v5
AND storage_cids.block_number <= v_block_no
ORDER BY storage_cids.block_number DESC LIMIT 1;
-- check if result is from canonical state
SELECT header_id, canonical_header_hash(tmp_tt_stg2.block_number)
INTO v_header, v_canonical_header
FROM tmp_tt_stg2 LIMIT 1;
IF v_header IS NULL OR v_header != v_canonical_header THEN
RAISE NOTICE 'get_storage_at_by_number: chosen header NULL OR % != canonical header % for block number %, trying again.', v_header, v_canonical_header, v_block_no;
TRUNCATE tmp_tt_stg2;
-- If we hit on a non-canonical block, we need to go back and do a comprehensive check.
-- We try to avoid this path, since it requires a join between storage_cids and header_cids.
INSERT INTO tmp_tt_stg2
SELECT storage_cids.header_id,
storage_cids.cid,
storage_cids.val,
storage_cids.block_number,
storage_cids.removed,
was_state_leaf_removed_by_number(
v_state_leaf_key,
v_block_no
) AS state_leaf_removed
FROM eth.storage_cids
INNER JOIN eth.header_cids ON (
storage_cids.header_id = header_cids.block_hash
AND storage_cids.block_number = header_cids.block_number
)
WHERE state_leaf_key = v_state_leaf_key
AND storage_leaf_key = v_storage_leaf_key
AND storage_cids.block_number <= v_block_no
AND header_cids.block_number <= v_block_no
AND header_cids.block_hash = (SELECT canonical_header_hash(header_cids.block_number))
ORDER BY header_cids.block_number DESC LIMIT 1;
END IF;
RETURN QUERY SELECT t.cid, t.val, t.block_number, t.removed, t.state_leaf_removed
FROM tmp_tt_stg2 AS t LIMIT 1;
END
$BODY$
language 'plpgsql';
-- +goose StatementEnd
-- +goose StatementBegin
CREATE OR REPLACE FUNCTION public.get_storage_at_by_hash(v_state_leaf_key TEXT, v_storage_leaf_key text, v_block_hash text)
RETURNS TABLE
(
cid TEXT,
val BYTEA,
block_number BIGINT,
removed BOOL,
state_leaf_removed BOOL
)
AS
$BODY$
DECLARE
v_block_no BIGINT;
BEGIN
SELECT h.block_number INTO v_block_no FROM eth.header_cids AS h WHERE block_hash = v_block_hash LIMIT 1;
IF v_block_no IS NULL THEN
RETURN;
END IF;
RETURN QUERY SELECT * FROM get_storage_at_by_number(v_state_leaf_key, v_storage_leaf_key, v_block_no);
END
$BODY$
LANGUAGE 'plpgsql';
-- +goose StatementEnd
-- +goose Down
DROP FUNCTION get_storage_at_by_hash;
DROP FUNCTION get_storage_at_by_number;
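To sketch how a consumer would call these functions (all keys below are zero-valued placeholders):

```sql
-- Look up a storage slot's CID and RLP-encoded value at a specific block hash.
SELECT cid, val, block_number, removed, state_leaf_removed
FROM get_storage_at_by_hash(
    '0x0000000000000000000000000000000000000000000000000000000000000000', -- state leaf key
    '0x0000000000000000000000000000000000000000000000000000000000000000', -- storage leaf key
    '0x0000000000000000000000000000000000000000000000000000000000000000'  -- block hash
);
```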

@ -0,0 +1,105 @@
-- +goose Up
ALTER TABLE eth.header_cids
ADD CONSTRAINT header_cids_ipld_blocks_fkey
FOREIGN KEY (cid, block_number)
REFERENCES ipld.blocks (key, block_number);
ALTER TABLE eth.uncle_cids
ADD CONSTRAINT uncle_cids_ipld_blocks_fkey
FOREIGN KEY (cid, block_number)
REFERENCES ipld.blocks (key, block_number);
ALTER TABLE eth.uncle_cids
ADD CONSTRAINT uncle_cids_header_cids_fkey
FOREIGN KEY (header_id, block_number)
REFERENCES eth.header_cids (block_hash, block_number);
ALTER TABLE eth.transaction_cids
ADD CONSTRAINT transaction_cids_ipld_blocks_fkey
FOREIGN KEY (cid, block_number)
REFERENCES ipld.blocks (key, block_number);
ALTER TABLE eth.transaction_cids
ADD CONSTRAINT transaction_cids_header_cids_fkey
FOREIGN KEY (header_id, block_number)
REFERENCES eth.header_cids (block_hash, block_number);
ALTER TABLE eth.receipt_cids
ADD CONSTRAINT receipt_cids_ipld_blocks_fkey
FOREIGN KEY (cid, block_number)
REFERENCES ipld.blocks (key, block_number);
ALTER TABLE eth.receipt_cids
ADD CONSTRAINT receipt_cids_transaction_cids_fkey
FOREIGN KEY (tx_id, header_id, block_number)
REFERENCES eth.transaction_cids (tx_hash, header_id, block_number);
ALTER TABLE eth.state_cids
ADD CONSTRAINT state_cids_ipld_blocks_fkey
FOREIGN KEY (cid, block_number)
REFERENCES ipld.blocks (key, block_number);
ALTER TABLE eth.state_cids
ADD CONSTRAINT state_cids_header_cids_fkey
FOREIGN KEY (header_id, block_number)
REFERENCES eth.header_cids (block_hash, block_number);
ALTER TABLE eth.storage_cids
ADD CONSTRAINT storage_cids_ipld_blocks_fkey
FOREIGN KEY (cid, block_number)
REFERENCES ipld.blocks (key, block_number);
ALTER TABLE eth.storage_cids
ADD CONSTRAINT storage_cids_state_cids_fkey
FOREIGN KEY (state_leaf_key, header_id, block_number)
REFERENCES eth.state_cids (state_leaf_key, header_id, block_number);
ALTER TABLE eth.log_cids
ADD CONSTRAINT log_cids_ipld_blocks_fkey
FOREIGN KEY (cid, block_number)
REFERENCES ipld.blocks (key, block_number);
ALTER TABLE eth.log_cids
ADD CONSTRAINT log_cids_receipt_cids_fkey
FOREIGN KEY (rct_id, header_id, block_number)
REFERENCES eth.receipt_cids (tx_id, header_id, block_number);
-- +goose Down
ALTER TABLE eth.log_cids
DROP CONSTRAINT log_cids_receipt_cids_fkey;
ALTER TABLE eth.log_cids
DROP CONSTRAINT log_cids_ipld_blocks_fkey;
ALTER TABLE eth.storage_cids
DROP CONSTRAINT storage_cids_state_cids_fkey;
ALTER TABLE eth.storage_cids
DROP CONSTRAINT storage_cids_ipld_blocks_fkey;
ALTER TABLE eth.state_cids
DROP CONSTRAINT state_cids_header_cids_fkey;
ALTER TABLE eth.state_cids
DROP CONSTRAINT state_cids_ipld_blocks_fkey;
ALTER TABLE eth.receipt_cids
DROP CONSTRAINT receipt_cids_transaction_cids_fkey;
ALTER TABLE eth.receipt_cids
DROP CONSTRAINT receipt_cids_ipld_blocks_fkey;
ALTER TABLE eth.transaction_cids
DROP CONSTRAINT transaction_cids_header_cids_fkey;
ALTER TABLE eth.transaction_cids
DROP CONSTRAINT transaction_cids_ipld_blocks_fkey;
ALTER TABLE eth.uncle_cids
DROP CONSTRAINT uncle_cids_header_cids_fkey;
ALTER TABLE eth.uncle_cids
DROP CONSTRAINT uncle_cids_ipld_blocks_fkey;
ALTER TABLE eth.header_cids
DROP CONSTRAINT header_cids_ipld_blocks_fkey;
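A hypothetical sanity check after applying the Up migration is to ask the catalog which foreign keys now exist on a table, e.g.:

```sql
-- List the foreign-key constraints attached to eth.log_cids; after this
-- migration the set should include log_cids_ipld_blocks_fkey and
-- log_cids_receipt_cids_fkey.
SELECT conname
FROM pg_constraint
WHERE conrelid = 'eth.log_cids'::regclass
  AND contype = 'f';
```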

@ -0,0 +1,105 @@
-- +goose Up
ALTER TABLE eth.log_cids
DROP CONSTRAINT log_cids_receipt_cids_fkey;
ALTER TABLE eth.log_cids
DROP CONSTRAINT log_cids_ipld_blocks_fkey;
ALTER TABLE eth.storage_cids
DROP CONSTRAINT storage_cids_state_cids_fkey;
ALTER TABLE eth.storage_cids
DROP CONSTRAINT storage_cids_ipld_blocks_fkey;
ALTER TABLE eth.state_cids
DROP CONSTRAINT state_cids_header_cids_fkey;
ALTER TABLE eth.state_cids
DROP CONSTRAINT state_cids_ipld_blocks_fkey;
ALTER TABLE eth.receipt_cids
DROP CONSTRAINT receipt_cids_transaction_cids_fkey;
ALTER TABLE eth.receipt_cids
DROP CONSTRAINT receipt_cids_ipld_blocks_fkey;
ALTER TABLE eth.transaction_cids
DROP CONSTRAINT transaction_cids_header_cids_fkey;
ALTER TABLE eth.transaction_cids
DROP CONSTRAINT transaction_cids_ipld_blocks_fkey;
ALTER TABLE eth.uncle_cids
DROP CONSTRAINT uncle_cids_header_cids_fkey;
ALTER TABLE eth.uncle_cids
DROP CONSTRAINT uncle_cids_ipld_blocks_fkey;
ALTER TABLE eth.header_cids
DROP CONSTRAINT header_cids_ipld_blocks_fkey;
-- +goose Down
ALTER TABLE eth.header_cids
ADD CONSTRAINT header_cids_ipld_blocks_fkey
FOREIGN KEY (cid, block_number)
REFERENCES ipld.blocks (key, block_number);
ALTER TABLE eth.uncle_cids
ADD CONSTRAINT uncle_cids_ipld_blocks_fkey
FOREIGN KEY (cid, block_number)
REFERENCES ipld.blocks (key, block_number);
ALTER TABLE eth.uncle_cids
ADD CONSTRAINT uncle_cids_header_cids_fkey
FOREIGN KEY (header_id, block_number)
REFERENCES eth.header_cids (block_hash, block_number);
ALTER TABLE eth.transaction_cids
ADD CONSTRAINT transaction_cids_ipld_blocks_fkey
FOREIGN KEY (cid, block_number)
REFERENCES ipld.blocks (key, block_number);
ALTER TABLE eth.transaction_cids
ADD CONSTRAINT transaction_cids_header_cids_fkey
FOREIGN KEY (header_id, block_number)
REFERENCES eth.header_cids (block_hash, block_number);
ALTER TABLE eth.receipt_cids
ADD CONSTRAINT receipt_cids_ipld_blocks_fkey
FOREIGN KEY (cid, block_number)
REFERENCES ipld.blocks (key, block_number);
ALTER TABLE eth.receipt_cids
ADD CONSTRAINT receipt_cids_transaction_cids_fkey
FOREIGN KEY (tx_id, header_id, block_number)
REFERENCES eth.transaction_cids (tx_hash, header_id, block_number);
ALTER TABLE eth.state_cids
ADD CONSTRAINT state_cids_ipld_blocks_fkey
FOREIGN KEY (cid, block_number)
REFERENCES ipld.blocks (key, block_number);
ALTER TABLE eth.state_cids
ADD CONSTRAINT state_cids_header_cids_fkey
FOREIGN KEY (header_id, block_number)
REFERENCES eth.header_cids (block_hash, block_number);
ALTER TABLE eth.storage_cids
ADD CONSTRAINT storage_cids_ipld_blocks_fkey
FOREIGN KEY (cid, block_number)
REFERENCES ipld.blocks (key, block_number);
ALTER TABLE eth.storage_cids
ADD CONSTRAINT storage_cids_state_cids_fkey
FOREIGN KEY (state_leaf_key, header_id, block_number)
REFERENCES eth.state_cids (state_leaf_key, header_id, block_number);
ALTER TABLE eth.log_cids
ADD CONSTRAINT log_cids_ipld_blocks_fkey
FOREIGN KEY (cid, block_number)
REFERENCES ipld.blocks (key, block_number);
ALTER TABLE eth.log_cids
ADD CONSTRAINT log_cids_receipt_cids_fkey
FOREIGN KEY (rct_id, header_id, block_number)
REFERENCES eth.receipt_cids (tx_id, header_id, block_number);

@ -0,0 +1,98 @@
-- +goose Up
SELECT create_hypertable('ipld.blocks', 'block_number', migrate_data => true, chunk_time_interval => 32768);
SELECT create_hypertable('eth.uncle_cids', 'block_number', migrate_data => true, chunk_time_interval => 32768);
SELECT create_hypertable('eth.transaction_cids', 'block_number', migrate_data => true, chunk_time_interval => 32768);
SELECT create_hypertable('eth.receipt_cids', 'block_number', migrate_data => true, chunk_time_interval => 32768);
SELECT create_hypertable('eth.state_cids', 'block_number', migrate_data => true, chunk_time_interval => 32768);
SELECT create_hypertable('eth.storage_cids', 'block_number', migrate_data => true, chunk_time_interval => 32768);
SELECT create_hypertable('eth.log_cids', 'block_number', migrate_data => true, chunk_time_interval => 32768);
-- update version
INSERT INTO public.db_version (singleton, version) VALUES (true, 'v5.0.0-h')
ON CONFLICT (singleton) DO UPDATE SET (version, tstamp) = ('v5.0.0-h', NOW());
-- +goose Down
INSERT INTO public.db_version (singleton, version) VALUES (true, 'v5.0.0')
ON CONFLICT (singleton) DO UPDATE SET (version, tstamp) = ('v5.0.0', NOW());
-- reversing conversion to hypertable requires migrating all data from every chunk back to a single table
-- create new regular tables
CREATE TABLE eth.log_cids_i (LIKE eth.log_cids INCLUDING ALL);
CREATE TABLE eth.storage_cids_i (LIKE eth.storage_cids INCLUDING ALL);
CREATE TABLE eth.state_cids_i (LIKE eth.state_cids INCLUDING ALL);
CREATE TABLE eth.receipt_cids_i (LIKE eth.receipt_cids INCLUDING ALL);
CREATE TABLE eth.transaction_cids_i (LIKE eth.transaction_cids INCLUDING ALL);
CREATE TABLE eth.uncle_cids_i (LIKE eth.uncle_cids INCLUDING ALL);
CREATE TABLE ipld.blocks_i (LIKE ipld.blocks INCLUDING ALL);
-- migrate data
INSERT INTO eth.log_cids_i (SELECT * FROM eth.log_cids);
INSERT INTO eth.storage_cids_i (SELECT * FROM eth.storage_cids);
INSERT INTO eth.state_cids_i (SELECT * FROM eth.state_cids);
INSERT INTO eth.receipt_cids_i (SELECT * FROM eth.receipt_cids);
INSERT INTO eth.transaction_cids_i (SELECT * FROM eth.transaction_cids);
INSERT INTO eth.uncle_cids_i (SELECT * FROM eth.uncle_cids);
INSERT INTO ipld.blocks_i (SELECT * FROM ipld.blocks);
-- drop hypertables
DROP TABLE eth.log_cids;
DROP TABLE eth.storage_cids;
DROP TABLE eth.state_cids;
DROP TABLE eth.receipt_cids;
DROP TABLE eth.transaction_cids;
DROP TABLE eth.uncle_cids;
DROP TABLE ipld.blocks;
-- rename new tables
ALTER TABLE eth.log_cids_i RENAME TO log_cids;
ALTER TABLE eth.storage_cids_i RENAME TO storage_cids;
ALTER TABLE eth.state_cids_i RENAME TO state_cids;
ALTER TABLE eth.receipt_cids_i RENAME TO receipt_cids;
ALTER TABLE eth.transaction_cids_i RENAME TO transaction_cids;
ALTER TABLE eth.uncle_cids_i RENAME TO uncle_cids;
ALTER TABLE ipld.blocks_i RENAME TO blocks;
-- rename indexes:
-- log indexes
ALTER INDEX eth.log_cids_i_topic3_idx RENAME TO log_topic3_index;
ALTER INDEX eth.log_cids_i_topic2_idx RENAME TO log_topic2_index;
ALTER INDEX eth.log_cids_i_topic1_idx RENAME TO log_topic1_index;
ALTER INDEX eth.log_cids_i_topic0_idx RENAME TO log_topic0_index;
ALTER INDEX eth.log_cids_i_address_idx RENAME TO log_address_index;
ALTER INDEX eth.log_cids_i_cid_block_number_idx RENAME TO log_cid_block_number_index;
ALTER INDEX eth.log_cids_i_header_id_idx RENAME TO log_header_id_index;
ALTER INDEX eth.log_cids_i_block_number_idx RENAME TO log_block_number_index;
-- storage node indexes
ALTER INDEX eth.storage_cids_i_removed_idx RENAME TO storage_removed_index;
ALTER INDEX eth.storage_cids_i_header_id_idx RENAME TO storage_header_id_index;
ALTER INDEX eth.storage_cids_i_cid_block_number_idx RENAME TO storage_cid_block_number_index;
ALTER INDEX eth.storage_cids_i_state_leaf_key_idx RENAME TO storage_state_leaf_key_index;
ALTER INDEX eth.storage_cids_i_block_number_idx RENAME TO storage_block_number_index;
ALTER INDEX eth.storage_cids_i_storage_leaf_key_block_number_idx RENAME TO storage_leaf_key_block_number_index;
-- state node indexes
ALTER INDEX eth.state_cids_i_code_hash_idx RENAME TO state_code_hash_index;
ALTER INDEX eth.state_cids_i_removed_idx RENAME TO state_removed_index;
ALTER INDEX eth.state_cids_i_header_id_idx RENAME TO state_header_id_index;
ALTER INDEX eth.state_cids_i_cid_block_number_idx RENAME TO state_cid_block_number_index;
ALTER INDEX eth.state_cids_i_block_number_idx RENAME TO state_block_number_index;
ALTER INDEX eth.state_cids_i_state_leaf_key_block_number_idx RENAME TO state_leaf_key_block_number_index;
-- receipt indexes
ALTER INDEX eth.receipt_cids_i_contract_idx RENAME TO rct_contract_index;
ALTER INDEX eth.receipt_cids_i_cid_block_number_idx RENAME TO rct_cid_block_number_index;
ALTER INDEX eth.receipt_cids_i_header_id_idx RENAME TO rct_header_id_index;
ALTER INDEX eth.receipt_cids_i_block_number_idx RENAME TO rct_block_number_index;
-- transaction indexes
ALTER INDEX eth.transaction_cids_i_src_idx RENAME TO tx_src_index;
ALTER INDEX eth.transaction_cids_i_dst_idx RENAME TO tx_dst_index;
ALTER INDEX eth.transaction_cids_i_cid_block_number_idx RENAME TO tx_cid_block_number_index;
ALTER INDEX eth.transaction_cids_i_header_id_idx RENAME TO tx_header_id_index;
ALTER INDEX eth.transaction_cids_i_block_number_idx RENAME TO tx_block_number_index;
-- uncle indexes
ALTER INDEX eth.uncle_cids_i_block_number_idx RENAME TO uncle_block_number_index;
ALTER INDEX eth.uncle_cids_i_cid_block_number_index_idx RENAME TO uncle_cid_block_number_index;
ALTER INDEX eth.uncle_cids_i_header_id_idx RENAME TO uncle_header_id_index;

@ -0,0 +1,16 @@
-- +goose Up
CREATE TABLE IF NOT EXISTS eth.withdrawal_cids (
block_number BIGINT NOT NULL,
header_id VARCHAR(66) NOT NULL,
cid TEXT NOT NULL,
index INTEGER NOT NULL,
validator INTEGER NOT NULL,
address VARCHAR(66) NOT NULL,
amount NUMERIC NOT NULL,
PRIMARY KEY (index, header_id, block_number)
);
SELECT create_hypertable('eth.withdrawal_cids', 'block_number', migrate_data => true, chunk_time_interval => 32768);
-- +goose Down
DROP TABLE eth.withdrawal_cids;
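By way of example, withdrawals for a single block could be read back with a query along these lines (the block number is illustrative):

```sql
-- List the validator withdrawals credited in a given block, in list order.
SELECT index, validator, address, amount
FROM eth.withdrawal_cids
WHERE block_number = 17034870
ORDER BY index;
```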

@ -0,0 +1,12 @@
-- +goose Up
CREATE TABLE eth.blob_hashes (
tx_hash VARCHAR(66) NOT NULL,
index INTEGER NOT NULL,
blob_hash BYTEA NOT NULL
);
CREATE UNIQUE INDEX blob_hashes_tx_hash_index ON eth.blob_hashes(tx_hash, index);
-- +goose Down
DROP INDEX eth.blob_hashes_tx_hash_index;
DROP TABLE eth.blob_hashes;
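A hypothetical lookup against the new table (the transaction hash is a placeholder):

```sql
-- Fetch a blob transaction's versioned blob hashes in order.
SELECT index, blob_hash
FROM eth.blob_hashes
WHERE tx_hash = '0x0000000000000000000000000000000000000000000000000000000000000000'
ORDER BY index;
```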

@ -1,19 +0,0 @@
version: '3.2'
services:
statediff-migrations:
restart: on-failure
depends_on:
- test-db
image: vulcanize/statediff-migrations:v0.9.0
test-db:
restart: always
image: postgres:10.12-alpine
command: ["postgres", "-c", "log_statement=all"]
environment:
POSTGRES_USER: "vdbm"
POSTGRES_DB: "vulcanize_testing"
POSTGRES_PASSWORD: "password"
ports:
- "127.0.0.1:8066:5432"

@ -1,14 +0,0 @@
version: '3.2'
services:
ipld-eth-db:
restart: always
image: vulcanize/ipld-eth-db
build: .
environment:
POSTGRES_USER: "vdbm"
POSTGRES_DB: "vulcanize_testing"
POSTGRES_PASSWORD: "password"
hostname: db
ports:
- "127.0.0.1:8077:5432"

schema.sql (1197 lines): file diff suppressed because it is too large.

@ -1,19 +1,14 @@
 #!/bin/sh
-# Runs the db migrations
-set +x
+set -e
+# Default command is "goose up"
+if [[ $# -eq 0 ]]; then
+set -- "up"
+fi
 # Construct the connection string for postgres
 VDB_PG_CONNECT=postgresql://$DATABASE_USER:$DATABASE_PASSWORD@$DATABASE_HOSTNAME:$DATABASE_PORT/$DATABASE_NAME?sslmode=disable
 # Run the DB migrations
-echo "Connecting with: $VDB_PG_CONNECT"
-echo "Running database migrations"
-./goose -dir migrations/vulcanizedb postgres "$VDB_PG_CONNECT" up
-# If the db migrations ran without err
-if [[ $? -eq 0 ]]; then
-echo "Migration process ran successfully"
-else
-echo "Could not run migrations. Are the database details correct?"
-exit 1
-fi
+set -x
+exec ./goose -dir migrations postgres "$VDB_PG_CONNECT" "$@"

@ -10,7 +10,7 @@ sleep 5s
 export HOST_NAME=localhost
 export PORT=8066
 export USER=vdbm
-export TEST_DB=vulcanize_testing
+export TEST_DB=cerc_testing
 export TEST_CONNECT_STRING=postgresql://$USER@$HOST_NAME:$PORT/$TEST_DB?sslmode=disable
 export PGPASSWORD=password
@ -54,4 +54,4 @@ do
 then
 exit 0
 fi
 done

Binary file not shown. (Before: 493 KiB, After: 508 KiB)

@ -1,150 +1,113 @@
(diagram.uml: side-by-side XML diff, abridged to its substance.) The generated diagram layout was rebuilt against a fresh introspection: the <OriginalElement> reference changed from 763cb2dc-728a-4fbd-a163-94dd564429aa to 86a0461b-ec84-4911-9aa2-e562b5d7b24c, and table nodes are now qualified under uml_diagram rather than vulcanize_testing. Nodes for the dropped tables public.blocks, eth.state_accounts, and eth.access_list_element were removed; nodes for ipld.blocks, eth_meta.watched_addresses, and public.db_version were added; and every REFERENCES edge (with its layout coordinates) was re-routed to match the v5 foreign keys, e.g. eth.storage_cids now points at eth.state_cids and ipld.blocks instead of public.blocks.
<point x="-51.75" y="-116.0" /> <point x="56.5" y="-105.0" />
<point x="560.3125" y="971.0" /> <point x="830.1941964285713" y="851.0" />
<point x="497.625" y="971.0" /> <point x="934.0" y="851.0" />
<point x="497.625" y="207.0" /> <point x="934.0" y="126.0" />
<point x="502.875" y="207.0" /> <point x="551.25" y="126.0" />
<point x="502.875" y="197.0" /> <point x="0.0" y="50.0" />
<point x="269.080078125" y="197.0" />
<point x="25.25" y="38.5" />
</edge> </edge>
<edge source="763cb2dc-728a-4fbd-a163-94dd564429aa.TABLE:vulcanize_testing.eth.log_cids" target="763cb2dc-728a-4fbd-a163-94dd564429aa.TABLE:vulcanize_testing.public.blocks" relationship="REFERENCES"> <edge source="86a0461b-ec84-4911-9aa2-e562b5d7b24c.TABLE:uml_diagram.eth.uncle_cids" target="86a0461b-ec84-4911-9aa2-e562b5d7b24c.TABLE:uml_diagram.ipld.blocks" relationship="REFERENCES">
<point x="-41.0" y="-138.0" /> <point x="48.5" y="-94.0" />
<point x="268.705078125" y="1256.0" /> <point x="875.0" y="548.0" />
<point x="0.0" y="1256.0" /> <point x="835.6941964285714" y="548.0" />
<point x="0.0" y="197.0" /> <point x="835.6941964285714" y="126.0" />
<point x="218.580078125" y="197.0" /> <point x="551.25" y="126.0" />
<point x="-25.25" y="38.5" /> <point x="0.0" y="50.0" />
</edge> </edge>
<edge source="763cb2dc-728a-4fbd-a163-94dd564429aa.TABLE:vulcanize_testing.eth.transaction_cids" target="763cb2dc-728a-4fbd-a163-94dd564429aa.TABLE:vulcanize_testing.eth.header_cids" relationship="REFERENCES"> <edge source="86a0461b-ec84-4911-9aa2-e562b5d7b24c.TABLE:uml_diagram.eth.header_cids" target="86a0461b-ec84-4911-9aa2-e562b5d7b24c.TABLE:uml_diagram.ipld.blocks" relationship="REFERENCES">
<point x="-85.0" y="-127.0" /> <point x="0.0" y="-182.0" />
<point x="846.375" y="668.0" /> <point x="658.75" y="126.0" />
<point x="756.0" y="668.0" /> <point x="551.25" y="126.0" />
<point x="0.0" y="204.0" /> <point x="0.0" y="50.0" />
</edge>
<edge source="763cb2dc-728a-4fbd-a163-94dd564429aa.TABLE:vulcanize_testing.eth.state_cids" target="763cb2dc-728a-4fbd-a163-94dd564429aa.TABLE:vulcanize_testing.public.blocks" relationship="REFERENCES">
<point x="-52.25" y="-105.0" />
<point x="208.955078125" y="668.0" />
<point x="256.205078125" y="668.0" />
<point x="256.205078125" y="207.0" />
<point x="502.875" y="207.0" />
<point x="502.875" y="197.0" />
<point x="269.080078125" y="197.0" />
<point x="25.25" y="38.5" />
</edge>
<edge source="763cb2dc-728a-4fbd-a163-94dd564429aa.TABLE:vulcanize_testing.eth.access_list_element" target="763cb2dc-728a-4fbd-a163-94dd564429aa.TABLE:vulcanize_testing.eth.transaction_cids" relationship="REFERENCES">
<point x="0.0" y="-72.0" />
<point x="0.0" y="127.0" />
</edge>
<edge source="763cb2dc-728a-4fbd-a163-94dd564429aa.TABLE:vulcanize_testing.eth.header_cids" target="763cb2dc-728a-4fbd-a163-94dd564429aa.TABLE:vulcanize_testing.public.nodes" relationship="REFERENCES">
<point x="70.75" y="-204.0" />
<point x="826.75" y="197.0" />
<point x="728.955078125" y="197.0" />
<point x="76.5" y="83.0" />
</edge> </edge>
</edges> </edges>
<settings layout="Hierarchic" zoom="0.37832699619771865" showDependencies="false" x="667.5" y="779.0" /> <settings layout="Hierarchic" zoom="0.40290955091714103" showDependencies="false" x="793.0" y="780.5" />
<SelectedNodes /> <SelectedNodes />
<Categories> <Categories>
<Category>Columns</Category> <Category>Columns</Category>