readme fixes

Ian Norden 2019-03-08 10:38:35 -06:00
parent 1d6219cc83
commit 35c5808a07
2 changed files with 48 additions and 54 deletions


@@ -359,8 +359,10 @@ or [StorageTransformerInitializer](https://github.com/vulcanize/maker-vulcanized
* Design the transformers to work in the context of their [event](https://github.com/vulcanize/maker-vulcanizedb/blob/staging/libraries/shared/watcher/event_watcher.go#L83)
or [storage](https://github.com/vulcanize/maker-vulcanizedb/blob/staging/libraries/shared/watcher/storage_watcher.go#L58) watcher execution modes
* Create db migrations to run against vulcanizeDB so that we can store the transformer output
-* Specify migration locations for each transformer in the config with the `exporter.transformer.migrations` fields
+* Do not `goose fix` the transformer migrations
+* Specify migration locations for each transformer in the config with the `exporter.transformer.migrations` fields
+* If the base vDB migrations occupy this path as well, they need to be in their `goose fix`ed form
+as they are [here](https://github.com/vulcanize/vulcanizedb/tree/master/db/migrations)
To update a plugin repository with changes to the core vulcanizedb repository, replace the vulcanizedb vendored in the plugin repo (`plugin_repo/vendor/github.com/vulcanize/vulcanizedb`)
with the newly updated version


@@ -11,7 +11,7 @@ with our `checked_headers` table.
This approach assumes you are running a vDB light sync which is itself run against a light Ethereum node;
it also assumes there is a full node available.
-Looking forward, we will be building fetchers than enable sourcing data from IPFS instead of an ETH node.
+Looking forward, we will be building fetchers that enable sourcing data from IPFS instead of an ETH node.
## Shared Code
@@ -25,7 +25,7 @@ Using the `compose` or `composeAndExecute` command, event watchers can be loaded
### [Event Transformer](https://github.com/vulcanize/maker-vulcanizedb/blob/staging/libraries/shared/transformer/event_transformer.go)
-The event transformer is responsible for converting event logs into more useful data objects and storing them in postgres.
+The event transformer is responsible for converting event logs into more useful data objects and storing them in Postgres.
The event transformer is composed of converter and repository interfaces and a config struct:
```go
type Transformer struct {
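	// The diff cuts the struct off here. The fields below are a sketch assumed
	// from the prose above (a config struct plus the converter and repository
	// interfaces); they are not quoted from the source file.
	Config     TransformerConfig
	Converter  Converter
	Repository Repository
}
```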
@@ -83,11 +83,11 @@ In order to watch events at a smart contract, for those events the developer must
1. Config - struct to hold configuration information (contract address, starting block, event name and signature).
1. Entity - struct to unpack the event log into.
-1. Model - struct representing the final data model we want to write to Postres.
+1. Model - struct representing the final data model we want to write to Postgres.
1. Converter - an interface which can unpack event logs into our entities and convert those entities to our models.
1. Repository - an interface to write our models to Postgres.
-1. TransformerInitializer - a public variable which exports our configured transformer to be loaded as part of a plugin,
-1. DB migrations - migrations to generate the Posstgres schema, tables, views, function, etc that are needed to store and interface with the transformed data models.
+1. TransformerInitializer - a public variable which exports our configured transformer to be loaded as part of a plugin.
+1. DB migrations - migrations to generate the Postgres schema, tables, views, functions, etc. that are needed to store and interface with the transformed data models.
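For orientation, here is a minimal sketch of the converter and repository interfaces named in the list above. The `ToModels` and `Create` signatures are taken from the hunks further down; the `ToEntities` signature and the exact interface shapes are assumptions:

```go
import "github.com/ethereum/go-ethereum/core/types"

// Converter unpacks raw event logs into entities and converts those entities
// into models. ToEntities is an assumed signature; ToModels appears verbatim
// in the hunks below.
type Converter interface {
	ToEntities(contractAbi string, ethLogs []types.Log) ([]interface{}, error)
	ToModels(entities []interface{}) ([]interface{}, error)
}

// Repository writes converted models to Postgres against a given header;
// Create appears verbatim in the hunks below.
type Repository interface {
	Create(headerID int64, models []interface{}) error
}
```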
The example event we will use looks like:
```
@@ -97,7 +97,7 @@ event ExampleEvent(bytes32 indexed arg1, address indexed arg2, bytes32 arg3, uin
### Config
The config holds configuration variables for the event transformer, including a name for the transformer, the contract address
-it is working at, the contract's abi, the topic (e.g. event signature; topic0) that it is filtering for, and starting
+it is working at, the contract's ABI, the topic (i.e. the event signature, topic0) that it is filtering for, and starting
and ending block numbers.
```go
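// The diff shows only the changed prose above this block, so the struct body
// is cut off. As a sketch, a config struct matching the fields described above
// might look like this (field names assumed, not quoted from the source):
type TransformerConfig struct {
	TransformerName     string   // a name for the transformer
	ContractAddresses   []string // the contract address(es) it is working at
	ContractAbi         string   // the contract's ABI
	Topic               string   // topic0, i.e. the event signature, being filtered for
	StartingBlockNumber int64
	EndingBlockNumber   int64
}
```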
@@ -118,9 +118,9 @@ TransactionIndex, and the Raw log are retained in order to link the data to its
```go
type ExampleEntity struct {
-Arg1 [32]byte
+Arg1 common.Hash
Arg2 common.Address
-Arg3 [32]byte
+Arg3 common.Hash
Arg4 *big.Int
Arg5 *big.Int
LogIndex uint
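	// The hunk ends at LogIndex. Per the hunk heading, TransactionIndex and the
	// raw log (go-ethereum's core/types.Log) are also retained to link the data
	// to its header and transaction; the exact types of these fields are assumed.
	TransactionIndex uint
	Raw              types.Log
}
```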
@@ -200,7 +200,10 @@ func (converter ExampleConverter) ToModels(entities []interface{}) ([]interface{
return nil, fmt.Errorf("entity of type %T, not %T", entity, ExampleModel{})
}
-fractionSkimmed, err := hexutil.DecodeBig(entity.Arg3)
+fractionSkimmed, err := hexutil.DecodeBig(entity.Arg3.Hex())
+if err != nil {
+	return nil, err
+}
position := new(big.Int)
position.Sub(entity.Arg4, entity.Arg5)
finalPosition := new(big.Int)
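The switch to `entity.Arg3.Hex()` is what makes this decode work: `hexutil.DecodeBig` takes a 0x-prefixed string, not a byte array. Below is a small runnable sketch of the call in isolation; note that `DecodeBig` rejects quantities with leading zero digits, so this pattern assumes the hash's top nibble is non-zero:

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/common/hexutil"
)

func main() {
	// common.Hash.Hex() yields a 0x-prefixed string, the form DecodeBig expects.
	arg3 := common.HexToHash("0xf000000000000000000000000000000000000000000000000000000000000001")
	n, err := hexutil.DecodeBig(arg3.Hex())
	if err != nil {
		// A left-padded value (leading zero digits) lands here.
		fmt.Println("decode error:", err)
		return
	}
	fmt.Println(n.String())
}
```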
@@ -218,7 +221,7 @@ func (converter ExampleConverter) ToModels(entities []interface{}) ([]interface{
}
model := ExampleModel{
-EventHash: common.BytesToHash([]byte(entity.Arg1[:])).Hex(),
+EventHash: entity.Arg1.Hex(),
UserAddress: entity.Arg2.Hex(),
FractionSkimmed: fractionSkimmed.String(),
Surplus: entity.Arg4.String(),
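With `Arg1` now typed as `common.Hash`, calling `Hex()` directly is equivalent to the removed `BytesToHash` round-trip; a quick check:

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common"
)

func main() {
	arg1 := common.HexToHash("0x01")
	// The removed line rebuilt a hash from its own bytes before hex-encoding;
	// both expressions produce the same string, the new form just skips the copy.
	fmt.Println(common.BytesToHash(arg1[:]).Hex() == arg1.Hex()) // true
}
```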
@@ -275,16 +278,6 @@ func (repository ExampleRepository) Create(headerID int64, models []interface{})
return fmt.Errorf("model of type %T, not %T", model, ExampleModel{})
}
-EventHash: common.BytesToHash([]byte(entity.Arg1[:])).Hex(),
-UserAddress: entity.Arg2.Hex(),
-FractionSkimmed: fractionSkimmed.String(),
-Surplus: entity.Arg4.String(),
-Deficit: entity.Arg5.String(),
-FinalPosition: finalPosition,
-LogIndex: entity.LogIndex,
-TransactionIndex: entity.TransactionIndex,
-Raw: rawLog,
_, execErr := tx.Exec(
`INSERT into example_schema.example_event (header_id, event_hash, user_address, fraction_skimmed, surplus, deficit, final_position, log_idx, tx_idx, raw_log)
VALUES($1, $2, $3, $4, $5, $6, $7, $8, $9, $10)
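For context around this hunk, here is a minimal sketch of how the full `Create` method plausibly wraps the insert in a transaction; the `repository.db` handle and the rollback/commit plumbing are assumptions, not quoted from the source:

```go
// Sketch only: repository.db and the surrounding error handling are assumed.
func (repository ExampleRepository) Create(headerID int64, models []interface{}) error {
	tx, dbErr := repository.db.Begin()
	if dbErr != nil {
		return dbErr
	}
	for _, model := range models {
		exampleModel, ok := model.(ExampleModel)
		if !ok {
			tx.Rollback()
			return fmt.Errorf("model of type %T, not %T", model, ExampleModel{})
		}
		_, execErr := tx.Exec(
			`INSERT INTO example_schema.example_event (header_id, event_hash, user_address, fraction_skimmed, surplus, deficit, final_position, log_idx, tx_idx, raw_log)
			VALUES($1, $2, $3, $4, $5, $6, $7, $8, $9, $10)`,
			headerID, exampleModel.EventHash, exampleModel.UserAddress, exampleModel.FractionSkimmed,
			exampleModel.Surplus, exampleModel.Deficit, exampleModel.FinalPosition,
			exampleModel.LogIndex, exampleModel.TransactionIndex, exampleModel.Raw,
		)
		if execErr != nil {
			tx.Rollback()
			return execErr
		}
	}
	return tx.Commit()
}
```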
@@ -363,10 +356,10 @@ CREATE TABLE example_schema.example_event (
header_id INTEGER NOT NULL REFERENCES headers (id) ON DELETE CASCADE,
event_hash CHARACTER VARYING(66) NOT NULL,
user_address CHARACTER VARYING(66) NOT NULL,
-fraction_skimmed NUMERIC,
-surplus NUMERIC,
-deficit NUMERIC,
-final_position NUMERIC,
+fraction_skimmed NUMERIC NOT NULL,
+surplus NUMERIC NOT NULL,
+deficit NUMERIC NOT NULL,
+final_position NUMERIC NOT NULL,
tx_idx INTEGER NOT NULL,
log_idx INTEGER NOT NULL,
raw_log JSONB,
@@ -389,8 +382,7 @@ of which headers we have already filtered through for this event.
## Summary
-To create a transformer for a contract event, we need to create entities to unpack the raw log into, models to represent
+To create a transformer for a contract event we need to create entities for unpacking the raw log, models to represent
the final data structure, a converter to mediate this unpacking and conversion from entities to models, a repository to write
-these models to Postres, db migrations to accommodate these models in Postgres, and a TransformerInitializer to export the
-configured transformer and load it as a plugin to the `compose` or `composeAndExecute` commands. Note that these commands also
-need a .toml config file, as described in the [main readme](https://github.com/vulcanize/maker-vulcanizedb/blob/staging/README.md#composeandexecute-configuration)
+these models to Postgres, db migrations to accommodate these models in Postgres, and a TransformerInitializer to export the
+configured transformer and load it as a plugin to the `compose` or `composeAndExecute` commands as described in the [main readme](https://github.com/vulcanize/maker-vulcanizedb/blob/staging/README.md#composeandexecute-configuration).
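As a closing illustration, the exported initializer the summary refers to might look roughly like this; `factories.Transformer`, its `NewTransformer` method, and `exampleConfig` are assumptions about the shared-library layout, not taken from this diff:

```go
// Sketch only: the factory type, its NewTransformer method, and exampleConfig
// are assumed names, not quoted from the source.
var ExampleTransformerInitializer = factories.Transformer{
	Config:     exampleConfig,
	Converter:  ExampleConverter{},
	Repository: &ExampleRepository{},
}.NewTransformer
```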