Dynamic record content indexing #27
Reference: cerc-io/laconicd-deprecated#27
From Ashwin: GQL queries currently do a full scan on registry records when filtering. We need to implement indexing to make this work at scale.
A separate set of tables, inside the same database, to hold the indexes (secondary mappings, using key prefixing to define buckets). Indexes should not contribute to the state commitment, but they need to be internal indexes (in Badger) so that the dApp can use them when exposing GraphQL.
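A minimal sketch of the key-prefixing idea, using a sorted in-memory key set to stand in for Badger's ordered keyspace (the `idx/` bucket layout, separator, and record type names are assumptions for illustration, not the actual implementation):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// indexKey builds a secondary-index key of the form
// idx/<attribute>/<value>/<recordID>. The "idx/" prefix keeps index
// entries in their own bucket, separate from committed state.
func indexKey(attr, value, recordID string) string {
	return fmt.Sprintf("idx/%s/%s/%s", attr, value, recordID)
}

// scanPrefix simulates Badger's prefix iteration over sorted keys,
// returning the record IDs stored under idx/<attr>/<value>/.
func scanPrefix(keys []string, attr, value string) []string {
	prefix := fmt.Sprintf("idx/%s/%s/", attr, value)
	var ids []string
	for _, k := range keys {
		if strings.HasPrefix(k, prefix) {
			ids = append(ids, strings.TrimPrefix(k, prefix))
		}
	}
	return ids
}

func main() {
	keys := []string{
		indexKey("type", "WebsiteRegistrationRecord", "rec1"),
		indexKey("type", "WebsiteRegistrationRecord", "rec2"),
		indexKey("type", "ServiceProviderRecord", "rec3"),
	}
	sort.Strings(keys) // Badger stores keys in sorted order
	fmt.Println(scanPrefix(keys, "type", "WebsiteRegistrationRecord"))
}
```

With this layout, a GraphQL filter on one attribute value becomes a single prefix scan instead of a full iteration over all records.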
The GraphQL layer is the one that will use these indexes. We can currently see where this code performs expensive iteration over the data.
The fields we need to index on are the record attributes (for this endpoint:
17480f2716/x/nameservice/types/nameservice.pb.go (L165)
). This will require a dynamic approach to indexing the attributes (which can vary from record to record). For each new type, we need to define a schema or some set of descriptors from which to autogenerate indexes (perhaps using the ORM).
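One way the dynamic part could look: given a record's attributes as a generic map and a per-type schema listing which attributes are indexable, emit index entries for exactly those fields. The `Schema` type and key format here are assumptions sketched for illustration; in the real system the schema would come from a registered descriptor record, not be hard-coded:

```go
package main

import "fmt"

// Schema lists which attribute names should be indexed for a record
// type (hypothetical shape; stands in for a registered descriptor).
type Schema struct {
	Type      string
	Indexable []string
}

// IndexEntries returns one idx/<type>/<attr>/<value>/<recordID> key per
// indexable attribute present on the record; other attributes are
// skipped, so each new type needs only a schema, not new code.
func IndexEntries(s Schema, recordID string, attrs map[string]string) []string {
	var keys []string
	for _, attr := range s.Indexable {
		if v, ok := attrs[attr]; ok {
			keys = append(keys, fmt.Sprintf("idx/%s/%s/%s/%s", s.Type, attr, v, recordID))
		}
	}
	return keys
}

func main() {
	schema := Schema{Type: "WebsiteRegistrationRecord", Indexable: []string{"url", "repo"}}
	attrs := map[string]string{"url": "https://example.org", "repo": "r1", "notes": "ignored"}
	fmt.Println(IndexEntries(schema, "rec1", attrs))
}
```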
Record type:
680d585084/x/nameservice/types/nameservice.pb.go (L158)
string → protobuf any.Any
Schema/type information is registered in a separate record because many content records will share the same set of attribute types; this way they can share references to a single schema record rather than duplicating that data.
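The sharing described above can be sketched as content records holding a reference to one schema record by ID instead of embedding the schema (the struct shapes and the fully-qualified type name are hypothetical):

```go
package main

import "fmt"

// SchemaRecord is registered once per record type and describes the
// attribute layout shared by all records of that type.
type SchemaRecord struct {
	ID        string // registry ID of the schema record (assumed)
	ProtoType string // fully-qualified protobuf type name (assumed)
}

// ContentRecord references its schema by ID rather than duplicating it.
type ContentRecord struct {
	ID       string
	SchemaID string
	Attrs    map[string]string
}

func main() {
	schema := SchemaRecord{ID: "schema-1", ProtoType: "example.WebsiteRecord"}
	recs := []ContentRecord{
		{ID: "rec1", SchemaID: schema.ID, Attrs: map[string]string{"url": "https://a.example"}},
		{ID: "rec2", SchemaID: schema.ID, Attrs: map[string]string{"url": "https://b.example"}},
	}
	// Both records share one schema record; only the reference is duplicated.
	fmt.Println(recs[0].SchemaID == recs[1].SchemaID)
}
```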
After mapping to the protobuf type that we want to unpack the "Attributes" with, we still require some level of introspection of that protobuf type to figure out which fields within it we want to index the record by. For this reason, the schema record we register in step 2 needs to contain more than just the protobuf type: we need to annotate this type/descriptor in some way, decorating the fields so as to identify which ones to index the record by.
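The field-decoration idea could look like the following, using Go struct tags as a stand-in for annotating the protobuf descriptors (the `index:"true"` tag and the struct are invented for illustration; the real mechanism would decorate the registered type/descriptor, e.g. via custom field options):

```go
package main

import (
	"fmt"
	"reflect"
)

// WebsiteAttributes stands in for an unpacked Attributes payload.
// The hypothetical index:"true" tag marks which fields the registry
// should build secondary indexes on.
type WebsiteAttributes struct {
	URL   string `index:"true"`
	Repo  string `index:"true"`
	Notes string // not indexed
}

// indexableFields introspects a struct type and returns the names of
// fields decorated as indexable, mirroring the descriptor
// introspection the annotated schema record would drive.
func indexableFields(v interface{}) []string {
	t := reflect.TypeOf(v)
	var fields []string
	for i := 0; i < t.NumField(); i++ {
		if t.Field(i).Tag.Get("index") == "true" {
			fields = append(fields, t.Field(i).Name)
		}
	}
	return fields
}

func main() {
	fmt.Println(indexableFields(WebsiteAttributes{}))
}
```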
There is still the issue of figuring out which fields in a given registered protobuf type to index by, so that will require some additional notation in the protobuf types/descriptors that we register. We would also need to use SelfDescribingMessage and FileDescriptorSet to handle the dynamic type registration/introspection, which may negate a lot of the benefits of using protobuf.
Closing this and will create a new issue (#40) for how we want to extend and/or refine the functionality.