chore: remove unused orm module (#23633)

Zachary Becker 2025-02-07 13:10:45 -05:00 committed by GitHub
parent 5cedd50480
commit d7f101e6aa
115 changed files with 0 additions and 24849 deletions

View File

@ -1,61 +0,0 @@
<!--
Guiding Principles:
Changelogs are for humans, not machines.
There should be an entry for every single version.
The same types of changes should be grouped.
Versions and sections should be linkable.
The latest version comes first.
The release date of each version is displayed.
Mention whether you follow Semantic Versioning.
Usage:
Change log entries are to be added to the Unreleased section under the
appropriate stanza (see below). Each entry should ideally include a tag and
the Github issue reference in the following format:
* (<tag>) \#<issue-number> message
The issue numbers will later be link-ified during the release process so you do
not have to worry about including a link manually, but you can if you wish.
Types of changes (Stanzas):
"Features" for new features.
"Improvements" for changes in existing functionality.
"Deprecated" for soon-to-be removed features.
"Bug Fixes" for any bug fixes.
"Client Breaking" for breaking Protobuf, gRPC and REST routes used by end-users.
"CLI Breaking" for breaking CLI commands.
"API Breaking" for breaking exported APIs used by developers building on SDK.
Ref: https://keepachangelog.com/en/1.0.0/
-->
# Changelog
## [Unreleased]
### Features
* [#15320](https://github.com/cosmos/cosmos-sdk/pull/15320) Add current sequence getter (`LastInsertedSequence`) for auto increment tables.
### Improvements
* Bump minimum Go version to 1.21.
### API Breaking Changes
* [#15870](https://github.com/cosmos/cosmos-sdk/pull/15870) Rename the orm package to `cosmossdk.io/orm`.
* [#14822](https://github.com/cosmos/cosmos-sdk/pull/14822) Migrate to cosmossdk.io/core genesis API.
### State-machine Breaking Changes
* [#12273](https://github.com/cosmos/cosmos-sdk/pull/12273) The timestamp key encoding was reworked to properly handle nil values. Existing users will need to manually migrate their data to the new encoding before upgrading.
* [#15138](https://github.com/cosmos/cosmos-sdk/pull/15138) The duration key encoding was reworked to properly handle nil values. Existing users will need to manually migrate their data to the new encoding before upgrading.
* [#19909](https://github.com/cosmos/cosmos-sdk/pull/19909) (Breaking) Adjusts the encoding of zero and positive nanoseconds to ensure consistent comparison of duration objects. In the previous implementation, when nanoseconds were greater than or equal to zero, the encoding format was simple: we just represented the number in bytes (for example, 0 with [0x0]). For negative nanoseconds, we added 999,999,999 to ensure they were non-negative. In the new implementation, we always add 999,999,999 to all nanoseconds to ensure consistency in encoding and to maintain lexicographical order when compared with other durations.
### Bug Fixes
* [#16023](https://github.com/cosmos/cosmos-sdk/pull/16023) Fix bugs introduced by lack of CI tests in [#15138](https://github.com/cosmos/cosmos-sdk/pull/15138) and [#15813](https://github.com/cosmos/cosmos-sdk/pull/15813). This changes the duration encoding in [#15138](https://github.com/cosmos/cosmos-sdk/pull/15138) to correctly order values with negative nanos.

View File

@ -1,7 +0,0 @@
codegen:
	go install ./cmd/protoc-gen-go-cosmos-orm
	go install ./cmd/protoc-gen-go-cosmos-orm-proto
	# generate .proto files first
	(cd internal; buf generate --template buf.proto.gen.yaml)
	# generate go code
	(cd internal; buf generate)

View File

@ -1,329 +0,0 @@
# ORM
The Cosmos SDK ORM is a state management library that provides a rich, but opinionated set of tools for managing a
module's state. It provides support for:
* type safe management of state
* multipart keys
* secondary indexes
* unique indexes
* easy prefix and range queries
* automatic genesis import/export
* automatic query services for clients, including support for light client proofs (still in development)
* indexing state data in external databases (still in development)
## Design and Philosophy
The ORM's data model is inspired by the relational data model found in SQL databases. The core abstraction is a table
with a primary key and optional secondary indexes.
Because the Cosmos SDK uses protobuf as its encoding layer, ORM tables are defined directly in .proto files using
protobuf options. Each table is defined by a single protobuf `message` type and a schema of multiple tables is
represented by a single .proto file.
Table structure is specified in the same file where messages are defined in order to make it easy to focus on better
design of the state layer. Because blockchain state layout is part of the public API for clients (TODO: link to docs on
light client proofs), it is important to think about the state layout as being part of the public API of a module.
Changing the state layout actually breaks clients, so it is ideal to think through it carefully up front and to aim for
a design that will eliminate or minimize breaking changes down the road. Also, good design of state enables building
more performant and sophisticated applications. Providing users with a set of tools inspired by relational databases
which have a long history of database design best practices and allowing schema to be specified declaratively in a
single place are design choices the ORM makes to enable better design and more durable APIs.
Also, by only supporting the table abstraction as opposed to key-value pair maps, it is easy to add new
columns/fields to any data structure without causing a breaking change, and the data structures can easily be indexed in
any off-the-shelf SQL database for more sophisticated queries.
The encoding of fields in keys is designed to support ordered iteration for all protobuf primitive field types
except for `bytes` as well as the well-known types `google.protobuf.Timestamp` and `google.protobuf.Duration`. Encodings
are optimized for storage space when it makes sense (see the documentation in `cosmos/orm/v1/orm.proto` for more details)
and table rows do not use extra storage space to store key fields in the value.
We recommend that users of the ORM attempt to follow database design best practices such as
[normalization](https://en.wikipedia.org/wiki/Database_normalization) (at least 1NF).
For instance, defining `repeated` fields in a table is considered an anti-pattern because it breaks first normal form (1NF).
Although we support `repeated` fields in tables, they cannot be used as key fields for this reason. This may seem
restrictive, but years of best practice (and also experience in the SDK) have shown that following this pattern
leads to easier-to-maintain schemas.
To illustrate the motivation for these principles with an example from the SDK, historically balances were stored
as a mapping from account -> map of denom to amount. This did not scale well because an account with 100 token balances
needed to be encoded/decoded every time a single coin balance changed. Now balances are stored as account,denom -> amount
as in the `Balance` example below. With the ORM's data model, if we wanted to add a new field to `Balance` such as
`unlocked_balance` (if vesting accounts were redesigned in this way), it would be easy to add it to this table without
requiring a data migration. Because of the ORM's optimizations, the account and denom are only stored in the key part
of storage and not in the value leading to both a flexible data model and efficient usage of storage.
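For illustration, here is a sketch of what that would look like, building on the `Balance` table defined in the next
section. The `unlocked_balance` field is purely hypothetical and not part of any actual SDK module; the point is that
only the value part of the row changes, so no key migration is needed:

```protobuf
message Balance {
  option (cosmos.orm.v1.table) = {
    id: 1
    primary_key: { fields: "account,denom" }
  };

  bytes account = 1;
  string denom = 2;
  uint64 amount = 3;
  // hypothetical new value field; adding it does not change the key layout
  uint64 unlocked_balance = 4;
}
```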
## Defining Tables
To define a table:
1) create a .proto file to describe the module's state (naming it `state.proto` is recommended for consistency),
and import "cosmos/orm/v1/orm.proto", ex:
```protobuf
syntax = "proto3";
package bank_example;
import "cosmos/orm/v1/orm.proto";
```
2) define a `message` for the table, ex:
```protobuf
message Balance {
  bytes account = 1;
  string denom = 2;
  uint64 balance = 3;
}
```
3) add the `cosmos.orm.v1.table` option to the table and give the table an `id` unique within this .proto file:
```protobuf
message Balance {
  option (cosmos.orm.v1.table) = {
    id: 1
  };

  bytes account = 1;
  string denom = 2;
  uint64 balance = 3;
}
```
4) define the primary key field or fields, as a comma-separated list of the fields from the message which should make
up the primary key:
```protobuf
message Balance {
  option (cosmos.orm.v1.table) = {
    id: 1
    primary_key: { fields: "account,denom" }
  };

  bytes account = 1;
  string denom = 2;
  uint64 balance = 3;
}
```
5) add any desired secondary indexes by specifying an `id` unique within the table and a comma-separated list of the
index fields:
```protobuf
message Balance {
  option (cosmos.orm.v1.table) = {
    id: 1;
    primary_key: { fields: "account,denom" }
    index: { id: 1 fields: "denom" } // this allows querying for the accounts which own a denom
  };

  bytes account = 1;
  string denom = 2;
  uint64 amount = 3;
}
```
### Auto-incrementing Primary Keys
A common pattern in SDK modules and in database design is to define tables with a single integer `id` field with an
automatically generated primary key. In the ORM we can do this by setting the `auto_increment` option to `true` on the
primary key, ex:
```protobuf
message Account {
  option (cosmos.orm.v1.table) = {
    id: 2;
    primary_key: { fields: "id", auto_increment: true }
  };

  uint64 id = 1;
  bytes address = 2;
}
```
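As a hedged sketch of how this is used, the table generated for an auto-increment key conventionally exposes an insert
variant that returns the newly assigned id, and a `LastInsertedSequence` getter for reading the current sequence
([#15320](https://github.com/cosmos/cosmos-sdk/pull/15320)). The exact method names below are assumptions based on this
ORM's generated-code conventions rather than something this README spells out:

```go
// Hypothetical usage of the generated AccountTable for the auto-increment example above.
func createAccount(ctx context.Context, accounts AccountTable, addr []byte) (uint64, error) {
	// InsertReturningId (assumed name) inserts the row and reports the id the ORM assigned.
	id, err := accounts.InsertReturningId(ctx, &Account{Address: addr})
	if err != nil {
		return 0, err
	}
	// LastInsertedSequence reports the most recently assigned auto-increment id.
	if _, err := accounts.LastInsertedSequence(ctx); err != nil {
		return 0, err
	}
	return id, nil
}
```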
### Unique Indexes
A unique index can be added by setting the `unique` option to `true` on an index, ex:
```protobuf
message Account {
  option (cosmos.orm.v1.table) = {
    id: 2;
    primary_key: { fields: "id", auto_increment: true }
    index: {id: 1, fields: "address", unique: true}
  };

  uint64 id = 1;
  bytes address = 2;
}
```
### Singletons
The ORM also supports a special type of table with only one row called a `singleton`. This can be used for storing
module parameters. Singletons only need to define a unique `id`, which cannot conflict with the id of any other
table or singleton in the same .proto file. Ex:
```protobuf
message Params {
  option (cosmos.orm.v1.singleton) = {
    id: 3;
  };

  google.protobuf.Duration voting_period = 1;
  uint64 min_threshold = 2;
}
```
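As a rough sketch, assuming the generated singleton accessor follows the same `Get`/`Save` pattern as the generated
tables shown later in this README (the exact signatures are an assumption, not documented here), the params singleton
might be used like this:

```go
// Hypothetical sketch of reading and updating the Params singleton through the
// store generated for state.proto (see the Keeper/StateStore wiring below).
func (k Keeper) SetVotingPeriod(ctx context.Context, period *durationpb.Duration) error {
	params, err := k.db.ParamsTable().Get(ctx) // singletons have no key fields
	if err != nil {
		return err
	}
	params.VotingPeriod = period
	return k.db.ParamsTable().Save(ctx, params)
}
```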
## Running Codegen
NOTE: the ORM will only work with protobuf code that implements the [google.golang.org/protobuf](https://pkg.go.dev/google.golang.org/protobuf)
API. That means it will not work with code generated using gogo-proto.
To install the ORM's code generator, run:
```shell
go install cosmossdk.io/orm/cmd/protoc-gen-go-cosmos-orm@latest
```
The recommended way to run the code generator is to use [buf build](https://docs.buf.build/build/usage).
This is an example `buf.gen.yaml` that runs `protoc-gen-go`, `protoc-gen-go-grpc` and `protoc-gen-go-cosmos-orm`
using buf managed mode:
```yaml
version: v1
managed:
  enabled: true
  go_package_prefix:
    default: foo.bar/api # the go package prefix of your package
    override:
      buf.build/cosmos/cosmos-sdk: cosmossdk.io/api # required to import the Cosmos SDK api module
plugins:
  - name: go
    out: .
    opt: paths=source_relative
  - name: go-grpc
    out: .
    opt: paths=source_relative
  - name: go-cosmos-orm
    out: .
    opt: paths=source_relative
```
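With a `buf.gen.yaml` like this in place, code generation is typically run with the buf CLI from the directory
containing your `buf.yaml` (the exact layout depends on your repository):

```shell
buf generate
```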
## Using the ORM in a module
### Initialization
To use the ORM in a module, first create a `ModuleSchemaDescriptor`. This tells the ORM which .proto files have defined
an ORM schema and assigns them all a unique non-zero id. Ex:
```go
var MyModuleSchema = &ormv1alpha1.ModuleSchemaDescriptor{
	SchemaFile: []*ormv1alpha1.ModuleSchemaDescriptor_FileEntry{
		{
			Id:            1,
			ProtoFileName: mymodule.File_my_module_state_proto.Path(),
		},
	},
}
```
The generated code for a file named `state.proto` will contain a `StateStore` interface and a `NewStateStore`
constructor that takes a parameter of type `ormdb.ModuleDB`. Add a reference to `StateStore`
to your module's keeper struct. Ex:
```go
type Keeper struct {
	db StateStore
}
```
Then instantiate the `StateStore` via an `ormdb.ModuleDB` that is constructed from the `ModuleSchemaDescriptor`
above and one or more store services from `cosmossdk.io/core/store`. Ex:
```go
func NewKeeper(storeService store.KVStoreService) (*Keeper, error) {
	modDb, err := ormdb.NewModuleDB(MyModuleSchema, ormdb.ModuleDBOptions{KVStoreService: storeService})
	if err != nil {
		return nil, err
	}

	db, err := NewStateStore(modDb)
	if err != nil {
		return nil, err
	}

	return &Keeper{db: db}, nil
}
```
### Using the generated code
The generated code for the ORM contains methods for inserting, updating, deleting and querying table entries.
For each table in a .proto file, there is a type-safe table interface implemented in generated code. For instance,
for a table named `Balance` there should be a `BalanceTable` interface that looks like this:
```go
type BalanceTable interface {
	Insert(ctx context.Context, balance *Balance) error
	Update(ctx context.Context, balance *Balance) error
	Save(ctx context.Context, balance *Balance) error
	Delete(ctx context.Context, balance *Balance) error
	Has(ctx context.Context, account []byte, denom string) (found bool, err error)
	// Get returns nil and an error which responds true to ormerrors.IsNotFound() if the record was not found.
	Get(ctx context.Context, account []byte, denom string) (*Balance, error)
	List(ctx context.Context, prefixKey BalanceIndexKey, opts ...ormlist.Option) (BalanceIterator, error)
	ListRange(ctx context.Context, from, to BalanceIndexKey, opts ...ormlist.Option) (BalanceIterator, error)
	DeleteBy(ctx context.Context, prefixKey BalanceIndexKey) error
	DeleteRange(ctx context.Context, from, to BalanceIndexKey) error

	doNotImplement()
}
```
This `BalanceTable` should be accessible from the `StateStore` interface (assuming our file is named `state.proto`)
via a `BalanceTable()` accessor method. If all the above example tables/singletons were in the same `state.proto`,
then `StateStore` would get generated like this:
```go
type StateStore interface {
	BalanceTable() BalanceTable
	AccountTable() AccountTable
	ParamsTable() ParamsTable

	doNotImplement()
}
```
So to work with the `BalanceTable` in a keeper method we could use code like this:
```go
func (k Keeper) AddBalance(ctx context.Context, acct []byte, denom string, amount uint64) error {
	balance, err := k.db.BalanceTable().Get(ctx, acct, denom)
	if err != nil && !ormerrors.IsNotFound(err) {
		return err
	}

	if balance == nil {
		balance = &Balance{
			Account: acct,
			Denom:   denom,
			Amount:  amount,
		}
	} else {
		balance.Amount = balance.Amount + amount
	}

	return k.db.BalanceTable().Save(ctx, balance)
}
```
`List` methods take `IndexKey` parameters. For instance, `BalanceTable.List` takes a `BalanceIndexKey`, which
lets us represent index keys for the different indexes (primary and secondary) on the `Balance` table. The primary key
in the `Balance` table gets a struct `BalanceAccountDenomIndexKey` and the first index gets an index key `BalanceDenomIndexKey`.
If we wanted to list all the denoms and amounts that an account holds, we would use `BalanceAccountDenomIndexKey`
with a `List` query just on the account prefix. Ex:
```go
it, err := keeper.db.BalanceTable().List(ctx, BalanceAccountDenomIndexKey{}.WithAccount(acct))
```
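A minimal sketch of consuming the returned iterator follows; it assumes the generated `BalanceIterator` exposes the
usual `Next`/`Value`/`Close` methods of this ORM's iterators, which this README does not spell out, so treat the exact
shape as an assumption:

```go
it, err := keeper.db.BalanceTable().List(ctx, BalanceAccountDenomIndexKey{}.WithAccount(acct))
if err != nil {
	return err
}
defer it.Close()

for it.Next() {
	balance, err := it.Value()
	if err != nil {
		return err
	}
	// each iteration yields one denom/amount row for this account
	fmt.Println(balance.Denom, balance.Amount)
}
```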

View File

@ -1,11 +0,0 @@
package main
import (
"google.golang.org/protobuf/compiler/protogen"
"cosmossdk.io/orm/internal/codegen"
)
func main() {
protogen.Options{}.Run(codegen.QueryProtoPluginRunner)
}

View File

@ -1,11 +0,0 @@
package main
import (
"google.golang.org/protobuf/compiler/protogen"
"cosmossdk.io/orm/internal/codegen"
)
func main() {
protogen.Options{}.Run(codegen.ORMPluginRunner)
}

View File

@ -1,3 +0,0 @@
// Package encoding defines the core types and algorithms for encoding and decoding
// protobuf objects and values to/from ORM key-value pairs.
package encoding

View File

@ -1,51 +0,0 @@
package encodeutil
import (
"bytes"
"encoding/binary"
"io"
"reflect"
"google.golang.org/protobuf/reflect/protoreflect"
)
// SkipPrefix skips the provided prefix in the reader or returns an error.
// This is used for efficient logical decoding of keys.
func SkipPrefix(r *bytes.Reader, prefix []byte) error {
n := len(prefix)
// we skip checking the prefix for performance reasons because we assume
// that it was checked by the caller
_, err := r.Seek(int64(n), io.SeekCurrent)
return err
}
// AppendVarUInt32 creates a new key prefix, by encoding and appending a
// var-uint32 to the provided prefix.
func AppendVarUInt32(prefix []byte, x uint32) []byte {
prefixLen := len(prefix)
res := make([]byte, prefixLen+binary.MaxVarintLen32)
copy(res, prefix)
n := binary.PutUvarint(res[prefixLen:], uint64(x))
return res[:prefixLen+n]
}
// ValuesOf takes the arguments and converts them to protoreflect.Value's.
func ValuesOf(values ...interface{}) []protoreflect.Value {
n := len(values)
res := make([]protoreflect.Value, n)
for i := 0; i < n; i++ {
// we catch the case of proto messages here and call ProtoReflect.
// this allows us to use imported messages, such as timestamppb.Timestamp
// in iterators.
value := values[i]
if v, ok := value.(protoreflect.ProtoMessage); ok {
if !reflect.ValueOf(value).IsNil() {
value = v.ProtoReflect()
} else {
value = nil
}
}
res[i] = protoreflect.ValueOf(value)
}
return res
}

View File

@ -1,60 +0,0 @@
package ormfield
import (
"io"
"google.golang.org/protobuf/reflect/protoreflect"
)
// BoolCodec encodes a bool value as a single byte 0 or 1.
type BoolCodec struct{}
func (b BoolCodec) Decode(r Reader) (protoreflect.Value, error) {
x, err := r.ReadByte()
return protoreflect.ValueOfBool(x != 0), err
}
var (
zeroBz = []byte{0}
oneBz = []byte{1}
)
func (b BoolCodec) Encode(value protoreflect.Value, w io.Writer) error {
var err error
if !value.IsValid() || !value.Bool() {
_, err = w.Write(zeroBz)
} else {
_, err = w.Write(oneBz)
}
return err
}
func (b BoolCodec) Compare(v1, v2 protoreflect.Value) int {
var b1, b2 bool
if v1.IsValid() {
b1 = v1.Bool()
}
if v2.IsValid() {
b2 = v2.Bool()
}
switch {
case b1 == b2:
return 0
case b1:
return -1
default:
return 1
}
}
func (b BoolCodec) IsOrdered() bool {
return false
}
func (b BoolCodec) FixedBufferSize() int {
return 1
}
func (b BoolCodec) ComputeBufferSize(protoreflect.Value) (int, error) {
return b.FixedBufferSize(), nil
}

View File

@ -1,121 +0,0 @@
package ormfield
import (
"bytes"
"encoding/binary"
"io"
"google.golang.org/protobuf/reflect/protoreflect"
)
// BytesCodec encodes bytes as raw bytes. It errors if the byte array is longer
// than 255 bytes.
type BytesCodec struct{}
func (b BytesCodec) FixedBufferSize() int {
return -1
}
// ComputeBufferSize returns the bytes size of the value.
func (b BytesCodec) ComputeBufferSize(value protoreflect.Value) (int, error) {
return bytesSize(value), nil
}
func bytesSize(value protoreflect.Value) int {
if !value.IsValid() {
return 0
}
return len(value.Bytes())
}
func (b BytesCodec) IsOrdered() bool {
return false
}
func (b BytesCodec) Decode(r Reader) (protoreflect.Value, error) {
bz, err := io.ReadAll(r)
return protoreflect.ValueOfBytes(bz), err
}
func (b BytesCodec) Encode(value protoreflect.Value, w io.Writer) error {
if !value.IsValid() {
return nil
}
_, err := w.Write(value.Bytes())
return err
}
func (b BytesCodec) Compare(v1, v2 protoreflect.Value) int {
return compareBytes(v1, v2)
}
// NonTerminalBytesCodec encodes bytes as raw bytes prefixed by a varint
// length. It errors if the byte array is longer than 255 bytes.
type NonTerminalBytesCodec struct{}
func (b NonTerminalBytesCodec) FixedBufferSize() int {
return -1
}
// ComputeBufferSize returns the bytes size of the value plus the length of the
// varint length-prefix.
func (b NonTerminalBytesCodec) ComputeBufferSize(value protoreflect.Value) (int, error) {
n := bytesSize(value)
prefixLen := 1
// we use varint, if the first bit of a byte is 1 then we need to signal continuation
for n >= 0x80 {
prefixLen++
n >>= 7
}
return n + prefixLen, nil
}
func (b NonTerminalBytesCodec) IsOrdered() bool {
return false
}
func (b NonTerminalBytesCodec) Compare(v1, v2 protoreflect.Value) int {
return compareBytes(v1, v2)
}
func (b NonTerminalBytesCodec) Decode(r Reader) (protoreflect.Value, error) {
n, err := binary.ReadUvarint(r)
if err != nil {
return protoreflect.Value{}, err
}
if n == 0 {
return protoreflect.ValueOfBytes([]byte{}), nil
}
bz := make([]byte, n)
_, err = r.Read(bz)
return protoreflect.ValueOfBytes(bz), err
}
func (b NonTerminalBytesCodec) Encode(value protoreflect.Value, w io.Writer) error {
var bz []byte
if value.IsValid() {
bz = value.Bytes()
}
n := len(bz)
var prefix [binary.MaxVarintLen64]byte
prefixLen := binary.PutUvarint(prefix[:], uint64(n))
_, err := w.Write(prefix[:prefixLen])
if err != nil {
return err
}
_, err = w.Write(bz)
return err
}
func compareBytes(v1, v2 protoreflect.Value) int {
var bz1, bz2 []byte
if v1.IsValid() {
bz1 = v1.Bytes()
}
if v2.IsValid() {
bz2 = v2.Bytes()
}
return bytes.Compare(bz1, bz2)
}

View File

@ -1,113 +0,0 @@
package ormfield
import (
"io"
"google.golang.org/protobuf/reflect/protoreflect"
"google.golang.org/protobuf/types/known/durationpb"
"google.golang.org/protobuf/types/known/timestamppb"
"cosmossdk.io/orm/types/ormerrors"
)
// Codec defines an interface for decoding and encoding values in ORM index keys.
type Codec interface {
// Decode decodes a value in a key.
Decode(r Reader) (protoreflect.Value, error)
// Encode encodes a value in a key.
Encode(value protoreflect.Value, w io.Writer) error
// Compare compares two values of this type and should primarily be used
// for testing.
Compare(v1, v2 protoreflect.Value) int
// IsOrdered returns true if callers can always assume that this ordering
// is suitable for sorted iteration.
IsOrdered() bool
// FixedBufferSize returns a positive value if encoders should assume a
// fixed size buffer for encoding. Encoders will use at most this much size
// to encode the value.
FixedBufferSize() int
// ComputeBufferSize estimates the buffer size needed to encode the field.
// Encoders will use at most this much size to encode the value.
ComputeBufferSize(value protoreflect.Value) (int, error)
}
type Reader interface {
io.Reader
io.ByteReader
}
var (
timestampMsgType = (&timestamppb.Timestamp{}).ProtoReflect().Type()
timestampFullName = timestampMsgType.Descriptor().FullName()
durationMsgType = (&durationpb.Duration{}).ProtoReflect().Type()
durationFullName = durationMsgType.Descriptor().FullName()
)
// GetCodec returns the Codec for the provided field if one is defined.
// nonTerminal should be set to true if this value is being encoded as a
// non-terminal segment of a multi-part key.
func GetCodec(field protoreflect.FieldDescriptor, nonTerminal bool) (Codec, error) {
if field == nil {
return nil, ormerrors.InvalidKeyField.Wrap("nil field")
}
if field.IsList() {
return nil, ormerrors.InvalidKeyField.Wrapf("repeated field %s", field.FullName())
}
if field.ContainingOneof() != nil {
return nil, ormerrors.InvalidKeyField.Wrapf("oneof field %s", field.FullName())
}
if field.HasOptionalKeyword() {
return nil, ormerrors.InvalidKeyField.Wrapf("optional field %s", field.FullName())
}
switch field.Kind() {
case protoreflect.BytesKind:
if nonTerminal {
return NonTerminalBytesCodec{}, nil
}
return BytesCodec{}, nil
case protoreflect.StringKind:
if nonTerminal {
return NonTerminalStringCodec{}, nil
}
return StringCodec{}, nil
case protoreflect.Uint32Kind:
return CompactUint32Codec{}, nil
case protoreflect.Fixed32Kind:
return FixedUint32Codec{}, nil
case protoreflect.Uint64Kind:
return CompactUint64Codec{}, nil
case protoreflect.Fixed64Kind:
return FixedUint64Codec{}, nil
case protoreflect.Int32Kind, protoreflect.Sint32Kind, protoreflect.Sfixed32Kind:
return Int32Codec{}, nil
case protoreflect.Int64Kind, protoreflect.Sint64Kind, protoreflect.Sfixed64Kind:
return Int64Codec{}, nil
case protoreflect.BoolKind:
return BoolCodec{}, nil
case protoreflect.EnumKind:
return EnumCodec{}, nil
case protoreflect.MessageKind:
msgName := field.Message().FullName()
switch msgName {
case timestampFullName:
return TimestampCodec{}, nil
case durationFullName:
return DurationCodec{}, nil
default:
return nil, ormerrors.InvalidKeyField.Wrapf("%s of type %s", field.FullName(), msgName)
}
default:
return nil, ormerrors.InvalidKeyField.Wrapf("%s of kind %s", field.FullName(), field.Kind())
}
}

View File

@ -1,173 +0,0 @@
package ormfield_test
import (
"bytes"
"fmt"
"testing"
"google.golang.org/protobuf/reflect/protoreflect"
"gotest.tools/v3/assert"
"pgregory.net/rapid"
"cosmossdk.io/orm/encoding/ormfield"
"cosmossdk.io/orm/internal/testutil"
"cosmossdk.io/orm/types/ormerrors"
)
func TestCodec(t *testing.T) {
for _, ks := range testutil.TestFieldSpecs {
testCodec(t, ks)
}
}
func testCodec(t *testing.T, spec testutil.TestFieldSpec) {
t.Helper()
t.Run(fmt.Sprintf("%s %v", spec.FieldName, false), func(t *testing.T) {
testCodecNT(t, spec.FieldName, spec.Gen, false)
})
t.Run(fmt.Sprintf("%s %v", spec.FieldName, true), func(t *testing.T) {
testCodecNT(t, spec.FieldName, spec.Gen, true)
})
}
func testCodecNT(t *testing.T, fname protoreflect.Name, generator *rapid.Generator[any], nonTerminal bool) {
t.Helper()
cdc, err := testutil.MakeTestCodec(fname, nonTerminal)
assert.NilError(t, err)
rapid.Check(t, func(t *rapid.T) {
x := protoreflect.ValueOf(generator.Draw(t, string(fname)))
bz1 := checkEncodeDecodeSize(t, x, cdc)
if cdc.IsOrdered() {
y := protoreflect.ValueOf(generator.Draw(t, fmt.Sprintf("%s 2", fname)))
bz2 := checkEncodeDecodeSize(t, y, cdc)
assert.Equal(t, cdc.Compare(x, y), bytes.Compare(bz1, bz2))
}
})
}
func checkEncodeDecodeSize(t *rapid.T, x protoreflect.Value, cdc ormfield.Codec) []byte {
buf := &bytes.Buffer{}
err := cdc.Encode(x, buf)
assert.NilError(t, err)
bz := buf.Bytes()
size, err := cdc.ComputeBufferSize(x)
assert.NilError(t, err)
assert.Assert(t, size >= len(bz))
fixedSize := cdc.FixedBufferSize()
if fixedSize > 0 {
assert.Equal(t, fixedSize, size)
}
y, err := cdc.Decode(bytes.NewReader(bz))
assert.NilError(t, err)
assert.Equal(t, 0, cdc.Compare(x, y))
return bz
}
func TestUnsupportedFields(t *testing.T) {
_, err := ormfield.GetCodec(nil, false)
assert.ErrorContains(t, err, ormerrors.InvalidKeyField.Error())
_, err = ormfield.GetCodec(testutil.GetTestField("repeated"), false)
assert.ErrorContains(t, err, ormerrors.InvalidKeyField.Error())
_, err = ormfield.GetCodec(testutil.GetTestField("map"), false)
assert.ErrorContains(t, err, ormerrors.InvalidKeyField.Error())
_, err = ormfield.GetCodec(testutil.GetTestField("msg"), false)
assert.ErrorContains(t, err, ormerrors.InvalidKeyField.Error())
_, err = ormfield.GetCodec(testutil.GetTestField("oneof"), false)
assert.ErrorContains(t, err, ormerrors.InvalidKeyField.Error())
}
func TestCompactUInt32(t *testing.T) {
var lastBz []byte
testEncodeDecode := func(x uint32, expectedLen int) {
bz := ormfield.EncodeCompactUint32(x)
assert.Equal(t, expectedLen, len(bz))
y, err := ormfield.DecodeCompactUint32(bytes.NewReader(bz))
assert.NilError(t, err)
assert.Equal(t, x, y)
assert.Assert(t, bytes.Compare(lastBz, bz) < 0)
lastBz = bz
}
testEncodeDecode(64, 2)
testEncodeDecode(16383, 2)
testEncodeDecode(16384, 3)
testEncodeDecode(4194303, 3)
testEncodeDecode(4194304, 4)
testEncodeDecode(1073741823, 4)
testEncodeDecode(1073741824, 5)
// randomized tests
rapid.Check(t, func(t *rapid.T) {
x := rapid.Uint32().Draw(t, "x")
y := rapid.Uint32().Draw(t, "y")
bx := ormfield.EncodeCompactUint32(x)
by := ormfield.EncodeCompactUint32(y)
cmp := bytes.Compare(bx, by)
switch {
case x < y:
assert.Equal(t, -1, cmp)
case x == y:
assert.Equal(t, 0, cmp)
default:
assert.Equal(t, 1, cmp)
}
x2, err := ormfield.DecodeCompactUint32(bytes.NewReader(bx))
assert.NilError(t, err)
assert.Equal(t, x, x2)
y2, err := ormfield.DecodeCompactUint32(bytes.NewReader(by))
assert.NilError(t, err)
assert.Equal(t, y, y2)
})
}
func TestCompactUInt64(t *testing.T) {
var lastBz []byte
testEncodeDecode := func(x uint64, expectedLen int) {
bz := ormfield.EncodeCompactUint64(x)
assert.Equal(t, expectedLen, len(bz))
y, err := ormfield.DecodeCompactUint64(bytes.NewReader(bz))
assert.NilError(t, err)
assert.Equal(t, x, y)
assert.Assert(t, bytes.Compare(lastBz, bz) < 0)
lastBz = bz
}
testEncodeDecode(64, 2)
testEncodeDecode(16383, 2)
testEncodeDecode(16384, 4)
testEncodeDecode(4194303, 4)
testEncodeDecode(4194304, 4)
testEncodeDecode(1073741823, 4)
testEncodeDecode(1073741824, 6)
testEncodeDecode(70368744177663, 6)
testEncodeDecode(70368744177664, 9)
// randomized tests
rapid.Check(t, func(t *rapid.T) {
x := rapid.Uint64().Draw(t, "x")
y := rapid.Uint64().Draw(t, "y")
bx := ormfield.EncodeCompactUint64(x)
by := ormfield.EncodeCompactUint64(y)
cmp := bytes.Compare(bx, by)
switch {
case x < y:
assert.Equal(t, -1, cmp)
case x == y:
assert.Equal(t, 0, cmp)
default:
assert.Equal(t, 1, cmp)
}
x2, err := ormfield.DecodeCompactUint64(bytes.NewReader(bx))
assert.NilError(t, err)
assert.Equal(t, x, x2)
y2, err := ormfield.DecodeCompactUint64(bytes.NewReader(by))
assert.NilError(t, err)
assert.Equal(t, y, y2)
})
}

View File

@ -1,324 +0,0 @@
package ormfield
import (
"fmt"
"io"
"google.golang.org/protobuf/reflect/protoreflect"
)
const (
DurationSecondsMin int64 = -315576000000
DurationSecondsMax int64 = 315576000000
DurationNanosMin = -999999999
DurationNanosMax = 999999999
)
// DurationCodec encodes google.protobuf.Duration values with the following
// encoding:
// - nil is encoded as []byte{0xFF}
// - seconds (which can range from -315,576,000,000 to +315,576,000,000) is encoded as 5 fixed bytes
// - nanos (which can range from 0 to 999,999,999 or -999,999,999 to 0 if seconds is negative) are encoded such
// that 999,999,999 is always added to nanos. This ensures that the encoded nanos are always >= 0. Additionally,
// by adding 999,999,999 to both positive and negative nanos, we guarantee that the lexicographical order is
// preserved when comparing the encoded values of two Durations:
// - []byte{0xBB, 0x9A, 0xC9, 0xFF} for zero nanos
// - 4 fixed bytes with the bit mask 0x80 applied to the first byte, with negative nanos scaled so that -999,999,999
// is encoded as 0 and -1 is encoded as 999,999,998
//
// When iterating over duration indexes, nil values will always be ordered last.
//
// Values for seconds and nanos outside the ranges specified by google.protobuf.Duration will be rejected.
type DurationCodec struct{}
func (d DurationCodec) Encode(value protoreflect.Value, w io.Writer) error {
// nil case
if !value.IsValid() {
_, err := w.Write(timestampDurationNilBz)
return err
}
seconds, nanos := getDurationSecondsAndNanos(value)
secondsInt := seconds.Int()
nanosInt := nanos.Int()
if err := validateDurationRanges(secondsInt, nanosInt); err != nil {
return err
}
// we subtract the min duration value to make sure secondsInt is always non-negative and starts at 0.
secondsInt -= DurationSecondsMin
err := encodeSeconds(secondsInt, w)
if err != nil {
return err
}
// we subtract the min duration value to make sure nanosInt is always non-negative and starts at 0.
nanosInt -= DurationNanosMin
return encodeNanos(nanosInt, w)
}
func (d DurationCodec) Decode(r Reader) (protoreflect.Value, error) {
isNil, seconds, err := decodeSeconds(r)
if isNil || err != nil {
return protoreflect.Value{}, err
}
// we add the min seconds duration value to get back the original value
seconds += DurationSecondsMin
msg := durationMsgType.New()
msg.Set(durationSecondsField, protoreflect.ValueOfInt64(seconds))
nanos, err := decodeNanos(r)
if err != nil {
return protoreflect.Value{}, err
}
// we add the min nanos duration value to get back the original value
nanos += DurationNanosMin
msg.Set(durationNanosField, protoreflect.ValueOfInt32(nanos))
return protoreflect.ValueOfMessage(msg), nil
}
func (d DurationCodec) Compare(v1, v2 protoreflect.Value) int {
if !v1.IsValid() {
if !v2.IsValid() {
return 0
}
return 1
}
if !v2.IsValid() {
return -1
}
s1, n1 := getDurationSecondsAndNanos(v1)
s2, n2 := getDurationSecondsAndNanos(v2)
c := compareInt(s1, s2)
if c != 0 {
return c
}
return compareInt(n1, n2)
}
func (d DurationCodec) IsOrdered() bool {
return true
}
func (d DurationCodec) FixedBufferSize() int {
return timestampDurationBufferSize
}
func (d DurationCodec) ComputeBufferSize(protoreflect.Value) (int, error) {
return timestampDurationBufferSize, nil
}
var (
durationSecondsField = durationMsgType.Descriptor().Fields().ByName("seconds")
durationNanosField = durationMsgType.Descriptor().Fields().ByName("nanos")
)
func getDurationSecondsAndNanos(value protoreflect.Value) (protoreflect.Value, protoreflect.Value) {
msg := value.Message()
return msg.Get(durationSecondsField), msg.Get(durationNanosField)
}
// validateDurationRanges checks whether seconds and nanoseconds are in valid ranges
// for a protobuf Duration type. It ensures that seconds are within the allowed range
// and, if seconds are zero or negative, verifies that nanoseconds are also within
// the valid range. For negative seconds, nanoseconds must be non-positive.
// Parameters:
// - seconds: The number of seconds component of the duration.
// - nanos: The number of nanoseconds component of the duration.
//
// Returns:
// - error: An error indicating if the duration components are out of range.
func validateDurationRanges(seconds, nanos int64) error {
if seconds < DurationSecondsMin || seconds > DurationSecondsMax {
return fmt.Errorf("duration seconds is out of range %d, must be between %d and %d", seconds, DurationSecondsMin, DurationSecondsMax)
}
if seconds == 0 {
if nanos < DurationNanosMin || nanos > DurationNanosMax {
return fmt.Errorf("duration nanos is out of range %d, must be between %d and %d", nanos, DurationNanosMin, DurationNanosMax)
}
} else if seconds < 0 {
if nanos < DurationNanosMin || nanos > 0 {
return fmt.Errorf("negative duration nanos is out of range %d, must be between %d and %d", nanos, DurationNanosMin, 0)
}
} else if nanos < 0 || nanos > DurationNanosMax {
return fmt.Errorf("duration nanos is out of range %d, must be between %d and %d", nanos, 0, DurationNanosMax)
}
return nil
}
// DurationV0Codec encodes a google.protobuf.Duration value as 12 bytes using
// Int64Codec for seconds followed by Int32Codec for nanos. This allows for
// sorted iteration.
type DurationV0Codec struct{}
func (d DurationV0Codec) Decode(r Reader) (protoreflect.Value, error) {
seconds, err := int64Codec.Decode(r)
if err != nil {
return protoreflect.Value{}, err
}
nanos, err := int32Codec.Decode(r)
if err != nil {
return protoreflect.Value{}, err
}
msg := durationMsgType.New()
msg.Set(durationSecondsField, seconds)
msg.Set(durationNanosField, nanos)
return protoreflect.ValueOfMessage(msg), nil
}
func (d DurationV0Codec) Encode(value protoreflect.Value, w io.Writer) error {
seconds, nanos := getDurationSecondsAndNanos(value)
err := int64Codec.Encode(seconds, w)
if err != nil {
return err
}
return int32Codec.Encode(nanos, w)
}
func (d DurationV0Codec) Compare(v1, v2 protoreflect.Value) int {
s1, n1 := getDurationSecondsAndNanos(v1)
s2, n2 := getDurationSecondsAndNanos(v2)
c := compareInt(s1, s2)
if c != 0 {
return c
}
return compareInt(n1, n2)
}
func (d DurationV0Codec) IsOrdered() bool {
return true
}
func (d DurationV0Codec) FixedBufferSize() int {
return 12
}
func (d DurationV0Codec) ComputeBufferSize(protoreflect.Value) (int, error) {
return d.FixedBufferSize(), nil
}
// DurationV1Codec encodes google.protobuf.Duration values with the following
// encoding:
// - nil is encoded as []byte{0xFF}
// - seconds (which can range from -315,576,000,000 to +315,576,000,000) is encoded as 5 fixed bytes
// - nanos (which can range from 0 to 999,999,999 or -999,999,999 to 0 if seconds is negative) is encoded as:
// - []byte{0x0} for zero nanos
// - 4 fixed bytes with the bit mask 0xC0 applied to the first byte, with negative nanos scaled so that -999,999,999
// is encoded as 1 and -1 is encoded as 999,999,999
//
// When iterating over duration indexes, nil values will always be ordered last.
//
// Values for seconds and nanos outside the ranges specified by google.protobuf.Duration will be rejected.
type DurationV1Codec struct{}
func (d DurationV1Codec) Encode(value protoreflect.Value, w io.Writer) error {
// nil case
if !value.IsValid() {
_, err := w.Write(timestampDurationNilBz)
return err
}
seconds, nanos := getDurationSecondsAndNanos(value)
secondsInt := seconds.Int()
if secondsInt < DurationSecondsMin || secondsInt > DurationSecondsMax {
return fmt.Errorf("duration seconds is out of range %d, must be between %d and %d", secondsInt, DurationSecondsMin, DurationSecondsMax)
}
negative := secondsInt < 0
// we subtract the min duration value to make sure secondsInt is always non-negative and starts at 0.
secondsInt -= DurationSecondsMin
err := encodeSeconds(secondsInt, w)
if err != nil {
return err
}
nanosInt := nanos.Int()
if nanosInt == 0 {
_, err = w.Write(timestampZeroNanosBz)
return err
}
if negative {
if nanosInt < DurationNanosMin || nanosInt > 0 {
return fmt.Errorf("negative duration nanos is out of range %d, must be between %d and %d", nanosInt, DurationNanosMin, 0)
}
nanosInt = DurationNanosMax + nanosInt + 1
} else if nanosInt < 0 || nanosInt > DurationNanosMax {
return fmt.Errorf("duration nanos is out of range %d, must be between %d and %d", nanosInt, 0, DurationNanosMax)
}
return encodeNanosV1(nanosInt, w)
}
func (d DurationV1Codec) Decode(r Reader) (protoreflect.Value, error) {
isNil, seconds, err := decodeSeconds(r)
if isNil || err != nil {
return protoreflect.Value{}, err
}
// we add the min duration value to get back the original value
seconds += DurationSecondsMin
negative := seconds < 0
msg := durationMsgType.New()
msg.Set(durationSecondsField, protoreflect.ValueOfInt64(seconds))
nanos, err := decodeNanosV1(r)
if err != nil {
return protoreflect.Value{}, err
}
if nanos == 0 {
return protoreflect.ValueOfMessage(msg), nil
}
if negative {
nanos = nanos - DurationNanosMax - 1
}
msg.Set(durationNanosField, protoreflect.ValueOfInt32(nanos))
return protoreflect.ValueOfMessage(msg), nil
}
func (d DurationV1Codec) Compare(v1, v2 protoreflect.Value) int {
if !v1.IsValid() {
if !v2.IsValid() {
return 0
}
return 1
}
if !v2.IsValid() {
return -1
}
s1, n1 := getDurationSecondsAndNanos(v1)
s2, n2 := getDurationSecondsAndNanos(v2)
c := compareInt(s1, s2)
if c != 0 {
return c
}
return compareInt(n1, n2)
}
func (d DurationV1Codec) IsOrdered() bool {
return true
}
func (d DurationV1Codec) FixedBufferSize() int {
return timestampDurationBufferSize
}
func (d DurationV1Codec) ComputeBufferSize(protoreflect.Value) (int, error) {
return timestampDurationBufferSize, nil
}

View File

@ -1,308 +0,0 @@
package ormfield_test
import (
"bytes"
"testing"
"time"
"google.golang.org/protobuf/reflect/protoreflect"
"google.golang.org/protobuf/types/known/durationpb"
"gotest.tools/v3/assert"
"cosmossdk.io/orm/encoding/ormfield"
)
func TestDurationNil(t *testing.T) {
t.Parallel()
cdc := ormfield.DurationCodec{}
buf := &bytes.Buffer{}
assert.NilError(t, cdc.Encode(protoreflect.Value{}, buf))
assert.Equal(t, 1, len(buf.Bytes()))
val, err := cdc.Decode(buf)
assert.NilError(t, err)
assert.Assert(t, !val.IsValid())
}
func TestDuration(t *testing.T) {
t.Parallel()
cdc := ormfield.DurationCodec{}
tt := []struct {
name string
seconds int64
nanos int32
wantLen int
}{
{
"no nanos",
100,
0,
9,
},
{
"with nanos",
3,
879468295,
9,
},
{
"min seconds, -1 nanos",
-315576000000,
-1,
9,
},
{
"min value",
-315576000000,
-999999999,
9,
},
{
"max value",
315576000000,
999999999,
9,
},
{
"max seconds, 1 nanos",
315576000000,
1,
9,
},
}
for _, tc := range tt {
t.Run(tc.name, func(t *testing.T) {
durPb := &durationpb.Duration{
Seconds: tc.seconds,
Nanos: tc.nanos,
}
val := protoreflect.ValueOfMessage(durPb.ProtoReflect())
buf := &bytes.Buffer{}
assert.NilError(t, cdc.Encode(val, buf))
assert.Equal(t, tc.wantLen, len(buf.Bytes()))
val2, err := cdc.Decode(buf)
assert.NilError(t, err)
assert.Equal(t, 0, cdc.Compare(val, val2))
})
}
}
func TestDurationOutOfRange(t *testing.T) {
t.Parallel()
cdc := ormfield.DurationCodec{}
tt := []struct {
name string
dur *durationpb.Duration
expectErr string
}{
{
name: "seconds too small",
dur: &durationpb.Duration{
Seconds: -315576000001,
Nanos: 0,
},
expectErr: "seconds is out of range",
},
{
name: "seconds too big",
dur: &durationpb.Duration{
Seconds: 315576000001,
Nanos: 0,
},
expectErr: "seconds is out of range",
},
{
name: "positive seconds nanos too big",
dur: &durationpb.Duration{
Seconds: 0,
Nanos: 1000000000,
},
expectErr: "nanos is out of range",
},
{
name: "negative seconds positive nanos",
dur: &durationpb.Duration{
Seconds: -1,
Nanos: 1,
},
expectErr: "negative duration nanos is out of range",
},
{
name: "negative seconds nanos too small",
dur: &durationpb.Duration{
Seconds: -1,
Nanos: -1000000000,
},
expectErr: "negative duration nanos is out of range",
},
}
for _, tc := range tt {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
val := protoreflect.ValueOfMessage(tc.dur.ProtoReflect())
buf := &bytes.Buffer{}
err := cdc.Encode(val, buf)
assert.ErrorContains(t, err, tc.expectErr)
})
}
}
func TestDurationCompare(t *testing.T) {
t.Parallel()
cdc := ormfield.DurationCodec{}
tt := []struct {
name string
dur1 *durationpb.Duration
dur2 *durationpb.Duration
want int
}{
{
name: "equal",
dur1: &durationpb.Duration{
Seconds: 1,
Nanos: 1,
},
dur2: &durationpb.Duration{
Seconds: 1,
Nanos: 1,
},
want: 0,
},
{
name: "seconds equal, dur1 nanos less than dur2 nanos",
dur1: &durationpb.Duration{
Seconds: 1,
Nanos: 1,
},
dur2: &durationpb.Duration{
Seconds: 1,
Nanos: 2,
},
want: -1,
},
{
name: "seconds equal, dur1 nanos greater than dur2 nanos",
dur1: &durationpb.Duration{
Seconds: 1,
Nanos: 2,
},
dur2: &durationpb.Duration{
Seconds: 1,
Nanos: 1,
},
want: 1,
},
{
name: "seconds less than",
dur1: &durationpb.Duration{
Seconds: 1,
Nanos: 1,
},
dur2: &durationpb.Duration{
Seconds: 2,
Nanos: 1,
},
want: -1,
},
{
name: "seconds greater than",
dur1: &durationpb.Duration{
Seconds: 2,
Nanos: 1,
},
dur2: &durationpb.Duration{
Seconds: 1,
Nanos: 1,
},
want: 1,
},
{
name: "negative seconds equal, dur1 nanos less than dur2 nanos",
dur1: &durationpb.Duration{
Seconds: -1,
Nanos: -2,
},
dur2: &durationpb.Duration{
Seconds: -1,
Nanos: -1,
},
want: -1,
},
{
name: "negative seconds equal, dur1 nanos zero",
dur1: &durationpb.Duration{
Seconds: -1,
Nanos: 0,
},
dur2: &durationpb.Duration{
Seconds: -1,
Nanos: -1,
},
want: 1,
},
{
name: "negative seconds equal, dur2 nanos zero",
dur1: &durationpb.Duration{
Seconds: -1,
Nanos: -1,
},
dur2: &durationpb.Duration{
Seconds: -1,
Nanos: 0,
},
want: -1,
},
{
name: "seconds equal and dur1 nanos min values",
dur1: &durationpb.Duration{
Seconds: ormfield.DurationSecondsMin,
Nanos: ormfield.DurationNanosMin,
},
dur2: &durationpb.Duration{
Seconds: ormfield.DurationSecondsMin,
Nanos: -1,
},
want: -1,
},
}
for _, tc := range tt {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
val1 := protoreflect.ValueOfMessage(tc.dur1.ProtoReflect())
val2 := protoreflect.ValueOfMessage(tc.dur2.ProtoReflect())
got := cdc.Compare(val1, val2)
assert.Equal(t, tc.want, got, "Compare(%v, %v)", tc.dur1, tc.dur2)
bz1 := encodeValue(t, cdc, val1)
bz2 := encodeValue(t, cdc, val2)
assert.Equal(t, tc.want, bytes.Compare(bz1, bz2), "bytes.Compare(%v, %v)", bz1, bz2)
})
}
t.Run("nanos", func(t *testing.T) {
t.Parallel()
dur, err := time.ParseDuration("3879468295ns")
assert.NilError(t, err)
durPb := durationpb.New(dur)
val := protoreflect.ValueOfMessage(durPb.ProtoReflect())
buf := &bytes.Buffer{}
assert.NilError(t, cdc.Encode(val, buf))
assert.Equal(t, 9, len(buf.Bytes()))
val2, err := cdc.Decode(buf)
assert.NilError(t, err)
assert.Equal(t, 0, cdc.Compare(val, val2))
})
}
func encodeValue(t *testing.T, cdc ormfield.Codec, val protoreflect.Value) []byte {
t.Helper()
buf := &bytes.Buffer{}
assert.NilError(t, cdc.Encode(val, buf))
return buf.Bytes()
}

View File

@ -1,57 +0,0 @@
package ormfield
import (
"encoding/binary"
"io"
"google.golang.org/protobuf/reflect/protoreflect"
)
// EnumCodec encodes enum values as varints.
type EnumCodec struct{}
func (e EnumCodec) Decode(r Reader) (protoreflect.Value, error) {
x, err := binary.ReadVarint(r)
return protoreflect.ValueOfEnum(protoreflect.EnumNumber(x)), err
}
func (e EnumCodec) Encode(value protoreflect.Value, w io.Writer) error {
var x protoreflect.EnumNumber
if value.IsValid() {
x = value.Enum()
}
buf := make([]byte, binary.MaxVarintLen32)
n := binary.PutVarint(buf, int64(x))
_, err := w.Write(buf[:n])
return err
}
func (e EnumCodec) Compare(v1, v2 protoreflect.Value) int {
var x, y protoreflect.EnumNumber
if v1.IsValid() {
x = v1.Enum()
}
if v2.IsValid() {
y = v2.Enum()
}
switch {
case x == y:
return 0
case x < y:
return -1
default:
return 1
}
}
func (e EnumCodec) IsOrdered() bool {
return false
}
func (e EnumCodec) FixedBufferSize() int {
return binary.MaxVarintLen32
}
func (e EnumCodec) ComputeBufferSize(protoreflect.Value) (int, error) {
return e.FixedBufferSize(), nil
}

View File

@ -1,52 +0,0 @@
package ormfield
import (
"encoding/binary"
"io"
"google.golang.org/protobuf/reflect/protoreflect"
)
// Int32Codec encodes 32-bit integers as big-endian unsigned 32-bit integers
// by adding the maximum value of int32 (2147483647) + 1 before encoding so
// that these values can be used for ordered iteration.
type Int32Codec struct{}
var int32Codec = Int32Codec{}
const (
int32Max = 2147483647
int32Offset = int32Max + 1
)
func (i Int32Codec) Decode(r Reader) (protoreflect.Value, error) {
var x uint32
err := binary.Read(r, binary.BigEndian, &x)
y := int64(x) - int32Offset
return protoreflect.ValueOfInt32(int32(y)), err
}
func (i Int32Codec) Encode(value protoreflect.Value, w io.Writer) error {
var x int64
if value.IsValid() {
x = value.Int()
}
x += int32Offset
return binary.Write(w, binary.BigEndian, uint32(x))
}
func (i Int32Codec) Compare(v1, v2 protoreflect.Value) int {
return compareInt(v1, v2)
}
func (i Int32Codec) IsOrdered() bool {
return true
}
func (i Int32Codec) FixedBufferSize() int {
return 4
}
func (i Int32Codec) ComputeBufferSize(protoreflect.Value) (int, error) {
return i.FixedBufferSize(), nil
}

View File

@ -1,78 +0,0 @@
package ormfield
import (
"encoding/binary"
"io"
"google.golang.org/protobuf/reflect/protoreflect"
)
// Int64Codec encodes 64-bit integers as big-endian unsigned 64-bit integers
// by adding the maximum value of int64 (9223372036854775807) + 1 before encoding so
// that these values can be used for ordered iteration.
type Int64Codec struct{}
var int64Codec = Int64Codec{}
const int64Max = 9223372036854775807
func (i Int64Codec) Decode(r Reader) (protoreflect.Value, error) {
var x uint64
err := binary.Read(r, binary.BigEndian, &x)
if x >= int64Max {
x = x - int64Max - 1
return protoreflect.ValueOfInt64(int64(x)), err
}
y := int64(x) - int64Max - 1
return protoreflect.ValueOfInt64(y), err
}
func (i Int64Codec) Encode(value protoreflect.Value, w io.Writer) error {
var x int64
if value.IsValid() {
x = value.Int()
}
if x >= -1 {
y := uint64(x) + int64Max + 1
return binary.Write(w, binary.BigEndian, y)
}
x += int64Max
x++
return binary.Write(w, binary.BigEndian, uint64(x))
}
func (i Int64Codec) Compare(v1, v2 protoreflect.Value) int {
return compareInt(v1, v2)
}
func (i Int64Codec) IsOrdered() bool {
return true
}
func (i Int64Codec) FixedBufferSize() int {
return 8
}
func (i Int64Codec) ComputeBufferSize(protoreflect.Value) (int, error) {
return i.FixedBufferSize(), nil
}
func compareInt(v1, v2 protoreflect.Value) int {
var x, y int64
if v1.IsValid() {
x = v1.Int()
}
if v2.IsValid() {
y = v2.Int()
}
switch {
case x == y:
return 0
case x < y:
return -1
default:
return 1
}
}

View File

@ -1,110 +0,0 @@
package ormfield
import (
"errors"
"fmt"
"io"
"strings"
"google.golang.org/protobuf/reflect/protoreflect"
)
// StringCodec encodes strings as raw bytes.
type StringCodec struct{}
func (s StringCodec) FixedBufferSize() int {
return -1
}
func (s StringCodec) ComputeBufferSize(value protoreflect.Value) (int, error) {
if !value.IsValid() {
return 0, nil
}
return len(value.String()), nil
}
func (s StringCodec) IsOrdered() bool {
return true
}
func (s StringCodec) Compare(v1, v2 protoreflect.Value) int {
return compareStrings(v1, v2)
}
func (s StringCodec) Decode(r Reader) (protoreflect.Value, error) {
bz, err := io.ReadAll(r)
return protoreflect.ValueOfString(string(bz)), err
}
func (s StringCodec) Encode(value protoreflect.Value, w io.Writer) error {
var x string
if value.IsValid() {
x = value.String()
}
_, err := w.Write([]byte(x))
return err
}
// NonTerminalStringCodec encodes strings as null-terminated raw bytes. Null
// values within strings will produce an error.
type NonTerminalStringCodec struct{}
func (s NonTerminalStringCodec) FixedBufferSize() int {
return -1
}
func (s NonTerminalStringCodec) ComputeBufferSize(value protoreflect.Value) (int, error) {
return len(value.String()) + 1, nil
}
func (s NonTerminalStringCodec) IsOrdered() bool {
return true
}
func (s NonTerminalStringCodec) Compare(v1, v2 protoreflect.Value) int {
return compareStrings(v1, v2)
}
func (s NonTerminalStringCodec) Decode(r Reader) (protoreflect.Value, error) {
var bz []byte
for {
b, err := r.ReadByte()
if b == 0 || errors.Is(err, io.EOF) {
return protoreflect.ValueOfString(string(bz)), err
}
bz = append(bz, b)
}
}
func (s NonTerminalStringCodec) Encode(value protoreflect.Value, w io.Writer) error {
var str string
if value.IsValid() {
str = value.String()
}
bz := []byte(str)
for _, b := range bz {
if b == 0 {
return fmt.Errorf("illegal null terminator found in index string: %s", str)
}
}
_, err := w.Write(bz)
if err != nil {
return err
}
_, err = w.Write(nullTerminator)
return err
}
var nullTerminator = []byte{0}
func compareStrings(v1, v2 protoreflect.Value) int {
var x, y string
if v1.IsValid() {
x = v1.String()
}
if v2.IsValid() {
y = v2.String()
}
return strings.Compare(x, y)
}

View File

@ -1,407 +0,0 @@
package ormfield
import (
"fmt"
"io"
"google.golang.org/protobuf/reflect/protoreflect"
)
// TimestampCodec encodes google.protobuf.Timestamp values with the following
// encoding:
// - nil is encoded as []byte{0xFF}
// - seconds (which can range from 0001-01-01T00:00:00Z to 9999-12-31T23:59:59Z) is encoded as 5 fixed bytes
// - nanos (which can range from 0 to 999,999,999 or -999,999,999 to 0 if seconds is negative) are encoded such
// that 999,999,999 is always added to nanos. This ensures that the encoded nanos are always >= 0. Additionally,
// by adding 999,999,999 to both positive and negative nanos, we guarantee that the lexicographical order is
// preserved when comparing the encoded values of two Timestamps.
//
// When iterating over timestamp indexes, nil values will always be ordered last.
//
// Values for seconds and nanos outside the ranges specified by google.protobuf.Timestamp will be rejected.
type TimestampCodec struct{}
const (
timestampDurationNilValue = 0xFF
timestampDurationZeroNanosValue = 0x0
timestampDurationBufferSize = 9
TimestampSecondsMin int64 = -62135596800
TimestampSecondsMax int64 = 253402300799
TimestampNanosMax = 999999999
)
var (
timestampDurationNilBz = []byte{timestampDurationNilValue}
timestampZeroNanosBz = []byte{timestampDurationZeroNanosValue}
)
func (t TimestampCodec) Encode(value protoreflect.Value, w io.Writer) error {
// nil case
if !value.IsValid() {
_, err := w.Write(timestampDurationNilBz)
return err
}
seconds, nanos := getTimestampSecondsAndNanos(value)
secondsInt := seconds.Int()
if secondsInt < TimestampSecondsMin || secondsInt > TimestampSecondsMax {
return fmt.Errorf("timestamp seconds is out of range %d, must be between %d and %d", secondsInt, TimestampSecondsMin, TimestampSecondsMax)
}
secondsInt -= TimestampSecondsMin
err := encodeSeconds(secondsInt, w)
if err != nil {
return err
}
nanosInt := nanos.Int()
if nanosInt == 0 {
_, err = w.Write(timestampZeroNanosBz)
return err
}
if nanosInt < 0 || nanosInt > TimestampNanosMax {
return fmt.Errorf("timestamp nanos is out of range %d, must be between %d and %d", secondsInt, 0, TimestampNanosMax)
}
return encodeNanos(nanosInt, w)
}
func encodeSeconds(secondsInt int64, w io.Writer) error {
var secondsBz [5]byte
// write the seconds buffer from the end to the front
for i := 4; i >= 0; i-- {
secondsBz[i] = byte(secondsInt)
secondsInt >>= 8
}
_, err := w.Write(secondsBz[:])
return err
}
func encodeNanos(nanosInt int64, w io.Writer) error {
var nanosBz [4]byte
for i := 3; i >= 0; i-- {
nanosBz[i] = byte(nanosInt)
nanosInt >>= 8
}
// This condition is crucial to ensure the function's correct behavior when dealing with a Timestamp or Duration encoding.
// Specifically, this function is bypassed for Timestamp values when their nanoseconds part is zero.
// In the decodeNanos function, there's a preliminary check for a zero first byte, which represents all values ≤ 16777215 (00000000 11111111 11111111 11111111).
// Without this adjustment (setting the top bit of the first byte via the 0x80 mask, which is 10000000 in binary), decodeNanos would incorrectly return 0 for any number ≤ 16777215,
// leading to inaccurate decoding of nanoseconds.
nanosBz[0] |= 0x80
_, err := w.Write(nanosBz[:])
return err
}
func (t TimestampCodec) Decode(r Reader) (protoreflect.Value, error) {
isNil, seconds, err := decodeSeconds(r)
if isNil || err != nil {
return protoreflect.Value{}, err
}
seconds += TimestampSecondsMin
msg := timestampMsgType.New()
msg.Set(timestampSecondsField, protoreflect.ValueOfInt64(seconds))
nanos, err := decodeNanos(r)
if err != nil {
return protoreflect.Value{}, err
}
if nanos == 0 {
return protoreflect.ValueOfMessage(msg), nil
}
msg.Set(timestampNanosField, protoreflect.ValueOfInt32(nanos))
return protoreflect.ValueOfMessage(msg), nil
}
func decodeSeconds(r Reader) (isNil bool, seconds int64, err error) {
b0, err := r.ReadByte()
if err != nil {
return false, 0, err
}
if b0 == timestampDurationNilValue {
return true, 0, nil
}
var secondsBz [4]byte
n, err := r.Read(secondsBz[:])
if err != nil {
return false, 0, err
}
if n < 4 {
return false, 0, io.EOF
}
seconds = int64(b0)
for i := 0; i < 4; i++ {
seconds <<= 8
seconds |= int64(secondsBz[i])
}
return false, seconds, nil
}
func decodeNanos(r Reader) (int32, error) {
b0, err := r.ReadByte()
if err != nil {
return 0, err
}
if b0 == timestampDurationZeroNanosValue {
return 0, nil
}
var nanosBz [3]byte
n, err := r.Read(nanosBz[:])
if err != nil {
return 0, err
}
if n < 3 {
return 0, io.EOF
}
// Clear the top bit that encodeNanos set on the first byte so that values ≤ 16777215
// (whose leading byte would otherwise be zero) are decoded back to their original value
// instead of being confused with the zero-nanos marker.
nanos := int32(b0) & 0x7F
for i := 0; i < 3; i++ {
nanos <<= 8
nanos |= int32(nanosBz[i])
}
return nanos, nil
}
func (t TimestampCodec) Compare(v1, v2 protoreflect.Value) int {
if !v1.IsValid() {
if !v2.IsValid() {
return 0
}
return 1
}
if !v2.IsValid() {
return -1
}
s1, n1 := getTimestampSecondsAndNanos(v1)
s2, n2 := getTimestampSecondsAndNanos(v2)
c := compareInt(s1, s2)
if c != 0 {
return c
}
return compareInt(n1, n2)
}
func (t TimestampCodec) IsOrdered() bool {
return true
}
func (t TimestampCodec) FixedBufferSize() int {
return timestampDurationBufferSize
}
func (t TimestampCodec) ComputeBufferSize(protoreflect.Value) (int, error) {
return timestampDurationBufferSize, nil
}
// TimestampV0Codec encodes a google.protobuf.Timestamp value as 12 bytes using
// Int64Codec for seconds followed by Int32Codec for nanos. This type does not
// encode nil values correctly, but is retained in order to allow users of the
// previous encoding to successfully migrate from this encoding to the new encoding
// specified by TimestampCodec.
type TimestampV0Codec struct{}
var (
timestampSecondsField = timestampMsgType.Descriptor().Fields().ByName("seconds")
timestampNanosField = timestampMsgType.Descriptor().Fields().ByName("nanos")
)
func getTimestampSecondsAndNanos(value protoreflect.Value) (protoreflect.Value, protoreflect.Value) {
msg := value.Message()
return msg.Get(timestampSecondsField), msg.Get(timestampNanosField)
}
func (t TimestampV0Codec) Decode(r Reader) (protoreflect.Value, error) {
seconds, err := int64Codec.Decode(r)
if err != nil {
return protoreflect.Value{}, err
}
nanos, err := int32Codec.Decode(r)
if err != nil {
return protoreflect.Value{}, err
}
msg := timestampMsgType.New()
msg.Set(timestampSecondsField, seconds)
msg.Set(timestampNanosField, nanos)
return protoreflect.ValueOfMessage(msg), nil
}
func (t TimestampV0Codec) Encode(value protoreflect.Value, w io.Writer) error {
seconds, nanos := getTimestampSecondsAndNanos(value)
err := int64Codec.Encode(seconds, w)
if err != nil {
return err
}
return int32Codec.Encode(nanos, w)
}
func (t TimestampV0Codec) Compare(v1, v2 protoreflect.Value) int {
s1, n1 := getTimestampSecondsAndNanos(v1)
s2, n2 := getTimestampSecondsAndNanos(v2)
c := compareInt(s1, s2)
if c != 0 {
return c
}
return compareInt(n1, n2)
}
func (t TimestampV0Codec) IsOrdered() bool {
return true
}
func (t TimestampV0Codec) FixedBufferSize() int {
return 12
}
func (t TimestampV0Codec) ComputeBufferSize(protoreflect.Value) (int, error) {
return t.FixedBufferSize(), nil
}
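// TimestampV1Codec is a prior revision of the timestamp encoding. It shares the
// seconds encoding with TimestampCodec but marks non-zero nanos with the top two
// bits of the first byte (see encodeNanosV1/decodeNanosV1) rather than a single bit.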
type TimestampV1Codec struct{}
func (t TimestampV1Codec) Encode(value protoreflect.Value, w io.Writer) error {
// nil case
if !value.IsValid() {
_, err := w.Write(timestampDurationNilBz)
return err
}
seconds, nanos := getTimestampSecondsAndNanos(value)
secondsInt := seconds.Int()
if secondsInt < TimestampSecondsMin || secondsInt > TimestampSecondsMax {
return fmt.Errorf("timestamp seconds is out of range %d, must be between %d and %d", secondsInt, TimestampSecondsMin, TimestampSecondsMax)
}
secondsInt -= TimestampSecondsMin
err := encodeSeconds(secondsInt, w)
if err != nil {
return err
}
nanosInt := nanos.Int()
if nanosInt == 0 {
_, err = w.Write(timestampZeroNanosBz)
return err
}
if nanosInt < 0 || nanosInt > TimestampNanosMax {
return fmt.Errorf("timestamp nanos is out of range %d, must be between %d and %d", secondsInt, 0, TimestampNanosMax)
}
return encodeNanosV1(nanosInt, w)
}
func (t TimestampV1Codec) Decode(r Reader) (protoreflect.Value, error) {
isNil, seconds, err := decodeSeconds(r)
if isNil || err != nil {
return protoreflect.Value{}, err
}
seconds += TimestampSecondsMin
msg := timestampMsgType.New()
msg.Set(timestampSecondsField, protoreflect.ValueOfInt64(seconds))
nanos, err := decodeNanosV1(r)
if err != nil {
return protoreflect.Value{}, err
}
if nanos == 0 {
return protoreflect.ValueOfMessage(msg), nil
}
msg.Set(timestampNanosField, protoreflect.ValueOfInt32(nanos))
return protoreflect.ValueOfMessage(msg), nil
}
func (t TimestampV1Codec) Compare(v1, v2 protoreflect.Value) int {
if !v1.IsValid() {
if !v2.IsValid() {
return 0
}
return 1
}
if !v2.IsValid() {
return -1
}
s1, n1 := getTimestampSecondsAndNanos(v1)
s2, n2 := getTimestampSecondsAndNanos(v2)
c := compareInt(s1, s2)
if c != 0 {
return c
}
return compareInt(n1, n2)
}
func (t TimestampV1Codec) IsOrdered() bool {
return true
}
func (t TimestampV1Codec) FixedBufferSize() int {
return timestampDurationBufferSize
}
func (t TimestampV1Codec) ComputeBufferSize(protoreflect.Value) (int, error) {
return timestampDurationBufferSize, nil
}
func encodeNanosV1(nanosInt int64, w io.Writer) error {
var nanosBz [4]byte
for i := 3; i >= 0; i-- {
nanosBz[i] = byte(nanosInt)
nanosInt >>= 8
}
nanosBz[0] |= 0xC0
_, err := w.Write(nanosBz[:])
return err
}
func decodeNanosV1(r Reader) (int32, error) {
b0, err := r.ReadByte()
if err != nil {
return 0, err
}
if b0 == timestampDurationZeroNanosValue {
return 0, nil
}
var nanosBz [3]byte
n, err := r.Read(nanosBz[:])
if err != nil {
return 0, err
}
if n < 3 {
return 0, io.EOF
}
nanos := int32(b0) & 0x3F // clear first two bits
for i := 0; i < 3; i++ {
nanos <<= 8
nanos |= int32(nanosBz[i])
}
return nanos, nil
}

View File

@ -1,125 +0,0 @@
package ormfield_test
import (
"bytes"
"testing"
"time"
"google.golang.org/protobuf/reflect/protoreflect"
"google.golang.org/protobuf/types/known/timestamppb"
"gotest.tools/v3/assert"
"cosmossdk.io/orm/encoding/ormfield"
)
func TestTimestamp(t *testing.T) {
t.Parallel()
cdc := ormfield.TimestampCodec{}
t.Run("nil value", func(t *testing.T) {
t.Parallel()
buf := &bytes.Buffer{}
assert.NilError(t, cdc.Encode(protoreflect.Value{}, buf))
assert.Equal(t, 1, len(buf.Bytes()))
val, err := cdc.Decode(buf)
assert.NilError(t, err)
assert.Assert(t, !val.IsValid())
})
t.Run("no nanos", func(t *testing.T) {
t.Parallel()
ts := timestamppb.New(time.Date(2022, 1, 1, 12, 30, 15, 0, time.UTC))
val := protoreflect.ValueOfMessage(ts.ProtoReflect())
buf := &bytes.Buffer{}
assert.NilError(t, cdc.Encode(val, buf))
assert.Equal(t, 6, len(buf.Bytes()))
val2, err := cdc.Decode(buf)
assert.NilError(t, err)
assert.Equal(t, 0, cdc.Compare(val, val2))
})
t.Run("nanos", func(t *testing.T) {
t.Parallel()
ts := timestamppb.New(time.Date(2022, 1, 1, 12, 30, 15, 235809753, time.UTC))
val := protoreflect.ValueOfMessage(ts.ProtoReflect())
buf := &bytes.Buffer{}
assert.NilError(t, cdc.Encode(val, buf))
assert.Equal(t, 9, len(buf.Bytes()))
val2, err := cdc.Decode(buf)
assert.NilError(t, err)
assert.Equal(t, 0, cdc.Compare(val, val2))
})
t.Run("min value", func(t *testing.T) {
t.Parallel()
ts := timestamppb.New(time.Date(1, 1, 1, 0, 0, 0, 0, time.UTC))
val := protoreflect.ValueOfMessage(ts.ProtoReflect())
buf := &bytes.Buffer{}
assert.NilError(t, cdc.Encode(val, buf))
assert.Equal(t, 6, len(buf.Bytes()))
assert.Assert(t, bytes.Equal(buf.Bytes(), []byte{0, 0, 0, 0, 0, 0})) // the minimum value should be all zeros
val2, err := cdc.Decode(buf)
assert.NilError(t, err)
assert.Equal(t, 0, cdc.Compare(val, val2))
})
t.Run("max value", func(t *testing.T) {
t.Parallel()
ts := timestamppb.New(time.Date(9999, 12, 31, 23, 59, 59, 999999999, time.UTC))
val := protoreflect.ValueOfMessage(ts.ProtoReflect())
buf := &bytes.Buffer{}
assert.NilError(t, cdc.Encode(val, buf))
assert.Equal(t, 9, len(buf.Bytes()))
val2, err := cdc.Decode(buf)
assert.NilError(t, err)
assert.Equal(t, 0, cdc.Compare(val, val2))
})
}
func TestTimestampOutOfRange(t *testing.T) {
t.Parallel()
cdc := ormfield.TimestampCodec{}
tt := []struct {
name string
ts *timestamppb.Timestamp
expectErr string
}{
{
name: "before min",
ts: timestamppb.New(time.Date(0, 1, 1, 0, 0, 0, 0, time.UTC)),
expectErr: "timestamp seconds is out of range",
},
{
name: "after max",
ts: timestamppb.New(time.Date(10000, 1, 1, 0, 0, 0, 0, time.UTC)),
expectErr: "timestamp seconds is out of range",
},
{
name: "nanos too small",
ts: &timestamppb.Timestamp{
Seconds: 0,
Nanos: -1,
},
expectErr: "timestamp nanos is out of range",
},
{
name: "nanos too big",
ts: &timestamppb.Timestamp{
Seconds: 0,
Nanos: 1000000000,
},
expectErr: "timestamp nanos is out of range",
},
}
for _, tc := range tt {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
val := protoreflect.ValueOfMessage(tc.ts.ProtoReflect())
buf := &bytes.Buffer{}
err := cdc.Encode(val, buf)
assert.ErrorContains(t, err, tc.expectErr)
})
}
}

View File

@ -1,188 +0,0 @@
package ormfield
import (
"encoding/binary"
"errors"
"io"
"google.golang.org/protobuf/reflect/protoreflect"
)
// FixedUint32Codec encodes uint32 values as 4-byte big-endian integers.
type FixedUint32Codec struct{}
func (u FixedUint32Codec) FixedBufferSize() int {
return 4
}
func (u FixedUint32Codec) ComputeBufferSize(protoreflect.Value) (int, error) {
return u.FixedBufferSize(), nil
}
func (u FixedUint32Codec) IsOrdered() bool {
return true
}
func (u FixedUint32Codec) Compare(v1, v2 protoreflect.Value) int {
return compareUint(v1, v2)
}
func (u FixedUint32Codec) Decode(r Reader) (protoreflect.Value, error) {
var x uint32
err := binary.Read(r, binary.BigEndian, &x)
return protoreflect.ValueOfUint32(x), err
}
func (u FixedUint32Codec) Encode(value protoreflect.Value, w io.Writer) error {
var x uint64
if value.IsValid() {
x = value.Uint()
}
return binary.Write(w, binary.BigEndian, uint32(x))
}
// CompactUint32Codec encodes uint32 values using EncodeCompactUint32.
type CompactUint32Codec struct{}
func (c CompactUint32Codec) Decode(r Reader) (protoreflect.Value, error) {
x, err := DecodeCompactUint32(r)
return protoreflect.ValueOfUint32(x), err
}
func (c CompactUint32Codec) Encode(value protoreflect.Value, w io.Writer) error {
var x uint64
if value.IsValid() {
x = value.Uint()
}
_, err := w.Write(EncodeCompactUint32(uint32(x)))
return err
}
func (c CompactUint32Codec) Compare(v1, v2 protoreflect.Value) int {
return compareUint(v1, v2)
}
func (c CompactUint32Codec) IsOrdered() bool {
return true
}
func (c CompactUint32Codec) FixedBufferSize() int {
return 5
}
func (c CompactUint32Codec) ComputeBufferSize(protoreflect.Value) (int, error) {
return c.FixedBufferSize(), nil
}
// EncodeCompactUint32 encodes uint32 values in 2,3,4 or 5 bytes.
// Unlike regular varints, this encoding is
// suitable for ordered prefix scans. The length of the output minus 2 is encoded
// in the first 2 bits of the first byte and the remaining bits are encoded with
// big-endian ordering.
// Values less than 2^14 will fit in 2 bytes, values less than 2^22 will
// fit in 3, and values less than 2^30 will fit in 4.
func EncodeCompactUint32(x uint32) []byte {
switch {
case x < 16384: // 2^14
buf := make([]byte, 2)
buf[0] = byte(x >> 8)
buf[1] = byte(x)
return buf
case x < 4194304: // 2^22
buf := make([]byte, 3)
buf[0] = 0x40
buf[0] |= byte(x >> 16)
buf[1] = byte(x >> 8)
buf[2] = byte(x)
return buf
case x < 1073741824: // 2^30
buf := make([]byte, 4)
buf[0] = 0x80
buf[0] |= byte(x >> 24)
buf[1] = byte(x >> 16)
buf[2] = byte(x >> 8)
buf[3] = byte(x)
return buf
default:
buf := make([]byte, 5)
buf[0] = 0xC0
buf[0] |= byte(x >> 26)
buf[1] = byte(x >> 18)
buf[2] = byte(x >> 10)
buf[3] = byte(x >> 2)
buf[4] = byte(x) & 0x3
return buf
}
}
// DecodeCompactUint32 decodes a uint32 encoded with EncodeCompactUint32.
func DecodeCompactUint32(reader io.Reader) (uint32, error) {
var buf [5]byte
n, err := reader.Read(buf[:1])
if err != nil {
return 0, err
}
if n < 1 {
return 0, io.ErrUnexpectedEOF
}
switch buf[0] >> 6 {
case 0:
n, err := reader.Read(buf[1:2])
if err != nil {
return 0, err
}
if n < 1 {
return 0, io.ErrUnexpectedEOF
}
x := uint32(buf[0]) << 8
x |= uint32(buf[1])
return x, nil
case 1:
n, err := reader.Read(buf[1:3])
if err != nil {
return 0, err
}
if n < 2 {
return 0, io.ErrUnexpectedEOF
}
x := (uint32(buf[0]) & 0x3F) << 16
x |= uint32(buf[1]) << 8
x |= uint32(buf[2])
return x, nil
case 2:
n, err := reader.Read(buf[1:4])
if err != nil {
return 0, err
}
if n < 3 {
return 0, io.ErrUnexpectedEOF
}
x := (uint32(buf[0]) & 0x3F) << 24
x |= uint32(buf[1]) << 16
x |= uint32(buf[2]) << 8
x |= uint32(buf[3])
return x, nil
case 3:
n, err := reader.Read(buf[1:5])
if err != nil {
return 0, err
}
if n < 4 {
return 0, io.ErrUnexpectedEOF
}
x := (uint32(buf[0]) & 0x3F) << 26
x |= uint32(buf[1]) << 18
x |= uint32(buf[2]) << 10
x |= uint32(buf[3]) << 2
x |= uint32(buf[4])
return x, nil
default:
return 0, errors.New("unexpected case")
}
}
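To make the ordering guarantee above concrete, here is a minimal standalone sketch (it assumes the cosmossdk.io/orm module removed by this commit is still resolvable; the values are arbitrary): the compact encodings grow with the value and compare correctly as raw bytes.

package main

import (
	"bytes"
	"fmt"

	"cosmossdk.io/orm/encoding/ormfield"
)

func main() {
	a := ormfield.EncodeCompactUint32(300)       // < 2^14  -> 2 bytes
	b := ormfield.EncodeCompactUint32(20_000)    // < 2^22  -> 3 bytes
	c := ormfield.EncodeCompactUint32(5_000_000) // < 2^30  -> 4 bytes
	fmt.Println(len(a), len(b), len(c))                           // 2 3 4
	fmt.Println(bytes.Compare(a, b) < 0, bytes.Compare(b, c) < 0) // true true
}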

View File

@ -1,218 +0,0 @@
package ormfield
import (
"encoding/binary"
"errors"
"io"
"google.golang.org/protobuf/reflect/protoreflect"
)
// FixedUint64Codec encodes uint64 values as 8-byte big-endian integers.
type FixedUint64Codec struct{}
func (u FixedUint64Codec) FixedBufferSize() int {
return 8
}
func (u FixedUint64Codec) ComputeBufferSize(protoreflect.Value) (int, error) {
return u.FixedBufferSize(), nil
}
func (u FixedUint64Codec) IsOrdered() bool {
return true
}
func (u FixedUint64Codec) Compare(v1, v2 protoreflect.Value) int {
return compareUint(v1, v2)
}
func (u FixedUint64Codec) Decode(r Reader) (protoreflect.Value, error) {
var x uint64
err := binary.Read(r, binary.BigEndian, &x)
return protoreflect.ValueOfUint64(x), err
}
func (u FixedUint64Codec) Encode(value protoreflect.Value, w io.Writer) error {
var x uint64
if value.IsValid() {
x = value.Uint()
}
return binary.Write(w, binary.BigEndian, x)
}
func compareUint(v1, v2 protoreflect.Value) int {
var x, y uint64
if v1.IsValid() {
x = v1.Uint()
}
if v2.IsValid() {
y = v2.Uint()
}
switch {
case x == y:
return 0
case x < y:
return -1
default:
return 1
}
}
// CompactUint64Codec encodes uint64 values using EncodeCompactUint64.
type CompactUint64Codec struct{}
func (c CompactUint64Codec) Decode(r Reader) (protoreflect.Value, error) {
x, err := DecodeCompactUint64(r)
return protoreflect.ValueOfUint64(x), err
}
func (c CompactUint64Codec) Encode(value protoreflect.Value, w io.Writer) error {
var x uint64
if value.IsValid() {
x = value.Uint()
}
_, err := w.Write(EncodeCompactUint64(x))
return err
}
func (c CompactUint64Codec) Compare(v1, v2 protoreflect.Value) int {
return compareUint(v1, v2)
}
func (c CompactUint64Codec) IsOrdered() bool {
return true
}
func (c CompactUint64Codec) FixedBufferSize() int {
return 9
}
func (c CompactUint64Codec) ComputeBufferSize(protoreflect.Value) (int, error) {
return c.FixedBufferSize(), nil
}
// EncodeCompactUint64 encodes uint64 values in 2,4,6 or 9 bytes.
// Unlike regular varints, this encoding is
// suitable for ordered prefix scans. The first two bits of the first byte
// indicate the length of the buffer - 00 for 2, 01 for 4, 10 for 6 and
// 11 for 9. The remaining bits are encoded with big-endian ordering.
// Values less than 2^14 will fit in 2 bytes, values less than 2^30 will
// fit in 4, and values less than 2^46 will fit in 6.
func EncodeCompactUint64(x uint64) []byte {
switch {
case x < 16384: // 2^14
buf := make([]byte, 2)
buf[0] = byte(x >> 8)
buf[1] = byte(x)
return buf
case x < 1073741824: // 2^30
buf := make([]byte, 4)
buf[0] = 0x40
buf[0] |= byte(x >> 24)
buf[1] = byte(x >> 16)
buf[2] = byte(x >> 8)
buf[3] = byte(x)
return buf
case x < 70368744177664: // 2^46
buf := make([]byte, 6)
buf[0] = 0x80
buf[0] |= byte(x >> 40)
buf[1] = byte(x >> 32)
buf[2] = byte(x >> 24)
buf[3] = byte(x >> 16)
buf[4] = byte(x >> 8)
buf[5] = byte(x)
return buf
default:
buf := make([]byte, 9)
buf[0] = 0xC0
buf[0] |= byte(x >> 58)
buf[1] = byte(x >> 50)
buf[2] = byte(x >> 42)
buf[3] = byte(x >> 34)
buf[4] = byte(x >> 26)
buf[5] = byte(x >> 18)
buf[6] = byte(x >> 10)
buf[7] = byte(x >> 2)
buf[8] = byte(x) & 0x3
return buf
}
}
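// DecodeCompactUint64 decodes a uint64 encoded with EncodeCompactUint64.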
func DecodeCompactUint64(reader io.Reader) (uint64, error) {
var buf [9]byte
n, err := reader.Read(buf[:1])
if err != nil {
return 0, err
}
if n < 1 {
return 0, io.ErrUnexpectedEOF
}
switch buf[0] >> 6 {
case 0:
n, err := reader.Read(buf[1:2])
if err != nil {
return 0, err
}
if n < 1 {
return 0, io.ErrUnexpectedEOF
}
x := uint64(buf[0]) << 8
x |= uint64(buf[1])
return x, nil
case 1:
n, err := reader.Read(buf[1:4])
if err != nil {
return 0, err
}
if n < 3 {
return 0, io.ErrUnexpectedEOF
}
x := (uint64(buf[0]) & 0x3F) << 24
x |= uint64(buf[1]) << 16
x |= uint64(buf[2]) << 8
x |= uint64(buf[3])
return x, nil
case 2:
n, err := reader.Read(buf[1:6])
if err != nil {
return 0, err
}
if n < 5 {
return 0, io.ErrUnexpectedEOF
}
x := (uint64(buf[0]) & 0x3F) << 40
x |= uint64(buf[1]) << 32
x |= uint64(buf[2]) << 24
x |= uint64(buf[3]) << 16
x |= uint64(buf[4]) << 8
x |= uint64(buf[5])
return x, nil
case 3:
n, err := reader.Read(buf[1:9])
if err != nil {
return 0, err
}
if n < 8 {
return 0, io.ErrUnexpectedEOF
}
x := (uint64(buf[0]) & 0x3F) << 58
x |= uint64(buf[1]) << 50
x |= uint64(buf[2]) << 42
x |= uint64(buf[3]) << 34
x |= uint64(buf[4]) << 26
x |= uint64(buf[5]) << 18
x |= uint64(buf[6]) << 10
x |= uint64(buf[7]) << 2
x |= uint64(buf[8])
return x, nil
default:
return 0, errors.New("unexpected case")
}
}
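A similar sketch for the 64-bit variant, showing the buffer lengths and that encoding round-trips through decoding (again assuming the removed module is still resolvable; the sample values are arbitrary):

package main

import (
	"bytes"
	"fmt"

	"cosmossdk.io/orm/encoding/ormfield"
)

func main() {
	for _, x := range []uint64{1, 16_383, 16_384, 1 << 35, 1<<63 + 5} {
		enc := ormfield.EncodeCompactUint64(x)
		dec, err := ormfield.DecodeCompactUint64(bytes.NewReader(enc))
		fmt.Println(len(enc), dec == x, err) // lengths 2, 2, 4, 6, 9; all round-trip
	}
}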

View File

@ -1,47 +0,0 @@
package ormkv
import "google.golang.org/protobuf/reflect/protoreflect"
// EntryCodec defines an interface for decoding and encoding entries in the
// kv-store backing an ORM instance. EntryCodecs enable full logical decoding
// of ORM data.
type EntryCodec interface {
// DecodeEntry decodes a kv-pair into an Entry.
DecodeEntry(k, v []byte) (Entry, error)
// EncodeEntry encodes an entry into a kv-pair.
EncodeEntry(entry Entry) (k, v []byte, err error)
}
// IndexCodec defines an interface for encoding and decoding index-keys in the
// kv-store.
type IndexCodec interface {
EntryCodec
// MessageType returns the message type this index codec applies to.
MessageType() protoreflect.MessageType
// GetFieldNames returns the field names in the key of this index.
GetFieldNames() []protoreflect.Name
// DecodeIndexKey decodes a kv-pair into index-fields and primary-key field
// values. These fields may or may not overlap depending on the index.
DecodeIndexKey(k, v []byte) (indexFields, primaryKey []protoreflect.Value, err error)
// EncodeKVFromMessage encodes a kv-pair for the index from a message.
EncodeKVFromMessage(message protoreflect.Message) (k, v []byte, err error)
// CompareKeys compares the provided values which must correspond to the
// fields in this key. Prefix keys of different lengths are supported but the
// function will panic if either array is too long. A negative value is returned
// if key1 is less than key2, 0 is returned if the two arrays are equal,
// and a positive value is returned if key1 is greater.
CompareKeys(key1, key2 []protoreflect.Value) int
// EncodeKeyFromMessage encodes the key part of this index and returns both
// index values and encoded key.
EncodeKeyFromMessage(message protoreflect.Message) (keyValues []protoreflect.Value, key []byte, err error)
// IsFullyOrdered returns true if all fields in the key are also ordered.
IsFullyOrdered() bool
}

View File

@ -1,135 +0,0 @@
package ormkv
import (
"fmt"
"strings"
"google.golang.org/protobuf/proto"
"google.golang.org/protobuf/reflect/protoreflect"
"cosmossdk.io/orm/internal/stablejson"
)
// Entry defines a logical representation of a kv-store entry for ORM instances.
type Entry interface {
fmt.Stringer
// GetTableName returns the table-name (equivalent to the fully-qualified
// proto message name) this entry corresponds to.
GetTableName() protoreflect.FullName
// to allow new methods to be added without breakage, this interface
// shouldn't be implemented outside this package,
// see https://go.dev/blog/module-compatibility
doNotImplement()
}
// PrimaryKeyEntry represents a logically decoded primary-key entry.
type PrimaryKeyEntry struct {
// TableName is the table this entry represents.
TableName protoreflect.FullName
// Key represents the primary key values.
Key []protoreflect.Value
// Value represents the message stored under the primary key.
Value proto.Message
}
func (p *PrimaryKeyEntry) GetTableName() protoreflect.FullName {
return p.TableName
}
func (p *PrimaryKeyEntry) String() string {
if p.Value == nil {
return fmt.Sprintf("PK %s %s -> _", p.TableName, fmtValues(p.Key))
}
valBz, err := stablejson.Marshal(p.Value)
valStr := string(valBz)
if err != nil {
valStr = fmt.Sprintf("ERR %v", err)
}
return fmt.Sprintf("PK %s %s -> %s", p.TableName, fmtValues(p.Key), valStr)
}
func fmtValues(values []protoreflect.Value) string {
if len(values) == 0 {
return "_"
}
parts := make([]string, len(values))
for i, v := range values {
parts[i] = fmt.Sprintf("%v", v.Interface())
}
return strings.Join(parts, "/")
}
func (p *PrimaryKeyEntry) doNotImplement() {}
// IndexKeyEntry represents a logically decoded index entry.
type IndexKeyEntry struct {
// TableName is the table this entry represents.
TableName protoreflect.FullName
// Fields are the index fields this entry represents.
Fields []protoreflect.Name
// IsUnique indicates whether this index is unique or not.
IsUnique bool
// IndexValues represent the index values.
IndexValues []protoreflect.Value
// PrimaryKey represents the primary key values; it is empty if this is a
// prefix key.
PrimaryKey []protoreflect.Value
}
func (i *IndexKeyEntry) GetTableName() protoreflect.FullName {
return i.TableName
}
func (i *IndexKeyEntry) doNotImplement() {}
func (i *IndexKeyEntry) string() string {
return fmt.Sprintf("%s %s : %s -> %s", i.TableName, fmtFields(i.Fields), fmtValues(i.IndexValues), fmtValues(i.PrimaryKey))
}
func fmtFields(fields []protoreflect.Name) string {
strs := make([]string, len(fields))
for i, field := range fields {
strs[i] = string(field)
}
return strings.Join(strs, "/")
}
func (i *IndexKeyEntry) String() string {
if i.IsUnique {
return fmt.Sprintf("UNIQ %s", i.string())
}
return fmt.Sprintf("IDX %s", i.string())
}
// SeqEntry represents a sequence for tables with auto-incrementing primary keys.
type SeqEntry struct {
// TableName is the table this entry represents.
TableName protoreflect.FullName
// Value is the uint64 value stored for this sequence.
Value uint64
}
func (s *SeqEntry) GetTableName() protoreflect.FullName {
return s.TableName
}
func (s *SeqEntry) doNotImplement() {}
func (s *SeqEntry) String() string {
return fmt.Sprintf("SEQ %s %d", s.TableName, s.Value)
}
var _, _, _ Entry = &PrimaryKeyEntry{}, &IndexKeyEntry{}, &SeqEntry{}

View File

@ -1,75 +0,0 @@
package ormkv_test
import (
"testing"
"google.golang.org/protobuf/reflect/protoreflect"
"gotest.tools/v3/assert"
"cosmossdk.io/orm/encoding/encodeutil"
"cosmossdk.io/orm/encoding/ormkv"
"cosmossdk.io/orm/internal/testpb"
)
var aFullName = (&testpb.ExampleTable{}).ProtoReflect().Descriptor().FullName()
func TestPrimaryKeyEntry(t *testing.T) {
entry := &ormkv.PrimaryKeyEntry{
TableName: aFullName,
Key: encodeutil.ValuesOf(uint32(1), "abc"),
Value: &testpb.ExampleTable{I32: -1},
}
assert.Equal(t, `PK testpb.ExampleTable 1/abc -> {"i32":-1}`, entry.String())
assert.Equal(t, aFullName, entry.GetTableName())
// prefix key
entry = &ormkv.PrimaryKeyEntry{
TableName: aFullName,
Key: encodeutil.ValuesOf(uint32(1), "abc"),
Value: nil,
}
assert.Equal(t, `PK testpb.ExampleTable 1/abc -> _`, entry.String())
assert.Equal(t, aFullName, entry.GetTableName())
}
func TestIndexKeyEntry(t *testing.T) {
entry := &ormkv.IndexKeyEntry{
TableName: aFullName,
Fields: []protoreflect.Name{"u32", "i32", "str"},
IsUnique: false,
IndexValues: encodeutil.ValuesOf(uint32(10), int32(-1), "abc"),
PrimaryKey: encodeutil.ValuesOf("abc", int32(-1)),
}
assert.Equal(t, `IDX testpb.ExampleTable u32/i32/str : 10/-1/abc -> abc/-1`, entry.String())
assert.Equal(t, aFullName, entry.GetTableName())
entry = &ormkv.IndexKeyEntry{
TableName: aFullName,
Fields: []protoreflect.Name{"u32"},
IsUnique: true,
IndexValues: encodeutil.ValuesOf(uint32(10)),
PrimaryKey: encodeutil.ValuesOf("abc", int32(-1)),
}
assert.Equal(t, `UNIQ testpb.ExampleTable u32 : 10 -> abc/-1`, entry.String())
assert.Equal(t, aFullName, entry.GetTableName())
// prefix key
entry = &ormkv.IndexKeyEntry{
TableName: aFullName,
Fields: []protoreflect.Name{"u32", "i32", "str"},
IsUnique: false,
IndexValues: encodeutil.ValuesOf(uint32(10), int32(-1)),
}
assert.Equal(t, `IDX testpb.ExampleTable u32/i32/str : 10/-1 -> _`, entry.String())
assert.Equal(t, aFullName, entry.GetTableName())
// prefix key
entry = &ormkv.IndexKeyEntry{
TableName: aFullName,
Fields: []protoreflect.Name{"str", "i32"},
IsUnique: true,
IndexValues: encodeutil.ValuesOf("abc", int32(1)),
}
assert.Equal(t, `UNIQ testpb.ExampleTable str/i32 : abc/1 -> _`, entry.String())
assert.Equal(t, aFullName, entry.GetTableName())
}

View File

@ -1,124 +0,0 @@
package ormkv
import (
"bytes"
"errors"
"io"
"google.golang.org/protobuf/reflect/protoreflect"
"cosmossdk.io/orm/types/ormerrors"
)
// IndexKeyCodec is the codec for (non-unique) index keys.
type IndexKeyCodec struct {
*KeyCodec
pkFieldOrder []int
}
var _ IndexCodec = &IndexKeyCodec{}
// NewIndexKeyCodec creates a new IndexKeyCodec with an optional prefix for the
// provided message descriptor, index and primary key fields.
func NewIndexKeyCodec(prefix []byte, messageType protoreflect.MessageType, indexFields, primaryKeyFields []protoreflect.Name) (*IndexKeyCodec, error) {
if len(indexFields) == 0 {
return nil, ormerrors.InvalidTableDefinition.Wrapf("index fields are empty")
}
if len(primaryKeyFields) == 0 {
return nil, ormerrors.InvalidTableDefinition.Wrapf("primary key fields are empty")
}
indexFieldMap := map[protoreflect.Name]int{}
keyFields := make([]protoreflect.Name, 0, len(indexFields)+len(primaryKeyFields))
for i, f := range indexFields {
indexFieldMap[f] = i
keyFields = append(keyFields, f)
}
numIndexFields := len(indexFields)
numPrimaryKeyFields := len(primaryKeyFields)
pkFieldOrder := make([]int, numPrimaryKeyFields)
k := 0
for j, f := range primaryKeyFields {
if i, ok := indexFieldMap[f]; ok {
pkFieldOrder[j] = i
continue
}
keyFields = append(keyFields, f)
pkFieldOrder[j] = numIndexFields + k
k++
}
cdc, err := NewKeyCodec(prefix, messageType, keyFields)
if err != nil {
return nil, err
}
return &IndexKeyCodec{
KeyCodec: cdc,
pkFieldOrder: pkFieldOrder,
}, nil
}
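// Worked illustration (using field names from the test table, not part of the
// original source): for index fields (u32, str) and primary key fields (i32, u32),
// keyFields becomes (u32, str, i32) -- index fields first, then any primary key
// fields not already present -- and pkFieldOrder is [2, 0], so DecodeIndexKey can
// rebuild the primary key as primaryKey[j] = decodedValues[pkFieldOrder[j]].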
func (cdc IndexKeyCodec) DecodeIndexKey(k, _ []byte) (indexFields, primaryKey []protoreflect.Value, err error) {
values, err := cdc.DecodeKey(bytes.NewReader(k))
// got prefix key
if errors.Is(err, io.EOF) {
return values, nil, nil
} else if err != nil {
return nil, nil, err
}
// got prefix key
if len(values) < len(cdc.fieldCodecs) {
return values, nil, nil
}
numPkFields := len(cdc.pkFieldOrder)
pkValues := make([]protoreflect.Value, numPkFields)
for i := 0; i < numPkFields; i++ {
pkValues[i] = values[cdc.pkFieldOrder[i]]
}
return values, pkValues, nil
}
func (cdc IndexKeyCodec) DecodeEntry(k, v []byte) (Entry, error) {
idxValues, pk, err := cdc.DecodeIndexKey(k, v)
if err != nil {
return nil, err
}
return &IndexKeyEntry{
TableName: cdc.messageType.Descriptor().FullName(),
Fields: cdc.fieldNames,
IndexValues: idxValues,
PrimaryKey: pk,
}, nil
}
func (cdc IndexKeyCodec) EncodeEntry(entry Entry) (k, v []byte, err error) {
indexEntry, ok := entry.(*IndexKeyEntry)
if !ok {
return nil, nil, ormerrors.BadDecodeEntry
}
if indexEntry.TableName != cdc.messageType.Descriptor().FullName() {
return nil, nil, ormerrors.BadDecodeEntry
}
bz, err := cdc.KeyCodec.EncodeKey(indexEntry.IndexValues)
if err != nil {
return nil, nil, err
}
return bz, []byte{}, nil
}
func (cdc IndexKeyCodec) EncodeKVFromMessage(message protoreflect.Message) (k, v []byte, err error) {
_, k, err = cdc.EncodeKeyFromMessage(message)
return k, []byte{}, err
}

View File

@ -1,63 +0,0 @@
package ormkv_test
import (
"bytes"
"fmt"
"testing"
"gotest.tools/v3/assert"
"pgregory.net/rapid"
"cosmossdk.io/orm/encoding/ormkv"
"cosmossdk.io/orm/internal/testpb"
"cosmossdk.io/orm/internal/testutil"
)
func TestIndexKeyCodec(t *testing.T) {
rapid.Check(t, func(t *rapid.T) {
idxPartCdc := testutil.TestKeyCodecGen(1, 5).Draw(t, "idxPartCdc")
pkCodec := testutil.TestKeyCodecGen(1, 5).Draw(t, "pkCdc")
prefix := rapid.SliceOfN(rapid.Byte(), 0, 5).Draw(t, "prefix")
messageType := (&testpb.ExampleTable{}).ProtoReflect().Type()
indexKeyCdc, err := ormkv.NewIndexKeyCodec(
prefix,
messageType,
idxPartCdc.Codec.GetFieldNames(),
pkCodec.Codec.GetFieldNames(),
)
assert.NilError(t, err)
for i := 0; i < 100; i++ {
a := testutil.GenA.Draw(t, fmt.Sprintf("a%d", i))
key := indexKeyCdc.GetKeyValues(a.ProtoReflect())
pk := pkCodec.Codec.GetKeyValues(a.ProtoReflect())
idx1 := &ormkv.IndexKeyEntry{
TableName: messageType.Descriptor().FullName(),
Fields: indexKeyCdc.GetFieldNames(),
IsUnique: false,
IndexValues: key,
PrimaryKey: pk,
}
k, v, err := indexKeyCdc.EncodeEntry(idx1)
assert.NilError(t, err)
k2, v2, err := indexKeyCdc.EncodeKVFromMessage(a.ProtoReflect())
assert.NilError(t, err)
assert.Assert(t, bytes.Equal(k, k2))
assert.Assert(t, bytes.Equal(v, v2))
entry2, err := indexKeyCdc.DecodeEntry(k, v)
assert.NilError(t, err)
idx2 := entry2.(*ormkv.IndexKeyEntry)
assert.Equal(t, 0, indexKeyCdc.CompareKeys(idx1.IndexValues, idx2.IndexValues))
assert.Equal(t, 0, pkCodec.Codec.CompareKeys(idx1.PrimaryKey, idx2.PrimaryKey))
assert.Equal(t, false, idx2.IsUnique)
assert.Equal(t, messageType.Descriptor().FullName(), idx2.TableName)
assert.DeepEqual(t, idx1.Fields, idx2.Fields)
idxFields, pk2, err := indexKeyCdc.DecodeIndexKey(k, v)
assert.NilError(t, err)
assert.Equal(t, 0, indexKeyCdc.CompareKeys(key, idxFields))
assert.Equal(t, 0, pkCodec.Codec.CompareKeys(pk, pk2))
}
})
}

View File

@ -1,308 +0,0 @@
package ormkv
import (
"bytes"
"errors"
"io"
"google.golang.org/protobuf/reflect/protoreflect"
"cosmossdk.io/orm/encoding/encodeutil"
"cosmossdk.io/orm/encoding/ormfield"
"cosmossdk.io/orm/types/ormerrors"
)
type KeyCodec struct {
fixedSize int
variableSizers []struct {
cdc ormfield.Codec
i int
}
prefix []byte
fieldDescriptors []protoreflect.FieldDescriptor
fieldNames []protoreflect.Name
fieldCodecs []ormfield.Codec
messageType protoreflect.MessageType
}
// NewKeyCodec returns a new KeyCodec with an optional prefix for the provided
// message descriptor and fields.
func NewKeyCodec(prefix []byte, messageType protoreflect.MessageType, fieldNames []protoreflect.Name) (*KeyCodec, error) {
n := len(fieldNames)
fieldCodecs := make([]ormfield.Codec, n)
fieldDescriptors := make([]protoreflect.FieldDescriptor, n)
var variableSizers []struct {
cdc ormfield.Codec
i int
}
fixedSize := 0
messageFields := messageType.Descriptor().Fields()
for i := 0; i < n; i++ {
nonTerminal := i != n-1
field := messageFields.ByName(fieldNames[i])
if field == nil {
return nil, ormerrors.FieldNotFound.Wrapf("field %s on %s", fieldNames[i], messageType.Descriptor().FullName())
}
cdc, err := ormfield.GetCodec(field, nonTerminal)
if err != nil {
return nil, err
}
if x := cdc.FixedBufferSize(); x > 0 {
fixedSize += x
} else {
variableSizers = append(variableSizers, struct {
cdc ormfield.Codec
i int
}{cdc, i})
}
fieldCodecs[i] = cdc
fieldDescriptors[i] = field
}
return &KeyCodec{
fieldCodecs: fieldCodecs,
fieldDescriptors: fieldDescriptors,
fieldNames: fieldNames,
prefix: prefix,
fixedSize: fixedSize,
variableSizers: variableSizers,
messageType: messageType,
}, nil
}
// EncodeKey encodes the values assuming that they correspond to the fields
// specified for the key. If the array of values is shorter than the
// number of fields in the key, a partial "prefix" key will be encoded
// which can be used for constructing a prefix iterator.
func (cdc *KeyCodec) EncodeKey(values []protoreflect.Value) ([]byte, error) {
sz, err := cdc.ComputeKeyBufferSize(values)
if err != nil {
return nil, err
}
w := bytes.NewBuffer(make([]byte, 0, sz+len(cdc.prefix)))
if _, err = w.Write(cdc.prefix); err != nil {
return nil, err
}
n := len(values)
if n > len(cdc.fieldCodecs) {
return nil, ormerrors.IndexOutOfBounds.Wrapf("cannot encode %d values into %d fields", n, len(cdc.fieldCodecs))
}
for i := 0; i < n; i++ {
if err = cdc.fieldCodecs[i].Encode(values[i], w); err != nil {
return nil, err
}
}
return w.Bytes(), nil
}
// GetKeyValues extracts the values specified by the key fields from the message.
func (cdc *KeyCodec) GetKeyValues(message protoreflect.Message) []protoreflect.Value {
res := make([]protoreflect.Value, len(cdc.fieldDescriptors))
for i, f := range cdc.fieldDescriptors {
if message.Has(f) {
res[i] = message.Get(f)
}
}
return res
}
// DecodeKey decodes the values in the key specified by the reader. If the
// provided key is a prefix key, the values that could be decoded will
// be returned with io.EOF as the error.
func (cdc *KeyCodec) DecodeKey(r *bytes.Reader) ([]protoreflect.Value, error) {
if err := encodeutil.SkipPrefix(r, cdc.prefix); err != nil {
return nil, err
}
n := len(cdc.fieldCodecs)
values := make([]protoreflect.Value, 0, n)
for i := 0; i < n; i++ {
value, err := cdc.fieldCodecs[i].Decode(r)
if errors.Is(err, io.EOF) {
return values, err
} else if err != nil {
return nil, err
}
values = append(values, value)
}
return values, nil
}
// EncodeKeyFromMessage combines GetKeyValues and EncodeKey.
func (cdc *KeyCodec) EncodeKeyFromMessage(message protoreflect.Message) ([]protoreflect.Value, []byte, error) {
values := cdc.GetKeyValues(message)
bz, err := cdc.EncodeKey(values)
return values, bz, err
}
// IsFullyOrdered returns true if all fields are also ordered.
func (cdc *KeyCodec) IsFullyOrdered() bool {
for _, p := range cdc.fieldCodecs {
if !p.IsOrdered() {
return false
}
}
return true
}
// CompareKeys compares the provided values which must correspond to the
// fields in this key. Prefix keys of different lengths are supported but the
// function will panic if either array is too long. A negative value is returned
// if values1 is less than values2, 0 is returned if the two arrays are equal,
// and a positive value is returned if values1 is greater.
func (cdc *KeyCodec) CompareKeys(values1, values2 []protoreflect.Value) int {
j := len(values1)
k := len(values2)
n := j
if k < j {
n = k
}
if n > len(cdc.fieldCodecs) {
panic("array is too long")
}
var cmp int
for i := 0; i < n; i++ {
cmp = cdc.fieldCodecs[i].Compare(values1[i], values2[i])
// any non-equal parts determine our ordering
if cmp != 0 {
return cmp
}
}
// values are equal but arrays of different length
switch {
case j == k:
return 0
case j < k:
return -1
default:
return 1
}
}
// ComputeKeyBufferSize computes the required buffer size for the provided values
// which can represent a full or prefix key.
func (cdc KeyCodec) ComputeKeyBufferSize(values []protoreflect.Value) (int, error) {
size := cdc.fixedSize
n := len(values)
for _, sz := range cdc.variableSizers {
// handle the prefix key encoding case where we don't need all the sizers
if sz.i >= n {
return size, nil
}
x, err := sz.cdc.ComputeBufferSize(values[sz.i])
if err != nil {
return 0, err
}
size += x
}
return size, nil
}
// SetKeyValues sets the provided values on the message which must correspond
// exactly to the field descriptors for this key. Prefix keys aren't
// supported.
func (cdc *KeyCodec) SetKeyValues(message protoreflect.Message, values []protoreflect.Value) {
for i, f := range cdc.fieldDescriptors {
value := values[i]
if value.IsValid() {
message.Set(f, value)
}
}
}
// CheckValidRangeIterationKeys checks whether the start and end key prefixes are valid
// for range iteration, meaning that for each non-equal field in the prefixes
// the field type supports ordered iteration. If start or end is longer than
// the other, the omitted values will function as the minimum and maximum
// values of that type respectively.
func (cdc KeyCodec) CheckValidRangeIterationKeys(start, end []protoreflect.Value) error {
lenStart := len(start)
shortest := lenStart
longest := lenStart
lenEnd := len(end)
if lenEnd < shortest {
shortest = lenEnd
} else {
longest = lenEnd
}
if longest > len(cdc.fieldCodecs) {
return ormerrors.IndexOutOfBounds
}
i := 0
var cmp int
for ; i < shortest; i++ {
fieldCdc := cdc.fieldCodecs[i]
x := start[i]
y := end[i]
cmp = fieldCdc.Compare(x, y)
if cmp > 0 {
return ormerrors.InvalidRangeIterationKeys.Wrapf(
"start must be before end for field %s",
cdc.fieldDescriptors[i].FullName(),
)
} else if !fieldCdc.IsOrdered() && cmp != 0 {
descriptor := cdc.fieldDescriptors[i]
return ormerrors.InvalidRangeIterationKeys.Wrapf(
"field %s of kind %s doesn't support ordered range iteration",
descriptor.FullName(),
descriptor.Kind(),
)
} else if cmp < 0 {
break
}
}
// the last prefix value must not be equal if the key lengths are the same
if lenStart == lenEnd {
if cmp == 0 {
return ormerrors.InvalidRangeIterationKeys
}
} else {
// check any remaining values in start or end
for j := i; j < longest; j++ {
if !cdc.fieldCodecs[j].IsOrdered() {
return ormerrors.InvalidRangeIterationKeys.Wrapf(
"field %s of kind %s doesn't support ordered range iteration",
cdc.fieldDescriptors[j].FullName(),
cdc.fieldDescriptors[j].Kind(),
)
}
}
}
return nil
}
// GetFieldDescriptors returns the field descriptors for this codec.
func (cdc *KeyCodec) GetFieldDescriptors() []protoreflect.FieldDescriptor {
return cdc.fieldDescriptors
}
// GetFieldNames returns the field names for this codec.
func (cdc *KeyCodec) GetFieldNames() []protoreflect.Name {
return cdc.fieldNames
}
// Prefix returns the prefix applied to keys in this codec before any field
// values are encoded.
func (cdc *KeyCodec) Prefix() []byte {
return cdc.prefix
}
// MessageType returns the message type of fields in this key.
func (cdc *KeyCodec) MessageType() protoreflect.MessageType {
return cdc.messageType
}
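To make the prefix-key behavior of EncodeKey and DecodeKey concrete, here is a minimal test-style sketch in the spirit of the tests below; the test name is hypothetical and it assumes placement in the ormkv_test package of the removed module, since testpb is internal:

package ormkv_test

import (
	"bytes"
	"io"
	"testing"

	"google.golang.org/protobuf/reflect/protoreflect"
	"gotest.tools/v3/assert"

	"cosmossdk.io/orm/encoding/encodeutil"
	"cosmossdk.io/orm/encoding/ormkv"
	"cosmossdk.io/orm/internal/testpb"
)

func TestPrefixKeySketch(t *testing.T) {
	cdc, err := ormkv.NewKeyCodec(nil,
		(&testpb.ExampleTable{}).ProtoReflect().Type(),
		[]protoreflect.Name{"u32", "str", "bz", "i32"})
	assert.NilError(t, err)

	// Encoding only the first two of four key fields yields a partial
	// "prefix" key suitable for constructing a prefix iterator.
	prefix, err := cdc.EncodeKey(encodeutil.ValuesOf(uint32(7), "abc"))
	assert.NilError(t, err)

	// Decoding it back stops with io.EOF after the values that were present.
	values, err := cdc.DecodeKey(bytes.NewReader(prefix))
	assert.ErrorIs(t, err, io.EOF)
	assert.Equal(t, 2, len(values))
}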

View File

@ -1,317 +0,0 @@
package ormkv_test
import (
"bytes"
"io"
"testing"
"google.golang.org/protobuf/reflect/protoreflect"
"gotest.tools/v3/assert"
"pgregory.net/rapid"
"cosmossdk.io/orm/encoding/encodeutil"
"cosmossdk.io/orm/encoding/ormkv"
"cosmossdk.io/orm/internal/testpb"
"cosmossdk.io/orm/internal/testutil"
)
func TestKeyCodec(t *testing.T) {
rapid.Check(t, func(t *rapid.T) {
key := testutil.TestKeyCodecGen(0, 5).Draw(t, "key")
for i := 0; i < 100; i++ {
keyValues := key.Draw(t, "values")
bz1 := assertEncDecKey(t, key, keyValues)
if key.Codec.IsFullyOrdered() {
// check if ordered keys have ordered encodings
keyValues2 := key.Draw(t, "values2")
bz2 := assertEncDecKey(t, key, keyValues2)
// bytes comparison should equal comparison of values
assert.Equal(t, key.Codec.CompareKeys(keyValues, keyValues2), bytes.Compare(bz1, bz2))
}
}
})
}
func assertEncDecKey(t *rapid.T, key testutil.TestKeyCodec, keyValues []protoreflect.Value) []byte {
bz, err := key.Codec.EncodeKey(keyValues)
assert.NilError(t, err)
keyValues2, err := key.Codec.DecodeKey(bytes.NewReader(bz))
assert.NilError(t, err)
assert.Equal(t, 0, key.Codec.CompareKeys(keyValues, keyValues2))
return bz
}
func TestCompareValues(t *testing.T) {
cdc, err := ormkv.NewKeyCodec(nil,
(&testpb.ExampleTable{}).ProtoReflect().Type(),
[]protoreflect.Name{"u32", "str", "i32"})
assert.NilError(t, err)
tests := []struct {
name string
values1 []protoreflect.Value
values2 []protoreflect.Value
expect int
validRange bool
}{
{
"eq",
encodeutil.ValuesOf(uint32(0), "abc", int32(-3)),
encodeutil.ValuesOf(uint32(0), "abc", int32(-3)),
0,
false,
},
{
"eq prefix 0",
encodeutil.ValuesOf(),
encodeutil.ValuesOf(),
0,
false,
},
{
"eq prefix 1",
encodeutil.ValuesOf(uint32(0)),
encodeutil.ValuesOf(uint32(0)),
0,
false,
},
{
"eq prefix 2",
encodeutil.ValuesOf(uint32(0), "abc"),
encodeutil.ValuesOf(uint32(0), "abc"),
0,
false,
},
{
"lt1",
encodeutil.ValuesOf(uint32(0), "abc", int32(-3)),
encodeutil.ValuesOf(uint32(1), "abc", int32(-3)),
-1,
true,
},
{
"lt2",
encodeutil.ValuesOf(uint32(1), "abb", int32(-3)),
encodeutil.ValuesOf(uint32(1), "abc", int32(-3)),
-1,
true,
},
{
"lt3",
encodeutil.ValuesOf(uint32(1), "abb", int32(-4)),
encodeutil.ValuesOf(uint32(1), "abb", int32(-3)),
-1,
true,
},
{
"less prefix 0",
encodeutil.ValuesOf(),
encodeutil.ValuesOf(uint32(1), "abb", int32(-4)),
-1,
true,
},
{
"less prefix 1",
encodeutil.ValuesOf(uint32(1)),
encodeutil.ValuesOf(uint32(1), "abb", int32(-4)),
-1,
true,
},
{
"less prefix 2",
encodeutil.ValuesOf(uint32(1), "abb"),
encodeutil.ValuesOf(uint32(1), "abb", int32(-4)),
-1,
true,
},
{
"gt1",
encodeutil.ValuesOf(uint32(2), "abb", int32(-4)),
encodeutil.ValuesOf(uint32(1), "abb", int32(-4)),
1,
false,
},
{
"gt2",
encodeutil.ValuesOf(uint32(2), "abc", int32(-4)),
encodeutil.ValuesOf(uint32(2), "abb", int32(-4)),
1,
false,
},
{
"gt3",
encodeutil.ValuesOf(uint32(2), "abc", int32(1)),
encodeutil.ValuesOf(uint32(2), "abc", int32(-3)),
1,
false,
},
{
"gt prefix 0",
encodeutil.ValuesOf(uint32(2), "abc", int32(-3)),
encodeutil.ValuesOf(),
1,
true,
},
{
"gt prefix 1",
encodeutil.ValuesOf(uint32(2), "abc", int32(-3)),
encodeutil.ValuesOf(uint32(2)),
1,
true,
},
{
"gt prefix 2",
encodeutil.ValuesOf(uint32(2), "abc", int32(-3)),
encodeutil.ValuesOf(uint32(2), "abc"),
1,
true,
},
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
assert.Equal(
t, test.expect,
cdc.CompareKeys(test.values1, test.values2),
)
// CheckValidRangeIterationKeys should give comparable results
err := cdc.CheckValidRangeIterationKeys(test.values1, test.values2)
if test.validRange {
assert.NilError(t, err)
} else {
assert.ErrorContains(t, err, "")
}
})
}
}
func TestDecodePrefixKey(t *testing.T) {
cdc, err := ormkv.NewKeyCodec(nil,
(&testpb.ExampleTable{}).ProtoReflect().Type(),
[]protoreflect.Name{"u32", "str", "bz", "i32"})
assert.NilError(t, err)
tests := []struct {
name string
values []protoreflect.Value
}{
{
"1",
encodeutil.ValuesOf(uint32(5), "abc"),
},
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
bz, err := cdc.EncodeKey(test.values)
assert.NilError(t, err)
values, err := cdc.DecodeKey(bytes.NewReader(bz))
assert.ErrorIs(t, err, io.EOF)
assert.Equal(t, 0, cdc.CompareKeys(test.values, values))
})
}
}
func TestValidRangeIterationKeys(t *testing.T) {
cdc, err := ormkv.NewKeyCodec(nil,
(&testpb.ExampleTable{}).ProtoReflect().Type(),
[]protoreflect.Name{"u32", "str", "bz", "i32"})
assert.NilError(t, err)
tests := []struct {
name string
values1 []protoreflect.Value
values2 []protoreflect.Value
expectErr bool
}{
{
"1 eq",
encodeutil.ValuesOf(uint32(0)),
encodeutil.ValuesOf(uint32(0)),
true,
},
{
"1 lt",
encodeutil.ValuesOf(uint32(0)),
encodeutil.ValuesOf(uint32(1)),
false,
},
{
"1 gt",
encodeutil.ValuesOf(uint32(1)),
encodeutil.ValuesOf(uint32(0)),
true,
},
{
"1,2 lt",
encodeutil.ValuesOf(uint32(0)),
encodeutil.ValuesOf(uint32(0), "abc"),
false,
},
{
"1,2 gt",
encodeutil.ValuesOf(uint32(0), "abc"),
encodeutil.ValuesOf(uint32(0)),
false,
},
{
"1,2,3",
encodeutil.ValuesOf(uint32(0)),
encodeutil.ValuesOf(uint32(0), "abc", []byte{1, 2}),
true,
},
{
"1,2,3,4 lt",
encodeutil.ValuesOf(uint32(0), "abc", []byte{1, 2}, int32(-1)),
encodeutil.ValuesOf(uint32(0), "abc", []byte{1, 2}, int32(1)),
false,
},
{
"too long",
encodeutil.ValuesOf(uint32(0), "abc", []byte{1, 2}, int32(-1)),
encodeutil.ValuesOf(uint32(0), "abc", []byte{1, 2}, int32(1), int32(1)),
true,
},
{
"1,2,3,4 eq",
encodeutil.ValuesOf(uint32(0), "abc", []byte{1, 2}, int32(1)),
encodeutil.ValuesOf(uint32(0), "abc", []byte{1, 2}, int32(1)),
true,
},
{
"1,2,3,4 bz err",
encodeutil.ValuesOf(uint32(0), "abc", []byte{1, 2}, int32(-1)),
encodeutil.ValuesOf(uint32(0), "abc", []byte{1, 2, 3}, int32(1)),
true,
},
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
err := cdc.CheckValidRangeIterationKeys(test.values1, test.values2)
if test.expectErr {
assert.ErrorContains(t, err, "")
} else {
assert.NilError(t, err)
}
})
}
}
func TestGetSet(t *testing.T) {
cdc, err := ormkv.NewKeyCodec(nil,
(&testpb.ExampleTable{}).ProtoReflect().Type(),
[]protoreflect.Name{"u32", "str", "i32"})
assert.NilError(t, err)
var a testpb.ExampleTable
values := encodeutil.ValuesOf(uint32(4), "abc", int32(1))
cdc.SetKeyValues(a.ProtoReflect(), values)
values2 := cdc.GetKeyValues(a.ProtoReflect())
assert.Equal(t, 0, cdc.CompareKeys(values, values2))
bz, err := cdc.EncodeKey(values)
assert.NilError(t, err)
values3, bz2, err := cdc.EncodeKeyFromMessage(a.ProtoReflect())
assert.NilError(t, err)
assert.Equal(t, 0, cdc.CompareKeys(values, values3))
assert.Assert(t, bytes.Equal(bz, bz2))
}

View File

@ -1,141 +0,0 @@
package ormkv
import (
"bytes"
"errors"
"io"
"google.golang.org/protobuf/proto"
"google.golang.org/protobuf/reflect/protoreflect"
"cosmossdk.io/orm/types/ormerrors"
)
// PrimaryKeyCodec is the codec for primary keys.
type PrimaryKeyCodec struct {
*KeyCodec
unmarshalOptions proto.UnmarshalOptions
}
var _ IndexCodec = &PrimaryKeyCodec{}
// NewPrimaryKeyCodec creates a new PrimaryKeyCodec for the provided msg and
// fields, with an optional prefix and unmarshal options.
func NewPrimaryKeyCodec(prefix []byte, msgType protoreflect.MessageType, fieldNames []protoreflect.Name, unmarshalOptions proto.UnmarshalOptions) (*PrimaryKeyCodec, error) {
keyCodec, err := NewKeyCodec(prefix, msgType, fieldNames)
if err != nil {
return nil, err
}
return &PrimaryKeyCodec{
KeyCodec: keyCodec,
unmarshalOptions: unmarshalOptions,
}, nil
}
var _ IndexCodec = PrimaryKeyCodec{}
func (p PrimaryKeyCodec) DecodeIndexKey(k, _ []byte) (indexFields, primaryKey []protoreflect.Value, err error) {
indexFields, err = p.DecodeKey(bytes.NewReader(k))
// got prefix key
if errors.Is(err, io.EOF) {
return indexFields, nil, nil
} else if err != nil {
return nil, nil, err
}
if len(indexFields) == len(p.fieldCodecs) {
// for primary keys the index fields are the primary key
// but only if we don't have a prefix key
primaryKey = indexFields
}
return indexFields, primaryKey, nil
}
func (p PrimaryKeyCodec) DecodeEntry(k, v []byte) (Entry, error) {
values, err := p.DecodeKey(bytes.NewReader(k))
if errors.Is(err, io.EOF) {
return &PrimaryKeyEntry{
TableName: p.messageType.Descriptor().FullName(),
Key: values,
}, nil
} else if err != nil {
return nil, err
}
msg := p.messageType.New().Interface()
err = p.Unmarshal(values, v, msg)
return &PrimaryKeyEntry{
TableName: p.messageType.Descriptor().FullName(),
Key: values,
Value: msg,
}, err
}
func (p PrimaryKeyCodec) EncodeEntry(entry Entry) (k, v []byte, err error) {
pkEntry, ok := entry.(*PrimaryKeyEntry)
if !ok {
return nil, nil, ormerrors.BadDecodeEntry.Wrapf("expected %T, got %T", &PrimaryKeyEntry{}, entry)
}
if pkEntry.TableName != p.messageType.Descriptor().FullName() {
return nil, nil, ormerrors.BadDecodeEntry.Wrapf(
"wrong table name, got %s, expected %s",
pkEntry.TableName,
p.messageType.Descriptor().FullName(),
)
}
k, err = p.KeyCodec.EncodeKey(pkEntry.Key)
if err != nil {
return nil, nil, err
}
v, err = p.marshal(pkEntry.Key, pkEntry.Value)
return k, v, err
}
func (p PrimaryKeyCodec) marshal(key []protoreflect.Value, message proto.Message) (v []byte, err error) {
// first clear the primary key values because these are already stored in
// the key, so we don't need to store them again in the value
p.ClearValues(message.ProtoReflect())
v, err = proto.MarshalOptions{Deterministic: true}.Marshal(message)
if err != nil {
return nil, err
}
// set the primary key values again returning the message to its original state
p.SetKeyValues(message.ProtoReflect(), key)
return v, nil
}
func (p *PrimaryKeyCodec) ClearValues(message protoreflect.Message) {
for _, f := range p.fieldDescriptors {
message.Clear(f)
}
}
func (p *PrimaryKeyCodec) Unmarshal(key []protoreflect.Value, value []byte, message proto.Message) error {
err := p.unmarshalOptions.Unmarshal(value, message)
if err != nil {
return err
}
// rehydrate primary key
p.SetKeyValues(message.ProtoReflect(), key)
return nil
}
func (p PrimaryKeyCodec) EncodeKVFromMessage(message protoreflect.Message) (k, v []byte, err error) {
ks, k, err := p.KeyCodec.EncodeKeyFromMessage(message)
if err != nil {
return nil, nil, err
}
v, err = p.marshal(ks, message.Interface())
return k, v, err
}

View File

@ -1,60 +0,0 @@
package ormkv_test
import (
"bytes"
"fmt"
"testing"
"google.golang.org/protobuf/proto"
"google.golang.org/protobuf/testing/protocmp"
"gotest.tools/v3/assert"
"pgregory.net/rapid"
"cosmossdk.io/orm/encoding/ormkv"
"cosmossdk.io/orm/internal/testpb"
"cosmossdk.io/orm/internal/testutil"
)
func TestPrimaryKeyCodec(t *testing.T) {
rapid.Check(t, func(t *rapid.T) {
keyCodec := testutil.TestKeyCodecGen(0, 5).Draw(t, "keyCodec")
pkCodec, err := ormkv.NewPrimaryKeyCodec(
keyCodec.Codec.Prefix(),
(&testpb.ExampleTable{}).ProtoReflect().Type(),
keyCodec.Codec.GetFieldNames(),
proto.UnmarshalOptions{},
)
assert.NilError(t, err)
for i := 0; i < 100; i++ {
a := testutil.GenA.Draw(t, fmt.Sprintf("a%d", i))
key := keyCodec.Codec.GetKeyValues(a.ProtoReflect())
pk1 := &ormkv.PrimaryKeyEntry{
TableName: aFullName,
Key: key,
Value: a,
}
k, v, err := pkCodec.EncodeEntry(pk1)
assert.NilError(t, err)
k2, v2, err := pkCodec.EncodeKVFromMessage(a.ProtoReflect())
assert.NilError(t, err)
assert.Assert(t, bytes.Equal(k, k2))
assert.Assert(t, bytes.Equal(v, v2))
entry2, err := pkCodec.DecodeEntry(k, v)
assert.NilError(t, err)
pk2 := entry2.(*ormkv.PrimaryKeyEntry)
assert.Equal(t, 0, pkCodec.CompareKeys(pk1.Key, pk2.Key))
assert.DeepEqual(t, pk1.Value, pk2.Value, protocmp.Transform())
idxFields, pk3, err := pkCodec.DecodeIndexKey(k, v)
assert.NilError(t, err)
assert.Equal(t, 0, pkCodec.CompareKeys(pk1.Key, pk3))
assert.Equal(t, 0, pkCodec.CompareKeys(pk1.Key, idxFields))
pkCodec.ClearValues(a.ProtoReflect())
pkCodec.SetKeyValues(a.ProtoReflect(), pk1.Key)
assert.DeepEqual(t, a, pk2.Value, protocmp.Transform())
}
})
}

View File

@ -1,69 +0,0 @@
package ormkv
import (
"bytes"
"encoding/binary"
"google.golang.org/protobuf/reflect/protoreflect"
"cosmossdk.io/orm/types/ormerrors"
)
// SeqCodec is the codec for auto-incrementing uint64 primary key sequences.
type SeqCodec struct {
messageType protoreflect.FullName
prefix []byte
}
// NewSeqCodec creates a new SeqCodec.
func NewSeqCodec(messageType protoreflect.MessageType, prefix []byte) *SeqCodec {
return &SeqCodec{messageType: messageType.Descriptor().FullName(), prefix: prefix}
}
var _ EntryCodec = &SeqCodec{}
func (s SeqCodec) DecodeEntry(k, v []byte) (Entry, error) {
if !bytes.Equal(k, s.prefix) {
return nil, ormerrors.UnexpectedDecodePrefix
}
x, err := s.DecodeValue(v)
if err != nil {
return nil, err
}
return &SeqEntry{
TableName: s.messageType,
Value: x,
}, nil
}
func (s SeqCodec) EncodeEntry(entry Entry) (k, v []byte, err error) {
seqEntry, ok := entry.(*SeqEntry)
if !ok {
return nil, nil, ormerrors.BadDecodeEntry
}
if seqEntry.TableName != s.messageType {
return nil, nil, ormerrors.BadDecodeEntry
}
return s.prefix, s.EncodeValue(seqEntry.Value), nil
}
func (s SeqCodec) Prefix() []byte {
return s.prefix
}
func (s SeqCodec) EncodeValue(seq uint64) (v []byte) {
bz := make([]byte, binary.MaxVarintLen64)
n := binary.PutUvarint(bz, seq)
return bz[:n]
}
func (s SeqCodec) DecodeValue(v []byte) (uint64, error) {
if len(v) == 0 {
return 0, nil
}
return binary.ReadUvarint(bytes.NewReader(v))
}

View File

@ -1,47 +0,0 @@
package ormkv_test
import (
"bytes"
"testing"
"gotest.tools/v3/assert"
"pgregory.net/rapid"
"cosmossdk.io/orm/encoding/ormkv"
"cosmossdk.io/orm/internal/testpb"
)
func TestSeqCodec(t *testing.T) {
rapid.Check(t, func(t *rapid.T) {
prefix := rapid.SliceOfN(rapid.Byte(), 0, 5).Draw(t, "prefix")
typ := (&testpb.ExampleTable{}).ProtoReflect().Type()
tableName := typ.Descriptor().FullName()
cdc := ormkv.NewSeqCodec(typ, prefix)
seq, err := cdc.DecodeValue(nil)
assert.NilError(t, err)
assert.Equal(t, uint64(0), seq)
seq, err = cdc.DecodeValue([]byte{})
assert.NilError(t, err)
assert.Equal(t, uint64(0), seq)
seq = rapid.Uint64().Draw(t, "seq")
v := cdc.EncodeValue(seq)
seq2, err := cdc.DecodeValue(v)
assert.NilError(t, err)
assert.Equal(t, seq, seq2)
entry := &ormkv.SeqEntry{
TableName: tableName,
Value: seq,
}
k, v, err := cdc.EncodeEntry(entry)
assert.NilError(t, err)
entry2, err := cdc.DecodeEntry(k, v)
assert.NilError(t, err)
assert.DeepEqual(t, entry, entry2)
assert.Assert(t, bytes.Equal(cdc.Prefix(), k))
})
}

View File

@ -1,210 +0,0 @@
package ormkv
import (
"bytes"
"errors"
"io"
"google.golang.org/protobuf/reflect/protoreflect"
"cosmossdk.io/orm/types/ormerrors"
)
// UniqueKeyCodec is the codec for unique indexes.
type UniqueKeyCodec struct {
pkFieldOrder []struct {
inKey bool
i int
}
keyCodec *KeyCodec
valueCodec *KeyCodec
}
var _ IndexCodec = &UniqueKeyCodec{}
// NewUniqueKeyCodec creates a new UniqueKeyCodec with an optional prefix for the
// provided message descriptor, index and primary key fields.
func NewUniqueKeyCodec(prefix []byte, messageType protoreflect.MessageType, indexFields, primaryKeyFields []protoreflect.Name) (*UniqueKeyCodec, error) {
if len(indexFields) == 0 {
return nil, ormerrors.InvalidTableDefinition.Wrapf("index fields are empty")
}
if len(primaryKeyFields) == 0 {
return nil, ormerrors.InvalidTableDefinition.Wrapf("primary key fields are empty")
}
keyCodec, err := NewKeyCodec(prefix, messageType, indexFields)
if err != nil {
return nil, err
}
haveFields := map[protoreflect.Name]int{}
for i, descriptor := range keyCodec.fieldDescriptors {
haveFields[descriptor.Name()] = i
}
var valueFields []protoreflect.Name
var pkFieldOrder []struct {
inKey bool
i int
}
k := 0
for _, field := range primaryKeyFields {
if j, ok := haveFields[field]; ok {
pkFieldOrder = append(pkFieldOrder, struct {
inKey bool
i int
}{inKey: true, i: j})
} else {
valueFields = append(valueFields, field)
pkFieldOrder = append(pkFieldOrder, struct {
inKey bool
i int
}{inKey: false, i: k})
k++
}
}
// if there is nothing in the value we have a trivial unique index
// which shouldn't actually be a unique index at all
if len(valueFields) == 0 {
return nil, ormerrors.InvalidTableDefinition.Wrapf("unique index %s on table %s introduces no new uniqueness constraint not already in the primary key and should not be marked as unique",
indexFields, messageType.Descriptor().FullName())
}
valueCodec, err := NewKeyCodec(nil, messageType, valueFields)
if err != nil {
return nil, err
}
return &UniqueKeyCodec{
pkFieldOrder: pkFieldOrder,
keyCodec: keyCodec,
valueCodec: valueCodec,
}, nil
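// Worked illustration (using field names from the test table, not part of the
// original source): for unique index fields (u32) and primary key fields (str, u32),
// the value codec holds (str) -- the primary key fields not already in the index
// key -- and pkFieldOrder is [{inKey: false, i: 0}, {inKey: true, i: 0}], so the
// full primary key can be rebuilt from the decoded key and value in
// extractPrimaryKey below.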
}
func (u UniqueKeyCodec) DecodeIndexKey(k, v []byte) (indexFields, primaryKey []protoreflect.Value, err error) {
ks, err := u.keyCodec.DecodeKey(bytes.NewReader(k))
// got prefix key
if errors.Is(err, io.EOF) {
return ks, nil, err
} else if err != nil {
return nil, nil, err
}
// got prefix key
if len(ks) < len(u.keyCodec.fieldCodecs) {
return ks, nil, err
}
vs, err := u.valueCodec.DecodeKey(bytes.NewReader(v))
if err != nil {
return nil, nil, err
}
pk := u.extractPrimaryKey(ks, vs)
return ks, pk, nil
}
func (u UniqueKeyCodec) extractPrimaryKey(keyValues, valueValues []protoreflect.Value) []protoreflect.Value {
numPkFields := len(u.pkFieldOrder)
pkValues := make([]protoreflect.Value, numPkFields)
for i := 0; i < numPkFields; i++ {
fo := u.pkFieldOrder[i]
if fo.inKey {
pkValues[i] = keyValues[fo.i]
} else {
pkValues[i] = valueValues[fo.i]
}
}
return pkValues
}
func (u UniqueKeyCodec) DecodeEntry(k, v []byte) (Entry, error) {
idxVals, pk, err := u.DecodeIndexKey(k, v)
if err != nil {
return nil, err
}
return &IndexKeyEntry{
TableName: u.MessageType().Descriptor().FullName(),
Fields: u.keyCodec.fieldNames,
IsUnique: true,
IndexValues: idxVals,
PrimaryKey: pk,
}, err
}
func (u UniqueKeyCodec) EncodeEntry(entry Entry) (k, v []byte, err error) {
indexEntry, ok := entry.(*IndexKeyEntry)
if !ok {
return nil, nil, ormerrors.BadDecodeEntry
}
k, err = u.keyCodec.EncodeKey(indexEntry.IndexValues)
if err != nil {
return nil, nil, err
}
n := len(indexEntry.PrimaryKey)
if n != len(u.pkFieldOrder) {
return nil, nil, ormerrors.BadDecodeEntry.Wrapf("wrong primary key length")
}
var values []protoreflect.Value
for i := 0; i < n; i++ {
value := indexEntry.PrimaryKey[i]
fieldOrder := u.pkFieldOrder[i]
if !fieldOrder.inKey {
// goes in values because it is not present in the index key otherwise
values = append(values, value)
} else if u.keyCodec.fieldCodecs[fieldOrder.i].Compare(value, indexEntry.IndexValues[fieldOrder.i]) != 0 {
// does not go in values, but we need to verify that the value in index values matches the primary key value
return nil, nil, ormerrors.BadDecodeEntry.Wrapf("value in primary key does not match corresponding value in index key")
}
}
v, err = u.valueCodec.EncodeKey(values)
return k, v, err
}
func (u UniqueKeyCodec) EncodeKVFromMessage(message protoreflect.Message) (k, v []byte, err error) {
_, k, err = u.keyCodec.EncodeKeyFromMessage(message)
if err != nil {
return nil, nil, err
}
_, v, err = u.valueCodec.EncodeKeyFromMessage(message)
return k, v, err
}
func (u UniqueKeyCodec) GetFieldNames() []protoreflect.Name {
return u.keyCodec.GetFieldNames()
}
func (u UniqueKeyCodec) GetKeyCodec() *KeyCodec {
return u.keyCodec
}
func (u UniqueKeyCodec) GetValueCodec() *KeyCodec {
return u.valueCodec
}
func (u UniqueKeyCodec) CompareKeys(key1, key2 []protoreflect.Value) int {
return u.keyCodec.CompareKeys(key1, key2)
}
func (u UniqueKeyCodec) EncodeKeyFromMessage(message protoreflect.Message) (keyValues []protoreflect.Value, key []byte, err error) {
return u.keyCodec.EncodeKeyFromMessage(message)
}
func (u UniqueKeyCodec) IsFullyOrdered() bool {
return u.keyCodec.IsFullyOrdered()
}
func (u UniqueKeyCodec) MessageType() protoreflect.MessageType {
return u.keyCodec.messageType
}

View File

@ -1,92 +0,0 @@
package ormkv_test
import (
"bytes"
"fmt"
"testing"
"google.golang.org/protobuf/reflect/protoreflect"
"gotest.tools/v3/assert"
"pgregory.net/rapid"
"cosmossdk.io/orm/encoding/ormkv"
"cosmossdk.io/orm/internal/testpb"
"cosmossdk.io/orm/internal/testutil"
"cosmossdk.io/orm/types/ormerrors"
)
func TestUniqueKeyCodec(t *testing.T) {
rapid.Check(t, func(t *rapid.T) {
keyCodec := testutil.TestKeyCodecGen(1, 5).Draw(t, "keyCodec")
pkCodec := testutil.TestKeyCodecGen(1, 5).Draw(t, "primaryKeyCodec")
// check if we have a trivial unique index where all of the fields
// in the primary key are in the unique key; we should expect an
// error in this case
isInPk := map[protoreflect.Name]bool{}
for _, spec := range pkCodec.KeySpecs {
isInPk[spec.FieldName] = true
}
numPkFields := 0
for _, spec := range keyCodec.KeySpecs {
if isInPk[spec.FieldName] {
numPkFields++
}
}
isTrivialUniqueKey := numPkFields == len(pkCodec.KeySpecs)
messageType := (&testpb.ExampleTable{}).ProtoReflect().Type()
uniqueKeyCdc, err := ormkv.NewUniqueKeyCodec(
keyCodec.Codec.Prefix(),
messageType,
keyCodec.Codec.GetFieldNames(),
pkCodec.Codec.GetFieldNames(),
)
if isTrivialUniqueKey {
assert.ErrorContains(t, err, "no new uniqueness constraint")
return
}
assert.NilError(t, err)
for i := 0; i < 100; i++ {
a := testutil.GenA.Draw(t, fmt.Sprintf("a%d", i))
key := keyCodec.Codec.GetKeyValues(a.ProtoReflect())
pk := pkCodec.Codec.GetKeyValues(a.ProtoReflect())
uniq1 := &ormkv.IndexKeyEntry{
TableName: messageType.Descriptor().FullName(),
Fields: keyCodec.Codec.GetFieldNames(),
IsUnique: true,
IndexValues: key,
PrimaryKey: pk,
}
k, v, err := uniqueKeyCdc.EncodeEntry(uniq1)
assert.NilError(t, err)
k2, v2, err := uniqueKeyCdc.EncodeKVFromMessage(a.ProtoReflect())
assert.NilError(t, err)
assert.Assert(t, bytes.Equal(k, k2))
assert.Assert(t, bytes.Equal(v, v2))
entry2, err := uniqueKeyCdc.DecodeEntry(k, v)
assert.NilError(t, err)
uniq2 := entry2.(*ormkv.IndexKeyEntry)
assert.Equal(t, 0, keyCodec.Codec.CompareKeys(uniq1.IndexValues, uniq2.IndexValues))
assert.Equal(t, 0, pkCodec.Codec.CompareKeys(uniq1.PrimaryKey, uniq2.PrimaryKey))
assert.Equal(t, true, uniq2.IsUnique)
assert.Equal(t, messageType.Descriptor().FullName(), uniq2.TableName)
assert.DeepEqual(t, uniq1.Fields, uniq2.Fields)
idxFields, pk2, err := uniqueKeyCdc.DecodeIndexKey(k, v)
assert.NilError(t, err)
assert.Equal(t, 0, keyCodec.Codec.CompareKeys(key, idxFields))
assert.Equal(t, 0, pkCodec.Codec.CompareKeys(pk, pk2))
}
})
}
func TestTrivialUnique(t *testing.T) {
_, err := ormkv.NewUniqueKeyCodec(nil, (&testpb.ExampleTable{}).ProtoReflect().Type(),
[]protoreflect.Name{"u32", "str"}, []protoreflect.Name{"str", "u32"})
assert.ErrorIs(t, err, ormerrors.InvalidTableDefinition)
}


@ -1,49 +0,0 @@
Feature: inserting, updating and saving entities
Scenario: can't insert an entity with a duplicate primary key
Given an existing entity
"""
{"name": "foo", "not_unique": "bar"}
"""
When I insert
"""
{"name": "foo", "not_unique": "baz"}
"""
Then expect a "already exists" error
And expect grpc error code "ALREADY_EXISTS"
Scenario: can't update entity that doesn't exist
When I update
"""
{"name":"foo"}
"""
Then expect a "not found" error
And expect grpc error code "NOT_FOUND"
#
Scenario: can't violate unique constraint on insert
Given an existing entity
"""
{"name": "foo", "unique": "bar"}
"""
When I insert
"""
{"name": "baz", "unique": "bar"}
"""
Then expect a "unique key violation" error
And expect grpc error code "FAILED_PRECONDITION"
Scenario: can't violate unique constraint on update
Given an existing entity
"""
{"name": "foo", "unique": "bar"}
"""
And an existing entity
"""
{"name": "baz", "unique": "bam"}
"""
When I update
"""
{"name": "baz", "unique": "bar"}
"""
Then expect a "unique key violation" error
And expect grpc error code "FAILED_PRECONDITION"


@ -1,75 +0,0 @@
module cosmossdk.io/orm
go 1.23
require (
cosmossdk.io/api v0.8.2
cosmossdk.io/core v1.0.0
cosmossdk.io/core/testing v0.0.1
cosmossdk.io/depinject v1.1.0
cosmossdk.io/errors v1.0.1
github.com/cosmos/cosmos-db v1.1.1
github.com/cosmos/cosmos-proto v1.0.0-beta.5
github.com/google/go-cmp v0.6.0
github.com/iancoleman/strcase v0.3.0
github.com/regen-network/gocuke v1.1.1
github.com/stretchr/testify v1.10.0
go.uber.org/mock v0.5.0
google.golang.org/grpc v1.70.0
google.golang.org/protobuf v1.36.4
gotest.tools/v3 v3.5.1
pgregory.net/rapid v1.1.0
)
require (
buf.build/gen/go/cometbft/cometbft/protocolbuffers/go v1.36.4-20241120201313-68e42a58b301.1 // indirect
buf.build/gen/go/cosmos/gogo-proto/protocolbuffers/go v1.36.4-20240130113600-88ef6483f90f.1 // indirect
cosmossdk.io/schema v1.0.0 // indirect
github.com/DataDog/zstd v1.5.6 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/bmatcuk/doublestar/v4 v4.6.1 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/cockroachdb/apd/v3 v3.2.1 // indirect
github.com/cockroachdb/errors v1.11.3 // indirect
github.com/cockroachdb/fifo v0.0.0-20240816210425-c5d0cb0b6fc0 // indirect
github.com/cockroachdb/logtags v0.0.0-20241215232642-bb51bb14a506 // indirect
github.com/cockroachdb/pebble v1.1.2 // indirect
github.com/cockroachdb/redact v1.1.5 // indirect
github.com/cockroachdb/tokenbucket v0.0.0-20230807174530-cc333fc44b06 // indirect
github.com/cosmos/gogoproto v1.7.0 // indirect
github.com/cucumber/gherkin/go/v27 v27.0.0 // indirect
github.com/cucumber/messages/go/v22 v22.0.0 // indirect
github.com/cucumber/tag-expressions/go/v6 v6.1.0 // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/fsnotify/fsnotify v1.8.0 // indirect
github.com/getsentry/sentry-go v0.30.0 // indirect
github.com/gofrs/uuid v4.4.0+incompatible // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang/snappy v0.0.4 // indirect
github.com/google/btree v1.1.3 // indirect
github.com/klauspost/compress v1.17.11 // indirect
github.com/kr/pretty v0.3.1 // indirect
github.com/kr/text v0.2.0 // indirect
github.com/linxGnu/grocksdb v1.9.7 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/onsi/gomega v1.20.0 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
github.com/prometheus/client_golang v1.20.5 // indirect
github.com/prometheus/client_model v0.6.1 // indirect
github.com/prometheus/common v0.62.0 // indirect
github.com/prometheus/procfs v0.15.1 // indirect
github.com/rogpeppe/go-internal v1.13.1 // indirect
github.com/spf13/cast v1.7.1 // indirect
github.com/syndtr/goleveldb v1.0.1-0.20220721030215-126854af5e6d // indirect
github.com/tendermint/go-amino v0.16.0 // indirect
github.com/tidwall/btree v1.7.0 // indirect
golang.org/x/exp v0.0.0-20241217172543-b2144cdd0a67 // indirect
golang.org/x/net v0.34.0 // indirect
golang.org/x/sys v0.29.0 // indirect
golang.org/x/text v0.21.0 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20241202173237-19429a94021a // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20250122153221-138b5a5a4fd4 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
sigs.k8s.io/yaml v1.4.0 // indirect
)


@ -1,288 +0,0 @@
buf.build/gen/go/cometbft/cometbft/protocolbuffers/go v1.36.4-20241120201313-68e42a58b301.1 h1:lcvKfPJ0GTMLh1Ib9n9b3Hx/U2lXj27rb5pujo3EKko=
buf.build/gen/go/cometbft/cometbft/protocolbuffers/go v1.36.4-20241120201313-68e42a58b301.1/go.mod h1:dW1kItnxv+SgI6SBjokdyIZqKl2uhOJahkOqIXDNDC8=
buf.build/gen/go/cosmos/gogo-proto/protocolbuffers/go v1.36.4-20240130113600-88ef6483f90f.1 h1:xn+yVpC5XMvaSQWjfRWmitcYemPznXR7Y65rIMTxoiU=
buf.build/gen/go/cosmos/gogo-proto/protocolbuffers/go v1.36.4-20240130113600-88ef6483f90f.1/go.mod h1:9oTVKh0Ppx5zXStsybi9Zb//6TuLreQxSZqBDE25JGo=
cosmossdk.io/api v0.8.2 h1:klzA1RODd9tTawJ2CbBd/34RV/cB9qtd9oJN6rcRqqg=
cosmossdk.io/api v0.8.2/go.mod h1:XJUwQrihIDjErzs3+jm1zO/9KRzKf4HMjRzXC+l+Cio=
cosmossdk.io/core v1.0.0 h1:e7XBbISOytLBOXMVwpRPixThXqEkeLGlg8no/qpgS8U=
cosmossdk.io/core v1.0.0/go.mod h1:mKIp3RkoEmtqdEdFHxHwWAULRe+79gfdOvmArrLDbDc=
cosmossdk.io/core/testing v0.0.1 h1:gYCTaftcRrz+HoNXmK7r9KgbG1jgBJ8pNzm/Pa/erFQ=
cosmossdk.io/core/testing v0.0.1/go.mod h1:2VDNz/25qtxgPa0+j8LW5e8Ev/xObqoJA7QuJS9/wIQ=
cosmossdk.io/depinject v1.1.0 h1:wLan7LG35VM7Yo6ov0jId3RHWCGRhe8E8bsuARorl5E=
cosmossdk.io/depinject v1.1.0/go.mod h1:kkI5H9jCGHeKeYWXTqYdruogYrEeWvBQCw1Pj4/eCFI=
cosmossdk.io/errors v1.0.1 h1:bzu+Kcr0kS/1DuPBtUFdWjzLqyUuCiyHjyJB6srBV/0=
cosmossdk.io/errors v1.0.1/go.mod h1:MeelVSZThMi4bEakzhhhE/CKqVv3nOJDA25bIqRDu/U=
cosmossdk.io/schema v1.0.0 h1:/diH4XJjpV1JQwuIozwr+A4uFuuwanFdnw2kKeiXwwQ=
cosmossdk.io/schema v1.0.0/go.mod h1:RDAhxIeNB4bYqAlF4NBJwRrgtnciMcyyg0DOKnhNZQQ=
github.com/DataDog/zstd v1.5.6 h1:LbEglqepa/ipmmQJUDnSsfvA8e8IStVcGaFWDuxvGOY=
github.com/DataDog/zstd v1.5.6/go.mod h1:g4AWEaM3yOg3HYfnJ3YIawPnVdXJh9QME85blwSAmyw=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/bmatcuk/doublestar/v4 v4.6.1 h1:FH9SifrbvJhnlQpztAx++wlkk70QBf0iBWDwNy7PA4I=
github.com/bmatcuk/doublestar/v4 v4.6.1/go.mod h1:xBQ8jztBU6kakFMg+8WGxn0c6z1fTSPVIjEY1Wr7jzc=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI=
github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI=
github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
github.com/cockroachdb/apd/v3 v3.2.1 h1:U+8j7t0axsIgvQUqthuNm82HIrYXodOV2iWLWtEaIwg=
github.com/cockroachdb/apd/v3 v3.2.1/go.mod h1:klXJcjp+FffLTHlhIG69tezTDvdP065naDsHzKhYSqc=
github.com/cockroachdb/datadriven v1.0.3-0.20230413201302-be42291fc80f h1:otljaYPt5hWxV3MUfO5dFPFiOXg9CyG5/kCfayTqsJ4=
github.com/cockroachdb/datadriven v1.0.3-0.20230413201302-be42291fc80f/go.mod h1:a9RdTaap04u637JoCzcUoIcDmvwSUtcUFtT/C3kJlTU=
github.com/cockroachdb/errors v1.11.3 h1:5bA+k2Y6r+oz/6Z/RFlNeVCesGARKuC6YymtcDrbC/I=
github.com/cockroachdb/errors v1.11.3/go.mod h1:m4UIW4CDjx+R5cybPsNrRbreomiFqt8o1h1wUVazSd8=
github.com/cockroachdb/fifo v0.0.0-20240816210425-c5d0cb0b6fc0 h1:pU88SPhIFid6/k0egdR5V6eALQYq2qbSmukrkgIh/0A=
github.com/cockroachdb/fifo v0.0.0-20240816210425-c5d0cb0b6fc0/go.mod h1:9/y3cnZ5GKakj/H4y9r9GTjCvAFta7KLgSHPJJYc52M=
github.com/cockroachdb/logtags v0.0.0-20241215232642-bb51bb14a506 h1:ASDL+UJcILMqgNeV5jiqR4j+sTuvQNHdf2chuKj1M5k=
github.com/cockroachdb/logtags v0.0.0-20241215232642-bb51bb14a506/go.mod h1:Mw7HqKr2kdtu6aYGn3tPmAftiP3QPX63LdK/zcariIo=
github.com/cockroachdb/pebble v1.1.2 h1:CUh2IPtR4swHlEj48Rhfzw6l/d0qA31fItcIszQVIsA=
github.com/cockroachdb/pebble v1.1.2/go.mod h1:4exszw1r40423ZsmkG/09AFEG83I0uDgfujJdbL6kYU=
github.com/cockroachdb/redact v1.1.5 h1:u1PMllDkdFfPWaNGMyLD1+so+aq3uUItthCFqzwPJ30=
github.com/cockroachdb/redact v1.1.5/go.mod h1:BVNblN9mBWFyMyqK1k3AAiSxhvhfK2oOZZ2lK+dpvRg=
github.com/cockroachdb/tokenbucket v0.0.0-20230807174530-cc333fc44b06 h1:zuQyyAKVxetITBuuhv3BI9cMrmStnpT18zmgmTxunpo=
github.com/cockroachdb/tokenbucket v0.0.0-20230807174530-cc333fc44b06/go.mod h1:7nc4anLGjupUW/PeY5qiNYsdNXj7zopG+eqsS7To5IQ=
github.com/cosmos/cosmos-db v1.1.1 h1:FezFSU37AlBC8S98NlSagL76oqBRWq/prTPvFcEJNCM=
github.com/cosmos/cosmos-db v1.1.1/go.mod h1:AghjcIPqdhSLP/2Z0yha5xPH3nLnskz81pBx3tcVSAw=
github.com/cosmos/cosmos-proto v1.0.0-beta.5 h1:eNcayDLpip+zVLRLYafhzLvQlSmyab+RC5W7ZfmxJLA=
github.com/cosmos/cosmos-proto v1.0.0-beta.5/go.mod h1:hQGLpiIUloJBMdQMMWb/4wRApmI9hjHH05nefC0Ojec=
github.com/cosmos/gogoproto v1.7.0 h1:79USr0oyXAbxg3rspGh/m4SWNyoz/GLaAh0QlCe2fro=
github.com/cosmos/gogoproto v1.7.0/go.mod h1:yWChEv5IUEYURQasfyBW5ffkMHR/90hiHgbNgrtp4j0=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/cucumber/gherkin/go/v27 v27.0.0 h1:waJh5eeq7rrKn5Gf3/FI4G34ypduPRaV8e370dnupDI=
github.com/cucumber/gherkin/go/v27 v27.0.0/go.mod h1:2JxwYskO0sO4kumc/Nv1g6bMncT5w0lShuKZnmUIhhk=
github.com/cucumber/messages/go/v22 v22.0.0 h1:hk3ITpEWQ+KWDe619zYcqtaLOfcu9jgClSeps3DlNWI=
github.com/cucumber/messages/go/v22 v22.0.0/go.mod h1:aZipXTKc0JnjCsXrJnuZpWhtay93k7Rn3Dee7iyPJjs=
github.com/cucumber/tag-expressions/go/v6 v6.1.0 h1:YOhnlISh/lyPZrLojFbJVzocv7TGhzOhB9aULN8A7Sg=
github.com/cucumber/tag-expressions/go/v6 v6.1.0/go.mod h1:6scGHUy3RLnbNq8un7XNoopF2qR/0RMgqolQH/TkycY=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHkI4W8=
github.com/frankban/quicktest v1.14.6/go.mod h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ=
github.com/fsnotify/fsnotify v1.5.4/go.mod h1:OVB6XrOHzAwXMpEM7uPOzcehqUV2UqJxmVXmkdnm1bU=
github.com/fsnotify/fsnotify v1.8.0 h1:dAwr6QBTBZIkG8roQaJjGof0pp0EeF+tNV7YBP3F/8M=
github.com/fsnotify/fsnotify v1.8.0/go.mod h1:8jBTzvmWwFyi3Pb8djgCCO5IBqzKJ/Jwo8TRcHyHii0=
github.com/getsentry/sentry-go v0.30.0 h1:lWUwDnY7sKHaVIoZ9wYqRHJ5iEmoc0pqcRqFkosKzBo=
github.com/getsentry/sentry-go v0.30.0/go.mod h1:WU9B9/1/sHDqeV8T+3VwwbjeR5MSXs/6aqG3mqZrezA=
github.com/go-errors/errors v1.4.2 h1:J6MZopCL4uSllY1OfXM374weqZFFItUbrImctkmUxIA=
github.com/go-errors/errors v1.4.2/go.mod h1:sIVyrIiJhuEF+Pj9Ebtd6P/rEYROXFi3BopGUQ5a5Og=
github.com/go-logr/logr v1.4.2 h1:6pFjapn8bFcIbiKo3XT4j/BhANplGihG6tvd+8rYgrY=
github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
github.com/go-task/slim-sprig v0.0.0-20210107165309-348f09dbbbc0/go.mod h1:fyg7847qk6SyHyPtNmDHnmrv/HOrqktSC+C9fM+CJOE=
github.com/gofrs/uuid v4.4.0+incompatible h1:3qXRTX8/NbyulANqlc0lchS1gqAVxRgsuW1YrTJupqA=
github.com/gofrs/uuid v4.4.0+incompatible/go.mod h1:b2aQJv3Z4Fp6yNu3cdSllBxTCLRxnplIgP/c0N/04lM=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.0/go.mod h1:Qd/q+1AKNOZr9uGQzbzCmRO6sUih6GTPZv6a1/R87v0=
github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=
github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w=
github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=
github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
github.com/golang/snappy v0.0.4 h1:yAGX7huGHXlcLOEtBnF4w7FQwA26wojNCwOYAEhLjQM=
github.com/golang/snappy v0.0.4/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/google/btree v1.1.3 h1:CVpQJjYgC4VbzxeGVHfvZrv1ctoYCAI8vbl07Fcxlyg=
github.com/google/btree v1.1.3/go.mod h1:qOPhT0dTNdNzV6Z/lhRX0YXUafgPLFUh+gZMl761Gm4=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/gofuzz v0.0.0-20170612174753-24818f796faf/go.mod h1:HP5RmnzzSNb993RKQDq4+1A4ia9nllfqcQFTQJedwGI=
github.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0=
github.com/google/gofuzz v1.2.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/pprof v0.0.0-20210407192527-94a9f03dee38/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/iancoleman/strcase v0.3.0 h1:nTXanmYxhfFAMjZL34Ov6gkzEsSJZ5DbhxWjvSASxEI=
github.com/iancoleman/strcase v0.3.0/go.mod h1:iwCmte+B7n89clKwxIoIXy/HfoL7AsD47ZCWhYzw7ho=
github.com/ianlancetaylor/demangle v0.0.0-20200824232613-28f6c0f3b639/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/klauspost/compress v1.17.11 h1:In6xLpyWOi1+C7tXUUWv2ot1QvBjxevKAaI6IXrJmUc=
github.com/klauspost/compress v1.17.11/go.mod h1:pMDklpSncoRMuLFrf1W9Ss9KT+0rH90U12bZKk7uwG0=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/lib/pq v1.10.7 h1:p7ZhMD+KsSRozJr34udlUrhboJwWAgCg34+/ZZNvZZw=
github.com/lib/pq v1.10.7/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/linxGnu/grocksdb v1.9.7 h1:Bp2r1Yti/IXxEobZZnDooXAui/Q+5gVqgQMenLWyDUw=
github.com/linxGnu/grocksdb v1.9.7/go.mod h1:QYiYypR2d4v63Wj1adOOfzglnoII0gLj3PNh4fZkcFA=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/nxadm/tail v1.4.4/go.mod h1:kenIhsEOeOJmVchQTgglprH7qJGnHDVpk1VPCcaMI8A=
github.com/nxadm/tail v1.4.8/go.mod h1:+ncqLTQzXmGhMZNUePPaPqPvBxHAIsmXswZKocGu+AU=
github.com/nxadm/tail v1.4.11 h1:8feyoE3OzPrcshW5/MJ4sGESc5cqmGkGCWlco4l0bqY=
github.com/nxadm/tail v1.4.11/go.mod h1:OTaG3NK980DZzxbRq6lEuzgU+mug70nY11sMd4JXXHc=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.12.1/go.mod h1:zj2OWP4+oCPe1qIXoGWkgMRwljMUYCdkwsT2108oapk=
github.com/onsi/ginkgo v1.16.4/go.mod h1:dX+/inL/fNMqNlz0e9LfyB9TswhZpCVdJM/Z6Vvnwo0=
github.com/onsi/ginkgo v1.16.5 h1:8xi0RTUf59SOSfEtZMvwTvXYMzG4gV23XVHOZiXNtnE=
github.com/onsi/ginkgo v1.16.5/go.mod h1:+E8gABHa3K6zRBolWtd+ROzc/U5bkGt0FwiG042wbpU=
github.com/onsi/ginkgo/v2 v2.1.3/go.mod h1:vw5CSIxN1JObi/U8gcbwft7ZxR2dgaR70JSE3/PpL4c=
github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY=
github.com/onsi/gomega v1.10.1/go.mod h1:iN09h71vgCQne3DLsj+A5owkum+a2tYe+TOCB1ybHNo=
github.com/onsi/gomega v1.17.0/go.mod h1:HnhC7FXeEQY45zxNK3PPoIUhzk/80Xly9PcubAlGdZY=
github.com/onsi/gomega v1.19.0/go.mod h1:LY+I3pBVzYsTBU1AnDwOSxaYi9WoWiqgwooUqq9yPro=
github.com/onsi/gomega v1.20.0 h1:8W0cWlwFkflGPLltQvLRB7ZVD5HuP6ng320w2IS245Q=
github.com/onsi/gomega v1.20.0/go.mod h1:DtrZpjmvpn2mPm4YWQa0/ALMDj9v4YxLgojwPeREyVo=
github.com/pingcap/errors v0.11.4 h1:lFuQV/oaUMGcD2tqt+01ROSmJs75VG1ToEOkZIZ4nE4=
github.com/pingcap/errors v0.11.4/go.mod h1:Oi8TUi2kEtXXLMJk9l1cGmz20kV3TaQ0usTwv5KuLY8=
github.com/pkg/diff v0.0.0-20210226163009-20ebb0f2a09e/go.mod h1:pJLUxLENpZxwdsKMEsNbx1VGcRFpLqf3715MtcvvzbA=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v1.20.5 h1:cxppBPuYhUnsO6yo/aoRol4L7q7UFfdm+bR9r+8l63Y=
github.com/prometheus/client_golang v1.20.5/go.mod h1:PIEt8X02hGcP8JWbeHyeZ53Y/jReSnHgO035n//V5WE=
github.com/prometheus/client_model v0.6.1 h1:ZKSh/rekM+n3CeS952MLRAdFwIKqeY8b62p8ais2e9E=
github.com/prometheus/client_model v0.6.1/go.mod h1:OrxVMOVHjw3lKMa8+x6HeMGkHMQyHDk9E3jmP2AmGiY=
github.com/prometheus/common v0.62.0 h1:xasJaQlnWAeyHdUBeGjXmutelfJHWMRr+Fg4QszZ2Io=
github.com/prometheus/common v0.62.0/go.mod h1:vyBcEuLSvWos9B1+CyL7JZ2up+uFzXhkqml0W5zIY1I=
github.com/prometheus/procfs v0.15.1 h1:YagwOFzUgYfKKHX6Dr+sHT7km/hxC76UB0learggepc=
github.com/prometheus/procfs v0.15.1/go.mod h1:fB45yRUv8NstnjriLhBQLuOUt+WW4BsoGhij/e3PBqk=
github.com/regen-network/gocuke v1.1.1 h1:13D3n5xLbpzA/J2ELHC9jXYq0+XyEr64A3ehjvfmBbE=
github.com/regen-network/gocuke v1.1.1/go.mod h1:Nl9EbhLmTzdLqb52fr/Fvf8LcoVuTjjf8FlLmXz1zHo=
github.com/rogpeppe/go-internal v1.9.0/go.mod h1:WtVeX8xhTBvf0smdhujwtBcq4Qrzq/fJaraNFVN+nFs=
github.com/rogpeppe/go-internal v1.13.1 h1:KvO1DLK/DRN07sQ1LQKScxyZJuNnedQ5/wKSR38lUII=
github.com/rogpeppe/go-internal v1.13.1/go.mod h1:uMEvuHeurkdAXX61udpOXGD/AzZDWNMNyH2VO9fmH0o=
github.com/spf13/cast v1.7.1 h1:cuNEagBQEHWN1FnbGEjCXL2szYEXqfJPbP2HNUaca9Y=
github.com/spf13/cast v1.7.1/go.mod h1:ancEpBxwJDODSW/UG4rDrAqiKolqNNh2DX3mk86cAdo=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
github.com/stretchr/testify v1.7.2/go.mod h1:R6va5+xMeoiuVRoj+gSkQ7d3FALtqAAGI1FQKckRals=
github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/syndtr/goleveldb v1.0.1-0.20220721030215-126854af5e6d h1:vfofYNRScrDdvS342BElfbETmL1Aiz3i2t0zfRj16Hs=
github.com/syndtr/goleveldb v1.0.1-0.20220721030215-126854af5e6d/go.mod h1:RRCYJbIwD5jmqPI9XoAFR0OcDxqUctll6zUj/+B4S48=
github.com/tendermint/go-amino v0.16.0 h1:GyhmgQKvqF82e2oZeuMSp9JTN0N09emoSZlb2lyGa2E=
github.com/tendermint/go-amino v0.16.0/go.mod h1:TQU0M1i/ImAo+tYpZi73AU3V/dKeCoMC9Sphe2ZwGME=
github.com/tidwall/btree v1.7.0 h1:L1fkJH/AuEh5zBnnBbmTwQ5Lt+bRJ5A8EWecslvo9iI=
github.com/tidwall/btree v1.7.0/go.mod h1:twD9XRA5jj9VUQGELzDO4HPQTNJsoWWfYEL+EUQ2cKY=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
go.opentelemetry.io/otel v1.32.0 h1:WnBN+Xjcteh0zdk01SVqV55d/m62NJLJdIyb4y/WO5U=
go.opentelemetry.io/otel v1.32.0/go.mod h1:00DCVSB0RQcnzlwyTfqtxSm+DRr9hpYrHjNGiBHVQIg=
go.opentelemetry.io/otel/metric v1.32.0 h1:xV2umtmNcThh2/a/aCP+h64Xx5wsj8qqnkYZktzNa0M=
go.opentelemetry.io/otel/metric v1.32.0/go.mod h1:jH7CIbbK6SH2V2wE16W05BHCtIDzauciCRLoc/SyMv8=
go.opentelemetry.io/otel/sdk v1.32.0 h1:RNxepc9vK59A8XsgZQouW8ue8Gkb4jpWtJm9ge5lEG4=
go.opentelemetry.io/otel/sdk v1.32.0/go.mod h1:LqgegDBjKMmb2GC6/PrTnteJG39I8/vJCAP9LlJXEjU=
go.opentelemetry.io/otel/sdk/metric v1.32.0 h1:rZvFnvmvawYb0alrYkjraqJq0Z4ZUJAiyYCU9snn1CU=
go.opentelemetry.io/otel/sdk/metric v1.32.0/go.mod h1:PWeZlq0zt9YkYAp3gjKZ0eicRYvOh1Gd+X99x6GHpCQ=
go.opentelemetry.io/otel/trace v1.32.0 h1:WIC9mYrXf8TmY/EXuULKc8hR17vE+Hjv2cssQDe03fM=
go.opentelemetry.io/otel/trace v1.32.0/go.mod h1:+i4rkvCraA+tG6AzwloGaCtkx53Fa+L+V8e9a7YvhT8=
go.uber.org/mock v0.5.0 h1:KAMbZvZPyBPWgD14IrIQ38QCyjwpvVVV6K/bHl1IwQU=
go.uber.org/mock v0.5.0/go.mod h1:ge71pBPLYDk7QIi1LupWxdAykm7KIEFchiOqd6z7qMM=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/exp v0.0.0-20241217172543-b2144cdd0a67 h1:1UoZQm6f0P/ZO0w1Ri+f+ifG/gXhegadRdwBIXEFWDo=
golang.org/x/exp v0.0.0-20241217172543-b2144cdd0a67/go.mod h1:qj5a5QZpwLU2NLQudwIN5koi3beDhSAlJwa67PuM98c=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200520004742-59133d7f0dd7/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20210428140749-89ef3d95e781/go.mod h1:OJAsFXCWl8Ukc7SiCT/9KSuxbyM7479/AVlXFRxuMCk=
golang.org/x/net v0.0.0-20220225172249-27dd8689420f/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220607020251-c690dde0001d/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.34.0 h1:Mb7Mrk043xzHgnRM88suvJFwzVrRfHEHJEl5/71CKw0=
golang.org/x/net v0.34.0/go.mod h1:di0qlW3YNM5oh6GqDGQr92MyTozJPmybPK4Ev/Gm31k=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.10.0 h1:3NQrjDixjgGwUOCaF8w2+VYHv0Ve/vGYSbdkTa98gmQ=
golang.org/x/sync v0.10.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190904154756-749cb33beabd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191120155948-bd437916bb0e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210112080510-489259a85091/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220412211240-33da011f77ad/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.29.0 h1:TPYlXGxvx1MGTn2GiZDhnjPA9wZzZeGKHHmKhHYvgaU=
golang.org/x/sys v0.29.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.21.0 h1:zyQAAkrwaneQ066sspRyJaG9VNi/YJ1NfzcGB3hZ/qo=
golang.org/x/text v0.21.0/go.mod h1:4IBbMaMmOPCJ8SecivzSH54+73PCFmPWxNTLm+vZkEQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20201224043029-2b0845dc783e/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20220517211312-f3a8303e98df/go.mod h1:K8+ghG5WaK9qNqU5K3HdILfMLy1f3aNYFI/wnl100a8=
google.golang.org/genproto v0.0.0-20180831171423-11092d34479b/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto/googleapis/api v0.0.0-20241202173237-19429a94021a h1:OAiGFfOiA0v9MRYsSidp3ubZaBnteRUyn3xB2ZQ5G/E=
google.golang.org/genproto/googleapis/api v0.0.0-20241202173237-19429a94021a/go.mod h1:jehYqy3+AhJU9ve55aNOaSml7wUXjF9x6z2LcCfpAhY=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250122153221-138b5a5a4fd4 h1:yrTuav+chrF0zF/joFGICKTzYv7mh/gr9AgEXrVU8ao=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250122153221-138b5a5a4fd4/go.mod h1:+2Yz8+CLJbIfL9z73EW45avw8Lmge3xVElCP9zEKi50=
google.golang.org/grpc v1.70.0 h1:pWFv03aZoHzlRKHWicjsZytKAiYCtNS0dHbXnIdq7jQ=
google.golang.org/grpc v1.70.0/go.mod h1:ofIJqVKDXx/JiXrwr2IG4/zwdH9txy3IlF40RmcJSQw=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE=
google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo=
google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.36.4 h1:6A3ZDJHn/eNqc1i+IdefRzy/9PokBTPvcqMySR7NNIM=
google.golang.org/protobuf v1.36.4/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gotest.tools/v3 v3.5.1 h1:EENdUnS3pdur5nybKYIh2Vfgc8IUNBjxDPSjtiJcOzU=
gotest.tools/v3 v3.5.1/go.mod h1:isy3WKz7GK6uNw/sbHzfKBLvlvXwUyV06n6brMxxopU=
pgregory.net/rapid v1.1.0 h1:CMa0sjHSru3puNx+J0MIAuiiEV4N0qj8/cMWGBBCsjw=
pgregory.net/rapid v1.1.0/go.mod h1:PY5XlDGj0+V1FCq0o192FdRhpKHGTRIWBgqjDBTrq04=
sigs.k8s.io/yaml v1.4.0 h1:Mk1wCc2gy/F0THH0TAp1QYyJNzRm2KCLy3o5ASXVI5E=
sigs.k8s.io/yaml v1.4.0/go.mod h1:Ejl7/uTz7PSA4eKMyQCUTnhZYNmLIl+5c2lQPGR2BPY=


@ -1,17 +0,0 @@
version: v1
managed:
enabled: true
go_package_prefix:
default: cosmossdk.io/orm/internal
override:
buf.build/cosmos/cosmos-sdk: cosmossdk.io/api
plugins:
- name: go
out: .
opt: paths=source_relative
- name: go-grpc
out: .
opt: paths=source_relative
- name: go-cosmos-orm
out: .
opt: paths=source_relative


@ -1,11 +0,0 @@
version: v1
managed:
enabled: true
go_package_prefix:
default: cosmossdk.io/orm/internal
override:
buf.build/cosmos/cosmos-sdk: cosmossdk.io/api
plugins:
- name: go-cosmos-orm-proto
out: .
opt: paths=source_relative


@ -1,9 +0,0 @@
version: v1
lint:
use:
- DEFAULT
except:
- PACKAGE_VERSION_SUFFIX
breaking:
ignore:
- testpb


@ -1,91 +0,0 @@
package codegen
import (
"fmt"
"os"
"github.com/cosmos/cosmos-proto/generator"
"google.golang.org/protobuf/compiler/protogen"
"google.golang.org/protobuf/proto"
"google.golang.org/protobuf/types/pluginpb"
ormv1 "cosmossdk.io/api/cosmos/orm/v1"
)
const (
contextPkg = protogen.GoImportPath("context")
ormListPkg = protogen.GoImportPath("cosmossdk.io/orm/model/ormlist")
ormErrPkg = protogen.GoImportPath("cosmossdk.io/orm/types/ormerrors")
ormTablePkg = protogen.GoImportPath("cosmossdk.io/orm/model/ormtable")
)
func ORMPluginRunner(p *protogen.Plugin) error {
p.SupportedFeatures = uint64(pluginpb.CodeGeneratorResponse_FEATURE_PROTO3_OPTIONAL)
for _, f := range p.Files {
if !f.Generate {
continue
}
if !hasTables(f) {
continue
}
gen := p.NewGeneratedFile(fmt.Sprintf("%s.cosmos_orm.go", f.GeneratedFilenamePrefix), f.GoImportPath)
cgen := &generator.GeneratedFile{
GeneratedFile: gen,
LocalPackages: map[string]bool{},
}
fgen := fileGen{GeneratedFile: cgen, file: f}
err := fgen.gen()
if err != nil {
return err
}
}
return nil
}
func QueryProtoPluginRunner(p *protogen.Plugin) error {
p.SupportedFeatures = uint64(pluginpb.CodeGeneratorResponse_FEATURE_PROTO3_OPTIONAL)
for _, f := range p.Files {
if !f.Generate {
continue
}
if !hasTables(f) {
continue
}
out, err := os.OpenFile(fmt.Sprintf("%s_query.proto", f.GeneratedFilenamePrefix), os.O_RDWR|os.O_TRUNC|os.O_CREATE, 0o644)
if err != nil {
return err
}
err = queryProtoGen{
File: f,
svc: newWriter(),
msgs: newWriter(),
outFile: out,
imports: map[string]bool{},
}.gen()
if err != nil {
return err
}
}
return nil
}
func hasTables(file *protogen.File) bool {
for _, message := range file.Messages {
if proto.GetExtension(message.Desc.Options(), ormv1.E_Table).(*ormv1.TableDescriptor) != nil {
return true
}
if proto.GetExtension(message.Desc.Options(), ormv1.E_Singleton).(*ormv1.SingletonDescriptor) != nil {
return true
}
}
return false
}
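For orientation: plugin runners like ORMPluginRunner above are typically wired into a small protoc plugin binary via protogen, which takes care of reading the CodeGeneratorRequest from stdin and writing the response to stdout. A minimal sketch of such an entry point, assuming the codegen package lives at cosmossdk.io/orm/internal/codegen, might look like:

package main

import (
	"google.golang.org/protobuf/compiler/protogen"

	"cosmossdk.io/orm/internal/codegen" // assumed import path for the package above
)

func main() {
	// protogen handles the CodeGeneratorRequest/Response plumbing on
	// stdin/stdout, so protoc or buf can invoke this binary directly.
	protogen.Options{}.Run(codegen.ORMPluginRunner)
}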


@ -1,179 +0,0 @@
//nolint:unused // ignore unused code linting
package codegen
import (
"path/filepath"
"strings"
"github.com/cosmos/cosmos-proto/generator"
"github.com/iancoleman/strcase"
"google.golang.org/protobuf/compiler/protogen"
"google.golang.org/protobuf/proto"
ormv1 "cosmossdk.io/api/cosmos/orm/v1"
)
type fileGen struct {
*generator.GeneratedFile
file *protogen.File
}
func (f fileGen) gen() error {
f.P("// Code generated by protoc-gen-go-cosmos-orm. DO NOT EDIT.")
f.P()
f.P("package ", f.file.GoPackageName)
stores := make([]*protogen.Message, 0)
for _, msg := range f.file.Messages {
tableDesc := proto.GetExtension(msg.Desc.Options(), ormv1.E_Table).(*ormv1.TableDescriptor)
if tableDesc != nil {
tableGen, err := newTableGen(f, msg, tableDesc)
if err != nil {
return err
}
tableGen.gen()
}
singletonDesc := proto.GetExtension(msg.Desc.Options(), ormv1.E_Singleton).(*ormv1.SingletonDescriptor)
if singletonDesc != nil {
// do some singleton magic
singletonGen, err := newSingletonGen(f, msg, singletonDesc)
if err != nil {
return err
}
singletonGen.gen()
}
if tableDesc != nil || singletonDesc != nil { // message is one of the tables or singletons
stores = append(stores, msg)
}
}
f.genStoreInterface(stores)
f.genStoreStruct(stores)
f.genStoreMethods(stores)
f.genStoreInterfaceGuard()
f.genStoreConstructor(stores)
return nil
}
func (f fileGen) genStoreInterface(stores []*protogen.Message) {
f.P("type ", f.storeInterfaceName(), " interface {")
for _, store := range stores {
name := f.messageTableInterfaceName(store)
f.P(name, "()", name)
}
f.P()
f.P("doNotImplement()")
f.P("}")
f.P()
}
func (f fileGen) genStoreStruct(stores []*protogen.Message) {
// struct
f.P("type ", f.storeStructName(), " struct {")
for _, message := range stores {
f.P(f.param(message.GoIdent.GoName), " ", f.messageTableInterfaceName(message))
}
f.P("}")
}
func (f fileGen) storeAccessorName() string {
return f.storeInterfaceName()
}
func (f fileGen) storeInterfaceName() string {
return strcase.ToCamel(f.fileShortName()) + "Store"
}
func (f fileGen) storeStructName() string {
return strcase.ToLowerCamel(f.fileShortName()) + "Store"
}
func (f fileGen) fileShortName() string {
return fileShortName(f.file)
}
func fileShortName(file *protogen.File) string {
filename := file.Proto.GetName()
shortName := filepath.Base(filename)
i := strings.Index(shortName, ".")
if i > 0 {
return shortName[:i]
}
return strcase.ToCamel(shortName)
}
func (f fileGen) messageTableInterfaceName(m *protogen.Message) string {
return m.GoIdent.GoName + "Table"
}
func (f fileGen) messageReaderInterfaceName(m *protogen.Message) string {
return m.GoIdent.GoName + "Reader"
}
func (f fileGen) messageTableVar(m *protogen.Message) string {
return f.param(m.GoIdent.GoName + "Table")
}
func (f fileGen) param(name string) string {
return strcase.ToLowerCamel(name)
}
func (f fileGen) messageTableReceiverName(m *protogen.Message) string {
return f.param(f.messageTableInterfaceName(m))
}
func (f fileGen) messageConstructorName(m *protogen.Message) string {
return "New" + f.messageTableInterfaceName(m)
}
func (f fileGen) genStoreMethods(stores []*protogen.Message) {
// getters
for _, msg := range stores {
name := f.messageTableInterfaceName(msg)
f.P("func(x ", f.storeStructName(), ") ", name, "() ", name, "{")
f.P("return x.", f.param(msg.GoIdent.GoName))
f.P("}")
f.P()
}
f.P("func(", f.storeStructName(), ") doNotImplement() {}")
f.P()
}
func (f fileGen) genStoreInterfaceGuard() {
f.P("var _ ", f.storeInterfaceName(), " = ", f.storeStructName(), "{}")
}
func (f fileGen) genStoreConstructor(stores []*protogen.Message) {
f.P("func New", f.storeInterfaceName(), "(db ", ormTablePkg.Ident("Schema"), ") (", f.storeInterfaceName(), ", error) {")
for _, store := range stores {
f.P(f.messageTableReceiverName(store), ", err := ", f.messageConstructorName(store), "(db)")
f.P("if err != nil {")
f.P("return nil, err")
f.P("}")
f.P()
}
f.P("return ", f.storeStructName(), "{")
for _, store := range stores {
f.P(f.messageTableReceiverName(store), ",")
}
f.P("}, nil")
f.P("}")
}
func fieldsToCamelCase(fields string) string {
splitFields := strings.Split(fields, ",")
camelFields := make([]string, len(splitFields))
for i, field := range splitFields {
camelFields[i] = strcase.ToCamel(field)
}
return strings.Join(camelFields, "")
}
func fieldsToSnakeCase(fields string) string {
splitFields := strings.Split(fields, ",")
camelFields := make([]string, len(splitFields))
for i, field := range splitFields {
camelFields[i] = strcase.ToSnake(field)
}
return strings.Join(camelFields, "_")
}


@ -1,140 +0,0 @@
//nolint:unused // ignore unused code linting
package codegen
import (
"fmt"
"strings"
"github.com/iancoleman/strcase"
"google.golang.org/protobuf/reflect/protoreflect"
)
const indexKey = "IndexKey"
func (t tableGen) genIndexKeys() {
// interface that all keys must adhere to
t.P("type ", t.indexKeyInterfaceName(), " interface {")
t.P("id() uint32")
t.P("values() []interface{}")
t.P(t.param(t.indexKeyInterfaceName()), "()")
t.P("}")
t.P()
// start with primary key..
t.P("// primary key starting index..")
t.genIndex(t.table.PrimaryKey.Fields, 0, true)
for _, idx := range t.table.Index {
t.genIndex(idx.Fields, idx.Id, false)
}
}
func (t tableGen) genIterator() {
t.P("type ", t.iteratorName(), " struct {")
t.P(ormTablePkg.Ident("Iterator"))
t.P("}")
t.P()
t.genValueFunc()
t.P()
}
func (t tableGen) genValueFunc() {
varName := t.param(t.msg.GoIdent.GoName)
t.P("func (i ", t.iteratorName(), ") Value() (*", t.QualifiedGoIdent(t.msg.GoIdent), ", error) {")
t.P("var ", varName, " ", t.QualifiedGoIdent(t.msg.GoIdent))
t.P("err := i.UnmarshalMessage(&", varName, ")")
t.P("return &", varName, ", err")
t.P("}")
}
func (t tableGen) genIndexMethods(idxKeyName string) {
receiverFunc := fmt.Sprintf("func (x %s) ", idxKeyName)
t.P(receiverFunc, "id() uint32 { return ", t.table.Id, " /* primary key */ }")
t.P(receiverFunc, "values() []interface{} { return x.vs }")
t.P(receiverFunc, t.param(t.indexKeyInterfaceName()), "() {}")
t.P()
}
func (t tableGen) genIndexInterfaceGuard(idxKeyName string) {
t.P("var _ ", t.indexKeyInterfaceName(), " = ", idxKeyName, "{}")
t.P()
}
func (t tableGen) indexKeyInterfaceName() string {
return t.msg.GoIdent.GoName + indexKey
}
func (t tableGen) genIndexKey(idxKeyName string) {
t.P("type ", idxKeyName, " struct {")
t.P("vs []interface{}")
t.P("}")
t.P()
}
func (t tableGen) indexKeyParts(names []protoreflect.Name) string {
cnames := make([]string, len(names))
for i, name := range names {
cnames[i] = strcase.ToCamel(string(name))
}
return strings.Join(cnames, "")
}
func (t tableGen) indexKeyName(names []protoreflect.Name) string {
cnames := make([]string, len(names))
for i, name := range names {
cnames[i] = strcase.ToCamel(string(name))
}
joinedNames := strings.Join(cnames, "")
return t.msg.GoIdent.GoName + joinedNames + indexKey
}
func (t tableGen) indexStructName(fields []string) string {
names := make([]string, len(fields))
for i, field := range fields {
names[i] = strcase.ToCamel(field)
}
joinedNames := strings.Join(names, "")
return t.msg.GoIdent.GoName + joinedNames + indexKey
}
func (t tableGen) genIndex(fields string, id uint32, isPrimaryKey bool) {
fieldsSlc := strings.Split(fields, ",")
idxKeyName := t.indexStructName(fieldsSlc)
if isPrimaryKey {
t.P("type ", t.msg.GoIdent.GoName, "PrimaryKey = ", idxKeyName)
t.P()
}
t.P("type ", idxKeyName, " struct {")
t.P("vs []interface{}")
t.P("}")
t.genIndexInterfaceMethods(id, idxKeyName)
for i := 1; i < len(fieldsSlc)+1; i++ {
t.genWithMethods(idxKeyName, fieldsSlc[:i])
}
}
func (t tableGen) genIndexInterfaceMethods(id uint32, indexStructName string) {
funPrefix := fmt.Sprintf("func (x %s) ", indexStructName)
t.P(funPrefix, "id() uint32 {return ", id, "}")
t.P(funPrefix, "values() []interface{} {return x.vs}")
t.P(funPrefix, t.param(t.indexKeyInterfaceName()), "() {}")
t.P()
}
func (t tableGen) genWithMethods(indexStructName string, parts []string) {
funcPrefix := fmt.Sprintf("func (this %s) ", indexStructName)
camelParts := make([]string, len(parts))
for i, part := range parts {
camelParts[i] = strcase.ToCamel(part)
}
funcName := "With" + strings.Join(camelParts, "")
t.P(funcPrefix, funcName, "(", t.fieldArgsFromStringSlice(parts), ") ", indexStructName, "{")
t.P("this.vs = []interface{}{", strings.Join(parts, ","), "}")
t.P("return this")
t.P("}")
t.P()
}
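To make concrete what genIndexKeys, genIndex, and genWithMethods above emit, here is a rough sketch of the generated output for a hypothetical Balance table whose primary key is (address, denom); all names below are illustrative, not taken from the real generated test protos:

package examplepb // hypothetical generated package

// BalanceIndexKey is the interface all index keys of the hypothetical
// Balance table satisfy (mirrors the interface emitted by genIndexKeys).
type BalanceIndexKey interface {
	id() uint32
	values() []interface{}
	balanceIndexKey()
}

// BalancePrimaryKey aliases the primary-key index key type, as emitted
// for the primary key by genIndex.
type BalancePrimaryKey = BalanceAddressDenomIndexKey

type BalanceAddressDenomIndexKey struct {
	vs []interface{}
}

func (x BalanceAddressDenomIndexKey) id() uint32            { return 0 }
func (x BalanceAddressDenomIndexKey) values() []interface{} { return x.vs }
func (x BalanceAddressDenomIndexKey) balanceIndexKey()      {}

// WithAddress and WithAddressDenom are the prefix builders emitted by
// genWithMethods, one per prefix of the index's field list.
func (this BalanceAddressDenomIndexKey) WithAddress(address string) BalanceAddressDenomIndexKey {
	this.vs = []interface{}{address}
	return this
}

func (this BalanceAddressDenomIndexKey) WithAddressDenom(address, denom string) BalanceAddressDenomIndexKey {
	this.vs = []interface{}{address, denom}
	return this
}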


@ -1,319 +0,0 @@
package codegen
import (
"bytes"
"fmt"
"maps"
"os"
"slices"
"github.com/iancoleman/strcase"
"google.golang.org/protobuf/compiler/protogen"
"google.golang.org/protobuf/proto"
"google.golang.org/protobuf/reflect/protoreflect"
ormv1 "cosmossdk.io/api/cosmos/orm/v1"
"cosmossdk.io/orm/internal/fieldnames"
)
type queryProtoGen struct {
*protogen.File
imports map[string]bool
svc *writer
msgs *writer
outFile *os.File
}
func (g queryProtoGen) gen() error {
g.imports[g.Desc.Path()] = true
g.svc.F("// %sService queries the state of the tables specified by %s.", g.queryServiceName(), g.Desc.Path())
g.svc.F("service %sService {", g.queryServiceName())
g.svc.Indent()
for _, msg := range g.Messages {
tableDesc := proto.GetExtension(msg.Desc.Options(), ormv1.E_Table).(*ormv1.TableDescriptor)
if tableDesc != nil {
err := g.genTableRPCMethods(msg, tableDesc)
if err != nil {
return err
}
}
singletonDesc := proto.GetExtension(msg.Desc.Options(), ormv1.E_Singleton).(*ormv1.SingletonDescriptor)
if singletonDesc != nil {
err := g.genSingletonRPCMethods(msg)
if err != nil {
return err
}
}
}
g.svc.Dedent()
g.svc.F("}")
g.svc.F("")
outBuf := newWriter()
outBuf.F("// Code generated by protoc-gen-go-cosmos-orm-proto. DO NOT EDIT.")
outBuf.F(`syntax = "proto3";`)
outBuf.F("package %s;", g.Desc.Package())
outBuf.F("")
imports := slices.Sorted(maps.Keys(g.imports))
for _, i := range imports {
outBuf.F(`import "%s";`, i)
}
outBuf.F("")
_, err := outBuf.Write(g.svc.Bytes())
if err != nil {
return err
}
_, err = outBuf.Write(g.msgs.Bytes())
if err != nil {
return err
}
_, err = g.outFile.Write(outBuf.Bytes())
if err != nil {
return err
}
return g.outFile.Close()
}
func (g queryProtoGen) genTableRPCMethods(msg *protogen.Message, desc *ormv1.TableDescriptor) error {
name := msg.Desc.Name()
g.svc.F("// Get queries the %s table by its primary key.", name)
g.svc.F("rpc Get%s(Get%sRequest) returns (Get%sResponse) {}", name, name, name) // TODO grpc gateway
g.startRequestType("Get%sRequest", name)
g.msgs.Indent()
primaryKeyFields := fieldnames.CommaSeparatedFieldNames(desc.PrimaryKey.Fields)
fields := msg.Desc.Fields()
for i, fieldName := range primaryKeyFields.Names() {
field := fields.ByName(fieldName)
if field == nil {
return fmt.Errorf("can't find primary key field %s", fieldName)
}
g.msgs.F("// %s specifies the value of the %s field in the primary key.", fieldName, fieldName)
g.msgs.F("%s %s = %d;", g.fieldType(field), fieldName, i+1)
}
g.msgs.Dedent()
g.msgs.F("}")
g.msgs.F("")
g.startResponseType("Get%sResponse", name)
g.msgs.Indent()
g.msgs.F("// value is the response value.")
g.msgs.F("%s value = 1;", name)
g.msgs.Dedent()
g.msgs.F("}")
g.msgs.F("")
for _, idx := range desc.Index {
if !idx.Unique {
continue
}
fieldsCamel := fieldsToCamelCase(idx.Fields)
methodName := fmt.Sprintf("Get%sBy%s", name, fieldsCamel)
g.svc.F("// %s queries the %s table by its %s index", methodName, name, fieldsCamel)
g.svc.F("rpc %s(%sRequest) returns (%sResponse) {}", methodName, methodName, methodName) // TODO grpc gateway
g.startRequestType("%sRequest", methodName)
g.msgs.Indent()
fieldNames := fieldnames.CommaSeparatedFieldNames(idx.Fields)
for i, fieldName := range fieldNames.Names() {
field := fields.ByName(fieldName)
if field == nil {
return fmt.Errorf("can't find unique index field %s", fieldName)
}
g.msgs.F("%s %s = %d;", g.fieldType(field), fieldName, i+1)
}
g.msgs.Dedent()
g.msgs.F("}")
g.msgs.F("")
g.startResponseType("%sResponse", methodName)
g.msgs.Indent()
g.msgs.F("%s value = 1;", name)
g.msgs.Dedent()
g.msgs.F("}")
g.msgs.F("")
}
g.imports["cosmos/base/query/v1beta1/pagination.proto"] = true
g.svc.F("// List%s queries the %s table using prefix and range queries against defined indexes.", name, name)
g.svc.F("rpc List%s(List%sRequest) returns (List%sResponse) {}", name, name, name) // TODO grpc gateway
g.startRequestType("List%sRequest", name)
g.msgs.Indent()
g.msgs.F("// IndexKey specifies the value of an index key to use in prefix and range queries.")
g.msgs.F("message IndexKey {")
g.msgs.Indent()
indexFields := []string{desc.PrimaryKey.Fields}
// the primary key has field number 1
fieldNums := []uint32{1}
for _, index := range desc.Index {
indexFields = append(indexFields, index.Fields)
// index field numbers are their id + 1
fieldNums = append(fieldNums, index.Id+1)
}
g.msgs.F("// key specifies the index key value.")
g.msgs.F("oneof key {")
g.msgs.Indent()
for i, fields := range indexFields {
fieldName := fieldsToSnakeCase(fields)
typeName := fieldsToCamelCase(fields)
g.msgs.F("// %s specifies the value of the %s index key to use in the query.", fieldName, typeName)
g.msgs.F("%s %s = %d;", typeName, fieldName, fieldNums[i])
}
g.msgs.Dedent()
g.msgs.F("}")
for _, fieldNames := range indexFields {
g.msgs.F("")
g.msgs.F("message %s {", fieldsToCamelCase(fieldNames))
g.msgs.Indent()
for i, fieldName := range fieldnames.CommaSeparatedFieldNames(fieldNames).Names() {
g.msgs.F("// %s is the value of the %s field in the index.", fieldName, fieldName)
g.msgs.F("// It can be omitted to query for all valid values of that field in this segment of the index.")
g.msgs.F("optional %s %s = %d;", g.fieldType(fields.ByName(fieldName)), fieldName, i+1)
}
g.msgs.Dedent()
g.msgs.F("}")
}
g.msgs.Dedent()
g.msgs.F("}")
g.msgs.F("")
g.msgs.F("// query specifies the type of query - either a prefix or range query.")
g.msgs.F("oneof query {")
g.msgs.Indent()
g.msgs.F("// prefix_query specifies the index key value to use for the prefix query.")
g.msgs.F("IndexKey prefix_query = 1;")
g.msgs.F("// range_query specifies the index key from/to values to use for the range query.")
g.msgs.F("RangeQuery range_query = 2;")
g.msgs.Dedent()
g.msgs.F("}")
g.msgs.F("// pagination specifies optional pagination parameters.")
g.msgs.F("cosmos.base.query.v1beta1.PageRequest pagination = 3;")
g.msgs.F("")
g.msgs.F("// RangeQuery specifies the from/to index keys for a range query.")
g.msgs.F("message RangeQuery {")
g.msgs.Indent()
g.msgs.F("// from is the index key to use for the start of the range query.")
g.msgs.F("// To query from the start of an index, specify an index key for that index with empty values.")
g.msgs.F("IndexKey from = 1;")
g.msgs.F("// to is the index key to use for the end of the range query.")
g.msgs.F("// The index key type MUST be the same as the index key type used for from.")
g.msgs.F("// To query from to the end of an index it can be omitted.")
g.msgs.F("IndexKey to = 2;")
g.msgs.Dedent()
g.msgs.F("}")
g.msgs.Dedent()
g.msgs.F("}")
g.msgs.F("")
g.startResponseType("List%sResponse", name)
g.msgs.Indent()
g.msgs.F("// values are the results of the query.")
g.msgs.F("repeated %s values = 1;", name)
g.msgs.F("// pagination is the pagination response.")
g.msgs.F("cosmos.base.query.v1beta1.PageResponse pagination = 2;")
g.msgs.Dedent()
g.msgs.F("}")
g.msgs.F("")
return nil
}
func (g queryProtoGen) genSingletonRPCMethods(msg *protogen.Message) error {
name := msg.Desc.Name()
g.svc.F("// Get%s queries the %s singleton.", name, name)
g.svc.F("rpc Get%s(Get%sRequest) returns (Get%sResponse) {}", name, name, name) // TODO grpc gateway
g.startRequestType("Get%sRequest", name)
g.msgs.F("}")
g.msgs.F("")
g.startRequestType("Get%sResponse", name)
g.msgs.Indent()
g.msgs.F("%s value = 1;", name)
g.msgs.Dedent()
g.msgs.F("}")
g.msgs.F("")
return nil
}
func (g queryProtoGen) startRequestType(format string, args ...any) {
g.startRequestResponseType("request", format, args...)
}
func (g queryProtoGen) startResponseType(format string, args ...any) {
g.startRequestResponseType("response", format, args...)
}
func (g queryProtoGen) startRequestResponseType(typ, format string, args ...any) {
msgTypeName := fmt.Sprintf(format, args...)
g.msgs.F("// %s is the %s/%s %s type.", msgTypeName, g.queryServiceName(), msgTypeName, typ)
g.msgs.F("message %s {", msgTypeName)
}
func (g queryProtoGen) queryServiceName() string {
return fmt.Sprintf("%sQuery", strcase.ToCamel(fileShortName(g.File)))
}
func (g queryProtoGen) fieldType(descriptor protoreflect.FieldDescriptor) string {
if descriptor.Kind() == protoreflect.MessageKind {
message := descriptor.Message()
g.imports[message.ParentFile().Path()] = true
return string(message.FullName())
}
return descriptor.Kind().String()
}
type writer struct {
*bytes.Buffer
indent int
indentStr string
}
func newWriter() *writer {
return &writer{
Buffer: &bytes.Buffer{},
}
}
func (w *writer) F(format string, args ...interface{}) {
_, err := w.Write([]byte(w.indentStr))
if err != nil {
panic(err)
}
_, err = fmt.Fprintf(w, format, args...)
if err != nil {
panic(err)
}
_, err = fmt.Fprintln(w)
if err != nil {
panic(err)
}
}
func (w *writer) Indent() {
w.indent++
w.updateIndent()
}
func (w *writer) updateIndent() {
w.indentStr = ""
for i := 0; i < w.indent; i++ {
w.indentStr += " "
}
}
func (w *writer) Dedent() {
w.indent--
w.updateIndent()
}


@ -1,85 +0,0 @@
package codegen
import (
"fmt"
"google.golang.org/protobuf/compiler/protogen"
"google.golang.org/protobuf/types/dynamicpb"
ormv1 "cosmossdk.io/api/cosmos/orm/v1"
"cosmossdk.io/orm/model/ormtable"
)
type singletonGen struct {
fileGen
msg *protogen.Message
table *ormv1.SingletonDescriptor
ormTable ormtable.Table
}
func newSingletonGen(fileGen fileGen, msg *protogen.Message, table *ormv1.SingletonDescriptor) (*singletonGen, error) {
s := &singletonGen{fileGen: fileGen, msg: msg, table: table}
var err error
s.ormTable, err = ormtable.Build(ormtable.Options{
MessageType: dynamicpb.NewMessageType(msg.Desc),
SingletonDescriptor: table,
})
return s, err
}
func (s singletonGen) gen() {
s.genInterface()
s.genStruct()
s.genInterfaceGuard()
s.genMethods()
s.genConstructor()
}
func (s singletonGen) genInterface() {
s.P("// singleton store")
s.P("type ", s.messageTableInterfaceName(s.msg), " interface {")
s.P("Get(ctx ", contextPkg.Ident("Context"), ") (*", s.msg.GoIdent.GoName, ", error)")
s.P("Save(ctx ", contextPkg.Ident("Context"), ", ", s.param(s.msg.GoIdent.GoName), "*", s.msg.GoIdent.GoName, ") error")
s.P("}")
s.P()
}
func (s singletonGen) genStruct() {
s.P("type ", s.messageTableReceiverName(s.msg), " struct {")
s.P("table ", ormTablePkg.Ident("Table"))
s.P("}")
s.P()
}
func (s singletonGen) genInterfaceGuard() {
s.P("var _ ", s.messageTableInterfaceName(s.msg), " = ", s.messageTableReceiverName(s.msg), "{}")
}
func (s singletonGen) genMethods() {
receiver := fmt.Sprintf("func (x %s) ", s.messageTableReceiverName(s.msg))
varName := s.param(s.msg.GoIdent.GoName)
// Get
s.P(receiver, "Get(ctx ", contextPkg.Ident("Context"), ") (*", s.msg.GoIdent.GoName, ", error) {")
s.P(varName, " := &", s.msg.GoIdent.GoName, "{}")
s.P("_, err := x.table.Get(ctx, ", varName, ")")
s.P("return ", varName, ", err")
s.P("}")
s.P()
// Save
s.P(receiver, "Save(ctx ", contextPkg.Ident("Context"), ", ", varName, " *", s.msg.GoIdent.GoName, ") error {")
s.P("return x.table.Save(ctx, ", varName, ")")
s.P("}")
s.P()
}
func (s singletonGen) genConstructor() {
iface := s.messageTableInterfaceName(s.msg)
s.P("func New", iface, "(db ", ormTablePkg.Ident("Schema"), ") (", iface, ", error) {")
s.P("table := db.GetTable(&", s.msg.GoIdent.GoName, "{})")
s.P("if table == nil {")
s.P("return nil, ", ormErrPkg.Ident("TableNotFound.Wrap"), "(string((&", s.msg.GoIdent.GoName, "{}).ProtoReflect().Descriptor().FullName()))")
s.P("}")
s.P("return &", s.messageTableReceiverName(s.msg), "{table}, nil")
s.P("}")
}


@ -1,304 +0,0 @@
//nolint:unused // ignore unused code linting
package codegen
import (
"fmt"
"strings"
"google.golang.org/protobuf/compiler/protogen"
"google.golang.org/protobuf/reflect/protoreflect"
"google.golang.org/protobuf/types/dynamicpb"
ormv1 "cosmossdk.io/api/cosmos/orm/v1"
"cosmossdk.io/orm/internal/fieldnames"
"cosmossdk.io/orm/model/ormtable"
)
type tableGen struct {
fileGen
msg *protogen.Message
table *ormv1.TableDescriptor
primaryKeyFields fieldnames.FieldNames
fields map[protoreflect.Name]*protogen.Field
uniqueIndexes []*ormv1.SecondaryIndexDescriptor
ormTable ormtable.Table
}
const notFoundDocs = " returns nil and an error which responds true to ormerrors.IsNotFound() if the record was not found."
func newTableGen(fileGen fileGen, msg *protogen.Message, table *ormv1.TableDescriptor) (*tableGen, error) {
t := &tableGen{fileGen: fileGen, msg: msg, table: table, fields: map[protoreflect.Name]*protogen.Field{}}
t.primaryKeyFields = fieldnames.CommaSeparatedFieldNames(table.PrimaryKey.Fields)
for _, field := range msg.Fields {
t.fields[field.Desc.Name()] = field
}
uniqIndexes := make([]*ormv1.SecondaryIndexDescriptor, 0)
for _, idx := range t.table.Index {
if idx.Unique {
uniqIndexes = append(uniqIndexes, idx)
}
}
t.uniqueIndexes = uniqIndexes
var err error
t.ormTable, err = ormtable.Build(ormtable.Options{
MessageType: dynamicpb.NewMessageType(msg.Desc),
TableDescriptor: table,
})
return t, err
}
func (t tableGen) gen() {
t.getTableInterface()
t.genIterator()
t.genIndexKeys()
t.genStruct()
t.genTableImpl()
t.genTableImplGuard()
t.genConstructor()
}
func (t tableGen) getTableInterface() {
t.P("type ", t.messageTableInterfaceName(t.msg), " interface {")
t.P("Insert(ctx ", contextPkg.Ident("Context"), ", ", t.param(t.msg.GoIdent.GoName), " *", t.QualifiedGoIdent(t.msg.GoIdent), ") error")
if t.table.PrimaryKey.AutoIncrement {
t.P("InsertReturning", fieldsToCamelCase(t.table.PrimaryKey.Fields), "(ctx ", contextPkg.Ident("Context"), ", ", t.param(t.msg.GoIdent.GoName), " *", t.QualifiedGoIdent(t.msg.GoIdent), ") (uint64, error)")
t.P("LastInsertedSequence(ctx ", contextPkg.Ident("Context"), ") (uint64, error)")
}
t.P("Update(ctx ", contextPkg.Ident("Context"), ", ", t.param(t.msg.GoIdent.GoName), " *", t.QualifiedGoIdent(t.msg.GoIdent), ") error")
t.P("Save(ctx ", contextPkg.Ident("Context"), ", ", t.param(t.msg.GoIdent.GoName), " *", t.QualifiedGoIdent(t.msg.GoIdent), ") error")
t.P("Delete(ctx ", contextPkg.Ident("Context"), ", ", t.param(t.msg.GoIdent.GoName), " *", t.QualifiedGoIdent(t.msg.GoIdent), ") error")
t.P("Has(ctx ", contextPkg.Ident("Context"), ", ", t.fieldsArgs(t.primaryKeyFields.Names()), ") (found bool, err error)")
t.P("// Get", notFoundDocs)
t.P("Get(ctx ", contextPkg.Ident("Context"), ", ", t.fieldsArgs(t.primaryKeyFields.Names()), ") (*", t.QualifiedGoIdent(t.msg.GoIdent), ", error)")
for _, idx := range t.uniqueIndexes {
t.genUniqueIndexSig(idx)
}
t.P("List(ctx ", contextPkg.Ident("Context"), ", prefixKey ", t.indexKeyInterfaceName(), ", opts ...", ormListPkg.Ident("Option"), ") ", "(", t.iteratorName(), ", error)")
t.P("ListRange(ctx ", contextPkg.Ident("Context"), ", from, to ", t.indexKeyInterfaceName(), ", opts ...", ormListPkg.Ident("Option"), ") ", "(", t.iteratorName(), ", error)")
t.P("DeleteBy(ctx ", contextPkg.Ident("Context"), ", prefixKey ", t.indexKeyInterfaceName(), ") error")
t.P("DeleteRange(ctx ", contextPkg.Ident("Context"), ", from, to ", t.indexKeyInterfaceName(), ") error")
t.P()
t.P("doNotImplement()")
t.P("}")
t.P()
}
// uniqueIndexSig returns the Has and Get function signatures (in that order), along with the Get function name, for unique indexes.
func (t tableGen) uniqueIndexSig(idxFields string) (string, string, string) {
fieldsSlc := strings.Split(idxFields, ",")
camelFields := fieldsToCamelCase(idxFields)
hasFuncName := "HasBy" + camelFields
getFuncName := "GetBy" + camelFields
args := t.fieldArgsFromStringSlice(fieldsSlc)
hasFuncSig := fmt.Sprintf("%s (ctx context.Context, %s) (found bool, err error)", hasFuncName, args)
getFuncSig := fmt.Sprintf("%s (ctx context.Context, %s) (*%s, error)", getFuncName, args, t.msg.GoIdent.GoName)
return hasFuncSig, getFuncSig, getFuncName
}
func (t tableGen) genUniqueIndexSig(idx *ormv1.SecondaryIndexDescriptor) {
hasSig, getSig, getFuncName := t.uniqueIndexSig(idx.Fields)
t.P(hasSig)
t.P("// ", getFuncName, notFoundDocs)
t.P(getSig)
}
func (t tableGen) iteratorName() string {
return t.msg.GoIdent.GoName + "Iterator"
}
func (t tableGen) getSig() string {
res := "Get" + t.msg.GoIdent.GoName + "("
res += t.fieldsArgs(t.primaryKeyFields.Names())
res += ") (*" + t.QualifiedGoIdent(t.msg.GoIdent) + ", error)"
return res
}
func (t tableGen) hasSig() string {
t.P("Has(ctx ", contextPkg.Ident("Context"), ", ", t.fieldsArgs(t.primaryKeyFields.Names()), ") (found bool, err error)")
return ""
}
func (t tableGen) listSig() string {
res := "List" + t.msg.GoIdent.GoName + "("
res += t.indexKeyInterfaceName()
res += ") ("
res += t.iteratorName()
res += ", error)"
return res
}
func (t tableGen) fieldArgsFromStringSlice(names []string) string {
args := make([]string, len(names))
for i, name := range names {
args[i] = t.fieldArg(protoreflect.Name(name))
}
return strings.Join(args, ",")
}
func (t tableGen) fieldsArgs(names []protoreflect.Name) string {
var params []string
for _, name := range names {
params = append(params, t.fieldArg(name))
}
return strings.Join(params, ",")
}
func (t tableGen) fieldArg(name protoreflect.Name) string {
typ, pointer := t.GeneratedFile.FieldGoType(t.fields[name])
if pointer {
typ = "*" + typ
}
return string(name) + " " + typ
}
func (t tableGen) genStruct() {
t.P("type ", t.messageTableReceiverName(t.msg), " struct {")
if t.table.PrimaryKey.AutoIncrement {
t.P("table ", ormTablePkg.Ident("AutoIncrementTable"))
} else {
t.P("table ", ormTablePkg.Ident("Table"))
}
t.P("}")
t.storeStructName()
}
func (t tableGen) genTableImpl() {
receiverVar := "this"
receiver := fmt.Sprintf("func (%s %s) ", receiverVar, t.messageTableReceiverName(t.msg))
varName := t.param(t.msg.GoIdent.GoName)
varTypeName := t.QualifiedGoIdent(t.msg.GoIdent)
// These methods all share the same implementation except for their names, so we can generate them in a loop.
methods := []string{"Insert", "Update", "Save", "Delete"}
for _, method := range methods {
t.P(receiver, method, "(ctx ", contextPkg.Ident("Context"), ", ", varName, " *", varTypeName, ") error {")
t.P("return ", receiverVar, ".table.", method, "(ctx, ", varName, ")")
t.P("}")
t.P()
}
if t.table.PrimaryKey.AutoIncrement {
t.P(receiver, "InsertReturning", fieldsToCamelCase(t.table.PrimaryKey.Fields), "(ctx ", contextPkg.Ident("Context"), ", ", varName, " *", varTypeName, ") (uint64, error) {")
t.P("return ", receiverVar, ".table.InsertReturningPKey(ctx, ", varName, ")")
t.P("}")
t.P()
t.P(receiver, "LastInsertedSequence(ctx ", contextPkg.Ident("Context"), ") (uint64, error) {")
t.P("return ", receiverVar, ".table.LastInsertedSequence(ctx)")
t.P("}")
t.P()
}
// Has
t.P(receiver, "Has(ctx ", contextPkg.Ident("Context"), ", ", t.fieldsArgs(t.primaryKeyFields.Names()), ") (found bool, err error) {")
t.P("return ", receiverVar, ".table.PrimaryKey().Has(ctx, ", t.primaryKeyFields.String(), ")")
t.P("}")
t.P()
// Get
t.P(receiver, "Get(ctx ", contextPkg.Ident("Context"), ", ", t.fieldsArgs(t.primaryKeyFields.Names()), ") (*", varTypeName, ", error) {")
t.P("var ", varName, " ", varTypeName)
t.P("found, err := ", receiverVar, ".table.PrimaryKey().Get(ctx, &", varName, ", ", t.primaryKeyFields.String(), ")")
t.P("if err != nil {")
t.P("return nil, err")
t.P("}")
t.P("if !found {")
t.P("return nil, ", ormErrPkg.Ident("NotFound"))
t.P("}")
t.P("return &", varName, ", nil")
t.P("}")
t.P()
for _, idx := range t.uniqueIndexes {
fields := strings.Split(idx.Fields, ",")
hasName, getName, _ := t.uniqueIndexSig(idx.Fields)
// has
t.P("func (", receiverVar, " ", t.messageTableReceiverName(t.msg), ") ", hasName, "{")
t.P("return ", receiverVar, ".table.GetIndexByID(", idx.Id, ").(",
ormTablePkg.Ident("UniqueIndex"), ").Has(ctx,")
for _, field := range fields {
t.P(field, ",")
}
t.P(")")
t.P("}")
t.P()
// get
varName := t.param(t.msg.GoIdent.GoName)
varTypeName := t.msg.GoIdent.GoName
t.P("func (", receiverVar, " ", t.messageTableReceiverName(t.msg), ") ", getName, "{")
t.P("var ", varName, " ", varTypeName)
t.P("found, err := ", receiverVar, ".table.GetIndexByID(", idx.Id, ").(",
ormTablePkg.Ident("UniqueIndex"), ").Get(ctx, &", varName, ",")
for _, field := range fields {
t.P(field, ",")
}
t.P(")")
t.P("if err != nil {")
t.P("return nil, err")
t.P("}")
t.P("if !found {")
t.P("return nil, ", ormErrPkg.Ident("NotFound"))
t.P("}")
t.P("return &", varName, ", nil")
t.P("}")
t.P()
}
// List
t.P(receiver, "List(ctx ", contextPkg.Ident("Context"), ", prefixKey ", t.indexKeyInterfaceName(), ", opts ...", ormListPkg.Ident("Option"), ") (", t.iteratorName(), ", error) {")
t.P("it, err := ", receiverVar, ".table.GetIndexByID(prefixKey.id()).List(ctx, prefixKey.values(), opts...)")
t.P("return ", t.iteratorName(), "{it}, err")
t.P("}")
t.P()
// ListRange
t.P(receiver, "ListRange(ctx ", contextPkg.Ident("Context"), ", from, to ", t.indexKeyInterfaceName(), ", opts ...", ormListPkg.Ident("Option"), ") (", t.iteratorName(), ", error) {")
t.P("it, err := ", receiverVar, ".table.GetIndexByID(from.id()).ListRange(ctx, from.values(), to.values(), opts...)")
t.P("return ", t.iteratorName(), "{it}, err")
t.P("}")
t.P()
// DeleteBy
t.P(receiver, "DeleteBy(ctx ", contextPkg.Ident("Context"), ", prefixKey ", t.indexKeyInterfaceName(), ") error {")
t.P("return ", receiverVar, ".table.GetIndexByID(prefixKey.id()).DeleteBy(ctx, prefixKey.values()...)")
t.P("}")
t.P()
t.P()
// DeleteRange
t.P(receiver, "DeleteRange(ctx ", contextPkg.Ident("Context"), ", from, to ", t.indexKeyInterfaceName(), ") error {")
t.P("return ", receiverVar, ".table.GetIndexByID(from.id()).DeleteRange(ctx, from.values(), to.values())")
t.P("}")
t.P()
t.P()
t.P(receiver, "doNotImplement() {}")
t.P()
}
func (t tableGen) genTableImplGuard() {
t.P("var _ ", t.messageTableInterfaceName(t.msg), " = ", t.messageTableReceiverName(t.msg), "{}")
}
func (t tableGen) genConstructor() {
iface := t.messageTableInterfaceName(t.msg)
t.P("func New", iface, "(db ", ormTablePkg.Ident("Schema"), ") (", iface, ", error) {")
t.P("table := db.GetTable(&", t.msg.GoIdent.GoName, "{})")
t.P("if table == nil {")
t.P("return nil,", ormErrPkg.Ident("TableNotFound.Wrap"), "(string((&", t.msg.GoIdent.GoName, "{}).ProtoReflect().Descriptor().FullName()))")
t.P("}")
if t.table.PrimaryKey.AutoIncrement {
t.P(
"return ", t.messageTableReceiverName(t.msg), "{table.(",
ormTablePkg.Ident("AutoIncrementTable"), ")}, nil",
)
} else {
t.P("return ", t.messageTableReceiverName(t.msg), "{table}, nil")
}
t.P("}")
}

View File

@ -1,55 +0,0 @@
package fieldnames
import (
"strings"
"google.golang.org/protobuf/reflect/protoreflect"
)
// FieldNames abstractly represents a list of fields with a comparable type which
// can be used as a map key. It is used primarily to lookup indexes.
type FieldNames struct {
fields string
}
// CommaSeparatedFieldNames creates a FieldNames instance from a list of comma-separated
// fields.
func CommaSeparatedFieldNames(fields string) FieldNames {
// normalize cases where there are spaces
if strings.IndexByte(fields, ' ') >= 0 {
parts := strings.Split(fields, ",")
for i, part := range parts {
parts[i] = strings.TrimSpace(part)
}
fields = strings.Join(parts, ",")
}
return FieldNames{fields: fields}
}
// FieldsFromNames creates a FieldNames instance from an array of field
// names.
func FieldsFromNames(fnames []protoreflect.Name) FieldNames {
var names []string
for _, name := range fnames {
names = append(names, string(name))
}
return FieldNames{fields: strings.Join(names, ",")}
}
// Names returns the array of names this FieldNames instance represents.
func (f FieldNames) Names() []protoreflect.Name {
if f.fields == "" {
return nil
}
fields := strings.Split(f.fields, ",")
names := make([]protoreflect.Name, len(fields))
for i, field := range fields {
names[i] = protoreflect.Name(field)
}
return names
}
func (f FieldNames) String() string {
return f.fields
}
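
A minimal usage sketch, written as an example test that would sit next to the code above in package fieldnames (it needs "fmt" plus the protoreflect import already used here). It shows why a comparable FieldNames value can serve directly as a map key for index lookups.

func ExampleCommaSeparatedFieldNames() {
	a := CommaSeparatedFieldNames("address, denom")
	b := FieldsFromNames([]protoreflect.Name{"address", "denom"})

	// Both spellings normalize to the same comparable value, so FieldNames
	// works as a map key when resolving indexes by their field list.
	indexIDs := map[FieldNames]uint32{a: 0}
	fmt.Println(a == b, indexIDs[b])
	// Output: true 0
}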

View File

@ -1,42 +0,0 @@
package fieldnames
import (
"testing"
"google.golang.org/protobuf/reflect/protoreflect"
"gotest.tools/v3/assert"
)
func TestFieldNames(t *testing.T) {
names := []protoreflect.Name{"a", "b", "c"}
abc := "a,b,c"
f := CommaSeparatedFieldNames(abc)
assert.Equal(t, FieldNames{abc}, f)
assert.DeepEqual(t, names, f.Names())
assert.Equal(t, abc, f.String())
f = CommaSeparatedFieldNames("a, b ,c")
assert.Equal(t, FieldNames{abc}, f)
assert.DeepEqual(t, names, f.Names())
assert.Equal(t, abc, f.String())
// empty okay
f = CommaSeparatedFieldNames("")
assert.Equal(t, FieldNames{""}, f)
assert.Equal(t, 0, len(f.Names()))
assert.Equal(t, "", f.String())
f = FieldsFromNames(names)
assert.Equal(t, FieldNames{abc}, f)
assert.DeepEqual(t, names, f.Names())
assert.Equal(t, abc, f.String())
// empty okay
f = FieldsFromNames([]protoreflect.Name{})
assert.Equal(t, FieldNames{""}, f)
f = FieldsFromNames(nil)
assert.Equal(t, FieldNames{""}, f)
assert.Equal(t, 0, len(f.Names()))
assert.Equal(t, "", f.String())
}

View File

@ -1,40 +0,0 @@
package listinternal
import (
"errors"
"google.golang.org/protobuf/proto"
)
// Options is the internal list options struct.
type Options struct {
Reverse, CountTotal bool
Offset, Limit, DefaultLimit uint64
Cursor []byte
Filter func(proto.Message) bool
}
func (o Options) Validate() error {
if len(o.Cursor) != 0 {
if o.Offset > 0 {
return errors.New("can only specify one of cursor or offset")
}
}
return nil
}
type Option interface {
apply(*Options)
}
type FuncOption func(*Options)
func (f FuncOption) apply(options *Options) {
f(options)
}
func ApplyOptions(opts *Options, funcOpts []Option) {
for _, opt := range funcOpts {
opt.apply(opts)
}
}
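
A minimal sketch of how list options are composed and applied, written as if it sat next to the code above in package listinternal (it needs "fmt"). The option constructors are hypothetical stand-ins for what the public ormlist package exposes.

func ExampleApplyOptions() {
	reverse := FuncOption(func(o *Options) { o.Reverse = true })
	limit := func(n uint64) Option {
		return FuncOption(func(o *Options) { o.Limit = n })
	}

	opts := &Options{DefaultLimit: 100}
	ApplyOptions(opts, []Option{reverse, limit(10)})
	fmt.Println(opts.Reverse, opts.Limit, opts.Validate())
	// Output: true 10 <nil>
}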

View File

@ -1,93 +0,0 @@
package stablejson
import (
"bytes"
"fmt"
"google.golang.org/protobuf/proto"
"google.golang.org/protobuf/reflect/protopath"
"google.golang.org/protobuf/reflect/protorange"
"google.golang.org/protobuf/reflect/protoreflect"
)
// Marshal marshals the provided message to JSON with a stable ordering based
// on ascending field numbers.
func Marshal(message proto.Message) ([]byte, error) {
buf := &bytes.Buffer{}
firstStack := []bool{true}
err := protorange.Options{
Stable: true,
}.Range(message.ProtoReflect(),
func(p protopath.Values) error {
// Starting printing the value.
if !firstStack[len(firstStack)-1] {
_, _ = fmt.Fprintf(buf, ",")
}
firstStack[len(firstStack)-1] = false
// Print the key.
var fd protoreflect.FieldDescriptor
last := p.Index(-1)
beforeLast := p.Index(-2)
switch last.Step.Kind() {
case protopath.FieldAccessStep:
fd = last.Step.FieldDescriptor()
_, _ = fmt.Fprintf(buf, "%q:", fd.Name())
case protopath.ListIndexStep:
fd = beforeLast.Step.FieldDescriptor() // lists always appear in the context of a repeated field
case protopath.MapIndexStep:
fd = beforeLast.Step.FieldDescriptor() // maps always appear in the context of a repeated field
_, _ = fmt.Fprintf(buf, "%v:", last.Step.MapIndex().Interface())
case protopath.AnyExpandStep:
_, _ = fmt.Fprintf(buf, `"@type":%q`, last.Value.Message().Descriptor().FullName())
return nil
case protopath.UnknownAccessStep:
_, _ = fmt.Fprintf(buf, "?: ")
}
switch v := last.Value.Interface().(type) {
case protoreflect.Message:
_, _ = fmt.Fprintf(buf, "{")
firstStack = append(firstStack, true)
case protoreflect.List:
_, _ = fmt.Fprintf(buf, "[")
firstStack = append(firstStack, true)
case protoreflect.Map:
_, _ = fmt.Fprintf(buf, "{")
firstStack = append(firstStack, true)
case protoreflect.EnumNumber:
var ev protoreflect.EnumValueDescriptor
if fd != nil {
ev = fd.Enum().Values().ByNumber(v)
}
if ev != nil {
_, _ = fmt.Fprintf(buf, "%v", ev.Name())
} else {
_, _ = fmt.Fprintf(buf, "%v", v)
}
case string, []byte:
_, _ = fmt.Fprintf(buf, "%q", v)
default:
_, _ = fmt.Fprintf(buf, "%v", v)
}
return nil
},
func(p protopath.Values) error {
last := p.Index(-1)
switch last.Value.Interface().(type) {
case protoreflect.Message:
if last.Step.Kind() != protopath.AnyExpandStep {
_, _ = fmt.Fprintf(buf, "}")
}
case protoreflect.List:
_, _ = fmt.Fprintf(buf, "]")
firstStack = firstStack[:len(firstStack)-1]
case protoreflect.Map:
_, _ = fmt.Fprintf(buf, "}")
firstStack = firstStack[:len(firstStack)-1]
}
return nil
},
)
return buf.Bytes(), err
}

View File

@ -1,37 +0,0 @@
package stablejson_test
import (
"testing"
"github.com/cosmos/cosmos-proto/anyutil"
"github.com/stretchr/testify/require"
"google.golang.org/protobuf/types/known/anypb"
bankv1beta1 "cosmossdk.io/api/cosmos/bank/v1beta1"
basev1beta1 "cosmossdk.io/api/cosmos/base/v1beta1"
txv1beta1 "cosmossdk.io/api/cosmos/tx/v1beta1"
"cosmossdk.io/orm/internal/stablejson"
)
func TestStableJSON(t *testing.T) {
msg, err := anyutil.New(&bankv1beta1.MsgSend{
FromAddress: "foo213325",
ToAddress: "foo32t5sdfh",
Amount: []*basev1beta1.Coin{
{
Denom: "bar",
Amount: "1234",
},
{
Denom: "baz",
Amount: "321",
},
},
})
require.NoError(t, err)
bz, err := stablejson.Marshal(&txv1beta1.TxBody{Messages: []*anypb.Any{msg}})
require.NoError(t, err)
require.Equal(t,
`{"messages":[{"@type":"cosmos.bank.v1beta1.MsgSend","from_address":"foo213325","to_address":"foo32t5sdfh","amount":[{"denom":"bar","amount":"1234"},{"denom":"baz","amount":"321"}]}]}`,
string(bz))
}

View File

@ -1,38 +0,0 @@
package testkv
import (
"bytes"
"gotest.tools/v3/assert"
"cosmossdk.io/core/store"
"cosmossdk.io/orm/model/ormtable"
)
func AssertBackendsEqual(t assert.TestingT, b1, b2 ormtable.Backend) {
it1, err := b1.CommitmentStoreReader().Iterator(nil, nil)
assert.NilError(t, err)
it2, err := b2.CommitmentStoreReader().Iterator(nil, nil)
assert.NilError(t, err)
AssertIteratorsEqual(t, it1, it2)
it1, err = b1.IndexStoreReader().Iterator(nil, nil)
assert.NilError(t, err)
it2, err = b2.IndexStoreReader().Iterator(nil, nil)
assert.NilError(t, err)
AssertIteratorsEqual(t, it1, it2)
}
func AssertIteratorsEqual(t assert.TestingT, it1, it2 store.Iterator) {
for it1.Valid() {
assert.Assert(t, it2.Valid())
assert.Assert(t, bytes.Equal(it1.Key(), it2.Key()))
assert.Assert(t, bytes.Equal(it1.Value(), it2.Value()))
it1.Next()
it2.Next()
}
}

View File

@ -1,100 +0,0 @@
package testkv
import (
"cosmossdk.io/core/store"
)
type TestStore struct {
Db store.KVStoreWithBatch
}
func (ts TestStore) Get(bz []byte) ([]byte, error) {
return ts.Db.Get(bz)
}
// Has checks if a key exists.
func (ts TestStore) Has(key []byte) (bool, error) {
return ts.Db.Has(key)
}
func (ts TestStore) Set(k, v []byte) error {
return ts.Db.Set(k, v)
}
// Delete deletes the key, or does nothing if the key does not exist.
// CONTRACT: key readonly []byte
func (ts TestStore) Delete(bz []byte) error {
return ts.Db.Delete(bz)
}
func (ts TestStore) Iterator(start, end []byte) (store.Iterator, error) {
itr, err := ts.Db.Iterator(start, end)
return IteratorWrapper{itr: itr}, err
}
func (ts TestStore) ReverseIterator(start, end []byte) (store.Iterator, error) {
itr, err := ts.Db.ReverseIterator(start, end)
return itr, err
}
// Close closes the database connection.
func (ts TestStore) Close() error {
return ts.Db.Close()
}
// NewBatch creates a batch for atomic updates. The caller must call Batch.Close.
func (ts TestStore) NewBatch() store.Batch {
return ts.Db.NewBatch()
}
// NewBatchWithSize creates a new batch for atomic updates with a pre-allocated size.
// It does the same thing as NewBatch if the batch implementation doesn't support pre-allocation.
func (ts TestStore) NewBatchWithSize(i int) store.Batch {
return ts.Db.NewBatchWithSize(i)
}
var _ store.Iterator = IteratorWrapper{}
type IteratorWrapper struct {
itr store.Iterator
}
// Domain returns the start (inclusive) and end (exclusive) limits of the iterator.
// CONTRACT: start, end readonly []byte
func (iw IteratorWrapper) Domain() (start, end []byte) {
return iw.itr.Domain()
}
// Valid returns whether the current iterator is valid. Once invalid, the Iterator remains
// invalid forever.
func (iw IteratorWrapper) Valid() bool {
return iw.itr.Valid()
}
// Next moves the iterator to the next key in the database, as defined by order of iteration.
// If Valid returns false, this method will panic.
func (iw IteratorWrapper) Next() {
iw.itr.Next()
}
// Key returns the key at the current position. Panics if the iterator is invalid.
// CONTRACT: key readonly []byte
func (iw IteratorWrapper) Key() (key []byte) {
return iw.itr.Key()
}
// Value returns the value at the current position. Panics if the iterator is invalid.
// CONTRACT: value readonly []byte
func (iw IteratorWrapper) Value() (value []byte) {
return iw.itr.Value()
}
// Error returns the last error encountered by the iterator, if any.
func (iw IteratorWrapper) Error() error {
return iw.itr.Error()
}
// Close closes the iterator, releasing any allocated resources.
func (iw IteratorWrapper) Close() error {
return iw.itr.Close()
}

View File

@ -1,330 +0,0 @@
package testkv
import (
"context"
"fmt"
"google.golang.org/protobuf/proto"
"cosmossdk.io/core/store"
"cosmossdk.io/orm/encoding/ormkv"
"cosmossdk.io/orm/internal/stablejson"
"cosmossdk.io/orm/model/ormtable"
"cosmossdk.io/orm/types/kv"
)
// Debugger is an interface that handles debug info from the debug store wrapper.
type Debugger interface {
// Log logs a single log message.
Log(string)
// Decode decodes a key-value entry into a debug string.
Decode(key, value []byte) string
}
// NewDebugBackend wraps both stores from a Backend with a debugger.
func NewDebugBackend(backend ormtable.Backend, debugger Debugger) ormtable.Backend {
hooks := debugHooks{
debugger: debugger,
validateHooks: backend.ValidateHooks(),
writeHooks: backend.WriteHooks(),
}
return ormtable.NewBackend(ormtable.BackendOptions{
CommitmentStore: NewDebugStore(backend.CommitmentStore(), debugger, "commit"),
IndexStore: NewDebugStore(backend.IndexStore(), debugger, "index"),
ValidateHooks: hooks,
WriteHooks: hooks,
})
}
type debugStore struct {
store kv.Store
debugger Debugger
storeName string
}
// NewDebugStore wraps the store with the debugger instance returning a debug store wrapper.
func NewDebugStore(store kv.Store, debugger Debugger, storeName string) kv.Store {
return &debugStore{store: store, debugger: debugger, storeName: storeName}
}
func (t debugStore) Get(key []byte) ([]byte, error) {
val, err := t.store.Get(key)
if err != nil {
if t.debugger != nil {
t.debugger.Log(fmt.Sprintf("ERR on GET %s: %v", t.debugger.Decode(key, nil), err))
}
return nil, err
}
if t.debugger != nil {
t.debugger.Log(fmt.Sprintf("GET %x %x", key, val))
t.debugger.Log(fmt.Sprintf(" %s", t.debugger.Decode(key, val)))
}
return val, nil
}
func (t debugStore) Has(key []byte) (bool, error) {
has, err := t.store.Has(key)
if err != nil {
if t.debugger != nil {
t.debugger.Log(fmt.Sprintf("ERR on HAS %s: %v", t.debugger.Decode(key, nil), err))
}
return has, err
}
if t.debugger != nil {
t.debugger.Log(fmt.Sprintf("HAS %x", key))
t.debugger.Log(fmt.Sprintf(" %s", t.debugger.Decode(key, nil)))
}
return has, nil
}
func (t debugStore) Iterator(start, end []byte) (store.Iterator, error) {
if t.debugger != nil {
t.debugger.Log(fmt.Sprintf("ITERATOR %x -> %x", start, end))
}
it, err := t.store.Iterator(start, end)
if err != nil {
return nil, err
}
return &debugIterator{
iterator: it,
storeName: t.storeName,
debugger: t.debugger,
}, nil
}
func (t debugStore) ReverseIterator(start, end []byte) (store.Iterator, error) {
if t.debugger != nil {
t.debugger.Log(fmt.Sprintf("ITERATOR %x <- %x", start, end))
}
it, err := t.store.ReverseIterator(start, end)
if err != nil {
return nil, err
}
return &debugIterator{
iterator: it,
storeName: t.storeName,
debugger: t.debugger,
}, nil
}
func (t debugStore) Set(key, value []byte) error {
if t.debugger != nil {
t.debugger.Log(fmt.Sprintf("SET %x %x", key, value))
t.debugger.Log(fmt.Sprintf(" %s", t.debugger.Decode(key, value)))
}
err := t.store.Set(key, value)
if err != nil {
if t.debugger != nil {
t.debugger.Log(fmt.Sprintf("ERR on SET %s: %v", t.debugger.Decode(key, value), err))
}
return err
}
return nil
}
func (t debugStore) Delete(key []byte) error {
if t.debugger != nil {
t.debugger.Log(fmt.Sprintf("DEL %x", key))
t.debugger.Log(fmt.Sprintf("DEL %s", t.debugger.Decode(key, nil)))
}
err := t.store.Delete(key)
if err != nil {
if t.debugger != nil {
t.debugger.Log(fmt.Sprintf("ERR on SET %s: %v", t.debugger.Decode(key, nil), err))
}
return err
}
return nil
}
var _ kv.Store = &debugStore{}
type debugIterator struct {
iterator store.Iterator
storeName string
debugger Debugger
}
func (d debugIterator) Domain() (start, end []byte) {
start, end = d.iterator.Domain()
d.debugger.Log(fmt.Sprintf(" DOMAIN %x -> %x", start, end))
return start, end
}
func (d debugIterator) Valid() bool {
valid := d.iterator.Valid()
d.debugger.Log(fmt.Sprintf(" VALID %t", valid))
return valid
}
func (d debugIterator) Next() {
d.debugger.Log(" NEXT")
d.iterator.Next()
}
func (d debugIterator) Key() (key []byte) {
key = d.iterator.Key()
value := d.iterator.Value()
d.debugger.Log(fmt.Sprintf(" KEY %x %x", key, value))
d.debugger.Log(fmt.Sprintf(" %s", d.debugger.Decode(key, value)))
return key
}
func (d debugIterator) Value() (value []byte) {
return d.iterator.Value()
}
func (d debugIterator) Error() error {
err := d.iterator.Error()
d.debugger.Log(fmt.Sprintf(" ERR %+v", err))
return err
}
func (d debugIterator) Close() error {
d.debugger.Log(" CLOSE")
return d.iterator.Close()
}
var _ store.Iterator = &debugIterator{}
// EntryCodecDebugger is a Debugger instance that uses an EntryCodec and Print
// function for debugging.
type EntryCodecDebugger struct {
EntryCodec ormkv.EntryCodec
Print func(string)
}
func (d *EntryCodecDebugger) Log(s string) {
if d.Print != nil {
d.Print(s)
} else {
fmt.Println(s)
}
}
func (d *EntryCodecDebugger) Decode(key, value []byte) string {
entry, err := d.EntryCodec.DecodeEntry(key, value)
if err != nil {
return fmt.Sprintf("ERR:%v", err)
}
return entry.String()
}
type debugHooks struct {
debugger Debugger
validateHooks ormtable.ValidateHooks
writeHooks ormtable.WriteHooks
}
func (d debugHooks) ValidateInsert(context context.Context, message proto.Message) error {
jsonBz, err := stablejson.Marshal(message)
if err != nil {
return err
}
d.debugger.Log(fmt.Sprintf(
"ORM BEFORE INSERT %s %s",
message.ProtoReflect().Descriptor().FullName(),
jsonBz,
))
if d.validateHooks != nil {
return d.validateHooks.ValidateInsert(context, message)
}
return nil
}
func (d debugHooks) ValidateUpdate(ctx context.Context, existing, new proto.Message) error {
existingJSON, err := stablejson.Marshal(existing)
if err != nil {
return err
}
newJSON, err := stablejson.Marshal(new)
if err != nil {
return err
}
d.debugger.Log(fmt.Sprintf(
"ORM BEFORE UPDATE %s %s -> %s",
existing.ProtoReflect().Descriptor().FullName(),
existingJSON,
newJSON,
))
if d.validateHooks != nil {
return d.validateHooks.ValidateUpdate(ctx, existing, new)
}
return nil
}
func (d debugHooks) ValidateDelete(ctx context.Context, message proto.Message) error {
jsonBz, err := stablejson.Marshal(message)
if err != nil {
return err
}
d.debugger.Log(fmt.Sprintf(
"ORM BEFORE DELETE %s %s",
message.ProtoReflect().Descriptor().FullName(),
jsonBz,
))
if d.validateHooks != nil {
return d.validateHooks.ValidateDelete(ctx, message)
}
return nil
}
func (d debugHooks) OnInsert(ctx context.Context, message proto.Message) {
jsonBz, err := stablejson.Marshal(message)
if err != nil {
panic(err)
}
d.debugger.Log(fmt.Sprintf(
"ORM AFTER INSERT %s %s",
message.ProtoReflect().Descriptor().FullName(),
jsonBz,
))
if d.writeHooks != nil {
d.writeHooks.OnInsert(ctx, message)
}
}
func (d debugHooks) OnUpdate(ctx context.Context, existing, new proto.Message) {
existingJSON, err := stablejson.Marshal(existing)
if err != nil {
panic(err)
}
newJSON, err := stablejson.Marshal(new)
if err != nil {
panic(err)
}
d.debugger.Log(fmt.Sprintf(
"ORM AFTER UPDATE %s %s -> %s",
existing.ProtoReflect().Descriptor().FullName(),
existingJSON,
newJSON,
))
if d.writeHooks != nil {
d.writeHooks.OnUpdate(ctx, existing, new)
}
}
func (d debugHooks) OnDelete(ctx context.Context, message proto.Message) {
jsonBz, err := stablejson.Marshal(message)
if err != nil {
panic(err)
}
d.debugger.Log(fmt.Sprintf(
"ORM AFTER DELETE %s %s",
message.ProtoReflect().Descriptor().FullName(),
jsonBz,
))
if d.writeHooks != nil {
d.writeHooks.OnDelete(ctx, message)
}
}
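
A small convenience sketch, assuming it sits alongside the helpers above in package testkv: it wraps any backend so every get, set, delete and iteration is decoded through the schema's ormkv.EntryCodec and forwarded to a log function.

func NewLoggingBackend(backend ormtable.Backend, codec ormkv.EntryCodec, log func(string)) ormtable.Backend {
	return NewDebugBackend(backend, &EntryCodecDebugger{
		EntryCodec: codec,
		Print:      log, // e.g. func(s string) { t.Log(s) } inside a test
	})
}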

View File

@ -1,19 +0,0 @@
package testkv
import (
"testing"
dbm "github.com/cosmos/cosmos-db"
"gotest.tools/v3/assert"
"cosmossdk.io/orm/model/ormtable"
)
func NewGoLevelDBBackend(tb testing.TB) ormtable.Backend {
tb.Helper()
db, err := dbm.NewGoLevelDB("test", tb.TempDir(), nil)
assert.NilError(tb, err)
return ormtable.NewBackend(ormtable.BackendOptions{
CommitmentStore: TestStore{Db: db},
})
}

View File

@ -1,26 +0,0 @@
package testkv
import (
coretesting "cosmossdk.io/core/testing"
"cosmossdk.io/orm/model/ormtable"
)
// NewSplitMemBackend returns a Backend instance
// which uses two separate memory stores to simulate behavior when there
// are really two separate backing stores.
func NewSplitMemBackend() ormtable.Backend {
return ormtable.NewBackend(ormtable.BackendOptions{
CommitmentStore: TestStore{coretesting.NewMemDB()},
IndexStore: TestStore{coretesting.NewMemDB()},
})
}
// NewSharedMemBackend returns a Backend instance
// which uses a single backing memory store to simulate legacy scenarios
// where only a single KV-store is available to modules.
func NewSharedMemBackend() ormtable.Backend {
return ormtable.NewBackend(ormtable.BackendOptions{
CommitmentStore: TestStore{coretesting.NewMemDB()},
// commit store is automatically used as the index store
})
}

View File

@ -1,297 +0,0 @@
// Code generated by protoc-gen-go-cosmos-orm. DO NOT EDIT.
package testpb
import (
context "context"
ormlist "cosmossdk.io/orm/model/ormlist"
ormtable "cosmossdk.io/orm/model/ormtable"
ormerrors "cosmossdk.io/orm/types/ormerrors"
)
type BalanceTable interface {
Insert(ctx context.Context, balance *Balance) error
Update(ctx context.Context, balance *Balance) error
Save(ctx context.Context, balance *Balance) error
Delete(ctx context.Context, balance *Balance) error
Has(ctx context.Context, address string, denom string) (found bool, err error)
// Get returns nil and an error which responds true to ormerrors.IsNotFound() if the record was not found.
Get(ctx context.Context, address string, denom string) (*Balance, error)
List(ctx context.Context, prefixKey BalanceIndexKey, opts ...ormlist.Option) (BalanceIterator, error)
ListRange(ctx context.Context, from, to BalanceIndexKey, opts ...ormlist.Option) (BalanceIterator, error)
DeleteBy(ctx context.Context, prefixKey BalanceIndexKey) error
DeleteRange(ctx context.Context, from, to BalanceIndexKey) error
doNotImplement()
}
type BalanceIterator struct {
ormtable.Iterator
}
func (i BalanceIterator) Value() (*Balance, error) {
var balance Balance
err := i.UnmarshalMessage(&balance)
return &balance, err
}
type BalanceIndexKey interface {
id() uint32
values() []interface{}
balanceIndexKey()
}
// primary key starting index..
type BalancePrimaryKey = BalanceAddressDenomIndexKey
type BalanceAddressDenomIndexKey struct {
vs []interface{}
}
func (x BalanceAddressDenomIndexKey) id() uint32 { return 0 }
func (x BalanceAddressDenomIndexKey) values() []interface{} { return x.vs }
func (x BalanceAddressDenomIndexKey) balanceIndexKey() {}
func (this BalanceAddressDenomIndexKey) WithAddress(address string) BalanceAddressDenomIndexKey {
this.vs = []interface{}{address}
return this
}
func (this BalanceAddressDenomIndexKey) WithAddressDenom(address string, denom string) BalanceAddressDenomIndexKey {
this.vs = []interface{}{address, denom}
return this
}
type BalanceDenomIndexKey struct {
vs []interface{}
}
func (x BalanceDenomIndexKey) id() uint32 { return 1 }
func (x BalanceDenomIndexKey) values() []interface{} { return x.vs }
func (x BalanceDenomIndexKey) balanceIndexKey() {}
func (this BalanceDenomIndexKey) WithDenom(denom string) BalanceDenomIndexKey {
this.vs = []interface{}{denom}
return this
}
type balanceTable struct {
table ormtable.Table
}
func (this balanceTable) Insert(ctx context.Context, balance *Balance) error {
return this.table.Insert(ctx, balance)
}
func (this balanceTable) Update(ctx context.Context, balance *Balance) error {
return this.table.Update(ctx, balance)
}
func (this balanceTable) Save(ctx context.Context, balance *Balance) error {
return this.table.Save(ctx, balance)
}
func (this balanceTable) Delete(ctx context.Context, balance *Balance) error {
return this.table.Delete(ctx, balance)
}
func (this balanceTable) Has(ctx context.Context, address string, denom string) (found bool, err error) {
return this.table.PrimaryKey().Has(ctx, address, denom)
}
func (this balanceTable) Get(ctx context.Context, address string, denom string) (*Balance, error) {
var balance Balance
found, err := this.table.PrimaryKey().Get(ctx, &balance, address, denom)
if err != nil {
return nil, err
}
if !found {
return nil, ormerrors.NotFound
}
return &balance, nil
}
func (this balanceTable) List(ctx context.Context, prefixKey BalanceIndexKey, opts ...ormlist.Option) (BalanceIterator, error) {
it, err := this.table.GetIndexByID(prefixKey.id()).List(ctx, prefixKey.values(), opts...)
return BalanceIterator{it}, err
}
func (this balanceTable) ListRange(ctx context.Context, from, to BalanceIndexKey, opts ...ormlist.Option) (BalanceIterator, error) {
it, err := this.table.GetIndexByID(from.id()).ListRange(ctx, from.values(), to.values(), opts...)
return BalanceIterator{it}, err
}
func (this balanceTable) DeleteBy(ctx context.Context, prefixKey BalanceIndexKey) error {
return this.table.GetIndexByID(prefixKey.id()).DeleteBy(ctx, prefixKey.values()...)
}
func (this balanceTable) DeleteRange(ctx context.Context, from, to BalanceIndexKey) error {
return this.table.GetIndexByID(from.id()).DeleteRange(ctx, from.values(), to.values())
}
func (this balanceTable) doNotImplement() {}
var _ BalanceTable = balanceTable{}
func NewBalanceTable(db ormtable.Schema) (BalanceTable, error) {
table := db.GetTable(&Balance{})
if table == nil {
return nil, ormerrors.TableNotFound.Wrap(string((&Balance{}).ProtoReflect().Descriptor().FullName()))
}
return balanceTable{table}, nil
}
type SupplyTable interface {
Insert(ctx context.Context, supply *Supply) error
Update(ctx context.Context, supply *Supply) error
Save(ctx context.Context, supply *Supply) error
Delete(ctx context.Context, supply *Supply) error
Has(ctx context.Context, denom string) (found bool, err error)
// Get returns nil and an error which responds true to ormerrors.IsNotFound() if the record was not found.
Get(ctx context.Context, denom string) (*Supply, error)
List(ctx context.Context, prefixKey SupplyIndexKey, opts ...ormlist.Option) (SupplyIterator, error)
ListRange(ctx context.Context, from, to SupplyIndexKey, opts ...ormlist.Option) (SupplyIterator, error)
DeleteBy(ctx context.Context, prefixKey SupplyIndexKey) error
DeleteRange(ctx context.Context, from, to SupplyIndexKey) error
doNotImplement()
}
type SupplyIterator struct {
ormtable.Iterator
}
func (i SupplyIterator) Value() (*Supply, error) {
var supply Supply
err := i.UnmarshalMessage(&supply)
return &supply, err
}
type SupplyIndexKey interface {
id() uint32
values() []interface{}
supplyIndexKey()
}
// primary key starting index..
type SupplyPrimaryKey = SupplyDenomIndexKey
type SupplyDenomIndexKey struct {
vs []interface{}
}
func (x SupplyDenomIndexKey) id() uint32 { return 0 }
func (x SupplyDenomIndexKey) values() []interface{} { return x.vs }
func (x SupplyDenomIndexKey) supplyIndexKey() {}
func (this SupplyDenomIndexKey) WithDenom(denom string) SupplyDenomIndexKey {
this.vs = []interface{}{denom}
return this
}
type supplyTable struct {
table ormtable.Table
}
func (this supplyTable) Insert(ctx context.Context, supply *Supply) error {
return this.table.Insert(ctx, supply)
}
func (this supplyTable) Update(ctx context.Context, supply *Supply) error {
return this.table.Update(ctx, supply)
}
func (this supplyTable) Save(ctx context.Context, supply *Supply) error {
return this.table.Save(ctx, supply)
}
func (this supplyTable) Delete(ctx context.Context, supply *Supply) error {
return this.table.Delete(ctx, supply)
}
func (this supplyTable) Has(ctx context.Context, denom string) (found bool, err error) {
return this.table.PrimaryKey().Has(ctx, denom)
}
func (this supplyTable) Get(ctx context.Context, denom string) (*Supply, error) {
var supply Supply
found, err := this.table.PrimaryKey().Get(ctx, &supply, denom)
if err != nil {
return nil, err
}
if !found {
return nil, ormerrors.NotFound
}
return &supply, nil
}
func (this supplyTable) List(ctx context.Context, prefixKey SupplyIndexKey, opts ...ormlist.Option) (SupplyIterator, error) {
it, err := this.table.GetIndexByID(prefixKey.id()).List(ctx, prefixKey.values(), opts...)
return SupplyIterator{it}, err
}
func (this supplyTable) ListRange(ctx context.Context, from, to SupplyIndexKey, opts ...ormlist.Option) (SupplyIterator, error) {
it, err := this.table.GetIndexByID(from.id()).ListRange(ctx, from.values(), to.values(), opts...)
return SupplyIterator{it}, err
}
func (this supplyTable) DeleteBy(ctx context.Context, prefixKey SupplyIndexKey) error {
return this.table.GetIndexByID(prefixKey.id()).DeleteBy(ctx, prefixKey.values()...)
}
func (this supplyTable) DeleteRange(ctx context.Context, from, to SupplyIndexKey) error {
return this.table.GetIndexByID(from.id()).DeleteRange(ctx, from.values(), to.values())
}
func (this supplyTable) doNotImplement() {}
var _ SupplyTable = supplyTable{}
func NewSupplyTable(db ormtable.Schema) (SupplyTable, error) {
table := db.GetTable(&Supply{})
if table == nil {
return nil, ormerrors.TableNotFound.Wrap(string((&Supply{}).ProtoReflect().Descriptor().FullName()))
}
return supplyTable{table}, nil
}
type BankStore interface {
BalanceTable() BalanceTable
SupplyTable() SupplyTable
doNotImplement()
}
type bankStore struct {
balance BalanceTable
supply SupplyTable
}
func (x bankStore) BalanceTable() BalanceTable {
return x.balance
}
func (x bankStore) SupplyTable() SupplyTable {
return x.supply
}
func (bankStore) doNotImplement() {}
var _ BankStore = bankStore{}
func NewBankStore(db ormtable.Schema) (BankStore, error) {
balanceTable, err := NewBalanceTable(db)
if err != nil {
return nil, err
}
supplyTable, err := NewSupplyTable(db)
if err != nil {
return nil, err
}
return bankStore{
balanceTable,
supplyTable,
}, nil
}
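
A minimal sketch of using the generated BalanceTable API from the same package. How the ormtable.Schema handed to NewBalanceTable is built (normally through an ormdb module database) is out of scope and assumed here; ormerrors.IsNotFound is the error predicate referenced in the Get doc comment above.

func creditBalance(ctx context.Context, balances BalanceTable, addr, denom string, amount uint64) error {
	bal, err := balances.Get(ctx, addr, denom)
	switch {
	case err == nil:
		bal.Amount += amount
	case ormerrors.IsNotFound(err):
		// No existing row for this (address, denom) pair yet.
		bal = &Balance{Address: addr, Denom: denom, Amount: amount}
	default:
		return err
	}
	return balances.Save(ctx, bal)
}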

View File

@ -1,308 +0,0 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.28.1
// protoc (unknown)
// source: testpb/bank.proto
package testpb
import (
_ "cosmossdk.io/api/cosmos/app/v1alpha1"
_ "cosmossdk.io/api/cosmos/orm/v1"
_ "cosmossdk.io/api/cosmos/orm/v1alpha1"
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
reflect "reflect"
sync "sync"
)
const (
// Verify that this generated code is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
// Verify that runtime/protoimpl is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)
// Module is a test module for demonstrating how to use the ORM with appconfig.
type Module struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
}
func (x *Module) Reset() {
*x = Module{}
if protoimpl.UnsafeEnabled {
mi := &file_testpb_bank_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *Module) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*Module) ProtoMessage() {}
func (x *Module) ProtoReflect() protoreflect.Message {
mi := &file_testpb_bank_proto_msgTypes[0]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use Module.ProtoReflect.Descriptor instead.
func (*Module) Descriptor() ([]byte, []int) {
return file_testpb_bank_proto_rawDescGZIP(), []int{0}
}
type Balance struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Address string `protobuf:"bytes,1,opt,name=address,proto3" json:"address,omitempty"`
Denom string `protobuf:"bytes,2,opt,name=denom,proto3" json:"denom,omitempty"`
Amount uint64 `protobuf:"varint,3,opt,name=amount,proto3" json:"amount,omitempty"`
}
func (x *Balance) Reset() {
*x = Balance{}
if protoimpl.UnsafeEnabled {
mi := &file_testpb_bank_proto_msgTypes[1]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *Balance) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*Balance) ProtoMessage() {}
func (x *Balance) ProtoReflect() protoreflect.Message {
mi := &file_testpb_bank_proto_msgTypes[1]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use Balance.ProtoReflect.Descriptor instead.
func (*Balance) Descriptor() ([]byte, []int) {
return file_testpb_bank_proto_rawDescGZIP(), []int{1}
}
func (x *Balance) GetAddress() string {
if x != nil {
return x.Address
}
return ""
}
func (x *Balance) GetDenom() string {
if x != nil {
return x.Denom
}
return ""
}
func (x *Balance) GetAmount() uint64 {
if x != nil {
return x.Amount
}
return 0
}
type Supply struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Denom string `protobuf:"bytes,1,opt,name=denom,proto3" json:"denom,omitempty"`
Amount uint64 `protobuf:"varint,2,opt,name=amount,proto3" json:"amount,omitempty"`
}
func (x *Supply) Reset() {
*x = Supply{}
if protoimpl.UnsafeEnabled {
mi := &file_testpb_bank_proto_msgTypes[2]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *Supply) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*Supply) ProtoMessage() {}
func (x *Supply) ProtoReflect() protoreflect.Message {
mi := &file_testpb_bank_proto_msgTypes[2]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use Supply.ProtoReflect.Descriptor instead.
func (*Supply) Descriptor() ([]byte, []int) {
return file_testpb_bank_proto_rawDescGZIP(), []int{2}
}
func (x *Supply) GetDenom() string {
if x != nil {
return x.Denom
}
return ""
}
func (x *Supply) GetAmount() uint64 {
if x != nil {
return x.Amount
}
return 0
}
var File_testpb_bank_proto protoreflect.FileDescriptor
var file_testpb_bank_proto_rawDesc = []byte{
0x0a, 0x11, 0x74, 0x65, 0x73, 0x74, 0x70, 0x62, 0x2f, 0x62, 0x61, 0x6e, 0x6b, 0x2e, 0x70, 0x72,
0x6f, 0x74, 0x6f, 0x12, 0x06, 0x74, 0x65, 0x73, 0x74, 0x70, 0x62, 0x1a, 0x17, 0x63, 0x6f, 0x73,
0x6d, 0x6f, 0x73, 0x2f, 0x6f, 0x72, 0x6d, 0x2f, 0x76, 0x31, 0x2f, 0x6f, 0x72, 0x6d, 0x2e, 0x70,
0x72, 0x6f, 0x74, 0x6f, 0x1a, 0x20, 0x63, 0x6f, 0x73, 0x6d, 0x6f, 0x73, 0x2f, 0x6f, 0x72, 0x6d,
0x2f, 0x76, 0x31, 0x61, 0x6c, 0x70, 0x68, 0x61, 0x31, 0x2f, 0x73, 0x63, 0x68, 0x65, 0x6d, 0x61,
0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x1a, 0x20, 0x63, 0x6f, 0x73, 0x6d, 0x6f, 0x73, 0x2f, 0x61,
0x70, 0x70, 0x2f, 0x76, 0x31, 0x61, 0x6c, 0x70, 0x68, 0x61, 0x31, 0x2f, 0x6d, 0x6f, 0x64, 0x75,
0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x22, 0x50, 0x0a, 0x06, 0x4d, 0x6f, 0x64, 0x75,
0x6c, 0x65, 0x3a, 0x46, 0xba, 0xc0, 0x96, 0xda, 0x01, 0x23, 0x0a, 0x21, 0x67, 0x69, 0x74, 0x68,
0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x63, 0x6f, 0x73, 0x6d, 0x6f, 0x73, 0x2f, 0x6f, 0x72,
0x6d, 0x2f, 0x6d, 0x6f, 0x64, 0x65, 0x6c, 0x2f, 0x6f, 0x72, 0x6d, 0x64, 0x62, 0x82, 0x9f, 0xd3,
0x8e, 0x03, 0x17, 0x0a, 0x15, 0x08, 0x01, 0x12, 0x11, 0x74, 0x65, 0x73, 0x74, 0x70, 0x62, 0x2f,
0x62, 0x61, 0x6e, 0x6b, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x22, 0x77, 0x0a, 0x07, 0x42, 0x61,
0x6c, 0x61, 0x6e, 0x63, 0x65, 0x12, 0x18, 0x0a, 0x07, 0x61, 0x64, 0x64, 0x72, 0x65, 0x73, 0x73,
0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x07, 0x61, 0x64, 0x64, 0x72, 0x65, 0x73, 0x73, 0x12,
0x14, 0x0a, 0x05, 0x64, 0x65, 0x6e, 0x6f, 0x6d, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x05,
0x64, 0x65, 0x6e, 0x6f, 0x6d, 0x12, 0x16, 0x0a, 0x06, 0x61, 0x6d, 0x6f, 0x75, 0x6e, 0x74, 0x18,
0x03, 0x20, 0x01, 0x28, 0x04, 0x52, 0x06, 0x61, 0x6d, 0x6f, 0x75, 0x6e, 0x74, 0x3a, 0x24, 0xf2,
0x9e, 0xd3, 0x8e, 0x03, 0x1e, 0x0a, 0x0f, 0x0a, 0x0d, 0x61, 0x64, 0x64, 0x72, 0x65, 0x73, 0x73,
0x2c, 0x64, 0x65, 0x6e, 0x6f, 0x6d, 0x12, 0x09, 0x0a, 0x05, 0x64, 0x65, 0x6e, 0x6f, 0x6d, 0x10,
0x01, 0x18, 0x01, 0x22, 0x49, 0x0a, 0x06, 0x53, 0x75, 0x70, 0x70, 0x6c, 0x79, 0x12, 0x14, 0x0a,
0x05, 0x64, 0x65, 0x6e, 0x6f, 0x6d, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x05, 0x64, 0x65,
0x6e, 0x6f, 0x6d, 0x12, 0x16, 0x0a, 0x06, 0x61, 0x6d, 0x6f, 0x75, 0x6e, 0x74, 0x18, 0x02, 0x20,
0x01, 0x28, 0x04, 0x52, 0x06, 0x61, 0x6d, 0x6f, 0x75, 0x6e, 0x74, 0x3a, 0x11, 0xf2, 0x9e, 0xd3,
0x8e, 0x03, 0x0b, 0x0a, 0x07, 0x0a, 0x05, 0x64, 0x65, 0x6e, 0x6f, 0x6d, 0x18, 0x02, 0x42, 0x71,
0x0a, 0x0a, 0x63, 0x6f, 0x6d, 0x2e, 0x74, 0x65, 0x73, 0x74, 0x70, 0x62, 0x42, 0x09, 0x42, 0x61,
0x6e, 0x6b, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x50, 0x01, 0x5a, 0x20, 0x63, 0x6f, 0x73, 0x6d, 0x6f,
0x73, 0x73, 0x64, 0x6b, 0x2e, 0x69, 0x6f, 0x2f, 0x6f, 0x72, 0x6d, 0x2f, 0x69, 0x6e, 0x74, 0x65,
0x72, 0x6e, 0x61, 0x6c, 0x2f, 0x74, 0x65, 0x73, 0x74, 0x70, 0x62, 0xa2, 0x02, 0x03, 0x54, 0x58,
0x58, 0xaa, 0x02, 0x06, 0x54, 0x65, 0x73, 0x74, 0x70, 0x62, 0xca, 0x02, 0x06, 0x54, 0x65, 0x73,
0x74, 0x70, 0x62, 0xe2, 0x02, 0x12, 0x54, 0x65, 0x73, 0x74, 0x70, 0x62, 0x5c, 0x47, 0x50, 0x42,
0x4d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0xea, 0x02, 0x06, 0x54, 0x65, 0x73, 0x74, 0x70,
0x62, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
}
var (
file_testpb_bank_proto_rawDescOnce sync.Once
file_testpb_bank_proto_rawDescData = file_testpb_bank_proto_rawDesc
)
func file_testpb_bank_proto_rawDescGZIP() []byte {
file_testpb_bank_proto_rawDescOnce.Do(func() {
file_testpb_bank_proto_rawDescData = protoimpl.X.CompressGZIP(file_testpb_bank_proto_rawDescData)
})
return file_testpb_bank_proto_rawDescData
}
var file_testpb_bank_proto_msgTypes = make([]protoimpl.MessageInfo, 3)
var file_testpb_bank_proto_goTypes = []interface{}{
(*Module)(nil), // 0: testpb.Module
(*Balance)(nil), // 1: testpb.Balance
(*Supply)(nil), // 2: testpb.Supply
}
var file_testpb_bank_proto_depIdxs = []int32{
0, // [0:0] is the sub-list for method output_type
0, // [0:0] is the sub-list for method input_type
0, // [0:0] is the sub-list for extension type_name
0, // [0:0] is the sub-list for extension extendee
0, // [0:0] is the sub-list for field type_name
}
func init() { file_testpb_bank_proto_init() }
func file_testpb_bank_proto_init() {
if File_testpb_bank_proto != nil {
return
}
if !protoimpl.UnsafeEnabled {
file_testpb_bank_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*Module); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_testpb_bank_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*Balance); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_testpb_bank_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*Supply); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
}
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_testpb_bank_proto_rawDesc,
NumEnums: 0,
NumMessages: 3,
NumExtensions: 0,
NumServices: 0,
},
GoTypes: file_testpb_bank_proto_goTypes,
DependencyIndexes: file_testpb_bank_proto_depIdxs,
MessageInfos: file_testpb_bank_proto_msgTypes,
}.Build()
File_testpb_bank_proto = out.File
file_testpb_bank_proto_rawDesc = nil
file_testpb_bank_proto_goTypes = nil
file_testpb_bank_proto_depIdxs = nil
}

View File

@ -1,50 +0,0 @@
syntax = "proto3";
package testpb;
import "cosmos/orm/v1/orm.proto";
import "cosmos/orm/v1alpha1/schema.proto";
import "cosmos/app/v1alpha1/module.proto";
// This is a simulated bank schema used for testing.
// Module is a test module for demonstrating how to use the ORM with appconfig.
message Module {
option (cosmos.app.v1alpha1.module) = {
go_import: "github.com/cosmos/orm/model/ormdb"
};
option (cosmos.orm.v1alpha1.module_schema) = {
schema_file: {id: 1 proto_file_name: "testpb/bank.proto"}
};
}
message Balance {
option (cosmos.orm.v1.table) = {
id: 1;
primary_key: {fields: "address,denom"}
index: {id: 1 fields: "denom"}
};
string address = 1;
string denom = 2;
uint64 amount = 3;
}
message Supply {
option (cosmos.orm.v1.table) = {
id: 2;
primary_key: {fields: "denom"}
};
string denom = 1;
uint64 amount = 2;
}
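
As a sketch of how the table options above surface in the generated Go API: the secondary index with id 1 on "denom" becomes BalanceDenomIndexKey, so a prefix query over all balances of one denom looks roughly like the following (package testpb assumed; the Next/Close iteration behavior of ormtable.Iterator is also an assumption here).

func balancesForDenom(ctx context.Context, balances BalanceTable, denom string) ([]*Balance, error) {
	it, err := balances.List(ctx, BalanceDenomIndexKey{}.WithDenom(denom))
	if err != nil {
		return nil, err
	}
	defer it.Close()

	var out []*Balance
	for it.Next() {
		bal, err := it.Value()
		if err != nil {
			return nil, err
		}
		out = append(out, bal)
	}
	return out, nil
}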

File diff suppressed because it is too large

View File

@ -1,149 +0,0 @@
// Code generated by protoc-gen-go-cosmos-orm-proto. DO NOT EDIT.
syntax = "proto3";
package testpb;
import "cosmos/base/query/v1beta1/pagination.proto";
import "testpb/bank.proto";
// BankQueryService queries the state of the tables specified by testpb/bank.proto.
service BankQueryService {
// Get queries the Balance table by its primary key.
rpc GetBalance(GetBalanceRequest) returns (GetBalanceResponse) {}
// ListBalance queries the Balance table using prefix and range queries against defined indexes.
rpc ListBalance(ListBalanceRequest) returns (ListBalanceResponse) {}
// Get queries the Supply table by its primary key.
rpc GetSupply(GetSupplyRequest) returns (GetSupplyResponse) {}
// ListSupply queries the Supply table using prefix and range queries against defined indexes.
rpc ListSupply(ListSupplyRequest) returns (ListSupplyResponse) {}
}
// GetBalanceRequest is the BankQuery/GetBalanceRequest request type.
message GetBalanceRequest {
// address specifies the value of the address field in the primary key.
string address = 1;
// denom specifies the value of the denom field in the primary key.
string denom = 2;
}
// GetBalanceResponse is the BankQuery/GetBalanceResponse response type.
message GetBalanceResponse {
// value is the response value.
Balance value = 1;
}
// ListBalanceRequest is the BankQuery/ListBalanceRequest request type.
message ListBalanceRequest {
// IndexKey specifies the value of an index key to use in prefix and range queries.
message IndexKey {
// key specifies the index key value.
oneof key {
// address_denom specifies the value of the AddressDenom index key to use in the query.
AddressDenom address_denom = 1;
// denom specifies the value of the Denom index key to use in the query.
Denom denom = 2;
}
message AddressDenom {
// address is the value of the address field in the index.
// It can be omitted to query for all valid values of that field in this segment of the index.
optional string address = 1;
// denom is the value of the denom field in the index.
// It can be omitted to query for all valid values of that field in this segment of the index.
optional string denom = 2;
}
message Denom {
// denom is the value of the denom field in the index.
// It can be omitted to query for all valid values of that field in this segment of the index.
optional string denom = 1;
}
}
// query specifies the type of query - either a prefix or range query.
oneof query {
// prefix_query specifies the index key value to use for the prefix query.
IndexKey prefix_query = 1;
// range_query specifies the index key from/to values to use for the range query.
RangeQuery range_query = 2;
}
// pagination specifies optional pagination parameters.
cosmos.base.query.v1beta1.PageRequest pagination = 3;
// RangeQuery specifies the from/to index keys for a range query.
message RangeQuery {
// from is the index key to use for the start of the range query.
// To query from the start of an index, specify an index key for that index with empty values.
IndexKey from = 1;
// to is the index key to use for the end of the range query.
// The index key type MUST be the same as the index key type used for from.
// It can be omitted to query to the end of the index.
IndexKey to = 2;
}
}
// ListBalanceResponse is the BankQuery/ListBalanceResponse response type.
message ListBalanceResponse {
// values are the results of the query.
repeated Balance values = 1;
// pagination is the pagination response.
cosmos.base.query.v1beta1.PageResponse pagination = 2;
}
// GetSupplyRequest is the BankQuery/GetSupplyRequest request type.
message GetSupplyRequest {
// denom specifies the value of the denom field in the primary key.
string denom = 1;
}
// GetSupplyResponse is the BankQuery/GetSupplyResponse response type.
message GetSupplyResponse {
// value is the response value.
Supply value = 1;
}
// ListSupplyRequest is the BankQuery/ListSupplyRequest request type.
message ListSupplyRequest {
// IndexKey specifies the value of an index key to use in prefix and range queries.
message IndexKey {
// key specifies the index key value.
oneof key {
// denom specifies the value of the Denom index key to use in the query.
Denom denom = 1;
}
message Denom {
// denom is the value of the denom field in the index.
// It can be omitted to query for all valid values of that field in this segment of the index.
optional string denom = 1;
}
}
// query specifies the type of query - either a prefix or range query.
oneof query {
// prefix_query specifies the index key value to use for the prefix query.
IndexKey prefix_query = 1;
// range_query specifies the index key from/to values to use for the range query.
RangeQuery range_query = 2;
}
// pagination specifies optional pagination parameters.
cosmos.base.query.v1beta1.PageRequest pagination = 3;
// RangeQuery specifies the from/to index keys for a range query.
message RangeQuery {
// from is the index key to use for the start of the range query.
// To query from the start of an index, specify an index key for that index with empty values.
IndexKey from = 1;
// to is the index key to use for the end of the range query.
// The index key type MUST be the same as the index key type used for from.
// It can be omitted to query to the end of the index.
IndexKey to = 2;
}
}
// ListSupplyResponse is the BankQuery/ListSupplyResponse response type.
message ListSupplyResponse {
// values are the results of the query.
repeated Supply values = 1;
// pagination is the pagination response.
cosmos.base.query.v1beta1.PageResponse pagination = 2;
}
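
A minimal client-side sketch for the query service above, assumed to live in package testpb with an established grpc.ClientConnInterface. The request/response Go field names follow the standard protoc-gen-go mapping of the messages above, since the generated message code itself is in the suppressed diff.

func queryBalance(ctx context.Context, conn grpc.ClientConnInterface, addr, denom string) (*Balance, error) {
	client := NewBankQueryServiceClient(conn)
	resp, err := client.GetBalance(ctx, &GetBalanceRequest{Address: addr, Denom: denom})
	if err != nil {
		return nil, err
	}
	return resp.Value, nil
}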

View File

@ -1,230 +0,0 @@
// Code generated by protoc-gen-go-cosmos-orm-proto. DO NOT EDIT.
// Code generated by protoc-gen-go-grpc. DO NOT EDIT.
// versions:
// - protoc-gen-go-grpc v1.3.0
// - protoc (unknown)
// source: testpb/bank_query.proto
package testpb
import (
context "context"
grpc "google.golang.org/grpc"
codes "google.golang.org/grpc/codes"
status "google.golang.org/grpc/status"
)
// This is a compile-time assertion to ensure that this generated file
// is compatible with the grpc package it is being compiled against.
// Requires gRPC-Go v1.32.0 or later.
const _ = grpc.SupportPackageIsVersion7
const (
BankQueryService_GetBalance_FullMethodName = "/testpb.BankQueryService/GetBalance"
BankQueryService_ListBalance_FullMethodName = "/testpb.BankQueryService/ListBalance"
BankQueryService_GetSupply_FullMethodName = "/testpb.BankQueryService/GetSupply"
BankQueryService_ListSupply_FullMethodName = "/testpb.BankQueryService/ListSupply"
)
// BankQueryServiceClient is the client API for BankQueryService service.
//
// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://pkg.go.dev/google.golang.org/grpc/?tab=doc#ClientConn.NewStream.
type BankQueryServiceClient interface {
// Get queries the Balance table by its primary key.
GetBalance(ctx context.Context, in *GetBalanceRequest, opts ...grpc.CallOption) (*GetBalanceResponse, error)
// ListBalance queries the Balance table using prefix and range queries against defined indexes.
ListBalance(ctx context.Context, in *ListBalanceRequest, opts ...grpc.CallOption) (*ListBalanceResponse, error)
// Get queries the Supply table by its primary key.
GetSupply(ctx context.Context, in *GetSupplyRequest, opts ...grpc.CallOption) (*GetSupplyResponse, error)
// ListSupply queries the Supply table using prefix and range queries against defined indexes.
ListSupply(ctx context.Context, in *ListSupplyRequest, opts ...grpc.CallOption) (*ListSupplyResponse, error)
}
type bankQueryServiceClient struct {
cc grpc.ClientConnInterface
}
func NewBankQueryServiceClient(cc grpc.ClientConnInterface) BankQueryServiceClient {
return &bankQueryServiceClient{cc}
}
func (c *bankQueryServiceClient) GetBalance(ctx context.Context, in *GetBalanceRequest, opts ...grpc.CallOption) (*GetBalanceResponse, error) {
out := new(GetBalanceResponse)
err := c.cc.Invoke(ctx, BankQueryService_GetBalance_FullMethodName, in, out, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *bankQueryServiceClient) ListBalance(ctx context.Context, in *ListBalanceRequest, opts ...grpc.CallOption) (*ListBalanceResponse, error) {
out := new(ListBalanceResponse)
err := c.cc.Invoke(ctx, BankQueryService_ListBalance_FullMethodName, in, out, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *bankQueryServiceClient) GetSupply(ctx context.Context, in *GetSupplyRequest, opts ...grpc.CallOption) (*GetSupplyResponse, error) {
out := new(GetSupplyResponse)
err := c.cc.Invoke(ctx, BankQueryService_GetSupply_FullMethodName, in, out, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *bankQueryServiceClient) ListSupply(ctx context.Context, in *ListSupplyRequest, opts ...grpc.CallOption) (*ListSupplyResponse, error) {
out := new(ListSupplyResponse)
err := c.cc.Invoke(ctx, BankQueryService_ListSupply_FullMethodName, in, out, opts...)
if err != nil {
return nil, err
}
return out, nil
}
// BankQueryServiceServer is the server API for BankQueryService service.
// All implementations must embed UnimplementedBankQueryServiceServer
// for forward compatibility
type BankQueryServiceServer interface {
// Get queries the Balance table by its primary key.
GetBalance(context.Context, *GetBalanceRequest) (*GetBalanceResponse, error)
// ListBalance queries the Balance table using prefix and range queries against defined indexes.
ListBalance(context.Context, *ListBalanceRequest) (*ListBalanceResponse, error)
// Get queries the Supply table by its primary key.
GetSupply(context.Context, *GetSupplyRequest) (*GetSupplyResponse, error)
// ListSupply queries the Supply table using prefix and range queries against defined indexes.
ListSupply(context.Context, *ListSupplyRequest) (*ListSupplyResponse, error)
mustEmbedUnimplementedBankQueryServiceServer()
}
// UnimplementedBankQueryServiceServer must be embedded to have forward compatible implementations.
type UnimplementedBankQueryServiceServer struct {
}
func (UnimplementedBankQueryServiceServer) GetBalance(context.Context, *GetBalanceRequest) (*GetBalanceResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method GetBalance not implemented")
}
func (UnimplementedBankQueryServiceServer) ListBalance(context.Context, *ListBalanceRequest) (*ListBalanceResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method ListBalance not implemented")
}
func (UnimplementedBankQueryServiceServer) GetSupply(context.Context, *GetSupplyRequest) (*GetSupplyResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method GetSupply not implemented")
}
func (UnimplementedBankQueryServiceServer) ListSupply(context.Context, *ListSupplyRequest) (*ListSupplyResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method ListSupply not implemented")
}
func (UnimplementedBankQueryServiceServer) mustEmbedUnimplementedBankQueryServiceServer() {}
// UnsafeBankQueryServiceServer may be embedded to opt out of forward compatibility for this service.
// Use of this interface is not recommended, as added methods to BankQueryServiceServer will
// result in compilation errors.
type UnsafeBankQueryServiceServer interface {
mustEmbedUnimplementedBankQueryServiceServer()
}
func RegisterBankQueryServiceServer(s grpc.ServiceRegistrar, srv BankQueryServiceServer) {
s.RegisterService(&BankQueryService_ServiceDesc, srv)
}
func _BankQueryService_GetBalance_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(GetBalanceRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(BankQueryServiceServer).GetBalance(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: BankQueryService_GetBalance_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(BankQueryServiceServer).GetBalance(ctx, req.(*GetBalanceRequest))
}
return interceptor(ctx, in, info, handler)
}
func _BankQueryService_ListBalance_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(ListBalanceRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(BankQueryServiceServer).ListBalance(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: BankQueryService_ListBalance_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(BankQueryServiceServer).ListBalance(ctx, req.(*ListBalanceRequest))
}
return interceptor(ctx, in, info, handler)
}
func _BankQueryService_GetSupply_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(GetSupplyRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(BankQueryServiceServer).GetSupply(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: BankQueryService_GetSupply_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(BankQueryServiceServer).GetSupply(ctx, req.(*GetSupplyRequest))
}
return interceptor(ctx, in, info, handler)
}
func _BankQueryService_ListSupply_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(ListSupplyRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(BankQueryServiceServer).ListSupply(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: BankQueryService_ListSupply_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(BankQueryServiceServer).ListSupply(ctx, req.(*ListSupplyRequest))
}
return interceptor(ctx, in, info, handler)
}
// BankQueryService_ServiceDesc is the grpc.ServiceDesc for BankQueryService service.
// It's only intended for direct use with grpc.RegisterService,
// and not to be introspected or modified (even as a copy)
var BankQueryService_ServiceDesc = grpc.ServiceDesc{
ServiceName: "testpb.BankQueryService",
HandlerType: (*BankQueryServiceServer)(nil),
Methods: []grpc.MethodDesc{
{
MethodName: "GetBalance",
Handler: _BankQueryService_GetBalance_Handler,
},
{
MethodName: "ListBalance",
Handler: _BankQueryService_ListBalance_Handler,
},
{
MethodName: "GetSupply",
Handler: _BankQueryService_GetSupply_Handler,
},
{
MethodName: "ListSupply",
Handler: _BankQueryService_ListSupply_Handler,
},
},
Streams: []grpc.StreamDesc{},
Metadata: "testpb/bank_query.proto",
}
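
As a point of reference, the generated client above is used like any other gRPC client. The following is a minimal sketch and was not part of the removed file: the dial target is a placeholder, the GetBalanceRequest is left empty because its key fields are declared in testpb/bank_query.proto (collapsed in this diff), and it assumes imports of google.golang.org/grpc and google.golang.org/grpc/credentials/insecure alongside context.

// Sketch only: exercising the generated BankQueryServiceClient from within
// the same package. The address below is a placeholder and the request's key
// fields are intentionally left unset.
func exampleBankQuery(ctx context.Context) (*GetBalanceResponse, error) {
    conn, err := grpc.Dial("localhost:9090", grpc.WithTransportCredentials(insecure.NewCredentials()))
    if err != nil {
        return nil, err
    }
    defer conn.Close()

    client := NewBankQueryServiceClient(conn)
    return client.GetBalance(ctx, &GetBalanceRequest{})
}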

File diff suppressed because it is too large

View File

@ -1,999 +0,0 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.28.1
// protoc (unknown)
// source: testpb/test_schema.proto
package testpb
import (
_ "cosmossdk.io/api/cosmos/orm/v1"
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
durationpb "google.golang.org/protobuf/types/known/durationpb"
timestamppb "google.golang.org/protobuf/types/known/timestamppb"
reflect "reflect"
sync "sync"
)
const (
// Verify that this generated code is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
// Verify that runtime/protoimpl is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)
type Enum int32
const (
Enum_ENUM_UNSPECIFIED Enum = 0
Enum_ENUM_ONE Enum = 1
Enum_ENUM_TWO Enum = 2
Enum_ENUM_FIVE Enum = 5
Enum_ENUM_NEG_THREE Enum = -3
)
// Enum value maps for Enum.
var (
Enum_name = map[int32]string{
0: "ENUM_UNSPECIFIED",
1: "ENUM_ONE",
2: "ENUM_TWO",
5: "ENUM_FIVE",
-3: "ENUM_NEG_THREE",
}
Enum_value = map[string]int32{
"ENUM_UNSPECIFIED": 0,
"ENUM_ONE": 1,
"ENUM_TWO": 2,
"ENUM_FIVE": 5,
"ENUM_NEG_THREE": -3,
}
)
func (x Enum) Enum() *Enum {
p := new(Enum)
*p = x
return p
}
func (x Enum) String() string {
return protoimpl.X.EnumStringOf(x.Descriptor(), protoreflect.EnumNumber(x))
}
func (Enum) Descriptor() protoreflect.EnumDescriptor {
return file_testpb_test_schema_proto_enumTypes[0].Descriptor()
}
func (Enum) Type() protoreflect.EnumType {
return &file_testpb_test_schema_proto_enumTypes[0]
}
func (x Enum) Number() protoreflect.EnumNumber {
return protoreflect.EnumNumber(x)
}
// Deprecated: Use Enum.Descriptor instead.
func (Enum) EnumDescriptor() ([]byte, []int) {
return file_testpb_test_schema_proto_rawDescGZIP(), []int{0}
}
type ExampleTable struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
// Valid key fields:
U32 uint32 `protobuf:"varint,1,opt,name=u32,proto3" json:"u32,omitempty"`
U64 uint64 `protobuf:"varint,2,opt,name=u64,proto3" json:"u64,omitempty"`
Str string `protobuf:"bytes,3,opt,name=str,proto3" json:"str,omitempty"`
Bz []byte `protobuf:"bytes,4,opt,name=bz,proto3" json:"bz,omitempty"`
Ts *timestamppb.Timestamp `protobuf:"bytes,5,opt,name=ts,proto3" json:"ts,omitempty"`
Dur *durationpb.Duration `protobuf:"bytes,6,opt,name=dur,proto3" json:"dur,omitempty"`
I32 int32 `protobuf:"varint,7,opt,name=i32,proto3" json:"i32,omitempty"`
S32 int32 `protobuf:"zigzag32,8,opt,name=s32,proto3" json:"s32,omitempty"`
Sf32 int32 `protobuf:"fixed32,9,opt,name=sf32,proto3" json:"sf32,omitempty"`
I64 int64 `protobuf:"varint,10,opt,name=i64,proto3" json:"i64,omitempty"`
S64 int64 `protobuf:"zigzag64,11,opt,name=s64,proto3" json:"s64,omitempty"`
Sf64 int64 `protobuf:"fixed64,12,opt,name=sf64,proto3" json:"sf64,omitempty"`
F32 uint32 `protobuf:"fixed32,13,opt,name=f32,proto3" json:"f32,omitempty"`
F64 uint64 `protobuf:"fixed64,14,opt,name=f64,proto3" json:"f64,omitempty"`
B bool `protobuf:"varint,15,opt,name=b,proto3" json:"b,omitempty"`
E Enum `protobuf:"varint,16,opt,name=e,proto3,enum=testpb.Enum" json:"e,omitempty"`
// Invalid key fields:
Repeated []uint32 `protobuf:"varint,17,rep,packed,name=repeated,proto3" json:"repeated,omitempty"`
Map map[string]uint32 `protobuf:"bytes,18,rep,name=map,proto3" json:"map,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"varint,2,opt,name=value,proto3"`
Msg *ExampleTable_ExampleMessage `protobuf:"bytes,19,opt,name=msg,proto3" json:"msg,omitempty"`
// Types that are assignable to Sum:
//
// *ExampleTable_Oneof
Sum isExampleTable_Sum `protobuf_oneof:"sum"`
}
func (x *ExampleTable) Reset() {
*x = ExampleTable{}
if protoimpl.UnsafeEnabled {
mi := &file_testpb_test_schema_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *ExampleTable) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*ExampleTable) ProtoMessage() {}
func (x *ExampleTable) ProtoReflect() protoreflect.Message {
mi := &file_testpb_test_schema_proto_msgTypes[0]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use ExampleTable.ProtoReflect.Descriptor instead.
func (*ExampleTable) Descriptor() ([]byte, []int) {
return file_testpb_test_schema_proto_rawDescGZIP(), []int{0}
}
func (x *ExampleTable) GetU32() uint32 {
if x != nil {
return x.U32
}
return 0
}
func (x *ExampleTable) GetU64() uint64 {
if x != nil {
return x.U64
}
return 0
}
func (x *ExampleTable) GetStr() string {
if x != nil {
return x.Str
}
return ""
}
func (x *ExampleTable) GetBz() []byte {
if x != nil {
return x.Bz
}
return nil
}
func (x *ExampleTable) GetTs() *timestamppb.Timestamp {
if x != nil {
return x.Ts
}
return nil
}
func (x *ExampleTable) GetDur() *durationpb.Duration {
if x != nil {
return x.Dur
}
return nil
}
func (x *ExampleTable) GetI32() int32 {
if x != nil {
return x.I32
}
return 0
}
func (x *ExampleTable) GetS32() int32 {
if x != nil {
return x.S32
}
return 0
}
func (x *ExampleTable) GetSf32() int32 {
if x != nil {
return x.Sf32
}
return 0
}
func (x *ExampleTable) GetI64() int64 {
if x != nil {
return x.I64
}
return 0
}
func (x *ExampleTable) GetS64() int64 {
if x != nil {
return x.S64
}
return 0
}
func (x *ExampleTable) GetSf64() int64 {
if x != nil {
return x.Sf64
}
return 0
}
func (x *ExampleTable) GetF32() uint32 {
if x != nil {
return x.F32
}
return 0
}
func (x *ExampleTable) GetF64() uint64 {
if x != nil {
return x.F64
}
return 0
}
func (x *ExampleTable) GetB() bool {
if x != nil {
return x.B
}
return false
}
func (x *ExampleTable) GetE() Enum {
if x != nil {
return x.E
}
return Enum_ENUM_UNSPECIFIED
}
func (x *ExampleTable) GetRepeated() []uint32 {
if x != nil {
return x.Repeated
}
return nil
}
func (x *ExampleTable) GetMap() map[string]uint32 {
if x != nil {
return x.Map
}
return nil
}
func (x *ExampleTable) GetMsg() *ExampleTable_ExampleMessage {
if x != nil {
return x.Msg
}
return nil
}
func (m *ExampleTable) GetSum() isExampleTable_Sum {
if m != nil {
return m.Sum
}
return nil
}
func (x *ExampleTable) GetOneof() uint32 {
if x, ok := x.GetSum().(*ExampleTable_Oneof); ok {
return x.Oneof
}
return 0
}
type isExampleTable_Sum interface {
isExampleTable_Sum()
}
type ExampleTable_Oneof struct {
Oneof uint32 `protobuf:"varint,20,opt,name=oneof,proto3,oneof"`
}
func (*ExampleTable_Oneof) isExampleTable_Sum() {}
type ExampleAutoIncrementTable struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Id uint64 `protobuf:"varint,1,opt,name=id,proto3" json:"id,omitempty"`
X string `protobuf:"bytes,2,opt,name=x,proto3" json:"x,omitempty"`
Y int32 `protobuf:"varint,3,opt,name=y,proto3" json:"y,omitempty"`
}
func (x *ExampleAutoIncrementTable) Reset() {
*x = ExampleAutoIncrementTable{}
if protoimpl.UnsafeEnabled {
mi := &file_testpb_test_schema_proto_msgTypes[1]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *ExampleAutoIncrementTable) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*ExampleAutoIncrementTable) ProtoMessage() {}
func (x *ExampleAutoIncrementTable) ProtoReflect() protoreflect.Message {
mi := &file_testpb_test_schema_proto_msgTypes[1]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use ExampleAutoIncrementTable.ProtoReflect.Descriptor instead.
func (*ExampleAutoIncrementTable) Descriptor() ([]byte, []int) {
return file_testpb_test_schema_proto_rawDescGZIP(), []int{1}
}
func (x *ExampleAutoIncrementTable) GetId() uint64 {
if x != nil {
return x.Id
}
return 0
}
func (x *ExampleAutoIncrementTable) GetX() string {
if x != nil {
return x.X
}
return ""
}
func (x *ExampleAutoIncrementTable) GetY() int32 {
if x != nil {
return x.Y
}
return 0
}
type ExampleSingleton struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Foo string `protobuf:"bytes,1,opt,name=foo,proto3" json:"foo,omitempty"`
Bar int32 `protobuf:"varint,2,opt,name=bar,proto3" json:"bar,omitempty"`
}
func (x *ExampleSingleton) Reset() {
*x = ExampleSingleton{}
if protoimpl.UnsafeEnabled {
mi := &file_testpb_test_schema_proto_msgTypes[2]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *ExampleSingleton) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*ExampleSingleton) ProtoMessage() {}
func (x *ExampleSingleton) ProtoReflect() protoreflect.Message {
mi := &file_testpb_test_schema_proto_msgTypes[2]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use ExampleSingleton.ProtoReflect.Descriptor instead.
func (*ExampleSingleton) Descriptor() ([]byte, []int) {
return file_testpb_test_schema_proto_rawDescGZIP(), []int{2}
}
func (x *ExampleSingleton) GetFoo() string {
if x != nil {
return x.Foo
}
return ""
}
func (x *ExampleSingleton) GetBar() int32 {
if x != nil {
return x.Bar
}
return 0
}
type ExampleTimestamp struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Id uint64 `protobuf:"varint,1,opt,name=id,proto3" json:"id,omitempty"`
Name string `protobuf:"bytes,2,opt,name=name,proto3" json:"name,omitempty"`
Ts *timestamppb.Timestamp `protobuf:"bytes,3,opt,name=ts,proto3" json:"ts,omitempty"`
}
func (x *ExampleTimestamp) Reset() {
*x = ExampleTimestamp{}
if protoimpl.UnsafeEnabled {
mi := &file_testpb_test_schema_proto_msgTypes[3]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *ExampleTimestamp) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*ExampleTimestamp) ProtoMessage() {}
func (x *ExampleTimestamp) ProtoReflect() protoreflect.Message {
mi := &file_testpb_test_schema_proto_msgTypes[3]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use ExampleTimestamp.ProtoReflect.Descriptor instead.
func (*ExampleTimestamp) Descriptor() ([]byte, []int) {
return file_testpb_test_schema_proto_rawDescGZIP(), []int{3}
}
func (x *ExampleTimestamp) GetId() uint64 {
if x != nil {
return x.Id
}
return 0
}
func (x *ExampleTimestamp) GetName() string {
if x != nil {
return x.Name
}
return ""
}
func (x *ExampleTimestamp) GetTs() *timestamppb.Timestamp {
if x != nil {
return x.Ts
}
return nil
}
type ExampleDuration struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Id uint64 `protobuf:"varint,1,opt,name=id,proto3" json:"id,omitempty"`
Name string `protobuf:"bytes,2,opt,name=name,proto3" json:"name,omitempty"`
Dur *durationpb.Duration `protobuf:"bytes,3,opt,name=dur,proto3" json:"dur,omitempty"`
}
func (x *ExampleDuration) Reset() {
*x = ExampleDuration{}
if protoimpl.UnsafeEnabled {
mi := &file_testpb_test_schema_proto_msgTypes[4]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *ExampleDuration) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*ExampleDuration) ProtoMessage() {}
func (x *ExampleDuration) ProtoReflect() protoreflect.Message {
mi := &file_testpb_test_schema_proto_msgTypes[4]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use ExampleDuration.ProtoReflect.Descriptor instead.
func (*ExampleDuration) Descriptor() ([]byte, []int) {
return file_testpb_test_schema_proto_rawDescGZIP(), []int{4}
}
func (x *ExampleDuration) GetId() uint64 {
if x != nil {
return x.Id
}
return 0
}
func (x *ExampleDuration) GetName() string {
if x != nil {
return x.Name
}
return ""
}
func (x *ExampleDuration) GetDur() *durationpb.Duration {
if x != nil {
return x.Dur
}
return nil
}
type SimpleExample struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
Unique string `protobuf:"bytes,2,opt,name=unique,proto3" json:"unique,omitempty"`
NotUnique string `protobuf:"bytes,3,opt,name=not_unique,json=notUnique,proto3" json:"not_unique,omitempty"`
}
func (x *SimpleExample) Reset() {
*x = SimpleExample{}
if protoimpl.UnsafeEnabled {
mi := &file_testpb_test_schema_proto_msgTypes[5]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *SimpleExample) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*SimpleExample) ProtoMessage() {}
func (x *SimpleExample) ProtoReflect() protoreflect.Message {
mi := &file_testpb_test_schema_proto_msgTypes[5]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use SimpleExample.ProtoReflect.Descriptor instead.
func (*SimpleExample) Descriptor() ([]byte, []int) {
return file_testpb_test_schema_proto_rawDescGZIP(), []int{5}
}
func (x *SimpleExample) GetName() string {
if x != nil {
return x.Name
}
return ""
}
func (x *SimpleExample) GetUnique() string {
if x != nil {
return x.Unique
}
return ""
}
func (x *SimpleExample) GetNotUnique() string {
if x != nil {
return x.NotUnique
}
return ""
}
// ExampleAutoIncFieldName is a table for testing InsertReturning<FieldName>.
type ExampleAutoIncFieldName struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Foo uint64 `protobuf:"varint,1,opt,name=foo,proto3" json:"foo,omitempty"`
Bar uint64 `protobuf:"varint,2,opt,name=bar,proto3" json:"bar,omitempty"`
}
func (x *ExampleAutoIncFieldName) Reset() {
*x = ExampleAutoIncFieldName{}
if protoimpl.UnsafeEnabled {
mi := &file_testpb_test_schema_proto_msgTypes[6]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *ExampleAutoIncFieldName) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*ExampleAutoIncFieldName) ProtoMessage() {}
func (x *ExampleAutoIncFieldName) ProtoReflect() protoreflect.Message {
mi := &file_testpb_test_schema_proto_msgTypes[6]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use ExampleAutoIncFieldName.ProtoReflect.Descriptor instead.
func (*ExampleAutoIncFieldName) Descriptor() ([]byte, []int) {
return file_testpb_test_schema_proto_rawDescGZIP(), []int{6}
}
func (x *ExampleAutoIncFieldName) GetFoo() uint64 {
if x != nil {
return x.Foo
}
return 0
}
func (x *ExampleAutoIncFieldName) GetBar() uint64 {
if x != nil {
return x.Bar
}
return 0
}
type ExampleTable_ExampleMessage struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Foo string `protobuf:"bytes,1,opt,name=foo,proto3" json:"foo,omitempty"`
Bar int32 `protobuf:"varint,2,opt,name=bar,proto3" json:"bar,omitempty"`
}
func (x *ExampleTable_ExampleMessage) Reset() {
*x = ExampleTable_ExampleMessage{}
if protoimpl.UnsafeEnabled {
mi := &file_testpb_test_schema_proto_msgTypes[8]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *ExampleTable_ExampleMessage) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*ExampleTable_ExampleMessage) ProtoMessage() {}
func (x *ExampleTable_ExampleMessage) ProtoReflect() protoreflect.Message {
mi := &file_testpb_test_schema_proto_msgTypes[8]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use ExampleTable_ExampleMessage.ProtoReflect.Descriptor instead.
func (*ExampleTable_ExampleMessage) Descriptor() ([]byte, []int) {
return file_testpb_test_schema_proto_rawDescGZIP(), []int{0, 1}
}
func (x *ExampleTable_ExampleMessage) GetFoo() string {
if x != nil {
return x.Foo
}
return ""
}
func (x *ExampleTable_ExampleMessage) GetBar() int32 {
if x != nil {
return x.Bar
}
return 0
}
var File_testpb_test_schema_proto protoreflect.FileDescriptor
var file_testpb_test_schema_proto_rawDesc = []byte{
0x0a, 0x18, 0x74, 0x65, 0x73, 0x74, 0x70, 0x62, 0x2f, 0x74, 0x65, 0x73, 0x74, 0x5f, 0x73, 0x63,
0x68, 0x65, 0x6d, 0x61, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x06, 0x74, 0x65, 0x73, 0x74,
0x70, 0x62, 0x1a, 0x1f, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f,
0x62, 0x75, 0x66, 0x2f, 0x74, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x2e, 0x70, 0x72,
0x6f, 0x74, 0x6f, 0x1a, 0x1e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2f, 0x70, 0x72, 0x6f, 0x74,
0x6f, 0x62, 0x75, 0x66, 0x2f, 0x64, 0x75, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x2e, 0x70, 0x72,
0x6f, 0x74, 0x6f, 0x1a, 0x17, 0x63, 0x6f, 0x73, 0x6d, 0x6f, 0x73, 0x2f, 0x6f, 0x72, 0x6d, 0x2f,
0x76, 0x31, 0x2f, 0x6f, 0x72, 0x6d, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x22, 0xbd, 0x05, 0x0a,
0x0c, 0x45, 0x78, 0x61, 0x6d, 0x70, 0x6c, 0x65, 0x54, 0x61, 0x62, 0x6c, 0x65, 0x12, 0x10, 0x0a,
0x03, 0x75, 0x33, 0x32, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0d, 0x52, 0x03, 0x75, 0x33, 0x32, 0x12,
0x10, 0x0a, 0x03, 0x75, 0x36, 0x34, 0x18, 0x02, 0x20, 0x01, 0x28, 0x04, 0x52, 0x03, 0x75, 0x36,
0x34, 0x12, 0x10, 0x0a, 0x03, 0x73, 0x74, 0x72, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03,
0x73, 0x74, 0x72, 0x12, 0x0e, 0x0a, 0x02, 0x62, 0x7a, 0x18, 0x04, 0x20, 0x01, 0x28, 0x0c, 0x52,
0x02, 0x62, 0x7a, 0x12, 0x2a, 0x0a, 0x02, 0x74, 0x73, 0x18, 0x05, 0x20, 0x01, 0x28, 0x0b, 0x32,
0x1a, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75,
0x66, 0x2e, 0x54, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x52, 0x02, 0x74, 0x73, 0x12,
0x2b, 0x0a, 0x03, 0x64, 0x75, 0x72, 0x18, 0x06, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x19, 0x2e, 0x67,
0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x44,
0x75, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x03, 0x64, 0x75, 0x72, 0x12, 0x10, 0x0a, 0x03,
0x69, 0x33, 0x32, 0x18, 0x07, 0x20, 0x01, 0x28, 0x05, 0x52, 0x03, 0x69, 0x33, 0x32, 0x12, 0x10,
0x0a, 0x03, 0x73, 0x33, 0x32, 0x18, 0x08, 0x20, 0x01, 0x28, 0x11, 0x52, 0x03, 0x73, 0x33, 0x32,
0x12, 0x12, 0x0a, 0x04, 0x73, 0x66, 0x33, 0x32, 0x18, 0x09, 0x20, 0x01, 0x28, 0x0f, 0x52, 0x04,
0x73, 0x66, 0x33, 0x32, 0x12, 0x10, 0x0a, 0x03, 0x69, 0x36, 0x34, 0x18, 0x0a, 0x20, 0x01, 0x28,
0x03, 0x52, 0x03, 0x69, 0x36, 0x34, 0x12, 0x10, 0x0a, 0x03, 0x73, 0x36, 0x34, 0x18, 0x0b, 0x20,
0x01, 0x28, 0x12, 0x52, 0x03, 0x73, 0x36, 0x34, 0x12, 0x12, 0x0a, 0x04, 0x73, 0x66, 0x36, 0x34,
0x18, 0x0c, 0x20, 0x01, 0x28, 0x10, 0x52, 0x04, 0x73, 0x66, 0x36, 0x34, 0x12, 0x10, 0x0a, 0x03,
0x66, 0x33, 0x32, 0x18, 0x0d, 0x20, 0x01, 0x28, 0x07, 0x52, 0x03, 0x66, 0x33, 0x32, 0x12, 0x10,
0x0a, 0x03, 0x66, 0x36, 0x34, 0x18, 0x0e, 0x20, 0x01, 0x28, 0x06, 0x52, 0x03, 0x66, 0x36, 0x34,
0x12, 0x0c, 0x0a, 0x01, 0x62, 0x18, 0x0f, 0x20, 0x01, 0x28, 0x08, 0x52, 0x01, 0x62, 0x12, 0x1a,
0x0a, 0x01, 0x65, 0x18, 0x10, 0x20, 0x01, 0x28, 0x0e, 0x32, 0x0c, 0x2e, 0x74, 0x65, 0x73, 0x74,
0x70, 0x62, 0x2e, 0x45, 0x6e, 0x75, 0x6d, 0x52, 0x01, 0x65, 0x12, 0x1a, 0x0a, 0x08, 0x72, 0x65,
0x70, 0x65, 0x61, 0x74, 0x65, 0x64, 0x18, 0x11, 0x20, 0x03, 0x28, 0x0d, 0x52, 0x08, 0x72, 0x65,
0x70, 0x65, 0x61, 0x74, 0x65, 0x64, 0x12, 0x2f, 0x0a, 0x03, 0x6d, 0x61, 0x70, 0x18, 0x12, 0x20,
0x03, 0x28, 0x0b, 0x32, 0x1d, 0x2e, 0x74, 0x65, 0x73, 0x74, 0x70, 0x62, 0x2e, 0x45, 0x78, 0x61,
0x6d, 0x70, 0x6c, 0x65, 0x54, 0x61, 0x62, 0x6c, 0x65, 0x2e, 0x4d, 0x61, 0x70, 0x45, 0x6e, 0x74,
0x72, 0x79, 0x52, 0x03, 0x6d, 0x61, 0x70, 0x12, 0x35, 0x0a, 0x03, 0x6d, 0x73, 0x67, 0x18, 0x13,
0x20, 0x01, 0x28, 0x0b, 0x32, 0x23, 0x2e, 0x74, 0x65, 0x73, 0x74, 0x70, 0x62, 0x2e, 0x45, 0x78,
0x61, 0x6d, 0x70, 0x6c, 0x65, 0x54, 0x61, 0x62, 0x6c, 0x65, 0x2e, 0x45, 0x78, 0x61, 0x6d, 0x70,
0x6c, 0x65, 0x4d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65, 0x52, 0x03, 0x6d, 0x73, 0x67, 0x12, 0x16,
0x0a, 0x05, 0x6f, 0x6e, 0x65, 0x6f, 0x66, 0x18, 0x14, 0x20, 0x01, 0x28, 0x0d, 0x48, 0x00, 0x52,
0x05, 0x6f, 0x6e, 0x65, 0x6f, 0x66, 0x1a, 0x36, 0x0a, 0x08, 0x4d, 0x61, 0x70, 0x45, 0x6e, 0x74,
0x72, 0x79, 0x12, 0x10, 0x0a, 0x03, 0x6b, 0x65, 0x79, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52,
0x03, 0x6b, 0x65, 0x79, 0x12, 0x14, 0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x02, 0x20,
0x01, 0x28, 0x0d, 0x52, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3a, 0x02, 0x38, 0x01, 0x1a, 0x34,
0x0a, 0x0e, 0x45, 0x78, 0x61, 0x6d, 0x70, 0x6c, 0x65, 0x4d, 0x65, 0x73, 0x73, 0x61, 0x67, 0x65,
0x12, 0x10, 0x0a, 0x03, 0x66, 0x6f, 0x6f, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x66,
0x6f, 0x6f, 0x12, 0x10, 0x0a, 0x03, 0x62, 0x61, 0x72, 0x18, 0x02, 0x20, 0x01, 0x28, 0x05, 0x52,
0x03, 0x62, 0x61, 0x72, 0x3a, 0x3f, 0xf2, 0x9e, 0xd3, 0x8e, 0x03, 0x39, 0x0a, 0x0d, 0x0a, 0x0b,
0x75, 0x33, 0x32, 0x2c, 0x69, 0x36, 0x34, 0x2c, 0x73, 0x74, 0x72, 0x12, 0x0d, 0x0a, 0x07, 0x75,
0x36, 0x34, 0x2c, 0x73, 0x74, 0x72, 0x10, 0x01, 0x18, 0x01, 0x12, 0x0b, 0x0a, 0x07, 0x73, 0x74,
0x72, 0x2c, 0x75, 0x33, 0x32, 0x10, 0x02, 0x12, 0x0a, 0x0a, 0x06, 0x62, 0x7a, 0x2c, 0x73, 0x74,
0x72, 0x10, 0x03, 0x18, 0x01, 0x42, 0x05, 0x0a, 0x03, 0x73, 0x75, 0x6d, 0x22, 0x62, 0x0a, 0x19,
0x45, 0x78, 0x61, 0x6d, 0x70, 0x6c, 0x65, 0x41, 0x75, 0x74, 0x6f, 0x49, 0x6e, 0x63, 0x72, 0x65,
0x6d, 0x65, 0x6e, 0x74, 0x54, 0x61, 0x62, 0x6c, 0x65, 0x12, 0x0e, 0x0a, 0x02, 0x69, 0x64, 0x18,
0x01, 0x20, 0x01, 0x28, 0x04, 0x52, 0x02, 0x69, 0x64, 0x12, 0x0c, 0x0a, 0x01, 0x78, 0x18, 0x02,
0x20, 0x01, 0x28, 0x09, 0x52, 0x01, 0x78, 0x12, 0x0c, 0x0a, 0x01, 0x79, 0x18, 0x03, 0x20, 0x01,
0x28, 0x05, 0x52, 0x01, 0x79, 0x3a, 0x19, 0xf2, 0x9e, 0xd3, 0x8e, 0x03, 0x13, 0x0a, 0x06, 0x0a,
0x02, 0x69, 0x64, 0x10, 0x01, 0x12, 0x07, 0x0a, 0x01, 0x78, 0x10, 0x01, 0x18, 0x01, 0x18, 0x03,
0x22, 0x40, 0x0a, 0x10, 0x45, 0x78, 0x61, 0x6d, 0x70, 0x6c, 0x65, 0x53, 0x69, 0x6e, 0x67, 0x6c,
0x65, 0x74, 0x6f, 0x6e, 0x12, 0x10, 0x0a, 0x03, 0x66, 0x6f, 0x6f, 0x18, 0x01, 0x20, 0x01, 0x28,
0x09, 0x52, 0x03, 0x66, 0x6f, 0x6f, 0x12, 0x10, 0x0a, 0x03, 0x62, 0x61, 0x72, 0x18, 0x02, 0x20,
0x01, 0x28, 0x05, 0x52, 0x03, 0x62, 0x61, 0x72, 0x3a, 0x08, 0xfa, 0x9e, 0xd3, 0x8e, 0x03, 0x02,
0x08, 0x02, 0x22, 0x7c, 0x0a, 0x10, 0x45, 0x78, 0x61, 0x6d, 0x70, 0x6c, 0x65, 0x54, 0x69, 0x6d,
0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x12, 0x0e, 0x0a, 0x02, 0x69, 0x64, 0x18, 0x01, 0x20, 0x01,
0x28, 0x04, 0x52, 0x02, 0x69, 0x64, 0x12, 0x12, 0x0a, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x02,
0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x12, 0x2a, 0x0a, 0x02, 0x74, 0x73,
0x18, 0x03, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1a, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e,
0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x54, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61,
0x6d, 0x70, 0x52, 0x02, 0x74, 0x73, 0x3a, 0x18, 0xf2, 0x9e, 0xd3, 0x8e, 0x03, 0x12, 0x0a, 0x06,
0x0a, 0x02, 0x69, 0x64, 0x10, 0x01, 0x12, 0x06, 0x0a, 0x02, 0x74, 0x73, 0x10, 0x01, 0x18, 0x04,
0x22, 0x7d, 0x0a, 0x0f, 0x45, 0x78, 0x61, 0x6d, 0x70, 0x6c, 0x65, 0x44, 0x75, 0x72, 0x61, 0x74,
0x69, 0x6f, 0x6e, 0x12, 0x0e, 0x0a, 0x02, 0x69, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28, 0x04, 0x52,
0x02, 0x69, 0x64, 0x12, 0x12, 0x0a, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28,
0x09, 0x52, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x12, 0x2b, 0x0a, 0x03, 0x64, 0x75, 0x72, 0x18, 0x03,
0x20, 0x01, 0x28, 0x0b, 0x32, 0x19, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72,
0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x44, 0x75, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x52,
0x03, 0x64, 0x75, 0x72, 0x3a, 0x19, 0xf2, 0x9e, 0xd3, 0x8e, 0x03, 0x13, 0x0a, 0x06, 0x0a, 0x02,
0x69, 0x64, 0x10, 0x01, 0x12, 0x07, 0x0a, 0x03, 0x64, 0x75, 0x72, 0x10, 0x01, 0x18, 0x04, 0x22,
0x7a, 0x0a, 0x0d, 0x53, 0x69, 0x6d, 0x70, 0x6c, 0x65, 0x45, 0x78, 0x61, 0x6d, 0x70, 0x6c, 0x65,
0x12, 0x12, 0x0a, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04,
0x6e, 0x61, 0x6d, 0x65, 0x12, 0x16, 0x0a, 0x06, 0x75, 0x6e, 0x69, 0x71, 0x75, 0x65, 0x18, 0x02,
0x20, 0x01, 0x28, 0x09, 0x52, 0x06, 0x75, 0x6e, 0x69, 0x71, 0x75, 0x65, 0x12, 0x1d, 0x0a, 0x0a,
0x6e, 0x6f, 0x74, 0x5f, 0x75, 0x6e, 0x69, 0x71, 0x75, 0x65, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09,
0x52, 0x09, 0x6e, 0x6f, 0x74, 0x55, 0x6e, 0x69, 0x71, 0x75, 0x65, 0x3a, 0x1e, 0xf2, 0x9e, 0xd3,
0x8e, 0x03, 0x18, 0x0a, 0x06, 0x0a, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x12, 0x0c, 0x0a, 0x06, 0x75,
0x6e, 0x69, 0x71, 0x75, 0x65, 0x10, 0x01, 0x18, 0x01, 0x18, 0x05, 0x22, 0x50, 0x0a, 0x17, 0x45,
0x78, 0x61, 0x6d, 0x70, 0x6c, 0x65, 0x41, 0x75, 0x74, 0x6f, 0x49, 0x6e, 0x63, 0x46, 0x69, 0x65,
0x6c, 0x64, 0x4e, 0x61, 0x6d, 0x65, 0x12, 0x10, 0x0a, 0x03, 0x66, 0x6f, 0x6f, 0x18, 0x01, 0x20,
0x01, 0x28, 0x04, 0x52, 0x03, 0x66, 0x6f, 0x6f, 0x12, 0x10, 0x0a, 0x03, 0x62, 0x61, 0x72, 0x18,
0x02, 0x20, 0x01, 0x28, 0x04, 0x52, 0x03, 0x62, 0x61, 0x72, 0x3a, 0x11, 0xf2, 0x9e, 0xd3, 0x8e,
0x03, 0x0b, 0x0a, 0x07, 0x0a, 0x03, 0x66, 0x6f, 0x6f, 0x10, 0x01, 0x18, 0x06, 0x2a, 0x64, 0x0a,
0x04, 0x45, 0x6e, 0x75, 0x6d, 0x12, 0x14, 0x0a, 0x10, 0x45, 0x4e, 0x55, 0x4d, 0x5f, 0x55, 0x4e,
0x53, 0x50, 0x45, 0x43, 0x49, 0x46, 0x49, 0x45, 0x44, 0x10, 0x00, 0x12, 0x0c, 0x0a, 0x08, 0x45,
0x4e, 0x55, 0x4d, 0x5f, 0x4f, 0x4e, 0x45, 0x10, 0x01, 0x12, 0x0c, 0x0a, 0x08, 0x45, 0x4e, 0x55,
0x4d, 0x5f, 0x54, 0x57, 0x4f, 0x10, 0x02, 0x12, 0x0d, 0x0a, 0x09, 0x45, 0x4e, 0x55, 0x4d, 0x5f,
0x46, 0x49, 0x56, 0x45, 0x10, 0x05, 0x12, 0x1b, 0x0a, 0x0e, 0x45, 0x4e, 0x55, 0x4d, 0x5f, 0x4e,
0x45, 0x47, 0x5f, 0x54, 0x48, 0x52, 0x45, 0x45, 0x10, 0xfd, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0x01, 0x42, 0x77, 0x0a, 0x0a, 0x63, 0x6f, 0x6d, 0x2e, 0x74, 0x65, 0x73, 0x74, 0x70,
0x62, 0x42, 0x0f, 0x54, 0x65, 0x73, 0x74, 0x53, 0x63, 0x68, 0x65, 0x6d, 0x61, 0x50, 0x72, 0x6f,
0x74, 0x6f, 0x50, 0x01, 0x5a, 0x20, 0x63, 0x6f, 0x73, 0x6d, 0x6f, 0x73, 0x73, 0x64, 0x6b, 0x2e,
0x69, 0x6f, 0x2f, 0x6f, 0x72, 0x6d, 0x2f, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x2f,
0x74, 0x65, 0x73, 0x74, 0x70, 0x62, 0xa2, 0x02, 0x03, 0x54, 0x58, 0x58, 0xaa, 0x02, 0x06, 0x54,
0x65, 0x73, 0x74, 0x70, 0x62, 0xca, 0x02, 0x06, 0x54, 0x65, 0x73, 0x74, 0x70, 0x62, 0xe2, 0x02,
0x12, 0x54, 0x65, 0x73, 0x74, 0x70, 0x62, 0x5c, 0x47, 0x50, 0x42, 0x4d, 0x65, 0x74, 0x61, 0x64,
0x61, 0x74, 0x61, 0xea, 0x02, 0x06, 0x54, 0x65, 0x73, 0x74, 0x70, 0x62, 0x62, 0x06, 0x70, 0x72,
0x6f, 0x74, 0x6f, 0x33,
}
var (
file_testpb_test_schema_proto_rawDescOnce sync.Once
file_testpb_test_schema_proto_rawDescData = file_testpb_test_schema_proto_rawDesc
)
func file_testpb_test_schema_proto_rawDescGZIP() []byte {
file_testpb_test_schema_proto_rawDescOnce.Do(func() {
file_testpb_test_schema_proto_rawDescData = protoimpl.X.CompressGZIP(file_testpb_test_schema_proto_rawDescData)
})
return file_testpb_test_schema_proto_rawDescData
}
var file_testpb_test_schema_proto_enumTypes = make([]protoimpl.EnumInfo, 1)
var file_testpb_test_schema_proto_msgTypes = make([]protoimpl.MessageInfo, 9)
var file_testpb_test_schema_proto_goTypes = []interface{}{
(Enum)(0), // 0: testpb.Enum
(*ExampleTable)(nil), // 1: testpb.ExampleTable
(*ExampleAutoIncrementTable)(nil), // 2: testpb.ExampleAutoIncrementTable
(*ExampleSingleton)(nil), // 3: testpb.ExampleSingleton
(*ExampleTimestamp)(nil), // 4: testpb.ExampleTimestamp
(*ExampleDuration)(nil), // 5: testpb.ExampleDuration
(*SimpleExample)(nil), // 6: testpb.SimpleExample
(*ExampleAutoIncFieldName)(nil), // 7: testpb.ExampleAutoIncFieldName
nil, // 8: testpb.ExampleTable.MapEntry
(*ExampleTable_ExampleMessage)(nil), // 9: testpb.ExampleTable.ExampleMessage
(*timestamppb.Timestamp)(nil), // 10: google.protobuf.Timestamp
(*durationpb.Duration)(nil), // 11: google.protobuf.Duration
}
var file_testpb_test_schema_proto_depIdxs = []int32{
10, // 0: testpb.ExampleTable.ts:type_name -> google.protobuf.Timestamp
11, // 1: testpb.ExampleTable.dur:type_name -> google.protobuf.Duration
0, // 2: testpb.ExampleTable.e:type_name -> testpb.Enum
8, // 3: testpb.ExampleTable.map:type_name -> testpb.ExampleTable.MapEntry
9, // 4: testpb.ExampleTable.msg:type_name -> testpb.ExampleTable.ExampleMessage
10, // 5: testpb.ExampleTimestamp.ts:type_name -> google.protobuf.Timestamp
11, // 6: testpb.ExampleDuration.dur:type_name -> google.protobuf.Duration
7, // [7:7] is the sub-list for method output_type
7, // [7:7] is the sub-list for method input_type
7, // [7:7] is the sub-list for extension type_name
7, // [7:7] is the sub-list for extension extendee
0, // [0:7] is the sub-list for field type_name
}
func init() { file_testpb_test_schema_proto_init() }
func file_testpb_test_schema_proto_init() {
if File_testpb_test_schema_proto != nil {
return
}
if !protoimpl.UnsafeEnabled {
file_testpb_test_schema_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*ExampleTable); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_testpb_test_schema_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*ExampleAutoIncrementTable); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_testpb_test_schema_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*ExampleSingleton); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_testpb_test_schema_proto_msgTypes[3].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*ExampleTimestamp); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_testpb_test_schema_proto_msgTypes[4].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*ExampleDuration); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_testpb_test_schema_proto_msgTypes[5].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*SimpleExample); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_testpb_test_schema_proto_msgTypes[6].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*ExampleAutoIncFieldName); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_testpb_test_schema_proto_msgTypes[8].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*ExampleTable_ExampleMessage); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
}
file_testpb_test_schema_proto_msgTypes[0].OneofWrappers = []interface{}{
(*ExampleTable_Oneof)(nil),
}
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_testpb_test_schema_proto_rawDesc,
NumEnums: 1,
NumMessages: 9,
NumExtensions: 0,
NumServices: 0,
},
GoTypes: file_testpb_test_schema_proto_goTypes,
DependencyIndexes: file_testpb_test_schema_proto_depIdxs,
EnumInfos: file_testpb_test_schema_proto_enumTypes,
MessageInfos: file_testpb_test_schema_proto_msgTypes,
}.Build()
File_testpb_test_schema_proto = out.File
file_testpb_test_schema_proto_rawDesc = nil
file_testpb_test_schema_proto_goTypes = nil
file_testpb_test_schema_proto_depIdxs = nil
}
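
One property of the generated accessors above worth noting is that they are nil-safe: each getter checks its receiver before dereferencing, so reading from a nil or partially populated message returns the field's zero value. A minimal sketch, not part of the removed file:

// Sketch only: the generated getters return zero values on nil receivers
// instead of panicking, which is why callers can chain them without checks.
func exampleNilSafeGetters() {
    row := &ExampleTable{U32: 7, Str: "abc", E: Enum_ENUM_ONE}
    _ = row.GetU32() // 7
    _ = row.GetTs()  // nil, because ts was never set

    var missing *ExampleTable
    _ = missing.GetStr() // "" even though missing is nil
    _ = missing.GetE()   // Enum_ENUM_UNSPECIFIED
}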

View File

@ -1,142 +0,0 @@
syntax = "proto3";
package testpb;
import "google/protobuf/timestamp.proto";
import "google/protobuf/duration.proto";
import "cosmos/orm/v1/orm.proto";
message ExampleTable {
// clang-format off
option (cosmos.orm.v1.table) = {
id: 1
primary_key: {fields: "u32,i64,str"}
index: {id: 1 fields: "u64,str" unique: true}
index: {id: 2 fields: "str,u32"}
index: {id: 3 fields: "bz,str"}
};
// clang-format on
// Valid key fields:
uint32 u32 = 1;
uint64 u64 = 2;
string str = 3;
bytes bz = 4;
google.protobuf.Timestamp ts = 5;
google.protobuf.Duration dur = 6;
int32 i32 = 7;
sint32 s32 = 8;
sfixed32 sf32 = 9;
int64 i64 = 10;
sint64 s64 = 11;
sfixed64 sf64 = 12;
fixed32 f32 = 13;
fixed64 f64 = 14;
bool b = 15;
Enum e = 16;
// Invalid key fields:
repeated uint32 repeated = 17;
map<string, uint32> map = 18;
ExampleMessage msg = 19;
oneof sum {
uint32 oneof = 20;
}
message ExampleMessage {
string foo = 1;
int32 bar = 2;
}
}
enum Enum {
ENUM_UNSPECIFIED = 0;
ENUM_ONE = 1;
ENUM_TWO = 2;
ENUM_FIVE = 5;
ENUM_NEG_THREE = -3;
}
message ExampleAutoIncrementTable {
option (cosmos.orm.v1.table) = {
id: 3
primary_key: {fields: "id" auto_increment: true}
index: {id: 1 fields: "x" unique: true}
};
uint64 id = 1;
string x = 2;
int32 y = 3;
}
message ExampleSingleton {
option (cosmos.orm.v1.singleton) = {
id: 2
};
string foo = 1;
int32 bar = 2;
}
message ExampleTimestamp {
option (cosmos.orm.v1.table) = {
id: 4
primary_key: {fields: "id" auto_increment: true}
index: {id: 1 fields: "ts"}
};
uint64 id = 1;
string name = 2;
google.protobuf.Timestamp ts = 3;
}
message ExampleDuration {
option (cosmos.orm.v1.table) = {
id: 4
primary_key: {fields: "id" auto_increment: true}
index: {id: 1 fields: "dur"}
};
uint64 id = 1;
string name = 2;
google.protobuf.Duration dur = 3;
}
message SimpleExample {
option (cosmos.orm.v1.table) = {
id: 5
primary_key: {fields: "name"}
index: {id: 1, fields: "unique", unique: true}
};
string name = 1;
string unique = 2;
string not_unique = 3;
}
// ExampleAutoIncFieldName is a table for testing InsertReturning<FieldName>.
message ExampleAutoIncFieldName {
option (cosmos.orm.v1.table) = {
id: 6
primary_key: {fields: "foo" auto_increment: true}
};
uint64 foo = 1;
uint64 bar = 2;
}
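
The tables above that set auto_increment: true on their primary key are the ones exercised by the ORM's InsertReturning<FieldName> methods mentioned in the comment on ExampleAutoIncFieldName. The sketch below illustrates that flow; it was not part of the removed files, and the ExampleAutoIncFieldNameTable interface and InsertReturningFoo method names are assumptions based on that naming pattern, since the generated .cosmos_orm.go bindings are collapsed in this diff.

// Sketch only: inserting into an auto-increment table. Leaving the primary-key
// field (foo) at zero lets the table assign the next sequence value, which the
// assumed InsertReturningFoo method hands back to the caller.
func exampleAutoIncInsert(ctx context.Context, table ExampleAutoIncFieldNameTable) (uint64, error) {
    return table.InsertReturningFoo(ctx, &ExampleAutoIncFieldName{Bar: 42})
}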

File diff suppressed because it is too large

View File

@ -1,517 +0,0 @@
// Code generated by protoc-gen-go-cosmos-orm-proto. DO NOT EDIT.
syntax = "proto3";
package testpb;
import "cosmos/base/query/v1beta1/pagination.proto";
import "google/protobuf/duration.proto";
import "google/protobuf/timestamp.proto";
import "testpb/test_schema.proto";
// TestSchemaQueryService queries the state of the tables specified by testpb/test_schema.proto.
service TestSchemaQueryService {
// Get queries the ExampleTable table by its primary key.
rpc GetExampleTable(GetExampleTableRequest) returns (GetExampleTableResponse) {}
// GetExampleTableByU64Str queries the ExampleTable table by its U64Str index
rpc GetExampleTableByU64Str(GetExampleTableByU64StrRequest) returns (GetExampleTableByU64StrResponse) {}
// ListExampleTable queries the ExampleTable table using prefix and range queries against defined indexes.
rpc ListExampleTable(ListExampleTableRequest) returns (ListExampleTableResponse) {}
// Get queries the ExampleAutoIncrementTable table by its primary key.
rpc GetExampleAutoIncrementTable(GetExampleAutoIncrementTableRequest) returns (GetExampleAutoIncrementTableResponse) {
}
// GetExampleAutoIncrementTableByX queries the ExampleAutoIncrementTable table by its X index
rpc GetExampleAutoIncrementTableByX(GetExampleAutoIncrementTableByXRequest)
returns (GetExampleAutoIncrementTableByXResponse) {}
// ListExampleAutoIncrementTable queries the ExampleAutoIncrementTable table using prefix and range queries against
// defined indexes.
rpc ListExampleAutoIncrementTable(ListExampleAutoIncrementTableRequest)
returns (ListExampleAutoIncrementTableResponse) {}
// GetExampleSingleton queries the ExampleSingleton singleton.
rpc GetExampleSingleton(GetExampleSingletonRequest) returns (GetExampleSingletonResponse) {}
// Get queries the ExampleTimestamp table by its primary key.
rpc GetExampleTimestamp(GetExampleTimestampRequest) returns (GetExampleTimestampResponse) {}
// ListExampleTimestamp queries the ExampleTimestamp table using prefix and range queries against defined indexes.
rpc ListExampleTimestamp(ListExampleTimestampRequest) returns (ListExampleTimestampResponse) {}
// Get queries the ExampleDuration table by its primary key.
rpc GetExampleDuration(GetExampleDurationRequest) returns (GetExampleDurationResponse) {}
// ListExampleDuration queries the ExampleDuration table using prefix and range queries against defined indexes.
rpc ListExampleDuration(ListExampleDurationRequest) returns (ListExampleDurationResponse) {}
// Get queries the SimpleExample table by its primary key.
rpc GetSimpleExample(GetSimpleExampleRequest) returns (GetSimpleExampleResponse) {}
// GetSimpleExampleByUnique queries the SimpleExample table by its Unique index
rpc GetSimpleExampleByUnique(GetSimpleExampleByUniqueRequest) returns (GetSimpleExampleByUniqueResponse) {}
// ListSimpleExample queries the SimpleExample table using prefix and range queries against defined indexes.
rpc ListSimpleExample(ListSimpleExampleRequest) returns (ListSimpleExampleResponse) {}
// Get queries the ExampleAutoIncFieldName table by its primary key.
rpc GetExampleAutoIncFieldName(GetExampleAutoIncFieldNameRequest) returns (GetExampleAutoIncFieldNameResponse) {}
// ListExampleAutoIncFieldName queries the ExampleAutoIncFieldName table using prefix and range queries against
// defined indexes.
rpc ListExampleAutoIncFieldName(ListExampleAutoIncFieldNameRequest) returns (ListExampleAutoIncFieldNameResponse) {}
}
// GetExampleTableRequest is the TestSchemaQuery/GetExampleTableRequest request type.
message GetExampleTableRequest {
// u32 specifies the value of the u32 field in the primary key.
uint32 u32 = 1;
// i64 specifies the value of the i64 field in the primary key.
int64 i64 = 2;
// str specifies the value of the str field in the primary key.
string str = 3;
}
// GetExampleTableResponse is the TestSchemaQuery/GetExampleTableResponse response type.
message GetExampleTableResponse {
// value is the response value.
ExampleTable value = 1;
}
// GetExampleTableByU64StrRequest is the TestSchemaQuery/GetExampleTableByU64StrRequest request type.
message GetExampleTableByU64StrRequest {
uint64 u64 = 1;
string str = 2;
}
// GetExampleTableByU64StrResponse is the TestSchemaQuery/GetExampleTableByU64StrResponse response type.
message GetExampleTableByU64StrResponse {
ExampleTable value = 1;
}
// ListExampleTableRequest is the TestSchemaQuery/ListExampleTableRequest request type.
message ListExampleTableRequest {
// IndexKey specifies the value of an index key to use in prefix and range queries.
message IndexKey {
// key specifies the index key value.
oneof key {
// u_32_i_64_str specifies the value of the U32I64Str index key to use in the query.
U32I64Str u_32_i_64_str = 1;
// u_64_str specifies the value of the U64Str index key to use in the query.
U64Str u_64_str = 2;
// str_u_32 specifies the value of the StrU32 index key to use in the query.
StrU32 str_u_32 = 3;
// bz_str specifies the value of the BzStr index key to use in the query.
BzStr bz_str = 4;
}
message U32I64Str {
// u32 is the value of the u32 field in the index.
// It can be omitted to query for all valid values of that field in this segment of the index.
optional uint32 u32 = 1;
// i64 is the value of the i64 field in the index.
// It can be omitted to query for all valid values of that field in this segment of the index.
optional int64 i64 = 2;
// str is the value of the str field in the index.
// It can be omitted to query for all valid values of that field in this segment of the index.
optional string str = 3;
}
message U64Str {
// u64 is the value of the u64 field in the index.
// It can be omitted to query for all valid values of that field in this segment of the index.
optional uint64 u64 = 1;
// str is the value of the str field in the index.
// It can be omitted to query for all valid values of that field in this segment of the index.
optional string str = 2;
}
message StrU32 {
// str is the value of the str field in the index.
// It can be omitted to query for all valid values of that field in this segment of the index.
optional string str = 1;
// u32 is the value of the u32 field in the index.
// It can be omitted to query for all valid values of that field in this segment of the index.
optional uint32 u32 = 2;
}
message BzStr {
// bz is the value of the bz field in the index.
// It can be omitted to query for all valid values of that field in this segment of the index.
optional bytes bz = 1;
// str is the value of the str field in the index.
// It can be omitted to query for all valid values of that field in this segment of the index.
optional string str = 2;
}
}
// query specifies the type of query - either a prefix or range query.
oneof query {
// prefix_query specifies the index key value to use for the prefix query.
IndexKey prefix_query = 1;
// range_query specifies the index key from/to values to use for the range query.
RangeQuery range_query = 2;
}
// pagination specifies optional pagination parameters.
cosmos.base.query.v1beta1.PageRequest pagination = 3;
// RangeQuery specifies the from/to index keys for a range query.
message RangeQuery {
// from is the index key to use for the start of the range query.
// To query from the start of an index, specify an index key for that index with empty values.
IndexKey from = 1;
// to is the index key to use for the end of the range query.
// The index key type MUST be the same as the index key type used for from.
// To query to the end of an index, it can be omitted.
IndexKey to = 2;
}
}
// ListExampleTableResponse is the TestSchemaQuery/ListExampleTableResponse response type.
message ListExampleTableResponse {
// values are the results of the query.
repeated ExampleTable values = 1;
// pagination is the pagination response.
cosmos.base.query.v1beta1.PageResponse pagination = 2;
}
// GetExampleAutoIncrementTableRequest is the TestSchemaQuery/GetExampleAutoIncrementTableRequest request type.
message GetExampleAutoIncrementTableRequest {
// id specifies the value of the id field in the primary key.
uint64 id = 1;
}
// GetExampleAutoIncrementTableResponse is the TestSchemaQuery/GetExampleAutoIncrementTableResponse response type.
message GetExampleAutoIncrementTableResponse {
// value is the response value.
ExampleAutoIncrementTable value = 1;
}
// GetExampleAutoIncrementTableByXRequest is the TestSchemaQuery/GetExampleAutoIncrementTableByXRequest request type.
message GetExampleAutoIncrementTableByXRequest {
string x = 1;
}
// GetExampleAutoIncrementTableByXResponse is the TestSchemaQuery/GetExampleAutoIncrementTableByXResponse response type.
message GetExampleAutoIncrementTableByXResponse {
ExampleAutoIncrementTable value = 1;
}
// ListExampleAutoIncrementTableRequest is the TestSchemaQuery/ListExampleAutoIncrementTableRequest request type.
message ListExampleAutoIncrementTableRequest {
// IndexKey specifies the value of an index key to use in prefix and range queries.
message IndexKey {
// key specifies the index key value.
oneof key {
// id specifies the value of the Id index key to use in the query.
Id id = 1;
// x specifies the value of the X index key to use in the query.
X x = 2;
}
message Id {
// id is the value of the id field in the index.
// It can be omitted to query for all valid values of that field in this segment of the index.
optional uint64 id = 1;
}
message X {
// x is the value of the x field in the index.
// It can be omitted to query for all valid values of that field in this segment of the index.
optional string x = 1;
}
}
// query specifies the type of query - either a prefix or range query.
oneof query {
// prefix_query specifies the index key value to use for the prefix query.
IndexKey prefix_query = 1;
// range_query specifies the index key from/to values to use for the range query.
RangeQuery range_query = 2;
}
// pagination specifies optional pagination parameters.
cosmos.base.query.v1beta1.PageRequest pagination = 3;
// RangeQuery specifies the from/to index keys for a range query.
message RangeQuery {
// from is the index key to use for the start of the range query.
// To query from the start of an index, specify an index key for that index with empty values.
IndexKey from = 1;
// to is the index key to use for the end of the range query.
// The index key type MUST be the same as the index key type used for from.
// To query to the end of an index, it can be omitted.
IndexKey to = 2;
}
}
// ListExampleAutoIncrementTableResponse is the TestSchemaQuery/ListExampleAutoIncrementTableResponse response type.
message ListExampleAutoIncrementTableResponse {
// values are the results of the query.
repeated ExampleAutoIncrementTable values = 1;
// pagination is the pagination response.
cosmos.base.query.v1beta1.PageResponse pagination = 2;
}
// GetExampleSingletonRequest is the TestSchemaQuery/GetExampleSingletonRequest request type.
message GetExampleSingletonRequest {}
// GetExampleSingletonResponse is the TestSchemaQuery/GetExampleSingletonResponse response type.
message GetExampleSingletonResponse {
ExampleSingleton value = 1;
}
// GetExampleTimestampRequest is the TestSchemaQuery/GetExampleTimestampRequest request type.
message GetExampleTimestampRequest {
// id specifies the value of the id field in the primary key.
uint64 id = 1;
}
// GetExampleTimestampResponse is the TestSchemaQuery/GetExampleTimestampResponse response type.
message GetExampleTimestampResponse {
// value is the response value.
ExampleTimestamp value = 1;
}
// ListExampleTimestampRequest is the TestSchemaQuery/ListExampleTimestampRequest request type.
message ListExampleTimestampRequest {
// IndexKey specifies the value of an index key to use in prefix and range queries.
message IndexKey {
// key specifies the index key value.
oneof key {
// id specifies the value of the Id index key to use in the query.
Id id = 1;
// ts specifies the value of the Ts index key to use in the query.
Ts ts = 2;
}
message Id {
// id is the value of the id field in the index.
// It can be omitted to query for all valid values of that field in this segment of the index.
optional uint64 id = 1;
}
message Ts {
// ts is the value of the ts field in the index.
// It can be omitted to query for all valid values of that field in this segment of the index.
optional google.protobuf.Timestamp ts = 1;
}
}
// query specifies the type of query - either a prefix or range query.
oneof query {
// prefix_query specifies the index key value to use for the prefix query.
IndexKey prefix_query = 1;
// range_query specifies the index key from/to values to use for the range query.
RangeQuery range_query = 2;
}
// pagination specifies optional pagination parameters.
cosmos.base.query.v1beta1.PageRequest pagination = 3;
// RangeQuery specifies the from/to index keys for a range query.
message RangeQuery {
// from is the index key to use for the start of the range query.
// To query from the start of an index, specify an index key for that index with empty values.
IndexKey from = 1;
// to is the index key to use for the end of the range query.
// The index key type MUST be the same as the index key type used for from.
// To query to the end of an index, it can be omitted.
IndexKey to = 2;
}
}
// ListExampleTimestampResponse is the TestSchemaQuery/ListExampleTimestampResponse response type.
message ListExampleTimestampResponse {
// values are the results of the query.
repeated ExampleTimestamp values = 1;
// pagination is the pagination response.
cosmos.base.query.v1beta1.PageResponse pagination = 2;
}
// GetExampleDurationRequest is the TestSchemaQuery/GetExampleDurationRequest request type.
message GetExampleDurationRequest {
// id specifies the value of the id field in the primary key.
uint64 id = 1;
}
// GetExampleDurationResponse is the TestSchemaQuery/GetExampleDurationResponse response type.
message GetExampleDurationResponse {
// value is the response value.
ExampleDuration value = 1;
}
// ListExampleDurationRequest is the TestSchemaQuery/ListExampleDurationRequest request type.
message ListExampleDurationRequest {
// IndexKey specifies the value of an index key to use in prefix and range queries.
message IndexKey {
// key specifies the index key value.
oneof key {
// id specifies the value of the Id index key to use in the query.
Id id = 1;
// dur specifies the value of the Dur index key to use in the query.
Dur dur = 2;
}
message Id {
// id is the value of the id field in the index.
// It can be omitted to query for all valid values of that field in this segment of the index.
optional uint64 id = 1;
}
message Dur {
// dur is the value of the dur field in the index.
// It can be omitted to query for all valid values of that field in this segment of the index.
optional google.protobuf.Duration dur = 1;
}
}
// query specifies the type of query - either a prefix or range query.
oneof query {
// prefix_query specifies the index key value to use for the prefix query.
IndexKey prefix_query = 1;
// range_query specifies the index key from/to values to use for the range query.
RangeQuery range_query = 2;
}
// pagination specifies optional pagination parameters.
cosmos.base.query.v1beta1.PageRequest pagination = 3;
// RangeQuery specifies the from/to index keys for a range query.
message RangeQuery {
// from is the index key to use for the start of the range query.
// To query from the start of an index, specify an index key for that index with empty values.
IndexKey from = 1;
// to is the index key to use for the end of the range query.
// The index key type MUST be the same as the index key type used for from.
// To query to the end of an index, it can be omitted.
IndexKey to = 2;
}
}
// ListExampleDurationResponse is the TestSchemaQuery/ListExampleDurationResponse response type.
message ListExampleDurationResponse {
// values are the results of the query.
repeated ExampleDuration values = 1;
// pagination is the pagination response.
cosmos.base.query.v1beta1.PageResponse pagination = 2;
}
// GetSimpleExampleRequest is the TestSchemaQuery/GetSimpleExampleRequest request type.
message GetSimpleExampleRequest {
// name specifies the value of the name field in the primary key.
string name = 1;
}
// GetSimpleExampleResponse is the TestSchemaQuery/GetSimpleExampleResponse response type.
message GetSimpleExampleResponse {
// value is the response value.
SimpleExample value = 1;
}
// GetSimpleExampleByUniqueRequest is the TestSchemaQuery/GetSimpleExampleByUniqueRequest request type.
message GetSimpleExampleByUniqueRequest {
string unique = 1;
}
// GetSimpleExampleByUniqueResponse is the TestSchemaQuery/GetSimpleExampleByUniqueResponse response type.
message GetSimpleExampleByUniqueResponse {
SimpleExample value = 1;
}
// ListSimpleExampleRequest is the TestSchemaQuery/ListSimpleExampleRequest request type.
message ListSimpleExampleRequest {
// IndexKey specifies the value of an index key to use in prefix and range queries.
message IndexKey {
// key specifies the index key value.
oneof key {
// name specifies the value of the Name index key to use in the query.
Name name = 1;
// unique specifies the value of the Unique index key to use in the query.
Unique unique = 2;
}
message Name {
// name is the value of the name field in the index.
// It can be omitted to query for all valid values of that field in this segment of the index.
optional string name = 1;
}
message Unique {
// unique is the value of the unique field in the index.
// It can be omitted to query for all valid values of that field in this segment of the index.
optional string unique = 1;
}
}
// query specifies the type of query - either a prefix or range query.
oneof query {
// prefix_query specifies the index key value to use for the prefix query.
IndexKey prefix_query = 1;
// range_query specifies the index key from/to values to use for the range query.
RangeQuery range_query = 2;
}
// pagination specifies optional pagination parameters.
cosmos.base.query.v1beta1.PageRequest pagination = 3;
// RangeQuery specifies the from/to index keys for a range query.
message RangeQuery {
// from is the index key to use for the start of the range query.
// To query from the start of an index, specify an index key for that index with empty values.
IndexKey from = 1;
// to is the index key to use for the end of the range query.
// The index key type MUST be the same as the index key type used for from.
// To query to the end of an index, it can be omitted.
IndexKey to = 2;
}
}
// ListSimpleExampleResponse is the TestSchemaQuery/ListSimpleExampleResponse response type.
message ListSimpleExampleResponse {
// values are the results of the query.
repeated SimpleExample values = 1;
// pagination is the pagination response.
cosmos.base.query.v1beta1.PageResponse pagination = 2;
}
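The IndexKey / RangeQuery pattern documented above is how every generated List RPC exposes the ORM's prefix and range scans. Below is a minimal, hypothetical sketch (not part of the removed files) of building a prefix query over the Name index of SimpleExample. It uses the protobuf text format so that only the proto field names defined above are assumed, and it compiles only inside this module because testpb is an internal package. A range query would instead fill range_query with from/to index keys, leaving to unset to scan to the end of the index.

package main

import (
	"fmt"

	"google.golang.org/protobuf/encoding/prototext"

	"cosmossdk.io/orm/internal/testpb"
)

func main() {
	// Fill the request via the text format so no generated Go oneof wrapper
	// names need to be assumed; the field names below come straight from the
	// ListSimpleExampleRequest definition above.
	var req testpb.ListSimpleExampleRequest
	err := prototext.Unmarshal([]byte(`
		prefix_query: { name: { name: "bob" } }
		pagination: { limit: 10 }
	`), &req)
	if err != nil {
		panic(err)
	}
	fmt.Println(req.String())
}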
// GetExampleAutoIncFieldNameRequest is the TestSchemaQuery/GetExampleAutoIncFieldNameRequest request type.
message GetExampleAutoIncFieldNameRequest {
// foo specifies the value of the foo field in the primary key.
uint64 foo = 1;
}
// GetExampleAutoIncFieldNameResponse is the TestSchemaQuery/GetExampleAutoIncFieldNameResponse response type.
message GetExampleAutoIncFieldNameResponse {
// value is the response value.
ExampleAutoIncFieldName value = 1;
}
// ListExampleAutoIncFieldNameRequest is the TestSchemaQuery/ListExampleAutoIncFieldNameRequest request type.
message ListExampleAutoIncFieldNameRequest {
// IndexKey specifies the value of an index key to use in prefix and range queries.
message IndexKey {
// key specifies the index key value.
oneof key {
// foo specifies the value of the Foo index key to use in the query.
Foo foo = 1;
}
message Foo {
// foo is the value of the foo field in the index.
// It can be omitted to query for all valid values of that field in this segment of the index.
optional uint64 foo = 1;
}
}
// query specifies the type of query - either a prefix or range query.
oneof query {
// prefix_query specifies the index key value to use for the prefix query.
IndexKey prefix_query = 1;
// range_query specifies the index key from/to values to use for the range query.
RangeQuery range_query = 2;
}
// pagination specifies optional pagination parameters.
cosmos.base.query.v1beta1.PageRequest pagination = 3;
// RangeQuery specifies the from/to index keys for a range query.
message RangeQuery {
// from is the index key to use for the start of the range query.
// To query from the start of an index, specify an index key for that index with empty values.
IndexKey from = 1;
// to is the index key to use for the end of the range query.
// The index key type MUST be the same as the index key type used for from.
// To query from to the end of an index it can be omitted.
IndexKey to = 2;
}
}
// ListExampleAutoIncFieldNameResponse is the TestSchemaQuery/ListExampleAutoIncFieldNameResponse response type.
message ListExampleAutoIncFieldNameResponse {
// values are the results of the query.
repeated ExampleAutoIncFieldName values = 1;
// pagination is the pagination response.
cosmos.base.query.v1beta1.PageResponse pagination = 2;
}

View File

@ -1,699 +0,0 @@
// Code generated by protoc-gen-go-cosmos-orm-proto. DO NOT EDIT.
// Code generated by protoc-gen-go-grpc. DO NOT EDIT.
// versions:
// - protoc-gen-go-grpc v1.3.0
// - protoc (unknown)
// source: testpb/test_schema_query.proto
package testpb
import (
context "context"
grpc "google.golang.org/grpc"
codes "google.golang.org/grpc/codes"
status "google.golang.org/grpc/status"
)
// This is a compile-time assertion to ensure that this generated file
// is compatible with the grpc package it is being compiled against.
// Requires gRPC-Go v1.32.0 or later.
const _ = grpc.SupportPackageIsVersion7
const (
TestSchemaQueryService_GetExampleTable_FullMethodName = "/testpb.TestSchemaQueryService/GetExampleTable"
TestSchemaQueryService_GetExampleTableByU64Str_FullMethodName = "/testpb.TestSchemaQueryService/GetExampleTableByU64Str"
TestSchemaQueryService_ListExampleTable_FullMethodName = "/testpb.TestSchemaQueryService/ListExampleTable"
TestSchemaQueryService_GetExampleAutoIncrementTable_FullMethodName = "/testpb.TestSchemaQueryService/GetExampleAutoIncrementTable"
TestSchemaQueryService_GetExampleAutoIncrementTableByX_FullMethodName = "/testpb.TestSchemaQueryService/GetExampleAutoIncrementTableByX"
TestSchemaQueryService_ListExampleAutoIncrementTable_FullMethodName = "/testpb.TestSchemaQueryService/ListExampleAutoIncrementTable"
TestSchemaQueryService_GetExampleSingleton_FullMethodName = "/testpb.TestSchemaQueryService/GetExampleSingleton"
TestSchemaQueryService_GetExampleTimestamp_FullMethodName = "/testpb.TestSchemaQueryService/GetExampleTimestamp"
TestSchemaQueryService_ListExampleTimestamp_FullMethodName = "/testpb.TestSchemaQueryService/ListExampleTimestamp"
TestSchemaQueryService_GetExampleDuration_FullMethodName = "/testpb.TestSchemaQueryService/GetExampleDuration"
TestSchemaQueryService_ListExampleDuration_FullMethodName = "/testpb.TestSchemaQueryService/ListExampleDuration"
TestSchemaQueryService_GetSimpleExample_FullMethodName = "/testpb.TestSchemaQueryService/GetSimpleExample"
TestSchemaQueryService_GetSimpleExampleByUnique_FullMethodName = "/testpb.TestSchemaQueryService/GetSimpleExampleByUnique"
TestSchemaQueryService_ListSimpleExample_FullMethodName = "/testpb.TestSchemaQueryService/ListSimpleExample"
TestSchemaQueryService_GetExampleAutoIncFieldName_FullMethodName = "/testpb.TestSchemaQueryService/GetExampleAutoIncFieldName"
TestSchemaQueryService_ListExampleAutoIncFieldName_FullMethodName = "/testpb.TestSchemaQueryService/ListExampleAutoIncFieldName"
)
// TestSchemaQueryServiceClient is the client API for TestSchemaQueryService service.
//
// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://pkg.go.dev/google.golang.org/grpc/?tab=doc#ClientConn.NewStream.
type TestSchemaQueryServiceClient interface {
// Get queries the ExampleTable table by its primary key.
GetExampleTable(ctx context.Context, in *GetExampleTableRequest, opts ...grpc.CallOption) (*GetExampleTableResponse, error)
// GetExampleTableByU64Str queries the ExampleTable table by its U64Str index
GetExampleTableByU64Str(ctx context.Context, in *GetExampleTableByU64StrRequest, opts ...grpc.CallOption) (*GetExampleTableByU64StrResponse, error)
// ListExampleTable queries the ExampleTable table using prefix and range queries against defined indexes.
ListExampleTable(ctx context.Context, in *ListExampleTableRequest, opts ...grpc.CallOption) (*ListExampleTableResponse, error)
// Get queries the ExampleAutoIncrementTable table by its primary key.
GetExampleAutoIncrementTable(ctx context.Context, in *GetExampleAutoIncrementTableRequest, opts ...grpc.CallOption) (*GetExampleAutoIncrementTableResponse, error)
// GetExampleAutoIncrementTableByX queries the ExampleAutoIncrementTable table by its X index
GetExampleAutoIncrementTableByX(ctx context.Context, in *GetExampleAutoIncrementTableByXRequest, opts ...grpc.CallOption) (*GetExampleAutoIncrementTableByXResponse, error)
// ListExampleAutoIncrementTable queries the ExampleAutoIncrementTable table using prefix and range queries against defined indexes.
ListExampleAutoIncrementTable(ctx context.Context, in *ListExampleAutoIncrementTableRequest, opts ...grpc.CallOption) (*ListExampleAutoIncrementTableResponse, error)
// GetExampleSingleton queries the ExampleSingleton singleton.
GetExampleSingleton(ctx context.Context, in *GetExampleSingletonRequest, opts ...grpc.CallOption) (*GetExampleSingletonResponse, error)
// Get queries the ExampleTimestamp table by its primary key.
GetExampleTimestamp(ctx context.Context, in *GetExampleTimestampRequest, opts ...grpc.CallOption) (*GetExampleTimestampResponse, error)
// ListExampleTimestamp queries the ExampleTimestamp table using prefix and range queries against defined indexes.
ListExampleTimestamp(ctx context.Context, in *ListExampleTimestampRequest, opts ...grpc.CallOption) (*ListExampleTimestampResponse, error)
// Get queries the ExampleDuration table by its primary key.
GetExampleDuration(ctx context.Context, in *GetExampleDurationRequest, opts ...grpc.CallOption) (*GetExampleDurationResponse, error)
// ListExampleDuration queries the ExampleDuration table using prefix and range queries against defined indexes.
ListExampleDuration(ctx context.Context, in *ListExampleDurationRequest, opts ...grpc.CallOption) (*ListExampleDurationResponse, error)
// Get queries the SimpleExample table by its primary key.
GetSimpleExample(ctx context.Context, in *GetSimpleExampleRequest, opts ...grpc.CallOption) (*GetSimpleExampleResponse, error)
// GetSimpleExampleByUnique queries the SimpleExample table by its Unique index
GetSimpleExampleByUnique(ctx context.Context, in *GetSimpleExampleByUniqueRequest, opts ...grpc.CallOption) (*GetSimpleExampleByUniqueResponse, error)
// ListSimpleExample queries the SimpleExample table using prefix and range queries against defined indexes.
ListSimpleExample(ctx context.Context, in *ListSimpleExampleRequest, opts ...grpc.CallOption) (*ListSimpleExampleResponse, error)
// Get queries the ExampleAutoIncFieldName table by its primary key.
GetExampleAutoIncFieldName(ctx context.Context, in *GetExampleAutoIncFieldNameRequest, opts ...grpc.CallOption) (*GetExampleAutoIncFieldNameResponse, error)
// ListExampleAutoIncFieldName queries the ExampleAutoIncFieldName table using prefix and range queries against defined indexes.
ListExampleAutoIncFieldName(ctx context.Context, in *ListExampleAutoIncFieldNameRequest, opts ...grpc.CallOption) (*ListExampleAutoIncFieldNameResponse, error)
}
type testSchemaQueryServiceClient struct {
cc grpc.ClientConnInterface
}
func NewTestSchemaQueryServiceClient(cc grpc.ClientConnInterface) TestSchemaQueryServiceClient {
return &testSchemaQueryServiceClient{cc}
}
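The constructor above simply wraps a grpc.ClientConnInterface; each generated method is a unary Invoke against one of the full method name constants listed earlier. A minimal, hypothetical usage sketch (not part of the removed files; the address is made up, and testpb is internal to this module):

package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	"cosmossdk.io/orm/internal/testpb"
)

func main() {
	// "localhost:9090" is a hypothetical address of a server that registered
	// TestSchemaQueryService (see RegisterTestSchemaQueryServiceServer below).
	conn, err := grpc.Dial("localhost:9090", grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := testpb.NewTestSchemaQueryServiceClient(conn)
	resp, err := client.GetSimpleExample(context.Background(), &testpb.GetSimpleExampleRequest{Name: "bob"})
	if err != nil {
		log.Fatal(err)
	}
	log.Println(resp.GetValue())
}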
func (c *testSchemaQueryServiceClient) GetExampleTable(ctx context.Context, in *GetExampleTableRequest, opts ...grpc.CallOption) (*GetExampleTableResponse, error) {
out := new(GetExampleTableResponse)
err := c.cc.Invoke(ctx, TestSchemaQueryService_GetExampleTable_FullMethodName, in, out, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *testSchemaQueryServiceClient) GetExampleTableByU64Str(ctx context.Context, in *GetExampleTableByU64StrRequest, opts ...grpc.CallOption) (*GetExampleTableByU64StrResponse, error) {
out := new(GetExampleTableByU64StrResponse)
err := c.cc.Invoke(ctx, TestSchemaQueryService_GetExampleTableByU64Str_FullMethodName, in, out, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *testSchemaQueryServiceClient) ListExampleTable(ctx context.Context, in *ListExampleTableRequest, opts ...grpc.CallOption) (*ListExampleTableResponse, error) {
out := new(ListExampleTableResponse)
err := c.cc.Invoke(ctx, TestSchemaQueryService_ListExampleTable_FullMethodName, in, out, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *testSchemaQueryServiceClient) GetExampleAutoIncrementTable(ctx context.Context, in *GetExampleAutoIncrementTableRequest, opts ...grpc.CallOption) (*GetExampleAutoIncrementTableResponse, error) {
out := new(GetExampleAutoIncrementTableResponse)
err := c.cc.Invoke(ctx, TestSchemaQueryService_GetExampleAutoIncrementTable_FullMethodName, in, out, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *testSchemaQueryServiceClient) GetExampleAutoIncrementTableByX(ctx context.Context, in *GetExampleAutoIncrementTableByXRequest, opts ...grpc.CallOption) (*GetExampleAutoIncrementTableByXResponse, error) {
out := new(GetExampleAutoIncrementTableByXResponse)
err := c.cc.Invoke(ctx, TestSchemaQueryService_GetExampleAutoIncrementTableByX_FullMethodName, in, out, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *testSchemaQueryServiceClient) ListExampleAutoIncrementTable(ctx context.Context, in *ListExampleAutoIncrementTableRequest, opts ...grpc.CallOption) (*ListExampleAutoIncrementTableResponse, error) {
out := new(ListExampleAutoIncrementTableResponse)
err := c.cc.Invoke(ctx, TestSchemaQueryService_ListExampleAutoIncrementTable_FullMethodName, in, out, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *testSchemaQueryServiceClient) GetExampleSingleton(ctx context.Context, in *GetExampleSingletonRequest, opts ...grpc.CallOption) (*GetExampleSingletonResponse, error) {
out := new(GetExampleSingletonResponse)
err := c.cc.Invoke(ctx, TestSchemaQueryService_GetExampleSingleton_FullMethodName, in, out, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *testSchemaQueryServiceClient) GetExampleTimestamp(ctx context.Context, in *GetExampleTimestampRequest, opts ...grpc.CallOption) (*GetExampleTimestampResponse, error) {
out := new(GetExampleTimestampResponse)
err := c.cc.Invoke(ctx, TestSchemaQueryService_GetExampleTimestamp_FullMethodName, in, out, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *testSchemaQueryServiceClient) ListExampleTimestamp(ctx context.Context, in *ListExampleTimestampRequest, opts ...grpc.CallOption) (*ListExampleTimestampResponse, error) {
out := new(ListExampleTimestampResponse)
err := c.cc.Invoke(ctx, TestSchemaQueryService_ListExampleTimestamp_FullMethodName, in, out, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *testSchemaQueryServiceClient) GetExampleDuration(ctx context.Context, in *GetExampleDurationRequest, opts ...grpc.CallOption) (*GetExampleDurationResponse, error) {
out := new(GetExampleDurationResponse)
err := c.cc.Invoke(ctx, TestSchemaQueryService_GetExampleDuration_FullMethodName, in, out, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *testSchemaQueryServiceClient) ListExampleDuration(ctx context.Context, in *ListExampleDurationRequest, opts ...grpc.CallOption) (*ListExampleDurationResponse, error) {
out := new(ListExampleDurationResponse)
err := c.cc.Invoke(ctx, TestSchemaQueryService_ListExampleDuration_FullMethodName, in, out, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *testSchemaQueryServiceClient) GetSimpleExample(ctx context.Context, in *GetSimpleExampleRequest, opts ...grpc.CallOption) (*GetSimpleExampleResponse, error) {
out := new(GetSimpleExampleResponse)
err := c.cc.Invoke(ctx, TestSchemaQueryService_GetSimpleExample_FullMethodName, in, out, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *testSchemaQueryServiceClient) GetSimpleExampleByUnique(ctx context.Context, in *GetSimpleExampleByUniqueRequest, opts ...grpc.CallOption) (*GetSimpleExampleByUniqueResponse, error) {
out := new(GetSimpleExampleByUniqueResponse)
err := c.cc.Invoke(ctx, TestSchemaQueryService_GetSimpleExampleByUnique_FullMethodName, in, out, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *testSchemaQueryServiceClient) ListSimpleExample(ctx context.Context, in *ListSimpleExampleRequest, opts ...grpc.CallOption) (*ListSimpleExampleResponse, error) {
out := new(ListSimpleExampleResponse)
err := c.cc.Invoke(ctx, TestSchemaQueryService_ListSimpleExample_FullMethodName, in, out, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *testSchemaQueryServiceClient) GetExampleAutoIncFieldName(ctx context.Context, in *GetExampleAutoIncFieldNameRequest, opts ...grpc.CallOption) (*GetExampleAutoIncFieldNameResponse, error) {
out := new(GetExampleAutoIncFieldNameResponse)
err := c.cc.Invoke(ctx, TestSchemaQueryService_GetExampleAutoIncFieldName_FullMethodName, in, out, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *testSchemaQueryServiceClient) ListExampleAutoIncFieldName(ctx context.Context, in *ListExampleAutoIncFieldNameRequest, opts ...grpc.CallOption) (*ListExampleAutoIncFieldNameResponse, error) {
out := new(ListExampleAutoIncFieldNameResponse)
err := c.cc.Invoke(ctx, TestSchemaQueryService_ListExampleAutoIncFieldName_FullMethodName, in, out, opts...)
if err != nil {
return nil, err
}
return out, nil
}
// TestSchemaQueryServiceServer is the server API for TestSchemaQueryService service.
// All implementations must embed UnimplementedTestSchemaQueryServiceServer
// for forward compatibility
type TestSchemaQueryServiceServer interface {
// Get queries the ExampleTable table by its primary key.
GetExampleTable(context.Context, *GetExampleTableRequest) (*GetExampleTableResponse, error)
// GetExampleTableByU64Str queries the ExampleTable table by its U64Str index
GetExampleTableByU64Str(context.Context, *GetExampleTableByU64StrRequest) (*GetExampleTableByU64StrResponse, error)
// ListExampleTable queries the ExampleTable table using prefix and range queries against defined indexes.
ListExampleTable(context.Context, *ListExampleTableRequest) (*ListExampleTableResponse, error)
// Get queries the ExampleAutoIncrementTable table by its primary key.
GetExampleAutoIncrementTable(context.Context, *GetExampleAutoIncrementTableRequest) (*GetExampleAutoIncrementTableResponse, error)
// GetExampleAutoIncrementTableByX queries the ExampleAutoIncrementTable table by its X index
GetExampleAutoIncrementTableByX(context.Context, *GetExampleAutoIncrementTableByXRequest) (*GetExampleAutoIncrementTableByXResponse, error)
// ListExampleAutoIncrementTable queries the ExampleAutoIncrementTable table using prefix and range queries against defined indexes.
ListExampleAutoIncrementTable(context.Context, *ListExampleAutoIncrementTableRequest) (*ListExampleAutoIncrementTableResponse, error)
// GetExampleSingleton queries the ExampleSingleton singleton.
GetExampleSingleton(context.Context, *GetExampleSingletonRequest) (*GetExampleSingletonResponse, error)
// Get queries the ExampleTimestamp table by its primary key.
GetExampleTimestamp(context.Context, *GetExampleTimestampRequest) (*GetExampleTimestampResponse, error)
// ListExampleTimestamp queries the ExampleTimestamp table using prefix and range queries against defined indexes.
ListExampleTimestamp(context.Context, *ListExampleTimestampRequest) (*ListExampleTimestampResponse, error)
// Get queries the ExampleDuration table by its primary key.
GetExampleDuration(context.Context, *GetExampleDurationRequest) (*GetExampleDurationResponse, error)
// ListExampleDuration queries the ExampleDuration table using prefix and range queries against defined indexes.
ListExampleDuration(context.Context, *ListExampleDurationRequest) (*ListExampleDurationResponse, error)
// Get queries the SimpleExample table by its primary key.
GetSimpleExample(context.Context, *GetSimpleExampleRequest) (*GetSimpleExampleResponse, error)
// GetSimpleExampleByUnique queries the SimpleExample table by its Unique index
GetSimpleExampleByUnique(context.Context, *GetSimpleExampleByUniqueRequest) (*GetSimpleExampleByUniqueResponse, error)
// ListSimpleExample queries the SimpleExample table using prefix and range queries against defined indexes.
ListSimpleExample(context.Context, *ListSimpleExampleRequest) (*ListSimpleExampleResponse, error)
// Get queries the ExampleAutoIncFieldName table by its primary key.
GetExampleAutoIncFieldName(context.Context, *GetExampleAutoIncFieldNameRequest) (*GetExampleAutoIncFieldNameResponse, error)
// ListExampleAutoIncFieldName queries the ExampleAutoIncFieldName table using prefix and range queries against defined indexes.
ListExampleAutoIncFieldName(context.Context, *ListExampleAutoIncFieldNameRequest) (*ListExampleAutoIncFieldNameResponse, error)
mustEmbedUnimplementedTestSchemaQueryServiceServer()
}
// UnimplementedTestSchemaQueryServiceServer must be embedded to have forward compatible implementations.
type UnimplementedTestSchemaQueryServiceServer struct {
}
func (UnimplementedTestSchemaQueryServiceServer) GetExampleTable(context.Context, *GetExampleTableRequest) (*GetExampleTableResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method GetExampleTable not implemented")
}
func (UnimplementedTestSchemaQueryServiceServer) GetExampleTableByU64Str(context.Context, *GetExampleTableByU64StrRequest) (*GetExampleTableByU64StrResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method GetExampleTableByU64Str not implemented")
}
func (UnimplementedTestSchemaQueryServiceServer) ListExampleTable(context.Context, *ListExampleTableRequest) (*ListExampleTableResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method ListExampleTable not implemented")
}
func (UnimplementedTestSchemaQueryServiceServer) GetExampleAutoIncrementTable(context.Context, *GetExampleAutoIncrementTableRequest) (*GetExampleAutoIncrementTableResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method GetExampleAutoIncrementTable not implemented")
}
func (UnimplementedTestSchemaQueryServiceServer) GetExampleAutoIncrementTableByX(context.Context, *GetExampleAutoIncrementTableByXRequest) (*GetExampleAutoIncrementTableByXResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method GetExampleAutoIncrementTableByX not implemented")
}
func (UnimplementedTestSchemaQueryServiceServer) ListExampleAutoIncrementTable(context.Context, *ListExampleAutoIncrementTableRequest) (*ListExampleAutoIncrementTableResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method ListExampleAutoIncrementTable not implemented")
}
func (UnimplementedTestSchemaQueryServiceServer) GetExampleSingleton(context.Context, *GetExampleSingletonRequest) (*GetExampleSingletonResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method GetExampleSingleton not implemented")
}
func (UnimplementedTestSchemaQueryServiceServer) GetExampleTimestamp(context.Context, *GetExampleTimestampRequest) (*GetExampleTimestampResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method GetExampleTimestamp not implemented")
}
func (UnimplementedTestSchemaQueryServiceServer) ListExampleTimestamp(context.Context, *ListExampleTimestampRequest) (*ListExampleTimestampResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method ListExampleTimestamp not implemented")
}
func (UnimplementedTestSchemaQueryServiceServer) GetExampleDuration(context.Context, *GetExampleDurationRequest) (*GetExampleDurationResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method GetExampleDuration not implemented")
}
func (UnimplementedTestSchemaQueryServiceServer) ListExampleDuration(context.Context, *ListExampleDurationRequest) (*ListExampleDurationResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method ListExampleDuration not implemented")
}
func (UnimplementedTestSchemaQueryServiceServer) GetSimpleExample(context.Context, *GetSimpleExampleRequest) (*GetSimpleExampleResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method GetSimpleExample not implemented")
}
func (UnimplementedTestSchemaQueryServiceServer) GetSimpleExampleByUnique(context.Context, *GetSimpleExampleByUniqueRequest) (*GetSimpleExampleByUniqueResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method GetSimpleExampleByUnique not implemented")
}
func (UnimplementedTestSchemaQueryServiceServer) ListSimpleExample(context.Context, *ListSimpleExampleRequest) (*ListSimpleExampleResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method ListSimpleExample not implemented")
}
func (UnimplementedTestSchemaQueryServiceServer) GetExampleAutoIncFieldName(context.Context, *GetExampleAutoIncFieldNameRequest) (*GetExampleAutoIncFieldNameResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method GetExampleAutoIncFieldName not implemented")
}
func (UnimplementedTestSchemaQueryServiceServer) ListExampleAutoIncFieldName(context.Context, *ListExampleAutoIncFieldNameRequest) (*ListExampleAutoIncFieldNameResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method ListExampleAutoIncFieldName not implemented")
}
func (UnimplementedTestSchemaQueryServiceServer) mustEmbedUnimplementedTestSchemaQueryServiceServer() {
}
// UnsafeTestSchemaQueryServiceServer may be embedded to opt out of forward compatibility for this service.
// Use of this interface is not recommended, as added methods to TestSchemaQueryServiceServer will
// result in compilation errors.
type UnsafeTestSchemaQueryServiceServer interface {
mustEmbedUnimplementedTestSchemaQueryServiceServer()
}
func RegisterTestSchemaQueryServiceServer(s grpc.ServiceRegistrar, srv TestSchemaQueryServiceServer) {
s.RegisterService(&TestSchemaQueryService_ServiceDesc, srv)
}
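As the comments above note, server implementations embed UnimplementedTestSchemaQueryServiceServer for forward compatibility, override only the RPCs they actually serve, and register themselves with RegisterTestSchemaQueryServiceServer. A minimal, hypothetical sketch (not part of the removed files; the address is made up, and testpb is internal to this module):

package main

import (
	"context"
	"log"
	"net"

	"google.golang.org/grpc"

	"cosmossdk.io/orm/internal/testpb"
)

// exampleServer embeds the Unimplemented server so that RPCs added to the
// service later do not break compilation; only GetSimpleExample is overridden.
type exampleServer struct {
	testpb.UnimplementedTestSchemaQueryServiceServer
}

func (exampleServer) GetSimpleExample(_ context.Context, req *testpb.GetSimpleExampleRequest) (*testpb.GetSimpleExampleResponse, error) {
	// Echo a SimpleExample keyed by the requested name; a real server would
	// read it from the ORM table instead.
	return &testpb.GetSimpleExampleResponse{Value: &testpb.SimpleExample{Name: req.Name}}, nil
}

func main() {
	lis, err := net.Listen("tcp", "localhost:9090") // hypothetical address
	if err != nil {
		log.Fatal(err)
	}
	s := grpc.NewServer()
	testpb.RegisterTestSchemaQueryServiceServer(s, exampleServer{})
	log.Fatal(s.Serve(lis))
}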
func _TestSchemaQueryService_GetExampleTable_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(GetExampleTableRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(TestSchemaQueryServiceServer).GetExampleTable(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: TestSchemaQueryService_GetExampleTable_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(TestSchemaQueryServiceServer).GetExampleTable(ctx, req.(*GetExampleTableRequest))
}
return interceptor(ctx, in, info, handler)
}
func _TestSchemaQueryService_GetExampleTableByU64Str_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(GetExampleTableByU64StrRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(TestSchemaQueryServiceServer).GetExampleTableByU64Str(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: TestSchemaQueryService_GetExampleTableByU64Str_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(TestSchemaQueryServiceServer).GetExampleTableByU64Str(ctx, req.(*GetExampleTableByU64StrRequest))
}
return interceptor(ctx, in, info, handler)
}
func _TestSchemaQueryService_ListExampleTable_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(ListExampleTableRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(TestSchemaQueryServiceServer).ListExampleTable(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: TestSchemaQueryService_ListExampleTable_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(TestSchemaQueryServiceServer).ListExampleTable(ctx, req.(*ListExampleTableRequest))
}
return interceptor(ctx, in, info, handler)
}
func _TestSchemaQueryService_GetExampleAutoIncrementTable_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(GetExampleAutoIncrementTableRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(TestSchemaQueryServiceServer).GetExampleAutoIncrementTable(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: TestSchemaQueryService_GetExampleAutoIncrementTable_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(TestSchemaQueryServiceServer).GetExampleAutoIncrementTable(ctx, req.(*GetExampleAutoIncrementTableRequest))
}
return interceptor(ctx, in, info, handler)
}
func _TestSchemaQueryService_GetExampleAutoIncrementTableByX_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(GetExampleAutoIncrementTableByXRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(TestSchemaQueryServiceServer).GetExampleAutoIncrementTableByX(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: TestSchemaQueryService_GetExampleAutoIncrementTableByX_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(TestSchemaQueryServiceServer).GetExampleAutoIncrementTableByX(ctx, req.(*GetExampleAutoIncrementTableByXRequest))
}
return interceptor(ctx, in, info, handler)
}
func _TestSchemaQueryService_ListExampleAutoIncrementTable_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(ListExampleAutoIncrementTableRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(TestSchemaQueryServiceServer).ListExampleAutoIncrementTable(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: TestSchemaQueryService_ListExampleAutoIncrementTable_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(TestSchemaQueryServiceServer).ListExampleAutoIncrementTable(ctx, req.(*ListExampleAutoIncrementTableRequest))
}
return interceptor(ctx, in, info, handler)
}
func _TestSchemaQueryService_GetExampleSingleton_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(GetExampleSingletonRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(TestSchemaQueryServiceServer).GetExampleSingleton(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: TestSchemaQueryService_GetExampleSingleton_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(TestSchemaQueryServiceServer).GetExampleSingleton(ctx, req.(*GetExampleSingletonRequest))
}
return interceptor(ctx, in, info, handler)
}
func _TestSchemaQueryService_GetExampleTimestamp_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(GetExampleTimestampRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(TestSchemaQueryServiceServer).GetExampleTimestamp(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: TestSchemaQueryService_GetExampleTimestamp_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(TestSchemaQueryServiceServer).GetExampleTimestamp(ctx, req.(*GetExampleTimestampRequest))
}
return interceptor(ctx, in, info, handler)
}
func _TestSchemaQueryService_ListExampleTimestamp_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(ListExampleTimestampRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(TestSchemaQueryServiceServer).ListExampleTimestamp(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: TestSchemaQueryService_ListExampleTimestamp_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(TestSchemaQueryServiceServer).ListExampleTimestamp(ctx, req.(*ListExampleTimestampRequest))
}
return interceptor(ctx, in, info, handler)
}
func _TestSchemaQueryService_GetExampleDuration_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(GetExampleDurationRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(TestSchemaQueryServiceServer).GetExampleDuration(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: TestSchemaQueryService_GetExampleDuration_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(TestSchemaQueryServiceServer).GetExampleDuration(ctx, req.(*GetExampleDurationRequest))
}
return interceptor(ctx, in, info, handler)
}
func _TestSchemaQueryService_ListExampleDuration_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(ListExampleDurationRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(TestSchemaQueryServiceServer).ListExampleDuration(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: TestSchemaQueryService_ListExampleDuration_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(TestSchemaQueryServiceServer).ListExampleDuration(ctx, req.(*ListExampleDurationRequest))
}
return interceptor(ctx, in, info, handler)
}
func _TestSchemaQueryService_GetSimpleExample_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(GetSimpleExampleRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(TestSchemaQueryServiceServer).GetSimpleExample(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: TestSchemaQueryService_GetSimpleExample_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(TestSchemaQueryServiceServer).GetSimpleExample(ctx, req.(*GetSimpleExampleRequest))
}
return interceptor(ctx, in, info, handler)
}
func _TestSchemaQueryService_GetSimpleExampleByUnique_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(GetSimpleExampleByUniqueRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(TestSchemaQueryServiceServer).GetSimpleExampleByUnique(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: TestSchemaQueryService_GetSimpleExampleByUnique_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(TestSchemaQueryServiceServer).GetSimpleExampleByUnique(ctx, req.(*GetSimpleExampleByUniqueRequest))
}
return interceptor(ctx, in, info, handler)
}
func _TestSchemaQueryService_ListSimpleExample_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(ListSimpleExampleRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(TestSchemaQueryServiceServer).ListSimpleExample(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: TestSchemaQueryService_ListSimpleExample_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(TestSchemaQueryServiceServer).ListSimpleExample(ctx, req.(*ListSimpleExampleRequest))
}
return interceptor(ctx, in, info, handler)
}
func _TestSchemaQueryService_GetExampleAutoIncFieldName_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(GetExampleAutoIncFieldNameRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(TestSchemaQueryServiceServer).GetExampleAutoIncFieldName(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: TestSchemaQueryService_GetExampleAutoIncFieldName_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(TestSchemaQueryServiceServer).GetExampleAutoIncFieldName(ctx, req.(*GetExampleAutoIncFieldNameRequest))
}
return interceptor(ctx, in, info, handler)
}
func _TestSchemaQueryService_ListExampleAutoIncFieldName_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(ListExampleAutoIncFieldNameRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(TestSchemaQueryServiceServer).ListExampleAutoIncFieldName(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: TestSchemaQueryService_ListExampleAutoIncFieldName_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(TestSchemaQueryServiceServer).ListExampleAutoIncFieldName(ctx, req.(*ListExampleAutoIncFieldNameRequest))
}
return interceptor(ctx, in, info, handler)
}
// TestSchemaQueryService_ServiceDesc is the grpc.ServiceDesc for TestSchemaQueryService service.
// It's only intended for direct use with grpc.RegisterService,
// and not to be introspected or modified (even as a copy)
var TestSchemaQueryService_ServiceDesc = grpc.ServiceDesc{
ServiceName: "testpb.TestSchemaQueryService",
HandlerType: (*TestSchemaQueryServiceServer)(nil),
Methods: []grpc.MethodDesc{
{
MethodName: "GetExampleTable",
Handler: _TestSchemaQueryService_GetExampleTable_Handler,
},
{
MethodName: "GetExampleTableByU64Str",
Handler: _TestSchemaQueryService_GetExampleTableByU64Str_Handler,
},
{
MethodName: "ListExampleTable",
Handler: _TestSchemaQueryService_ListExampleTable_Handler,
},
{
MethodName: "GetExampleAutoIncrementTable",
Handler: _TestSchemaQueryService_GetExampleAutoIncrementTable_Handler,
},
{
MethodName: "GetExampleAutoIncrementTableByX",
Handler: _TestSchemaQueryService_GetExampleAutoIncrementTableByX_Handler,
},
{
MethodName: "ListExampleAutoIncrementTable",
Handler: _TestSchemaQueryService_ListExampleAutoIncrementTable_Handler,
},
{
MethodName: "GetExampleSingleton",
Handler: _TestSchemaQueryService_GetExampleSingleton_Handler,
},
{
MethodName: "GetExampleTimestamp",
Handler: _TestSchemaQueryService_GetExampleTimestamp_Handler,
},
{
MethodName: "ListExampleTimestamp",
Handler: _TestSchemaQueryService_ListExampleTimestamp_Handler,
},
{
MethodName: "GetExampleDuration",
Handler: _TestSchemaQueryService_GetExampleDuration_Handler,
},
{
MethodName: "ListExampleDuration",
Handler: _TestSchemaQueryService_ListExampleDuration_Handler,
},
{
MethodName: "GetSimpleExample",
Handler: _TestSchemaQueryService_GetSimpleExample_Handler,
},
{
MethodName: "GetSimpleExampleByUnique",
Handler: _TestSchemaQueryService_GetSimpleExampleByUnique_Handler,
},
{
MethodName: "ListSimpleExample",
Handler: _TestSchemaQueryService_ListSimpleExample_Handler,
},
{
MethodName: "GetExampleAutoIncFieldName",
Handler: _TestSchemaQueryService_GetExampleAutoIncFieldName_Handler,
},
{
MethodName: "ListExampleAutoIncFieldName",
Handler: _TestSchemaQueryService_ListExampleAutoIncFieldName_Handler,
},
},
Streams: []grpc.StreamDesc{},
Metadata: "testpb/test_schema_query.proto",
}

View File

@ -1,199 +0,0 @@
package testutil
import (
"fmt"
"math"
"strings"
"google.golang.org/protobuf/reflect/protoreflect"
"google.golang.org/protobuf/types/known/durationpb"
"google.golang.org/protobuf/types/known/timestamppb"
"pgregory.net/rapid"
"cosmossdk.io/orm/encoding/ormfield"
"cosmossdk.io/orm/encoding/ormkv"
"cosmossdk.io/orm/internal/testpb"
)
// TestFieldSpec defines a test field against the testpb.ExampleTable message.
type TestFieldSpec struct {
FieldName protoreflect.Name
Gen *rapid.Generator[any]
}
var TestFieldSpecs = []TestFieldSpec{
{
"u32",
rapid.Uint32().AsAny(),
},
{
"u64",
rapid.Uint64().AsAny(),
},
{
"str",
rapid.String().Filter(func(x string) bool {
// filter out null terminators
return strings.IndexByte(x, 0) < 0
}).AsAny(),
},
{
"bz",
rapid.SliceOfN(rapid.Byte(), 0, math.MaxUint32).AsAny(),
},
{
"i32",
rapid.Int32().AsAny(),
},
{
"f32",
rapid.Uint32().AsAny(),
},
{
"s32",
rapid.Int32().AsAny(),
},
{
"sf32",
rapid.Int32().AsAny(),
},
{
"i64",
rapid.Int64().AsAny(),
},
{
"f64",
rapid.Uint64().AsAny(),
},
{
"s64",
rapid.Int64().AsAny(),
},
{
"sf64",
rapid.Int64().AsAny(),
},
{
"b",
rapid.Bool().AsAny(),
},
{
"ts",
rapid.Custom(func(t *rapid.T) protoreflect.Message {
isNil := rapid.Float32().Draw(t, "isNil")
if isNil >= 0.95 { // draw a nil 5% of the time
return nil
}
seconds := rapid.Int64Range(ormfield.TimestampSecondsMin, ormfield.TimestampSecondsMax).Draw(t, "seconds")
nanos := rapid.Int32Range(0, ormfield.TimestampNanosMax).Draw(t, "nanos")
return (&timestamppb.Timestamp{
Seconds: seconds,
Nanos: nanos,
}).ProtoReflect()
}).AsAny(),
},
{
"dur",
rapid.Custom(func(t *rapid.T) protoreflect.Message {
isNil := rapid.Float32().Draw(t, "isNil")
if isNil >= 0.95 { // draw a nil 5% of the time
return nil
}
seconds := rapid.Int64Range(ormfield.DurationSecondsMin, ormfield.DurationSecondsMax).Draw(t, "seconds")
nanos := rapid.Int32Range(0, ormfield.DurationNanosMax).Draw(t, "nanos")
if seconds < 0 {
nanos = -nanos
}
return (&durationpb.Duration{
Seconds: seconds,
Nanos: nanos,
}).ProtoReflect()
}).AsAny(),
},
{
"e",
rapid.Map(rapid.Int32(), func(x int32) protoreflect.EnumNumber {
return protoreflect.EnumNumber(x)
}).AsAny(),
},
}
func MakeTestCodec(fname protoreflect.Name, nonTerminal bool) (ormfield.Codec, error) {
field := GetTestField(fname)
if field == nil {
return nil, fmt.Errorf("can't find field %s", fname)
}
return ormfield.GetCodec(field, nonTerminal)
}
func GetTestField(fname protoreflect.Name) protoreflect.FieldDescriptor {
a := &testpb.ExampleTable{}
return a.ProtoReflect().Descriptor().Fields().ByName(fname)
}
type TestKeyCodec struct {
KeySpecs []TestFieldSpec
Codec *ormkv.KeyCodec
}
func TestFieldSpecsGen(minLen, maxLen int) *rapid.Generator[[]TestFieldSpec] {
return rapid.Custom(func(t *rapid.T) []TestFieldSpec {
xs := rapid.SliceOfNDistinct(rapid.IntRange(0, len(TestFieldSpecs)-1), minLen, maxLen, func(i int) int { return i }).
Draw(t, "fieldSpecIndexes")
var specs []TestFieldSpec
for _, x := range xs {
spec := TestFieldSpecs[x]
specs = append(specs, spec)
}
return specs
})
}
func TestKeyCodecGen(minLen, maxLen int) *rapid.Generator[TestKeyCodec] {
return rapid.Custom(func(t *rapid.T) TestKeyCodec {
specs := TestFieldSpecsGen(minLen, maxLen).Draw(t, "fieldSpecs")
var fields []protoreflect.Name
for _, spec := range specs {
fields = append(fields, spec.FieldName)
}
prefix := rapid.SliceOfN(rapid.Byte(), 0, 5).Draw(t, "prefix")
msgType := (&testpb.ExampleTable{}).ProtoReflect().Type()
cdc, err := ormkv.NewKeyCodec(prefix, msgType, fields)
if err != nil {
panic(err)
}
return TestKeyCodec{
Codec: cdc,
KeySpecs: specs,
}
})
}
func (k TestKeyCodec) Draw(t *rapid.T, id string) []protoreflect.Value {
n := len(k.KeySpecs)
keyValues := make([]protoreflect.Value, n)
for i, k := range k.KeySpecs {
keyValues[i] = protoreflect.ValueOf(k.Gen.Draw(t, fmt.Sprintf("%s[%d]", id, i)))
}
return keyValues
}
var GenA = rapid.Custom(func(t *rapid.T) *testpb.ExampleTable {
a := &testpb.ExampleTable{}
ref := a.ProtoReflect()
for _, spec := range TestFieldSpecs {
field := GetTestField(spec.FieldName)
value := spec.Gen.Draw(t, string(spec.FieldName))
if value != nil {
ref.Set(field, protoreflect.ValueOf(value))
}
}
return a
})
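These generators exist to drive property-based tests over the ORM codecs. The following is an illustrative test (not part of the original file) showing how GenA and TestKeyCodecGen are drawn inside rapid.Check; it would live in a _test.go file in this same package, and its assertions are purely for demonstration.

package testutil

import (
	"testing"

	"pgregory.net/rapid"
)

// TestGenerators draws a random ExampleTable and a random key codec with
// matching key values, just to show how the generators above are consumed.
func TestGenerators(t *testing.T) {
	rapid.Check(t, func(t *rapid.T) {
		a := GenA.Draw(t, "exampleTable")
		if a == nil {
			t.Fatal("expected a non-nil ExampleTable")
		}

		keyCodec := TestKeyCodecGen(1, 3).Draw(t, "keyCodec")
		values := keyCodec.Draw(t, "values")
		if len(values) != len(keyCodec.KeySpecs) {
			t.Fatalf("expected %d key values, got %d", len(keyCodec.KeySpecs), len(values))
		}
	})
}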

View File

@ -1,3 +0,0 @@
// Package model contains packages which define ORM data "model" types
// such as tables, indexes, and schemas.
package model

View File

@ -1,123 +0,0 @@
package ormdb
import (
"bytes"
"encoding/binary"
"errors"
"math"
"google.golang.org/protobuf/proto"
"google.golang.org/protobuf/reflect/protoreflect"
"google.golang.org/protobuf/reflect/protoregistry"
"cosmossdk.io/orm/encoding/encodeutil"
"cosmossdk.io/orm/encoding/ormkv"
"cosmossdk.io/orm/model/ormtable"
"cosmossdk.io/orm/types/ormerrors"
)
type fileDescriptorDBOptions struct {
Prefix []byte
ID uint32
TypeResolver ormtable.TypeResolver
JSONValidator func(proto.Message) error
BackendResolver ormtable.BackendResolver
}
type fileDescriptorDB struct {
id uint32
prefix []byte
tablesByID map[uint32]ormtable.Table
tablesByName map[protoreflect.FullName]ormtable.Table
fileDescriptor protoreflect.FileDescriptor
}
func newFileDescriptorDB(fileDescriptor protoreflect.FileDescriptor, options fileDescriptorDBOptions) (*fileDescriptorDB, error) {
prefix := encodeutil.AppendVarUInt32(options.Prefix, options.ID)
schema := &fileDescriptorDB{
id: options.ID,
prefix: prefix,
tablesByID: map[uint32]ormtable.Table{},
tablesByName: map[protoreflect.FullName]ormtable.Table{},
fileDescriptor: fileDescriptor,
}
resolver := options.TypeResolver
if resolver == nil {
resolver = protoregistry.GlobalTypes
}
messages := fileDescriptor.Messages()
n := messages.Len()
for i := 0; i < n; i++ {
messageDescriptor := messages.Get(i)
tableName := messageDescriptor.FullName()
messageType, err := resolver.FindMessageByName(tableName)
if err != nil {
return nil, err
}
table, err := ormtable.Build(ormtable.Options{
Prefix: prefix,
MessageType: messageType,
TypeResolver: resolver,
JSONValidator: options.JSONValidator,
BackendResolver: options.BackendResolver,
})
if errors.Is(err, ormerrors.NoTableDescriptor) {
continue
}
if err != nil {
return nil, err
}
id := table.ID()
if _, ok := schema.tablesByID[id]; ok {
return nil, ormerrors.InvalidTableId.Wrapf("duplicate ID %d for %s", id, tableName)
}
schema.tablesByID[id] = table
if _, ok := schema.tablesByName[tableName]; ok {
return nil, ormerrors.InvalidTableDefinition.Wrapf("duplicate table %s", tableName)
}
schema.tablesByName[tableName] = table
}
return schema, nil
}
func (f fileDescriptorDB) DecodeEntry(k, v []byte) (ormkv.Entry, error) {
r := bytes.NewReader(k)
err := encodeutil.SkipPrefix(r, f.prefix)
if err != nil {
return nil, err
}
id, err := binary.ReadUvarint(r)
if err != nil {
return nil, err
}
if id > math.MaxUint32 {
return nil, ormerrors.UnexpectedDecodePrefix.Wrapf("uint32 varint id out of range %d", id)
}
table, ok := f.tablesByID[uint32(id)]
if !ok {
return nil, ormerrors.UnexpectedDecodePrefix.Wrapf("can't find table with id %d", id)
}
return table.DecodeEntry(k, v)
}
func (f fileDescriptorDB) EncodeEntry(entry ormkv.Entry) (k, v []byte, err error) {
table, ok := f.tablesByName[entry.GetTableName()]
if !ok {
return nil, nil, ormerrors.BadDecodeEntry.Wrapf("can't find table %s", entry.GetTableName())
}
return table.EncodeEntry(entry)
}
var _ ormkv.EntryCodec = fileDescriptorDB{}
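DecodeEntry above skips the schema prefix and then reads the table ID as a uvarint before delegating to that table's codec, mirroring how newFileDescriptorDB builds its prefix with encodeutil.AppendVarUInt32. A small, hypothetical sketch of that key-prefix layout (the module prefix byte and the IDs are made up for illustration):

package main

import (
	"fmt"

	"cosmossdk.io/orm/encoding/encodeutil"
)

func main() {
	// Hypothetical layout: a one-byte module prefix, then the file descriptor
	// ID as a varint (this is the prefix newFileDescriptorDB computes), after
	// which each table appends its own table ID.
	modulePrefix := []byte{0x01}
	filePrefix := encodeutil.AppendVarUInt32(modulePrefix, 7) // file descriptor ID 7
	tablePrefix := encodeutil.AppendVarUInt32(filePrefix, 2)  // table ID 2

	fmt.Printf("%x\n", tablePrefix) // 010702 for these small IDs
}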

View File

@ -1,152 +0,0 @@
package ormdb
import (
"context"
"fmt"
"maps"
"slices"
"sort"
"google.golang.org/protobuf/reflect/protoreflect"
"cosmossdk.io/core/appmodule"
"cosmossdk.io/errors"
"cosmossdk.io/orm/types/ormerrors"
)
type appModuleGenesisWrapper struct {
moduleDB
}
func (m appModuleGenesisWrapper) IsOnePerModuleType() {}
func (m appModuleGenesisWrapper) IsAppModule() {}
func (m appModuleGenesisWrapper) DefaultGenesis(target appmodule.GenesisTarget) error {
tableNames := slices.Collect(maps.Keys(m.tablesByName))
sort.Slice(tableNames, func(i, j int) bool {
ti, tj := tableNames[i], tableNames[j]
return ti.Name() < tj.Name()
})
for _, name := range tableNames {
table := m.tablesByName[name]
w, err := target(string(name))
if err != nil {
return err
}
_, err = w.Write(table.DefaultJSON())
if err != nil {
return err
}
err = w.Close()
if err != nil {
return err
}
}
return nil
}
func (m appModuleGenesisWrapper) ValidateGenesis(source appmodule.GenesisSource) error {
errMap := map[protoreflect.FullName]error{}
names := slices.Collect(maps.Keys(m.tablesByName))
sort.Slice(names, func(i, j int) bool {
ti, tj := names[i], names[j]
return ti.Name() < tj.Name()
})
for _, name := range names {
r, err := source(string(name))
if err != nil {
return err
}
if r == nil {
continue
}
table := m.tablesByName[name]
err = table.ValidateJSON(r)
if err != nil {
errMap[name] = err
}
err = r.Close()
if err != nil {
return err
}
}
if len(errMap) != 0 {
var allErrors string
for name, err := range errMap {
allErrors += fmt.Sprintf("Error in JSON for table %s: %v\n", name, err)
}
return ormerrors.JSONValidationError.Wrap(allErrors)
}
return nil
}
func (m appModuleGenesisWrapper) InitGenesis(ctx context.Context, source appmodule.GenesisSource) error {
var names []string
for name := range m.tablesByName {
names = append(names, string(name))
}
sort.Strings(names)
for _, name := range names {
fullName := protoreflect.FullName(name)
table := m.tablesByName[fullName]
r, err := source(string(fullName))
if err != nil {
return errors.Wrapf(err, "table %s", fullName)
}
if r == nil {
continue
}
err = table.ImportJSON(ctx, r)
if err != nil {
return errors.Wrapf(err, "table %s", fullName)
}
err = r.Close()
if err != nil {
return errors.Wrapf(err, "table %s", fullName)
}
}
return nil
}
func (m appModuleGenesisWrapper) ExportGenesis(ctx context.Context, sink appmodule.GenesisTarget) error {
// Ensure that we export the tables in a deterministic order.
tableNames := slices.Collect(maps.Keys(m.tablesByName))
sort.Slice(tableNames, func(i, j int) bool {
ti, tj := tableNames[i], tableNames[j]
return ti.Name() < tj.Name()
})
for _, name := range tableNames {
w, err := sink(string(name))
if err != nil {
return err
}
table := m.tablesByName[name]
err = table.ExportJSON(ctx, w)
if err != nil {
return err
}
err = w.Close()
if err != nil {
return err
}
}
return nil
}

View File

@ -1,220 +0,0 @@
package ormdb
import (
"bytes"
"context"
"encoding/binary"
"errors"
"fmt"
"math"
"google.golang.org/protobuf/proto"
"google.golang.org/protobuf/reflect/protodesc"
"google.golang.org/protobuf/reflect/protoreflect"
"google.golang.org/protobuf/reflect/protoregistry"
ormv1alpha1 "cosmossdk.io/api/cosmos/orm/v1alpha1"
"cosmossdk.io/core/appmodule"
"cosmossdk.io/core/store"
"cosmossdk.io/orm/encoding/encodeutil"
"cosmossdk.io/orm/encoding/ormkv"
"cosmossdk.io/orm/model/ormtable"
"cosmossdk.io/orm/types/ormerrors"
)
// ModuleDB defines the ORM database type to be used by modules.
type ModuleDB interface {
ormtable.Schema
// GenesisHandler returns an implementation of appmodule.HasGenesis
// to be embedded in or called from app module implementations.
// Ex:
// type AppModule struct {
// appmodule.HasGenesis
// }
//
// func NewKeeper(db ModuleDB) *Keeper {
// return &Keeper{genesisHandler: db.GenesisHandler()}
// }
//
// func NewAppModule(keeper keeper.Keeper) AppModule {
// return AppModule{HasGenesis: keeper.GenesisHandler()}
// }
GenesisHandler() appmodule.HasGenesisAuto
private()
}
type moduleDB struct {
prefix []byte
filesByID map[uint32]*fileDescriptorDB
tablesByName map[protoreflect.FullName]ormtable.Table
}
// ModuleDBOptions are options for constructing a ModuleDB.
type ModuleDBOptions struct {
// TypeResolver is an optional type resolver to be used when unmarshaling
// protobuf messages. If it is nil, protoregistry.GlobalTypes will be used.
TypeResolver ormtable.TypeResolver
// FileResolver is an optional file resolver that can be used to retrieve
// pinned file descriptors that may be different from those available at
// runtime. The file descriptor versions returned by this resolver will be
// used instead of the ones provided at runtime by the ModuleSchema.
FileResolver protodesc.Resolver
// JSONValidator is an optional validator that can be used for validating
// messages when using ValidateJSON. If it is nil, DefaultJSONValidator
// will be used.
JSONValidator func(proto.Message) error
// KVStoreService is the storage service to use for the DB if default KV-store storage is used.
KVStoreService store.KVStoreService
// MemoryStoreService is the storage service to use for the DB if memory storage is used.
MemoryStoreService store.MemoryStoreService
// TransientStoreService is the storage service to use for the DB if transient storage is used.
TransientStoreService store.TransientStoreService
}
// NewModuleDB constructs a ModuleDB instance from the provided schema and options.
func NewModuleDB(schema *ormv1alpha1.ModuleSchemaDescriptor, options ModuleDBOptions) (ModuleDB, error) {
prefix := schema.Prefix
db := &moduleDB{
prefix: prefix,
filesByID: map[uint32]*fileDescriptorDB{},
tablesByName: map[protoreflect.FullName]ormtable.Table{},
}
fileResolver := options.FileResolver
if fileResolver == nil {
fileResolver = protoregistry.GlobalFiles
}
for _, entry := range schema.SchemaFile {
var backendResolver ormtable.BackendResolver
switch entry.StorageType {
case ormv1alpha1.StorageType_STORAGE_TYPE_DEFAULT_UNSPECIFIED:
service := options.KVStoreService
if service != nil {
// for testing purposes, the ORM allows KVStoreService to be omitted
// and a default test backend can be used
backendResolver = func(ctx context.Context) (ormtable.ReadBackend, error) {
kvStore := service.OpenKVStore(ctx)
return ormtable.NewBackend(ormtable.BackendOptions{
CommitmentStore: kvStore,
IndexStore: kvStore,
}), nil
}
}
case ormv1alpha1.StorageType_STORAGE_TYPE_MEMORY:
service := options.MemoryStoreService
if service == nil {
return nil, errors.New("missing MemoryStoreService")
}
backendResolver = func(ctx context.Context) (ormtable.ReadBackend, error) {
kvStore := service.OpenMemoryStore(ctx)
return ormtable.NewBackend(ormtable.BackendOptions{
CommitmentStore: kvStore,
IndexStore: kvStore,
}), nil
}
case ormv1alpha1.StorageType_STORAGE_TYPE_TRANSIENT:
service := options.TransientStoreService
if service == nil {
return nil, errors.New("missing TransientStoreService")
}
backendResolver = func(ctx context.Context) (ormtable.ReadBackend, error) {
kvStore := service.OpenTransientStore(ctx)
return ormtable.NewBackend(ormtable.BackendOptions{
CommitmentStore: kvStore,
IndexStore: kvStore,
}), nil
}
default:
return nil, fmt.Errorf("unsupported storage type %s", entry.StorageType)
}
id := entry.Id
fileDescriptor, err := fileResolver.FindFileByPath(entry.ProtoFileName)
if err != nil {
return nil, err
}
if id == 0 {
return nil, ormerrors.InvalidFileDescriptorID.Wrapf("for %s", fileDescriptor.Path())
}
opts := fileDescriptorDBOptions{
ID: id,
Prefix: prefix,
TypeResolver: options.TypeResolver,
JSONValidator: options.JSONValidator,
BackendResolver: backendResolver,
}
fdSchema, err := newFileDescriptorDB(fileDescriptor, opts)
if err != nil {
return nil, err
}
db.filesByID[id] = fdSchema
for name, table := range fdSchema.tablesByName {
if _, ok := db.tablesByName[name]; ok {
return nil, ormerrors.UnexpectedError.Wrapf("duplicate table %s", name)
}
db.tablesByName[name] = table
}
}
return db, nil
}
func (m moduleDB) DecodeEntry(k, v []byte) (ormkv.Entry, error) {
r := bytes.NewReader(k)
err := encodeutil.SkipPrefix(r, m.prefix)
if err != nil {
return nil, err
}
id, err := binary.ReadUvarint(r)
if err != nil {
return nil, err
}
if id > math.MaxUint32 {
return nil, ormerrors.UnexpectedDecodePrefix.Wrapf("uint32 varint id out of range %d", id)
}
fileSchema, ok := m.filesByID[uint32(id)]
if !ok {
return nil, ormerrors.UnexpectedDecodePrefix.Wrapf("can't find FileDescriptor schema with id %d", id)
}
return fileSchema.DecodeEntry(k, v)
}
func (m moduleDB) EncodeEntry(entry ormkv.Entry) (k, v []byte, err error) {
tableName := entry.GetTableName()
table, ok := m.tablesByName[tableName]
if !ok {
return nil, nil, ormerrors.BadDecodeEntry.Wrapf("can't find table %s", tableName)
}
return table.EncodeEntry(entry)
}
func (m moduleDB) GetTable(message proto.Message) ormtable.Table {
return m.tablesByName[message.ProtoReflect().Descriptor().FullName()]
}
func (m moduleDB) GenesisHandler() appmodule.HasGenesisAuto {
return appModuleGenesisWrapper{m}
}
func (moduleDB) private() {}

View File

@ -1,415 +0,0 @@
package ormdb_test
import (
"bytes"
"context"
"encoding/json"
"errors"
"strings"
"testing"
"go.uber.org/mock/gomock"
"gotest.tools/v3/assert"
"gotest.tools/v3/golden"
appv1alpha1 "cosmossdk.io/api/cosmos/app/v1alpha1"
ormmodulev1alpha1 "cosmossdk.io/api/cosmos/orm/module/v1alpha1"
ormv1alpha1 "cosmossdk.io/api/cosmos/orm/v1alpha1"
"cosmossdk.io/core/genesis"
corestore "cosmossdk.io/core/store"
coretesting "cosmossdk.io/core/testing"
"cosmossdk.io/depinject"
"cosmossdk.io/depinject/appconfig"
_ "cosmossdk.io/orm" // required for ORM module registration
"cosmossdk.io/orm/internal/testkv"
"cosmossdk.io/orm/internal/testpb"
"cosmossdk.io/orm/model/ormdb"
"cosmossdk.io/orm/model/ormtable"
"cosmossdk.io/orm/testing/ormmocks"
"cosmossdk.io/orm/testing/ormtest"
"cosmossdk.io/orm/types/ormerrors"
)
// These tests use a simulated bank keeper. Addresses and balances use
// string and uint64 types respectively for simplicity.
func init() {
// this registers the test module with the module registry
appconfig.RegisterModule(&testpb.Module{},
appconfig.Provide(NewKeeper),
)
}
var TestBankSchema = &ormv1alpha1.ModuleSchemaDescriptor{
SchemaFile: []*ormv1alpha1.ModuleSchemaDescriptor_FileEntry{
{
Id: 1,
ProtoFileName: testpb.File_testpb_bank_proto.Path(),
},
},
}
type keeper struct {
store testpb.BankStore
}
func NewKeeper(db ormdb.ModuleDB) (Keeper, error) {
bankStore, err := testpb.NewBankStore(db)
return keeper{bankStore}, err
}
type Keeper interface {
Send(ctx context.Context, from, to, denom string, amount uint64) error
Mint(ctx context.Context, acct, denom string, amount uint64) error
Burn(ctx context.Context, acct, denom string, amount uint64) error
Balance(ctx context.Context, acct, denom string) (uint64, error)
Supply(ctx context.Context, denom string) (uint64, error)
}
func (k keeper) Send(ctx context.Context, from, to, denom string, amount uint64) error {
err := k.safeSubBalance(ctx, from, denom, amount)
if err != nil {
return err
}
return k.addBalance(ctx, to, denom, amount)
}
func (k keeper) Mint(ctx context.Context, acct, denom string, amount uint64) error {
supply, err := k.store.SupplyTable().Get(ctx, denom)
if err != nil && !ormerrors.IsNotFound(err) {
return err
}
if supply == nil {
supply = &testpb.Supply{Denom: denom, Amount: amount}
} else {
supply.Amount += amount
}
err = k.store.SupplyTable().Save(ctx, supply)
if err != nil {
return err
}
return k.addBalance(ctx, acct, denom, amount)
}
func (k keeper) Burn(ctx context.Context, acct, denom string, amount uint64) error {
supplyStore := k.store.SupplyTable()
supply, err := supplyStore.Get(ctx, denom)
if err != nil {
return err
}
if amount > supply.Amount {
return errors.New("insufficient supply")
}
supply.Amount -= amount
if supply.Amount == 0 {
err = supplyStore.Delete(ctx, supply)
} else {
err = supplyStore.Save(ctx, supply)
}
if err != nil {
return err
}
return k.safeSubBalance(ctx, acct, denom, amount)
}
func (k keeper) Balance(ctx context.Context, acct, denom string) (uint64, error) {
balance, err := k.store.BalanceTable().Get(ctx, acct, denom)
if err != nil {
if ormerrors.IsNotFound(err) {
return 0, nil
}
return 0, err
}
return balance.Amount, err
}
func (k keeper) Supply(ctx context.Context, denom string) (uint64, error) {
supply, err := k.store.SupplyTable().Get(ctx, denom)
if supply == nil {
if ormerrors.IsNotFound(err) {
return 0, nil
}
return 0, err
}
return supply.Amount, err
}
func (k keeper) addBalance(ctx context.Context, acct, denom string, amount uint64) error {
balance, err := k.store.BalanceTable().Get(ctx, acct, denom)
if err != nil && !ormerrors.IsNotFound(err) {
return err
}
if balance == nil {
balance = &testpb.Balance{
Address: acct,
Denom: denom,
Amount: amount,
}
} else {
balance.Amount += amount
}
return k.store.BalanceTable().Save(ctx, balance)
}
func (k keeper) safeSubBalance(ctx context.Context, acct, denom string, amount uint64) error {
balanceStore := k.store.BalanceTable()
balance, err := balanceStore.Get(ctx, acct, denom)
if err != nil {
return err
}
if amount > balance.Amount {
return errors.New("insufficient funds")
}
balance.Amount -= amount
if balance.Amount == 0 {
return balanceStore.Delete(ctx, balance)
}
return balanceStore.Save(ctx, balance)
}
func TestModuleDB(t *testing.T) {
// create db & debug context
db, err := ormdb.NewModuleDB(TestBankSchema, ormdb.ModuleDBOptions{})
assert.NilError(t, err)
debugBuf := &strings.Builder{}
backend := ormtest.NewMemoryBackend()
ctx := ormtable.WrapContextDefault(testkv.NewDebugBackend(
backend,
&testkv.EntryCodecDebugger{
EntryCodec: db,
Print: func(s string) { debugBuf.WriteString(s + "\n") },
},
))
// create keeper
k, err := NewKeeper(db)
assert.NilError(t, err)
runSimpleBankTests(t, k, ctx)
// check debug output
golden.Assert(t, debugBuf.String(), "bank_scenario.golden")
// check decode & encode
it, err := backend.CommitmentStore().Iterator(nil, nil)
assert.NilError(t, err)
for it.Valid() {
entry, err := db.DecodeEntry(it.Key(), it.Value())
assert.NilError(t, err)
k, v, err := db.EncodeEntry(entry)
assert.NilError(t, err)
assert.Assert(t, bytes.Equal(k, it.Key()))
assert.Assert(t, bytes.Equal(v, it.Value()))
it.Next()
}
// check JSON
target := genesis.RawJSONTarget{}
assert.NilError(t, db.GenesisHandler().DefaultGenesis(target.Target()))
rawJSON, err := target.JSON()
assert.NilError(t, err)
golden.Assert(t, string(rawJSON), "default_json.golden")
target = genesis.RawJSONTarget{}
assert.NilError(t, db.GenesisHandler().ExportGenesis(ctx, target.Target()))
rawJSON, err = target.JSON()
assert.NilError(t, err)
goodJSON := `{
"testpb.Supply": []
}`
source, err := genesis.SourceFromRawJSON(json.RawMessage(goodJSON))
assert.NilError(t, err)
assert.NilError(t, db.GenesisHandler().ValidateGenesis(source))
assert.NilError(t, db.GenesisHandler().InitGenesis(ormtable.WrapContextDefault(ormtest.NewMemoryBackend()), source))
badJSON := `{
"testpb.Balance": 5,
"testpb.Supply": {}
}
`
source, err = genesis.SourceFromRawJSON(json.RawMessage(badJSON))
assert.NilError(t, err)
assert.ErrorIs(t, db.GenesisHandler().ValidateGenesis(source), ormerrors.JSONValidationError)
backend2 := ormtest.NewMemoryBackend()
ctx2 := ormtable.WrapContextDefault(backend2)
source, err = genesis.SourceFromRawJSON(rawJSON)
assert.NilError(t, err)
assert.NilError(t, db.GenesisHandler().ValidateGenesis(source))
assert.NilError(t, db.GenesisHandler().InitGenesis(ctx2, source))
testkv.AssertBackendsEqual(t, backend, backend2)
}
func runSimpleBankTests(t *testing.T, k Keeper, ctx context.Context) {
t.Helper()
// mint coins
denom := "foo"
acct1 := "bob"
err := k.Mint(ctx, acct1, denom, 100)
assert.NilError(t, err)
bal, err := k.Balance(ctx, acct1, denom)
assert.NilError(t, err)
assert.Equal(t, uint64(100), bal)
supply, err := k.Supply(ctx, denom)
assert.NilError(t, err)
assert.Equal(t, uint64(100), supply)
// send coins
acct2 := "sally"
err = k.Send(ctx, acct1, acct2, denom, 30)
assert.NilError(t, err)
bal, err = k.Balance(ctx, acct1, denom)
assert.NilError(t, err)
assert.Equal(t, uint64(70), bal)
bal, err = k.Balance(ctx, acct2, denom)
assert.NilError(t, err)
assert.Equal(t, uint64(30), bal)
// burn coins
err = k.Burn(ctx, acct2, denom, 3)
assert.NilError(t, err)
bal, err = k.Balance(ctx, acct2, denom)
assert.NilError(t, err)
assert.Equal(t, uint64(27), bal)
supply, err = k.Supply(ctx, denom)
assert.NilError(t, err)
assert.Equal(t, uint64(97), supply)
}
func TestHooks(t *testing.T) {
ctrl := gomock.NewController(t)
db, err := ormdb.NewModuleDB(TestBankSchema, ormdb.ModuleDBOptions{})
assert.NilError(t, err)
validateHooks := ormmocks.NewMockValidateHooks(ctrl)
writeHooks := ormmocks.NewMockWriteHooks(ctrl)
ctx := ormtable.WrapContextDefault(ormtest.NewMemoryBackend().
WithValidateHooks(validateHooks).
WithWriteHooks(writeHooks))
k, err := NewKeeper(db)
assert.NilError(t, err)
denom := "foo"
acct1 := "bob"
acct2 := "sally"
validateHooks.EXPECT().ValidateInsert(gomock.Any(), ormmocks.Eq(&testpb.Balance{Address: acct1, Denom: denom, Amount: 10}))
validateHooks.EXPECT().ValidateInsert(gomock.Any(), ormmocks.Eq(&testpb.Supply{Denom: denom, Amount: 10}))
writeHooks.EXPECT().OnInsert(gomock.Any(), ormmocks.Eq(&testpb.Balance{Address: acct1, Denom: denom, Amount: 10}))
writeHooks.EXPECT().OnInsert(gomock.Any(), ormmocks.Eq(&testpb.Supply{Denom: denom, Amount: 10}))
assert.NilError(t, k.Mint(ctx, acct1, denom, 10))
validateHooks.EXPECT().ValidateUpdate(
gomock.Any(),
ormmocks.Eq(&testpb.Balance{Address: acct1, Denom: denom, Amount: 10}),
ormmocks.Eq(&testpb.Balance{Address: acct1, Denom: denom, Amount: 5}),
)
validateHooks.EXPECT().ValidateInsert(
gomock.Any(),
ormmocks.Eq(&testpb.Balance{Address: acct2, Denom: denom, Amount: 5}),
)
writeHooks.EXPECT().OnUpdate(
gomock.Any(),
ormmocks.Eq(&testpb.Balance{Address: acct1, Denom: denom, Amount: 10}),
ormmocks.Eq(&testpb.Balance{Address: acct1, Denom: denom, Amount: 5}),
)
writeHooks.EXPECT().OnInsert(
gomock.Any(),
ormmocks.Eq(&testpb.Balance{Address: acct2, Denom: denom, Amount: 5}),
)
assert.NilError(t, k.Send(ctx, acct1, acct2, denom, 5))
validateHooks.EXPECT().ValidateUpdate(
gomock.Any(),
ormmocks.Eq(&testpb.Supply{Denom: denom, Amount: 10}),
ormmocks.Eq(&testpb.Supply{Denom: denom, Amount: 5}),
)
validateHooks.EXPECT().ValidateDelete(
gomock.Any(),
ormmocks.Eq(&testpb.Balance{Address: acct1, Denom: denom, Amount: 5}),
)
writeHooks.EXPECT().OnUpdate(
gomock.Any(),
ormmocks.Eq(&testpb.Supply{Denom: denom, Amount: 10}),
ormmocks.Eq(&testpb.Supply{Denom: denom, Amount: 5}),
)
writeHooks.EXPECT().OnDelete(
gomock.Any(),
ormmocks.Eq(&testpb.Balance{Address: acct1, Denom: denom, Amount: 5}),
)
assert.NilError(t, k.Burn(ctx, acct1, denom, 5))
}
type testStoreService struct {
db corestore.KVStoreWithBatch
}
func (t testStoreService) OpenKVStore(context.Context) corestore.KVStore {
return testkv.TestStore{Db: t.db}
}
func (t testStoreService) OpenMemoryStore(context.Context) corestore.KVStore {
return testkv.TestStore{Db: t.db}
}
func TestGetBackendResolver(t *testing.T) {
_, err := ormdb.NewModuleDB(&ormv1alpha1.ModuleSchemaDescriptor{
SchemaFile: []*ormv1alpha1.ModuleSchemaDescriptor_FileEntry{
{
Id: 1,
ProtoFileName: testpb.File_testpb_bank_proto.Path(),
StorageType: ormv1alpha1.StorageType_STORAGE_TYPE_MEMORY,
},
},
}, ormdb.ModuleDBOptions{})
assert.ErrorContains(t, err, "missing MemoryStoreService")
_, err = ormdb.NewModuleDB(&ormv1alpha1.ModuleSchemaDescriptor{
SchemaFile: []*ormv1alpha1.ModuleSchemaDescriptor_FileEntry{
{
Id: 1,
ProtoFileName: testpb.File_testpb_bank_proto.Path(),
StorageType: ormv1alpha1.StorageType_STORAGE_TYPE_MEMORY,
},
},
}, ormdb.ModuleDBOptions{
MemoryStoreService: testStoreService{db: coretesting.NewMemDB()},
})
assert.NilError(t, err)
}
func ProvideTestRuntime() corestore.KVStoreService {
return testStoreService{db: coretesting.NewMemDB()}
}
func TestAppConfigModule(t *testing.T) {
appCfg := appconfig.Compose(&appv1alpha1.Config{
Modules: []*appv1alpha1.ModuleConfig{
{Name: "bank", Config: appconfig.WrapAny(&testpb.Module{})},
{Name: "orm", Config: appconfig.WrapAny(&ormmodulev1alpha1.Module{})},
},
})
var k Keeper
err := depinject.Inject(depinject.Configs(
appCfg, depinject.Provide(ProvideTestRuntime),
), &k)
assert.NilError(t, err)
runSimpleBankTests(t, k, context.Background())
}

View File

@ -1,64 +0,0 @@
GET 010200666f6f
PK testpb.Supply foo -> {"denom":"foo"}
GET 010200666f6f
PK testpb.Supply foo -> {"denom":"foo"}
ORM BEFORE INSERT testpb.Supply {"denom":"foo","amount":100}
SET 010200666f6f 1064
PK testpb.Supply foo -> {"denom":"foo","amount":100}
ORM AFTER INSERT testpb.Supply {"denom":"foo","amount":100}
GET 010100626f6200666f6f
PK testpb.Balance bob/foo -> {"address":"bob","denom":"foo"}
GET 010100626f6200666f6f
PK testpb.Balance bob/foo -> {"address":"bob","denom":"foo"}
ORM BEFORE INSERT testpb.Balance {"address":"bob","denom":"foo","amount":100}
SET 010100626f6200666f6f 1864
PK testpb.Balance bob/foo -> {"address":"bob","denom":"foo","amount":100}
SET 010101666f6f00626f62
IDX testpb.Balance denom/address : foo/bob -> bob/foo
ORM AFTER INSERT testpb.Balance {"address":"bob","denom":"foo","amount":100}
GET 010100626f6200666f6f 1864
PK testpb.Balance bob/foo -> {"address":"bob","denom":"foo","amount":100}
GET 010200666f6f 1064
PK testpb.Supply foo -> {"denom":"foo","amount":100}
GET 010100626f6200666f6f 1864
PK testpb.Balance bob/foo -> {"address":"bob","denom":"foo","amount":100}
GET 010100626f6200666f6f 1864
PK testpb.Balance bob/foo -> {"address":"bob","denom":"foo","amount":100}
ORM BEFORE UPDATE testpb.Balance {"address":"bob","denom":"foo","amount":100} -> {"address":"bob","denom":"foo","amount":70}
SET 010100626f6200666f6f 1846
PK testpb.Balance bob/foo -> {"address":"bob","denom":"foo","amount":70}
ORM AFTER UPDATE testpb.Balance {"address":"bob","denom":"foo","amount":100} -> {"address":"bob","denom":"foo","amount":70}
GET 01010073616c6c7900666f6f
PK testpb.Balance sally/foo -> {"address":"sally","denom":"foo"}
GET 01010073616c6c7900666f6f
PK testpb.Balance sally/foo -> {"address":"sally","denom":"foo"}
ORM BEFORE INSERT testpb.Balance {"address":"sally","denom":"foo","amount":30}
SET 01010073616c6c7900666f6f 181e
PK testpb.Balance sally/foo -> {"address":"sally","denom":"foo","amount":30}
SET 010101666f6f0073616c6c79
IDX testpb.Balance denom/address : foo/sally -> sally/foo
ORM AFTER INSERT testpb.Balance {"address":"sally","denom":"foo","amount":30}
GET 010100626f6200666f6f 1846
PK testpb.Balance bob/foo -> {"address":"bob","denom":"foo","amount":70}
GET 01010073616c6c7900666f6f 181e
PK testpb.Balance sally/foo -> {"address":"sally","denom":"foo","amount":30}
GET 010200666f6f 1064
PK testpb.Supply foo -> {"denom":"foo","amount":100}
GET 010200666f6f 1064
PK testpb.Supply foo -> {"denom":"foo","amount":100}
ORM BEFORE UPDATE testpb.Supply {"denom":"foo","amount":100} -> {"denom":"foo","amount":97}
SET 010200666f6f 1061
PK testpb.Supply foo -> {"denom":"foo","amount":97}
ORM AFTER UPDATE testpb.Supply {"denom":"foo","amount":100} -> {"denom":"foo","amount":97}
GET 01010073616c6c7900666f6f 181e
PK testpb.Balance sally/foo -> {"address":"sally","denom":"foo","amount":30}
GET 01010073616c6c7900666f6f 181e
PK testpb.Balance sally/foo -> {"address":"sally","denom":"foo","amount":30}
ORM BEFORE UPDATE testpb.Balance {"address":"sally","denom":"foo","amount":30} -> {"address":"sally","denom":"foo","amount":27}
SET 01010073616c6c7900666f6f 181b
PK testpb.Balance sally/foo -> {"address":"sally","denom":"foo","amount":27}
ORM AFTER UPDATE testpb.Balance {"address":"sally","denom":"foo","amount":30} -> {"address":"sally","denom":"foo","amount":27}
GET 01010073616c6c7900666f6f 181b
PK testpb.Balance sally/foo -> {"address":"sally","denom":"foo","amount":27}
GET 010200666f6f 1061
PK testpb.Supply foo -> {"denom":"foo","amount":97}

View File

@ -1,4 +0,0 @@
{
"testpb.Balance": [],
"testpb.Supply": []
}

View File

@ -1,81 +0,0 @@
// Package ormlist defines options for listing items from ORM indexes.
package ormlist
import (
"google.golang.org/protobuf/proto"
queryv1beta1 "cosmossdk.io/api/cosmos/base/query/v1beta1"
"cosmossdk.io/orm/internal/listinternal"
)
// Option represents a list option.
type Option = listinternal.Option
// Reverse reverses the direction of iteration. If Reverse is
// provided twice, iteration will happen in the forward direction.
func Reverse() Option {
return listinternal.FuncOption(func(options *listinternal.Options) {
options.Reverse = !options.Reverse
})
}
// Filter returns an option which applies a filter function to each item
// and skips over it when the filter function returns false.
func Filter(filterFn func(message proto.Message) bool) Option {
return listinternal.FuncOption(func(options *listinternal.Options) {
options.Filter = filterFn
})
}
// Cursor specifies a cursor after which to restart iteration. Cursor values
// are returned by iterators and in pagination results.
func Cursor(cursor CursorT) Option {
return listinternal.FuncOption(func(options *listinternal.Options) {
options.Cursor = cursor
})
}
// Paginate paginates iterator output based on the provided page request.
// The Iterator.PageRequest value on the returned iterator will be non-nil
// after Iterator.Next() returns false when this option is provided.
//
// Care should be taken when using Paginate together with Reverse and/or Cursor.
// In the case of combining Reverse and Paginate, if pageRequest.Reverse is
// true then iteration will proceed in the forward direction. This allows
// the default iteration direction for a query to be reverse with the option
// to switch this (to forward in this case) using PageRequest. If Cursor
// and Paginate are used together, whichever option is used first wins.
// If pageRequest is nil, this option will be a no-op so the caller does not
// need to do a nil check. This function defines no default limit, so if
// the caller does not define a limit, this will return all results. To
// specify a default limit use the DefaultLimit option.
func Paginate(pageRequest *queryv1beta1.PageRequest) Option {
return listinternal.FuncOption(func(options *listinternal.Options) {
if pageRequest == nil {
return
}
if pageRequest.Reverse {
// if the reverse is true we invert the direction of iteration,
// meaning if iteration was already reversed we set it forward.
options.Reverse = !options.Reverse
}
options.Cursor = pageRequest.Key
options.Offset = pageRequest.Offset
options.Limit = pageRequest.Limit
options.CountTotal = pageRequest.CountTotal
})
}
// DefaultLimit specifies a default limit for iteration. This option can be
// combined with Paginate to ensure that there is a default limit if none
// is specified in PageRequest.
func DefaultLimit(defaultLimit uint64) Option {
return listinternal.FuncOption(func(options *listinternal.Options) {
options.DefaultLimit = defaultLimit
})
}
// CursorT defines a cursor type.
type CursorT []byte
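
A minimal sketch of how these options compose at a call site, assuming the generated testpb.ExampleDuration bindings and iterator API exercised by the tests elsewhere in this changeset (the variadic ormlist.Option parameter on the generated List method is an assumption based on those tests):

package main

import (
	"fmt"
	"time"

	"google.golang.org/protobuf/proto"
	"google.golang.org/protobuf/types/known/durationpb"

	queryv1beta1 "cosmossdk.io/api/cosmos/base/query/v1beta1"
	"cosmossdk.io/orm/internal/testpb"
	"cosmossdk.io/orm/model/ormlist"
	"cosmossdk.io/orm/model/ormtable"
	"cosmossdk.io/orm/testing/ormtest"
)

func main() {
	table, err := ormtable.Build(ormtable.Options{
		MessageType: (&testpb.ExampleDuration{}).ProtoReflect().Type(),
	})
	if err != nil {
		panic(err)
	}
	store, err := testpb.NewExampleDurationTable(table)
	if err != nil {
		panic(err)
	}
	ctx := ormtable.WrapContextDefault(ormtest.NewMemoryBackend())
	if err := store.Insert(ctx, &testpb.ExampleDuration{Name: "foo", Dur: durationpb.New(time.Hour)}); err != nil {
		panic(err)
	}
	// iterate the dur index in reverse, skip rows with a nil duration,
	// and cap the page size at 10 entries
	it, err := store.List(ctx, testpb.ExampleDurationDurIndexKey{},
		ormlist.Reverse(),
		ormlist.Filter(func(msg proto.Message) bool {
			return msg.(*testpb.ExampleDuration).Dur != nil
		}),
		ormlist.Paginate(&queryv1beta1.PageRequest{Limit: 10}),
	)
	if err != nil {
		panic(err)
	}
	defer it.Close()
	for it.Next() {
		v, err := it.Value()
		if err != nil {
			panic(err)
		}
		fmt.Println(v.Name, v.Dur)
	}
}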

View File

@ -1,250 +0,0 @@
package ormtable
import (
"context"
"encoding/json"
"fmt"
"io"
"google.golang.org/protobuf/proto"
"google.golang.org/protobuf/reflect/protoreflect"
"cosmossdk.io/orm/encoding/ormkv"
"cosmossdk.io/orm/types/kv"
"cosmossdk.io/orm/types/ormerrors"
)
// autoIncrementTable is a Table implementation for tables with an
// auto-incrementing uint64 primary key.
type autoIncrementTable struct {
*tableImpl
autoIncField protoreflect.FieldDescriptor
seqCodec *ormkv.SeqCodec
}
func (t autoIncrementTable) InsertReturningPKey(ctx context.Context, message proto.Message) (newPK uint64, err error) {
backend, err := t.getWriteBackend(ctx)
if err != nil {
return 0, err
}
return t.save(ctx, backend, message, saveModeInsert)
}
func (t autoIncrementTable) Save(ctx context.Context, message proto.Message) error {
backend, err := t.getWriteBackend(ctx)
if err != nil {
return err
}
_, err = t.save(ctx, backend, message, saveModeDefault)
return err
}
func (t autoIncrementTable) Insert(ctx context.Context, message proto.Message) error {
backend, err := t.getWriteBackend(ctx)
if err != nil {
return err
}
_, err = t.save(ctx, backend, message, saveModeInsert)
return err
}
func (t autoIncrementTable) Update(ctx context.Context, message proto.Message) error {
backend, err := t.getWriteBackend(ctx)
if err != nil {
return err
}
_, err = t.save(ctx, backend, message, saveModeUpdate)
return err
}
func (t autoIncrementTable) LastInsertedSequence(ctx context.Context) (uint64, error) {
backend, err := t.getBackend(ctx)
if err != nil {
return 0, err
}
return t.curSeqValue(backend.IndexStoreReader())
}
func (t *autoIncrementTable) save(ctx context.Context, backend Backend, message proto.Message, mode saveMode) (newPK uint64, err error) {
messageRef := message.ProtoReflect()
val := messageRef.Get(t.autoIncField).Uint()
writer := newBatchIndexCommitmentWriter(backend)
defer writer.Close()
if val == 0 {
if mode == saveModeUpdate {
return 0, ormerrors.PrimaryKeyInvalidOnUpdate
}
mode = saveModeInsert
newPK, err = t.nextSeqValue(writer.IndexStore())
if err != nil {
return 0, err
}
messageRef.Set(t.autoIncField, protoreflect.ValueOfUint64(newPK))
} else {
if mode == saveModeInsert {
return 0, ormerrors.AutoIncrementKeyAlreadySet
}
mode = saveModeUpdate
}
return newPK, t.tableImpl.doSave(ctx, writer, message, mode)
}
func (t *autoIncrementTable) curSeqValue(kv kv.ReadonlyStore) (uint64, error) {
bz, err := kv.Get(t.seqCodec.Prefix())
if err != nil {
return 0, err
}
return t.seqCodec.DecodeValue(bz)
}
func (t *autoIncrementTable) nextSeqValue(kv kv.Store) (uint64, error) {
seq, err := t.curSeqValue(kv)
if err != nil {
return 0, err
}
seq++
return seq, t.setSeqValue(kv, seq)
}
func (t *autoIncrementTable) setSeqValue(kv kv.Store, seq uint64) error {
return kv.Set(t.seqCodec.Prefix(), t.seqCodec.EncodeValue(seq))
}
func (t autoIncrementTable) EncodeEntry(entry ormkv.Entry) (k, v []byte, err error) {
if _, ok := entry.(*ormkv.SeqEntry); ok {
return t.seqCodec.EncodeEntry(entry)
}
return t.tableImpl.EncodeEntry(entry)
}
func (t autoIncrementTable) ValidateJSON(reader io.Reader) error {
return t.decodeAutoIncJSON(nil, reader, func(message proto.Message, maxSeq uint64) error {
messageRef := message.ProtoReflect()
pkey := messageRef.Get(t.autoIncField).Uint()
if pkey > maxSeq {
return fmt.Errorf("invalid auto increment primary key %d, expected a value <= %d, the highest "+
"sequence number", pkey, maxSeq)
}
if t.customJSONValidator != nil {
return t.customJSONValidator(message)
}
return DefaultJSONValidator(message)
})
}
func (t autoIncrementTable) ImportJSON(ctx context.Context, reader io.Reader) error {
backend, err := t.getWriteBackend(ctx)
if err != nil {
return err
}
return t.decodeAutoIncJSON(backend, reader, func(message proto.Message, maxSeq uint64) error {
messageRef := message.ProtoReflect()
pkey := messageRef.Get(t.autoIncField).Uint()
if pkey == 0 {
// we don't have a primary key in the JSON, so we call save in insert mode to
// generate one
_, err = t.save(ctx, backend, message, saveModeInsert)
return err
}
if pkey > maxSeq {
return fmt.Errorf("invalid auto increment primary key %d, expected a value <= %d, the highest "+
"sequence number", pkey, maxSeq)
}
// we do have a primary key and calling Save will fail because it expects
// either no primary key or SAVE_MODE_UPDATE. So instead we drop one level
// down and insert using tableImpl which doesn't know about
// auto-incrementing primary keys.
return t.tableImpl.save(ctx, backend, message, saveModeInsert)
})
}
func (t autoIncrementTable) decodeAutoIncJSON(backend Backend, reader io.Reader, onMsg func(message proto.Message, maxID uint64) error) error {
decoder, err := t.startDecodeJSON(reader)
if err != nil {
return err
}
var seq uint64
return t.doDecodeJSON(decoder,
func(message json.RawMessage) bool {
err = json.Unmarshal(message, &seq)
if err == nil {
// writer is nil during validation
if backend != nil {
writer := newBatchIndexCommitmentWriter(backend)
defer writer.Close()
err = t.setSeqValue(writer.IndexStore(), seq)
if err != nil {
panic(err)
}
err = writer.Write()
if err != nil {
panic(err)
}
}
return true
}
return false
},
func(message proto.Message) error {
return onMsg(message, seq)
})
}
func (t autoIncrementTable) ExportJSON(ctx context.Context, writer io.Writer) error {
backend, err := t.getBackend(ctx)
if err != nil {
return err
}
_, err = writer.Write([]byte("["))
if err != nil {
return err
}
seq, err := t.curSeqValue(backend.IndexStoreReader())
if err != nil {
return err
}
start := true
if seq != 0 {
start = false
bz, err := json.Marshal(seq)
if err != nil {
return err
}
_, err = writer.Write(bz)
if err != nil {
return err
}
}
return t.doExportJSON(ctx, writer, start)
}
func (t *autoIncrementTable) GetTable(message proto.Message) Table {
if message.ProtoReflect().Descriptor().FullName() == t.MessageType().Descriptor().FullName() {
return t
}
return nil
}
var _ AutoIncrementTable = &autoIncrementTable{}
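
A minimal sketch of the semantics implemented by save above, assuming the generated testpb bindings used in the test that follows: a zero primary key inserts and assigns the next sequence value, a non-zero key updates an existing row, and LastInsertedSequence reports the current sequence.

package main

import (
	"fmt"

	"cosmossdk.io/orm/internal/testpb"
	"cosmossdk.io/orm/model/ormtable"
	"cosmossdk.io/orm/testing/ormtest"
)

func main() {
	table, err := ormtable.Build(ormtable.Options{
		MessageType: (&testpb.ExampleAutoIncrementTable{}).ProtoReflect().Type(),
	})
	if err != nil {
		panic(err)
	}
	// tables whose primary key is declared auto-increment implement AutoIncrementTable
	autoTable := table.(ormtable.AutoIncrementTable)
	ctx := ormtable.WrapContextDefault(ormtest.NewMemoryBackend())

	row := &testpb.ExampleAutoIncrementTable{X: "foo", Y: 5}
	pk, err := autoTable.InsertReturningPKey(ctx, row) // row.Id is set to pk
	if err != nil {
		panic(err)
	}
	row.Y = 6
	if err := autoTable.Update(ctx, row); err != nil { // non-zero Id: update in place
		panic(err)
	}
	seq, err := autoTable.LastInsertedSequence(ctx)
	if err != nil {
		panic(err)
	}
	fmt.Println(pk, seq) // both 1 on a fresh backend
}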

View File

@ -1,107 +0,0 @@
package ormtable_test
import (
"bytes"
"context"
"os"
"strings"
"testing"
"gotest.tools/v3/assert"
"gotest.tools/v3/golden"
"cosmossdk.io/orm/internal/testkv"
"cosmossdk.io/orm/internal/testpb"
"cosmossdk.io/orm/model/ormtable"
)
func TestAutoIncrementScenario(t *testing.T) {
table, err := ormtable.Build(ormtable.Options{
MessageType: (&testpb.ExampleAutoIncrementTable{}).ProtoReflect().Type(),
})
assert.NilError(t, err)
autoTable, ok := table.(ormtable.AutoIncrementTable)
assert.Assert(t, ok)
// first run tests with a split index-commitment store
runAutoIncrementScenario(t, autoTable, ormtable.WrapContextDefault(testkv.NewSplitMemBackend()))
// now run with shared store and debugging
debugBuf := &strings.Builder{}
store := testkv.NewDebugBackend(
testkv.NewSharedMemBackend(),
&testkv.EntryCodecDebugger{
EntryCodec: table,
Print: func(s string) { debugBuf.WriteString(s + "\n") },
},
)
runAutoIncrementScenario(t, autoTable, ormtable.WrapContextDefault(store))
golden.Assert(t, debugBuf.String(), "test_auto_inc.golden")
checkEncodeDecodeEntries(t, table, store.IndexStoreReader())
}
func runAutoIncrementScenario(t *testing.T, table ormtable.AutoIncrementTable, ctx context.Context) {
t.Helper()
store, err := testpb.NewExampleAutoIncrementTableTable(table)
assert.NilError(t, err)
err = store.Save(ctx, &testpb.ExampleAutoIncrementTable{Id: 5})
assert.ErrorContains(t, err, "not found")
ex1 := &testpb.ExampleAutoIncrementTable{X: "foo", Y: 5}
assert.NilError(t, store.Save(ctx, ex1))
assert.Equal(t, uint64(1), ex1.Id)
curSeq, err := table.LastInsertedSequence(ctx)
assert.NilError(t, err)
assert.Equal(t, curSeq, uint64(1))
ex2 := &testpb.ExampleAutoIncrementTable{X: "bar", Y: 10}
newID, err := table.InsertReturningPKey(ctx, ex2)
assert.NilError(t, err)
assert.Equal(t, uint64(2), ex2.Id)
assert.Equal(t, newID, ex2.Id)
curSeq, err = table.LastInsertedSequence(ctx)
assert.NilError(t, err)
assert.Equal(t, curSeq, uint64(2))
buf := &bytes.Buffer{}
assert.NilError(t, table.ExportJSON(ctx, buf))
assert.NilError(t, table.ValidateJSON(bytes.NewReader(buf.Bytes())))
store2 := ormtable.WrapContextDefault(testkv.NewSplitMemBackend())
assert.NilError(t, table.ImportJSON(store2, bytes.NewReader(buf.Bytes())))
assertTablesEqual(t, table, ctx, store2)
// test edge case where we have deleted all entities but we're still exporting the sequence number
assert.NilError(t, table.Delete(ctx, ex1))
assert.NilError(t, table.Delete(ctx, ex2))
buf = &bytes.Buffer{}
assert.NilError(t, table.ExportJSON(ctx, buf))
assert.NilError(t, table.ValidateJSON(bytes.NewReader(buf.Bytes())))
golden.Assert(t, buf.String(), "trivial_auto_inc_export.golden")
store3 := ormtable.WrapContextDefault(testkv.NewSplitMemBackend())
assert.NilError(t, table.ImportJSON(store3, bytes.NewReader(buf.Bytes())))
ex1.Id = 0
assert.NilError(t, table.Insert(store3, ex1))
assert.Equal(t, uint64(3), ex1.Id) // should equal 3 because the sequence number 2 should have been imported from JSON
curSeq, err = table.LastInsertedSequence(store3)
assert.NilError(t, err)
assert.Equal(t, curSeq, uint64(3))
}
func TestBadJSON(t *testing.T) {
table, err := ormtable.Build(ormtable.Options{
MessageType: (&testpb.ExampleAutoIncrementTable{}).ProtoReflect().Type(),
})
assert.NilError(t, err)
store := ormtable.WrapContextDefault(testkv.NewSplitMemBackend())
f, err := os.Open("testdata/bad_auto_inc.json")
assert.NilError(t, err)
assert.ErrorContains(t, table.ImportJSON(store, f), "invalid auto increment primary key")
f, err = os.Open("testdata/bad_auto_inc2.json")
assert.NilError(t, err)
assert.ErrorContains(t, table.ImportJSON(store, f), "invalid auto increment primary key")
}

View File

@ -1,195 +0,0 @@
package ormtable
import (
"context"
"errors"
"fmt"
"cosmossdk.io/core/store"
"cosmossdk.io/orm/types/kv"
)
// ReadBackend defines the type used for read-only ORM operations.
type ReadBackend interface {
// CommitmentStoreReader returns the reader for the commitment store.
CommitmentStoreReader() kv.ReadonlyStore
// IndexStoreReader returns the reader for the index store.
IndexStoreReader() kv.ReadonlyStore
private()
}
// Backend defines the type used for read-write ORM operations.
// Unlike ReadBackend, write access to the underlying kv-store
// is hidden so that this can be fully encapsulated by the ORM.
type Backend interface {
ReadBackend
// CommitmentStore returns the merklized commitment store.
CommitmentStore() store.KVStore
// IndexStore returns the index store if a separate one exists,
// otherwise it returns the commitment store.
IndexStore() store.KVStore
// ValidateHooks returns a ValidateHooks instance or nil.
ValidateHooks() ValidateHooks
// WithValidateHooks returns a copy of this backend with the provided validate hooks.
WithValidateHooks(ValidateHooks) Backend
// WriteHooks returns a WriteHooks instance or nil.
WriteHooks() WriteHooks
// WithWriteHooks returns a copy of this backend with the provided write hooks.
WithWriteHooks(WriteHooks) Backend
}
// ReadBackendOptions defines options for creating a ReadBackend.
// Read context can optionally define two stores - a commitment store
// that is backed by a merkle tree and an index store that isn't.
// If the index store is not defined, the commitment store will be
// used for all operations.
type ReadBackendOptions struct {
// CommitmentStoreReader is a reader for the commitment store.
CommitmentStoreReader kv.ReadonlyStore
// IndexStoreReader is an optional reader for the index store.
// If it is nil the CommitmentStoreReader will be used.
IndexStoreReader kv.ReadonlyStore
}
type readBackend struct {
commitmentReader kv.ReadonlyStore
indexReader kv.ReadonlyStore
}
func (r readBackend) CommitmentStoreReader() kv.ReadonlyStore {
return r.commitmentReader
}
func (r readBackend) IndexStoreReader() kv.ReadonlyStore {
return r.indexReader
}
func (readBackend) private() {}
// NewReadBackend creates a new ReadBackend.
func NewReadBackend(options ReadBackendOptions) ReadBackend {
indexReader := options.IndexStoreReader
if indexReader == nil {
indexReader = options.CommitmentStoreReader
}
return &readBackend{
commitmentReader: options.CommitmentStoreReader,
indexReader: indexReader,
}
}
type backend struct {
commitmentStore store.KVStore
indexStore store.KVStore
validateHooks ValidateHooks
writeHooks WriteHooks
}
func (c backend) ValidateHooks() ValidateHooks {
return c.validateHooks
}
func (c backend) WithValidateHooks(hooks ValidateHooks) Backend {
c.validateHooks = hooks
return c
}
func (c backend) WriteHooks() WriteHooks {
return c.writeHooks
}
func (c backend) WithWriteHooks(hooks WriteHooks) Backend {
c.writeHooks = hooks
return c
}
func (backend) private() {}
func (c backend) CommitmentStoreReader() kv.ReadonlyStore {
return c.commitmentStore
}
func (c backend) IndexStoreReader() kv.ReadonlyStore {
return c.indexStore
}
func (c backend) CommitmentStore() store.KVStore {
return c.commitmentStore
}
func (c backend) IndexStore() store.KVStore {
return c.indexStore
}
// BackendOptions defines options for creating a Backend.
// Context can optionally define two stores - a commitment store
// that is backed by a merkle tree and an index store that isn't.
// If the index store is not defined, the commitment store will be
// used for all operations.
type BackendOptions struct {
// CommitmentStore is the commitment store.
CommitmentStore store.KVStore
// IndexStore is the optional index store.
// If it is nil the CommitmentStore will be used.
IndexStore store.KVStore
// ValidateHooks are optional hooks into ORM insert, update and delete operations.
ValidateHooks ValidateHooks
WriteHooks WriteHooks
}
// NewBackend creates a new Backend.
func NewBackend(options BackendOptions) Backend {
indexStore := options.IndexStore
if indexStore == nil {
indexStore = options.CommitmentStore
}
return &backend{
commitmentStore: options.CommitmentStore,
indexStore: indexStore,
validateHooks: options.ValidateHooks,
writeHooks: options.WriteHooks,
}
}
// BackendResolver resolves a backend from the context or returns an error.
// Callers should type cast the returned ReadBackend to Backend to test whether
// the backend is writable.
type BackendResolver func(context.Context) (ReadBackend, error)
// WrapContextDefault performs the default wrapping of a backend in a context.
// This should be used primarily for testing purposes and production code
// should use some other framework specific wrapping (for instance using
// "store keys").
func WrapContextDefault(backend ReadBackend) context.Context {
return context.WithValue(context.Background(), defaultContextKey, backend)
}
type contextKeyType string
var defaultContextKey = contextKeyType("backend")
func getBackendDefault(ctx context.Context) (ReadBackend, error) {
value := ctx.Value(defaultContextKey)
if value == nil {
return nil, errors.New("can't resolve backend")
}
backend, ok := value.(ReadBackend)
if !ok {
return nil, fmt.Errorf("expected value of type %T, instead got %T", backend, value)
}
return backend, nil
}
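
A minimal sketch of wiring a backend outside of tests, assuming the testkv.TestStore and coretesting.NewMemDB helpers used elsewhere in this changeset; production code would substitute framework-specific stores and context wrapping, as the comment above notes.

package main

import (
	coretesting "cosmossdk.io/core/testing"
	"cosmossdk.io/orm/internal/testkv"
	"cosmossdk.io/orm/model/ormtable"
)

func main() {
	// separate commitment (merklized) and index stores; if IndexStore were nil
	// the commitment store would be used for both
	backend := ormtable.NewBackend(ormtable.BackendOptions{
		CommitmentStore: testkv.TestStore{Db: coretesting.NewMemDB()},
		IndexStore:      testkv.TestStore{Db: coretesting.NewMemDB()},
	})
	// default context wrapping, intended mainly for tests
	ctx := ormtable.WrapContextDefault(backend)
	_ = ctx
}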

View File

@ -1,130 +0,0 @@
package ormtable
import (
"cosmossdk.io/core/store"
"cosmossdk.io/orm/types/kv"
)
type batchIndexCommitmentWriter struct {
Backend
commitmentWriter *batchStoreWriter
indexWriter *batchStoreWriter
}
func newBatchIndexCommitmentWriter(store Backend) *batchIndexCommitmentWriter {
return &batchIndexCommitmentWriter{
Backend: store,
commitmentWriter: &batchStoreWriter{
ReadonlyStore: store.CommitmentStoreReader(),
curBuf: make([]*batchWriterEntry, 0, capacity),
},
indexWriter: &batchStoreWriter{
ReadonlyStore: store.IndexStoreReader(),
curBuf: make([]*batchWriterEntry, 0, capacity),
},
}
}
func (w *batchIndexCommitmentWriter) CommitmentStore() store.KVStore {
return w.commitmentWriter
}
func (w *batchIndexCommitmentWriter) IndexStore() store.KVStore {
return w.indexWriter
}
// Write flushes any pending writes.
func (w *batchIndexCommitmentWriter) Write() error {
err := flushWrites(w.Backend.CommitmentStore(), w.commitmentWriter)
if err != nil {
return err
}
err = flushWrites(w.Backend.IndexStore(), w.indexWriter)
if err != nil {
return err
}
// clear writes
w.Close()
return err
}
func flushWrites(store kv.Store, writer *batchStoreWriter) error {
for _, buf := range writer.prevBufs {
err := flushBuf(store, buf)
if err != nil {
return err
}
}
return flushBuf(store, writer.curBuf)
}
func flushBuf(store kv.Store, writes []*batchWriterEntry) error {
for _, write := range writes {
switch {
case write.hookCall != nil:
write.hookCall()
case !write.delete:
err := store.Set(write.key, write.value)
if err != nil {
return err
}
default:
err := store.Delete(write.key)
if err != nil {
return err
}
}
}
return nil
}
// Close discards any pending writes and should generally be called using
// a defer statement.
func (w *batchIndexCommitmentWriter) Close() {
w.commitmentWriter.prevBufs = nil
w.commitmentWriter.curBuf = nil
w.indexWriter.prevBufs = nil
w.indexWriter.curBuf = nil
}
type batchWriterEntry struct {
key, value []byte
delete bool
hookCall func()
}
type batchStoreWriter struct {
kv.ReadonlyStore
prevBufs [][]*batchWriterEntry
curBuf []*batchWriterEntry
}
const capacity = 16
func (b *batchStoreWriter) Set(key, value []byte) error {
b.append(&batchWriterEntry{key: key, value: value})
return nil
}
func (b *batchStoreWriter) Delete(key []byte) error {
b.append(&batchWriterEntry{key: key, delete: true})
return nil
}
func (w *batchIndexCommitmentWriter) enqueueHook(f func()) {
w.indexWriter.append(&batchWriterEntry{hookCall: f})
}
func (b *batchStoreWriter) append(entry *batchWriterEntry) {
if len(b.curBuf) == capacity {
b.prevBufs = append(b.prevBufs, b.curBuf)
b.curBuf = make([]*batchWriterEntry, 0, capacity)
}
b.curBuf = append(b.curBuf, entry)
}
var _ Backend = &batchIndexCommitmentWriter{}

View File

@ -1,330 +0,0 @@
package ormtable_test
import (
"context"
"errors"
"fmt"
"testing"
dbm "github.com/cosmos/cosmos-db"
"google.golang.org/protobuf/proto"
"gotest.tools/v3/assert"
"cosmossdk.io/core/store"
coretesting "cosmossdk.io/core/testing"
"cosmossdk.io/orm/internal/testkv"
"cosmossdk.io/orm/internal/testpb"
"cosmossdk.io/orm/model/ormtable"
"cosmossdk.io/orm/testing/ormtest"
"cosmossdk.io/orm/types/kv"
)
func initBalanceTable(tb testing.TB) testpb.BalanceTable {
tb.Helper()
table, err := ormtable.Build(ormtable.Options{
MessageType: (&testpb.Balance{}).ProtoReflect().Type(),
})
assert.NilError(tb, err)
balanceTable, err := testpb.NewBalanceTable(table)
assert.NilError(tb, err)
return balanceTable
}
func BenchmarkMemory(b *testing.B) {
b.Helper()
bench(b, func(tb testing.TB) ormtable.Backend {
tb.Helper()
return ormtest.NewMemoryBackend()
})
}
func BenchmarkLevelDB(b *testing.B) {
bench(b, testkv.NewGoLevelDBBackend)
}
func bench(b *testing.B, newBackend func(testing.TB) ormtable.Backend) {
b.Helper()
b.Run("insert", func(b *testing.B) {
b.StopTimer()
ctx := ormtable.WrapContextDefault(newBackend(b))
b.StartTimer()
benchInsert(b, ctx)
})
b.Run("update", func(b *testing.B) {
b.StopTimer()
ctx := ormtable.WrapContextDefault(newBackend(b))
benchInsert(b, ctx)
b.StartTimer()
benchUpdate(b, ctx)
})
b.Run("get", func(b *testing.B) {
b.StopTimer()
ctx := ormtable.WrapContextDefault(newBackend(b))
benchInsert(b, ctx)
b.StartTimer()
benchGet(b, ctx)
})
b.Run("delete", func(b *testing.B) {
b.StopTimer()
ctx := ormtable.WrapContextDefault(newBackend(b))
benchInsert(b, ctx)
b.StartTimer()
benchDelete(b, ctx)
})
}
func benchInsert(b *testing.B, ctx context.Context) {
b.Helper()
balanceTable := initBalanceTable(b)
for i := 0; i < b.N; i++ {
assert.NilError(b, balanceTable.Insert(ctx, &testpb.Balance{
Address: fmt.Sprintf("acct%d", i),
Denom: "bar",
Amount: 10,
}))
}
}
func benchUpdate(b *testing.B, ctx context.Context) {
b.Helper()
balanceTable := initBalanceTable(b)
for i := 0; i < b.N; i++ {
assert.NilError(b, balanceTable.Update(ctx, &testpb.Balance{
Address: fmt.Sprintf("acct%d", i),
Denom: "bar",
Amount: 11,
}))
}
}
func benchGet(b *testing.B, ctx context.Context) {
b.Helper()
balanceTable := initBalanceTable(b)
for i := 0; i < b.N; i++ {
balance, err := balanceTable.Get(ctx, fmt.Sprintf("acct%d", i), "bar")
assert.NilError(b, err)
assert.Equal(b, uint64(10), balance.Amount)
}
}
func benchDelete(b *testing.B, ctx context.Context) {
b.Helper()
balanceTable := initBalanceTable(b)
for i := 0; i < b.N; i++ {
assert.NilError(b, balanceTable.Delete(ctx, &testpb.Balance{
Address: fmt.Sprintf("acct%d", i),
Denom: "bar",
}))
}
}
//
// Manually written versions of insert, update, delete and get for testpb.Balance
//
const (
addressDenomPrefix byte = iota
denomAddressPrefix
)
func insertBalance(store kv.Store, balance *testpb.Balance) error {
denom := balance.Denom
balance.Denom = ""
addr := balance.Address
balance.Address = ""
addressDenomKey := []byte{addressDenomPrefix}
addressDenomKey = append(addressDenomKey, []byte(addr)...)
addressDenomKey = append(addressDenomKey, 0x0)
addressDenomKey = append(addressDenomKey, []byte(denom)...)
has, err := store.Has(addressDenomKey)
if err != nil {
return err
}
if has {
return errors.New("already exists")
}
bz, err := proto.Marshal(balance)
if err != nil {
return err
}
balance.Denom = denom
balance.Address = addr
err = store.Set(addressDenomKey, bz)
if err != nil {
return err
}
// set denom address index
denomAddressKey := []byte{denomAddressPrefix}
denomAddressKey = append(denomAddressKey, []byte(balance.Denom)...)
denomAddressKey = append(denomAddressKey, 0x0)
denomAddressKey = append(denomAddressKey, []byte(balance.Address)...)
err = store.Set(denomAddressKey, []byte{})
if err != nil {
return err
}
return nil
}
func updateBalance(store kv.Store, balance *testpb.Balance) error {
denom := balance.Denom
balance.Denom = ""
addr := balance.Address
balance.Address = ""
bz, err := proto.Marshal(balance)
if err != nil {
return err
}
balance.Denom = denom
balance.Address = addr
addressDenomKey := []byte{addressDenomPrefix}
addressDenomKey = append(addressDenomKey, []byte(addr)...)
addressDenomKey = append(addressDenomKey, 0x0)
addressDenomKey = append(addressDenomKey, []byte(denom)...)
return store.Set(addressDenomKey, bz)
}
func deleteBalance(store kv.Store, balance *testpb.Balance) error {
denom := balance.Denom
addr := balance.Address
addressDenomKey := []byte{addressDenomPrefix}
addressDenomKey = append(addressDenomKey, []byte(addr)...)
addressDenomKey = append(addressDenomKey, 0x0)
addressDenomKey = append(addressDenomKey, []byte(denom)...)
err := store.Delete(addressDenomKey)
if err != nil {
return err
}
denomAddressKey := []byte{denomAddressPrefix}
denomAddressKey = append(denomAddressKey, []byte(balance.Denom)...)
denomAddressKey = append(denomAddressKey, 0x0)
denomAddressKey = append(denomAddressKey, []byte(balance.Address)...)
return store.Delete(denomAddressKey)
}
func getBalance(store kv.Store, address, denom string) (*testpb.Balance, error) {
addressDenomKey := []byte{addressDenomPrefix}
addressDenomKey = append(addressDenomKey, []byte(address)...)
addressDenomKey = append(addressDenomKey, 0x0)
addressDenomKey = append(addressDenomKey, []byte(denom)...)
bz, err := store.Get(addressDenomKey)
if err != nil {
return nil, err
}
if bz == nil {
return nil, errors.New("not found")
}
balance := testpb.Balance{}
err = proto.Unmarshal(bz, &balance)
if err != nil {
return nil, err
}
balance.Address = address
balance.Denom = denom
return &balance, nil
}
func BenchmarkManualInsertMemory(b *testing.B) {
benchManual(b, func() (store.KVStore, error) {
return testkv.TestStore{Db: coretesting.NewMemDB()}, nil
})
}
func BenchmarkManualInsertLevelDB(b *testing.B) {
benchManual(b, func() (store.KVStore, error) {
db, err := dbm.NewGoLevelDB("test", b.TempDir(), nil)
return testkv.TestStore{Db: db}, err
})
}
func benchManual(b *testing.B, newStore func() (store.KVStore, error)) {
b.Helper()
b.Run("insert", func(b *testing.B) {
b.StopTimer()
store, err := newStore()
assert.NilError(b, err)
b.StartTimer()
benchManualInsert(b, store)
})
b.Run("update", func(b *testing.B) {
b.StopTimer()
store, err := newStore()
assert.NilError(b, err)
benchManualInsert(b, store)
b.StartTimer()
benchManualUpdate(b, store)
})
b.Run("get", func(b *testing.B) {
b.StopTimer()
store, err := newStore()
assert.NilError(b, err)
benchManualInsert(b, store)
b.StartTimer()
benchManualGet(b, store)
})
b.Run("delete", func(b *testing.B) {
b.StopTimer()
store, err := newStore()
assert.NilError(b, err)
benchManualInsert(b, store)
b.StartTimer()
benchManualDelete(b, store)
})
}
func benchManualInsert(b *testing.B, store store.KVStore) {
b.Helper()
for i := 0; i < b.N; i++ {
assert.NilError(b, insertBalance(store, &testpb.Balance{
Address: fmt.Sprintf("acct%d", i),
Denom: "bar",
Amount: 10,
}))
}
}
func benchManualUpdate(b *testing.B, store store.KVStore) {
b.Helper()
for i := 0; i < b.N; i++ {
assert.NilError(b, updateBalance(store, &testpb.Balance{
Address: fmt.Sprintf("acct%d", i),
Denom: "bar",
Amount: 11,
}))
}
}
func benchManualDelete(b *testing.B, store store.KVStore) {
b.Helper()
for i := 0; i < b.N; i++ {
assert.NilError(b, deleteBalance(store, &testpb.Balance{
Address: fmt.Sprintf("acct%d", i),
Denom: "bar",
}))
}
}
func benchManualGet(b *testing.B, store store.KVStore) {
b.Helper()
for i := 0; i < b.N; i++ {
balance, err := getBalance(store, fmt.Sprintf("acct%d", i), "bar")
assert.NilError(b, err)
assert.Equal(b, uint64(10), balance.Amount)
}
}

View File

@ -1,289 +0,0 @@
package ormtable
import (
"fmt"
"google.golang.org/protobuf/proto"
"google.golang.org/protobuf/reflect/protoreflect"
"google.golang.org/protobuf/reflect/protoregistry"
ormv1 "cosmossdk.io/api/cosmos/orm/v1"
"cosmossdk.io/orm/encoding/encodeutil"
"cosmossdk.io/orm/encoding/ormkv"
"cosmossdk.io/orm/internal/fieldnames"
"cosmossdk.io/orm/types/ormerrors"
)
const (
primaryKeyID uint32 = 0
indexIDLimit uint32 = 32768
seqID = indexIDLimit
)
// Options are options for building a Table.
type Options struct {
// Prefix is an optional prefix used to build the table's prefix.
Prefix []byte
// MessageType is the protobuf message type of the table.
MessageType protoreflect.MessageType
// TableDescriptor is an optional table descriptor to be explicitly used
// with the table. Generally this should be nil and the table descriptor
// should be pulled from the table message option. TableDescriptor
// cannot be used together with SingletonDescriptor.
TableDescriptor *ormv1.TableDescriptor
// SingletonDescriptor is an optional singleton descriptor to be explicitly used.
// Generally this should be nil and the table descriptor
// should be pulled from the singleton message option. SingletonDescriptor
// cannot be used together with TableDescriptor.
SingletonDescriptor *ormv1.SingletonDescriptor
// TypeResolver is an optional type resolver to be used when unmarshaling
// protobuf messages.
TypeResolver TypeResolver
// JSONValidator is an optional validator that can be used for validating
// messages when using ValidateJSON. If it is nil, DefaultJSONValidator
// will be used.
JSONValidator func(proto.Message) error
// BackendResolver is an optional function which retrieves a Backend from the context.
// If it is nil, the default behavior will be to attempt to retrieve a
// backend using the method that WrapContextDefault uses. This method
// can be used to implement things like "store keys" which would allow a
// table to only be used with a specific backend and to hide direct
// access to the backend other than through the table interface.
// Mutating operations will attempt to cast ReadBackend to Backend and
// will return an error if that fails.
BackendResolver BackendResolver
}
// TypeResolver is an interface that can be used for the protoreflect.UnmarshalOptions.Resolver option.
type TypeResolver interface {
protoregistry.MessageTypeResolver
protoregistry.ExtensionTypeResolver
}
// Build builds a Table instance from the provided Options.
func Build(options Options) (Table, error) {
messageDescriptor := options.MessageType.Descriptor()
backendResolver := options.BackendResolver
if backendResolver == nil {
backendResolver = getBackendDefault
}
table := &tableImpl{
primaryKeyIndex: &primaryKeyIndex{
indexers: []indexer{},
getBackend: backendResolver,
},
indexes: []Index{},
indexesByFields: map[fieldnames.FieldNames]concreteIndex{},
uniqueIndexesByFields: map[fieldnames.FieldNames]UniqueIndex{},
entryCodecsByID: map[uint32]ormkv.EntryCodec{},
indexesByID: map[uint32]Index{},
typeResolver: options.TypeResolver,
customJSONValidator: options.JSONValidator,
}
pkIndex := table.primaryKeyIndex
tableDesc := options.TableDescriptor
if tableDesc == nil {
tableDesc = proto.GetExtension(messageDescriptor.Options(), ormv1.E_Table).(*ormv1.TableDescriptor)
}
singletonDesc := options.SingletonDescriptor
if singletonDesc == nil {
singletonDesc = proto.GetExtension(messageDescriptor.Options(), ormv1.E_Singleton).(*ormv1.SingletonDescriptor)
}
switch {
case tableDesc != nil:
if singletonDesc != nil {
return nil, ormerrors.InvalidTableDefinition.Wrapf("message %s cannot be declared as both a table and a singleton", messageDescriptor.FullName())
}
case singletonDesc != nil:
if singletonDesc.Id == 0 {
return nil, ormerrors.InvalidTableId.Wrapf("%s", messageDescriptor.FullName())
}
prefix := encodeutil.AppendVarUInt32(options.Prefix, singletonDesc.Id)
pkCodec, err := ormkv.NewPrimaryKeyCodec(
prefix,
options.MessageType,
nil,
proto.UnmarshalOptions{Resolver: options.TypeResolver},
)
if err != nil {
return nil, err
}
pkIndex.PrimaryKeyCodec = pkCodec
table.tablePrefix = prefix
table.tableID = singletonDesc.Id
return &singleton{table}, nil
default:
return nil, ormerrors.NoTableDescriptor.Wrapf("missing table descriptor for %s", messageDescriptor.FullName())
}
tableID := tableDesc.Id
if tableID == 0 {
return nil, ormerrors.InvalidTableId.Wrapf("table %s", messageDescriptor.FullName())
}
prefix := options.Prefix
prefix = encodeutil.AppendVarUInt32(prefix, tableID)
table.tablePrefix = prefix
table.tableID = tableID
if tableDesc.PrimaryKey == nil {
return nil, ormerrors.MissingPrimaryKey.Wrap(string(messageDescriptor.FullName()))
}
pkFields := fieldnames.CommaSeparatedFieldNames(tableDesc.PrimaryKey.Fields)
table.primaryKeyIndex.fields = pkFields
pkFieldNames := pkFields.Names()
if len(pkFieldNames) == 0 {
return nil, ormerrors.InvalidTableDefinition.Wrapf("empty primary key fields for %s", messageDescriptor.FullName())
}
pkPrefix := encodeutil.AppendVarUInt32(prefix, primaryKeyID)
pkCodec, err := ormkv.NewPrimaryKeyCodec(
pkPrefix,
options.MessageType,
pkFieldNames,
proto.UnmarshalOptions{Resolver: options.TypeResolver},
)
if err != nil {
return nil, err
}
pkIndex.PrimaryKeyCodec = pkCodec
table.indexesByFields[pkFields] = pkIndex
table.uniqueIndexesByFields[pkFields] = pkIndex
table.entryCodecsByID[primaryKeyID] = pkIndex
table.indexesByID[primaryKeyID] = pkIndex
table.indexes = append(table.indexes, pkIndex)
for _, idxDesc := range tableDesc.Index {
id := idxDesc.Id
if id == 0 || id >= indexIDLimit {
return nil, ormerrors.InvalidIndexId.Wrapf("index on table %s with fields %s, invalid id %d", messageDescriptor.FullName(), idxDesc.Fields, id)
}
if _, ok := table.entryCodecsByID[id]; ok {
return nil, ormerrors.DuplicateIndexId.Wrapf("id %d on table %s", id, messageDescriptor.FullName())
}
idxFields := fieldnames.CommaSeparatedFieldNames(idxDesc.Fields)
idxPrefix := encodeutil.AppendVarUInt32(prefix, id)
var index concreteIndex
// altNames contains all the alternative "names" of this index
altNames := map[fieldnames.FieldNames]bool{idxFields: true}
if idxDesc.Unique && isNonTrivialUniqueKey(idxFields.Names(), pkFieldNames) {
uniqCdc, err := ormkv.NewUniqueKeyCodec(
idxPrefix,
options.MessageType,
idxFields.Names(),
pkFieldNames,
)
if err != nil {
return nil, err
}
uniqIdx := &uniqueKeyIndex{
UniqueKeyCodec: uniqCdc,
fields: idxFields,
primaryKey: pkIndex,
getReadBackend: backendResolver,
}
table.uniqueIndexesByFields[idxFields] = uniqIdx
index = uniqIdx
} else {
idxCdc, err := ormkv.NewIndexKeyCodec(
idxPrefix,
options.MessageType,
idxFields.Names(),
pkFieldNames,
)
if err != nil {
return nil, err
}
index = &indexKeyIndex{
IndexKeyCodec: idxCdc,
fields: idxFields,
primaryKey: pkIndex,
getReadBackend: backendResolver,
}
// non-unique indexes can sometimes be named by several sub-lists of
// fields and we need to handle all of them. For example, consider
// a primary key for fields "a,b,c" and an index on field "c". Because the
// rest of the primary key gets appended to the index key, the index for "c"
// is actually stored as "c,a,b". So this index can be referred to
// by the fields "c", "c,a", or "c,a,b".
allFields := index.GetFieldNames()
allFieldNames := fieldnames.FieldsFromNames(allFields)
altNames[allFieldNames] = true
for i := 1; i < len(allFields); i++ {
altName := fieldnames.FieldsFromNames(allFields[:i])
if altNames[altName] {
continue
}
// we check by generating a codec for each sub-list of fields,
// then we see if the full list of fields matches.
altIdxCdc, err := ormkv.NewIndexKeyCodec(
idxPrefix,
options.MessageType,
allFields[:i],
pkFieldNames,
)
if err != nil {
return nil, err
}
if fieldnames.FieldsFromNames(altIdxCdc.GetFieldNames()) == allFieldNames {
altNames[altName] = true
}
}
}
for name := range altNames {
if _, ok := table.indexesByFields[name]; ok {
return nil, fmt.Errorf("duplicate index for fields %s", name)
}
table.indexesByFields[name] = index
}
table.entryCodecsByID[id] = index
table.indexesByID[id] = index
table.indexes = append(table.indexes, index)
table.indexers = append(table.indexers, index.(indexer))
}
if tableDesc.PrimaryKey.AutoIncrement {
autoIncField := pkCodec.GetFieldDescriptors()[0]
// an auto-increment primary key must be a single uint64 field
if len(pkFieldNames) != 1 || autoIncField.Kind() != protoreflect.Uint64Kind {
return nil, ormerrors.InvalidAutoIncrementKey.Wrapf("field %s", autoIncField.FullName())
}
seqPrefix := encodeutil.AppendVarUInt32(prefix, seqID)
seqCodec := ormkv.NewSeqCodec(options.MessageType, seqPrefix)
table.entryCodecsByID[seqID] = seqCodec
return &autoIncrementTable{
tableImpl: table,
autoIncField: autoIncField,
seqCodec: seqCodec,
}, nil
}
return table, nil
}
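
A minimal sketch of Build with an explicit table prefix, assuming the generated testpb.Balance bindings used throughout this changeset; the Prefix bytes are prepended before the varint table and index ids, as AppendVarUInt32 does above.

package main

import (
	"cosmossdk.io/orm/internal/testpb"
	"cosmossdk.io/orm/model/ormtable"
	"cosmossdk.io/orm/testing/ormtest"
)

func main() {
	// key prefixes become Prefix || varint(table id) || varint(index id)
	table, err := ormtable.Build(ormtable.Options{
		Prefix:      []byte{0x1},
		MessageType: (&testpb.Balance{}).ProtoReflect().Type(),
	})
	if err != nil {
		panic(err)
	}
	balanceTable, err := testpb.NewBalanceTable(table)
	if err != nil {
		panic(err)
	}
	ctx := ormtable.WrapContextDefault(ormtest.NewMemoryBackend())
	if err := balanceTable.Insert(ctx, &testpb.Balance{Address: "bob", Denom: "foo", Amount: 10}); err != nil {
		panic(err)
	}
	bal, err := balanceTable.Get(ctx, "bob", "foo")
	if err != nil {
		panic(err)
	}
	_ = bal.Amount // 10
}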

View File

@ -1,3 +0,0 @@
// Package ormtable defines the interfaces and implementations of tables and
// indexes.
package ormtable

View File

@ -1,103 +0,0 @@
package ormtable_test
import (
"testing"
"time"
"google.golang.org/protobuf/types/known/durationpb"
"gotest.tools/v3/assert"
"cosmossdk.io/orm/internal/testkv"
"cosmossdk.io/orm/internal/testpb"
"cosmossdk.io/orm/model/ormtable"
)
func TestDurationIndex(t *testing.T) {
table, err := ormtable.Build(ormtable.Options{
MessageType: (&testpb.ExampleDuration{}).ProtoReflect().Type(),
})
assert.NilError(t, err)
backend := testkv.NewDebugBackend(testkv.NewSplitMemBackend(), &testkv.EntryCodecDebugger{
EntryCodec: table,
})
ctx := ormtable.WrapContextDefault(backend)
store, err := testpb.NewExampleDurationTable(table)
assert.NilError(t, err)
neg, err := time.ParseDuration("-1h")
assert.NilError(t, err)
zero, err := time.ParseDuration("0")
assert.NilError(t, err)
pos, err := time.ParseDuration("11000ms")
assert.NilError(t, err)
negPb, zeroPb, posPb := durationpb.New(neg), durationpb.New(zero), durationpb.New(pos)
durOrder := []*durationpb.Duration{negPb, zeroPb, posPb}
assert.NilError(t, store.Insert(ctx, &testpb.ExampleDuration{
Name: "foo",
Dur: negPb,
}))
assert.NilError(t, store.Insert(ctx, &testpb.ExampleDuration{
Name: "bar",
Dur: zeroPb,
}))
assert.NilError(t, store.Insert(ctx, &testpb.ExampleDuration{
Name: "baz",
Dur: posPb,
}))
from, to := testpb.ExampleDurationDurIndexKey{}.WithDur(durationpb.New(neg)),
testpb.ExampleDurationDurIndexKey{}.WithDur(durationpb.New(pos))
it, err := store.ListRange(ctx, from, to)
assert.NilError(t, err)
i := 0
for it.Next() {
v, err := it.Value()
assert.NilError(t, err)
assert.Equal(t, durOrder[i].String(), v.Dur.String())
i++
}
// insert a nil entry
id, err := store.InsertReturningId(ctx, &testpb.ExampleDuration{
Name: "nil",
Dur: nil,
})
assert.NilError(t, err)
res, err := store.Get(ctx, id)
assert.NilError(t, err)
assert.Assert(t, res.Dur == nil)
it, err = store.List(ctx, testpb.ExampleDurationDurIndexKey{})
assert.NilError(t, err)
// make sure nils are ordered last
durOrder = append(durOrder, nil)
i = 0
for it.Next() {
v, err := it.Value()
assert.NilError(t, err)
assert.Assert(t, v != nil)
x := durOrder[i]
if x == nil {
assert.Assert(t, v.Dur == nil)
} else {
assert.Equal(t, x.String(), v.Dur.String())
}
i++
}
it.Close()
// try iterating over just nil timestamps
it, err = store.List(ctx, testpb.ExampleDurationDurIndexKey{}.WithDur(nil))
assert.NilError(t, err)
assert.Assert(t, it.Next())
res, err = it.Value()
assert.NilError(t, err)
assert.Assert(t, res.Dur == nil)
assert.Assert(t, !it.Next())
it.Close()
}

View File

@ -1,28 +0,0 @@
package ormtable
import "google.golang.org/protobuf/proto"
type filterIterator struct {
Iterator
filter func(proto.Message) bool
msg proto.Message
}
func (f *filterIterator) Next() bool {
for f.Iterator.Next() {
msg, err := f.Iterator.GetMessage()
if err != nil {
return false
}
if f.filter(msg) {
f.msg = msg
return true
}
}
return false
}
func (f filterIterator) GetMessage() (proto.Message, error) {
return f.msg, nil
}

View File

@ -1,40 +0,0 @@
package ormtable
import (
"context"
"google.golang.org/protobuf/proto"
)
// ValidateHooks defines an interface for table hooks which can intercept
// insert, update and delete operations and possibly return an error.
type ValidateHooks interface {
// ValidateInsert is called before the message is inserted.
// If error is not nil the insertion will fail.
ValidateInsert(context.Context, proto.Message) error
// ValidateUpdate is called before the existing message is updated with the new one.
// If error is not nil the update will fail.
ValidateUpdate(ctx context.Context, existing, new proto.Message) error
// ValidateDelete is called before the message is deleted.
// If error is not nil the deletion will fail.
ValidateDelete(context.Context, proto.Message) error
}
// WriteHooks defines an interface for listening to insertions, updates and
// deletes after they are written to the store. This can be used for indexing
// state in another database. Indexers should make sure they coordinate with
// transactions at live at the next level above the ORM as they write hooks
// may be called but the enclosing transaction may still fail. The context
// is provided in each method to help coordinate this.
type WriteHooks interface {
// OnInsert is called after a message is inserted into the store.
OnInsert(context.Context, proto.Message)
// OnUpdate is called after the entity is updated in the store.
OnUpdate(ctx context.Context, existing, new proto.Message)
// OnDelete is called after the entity is deleted from the store.
OnDelete(context.Context, proto.Message)
}
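
A minimal sketch of a WriteHooks implementation attached through the backend, assuming the ormtest in-memory backend; ValidateHooks are attached the same way via WithValidateHooks, as the TestHooks test earlier in this changeset does with mocks.

package main

import (
	"context"
	"fmt"

	"google.golang.org/protobuf/proto"

	"cosmossdk.io/orm/model/ormtable"
	"cosmossdk.io/orm/testing/ormtest"
)

// logWriteHooks prints every write after it is applied to the store.
type logWriteHooks struct{}

func (logWriteHooks) OnInsert(_ context.Context, msg proto.Message) {
	fmt.Println("insert", msg)
}

func (logWriteHooks) OnUpdate(_ context.Context, existing, new proto.Message) {
	fmt.Println("update", existing, "->", new)
}

func (logWriteHooks) OnDelete(_ context.Context, msg proto.Message) {
	fmt.Println("delete", msg)
}

var _ ormtable.WriteHooks = logWriteHooks{}

func main() {
	backend := ormtest.NewMemoryBackend().WithWriteHooks(logWriteHooks{})
	ctx := ormtable.WrapContextDefault(backend)
	_ = ctx // use with a table built via ormtable.Build as elsewhere in this changeset
}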

View File

@ -1,78 +0,0 @@
package ormtable
import (
"context"
"google.golang.org/protobuf/proto"
"google.golang.org/protobuf/reflect/protoreflect"
"cosmossdk.io/orm/encoding/ormkv"
"cosmossdk.io/orm/model/ormlist"
"cosmossdk.io/orm/types/kv"
)
// Index defines an index on a table. Index instances
// are stateless, with all state existing only in the store passed
// to index methods.
type Index interface {
// List does iteration over the index with the provided prefix key and options.
// Prefix key values must correspond in type to the index's fields and the
// number of values provided cannot exceed the number of fields in the index,
// although fewer values can be provided.
List(ctx context.Context, prefixKey []interface{}, options ...ormlist.Option) (Iterator, error)
// ListRange does range iteration over the index with the provided from and to
// values and options.
//
// From and to values must correspond in type to the index's fields and the number of values
// provided cannot exceed the number of fields in the index, although fewer
// values can be provided.
//
// Range iteration can only be done for from and to values which are
// well-ordered, meaning that any unordered components must be equal. For
// example, the bytes type is considered unordered, so if a range iterator is
// created over an index with a bytes field, both start and end must have the
// same value for bytes.
//
// Range iteration is inclusive at both ends.
ListRange(ctx context.Context, from, to []interface{}, options ...ormlist.Option) (Iterator, error)
// DeleteBy deletes any entries which match the provided prefix key.
DeleteBy(context context.Context, prefixKey ...interface{}) error
// DeleteRange deletes any entries between the provided range keys.
DeleteRange(context context.Context, from, to []interface{}) error
// MessageType returns the protobuf message type of the index.
MessageType() protoreflect.MessageType
// Fields returns the canonical field names of the index.
Fields() string
doNotImplement()
}
// concreteIndex is used internally by table implementations.
type concreteIndex interface {
Index
ormkv.IndexCodec
readValueFromIndexKey(context ReadBackend, primaryKey []protoreflect.Value, value []byte, message proto.Message) error
}
// UniqueIndex defines a unique index on a table.
type UniqueIndex interface {
Index
// Has returns true if the key values are present in the store for this index.
Has(context context.Context, keyValues ...interface{}) (found bool, err error)
// Get retrieves the message if one exists for the provided key values.
Get(context context.Context, message proto.Message, keyValues ...interface{}) (found bool, err error)
}
type indexer interface {
onInsert(store kv.Store, message protoreflect.Message) error
onUpdate(store kv.Store, new, existing protoreflect.Message) error
onDelete(store kv.Store, message protoreflect.Message) error
}
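To make the prefix semantics above concrete, the following is a rough sketch of working with a secondary index retrieved by its comma-separated field names. The "str,u32" index name mirrors the one exercised in the tests later in this diff; the listByStr helper and package name are hypothetical.

package indexexample

import (
	"context"

	"cosmossdk.io/orm/model/ormtable"
)

// listByStr is a hypothetical helper showing prefix iteration over a
// secondary index: fewer values than the index has fields may be supplied,
// and DeleteBy removes every entry matching the same prefix.
func listByStr(ctx context.Context, table ormtable.Table, str string) error {
	idx := table.GetIndex("str,u32")
	if idx == nil {
		return nil // the message type does not declare this index
	}

	it, err := idx.List(ctx, []interface{}{str})
	if err != nil {
		return err
	}
	defer it.Close()
	for it.Next() {
		if _, _, err := it.Keys(); err != nil {
			return err
		}
	}

	// Delete everything sharing the same prefix.
	return idx.DeleteBy(ctx, str)
}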

View File

@ -1,121 +0,0 @@
package ormtable
import (
"context"
"google.golang.org/protobuf/proto"
"google.golang.org/protobuf/reflect/protoreflect"
"cosmossdk.io/orm/encoding/ormkv"
"cosmossdk.io/orm/internal/fieldnames"
"cosmossdk.io/orm/model/ormlist"
"cosmossdk.io/orm/types/kv"
"cosmossdk.io/orm/types/ormerrors"
)
// indexKeyIndex implements Index for a regular IndexKey.
type indexKeyIndex struct {
*ormkv.IndexKeyCodec
fields fieldnames.FieldNames
primaryKey *primaryKeyIndex
getReadBackend func(context.Context) (ReadBackend, error)
}
func (i indexKeyIndex) DeleteBy(ctx context.Context, keyValues ...interface{}) error {
it, err := i.List(ctx, keyValues)
if err != nil {
return err
}
return i.primaryKey.deleteByIterator(ctx, it)
}
func (i indexKeyIndex) DeleteRange(ctx context.Context, from, to []interface{}) error {
it, err := i.ListRange(ctx, from, to)
if err != nil {
return err
}
return i.primaryKey.deleteByIterator(ctx, it)
}
func (i indexKeyIndex) List(ctx context.Context, prefixKey []interface{}, options ...ormlist.Option) (Iterator, error) {
backend, err := i.getReadBackend(ctx)
if err != nil {
return nil, err
}
return prefixIterator(backend.IndexStoreReader(), backend, i, i.KeyCodec, prefixKey, options)
}
func (i indexKeyIndex) ListRange(ctx context.Context, from, to []interface{}, options ...ormlist.Option) (Iterator, error) {
backend, err := i.getReadBackend(ctx)
if err != nil {
return nil, err
}
return rangeIterator(backend.IndexStoreReader(), backend, i, i.KeyCodec, from, to, options)
}
var (
_ indexer = &indexKeyIndex{}
_ Index = &indexKeyIndex{}
)
func (i indexKeyIndex) doNotImplement() {}
func (i indexKeyIndex) onInsert(store kv.Store, message protoreflect.Message) error {
k, v, err := i.EncodeKVFromMessage(message)
if err != nil {
return err
}
return store.Set(k, v)
}
func (i indexKeyIndex) onUpdate(store kv.Store, new, existing protoreflect.Message) error {
newValues := i.GetKeyValues(new)
existingValues := i.GetKeyValues(existing)
if i.CompareKeys(newValues, existingValues) == 0 {
return nil
}
existingKey, err := i.EncodeKey(existingValues)
if err != nil {
return err
}
err = store.Delete(existingKey)
if err != nil {
return err
}
newKey, err := i.EncodeKey(newValues)
if err != nil {
return err
}
return store.Set(newKey, []byte{})
}
func (i indexKeyIndex) onDelete(store kv.Store, message protoreflect.Message) error {
_, key, err := i.EncodeKeyFromMessage(message)
if err != nil {
return err
}
return store.Delete(key)
}
func (i indexKeyIndex) readValueFromIndexKey(backend ReadBackend, primaryKey []protoreflect.Value, _ []byte, message proto.Message) error {
found, err := i.primaryKey.get(backend, message, primaryKey)
if err != nil {
return err
}
if !found {
return ormerrors.UnexpectedError.Wrapf("can't find primary key")
}
return nil
}
func (i indexKeyIndex) Fields() string {
return i.fields.String()
}

View File

@ -1,265 +0,0 @@
package ormtable
import (
"google.golang.org/protobuf/proto"
"google.golang.org/protobuf/reflect/protoreflect"
queryv1beta1 "cosmossdk.io/api/cosmos/base/query/v1beta1"
"cosmossdk.io/core/store"
"cosmossdk.io/orm/encoding/encodeutil"
"cosmossdk.io/orm/encoding/ormkv"
"cosmossdk.io/orm/internal/listinternal"
"cosmossdk.io/orm/model/ormlist"
"cosmossdk.io/orm/types/kv"
)
// Iterator defines the interface for iterating over indexes.
//
// WARNING: it is generally unsafe to mutate a table while iterating over it.
// Instead you should do reads and writes separately, or use a helper
// function like DeleteBy which does this efficiently.
type Iterator interface {
// Next advances the iterator and returns true if a valid entry is found.
// Next must be called before starting iteration.
Next() bool
// Keys returns the current index key and primary key values that the
// iterator points to.
Keys() (indexKey, primaryKey []protoreflect.Value, err error)
// UnmarshalMessage unmarshals the entry the iterator currently points to
// into the provided proto.Message.
UnmarshalMessage(proto.Message) error
// GetMessage retrieves the proto.Message that the iterator currently points
// to.
GetMessage() (proto.Message, error)
// Cursor returns the cursor referencing the current iteration position
// and can be used to restart iteration right after this position.
Cursor() ormlist.CursorT
// PageResponse returns a non-nil page response after Next() returns false
// if pagination was requested in list options.
PageResponse() *queryv1beta1.PageResponse
// Close closes the iterator and must always be called when done using
// the iterator. The defer keyword should generally be used for this.
Close()
doNotImplement()
}
func prefixIterator(iteratorStore kv.ReadonlyStore, backend ReadBackend, index concreteIndex, codec *ormkv.KeyCodec, prefix []interface{}, opts []listinternal.Option) (Iterator, error) {
options := &listinternal.Options{}
listinternal.ApplyOptions(options, opts)
if err := options.Validate(); err != nil {
return nil, err
}
var prefixBz []byte
prefixBz, err := codec.EncodeKey(encodeutil.ValuesOf(prefix...))
if err != nil {
return nil, err
}
var res Iterator
if !options.Reverse {
var start []byte
if len(options.Cursor) != 0 {
// must start right after cursor
start = append(options.Cursor, 0x0)
} else {
start = prefixBz
}
end := prefixEndBytes(prefixBz)
it, err := iteratorStore.Iterator(start, end)
if err != nil {
return nil, err
}
res = &indexIterator{
index: index,
store: backend,
iterator: it,
started: false,
}
} else {
var end []byte
if len(options.Cursor) != 0 {
// end bytes is already exclusive by default
end = options.Cursor
} else {
end = prefixEndBytes(prefixBz)
}
it, err := iteratorStore.ReverseIterator(prefixBz, end)
if err != nil {
return nil, err
}
res = &indexIterator{
index: index,
store: backend,
iterator: it,
started: false,
}
}
return applyCommonIteratorOptions(res, options)
}
func rangeIterator(iteratorStore kv.ReadonlyStore, reader ReadBackend, index concreteIndex, codec *ormkv.KeyCodec, start, end []interface{}, opts []listinternal.Option) (Iterator, error) {
options := &listinternal.Options{}
listinternal.ApplyOptions(options, opts)
if err := options.Validate(); err != nil {
return nil, err
}
startValues := encodeutil.ValuesOf(start...)
endValues := encodeutil.ValuesOf(end...)
err := codec.CheckValidRangeIterationKeys(startValues, endValues)
if err != nil {
return nil, err
}
startBz, err := codec.EncodeKey(startValues)
if err != nil {
return nil, err
}
endBz, err := codec.EncodeKey(endValues)
if err != nil {
return nil, err
}
// NOTE: fullEndKey indicates whether the end key contained all the fields of the key,
// if it did then we need to use inclusive end bytes, otherwise we compute the prefix end bytes
fullEndKey := len(codec.GetFieldNames()) == len(end)
var res Iterator
if !options.Reverse {
if len(options.Cursor) != 0 {
startBz = append(options.Cursor, 0)
}
if fullEndKey {
endBz = inclusiveEndBytes(endBz)
} else {
endBz = prefixEndBytes(endBz)
}
it, err := iteratorStore.Iterator(startBz, endBz)
if err != nil {
return nil, err
}
res = &indexIterator{
index: index,
store: reader,
iterator: it,
started: false,
}
} else {
if len(options.Cursor) != 0 {
endBz = options.Cursor
} else {
if fullEndKey {
endBz = inclusiveEndBytes(endBz)
} else {
endBz = prefixEndBytes(endBz)
}
}
it, err := iteratorStore.ReverseIterator(startBz, endBz)
if err != nil {
return nil, err
}
res = &indexIterator{
index: index,
store: reader,
iterator: it,
started: false,
}
}
return applyCommonIteratorOptions(res, options)
}
func applyCommonIteratorOptions(iterator Iterator, options *listinternal.Options) (Iterator, error) {
if options.Filter != nil {
iterator = &filterIterator{Iterator: iterator, filter: options.Filter}
}
if options.CountTotal || options.Limit != 0 || options.Offset != 0 || options.DefaultLimit != 0 {
iterator = paginate(iterator, options)
}
return iterator, nil
}
type indexIterator struct {
index concreteIndex
store ReadBackend
iterator store.Iterator
indexValues []protoreflect.Value
primaryKey []protoreflect.Value
value []byte
started bool
}
func (i *indexIterator) PageResponse() *queryv1beta1.PageResponse {
return nil
}
func (i *indexIterator) Next() bool {
if !i.started {
i.started = true
} else {
i.iterator.Next()
i.indexValues = nil
}
return i.iterator.Valid()
}
func (i *indexIterator) Keys() (indexKey, primaryKey []protoreflect.Value, err error) {
if i.indexValues != nil {
return i.indexValues, i.primaryKey, nil
}
i.value = i.iterator.Value()
i.indexValues, i.primaryKey, err = i.index.DecodeIndexKey(i.iterator.Key(), i.value)
if err != nil {
return nil, nil, err
}
return i.indexValues, i.primaryKey, nil
}
func (i indexIterator) UnmarshalMessage(message proto.Message) error {
_, pk, err := i.Keys()
if err != nil {
return err
}
return i.index.readValueFromIndexKey(i.store, pk, i.value, message)
}
func (i *indexIterator) GetMessage() (proto.Message, error) {
msg := i.index.MessageType().New().Interface()
err := i.UnmarshalMessage(msg)
return msg, err
}
func (i indexIterator) Cursor() ormlist.CursorT {
return i.iterator.Key()
}
func (i indexIterator) Close() {
err := i.iterator.Close()
if err != nil {
panic(err)
}
}
func (indexIterator) doNotImplement() {}
var _ Iterator = &indexIterator{}
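As a sketch of how the Cursor method above can be used to resume iteration, the snippet below reads up to n entries and returns a cursor that a later call can pass back via the ormlist.Cursor option to continue right after the last entry seen. The resumeAfter helper and package name are hypothetical; the ormlist option usage follows the tests later in this diff.

package iteratorexample

import (
	"context"

	"cosmossdk.io/orm/model/ormlist"
	"cosmossdk.io/orm/model/ormtable"
)

// resumeAfter is a hypothetical helper: it lists the table's primary key
// index, optionally starting right after a previously returned cursor, and
// returns the cursor of the last entry it visited.
func resumeAfter(ctx context.Context, table ormtable.Table, after ormlist.CursorT, n int) (ormlist.CursorT, error) {
	var opts []ormlist.Option
	if len(after) != 0 {
		opts = append(opts, ormlist.Cursor(after))
	}

	it, err := table.List(ctx, nil, opts...)
	if err != nil {
		return nil, err
	}
	defer it.Close()

	var cursor ormlist.CursorT
	for i := 0; i < n && it.Next(); i++ {
		cursor = it.Cursor()
	}
	return cursor, nil
}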

View File

@ -1,96 +0,0 @@
package ormtable
import (
"math"
queryv1beta1 "cosmossdk.io/api/cosmos/base/query/v1beta1"
"cosmossdk.io/orm/internal/listinternal"
)
func paginate(it Iterator, options *listinternal.Options) Iterator {
offset := int(options.Offset)
limit := int(options.Limit)
if limit == 0 {
limit = int(options.DefaultLimit)
}
i := 0
if offset != 0 {
for ; i < offset; i++ {
if !it.Next() {
return &paginationIterator{
Iterator: it,
pageRes: &queryv1beta1.PageResponse{Total: uint64(i)},
}
}
}
}
var done int
if limit != 0 {
done = limit + offset
} else {
done = math.MaxInt
}
return &paginationIterator{
Iterator: it,
pageRes: nil,
countTotal: options.CountTotal,
i: i,
done: done,
}
}
type paginationIterator struct {
Iterator
pageRes *queryv1beta1.PageResponse
countTotal bool
i int
done int
}
func (it *paginationIterator) Next() bool {
if it.i >= it.done {
it.pageRes = &queryv1beta1.PageResponse{}
cursor := it.Cursor()
next := it.Iterator.Next()
if next {
it.pageRes.NextKey = cursor
it.i++
}
if it.countTotal {
// once it.Iterator.Next() returns false, another call to it will panic.
// we check next here to ensure we do not call it again.
if next {
for {
if !it.Iterator.Next() {
it.pageRes.Total = uint64(it.i)
return false
}
it.i++
}
} else {
// when next is false, the iterator can no longer move forward,
// so the index == total entries.
it.pageRes.Total = uint64(it.i)
}
}
return false
}
ok := it.Iterator.Next()
if ok {
it.i++
return true
}
it.pageRes = &queryv1beta1.PageResponse{
Total: uint64(it.i),
}
return false
}
func (it paginationIterator) PageResponse() *queryv1beta1.PageResponse {
return it.pageRes
}
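The pagination wrapper above is normally driven through the ormlist.Paginate option with the standard PageRequest/PageResponse types, as in this sketch; the listPage helper and package name are hypothetical, while the option and query types appear in the tests later in this diff.

package paginateexample

import (
	"context"

	queryv1beta1 "cosmossdk.io/api/cosmos/base/query/v1beta1"

	"cosmossdk.io/orm/model/ormlist"
	"cosmossdk.io/orm/model/ormtable"
)

// listPage is a hypothetical helper that reads one page of results and
// returns the PageResponse, whose NextKey can be fed into the next request
// and whose Total is populated when CountTotal is set.
func listPage(ctx context.Context, table ormtable.Table, req *queryv1beta1.PageRequest) (*queryv1beta1.PageResponse, error) {
	it, err := table.List(ctx, nil, ormlist.Paginate(req))
	if err != nil {
		return nil, err
	}
	defer it.Close()

	for it.Next() {
		// consume the page; GetMessage would yield each entry here
	}
	// PageResponse is only non-nil after Next has returned false.
	return it.PageResponse(), nil
}

A caller would loop, passing the returned NextKey as the Key of the following PageRequest until NextKey is nil.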

View File

@ -1,240 +0,0 @@
package ormtable
import (
"context"
"google.golang.org/protobuf/proto"
"google.golang.org/protobuf/reflect/protoreflect"
"cosmossdk.io/orm/encoding/encodeutil"
"cosmossdk.io/orm/encoding/ormkv"
"cosmossdk.io/orm/internal/fieldnames"
"cosmossdk.io/orm/model/ormlist"
"cosmossdk.io/orm/types/ormerrors"
)
// primaryKeyIndex defines a UniqueIndex for the primary key.
type primaryKeyIndex struct {
*ormkv.PrimaryKeyCodec
fields fieldnames.FieldNames
indexers []indexer
getBackend func(context.Context) (ReadBackend, error)
}
func (p primaryKeyIndex) List(ctx context.Context, prefixKey []interface{}, options ...ormlist.Option) (Iterator, error) {
backend, err := p.getBackend(ctx)
if err != nil {
return nil, err
}
return prefixIterator(backend.CommitmentStoreReader(), backend, p, p.KeyCodec, prefixKey, options)
}
func (p primaryKeyIndex) ListRange(ctx context.Context, from, to []interface{}, options ...ormlist.Option) (Iterator, error) {
backend, err := p.getBackend(ctx)
if err != nil {
return nil, err
}
return rangeIterator(backend.CommitmentStoreReader(), backend, p, p.KeyCodec, from, to, options)
}
func (p primaryKeyIndex) doNotImplement() {}
func (p primaryKeyIndex) Has(ctx context.Context, key ...interface{}) (found bool, err error) {
backend, err := p.getBackend(ctx)
if err != nil {
return false, err
}
return p.has(backend, encodeutil.ValuesOf(key...))
}
func (p primaryKeyIndex) has(backend ReadBackend, values []protoreflect.Value) (found bool, err error) {
keyBz, err := p.EncodeKey(values)
if err != nil {
return false, err
}
return backend.CommitmentStoreReader().Has(keyBz)
}
func (p primaryKeyIndex) Get(ctx context.Context, message proto.Message, values ...interface{}) (found bool, err error) {
backend, err := p.getBackend(ctx)
if err != nil {
return false, err
}
return p.get(backend, message, encodeutil.ValuesOf(values...))
}
func (p primaryKeyIndex) get(backend ReadBackend, message proto.Message, values []protoreflect.Value) (found bool, err error) {
key, err := p.EncodeKey(values)
if err != nil {
return false, err
}
return p.getByKeyBytes(backend, key, values, message)
}
func (p primaryKeyIndex) DeleteBy(ctx context.Context, primaryKeyValues ...interface{}) error {
if len(primaryKeyValues) == len(p.GetFieldNames()) {
return p.doDelete(ctx, encodeutil.ValuesOf(primaryKeyValues...))
}
it, err := p.List(ctx, primaryKeyValues)
if err != nil {
return err
}
return p.deleteByIterator(ctx, it)
}
func (p primaryKeyIndex) DeleteRange(ctx context.Context, from, to []interface{}) error {
it, err := p.ListRange(ctx, from, to)
if err != nil {
return err
}
return p.deleteByIterator(ctx, it)
}
func (p primaryKeyIndex) getWriteBackend(ctx context.Context) (Backend, error) {
backend, err := p.getBackend(ctx)
if err != nil {
return nil, err
}
if writeBackend, ok := backend.(Backend); ok {
return writeBackend, nil
}
return nil, ormerrors.ReadOnly
}
func (p primaryKeyIndex) doDelete(ctx context.Context, primaryKeyValues []protoreflect.Value) error {
backend, err := p.getWriteBackend(ctx)
if err != nil {
return err
}
// delete object
writer := newBatchIndexCommitmentWriter(backend)
defer writer.Close()
pk, err := p.EncodeKey(primaryKeyValues)
if err != nil {
return err
}
msg := p.MessageType().New().Interface()
found, err := p.getByKeyBytes(backend, pk, primaryKeyValues, msg)
if err != nil {
return err
}
if !found {
return nil
}
err = p.doDeleteWithWriteBatch(ctx, backend, writer, pk, msg)
if err != nil {
return err
}
return writer.Write()
}
func (p primaryKeyIndex) doDeleteWithWriteBatch(ctx context.Context, backend Backend, writer *batchIndexCommitmentWriter, primaryKeyBz []byte, message proto.Message) error {
if hooks := backend.ValidateHooks(); hooks != nil {
err := hooks.ValidateDelete(ctx, message)
if err != nil {
return err
}
}
// delete object
err := writer.CommitmentStore().Delete(primaryKeyBz)
if err != nil {
return err
}
// clear indexes
mref := message.ProtoReflect()
indexStoreWriter := writer.IndexStore()
for _, idx := range p.indexers {
err := idx.onDelete(indexStoreWriter, mref)
if err != nil {
return err
}
}
if writeHooks := backend.WriteHooks(); writeHooks != nil {
writer.enqueueHook(func() {
writeHooks.OnDelete(ctx, message)
})
}
return nil
}
func (p primaryKeyIndex) getByKeyBytes(store ReadBackend, key []byte, keyValues []protoreflect.Value, message proto.Message) (found bool, err error) {
bz, err := store.CommitmentStoreReader().Get(key)
if err != nil {
return false, err
}
if bz == nil {
return false, nil
}
return true, p.Unmarshal(keyValues, bz, message)
}
func (p primaryKeyIndex) readValueFromIndexKey(_ ReadBackend, primaryKey []protoreflect.Value, value []byte, message proto.Message) error {
return p.Unmarshal(primaryKey, value, message)
}
func (p primaryKeyIndex) Fields() string {
return p.fields.String()
}
func (p primaryKeyIndex) deleteByIterator(ctx context.Context, it Iterator) error {
backend, err := p.getWriteBackend(ctx)
if err != nil {
return err
}
// we batch writes while the iterator is still open
writer := newBatchIndexCommitmentWriter(backend)
defer writer.Close()
for it.Next() {
_, pk, err := it.Keys()
if err != nil {
return err
}
msg, err := it.GetMessage()
if err != nil {
return err
}
pkBz, err := p.EncodeKey(pk)
if err != nil {
return err
}
err = p.doDeleteWithWriteBatch(ctx, backend, writer, pkBz, msg)
if err != nil {
return err
}
}
// close iterator
it.Close()
// then write batch
return writer.Write()
}
var _ UniqueIndex = &primaryKeyIndex{}
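A short sketch of how the primary key index is typically used through the public UniqueIndex surface; the getByPK helper and package name are hypothetical, and the key values are assumed to be passed in primary key field declaration order.

package pkexample

import (
	"context"

	"google.golang.org/protobuf/proto"

	"cosmossdk.io/orm/model/ormtable"
)

// getByPK is a hypothetical helper: it checks for existence and then loads
// the entry whose primary key fields equal keyValues.
func getByPK(ctx context.Context, table ormtable.Table, message proto.Message, keyValues ...interface{}) (bool, error) {
	pk := table.PrimaryKey()
	found, err := pk.Has(ctx, keyValues...)
	if err != nil || !found {
		return false, err
	}
	return pk.Get(ctx, message, keyValues...)
}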

View File

@ -1,68 +0,0 @@
package ormtable_test
import (
"context"
"fmt"
"testing"
"github.com/regen-network/gocuke"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
"google.golang.org/protobuf/encoding/protojson"
"gotest.tools/v3/assert"
"cosmossdk.io/orm/internal/testpb"
"cosmossdk.io/orm/model/ormtable"
"cosmossdk.io/orm/testing/ormtest"
)
func TestSave(t *testing.T) {
gocuke.NewRunner(t, &suite{}).Path("../../features/table/saving.feature").Run()
}
type suite struct {
gocuke.TestingT
table ormtable.Table
ctx context.Context
err error
}
func (s *suite) Before() {
var err error
s.table, err = ormtable.Build(ormtable.Options{
MessageType: (&testpb.SimpleExample{}).ProtoReflect().Type(),
})
assert.NilError(s, err)
s.ctx = ormtable.WrapContextDefault(ormtest.NewMemoryBackend())
}
func (s *suite) AnExistingEntity(docString gocuke.DocString) {
existing := s.simpleExampleFromDocString(docString)
assert.NilError(s, s.table.Insert(s.ctx, existing))
}
func (s suite) simpleExampleFromDocString(docString gocuke.DocString) *testpb.SimpleExample {
ex := &testpb.SimpleExample{}
assert.NilError(s, protojson.Unmarshal([]byte(docString.Content), ex))
return ex
}
func (s *suite) IInsert(a gocuke.DocString) {
ex := s.simpleExampleFromDocString(a)
s.err = s.table.Insert(s.ctx, ex)
}
func (s *suite) IUpdate(a gocuke.DocString) {
ex := s.simpleExampleFromDocString(a)
s.err = s.table.Update(s.ctx, ex)
}
func (s *suite) ExpectAError(a string) {
assert.ErrorContains(s, s.err, a)
}
func (s *suite) ExpectGrpcErrorCode(a string) {
var code codes.Code
assert.NilError(s, code.UnmarshalJSON([]byte(fmt.Sprintf("%q", a))))
assert.Equal(s, code, status.Code(s.err))
}

View File

@ -1,101 +0,0 @@
package ormtable
import (
"context"
"encoding/json"
"io"
"google.golang.org/protobuf/encoding/protojson"
"google.golang.org/protobuf/proto"
)
// singleton implements a Table instance for singletons.
type singleton struct {
*tableImpl
}
func (t singleton) DefaultJSON() json.RawMessage {
msg := t.MessageType().New().Interface()
bz, err := t.jsonMarshalOptions().Marshal(msg)
if err != nil {
return json.RawMessage("{}")
}
return bz
}
func (t singleton) ValidateJSON(reader io.Reader) error {
bz, err := io.ReadAll(reader)
if err != nil {
return err
}
msg := t.MessageType().New().Interface()
err = protojson.Unmarshal(bz, msg)
if err != nil {
return err
}
if t.customJSONValidator != nil {
return t.customJSONValidator(msg)
}
return DefaultJSONValidator(msg)
}
func (t singleton) ImportJSON(ctx context.Context, reader io.Reader) error {
backend, err := t.getWriteBackend(ctx)
if err != nil {
return err
}
bz, err := io.ReadAll(reader)
if err != nil {
return err
}
msg := t.MessageType().New().Interface()
err = protojson.Unmarshal(bz, msg)
if err != nil {
return err
}
return t.save(ctx, backend, msg, saveModeDefault)
}
func (t singleton) ExportJSON(ctx context.Context, writer io.Writer) error {
msg := t.MessageType().New().Interface()
found, err := t.Get(ctx, msg)
if err != nil {
return err
}
var bz []byte
if !found {
bz = t.DefaultJSON()
} else {
bz, err = t.jsonMarshalOptions().Marshal(msg)
if err != nil {
return err
}
}
_, err = writer.Write(bz)
return err
}
func (t singleton) jsonMarshalOptions() protojson.MarshalOptions {
return protojson.MarshalOptions{
Multiline: true,
Indent: "",
UseProtoNames: true,
EmitUnpopulated: true,
Resolver: t.typeResolver,
}
}
func (t *singleton) GetTable(message proto.Message) Table {
if message.ProtoReflect().Descriptor().FullName() == t.MessageType().Descriptor().FullName() {
return t
}
return nil
}

View File

@ -1,47 +0,0 @@
package ormtable_test
import (
"bytes"
"testing"
"google.golang.org/protobuf/testing/protocmp"
"gotest.tools/v3/assert"
"cosmossdk.io/orm/internal/testkv"
"cosmossdk.io/orm/internal/testpb"
"cosmossdk.io/orm/model/ormtable"
)
func TestSingleton(t *testing.T) {
table, err := ormtable.Build(ormtable.Options{
MessageType: (&testpb.ExampleSingleton{}).ProtoReflect().Type(),
})
assert.NilError(t, err)
ctx := ormtable.WrapContextDefault(testkv.NewSplitMemBackend())
store, err := testpb.NewExampleSingletonTable(table)
assert.NilError(t, err)
val, err := store.Get(ctx)
assert.NilError(t, err)
assert.Assert(t, val != nil) // singletons are always set
assert.NilError(t, store.Save(ctx, &testpb.ExampleSingleton{}))
val.Foo = "abc"
val.Bar = 3
assert.NilError(t, store.Save(ctx, val))
val2, err := store.Get(ctx)
assert.NilError(t, err)
assert.DeepEqual(t, val, val2, protocmp.Transform())
buf := &bytes.Buffer{}
assert.NilError(t, table.ExportJSON(ctx, buf))
assert.NilError(t, table.ValidateJSON(bytes.NewReader(buf.Bytes())))
store2 := ormtable.WrapContextDefault(testkv.NewSplitMemBackend())
assert.NilError(t, table.ImportJSON(store2, bytes.NewReader(buf.Bytes())))
val3, err := store.Get(ctx)
assert.NilError(t, err)
assert.DeepEqual(t, val, val3, protocmp.Transform())
}

View File

@ -1,164 +0,0 @@
package ormtable
import (
"context"
"encoding/json"
"io"
"google.golang.org/protobuf/proto"
"cosmossdk.io/orm/encoding/ormkv"
)
// View defines a read-only table.
//
// It exists as a separate interface to support future scenarios where
// tables may be "supported" virtually to provide compatibility between
// systems, for instance to enable backwards compatibility when a major
// migration needs to be performed.
type View interface {
Index
// Has returns true if there is an entity in the table with the same
// primary key as message. Other fields besides the primary key fields will not
// be used for retrieval.
Has(ctx context.Context, message proto.Message) (found bool, err error)
// Get retrieves the message if one exists for the primary key fields
// set on the message. Other fields besides the primary key fields will not
// be used for retrieval.
Get(ctx context.Context, message proto.Message) (found bool, err error)
// GetIndex returns the index referenced by the provided fields if
// one exists or nil. Note that some concrete indexes can be retrieved by
// multiple lists of fields.
GetIndex(fields string) Index
// GetUniqueIndex returns the unique index referenced by the provided fields if
// one exists or nil. Note that some concrete indexes can be retrieved by
// multiple lists of fields.
GetUniqueIndex(fields string) UniqueIndex
// Indexes returns all the concrete indexes for the table.
Indexes() []Index
// GetIndexByID returns the index with the provided ID or nil.
GetIndexByID(id uint32) Index
// PrimaryKey returns the primary key unique index.
PrimaryKey() UniqueIndex
}
// Table is an abstract interface around a concrete table. Table instances
// are stateless, with all state existing only in the store passed
// to table and index methods.
type Table interface {
View
ormkv.EntryCodec
// Save saves the provided entry in the store either inserting it or
// updating it if needed.
//
// If the store implements the ValidateHooks interface, the appropriate ValidateInsert or
// ValidateUpdate hook method will be called.
//
// Save attempts to be atomic with respect to the underlying store,
// meaning that either the full save operation is written or the store is
// left unchanged, unless there is an error with the underlying store.
//
// If a unique key constraint is violated, ormerrors.UniqueKeyViolation
// (or an error wrapping it) will be returned.
Save(context context.Context, message proto.Message) error
// Insert inserts the provided entry in the store and fails if there is
// a unique key violation. See Save for more details on behavior.
//
// If an entity with the same primary key exists, an error wrapping
// ormerrors.AlreadyExists will be returned.
Insert(ctx context.Context, message proto.Message) error
// Update updates the provided entry in the store and fails if an entry
// with a matching primary key does not exist. See Save for more details
// on behavior.
//
// If an entity with the same primary key does not exist, ormerrors.NotFound
// (or an error wrapping it) will be returned.
Update(ctx context.Context, message proto.Message) error
// Delete deletes the entry with the primary key fields set on message
// if one exists. Other fields besides the primary key fields will not
// be used for retrieval.
//
// If the store implements the ValidateHooks interface, the ValidateDelete hook method will
// be called.
//
// Delete attempts to be atomic with respect to the underlying store,
// meaning that either the full delete operation is written or the store is
// left unchanged, unless there is an error with the underlying store.
Delete(ctx context.Context, message proto.Message) error
// DefaultJSON returns default JSON that can be used as a template for
// genesis files.
//
// For regular tables this is an empty JSON array, but for singletons an
// empty instance of the singleton is marshaled.
DefaultJSON() json.RawMessage
// ValidateJSON validates JSON streamed from the reader.
ValidateJSON(io.Reader) error
// ImportJSON imports JSON into the store, streaming one entry at a time.
// Each table should be imported from a separate JSON file to enable proper
// streaming.
//
// Regular tables should be stored as an array of objects with each object
// corresponding to a single record in the table.
//
// Auto-incrementing tables
// can optionally have the last sequence value as the first element in the
// array. If the last sequence value is provided, then each value of the
// primary key in the file must be <= this last sequence value or omitted
// entirely. If no last sequence value is provided, no entries should
// contain the primary key as this will be auto-assigned.
//
// Singletons should define a single object and not an array.
//
// ImportJSON is not atomic with respect to the underlying store, meaning
// that in the case of an error, some records may already have been
// imported. It is assumed that ImportJSON is called in the context of some
// larger transaction that provides isolation.
ImportJSON(context.Context, io.Reader) error
// ExportJSON exports JSON in the format accepted by ImportJSON.
// Auto-incrementing tables will export the last sequence number as the
// first element in the JSON array.
ExportJSON(context.Context, io.Writer) error
// ID is the ID of this table within the schema of its FileDescriptor.
ID() uint32
Schema
}
// Schema is an interface for things that contain tables and can encode and
// decode kv-store pairs.
type Schema interface {
ormkv.EntryCodec
// GetTable returns the table for the provided message type or nil.
GetTable(message proto.Message) Table
}
type AutoIncrementTable interface {
Table
// InsertReturningPKey inserts the provided entry in the store and returns the newly
// generated primary key for the message or an error.
InsertReturningPKey(ctx context.Context, message proto.Message) (newPK uint64, err error)
// LastInsertedSequence retrieves the sequence number of the last entry inserted into the table.
// The LastInsertedSequence is 0 if no entries have been inserted into the table.
LastInsertedSequence(ctx context.Context) (uint64, error)
}
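To summarize the save-mode semantics documented above, here is a minimal sketch; the seedEntry helper and package name are hypothetical. For auto-increment tables, the AutoIncrementTable interface additionally exposes InsertReturningPKey and LastInsertedSequence.

package tableexample

import (
	"context"

	"google.golang.org/protobuf/proto"

	"cosmossdk.io/orm/model/ormtable"
)

// seedEntry is a hypothetical helper illustrating the three write modes:
// Insert fails if the primary key already exists, Update fails if it does
// not, and Save inserts or updates as needed.
func seedEntry(ctx context.Context, table ormtable.Table, msg proto.Message) error {
	if err := table.Insert(ctx, msg); err != nil {
		return err // wraps ormerrors.AlreadyExists on a duplicate primary key
	}
	if err := table.Update(ctx, msg); err != nil {
		return err // wraps ormerrors.NotFound if no entry with this key exists
	}
	return table.Save(ctx, msg) // insert-or-update
}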

View File

@ -1,430 +0,0 @@
package ormtable
import (
"bytes"
"context"
"encoding/binary"
"encoding/json"
"io"
"math"
"google.golang.org/protobuf/encoding/protojson"
"google.golang.org/protobuf/proto"
"cosmossdk.io/orm/encoding/encodeutil"
"cosmossdk.io/orm/encoding/ormkv"
"cosmossdk.io/orm/internal/fieldnames"
"cosmossdk.io/orm/types/ormerrors"
)
// tableImpl implements Table.
type tableImpl struct {
*primaryKeyIndex
indexes []Index
indexesByFields map[fieldnames.FieldNames]concreteIndex
uniqueIndexesByFields map[fieldnames.FieldNames]UniqueIndex
indexesByID map[uint32]Index
entryCodecsByID map[uint32]ormkv.EntryCodec
tablePrefix []byte
tableID uint32
typeResolver TypeResolver
customJSONValidator func(message proto.Message) error
}
func (t *tableImpl) GetTable(message proto.Message) Table {
if message.ProtoReflect().Descriptor().FullName() == t.MessageType().Descriptor().FullName() {
return t
}
return nil
}
func (t tableImpl) PrimaryKey() UniqueIndex {
return t.primaryKeyIndex
}
func (t tableImpl) GetIndexByID(id uint32) Index {
return t.indexesByID[id]
}
func (t tableImpl) Save(ctx context.Context, message proto.Message) error {
backend, err := t.getWriteBackend(ctx)
if err != nil {
return err
}
return t.save(ctx, backend, message, saveModeDefault)
}
func (t tableImpl) Insert(ctx context.Context, message proto.Message) error {
backend, err := t.getWriteBackend(ctx)
if err != nil {
return err
}
return t.save(ctx, backend, message, saveModeInsert)
}
func (t tableImpl) Update(ctx context.Context, message proto.Message) error {
backend, err := t.getWriteBackend(ctx)
if err != nil {
return err
}
return t.save(ctx, backend, message, saveModeUpdate)
}
func (t tableImpl) save(ctx context.Context, backend Backend, message proto.Message, mode saveMode) error {
writer := newBatchIndexCommitmentWriter(backend)
defer writer.Close()
return t.doSave(ctx, writer, message, mode)
}
func (t tableImpl) doSave(ctx context.Context, writer *batchIndexCommitmentWriter, message proto.Message, mode saveMode) error {
mref := message.ProtoReflect()
pkValues, pk, err := t.EncodeKeyFromMessage(mref)
if err != nil {
return err
}
existing := mref.New().Interface()
haveExisting, err := t.getByKeyBytes(writer, pk, pkValues, existing)
if err != nil {
return err
}
if haveExisting {
if mode == saveModeInsert {
return ormerrors.AlreadyExists.Wrapf("%q:%+v", mref.Descriptor().FullName(), pkValues)
}
if validateHooks := writer.ValidateHooks(); validateHooks != nil {
err = validateHooks.ValidateUpdate(ctx, existing, message)
if err != nil {
return err
}
}
} else {
if mode == saveModeUpdate {
return ormerrors.NotFound.Wrapf("%q", mref.Descriptor().FullName())
}
if validateHooks := writer.ValidateHooks(); validateHooks != nil {
err = validateHooks.ValidateInsert(ctx, message)
if err != nil {
return err
}
}
}
// temporarily clear primary key
t.ClearValues(mref)
// store object
bz, err := proto.MarshalOptions{Deterministic: true}.Marshal(message)
if err != nil {
return err
}
err = writer.CommitmentStore().Set(pk, bz)
if err != nil {
return err
}
// set primary key again
t.SetKeyValues(mref, pkValues)
// set indexes
indexStoreWriter := writer.IndexStore()
if !haveExisting {
for _, idx := range t.indexers {
err = idx.onInsert(indexStoreWriter, mref)
if err != nil {
return err
}
}
if writeHooks := writer.WriteHooks(); writeHooks != nil {
writer.enqueueHook(func() {
writeHooks.OnInsert(ctx, message)
})
}
} else {
existingMref := existing.ProtoReflect()
for _, idx := range t.indexers {
err = idx.onUpdate(indexStoreWriter, mref, existingMref)
if err != nil {
return err
}
}
if writeHooks := writer.WriteHooks(); writeHooks != nil {
writer.enqueueHook(func() {
writeHooks.OnUpdate(ctx, existing, message)
})
}
}
return writer.Write()
}
func (t tableImpl) Delete(ctx context.Context, message proto.Message) error {
pk := t.PrimaryKeyCodec.GetKeyValues(message.ProtoReflect())
return t.doDelete(ctx, pk)
}
func (t tableImpl) GetIndex(fields string) Index {
return t.indexesByFields[fieldnames.CommaSeparatedFieldNames(fields)]
}
func (t tableImpl) GetUniqueIndex(fields string) UniqueIndex {
return t.uniqueIndexesByFields[fieldnames.CommaSeparatedFieldNames(fields)]
}
func (t tableImpl) Indexes() []Index {
return t.indexes
}
func (t tableImpl) DefaultJSON() json.RawMessage {
return json.RawMessage("[]")
}
func (t tableImpl) decodeJSON(reader io.Reader, onMsg func(message proto.Message) error) error {
decoder, err := t.startDecodeJSON(reader)
if err != nil {
return err
}
return t.doDecodeJSON(decoder, nil, onMsg)
}
func (t tableImpl) startDecodeJSON(reader io.Reader) (*json.Decoder, error) {
decoder := json.NewDecoder(reader)
token, err := decoder.Token()
if err != nil {
return nil, err
}
if token != json.Delim('[') {
return nil, ormerrors.JSONImportError.Wrapf("expected [ got %s", token)
}
return decoder, nil
}
// onFirst is called on the first RawMessage and used for auto-increment tables
// to decode the sequence, in which case it should return true.
// onMsg is called on every decoded message.
func (t tableImpl) doDecodeJSON(decoder *json.Decoder, onFirst func(message json.RawMessage) bool, onMsg func(message proto.Message) error) error {
unmarshalOptions := protojson.UnmarshalOptions{Resolver: t.typeResolver}
first := true
for decoder.More() {
var rawJSON json.RawMessage
err := decoder.Decode(&rawJSON)
if err != nil {
return ormerrors.JSONImportError.Wrapf("%s", err)
}
if first {
first = false
if onFirst != nil {
if onFirst(rawJSON) {
// if onFirst handled this, skip decoding into a proto message
continue
}
}
}
msg := t.MessageType().New().Interface()
err = unmarshalOptions.Unmarshal(rawJSON, msg)
if err != nil {
return err
}
err = onMsg(msg)
if err != nil {
return err
}
}
token, err := decoder.Token()
if err != nil {
return err
}
if token != json.Delim(']') {
return ormerrors.JSONImportError.Wrapf("expected ] got %s", token)
}
return nil
}
// DefaultJSONValidator is the default validator used when calling
// Table.ValidateJSON(). It will call methods with the signature `ValidateBasic() error`
// and/or `Validate() error` to validate the message.
func DefaultJSONValidator(message proto.Message) error {
if v, ok := message.(interface{ ValidateBasic() error }); ok {
err := v.ValidateBasic()
if err != nil {
return err
}
}
if v, ok := message.(interface{ Validate() error }); ok {
err := v.Validate()
if err != nil {
return err
}
}
return nil
}
func (t tableImpl) ValidateJSON(reader io.Reader) error {
return t.decodeJSON(reader, func(message proto.Message) error {
if t.customJSONValidator != nil {
return t.customJSONValidator(message)
}
return DefaultJSONValidator(message)
})
}
func (t tableImpl) ImportJSON(ctx context.Context, reader io.Reader) error {
backend, err := t.getWriteBackend(ctx)
if err != nil {
return err
}
return t.decodeJSON(reader, func(message proto.Message) error {
return t.save(ctx, backend, message, saveModeDefault)
})
}
func (t tableImpl) ExportJSON(context context.Context, writer io.Writer) error {
_, err := writer.Write([]byte("["))
if err != nil {
return err
}
return t.doExportJSON(context, writer, true)
}
func (t tableImpl) doExportJSON(ctx context.Context, writer io.Writer, start bool) error {
marshalOptions := protojson.MarshalOptions{
UseProtoNames: true,
Resolver: t.typeResolver,
}
var err error
it, _ := t.List(ctx, nil)
for {
found := it.Next()
if !found {
_, err = writer.Write([]byte("]"))
return err
} else if !start {
_, err = writer.Write([]byte(",\n"))
if err != nil {
return err
}
}
start = false
msg := t.MessageType().New().Interface()
err = it.UnmarshalMessage(msg)
if err != nil {
return err
}
bz, err := marshalOptions.Marshal(msg)
if err != nil {
return err
}
_, err = writer.Write(bz)
if err != nil {
return err
}
}
}
func (t tableImpl) DecodeEntry(k, v []byte) (ormkv.Entry, error) {
r := bytes.NewReader(k)
err := encodeutil.SkipPrefix(r, t.tablePrefix)
if err != nil {
return nil, err
}
id, err := binary.ReadUvarint(r)
if err != nil {
return nil, err
}
if id > math.MaxUint32 {
return nil, ormerrors.UnexpectedDecodePrefix.Wrapf("uint32 varint id out of range %d", id)
}
idx, ok := t.entryCodecsByID[uint32(id)]
if !ok {
return nil, ormerrors.UnexpectedDecodePrefix.Wrapf("can't find field with id %d", id)
}
return idx.DecodeEntry(k, v)
}
func (t tableImpl) EncodeEntry(entry ormkv.Entry) (k, v []byte, err error) {
switch entry := entry.(type) {
case *ormkv.PrimaryKeyEntry:
return t.PrimaryKeyCodec.EncodeEntry(entry)
case *ormkv.IndexKeyEntry:
idx, ok := t.indexesByFields[fieldnames.FieldsFromNames(entry.Fields)]
if !ok {
return nil, nil, ormerrors.BadDecodeEntry.Wrapf("can't find index with fields %s", entry.Fields)
}
return idx.EncodeEntry(entry)
default:
return nil, nil, ormerrors.BadDecodeEntry.Wrapf("%s", entry)
}
}
func (t tableImpl) ID() uint32 {
return t.tableID
}
func (t tableImpl) Has(ctx context.Context, message proto.Message) (found bool, err error) {
backend, err := t.getBackend(ctx)
if err != nil {
return false, err
}
keyValues := t.primaryKeyIndex.PrimaryKeyCodec.GetKeyValues(message.ProtoReflect())
return t.primaryKeyIndex.has(backend, keyValues)
}
// Get retrieves the message if one exists for the primary key fields
// set on the message. Other fields besides the primary key fields will not
// be used for retrieval.
func (t tableImpl) Get(ctx context.Context, message proto.Message) (found bool, err error) {
backend, err := t.getBackend(ctx)
if err != nil {
return false, err
}
keyValues := t.primaryKeyIndex.PrimaryKeyCodec.GetKeyValues(message.ProtoReflect())
return t.primaryKeyIndex.get(backend, message, keyValues)
}
var (
_ Table = &tableImpl{}
_ Schema = &tableImpl{}
)
type saveMode int
const (
saveModeDefault saveMode = iota
saveModeInsert
saveModeUpdate
)

View File

@ -1,766 +0,0 @@
package ormtable_test
import (
"bytes"
"context"
"fmt"
"sort"
"strings"
"testing"
"google.golang.org/protobuf/proto"
"google.golang.org/protobuf/reflect/protoreflect"
"google.golang.org/protobuf/testing/protocmp"
"gotest.tools/v3/assert"
"gotest.tools/v3/golden"
"pgregory.net/rapid"
queryv1beta1 "cosmossdk.io/api/cosmos/base/query/v1beta1"
coretesting "cosmossdk.io/core/testing"
sdkerrors "cosmossdk.io/errors"
"cosmossdk.io/orm/encoding/ormkv"
"cosmossdk.io/orm/internal/testkv"
"cosmossdk.io/orm/internal/testpb"
"cosmossdk.io/orm/internal/testutil"
"cosmossdk.io/orm/model/ormlist"
"cosmossdk.io/orm/model/ormtable"
"cosmossdk.io/orm/types/kv"
"cosmossdk.io/orm/types/ormerrors"
)
func TestScenario(t *testing.T) {
table, err := ormtable.Build(ormtable.Options{
MessageType: (&testpb.ExampleTable{}).ProtoReflect().Type(),
})
assert.NilError(t, err)
// first run tests with a split index-commitment store
runTestScenario(t, table, testkv.NewSplitMemBackend())
// now run tests with a shared index-commitment store
// we're going to wrap this test in a debug store and save the decoded debug
// messages, these will be checked against a golden file at the end of the
// test. the golden file can be used for fine-grained debugging of kv-store
// layout
debugBuf := &strings.Builder{}
store := testkv.NewDebugBackend(
testkv.NewSharedMemBackend(),
&testkv.EntryCodecDebugger{
EntryCodec: table,
Print: func(s string) { debugBuf.WriteString(s + "\n") },
},
)
runTestScenario(t, table, store)
// we're going to store debug data in a golden file to make sure that
// logical decoding works successfully
// run `go test pkgname -test.update-golden` to update the golden file
// see https://pkg.go.dev/gotest.tools/v3/golden for docs
golden.Assert(t, debugBuf.String(), "test_scenario.golden")
checkEncodeDecodeEntries(t, table, store.IndexStoreReader())
}
// isolated test for bug - https://github.com/cosmos/cosmos-sdk/issues/11431
func TestPaginationLimitCountTotal(t *testing.T) {
table, err := ormtable.Build(ormtable.Options{
MessageType: (&testpb.ExampleTable{}).ProtoReflect().Type(),
})
assert.NilError(t, err)
backend := testkv.NewSplitMemBackend()
ctx := ormtable.WrapContextDefault(backend)
store, err := testpb.NewExampleTableTable(table)
assert.NilError(t, err)
assert.NilError(t, store.Insert(ctx, &testpb.ExampleTable{U32: 4, I64: 2, Str: "co"}))
assert.NilError(t, store.Insert(ctx, &testpb.ExampleTable{U32: 5, I64: 2, Str: "sm"}))
assert.NilError(t, store.Insert(ctx, &testpb.ExampleTable{U32: 6, I64: 2, Str: "os"}))
it, err := store.List(ctx, &testpb.ExampleTablePrimaryKey{}, ormlist.Paginate(&queryv1beta1.PageRequest{Limit: 3, CountTotal: true}))
assert.NilError(t, err)
assert.Check(t, it.Next())
it, err = store.List(ctx, &testpb.ExampleTablePrimaryKey{}, ormlist.Paginate(&queryv1beta1.PageRequest{Limit: 4, CountTotal: true}))
assert.NilError(t, err)
assert.Check(t, it.Next())
it, err = store.List(ctx, &testpb.ExampleTablePrimaryKey{}, ormlist.Paginate(&queryv1beta1.PageRequest{Limit: 1, CountTotal: true}))
assert.NilError(t, err)
for it.Next() {
}
pr := it.PageResponse()
assert.Check(t, pr != nil)
assert.Equal(t, uint64(3), pr.Total)
}
// check that the ormkv.Entry's decode and encode to the same bytes
func checkEncodeDecodeEntries(t *testing.T, table ormtable.Table, store kv.ReadonlyStore) {
t.Helper()
it, err := store.Iterator(nil, nil)
assert.NilError(t, err)
for it.Valid() {
key := it.Key()
value := it.Value()
entry, err := table.DecodeEntry(key, value)
assert.NilError(t, err)
k, v, err := table.EncodeEntry(entry)
assert.NilError(t, err)
assert.Assert(t, bytes.Equal(key, k), "%x %x %s", key, k, entry)
assert.Assert(t, bytes.Equal(value, v), "%x %x %s", value, v, entry)
it.Next()
}
}
func runTestScenario(t *testing.T, table ormtable.Table, backend ormtable.Backend) {
t.Helper()
ctx := ormtable.WrapContextDefault(backend)
store, err := testpb.NewExampleTableTable(table)
assert.NilError(t, err)
// let's create 10 data items we'll use later and give them indexes
data := []*testpb.ExampleTable{
{U32: 4, I64: -2, Str: "abc", U64: 7}, // 0
{U32: 4, I64: -2, Str: "abd", U64: 7}, // 1
{U32: 4, I64: -1, Str: "abc", U64: 8}, // 2
{U32: 5, I64: -2, Str: "abd", U64: 8}, // 3
{U32: 5, I64: -2, Str: "abe", U64: 9}, // 4
{U32: 7, I64: -2, Str: "abe", U64: 10}, // 5
{U32: 7, I64: -1, Str: "abe", U64: 11}, // 6
{U32: 8, I64: -4, Str: "abc", U64: 11}, // 7
{U32: 8, I64: 1, Str: "abc", U64: 12}, // 8
{U32: 8, I64: 1, Str: "abd", U64: 10}, // 9
}
// let's make a function to match what's in our iterator with what we
// expect using indexes in the data array above
assertIteratorItems := func(it ormtable.Iterator, xs ...int) {
for _, i := range xs {
assert.Assert(t, it.Next())
msg, err := it.GetMessage()
assert.NilError(t, err)
// t.Logf("data[%d] %v == %v", i, data[i], msg)
assert.DeepEqual(t, data[i], msg, protocmp.Transform())
}
// make sure the iterator is done
assert.Assert(t, !it.Next())
}
// insert one record
err = store.Insert(ctx, data[0])
assert.NilError(t, err)
// trivial prefix query has one record
it, err := store.List(ctx, testpb.ExampleTablePrimaryKey{})
assert.NilError(t, err)
assertIteratorItems(it, 0)
// insert one record
err = store.Insert(ctx, data[1])
assert.NilError(t, err)
// trivial prefix query has two records
it, err = store.List(ctx, testpb.ExampleTablePrimaryKey{})
assert.NilError(t, err)
assertIteratorItems(it, 0, 1)
// insert the other records
assert.NilError(t, err)
for i := 2; i < len(data); i++ {
err = store.Insert(ctx, data[i])
assert.NilError(t, err)
}
// let's do a prefix query on the primary key
it, err = store.List(ctx, testpb.ExampleTablePrimaryKey{}.WithU32(8))
assert.NilError(t, err)
assertIteratorItems(it, 7, 8, 9)
// let's try a reverse prefix query
it, err = store.List(ctx, testpb.ExampleTablePrimaryKey{}.WithU32(4), ormlist.Reverse())
assert.NilError(t, err)
defer it.Close()
assertIteratorItems(it, 2, 1, 0)
// let's try a range query
it, err = store.ListRange(ctx,
testpb.ExampleTablePrimaryKey{}.WithU32I64(4, -1),
testpb.ExampleTablePrimaryKey{}.WithU32(7),
)
assert.NilError(t, err)
defer it.Close()
assertIteratorItems(it, 2, 3, 4, 5, 6)
// and another range query
it, err = store.ListRange(ctx,
testpb.ExampleTablePrimaryKey{}.WithU32I64(5, -3),
testpb.ExampleTablePrimaryKey{}.WithU32I64Str(8, 1, "abc"),
)
assert.NilError(t, err)
defer it.Close()
assertIteratorItems(it, 3, 4, 5, 6, 7, 8)
// now a reverse range query on a different index
strU32Index := table.GetIndex("str,u32")
assert.Assert(t, strU32Index != nil)
it, err = store.ListRange(ctx,
testpb.ExampleTableStrU32IndexKey{}.WithStr("abc"),
testpb.ExampleTableStrU32IndexKey{}.WithStr("abd"),
ormlist.Reverse(),
)
assert.NilError(t, err)
assertIteratorItems(it, 9, 3, 1, 8, 7, 2, 0)
// another prefix query forwards
it, err = store.List(ctx,
testpb.ExampleTableStrU32IndexKey{}.WithStrU32("abe", 7),
)
assert.NilError(t, err)
assertIteratorItems(it, 5, 6)
// and backwards
it, err = store.List(ctx,
testpb.ExampleTableStrU32IndexKey{}.WithStrU32("abc", 4),
ormlist.Reverse(),
)
assert.NilError(t, err)
assertIteratorItems(it, 2, 0)
// try filtering
it, err = store.List(ctx, testpb.ExampleTablePrimaryKey{}, ormlist.Filter(func(message proto.Message) bool {
ex := message.(*testpb.ExampleTable)
return ex.U64 != 10
}))
assert.NilError(t, err)
assertIteratorItems(it, 0, 1, 2, 3, 4, 6, 7, 8)
// try a cursor
it, err = store.List(ctx, testpb.ExampleTablePrimaryKey{})
assert.NilError(t, err)
assert.Assert(t, it.Next())
assert.Assert(t, it.Next())
it, err = store.List(ctx, testpb.ExampleTablePrimaryKey{}, ormlist.Cursor(it.Cursor()))
assert.NilError(t, err)
assertIteratorItems(it, 2, 3, 4, 5, 6, 7, 8, 9)
// try a unique index
found, err := store.HasByU64Str(ctx, 12, "abc")
assert.NilError(t, err)
assert.Assert(t, found)
a, err := store.GetByU64Str(ctx, 12, "abc")
assert.NilError(t, err)
assert.DeepEqual(t, data[8], a, protocmp.Transform())
// let's try paginating some stuff
it, err = store.List(ctx, testpb.ExampleTablePrimaryKey{}, ormlist.Paginate(&queryv1beta1.PageRequest{
Limit: 4,
CountTotal: true,
}))
assert.NilError(t, err)
assertIteratorItems(it, 0, 1, 2, 3)
res := it.PageResponse()
assert.Assert(t, res != nil)
assert.Equal(t, uint64(10), res.Total)
assert.Assert(t, res.NextKey != nil)
// let's use a default limit
it, err = store.List(ctx, testpb.ExampleTablePrimaryKey{},
ormlist.DefaultLimit(4),
ormlist.Paginate(&queryv1beta1.PageRequest{
CountTotal: true,
}))
assert.NilError(t, err)
assertIteratorItems(it, 0, 1, 2, 3)
res = it.PageResponse()
assert.Assert(t, res != nil)
assert.Equal(t, uint64(10), res.Total)
assert.Assert(t, res.NextKey != nil)
// read another page
it, err = store.List(ctx, testpb.ExampleTablePrimaryKey{}, ormlist.Paginate(&queryv1beta1.PageRequest{
Key: res.NextKey,
Limit: 4,
}))
assert.NilError(t, err)
assertIteratorItems(it, 4, 5, 6, 7)
res = it.PageResponse()
assert.Assert(t, res != nil)
assert.Assert(t, res.NextKey != nil)
// and the last page
it, err = store.List(ctx, testpb.ExampleTablePrimaryKey{}, ormlist.Paginate(&queryv1beta1.PageRequest{
Key: res.NextKey,
Limit: 4,
}))
assert.NilError(t, err)
assertIteratorItems(it, 8, 9)
res = it.PageResponse()
assert.Assert(t, res != nil)
assert.Assert(t, res.NextKey == nil)
// let's go backwards
it, err = store.List(ctx, testpb.ExampleTablePrimaryKey{}, ormlist.Paginate(&queryv1beta1.PageRequest{
Limit: 2,
CountTotal: true,
Reverse: true,
}))
assert.NilError(t, err)
assertIteratorItems(it, 9, 8)
res = it.PageResponse()
assert.Assert(t, res != nil)
assert.Assert(t, res.NextKey != nil)
assert.Equal(t, uint64(10), res.Total)
// a bit more
it, err = store.List(ctx, testpb.ExampleTablePrimaryKey{}, ormlist.Paginate(&queryv1beta1.PageRequest{
Key: res.NextKey,
Limit: 2,
Reverse: true,
}))
assert.NilError(t, err)
assertIteratorItems(it, 7, 6)
res = it.PageResponse()
assert.Assert(t, res != nil)
assert.Assert(t, res.NextKey != nil)
// range query
it, err = store.ListRange(ctx,
testpb.ExampleTablePrimaryKey{}.WithU32I64Str(4, -1, "abc"),
testpb.ExampleTablePrimaryKey{}.WithU32I64Str(7, -2, "abe"),
ormlist.Paginate(&queryv1beta1.PageRequest{
Limit: 10,
}))
assert.NilError(t, err)
assertIteratorItems(it, 2, 3, 4, 5)
res = it.PageResponse()
assert.Assert(t, res != nil)
assert.Assert(t, res.NextKey == nil)
// let's try an offset
it, err = store.List(ctx, testpb.ExampleTablePrimaryKey{}, ormlist.Paginate(&queryv1beta1.PageRequest{
Limit: 2,
CountTotal: true,
Offset: 3,
}))
assert.NilError(t, err)
assertIteratorItems(it, 3, 4)
res = it.PageResponse()
assert.Assert(t, res != nil)
assert.Assert(t, res.NextKey != nil)
assert.Equal(t, uint64(10), res.Total)
// and reverse
it, err = store.List(ctx, testpb.ExampleTablePrimaryKey{}, ormlist.Paginate(&queryv1beta1.PageRequest{
Limit: 3,
CountTotal: true,
Offset: 5,
Reverse: true,
}))
assert.NilError(t, err)
assertIteratorItems(it, 4, 3, 2)
res = it.PageResponse()
assert.Assert(t, res != nil)
assert.Assert(t, res.NextKey != nil)
assert.Equal(t, uint64(10), res.Total)
// now an offset that's slightly too big
it, err = store.List(ctx, testpb.ExampleTablePrimaryKey{}, ormlist.Paginate(&queryv1beta1.PageRequest{
Limit: 1,
CountTotal: true,
Offset: 10,
}))
assert.NilError(t, err)
assert.Assert(t, !it.Next())
res = it.PageResponse()
assert.Assert(t, res != nil)
assert.Assert(t, res.NextKey == nil)
assert.Equal(t, uint64(10), res.Total)
// now let's update some things
for i := 0; i < 5; i++ {
data[i].U64 *= 2
data[i].Bz = []byte(data[i].Str)
err = store.Update(ctx, data[i])
assert.NilError(t, err)
}
it, err = store.List(ctx, testpb.ExampleTablePrimaryKey{})
assert.NilError(t, err)
// we should still get everything in the same order
assertIteratorItems(it, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
// let's use SAVE_MODE_DEFAULT and add something
data = append(data, &testpb.ExampleTable{U32: 9})
err = store.Save(ctx, data[10])
assert.NilError(t, err)
a, err = store.Get(ctx, 9, 0, "")
assert.NilError(t, err)
assert.Assert(t, a != nil)
assert.DeepEqual(t, data[10], a, protocmp.Transform())
// and update it
data[10].B = true
assert.NilError(t, table.Save(ctx, data[10]))
a, err = store.Get(ctx, 9, 0, "")
assert.NilError(t, err)
assert.Assert(t, a != nil)
assert.DeepEqual(t, data[10], a, protocmp.Transform())
// and iterate
it, err = store.List(ctx, testpb.ExampleTablePrimaryKey{})
assert.NilError(t, err)
assertIteratorItems(it, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
// let's export and import JSON and use a read-only backend
buf := &bytes.Buffer{}
readBackend := ormtable.NewReadBackend(ormtable.ReadBackendOptions{
CommitmentStoreReader: backend.CommitmentStoreReader(),
IndexStoreReader: backend.IndexStoreReader(),
})
assert.NilError(t, table.ExportJSON(ormtable.WrapContextDefault(readBackend), buf))
assert.NilError(t, table.ValidateJSON(bytes.NewReader(buf.Bytes())))
store2 := ormtable.WrapContextDefault(testkv.NewSplitMemBackend())
assert.NilError(t, table.ImportJSON(store2, bytes.NewReader(buf.Bytes())))
assertTablesEqual(t, table, ctx, store2)
// let's delete item 5
err = store.DeleteBy(ctx, testpb.ExampleTableU32I64StrIndexKey{}.WithU32I64Str(7, -2, "abe"))
assert.NilError(t, err)
// it should be gone
found, err = store.Has(ctx, 7, -2, "abe")
assert.NilError(t, err)
assert.Assert(t, !found)
// and missing from the iterator
it, err = store.List(ctx, testpb.ExampleTablePrimaryKey{})
assert.NilError(t, err)
assertIteratorItems(it, 0, 1, 2, 3, 4, 6, 7, 8, 9, 10)
// let's do a batch delete
// first iterate over the items we'll delete to check the iterator
it, err = store.List(ctx, testpb.ExampleTableStrU32IndexKey{}.WithStr("abd"))
assert.NilError(t, err)
assertIteratorItems(it, 1, 3, 9)
// now delete them
assert.NilError(t, store.DeleteBy(ctx, testpb.ExampleTableStrU32IndexKey{}.WithStr("abd")))
it, err = store.List(ctx, testpb.ExampleTablePrimaryKey{})
assert.NilError(t, err)
assertIteratorItems(it, 0, 2, 4, 6, 7, 8, 10)
// Let's do a range delete
assert.NilError(t, store.DeleteRange(ctx,
testpb.ExampleTableStrU32IndexKey{}.WithStrU32("abc", 8),
testpb.ExampleTableStrU32IndexKey{}.WithStrU32("abe", 5),
))
it, err = store.List(ctx, testpb.ExampleTablePrimaryKey{})
assert.NilError(t, err)
assertIteratorItems(it, 0, 2, 6, 10)
// Let's delete something directly
assert.NilError(t, store.Delete(ctx, data[0]))
it, err = store.List(ctx, testpb.ExampleTablePrimaryKey{})
assert.NilError(t, err)
assertIteratorItems(it, 2, 6, 10)
}
func TestRandomTableData(t *testing.T) {
testTable(t, TableDataGen(testutil.GenA, 100).Example())
}
func testTable(t *testing.T, tableData *TableData) {
t.Helper()
for _, index := range tableData.table.Indexes() {
indexModel := &IndexModel{
TableData: tableData,
index: index.(TestIndex),
}
sort.Sort(indexModel)
if _, ok := index.(ormtable.UniqueIndex); ok {
testUniqueIndex(t, indexModel)
}
testIndex(t, indexModel)
}
}
func testUniqueIndex(t *testing.T, model *IndexModel) {
t.Helper()
index := model.index.(ormtable.UniqueIndex)
t.Logf("testing unique index %T %s", index, index.Fields())
for i := 0; i < len(model.data); i++ {
x := model.data[i]
ks, _, err := index.(ormkv.IndexCodec).EncodeKeyFromMessage(x.ProtoReflect())
assert.NilError(t, err)
values := protoValuesToInterfaces(ks)
found, err := index.Has(model.context, values...)
assert.NilError(t, err)
assert.Assert(t, found)
msg := model.table.MessageType().New().Interface()
found, err = index.Get(model.context, msg, values...)
assert.NilError(t, err)
assert.Assert(t, found)
assert.DeepEqual(t, x, msg, protocmp.Transform())
}
}
func testIndex(t *testing.T, model *IndexModel) {
t.Helper()
index := model.index
if index.IsFullyOrdered() {
t.Logf("testing index %T %s", index, index.Fields())
it, err := model.index.List(model.context, nil)
assert.NilError(t, err)
checkIteratorAgainstSlice(t, it, model.data)
it, err = model.index.List(model.context, nil, ormlist.Reverse())
assert.NilError(t, err)
checkIteratorAgainstSlice(t, it, reverseData(model.data))
rapid.Check(t, func(t *rapid.T) {
i := rapid.IntRange(0, len(model.data)-2).Draw(t, "i")
j := rapid.IntRange(i+1, len(model.data)-1).Draw(t, "j")
start, _, err := model.index.(ormkv.IndexCodec).EncodeKeyFromMessage(model.data[i].ProtoReflect())
assert.NilError(t, err)
end, _, err := model.index.(ormkv.IndexCodec).EncodeKeyFromMessage(model.data[j].ProtoReflect())
assert.NilError(t, err)
startVals := protoValuesToInterfaces(start)
endVals := protoValuesToInterfaces(end)
it, err = model.index.ListRange(model.context, startVals, endVals)
assert.NilError(t, err)
checkIteratorAgainstSlice(t, it, model.data[i:j+1])
it, err = model.index.ListRange(model.context, startVals, endVals, ormlist.Reverse())
assert.NilError(t, err)
checkIteratorAgainstSlice(t, it, reverseData(model.data[i:j+1]))
})
} else {
t.Logf("testing unordered index %T %s", index, index.Fields())
// get all the data
it, err := model.index.List(model.context, nil)
assert.NilError(t, err)
var data2 []proto.Message
for it.Next() {
msg, err := it.GetMessage()
assert.NilError(t, err)
data2 = append(data2, msg)
}
assert.Equal(t, len(model.data), len(data2))
// sort it
model2 := &IndexModel{
TableData: &TableData{
table: model.table,
data: data2,
context: model.context,
},
index: model.index,
}
sort.Sort(model2)
// compare
for i := 0; i < len(data2); i++ {
assert.DeepEqual(t, model.data[i], data2[i], protocmp.Transform())
}
}
}
func reverseData(data []proto.Message) []proto.Message {
n := len(data)
reverse := make([]proto.Message, n)
for i := 0; i < n; i++ {
reverse[n-i-1] = data[i]
}
return reverse
}
func checkIteratorAgainstSlice(t assert.TestingT, iterator ormtable.Iterator, data []proto.Message) {
i := 0
for iterator.Next() {
if i >= len(data) {
for iterator.Next() {
i++
}
t.Log(fmt.Sprintf("too many elements in iterator, len(data) = %d, i = %d", len(data), i))
t.FailNow()
}
msg, err := iterator.GetMessage()
assert.NilError(t, err)
assert.DeepEqual(t, data[i], msg, protocmp.Transform())
i++
}
}
func TableDataGen[T proto.Message](elemGen *rapid.Generator[T], n int) *rapid.Generator[*TableData] {
return rapid.Custom(func(t *rapid.T) *TableData {
prefix := rapid.SliceOfN(rapid.Byte(), 0, 5).Draw(t, "prefix")
message := elemGen.Draw(t, "message")
table, err := ormtable.Build(ormtable.Options{
Prefix: prefix,
MessageType: message.ProtoReflect().Type(),
})
if err != nil {
panic(err)
}
data := make([]proto.Message, n)
store := ormtable.WrapContextDefault(testkv.NewSplitMemBackend())
for i := 0; i < n; {
message = elemGen.Draw(t, fmt.Sprintf("message[%d]", i))
err := table.Insert(store, message)
if sdkerrors.IsOf(err, ormerrors.PrimaryKeyConstraintViolation, ormerrors.UniqueKeyViolation) {
continue
} else if err != nil {
panic(err)
}
data[i] = message
i++
}
return &TableData{
data: data,
table: table,
context: store,
}
})
}
type TableData struct {
table ormtable.Table
data []proto.Message
context context.Context
}
type IndexModel struct {
*TableData
index TestIndex
}
// TestIndex exposes methods that all index implementations expose publicly
// but on private structs because they are intended only to be used for testing.
type TestIndex interface {
ormtable.Index
// CompareKeys compares the two keys against the underlying IndexCodec, returning a
// negative value if key1 is less than key2, 0 if they are equal, and a
// positive value otherwise.
CompareKeys(key1, key2 []protoreflect.Value) int
// IsFullyOrdered returns true if all of the fields in the index are
// considered "well-ordered" in terms of sorted iteration.
IsFullyOrdered() bool
}
func (m *IndexModel) Len() int {
return len(m.data)
}
func (m *IndexModel) Less(i, j int) bool {
is, _, err := m.index.(ormkv.IndexCodec).EncodeKeyFromMessage(m.data[i].ProtoReflect())
if err != nil {
panic(err)
}
js, _, err := m.index.(ormkv.IndexCodec).EncodeKeyFromMessage(m.data[j].ProtoReflect())
if err != nil {
panic(err)
}
return m.index.CompareKeys(is, js) < 0
}
func (m *IndexModel) Swap(i, j int) {
m.data[i], m.data[j] = m.data[j], m.data[i]
}
var _ sort.Interface = &IndexModel{}
func TestJSONExportImport(t *testing.T) {
table, err := ormtable.Build(ormtable.Options{
MessageType: (&testpb.ExampleTable{}).ProtoReflect().Type(),
})
assert.NilError(t, err)
store := ormtable.WrapContextDefault(testkv.NewSplitMemBackend())
for i := 0; i < 100; {
x := testutil.GenA.Example()
err = table.Insert(store, x)
if sdkerrors.IsOf(err, ormerrors.PrimaryKeyConstraintViolation, ormerrors.UniqueKeyViolation) {
continue
} else {
assert.NilError(t, err)
}
i++
}
buf := &bytes.Buffer{}
assert.NilError(t, table.ExportJSON(store, buf))
assert.NilError(t, table.ValidateJSON(bytes.NewReader(buf.Bytes())))
store2 := ormtable.WrapContextDefault(testkv.NewSplitMemBackend())
assert.NilError(t, table.ImportJSON(store2, bytes.NewReader(buf.Bytes())))
assertTablesEqual(t, table, store, store2)
}
func assertTablesEqual(t assert.TestingT, table ormtable.Table, ctx, ctx2 context.Context) {
it, err := table.List(ctx, nil)
assert.NilError(t, err)
it2, err := table.List(ctx2, nil)
assert.NilError(t, err)
for {
have := it.Next()
have2 := it2.Next()
assert.Equal(t, have, have2)
if !have {
break
}
msg1, err := it.GetMessage()
assert.NilError(t, err)
msg2, err := it2.GetMessage()
assert.NilError(t, err)
assert.DeepEqual(t, msg1, msg2, protocmp.Transform())
}
}
func protoValuesToInterfaces(ks []protoreflect.Value) []interface{} {
values := make([]interface{}, len(ks))
for i := 0; i < len(ks); i++ {
values[i] = ks[i].Interface()
}
return values
}
func TestReadonly(t *testing.T) {
table, err := ormtable.Build(ormtable.Options{
MessageType: (&testpb.ExampleTable{}).ProtoReflect().Type(),
})
assert.NilError(t, err)
readBackend := ormtable.NewReadBackend(ormtable.ReadBackendOptions{
CommitmentStoreReader: testkv.TestStore{Db: coretesting.NewMemDB()},
IndexStoreReader: testkv.TestStore{Db: coretesting.NewMemDB()},
})
ctx := ormtable.WrapContextDefault(readBackend)
assert.ErrorIs(t, ormerrors.ReadOnly, table.Insert(ctx, &testpb.ExampleTable{}))
}
func TestInsertReturningFieldName(t *testing.T) {
table, err := ormtable.Build(ormtable.Options{
MessageType: (&testpb.ExampleAutoIncFieldName{}).ProtoReflect().Type(),
})
assert.NilError(t, err)
backend := testkv.NewSplitMemBackend()
ctx := ormtable.WrapContextDefault(backend)
store, err := testpb.NewExampleAutoIncFieldNameTable(table)
assert.NilError(t, err)
foo, err := store.InsertReturningFoo(ctx, &testpb.ExampleAutoIncFieldName{
Bar: 45,
})
assert.NilError(t, err)
assert.Equal(t, uint64(1), foo)
}

View File

@ -1,2 +0,0 @@
[1,
{"id":"2","x":"foo","y":5}]

View File

@ -1 +0,0 @@
[{"id":"1","x":"foo","y":5}]

Some files were not shown because too many files have changed in this diff