fix: typos (#25048)
Co-authored-by: Alex | Interchain Labs <alex@interchainlabs.io>
This commit is contained in:
parent 8918fe28c8
commit 2789409491
@@ -47,7 +47,7 @@ func writeConfigToFile(configFilePath string, config *ClientConfig) error {
 	return os.WriteFile(configFilePath, buffer.Bytes(), 0o600)
 }
 
-// getClientConfig reads values from client.toml file and unmarshalls them into ClientConfig
+// getClientConfig reads values from client.toml file and unmarshals them into ClientConfig
 func getClientConfig(configPath string, v *viper.Viper) (*ClientConfig, error) {
 	v.AddConfigPath(configPath)
 	v.SetConfigName("client")

@@ -11,7 +11,7 @@ PROPOSED - Implemented
 ## Abstract
 
 We propose a simplified module storage layer which leverages golang generics to allow module developers to handle module
-storage in a simple and straightforward manner, whilst offering safety, extensibility and standardisation.
+storage in a simple and straightforward manner, whilst offering safety, extensibility and standardization.
 
 ## Context
 
@@ -33,7 +33,7 @@ This brings in a lot of problems:
 * Key to bytes formats are complex and their definition is error-prone, for example:
     * how do I format time to bytes in such a way that bytes are sorted?
     * how do I ensure when I don't have namespace collisions when dealing with secondary indexes?
-* The lack of standardisation makes life hard for clients, and the problem is exacerbated when it comes to providing proofs for objects present in state. Clients are forced to maintain a list of object paths to gather proofs.
+* The lack of standardization makes life hard for clients, and the problem is exacerbated when it comes to providing proofs for objects present in state. Clients are forced to maintain a list of object paths to gather proofs.
 
 ### Current Solution: ORM
 
@@ -68,13 +68,13 @@ All the collection APIs build on top of the simple `Map` type.
 
 Collections is fully generic, meaning that anything can be used as `Key` and `Value`. It can be a protobuf object or not.
 
-Collections types, in fact, delegate the duty of serialisation of keys and values to a secondary collections API component called `ValueEncoders` and `KeyEncoders`.
+Collections types, in fact, delegate the duty of serialization of keys and values to a secondary collections API component called `ValueEncoders` and `KeyEncoders`.
 
 `ValueEncoders` take care of converting a value to bytes (relevant only for `Map`). And offers a plug and play layer which allows us to change how we encode objects,
-which is relevant for swapping serialisation frameworks and enhancing performance.
+which is relevant for swapping serialization frameworks and enhancing performance.
 `Collections` already comes in with default `ValueEncoders`, specifically for: protobuf objects, special SDK types (sdk.Int, sdk.Dec).
 
-`KeyEncoders` take care of converting keys to bytes, `collections` already comes in with some default `KeyEncoders` for some privimite golang types
+`KeyEncoders` take care of converting keys to bytes, `collections` already comes in with some default `KeyEncoders` for some primitive golang types
 (uint64, string, time.Time, ...) and some widely used sdk types (sdk.Acc/Val/ConsAddress, sdk.Int/Dec, ...).
 These default implementations also offer safety around proper lexicographic ordering and namespace-collision.
 
@@ -96,10 +96,10 @@ the upgrade to the new storage layer non-state breaking.
 ### Positive
 
 * ADR aimed at removing code from the SDK rather than adding it. Migrating just `x/staking` to collections would yield to a net decrease in LOC (even considering the addition of collections itself).
-* Simplifies and standardises storage layers across modules in the SDK.
+* Simplifies and standardizes storage layers across modules in the SDK.
 * Does not require to have to deal with protobuf.
 * It's pure golang code.
-* Generalisation over `KeyEncoders` and `ValueEncoders` allows us to not tie ourself to the data serialisation framework.
+* Generalization over `KeyEncoders` and `ValueEncoders` allows us to not tie ourself to the data serialization framework.
 * `KeyEncoders` and `ValueEncoders` can be extended to provide schema reflection.
 
 ### Negative
 
@@ -226,7 +226,7 @@ type HasServices interface {
 
 ```
 
-Because of the `cosmos.msg.v1.service` protobuf option, required for `Msg` services, the same `ServiceRegitrar` can be
+Because of the `cosmos.msg.v1.service` protobuf option, required for `Msg` services, the same `ServiceRegistrar` can be
 used to register both `Msg` and query services.
 
 #### Genesis
 
@@ -319,7 +319,7 @@ single-use unordered nonces, instead of deriving nonces from bytes in the transa
 
 * Requires additional storage overhead.
 * Requirement of unique timestamps per transaction causes a small amount of additional overhead for clients. Clients must ensure each transaction's timeout timestamp is different. However, nanosecond differentials suffice.
-* Usage of Cosmos SDK KV store is slower in comparison to using a non-merklized store or ad-hoc methods, and block times may slow down as a result.
+* Usage of Cosmos SDK KV store is slower in comparison to using a non-merkleized store or ad-hoc methods, and block times may slow down as a result.
 
 ## References
 
@@ -396,7 +396,7 @@ The `AnteHandler` is theoretically optional, but still a very important componen
 
 * Be a primary line of defense against spam and second line of defense (the first one being the mempool) against transaction replay with fees deduction and [`sequence`](./01-transactions.md#transaction-generation) checking.
 * Perform preliminary _stateful_ validity checks like ensuring signatures are valid or that the sender has enough funds to pay for fees.
-* Play a role in the incentivisation of stakeholders via the collection of transaction fees.
+* Play a role in the incentivization of stakeholders via the collection of transaction fees.
 
 `BaseApp` holds an `anteHandler` as parameter that is initialized in the [application's constructor](../beginner/00-app-anatomy.md#application-constructor). The most widely used `anteHandler` is the [`auth` module](https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/x/auth/ante/ante.go).
 
@@ -474,7 +474,7 @@ https://github.com/cosmos/cosmos-sdk/blob/v0.53.0/baseapp/baseapp.go#LL772-L807
 
 Transaction execution within `FinalizeBlock` performs the **exact same steps as `CheckTx`**, with a little caveat at step 3 and the addition of a fifth step:
 
-1. The `AnteHandler` does **not** check that the transaction's `gas-prices` is sufficient. That is because the `min-gas-prices` value `gas-prices` is checked against is local to the node, and therefore what is enough for one full-node might not be for another. This means that the proposer can potentially include transactions for free, although they are not incentivised to do so, as they earn a bonus on the total fee of the block they propose.
+1. The `AnteHandler` does **not** check that the transaction's `gas-prices` is sufficient. That is because the `min-gas-prices` value `gas-prices` is checked against is local to the node, and therefore what is enough for one full-node might not be for another. This means that the proposer can potentially include transactions for free, although they are not incentivized to do so, as they earn a bonus on the total fee of the block they propose.
 2. For each `sdk.Msg` in the transaction, route to the appropriate module's Protobuf [`Msg` service](../../build/building-modules/03-msg-services.md). Additional _stateful_ checks are performed, and the branched multistore held in `finalizeBlockState`'s `context` is updated by the module's `keeper`. If the `Msg` service returns successfully, the branched multistore held in `context` is written to `finalizeBlockState` `CacheMultiStore`.
 
 During the additional fifth step outlined in (2), each read/write to the store increases the value of `GasConsumed`. You can find the default cost of each operation:
 
@@ -45,7 +45,7 @@ func (k FooKeeper) Do(obj interface{}) {
 }
 ```
 
-By default that panic would be recovered and an error message will be printed to log. To override that behaviour we should register a custom RecoveryHandler:
+By default that panic would be recovered and an error message will be printed to log. To override that behavior we should register a custom RecoveryHandler:
 
 ```go
 // Cosmos SDK application constructor
 
@@ -179,7 +179,7 @@ Note that `sdk.Msg`s are bundled in [transactions](../advanced/01-transactions.m
 
 When a valid block of transactions is received by the full-node, CometBFT relays each one to the application via [`DeliverTx`](https://docs.cometbft.com/v0.37/spec/abci/abci++_app_requirements#specifics-of-responsedelivertx). Then, the application handles the transaction:
 
-1. Upon receiving the transaction, the application first unmarshalls it from `[]byte`.
+1. Upon receiving the transaction, the application first unmarshals it from `[]byte`.
 2. Then, it verifies a few things about the transaction like [fee payment and signatures](./04-gas-fees.md#antehandler) before extracting the `Msg`(s) contained in the transaction.
 3. `sdk.Msg`s are encoded using Protobuf [`Any`s](#register-codec). By analyzing each `Any`'s `type_url`, baseapp's `msgServiceRouter` routes the `sdk.Msg` to the corresponding module's `Msg` service.
 4. If the message is successfully processed, the state is updated.
 
@@ -58,15 +58,15 @@ Gas consumption can be done manually, generally by the module developer in the [
 
 `ctx.BlockGasMeter()` is the gas meter used to track gas consumption per block and make sure it does not go above a certain limit.
 
-During the genesis phase, gas consumption is unlimited to accommodate initialisation transactions.
+During the genesis phase, gas consumption is unlimited to accommodate initialization transactions.
 
 ```go
 app.finalizeBlockState.SetContext(app.finalizeBlockState.Context().WithBlockGasMeter(storetypes.NewInfiniteGasMeter()))
 ```
 
-Following the genesis block, the block gas meter is set to a finite value by the SDK. This transition is facilitated by the consensus engine (e.g., CometBFT) calling the `RequestFinalizeBlock` function, which in turn triggers the SDK's `FinalizeBlock` method. Within `FinalizeBlock`, `internalFinalizeBlock` is executed, performing necessary state updates and function executions. The block gas meter, initialised each with a finite limit, is then incorporated into the context for transaction execution, ensuring gas consumption does not exceed the block's gas limit and is reset at the end of each block.
+Following the genesis block, the block gas meter is set to a finite value by the SDK. This transition is facilitated by the consensus engine (e.g., CometBFT) calling the `RequestFinalizeBlock` function, which in turn triggers the SDK's `FinalizeBlock` method. Within `FinalizeBlock`, `internalFinalizeBlock` is executed, performing necessary state updates and function executions. The block gas meter, initialized each with a finite limit, is then incorporated into the context for transaction execution, ensuring gas consumption does not exceed the block's gas limit and is reset at the end of each block.
 
-Modules within the Cosmos SDK can consume block gas at any point during their execution by utilising the `ctx`. This gas consumption primarily occurs during state read/write operations and transaction processing. The block gas meter, accessible via `ctx.BlockGasMeter()`, monitors the total gas usage within a block, enforcing the gas limit to prevent excessive computation. This ensures that gas limits are adhered to on a per-block basis, starting from the first block post-genesis.
+Modules within the Cosmos SDK can consume block gas at any point during their execution by utilizing the `ctx`. This gas consumption primarily occurs during state read/write operations and transaction processing. The block gas meter, accessible via `ctx.BlockGasMeter()`, monitors the total gas usage within a block, enforcing the gas limit to prevent excessive computation. This ensures that gas limits are adhered to on a per-block basis, starting from the first block post-genesis.
 
 ```go
 gasMeter := app.getBlockGasMeter(app.finalizeBlockState.Context())
 
@@ -14,7 +14,7 @@ The main difference the Cosmos SDK is defining as a differentiation between RFC
 
 ## RFC life cycle
 
-RFC creation is an **iterative** process. An RFC is meant as a distributed colloboration session, it may have many comments and is usually the bi-product of no working group or synchornous communication
+RFC creation is an **iterative** process. An RFC is meant as a distributed collaboration session, it may have many comments and is usually the bi-product of no working group or synchronous communication
 
 1. Proposals could start with a new GitHub Issue, be a result of existing Issues or a discussion.
 
@@ -28,7 +28,7 @@ RFC creation is an **iterative** process. An RFC is meant as a distributed collo
 
 6. If there is consensus and enough feedback then the RFC can be accepted.
 
-> Note: An RFC is written when there is no working group or team session on the problem. RFC's are meant as a distributed white boarding session. If there is a working group on the proposal there is no need to have an RFC as there is synchornous whiteboarding going on.
+> Note: An RFC is written when there is no working group or team session on the problem. RFC's are meant as a distributed white boarding session. If there is a working group on the proposal there is no need to have an RFC as there is synchronous whiteboarding going on.
 
 ### RFC status
 
@@ -78,7 +78,7 @@ $T_f$ is a separate variable in state for the amount of fees this validator has
 This variable is incremented at every block by however much fees this validator received that block.
 On the update to the validators power, this variable is used to create the entry in state at $f$, and is then reset to 0.
 
-This fee distribution proposal is agnostic to how all of the blocks fees are divied up between validators.
+This fee distribution proposal is agnostic to how all of the blocks fees are divided up between validators.
 This creates many nice properties, for example it is possible to only rewarding validators who signed that block.
 
 \section{Additional add-ons}

@@ -237,7 +237,7 @@ Thus this scheme has efficiency improvements, simplicity improvements, and expre
 \item Mention storage optimization for how to prune slashing entries in the uniform inflation and iteration over slashing case
 \item Add equation numbers
 \item perhaps re-organize so that the no iteration
-\item Section on decimal precision considerations (would unums help?), and mitigating errors in calculation with floats and decimals. -- This probably belongs in a corrollary markdown file in the implementation
+\item Section on decimal precision considerations (would unums help?), and mitigating errors in calculation with floats and decimals. -- This probably belongs in a corollary markdown file in the implementation
 \item Consider indicating that the withdraw action need not be a tx type and could instead happen 'transparently' when more coins are needed, if a chain desired this for UX / p2p efficiency.
 \end{itemize}
 
@@ -221,9 +221,9 @@ func TestSubspace(t *testing.T) {
 	// Test space.Get, space.GetIfExists
 	for i, kv := range kvs {
 		require.NotPanics(t, func() { space.GetIfExists(ctx, []byte("invalid"), kv.ptr) }, "space.GetIfExists panics when no value exists, tc #%d", i)
-		require.Equal(t, kv.zero, indirect(kv.ptr), "space.GetIfExists unmarshalls when no value exists, tc #%d", i)
+		require.Equal(t, kv.zero, indirect(kv.ptr), "space.GetIfExists unmarshals when no value exists, tc #%d", i)
 		require.Panics(t, func() { space.Get(ctx, []byte("invalid"), kv.ptr) }, "invalid space.Get not panics when no value exists, tc #%d", i)
-		require.Equal(t, kv.zero, indirect(kv.ptr), "invalid space.Get unmarshalls when no value exists, tc #%d", i)
+		require.Equal(t, kv.zero, indirect(kv.ptr), "invalid space.Get unmarshals when no value exists, tc #%d", i)
 
 		require.NotPanics(t, func() { space.GetIfExists(ctx, []byte(kv.key), kv.ptr) }, "space.GetIfExists panics, tc #%d", i)
 		require.Equal(t, kv.param, indirect(kv.ptr), "stored param not equal, tc #%d", i)
@@ -231,7 +231,7 @@ func TestSubspace(t *testing.T) {
 		require.Equal(t, kv.param, indirect(kv.ptr), "stored param not equal, tc #%d", i)
 
 		require.Panics(t, func() { space.Get(ctx, []byte("invalid"), kv.ptr) }, "invalid space.Get not panics when no value exists, tc #%d", i)
-		require.Equal(t, kv.param, indirect(kv.ptr), "invalid space.Get unmarshalls when no value exist, tc #%d", i)
+		require.Equal(t, kv.param, indirect(kv.ptr), "invalid space.Get unmarshals when no value exist, tc #%d", i)
 
 		require.Panics(t, func() { space.Get(ctx, []byte(kv.key), nil) }, "invalid space.Get not panics when the pointer is nil, tc #%d", i)
 		require.Panics(t, func() { space.Get(ctx, []byte(kv.key), new(invalid)) }, "invalid space.Get not panics when the pointer is different type, tc #%d", i)