fix: correct typos and grammar in architecture documentation (#25124)

commit 6433a0bb5b
parent 15e1465613
@@ -11,12 +11,12 @@ Accepted

 ## Abstract

-We want to leverage protobuf `service` definitions for defining `Msg`s which will give us significant developer UX
+We want to leverage protobuf `service` definitions for defining `Msg`s, which will give us significant developer UX
 improvements in terms of the code that is generated and the fact that return types will now be well defined.

 ## Context

-Currently `Msg` handlers in the Cosmos SDK do have return values that are placed in the `data` field of the response.
+Currently `Msg` handlers in the Cosmos SDK have return values that are placed in the `data` field of the response.
 These return values, however, are not specified anywhere except in the golang handler code.

 In early conversations [it was proposed](https://docs.google.com/document/d/1eEgYgvgZqLE45vETjhwIw4VOqK-5hwQtZtjVbiXnIGc/edit)
@@ -105,12 +105,12 @@ One consequence of this convention is that each `Msg` type can be the request pa

 ### Encoding

-Encoding of transactions generated with `Msg` services do not differ from current Protobuf transaction encoding as defined in [ADR-020](./adr-020-protobuf-transaction-encoding.md). We are encoding `Msg` types (which are exactly `Msg` service methods' request parameters) as `Any` in `Tx`s which involves packing the
+Encoding of transactions generated with `Msg` services does not differ from current Protobuf transaction encoding as defined in [ADR-020](./adr-020-protobuf-transaction-encoding.md). We are encoding `Msg` types (which are exactly `Msg` service methods' request parameters) as `Any` in `Tx`s which involves packing the
 binary-encoded `Msg` with its type URL.

 ### Decoding

-Since `Msg` types are packed into `Any`, decoding transactions messages are done by unpacking `Any`s into `Msg` types. For more information, please refer to [ADR-020](./adr-020-protobuf-transaction-encoding.md#transactions).
+Since `Msg` types are packed into `Any`, decoding transaction messages is done by unpacking `Any`s into `Msg` types. For more information, please refer to [ADR-020](./adr-020-protobuf-transaction-encoding.md#transactions).

 ### Routing
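The pack/unpack flow described in the Encoding/Decoding hunk above can be illustrated with a stdlib-only Go toy. Note that `anyMsg`, `packAny`, and `unpackAny` are hypothetical stand-ins for the SDK's real `codectypes.Any` machinery, not actual API:

```go
package main

import (
	"errors"
	"fmt"
)

// anyMsg is a toy stand-in for protobuf Any: a type URL plus the
// binary-encoded message bytes, mirroring how Msgs are packed into Txs.
type anyMsg struct {
	TypeURL string
	Value   []byte
}

// packAny wraps an encoded message together with its type URL.
func packAny(typeURL string, encoded []byte) anyMsg {
	return anyMsg{TypeURL: typeURL, Value: encoded}
}

// unpackAny checks the expected type URL before handing back the bytes,
// which is the essence of decoding an Any into a concrete Msg type.
func unpackAny(a anyMsg, wantURL string) ([]byte, error) {
	if a.TypeURL != wantURL {
		return nil, errors.New("type URL mismatch: " + a.TypeURL)
	}
	return a.Value, nil
}

func main() {
	a := packAny("/cosmos.bank.v1beta1.MsgSend", []byte{0x0a, 0x03})
	b, err := unpackAny(a, "/cosmos.bank.v1beta1.MsgSend")
	fmt.Println(len(b), err)
}
```

In the real SDK the type URL is derived from the proto message name and the decoder dispatches on it; the sketch only shows the shape of that round trip.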
@@ -157,7 +157,7 @@ than later when transactions are processed.
 ### `Msg` Service Implementation

 Just like query services, `Msg` service methods can retrieve the `sdk.Context`
-from the `context.Context` parameter method using the `sdk.UnwrapSDKContext`
+from the `context.Context` parameter using the `sdk.UnwrapSDKContext`
 method:

 ```go
@@ -153,7 +153,7 @@ Users will be able to subscribe using `client.Context.Client.Subscribe` and cons

 Akash Network has built a simple [`pubsub`](https://github.com/ovrclk/akash/blob/90d258caeb933b611d575355b8df281208a214f8/pubsub/bus.go#L20). This can be used to subscribe to `abci.Events` and [publish](https://github.com/ovrclk/akash/blob/90d258caeb933b611d575355b8df281208a214f8/events/publish.go#L21) them as typed events.

-Please see the below code sample for more detail on this flow looks for clients.
+Please see the below code sample for more detail on how this flow looks for clients.

 ## Consequences
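The subscribe-and-publish pattern referenced in the hunk above can be sketched as a minimal channel-based bus in stdlib Go. This is a toy illustration of the pattern, not Akash's pubsub API:

```go
package main

import (
	"fmt"
	"sync"
)

// bus is a minimal publish/subscribe hub: every published event is fanned
// out to each subscriber on its own buffered channel.
type bus struct {
	mu   sync.Mutex
	subs []chan string
}

// Subscribe registers a new subscriber and returns its event channel.
func (b *bus) Subscribe() <-chan string {
	b.mu.Lock()
	defer b.mu.Unlock()
	ch := make(chan string, 16)
	b.subs = append(b.subs, ch)
	return ch
}

// Publish delivers an event to every registered subscriber.
func (b *bus) Publish(ev string) {
	b.mu.Lock()
	defer b.mu.Unlock()
	for _, ch := range b.subs {
		ch <- ev
	}
}

func main() {
	b := &bus{}
	sub := b.Subscribe()
	b.Publish("transfer")
	fmt.Println(<-sub) // prints "transfer"
}
```

A real implementation would carry typed events decoded from `abci.Events` rather than strings, and would handle slow or departed subscribers.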
@@ -42,7 +42,7 @@ own account. These permissions are actually stored as a `[]string` array on the

 However, these permissions don’t really do much. They control what modules can be referenced in the `MintCoins`,
 `BurnCoins` and `DelegateCoins***` methods, but for one there is no unique object capability token that controls access —
-just a simple string. So the `x/upgrade` module could mint tokens for the `x/staking` module simple by calling
+just a simple string. So the `x/upgrade` module could mint tokens for the `x/staking` module simply by calling
 `MintCoins(“staking”)`. Furthermore, all modules which have access to these keeper methods, also have access to
 `SetBalance` negating any other attempt at OCAPs and breaking even basic object-oriented encapsulation.
@@ -111,7 +111,7 @@ key" corresponding to a module account, where authentication is provided through
 described in more detail below.

 Blockchain users (external clients) use their account's private key to sign transactions containing `Msg`s where they are listed as signers (each
-message specifies required signers with `Msg.GetSigner`). The authentication checks is performed by `AnteHandler`.
+message specifies required signers with `Msg.GetSigner`). The authentication check is performed by `AnteHandler`.

 Here, we extend this process, by allowing modules to be identified in `Msg.GetSigners`. When a module wants to trigger the execution a `Msg` in another module,
 its `ModuleKey` acts as the sender (through the `ClientConn` interface we describe below) and is set as a sole "signer". It's worth to note
@@ -144,7 +144,7 @@ func NewFooMsgServer(moduleKey RootModuleKey, ...) FooMsgServer {
 }

 func (foo *FooMsgServer) Bar(ctx context.Context, req *MsgBarRequest) (*MsgBarResponse, error) {
-	balance, err := foo.bankQuery.Balance(&bank.QueryBalanceRequest{Address: fooMsgServer.moduleKey.Address(), Denom: "foo"})
+	balance, err := foo.bankQuery.Balance(&bank.QueryBalanceRequest{Address: foo.moduleKey.Address(), Denom: "foo"})

	...
@@ -270,7 +270,7 @@ introduced in the future. The `ModuleManager` will handle creation of module acc
 Because modules do not get direct access to each other anymore, modules may have unfulfilled dependencies. To make sure
 that module dependencies are resolved at startup, the `Configurator.RequireServer` method should be added. The `ModuleManager`
 will make sure that all dependencies declared with `RequireServer` can be resolved before the app starts. An example
-module `foo` could declare it's dependency on `x/bank` like this:
+module `foo` could declare its dependency on `x/bank` like this:

 ```go
 package foo
@@ -331,7 +331,7 @@ ADR.

 By default, the inter-module router requires that messages are sent by the first signer returned by `GetSigners`. The
 inter-module router should also accept authorization middleware such as that provided by [ADR 030](https://github.com/cosmos/cosmos-sdk/blob/main/docs/architecture/adr-030-authz-module.md).
-This middleware will allow accounts to otherwise specific module accounts to perform actions on their behalf.
+This middleware will allow accounts to authorize specific module accounts to perform actions on their behalf.
 Authorization middleware should take into account the need to grant certain modules effectively "admin" privileges to
 other modules. This will be addressed in separate ADRs or updates to this ADR.
@@ -14,9 +14,9 @@ Account rekeying is a process that allows an account to replace its authenticati

 ## Context

-Currently, in the Cosmos SDK, the address of an auth `BaseAccount` is based on the hash of the public key. Once an account is created, the public key for the account is set in stone, and cannot be changed. This can be a problem for users, as key rotation is a useful security practice, but is not possible currently. Furthermore, as multisigs are a type of pubkey, once a multisig for an account is set, it can not be updated. This is problematic, as multisigs are often used by organizations or companies, who may need to change their set of multisig signers for internal reasons.
+Currently, in the Cosmos SDK, the address of an auth `BaseAccount` is based on the hash of the public key. Once an account is created, the public key for the account is set in stone, and cannot be changed. This can be a problem for users, as key rotation is a useful security practice, but is not possible currently. Furthermore, as multisigs are a type of pubkey, once a multisig for an account is set, it cannot be updated. This is problematic, as multisigs are often used by organizations or companies, who may need to change their set of multisig signers for internal reasons.

-Transferring all the assets of an account to a new account with the updated pubkey is not sufficient, because some "engagements" of an account are not easily transferable. For example, in staking, to transfer bonded Atoms, an account would have to unbond all delegations and wait the three week unbonding period. Even more significantly, for validator operators, ownership over a validator is not transferable at all, meaning that the operator key for a validator can never be updated, leading to poor operational security for validators.
+Transferring all the assets of an account to a new account with the updated pubkey is not sufficient, because some "engagements" of an account are not easily transferable. For example, in staking, to transfer bonded Atoms, an account would have to unbond all delegations and wait the three-week unbonding period. Even more significantly, for validator operators, ownership over a validator is not transferable at all, meaning that the operator key for a validator can never be updated, leading to poor operational security for validators.

 ## Decision
@@ -41,9 +41,9 @@ message MsgChangePubKeyResponse {}

 The MsgChangePubKey transaction needs to be signed by the existing pubkey in state.

-Once, approved, the handler for this message type, which takes in the AccountKeeper, will update the in-state pubkey for the account and replace it with the pubkey from the Msg.
+Once approved, the handler for this message type, which takes in the AccountKeeper, will update the in-state pubkey for the account and replace it with the pubkey from the Msg.

-An account that has had its pubkey changed cannot be automatically pruned from state. This is because if pruned, the original pubkey of the account would be needed to recreate the same address, but the owner of the address may not have the original pubkey anymore. Currently, we do not automatically prune any accounts anyways, but we would like to keep this option open the road (this is the purpose of account numbers). To resolve this, we charge an additional gas fee for this operation to compensate for this externality (this bound gas amount is configured as parameter `PubKeyChangeCost`). The bonus gas is charged inside the handler, using the `ConsumeGas` function. Furthermore, in the future, we can allow accounts that have rekeyed manually prune themselves using a new Msg type such as `MsgDeleteAccount`. Manually pruning accounts can give a gas refund as an incentive for performing the action.
+An account that has had its pubkey changed cannot be automatically pruned from state. This is because if pruned, the original pubkey of the account would be needed to recreate the same address, but the owner of the address may not have the original pubkey anymore. Currently, we do not automatically prune any accounts anyways, but we would like to keep this option open down the road (this is the purpose of account numbers). To resolve this, we charge an additional gas fee for this operation to compensate for this externality (this bound gas amount is configured as a parameter `PubKeyChangeCost`). The bonus gas is charged inside the handler, using the `ConsumeGas` function. Furthermore, in the future, we can allow accounts that have rekeyed manually prune themselves using a new Msg type such as `MsgDeleteAccount`. Manually pruning accounts can give a gas refund as an incentive for performing the action.

 ```go
 amount := ak.GetParams(ctx).PubKeyChangeCost
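The extra `ConsumeGas` charge described in the hunk above can be sketched with a toy gas meter in stdlib Go. The `gasMeter` type and the `pubKeyChangeCost` value are hypothetical stand-ins; in the SDK the cost is a chain parameter and the meter lives on the context:

```go
package main

import (
	"errors"
	"fmt"
)

// gasMeter is a toy model of a gas meter: ConsumeGas charges a fixed
// amount and fails once the limit would be exceeded.
type gasMeter struct {
	limit    uint64
	consumed uint64
}

// ConsumeGas charges the given amount, returning an error on overflow of
// the limit instead of panicking as the real meter does.
func (g *gasMeter) ConsumeGas(amount uint64, desc string) error {
	if g.consumed+amount > g.limit {
		return errors.New("out of gas: " + desc)
	}
	g.consumed += amount
	return nil
}

func main() {
	const pubKeyChangeCost = 5000 // hypothetical parameter value
	gm := &gasMeter{limit: 200000}
	// Charge the extra rekeying fee on top of normal tx costs.
	if err := gm.ConsumeGas(pubKeyChangeCost, "pubkey change"); err != nil {
		panic(err)
	}
	fmt.Println(gm.consumed)
}
```

The point of the extra charge is to price in the externality that rekeyed accounts can never be pruned; the sketch only shows where that charge sits in the handler flow.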
@@ -61,9 +61,9 @@ Every time a key for an address is changed, we will store a log of this change i

 ### Negative

-Breaks the current assumed relationship between address and pubkeys as H(pubkey) = address. This has a couple of consequences.
+Breaks the current assumed relationship between address and pubkey as H(pubkey) = address. This has a couple of consequences.

-* This makes wallets that support this feature more complicated. For example, if an address on chain was updated, the corresponding key in the CLI wallet also needs to be updated.
+* This makes wallets that support this feature more complicated. For example, if an address on-chain was updated, the corresponding key in the CLI wallet also needs to be updated.
 * Cannot automatically prune accounts with 0 balance that have had their pubkey changed.

 ### Neutral
@@ -9,7 +9,7 @@

 ## Changelog

-* 2021-05-12: the external library [cosmos-rosetta-gateway](https://github.com/tendermint/cosmos-rosetta-gateway) has been moved within the Cosmos SDK.
+* 2021-05-12: the external library [cosmos-rosetta-gateway](https://github.com/tendermint/cosmos-rosetta-gateway) has been moved within the Cosmos SDK.

 ## Context
@@ -32,11 +32,11 @@ The driving principles of the proposed design are:

 1. **Extensibility:** it must be as riskless and painless as possible for application developers to set-up network
 configurations to expose Rosetta API-compliant services.
-2. **Long term support:** This proposal aims to provide support for all the supported Cosmos SDK release series.
+2. **Long term support:** This proposal aims to provide support for all the Cosmos SDK release series.
 3. **Cost-efficiency:** Backporting changes to Rosetta API specifications from `master` to the various stable
 branches of Cosmos SDK is a cost that needs to be reduced.

-We will achieve these delivering on these principles by the following:
+We will achieve these by delivering on these principles by the following:

 1. There will be a package `rosetta/lib`
 for the implementation of the core Rosetta API features, particularly:
@@ -53,7 +53,7 @@ We will achieve these delivering on these principles by the following:

 ### The External Repo

-As section will describe the proposed external library, including the service implementation, plus the defined types and interfaces.
+This section will describe the proposed external library, including the service implementation, plus the defined types and interfaces.

 #### Server
@@ -120,8 +120,8 @@ type Client interface {
	// BlockTransactionsByHash gets the block, parent block and transactions
	// given the block hash.
	BlockTransactionsByHash(ctx context.Context, hash string) (BlockTransactionsResponse, error)
-	// BlockTransactionsByHash gets the block, parent block and transactions
-	// given the block hash.
+	// BlockTransactionsByHeight gets the block, parent block and transactions
+	// given the block height.
	BlockTransactionsByHeight(ctx context.Context, height *int64) (BlockTransactionsResponse, error)
	// GetTx gets a transaction given its hash
	GetTx(ctx context.Context, hash string) (*types.Transaction, error)
@@ -189,7 +189,7 @@ As stated at the start, application developers will have two methods for invocat

 #### Shared Process (Only Stargate)

-Rosetta API service could run within the same execution process as the application. This would be enabled via app.toml settings, and if gRPC is not enabled the rosetta instance would be spinned in offline mode (tx building capabilities only).
+Rosetta API service could run within the same execution process as the application. This would be enabled via app.toml settings, and if gRPC is not enabled the rosetta instance would be spun in offline mode (tx building capabilities only).

 #### Separate API service
@@ -17,27 +17,27 @@ Draft

 ## Abstract

-Currently, in the Cosmos SDK, there is no convention to sign arbitrary message like in Ethereum. We propose with this specification, for Cosmos SDK ecosystem, a way to sign and validate off-chain arbitrary messages.
+Currently, in the Cosmos SDK, there is no convention to sign arbitrary messages like in Ethereum. We propose with this specification, for Cosmos SDK ecosystem, a way to sign and validate off-chain arbitrary messages.

-This specification serves the purpose of covering every use case, this means that cosmos-sdk applications developers decide how to serialize and represent `Data` to users.
+This specification serves the purpose of covering every use case; this means that Cosmos SDK application developers decide how to serialize and represent `Data` to users.

 ## Context

-Having the ability to sign messages off-chain has proven to be a fundamental aspect of nearly any blockchain. The notion of signing messages off-chain has many added benefits such as saving on computational costs and reducing transaction throughput and overhead. Within the context of the Cosmos, some of the major applications of signing such data includes, but is not limited to, providing a cryptographic secure and verifiable means of proving validator identity and possibly associating it with some other framework or organization. In addition, having the ability to sign Cosmos messages with a Ledger or similar HSM device.
+Having the ability to sign messages off-chain has proven to be a fundamental aspect of nearly any blockchain. The notion of signing messages off-chain has many added benefits such as saving on computational costs and reducing transaction throughput and overhead. Within the context of the Cosmos, some of the major applications of signing such data include, but is not limited to, providing a cryptographic secure and verifiable means of proving validator identity and possibly associating it with some other framework or organization. In addition, having the ability to sign Cosmos messages with a Ledger or similar HSM device.

-Further context and use cases can be found in the references links.
+Further context and use cases can be found in the reference links.

 ## Decision

 The aim is being able to sign arbitrary messages, even using Ledger or similar HSM devices.

-As a result signed messages should look roughly like Cosmos SDK messages but **must not** be a valid on-chain transaction. `chain-id`, `account_number` and `sequence` can all be assigned invalid values.
+As a result, signed messages should look roughly like Cosmos SDK messages but **must not** be a valid on-chain transaction. `chain-id`, `account_number` and `sequence` can all be assigned invalid values.

 Cosmos SDK 0.40 also introduces a concept of “auth_info” this can specify SIGN_MODES.

 A spec should include an `auth_info` that supports SIGN_MODE_DIRECT and SIGN_MODE_LEGACY_AMINO.

-Create the `offchain` proto definitions, we extend the auth module with `offchain` package to offer functionalities to verify and sign offline messages.
+To create the `offchain` proto definitions, we extend the auth module with `offchain` package to offer functionalities to verify and sign offline messages.

 An offchain transaction follows these rules:
@@ -51,7 +51,7 @@ Verification of an offchain transaction follows the same rules as an onchain one

 The first message added to the `offchain` package is `MsgSignData`.

-`MsgSignData` allows developers to sign arbitrary bytes validatable offchain only. Where `Signer` is the account address of the signer. `Data` is arbitrary bytes which can represent `text`, `files`, `object`s. It's applications developers decision how `Data` should be deserialized, serialized and the object it can represent in their context.
+`MsgSignData` allows developers to sign arbitrary bytes validatable offchain only. `Signer` is the account address of the signer. `Data` is arbitrary bytes which can represent `text`, `files`, `object`s. It's applications developers decision how `Data` should be deserialized, serialized and the object it can represent in their context.

 It's applications developers decision how `Data` should be treated, by treated we mean the serialization and deserialization process and the Object `Data` should represent.
@@ -116,13 +116,13 @@ Backwards compatibility is maintained as this is a new message spec definition.

 ### Negative

-* Current proposal requires a fixed relationship between an account address and a public key.
+* The current proposal requires a fixed relationship between an account address and a public key.
 * Doesn't work with multisig accounts.

 ## Further discussion

-* Regarding security in `MsgSignData`, the developer using `MsgSignData` is in charge of making the content lying in `Data` non-replayable when, and if, needed.
-* the offchain package will be further extended with extra messages that target specific use cases such as, but not limited to, authentication in applications, payment channels, L2 solutions in general.
+* Regarding security in `MsgSignData`, the developer using `MsgSignData` is in charge of making the content contained in `Data` non-replayable when, and if, needed.
+* The offchain package will be further extended with extra messages that target specific use cases such as, but not limited to, authentication in applications, payment channels, L2 solutions in general.

 ## References
@@ -14,9 +14,9 @@ This ADR defines a modification to the governance module that would allow a stak

 ## Context

-Currently, an address can cast a vote with only one options (Yes/No/Abstain/NoWithVeto) and use their full voting power behind that choice.
+Currently, an address can cast a vote with only one option (Yes/No/Abstain/NoWithVeto) and use their full voting power behind that choice.

-However, often times the entity owning that address might not be a single individual. For example, a company might have different stakeholders who want to vote differently, and so it makes sense to allow them to split their voting power. Another example use case is exchanges. Many centralized exchanges often stake a portion of their users' tokens in their custody. Currently, it is not possible for them to do "passthrough voting" and giving their users voting rights over their tokens. However, with this system, exchanges can poll their users for voting preferences, and then vote on-chain proportionally to the results of the poll.
+However, oftentimes the entity owning that address might not be a single individual. For example, a company might have different stakeholders who want to vote differently, and so it makes sense to allow them to split their voting power. Another example use case is exchanges. Many centralized exchanges often stake a portion of their users' tokens in their custody. Currently, it is not possible for them to do "passthrough voting" and giving their users voting rights over their tokens. However, with this system, exchanges can poll their users for voting preferences, and then vote on-chain proportionally to the results of the poll.

 ## Decision
@@ -53,7 +53,7 @@ type MsgVoteWeighted struct {

 The `ValidateBasic` of a `MsgVoteWeighted` struct would require that

-1. The sum of all the Rates is equal to 1.0
+1. The sum of all the rates is equal to 1.0
 2. No Option is repeated

 The governance tally function will iterate over all the options in a vote and add to the tally the result of the voter's voting power * the rate for that option.
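The two `ValidateBasic` rules in the hunk above can be sketched in stdlib Go. The `weightedOption` type and float64 rates are simplifications for illustration; the SDK uses a fixed-point `Dec` type for weights:

```go
package main

import (
	"errors"
	"fmt"
)

// weightedOption pairs a vote option with the fraction of voting power
// behind it.
type weightedOption struct {
	Option string
	Rate   float64
}

// validateBasic mirrors the two rules: rates must sum to 1.0 and no
// option may be repeated.
func validateBasic(opts []weightedOption) error {
	seen := map[string]bool{}
	sum := 0.0
	for _, o := range opts {
		if seen[o.Option] {
			return errors.New("duplicate option: " + o.Option)
		}
		seen[o.Option] = true
		sum += o.Rate
	}
	if sum < 0.999999 || sum > 1.000001 { // tolerance for float rounding
		return errors.New("rates must sum to 1.0")
	}
	return nil
}

func main() {
	ok := validateBasic([]weightedOption{{"Yes", 0.7}, {"No", 0.3}})
	bad := validateBasic([]weightedOption{{"Yes", 0.5}, {"Yes", 0.5}})
	fmt.Println(ok == nil, bad != nil)
}
```

Using an exact decimal type instead of float64 removes the need for the tolerance check, which is why the SDK does so.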
@@ -7,7 +7,7 @@
 * 10/14/2022:
     * Add `ListenCommit`, flatten the state writes in a block to a single batch.
     * Remove listeners from cache stores, should only listen to `rootmulti.Store`.
-    * Remove `HaltAppOnDeliveryError()`, the errors are propagated by default, the implementations should return nil if don't want to propagate errors.
+    * Remove `HaltAppOnDeliveryError()`, the errors are propagated by default, the implementations should return nil if they don't want to propagate errors.
 * 26/05/2023: Update with ABCI 2.0

 ## Status
@@ -40,7 +40,7 @@ type MemoryListener struct {
	stateCache []StoreKVPair
 }

-// NewMemoryListener creates a listener that accumulate the state writes in memory.
+// NewMemoryListener creates a listener that accumulates the state writes in memory.
 func NewMemoryListener() *MemoryListener {
	return &MemoryListener{}
 }
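The accumulate-then-flush behavior of the listener in the hunk above can be sketched in self-contained Go. `storeKVPair`, `OnWrite`, and `PopStateCache` are simplified stand-ins for the SDK's `StoreKVPair` and listener methods:

```go
package main

import "fmt"

// storeKVPair is a simplified stand-in for the StoreKVPair emitted on
// each state write: which store was touched, the key, and the new value.
type storeKVPair struct {
	StoreKey string
	Key      []byte
	Value    []byte
}

// memoryListener accumulates state writes in memory until they are popped
// as one batch at commit time, mirroring the flattened-batch design.
type memoryListener struct {
	stateCache []storeKVPair
}

// OnWrite records a single state write.
func (l *memoryListener) OnWrite(storeKey string, key, value []byte) {
	l.stateCache = append(l.stateCache, storeKVPair{storeKey, key, value})
}

// PopStateCache returns the accumulated writes and resets the cache,
// so each block's changes form exactly one batch.
func (l *memoryListener) PopStateCache() []storeKVPair {
	out := l.stateCache
	l.stateCache = nil
	return out
}

func main() {
	l := &memoryListener{}
	l.OnWrite("bank", []byte("balance/alice"), []byte("100"))
	l.OnWrite("bank", []byte("balance/bob"), []byte("50"))
	batch := l.PopStateCache()
	fmt.Println(len(batch), len(l.stateCache)) // prints "2 0"
}
```

Draining the cache on pop is what lets `ListenCommit` hand the streaming service one change set per block.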
@@ -114,7 +114,7 @@ func (s *Store) Delete(key []byte) {

 ### MultiStore interface updates

-We will update the `CommitMultiStore` interface to allow us to wrap a `Memorylistener` to a specific `KVStore`.
+We will update the `CommitMultiStore` interface to allow us to wrap a `MemoryListener` to a specific `KVStore`.
 Note that the `MemoryListener` will be attached internally by the concrete `rootmulti` implementation.

 ```go
@@ -225,7 +225,7 @@ so that the service can group the state changes with the ABCI requests.
 type ABCIListener interface {
	// ListenFinalizeBlock updates the streaming service with the latest FinalizeBlock messages
	ListenFinalizeBlock(ctx context.Context, req abci.FinalizeBlockRequest, res abci.FinalizeBlockResponse) error
-	// ListenCommit updates the steaming service with the latest Commit messages and state changes
+	// ListenCommit updates the streaming service with the latest Commit messages and state changes
	ListenCommit(ctx context.Context, res abci.CommitResponse, changeSet []*StoreKVPair) error
 }
 ```
@@ -720,5 +720,5 @@ These changes will provide a means of subscribing to KVStore state changes in re

 ### Neutral

-* Introduces additional- but optional- complexity to configuring and running a cosmos application
+* Introduces additional—but optional—complexity to configuring and running a cosmos application
 * If an application developer opts to use these features to expose data, they need to be aware of the ramifications/risks of that data exposure as it pertains to the specifics of their application
@@ -19,9 +19,9 @@ This ADR updates the proof of stake module to buffer the staking weight updates

 ## Context

-The current proof of stake module takes the design decision to apply staking weight changes to the consensus engine immediately. This means that delegations and unbonds get applied immediately to the validator set. This decision was primarily done as it was implementationally simplest, and because we at the time believed that this would lead to better UX for clients.
+The current proof of stake module takes the design decision to apply staking weight changes to the consensus engine immediately. This means that delegations and unbonds get applied immediately to the validator set. This decision was primarily done as it was the simplest from an implementation perspective, and because we at the time believed that this would lead to better UX for clients.

-An alternative design choice is to allow buffering staking updates (delegations, unbonds, validators joining) for a number of blocks. This 'epoch'd proof of stake consensus provides the guarantee that the consensus weights for validators will not change mid-epoch, except in the event of a slash condition.
+An alternative design choice is to allow buffering staking updates (delegations, unbonds, validators joining) for a number of blocks. This epoched proof of stake consensus provides the guarantee that the consensus weights for validators will not change mid-epoch, except in the event of a slash condition.

 Additionally, the UX hurdle may not be as significant as was previously thought. This is because it is possible to provide users immediate acknowledgement that their bond was recorded and will be executed.
@ -29,9 +29,9 @@ Furthermore, it has become clearer over time that immediate execution of staking
|
||||
|
||||
* Threshold based cryptography. One of the main limitations is that because the validator set can change so regularly, it makes the running of multiparty computation by a fixed validator set difficult. Many threshold-based cryptographic features for blockchains such as randomness beacons and threshold decryption require a computationally-expensive DKG process (will take much longer than 1 block to create). To productively use these, we need to guarantee that the result of the DKG will be used for a reasonably long time. It wouldn't be feasible to rerun the DKG every block. By epoching staking, it guarantees we'll only need to run a new DKG once every epoch.
|
||||
|
||||
* Light client efficiency. This would lessen the overhead for IBC when there is high churn in the validator set. In the Tendermint light client bisection algorithm, the number of headers you need to verify is related to bounding the difference in validator sets between a trusted header and the latest header. If the difference is too great, you verify more header in between the two. By limiting the frequency of validator set changes, we can reduce the worst case size of IBC lite client proofs, which occurs when a validator set has high churn.
|
||||
* Light client efficiency. This would lessen the overhead for IBC when there is high churn in the validator set. In the Tendermint light client bisection algorithm, the number of headers you need to verify is related to bounding the difference in validator sets between a trusted header and the latest header. If the difference is too great, you verify more headers in between the two. By limiting the frequency of validator set changes, we can reduce the worst case size of IBC lite client proofs, which occurs when a validator set has high churn.
|
||||
|
||||
* Fairness of deterministic leader election. Currently we have no ways of reasoning of fairness of deterministic leader election in the presence of staking changes without epochs (tendermint/spec#217). Breaking fairness of leader election is profitable for validators, as they earn additional rewards from being the proposer. Adding epochs at least makes it easier for our deterministic leader election to match something we can prove secure. (Albeit, we still haven’t proven if our current algorithm is fair with > 2 validators in the presence of stake changes)
|
||||
* Fairness of deterministic leader election. Currently we have no ways of reasoning about fairness of deterministic leader election in the presence of staking changes without epochs (tendermint/spec#217). Breaking fairness of leader election is profitable for validators, as they earn additional rewards from being the proposer. Adding epochs at least makes it easier for our deterministic leader election to match something we can prove secure. (Albeit, we still haven’t proven if our current algorithm is fair with > 2 validators in the presence of stake changes)
|
||||
|
||||
* Staking derivative design. Currently, reward distribution is done lazily using the F1 fee distribution. While saving computational complexity, lazy accounting requires a more stateful staking implementation. Right now, each delegation entry has to track the time of last withdrawal. Handling this can be a challenge for some staking derivatives designs that seek to provide fungibility for all tokens staked to a single validator. Force-withdrawing rewards to users can help solve this, however it is infeasible to force-withdraw rewards to users on a per block basis. With epochs, a chain could more easily alter the design to have rewards be forcefully withdrawn (iterating over delegator accounts only once per-epoch), and can thus remove delegation timing from state. This may be useful for certain staking derivative designs.
@ -67,7 +67,7 @@ Even though all staking updates are applied at epoch boundaries, rewards can sti
### Parameterizing the epoch length
When choosing the epoch length, there is a trade-off queued state/computation buildup, and countering the previously discussed limitations of immediate execution if they apply to a given chain.
When choosing the epoch length, there is a trade-off between queued state/computation buildup, and countering the previously discussed limitations of immediate execution if they apply to a given chain.
Until an ABCI mechanism for variable block times is introduced, it is ill-advised to use long epoch lengths due to the computation buildup. This is because when a block's execution time is greater than the expected block time from Tendermint, rounds may increment.
@ -79,16 +79,16 @@ First we create a pool for storing tokens that are being bonded, but should be a
### Staking messages
* **MsgCreateValidator**: Move user's self-bond to `EpochDelegationPool` immediately. Queue a message for the epoch boundary to handle the self-bond, taking the funds from the `EpochDelegationPool`. If Epoch execution fail, return back funds from `EpochDelegationPool` to user's account.
* **MsgCreateValidator**: Move user's self-bond to `EpochDelegationPool` immediately. Queue a message for the epoch boundary to handle the self-bond, taking the funds from the `EpochDelegationPool`. If Epoch execution fails, return back funds from `EpochDelegationPool` to user's account.
* **MsgEditValidator**: Validate message and if valid queue the message for execution at the end of the Epoch.
* **MsgDelegate**: Move user's funds to `EpochDelegationPool` immediately. Queue a message for the epoch boundary to handle the delegation, taking the funds from the `EpochDelegationPool`. If Epoch execution fail, return back funds from `EpochDelegationPool` to user's account.
* **MsgDelegate**: Move user's funds to `EpochDelegationPool` immediately. Queue a message for the epoch boundary to handle the delegation, taking the funds from the `EpochDelegationPool`. If Epoch execution fails, return back funds from `EpochDelegationPool` to user's account.
* **MsgBeginRedelegate**: Validate message and if valid queue the message for execution at the end of the Epoch.
* **MsgUndelegate**: Validate message and if valid queue the message for execution at the end of the Epoch.
### Slashing messages
* **MsgUnjail**: Validate message and if valid queue the message for execution at the end of the Epoch.
* **Slash Event**: Whenever a slash event is created, it gets queued in the slashing module to apply at the end of the epoch. The queues should be setup such that this slash applies immediately.
* **Slash Event**: Whenever a slash event is created, it gets queued in the slashing module to apply at the end of the epoch. The queues should be set up such that this slash applies immediately.
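One way to model "queued, but applies immediately" for slash events is to record the event in the epoch queue for boundary-time bookkeeping while deducting effective power right away. The shapes below are illustrative assumptions, not the slashing module's real API.

```go
package main

import "fmt"

// SlashEvent describes a pending slash against a validator.
type SlashEvent struct {
	Validator string
	Fraction  float64 // fraction of power to slash
}

// SlashingKeeper tracks validator power and the epoch's slash queue.
type SlashingKeeper struct {
	Power map[string]float64
	Queue []SlashEvent // processed (e.g. token burns) at the epoch boundary
}

// Slash queues the event for epoch-boundary processing, but the effective
// power reduction must not wait for the boundary, so it is applied now.
func (k *SlashingKeeper) Slash(e SlashEvent) {
	k.Queue = append(k.Queue, e)
	k.Power[e.Validator] *= 1 - e.Fraction
}

func main() {
	k := &SlashingKeeper{Power: map[string]float64{"valA": 100}}
	k.Slash(SlashEvent{"valA", 0.05})
	fmt.Println(k.Power["valA"])
}
```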
### Evidence Messages
@ -100,14 +100,14 @@ Then we add methods to the end blockers, to ensure that at the epoch boundary th
When querying the staking activity of a given address, the status should return not only the amount of tokens staked, but also whether there are any queued stake events for that address. This will require more work in the querying logic to trace the queued upcoming staking events.
As an initial implementation, this can be implemented as a linear search over all queued staking events. However, for chains that need long epochs, they should eventually build additional support for nodes that support querying to be able to produce results in constant time. (This is do-able by maintaining an auxiliary hashmap for indexing upcoming staking events by address)
As an initial implementation, this can be implemented as a linear search over all queued staking events. However, for chains that need long epochs, they should eventually build additional support for nodes that support querying to be able to produce results in constant time. (This is doable by maintaining an auxiliary hashmap for indexing upcoming staking events by address)
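The auxiliary index mentioned in the paragraph above can be sketched as an ordered queue plus a per-address map; the event shape and names are illustrative assumptions.

```go
package main

import "fmt"

// StakingEvent is a queued action awaiting the epoch boundary.
type StakingEvent struct {
	Delegator string
	Action    string
	Amount    int64
}

// EventQueue keeps the ordered epoch queue plus an auxiliary index so that
// "pending events for address X" is answered without a linear scan.
type EventQueue struct {
	ordered []StakingEvent            // processed in order at the epoch boundary
	byAddr  map[string][]StakingEvent // auxiliary index for queries
}

func NewEventQueue() *EventQueue {
	return &EventQueue{byAddr: make(map[string][]StakingEvent)}
}

// Push appends to the epoch queue and updates the per-address index.
func (q *EventQueue) Push(e StakingEvent) {
	q.ordered = append(q.ordered, e)
	q.byAddr[e.Delegator] = append(q.byAddr[e.Delegator], e)
}

// PendingFor returns all queued events for one delegator via a single
// map lookup, independent of the total queue length.
func (q *EventQueue) PendingFor(delegator string) []StakingEvent {
	return q.byAddr[delegator]
}

func main() {
	q := NewEventQueue()
	q.Push(StakingEvent{"alice", "delegate", 50})
	q.Push(StakingEvent{"bob", "undelegate", 10})
	q.Push(StakingEvent{"alice", "redelegate", 5})
	fmt.Println(len(q.PendingFor("alice"))) // 2
}
```

The cost is that every mutation of the queue must keep the index in sync, which is why the linear-scan version is the simpler initial implementation.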
**Step-3**: Adjust gas
Currently gas represents the cost of executing a transaction when it is executed immediately. (This merges together the costs of p2p overhead, state access overhead, and computational overhead.) However, now a transaction can cause computation in a future block, namely at the epoch boundary.
To handle this, we should initially include parameters for estimating the amount of future computation (denominated in gas), and add that as a flat charge needed for the message.
We leave it as out of scope for how to weight future computation versus current computation in gas pricing, and have it set such that the are weighted equally for now.
We leave it out of scope for how to weight future computation versus current computation in gas pricing, and have it set such that they are weighted equally for now.
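Charging for the future epoch-boundary work as described could be as simple as the sketch below. The parameter names and the weight constant are assumptions; the weight of 1 mirrors "weighted equally for now".

```go
package main

import "fmt"

// futureGasWeight prices future (epoch-boundary) computation relative to
// immediate computation; per the text, both are weighted equally for now.
const futureGasWeight = 1

// totalGasCharge adds a flat charge for the estimated future computation a
// message will trigger at the epoch boundary.
func totalGasCharge(immediateGas, estimatedFutureGas uint64) uint64 {
	return immediateGas + estimatedFutureGas*futureGasWeight
}

func main() {
	fmt.Println(totalGasCharge(50000, 20000)) // 70000
}
```

Making the weight a chain parameter would let governance reprice future computation later without changing message handlers.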
## Consequences