Fixes: #10814
This PR updates the following RPC methods according to the EIP-1898
spec.
The following RPC methods are affected:
- eth_getBalance
- eth_getStorageAt
- eth_getTransactionCount
- eth_getCode
- eth_call
Note that eth_getBlockByNumber is not included in this list in the
spec, although it seems it should also be affected?
Currently these methods all accept a blkParam string, which can be
one of "latest", "earliest", "pending", or a block number (decimal
or hex). The spec additionally lets callers pass a JSON object which
can include the following fields:
- blockNumber (EthUint64): a block number (decimal or hex), similar
  to the original use of the blkParam string
- blockHash (EthHash): the block hash
- requireCanonical (bool): if true, we should make sure the block is
  in the canonical chain
Since blkParam needs to support both a number/string and a JSON
object, we need a new struct with pointer fields so we can check
which fields were actually supplied. This is done in the
EthBlockParamByNumberOrHash struct, which first tries to unmarshal
the parameter as a JSON object (per EIP-1898) and then falls back to
unmarshaling it as a string/number.
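Below is a minimal sketch of that fallback logic (not the actual lotus implementation; EthUint64 and EthHash are simplified stand-ins, and the exact field handling, e.g. hex block numbers inside the object, is elided):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Simplified stand-ins for the real EthUint64 / EthHash types.
type EthUint64 uint64
type EthHash string

// EthBlockParamByNumberOrHash models the EIP-1898 block parameter.
// Pointer fields let us distinguish "not supplied" from a zero value.
type EthBlockParamByNumberOrHash struct {
	BlockNumber      *EthUint64 `json:"blockNumber,omitempty"`
	BlockHash        *EthHash   `json:"blockHash,omitempty"`
	RequireCanonical bool       `json:"requireCanonical,omitempty"`

	// Fallback for the legacy string form: "latest", "earliest",
	// "pending", or a decimal/hex block number.
	PredefinedBlock *string `json:"-"`
}

func (b *EthBlockParamByNumberOrHash) UnmarshalJSON(data []byte) error {
	// First try the EIP-1898 object form.
	type plain EthBlockParamByNumberOrHash // alias without methods, avoids recursion
	var obj plain
	if err := json.Unmarshal(data, &obj); err == nil && (obj.BlockNumber != nil || obj.BlockHash != nil) {
		*b = EthBlockParamByNumberOrHash(obj)
		return nil
	}

	// Fall back to the plain string/number form.
	var s string
	if err := json.Unmarshal(data, &s); err != nil {
		return fmt.Errorf("invalid block parameter: %s", string(data))
	}
	b.PredefinedBlock = &s
	return nil
}

func main() {
	inputs := []string{
		`"latest"`,
		`"0x1b4"`,
		`{"blockHash":"0xabc123","requireCanonical":true}`,
	}
	for _, in := range inputs {
		var p EthBlockParamByNumberOrHash
		if err := json.Unmarshal([]byte(in), &p); err != nil {
			fmt.Println("error:", err)
			continue
		}
		fmt.Printf("%-48s -> %+v\n", in, p)
	}
}
```

With this shape, the same parameter position accepts "latest", "0x1b4", or an EIP-1898 object, and callers can tell which form was supplied by checking which pointer fields are non-nil.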
We're leaving the default replace fee ratio at 1.25x for backwards compatibility, for now.
Also:
1. Actually use the configured replace fee ratio.
2. Store said ratios as percentages instead of floats. 1.25, or 1 + 1/(2^2),
can be represented exactly as a binary float; 1.1, or 1 + 1/(2 * 5), cannot
(demonstrated below).
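A quick demonstration of the representability point, using only the standard library (the percentage arithmetic is illustrative, not the lotus code):

```go
package main

import (
	"fmt"
	"math/big"
)

func main() {
	// 1.25 = 1 + 1/4: the denominator is a power of two, so the value
	// has an exact binary floating-point representation.
	fmt.Println(big.NewFloat(1.25).Text('f', 30)) // 1.250000000000000000000000000000

	// 1.1 = 1 + 1/10: the denominator has a factor of 5, so float64 can
	// only store an approximation.
	fmt.Println(big.NewFloat(1.1).Text('f', 30)) // ≈ 1.10000000000000008881784197...

	// Storing the ratio as an integer percentage (e.g. 110 for 1.1x)
	// avoids the problem entirely.
	ratioPercent := big.NewInt(110)
	fee := big.NewInt(12345)
	replaceFee := new(big.Int).Div(new(big.Int).Mul(fee, ratioPercent), big.NewInt(100))
	fmt.Println(replaceFee) // 13579
}
```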
Fixes: #10415
Tracing is now "FVM" native. Changes include:
1. Don't treat "trace" messages like off-chain messages. E.g., don't
include CIDs, versions, etc.
2. Include IPLD codecs where applicable.
3. Remove fields that aren't filled by the FVM (timing, some errors,
code locations, etc.).
Before this change, workers could only be allocated one GPU task at a
time, regardless of how much of the GPU's resources that task uses or
how many GPUs are in the system.
This makes GPUUtilization a float, which can express that a task needs
a fraction of a GPU or multiple GPUs. GPUs are now accounted for like
RAM and CPUs, so workers with more GPUs can be allocated more tasks.
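To illustrate the accounting idea, here is a simplified, hypothetical sketch (types and logic are illustrative only, not the lotus scheduler's actual code): GPU capacity is summed like memory, and fractional or multi-GPU requirements are checked against the worker's total GPU count.

```go
package main

import "fmt"

// Hypothetical, simplified resource types for illustration only.
type Resources struct {
	MaxMemory      uint64
	GPUUtilization float64 // fraction of one GPU, or >1 for multiple GPUs
}

type WorkerInfo struct {
	MemPhysical uint64
	GPUs        int
}

type ActiveResources struct {
	memUsed uint64
	gpuUsed float64
}

// canHandle reports whether the worker still has enough free memory and
// GPU capacity for a task that needs res.
func (a *ActiveResources) canHandle(res Resources, w WorkerInfo) bool {
	if a.memUsed+res.MaxMemory > w.MemPhysical {
		return false
	}
	// GPUs are accounted like RAM/CPU: fractional utilization is summed
	// against the total number of GPUs on the worker.
	return a.gpuUsed+res.GPUUtilization <= float64(w.GPUs)
}

func (a *ActiveResources) add(res Resources) {
	a.memUsed += res.MaxMemory
	a.gpuUsed += res.GPUUtilization
}

func main() {
	worker := WorkerInfo{MemPhysical: 256 << 30, GPUs: 2}
	active := &ActiveResources{}

	pc2 := Resources{MaxMemory: 30 << 30, GPUUtilization: 0.5} // half a GPU per PC2 task
	c2 := Resources{MaxMemory: 16 << 30, GPUUtilization: 2.0}  // C2 can use both GPUs

	for i := 0; i < 5; i++ {
		if active.canHandle(pc2, worker) {
			active.add(pc2)
			fmt.Printf("scheduled PC2 #%d, gpuUsed=%.1f\n", i, active.gpuUsed)
		}
	}
	fmt.Println("room for C2?", active.canHandle(c2, worker)) // false: both GPUs are full
}
```

In this example, four PC2 tasks at 0.5 GPU each fill a two-GPU worker, after which a C2 task asking for 2.0 GPUs no longer fits.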
A known issue is that PC2 cannot use multiple GPUs: even if the worker
has multiple GPUs and is allocated multiple PC2 tasks, those tasks will
only run on the first GPU.
This could result in unexpected behavior when a worker with multiple
GPUs is assigned multiple PC2 tasks, but it should not surprise
existing users who upgrade: anyone running workers with multiple GPUs
should already know this and be running one worker per GPU for PC2.
Those users now have the freedom to set PC2's GPU utilization to less
than one and effectively run multiple PC2 processes in a single worker.
C2 is capable of utilizing multiple GPUs, and now workers can be
customized for C2 accordingly.