Merge tag 'v1.10.14' into develop

philip-morlier 2021-12-23 10:21:12 -08:00
commit 968e79b705
182 changed files with 9644 additions and 6482 deletions

View File

@@ -57,7 +57,7 @@ on how you can run your own `geth` instance.
 By far the most common scenario is people wanting to simply interact with the Ethereum
 network: create accounts; transfer funds; deploy and interact with contracts. For this
 particular use-case the user doesn't care about years-old historical data, so we can
-fast-sync quickly to the current state of the network. To do so:
+sync quickly to the current state of the network. To do so:

 ```shell
 $ geth console
@@ -68,7 +68,7 @@ This command will:
   causing it to download more data in exchange for avoiding processing the entire history
   of the Ethereum network, which is very CPU intensive.
 * Start up `geth`'s built-in interactive [JavaScript console](https://geth.ethereum.org/docs/interface/javascript-console),
-  (via the trailing `console` subcommand) through which you can interact using [`web3` methods](https://web3js.readthedocs.io/)
+  (via the trailing `console` subcommand) through which you can interact using [`web3` methods](https://github.com/ChainSafe/web3.js/blob/0.20.7/DOCUMENTATION.md)
   (note: the `web3` version bundled within `geth` is very old, and not up to date with official docs),
   as well as `geth`'s own [management APIs](https://geth.ethereum.org/docs/rpc/server).
   This tool is optional and if you leave it out you can always attach to an already running
@@ -159,7 +159,7 @@ docker run -d --name ethereum-node -v /Users/alice/ethereum:/root \
            ethereum/client-go
 ```

-This will start `geth` in fast-sync mode with a DB memory allowance of 1GB just as the
+This will start `geth` in snap-sync mode with a DB memory allowance of 1GB just as the
 above command does. It will also create a persistent volume in your home directory for
 saving your blockchain as well as map the default ports. There is also an `alpine` tag
 available for a slim version of the image.

View File

@@ -29,92 +29,147 @@ Fingerprint: `AE96 ED96 9E47 9B00 84F3 E17F E88D 3334 FA5F 6A0A`
 ```
 -----BEGIN PGP PUBLIC KEY BLOCK-----
-Version: GnuPG v1
+Version: SKS 1.1.6
+Comment: Hostname: pgp.mit.edu

-mQINBFgl3tgBEAC8A1tUBkD9YV+eLrOmtgy+/JS/H9RoZvkg3K1WZ8IYfj6iIRaY
-neAk3Bp182GUPVz/zhKr2g0tMXIScDR3EnaDsY+Qg+JqQl8NOG+Cikr1nnkG2on9
-L8c8yiqry1ZTCmYMqCa2acTFqnyuXJ482aZNtB4QG2BpzfhW4k8YThpegk/EoRUi
-m+y7buJDtoNf7YILlhDQXN8qlHB02DWOVUihph9tUIFsPK6BvTr9SIr/eG6j6k0b
-fUo9pexOn7LS4SojoJmsm/5dp6AoKlac48cZU5zwR9AYcq/nvkrfmf2WkObg/xRd
-EvKZzn05jRopmAIwmoC3CiLmqCHPmT5a29vEob/yPFE335k+ujjZCPOu7OwjzDk7
-M0zMSfnNfDq8bXh16nn+ueBxJ0NzgD1oC6c2PhM+XRQCXChoyI8vbfp4dGvCvYqv
-QAE1bWjqnumZ/7vUPgZN6gDfiAzG2mUxC2SeFBhacgzDvtQls+uuvm+FnQOUgg2H
-h8x2zgoZ7kqV29wjaUPFREuew7e+Th5BxielnzOfVycVXeSuvvIn6cd3g/s8mX1c
-2kLSXJR7+KdWDrIrR5Az0kwAqFZt6B6QTlDrPswu3mxsm5TzMbny0PsbL/HBM+GZ
-EZCjMXxB8bqV2eSaktjnSlUNX1VXxyOxXA+ZG2jwpr51egi57riVRXokrQARAQAB
-tDlFdGhlcmV1bSBGb3VuZGF0aW9uIFNlY3VyaXR5IFRlYW0gPHNlY3VyaXR5QGV0
-aGVyZXVtLm9yZz6JAj4EEwECACgCGwMGCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheA
-BQJaCWH6BQkFo2BYAAoJEOiNMzT6X2oK+DEP/3H6dxkm0hvHZKoHLVuuxcu3EHYo
-k5sd3MMWPrZSN8qzZnY7ayEDMxnarWOizc+2jfOxfJlzX/g8lR1/fsHdWPFPhPoV
-Qk8ygrHn1H8U8+rpw/U03BqmqHpYCDzJ+CIis9UWROniqXw1nuqu/FtWOsdWxNKh
-jUo6k/0EsaXsxRPzgJv7fEUcVcQ7as/C3x9sy3muc2gvgA4/BKoGPb1/U0GuA8lV
-fDIDshAggmnSUAg+TuYSAAdoFQ1sKwFMPigcLJF2eyKuK3iUyixJrec/c4LSf3wA
-cGghbeuqI8INP0Y2zvXDQN2cByxsFAuoZG+m0cyKGaDH2MVUvOKKYqn/03qvrf15
-AWAsW0l0yQwOTCo3FbsNzemClm5Bj/xH0E4XuwXwChcMCMOWJrFoxyvCEI+keoQc
-c08/a8/MtS7vBAABXwOziSmm6CNqmzpWrh/fDrjlJlba9U3MxzvqU3IFlTdMratv
-6V+SgX+L25lCzW4NxxUavoB8fAlvo8lxpHKo24FP+RcLQ8XqkU3RiUsgRjQRFOqQ
-TaJcsp8mimmiYyf24mNu6b48pi+a5c/eQR9w59emeEUZqsJU+nqv8BWIIp7o4Agh
-NYnKjkhPlY5e1fLVfAHIADZFynWwRPkPMJSrBiP5EtcOFxQGHGjRxU/KjXkvE0hV
-xYb1PB8pWMTu/beeiQI+BBMBAgAoBQJYJd7YAhsDBQkB4TOABgsJCAcDAgYVCAIJ
-CgsEFgIDAQIeAQIXgAAKCRDojTM0+l9qCplDD/9IZ2i+m1cnqQKtiyHbyFGx32oL
-fzqPylX2bOG5DPsSTorSUdJMGVfT04oVxXc4S/2DVnNvi7RAbSiLapCWSplgtBOj
-j1xlblOoXxT3m7s1XHGCX5tENxI9fVSSPVKJn+fQaWpPB2MhBA+1lUI6GJ+11T7K
-J8LrP/fiw1/nOb7rW61HW44Gtyox23sA/d1+DsFVaF8hxJlNj5coPKr8xWzQ8pQl
-juzdjHDukjevuw4rRmRq9vozvj9keEU9XJ5dldyEVXFmdDk7KT0p0Rla9nxYhzf/
-r/Bv8Bzy0HCWRb2D31BjXXGG05oVnYmNGxGFxYja4MwgrMmne3ilEVjfUJsapsqi
-w41BAyQgIdfREulYN7ahsF5PrjVAqBd9IGtE8ULelF2SQxEBQBngEkP0ahP6tRAL
-i7/CBjPKOyKijtqVny7qrGOnU2ygcA88/WDibexDhrjz0Gx8WmErU7rIWZiZ5u4Y
-vJYVRo0+6rBCXRPeSJfiP5h1p17Anr2l42boAYslfcrzquB8MHtrNcyn650OLtHG
-nbxgIdniKrpuzGN6Opw+O2id2JhD1/1p4SOemwAmthplr1MIyOHNP3q93rEj2J7h
-5zPS/AJuKkMDFUpslPNLQjCOwPXtdzL7/kUZGBSyez1T3TaW1uY6l9XaJJRaSn+v
-1zPgfp4GJ3lPs4AlAbQ0RXRoZXJldW0gRm91bmRhdGlvbiBCdWcgQm91bnR5IDxi
-b3VudHlAZXRoZXJldW0ub3JnPokCPgQTAQIAKAIbAwYLCQgHAwIGFQgCCQoLBBYC
-AwECHgECF4AFAloJYfoFCQWjYFgACgkQ6I0zNPpfagoENg/+LnSaVeMxiGVtcjWl
-b7Xd73yrEy4uxiESS1AalW9mMf7oZzfI05f7QIQlaLAkNac74vZDJbPKjtb7tpMO
-RFhRZMCveq6CPKU6pd1SI8IUVUKwpEe6AJP3lHdVP57dquieFE2HlYKm6uHbCGWU
-0cjyTA+uu2KbgCHGmofsPY/xOcZLGEHTHqa5w60JJAQm+BSDKnw8wTyrxGvA3EK/
-ePSvOZMYa+iw6vYuZeBIMbdiXR/A2keBi3GuvqB8tDMj7P22TrH5mVDm3zNqGYD6
-amDPeiWp4cztY3aZyLcgYotqXPpDceZzDn+HopBPzAb/llCdE7bVswKRhphVMw4b
-bhL0R/TQY7Sf6TK2LKSBrjv0DWOSijikE71SJcBnJvHU7EpKrQQ0lMGclm3ynyji
-Nf0YTPXQt4I+fwTmOew2GFeK3UytNWbWI7oXX7Nm4bj9bhf3IJ0kmZb/Gs73+xII
-e7Rz52Mby436tWyQIQiF9ITYNGvNf53TwBBZMn0pKPiTyr3Ur7FHEotkEOFNh1//
-4zQY10XxuBdLrYGyZ4V8xHJM+oKre8Eg2R9qHXVbjvErHE+7CvgnV7YUip0criPr
-BlKRvuoJaSliH2JFhSjWVrkPmFGrWN0BAx10yIqMnEplfKeHf4P9Elek3oInS8WP
-G1zJG6s/t5+hQK0X37+TB+6rd3GJAj4EEwECACgFAlgl4TsCGwMFCQHhM4AGCwkI
-BwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJEOiNMzT6X2oKzf8P/iIKd77WHTbp4pMN
-8h52HyZJtDJmjA1DPZrbGl1TesW/Z9uTd12txlgqZnbG2GfN9+LSP6EOPzR6v2xC
-OVhR+RdWhZDJJuQCVS7lJIqQrZgmeTZG0TyQPZdLjVFBOrrhVwYX+HXbu429IzHr
-URf5InyR1QgqOXyElDYS6e28HFqvaoA0DWTWDDqOLPVl+U5fuceIE2XXdv3AGLeP
-Yf8J5MPobjPiZtBqI6S6iENY2Yn35qLX+axeC/iYSCHVtFuCCIdb/QYR1ZZV8Ps/
-aI9DwC7LU+YfPw7iqCIoqxSeA3o1PORkdSigEg3jtfRv5UqVo9a0oBb9jdoADsat
-F/gW0E7mto3XGOiaR0eB9SSdsM3x7Bz4A0HIGNaxpZo1RWqlO91leP4c13Px7ISv
-5OGXfLg+M8qb+qxbGd1HpitGi9s1y1aVfEj1kOtZ0tN8eu+Upg5WKwPNBDX3ar7J
-9NCULgVSL+E79FG+zXw62gxiQrLfKzm4wU/9L5wVkwQnm29hLJ0tokrSBZFnc/1l
-7OC+GM63tYicKkY4rqmoWUeYx7IwFH9mtDtvR1RxO85RbQhZizwpZpdpRkH0DqZu
-ZJRmRa5r7rPqmfa7d+VIFhz2Xs8pJMLVqxTsLKcLglmjw7aOrYG0SWeH7YraXWGD
-N3SlvSBiVwcK7QUKzLLvpadLwxfsuQINBFgl3tgBEACbgq6HTN5gEBi0lkD/MafI
-nmNi+59U5gRGYqk46WlfRjhHudXjDpgD0lolGb4hYontkMaKRlCg2Rvgjvk3Zve0
-PKWjKw7gr8YBa9fMFY8BhAXI32OdyI9rFhxEZFfWAfwKVmT19BdeAQRFvcfd+8w8
-f1XVc+zddULMJFBTr+xKDlIRWwTkdLPQeWbjo0eHl/g4tuLiLrTxVbnj26bf+2+1
-DbM/w5VavzPrkviHqvKe/QP/gay4QDViWvFgLb90idfAHIdsPgflp0VDS5rVHFL6
-D73rSRdIRo3I8c8mYoNjSR4XDuvgOkAKW9LR3pvouFHHjp6Fr0GesRbrbb2EG66i
-PsR99MQ7FqIL9VMHPm2mtR+XvbnKkH2rYyEqaMbSdk29jGapkAWle4sIhSKk749A
-4tGkHl08KZ2N9o6GrfUehP/V2eJLaph2DioFL1HxRryrKy80QQKLMJRekxigq8gr
-eW8xB4zuf9Mkuou+RHNmo8PebHjFstLigiD6/zP2e+4tUmrT0/JTGOShoGMl8Rt0
-VRxdPImKun+4LOXbfOxArOSkY6i35+gsgkkSy1gTJE0BY3S9auT6+YrglY/TWPQ9
-IJxWVOKlT+3WIp5wJu2bBKQ420VLqDYzkoWytel/bM1ACUtipMiIVeUs2uFiRjpz
-A1Wy0QHKPTdSuGlJPRrfcQARAQABiQIlBBgBAgAPAhsMBQJaCWIIBQkFo2BYAAoJ
-EOiNMzT6X2oKgSwQAKKs7BGF8TyZeIEO2EUK7R2bdQDCdSGZY06tqLFg3IHMGxDM
-b/7FVoa2AEsFgv6xpoebxBB5zkhUk7lslgxvKiSLYjxfNjTBltfiFJ+eQnf+OTs8
-KeR51lLa66rvIH2qUzkNDCCTF45H4wIDpV05AXhBjKYkrDCrtey1rQyFp5fxI+0I
-Q1UKKXvzZK4GdxhxDbOUSd38MYy93nqcmclGSGK/gF8XiyuVjeifDCM6+T1NQTX0
-K9lneidcqtBDvlggJTLJtQPO33o5EHzXSiud+dKth1uUhZOFEaYRZoye1YE3yB0T
-NOOE8fXlvu8iuIAMBSDL9ep6sEIaXYwoD60I2gHdWD0lkP0DOjGQpi4ouXM3Edsd
-5MTi0MDRNTij431kn8T/D0LCgmoUmYYMBgbwFhXr67axPZlKjrqR0z3F/Elv0ZPP
-cVg1tNznsALYQ9Ovl6b5M3cJ5GapbbvNWC7yEE1qScl9HiMxjt/H6aPastH63/7w
-cN0TslW+zRBy05VNJvpWGStQXcngsSUeJtI1Gd992YNjUJq4/Lih6Z1TlwcFVap+
-cTcDptoUvXYGg/9mRNNPZwErSfIJ0Ibnx9wPVuRN6NiCLOt2mtKp2F1pM6AOQPpZ
-85vEh6I8i6OaO0w/Z0UHBwvpY6jDUliaROsWUQsqz78Z34CVj4cy6vPW2EF4
-=r6KK
------END PGP PUBLIC KEY BLOCK-----
+mQINBFgl3tgBEAC8A1tUBkD9YV+eLrOmtgy+/JS/H9RoZvkg3K1WZ8IYfj6iIRaYneAk3Bp1
+82GUPVz/zhKr2g0tMXIScDR3EnaDsY+Qg+JqQl8NOG+Cikr1nnkG2on9L8c8yiqry1ZTCmYM
+qCa2acTFqnyuXJ482aZNtB4QG2BpzfhW4k8YThpegk/EoRUim+y7buJDtoNf7YILlhDQXN8q
+lHB02DWOVUihph9tUIFsPK6BvTr9SIr/eG6j6k0bfUo9pexOn7LS4SojoJmsm/5dp6AoKlac
+48cZU5zwR9AYcq/nvkrfmf2WkObg/xRdEvKZzn05jRopmAIwmoC3CiLmqCHPmT5a29vEob/y
+PFE335k+ujjZCPOu7OwjzDk7M0zMSfnNfDq8bXh16nn+ueBxJ0NzgD1oC6c2PhM+XRQCXCho
+yI8vbfp4dGvCvYqvQAE1bWjqnumZ/7vUPgZN6gDfiAzG2mUxC2SeFBhacgzDvtQls+uuvm+F
+nQOUgg2Hh8x2zgoZ7kqV29wjaUPFREuew7e+Th5BxielnzOfVycVXeSuvvIn6cd3g/s8mX1c
+2kLSXJR7+KdWDrIrR5Az0kwAqFZt6B6QTlDrPswu3mxsm5TzMbny0PsbL/HBM+GZEZCjMXxB
+8bqV2eSaktjnSlUNX1VXxyOxXA+ZG2jwpr51egi57riVRXokrQARAQABtDRFdGhlcmV1bSBG
+b3VuZGF0aW9uIEJ1ZyBCb3VudHkgPGJvdW50eUBldGhlcmV1bS5vcmc+iQIcBBEBCAAGBQJa
+FCY6AAoJEHoMA3Q0/nfveH8P+gJBPo9BXZL8isUfbUWjwLi81Yi70hZqIJUnz64SWTqBzg5b
+mCZ69Ji5637THsxQetS2ARabz0DybQ779FhD/IWnqV9T3KuBM/9RzJtuhLzKCyMrAINPMo28
+rKWdunHHarpuR4m3tL2zWJkle5QVYb+vkZXJJE98PJw+N4IYeKKeCs2ubeqZu636GA0sMzzB
+Jn3m/dRRA2va+/zzbr6F6b51ynzbMxWKTsJnstjC8gs8EeI+Zcd6otSyelLtCUkk3h5sTvpV
+Wv67BNSU0BYsMkxyFi9PUyy07Wixgeas89K5jG1oOtDva/FkpRHrTE/WA5OXDRcLrHJM+SwD
+CwqcLQqJd09NxwUW1iKeBmPptTiOGu1Gv2o7aEyoaWrHRBO7JuYrQrj6q2B3H1Je0zjAd2qt
+09ni2bLwLn4LA+VDpprNTO+eZDprv09s2oFSU6NwziHybovu0y7X4pADGkK2evOM7c86PohX
+QRQ1M1T16xLj6wP8/Ykwl6v/LUk7iDPXP3GPILnh4YOkwBR3DsCOPn8098xy7FxEELmupRzt
+Cj9oC7YAoweeShgUjBPzb+nGY1m6OcFfbUPBgFyMMfwF6joHbiVIO+39+Ut2g2ysZa7KF+yp
+XqVDqyEkYXsOLb25OC7brt8IJEPgBPwcHK5GNag6RfLxnQV+iVZ9KNH1yQgSiQI+BBMBAgAo
+AhsDBgsJCAcDAgYVCAIJCgsEFgIDAQIeAQIXgAUCWglh+gUJBaNgWAAKCRDojTM0+l9qCgQ2
+D/4udJpV4zGIZW1yNaVvtd3vfKsTLi7GIRJLUBqVb2Yx/uhnN8jTl/tAhCVosCQ1pzvi9kMl
+s8qO1vu2kw5EWFFkwK96roI8pTql3VIjwhRVQrCkR7oAk/eUd1U/nt2q6J4UTYeVgqbq4dsI
+ZZTRyPJMD667YpuAIcaah+w9j/E5xksYQdMeprnDrQkkBCb4FIMqfDzBPKvEa8DcQr949K85
+kxhr6LDq9i5l4Egxt2JdH8DaR4GLca6+oHy0MyPs/bZOsfmZUObfM2oZgPpqYM96JanhzO1j
+dpnItyBii2pc+kNx5nMOf4eikE/MBv+WUJ0TttWzApGGmFUzDhtuEvRH9NBjtJ/pMrYspIGu
+O/QNY5KKOKQTvVIlwGcm8dTsSkqtBDSUwZyWbfKfKOI1/RhM9dC3gj5/BOY57DYYV4rdTK01
+ZtYjuhdfs2bhuP1uF/cgnSSZlv8azvf7Egh7tHPnYxvLjfq1bJAhCIX0hNg0a81/ndPAEFky
+fSko+JPKvdSvsUcSi2QQ4U2HX//jNBjXRfG4F0utgbJnhXzEckz6gqt7wSDZH2oddVuO8Ssc
+T7sK+CdXthSKnRyuI+sGUpG+6glpKWIfYkWFKNZWuQ+YUatY3QEDHXTIioycSmV8p4d/g/0S
+V6TegidLxY8bXMkbqz+3n6FArRffv5MH7qt3cYkCPgQTAQIAKAUCWCXhOwIbAwUJAeEzgAYL
+CQgHAwIGFQgCCQoLBBYCAwECHgECF4AACgkQ6I0zNPpfagrN/w/+Igp3vtYdNunikw3yHnYf
+Jkm0MmaMDUM9mtsaXVN6xb9n25N3Xa3GWCpmdsbYZ8334tI/oQ4/NHq/bEI5WFH5F1aFkMkm
+5AJVLuUkipCtmCZ5NkbRPJA9l0uNUUE6uuFXBhf4ddu7jb0jMetRF/kifJHVCCo5fISUNhLp
+7bwcWq9qgDQNZNYMOo4s9WX5Tl+5x4gTZdd2/cAYt49h/wnkw+huM+Jm0GojpLqIQ1jZiffm
+otf5rF4L+JhIIdW0W4IIh1v9BhHVllXw+z9oj0PALstT5h8/DuKoIiirFJ4DejU85GR1KKAS
+DeO19G/lSpWj1rSgFv2N2gAOxq0X+BbQTua2jdcY6JpHR4H1JJ2wzfHsHPgDQcgY1rGlmjVF
+aqU73WV4/hzXc/HshK/k4Zd8uD4zypv6rFsZ3UemK0aL2zXLVpV8SPWQ61nS03x675SmDlYr
+A80ENfdqvsn00JQuBVIv4Tv0Ub7NfDraDGJCst8rObjBT/0vnBWTBCebb2EsnS2iStIFkWdz
+/WXs4L4Yzre1iJwqRjiuqahZR5jHsjAUf2a0O29HVHE7zlFtCFmLPClml2lGQfQOpm5klGZF
+rmvus+qZ9rt35UgWHPZezykkwtWrFOwspwuCWaPDto6tgbRJZ4ftitpdYYM3dKW9IGJXBwrt
+BQrMsu+lp0vDF+yJAlUEEwEIAD8CGwMGCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheAFiEErpbt
+lp5HmwCE8+F/6I0zNPpfagoFAmEAEJwFCQycmLgACgkQ6I0zNPpfagpWoBAAhOcbMAUw6Zt0
+GYzT3sR5/c0iatezPzXEXJf9ebzR8M5uPElXcxcnMx1dvXZmGPXPJKCPa99WCu1NZYy8F+Wj
+GTOY9tfIkvSxhys1p/giPAmvid6uQmD+bz7ivktnyzCkDWfMA+l8lsCSEqVlaq6y5T+a6SWB
+6TzC2S0MPb/RrC/7DpwyrNYWumvyVJh09adm1Mw/UGgst/sZ8eMaRYEd3X0yyT1CBpX4zp2E
+qQj9IEOTizvzv1x2jkHe5ZUeU3+nTBNlhSA+WFHUi0pfBdo2qog3Mv2EC1P2qMKoSdD5tPbA
+zql1yKoHHnXOMsqdftGwbiv2sYXWvrYvmaCd3Ys/viOyt3HOy9uV2ZEtBd9Yqo9x/NZj8QMA
+nY5k8jjrIXbUC89MqrJsQ6xxWQIg5ikMT7DvY0Ln89ev4oJyVvwIQAwCm4jUzFNm9bZLYDOP
+5lGJCV7tF5NYVU7NxNM8vescKc40mVNK/pygS5mxhK9QYOUjZsIv8gddrl1TkqrFMuxFnTyN
+WvzE29wFu/n4N1DkF+ZBqS70SlRvB+Hjz5LrDgEzF1Wf1eA/wq1dZbvMjjDVIc2VGlYp8Cp2
+8ob23c1seTtYXTNYgSR5go4EpH+xi+bIWv01bQQ9xGwBbT5sm4WUeWOcmX4QewzLZ3T/wK9+
+N4Ye/hmU9O34FwWJOY58EIe0OUV0aGVyZXVtIEZvdW5kYXRpb24gU2VjdXJpdHkgVGVhbSA8
+c2VjdXJpdHlAZXRoZXJldW0ub3JnPokCHAQRAQgABgUCWhQmOgAKCRB6DAN0NP5372LSEACT
+wZk1TASWZj5QF7rmkIM1GEyBxLE+PundNcMgM9Ktj1315ED8SmiukNI4knVS1MY99OIgXhQl
+D1foF2GKdTomrwwC4012zTNyUYCY60LnPZ6Z511HG+rZgZtZrbkz0IiUpwAlhGQND77lBqem
+J3K+CFX2XpDA/ojui/kqrY4cwMT5P8xPJkwgpRgw/jgdcZyJTsXdHblV9IGU4H1Vd1SgcfAf
+Db3YxDUlBtzlp0NkZqxen8irLIXUQvsfuIfRUbUSkWoK/n3U/gOCajAe8ZNF07iX4OWjH4Sw
+NDA841WhFWcGE+d8+pfMVfPASU3UPKH72uw86b2VgR46Av6voyMFd1pj+yCA+YAhJuOpV4yL
+QaGg2Z0kVOjuNWK/kBzp1F58DWGh4YBatbhE/UyQOqAAtR7lNf0M3QF9AdrHTxX8oZeqVW3V
+Fmi2mk0NwCIUv8SSrZr1dTchp04OtyXe5gZBXSfzncCSRQIUDC8OgNWaOzAaUmK299v4bvye
+uSCxOysxC7Q1hZtjzFPKdljS81mRlYeUL4fHlJU9R57bg8mriSXLmn7eKrSEDm/EG5T8nRx7
+TgX2MqJs8sWFxD2+bboVEu75yuFmZ//nmCBApAit9Hr2/sCshGIEpa9MQ6xJCYUxyqeJH+Cc
+Aja0UfXhnK2uvPClpJLIl4RE3gm4OXeE1IkCPgQTAQIAKAIbAwYLCQgHAwIGFQgCCQoLBBYC
+AwECHgECF4AFAloJYfoFCQWjYFgACgkQ6I0zNPpfagr4MQ//cfp3GSbSG8dkqgctW67Fy7cQ
+diiTmx3cwxY+tlI3yrNmdjtrIQMzGdqtY6LNz7aN87F8mXNf+DyVHX9+wd1Y8U+E+hVCTzKC
+sefUfxTz6unD9TTcGqaoelgIPMn4IiKz1RZE6eKpfDWe6q78W1Y6x1bE0qGNSjqT/QSxpezF
+E/OAm/t8RRxVxDtqz8LfH2zLea5zaC+ADj8EqgY9vX9TQa4DyVV8MgOyECCCadJQCD5O5hIA
+B2gVDWwrAUw+KBwskXZ7Iq4reJTKLEmt5z9zgtJ/fABwaCFt66ojwg0/RjbO9cNA3ZwHLGwU
+C6hkb6bRzIoZoMfYxVS84opiqf/Teq+t/XkBYCxbSXTJDA5MKjcVuw3N6YKWbkGP/EfQThe7
+BfAKFwwIw5YmsWjHK8IQj6R6hBxzTz9rz8y1Lu8EAAFfA7OJKaboI2qbOlauH98OuOUmVtr1
+TczHO+pTcgWVN0ytq2/pX5KBf4vbmULNbg3HFRq+gHx8CW+jyXGkcqjbgU/5FwtDxeqRTdGJ
+SyBGNBEU6pBNolyynyaKaaJjJ/biY27pvjymL5rlz95BH3Dn16Z4RRmqwlT6eq/wFYginujg
+CCE1icqOSE+Vjl7V8tV8AcgANkXKdbBE+Q8wlKsGI/kS1w4XFAYcaNHFT8qNeS8TSFXFhvU8
+HylYxO79t56JAj4EEwECACgFAlgl3tgCGwMFCQHhM4AGCwkIBwMCBhUIAgkKCwQWAgMBAh4B
+AheAAAoJEOiNMzT6X2oKmUMP/0hnaL6bVyepAq2LIdvIUbHfagt/Oo/KVfZs4bkM+xJOitJR
+0kwZV9PTihXFdzhL/YNWc2+LtEBtKItqkJZKmWC0E6OPXGVuU6hfFPebuzVccYJfm0Q3Ej19
+VJI9Uomf59Bpak8HYyEED7WVQjoYn7XVPsonwus/9+LDX+c5vutbrUdbjga3KjHbewD93X4O
+wVVoXyHEmU2Plyg8qvzFbNDylCWO7N2McO6SN6+7DitGZGr2+jO+P2R4RT1cnl2V3IRVcWZ0
+OTspPSnRGVr2fFiHN/+v8G/wHPLQcJZFvYPfUGNdcYbTmhWdiY0bEYXFiNrgzCCsyad7eKUR
+WN9QmxqmyqLDjUEDJCAh19ES6Vg3tqGwXk+uNUCoF30ga0TxQt6UXZJDEQFAGeASQ/RqE/q1
+EAuLv8IGM8o7IqKO2pWfLuqsY6dTbKBwDzz9YOJt7EOGuPPQbHxaYStTushZmJnm7hi8lhVG
+jT7qsEJdE95Il+I/mHWnXsCevaXjZugBiyV9yvOq4Hwwe2s1zKfrnQ4u0cadvGAh2eIqum7M
+Y3o6nD47aJ3YmEPX/WnhI56bACa2GmWvUwjI4c0/er3esSPYnuHnM9L8Am4qQwMVSmyU80tC
+MI7A9e13Mvv+RRkYFLJ7PVPdNpbW5jqX1doklFpKf6/XM+B+ngYneU+zgCUBiQJVBBMBCAA/
+AhsDBgsJCAcDAgYVCAIJCgsEFgIDAQIeAQIXgBYhBK6W7ZaeR5sAhPPhf+iNMzT6X2oKBQJh
+ABCQBQkMnJi4AAoJEOiNMzT6X2oKAv0P+gJ3twBp5efNWyVLcIg4h4cOo9uD0NPvz8/fm2gX
+FoOJL3MeigtPuSVfE9kuTaTuRbArzuFtdvH6G/kcRQvOlO4zyiIRHCk1gDHoIvvtn6RbRhVm
+/Xo4uGIsFHst7n4A7BjicwEK5Op6Ih5Hoq19xz83YSBgBVk2fYEJIRyJiKFbyPjH0eSYe8v+
+Ra5/F85ugLx1P6mMVkW+WPzULns89riW7BGTnZmXFHZp8nO2pkUlcI7F3KRG7l4kmlC50ox6
+DiG/6AJCVulbAClky9C68TmJ/R1RazQxU/9IqVywsydq66tbJQbm5Z7GEti0C5jjbSRJL2oT
+1xC7Rilr85PMREkPL3vegJdgj5PKlffZ/MocD/0EohiQ7wFpejFD4iTljeh0exRUwCRb6655
+9ib34JSQgU8Hl4JJu+mEgd9v0ZHD0/1mMD6fnAR84zca+O3cdASbnQmzTOKcGzLIrkE8TEnU
++2UZ8Ol7SAAqmBgzY1gKOilUho6dkyCAwNL+QDpvrITDPLEFPsjyB/M2KudZSVEn+Rletju1
+qkMW31qFMNlsbwzMZw+0USeGcs31Cs0B2/WQsro99CExlhS9auUFkmoVjJmYVTIYOM0zuPa4
+OyGspqPhRu5hEsmMDPDWD7Aad5k4GTqogQNnuKyRliZjXXrDZqFD5nfsJSL8Ky/sJGEMuQIN
+BFgl3tgBEACbgq6HTN5gEBi0lkD/MafInmNi+59U5gRGYqk46WlfRjhHudXjDpgD0lolGb4h
+YontkMaKRlCg2Rvgjvk3Zve0PKWjKw7gr8YBa9fMFY8BhAXI32OdyI9rFhxEZFfWAfwKVmT1
+9BdeAQRFvcfd+8w8f1XVc+zddULMJFBTr+xKDlIRWwTkdLPQeWbjo0eHl/g4tuLiLrTxVbnj
+26bf+2+1DbM/w5VavzPrkviHqvKe/QP/gay4QDViWvFgLb90idfAHIdsPgflp0VDS5rVHFL6
+D73rSRdIRo3I8c8mYoNjSR4XDuvgOkAKW9LR3pvouFHHjp6Fr0GesRbrbb2EG66iPsR99MQ7
+FqIL9VMHPm2mtR+XvbnKkH2rYyEqaMbSdk29jGapkAWle4sIhSKk749A4tGkHl08KZ2N9o6G
+rfUehP/V2eJLaph2DioFL1HxRryrKy80QQKLMJRekxigq8greW8xB4zuf9Mkuou+RHNmo8Pe
+bHjFstLigiD6/zP2e+4tUmrT0/JTGOShoGMl8Rt0VRxdPImKun+4LOXbfOxArOSkY6i35+gs
+gkkSy1gTJE0BY3S9auT6+YrglY/TWPQ9IJxWVOKlT+3WIp5wJu2bBKQ420VLqDYzkoWytel/
+bM1ACUtipMiIVeUs2uFiRjpzA1Wy0QHKPTdSuGlJPRrfcQARAQABiQIlBBgBAgAPAhsMBQJa
+CWIIBQkFo2BYAAoJEOiNMzT6X2oKgSwQAKKs7BGF8TyZeIEO2EUK7R2bdQDCdSGZY06tqLFg
+3IHMGxDMb/7FVoa2AEsFgv6xpoebxBB5zkhUk7lslgxvKiSLYjxfNjTBltfiFJ+eQnf+OTs8
+KeR51lLa66rvIH2qUzkNDCCTF45H4wIDpV05AXhBjKYkrDCrtey1rQyFp5fxI+0IQ1UKKXvz
+ZK4GdxhxDbOUSd38MYy93nqcmclGSGK/gF8XiyuVjeifDCM6+T1NQTX0K9lneidcqtBDvlgg
+JTLJtQPO33o5EHzXSiud+dKth1uUhZOFEaYRZoye1YE3yB0TNOOE8fXlvu8iuIAMBSDL9ep6
+sEIaXYwoD60I2gHdWD0lkP0DOjGQpi4ouXM3Edsd5MTi0MDRNTij431kn8T/D0LCgmoUmYYM
+BgbwFhXr67axPZlKjrqR0z3F/Elv0ZPPcVg1tNznsALYQ9Ovl6b5M3cJ5GapbbvNWC7yEE1q
+Scl9HiMxjt/H6aPastH63/7wcN0TslW+zRBy05VNJvpWGStQXcngsSUeJtI1Gd992YNjUJq4
+/Lih6Z1TlwcFVap+cTcDptoUvXYGg/9mRNNPZwErSfIJ0Ibnx9wPVuRN6NiCLOt2mtKp2F1p
+M6AOQPpZ85vEh6I8i6OaO0w/Z0UHBwvpY6jDUliaROsWUQsqz78Z34CVj4cy6vPW2EF4iQIl
+BBgBAgAPBQJYJd7YAhsMBQkB4TOAAAoJEOiNMzT6X2oKTjgP/1ojCVyGyvHMLUgnX0zwrR5Q
+1M5RKFz6kHwKjODVLR3Isp8I935oTQt3DY7yFDI4t0GqbYRQMtxcNEb7maianhK2trCXfhPs
+6/L04igjDf5iTcmzamXN6xnh5xkz06hZJJCMuu4MvKxC9MQHCVKAwjswl/9H9JqIBXAY3E2l
+LpX5P+5jDZuPxS86p3+k4Rrdp9KTGXjiuEleM3zGlz5BLWydqovOck7C2aKh27ETFpDYY0z3
+yQ5AsPJyk1rAr0wrH6+ywmwWlzuQewavnrLnJ2M8iMFXpIhyHeEIU/f7o8f+dQk72rZ9CGzd
+cqig2za/BS3zawZWgbv2vB2elNsIllYLdir45jxBOxx2yvJvEuu4glz78y4oJTCTAYAbMlle
+5gVdPkVcGyvvVS9tinnSaiIzuvWrYHKWll1uYPm2Q1CDs06P5I7bUGAXpgQLUh/XQguy/0sX
+GWqW3FS5JzP+XgcR/7UASvwBdHylubKbeqEpB7G1s+m+8C67qOrc7EQv3Jmy1YDOkhEyNig1
+rmjplLuir3tC1X+D7dHpn7NJe7nMwFx2b2MpMkLA9jPPAGPp/ekcu5sxCe+E0J/4UF++K+CR
+XIxgtzU2UJfp8p9x+ygbx5qHinR0tVRdIzv3ZnGsXrfxnWfSOaB582cU3VRN9INzHHax8ETa
+QVDnGO5uQa+FiQI8BBgBCAAmAhsMFiEErpbtlp5HmwCE8+F/6I0zNPpfagoFAmEAELYFCQyc
+mN4ACgkQ6I0zNPpfagoqAQ/+MnDjBx8JWMd/XjeFoYKx/Oo0ntkInV+ME61JTBls4PdVk+TB
+8PWZdPQHw9SnTvRmykFeznXIRzuxkowjrZYXdPXBxY2b1WyD5V3Ati1TM9vqpaR4osyPs2xy
+I4dzDssh9YvUsIRL99O04/65lGiYeBNuACq+yK/7nD/ErzBkDYJHhMCdadbVWUACxvVIDvro
+yQeVLKMsHqMCd8BTGD7VDs79NXskPnN77pAFnkzS4Z2b8SNzrlgTc5pUiuZHIXPIpEYmsYzh
+ucTU6uI3dN1PbSFHK5tG2pHb4ZrPxY3L20Dgc2Tfu5/SDApZzwvvKTqjdO891MEJ++H+ssOz
+i4O1UeWKs9owWttan9+PI47ozBSKOTxmMqLSQ0f56Np9FJsV0ilGxRKfjhzJ4KniOMUBA7mP
++m+TmXfVtthJred4sHlJMTJNpt+sCcT6wLMmyc3keIEAu33gsJj3LTpkEA2q+V+ZiP6Q8HRB
+402ITklABSArrPSE/fQU9L8hZ5qmy0Z96z0iyILgVMLuRCCfQOMWhwl8yQWIIaf1yPI07xur
+epy6lH7HmxjjOR7eo0DaSxQGQpThAtFGwkWkFh8yki8j3E42kkrxvEyyYZDXn2YcI3bpqhJx
+PtwCMZUJ3kc/skOrs6bOI19iBNaEoNX5Dllm7UHjOgWNDQkcCuOCxucKano=
+=arte
+-----END PGP PUBLIC KEY BLOCK-----
 ```

View File

@@ -496,7 +496,7 @@ func TestEstimateGas(t *testing.T) {
 			GasPrice: big.NewInt(0),
 			Value:    nil,
 			Data:     common.Hex2Bytes("b9b046f9"),
-		}, 0, errors.New("invalid opcode: opcode 0xfe not defined"), nil},
+		}, 0, errors.New("invalid opcode: INVALID"), nil},
 		{"Valid", ethereum.CallMsg{
 			From:     addr,
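The new expected error string is produced by the EVM itself, not by the test. A minimal sketch of where the mnemonic comes from (assuming only that `vm.INVALID` and the `OpCode` stringer behave as in this release):

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/core/vm"
)

func main() {
	// 0xfe is the designated invalid opcode. Its stringer renders as
	// "INVALID", so execution failures now read "invalid opcode: INVALID"
	// rather than "invalid opcode: opcode 0xfe not defined".
	fmt.Println(vm.INVALID)
}
```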

View File

@@ -88,6 +88,13 @@ func Bind(types []string, abis []string, bytecodes []string, fsigs []map[string]
 		transactIdentifiers = make(map[string]bool)
 		eventIdentifiers    = make(map[string]bool)
 	)
+
+	for _, input := range evmABI.Constructor.Inputs {
+		if hasStruct(input.Type) {
+			bindStructType[lang](input.Type, structs)
+		}
+	}
+
 	for _, original := range evmABI.Methods {
 		// Normalize the method for capital cases and non-anonymous inputs/outputs
 		normalized := original
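Without this loop, a struct used only as a constructor argument never reaches `bindStructType`, so the generated package references an undefined Go type and fails to compile. A hedged sketch of how such a constructor-only tuple looks to the ABI parser (the JSON is the one from the test case added below; the rest is the standard `accounts/abi` API):

```go
package main

import (
	"fmt"
	"strings"

	"github.com/ethereum/go-ethereum/accounts/abi"
)

func main() {
	const def = `[{"inputs":[{"components":[{"internalType":"uint256","name":"field","type":"uint256"}],"internalType":"struct ConstructorWithStructParam.StructType","name":"st","type":"tuple"}],"stateMutability":"nonpayable","type":"constructor"}]`

	parsed, err := abi.JSON(strings.NewReader(def))
	if err != nil {
		panic(err)
	}
	for _, input := range parsed.Constructor.Inputs {
		// abi.TupleTy is what hasStruct detects; before this fix such
		// inputs were simply never registered for binding.
		fmt.Printf("%s tuple=%v\n", input.Name, input.Type.T == abi.TupleTy)
	}
}
```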

View File

@@ -1911,6 +1911,50 @@ var bindTests = []struct {
 		nil,
 		nil,
 	},
+	{
+		name: `ConstructorWithStructParam`,
+		contract: `
+		pragma solidity >=0.8.0 <0.9.0;
+
+		contract ConstructorWithStructParam {
+			struct StructType {
+				uint256 field;
+			}
+
+			constructor(StructType memory st) {}
+		}
+		`,
+		bytecode: []string{`0x608060405234801561001057600080fd5b506040516101c43803806101c48339818101604052810190610032919061014a565b50610177565b6000604051905090565b600080fd5b600080fd5b6000601f19601f8301169050919050565b7f4e487b7100000000000000000000000000000000000000000000000000000000600052604160045260246000fd5b6100958261004c565b810181811067ffffffffffffffff821117156100b4576100b361005d565b5b80604052505050565b60006100c7610038565b90506100d3828261008c565b919050565b6000819050919050565b6100eb816100d8565b81146100f657600080fd5b50565b600081519050610108816100e2565b92915050565b60006020828403121561012457610123610047565b5b61012e60206100bd565b9050600061013e848285016100f9565b60008301525092915050565b6000602082840312156101605761015f610042565b5b600061016e8482850161010e565b91505092915050565b603f806101856000396000f3fe6080604052600080fdfea2646970667358221220cdffa667affecefac5561f65f4a4ba914204a8d4eb859d8cd426fb306e5c12a364736f6c634300080a0033`},
+		abi: []string{`[{"inputs":[{"components":[{"internalType":"uint256","name":"field","type":"uint256"}],"internalType":"struct ConstructorWithStructParam.StructType","name":"st","type":"tuple"}],"stateMutability":"nonpayable","type":"constructor"}]`},
+		imports: `
+		"math/big"
+
+		"github.com/ethereum/go-ethereum/accounts/abi/bind"
+		"github.com/ethereum/go-ethereum/accounts/abi/bind/backends"
+		"github.com/ethereum/go-ethereum/core"
+		"github.com/ethereum/go-ethereum/crypto"
+		"github.com/ethereum/go-ethereum/eth/ethconfig"
+		`,
+		tester: `
+		var (
+			key, _  = crypto.GenerateKey()
+			user, _ = bind.NewKeyedTransactorWithChainID(key, big.NewInt(1337))
+			sim     = backends.NewSimulatedBackend(core.GenesisAlloc{user.From: {Balance: big.NewInt(1000000000000000000)}}, ethconfig.Defaults.Miner.GasCeil)
+		)
+		defer sim.Close()
+
+		_, tx, _, err := DeployConstructorWithStructParam(user, sim, ConstructorWithStructParamStructType{Field: big.NewInt(42)})
+		if err != nil {
+			t.Fatalf("DeployConstructorWithStructParam() got err %v; want nil err", err)
+		}
+		sim.Commit()
+
+		if _, err = bind.WaitDeployed(nil, sim, tx); err != nil {
+			t.Logf("Deployment tx: %+v", tx)
+			t.Errorf("bind.WaitDeployed(nil, %T, <deployment tx>) got err %v; want nil err", sim, err)
+		}
+		`,
+	},
 }

 // Tests that packages generated by the binder can be successfully compiled and
@@ -1934,6 +1978,7 @@ func TestGolangBindings(t *testing.T) {
 	}
 	// Generate the test suite for all the contracts
 	for i, tt := range bindTests {
+		t.Run(tt.name, func(t *testing.T) {
 			var types []string
 			if tt.types != nil {
 				types = tt.types
@@ -1964,6 +2009,7 @@ func TestGolangBindings(t *testing.T) {
 			if err := ioutil.WriteFile(filepath.Join(pkg, strings.ToLower(tt.name)+"_test.go"), []byte(code), 0600); err != nil {
 				t.Fatalf("test %d: failed to write tests: %v", i, err)
 			}
+		})
 	}
 	// Convert the package to go modules and use the current source for go-ethereum
 	moder := exec.Command(gocmd, "mod", "init", "bindtest")
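Wrapping each table entry in `t.Run` gives every binding its own named subtest, so a single case can be targeted with `go test -run 'TestGolangBindings/ConstructorWithStructParam'` and failures are reported per contract. A self-contained sketch of the pattern (names here are illustrative, not from this file):

```go
package demo

import "testing"

func TestTable(t *testing.T) {
	cases := []struct{ name, in, want string }{
		{"empty", "", ""},
		{"simple", "a", "a"},
	}
	for _, tt := range cases {
		tt := tt // capture the range variable for the closure
		t.Run(tt.name, func(t *testing.T) {
			if tt.in != tt.want {
				t.Fatalf("got %q, want %q", tt.in, tt.want)
			}
		})
	}
}
```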

View File

@@ -290,7 +290,7 @@ func tuplePointsTo(index int, output []byte) (start int, err error) {
 	offset := big.NewInt(0).SetBytes(output[index : index+32])
 	outputLen := big.NewInt(int64(len(output)))

-	if offset.Cmp(big.NewInt(int64(len(output)))) > 0 {
+	if offset.Cmp(outputLen) > 0 {
 		return 0, fmt.Errorf("abi: cannot marshal in to go slice: offset %v would go over slice boundary (len=%v)", offset, outputLen)
 	}
 	if offset.BitLen() > 63 {
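This is a behavior-preserving cleanup: the freshly computed `outputLen` is reused instead of rebuilding the same `big.Int`. A standalone sketch of the bounds check itself, with a hypothetical helper name:

```go
package main

import (
	"fmt"
	"math/big"
)

// pointsInside mirrors the two guards above: the 32-byte head word must
// not point past the end of the output, and must fit in an int64.
func pointsInside(head []byte, outputLen int64) bool {
	offset := new(big.Int).SetBytes(head)
	return offset.Cmp(big.NewInt(outputLen)) <= 0 && offset.BitLen() <= 63
}

func main() {
	head := make([]byte, 32)
	head[31] = 0x40                      // offset 64
	fmt.Println(pointsInside(head, 128)) // true
	fmt.Println(pointsInside(head, 32))  // false
}
```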

View File

@@ -1,19 +1,19 @@
 # This file contains sha256 checksums of optional build dependencies.

-2255eb3e4e824dd7d5fcdc2e7f84534371c186312e546fb1086a34c17752f431  go1.17.2.src.tar.gz
-7914497a302a132a465d33f5ee044ce05568bacdb390ab805cb75a3435a23f94  go1.17.2.darwin-amd64.tar.gz
-ce8771bd3edfb5b28104084b56bbb532eeb47fbb7769c3e664c6223712c30904  go1.17.2.darwin-arm64.tar.gz
-8cea5b8d1f8e8cbb58069bfed58954c71c5b1aca2f3c857765dae83bf724d0d7  go1.17.2.freebsd-386.tar.gz
-c96e57218fb03e74d683ad63b1684d44c89d5e5b994f36102b33dce21b58499a  go1.17.2.freebsd-amd64.tar.gz
-8617f2e40d51076983502894181ae639d1d8101bfbc4d7463a2b442f239f5596  go1.17.2.linux-386.tar.gz
-f242a9db6a0ad1846de7b6d94d507915d14062660616a61ef7c808a76e4f1676  go1.17.2.linux-amd64.tar.gz
-a5a43c9cdabdb9f371d56951b14290eba8ce2f9b0db48fb5fc657943984fd4fc  go1.17.2.linux-arm64.tar.gz
-04d16105008230a9763005be05606f7eb1c683a3dbf0fbfed4034b23889cb7f2  go1.17.2.linux-armv6l.tar.gz
-12e2dc7e0ffeebe77083f267ef6705fec1621cdf2ed6489b3af04a13597ed68d  go1.17.2.linux-ppc64le.tar.gz
-c4b2349a8d11350ca038b8c57f3cc58dc0b31284bcbed4f7fca39aeed28b4a51  go1.17.2.linux-s390x.tar.gz
-8a85257a351996fdf045fe95ed5fdd6917dd48636d562dd11dedf193005a53e0  go1.17.2.windows-386.zip
-fa6da0b829a66f5fab7e4e312fd6aa1b2d8f045c7ecee83b3d00f6fe5306759a  go1.17.2.windows-amd64.zip
-00575c85dc7a129ba892685a456b27a3f3670f71c8bfde1c5ad151f771d55df7  go1.17.2.windows-arm64.zip
+3defb9a09bed042403195e872dcbc8c6fae1485963332279668ec52e80a95a2d  go1.17.5.src.tar.gz
+2db6a5d25815b56072465a2cacc8ed426c18f1d5fc26c1fc8c4f5a7188658264  go1.17.5.darwin-amd64.tar.gz
+111f71166de0cb8089bb3e8f9f5b02d76e1bf1309256824d4062a47b0e5f98e0  go1.17.5.darwin-arm64.tar.gz
+443c1cd9768df02085014f1eb034ebc7dbe032ffc8a9bb9f2e6617d037eee23c  go1.17.5.freebsd-386.tar.gz
+17180bdc4126acffd0ebf86d66ef5cbc3488b6734e93374fb00eb09494e006d3  go1.17.5.freebsd-amd64.tar.gz
+4f4914303bc18f24fd137a97e595735308f5ce81323c7224c12466fd763fc59f  go1.17.5.linux-386.tar.gz
+bd78114b0d441b029c8fe0341f4910370925a4d270a6a590668840675b0c653e  go1.17.5.linux-amd64.tar.gz
+6f95ce3da40d9ce1355e48f31f4eb6508382415ca4d7413b1e7a3314e6430e7e  go1.17.5.linux-arm64.tar.gz
+aa1fb6c53b4fe72f159333362a10aca37ae938bde8adc9c6eaf2a8e87d1e47de  go1.17.5.linux-armv6l.tar.gz
+3d4be616e568f0a02cb7f7769bcaafda4b0969ed0f9bb4277619930b96847e70  go1.17.5.linux-ppc64le.tar.gz
+8087d4fe991e82804e6485c26568c2e0ee0bfde00ceb9015dc86cb6bf84ef40b  go1.17.5.linux-s390x.tar.gz
+6d7b9948ee14a906b14f5cbebdfab63cd6828b0b618160847ecd3cc3470a26fe  go1.17.5.windows-386.zip
+671faf99cd5d81cd7e40936c0a94363c64d654faa0148d2af4bbc262555620b9  go1.17.5.windows-amd64.zip
+45e88676b68e9cf364be469b5a27965397f4e339aa622c2f52c10433c56e5030  go1.17.5.windows-arm64.zip

 d4bd25b9814eeaa2134197dd2c7671bb791eae786d42010d9d788af20dee4bfa  golangci-lint-1.42.0-darwin-amd64.tar.gz
 e56859c04a2ad5390c6a497b1acb1cc9329ecb1010260c6faae9b5a4c35b35ea  golangci-lint-1.42.0-darwin-arm64.tar.gz

View File

@@ -147,7 +147,7 @@ var (
 	// This is the version of go that will be downloaded by
 	//
 	//     go run ci.go install -dlgo
-	dlgoVersion = "1.17.2"
+	dlgoVersion = "1.17.5"
 )

 var GOBIN, _ = filepath.Abs(filepath.Join("build", "bin"))

View File

@@ -898,7 +898,7 @@ func testExternalUI(api *core.SignerAPI) {
 	addr, _ := common.NewMixedcaseAddressFromString("0x0011223344556677889900112233445566778899")
 	data := `{"types":{"EIP712Domain":[{"name":"name","type":"string"},{"name":"version","type":"string"},{"name":"chainId","type":"uint256"},{"name":"verifyingContract","type":"address"}],"Person":[{"name":"name","type":"string"},{"name":"test","type":"uint8"},{"name":"wallet","type":"address"}],"Mail":[{"name":"from","type":"Person"},{"name":"to","type":"Person"},{"name":"contents","type":"string"}]},"primaryType":"Mail","domain":{"name":"Ether Mail","version":"1","chainId":"1","verifyingContract":"0xCCCcccccCCCCcCCCCCCcCcCccCcCCCcCcccccccC"},"message":{"from":{"name":"Cow","test":"3","wallet":"0xcD2a3d9F938E13CD947Ec05AbC7FE734Df8DD826"},"to":{"name":"Bob","wallet":"0xbBbBBBBbbBBBbbbBbbBbbbbBBbBbbbbBbBbbBBbB","test":"2"},"contents":"Hello, Bob!"}}`
 	//_, err := api.SignData(ctx, accounts.MimetypeTypedData, *addr, hexutil.Encode([]byte(data)))
-	var typedData core.TypedData
+	var typedData apitypes.TypedData
 	json.Unmarshal([]byte(data), &typedData)
 	_, err := api.SignTypedData(ctx, *addr, typedData)
 	expectApprove("sign 712 typed data", err)
@@ -1025,7 +1025,7 @@ func GenDoc(ctx *cli.Context) {
 		"of the work in canonicalizing and making sense of the data, and it's up to the UI to present" +
 		"the user with the contents of the `message`"
 	sighash, msg := accounts.TextAndHash([]byte("hello world"))
-	messages := []*core.NameValueType{{Name: "message", Value: msg, Typ: accounts.MimetypeTextPlain}}
+	messages := []*apitypes.NameValueType{{Name: "message", Value: msg, Typ: accounts.MimetypeTextPlain}}

 	add("SignDataRequest", desc, &core.SignDataRequest{
 		Address: common.NewMixedcaseAddress(a),
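Both hunks are part of the move of signer request/response types out of `signer/core` into `signer/core/apitypes`. A hedged sketch of the new import path (standalone, using only the relocated `TypedData` type):

```go
package main

import (
	"encoding/json"
	"fmt"

	"github.com/ethereum/go-ethereum/signer/core/apitypes"
)

func main() {
	// TypedData (and NameValueType) now live under signer/core/apitypes.
	data := `{"types":{"EIP712Domain":[{"name":"name","type":"string"}]},"primaryType":"EIP712Domain","domain":{"name":"demo"},"message":{}}`

	var td apitypes.TypedData
	if err := json.Unmarshal([]byte(data), &td); err != nil {
		panic(err)
	}
	fmt.Println(td.PrimaryType)
}
```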

View File

@@ -229,7 +229,7 @@ func PingPastExpiration(t *utesting.T) {
 	reply, _, _ := te.read(te.l1)
 	if reply != nil {
-		t.Fatal("Expected no reply, got", reply)
+		t.Fatalf("Expected no reply, got %v %v", reply.Name(), reply)
 	}
 }
@@ -247,7 +247,7 @@ func WrongPacketType(t *utesting.T) {
 	reply, _, _ := te.read(te.l1)
 	if reply != nil {
-		t.Fatal("Expected no reply, got", reply)
+		t.Fatalf("Expected no reply, got %v %v", reply.Name(), reply)
 	}
 }
@@ -282,9 +282,16 @@ func FindnodeWithoutEndpointProof(t *utesting.T) {
 	rand.Read(req.Target[:])
 	te.send(te.l1, &req)

+	for {
 		reply, _, _ := te.read(te.l1)
-		if reply != nil {
-			t.Fatal("Expected no response, got", reply)
+		if reply == nil {
+			// No response, all good
+			break
+		}
+		if reply.Kind() == v4wire.PingPacket {
+			continue // A ping is ok, just ignore it
+		}
+		t.Fatalf("Expected no reply, got %v %v", reply.Name(), reply)
 	}
 }
@@ -304,7 +311,7 @@ func BasicFindnode(t *utesting.T) {
 		t.Fatal("read find nodes", err)
 	}
 	if reply.Kind() != v4wire.NeighborsPacket {
-		t.Fatal("Expected neighbors, got", reply.Name())
+		t.Fatalf("Expected neighbors, got %v %v", reply.Name(), reply)
 	}
 }
@@ -341,7 +348,7 @@ func UnsolicitedNeighbors(t *utesting.T) {
 		t.Fatal("read find nodes", err)
 	}
 	if reply.Kind() != v4wire.NeighborsPacket {
-		t.Fatal("Expected neighbors, got", reply.Name())
+		t.Fatalf("Expected neighbors, got %v %v", reply.Name(), reply)
 	}
 	nodes := reply.(*v4wire.Neighbors).Nodes
 	if contains(nodes, encFakeKey) {
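The `FindnodeWithoutEndpointProof` change replaces a single read with a drain loop: a PING from the remote node is legitimate housekeeping, so only genuinely unexpected packets fail the test. A self-contained sketch of that pattern with stand-in types:

```go
package main

import "fmt"

type packet struct{ kind, name string }

// drain ignores expected housekeeping packets and fails on anything else;
// an empty slice stands in for the read timing out with no reply.
func drain(packets []packet) error {
	for _, p := range packets {
		if p.kind == "ping" {
			continue // a ping is ok, the remote is just revalidating us
		}
		return fmt.Errorf("expected no reply, got %s", p.name)
	}
	return nil
}

func main() {
	fmt.Println(drain([]packet{{"ping", "PING"}}))           // <nil>
	fmt.Println(drain([]packet{{"neighbors", "NEIGHBORS"}})) // error
}
```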

View File

@@ -34,6 +34,7 @@ import (
 	"github.com/ethereum/go-ethereum/core/types"
 	"github.com/ethereum/go-ethereum/core/vm"
 	"github.com/ethereum/go-ethereum/crypto"
+	"github.com/ethereum/go-ethereum/eth/tracers/logger"
 	"github.com/ethereum/go-ethereum/log"
 	"github.com/ethereum/go-ethereum/params"
 	"github.com/ethereum/go-ethereum/rlp"
@@ -112,7 +113,7 @@ func Transition(ctx *cli.Context) error {
 		log.Warn(fmt.Sprintf("--%s has been deprecated in favour of --%s", TraceDisableReturnDataFlag.Name, TraceEnableReturnDataFlag.Name))
 	}
 	// Configure the EVM logger
-	logConfig := &vm.LogConfig{
+	logConfig := &logger.Config{
 		DisableStack:     ctx.Bool(TraceDisableStackFlag.Name),
 		EnableMemory:     !ctx.Bool(TraceDisableMemoryFlag.Name) || ctx.Bool(TraceEnableMemoryFlag.Name),
 		EnableReturnData: !ctx.Bool(TraceDisableReturnDataFlag.Name) || ctx.Bool(TraceEnableReturnDataFlag.Name),
@@ -134,7 +135,7 @@ func Transition(ctx *cli.Context) error {
 				return nil, NewError(ErrorIO, fmt.Errorf("failed creating trace-file: %v", err))
 			}
 			prevFile = traceFile
-			return vm.NewJSONLogger(logConfig, traceFile), nil
+			return logger.NewJSONLogger(logConfig, traceFile), nil
 		}
 	} else {
 		getTracer = func(txIndex int, txHash common.Hash) (tracer vm.EVMLogger, err error) {

View File

@@ -36,6 +36,7 @@ import (
 	"github.com/ethereum/go-ethereum/core/state"
 	"github.com/ethereum/go-ethereum/core/vm"
 	"github.com/ethereum/go-ethereum/core/vm/runtime"
+	"github.com/ethereum/go-ethereum/eth/tracers/logger"
 	"github.com/ethereum/go-ethereum/log"
 	"github.com/ethereum/go-ethereum/params"
 	"gopkg.in/urfave/cli.v1"
@@ -107,7 +108,7 @@ func runCmd(ctx *cli.Context) error {
 	glogger := log.NewGlogHandler(log.StreamHandler(os.Stderr, log.TerminalFormat(false)))
 	glogger.Verbosity(log.Lvl(ctx.GlobalInt(VerbosityFlag.Name)))
 	log.Root().SetHandler(glogger)
-	logconfig := &vm.LogConfig{
+	logconfig := &logger.Config{
 		EnableMemory:     !ctx.GlobalBool(DisableMemoryFlag.Name),
 		DisableStack:     ctx.GlobalBool(DisableStackFlag.Name),
 		DisableStorage:   ctx.GlobalBool(DisableStorageFlag.Name),
@@ -117,7 +118,7 @@ func runCmd(ctx *cli.Context) error {
 	var (
 		tracer        vm.EVMLogger
-		debugLogger   *vm.StructLogger
+		debugLogger   *logger.StructLogger
 		statedb       *state.StateDB
 		chainConfig   *params.ChainConfig
 		sender        = common.BytesToAddress([]byte("sender"))
@@ -125,12 +126,12 @@ func runCmd(ctx *cli.Context) error {
 		genesisConfig *core.Genesis
 	)
 	if ctx.GlobalBool(MachineFlag.Name) {
-		tracer = vm.NewJSONLogger(logconfig, os.Stdout)
+		tracer = logger.NewJSONLogger(logconfig, os.Stdout)
 	} else if ctx.GlobalBool(DebugFlag.Name) {
-		debugLogger = vm.NewStructLogger(logconfig)
+		debugLogger = logger.NewStructLogger(logconfig)
 		tracer = debugLogger
 	} else {
-		debugLogger = vm.NewStructLogger(logconfig)
+		debugLogger = logger.NewStructLogger(logconfig)
 	}
 	if ctx.GlobalString(GenesisFlag.Name) != "" {
 		gen := readGenesis(ctx.GlobalString(GenesisFlag.Name))
@@ -288,10 +289,10 @@ func runCmd(ctx *cli.Context) error {
 	if ctx.GlobalBool(DebugFlag.Name) {
 		if debugLogger != nil {
 			fmt.Fprintln(os.Stderr, "#### TRACE ####")
-			vm.WriteTrace(os.Stderr, debugLogger.StructLogs())
+			logger.WriteTrace(os.Stderr, debugLogger.StructLogs())
 		}
 		fmt.Fprintln(os.Stderr, "#### LOGS ####")
-		vm.WriteLogs(os.Stderr, statedb.Logs())
+		logger.WriteLogs(os.Stderr, statedb.Logs())
 	}

 	if bench || ctx.GlobalBool(StatDumpFlag.Name) {

View File

@@ -25,6 +25,7 @@ import (
 	"github.com/ethereum/go-ethereum/core/state"
 	"github.com/ethereum/go-ethereum/core/vm"
+	"github.com/ethereum/go-ethereum/eth/tracers/logger"
 	"github.com/ethereum/go-ethereum/log"
 	"github.com/ethereum/go-ethereum/tests"
@@ -58,7 +59,7 @@ func stateTestCmd(ctx *cli.Context) error {
 	log.Root().SetHandler(glogger)

 	// Configure the EVM logger
-	config := &vm.LogConfig{
+	config := &logger.Config{
 		EnableMemory:     !ctx.GlobalBool(DisableMemoryFlag.Name),
 		DisableStack:     ctx.GlobalBool(DisableStackFlag.Name),
 		DisableStorage:   ctx.GlobalBool(DisableStorageFlag.Name),
@@ -66,18 +67,18 @@ func stateTestCmd(ctx *cli.Context) error {
 	}
 	var (
 		tracer   vm.EVMLogger
-		debugger *vm.StructLogger
+		debugger *logger.StructLogger
 	)
 	switch {
 	case ctx.GlobalBool(MachineFlag.Name):
-		tracer = vm.NewJSONLogger(config, os.Stderr)
+		tracer = logger.NewJSONLogger(config, os.Stderr)
 	case ctx.GlobalBool(DebugFlag.Name):
-		debugger = vm.NewStructLogger(config)
+		debugger = logger.NewStructLogger(config)
 		tracer = debugger
 	default:
-		debugger = vm.NewStructLogger(config)
+		debugger = logger.NewStructLogger(config)
 	}
 	// Load the test content from the input file
 	src, err := ioutil.ReadFile(ctx.Args().First())
@@ -118,7 +119,7 @@ func stateTestCmd(ctx *cli.Context) error {
 	if ctx.GlobalBool(DebugFlag.Name) {
 		if debugger != nil {
 			fmt.Fprintln(os.Stderr, "#### TRACE ####")
-			vm.WriteTrace(os.Stderr, debugger.StructLogs())
+			logger.WriteTrace(os.Stderr, debugger.StructLogs())
 		}
 	}
 }
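The three `cmd/evm` files above all track the same refactor: the struct and JSON tracers moved from `core/vm` into the new `eth/tracers/logger` package, while still implementing the `vm.EVMLogger` interface. A minimal sketch of the relocated API (assuming only the types visible in the hunks above):

```go
package main

import (
	"os"

	"github.com/ethereum/go-ethereum/core/vm"
	"github.com/ethereum/go-ethereum/eth/tracers/logger"
)

func main() {
	cfg := &logger.Config{EnableMemory: true}

	// Both loggers still satisfy vm.EVMLogger, so they plug into
	// vm.Config{Debug: true, Tracer: tracer} exactly as before the move.
	var tracer vm.EVMLogger = logger.NewJSONLogger(cfg, os.Stdout)
	_ = tracer

	structLogger := logger.NewStructLogger(cfg)
	logger.WriteTrace(os.Stderr, structLogger.StructLogs())
}
```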

View File

@@ -32,7 +32,6 @@ import (
 	"github.com/ethereum/go-ethereum/accounts/scwallet"
 	"github.com/ethereum/go-ethereum/accounts/usbwallet"
 	"github.com/ethereum/go-ethereum/cmd/utils"
-	"github.com/ethereum/go-ethereum/eth/catalyst"
 	"github.com/ethereum/go-ethereum/eth/ethconfig"
 	"github.com/ethereum/go-ethereum/internal/ethapi"
 	"github.com/ethereum/go-ethereum/log"
@@ -159,17 +158,10 @@ func makeFullNode(ctx *cli.Context) (*node.Node, ethapi.Backend) {
 	if ctx.GlobalIsSet(utils.OverrideArrowGlacierFlag.Name) {
 		cfg.Eth.OverrideArrowGlacier = new(big.Int).SetUint64(ctx.GlobalUint64(utils.OverrideArrowGlacierFlag.Name))
 	}
-	backend, eth := utils.RegisterEthService(stack, &cfg.Eth)
-
-	// Configure catalyst.
-	if ctx.GlobalBool(utils.CatalystFlag.Name) {
-		if eth == nil {
-			utils.Fatalf("Catalyst does not work in light client mode.")
-		}
-		if err := catalyst.Register(stack, eth); err != nil {
-			utils.Fatalf("%v", err)
-		}
-	}
+	if ctx.GlobalIsSet(utils.OverrideTerminalTotalDifficulty.Name) {
+		cfg.Eth.OverrideTerminalTotalDifficulty = new(big.Int).SetUint64(ctx.GlobalUint64(utils.OverrideTerminalTotalDifficulty.Name))
+	}
+	backend, _ := utils.RegisterEthService(stack, &cfg.Eth, ctx.GlobalBool(utils.CatalystFlag.Name))

 	// Configure GraphQL if requested
 	if ctx.GlobalIsSet(utils.GraphQLEnabledFlag.Name) {

View File

@@ -77,13 +77,13 @@ func localConsole(ctx *cli.Context) error {
 	// Create and start the node based on the CLI flags
 	prepare(ctx)
 	stack, backend := makeFullNode(ctx)
-	startNode(ctx, stack, backend)
+	startNode(ctx, stack, backend, true)
 	defer stack.Close()

-	// Attach to the newly started node and start the JavaScript console
+	// Attach to the newly started node and create the JavaScript console.
 	client, err := stack.Attach()
 	if err != nil {
-		utils.Fatalf("Failed to attach to the inproc geth: %v", err)
+		return fmt.Errorf("Failed to attach to the inproc geth: %v", err)
 	}
 	config := console.Config{
 		DataDir: utils.MakeDataDir(ctx),
@@ -91,29 +91,34 @@ func localConsole(ctx *cli.Context) error {
 		Client:  client,
 		Preload: utils.MakeConsolePreloads(ctx),
 	}
 	console, err := console.New(config)
 	if err != nil {
-		utils.Fatalf("Failed to start the JavaScript console: %v", err)
+		return fmt.Errorf("Failed to start the JavaScript console: %v", err)
 	}
 	defer console.Stop(false)

-	// If only a short execution was requested, evaluate and return
+	// If only a short execution was requested, evaluate and return.
 	if script := ctx.GlobalString(utils.ExecFlag.Name); script != "" {
 		console.Evaluate(script)
 		return nil
 	}

-	// Otherwise print the welcome screen and enter interactive mode
+	// Track node shutdown and stop the console when it goes down.
+	// This happens when SIGTERM is sent to the process.
+	go func() {
+		stack.Wait()
+		console.StopInteractive()
+	}()
+
+	// Print the welcome screen and enter interactive mode.
 	console.Welcome()
 	console.Interactive()
 	return nil
 }

 // remoteConsole will connect to a remote geth instance, attaching a JavaScript
 // console to it.
 func remoteConsole(ctx *cli.Context) error {
-	// Attach to a remotely running geth instance and start the JavaScript console
 	endpoint := ctx.Args().First()
 	if endpoint == "" {
 		path := node.DefaultDataDir()
@@ -150,7 +155,6 @@ func remoteConsole(ctx *cli.Context) error {
 		Client:  client,
 		Preload: utils.MakeConsolePreloads(ctx),
 	}
-
 	console, err := console.New(config)
 	if err != nil {
 		utils.Fatalf("Failed to start the JavaScript console: %v", err)
@@ -165,7 +169,6 @@ func remoteConsole(ctx *cli.Context) error {
 	// Otherwise print the welcome screen and enter interactive mode
 	console.Welcome()
 	console.Interactive()
-
 	return nil
 }
@@ -189,13 +192,13 @@ func dialRPC(endpoint string) (*rpc.Client, error) {
 func ephemeralConsole(ctx *cli.Context) error {
 	// Create and start the node based on the CLI flags
 	stack, backend := makeFullNode(ctx)
-	startNode(ctx, stack, backend)
+	startNode(ctx, stack, backend, false)
 	defer stack.Close()

 	// Attach to the newly started node and start the JavaScript console
 	client, err := stack.Attach()
 	if err != nil {
-		utils.Fatalf("Failed to attach to the inproc geth: %v", err)
+		return fmt.Errorf("Failed to attach to the inproc geth: %v", err)
 	}
 	config := console.Config{
 		DataDir: utils.MakeDataDir(ctx),
@@ -206,22 +209,24 @@ func ephemeralConsole(ctx *cli.Context) error {
 	console, err := console.New(config)
 	if err != nil {
-		utils.Fatalf("Failed to start the JavaScript console: %v", err)
+		return fmt.Errorf("Failed to start the JavaScript console: %v", err)
 	}
 	defer console.Stop(false)

-	// Evaluate each of the specified JavaScript files
-	for _, file := range ctx.Args() {
-		if err = console.Execute(file); err != nil {
-			utils.Fatalf("Failed to execute %s: %v", file, err)
-		}
-	}
+	// Interrupt the JS interpreter when node is stopped.
 	go func() {
 		stack.Wait()
 		console.Stop(false)
 	}()
-	console.Stop(true)
+
+	// Evaluate each of the specified JavaScript files.
+	for _, file := range ctx.Args() {
+		if err = console.Execute(file); err != nil {
+			return fmt.Errorf("Failed to execute %s: %v", file, err)
+		}
+	}
+
+	// The main script is now done, but keep running timers/callbacks.
+	console.Stop(true)
 	return nil
 }
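The console rework hinges on one ordering detail: `Interactive()` blocks on user input, so a goroutine watching `stack.Wait()` must unblock it when the node dies, otherwise SIGTERM would leave the REPL wedged. A self-contained sketch of that shutdown ordering with stand-in types:

```go
package main

import (
	"fmt"
	"time"
)

type console struct{ quit chan struct{} }

func (c *console) Interactive()     { <-c.quit } // blocks like a REPL read loop
func (c *console) StopInteractive() { close(c.quit) }

func main() {
	nodeDown := make(chan struct{})
	go func() { // stand-in for stack.Wait()
		time.Sleep(100 * time.Millisecond)
		close(nodeDown)
	}()

	c := &console{quit: make(chan struct{})}
	go func() {
		<-nodeDown
		c.StopInteractive() // unblocks Interactive below
	}()

	c.Interactive()
	fmt.Println("console unblocked after node shutdown")
}
```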

View File

@@ -75,6 +75,7 @@ var (
 		utils.USBFlag,
 		utils.SmartCardDaemonPathFlag,
 		utils.OverrideArrowGlacierFlag,
+		utils.OverrideTerminalTotalDifficulty,
 		utils.EthashCacheDirFlag,
 		utils.EthashCachesInMemoryFlag,
 		utils.EthashCachesOnDiskFlag,
@@ -276,6 +277,9 @@ func prepare(ctx *cli.Context) {
 	case ctx.GlobalIsSet(utils.RopstenFlag.Name):
 		log.Info("Starting Geth on Ropsten testnet...")

+	case ctx.GlobalIsSet(utils.SepoliaFlag.Name):
+		log.Info("Starting Geth on Sepolia testnet...")
+
 	case ctx.GlobalIsSet(utils.RinkebyFlag.Name):
 		log.Info("Starting Geth on Rinkeby testnet...")
@@ -291,7 +295,11 @@ func prepare(ctx *cli.Context) {
 	// If we're a full node on mainnet without --cache specified, bump default cache allowance
 	if ctx.GlobalString(utils.SyncModeFlag.Name) != "light" && !ctx.GlobalIsSet(utils.CacheFlag.Name) && !ctx.GlobalIsSet(utils.NetworkIdFlag.Name) {
 		// Make sure we're not on any supported preconfigured testnet either
-		if !ctx.GlobalIsSet(utils.RopstenFlag.Name) && !ctx.GlobalIsSet(utils.RinkebyFlag.Name) && !ctx.GlobalIsSet(utils.GoerliFlag.Name) && !ctx.GlobalIsSet(utils.DeveloperFlag.Name) {
+		if !ctx.GlobalIsSet(utils.RopstenFlag.Name) &&
+			!ctx.GlobalIsSet(utils.SepoliaFlag.Name) &&
+			!ctx.GlobalIsSet(utils.RinkebyFlag.Name) &&
+			!ctx.GlobalIsSet(utils.GoerliFlag.Name) &&
+			!ctx.GlobalIsSet(utils.DeveloperFlag.Name) {
 			// Nope, we're really on mainnet. Bump that cache up!
 			log.Info("Bumping default cache on mainnet", "provided", ctx.GlobalInt(utils.CacheFlag.Name), "updated", 4096)
 			ctx.GlobalSet(utils.CacheFlag.Name, strconv.Itoa(4096))
@@ -333,7 +341,7 @@ func geth(ctx *cli.Context) error {
 	defer stack.Close()

 	stack.RegisterAPIs(pluginGetAPIs(stack, wrapperBackend))
-	startNode(ctx, stack, backend)
+	startNode(ctx, stack, backend, false)
 	stack.Wait()
 	return nil
 }
@@ -341,11 +349,11 @@ func geth(ctx *cli.Context) error {
 // startNode boots up the system node and all registered protocols, after which
 // it unlocks any requested accounts, and starts the RPC/IPC interfaces and the
 // miner.
-func startNode(ctx *cli.Context, stack *node.Node, backend ethapi.Backend) {
+func startNode(ctx *cli.Context, stack *node.Node, backend ethapi.Backend, isConsole bool) {
 	debug.Memsize.Add("node", stack)

 	// Start up the node itself
-	utils.StartNode(ctx, stack)
+	utils.StartNode(ctx, stack, isConsole)

 	// Unlock any account specifically requested
 	unlockAccounts(ctx, stack)

View File

@@ -68,7 +68,7 @@ func Fatalf(format string, args ...interface{}) {
 	os.Exit(1)
 }

-func StartNode(ctx *cli.Context, stack *node.Node) {
+func StartNode(ctx *cli.Context, stack *node.Node, isConsole bool) {
 	if err := stack.Start(); err != nil {
 		Fatalf("Error starting protocol stack: %v", err)
 	}
@@ -87,7 +87,7 @@ func StartNode(ctx *cli.Context, stack *node.Node) {
 			go monitorFreeDiskSpace(sigc, stack.InstanceDir(), uint64(minFreeDiskSpace)*1024*1024)
 		}

-		<-sigc
+		shutdown := func() {
 			log.Info("Got interrupt, shutting down...")
 			go stack.Close()
 			for i := 10; i > 0; i-- {
@@ -98,6 +98,22 @@ func StartNode(ctx *cli.Context, stack *node.Node) {
 			}
 			debug.Exit() // ensure trace and CPU profile data is flushed.
 			debug.LoudPanic("boom")
+		}
+
+		if isConsole {
+			// In JS console mode, SIGINT is ignored because it's handled by the console.
+			// However, SIGTERM still shuts down the node.
+			for {
+				sig := <-sigc
+				if sig == syscall.SIGTERM {
+					shutdown()
+					return
+				}
+			}
+		} else {
+			<-sigc
+			shutdown()
+		}
 	}()
 }
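The extracted `shutdown` closure lets the two modes share the teardown path while diverging only in which signals trigger it. A runnable sketch of just the signal-selection logic (helper name hypothetical):

```go
package main

import (
	"fmt"
	"os"
	"os/signal"
	"syscall"
)

func waitForShutdown(isConsole bool) {
	sigc := make(chan os.Signal, 1)
	signal.Notify(sigc, syscall.SIGINT, syscall.SIGTERM)
	defer signal.Stop(sigc)

	shutdown := func() { fmt.Println("Got interrupt, shutting down...") }
	if isConsole {
		// SIGINT is left to the JS console's line editor; only SIGTERM
		// takes the node down.
		for sig := range sigc {
			if sig == syscall.SIGTERM {
				shutdown()
				return
			}
		}
	} else {
		<-sigc
		shutdown()
	}
}

func main() {
	waitForShutdown(false) // press Ctrl-C (or send SIGTERM) to exit
}
```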

View File

@@ -45,6 +45,7 @@ import (
 	"github.com/ethereum/go-ethereum/core/vm"
 	"github.com/ethereum/go-ethereum/crypto"
 	"github.com/ethereum/go-ethereum/eth"
+	"github.com/ethereum/go-ethereum/eth/catalyst"
 	"github.com/ethereum/go-ethereum/eth/downloader"
 	"github.com/ethereum/go-ethereum/eth/ethconfig"
 	"github.com/ethereum/go-ethereum/eth/gasprice"
@@ -214,7 +215,7 @@ var (
 	defaultSyncMode = ethconfig.Defaults.SyncMode
 	SyncModeFlag    = TextMarshalerFlag{
 		Name:  "syncmode",
-		Usage: `Blockchain sync mode ("fast", "full", "snap" or "light")`,
+		Usage: `Blockchain sync mode ("snap", "full" or "light")`,
 		Value: &defaultSyncMode,
 	}
 	GCModeFlag = cli.StringFlag{
@@ -248,6 +249,10 @@ var (
 		Name:  "override.arrowglacier",
 		Usage: "Manually specify Arrow Glacier fork-block, overriding the bundled setting",
 	}
+	OverrideTerminalTotalDifficulty = cli.Uint64Flag{
+		Name:  "override.terminaltotaldifficulty",
+		Usage: "Manually specify TerminalTotalDifficulty, overriding the bundled setting",
+	}
 	// Light server and client settings
 	LightServeFlag = cli.IntFlag{
 		Name: "light.serve",
@@ -1196,7 +1201,7 @@ func SetP2PConfig(ctx *cli.Context, cfg *p2p.Config) {
 		cfg.NetRestrict = list
 	}

-	if ctx.GlobalBool(DeveloperFlag.Name) || ctx.GlobalBool(CatalystFlag.Name) {
+	if ctx.GlobalBool(DeveloperFlag.Name) {
 		// --dev mode can't use p2p networking.
 		cfg.MaxPeers = 0
 		cfg.ListenAddr = ""
@@ -1705,13 +1710,18 @@ func SetDNSDiscoveryDefaults(cfg *ethconfig.Config, genesis common.Hash) {
 // RegisterEthService adds an Ethereum client to the stack.
 // The second return value is the full node instance, which may be nil if the
 // node is running as a light client.
-func RegisterEthService(stack *node.Node, cfg *ethconfig.Config) (ethapi.Backend, *eth.Ethereum) {
+func RegisterEthService(stack *node.Node, cfg *ethconfig.Config, isCatalyst bool) (ethapi.Backend, *eth.Ethereum) {
 	if cfg.SyncMode == downloader.LightSync {
 		backend, err := les.New(stack, cfg)
 		if err != nil {
 			Fatalf("Failed to register the Ethereum service: %v", err)
 		}
 		stack.RegisterAPIs(tracers.APIs(backend.ApiBackend))
+		if isCatalyst {
+			if err := catalyst.RegisterLight(stack, backend); err != nil {
+				Fatalf("Failed to register the catalyst service: %v", err)
+			}
+		}
 		return backend.ApiBackend, nil
 	}
 	backend, err := eth.New(stack, cfg)
@@ -1724,6 +1734,11 @@ func RegisterEthService(stack *node.Node, cfg *ethconfig.Config) (ethapi.Backend
 			Fatalf("Failed to create the LES server: %v", err)
 		}
 	}
+	if isCatalyst {
+		if err := catalyst.Register(stack, backend); err != nil {
+			Fatalf("Failed to register the catalyst service: %v", err)
+		}
+	}
 	stack.RegisterAPIs(tracers.APIs(backend.APIBackend))
 	return backend.APIBackend, backend
 }
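The trimmed usage string reflects fast sync's removal in this release. A small sketch probing the parser (it only prints whatever `UnmarshalText` reports, so it stays correct whether or not "fast" is still tolerated):

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/eth/downloader"
)

func main() {
	var mode downloader.SyncMode
	for _, s := range []string{"snap", "full", "light", "fast"} {
		err := mode.UnmarshalText([]byte(s))
		fmt.Printf("%-5s -> mode=%v err=%v\n", s, mode, err)
	}
}
```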

View File

@ -0,0 +1,376 @@
// Copyright 2021 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package beacon
import (
"errors"
"fmt"
"math/big"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/consensus"
"github.com/ethereum/go-ethereum/consensus/misc"
"github.com/ethereum/go-ethereum/core/state"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/params"
"github.com/ethereum/go-ethereum/rpc"
"github.com/ethereum/go-ethereum/trie"
)
// Proof-of-stake protocol constants.
var (
beaconDifficulty = common.Big0 // The default block difficulty in the beacon consensus
beaconNonce = types.EncodeNonce(0) // The default block nonce in the beacon consensus
)
// Various error messages to mark blocks invalid. These should be private to
// prevent engine specific errors from being referenced in the remainder of the
// codebase, inherently breaking if the engine is swapped out. Please put common
// error types into the consensus package.
var (
errTooManyUncles = errors.New("too many uncles")
errInvalidMixDigest = errors.New("invalid mix digest")
errInvalidNonce = errors.New("invalid nonce")
errInvalidUncleHash = errors.New("invalid uncle hash")
)
// Beacon is a consensus engine that combines the eth1 consensus and proof-of-stake
// algorithm. There is a special flag inside to decide whether to use legacy consensus
// rules or new rules. The transition rule is described in the eth1/2 merge spec.
// https://github.com/ethereum/EIPs/blob/master/EIPS/eip-3675.md
//
// The beacon here is a half-functional consensus engine with partial functions which
// is only used for the necessary consensus checks. The legacy consensus engine can be
// any engine that implements the consensus interface (except the beacon itself).
type Beacon struct {
ethone consensus.Engine // Original consensus engine used in eth1, e.g. ethash or clique
}
// New creates a consensus engine with the given embedded eth1 engine.
func New(ethone consensus.Engine) *Beacon {
if _, ok := ethone.(*Beacon); ok {
panic("nested consensus engine")
}
return &Beacon{ethone: ethone}
}
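Constructing a merge-aware engine is therefore a one-liner around any existing eth1 engine. A minimal sketch using the ethash test faker (any `consensus.Engine` other than `Beacon` itself is accepted):

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/consensus/beacon"
	"github.com/ethereum/go-ethereum/consensus/ethash"
)

func main() {
	// Pre-transition headers are checked with the embedded ethash rules,
	// post-transition headers with the beacon rules defined in this file.
	engine := beacon.New(ethash.NewFaker())
	defer engine.Close()
	fmt.Printf("inner engine: %T\n", engine.InnerEngine())
}
```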
// Author implements consensus.Engine, returning the verified author of the block.
func (beacon *Beacon) Author(header *types.Header) (common.Address, error) {
if !beacon.IsPoSHeader(header) {
return beacon.ethone.Author(header)
}
return header.Coinbase, nil
}
// VerifyHeader checks whether a header conforms to the consensus rules of the
// stock Ethereum consensus engine.
func (beacon *Beacon) VerifyHeader(chain consensus.ChainHeaderReader, header *types.Header, seal bool) error {
reached, _ := IsTTDReached(chain, header.ParentHash, header.Number.Uint64()-1)
if !reached {
return beacon.ethone.VerifyHeader(chain, header, seal)
}
// Short circuit if the parent is not known
parent := chain.GetHeader(header.ParentHash, header.Number.Uint64()-1)
if parent == nil {
return consensus.ErrUnknownAncestor
}
// Sanity checks passed, do a proper verification
return beacon.verifyHeader(chain, header, parent)
}
// VerifyHeaders is similar to VerifyHeader, but verifies a batch of headers
// concurrently. The method returns a quit channel to abort the operations and
// a results channel to retrieve the async verifications.
// VerifyHeaders expects the headers to be ordered and continuous.
func (beacon *Beacon) VerifyHeaders(chain consensus.ChainHeaderReader, headers []*types.Header, seals []bool) (chan<- struct{}, <-chan error) {
if !beacon.IsPoSHeader(headers[len(headers)-1]) {
return beacon.ethone.VerifyHeaders(chain, headers, seals)
}
var (
preHeaders []*types.Header
postHeaders []*types.Header
preSeals []bool
)
for index, header := range headers {
if beacon.IsPoSHeader(header) {
preHeaders = headers[:index]
postHeaders = headers[index:]
preSeals = seals[:index]
break
}
}
// All the headers have passed the transition point, use new rules.
if len(preHeaders) == 0 {
return beacon.verifyHeaders(chain, headers, nil)
}
// The transition point exists in the middle, separate the headers
// into two batches and apply different verification rules for them.
var (
abort = make(chan struct{})
results = make(chan error, len(headers))
)
go func() {
var (
old, new, out = 0, len(preHeaders), 0
errors = make([]error, len(headers))
done = make([]bool, len(headers))
oldDone, oldResult = beacon.ethone.VerifyHeaders(chain, preHeaders, preSeals)
newDone, newResult = beacon.verifyHeaders(chain, postHeaders, preHeaders[len(preHeaders)-1])
)
for {
for ; done[out]; out++ {
results <- errors[out]
if out == len(headers)-1 {
return
}
}
select {
case err := <-oldResult:
errors[old], done[old] = err, true
old++
case err := <-newResult:
errors[new], done[new] = err, true
new++
case <-abort:
close(oldDone)
close(newDone)
return
}
}
}()
return abort, results
}
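Callers drain the two channels exactly as with any other engine; the split into pre- and post-merge batches stays internal. A minimal consumer sketch (the `verifyAll` helper is illustrative, not part of this change):

```go
package verify

import (
	"fmt"

	"github.com/ethereum/go-ethereum/consensus"
	"github.com/ethereum/go-ethereum/core/types"
)

// verifyAll feeds an ordered, continuous batch of headers to the engine and
// waits for one result per header, stopping at the first failure.
func verifyAll(engine consensus.Engine, chain consensus.ChainHeaderReader, headers []*types.Header) error {
	seals := make([]bool, len(headers))
	for i := range seals {
		seals[i] = true // request full seal verification for every header
	}
	abort, results := engine.VerifyHeaders(chain, headers, seals)
	defer close(abort) // aborts any in-flight verification on early return
	for i := range headers {
		if err := <-results; err != nil {
			return fmt.Errorf("header %d: %w", i, err)
		}
	}
	return nil
}
```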
// VerifyUncles verifies that the given block's uncles conform to the consensus
// rules of the Ethereum consensus engine.
func (beacon *Beacon) VerifyUncles(chain consensus.ChainReader, block *types.Block) error {
if !beacon.IsPoSHeader(block.Header()) {
return beacon.ethone.VerifyUncles(chain, block)
}
// Verify that there is no uncle block. It's explicitly disabled in the beacon
if len(block.Uncles()) > 0 {
return errTooManyUncles
}
return nil
}
// verifyHeader checks whether a header conforms to the consensus rules of the
// stock Ethereum consensus engine. The differences between the beacon and classic are:
// (a) The following fields are expected to be constants:
//     - difficulty is expected to be 0
//     - nonce is expected to be 0
//     - unclehash is expected to be Hash(emptyHeader)
// (b) the timestamp is not verified anymore
// (c) the extradata is limited to 32 bytes
func (beacon *Beacon) verifyHeader(chain consensus.ChainHeaderReader, header, parent *types.Header) error {
// Ensure that the header's extra-data section is of a reasonable size
if len(header.Extra) > 32 {
return fmt.Errorf("extra-data longer than 32 bytes (%d)", len(header.Extra))
}
// Verify the seal parts. Ensure the mixhash, nonce and uncle hash are the expected value.
if header.MixDigest != (common.Hash{}) {
return errInvalidMixDigest
}
if header.Nonce != beaconNonce {
return errInvalidNonce
}
if header.UncleHash != types.EmptyUncleHash {
return errInvalidUncleHash
}
// Verify the block's difficulty to ensure it's the default constant
if beaconDifficulty.Cmp(header.Difficulty) != 0 {
return fmt.Errorf("invalid difficulty: have %v, want %v", header.Difficulty, beaconDifficulty)
}
// Verify that the gas limit is <= 2^63-1
if header.GasLimit > params.MaxGasLimit {
return fmt.Errorf("invalid gasLimit: have %v, max %v", header.GasLimit, params.MaxGasLimit)
}
// Verify that the gasUsed is <= gasLimit
if header.GasUsed > header.GasLimit {
return fmt.Errorf("invalid gasUsed: have %d, gasLimit %d", header.GasUsed, header.GasLimit)
}
// Verify that the block number is parent's +1
if diff := new(big.Int).Sub(header.Number, parent.Number); diff.Cmp(common.Big1) != 0 {
return consensus.ErrInvalidNumber
}
// Verify the header's EIP-1559 attributes.
return misc.VerifyEip1559Header(chain.Config(), parent, header)
}
// verifyHeaders is similar to verifyHeader, but verifies a batch of headers
// concurrently. The method returns a quit channel to abort the operations and
// a results channel to retrieve the async verifications. An additional parent
// header will be passed if the relevant header is not in the database yet.
func (beacon *Beacon) verifyHeaders(chain consensus.ChainHeaderReader, headers []*types.Header, ancestor *types.Header) (chan<- struct{}, <-chan error) {
var (
abort = make(chan struct{})
results = make(chan error, len(headers))
)
go func() {
for i, header := range headers {
var parent *types.Header
if i == 0 {
if ancestor != nil {
parent = ancestor
} else {
parent = chain.GetHeader(headers[0].ParentHash, headers[0].Number.Uint64()-1)
}
} else if headers[i-1].Hash() == headers[i].ParentHash {
parent = headers[i-1]
}
if parent == nil {
select {
case <-abort:
return
case results <- consensus.ErrUnknownAncestor:
}
continue
}
err := beacon.verifyHeader(chain, header, parent)
select {
case <-abort:
return
case results <- err:
}
}
}()
return abort, results
}
// Prepare implements consensus.Engine, initializing the difficulty field of a
// header to conform to the beacon protocol. The changes are done inline.
func (beacon *Beacon) Prepare(chain consensus.ChainHeaderReader, header *types.Header) error {
// Transition isn't triggered yet, use the legacy rules for preparation.
reached, err := IsTTDReached(chain, header.ParentHash, header.Number.Uint64()-1)
if err != nil {
return err
}
if !reached {
return beacon.ethone.Prepare(chain, header)
}
header.Difficulty = beaconDifficulty
return nil
}
// Finalize implements consensus.Engine, setting the final state on the header
func (beacon *Beacon) Finalize(chain consensus.ChainHeaderReader, header *types.Header, state *state.StateDB, txs []*types.Transaction, uncles []*types.Header) {
// Finalize differs from Prepare in that it can be used in both block generation
// and verification, so the consensus rules are determined by the header type.
if !beacon.IsPoSHeader(header) {
beacon.ethone.Finalize(chain, header, state, txs, uncles)
return
}
// The block reward is no longer handled here. It's done by the
// external consensus engine.
header.Root = state.IntermediateRoot(true)
}
// FinalizeAndAssemble implements consensus.Engine, setting the final state and
// assembling the block.
func (beacon *Beacon) FinalizeAndAssemble(chain consensus.ChainHeaderReader, header *types.Header, state *state.StateDB, txs []*types.Transaction, uncles []*types.Header, receipts []*types.Receipt) (*types.Block, error) {
// FinalizeAndAssemble differs from Prepare in that it can be used in both block
// generation and verification, so the consensus rules are determined by the header type.
if !beacon.IsPoSHeader(header) {
return beacon.ethone.FinalizeAndAssemble(chain, header, state, txs, uncles, receipts)
}
// Finalize and assemble the block
beacon.Finalize(chain, header, state, txs, uncles)
return types.NewBlock(header, txs, uncles, receipts, trie.NewStackTrie(nil)), nil
}
// Seal generates a new sealing request for the given input block and pushes
// the result into the given channel.
//
// Note, the method returns immediately and will send the result async. More
// than one result may also be returned depending on the consensus algorithm.
func (beacon *Beacon) Seal(chain consensus.ChainHeaderReader, block *types.Block, results chan<- *types.Block, stop <-chan struct{}) error {
if !beacon.IsPoSHeader(block.Header()) {
return beacon.ethone.Seal(chain, block, results, stop)
}
// The seal verification is done by the external consensus engine, so we return
// directly without pushing any block back. In other words, the beacon never
// sends a result on the `results` channel, so a receiver waiting on it would
// block forever.
return nil
}
// SealHash returns the hash of a block prior to it being sealed.
func (beacon *Beacon) SealHash(header *types.Header) common.Hash {
return beacon.ethone.SealHash(header)
}
// CalcDifficulty is the difficulty adjustment algorithm. It returns
// the difficulty that a new block should have when created at time
// given the parent block's time and difficulty.
func (beacon *Beacon) CalcDifficulty(chain consensus.ChainHeaderReader, time uint64, parent *types.Header) *big.Int {
// Transition isn't triggered yet, use the legacy rules for calculation
if reached, _ := IsTTDReached(chain, parent.Hash(), parent.Number.Uint64()); !reached {
return beacon.ethone.CalcDifficulty(chain, time, parent)
}
return beaconDifficulty
}
// APIs implements consensus.Engine, returning the user facing RPC APIs.
func (beacon *Beacon) APIs(chain consensus.ChainHeaderReader) []rpc.API {
return beacon.ethone.APIs(chain)
}
// Close shuts down the consensus engine.
func (beacon *Beacon) Close() error {
return beacon.ethone.Close()
}
// IsPoSHeader reports whether the header belongs to the PoS stage, judging by
// some special fields. This function is not suitable for APIs like Prepare or
// CalcDifficulty because the header difficulty is not set yet.
func (beacon *Beacon) IsPoSHeader(header *types.Header) bool {
if header.Difficulty == nil {
panic("IsPoSHeader called with invalid difficulty")
}
return header.Difficulty.Cmp(beaconDifficulty) == 0
}
// InnerEngine returns the embedded eth1 consensus engine.
func (beacon *Beacon) InnerEngine() consensus.Engine {
return beacon.ethone
}
// SetThreads updates the mining threads. The call is delegated
// to the eth1 engine if it's threaded.
func (beacon *Beacon) SetThreads(threads int) {
type threaded interface {
SetThreads(threads int)
}
if th, ok := beacon.ethone.(threaded); ok {
th.SetThreads(threads)
}
}
// IsTTDReached checks if the TerminalTotalDifficulty has been surpassed on the `parentHash` block.
// It depends on the parentHash already being stored in the database.
// If the parentHash is not stored in the database, an UnknownAncestor error is returned.
func IsTTDReached(chain consensus.ChainHeaderReader, parentHash common.Hash, number uint64) (bool, error) {
if chain.Config().TerminalTotalDifficulty == nil {
return false, nil
}
td := chain.GetTd(parentHash, number)
if td == nil {
return false, consensus.ErrUnknownAncestor
}
return td.Cmp(chain.Config().TerminalTotalDifficulty) >= 0, nil
}
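A minimal sketch of the check every header-facing method above performs first; the helper name `needsBeaconRules` is illustrative:

```go
package beacondemo

import (
	"github.com/ethereum/go-ethereum/consensus"
	"github.com/ethereum/go-ethereum/consensus/beacon"
	"github.com/ethereum/go-ethereum/core/types"
)

// needsBeaconRules reports whether header must be validated with the PoS
// rules, i.e. whether td(parent) >= TerminalTotalDifficulty. An
// ErrUnknownAncestor error means the parent's TD is not in the database yet.
func needsBeaconRules(chain consensus.ChainHeaderReader, header *types.Header) (bool, error) {
	return beacon.IsTTDReached(chain, header.ParentHash, header.Number.Uint64()-1)
}
```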

View File

@@ -196,7 +196,11 @@ func (sb *blockNumberOrHashOrRLP) UnmarshalJSON(data []byte) error {
if err := json.Unmarshal(data, &input); err != nil { if err := json.Unmarshal(data, &input); err != nil {
return err return err
} }
sb.RLP = hexutil.MustDecode(input) blob, err := hexutil.Decode(input)
if err != nil {
return err
}
sb.RLP = blob
return nil return nil
} }
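The switch from `hexutil.MustDecode` to `hexutil.Decode` matters because `MustDecode` panics on malformed input, which is not acceptable in an RPC unmarshalling path fed by untrusted clients. A quick illustration:

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common/hexutil"
)

func main() {
	// hexutil.MustDecode("0xzz") would panic; Decode reports the problem.
	if _, err := hexutil.Decode("0xzz"); err != nil {
		fmt.Println("rejected gracefully:", err)
	}
}
```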

View File

@@ -295,9 +295,8 @@ func (c *Clique) verifyHeader(chain consensus.ChainHeaderReader, header *types.H
} }
} }
// Verify that the gas limit is <= 2^63-1 // Verify that the gas limit is <= 2^63-1
cap := uint64(0x7fffffffffffffff) if header.GasLimit > params.MaxGasLimit {
if header.GasLimit > cap { return fmt.Errorf("invalid gasLimit: have %v, max %v", header.GasLimit, params.MaxGasLimit)
return fmt.Errorf("invalid gasLimit: have %v, max %v", header.GasLimit, cap)
} }
// If all checks passed, validate any special fields for hard forks // If all checks passed, validate any special fields for hard forks
if err := misc.VerifyForkHashes(chain.Config(), header, false); err != nil { if err := misc.VerifyForkHashes(chain.Config(), header, false); err != nil {
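The literal cap `0x7fffffffffffffff` is replaced by the named constant `params.MaxGasLimit` here and in the ethash engine below; the bound itself is unchanged. A quick check of the value:

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/params"
)

func main() {
	// Same bound as the old literal: 2^63 - 1.
	fmt.Printf("max gas limit: %#x\n", params.MaxGasLimit)
}
```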

View File

@@ -44,6 +44,9 @@ type ChainHeaderReader interface {
// GetHeaderByHash retrieves a block header from the database by its hash. // GetHeaderByHash retrieves a block header from the database by its hash.
GetHeaderByHash(hash common.Hash) *types.Header GetHeaderByHash(hash common.Hash) *types.Header
// GetTd retrieves the total difficulty from the database by hash and number.
GetTd(hash common.Hash, number uint64) *big.Int
} }
// ChainReader defines a small collection of methods needed to access the local // ChainReader defines a small collection of methods needed to access the local

View File

@@ -281,9 +281,8 @@ func (ethash *Ethash) verifyHeader(chain consensus.ChainHeaderReader, header, pa
return fmt.Errorf("invalid difficulty: have %v, want %v", header.Difficulty, expected) return fmt.Errorf("invalid difficulty: have %v, want %v", header.Difficulty, expected)
} }
// Verify that the gas limit is <= 2^63-1 // Verify that the gas limit is <= 2^63-1
cap := uint64(0x7fffffffffffffff) if header.GasLimit > params.MaxGasLimit {
if header.GasLimit > cap { return fmt.Errorf("invalid gasLimit: have %v, max %v", header.GasLimit, params.MaxGasLimit)
return fmt.Errorf("invalid gasLimit: have %v, max %v", header.GasLimit, cap)
} }
// Verify that the gasUsed is <= gasLimit // Verify that the gasUsed is <= gasLimit
if header.GasUsed > header.GasLimit { if header.GasUsed > header.GasLimit {

consensus/merger.go Normal file
View File

@@ -0,0 +1,110 @@
// Copyright 2021 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package consensus
import (
"fmt"
"sync"
"github.com/ethereum/go-ethereum/core/rawdb"
"github.com/ethereum/go-ethereum/ethdb"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/rlp"
)
// transitionStatus describes the status of the eth1/2 transition. The switch
// between modes is a one-way action, triggered by the corresponding
// consensus-layer message.
type transitionStatus struct {
LeftPoW bool // The flag is set when the first NewHead message is received
EnteredPoS bool // The flag is set when the first FinalisedBlock message is received
}
// Merger is an internal helper structure used to track the eth1/2 transition status.
// It's a common structure that can be used in both the full node and the light client.
type Merger struct {
db ethdb.KeyValueStore
status transitionStatus
mu sync.RWMutex
}
// NewMerger creates a new Merger which stores its transition status in the provided db.
func NewMerger(db ethdb.KeyValueStore) *Merger {
var status transitionStatus
blob := rawdb.ReadTransitionStatus(db)
if len(blob) != 0 {
if err := rlp.DecodeBytes(blob, &status); err != nil {
log.Crit("Failed to decode the transition status", "err", err)
}
}
return &Merger{
db: db,
status: status,
}
}
// ReachTTD is called whenever the first NewHead message is received
// from the consensus layer.
func (m *Merger) ReachTTD() {
m.mu.Lock()
defer m.mu.Unlock()
if m.status.LeftPoW {
return
}
m.status = transitionStatus{LeftPoW: true}
blob, err := rlp.EncodeToBytes(m.status)
if err != nil {
panic(fmt.Sprintf("Failed to encode the transition status: %v", err))
}
rawdb.WriteTransitionStatus(m.db, blob)
log.Info("Left PoW stage")
}
// FinalizePoS is called whenever the first FinalisedBlock message is received
// from the consensus layer.
func (m *Merger) FinalizePoS() {
m.mu.Lock()
defer m.mu.Unlock()
if m.status.EnteredPoS {
return
}
m.status = transitionStatus{LeftPoW: true, EnteredPoS: true}
blob, err := rlp.EncodeToBytes(m.status)
if err != nil {
panic(fmt.Sprintf("Failed to encode the transition status: %v", err))
}
rawdb.WriteTransitionStatus(m.db, blob)
log.Info("Entered PoS stage")
}
// TDDReached reports whether the chain has left the PoW stage.
func (m *Merger) TDDReached() bool {
m.mu.RLock()
defer m.mu.RUnlock()
return m.status.LeftPoW
}
// PoSFinalized reports whether the chain has entered the PoS stage.
func (m *Merger) PoSFinalized() bool {
m.mu.RLock()
defer m.mu.RUnlock()
return m.status.EnteredPoS
}
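Because the status is persisted via `rawdb.WriteTransitionStatus`, a restarted node remembers that the transition has already happened. A minimal sketch against an in-memory database, mirroring how the header verification test later in this change constructs its `Merger`:

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/consensus"
	"github.com/ethereum/go-ethereum/core/rawdb"
)

func main() {
	merger := consensus.NewMerger(rawdb.NewMemoryDatabase())
	fmt.Println(merger.TDDReached(), merger.PoSFinalized()) // false false

	merger.ReachTTD()    // first NewHead from the consensus layer
	merger.FinalizePoS() // first FinalisedBlock from the consensus layer
	fmt.Println(merger.TDDReached(), merger.PoSFinalized()) // true true
}
```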

View File

@@ -17,6 +17,7 @@
package console package console
import ( import (
"errors"
"fmt" "fmt"
"io" "io"
"io/ioutil" "io/ioutil"
@@ -26,6 +27,7 @@ import (
"regexp" "regexp"
"sort" "sort"
"strings" "strings"
"sync"
"syscall" "syscall"
"github.com/dop251/goja" "github.com/dop251/goja"
@@ -74,6 +76,13 @@ type Console struct {
histPath string // Absolute path to the console scrollback history histPath string // Absolute path to the console scrollback history
history []string // Scroll history maintained by the console history []string // Scroll history maintained by the console
printer io.Writer // Output writer to serialize any display strings to printer io.Writer // Output writer to serialize any display strings to
interactiveStopped chan struct{} // Closed when the interactive session should end
stopInteractiveCh chan struct{} // Signals the interrupt handler to stop Interactive
signalReceived chan struct{} // Buffered flag marking a SIGINT caught during evaluation
stopped chan struct{} // Closed by Stop to terminate the interrupt handler
wg sync.WaitGroup // Tracks the interrupt handler goroutine
stopOnce sync.Once // Ensures the shutdown in Stop runs only once
} }
// New initializes a JavaScript interpreted runtime environment and sets defaults // New initializes a JavaScript interpreted runtime environment and sets defaults
@@ -98,6 +107,10 @@ func New(config Config) (*Console, error) {
prompter: config.Prompter, prompter: config.Prompter,
printer: config.Printer, printer: config.Printer,
histPath: filepath.Join(config.DataDir, HistoryFile), histPath: filepath.Join(config.DataDir, HistoryFile),
interactiveStopped: make(chan struct{}),
stopInteractiveCh: make(chan struct{}),
signalReceived: make(chan struct{}, 1),
stopped: make(chan struct{}),
} }
if err := os.MkdirAll(config.DataDir, 0700); err != nil { if err := os.MkdirAll(config.DataDir, 0700); err != nil {
return nil, err return nil, err
@@ -105,6 +118,10 @@ func New(config Config) (*Console, error) {
if err := console.init(config.Preload); err != nil { if err := console.init(config.Preload); err != nil {
return nil, err return nil, err
} }
console.wg.Add(1)
go console.interruptHandler()
return console, nil return console, nil
} }
@@ -337,9 +354,63 @@ func (c *Console) Evaluate(statement string) {
} }
}() }()
c.jsre.Evaluate(statement, c.printer) c.jsre.Evaluate(statement, c.printer)
// Avoid exiting Interactive when jsre was interrupted by SIGINT.
c.clearSignalReceived()
} }
// Interactive starts an interactive user session, where input is prompted from // interruptHandler runs in its own goroutine and waits for signals.
// When a signal is received, it interrupts the JS interpreter.
func (c *Console) interruptHandler() {
defer c.wg.Done()
// During Interactive, liner inhibits the signal while it is prompting for
// input. However, the signal will be received while evaluating JS.
//
// On unsupported terminals, SIGINT can also happen while prompting.
// Unfortunately, it is not possible to abort the prompt in this case and
// the c.readLines goroutine leaks.
sig := make(chan os.Signal, 1)
signal.Notify(sig, syscall.SIGINT)
defer signal.Stop(sig)
for {
select {
case <-sig:
c.setSignalReceived()
c.jsre.Interrupt(errors.New("interrupted"))
case <-c.stopInteractiveCh:
close(c.interactiveStopped)
c.jsre.Interrupt(errors.New("interrupted"))
case <-c.stopped:
return
}
}
}
func (c *Console) setSignalReceived() {
select {
case c.signalReceived <- struct{}{}:
default:
}
}
func (c *Console) clearSignalReceived() {
select {
case <-c.signalReceived:
default:
}
}
// StopInteractive causes Interactive to return as soon as possible.
func (c *Console) StopInteractive() {
select {
case c.stopInteractiveCh <- struct{}{}:
case <-c.stopped:
}
}
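A minimal sketch of the intended use, tying the console's lifetime to the node's; the wrapper `runConsole` is illustrative:

```go
package consoledemo

import (
	"github.com/ethereum/go-ethereum/console"
	"github.com/ethereum/go-ethereum/node"
)

// runConsole blocks in the interactive loop and returns once the user quits
// or the node shuts down underneath it.
func runConsole(stack *node.Node, c *console.Console) {
	go func() {
		stack.Wait()        // returns when the node has been stopped
		c.StopInteractive() // unblocks Interactive with "node is down"
	}()
	c.Interactive()
}
```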
// Interactive starts an interactive user session, where input is prompted from
// the configured user prompter. // the configured user prompter.
func (c *Console) Interactive() { func (c *Console) Interactive() {
var ( var (
@@ -349,15 +420,11 @@ func (c *Console) Interactive() {
inputLine = make(chan string, 1) // receives user input inputLine = make(chan string, 1) // receives user input
inputErr = make(chan error, 1) // receives liner errors inputErr = make(chan error, 1) // receives liner errors
requestLine = make(chan string) // requests a line of input requestLine = make(chan string) // requests a line of input
interrupt = make(chan os.Signal, 1)
) )
// Monitor Ctrl-C. While liner does turn on the relevant terminal mode bits to avoid defer func() {
// the signal, a signal can still be received for unsupported terminals. Unfortunately c.writeHistory()
// there is no way to cancel the line reader when this happens. The readLines }()
// goroutine will be leaked in this case.
signal.Notify(interrupt, syscall.SIGINT, syscall.SIGTERM)
defer signal.Stop(interrupt)
// The line reader runs in a separate goroutine. // The line reader runs in a separate goroutine.
go c.readLines(inputLine, inputErr, requestLine) go c.readLines(inputLine, inputErr, requestLine)
@@ -368,7 +435,14 @@ func (c *Console) Interactive() {
requestLine <- prompt requestLine <- prompt
select { select {
case <-interrupt: case <-c.interactiveStopped:
fmt.Fprintln(c.printer, "node is down, exiting console")
return
case <-c.signalReceived:
// SIGINT received while prompting for input -> unsupported terminal.
// I'm not sure if the best choice would be to leave the console running here.
// Bash keeps running in this case. node.js does not.
fmt.Fprintln(c.printer, "caught interrupt, exiting") fmt.Fprintln(c.printer, "caught interrupt, exiting")
return return
@@ -476,12 +550,19 @@ func (c *Console) Execute(path string) error {
// Stop cleans up the console and terminates the runtime environment. // Stop cleans up the console and terminates the runtime environment.
func (c *Console) Stop(graceful bool) error { func (c *Console) Stop(graceful bool) error {
if err := ioutil.WriteFile(c.histPath, []byte(strings.Join(c.history, "\n")), 0600); err != nil { c.stopOnce.Do(func() {
return err // Stop the interrupt handler.
} close(c.stopped)
if err := os.Chmod(c.histPath, 0600); err != nil { // Force 0600, even if it was different previously c.wg.Wait()
return err })
}
c.jsre.Stop(graceful) c.jsre.Stop(graceful)
return nil return nil
} }
func (c *Console) writeHistory() error {
if err := ioutil.WriteFile(c.histPath, []byte(strings.Join(c.history, "\n")), 0600); err != nil {
return err
}
return os.Chmod(c.histPath, 0600) // Force 0600, even if it was different previously
}

View File

@@ -17,14 +17,21 @@
package core package core
import ( import (
"encoding/json"
"math/big"
"runtime" "runtime"
"testing" "testing"
"time" "time"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/consensus"
"github.com/ethereum/go-ethereum/consensus/beacon"
"github.com/ethereum/go-ethereum/consensus/clique"
"github.com/ethereum/go-ethereum/consensus/ethash" "github.com/ethereum/go-ethereum/consensus/ethash"
"github.com/ethereum/go-ethereum/core/rawdb" "github.com/ethereum/go-ethereum/core/rawdb"
"github.com/ethereum/go-ethereum/core/types" "github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/core/vm" "github.com/ethereum/go-ethereum/core/vm"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/params" "github.com/ethereum/go-ethereum/params"
) )
@@ -76,6 +83,172 @@ func TestHeaderVerification(t *testing.T) {
} }
} }
func TestHeaderVerificationForMergingClique(t *testing.T) { testHeaderVerificationForMerging(t, true) }
func TestHeaderVerificationForMergingEthash(t *testing.T) { testHeaderVerificationForMerging(t, false) }
// Tests the verification for eth1/2 merging, including pre-merge and post-merge
func testHeaderVerificationForMerging(t *testing.T, isClique bool) {
var (
testdb = rawdb.NewMemoryDatabase()
preBlocks []*types.Block
postBlocks []*types.Block
runEngine consensus.Engine
chainConfig *params.ChainConfig
merger = consensus.NewMerger(rawdb.NewMemoryDatabase())
)
if isClique {
var (
key, _ = crypto.HexToECDSA("b71c71a67e1177ad4e901695e1b4b9ee17ae16c6668d313eac2f96dbcda3f291")
addr = crypto.PubkeyToAddress(key.PublicKey)
engine = clique.New(params.AllCliqueProtocolChanges.Clique, testdb)
)
genspec := &Genesis{
ExtraData: make([]byte, 32+common.AddressLength+crypto.SignatureLength),
Alloc: map[common.Address]GenesisAccount{
addr: {Balance: big.NewInt(1)},
},
BaseFee: big.NewInt(params.InitialBaseFee),
}
copy(genspec.ExtraData[32:], addr[:])
genesis := genspec.MustCommit(testdb)
genEngine := beacon.New(engine)
preBlocks, _ = GenerateChain(params.AllCliqueProtocolChanges, genesis, genEngine, testdb, 8, nil)
td := 0
for i, block := range preBlocks {
header := block.Header()
if i > 0 {
header.ParentHash = preBlocks[i-1].Hash()
}
header.Extra = make([]byte, 32+crypto.SignatureLength)
header.Difficulty = big.NewInt(2)
sig, _ := crypto.Sign(genEngine.SealHash(header).Bytes(), key)
copy(header.Extra[len(header.Extra)-crypto.SignatureLength:], sig)
preBlocks[i] = block.WithSeal(header)
// calculate td
td += int(block.Difficulty().Uint64())
}
config := *params.AllCliqueProtocolChanges
config.TerminalTotalDifficulty = big.NewInt(int64(td))
postBlocks, _ = GenerateChain(&config, preBlocks[len(preBlocks)-1], genEngine, testdb, 8, nil)
chainConfig = &config
runEngine = beacon.New(engine)
} else {
gspec := &Genesis{Config: params.TestChainConfig}
genesis := gspec.MustCommit(testdb)
genEngine := beacon.New(ethash.NewFaker())
preBlocks, _ = GenerateChain(params.TestChainConfig, genesis, genEngine, testdb, 8, nil)
td := 0
for _, block := range preBlocks {
// calculate td
td += int(block.Difficulty().Uint64())
}
config := *params.TestChainConfig
config.TerminalTotalDifficulty = big.NewInt(int64(td))
postBlocks, _ = GenerateChain(params.TestChainConfig, preBlocks[len(preBlocks)-1], genEngine, testdb, 8, nil)
chainConfig = &config
runEngine = beacon.New(ethash.NewFaker())
}
preHeaders := make([]*types.Header, len(preBlocks))
for i, block := range preBlocks {
preHeaders[i] = block.Header()
blob, _ := json.Marshal(block.Header())
t.Logf("Log header before the merging %d: %v", block.NumberU64(), string(blob))
}
postHeaders := make([]*types.Header, len(postBlocks))
for i, block := range postBlocks {
postHeaders[i] = block.Header()
blob, _ := json.Marshal(block.Header())
t.Logf("Log header after the merging %d: %v", block.NumberU64(), string(blob))
}
// Run the header checker for blocks one-by-one, checking for both valid and invalid nonces
chain, _ := NewBlockChain(testdb, nil, chainConfig, runEngine, vm.Config{}, nil, nil)
defer chain.Stop()
// Verify the blocks before the merging
for i := 0; i < len(preBlocks); i++ {
_, results := runEngine.VerifyHeaders(chain, []*types.Header{preHeaders[i]}, []bool{true})
// Wait for the verification result
select {
case result := <-results:
if result != nil {
t.Errorf("test %d: verification failed %v", i, result)
}
case <-time.After(time.Second):
t.Fatalf("test %d: verification timeout", i)
}
// Make sure no more data is returned
select {
case result := <-results:
t.Fatalf("test %d: unexpected result returned: %v", i, result)
case <-time.After(25 * time.Millisecond):
}
chain.InsertChain(preBlocks[i : i+1])
}
// Make the transition
merger.ReachTTD()
merger.FinalizePoS()
// Verify the blocks after the merging
for i := 0; i < len(postBlocks); i++ {
_, results := runEngine.VerifyHeaders(chain, []*types.Header{postHeaders[i]}, []bool{true})
// Wait for the verification result
select {
case result := <-results:
if result != nil {
t.Errorf("test %d: verification failed %v", i, result)
}
case <-time.After(time.Second):
t.Fatalf("test %d: verification timeout", i)
}
// Make sure no more data is returned
select {
case result := <-results:
t.Fatalf("test %d: unexpected result returned: %v", i, result)
case <-time.After(25 * time.Millisecond):
}
chain.InsertBlockWithoutSetHead(postBlocks[i])
}
// Verify the blocks with pre-merge blocks and post-merge blocks
var (
headers []*types.Header
seals []bool
)
for _, block := range preBlocks {
headers = append(headers, block.Header())
seals = append(seals, true)
}
for _, block := range postBlocks {
headers = append(headers, block.Header())
seals = append(seals, true)
}
_, results := runEngine.VerifyHeaders(chain, headers, seals)
for i := 0; i < len(headers); i++ {
select {
case result := <-results:
if result != nil {
t.Errorf("test %d: verification failed %v", i, result)
}
case <-time.After(time.Second):
t.Fatalf("test %d: verification timeout", i)
}
}
// Make sure no more data is returned
select {
case result := <-results:
t.Fatalf("unexpected result returned: %v", result)
case <-time.After(25 * time.Millisecond):
}
}
// Tests that concurrent header verification works, for both good and bad blocks. // Tests that concurrent header verification works, for both good and bad blocks.
func TestHeaderConcurrentVerification2(t *testing.T) { testHeaderConcurrentVerification(t, 2) } func TestHeaderConcurrentVerification2(t *testing.T) { testHeaderConcurrentVerification(t, 2) }
func TestHeaderConcurrentVerification8(t *testing.T) { testHeaderConcurrentVerification(t, 8) } func TestHeaderConcurrentVerification8(t *testing.T) { testHeaderConcurrentVerification(t, 8) }

View File

@@ -22,7 +22,6 @@ import (
"fmt" "fmt"
"io" "io"
"math/big" "math/big"
mrand "math/rand"
"sort" "sort"
"sync" "sync"
"sync/atomic" "sync/atomic"
@@ -208,15 +207,14 @@ type BlockChain struct {
validator Validator // Block and state validator interface validator Validator // Block and state validator interface
prefetcher Prefetcher prefetcher Prefetcher
processor Processor // Block transaction processor interface processor Processor // Block transaction processor interface
forker *ForkChoice
vmConfig vm.Config vmConfig vm.Config
shouldPreserve func(*types.Block) bool // Function used to determine whether should preserve the given block.
} }
// NewBlockChain returns a fully initialised block chain using information // NewBlockChain returns a fully initialised block chain using information
// available in the database. It initialises the default Ethereum Validator and // available in the database. It initialises the default Ethereum Validator
// Processor. // and Processor.
func NewBlockChain(db ethdb.Database, cacheConfig *CacheConfig, chainConfig *params.ChainConfig, engine consensus.Engine, vmConfig vm.Config, shouldPreserve func(block *types.Block) bool, txLookupLimit *uint64) (*BlockChain, error) { func NewBlockChain(db ethdb.Database, cacheConfig *CacheConfig, chainConfig *params.ChainConfig, engine consensus.Engine, vmConfig vm.Config, shouldPreserve func(header *types.Header) bool, txLookupLimit *uint64) (*BlockChain, error) {
if cacheConfig == nil { if cacheConfig == nil {
cacheConfig = defaultCacheConfig cacheConfig = defaultCacheConfig
} }
@@ -239,7 +237,6 @@ func NewBlockChain(db ethdb.Database, cacheConfig *CacheConfig, chainConfig *par
}), }),
quit: make(chan struct{}), quit: make(chan struct{}),
chainmu: syncx.NewClosableMutex(), chainmu: syncx.NewClosableMutex(),
shouldPreserve: shouldPreserve,
bodyCache: bodyCache, bodyCache: bodyCache,
bodyRLPCache: bodyRLPCache, bodyRLPCache: bodyRLPCache,
receiptsCache: receiptsCache, receiptsCache: receiptsCache,
@@ -249,6 +246,7 @@ func NewBlockChain(db ethdb.Database, cacheConfig *CacheConfig, chainConfig *par
engine: engine, engine: engine,
vmConfig: vmConfig, vmConfig: vmConfig,
} }
bc.forker = NewForkChoice(bc, shouldPreserve)
bc.validator = NewBlockValidator(chainConfig, bc, engine) bc.validator = NewBlockValidator(chainConfig, bc, engine)
bc.prefetcher = newStatePrefetcher(chainConfig, bc, engine) bc.prefetcher = newStatePrefetcher(chainConfig, bc, engine)
bc.processor = NewStateProcessor(chainConfig, bc, engine) bc.processor = NewStateProcessor(chainConfig, bc, engine)
@@ -382,7 +380,7 @@ func NewBlockChain(db ethdb.Database, cacheConfig *CacheConfig, chainConfig *par
// Start future block processor. // Start future block processor.
bc.wg.Add(1) bc.wg.Add(1)
go bc.futureBlocksLoop() go bc.updateFutureBlocks()
// Start tx indexer/unindexer. // Start tx indexer/unindexer.
if txLookupLimit != nil { if txLookupLimit != nil {
@@ -631,9 +629,9 @@ func (bc *BlockChain) setHeadBeyondRoot(head uint64, root common.Hash, repair bo
return rootNumber, bc.loadLastState() return rootNumber, bc.loadLastState()
} }
// FastSyncCommitHead sets the current head block to the one defined by the hash // SnapSyncCommitHead sets the current head block to the one defined by the hash
// regardless of what the chain contents were prior. // regardless of what the chain contents were prior.
func (bc *BlockChain) FastSyncCommitHead(hash common.Hash) error { func (bc *BlockChain) SnapSyncCommitHead(hash common.Hash) error {
// Make sure that both the block as well at its state trie exists // Make sure that both the block as well at its state trie exists
block := bc.GetBlockByHash(hash) block := bc.GetBlockByHash(hash)
if block == nil { if block == nil {
@@ -738,30 +736,24 @@ func (bc *BlockChain) ExportN(w io.Writer, first uint64, last uint64) error {
// //
// Note, this function assumes that the `mu` mutex is held! // Note, this function assumes that the `mu` mutex is held!
func (bc *BlockChain) writeHeadBlock(block *types.Block) { func (bc *BlockChain) writeHeadBlock(block *types.Block) {
// If the block is on a side chain or an unknown one, force other heads onto it too
updateHeads := rawdb.ReadCanonicalHash(bc.db, block.NumberU64()) != block.Hash()
// Add the block to the canonical chain number scheme and mark as the head // Add the block to the canonical chain number scheme and mark as the head
batch := bc.db.NewBatch() batch := bc.db.NewBatch()
rawdb.WriteHeadHeaderHash(batch, block.Hash())
rawdb.WriteHeadFastBlockHash(batch, block.Hash())
rawdb.WriteCanonicalHash(batch, block.Hash(), block.NumberU64()) rawdb.WriteCanonicalHash(batch, block.Hash(), block.NumberU64())
rawdb.WriteTxLookupEntriesByBlock(batch, block) rawdb.WriteTxLookupEntriesByBlock(batch, block)
rawdb.WriteHeadBlockHash(batch, block.Hash()) rawdb.WriteHeadBlockHash(batch, block.Hash())
// If the block is better than our head or is on a different chain, force update heads
if updateHeads {
rawdb.WriteHeadHeaderHash(batch, block.Hash())
rawdb.WriteHeadFastBlockHash(batch, block.Hash())
}
// Flush the whole batch into the disk, exit the node if failed // Flush the whole batch into the disk, exit the node if failed
if err := batch.Write(); err != nil { if err := batch.Write(); err != nil {
log.Crit("Failed to update chain indexes and markers", "err", err) log.Crit("Failed to update chain indexes and markers", "err", err)
} }
// Update all in-memory chain markers in the last step // Update all in-memory chain markers in the last step
if updateHeads {
bc.hc.SetCurrentHeader(block.Header()) bc.hc.SetCurrentHeader(block.Header())
bc.currentFastBlock.Store(block) bc.currentFastBlock.Store(block)
headFastBlockGauge.Update(int64(block.NumberU64())) headFastBlockGauge.Update(int64(block.NumberU64()))
}
bc.currentBlock.Store(block) bc.currentBlock.Store(block)
headBlockGauge.Update(int64(block.NumberU64())) headBlockGauge.Update(int64(block.NumberU64()))
} }
@@ -877,12 +869,6 @@ const (
SideStatTy SideStatTy
) )
// numberHash is just a container for a number and a hash, to represent a block
type numberHash struct {
number uint64
hash common.Hash
}
// InsertReceiptChain attempts to complete an already existing header chain with // InsertReceiptChain attempts to complete an already existing header chain with
// transaction and receipt data. // transaction and receipt data.
func (bc *BlockChain) InsertReceiptChain(blockChain types.Blocks, receiptChain []types.Receipts, ancientLimit uint64) (int, error) { func (bc *BlockChain) InsertReceiptChain(blockChain types.Blocks, receiptChain []types.Receipts, ancientLimit uint64) (int, error) {
@@ -928,14 +914,18 @@ func (bc *BlockChain) InsertReceiptChain(blockChain types.Blocks, receiptChain [
// Rewind may have occurred, skip in that case. // Rewind may have occurred, skip in that case.
if bc.CurrentHeader().Number.Cmp(head.Number()) >= 0 { if bc.CurrentHeader().Number.Cmp(head.Number()) >= 0 {
currentFastBlock, td := bc.CurrentFastBlock(), bc.GetTd(head.Hash(), head.NumberU64()) reorg, err := bc.forker.ReorgNeeded(bc.CurrentFastBlock().Header(), head.Header())
if bc.GetTd(currentFastBlock.Hash(), currentFastBlock.NumberU64()).Cmp(td) < 0 { if err != nil {
log.Warn("Reorg failed", "err", err)
return false
} else if !reorg {
return false
}
rawdb.WriteHeadFastBlockHash(bc.db, head.Hash()) rawdb.WriteHeadFastBlockHash(bc.db, head.Hash())
bc.currentFastBlock.Store(head) bc.currentFastBlock.Store(head)
headFastBlockGauge.Update(int64(head.NumberU64())) headFastBlockGauge.Update(int64(head.NumberU64()))
return true return true
} }
}
return false return false
} }
@@ -1181,30 +1171,15 @@ func (bc *BlockChain) writeKnownBlock(block *types.Block) error {
return nil return nil
} }
// WriteBlockWithState writes the block and all associated state to the database. // writeBlockWithState writes block, metadata and corresponding state data to the
func (bc *BlockChain) WriteBlockWithState(block *types.Block, receipts []*types.Receipt, logs []*types.Log, state *state.StateDB, emitHeadEvent bool) (status WriteStatus, err error) { // database.
if !bc.chainmu.TryLock() { func (bc *BlockChain) writeBlockWithState(block *types.Block, receipts []*types.Receipt, logs []*types.Log, state *state.StateDB) error {
return NonStatTy, errInsertionInterrupted
}
defer bc.chainmu.Unlock()
return bc.writeBlockWithState(block, receipts, logs, state, emitHeadEvent)
}
// writeBlockWithState writes the block and all associated state to the database,
// but it expects the chain mutex to be held.
func (bc *BlockChain) writeBlockWithState(block *types.Block, receipts []*types.Receipt, logs []*types.Log, state *state.StateDB, emitHeadEvent bool) (status WriteStatus, err error) {
if bc.insertStopped() {
return NonStatTy, errInsertionInterrupted
}
// Calculate the total difficulty of the block // Calculate the total difficulty of the block
ptd := bc.GetTd(block.ParentHash(), block.NumberU64()-1) ptd := bc.GetTd(block.ParentHash(), block.NumberU64()-1)
if ptd == nil { if ptd == nil {
return NonStatTy, consensus.ErrUnknownAncestor return consensus.ErrUnknownAncestor
} }
// Make sure no inconsistent state is leaked during insertion // Make sure no inconsistent state is leaked during insertion
currentBlock := bc.CurrentBlock()
localTd := bc.GetTd(currentBlock.Hash(), currentBlock.NumberU64())
externTd := new(big.Int).Add(block.Difficulty(), ptd) externTd := new(big.Int).Add(block.Difficulty(), ptd)
// Irrelevant of the canonical status, write the block itself to the database. // Irrelevant of the canonical status, write the block itself to the database.
@@ -1222,15 +1197,13 @@ func (bc *BlockChain) writeBlockWithState(block *types.Block, receipts []*types.
// Commit all cached state changes into underlying memory database. // Commit all cached state changes into underlying memory database.
root, err := state.Commit(bc.chainConfig.IsEIP158(block.Number())) root, err := state.Commit(bc.chainConfig.IsEIP158(block.Number()))
if err != nil { if err != nil {
return NonStatTy, err return err
} }
triedb := bc.stateCache.TrieDB() triedb := bc.stateCache.TrieDB()
// If we're running an archive node, always flush // If we're running an archive node, always flush
if bc.cacheConfig.TrieDirtyDisabled { if bc.cacheConfig.TrieDirtyDisabled {
if err := triedb.Commit(root, false, nil); err != nil { return triedb.Commit(root, false, nil)
return NonStatTy, err
}
} else { } else {
// Full but not archive node, do proper garbage collection // Full but not archive node, do proper garbage collection
triedb.Reference(root, common.Hash{}) // metadata reference to keep trie alive triedb.Reference(root, common.Hash{}) // metadata reference to keep trie alive
@@ -1278,23 +1251,30 @@ func (bc *BlockChain) writeBlockWithState(block *types.Block, receipts []*types.
} }
} }
} }
// If the total difficulty is higher than our known, add it to the canonical chain return nil
// Second clause in the if statement reduces the vulnerability to selfish mining.
// Please refer to http://www.cs.cornell.edu/~ie53/publications/btcProcFC.pdf
reorg := externTd.Cmp(localTd) > 0
currentBlock = bc.CurrentBlock()
if !reorg && externTd.Cmp(localTd) == 0 {
// Split same-difficulty blocks by number, then preferentially select
// the block generated by the local miner as the canonical block.
if block.NumberU64() < currentBlock.NumberU64() {
reorg = true
} else if block.NumberU64() == currentBlock.NumberU64() {
var currentPreserve, blockPreserve bool
if bc.shouldPreserve != nil {
currentPreserve, blockPreserve = bc.shouldPreserve(currentBlock), bc.shouldPreserve(block)
} }
reorg = !currentPreserve && (blockPreserve || mrand.Float64() < 0.5)
// WriteBlockAndSetHead writes the block and all associated state to the database, and applies the block as the new chain head.
func (bc *BlockChain) WriteBlockAndSetHead(block *types.Block, receipts []*types.Receipt, logs []*types.Log, state *state.StateDB, emitHeadEvent bool) (status WriteStatus, err error) {
if !bc.chainmu.TryLock() {
return NonStatTy, errChainStopped
} }
defer bc.chainmu.Unlock()
return bc.writeBlockAndSetHead(block, receipts, logs, state, emitHeadEvent)
}
// writeBlockAndSetHead writes the block and all associated state to the database,
// and also applies the given block as the new chain head. This function expects
// the chain mutex to be held.
func (bc *BlockChain) writeBlockAndSetHead(block *types.Block, receipts []*types.Receipt, logs []*types.Log, state *state.StateDB, emitHeadEvent bool) (status WriteStatus, err error) {
if err := bc.writeBlockWithState(block, receipts, logs, state); err != nil {
return NonStatTy, err
}
currentBlock := bc.CurrentBlock()
reorg, err := bc.forker.ReorgNeeded(currentBlock.Header(), block.Header())
if err != nil {
return NonStatTy, err
} }
if reorg { if reorg {
// Reorganise the chain if the parent is not the head block // Reorganise the chain if the parent is not the head block
@@ -1321,7 +1301,7 @@ func (bc *BlockChain) writeBlockWithState(block *types.Block, receipts []*types.
} }
// In theory we should fire a ChainHeadEvent when we inject // In theory we should fire a ChainHeadEvent when we inject
// a canonical block, but sometimes we can insert a batch of // a canonical block, but sometimes we can insert a batch of
// canonical blocks. Avoid firing too much ChainHeadEvents, // canonical blocks. Avoid firing too many ChainHeadEvents,
// we will fire an accumulated ChainHeadEvent and disable firing the // we will fire an accumulated ChainHeadEvent and disable firing the
// event here. // event here.
if emitHeadEvent { if emitHeadEvent {
@@ -1337,11 +1317,18 @@ func (bc *BlockChain) writeBlockWithState(block *types.Block, receipts []*types.
// addFutureBlock checks if the block is within the max allowed window to get // addFutureBlock checks if the block is within the max allowed window to get
// accepted for future processing, and returns an error if the block is too far // accepted for future processing, and returns an error if the block is too far
// ahead and was not added. // ahead and was not added.
//
// TODO: after the transition, future blocks shouldn't be kept, because
// they are no longer checked on the Geth side.
func (bc *BlockChain) addFutureBlock(block *types.Block) error { func (bc *BlockChain) addFutureBlock(block *types.Block) error {
max := uint64(time.Now().Unix() + maxTimeFutureBlocks) max := uint64(time.Now().Unix() + maxTimeFutureBlocks)
if block.Time() > max { if block.Time() > max {
return fmt.Errorf("future block timestamp %v > allowed %v", block.Time(), max) return fmt.Errorf("future block timestamp %v > allowed %v", block.Time(), max)
} }
if block.Difficulty().Cmp(common.Big0) == 0 {
// Never add PoS blocks into the future queue
return nil
}
bc.futureBlocks.Add(block.Hash(), block) bc.futureBlocks.Add(block.Hash(), block)
return nil return nil
} }
@@ -1349,15 +1336,12 @@ func (bc *BlockChain) addFutureBlock(block *types.Block) error {
// InsertChain attempts to insert the given batch of blocks in to the canonical // InsertChain attempts to insert the given batch of blocks in to the canonical
// chain or, otherwise, create a fork. If an error is returned it will return // chain or, otherwise, create a fork. If an error is returned it will return
// the index number of the failing block as well an error describing what went // the index number of the failing block as well an error describing what went
// wrong. // wrong. After insertion is done, all accumulated events will be fired.
//
// After insertion is done, all accumulated events will be fired.
func (bc *BlockChain) InsertChain(chain types.Blocks) (int, error) { func (bc *BlockChain) InsertChain(chain types.Blocks) (int, error) {
// Sanity check that we have something meaningful to import // Sanity check that we have something meaningful to import
if len(chain) == 0 { if len(chain) == 0 {
return 0, nil return 0, nil
} }
bc.blockProcFeed.Send(true) bc.blockProcFeed.Send(true)
defer bc.blockProcFeed.Send(false) defer bc.blockProcFeed.Send(false)
@@ -1376,26 +1360,12 @@ func (bc *BlockChain) InsertChain(chain types.Blocks) (int, error) {
prev.Hash().Bytes()[:4], i, block.NumberU64(), block.Hash().Bytes()[:4], block.ParentHash().Bytes()[:4]) prev.Hash().Bytes()[:4], i, block.NumberU64(), block.Hash().Bytes()[:4], block.ParentHash().Bytes()[:4])
} }
} }
// Pre-checks passed, start the full block imports
// Pre-check passed, start the full block imports.
if !bc.chainmu.TryLock() { if !bc.chainmu.TryLock() {
return 0, errChainStopped return 0, errChainStopped
} }
defer bc.chainmu.Unlock() defer bc.chainmu.Unlock()
return bc.insertChain(chain, true) return bc.insertChain(chain, true, true)
}
// InsertChainWithoutSealVerification works exactly the same
// except for seal verification, seal verification is omitted
func (bc *BlockChain) InsertChainWithoutSealVerification(block *types.Block) (int, error) {
bc.blockProcFeed.Send(true)
defer bc.blockProcFeed.Send(false)
if !bc.chainmu.TryLock() {
return 0, errChainStopped
}
defer bc.chainmu.Unlock()
return bc.insertChain(types.Blocks([]*types.Block{block}), false)
} }
// insertChain is the internal implementation of InsertChain, which assumes that // insertChain is the internal implementation of InsertChain, which assumes that
@@ -1406,7 +1376,7 @@ func (bc *BlockChain) InsertChainWithoutSealVerification(block *types.Block) (in
// racey behaviour. If a sidechain import is in progress, and the historic state // racey behaviour. If a sidechain import is in progress, and the historic state
// is imported, but then new canon-head is added before the actual sidechain // is imported, but then new canon-head is added before the actual sidechain
// completes, then the historic state could be pruned again // completes, then the historic state could be pruned again
func (bc *BlockChain) insertChain(chain types.Blocks, verifySeals bool) (int, error) { func (bc *BlockChain) insertChain(chain types.Blocks, verifySeals, setHead bool) (int, error) {
// If the chain is terminating, don't even bother starting up. // If the chain is terminating, don't even bother starting up.
if bc.insertStopped() { if bc.insertStopped() {
return 0, nil return 0, nil
@@ -1448,15 +1418,24 @@ func (bc *BlockChain) insertChain(chain types.Blocks, verifySeals bool) (int, er
// from the canonical chain, which has not been verified. // from the canonical chain, which has not been verified.
// Skip all known blocks that are behind us. // Skip all known blocks that are behind us.
var ( var (
reorg bool
current = bc.CurrentBlock() current = bc.CurrentBlock()
localTd = bc.GetTd(current.Hash(), current.NumberU64())
externTd = bc.GetTd(block.ParentHash(), block.NumberU64()-1) // The first block can't be nil
) )
for block != nil && bc.skipBlock(err, it) { for block != nil && bc.skipBlock(err, it) {
externTd = new(big.Int).Add(externTd, block.Difficulty()) reorg, err = bc.forker.ReorgNeeded(current.Header(), block.Header())
if localTd.Cmp(externTd) < 0 { if err != nil {
return it.index, err
}
if reorg {
// Switch to import mode if the forker says the reorg is necessary
// and also the block is not on the canonical chain.
// In eth2 the forker always returns true for the reorg decision (blindly trusting
// the external consensus engine), but in order to prevent unnecessary
// reorgs when importing known blocks, the special case is handled here.
if block.NumberU64() > current.NumberU64() || bc.GetCanonicalHash(block.NumberU64()) != block.Hash() {
break break
} }
}
log.Debug("Ignoring already known block", "number", block.Number(), "hash", block.Hash()) log.Debug("Ignoring already known block", "number", block.Number(), "hash", block.Hash())
stats.ignored++ stats.ignored++
@@ -1482,11 +1461,17 @@ func (bc *BlockChain) insertChain(chain types.Blocks, verifySeals bool) (int, er
// Falls through to the block import // Falls through to the block import
} }
switch { switch {
// First block is pruned, insert as sidechain and reorg only if TD grows enough // First block is pruned
case errors.Is(err, consensus.ErrPrunedAncestor): case errors.Is(err, consensus.ErrPrunedAncestor):
if setHead {
// First block is pruned, insert as sidechain and reorg only if TD grows enough
log.Debug("Pruned ancestor, inserting as sidechain", "number", block.Number(), "hash", block.Hash()) log.Debug("Pruned ancestor, inserting as sidechain", "number", block.Number(), "hash", block.Hash())
return bc.insertSideChain(block, it) return bc.insertSideChain(block, it)
} else {
// We're post-merge and the parent is pruned, try to recover the parent state
log.Debug("Pruned ancestor", "number", block.Number(), "hash", block.Hash())
return it.index, bc.recoverAncestors(block)
}
// First block is future, shove it (and all children) to the future queue (unknown ancestor) // First block is future, shove it (and all children) to the future queue (unknown ancestor)
case errors.Is(err, consensus.ErrFutureBlock) || (errors.Is(err, consensus.ErrUnknownAncestor) && bc.futureBlocks.Contains(it.first().ParentHash())): case errors.Is(err, consensus.ErrFutureBlock) || (errors.Is(err, consensus.ErrUnknownAncestor) && bc.futureBlocks.Contains(it.first().ParentHash())):
for block != nil && (it.index == 0 || errors.Is(err, consensus.ErrUnknownAncestor)) { for block != nil && (it.index == 0 || errors.Is(err, consensus.ErrUnknownAncestor)) {
@@ -1641,12 +1626,17 @@ func (bc *BlockChain) insertChain(chain types.Blocks, verifySeals bool) (int, er
// Update the metrics touched during block validation // Update the metrics touched during block validation
accountHashTimer.Update(statedb.AccountHashes) // Account hashes are complete, we can mark them accountHashTimer.Update(statedb.AccountHashes) // Account hashes are complete, we can mark them
storageHashTimer.Update(statedb.StorageHashes) // Storage hashes are complete, we can mark them storageHashTimer.Update(statedb.StorageHashes) // Storage hashes are complete, we can mark them
blockValidationTimer.Update(time.Since(substart) - (statedb.AccountHashes + statedb.StorageHashes - triehash)) blockValidationTimer.Update(time.Since(substart) - (statedb.AccountHashes + statedb.StorageHashes - triehash))
// Write the block to the chain and get the status. // Write the block to the chain and get the status.
substart = time.Now() substart = time.Now()
status, err := bc.writeBlockWithState(block, receipts, logs, statedb, false) var status WriteStatus
if !setHead {
// Don't set the head, only insert the block
err = bc.writeBlockWithState(block, receipts, logs, statedb)
} else {
status, err = bc.writeBlockAndSetHead(block, receipts, logs, statedb, false)
}
atomic.StoreUint32(&followupInterrupt, 1) atomic.StoreUint32(&followupInterrupt, 1)
if err != nil { if err != nil {
return it.index, err return it.index, err
@@ -1659,6 +1649,12 @@ func (bc *BlockChain) insertChain(chain types.Blocks, verifySeals bool) (int, er
blockWriteTimer.Update(time.Since(substart) - statedb.AccountCommits - statedb.StorageCommits - statedb.SnapshotCommits) blockWriteTimer.Update(time.Since(substart) - statedb.AccountCommits - statedb.StorageCommits - statedb.SnapshotCommits)
blockInsertTimer.UpdateSince(start) blockInsertTimer.UpdateSince(start)
if !setHead {
// We did not setHead, so we don't have any stats to update
log.Info("Inserted block", "number", block.Number(), "hash", block.Hash(), "txs", len(block.Transactions()), "elapsed", common.PrettyDuration(time.Since(start)))
return it.index, nil
}
switch status { switch status {
case CanonStatTy: case CanonStatTy:
log.Debug("Inserted new block", "number", block.Number(), "hash", block.Hash(), log.Debug("Inserted new block", "number", block.Number(), "hash", block.Hash(),
@@ -1717,9 +1713,11 @@ func (bc *BlockChain) insertChain(chain types.Blocks, verifySeals bool) (int, er
// //
// The method writes all (header-and-body-valid) blocks to disk, then tries to // The method writes all (header-and-body-valid) blocks to disk, then tries to
// switch over to the new chain if the TD exceeded the current chain. // switch over to the new chain if the TD exceeded the current chain.
// insertSideChain is only used pre-merge.
func (bc *BlockChain) insertSideChain(block *types.Block, it *insertIterator) (int, error) { func (bc *BlockChain) insertSideChain(block *types.Block, it *insertIterator) (int, error) {
var ( var (
externTd *big.Int externTd *big.Int
lastBlock = block
current = bc.CurrentBlock() current = bc.CurrentBlock()
) )
// The first sidechain block error is already verified to be ErrPrunedAncestor. // The first sidechain block error is already verified to be ErrPrunedAncestor.
@ -1771,6 +1769,7 @@ func (bc *BlockChain) insertSideChain(block *types.Block, it *insertIterator) (i
"txs", len(block.Transactions()), "gas", block.GasUsed(), "uncles", len(block.Uncles()), "txs", len(block.Transactions()), "gas", block.GasUsed(), "uncles", len(block.Uncles()),
"root", block.Root()) "root", block.Root())
} }
lastBlock = block
} }
// At this point, we've written all sidechain blocks to database. Loop ended // At this point, we've written all sidechain blocks to database. Loop ended
// either on some other error or all were processed. If there was some other // either on some other error or all were processed. If there was some other
@ -1778,8 +1777,12 @@ func (bc *BlockChain) insertSideChain(block *types.Block, it *insertIterator) (i
// //
// If the externTd was larger than our local TD, we now need to reimport the previous // If the externTd was larger than our local TD, we now need to reimport the previous
// blocks to regenerate the required state // blocks to regenerate the required state
reorg, err := bc.forker.ReorgNeeded(current.Header(), lastBlock.Header())
if err != nil {
return it.index, err
}
if !reorg {
localTd := bc.GetTd(current.Hash(), current.NumberU64()) localTd := bc.GetTd(current.Hash(), current.NumberU64())
if localTd.Cmp(externTd) > 0 {
log.Info("Sidechain written to disk", "start", it.first().NumberU64(), "end", it.previous().Number, "sidetd", externTd, "localtd", localTd) log.Info("Sidechain written to disk", "start", it.first().NumberU64(), "end", it.previous().Number, "sidetd", externTd, "localtd", localTd)
return it.index, err return it.index, err
} }
@ -1815,7 +1818,7 @@ func (bc *BlockChain) insertSideChain(block *types.Block, it *insertIterator) (i
// memory here. // memory here.
if len(blocks) >= 2048 || memory > 64*1024*1024 { if len(blocks) >= 2048 || memory > 64*1024*1024 {
log.Info("Importing heavy sidechain segment", "blocks", len(blocks), "start", blocks[0].NumberU64(), "end", block.NumberU64()) log.Info("Importing heavy sidechain segment", "blocks", len(blocks), "start", blocks[0].NumberU64(), "end", block.NumberU64())
if _, err := bc.insertChain(blocks, false); err != nil { if _, err := bc.insertChain(blocks, false, true); err != nil {
return 0, err return 0, err
} }
blocks, memory = blocks[:0], 0 blocks, memory = blocks[:0], 0
@ -1829,33 +1832,62 @@ func (bc *BlockChain) insertSideChain(block *types.Block, it *insertIterator) (i
} }
if len(blocks) > 0 { if len(blocks) > 0 {
log.Info("Importing sidechain segment", "start", blocks[0].NumberU64(), "end", blocks[len(blocks)-1].NumberU64()) log.Info("Importing sidechain segment", "start", blocks[0].NumberU64(), "end", blocks[len(blocks)-1].NumberU64())
return bc.insertChain(blocks, false) return bc.insertChain(blocks, false, true)
} }
return 0, nil return 0, nil
} }
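
The inline total-difficulty comparison that used to gate the sidechain import is now delegated to the forker's ReorgNeeded. As a rough illustration only, a simplified pre-merge flavour of that rule with a hypothetical helper name (the real ForkChoice also breaks TD ties and consults the merge transition status):

```go
package main

import (
	"fmt"
	"math/big"
)

// reorgNeededByTD is a hypothetical, simplified stand-in for the pre-merge
// fork-choice rule: adopt the side chain only when its total difficulty
// exceeds the local one. The real ForkChoice.ReorgNeeded also handles TD
// ties and the merge transition.
func reorgNeededByTD(localTD, externTD *big.Int) bool {
	return externTD.Cmp(localTD) > 0
}

func main() {
	local, extern := big.NewInt(100), big.NewInt(101)
	fmt.Println(reorgNeededByTD(local, extern)) // true: the side chain is heavier
}
```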
// reorg takes two blocks, an old chain and a new chain and will reconstruct the // recoverAncestors finds the closest ancestor with available state and re-executes
// blocks and inserts them to be part of the new canonical chain and accumulates // all the ancestor blocks since that point.
// potential missing transactions and post an event about them. // recoverAncestors is only used post-merge.
func (bc *BlockChain) reorg(oldBlock, newBlock *types.Block) error { func (bc *BlockChain) recoverAncestors(block *types.Block) error {
// Gather all the sidechain hashes (full blocks may be memory heavy)
var ( var (
newChain types.Blocks hashes []common.Hash
oldChain types.Blocks numbers []uint64
commonBlock *types.Block parent = block
)
for parent != nil && !bc.HasState(parent.Root()) {
hashes = append(hashes, parent.Hash())
numbers = append(numbers, parent.NumberU64())
parent = bc.GetBlock(parent.ParentHash(), parent.NumberU64()-1)
deletedTxs types.Transactions // If the chain is terminating, stop iteration
addedTxs types.Transactions if bc.insertStopped() {
log.Debug("Abort during blocks iteration")
deletedLogs [][]*types.Log return errInsertionInterrupted
rebirthLogs [][]*types.Log }
}
if parent == nil {
return errors.New("missing parent")
}
// Import all the pruned blocks to make the state available
for i := len(hashes) - 1; i >= 0; i-- {
// If the chain is terminating, stop processing blocks
if bc.insertStopped() {
log.Debug("Abort during blocks processing")
return errInsertionInterrupted
}
var b *types.Block
if i == 0 {
b = block
} else {
b = bc.GetBlock(hashes[i], numbers[i])
}
if _, err := bc.insertChain(types.Blocks{b}, false, false); err != nil {
return err
}
}
return nil
}
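
To make the walk above concrete, here is a self-contained toy sketch of what recoverAncestors does, with stand-in types (toyBlock and toyChain are hypothetical, replacing *types.Block and the real database): collect the stateless ancestors newest-first, then re-insert them oldest-first so each re-execution finds its parent state.

```go
package main

import (
	"errors"
	"fmt"
)

// toyBlock stands in for *types.Block in this sketch.
type toyBlock struct {
	hash, parent string
	number       uint64
}

// toyChain stands in for the parts of BlockChain that recoverAncestors
// touches: block lookup, state availability, and re-insertion.
type toyChain struct {
	blocks   map[string]*toyBlock // hash -> block
	hasState map[string]bool      // block hash -> state present
}

// recoverAncestors walks back from block until an ancestor with available
// state is found, then re-inserts the pruned blocks oldest-first so their
// state is regenerated in order.
func (c *toyChain) recoverAncestors(block *toyBlock) error {
	var pruned []*toyBlock
	parent := block
	for parent != nil && !c.hasState[parent.hash] {
		pruned = append(pruned, parent)
		parent = c.blocks[parent.parent]
	}
	if parent == nil {
		return errors.New("missing parent")
	}
	for i := len(pruned) - 1; i >= 0; i-- {
		// Re-execution would happen here; the sketch just marks the
		// state as available again.
		c.hasState[pruned[i].hash] = true
		fmt.Println("re-executed block", pruned[i].number)
	}
	return nil
}

func main() {
	c := &toyChain{
		blocks: map[string]*toyBlock{
			"a": {hash: "a", parent: "", number: 1},
			"b": {hash: "b", parent: "a", number: 2},
			"c": {hash: "c", parent: "b", number: 3},
		},
		hasState: map[string]bool{"a": true}, // only block 1 still has state
	}
	if err := c.recoverAncestors(c.blocks["c"]); err != nil {
		fmt.Println("recovery failed:", err)
	}
}
```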
// collectLogs collects the logs that were generated or removed during // collectLogs collects the logs that were generated or removed during
// the processing of the block that corresponds with the given hash. // the processing of the block that corresponds with the given hash.
// These logs are later announced as deleted or reborn // These logs are later announced as deleted or reborn.
collectLogs = func(hash common.Hash, removed bool) { func (bc *BlockChain) collectLogs(hash common.Hash, removed bool) []*types.Log {
number := bc.hc.GetBlockNumber(hash) number := bc.hc.GetBlockNumber(hash)
if number == nil { if number == nil {
return return nil
} }
receipts := rawdb.ReadReceipts(bc.db, hash, *number, bc.chainConfig) receipts := rawdb.ReadReceipts(bc.db, hash, *number, bc.chainConfig)
@ -1869,16 +1901,11 @@ func (bc *BlockChain) reorg(oldBlock, newBlock *types.Block) error {
logs = append(logs, &l) logs = append(logs, &l)
} }
} }
if len(logs) > 0 { return logs
if removed {
deletedLogs = append(deletedLogs, logs)
} else {
rebirthLogs = append(rebirthLogs, logs)
}
}
} }
// mergeLogs returns a merged log slice with specified sort order. // mergeLogs returns a merged log slice with specified sort order.
mergeLogs = func(logs [][]*types.Log, reverse bool) []*types.Log { func mergeLogs(logs [][]*types.Log, reverse bool) []*types.Log {
var ret []*types.Log var ret []*types.Log
if reverse { if reverse {
for i := len(logs) - 1; i >= 0; i-- { for i := len(logs) - 1; i >= 0; i-- {
@ -1891,6 +1918,23 @@ func (bc *BlockChain) reorg(oldBlock, newBlock *types.Block) error {
} }
return ret return ret
} }
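
The flattening order is easiest to see on plain data. A small hypothetical sketch (mergeStrings) mirroring mergeLogs' behaviour, with string groups standing in for per-block log slices:

```go
package main

import "fmt"

// mergeStrings mirrors mergeLogs' flattening on plain strings:
// reverse=true emits the per-block groups last-group-first.
func mergeStrings(groups [][]string, reverse bool) []string {
	var ret []string
	if reverse {
		for i := len(groups) - 1; i >= 0; i-- {
			ret = append(ret, groups[i]...)
		}
	} else {
		for i := 0; i < len(groups); i++ {
			ret = append(ret, groups[i]...)
		}
	}
	return ret
}

func main() {
	groups := [][]string{{"b1-log1", "b1-log2"}, {"b2-log1"}}
	fmt.Println(mergeStrings(groups, true))  // [b2-log1 b1-log1 b1-log2]
	fmt.Println(mergeStrings(groups, false)) // [b1-log1 b1-log2 b2-log1]
}
```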
// reorg takes two blocks, an old chain and a new chain, and will reconstruct the
// blocks and insert them to be part of the new canonical chain, accumulating
// potential missing transactions and posting an event about them.
// Note the new head block won't be processed here; callers need to handle it
// externally.
func (bc *BlockChain) reorg(oldBlock, newBlock *types.Block) error {
var (
newChain types.Blocks
oldChain types.Blocks
commonBlock *types.Block
deletedTxs types.Transactions
addedTxs types.Transactions
deletedLogs [][]*types.Log
rebirthLogs [][]*types.Log
) )
// Reduce the longer chain to the same number as the shorter one // Reduce the longer chain to the same number as the shorter one
if oldBlock.NumberU64() > newBlock.NumberU64() { if oldBlock.NumberU64() > newBlock.NumberU64() {
@ -1898,7 +1942,12 @@ func (bc *BlockChain) reorg(oldBlock, newBlock *types.Block) error {
for ; oldBlock != nil && oldBlock.NumberU64() != newBlock.NumberU64(); oldBlock = bc.GetBlock(oldBlock.ParentHash(), oldBlock.NumberU64()-1) { for ; oldBlock != nil && oldBlock.NumberU64() != newBlock.NumberU64(); oldBlock = bc.GetBlock(oldBlock.ParentHash(), oldBlock.NumberU64()-1) {
oldChain = append(oldChain, oldBlock) oldChain = append(oldChain, oldBlock)
deletedTxs = append(deletedTxs, oldBlock.Transactions()...) deletedTxs = append(deletedTxs, oldBlock.Transactions()...)
collectLogs(oldBlock.Hash(), true)
// Collect deleted logs for notification
logs := bc.collectLogs(oldBlock.Hash(), true)
if len(logs) > 0 {
deletedLogs = append(deletedLogs, logs)
}
} }
} else { } else {
// New chain is longer, stash all blocks away for subsequent insertion // New chain is longer, stash all blocks away for subsequent insertion
@ -1923,8 +1972,12 @@ func (bc *BlockChain) reorg(oldBlock, newBlock *types.Block) error {
// Remove an old block as well as stash away a new block // Remove an old block as well as stash away a new block
oldChain = append(oldChain, oldBlock) oldChain = append(oldChain, oldBlock)
deletedTxs = append(deletedTxs, oldBlock.Transactions()...) deletedTxs = append(deletedTxs, oldBlock.Transactions()...)
collectLogs(oldBlock.Hash(), true)
// Collect deleted logs for notification
logs := bc.collectLogs(oldBlock.Hash(), true)
if len(logs) > 0 {
deletedLogs = append(deletedLogs, logs)
}
newChain = append(newChain, newBlock) newChain = append(newChain, newBlock)
// Step back with both chains // Step back with both chains
@ -1951,8 +2004,15 @@ func (bc *BlockChain) reorg(oldBlock, newBlock *types.Block) error {
blockReorgAddMeter.Mark(int64(len(newChain))) blockReorgAddMeter.Mark(int64(len(newChain)))
blockReorgDropMeter.Mark(int64(len(oldChain))) blockReorgDropMeter.Mark(int64(len(oldChain)))
blockReorgMeter.Mark(1) blockReorgMeter.Mark(1)
} else if len(newChain) > 0 {
// Special case happens in the post-merge stage when the current head is
// an ancestor of the new head but the two blocks are not consecutive
log.Info("Extend chain", "add", len(newChain), "number", newChain[0].NumberU64(), "hash", newChain[0].Hash())
blockReorgAddMeter.Mark(int64(len(newChain)))
} else { } else {
log.Error("Impossible reorg, please file an issue", "oldnum", oldBlock.Number(), "oldhash", oldBlock.Hash(), "newnum", newBlock.Number(), "newhash", newBlock.Hash()) // len(newChain) == 0 && len(oldChain) > 0
// rewind the canonical chain to a lower point.
log.Error("Impossible reorg, please file an issue", "oldnum", oldBlock.Number(), "oldhash", oldBlock.Hash(), "oldblocks", len(oldChain), "newnum", newBlock.Number(), "newhash", newBlock.Hash(), "newblocks", len(newChain))
} }
// Insert the new chain(except the head block(reverse order)), // Insert the new chain(except the head block(reverse order)),
// taking care of the proper incremental order. // taking care of the proper incremental order.
@ -1961,8 +2021,10 @@ func (bc *BlockChain) reorg(oldBlock, newBlock *types.Block) error {
bc.writeHeadBlock(newChain[i]) bc.writeHeadBlock(newChain[i])
// Collect reborn logs due to chain reorg // Collect reborn logs due to chain reorg
collectLogs(newChain[i].Hash(), false) logs := bc.collectLogs(newChain[i].Hash(), false)
if len(logs) > 0 {
rebirthLogs = append(rebirthLogs, logs)
}
// Collect the new added transactions. // Collect the new added transactions.
addedTxs = append(addedTxs, newChain[i].Transactions()...) addedTxs = append(addedTxs, newChain[i].Transactions()...)
} }
@ -2002,12 +2064,54 @@ func (bc *BlockChain) reorg(oldBlock, newBlock *types.Block) error {
return nil return nil
} }
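
The three log branches above boil down to how many blocks each side of the fork contributes. A hypothetical classifier capturing just that decision:

```go
package main

import "fmt"

// classifyHeadUpdate is a hypothetical helper capturing the branch logic
// above: which log line reorg emits once the common ancestor is known.
func classifyHeadUpdate(dropped, added int) string {
	switch {
	case dropped > 0 && added > 0:
		return "chain reorg detected" // blocks dropped and replaced
	case added > 0:
		return "extend chain" // post-merge: old head is an ancestor of the new head
	default:
		return "impossible reorg, please file an issue" // rewind with nothing to add
	}
}

func main() {
	fmt.Println(classifyHeadUpdate(0, 3)) // extend chain
	fmt.Println(classifyHeadUpdate(2, 3)) // chain reorg detected
}
```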
// futureBlocksLoop processes the 'future block' queue. // InsertBlockWithoutSetHead executes the block, runs the necessary verification
func (bc *BlockChain) futureBlocksLoop() { // upon it and then persists the block and the associated state into the database.
defer bc.wg.Done() // The key difference from InsertChain is that it won't update the canonical
// chain. It relies on the additional SetChainHead call to finalize the entire
// procedure.
func (bc *BlockChain) InsertBlockWithoutSetHead(block *types.Block) error {
if !bc.chainmu.TryLock() {
return errChainStopped
}
defer bc.chainmu.Unlock()
_, err := bc.insertChain(types.Blocks{block}, true, false)
return err
}
// SetChainHead sets the given block as the new chain head, running a
// reorg if necessary. It's possible that the state of the new head is
// missing after the reorg; it can be recovered by inserting a new block,
// which triggers the re-execution.
func (bc *BlockChain) SetChainHead(newBlock *types.Block) error {
if !bc.chainmu.TryLock() {
return errChainStopped
}
defer bc.chainmu.Unlock()
// Run the reorg if necessary and set the given block as new head.
if newBlock.ParentHash() != bc.CurrentBlock().Hash() {
if err := bc.reorg(bc.CurrentBlock(), newBlock); err != nil {
return err
}
}
bc.writeHeadBlock(newBlock)
// Emit events
logs := bc.collectLogs(newBlock.Hash(), false)
bc.chainFeed.Send(ChainEvent{Block: newBlock, Hash: newBlock.Hash(), Logs: logs})
if len(logs) > 0 {
bc.logsFeed.Send(logs)
}
bc.chainHeadFeed.Send(ChainHeadEvent{Block: newBlock})
log.Info("Set the chain head", "number", newBlock.Number(), "hash", newBlock.Hash())
return nil
}
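
Taken together, these two methods split what a canonical InsertChain call used to do in one step. A minimal usage sketch of the intended flow, assuming an already-constructed *core.BlockChain and a fully validated block:

```go
package blockchainutil

import (
	"github.com/ethereum/go-ethereum/core"
	"github.com/ethereum/go-ethereum/core/types"
)

// importAndPromote mirrors the intended post-merge flow: execute and
// persist the block first, then promote it to chain head separately.
func importAndPromote(bc *core.BlockChain, block *types.Block) error {
	// Execute the block and persist it with its state, without touching
	// the canonical chain.
	if err := bc.InsertBlockWithoutSetHead(block); err != nil {
		return err
	}
	// Promote it to head; SetChainHead runs a reorg if the block does
	// not directly extend the current head.
	return bc.SetChainHead(block)
}
```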
func (bc *BlockChain) updateFutureBlocks() {
futureTimer := time.NewTicker(5 * time.Second) futureTimer := time.NewTicker(5 * time.Second)
defer futureTimer.Stop() defer futureTimer.Stop()
defer bc.wg.Done()
for { for {
select { select {
case <-futureTimer.C: case <-futureTimer.C:
@ -2103,7 +2207,14 @@ func (bc *BlockChain) maintainTxIndex(ancients uint64) {
// If a previous indexing existed, make sure that we fill in any missing entries // If a previous indexing existed, make sure that we fill in any missing entries
if bc.txLookupLimit == 0 || head < bc.txLookupLimit { if bc.txLookupLimit == 0 || head < bc.txLookupLimit {
if *tail > 0 { if *tail > 0 {
rawdb.IndexTransactions(bc.db, 0, *tail, bc.quit) // This can happen when the chain is rewound to a historical point that
// is even lower than the index tail; recap the indexing target
// to the new head to avoid reading non-existent block bodies.
end := *tail
if end > head+1 {
end = head + 1
}
rawdb.IndexTransactions(bc.db, 0, end, bc.quit)
} }
return return
} }
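
The recap guard amounts to clamping the re-indexing upper bound to the current head. A tiny sketch of that invariant with a hypothetical helper:

```go
package main

import "fmt"

// clampIndexEnd is a hypothetical helper showing the recap invariant:
// the exclusive re-indexing bound never exceeds head+1, even when the
// chain was rewound below the old index tail.
func clampIndexEnd(tail, head uint64) uint64 {
	if tail > head+1 {
		return head + 1
	}
	return tail
}

func main() {
	fmt.Println(clampIndexEnd(500, 100)) // 101: head was rewound below the tail
	fmt.Println(clampIndexEnd(50, 100))  // 50: normal case, the tail stands
}
```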
@ -2188,6 +2299,6 @@ func (bc *BlockChain) InsertHeaderChain(chain []*types.Header, checkFreq int) (i
return 0, errChainStopped return 0, errChainStopped
} }
defer bc.chainmu.Unlock() defer bc.chainmu.Unlock()
_, err := bc.hc.InsertHeaderChain(chain, start) _, err := bc.hc.InsertHeaderChain(chain, start, bc.forker)
return 0, err return 0, err
} }

View File

@ -73,6 +73,12 @@ func (bc *BlockChain) GetHeaderByNumber(number uint64) *types.Header {
return bc.hc.GetHeaderByNumber(number) return bc.hc.GetHeaderByNumber(number)
} }
// GetHeadersFrom returns a contiguous segment of headers, in RLP form, going
// backwards from the given number.
func (bc *BlockChain) GetHeadersFrom(number, count uint64) []rlp.RawValue {
return bc.hc.GetHeadersFrom(number, count)
}
// GetBody retrieves a block body (transactions and uncles) from the database by // GetBody retrieves a block body (transactions and uncles) from the database by
// hash, caching it if found. // hash, caching it if found.
func (bc *BlockChain) GetBody(hash common.Hash) *types.Body { func (bc *BlockChain) GetBody(hash common.Hash) *types.Body {

View File

@ -79,10 +79,10 @@ func testShortRepair(t *testing.T, snapshots bool) {
// already committed, after which the process crashed. In this case we expect the full // already committed, after which the process crashed. In this case we expect the full
// chain to be rolled back to the committed block, but the chain data itself left in // chain to be rolled back to the committed block, but the chain data itself left in
// the database for replaying. // the database for replaying.
func TestShortFastSyncedRepair(t *testing.T) { testShortFastSyncedRepair(t, false) } func TestShortSnapSyncedRepair(t *testing.T) { testShortSnapSyncedRepair(t, false) }
func TestShortFastSyncedRepairWithSnapshots(t *testing.T) { testShortFastSyncedRepair(t, true) } func TestShortSnapSyncedRepairWithSnapshots(t *testing.T) { testShortSnapSyncedRepair(t, true) }
func testShortFastSyncedRepair(t *testing.T, snapshots bool) { func testShortSnapSyncedRepair(t *testing.T, snapshots bool) {
// Chain: // Chain:
// G->C1->C2->C3->C4->C5->C6->C7->C8 (HEAD) // G->C1->C2->C3->C4->C5->C6->C7->C8 (HEAD)
// //
@ -119,10 +119,10 @@ func testShortFastSyncedRepair(t *testing.T, snapshots bool) {
// not yet committed, but the process crashed. In this case we expect the chain to // not yet committed, but the process crashed. In this case we expect the chain to
// detect that it was fast syncing and not delete anything, since we can just pick // detect that it was fast syncing and not delete anything, since we can just pick
// up directly where we left off. // up directly where we left off.
func TestShortFastSyncingRepair(t *testing.T) { testShortFastSyncingRepair(t, false) } func TestShortSnapSyncingRepair(t *testing.T) { testShortSnapSyncingRepair(t, false) }
func TestShortFastSyncingRepairWithSnapshots(t *testing.T) { testShortFastSyncingRepair(t, true) } func TestShortSnapSyncingRepairWithSnapshots(t *testing.T) { testShortSnapSyncingRepair(t, true) }
func testShortFastSyncingRepair(t *testing.T, snapshots bool) { func testShortSnapSyncingRepair(t *testing.T, snapshots bool) {
// Chain: // Chain:
// G->C1->C2->C3->C4->C5->C6->C7->C8 (HEAD) // G->C1->C2->C3->C4->C5->C6->C7->C8 (HEAD)
// //
@ -203,14 +203,14 @@ func testShortOldForkedRepair(t *testing.T, snapshots bool) {
// crashed. In this test scenario the side chain is below the committed block. In // crashed. In this test scenario the side chain is below the committed block. In
// this case we expect the canonical chain to be rolled back to the committed block, // this case we expect the canonical chain to be rolled back to the committed block,
// but the chain data itself left in the database for replaying. // but the chain data itself left in the database for replaying.
func TestShortOldForkedFastSyncedRepair(t *testing.T) { func TestShortOldForkedSnapSyncedRepair(t *testing.T) {
testShortOldForkedFastSyncedRepair(t, false) testShortOldForkedSnapSyncedRepair(t, false)
} }
func TestShortOldForkedFastSyncedRepairWithSnapshots(t *testing.T) { func TestShortOldForkedSnapSyncedRepairWithSnapshots(t *testing.T) {
testShortOldForkedFastSyncedRepair(t, true) testShortOldForkedSnapSyncedRepair(t, true)
} }
func testShortOldForkedFastSyncedRepair(t *testing.T, snapshots bool) { func testShortOldForkedSnapSyncedRepair(t *testing.T, snapshots bool) {
// Chain: // Chain:
// G->C1->C2->C3->C4->C5->C6->C7->C8 (HEAD) // G->C1->C2->C3->C4->C5->C6->C7->C8 (HEAD)
// └->S1->S2->S3 // └->S1->S2->S3
@ -250,14 +250,14 @@ func testShortOldForkedFastSyncedRepair(t *testing.T, snapshots bool) {
// test scenario the side chain is below the committed block. In this case we expect // test scenario the side chain is below the committed block. In this case we expect
// the chain to detect that it was fast syncing and not delete anything, since we // the chain to detect that it was fast syncing and not delete anything, since we
// can just pick up directly where we left off. // can just pick up directly where we left off.
func TestShortOldForkedFastSyncingRepair(t *testing.T) { func TestShortOldForkedSnapSyncingRepair(t *testing.T) {
testShortOldForkedFastSyncingRepair(t, false) testShortOldForkedSnapSyncingRepair(t, false)
} }
func TestShortOldForkedFastSyncingRepairWithSnapshots(t *testing.T) { func TestShortOldForkedSnapSyncingRepairWithSnapshots(t *testing.T) {
testShortOldForkedFastSyncingRepair(t, true) testShortOldForkedSnapSyncingRepair(t, true)
} }
func testShortOldForkedFastSyncingRepair(t *testing.T, snapshots bool) { func testShortOldForkedSnapSyncingRepair(t *testing.T, snapshots bool) {
// Chain: // Chain:
// G->C1->C2->C3->C4->C5->C6->C7->C8 (HEAD) // G->C1->C2->C3->C4->C5->C6->C7->C8 (HEAD)
// └->S1->S2->S3 // └->S1->S2->S3
@ -340,14 +340,14 @@ func testShortNewlyForkedRepair(t *testing.T, snapshots bool) {
// crashed. In this test scenario the side chain reaches above the committed block. // crashed. In this test scenario the side chain reaches above the committed block.
// In this case we expect the canonical chain to be rolled back to the committed // In this case we expect the canonical chain to be rolled back to the committed
// block, but the chain data itself left in the database for replaying. // block, but the chain data itself left in the database for replaying.
func TestShortNewlyForkedFastSyncedRepair(t *testing.T) { func TestShortNewlyForkedSnapSyncedRepair(t *testing.T) {
testShortNewlyForkedFastSyncedRepair(t, false) testShortNewlyForkedSnapSyncedRepair(t, false)
} }
func TestShortNewlyForkedFastSyncedRepairWithSnapshots(t *testing.T) { func TestShortNewlyForkedSnapSyncedRepairWithSnapshots(t *testing.T) {
testShortNewlyForkedFastSyncedRepair(t, true) testShortNewlyForkedSnapSyncedRepair(t, true)
} }
func testShortNewlyForkedFastSyncedRepair(t *testing.T, snapshots bool) { func testShortNewlyForkedSnapSyncedRepair(t *testing.T, snapshots bool) {
// Chain: // Chain:
// G->C1->C2->C3->C4->C5->C6->C7->C8 (HEAD) // G->C1->C2->C3->C4->C5->C6->C7->C8 (HEAD)
// └->S1->S2->S3->S4->S5->S6 // └->S1->S2->S3->S4->S5->S6
@ -387,14 +387,14 @@ func testShortNewlyForkedFastSyncedRepair(t *testing.T, snapshots bool) {
// this test scenario the side chain reaches above the committed block. In this // this test scenario the side chain reaches above the committed block. In this
// case we expect the chain to detect that it was fast syncing and not delete // case we expect the chain to detect that it was fast syncing and not delete
// anything, since we can just pick up directly where we left off. // anything, since we can just pick up directly where we left off.
func TestShortNewlyForkedFastSyncingRepair(t *testing.T) { func TestShortNewlyForkedSnapSyncingRepair(t *testing.T) {
testShortNewlyForkedFastSyncingRepair(t, false) testShortNewlyForkedSnapSyncingRepair(t, false)
} }
func TestShortNewlyForkedFastSyncingRepairWithSnapshots(t *testing.T) { func TestShortNewlyForkedSnapSyncingRepairWithSnapshots(t *testing.T) {
testShortNewlyForkedFastSyncingRepair(t, true) testShortNewlyForkedSnapSyncingRepair(t, true)
} }
func testShortNewlyForkedFastSyncingRepair(t *testing.T, snapshots bool) { func testShortNewlyForkedSnapSyncingRepair(t *testing.T, snapshots bool) {
// Chain: // Chain:
// G->C1->C2->C3->C4->C5->C6->C7->C8 (HEAD) // G->C1->C2->C3->C4->C5->C6->C7->C8 (HEAD)
// └->S1->S2->S3->S4->S5->S6 // └->S1->S2->S3->S4->S5->S6
@ -475,14 +475,14 @@ func testShortReorgedRepair(t *testing.T, snapshots bool) {
// the fast sync pivot point was already committed to disk and then the process // the fast sync pivot point was already committed to disk and then the process
// crashed. In this case we expect the canonical chain to be rolled back to the // crashed. In this case we expect the canonical chain to be rolled back to the
// committed block, but the chain data itself left in the database for replaying. // committed block, but the chain data itself left in the database for replaying.
func TestShortReorgedFastSyncedRepair(t *testing.T) { func TestShortReorgedSnapSyncedRepair(t *testing.T) {
testShortReorgedFastSyncedRepair(t, false) testShortReorgedSnapSyncedRepair(t, false)
} }
func TestShortReorgedFastSyncedRepairWithSnapshots(t *testing.T) { func TestShortReorgedSnapSyncedRepairWithSnapshots(t *testing.T) {
testShortReorgedFastSyncedRepair(t, true) testShortReorgedSnapSyncedRepair(t, true)
} }
func testShortReorgedFastSyncedRepair(t *testing.T, snapshots bool) { func testShortReorgedSnapSyncedRepair(t *testing.T, snapshots bool) {
// Chain: // Chain:
// G->C1->C2->C3->C4->C5->C6->C7->C8 (HEAD) // G->C1->C2->C3->C4->C5->C6->C7->C8 (HEAD)
// └->S1->S2->S3->S4->S5->S6->S7->S8->S9->S10 // └->S1->S2->S3->S4->S5->S6->S7->S8->S9->S10
@ -521,14 +521,14 @@ func testShortReorgedFastSyncedRepair(t *testing.T, snapshots bool) {
// the fast sync pivot point was not yet committed, but the process crashed. In // the fast sync pivot point was not yet committed, but the process crashed. In
// this case we expect the chain to detect that it was fast syncing and not delete // this case we expect the chain to detect that it was fast syncing and not delete
// anything, since we can just pick up directly where we left off. // anything, since we can just pick up directly where we left off.
func TestShortReorgedFastSyncingRepair(t *testing.T) { func TestShortReorgedSnapSyncingRepair(t *testing.T) {
testShortReorgedFastSyncingRepair(t, false) testShortReorgedSnapSyncingRepair(t, false)
} }
func TestShortReorgedFastSyncingRepairWithSnapshots(t *testing.T) { func TestShortReorgedSnapSyncingRepairWithSnapshots(t *testing.T) {
testShortReorgedFastSyncingRepair(t, true) testShortReorgedSnapSyncingRepair(t, true)
} }
func testShortReorgedFastSyncingRepair(t *testing.T, snapshots bool) { func testShortReorgedSnapSyncingRepair(t *testing.T, snapshots bool) {
// Chain: // Chain:
// G->C1->C2->C3->C4->C5->C6->C7->C8 (HEAD) // G->C1->C2->C3->C4->C5->C6->C7->C8 (HEAD)
// └->S1->S2->S3->S4->S5->S6->S7->S8->S9->S10 // └->S1->S2->S3->S4->S5->S6->S7->S8->S9->S10
@ -656,14 +656,14 @@ func testLongDeepRepair(t *testing.T, snapshots bool) {
// sync pivot point - newer than the ancient limit - was already committed, after // sync pivot point - newer than the ancient limit - was already committed, after
// which the process crashed. In this case we expect the chain to be rolled back // which the process crashed. In this case we expect the chain to be rolled back
// to the committed block, with everything afterwards kept as fast sync data. // to the committed block, with everything afterwards kept as fast sync data.
func TestLongFastSyncedShallowRepair(t *testing.T) { func TestLongSnapSyncedShallowRepair(t *testing.T) {
testLongFastSyncedShallowRepair(t, false) testLongSnapSyncedShallowRepair(t, false)
} }
func TestLongFastSyncedShallowRepairWithSnapshots(t *testing.T) { func TestLongSnapSyncedShallowRepairWithSnapshots(t *testing.T) {
testLongFastSyncedShallowRepair(t, true) testLongSnapSyncedShallowRepair(t, true)
} }
func testLongFastSyncedShallowRepair(t *testing.T, snapshots bool) { func testLongSnapSyncedShallowRepair(t *testing.T, snapshots bool) {
// Chain: // Chain:
// G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10->C11->C12->C13->C14->C15->C16->C17->C18 (HEAD) // G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10->C11->C12->C13->C14->C15->C16->C17->C18 (HEAD)
// //
@ -705,10 +705,10 @@ func testLongFastSyncedShallowRepair(t *testing.T, snapshots bool) {
// sync pivot point - older than the ancient limit - was already committed, after // sync pivot point - older than the ancient limit - was already committed, after
// which the process crashed. In this case we expect the chain to be rolled back // which the process crashed. In this case we expect the chain to be rolled back
// to the committed block, with everything afterwards deleted. // to the committed block, with everything afterwards deleted.
func TestLongFastSyncedDeepRepair(t *testing.T) { testLongFastSyncedDeepRepair(t, false) } func TestLongSnapSyncedDeepRepair(t *testing.T) { testLongSnapSyncedDeepRepair(t, false) }
func TestLongFastSyncedDeepRepairWithSnapshots(t *testing.T) { testLongFastSyncedDeepRepair(t, true) } func TestLongSnapSyncedDeepRepairWithSnapshots(t *testing.T) { testLongSnapSyncedDeepRepair(t, true) }
func testLongFastSyncedDeepRepair(t *testing.T, snapshots bool) { func testLongSnapSyncedDeepRepair(t *testing.T, snapshots bool) {
// Chain: // Chain:
// G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10->C11->C12->C13->C14->C15->C16->C17->C18->C19->C20->C21->C22->C23->C24 (HEAD) // G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10->C11->C12->C13->C14->C15->C16->C17->C18->C19->C20->C21->C22->C23->C24 (HEAD)
// //
@ -750,14 +750,14 @@ func testLongFastSyncedDeepRepair(t *testing.T, snapshots bool) {
// process crashed. In this case we expect the chain to detect that it was fast // process crashed. In this case we expect the chain to detect that it was fast
// syncing and not delete anything, since we can just pick up directly where we // syncing and not delete anything, since we can just pick up directly where we
// left off. // left off.
func TestLongFastSyncingShallowRepair(t *testing.T) { func TestLongSnapSyncingShallowRepair(t *testing.T) {
testLongFastSyncingShallowRepair(t, false) testLongSnapSyncingShallowRepair(t, false)
} }
func TestLongFastSyncingShallowRepairWithSnapshots(t *testing.T) { func TestLongSnapSyncingShallowRepairWithSnapshots(t *testing.T) {
testLongFastSyncingShallowRepair(t, true) testLongSnapSyncingShallowRepair(t, true)
} }
func testLongFastSyncingShallowRepair(t *testing.T, snapshots bool) { func testLongSnapSyncingShallowRepair(t *testing.T, snapshots bool) {
// Chain: // Chain:
// G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10->C11->C12->C13->C14->C15->C16->C17->C18 (HEAD) // G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10->C11->C12->C13->C14->C15->C16->C17->C18 (HEAD)
// //
@ -800,10 +800,10 @@ func testLongFastSyncingShallowRepair(t *testing.T, snapshots bool) {
// process crashed. In this case we expect the chain to detect that it was fast // process crashed. In this case we expect the chain to detect that it was fast
// syncing and not delete anything, since we can just pick up directly where we // syncing and not delete anything, since we can just pick up directly where we
// left off. // left off.
func TestLongFastSyncingDeepRepair(t *testing.T) { testLongFastSyncingDeepRepair(t, false) } func TestLongSnapSyncingDeepRepair(t *testing.T) { testLongSnapSyncingDeepRepair(t, false) }
func TestLongFastSyncingDeepRepairWithSnapshots(t *testing.T) { testLongFastSyncingDeepRepair(t, true) } func TestLongSnapSyncingDeepRepairWithSnapshots(t *testing.T) { testLongSnapSyncingDeepRepair(t, true) }
func testLongFastSyncingDeepRepair(t *testing.T, snapshots bool) { func testLongSnapSyncingDeepRepair(t *testing.T, snapshots bool) {
// Chain: // Chain:
// G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10->C11->C12->C13->C14->C15->C16->C17->C18->C19->C20->C21->C22->C23->C24 (HEAD) // G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10->C11->C12->C13->C14->C15->C16->C17->C18->C19->C20->C21->C22->C23->C24 (HEAD)
// //
@ -946,14 +946,14 @@ func testLongOldForkedDeepRepair(t *testing.T, snapshots bool) {
// the side chain is below the committed block. In this case we expect the chain // the side chain is below the committed block. In this case we expect the chain
// to be rolled back to the committed block, with everything afterwards kept as // to be rolled back to the committed block, with everything afterwards kept as
// fast sync data; the side chain completely nuked by the freezer. // fast sync data; the side chain completely nuked by the freezer.
func TestLongOldForkedFastSyncedShallowRepair(t *testing.T) { func TestLongOldForkedSnapSyncedShallowRepair(t *testing.T) {
testLongOldForkedFastSyncedShallowRepair(t, false) testLongOldForkedSnapSyncedShallowRepair(t, false)
} }
func TestLongOldForkedFastSyncedShallowRepairWithSnapshots(t *testing.T) { func TestLongOldForkedSnapSyncedShallowRepairWithSnapshots(t *testing.T) {
testLongOldForkedFastSyncedShallowRepair(t, true) testLongOldForkedSnapSyncedShallowRepair(t, true)
} }
func testLongOldForkedFastSyncedShallowRepair(t *testing.T, snapshots bool) { func testLongOldForkedSnapSyncedShallowRepair(t *testing.T, snapshots bool) {
// Chain: // Chain:
// G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10->C11->C12->C13->C14->C15->C16->C17->C18 (HEAD) // G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10->C11->C12->C13->C14->C15->C16->C17->C18 (HEAD)
// └->S1->S2->S3 // └->S1->S2->S3
@ -998,14 +998,14 @@ func testLongOldForkedFastSyncedShallowRepair(t *testing.T, snapshots bool) {
// the side chain is below the committed block. In this case we expect the canonical // the side chain is below the committed block. In this case we expect the canonical
// chain to be rolled back to the committed block, with everything afterwards deleted; // chain to be rolled back to the committed block, with everything afterwards deleted;
// the side chain completely nuked by the freezer. // the side chain completely nuked by the freezer.
func TestLongOldForkedFastSyncedDeepRepair(t *testing.T) { func TestLongOldForkedSnapSyncedDeepRepair(t *testing.T) {
testLongOldForkedFastSyncedDeepRepair(t, false) testLongOldForkedSnapSyncedDeepRepair(t, false)
} }
func TestLongOldForkedFastSyncedDeepRepairWithSnapshots(t *testing.T) { func TestLongOldForkedSnapSyncedDeepRepairWithSnapshots(t *testing.T) {
testLongOldForkedFastSyncedDeepRepair(t, true) testLongOldForkedSnapSyncedDeepRepair(t, true)
} }
func testLongOldForkedFastSyncedDeepRepair(t *testing.T, snapshots bool) { func testLongOldForkedSnapSyncedDeepRepair(t *testing.T, snapshots bool) {
// Chain: // Chain:
// G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10->C11->C12->C13->C14->C15->C16->C17->C18->C19->C20->C21->C22->C23->C24 (HEAD) // G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10->C11->C12->C13->C14->C15->C16->C17->C18->C19->C20->C21->C22->C23->C24 (HEAD)
// └->S1->S2->S3 // └->S1->S2->S3
@ -1049,14 +1049,14 @@ func testLongOldForkedFastSyncedDeepRepair(t *testing.T, snapshots bool) {
// chain is below the committed block. In this case we expect the chain to detect // chain is below the committed block. In this case we expect the chain to detect
// that it was fast syncing and not delete anything. The side chain is completely // that it was fast syncing and not delete anything. The side chain is completely
// nuked by the freezer. // nuked by the freezer.
func TestLongOldForkedFastSyncingShallowRepair(t *testing.T) { func TestLongOldForkedSnapSyncingShallowRepair(t *testing.T) {
testLongOldForkedFastSyncingShallowRepair(t, false) testLongOldForkedSnapSyncingShallowRepair(t, false)
} }
func TestLongOldForkedFastSyncingShallowRepairWithSnapshots(t *testing.T) { func TestLongOldForkedSnapSyncingShallowRepairWithSnapshots(t *testing.T) {
testLongOldForkedFastSyncingShallowRepair(t, true) testLongOldForkedSnapSyncingShallowRepair(t, true)
} }
func testLongOldForkedFastSyncingShallowRepair(t *testing.T, snapshots bool) { func testLongOldForkedSnapSyncingShallowRepair(t *testing.T, snapshots bool) {
// Chain: // Chain:
// G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10->C11->C12->C13->C14->C15->C16->C17->C18 (HEAD) // G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10->C11->C12->C13->C14->C15->C16->C17->C18 (HEAD)
// └->S1->S2->S3 // └->S1->S2->S3
@ -1101,14 +1101,14 @@ func testLongOldForkedFastSyncingShallowRepair(t *testing.T, snapshots bool) {
// chain is below the committed block. In this case we expect the chain to detect // chain is below the committed block. In this case we expect the chain to detect
// that it was fast syncing and not delete anything. The side chain is completely // that it was fast syncing and not delete anything. The side chain is completely
// nuked by the freezer. // nuked by the freezer.
func TestLongOldForkedFastSyncingDeepRepair(t *testing.T) { func TestLongOldForkedSnapSyncingDeepRepair(t *testing.T) {
testLongOldForkedFastSyncingDeepRepair(t, false) testLongOldForkedSnapSyncingDeepRepair(t, false)
} }
func TestLongOldForkedFastSyncingDeepRepairWithSnapshots(t *testing.T) { func TestLongOldForkedSnapSyncingDeepRepairWithSnapshots(t *testing.T) {
testLongOldForkedFastSyncingDeepRepair(t, true) testLongOldForkedSnapSyncingDeepRepair(t, true)
} }
func testLongOldForkedFastSyncingDeepRepair(t *testing.T, snapshots bool) { func testLongOldForkedSnapSyncingDeepRepair(t *testing.T, snapshots bool) {
// Chain: // Chain:
// G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10->C11->C12->C13->C14->C15->C16->C17->C18->C19->C20->C21->C22->C23->C24 (HEAD) // G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10->C11->C12->C13->C14->C15->C16->C17->C18->C19->C20->C21->C22->C23->C24 (HEAD)
// └->S1->S2->S3 // └->S1->S2->S3
@ -1252,14 +1252,14 @@ func testLongNewerForkedDeepRepair(t *testing.T, snapshots bool) {
// the side chain is above the committed block. In this case we expect the chain // the side chain is above the committed block. In this case we expect the chain
// to be rolled back to the committed block, with everything afterwards kept as fast // to be rolled back to the committed block, with everything afterwards kept as fast
// sync data; the side chain completely nuked by the freezer. // sync data; the side chain completely nuked by the freezer.
func TestLongNewerForkedFastSyncedShallowRepair(t *testing.T) { func TestLongNewerForkedSnapSyncedShallowRepair(t *testing.T) {
testLongNewerForkedFastSyncedShallowRepair(t, false) testLongNewerForkedSnapSyncedShallowRepair(t, false)
} }
func TestLongNewerForkedFastSyncedShallowRepairWithSnapshots(t *testing.T) { func TestLongNewerForkedSnapSyncedShallowRepairWithSnapshots(t *testing.T) {
testLongNewerForkedFastSyncedShallowRepair(t, true) testLongNewerForkedSnapSyncedShallowRepair(t, true)
} }
func testLongNewerForkedFastSyncedShallowRepair(t *testing.T, snapshots bool) { func testLongNewerForkedSnapSyncedShallowRepair(t *testing.T, snapshots bool) {
// Chain: // Chain:
// G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10->C11->C12->C13->C14->C15->C16->C17->C18 (HEAD) // G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10->C11->C12->C13->C14->C15->C16->C17->C18 (HEAD)
// └->S1->S2->S3->S4->S5->S6->S7->S8->S9->S10->S11->S12 // └->S1->S2->S3->S4->S5->S6->S7->S8->S9->S10->S11->S12
@ -1304,14 +1304,14 @@ func testLongNewerForkedFastSyncedShallowRepair(t *testing.T, snapshots bool) {
// the side chain is above the committed block. In this case we expect the canonical // the side chain is above the committed block. In this case we expect the canonical
// chain to be rolled back to the committed block, with everything afterwards deleted; // chain to be rolled back to the committed block, with everything afterwards deleted;
// the side chain completely nuked by the freezer. // the side chain completely nuked by the freezer.
func TestLongNewerForkedFastSyncedDeepRepair(t *testing.T) { func TestLongNewerForkedSnapSyncedDeepRepair(t *testing.T) {
testLongNewerForkedFastSyncedDeepRepair(t, false) testLongNewerForkedSnapSyncedDeepRepair(t, false)
} }
func TestLongNewerForkedFastSyncedDeepRepairWithSnapshots(t *testing.T) { func TestLongNewerForkedSnapSyncedDeepRepairWithSnapshots(t *testing.T) {
testLongNewerForkedFastSyncedDeepRepair(t, true) testLongNewerForkedSnapSyncedDeepRepair(t, true)
} }
func testLongNewerForkedFastSyncedDeepRepair(t *testing.T, snapshots bool) { func testLongNewerForkedSnapSyncedDeepRepair(t *testing.T, snapshots bool) {
// Chain: // Chain:
// G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10->C11->C12->C13->C14->C15->C16->C17->C18->C19->C20->C21->C22->C23->C24 (HEAD) // G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10->C11->C12->C13->C14->C15->C16->C17->C18->C19->C20->C21->C22->C23->C24 (HEAD)
// └->S1->S2->S3->S4->S5->S6->S7->S8->S9->S10->S11->S12 // └->S1->S2->S3->S4->S5->S6->S7->S8->S9->S10->S11->S12
@ -1355,14 +1355,14 @@ func testLongNewerForkedFastSyncedDeepRepair(t *testing.T, snapshots bool) {
// chain is above the committed block. In this case we expect the chain to detect // chain is above the committed block. In this case we expect the chain to detect
// that it was fast syncing and not delete anything. The side chain is completely // that it was fast syncing and not delete anything. The side chain is completely
// nuked by the freezer. // nuked by the freezer.
func TestLongNewerForkedFastSyncingShallowRepair(t *testing.T) { func TestLongNewerForkedSnapSyncingShallowRepair(t *testing.T) {
testLongNewerForkedFastSyncingShallowRepair(t, false) testLongNewerForkedSnapSyncingShallowRepair(t, false)
} }
func TestLongNewerForkedFastSyncingShallowRepairWithSnapshots(t *testing.T) { func TestLongNewerForkedSnapSyncingShallowRepairWithSnapshots(t *testing.T) {
testLongNewerForkedFastSyncingShallowRepair(t, true) testLongNewerForkedSnapSyncingShallowRepair(t, true)
} }
func testLongNewerForkedFastSyncingShallowRepair(t *testing.T, snapshots bool) { func testLongNewerForkedSnapSyncingShallowRepair(t *testing.T, snapshots bool) {
// Chain: // Chain:
// G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10->C11->C12->C13->C14->C15->C16->C17->C18 (HEAD) // G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10->C11->C12->C13->C14->C15->C16->C17->C18 (HEAD)
// └->S1->S2->S3->S4->S5->S6->S7->S8->S9->S10->S11->S12 // └->S1->S2->S3->S4->S5->S6->S7->S8->S9->S10->S11->S12
@ -1407,14 +1407,14 @@ func testLongNewerForkedFastSyncingShallowRepair(t *testing.T, snapshots bool) {
// chain is above the committed block. In this case we expect the chain to detect // chain is above the committed block. In this case we expect the chain to detect
// that it was fast syncing and not delete anything. The side chain is completely // that it was fast syncing and not delete anything. The side chain is completely
// nuked by the freezer. // nuked by the freezer.
func TestLongNewerForkedFastSyncingDeepRepair(t *testing.T) { func TestLongNewerForkedSnapSyncingDeepRepair(t *testing.T) {
testLongNewerForkedFastSyncingDeepRepair(t, false) testLongNewerForkedSnapSyncingDeepRepair(t, false)
} }
func TestLongNewerForkedFastSyncingDeepRepairWithSnapshots(t *testing.T) { func TestLongNewerForkedSnapSyncingDeepRepairWithSnapshots(t *testing.T) {
testLongNewerForkedFastSyncingDeepRepair(t, true) testLongNewerForkedSnapSyncingDeepRepair(t, true)
} }
func testLongNewerForkedFastSyncingDeepRepair(t *testing.T, snapshots bool) { func testLongNewerForkedSnapSyncingDeepRepair(t *testing.T, snapshots bool) {
// Chain: // Chain:
// G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10->C11->C12->C13->C14->C15->C16->C17->C18->C19->C20->C21->C22->C23->C24 (HEAD) // G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10->C11->C12->C13->C14->C15->C16->C17->C18->C19->C20->C21->C22->C23->C24 (HEAD)
// └->S1->S2->S3->S4->S5->S6->S7->S8->S9->S10->S11->S12 // └->S1->S2->S3->S4->S5->S6->S7->S8->S9->S10->S11->S12
@ -1552,14 +1552,14 @@ func testLongReorgedDeepRepair(t *testing.T, snapshots bool) {
// expect the chain to be rolled back to the committed block, with everything // expect the chain to be rolled back to the committed block, with everything
// afterwards kept as fast sync data. The side chain completely nuked by the // afterwards kept as fast sync data. The side chain completely nuked by the
// freezer. // freezer.
func TestLongReorgedFastSyncedShallowRepair(t *testing.T) { func TestLongReorgedSnapSyncedShallowRepair(t *testing.T) {
testLongReorgedFastSyncedShallowRepair(t, false) testLongReorgedSnapSyncedShallowRepair(t, false)
} }
func TestLongReorgedFastSyncedShallowRepairWithSnapshots(t *testing.T) { func TestLongReorgedSnapSyncedShallowRepairWithSnapshots(t *testing.T) {
testLongReorgedFastSyncedShallowRepair(t, true) testLongReorgedSnapSyncedShallowRepair(t, true)
} }
func testLongReorgedFastSyncedShallowRepair(t *testing.T, snapshots bool) { func testLongReorgedSnapSyncedShallowRepair(t *testing.T, snapshots bool) {
// Chain: // Chain:
// G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10->C11->C12->C13->C14->C15->C16->C17->C18 (HEAD) // G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10->C11->C12->C13->C14->C15->C16->C17->C18 (HEAD)
// └->S1->S2->S3->S4->S5->S6->S7->S8->S9->S10->S11->S12->S13->S14->S15->S16->S17->S18->S19->S20->S21->S22->S23->S24->S25->S26 // └->S1->S2->S3->S4->S5->S6->S7->S8->S9->S10->S11->S12->S13->S14->S15->S16->S17->S18->S19->S20->S21->S22->S23->S24->S25->S26
@ -1603,14 +1603,14 @@ func testLongReorgedFastSyncedShallowRepair(t *testing.T, snapshots bool) {
// was already committed to disk and then the process crashed. In this case we // was already committed to disk and then the process crashed. In this case we
// expect the canonical chain to be rolled back to the committed block, with // expect the canonical chain to be rolled back to the committed block, with
// everything afterwards deleted. The side chain completely nuked by the freezer. // everything afterwards deleted. The side chain completely nuked by the freezer.
func TestLongReorgedFastSyncedDeepRepair(t *testing.T) { func TestLongReorgedSnapSyncedDeepRepair(t *testing.T) {
testLongReorgedFastSyncedDeepRepair(t, false) testLongReorgedSnapSyncedDeepRepair(t, false)
} }
func TestLongReorgedFastSyncedDeepRepairWithSnapshots(t *testing.T) { func TestLongReorgedSnapSyncedDeepRepairWithSnapshots(t *testing.T) {
testLongReorgedFastSyncedDeepRepair(t, true) testLongReorgedSnapSyncedDeepRepair(t, true)
} }
func testLongReorgedFastSyncedDeepRepair(t *testing.T, snapshots bool) { func testLongReorgedSnapSyncedDeepRepair(t *testing.T, snapshots bool) {
// Chain: // Chain:
// G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10->C11->C12->C13->C14->C15->C16->C17->C18->C19->C20->C21->C22->C23->C24 (HEAD) // G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10->C11->C12->C13->C14->C15->C16->C17->C18->C19->C20->C21->C22->C23->C24 (HEAD)
// └->S1->S2->S3->S4->S5->S6->S7->S8->S9->S10->S11->S12->S13->S14->S15->S16->S17->S18->S19->S20->S21->S22->S23->S24->S25->S26 // └->S1->S2->S3->S4->S5->S6->S7->S8->S9->S10->S11->S12->S13->S14->S15->S16->S17->S18->S19->S20->S21->S22->S23->S24->S25->S26
@ -1653,14 +1653,14 @@ func testLongReorgedFastSyncedDeepRepair(t *testing.T, snapshots bool) {
// was not yet committed, but the process crashed. In this case we expect the // was not yet committed, but the process crashed. In this case we expect the
// chain to detect that it was fast syncing and not delete anything, since we // chain to detect that it was fast syncing and not delete anything, since we
// can just pick up directly where we left off. // can just pick up directly where we left off.
func TestLongReorgedFastSyncingShallowRepair(t *testing.T) { func TestLongReorgedSnapSyncingShallowRepair(t *testing.T) {
testLongReorgedFastSyncingShallowRepair(t, false) testLongReorgedSnapSyncingShallowRepair(t, false)
} }
func TestLongReorgedFastSyncingShallowRepairWithSnapshots(t *testing.T) { func TestLongReorgedSnapSyncingShallowRepairWithSnapshots(t *testing.T) {
testLongReorgedFastSyncingShallowRepair(t, true) testLongReorgedSnapSyncingShallowRepair(t, true)
} }
func testLongReorgedFastSyncingShallowRepair(t *testing.T, snapshots bool) { func testLongReorgedSnapSyncingShallowRepair(t *testing.T, snapshots bool) {
// Chain: // Chain:
// G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10->C11->C12->C13->C14->C15->C16->C17->C18 (HEAD) // G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10->C11->C12->C13->C14->C15->C16->C17->C18 (HEAD)
// └->S1->S2->S3->S4->S5->S6->S7->S8->S9->S10->S11->S12->S13->S14->S15->S16->S17->S18->S19->S20->S21->S22->S23->S24->S25->S26 // └->S1->S2->S3->S4->S5->S6->S7->S8->S9->S10->S11->S12->S13->S14->S15->S16->S17->S18->S19->S20->S21->S22->S23->S24->S25->S26
@ -1704,14 +1704,14 @@ func testLongReorgedFastSyncingShallowRepair(t *testing.T, snapshots bool) {
// was not yet committed, but the process crashed. In this case we expect the // was not yet committed, but the process crashed. In this case we expect the
// chain to detect that it was fast syncing and not delete anything, since we // chain to detect that it was fast syncing and not delete anything, since we
// can just pick up directly where we left off. // can just pick up directly where we left off.
func TestLongReorgedFastSyncingDeepRepair(t *testing.T) { func TestLongReorgedSnapSyncingDeepRepair(t *testing.T) {
testLongReorgedFastSyncingDeepRepair(t, false) testLongReorgedSnapSyncingDeepRepair(t, false)
} }
func TestLongReorgedFastSyncingDeepRepairWithSnapshots(t *testing.T) { func TestLongReorgedSnapSyncingDeepRepairWithSnapshots(t *testing.T) {
testLongReorgedFastSyncingDeepRepair(t, true) testLongReorgedSnapSyncingDeepRepair(t, true)
} }
func testLongReorgedFastSyncingDeepRepair(t *testing.T, snapshots bool) { func testLongReorgedSnapSyncingDeepRepair(t *testing.T, snapshots bool) {
// Chain: // Chain:
// G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10->C11->C12->C13->C14->C15->C16->C17->C18->C19->C20->C21->C22->C23->C24 (HEAD) // G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10->C11->C12->C13->C14->C15->C16->C17->C18->C19->C20->C21->C22->C23->C24 (HEAD)
// └->S1->S2->S3->S4->S5->S6->S7->S8->S9->S10->S11->S12->S13->S14->S15->S16->S17->S18->S19->S20->S21->S22->S23->S24->S25->S26 // └->S1->S2->S3->S4->S5->S6->S7->S8->S9->S10->S11->S12->S13->S14->S15->S16->S17->S18->S19->S20->S21->S22->S23->S24->S25->S26
@ -1829,7 +1829,7 @@ func testRepair(t *testing.T, tt *rewindTest, snapshots bool) {
// Pull the plug on the database, simulating a hard crash // Pull the plug on the database, simulating a hard crash
db.Close() db.Close()
// Start a new blockchain back up and see where the repait leads us // Start a new blockchain back up and see where the repair leads us
db, err = rawdb.NewLevelDBDatabaseWithFreezer(datadir, 0, 0, datadir, "", false) db, err = rawdb.NewLevelDBDatabaseWithFreezer(datadir, 0, 0, datadir, "", false)
if err != nil { if err != nil {
t.Fatalf("Failed to reopen persistent database: %v", err) t.Fatalf("Failed to reopen persistent database: %v", err)

View File

@ -194,10 +194,10 @@ func testShortSetHead(t *testing.T, snapshots bool) {
// Everything above the sethead point should be deleted. In between the committed // Everything above the sethead point should be deleted. In between the committed
// block and the requested head the data can remain as "fast sync" data to avoid // block and the requested head the data can remain as "fast sync" data to avoid
// redownloading it. // redownloading it.
func TestShortFastSyncedSetHead(t *testing.T) { testShortFastSyncedSetHead(t, false) } func TestShortSnapSyncedSetHead(t *testing.T) { testShortSnapSyncedSetHead(t, false) }
func TestShortFastSyncedSetHeadWithSnapshots(t *testing.T) { testShortFastSyncedSetHead(t, true) } func TestShortSnapSyncedSetHeadWithSnapshots(t *testing.T) { testShortSnapSyncedSetHead(t, true) }
func testShortFastSyncedSetHead(t *testing.T, snapshots bool) { func testShortSnapSyncedSetHead(t *testing.T, snapshots bool) {
// Chain: // Chain:
// G->C1->C2->C3->C4->C5->C6->C7->C8 (HEAD) // G->C1->C2->C3->C4->C5->C6->C7->C8 (HEAD)
// //
@ -236,10 +236,10 @@ func testShortFastSyncedSetHead(t *testing.T, snapshots bool) {
// detect that it was fast syncing and delete everything from the new head, since // detect that it was fast syncing and delete everything from the new head, since
// we can just pick up fast syncing from there. The head full block should be set // we can just pick up fast syncing from there. The head full block should be set
// to the genesis. // to the genesis.
func TestShortFastSyncingSetHead(t *testing.T) { testShortFastSyncingSetHead(t, false) } func TestShortSnapSyncingSetHead(t *testing.T) { testShortSnapSyncingSetHead(t, false) }
func TestShortFastSyncingSetHeadWithSnapshots(t *testing.T) { testShortFastSyncingSetHead(t, true) } func TestShortSnapSyncingSetHeadWithSnapshots(t *testing.T) { testShortSnapSyncingSetHead(t, true) }
func testShortFastSyncingSetHead(t *testing.T, snapshots bool) { func testShortSnapSyncingSetHead(t *testing.T, snapshots bool) {
// Chain: // Chain:
// G->C1->C2->C3->C4->C5->C6->C7->C8 (HEAD) // G->C1->C2->C3->C4->C5->C6->C7->C8 (HEAD)
// //
@ -326,14 +326,14 @@ func testShortOldForkedSetHead(t *testing.T, snapshots bool) {
// block. Everything above the sethead point should be deleted. In between the // block. Everything above the sethead point should be deleted. In between the
// committed block and the requested head the data can remain as "fast sync" data // committed block and the requested head the data can remain as "fast sync" data
// to avoid redownloading it. The side chain should be left alone as it was shorter. // to avoid redownloading it. The side chain should be left alone as it was shorter.
func TestShortOldForkedFastSyncedSetHead(t *testing.T) { func TestShortOldForkedSnapSyncedSetHead(t *testing.T) {
testShortOldForkedFastSyncedSetHead(t, false) testShortOldForkedSnapSyncedSetHead(t, false)
} }
func TestShortOldForkedFastSyncedSetHeadWithSnapshots(t *testing.T) { func TestShortOldForkedSnapSyncedSetHeadWithSnapshots(t *testing.T) {
testShortOldForkedFastSyncedSetHead(t, true) testShortOldForkedSnapSyncedSetHead(t, true)
} }
func testShortOldForkedFastSyncedSetHead(t *testing.T, snapshots bool) { func testShortOldForkedSnapSyncedSetHead(t *testing.T, snapshots bool) {
// Chain: // Chain:
// G->C1->C2->C3->C4->C5->C6->C7->C8 (HEAD) // G->C1->C2->C3->C4->C5->C6->C7->C8 (HEAD)
// └->S1->S2->S3 // └->S1->S2->S3
@ -375,14 +375,14 @@ func testShortOldForkedFastSyncedSetHead(t *testing.T, snapshots bool) {
// the chain to detect that it was fast syncing and delete everything from the new // the chain to detect that it was fast syncing and delete everything from the new
// head, since we can just pick up fast syncing from there. The head full block // head, since we can just pick up fast syncing from there. The head full block
// should be set to the genesis. // should be set to the genesis.
func TestShortOldForkedFastSyncingSetHead(t *testing.T) { func TestShortOldForkedSnapSyncingSetHead(t *testing.T) {
testShortOldForkedFastSyncingSetHead(t, false) testShortOldForkedSnapSyncingSetHead(t, false)
} }
func TestShortOldForkedFastSyncingSetHeadWithSnapshots(t *testing.T) { func TestShortOldForkedSnapSyncingSetHeadWithSnapshots(t *testing.T) {
testShortOldForkedFastSyncingSetHead(t, true) testShortOldForkedSnapSyncingSetHead(t, true)
} }
func testShortOldForkedFastSyncingSetHead(t *testing.T, snapshots bool) { func testShortOldForkedSnapSyncingSetHead(t *testing.T, snapshots bool) {
// Chain: // Chain:
// G->C1->C2->C3->C4->C5->C6->C7->C8 (HEAD) // G->C1->C2->C3->C4->C5->C6->C7->C8 (HEAD)
// └->S1->S2->S3 // └->S1->S2->S3
@ -478,14 +478,14 @@ func testShortNewlyForkedSetHead(t *testing.T, snapshots bool) {
// The side chain could be left alone if the fork point was before the new head // The side chain could be left alone if the fork point was before the new head
// we are deleting to, but it would be exceedingly hard to detect that case and // we are deleting to, but it would be exceedingly hard to detect that case and
// properly handle it, so we'll trade extra work in exchange for simpler code. // properly handle it, so we'll trade extra work in exchange for simpler code.
func TestShortNewlyForkedFastSyncedSetHead(t *testing.T) { func TestShortNewlyForkedSnapSyncedSetHead(t *testing.T) {
testShortNewlyForkedFastSyncedSetHead(t, false) testShortNewlyForkedSnapSyncedSetHead(t, false)
} }
func TestShortNewlyForkedFastSyncedSetHeadWithSnapshots(t *testing.T) { func TestShortNewlyForkedSnapSyncedSetHeadWithSnapshots(t *testing.T) {
testShortNewlyForkedFastSyncedSetHead(t, true) testShortNewlyForkedSnapSyncedSetHead(t, true)
} }
func testShortNewlyForkedFastSyncedSetHead(t *testing.T, snapshots bool) { func testShortNewlyForkedSnapSyncedSetHead(t *testing.T, snapshots bool) {
// Chain: // Chain:
// G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10 (HEAD) // G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10 (HEAD)
// └->S1->S2->S3->S4->S5->S6->S7->S8 // └->S1->S2->S3->S4->S5->S6->S7->S8
@ -531,14 +531,14 @@ func testShortNewlyForkedFastSyncedSetHead(t *testing.T, snapshots bool) {
// The side chain could be left alone if the fork point was before the new head // The side chain could be left alone if the fork point was before the new head
// we are deleting to, but it would be exceedingly hard to detect that case and // we are deleting to, but it would be exceedingly hard to detect that case and
// properly handle it, so we'll trade extra work in exchange for simpler code. // properly handle it, so we'll trade extra work in exchange for simpler code.
func TestShortNewlyForkedFastSyncingSetHead(t *testing.T) { func TestShortNewlyForkedSnapSyncingSetHead(t *testing.T) {
testShortNewlyForkedFastSyncingSetHead(t, false) testShortNewlyForkedSnapSyncingSetHead(t, false)
} }
func TestShortNewlyForkedFastSyncingSetHeadWithSnapshots(t *testing.T) { func TestShortNewlyForkedSnapSyncingSetHeadWithSnapshots(t *testing.T) {
testShortNewlyForkedFastSyncingSetHead(t, true) testShortNewlyForkedSnapSyncingSetHead(t, true)
} }
func testShortNewlyForkedFastSyncingSetHead(t *testing.T, snapshots bool) { func testShortNewlyForkedSnapSyncingSetHead(t *testing.T, snapshots bool) {
// Chain: // Chain:
// G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10 (HEAD) // G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10 (HEAD)
// └->S1->S2->S3->S4->S5->S6->S7->S8 // └->S1->S2->S3->S4->S5->S6->S7->S8
@ -634,14 +634,14 @@ func testShortReorgedSetHead(t *testing.T, snapshots bool) {
// The side chain could be left alone if the fork point was before the new head // The side chain could be left alone if the fork point was before the new head
// we are deleting to, but it would be exceedingly hard to detect that case and // we are deleting to, but it would be exceedingly hard to detect that case and
// properly handle it, so we'll trade extra work in exchange for simpler code. // properly handle it, so we'll trade extra work in exchange for simpler code.
func TestShortReorgedFastSyncedSetHead(t *testing.T) { func TestShortReorgedSnapSyncedSetHead(t *testing.T) {
testShortReorgedFastSyncedSetHead(t, false) testShortReorgedSnapSyncedSetHead(t, false)
} }
func TestShortReorgedFastSyncedSetHeadWithSnapshots(t *testing.T) { func TestShortReorgedSnapSyncedSetHeadWithSnapshots(t *testing.T) {
testShortReorgedFastSyncedSetHead(t, true) testShortReorgedSnapSyncedSetHead(t, true)
} }
func testShortReorgedFastSyncedSetHead(t *testing.T, snapshots bool) { func testShortReorgedSnapSyncedSetHead(t *testing.T, snapshots bool) {
// Chain: // Chain:
// G->C1->C2->C3->C4->C5->C6->C7->C8 (HEAD) // G->C1->C2->C3->C4->C5->C6->C7->C8 (HEAD)
// └->S1->S2->S3->S4->S5->S6->S7->S8->S9->S10 // └->S1->S2->S3->S4->S5->S6->S7->S8->S9->S10
@ -686,14 +686,14 @@ func testShortReorgedFastSyncedSetHead(t *testing.T, snapshots bool) {
// The side chain could be left to be if the fork point was before the new head // The side chain could be left to be if the fork point was before the new head
// we are deleting to, but it would be exceedingly hard to detect that case and // we are deleting to, but it would be exceedingly hard to detect that case and
// properly handle it, so we'll trade extra work in exchange for simpler code. // properly handle it, so we'll trade extra work in exchange for simpler code.
func TestShortReorgedFastSyncingSetHead(t *testing.T) { func TestShortReorgedSnapSyncingSetHead(t *testing.T) {
testShortReorgedFastSyncingSetHead(t, false) testShortReorgedSnapSyncingSetHead(t, false)
} }
func TestShortReorgedFastSyncingSetHeadWithSnapshots(t *testing.T) { func TestShortReorgedSnapSyncingSetHeadWithSnapshots(t *testing.T) {
testShortReorgedFastSyncingSetHead(t, true) testShortReorgedSnapSyncingSetHead(t, true)
} }
func testShortReorgedFastSyncingSetHead(t *testing.T, snapshots bool) { func testShortReorgedSnapSyncingSetHead(t *testing.T, snapshots bool) {
// Chain: // Chain:
// G->C1->C2->C3->C4->C5->C6->C7->C8 (HEAD) // G->C1->C2->C3->C4->C5->C6->C7->C8 (HEAD)
// └->S1->S2->S3->S4->S5->S6->S7->S8->S9->S10 // └->S1->S2->S3->S4->S5->S6->S7->S8->S9->S10
@ -829,14 +829,14 @@ func testLongDeepSetHead(t *testing.T, snapshots bool) {
// back to the committed block. Everything above the sethead point should be // back to the committed block. Everything above the sethead point should be
// deleted. In between the committed block and the requested head the data can // deleted. In between the committed block and the requested head the data can
// remain as "fast sync" data to avoid redownloading it. // remain as "fast sync" data to avoid redownloading it.
func TestLongFastSyncedShallowSetHead(t *testing.T) { func TestLongSnapSyncedShallowSetHead(t *testing.T) {
testLongFastSyncedShallowSetHead(t, false) testLongSnapSyncedShallowSetHead(t, false)
} }
func TestLongFastSyncedShallowSetHeadWithSnapshots(t *testing.T) { func TestLongSnapSyncedShallowSetHeadWithSnapshots(t *testing.T) {
testLongFastSyncedShallowSetHead(t, true) testLongSnapSyncedShallowSetHead(t, true)
} }
func testLongFastSyncedShallowSetHead(t *testing.T, snapshots bool) { func testLongSnapSyncedShallowSetHead(t *testing.T, snapshots bool) {
// Chain: // Chain:
// G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10->C11->C12->C13->C14->C15->C16->C17->C18 (HEAD) // G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10->C11->C12->C13->C14->C15->C16->C17->C18 (HEAD)
// //
@ -880,10 +880,10 @@ func testLongFastSyncedShallowSetHead(t *testing.T, snapshots bool) {
// which sethead was called. In this case we expect the full chain to be rolled // which sethead was called. In this case we expect the full chain to be rolled
// back to the committed block. Since the ancient limit was underflown, everything // back to the committed block. Since the ancient limit was underflown, everything
// needs to be deleted onwards to avoid creating a gap. // needs to be deleted onwards to avoid creating a gap.
func TestLongFastSyncedDeepSetHead(t *testing.T) { testLongFastSyncedDeepSetHead(t, false) } func TestLongSnapSyncedDeepSetHead(t *testing.T) { testLongSnapSyncedDeepSetHead(t, false) }
func TestLongFastSyncedDeepSetHeadWithSnapshots(t *testing.T) { testLongFastSyncedDeepSetHead(t, true) } func TestLongSnapSyncedDeepSetHeadWithSnapshots(t *testing.T) { testLongSnapSyncedDeepSetHead(t, true) }
func testLongFastSyncedDeepSetHead(t *testing.T, snapshots bool) { func testLongSnapSyncedDeepSetHead(t *testing.T, snapshots bool) {
// Chain: // Chain:
// G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10->C11->C12->C13->C14->C15->C16->C17->C18->C19->C20->C21->C22->C23->C24 (HEAD) // G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10->C11->C12->C13->C14->C15->C16->C17->C18->C19->C20->C21->C22->C23->C24 (HEAD)
// //
@ -926,14 +926,14 @@ func testLongFastSyncedDeepSetHead(t *testing.T, snapshots bool) {
// sethead was called. In this case we expect the chain to detect that it was fast
// syncing and delete everything from the new head, since we can just pick up fast
// syncing from there.
func TestLongSnapSyncingShallowSetHead(t *testing.T) {
	testLongSnapSyncingShallowSetHead(t, false)
}
func TestLongSnapSyncingShallowSetHeadWithSnapshots(t *testing.T) {
	testLongSnapSyncingShallowSetHead(t, true)
}
func testLongSnapSyncingShallowSetHead(t *testing.T, snapshots bool) {
	// Chain:
	// G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10->C11->C12->C13->C14->C15->C16->C17->C18 (HEAD)
	//
@ -977,14 +977,14 @@ func testLongFastSyncingShallowSetHead(t *testing.T, snapshots bool) {
// sethead was called. In this case we expect the chain to detect that it was fast
// syncing and delete everything from the new head, since we can just pick up fast
// syncing from there.
func TestLongSnapSyncingDeepSetHead(t *testing.T) {
	testLongSnapSyncingDeepSetHead(t, false)
}
func TestLongSnapSyncingDeepSetHeadWithSnapshots(t *testing.T) {
	testLongSnapSyncingDeepSetHead(t, true)
}
func testLongSnapSyncingDeepSetHead(t *testing.T, snapshots bool) {
	// Chain:
	// G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10->C11->C12->C13->C14->C15->C16->C17->C18->C19->C20->C21->C22->C23->C24 (HEAD)
	//
@ -1132,14 +1132,14 @@ func testLongOldForkedDeepSetHead(t *testing.T, snapshots bool) {
// sethead point should be deleted. In between the committed block and the
// requested head the data can remain as "fast sync" data to avoid redownloading
// it. The side chain is nuked by the freezer.
func TestLongOldForkedSnapSyncedShallowSetHead(t *testing.T) {
	testLongOldForkedSnapSyncedShallowSetHead(t, false)
}
func TestLongOldForkedSnapSyncedShallowSetHeadWithSnapshots(t *testing.T) {
	testLongOldForkedSnapSyncedShallowSetHead(t, true)
}
func testLongOldForkedSnapSyncedShallowSetHead(t *testing.T, snapshots bool) {
	// Chain:
	// G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10->C11->C12->C13->C14->C15->C16->C17->C18 (HEAD)
	// └->S1->S2->S3
@ -1186,14 +1186,14 @@ func testLongOldForkedFastSyncedShallowSetHead(t *testing.T, snapshots bool) {
// full chain to be rolled back to the committed block. Since the ancient limit was
// underflown, everything needs to be deleted onwards to avoid creating a gap. The
// side chain is nuked by the freezer.
func TestLongOldForkedSnapSyncedDeepSetHead(t *testing.T) {
	testLongOldForkedSnapSyncedDeepSetHead(t, false)
}
func TestLongOldForkedSnapSyncedDeepSetHeadWithSnapshots(t *testing.T) {
	testLongOldForkedSnapSyncedDeepSetHead(t, true)
}
func testLongOldForkedSnapSyncedDeepSetHead(t *testing.T, snapshots bool) {
	// Chain:
	// G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10->C11->C12->C13->C14->C15->C16->C17->C18->C19->C20->C21->C22->C23->C24 (HEAD)
	// └->S1->S2->S3
@ -1239,14 +1239,14 @@ func testLongOldForkedFastSyncedDeepSetHead(t *testing.T, snapshots bool) {
// that it was fast syncing and delete everything from the new head, since we can
// just pick up fast syncing from there. The side chain is completely nuked by the
// freezer.
func TestLongOldForkedSnapSyncingShallowSetHead(t *testing.T) {
	testLongOldForkedSnapSyncingShallowSetHead(t, false)
}
func TestLongOldForkedSnapSyncingShallowSetHeadWithSnapshots(t *testing.T) {
	testLongOldForkedSnapSyncingShallowSetHead(t, true)
}
func testLongOldForkedSnapSyncingShallowSetHead(t *testing.T, snapshots bool) {
	// Chain:
	// G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10->C11->C12->C13->C14->C15->C16->C17->C18 (HEAD)
	// └->S1->S2->S3
@ -1293,14 +1293,14 @@ func testLongOldForkedFastSyncingShallowSetHead(t *testing.T, snapshots bool) {
// that it was fast syncing and delete everything from the new head, since we can
// just pick up fast syncing from there. The side chain is completely nuked by the
// freezer.
func TestLongOldForkedSnapSyncingDeepSetHead(t *testing.T) {
	testLongOldForkedSnapSyncingDeepSetHead(t, false)
}
func TestLongOldForkedSnapSyncingDeepSetHeadWithSnapshots(t *testing.T) {
	testLongOldForkedSnapSyncingDeepSetHead(t, true)
}
func testLongOldForkedSnapSyncingDeepSetHead(t *testing.T, snapshots bool) {
	// Chain:
	// G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10->C11->C12->C13->C14->C15->C16->C17->C18->C19->C20->C21->C22->C23->C24 (HEAD)
	// └->S1->S2->S3
@ -1446,15 +1446,15 @@ func testLongNewerForkedDeepSetHead(t *testing.T, snapshots bool) {
// side chain, where the fast sync pivot point - newer than the ancient limit -
// was already committed to disk and then sethead was called. In this test scenario
// the side chain is above the committed block. In this case the freezer will delete
// the sidechain since it's dangling, reverting to TestLongSnapSyncedShallowSetHead.
func TestLongNewerForkedSnapSyncedShallowSetHead(t *testing.T) {
	testLongNewerForkedSnapSyncedShallowSetHead(t, false)
}
func TestLongNewerForkedSnapSyncedShallowSetHeadWithSnapshots(t *testing.T) {
	testLongNewerForkedSnapSyncedShallowSetHead(t, true)
}
func testLongNewerForkedSnapSyncedShallowSetHead(t *testing.T, snapshots bool) {
	// Chain:
	// G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10->C11->C12->C13->C14->C15->C16->C17->C18 (HEAD)
	// └->S1->S2->S3->S4->S5->S6->S7->S8->S9->S10->S11->S12
@ -1498,15 +1498,15 @@ func testLongNewerForkedFastSyncedShallowSetHead(t *testing.T, snapshots bool) {
// side chain, where the fast sync pivot point - older than the ancient limit -
// was already committed to disk and then sethead was called. In this test scenario
// the side chain is above the committed block. In this case the freezer will delete
// the sidechain since it's dangling, reverting to TestLongSnapSyncedDeepSetHead.
func TestLongNewerForkedSnapSyncedDeepSetHead(t *testing.T) {
	testLongNewerForkedSnapSyncedDeepSetHead(t, false)
}
func TestLongNewerForkedSnapSyncedDeepSetHeadWithSnapshots(t *testing.T) {
	testLongNewerForkedSnapSyncedDeepSetHead(t, true)
}
func testLongNewerForkedSnapSyncedDeepSetHead(t *testing.T, snapshots bool) {
	// Chain:
	// G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10->C11->C12->C13->C14->C15->C16->C17->C18->C19->C20->C21->C22->C23->C24 (HEAD)
	// └->S1->S2->S3->S4->S5->S6->S7->S8->S9->S10->S11->S12
@ -1549,15 +1549,15 @@ func testLongNewerForkedFastSyncedDeepSetHead(t *testing.T, snapshots bool) {
// side chain, where the fast sync pivot point - newer than the ancient limit -
// was not yet committed, but sethead was called. In this test scenario the side
// chain is above the committed block. In this case the freezer will delete the
// sidechain since it's dangling, reverting to TestLongSnapSyncingShallowSetHead.
func TestLongNewerForkedSnapSyncingShallowSetHead(t *testing.T) {
	testLongNewerForkedSnapSyncingShallowSetHead(t, false)
}
func TestLongNewerForkedSnapSyncingShallowSetHeadWithSnapshots(t *testing.T) {
	testLongNewerForkedSnapSyncingShallowSetHead(t, true)
}
func testLongNewerForkedSnapSyncingShallowSetHead(t *testing.T, snapshots bool) {
	// Chain:
	// G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10->C11->C12->C13->C14->C15->C16->C17->C18 (HEAD)
	// └->S1->S2->S3->S4->S5->S6->S7->S8->S9->S10->S11->S12
@ -1601,15 +1601,15 @@ func testLongNewerForkedFastSyncingShallowSetHead(t *testing.T, snapshots bool)
// side chain, where the fast sync pivot point - older than the ancient limit -
// was not yet committed, but sethead was called. In this test scenario the side
// chain is above the committed block. In this case the freezer will delete the
// sidechain since it's dangling, reverting to TestLongSnapSyncingDeepSetHead.
func TestLongNewerForkedSnapSyncingDeepSetHead(t *testing.T) {
	testLongNewerForkedSnapSyncingDeepSetHead(t, false)
}
func TestLongNewerForkedSnapSyncingDeepSetHeadWithSnapshots(t *testing.T) {
	testLongNewerForkedSnapSyncingDeepSetHead(t, true)
}
func testLongNewerForkedSnapSyncingDeepSetHead(t *testing.T, snapshots bool) {
	// Chain:
	// G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10->C11->C12->C13->C14->C15->C16->C17->C18->C19->C20->C21->C22->C23->C24 (HEAD)
	// └->S1->S2->S3->S4->S5->S6->S7->S8->S9->S10->S11->S12
@ -1745,15 +1745,15 @@ func testLongReorgedDeepSetHead(t *testing.T, snapshots bool) {
// side chain, where the fast sync pivot point - newer than the ancient limit -
// was already committed to disk and then sethead was called. In this case the
// freezer will delete the sidechain since it's dangling, reverting to
// TestLongSnapSyncedShallowSetHead.
func TestLongReorgedSnapSyncedShallowSetHead(t *testing.T) {
	testLongReorgedSnapSyncedShallowSetHead(t, false)
}
func TestLongReorgedSnapSyncedShallowSetHeadWithSnapshots(t *testing.T) {
	testLongReorgedSnapSyncedShallowSetHead(t, true)
}
func testLongReorgedSnapSyncedShallowSetHead(t *testing.T, snapshots bool) {
	// Chain:
	// G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10->C11->C12->C13->C14->C15->C16->C17->C18 (HEAD)
	// └->S1->S2->S3->S4->S5->S6->S7->S8->S9->S10->S11->S12->S13->S14->S15->S16->S17->S18->S19->S20->S21->S22->S23->S24->S25->S26
@ -1797,15 +1797,15 @@ func testLongReorgedFastSyncedShallowSetHead(t *testing.T, snapshots bool) {
// side chain, where the fast sync pivot point - older than the ancient limit -
// was already committed to disk and then sethead was called. In this case the
// freezer will delete the sidechain since it's dangling, reverting to
// TestLongSnapSyncedDeepSetHead.
func TestLongReorgedSnapSyncedDeepSetHead(t *testing.T) {
	testLongReorgedSnapSyncedDeepSetHead(t, false)
}
func TestLongReorgedSnapSyncedDeepSetHeadWithSnapshots(t *testing.T) {
	testLongReorgedSnapSyncedDeepSetHead(t, true)
}
func testLongReorgedSnapSyncedDeepSetHead(t *testing.T, snapshots bool) {
	// Chain:
	// G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10->C11->C12->C13->C14->C15->C16->C17->C18->C19->C20->C21->C22->C23->C24 (HEAD)
	// └->S1->S2->S3->S4->S5->S6->S7->S8->S9->S10->S11->S12->S13->S14->S15->S16->S17->S18->S19->S20->S21->S22->S23->S24->S25->S26
@ -1850,14 +1850,14 @@ func testLongReorgedFastSyncedDeepSetHead(t *testing.T, snapshots bool) {
// chain to detect that it was fast syncing and delete everything from the new
// head, since we can just pick up fast syncing from there. The side chain is
// completely nuked by the freezer.
func TestLongReorgedSnapSyncingShallowSetHead(t *testing.T) {
	testLongReorgedSnapSyncingShallowSetHead(t, false)
}
func TestLongReorgedSnapSyncingShallowSetHeadWithSnapshots(t *testing.T) {
	testLongReorgedSnapSyncingShallowSetHead(t, true)
}
func testLongReorgedSnapSyncingShallowSetHead(t *testing.T, snapshots bool) {
	// Chain:
	// G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10->C11->C12->C13->C14->C15->C16->C17->C18 (HEAD)
	// └->S1->S2->S3->S4->S5->S6->S7->S8->S9->S10->S11->S12->S13->S14->S15->S16->S17->S18->S19->S20->S21->S22->S23->S24->S25->S26
@ -1903,14 +1903,14 @@ func testLongReorgedFastSyncingShallowSetHead(t *testing.T, snapshots bool) {
// chain to detect that it was fast syncing and delete everything from the new
// head, since we can just pick up fast syncing from there. The side chain is
// completely nuked by the freezer.
func TestLongReorgedSnapSyncingDeepSetHead(t *testing.T) {
	testLongReorgedSnapSyncingDeepSetHead(t, false)
}
func TestLongReorgedSnapSyncingDeepSetHeadWithSnapshots(t *testing.T) {
	testLongReorgedSnapSyncingDeepSetHead(t, true)
}
func testLongReorgedSnapSyncingDeepSetHead(t *testing.T, snapshots bool) {
	// Chain:
	// G->C1->C2->C3->C4->C5->C6->C7->C8->C9->C10->C11->C12->C13->C14->C15->C16->C17->C18->C19->C20->C21->C22->C23->C24 (HEAD)
	// └->S1->S2->S3->S4->S5->S6->S7->S8->S9->S10->S11->S12->S13->S14->S15->S16->S17->S18->S19->S20->S21->S22->S23->S24->S25->S26


@ -28,13 +28,16 @@ import (
	"time"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/common/math"
	"github.com/ethereum/go-ethereum/consensus"
	"github.com/ethereum/go-ethereum/consensus/beacon"
	"github.com/ethereum/go-ethereum/consensus/ethash"
	"github.com/ethereum/go-ethereum/core/rawdb"
	"github.com/ethereum/go-ethereum/core/state"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/core/vm"
	"github.com/ethereum/go-ethereum/crypto"
	"github.com/ethereum/go-ethereum/eth/tracers/logger"
	"github.com/ethereum/go-ethereum/ethdb"
	"github.com/ethereum/go-ethereum/params"
	"github.com/ethereum/go-ethereum/trie"
@ -210,6 +213,55 @@ func TestLastBlock(t *testing.T) {
	}
}
// Tests inserting the blocks/headers after the fork choice rule is changed.
// The chain is reorged to whatever is specified.
func testInsertAfterMerge(t *testing.T, blockchain *BlockChain, i, n int, full bool) {
	// Copy old chain up to #i into a new db
	db, blockchain2, err := newCanonical(ethash.NewFaker(), i, full)
	if err != nil {
		t.Fatal("could not make new canonical in testInsertAfterMerge", err)
	}
	defer blockchain2.Stop()

	// Assert the chains have the same header/block at #i
	var hash1, hash2 common.Hash
	if full {
		hash1 = blockchain.GetBlockByNumber(uint64(i)).Hash()
		hash2 = blockchain2.GetBlockByNumber(uint64(i)).Hash()
	} else {
		hash1 = blockchain.GetHeaderByNumber(uint64(i)).Hash()
		hash2 = blockchain2.GetHeaderByNumber(uint64(i)).Hash()
	}
	if hash1 != hash2 {
		t.Errorf("chain content mismatch at %d: have hash %v, want hash %v", i, hash2, hash1)
	}
	// Extend the newly created chain
	if full {
		blockChainB := makeBlockChain(blockchain2.CurrentBlock(), n, ethash.NewFaker(), db, forkSeed)
		if _, err := blockchain2.InsertChain(blockChainB); err != nil {
			t.Fatalf("failed to insert forking chain: %v", err)
		}
		if blockchain2.CurrentBlock().NumberU64() != blockChainB[len(blockChainB)-1].NumberU64() {
			t.Fatalf("failed to reorg to the given chain")
		}
		if blockchain2.CurrentBlock().Hash() != blockChainB[len(blockChainB)-1].Hash() {
			t.Fatalf("failed to reorg to the given chain")
		}
	} else {
		headerChainB := makeHeaderChain(blockchain2.CurrentHeader(), n, ethash.NewFaker(), db, forkSeed)
		if _, err := blockchain2.InsertHeaderChain(headerChainB, 1); err != nil {
			t.Fatalf("failed to insert forking chain: %v", err)
		}
		if blockchain2.CurrentHeader().Number.Uint64() != headerChainB[len(headerChainB)-1].Number.Uint64() {
			t.Fatalf("failed to reorg to the given chain")
		}
		if blockchain2.CurrentHeader().Hash() != headerChainB[len(headerChainB)-1].Hash() {
			t.Fatalf("failed to reorg to the given chain")
		}
	}
}
// Tests that given a starting canonical chain of a given size, it can be extended
// with various length chains.
func TestExtendCanonicalHeaders(t *testing.T) { testExtendCanonical(t, false) }
@ -238,6 +290,25 @@ func testExtendCanonical(t *testing.T, full bool) {
	testFork(t, processor, length, 10, full, better)
}
// Tests that given a starting canonical chain of a given size, it can be extended
// with various length chains.
func TestExtendCanonicalHeadersAfterMerge(t *testing.T) { testExtendCanonicalAfterMerge(t, false) }
func TestExtendCanonicalBlocksAfterMerge(t *testing.T)  { testExtendCanonicalAfterMerge(t, true) }

func testExtendCanonicalAfterMerge(t *testing.T, full bool) {
	length := 5

	// Make first chain starting from genesis
	_, processor, err := newCanonical(ethash.NewFaker(), length, full)
	if err != nil {
		t.Fatalf("failed to make new canonical chain: %v", err)
	}
	defer processor.Stop()

	testInsertAfterMerge(t, processor, length, 1, full)
	testInsertAfterMerge(t, processor, length, 10, full)
}
// Tests that given a starting canonical chain of a given size, creating shorter // Tests that given a starting canonical chain of a given size, creating shorter
// forks do not take canonical ownership. // forks do not take canonical ownership.
func TestShorterForkHeaders(t *testing.T) { testShorterFork(t, false) } func TestShorterForkHeaders(t *testing.T) { testShorterFork(t, false) }
@ -268,6 +339,29 @@ func testShorterFork(t *testing.T, full bool) {
	testFork(t, processor, 5, 4, full, worse)
}
// Tests that given a starting canonical chain of a given size, creating shorter
// forks do not take canonical ownership.
func TestShorterForkHeadersAfterMerge(t *testing.T) { testShorterForkAfterMerge(t, false) }
func TestShorterForkBlocksAfterMerge(t *testing.T)  { testShorterForkAfterMerge(t, true) }

func testShorterForkAfterMerge(t *testing.T, full bool) {
	length := 10

	// Make first chain starting from genesis
	_, processor, err := newCanonical(ethash.NewFaker(), length, full)
	if err != nil {
		t.Fatalf("failed to make new canonical chain: %v", err)
	}
	defer processor.Stop()

	testInsertAfterMerge(t, processor, 0, 3, full)
	testInsertAfterMerge(t, processor, 0, 7, full)
	testInsertAfterMerge(t, processor, 1, 1, full)
	testInsertAfterMerge(t, processor, 1, 7, full)
	testInsertAfterMerge(t, processor, 5, 3, full)
	testInsertAfterMerge(t, processor, 5, 4, full)
}
// Tests that given a starting canonical chain of a given size, creating longer
// forks do take canonical ownership.
func TestLongerForkHeaders(t *testing.T) { testLongerFork(t, false) }
@ -283,19 +377,35 @@ func testLongerFork(t *testing.T, full bool) {
	}
	defer processor.Stop()

	testInsertAfterMerge(t, processor, 0, 11, full)
	testInsertAfterMerge(t, processor, 0, 15, full)
	testInsertAfterMerge(t, processor, 1, 10, full)
	testInsertAfterMerge(t, processor, 1, 12, full)
	testInsertAfterMerge(t, processor, 5, 6, full)
	testInsertAfterMerge(t, processor, 5, 8, full)
}

// Tests that given a starting canonical chain of a given size, creating longer
// forks do take canonical ownership.
func TestLongerForkHeadersAfterMerge(t *testing.T) { testLongerForkAfterMerge(t, false) }
func TestLongerForkBlocksAfterMerge(t *testing.T)  { testLongerForkAfterMerge(t, true) }

func testLongerForkAfterMerge(t *testing.T, full bool) {
	length := 10

	// Make first chain starting from genesis
	_, processor, err := newCanonical(ethash.NewFaker(), length, full)
	if err != nil {
		t.Fatalf("failed to make new canonical chain: %v", err)
	}
	defer processor.Stop()

	testInsertAfterMerge(t, processor, 0, 11, full)
	testInsertAfterMerge(t, processor, 0, 15, full)
	testInsertAfterMerge(t, processor, 1, 10, full)
	testInsertAfterMerge(t, processor, 1, 12, full)
	testInsertAfterMerge(t, processor, 5, 6, full)
	testInsertAfterMerge(t, processor, 5, 8, full)
}
// Tests that given a starting canonical chain of a given size, creating equal
@ -328,6 +438,29 @@ func testEqualFork(t *testing.T, full bool) {
	testFork(t, processor, 9, 1, full, equal)
}
// Tests that given a starting canonical chain of a given size, creating equal
// forks do take canonical ownership.
func TestEqualForkHeadersAfterMerge(t *testing.T) { testEqualForkAfterMerge(t, false) }
func TestEqualForkBlocksAfterMerge(t *testing.T)  { testEqualForkAfterMerge(t, true) }

func testEqualForkAfterMerge(t *testing.T, full bool) {
	length := 10

	// Make first chain starting from genesis
	_, processor, err := newCanonical(ethash.NewFaker(), length, full)
	if err != nil {
		t.Fatalf("failed to make new canonical chain: %v", err)
	}
	defer processor.Stop()

	testInsertAfterMerge(t, processor, 0, 10, full)
	testInsertAfterMerge(t, processor, 1, 9, full)
	testInsertAfterMerge(t, processor, 2, 8, full)
	testInsertAfterMerge(t, processor, 5, 5, full)
	testInsertAfterMerge(t, processor, 6, 4, full)
	testInsertAfterMerge(t, processor, 9, 1, full)
}
// Tests that chains missing links do not get accepted by the processor.
func TestBrokenHeaderChain(t *testing.T) { testBrokenChain(t, false) }
func TestBrokenBlockChain(t *testing.T)  { testBrokenChain(t, true) }
@ -1800,21 +1933,56 @@ func TestLowDiffLongChain(t *testing.T) {
// - C is canon chain, containing blocks [G..Cn..Cm]
// - A common ancestor is placed at prune-point + blocksBetweenCommonAncestorAndPruneblock
// - The sidechain S is prepended with numCanonBlocksInSidechain blocks from the canon chain
//
// The mergePoint can take one of these values:
//  -1: the transition won't happen
//   0: the transition happens since genesis
//   1: the transition happens after some chain segments
func testSideImport(t *testing.T, numCanonBlocksInSidechain, blocksBetweenCommonAncestorAndPruneblock int, mergePoint int) {
	// Copy the TestChainConfig so we can modify it during tests
	chainConfig := *params.TestChainConfig

	// Generate a canonical chain to act as the main dataset
	var (
		merger    = consensus.NewMerger(rawdb.NewMemoryDatabase())
		genEngine = beacon.New(ethash.NewFaker())
		runEngine = beacon.New(ethash.NewFaker())
		db        = rawdb.NewMemoryDatabase()

		key, _ = crypto.HexToECDSA("b71c71a67e1177ad4e901695e1b4b9ee17ae16c6668d313eac2f96dbcda3f291")
		addr   = crypto.PubkeyToAddress(key.PublicKey)
		nonce  = uint64(0)

		gspec = &Genesis{
			Config:  &chainConfig,
			Alloc:   GenesisAlloc{addr: {Balance: big.NewInt(math.MaxInt64)}},
			BaseFee: big.NewInt(params.InitialBaseFee),
		}
		signer     = types.LatestSigner(gspec.Config)
		genesis, _ = gspec.Commit(db)
	)
	// Generate and import the canonical chain
	diskdb := rawdb.NewMemoryDatabase()
	gspec.MustCommit(diskdb)
	chain, err := NewBlockChain(diskdb, nil, &chainConfig, runEngine, vm.Config{}, nil, nil)
	if err != nil {
		t.Fatalf("failed to create tester chain: %v", err)
	}
	// Activate the transition since genesis if required
	if mergePoint == 0 {
		merger.ReachTTD()
		merger.FinalizePoS()
		// Set the terminal total difficulty in the config
		gspec.Config.TerminalTotalDifficulty = big.NewInt(0)
	}
	blocks, _ := GenerateChain(&chainConfig, genesis, genEngine, db, 2*TriesInMemory, func(i int, gen *BlockGen) {
		tx, err := types.SignTx(types.NewTransaction(nonce, common.HexToAddress("deadbeef"), big.NewInt(100), 21000, big.NewInt(int64(i+1)*params.GWei), nil), signer, key)
		if err != nil {
			t.Fatalf("failed to create tx: %v", err)
		}
		gen.AddTx(tx)
		nonce++
	})
	if n, err := chain.InsertChain(blocks); err != nil {
		t.Fatalf("block %d: failed to insert into chain: %v", n, err)
	}
@ -1831,6 +1999,15 @@ func testSideImport(t *testing.T, numCanonBlocksInSidechain, blocksBetweenCommon
	if !chain.HasBlockAndState(firstNonPrunedBlock.Hash(), firstNonPrunedBlock.NumberU64()) {
		t.Errorf("Block %d pruned", firstNonPrunedBlock.NumberU64())
	}
	// Activate the transition in the middle of the chain
	if mergePoint == 1 {
		merger.ReachTTD()
		merger.FinalizePoS()
		// Set the terminal total difficulty in the config
		gspec.Config.TerminalTotalDifficulty = big.NewInt(int64(len(blocks)))
	}
	// Generate the sidechain
	// First block should be a known block, block after should be a pruned block. So
	// canon(pruned), side, side...
@ -1838,7 +2015,7 @@ func testSideImport(t *testing.T, numCanonBlocksInSidechain, blocksBetweenCommon
	// Generate fork chain, make it longer than canon
	parentIndex := lastPrunedIndex + blocksBetweenCommonAncestorAndPruneblock
	parent := blocks[parentIndex]
	fork, _ := GenerateChain(&chainConfig, parent, genEngine, db, 2*TriesInMemory, func(i int, b *BlockGen) {
		b.SetCoinbase(common.Address{2})
	})
	// Prepend the parent(s)
@ -1847,9 +2024,9 @@ func testSideImport(t *testing.T, numCanonBlocksInSidechain, blocksBetweenCommon
		sidechain = append(sidechain, blocks[parentIndex+1-i])
	}
	sidechain = append(sidechain, fork...)
	n, err := chain.InsertChain(sidechain)
	if err != nil {
		t.Errorf("Got error, %v number %d - %d", err, sidechain[n].NumberU64(), n)
	}
	head := chain.CurrentBlock()
	if got := fork[len(fork)-1].Hash(); got != head.Hash() {
@ -1870,11 +2047,28 @@ func TestPrunedImportSide(t *testing.T) {
	//glogger := log.NewGlogHandler(log.StreamHandler(os.Stdout, log.TerminalFormat(false)))
	//glogger.Verbosity(3)
	//log.Root().SetHandler(log.Handler(glogger))
	testSideImport(t, 3, 3, -1)
	testSideImport(t, 3, -3, -1)
	testSideImport(t, 10, 0, -1)
	testSideImport(t, 1, 10, -1)
	testSideImport(t, 1, -10, -1)
}
func TestPrunedImportSideWithMerging(t *testing.T) {
	//glogger := log.NewGlogHandler(log.StreamHandler(os.Stdout, log.TerminalFormat(false)))
	//glogger.Verbosity(3)
	//log.Root().SetHandler(log.Handler(glogger))
	testSideImport(t, 3, 3, 0)
	testSideImport(t, 3, -3, 0)
	testSideImport(t, 10, 0, 0)
	testSideImport(t, 1, 10, 0)
	testSideImport(t, 1, -10, 0)

	testSideImport(t, 3, 3, 1)
	testSideImport(t, 3, -3, 1)
	testSideImport(t, 10, 0, 1)
	testSideImport(t, 1, 10, 1)
	testSideImport(t, 1, -10, 1)
}
func TestInsertKnownHeaders(t *testing.T) { testInsertKnownChainData(t, "headers") }
@ -2002,6 +2196,179 @@ func testInsertKnownChainData(t *testing.T, typ string) {
	asserter(t, blocks2[len(blocks2)-1])
}
func TestInsertKnownHeadersWithMerging(t *testing.T) {
	testInsertKnownChainDataWithMerging(t, "headers", 0)
}
func TestInsertKnownReceiptChainWithMerging(t *testing.T) {
	testInsertKnownChainDataWithMerging(t, "receipts", 0)
}
func TestInsertKnownBlocksWithMerging(t *testing.T) {
	testInsertKnownChainDataWithMerging(t, "blocks", 0)
}
func TestInsertKnownHeadersAfterMerging(t *testing.T) {
	testInsertKnownChainDataWithMerging(t, "headers", 1)
}
func TestInsertKnownReceiptChainAfterMerging(t *testing.T) {
	testInsertKnownChainDataWithMerging(t, "receipts", 1)
}
func TestInsertKnownBlocksAfterMerging(t *testing.T) {
	testInsertKnownChainDataWithMerging(t, "blocks", 1)
}

// mergeHeight can take one of these values:
//  0: the merging is applied since genesis
//  1: the merging is applied after the first segment
func testInsertKnownChainDataWithMerging(t *testing.T, typ string, mergeHeight int) {
	// Copy the TestChainConfig so we can modify it during tests
	chainConfig := *params.TestChainConfig
	var (
		db        = rawdb.NewMemoryDatabase()
		genesis   = (&Genesis{BaseFee: big.NewInt(params.InitialBaseFee), Config: &chainConfig}).MustCommit(db)
		runMerger = consensus.NewMerger(db)
		runEngine = beacon.New(ethash.NewFaker())
		genEngine = beacon.New(ethash.NewFaker())
	)
	applyMerge := func(engine *beacon.Beacon, height int) {
		if engine != nil {
			runMerger.FinalizePoS()
			// Set the terminal total difficulty in the config
			chainConfig.TerminalTotalDifficulty = big.NewInt(int64(height))
		}
	}
	// Apply merging since genesis
	if mergeHeight == 0 {
		applyMerge(genEngine, 0)
	}
	blocks, receipts := GenerateChain(&chainConfig, genesis, genEngine, db, 32, func(i int, b *BlockGen) { b.SetCoinbase(common.Address{1}) })

	// Apply merging after the first segment
	if mergeHeight == 1 {
		applyMerge(genEngine, len(blocks))
	}
	// Longer chain and shorter chain
	blocks2, receipts2 := GenerateChain(&chainConfig, blocks[len(blocks)-1], genEngine, db, 65, func(i int, b *BlockGen) { b.SetCoinbase(common.Address{1}) })
	blocks3, receipts3 := GenerateChain(&chainConfig, blocks[len(blocks)-1], genEngine, db, 64, func(i int, b *BlockGen) {
		b.SetCoinbase(common.Address{1})
		b.OffsetTime(-9) // Time shifted, difficulty shouldn't be changed
	})
	// Import the shared chain and the original canonical one
	dir, err := ioutil.TempDir("", "")
	if err != nil {
		t.Fatalf("failed to create temp freezer dir: %v", err)
	}
	defer os.Remove(dir)
	chaindb, err := rawdb.NewDatabaseWithFreezer(rawdb.NewMemoryDatabase(), dir, "", false)
	if err != nil {
		t.Fatalf("failed to create temp freezer db: %v", err)
	}
	(&Genesis{BaseFee: big.NewInt(params.InitialBaseFee)}).MustCommit(chaindb)
	defer os.RemoveAll(dir)

	chain, err := NewBlockChain(chaindb, nil, &chainConfig, runEngine, vm.Config{}, nil, nil)
	if err != nil {
		t.Fatalf("failed to create tester chain: %v", err)
	}
	var (
		inserter func(blocks []*types.Block, receipts []types.Receipts) error
		asserter func(t *testing.T, block *types.Block)
	)
	if typ == "headers" {
		inserter = func(blocks []*types.Block, receipts []types.Receipts) error {
			headers := make([]*types.Header, 0, len(blocks))
			for _, block := range blocks {
				headers = append(headers, block.Header())
			}
			_, err := chain.InsertHeaderChain(headers, 1)
			return err
		}
		asserter = func(t *testing.T, block *types.Block) {
			if chain.CurrentHeader().Hash() != block.Hash() {
				t.Fatalf("current head header mismatch, have %v, want %v", chain.CurrentHeader().Hash().Hex(), block.Hash().Hex())
			}
		}
	} else if typ == "receipts" {
		inserter = func(blocks []*types.Block, receipts []types.Receipts) error {
			headers := make([]*types.Header, 0, len(blocks))
			for _, block := range blocks {
				headers = append(headers, block.Header())
			}
			_, err := chain.InsertHeaderChain(headers, 1)
			if err != nil {
				return err
			}
			_, err = chain.InsertReceiptChain(blocks, receipts, 0)
			return err
		}
		asserter = func(t *testing.T, block *types.Block) {
			if chain.CurrentFastBlock().Hash() != block.Hash() {
				t.Fatalf("current head fast block mismatch, have %v, want %v", chain.CurrentFastBlock().Hash().Hex(), block.Hash().Hex())
			}
		}
	} else {
		inserter = func(blocks []*types.Block, receipts []types.Receipts) error {
			_, err := chain.InsertChain(blocks)
			return err
		}
		asserter = func(t *testing.T, block *types.Block) {
			if chain.CurrentBlock().Hash() != block.Hash() {
				t.Fatalf("current head block mismatch, have %v, want %v", chain.CurrentBlock().Hash().Hex(), block.Hash().Hex())
			}
		}
	}
	// Apply merging since genesis if required
	if mergeHeight == 0 {
		applyMerge(runEngine, 0)
	}
	if err := inserter(blocks, receipts); err != nil {
		t.Fatalf("failed to insert chain data: %v", err)
	}
	// Reimport the chain data again. All the imported
	// chain data are regarded as "known" data.
	if err := inserter(blocks, receipts); err != nil {
		t.Fatalf("failed to insert chain data: %v", err)
	}
	asserter(t, blocks[len(blocks)-1])

	// Import a long canonical chain with some known data as prefix.
	rollback := blocks[len(blocks)/2].NumberU64()
	chain.SetHead(rollback - 1)
	if err := inserter(blocks, receipts); err != nil {
		t.Fatalf("failed to insert chain data: %v", err)
	}
	asserter(t, blocks[len(blocks)-1])

	// Apply merging after the first segment
	if mergeHeight == 1 {
		applyMerge(runEngine, len(blocks))
	}
	// Import a longer chain with some known data as prefix.
	if err := inserter(append(blocks, blocks2...), append(receipts, receipts2...)); err != nil {
		t.Fatalf("failed to insert chain data: %v", err)
	}
	asserter(t, blocks2[len(blocks2)-1])

	// Import a shorter chain with some known data as prefix.
	// The reorg is expected since the fork choice rule is
	// already changed.
	if err := inserter(append(blocks, blocks3...), append(receipts, receipts3...)); err != nil {
		t.Fatalf("failed to insert chain data: %v", err)
	}
	// The head should now follow the shorter chain.
	asserter(t, blocks3[len(blocks3)-1])

	// Reimport the longer chain again, the reorg is still expected
	chain.SetHead(rollback - 1)
	if err := inserter(append(blocks, blocks2...), append(receipts, receipts2...)); err != nil {
		t.Fatalf("failed to insert chain data: %v", err)
	}
	asserter(t, blocks2[len(blocks2)-1])
}
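The "reorg is expected" assertions above hinge on the changed fork choice rule. Here is a condensed, hypothetical restatement of that rule (not the library API; see core/forkchoice.go below for the real one):

```go
package main

import (
	"fmt"
	"math/big"
)

// reorgToExtern condenses what the test relies on: before the transition the
// heavier (higher total difficulty) chain wins; once the external head's TD
// has crossed the terminal total difficulty, it is accepted even if the
// competing chain is shorter or lighter.
func reorgToExtern(ttd, localTD, externTD *big.Int) bool {
	if ttd != nil && externTD.Cmp(ttd) >= 0 {
		return true // post-merge: trust the externally supplied head
	}
	return externTD.Cmp(localTD) > 0 // pre-merge: total difficulty decides
}

func main() {
	ttd := big.NewInt(32)
	fmt.Println(reorgToExtern(ttd, big.NewInt(20), big.NewInt(10))) // false: lighter chain rejected pre-merge
	fmt.Println(reorgToExtern(ttd, big.NewInt(96), big.NewInt(64))) // true: lighter chain accepted post-merge
}
```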
// getLongAndShortChains returns two chains: A is longer, B is heavier.
func getLongAndShortChains() (bc *BlockChain, longChain []*types.Block, heavyChain []*types.Block, err error) {
	// Generate a canonical chain to act as the main dataset
@ -2270,7 +2637,7 @@ func TestTransactionIndices(t *testing.T) {
	}
}

func TestSkipStaleTxIndicesInSnapSync(t *testing.T) {
	// Configure and generate a sample block chain
	var (
		gendb = rawdb.NewMemoryDatabase()
@ -2482,6 +2849,7 @@ func TestSideImportPrunedBlocks(t *testing.T) {
	// Generate and import the canonical chain
	blocks, _ := GenerateChain(params.TestChainConfig, genesis, engine, db, 2*TriesInMemory, nil)

	diskdb := rawdb.NewMemoryDatabase()
	(&Genesis{BaseFee: big.NewInt(params.InitialBaseFee)}).MustCommit(diskdb)
	chain, err := NewBlockChain(diskdb, nil, params.TestChainConfig, engine, vm.Config{}, nil, nil)
	if err != nil {
@ -2690,7 +3058,7 @@ func TestDeleteRecreateSlots(t *testing.T) {
	gspec.MustCommit(diskdb)
	chain, err := NewBlockChain(diskdb, nil, params.TestChainConfig, engine, vm.Config{
		Debug:  true,
		Tracer: logger.NewJSONLogger(nil, os.Stdout),
	}, nil, nil)
	if err != nil {
		t.Fatalf("failed to create tester chain: %v", err)
@ -2770,7 +3138,7 @@ func TestDeleteRecreateAccount(t *testing.T) {
	gspec.MustCommit(diskdb)
	chain, err := NewBlockChain(diskdb, nil, params.TestChainConfig, engine, vm.Config{
		Debug:  true,
		Tracer: logger.NewJSONLogger(nil, os.Stdout),
	}, nil, nil)
	if err != nil {
		t.Fatalf("failed to create tester chain: %v", err)


@ -155,6 +155,28 @@ func (b *BlockGen) TxNonce(addr common.Address) uint64 {
// AddUncle adds an uncle header to the generated block.
func (b *BlockGen) AddUncle(h *types.Header) {
	// The uncle will have the same timestamp and auto-generated difficulty
	h.Time = b.header.Time

	var parent *types.Header
	for i := b.i - 1; i >= 0; i-- {
		if b.chain[i].Hash() == h.ParentHash {
			parent = b.chain[i].Header()
			break
		}
	}
	chainreader := &fakeChainReader{config: b.config}
	h.Difficulty = b.engine.CalcDifficulty(chainreader, b.header.Time, parent)

	// The gas limit and price should be derived from the parent
	h.GasLimit = parent.GasLimit
	if b.config.IsLondon(h.Number) {
		h.BaseFee = misc.CalcBaseFee(b.config, parent)
		if !b.config.IsLondon(parent.Number) {
			parentGasLimit := parent.GasLimit * params.ElasticityMultiplier
			h.GasLimit = CalcGasLimit(parentGasLimit, parentGasLimit)
		}
	}
	b.uncles = append(b.uncles, h)
}
@ -205,6 +227,18 @@ func GenerateChain(config *params.ChainConfig, parent *types.Block, engine conse
		b := &BlockGen{i: i, chain: blocks, parent: parent, statedb: statedb, config: config, engine: engine}
		b.header = makeHeader(chainreader, parent, statedb, b.engine)

		// Set the difficulty for clique block. The chain maker doesn't have access
		// to a chain, so the difficulty will be left unset (nil). Set it here to the
		// correct value.
		if b.header.Difficulty == nil {
			if config.TerminalTotalDifficulty == nil {
				// Clique chain
				b.header.Difficulty = big.NewInt(2)
			} else {
				// Post-merge chain
				b.header.Difficulty = big.NewInt(0)
			}
		}
		// Mutate the state and block according to any hard-fork specs
		if daoBlock := config.DAOForkBlock; daoBlock != nil {
			limit := new(big.Int).Add(daoBlock, params.DAOForkExtraRange)
@ -313,3 +347,4 @@ func (cr *fakeChainReader) GetHeaderByNumber(number uint64) *types.Header
func (cr *fakeChainReader) GetHeaderByHash(hash common.Hash) *types.Header          { return nil }
func (cr *fakeChainReader) GetHeader(hash common.Hash, number uint64) *types.Header { return nil }
func (cr *fakeChainReader) GetBlock(hash common.Hash, number uint64) *types.Block   { return nil }
func (cr *fakeChainReader) GetTd(hash common.Hash, number uint64) *big.Int          { return nil }

core/forkchoice.go (new file, 108 lines)

@ -0,0 +1,108 @@
// Copyright 2021 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package core
import (
	crand "crypto/rand"
	"errors"
	"math/big"
	mrand "math/rand"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/common/math"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/log"
	"github.com/ethereum/go-ethereum/params"
)
// ChainReader defines a small collection of methods needed to access the local
// blockchain during header verification. It's implemented by both blockchain
// and lightchain.
type ChainReader interface {
	// Config retrieves the header chain's chain configuration.
	Config() *params.ChainConfig

	// GetTd returns the total difficulty of a local block.
	GetTd(common.Hash, uint64) *big.Int
}
// ForkChoice is the fork chooser based on the highest total difficulty of the
// chain (the fork choice used in eth1) and the external fork choice (the fork
// choice used in eth2). The main goal of this ForkChoice is not only to offer
// fork choice during the eth1/2 merge phase, but also to keep compatibility
// with all other proof-of-work networks.
type ForkChoice struct {
	chain ChainReader
	rand  *mrand.Rand

	// preserve is a helper function used in td fork choice.
	// Miners will prefer to choose the local mined block if the
	// local td is equal to the extern one. It can be nil for light
	// clients.
	preserve func(header *types.Header) bool
}
func NewForkChoice(chainReader ChainReader, preserve func(header *types.Header) bool) *ForkChoice {
	// Seed a fast but crypto originating random generator
	seed, err := crand.Int(crand.Reader, big.NewInt(math.MaxInt64))
	if err != nil {
		log.Crit("Failed to initialize random seed", "err", err)
	}
	return &ForkChoice{
		chain:    chainReader,
		rand:     mrand.New(mrand.NewSource(seed.Int64())),
		preserve: preserve,
	}
}
// ReorgNeeded returns whether the reorg should be applied
// based on the given external header and local canonical chain.
// In the td mode, the new head is chosen if the corresponding
// total difficulty is higher. In the extern mode, the trusted
// header is always selected as the head.
func (f *ForkChoice) ReorgNeeded(current *types.Header, header *types.Header) (bool, error) {
var (
localTD = f.chain.GetTd(current.Hash(), current.Number.Uint64())
externTd = f.chain.GetTd(header.Hash(), header.Number.Uint64())
)
if localTD == nil || externTd == nil {
return false, errors.New("missing td")
}
// Accept the new header as the chain head if the transition
// is already triggered. We assume all the headers after the
// transition come from the trusted consensus layer.
if ttd := f.chain.Config().TerminalTotalDifficulty; ttd != nil && ttd.Cmp(externTd) <= 0 {
return true, nil
}
// If the total difficulty is higher than our known TD, add it to the canonical chain
// Second clause in the if statement reduces the vulnerability to selfish mining.
// Please refer to http://www.cs.cornell.edu/~ie53/publications/btcProcFC.pdf
reorg := externTd.Cmp(localTD) > 0
if !reorg && externTd.Cmp(localTD) == 0 {
number, headNumber := header.Number.Uint64(), current.Number.Uint64()
if number < headNumber {
reorg = true
} else if number == headNumber {
var currentPreserve, externPreserve bool
if f.preserve != nil {
currentPreserve, externPreserve = f.preserve(current), f.preserve(header)
}
reorg = !currentPreserve && (externPreserve || f.rand.Float64() < 0.5)
}
}
return reorg, nil
}
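To make the equal-TD tie-break concrete, here is a minimal, self-contained sketch (not part of the diff) that drives `ReorgNeeded` with a hypothetical `stubChain` reporting the same total difficulty for every block, so the comparison falls through to the block-number rule:

```go
package main

import (
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/params"
)

// stubChain is a hypothetical ChainReader that reports an identical TD for
// every block, forcing ReorgNeeded into the equal-difficulty tie-break.
type stubChain struct{}

func (stubChain) Config() *params.ChainConfig        { return params.AllEthashProtocolChanges }
func (stubChain) GetTd(common.Hash, uint64) *big.Int { return big.NewInt(100) }

func main() {
	forker := core.NewForkChoice(stubChain{}, nil)
	current := &types.Header{Number: big.NewInt(10)}
	extern := &types.Header{Number: big.NewInt(9)}
	// Equal TD but a lower block number: the external header wins deterministically.
	reorg, err := forker.ReorgNeeded(current, extern)
	fmt.Println(reorg, err) // true <nil>
}
```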

View File

@ -19,6 +19,7 @@ package forkid
import ( import (
"bytes" "bytes"
"math" "math"
"math/big"
"testing" "testing"
"github.com/ethereum/go-ethereum/common" "github.com/ethereum/go-ethereum/common"
@ -29,6 +30,8 @@ import (
// TestCreation tests that different genesis and fork rule combinations result in // TestCreation tests that different genesis and fork rule combinations result in
// the correct fork ID. // the correct fork ID.
func TestCreation(t *testing.T) { func TestCreation(t *testing.T) {
mergeConfig := *params.MainnetChainConfig
mergeConfig.MergeForkBlock = big.NewInt(15000000)
type testcase struct { type testcase struct {
head uint64 head uint64
want ID want ID
@ -65,7 +68,7 @@ func TestCreation(t *testing.T) {
{12964999, ID{Hash: checksumToBytes(0x0eb440f6), Next: 12965000}}, // Last Berlin block {12964999, ID{Hash: checksumToBytes(0x0eb440f6), Next: 12965000}}, // Last Berlin block
{12965000, ID{Hash: checksumToBytes(0xb715077d), Next: 13773000}}, // First London block {12965000, ID{Hash: checksumToBytes(0xb715077d), Next: 13773000}}, // First London block
{13772999, ID{Hash: checksumToBytes(0xb715077d), Next: 13773000}}, // Last London block {13772999, ID{Hash: checksumToBytes(0xb715077d), Next: 13773000}}, // Last London block
{13773000, ID{Hash: checksumToBytes(0x20c327fc), Next: 0}}, /// First Arrow Glacier block {13773000, ID{Hash: checksumToBytes(0x20c327fc), Next: 0}}, // First Arrow Glacier block
{20000000, ID{Hash: checksumToBytes(0x20c327fc), Next: 0}}, // Future Arrow Glacier block {20000000, ID{Hash: checksumToBytes(0x20c327fc), Next: 0}}, // Future Arrow Glacier block
}, },
}, },
@ -133,6 +136,38 @@ func TestCreation(t *testing.T) {
{6000000, ID{Hash: checksumToBytes(0xB8C6299D), Next: 0}}, // Future London block {6000000, ID{Hash: checksumToBytes(0xB8C6299D), Next: 0}}, // Future London block
}, },
}, },
// Merge test cases
{
&mergeConfig,
params.MainnetGenesisHash,
[]testcase{
{0, ID{Hash: checksumToBytes(0xfc64ec04), Next: 1150000}}, // Unsynced
{1149999, ID{Hash: checksumToBytes(0xfc64ec04), Next: 1150000}}, // Last Frontier block
{1150000, ID{Hash: checksumToBytes(0x97c2c34c), Next: 1920000}}, // First Homestead block
{1919999, ID{Hash: checksumToBytes(0x97c2c34c), Next: 1920000}}, // Last Homestead block
{1920000, ID{Hash: checksumToBytes(0x91d1f948), Next: 2463000}}, // First DAO block
{2462999, ID{Hash: checksumToBytes(0x91d1f948), Next: 2463000}}, // Last DAO block
{2463000, ID{Hash: checksumToBytes(0x7a64da13), Next: 2675000}}, // First Tangerine block
{2674999, ID{Hash: checksumToBytes(0x7a64da13), Next: 2675000}}, // Last Tangerine block
{2675000, ID{Hash: checksumToBytes(0x3edd5b10), Next: 4370000}}, // First Spurious block
{4369999, ID{Hash: checksumToBytes(0x3edd5b10), Next: 4370000}}, // Last Spurious block
{4370000, ID{Hash: checksumToBytes(0xa00bc324), Next: 7280000}}, // First Byzantium block
{7279999, ID{Hash: checksumToBytes(0xa00bc324), Next: 7280000}}, // Last Byzantium block
{7280000, ID{Hash: checksumToBytes(0x668db0af), Next: 9069000}}, // First and last Constantinople, first Petersburg block
{9068999, ID{Hash: checksumToBytes(0x668db0af), Next: 9069000}}, // Last Petersburg block
{9069000, ID{Hash: checksumToBytes(0x879d6e30), Next: 9200000}}, // First Istanbul and first Muir Glacier block
{9199999, ID{Hash: checksumToBytes(0x879d6e30), Next: 9200000}}, // Last Istanbul and first Muir Glacier block
{9200000, ID{Hash: checksumToBytes(0xe029e991), Next: 12244000}}, // First Muir Glacier block
{12243999, ID{Hash: checksumToBytes(0xe029e991), Next: 12244000}}, // Last Muir Glacier block
{12244000, ID{Hash: checksumToBytes(0x0eb440f6), Next: 12965000}}, // First Berlin block
{12964999, ID{Hash: checksumToBytes(0x0eb440f6), Next: 12965000}}, // Last Berlin block
{12965000, ID{Hash: checksumToBytes(0xb715077d), Next: 13773000}}, // First London block
{13772999, ID{Hash: checksumToBytes(0xb715077d), Next: 13773000}}, // Last London block
{13773000, ID{Hash: checksumToBytes(0x20c327fc), Next: 15000000}}, // First Arrow Glacier block
{15000000, ID{Hash: checksumToBytes(0xe3abe201), Next: 0}}, // First Merge Start block
{20000000, ID{Hash: checksumToBytes(0xe3abe201), Next: 0}}, // Future Merge Start block
},
},
} }
for i, tt := range tests { for i, tt := range tests {
for j, ttt := range tt.cases { for j, ttt := range tt.cases {
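The fork-ID hashes in the tables above follow EIP-2124: a rolling CRC32 (IEEE) over the genesis hash, updated with each passed fork block number as an 8-byte big-endian value. A hedged sketch reproducing the first two mainnet entries from the table:

```go
package main

import (
	"encoding/binary"
	"fmt"
	"hash/crc32"

	"github.com/ethereum/go-ethereum/common"
)

func main() {
	// Mainnet genesis hash; its bare CRC32 is the "Unsynced" fork ID above.
	genesis := common.HexToHash("0xd4e56740f876aef8c010b86a40d5f56745a118d0906a34e69aec8c0db1cb8fa4")
	sum := crc32.ChecksumIEEE(genesis[:])
	fmt.Printf("%#x\n", sum) // 0xfc64ec04

	// Folding in the Homestead fork block yields the next entry.
	var blob [8]byte
	binary.BigEndian.PutUint64(blob[:], 1150000)
	sum = crc32.Update(sum, crc32.IEEETable, blob[:])
	fmt.Printf("%#x\n", sum) // 0x97c2c34c
}
```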

View File

@ -155,10 +155,10 @@ func (e *GenesisMismatchError) Error() string {
// //
// The returned chain configuration is never nil. // The returned chain configuration is never nil.
func SetupGenesisBlock(db ethdb.Database, genesis *Genesis) (*params.ChainConfig, common.Hash, error) { func SetupGenesisBlock(db ethdb.Database, genesis *Genesis) (*params.ChainConfig, common.Hash, error) {
return SetupGenesisBlockWithOverride(db, genesis, nil) return SetupGenesisBlockWithOverride(db, genesis, nil, nil)
} }
func SetupGenesisBlockWithOverride(db ethdb.Database, genesis *Genesis, overrideArrowGlacier *big.Int) (*params.ChainConfig, common.Hash, error) { func SetupGenesisBlockWithOverride(db ethdb.Database, genesis *Genesis, overrideArrowGlacier, overrideTerminalTotalDifficulty *big.Int) (*params.ChainConfig, common.Hash, error) {
if genesis != nil && genesis.Config == nil { if genesis != nil && genesis.Config == nil {
return params.AllEthashProtocolChanges, common.Hash{}, errGenesisNoConfig return params.AllEthashProtocolChanges, common.Hash{}, errGenesisNoConfig
} }
@ -207,6 +207,9 @@ func SetupGenesisBlockWithOverride(db ethdb.Database, genesis *Genesis, override
if overrideArrowGlacier != nil { if overrideArrowGlacier != nil {
newcfg.ArrowGlacierBlock = overrideArrowGlacier newcfg.ArrowGlacierBlock = overrideArrowGlacier
} }
if overrideTerminalTotalDifficulty != nil {
newcfg.TerminalTotalDifficulty = overrideTerminalTotalDifficulty
}
if err := newcfg.CheckConfigForkOrder(); err != nil { if err := newcfg.CheckConfigForkOrder(); err != nil {
return newcfg, common.Hash{}, err return newcfg, common.Hash{}, err
} }
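A hedged usage sketch of the new override parameter, assuming an in-memory database and the default mainnet genesis; the TTD value here is illustrative only:

```go
package main

import (
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum/core"
	"github.com/ethereum/go-ethereum/core/rawdb"
)

func main() {
	db := rawdb.NewMemoryDatabase()
	genesis := core.DefaultGenesisBlock()
	ttd := big.NewInt(100_000_000) // hypothetical terminal total difficulty
	config, hash, err := core.SetupGenesisBlockWithOverride(db, genesis, nil, ttd)
	if err != nil {
		panic(err)
	}
	fmt.Println(hash, config.TerminalTotalDifficulty)
}
```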

View File

@ -33,6 +33,7 @@ import (
"github.com/ethereum/go-ethereum/ethdb" "github.com/ethereum/go-ethereum/ethdb"
"github.com/ethereum/go-ethereum/log" "github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/params" "github.com/ethereum/go-ethereum/params"
"github.com/ethereum/go-ethereum/rlp"
lru "github.com/hashicorp/golang-lru" lru "github.com/hashicorp/golang-lru"
) )
@ -49,7 +50,7 @@ const (
// HeaderChain is responsible for maintaining the header chain including the // HeaderChain is responsible for maintaining the header chain including the
// header query and updating. // header query and updating.
// //
// The components maintained by headerchain includes: (1) total difficult // The components maintained by headerchain includes: (1) total difficulty
// (2) header (3) block hash -> number mapping (4) canonical number -> hash mapping // (2) header (3) block hash -> number mapping (4) canonical number -> hash mapping
// and (5) head header flag. // and (5) head header flag.
// //
@ -57,7 +58,6 @@ const (
// the necessary mutex locking/unlocking. // the necessary mutex locking/unlocking.
type HeaderChain struct { type HeaderChain struct {
config *params.ChainConfig config *params.ChainConfig
chainDb ethdb.Database chainDb ethdb.Database
genesisHeader *types.Header genesisHeader *types.Header
@ -86,7 +86,6 @@ func NewHeaderChain(chainDb ethdb.Database, config *params.ChainConfig, engine c
if err != nil { if err != nil {
return nil, err return nil, err
} }
hc := &HeaderChain{ hc := &HeaderChain{
config: config, config: config,
chainDb: chainDb, chainDb: chainDb,
@ -97,12 +96,10 @@ func NewHeaderChain(chainDb ethdb.Database, config *params.ChainConfig, engine c
rand: mrand.New(mrand.NewSource(seed.Int64())), rand: mrand.New(mrand.NewSource(seed.Int64())),
engine: engine, engine: engine,
} }
hc.genesisHeader = hc.GetHeaderByNumber(0) hc.genesisHeader = hc.GetHeaderByNumber(0)
if hc.genesisHeader == nil { if hc.genesisHeader == nil {
return nil, ErrNoGenesis return nil, ErrNoGenesis
} }
hc.currentHeader.Store(hc.genesisHeader) hc.currentHeader.Store(hc.genesisHeader)
if head := rawdb.ReadHeadBlockHash(chainDb); head != (common.Hash{}) { if head := rawdb.ReadHeadBlockHash(chainDb); head != (common.Hash{}) {
if chead := hc.GetHeaderByHash(head); chead != nil { if chead := hc.GetHeaderByHash(head); chead != nil {
@ -111,7 +108,6 @@ func NewHeaderChain(chainDb ethdb.Database, config *params.ChainConfig, engine c
} }
hc.currentHeaderHash = hc.CurrentHeader().Hash() hc.currentHeaderHash = hc.CurrentHeader().Hash()
headHeaderGauge.Update(hc.CurrentHeader().Number.Int64()) headHeaderGauge.Update(hc.CurrentHeader().Number.Int64())
return hc, nil return hc, nil
} }
@ -137,35 +133,93 @@ type headerWriteResult struct {
lastHeader *types.Header lastHeader *types.Header
} }
// WriteHeaders writes a chain of headers into the local chain, given that the parents // Reorg reorgs the local canonical chain into the specified chain. The reorg
// are already known. If the total difficulty of the newly inserted chain becomes // can be classified into two cases: (a) extend the local chain (b) switch the
// greater than the current known TD, the canonical chain is reorged. // head to the given header.
// func (hc *HeaderChain) Reorg(headers []*types.Header) error {
// Note: This method is not concurrent-safe with inserting blocks simultaneously // Short circuit if nothing to reorg.
// into the chain, as side effects caused by reorganisations cannot be emulated
// without the real blocks. Hence, writing headers directly should only be done
// in two scenarios: pure-header mode of operation (light clients), or properly
// separated header/block phases (non-archive clients).
func (hc *HeaderChain) writeHeaders(headers []*types.Header) (result *headerWriteResult, err error) {
if len(headers) == 0 { if len(headers) == 0 {
return &headerWriteResult{}, nil return nil
}
// If the parent of the (first) block is already the canon header,
// we don't have to go backwards to delete canon blocks, but simply
// pile them onto the existing chain. Otherwise, do the necessary
// reorgs.
var (
first = headers[0]
last = headers[len(headers)-1]
batch = hc.chainDb.NewBatch()
)
if first.ParentHash != hc.currentHeaderHash {
// Delete any canonical number assignments above the new head
for i := last.Number.Uint64() + 1; ; i++ {
hash := rawdb.ReadCanonicalHash(hc.chainDb, i)
if hash == (common.Hash{}) {
break
}
rawdb.DeleteCanonicalHash(batch, i)
}
// Overwrite any stale canonical number assignments, going
// backwards from the first header in this import until the
// cross link between two chains.
var (
header = first
headNumber = header.Number.Uint64()
headHash = header.Hash()
)
for rawdb.ReadCanonicalHash(hc.chainDb, headNumber) != headHash {
rawdb.WriteCanonicalHash(batch, headHash, headNumber)
if headNumber == 0 {
break // It shouldn't be reached
}
headHash, headNumber = header.ParentHash, header.Number.Uint64()-1
header = hc.GetHeader(headHash, headNumber)
if header == nil {
return fmt.Errorf("missing parent %d %x", headNumber, headHash)
}
}
}
// Extend the canonical chain with the new headers
for i := 0; i < len(headers)-1; i++ {
hash := headers[i+1].ParentHash // Save some extra hashing
num := headers[i].Number.Uint64()
rawdb.WriteCanonicalHash(batch, hash, num)
rawdb.WriteHeadHeaderHash(batch, hash)
}
// Write the last header
hash := headers[len(headers)-1].Hash()
num := headers[len(headers)-1].Number.Uint64()
rawdb.WriteCanonicalHash(batch, hash, num)
rawdb.WriteHeadHeaderHash(batch, hash)
if err := batch.Write(); err != nil {
return err
}
// Last step: update all in-memory head header markers
hc.currentHeaderHash = last.Hash()
hc.currentHeader.Store(types.CopyHeader(last))
headHeaderGauge.Update(last.Number.Int64())
return nil
}
// WriteHeaders writes a chain of headers into the local chain, given that the
// parents are already known. The chain head header won't be updated in this
// function; an additional setChainHead call is expected in order to finish the entire
// procedure.
func (hc *HeaderChain) WriteHeaders(headers []*types.Header) (int, error) {
if len(headers) == 0 {
return 0, nil
} }
ptd := hc.GetTd(headers[0].ParentHash, headers[0].Number.Uint64()-1) ptd := hc.GetTd(headers[0].ParentHash, headers[0].Number.Uint64()-1)
if ptd == nil { if ptd == nil {
return &headerWriteResult{}, consensus.ErrUnknownAncestor return 0, consensus.ErrUnknownAncestor
} }
var ( var (
lastNumber = headers[0].Number.Uint64() - 1 // Last successfully imported number
lastHash = headers[0].ParentHash // Last imported header hash
newTD = new(big.Int).Set(ptd) // Total difficulty of inserted chain newTD = new(big.Int).Set(ptd) // Total difficulty of inserted chain
inserted []rawdb.NumberHash // Ephemeral lookup of number/hash for the chain
lastHeader *types.Header parentKnown = true // Set to true to force hc.HasHeader check the first iteration
inserted []numberHash // Ephemeral lookup of number/hash for the chain batch = hc.chainDb.NewBatch()
firstInserted = -1 // Index of the first non-ignored header
) )
batch := hc.chainDb.NewBatch()
parentKnown := true // Set to true to force hc.HasHeader check the first iteration
for i, header := range headers { for i, header := range headers {
var hash common.Hash var hash common.Hash
// The headers have already been validated at this point, so we already // The headers have already been validated at this point, so we already
@ -188,116 +242,67 @@ func (hc *HeaderChain) writeHeaders(headers []*types.Header) (result *headerWrit
hc.tdCache.Add(hash, new(big.Int).Set(newTD)) hc.tdCache.Add(hash, new(big.Int).Set(newTD))
rawdb.WriteHeader(batch, header) rawdb.WriteHeader(batch, header)
inserted = append(inserted, numberHash{number, hash}) inserted = append(inserted, rawdb.NumberHash{Number: number, Hash: hash})
hc.headerCache.Add(hash, header) hc.headerCache.Add(hash, header)
hc.numberCache.Add(hash, number) hc.numberCache.Add(hash, number)
if firstInserted < 0 {
firstInserted = i
}
} }
parentKnown = alreadyKnown parentKnown = alreadyKnown
lastHeader, lastHash, lastNumber = header, hash, number
} }
// Skip the slow disk write of all headers if interrupted. // Skip the slow disk write of all headers if interrupted.
if hc.procInterrupt() { if hc.procInterrupt() {
log.Debug("Premature abort during headers import") log.Debug("Premature abort during headers import")
return &headerWriteResult{}, errors.New("aborted") return 0, errors.New("aborted")
} }
// Commit to disk! // Commit to disk!
if err := batch.Write(); err != nil { if err := batch.Write(); err != nil {
log.Crit("Failed to write headers", "error", err) log.Crit("Failed to write headers", "error", err)
} }
batch.Reset() return len(inserted), nil
}
// writeHeadersAndSetHead writes a batch of block headers and applies the last
// header as the chain head if the fork choicer says it's ok to update the chain.
// Note: This method is not concurrent-safe with inserting blocks simultaneously
// into the chain, as side effects caused by reorganisations cannot be emulated
// without the real blocks. Hence, writing headers directly should only be done
// in two scenarios: pure-header mode of operation (light clients), or properly
// separated header/block phases (non-archive clients).
func (hc *HeaderChain) writeHeadersAndSetHead(headers []*types.Header, forker *ForkChoice) (*headerWriteResult, error) {
inserted, err := hc.WriteHeaders(headers)
if err != nil {
return nil, err
}
var ( var (
head = hc.CurrentHeader().Number.Uint64() lastHeader = headers[len(headers)-1]
localTD = hc.GetTd(hc.currentHeaderHash, head) lastHash = headers[len(headers)-1].Hash()
status = SideStatTy result = &headerWriteResult{
) status: NonStatTy,
// If the total difficulty is higher than our known, add it to the canonical chain ignored: len(headers) - inserted,
// Second clause in the if statement reduces the vulnerability to selfish mining. imported: inserted,
// Please refer to http://www.cs.cornell.edu/~ie53/publications/btcProcFC.pdf
reorg := newTD.Cmp(localTD) > 0
if !reorg && newTD.Cmp(localTD) == 0 {
if lastNumber < head {
reorg = true
} else if lastNumber == head {
reorg = mrand.Float64() < 0.5
}
}
// If the parent of the (first) block is already the canon header,
// we don't have to go backwards to delete canon blocks, but
// simply pile them onto the existing chain
chainAlreadyCanon := headers[0].ParentHash == hc.currentHeaderHash
if reorg {
// If the header can be added into canonical chain, adjust the
// header chain markers(canonical indexes and head header flag).
//
// Note all markers should be written atomically.
markerBatch := batch // we can reuse the batch to keep allocs down
if !chainAlreadyCanon {
// Delete any canonical number assignments above the new head
for i := lastNumber + 1; ; i++ {
hash := rawdb.ReadCanonicalHash(hc.chainDb, i)
if hash == (common.Hash{}) {
break
}
rawdb.DeleteCanonicalHash(markerBatch, i)
}
// Overwrite any stale canonical number assignments, going
// backwards from the first header in this import
var (
headHash = headers[0].ParentHash // inserted[0].parent?
headNumber = headers[0].Number.Uint64() - 1 // inserted[0].num-1 ?
headHeader = hc.GetHeader(headHash, headNumber)
)
for rawdb.ReadCanonicalHash(hc.chainDb, headNumber) != headHash {
rawdb.WriteCanonicalHash(markerBatch, headHash, headNumber)
headHash = headHeader.ParentHash
headNumber = headHeader.Number.Uint64() - 1
headHeader = hc.GetHeader(headHash, headNumber)
}
// If some of the older headers were already known, but obtained canon-status
// during this import batch, then we need to write that now
// Further down, we continue writing the status for the ones that
// were not already known
for i := 0; i < firstInserted; i++ {
hash := headers[i].Hash()
num := headers[i].Number.Uint64()
rawdb.WriteCanonicalHash(markerBatch, hash, num)
rawdb.WriteHeadHeaderHash(markerBatch, hash)
}
}
// Extend the canonical chain with the new headers
for _, hn := range inserted {
rawdb.WriteCanonicalHash(markerBatch, hn.hash, hn.number)
rawdb.WriteHeadHeaderHash(markerBatch, hn.hash)
}
if err := markerBatch.Write(); err != nil {
log.Crit("Failed to write header markers into disk", "err", err)
}
markerBatch.Reset()
// Last step update all in-memory head header markers
hc.currentHeaderHash = lastHash
hc.currentHeader.Store(types.CopyHeader(lastHeader))
headHeaderGauge.Update(lastHeader.Number.Int64())
// Chain status is canonical since this insert was a reorg.
// Note that all inserts which have higher TD than existing are 'reorg'.
status = CanonStatTy
}
if len(inserted) == 0 {
status = NonStatTy
}
return &headerWriteResult{
status: status,
ignored: len(headers) - len(inserted),
imported: len(inserted),
lastHash: lastHash, lastHash: lastHash,
lastHeader: lastHeader, lastHeader: lastHeader,
}, nil }
)
// Ask the fork choicer if the reorg is necessary
if reorg, err := forker.ReorgNeeded(hc.CurrentHeader(), lastHeader); err != nil {
return nil, err
} else if !reorg {
if inserted != 0 {
result.status = SideStatTy
}
return result, nil
}
// Special case, all the inserted headers are already on the canonical
// header chain, skip the reorg operation.
if hc.GetCanonicalHash(lastHeader.Number.Uint64()) == lastHash && lastHeader.Number.Uint64() <= hc.CurrentHeader().Number.Uint64() {
return result, nil
}
// Apply the reorg operation
if err := hc.Reorg(headers); err != nil {
return nil, err
}
result.status = CanonStatTy
return result, nil
} }
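The split between persisting headers and moving the head can be sketched with the exported pieces. This is a simplified view under stated assumptions: the private writeHeadersAndSetHead above additionally short-circuits when the inserted headers are already canonical.

```go
package sketch

import (
	"github.com/ethereum/go-ethereum/core"
	"github.com/ethereum/go-ethereum/core/types"
)

// insertAndSetHead shows the two-phase flow: WriteHeaders persists headers
// and TDs without touching the head, then the fork choicer decides whether
// Reorg should move the canonical markers.
func insertAndSetHead(hc *core.HeaderChain, forker *core.ForkChoice, headers []*types.Header) error {
	if _, err := hc.WriteHeaders(headers); err != nil {
		return err
	}
	last := headers[len(headers)-1]
	reorg, err := forker.ReorgNeeded(hc.CurrentHeader(), last)
	if err != nil || !reorg {
		return err // side chain: headers are stored, head stays put
	}
	return hc.Reorg(headers) // canonical: rewrite number->hash markers and head
}
```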
func (hc *HeaderChain) ValidateHeaderChain(chain []*types.Header, checkFreq int) (int, error) { func (hc *HeaderChain) ValidateHeaderChain(chain []*types.Header, checkFreq int) (int, error) {
@ -357,7 +362,7 @@ func (hc *HeaderChain) ValidateHeaderChain(chain []*types.Header, checkFreq int)
return 0, nil return 0, nil
} }
// InsertHeaderChain inserts the given headers. // InsertHeaderChain inserts the given headers and does the reorganisations.
// //
// The validity of the headers is NOT CHECKED by this method, i.e. they need to be // The validity of the headers is NOT CHECKED by this method, i.e. they need to be
// validated by ValidateHeaderChain before calling InsertHeaderChain. // validated by ValidateHeaderChain before calling InsertHeaderChain.
@ -367,20 +372,19 @@ func (hc *HeaderChain) ValidateHeaderChain(chain []*types.Header, checkFreq int)
// //
// The returned 'write status' says if the inserted headers are part of the canonical chain // The returned 'write status' says if the inserted headers are part of the canonical chain
// or a side chain. // or a side chain.
func (hc *HeaderChain) InsertHeaderChain(chain []*types.Header, start time.Time) (WriteStatus, error) { func (hc *HeaderChain) InsertHeaderChain(chain []*types.Header, start time.Time, forker *ForkChoice) (WriteStatus, error) {
if hc.procInterrupt() { if hc.procInterrupt() {
return 0, errors.New("aborted") return 0, errors.New("aborted")
} }
res, err := hc.writeHeaders(chain) res, err := hc.writeHeadersAndSetHead(chain, forker)
if err != nil {
return 0, err
}
// Report some public statistics so the user has a clue what's going on // Report some public statistics so the user has a clue what's going on
context := []interface{}{ context := []interface{}{
"count", res.imported, "count", res.imported,
"elapsed", common.PrettyDuration(time.Since(start)), "elapsed", common.PrettyDuration(time.Since(start)),
} }
if err != nil {
context = append(context, "err", err)
}
if last := res.lastHeader; last != nil { if last := res.lastHeader; last != nil {
context = append(context, "number", last.Number, "hash", res.lastHash) context = append(context, "number", last.Number, "hash", res.lastHash)
if timestamp := time.Unix(int64(last.Time), 0); time.Since(timestamp) > time.Minute { if timestamp := time.Unix(int64(last.Time), 0); time.Since(timestamp) > time.Minute {
@ -495,6 +499,46 @@ func (hc *HeaderChain) GetHeaderByNumber(number uint64) *types.Header {
return hc.GetHeader(hash, number) return hc.GetHeader(hash, number)
} }
// GetHeadersFrom returns a contiguous segment of headers, in rlp-form, going
// backwards from the given number.
// If the 'number' is higher than the highest local header, this method will
// return a best-effort response, containing the headers that we do have.
func (hc *HeaderChain) GetHeadersFrom(number, count uint64) []rlp.RawValue {
// If the request is for future headers, we still return the portion of
// headers that we are able to serve
if current := hc.CurrentHeader().Number.Uint64(); current < number {
if count > number-current {
count -= number - current
number = current
} else {
return nil
}
}
var headers []rlp.RawValue
// If we have some of the headers in cache already, use that before going to db.
hash := rawdb.ReadCanonicalHash(hc.chainDb, number)
if hash == (common.Hash{}) {
return nil
}
for count > 0 {
header, ok := hc.headerCache.Get(hash)
if !ok {
break
}
h := header.(*types.Header)
rlpData, _ := rlp.EncodeToBytes(h)
headers = append(headers, rlpData)
hash = h.ParentHash
count--
number--
}
// Read remaining from db
if count > 0 {
headers = append(headers, rawdb.ReadHeaderRange(hc.chainDb, number, count)...)
}
return headers
}
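A hedged usage sketch for the accessor above, decoding the returned RLP back into headers; numbers descend from the head, one per entry:

```go
package sketch

import (
	"github.com/ethereum/go-ethereum/core"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/rlp"
)

// lastHeaders fetches up to count headers walking back from the current head.
func lastHeaders(hc *core.HeaderChain, count uint64) []*types.Header {
	var out []*types.Header
	head := hc.CurrentHeader().Number.Uint64()
	for _, raw := range hc.GetHeadersFrom(head, count) {
		h := new(types.Header)
		if err := rlp.DecodeBytes(raw, h); err != nil {
			break
		}
		out = append(out, h) // head first, then head-1, ...
	}
	return out
}
```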
func (hc *HeaderChain) GetCanonicalHash(number uint64) common.Hash { func (hc *HeaderChain) GetCanonicalHash(number uint64) common.Hash {
return rawdb.ReadCanonicalHash(hc.chainDb, number) return rawdb.ReadCanonicalHash(hc.chainDb, number)
} }

View File

@ -51,10 +51,10 @@ func verifyUnbrokenCanonchain(hc *HeaderChain) error {
return nil return nil
} }
func testInsert(t *testing.T, hc *HeaderChain, chain []*types.Header, wantStatus WriteStatus, wantErr error) { func testInsert(t *testing.T, hc *HeaderChain, chain []*types.Header, wantStatus WriteStatus, wantErr error, forker *ForkChoice) {
t.Helper() t.Helper()
status, err := hc.InsertHeaderChain(chain, time.Now()) status, err := hc.InsertHeaderChain(chain, time.Now(), forker)
if status != wantStatus { if status != wantStatus {
t.Errorf("wrong write status from InsertHeaderChain: got %v, want %v", status, wantStatus) t.Errorf("wrong write status from InsertHeaderChain: got %v, want %v", status, wantStatus)
} }
@ -80,37 +80,38 @@ func TestHeaderInsertion(t *testing.T) {
} }
// chain A: G->A1->A2...A128 // chain A: G->A1->A2...A128
chainA := makeHeaderChain(genesis.Header(), 128, ethash.NewFaker(), db, 10) chainA := makeHeaderChain(genesis.Header(), 128, ethash.NewFaker(), db, 10)
// chain B: G->A1->B2...B128 // chain B: G->A1->B1...B128
chainB := makeHeaderChain(chainA[0], 128, ethash.NewFaker(), db, 10) chainB := makeHeaderChain(chainA[0], 128, ethash.NewFaker(), db, 10)
log.Root().SetHandler(log.StdoutHandler) log.Root().SetHandler(log.StdoutHandler)
forker := NewForkChoice(hc, nil)
// Inserting 64 headers on an empty chain, expecting // Inserting 64 headers on an empty chain, expecting
// 1 callbacks, 1 canon-status, 0 sidestatus, // 1 callbacks, 1 canon-status, 0 sidestatus,
testInsert(t, hc, chainA[:64], CanonStatTy, nil) testInsert(t, hc, chainA[:64], CanonStatTy, nil, forker)
// Inserting 64 identical headers, expecting // Inserting 64 identical headers, expecting
// 0 callbacks, 0 canon-status, 0 sidestatus, // 0 callbacks, 0 canon-status, 0 sidestatus,
testInsert(t, hc, chainA[:64], NonStatTy, nil) testInsert(t, hc, chainA[:64], NonStatTy, nil, forker)
// Inserting the same some old, some new headers // Inserting the same some old, some new headers
// 1 callbacks, 1 canon, 0 side // 1 callbacks, 1 canon, 0 side
testInsert(t, hc, chainA[32:96], CanonStatTy, nil) testInsert(t, hc, chainA[32:96], CanonStatTy, nil, forker)
// Inserting side blocks, but not overtaking the canon chain // Inserting side blocks, but not overtaking the canon chain
testInsert(t, hc, chainB[0:32], SideStatTy, nil) testInsert(t, hc, chainB[0:32], SideStatTy, nil, forker)
// Inserting more side blocks, but we don't have the parent // Inserting more side blocks, but we don't have the parent
testInsert(t, hc, chainB[34:36], NonStatTy, consensus.ErrUnknownAncestor) testInsert(t, hc, chainB[34:36], NonStatTy, consensus.ErrUnknownAncestor, forker)
// Inserting more sideblocks, overtaking the canon chain // Inserting more sideblocks, overtaking the canon chain
testInsert(t, hc, chainB[32:97], CanonStatTy, nil) testInsert(t, hc, chainB[32:97], CanonStatTy, nil, forker)
// Inserting more A-headers, taking back the canonicality // Inserting more A-headers, taking back the canonicality
testInsert(t, hc, chainA[90:100], CanonStatTy, nil) testInsert(t, hc, chainA[90:100], CanonStatTy, nil, forker)
// And B becomes canon again // And B becomes canon again
testInsert(t, hc, chainB[97:107], CanonStatTy, nil) testInsert(t, hc, chainB[97:107], CanonStatTy, nil, forker)
// And B becomes even longer // And B becomes even longer
testInsert(t, hc, chainB[107:128], CanonStatTy, nil) testInsert(t, hc, chainB[107:128], CanonStatTy, nil, forker)
} }

View File

@ -242,24 +242,6 @@ func WriteLastPivotNumber(db ethdb.KeyValueWriter, pivot uint64) {
} }
} }
// ReadFastTrieProgress retrieves the number of tries nodes fast synced to allow
// reporting correct numbers across restarts.
func ReadFastTrieProgress(db ethdb.KeyValueReader) uint64 {
data, _ := db.Get(fastTrieProgressKey)
if len(data) == 0 {
return 0
}
return new(big.Int).SetBytes(data).Uint64()
}
// WriteFastTrieProgress stores the fast sync trie process counter to support
// retrieving it across restarts.
func WriteFastTrieProgress(db ethdb.KeyValueWriter, count uint64) {
if err := db.Put(fastTrieProgressKey, new(big.Int).SetUint64(count).Bytes()); err != nil {
log.Crit("Failed to store fast sync trie progress", "err", err)
}
}
// ReadTxIndexTail retrieves the number of oldest indexed block // ReadTxIndexTail retrieves the number of oldest indexed block
// whose transaction indices has been indexed. If the corresponding entry // whose transaction indices has been indexed. If the corresponding entry
// is non-existent in database it means the indexing has been finished. // is non-existent in database it means the indexing has been finished.
@ -297,6 +279,56 @@ func WriteFastTxLookupLimit(db ethdb.KeyValueWriter, number uint64) {
} }
} }
// ReadHeaderRange returns the rlp-encoded headers, starting at 'number', and going
// backwards towards genesis. This method assumes that the caller already has
// placed a cap on count, to prevent DoS issues.
// Since this method operates in head-towards-genesis mode, it will return an empty
// slice in case the head ('number') is missing. Hence, the caller must ensure that
// the head ('number') argument is actually an existing header.
//
// N.B: Since the input is a number, as opposed to a hash, it's implicit that
// this method only operates on canon headers.
func ReadHeaderRange(db ethdb.Reader, number uint64, count uint64) []rlp.RawValue {
var rlpHeaders []rlp.RawValue
if count == 0 {
return rlpHeaders
}
i := number
if count-1 > number {
// It's ok to request block 0, 1 item
count = number + 1
}
limit, _ := db.Ancients()
// First read live blocks
if i >= limit {
// If we need to read live blocks, we need to figure out the hash first
hash := ReadCanonicalHash(db, number)
for ; i >= limit && count > 0; i-- {
if data, _ := db.Get(headerKey(i, hash)); len(data) > 0 {
rlpHeaders = append(rlpHeaders, data)
// Get the parent hash for next query
hash = types.HeaderParentHashFromRLP(data)
} else {
break // Maybe got moved to ancients
}
count--
}
}
if count == 0 {
return rlpHeaders
}
// read remaining from ancients
max := count * 700
data, err := db.AncientRange(freezerHeaderTable, i+1-count, count, max)
if err == nil && uint64(len(data)) == count {
// the data is on the order [h, h+1, .., n] -- reordering needed
for i := range data {
rlpHeaders = append(rlpHeaders, data[len(data)-1-i])
}
}
return rlpHeaders
}
// ReadHeaderRLP retrieves a block header in its raw RLP database encoding. // ReadHeaderRLP retrieves a block header in its raw RLP database encoding.
func ReadHeaderRLP(db ethdb.Reader, hash common.Hash, number uint64) rlp.RawValue { func ReadHeaderRLP(db ethdb.Reader, hash common.Hash, number uint64) rlp.RawValue {
var data []byte var data []byte
@ -682,7 +714,7 @@ func ReadLogs(db ethdb.Reader, hash common.Hash, number uint64, config *params.C
if logs := readLegacyLogs(db, hash, number, config); logs != nil { if logs := readLegacyLogs(db, hash, number, config); logs != nil {
return logs return logs
} }
log.Error("Invalid receipt array RLP", "hash", "err", err) log.Error("Invalid receipt array RLP", "hash", hash, "err", err)
return nil return nil
} }

View File

@ -883,3 +883,67 @@ func BenchmarkDecodeRLPLogs(b *testing.B) {
} }
}) })
} }
func TestHeadersRLPStorage(t *testing.T) {
// Have N headers in the freezer
frdir, err := ioutil.TempDir("", "")
if err != nil {
t.Fatalf("failed to create temp freezer dir: %v", err)
}
defer os.Remove(frdir)
db, err := NewDatabaseWithFreezer(NewMemoryDatabase(), frdir, "", false)
if err != nil {
t.Fatalf("failed to create database with ancient backend")
}
defer db.Close()
// Create blocks
var chain []*types.Block
var pHash common.Hash
for i := 0; i < 100; i++ {
block := types.NewBlockWithHeader(&types.Header{
Number: big.NewInt(int64(i)),
Extra: []byte("test block"),
UncleHash: types.EmptyUncleHash,
TxHash: types.EmptyRootHash,
ReceiptHash: types.EmptyRootHash,
ParentHash: pHash,
})
chain = append(chain, block)
pHash = block.Hash()
}
var receipts []types.Receipts = make([]types.Receipts, 100)
// Write first half to ancients
WriteAncientBlocks(db, chain[:50], receipts[:50], big.NewInt(100))
// Write second half to db
for i := 50; i < 100; i++ {
WriteCanonicalHash(db, chain[i].Hash(), chain[i].NumberU64())
WriteBlock(db, chain[i])
}
checkSequence := func(from, amount int) {
headersRlp := ReadHeaderRange(db, uint64(from), uint64(amount))
if have, want := len(headersRlp), amount; have != want {
t.Fatalf("have %d headers, want %d", have, want)
}
for i, headerRlp := range headersRlp {
var header types.Header
if err := rlp.DecodeBytes(headerRlp, &header); err != nil {
t.Fatal(err)
}
if have, want := header.Number.Uint64(), uint64(from-i); have != want {
t.Fatalf("wrong number, have %d want %d", have, want)
}
}
}
checkSequence(99, 20) // Latest block and 19 parents
checkSequence(99, 50) // Latest block -> all db blocks
checkSequence(99, 51) // Latest block -> one from ancients
checkSequence(99, 52) // Latest blocks -> two from ancients
checkSequence(50, 2) // One from db, one from ancients
checkSequence(49, 1) // One from ancients
checkSequence(49, 50) // All ancient ones
checkSequence(99, 100) // All blocks
checkSequence(0, 1) // Only genesis
checkSequence(1, 1) // Only block 1
checkSequence(1, 2) // Genesis + block 1
}

View File

@ -138,3 +138,38 @@ func PopUncleanShutdownMarker(db ethdb.KeyValueStore) {
log.Warn("Failed to clear unclean-shutdown marker", "err", err) log.Warn("Failed to clear unclean-shutdown marker", "err", err)
} }
} }
// UpdateUncleanShutdownMarker updates the last marker's timestamp to now.
func UpdateUncleanShutdownMarker(db ethdb.KeyValueStore) {
var uncleanShutdowns crashList
// Read old data
if data, err := db.Get(uncleanShutdownKey); err != nil {
log.Warn("Error reading unclean shutdown markers", "error", err)
} else if err := rlp.DecodeBytes(data, &uncleanShutdowns); err != nil {
log.Warn("Error decoding unclean shutdown markers", "error", err)
}
// This shouldn't happen because we push a marker on Backend instantiation
count := len(uncleanShutdowns.Recent)
if count == 0 {
log.Warn("No unclean shutdown marker to update")
return
}
uncleanShutdowns.Recent[count-1] = uint64(time.Now().Unix())
data, _ := rlp.EncodeToBytes(uncleanShutdowns)
if err := db.Put(uncleanShutdownKey, data); err != nil {
log.Warn("Failed to write unclean-shutdown marker", "err", err)
}
}
// ReadTransitionStatus retrieves the eth2 transition status from the database
func ReadTransitionStatus(db ethdb.KeyValueReader) []byte {
data, _ := db.Get(transitionStatusKey)
return data
}
// WriteTransitionStatus stores the eth2 transition status to the database
func WriteTransitionStatus(db ethdb.KeyValueWriter, data []byte) {
if err := db.Put(transitionStatusKey, data); err != nil {
log.Crit("Failed to store the eth2 transition status", "err", err)
}
}

View File

@ -208,11 +208,3 @@ func WriteSnapshotSyncStatus(db ethdb.KeyValueWriter, status []byte) {
log.Crit("Failed to store snapshot sync status", "err", err) log.Crit("Failed to store snapshot sync status", "err", err)
} }
} }
// DeleteSnapshotSyncStatus deletes the serialized sync status saved at the last
// shutdown
func DeleteSnapshotSyncStatus(db ethdb.KeyValueWriter) {
if err := db.Delete(snapshotSyncStatusKey); err != nil {
log.Crit("Failed to remove snapshot sync status", "err", err)
}
}

View File

@ -247,7 +247,8 @@ func indexTransactions(db ethdb.Database, from uint64, to uint64, interrupt chan
} }
} }
// IndexTransactions creates txlookup indices of the specified block range. // IndexTransactions creates txlookup indices of the specified block range. The from
// is included while to is excluded.
// //
// This function iterates canonical chain in reverse order, it has one main advantage: // This function iterates canonical chain in reverse order, it has one main advantage:
// We can write tx index tail flag periodically even without the whole indexing // We can write tx index tail flag periodically even without the whole indexing
@ -339,6 +340,7 @@ func unindexTransactions(db ethdb.Database, from uint64, to uint64, interrupt ch
} }
// UnindexTransactions removes txlookup indices of the specified block range. // UnindexTransactions removes txlookup indices of the specified block range.
// The from is included while to is excluded.
// //
// There is a passed channel, the whole procedure will be interrupted if any // There is a passed channel, the whole procedure will be interrupted if any
// signal received. // signal received.
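A hedged sketch of the half-open range semantics spelled out above, indexing every block up to and including the head:

```go
package sketch

import (
	"github.com/ethereum/go-ethereum/core/rawdb"
	"github.com/ethereum/go-ethereum/ethdb"
)

// indexAll builds tx lookup indices for blocks [0, head], i.e. it passes
// head+1 as the excluded upper bound.
func indexAll(db ethdb.Database, head uint64) {
	interrupt := make(chan struct{}) // close to abort mid-run
	rawdb.IndexTransactions(db, 0, head+1, interrupt)
}
```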

View File

@ -395,7 +395,7 @@ func InspectDatabase(db ethdb.Database, keyPrefix, keyStart []byte) error {
databaseVersionKey, headHeaderKey, headBlockKey, headFastBlockKey, lastPivotKey, databaseVersionKey, headHeaderKey, headBlockKey, headFastBlockKey, lastPivotKey,
fastTrieProgressKey, snapshotDisabledKey, SnapshotRootKey, snapshotJournalKey, fastTrieProgressKey, snapshotDisabledKey, SnapshotRootKey, snapshotJournalKey,
snapshotGeneratorKey, snapshotRecoveryKey, txIndexTailKey, fastTxLookupLimitKey, snapshotGeneratorKey, snapshotRecoveryKey, txIndexTailKey, fastTxLookupLimitKey,
uncleanShutdownKey, badBlockKey, uncleanShutdownKey, badBlockKey, transitionStatusKey,
} { } {
if bytes.Equal(key, meta) { if bytes.Equal(key, meta) {
metadata.Add(size) metadata.Add(size)

View File

@ -75,6 +75,9 @@ var (
// uncleanShutdownKey tracks the list of local crashes // uncleanShutdownKey tracks the list of local crashes
uncleanShutdownKey = []byte("unclean-shutdown") // config prefix for the db uncleanShutdownKey = []byte("unclean-shutdown") // config prefix for the db
// transitionStatusKey tracks the eth2 transition status.
transitionStatusKey = []byte("eth2-transition")
// Data item prefixes (use single byte to avoid mixing data types, avoid `i`, used for indexes). // Data item prefixes (use single byte to avoid mixing data types, avoid `i`, used for indexes).
headerPrefix = []byte("h") // headerPrefix + num (uint64 big endian) + hash -> header headerPrefix = []byte("h") // headerPrefix + num (uint64 big endian) + hash -> header
headerTDSuffix = []byte("t") // headerPrefix + num (uint64 big endian) + hash + headerTDSuffix -> td headerTDSuffix = []byte("t") // headerPrefix + num (uint64 big endian) + hash + headerTDSuffix -> td

View File

@ -48,13 +48,13 @@ var (
// accountCheckRange is the upper limit of the number of accounts involved in // accountCheckRange is the upper limit of the number of accounts involved in
// each range check. This is a value estimated based on experience. If this // each range check. This is a value estimated based on experience. If this
// value is too large, the failure rate of range prove will increase. Otherwise // value is too large, the failure rate of range prove will increase. Otherwise
// the the value is too small, the efficiency of the state recovery will decrease. // the value is too small, the efficiency of the state recovery will decrease.
accountCheckRange = 128 accountCheckRange = 128
// storageCheckRange is the upper limit of the number of storage slots involved // storageCheckRange is the upper limit of the number of storage slots involved
// in each range check. This is a value estimated based on experience. If this // in each range check. This is a value estimated based on experience. If this
// value is too large, the failure rate of range prove will increase. Otherwise // value is too large, the failure rate of range prove will increase. Otherwise
// the the value is too small, the efficiency of the state recovery will decrease. // the value is too small, the efficiency of the state recovery will decrease.
storageCheckRange = 1024 storageCheckRange = 1024
// errMissingTrie is returned if the target trie is missing while the generation // errMissingTrie is returned if the target trie is missing while the generation

View File

@ -21,67 +21,11 @@ import (
"time" "time"
"github.com/ethereum/go-ethereum/common" "github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/rawdb"
"github.com/ethereum/go-ethereum/ethdb" "github.com/ethereum/go-ethereum/ethdb"
"github.com/ethereum/go-ethereum/log" "github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/metrics" "github.com/ethereum/go-ethereum/metrics"
) )
// wipeSnapshot starts a goroutine to iterate over the entire key-value database
// and delete all the data associated with the snapshot (accounts, storage,
// metadata). After all is done, the snapshot range of the database is compacted
// to free up unused data blocks.
func wipeSnapshot(db ethdb.KeyValueStore, full bool) chan struct{} {
// Wipe the snapshot root marker synchronously
if full {
rawdb.DeleteSnapshotRoot(db)
}
// Wipe everything else asynchronously
wiper := make(chan struct{}, 1)
go func() {
if err := wipeContent(db); err != nil {
log.Error("Failed to wipe state snapshot", "err", err) // Database close will trigger this
return
}
close(wiper)
}()
return wiper
}
// wipeContent iterates over the entire key-value database and deletes all the
// data associated with the snapshot (accounts, storage), but not the root hash
// as the wiper is meant to run on a background thread but the root needs to be
// removed in sync to avoid data races. After all is done, the snapshot range of
// the database is compacted to free up unused data blocks.
func wipeContent(db ethdb.KeyValueStore) error {
if err := wipeKeyRange(db, "accounts", rawdb.SnapshotAccountPrefix, nil, nil, len(rawdb.SnapshotAccountPrefix)+common.HashLength, snapWipedAccountMeter, true); err != nil {
return err
}
if err := wipeKeyRange(db, "storage", rawdb.SnapshotStoragePrefix, nil, nil, len(rawdb.SnapshotStoragePrefix)+2*common.HashLength, snapWipedStorageMeter, true); err != nil {
return err
}
// Compact the snapshot section of the database to get rid of unused space
start := time.Now()
log.Info("Compacting snapshot account area ")
end := common.CopyBytes(rawdb.SnapshotAccountPrefix)
end[len(end)-1]++
if err := db.Compact(rawdb.SnapshotAccountPrefix, end); err != nil {
return err
}
log.Info("Compacting snapshot storage area ")
end = common.CopyBytes(rawdb.SnapshotStoragePrefix)
end[len(end)-1]++
if err := db.Compact(rawdb.SnapshotStoragePrefix, end); err != nil {
return err
}
log.Info("Compacted snapshot area in database", "elapsed", common.PrettyDuration(time.Since(start)))
return nil
}
// wipeKeyRange deletes a range of keys from the database starting with prefix // wipeKeyRange deletes a range of keys from the database starting with prefix
// and having a specific total key length. The start and limit is optional for // and having a specific total key length. The start and limit is optional for
// specifying a particular key range for deletion. // specifying a particular key range for deletion.

View File

@ -30,95 +30,50 @@ import (
func TestWipe(t *testing.T) { func TestWipe(t *testing.T) {
// Create a database with some random snapshot data // Create a database with some random snapshot data
db := memorydb.New() db := memorydb.New()
for i := 0; i < 128; i++ { for i := 0; i < 128; i++ {
account := randomHash() rawdb.WriteAccountSnapshot(db, randomHash(), randomHash().Bytes())
rawdb.WriteAccountSnapshot(db, account, randomHash().Bytes())
for j := 0; j < 1024; j++ {
rawdb.WriteStorageSnapshot(db, account, randomHash(), randomHash().Bytes())
} }
}
rawdb.WriteSnapshotRoot(db, randomHash())
// Add some random non-snapshot data too to make wiping harder // Add some random non-snapshot data too to make wiping harder
for i := 0; i < 65536; i++ { for i := 0; i < 500; i++ {
// Generate a key that's the wrong length for a state snapshot item // Generate keys with wrong length for a state snapshot item
var keysize int keysuffix := make([]byte, 31)
for keysize == 0 || keysize == 32 || keysize == 64 { rand.Read(keysuffix)
keysize = 8 + rand.Intn(64) // +8 to ensure we will "never" randomize duplicates db.Put(append(rawdb.SnapshotAccountPrefix, keysuffix...), randomHash().Bytes())
} keysuffix = make([]byte, 33)
// Randomize the suffix, dedup and inject it under the snapshot namespace
keysuffix := make([]byte, keysize)
rand.Read(keysuffix) rand.Read(keysuffix)
if rand.Int31n(2) == 0 {
db.Put(append(rawdb.SnapshotAccountPrefix, keysuffix...), randomHash().Bytes()) db.Put(append(rawdb.SnapshotAccountPrefix, keysuffix...), randomHash().Bytes())
} else {
db.Put(append(rawdb.SnapshotStoragePrefix, keysuffix...), randomHash().Bytes())
} }
} count := func() (items int) {
// Sanity check that all the keys are present
var items int
it := db.NewIterator(rawdb.SnapshotAccountPrefix, nil) it := db.NewIterator(rawdb.SnapshotAccountPrefix, nil)
defer it.Release() defer it.Release()
for it.Next() { for it.Next() {
key := it.Key() if len(it.Key()) == len(rawdb.SnapshotAccountPrefix)+common.HashLength {
if len(key) == len(rawdb.SnapshotAccountPrefix)+common.HashLength {
items++ items++
} }
} }
it = db.NewIterator(rawdb.SnapshotStoragePrefix, nil) return items
defer it.Release()
for it.Next() {
key := it.Key()
if len(key) == len(rawdb.SnapshotStoragePrefix)+2*common.HashLength {
items++
} }
// Sanity check that all the keys are present
if items := count(); items != 128 {
t.Fatalf("snapshot size mismatch: have %d, want %d", items, 128)
} }
if items != 128+128*1024 { // Wipe the accounts
t.Fatalf("snapshot size mismatch: have %d, want %d", items, 128+128*1024) if err := wipeKeyRange(db, "accounts", rawdb.SnapshotAccountPrefix, nil, nil,
len(rawdb.SnapshotAccountPrefix)+common.HashLength, snapWipedAccountMeter, true); err != nil {
t.Fatal(err)
} }
if hash := rawdb.ReadSnapshotRoot(db); hash == (common.Hash{}) {
t.Errorf("snapshot block marker mismatch: have %#x, want <not-nil>", hash)
}
// Wipe all snapshot entries from the database
<-wipeSnapshot(db, true)
// Iterate over the database and ensure no snapshot information remains if items := count(); items != 0 {
it = db.NewIterator(rawdb.SnapshotAccountPrefix, nil) if items := count(); items != 0 {
defer it.Release() t.Fatalf("snapshot size mismatch: have %d, want %d", items, 0)
for it.Next() {
key := it.Key()
if len(key) == len(rawdb.SnapshotAccountPrefix)+common.HashLength {
t.Errorf("snapshot entry remained after wipe: %x", key)
}
}
it = db.NewIterator(rawdb.SnapshotStoragePrefix, nil)
defer it.Release()
for it.Next() {
key := it.Key()
if len(key) == len(rawdb.SnapshotStoragePrefix)+2*common.HashLength {
t.Errorf("snapshot entry remained after wipe: %x", key)
}
}
if hash := rawdb.ReadSnapshotRoot(db); hash != (common.Hash{}) {
t.Errorf("snapshot block marker remained after wipe: %#x", hash)
} }
// Iterate over the database and ensure miscellaneous items are present // Iterate over the database and ensure miscellaneous items are present
items = 0 items := 0
it := db.NewIterator(nil, nil)
it = db.NewIterator(nil, nil)
defer it.Release() defer it.Release()
for it.Next() { for it.Next() {
items++ items++
} }
if items != 65536 { if items != 1000 {
t.Fatalf("misc item count mismatch: have %d, want %d", items, 65536) t.Fatalf("misc item count mismatch: have %d, want %d", items, 1000)
} }
} }

View File

@ -27,7 +27,7 @@ import (
) )
// NewStateSync create a new state trie download scheduler. // NewStateSync create a new state trie download scheduler.
func NewStateSync(root common.Hash, database ethdb.KeyValueReader, bloom *trie.SyncBloom, onLeaf func(paths [][]byte, leaf []byte) error) *trie.Sync { func NewStateSync(root common.Hash, database ethdb.KeyValueReader, onLeaf func(paths [][]byte, leaf []byte) error) *trie.Sync {
// Register the storage slot callback if the external callback is specified. // Register the storage slot callback if the external callback is specified.
var onSlot func(paths [][]byte, hexpath []byte, leaf []byte, parent common.Hash) error var onSlot func(paths [][]byte, hexpath []byte, leaf []byte, parent common.Hash) error
if onLeaf != nil { if onLeaf != nil {
@ -52,6 +52,6 @@ func NewStateSync(root common.Hash, database ethdb.KeyValueReader, bloom *trie.S
syncer.AddCodeEntry(common.BytesToHash(obj.CodeHash), hexpath, parent) syncer.AddCodeEntry(common.BytesToHash(obj.CodeHash), hexpath, parent)
return nil return nil
} }
syncer = trie.NewSync(root, database, onAccount, bloom) syncer = trie.NewSync(root, database, onAccount)
return syncer return syncer
} }

View File

@ -26,7 +26,6 @@ import (
"github.com/ethereum/go-ethereum/core/types" "github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/crypto" "github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/ethdb" "github.com/ethereum/go-ethereum/ethdb"
"github.com/ethereum/go-ethereum/ethdb/memorydb"
"github.com/ethereum/go-ethereum/rlp" "github.com/ethereum/go-ethereum/rlp"
"github.com/ethereum/go-ethereum/trie" "github.com/ethereum/go-ethereum/trie"
) )
@ -134,7 +133,7 @@ func checkStateConsistency(db ethdb.Database, root common.Hash) error {
// Tests that an empty state is not scheduled for syncing. // Tests that an empty state is not scheduled for syncing.
func TestEmptyStateSync(t *testing.T) { func TestEmptyStateSync(t *testing.T) {
empty := common.HexToHash("56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421") empty := common.HexToHash("56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421")
sync := NewStateSync(empty, rawdb.NewMemoryDatabase(), trie.NewSyncBloom(1, memorydb.New()), nil) sync := NewStateSync(empty, rawdb.NewMemoryDatabase(), nil)
if nodes, paths, codes := sync.Missing(1); len(nodes) != 0 || len(paths) != 0 || len(codes) != 0 { if nodes, paths, codes := sync.Missing(1); len(nodes) != 0 || len(paths) != 0 || len(codes) != 0 {
t.Errorf(" content requested for empty state: %v, %v, %v", nodes, paths, codes) t.Errorf(" content requested for empty state: %v, %v, %v", nodes, paths, codes)
} }
@ -171,7 +170,7 @@ func testIterativeStateSync(t *testing.T, count int, commit bool, bypath bool) {
// Create a destination state and sync with the scheduler // Create a destination state and sync with the scheduler
dstDb := rawdb.NewMemoryDatabase() dstDb := rawdb.NewMemoryDatabase()
sched := NewStateSync(srcRoot, dstDb, trie.NewSyncBloom(1, dstDb), nil) sched := NewStateSync(srcRoot, dstDb, nil)
nodes, paths, codes := sched.Missing(count) nodes, paths, codes := sched.Missing(count)
var ( var (
@ -250,7 +249,7 @@ func TestIterativeDelayedStateSync(t *testing.T) {
// Create a destination state and sync with the scheduler // Create a destination state and sync with the scheduler
dstDb := rawdb.NewMemoryDatabase() dstDb := rawdb.NewMemoryDatabase()
sched := NewStateSync(srcRoot, dstDb, trie.NewSyncBloom(1, dstDb), nil) sched := NewStateSync(srcRoot, dstDb, nil)
nodes, _, codes := sched.Missing(0) nodes, _, codes := sched.Missing(0)
queue := append(append([]common.Hash{}, nodes...), codes...) queue := append(append([]common.Hash{}, nodes...), codes...)
@ -298,7 +297,7 @@ func testIterativeRandomStateSync(t *testing.T, count int) {
// Create a destination state and sync with the scheduler // Create a destination state and sync with the scheduler
dstDb := rawdb.NewMemoryDatabase() dstDb := rawdb.NewMemoryDatabase()
sched := NewStateSync(srcRoot, dstDb, trie.NewSyncBloom(1, dstDb), nil) sched := NewStateSync(srcRoot, dstDb, nil)
queue := make(map[common.Hash]struct{}) queue := make(map[common.Hash]struct{})
nodes, _, codes := sched.Missing(count) nodes, _, codes := sched.Missing(count)
@ -348,7 +347,7 @@ func TestIterativeRandomDelayedStateSync(t *testing.T) {
// Create a destination state and sync with the scheduler // Create a destination state and sync with the scheduler
dstDb := rawdb.NewMemoryDatabase() dstDb := rawdb.NewMemoryDatabase()
sched := NewStateSync(srcRoot, dstDb, trie.NewSyncBloom(1, dstDb), nil) sched := NewStateSync(srcRoot, dstDb, nil)
queue := make(map[common.Hash]struct{}) queue := make(map[common.Hash]struct{})
nodes, _, codes := sched.Missing(0) nodes, _, codes := sched.Missing(0)
@ -415,7 +414,7 @@ func TestIncompleteStateSync(t *testing.T) {
// Create a destination state and sync with the scheduler // Create a destination state and sync with the scheduler
dstDb := rawdb.NewMemoryDatabase() dstDb := rawdb.NewMemoryDatabase()
sched := NewStateSync(srcRoot, dstDb, trie.NewSyncBloom(1, dstDb), nil) sched := NewStateSync(srcRoot, dstDb, nil)
var added []common.Hash var added []common.Hash

View File

@ -621,9 +621,8 @@ func (pool *TxPool) validateTx(tx *types.Transaction, local bool) error {
if err != nil { if err != nil {
return ErrInvalidSender return ErrInvalidSender
} }
// Drop non-local transactions under our own minimal accepted gas price or tip. // Drop non-local transactions under our own minimal accepted gas price or tip
pendingBaseFee := pool.priced.urgent.baseFee if !local && tx.GasTipCapIntCmp(pool.gasPrice) < 0 {
if !local && tx.EffectiveGasTipIntCmp(pool.gasPrice, pendingBaseFee) < 0 {
return ErrUnderpriced return ErrUnderpriced
} }
// Ensure the transaction adheres to nonce ordering // Ensure the transaction adheres to nonce ordering
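The restored check above can be read as the following predicate; a hedged sketch, noting that the replaced code instead compared the effective tip against the pending base fee:

```go
package sketch

import (
	"math/big"

	"github.com/ethereum/go-ethereum/core/types"
)

// underpriced mirrors the restored validation: a remote transaction whose raw
// tip cap (maxPriorityFeePerGas) is below the pool's floor is rejected,
// without consulting the current base fee.
func underpriced(tx *types.Transaction, poolGasPrice *big.Int, local bool) bool {
	return !local && tx.GasTipCapIntCmp(poolGasPrice) < 0
}
```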

View File

@ -85,6 +85,12 @@ type Header struct {
// BaseFee was added by EIP-1559 and is ignored in legacy headers. // BaseFee was added by EIP-1559 and is ignored in legacy headers.
BaseFee *big.Int `json:"baseFeePerGas" rlp:"optional"` BaseFee *big.Int `json:"baseFeePerGas" rlp:"optional"`
/*
TODO (MariusVanDerWijden) Add this field once needed
// Random was added during the merge and contains the BeaconState randomness
Random common.Hash `json:"random" rlp:"optional"`
*/
} }
// field type overrides for gencodec // field type overrides for gencodec
@ -383,3 +389,21 @@ func (b *Block) Hash() common.Hash {
} }
type Blocks []*Block type Blocks []*Block
// HeaderParentHashFromRLP returns the parentHash of an RLP-encoded
// header. If 'header' is invalid, the zero hash is returned.
func HeaderParentHashFromRLP(header []byte) common.Hash {
// parentHash is the first list element.
listContent, _, err := rlp.SplitList(header)
if err != nil {
return common.Hash{}
}
parentHash, _, err := rlp.SplitString(listContent)
if err != nil {
return common.Hash{}
}
if len(parentHash) != 32 {
return common.Hash{}
}
return common.BytesToHash(parentHash)
}

View File

@ -281,3 +281,64 @@ func makeBenchBlock() *Block {
} }
return NewBlock(header, txs, uncles, receipts, newHasher()) return NewBlock(header, txs, uncles, receipts, newHasher())
} }
func TestRlpDecodeParentHash(t *testing.T) {
// A minimum one
want := common.HexToHash("0x112233445566778899001122334455667788990011223344556677889900aabb")
if rlpData, err := rlp.EncodeToBytes(Header{ParentHash: want}); err != nil {
t.Fatal(err)
} else {
if have := HeaderParentHashFromRLP(rlpData); have != want {
t.Fatalf("have %x, want %x", have, want)
}
}
// And a maximum one
// | Difficulty | dynamic| *big.Int | 0x5ad3c2c71bbff854908 (current mainnet TD: 76 bits) |
// | Number | dynamic| *big.Int | 64 bits |
// | Extra | dynamic| []byte | 65+32 byte (clique) |
// | BaseFee | dynamic| *big.Int | 64 bits |
mainnetTd := new(big.Int)
mainnetTd.SetString("5ad3c2c71bbff854908", 16)
if rlpData, err := rlp.EncodeToBytes(Header{
ParentHash: want,
Difficulty: mainnetTd,
Number: new(big.Int).SetUint64(math.MaxUint64),
Extra: make([]byte, 65+32),
BaseFee: new(big.Int).SetUint64(math.MaxUint64),
}); err != nil {
t.Fatal(err)
} else {
if have := HeaderParentHashFromRLP(rlpData); have != want {
t.Fatalf("have %x, want %x", have, want)
}
}
// Also test a very very large header.
{
// The rlp-encoding of the header below causes a _total_ length of 65540,
// which is the first to blow the fast-path.
h := Header{
ParentHash: want,
Extra: make([]byte, 65041),
}
if rlpData, err := rlp.EncodeToBytes(h); err != nil {
t.Fatal(err)
} else {
if have := HeaderParentHashFromRLP(rlpData); have != want {
t.Fatalf("have %x, want %x", have, want)
}
}
}
{
// Test some invalid erroneous stuff
for i, rlpData := range [][]byte{
nil,
common.FromHex("0x"),
common.FromHex("0x01"),
common.FromHex("0x3031323334"),
} {
if have, want := HeaderParentHashFromRLP(rlpData), (common.Hash{}); have != want {
t.Fatalf("invalid %d: have %x, want %x", i, have, want)
}
}
}
}

View File

@ -25,8 +25,8 @@ import (
type DynamicFeeTx struct {
	ChainID   *big.Int
	Nonce     uint64
-	GasTipCap *big.Int
-	GasFeeCap *big.Int
+	GasTipCap *big.Int // a.k.a. maxPriorityFeePerGas
+	GasFeeCap *big.Int // a.k.a. maxFeePerGas
	Gas       uint64
	To        *common.Address `rlp:"nil"` // nil means contract creation
	Value     *big.Int
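The two added a.k.a. comments map these fields onto EIP-1559 terminology: the block producer's effective tip for a transaction is min(maxPriorityFeePerGas, maxFeePerGas − baseFee). A minimal standalone sketch of that fee math (illustrative only; the helper name `effectiveTip` is invented here, while geth exposes this via the `EffectiveGasTip*` methods seen in the tx_pool hunk above):

```go
package main

import (
	"fmt"
	"math/big"
)

// effectiveTip computes min(gasTipCap, gasFeeCap-baseFee): the part of the
// gas price that actually goes to the block producer under EIP-1559.
func effectiveTip(gasTipCap, gasFeeCap, baseFee *big.Int) *big.Int {
	tip := new(big.Int).Sub(gasFeeCap, baseFee)
	if tip.Cmp(gasTipCap) > 0 {
		tip.Set(gasTipCap)
	}
	return tip
}

func main() {
	baseFee := big.NewInt(100) // all values in wei per gas
	// Fee cap barely above the base fee: only 1 wei/gas is left for the tip,
	// even though the tip cap would allow 2.
	fmt.Println(effectiveTip(big.NewInt(2), big.NewInt(101), baseFee)) // 1
}
```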

View File

@ -17,12 +17,12 @@
package vm

const (
-	set2BitsMask = uint16(0b1100_0000_0000_0000)
-	set3BitsMask = uint16(0b1110_0000_0000_0000)
-	set4BitsMask = uint16(0b1111_0000_0000_0000)
-	set5BitsMask = uint16(0b1111_1000_0000_0000)
-	set6BitsMask = uint16(0b1111_1100_0000_0000)
-	set7BitsMask = uint16(0b1111_1110_0000_0000)
+	set2BitsMask = uint16(0b11)
+	set3BitsMask = uint16(0b111)
+	set4BitsMask = uint16(0b1111)
+	set5BitsMask = uint16(0b1_1111)
+	set6BitsMask = uint16(0b11_1111)
+	set7BitsMask = uint16(0b111_1111)
)
// bitvec is a bit vector which maps bytes in a program.
@ -30,32 +30,26 @@ const (
// it's data (i.e. argument of PUSHxx).
type bitvec []byte

-var lookup = [8]byte{
-	0x80, 0x40, 0x20, 0x10, 0x8, 0x4, 0x2, 0x1,
-}
-
func (bits bitvec) set1(pos uint64) {
-	bits[pos/8] |= lookup[pos%8]
+	bits[pos/8] |= 1 << (pos % 8)
}

func (bits bitvec) setN(flag uint16, pos uint64) {
-	a := flag >> (pos % 8)
-	bits[pos/8] |= byte(a >> 8)
-	if b := byte(a); b != 0 {
-		// If the bit-setting affects the neighbouring byte, we can assign - no need to OR it,
-		// since it's the first write to that byte
+	a := flag << (pos % 8)
+	bits[pos/8] |= byte(a)
+	if b := byte(a >> 8); b != 0 {
		bits[pos/8+1] = b
	}
}

func (bits bitvec) set8(pos uint64) {
-	a := byte(0xFF >> (pos % 8))
+	a := byte(0xFF << (pos % 8))
	bits[pos/8] |= a
	bits[pos/8+1] = ^a
}

func (bits bitvec) set16(pos uint64) {
-	a := byte(0xFF >> (pos % 8))
+	a := byte(0xFF << (pos % 8))
	bits[pos/8] |= a
	bits[pos/8+1] = 0xFF
	bits[pos/8+2] = ^a
@ -63,7 +57,7 @@ func (bits bitvec) set16(pos uint64) {
// codeSegment checks if the position is in a code segment.
func (bits *bitvec) codeSegment(pos uint64) bool {
-	return ((*bits)[pos/8] & (0x80 >> (pos % 8))) == 0
+	return (((*bits)[pos/8] >> (pos % 8)) & 1) == 0
}

// codeBitmap collects data locations in code.
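The bitvec change above flips the jumpdest-analysis bitmap from MSB-first to LSB-first within each byte, so a single-bit set becomes a plain `1 << (pos % 8)` shift. A self-contained toy version of the scheme (only `set1` is shown; the real code also has the multi-bit setters `setN`/`set8`/`set16` for wide PUSH immediates):

```go
package main

import "fmt"

// bitvec marks which positions in EVM code are PUSH data rather than opcodes:
// one bit per code byte, least-significant bit first within each byte.
type bitvec []byte

func (b bitvec) set1(pos uint64) { b[pos/8] |= 1 << (pos % 8) }

// codeSegment reports whether pos holds an opcode (true) or PUSH data (false).
func (b bitvec) codeSegment(pos uint64) bool {
	return (b[pos/8]>>(pos%8))&1 == 0
}

func main() {
	// PUSH1 0x01, ADD: byte 1 is the PUSH1 immediate, bytes 0 and 2 are opcodes.
	bits := make(bitvec, 1)
	bits.set1(1)
	fmt.Println(bits.codeSegment(0), bits.codeSegment(1), bits.codeSegment(2)) // true false true
}
```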

View File

@ -17,6 +17,7 @@
package vm

import (
+	"math/bits"
	"testing"

	"github.com/ethereum/go-ethereum/crypto"
@ -28,24 +29,27 @@ func TestJumpDestAnalysis(t *testing.T) {
		exp   byte
		which int
	}{
-		{[]byte{byte(PUSH1), 0x01, 0x01, 0x01}, 0x40, 0},
-		{[]byte{byte(PUSH1), byte(PUSH1), byte(PUSH1), byte(PUSH1)}, 0x50, 0},
-		{[]byte{byte(PUSH8), byte(PUSH8), byte(PUSH8), byte(PUSH8), byte(PUSH8), byte(PUSH8), byte(PUSH8), byte(PUSH8), 0x01, 0x01, 0x01}, 0x7F, 0},
-		{[]byte{byte(PUSH8), 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01}, 0x80, 1},
-		{[]byte{0x01, 0x01, 0x01, 0x01, 0x01, byte(PUSH2), byte(PUSH2), byte(PUSH2), 0x01, 0x01, 0x01}, 0x03, 0},
-		{[]byte{0x01, 0x01, 0x01, 0x01, 0x01, byte(PUSH2), 0x01, 0x01, 0x01, 0x01, 0x01}, 0x00, 1},
-		{[]byte{byte(PUSH3), 0x01, 0x01, 0x01, byte(PUSH1), 0x01, 0x01, 0x01, 0x01, 0x01, 0x01}, 0x74, 0},
-		{[]byte{byte(PUSH3), 0x01, 0x01, 0x01, byte(PUSH1), 0x01, 0x01, 0x01, 0x01, 0x01, 0x01}, 0x00, 1},
-		{[]byte{0x01, byte(PUSH8), 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01}, 0x3F, 0},
-		{[]byte{0x01, byte(PUSH8), 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01}, 0xC0, 1},
-		{[]byte{byte(PUSH16), 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01}, 0x7F, 0},
-		{[]byte{byte(PUSH16), 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01}, 0xFF, 1},
-		{[]byte{byte(PUSH16), 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01}, 0x80, 2},
-		{[]byte{byte(PUSH8), 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, byte(PUSH1), 0x01}, 0x7f, 0},
-		{[]byte{byte(PUSH8), 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, byte(PUSH1), 0x01}, 0xA0, 1},
-		{[]byte{byte(PUSH32)}, 0x7F, 0},
-		{[]byte{byte(PUSH32)}, 0xFF, 1},
-		{[]byte{byte(PUSH32)}, 0xFF, 2},
+		{[]byte{byte(PUSH1), 0x01, 0x01, 0x01}, 0b0000_0010, 0},
+		{[]byte{byte(PUSH1), byte(PUSH1), byte(PUSH1), byte(PUSH1)}, 0b0000_1010, 0},
+		{[]byte{0x00, byte(PUSH1), 0x00, byte(PUSH1), 0x00, byte(PUSH1), 0x00, byte(PUSH1)}, 0b0101_0100, 0},
+		{[]byte{byte(PUSH8), byte(PUSH8), byte(PUSH8), byte(PUSH8), byte(PUSH8), byte(PUSH8), byte(PUSH8), byte(PUSH8), 0x01, 0x01, 0x01}, bits.Reverse8(0x7F), 0},
+		{[]byte{byte(PUSH8), 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01}, 0b0000_0001, 1},
+		{[]byte{0x01, 0x01, 0x01, 0x01, 0x01, byte(PUSH2), byte(PUSH2), byte(PUSH2), 0x01, 0x01, 0x01}, 0b1100_0000, 0},
+		{[]byte{0x01, 0x01, 0x01, 0x01, 0x01, byte(PUSH2), 0x01, 0x01, 0x01, 0x01, 0x01}, 0b0000_0000, 1},
+		{[]byte{byte(PUSH3), 0x01, 0x01, 0x01, byte(PUSH1), 0x01, 0x01, 0x01, 0x01, 0x01, 0x01}, 0b0010_1110, 0},
+		{[]byte{byte(PUSH3), 0x01, 0x01, 0x01, byte(PUSH1), 0x01, 0x01, 0x01, 0x01, 0x01, 0x01}, 0b0000_0000, 1},
+		{[]byte{0x01, byte(PUSH8), 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01}, 0b1111_1100, 0},
+		{[]byte{0x01, byte(PUSH8), 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01}, 0b0000_0011, 1},
+		{[]byte{byte(PUSH16), 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01}, 0b1111_1110, 0},
+		{[]byte{byte(PUSH16), 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01}, 0b1111_1111, 1},
+		{[]byte{byte(PUSH16), 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01}, 0b0000_0001, 2},
+		{[]byte{byte(PUSH8), 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, byte(PUSH1), 0x01}, 0b1111_1110, 0},
+		{[]byte{byte(PUSH8), 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, byte(PUSH1), 0x01}, 0b0000_0101, 1},
+		{[]byte{byte(PUSH32)}, 0b1111_1110, 0},
+		{[]byte{byte(PUSH32)}, 0b1111_1111, 1},
+		{[]byte{byte(PUSH32)}, 0b1111_1111, 2},
+		{[]byte{byte(PUSH32)}, 0b1111_1111, 3},
+		{[]byte{byte(PUSH32)}, 0b0000_0001, 4},
	}
	for i, test := range tests {
		ret := codeBitmap(test.code)

View File

@ -143,16 +143,11 @@ func (c *Contract) AsDelegate() *Contract {
// GetOp returns the n'th element in the contract's byte array
func (c *Contract) GetOp(n uint64) OpCode {
-	return OpCode(c.GetByte(n))
-}
-
-// GetByte returns the n'th byte in the contract's byte array
-func (c *Contract) GetByte(n uint64) byte {
	if n < uint64(len(c.Code)) {
-		return c.Code[n]
+		return OpCode(c.Code[n])
	}
-	return 0
+	return STOP
}

// Caller returns the caller of the contract.

View File

@ -36,6 +36,10 @@ var (
	ErrGasUintOverflow   = errors.New("gas uint64 overflow")
	ErrInvalidCode       = errors.New("invalid code: must not begin with 0xef")
	ErrNonceUintOverflow = errors.New("nonce uint64 overflow")
+
+	// errStopToken is an internal token indicating interpreter loop termination,
+	// never returned to outside callers.
+	errStopToken = errors.New("stop token")
)

// ErrStackUnderflow wraps an evm error when the items on the stack less

View File

@ -165,9 +165,6 @@ func (evm *EVM) Interpreter() *EVMInterpreter {
// the necessary steps to create accounts and reverses the state in case of an
// execution error or failed value transfer.
func (evm *EVM) Call(caller ContractRef, addr common.Address, input []byte, gas uint64, value *big.Int) (ret []byte, leftOverGas uint64, err error) {
-	if evm.Config.NoRecursion && evm.depth > 0 {
-		return nil, gas, nil
-	}
	// Fail if we're trying to execute above the call depth limit
	if evm.depth > int(params.CallCreateDepth) {
		return nil, gas, ErrDepth
@ -254,9 +251,6 @@ func (evm *EVM) Call(caller ContractRef, addr common.Address, input []byte, gas
// CallCode differs from Call in the sense that it executes the given address'
// code with the caller as context.
func (evm *EVM) CallCode(caller ContractRef, addr common.Address, input []byte, gas uint64, value *big.Int) (ret []byte, leftOverGas uint64, err error) {
-	if evm.Config.NoRecursion && evm.depth > 0 {
-		return nil, gas, nil
-	}
	// Fail if we're trying to execute above the call depth limit
	if evm.depth > int(params.CallCreateDepth) {
		return nil, gas, ErrDepth
@ -305,9 +299,6 @@ func (evm *EVM) CallCode(caller ContractRef, addr common.Address, input []byte,
// DelegateCall differs from CallCode in the sense that it executes the given address'
// code with the caller as context and the caller is set to the caller of the caller.
func (evm *EVM) DelegateCall(caller ContractRef, addr common.Address, input []byte, gas uint64) (ret []byte, leftOverGas uint64, err error) {
-	if evm.Config.NoRecursion && evm.depth > 0 {
-		return nil, gas, nil
-	}
	// Fail if we're trying to execute above the call depth limit
	if evm.depth > int(params.CallCreateDepth) {
		return nil, gas, ErrDepth
@ -347,9 +338,6 @@ func (evm *EVM) DelegateCall(caller ContractRef, addr common.Address, input []by
// Opcodes that attempt to perform such modifications will result in exceptions
// instead of performing the modifications.
func (evm *EVM) StaticCall(caller ContractRef, addr common.Address, input []byte, gas uint64) (ret []byte, leftOverGas uint64, err error) {
-	if evm.Config.NoRecursion && evm.depth > 0 {
-		return nil, gas, nil
-	}
	// Fail if we're trying to execute above the call depth limit
	if evm.depth > int(params.CallCreateDepth) {
		return nil, gas, ErrDepth
@ -451,10 +439,6 @@ func (evm *EVM) create(caller ContractRef, codeAndHash *codeAndHash, gas uint64,
	contract := NewContract(caller, AccountRef(address), value, gas)
	contract.SetCodeOptionalHash(&address, codeAndHash)

-	if evm.Config.NoRecursion && evm.depth > 0 {
-		return nil, address, gas, nil
-	}
-
	if evm.Config.Debug {
		if evm.depth == 0 {
			evm.Config.Tracer.CaptureStart(evm, caller.Address(), address, true, codeAndHash.code, gas, value)
@ -518,7 +502,7 @@ func (evm *EVM) Create(caller ContractRef, code []byte, gas uint64, value *big.I
// Create2 creates a new contract using code as deployment code.
//
-// The different between Create2 with Create is Create2 uses sha3(0xff ++ msg.sender ++ salt ++ sha3(init_code))[12:]
+// The difference between Create2 and Create is that Create2 uses keccak256(0xff ++ msg.sender ++ salt ++ keccak256(init_code))[12:]
// instead of the usual sender-and-nonce-hash as the address where the contract is initialized at.
func (evm *EVM) Create2(caller ContractRef, code []byte, gas uint64, endowment *big.Int, salt *uint256.Int) (ret []byte, contractAddr common.Address, leftOverGas uint64, err error) {
	codeAndHash := &codeAndHash{code: code}
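The corrected comment is the EIP-1014 formula. A standalone sketch of the address derivation it describes (geth's own helper for this is `crypto.CreateAddress2`; the function below is a hand-rolled illustration using the x/crypto Keccak-256 implementation):

```go
package main

import (
	"fmt"

	"golang.org/x/crypto/sha3"
)

// create2Address computes keccak256(0xff ++ sender ++ salt ++ keccak256(initCode))[12:],
// the deployment address used by the CREATE2 opcode.
func create2Address(sender [20]byte, salt [32]byte, initCode []byte) [20]byte {
	h := sha3.NewLegacyKeccak256()
	h.Write(initCode)
	initCodeHash := h.Sum(nil)

	h = sha3.NewLegacyKeccak256()
	h.Write([]byte{0xff})
	h.Write(sender[:])
	h.Write(salt[:])
	h.Write(initCodeHash)

	var addr [20]byte
	copy(addr[:], h.Sum(nil)[12:]) // keep the low 20 bytes of the 32-byte hash
	return addr
}

func main() {
	var sender [20]byte
	var salt [32]byte
	fmt.Printf("0x%x\n", create2Address(sender, salt, []byte{0x00}))
}
```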

View File

@ -247,7 +247,7 @@ func makeGasLog(n uint64) gasFunc {
	}
}

-func gasSha3(evm *EVM, contract *Contract, stack *Stack, mem *Memory, memorySize uint64) (uint64, error) {
+func gasKeccak256(evm *EVM, contract *Contract, stack *Stack, mem *Memory, memorySize uint64) (uint64, error) {
	gas, err := memoryGasCost(mem, memorySize)
	if err != nil {
		return 0, err
@ -256,7 +256,7 @@ func gasSha3(evm *EVM, contract *Contract, stack *Stack, mem *Memory, memorySize
	if overflow {
		return 0, ErrGasUintOverflow
	}
-	if wordGas, overflow = math.SafeMul(toWordSize(wordGas), params.Sha3WordGas); overflow {
+	if wordGas, overflow = math.SafeMul(toWordSize(wordGas), params.Keccak256WordGas); overflow {
		return 0, ErrGasUintOverflow
	}
	if gas, overflow = math.SafeAdd(gas, wordGas); overflow {
@ -290,7 +290,7 @@ func gasCreate2(evm *EVM, contract *Contract, stack *Stack, mem *Memory, memoryS
	if overflow {
		return 0, ErrGasUintOverflow
	}
-	if wordGas, overflow = math.SafeMul(toWordSize(wordGas), params.Sha3WordGas); overflow {
+	if wordGas, overflow = math.SafeMul(toWordSize(wordGas), params.Keccak256WordGas); overflow {
		return 0, ErrGasUintOverflow
	}
	if gas, overflow = math.SafeAdd(gas, wordGas); overflow {

View File

@ -17,6 +17,8 @@
package vm

import (
+	"sync/atomic"
+
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/params"
@ -231,7 +233,7 @@ func opSAR(pc *uint64, interpreter *EVMInterpreter, scope *ScopeContext) ([]byte
	return nil, nil
}

-func opSha3(pc *uint64, interpreter *EVMInterpreter, scope *ScopeContext) ([]byte, error) {
+func opKeccak256(pc *uint64, interpreter *EVMInterpreter, scope *ScopeContext) ([]byte, error) {
	offset, size := scope.Stack.pop(), scope.Stack.peek()
	data := scope.Memory.GetPtr(int64(offset.Uint64()), int64(size.Uint64()))
@ -514,6 +516,9 @@ func opSload(pc *uint64, interpreter *EVMInterpreter, scope *ScopeContext) ([]by
}

func opSstore(pc *uint64, interpreter *EVMInterpreter, scope *ScopeContext) ([]byte, error) {
+	if interpreter.readOnly {
+		return nil, ErrWriteProtection
+	}
	loc := scope.Stack.pop()
	val := scope.Stack.pop()
	interpreter.evm.StateDB.SetState(scope.Contract.Address(),
@ -522,23 +527,27 @@ func opSstore(pc *uint64, interpreter *EVMInterpreter, scope *ScopeContext) ([]b
}

func opJump(pc *uint64, interpreter *EVMInterpreter, scope *ScopeContext) ([]byte, error) {
+	if atomic.LoadInt32(&interpreter.evm.abort) != 0 {
+		return nil, errStopToken
+	}
	pos := scope.Stack.pop()
	if !scope.Contract.validJumpdest(&pos) {
		return nil, ErrInvalidJump
	}
-	*pc = pos.Uint64()
+	*pc = pos.Uint64() - 1 // pc will be increased by the interpreter loop
	return nil, nil
}

func opJumpi(pc *uint64, interpreter *EVMInterpreter, scope *ScopeContext) ([]byte, error) {
+	if atomic.LoadInt32(&interpreter.evm.abort) != 0 {
+		return nil, errStopToken
+	}
	pos, cond := scope.Stack.pop(), scope.Stack.pop()
	if !cond.IsZero() {
		if !scope.Contract.validJumpdest(&pos) {
			return nil, ErrInvalidJump
		}
-		*pc = pos.Uint64()
-	} else {
-		*pc++
+		*pc = pos.Uint64() - 1 // pc will be increased by the interpreter loop
	}
	return nil, nil
}
@ -563,6 +572,9 @@ func opGas(pc *uint64, interpreter *EVMInterpreter, scope *ScopeContext) ([]byte
}

func opCreate(pc *uint64, interpreter *EVMInterpreter, scope *ScopeContext) ([]byte, error) {
+	if interpreter.readOnly {
+		return nil, ErrWriteProtection
+	}
	var (
		value        = scope.Stack.pop()
		offset, size = scope.Stack.pop(), scope.Stack.pop()
@ -598,12 +610,17 @@ func opCreate(pc *uint64, interpreter *EVMInterpreter, scope *ScopeContext) ([]b
	scope.Contract.Gas += returnGas

	if suberr == ErrExecutionReverted {
+		interpreter.returnData = res // set REVERT data to return data buffer
		return res, nil
	}
+	interpreter.returnData = nil // clear dirty return data buffer
	return nil, nil
}

func opCreate2(pc *uint64, interpreter *EVMInterpreter, scope *ScopeContext) ([]byte, error) {
+	if interpreter.readOnly {
+		return nil, ErrWriteProtection
+	}
	var (
		endowment    = scope.Stack.pop()
		offset, size = scope.Stack.pop(), scope.Stack.pop()
@ -634,8 +651,10 @@ func opCreate2(pc *uint64, interpreter *EVMInterpreter, scope *ScopeContext) ([]
	scope.Contract.Gas += returnGas

	if suberr == ErrExecutionReverted {
+		interpreter.returnData = res // set REVERT data to return data buffer
		return res, nil
	}
+	interpreter.returnData = nil // clear dirty return data buffer
	return nil, nil
}
@ -651,6 +670,9 @@ func opCall(pc *uint64, interpreter *EVMInterpreter, scope *ScopeContext) ([]byt
	// Get the arguments from the memory.
	args := scope.Memory.GetPtr(int64(inOffset.Uint64()), int64(inSize.Uint64()))

+	if interpreter.readOnly && !value.IsZero() {
+		return nil, ErrWriteProtection
+	}
	var bigVal = big0
	//TODO: use uint256.Int instead of converting with toBig()
	// By using big0 here, we save an alloc for the most common case (non-ether-transferring contract calls),

@ -674,6 +696,7 @@ func opCall(pc *uint64, interpreter *EVMInterpreter, scope *ScopeContext) ([]byt
	}
	scope.Contract.Gas += returnGas

+	interpreter.returnData = ret
	return ret, nil
}
@ -709,6 +732,7 @@ func opCallCode(pc *uint64, interpreter *EVMInterpreter, scope *ScopeContext) ([
	}
	scope.Contract.Gas += returnGas

+	interpreter.returnData = ret
	return ret, nil
}

@ -737,6 +761,7 @@ func opDelegateCall(pc *uint64, interpreter *EVMInterpreter, scope *ScopeContext
	}
	scope.Contract.Gas += returnGas

+	interpreter.returnData = ret
	return ret, nil
}

@ -765,6 +790,7 @@ func opStaticCall(pc *uint64, interpreter *EVMInterpreter, scope *ScopeContext)
	}
	scope.Contract.Gas += returnGas

+	interpreter.returnData = ret
	return ret, nil
}
@ -772,21 +798,29 @@ func opReturn(pc *uint64, interpreter *EVMInterpreter, scope *ScopeContext) ([]b
	offset, size := scope.Stack.pop(), scope.Stack.pop()
	ret := scope.Memory.GetPtr(int64(offset.Uint64()), int64(size.Uint64()))

-	return ret, nil
+	return ret, errStopToken
}

func opRevert(pc *uint64, interpreter *EVMInterpreter, scope *ScopeContext) ([]byte, error) {
	offset, size := scope.Stack.pop(), scope.Stack.pop()
	ret := scope.Memory.GetPtr(int64(offset.Uint64()), int64(size.Uint64()))

-	return ret, nil
+	interpreter.returnData = ret
+	return ret, ErrExecutionReverted
+}
+
+func opUndefined(pc *uint64, interpreter *EVMInterpreter, scope *ScopeContext) ([]byte, error) {
+	return nil, &ErrInvalidOpCode{opcode: OpCode(scope.Contract.Code[*pc])}
}

func opStop(pc *uint64, interpreter *EVMInterpreter, scope *ScopeContext) ([]byte, error) {
-	return nil, nil
+	return nil, errStopToken
}

-func opSuicide(pc *uint64, interpreter *EVMInterpreter, scope *ScopeContext) ([]byte, error) {
+func opSelfdestruct(pc *uint64, interpreter *EVMInterpreter, scope *ScopeContext) ([]byte, error) {
+	if interpreter.readOnly {
+		return nil, ErrWriteProtection
+	}
	beneficiary := scope.Stack.pop()
	balance := interpreter.evm.StateDB.GetBalance(scope.Contract.Address())
	interpreter.evm.StateDB.AddBalance(beneficiary.Bytes20(), balance)

@ -795,7 +829,7 @@ func opSuicide(pc *uint64, interpreter *EVMInterpreter, scope *ScopeContext) ([]
		interpreter.cfg.Tracer.CaptureEnter(SELFDESTRUCT, scope.Contract.Address(), beneficiary.Bytes20(), []byte{}, 0, balance)
		interpreter.cfg.Tracer.CaptureExit([]byte{}, 0, nil)
	}
-	return nil, nil
+	return nil, errStopToken
}

// following functions are used by the instruction jump table

@ -803,6 +837,9 @@ func opSuicide(pc *uint64, interpreter *EVMInterpreter, scope *ScopeContext) ([]
// make log instruction function
func makeLog(size int) executionFunc {
	return func(pc *uint64, interpreter *EVMInterpreter, scope *ScopeContext) ([]byte, error) {
+		if interpreter.readOnly {
+			return nil, ErrWriteProtection
+		}
		topics := make([]common.Hash, size)
		stack := scope.Stack
		mStart, mSize := stack.pop(), stack.pop()

View File

@ -525,12 +525,14 @@ func TestOpMstore(t *testing.T) {
	mem.Resize(64)
	pc := uint64(0)
	v := "abcdef00000000000000abba000000000deaf000000c0de00100000000133700"
-	stack.pushN(*new(uint256.Int).SetBytes(common.Hex2Bytes(v)), *new(uint256.Int))
+	stack.push(new(uint256.Int).SetBytes(common.Hex2Bytes(v)))
+	stack.push(new(uint256.Int))
	opMstore(&pc, evmInterpreter, &ScopeContext{mem, stack, nil})
	if got := common.Bytes2Hex(mem.GetCopy(0, 32)); got != v {
		t.Fatalf("Mstore fail, got %v, expected %v", got, v)
	}
-	stack.pushN(*new(uint256.Int).SetUint64(0x1), *new(uint256.Int))
+	stack.push(new(uint256.Int).SetUint64(0x1))
+	stack.push(new(uint256.Int))
	opMstore(&pc, evmInterpreter, &ScopeContext{mem, stack, nil})
	if common.Bytes2Hex(mem.GetCopy(0, 32)) != "0000000000000000000000000000000000000000000000000000000000000001" {
		t.Fatalf("Mstore failed to overwrite previous value")
@ -553,12 +555,13 @@ func BenchmarkOpMstore(bench *testing.B) {
	bench.ResetTimer()
	for i := 0; i < bench.N; i++ {
-		stack.pushN(*value, *memStart)
+		stack.push(value)
+		stack.push(memStart)
		opMstore(&pc, evmInterpreter, &ScopeContext{mem, stack, nil})
	}
}

-func BenchmarkOpSHA3(bench *testing.B) {
+func BenchmarkOpKeccak256(bench *testing.B) {
	var (
		env   = NewEVM(BlockContext{}, TxContext{}, nil, params.TestChainConfig, Config{})
		stack = newstack()

@ -572,8 +575,9 @@ func BenchmarkOpSHA3(bench *testing.B) {
	bench.ResetTimer()
	for i := 0; i < bench.N; i++ {
-		stack.pushN(*uint256.NewInt(32), *start)
-		opSha3(&pc, evmInterpreter, &ScopeContext{mem, stack, nil})
+		stack.push(uint256.NewInt(32))
+		stack.push(start)
+		opKeccak256(&pc, evmInterpreter, &ScopeContext{mem, stack, nil})
	}
}

View File

@ -18,7 +18,6 @@ package vm

import (
	"hash"
-	"sync/atomic"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/common/math"
@ -29,11 +28,10 @@ import (
type Config struct {
	Debug  bool      // Enables debugging
	Tracer EVMLogger // Opcode logger
-	NoRecursion             bool // Disables call, callcode, delegate call and create
	NoBaseFee               bool // Forces the EIP-1559 baseFee to 0 (needed for 0 price calls)
	EnablePreimageRecording bool // Enables recording of SHA3/keccak preimages

-	JumpTable [256]*operation // EVM instruction table, automatically populated if unset
+	JumpTable *JumpTable // EVM instruction table, automatically populated if unset

	ExtraEips []int // Additional EIPS that are to be enabled
}
@ -68,39 +66,37 @@ type EVMInterpreter struct {
// NewEVMInterpreter returns a new instance of the Interpreter.
func NewEVMInterpreter(evm *EVM, cfg Config) *EVMInterpreter {
-	// We use the STOP instruction whether to see
-	// the jump table was initialised. If it was not
-	// we'll set the default jump table.
-	if cfg.JumpTable[STOP] == nil {
-		var jt JumpTable
+	// If jump table was not initialised we set the default one.
+	if cfg.JumpTable == nil {
		switch {
		case evm.chainRules.IsLondon:
-			jt = londonInstructionSet
+			cfg.JumpTable = &londonInstructionSet
		case evm.chainRules.IsBerlin:
-			jt = berlinInstructionSet
+			cfg.JumpTable = &berlinInstructionSet
		case evm.chainRules.IsIstanbul:
-			jt = istanbulInstructionSet
+			cfg.JumpTable = &istanbulInstructionSet
		case evm.chainRules.IsConstantinople:
-			jt = constantinopleInstructionSet
+			cfg.JumpTable = &constantinopleInstructionSet
		case evm.chainRules.IsByzantium:
-			jt = byzantiumInstructionSet
+			cfg.JumpTable = &byzantiumInstructionSet
		case evm.chainRules.IsEIP158:
-			jt = spuriousDragonInstructionSet
+			cfg.JumpTable = &spuriousDragonInstructionSet
		case evm.chainRules.IsEIP150:
-			jt = tangerineWhistleInstructionSet
+			cfg.JumpTable = &tangerineWhistleInstructionSet
		case evm.chainRules.IsHomestead:
-			jt = homesteadInstructionSet
+			cfg.JumpTable = &homesteadInstructionSet
		default:
-			jt = frontierInstructionSet
+			cfg.JumpTable = &frontierInstructionSet
		}
		for i, eip := range cfg.ExtraEips {
-			if err := EnableEIP(eip, &jt); err != nil {
+			copy := *cfg.JumpTable
+			if err := EnableEIP(eip, &copy); err != nil {
				// Disable it, so caller can check if it's activated or not
				cfg.ExtraEips = append(cfg.ExtraEips[:i], cfg.ExtraEips[i+1:]...)
				log.Error("EIP activation failed", "eip", eip, "error", err)
			}
+			cfg.JumpTable = &copy
		}
-		cfg.JumpTable = jt
	}

	return &EVMInterpreter{
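Two things change here: the jump table moves behind a pointer, so a nil check replaces the old probe of the STOP slot, and the ExtraEips loop now copies the table before EnableEIP mutates it, because the presets (londonInstructionSet and friends) are package-level values shared by every interpreter instance. The hazard and the copy, in miniature (toy types, not the geth API; note that with pointer entries a careful patch also clones the one entry it edits):

```go
package main

import "fmt"

type op struct{ gas uint64 }
type table [4]*op

// preset plays the role of a shared package-level instruction set.
var preset = table{{1}, {1}, {1}, {1}}

// withPatchedGas returns a modified copy, leaving the preset untouched.
func withPatchedGas(t *table, idx int, gas uint64) *table {
	cp := *t          // copies the array of pointers...
	entry := *cp[idx] // ...and clones the one entry we are about to edit
	entry.gas = gas
	cp[idx] = &entry
	return &cp
}

func main() {
	patched := withPatchedGas(&preset, 2, 99)
	fmt.Println(preset[2].gas, patched[2].gas) // 1 99
}
```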
@ -180,47 +176,27 @@ func (in *EVMInterpreter) Run(contract *Contract, input []byte, readOnly bool) (
// explicit STOP, RETURN or SELFDESTRUCT is executed, an error occurred during
// the execution of one of the operations or until the done flag is set by the
// parent context.
-	steps := 0
	for {
-		steps++
-		if steps%1000 == 0 && atomic.LoadInt32(&in.evm.abort) != 0 {
-			break
-		}
		if in.cfg.Debug {
			// Capture pre-execution values for tracing.
			logged, pcCopy, gasCopy = false, pc, contract.Gas
		}
		// Get the operation from the jump table and validate the stack to ensure there are
		// enough stack items available to perform the operation.
		op = contract.GetOp(pc)
		operation := in.cfg.JumpTable[op]
-		if operation == nil {
-			return nil, &ErrInvalidOpCode{opcode: op}
-		}
+		cost = operation.constantGas // For tracing
		// Validate stack
		if sLen := stack.len(); sLen < operation.minStack {
			return nil, &ErrStackUnderflow{stackLen: sLen, required: operation.minStack}
		} else if sLen > operation.maxStack {
			return nil, &ErrStackOverflow{stackLen: sLen, limit: operation.maxStack}
		}
-		// If the operation is valid, enforce write restrictions
-		if in.readOnly && in.evm.chainRules.IsByzantium {
-			// If the interpreter is operating in readonly mode, make sure no
-			// state-modifying operation is performed. The 3rd stack item
-			// for a call operation is the value. Transferring value from one
-			// account to the others means the state is modified and should also
-			// return with an error.
-			if operation.writes || (op == CALL && stack.Back(2).Sign() != 0) {
-				return nil, ErrWriteProtection
-			}
-		}
-		// Static portion of gas
-		cost = operation.constantGas // For tracing
-		if !contract.UseGas(operation.constantGas) {
+		if !contract.UseGas(cost) {
			return nil, ErrOutOfGas
		}
+		if operation.dynamicGas != nil {
+			// All ops with a dynamic memory usage also has a dynamic gas cost.
			var memorySize uint64
			// calculate the new memory size and expand the memory to fit
			// the operation
@ -237,44 +213,33 @@ func (in *EVMInterpreter) Run(contract *Contract, input []byte, readOnly bool) (
					return nil, ErrGasUintOverflow
				}
			}
-		// Dynamic portion of gas
-		// consume the gas and return an error if not enough gas is available.
-		// cost is explicitly set so that the capture state defer method can get the proper cost
-		if operation.dynamicGas != nil {
+			// Consume the gas and return an error if not enough gas is available.
+			// cost is explicitly set so that the capture state defer method can get the proper cost
			var dynamicCost uint64
			dynamicCost, err = operation.dynamicGas(in.evm, contract, stack, mem, memorySize)
-			cost += dynamicCost // total cost, for debug tracing
+			cost += dynamicCost // for tracing
			if err != nil || !contract.UseGas(dynamicCost) {
				return nil, ErrOutOfGas
			}
-		}
-		if memorySize > 0 {
-			mem.Resize(memorySize)
+			if memorySize > 0 {
+				mem.Resize(memorySize)
+			}
		}
		if in.cfg.Debug {
			in.cfg.Tracer.CaptureState(pc, op, gasCopy, cost, callContext, in.returnData, in.evm.depth, err)
			logged = true
		}
		// execute the operation
		res, err = operation.execute(&pc, in, callContext)
-		// if the operation clears the return data (e.g. it has returning data)
-		// set the last return to the result of the operation.
-		if operation.returns {
-			in.returnData = res
-		}
-		switch {
-		case err != nil:
-			return nil, err
-		case operation.reverts:
-			return res, ErrExecutionReverted
-		case operation.halts:
-			return res, nil
-		case !operation.jumps:
-			pc++
+		if err != nil {
+			break
		}
+		pc++
	}
-	return nil, nil
+
+	if err == errStopToken {
+		err = nil // clear stop token error
+	}
+	return res, err
}
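The refactored loop replaces the old halts/reverts/jumps flags with a sentinel error: opcodes that legitimately end execution (STOP, RETURN, SELFDESTRUCT, and aborted jumps) return errStopToken, which Run swallows before returning. The control-flow pattern in isolation (a sketch, not geth code):

```go
package main

import (
	"errors"
	"fmt"
)

// errStop is an internal sentinel: it terminates the loop without
// signalling a failure, mirroring errStopToken above.
var errStop = errors.New("stop")

func step(pc int) error {
	if pc == 3 {
		return errStop // e.g. a STOP opcode was executed
	}
	return nil
}

func run() error {
	var err error
	for pc := 0; ; pc++ {
		if err = step(pc); err != nil {
			break
		}
	}
	if errors.Is(err, errStop) {
		err = nil // a normal halt is not an error for the caller
	}
	return err
}

func main() {
	fmt.Println(run()) // <nil>
}
```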

View File

@ -0,0 +1,77 @@
// Copyright 2021 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package vm
import (
"math/big"
"testing"
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/math"
"github.com/ethereum/go-ethereum/core/rawdb"
"github.com/ethereum/go-ethereum/core/state"
"github.com/ethereum/go-ethereum/params"
)
var loopInterruptTests = []string{
// infinite loop using JUMP: push(2) jumpdest dup1 jump
"60025b8056",
// infinite loop using JUMPI: push(1) push(4) jumpdest dup2 dup2 jumpi
"600160045b818157",
}
func TestLoopInterrupt(t *testing.T) {
address := common.BytesToAddress([]byte("contract"))
vmctx := BlockContext{
Transfer: func(StateDB, common.Address, common.Address, *big.Int) {},
}
for i, tt := range loopInterruptTests {
statedb, _ := state.New(common.Hash{}, state.NewDatabase(rawdb.NewMemoryDatabase()), nil)
statedb.CreateAccount(address)
statedb.SetCode(address, common.Hex2Bytes(tt))
statedb.Finalise(true)
evm := NewEVM(vmctx, TxContext{}, statedb, params.AllEthashProtocolChanges, Config{})
errChannel := make(chan error)
timeout := make(chan bool)
go func(evm *EVM) {
_, _, err := evm.Call(AccountRef(common.Address{}), address, nil, math.MaxUint64, new(big.Int))
errChannel <- err
}(evm)
go func() {
<-time.After(time.Second)
timeout <- true
}()
evm.Cancel()
select {
case <-timeout:
t.Errorf("test %d timed out", i)
case err := <-errChannel:
if err != nil {
t.Errorf("test %d failure: %v", i, err)
}
}
}
}

View File

@ -17,6 +17,8 @@
package vm

import (
+	"fmt"
+
	"github.com/ethereum/go-ethereum/params"
)

@ -40,12 +42,6 @@ type operation struct {

	// memorySize returns the memory size required for the operation
	memorySize memorySizeFunc
-
-	halts   bool // indicates whether the operation should halt further execution
-	jumps   bool // indicates whether the program counter should not increment
-	writes  bool // determines whether this a state modifying operation
-	reverts bool // determines whether the operation reverts state (implicitly halts)
-	returns bool // determines whether the operations sets the return data content
}
var (
@ -63,13 +59,31 @@ var (
// JumpTable contains the EVM opcodes supported at a given fork.
type JumpTable [256]*operation
func validate(jt JumpTable) JumpTable {
for i, op := range jt {
if op == nil {
panic(fmt.Sprintf("op 0x%x is not set", i))
}
// The interpreter has an assumption that if the memorySize function is
// set, then the dynamicGas function is also set. This is a somewhat
// arbitrary assumption, and can be removed if we need to -- but it
// allows us to avoid a condition check. As long as we have that assumption
// in there, this little sanity check prevents us from merging in a
// change which violates it.
if op.memorySize != nil && op.dynamicGas == nil {
panic(fmt.Sprintf("op %v has dynamic memory but not dynamic gas", OpCode(i).String()))
}
}
return jt
}
// newLondonInstructionSet returns the frontier, homestead, byzantium,
// constantinople, istanbul, petersburg, berlin and london instructions.
func newLondonInstructionSet() JumpTable {
	instructionSet := newBerlinInstructionSet()
	enable3529(&instructionSet) // EIP-3529: Reduction in refunds https://eips.ethereum.org/EIPS/eip-3529
	enable3198(&instructionSet) // Base fee opcode https://eips.ethereum.org/EIPS/eip-3198
-	return instructionSet
+	return validate(instructionSet)
}
// newBerlinInstructionSet returns the frontier, homestead, byzantium,

@ -77,7 +91,7 @@ func newLondonInstructionSet() JumpTable {
func newBerlinInstructionSet() JumpTable {
	instructionSet := newIstanbulInstructionSet()
	enable2929(&instructionSet) // Access lists for trie accesses https://eips.ethereum.org/EIPS/eip-2929
-	return instructionSet
+	return validate(instructionSet)
}

// newIstanbulInstructionSet returns the frontier, homestead, byzantium,
@ -89,7 +103,7 @@ func newIstanbulInstructionSet() JumpTable {
	enable1884(&instructionSet) // Reprice reader opcodes - https://eips.ethereum.org/EIPS/eip-1884
	enable2200(&instructionSet) // Net metered SSTORE - https://eips.ethereum.org/EIPS/eip-2200

-	return instructionSet
+	return validate(instructionSet)
}
// newConstantinopleInstructionSet returns the frontier, homestead,

@ -127,10 +141,8 @@ func newConstantinopleInstructionSet() JumpTable {
		minStack:   minStack(4, 1),
		maxStack:   maxStack(4, 1),
		memorySize: memoryCreate2,
-		writes:     true,
-		returns:    true,
	}
-	return instructionSet
+	return validate(instructionSet)
}
// newByzantiumInstructionSet returns the frontier, homestead and

@ -144,7 +156,6 @@ func newByzantiumInstructionSet() JumpTable {
		minStack:   minStack(6, 1),
		maxStack:   maxStack(6, 1),
		memorySize: memoryStaticCall,
-		returns:    true,
	}
	instructionSet[RETURNDATASIZE] = &operation{
		execute: opReturnDataSize,

@ -166,17 +177,15 @@ func newByzantiumInstructionSet() JumpTable {
		minStack:   minStack(2, 0),
		maxStack:   maxStack(2, 0),
		memorySize: memoryRevert,
-		reverts:    true,
-		returns:    true,
	}
-	return instructionSet
+	return validate(instructionSet)
}
// EIP 158 a.k.a Spurious Dragon
func newSpuriousDragonInstructionSet() JumpTable {
	instructionSet := newTangerineWhistleInstructionSet()
	instructionSet[EXP].dynamicGas = gasExpEIP158
-	return instructionSet
+	return validate(instructionSet)
}

@ -190,7 +199,7 @@ func newTangerineWhistleInstructionSet() JumpTable {
	instructionSet[CALL].constantGas = params.CallGasEIP150
	instructionSet[CALLCODE].constantGas = params.CallGasEIP150
	instructionSet[DELEGATECALL].constantGas = params.CallGasEIP150
-	return instructionSet
+	return validate(instructionSet)
}
// newHomesteadInstructionSet returns the frontier and homestead

@ -204,21 +213,19 @@ func newHomesteadInstructionSet() JumpTable {
		minStack:   minStack(6, 1),
		maxStack:   maxStack(6, 1),
		memorySize: memoryDelegateCall,
-		returns:    true,
	}
-	return instructionSet
+	return validate(instructionSet)
}
// newFrontierInstructionSet returns the frontier instructions
// that can be executed during the frontier phase.
func newFrontierInstructionSet() JumpTable {
-	return JumpTable{
+	tbl := JumpTable{
		STOP: {
			execute:     opStop,
			constantGas: 0,
			minStack:    minStack(0, 0),
			maxStack:    maxStack(0, 0),
-			halts:       true,
		},
		ADD: {
			execute: opAdd,
@ -352,13 +359,13 @@ func newFrontierInstructionSet() JumpTable {
			minStack: minStack(2, 1),
			maxStack: maxStack(2, 1),
		},
-		SHA3: {
-			execute:     opSha3,
-			constantGas: params.Sha3Gas,
-			dynamicGas:  gasSha3,
+		KECCAK256: {
+			execute:     opKeccak256,
+			constantGas: params.Keccak256Gas,
+			dynamicGas:  gasKeccak256,
			minStack:    minStack(2, 1),
			maxStack:    maxStack(2, 1),
-			memorySize:  memorySha3,
+			memorySize:  memoryKeccak256,
		},
		ADDRESS: {
			execute: opAddress,
@ -521,21 +528,18 @@ func newFrontierInstructionSet() JumpTable {
			dynamicGas: gasSStore,
			minStack:   minStack(2, 0),
			maxStack:   maxStack(2, 0),
-			writes:     true,
		},
		JUMP: {
			execute:     opJump,
			constantGas: GasMidStep,
			minStack:    minStack(1, 0),
			maxStack:    maxStack(1, 0),
-			jumps:       true,
		},
		JUMPI: {
			execute:     opJumpi,
			constantGas: GasSlowStep,
			minStack:    minStack(2, 0),
			maxStack:    maxStack(2, 0),
-			jumps:       true,
		},
		PC: {
			execute: opPc,
@ -951,7 +955,6 @@ func newFrontierInstructionSet() JumpTable {
			minStack:   minStack(2, 0),
			maxStack:   maxStack(2, 0),
			memorySize: memoryLog,
-			writes:     true,
		},
		LOG1: {
			execute: makeLog(1),

@ -959,7 +962,6 @@ func newFrontierInstructionSet() JumpTable {
			minStack:   minStack(3, 0),
			maxStack:   maxStack(3, 0),
			memorySize: memoryLog,
-			writes:     true,
		},
		LOG2: {
			execute: makeLog(2),

@ -967,7 +969,6 @@ func newFrontierInstructionSet() JumpTable {
			minStack:   minStack(4, 0),
			maxStack:   maxStack(4, 0),
			memorySize: memoryLog,
-			writes:     true,
		},
		LOG3: {
			execute: makeLog(3),

@ -975,7 +976,6 @@ func newFrontierInstructionSet() JumpTable {
			minStack:   minStack(5, 0),
			maxStack:   maxStack(5, 0),
			memorySize: memoryLog,
-			writes:     true,
		},
		LOG4: {
			execute: makeLog(4),

@ -983,7 +983,6 @@ func newFrontierInstructionSet() JumpTable {
			minStack:   minStack(6, 0),
			maxStack:   maxStack(6, 0),
			memorySize: memoryLog,
-			writes:     true,
		},
		CREATE: {
			execute: opCreate,

@ -992,8 +991,6 @@ func newFrontierInstructionSet() JumpTable {
			minStack:   minStack(3, 1),
			maxStack:   maxStack(3, 1),
			memorySize: memoryCreate,
-			writes:     true,
-			returns:    true,
		},
		CALL: {
			execute: opCall,

@ -1002,7 +999,6 @@ func newFrontierInstructionSet() JumpTable {
			minStack:   minStack(7, 1),
			maxStack:   maxStack(7, 1),
			memorySize: memoryCall,
-			returns:    true,
		},
		CALLCODE: {
			execute: opCallCode,

@ -1011,7 +1007,6 @@ func newFrontierInstructionSet() JumpTable {
			minStack:   minStack(7, 1),
			maxStack:   maxStack(7, 1),
			memorySize: memoryCall,
-			returns:    true,
		},
		RETURN: {
			execute: opReturn,

@ -1019,15 +1014,21 @@ func newFrontierInstructionSet() JumpTable {
			minStack:   minStack(2, 0),
			maxStack:   maxStack(2, 0),
			memorySize: memoryReturn,
-			halts:      true,
		},
		SELFDESTRUCT: {
-			execute:    opSuicide,
+			execute:    opSelfdestruct,
			dynamicGas: gasSelfdestruct,
			minStack:   minStack(1, 0),
			maxStack:   maxStack(1, 0),
-			halts:      true,
-			writes:     true,
		},
	}
+
+	// Fill all unassigned slots with opUndefined.
+	for i, entry := range tbl {
+		if entry == nil {
+			tbl[i] = &operation{execute: opUndefined, maxStack: maxStack(0, 0)}
+		}
+	}
+
+	return validate(tbl)
}
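Filling every unassigned slot with opUndefined is what lets the interpreter drop its per-step `operation == nil` branch: after construction, each of the 256 slots is callable, and invalid opcodes surface as an error returned by the operation itself. The same sentinel-fill idea reduced to a toy table (invented types, not the geth API):

```go
package main

import "fmt"

type handler func() error

func newTable() [8]handler {
	var tbl [8]handler
	tbl[0] = func() error { return nil } // the only "real" op in this toy
	// Backfill the rest so lookups never have to nil-check.
	for i := range tbl {
		if tbl[i] == nil {
			i := i // capture per-iteration value for the closure
			tbl[i] = func() error { return fmt.Errorf("invalid opcode 0x%x", i) }
		}
	}
	return tbl
}

func main() {
	tbl := newTable()
	fmt.Println(tbl[0]()) // <nil>
	fmt.Println(tbl[5]()) // invalid opcode 0x5
}
```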

View File

@ -1,4 +1,4 @@
-// Copyright 2015 The go-ethereum Authors
+// Copyright 2021 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
@ -17,87 +17,12 @@
package vm

import (
-	"encoding/hex"
-	"fmt"
-	"io"
	"math/big"
-	"strings"
	"time"

	"github.com/ethereum/go-ethereum/common"
-	"github.com/ethereum/go-ethereum/common/hexutil"
-	"github.com/ethereum/go-ethereum/common/math"
-	"github.com/ethereum/go-ethereum/core/types"
-	"github.com/ethereum/go-ethereum/params"
-	"github.com/holiman/uint256"
)
// Storage represents a contract's storage.
type Storage map[common.Hash]common.Hash
// Copy duplicates the current storage.
func (s Storage) Copy() Storage {
cpy := make(Storage)
for key, value := range s {
cpy[key] = value
}
return cpy
}
// LogConfig are the configuration options for structured logger the EVM
type LogConfig struct {
EnableMemory bool // enable memory capture
DisableStack bool // disable stack capture
DisableStorage bool // disable storage capture
EnableReturnData bool // enable return data capture
Debug bool // print output during capture end
Limit int // maximum length of output, but zero means unlimited
// Chain overrides, can be used to execute a trace using future fork rules
Overrides *params.ChainConfig `json:"overrides,omitempty"`
}
//go:generate gencodec -type StructLog -field-override structLogMarshaling -out gen_structlog.go
// StructLog is emitted to the EVM each cycle and lists information about the current internal state
// prior to the execution of the statement.
type StructLog struct {
Pc uint64 `json:"pc"`
Op OpCode `json:"op"`
Gas uint64 `json:"gas"`
GasCost uint64 `json:"gasCost"`
Memory []byte `json:"memory"`
MemorySize int `json:"memSize"`
Stack []uint256.Int `json:"stack"`
ReturnData []byte `json:"returnData"`
Storage map[common.Hash]common.Hash `json:"-"`
Depth int `json:"depth"`
RefundCounter uint64 `json:"refund"`
Err error `json:"-"`
}
// overrides for gencodec
type structLogMarshaling struct {
Gas math.HexOrDecimal64
GasCost math.HexOrDecimal64
Memory hexutil.Bytes
ReturnData hexutil.Bytes
OpName string `json:"opName"` // adds call to OpName() in MarshalJSON
ErrorString string `json:"error"` // adds call to ErrorString() in MarshalJSON
}
// OpName formats the operand name in a human-readable format.
func (s *StructLog) OpName() string {
return s.Op.String()
}
// ErrorString formats the log's error as a string.
func (s *StructLog) ErrorString() string {
if s.Err != nil {
return s.Err.Error()
}
return ""
}
// EVMLogger is used to collect execution traces from an EVM transaction
// execution. CaptureState is called for each step of the VM with the
// current VM state.
@ -111,250 +36,3 @@ type EVMLogger interface {
	CaptureFault(pc uint64, op OpCode, gas, cost uint64, scope *ScopeContext, depth int, err error)
	CaptureEnd(output []byte, gasUsed uint64, t time.Duration, err error)
}
// StructLogger is an EVM state logger and implements EVMLogger.
//
// StructLogger can capture state based on the given Log configuration and also keeps
// a track record of modified storage which is used in reporting snapshots of the
// contract their storage.
type StructLogger struct {
cfg LogConfig
env *EVM
storage map[common.Address]Storage
logs []StructLog
output []byte
err error
}
// NewStructLogger returns a new logger
func NewStructLogger(cfg *LogConfig) *StructLogger {
logger := &StructLogger{
storage: make(map[common.Address]Storage),
}
if cfg != nil {
logger.cfg = *cfg
}
return logger
}
// Reset clears the data held by the logger.
func (l *StructLogger) Reset() {
l.storage = make(map[common.Address]Storage)
l.output = make([]byte, 0)
l.logs = l.logs[:0]
l.err = nil
}
// CaptureStart implements the EVMLogger interface to initialize the tracing operation.
func (l *StructLogger) CaptureStart(env *EVM, from common.Address, to common.Address, create bool, input []byte, gas uint64, value *big.Int) {
l.env = env
}
// CaptureState logs a new structured log message and pushes it out to the environment
//
// CaptureState also tracks SLOAD/SSTORE ops to track storage change.
func (l *StructLogger) CaptureState(pc uint64, op OpCode, gas, cost uint64, scope *ScopeContext, rData []byte, depth int, err error) {
memory := scope.Memory
stack := scope.Stack
contract := scope.Contract
// check if already accumulated the specified number of logs
if l.cfg.Limit != 0 && l.cfg.Limit <= len(l.logs) {
return
}
// Copy a snapshot of the current memory state to a new buffer
var mem []byte
if l.cfg.EnableMemory {
mem = make([]byte, len(memory.Data()))
copy(mem, memory.Data())
}
// Copy a snapshot of the current stack state to a new buffer
var stck []uint256.Int
if !l.cfg.DisableStack {
stck = make([]uint256.Int, len(stack.Data()))
for i, item := range stack.Data() {
stck[i] = item
}
}
// Copy a snapshot of the current storage to a new container
var storage Storage
if !l.cfg.DisableStorage && (op == SLOAD || op == SSTORE) {
// initialise new changed values storage container for this contract
// if not present.
if l.storage[contract.Address()] == nil {
l.storage[contract.Address()] = make(Storage)
}
// capture SLOAD opcodes and record the read entry in the local storage
if op == SLOAD && stack.len() >= 1 {
var (
address = common.Hash(stack.data[stack.len()-1].Bytes32())
value = l.env.StateDB.GetState(contract.Address(), address)
)
l.storage[contract.Address()][address] = value
storage = l.storage[contract.Address()].Copy()
} else if op == SSTORE && stack.len() >= 2 {
// capture SSTORE opcodes and record the written entry in the local storage.
var (
value = common.Hash(stack.data[stack.len()-2].Bytes32())
address = common.Hash(stack.data[stack.len()-1].Bytes32())
)
l.storage[contract.Address()][address] = value
storage = l.storage[contract.Address()].Copy()
}
}
var rdata []byte
if l.cfg.EnableReturnData {
rdata = make([]byte, len(rData))
copy(rdata, rData)
}
// create a new snapshot of the EVM.
log := StructLog{pc, op, gas, cost, mem, memory.Len(), stck, rdata, storage, depth, l.env.StateDB.GetRefund(), err}
l.logs = append(l.logs, log)
}
// CaptureFault implements the EVMLogger interface to trace an execution fault
// while running an opcode.
func (l *StructLogger) CaptureFault(pc uint64, op OpCode, gas, cost uint64, scope *ScopeContext, depth int, err error) {
}
// CaptureEnd is called after the call finishes to finalize the tracing.
func (l *StructLogger) CaptureEnd(output []byte, gasUsed uint64, t time.Duration, err error) {
l.output = output
l.err = err
if l.cfg.Debug {
fmt.Printf("0x%x\n", output)
if err != nil {
fmt.Printf(" error: %v\n", err)
}
}
}
func (l *StructLogger) CaptureEnter(typ OpCode, from common.Address, to common.Address, input []byte, gas uint64, value *big.Int) {
}
func (l *StructLogger) CaptureExit(output []byte, gasUsed uint64, err error) {}
// StructLogs returns the captured log entries.
func (l *StructLogger) StructLogs() []StructLog { return l.logs }
// Error returns the VM error captured by the trace.
func (l *StructLogger) Error() error { return l.err }
// Output returns the VM return value captured by the trace.
func (l *StructLogger) Output() []byte { return l.output }
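
A minimal sketch, not part of this change, of how these pieces fit together: the struct logger is handed to the EVM through `vm.Config` and read back with `StructLogs`/`WriteTrace`. It assumes the logger is still exported from `core/vm` as in the listing above; the runtime_test.go hunks further down show the same wiring after the move to `eth/tracers/logger`.

```go
package main

import (
	"os"

	"github.com/ethereum/go-ethereum/core/vm"
	"github.com/ethereum/go-ethereum/core/vm/runtime"
)

func main() {
	tracer := vm.NewStructLogger(&vm.LogConfig{EnableMemory: true})
	// PUSH1 0x01, PUSH1 0x01, ADD
	code := []byte{0x60, 0x01, 0x60, 0x01, 0x01}
	runtime.Execute(code, nil, &runtime.Config{
		EVMConfig: vm.Config{
			Debug:  true,
			Tracer: tracer,
		},
	})
	// One StructLog per executed opcode, rendered via WriteTrace.
	vm.WriteTrace(os.Stdout, tracer.StructLogs())
}
```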
// WriteTrace writes a formatted trace to the given writer
func WriteTrace(writer io.Writer, logs []StructLog) {
for _, log := range logs {
fmt.Fprintf(writer, "%-16spc=%08d gas=%v cost=%v", log.Op, log.Pc, log.Gas, log.GasCost)
if log.Err != nil {
fmt.Fprintf(writer, " ERROR: %v", log.Err)
}
fmt.Fprintln(writer)
if len(log.Stack) > 0 {
fmt.Fprintln(writer, "Stack:")
for i := len(log.Stack) - 1; i >= 0; i-- {
fmt.Fprintf(writer, "%08d %s\n", len(log.Stack)-i-1, log.Stack[i].Hex())
}
}
if len(log.Memory) > 0 {
fmt.Fprintln(writer, "Memory:")
fmt.Fprint(writer, hex.Dump(log.Memory))
}
if len(log.Storage) > 0 {
fmt.Fprintln(writer, "Storage:")
for h, item := range log.Storage {
fmt.Fprintf(writer, "%x: %x\n", h, item)
}
}
if len(log.ReturnData) > 0 {
fmt.Fprintln(writer, "ReturnData:")
fmt.Fprint(writer, hex.Dump(log.ReturnData))
}
fmt.Fprintln(writer)
}
}
// WriteLogs writes vm logs in a readable format to the given writer
func WriteLogs(writer io.Writer, logs []*types.Log) {
for _, log := range logs {
fmt.Fprintf(writer, "LOG%d: %x bn=%d txi=%x\n", len(log.Topics), log.Address, log.BlockNumber, log.TxIndex)
for i, topic := range log.Topics {
fmt.Fprintf(writer, "%08d %x\n", i, topic)
}
fmt.Fprint(writer, hex.Dump(log.Data))
fmt.Fprintln(writer)
}
}
type mdLogger struct {
out io.Writer
cfg *LogConfig
env *EVM
}
// NewMarkdownLogger creates a logger that outputs information in a format adapted
// for human readability; the output is also a valid markdown table
func NewMarkdownLogger(cfg *LogConfig, writer io.Writer) *mdLogger {
l := &mdLogger{out: writer, cfg: cfg}
if l.cfg == nil {
l.cfg = &LogConfig{}
}
return l
}
func (t *mdLogger) CaptureStart(env *EVM, from common.Address, to common.Address, create bool, input []byte, gas uint64, value *big.Int) {
t.env = env
if !create {
fmt.Fprintf(t.out, "From: `%v`\nTo: `%v`\nData: `0x%x`\nGas: `%d`\nValue `%v` wei\n",
from.String(), to.String(),
input, gas, value)
} else {
fmt.Fprintf(t.out, "From: `%v`\nCreate at: `%v`\nData: `0x%x`\nGas: `%d`\nValue `%v` wei\n",
from.String(), to.String(),
input, gas, value)
}
fmt.Fprintf(t.out, `
| Pc | Op | Cost | Stack | RStack | Refund |
|-------|-------------|------|-----------|-----------|---------|
`)
}
// CaptureState also tracks SLOAD/SSTORE ops to track storage change.
func (t *mdLogger) CaptureState(pc uint64, op OpCode, gas, cost uint64, scope *ScopeContext, rData []byte, depth int, err error) {
stack := scope.Stack
fmt.Fprintf(t.out, "| %4d | %10v | %3d |", pc, op, cost)
if !t.cfg.DisableStack {
// format stack
var a []string
for _, elem := range stack.data {
a = append(a, elem.Hex())
}
b := fmt.Sprintf("[%v]", strings.Join(a, ","))
fmt.Fprintf(t.out, "%10v |", b)
}
fmt.Fprintf(t.out, "%10v |", t.env.StateDB.GetRefund())
fmt.Fprintln(t.out, "")
if err != nil {
fmt.Fprintf(t.out, "Error: %v\n", err)
}
}
func (t *mdLogger) CaptureFault(pc uint64, op OpCode, gas, cost uint64, scope *ScopeContext, depth int, err error) {
fmt.Fprintf(t.out, "\nError: at pc=%d, op=%v: %v\n", pc, op, err)
}
func (t *mdLogger) CaptureEnd(output []byte, gasUsed uint64, tm time.Duration, err error) {
fmt.Fprintf(t.out, "\nOutput: `0x%x`\nConsumed gas: `%d`\nError: `%v`\n",
output, gasUsed, err)
}
func (t *mdLogger) CaptureEnter(typ OpCode, from common.Address, to common.Address, input []byte, gas uint64, value *big.Int) {
}
func (t *mdLogger) CaptureExit(output []byte, gasUsed uint64, err error) {}
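
The markdown logger drops into the same `Tracer` slot. Continuing the sketch above (same imports and `code` bytes), it streams one table row per executed opcode to the writer, under the header that `CaptureStart` prints:

```go
// Continues the previous sketch; output is a markdown table with one
// | Pc | Op | Cost | Stack | ... | row per executed opcode.
mdTracer := vm.NewMarkdownLogger(nil, os.Stdout)
runtime.Execute(code, nil, &runtime.Config{
	EVMConfig: vm.Config{Debug: true, Tracer: mdTracer},
})
```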

View File

@@ -16,7 +16,7 @@
package vm

-func memorySha3(stack *Stack) (uint64, bool) {
+func memoryKeccak256(stack *Stack) (uint64, bool) {
	return calcMemSize64(stack.Back(0), stack.Back(1))
}

View File

@@ -32,11 +32,6 @@ func (op OpCode) IsPush() bool {
	return false
}

-// IsStaticJump specifies if an opcode is JUMP.
-func (op OpCode) IsStaticJump() bool {
-	return op == JUMP
-}
-
// 0x0 range - arithmetic ops.
const (
	STOP OpCode = 0x0
@@ -70,7 +65,7 @@ const (
	SHR OpCode = 0x1c
	SAR OpCode = 0x1d

-	SHA3 OpCode = 0x20
+	KECCAK256 OpCode = 0x20
)

// 0x30 range - closure state.
@@ -207,13 +202,6 @@ const (
	LOG4
)

-// unofficial opcodes used for parsing.
-const (
-	PUSH OpCode = 0xb0 + iota
-	DUP
-	SWAP
-)
-
// 0xf0 range - closures.
const (
	CREATE OpCode = 0xf0
@@ -225,6 +213,7 @@ const (
	STATICCALL OpCode = 0xfa
	REVERT     OpCode = 0xfd
+	INVALID    OpCode = 0xfe
	SELFDESTRUCT OpCode = 0xff
)
@@ -261,7 +250,7 @@ var opCodeToString = map[OpCode]string{
	MULMOD: "MULMOD",

	// 0x20 range - crypto.
-	SHA3: "SHA3",
+	KECCAK256: "KECCAK256",

	// 0x30 range - closure state.
	ADDRESS: "ADDRESS",
@@ -390,11 +379,8 @@ var opCodeToString = map[OpCode]string{
	CREATE2:      "CREATE2",
	STATICCALL:   "STATICCALL",
	REVERT:       "REVERT",
+	INVALID:      "INVALID",
	SELFDESTRUCT: "SELFDESTRUCT",
-	PUSH:         "PUSH",
-	DUP:          "DUP",
-	SWAP:         "SWAP",
}

func (op OpCode) String() string {
@@ -433,7 +419,7 @@ var stringToOp = map[string]OpCode{
	"SAR":    SAR,
	"ADDMOD": ADDMOD,
	"MULMOD": MULMOD,
-	"SHA3":   SHA3,
+	"KECCAK256": KECCAK256,
	"ADDRESS": ADDRESS,
	"BALANCE": BALANCE,
	"ORIGIN":  ORIGIN,
@@ -548,6 +534,7 @@ var stringToOp = map[string]OpCode{
	"RETURN":       RETURN,
	"CALLCODE":     CALLCODE,
	"REVERT":       REVERT,
+	"INVALID":      INVALID,
	"SELFDESTRUCT": SELFDESTRUCT,
}

View File

@@ -34,6 +34,7 @@ import (
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/core/vm"
	"github.com/ethereum/go-ethereum/eth/tracers"
+	"github.com/ethereum/go-ethereum/eth/tracers/logger"
	"github.com/ethereum/go-ethereum/params"

	// force-load js tracers to trigger registration
@@ -326,7 +327,7 @@ func TestBlockhash(t *testing.T) {
}

type stepCounter struct {
-	inner *vm.JSONLogger
+	inner *logger.JSONLogger
	steps int
}
@@ -493,7 +494,7 @@ func BenchmarkSimpleLoop(b *testing.B) {
		byte(vm.JUMP),
	}

-	//tracer := vm.NewJSONLogger(nil, os.Stdout)
+	//tracer := logger.NewJSONLogger(nil, os.Stdout)
	//Execute(loopingCode, nil, &Config{
	//	EVMConfig: vm.Config{
	//		Debug: true,
@@ -536,7 +537,7 @@ func TestEip2929Cases(t *testing.T) {
		Execute(code, nil, &Config{
			EVMConfig: vm.Config{
				Debug:     true,
-				Tracer:    vm.NewMarkdownLogger(nil, os.Stdout),
+				Tracer:    logger.NewMarkdownLogger(nil, os.Stdout),
				ExtraEips: []int{2929},
			},
		})
@@ -686,7 +687,7 @@ func TestColdAccountAccessCost(t *testing.T) {
			want: 7600,
		},
	} {
-		tracer := vm.NewStructLogger(nil)
+		tracer := logger.NewStructLogger(nil)
		Execute(tc.code, nil, &Config{
			EVMConfig: vm.Config{
				Debug:  true,

View File

@@ -54,10 +54,6 @@ func (st *Stack) push(d *uint256.Int) {
	// NOTE push limit (1024) is checked in baseCheck
	st.data = append(st.data, *d)
}
-
-func (st *Stack) pushN(ds ...uint256.Int) {
-	// FIXME: Is there a way to pass args by pointers.
-	st.data = append(st.data, ds...)
-}

func (st *Stack) pop() (ret uint256.Int) {
	ret = st.data[len(st.data)-1]

View File

@@ -11,7 +11,7 @@ Note:
- To avoid unnecessary loads and make use of available registers, two
  'passes' have every time been interleaved, with the odd passes accumulating c' and d'
-  which will be added to c and d respectively in the the even passes
+  which will be added to c and d respectively in the even passes
*/

View File

@@ -353,7 +353,7 @@ func (b *EthAPIBackend) StartMining(threads int) error {
}

func (b *EthAPIBackend) StateAtBlock(ctx context.Context, block *types.Block, reexec uint64, base *state.StateDB, checkLive, preferDisk bool) (*state.StateDB, error) {
-	return b.eth.stateAtBlock(block, reexec, base, checkLive, preferDisk)
+	return b.eth.StateAtBlock(block, reexec, base, checkLive, preferDisk)
}

func (b *EthAPIBackend) StateAtTransaction(ctx context.Context, block *types.Block, txIndex int, reexec uint64) (core.Message, vm.BlockContext, *state.StateDB, error) {

View File

@@ -30,6 +30,7 @@ import (
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/common/hexutil"
	"github.com/ethereum/go-ethereum/consensus"
+	"github.com/ethereum/go-ethereum/consensus/beacon"
	"github.com/ethereum/go-ethereum/consensus/clique"
	"github.com/ethereum/go-ethereum/core"
	"github.com/ethereum/go-ethereum/core/bloombits"
@@ -46,6 +47,7 @@ import (
	"github.com/ethereum/go-ethereum/ethdb"
	"github.com/ethereum/go-ethereum/event"
	"github.com/ethereum/go-ethereum/internal/ethapi"
+	"github.com/ethereum/go-ethereum/internal/shutdowncheck"
	"github.com/ethereum/go-ethereum/log"
	"github.com/ethereum/go-ethereum/miner"
	"github.com/ethereum/go-ethereum/node"
@@ -71,6 +73,7 @@ type Ethereum struct {
	handler            *handler
	ethDialCandidates  enode.Iterator
	snapDialCandidates enode.Iterator
+	merger             *consensus.Merger

	// DB interfaces
	chainDb ethdb.Database // Block chain database
@@ -95,6 +98,8 @@ type Ethereum struct {
	p2pServer *p2p.Server

	lock sync.RWMutex // Protects the variadic fields (e.g. gas price and etherbase)
+
+	shutdownTracker *shutdowncheck.ShutdownTracker // Tracks if and when the node has shutdown ungracefully
}

// New creates a new Ethereum object (including the
@@ -131,7 +136,7 @@ func New(stack *node.Node, config *ethconfig.Config) (*Ethereum, error) {
	if err != nil {
		return nil, err
	}
-	chainConfig, genesisHash, genesisErr := core.SetupGenesisBlockWithOverride(chainDb, config.Genesis, config.OverrideArrowGlacier)
+	chainConfig, genesisHash, genesisErr := core.SetupGenesisBlockWithOverride(chainDb, config.Genesis, config.OverrideArrowGlacier, config.OverrideTerminalTotalDifficulty)
	if _, ok := genesisErr.(*params.ConfigCompatError); genesisErr != nil && !ok {
		return nil, genesisErr
	}
@@ -140,8 +145,10 @@ func New(stack *node.Node, config *ethconfig.Config) (*Ethereum, error) {
	if err := pruner.RecoverPruning(stack.ResolvePath(""), chainDb, stack.ResolvePath(config.TrieCleanCacheJournal)); err != nil {
		log.Error("Failed to recover state", "error", err)
	}
+	merger := consensus.NewMerger(chainDb)
	eth := &Ethereum{
		config:            config,
+		merger:            merger,
		chainDb:           chainDb,
		eventMux:          stack.EventMux(),
		accountManager:    stack.AccountManager(),
@@ -153,6 +160,7 @@ func New(stack *node.Node, config *ethconfig.Config) (*Ethereum, error) {
		bloomRequests:     make(chan chan *bloombits.Retrieval),
		bloomIndexer:      core.NewBloomIndexer(chainDb, params.BloomBitsBlocks, params.BloomConfirms),
		p2pServer:         stack.Server(),
+		shutdownTracker:   shutdowncheck.NewShutdownTracker(chainDb),
	}

	bcVersion := rawdb.ReadDatabaseVersion(chainDb)
@@ -215,6 +223,7 @@ func New(stack *node.Node, config *ethconfig.Config) (*Ethereum, error) {
		Database:   chainDb,
		Chain:      eth.blockchain,
		TxPool:     eth.txPool,
+		Merger:     merger,
		Network:    config.NetworkId,
		Sync:       config.SyncMode,
		BloomCache: uint64(cacheLimit),
@@ -225,7 +234,7 @@ func New(stack *node.Node, config *ethconfig.Config) (*Ethereum, error) {
		return nil, err
	}

-	eth.miner = miner.New(eth, &config.Miner, chainConfig, eth.EventMux(), eth.engine, eth.isLocalBlock)
+	eth.miner = miner.New(eth, &config.Miner, chainConfig, eth.EventMux(), eth.engine, eth.isLocalBlock, merger)
	eth.miner.SetExtra(makeExtraData(config.Miner.ExtraData))

	eth.APIBackend = &EthAPIBackend{stack.Config().ExtRPCEnabled(), stack.Config().AllowUnprotectedTxs, eth, nil}
@@ -256,19 +265,10 @@ func New(stack *node.Node, config *ethconfig.Config) (*Ethereum, error) {
	stack.RegisterAPIs(eth.APIs())
	stack.RegisterProtocols(eth.Protocols())
	stack.RegisterLifecycle(eth)
-	// Check for unclean shutdown
-	if uncleanShutdowns, discards, err := rawdb.PushUncleanShutdownMarker(chainDb); err != nil {
-		log.Error("Could not update unclean-shutdown-marker list", "error", err)
-	} else {
-		if discards > 0 {
-			log.Warn("Old unclean shutdowns found", "count", discards)
-		}
-		for _, tstamp := range uncleanShutdowns {
-			t := time.Unix(int64(tstamp), 0)
-			log.Warn("Unclean shutdown detected", "booted", t,
-				"age", common.PrettyAge(t))
-		}
-	}
+
+	// Successful startup; push a marker and check previous unclean shutdowns.
+	eth.shutdownTracker.MarkStartup()
+
	return eth, nil
}
@@ -378,10 +378,10 @@ func (s *Ethereum) Etherbase() (eb common.Address, err error) {
//
// We regard two types of accounts as local miner account: etherbase
// and accounts specified via `txpool.locals` flag.
-func (s *Ethereum) isLocalBlock(block *types.Block) bool {
-	author, err := s.engine.Author(block.Header())
+func (s *Ethereum) isLocalBlock(header *types.Header) bool {
+	author, err := s.engine.Author(header)
	if err != nil {
-		log.Warn("Failed to retrieve block author", "number", block.NumberU64(), "hash", block.Hash(), "err", err)
+		log.Warn("Failed to retrieve block author", "number", header.Number.Uint64(), "hash", header.Hash(), "err", err)
		return false
	}
	// Check whether the given address is etherbase.
@@ -404,7 +404,7 @@ func (s *Ethereum) isLocalBlock(block *types.Block) bool {
// shouldPreserve checks whether we should preserve the given block
// during the chain reorg depending on whether the author of block
// is a local account.
-func (s *Ethereum) shouldPreserve(block *types.Block) bool {
+func (s *Ethereum) shouldPreserve(header *types.Header) bool {
	// The reason we need to disable the self-reorg preserving for clique
	// is it can be probable to introduce a deadlock.
	//
@@ -424,7 +424,7 @@ func (s *Ethereum) shouldPreserve(block *types.Block) bool {
	if _, ok := s.engine.(*clique.Clique); ok {
		return false
	}
-	return s.isLocalBlock(block)
+	return s.isLocalBlock(header)
}

// SetEtherbase sets the mining reward address.
@@ -465,13 +465,21 @@ func (s *Ethereum) StartMining(threads int) error {
			log.Error("Cannot start mining without etherbase", "err", err)
			return fmt.Errorf("etherbase missing: %v", err)
		}
-		if clique, ok := s.engine.(*clique.Clique); ok {
+		var cli *clique.Clique
+		if c, ok := s.engine.(*clique.Clique); ok {
+			cli = c
+		} else if cl, ok := s.engine.(*beacon.Beacon); ok {
+			if c, ok := cl.InnerEngine().(*clique.Clique); ok {
+				cli = c
+			}
+		}
+		if cli != nil {
			wallet, err := s.accountManager.Find(accounts.Account{Address: eb})
			if wallet == nil || err != nil {
				log.Error("Etherbase account unavailable locally", "err", err)
				return fmt.Errorf("signer missing: %v", err)
			}
-			clique.Authorize(eb, wallet.SignData)
+			cli.Authorize(eb, wallet.SignData)
		}
		// If mining is started, we can disable the transaction rejection mechanism
		// introduced to speed sync times.
@@ -508,8 +516,14 @@ func (s *Ethereum) ChainDb() ethdb.Database { return s.chainDb }
func (s *Ethereum) IsListening() bool                  { return true } // Always listening
func (s *Ethereum) Downloader() *downloader.Downloader { return s.handler.downloader }
func (s *Ethereum) Synced() bool                       { return atomic.LoadUint32(&s.handler.acceptTxs) == 1 }
+func (s *Ethereum) SetSynced()                         { atomic.StoreUint32(&s.handler.acceptTxs, 1) }
func (s *Ethereum) ArchiveMode() bool                  { return s.config.NoPruning }
func (s *Ethereum) BloomIndexer() *core.ChainIndexer   { return s.bloomIndexer }
+func (s *Ethereum) Merger() *consensus.Merger          { return s.merger }
+func (s *Ethereum) SyncMode() downloader.SyncMode {
+	mode, _ := s.handler.chainSync.modeAndLocalHead()
+	return mode
+}

// Protocols returns all the currently configured
// network protocols to start.
@@ -529,6 +543,9 @@ func (s *Ethereum) Start() error {
	// Start the bloom bits servicing goroutines
	s.startBloomHandlers(params.BloomBitsBlocks)

+	// Regularly update shutdown marker
+	s.shutdownTracker.Start()
+
	// Figure out a max peers count based on the server limits
	maxPeers := s.p2pServer.MaxPeers
	if s.config.LightServ > 0 {
@@ -557,7 +574,10 @@ func (s *Ethereum) Stop() error {
	s.miner.Close()
	s.blockchain.Stop()
	s.engine.Close()
-	rawdb.PopUncleanShutdownMarker(s.chainDb)
+
+	// Clean shutdown marker as the last thing before closing db
+	s.shutdownTracker.Stop()
+
	s.chainDb.Close()
	s.eventMux.Stop()
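
The tracker condenses the marker bookkeeping that used to be inlined above. Its lifecycle across the three hunks is, schematically (an in-tree sketch only; `internal/shutdowncheck` is importable solely from within go-ethereum, and `trackShutdowns` is a hypothetical helper):

```go
// In-tree sketch of the ShutdownTracker lifecycle introduced above.
func trackShutdowns(db ethdb.Database) *shutdowncheck.ShutdownTracker {
	tracker := shutdowncheck.NewShutdownTracker(db)
	tracker.MarkStartup() // push a marker, log any previous unclean shutdowns
	tracker.Start()       // refresh the marker periodically while running
	return tracker        // caller runs tracker.Stop() right before db.Close()
}
```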

View File

@@ -18,17 +18,23 @@
package catalyst

import (
+	"crypto/sha256"
+	"encoding/binary"
	"errors"
	"fmt"
	"math/big"
	"time"

	"github.com/ethereum/go-ethereum/common"
+	"github.com/ethereum/go-ethereum/common/hexutil"
+	"github.com/ethereum/go-ethereum/consensus"
+	"github.com/ethereum/go-ethereum/consensus/beacon"
	"github.com/ethereum/go-ethereum/consensus/misc"
	"github.com/ethereum/go-ethereum/core"
	"github.com/ethereum/go-ethereum/core/state"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/eth"
+	"github.com/ethereum/go-ethereum/les"
	"github.com/ethereum/go-ethereum/log"
	"github.com/ethereum/go-ethereum/node"
	chainParams "github.com/ethereum/go-ethereum/params"
@@ -36,31 +42,81 @@ import (
	"github.com/ethereum/go-ethereum/trie"
)

+var (
+	VALID              = GenericStringResponse{"VALID"}
+	SUCCESS            = GenericStringResponse{"SUCCESS"}
+	INVALID            = ForkChoiceResponse{Status: "INVALID", PayloadID: nil}
+	SYNCING            = ForkChoiceResponse{Status: "SYNCING", PayloadID: nil}
+	GenericServerError = rpc.CustomError{Code: -32000, ValidationError: "Server error"}
+	UnknownPayload     = rpc.CustomError{Code: -32001, ValidationError: "Unknown payload"}
+	InvalidTB          = rpc.CustomError{Code: -32002, ValidationError: "Invalid terminal block"}
+	InvalidPayloadID   = rpc.CustomError{Code: 1, ValidationError: "invalid payload id"}
+)

-// Register adds catalyst APIs to the node.
+// Register adds catalyst APIs to the full node.
func Register(stack *node.Node, backend *eth.Ethereum) error {
-	chainconfig := backend.BlockChain().Config()
-	if chainconfig.TerminalTotalDifficulty == nil {
-		return errors.New("catalyst started without valid total difficulty")
-	}
-
-	log.Warn("Catalyst mode enabled")
+	log.Warn("Catalyst mode enabled", "protocol", "eth")
	stack.RegisterAPIs([]rpc.API{
		{
-			Namespace: "consensus",
+			Namespace: "engine",
			Version:   "1.0",
-			Service:   newConsensusAPI(backend),
+			Service:   NewConsensusAPI(backend, nil),
			Public:    true,
		},
	})
	return nil
}

+// RegisterLight adds catalyst APIs to the light client.
+func RegisterLight(stack *node.Node, backend *les.LightEthereum) error {
+	log.Warn("Catalyst mode enabled", "protocol", "les")
+	stack.RegisterAPIs([]rpc.API{
+		{
+			Namespace: "engine",
+			Version:   "1.0",
+			Service:   NewConsensusAPI(nil, backend),
+			Public:    true,
+		},
+	})
+	return nil
+}

-type consensusAPI struct {
-	eth *eth.Ethereum
-}
-
-func newConsensusAPI(eth *eth.Ethereum) *consensusAPI {
-	return &consensusAPI{eth: eth}
-}

type ConsensusAPI struct {
	light bool
eth *eth.Ethereum
les *les.LightEthereum
engine consensus.Engine // engine is the post-merge consensus engine, only for block creation
preparedBlocks map[uint64]*ExecutableDataV1
}
func NewConsensusAPI(eth *eth.Ethereum, les *les.LightEthereum) *ConsensusAPI {
var engine consensus.Engine
if eth == nil {
if les.BlockChain().Config().TerminalTotalDifficulty == nil {
panic("Catalyst started without valid total difficulty")
}
if b, ok := les.Engine().(*beacon.Beacon); ok {
engine = beacon.New(b.InnerEngine())
} else {
engine = beacon.New(les.Engine())
}
} else {
if eth.BlockChain().Config().TerminalTotalDifficulty == nil {
panic("Catalyst started without valid total difficulty")
}
if b, ok := eth.Engine().(*beacon.Beacon); ok {
engine = beacon.New(b.InnerEngine())
} else {
engine = beacon.New(eth.Engine())
}
}
return &ConsensusAPI{
light: eth == nil,
eth: eth,
les: les,
engine: engine,
preparedBlocks: make(map[uint64]*ExecutableDataV1),
}
} }
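
Registration is all a consumer needs to expose the engine namespace. A schematic sketch follows; `enableEngineAPI` is a hypothetical helper, and the stack and backend construction is elided (see `startEthService` in the tests below for a realistic configuration, including a genesis that carries a `TerminalTotalDifficulty`, without which `NewConsensusAPI` panics):

```go
package enginedemo

import (
	"github.com/ethereum/go-ethereum/eth"
	"github.com/ethereum/go-ethereum/eth/catalyst"
	"github.com/ethereum/go-ethereum/node"
)

// enableEngineAPI wires NewConsensusAPI into the node's RPC under the
// "engine" namespace (ForkchoiceUpdatedV1, GetPayloadV1, ExecutePayloadV1).
func enableEngineAPI(stack *node.Node, backend *eth.Ethereum) error {
	// Light clients call catalyst.RegisterLight(stack, lesBackend) instead.
	return catalyst.Register(stack, backend)
}
```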
// blockExecutionEnv gathers all the data required to execute
@@ -89,8 +145,24 @@ func (env *blockExecutionEnv) commitTransaction(tx *types.Transaction, coinbase
	return nil
}

-func (api *consensusAPI) makeEnv(parent *types.Block, header *types.Header) (*blockExecutionEnv, error) {
-	state, err := api.eth.BlockChain().StateAt(parent.Root())
+func (api *ConsensusAPI) makeEnv(parent *types.Block, header *types.Header) (*blockExecutionEnv, error) {
+	// The parent state might be missing. It can be the special scenario
+	// that consensus layer tries to build a new block based on the very
+	// old side chain block and the relevant state is already pruned. So
+	// try to retrieve the live state from the chain, if it's not existent,
+	// do the necessary recovery work.
+	var (
+		err   error
+		state *state.StateDB
+	)
+	if api.eth.BlockChain().HasState(parent.Root()) {
+		state, err = api.eth.BlockChain().StateAt(parent.Root())
+	} else {
+		// The maximum acceptable reorg depth can be limited by the
+		// finalised block somehow. TODO(rjl493456442) fix the hard-
+		// coded number here later.
+		state, err = api.eth.StateAtBlock(parent, 1000, nil, false, false)
+	}
	if err != nil {
		return nil, err
	}
@@ -103,57 +175,160 @@ func (api *ConsensusAPI) makeEnv(parent *types.Block, header *types.Header) (*blockExecutionEnv, error) {
	return env, nil
}
func (api *ConsensusAPI) GetPayloadV1(payloadID hexutil.Bytes) (*ExecutableDataV1, error) {
hash := []byte(payloadID)
if len(hash) < 8 {
return nil, &InvalidPayloadID
}
id := binary.BigEndian.Uint64(hash[:8])
data, ok := api.preparedBlocks[id]
if !ok {
return nil, &UnknownPayload
}
return data, nil
}
func (api *ConsensusAPI) ForkchoiceUpdatedV1(heads ForkchoiceStateV1, PayloadAttributes *PayloadAttributesV1) (ForkChoiceResponse, error) {
if heads.HeadBlockHash == (common.Hash{}) {
return ForkChoiceResponse{Status: SUCCESS.Status, PayloadID: nil}, nil
}
if err := api.checkTerminalTotalDifficulty(heads.HeadBlockHash); err != nil {
if block := api.eth.BlockChain().GetBlockByHash(heads.HeadBlockHash); block == nil {
// TODO (MariusVanDerWijden) trigger sync
return SYNCING, nil
}
return INVALID, err
}
// If the finalized block is set, check if it is in our blockchain
if heads.FinalizedBlockHash != (common.Hash{}) {
if block := api.eth.BlockChain().GetBlockByHash(heads.FinalizedBlockHash); block == nil {
// TODO (MariusVanDerWijden) trigger sync
return SYNCING, nil
}
}
// SetHead
if err := api.setHead(heads.HeadBlockHash); err != nil {
return INVALID, err
}
// Assemble block (if needed)
if PayloadAttributes != nil {
data, err := api.assembleBlock(heads.HeadBlockHash, PayloadAttributes)
if err != nil {
return INVALID, err
}
hash := computePayloadId(heads.HeadBlockHash, PayloadAttributes)
id := binary.BigEndian.Uint64(hash)
api.preparedBlocks[id] = data
log.Info("Created payload", "payloadid", id)
// TODO (MariusVanDerWijden) do something with the payloadID?
hex := hexutil.Bytes(hash)
return ForkChoiceResponse{Status: SUCCESS.Status, PayloadID: &hex}, nil
}
return ForkChoiceResponse{Status: SUCCESS.Status, PayloadID: nil}, nil
}
func computePayloadId(headBlockHash common.Hash, params *PayloadAttributesV1) []byte {
// Hash
hasher := sha256.New()
hasher.Write(headBlockHash[:])
binary.Write(hasher, binary.BigEndian, params.Timestamp)
hasher.Write(params.Random[:])
hasher.Write(params.SuggestedFeeRecipient[:])
return hasher.Sum([]byte{})[:8]
}
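
A worked example of the derivation above: the id handed back in `ForkChoiceResponse` is just the first 8 bytes of `sha256(headBlockHash || timestamp(BE) || random || feeRecipient)`, so `GetPayloadV1` can serve prepared payloads from a plain map keyed by those bytes. The hash and address values here are arbitrary illustrations.

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"

	"github.com/ethereum/go-ethereum/common"
)

func main() {
	head := common.HexToHash("0x01")
	random := common.HexToHash("0x02")
	feeRecipient := common.HexToAddress("0x03")
	timestamp := uint64(9005)

	// Same construction as computePayloadId above.
	hasher := sha256.New()
	hasher.Write(head[:])
	binary.Write(hasher, binary.BigEndian, timestamp)
	hasher.Write(random[:])
	hasher.Write(feeRecipient[:])
	id := hasher.Sum(nil)[:8]

	// preparedBlocks is keyed by the big-endian uint64 view of these 8 bytes.
	fmt.Printf("payload id: %x (map key %d)\n", id, binary.BigEndian.Uint64(id))
}
```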
func (api *ConsensusAPI) invalid() ExecutePayloadResponse {
if api.light {
return ExecutePayloadResponse{Status: INVALID.Status, LatestValidHash: api.les.BlockChain().CurrentHeader().Hash()}
}
return ExecutePayloadResponse{Status: INVALID.Status, LatestValidHash: api.eth.BlockChain().CurrentHeader().Hash()}
}
// ExecutePayload creates an Eth1 block, inserts it in the chain, and returns the status of the chain.
func (api *ConsensusAPI) ExecutePayloadV1(params ExecutableDataV1) (ExecutePayloadResponse, error) {
block, err := ExecutableDataToBlock(params)
if err != nil {
return api.invalid(), err
}
if api.light {
parent := api.les.BlockChain().GetHeaderByHash(params.ParentHash)
if parent == nil {
return api.invalid(), fmt.Errorf("could not find parent %x", params.ParentHash)
}
if err = api.les.BlockChain().InsertHeader(block.Header()); err != nil {
return api.invalid(), err
}
return ExecutePayloadResponse{Status: VALID.Status, LatestValidHash: block.Hash()}, nil
}
if !api.eth.BlockChain().HasBlock(block.ParentHash(), block.NumberU64()-1) {
/*
TODO (MariusVanDerWijden) reenable once sync is merged
if err := api.eth.Downloader().BeaconSync(api.eth.SyncMode(), block.Header()); err != nil {
return SYNCING, err
}
*/
// TODO (MariusVanDerWijden) we should return nil here not empty hash
return ExecutePayloadResponse{Status: SYNCING.Status, LatestValidHash: common.Hash{}}, nil
}
parent := api.eth.BlockChain().GetBlockByHash(params.ParentHash)
td := api.eth.BlockChain().GetTd(parent.Hash(), block.NumberU64()-1)
ttd := api.eth.BlockChain().Config().TerminalTotalDifficulty
if td.Cmp(ttd) < 0 {
return api.invalid(), fmt.Errorf("can not execute payload on top of block with low td got: %v threshold %v", td, ttd)
}
if err := api.eth.BlockChain().InsertBlockWithoutSetHead(block); err != nil {
return api.invalid(), err
}
if merger := api.merger(); !merger.TDDReached() {
merger.ReachTTD()
}
return ExecutePayloadResponse{Status: VALID.Status, LatestValidHash: block.Hash()}, nil
}
// AssembleBlock creates a new block, inserts it into the chain, and returns the "execution
// data" required for eth2 clients to process the new block.
-func (api *consensusAPI) AssembleBlock(params assembleBlockParams) (*executableData, error) {
-	log.Info("Producing block", "parentHash", params.ParentHash)
+func (api *ConsensusAPI) assembleBlock(parentHash common.Hash, params *PayloadAttributesV1) (*ExecutableDataV1, error) {
+	if api.light {
+		return nil, errors.New("not supported")
+	}
+	log.Info("Producing block", "parentHash", parentHash)
	bc := api.eth.BlockChain()
-	parent := bc.GetBlockByHash(params.ParentHash)
+	parent := bc.GetBlockByHash(parentHash)
	if parent == nil {
-		log.Warn("Cannot assemble block with parent hash to unknown block", "parentHash", params.ParentHash)
-		return nil, fmt.Errorf("cannot assemble block with unknown parent %s", params.ParentHash)
+		log.Warn("Cannot assemble block with parent hash to unknown block", "parentHash", parentHash)
+		return nil, fmt.Errorf("cannot assemble block with unknown parent %s", parentHash)
	}
-	pool := api.eth.TxPool()
-
-	if parent.Time() >= params.Timestamp {
-		return nil, fmt.Errorf("child timestamp lower than parent's: %d >= %d", parent.Time(), params.Timestamp)
+	if params.Timestamp < parent.Time() {
+		return nil, fmt.Errorf("child timestamp lower than parent's: %d < %d", params.Timestamp, parent.Time())
	}
	if now := uint64(time.Now().Unix()); params.Timestamp > now+1 {
-		wait := time.Duration(params.Timestamp-now) * time.Second
-		log.Info("Producing block too far in the future", "wait", common.PrettyDuration(wait))
-		time.Sleep(wait)
+		diff := time.Duration(params.Timestamp-now) * time.Second
+		log.Warn("Producing block too far in the future", "diff", common.PrettyDuration(diff))
	}
-	pending := pool.Pending(true)
-	coinbase, err := api.eth.Etherbase()
-	if err != nil {
-		return nil, err
-	}
+	pending := api.eth.TxPool().Pending(true)
+	coinbase := params.SuggestedFeeRecipient
	num := parent.Number()
	header := &types.Header{
		ParentHash: parent.Hash(),
		Number:     num.Add(num, common.Big1),
		Coinbase:   coinbase,
		GasLimit:   parent.GasLimit(), // Keep the gas limit constant in this prototype
-		Extra:      []byte{},
+		Extra:      []byte{}, // TODO (MariusVanDerWijden) properly set extra data
		Time:       params.Timestamp,
	}
	if config := api.eth.BlockChain().Config(); config.IsLondon(header.Number) {
		header.BaseFee = misc.CalcBaseFee(config, parent.Header())
	}
-	err = api.eth.Engine().Prepare(bc, header)
-	if err != nil {
+	if err := api.engine.Prepare(bc, header); err != nil {
		return nil, err
	}
	env, err := api.makeEnv(parent, header)
	if err != nil {
		return nil, err
	}
	var (
		signer = types.MakeSigner(bc.Config(), header.Number)
		txHeap = types.NewTransactionsByPriceAndNonce(signer, pending, nil)
@@ -204,25 +379,12 @@ func (api *consensusAPI) AssembleBlock(params assembleBlockParams) (*executableData, error) {
			txHeap.Shift()
		}
	}

	// Create the block.
-	block, err := api.eth.Engine().FinalizeAndAssemble(bc, header, env.state, transactions, nil /* uncles */, env.receipts)
+	block, err := api.engine.FinalizeAndAssemble(bc, header, env.state, transactions, nil /* uncles */, env.receipts)
	if err != nil {
		return nil, err
	}
-	return &executableData{
-		BlockHash:    block.Hash(),
-		ParentHash:   block.ParentHash(),
-		Miner:        block.Coinbase(),
-		StateRoot:    block.Root(),
-		Number:       block.NumberU64(),
-		GasLimit:     block.GasLimit(),
-		GasUsed:      block.GasUsed(),
-		Timestamp:    block.Time(),
-		ReceiptRoot:  block.ReceiptHash(),
-		LogsBloom:    block.Bloom().Bytes(),
-		Transactions: encodeTransactions(block.Transactions()),
-	}, nil
+	return BlockToExecutableData(block, params.Random), nil
}
func encodeTransactions(txs []*types.Transaction) [][]byte {
@@ -245,66 +407,130 @@ func decodeTransactions(enc [][]byte) ([]*types.Transaction, error) {
	return txs, nil
}

-func insertBlockParamsToBlock(config *chainParams.ChainConfig, parent *types.Header, params executableData) (*types.Block, error) {
+func ExecutableDataToBlock(params ExecutableDataV1) (*types.Block, error) {
	txs, err := decodeTransactions(params.Transactions)
	if err != nil {
		return nil, err
	}
+	if len(params.ExtraData) > 32 {
+		return nil, fmt.Errorf("invalid extradata length: %v", len(params.ExtraData))
+	}
	number := big.NewInt(0)
	number.SetUint64(params.Number)
	header := &types.Header{
		ParentHash:  params.ParentHash,
		UncleHash:   types.EmptyUncleHash,
-		Coinbase:    params.Miner,
+		Coinbase:    params.FeeRecipient,
		Root:        params.StateRoot,
		TxHash:      types.DeriveSha(types.Transactions(txs), trie.NewStackTrie(nil)),
-		ReceiptHash: params.ReceiptRoot,
+		ReceiptHash: params.ReceiptsRoot,
		Bloom:       types.BytesToBloom(params.LogsBloom),
-		Difficulty:  big.NewInt(1),
+		Difficulty:  common.Big0,
		Number:      number,
		GasLimit:    params.GasLimit,
		GasUsed:     params.GasUsed,
		Time:        params.Timestamp,
-	}
-	if config.IsLondon(number) {
-		header.BaseFee = misc.CalcBaseFee(config, parent)
+		BaseFee:     params.BaseFeePerGas,
+		Extra:       params.ExtraData,
+		// TODO (MariusVanDerWijden) add params.Random to header once required
	}
	block := types.NewBlockWithHeader(header).WithBody(txs, nil /* uncles */)
+	if block.Hash() != params.BlockHash {
+		return nil, fmt.Errorf("blockhash mismatch, want %x, got %x", params.BlockHash, block.Hash())
+	}
	return block, nil
}

-// NewBlock creates an Eth1 block, inserts it in the chain, and either returns true,
-// or false + an error. This is a bit redundant for go, but simplifies things on the
-// eth2 side.
-func (api *consensusAPI) NewBlock(params executableData) (*newBlockResponse, error) {
-	parent := api.eth.BlockChain().GetBlockByHash(params.ParentHash)
-	if parent == nil {
-		return &newBlockResponse{false}, fmt.Errorf("could not find parent %x", params.ParentHash)
-	}
-	block, err := insertBlockParamsToBlock(api.eth.BlockChain().Config(), parent.Header(), params)
-	if err != nil {
-		return nil, err
-	}
-	_, err = api.eth.BlockChain().InsertChainWithoutSealVerification(block)
-	return &newBlockResponse{err == nil}, err
-}
+func BlockToExecutableData(block *types.Block, random common.Hash) *ExecutableDataV1 {
+	return &ExecutableDataV1{
+		BlockHash:     block.Hash(),
+		ParentHash:    block.ParentHash(),
+		FeeRecipient:  block.Coinbase(),
+		StateRoot:     block.Root(),
+		Number:        block.NumberU64(),
+		GasLimit:      block.GasLimit(),
+		GasUsed:       block.GasUsed(),
+		BaseFeePerGas: block.BaseFee(),
+		Timestamp:     block.Time(),
+		ReceiptsRoot:  block.ReceiptHash(),
+		LogsBloom:     block.Bloom().Bytes(),
+		Transactions:  encodeTransactions(block.Transactions()),
+		Random:        random,
+		ExtraData:     block.Extra(),
+	}
+}
// Used in tests to add the list of transactions from a block to the tx pool.
-func (api *consensusAPI) addBlockTxs(block *types.Block) error {
-	for _, tx := range block.Transactions() {
+func (api *ConsensusAPI) insertTransactions(txs types.Transactions) error {
+	for _, tx := range txs {
		api.eth.TxPool().AddLocal(tx)
	}
	return nil
}

-// FinalizeBlock is called to mark a block as synchronized, so
-// that data that is no longer needed can be removed.
-func (api *consensusAPI) FinalizeBlock(blockHash common.Hash) (*genericResponse, error) {
-	return &genericResponse{true}, nil
-}
+func (api *ConsensusAPI) checkTerminalTotalDifficulty(head common.Hash) error {
+	// shortcut if we entered PoS already
+	if api.merger().PoSFinalized() {
+		return nil
+	}
// make sure the parent has enough terminal total difficulty
newHeadBlock := api.eth.BlockChain().GetBlockByHash(head)
if newHeadBlock == nil {
return &GenericServerError
}
td := api.eth.BlockChain().GetTd(newHeadBlock.Hash(), newHeadBlock.NumberU64())
if td != nil && td.Cmp(api.eth.BlockChain().Config().TerminalTotalDifficulty) < 0 {
return &InvalidTB
}
return nil
} }
-// SetHead is called to perform a force choice.
-func (api *consensusAPI) SetHead(newHead common.Hash) (*genericResponse, error) {
-	return &genericResponse{true}, nil
-}
+// setHead is called to perform a force choice.
+func (api *ConsensusAPI) setHead(newHead common.Hash) error {
+	log.Info("Setting head", "head", newHead)
if api.light {
headHeader := api.les.BlockChain().CurrentHeader()
if headHeader.Hash() == newHead {
return nil
}
newHeadHeader := api.les.BlockChain().GetHeaderByHash(newHead)
if newHeadHeader == nil {
return &GenericServerError
}
if err := api.les.BlockChain().SetChainHead(newHeadHeader); err != nil {
return err
}
// Trigger the transition if it's the first `NewHead` event.
merger := api.merger()
if !merger.PoSFinalized() {
merger.FinalizePoS()
}
return nil
}
headBlock := api.eth.BlockChain().CurrentBlock()
if headBlock.Hash() == newHead {
return nil
}
newHeadBlock := api.eth.BlockChain().GetBlockByHash(newHead)
if newHeadBlock == nil {
return &GenericServerError
}
if err := api.eth.BlockChain().SetChainHead(newHeadBlock); err != nil {
return err
}
// Trigger the transition if it's the first `NewHead` event.
if merger := api.merger(); !merger.PoSFinalized() {
merger.FinalizePoS()
}
// TODO (MariusVanDerWijden) are we really synced now?
api.eth.SetSynced()
return nil
}
// merger is a helper that returns the merger instance backing the API (full node or light client).
func (api *ConsensusAPI) merger() *consensus.Merger {
if api.light {
return api.les.Merger()
}
return api.eth.Merger()
} }
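
Taken together, the happy path a consensus client drives through this API is forkchoiceUpdated (with payload attributes) → getPayload → executePayload → forkchoiceUpdated. A condensed sketch, assuming an already constructed `api` and current head block `head` (the hypothetical `buildAndImportOne` mirrors the sequence `TestFullAPI` exercises below):

```go
// Hypothetical driver for one build-and-import round trip.
func buildAndImportOne(api *ConsensusAPI, head *types.Block) error {
	fcState := ForkchoiceStateV1{HeadBlockHash: head.Hash()}
	resp, err := api.ForkchoiceUpdatedV1(fcState, &PayloadAttributesV1{
		Timestamp:             head.Time() + 1,
		SuggestedFeeRecipient: head.Coinbase(),
	}) // 1. start building a payload on the current head
	if err != nil {
		return err
	}
	payload, err := api.GetPayloadV1(*resp.PayloadID) // 2. fetch the built payload
	if err != nil {
		return err
	}
	if _, err := api.ExecutePayloadV1(*payload); err != nil { // 3. insert it
		return err
	}
	_, err = api.ForkchoiceUpdatedV1(ForkchoiceStateV1{ // 4. make it the new head
		HeadBlockHash:      payload.BlockHash,
		SafeBlockHash:      payload.ParentHash,
		FinalizedBlockHash: payload.ParentHash,
	}, nil)
	return err
}
```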

View File

@@ -19,7 +19,10 @@ package catalyst
import (
	"math/big"
	"testing"
+	"time"

+	"github.com/ethereum/go-ethereum/common"
+	"github.com/ethereum/go-ethereum/common/hexutil"
	"github.com/ethereum/go-ethereum/consensus/ethash"
	"github.com/ethereum/go-ethereum/core"
	"github.com/ethereum/go-ethereum/core/rawdb"
@@ -38,10 +41,10 @@ var (
	// testAddr is the Ethereum address of the tester account.
	testAddr = crypto.PubkeyToAddress(testKey.PublicKey)

-	testBalance = big.NewInt(2e15)
+	testBalance = big.NewInt(2e18)
)

-func generateTestChain() (*core.Genesis, []*types.Block) {
+func generatePreMergeChain(n int) (*core.Genesis, []*types.Block) {
	db := rawdb.NewMemoryDatabase()
	config := params.AllEthashProtocolChanges
	genesis := &core.Genesis{
@@ -51,177 +54,280 @@ func generateTestChain() (*core.Genesis, []*types.Block) {
		Timestamp: 9000,
		BaseFee:   big.NewInt(params.InitialBaseFee),
	}
+	testNonce := uint64(0)
	generate := func(i int, g *core.BlockGen) {
		g.OffsetTime(5)
		g.SetExtra([]byte("test"))
+		tx, _ := types.SignTx(types.NewTransaction(testNonce, common.HexToAddress("0x9a9070028361F7AAbeB3f2F2Dc07F82C4a98A02a"), big.NewInt(1), params.TxGas, big.NewInt(params.InitialBaseFee*2), nil), types.LatestSigner(config), testKey)
+		g.AddTx(tx)
+		testNonce++
	}
	gblock := genesis.ToBlock(db)
	engine := ethash.NewFaker()
-	blocks, _ := core.GenerateChain(config, gblock, engine, db, 10, generate)
-	blocks = append([]*types.Block{gblock}, blocks...)
+	blocks, _ := core.GenerateChain(config, gblock, engine, db, n, generate)
+	totalDifficulty := big.NewInt(0)
+	for _, b := range blocks {
+		totalDifficulty.Add(totalDifficulty, b.Difficulty())
+	}
+	config.TerminalTotalDifficulty = totalDifficulty
	return genesis, blocks
}

-// TODO (MariusVanDerWijden) reenable once engine api is updated to the latest spec
-/*
-func generateTestChainWithFork(n int, fork int) (*core.Genesis, []*types.Block, []*types.Block) {
-	if fork >= n {
-		fork = n - 1
-	}
-	db := rawdb.NewMemoryDatabase()
-	config := &params.ChainConfig{
-		ChainID:                 big.NewInt(1337),
-		HomesteadBlock:          big.NewInt(0),
-		EIP150Block:             big.NewInt(0),
-		EIP155Block:             big.NewInt(0),
-		EIP158Block:             big.NewInt(0),
-		ByzantiumBlock:          big.NewInt(0),
-		ConstantinopleBlock:     big.NewInt(0),
-		PetersburgBlock:         big.NewInt(0),
-		IstanbulBlock:           big.NewInt(0),
-		MuirGlacierBlock:        big.NewInt(0),
-		BerlinBlock:             big.NewInt(0),
-		LondonBlock:             big.NewInt(0),
-		TerminalTotalDifficulty: big.NewInt(0),
-		Ethash:                  new(params.EthashConfig),
-	}
-	genesis := &core.Genesis{
-		Config:    config,
-		Alloc:     core.GenesisAlloc{testAddr: {Balance: testBalance}},
-		ExtraData: []byte("test genesis"),
-		Timestamp: 9000,
-		BaseFee:   big.NewInt(params.InitialBaseFee),
-	}
-	generate := func(i int, g *core.BlockGen) {
-		g.OffsetTime(5)
-		g.SetExtra([]byte("test"))
-	}
-	generateFork := func(i int, g *core.BlockGen) {
-		g.OffsetTime(5)
-		g.SetExtra([]byte("testF"))
-	}
-	gblock := genesis.ToBlock(db)
-	engine := ethash.NewFaker()
-	blocks, _ := core.GenerateChain(config, gblock, engine, db, n, generate)
-	blocks = append([]*types.Block{gblock}, blocks...)
-	forkedBlocks, _ := core.GenerateChain(config, blocks[fork], engine, db, n-fork, generateFork)
-	return genesis, blocks, forkedBlocks
-}
-*/
func TestEth2AssembleBlock(t *testing.T) {
-	genesis, blocks := generateTestChain()
-	n, ethservice := startEthService(t, genesis, blocks[1:9])
+	genesis, blocks := generatePreMergeChain(10)
+	n, ethservice := startEthService(t, genesis, blocks)
	defer n.Close()

-	api := newConsensusAPI(ethservice)
+	api := NewConsensusAPI(ethservice, nil)
	signer := types.NewEIP155Signer(ethservice.BlockChain().Config().ChainID)
-	tx, err := types.SignTx(types.NewTransaction(0, blocks[8].Coinbase(), big.NewInt(1000), params.TxGas, big.NewInt(params.InitialBaseFee), nil), signer, testKey)
+	tx, err := types.SignTx(types.NewTransaction(uint64(10), blocks[9].Coinbase(), big.NewInt(1000), params.TxGas, big.NewInt(params.InitialBaseFee), nil), signer, testKey)
	if err != nil {
		t.Fatalf("error signing transaction, err=%v", err)
	}
	ethservice.TxPool().AddLocal(tx)
-	blockParams := assembleBlockParams{
-		ParentHash: blocks[8].ParentHash(),
-		Timestamp:  blocks[8].Time(),
+	blockParams := PayloadAttributesV1{
+		Timestamp: blocks[9].Time() + 5,
	}
-	execData, err := api.AssembleBlock(blockParams)
+	execData, err := api.assembleBlock(blocks[9].Hash(), &blockParams)
	if err != nil {
		t.Fatalf("error producing block, err=%v", err)
	}
	if len(execData.Transactions) != 1 {
		t.Fatalf("invalid number of transactions %d != 1", len(execData.Transactions))
	}
}

func TestEth2AssembleBlockWithAnotherBlocksTxs(t *testing.T) {
-	genesis, blocks := generateTestChain()
-	n, ethservice := startEthService(t, genesis, blocks[1:9])
+	genesis, blocks := generatePreMergeChain(10)
+	n, ethservice := startEthService(t, genesis, blocks[:9])
	defer n.Close()

-	api := newConsensusAPI(ethservice)
+	api := NewConsensusAPI(ethservice, nil)

	// Put the 10th block's tx in the pool and produce a new block
-	api.addBlockTxs(blocks[9])
-	blockParams := assembleBlockParams{
-		ParentHash: blocks[9].ParentHash(),
-		Timestamp:  blocks[9].Time(),
+	api.insertTransactions(blocks[9].Transactions())
+	blockParams := PayloadAttributesV1{
+		Timestamp: blocks[8].Time() + 5,
	}
-	execData, err := api.AssembleBlock(blockParams)
+	execData, err := api.assembleBlock(blocks[8].Hash(), &blockParams)
	if err != nil {
		t.Fatalf("error producing block, err=%v", err)
	}
	if len(execData.Transactions) != blocks[9].Transactions().Len() {
		t.Fatalf("invalid number of transactions %d != 1", len(execData.Transactions))
	}
}
// TODO (MariusVanDerWijden) reenable once engine api is updated to the latest spec func TestSetHeadBeforeTotalDifficulty(t *testing.T) {
/* genesis, blocks := generatePreMergeChain(10)
func TestEth2NewBlock(t *testing.T) { n, ethservice := startEthService(t, genesis, blocks)
genesis, blocks, forkedBlocks := generateTestChainWithFork(10, 4)
n, ethservice := startEthService(t, genesis, blocks[1:5])
defer n.Close() defer n.Close()
api := newConsensusAPI(ethservice) api := NewConsensusAPI(ethservice, nil)
for i := 5; i < 10; i++ { fcState := ForkchoiceStateV1{
p := executableData{ HeadBlockHash: blocks[5].Hash(),
ParentHash: ethservice.BlockChain().CurrentBlock().Hash(), SafeBlockHash: common.Hash{},
Miner: blocks[i].Coinbase(), FinalizedBlockHash: common.Hash{},
StateRoot: blocks[i].Root(),
GasLimit: blocks[i].GasLimit(),
GasUsed: blocks[i].GasUsed(),
Transactions: encodeTransactions(blocks[i].Transactions()),
ReceiptRoot: blocks[i].ReceiptHash(),
LogsBloom: blocks[i].Bloom().Bytes(),
BlockHash: blocks[i].Hash(),
Timestamp: blocks[i].Time(),
Number: uint64(i),
} }
success, err := api.NewBlock(p) if _, err := api.ForkchoiceUpdatedV1(fcState, nil); err == nil {
if err != nil || !success.Valid { t.Errorf("fork choice updated before total terminal difficulty should fail")
}
}
func TestEth2PrepareAndGetPayload(t *testing.T) {
genesis, blocks := generatePreMergeChain(10)
// We need to properly set the terminal total difficulty
genesis.Config.TerminalTotalDifficulty.Sub(genesis.Config.TerminalTotalDifficulty, blocks[9].Difficulty())
n, ethservice := startEthService(t, genesis, blocks[:9])
defer n.Close()
api := NewConsensusAPI(ethservice, nil)
// Put the 10th block's tx in the pool and produce a new block
api.insertTransactions(blocks[9].Transactions())
blockParams := PayloadAttributesV1{
Timestamp: blocks[8].Time() + 5,
}
fcState := ForkchoiceStateV1{
HeadBlockHash: blocks[8].Hash(),
SafeBlockHash: common.Hash{},
FinalizedBlockHash: common.Hash{},
}
_, err := api.ForkchoiceUpdatedV1(fcState, &blockParams)
if err != nil {
t.Fatalf("error preparing payload, err=%v", err)
}
payloadID := computePayloadId(fcState.HeadBlockHash, &blockParams)
execData, err := api.GetPayloadV1(hexutil.Bytes(payloadID))
if err != nil {
t.Fatalf("error getting payload, err=%v", err)
}
if len(execData.Transactions) != blocks[9].Transactions().Len() {
t.Fatalf("invalid number of transactions %d != 1", len(execData.Transactions))
}
}
func checkLogEvents(t *testing.T, logsCh <-chan []*types.Log, rmLogsCh <-chan core.RemovedLogsEvent, wantNew, wantRemoved int) {
t.Helper()
if len(logsCh) != wantNew {
t.Fatalf("wrong number of log events: got %d, want %d", len(logsCh), wantNew)
}
if len(rmLogsCh) != wantRemoved {
t.Fatalf("wrong number of removed log events: got %d, want %d", len(rmLogsCh), wantRemoved)
}
// Drain events.
for i := 0; i < len(logsCh); i++ {
<-logsCh
}
for i := 0; i < len(rmLogsCh); i++ {
<-rmLogsCh
}
}
func TestEth2NewBlock(t *testing.T) {
genesis, preMergeBlocks := generatePreMergeChain(10)
n, ethservice := startEthService(t, genesis, preMergeBlocks)
ethservice.Merger().ReachTTD()
defer n.Close()
var (
api = NewConsensusAPI(ethservice, nil)
parent = preMergeBlocks[len(preMergeBlocks)-1]
// This EVM code generates a log when the contract is created.
logCode = common.Hex2Bytes("60606040525b7f24ec1d3ff24c2f6ff210738839dbc339cd45a5294d85c79361016243157aae7b60405180905060405180910390a15b600a8060416000396000f360606040526008565b00")
)
// The event channels.
newLogCh := make(chan []*types.Log, 10)
rmLogsCh := make(chan core.RemovedLogsEvent, 10)
ethservice.BlockChain().SubscribeLogsEvent(newLogCh)
ethservice.BlockChain().SubscribeRemovedLogsEvent(rmLogsCh)
for i := 0; i < 10; i++ {
statedb, _ := ethservice.BlockChain().StateAt(parent.Root())
nonce := statedb.GetNonce(testAddr)
tx, _ := types.SignTx(types.NewContractCreation(nonce, new(big.Int), 1000000, big.NewInt(2*params.InitialBaseFee), logCode), types.LatestSigner(ethservice.BlockChain().Config()), testKey)
ethservice.TxPool().AddLocal(tx)
execData, err := api.assembleBlock(parent.Hash(), &PayloadAttributesV1{
Timestamp: parent.Time() + 5,
})
if err != nil {
t.Fatalf("Failed to create the executable data %v", err)
}
block, err := ExecutableDataToBlock(*execData)
if err != nil {
t.Fatalf("Failed to convert executable data to block %v", err)
}
newResp, err := api.ExecutePayloadV1(*execData)
if err != nil || newResp.Status != "VALID" {
t.Fatalf("Failed to insert block: %v", err) t.Fatalf("Failed to insert block: %v", err)
} }
if ethservice.BlockChain().CurrentBlock().NumberU64() != block.NumberU64()-1 {
t.Fatalf("Chain head shouldn't be updated")
}
checkLogEvents(t, newLogCh, rmLogsCh, 0, 0)
fcState := ForkchoiceStateV1{
HeadBlockHash: block.Hash(),
SafeBlockHash: block.Hash(),
FinalizedBlockHash: block.Hash(),
}
if _, err := api.ForkchoiceUpdatedV1(fcState, nil); err != nil {
t.Fatalf("Failed to insert block: %v", err)
}
if ethservice.BlockChain().CurrentBlock().NumberU64() != block.NumberU64() {
t.Fatalf("Chain head should be updated")
}
checkLogEvents(t, newLogCh, rmLogsCh, 1, 0)
parent = block
} }
exp := ethservice.BlockChain().CurrentBlock().Hash() // Introduce fork chain
var (
// Introduce the fork point. head = ethservice.BlockChain().CurrentBlock().NumberU64()
lastBlockNum := blocks[4].Number() )
lastBlock := blocks[4] parent = preMergeBlocks[len(preMergeBlocks)-1]
for i := 0; i < 4; i++ { for i := 0; i < 10; i++ {
lastBlockNum.Add(lastBlockNum, big.NewInt(1)) execData, err := api.assembleBlock(parent.Hash(), &PayloadAttributesV1{
p := executableData{ Timestamp: parent.Time() + 6,
ParentHash: lastBlock.Hash(), })
Miner: forkedBlocks[i].Coinbase(),
StateRoot: forkedBlocks[i].Root(),
Number: lastBlockNum.Uint64(),
GasLimit: forkedBlocks[i].GasLimit(),
GasUsed: forkedBlocks[i].GasUsed(),
Transactions: encodeTransactions(blocks[i].Transactions()),
ReceiptRoot: forkedBlocks[i].ReceiptHash(),
LogsBloom: forkedBlocks[i].Bloom().Bytes(),
BlockHash: forkedBlocks[i].Hash(),
Timestamp: forkedBlocks[i].Time(),
}
success, err := api.NewBlock(p)
if err != nil || !success.Valid {
t.Fatalf("Failed to insert forked block #%d: %v", i, err)
}
lastBlock, err = insertBlockParamsToBlock(ethservice.BlockChain().Config(), lastBlock.Header(), p)
if err != nil { if err != nil {
t.Fatal(err) t.Fatalf("Failed to create the executable data %v", err)
}
block, err := ExecutableDataToBlock(*execData)
if err != nil {
t.Fatalf("Failed to convert executable data to block %v", err)
}
newResp, err := api.ExecutePayloadV1(*execData)
if err != nil || newResp.Status != "VALID" {
t.Fatalf("Failed to insert block: %v", err)
}
if ethservice.BlockChain().CurrentBlock().NumberU64() != head {
t.Fatalf("Chain head shouldn't be updated")
}
fcState := ForkchoiceStateV1{
HeadBlockHash: block.Hash(),
SafeBlockHash: block.Hash(),
FinalizedBlockHash: block.Hash(),
}
if _, err := api.ForkchoiceUpdatedV1(fcState, nil); err != nil {
t.Fatalf("Failed to insert block: %v", err)
}
if ethservice.BlockChain().CurrentBlock().NumberU64() != block.NumberU64() {
t.Fatalf("Chain head should be updated")
}
parent, head = block, block.NumberU64()
} }
} }
if ethservice.BlockChain().CurrentBlock().Hash() != exp { func TestEth2DeepReorg(t *testing.T) {
t.Fatalf("Wrong head after inserting fork %x != %x", exp, ethservice.BlockChain().CurrentBlock().Hash()) // TODO (MariusVanDerWijden) TestEth2DeepReorg is currently broken, because it tries to reorg
// before the totalTerminalDifficulty threshold
/*
genesis, preMergeBlocks := generatePreMergeChain(core.TriesInMemory * 2)
n, ethservice := startEthService(t, genesis, preMergeBlocks)
defer n.Close()
var (
api = NewConsensusAPI(ethservice, nil)
parent = preMergeBlocks[len(preMergeBlocks)-core.TriesInMemory-1]
head = ethservice.BlockChain().CurrentBlock().NumberU64()
)
if ethservice.BlockChain().HasBlockAndState(parent.Hash(), parent.NumberU64()) {
t.Errorf("Block %d not pruned", parent.NumberU64())
} }
for i := 0; i < 10; i++ {
execData, err := api.assembleBlock(AssembleBlockParams{
ParentHash: parent.Hash(),
Timestamp: parent.Time() + 5,
})
if err != nil {
t.Fatalf("Failed to create the executable data %v", err)
}
block, err := ExecutableDataToBlock(ethservice.BlockChain().Config(), parent.Header(), *execData)
if err != nil {
t.Fatalf("Failed to convert executable data to block %v", err)
}
newResp, err := api.ExecutePayload(*execData)
if err != nil || newResp.Status != "VALID" {
t.Fatalf("Failed to insert block: %v", err)
}
if ethservice.BlockChain().CurrentBlock().NumberU64() != head {
t.Fatalf("Chain head shouldn't be updated")
}
if err := api.setHead(block.Hash()); err != nil {
t.Fatalf("Failed to set head: %v", err)
}
if ethservice.BlockChain().CurrentBlock().NumberU64() != block.NumberU64() {
t.Fatalf("Chain head should be updated")
}
parent, head = block, block.NumberU64()
	}
	*/
}
// startEthService creates a full node instance for testing.
func startEthService(t *testing.T, genesis *core.Genesis, blocks []*types.Block) (*node.Node, *eth.Ethereum) {
@ -232,7 +338,7 @@ func startEthService(t *testing.T, genesis *core.Genesis, blocks []*types.Block)
		t.Fatal("can't create node:", err)
	}
	ethcfg := &ethconfig.Config{Genesis: genesis, Ethash: ethash.Config{PowMode: ethash.ModeFake}, TrieTimeout: time.Minute, TrieDirtyCache: 256, TrieCleanCache: 256}
	ethservice, err := eth.New(n, ethcfg)
	if err != nil {
		t.Fatal("can't create eth service:", err)
@ -245,6 +351,69 @@ func startEthService(t *testing.T, genesis *core.Genesis, blocks []*types.Block)
		t.Fatal("can't import test blocks:", err)
	}
	ethservice.SetEtherbase(testAddr)
	ethservice.SetSynced()
	return n, ethservice
}
func TestFullAPI(t *testing.T) {
genesis, preMergeBlocks := generatePreMergeChain(10)
n, ethservice := startEthService(t, genesis, preMergeBlocks)
ethservice.Merger().ReachTTD()
defer n.Close()
var (
api = NewConsensusAPI(ethservice, nil)
parent = ethservice.BlockChain().CurrentBlock()
// This EVM code generates a log when the contract is created.
logCode = common.Hex2Bytes("60606040525b7f24ec1d3ff24c2f6ff210738839dbc339cd45a5294d85c79361016243157aae7b60405180905060405180910390a15b600a8060416000396000f360606040526008565b00")
)
for i := 0; i < 10; i++ {
statedb, _ := ethservice.BlockChain().StateAt(parent.Root())
nonce := statedb.GetNonce(testAddr)
tx, _ := types.SignTx(types.NewContractCreation(nonce, new(big.Int), 1000000, big.NewInt(2*params.InitialBaseFee), logCode), types.LatestSigner(ethservice.BlockChain().Config()), testKey)
ethservice.TxPool().AddLocal(tx)
params := PayloadAttributesV1{
Timestamp: parent.Time() + 1,
Random: crypto.Keccak256Hash([]byte{byte(i)}),
SuggestedFeeRecipient: parent.Coinbase(),
}
fcState := ForkchoiceStateV1{
HeadBlockHash: parent.Hash(),
SafeBlockHash: common.Hash{},
FinalizedBlockHash: common.Hash{},
}
resp, err := api.ForkchoiceUpdatedV1(fcState, &params)
if err != nil {
t.Fatalf("error preparing payload, err=%v", err)
}
if resp.Status != SUCCESS.Status {
t.Fatalf("error preparing payload, invalid status: %v", resp.Status)
}
payloadID := computePayloadId(parent.Hash(), &params)
payload, err := api.GetPayloadV1(hexutil.Bytes(payloadID))
if err != nil {
t.Fatalf("can't get payload: %v", err)
}
execResp, err := api.ExecutePayloadV1(*payload)
if err != nil {
t.Fatalf("can't execute payload: %v", err)
}
if execResp.Status != VALID.Status {
t.Fatalf("invalid status: %v", execResp.Status)
}
fcState = ForkchoiceStateV1{
HeadBlockHash: payload.BlockHash,
SafeBlockHash: payload.ParentHash,
FinalizedBlockHash: payload.ParentHash,
}
if _, err := api.ForkchoiceUpdatedV1(fcState, nil); err != nil {
t.Fatalf("Failed to insert block: %v", err)
}
if ethservice.BlockChain().CurrentBlock().NumberU64() != payload.Number {
t.Fatalf("Chain head should be updated")
}
parent = ethservice.BlockChain().CurrentBlock()
}
}
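Read as a sequence, TestFullAPI exercises the full engine-API block-production loop: forkchoiceUpdated with attributes, getPayload, executePayload, then forkchoiceUpdated again to make the block canonical. The sketch below condenses that loop into one helper; `produceOneBlock` is a hypothetical name, the code assumes it sits next to the test in package catalyst, and it elides the status checks the test performs.

```go
// produceOneBlock drives a single round of consensus-led block production:
// announce the head with payload attributes, fetch the assembled payload,
// execute it, then move the fork choice onto it. Illustrative helper only.
func produceOneBlock(api *ConsensusAPI, head *types.Block) (*ExecutableDataV1, error) {
	attrs := &PayloadAttributesV1{
		Timestamp:             head.Time() + 1,
		Random:                common.Hash{},
		SuggestedFeeRecipient: head.Coinbase(),
	}
	// Step 1: forkchoiceUpdated with attributes kicks off payload building.
	fc := ForkchoiceStateV1{HeadBlockHash: head.Hash()}
	if _, err := api.ForkchoiceUpdatedV1(fc, attrs); err != nil {
		return nil, err
	}
	// Step 2: the payload id is derived deterministically from parent+attrs.
	id := computePayloadId(head.Hash(), attrs)
	payload, err := api.GetPayloadV1(hexutil.Bytes(id))
	if err != nil {
		return nil, err
	}
	// Step 3: execute the payload, then make it the new head.
	if _, err := api.ExecutePayloadV1(*payload); err != nil {
		return nil, err
	}
	fc.HeadBlockHash = payload.BlockHash
	_, err = api.ForkchoiceUpdatedV1(fc, nil)
	return payload, err
}
```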


@ -17,37 +17,43 @@
package catalyst

import (
	"math/big"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/common/hexutil"
)

//go:generate go run github.com/fjl/gencodec -type PayloadAttributesV1 -field-override payloadAttributesMarshaling -out gen_blockparams.go

// Structure described at https://github.com/ethereum/execution-apis/pull/74
type PayloadAttributesV1 struct {
	Timestamp             uint64         `json:"timestamp" gencodec:"required"`
	Random                common.Hash    `json:"random" gencodec:"required"`
	SuggestedFeeRecipient common.Address `json:"suggestedFeeRecipient" gencodec:"required"`
}

// JSON type overrides for PayloadAttributesV1.
type payloadAttributesMarshaling struct {
	Timestamp hexutil.Uint64
}

//go:generate go run github.com/fjl/gencodec -type ExecutableDataV1 -field-override executableDataMarshaling -out gen_ed.go

// Structure described at https://github.com/ethereum/execution-apis/src/engine/specification.md
type ExecutableDataV1 struct {
	ParentHash    common.Hash    `json:"parentHash" gencodec:"required"`
	FeeRecipient  common.Address `json:"feeRecipient" gencodec:"required"`
	StateRoot     common.Hash    `json:"stateRoot" gencodec:"required"`
	ReceiptsRoot  common.Hash    `json:"receiptsRoot" gencodec:"required"`
	LogsBloom     []byte         `json:"logsBloom" gencodec:"required"`
	Random        common.Hash    `json:"random" gencodec:"required"`
	Number        uint64         `json:"blockNumber" gencodec:"required"`
	GasLimit      uint64         `json:"gasLimit" gencodec:"required"`
	GasUsed       uint64         `json:"gasUsed" gencodec:"required"`
	Timestamp     uint64         `json:"timestamp" gencodec:"required"`
	ExtraData     []byte         `json:"extraData" gencodec:"required"`
	BaseFeePerGas *big.Int       `json:"baseFeePerGas" gencodec:"required"`
	BlockHash     common.Hash    `json:"blockHash" gencodec:"required"`
	Transactions  [][]byte       `json:"transactions" gencodec:"required"`
}
@ -57,14 +63,52 @@ type executableDataMarshaling struct {
	GasLimit      hexutil.Uint64
	GasUsed       hexutil.Uint64
	Timestamp     hexutil.Uint64
	BaseFeePerGas *hexutil.Big
	ExtraData     hexutil.Bytes
	LogsBloom     hexutil.Bytes
	Transactions  []hexutil.Bytes
}

//go:generate go run github.com/fjl/gencodec -type PayloadResponse -field-override payloadResponseMarshaling -out gen_payload.go

type PayloadResponse struct {
	PayloadID uint64 `json:"payloadId"`
}

// JSON type overrides for payloadResponse.
type payloadResponseMarshaling struct {
	PayloadID hexutil.Uint64
}

type NewBlockResponse struct {
	Valid bool `json:"valid"`
}

type GenericResponse struct {
	Success bool `json:"success"`
}
type GenericStringResponse struct {
Status string `json:"status"`
}
type ExecutePayloadResponse struct {
Status string `json:"status"`
LatestValidHash common.Hash `json:"latestValidHash"`
}
type ConsensusValidatedParams struct {
BlockHash common.Hash `json:"blockHash"`
Status string `json:"status"`
}
type ForkChoiceResponse struct {
Status string `json:"status"`
PayloadID *hexutil.Bytes `json:"payloadId"`
}
type ForkchoiceStateV1 struct {
HeadBlockHash common.Hash `json:"headBlockHash"`
SafeBlockHash common.Hash `json:"safeBlockHash"`
FinalizedBlockHash common.Hash `json:"finalizedBlockHash"`
}
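For readers who want to see what these structs look like on the wire: the gencodec overrides turn every integer quantity into a 0x-prefixed hex string. The snippet below decodes a hand-written PayloadAttributesV1 body through a mirror struct; the mirror type is illustrative only, mimicking the generated code above rather than importing it.

```go
package main

import (
	"encoding/json"
	"fmt"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/common/hexutil"
)

// payloadAttributesJSON mirrors the JSON shape produced by the generated
// PayloadAttributesV1 marshaller; it exists only for this demonstration.
type payloadAttributesJSON struct {
	Timestamp             hexutil.Uint64 `json:"timestamp"`
	Random                common.Hash    `json:"random"`
	SuggestedFeeRecipient common.Address `json:"suggestedFeeRecipient"`
}

func main() {
	raw := `{
		"timestamp": "0x61bc3372",
		"random": "0x0000000000000000000000000000000000000000000000000000000000000000",
		"suggestedFeeRecipient": "0x0000000000000000000000000000000000000000"
	}`
	var p payloadAttributesJSON
	if err := json.Unmarshal([]byte(raw), &p); err != nil {
		panic(err)
	}
	// The 0x-hex quantity decodes into a native uint64.
	fmt.Println(uint64(p.Timestamp), p.SuggestedFeeRecipient.Hex())
}
```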


@ -10,37 +10,44 @@ import (
	"github.com/ethereum/go-ethereum/common/hexutil"
)

var _ = (*payloadAttributesMarshaling)(nil)

// MarshalJSON marshals as JSON.
func (p PayloadAttributesV1) MarshalJSON() ([]byte, error) {
	type PayloadAttributesV1 struct {
		Timestamp             hexutil.Uint64 `json:"timestamp" gencodec:"required"`
		Random                common.Hash    `json:"random" gencodec:"required"`
		SuggestedFeeRecipient common.Address `json:"suggestedFeeRecipient" gencodec:"required"`
	}
	var enc PayloadAttributesV1
	enc.Timestamp = hexutil.Uint64(p.Timestamp)
	enc.Random = p.Random
	enc.SuggestedFeeRecipient = p.SuggestedFeeRecipient
	return json.Marshal(&enc)
}

// UnmarshalJSON unmarshals from JSON.
func (p *PayloadAttributesV1) UnmarshalJSON(input []byte) error {
	type PayloadAttributesV1 struct {
		Timestamp             *hexutil.Uint64 `json:"timestamp" gencodec:"required"`
		Random                *common.Hash    `json:"random" gencodec:"required"`
		SuggestedFeeRecipient *common.Address `json:"suggestedFeeRecipient" gencodec:"required"`
	}
	var dec PayloadAttributesV1
	if err := json.Unmarshal(input, &dec); err != nil {
		return err
	}
	if dec.Timestamp == nil {
		return errors.New("missing required field 'timestamp' for PayloadAttributesV1")
	}
	p.Timestamp = uint64(*dec.Timestamp)
if dec.Random == nil {
return errors.New("missing required field 'random' for PayloadAttributesV1")
}
p.Random = *dec.Random
if dec.SuggestedFeeRecipient == nil {
return errors.New("missing required field 'suggestedFeeRecipient' for PayloadAttributesV1")
}
p.SuggestedFeeRecipient = *dec.SuggestedFeeRecipient
	return nil
}
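One behaviour worth calling out in this generated decoder: every field tagged gencodec:"required" is enforced at decode time, so a consensus client that omits a field gets a descriptive error rather than a silent zero value. A minimal sketch, assuming the post-merge catalyst package is importable and that the error text matches the generated messages above:

```go
package main

import (
	"encoding/json"
	"fmt"

	"github.com/ethereum/go-ethereum/eth/catalyst"
)

func main() {
	var attrs catalyst.PayloadAttributesV1
	err := json.Unmarshal([]byte(`{"timestamp":"0x10"}`), &attrs)
	// Expected: missing required field 'random' for PayloadAttributesV1
	fmt.Println(err)
}
```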


@ -5,6 +5,7 @@ package catalyst
import (
	"encoding/json"
	"errors"
	"math/big"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/common/hexutil"
@ -13,31 +14,37 @@ import (

var _ = (*executableDataMarshaling)(nil)

// MarshalJSON marshals as JSON.
func (e ExecutableDataV1) MarshalJSON() ([]byte, error) {
	type ExecutableDataV1 struct {
		ParentHash    common.Hash     `json:"parentHash" gencodec:"required"`
		FeeRecipient  common.Address  `json:"feeRecipient" gencodec:"required"`
		StateRoot     common.Hash     `json:"stateRoot" gencodec:"required"`
		ReceiptsRoot  common.Hash     `json:"receiptsRoot" gencodec:"required"`
		LogsBloom     hexutil.Bytes   `json:"logsBloom" gencodec:"required"`
		Random        common.Hash     `json:"random" gencodec:"required"`
		Number        hexutil.Uint64  `json:"blockNumber" gencodec:"required"`
		GasLimit      hexutil.Uint64  `json:"gasLimit" gencodec:"required"`
		GasUsed       hexutil.Uint64  `json:"gasUsed" gencodec:"required"`
		Timestamp     hexutil.Uint64  `json:"timestamp" gencodec:"required"`
		ExtraData     hexutil.Bytes   `json:"extraData" gencodec:"required"`
		BaseFeePerGas *hexutil.Big    `json:"baseFeePerGas" gencodec:"required"`
		BlockHash     common.Hash     `json:"blockHash" gencodec:"required"`
		Transactions  []hexutil.Bytes `json:"transactions" gencodec:"required"`
	}
	var enc ExecutableDataV1
	enc.ParentHash = e.ParentHash
	enc.FeeRecipient = e.FeeRecipient
	enc.StateRoot = e.StateRoot
	enc.ReceiptsRoot = e.ReceiptsRoot
	enc.LogsBloom = e.LogsBloom
	enc.Random = e.Random
	enc.Number = hexutil.Uint64(e.Number)
	enc.GasLimit = hexutil.Uint64(e.GasLimit)
	enc.GasUsed = hexutil.Uint64(e.GasUsed)
	enc.Timestamp = hexutil.Uint64(e.Timestamp)
	enc.ExtraData = e.ExtraData
	enc.BaseFeePerGas = (*hexutil.Big)(e.BaseFeePerGas)
	enc.BlockHash = e.BlockHash
	if e.Transactions != nil {
		enc.Transactions = make([]hexutil.Bytes, len(e.Transactions))
		for k, v := range e.Transactions {
@ -48,66 +55,81 @@ func (e executableData) MarshalJSON() ([]byte, error) {
}

// UnmarshalJSON unmarshals from JSON.
func (e *ExecutableDataV1) UnmarshalJSON(input []byte) error {
	type ExecutableDataV1 struct {
		ParentHash    *common.Hash    `json:"parentHash" gencodec:"required"`
		FeeRecipient  *common.Address `json:"feeRecipient" gencodec:"required"`
		StateRoot     *common.Hash    `json:"stateRoot" gencodec:"required"`
		ReceiptsRoot  *common.Hash    `json:"receiptsRoot" gencodec:"required"`
		LogsBloom     *hexutil.Bytes  `json:"logsBloom" gencodec:"required"`
		Random        *common.Hash    `json:"random" gencodec:"required"`
		Number        *hexutil.Uint64 `json:"blockNumber" gencodec:"required"`
		GasLimit      *hexutil.Uint64 `json:"gasLimit" gencodec:"required"`
		GasUsed       *hexutil.Uint64 `json:"gasUsed" gencodec:"required"`
		Timestamp     *hexutil.Uint64 `json:"timestamp" gencodec:"required"`
		ExtraData     *hexutil.Bytes  `json:"extraData" gencodec:"required"`
		BaseFeePerGas *hexutil.Big    `json:"baseFeePerGas" gencodec:"required"`
		BlockHash     *common.Hash    `json:"blockHash" gencodec:"required"`
		Transactions  []hexutil.Bytes `json:"transactions" gencodec:"required"`
	}
	var dec ExecutableDataV1
	if err := json.Unmarshal(input, &dec); err != nil {
		return err
	}
	if dec.ParentHash == nil {
		return errors.New("missing required field 'parentHash' for ExecutableDataV1")
	}
	e.ParentHash = *dec.ParentHash
	if dec.FeeRecipient == nil {
		return errors.New("missing required field 'feeRecipient' for ExecutableDataV1")
	}
	e.FeeRecipient = *dec.FeeRecipient
	if dec.StateRoot == nil {
		return errors.New("missing required field 'stateRoot' for ExecutableDataV1")
	}
	e.StateRoot = *dec.StateRoot
if dec.ReceiptsRoot == nil {
return errors.New("missing required field 'receiptsRoot' for ExecutableDataV1")
}
e.ReceiptsRoot = *dec.ReceiptsRoot
if dec.LogsBloom == nil {
return errors.New("missing required field 'logsBloom' for ExecutableDataV1")
}
e.LogsBloom = *dec.LogsBloom
if dec.Random == nil {
return errors.New("missing required field 'random' for ExecutableDataV1")
}
e.Random = *dec.Random
	if dec.Number == nil {
		return errors.New("missing required field 'blockNumber' for ExecutableDataV1")
	}
	e.Number = uint64(*dec.Number)
	if dec.GasLimit == nil {
		return errors.New("missing required field 'gasLimit' for ExecutableDataV1")
	}
	e.GasLimit = uint64(*dec.GasLimit)
	if dec.GasUsed == nil {
		return errors.New("missing required field 'gasUsed' for ExecutableDataV1")
	}
	e.GasUsed = uint64(*dec.GasUsed)
	if dec.Timestamp == nil {
		return errors.New("missing required field 'timestamp' for ExecutableDataV1")
	}
	e.Timestamp = uint64(*dec.Timestamp)
	if dec.ExtraData == nil {
		return errors.New("missing required field 'extraData' for ExecutableDataV1")
	}
	e.ExtraData = *dec.ExtraData
	if dec.BaseFeePerGas == nil {
		return errors.New("missing required field 'baseFeePerGas' for ExecutableDataV1")
	}
	e.BaseFeePerGas = (*big.Int)(dec.BaseFeePerGas)
if dec.BlockHash == nil {
return errors.New("missing required field 'blockHash' for ExecutableDataV1")
}
e.BlockHash = *dec.BlockHash
	if dec.Transactions == nil {
		return errors.New("missing required field 'transactions' for ExecutableDataV1")
	}
	e.Transactions = make([][]byte, len(dec.Transactions))
	for k, v := range dec.Transactions {


@ -0,0 +1,36 @@
// Code generated by github.com/fjl/gencodec. DO NOT EDIT.
package catalyst
import (
"encoding/json"
"github.com/ethereum/go-ethereum/common/hexutil"
)
var _ = (*payloadResponseMarshaling)(nil)
// MarshalJSON marshals as JSON.
func (p PayloadResponse) MarshalJSON() ([]byte, error) {
type PayloadResponse struct {
PayloadID hexutil.Uint64 `json:"payloadId"`
}
var enc PayloadResponse
enc.PayloadID = hexutil.Uint64(p.PayloadID)
return json.Marshal(&enc)
}
// UnmarshalJSON unmarshals from JSON.
func (p *PayloadResponse) UnmarshalJSON(input []byte) error {
type PayloadResponse struct {
PayloadID *hexutil.Uint64 `json:"payloadId"`
}
var dec PayloadResponse
if err := json.Unmarshal(input, &dec); err != nil {
return err
}
if dec.PayloadID != nil {
p.PayloadID = uint64(*dec.PayloadID)
}
return nil
}

File diff suppressed because it is too large

File diff suppressed because it is too large

eth/downloader/fetchers.go Normal file

@ -0,0 +1,115 @@
// Copyright 2021 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package downloader
import (
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/eth/protocols/eth"
)
// fetchHeadersByHash is a blocking version of Peer.RequestHeadersByHash which
// handles all the cancellation, interruption and timeout mechanisms of a data
// retrieval to allow blocking API calls.
func (d *Downloader) fetchHeadersByHash(p *peerConnection, hash common.Hash, amount int, skip int, reverse bool) ([]*types.Header, []common.Hash, error) {
// Create the response sink and send the network request
start := time.Now()
resCh := make(chan *eth.Response)
req, err := p.peer.RequestHeadersByHash(hash, amount, skip, reverse, resCh)
if err != nil {
return nil, nil, err
}
defer req.Close()
// Wait until the response arrives, the request is cancelled or times out
ttl := d.peers.rates.TargetTimeout()
timeoutTimer := time.NewTimer(ttl)
defer timeoutTimer.Stop()
select {
case <-d.cancelCh:
return nil, nil, errCanceled
case <-timeoutTimer.C:
// Header retrieval timed out, update the metrics
p.log.Debug("Header request timed out", "elapsed", ttl)
headerTimeoutMeter.Mark(1)
return nil, nil, errTimeout
case res := <-resCh:
// Headers successfully retrieved, update the metrics
headerReqTimer.Update(time.Since(start))
headerInMeter.Mark(int64(len(*res.Res.(*eth.BlockHeadersPacket))))
// Don't reject the packet even if it turns out to be bad, downloader will
// disconnect the peer on its own terms. Simply deliver the headers to
// be processed by the caller
res.Done <- nil
return *res.Res.(*eth.BlockHeadersPacket), res.Meta.([]common.Hash), nil
}
}
// fetchHeadersByNumber is a blocking version of Peer.RequestHeadersByNumber which
// handles all the cancellation, interruption and timeout mechanisms of a data
// retrieval to allow blocking API calls.
func (d *Downloader) fetchHeadersByNumber(p *peerConnection, number uint64, amount int, skip int, reverse bool) ([]*types.Header, []common.Hash, error) {
// Create the response sink and send the network request
start := time.Now()
resCh := make(chan *eth.Response)
req, err := p.peer.RequestHeadersByNumber(number, amount, skip, reverse, resCh)
if err != nil {
return nil, nil, err
}
defer req.Close()
// Wait until the response arrives, the request is cancelled or times out
ttl := d.peers.rates.TargetTimeout()
timeoutTimer := time.NewTimer(ttl)
defer timeoutTimer.Stop()
select {
case <-d.cancelCh:
return nil, nil, errCanceled
case <-timeoutTimer.C:
// Header retrieval timed out, update the metrics
p.log.Debug("Header request timed out", "elapsed", ttl)
headerTimeoutMeter.Mark(1)
return nil, nil, errTimeout
case res := <-resCh:
// Headers successfully retrieved, update the metrics
headerReqTimer.Update(time.Since(start))
headerInMeter.Mark(int64(len(*res.Res.(*eth.BlockHeadersPacket))))
// Don't reject the packet even if it turns out to be bad, downloader will
// disconnect the peer on its own terms. Simply deliver the headers to
// be processed by the caller
res.Done <- nil
return *res.Res.(*eth.BlockHeadersPacket), res.Meta.([]common.Hash), nil
}
}
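Both helpers above follow the same shape: issue a request, then block until exactly one of cancellation, timeout, or delivery fires. A minimal sketch of that pattern is below; `awaitResponse` is a hypothetical name, and the code assumes it lives in package downloader so `errCanceled` and `errTimeout` resolve.

```go
// awaitResponse blocks until the sync is cancelled, the request times out,
// or a response arrives, mirroring the select used by both fetchers above.
func awaitResponse(cancel <-chan struct{}, ttl time.Duration, resCh <-chan *eth.Response) (*eth.Response, error) {
	timeout := time.NewTimer(ttl)
	defer timeout.Stop()

	select {
	case <-cancel:
		return nil, errCanceled
	case <-timeout.C:
		return nil, errTimeout
	case res := <-resCh:
		res.Done <- nil // unblock the dispatcher; validity is judged by the caller
		return res, nil
	}
}
```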


@ -0,0 +1,381 @@
// Copyright 2021 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package downloader
import (
"errors"
"sort"
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/prque"
"github.com/ethereum/go-ethereum/eth/protocols/eth"
"github.com/ethereum/go-ethereum/log"
)
// timeoutGracePeriod is the amount of time to allow for a peer to deliver a
// response to a locally already timed out request. Timeouts are not penalized
// as a peer might be temporarily overloaded, however, they still must reply
// to each request. Failing to do so is considered a protocol violation.
var timeoutGracePeriod = 2 * time.Minute
// typedQueue is an interface defining the adaptor needed to translate the type
// specific downloader/queue schedulers into the type-agnostic general concurrent
// fetcher algorithm calls.
type typedQueue interface {
// waker returns a notification channel that gets pinged in case more fetches
// have been queued up, so the fetcher might assign it to idle peers.
waker() chan bool
// pending returns the number of wrapped items that are currently queued for
// fetching by the concurrent downloader.
pending() int
// capacity is responsible for calculating how many items of the abstracted
// type a particular peer is estimated to be able to retrieve within the
// allotted round trip time.
capacity(peer *peerConnection, rtt time.Duration) int
// updateCapacity is responsible for updating how many items of the abstracted
// type a particular peer is estimated to be able to retrieve in a unit time.
updateCapacity(peer *peerConnection, items int, elapsed time.Duration)
// reserve is responsible for allocating a requested number of pending items
// from the download queue to the specified peer.
reserve(peer *peerConnection, items int) (*fetchRequest, bool, bool)
// unreserve is responsible for removing the current retrieval allocation
// assigned to a specific peer and placing it back into the pool to allow
// reassigning to some other peer.
unreserve(peer string) int
// request is responsible for converting a generic fetch request into a typed
// one and sending it to the remote peer for fulfillment.
request(peer *peerConnection, req *fetchRequest, resCh chan *eth.Response) (*eth.Request, error)
// deliver is responsible for taking a generic response packet from the
// concurrent fetcher, unpacking the type specific data and delivering
// it to the downloader's queue.
deliver(peer *peerConnection, packet *eth.Response) (int, error)
}
// concurrentFetch iteratively downloads scheduled block parts, taking available
// peers, reserving a chunk of fetch requests for each and waiting for delivery
// or timeouts.
func (d *Downloader) concurrentFetch(queue typedQueue) error {
// Create a delivery channel to accept responses from all peers
responses := make(chan *eth.Response)
// Track the currently active requests and their timeout order
pending := make(map[string]*eth.Request)
defer func() {
// Abort all requests on sync cycle cancellation. The requests may still
// be fulfilled by the remote side, but the dispatcher will not wait to
// deliver them since nobody's going to be listening.
for _, req := range pending {
req.Close()
}
}()
ordering := make(map[*eth.Request]int)
timeouts := prque.New(func(data interface{}, index int) {
ordering[data.(*eth.Request)] = index
})
timeout := time.NewTimer(0)
if !timeout.Stop() {
<-timeout.C
}
defer timeout.Stop()
// Track the timed-out but not-yet-answered requests separately. We want to
// keep tracking which peers are busy (potentially overloaded), so removing
// all trace of a timed out request is not good. We also can't just cancel
// the pending request altogether as that would prevent a late response from
// being delivered, thus never unblocking the peer.
stales := make(map[string]*eth.Request)
defer func() {
// Abort all requests on sync cycle cancellation. The requests may still
// be fulfilled by the remote side, but the dispatcher will not wait to
// deliver them since nobody's going to be listening.
for _, req := range stales {
req.Close()
}
}()
// Subscribe to peer lifecycle events to schedule tasks to new joiners and
// reschedule tasks upon disconnections. We don't care which event happened
// for simplicity, so just use a single channel.
peering := make(chan *peeringEvent, 64) // arbitrary buffer, just some burst protection
peeringSub := d.peers.SubscribeEvents(peering)
defer peeringSub.Unsubscribe()
// Prepare the queue and fetch block parts until the block header fetcher's done
finished := false
for {
// Short circuit if we lost all our peers
if d.peers.Len() == 0 {
return errNoPeers
}
// If there's nothing more to fetch, wait or terminate
if queue.pending() == 0 {
if len(pending) == 0 && finished {
return nil
}
} else {
// Send a download request to all idle peers, until throttled
var (
idles []*peerConnection
caps []int
)
for _, peer := range d.peers.AllPeers() {
pending, stale := pending[peer.id], stales[peer.id]
if pending == nil && stale == nil {
idles = append(idles, peer)
caps = append(caps, queue.capacity(peer, time.Second))
} else if stale != nil {
if waited := time.Since(stale.Sent); waited > timeoutGracePeriod {
// Request has been in flight longer than the grace period
// permitted it, consider the peer malicious attempting to
// stall the sync.
peer.log.Warn("Peer stalling, dropping", "waited", common.PrettyDuration(waited))
d.dropPeer(peer.id)
}
}
}
sort.Sort(&peerCapacitySort{idles, caps})
var (
progressed bool
throttled bool
queued = queue.pending()
)
for _, peer := range idles {
// Short circuit if throttling activated or there are no more
// queued tasks to be retrieved
if throttled {
break
}
if queued = queue.pending(); queued == 0 {
break
}
// Reserve a chunk of fetches for a peer. A nil can mean either that
// no more headers are available, or that the peer is known not to
// have them.
request, progress, throttle := queue.reserve(peer, queue.capacity(peer, d.peers.rates.TargetRoundTrip()))
if progress {
progressed = true
}
if throttle {
throttled = true
throttleCounter.Inc(1)
}
if request == nil {
continue
}
// Fetch the chunk and make sure any errors return the hashes to the queue
req, err := queue.request(peer, request, responses)
if err != nil {
// Sending the request failed, which generally means the peer
// was disconnected in between assignment and network send.
// Although all peer removal operations return allocated tasks
// to the queue, that is async, and we can do better here by
// immediately pushing the unfulfilled requests.
queue.unreserve(peer.id) // TODO(karalabe): This needs a non-expiration method
continue
}
pending[peer.id] = req
ttl := d.peers.rates.TargetTimeout()
ordering[req] = timeouts.Size()
timeouts.Push(req, -time.Now().Add(ttl).UnixNano())
if timeouts.Size() == 1 {
timeout.Reset(ttl)
}
}
// Make sure that we have peers available for fetching. If all peers have been tried
// and all failed throw an error
if !progressed && !throttled && len(pending) == 0 && len(idles) == d.peers.Len() && queued > 0 {
return errPeersUnavailable
}
}
// Wait for something to happen
select {
case <-d.cancelCh:
// If sync was cancelled, tear down the parallel retriever. Pending
// requests will be cancelled locally, and the remote responses will
// be dropped when they arrive
return errCanceled
case event := <-peering:
// A peer joined or left, the tasks queue and allocations need to be
// checked for potential assignment or reassignment
peerid := event.peer.id
if event.join {
// Sanity check the internal state; this can be dropped later
if _, ok := pending[peerid]; ok {
event.peer.log.Error("Pending request exists for joining peer")
}
if _, ok := stales[peerid]; ok {
event.peer.log.Error("Stale request exists for joining peer")
}
// Loop back to the entry point for task assignment
continue
}
// A peer left, any existing requests need to be untracked, pending
// tasks returned and possible reassignment checked
if req, ok := pending[peerid]; ok {
queue.unreserve(peerid) // TODO(karalabe): This needs a non-expiration method
delete(pending, peerid)
req.Close()
if index, live := ordering[req]; live {
timeouts.Remove(index)
if index == 0 {
if !timeout.Stop() {
<-timeout.C
}
if timeouts.Size() > 0 {
_, exp := timeouts.Peek()
timeout.Reset(time.Until(time.Unix(0, -exp)))
}
}
delete(ordering, req)
}
}
if req, ok := stales[peerid]; ok {
delete(stales, peerid)
req.Close()
}
case <-timeout.C:
// Retrieve the next request which should have timed out. The check
// below is purely to catch programming errors, given the correct
// code, there's no possible order of events that should result in a
// timeout firing for a non-existent event.
item, exp := timeouts.Peek()
if now, at := time.Now(), time.Unix(0, -exp); now.Before(at) {
log.Error("Timeout triggered but not reached", "left", at.Sub(now))
timeout.Reset(at.Sub(now))
continue
}
req := item.(*eth.Request)
// Stop tracking the timed out request from a timing perspective,
// cancel it, so it's not considered in-flight anymore, but keep
// the peer marked busy to prevent assigning a second request and
// overloading it further.
delete(pending, req.Peer)
stales[req.Peer] = req
delete(ordering, req)
timeouts.Pop()
if timeouts.Size() > 0 {
_, exp := timeouts.Peek()
timeout.Reset(time.Until(time.Unix(0, -exp)))
}
// New timeout potentially set if there are more requests pending,
// reschedule the failed one to a free peer
fails := queue.unreserve(req.Peer)
// Finally, update the peer's retrieval capacity, or if it's already
// below the minimum allowance, drop the peer. If a lot of retrieval
// elements expired, we might have overestimated the remote peer or
// perhaps ourselves. Only reset to minimal throughput but don't drop
// just yet.
//
// The reason the minimum threshold is 2 is that the downloader tries
// to estimate the bandwidth and latency of a peer separately, which
// requires pushing the measured capacity a bit and seeing how response
// times react, so it always requests one more than the minimum (i.e.
// min 2).
peer := d.peers.Peer(req.Peer)
if peer == nil {
// If the peer got disconnected in between, we should really have
// short-circuited it already. Just in case there's some strange
// codepath, leave this check in not to crash.
log.Error("Delivery timeout from unknown peer", "peer", req.Peer)
continue
}
if fails > 2 {
queue.updateCapacity(peer, 0, 0)
} else {
d.dropPeer(peer.id)
// If this peer was the master peer, abort sync immediately
d.cancelLock.RLock()
master := peer.id == d.cancelPeer
d.cancelLock.RUnlock()
if master {
d.cancel()
return errTimeout
}
}
case res := <-responses:
// Response arrived, it may be for an existing or an already timed
// out request. If the former, update the timeout heap and perhaps
// reschedule the timeout timer.
index, live := ordering[res.Req]
if live {
timeouts.Remove(index)
if index == 0 {
if !timeout.Stop() {
<-timeout.C
}
if timeouts.Size() > 0 {
_, exp := timeouts.Peek()
timeout.Reset(time.Until(time.Unix(0, -exp)))
}
}
delete(ordering, res.Req)
}
// Delete the pending request (if it still exists) and mark the peer idle
delete(pending, res.Req.Peer)
delete(stales, res.Req.Peer)
// Signal the dispatcher that the round trip is done. We'll drop the
// peer if the data turns out to be junk.
res.Done <- nil
res.Req.Close()
// If the peer was previously banned and failed to deliver its pack
// in a reasonable time frame, ignore its message.
if peer := d.peers.Peer(res.Req.Peer); peer != nil {
// Deliver the received chunk of data and check chain validity
accepted, err := queue.deliver(peer, res)
if errors.Is(err, errInvalidChain) {
return err
}
// Unless a peer delivered something completely else than requested (usually
// caused by a timed out request which came through in the end), set it to
// idle. If the delivery's stale, the peer should have already been idled.
if !errors.Is(err, errStaleDelivery) {
queue.updateCapacity(peer, accepted, res.Time)
}
}
case cont := <-queue.waker():
// The header fetcher sent a continuation flag, check if it's done
if !cont {
finished = true
}
}
}
}
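The timeout bookkeeping above leans on one trick: deadlines are pushed into the priority queue as negative UnixNano values, so the queue, which pops the largest priority first, always surfaces the soonest expiry. A standalone sketch of that bookkeeping, assuming the non-generic common/prque API used in this tree:

```go
package main

import (
	"fmt"
	"time"

	"github.com/ethereum/go-ethereum/common/prque"
)

func main() {
	timeouts := prque.New(nil) // no index callback needed for this sketch
	now := time.Now()

	// Two in-flight requests with different deadlines.
	timeouts.Push("req-A", -now.Add(3*time.Second).UnixNano())
	timeouts.Push("req-B", -now.Add(1*time.Second).UnixNano())

	// The nearest deadline is always on top of the queue.
	item, exp := timeouts.Peek()
	fmt.Println(item, "expires in", time.Until(time.Unix(0, -exp)))
}
```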


@ -0,0 +1,105 @@
// Copyright 2021 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package downloader
import (
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/eth/protocols/eth"
"github.com/ethereum/go-ethereum/log"
)
// bodyQueue implements typedQueue and is a type adapter between the generic
// concurrent fetcher and the downloader.
type bodyQueue Downloader
// waker returns a notification channel that gets pinged in case more body
// fetches have been queued up, so the fetcher might assign it to idle peers.
func (q *bodyQueue) waker() chan bool {
return q.queue.blockWakeCh
}
// pending returns the number of bodies that are currently queued for fetching
// by the concurrent downloader.
func (q *bodyQueue) pending() int {
return q.queue.PendingBodies()
}
// capacity is responsible for calculating how many bodies a particular peer is
// estimated to be able to retrieve within the allotted round trip time.
func (q *bodyQueue) capacity(peer *peerConnection, rtt time.Duration) int {
return peer.BodyCapacity(rtt)
}
// updateCapacity is responsible for updating how many bodies a particular peer
// is estimated to be able to retrieve in a unit time.
func (q *bodyQueue) updateCapacity(peer *peerConnection, items int, span time.Duration) {
peer.UpdateBodyRate(items, span)
}
// reserve is responsible for allocating a requested number of pending bodies
// from the download queue to the specified peer.
func (q *bodyQueue) reserve(peer *peerConnection, items int) (*fetchRequest, bool, bool) {
return q.queue.ReserveBodies(peer, items)
}
// unreserve is responsible for removing the current body retrieval allocation
// assigned to a specific peer and placing it back into the pool to allow
// reassigning to some other peer.
func (q *bodyQueue) unreserve(peer string) int {
fails := q.queue.ExpireBodies(peer)
if fails > 2 {
log.Trace("Body delivery timed out", "peer", peer)
} else {
log.Debug("Body delivery stalling", "peer", peer)
}
return fails
}
// request is responsible for converting a generic fetch request into a body
// one and sending it to the remote peer for fulfillment.
func (q *bodyQueue) request(peer *peerConnection, req *fetchRequest, resCh chan *eth.Response) (*eth.Request, error) {
peer.log.Trace("Requesting new batch of bodies", "count", len(req.Headers), "from", req.Headers[0].Number)
if q.bodyFetchHook != nil {
q.bodyFetchHook(req.Headers)
}
hashes := make([]common.Hash, 0, len(req.Headers))
for _, header := range req.Headers {
hashes = append(hashes, header.Hash())
}
return peer.peer.RequestBodies(hashes, resCh)
}
// deliver is responsible for taking a generic response packet from the concurrent
// fetcher, unpacking the body data and delivering it to the downloader's queue.
func (q *bodyQueue) deliver(peer *peerConnection, packet *eth.Response) (int, error) {
txs, uncles := packet.Res.(*eth.BlockBodiesPacket).Unpack()
hashsets := packet.Meta.([][]common.Hash) // {txs hashes, uncle hashes}
accepted, err := q.queue.DeliverBodies(peer.id, txs, hashsets[0], uncles, hashsets[1])
switch {
case err == nil && len(txs) == 0:
peer.log.Trace("Requested bodies delivered")
case err == nil:
peer.log.Trace("Delivered new batch of bodies", "count", len(txs), "accepted", accepted)
default:
peer.log.Debug("Failed to deliver retrieved bodies", "err", err)
}
return accepted, err
}
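Because bodyQueue is just the Downloader viewed through a different method set, wiring it into the generic fetcher is a plain type conversion rather than any new state. A simplified sketch of the call shape (the real fetchBodies in this tree also carries an origin argument and richer logging):

```go
// fetchBodies hands the downloader to the generic concurrent fetcher,
// viewed through its body-specific typedQueue adapter. Simplified sketch.
func (d *Downloader) fetchBodies() error {
	log.Debug("Downloading block bodies")
	return d.concurrentFetch((*bodyQueue)(d))
}
```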


@ -0,0 +1,97 @@
// Copyright 2021 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package downloader
import (
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/eth/protocols/eth"
"github.com/ethereum/go-ethereum/log"
)
// headerQueue implements typedQueue and is a type adapter between the generic
// concurrent fetcher and the downloader.
type headerQueue Downloader
// waker returns a notification channel that gets pinged in case more header
// fetches have been queued up, so the fetcher might assign it to idle peers.
func (q *headerQueue) waker() chan bool {
return q.queue.headerContCh
}
// pending returns the number of headers that are currently queued for fetching
// by the concurrent downloader.
func (q *headerQueue) pending() int {
return q.queue.PendingHeaders()
}
// capacity is responsible for calculating how many headers a particular peer is
// estimated to be able to retrieve within the alloted round trip time.
func (q *headerQueue) capacity(peer *peerConnection, rtt time.Duration) int {
return peer.HeaderCapacity(rtt)
}
// updateCapacity is responsible for updating how many headers a particular peer
// is estimated to be able to retrieve in a unit time.
func (q *headerQueue) updateCapacity(peer *peerConnection, items int, span time.Duration) {
peer.UpdateHeaderRate(items, span)
}
// reserve is responsible for allocating a requested number of pending headers
// from the download queue to the specified peer.
func (q *headerQueue) reserve(peer *peerConnection, items int) (*fetchRequest, bool, bool) {
return q.queue.ReserveHeaders(peer, items), false, false
}
// unreserve is responsible for removing the current header retrieval allocation
// assigned to a specific peer and placing it back into the pool to allow
// reassigning to some other peer.
func (q *headerQueue) unreserve(peer string) int {
fails := q.queue.ExpireHeaders(peer)
if fails > 2 {
log.Trace("Header delivery timed out", "peer", peer)
} else {
log.Debug("Header delivery stalling", "peer", peer)
}
return fails
}
// request is responsible for converting a generic fetch request into a header
// one and sending it to the remote peer for fulfillment.
func (q *headerQueue) request(peer *peerConnection, req *fetchRequest, resCh chan *eth.Response) (*eth.Request, error) {
peer.log.Trace("Requesting new batch of headers", "from", req.From)
return peer.peer.RequestHeadersByNumber(req.From, MaxHeaderFetch, 0, false, resCh)
}
// deliver is responsible for taking a generic response packet from the concurrent
// fetcher, unpacking the header data and delivering it to the downloader's queue.
func (q *headerQueue) deliver(peer *peerConnection, packet *eth.Response) (int, error) {
headers := *packet.Res.(*eth.BlockHeadersPacket)
hashes := packet.Meta.([]common.Hash)
accepted, err := q.queue.DeliverHeaders(peer.id, headers, hashes, q.headerProcCh)
switch {
case err == nil && len(headers) == 0:
peer.log.Trace("Requested headers delivered")
case err == nil:
peer.log.Trace("Delivered new batch of headers", "count", len(headers), "accepted", accepted)
default:
peer.log.Debug("Failed to deliver retrieved headers", "err", err)
}
return accepted, err
}


@ -0,0 +1,104 @@
// Copyright 2021 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package downloader
import (
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/eth/protocols/eth"
"github.com/ethereum/go-ethereum/log"
)
// receiptQueue implements typedQueue and is a type adapter between the generic
// concurrent fetcher and the downloader.
type receiptQueue Downloader
// waker returns a notification channel that gets pinged in case more receipt
// fetches have been queued up, so the fetcher might assign it to idle peers.
func (q *receiptQueue) waker() chan bool {
return q.queue.receiptWakeCh
}
// pending returns the number of receipts that are currently queued for fetching
// by the concurrent downloader.
func (q *receiptQueue) pending() int {
return q.queue.PendingReceipts()
}
// capacity is responsible for calculating how many receipts a particular peer is
// estimated to be able to retrieve within the allotted round trip time.
func (q *receiptQueue) capacity(peer *peerConnection, rtt time.Duration) int {
return peer.ReceiptCapacity(rtt)
}
// updateCapacity is responsible for updating how many receipts a particular peer
// is estimated to be able to retrieve in a unit time.
func (q *receiptQueue) updateCapacity(peer *peerConnection, items int, span time.Duration) {
peer.UpdateReceiptRate(items, span)
}
// reserve is responsible for allocating a requested number of pending receipts
// from the download queue to the specified peer.
func (q *receiptQueue) reserve(peer *peerConnection, items int) (*fetchRequest, bool, bool) {
return q.queue.ReserveReceipts(peer, items)
}
// unreserve is responsible for removing the current receipt retrieval allocation
// assigned to a specific peer and placing it back into the pool to allow
// reassigning to some other peer.
func (q *receiptQueue) unreserve(peer string) int {
fails := q.queue.ExpireReceipts(peer)
if fails > 2 {
log.Trace("Receipt delivery timed out", "peer", peer)
} else {
log.Debug("Receipt delivery stalling", "peer", peer)
}
return fails
}
// request is responsible for converting a generic fetch request into a receipt
// one and sending it to the remote peer for fulfillment.
func (q *receiptQueue) request(peer *peerConnection, req *fetchRequest, resCh chan *eth.Response) (*eth.Request, error) {
peer.log.Trace("Requesting new batch of receipts", "count", len(req.Headers), "from", req.Headers[0].Number)
if q.receiptFetchHook != nil {
q.receiptFetchHook(req.Headers)
}
hashes := make([]common.Hash, 0, len(req.Headers))
for _, header := range req.Headers {
hashes = append(hashes, header.Hash())
}
return peer.peer.RequestReceipts(hashes, resCh)
}
// deliver is responsible for taking a generic response packet from the concurrent
// fetcher, unpacking the receipt data and delivering it to the downloader's queue.
func (q *receiptQueue) deliver(peer *peerConnection, packet *eth.Response) (int, error) {
receipts := *packet.Res.(*eth.ReceiptsPacket)
hashes := packet.Meta.([]common.Hash) // {receipt hashes}
accepted, err := q.queue.DeliverReceipts(peer.id, receipts, hashes)
switch {
case err == nil && len(receipts) == 0:
peer.log.Trace("Requested receipts delivered")
case err == nil:
peer.log.Trace("Delivered new batch of receipts", "count", len(receipts), "accepted", accepted)
default:
peer.log.Debug("Failed to deliver retrieved receipts", "err", err)
}
return accepted, err
}


@ -38,8 +38,5 @@ var (
	receiptDropMeter    = metrics.NewRegisteredMeter("eth/downloader/receipts/drop", nil)
	receiptTimeoutMeter = metrics.NewRegisteredMeter("eth/downloader/receipts/timeout", nil)

	throttleCounter = metrics.NewRegisteredCounter("eth/downloader/throttle", nil)
)


@ -24,7 +24,6 @@ type SyncMode uint32
const (
	FullSync  SyncMode = iota // Synchronise the entire blockchain history from full blocks
	SnapSync                  // Download the chain and the state via compact snapshots
	LightSync                 // Download only the headers and terminate afterwards
)
@ -38,8 +37,6 @@ func (mode SyncMode) String() string {
	switch mode {
	case FullSync:
		return "full"
	case SnapSync:
		return "snap"
	case LightSync:
@ -53,8 +50,6 @@ func (mode SyncMode) MarshalText() ([]byte, error) {
	switch mode {
	case FullSync:
		return []byte("full"), nil
	case SnapSync:
		return []byte("snap"), nil
	case LightSync:
@ -68,14 +63,12 @@ func (mode *SyncMode) UnmarshalText(text []byte) error {
	switch string(text) {
	case "full":
		*mode = FullSync
	case "snap":
		*mode = SnapSync
	case "light":
		*mode = LightSync
	default:
		return fmt.Errorf(`unknown sync mode %q, want "full", "snap" or "light"`, text)
	}
	return nil
}
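With FastSync gone, "fast" is no longer a valid mode string: it now fails to parse instead of mapping to anything. A quick round-trip illustration against the exported downloader package:

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/eth/downloader"
)

func main() {
	var mode downloader.SyncMode
	if err := mode.UnmarshalText([]byte("snap")); err != nil {
		panic(err)
	}
	fmt.Println(mode) // snap

	// "fast" was removed together with FastSync and is rejected:
	err := mode.UnmarshalText([]byte("fast"))
	fmt.Println(err) // unknown sync mode "fast", want "full", "snap" or "light"
}
```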


@ -22,9 +22,7 @@ package downloader
import ( import (
"errors" "errors"
"math/big" "math/big"
"sort"
"sync" "sync"
"sync/atomic"
"time" "time"
"github.com/ethereum/go-ethereum/common" "github.com/ethereum/go-ethereum/common"
@ -39,7 +37,6 @@ const (
) )
var ( var (
errAlreadyFetching = errors.New("already fetching blocks from peer")
errAlreadyRegistered = errors.New("peer is already registered") errAlreadyRegistered = errors.New("peer is already registered")
errNotRegistered = errors.New("peer is not registered") errNotRegistered = errors.New("peer is not registered")
) )
@ -48,16 +45,6 @@ var (
type peerConnection struct { type peerConnection struct {
id string // Unique identifier of the peer id string // Unique identifier of the peer
headerIdle int32 // Current header activity state of the peer (idle = 0, active = 1)
blockIdle int32 // Current block activity state of the peer (idle = 0, active = 1)
receiptIdle int32 // Current receipt activity state of the peer (idle = 0, active = 1)
stateIdle int32 // Current node data activity state of the peer (idle = 0, active = 1)
headerStarted time.Time // Time instance when the last header fetch was started
blockStarted time.Time // Time instance when the last block (body) fetch was started
receiptStarted time.Time // Time instance when the last receipt fetch was started
stateStarted time.Time // Time instance when the last node data fetch was started
rates *msgrate.Tracker // Tracker to hone in on the number of items retrievable per second rates *msgrate.Tracker // Tracker to hone in on the number of items retrievable per second
lacking map[common.Hash]struct{} // Set of hashes not to request (didn't have previously) lacking map[common.Hash]struct{} // Set of hashes not to request (didn't have previously)
@ -71,16 +58,15 @@ type peerConnection struct {
// LightPeer encapsulates the methods required to synchronise with a remote light peer. // LightPeer encapsulates the methods required to synchronise with a remote light peer.
type LightPeer interface { type LightPeer interface {
Head() (common.Hash, *big.Int) Head() (common.Hash, *big.Int)
RequestHeadersByHash(common.Hash, int, int, bool) error RequestHeadersByHash(common.Hash, int, int, bool, chan *eth.Response) (*eth.Request, error)
RequestHeadersByNumber(uint64, int, int, bool) error RequestHeadersByNumber(uint64, int, int, bool, chan *eth.Response) (*eth.Request, error)
} }
// Peer encapsulates the methods required to synchronise with a remote full peer. // Peer encapsulates the methods required to synchronise with a remote full peer.
type Peer interface { type Peer interface {
LightPeer LightPeer
RequestBodies([]common.Hash) error RequestBodies([]common.Hash, chan *eth.Response) (*eth.Request, error)
RequestReceipts([]common.Hash) error RequestReceipts([]common.Hash, chan *eth.Response) (*eth.Request, error)
RequestNodeData([]common.Hash) error
} }
// lightPeerWrapper wraps a LightPeer struct, stubbing out the Peer-only methods. // lightPeerWrapper wraps a LightPeer struct, stubbing out the Peer-only methods.
@ -89,21 +75,18 @@ type lightPeerWrapper struct {
}
func (w *lightPeerWrapper) Head() (common.Hash, *big.Int) { return w.peer.Head() }
func (w *lightPeerWrapper) RequestHeadersByHash(h common.Hash, amount int, skip int, reverse bool) error {
func (w *lightPeerWrapper) RequestHeadersByHash(h common.Hash, amount int, skip int, reverse bool, sink chan *eth.Response) (*eth.Request, error) {
return w.peer.RequestHeadersByHash(h, amount, skip, reverse)
return w.peer.RequestHeadersByHash(h, amount, skip, reverse, sink)
}
func (w *lightPeerWrapper) RequestHeadersByNumber(i uint64, amount int, skip int, reverse bool) error {
func (w *lightPeerWrapper) RequestHeadersByNumber(i uint64, amount int, skip int, reverse bool, sink chan *eth.Response) (*eth.Request, error) {
return w.peer.RequestHeadersByNumber(i, amount, skip, reverse)
return w.peer.RequestHeadersByNumber(i, amount, skip, reverse, sink)
}
func (w *lightPeerWrapper) RequestBodies([]common.Hash) error {
func (w *lightPeerWrapper) RequestBodies([]common.Hash, chan *eth.Response) (*eth.Request, error) {
panic("RequestBodies not supported in light client mode sync")
}
func (w *lightPeerWrapper) RequestReceipts([]common.Hash) error {
func (w *lightPeerWrapper) RequestReceipts([]common.Hash, chan *eth.Response) (*eth.Request, error) {
panic("RequestReceipts not supported in light client mode sync")
}
func (w *lightPeerWrapper) RequestNodeData([]common.Hash) error {
panic("RequestNodeData not supported in light client mode sync")
}
// newPeerConnection creates a new downloader peer.
func newPeerConnection(id string, version uint, peer Peer, logger log.Logger) *peerConnection {
@ -121,114 +104,28 @@ func (p *peerConnection) Reset() {
p.lock.Lock()
defer p.lock.Unlock()
atomic.StoreInt32(&p.headerIdle, 0)
atomic.StoreInt32(&p.blockIdle, 0)
atomic.StoreInt32(&p.receiptIdle, 0)
atomic.StoreInt32(&p.stateIdle, 0)
p.lacking = make(map[common.Hash]struct{})
}
// FetchHeaders sends a header retrieval request to the remote peer.
func (p *peerConnection) FetchHeaders(from uint64, count int) error {
// Short circuit if the peer is already fetching
if !atomic.CompareAndSwapInt32(&p.headerIdle, 0, 1) {
return errAlreadyFetching
}
p.headerStarted = time.Now()
// Issue the header retrieval request (absolute upwards without gaps)
go p.peer.RequestHeadersByNumber(from, count, 0, false)
return nil
}
// UpdateHeaderRate updates the peer's estimated header retrieval throughput with
// the current measurement.
func (p *peerConnection) UpdateHeaderRate(delivered int, elapsed time.Duration) {
p.rates.Update(eth.BlockHeadersMsg, elapsed, delivered)
}
// FetchBodies sends a block body retrieval request to the remote peer.
func (p *peerConnection) FetchBodies(request *fetchRequest) error {
// Short circuit if the peer is already fetching
if !atomic.CompareAndSwapInt32(&p.blockIdle, 0, 1) {
return errAlreadyFetching
}
p.blockStarted = time.Now()
go func() {
// Convert the header set to a retrievable slice
hashes := make([]common.Hash, 0, len(request.Headers))
for _, header := range request.Headers {
hashes = append(hashes, header.Hash())
}
p.peer.RequestBodies(hashes)
}()
return nil
}
// UpdateBodyRate updates the peer's estimated body retrieval throughput with the
// current measurement.
func (p *peerConnection) UpdateBodyRate(delivered int, elapsed time.Duration) {
p.rates.Update(eth.BlockBodiesMsg, elapsed, delivered)
}
// FetchReceipts sends a receipt retrieval request to the remote peer.
func (p *peerConnection) FetchReceipts(request *fetchRequest) error {
// Short circuit if the peer is already fetching
if !atomic.CompareAndSwapInt32(&p.receiptIdle, 0, 1) {
return errAlreadyFetching
}
p.receiptStarted = time.Now()
go func() {
// Convert the header set to a retrievable slice
hashes := make([]common.Hash, 0, len(request.Headers))
for _, header := range request.Headers {
hashes = append(hashes, header.Hash())
}
p.peer.RequestReceipts(hashes)
}()
return nil
}
// UpdateReceiptRate updates the peer's estimated receipt retrieval throughput
// with the current measurement.
func (p *peerConnection) UpdateReceiptRate(delivered int, elapsed time.Duration) {
p.rates.Update(eth.ReceiptsMsg, elapsed, delivered)
}
// FetchNodeData sends a node state data retrieval request to the remote peer.
func (p *peerConnection) FetchNodeData(hashes []common.Hash) error {
// Short circuit if the peer is already fetching
if !atomic.CompareAndSwapInt32(&p.stateIdle, 0, 1) {
return errAlreadyFetching
}
p.stateStarted = time.Now()
go p.peer.RequestNodeData(hashes)
return nil
}
// SetHeadersIdle sets the peer to idle, allowing it to execute new header retrieval
// requests. Its estimated header retrieval throughput is updated with that measured
// just now.
func (p *peerConnection) SetHeadersIdle(delivered int, deliveryTime time.Time) {
p.rates.Update(eth.BlockHeadersMsg, deliveryTime.Sub(p.headerStarted), delivered)
atomic.StoreInt32(&p.headerIdle, 0)
}
// SetBodiesIdle sets the peer to idle, allowing it to execute block body retrieval
// requests. Its estimated body retrieval throughput is updated with that measured
// just now.
func (p *peerConnection) SetBodiesIdle(delivered int, deliveryTime time.Time) {
p.rates.Update(eth.BlockBodiesMsg, deliveryTime.Sub(p.blockStarted), delivered)
atomic.StoreInt32(&p.blockIdle, 0)
}
// SetReceiptsIdle sets the peer to idle, allowing it to execute new receipt
// retrieval requests. Its estimated receipt retrieval throughput is updated
// with that measured just now.
func (p *peerConnection) SetReceiptsIdle(delivered int, deliveryTime time.Time) {
p.rates.Update(eth.ReceiptsMsg, deliveryTime.Sub(p.receiptStarted), delivered)
atomic.StoreInt32(&p.receiptIdle, 0)
}
// SetNodeDataIdle sets the peer to idle, allowing it to execute new state trie
// data retrieval requests. Its estimated state retrieval throughput is updated
// with that measured just now.
func (p *peerConnection) SetNodeDataIdle(delivered int, deliveryTime time.Time) {
p.rates.Update(eth.NodeDataMsg, deliveryTime.Sub(p.stateStarted), delivered)
atomic.StoreInt32(&p.stateIdle, 0)
}
// HeaderCapacity retrieves the peers header download allowance based on its
// HeaderCapacity retrieves the peer's header download allowance based on its
// previously discovered throughput.
func (p *peerConnection) HeaderCapacity(targetRTT time.Duration) int {
cap := p.rates.Capacity(eth.BlockHeadersMsg, targetRTT)
@ -238,9 +135,9 @@ func (p *peerConnection) HeaderCapacity(targetRTT time.Duration) int {
return cap
}
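The capacity methods turn a peer's measured throughput into an item count to request within a target round-trip time, clamped to a protocol maximum (`MaxHeaderFetch` above, `MaxBlockFetch` below). A toy sketch of that calculation, assuming a plain exponential moving average rather than `msgrate`'s real estimator:

```go
package main

import (
	"fmt"
	"time"
)

// tracker is a toy throughput tracker in the spirit of msgrate.Tracker
// (assumed simplification: a bare moving average, no confidence logic).
type tracker struct{ itemsPerSecond float64 }

// Update folds a new measurement (delivered items over elapsed time) into
// the moving average.
func (t *tracker) Update(elapsed time.Duration, delivered int) {
	measured := float64(delivered) / elapsed.Seconds()
	t.itemsPerSecond = 0.9*t.itemsPerSecond + 0.1*measured
}

// Capacity estimates how many items to request so the reply arrives within
// targetRTT, capped the same way HeaderCapacity caps at MaxHeaderFetch.
func (t *tracker) Capacity(targetRTT time.Duration, max int) int {
	cap := int(t.itemsPerSecond * targetRTT.Seconds())
	if cap > max {
		cap = max
	}
	if cap < 1 {
		cap = 1
	}
	return cap
}

func main() {
	t := &tracker{itemsPerSecond: 100}
	t.Update(500*time.Millisecond, 192) // 384 items/s measured
	fmt.Println(t.Capacity(time.Second, 192))
}
```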
// BlockCapacity retrieves the peers block download allowance based on its
// BodyCapacity retrieves the peer's body download allowance based on its
// previously discovered throughput.
func (p *peerConnection) BlockCapacity(targetRTT time.Duration) int {
func (p *peerConnection) BodyCapacity(targetRTT time.Duration) int {
cap := p.rates.Capacity(eth.BlockBodiesMsg, targetRTT)
if cap > MaxBlockFetch {
cap = MaxBlockFetch
@ -258,16 +155,6 @@ func (p *peerConnection) ReceiptCapacity(targetRTT time.Duration) int {
return cap
}
// NodeDataCapacity retrieves the peers state download allowance based on its
// previously discovered throughput.
func (p *peerConnection) NodeDataCapacity(targetRTT time.Duration) int {
cap := p.rates.Capacity(eth.NodeDataMsg, targetRTT)
if cap > MaxStateFetch {
cap = MaxStateFetch
}
return cap
}
// MarkLacking appends a new entity to the set of items (blocks, receipts, states)
// that a peer is known not to have (i.e. have been requested before). If the
// set reaches its maximum allowed capacity, items are randomly dropped off.
@ -294,14 +181,19 @@ func (p *peerConnection) Lacks(hash common.Hash) bool {
return ok
}
// peeringEvent is sent on the peer event feed when a remote peer connects or
// disconnects.
type peeringEvent struct {
peer *peerConnection
join bool
}
// peerSet represents the collection of active peers participating in the chain
// download procedure.
type peerSet struct {
peers map[string]*peerConnection
rates *msgrate.Trackers // Set of rate trackers to give the sync a common beat
events event.Feed // Feed to publish peer lifecycle events on
newPeerFeed event.Feed
peerDropFeed event.Feed
lock sync.RWMutex
}
@ -314,14 +206,9 @@ func newPeerSet() *peerSet {
}
}
// SubscribeNewPeers subscribes to peer arrival events.
func (ps *peerSet) SubscribeNewPeers(ch chan<- *peerConnection) event.Subscription {
return ps.newPeerFeed.Subscribe(ch)
}
// SubscribePeerDrops subscribes to peer departure events.
func (ps *peerSet) SubscribePeerDrops(ch chan<- *peerConnection) event.Subscription {
return ps.peerDropFeed.Subscribe(ch)
}
// SubscribeEvents subscribes to peer arrival and departure events.
func (ps *peerSet) SubscribeEvents(ch chan<- *peeringEvent) event.Subscription {
return ps.events.Subscribe(ch)
}
// Reset iterates over the current peer set, and resets each of the known peers
@ -355,7 +242,7 @@ func (ps *peerSet) Register(p *peerConnection) error {
ps.peers[p.id] = p
ps.lock.Unlock()
ps.newPeerFeed.Send(p)
ps.events.Send(&peeringEvent{peer: p, join: true})
return nil
}
@ -372,7 +259,7 @@ func (ps *peerSet) Unregister(id string) error {
ps.rates.Untrack(id)
ps.lock.Unlock()
ps.peerDropFeed.Send(p)
ps.events.Send(&peeringEvent{peer: p, join: false})
return nil
}
@ -404,82 +291,6 @@ func (ps *peerSet) AllPeers() []*peerConnection {
return list
}
// HeaderIdlePeers retrieves a flat list of all the currently header-idle peers
// within the active peer set, ordered by their reputation.
func (ps *peerSet) HeaderIdlePeers() ([]*peerConnection, int) {
idle := func(p *peerConnection) bool {
return atomic.LoadInt32(&p.headerIdle) == 0
}
throughput := func(p *peerConnection) int {
return p.rates.Capacity(eth.BlockHeadersMsg, time.Second)
}
return ps.idlePeers(eth.ETH66, eth.ETH66, idle, throughput)
}
// BodyIdlePeers retrieves a flat list of all the currently body-idle peers within
// the active peer set, ordered by their reputation.
func (ps *peerSet) BodyIdlePeers() ([]*peerConnection, int) {
idle := func(p *peerConnection) bool {
return atomic.LoadInt32(&p.blockIdle) == 0
}
throughput := func(p *peerConnection) int {
return p.rates.Capacity(eth.BlockBodiesMsg, time.Second)
}
return ps.idlePeers(eth.ETH66, eth.ETH66, idle, throughput)
}
// ReceiptIdlePeers retrieves a flat list of all the currently receipt-idle peers
// within the active peer set, ordered by their reputation.
func (ps *peerSet) ReceiptIdlePeers() ([]*peerConnection, int) {
idle := func(p *peerConnection) bool {
return atomic.LoadInt32(&p.receiptIdle) == 0
}
throughput := func(p *peerConnection) int {
return p.rates.Capacity(eth.ReceiptsMsg, time.Second)
}
return ps.idlePeers(eth.ETH66, eth.ETH66, idle, throughput)
}
// NodeDataIdlePeers retrieves a flat list of all the currently node-data-idle
// peers within the active peer set, ordered by their reputation.
func (ps *peerSet) NodeDataIdlePeers() ([]*peerConnection, int) {
idle := func(p *peerConnection) bool {
return atomic.LoadInt32(&p.stateIdle) == 0
}
throughput := func(p *peerConnection) int {
return p.rates.Capacity(eth.NodeDataMsg, time.Second)
}
return ps.idlePeers(eth.ETH66, eth.ETH66, idle, throughput)
}
// idlePeers retrieves a flat list of all currently idle peers satisfying the
// protocol version constraints, using the provided function to check idleness.
// The resulting set of peers are sorted by their capacity.
func (ps *peerSet) idlePeers(minProtocol, maxProtocol uint, idleCheck func(*peerConnection) bool, capacity func(*peerConnection) int) ([]*peerConnection, int) {
ps.lock.RLock()
defer ps.lock.RUnlock()
var (
total = 0
idle = make([]*peerConnection, 0, len(ps.peers))
tps = make([]int, 0, len(ps.peers))
)
for _, p := range ps.peers {
if p.version >= minProtocol && p.version <= maxProtocol {
if idleCheck(p) {
idle = append(idle, p)
tps = append(tps, capacity(p))
}
total++
}
}
// And sort them
sortPeers := &peerCapacitySort{idle, tps}
sort.Sort(sortPeers)
return sortPeers.p, total
}
// peerCapacitySort implements sort.Interface.
// It sorts peer connections by capacity (descending).
type peerCapacitySort struct {


@ -31,7 +31,6 @@ import (
"github.com/ethereum/go-ethereum/core/types" "github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/log" "github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/metrics" "github.com/ethereum/go-ethereum/metrics"
"github.com/ethereum/go-ethereum/trie"
)
const (
@ -54,8 +53,8 @@ var (
// fetchRequest is a currently running data retrieval operation.
type fetchRequest struct {
Peer *peerConnection // Peer to which the request was sent
From uint64 // [eth/62] Requested chain element index (used for skeleton fills only)
From uint64 // Requested chain element index (used for skeleton fills only)
Headers []*types.Header // [eth/62] Requested headers, sorted by request order
Headers []*types.Header // Requested headers, sorted by request order
Time time.Time // Time when the request was made
}
@ -119,6 +118,7 @@ type queue struct {
headerPeerMiss map[string]map[uint64]struct{} // Set of per-peer header batches known to be unavailable
headerPendPool map[string]*fetchRequest // Currently pending header retrieval operations
headerResults []*types.Header // Result cache accumulating the completed headers
headerHashes []common.Hash // Result cache accumulating the completed header hashes
headerProced int // Number of headers already processed from the results
headerOffset uint64 // Number of the first header in the result cache
headerContCh chan bool // Channel to notify when header download finishes
@ -127,10 +127,12 @@ type queue struct {
blockTaskPool map[common.Hash]*types.Header // Pending block (body) retrieval tasks, mapping hashes to headers
blockTaskQueue *prque.Prque // Priority queue of the headers to fetch the blocks (bodies) for
blockPendPool map[string]*fetchRequest // Currently pending block (body) retrieval operations
blockWakeCh chan bool // Channel to notify the block fetcher of new tasks
receiptTaskPool map[common.Hash]*types.Header // Pending receipt retrieval tasks, mapping hashes to headers
receiptTaskQueue *prque.Prque // Priority queue of the headers to fetch the receipts for
receiptPendPool map[string]*fetchRequest // Currently pending receipt retrieval operations
receiptWakeCh chan bool // Channel to notify the receipt fetcher of new tasks
resultCache *resultStore // Downloaded but not yet delivered fetch results
resultSize common.StorageSize // Approximate size of a block (exponential moving average)
@ -146,9 +148,11 @@ type queue struct {
func newQueue(blockCacheLimit int, thresholdInitialSize int) *queue {
lock := new(sync.RWMutex)
q := &queue{
headerContCh: make(chan bool),
headerContCh: make(chan bool, 1),
blockTaskQueue: prque.New(nil),
blockWakeCh: make(chan bool, 1),
receiptTaskQueue: prque.New(nil),
receiptWakeCh: make(chan bool, 1),
active: sync.NewCond(lock),
lock: lock,
}
@ -196,8 +200,8 @@ func (q *queue) PendingHeaders() int {
return q.headerTaskQueue.Size()
}
// PendingBlocks retrieves the number of block (body) requests pending for retrieval.
func (q *queue) PendingBlocks() int {
// PendingBodies retrieves the number of block body requests pending for retrieval.
func (q *queue) PendingBodies() int {
q.lock.Lock()
defer q.lock.Unlock()
@ -212,15 +216,6 @@ func (q *queue) PendingReceipts() int {
return q.receiptTaskQueue.Size()
}
// InFlightHeaders retrieves whether there are header fetch requests currently
// in flight.
func (q *queue) InFlightHeaders() bool {
q.lock.Lock()
defer q.lock.Unlock()
return len(q.headerPendPool) > 0
}
// InFlightBlocks retrieves whether there are block fetch requests currently in
// flight.
func (q *queue) InFlightBlocks() bool {
@ -265,6 +260,7 @@ func (q *queue) ScheduleSkeleton(from uint64, skeleton []*types.Header) {
q.headerTaskQueue = prque.New(nil)
q.headerPeerMiss = make(map[string]map[uint64]struct{}) // Reset availability to correct invalid chains
q.headerResults = make([]*types.Header, len(skeleton)*MaxHeaderFetch)
q.headerHashes = make([]common.Hash, len(skeleton)*MaxHeaderFetch)
q.headerProced = 0
q.headerOffset = from
q.headerContCh = make(chan bool, 1)
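The skeleton result caches allocated above are flat slices indexed by `header number - headerOffset`, with headers and their hashes kept in lock-step. A small sketch of that indexing scheme, with strings standing in for the header and hash types:

```go
package main

import "fmt"

// resultCache sketches the skeleton cache layout assumed above: results for
// header number N land at index N-headerOffset, and headers plus their
// hashes live in two aligned slices.
type resultCache struct {
	headerOffset uint64
	headers      []string // stand-in for []*types.Header
	hashes       []string // stand-in for []common.Hash
}

// store copies a delivered batch into both caches at the same offset.
func (c *resultCache) store(from uint64, headers, hashes []string) {
	copy(c.headers[from-c.headerOffset:], headers)
	copy(c.hashes[from-c.headerOffset:], hashes)
}

func main() {
	c := &resultCache{
		headerOffset: 100,
		headers:      make([]string, 8),
		hashes:       make([]string, 8),
	}
	c.store(104, []string{"h104", "h105"}, []string{"x104", "x105"})
	fmt.Println(c.headers[4], c.hashes[5]) // h104 x105
}
```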
@ -279,27 +275,27 @@ func (q *queue) ScheduleSkeleton(from uint64, skeleton []*types.Header) {
// RetrieveHeaders retrieves the header chain assembled based on the scheduled
// skeleton.
func (q *queue) RetrieveHeaders() ([]*types.Header, int) {
func (q *queue) RetrieveHeaders() ([]*types.Header, []common.Hash, int) {
q.lock.Lock()
defer q.lock.Unlock()
headers, proced := q.headerResults, q.headerProced
headers, hashes, proced := q.headerResults, q.headerHashes, q.headerProced
q.headerResults, q.headerProced = nil, 0
q.headerResults, q.headerHashes, q.headerProced = nil, nil, 0
return headers, proced
return headers, hashes, proced
}
// Schedule adds a set of headers to the download queue for scheduling, returning
// the new headers encountered.
func (q *queue) Schedule(headers []*types.Header, from uint64) []*types.Header {
func (q *queue) Schedule(headers []*types.Header, hashes []common.Hash, from uint64) []*types.Header {
q.lock.Lock()
defer q.lock.Unlock()
// Insert all the headers prioritised by the contained block number
inserts := make([]*types.Header, 0, len(headers))
for _, header := range headers {
for i, header := range headers {
// Make sure chain order is honoured and preserved throughout
hash := header.Hash()
hash := hashes[i]
if header.Number == nil || header.Number.Uint64() != from {
log.Warn("Header broke chain ordering", "number", header.Number, "hash", hash, "expected", from)
break
@ -318,7 +314,7 @@ func (q *queue) Schedule(headers []*types.Header, from uint64) []*types.Header {
q.blockTaskQueue.Push(header, -int64(header.Number.Uint64()))
}
// Queue for receipt retrieval
if q.mode == FastSync && !header.EmptyReceipts() {
if q.mode == SnapSync && !header.EmptyReceipts() {
if _, ok := q.receiptTaskPool[hash]; ok {
log.Warn("Header already scheduled for receipt fetch", "number", header.Number, "hash", hash)
} else {
@ -383,6 +379,13 @@ func (q *queue) Results(block bool) []*fetchResult {
throttleThreshold := uint64((common.StorageSize(blockCacheMemory) + q.resultSize - 1) / q.resultSize)
throttleThreshold = q.resultCache.SetThrottleThreshold(throttleThreshold)
// With results removed from the cache, wake throttled fetchers
for _, ch := range []chan bool{q.blockWakeCh, q.receiptWakeCh} {
select {
case ch <- true:
default:
}
}
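The wake channels are buffered with capacity one, so the `select`/`default` send above never blocks and repeated notifications coalesce into a single pending wakeup. The pattern in isolation:

```go
package main

import "fmt"

// notify performs the same non-blocking wake used above: with a buffered
// channel of size one, repeated notifications coalesce instead of blocking
// the producer or piling up.
func notify(ch chan bool) {
	select {
	case ch <- true:
	default:
	}
}

func main() {
	wake := make(chan bool, 1)
	for i := 0; i < 3; i++ {
		notify(wake) // only the first send lands; the rest are dropped
	}
	fmt.Println("pending wakeups:", len(wake)) // 1
}
```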
// Log some info at certain times
if time.Since(q.lastStatLog) > 60*time.Second {
q.lastStatLog = time.Now()
@ -503,7 +506,7 @@ func (q *queue) reserveHeaders(p *peerConnection, count int, taskPool map[common
// we can ask the resultcache if this header is within the
// "prioritized" segment of blocks. If it is not, we need to throttle
stale, throttle, item, err := q.resultCache.AddFetch(header, q.mode == FastSync)
stale, throttle, item, err := q.resultCache.AddFetch(header, q.mode == SnapSync)
if stale {
// Don't put back in the task queue, this item has already been
// delivered upstream
@ -566,40 +569,6 @@ func (q *queue) reserveHeaders(p *peerConnection, count int, taskPool map[common
return request, progress, throttled
}
// CancelHeaders aborts a fetch request, returning all pending skeleton indexes to the queue.
func (q *queue) CancelHeaders(request *fetchRequest) {
q.lock.Lock()
defer q.lock.Unlock()
q.cancel(request, q.headerTaskQueue, q.headerPendPool)
}
// CancelBodies aborts a body fetch request, returning all pending headers to the
// task queue.
func (q *queue) CancelBodies(request *fetchRequest) {
q.lock.Lock()
defer q.lock.Unlock()
q.cancel(request, q.blockTaskQueue, q.blockPendPool)
}
// CancelReceipts aborts a body fetch request, returning all pending headers to
// the task queue.
func (q *queue) CancelReceipts(request *fetchRequest) {
q.lock.Lock()
defer q.lock.Unlock()
q.cancel(request, q.receiptTaskQueue, q.receiptPendPool)
}
// Cancel aborts a fetch request, returning all pending hashes to the task queue.
func (q *queue) cancel(request *fetchRequest, taskQueue *prque.Prque, pendPool map[string]*fetchRequest) {
if request.From > 0 {
taskQueue.Push(request.From, -int64(request.From))
}
for _, header := range request.Headers {
taskQueue.Push(header, -int64(header.Number.Uint64()))
}
delete(pendPool, request.Peer.id)
}
// Revoke cancels all pending requests belonging to a given peer. This method is
// meant to be called during a peer drop to quickly reassign owned data fetches
// to remaining nodes.
@ -607,6 +576,10 @@ func (q *queue) Revoke(peerID string) {
q.lock.Lock()
defer q.lock.Unlock()
if request, ok := q.headerPendPool[peerID]; ok {
q.headerTaskQueue.Push(request.From, -int64(request.From))
delete(q.headerPendPool, peerID)
}
if request, ok := q.blockPendPool[peerID]; ok {
for _, header := range request.Headers {
q.blockTaskQueue.Push(header, -int64(header.Number.Uint64()))
@ -621,62 +594,60 @@ func (q *queue) Revoke(peerID string) {
}
}
// ExpireHeaders checks for in flight requests that exceeded a timeout allowance,
// canceling them and returning the responsible peers for penalisation.
func (q *queue) ExpireHeaders(timeout time.Duration) map[string]int {
q.lock.Lock()
defer q.lock.Unlock()
return q.expire(timeout, q.headerPendPool, q.headerTaskQueue, headerTimeoutMeter)
}
// ExpireHeaders cancels a request that timed out and moves the pending fetch
// task back into the queue for rescheduling.
func (q *queue) ExpireHeaders(peer string) int {
q.lock.Lock()
defer q.lock.Unlock()
headerTimeoutMeter.Mark(1)
return q.expire(peer, q.headerPendPool, q.headerTaskQueue)
}
// ExpireBodies checks for in flight block body requests that exceeded a timeout
// allowance, canceling them and returning the responsible peers for penalisation.
func (q *queue) ExpireBodies(timeout time.Duration) map[string]int {
q.lock.Lock()
defer q.lock.Unlock()
return q.expire(timeout, q.blockPendPool, q.blockTaskQueue, bodyTimeoutMeter)
}
func (q *queue) ExpireBodies(peer string) int {
q.lock.Lock()
defer q.lock.Unlock()
bodyTimeoutMeter.Mark(1)
return q.expire(peer, q.blockPendPool, q.blockTaskQueue)
}
// ExpireReceipts checks for in flight receipt requests that exceeded a timeout
// allowance, canceling them and returning the responsible peers for penalisation.
func (q *queue) ExpireReceipts(timeout time.Duration) map[string]int {
q.lock.Lock()
defer q.lock.Unlock()
return q.expire(timeout, q.receiptPendPool, q.receiptTaskQueue, receiptTimeoutMeter)
}
func (q *queue) ExpireReceipts(peer string) int {
q.lock.Lock()
defer q.lock.Unlock()
receiptTimeoutMeter.Mark(1)
return q.expire(peer, q.receiptPendPool, q.receiptTaskQueue)
}
// expire is the generic check that moves expired tasks from a pending pool back
// into a task pool, returning all entities caught with expired tasks.
//
// Note, this method expects the queue lock to be already held. The reason the
// lock is not obtained in here is that the parameters already need to access
// the queue, so they already need a lock anyway.
func (q *queue) expire(timeout time.Duration, pendPool map[string]*fetchRequest, taskQueue *prque.Prque, timeoutMeter metrics.Meter) map[string]int {
// Iterate over the expired requests and return each to the queue
expiries := make(map[string]int)
for id, request := range pendPool {
if time.Since(request.Time) > timeout {
// Update the metrics with the timeout
timeoutMeter.Mark(1)
// Return any non satisfied requests to the pool
if request.From > 0 {
taskQueue.Push(request.From, -int64(request.From))
}
for _, header := range request.Headers {
taskQueue.Push(header, -int64(header.Number.Uint64()))
}
// Add the peer to the expiry report along the number of failed requests
expiries[id] = len(request.Headers)
// Remove the expired requests from the pending pool directly
delete(pendPool, id)
}
}
return expiries
}
// expire is the generic check that moves a specific expired task from a pending
// pool back into a task pool.
//
// Note, this method expects the queue lock to be already held. The reason the
// lock is not obtained in here is that the parameters already need to access
// the queue, so they already need a lock anyway.
func (q *queue) expire(peer string, pendPool map[string]*fetchRequest, taskQueue *prque.Prque) int {
// Retrieve the request being expired and log an error if it's non-existent,
// as there's no order of events that should lead to such expirations.
req := pendPool[peer]
if req == nil {
log.Error("Expired request does not exist", "peer", peer)
return 0
}
delete(pendPool, peer)
// Return any non-satisfied requests to the pool
if req.From > 0 {
taskQueue.Push(req.From, -int64(req.From))
}
for _, header := range req.Headers {
taskQueue.Push(header, -int64(header.Number.Uint64()))
}
return len(req.Headers)
}
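In the new single-peer form shown above, a timeout expires exactly one request: the pending entry is dropped, its tasks go back into the queue, and the returned task count feeds the caller's penalisation decision. A toy version of that flow, with a plain slice standing in for `prque`:

```go
package main

import (
	"fmt"
	"sort"
)

// request mirrors the shape of a pending fetch: a set of task identifiers.
type request struct{ tasks []uint64 }

// expire removes one peer's pending request and requeues its tasks,
// returning how many tasks were lost (the penalisation signal).
func expire(peer string, pendPool map[string]*request, taskQueue *[]uint64) int {
	req := pendPool[peer]
	if req == nil {
		return 0 // nothing in flight for this peer
	}
	delete(pendPool, peer)
	*taskQueue = append(*taskQueue, req.tasks...)
	// Keep lowest block numbers first, like the priority queue does.
	sort.Slice(*taskQueue, func(i, j int) bool { return (*taskQueue)[i] < (*taskQueue)[j] })
	return len(req.tasks)
}

func main() {
	pend := map[string]*request{"peer-1": {tasks: []uint64{7, 5, 6}}}
	queue := []uint64{9, 8}

	lost := expire("peer-1", pend, &queue)
	fmt.Println("requeued", lost, "tasks:", queue) // requeued 3 tasks: [5 6 7 8 9]
}
```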
// DeliverHeaders injects a header retrieval response into the header results
@ -684,9 +655,9 @@ func (q *queue) expire(timeout time.Duration, pendPool map[string]*fetchRequest,
// if they do not map correctly to the skeleton.
//
// If the headers are accepted, the method makes an attempt to deliver the set
// of ready headers to the processor to keep the pipeline full. However, it will
// not block to prevent stalling other pending deliveries.
func (q *queue) DeliverHeaders(id string, headers []*types.Header, headerProcCh chan []*types.Header) (int, error) {
func (q *queue) DeliverHeaders(id string, headers []*types.Header, hashes []common.Hash, headerProcCh chan *headerTask) (int, error) {
q.lock.Lock()
defer q.lock.Unlock()
@ -700,28 +671,31 @@ func (q *queue) DeliverHeaders(id string, headers []*types.Header, headerProcCh
// Short circuit if the data was never requested
request := q.headerPendPool[id]
if request == nil {
headerDropMeter.Mark(int64(len(headers)))
return 0, errNoFetchesPending
}
headerReqTimer.UpdateSince(request.Time)
delete(q.headerPendPool, id)
headerReqTimer.UpdateSince(request.Time)
headerInMeter.Mark(int64(len(headers)))
// Ensure headers can be mapped onto the skeleton chain
target := q.headerTaskPool[request.From].Hash()
accepted := len(headers) == MaxHeaderFetch
if accepted {
if headers[0].Number.Uint64() != request.From {
logger.Trace("First header broke chain ordering", "number", headers[0].Number, "hash", headers[0].Hash(), "expected", request.From)
logger.Trace("First header broke chain ordering", "number", headers[0].Number, "hash", hashes[0], "expected", request.From)
accepted = false
} else if headers[len(headers)-1].Hash() != target {
} else if hashes[len(headers)-1] != target {
logger.Trace("Last header broke skeleton structure ", "number", headers[len(headers)-1].Number, "hash", headers[len(headers)-1].Hash(), "expected", target)
logger.Trace("Last header broke skeleton structure ", "number", headers[len(headers)-1].Number, "hash", hashes[len(headers)-1], "expected", target)
accepted = false
}
}
if accepted {
parentHash := headers[0].Hash()
parentHash := hashes[0]
for i, header := range headers[1:] {
hash := header.Hash()
hash := hashes[i+1]
if want := request.From + 1 + uint64(i); header.Number.Uint64() != want {
logger.Warn("Header broke chain ordering", "number", header.Number, "hash", hash, "expected", want)
accepted = false
@ -739,6 +713,7 @@ func (q *queue) DeliverHeaders(id string, headers []*types.Header, headerProcCh
// If the batch of headers wasn't accepted, mark as unavailable
if !accepted {
logger.Trace("Skeleton filling not accepted", "from", request.From)
headerDropMeter.Mark(int64(len(headers)))
miss := q.headerPeerMiss[id]
if miss == nil {
@ -752,6 +727,8 @@ func (q *queue) DeliverHeaders(id string, headers []*types.Header, headerProcCh
}
// Clean up a successful fetch and try to deliver any sub-results
copy(q.headerResults[request.From-q.headerOffset:], headers)
copy(q.headerHashes[request.From-q.headerOffset:], hashes)
delete(q.headerTaskPool, request.From)
ready := 0
@ -760,13 +737,19 @@ func (q *queue) DeliverHeaders(id string, headers []*types.Header, headerProcCh
}
if ready > 0 {
// Headers are ready for delivery, gather them and push forward (non blocking)
process := make([]*types.Header, ready)
copy(process, q.headerResults[q.headerProced:q.headerProced+ready])
processHeaders := make([]*types.Header, ready)
copy(processHeaders, q.headerResults[q.headerProced:q.headerProced+ready])
processHashes := make([]common.Hash, ready)
copy(processHashes, q.headerHashes[q.headerProced:q.headerProced+ready])
select {
case headerProcCh <- process:
case headerProcCh <- &headerTask{
headers: processHeaders,
hashes: processHashes,
}:
logger.Trace("Pre-scheduled new headers", "count", len(processHeaders), "from", processHeaders[0].Number)
q.headerProced += len(processHeaders)
default:
}
}
@ -780,15 +763,15 @@ func (q *queue) DeliverHeaders(id string, headers []*types.Header, headerProcCh
// DeliverBodies injects a block body retrieval response into the results queue.
// The method returns the number of block bodies accepted from the delivery and
// also wakes any threads waiting for data delivery.
func (q *queue) DeliverBodies(id string, txLists [][]*types.Transaction, uncleLists [][]*types.Header) (int, error) {
func (q *queue) DeliverBodies(id string, txLists [][]*types.Transaction, txListHashes []common.Hash, uncleLists [][]*types.Header, uncleListHashes []common.Hash) (int, error) {
q.lock.Lock()
defer q.lock.Unlock()
trieHasher := trie.NewStackTrie(nil)
validate := func(index int, header *types.Header) error {
if types.DeriveSha(types.Transactions(txLists[index]), trieHasher) != header.TxHash {
if txListHashes[index] != header.TxHash {
return errInvalidBody
}
if types.CalcUncleHash(uncleLists[index]) != header.UncleHash {
if uncleListHashes[index] != header.UncleHash {
return errInvalidBody
}
return nil
@ -800,18 +783,18 @@ func (q *queue) DeliverBodies(id string, txLists [][]*types.Transaction, uncleLi
result.SetBodyDone()
}
return q.deliver(id, q.blockTaskPool, q.blockTaskQueue, q.blockPendPool,
bodyReqTimer, len(txLists), validate, reconstruct)
bodyReqTimer, bodyInMeter, bodyDropMeter, len(txLists), validate, reconstruct)
}
// DeliverReceipts injects a receipt retrieval response into the results queue.
// The method returns the number of transaction receipts accepted from the delivery
// and also wakes any threads waiting for data delivery.
func (q *queue) DeliverReceipts(id string, receiptList [][]*types.Receipt) (int, error) {
func (q *queue) DeliverReceipts(id string, receiptList [][]*types.Receipt, receiptListHashes []common.Hash) (int, error) {
q.lock.Lock()
defer q.lock.Unlock()
trieHasher := trie.NewStackTrie(nil)
validate := func(index int, header *types.Header) error {
if types.DeriveSha(types.Receipts(receiptList[index]), trieHasher) != header.ReceiptHash {
if receiptListHashes[index] != header.ReceiptHash {
return errInvalidReceipt
}
return nil
@ -821,7 +804,7 @@ func (q *queue) DeliverReceipts(id string, receiptList [][]*types.Receipt) (int,
result.SetReceiptsDone()
}
return q.deliver(id, q.receiptTaskPool, q.receiptTaskQueue, q.receiptPendPool,
receiptReqTimer, len(receiptList), validate, reconstruct)
receiptReqTimer, receiptInMeter, receiptDropMeter, len(receiptList), validate, reconstruct)
}
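Both delivery paths now compare hashes that were computed once when the response arrived, rather than rebuilding a trie inside every `validate` call. A generic sketch of that "hash once, compare per header" shape, using SHA-256 as a stand-in for the real trie-root derivation:

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

// header carries only the field the sketch needs: the expected body digest.
type header struct{ bodyHash [32]byte }

// validate compares precomputed digests against header fields, mirroring
// the cheap per-delivery check in the new queue code.
func validate(precomputed [][32]byte, headers []header) error {
	for i := range headers {
		if !bytes.Equal(precomputed[i][:], headers[i].bodyHash[:]) {
			return fmt.Errorf("body #%d does not match its header", i)
		}
	}
	return nil
}

func main() {
	body := []byte("block body payload")
	digest := sha256.Sum256(body) // computed once, at arrival

	hdrs := []header{{bodyHash: digest}}
	fmt.Println("valid:", validate([][32]byte{digest}, hdrs) == nil)
}
```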
// deliver injects a data retrieval response into the results queue.
@ -830,18 +813,22 @@ func (q *queue) DeliverReceipts(id string, receiptList [][]*types.Receipt) (int,
// reason this lock is not obtained in here is because the parameters already need
// to access the queue, so they already need a lock anyway.
func (q *queue) deliver(id string, taskPool map[common.Hash]*types.Header,
taskQueue *prque.Prque, pendPool map[string]*fetchRequest, reqTimer metrics.Timer,
taskQueue *prque.Prque, pendPool map[string]*fetchRequest,
reqTimer metrics.Timer, resInMeter metrics.Meter, resDropMeter metrics.Meter,
results int, validate func(index int, header *types.Header) error,
reconstruct func(index int, result *fetchResult)) (int, error) {
// Short circuit if the data was never requested
request := pendPool[id]
if request == nil {
resDropMeter.Mark(int64(results))
return 0, errNoFetchesPending
}
reqTimer.UpdateSince(request.Time)
delete(pendPool, id)
reqTimer.UpdateSince(request.Time)
resInMeter.Mark(int64(results))
// If no data items were retrieved, mark them as unavailable for the origin peer
if results == 0 {
for _, header := range request.Headers {
@ -883,6 +870,8 @@ func (q *queue) deliver(id string, taskPool map[common.Hash]*types.Header,
delete(taskPool, hashes[accepted])
accepted++
}
resDropMeter.Mark(int64(results - accepted))
// Return all failed or missing fetches to the queue
for _, header := range request.Headers[accepted:] {
taskQueue.Push(header, -int64(header.Number.Uint64()))


@ -31,6 +31,7 @@ import (
"github.com/ethereum/go-ethereum/core/types" "github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/log" "github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/params" "github.com/ethereum/go-ethereum/params"
"github.com/ethereum/go-ethereum/trie"
)
var (
@ -104,17 +105,22 @@ func TestBasics(t *testing.T) {
if !q.Idle() {
t.Errorf("new queue should be idle")
}
q.Prepare(1, FastSync)
q.Prepare(1, SnapSync)
if res := q.Results(false); len(res) != 0 {
t.Fatal("new queue should have 0 results")
}
// Schedule a batch of headers
q.Schedule(chain.headers(), 1)
headers := chain.headers()
hashes := make([]common.Hash, len(headers))
for i, header := range headers {
hashes[i] = header.Hash()
}
q.Schedule(headers, hashes, 1)
if q.Idle() {
t.Errorf("queue should not be idle")
}
if got, exp := q.PendingBlocks(), chain.Len(); got != exp {
if got, exp := q.PendingBodies(), chain.Len(); got != exp {
t.Errorf("wrong pending block count, got %d, exp %d", got, exp)
}
// Only non-empty receipts get added to task-queue
@ -197,13 +203,19 @@ func TestEmptyBlocks(t *testing.T) {
q := newQueue(10, 10)
q.Prepare(1, FastSync)
q.Prepare(1, SnapSync)
// Schedule a batch of headers
q.Schedule(emptyChain.headers(), 1)
headers := emptyChain.headers()
hashes := make([]common.Hash, len(headers))
for i, header := range headers {
hashes[i] = header.Hash()
}
q.Schedule(headers, hashes, 1)
if q.Idle() {
t.Errorf("queue should not be idle")
}
if got, exp := q.PendingBlocks(), len(emptyChain.blocks); got != exp {
if got, exp := q.PendingBodies(), len(emptyChain.blocks); got != exp {
t.Errorf("wrong pending block count, got %d, exp %d", got, exp)
}
if got, exp := q.PendingReceipts(), 0; got != exp {
@ -272,7 +284,7 @@ func XTestDelivery(t *testing.T) {
}
q := newQueue(10, 10)
var wg sync.WaitGroup
q.Prepare(1, FastSync)
q.Prepare(1, SnapSync)
wg.Add(1)
go func() {
// deliver headers
@ -280,11 +292,15 @@ func XTestDelivery(t *testing.T) {
c := 1
for {
//fmt.Printf("getting headers from %d\n", c)
hdrs := world.headers(c)
l := len(hdrs)
headers := world.headers(c)
hashes := make([]common.Hash, len(headers))
for i, header := range headers {
hashes[i] = header.Hash()
}
l := len(headers)
//fmt.Printf("scheduling %d headers, first %d last %d\n",
// l, hdrs[0].Number.Uint64(), hdrs[len(hdrs)-1].Number.Uint64())
// l, headers[0].Number.Uint64(), headers[len(headers)-1].Number.Uint64())
q.Schedule(hdrs, uint64(c))
q.Schedule(headers, hashes, uint64(c))
c += l
}
}()
@ -311,18 +327,31 @@ func XTestDelivery(t *testing.T) {
peer := dummyPeer(fmt.Sprintf("peer-%d", i))
f, _, _ := q.ReserveBodies(peer, rand.Intn(30))
if f != nil {
var emptyList []*types.Header
var txs [][]*types.Transaction
var uncles [][]*types.Header
var (
emptyList []*types.Header
txset [][]*types.Transaction
uncleset [][]*types.Header
)
numToSkip := rand.Intn(len(f.Headers))
for _, hdr := range f.Headers[0 : len(f.Headers)-numToSkip] {
txs = append(txs, world.getTransactions(hdr.Number.Uint64()))
uncles = append(uncles, emptyList)
txset = append(txset, world.getTransactions(hdr.Number.Uint64()))
uncleset = append(uncleset, emptyList)
}
var (
txsHashes = make([]common.Hash, len(txset))
uncleHashes = make([]common.Hash, len(uncleset))
)
hasher := trie.NewStackTrie(nil)
for i, txs := range txset {
txsHashes[i] = types.DeriveSha(types.Transactions(txs), hasher)
}
for i, uncles := range uncleset {
uncleHashes[i] = types.CalcUncleHash(uncles)
}
time.Sleep(100 * time.Millisecond)
_, err := q.DeliverBodies(peer.id, txs, uncles)
_, err := q.DeliverBodies(peer.id, txset, txsHashes, uncleset, uncleHashes)
if err != nil {
fmt.Printf("delivered %d bodies %v\n", len(txset), err)
}
} else {
i++
@ -341,7 +370,12 @@ func XTestDelivery(t *testing.T) {
for _, hdr := range f.Headers {
rcs = append(rcs, world.getReceipts(hdr.Number.Uint64()))
}
_, err := q.DeliverReceipts(peer.id, rcs)
hasher := trie.NewStackTrie(nil)
hashes := make([]common.Hash, len(rcs))
for i, receipt := range rcs {
hashes[i] = types.DeriveSha(types.Receipts(receipt), hasher)
}
_, err := q.DeliverReceipts(peer.id, rcs, hashes)
if err != nil {
fmt.Printf("delivered %d receipts %v\n", len(rcs), err)
}


@ -17,48 +17,12 @@
package downloader
import (
"fmt"
"sync"
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/rawdb"
"github.com/ethereum/go-ethereum/core/state"
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/ethdb"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/trie"
"golang.org/x/crypto/sha3"
)
// stateReq represents a batch of state fetch requests grouped together into
// a single data retrieval network packet.
type stateReq struct {
nItems uint16 // Number of items requested for download (max is 384, so uint16 is sufficient)
trieTasks map[common.Hash]*trieTask // Trie node download tasks to track previous attempts
codeTasks map[common.Hash]*codeTask // Byte code download tasks to track previous attempts
timeout time.Duration // Maximum round trip time for this to complete
timer *time.Timer // Timer to fire when the RTT timeout expires
peer *peerConnection // Peer that we're requesting from
delivered time.Time // Time when the packet was delivered (independent when we process it)
response [][]byte // Response data of the peer (nil for timeouts)
dropped bool // Flag whether the peer dropped off early
}
// timedOut returns if this request timed out.
func (req *stateReq) timedOut() bool {
return req.response == nil
}
// stateSyncStats is a collection of progress stats to report during a state trie
// sync to RPC requests as well as to display in user logs.
type stateSyncStats struct {
processed uint64 // Number of state entries processed
duplicate uint64 // Number of state entries downloaded twice
unexpected uint64 // Number of non-requested state entries received
pending uint64 // Number of still pending state entries
}
// syncState starts downloading state with the given root hash.
func (d *Downloader) syncState(root common.Hash) *stateSync {
// Create the state sync
@ -85,8 +49,6 @@ func (d *Downloader) stateFetcher() {
for next := s; next != nil; {
next = d.runStateSync(next)
}
case <-d.stateCh:
// Ignore state responses while no sync is running.
case <-d.quitCh:
return
}
@ -96,162 +58,19 @@ func (d *Downloader) stateFetcher() {
// runStateSync runs a state synchronisation until it completes or another root
// hash is requested to be switched over to.
func (d *Downloader) runStateSync(s *stateSync) *stateSync {
var (
active = make(map[string]*stateReq) // Currently in-flight requests
finished []*stateReq // Completed or failed requests
timeout = make(chan *stateReq) // Timed out active requests
)
log.Trace("State sync starting", "root", s.root)
defer func() {
// Cancel active request timers on exit. Also set peers to idle so they're
// available for the next sync.
for _, req := range active {
req.timer.Stop()
req.peer.SetNodeDataIdle(int(req.nItems), time.Now())
}
}()
go s.run()
defer s.Cancel()
// Listen for peer departure events to cancel assigned tasks
peerDrop := make(chan *peerConnection, 1024)
peerSub := s.d.peers.SubscribePeerDrops(peerDrop)
defer peerSub.Unsubscribe()
for {
// Enable sending of the first buffered element if there is one.
var (
deliverReq *stateReq
deliverReqCh chan *stateReq
)
if len(finished) > 0 {
deliverReq = finished[0]
deliverReqCh = s.deliver
}
select {
// The stateSync lifecycle:
case next := <-d.stateSyncStart:
d.spindownStateSync(active, finished, timeout, peerDrop)
return next
case <-s.done:
d.spindownStateSync(active, finished, timeout, peerDrop)
return nil
// Send the next finished request to the current sync:
case deliverReqCh <- deliverReq:
// Shift out the first request, but also set the emptied slot to nil for GC
copy(finished, finished[1:])
finished[len(finished)-1] = nil
finished = finished[:len(finished)-1]
// Handle incoming state packs:
case pack := <-d.stateCh:
// Discard any data not requested (or previously timed out)
req := active[pack.PeerId()]
if req == nil {
log.Debug("Unrequested node data", "peer", pack.PeerId(), "len", pack.Items())
continue
} }
// Finalize the request and queue up for processing
req.timer.Stop()
req.response = pack.(*statePack).states
req.delivered = time.Now()
finished = append(finished, req)
delete(active, pack.PeerId())
// Handle dropped peer connections:
case p := <-peerDrop:
// Skip if no request is currently pending
req := active[p.id]
if req == nil {
continue
}
// Finalize the request and queue up for processing
req.timer.Stop()
req.dropped = true
req.delivered = time.Now()
finished = append(finished, req)
delete(active, p.id)
// Handle timed-out requests:
case req := <-timeout:
// If the peer is already requesting something else, ignore the stale timeout.
// This can happen when the timeout and the delivery happens simultaneously,
// causing both pathways to trigger.
if active[req.peer.id] != req {
continue
}
req.delivered = time.Now()
// Move the timed out data back into the download queue
finished = append(finished, req)
delete(active, req.peer.id)
// Track outgoing state requests:
case req := <-d.trackStateReq:
// If an active request already exists for this peer, we have a problem. In
// theory the trie node schedule must never assign two requests to the same
// peer. In practice however, a peer might receive a request, disconnect and
// immediately reconnect before the previous times out. In this case the first
// request is never honored, alas we must not silently overwrite it, as that
// causes valid requests to go missing and sync to get stuck.
if old := active[req.peer.id]; old != nil {
log.Warn("Busy peer assigned new state fetch", "peer", old.peer.id)
// Move the previous request to the finished set
old.timer.Stop()
old.dropped = true
old.delivered = time.Now()
finished = append(finished, old)
}
// Start a timer to notify the sync loop if the peer stalled.
req.timer = time.AfterFunc(req.timeout, func() {
timeout <- req
})
active[req.peer.id] = req
}
}
}
// spindownStateSync 'drains' the outstanding requests; some will be delivered and other
// will time out. This is to ensure that when the next stateSync starts working, all peers
// are marked as idle and de facto _are_ idle.
func (d *Downloader) spindownStateSync(active map[string]*stateReq, finished []*stateReq, timeout chan *stateReq, peerDrop chan *peerConnection) {
log.Trace("State sync spinning down", "active", len(active), "finished", len(finished))
for len(active) > 0 {
var (
req *stateReq
reason string
)
select {
// Handle (drop) incoming state packs:
case pack := <-d.stateCh:
req = active[pack.PeerId()]
reason = "delivered"
// Handle dropped peer connections:
case p := <-peerDrop:
req = active[p.id]
reason = "peerdrop"
// Handle timed-out requests:
case req = <-timeout:
reason = "timeout"
}
if req == nil {
continue
}
req.peer.log.Trace("State peer marked idle (spindown)", "req.items", int(req.nItems), "reason", reason)
req.timer.Stop()
delete(active, req.peer.id)
req.peer.SetNodeDataIdle(int(req.nItems), time.Now())
}
// The 'finished' set contains deliveries that we were going to pass to processing.
// Those are now moot, but we still need to set those peers as idle, which would
// otherwise have been done after processing
for _, req := range finished {
req.peer.SetNodeDataIdle(int(req.nItems), time.Now())
}
} }
@ -259,50 +78,21 @@ func (d *Downloader) spindownStateSync(active map[string]*stateReq, finished []*
// by a given state root.
type stateSync struct {
d *Downloader // Downloader instance to access and manage current peerset
root common.Hash // State root currently being synced
sched *trie.Sync // State trie sync scheduler defining the tasks
keccak crypto.KeccakState // Keccak256 hasher to verify deliveries with
trieTasks map[common.Hash]*trieTask // Set of trie node tasks currently queued for retrieval
codeTasks map[common.Hash]*codeTask // Set of byte code tasks currently queued for retrieval
numUncommitted int
bytesUncommitted int
started chan struct{} // Started is signalled once the sync loop starts
deliver chan *stateReq // Delivery channel multiplexing peer responses
cancel chan struct{} // Channel to signal a termination request
cancelOnce sync.Once // Ensures cancel only ever gets called once
done chan struct{} // Channel to signal termination completion
err error // Any error hit during sync (set before completion)
}
// trieTask represents a single trie node download task, containing a set of
// peers already attempted retrieval from to detect stalled syncs and abort.
type trieTask struct {
path [][]byte
attempts map[string]struct{}
}
// codeTask represents a single byte code download task, containing a set of
// peers already attempted retrieval from to detect stalled syncs and abort.
type codeTask struct {
attempts map[string]struct{}
}
// newStateSync creates a new state trie download scheduler. This method does not
// yet start the sync. The user needs to call run to initiate.
func newStateSync(d *Downloader, root common.Hash) *stateSync {
return &stateSync{
d: d,
root: root,
sched: state.NewStateSync(root, d.stateDB, d.stateBloom, nil),
keccak: sha3.NewLegacyKeccak256().(crypto.KeccakState),
trieTasks: make(map[common.Hash]*trieTask),
codeTasks: make(map[common.Hash]*codeTask),
deliver: make(chan *stateReq),
cancel: make(chan struct{}),
done: make(chan struct{}),
started: make(chan struct{}),
@ -314,11 +104,7 @@ func newStateSync(d *Downloader, root common.Hash) *stateSync {
 // finish.
 func (s *stateSync) run() {
 	close(s.started)
-	if s.d.snapSync {
-		s.err = s.d.SnapSyncer.Sync(s.root, s.cancel)
-	} else {
-		s.err = s.loop()
-	}
+	s.err = s.d.SnapSyncer.Sync(s.root, s.cancel)
 	close(s.done)
 }
@ -335,281 +121,3 @@ func (s *stateSync) Cancel() error {
 	})
 	return s.Wait()
 }
-// loop is the main event loop of a state trie sync. It is responsible for the
-// assignment of new tasks to peers (including sending it to them) as well as
-// for the processing of inbound data. Note, that the loop does not directly
-// receive data from peers, rather those are buffered up in the downloader and
-// pushed here async. The reason is to decouple processing from data receipt
-// and timeouts.
-func (s *stateSync) loop() (err error) {
-	// Listen for new peer events to assign tasks to them
-	newPeer := make(chan *peerConnection, 1024)
-	peerSub := s.d.peers.SubscribeNewPeers(newPeer)
-	defer peerSub.Unsubscribe()
-	defer func() {
-		cerr := s.commit(true)
-		if err == nil {
-			err = cerr
-		}
-	}()
-
-	// Keep assigning new tasks until the sync completes or aborts
-	for s.sched.Pending() > 0 {
-		if err = s.commit(false); err != nil {
-			return err
-		}
-		s.assignTasks()
-		// Tasks assigned, wait for something to happen
-		select {
-		case <-newPeer:
-			// New peer arrived, try to assign it download tasks
-
-		case <-s.cancel:
-			return errCancelStateFetch
-
-		case <-s.d.cancelCh:
-			return errCanceled
-
-		case req := <-s.deliver:
-			// Response, disconnect or timeout triggered, drop the peer if stalling
-			log.Trace("Received node data response", "peer", req.peer.id, "count", len(req.response), "dropped", req.dropped, "timeout", !req.dropped && req.timedOut())
-			if req.nItems <= 2 && !req.dropped && req.timedOut() {
-				// 2 items are the minimum requested, if even that times out, we've no use of
-				// this peer at the moment.
-				log.Warn("Stalling state sync, dropping peer", "peer", req.peer.id)
-				if s.d.dropPeer == nil {
-					// The dropPeer method is nil when `--copydb` is used for a local copy.
-					// Timeouts can occur if e.g. compaction hits at the wrong time, and can be ignored
-					req.peer.log.Warn("Downloader wants to drop peer, but peerdrop-function is not set", "peer", req.peer.id)
-				} else {
-					s.d.dropPeer(req.peer.id)
-					// If this peer was the master peer, abort sync immediately
-					s.d.cancelLock.RLock()
-					master := req.peer.id == s.d.cancelPeer
-					s.d.cancelLock.RUnlock()
-
-					if master {
-						s.d.cancel()
-						return errTimeout
-					}
-				}
-			}
-			// Process all the received blobs and check for stale delivery
-			delivered, err := s.process(req)
-			req.peer.SetNodeDataIdle(delivered, req.delivered)
-			if err != nil {
-				log.Warn("Node data write error", "err", err)
-				return err
-			}
-		}
-	}
-	return nil
-}
-
-func (s *stateSync) commit(force bool) error {
-	if !force && s.bytesUncommitted < ethdb.IdealBatchSize {
-		return nil
-	}
-	start := time.Now()
-	b := s.d.stateDB.NewBatch()
-	if err := s.sched.Commit(b); err != nil {
-		return err
-	}
-	if err := b.Write(); err != nil {
-		return fmt.Errorf("DB write error: %v", err)
-	}
-	s.updateStats(s.numUncommitted, 0, 0, time.Since(start))
-	s.numUncommitted = 0
-	s.bytesUncommitted = 0
-	return nil
-}
-
-// assignTasks attempts to assign new tasks to all idle peers, either from the
-// batch currently being retried, or fetching new data from the trie sync itself.
-func (s *stateSync) assignTasks() {
-	// Iterate over all idle peers and try to assign them state fetches
-	peers, _ := s.d.peers.NodeDataIdlePeers()
-	for _, p := range peers {
-		// Assign a batch of fetches proportional to the estimated latency/bandwidth
-		cap := p.NodeDataCapacity(s.d.peers.rates.TargetRoundTrip())
-		req := &stateReq{peer: p, timeout: s.d.peers.rates.TargetTimeout()}
-
-		nodes, _, codes := s.fillTasks(cap, req)
-
-		// If the peer was assigned tasks to fetch, send the network request
-		if len(nodes)+len(codes) > 0 {
-			req.peer.log.Trace("Requesting batch of state data", "nodes", len(nodes), "codes", len(codes), "root", s.root)
-			select {
-			case s.d.trackStateReq <- req:
-				req.peer.FetchNodeData(append(nodes, codes...)) // Unified retrieval under eth/6x
-			case <-s.cancel:
-			case <-s.d.cancelCh:
-			}
-		}
-	}
-}
-
-// fillTasks fills the given request object with a maximum of n state download
-// tasks to send to the remote peer.
-func (s *stateSync) fillTasks(n int, req *stateReq) (nodes []common.Hash, paths []trie.SyncPath, codes []common.Hash) {
-	// Refill available tasks from the scheduler.
-	if fill := n - (len(s.trieTasks) + len(s.codeTasks)); fill > 0 {
-		nodes, paths, codes := s.sched.Missing(fill)
-		for i, hash := range nodes {
-			s.trieTasks[hash] = &trieTask{
-				path:     paths[i],
-				attempts: make(map[string]struct{}),
-			}
-		}
-		for _, hash := range codes {
-			s.codeTasks[hash] = &codeTask{
-				attempts: make(map[string]struct{}),
-			}
-		}
-	}
-	// Find tasks that haven't been tried with the request's peer. Prefer code
-	// over trie nodes as those can be written to disk and forgotten about.
-	nodes = make([]common.Hash, 0, n)
-	paths = make([]trie.SyncPath, 0, n)
-	codes = make([]common.Hash, 0, n)
-
-	req.trieTasks = make(map[common.Hash]*trieTask, n)
-	req.codeTasks = make(map[common.Hash]*codeTask, n)
-
-	for hash, t := range s.codeTasks {
-		// Stop when we've gathered enough requests
-		if len(nodes)+len(codes) == n {
-			break
-		}
-		// Skip any requests we've already tried from this peer
-		if _, ok := t.attempts[req.peer.id]; ok {
-			continue
-		}
-		// Assign the request to this peer
-		t.attempts[req.peer.id] = struct{}{}
-
-		codes = append(codes, hash)
-		req.codeTasks[hash] = t
-		delete(s.codeTasks, hash)
-	}
-	for hash, t := range s.trieTasks {
-		// Stop when we've gathered enough requests
-		if len(nodes)+len(codes) == n {
-			break
-		}
-		// Skip any requests we've already tried from this peer
-		if _, ok := t.attempts[req.peer.id]; ok {
-			continue
-		}
-		// Assign the request to this peer
-		t.attempts[req.peer.id] = struct{}{}
-
-		nodes = append(nodes, hash)
-		paths = append(paths, t.path)
-
-		req.trieTasks[hash] = t
-		delete(s.trieTasks, hash)
-	}
-	req.nItems = uint16(len(nodes) + len(codes))
-	return nodes, paths, codes
-}
-
-// process iterates over a batch of delivered state data, injecting each item
-// into a running state sync, re-queuing any items that were requested but not
-// delivered. Returns whether the peer actually managed to deliver anything of
-// value, and any error that occurred.
-func (s *stateSync) process(req *stateReq) (int, error) {
-	// Collect processing stats and update progress if valid data was received
-	duplicate, unexpected, successful := 0, 0, 0
-
-	defer func(start time.Time) {
-		if duplicate > 0 || unexpected > 0 {
-			s.updateStats(0, duplicate, unexpected, time.Since(start))
-		}
-	}(time.Now())
-
-	// Iterate over all the delivered data and inject one-by-one into the trie
-	for _, blob := range req.response {
-		hash, err := s.processNodeData(blob)
-		switch err {
-		case nil:
-			s.numUncommitted++
-			s.bytesUncommitted += len(blob)
-			successful++
-		case trie.ErrNotRequested:
-			unexpected++
-		case trie.ErrAlreadyProcessed:
-			duplicate++
-		default:
-			return successful, fmt.Errorf("invalid state node %s: %v", hash.TerminalString(), err)
-		}
-		// Delete from both queues (one delivery is enough for the syncer)
-		delete(req.trieTasks, hash)
-		delete(req.codeTasks, hash)
-	}
-	// Put unfulfilled tasks back into the retry queue
-	npeers := s.d.peers.Len()
-	for hash, task := range req.trieTasks {
-		// If the node did deliver something, missing items may be due to a protocol
-		// limit or a previous timeout + delayed delivery. Both cases should permit
-		// the node to retry the missing items (to avoid single-peer stalls).
-		if len(req.response) > 0 || req.timedOut() {
-			delete(task.attempts, req.peer.id)
-		}
-		// If we've requested the node too many times already, it may be a malicious
-		// sync where nobody has the right data. Abort.
-		if len(task.attempts) >= npeers {
-			return successful, fmt.Errorf("trie node %s failed with all peers (%d tries, %d peers)", hash.TerminalString(), len(task.attempts), npeers)
-		}
-		// Missing item, place into the retry queue.
-		s.trieTasks[hash] = task
-	}
-	for hash, task := range req.codeTasks {
-		// If the node did deliver something, missing items may be due to a protocol
-		// limit or a previous timeout + delayed delivery. Both cases should permit
-		// the node to retry the missing items (to avoid single-peer stalls).
-		if len(req.response) > 0 || req.timedOut() {
-			delete(task.attempts, req.peer.id)
-		}
-		// If we've requested the node too many times already, it may be a malicious
-		// sync where nobody has the right data. Abort.
-		if len(task.attempts) >= npeers {
-			return successful, fmt.Errorf("byte code %s failed with all peers (%d tries, %d peers)", hash.TerminalString(), len(task.attempts), npeers)
-		}
-		// Missing item, place into the retry queue.
-		s.codeTasks[hash] = task
-	}
-	return successful, nil
-}
-
-// processNodeData tries to inject a trie node data blob delivered from a remote
-// peer into the state trie, returning whether anything useful was written or any
-// error occurred.
-func (s *stateSync) processNodeData(blob []byte) (common.Hash, error) {
-	res := trie.SyncResult{Data: blob}
-	s.keccak.Reset()
-	s.keccak.Write(blob)
-	s.keccak.Read(res.Hash[:])
-	err := s.sched.Process(res)
-	return res.Hash, err
-}
-
-// updateStats bumps the various state sync progress counters and displays a log
-// message for the user to see.
-func (s *stateSync) updateStats(written, duplicate, unexpected int, duration time.Duration) {
-	s.d.syncStatsLock.Lock()
-	defer s.d.syncStatsLock.Unlock()
-
-	s.d.syncStatsState.pending = uint64(s.sched.Pending())
-	s.d.syncStatsState.processed += uint64(written)
-	s.d.syncStatsState.duplicate += uint64(duplicate)
-	s.d.syncStatsState.unexpected += uint64(unexpected)
-
-	if written > 0 || duplicate > 0 || unexpected > 0 {
-		log.Info("Imported new state entries", "count", written, "elapsed", common.PrettyDuration(duration), "processed", s.d.syncStatsState.processed, "pending", s.d.syncStatsState.pending, "trieretry", len(s.trieTasks), "coderetry", len(s.codeTasks), "duplicate", s.d.syncStatsState.duplicate, "unexpected", s.d.syncStatsState.unexpected)
-	}
-	if written > 0 {
-		rawdb.WriteFastTrieProgress(s.d.stateDB, s.d.syncStatsState.processed)
-	}
-}
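The deleted loop above retried failed downloads by remembering, per task, which peers had already been tried, and aborted once a task had failed with every connected peer. A minimal standalone sketch of that bookkeeping pattern (hypothetical `task`/`assign` names, not part of the geth API):

```go
package main

import "fmt"

// task tracks which peers have already been asked for one item.
type task struct {
	attempts map[string]struct{}
}

// assign hands the task to a peer that hasn't tried it yet; it reports
// failure once every connected peer has had an unsuccessful attempt.
func assign(t *task, peers []string) (string, error) {
	for _, p := range peers {
		if _, tried := t.attempts[p]; !tried {
			t.attempts[p] = struct{}{} // remember the attempt before sending
			return p, nil
		}
	}
	return "", fmt.Errorf("task failed with all %d peers", len(peers))
}

func main() {
	t := &task{attempts: make(map[string]struct{})}
	peers := []string{"peer-a", "peer-b"}
	for {
		p, err := assign(t, peers)
		if err != nil {
			fmt.Println(err) // aborts after both peers were tried
			return
		}
		fmt.Println("requesting from", p) // pretend the delivery failed; retry elsewhere
	}
}
```

Clearing the attempt set when a peer did deliver something (as `process` did above) is what kept a single slow peer from permanently exhausting the retry budget.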

View File

@ -20,12 +20,14 @@ import (
"fmt" "fmt"
"math/big" "math/big"
"sync" "sync"
"time"
"github.com/ethereum/go-ethereum/common" "github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/consensus/ethash" "github.com/ethereum/go-ethereum/consensus/ethash"
"github.com/ethereum/go-ethereum/core" "github.com/ethereum/go-ethereum/core"
"github.com/ethereum/go-ethereum/core/rawdb" "github.com/ethereum/go-ethereum/core/rawdb"
"github.com/ethereum/go-ethereum/core/types" "github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/core/vm"
"github.com/ethereum/go-ethereum/crypto" "github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/params" "github.com/ethereum/go-ethereum/params"
) )
@ -39,73 +41,110 @@ var (
 )
 // The common prefix of all test chains:
-var testChainBase = newTestChain(blockCacheMaxItems+200, testGenesis)
+var testChainBase *testChain

 // Different forks on top of the base chain:
 var testChainForkLightA, testChainForkLightB, testChainForkHeavy *testChain

+var pregenerated bool
 func init() {
+	// Reduce some of the parameters to make the tester faster
+	fullMaxForkAncestry = 10000
+	lightMaxForkAncestry = 10000
+	blockCacheMaxItems = 1024
+	fsHeaderSafetyNet = 256
+	fsHeaderContCheck = 500 * time.Millisecond
+
+	testChainBase = newTestChain(blockCacheMaxItems+200, testGenesis)
+
 	var forkLen = int(fullMaxForkAncestry + 50)
 	var wg sync.WaitGroup
+
+	// Generate the test chains to seed the peers with
 	wg.Add(3)
 	go func() { testChainForkLightA = testChainBase.makeFork(forkLen, false, 1); wg.Done() }()
 	go func() { testChainForkLightB = testChainBase.makeFork(forkLen, false, 2); wg.Done() }()
 	go func() { testChainForkHeavy = testChainBase.makeFork(forkLen, true, 3); wg.Done() }()
 	wg.Wait()
+
+	// Generate the test peers used by the tests to avoid overloading during testing.
+	// These seemingly random chains are used in various downloader tests. We're just
+	// pre-generating them here.
+	chains := []*testChain{
+		testChainBase,
+		testChainForkLightA,
+		testChainForkLightB,
+		testChainForkHeavy,
+		testChainBase.shorten(1),
+		testChainBase.shorten(blockCacheMaxItems - 15),
+		testChainBase.shorten((blockCacheMaxItems - 15) / 2),
+		testChainBase.shorten(blockCacheMaxItems - 15 - 5),
+		testChainBase.shorten(MaxHeaderFetch),
+		testChainBase.shorten(800),
+		testChainBase.shorten(800 / 2),
+		testChainBase.shorten(800 / 3),
+		testChainBase.shorten(800 / 4),
+		testChainBase.shorten(800 / 5),
+		testChainBase.shorten(800 / 6),
+		testChainBase.shorten(800 / 7),
+		testChainBase.shorten(800 / 8),
+		testChainBase.shorten(3*fsHeaderSafetyNet + 256 + fsMinFullBlocks),
+		testChainBase.shorten(fsMinFullBlocks + 256 - 1),
+		testChainForkLightA.shorten(len(testChainBase.blocks) + 80),
+		testChainForkLightB.shorten(len(testChainBase.blocks) + 81),
+		testChainForkLightA.shorten(len(testChainBase.blocks) + MaxHeaderFetch),
+		testChainForkLightB.shorten(len(testChainBase.blocks) + MaxHeaderFetch),
+		testChainForkHeavy.shorten(len(testChainBase.blocks) + 79),
+	}
+	wg.Add(len(chains))
+	for _, chain := range chains {
+		go func(blocks []*types.Block) {
+			newTestBlockchain(blocks)
+			wg.Done()
+		}(chain.blocks[1:])
+	}
+	wg.Wait()
+
+	// Mark the chains pregenerated. Generating a new one will lead to a panic.
+	pregenerated = true
 }
 type testChain struct {
-	genesis  *types.Block
-	chain    []common.Hash
-	headerm  map[common.Hash]*types.Header
-	blockm   map[common.Hash]*types.Block
-	receiptm map[common.Hash][]*types.Receipt
-	tdm      map[common.Hash]*big.Int
+	blocks []*types.Block
 }
 // newTestChain creates a blockchain of the given length.
 func newTestChain(length int, genesis *types.Block) *testChain {
-	tc := new(testChain).copy(length)
-	tc.genesis = genesis
-	tc.chain = append(tc.chain, genesis.Hash())
-	tc.headerm[tc.genesis.Hash()] = tc.genesis.Header()
-	tc.tdm[tc.genesis.Hash()] = tc.genesis.Difficulty()
-	tc.blockm[tc.genesis.Hash()] = tc.genesis
+	tc := &testChain{
+		blocks: []*types.Block{genesis},
+	}
 	tc.generate(length-1, 0, genesis, false)
 	return tc
 }
 // makeFork creates a fork on top of the test chain.
 func (tc *testChain) makeFork(length int, heavy bool, seed byte) *testChain {
-	fork := tc.copy(tc.len() + length)
-	fork.generate(length, seed, tc.headBlock(), heavy)
+	fork := tc.copy(len(tc.blocks) + length)
+	fork.generate(length, seed, tc.blocks[len(tc.blocks)-1], heavy)
 	return fork
 }
 // shorten creates a copy of the chain with the given length. It panics if the
 // length is longer than the number of available blocks.
 func (tc *testChain) shorten(length int) *testChain {
-	if length > tc.len() {
-		panic(fmt.Errorf("can't shorten test chain to %d blocks, it's only %d blocks long", length, tc.len()))
+	if length > len(tc.blocks) {
+		panic(fmt.Errorf("can't shorten test chain to %d blocks, it's only %d blocks long", length, len(tc.blocks)))
 	}
 	return tc.copy(length)
 }
 func (tc *testChain) copy(newlen int) *testChain {
-	cpy := &testChain{
-		genesis:  tc.genesis,
-		headerm:  make(map[common.Hash]*types.Header, newlen),
-		blockm:   make(map[common.Hash]*types.Block, newlen),
-		receiptm: make(map[common.Hash][]*types.Receipt, newlen),
-		tdm:      make(map[common.Hash]*big.Int, newlen),
-	}
-	for i := 0; i < len(tc.chain) && i < newlen; i++ {
-		hash := tc.chain[i]
-		cpy.chain = append(cpy.chain, tc.chain[i])
-		cpy.tdm[hash] = tc.tdm[hash]
-		cpy.blockm[hash] = tc.blockm[hash]
-		cpy.headerm[hash] = tc.headerm[hash]
-		cpy.receiptm[hash] = tc.receiptm[hash]
-	}
+	if newlen > len(tc.blocks) {
+		newlen = len(tc.blocks)
+	}
+	cpy := &testChain{
+		blocks: append([]*types.Block{}, tc.blocks[:newlen]...),
+	}
 	return cpy
 }
@ -115,17 +154,14 @@ func (tc *testChain) copy(newlen int) *testChain {
 // contains a transaction and every 5th an uncle to allow testing correct block
 // reassembly.
 func (tc *testChain) generate(n int, seed byte, parent *types.Block, heavy bool) {
-	// start := time.Now()
-	// defer func() { fmt.Printf("test chain generated in %v\n", time.Since(start)) }()
-	blocks, receipts := core.GenerateChain(params.TestChainConfig, parent, ethash.NewFaker(), testDB, n, func(i int, block *core.BlockGen) {
+	blocks, _ := core.GenerateChain(params.TestChainConfig, parent, ethash.NewFaker(), testDB, n, func(i int, block *core.BlockGen) {
 		block.SetCoinbase(common.Address{seed})
 		// If a heavy chain is requested, delay blocks to raise difficulty
 		if heavy {
-			block.OffsetTime(-1)
+			block.OffsetTime(-9)
 		}
 		// Include transactions to the miner to make blocks more interesting.
-		if parent == tc.genesis && i%22 == 0 {
+		if parent == tc.blocks[0] && i%22 == 0 {
 			signer := types.MakeSigner(params.TestChainConfig, block.Number())
 			tx, err := types.SignTx(types.NewTransaction(block.TxNonce(testAddress), common.Address{seed}, big.NewInt(1000), params.TxGas, block.BaseFee(), nil), signer, testKey)
 			if err != nil {
@ -136,95 +172,56 @@ func (tc *testChain) generate(n int, seed byte, parent *types.Block, heavy bool)
 		// if the block number is a multiple of 5, add a bonus uncle to the block
 		if i > 0 && i%5 == 0 {
 			block.AddUncle(&types.Header{
-				ParentHash: block.PrevBlock(i - 1).Hash(),
+				ParentHash: block.PrevBlock(i - 2).Hash(),
 				Number:     big.NewInt(block.Number().Int64() - 1),
 			})
 		}
 	})
-	// Convert the block-chain into a hash-chain and header/block maps
-	td := new(big.Int).Set(tc.td(parent.Hash()))
-	for i, b := range blocks {
-		td := td.Add(td, b.Difficulty())
-
-		hash := b.Hash()
-		tc.chain = append(tc.chain, hash)
-		tc.blockm[hash] = b
-		tc.headerm[hash] = b.Header()
-		tc.receiptm[hash] = receipts[i]
-		tc.tdm[hash] = new(big.Int).Set(td)
-	}
+	tc.blocks = append(tc.blocks, blocks...)
 }
-// len returns the total number of blocks in the chain.
-func (tc *testChain) len() int {
-	return len(tc.chain)
-}
-
-// headBlock returns the head of the chain.
-func (tc *testChain) headBlock() *types.Block {
-	return tc.blockm[tc.chain[len(tc.chain)-1]]
-}
-
-// td returns the total difficulty of the given block.
-func (tc *testChain) td(hash common.Hash) *big.Int {
-	return tc.tdm[hash]
-}
-
-// headersByHash returns headers in order from the given hash.
-func (tc *testChain) headersByHash(origin common.Hash, amount int, skip int, reverse bool) []*types.Header {
-	num, _ := tc.hashToNumber(origin)
-	return tc.headersByNumber(num, amount, skip, reverse)
-}
-
-// headersByNumber returns headers from the given number.
-func (tc *testChain) headersByNumber(origin uint64, amount int, skip int, reverse bool) []*types.Header {
-	result := make([]*types.Header, 0, amount)
-	if !reverse {
-		for num := origin; num < uint64(len(tc.chain)) && len(result) < amount; num += uint64(skip) + 1 {
-			if header, ok := tc.headerm[tc.chain[int(num)]]; ok {
-				result = append(result, header)
-			}
-		}
-	} else {
-		for num := int64(origin); num >= 0 && len(result) < amount; num -= int64(skip) + 1 {
-			if header, ok := tc.headerm[tc.chain[int(num)]]; ok {
-				result = append(result, header)
-			}
-		}
-	}
-	return result
-}
-
-// receipts returns the receipts of the given block hashes.
-func (tc *testChain) receipts(hashes []common.Hash) [][]*types.Receipt {
-	results := make([][]*types.Receipt, 0, len(hashes))
-	for _, hash := range hashes {
-		if receipt, ok := tc.receiptm[hash]; ok {
-			results = append(results, receipt)
-		}
-	}
-	return results
-}
-
-// bodies returns the block bodies of the given block hashes.
-func (tc *testChain) bodies(hashes []common.Hash) ([][]*types.Transaction, [][]*types.Header) {
-	transactions := make([][]*types.Transaction, 0, len(hashes))
-	uncles := make([][]*types.Header, 0, len(hashes))
-	for _, hash := range hashes {
-		if block, ok := tc.blockm[hash]; ok {
-			transactions = append(transactions, block.Transactions())
-			uncles = append(uncles, block.Uncles())
-		}
-	}
-	return transactions, uncles
-}
-
-func (tc *testChain) hashToNumber(target common.Hash) (uint64, bool) {
-	for num, hash := range tc.chain {
-		if hash == target {
-			return uint64(num), true
-		}
-	}
-	return 0, false
-}
+var (
+	testBlockchains     = make(map[common.Hash]*testBlockchain)
+	testBlockchainsLock sync.Mutex
+)
+
+type testBlockchain struct {
+	chain *core.BlockChain
+	gen   sync.Once
+}
+
+// newTestBlockchain creates a blockchain database built by running the given blocks,
+// either actually running them, or reusing a previously created one. The returned
+// chains are *shared*, so *do not* mutate them.
+func newTestBlockchain(blocks []*types.Block) *core.BlockChain {
+	// Retrieve an existing database, or create a new one
+	head := testGenesis.Hash()
+	if len(blocks) > 0 {
+		head = blocks[len(blocks)-1].Hash()
+	}
+	testBlockchainsLock.Lock()
+	if _, ok := testBlockchains[head]; !ok {
+		testBlockchains[head] = new(testBlockchain)
+	}
+	tbc := testBlockchains[head]
+	testBlockchainsLock.Unlock()
+
+	// Ensure that the database is generated
+	tbc.gen.Do(func() {
+		if pregenerated {
+			panic("Requested chain generation outside of init")
+		}
+		db := rawdb.NewMemoryDatabase()
+		core.GenesisBlockForTesting(db, testAddress, big.NewInt(1000000000000000))
+
+		chain, err := core.NewBlockChain(db, nil, params.TestChainConfig, ethash.NewFaker(), vm.Config{}, nil, nil)
+		if err != nil {
+			panic(err)
+		}
+		if n, err := chain.InsertChain(blocks); err != nil {
+			panic(fmt.Sprintf("block %d: %v", n, err))
+		}
+		tbc.chain = chain
+	})
+	return tbc.chain
+}
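`newTestBlockchain` keys each shared chain on its head hash and guards generation with a `sync.Once`, so concurrent tests reuse one expensive fixture instead of rebuilding it. A minimal sketch of the same memoization pattern (hypothetical `fixture`/`get` names):

```go
package main

import (
	"fmt"
	"sync"
)

// fixture lazily builds an expensive shared value exactly once, even when
// many callers request it concurrently.
type fixture struct {
	gen   sync.Once
	value string
}

var (
	fixtures     = make(map[string]*fixture)
	fixturesLock sync.Mutex
)

// get returns the shared fixture for a key, generating it on first use.
// Callers must treat the result as read-only, exactly like the shared
// test chains above.
func get(key string) string {
	fixturesLock.Lock()
	f, ok := fixtures[key]
	if !ok {
		f = new(fixture)
		fixtures[key] = f
	}
	fixturesLock.Unlock()

	f.gen.Do(func() {
		f.value = "generated:" + key // stand-in for chain generation
	})
	return f.value
}

func main() {
	fmt.Println(get("base"))
	fmt.Println(get("base")) // second call reuses the cached value
}
```

The mutex only protects the map lookup; the `sync.Once` serializes the expensive generation per entry without holding the global lock.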

View File

@ -1,79 +0,0 @@
-// Copyright 2015 The go-ethereum Authors
-// This file is part of the go-ethereum library.
-//
-// The go-ethereum library is free software: you can redistribute it and/or modify
-// it under the terms of the GNU Lesser General Public License as published by
-// the Free Software Foundation, either version 3 of the License, or
-// (at your option) any later version.
-//
-// The go-ethereum library is distributed in the hope that it will be useful,
-// but WITHOUT ANY WARRANTY; without even the implied warranty of
-// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-// GNU Lesser General Public License for more details.
-//
-// You should have received a copy of the GNU Lesser General Public License
-// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
-
-package downloader
-
-import (
-	"fmt"
-
-	"github.com/ethereum/go-ethereum/core/types"
-)
-
-// peerDropFn is a callback type for dropping a peer detected as malicious.
-type peerDropFn func(id string)
-
-// dataPack is a data message returned by a peer for some query.
-type dataPack interface {
-	PeerId() string
-	Items() int
-	Stats() string
-}
-
-// headerPack is a batch of block headers returned by a peer.
-type headerPack struct {
-	peerID  string
-	headers []*types.Header
-}
-
-func (p *headerPack) PeerId() string { return p.peerID }
-func (p *headerPack) Items() int     { return len(p.headers) }
-func (p *headerPack) Stats() string  { return fmt.Sprintf("%d", len(p.headers)) }
-
-// bodyPack is a batch of block bodies returned by a peer.
-type bodyPack struct {
-	peerID       string
-	transactions [][]*types.Transaction
-	uncles       [][]*types.Header
-}
-
-func (p *bodyPack) PeerId() string { return p.peerID }
-func (p *bodyPack) Items() int {
-	if len(p.transactions) <= len(p.uncles) {
-		return len(p.transactions)
-	}
-	return len(p.uncles)
-}
-func (p *bodyPack) Stats() string { return fmt.Sprintf("%d:%d", len(p.transactions), len(p.uncles)) }
-
-// receiptPack is a batch of receipts returned by a peer.
-type receiptPack struct {
-	peerID   string
-	receipts [][]*types.Receipt
-}
-
-func (p *receiptPack) PeerId() string { return p.peerID }
-func (p *receiptPack) Items() int     { return len(p.receipts) }
-func (p *receiptPack) Stats() string  { return fmt.Sprintf("%d", len(p.receipts)) }
-
-// statePack is a batch of states returned by a peer.
-type statePack struct {
-	peerID string
-	states [][]byte
-}
-
-func (p *statePack) PeerId() string { return p.peerID }
-func (p *statePack) Items() int     { return len(p.states) }
-func (p *statePack) Stats() string  { return fmt.Sprintf("%d", len(p.states)) }
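All four pack types in the deleted file satisfied the `dataPack` interface, which let the old downloader route any peer response through a single delivery path. A compact sketch of that polymorphic handling (simplified stand-in types, not the geth originals):

```go
package main

import "fmt"

// dataPack mirrors the deleted interface: a peer response with an id,
// an item count and a printable stat line.
type dataPack interface {
	PeerId() string
	Items() int
	Stats() string
}

type headerPack struct {
	peerID  string
	headers []string // simplified stand-in for []*types.Header
}

func (p *headerPack) PeerId() string { return p.peerID }
func (p *headerPack) Items() int     { return len(p.headers) }
func (p *headerPack) Stats() string  { return fmt.Sprintf("%d", len(p.headers)) }

// deliver handles any pack kind uniformly, which is all the downloader
// needed from the interface before the typed request/response rework.
func deliver(packs chan dataPack) {
	for pack := range packs {
		fmt.Printf("peer %s delivered %s items\n", pack.PeerId(), pack.Stats())
	}
}

func main() {
	packs := make(chan dataPack, 1)
	packs <- &headerPack{peerID: "peer-a", headers: []string{"h1", "h2"}}
	close(packs)
	deliver(packs)
}
```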

View File

@ -27,6 +27,7 @@ import (
"github.com/ethereum/go-ethereum/common" "github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/consensus" "github.com/ethereum/go-ethereum/consensus"
"github.com/ethereum/go-ethereum/consensus/beacon"
"github.com/ethereum/go-ethereum/consensus/clique" "github.com/ethereum/go-ethereum/consensus/clique"
"github.com/ethereum/go-ethereum/consensus/ethash" "github.com/ethereum/go-ethereum/consensus/ethash"
"github.com/ethereum/go-ethereum/core" "github.com/ethereum/go-ethereum/core"
@ -204,15 +205,18 @@ type Config struct {
 	// Arrow Glacier block override (TODO: remove after the fork)
 	OverrideArrowGlacier *big.Int `toml:",omitempty"`
+
+	// OverrideTerminalTotalDifficulty (TODO: remove after the fork)
+	OverrideTerminalTotalDifficulty *big.Int `toml:",omitempty"`
 }
 // CreateConsensusEngine creates a consensus engine for the given chain configuration.
 func CreateConsensusEngine(stack *node.Node, chainConfig *params.ChainConfig, config *ethash.Config, notify []string, noverify bool, db ethdb.Database) consensus.Engine {
 	// If proof-of-authority is requested, set it up
+	var engine consensus.Engine
 	if chainConfig.Clique != nil {
-		return clique.New(chainConfig.Clique, db)
-	}
-	// Otherwise assume proof-of-work
+		engine = clique.New(chainConfig.Clique, db)
+	} else {
 	switch config.PowMode {
 	case ethash.ModeFake:
 		log.Warn("Ethash used in fake mode")
@ -221,7 +225,7 @@ func CreateConsensusEngine(stack *node.Node, chainConfig *params.ChainConfig, co
 	case ethash.ModeShared:
 		log.Warn("Ethash used in shared mode")
 	}
-	engine := ethash.New(ethash.Config{
+	engine = ethash.New(ethash.Config{
 		PowMode:     config.PowMode,
 		CacheDir:    stack.ResolvePath(config.CacheDir),
 		CachesInMem: config.CachesInMem,
@ -233,6 +237,7 @@ func CreateConsensusEngine(stack *node.Node, chainConfig *params.ChainConfig, co
 		DatasetsLockMmap: config.DatasetsLockMmap,
 		NotifyFull:       config.NotifyFull,
 	}, notify, noverify)
-	engine.SetThreads(-1) // Disable CPU mining
-	return engine
+	engine.(*ethash.Ethash).SetThreads(-1) // Disable CPU mining
+	}
+	return beacon.New(engine)
 }
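`CreateConsensusEngine` now always returns `beacon.New(engine)`: the legacy clique or ethash engine keeps verifying pre-merge headers, while the beacon wrapper takes over once the transition is reached. A toy sketch of the wrapping shape (hypothetical `Engine` interface, not the real `consensus.Engine`):

```go
package main

import "fmt"

// Engine is a trimmed stand-in for consensus.Engine.
type Engine interface {
	Name() string
}

type ethashEngine struct{}

func (ethashEngine) Name() string { return "ethash" }

// beaconEngine wraps a legacy engine, mirroring beacon.New(engine):
// pre-merge headers are still checked by the wrapped engine, while
// post-merge headers follow the beacon rules.
type beaconEngine struct {
	legacy Engine
}

func (b beaconEngine) Name() string { return "beacon(" + b.legacy.Name() + ")" }

func newBeacon(inner Engine) Engine { return beaconEngine{legacy: inner} }

func main() {
	// Every configuration now flows through the beacon wrapper, just as
	// the diff above returns beacon.New(engine) for both the clique and
	// the ethash branches.
	engine := newBeacon(ethashEngine{})
	fmt.Println(engine.Name()) // beacon(ethash)
}
```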

View File

@ -60,6 +60,7 @@ func (c Config) MarshalTOML() (interface{}, error) {
 		Checkpoint           *params.TrustedCheckpoint      `toml:",omitempty"`
 		CheckpointOracle     *params.CheckpointOracleConfig `toml:",omitempty"`
 		OverrideArrowGlacier *big.Int                       `toml:",omitempty"`
+		OverrideTerminalTotalDifficulty *big.Int            `toml:",omitempty"`
 	}
 	var enc Config
 	enc.Genesis = c.Genesis
@ -104,6 +105,7 @@ func (c Config) MarshalTOML() (interface{}, error) {
 	enc.Checkpoint = c.Checkpoint
 	enc.CheckpointOracle = c.CheckpointOracle
 	enc.OverrideArrowGlacier = c.OverrideArrowGlacier
+	enc.OverrideTerminalTotalDifficulty = c.OverrideTerminalTotalDifficulty
 	return &enc, nil
 }
@ -152,6 +154,7 @@ func (c *Config) UnmarshalTOML(unmarshal func(interface{}) error) error {
 		Checkpoint           *params.TrustedCheckpoint      `toml:",omitempty"`
 		CheckpointOracle     *params.CheckpointOracleConfig `toml:",omitempty"`
 		OverrideArrowGlacier *big.Int                       `toml:",omitempty"`
+		OverrideTerminalTotalDifficulty *big.Int            `toml:",omitempty"`
 	}
 	var dec Config
 	if err := unmarshal(&dec); err != nil {
@ -283,5 +286,8 @@ func (c *Config) UnmarshalTOML(unmarshal func(interface{}) error) error {
 	if dec.OverrideArrowGlacier != nil {
 		c.OverrideArrowGlacier = dec.OverrideArrowGlacier
 	}
+	if dec.OverrideTerminalTotalDifficulty != nil {
+		c.OverrideTerminalTotalDifficulty = dec.OverrideTerminalTotalDifficulty
+	}
 	return nil
 }
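The overrides survive the config round trip only when they are actually set, because they are pointer fields tagged `",omitempty"` and the decoder copies them only when non-nil. A runnable sketch of that pattern (standard-library JSON stands in for the TOML encoder that geth's generated code targets):

```go
package main

import (
	"encoding/json"
	"fmt"
	"math/big"
)

// config mirrors the pattern in the generated code above: optional
// overrides are pointers tagged omitempty, so an unset override is
// simply absent from the encoded file instead of serialising as zero.
type config struct {
	OverrideArrowGlacier            *big.Int `json:",omitempty"`
	OverrideTerminalTotalDifficulty *big.Int `json:",omitempty"`
}

func main() {
	unset, _ := json.Marshal(config{})
	fmt.Println(string(unset)) // {} - nothing to apply on decode

	set, _ := json.Marshal(config{OverrideTerminalTotalDifficulty: big.NewInt(0)})
	fmt.Println(string(set)) // an explicit zero TTD still round-trips

	// On decode, only non-nil pointers are copied over, matching the
	// `if dec.OverrideTerminalTotalDifficulty != nil` guard above, so an
	// absent key can never clobber a configured value with zero.
}
```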

View File

@ -26,6 +26,7 @@ import (
"github.com/ethereum/go-ethereum/common/prque" "github.com/ethereum/go-ethereum/common/prque"
"github.com/ethereum/go-ethereum/consensus" "github.com/ethereum/go-ethereum/consensus"
"github.com/ethereum/go-ethereum/core/types" "github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/eth/protocols/eth"
"github.com/ethereum/go-ethereum/log" "github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/metrics" "github.com/ethereum/go-ethereum/metrics"
"github.com/ethereum/go-ethereum/trie" "github.com/ethereum/go-ethereum/trie"
@ -74,10 +75,10 @@ type HeaderRetrievalFn func(common.Hash) *types.Header
 type blockRetrievalFn func(common.Hash) *types.Block

 // headerRequesterFn is a callback type for sending a header retrieval request.
-type headerRequesterFn func(common.Hash) error
+type headerRequesterFn func(common.Hash, chan *eth.Response) (*eth.Request, error)

 // bodyRequesterFn is a callback type for sending a body retrieval request.
-type bodyRequesterFn func([]common.Hash) error
+type bodyRequesterFn func([]common.Hash, chan *eth.Response) (*eth.Request, error)

 // headerVerifierFn is a callback type to verify a block's header for fast propagation.
 type headerVerifierFn func(header *types.Header) error
@ -461,15 +462,28 @@ func (f *BlockFetcher) loop() {
 			// Create a closure of the fetch and schedule in on a new thread
 			fetchHeader, hashes := f.fetching[hashes[0]].fetchHeader, hashes
-			go func() {
+			go func(peer string) {
 				if f.fetchingHook != nil {
 					f.fetchingHook(hashes)
 				}
 				for _, hash := range hashes {
 					headerFetchMeter.Mark(1)
-					fetchHeader(hash) // Suboptimal, but protocol doesn't allow batch header retrievals
+					go func(hash common.Hash) {
+						resCh := make(chan *eth.Response)
+
+						req, err := fetchHeader(hash, resCh)
+						if err != nil {
+							return // Legacy code, yolo
+						}
+						defer req.Close()
+
+						res := <-resCh
+						res.Done <- nil
+
+						f.FilterHeaders(peer, *res.Res.(*eth.BlockHeadersPacket), time.Now().Add(res.Time))
+					}(hash)
 				}
-			}()
+			}(peer)
 		}
 		// Schedule the next fetch if blocks are still pending
 		f.rescheduleFetch(fetchTimer)
@ -497,8 +511,24 @@ func (f *BlockFetcher) loop() {
 			if f.completingHook != nil {
 				f.completingHook(hashes)
 			}
+			fetchBodies := f.completing[hashes[0]].fetchBodies
 			bodyFetchMeter.Mark(int64(len(hashes)))
-			go f.completing[hashes[0]].fetchBodies(hashes)
+
+			go func(peer string, hashes []common.Hash) {
+				resCh := make(chan *eth.Response)
+
+				req, err := fetchBodies(hashes, resCh)
+				if err != nil {
+					return // Legacy code, yolo
+				}
+				defer req.Close()
+
+				res := <-resCh
+				res.Done <- nil
+
+				txs, uncles := res.Res.(*eth.BlockBodiesPacket).Unpack()
+				f.FilterBodies(peer, txs, uncles, time.Now())
+			}(peer, hashes)
 		}
 		// Schedule the next fetch if blocks are still pending
 		f.rescheduleComplete(completeTimer)
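Both fetch paths now follow the same idiom: hand the peer a sink channel, keep the returned request handle for cancellation, and acknowledge the delivery on `res.Done`. A self-contained sketch of that request/response shape (hypothetical `request`/`response` types, not the `eth` package's):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// response carries a delivered payload plus a done channel the consumer
// uses to acknowledge (or reject) the delivery.
type response struct {
	res  interface{}
	done chan error
}

// request is the requester's cancellation handle.
type request struct{ cancel chan struct{} }

func (r *request) Close() { close(r.cancel) }

// fetch sends a canned answer on the sink, as a served network request would.
func fetch(sink chan *response) (*request, error) {
	req := &request{cancel: make(chan struct{})}
	go func() {
		res := &response{res: "headers", done: make(chan error, 1)}
		select {
		case sink <- res:
		case <-req.cancel: // requester gave up; drop the response
		}
	}()
	return req, nil
}

func main() {
	sink := make(chan *response)
	req, err := fetch(sink)
	if err != nil {
		panic(err)
	}
	defer req.Close()

	select {
	case res := <-sink:
		res.done <- nil // signal the delivery was consumed
		fmt.Println("got:", res.res)
	case <-time.After(time.Second):
		fmt.Println(errors.New("fetch timed out"))
	}
}
```

Because the sink is unbuffered and the sender also watches the cancel channel, a requester that times out and closes the request cannot leak the serving goroutine.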

View File

@ -30,6 +30,7 @@ import (
"github.com/ethereum/go-ethereum/core/rawdb" "github.com/ethereum/go-ethereum/core/rawdb"
"github.com/ethereum/go-ethereum/core/types" "github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/crypto" "github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/eth/protocols/eth"
"github.com/ethereum/go-ethereum/params" "github.com/ethereum/go-ethereum/params"
"github.com/ethereum/go-ethereum/trie" "github.com/ethereum/go-ethereum/trie"
) )
@ -60,8 +61,8 @@ func makeChain(n int, seed byte, parent *types.Block) ([]common.Hash, map[common
 			block.AddTx(tx)
 		}
 		// If the block number is a multiple of 5, add a bonus uncle to the block
-		if i%5 == 0 {
-			block.AddUncle(&types.Header{ParentHash: block.PrevBlock(i - 1).Hash(), Number: big.NewInt(int64(i - 1))})
+		if i > 0 && i%5 == 0 {
+			block.AddUncle(&types.Header{ParentHash: block.PrevBlock(i - 2).Hash(), Number: big.NewInt(int64(i - 1))})
 		}
 	})
 	hashes := make([]common.Hash, n+1)
@ -195,16 +196,26 @@ func (f *fetcherTester) makeHeaderFetcher(peer string, blocks map[common.Hash]*t
 		closure[hash] = block
 	}
 	// Create a function that returns a header from the closure
-	return func(hash common.Hash) error {
+	return func(hash common.Hash, sink chan *eth.Response) (*eth.Request, error) {
 		// Gather the blocks to return
 		headers := make([]*types.Header, 0, 1)
 		if block, ok := closure[hash]; ok {
 			headers = append(headers, block.Header())
 		}
 		// Return on a new thread
-		go f.fetcher.FilterHeaders(peer, headers, time.Now().Add(drift))
-
-		return nil
+		req := &eth.Request{
+			Peer: peer,
+		}
+		res := &eth.Response{
+			Req:  req,
+			Res:  (*eth.BlockHeadersPacket)(&headers),
+			Time: drift,
+			Done: make(chan error, 1), // Ignore the returned status
+		}
+		go func() {
+			sink <- res
+		}()
+		return req, nil
 	}
 }
@ -215,7 +226,7 @@ func (f *fetcherTester) makeBodyFetcher(peer string, blocks map[common.Hash]*typ
 		closure[hash] = block
 	}
 	// Create a function that returns blocks from the closure
-	return func(hashes []common.Hash) error {
+	return func(hashes []common.Hash, sink chan *eth.Response) (*eth.Request, error) {
 		// Gather the block bodies to return
 		transactions := make([][]*types.Transaction, 0, len(hashes))
 		uncles := make([][]*types.Header, 0, len(hashes))
@ -227,14 +238,33 @@ func (f *fetcherTester) makeBodyFetcher(peer string, blocks map[common.Hash]*typ
 			}
 		}
 		// Return on a new thread
-		go f.fetcher.FilterBodies(peer, transactions, uncles, time.Now().Add(drift))
-
-		return nil
+		bodies := make([]*eth.BlockBody, len(transactions))
+		for i, txs := range transactions {
+			bodies[i] = &eth.BlockBody{
+				Transactions: txs,
+				Uncles:       uncles[i],
+			}
+		}
+		req := &eth.Request{
+			Peer: peer,
+		}
+		res := &eth.Response{
+			Req:  req,
+			Res:  (*eth.BlockBodiesPacket)(&bodies),
+			Time: drift,
+			Done: make(chan error, 1), // Ignore the returned status
+		}
+		go func() {
+			sink <- res
+		}()
+		return req, nil
 	}
 }
 // verifyFetchingEvent verifies that one single event arrives on a fetching channel.
 func verifyFetchingEvent(t *testing.T, fetching chan []common.Hash, arrive bool) {
+	t.Helper()
+
 	if arrive {
 		select {
 		case <-fetching:
@ -252,6 +282,8 @@ func verifyFetchingEvent(t *testing.T, fetching chan []common.Hash, arrive bool)
 // verifyCompletingEvent verifies that one single event arrives on a completing channel.
 func verifyCompletingEvent(t *testing.T, completing chan []common.Hash, arrive bool) {
+	t.Helper()
+
 	if arrive {
 		select {
 		case <-completing:
@ -269,6 +301,8 @@ func verifyCompletingEvent(t *testing.T, completing chan []common.Hash, arrive b
 // verifyImportEvent verifies that one single event arrives on an import channel.
 func verifyImportEvent(t *testing.T, imported chan interface{}, arrive bool) {
+	t.Helper()
+
 	if arrive {
 		select {
 		case <-imported:
@ -287,6 +321,8 @@ func verifyImportEvent(t *testing.T, imported chan interface{}, arrive bool) {
 // verifyImportCount verifies that exactly count number of events arrive on an
 // import hook channel.
 func verifyImportCount(t *testing.T, imported chan interface{}, count int) {
+	t.Helper()
+
 	for i := 0; i < count; i++ {
 		select {
 		case <-imported:
@ -299,6 +335,8 @@ func verifyImportCount(t *testing.T, imported chan interface{}, count int) {
 // verifyImportDone verifies that no more events are arriving on an import channel.
 func verifyImportDone(t *testing.T, imported chan interface{}) {
+	t.Helper()
+
 	select {
 	case <-imported:
 		t.Fatalf("extra block imported")
@ -308,6 +346,8 @@ func verifyImportDone(t *testing.T, imported chan interface{}) {
 // verifyChainHeight verifies the chain height is as expected.
 func verifyChainHeight(t *testing.T, fetcher *fetcherTester, height uint64) {
+	t.Helper()
+
 	if fetcher.chainHeight() != height {
 		t.Fatalf("chain height mismatch, got %d, want %d", fetcher.chainHeight(), height)
 	}
@ -368,13 +408,13 @@ func testConcurrentAnnouncements(t *testing.T, light bool) {
 	secondBodyFetcher := tester.makeBodyFetcher("second", blocks, 0)

 	counter := uint32(0)
-	firstHeaderWrapper := func(hash common.Hash) error {
+	firstHeaderWrapper := func(hash common.Hash, sink chan *eth.Response) (*eth.Request, error) {
 		atomic.AddUint32(&counter, 1)
-		return firstHeaderFetcher(hash)
+		return firstHeaderFetcher(hash, sink)
 	}
-	secondHeaderWrapper := func(hash common.Hash) error {
+	secondHeaderWrapper := func(hash common.Hash, sink chan *eth.Response) (*eth.Request, error) {
 		atomic.AddUint32(&counter, 1)
-		return secondHeaderFetcher(hash)
+		return secondHeaderFetcher(hash, sink)
 	}
 	// Iteratively announce blocks until all are imported
 	imported := make(chan interface{})
@ -468,15 +508,20 @@ func testPendingDeduplication(t *testing.T, light bool) {
 	delay := 50 * time.Millisecond

 	counter := uint32(0)
-	headerWrapper := func(hash common.Hash) error {
+	headerWrapper := func(hash common.Hash, sink chan *eth.Response) (*eth.Request, error) {
 		atomic.AddUint32(&counter, 1)
 		// Simulate a long running fetch
-		go func() {
-			time.Sleep(delay)
-			headerFetcher(hash)
-		}()
-		return nil
+		resink := make(chan *eth.Response)
+		req, err := headerFetcher(hash, resink)
+		if err == nil {
+			go func() {
+				res := <-resink
+				time.Sleep(delay)
+				sink <- res
+			}()
+		}
+		return req, err
 	}
 	checkNonExist := func() bool {
 		return tester.getBlock(hashes[0]) == nil
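The wrapper above keeps a request in flight by parking the response in a goroutine before forwarding it, which is what lets the test assert that a duplicate announcement does not start a second fetch. A minimal sketch of such a delaying wrapper (hypothetical `fetchFn` type):

```go
package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

type fetchFn func(sink chan string) error

// delayed wraps a fetcher so its response arrives only after d, while
// counting how many fetches were actually started; the same trick the
// deduplication test uses to keep a request in flight.
func delayed(inner fetchFn, d time.Duration, calls *uint32) fetchFn {
	return func(sink chan string) error {
		atomic.AddUint32(calls, 1)
		proxy := make(chan string, 1)
		if err := inner(proxy); err != nil {
			return err
		}
		go func() {
			res := <-proxy
			time.Sleep(d) // hold the response back
			sink <- res
		}()
		return nil
	}
}

func main() {
	var calls uint32
	inner := func(sink chan string) error { sink <- "header"; return nil }
	fetch := delayed(inner, 50*time.Millisecond, &calls)

	sink := make(chan string, 1)
	_ = fetch(sink)
	fmt.Println(<-sink, "fetches:", atomic.LoadUint32(&calls))
}
```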

View File

@ -29,7 +29,6 @@ import (
"github.com/ethereum/go-ethereum/common" "github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil" "github.com/ethereum/go-ethereum/common/hexutil"
"github.com/ethereum/go-ethereum/core/types" "github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/ethdb"
"github.com/ethereum/go-ethereum/event" "github.com/ethereum/go-ethereum/event"
"github.com/ethereum/go-ethereum/rpc" "github.com/ethereum/go-ethereum/rpc"
) )
@ -51,7 +50,6 @@ type PublicFilterAPI struct {
 	backend   Backend
 	mux       *event.TypeMux
 	quit      chan struct{}
-	chainDb   ethdb.Database
 	events    *EventSystem
 	filtersMu sync.Mutex
 	filters   map[rpc.ID]*filter
@ -62,7 +60,6 @@ type PublicFilterAPI struct {
 func NewPublicFilterAPI(backend Backend, lightMode bool, timeout time.Duration) *PublicFilterAPI {
 	api := &PublicFilterAPI{
 		backend: backend,
-		chainDb: backend.ChainDb(),
 		events:  NewEventSystem(backend, lightMode),
 		filters: make(map[rpc.ID]*filter),
 		timeout: timeout,

View File

@ -144,7 +144,7 @@ func newTestBackend(t *testing.T, londonBlock *big.Int, pending bool) *testBacke
 	// Construct testing chain
 	diskdb := rawdb.NewMemoryDatabase()
 	gspec.Commit(diskdb)
-	chain, err := core.NewBlockChain(diskdb, &core.CacheConfig{TrieCleanNoPrefetch: true}, &config, engine, vm.Config{}, nil, nil)
+	chain, err := core.NewBlockChain(diskdb, &core.CacheConfig{TrieCleanNoPrefetch: true}, gspec.Config, engine, vm.Config{}, nil, nil)
 	if err != nil {
 		t.Fatalf("Failed to create local chain, %v", err)
 	}

View File

@ -25,6 +25,8 @@ import (
"time" "time"
"github.com/ethereum/go-ethereum/common" "github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/consensus"
"github.com/ethereum/go-ethereum/consensus/beacon"
"github.com/ethereum/go-ethereum/core" "github.com/ethereum/go-ethereum/core"
"github.com/ethereum/go-ethereum/core/forkid" "github.com/ethereum/go-ethereum/core/forkid"
"github.com/ethereum/go-ethereum/core/types" "github.com/ethereum/go-ethereum/core/types"
@ -37,7 +39,6 @@ import (
"github.com/ethereum/go-ethereum/log" "github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/p2p" "github.com/ethereum/go-ethereum/p2p"
"github.com/ethereum/go-ethereum/params" "github.com/ethereum/go-ethereum/params"
"github.com/ethereum/go-ethereum/trie"
) )
const ( const (
@ -79,9 +80,10 @@ type handlerConfig struct {
 	Database   ethdb.Database            // Database for direct sync insertions
 	Chain      *core.BlockChain          // Blockchain to serve data from
 	TxPool     txPool                    // Transaction pool to propagate from
+	Merger     *consensus.Merger         // The manager for eth1/2 transition
 	Network    uint64                    // Network identifier to advertise
-	Sync       downloader.SyncMode       // Whether to fast or full sync
-	BloomCache uint64                    // Megabytes to alloc for fast sync bloom
+	Sync       downloader.SyncMode       // Whether to snap or full sync
+	BloomCache uint64                    // Megabytes to alloc for snap sync bloom
 	EventMux   *event.TypeMux            // Legacy event mux, deprecate for `feed`
 	Checkpoint *params.TrustedCheckpoint // Hard coded checkpoint for sync challenges
 	Whitelist  map[uint64]common.Hash    // Hard coded whitelist for sync challenges
@ -91,8 +93,7 @@ type handler struct {
 	networkID  uint64
 	forkFilter forkid.Filter // Fork ID filter, constant across the lifetime of the node

-	fastSync  uint32 // Flag whether fast sync is enabled (gets disabled if we already have blocks)
-	snapSync  uint32 // Flag whether fast sync should operate on top of the snap protocol
+	snapSync  uint32 // Flag whether snap sync is enabled (gets disabled if we already have blocks)
 	acceptTxs uint32 // Flag whether we're considered synchronised (enables transaction processing)

 	checkpointNumber uint64 // Block number for the sync progress validator to cross reference
@ -104,10 +105,10 @@ type handler struct {
 	maxPeers int

 	downloader   *downloader.Downloader
-	stateBloom   *trie.SyncBloom
 	blockFetcher *fetcher.BlockFetcher
 	txFetcher    *fetcher.TxFetcher
 	peers        *peerSet
+	merger       *consensus.Merger

 	eventMux *event.TypeMux
 	txsCh    chan core.NewTxsEvent
@ -138,60 +139,80 @@ func newHandler(config *handlerConfig) (*handler, error) {
 		txpool:     config.TxPool,
 		chain:      config.Chain,
 		peers:      newPeerSet(),
+		merger:     config.Merger,
 		whitelist:  config.Whitelist,
 		quitSync:   make(chan struct{}),
 	}
 	if config.Sync == downloader.FullSync {
-		// The database seems empty as the current block is the genesis. Yet the fast
-		// block is ahead, so fast sync was enabled for this node at a certain point.
+		// The database seems empty as the current block is the genesis. Yet the snap
+		// block is ahead, so snap sync was enabled for this node at a certain point.
 		// The scenarios where this can happen are
-		// * if the user manually (or via a bad block) rolled back a fast sync node
+		// * if the user manually (or via a bad block) rolled back a snap sync node
 		//   below the sync point.
-		// * the last fast sync is not finished while user specifies a full sync this
+		// * the last snap sync is not finished while user specifies a full sync this
 		//   time. But we don't have any recent state for full sync.
-		// In these cases however it's safe to reenable fast sync.
+		// In these cases however it's safe to reenable snap sync.
 		fullBlock, fastBlock := h.chain.CurrentBlock(), h.chain.CurrentFastBlock()
 		if fullBlock.NumberU64() == 0 && fastBlock.NumberU64() > 0 {
-			h.fastSync = uint32(1)
-			log.Warn("Switch sync mode from full sync to fast sync")
+			h.snapSync = uint32(1)
+			log.Warn("Switch sync mode from full sync to snap sync")
 		}
 	} else {
 		if h.chain.CurrentBlock().NumberU64() > 0 {
-			// Print warning log if database is not empty to run fast sync.
-			log.Warn("Switch sync mode from fast sync to full sync")
+			// Print warning log if database is not empty to run snap sync.
+			log.Warn("Switch sync mode from snap sync to full sync")
 		} else {
-			// If fast sync was requested and our database is empty, grant it
-			h.fastSync = uint32(1)
-			if config.Sync == downloader.SnapSync {
-				h.snapSync = uint32(1)
-			}
+			// If snap sync was requested and our database is empty, grant it
+			h.snapSync = uint32(1)
 		}
 	}
 	// If we have trusted checkpoints, enforce them on the chain
 	if config.Checkpoint != nil {
 		h.checkpointNumber = (config.Checkpoint.SectionIndex+1)*params.CHTFrequency - 1
 		h.checkpointHash = config.Checkpoint.SectionHead
 	}
-	// Construct the downloader (long sync) and its backing state bloom if fast
+	// Construct the downloader (long sync) and its backing state bloom if snap
 	// sync is requested. The downloader is responsible for deallocating the state
 	// bloom when it's done.
-	// Note: we don't enable it if snap-sync is performed, since it's very heavy
-	// and the heal-portion of the snap sync is much lighter than fast. What we particularly
-	// want to avoid, is a 90%-finished (but restarted) snap-sync to begin
-	// indexing the entire trie
-	if atomic.LoadUint32(&h.fastSync) == 1 && atomic.LoadUint32(&h.snapSync) == 0 {
-		h.stateBloom = trie.NewSyncBloom(config.BloomCache, config.Database)
-	}
-	h.downloader = downloader.New(h.checkpointNumber, config.Database, h.stateBloom, h.eventMux, h.chain, nil, h.removePeer)
+	h.downloader = downloader.New(h.checkpointNumber, config.Database, h.eventMux, h.chain, nil, h.removePeer)
 	// Construct the fetcher (short sync)
 	validator := func(header *types.Header) error {
+		// All the block fetcher activities should be disabled
+		// after the transition. Print the warning log.
+		if h.merger.PoSFinalized() {
+			log.Warn("Unexpected validation activity", "hash", header.Hash(), "number", header.Number)
+			return errors.New("unexpected behavior after transition")
+		}
+		// Reject all the PoS style headers in the first place. No matter
+		// the chain has finished the transition or not, the PoS headers
+		// should only come from the trusted consensus layer instead of
+		// the p2p network.
+		if beacon, ok := h.chain.Engine().(*beacon.Beacon); ok {
+			if beacon.IsPoSHeader(header) {
+				return errors.New("unexpected post-merge header")
+			}
+		}
 		return h.chain.Engine().VerifyHeader(h.chain, header, true)
 	}
 	heighter := func() uint64 {
 		return h.chain.CurrentBlock().NumberU64()
 	}
 	inserter := func(blocks types.Blocks) (int, error) {
+		// All the block fetcher activities should be disabled
+		// after the transition. Print the warning log.
+		if h.merger.PoSFinalized() {
+			var ctx []interface{}
+			ctx = append(ctx, "blocks", len(blocks))
+			if len(blocks) > 0 {
+				ctx = append(ctx, "firsthash", blocks[0].Hash())
+				ctx = append(ctx, "firstnumber", blocks[0].Number())
+				ctx = append(ctx, "lasthash", blocks[len(blocks)-1].Hash())
+				ctx = append(ctx, "lastnumber", blocks[len(blocks)-1].Number())
+			}
+			log.Warn("Unexpected insertion activity", ctx...)
+			return 0, errors.New("unexpected behavior after transition")
+		}
 		// If sync hasn't reached the checkpoint yet, deny importing weird blocks.
 		//
 		// Ideally we would also compare the head block's timestamp and similarly reject
@ -202,15 +223,38 @@ func newHandler(config *handlerConfig) (*handler, error) {
log.Warn("Unsynced yet, discarded propagated block", "number", blocks[0].Number(), "hash", blocks[0].Hash()) log.Warn("Unsynced yet, discarded propagated block", "number", blocks[0].Number(), "hash", blocks[0].Hash())
return 0, nil return 0, nil
} }
// If fast sync is running, deny importing weird blocks. This is a problematic // If snap sync is running, deny importing weird blocks. This is a problematic
// clause when starting up a new network, because fast-syncing miners might not // clause when starting up a new network, because snap-syncing miners might not
// accept each others' blocks until a restart. Unfortunately we haven't figured // accept each others' blocks until a restart. Unfortunately we haven't figured
// out a way yet where nodes can decide unilaterally whether the network is new // out a way yet where nodes can decide unilaterally whether the network is new
// or not. This should be fixed if we figure out a solution. // or not. This should be fixed if we figure out a solution.
if atomic.LoadUint32(&h.fastSync) == 1 { if atomic.LoadUint32(&h.snapSync) == 1 {
log.Warn("Fast syncing, discarded propagated block", "number", blocks[0].Number(), "hash", blocks[0].Hash()) log.Warn("Fast syncing, discarded propagated block", "number", blocks[0].Number(), "hash", blocks[0].Hash())
return 0, nil return 0, nil
} }
if h.merger.TDDReached() {
// The blocks from the p2p network is regarded as untrusted
// after the transition. In theory block gossip should be disabled
// entirely whenever the transition is started. But in order to
// handle the transition boundary reorg in the consensus-layer,
// the legacy blocks are still accepted, but only for the terminal
// pow blocks. Spec: https://github.com/ethereum/EIPs/blob/master/EIPS/eip-3675.md#halt-the-importing-of-pow-blocks
for i, block := range blocks {
ptd := h.chain.GetTd(block.ParentHash(), block.NumberU64()-1)
if ptd == nil {
return 0, nil
}
td := new(big.Int).Add(ptd, block.Difficulty())
if !h.chain.Config().IsTerminalPoWBlock(ptd, td) {
log.Info("Filtered out non-termimal pow block", "number", block.NumberU64(), "hash", block.Hash())
return 0, nil
}
if err := h.chain.InsertBlockWithoutSetHead(block); err != nil {
return i, err
}
}
return 0, nil
}
n, err := h.chain.InsertChain(blocks) n, err := h.chain.InsertChain(blocks)
if err == nil { if err == nil {
atomic.StoreUint32(&h.acceptTxs, 1) // Mark initial sync done on any fetcher import atomic.StoreUint32(&h.acceptTxs, 1) // Mark initial sync done on any fetcher import
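After the TDD is reached, the inserter above only lets a gossiped block through if it is a terminal PoW block per EIP-3675: the parent's total difficulty is still below the terminal total difficulty while the block's own total difficulty reaches it. A worked sketch of that predicate (hypothetical `isTerminalPoWBlock`, mirroring what `chain.Config().IsTerminalPoWBlock` checks):

```go
package main

import (
	"fmt"
	"math/big"
)

// isTerminalPoWBlock reports whether a block straddles the terminal total
// difficulty: its parent is still below the TTD while the block itself
// reaches it. This mirrors the guard used by the inserter above.
func isTerminalPoWBlock(ttd, parentTd, td *big.Int) bool {
	return parentTd.Cmp(ttd) < 0 && td.Cmp(ttd) >= 0
}

func main() {
	ttd := big.NewInt(1000)

	parentTd := big.NewInt(990)
	td := new(big.Int).Add(parentTd, big.NewInt(15)) // ptd + block difficulty

	fmt.Println(isTerminalPoWBlock(ttd, parentTd, td)) // true: the terminal block

	// A block whose parent already crossed the TTD is post-merge and is
	// filtered out, matching the log.Info branch in the diff.
	fmt.Println(isTerminalPoWBlock(ttd, big.NewInt(1005), big.NewInt(1020))) // false
}
```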
@ -308,30 +352,93 @@ func (h *handler) runEthPeer(peer *eth.Peer, handler eth.Handler) error {
// after this will be sent via broadcasts. // after this will be sent via broadcasts.
h.syncTransactions(peer) h.syncTransactions(peer)
// Create a notification channel for pending requests if the peer goes down
dead := make(chan struct{})
defer close(dead)
// If we have a trusted CHT, reject all peers below that (avoid fast sync eclipse)
if h.checkpointHash != (common.Hash{}) {
// Request the peer's checkpoint header for chain height/weight validation
resCh := make(chan *eth.Response)
if _, err := peer.RequestHeadersByNumber(h.checkpointNumber, 1, 0, false, resCh); err != nil {
return err
}
// Start a timer to disconnect if the peer doesn't reply in time
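// Each branch below should signal res.Done exactly once; judging by the
// eth request dispatcher, the error sent there is returned to the
// protocol handler, and a non-nil value tears the peer down.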
go func() {
timeout := time.NewTimer(syncChallengeTimeout)
defer timeout.Stop()
select {
case res := <-resCh:
headers := ([]*types.Header)(*res.Res.(*eth.BlockHeadersPacket))
if len(headers) == 0 {
// If we're doing a snap sync, we must enforce the checkpoint
// block to avoid eclipse attacks. Unsynced nodes are welcome
// to connect after we're done joining the network.
if atomic.LoadUint32(&h.snapSync) == 1 {
peer.Log().Warn("Dropping unsynced node during sync", "addr", peer.RemoteAddr(), "type", peer.Name())
res.Done <- errors.New("unsynced node cannot serve sync")
return
}
res.Done <- nil
return
}
// Validate the header and either drop the peer or continue
if len(headers) > 1 {
res.Done <- errors.New("too many headers in checkpoint response")
return
}
if headers[0].Hash() != h.checkpointHash {
res.Done <- errors.New("checkpoint hash mismatch")
return
}
res.Done <- nil
case <-timeout.C:
peer.Log().Warn("Checkpoint challenge timed out, dropping", "addr", peer.RemoteAddr(), "type", peer.Name()) peer.Log().Warn("Checkpoint challenge timed out, dropping", "addr", peer.RemoteAddr(), "type", peer.Name())
h.removePeer(peer.ID()) h.removePeer(peer.ID())
})
// Make sure it's cleaned up if the peer dies off case <-dead:
defer func() { // Peer handler terminated, abort all goroutines
if p.syncDrop != nil {
p.syncDrop.Stop()
p.syncDrop = nil
} }
}() }()
} }
// If we have any explicit whitelist block hashes, request them
for number, hash := range h.whitelist {
resCh := make(chan *eth.Response)
if _, err := peer.RequestHeadersByNumber(number, 1, 0, false, resCh); err != nil {
return err
}
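// The challenge runs in its own goroutine; number and hash are passed
// as arguments so every closure validates its own whitelist entry
// rather than the shared loop variables.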
go func(number uint64, hash common.Hash) {
timeout := time.NewTimer(syncChallengeTimeout)
defer timeout.Stop()
select {
case res := <-resCh:
headers := ([]*types.Header)(*res.Res.(*eth.BlockHeadersPacket))
if len(headers) == 0 {
// Whitelisted blocks are allowed to be missing if the remote
// node is not yet synced
res.Done <- nil
return
}
// Validate the header and either drop the peer or continue
if len(headers) > 1 {
res.Done <- errors.New("too many headers in whitelist response")
return
}
if headers[0].Number.Uint64() != number || headers[0].Hash() != hash {
peer.Log().Info("Whitelist mismatch, dropping peer", "number", number, "hash", headers[0].Hash(), "want", hash)
res.Done <- errors.New("whitelist block mismatch")
return
}
peer.Log().Debug("Whitelist block verified", "number", number, "hash", hash)
case <-timeout.C:
peer.Log().Warn("Whitelist challenge timed out, dropping", "addr", peer.RemoteAddr(), "type", peer.Name())
h.removePeer(peer.ID())
}
}(number, hash)
} }
// Handle incoming messages until the connection is torn down
return handler(peer)
@@ -432,6 +539,17 @@ func (h *handler) Stop() {
// BroadcastBlock will either propagate a block to a subset of its peers, or
// will only announce its availability (depending on what's requested).
func (h *handler) BroadcastBlock(block *types.Block, propagate bool) {
// Disable the block propagation if the chain has already entered the PoS
// stage. The block propagation is delegated to the consensus layer.
if h.merger.PoSFinalized() {
return
}
// Disable the block propagation if it's the post-merge block.
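// (the beacon engine flags a header as PoS by its zero difficulty,
// so IsPoSHeader below catches exactly the post-merge blocks)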
if beacon, ok := h.chain.Engine().(*beacon.Beacon); ok {
if beacon.IsPoSHeader(block.Header()) {
return
}
}
hash := block.Hash()
peers := h.peers.peersWithoutBlock(hash)

Some files were not shown because too many files have changed in this diff