readme additions and vendor updates

parent 6434a7279d
commit bc59aa4ed6
@@ -14,6 +14,8 @@ conform to the
[standard-readme specification](https://github.com/RichardLitt/standard-readme).
- Once a Pull Request has received two approvals it can be merged in by a core developer.

Pull requests should be opened against the `master` branch. Periodically, updates on `master` will be ported over to `staging` for tagged release.

## Creating a new migration file
1. `make new_migration NAME=add_columnA_to_table1`
    - This will create a new timestamped migration file in `db/migrations`
@@ -21,6 +21,7 @@ import (
	"strconv"

	"github.com/ethereum/go-ethereum/core/types"

	"github.com/vulcanize/vulcanizedb/pkg/core"
)
22 vendor/github.com/davecgh/go-spew/.gitignore generated vendored Normal file
@@ -0,0 +1,22 @@
# Compiled Object files, Static and Dynamic libs (Shared Objects)
*.o
*.a
*.so

# Folders
_obj
_test

# Architecture specific extensions/prefixes
*.[568vq]
[568vq].out

*.cgo1.go
*.cgo2.c
_cgo_defun.c
_cgo_gotypes.go
_cgo_export.*

_testmain.go

*.exe
28 vendor/github.com/davecgh/go-spew/.travis.yml generated vendored Normal file
@@ -0,0 +1,28 @@
language: go
go_import_path: github.com/davecgh/go-spew
go:
    - 1.6.x
    - 1.7.x
    - 1.8.x
    - 1.9.x
    - 1.10.x
    - tip
sudo: false
install:
    - go get -v github.com/alecthomas/gometalinter
    - gometalinter --install
script:
    - export PATH=$PATH:$HOME/gopath/bin
    - export GORACE="halt_on_error=1"
    - test -z "$(gometalinter --disable-all
      --enable=gofmt
      --enable=golint
      --enable=vet
      --enable=gosimple
      --enable=unconvert
      --deadline=4m ./spew | tee /dev/stderr)"
    - go test -v -race -tags safe ./spew
    - go test -v -race -tags testcgo ./spew -covermode=atomic -coverprofile=profile.cov
after_success:
    - go get -v github.com/mattn/goveralls
    - goveralls -coverprofile=profile.cov -service=travis-ci
15 vendor/github.com/davecgh/go-spew/LICENSE generated vendored Normal file
@@ -0,0 +1,15 @@
ISC License

Copyright (c) 2012-2016 Dave Collins <dave@davec.name>

Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.

THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
201 vendor/github.com/davecgh/go-spew/README.md generated vendored Normal file
@@ -0,0 +1,201 @@
|
||||
go-spew
|
||||
=======
|
||||
|
||||
[](https://travis-ci.org/davecgh/go-spew)
|
||||
[](http://copyfree.org)
|
||||
[](https://coveralls.io/r/davecgh/go-spew?branch=master)
|
||||
|
||||
Go-spew implements a deep pretty printer for Go data structures to aid in
|
||||
debugging. A comprehensive suite of tests with 100% test coverage is provided
|
||||
to ensure proper functionality. See `test_coverage.txt` for the gocov coverage
|
||||
report. Go-spew is licensed under the liberal ISC license, so it may be used in
|
||||
open source or commercial projects.
|
||||
|
||||
If you're interested in reading about how this package came to life and some
|
||||
of the challenges involved in providing a deep pretty printer, there is a blog
|
||||
post about it
|
||||
[here](https://web.archive.org/web/20160304013555/https://blog.cyphertite.com/go-spew-a-journey-into-dumping-go-data-structures/).
|
||||
|
||||
## Documentation
|
||||
|
||||
[](http://godoc.org/github.com/davecgh/go-spew/spew)
|
||||
|
||||
Full `go doc` style documentation for the project can be viewed online without
|
||||
installing this package by using the excellent GoDoc site here:
|
||||
http://godoc.org/github.com/davecgh/go-spew/spew
|
||||
|
||||
You can also view the documentation locally once the package is installed with
|
||||
the `godoc` tool by running `godoc -http=":6060"` and pointing your browser to
|
||||
http://localhost:6060/pkg/github.com/davecgh/go-spew/spew
|
||||
|
||||
## Installation
|
||||
|
||||
```bash
|
||||
$ go get -u github.com/davecgh/go-spew/spew
|
||||
```
|
||||
|
||||
## Quick Start
|
||||
|
||||
Add this import line to the file you're working in:
|
||||
|
||||
```Go
|
||||
import "github.com/davecgh/go-spew/spew"
|
||||
```
|
||||
|
||||
To dump a variable with full newlines, indentation, type, and pointer
|
||||
information use Dump, Fdump, or Sdump:
|
||||
|
||||
```Go
|
||||
spew.Dump(myVar1, myVar2, ...)
|
||||
spew.Fdump(someWriter, myVar1, myVar2, ...)
|
||||
str := spew.Sdump(myVar1, myVar2, ...)
|
||||
```
|
||||
|
||||
Alternatively, if you would prefer to use format strings with a compacted inline
|
||||
printing style, use the convenience wrappers Printf, Fprintf, etc with %v (most
|
||||
compact), %+v (adds pointer addresses), %#v (adds types), or %#+v (adds types
|
||||
and pointer addresses):
|
||||
|
||||
```Go
|
||||
spew.Printf("myVar1: %v -- myVar2: %+v", myVar1, myVar2)
|
||||
spew.Printf("myVar3: %#v -- myVar4: %#+v", myVar3, myVar4)
|
||||
spew.Fprintf(someWriter, "myVar1: %v -- myVar2: %+v", myVar1, myVar2)
|
||||
spew.Fprintf(someWriter, "myVar3: %#v -- myVar4: %#+v", myVar3, myVar4)
|
||||
```
|
||||
|
||||
## Debugging a Web Application Example
|
||||
|
||||
Here is an example of how you can use `spew.Sdump()` to help debug a web application. Please be sure to wrap your output using the `html.EscapeString()` function for safety reasons. You should also only use this debugging technique in a development environment, never in production.
|
||||
|
||||
```Go
|
||||
package main
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"html"
|
||||
"net/http"
|
||||
|
||||
"github.com/davecgh/go-spew/spew"
|
||||
)
|
||||
|
||||
func handler(w http.ResponseWriter, r *http.Request) {
|
||||
w.Header().Set("Content-Type", "text/html")
|
||||
fmt.Fprintf(w, "Hi there, %s!", r.URL.Path[1:])
|
||||
fmt.Fprintf(w, "<!--\n" + html.EscapeString(spew.Sdump(w)) + "\n-->")
|
||||
}
|
||||
|
||||
func main() {
|
||||
http.HandleFunc("/", handler)
|
||||
http.ListenAndServe(":8080", nil)
|
||||
}
|
||||
```
|
||||
|
||||
## Sample Dump Output
|
||||
|
||||
```
|
||||
(main.Foo) {
|
||||
unexportedField: (*main.Bar)(0xf84002e210)({
|
||||
flag: (main.Flag) flagTwo,
|
||||
data: (uintptr) <nil>
|
||||
}),
|
||||
ExportedField: (map[interface {}]interface {}) {
|
||||
(string) "one": (bool) true
|
||||
}
|
||||
}
|
||||
([]uint8) {
|
||||
00000000 11 12 13 14 15 16 17 18 19 1a 1b 1c 1d 1e 1f 20 |............... |
|
||||
00000010 21 22 23 24 25 26 27 28 29 2a 2b 2c 2d 2e 2f 30 |!"#$%&'()*+,-./0|
|
||||
00000020 31 32 |12|
|
||||
}
|
||||
```
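For orientation, here is a minimal sketch of declarations consistent with the dump above; the type and field names (`Foo`, `Bar`, `Flag`, `flagTwo`) are inferred from the sample output rather than taken from a real program:

```Go
package main

import "github.com/davecgh/go-spew/spew"

// Flag, Bar, and Foo mirror the shapes implied by the sample output above.
type Flag int

const (
	flagOne Flag = iota
	flagTwo
)

type Bar struct {
	flag Flag
	data uintptr
}

type Foo struct {
	unexportedField *Bar
	ExportedField   map[interface{}]interface{}
}

func main() {
	foo := Foo{
		unexportedField: &Bar{flag: flagTwo},
		ExportedField:   map[interface{}]interface{}{"one": true},
	}

	// Dump prints the nested structure, including the unexported field,
	// along with concrete types and pointer addresses.
	spew.Dump(foo)
	spew.Dump([]byte{0x11, 0x12, 0x13})
}
```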
|
||||
|
||||
## Sample Formatter Output
|
||||
|
||||
Double pointer to a uint8:
|
||||
```
|
||||
%v: <**>5
|
||||
%+v: <**>(0xf8400420d0->0xf8400420c8)5
|
||||
%#v: (**uint8)5
|
||||
%#+v: (**uint8)(0xf8400420d0->0xf8400420c8)5
|
||||
```
|
||||
|
||||
Pointer to circular struct with a uint8 field and a pointer to itself:
|
||||
```
|
||||
%v: <*>{1 <*><shown>}
|
||||
%+v: <*>(0xf84003e260){ui8:1 c:<*>(0xf84003e260)<shown>}
|
||||
%#v: (*main.circular){ui8:(uint8)1 c:(*main.circular)<shown>}
|
||||
%#+v: (*main.circular)(0xf84003e260){ui8:(uint8)1 c:(*main.circular)(0xf84003e260)<shown>}
|
||||
```
|
||||
|
||||
## Configuration Options
|
||||
|
||||
Configuration of spew is handled by fields in the ConfigState type. For
|
||||
convenience, all of the top-level functions use a global state available via the
|
||||
spew.Config global.
|
||||
|
||||
It is also possible to create a ConfigState instance that provides methods
|
||||
equivalent to the top-level functions. This allows concurrent configuration
|
||||
options. See the ConfigState documentation for more details.
|
||||
|
||||
```
|
||||
* Indent
|
||||
String to use for each indentation level for Dump functions.
|
||||
It is a single space by default. A popular alternative is "\t".
|
||||
|
||||
* MaxDepth
|
||||
Maximum number of levels to descend into nested data structures.
|
||||
There is no limit by default.
|
||||
|
||||
* DisableMethods
|
||||
Disables invocation of error and Stringer interface methods.
|
||||
Method invocation is enabled by default.
|
||||
|
||||
* DisablePointerMethods
|
||||
Disables invocation of error and Stringer interface methods on types
|
||||
which only accept pointer receivers from non-pointer variables. This option
|
||||
relies on access to the unsafe package, so it will not have any effect when
|
||||
running in environments without access to the unsafe package such as Google
|
||||
App Engine or with the "safe" build tag specified.
|
||||
Pointer method invocation is enabled by default.
|
||||
|
||||
* DisablePointerAddresses
|
||||
DisablePointerAddresses specifies whether to disable the printing of
|
||||
pointer addresses. This is useful when diffing data structures in tests.
|
||||
|
||||
* DisableCapacities
|
||||
DisableCapacities specifies whether to disable the printing of capacities
|
||||
for arrays, slices, maps and channels. This is useful when diffing data
|
||||
structures in tests.
|
||||
|
||||
* ContinueOnMethod
|
||||
Enables recursion into types after invoking error and Stringer interface
|
||||
methods. Recursion after method invocation is disabled by default.
|
||||
|
||||
* SortKeys
|
||||
Specifies map keys should be sorted before being printed. Use
|
||||
this to have a more deterministic, diffable output. Note that
|
||||
only native types (bool, int, uint, floats, uintptr and string)
|
||||
and types which implement error or Stringer interfaces are supported,
|
||||
with other types sorted according to the reflect.Value.String() output
|
||||
which guarantees display stability. Natural map order is used by
|
||||
default.
|
||||
|
||||
* SpewKeys
|
||||
SpewKeys specifies that, as a last resort attempt, map keys should be
|
||||
spewed to strings and sorted by those strings. This is only considered
|
||||
if SortKeys is true.
|
||||
|
||||
```
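A rough usage sketch of these options (the particular values are arbitrary illustrations, not recommended defaults): they can be set on the global `spew.Config` used by the top-level functions, or on a dedicated `ConfigState` whose methods mirror them:

```Go
package main

import "github.com/davecgh/go-spew/spew"

func main() {
	// Adjust the shared configuration used by the top-level functions.
	spew.Config.Indent = "\t"
	spew.Config.SortKeys = true

	// Or keep an independent configuration for call sites that need
	// different settings, such as a bounded depth.
	cs := spew.ConfigState{Indent: " ", MaxDepth: 2, SortKeys: true}

	data := map[string][]int{"b": {2}, "a": {1}}
	spew.Dump(data) // uses the global spew.Config
	cs.Dump(data)   // uses the local ConfigState
}
```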
|
||||
|
||||
## Unsafe Package Dependency
|
||||
|
||||
This package relies on the unsafe package to perform some of the more advanced
|
||||
features, however it also supports a "limited" mode which allows it to work in
|
||||
environments where the unsafe package is not available. By default, it will
|
||||
operate in this mode on Google App Engine and when compiled with GopherJS. The
|
||||
"safe" build tag may also be specified to force the package to build without
|
||||
using the unsafe package.
|
||||
|
||||
## License
|
||||
|
||||
Go-spew is licensed under the [copyfree](http://copyfree.org) ISC License.
|
22 vendor/github.com/davecgh/go-spew/cov_report.sh generated vendored Normal file
@@ -0,0 +1,22 @@
#!/bin/sh

# This script uses gocov to generate a test coverage report.
# The gocov tool may be obtained with the following command:
# go get github.com/axw/gocov/gocov
#
# It will be installed to $GOPATH/bin, so ensure that location is in your $PATH.

# Check for gocov.
if ! type gocov >/dev/null 2>&1; then
	echo >&2 "This script requires the gocov tool."
	echo >&2 "You may obtain it with the following command:"
	echo >&2 "go get github.com/axw/gocov/gocov"
	exit 1
fi

# Only run the cgo tests if gcc is installed.
if type gcc >/dev/null 2>&1; then
	(cd spew && gocov test -tags testcgo | gocov report)
else
	(cd spew && gocov test | gocov report)
fi
145 vendor/github.com/davecgh/go-spew/spew/bypass.go generated vendored Normal file
@@ -0,0 +1,145 @@
|
||||
// Copyright (c) 2015-2016 Dave Collins <dave@davec.name>
|
||||
//
|
||||
// Permission to use, copy, modify, and distribute this software for any
|
||||
// purpose with or without fee is hereby granted, provided that the above
|
||||
// copyright notice and this permission notice appear in all copies.
|
||||
//
|
||||
// THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
|
||||
// WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
|
||||
// MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
|
||||
// ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
|
||||
// WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
|
||||
// ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
|
||||
// OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
|
||||
|
||||
// NOTE: Due to the following build constraints, this file will only be compiled
|
||||
// when the code is not running on Google App Engine, compiled by GopherJS, and
|
||||
// "-tags safe" is not added to the go build command line. The "disableunsafe"
|
||||
// tag is deprecated and thus should not be used.
|
||||
// Go versions prior to 1.4 are disabled because they use a different layout
|
||||
// for interfaces which make the implementation of unsafeReflectValue more complex.
|
||||
// +build !js,!appengine,!safe,!disableunsafe,go1.4
|
||||
|
||||
package spew
|
||||
|
||||
import (
|
||||
"reflect"
|
||||
"unsafe"
|
||||
)
|
||||
|
||||
const (
|
||||
// UnsafeDisabled is a build-time constant which specifies whether or
|
||||
// not access to the unsafe package is available.
|
||||
UnsafeDisabled = false
|
||||
|
||||
// ptrSize is the size of a pointer on the current arch.
|
||||
ptrSize = unsafe.Sizeof((*byte)(nil))
|
||||
)
|
||||
|
||||
type flag uintptr
|
||||
|
||||
var (
|
||||
// flagRO indicates whether the value field of a reflect.Value
|
||||
// is read-only.
|
||||
flagRO flag
|
||||
|
||||
// flagAddr indicates whether the address of the reflect.Value's
|
||||
// value may be taken.
|
||||
flagAddr flag
|
||||
)
|
||||
|
||||
// flagKindMask holds the bits that make up the kind
|
||||
// part of the flags field. In all the supported versions,
|
||||
// it is in the lower 5 bits.
|
||||
const flagKindMask = flag(0x1f)
|
||||
|
||||
// Different versions of Go have used different
|
||||
// bit layouts for the flags type. This table
|
||||
// records the known combinations.
|
||||
var okFlags = []struct {
|
||||
ro, addr flag
|
||||
}{{
|
||||
// From Go 1.4 to 1.5
|
||||
ro: 1 << 5,
|
||||
addr: 1 << 7,
|
||||
}, {
|
||||
// Up to Go tip.
|
||||
ro: 1<<5 | 1<<6,
|
||||
addr: 1 << 8,
|
||||
}}
|
||||
|
||||
var flagValOffset = func() uintptr {
|
||||
field, ok := reflect.TypeOf(reflect.Value{}).FieldByName("flag")
|
||||
if !ok {
|
||||
panic("reflect.Value has no flag field")
|
||||
}
|
||||
return field.Offset
|
||||
}()
|
||||
|
||||
// flagField returns a pointer to the flag field of a reflect.Value.
|
||||
func flagField(v *reflect.Value) *flag {
|
||||
return (*flag)(unsafe.Pointer(uintptr(unsafe.Pointer(v)) + flagValOffset))
|
||||
}
|
||||
|
||||
// unsafeReflectValue converts the passed reflect.Value into one that bypasses
|
||||
// the typical safety restrictions preventing access to unaddressable and
|
||||
// unexported data. It works by digging the raw pointer to the underlying
|
||||
// value out of the protected value and generating a new unprotected (unsafe)
|
||||
// reflect.Value to it.
|
||||
//
|
||||
// This allows us to check for implementations of the Stringer and error
|
||||
// interfaces to be used for pretty printing ordinarily unaddressable and
|
||||
// inaccessible values such as unexported struct fields.
|
||||
func unsafeReflectValue(v reflect.Value) reflect.Value {
|
||||
if !v.IsValid() || (v.CanInterface() && v.CanAddr()) {
|
||||
return v
|
||||
}
|
||||
flagFieldPtr := flagField(&v)
|
||||
*flagFieldPtr &^= flagRO
|
||||
*flagFieldPtr |= flagAddr
|
||||
return v
|
||||
}
|
||||
|
||||
// Sanity checks against future reflect package changes
|
||||
// to the type or semantics of the Value.flag field.
|
||||
func init() {
|
||||
field, ok := reflect.TypeOf(reflect.Value{}).FieldByName("flag")
|
||||
if !ok {
|
||||
panic("reflect.Value has no flag field")
|
||||
}
|
||||
if field.Type.Kind() != reflect.TypeOf(flag(0)).Kind() {
|
||||
panic("reflect.Value flag field has changed kind")
|
||||
}
|
||||
type t0 int
|
||||
var t struct {
|
||||
A t0
|
||||
// t0 will have flagEmbedRO set.
|
||||
t0
|
||||
// a will have flagStickyRO set
|
||||
a t0
|
||||
}
|
||||
vA := reflect.ValueOf(t).FieldByName("A")
|
||||
va := reflect.ValueOf(t).FieldByName("a")
|
||||
vt0 := reflect.ValueOf(t).FieldByName("t0")
|
||||
|
||||
// Infer flagRO from the difference between the flags
|
||||
// for the (otherwise identical) fields in t.
|
||||
flagPublic := *flagField(&vA)
|
||||
flagWithRO := *flagField(&va) | *flagField(&vt0)
|
||||
flagRO = flagPublic ^ flagWithRO
|
||||
|
||||
// Infer flagAddr from the difference between a value
|
||||
// taken from a pointer and not.
|
||||
vPtrA := reflect.ValueOf(&t).Elem().FieldByName("A")
|
||||
flagNoPtr := *flagField(&vA)
|
||||
flagPtr := *flagField(&vPtrA)
|
||||
flagAddr = flagNoPtr ^ flagPtr
|
||||
|
||||
// Check that the inferred flags tally with one of the known versions.
|
||||
for _, f := range okFlags {
|
||||
if flagRO == f.ro && flagAddr == f.addr {
|
||||
return
|
||||
}
|
||||
}
|
||||
panic("reflect.Value read-only flag has changed semantics")
|
||||
}
|
38 vendor/github.com/davecgh/go-spew/spew/bypasssafe.go generated vendored Normal file
@@ -0,0 +1,38 @@
|
||||
// Copyright (c) 2015-2016 Dave Collins <dave@davec.name>
|
||||
//
|
||||
// Permission to use, copy, modify, and distribute this software for any
|
||||
// purpose with or without fee is hereby granted, provided that the above
|
||||
// copyright notice and this permission notice appear in all copies.
|
||||
//
|
||||
// THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
|
||||
// WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
|
||||
// MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
|
||||
// ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
|
||||
// WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
|
||||
// ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
|
||||
// OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
|
||||
|
||||
// NOTE: Due to the following build constraints, this file will only be compiled
|
||||
// when the code is running on Google App Engine, compiled by GopherJS, or
|
||||
// "-tags safe" is added to the go build command line. The "disableunsafe"
|
||||
// tag is deprecated and thus should not be used.
|
||||
// +build js appengine safe disableunsafe !go1.4
|
||||
|
||||
package spew
|
||||
|
||||
import "reflect"
|
||||
|
||||
const (
|
||||
// UnsafeDisabled is a build-time constant which specifies whether or
|
||||
// not access to the unsafe package is available.
|
||||
UnsafeDisabled = true
|
||||
)
|
||||
|
||||
// unsafeReflectValue typically converts the passed reflect.Value into one
|
||||
// that bypasses the typical safety restrictions preventing access to
|
||||
// unaddressable and unexported data. However, doing this relies on access to
|
||||
// the unsafe package. This is a stub version which simply returns the passed
|
||||
// reflect.Value when the unsafe package is not available.
|
||||
func unsafeReflectValue(v reflect.Value) reflect.Value {
|
||||
return v
|
||||
}
|
341 vendor/github.com/davecgh/go-spew/spew/common.go generated vendored Normal file
@@ -0,0 +1,341 @@
|
||||
/*
|
||||
* Copyright (c) 2013-2016 Dave Collins <dave@davec.name>
|
||||
*
|
||||
* Permission to use, copy, modify, and distribute this software for any
|
||||
* purpose with or without fee is hereby granted, provided that the above
|
||||
* copyright notice and this permission notice appear in all copies.
|
||||
*
|
||||
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
|
||||
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
|
||||
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
|
||||
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
|
||||
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
|
||||
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
|
||||
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
|
||||
*/
|
||||
|
||||
package spew
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"fmt"
|
||||
"io"
|
||||
"reflect"
|
||||
"sort"
|
||||
"strconv"
|
||||
)
|
||||
|
||||
// Some constants in the form of bytes to avoid string overhead. This mirrors
|
||||
// the technique used in the fmt package.
|
||||
var (
|
||||
panicBytes = []byte("(PANIC=")
|
||||
plusBytes = []byte("+")
|
||||
iBytes = []byte("i")
|
||||
trueBytes = []byte("true")
|
||||
falseBytes = []byte("false")
|
||||
interfaceBytes = []byte("(interface {})")
|
||||
commaNewlineBytes = []byte(",\n")
|
||||
newlineBytes = []byte("\n")
|
||||
openBraceBytes = []byte("{")
|
||||
openBraceNewlineBytes = []byte("{\n")
|
||||
closeBraceBytes = []byte("}")
|
||||
asteriskBytes = []byte("*")
|
||||
colonBytes = []byte(":")
|
||||
colonSpaceBytes = []byte(": ")
|
||||
openParenBytes = []byte("(")
|
||||
closeParenBytes = []byte(")")
|
||||
spaceBytes = []byte(" ")
|
||||
pointerChainBytes = []byte("->")
|
||||
nilAngleBytes = []byte("<nil>")
|
||||
maxNewlineBytes = []byte("<max depth reached>\n")
|
||||
maxShortBytes = []byte("<max>")
|
||||
circularBytes = []byte("<already shown>")
|
||||
circularShortBytes = []byte("<shown>")
|
||||
invalidAngleBytes = []byte("<invalid>")
|
||||
openBracketBytes = []byte("[")
|
||||
closeBracketBytes = []byte("]")
|
||||
percentBytes = []byte("%")
|
||||
precisionBytes = []byte(".")
|
||||
openAngleBytes = []byte("<")
|
||||
closeAngleBytes = []byte(">")
|
||||
openMapBytes = []byte("map[")
|
||||
closeMapBytes = []byte("]")
|
||||
lenEqualsBytes = []byte("len=")
|
||||
capEqualsBytes = []byte("cap=")
|
||||
)
|
||||
|
||||
// hexDigits is used to map a decimal value to a hex digit.
|
||||
var hexDigits = "0123456789abcdef"
|
||||
|
||||
// catchPanic handles any panics that might occur during the handleMethods
|
||||
// calls.
|
||||
func catchPanic(w io.Writer, v reflect.Value) {
|
||||
if err := recover(); err != nil {
|
||||
w.Write(panicBytes)
|
||||
fmt.Fprintf(w, "%v", err)
|
||||
w.Write(closeParenBytes)
|
||||
}
|
||||
}
|
||||
|
||||
// handleMethods attempts to call the Error and String methods on the underlying
|
||||
// type the passed reflect.Value represents and outputs the result to Writer w.
|
||||
//
|
||||
// It handles panics in any called methods by catching and displaying the error
|
||||
// as the formatted value.
|
||||
func handleMethods(cs *ConfigState, w io.Writer, v reflect.Value) (handled bool) {
|
||||
// We need an interface to check if the type implements the error or
|
||||
// Stringer interface. However, the reflect package won't give us an
|
||||
// interface on certain things like unexported struct fields in order
|
||||
// to enforce visibility rules. We use unsafe, when it's available,
|
||||
// to bypass these restrictions since this package does not mutate the
|
||||
// values.
|
||||
if !v.CanInterface() {
|
||||
if UnsafeDisabled {
|
||||
return false
|
||||
}
|
||||
|
||||
v = unsafeReflectValue(v)
|
||||
}
|
||||
|
||||
// Choose whether or not to do error and Stringer interface lookups against
|
||||
// the base type or a pointer to the base type depending on settings.
|
||||
// Technically calling one of these methods with a pointer receiver can
|
||||
// mutate the value, however, types which choose to satisfy an error or
|
||||
// Stringer interface with a pointer receiver should not be mutating their
|
||||
// state inside these interface methods.
|
||||
if !cs.DisablePointerMethods && !UnsafeDisabled && !v.CanAddr() {
|
||||
v = unsafeReflectValue(v)
|
||||
}
|
||||
if v.CanAddr() {
|
||||
v = v.Addr()
|
||||
}
|
||||
|
||||
// Is it an error or Stringer?
|
||||
switch iface := v.Interface().(type) {
|
||||
case error:
|
||||
defer catchPanic(w, v)
|
||||
if cs.ContinueOnMethod {
|
||||
w.Write(openParenBytes)
|
||||
w.Write([]byte(iface.Error()))
|
||||
w.Write(closeParenBytes)
|
||||
w.Write(spaceBytes)
|
||||
return false
|
||||
}
|
||||
|
||||
w.Write([]byte(iface.Error()))
|
||||
return true
|
||||
|
||||
case fmt.Stringer:
|
||||
defer catchPanic(w, v)
|
||||
if cs.ContinueOnMethod {
|
||||
w.Write(openParenBytes)
|
||||
w.Write([]byte(iface.String()))
|
||||
w.Write(closeParenBytes)
|
||||
w.Write(spaceBytes)
|
||||
return false
|
||||
}
|
||||
w.Write([]byte(iface.String()))
|
||||
return true
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
// printBool outputs a boolean value as true or false to Writer w.
|
||||
func printBool(w io.Writer, val bool) {
|
||||
if val {
|
||||
w.Write(trueBytes)
|
||||
} else {
|
||||
w.Write(falseBytes)
|
||||
}
|
||||
}
|
||||
|
||||
// printInt outputs a signed integer value to Writer w.
|
||||
func printInt(w io.Writer, val int64, base int) {
|
||||
w.Write([]byte(strconv.FormatInt(val, base)))
|
||||
}
|
||||
|
||||
// printUint outputs an unsigned integer value to Writer w.
|
||||
func printUint(w io.Writer, val uint64, base int) {
|
||||
w.Write([]byte(strconv.FormatUint(val, base)))
|
||||
}
|
||||
|
||||
// printFloat outputs a floating point value using the specified precision,
|
||||
// which is expected to be 32 or 64bit, to Writer w.
|
||||
func printFloat(w io.Writer, val float64, precision int) {
|
||||
w.Write([]byte(strconv.FormatFloat(val, 'g', -1, precision)))
|
||||
}
|
||||
|
||||
// printComplex outputs a complex value using the specified float precision
|
||||
// for the real and imaginary parts to Writer w.
|
||||
func printComplex(w io.Writer, c complex128, floatPrecision int) {
|
||||
r := real(c)
|
||||
w.Write(openParenBytes)
|
||||
w.Write([]byte(strconv.FormatFloat(r, 'g', -1, floatPrecision)))
|
||||
i := imag(c)
|
||||
if i >= 0 {
|
||||
w.Write(plusBytes)
|
||||
}
|
||||
w.Write([]byte(strconv.FormatFloat(i, 'g', -1, floatPrecision)))
|
||||
w.Write(iBytes)
|
||||
w.Write(closeParenBytes)
|
||||
}
|
||||
|
||||
// printHexPtr outputs a uintptr formatted as hexadecimal with a leading '0x'
|
||||
// prefix to Writer w.
|
||||
func printHexPtr(w io.Writer, p uintptr) {
|
||||
// Null pointer.
|
||||
num := uint64(p)
|
||||
if num == 0 {
|
||||
w.Write(nilAngleBytes)
|
||||
return
|
||||
}
|
||||
|
||||
// Max uint64 is 16 bytes in hex + 2 bytes for '0x' prefix
|
||||
buf := make([]byte, 18)
|
||||
|
||||
// It's simpler to construct the hex string right to left.
|
||||
base := uint64(16)
|
||||
i := len(buf) - 1
|
||||
for num >= base {
|
||||
buf[i] = hexDigits[num%base]
|
||||
num /= base
|
||||
i--
|
||||
}
|
||||
buf[i] = hexDigits[num]
|
||||
|
||||
// Add '0x' prefix.
|
||||
i--
|
||||
buf[i] = 'x'
|
||||
i--
|
||||
buf[i] = '0'
|
||||
|
||||
// Strip unused leading bytes.
|
||||
buf = buf[i:]
|
||||
w.Write(buf)
|
||||
}
|
||||
|
||||
// valuesSorter implements sort.Interface to allow a slice of reflect.Value
|
||||
// elements to be sorted.
|
||||
type valuesSorter struct {
|
||||
values []reflect.Value
|
||||
strings []string // either nil or same len and values
|
||||
cs *ConfigState
|
||||
}
|
||||
|
||||
// newValuesSorter initializes a valuesSorter instance, which holds a set of
|
||||
// surrogate keys on which the data should be sorted. It uses flags in
|
||||
// ConfigState to decide if and how to populate those surrogate keys.
|
||||
func newValuesSorter(values []reflect.Value, cs *ConfigState) sort.Interface {
|
||||
vs := &valuesSorter{values: values, cs: cs}
|
||||
if canSortSimply(vs.values[0].Kind()) {
|
||||
return vs
|
||||
}
|
||||
if !cs.DisableMethods {
|
||||
vs.strings = make([]string, len(values))
|
||||
for i := range vs.values {
|
||||
b := bytes.Buffer{}
|
||||
if !handleMethods(cs, &b, vs.values[i]) {
|
||||
vs.strings = nil
|
||||
break
|
||||
}
|
||||
vs.strings[i] = b.String()
|
||||
}
|
||||
}
|
||||
if vs.strings == nil && cs.SpewKeys {
|
||||
vs.strings = make([]string, len(values))
|
||||
for i := range vs.values {
|
||||
vs.strings[i] = Sprintf("%#v", vs.values[i].Interface())
|
||||
}
|
||||
}
|
||||
return vs
|
||||
}
|
||||
|
||||
// canSortSimply tests whether a reflect.Kind is a primitive that can be sorted
|
||||
// directly, or whether it should be considered for sorting by surrogate keys
|
||||
// (if the ConfigState allows it).
|
||||
func canSortSimply(kind reflect.Kind) bool {
|
||||
// This switch parallels valueSortLess, except for the default case.
|
||||
switch kind {
|
||||
case reflect.Bool:
|
||||
return true
|
||||
case reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, reflect.Int:
|
||||
return true
|
||||
case reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uint:
|
||||
return true
|
||||
case reflect.Float32, reflect.Float64:
|
||||
return true
|
||||
case reflect.String:
|
||||
return true
|
||||
case reflect.Uintptr:
|
||||
return true
|
||||
case reflect.Array:
|
||||
return true
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
// Len returns the number of values in the slice. It is part of the
|
||||
// sort.Interface implementation.
|
||||
func (s *valuesSorter) Len() int {
|
||||
return len(s.values)
|
||||
}
|
||||
|
||||
// Swap swaps the values at the passed indices. It is part of the
|
||||
// sort.Interface implementation.
|
||||
func (s *valuesSorter) Swap(i, j int) {
|
||||
s.values[i], s.values[j] = s.values[j], s.values[i]
|
||||
if s.strings != nil {
|
||||
s.strings[i], s.strings[j] = s.strings[j], s.strings[i]
|
||||
}
|
||||
}
|
||||
|
||||
// valueSortLess returns whether the first value should sort before the second
|
||||
// value. It is used by valueSorter.Less as part of the sort.Interface
|
||||
// implementation.
|
||||
func valueSortLess(a, b reflect.Value) bool {
|
||||
switch a.Kind() {
|
||||
case reflect.Bool:
|
||||
return !a.Bool() && b.Bool()
|
||||
case reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, reflect.Int:
|
||||
return a.Int() < b.Int()
|
||||
case reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uint:
|
||||
return a.Uint() < b.Uint()
|
||||
case reflect.Float32, reflect.Float64:
|
||||
return a.Float() < b.Float()
|
||||
case reflect.String:
|
||||
return a.String() < b.String()
|
||||
case reflect.Uintptr:
|
||||
return a.Uint() < b.Uint()
|
||||
case reflect.Array:
|
||||
// Compare the contents of both arrays.
|
||||
l := a.Len()
|
||||
for i := 0; i < l; i++ {
|
||||
av := a.Index(i)
|
||||
bv := b.Index(i)
|
||||
if av.Interface() == bv.Interface() {
|
||||
continue
|
||||
}
|
||||
return valueSortLess(av, bv)
|
||||
}
|
||||
}
|
||||
return a.String() < b.String()
|
||||
}
|
||||
|
||||
// Less returns whether the value at index i should sort before the
|
||||
// value at index j. It is part of the sort.Interface implementation.
|
||||
func (s *valuesSorter) Less(i, j int) bool {
|
||||
if s.strings == nil {
|
||||
return valueSortLess(s.values[i], s.values[j])
|
||||
}
|
||||
return s.strings[i] < s.strings[j]
|
||||
}
|
||||
|
||||
// sortValues is a sort function that handles both native types and any type that
|
||||
// can be converted to error or Stringer. Other inputs are sorted according to
|
||||
// their Value.String() value to ensure display stability.
|
||||
func sortValues(values []reflect.Value, cs *ConfigState) {
|
||||
if len(values) == 0 {
|
||||
return
|
||||
}
|
||||
sort.Sort(newValuesSorter(values, cs))
|
||||
}
|
298 vendor/github.com/davecgh/go-spew/spew/common_test.go generated vendored Normal file
@@ -0,0 +1,298 @@
|
||||
/*
|
||||
* Copyright (c) 2013-2016 Dave Collins <dave@davec.name>
|
||||
*
|
||||
* Permission to use, copy, modify, and distribute this software for any
|
||||
* purpose with or without fee is hereby granted, provided that the above
|
||||
* copyright notice and this permission notice appear in all copies.
|
||||
*
|
||||
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
|
||||
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
|
||||
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
|
||||
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
|
||||
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
|
||||
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
|
||||
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
|
||||
*/
|
||||
|
||||
package spew_test
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"reflect"
|
||||
"testing"
|
||||
|
||||
"github.com/davecgh/go-spew/spew"
|
||||
)
|
||||
|
||||
// custom type to test Stringer interface on non-pointer receiver.
|
||||
type stringer string
|
||||
|
||||
// String implements the Stringer interface for testing invocation of custom
|
||||
// stringers on types with non-pointer receivers.
|
||||
func (s stringer) String() string {
|
||||
return "stringer " + string(s)
|
||||
}
|
||||
|
||||
// custom type to test Stringer interface on pointer receiver.
|
||||
type pstringer string
|
||||
|
||||
// String implements the Stringer interface for testing invocation of custom
|
||||
// stringers on types with only pointer receivers.
|
||||
func (s *pstringer) String() string {
|
||||
return "stringer " + string(*s)
|
||||
}
|
||||
|
||||
// xref1 and xref2 are cross referencing structs for testing circular reference
|
||||
// detection.
|
||||
type xref1 struct {
|
||||
ps2 *xref2
|
||||
}
|
||||
type xref2 struct {
|
||||
ps1 *xref1
|
||||
}
|
||||
|
||||
// indirCir1, indirCir2, and indirCir3 are used to generate an indirect circular
|
||||
// reference for testing detection.
|
||||
type indirCir1 struct {
|
||||
ps2 *indirCir2
|
||||
}
|
||||
type indirCir2 struct {
|
||||
ps3 *indirCir3
|
||||
}
|
||||
type indirCir3 struct {
|
||||
ps1 *indirCir1
|
||||
}
|
||||
|
||||
// embed is used to test embedded structures.
|
||||
type embed struct {
|
||||
a string
|
||||
}
|
||||
|
||||
// embedwrap is used to test embedded structures.
|
||||
type embedwrap struct {
|
||||
*embed
|
||||
e *embed
|
||||
}
|
||||
|
||||
// panicer is used to intentionally cause a panic for testing spew properly
|
||||
// handles them
|
||||
type panicer int
|
||||
|
||||
func (p panicer) String() string {
|
||||
panic("test panic")
|
||||
}
|
||||
|
||||
// customError is used to test custom error interface invocation.
|
||||
type customError int
|
||||
|
||||
func (e customError) Error() string {
|
||||
return fmt.Sprintf("error: %d", int(e))
|
||||
}
|
||||
|
||||
// stringizeWants converts a slice of wanted test output into a format suitable
|
||||
// for a test error message.
|
||||
func stringizeWants(wants []string) string {
|
||||
s := ""
|
||||
for i, want := range wants {
|
||||
if i > 0 {
|
||||
s += fmt.Sprintf("want%d: %s", i+1, want)
|
||||
} else {
|
||||
s += "want: " + want
|
||||
}
|
||||
}
|
||||
return s
|
||||
}
|
||||
|
||||
// testFailed returns whether or not a test failed by checking if the result
|
||||
// of the test is in the slice of wanted strings.
|
||||
func testFailed(result string, wants []string) bool {
|
||||
for _, want := range wants {
|
||||
if result == want {
|
||||
return false
|
||||
}
|
||||
}
|
||||
return true
|
||||
}
|
||||
|
||||
type sortableStruct struct {
|
||||
x int
|
||||
}
|
||||
|
||||
func (ss sortableStruct) String() string {
|
||||
return fmt.Sprintf("ss.%d", ss.x)
|
||||
}
|
||||
|
||||
type unsortableStruct struct {
|
||||
x int
|
||||
}
|
||||
|
||||
type sortTestCase struct {
|
||||
input []reflect.Value
|
||||
expected []reflect.Value
|
||||
}
|
||||
|
||||
func helpTestSortValues(tests []sortTestCase, cs *spew.ConfigState, t *testing.T) {
|
||||
getInterfaces := func(values []reflect.Value) []interface{} {
|
||||
interfaces := []interface{}{}
|
||||
for _, v := range values {
|
||||
interfaces = append(interfaces, v.Interface())
|
||||
}
|
||||
return interfaces
|
||||
}
|
||||
|
||||
for _, test := range tests {
|
||||
spew.SortValues(test.input, cs)
|
||||
// reflect.DeepEqual cannot really make sense of reflect.Value,
|
||||
// probably because of all the pointer tricks. For instance,
|
||||
// v(2.0) != v(2.0) on a 32-bits system. Turn them into interface{}
|
||||
// instead.
|
||||
input := getInterfaces(test.input)
|
||||
expected := getInterfaces(test.expected)
|
||||
if !reflect.DeepEqual(input, expected) {
|
||||
t.Errorf("Sort mismatch:\n %v != %v", input, expected)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// TestSortValues ensures the sort functionality for reflect.Value based sorting
|
||||
// works as intended.
|
||||
func TestSortValues(t *testing.T) {
|
||||
v := reflect.ValueOf
|
||||
|
||||
a := v("a")
|
||||
b := v("b")
|
||||
c := v("c")
|
||||
embedA := v(embed{"a"})
|
||||
embedB := v(embed{"b"})
|
||||
embedC := v(embed{"c"})
|
||||
tests := []sortTestCase{
|
||||
// No values.
|
||||
{
|
||||
[]reflect.Value{},
|
||||
[]reflect.Value{},
|
||||
},
|
||||
// Bools.
|
||||
{
|
||||
[]reflect.Value{v(false), v(true), v(false)},
|
||||
[]reflect.Value{v(false), v(false), v(true)},
|
||||
},
|
||||
// Ints.
|
||||
{
|
||||
[]reflect.Value{v(2), v(1), v(3)},
|
||||
[]reflect.Value{v(1), v(2), v(3)},
|
||||
},
|
||||
// Uints.
|
||||
{
|
||||
[]reflect.Value{v(uint8(2)), v(uint8(1)), v(uint8(3))},
|
||||
[]reflect.Value{v(uint8(1)), v(uint8(2)), v(uint8(3))},
|
||||
},
|
||||
// Floats.
|
||||
{
|
||||
[]reflect.Value{v(2.0), v(1.0), v(3.0)},
|
||||
[]reflect.Value{v(1.0), v(2.0), v(3.0)},
|
||||
},
|
||||
// Strings.
|
||||
{
|
||||
[]reflect.Value{b, a, c},
|
||||
[]reflect.Value{a, b, c},
|
||||
},
|
||||
// Array
|
||||
{
|
||||
[]reflect.Value{v([3]int{3, 2, 1}), v([3]int{1, 3, 2}), v([3]int{1, 2, 3})},
|
||||
[]reflect.Value{v([3]int{1, 2, 3}), v([3]int{1, 3, 2}), v([3]int{3, 2, 1})},
|
||||
},
|
||||
// Uintptrs.
|
||||
{
|
||||
[]reflect.Value{v(uintptr(2)), v(uintptr(1)), v(uintptr(3))},
|
||||
[]reflect.Value{v(uintptr(1)), v(uintptr(2)), v(uintptr(3))},
|
||||
},
|
||||
// SortableStructs.
|
||||
{
|
||||
// Note: not sorted - DisableMethods is set.
|
||||
[]reflect.Value{v(sortableStruct{2}), v(sortableStruct{1}), v(sortableStruct{3})},
|
||||
[]reflect.Value{v(sortableStruct{2}), v(sortableStruct{1}), v(sortableStruct{3})},
|
||||
},
|
||||
// UnsortableStructs.
|
||||
{
|
||||
// Note: not sorted - SpewKeys is false.
|
||||
[]reflect.Value{v(unsortableStruct{2}), v(unsortableStruct{1}), v(unsortableStruct{3})},
|
||||
[]reflect.Value{v(unsortableStruct{2}), v(unsortableStruct{1}), v(unsortableStruct{3})},
|
||||
},
|
||||
// Invalid.
|
||||
{
|
||||
[]reflect.Value{embedB, embedA, embedC},
|
||||
[]reflect.Value{embedB, embedA, embedC},
|
||||
},
|
||||
}
|
||||
cs := spew.ConfigState{DisableMethods: true, SpewKeys: false}
|
||||
helpTestSortValues(tests, &cs, t)
|
||||
}
|
||||
|
||||
// TestSortValuesWithMethods ensures the sort functionality for reflect.Value
|
||||
// based sorting works as intended when using string methods.
|
||||
func TestSortValuesWithMethods(t *testing.T) {
|
||||
v := reflect.ValueOf
|
||||
|
||||
a := v("a")
|
||||
b := v("b")
|
||||
c := v("c")
|
||||
tests := []sortTestCase{
|
||||
// Ints.
|
||||
{
|
||||
[]reflect.Value{v(2), v(1), v(3)},
|
||||
[]reflect.Value{v(1), v(2), v(3)},
|
||||
},
|
||||
// Strings.
|
||||
{
|
||||
[]reflect.Value{b, a, c},
|
||||
[]reflect.Value{a, b, c},
|
||||
},
|
||||
// SortableStructs.
|
||||
{
|
||||
[]reflect.Value{v(sortableStruct{2}), v(sortableStruct{1}), v(sortableStruct{3})},
|
||||
[]reflect.Value{v(sortableStruct{1}), v(sortableStruct{2}), v(sortableStruct{3})},
|
||||
},
|
||||
// UnsortableStructs.
|
||||
{
|
||||
// Note: not sorted - SpewKeys is false.
|
||||
[]reflect.Value{v(unsortableStruct{2}), v(unsortableStruct{1}), v(unsortableStruct{3})},
|
||||
[]reflect.Value{v(unsortableStruct{2}), v(unsortableStruct{1}), v(unsortableStruct{3})},
|
||||
},
|
||||
}
|
||||
cs := spew.ConfigState{DisableMethods: false, SpewKeys: false}
|
||||
helpTestSortValues(tests, &cs, t)
|
||||
}
|
||||
|
||||
// TestSortValuesWithSpew ensures the sort functionality for reflect.Value
|
||||
// based sorting works as intended when using spew to stringify keys.
|
||||
func TestSortValuesWithSpew(t *testing.T) {
|
||||
v := reflect.ValueOf
|
||||
|
||||
a := v("a")
|
||||
b := v("b")
|
||||
c := v("c")
|
||||
tests := []sortTestCase{
|
||||
// Ints.
|
||||
{
|
||||
[]reflect.Value{v(2), v(1), v(3)},
|
||||
[]reflect.Value{v(1), v(2), v(3)},
|
||||
},
|
||||
// Strings.
|
||||
{
|
||||
[]reflect.Value{b, a, c},
|
||||
[]reflect.Value{a, b, c},
|
||||
},
|
||||
// SortableStructs.
|
||||
{
|
||||
[]reflect.Value{v(sortableStruct{2}), v(sortableStruct{1}), v(sortableStruct{3})},
|
||||
[]reflect.Value{v(sortableStruct{1}), v(sortableStruct{2}), v(sortableStruct{3})},
|
||||
},
|
||||
// UnsortableStructs.
|
||||
{
|
||||
[]reflect.Value{v(unsortableStruct{2}), v(unsortableStruct{1}), v(unsortableStruct{3})},
|
||||
[]reflect.Value{v(unsortableStruct{1}), v(unsortableStruct{2}), v(unsortableStruct{3})},
|
||||
},
|
||||
}
|
||||
cs := spew.ConfigState{DisableMethods: true, SpewKeys: true}
|
||||
helpTestSortValues(tests, &cs, t)
|
||||
}
|
306 vendor/github.com/davecgh/go-spew/spew/config.go generated vendored Normal file
@@ -0,0 +1,306 @@
|
||||
/*
|
||||
* Copyright (c) 2013-2016 Dave Collins <dave@davec.name>
|
||||
*
|
||||
* Permission to use, copy, modify, and distribute this software for any
|
||||
* purpose with or without fee is hereby granted, provided that the above
|
||||
* copyright notice and this permission notice appear in all copies.
|
||||
*
|
||||
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
|
||||
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
|
||||
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
|
||||
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
|
||||
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
|
||||
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
|
||||
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
|
||||
*/
|
||||
|
||||
package spew
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"fmt"
|
||||
"io"
|
||||
"os"
|
||||
)
|
||||
|
||||
// ConfigState houses the configuration options used by spew to format and
|
||||
// display values. There is a global instance, Config, that is used to control
|
||||
// all top-level Formatter and Dump functionality. Each ConfigState instance
|
||||
// provides methods equivalent to the top-level functions.
|
||||
//
|
||||
// The zero value for ConfigState provides no indentation. You would typically
|
||||
// want to set it to a space or a tab.
|
||||
//
|
||||
// Alternatively, you can use NewDefaultConfig to get a ConfigState instance
|
||||
// with default settings. See the documentation of NewDefaultConfig for default
|
||||
// values.
|
||||
type ConfigState struct {
|
||||
// Indent specifies the string to use for each indentation level. The
|
||||
// global config instance that all top-level functions use set this to a
|
||||
// single space by default. If you would like more indentation, you might
|
||||
// set this to a tab with "\t" or perhaps two spaces with " ".
|
||||
Indent string
|
||||
|
||||
// MaxDepth controls the maximum number of levels to descend into nested
|
||||
// data structures. The default, 0, means there is no limit.
|
||||
//
|
||||
// NOTE: Circular data structures are properly detected, so it is not
|
||||
// necessary to set this value unless you specifically want to limit deeply
|
||||
// nested data structures.
|
||||
MaxDepth int
|
||||
|
||||
// DisableMethods specifies whether or not error and Stringer interfaces are
|
||||
// invoked for types that implement them.
|
||||
DisableMethods bool
|
||||
|
||||
// DisablePointerMethods specifies whether or not to check for and invoke
|
||||
// error and Stringer interfaces on types which only accept a pointer
|
||||
// receiver when the current type is not a pointer.
|
||||
//
|
||||
// NOTE: This might be an unsafe action since calling one of these methods
|
||||
// with a pointer receiver could technically mutate the value, however,
|
||||
// in practice, types which choose to satisfy an error or Stringer
|
||||
// interface with a pointer receiver should not be mutating their state
|
||||
// inside these interface methods. As a result, this option relies on
|
||||
// access to the unsafe package, so it will not have any effect when
|
||||
// running in environments without access to the unsafe package such as
|
||||
// Google App Engine or with the "safe" build tag specified.
|
||||
DisablePointerMethods bool
|
||||
|
||||
// DisablePointerAddresses specifies whether to disable the printing of
|
||||
// pointer addresses. This is useful when diffing data structures in tests.
|
||||
DisablePointerAddresses bool
|
||||
|
||||
// DisableCapacities specifies whether to disable the printing of capacities
|
||||
// for arrays, slices, maps and channels. This is useful when diffing
|
||||
// data structures in tests.
|
||||
DisableCapacities bool
|
||||
|
||||
// ContinueOnMethod specifies whether or not recursion should continue once
|
||||
// a custom error or Stringer interface is invoked. The default, false,
|
||||
// means it will print the results of invoking the custom error or Stringer
|
||||
// interface and return immediately instead of continuing to recurse into
|
||||
// the internals of the data type.
|
||||
//
|
||||
// NOTE: This flag does not have any effect if method invocation is disabled
|
||||
// via the DisableMethods or DisablePointerMethods options.
|
||||
ContinueOnMethod bool
|
||||
|
||||
// SortKeys specifies map keys should be sorted before being printed. Use
|
||||
// this to have a more deterministic, diffable output. Note that only
|
||||
// native types (bool, int, uint, floats, uintptr and string) and types
|
||||
// that support the error or Stringer interfaces (if methods are
|
||||
// enabled) are supported, with other types sorted according to the
|
||||
// reflect.Value.String() output which guarantees display stability.
|
||||
SortKeys bool
|
||||
|
||||
// SpewKeys specifies that, as a last resort attempt, map keys should
|
||||
// be spewed to strings and sorted by those strings. This is only
|
||||
// considered if SortKeys is true.
|
||||
SpewKeys bool
|
||||
}
|
||||
|
||||
// Config is the active configuration of the top-level functions.
|
||||
// The configuration can be changed by modifying the contents of spew.Config.
|
||||
var Config = ConfigState{Indent: " "}
|
||||
|
||||
// Errorf is a wrapper for fmt.Errorf that treats each argument as if it were
|
||||
// passed with a Formatter interface returned by c.NewFormatter. It returns
|
||||
// the formatted string as a value that satisfies error. See NewFormatter
|
||||
// for formatting details.
|
||||
//
|
||||
// This function is shorthand for the following syntax:
|
||||
//
|
||||
// fmt.Errorf(format, c.NewFormatter(a), c.NewFormatter(b))
|
||||
func (c *ConfigState) Errorf(format string, a ...interface{}) (err error) {
|
||||
return fmt.Errorf(format, c.convertArgs(a)...)
|
||||
}
|
||||
|
||||
// Fprint is a wrapper for fmt.Fprint that treats each argument as if it were
|
||||
// passed with a Formatter interface returned by c.NewFormatter. It returns
|
||||
// the number of bytes written and any write error encountered. See
|
||||
// NewFormatter for formatting details.
|
||||
//
|
||||
// This function is shorthand for the following syntax:
|
||||
//
|
||||
// fmt.Fprint(w, c.NewFormatter(a), c.NewFormatter(b))
|
||||
func (c *ConfigState) Fprint(w io.Writer, a ...interface{}) (n int, err error) {
|
||||
return fmt.Fprint(w, c.convertArgs(a)...)
|
||||
}
|
||||
|
||||
// Fprintf is a wrapper for fmt.Fprintf that treats each argument as if it were
|
||||
// passed with a Formatter interface returned by c.NewFormatter. It returns
|
||||
// the number of bytes written and any write error encountered. See
|
||||
// NewFormatter for formatting details.
|
||||
//
|
||||
// This function is shorthand for the following syntax:
|
||||
//
|
||||
// fmt.Fprintf(w, format, c.NewFormatter(a), c.NewFormatter(b))
|
||||
func (c *ConfigState) Fprintf(w io.Writer, format string, a ...interface{}) (n int, err error) {
|
||||
return fmt.Fprintf(w, format, c.convertArgs(a)...)
|
||||
}
|
||||
|
||||
// Fprintln is a wrapper for fmt.Fprintln that treats each argument as if it
|
||||
// passed with a Formatter interface returned by c.NewFormatter. See
|
||||
// NewFormatter for formatting details.
|
||||
//
|
||||
// This function is shorthand for the following syntax:
|
||||
//
|
||||
// fmt.Fprintln(w, c.NewFormatter(a), c.NewFormatter(b))
|
||||
func (c *ConfigState) Fprintln(w io.Writer, a ...interface{}) (n int, err error) {
|
||||
return fmt.Fprintln(w, c.convertArgs(a)...)
|
||||
}
|
||||
|
||||
// Print is a wrapper for fmt.Print that treats each argument as if it were
|
||||
// passed with a Formatter interface returned by c.NewFormatter. It returns
|
||||
// the number of bytes written and any write error encountered. See
|
||||
// NewFormatter for formatting details.
|
||||
//
|
||||
// This function is shorthand for the following syntax:
|
||||
//
|
||||
// fmt.Print(c.NewFormatter(a), c.NewFormatter(b))
|
||||
func (c *ConfigState) Print(a ...interface{}) (n int, err error) {
|
||||
return fmt.Print(c.convertArgs(a)...)
|
||||
}
|
||||
|
||||
// Printf is a wrapper for fmt.Printf that treats each argument as if it were
|
||||
// passed with a Formatter interface returned by c.NewFormatter. It returns
|
||||
// the number of bytes written and any write error encountered. See
|
||||
// NewFormatter for formatting details.
|
||||
//
|
||||
// This function is shorthand for the following syntax:
|
||||
//
|
||||
// fmt.Printf(format, c.NewFormatter(a), c.NewFormatter(b))
|
||||
func (c *ConfigState) Printf(format string, a ...interface{}) (n int, err error) {
|
||||
return fmt.Printf(format, c.convertArgs(a)...)
|
||||
}
|
||||
|
||||
// Println is a wrapper for fmt.Println that treats each argument as if it were
|
||||
// passed with a Formatter interface returned by c.NewFormatter. It returns
|
||||
// the number of bytes written and any write error encountered. See
|
||||
// NewFormatter for formatting details.
|
||||
//
|
||||
// This function is shorthand for the following syntax:
|
||||
//
|
||||
// fmt.Println(c.NewFormatter(a), c.NewFormatter(b))
|
||||
func (c *ConfigState) Println(a ...interface{}) (n int, err error) {
|
||||
return fmt.Println(c.convertArgs(a)...)
|
||||
}
|
||||
|
||||
// Sprint is a wrapper for fmt.Sprint that treats each argument as if it were
|
||||
// passed with a Formatter interface returned by c.NewFormatter. It returns
|
||||
// the resulting string. See NewFormatter for formatting details.
|
||||
//
|
||||
// This function is shorthand for the following syntax:
|
||||
//
|
||||
// fmt.Sprint(c.NewFormatter(a), c.NewFormatter(b))
|
||||
func (c *ConfigState) Sprint(a ...interface{}) string {
|
||||
return fmt.Sprint(c.convertArgs(a)...)
|
||||
}
|
||||
|
||||
// Sprintf is a wrapper for fmt.Sprintf that treats each argument as if it were
|
||||
// passed with a Formatter interface returned by c.NewFormatter. It returns
|
||||
// the resulting string. See NewFormatter for formatting details.
|
||||
//
|
||||
// This function is shorthand for the following syntax:
|
||||
//
|
||||
// fmt.Sprintf(format, c.NewFormatter(a), c.NewFormatter(b))
|
||||
func (c *ConfigState) Sprintf(format string, a ...interface{}) string {
|
||||
return fmt.Sprintf(format, c.convertArgs(a)...)
|
||||
}
|
||||
|
||||
// Sprintln is a wrapper for fmt.Sprintln that treats each argument as if it
|
||||
// were passed with a Formatter interface returned by c.NewFormatter. It
|
||||
// returns the resulting string. See NewFormatter for formatting details.
|
||||
//
|
||||
// This function is shorthand for the following syntax:
|
||||
//
|
||||
// fmt.Sprintln(c.NewFormatter(a), c.NewFormatter(b))
|
||||
func (c *ConfigState) Sprintln(a ...interface{}) string {
|
||||
return fmt.Sprintln(c.convertArgs(a)...)
|
||||
}
|
||||
|
||||
/*
|
||||
NewFormatter returns a custom formatter that satisfies the fmt.Formatter
|
||||
interface. As a result, it integrates cleanly with standard fmt package
|
||||
printing functions. The formatter is useful for inline printing of smaller data
|
||||
types similar to the standard %v format specifier.
|
||||
|
||||
The custom formatter only responds to the %v (most compact), %+v (adds pointer
|
||||
addresses), %#v (adds types), and %#+v (adds types and pointer addresses) verb
|
||||
combinations. Any other verbs such as %x and %q will be sent to the
|
||||
standard fmt package for formatting. In addition, the custom formatter ignores
|
||||
the width and precision arguments (however they will still work on the format
|
||||
specifiers not handled by the custom formatter).
|
||||
|
||||
Typically this function shouldn't be called directly. It is much easier to make
|
||||
use of the custom formatter by calling one of the convenience functions such as
|
||||
c.Printf, c.Println, or c.Printf.
|
||||
*/
|
||||
func (c *ConfigState) NewFormatter(v interface{}) fmt.Formatter {
|
||||
return newFormatter(c, v)
|
||||
}
|
||||
|
||||
// Fdump formats and displays the passed arguments to io.Writer w. It formats
|
||||
// exactly the same as Dump.
|
||||
func (c *ConfigState) Fdump(w io.Writer, a ...interface{}) {
|
||||
fdump(c, w, a...)
|
||||
}
|
||||
|
||||
/*
|
||||
Dump displays the passed parameters to standard out with newlines, customizable
|
||||
indentation, and additional debug information such as complete types and all
|
||||
pointer addresses used to indirect to the final value. It provides the
|
||||
following features over the built-in printing facilities provided by the fmt
|
||||
package:
|
||||
|
||||
* Pointers are dereferenced and followed
|
||||
* Circular data structures are detected and handled properly
|
||||
* Custom Stringer/error interfaces are optionally invoked, including
|
||||
on unexported types
|
||||
* Custom types which only implement the Stringer/error interfaces via
|
||||
a pointer receiver are optionally invoked when passing non-pointer
|
||||
variables
|
||||
* Byte arrays and slices are dumped like the hexdump -C command which
|
||||
includes offsets, byte values in hex, and ASCII output
|
||||
|
||||
The configuration options are controlled by modifying the public members
|
||||
of c. See ConfigState for options documentation.
|
||||
|
||||
See Fdump if you would prefer dumping to an arbitrary io.Writer or Sdump to
|
||||
get the formatted result as a string.
|
||||
*/
|
||||
func (c *ConfigState) Dump(a ...interface{}) {
|
||||
fdump(c, os.Stdout, a...)
|
||||
}
|
||||
|
||||
// Sdump returns a string with the passed arguments formatted exactly the same
|
||||
// as Dump.
|
||||
func (c *ConfigState) Sdump(a ...interface{}) string {
|
||||
var buf bytes.Buffer
|
||||
fdump(c, &buf, a...)
|
||||
return buf.String()
|
||||
}
|
||||
|
||||
// convertArgs accepts a slice of arguments and returns a slice of the same
|
||||
// length with each argument converted to a spew Formatter interface using
|
||||
// the ConfigState associated with s.
|
||||
func (c *ConfigState) convertArgs(args []interface{}) (formatters []interface{}) {
|
||||
formatters = make([]interface{}, len(args))
|
||||
for index, arg := range args {
|
||||
formatters[index] = newFormatter(c, arg)
|
||||
}
|
||||
return formatters
|
||||
}
|
||||
|
||||
// NewDefaultConfig returns a ConfigState with the following default settings.
|
||||
//
|
||||
// Indent: " "
|
||||
// MaxDepth: 0
|
||||
// DisableMethods: false
|
||||
// DisablePointerMethods: false
|
||||
// ContinueOnMethod: false
|
||||
// SortKeys: false
|
||||
func NewDefaultConfig() *ConfigState {
|
||||
return &ConfigState{Indent: " "}
|
||||
}
|
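The ConfigState methods above mirror the package-level spew functions, so a locally tuned configuration can be used without touching the global spew.Config. A minimal standalone sketch (not part of the vendored code; the node type and field names are invented for illustration):

package main

import (
	"os"

	"github.com/davecgh/go-spew/spew"
)

type node struct {
	Name     string
	Children []*node
}

func main() {
	// Start from the defaults (Indent: " ") and adjust only this instance;
	// the global spew.Config is left untouched.
	cfg := spew.NewDefaultConfig()
	cfg.Indent = "\t"
	cfg.MaxDepth = 3

	root := &node{Name: "root", Children: []*node{{Name: "leaf"}}}

	cfg.Dump(root)                        // multi-line dump to stdout
	cfg.Fdump(os.Stderr, root)            // same output, arbitrary io.Writer
	s := cfg.Sprintf("root: %+v\n", root) // inline formatting via the custom Formatter
	os.Stdout.WriteString(s)
}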
211
vendor/github.com/davecgh/go-spew/spew/doc.go
generated
vendored
Normal file
211
vendor/github.com/davecgh/go-spew/spew/doc.go
generated
vendored
Normal file
@ -0,0 +1,211 @@
/*
 * Copyright (c) 2013-2016 Dave Collins <dave@davec.name>
 *
 * Permission to use, copy, modify, and distribute this software for any
 * purpose with or without fee is hereby granted, provided that the above
 * copyright notice and this permission notice appear in all copies.
 *
 * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
 * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
 * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
 * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
 * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
 * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
 * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
 */

/*
Package spew implements a deep pretty printer for Go data structures to aid in
debugging.

A quick overview of the additional features spew provides over the built-in
printing facilities for Go data types is as follows:

	* Pointers are dereferenced and followed
	* Circular data structures are detected and handled properly
	* Custom Stringer/error interfaces are optionally invoked, including
	  on unexported types
	* Custom types which only implement the Stringer/error interfaces via
	  a pointer receiver are optionally invoked when passing non-pointer
	  variables
	* Byte arrays and slices are dumped like the hexdump -C command which
	  includes offsets, byte values in hex, and ASCII output (only when using
	  Dump style)

There are two different approaches spew allows for dumping Go data structures:

	* Dump style which prints with newlines, customizable indentation,
	  and additional debug information such as types and all pointer addresses
	  used to indirect to the final value
	* A custom Formatter interface that integrates cleanly with the standard fmt
	  package and replaces %v, %+v, %#v, and %#+v to provide inline printing
	  similar to the default %v while providing the additional functionality
	  outlined above and passing unsupported format verbs such as %x and %q
	  along to fmt

Quick Start

This section demonstrates how to quickly get started with spew. See the
sections below for further details on formatting and configuration options.

To dump a variable with full newlines, indentation, type, and pointer
information, use Dump, Fdump, or Sdump:
	spew.Dump(myVar1, myVar2, ...)
	spew.Fdump(someWriter, myVar1, myVar2, ...)
	str := spew.Sdump(myVar1, myVar2, ...)

Alternatively, if you would prefer to use format strings with a compacted inline
printing style, use the convenience wrappers Printf, Fprintf, etc. with
%v (most compact), %+v (adds pointer addresses), %#v (adds types), or
%#+v (adds types and pointer addresses):
	spew.Printf("myVar1: %v -- myVar2: %+v", myVar1, myVar2)
	spew.Printf("myVar3: %#v -- myVar4: %#+v", myVar3, myVar4)
	spew.Fprintf(someWriter, "myVar1: %v -- myVar2: %+v", myVar1, myVar2)
	spew.Fprintf(someWriter, "myVar3: %#v -- myVar4: %#+v", myVar3, myVar4)

Configuration Options

Configuration of spew is handled by fields in the ConfigState type. For
convenience, all of the top-level functions use a global state available
via the spew.Config global.

It is also possible to create a ConfigState instance that provides methods
equivalent to the top-level functions. This allows multiple configurations
to be used concurrently. See the ConfigState documentation for more details.

The following configuration options are available:
	* Indent
		String to use for each indentation level for Dump functions.
		It is a single space by default. A popular alternative is "\t".

	* MaxDepth
		Maximum number of levels to descend into nested data structures.
		There is no limit by default.

	* DisableMethods
		Disables invocation of error and Stringer interface methods.
		Method invocation is enabled by default.

	* DisablePointerMethods
		Disables invocation of error and Stringer interface methods on types
		which only accept pointer receivers from non-pointer variables.
		Pointer method invocation is enabled by default.

	* DisablePointerAddresses
		DisablePointerAddresses specifies whether to disable the printing of
		pointer addresses. This is useful when diffing data structures in tests.

	* DisableCapacities
		DisableCapacities specifies whether to disable the printing of
		capacities for arrays, slices, maps and channels. This is useful when
		diffing data structures in tests.

	* ContinueOnMethod
		Enables recursion into types after invoking error and Stringer interface
		methods. Recursion after method invocation is disabled by default.

	* SortKeys
		Specifies map keys should be sorted before being printed. Use
		this to have a more deterministic, diffable output. Note that
		only native types (bool, int, uint, floats, uintptr and string)
		and types which implement error or Stringer interfaces are
		supported with other types sorted according to the
		reflect.Value.String() output which guarantees display
		stability. Natural map order is used by default.

	* SpewKeys
		Specifies that, as a last resort attempt, map keys should be
		spewed to strings and sorted by those strings. This is only
		considered if SortKeys is true.

Dump Usage

Simply call spew.Dump with a list of variables you want to dump:

	spew.Dump(myVar1, myVar2, ...)

You may also call spew.Fdump if you would prefer to output to an arbitrary
io.Writer. For example, to dump to standard error:

	spew.Fdump(os.Stderr, myVar1, myVar2, ...)

A third option is to call spew.Sdump to get the formatted output as a string:

	str := spew.Sdump(myVar1, myVar2, ...)

Sample Dump Output

See the Dump example for details on the setup of the types and variables being
shown here.

	(main.Foo) {
	 unexportedField: (*main.Bar)(0xf84002e210)({
	  flag: (main.Flag) flagTwo,
	  data: (uintptr) <nil>
	 }),
	 ExportedField: (map[interface {}]interface {}) (len=1) {
	  (string) (len=3) "one": (bool) true
	 }
	}

Byte (and uint8) arrays and slices are displayed uniquely, like the hexdump -C
command, as shown.
	([]uint8) (len=32 cap=32) {
	 00000000  11 12 13 14 15 16 17 18  19 1a 1b 1c 1d 1e 1f 20  |............... |
	 00000010  21 22 23 24 25 26 27 28  29 2a 2b 2c 2d 2e 2f 30  |!"#$%&'()*+,-./0|
	 00000020  31 32                                             |12|
	}

Custom Formatter

Spew provides a custom formatter that implements the fmt.Formatter interface
so that it integrates cleanly with standard fmt package printing functions. The
formatter is useful for inline printing of smaller data types similar to the
standard %v format specifier.

The custom formatter only responds to the %v (most compact), %+v (adds pointer
addresses), %#v (adds types), or %#+v (adds types and pointer addresses) verb
combinations. Any other verbs such as %x and %q will be sent to the
standard fmt package for formatting. In addition, the custom formatter ignores
the width and precision arguments (however they will still work on the format
specifiers not handled by the custom formatter).

Custom Formatter Usage

The simplest way to make use of the spew custom formatter is to call one of the
convenience functions such as spew.Printf, spew.Println, or spew.Fprintf. The
functions have syntax you are most likely already familiar with:

	spew.Printf("myVar1: %v -- myVar2: %+v", myVar1, myVar2)
	spew.Printf("myVar3: %#v -- myVar4: %#+v", myVar3, myVar4)
	spew.Println(myVar, myVar2)
	spew.Fprintf(os.Stderr, "myVar1: %v -- myVar2: %+v", myVar1, myVar2)
	spew.Fprintf(os.Stderr, "myVar3: %#v -- myVar4: %#+v", myVar3, myVar4)

See the Index for the full list of convenience functions.

Sample Formatter Output

Double pointer to a uint8:
	%v: <**>5
	%+v: <**>(0xf8400420d0->0xf8400420c8)5
	%#v: (**uint8)5
	%#+v: (**uint8)(0xf8400420d0->0xf8400420c8)5

Pointer to circular struct with a uint8 field and a pointer to itself:
	%v: <*>{1 <*><shown>}
	%+v: <*>(0xf84003e260){ui8:1 c:<*>(0xf84003e260)<shown>}
	%#v: (*main.circular){ui8:(uint8)1 c:(*main.circular)<shown>}
	%#+v: (*main.circular)(0xf84003e260){ui8:(uint8)1 c:(*main.circular)(0xf84003e260)<shown>}

See the Printf example for details on the setup of variables being shown
here.

Errors

Since it is possible for custom Stringer/error interfaces to panic, spew
detects them and handles them internally by printing the panic information
inline with the output. Since spew is intended to provide deep pretty printing
capabilities on structures, it intentionally does not return any errors.
*/
package spew
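The SortKeys option described above is the one most often reached for in tests, since map iteration order is otherwise random. A short sketch (my own example data, not taken from the vendored tests) of the difference it makes:

package main

import "github.com/davecgh/go-spew/spew"

func main() {
	m := map[string]int{"b": 2, "a": 1, "c": 3}

	// Natural map order: the key order below can change from run to run.
	spew.Dump(m)

	// SortKeys on a dedicated ConfigState prints the keys in sorted order,
	// which makes the output stable enough to diff in tests.
	scs := spew.ConfigState{Indent: " ", SortKeys: true}
	scs.Dump(m)
}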
509
vendor/github.com/davecgh/go-spew/spew/dump.go
generated
vendored
Normal file
509
vendor/github.com/davecgh/go-spew/spew/dump.go
generated
vendored
Normal file
@ -0,0 +1,509 @@
|
||||
/*
|
||||
* Copyright (c) 2013-2016 Dave Collins <dave@davec.name>
|
||||
*
|
||||
* Permission to use, copy, modify, and distribute this software for any
|
||||
* purpose with or without fee is hereby granted, provided that the above
|
||||
* copyright notice and this permission notice appear in all copies.
|
||||
*
|
||||
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
|
||||
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
|
||||
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
|
||||
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
|
||||
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
|
||||
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
|
||||
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
|
||||
*/
|
||||
|
||||
package spew
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"encoding/hex"
|
||||
"fmt"
|
||||
"io"
|
||||
"os"
|
||||
"reflect"
|
||||
"regexp"
|
||||
"strconv"
|
||||
"strings"
|
||||
)
|
||||
|
||||
var (
|
||||
// uint8Type is a reflect.Type representing a uint8. It is used to
|
||||
// convert cgo types to uint8 slices for hexdumping.
|
||||
uint8Type = reflect.TypeOf(uint8(0))
|
||||
|
||||
// cCharRE is a regular expression that matches a cgo char.
|
||||
// It is used to detect character arrays to hexdump them.
|
||||
cCharRE = regexp.MustCompile(`^.*\._Ctype_char$`)
|
||||
|
||||
// cUnsignedCharRE is a regular expression that matches a cgo unsigned
|
||||
// char. It is used to detect unsigned character arrays to hexdump
|
||||
// them.
|
||||
cUnsignedCharRE = regexp.MustCompile(`^.*\._Ctype_unsignedchar$`)
|
||||
|
||||
// cUint8tCharRE is a regular expression that matches a cgo uint8_t.
|
||||
// It is used to detect uint8_t arrays to hexdump them.
|
||||
cUint8tCharRE = regexp.MustCompile(`^.*\._Ctype_uint8_t$`)
|
||||
)
|
||||
|
||||
// dumpState contains information about the state of a dump operation.
|
||||
type dumpState struct {
|
||||
w io.Writer
|
||||
depth int
|
||||
pointers map[uintptr]int
|
||||
ignoreNextType bool
|
||||
ignoreNextIndent bool
|
||||
cs *ConfigState
|
||||
}
|
||||
|
||||
// indent performs indentation according to the depth level and cs.Indent
|
||||
// option.
|
||||
func (d *dumpState) indent() {
|
||||
if d.ignoreNextIndent {
|
||||
d.ignoreNextIndent = false
|
||||
return
|
||||
}
|
||||
d.w.Write(bytes.Repeat([]byte(d.cs.Indent), d.depth))
|
||||
}
|
||||
|
||||
// unpackValue returns values inside of non-nil interfaces when possible.
|
||||
// This is useful for data types like structs, arrays, slices, and maps which
|
||||
// can contain varying types packed inside an interface.
|
||||
func (d *dumpState) unpackValue(v reflect.Value) reflect.Value {
|
||||
if v.Kind() == reflect.Interface && !v.IsNil() {
|
||||
v = v.Elem()
|
||||
}
|
||||
return v
|
||||
}
|
||||
|
||||
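unpackValue relies on a small reflect idiom: a reflect.Value of kind Interface wraps the concrete value, and Elem() unwraps it. A standalone sketch of that idiom (not part of the vendored file):

package main

import (
	"fmt"
	"reflect"
)

func main() {
	values := []interface{}{42, "hi", []int{1, 2}}
	for i := range values {
		// Taking the address of the interface element and calling Elem()
		// yields a Value of kind Interface, just like struct fields or
		// slice elements of interface type do inside dump.
		v := reflect.ValueOf(&values[i]).Elem()
		if v.Kind() == reflect.Interface && !v.IsNil() {
			v = v.Elem() // the concrete value inside, as unpackValue returns
		}
		fmt.Println(v.Kind(), v.Interface())
	}
}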
// dumpPtr handles formatting of pointers by indirecting them as necessary.
|
||||
func (d *dumpState) dumpPtr(v reflect.Value) {
|
||||
// Remove pointers at or below the current depth from map used to detect
|
||||
// circular refs.
|
||||
for k, depth := range d.pointers {
|
||||
if depth >= d.depth {
|
||||
delete(d.pointers, k)
|
||||
}
|
||||
}
|
||||
|
||||
// Keep list of all dereferenced pointers to show later.
|
||||
pointerChain := make([]uintptr, 0)
|
||||
|
||||
// Figure out how many levels of indirection there are by dereferencing
|
||||
// pointers and unpacking interfaces down the chain while detecting circular
|
||||
// references.
|
||||
nilFound := false
|
||||
cycleFound := false
|
||||
indirects := 0
|
||||
ve := v
|
||||
for ve.Kind() == reflect.Ptr {
|
||||
if ve.IsNil() {
|
||||
nilFound = true
|
||||
break
|
||||
}
|
||||
indirects++
|
||||
addr := ve.Pointer()
|
||||
pointerChain = append(pointerChain, addr)
|
||||
if pd, ok := d.pointers[addr]; ok && pd < d.depth {
|
||||
cycleFound = true
|
||||
indirects--
|
||||
break
|
||||
}
|
||||
d.pointers[addr] = d.depth
|
||||
|
||||
ve = ve.Elem()
|
||||
if ve.Kind() == reflect.Interface {
|
||||
if ve.IsNil() {
|
||||
nilFound = true
|
||||
break
|
||||
}
|
||||
ve = ve.Elem()
|
||||
}
|
||||
}
|
||||
|
||||
// Display type information.
|
||||
d.w.Write(openParenBytes)
|
||||
d.w.Write(bytes.Repeat(asteriskBytes, indirects))
|
||||
d.w.Write([]byte(ve.Type().String()))
|
||||
d.w.Write(closeParenBytes)
|
||||
|
||||
// Display pointer information.
|
||||
if !d.cs.DisablePointerAddresses && len(pointerChain) > 0 {
|
||||
d.w.Write(openParenBytes)
|
||||
for i, addr := range pointerChain {
|
||||
if i > 0 {
|
||||
d.w.Write(pointerChainBytes)
|
||||
}
|
||||
printHexPtr(d.w, addr)
|
||||
}
|
||||
d.w.Write(closeParenBytes)
|
||||
}
|
||||
|
||||
// Display dereferenced value.
|
||||
d.w.Write(openParenBytes)
|
||||
switch {
|
||||
case nilFound:
|
||||
d.w.Write(nilAngleBytes)
|
||||
|
||||
case cycleFound:
|
||||
d.w.Write(circularBytes)
|
||||
|
||||
default:
|
||||
d.ignoreNextType = true
|
||||
d.dump(ve)
|
||||
}
|
||||
d.w.Write(closeParenBytes)
|
||||
}
|
||||
|
||||
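The loop in dumpPtr boils down to: dereference one pointer level at a time, record each address, and stop on nil or on an address already seen so circular chains terminate. A simplified standalone sketch of that walk (not part of the vendored file, without the depth bookkeeping the real code uses):

package main

import (
	"fmt"
	"reflect"
)

// followPtr walks a chain of pointers the way dumpPtr does: dereference
// one level at a time, remember every address seen, and stop on nil or
// on a repeated address (a circular reference).
func followPtr(v interface{}) {
	seen := make(map[uintptr]bool)
	indirects := 0
	ve := reflect.ValueOf(v)
	for ve.Kind() == reflect.Ptr {
		if ve.IsNil() {
			fmt.Printf("%d indirections -> <nil>\n", indirects)
			return
		}
		addr := ve.Pointer()
		if seen[addr] {
			fmt.Printf("%d indirections -> <circular>\n", indirects)
			return
		}
		seen[addr] = true
		indirects++
		ve = ve.Elem()
	}
	fmt.Printf("%d indirections -> %v\n", indirects, ve.Interface())
}

func main() {
	x := 7
	p := &x
	pp := &p
	followPtr(pp) // 2 indirections -> 7

	var nilp *int
	followPtr(&nilp) // one level recorded, then <nil>
}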
// dumpSlice handles formatting of arrays and slices. Byte (uint8 under
|
||||
// reflection) arrays and slices are dumped in hexdump -C fashion.
|
||||
func (d *dumpState) dumpSlice(v reflect.Value) {
|
||||
// Determine whether this type should be hex dumped or not. Also,
|
||||
// for types which should be hexdumped, try to use the underlying data
|
||||
// first, then fall back to trying to convert them to a uint8 slice.
|
||||
var buf []uint8
|
||||
doConvert := false
|
||||
doHexDump := false
|
||||
numEntries := v.Len()
|
||||
if numEntries > 0 {
|
||||
vt := v.Index(0).Type()
|
||||
vts := vt.String()
|
||||
switch {
|
||||
// C types that need to be converted.
|
||||
case cCharRE.MatchString(vts):
|
||||
fallthrough
|
||||
case cUnsignedCharRE.MatchString(vts):
|
||||
fallthrough
|
||||
case cUint8tCharRE.MatchString(vts):
|
||||
doConvert = true
|
||||
|
||||
// Try to use existing uint8 slices and fall back to converting
|
||||
// and copying if that fails.
|
||||
case vt.Kind() == reflect.Uint8:
|
||||
// We need an addressable interface to convert the type
|
||||
// to a byte slice. However, the reflect package won't
|
||||
// give us an interface on certain things like
|
||||
// unexported struct fields in order to enforce
|
||||
// visibility rules. We use unsafe, when available, to
|
||||
// bypass these restrictions since this package does not
|
||||
// mutate the values.
|
||||
vs := v
|
||||
if !vs.CanInterface() || !vs.CanAddr() {
|
||||
vs = unsafeReflectValue(vs)
|
||||
}
|
||||
if !UnsafeDisabled {
|
||||
vs = vs.Slice(0, numEntries)
|
||||
|
||||
// Use the existing uint8 slice if it can be
|
||||
// type asserted.
|
||||
iface := vs.Interface()
|
||||
if slice, ok := iface.([]uint8); ok {
|
||||
buf = slice
|
||||
doHexDump = true
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
// The underlying data needs to be converted if it can't
|
||||
// be type asserted to a uint8 slice.
|
||||
doConvert = true
|
||||
}
|
||||
|
||||
// Copy and convert the underlying type if needed.
|
||||
if doConvert && vt.ConvertibleTo(uint8Type) {
|
||||
// Convert and copy each element into a uint8 byte
|
||||
// slice.
|
||||
buf = make([]uint8, numEntries)
|
||||
for i := 0; i < numEntries; i++ {
|
||||
vv := v.Index(i)
|
||||
buf[i] = uint8(vv.Convert(uint8Type).Uint())
|
||||
}
|
||||
doHexDump = true
|
||||
}
|
||||
}
|
||||
|
||||
// Hexdump the entire slice as needed.
|
||||
if doHexDump {
|
||||
indent := strings.Repeat(d.cs.Indent, d.depth)
|
||||
str := indent + hex.Dump(buf)
|
||||
str = strings.Replace(str, "\n", "\n"+indent, -1)
|
||||
str = strings.TrimRight(str, d.cs.Indent)
|
||||
d.w.Write([]byte(str))
|
||||
return
|
||||
}
|
||||
|
||||
// Recursively call dump for each item.
|
||||
for i := 0; i < numEntries; i++ {
|
||||
d.dump(d.unpackValue(v.Index(i)))
|
||||
if i < (numEntries - 1) {
|
||||
d.w.Write(commaNewlineBytes)
|
||||
} else {
|
||||
d.w.Write(newlineBytes)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
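The hexdump branch above leans on encoding/hex.Dump and then re-indents its multi-line output. The same three lines work in isolation; this sketch (my own buffer contents) shows the indent, replace, and trim sequence on its own:

package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	buf := []byte("spew hexdumps byte slices")
	indent := "  "

	// The same trick dumpSlice uses: prefix every line that hex.Dump
	// produces with the current indentation, then trim the indentation
	// left dangling after the final newline.
	str := indent + hex.Dump(buf)
	str = strings.Replace(str, "\n", "\n"+indent, -1)
	str = strings.TrimRight(str, indent)
	fmt.Print(str)
}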
// dump is the main workhorse for dumping a value. It uses the passed reflect
|
||||
// value to figure out what kind of object we are dealing with and formats it
|
||||
// appropriately. It is a recursive function, however circular data structures
|
||||
// are detected and handled properly.
|
||||
func (d *dumpState) dump(v reflect.Value) {
|
||||
// Handle invalid reflect values immediately.
|
||||
kind := v.Kind()
|
||||
if kind == reflect.Invalid {
|
||||
d.w.Write(invalidAngleBytes)
|
||||
return
|
||||
}
|
||||
|
||||
// Handle pointers specially.
|
||||
if kind == reflect.Ptr {
|
||||
d.indent()
|
||||
d.dumpPtr(v)
|
||||
return
|
||||
}
|
||||
|
||||
// Print type information unless already handled elsewhere.
|
||||
if !d.ignoreNextType {
|
||||
d.indent()
|
||||
d.w.Write(openParenBytes)
|
||||
d.w.Write([]byte(v.Type().String()))
|
||||
d.w.Write(closeParenBytes)
|
||||
d.w.Write(spaceBytes)
|
||||
}
|
||||
d.ignoreNextType = false
|
||||
|
||||
// Display length and capacity if the built-in len and cap functions
|
||||
// work with the value's kind and the len/cap itself is non-zero.
|
||||
valueLen, valueCap := 0, 0
|
||||
switch v.Kind() {
|
||||
case reflect.Array, reflect.Slice, reflect.Chan:
|
||||
valueLen, valueCap = v.Len(), v.Cap()
|
||||
case reflect.Map, reflect.String:
|
||||
valueLen = v.Len()
|
||||
}
|
||||
if valueLen != 0 || !d.cs.DisableCapacities && valueCap != 0 {
|
||||
d.w.Write(openParenBytes)
|
||||
if valueLen != 0 {
|
||||
d.w.Write(lenEqualsBytes)
|
||||
printInt(d.w, int64(valueLen), 10)
|
||||
}
|
||||
if !d.cs.DisableCapacities && valueCap != 0 {
|
||||
if valueLen != 0 {
|
||||
d.w.Write(spaceBytes)
|
||||
}
|
||||
d.w.Write(capEqualsBytes)
|
||||
printInt(d.w, int64(valueCap), 10)
|
||||
}
|
||||
d.w.Write(closeParenBytes)
|
||||
d.w.Write(spaceBytes)
|
||||
}
|
||||
|
||||
// Call Stringer/error interfaces if they exist and the handle methods flag
|
||||
// is enabled
|
||||
if !d.cs.DisableMethods {
|
||||
if (kind != reflect.Invalid) && (kind != reflect.Interface) {
|
||||
if handled := handleMethods(d.cs, d.w, v); handled {
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
switch kind {
|
||||
case reflect.Invalid:
|
||||
// Do nothing. We should never get here since invalid has already
|
||||
// been handled above.
|
||||
|
||||
case reflect.Bool:
|
||||
printBool(d.w, v.Bool())
|
||||
|
||||
case reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, reflect.Int:
|
||||
printInt(d.w, v.Int(), 10)
|
||||
|
||||
case reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uint:
|
||||
printUint(d.w, v.Uint(), 10)
|
||||
|
||||
case reflect.Float32:
|
||||
printFloat(d.w, v.Float(), 32)
|
||||
|
||||
case reflect.Float64:
|
||||
printFloat(d.w, v.Float(), 64)
|
||||
|
||||
case reflect.Complex64:
|
||||
printComplex(d.w, v.Complex(), 32)
|
||||
|
||||
case reflect.Complex128:
|
||||
printComplex(d.w, v.Complex(), 64)
|
||||
|
||||
case reflect.Slice:
|
||||
if v.IsNil() {
|
||||
d.w.Write(nilAngleBytes)
|
||||
break
|
||||
}
|
||||
fallthrough
|
||||
|
||||
case reflect.Array:
|
||||
d.w.Write(openBraceNewlineBytes)
|
||||
d.depth++
|
||||
if (d.cs.MaxDepth != 0) && (d.depth > d.cs.MaxDepth) {
|
||||
d.indent()
|
||||
d.w.Write(maxNewlineBytes)
|
||||
} else {
|
||||
d.dumpSlice(v)
|
||||
}
|
||||
d.depth--
|
||||
d.indent()
|
||||
d.w.Write(closeBraceBytes)
|
||||
|
||||
case reflect.String:
|
||||
d.w.Write([]byte(strconv.Quote(v.String())))
|
||||
|
||||
case reflect.Interface:
|
||||
// The only time we should get here is for nil interfaces due to
|
||||
// unpackValue calls.
|
||||
if v.IsNil() {
|
||||
d.w.Write(nilAngleBytes)
|
||||
}
|
||||
|
||||
case reflect.Ptr:
|
||||
// Do nothing. We should never get here since pointers have already
|
||||
// been handled above.
|
||||
|
||||
case reflect.Map:
|
||||
// nil maps should be indicated as different than empty maps
|
||||
if v.IsNil() {
|
||||
d.w.Write(nilAngleBytes)
|
||||
break
|
||||
}
|
||||
|
||||
d.w.Write(openBraceNewlineBytes)
|
||||
d.depth++
|
||||
if (d.cs.MaxDepth != 0) && (d.depth > d.cs.MaxDepth) {
|
||||
d.indent()
|
||||
d.w.Write(maxNewlineBytes)
|
||||
} else {
|
||||
numEntries := v.Len()
|
||||
keys := v.MapKeys()
|
||||
if d.cs.SortKeys {
|
||||
sortValues(keys, d.cs)
|
||||
}
|
||||
for i, key := range keys {
|
||||
d.dump(d.unpackValue(key))
|
||||
d.w.Write(colonSpaceBytes)
|
||||
d.ignoreNextIndent = true
|
||||
d.dump(d.unpackValue(v.MapIndex(key)))
|
||||
if i < (numEntries - 1) {
|
||||
d.w.Write(commaNewlineBytes)
|
||||
} else {
|
||||
d.w.Write(newlineBytes)
|
||||
}
|
||||
}
|
||||
}
|
||||
d.depth--
|
||||
d.indent()
|
||||
d.w.Write(closeBraceBytes)
|
||||
|
||||
case reflect.Struct:
|
||||
d.w.Write(openBraceNewlineBytes)
|
||||
d.depth++
|
||||
if (d.cs.MaxDepth != 0) && (d.depth > d.cs.MaxDepth) {
|
||||
d.indent()
|
||||
d.w.Write(maxNewlineBytes)
|
||||
} else {
|
||||
vt := v.Type()
|
||||
numFields := v.NumField()
|
||||
for i := 0; i < numFields; i++ {
|
||||
d.indent()
|
||||
vtf := vt.Field(i)
|
||||
d.w.Write([]byte(vtf.Name))
|
||||
d.w.Write(colonSpaceBytes)
|
||||
d.ignoreNextIndent = true
|
||||
d.dump(d.unpackValue(v.Field(i)))
|
||||
if i < (numFields - 1) {
|
||||
d.w.Write(commaNewlineBytes)
|
||||
} else {
|
||||
d.w.Write(newlineBytes)
|
||||
}
|
||||
}
|
||||
}
|
||||
d.depth--
|
||||
d.indent()
|
||||
d.w.Write(closeBraceBytes)
|
||||
|
||||
case reflect.Uintptr:
|
||||
printHexPtr(d.w, uintptr(v.Uint()))
|
||||
|
||||
case reflect.UnsafePointer, reflect.Chan, reflect.Func:
|
||||
printHexPtr(d.w, v.Pointer())
|
||||
|
||||
// There were not any other types at the time this code was written, but
|
||||
// fall back to letting the default fmt package handle it in case any new
|
||||
// types are added.
|
||||
default:
|
||||
if v.CanInterface() {
|
||||
fmt.Fprintf(d.w, "%v", v.Interface())
|
||||
} else {
|
||||
fmt.Fprintf(d.w, "%v", v.String())
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
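The body of dump is a single switch on reflect.Kind: container kinds recurse, everything else prints a scalar. A heavily reduced standalone sketch of that dispatch shape (illustration only, covering just a few kinds):

package main

import (
	"fmt"
	"reflect"
)

// describe is a tiny, flattened analogue of the Kind switch above.
func describe(v reflect.Value) string {
	switch v.Kind() {
	case reflect.Bool:
		return fmt.Sprintf("(bool) %t", v.Bool())
	case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
		return fmt.Sprintf("(%s) %d", v.Type(), v.Int())
	case reflect.String:
		return fmt.Sprintf("(string) %q", v.String())
	case reflect.Slice, reflect.Array:
		parts := ""
		for i := 0; i < v.Len(); i++ {
			if i > 0 {
				parts += ", "
			}
			parts += describe(v.Index(i))
		}
		return "{" + parts + "}"
	default:
		return "(" + v.Kind().String() + ")"
	}
}

func main() {
	fmt.Println(describe(reflect.ValueOf([]int{1, 2, 3})))
	// {(int) 1, (int) 2, (int) 3}
}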
// fdump is a helper function to consolidate the logic from the various public
|
||||
// methods which take varying writers and config states.
|
||||
func fdump(cs *ConfigState, w io.Writer, a ...interface{}) {
|
||||
for _, arg := range a {
|
||||
if arg == nil {
|
||||
w.Write(interfaceBytes)
|
||||
w.Write(spaceBytes)
|
||||
w.Write(nilAngleBytes)
|
||||
w.Write(newlineBytes)
|
||||
continue
|
||||
}
|
||||
|
||||
d := dumpState{w: w, cs: cs}
|
||||
d.pointers = make(map[uintptr]int)
|
||||
d.dump(reflect.ValueOf(arg))
|
||||
d.w.Write(newlineBytes)
|
||||
}
|
||||
}
|
||||
|
||||
// Fdump formats and displays the passed arguments to io.Writer w. It formats
|
||||
// exactly the same as Dump.
|
||||
func Fdump(w io.Writer, a ...interface{}) {
|
||||
fdump(&Config, w, a...)
|
||||
}
|
||||
|
||||
// Sdump returns a string with the passed arguments formatted exactly the same
|
||||
// as Dump.
|
||||
func Sdump(a ...interface{}) string {
|
||||
var buf bytes.Buffer
|
||||
fdump(&Config, &buf, a...)
|
||||
return buf.String()
|
||||
}
|
||||
|
||||
/*
|
||||
Dump displays the passed parameters to standard out with newlines, customizable
|
||||
indentation, and additional debug information such as complete types and all
|
||||
pointer addresses used to indirect to the final value. It provides the
|
||||
following features over the built-in printing facilities provided by the fmt
|
||||
package:
|
||||
|
||||
* Pointers are dereferenced and followed
|
||||
* Circular data structures are detected and handled properly
|
||||
* Custom Stringer/error interfaces are optionally invoked, including
|
||||
on unexported types
|
||||
* Custom types which only implement the Stringer/error interfaces via
|
||||
a pointer receiver are optionally invoked when passing non-pointer
|
||||
variables
|
||||
* Byte arrays and slices are dumped like the hexdump -C command which
|
||||
includes offsets, byte values in hex, and ASCII output
|
||||
|
||||
The configuration options are controlled by an exported package global,
|
||||
spew.Config. See ConfigState for options documentation.
|
||||
|
||||
See Fdump if you would prefer dumping to an arbitrary io.Writer or Sdump to
|
||||
get the formatted result as a string.
|
||||
*/
|
||||
func Dump(a ...interface{}) {
|
||||
fdump(&Config, os.Stdout, a...)
|
||||
}
|
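Sdump is the variant most useful in test failure messages, since the whole structure, with types and lengths, ends up in one string. A small usage sketch (the config struct is hypothetical, not from the vendored tests):

package main

import (
	"fmt"

	"github.com/davecgh/go-spew/spew"
)

type config struct {
	Name    string
	Retries int
	Tags    []string
}

func main() {
	got := config{Name: "worker", Retries: 3, Tags: []string{"a", "b"}}

	// The whole structure, including types, lengths, and capacities,
	// lands in a single string suitable for a t.Errorf message.
	fmt.Printf("unexpected config:\n%s", spew.Sdump(got))
}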
1042
vendor/github.com/davecgh/go-spew/spew/dump_test.go
generated
vendored
Normal file
1042
vendor/github.com/davecgh/go-spew/spew/dump_test.go
generated
vendored
Normal file
File diff suppressed because it is too large
101
vendor/github.com/davecgh/go-spew/spew/dumpcgo_test.go
generated
vendored
Normal file
101
vendor/github.com/davecgh/go-spew/spew/dumpcgo_test.go
generated
vendored
Normal file
@ -0,0 +1,101 @@
|
||||
// Copyright (c) 2013-2016 Dave Collins <dave@davec.name>
|
||||
//
|
||||
// Permission to use, copy, modify, and distribute this software for any
|
||||
// purpose with or without fee is hereby granted, provided that the above
|
||||
// copyright notice and this permission notice appear in all copies.
|
||||
//
|
||||
// THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
|
||||
// WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
|
||||
// MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
|
||||
// ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
|
||||
// WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
|
||||
// ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
|
||||
// OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
|
||||
|
||||
// NOTE: Due to the following build constraints, this file will only be compiled
// when both cgo is supported and "-tags testcgo" is added to the go test
// command line. This means the cgo tests are only added (and hence run) when
// specifically requested. This configuration is used because spew itself
// does not require cgo to run even though it does handle certain cgo types
// specially. Rather than forcing all clients to require cgo and an external
// C compiler just to run the tests, this scheme makes them optional.
// +build cgo,testcgo
|
||||
|
||||
package spew_test
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
|
||||
"github.com/davecgh/go-spew/spew/testdata"
|
||||
)
|
||||
|
||||
func addCgoDumpTests() {
|
||||
// C char pointer.
|
||||
v := testdata.GetCgoCharPointer()
|
||||
nv := testdata.GetCgoNullCharPointer()
|
||||
pv := &v
|
||||
vcAddr := fmt.Sprintf("%p", v)
|
||||
vAddr := fmt.Sprintf("%p", pv)
|
||||
pvAddr := fmt.Sprintf("%p", &pv)
|
||||
vt := "*testdata._Ctype_char"
|
||||
vs := "116"
|
||||
addDumpTest(v, "("+vt+")("+vcAddr+")("+vs+")\n")
|
||||
addDumpTest(pv, "(*"+vt+")("+vAddr+"->"+vcAddr+")("+vs+")\n")
|
||||
addDumpTest(&pv, "(**"+vt+")("+pvAddr+"->"+vAddr+"->"+vcAddr+")("+vs+")\n")
|
||||
addDumpTest(nv, "("+vt+")(<nil>)\n")
|
||||
|
||||
// C char array.
|
||||
v2, v2l, v2c := testdata.GetCgoCharArray()
|
||||
v2Len := fmt.Sprintf("%d", v2l)
|
||||
v2Cap := fmt.Sprintf("%d", v2c)
|
||||
v2t := "[6]testdata._Ctype_char"
|
||||
v2s := "(len=" + v2Len + " cap=" + v2Cap + ") " +
|
||||
"{\n 00000000 74 65 73 74 32 00 " +
|
||||
" |test2.|\n}"
|
||||
addDumpTest(v2, "("+v2t+") "+v2s+"\n")
|
||||
|
||||
// C unsigned char array.
|
||||
v3, v3l, v3c := testdata.GetCgoUnsignedCharArray()
|
||||
v3Len := fmt.Sprintf("%d", v3l)
|
||||
v3Cap := fmt.Sprintf("%d", v3c)
|
||||
v3t := "[6]testdata._Ctype_unsignedchar"
|
||||
v3t2 := "[6]testdata._Ctype_uchar"
|
||||
v3s := "(len=" + v3Len + " cap=" + v3Cap + ") " +
|
||||
"{\n 00000000 74 65 73 74 33 00 " +
|
||||
" |test3.|\n}"
|
||||
addDumpTest(v3, "("+v3t+") "+v3s+"\n", "("+v3t2+") "+v3s+"\n")
|
||||
|
||||
// C signed char array.
|
||||
v4, v4l, v4c := testdata.GetCgoSignedCharArray()
|
||||
v4Len := fmt.Sprintf("%d", v4l)
|
||||
v4Cap := fmt.Sprintf("%d", v4c)
|
||||
v4t := "[6]testdata._Ctype_schar"
|
||||
v4t2 := "testdata._Ctype_schar"
|
||||
v4s := "(len=" + v4Len + " cap=" + v4Cap + ") " +
|
||||
"{\n (" + v4t2 + ") 116,\n (" + v4t2 + ") 101,\n (" + v4t2 +
|
||||
") 115,\n (" + v4t2 + ") 116,\n (" + v4t2 + ") 52,\n (" + v4t2 +
|
||||
") 0\n}"
|
||||
addDumpTest(v4, "("+v4t+") "+v4s+"\n")
|
||||
|
||||
// C uint8_t array.
|
||||
v5, v5l, v5c := testdata.GetCgoUint8tArray()
|
||||
v5Len := fmt.Sprintf("%d", v5l)
|
||||
v5Cap := fmt.Sprintf("%d", v5c)
|
||||
v5t := "[6]testdata._Ctype_uint8_t"
|
||||
v5t2 := "[6]testdata._Ctype_uchar"
|
||||
v5s := "(len=" + v5Len + " cap=" + v5Cap + ") " +
|
||||
"{\n 00000000 74 65 73 74 35 00 " +
|
||||
" |test5.|\n}"
|
||||
addDumpTest(v5, "("+v5t+") "+v5s+"\n", "("+v5t2+") "+v5s+"\n")
|
||||
|
||||
// C typedefed unsigned char array.
|
||||
v6, v6l, v6c := testdata.GetCgoTypdefedUnsignedCharArray()
|
||||
v6Len := fmt.Sprintf("%d", v6l)
|
||||
v6Cap := fmt.Sprintf("%d", v6c)
|
||||
v6t := "[6]testdata._Ctype_custom_uchar_t"
|
||||
v6t2 := "[6]testdata._Ctype_uchar"
|
||||
v6s := "(len=" + v6Len + " cap=" + v6Cap + ") " +
|
||||
"{\n 00000000 74 65 73 74 36 00 " +
|
||||
" |test6.|\n}"
|
||||
addDumpTest(v6, "("+v6t+") "+v6s+"\n", "("+v6t2+") "+v6s+"\n")
|
||||
}
|
26
vendor/github.com/davecgh/go-spew/spew/dumpnocgo_test.go
generated
vendored
Normal file
26
vendor/github.com/davecgh/go-spew/spew/dumpnocgo_test.go
generated
vendored
Normal file
@ -0,0 +1,26 @@
// Copyright (c) 2013 Dave Collins <dave@davec.name>
//
// Permission to use, copy, modify, and distribute this software for any
// purpose with or without fee is hereby granted, provided that the above
// copyright notice and this permission notice appear in all copies.
//
// THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
// WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
// ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
// WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
// ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
// OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

// NOTE: Due to the following build constraints, this file will only be compiled
// when either cgo is not supported or "-tags testcgo" is not added to the go
// test command line. This file intentionally does not set up any cgo tests in
// this scenario.
// +build !cgo !testcgo

package spew_test

func addCgoDumpTests() {
	// Don't add any tests for cgo since this file is only compiled when
	// there should not be any cgo tests.
}
|
226
vendor/github.com/davecgh/go-spew/spew/example_test.go
generated
vendored
Normal file
226
vendor/github.com/davecgh/go-spew/spew/example_test.go
generated
vendored
Normal file
@ -0,0 +1,226 @@
|
||||
/*
|
||||
* Copyright (c) 2013-2016 Dave Collins <dave@davec.name>
|
||||
*
|
||||
* Permission to use, copy, modify, and distribute this software for any
|
||||
* purpose with or without fee is hereby granted, provided that the above
|
||||
* copyright notice and this permission notice appear in all copies.
|
||||
*
|
||||
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
|
||||
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
|
||||
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
|
||||
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
|
||||
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
|
||||
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
|
||||
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
|
||||
*/
|
||||
|
||||
package spew_test
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
|
||||
"github.com/davecgh/go-spew/spew"
|
||||
)
|
||||
|
||||
type Flag int
|
||||
|
||||
const (
|
||||
flagOne Flag = iota
|
||||
flagTwo
|
||||
)
|
||||
|
||||
var flagStrings = map[Flag]string{
|
||||
flagOne: "flagOne",
|
||||
flagTwo: "flagTwo",
|
||||
}
|
||||
|
||||
func (f Flag) String() string {
|
||||
if s, ok := flagStrings[f]; ok {
|
||||
return s
|
||||
}
|
||||
return fmt.Sprintf("Unknown flag (%d)", int(f))
|
||||
}
|
||||
|
||||
type Bar struct {
|
||||
data uintptr
|
||||
}
|
||||
|
||||
type Foo struct {
|
||||
unexportedField Bar
|
||||
ExportedField map[interface{}]interface{}
|
||||
}
|
||||
|
||||
// This example demonstrates how to use Dump to dump variables to stdout.
|
||||
func ExampleDump() {
|
||||
// The following package level declarations are assumed for this example:
|
||||
/*
|
||||
type Flag int
|
||||
|
||||
const (
|
||||
flagOne Flag = iota
|
||||
flagTwo
|
||||
)
|
||||
|
||||
var flagStrings = map[Flag]string{
|
||||
flagOne: "flagOne",
|
||||
flagTwo: "flagTwo",
|
||||
}
|
||||
|
||||
func (f Flag) String() string {
|
||||
if s, ok := flagStrings[f]; ok {
|
||||
return s
|
||||
}
|
||||
return fmt.Sprintf("Unknown flag (%d)", int(f))
|
||||
}
|
||||
|
||||
type Bar struct {
|
||||
data uintptr
|
||||
}
|
||||
|
||||
type Foo struct {
|
||||
unexportedField Bar
|
||||
ExportedField map[interface{}]interface{}
|
||||
}
|
||||
*/
|
||||
|
||||
// Setup some sample data structures for the example.
|
||||
bar := Bar{uintptr(0)}
|
||||
s1 := Foo{bar, map[interface{}]interface{}{"one": true}}
|
||||
f := Flag(5)
|
||||
b := []byte{
|
||||
0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18,
|
||||
0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f, 0x20,
|
||||
0x21, 0x22, 0x23, 0x24, 0x25, 0x26, 0x27, 0x28,
|
||||
0x29, 0x2a, 0x2b, 0x2c, 0x2d, 0x2e, 0x2f, 0x30,
|
||||
0x31, 0x32,
|
||||
}
|
||||
|
||||
// Dump!
|
||||
spew.Dump(s1, f, b)
|
||||
|
||||
// Output:
|
||||
// (spew_test.Foo) {
|
||||
// unexportedField: (spew_test.Bar) {
|
||||
// data: (uintptr) <nil>
|
||||
// },
|
||||
// ExportedField: (map[interface {}]interface {}) (len=1) {
|
||||
// (string) (len=3) "one": (bool) true
|
||||
// }
|
||||
// }
|
||||
// (spew_test.Flag) Unknown flag (5)
|
||||
// ([]uint8) (len=34 cap=34) {
|
||||
// 00000000 11 12 13 14 15 16 17 18 19 1a 1b 1c 1d 1e 1f 20 |............... |
|
||||
// 00000010 21 22 23 24 25 26 27 28 29 2a 2b 2c 2d 2e 2f 30 |!"#$%&'()*+,-./0|
|
||||
// 00000020 31 32 |12|
|
||||
// }
|
||||
//
|
||||
}
|
||||
|
||||
// This example demonstrates how to use Printf to display a variable with a
|
||||
// format string and inline formatting.
|
||||
func ExamplePrintf() {
|
||||
// Create a double pointer to a uint8.
|
||||
ui8 := uint8(5)
|
||||
pui8 := &ui8
|
||||
ppui8 := &pui8
|
||||
|
||||
// Create a circular data type.
|
||||
type circular struct {
|
||||
ui8 uint8
|
||||
c *circular
|
||||
}
|
||||
c := circular{ui8: 1}
|
||||
c.c = &c
|
||||
|
||||
// Print!
|
||||
spew.Printf("ppui8: %v\n", ppui8)
|
||||
spew.Printf("circular: %v\n", c)
|
||||
|
||||
// Output:
|
||||
// ppui8: <**>5
|
||||
// circular: {1 <*>{1 <*><shown>}}
|
||||
}
|
||||
|
||||
// This example demonstrates how to use a ConfigState.
|
||||
func ExampleConfigState() {
|
||||
// Modify the indent level of the ConfigState only. The global
|
||||
// configuration is not modified.
|
||||
scs := spew.ConfigState{Indent: "\t"}
|
||||
|
||||
// Output using the ConfigState instance.
|
||||
v := map[string]int{"one": 1}
|
||||
scs.Printf("v: %v\n", v)
|
||||
scs.Dump(v)
|
||||
|
||||
// Output:
|
||||
// v: map[one:1]
|
||||
// (map[string]int) (len=1) {
|
||||
// (string) (len=3) "one": (int) 1
|
||||
// }
|
||||
}
|
||||
|
||||
// This example demonstrates how to use ConfigState.Dump to dump variables to
|
||||
// stdout
|
||||
func ExampleConfigState_Dump() {
|
||||
// See the top-level Dump example for details on the types used in this
|
||||
// example.
|
||||
|
||||
// Create two ConfigState instances with different indentation.
|
||||
scs := spew.ConfigState{Indent: "\t"}
|
||||
scs2 := spew.ConfigState{Indent: " "}
|
||||
|
||||
// Setup some sample data structures for the example.
|
||||
bar := Bar{uintptr(0)}
|
||||
s1 := Foo{bar, map[interface{}]interface{}{"one": true}}
|
||||
|
||||
// Dump using the ConfigState instances.
|
||||
scs.Dump(s1)
|
||||
scs2.Dump(s1)
|
||||
|
||||
// Output:
|
||||
// (spew_test.Foo) {
|
||||
// unexportedField: (spew_test.Bar) {
|
||||
// data: (uintptr) <nil>
|
||||
// },
|
||||
// ExportedField: (map[interface {}]interface {}) (len=1) {
|
||||
// (string) (len=3) "one": (bool) true
|
||||
// }
|
||||
// }
|
||||
// (spew_test.Foo) {
|
||||
// unexportedField: (spew_test.Bar) {
|
||||
// data: (uintptr) <nil>
|
||||
// },
|
||||
// ExportedField: (map[interface {}]interface {}) (len=1) {
|
||||
// (string) (len=3) "one": (bool) true
|
||||
// }
|
||||
// }
|
||||
//
|
||||
}
|
||||
|
||||
// This example demonstrates how to use ConfigState.Printf to display a variable
|
||||
// with a format string and inline formatting.
|
||||
func ExampleConfigState_Printf() {
|
||||
// See the top-level Dump example for details on the types used in this
|
||||
// example.
|
||||
|
||||
// Create two ConfigState instances and modify the method handling of the
|
||||
// first ConfigState only.
|
||||
scs := spew.NewDefaultConfig()
|
||||
scs2 := spew.NewDefaultConfig()
|
||||
scs.DisableMethods = true
|
||||
|
||||
// Alternatively
|
||||
// scs := spew.ConfigState{Indent: " ", DisableMethods: true}
|
||||
// scs2 := spew.ConfigState{Indent: " "}
|
||||
|
||||
// This is of type Flag which implements a Stringer and has raw value 1.
|
||||
f := flagTwo
|
||||
|
||||
// Dump using the ConfigState instances.
|
||||
scs.Printf("f: %v\n", f)
|
||||
scs2.Printf("f: %v\n", f)
|
||||
|
||||
// Output:
|
||||
// f: 1
|
||||
// f: flagTwo
|
||||
}
|
419
vendor/github.com/davecgh/go-spew/spew/format.go
generated
vendored
Normal file
419
vendor/github.com/davecgh/go-spew/spew/format.go
generated
vendored
Normal file
@ -0,0 +1,419 @@
|
||||
/*
|
||||
* Copyright (c) 2013-2016 Dave Collins <dave@davec.name>
|
||||
*
|
||||
* Permission to use, copy, modify, and distribute this software for any
|
||||
* purpose with or without fee is hereby granted, provided that the above
|
||||
* copyright notice and this permission notice appear in all copies.
|
||||
*
|
||||
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
|
||||
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
|
||||
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
|
||||
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
|
||||
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
|
||||
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
|
||||
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
|
||||
*/
|
||||
|
||||
package spew
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"fmt"
|
||||
"reflect"
|
||||
"strconv"
|
||||
"strings"
|
||||
)
|
||||
|
||||
// supportedFlags is a list of all the character flags supported by fmt package.
|
||||
const supportedFlags = "0-+# "
|
||||
|
||||
// formatState implements the fmt.Formatter interface and contains information
|
||||
// about the state of a formatting operation. The NewFormatter function can
|
||||
// be used to get a new Formatter which can be used directly as arguments
|
||||
// in standard fmt package printing calls.
|
||||
type formatState struct {
|
||||
value interface{}
|
||||
fs fmt.State
|
||||
depth int
|
||||
pointers map[uintptr]int
|
||||
ignoreNextType bool
|
||||
cs *ConfigState
|
||||
}
|
||||
|
||||
// buildDefaultFormat recreates the original format string without precision
|
||||
// and width information to pass in to fmt.Sprintf in the case of an
|
||||
// unrecognized type. Unless new types are added to the language, this
|
||||
// function won't ever be called.
|
||||
func (f *formatState) buildDefaultFormat() (format string) {
|
||||
buf := bytes.NewBuffer(percentBytes)
|
||||
|
||||
for _, flag := range supportedFlags {
|
||||
if f.fs.Flag(int(flag)) {
|
||||
buf.WriteRune(flag)
|
||||
}
|
||||
}
|
||||
|
||||
buf.WriteRune('v')
|
||||
|
||||
format = buf.String()
|
||||
return format
|
||||
}
|
||||
|
||||
// constructOrigFormat recreates the original format string including precision
|
||||
// and width information to pass along to the standard fmt package. This allows
|
||||
// automatic deferral of all format strings this package doesn't support.
|
||||
func (f *formatState) constructOrigFormat(verb rune) (format string) {
|
||||
buf := bytes.NewBuffer(percentBytes)
|
||||
|
||||
for _, flag := range supportedFlags {
|
||||
if f.fs.Flag(int(flag)) {
|
||||
buf.WriteRune(flag)
|
||||
}
|
||||
}
|
||||
|
||||
if width, ok := f.fs.Width(); ok {
|
||||
buf.WriteString(strconv.Itoa(width))
|
||||
}
|
||||
|
||||
if precision, ok := f.fs.Precision(); ok {
|
||||
buf.Write(precisionBytes)
|
||||
buf.WriteString(strconv.Itoa(precision))
|
||||
}
|
||||
|
||||
buf.WriteRune(verb)
|
||||
|
||||
format = buf.String()
|
||||
return format
|
||||
}
|
||||
|
||||
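constructOrigFormat rebuilds the caller's directive from the fmt.State flags, width, and precision so unsupported verbs can be handed back to fmt. The same pattern works in any fmt.Formatter implementation; a standalone sketch (the loudInt type is invented for illustration):

package main

import (
	"fmt"
	"strconv"
)

// loudInt wraps an int and implements fmt.Formatter. For any verb other
// than 'v' it reconstructs the original directive from the fmt.State
// flags, width, and precision and defers to the standard library, the
// same deferral strategy constructOrigFormat supports.
type loudInt int

func (l loudInt) Format(fs fmt.State, verb rune) {
	if verb == 'v' {
		fmt.Fprintf(fs, "<<%d>>", int(l))
		return
	}
	format := "%"
	for _, flag := range "0-+# " {
		if fs.Flag(int(flag)) {
			format += string(flag)
		}
	}
	if w, ok := fs.Width(); ok {
		format += strconv.Itoa(w)
	}
	if p, ok := fs.Precision(); ok {
		format += "." + strconv.Itoa(p)
	}
	format += string(verb)
	fmt.Fprintf(fs, format, int(l))
}

func main() {
	fmt.Printf("%v | %06d | %x\n", loudInt(42), loudInt(42), loudInt(255))
	// <<42>> | 000042 | ff
}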
// unpackValue returns values inside of non-nil interfaces when possible and
|
||||
// ensures that types for values which have been unpacked from an interface
|
||||
// are displayed when the show types flag is also set.
|
||||
// This is useful for data types like structs, arrays, slices, and maps which
|
||||
// can contain varying types packed inside an interface.
|
||||
func (f *formatState) unpackValue(v reflect.Value) reflect.Value {
|
||||
if v.Kind() == reflect.Interface {
|
||||
f.ignoreNextType = false
|
||||
if !v.IsNil() {
|
||||
v = v.Elem()
|
||||
}
|
||||
}
|
||||
return v
|
||||
}
|
||||
|
||||
// formatPtr handles formatting of pointers by indirecting them as necessary.
|
||||
func (f *formatState) formatPtr(v reflect.Value) {
|
||||
// Display nil if top level pointer is nil.
|
||||
showTypes := f.fs.Flag('#')
|
||||
if v.IsNil() && (!showTypes || f.ignoreNextType) {
|
||||
f.fs.Write(nilAngleBytes)
|
||||
return
|
||||
}
|
||||
|
||||
// Remove pointers at or below the current depth from map used to detect
|
||||
// circular refs.
|
||||
for k, depth := range f.pointers {
|
||||
if depth >= f.depth {
|
||||
delete(f.pointers, k)
|
||||
}
|
||||
}
|
||||
|
||||
// Keep list of all dereferenced pointers to possibly show later.
|
||||
pointerChain := make([]uintptr, 0)
|
||||
|
||||
// Figure out how many levels of indirection there are by dereferencing
|
||||
// pointers and unpacking interfaces down the chain while detecting circular
|
||||
// references.
|
||||
nilFound := false
|
||||
cycleFound := false
|
||||
indirects := 0
|
||||
ve := v
|
||||
for ve.Kind() == reflect.Ptr {
|
||||
if ve.IsNil() {
|
||||
nilFound = true
|
||||
break
|
||||
}
|
||||
indirects++
|
||||
addr := ve.Pointer()
|
||||
pointerChain = append(pointerChain, addr)
|
||||
if pd, ok := f.pointers[addr]; ok && pd < f.depth {
|
||||
cycleFound = true
|
||||
indirects--
|
||||
break
|
||||
}
|
||||
f.pointers[addr] = f.depth
|
||||
|
||||
ve = ve.Elem()
|
||||
if ve.Kind() == reflect.Interface {
|
||||
if ve.IsNil() {
|
||||
nilFound = true
|
||||
break
|
||||
}
|
||||
ve = ve.Elem()
|
||||
}
|
||||
}
|
||||
|
||||
// Display type or indirection level depending on flags.
|
||||
if showTypes && !f.ignoreNextType {
|
||||
f.fs.Write(openParenBytes)
|
||||
f.fs.Write(bytes.Repeat(asteriskBytes, indirects))
|
||||
f.fs.Write([]byte(ve.Type().String()))
|
||||
f.fs.Write(closeParenBytes)
|
||||
} else {
|
||||
if nilFound || cycleFound {
|
||||
indirects += strings.Count(ve.Type().String(), "*")
|
||||
}
|
||||
f.fs.Write(openAngleBytes)
|
||||
f.fs.Write([]byte(strings.Repeat("*", indirects)))
|
||||
f.fs.Write(closeAngleBytes)
|
||||
}
|
||||
|
||||
// Display pointer information depending on flags.
|
||||
if f.fs.Flag('+') && (len(pointerChain) > 0) {
|
||||
f.fs.Write(openParenBytes)
|
||||
for i, addr := range pointerChain {
|
||||
if i > 0 {
|
||||
f.fs.Write(pointerChainBytes)
|
||||
}
|
||||
printHexPtr(f.fs, addr)
|
||||
}
|
||||
f.fs.Write(closeParenBytes)
|
||||
}
|
||||
|
||||
// Display dereferenced value.
|
||||
switch {
|
||||
case nilFound:
|
||||
f.fs.Write(nilAngleBytes)
|
||||
|
||||
case cycleFound:
|
||||
f.fs.Write(circularShortBytes)
|
||||
|
||||
default:
|
||||
f.ignoreNextType = true
|
||||
f.format(ve)
|
||||
}
|
||||
}
|
||||
|
||||
// format is the main workhorse for providing the Formatter interface. It
|
||||
// uses the passed reflect value to figure out what kind of object we are
|
||||
// dealing with and formats it appropriately. It is a recursive function,
|
||||
// however circular data structures are detected and handled properly.
|
||||
func (f *formatState) format(v reflect.Value) {
|
||||
// Handle invalid reflect values immediately.
|
||||
kind := v.Kind()
|
||||
if kind == reflect.Invalid {
|
||||
f.fs.Write(invalidAngleBytes)
|
||||
return
|
||||
}
|
||||
|
||||
// Handle pointers specially.
|
||||
if kind == reflect.Ptr {
|
||||
f.formatPtr(v)
|
||||
return
|
||||
}
|
||||
|
||||
// Print type information unless already handled elsewhere.
|
||||
if !f.ignoreNextType && f.fs.Flag('#') {
|
||||
f.fs.Write(openParenBytes)
|
||||
f.fs.Write([]byte(v.Type().String()))
|
||||
f.fs.Write(closeParenBytes)
|
||||
}
|
||||
f.ignoreNextType = false
|
||||
|
||||
// Call Stringer/error interfaces if they exist and the handle methods
|
||||
// flag is enabled.
|
||||
if !f.cs.DisableMethods {
|
||||
if (kind != reflect.Invalid) && (kind != reflect.Interface) {
|
||||
if handled := handleMethods(f.cs, f.fs, v); handled {
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
switch kind {
|
||||
case reflect.Invalid:
|
||||
// Do nothing. We should never get here since invalid has already
|
||||
// been handled above.
|
||||
|
||||
case reflect.Bool:
|
||||
printBool(f.fs, v.Bool())
|
||||
|
||||
case reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, reflect.Int:
|
||||
printInt(f.fs, v.Int(), 10)
|
||||
|
||||
case reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uint:
|
||||
printUint(f.fs, v.Uint(), 10)
|
||||
|
||||
case reflect.Float32:
|
||||
printFloat(f.fs, v.Float(), 32)
|
||||
|
||||
case reflect.Float64:
|
||||
printFloat(f.fs, v.Float(), 64)
|
||||
|
||||
case reflect.Complex64:
|
||||
printComplex(f.fs, v.Complex(), 32)
|
||||
|
||||
case reflect.Complex128:
|
||||
printComplex(f.fs, v.Complex(), 64)
|
||||
|
||||
case reflect.Slice:
|
||||
if v.IsNil() {
|
||||
f.fs.Write(nilAngleBytes)
|
||||
break
|
||||
}
|
||||
fallthrough
|
||||
|
||||
case reflect.Array:
|
||||
f.fs.Write(openBracketBytes)
|
||||
f.depth++
|
||||
if (f.cs.MaxDepth != 0) && (f.depth > f.cs.MaxDepth) {
|
||||
f.fs.Write(maxShortBytes)
|
||||
} else {
|
||||
numEntries := v.Len()
|
||||
for i := 0; i < numEntries; i++ {
|
||||
if i > 0 {
|
||||
f.fs.Write(spaceBytes)
|
||||
}
|
||||
f.ignoreNextType = true
|
||||
f.format(f.unpackValue(v.Index(i)))
|
||||
}
|
||||
}
|
||||
f.depth--
|
||||
f.fs.Write(closeBracketBytes)
|
||||
|
||||
case reflect.String:
|
||||
f.fs.Write([]byte(v.String()))
|
||||
|
||||
case reflect.Interface:
|
||||
// The only time we should get here is for nil interfaces due to
|
||||
// unpackValue calls.
|
||||
if v.IsNil() {
|
||||
f.fs.Write(nilAngleBytes)
|
||||
}
|
||||
|
||||
case reflect.Ptr:
|
||||
// Do nothing. We should never get here since pointers have already
|
||||
// been handled above.
|
||||
|
||||
case reflect.Map:
|
||||
// nil maps should be indicated as different than empty maps
|
||||
if v.IsNil() {
|
||||
f.fs.Write(nilAngleBytes)
|
||||
break
|
||||
}
|
||||
|
||||
f.fs.Write(openMapBytes)
|
||||
f.depth++
|
||||
if (f.cs.MaxDepth != 0) && (f.depth > f.cs.MaxDepth) {
|
||||
f.fs.Write(maxShortBytes)
|
||||
} else {
|
||||
keys := v.MapKeys()
|
||||
if f.cs.SortKeys {
|
||||
sortValues(keys, f.cs)
|
||||
}
|
||||
for i, key := range keys {
|
||||
if i > 0 {
|
||||
f.fs.Write(spaceBytes)
|
||||
}
|
||||
f.ignoreNextType = true
|
||||
f.format(f.unpackValue(key))
|
||||
f.fs.Write(colonBytes)
|
||||
f.ignoreNextType = true
|
||||
f.format(f.unpackValue(v.MapIndex(key)))
|
||||
}
|
||||
}
|
||||
f.depth--
|
||||
f.fs.Write(closeMapBytes)
|
||||
|
||||
case reflect.Struct:
|
||||
numFields := v.NumField()
|
||||
f.fs.Write(openBraceBytes)
|
||||
f.depth++
|
||||
if (f.cs.MaxDepth != 0) && (f.depth > f.cs.MaxDepth) {
|
||||
f.fs.Write(maxShortBytes)
|
||||
} else {
|
||||
vt := v.Type()
|
||||
for i := 0; i < numFields; i++ {
|
||||
if i > 0 {
|
||||
f.fs.Write(spaceBytes)
|
||||
}
|
||||
vtf := vt.Field(i)
|
||||
if f.fs.Flag('+') || f.fs.Flag('#') {
|
||||
f.fs.Write([]byte(vtf.Name))
|
||||
f.fs.Write(colonBytes)
|
||||
}
|
||||
f.format(f.unpackValue(v.Field(i)))
|
||||
}
|
||||
}
|
||||
f.depth--
|
||||
f.fs.Write(closeBraceBytes)
|
||||
|
||||
case reflect.Uintptr:
|
||||
printHexPtr(f.fs, uintptr(v.Uint()))
|
||||
|
||||
case reflect.UnsafePointer, reflect.Chan, reflect.Func:
|
||||
printHexPtr(f.fs, v.Pointer())
|
||||
|
||||
// There were not any other types at the time this code was written, but
|
||||
// fall back to letting the default fmt package handle it if any get added.
|
||||
default:
|
||||
format := f.buildDefaultFormat()
|
||||
if v.CanInterface() {
|
||||
fmt.Fprintf(f.fs, format, v.Interface())
|
||||
} else {
|
||||
fmt.Fprintf(f.fs, format, v.String())
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Format satisfies the fmt.Formatter interface. See NewFormatter for usage
|
||||
// details.
|
||||
func (f *formatState) Format(fs fmt.State, verb rune) {
|
||||
f.fs = fs
|
||||
|
||||
// Use standard formatting for verbs that are not v.
|
||||
if verb != 'v' {
|
||||
format := f.constructOrigFormat(verb)
|
||||
fmt.Fprintf(fs, format, f.value)
|
||||
return
|
||||
}
|
||||
|
||||
if f.value == nil {
|
||||
if fs.Flag('#') {
|
||||
fs.Write(interfaceBytes)
|
||||
}
|
||||
fs.Write(nilAngleBytes)
|
||||
return
|
||||
}
|
||||
|
||||
f.format(reflect.ValueOf(f.value))
|
||||
}
|
||||
|
||||
// newFormatter is a helper function to consolidate the logic from the various
|
||||
// public methods which take varying config states.
|
||||
func newFormatter(cs *ConfigState, v interface{}) fmt.Formatter {
|
||||
fs := &formatState{value: v, cs: cs}
|
||||
fs.pointers = make(map[uintptr]int)
|
||||
return fs
|
||||
}
|
||||
|
||||
/*
|
||||
NewFormatter returns a custom formatter that satisfies the fmt.Formatter
|
||||
interface. As a result, it integrates cleanly with standard fmt package
|
||||
printing functions. The formatter is useful for inline printing of smaller data
|
||||
types similar to the standard %v format specifier.
|
||||
|
||||
The custom formatter only responds to the %v (most compact), %+v (adds pointer
|
||||
addresses), %#v (adds types), or %#+v (adds types and pointer addresses) verb
|
||||
combinations. Any other verbs such as %x and %q will be sent to the
|
||||
standard fmt package for formatting. In addition, the custom formatter ignores
|
||||
the width and precision arguments (however they will still work on the format
|
||||
specifiers not handled by the custom formatter).
|
||||
|
||||
Typically this function shouldn't be called directly. It is much easier to make
|
||||
use of the custom formatter by calling one of the convenience functions such as
|
||||
Printf, Println, or Fprintf.
|
||||
*/
|
||||
func NewFormatter(v interface{}) fmt.Formatter {
|
||||
return newFormatter(&Config, v)
|
||||
}
|
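Since NewFormatter returns a plain fmt.Formatter, wrapping arguments by hand gives the same result the convenience wrappers achieve through convertArgs. A short usage sketch (the point type is my own):

package main

import (
	"fmt"

	"github.com/davecgh/go-spew/spew"
)

func main() {
	type point struct{ X, Y int }
	p := &point{X: 1, Y: 2}

	// Wrapping arguments explicitly is equivalent to what Printf, Fprintf,
	// and the other convenience wrappers do internally.
	fmt.Printf("plain: %v\n", p)
	fmt.Printf("spew:  %v\n", spew.NewFormatter(p))
	fmt.Printf("spew:  %#+v\n", spew.NewFormatter(p))
}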
1558
vendor/github.com/davecgh/go-spew/spew/format_test.go
generated
vendored
Normal file
1558
vendor/github.com/davecgh/go-spew/spew/format_test.go
generated
vendored
Normal file
File diff suppressed because it is too large
84
vendor/github.com/davecgh/go-spew/spew/internal_test.go
generated
vendored
Normal file
84
vendor/github.com/davecgh/go-spew/spew/internal_test.go
generated
vendored
Normal file
@ -0,0 +1,84 @@
|
||||
/*
|
||||
* Copyright (c) 2013-2016 Dave Collins <dave@davec.name>
|
||||
*
|
||||
* Permission to use, copy, modify, and distribute this software for any
|
||||
* purpose with or without fee is hereby granted, provided that the above
|
||||
* copyright notice and this permission notice appear in all copies.
|
||||
*
|
||||
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
|
||||
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
|
||||
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
|
||||
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
|
||||
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
|
||||
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
|
||||
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
|
||||
*/
|
||||
|
||||
/*
|
||||
This test file is part of the spew package rather than the spew_test
|
||||
package because it needs access to internals to properly test certain cases
|
||||
which are not possible via the public interface since they should never happen.
|
||||
*/
|
||||
|
||||
package spew
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"reflect"
|
||||
"testing"
|
||||
)
|
||||
|
||||
// dummyFmtState implements a fake fmt.State to use for testing invalid
|
||||
// reflect.Value handling. This is necessary because the fmt package catches
|
||||
// invalid values before invoking the formatter on them.
|
||||
type dummyFmtState struct {
|
||||
bytes.Buffer
|
||||
}
|
||||
|
||||
func (dfs *dummyFmtState) Flag(f int) bool {
|
||||
return f == int('+')
|
||||
}
|
||||
|
||||
func (dfs *dummyFmtState) Precision() (int, bool) {
|
||||
return 0, false
|
||||
}
|
||||
|
||||
func (dfs *dummyFmtState) Width() (int, bool) {
|
||||
return 0, false
|
||||
}
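For context, a standalone sketch (not part of the vendored file; the names are illustrative): the embedded bytes.Buffer supplies the Write method, so the three methods above are all that is needed to satisfy fmt.State, and a fmt.Formatter can then be invoked by hand exactly as the tests below do:

package main

import (
	"bytes"
	"fmt"
)

// fakeState satisfies fmt.State the same way dummyFmtState does:
// bytes.Buffer provides Write, and the remaining methods give fixed answers.
type fakeState struct{ bytes.Buffer }

func (fs *fakeState) Flag(f int) bool        { return f == int('+') }
func (fs *fakeState) Precision() (int, bool) { return 0, false }
func (fs *fakeState) Width() (int, bool)     { return 0, false }

// upper is a toy fmt.Formatter used only to show the call path.
type upper string

func (u upper) Format(f fmt.State, verb rune) {
	fmt.Fprintf(f, "%s(%c)", string(u), verb)
}

func main() {
	st := new(fakeState)
	upper("hi").Format(st, 'v') // drive the formatter directly with the fake state
	fmt.Println(st.String())    // prints "hi(v)"
}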
|
||||
|
||||
// TestInvalidReflectValue ensures the dump and formatter code handles an
|
||||
// invalid reflect value properly. This needs access to internal state since it
|
||||
// should never happen in real code and therefore can't be tested via the public
|
||||
// API.
|
||||
func TestInvalidReflectValue(t *testing.T) {
|
||||
i := 1
|
||||
|
||||
// Dump invalid reflect value.
|
||||
v := new(reflect.Value)
|
||||
buf := new(bytes.Buffer)
|
||||
d := dumpState{w: buf, cs: &Config}
|
||||
d.dump(*v)
|
||||
s := buf.String()
|
||||
want := "<invalid>"
|
||||
if s != want {
|
||||
t.Errorf("InvalidReflectValue #%d\n got: %s want: %s", i, s, want)
|
||||
}
|
||||
i++
|
||||
|
||||
// Formatter invalid reflect value.
|
||||
buf2 := new(dummyFmtState)
|
||||
f := formatState{value: *v, cs: &Config, fs: buf2}
|
||||
f.format(*v)
|
||||
s = buf2.String()
|
||||
want = "<invalid>"
|
||||
if s != want {
|
||||
t.Errorf("InvalidReflectValue #%d got: %s want: %s", i, s, want)
|
||||
}
|
||||
}
|
||||
|
||||
// SortValues makes the internal sortValues function available to the test
|
||||
// package.
|
||||
func SortValues(values []reflect.Value, cs *ConfigState) {
|
||||
sortValues(values, cs)
|
||||
}
|
101
vendor/github.com/davecgh/go-spew/spew/internalunsafe_test.go
generated
vendored
Normal file
@ -0,0 +1,101 @@
|
||||
// Copyright (c) 2013-2016 Dave Collins <dave@davec.name>
|
||||
|
||||
// Permission to use, copy, modify, and distribute this software for any
|
||||
// purpose with or without fee is hereby granted, provided that the above
|
||||
// copyright notice and this permission notice appear in all copies.
|
||||
|
||||
// THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
|
||||
// WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
|
||||
// MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
|
||||
// ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
|
||||
// WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
|
||||
// ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
|
||||
// OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
|
||||
|
||||
// NOTE: Due to the following build constraints, this file will only be compiled
|
||||
// when the code is not running on Google App Engine, compiled by GopherJS, and
|
||||
// "-tags safe" is not added to the go build command line. The "disableunsafe"
|
||||
// tag is deprecated and thus should not be used.
|
||||
// +build !js,!appengine,!safe,!disableunsafe,go1.4
|
||||
|
||||
/*
|
||||
This test file is part of the spew package rather than the spew_test
|
||||
package because it needs access to internals to properly test certain cases
|
||||
which are not possible via the public interface since they should never happen.
|
||||
*/
|
||||
|
||||
package spew
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"reflect"
|
||||
"testing"
|
||||
)
|
||||
|
||||
// changeKind uses unsafe to intentionally change the kind of a reflect.Value to
|
||||
// the maximum kind value which does not exist. This is needed to test the
|
||||
// fallback code which punts to the standard fmt library for new types that
|
||||
// might get added to the language.
|
||||
func changeKind(v *reflect.Value, readOnly bool) {
|
||||
flags := flagField(v)
|
||||
if readOnly {
|
||||
*flags |= flagRO
|
||||
} else {
|
||||
*flags &^= flagRO
|
||||
}
|
||||
*flags |= flagKindMask
|
||||
}
|
||||
|
||||
// TestAddedReflectValue tests functionality of the dump and formatter code which
|
||||
// falls back to the standard fmt library for new types that might get added to
|
||||
// the language.
|
||||
func TestAddedReflectValue(t *testing.T) {
|
||||
i := 1
|
||||
|
||||
// Dump using a reflect.Value that is exported.
|
||||
v := reflect.ValueOf(int8(5))
|
||||
changeKind(&v, false)
|
||||
buf := new(bytes.Buffer)
|
||||
d := dumpState{w: buf, cs: &Config}
|
||||
d.dump(v)
|
||||
s := buf.String()
|
||||
want := "(int8) 5"
|
||||
if s != want {
|
||||
t.Errorf("TestAddedReflectValue #%d\n got: %s want: %s", i, s, want)
|
||||
}
|
||||
i++
|
||||
|
||||
// Dump using a reflect.Value that is not exported.
|
||||
changeKind(&v, true)
|
||||
buf.Reset()
|
||||
d.dump(v)
|
||||
s = buf.String()
|
||||
want = "(int8) <int8 Value>"
|
||||
if s != want {
|
||||
t.Errorf("TestAddedReflectValue #%d\n got: %s want: %s", i, s, want)
|
||||
}
|
||||
i++
|
||||
|
||||
// Formatter using a reflect.Value that is exported.
|
||||
changeKind(&v, false)
|
||||
buf2 := new(dummyFmtState)
|
||||
f := formatState{value: v, cs: &Config, fs: buf2}
|
||||
f.format(v)
|
||||
s = buf2.String()
|
||||
want = "5"
|
||||
if s != want {
|
||||
t.Errorf("TestAddedReflectValue #%d got: %s want: %s", i, s, want)
|
||||
}
|
||||
i++
|
||||
|
||||
// Formatter using a reflect.Value that is not exported.
|
||||
changeKind(&v, true)
|
||||
buf2.Reset()
|
||||
f = formatState{value: v, cs: &Config, fs: buf2}
|
||||
f.format(v)
|
||||
s = buf2.String()
|
||||
want = "<int8 Value>"
|
||||
if s != want {
|
||||
t.Errorf("TestAddedReflectValue #%d got: %s want: %s", i, s, want)
|
||||
}
|
||||
}
|
148
vendor/github.com/davecgh/go-spew/spew/spew.go
generated
vendored
Normal file
@ -0,0 +1,148 @@
|
||||
/*
|
||||
* Copyright (c) 2013-2016 Dave Collins <dave@davec.name>
|
||||
*
|
||||
* Permission to use, copy, modify, and distribute this software for any
|
||||
* purpose with or without fee is hereby granted, provided that the above
|
||||
* copyright notice and this permission notice appear in all copies.
|
||||
*
|
||||
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
|
||||
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
|
||||
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
|
||||
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
|
||||
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
|
||||
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
|
||||
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
|
||||
*/
|
||||
|
||||
package spew
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"io"
|
||||
)
|
||||
|
||||
// Errorf is a wrapper for fmt.Errorf that treats each argument as if it were
|
||||
// passed with a default Formatter interface returned by NewFormatter. It
|
||||
// returns the formatted string as a value that satisfies error. See
|
||||
// NewFormatter for formatting details.
|
||||
//
|
||||
// This function is shorthand for the following syntax:
|
||||
//
|
||||
// fmt.Errorf(format, spew.NewFormatter(a), spew.NewFormatter(b))
|
||||
func Errorf(format string, a ...interface{}) (err error) {
|
||||
return fmt.Errorf(format, convertArgs(a)...)
|
||||
}
|
||||
|
||||
// Fprint is a wrapper for fmt.Fprint that treats each argument as if it were
|
||||
// passed with a default Formatter interface returned by NewFormatter. It
|
||||
// returns the number of bytes written and any write error encountered. See
|
||||
// NewFormatter for formatting details.
|
||||
//
|
||||
// This function is shorthand for the following syntax:
|
||||
//
|
||||
// fmt.Fprint(w, spew.NewFormatter(a), spew.NewFormatter(b))
|
||||
func Fprint(w io.Writer, a ...interface{}) (n int, err error) {
|
||||
return fmt.Fprint(w, convertArgs(a)...)
|
||||
}
|
||||
|
||||
// Fprintf is a wrapper for fmt.Fprintf that treats each argument as if it were
|
||||
// passed with a default Formatter interface returned by NewFormatter. It
|
||||
// returns the number of bytes written and any write error encountered. See
|
||||
// NewFormatter for formatting details.
|
||||
//
|
||||
// This function is shorthand for the following syntax:
|
||||
//
|
||||
// fmt.Fprintf(w, format, spew.NewFormatter(a), spew.NewFormatter(b))
|
||||
func Fprintf(w io.Writer, format string, a ...interface{}) (n int, err error) {
|
||||
return fmt.Fprintf(w, format, convertArgs(a)...)
|
||||
}
|
||||
|
||||
// Fprintln is a wrapper for fmt.Fprintln that treats each argument as if it
|
||||
// were passed with a default Formatter interface returned by NewFormatter. See
|
||||
// NewFormatter for formatting details.
|
||||
//
|
||||
// This function is shorthand for the following syntax:
|
||||
//
|
||||
// fmt.Fprintln(w, spew.NewFormatter(a), spew.NewFormatter(b))
|
||||
func Fprintln(w io.Writer, a ...interface{}) (n int, err error) {
|
||||
return fmt.Fprintln(w, convertArgs(a)...)
|
||||
}
|
||||
|
||||
// Print is a wrapper for fmt.Print that treats each argument as if it were
|
||||
// passed with a default Formatter interface returned by NewFormatter. It
|
||||
// returns the number of bytes written and any write error encountered. See
|
||||
// NewFormatter for formatting details.
|
||||
//
|
||||
// This function is shorthand for the following syntax:
|
||||
//
|
||||
// fmt.Print(spew.NewFormatter(a), spew.NewFormatter(b))
|
||||
func Print(a ...interface{}) (n int, err error) {
|
||||
return fmt.Print(convertArgs(a)...)
|
||||
}
|
||||
|
||||
// Printf is a wrapper for fmt.Printf that treats each argument as if it were
|
||||
// passed with a default Formatter interface returned by NewFormatter. It
|
||||
// returns the number of bytes written and any write error encountered. See
|
||||
// NewFormatter for formatting details.
|
||||
//
|
||||
// This function is shorthand for the following syntax:
|
||||
//
|
||||
// fmt.Printf(format, spew.NewFormatter(a), spew.NewFormatter(b))
|
||||
func Printf(format string, a ...interface{}) (n int, err error) {
|
||||
return fmt.Printf(format, convertArgs(a)...)
|
||||
}
|
||||
|
||||
// Println is a wrapper for fmt.Println that treats each argument as if it were
|
||||
// passed with a default Formatter interface returned by NewFormatter. It
|
||||
// returns the number of bytes written and any write error encountered. See
|
||||
// NewFormatter for formatting details.
|
||||
//
|
||||
// This function is shorthand for the following syntax:
|
||||
//
|
||||
// fmt.Println(spew.NewFormatter(a), spew.NewFormatter(b))
|
||||
func Println(a ...interface{}) (n int, err error) {
|
||||
return fmt.Println(convertArgs(a)...)
|
||||
}
|
||||
|
||||
// Sprint is a wrapper for fmt.Sprint that treats each argument as if it were
|
||||
// passed with a default Formatter interface returned by NewFormatter. It
|
||||
// returns the resulting string. See NewFormatter for formatting details.
|
||||
//
|
||||
// This function is shorthand for the following syntax:
|
||||
//
|
||||
// fmt.Sprint(spew.NewFormatter(a), spew.NewFormatter(b))
|
||||
func Sprint(a ...interface{}) string {
|
||||
return fmt.Sprint(convertArgs(a)...)
|
||||
}
|
||||
|
||||
// Sprintf is a wrapper for fmt.Sprintf that treats each argument as if it were
|
||||
// passed with a default Formatter interface returned by NewFormatter. It
|
||||
// returns the resulting string. See NewFormatter for formatting details.
|
||||
//
|
||||
// This function is shorthand for the following syntax:
|
||||
//
|
||||
// fmt.Sprintf(format, spew.NewFormatter(a), spew.NewFormatter(b))
|
||||
func Sprintf(format string, a ...interface{}) string {
|
||||
return fmt.Sprintf(format, convertArgs(a)...)
|
||||
}
|
||||
|
||||
// Sprintln is a wrapper for fmt.Sprintln that treats each argument as if it
|
||||
// were passed with a default Formatter interface returned by NewFormatter. It
|
||||
// returns the resulting string. See NewFormatter for formatting details.
|
||||
//
|
||||
// This function is shorthand for the following syntax:
|
||||
//
|
||||
// fmt.Sprintln(spew.NewFormatter(a), spew.NewFormatter(b))
|
||||
func Sprintln(a ...interface{}) string {
|
||||
return fmt.Sprintln(convertArgs(a)...)
|
||||
}
|
||||
|
||||
// convertArgs accepts a slice of arguments and returns a slice of the same
|
||||
// length with each argument converted to a default spew Formatter interface.
|
||||
func convertArgs(args []interface{}) (formatters []interface{}) {
|
||||
formatters = make([]interface{}, len(args))
|
||||
for index, arg := range args {
|
||||
formatters[index] = NewFormatter(arg)
|
||||
}
|
||||
return formatters
|
||||
}
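A brief usage sketch of the wrapper functions defined above (the struct and values are illustrative only); each call is equivalent to the corresponding fmt function with every argument wrapped by NewFormatter:

package main

import "github.com/davecgh/go-spew/spew"

type config struct {
	Name  string
	Limit *int
}

func main() {
	limit := 10
	c := config{Name: "example", Limit: &limit}

	// Equivalent to fmt.Printf("%+v\n", spew.NewFormatter(c)).
	spew.Printf("%+v\n", c)

	// Equivalent to fmt.Errorf("bad config: %#v", spew.NewFormatter(c)).
	err := spew.Errorf("bad config: %#v", c)
	spew.Println(err)
}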
|
320
vendor/github.com/davecgh/go-spew/spew/spew_test.go
generated
vendored
Normal file
@ -0,0 +1,320 @@
|
||||
/*
|
||||
* Copyright (c) 2013-2016 Dave Collins <dave@davec.name>
|
||||
*
|
||||
* Permission to use, copy, modify, and distribute this software for any
|
||||
* purpose with or without fee is hereby granted, provided that the above
|
||||
* copyright notice and this permission notice appear in all copies.
|
||||
*
|
||||
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
|
||||
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
|
||||
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
|
||||
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
|
||||
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
|
||||
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
|
||||
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
|
||||
*/
|
||||
|
||||
package spew_test
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"fmt"
|
||||
"io/ioutil"
|
||||
"os"
|
||||
"testing"
|
||||
|
||||
"github.com/davecgh/go-spew/spew"
|
||||
)
|
||||
|
||||
// spewFunc is used to identify which public function of the spew package or
|
||||
// ConfigState a test applies to.
|
||||
type spewFunc int
|
||||
|
||||
const (
|
||||
fCSFdump spewFunc = iota
|
||||
fCSFprint
|
||||
fCSFprintf
|
||||
fCSFprintln
|
||||
fCSPrint
|
||||
fCSPrintln
|
||||
fCSSdump
|
||||
fCSSprint
|
||||
fCSSprintf
|
||||
fCSSprintln
|
||||
fCSErrorf
|
||||
fCSNewFormatter
|
||||
fErrorf
|
||||
fFprint
|
||||
fFprintln
|
||||
fPrint
|
||||
fPrintln
|
||||
fSdump
|
||||
fSprint
|
||||
fSprintf
|
||||
fSprintln
|
||||
)
|
||||
|
||||
// Map of spewFunc values to names for pretty printing.
|
||||
var spewFuncStrings = map[spewFunc]string{
|
||||
fCSFdump: "ConfigState.Fdump",
|
||||
fCSFprint: "ConfigState.Fprint",
|
||||
fCSFprintf: "ConfigState.Fprintf",
|
||||
fCSFprintln: "ConfigState.Fprintln",
|
||||
fCSSdump: "ConfigState.Sdump",
|
||||
fCSPrint: "ConfigState.Print",
|
||||
fCSPrintln: "ConfigState.Println",
|
||||
fCSSprint: "ConfigState.Sprint",
|
||||
fCSSprintf: "ConfigState.Sprintf",
|
||||
fCSSprintln: "ConfigState.Sprintln",
|
||||
fCSErrorf: "ConfigState.Errorf",
|
||||
fCSNewFormatter: "ConfigState.NewFormatter",
|
||||
fErrorf: "spew.Errorf",
|
||||
fFprint: "spew.Fprint",
|
||||
fFprintln: "spew.Fprintln",
|
||||
fPrint: "spew.Print",
|
||||
fPrintln: "spew.Println",
|
||||
fSdump: "spew.Sdump",
|
||||
fSprint: "spew.Sprint",
|
||||
fSprintf: "spew.Sprintf",
|
||||
fSprintln: "spew.Sprintln",
|
||||
}
|
||||
|
||||
func (f spewFunc) String() string {
|
||||
if s, ok := spewFuncStrings[f]; ok {
|
||||
return s
|
||||
}
|
||||
return fmt.Sprintf("Unknown spewFunc (%d)", int(f))
|
||||
}
|
||||
|
||||
// spewTest is used to describe a test to be performed against the public
|
||||
// functions of the spew package or ConfigState.
|
||||
type spewTest struct {
|
||||
cs *spew.ConfigState
|
||||
f spewFunc
|
||||
format string
|
||||
in interface{}
|
||||
want string
|
||||
}
|
||||
|
||||
// spewTests houses the tests to be performed against the public functions of
|
||||
// the spew package and ConfigState.
|
||||
//
|
||||
// These tests are only intended to ensure the public functions are exercised
|
||||
// and are intentionally not exhaustive of types. The exhaustive type
|
||||
// tests are handled in the dump and format tests.
|
||||
var spewTests []spewTest
|
||||
|
||||
// redirStdout is a helper function to return the standard output from f as a
|
||||
// byte slice.
|
||||
func redirStdout(f func()) ([]byte, error) {
|
||||
tempFile, err := ioutil.TempFile("", "ss-test")
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
fileName := tempFile.Name()
|
||||
defer os.Remove(fileName) // Ignore error
|
||||
|
||||
origStdout := os.Stdout
|
||||
os.Stdout = tempFile
|
||||
f()
|
||||
os.Stdout = origStdout
|
||||
tempFile.Close()
|
||||
|
||||
return ioutil.ReadFile(fileName)
|
||||
}
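The same stdout-capture technique can be used outside this test file; here is a minimal standalone sketch (the helper name and output are illustrative, not part of the vendored code):

package main

import (
	"fmt"
	"io/ioutil"
	"os"
)

// captureStdout mirrors redirStdout above: it points os.Stdout at a temp file
// while f runs, then reads the file back.
func captureStdout(f func()) ([]byte, error) {
	tmp, err := ioutil.TempFile("", "capture")
	if err != nil {
		return nil, err
	}
	name := tmp.Name()
	defer os.Remove(name) // ignore error

	orig := os.Stdout
	os.Stdout = tmp
	f()
	os.Stdout = orig
	tmp.Close()

	return ioutil.ReadFile(name)
}

func main() {
	out, err := captureStdout(func() { fmt.Println("hello") })
	if err != nil {
		panic(err)
	}
	fmt.Printf("captured: %q\n", out)
}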
|
||||
|
||||
func initSpewTests() {
|
||||
// Config states with various settings.
|
||||
scsDefault := spew.NewDefaultConfig()
|
||||
scsNoMethods := &spew.ConfigState{Indent: " ", DisableMethods: true}
|
||||
scsNoPmethods := &spew.ConfigState{Indent: " ", DisablePointerMethods: true}
|
||||
scsMaxDepth := &spew.ConfigState{Indent: " ", MaxDepth: 1}
|
||||
scsContinue := &spew.ConfigState{Indent: " ", ContinueOnMethod: true}
|
||||
scsNoPtrAddr := &spew.ConfigState{DisablePointerAddresses: true}
|
||||
scsNoCap := &spew.ConfigState{DisableCapacities: true}
|
||||
|
||||
// Variables for tests on types which implement Stringer interface with and
|
||||
// without a pointer receiver.
|
||||
ts := stringer("test")
|
||||
tps := pstringer("test")
|
||||
|
||||
type ptrTester struct {
|
||||
s *struct{}
|
||||
}
|
||||
tptr := &ptrTester{s: &struct{}{}}
|
||||
|
||||
// depthTester is used to test max depth handling for structs, array, slices
|
||||
// and maps.
|
||||
type depthTester struct {
|
||||
ic indirCir1
|
||||
arr [1]string
|
||||
slice []string
|
||||
m map[string]int
|
||||
}
|
||||
dt := depthTester{indirCir1{nil}, [1]string{"arr"}, []string{"slice"},
|
||||
map[string]int{"one": 1}}
|
||||
|
||||
// Variable for tests on types which implement error interface.
|
||||
te := customError(10)
|
||||
|
||||
spewTests = []spewTest{
|
||||
{scsDefault, fCSFdump, "", int8(127), "(int8) 127\n"},
|
||||
{scsDefault, fCSFprint, "", int16(32767), "32767"},
|
||||
{scsDefault, fCSFprintf, "%v", int32(2147483647), "2147483647"},
|
||||
{scsDefault, fCSFprintln, "", int(2147483647), "2147483647\n"},
|
||||
{scsDefault, fCSPrint, "", int64(9223372036854775807), "9223372036854775807"},
|
||||
{scsDefault, fCSPrintln, "", uint8(255), "255\n"},
|
||||
{scsDefault, fCSSdump, "", uint8(64), "(uint8) 64\n"},
|
||||
{scsDefault, fCSSprint, "", complex(1, 2), "(1+2i)"},
|
||||
{scsDefault, fCSSprintf, "%v", complex(float32(3), 4), "(3+4i)"},
|
||||
{scsDefault, fCSSprintln, "", complex(float64(5), 6), "(5+6i)\n"},
|
||||
{scsDefault, fCSErrorf, "%#v", uint16(65535), "(uint16)65535"},
|
||||
{scsDefault, fCSNewFormatter, "%v", uint32(4294967295), "4294967295"},
|
||||
{scsDefault, fErrorf, "%v", uint64(18446744073709551615), "18446744073709551615"},
|
||||
{scsDefault, fFprint, "", float32(3.14), "3.14"},
|
||||
{scsDefault, fFprintln, "", float64(6.28), "6.28\n"},
|
||||
{scsDefault, fPrint, "", true, "true"},
|
||||
{scsDefault, fPrintln, "", false, "false\n"},
|
||||
{scsDefault, fSdump, "", complex(-10, -20), "(complex128) (-10-20i)\n"},
|
||||
{scsDefault, fSprint, "", complex(-1, -2), "(-1-2i)"},
|
||||
{scsDefault, fSprintf, "%v", complex(float32(-3), -4), "(-3-4i)"},
|
||||
{scsDefault, fSprintln, "", complex(float64(-5), -6), "(-5-6i)\n"},
|
||||
{scsNoMethods, fCSFprint, "", ts, "test"},
|
||||
{scsNoMethods, fCSFprint, "", &ts, "<*>test"},
|
||||
{scsNoMethods, fCSFprint, "", tps, "test"},
|
||||
{scsNoMethods, fCSFprint, "", &tps, "<*>test"},
|
||||
{scsNoPmethods, fCSFprint, "", ts, "stringer test"},
|
||||
{scsNoPmethods, fCSFprint, "", &ts, "<*>stringer test"},
|
||||
{scsNoPmethods, fCSFprint, "", tps, "test"},
|
||||
{scsNoPmethods, fCSFprint, "", &tps, "<*>stringer test"},
|
||||
{scsMaxDepth, fCSFprint, "", dt, "{{<max>} [<max>] [<max>] map[<max>]}"},
|
||||
{scsMaxDepth, fCSFdump, "", dt, "(spew_test.depthTester) {\n" +
|
||||
" ic: (spew_test.indirCir1) {\n <max depth reached>\n },\n" +
|
||||
" arr: ([1]string) (len=1 cap=1) {\n <max depth reached>\n },\n" +
|
||||
" slice: ([]string) (len=1 cap=1) {\n <max depth reached>\n },\n" +
|
||||
" m: (map[string]int) (len=1) {\n <max depth reached>\n }\n}\n"},
|
||||
{scsContinue, fCSFprint, "", ts, "(stringer test) test"},
|
||||
{scsContinue, fCSFdump, "", ts, "(spew_test.stringer) " +
|
||||
"(len=4) (stringer test) \"test\"\n"},
|
||||
{scsContinue, fCSFprint, "", te, "(error: 10) 10"},
|
||||
{scsContinue, fCSFdump, "", te, "(spew_test.customError) " +
|
||||
"(error: 10) 10\n"},
|
||||
{scsNoPtrAddr, fCSFprint, "", tptr, "<*>{<*>{}}"},
|
||||
{scsNoPtrAddr, fCSSdump, "", tptr, "(*spew_test.ptrTester)({\ns: (*struct {})({\n})\n})\n"},
|
||||
{scsNoCap, fCSSdump, "", make([]string, 0, 10), "([]string) {\n}\n"},
|
||||
{scsNoCap, fCSSdump, "", make([]string, 1, 10), "([]string) (len=1) {\n(string) \"\"\n}\n"},
|
||||
}
|
||||
}
|
||||
|
||||
// TestSpew executes all of the tests described by spewTests.
|
||||
func TestSpew(t *testing.T) {
|
||||
initSpewTests()
|
||||
|
||||
t.Logf("Running %d tests", len(spewTests))
|
||||
for i, test := range spewTests {
|
||||
buf := new(bytes.Buffer)
|
||||
switch test.f {
|
||||
case fCSFdump:
|
||||
test.cs.Fdump(buf, test.in)
|
||||
|
||||
case fCSFprint:
|
||||
test.cs.Fprint(buf, test.in)
|
||||
|
||||
case fCSFprintf:
|
||||
test.cs.Fprintf(buf, test.format, test.in)
|
||||
|
||||
case fCSFprintln:
|
||||
test.cs.Fprintln(buf, test.in)
|
||||
|
||||
case fCSPrint:
|
||||
b, err := redirStdout(func() { test.cs.Print(test.in) })
|
||||
if err != nil {
|
||||
t.Errorf("%v #%d %v", test.f, i, err)
|
||||
continue
|
||||
}
|
||||
buf.Write(b)
|
||||
|
||||
case fCSPrintln:
|
||||
b, err := redirStdout(func() { test.cs.Println(test.in) })
|
||||
if err != nil {
|
||||
t.Errorf("%v #%d %v", test.f, i, err)
|
||||
continue
|
||||
}
|
||||
buf.Write(b)
|
||||
|
||||
case fCSSdump:
|
||||
str := test.cs.Sdump(test.in)
|
||||
buf.WriteString(str)
|
||||
|
||||
case fCSSprint:
|
||||
str := test.cs.Sprint(test.in)
|
||||
buf.WriteString(str)
|
||||
|
||||
case fCSSprintf:
|
||||
str := test.cs.Sprintf(test.format, test.in)
|
||||
buf.WriteString(str)
|
||||
|
||||
case fCSSprintln:
|
||||
str := test.cs.Sprintln(test.in)
|
||||
buf.WriteString(str)
|
||||
|
||||
case fCSErrorf:
|
||||
err := test.cs.Errorf(test.format, test.in)
|
||||
buf.WriteString(err.Error())
|
||||
|
||||
case fCSNewFormatter:
|
||||
fmt.Fprintf(buf, test.format, test.cs.NewFormatter(test.in))
|
||||
|
||||
case fErrorf:
|
||||
err := spew.Errorf(test.format, test.in)
|
||||
buf.WriteString(err.Error())
|
||||
|
||||
case fFprint:
|
||||
spew.Fprint(buf, test.in)
|
||||
|
||||
case fFprintln:
|
||||
spew.Fprintln(buf, test.in)
|
||||
|
||||
case fPrint:
|
||||
b, err := redirStdout(func() { spew.Print(test.in) })
|
||||
if err != nil {
|
||||
t.Errorf("%v #%d %v", test.f, i, err)
|
||||
continue
|
||||
}
|
||||
buf.Write(b)
|
||||
|
||||
case fPrintln:
|
||||
b, err := redirStdout(func() { spew.Println(test.in) })
|
||||
if err != nil {
|
||||
t.Errorf("%v #%d %v", test.f, i, err)
|
||||
continue
|
||||
}
|
||||
buf.Write(b)
|
||||
|
||||
case fSdump:
|
||||
str := spew.Sdump(test.in)
|
||||
buf.WriteString(str)
|
||||
|
||||
case fSprint:
|
||||
str := spew.Sprint(test.in)
|
||||
buf.WriteString(str)
|
||||
|
||||
case fSprintf:
|
||||
str := spew.Sprintf(test.format, test.in)
|
||||
buf.WriteString(str)
|
||||
|
||||
case fSprintln:
|
||||
str := spew.Sprintln(test.in)
|
||||
buf.WriteString(str)
|
||||
|
||||
default:
|
||||
t.Errorf("%v #%d unrecognized function", test.f, i)
|
||||
continue
|
||||
}
|
||||
s := buf.String()
|
||||
if test.want != s {
|
||||
t.Errorf("ConfigState #%d\n got: %s want: %s", i, s, test.want)
|
||||
continue
|
||||
}
|
||||
}
|
||||
}
|
82
vendor/github.com/davecgh/go-spew/spew/testdata/dumpcgo.go
generated
vendored
Normal file
@ -0,0 +1,82 @@
|
||||
// Copyright (c) 2013 Dave Collins <dave@davec.name>
|
||||
//
|
||||
// Permission to use, copy, modify, and distribute this software for any
|
||||
// purpose with or without fee is hereby granted, provided that the above
|
||||
// copyright notice and this permission notice appear in all copies.
|
||||
//
|
||||
// THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
|
||||
// WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
|
||||
// MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
|
||||
// ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
|
||||
// WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
|
||||
// ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
|
||||
// OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
|
||||
|
||||
// NOTE: Due to the following build constraints, this file will only be compiled
|
||||
// when both cgo is supported and "-tags testcgo" is added to the go test
|
||||
// command line. This code should really only be in the dumpcgo_test.go file,
|
||||
// but unfortunately Go will not allow cgo in test files, so this is a
|
||||
// workaround to allow cgo types to be tested. This configuration is used
|
||||
// because spew itself does not require cgo to run even though it does handle
|
||||
// certain cgo types specially. Rather than forcing all clients to require cgo
|
||||
// and an external C compiler just to run the tests, this scheme makes them
|
||||
// optional.
|
||||
// +build cgo,testcgo
|
||||
|
||||
package testdata
|
||||
|
||||
/*
|
||||
#include <stdint.h>
|
||||
typedef unsigned char custom_uchar_t;
|
||||
|
||||
char *ncp = 0;
|
||||
char *cp = "test";
|
||||
char ca[6] = {'t', 'e', 's', 't', '2', '\0'};
|
||||
unsigned char uca[6] = {'t', 'e', 's', 't', '3', '\0'};
|
||||
signed char sca[6] = {'t', 'e', 's', 't', '4', '\0'};
|
||||
uint8_t ui8ta[6] = {'t', 'e', 's', 't', '5', '\0'};
|
||||
custom_uchar_t tuca[6] = {'t', 'e', 's', 't', '6', '\0'};
|
||||
*/
|
||||
import "C"
|
||||
|
||||
// GetCgoNullCharPointer returns a null char pointer via cgo. This is only
|
||||
// used for tests.
|
||||
func GetCgoNullCharPointer() interface{} {
|
||||
return C.ncp
|
||||
}
|
||||
|
||||
// GetCgoCharPointer returns a char pointer via cgo. This is only used for
|
||||
// tests.
|
||||
func GetCgoCharPointer() interface{} {
|
||||
return C.cp
|
||||
}
|
||||
|
||||
// GetCgoCharArray returns a char array via cgo and the array's len and cap.
|
||||
// This is only used for tests.
|
||||
func GetCgoCharArray() (interface{}, int, int) {
|
||||
return C.ca, len(C.ca), cap(C.ca)
|
||||
}
|
||||
|
||||
// GetCgoUnsignedCharArray returns an unsigned char array via cgo and the
|
||||
// array's len and cap. This is only used for tests.
|
||||
func GetCgoUnsignedCharArray() (interface{}, int, int) {
|
||||
return C.uca, len(C.uca), cap(C.uca)
|
||||
}
|
||||
|
||||
// GetCgoSignedCharArray returns a signed char array via cgo and the array's len
|
||||
// and cap. This is only used for tests.
|
||||
func GetCgoSignedCharArray() (interface{}, int, int) {
|
||||
return C.sca, len(C.sca), cap(C.sca)
|
||||
}
|
||||
|
||||
// GetCgoUint8tArray returns a uint8_t array via cgo and the array's len and
|
||||
// cap. This is only used for tests.
|
||||
func GetCgoUint8tArray() (interface{}, int, int) {
|
||||
return C.ui8ta, len(C.ui8ta), cap(C.ui8ta)
|
||||
}
|
||||
|
||||
// GetCgoTypdefedUnsignedCharArray returns a typedefed unsigned char array via
|
||||
// cgo and the array's len and cap. This is only used for tests.
|
||||
func GetCgoTypdefedUnsignedCharArray() (interface{}, int, int) {
|
||||
return C.tuca, len(C.tuca), cap(C.tuca)
|
||||
}
|
61
vendor/github.com/davecgh/go-spew/test_coverage.txt
generated
vendored
Normal file
@ -0,0 +1,61 @@
|
||||
|
||||
github.com/davecgh/go-spew/spew/dump.go dumpState.dump 100.00% (88/88)
|
||||
github.com/davecgh/go-spew/spew/format.go formatState.format 100.00% (82/82)
|
||||
github.com/davecgh/go-spew/spew/format.go formatState.formatPtr 100.00% (52/52)
|
||||
github.com/davecgh/go-spew/spew/dump.go dumpState.dumpPtr 100.00% (44/44)
|
||||
github.com/davecgh/go-spew/spew/dump.go dumpState.dumpSlice 100.00% (39/39)
|
||||
github.com/davecgh/go-spew/spew/common.go handleMethods 100.00% (30/30)
|
||||
github.com/davecgh/go-spew/spew/common.go printHexPtr 100.00% (18/18)
|
||||
github.com/davecgh/go-spew/spew/common.go unsafeReflectValue 100.00% (13/13)
|
||||
github.com/davecgh/go-spew/spew/format.go formatState.constructOrigFormat 100.00% (12/12)
|
||||
github.com/davecgh/go-spew/spew/dump.go fdump 100.00% (11/11)
|
||||
github.com/davecgh/go-spew/spew/format.go formatState.Format 100.00% (11/11)
|
||||
github.com/davecgh/go-spew/spew/common.go init 100.00% (10/10)
|
||||
github.com/davecgh/go-spew/spew/common.go printComplex 100.00% (9/9)
|
||||
github.com/davecgh/go-spew/spew/common.go valuesSorter.Less 100.00% (8/8)
|
||||
github.com/davecgh/go-spew/spew/format.go formatState.buildDefaultFormat 100.00% (7/7)
|
||||
github.com/davecgh/go-spew/spew/format.go formatState.unpackValue 100.00% (5/5)
|
||||
github.com/davecgh/go-spew/spew/dump.go dumpState.indent 100.00% (4/4)
|
||||
github.com/davecgh/go-spew/spew/common.go catchPanic 100.00% (4/4)
|
||||
github.com/davecgh/go-spew/spew/config.go ConfigState.convertArgs 100.00% (4/4)
|
||||
github.com/davecgh/go-spew/spew/spew.go convertArgs 100.00% (4/4)
|
||||
github.com/davecgh/go-spew/spew/format.go newFormatter 100.00% (3/3)
|
||||
github.com/davecgh/go-spew/spew/dump.go Sdump 100.00% (3/3)
|
||||
github.com/davecgh/go-spew/spew/common.go printBool 100.00% (3/3)
|
||||
github.com/davecgh/go-spew/spew/common.go sortValues 100.00% (3/3)
|
||||
github.com/davecgh/go-spew/spew/config.go ConfigState.Sdump 100.00% (3/3)
|
||||
github.com/davecgh/go-spew/spew/dump.go dumpState.unpackValue 100.00% (3/3)
|
||||
github.com/davecgh/go-spew/spew/spew.go Printf 100.00% (1/1)
|
||||
github.com/davecgh/go-spew/spew/spew.go Println 100.00% (1/1)
|
||||
github.com/davecgh/go-spew/spew/spew.go Sprint 100.00% (1/1)
|
||||
github.com/davecgh/go-spew/spew/spew.go Sprintf 100.00% (1/1)
|
||||
github.com/davecgh/go-spew/spew/spew.go Sprintln 100.00% (1/1)
|
||||
github.com/davecgh/go-spew/spew/common.go printFloat 100.00% (1/1)
|
||||
github.com/davecgh/go-spew/spew/config.go NewDefaultConfig 100.00% (1/1)
|
||||
github.com/davecgh/go-spew/spew/common.go printInt 100.00% (1/1)
|
||||
github.com/davecgh/go-spew/spew/common.go printUint 100.00% (1/1)
|
||||
github.com/davecgh/go-spew/spew/common.go valuesSorter.Len 100.00% (1/1)
|
||||
github.com/davecgh/go-spew/spew/common.go valuesSorter.Swap 100.00% (1/1)
|
||||
github.com/davecgh/go-spew/spew/config.go ConfigState.Errorf 100.00% (1/1)
|
||||
github.com/davecgh/go-spew/spew/config.go ConfigState.Fprint 100.00% (1/1)
|
||||
github.com/davecgh/go-spew/spew/config.go ConfigState.Fprintf 100.00% (1/1)
|
||||
github.com/davecgh/go-spew/spew/config.go ConfigState.Fprintln 100.00% (1/1)
|
||||
github.com/davecgh/go-spew/spew/config.go ConfigState.Print 100.00% (1/1)
|
||||
github.com/davecgh/go-spew/spew/config.go ConfigState.Printf 100.00% (1/1)
|
||||
github.com/davecgh/go-spew/spew/config.go ConfigState.Println 100.00% (1/1)
|
||||
github.com/davecgh/go-spew/spew/config.go ConfigState.Sprint 100.00% (1/1)
|
||||
github.com/davecgh/go-spew/spew/config.go ConfigState.Sprintf 100.00% (1/1)
|
||||
github.com/davecgh/go-spew/spew/config.go ConfigState.Sprintln 100.00% (1/1)
|
||||
github.com/davecgh/go-spew/spew/config.go ConfigState.NewFormatter 100.00% (1/1)
|
||||
github.com/davecgh/go-spew/spew/config.go ConfigState.Fdump 100.00% (1/1)
|
||||
github.com/davecgh/go-spew/spew/config.go ConfigState.Dump 100.00% (1/1)
|
||||
github.com/davecgh/go-spew/spew/dump.go Fdump 100.00% (1/1)
|
||||
github.com/davecgh/go-spew/spew/dump.go Dump 100.00% (1/1)
|
||||
github.com/davecgh/go-spew/spew/spew.go Fprintln 100.00% (1/1)
|
||||
github.com/davecgh/go-spew/spew/format.go NewFormatter 100.00% (1/1)
|
||||
github.com/davecgh/go-spew/spew/spew.go Errorf 100.00% (1/1)
|
||||
github.com/davecgh/go-spew/spew/spew.go Fprint 100.00% (1/1)
|
||||
github.com/davecgh/go-spew/spew/spew.go Fprintf 100.00% (1/1)
|
||||
github.com/davecgh/go-spew/spew/spew.go Print 100.00% (1/1)
|
||||
github.com/davecgh/go-spew/spew ------------------------------- 100.00% (505/505)
|
||||
|
8
vendor/github.com/edsrzf/mmap-go/.gitignore
generated
vendored
Normal file
@ -0,0 +1,8 @@
|
||||
*.out
|
||||
*.5
|
||||
*.6
|
||||
*.8
|
||||
*.swp
|
||||
_obj
|
||||
_test
|
||||
testdata
|
25
vendor/github.com/edsrzf/mmap-go/LICENSE
generated
vendored
Normal file
@ -0,0 +1,25 @@
|
||||
Copyright (c) 2011, Evan Shaw <edsrzf@gmail.com>
|
||||
All rights reserved.
|
||||
|
||||
Redistribution and use in source and binary forms, with or without
|
||||
modification, are permitted provided that the following conditions are met:
|
||||
* Redistributions of source code must retain the above copyright
|
||||
notice, this list of conditions and the following disclaimer.
|
||||
* Redistributions in binary form must reproduce the above copyright
|
||||
notice, this list of conditions and the following disclaimer in the
|
||||
documentation and/or other materials provided with the distribution.
|
||||
* Neither the name of the copyright holder nor the
|
||||
names of its contributors may be used to endorse or promote products
|
||||
derived from this software without specific prior written permission.
|
||||
|
||||
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
|
||||
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
|
||||
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
|
||||
DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY
|
||||
DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
|
||||
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
|
||||
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
|
||||
ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
|
||||
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
|
||||
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
||||
|
12
vendor/github.com/edsrzf/mmap-go/README.md
generated
vendored
Normal file
@ -0,0 +1,12 @@
|
||||
mmap-go
|
||||
=======
|
||||
|
||||
mmap-go is a portable mmap package for the [Go programming language](http://golang.org).
|
||||
It has been tested on Linux (386, amd64), OS X, and Windows (386). It should also
|
||||
work on other Unix-like platforms, but hasn't been tested with them. I'm interested
|
||||
to hear about the results.
|
||||
|
||||
I haven't been able to add more features without adding significant complexity,
|
||||
so mmap-go doesn't support mprotect, mincore, and maybe a few other things.
|
||||
If you're running on a Unix-like platform and need some of these features,
|
||||
I suggest Gustavo Niemeyer's [gommap](http://labix.org/gommap).
|
117
vendor/github.com/edsrzf/mmap-go/mmap.go
generated
vendored
Normal file
@ -0,0 +1,117 @@
|
||||
// Copyright 2011 Evan Shaw. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
// This file defines the common package interface and contains a little bit of
|
||||
// factored out logic.
|
||||
|
||||
// Package mmap allows mapping files into memory. It tries to provide a simple, reasonably portable interface,
|
||||
// but doesn't go out of its way to abstract away every little platform detail.
|
||||
// This specifically means:
|
||||
// * forked processes may or may not inherit mappings
|
||||
// * a file's timestamp may or may not be updated by writes through mappings
|
||||
// * specifying a size larger than the file's actual size can increase the file's size
|
||||
// * If the mapped file is being modified by another process while your program's running, don't expect consistent results between platforms
|
||||
package mmap
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"os"
|
||||
"reflect"
|
||||
"unsafe"
|
||||
)
|
||||
|
||||
const (
|
||||
// RDONLY maps the memory read-only.
|
||||
// Attempts to write to the MMap object will result in undefined behavior.
|
||||
RDONLY = 0
|
||||
// RDWR maps the memory as read-write. Writes to the MMap object will update the
|
||||
// underlying file.
|
||||
RDWR = 1 << iota
|
||||
// COPY maps the memory as copy-on-write. Writes to the MMap object will affect
|
||||
// memory, but the underlying file will remain unchanged.
|
||||
COPY
|
||||
// If EXEC is set, the mapped memory is marked as executable.
|
||||
EXEC
|
||||
)
|
||||
|
||||
const (
|
||||
// If the ANON flag is set, the mapped memory will not be backed by a file.
|
||||
ANON = 1 << iota
|
||||
)
|
||||
|
||||
// MMap represents a file mapped into memory.
|
||||
type MMap []byte
|
||||
|
||||
// Map maps an entire file into memory.
|
||||
// If ANON is set in flags, f is ignored.
|
||||
func Map(f *os.File, prot, flags int) (MMap, error) {
|
||||
return MapRegion(f, -1, prot, flags, 0)
|
||||
}
|
||||
|
||||
// MapRegion maps part of a file into memory.
|
||||
// The offset parameter must be a multiple of the system's page size.
|
||||
// If length < 0, the entire file will be mapped.
|
||||
// If ANON is set in flags, f is ignored.
|
||||
func MapRegion(f *os.File, length int, prot, flags int, offset int64) (MMap, error) {
|
||||
if offset%int64(os.Getpagesize()) != 0 {
|
||||
return nil, errors.New("offset parameter must be a multiple of the system's page size")
|
||||
}
|
||||
|
||||
var fd uintptr
|
||||
if flags&ANON == 0 {
|
||||
fd = uintptr(f.Fd())
|
||||
if length < 0 {
|
||||
fi, err := f.Stat()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
length = int(fi.Size())
|
||||
}
|
||||
} else {
|
||||
if length <= 0 {
|
||||
return nil, errors.New("anonymous mapping requires non-zero length")
|
||||
}
|
||||
fd = ^uintptr(0)
|
||||
}
|
||||
return mmap(length, uintptr(prot), uintptr(flags), fd, offset)
|
||||
}
|
||||
|
||||
func (m *MMap) header() *reflect.SliceHeader {
|
||||
return (*reflect.SliceHeader)(unsafe.Pointer(m))
|
||||
}
|
||||
|
||||
func (m *MMap) addrLen() (uintptr, uintptr) {
|
||||
header := m.header()
|
||||
return header.Data, uintptr(header.Len)
|
||||
}
|
||||
|
||||
// Lock keeps the mapped region in physical memory, ensuring that it will not be
|
||||
// swapped out.
|
||||
func (m MMap) Lock() error {
|
||||
return m.lock()
|
||||
}
|
||||
|
||||
// Unlock reverses the effect of Lock, allowing the mapped region to potentially
|
||||
// be swapped out.
|
||||
// If m is already unlocked, an error will result.
|
||||
func (m MMap) Unlock() error {
|
||||
return m.unlock()
|
||||
}
|
||||
|
||||
// Flush synchronizes the mapping's contents to the file's contents on disk.
|
||||
func (m MMap) Flush() error {
|
||||
return m.flush()
|
||||
}
|
||||
|
||||
// Unmap deletes the memory mapped region, flushes any remaining changes, and sets
|
||||
// m to nil.
|
||||
// Trying to read or write any remaining references to m after Unmap is called will
|
||||
// result in undefined behavior.
|
||||
// Unmap should only be called on the slice value that was originally returned from
|
||||
// a call to Map. Calling Unmap on a derived slice may cause errors.
|
||||
func (m *MMap) Unmap() error {
|
||||
err := m.unmap()
|
||||
*m = nil
|
||||
return err
|
||||
}
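A minimal usage sketch of the public API above, assuming an existing writable file named example.dat (the file name and page-size length are illustrative only):

package main

import (
	"fmt"
	"os"

	mmap "github.com/edsrzf/mmap-go"
)

func main() {
	// File-backed read-write mapping; example.dat is an assumed, pre-existing file.
	f, err := os.OpenFile("example.dat", os.O_RDWR, 0644)
	if err != nil {
		panic(err)
	}
	defer f.Close()

	m, err := mmap.Map(f, mmap.RDWR, 0)
	if err != nil {
		panic(err)
	}
	m[0] = '!' // writes go straight to the mapped file
	m.Flush()  // sync the change to disk
	fmt.Printf("%q\n", m[:1])
	m.Unmap()

	// Anonymous mapping: with ANON set, the file argument is ignored (nil here).
	anon, err := mmap.MapRegion(nil, os.Getpagesize(), mmap.RDWR, mmap.ANON, 0)
	if err != nil {
		panic(err)
	}
	defer anon.Unmap()
	copy(anon, []byte("scratch"))
	fmt.Printf("%q\n", anon[:7])
}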
|
152
vendor/github.com/edsrzf/mmap-go/mmap_test.go
generated
vendored
Normal file
@ -0,0 +1,152 @@
|
||||
// Copyright 2011 Evan Shaw. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
// These tests are adapted from gommap: http://labix.org/gommap
|
||||
// Copyright (c) 2010, Gustavo Niemeyer <gustavo@niemeyer.net>
|
||||
|
||||
package mmap
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"io/ioutil"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"testing"
|
||||
)
|
||||
|
||||
var testData = []byte("0123456789ABCDEF")
|
||||
var testPath = filepath.Join(os.TempDir(), "testdata")
|
||||
|
||||
func init() {
|
||||
f := openFile(os.O_RDWR | os.O_CREATE | os.O_TRUNC)
|
||||
f.Write(testData)
|
||||
f.Close()
|
||||
}
|
||||
|
||||
func openFile(flags int) *os.File {
|
||||
f, err := os.OpenFile(testPath, flags, 0644)
|
||||
if err != nil {
|
||||
panic(err.Error())
|
||||
}
|
||||
return f
|
||||
}
|
||||
|
||||
func TestUnmap(t *testing.T) {
|
||||
f := openFile(os.O_RDONLY)
|
||||
defer f.Close()
|
||||
mmap, err := Map(f, RDONLY, 0)
|
||||
if err != nil {
|
||||
t.Errorf("error mapping: %s", err)
|
||||
}
|
||||
if err := mmap.Unmap(); err != nil {
|
||||
t.Errorf("error unmapping: %s", err)
|
||||
}
|
||||
}
|
||||
|
||||
func TestReadWrite(t *testing.T) {
|
||||
f := openFile(os.O_RDWR)
|
||||
defer f.Close()
|
||||
mmap, err := Map(f, RDWR, 0)
|
||||
if err != nil {
|
||||
t.Errorf("error mapping: %s", err)
|
||||
}
|
||||
defer mmap.Unmap()
|
||||
if !bytes.Equal(testData, mmap) {
|
||||
t.Errorf("mmap != testData: %q, %q", mmap, testData)
|
||||
}
|
||||
|
||||
mmap[9] = 'X'
|
||||
mmap.Flush()
|
||||
|
||||
fileData, err := ioutil.ReadAll(f)
|
||||
if err != nil {
|
||||
t.Errorf("error reading file: %s", err)
|
||||
}
|
||||
if !bytes.Equal(fileData, []byte("012345678XABCDEF")) {
|
||||
t.Errorf("file wasn't modified")
|
||||
}
|
||||
|
||||
// leave things how we found them
|
||||
mmap[9] = '9'
|
||||
mmap.Flush()
|
||||
}
|
||||
|
||||
func TestProtFlagsAndErr(t *testing.T) {
|
||||
f := openFile(os.O_RDONLY)
|
||||
defer f.Close()
|
||||
if _, err := Map(f, RDWR, 0); err == nil {
|
||||
t.Errorf("expected error")
|
||||
}
|
||||
}
|
||||
|
||||
func TestFlags(t *testing.T) {
|
||||
f := openFile(os.O_RDWR)
|
||||
defer f.Close()
|
||||
mmap, err := Map(f, COPY, 0)
|
||||
if err != nil {
|
||||
t.Errorf("error mapping: %s", err)
|
||||
}
|
||||
defer mmap.Unmap()
|
||||
|
||||
mmap[9] = 'X'
|
||||
mmap.Flush()
|
||||
|
||||
fileData, err := ioutil.ReadAll(f)
|
||||
if err != nil {
|
||||
t.Errorf("error reading file: %s", err)
|
||||
}
|
||||
if !bytes.Equal(fileData, testData) {
|
||||
t.Errorf("file was modified")
|
||||
}
|
||||
}
|
||||
|
||||
// Test that we can map files from non-0 offsets
|
||||
// The page size on most Unixes is 4KB, but Windows' mapping allocation granularity is 64KB
|
||||
func TestNonZeroOffset(t *testing.T) {
|
||||
const pageSize = 65536
|
||||
|
||||
// Create a 2-page sized file
|
||||
bigFilePath := filepath.Join(os.TempDir(), "nonzero")
|
||||
fileobj, err := os.OpenFile(bigFilePath, os.O_RDWR|os.O_CREATE|os.O_TRUNC, 0644)
|
||||
if err != nil {
|
||||
panic(err.Error())
|
||||
}
|
||||
|
||||
bigData := make([]byte, 2*pageSize, 2*pageSize)
|
||||
fileobj.Write(bigData)
|
||||
fileobj.Close()
|
||||
|
||||
// Map the first page by itself
|
||||
fileobj, err = os.OpenFile(bigFilePath, os.O_RDONLY, 0)
|
||||
if err != nil {
|
||||
panic(err.Error())
|
||||
}
|
||||
m, err := MapRegion(fileobj, pageSize, RDONLY, 0, 0)
|
||||
if err != nil {
|
||||
t.Errorf("error mapping file: %s", err)
|
||||
}
|
||||
m.Unmap()
|
||||
fileobj.Close()
|
||||
|
||||
// Map the second page by itself
|
||||
fileobj, err = os.OpenFile(bigFilePath, os.O_RDONLY, 0)
|
||||
if err != nil {
|
||||
panic(err.Error())
|
||||
}
|
||||
m, err = MapRegion(fileobj, pageSize, RDONLY, 0, pageSize)
|
||||
if err != nil {
|
||||
t.Errorf("error mapping file: %s", err)
|
||||
}
|
||||
err = m.Unmap()
|
||||
if err != nil {
|
||||
t.Error(err)
|
||||
}
|
||||
|
||||
m, err = MapRegion(fileobj, pageSize, RDONLY, 0, 1)
|
||||
if err == nil {
|
||||
t.Error("expect error because offset is not multiple of page size")
|
||||
}
|
||||
|
||||
fileobj.Close()
|
||||
}
|
51
vendor/github.com/edsrzf/mmap-go/mmap_unix.go
generated
vendored
Normal file
@ -0,0 +1,51 @@
|
||||
// Copyright 2011 Evan Shaw. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
// +build darwin dragonfly freebsd linux openbsd solaris netbsd
|
||||
|
||||
package mmap
|
||||
|
||||
import (
|
||||
"golang.org/x/sys/unix"
|
||||
)
|
||||
|
||||
func mmap(len int, inprot, inflags, fd uintptr, off int64) ([]byte, error) {
|
||||
flags := unix.MAP_SHARED
|
||||
prot := unix.PROT_READ
|
||||
switch {
|
||||
case inprot&COPY != 0:
|
||||
prot |= unix.PROT_WRITE
|
||||
flags = unix.MAP_PRIVATE
|
||||
case inprot&RDWR != 0:
|
||||
prot |= unix.PROT_WRITE
|
||||
}
|
||||
if inprot&EXEC != 0 {
|
||||
prot |= unix.PROT_EXEC
|
||||
}
|
||||
if inflags&ANON != 0 {
|
||||
flags |= unix.MAP_ANON
|
||||
}
|
||||
|
||||
b, err := unix.Mmap(int(fd), off, len, prot, flags)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return b, nil
|
||||
}
|
||||
|
||||
func (m MMap) flush() error {
|
||||
return unix.Msync([]byte(m), unix.MS_SYNC)
|
||||
}
|
||||
|
||||
func (m MMap) lock() error {
|
||||
return unix.Mlock([]byte(m))
|
||||
}
|
||||
|
||||
func (m MMap) unlock() error {
|
||||
return unix.Munlock([]byte(m))
|
||||
}
|
||||
|
||||
func (m MMap) unmap() error {
|
||||
return unix.Munmap([]byte(m))
|
||||
}
|
143
vendor/github.com/edsrzf/mmap-go/mmap_windows.go
generated
vendored
Normal file
@ -0,0 +1,143 @@
|
||||
// Copyright 2011 Evan Shaw. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package mmap
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"os"
|
||||
"sync"
|
||||
|
||||
"golang.org/x/sys/windows"
|
||||
)
|
||||
|
||||
// mmap on Windows is a two-step process.
|
||||
// First, we call CreateFileMapping to get a handle.
|
||||
// Then, we call MapViewOfFile to get an actual pointer into memory.
|
||||
// Because we want to emulate a POSIX-style mmap, we don't want to expose
|
||||
// the handle -- only the pointer. We also want to return only a byte slice,
|
||||
// not a struct, so it's convenient to manipulate.
|
||||
|
||||
// We keep this map so that we can get back the original handle from the memory address.
|
||||
|
||||
type addrinfo struct {
|
||||
file windows.Handle
|
||||
mapview windows.Handle
|
||||
}
|
||||
|
||||
var handleLock sync.Mutex
|
||||
var handleMap = map[uintptr]*addrinfo{}
|
||||
|
||||
func mmap(len int, prot, flags, hfile uintptr, off int64) ([]byte, error) {
|
||||
flProtect := uint32(windows.PAGE_READONLY)
|
||||
dwDesiredAccess := uint32(windows.FILE_MAP_READ)
|
||||
switch {
|
||||
case prot&COPY != 0:
|
||||
flProtect = windows.PAGE_WRITECOPY
|
||||
dwDesiredAccess = windows.FILE_MAP_COPY
|
||||
case prot&RDWR != 0:
|
||||
flProtect = windows.PAGE_READWRITE
|
||||
dwDesiredAccess = windows.FILE_MAP_WRITE
|
||||
}
|
||||
if prot&EXEC != 0 {
|
||||
flProtect <<= 4
|
||||
dwDesiredAccess |= windows.FILE_MAP_EXECUTE
|
||||
}
|
||||
|
||||
// The maximum size is the area of the file, starting from 0,
|
||||
// that we wish to allow to be mappable. It is the sum of
|
||||
// the length the user requested, plus the offset where that length
|
||||
// is starting from. This does not map the data into memory.
|
||||
maxSizeHigh := uint32((off + int64(len)) >> 32)
|
||||
maxSizeLow := uint32((off + int64(len)) & 0xFFFFFFFF)
|
||||
// TODO: Do we need to set some security attributes? It might help portability.
|
||||
h, errno := windows.CreateFileMapping(windows.Handle(hfile), nil, flProtect, maxSizeHigh, maxSizeLow, nil)
|
||||
if h == 0 {
|
||||
return nil, os.NewSyscallError("CreateFileMapping", errno)
|
||||
}
|
||||
|
||||
// Actually map a view of the data into memory. The view's size
|
||||
// is the length the user requested.
|
||||
fileOffsetHigh := uint32(off >> 32)
|
||||
fileOffsetLow := uint32(off & 0xFFFFFFFF)
|
||||
addr, errno := windows.MapViewOfFile(h, dwDesiredAccess, fileOffsetHigh, fileOffsetLow, uintptr(len))
|
||||
if addr == 0 {
|
||||
return nil, os.NewSyscallError("MapViewOfFile", errno)
|
||||
}
|
||||
handleLock.Lock()
|
||||
handleMap[addr] = &addrinfo{
|
||||
file: windows.Handle(hfile),
|
||||
mapview: h,
|
||||
}
|
||||
handleLock.Unlock()
|
||||
|
||||
m := MMap{}
|
||||
dh := m.header()
|
||||
dh.Data = addr
|
||||
dh.Len = len
|
||||
dh.Cap = dh.Len
|
||||
|
||||
return m, nil
|
||||
}
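The 64-bit splitting in the function above can be illustrated in isolation (the offset and length values below are arbitrary, chosen only to show the arithmetic):

package main

import "fmt"

func main() {
	// Splitting offset+length into the high/low 32-bit words expected by
	// CreateFileMapping, mirroring the computation in mmap above.
	off := int64(1) << 32 // example offset of 4 GiB
	length := 4096
	total := off + int64(length)

	maxSizeHigh := uint32(total >> 32)
	maxSizeLow := uint32(total & 0xFFFFFFFF)
	fmt.Println(maxSizeHigh, maxSizeLow) // 1 4096
}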
|
||||
|
||||
func (m MMap) flush() error {
|
||||
addr, len := m.addrLen()
|
||||
errno := windows.FlushViewOfFile(addr, len)
|
||||
if errno != nil {
|
||||
return os.NewSyscallError("FlushViewOfFile", errno)
|
||||
}
|
||||
|
||||
handleLock.Lock()
|
||||
defer handleLock.Unlock()
|
||||
handle, ok := handleMap[addr]
|
||||
if !ok {
|
||||
// should be impossible; we would've errored above
|
||||
return errors.New("unknown base address")
|
||||
}
|
||||
|
||||
errno = windows.FlushFileBuffers(handle.file)
|
||||
return os.NewSyscallError("FlushFileBuffers", errno)
|
||||
}
|
||||
|
||||
func (m MMap) lock() error {
|
||||
addr, len := m.addrLen()
|
||||
errno := windows.VirtualLock(addr, len)
|
||||
return os.NewSyscallError("VirtualLock", errno)
|
||||
}
|
||||
|
||||
func (m MMap) unlock() error {
|
||||
addr, len := m.addrLen()
|
||||
errno := windows.VirtualUnlock(addr, len)
|
||||
return os.NewSyscallError("VirtualUnlock", errno)
|
||||
}
|
||||
|
||||
func (m MMap) unmap() error {
|
||||
err := m.flush()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
addr := m.header().Data
|
||||
// Lock the UnmapViewOfFile along with the handleMap deletion.
|
||||
// As soon as we unmap the view, the OS is free to give the
|
||||
// same addr to another new map. We don't want another goroutine
|
||||
// to insert and remove the same addr into handleMap while
|
||||
// we're trying to remove our old addr/handle pair.
|
||||
handleLock.Lock()
|
||||
defer handleLock.Unlock()
|
||||
err = windows.UnmapViewOfFile(addr)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
handle, ok := handleMap[addr]
|
||||
if !ok {
|
||||
// should be impossible; we would've errored above
|
||||
return errors.New("unknown base address")
|
||||
}
|
||||
delete(handleMap, addr)
|
||||
|
||||
e := windows.CloseHandle(windows.Handle(handle.mapview))
|
||||
return os.NewSyscallError("CloseHandle", e)
|
||||
}
|
18
vendor/github.com/ethereum/go-ethereum/.github/CODEOWNERS
generated
vendored
Normal file
@ -0,0 +1,18 @@
|
||||
# Lines starting with '#' are comments.
|
||||
# Each line is a file pattern followed by one or more owners.
|
||||
|
||||
accounts/usbwallet @karalabe
|
||||
accounts/scwallet @gballet
|
||||
accounts/abi @gballet
|
||||
consensus @karalabe
|
||||
core/ @karalabe @holiman
|
||||
eth/ @karalabe
|
||||
graphql/ @gballet
|
||||
les/ @zsfelfoldi
|
||||
light/ @zsfelfoldi
|
||||
mobile/ @karalabe
|
||||
p2p/ @fjl @zsfelfoldi
|
||||
p2p/simulations @zelig @nonsense @janos @justelad
|
||||
p2p/protocols @zelig @nonsense @janos @justelad
|
||||
p2p/testing @zelig @nonsense @janos @justelad
|
||||
whisper/ @gballet @gluk256
|
235
vendor/github.com/ethereum/go-ethereum/.travis.yml
generated
vendored
Normal file
@ -0,0 +1,235 @@
|
||||
language: go
|
||||
go_import_path: github.com/ethereum/go-ethereum
|
||||
sudo: false
|
||||
matrix:
|
||||
include:
|
||||
- os: linux
|
||||
dist: xenial
|
||||
sudo: required
|
||||
go: 1.10.x
|
||||
script:
|
||||
- sudo modprobe fuse
|
||||
- sudo chmod 666 /dev/fuse
|
||||
- sudo chown root:$USER /etc/fuse.conf
|
||||
- go run build/ci.go install
|
||||
- go run build/ci.go test -coverage $TEST_PACKAGES
|
||||
|
||||
- os: linux
|
||||
dist: xenial
|
||||
sudo: required
|
||||
go: 1.11.x
|
||||
script:
|
||||
- sudo modprobe fuse
|
||||
- sudo chmod 666 /dev/fuse
|
||||
- sudo chown root:$USER /etc/fuse.conf
|
||||
- go run build/ci.go install
|
||||
- go run build/ci.go test -coverage $TEST_PACKAGES
|
||||
|
||||
# These are the latest Go versions.
|
||||
- os: linux
|
||||
dist: xenial
|
||||
sudo: required
|
||||
go: 1.12.x
|
||||
script:
|
||||
- sudo modprobe fuse
|
||||
- sudo chmod 666 /dev/fuse
|
||||
- sudo chown root:$USER /etc/fuse.conf
|
||||
- go run build/ci.go install
|
||||
        - go run build/ci.go test -coverage $TEST_PACKAGES

    - os: osx
      go: 1.12.x
      script:
        - echo "Increase the maximum number of open file descriptors on macOS"
        - NOFILE=20480
        - sudo sysctl -w kern.maxfiles=$NOFILE
        - sudo sysctl -w kern.maxfilesperproc=$NOFILE
        - sudo launchctl limit maxfiles $NOFILE $NOFILE
        - sudo launchctl limit maxfiles
        - ulimit -S -n $NOFILE
        - ulimit -n
        - unset -f cd # workaround for https://github.com/travis-ci/travis-ci/issues/8703
        - go run build/ci.go install
        - go run build/ci.go test -coverage $TEST_PACKAGES

    # This builder only tests code linters on latest version of Go
    - os: linux
      dist: xenial
      go: 1.12.x
      env:
        - lint
      git:
        submodules: false # avoid cloning ethereum/tests
      script:
        - go run build/ci.go lint

    # This builder does the Ubuntu PPA upload
    - if: repo = ethereum/go-ethereum AND type = push
      os: linux
      dist: xenial
      go: 1.12.x
      env:
        - ubuntu-ppa
      git:
        submodules: false # avoid cloning ethereum/tests
      addons:
        apt:
          packages:
            - devscripts
            - debhelper
            - dput
            - fakeroot
            - python-bzrlib
            - python-paramiko
      script:
        - echo '|1|7SiYPr9xl3uctzovOTj4gMwAC1M=|t6ReES75Bo/PxlOPJ6/GsGbTrM0= ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA0aKz5UTUndYgIGG7dQBV+HaeuEZJ2xPHo2DS2iSKvUL4xNMSAY4UguNW+pX56nAQmZKIZZ8MaEvSj6zMEDiq6HFfn5JcTlM80UwlnyKe8B8p7Nk06PPQLrnmQt5fh0HmEcZx+JU9TZsfCHPnX7MNz4ELfZE6cFsclClrKim3BHUIGq//t93DllB+h4O9LHjEUsQ1Sr63irDLSutkLJD6RXchjROXkNirlcNVHH/jwLWR5RcYilNX7S5bIkK8NlWPjsn/8Ua5O7I9/YoE97PpO6i73DTGLh5H9JN/SITwCKBkgSDWUt61uPK3Y11Gty7o2lWsBjhBUm2Y38CBsoGmBw==' >> ~/.ssh/known_hosts
        - go run build/ci.go debsrc -upload ethereum/ethereum -sftp-user geth-ci -signer "Go Ethereum Linux Builder <geth-ci@ethereum.org>"

    # This builder does the Linux Azure uploads
    - if: repo = ethereum/go-ethereum AND type = push
      os: linux
      dist: xenial
      sudo: required
      go: 1.12.x
      env:
        - azure-linux
      git:
        submodules: false # avoid cloning ethereum/tests
      addons:
        apt:
          packages:
            - gcc-multilib
      script:
        # Build for the primary platforms that Trusty can manage
        - go run build/ci.go install
        - go run build/ci.go archive -type tar -signer LINUX_SIGNING_KEY -upload gethstore/builds
        - go run build/ci.go install -arch 386
        - go run build/ci.go archive -arch 386 -type tar -signer LINUX_SIGNING_KEY -upload gethstore/builds

        # Switch over GCC to cross compilation (breaks 386, hence why do it here only)
        - sudo -E apt-get -yq --no-install-suggests --no-install-recommends --force-yes install gcc-arm-linux-gnueabi libc6-dev-armel-cross gcc-arm-linux-gnueabihf libc6-dev-armhf-cross gcc-aarch64-linux-gnu libc6-dev-arm64-cross
        - sudo ln -s /usr/include/asm-generic /usr/include/asm

        - GOARM=5 go run build/ci.go install -arch arm -cc arm-linux-gnueabi-gcc
        - GOARM=5 go run build/ci.go archive -arch arm -type tar -signer LINUX_SIGNING_KEY -upload gethstore/builds
        - GOARM=6 go run build/ci.go install -arch arm -cc arm-linux-gnueabi-gcc
        - GOARM=6 go run build/ci.go archive -arch arm -type tar -signer LINUX_SIGNING_KEY -upload gethstore/builds
        - GOARM=7 go run build/ci.go install -arch arm -cc arm-linux-gnueabihf-gcc
        - GOARM=7 go run build/ci.go archive -arch arm -type tar -signer LINUX_SIGNING_KEY -upload gethstore/builds
        - go run build/ci.go install -arch arm64 -cc aarch64-linux-gnu-gcc
        - go run build/ci.go archive -arch arm64 -type tar -signer LINUX_SIGNING_KEY -upload gethstore/builds

    # This builder does the Linux Azure MIPS xgo uploads
    - if: repo = ethereum/go-ethereum AND type = push
      os: linux
      dist: xenial
      services:
        - docker
      go: 1.12.x
      env:
        - azure-linux-mips
      git:
        submodules: false # avoid cloning ethereum/tests
      script:
        - go run build/ci.go xgo --alltools -- --targets=linux/mips --ldflags '-extldflags "-static"' -v
        - for bin in build/bin/*-linux-mips; do mv -f "${bin}" "${bin/-linux-mips/}"; done
        - go run build/ci.go archive -arch mips -type tar -signer LINUX_SIGNING_KEY -upload gethstore/builds

        - go run build/ci.go xgo --alltools -- --targets=linux/mipsle --ldflags '-extldflags "-static"' -v
        - for bin in build/bin/*-linux-mipsle; do mv -f "${bin}" "${bin/-linux-mipsle/}"; done
        - go run build/ci.go archive -arch mipsle -type tar -signer LINUX_SIGNING_KEY -upload gethstore/builds

        - go run build/ci.go xgo --alltools -- --targets=linux/mips64 --ldflags '-extldflags "-static"' -v
        - for bin in build/bin/*-linux-mips64; do mv -f "${bin}" "${bin/-linux-mips64/}"; done
        - go run build/ci.go archive -arch mips64 -type tar -signer LINUX_SIGNING_KEY -upload gethstore/builds

        - go run build/ci.go xgo --alltools -- --targets=linux/mips64le --ldflags '-extldflags "-static"' -v
        - for bin in build/bin/*-linux-mips64le; do mv -f "${bin}" "${bin/-linux-mips64le/}"; done
        - go run build/ci.go archive -arch mips64le -type tar -signer LINUX_SIGNING_KEY -upload gethstore/builds

    # This builder does the Android Maven and Azure uploads
    - if: repo = ethereum/go-ethereum AND type = push
      os: linux
      dist: xenial
      addons:
        apt:
          packages:
            - oracle-java8-installer
            - oracle-java8-set-default
      language: android
      android:
        components:
          - platform-tools
          - tools
          - android-15
          - android-19
          - android-24
      env:
        - azure-android
        - maven-android
      git:
        submodules: false # avoid cloning ethereum/tests
      before_install:
        - curl https://dl.google.com/go/go1.12.linux-amd64.tar.gz | tar -xz
        - export PATH=`pwd`/go/bin:$PATH
        - export GOROOT=`pwd`/go
        - export GOPATH=$HOME/go
      script:
        # Build the Android archive and upload it to Maven Central and Azure
        - curl https://dl.google.com/android/repository/android-ndk-r19b-linux-x86_64.zip -o android-ndk-r19b.zip
        - unzip -q android-ndk-r19b.zip && rm android-ndk-r19b.zip
        - mv android-ndk-r19b $ANDROID_HOME/ndk-bundle

        - mkdir -p $GOPATH/src/github.com/ethereum
        - ln -s `pwd` $GOPATH/src/github.com/ethereum/go-ethereum
        - go run build/ci.go aar -signer ANDROID_SIGNING_KEY -deploy https://oss.sonatype.org -upload gethstore/builds

    # This builder does the OSX Azure, iOS CocoaPods and iOS Azure uploads
    - if: repo = ethereum/go-ethereum AND type = push
      os: osx
      go: 1.12.x
      env:
        - azure-osx
        - azure-ios
        - cocoapods-ios
      git:
        submodules: false # avoid cloning ethereum/tests
      script:
        - go run build/ci.go install
        - go run build/ci.go archive -type tar -signer OSX_SIGNING_KEY -upload gethstore/builds

        # Build the iOS framework and upload it to CocoaPods and Azure
        - gem uninstall cocoapods -a -x
        - gem install cocoapods

        - mv ~/.cocoapods/repos/master ~/.cocoapods/repos/master.bak
        - sed -i '.bak' 's/repo.join/!repo.join/g' $(dirname `gem which cocoapods`)/cocoapods/sources_manager.rb
        - if [ "$TRAVIS_PULL_REQUEST" = "false" ]; then git clone --depth=1 https://github.com/CocoaPods/Specs.git ~/.cocoapods/repos/master && pod setup --verbose; fi

        - xctool -version
        - xcrun simctl list

        # Workaround for https://github.com/golang/go/issues/23749
        - export CGO_CFLAGS_ALLOW='-fmodules|-fblocks|-fobjc-arc'
        - go run build/ci.go xcode -signer IOS_SIGNING_KEY -deploy trunk -upload gethstore/builds

    # This builder does the Azure archive purges to avoid accumulating junk
    - if: repo = ethereum/go-ethereum AND type = cron
      os: linux
      dist: xenial
      go: 1.12.x
      env:
        - azure-purge
      git:
        submodules: false # avoid cloning ethereum/tests
      script:
        - go run build/ci.go purge -store gethstore/builds -days 14

    - name: Race Detector for Swarm
      if: repo = ethersphere/go-ethereum
      os: linux
      dist: xenial
      go: 1.12.x
      git:
        submodules: false # avoid cloning ethereum/tests
      script: ./build/travis_keepalive.sh go test -timeout 20m -race ./swarm... ./p2p/{protocols,simulations,testing}/...
152
vendor/github.com/ethereum/go-ethereum/accounts/hd.go
generated
vendored
Normal file
152
vendor/github.com/ethereum/go-ethereum/accounts/hd.go
generated
vendored
Normal file
@ -0,0 +1,152 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.

package accounts

import (
	"encoding/json"
	"errors"
	"fmt"
	"math"
	"math/big"
	"strings"
)

// DefaultRootDerivationPath is the root path to which custom derivation endpoints
// are appended. As such, the first account will be at m/44'/60'/0'/0, the second
// at m/44'/60'/0'/1, etc.
var DefaultRootDerivationPath = DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0}

// DefaultBaseDerivationPath is the base path from which custom derivation endpoints
// are incremented. As such, the first account will be at m/44'/60'/0'/0/0, the second
// at m/44'/60'/0'/0/1, etc.
var DefaultBaseDerivationPath = DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0, 0}

// DefaultLedgerBaseDerivationPath is the base path from which custom derivation endpoints
// are incremented. As such, the first account will be at m/44'/60'/0'/0, the second
// at m/44'/60'/0'/1, etc.
var DefaultLedgerBaseDerivationPath = DerivationPath{0x80000000 + 44, 0x80000000 + 60, 0x80000000 + 0, 0}

// DerivationPath represents the computer friendly version of a hierarchical
// deterministic wallet account derivation path.
//
// The BIP-32 spec https://github.com/bitcoin/bips/blob/master/bip-0032.mediawiki
// defines derivation paths to be of the form:
//
//   m / purpose' / coin_type' / account' / change / address_index
//
// The BIP-44 spec https://github.com/bitcoin/bips/blob/master/bip-0044.mediawiki
// defines that the `purpose` be 44' (or 0x8000002C) for crypto currencies, and
// SLIP-44 https://github.com/satoshilabs/slips/blob/master/slip-0044.md assigns
// the `coin_type` 60' (or 0x8000003C) to Ethereum.
//
// The root path for Ethereum is m/44'/60'/0'/0 according to the specification
// from https://github.com/ethereum/EIPs/issues/84, albeit it's not set in stone
// yet whether accounts should increment the last component or the children of
// that. We will go with the simpler approach of incrementing the last component.
type DerivationPath []uint32

// ParseDerivationPath converts a user specified derivation path string to the
// internal binary representation.
//
// Full derivation paths need to start with the `m/` prefix, relative derivation
// paths (which will get appended to the default root path) must not have prefixes
// in front of the first element. Whitespace is ignored.
func ParseDerivationPath(path string) (DerivationPath, error) {
	var result DerivationPath

	// Handle absolute or relative paths
	components := strings.Split(path, "/")
	switch {
	case len(components) == 0:
		return nil, errors.New("empty derivation path")

	case strings.TrimSpace(components[0]) == "":
		return nil, errors.New("ambiguous path: use 'm/' prefix for absolute paths, or no leading '/' for relative ones")

	case strings.TrimSpace(components[0]) == "m":
		components = components[1:]

	default:
		result = append(result, DefaultRootDerivationPath...)
	}
	// All remaining components are relative, append one by one
	if len(components) == 0 {
		return nil, errors.New("empty derivation path") // Empty relative paths
	}
	for _, component := range components {
		// Ignore any user added whitespace
		component = strings.TrimSpace(component)
		var value uint32

		// Handle hardened paths
		if strings.HasSuffix(component, "'") {
			value = 0x80000000
			component = strings.TrimSpace(strings.TrimSuffix(component, "'"))
		}
		// Handle the non hardened component
		bigval, ok := new(big.Int).SetString(component, 0)
		if !ok {
			return nil, fmt.Errorf("invalid component: %s", component)
		}
		max := math.MaxUint32 - value
		if bigval.Sign() < 0 || bigval.Cmp(big.NewInt(int64(max))) > 0 {
			if value == 0 {
				return nil, fmt.Errorf("component %v out of allowed range [0, %d]", bigval, max)
			}
			return nil, fmt.Errorf("component %v out of allowed hardened range [0, %d]", bigval, max)
		}
		value += uint32(bigval.Uint64())

		// Append and repeat
		result = append(result, value)
	}
	return result, nil
}

// String implements the stringer interface, converting a binary derivation path
// to its canonical representation.
func (path DerivationPath) String() string {
	result := "m"
	for _, component := range path {
		var hardened bool
		if component >= 0x80000000 {
			component -= 0x80000000
			hardened = true
		}
		result = fmt.Sprintf("%s/%d", result, component)
		if hardened {
			result += "'"
		}
	}
	return result
}

// MarshalJSON turns a derivation path into its json-serialized string
func (path DerivationPath) MarshalJSON() ([]byte, error) {
	return json.Marshal(path.String())
}

// UnmarshalJSON turns a json-serialized string back into a derivation path
func (path *DerivationPath) UnmarshalJSON(b []byte) error {
	var dp string
	var err error
	if err = json.Unmarshal(b, &dp); err != nil {
		return err
	}
	*path, err = ParseDerivationPath(dp)
	return err
}
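For reference, a minimal sketch of how the derivation-path helpers above could be exercised from client code. This is an illustration, not part of the vendored file; the import path follows the vendored layout, and the output comments are inferred from the ParseDerivationPath/String implementations shown here.

```go
package main

import (
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/accounts"
)

func main() {
	// Absolute path: full paths must start with the `m/` prefix.
	abs, err := accounts.ParseDerivationPath("m/44'/60'/0'/0/0")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(abs) // m/44'/60'/0'/0/0

	// Relative path: components are appended to DefaultRootDerivationPath (m/44'/60'/0'/0).
	rel, err := accounts.ParseDerivationPath("1")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(rel) // m/44'/60'/0'/0/1
}
```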
214
vendor/github.com/ethereum/go-ethereum/accounts/manager.go
generated
vendored
Normal file
214
vendor/github.com/ethereum/go-ethereum/accounts/manager.go
generated
vendored
Normal file
@ -0,0 +1,214 @@
|
||||
// Copyright 2017 The go-ethereum Authors
|
||||
// This file is part of the go-ethereum library.
|
||||
//
|
||||
// The go-ethereum library is free software: you can redistribute it and/or modify
|
||||
// it under the terms of the GNU Lesser General Public License as published by
|
||||
// the Free Software Foundation, either version 3 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// The go-ethereum library is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU Lesser General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU Lesser General Public License
|
||||
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
package accounts
|
||||
|
||||
import (
|
||||
"reflect"
|
||||
"sort"
|
||||
"sync"
|
||||
|
||||
"github.com/ethereum/go-ethereum/event"
|
||||
)
|
||||
|
||||
// Config contains the settings of the global account manager.
|
||||
//
|
||||
// TODO(rjl493456442, karalabe, holiman): Get rid of this when account management
|
||||
// is removed in favor of Clef.
|
||||
type Config struct {
|
||||
InsecureUnlockAllowed bool // Whether account unlocking in insecure environment is allowed
|
||||
}
|
||||
|
||||
// Manager is an overarching account manager that can communicate with various
|
||||
// backends for signing transactions.
|
||||
type Manager struct {
|
||||
config *Config // Global account manager configurations
|
||||
backends map[reflect.Type][]Backend // Index of backends currently registered
|
||||
updaters []event.Subscription // Wallet update subscriptions for all backends
|
||||
updates chan WalletEvent // Subscription sink for backend wallet changes
|
||||
wallets []Wallet // Cache of all wallets from all registered backends
|
||||
|
||||
feed event.Feed // Wallet feed notifying of arrivals/departures
|
||||
|
||||
quit chan chan error
|
||||
lock sync.RWMutex
|
||||
}
|
||||
|
||||
// NewManager creates a generic account manager to sign transaction via various
|
||||
// supported backends.
|
||||
func NewManager(config *Config, backends ...Backend) *Manager {
|
||||
// Retrieve the initial list of wallets from the backends and sort by URL
|
||||
var wallets []Wallet
|
||||
for _, backend := range backends {
|
||||
wallets = merge(wallets, backend.Wallets()...)
|
||||
}
|
||||
// Subscribe to wallet notifications from all backends
|
||||
updates := make(chan WalletEvent, 4*len(backends))
|
||||
|
||||
subs := make([]event.Subscription, len(backends))
|
||||
for i, backend := range backends {
|
||||
subs[i] = backend.Subscribe(updates)
|
||||
}
|
||||
// Assemble the account manager and return
|
||||
am := &Manager{
|
||||
config: config,
|
||||
backends: make(map[reflect.Type][]Backend),
|
||||
updaters: subs,
|
||||
updates: updates,
|
||||
wallets: wallets,
|
||||
quit: make(chan chan error),
|
||||
}
|
||||
for _, backend := range backends {
|
||||
kind := reflect.TypeOf(backend)
|
||||
am.backends[kind] = append(am.backends[kind], backend)
|
||||
}
|
||||
go am.update()
|
||||
|
||||
return am
|
||||
}
|
||||
|
||||
// Close terminates the account manager's internal notification processes.
|
||||
func (am *Manager) Close() error {
|
||||
errc := make(chan error)
|
||||
am.quit <- errc
|
||||
return <-errc
|
||||
}
|
||||
|
||||
// Config returns the configuration of account manager.
|
||||
func (am *Manager) Config() *Config {
|
||||
return am.config
|
||||
}
|
||||
|
||||
// update is the wallet event loop listening for notifications from the backends
|
||||
// and updating the cache of wallets.
|
||||
func (am *Manager) update() {
|
||||
// Close all subscriptions when the manager terminates
|
||||
defer func() {
|
||||
am.lock.Lock()
|
||||
for _, sub := range am.updaters {
|
||||
sub.Unsubscribe()
|
||||
}
|
||||
am.updaters = nil
|
||||
am.lock.Unlock()
|
||||
}()
|
||||
|
||||
// Loop until termination
|
||||
for {
|
||||
select {
|
||||
case event := <-am.updates:
|
||||
// Wallet event arrived, update local cache
|
||||
am.lock.Lock()
|
||||
switch event.Kind {
|
||||
case WalletArrived:
|
||||
am.wallets = merge(am.wallets, event.Wallet)
|
||||
case WalletDropped:
|
||||
am.wallets = drop(am.wallets, event.Wallet)
|
||||
}
|
||||
am.lock.Unlock()
|
||||
|
||||
// Notify any listeners of the event
|
||||
am.feed.Send(event)
|
||||
|
||||
case errc := <-am.quit:
|
||||
// Manager terminating, return
|
||||
errc <- nil
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Backends retrieves the backend(s) with the given type from the account manager.
|
||||
func (am *Manager) Backends(kind reflect.Type) []Backend {
|
||||
return am.backends[kind]
|
||||
}
|
||||
|
||||
// Wallets returns all signer accounts registered under this account manager.
|
||||
func (am *Manager) Wallets() []Wallet {
|
||||
am.lock.RLock()
|
||||
defer am.lock.RUnlock()
|
||||
|
||||
cpy := make([]Wallet, len(am.wallets))
|
||||
copy(cpy, am.wallets)
|
||||
return cpy
|
||||
}
|
||||
|
||||
// Wallet retrieves the wallet associated with a particular URL.
|
||||
func (am *Manager) Wallet(url string) (Wallet, error) {
|
||||
am.lock.RLock()
|
||||
defer am.lock.RUnlock()
|
||||
|
||||
parsed, err := parseURL(url)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
for _, wallet := range am.Wallets() {
|
||||
if wallet.URL() == parsed {
|
||||
return wallet, nil
|
||||
}
|
||||
}
|
||||
return nil, ErrUnknownWallet
|
||||
}
|
||||
|
||||
// Find attempts to locate the wallet corresponding to a specific account. Since
|
||||
// accounts can be dynamically added to and removed from wallets, this method has
|
||||
// a linear runtime in the number of wallets.
|
||||
func (am *Manager) Find(account Account) (Wallet, error) {
|
||||
am.lock.RLock()
|
||||
defer am.lock.RUnlock()
|
||||
|
||||
for _, wallet := range am.wallets {
|
||||
if wallet.Contains(account) {
|
||||
return wallet, nil
|
||||
}
|
||||
}
|
||||
return nil, ErrUnknownAccount
|
||||
}
|
||||
|
||||
// Subscribe creates an async subscription to receive notifications when the
|
||||
// manager detects the arrival or departure of a wallet from any of its backends.
|
||||
func (am *Manager) Subscribe(sink chan<- WalletEvent) event.Subscription {
|
||||
return am.feed.Subscribe(sink)
|
||||
}
|
||||
|
||||
// merge is a sorted analogue of append for wallets, where the ordering of the
|
||||
// origin list is preserved by inserting new wallets at the correct position.
|
||||
//
|
||||
// The original slice is assumed to be already sorted by URL.
|
||||
func merge(slice []Wallet, wallets ...Wallet) []Wallet {
|
||||
for _, wallet := range wallets {
|
||||
n := sort.Search(len(slice), func(i int) bool { return slice[i].URL().Cmp(wallet.URL()) >= 0 })
|
||||
if n == len(slice) {
|
||||
slice = append(slice, wallet)
|
||||
continue
|
||||
}
|
||||
slice = append(slice[:n], append([]Wallet{wallet}, slice[n:]...)...)
|
||||
}
|
||||
return slice
|
||||
}
|
||||
|
||||
// drop is the counterpart of merge, which looks up wallets from within the sorted
|
||||
// cache and removes the ones specified.
|
||||
func drop(slice []Wallet, wallets ...Wallet) []Wallet {
|
||||
for _, wallet := range wallets {
|
||||
n := sort.Search(len(slice), func(i int) bool { return slice[i].URL().Cmp(wallet.URL()) >= 0 })
|
||||
if n == len(slice) {
|
||||
// Wallet not found, may happen during startup
|
||||
continue
|
||||
}
|
||||
slice = append(slice[:n], slice[n+1:]...)
|
||||
}
|
||||
return slice
|
||||
}
|
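A minimal sketch of wiring up the account manager defined above. It registers no backends, which only illustrates the API surface (NewManager, Subscribe, Wallets, Close); real callers would pass keystore or hardware-wallet backends as the variadic arguments. The import path is assumed from the vendored layout.

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/accounts"
)

func main() {
	// No backends registered: Wallets() will simply return an empty slice.
	am := accounts.NewManager(&accounts.Config{InsecureUnlockAllowed: false})
	defer am.Close()

	// Listen for wallets arriving on or departing from any registered backend.
	events := make(chan accounts.WalletEvent, 16)
	sub := am.Subscribe(events)
	defer sub.Unsubscribe()

	fmt.Println("tracked wallets:", len(am.Wallets()))
}
```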
102
vendor/github.com/ethereum/go-ethereum/accounts/scwallet/README.md
generated
vendored
Normal file
102
vendor/github.com/ethereum/go-ethereum/accounts/scwallet/README.md
generated
vendored
Normal file
@ -0,0 +1,102 @@
# Using the smartcard wallet

## Requirements

* A USB smartcard reader
* A keycard that supports the status app
* PCSCD version 4.3 running on your system. **Only version 4.3 is currently supported**

## Preparing the smartcard

**WARNING: FOLLOWING THESE INSTRUCTIONS WILL DESTROY THE MASTER KEY ON YOUR CARD. ONLY PROCEED IF NO FUNDS ARE ASSOCIATED WITH THESE ACCOUNTS**

You can use status' [keycard-cli](https://github.com/status-im/keycard-cli), and you should get _at least_ version 2.1.1 of their [smartcard application](https://github.com/status-im/status-keycard/releases/download/2.2.1/keycard_v2.2.1.cap).

You also need to make sure that the PCSC daemon is running on your system.

Then, you can install the application to the card by typing:

```
keycard install -a keycard_v2.2.1.cap && keycard init
```

At the end of this process, you will be provided with a PIN, a PUK and a pairing password. Write them down; you'll need them shortly.

Start `geth` with the `console` command. You will notice the following warning:

```
WARN [04-09|16:58:38.898] Failed to open wallet url=pcsc://044def09 err="smartcard: pairing password needed"
```

Write down the URL (`pcsc://044def09` in this example). Then ask `geth` to open the wallet:

```
> personal.openWallet("pcsc://044def09")
Please enter the pairing password:
```

Enter the pairing password that you received during card initialization, and then the PIN when you are subsequently asked for it.

If everything goes well, you should see your new account when typing `personal` on the console:

```
> personal
WARN [04-09|17:02:07.330] Smartcard wallet account derivation failed url=pcsc://044def09 err="Unexpected response status Cla=0x80, Ins=0xd1, Sw=0x6985"
{
  listAccounts: [],
  listWallets: [{
      status: "Empty, waiting for initialization",
      url: "pcsc://044def09"
  }],
  ...
}
```

So the communication with the card is working, but there is no key associated with this wallet. Let's create it:

```
> personal.initializeWallet("pcsc://044def09")
"tilt ... impact"
```

You should get a list of words; this is your seed, so write it down. Your wallet should now be initialized:

```
> personal.listWallets
[{
    accounts: [{
        address: "0x678b7cd55c61917defb23546a41803c5bfefbc7a",
        url: "pcsc://044d/m/44'/60'/0'/0/0"
    }],
    status: "Online",
    url: "pcsc://044def09"
}]
```

You're all set!

## Usage

1. Start `geth` with the `console` command
2. Check the card's URL by looking at `personal.listWallets`:

```
listWallets: [{
    status: "Online, can derive public keys",
    url: "pcsc://a4d73015"
}]
```

3. Open the wallet; you will be prompted for your pairing password, then your PIN:

```
personal.openWallet("pcsc://a4d73015")
```

4. Check that creation was successful by typing e.g. `personal`. Then use it like a regular wallet.

## Known issues

* Starting geth with a valid card seems to make firefox crash.
* PCSC version 4.4 should work, but is currently untested
87
vendor/github.com/ethereum/go-ethereum/accounts/scwallet/apdu.go
generated
vendored
Normal file
87
vendor/github.com/ethereum/go-ethereum/accounts/scwallet/apdu.go
generated
vendored
Normal file
@ -0,0 +1,87 @@
|
||||
// Copyright 2018 The go-ethereum Authors
|
||||
// This file is part of the go-ethereum library.
|
||||
//
|
||||
// The go-ethereum library is free software: you can redistribute it and/or modify
|
||||
// it under the terms of the GNU Lesser General Public License as published by
|
||||
// the Free Software Foundation, either version 3 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// The go-ethereum library is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU Lesser General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU Lesser General Public License
|
||||
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
package scwallet
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"encoding/binary"
|
||||
"fmt"
|
||||
)
|
||||
|
||||
// commandAPDU represents an application data unit sent to a smartcard.
|
||||
type commandAPDU struct {
|
||||
Cla, Ins, P1, P2 uint8 // Class, Instruction, Parameter 1, Parameter 2
|
||||
Data []byte // Command data
|
||||
Le uint8 // Command data length
|
||||
}
|
||||
|
||||
// serialize serializes a command APDU.
|
||||
func (ca commandAPDU) serialize() ([]byte, error) {
|
||||
buf := new(bytes.Buffer)
|
||||
|
||||
if err := binary.Write(buf, binary.BigEndian, ca.Cla); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if err := binary.Write(buf, binary.BigEndian, ca.Ins); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if err := binary.Write(buf, binary.BigEndian, ca.P1); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if err := binary.Write(buf, binary.BigEndian, ca.P2); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if len(ca.Data) > 0 {
|
||||
if err := binary.Write(buf, binary.BigEndian, uint8(len(ca.Data))); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if err := binary.Write(buf, binary.BigEndian, ca.Data); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
if err := binary.Write(buf, binary.BigEndian, ca.Le); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return buf.Bytes(), nil
|
||||
}
|
||||
|
||||
// responseAPDU represents an application data unit received from a smart card.
|
||||
type responseAPDU struct {
|
||||
Data []byte // response data
|
||||
Sw1, Sw2 uint8 // status words 1 and 2
|
||||
}
|
||||
|
||||
// deserialize deserializes a response APDU.
|
||||
func (ra *responseAPDU) deserialize(data []byte) error {
|
||||
if len(data) < 2 {
|
||||
return fmt.Errorf("can not deserialize data: payload too short (%d < 2)", len(data))
|
||||
}
|
||||
|
||||
ra.Data = make([]byte, len(data)-2)
|
||||
|
||||
buf := bytes.NewReader(data)
|
||||
if err := binary.Read(buf, binary.BigEndian, &ra.Data); err != nil {
|
||||
return err
|
||||
}
|
||||
if err := binary.Read(buf, binary.BigEndian, &ra.Sw1); err != nil {
|
||||
return err
|
||||
}
|
||||
if err := binary.Read(buf, binary.BigEndian, &ra.Sw2); err != nil {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
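As a sanity check on the wire format, here is a hedged sketch of how these APDU helpers behave. Because the types are unexported, it would have to live inside the scwallet package itself, and the field values are purely illustrative.

```go
package scwallet

import "fmt"

// exampleAPDURoundTrip shows the byte layout produced by serialize and
// consumed by deserialize. Values are illustrative only.
func exampleAPDURoundTrip() error {
	// A command with no payload: header bytes followed by Le.
	cmd := commandAPDU{Cla: 0x80, Ins: 0x10, P1: 0, P2: 0, Data: nil, Le: 0}
	raw, err := cmd.serialize()
	if err != nil {
		return err
	}
	fmt.Printf("command: % x\n", raw) // 80 10 00 00 00

	// A response whose last two bytes are the status words.
	var resp responseAPDU
	if err := resp.deserialize([]byte{0xde, 0xad, 0x90, 0x00}); err != nil {
		return err
	}
	fmt.Printf("data: % x, status: %02x%02x\n", resp.Data, resp.Sw1, resp.Sw2) // data: de ad, status: 9000
	return nil
}
```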
302
vendor/github.com/ethereum/go-ethereum/accounts/scwallet/hub.go
generated
vendored
Normal file
302
vendor/github.com/ethereum/go-ethereum/accounts/scwallet/hub.go
generated
vendored
Normal file
@ -0,0 +1,302 @@
|
||||
// Copyright 2018 The go-ethereum Authors
|
||||
// This file is part of the go-ethereum library.
|
||||
//
|
||||
// The go-ethereum library is free software: you can redistribute it and/or modify
|
||||
// it under the terms of the GNU Lesser General Public License as published by
|
||||
// the Free Software Foundation, either version 3 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// The go-ethereum library is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU Lesser General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU Lesser General Public License
|
||||
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
// This package implements support for smartcard-based hardware wallets such as
|
||||
// the one written by Status: https://github.com/status-im/hardware-wallet
|
||||
//
|
||||
// This implementation of smartcard wallets has a different interaction process
// from other types of hardware wallet. The process works like this:
|
||||
//
|
||||
// 1. (First use with a given client) Establish a pairing between hardware
|
||||
// wallet and client. This requires a secret value called a 'pairing password'.
|
||||
// You can pair with an unpaired wallet with `personal.openWallet(URI, pairing password)`.
|
||||
// 2. (First use only) Initialize the wallet, which generates a keypair, stores
|
||||
// it on the wallet, and returns it so the user can back it up. You can
|
||||
// initialize a wallet with `personal.initializeWallet(URI)`.
|
||||
// 3. Connect to the wallet using the pairing information established in step 1.
|
||||
// You can connect to a paired wallet with `personal.openWallet(URI, PIN)`.
|
||||
// 4. Interact with the wallet as normal.
|
||||
|
||||
package scwallet
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"io/ioutil"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"sort"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/ethereum/go-ethereum/accounts"
|
||||
"github.com/ethereum/go-ethereum/common"
|
||||
"github.com/ethereum/go-ethereum/event"
|
||||
"github.com/ethereum/go-ethereum/log"
|
||||
pcsc "github.com/gballet/go-libpcsclite"
|
||||
)
|
||||
|
||||
// Scheme is the URI prefix for smartcard wallets.
|
||||
const Scheme = "pcsc"
|
||||
|
||||
// refreshCycle is the maximum time between wallet refreshes (if USB hotplug
|
||||
// notifications don't work).
|
||||
const refreshCycle = time.Second
|
||||
|
||||
// refreshThrottling is the minimum time between wallet refreshes to avoid thrashing.
|
||||
const refreshThrottling = 500 * time.Millisecond
|
||||
|
||||
// smartcardPairing contains information about a smart card we have paired with
|
||||
// or might pair with the hub.
|
||||
type smartcardPairing struct {
|
||||
PublicKey []byte `json:"publicKey"`
|
||||
PairingIndex uint8 `json:"pairingIndex"`
|
||||
PairingKey []byte `json:"pairingKey"`
|
||||
Accounts map[common.Address]accounts.DerivationPath `json:"accounts"`
|
||||
}
|
||||
|
||||
// Hub is an accounts.Backend that can find and handle generic PC/SC hardware wallets.
|
||||
type Hub struct {
|
||||
scheme string // Protocol scheme prefixing account and wallet URLs.
|
||||
|
||||
context *pcsc.Client
|
||||
datadir string
|
||||
pairings map[string]smartcardPairing
|
||||
|
||||
refreshed time.Time // Time instance when the list of wallets was last refreshed
|
||||
wallets map[string]*Wallet // Mapping from reader names to wallet instances
|
||||
updateFeed event.Feed // Event feed to notify wallet additions/removals
|
||||
updateScope event.SubscriptionScope // Subscription scope tracking current live listeners
|
||||
updating bool // Whether the event notification loop is running
|
||||
|
||||
quit chan chan error
|
||||
|
||||
stateLock sync.RWMutex // Protects the internals of the hub from racey access
|
||||
}
|
||||
|
||||
func (hub *Hub) readPairings() error {
|
||||
hub.pairings = make(map[string]smartcardPairing)
|
||||
pairingFile, err := os.Open(filepath.Join(hub.datadir, "smartcards.json"))
|
||||
if err != nil {
|
||||
if os.IsNotExist(err) {
|
||||
return nil
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
pairingData, err := ioutil.ReadAll(pairingFile)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
var pairings []smartcardPairing
|
||||
if err := json.Unmarshal(pairingData, &pairings); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
for _, pairing := range pairings {
|
||||
hub.pairings[string(pairing.PublicKey)] = pairing
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (hub *Hub) writePairings() error {
|
||||
pairingFile, err := os.OpenFile(filepath.Join(hub.datadir, "smartcards.json"), os.O_RDWR|os.O_CREATE, 0755)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer pairingFile.Close()
|
||||
|
||||
pairings := make([]smartcardPairing, 0, len(hub.pairings))
|
||||
for _, pairing := range hub.pairings {
|
||||
pairings = append(pairings, pairing)
|
||||
}
|
||||
|
||||
pairingData, err := json.Marshal(pairings)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if _, err := pairingFile.Write(pairingData); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (hub *Hub) pairing(wallet *Wallet) *smartcardPairing {
|
||||
if pairing, ok := hub.pairings[string(wallet.PublicKey)]; ok {
|
||||
return &pairing
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (hub *Hub) setPairing(wallet *Wallet, pairing *smartcardPairing) error {
|
||||
if pairing == nil {
|
||||
delete(hub.pairings, string(wallet.PublicKey))
|
||||
} else {
|
||||
hub.pairings[string(wallet.PublicKey)] = *pairing
|
||||
}
|
||||
return hub.writePairings()
|
||||
}
|
||||
|
||||
// NewHub creates a new hardware wallet manager for smartcards.
|
||||
func NewHub(scheme string, datadir string) (*Hub, error) {
|
||||
context, err := pcsc.EstablishContext(pcsc.ScopeSystem)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
hub := &Hub{
|
||||
scheme: scheme,
|
||||
context: context,
|
||||
datadir: datadir,
|
||||
wallets: make(map[string]*Wallet),
|
||||
quit: make(chan chan error),
|
||||
}
|
||||
if err := hub.readPairings(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
hub.refreshWallets()
|
||||
return hub, nil
|
||||
}
|
||||
|
||||
// Wallets implements accounts.Backend, returning all the currently tracked smart
|
||||
// cards that appear to be hardware wallets.
|
||||
func (hub *Hub) Wallets() []accounts.Wallet {
|
||||
// Make sure the list of wallets is up to date
|
||||
hub.refreshWallets()
|
||||
|
||||
hub.stateLock.RLock()
|
||||
defer hub.stateLock.RUnlock()
|
||||
|
||||
cpy := make([]accounts.Wallet, 0, len(hub.wallets))
|
||||
for _, wallet := range hub.wallets {
|
||||
cpy = append(cpy, wallet)
|
||||
}
|
||||
sort.Sort(accounts.WalletsByURL(cpy))
|
||||
return cpy
|
||||
}
|
||||
|
||||
// refreshWallets scans the devices attached to the machine and updates the
|
||||
// list of wallets based on the found devices.
|
||||
func (hub *Hub) refreshWallets() {
|
||||
// Don't scan the USB like crazy if the user fetches wallets in a loop
|
||||
hub.stateLock.RLock()
|
||||
elapsed := time.Since(hub.refreshed)
|
||||
hub.stateLock.RUnlock()
|
||||
|
||||
if elapsed < refreshThrottling {
|
||||
return
|
||||
}
|
||||
// Retrieve all the smart card readers to check for cards
|
||||
readers, err := hub.context.ListReaders()
|
||||
if err != nil {
|
||||
// This is a perverted hack, the scard library returns an error if no card
|
||||
// readers are present instead of simply returning an empty list. We don't
|
||||
// want to fill the user's log with errors, so filter those out.
|
||||
if err.Error() != "scard: Cannot find a smart card reader." {
|
||||
log.Error("Failed to enumerate smart card readers", "err", err)
|
||||
return
|
||||
}
|
||||
}
|
||||
// Transform the current list of wallets into the new one
|
||||
hub.stateLock.Lock()
|
||||
|
||||
events := []accounts.WalletEvent{}
|
||||
seen := make(map[string]struct{})
|
||||
|
||||
for _, reader := range readers {
|
||||
// Mark the reader as present
|
||||
seen[reader] = struct{}{}
|
||||
|
||||
// If we already know about this card, skip to the next reader, otherwise clean up
|
||||
if wallet, ok := hub.wallets[reader]; ok {
|
||||
if err := wallet.ping(); err == nil {
|
||||
continue
|
||||
}
|
||||
wallet.Close()
|
||||
events = append(events, accounts.WalletEvent{Wallet: wallet, Kind: accounts.WalletDropped})
|
||||
delete(hub.wallets, reader)
|
||||
}
|
||||
// New card detected, try to connect to it
|
||||
card, err := hub.context.Connect(reader, pcsc.ShareShared, pcsc.ProtocolAny)
|
||||
if err != nil {
|
||||
log.Debug("Failed to open smart card", "reader", reader, "err", err)
|
||||
continue
|
||||
}
|
||||
wallet := NewWallet(hub, card)
|
||||
if err = wallet.connect(); err != nil {
|
||||
log.Debug("Failed to connect to smart card", "reader", reader, "err", err)
|
||||
card.Disconnect(pcsc.LeaveCard)
|
||||
continue
|
||||
}
|
||||
// Card connected, start tracking it among the wallets
|
||||
hub.wallets[reader] = wallet
|
||||
events = append(events, accounts.WalletEvent{Wallet: wallet, Kind: accounts.WalletArrived})
|
||||
}
|
||||
// Remove any wallets no longer present
|
||||
for reader, wallet := range hub.wallets {
|
||||
if _, ok := seen[reader]; !ok {
|
||||
wallet.Close()
|
||||
events = append(events, accounts.WalletEvent{Wallet: wallet, Kind: accounts.WalletDropped})
|
||||
delete(hub.wallets, reader)
|
||||
}
|
||||
}
|
||||
hub.refreshed = time.Now()
|
||||
hub.stateLock.Unlock()
|
||||
|
||||
for _, event := range events {
|
||||
hub.updateFeed.Send(event)
|
||||
}
|
||||
}
|
||||
|
||||
// Subscribe implements accounts.Backend, creating an async subscription to
|
||||
// receive notifications on the addition or removal of smart card wallets.
|
||||
func (hub *Hub) Subscribe(sink chan<- accounts.WalletEvent) event.Subscription {
|
||||
// We need the mutex to reliably start/stop the update loop
|
||||
hub.stateLock.Lock()
|
||||
defer hub.stateLock.Unlock()
|
||||
|
||||
// Subscribe the caller and track the subscriber count
|
||||
sub := hub.updateScope.Track(hub.updateFeed.Subscribe(sink))
|
||||
|
||||
// Subscribers require an active notification loop, start it
|
||||
if !hub.updating {
|
||||
hub.updating = true
|
||||
go hub.updater()
|
||||
}
|
||||
return sub
|
||||
}
|
||||
|
||||
// updater is responsible for maintaining an up-to-date list of wallets managed
|
||||
// by the smart card hub, and for firing wallet addition/removal events.
|
||||
func (hub *Hub) updater() {
|
||||
for {
|
||||
// TODO: Wait for a USB hotplug event (not supported yet) or a refresh timeout
|
||||
// <-hub.changes
|
||||
time.Sleep(refreshCycle)
|
||||
|
||||
// Run the wallet refresher
|
||||
hub.refreshWallets()
|
||||
|
||||
// If all our subscribers left, stop the updater
|
||||
hub.stateLock.Lock()
|
||||
if hub.updateScope.Count() == 0 {
|
||||
hub.updating = false
|
||||
hub.stateLock.Unlock()
|
||||
return
|
||||
}
|
||||
hub.stateLock.Unlock()
|
||||
}
|
||||
}
|
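A hedged sketch of instantiating the hub above and listing card wallets, based only on the vendored signatures shown here (NewHub, Scheme, Subscribe, Wallets). The data directory is an arbitrary placeholder; the hub only uses it to persist smartcards.json pairings.

```go
package main

import (
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/accounts"
	"github.com/ethereum/go-ethereum/accounts/scwallet"
)

func main() {
	// The datadir is only used to persist smartcards.json pairings.
	hub, err := scwallet.NewHub(scwallet.Scheme, "/tmp/geth-scwallet")
	if err != nil {
		log.Fatal(err)
	}

	// Subscribe to wallet arrival/departure events fired by the refresh loop.
	events := make(chan accounts.WalletEvent, 4)
	sub := hub.Subscribe(events)
	defer sub.Unsubscribe()

	for _, wallet := range hub.Wallets() {
		fmt.Println("found smartcard wallet:", wallet.URL())
	}
}
```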
346
vendor/github.com/ethereum/go-ethereum/accounts/scwallet/securechannel.go
generated
vendored
Normal file
346
vendor/github.com/ethereum/go-ethereum/accounts/scwallet/securechannel.go
generated
vendored
Normal file
@ -0,0 +1,346 @@
|
||||
// Copyright 2018 The go-ethereum Authors
|
||||
// This file is part of the go-ethereum library.
|
||||
//
|
||||
// The go-ethereum library is free software: you can redistribute it and/or modify
|
||||
// it under the terms of the GNU Lesser General Public License as published by
|
||||
// the Free Software Foundation, either version 3 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// The go-ethereum library is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU Lesser General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU Lesser General Public License
|
||||
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
package scwallet
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"crypto/aes"
|
||||
"crypto/cipher"
|
||||
"crypto/rand"
|
||||
"crypto/sha256"
|
||||
"crypto/sha512"
|
||||
"fmt"
|
||||
|
||||
"github.com/ethereum/go-ethereum/crypto"
|
||||
pcsc "github.com/gballet/go-libpcsclite"
|
||||
"github.com/wsddn/go-ecdh"
|
||||
"golang.org/x/crypto/pbkdf2"
|
||||
"golang.org/x/text/unicode/norm"
|
||||
)
|
||||
|
||||
const (
|
||||
maxPayloadSize = 223
|
||||
pairP1FirstStep = 0
|
||||
pairP1LastStep = 1
|
||||
|
||||
scSecretLength = 32
|
||||
scBlockSize = 16
|
||||
|
||||
insOpenSecureChannel = 0x10
|
||||
insMutuallyAuthenticate = 0x11
|
||||
insPair = 0x12
|
||||
insUnpair = 0x13
|
||||
|
||||
pairingSalt = "Keycard Pairing Password Salt"
|
||||
)
|
||||
|
||||
// SecureChannelSession enables secure communication with a hardware wallet.
|
||||
type SecureChannelSession struct {
|
||||
card *pcsc.Card // A handle to the smartcard for communication
|
||||
secret []byte // A shared secret generated from our ECDSA keys
|
||||
publicKey []byte // Our own ephemeral public key
|
||||
PairingKey []byte // A permanent shared secret for a pairing, if present
|
||||
sessionEncKey []byte // The current session encryption key
|
||||
sessionMacKey []byte // The current session MAC key
|
||||
iv []byte // The current IV
|
||||
PairingIndex uint8 // The pairing index
|
||||
}
|
||||
|
||||
// NewSecureChannelSession creates a new secure channel for the given card and public key.
|
||||
func NewSecureChannelSession(card *pcsc.Card, keyData []byte) (*SecureChannelSession, error) {
|
||||
// Generate an ECDSA keypair for ourselves
|
||||
gen := ecdh.NewEllipticECDH(crypto.S256())
|
||||
private, public, err := gen.GenerateKey(rand.Reader)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
cardPublic, ok := gen.Unmarshal(keyData)
|
||||
if !ok {
|
||||
return nil, fmt.Errorf("Could not unmarshal public key from card")
|
||||
}
|
||||
|
||||
secret, err := gen.GenerateSharedSecret(private, cardPublic)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return &SecureChannelSession{
|
||||
card: card,
|
||||
secret: secret,
|
||||
publicKey: gen.Marshal(public),
|
||||
}, nil
|
||||
}
|
||||
|
||||
// Pair establishes a new pairing with the smartcard.
|
||||
func (s *SecureChannelSession) Pair(pairingPassword []byte) error {
|
||||
secretHash := pbkdf2.Key(norm.NFKD.Bytes(pairingPassword), norm.NFKD.Bytes([]byte(pairingSalt)), 50000, 32, sha256.New)
|
||||
|
||||
challenge := make([]byte, 32)
|
||||
if _, err := rand.Read(challenge); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
response, err := s.pair(pairP1FirstStep, challenge)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
md := sha256.New()
|
||||
md.Write(secretHash[:])
|
||||
md.Write(challenge)
|
||||
|
||||
expectedCryptogram := md.Sum(nil)
|
||||
cardCryptogram := response.Data[:32]
|
||||
cardChallenge := response.Data[32:64]
|
||||
|
||||
if !bytes.Equal(expectedCryptogram, cardCryptogram) {
|
||||
return fmt.Errorf("Invalid card cryptogram %v != %v", expectedCryptogram, cardCryptogram)
|
||||
}
|
||||
|
||||
md.Reset()
|
||||
md.Write(secretHash[:])
|
||||
md.Write(cardChallenge)
|
||||
response, err = s.pair(pairP1LastStep, md.Sum(nil))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
md.Reset()
|
||||
md.Write(secretHash[:])
|
||||
md.Write(response.Data[1:])
|
||||
s.PairingKey = md.Sum(nil)
|
||||
s.PairingIndex = response.Data[0]
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// Unpair disestablishes an existing pairing.
|
||||
func (s *SecureChannelSession) Unpair() error {
|
||||
if s.PairingKey == nil {
|
||||
return fmt.Errorf("Cannot unpair: not paired")
|
||||
}
|
||||
|
||||
_, err := s.transmitEncrypted(claSCWallet, insUnpair, s.PairingIndex, 0, []byte{})
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
s.PairingKey = nil
|
||||
// Close channel
|
||||
s.iv = nil
|
||||
return nil
|
||||
}
|
||||
|
||||
// Open initializes the secure channel.
|
||||
func (s *SecureChannelSession) Open() error {
|
||||
if s.iv != nil {
|
||||
return fmt.Errorf("Session already opened")
|
||||
}
|
||||
|
||||
response, err := s.open()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// Generate the encryption/mac key by hashing our shared secret,
|
||||
// pairing key, and the first bytes returned from the Open APDU.
|
||||
md := sha512.New()
|
||||
md.Write(s.secret)
|
||||
md.Write(s.PairingKey)
|
||||
md.Write(response.Data[:scSecretLength])
|
||||
keyData := md.Sum(nil)
|
||||
s.sessionEncKey = keyData[:scSecretLength]
|
||||
s.sessionMacKey = keyData[scSecretLength : scSecretLength*2]
|
||||
|
||||
// The IV is the last bytes returned from the Open APDU.
|
||||
s.iv = response.Data[scSecretLength:]
|
||||
|
||||
return s.mutuallyAuthenticate()
|
||||
}
|
||||
|
||||
// mutuallyAuthenticate is an internal method to authenticate both ends of the
|
||||
// connection.
|
||||
func (s *SecureChannelSession) mutuallyAuthenticate() error {
|
||||
data := make([]byte, scSecretLength)
|
||||
if _, err := rand.Read(data); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
response, err := s.transmitEncrypted(claSCWallet, insMutuallyAuthenticate, 0, 0, data)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if response.Sw1 != 0x90 || response.Sw2 != 0x00 {
|
||||
return fmt.Errorf("Got unexpected response from MUTUALLY_AUTHENTICATE: 0x%x%x", response.Sw1, response.Sw2)
|
||||
}
|
||||
|
||||
if len(response.Data) != scSecretLength {
|
||||
return fmt.Errorf("Response from MUTUALLY_AUTHENTICATE was %d bytes, expected %d", len(response.Data), scSecretLength)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// open is an internal method that sends an open APDU.
|
||||
func (s *SecureChannelSession) open() (*responseAPDU, error) {
|
||||
return transmit(s.card, &commandAPDU{
|
||||
Cla: claSCWallet,
|
||||
Ins: insOpenSecureChannel,
|
||||
P1: s.PairingIndex,
|
||||
P2: 0,
|
||||
Data: s.publicKey,
|
||||
Le: 0,
|
||||
})
|
||||
}
|
||||
|
||||
// pair is an internal method that sends a pair APDU.
|
||||
func (s *SecureChannelSession) pair(p1 uint8, data []byte) (*responseAPDU, error) {
|
||||
return transmit(s.card, &commandAPDU{
|
||||
Cla: claSCWallet,
|
||||
Ins: insPair,
|
||||
P1: p1,
|
||||
P2: 0,
|
||||
Data: data,
|
||||
Le: 0,
|
||||
})
|
||||
}
|
||||
|
||||
// transmitEncrypted sends an encrypted message, and decrypts and returns the response.
|
||||
func (s *SecureChannelSession) transmitEncrypted(cla, ins, p1, p2 byte, data []byte) (*responseAPDU, error) {
|
||||
if s.iv == nil {
|
||||
return nil, fmt.Errorf("Channel not open")
|
||||
}
|
||||
|
||||
data, err := s.encryptAPDU(data)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
meta := [16]byte{cla, ins, p1, p2, byte(len(data) + scBlockSize)}
|
||||
if err = s.updateIV(meta[:], data); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
fulldata := make([]byte, len(s.iv)+len(data))
|
||||
copy(fulldata, s.iv)
|
||||
copy(fulldata[len(s.iv):], data)
|
||||
|
||||
response, err := transmit(s.card, &commandAPDU{
|
||||
Cla: cla,
|
||||
Ins: ins,
|
||||
P1: p1,
|
||||
P2: p2,
|
||||
Data: fulldata,
|
||||
})
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
rmeta := [16]byte{byte(len(response.Data))}
|
||||
rmac := response.Data[:len(s.iv)]
|
||||
rdata := response.Data[len(s.iv):]
|
||||
plainData, err := s.decryptAPDU(rdata)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if err = s.updateIV(rmeta[:], rdata); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if !bytes.Equal(s.iv, rmac) {
|
||||
return nil, fmt.Errorf("Invalid MAC in response")
|
||||
}
|
||||
|
||||
rapdu := &responseAPDU{}
|
||||
rapdu.deserialize(plainData)
|
||||
|
||||
if rapdu.Sw1 != sw1Ok {
|
||||
return nil, fmt.Errorf("Unexpected response status Cla=0x%x, Ins=0x%x, Sw=0x%x%x", cla, ins, rapdu.Sw1, rapdu.Sw2)
|
||||
}
|
||||
|
||||
return rapdu, nil
|
||||
}
|
||||
|
||||
// encryptAPDU is an internal method that serializes and encrypts an APDU.
|
||||
func (s *SecureChannelSession) encryptAPDU(data []byte) ([]byte, error) {
|
||||
if len(data) > maxPayloadSize {
|
||||
return nil, fmt.Errorf("Payload of %d bytes exceeds maximum of %d", len(data), maxPayloadSize)
|
||||
}
|
||||
data = pad(data, 0x80)
|
||||
|
||||
ret := make([]byte, len(data))
|
||||
|
||||
a, err := aes.NewCipher(s.sessionEncKey)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
crypter := cipher.NewCBCEncrypter(a, s.iv)
|
||||
crypter.CryptBlocks(ret, data)
|
||||
return ret, nil
|
||||
}
|
||||
|
||||
// pad applies message padding to a 16 byte boundary.
|
||||
func pad(data []byte, terminator byte) []byte {
|
||||
padded := make([]byte, (len(data)/16+1)*16)
|
||||
copy(padded, data)
|
||||
padded[len(data)] = terminator
|
||||
return padded
|
||||
}
|
||||
|
||||
// decryptAPDU is an internal method that decrypts and deserializes an APDU.
|
||||
func (s *SecureChannelSession) decryptAPDU(data []byte) ([]byte, error) {
|
||||
a, err := aes.NewCipher(s.sessionEncKey)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
ret := make([]byte, len(data))
|
||||
|
||||
crypter := cipher.NewCBCDecrypter(a, s.iv)
|
||||
crypter.CryptBlocks(ret, data)
|
||||
return unpad(ret, 0x80)
|
||||
}
|
||||
|
||||
// unpad strips padding from a message.
|
||||
func unpad(data []byte, terminator byte) ([]byte, error) {
|
||||
for i := 1; i <= 16; i++ {
|
||||
switch data[len(data)-i] {
|
||||
case 0:
|
||||
continue
|
||||
case terminator:
|
||||
return data[:len(data)-i], nil
|
||||
default:
|
||||
return nil, fmt.Errorf("Expected end of padding, got %d", data[len(data)-i])
|
||||
}
|
||||
}
|
||||
return nil, fmt.Errorf("Expected end of padding, got 0")
|
||||
}
|
||||
|
||||
// updateIV is an internal method that updates the initialization vector after
|
||||
// each message exchanged.
|
||||
func (s *SecureChannelSession) updateIV(meta, data []byte) error {
|
||||
data = pad(data, 0)
|
||||
a, err := aes.NewCipher(s.sessionMacKey)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
crypter := cipher.NewCBCEncrypter(a, make([]byte, 16))
|
||||
crypter.CryptBlocks(meta, meta)
|
||||
crypter.CryptBlocks(data, data)
|
||||
// The first 16 bytes of the last block is the MAC
|
||||
s.iv = data[len(data)-32 : len(data)-16]
|
||||
return nil
|
||||
}
|
1074
vendor/github.com/ethereum/go-ethereum/accounts/scwallet/wallet.go
generated
vendored
Normal file
1074
vendor/github.com/ethereum/go-ethereum/accounts/scwallet/wallet.go
generated
vendored
Normal file
File diff suppressed because it is too large
31
vendor/github.com/ethereum/go-ethereum/accounts/sort.go
generated
vendored
Normal file
31
vendor/github.com/ethereum/go-ethereum/accounts/sort.go
generated
vendored
Normal file
@ -0,0 +1,31 @@
|
||||
// Copyright 2018 The go-ethereum Authors
|
||||
// This file is part of the go-ethereum library.
|
||||
//
|
||||
// The go-ethereum library is free software: you can redistribute it and/or modify
|
||||
// it under the terms of the GNU Lesser General Public License as published by
|
||||
// the Free Software Foundation, either version 3 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// The go-ethereum library is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU Lesser General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU Lesser General Public License
|
||||
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
package accounts
|
||||
|
||||
// AccountsByURL implements sort.Interface for []Account based on the URL field.
|
||||
type AccountsByURL []Account
|
||||
|
||||
func (a AccountsByURL) Len() int { return len(a) }
|
||||
func (a AccountsByURL) Swap(i, j int) { a[i], a[j] = a[j], a[i] }
|
||||
func (a AccountsByURL) Less(i, j int) bool { return a[i].URL.Cmp(a[j].URL) < 0 }
|
||||
|
||||
// WalletsByURL implements sort.Interface for []Wallet based on the URL field.
|
||||
type WalletsByURL []Wallet
|
||||
|
||||
func (w WalletsByURL) Len() int { return len(w) }
|
||||
func (w WalletsByURL) Swap(i, j int) { w[i], w[j] = w[j], w[i] }
|
||||
func (w WalletsByURL) Less(i, j int) bool { return w[i].URL().Cmp(w[j].URL()) < 0 }
|
40
vendor/github.com/ethereum/go-ethereum/appveyor.yml
generated
vendored
Normal file
40
vendor/github.com/ethereum/go-ethereum/appveyor.yml
generated
vendored
Normal file
@ -0,0 +1,40 @@
|
||||
os: Visual Studio 2015
|
||||
|
||||
# Clone directly into GOPATH.
|
||||
clone_folder: C:\gopath\src\github.com\ethereum\go-ethereum
|
||||
clone_depth: 5
|
||||
version: "{branch}.{build}"
|
||||
environment:
|
||||
global:
|
||||
GOPATH: C:\gopath
|
||||
CC: gcc.exe
|
||||
matrix:
|
||||
- GETH_ARCH: amd64
|
||||
MSYS2_ARCH: x86_64
|
||||
MSYS2_BITS: 64
|
||||
MSYSTEM: MINGW64
|
||||
PATH: C:\msys64\mingw64\bin\;C:\Program Files (x86)\NSIS\;%PATH%
|
||||
- GETH_ARCH: 386
|
||||
MSYS2_ARCH: i686
|
||||
MSYS2_BITS: 32
|
||||
MSYSTEM: MINGW32
|
||||
PATH: C:\msys64\mingw32\bin\;C:\Program Files (x86)\NSIS\;%PATH%
|
||||
|
||||
install:
|
||||
- git submodule update --init
|
||||
- rmdir C:\go /s /q
|
||||
- appveyor DownloadFile https://dl.google.com/go/go1.12.3.windows-%GETH_ARCH%.zip
|
||||
- 7z x go1.12.3.windows-%GETH_ARCH%.zip -y -oC:\ > NUL
|
||||
- go version
|
||||
- gcc --version
|
||||
|
||||
build_script:
|
||||
- go run build\ci.go install
|
||||
|
||||
after_build:
|
||||
- go run build\ci.go archive -type zip -signer WINDOWS_SIGNING_KEY -upload gethstore/builds
|
||||
- go run build\ci.go nsis -signer WINDOWS_SIGNING_KEY -upload gethstore/builds
|
||||
|
||||
test_script:
|
||||
- set CGO_ENABLED=1
|
||||
- go run build\ci.go test -coverage
|
983
vendor/github.com/ethereum/go-ethereum/cmd/clef/main.go
generated
vendored
Normal file
983
vendor/github.com/ethereum/go-ethereum/cmd/clef/main.go
generated
vendored
Normal file
@ -0,0 +1,983 @@
|
||||
// Copyright 2018 The go-ethereum Authors
|
||||
// This file is part of go-ethereum.
|
||||
//
|
||||
// go-ethereum is free software: you can redistribute it and/or modify
|
||||
// it under the terms of the GNU General Public License as published by
|
||||
// the Free Software Foundation, either version 3 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// go-ethereum is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU General Public License
|
||||
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
package main
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"context"
|
||||
"crypto/rand"
|
||||
"crypto/sha256"
|
||||
"encoding/hex"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"io"
|
||||
"io/ioutil"
|
||||
"math/big"
|
||||
"os"
|
||||
"os/signal"
|
||||
"os/user"
|
||||
"path/filepath"
|
||||
"runtime"
|
||||
"strings"
|
||||
|
||||
"github.com/ethereum/go-ethereum/accounts"
|
||||
"github.com/ethereum/go-ethereum/accounts/keystore"
|
||||
"github.com/ethereum/go-ethereum/cmd/utils"
|
||||
"github.com/ethereum/go-ethereum/common"
|
||||
"github.com/ethereum/go-ethereum/common/hexutil"
|
||||
"github.com/ethereum/go-ethereum/console"
|
||||
"github.com/ethereum/go-ethereum/core/types"
|
||||
"github.com/ethereum/go-ethereum/crypto"
|
||||
"github.com/ethereum/go-ethereum/internal/ethapi"
|
||||
"github.com/ethereum/go-ethereum/log"
|
||||
"github.com/ethereum/go-ethereum/node"
|
||||
"github.com/ethereum/go-ethereum/params"
|
||||
"github.com/ethereum/go-ethereum/rlp"
|
||||
"github.com/ethereum/go-ethereum/rpc"
|
||||
"github.com/ethereum/go-ethereum/signer/core"
|
||||
"github.com/ethereum/go-ethereum/signer/fourbyte"
|
||||
"github.com/ethereum/go-ethereum/signer/rules"
|
||||
"github.com/ethereum/go-ethereum/signer/storage"
|
||||
"gopkg.in/urfave/cli.v1"
|
||||
)
|
||||
|
||||
const legalWarning = `
|
||||
WARNING!
|
||||
|
||||
Clef is an account management tool. It may, like any software, contain bugs.
|
||||
|
||||
Please take care to
|
||||
- backup your keystore files,
|
||||
- verify that the keystore(s) can be opened with your password.
|
||||
|
||||
Clef is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
|
||||
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
|
||||
PURPOSE. See the GNU General Public License for more details.
|
||||
`
|
||||
|
||||
var (
|
||||
logLevelFlag = cli.IntFlag{
|
||||
Name: "loglevel",
|
||||
Value: 4,
|
||||
Usage: "log level to emit to the screen",
|
||||
}
|
||||
advancedMode = cli.BoolFlag{
|
||||
Name: "advanced",
|
||||
Usage: "If enabled, issues warnings instead of rejections for suspicious requests. Default off",
|
||||
}
|
||||
keystoreFlag = cli.StringFlag{
|
||||
Name: "keystore",
|
||||
Value: filepath.Join(node.DefaultDataDir(), "keystore"),
|
||||
Usage: "Directory for the keystore",
|
||||
}
|
||||
configdirFlag = cli.StringFlag{
|
||||
Name: "configdir",
|
||||
Value: DefaultConfigDir(),
|
||||
Usage: "Directory for Clef configuration",
|
||||
}
|
||||
chainIdFlag = cli.Int64Flag{
|
||||
Name: "chainid",
|
||||
Value: params.MainnetChainConfig.ChainID.Int64(),
|
||||
Usage: "Chain id to use for signing (1=mainnet, 3=ropsten, 4=rinkeby, 5=Goerli)",
|
||||
}
|
||||
rpcPortFlag = cli.IntFlag{
|
||||
Name: "rpcport",
|
||||
Usage: "HTTP-RPC server listening port",
|
||||
Value: node.DefaultHTTPPort + 5,
|
||||
}
|
||||
signerSecretFlag = cli.StringFlag{
|
||||
Name: "signersecret",
|
||||
Usage: "A file containing the (encrypted) master seed to encrypt Clef data, e.g. keystore credentials and ruleset hash",
|
||||
}
|
||||
customDBFlag = cli.StringFlag{
|
||||
Name: "4bytedb-custom",
|
||||
Usage: "File used for writing new 4byte-identifiers submitted via API",
|
||||
Value: "./4byte-custom.json",
|
||||
}
|
||||
auditLogFlag = cli.StringFlag{
|
||||
Name: "auditlog",
|
||||
Usage: "File used to emit audit logs. Set to \"\" to disable",
|
||||
Value: "audit.log",
|
||||
}
|
||||
ruleFlag = cli.StringFlag{
|
||||
Name: "rules",
|
||||
Usage: "Enable rule-engine",
|
||||
Value: "",
|
||||
}
|
||||
stdiouiFlag = cli.BoolFlag{
|
||||
Name: "stdio-ui",
|
||||
Usage: "Use STDIN/STDOUT as a channel for an external UI. " +
|
||||
"This means that an STDIN/STDOUT is used for RPC-communication with a e.g. a graphical user " +
|
||||
"interface, and can be used when Clef is started by an external process.",
|
||||
}
|
||||
testFlag = cli.BoolFlag{
|
||||
Name: "stdio-ui-test",
|
||||
Usage: "Mechanism to test interface between Clef and UI. Requires 'stdio-ui'.",
|
||||
}
|
||||
app = cli.NewApp()
|
||||
initCommand = cli.Command{
|
||||
Action: utils.MigrateFlags(initializeSecrets),
|
||||
Name: "init",
|
||||
Usage: "Initialize the signer, generate secret storage",
|
||||
ArgsUsage: "",
|
||||
Flags: []cli.Flag{
|
||||
logLevelFlag,
|
||||
configdirFlag,
|
||||
},
|
||||
Description: `
|
||||
The init command generates a master seed which Clef can use to store credentials and data needed for
|
||||
the rule-engine to work.`,
|
||||
}
|
||||
attestCommand = cli.Command{
|
||||
Action: utils.MigrateFlags(attestFile),
|
||||
Name: "attest",
|
||||
Usage: "Attest that a js-file is to be used",
|
||||
ArgsUsage: "<sha256sum>",
|
||||
Flags: []cli.Flag{
|
||||
logLevelFlag,
|
||||
configdirFlag,
|
||||
signerSecretFlag,
|
||||
},
|
||||
Description: `
|
||||
The attest command stores the sha256 of the rule.js-file that you want to use for automatic processing of
|
||||
incoming requests.
|
||||
|
||||
Whenever you make an edit to the rule file, you need to use attestation to tell
|
||||
Clef that the file is 'safe' to execute.`,
|
||||
}
|
||||
|
||||
setCredentialCommand = cli.Command{
|
||||
Action: utils.MigrateFlags(setCredential),
|
||||
Name: "setpw",
|
||||
Usage: "Store a credential for a keystore file",
|
||||
ArgsUsage: "<address>",
|
||||
Flags: []cli.Flag{
|
||||
logLevelFlag,
|
||||
configdirFlag,
|
||||
signerSecretFlag,
|
||||
},
|
||||
Description: `
|
||||
The setpw command stores a password for a given address (keyfile). If you enter a blank passphrase, it will
|
||||
remove any stored credential for that address (keyfile)
|
||||
`}
|
||||
gendocCommand = cli.Command{
|
||||
Action: GenDoc,
|
||||
Name: "gendoc",
|
||||
Usage: "Generate documentation about json-rpc format",
|
||||
Description: `
|
||||
The gendoc generates example structures of the json-rpc communication types.
|
||||
`}
|
||||
)
|
||||
|
||||
func init() {
|
||||
app.Name = "Clef"
|
||||
app.Usage = "Manage Ethereum account operations"
|
||||
app.Flags = []cli.Flag{
|
||||
logLevelFlag,
|
||||
keystoreFlag,
|
||||
configdirFlag,
|
||||
chainIdFlag,
|
||||
utils.LightKDFFlag,
|
||||
utils.NoUSBFlag,
|
||||
utils.RPCListenAddrFlag,
|
||||
utils.RPCVirtualHostsFlag,
|
||||
utils.IPCDisabledFlag,
|
||||
utils.IPCPathFlag,
|
||||
utils.RPCEnabledFlag,
|
||||
rpcPortFlag,
|
||||
signerSecretFlag,
|
||||
customDBFlag,
|
||||
auditLogFlag,
|
||||
ruleFlag,
|
||||
stdiouiFlag,
|
||||
testFlag,
|
||||
advancedMode,
|
||||
}
|
||||
app.Action = signer
|
||||
app.Commands = []cli.Command{initCommand, attestCommand, setCredentialCommand, gendocCommand}
|
||||
|
||||
}
|
||||
func main() {
|
||||
if err := app.Run(os.Args); err != nil {
|
||||
fmt.Fprintln(os.Stderr, err)
|
||||
os.Exit(1)
|
||||
}
|
||||
}
|
||||
|
||||
func initializeSecrets(c *cli.Context) error {
|
||||
if err := initialize(c); err != nil {
|
||||
return err
|
||||
}
|
||||
configDir := c.GlobalString(configdirFlag.Name)
|
||||
|
||||
masterSeed := make([]byte, 256)
|
||||
num, err := io.ReadFull(rand.Reader, masterSeed)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if num != len(masterSeed) {
|
||||
return fmt.Errorf("failed to read enough random")
|
||||
}
|
||||
|
||||
n, p := keystore.StandardScryptN, keystore.StandardScryptP
|
||||
if c.GlobalBool(utils.LightKDFFlag.Name) {
|
||||
n, p = keystore.LightScryptN, keystore.LightScryptP
|
||||
}
|
||||
text := "The master seed of clef is locked with a password. Please give a password. Do not forget this password."
|
||||
var password string
|
||||
for {
|
||||
password = getPassPhrase(text, true)
|
||||
if err := core.ValidatePasswordFormat(password); err != nil {
|
||||
fmt.Printf("invalid password: %v\n", err)
|
||||
} else {
|
||||
break
|
||||
}
|
||||
}
|
||||
cipherSeed, err := encryptSeed(masterSeed, []byte(password), n, p)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to encrypt master seed: %v", err)
|
||||
}
|
||||
|
||||
err = os.Mkdir(configDir, 0700)
|
||||
if err != nil && !os.IsExist(err) {
|
||||
return err
|
||||
}
|
||||
location := filepath.Join(configDir, "masterseed.json")
|
||||
if _, err := os.Stat(location); err == nil {
|
||||
return fmt.Errorf("file %v already exists, will not overwrite", location)
|
||||
}
|
||||
err = ioutil.WriteFile(location, cipherSeed, 0400)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
fmt.Printf("A master seed has been generated into %s\n", location)
|
||||
fmt.Printf(`
This is required to be able to store credentials, such as :
* Passwords for keystores (used by rule engine)
* Storage for javascript rules
* Hash of rule-file

You should treat that file with utmost secrecy, and make a backup of it.
NOTE: This file does not contain your accounts. Those need to be backed up separately!

`)
|
||||
return nil
|
||||
}
|
||||
func attestFile(ctx *cli.Context) error {
|
||||
if len(ctx.Args()) < 1 {
|
||||
utils.Fatalf("This command requires an argument.")
|
||||
}
|
||||
if err := initialize(ctx); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
stretchedKey, err := readMasterKey(ctx, nil)
|
||||
if err != nil {
|
||||
utils.Fatalf(err.Error())
|
||||
}
|
||||
configDir := ctx.GlobalString(configdirFlag.Name)
|
||||
vaultLocation := filepath.Join(configDir, common.Bytes2Hex(crypto.Keccak256([]byte("vault"), stretchedKey)[:10]))
|
||||
confKey := crypto.Keccak256([]byte("config"), stretchedKey)
|
||||
|
||||
// Initialize the encrypted storages
|
||||
configStorage := storage.NewAESEncryptedStorage(filepath.Join(vaultLocation, "config.json"), confKey)
|
||||
val := ctx.Args().First()
|
||||
configStorage.Put("ruleset_sha256", val)
|
||||
log.Info("Ruleset attestation updated", "sha256", val)
|
||||
return nil
|
||||
}
|
||||
|
||||
func setCredential(ctx *cli.Context) error {
|
||||
if len(ctx.Args()) < 1 {
|
||||
utils.Fatalf("This command requires an address to be passed as an argument.")
|
||||
}
|
||||
if err := initialize(ctx); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
address := ctx.Args().First()
|
||||
password := getPassPhrase("Enter a passphrase to store with this address.", true)
|
||||
|
||||
stretchedKey, err := readMasterKey(ctx, nil)
|
||||
if err != nil {
|
||||
utils.Fatalf(err.Error())
|
||||
}
|
||||
configDir := ctx.GlobalString(configdirFlag.Name)
|
||||
vaultLocation := filepath.Join(configDir, common.Bytes2Hex(crypto.Keccak256([]byte("vault"), stretchedKey)[:10]))
|
||||
pwkey := crypto.Keccak256([]byte("credentials"), stretchedKey)
|
||||
|
||||
// Initialize the encrypted storages
|
||||
pwStorage := storage.NewAESEncryptedStorage(filepath.Join(vaultLocation, "credentials.json"), pwkey)
|
||||
pwStorage.Put(address, password)
|
||||
log.Info("Credential store updated", "key", address)
|
||||
return nil
|
||||
}
|
||||
|
||||
func initialize(c *cli.Context) error {
|
||||
// Set up the logger to print everything
|
||||
logOutput := os.Stdout
|
||||
if c.GlobalBool(stdiouiFlag.Name) {
|
||||
logOutput = os.Stderr
|
||||
// If using the stdioui, we can't do the 'confirm'-flow
|
||||
fmt.Fprintf(logOutput, legalWarning)
|
||||
} else {
|
||||
if !confirm(legalWarning) {
|
||||
return fmt.Errorf("aborted by user")
|
||||
}
|
||||
}
|
||||
|
||||
log.Root().SetHandler(log.LvlFilterHandler(log.Lvl(c.Int(logLevelFlag.Name)), log.StreamHandler(logOutput, log.TerminalFormat(true))))
|
||||
return nil
|
||||
}
|
||||
|
||||
func signer(c *cli.Context) error {
|
||||
if err := initialize(c); err != nil {
|
||||
return err
|
||||
}
|
||||
var (
|
||||
ui core.UIClientAPI
|
||||
)
|
||||
if c.GlobalBool(stdiouiFlag.Name) {
|
||||
log.Info("Using stdin/stdout as UI-channel")
|
||||
ui = core.NewStdIOUI()
|
||||
} else {
|
||||
log.Info("Using CLI as UI-channel")
|
||||
ui = core.NewCommandlineUI()
|
||||
}
|
||||
// 4bytedb data
|
||||
fourByteLocal := c.GlobalString(customDBFlag.Name)
|
||||
db, err := fourbyte.NewWithFile(fourByteLocal)
|
||||
if err != nil {
|
||||
utils.Fatalf(err.Error())
|
||||
}
|
||||
embeds, locals := db.Size()
|
||||
log.Info("Loaded 4byte database", "embeds", embeds, "locals", locals, "local", fourByteLocal)
|
||||
|
||||
var (
|
||||
api core.ExternalAPI
|
||||
pwStorage storage.Storage = &storage.NoStorage{}
|
||||
)
|
||||
|
||||
configDir := c.GlobalString(configdirFlag.Name)
|
||||
if stretchedKey, err := readMasterKey(c, ui); err != nil {
|
||||
log.Info("No master seed provided, rules disabled", "error", err)
|
||||
} else {
|
||||
vaultLocation := filepath.Join(configDir, common.Bytes2Hex(crypto.Keccak256([]byte("vault"), stretchedKey)[:10]))
|
||||
|
||||
// Generate domain specific keys
|
||||
pwkey := crypto.Keccak256([]byte("credentials"), stretchedKey)
|
||||
jskey := crypto.Keccak256([]byte("jsstorage"), stretchedKey)
|
||||
confkey := crypto.Keccak256([]byte("config"), stretchedKey)
|
||||
|
||||
// Initialize the encrypted storages
|
||||
pwStorage = storage.NewAESEncryptedStorage(filepath.Join(vaultLocation, "credentials.json"), pwkey)
|
||||
jsStorage := storage.NewAESEncryptedStorage(filepath.Join(vaultLocation, "jsstorage.json"), jskey)
|
||||
configStorage := storage.NewAESEncryptedStorage(filepath.Join(vaultLocation, "config.json"), confkey)
|
||||
|
||||
//Do we have a rule-file?
|
||||
if ruleFile := c.GlobalString(ruleFlag.Name); ruleFile != "" {
|
||||
ruleJS, err := ioutil.ReadFile(c.GlobalString(ruleFile))
|
||||
if err != nil {
|
||||
log.Info("Could not load rulefile, rules not enabled", "file", "rulefile")
|
||||
} else {
|
||||
shasum := sha256.Sum256(ruleJS)
|
||||
foundShaSum := hex.EncodeToString(shasum[:])
|
||||
storedShasum := configStorage.Get("ruleset_sha256")
|
||||
if storedShasum != foundShaSum {
|
||||
log.Info("Could not validate ruleset hash, rules not enabled", "got", foundShaSum, "expected", storedShasum)
|
||||
} else {
|
||||
// Initialize rules
|
||||
ruleEngine, err := rules.NewRuleEvaluator(ui, jsStorage)
|
||||
if err != nil {
|
||||
utils.Fatalf(err.Error())
|
||||
}
|
||||
ruleEngine.Init(string(ruleJS))
|
||||
ui = ruleEngine
|
||||
log.Info("Rule engine configured", "file", c.String(ruleFlag.Name))
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
var (
|
||||
chainId = c.GlobalInt64(chainIdFlag.Name)
|
||||
ksLoc = c.GlobalString(keystoreFlag.Name)
|
||||
lightKdf = c.GlobalBool(utils.LightKDFFlag.Name)
|
||||
advanced = c.GlobalBool(advancedMode.Name)
|
||||
nousb = c.GlobalBool(utils.NoUSBFlag.Name)
|
||||
)
|
||||
log.Info("Starting signer", "chainid", chainId, "keystore", ksLoc,
|
||||
"light-kdf", lightKdf, "advanced", advanced)
|
||||
am := core.StartClefAccountManager(ksLoc, nousb, lightKdf)
|
||||
apiImpl := core.NewSignerAPI(am, chainId, nousb, ui, db, advanced, pwStorage)
|
||||
|
||||
// Establish the bidirectional communication, by creating a new UI backend and registering
|
||||
// it with the UI.
|
||||
ui.RegisterUIServer(core.NewUIServerAPI(apiImpl))
|
||||
api = apiImpl
|
||||
// Audit logging
|
||||
if logfile := c.GlobalString(auditLogFlag.Name); logfile != "" {
|
||||
api, err = core.NewAuditLogger(logfile, api)
|
||||
if err != nil {
|
||||
utils.Fatalf(err.Error())
|
||||
}
|
||||
log.Info("Audit logs configured", "file", logfile)
|
||||
}
|
||||
// register signer API with server
|
||||
var (
|
||||
extapiURL = "n/a"
|
||||
ipcapiURL = "n/a"
|
||||
)
|
||||
rpcAPI := []rpc.API{
|
||||
{
|
||||
Namespace: "account",
|
||||
Public: true,
|
||||
Service: api,
|
||||
Version: "1.0"},
|
||||
}
|
||||
if c.GlobalBool(utils.RPCEnabledFlag.Name) {
|
||||
|
||||
vhosts := splitAndTrim(c.GlobalString(utils.RPCVirtualHostsFlag.Name))
|
||||
cors := splitAndTrim(c.GlobalString(utils.RPCCORSDomainFlag.Name))
|
||||
|
||||
// start http server
|
||||
httpEndpoint := fmt.Sprintf("%s:%d", c.GlobalString(utils.RPCListenAddrFlag.Name), c.Int(rpcPortFlag.Name))
|
||||
listener, _, err := rpc.StartHTTPEndpoint(httpEndpoint, rpcAPI, []string{"account"}, cors, vhosts, rpc.DefaultHTTPTimeouts)
|
||||
if err != nil {
|
||||
utils.Fatalf("Could not start RPC api: %v", err)
|
||||
}
|
||||
extapiURL = fmt.Sprintf("http://%s", httpEndpoint)
|
||||
log.Info("HTTP endpoint opened", "url", extapiURL)
|
||||
|
||||
defer func() {
|
||||
listener.Close()
|
||||
log.Info("HTTP endpoint closed", "url", httpEndpoint)
|
||||
}()
|
||||
|
||||
}
|
||||
if !c.GlobalBool(utils.IPCDisabledFlag.Name) {
|
||||
if c.IsSet(utils.IPCPathFlag.Name) {
|
||||
ipcapiURL = c.GlobalString(utils.IPCPathFlag.Name)
|
||||
} else {
|
||||
ipcapiURL = filepath.Join(configDir, "clef.ipc")
|
||||
}
|
||||
|
||||
listener, _, err := rpc.StartIPCEndpoint(ipcapiURL, rpcAPI)
|
||||
if err != nil {
|
||||
utils.Fatalf("Could not start IPC api: %v", err)
|
||||
}
|
||||
log.Info("IPC endpoint opened", "url", ipcapiURL)
|
||||
defer func() {
|
||||
listener.Close()
|
||||
log.Info("IPC endpoint closed", "url", ipcapiURL)
|
||||
}()
|
||||
|
||||
}
|
||||
|
||||
if c.GlobalBool(testFlag.Name) {
|
||||
log.Info("Performing UI test")
|
||||
go testExternalUI(apiImpl)
|
||||
}
|
||||
ui.OnSignerStartup(core.StartupInfo{
|
||||
Info: map[string]interface{}{
|
||||
"extapi_version": core.ExternalAPIVersion,
|
||||
"intapi_version": core.InternalAPIVersion,
|
||||
"extapi_http": extapiURL,
|
||||
"extapi_ipc": ipcapiURL,
|
||||
},
|
||||
})
|
||||
|
||||
abortChan := make(chan os.Signal)
|
||||
signal.Notify(abortChan, os.Interrupt)
|
||||
|
||||
sig := <-abortChan
|
||||
log.Info("Exiting...", "signal", sig)
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// splitAndTrim splits input separated by a comma
|
||||
// and trims excessive white space from the substrings.
|
||||
func splitAndTrim(input string) []string {
|
||||
result := strings.Split(input, ",")
|
||||
for i, r := range result {
|
||||
result[i] = strings.TrimSpace(r)
|
||||
}
|
||||
return result
|
||||
}
|
||||
|
||||
// DefaultConfigDir is the default config directory to use for the vaults and other
|
||||
// persistence requirements.
|
||||
func DefaultConfigDir() string {
|
||||
// Try to place the data folder in the user's home dir
|
||||
home := homeDir()
|
||||
if home != "" {
|
||||
if runtime.GOOS == "darwin" {
|
||||
return filepath.Join(home, "Library", "Signer")
|
||||
} else if runtime.GOOS == "windows" {
|
||||
appdata := os.Getenv("APPDATA")
|
||||
if appdata != "" {
|
||||
return filepath.Join(appdata, "Signer")
|
||||
} else {
|
||||
return filepath.Join(home, "AppData", "Roaming", "Signer")
|
||||
}
|
||||
} else {
|
||||
return filepath.Join(home, ".clef")
|
||||
}
|
||||
}
|
||||
// As we cannot guess a stable location, return empty and handle later
|
||||
return ""
|
||||
}
|
||||
|
||||
func homeDir() string {
|
||||
if home := os.Getenv("HOME"); home != "" {
|
||||
return home
|
||||
}
|
||||
if usr, err := user.Current(); err == nil {
|
||||
return usr.HomeDir
|
||||
}
|
||||
return ""
|
||||
}
|
||||
func readMasterKey(ctx *cli.Context, ui core.UIClientAPI) ([]byte, error) {
|
||||
var (
|
||||
file string
|
||||
configDir = ctx.GlobalString(configdirFlag.Name)
|
||||
)
|
||||
if ctx.GlobalIsSet(signerSecretFlag.Name) {
|
||||
file = ctx.GlobalString(signerSecretFlag.Name)
|
||||
} else {
|
||||
file = filepath.Join(configDir, "masterseed.json")
|
||||
}
|
||||
if err := checkFile(file); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
cipherKey, err := ioutil.ReadFile(file)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
var password string
|
||||
// If ui is not nil, get the password from ui.
|
||||
if ui != nil {
|
||||
resp, err := ui.OnInputRequired(core.UserInputRequest{
|
||||
Title: "Master Password",
|
||||
Prompt: "Please enter the password to decrypt the master seed",
|
||||
IsPassword: true})
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
password = resp.Text
|
||||
} else {
|
||||
password = getPassPhrase("Decrypt master seed of clef", false)
|
||||
}
|
||||
masterSeed, err := decryptSeed(cipherKey, password)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to decrypt the master seed of clef")
|
||||
}
|
||||
if len(masterSeed) < 256 {
|
||||
return nil, fmt.Errorf("master seed of insufficient length, expected >255 bytes, got %d", len(masterSeed))
|
||||
}
|
||||
|
||||
// Create vault location
|
||||
vaultLocation := filepath.Join(configDir, common.Bytes2Hex(crypto.Keccak256([]byte("vault"), masterSeed)[:10]))
|
||||
err = os.Mkdir(vaultLocation, 0700)
|
||||
if err != nil && !os.IsExist(err) {
|
||||
return nil, err
|
||||
}
|
||||
return masterSeed, nil
|
||||
}
|
||||
|
||||
// checkFile is a convenience function to check if a file
|
||||
// * exists
|
||||
// * is mode 0400
|
||||
func checkFile(filename string) error {
|
||||
info, err := os.Stat(filename)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed stat on %s: %v", filename, err)
|
||||
}
|
||||
// Check the unix permission bits
|
||||
if info.Mode().Perm()&0377 != 0 {
|
||||
return fmt.Errorf("file (%v) has insecure file permissions (%v)", filename, info.Mode().String())
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// confirm displays a text and asks for user confirmation
|
||||
func confirm(text string) bool {
|
||||
fmt.Printf(text)
|
||||
fmt.Printf("\nEnter 'ok' to proceed:\n>")
|
||||
|
||||
text, err := bufio.NewReader(os.Stdin).ReadString('\n')
|
||||
if err != nil {
|
||||
log.Crit("Failed to read user input", "err", err)
|
||||
}
|
||||
|
||||
if text := strings.TrimSpace(text); text == "ok" {
|
||||
return true
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
func testExternalUI(api *core.SignerAPI) {
|
||||
|
||||
ctx := context.WithValue(context.Background(), "remote", "clef binary")
|
||||
ctx = context.WithValue(ctx, "scheme", "in-proc")
|
||||
ctx = context.WithValue(ctx, "local", "main")
|
||||
errs := make([]string, 0)
|
||||
|
||||
a := common.HexToAddress("0xdeadbeef000000000000000000000000deadbeef")
|
||||
|
||||
queryUser := func(q string) string {
|
||||
resp, err := api.UI.OnInputRequired(core.UserInputRequest{
|
||||
Title: "Testing",
|
||||
Prompt: q,
|
||||
})
|
||||
if err != nil {
|
||||
errs = append(errs, err.Error())
|
||||
}
|
||||
return resp.Text
|
||||
}
|
||||
expectResponse := func(testcase, question, expect string) {
|
||||
if got := queryUser(question); got != expect {
|
||||
errs = append(errs, fmt.Sprintf("%s: got %v, expected %v", testcase, got, expect))
|
||||
}
|
||||
}
|
||||
expectApprove := func(testcase string, err error) {
|
||||
if err == nil || err == accounts.ErrUnknownAccount {
|
||||
return
|
||||
}
|
||||
errs = append(errs, fmt.Sprintf("%v: expected no error, got %v", testcase, err.Error()))
|
||||
}
|
||||
expectDeny := func(testcase string, err error) {
|
||||
if err == nil || err != core.ErrRequestDenied {
|
||||
errs = append(errs, fmt.Sprintf("%v: expected ErrRequestDenied, got %v", testcase, err))
|
||||
}
|
||||
}
|
||||
|
||||
// Test display of info and error
|
||||
{
|
||||
api.UI.ShowInfo("If you see this message, enter 'yes' to next question")
|
||||
expectResponse("showinfo", "Did you see the message? [yes/no]", "yes")
|
||||
api.UI.ShowError("If you see this message, enter 'yes' to the next question")
|
||||
expectResponse("showerror", "Did you see the message? [yes/no]", "yes")
|
||||
}
|
||||
{ // Sign data test - clique header
|
||||
api.UI.ShowInfo("Please approve the next request for signing a clique header")
|
||||
cliqueHeader := types.Header{
|
||||
common.HexToHash("0000H45H"),
|
||||
common.HexToHash("0000H45H"),
|
||||
common.HexToAddress("0000H45H"),
|
||||
common.HexToHash("0000H00H"),
|
||||
common.HexToHash("0000H45H"),
|
||||
common.HexToHash("0000H45H"),
|
||||
types.Bloom{},
|
||||
big.NewInt(1337),
|
||||
big.NewInt(1337),
|
||||
1338,
|
||||
1338,
|
||||
1338,
|
||||
[]byte("Extra data Extra data Extra data Extra data Extra data Extra data Extra data Extra data"),
|
||||
common.HexToHash("0x0000H45H"),
|
||||
types.BlockNonce{},
|
||||
}
|
||||
cliqueRlp, err := rlp.EncodeToBytes(cliqueHeader)
|
||||
if err != nil {
|
||||
utils.Fatalf("Should not error: %v", err)
|
||||
}
|
||||
addr, _ := common.NewMixedcaseAddressFromString("0x0011223344556677889900112233445566778899")
|
||||
_, err = api.SignData(ctx, accounts.MimetypeClique, *addr, hexutil.Encode(cliqueRlp))
|
||||
expectApprove("signdata - clique header", err)
|
||||
}
|
||||
{ // Sign data test - plain text
|
||||
api.UI.ShowInfo("Please approve the next request for signing text")
|
||||
addr, _ := common.NewMixedcaseAddressFromString("0x0011223344556677889900112233445566778899")
|
||||
_, err := api.SignData(ctx, accounts.MimetypeTextPlain, *addr, hexutil.Encode([]byte("hello world")))
|
||||
expectApprove("signdata - text", err)
|
||||
}
|
||||
{ // Sign data test - plain text reject
|
||||
api.UI.ShowInfo("Please deny the next request for signing text")
|
||||
addr, _ := common.NewMixedcaseAddressFromString("0x0011223344556677889900112233445566778899")
|
||||
_, err := api.SignData(ctx, accounts.MimetypeTextPlain, *addr, hexutil.Encode([]byte("hello world")))
|
||||
expectDeny("signdata - text", err)
|
||||
}
|
||||
{ // Sign transaction
|
||||
|
||||
api.UI.ShowInfo("Please reject next transaction")
|
||||
data := hexutil.Bytes([]byte{})
|
||||
to := common.NewMixedcaseAddress(a)
|
||||
tx := core.SendTxArgs{
|
||||
Data: &data,
|
||||
Nonce: 0x1,
|
||||
Value: hexutil.Big(*big.NewInt(6)),
|
||||
From: common.NewMixedcaseAddress(a),
|
||||
To: &to,
|
||||
GasPrice: hexutil.Big(*big.NewInt(5)),
|
||||
Gas: 1000,
|
||||
Input: nil,
|
||||
}
|
||||
_, err := api.SignTransaction(ctx, tx, nil)
|
||||
expectDeny("signtransaction [1]", err)
|
||||
expectResponse("signtransaction [2]", "Did you see any warnings for the last transaction? (yes/no)", "no")
|
||||
}
|
||||
{ // Listing
|
||||
api.UI.ShowInfo("Please reject listing-request")
|
||||
_, err := api.List(ctx)
|
||||
expectDeny("list", err)
|
||||
}
|
||||
{ // Import
|
||||
api.UI.ShowInfo("Please reject new account-request")
|
||||
_, err := api.New(ctx)
|
||||
expectDeny("newaccount", err)
|
||||
}
|
||||
{ // Metadata
|
||||
api.UI.ShowInfo("Please check if you see the Origin in next listing (approve or deny)")
|
||||
api.List(context.WithValue(ctx, "Origin", "origin.com"))
|
||||
expectResponse("metadata - origin", "Did you see origin (origin.com)? [yes/no] ", "yes")
|
||||
}
|
||||
|
||||
for _, e := range errs {
|
||||
log.Error(e)
|
||||
}
|
||||
result := fmt.Sprintf("Tests completed. %d errors:\n%s\n", len(errs), strings.Join(errs, "\n"))
|
||||
api.UI.ShowInfo(result)
|
||||
|
||||
}
|
||||
|
||||
// getPassPhrase retrieves the password associated with clef, either fetched
|
||||
// from a list of preloaded passphrases, or requested interactively from the user.
|
||||
// TODO: there are many `getPassPhrase` functions, it will be better to abstract them into one.
|
||||
func getPassPhrase(prompt string, confirmation bool) string {
|
||||
fmt.Println(prompt)
|
||||
password, err := console.Stdin.PromptPassword("Passphrase: ")
|
||||
if err != nil {
|
||||
utils.Fatalf("Failed to read passphrase: %v", err)
|
||||
}
|
||||
if confirmation {
|
||||
confirm, err := console.Stdin.PromptPassword("Repeat passphrase: ")
|
||||
if err != nil {
|
||||
utils.Fatalf("Failed to read passphrase confirmation: %v", err)
|
||||
}
|
||||
if password != confirm {
|
||||
utils.Fatalf("Passphrases do not match")
|
||||
}
|
||||
}
|
||||
return password
|
||||
}
|
||||
|
||||
type encryptedSeedStorage struct {
|
||||
Description string `json:"description"`
|
||||
Version int `json:"version"`
|
||||
Params keystore.CryptoJSON `json:"params"`
|
||||
}
|
||||
|
||||
// encryptSeed uses a similar scheme as the keystore uses, but with a different wrapping,
|
||||
// to encrypt the master seed
|
||||
func encryptSeed(seed []byte, auth []byte, scryptN, scryptP int) ([]byte, error) {
|
||||
cryptoStruct, err := keystore.EncryptDataV3(seed, auth, scryptN, scryptP)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return json.Marshal(&encryptedSeedStorage{"Clef seed", 1, cryptoStruct})
|
||||
}
|
||||
|
||||
// decryptSeed decrypts the master seed
|
||||
func decryptSeed(keyjson []byte, auth string) ([]byte, error) {
|
||||
var encSeed encryptedSeedStorage
|
||||
if err := json.Unmarshal(keyjson, &encSeed); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if encSeed.Version != 1 {
|
||||
log.Warn(fmt.Sprintf("unsupported encryption format of seed: %d, operation will likely fail", encSeed.Version))
|
||||
}
|
||||
seed, err := keystore.DecryptDataV3(encSeed.Params, auth)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return seed, err
|
||||
}
|
||||
|
||||
// GenDoc outputs examples of all structures used in json-rpc communication
|
||||
func GenDoc(ctx *cli.Context) {
|
||||
|
||||
var (
|
||||
a = common.HexToAddress("0xdeadbeef000000000000000000000000deadbeef")
|
||||
b = common.HexToAddress("0x1111111122222222222233333333334444444444")
|
||||
meta = core.Metadata{
|
||||
Scheme: "http",
|
||||
Local: "localhost:8545",
|
||||
Origin: "www.malicious.ru",
|
||||
Remote: "localhost:9999",
|
||||
UserAgent: "Firefox 3.2",
|
||||
}
|
||||
output []string
|
||||
add = func(name, desc string, v interface{}) {
|
||||
if data, err := json.MarshalIndent(v, "", " "); err == nil {
|
||||
output = append(output, fmt.Sprintf("### %s\n\n%s\n\nExample:\n```json\n%s\n```", name, desc, data))
|
||||
} else {
|
||||
log.Error("Error generating output", err)
|
||||
}
|
||||
}
|
||||
)
|
||||
|
||||
{ // Sign plain text request
|
||||
desc := "SignDataRequest contains information about a pending request to sign some data. " +
|
||||
"The data to be signed can be of various types, defined by content-type. Clef has done most " +
|
||||
"of the work in canonicalizing and making sense of the data, and it's up to the UI to present" +
|
||||
"the user with the contents of the `message`"
|
||||
sighash, msg := accounts.TextAndHash([]byte("hello world"))
|
||||
message := []*core.NameValueType{{"message", msg, accounts.MimetypeTextPlain}}
|
||||
|
||||
add("SignDataRequest", desc, &core.SignDataRequest{
|
||||
Address: common.NewMixedcaseAddress(a),
|
||||
Meta: meta,
|
||||
ContentType: accounts.MimetypeTextPlain,
|
||||
Rawdata: []byte(msg),
|
||||
Message: message,
|
||||
Hash: sighash})
|
||||
}
|
||||
{ // Sign plain text response
|
||||
add("SignDataResponse - approve", "Response to SignDataRequest",
|
||||
&core.SignDataResponse{Approved: true})
|
||||
add("SignDataResponse - deny", "Response to SignDataRequest",
|
||||
&core.SignDataResponse{})
|
||||
}
|
||||
{ // Sign transaction request
|
||||
desc := "SignTxRequest contains information about a pending request to sign a transaction. " +
|
||||
"Aside from the transaction itself, there is also a `call_info`-struct. That struct contains " +
|
||||
"messages of various types, that the user should be informed of." +
|
||||
"\n\n" +
|
||||
"As in any request, it's important to consider that the `meta` info also contains untrusted data." +
|
||||
"\n\n" +
|
||||
"The `transaction` (on input into clef) can have either `data` or `input` -- if both are set, " +
|
||||
"they must be identical, otherwise an error is generated. " +
|
||||
"However, Clef will always use `data` when passing this struct on (if Clef does otherwise, please file a ticket)"
|
||||
|
||||
data := hexutil.Bytes([]byte{0x01, 0x02, 0x03, 0x04})
|
||||
add("SignTxRequest", desc, &core.SignTxRequest{
|
||||
Meta: meta,
|
||||
Callinfo: []core.ValidationInfo{
|
||||
{"Warning", "Something looks odd, show this message as a warning"},
|
||||
{"Info", "User should see this aswell"},
|
||||
},
|
||||
Transaction: core.SendTxArgs{
|
||||
Data: &data,
|
||||
Nonce: 0x1,
|
||||
Value: hexutil.Big(*big.NewInt(6)),
|
||||
From: common.NewMixedcaseAddress(a),
|
||||
To: nil,
|
||||
GasPrice: hexutil.Big(*big.NewInt(5)),
|
||||
Gas: 1000,
|
||||
Input: nil,
|
||||
}})
|
||||
}
|
||||
{ // Sign tx response
|
||||
data := hexutil.Bytes([]byte{0x04, 0x03, 0x02, 0x01})
|
||||
add("SignTxResponse - approve", "Response to request to sign a transaction. This response needs to contain the `transaction`"+
|
||||
", because the UI is free to make modifications to the transaction.",
|
||||
&core.SignTxResponse{Approved: true,
|
||||
Transaction: core.SendTxArgs{
|
||||
Data: &data,
|
||||
Nonce: 0x4,
|
||||
Value: hexutil.Big(*big.NewInt(6)),
|
||||
From: common.NewMixedcaseAddress(a),
|
||||
To: nil,
|
||||
GasPrice: hexutil.Big(*big.NewInt(5)),
|
||||
Gas: 1000,
|
||||
Input: nil,
|
||||
}})
|
||||
add("SignTxResponse - deny", "Response to SignTxRequest. When denying a request, there's no need to "+
|
||||
"provide the transaction in return",
|
||||
&core.SignTxResponse{})
|
||||
}
|
||||
{ // When a signed tx is ready to go out
|
||||
desc := "SignTransactionResult is used in the call `clef` -> `OnApprovedTx(result)`" +
|
||||
"\n\n" +
|
||||
"This occurs _after_ successful completion of the entire signing procedure, but right before the signed " +
|
||||
"transaction is passed to the external caller. This method (and data) can be used by the UI to signal " +
|
||||
"to the user that the transaction was signed, but it is primarily useful for ruleset implementations." +
|
||||
"\n\n" +
|
||||
"A ruleset that implements a rate limitation needs to know what transactions are sent out to the external " +
|
||||
"interface. By hooking into this methods, the ruleset can maintain track of that count." +
|
||||
"\n\n" +
|
||||
"**OBS:** Note that if an attacker can restore your `clef` data to a previous point in time" +
|
||||
" (e.g through a backup), the attacker can reset such windows, even if he/she is unable to decrypt the content. " +
|
||||
"\n\n" +
|
||||
"The `OnApproved` method cannot be responded to, it's purely informative"
|
||||
|
||||
rlpdata := common.FromHex("0xf85d640101948a8eafb1cf62bfbeb1741769dae1a9dd47996192018026a0716bd90515acb1e68e5ac5867aa11a1e65399c3349d479f5fb698554ebc6f293a04e8a4ebfff434e971e0ef12c5bf3a881b06fd04fc3f8b8a7291fb67a26a1d4ed")
|
||||
var tx types.Transaction
|
||||
rlp.DecodeBytes(rlpdata, &tx)
|
||||
add("OnApproved - SignTransactionResult", desc, ðapi.SignTransactionResult{Raw: rlpdata, Tx: &tx})
|
||||
|
||||
}
|
||||
{ // User input
|
||||
add("UserInputRequest", "Sent when clef needs the user to provide data. If 'password' is true, the input field should be treated accordingly (echo-free)",
|
||||
&core.UserInputRequest{IsPassword: true, Title: "The title here", Prompt: "The question to ask the user"})
|
||||
add("UserInputResponse", "Response to UserInputRequest",
|
||||
&core.UserInputResponse{Text: "The textual response from user"})
|
||||
}
|
||||
{ // List request
|
||||
add("ListRequest", "Sent when a request has been made to list addresses. The UI is provided with the "+
|
||||
"full `account`s, including local directory names. Note: this information is not passed back to the external caller, "+
|
||||
"who only sees the `address`es. ",
|
||||
&core.ListRequest{
|
||||
Meta: meta,
|
||||
Accounts: []accounts.Account{
|
||||
{a, accounts.URL{Scheme: "keystore", Path: "/path/to/keyfile/a"}},
|
||||
{b, accounts.URL{Scheme: "keystore", Path: "/path/to/keyfile/b"}}},
|
||||
})
|
||||
|
||||
add("ListResponse", "Response to list request. The response contains a list of all addresses to show to the caller. "+
|
||||
"Note: the UI is free to respond with any address the caller, regardless of whether it exists or not",
|
||||
&core.ListResponse{
|
||||
Accounts: []accounts.Account{
|
||||
{common.HexToAddress("0xcowbeef000000cowbeef00000000000000000c0w"), accounts.URL{Path: ".. ignored .."}},
|
||||
{common.HexToAddress("0xffffffffffffffffffffffffffffffffffffffff"), accounts.URL{}},
|
||||
}})
|
||||
}
|
||||
|
||||
fmt.Println(`## UI Client interface
|
||||
|
||||
These data types are defined in the channel between clef and the UI`)
|
||||
for _, elem := range output {
|
||||
fmt.Println(elem)
|
||||
}
|
||||
}
|
||||
|
||||
/**
// Create Account

curl -H "Content-Type: application/json" -X POST --data '{"jsonrpc":"2.0","method":"account_new","params":["test"],"id":67}' localhost:8550

// List accounts

curl -i -H "Content-Type: application/json" -X POST --data '{"jsonrpc":"2.0","method":"account_list","params":[""],"id":67}' http://localhost:8550/

// Make Transaction
// safeSend(0x12)
// 4401a6e40000000000000000000000000000000000000000000000000000000000000012

// supplied abi
curl -i -H "Content-Type: application/json" -X POST --data '{"jsonrpc":"2.0","method":"account_signTransaction","params":[{"from":"0x82A2A876D39022B3019932D30Cd9c97ad5616813","gas":"0x333","gasPrice":"0x123","nonce":"0x0","to":"0x07a565b7ed7d7a678680a4c162885bedbb695fe0", "value":"0x10", "data":"0x4401a6e40000000000000000000000000000000000000000000000000000000000000012"},"test"],"id":67}' http://localhost:8550/

// Not supplied
curl -i -H "Content-Type: application/json" -X POST --data '{"jsonrpc":"2.0","method":"account_signTransaction","params":[{"from":"0x82A2A876D39022B3019932D30Cd9c97ad5616813","gas":"0x333","gasPrice":"0x123","nonce":"0x0","to":"0x07a565b7ed7d7a678680a4c162885bedbb695fe0", "value":"0x10", "data":"0x4401a6e40000000000000000000000000000000000000000000000000000000000000012"}],"id":67}' http://localhost:8550/

// Sign data

curl -i -H "Content-Type: application/json" -X POST --data '{"jsonrpc":"2.0","method":"account_sign","params":["0x694267f14675d7e1b9494fd8d72fefe1755710fa","bazonk gaz baz"],"id":67}' http://localhost:8550/


**/
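For reference, the same external API can be exercised from Go instead of curl. The sketch below is illustrative only (it is not part of the vendored file) and assumes Clef was started with --rpc so that its HTTP endpoint listens on the default external port 8550, as in the curl examples above; it issues the same account_list request and prints the raw JSON-RPC response.

package main

import (
	"bytes"
	"fmt"
	"io/ioutil"
	"net/http"
)

func main() {
	// Same JSON-RPC payload as the "List accounts" curl example above.
	payload := []byte(`{"jsonrpc":"2.0","method":"account_list","params":[""],"id":67}`)

	// POST it to Clef's external HTTP endpoint (assumed to be the default :8550).
	resp, err := http.Post("http://localhost:8550/", "application/json", bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Print the raw JSON-RPC response, e.g. {"jsonrpc":"2.0","id":67,"result":[...]}.
	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}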
779
vendor/github.com/ethereum/go-ethereum/cmd/faucet/faucet.go
generated
vendored
Normal file
@ -0,0 +1,779 @@
|
||||
// Copyright 2017 The go-ethereum Authors
|
||||
// This file is part of go-ethereum.
|
||||
//
|
||||
// go-ethereum is free software: you can redistribute it and/or modify
|
||||
// it under the terms of the GNU General Public License as published by
|
||||
// the Free Software Foundation, either version 3 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// go-ethereum is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU General Public License
|
||||
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
// faucet is a Ether faucet backed by a light client.
|
||||
package main
|
||||
|
||||
//go:generate go-bindata -nometadata -o website.go faucet.html
|
||||
//go:generate gofmt -w -s website.go
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"context"
|
||||
"encoding/json"
|
||||
"errors"
|
||||
"flag"
|
||||
"fmt"
|
||||
"html/template"
|
||||
"io/ioutil"
|
||||
"math"
|
||||
"math/big"
|
||||
"net/http"
|
||||
"net/url"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"regexp"
|
||||
"strconv"
|
||||
"strings"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/ethereum/go-ethereum/accounts"
|
||||
"github.com/ethereum/go-ethereum/accounts/keystore"
|
||||
"github.com/ethereum/go-ethereum/common"
|
||||
"github.com/ethereum/go-ethereum/core"
|
||||
"github.com/ethereum/go-ethereum/core/types"
|
||||
"github.com/ethereum/go-ethereum/eth"
|
||||
"github.com/ethereum/go-ethereum/eth/downloader"
|
||||
"github.com/ethereum/go-ethereum/ethclient"
|
||||
"github.com/ethereum/go-ethereum/ethstats"
|
||||
"github.com/ethereum/go-ethereum/les"
|
||||
"github.com/ethereum/go-ethereum/log"
|
||||
"github.com/ethereum/go-ethereum/node"
|
||||
"github.com/ethereum/go-ethereum/p2p"
|
||||
"github.com/ethereum/go-ethereum/p2p/discv5"
|
||||
"github.com/ethereum/go-ethereum/p2p/enode"
|
||||
"github.com/ethereum/go-ethereum/p2p/nat"
|
||||
"github.com/ethereum/go-ethereum/params"
|
||||
"golang.org/x/net/websocket"
|
||||
)
|
||||
|
||||
var (
|
||||
genesisFlag = flag.String("genesis", "", "Genesis json file to seed the chain with")
|
||||
apiPortFlag = flag.Int("apiport", 8080, "Listener port for the HTTP API connection")
|
||||
ethPortFlag = flag.Int("ethport", 30303, "Listener port for the devp2p connection")
|
||||
bootFlag = flag.String("bootnodes", "", "Comma separated bootnode enode URLs to seed with")
|
||||
netFlag = flag.Uint64("network", 0, "Network ID to use for the Ethereum protocol")
|
||||
statsFlag = flag.String("ethstats", "", "Ethstats network monitoring auth string")
|
||||
|
||||
netnameFlag = flag.String("faucet.name", "", "Network name to assign to the faucet")
|
||||
payoutFlag = flag.Int("faucet.amount", 1, "Number of Ethers to pay out per user request")
|
||||
minutesFlag = flag.Int("faucet.minutes", 1440, "Number of minutes to wait between funding rounds")
|
||||
tiersFlag = flag.Int("faucet.tiers", 3, "Number of funding tiers to enable (x3 time, x2.5 funds)")
|
||||
|
||||
accJSONFlag = flag.String("account.json", "", "Key json file to fund user requests with")
|
||||
accPassFlag = flag.String("account.pass", "", "Decryption password to access faucet funds")
|
||||
|
||||
captchaToken = flag.String("captcha.token", "", "Recaptcha site key to authenticate client side")
|
||||
captchaSecret = flag.String("captcha.secret", "", "Recaptcha secret key to authenticate server side")
|
||||
|
||||
noauthFlag = flag.Bool("noauth", false, "Enables funding requests without authentication")
|
||||
logFlag = flag.Int("loglevel", 3, "Log level to use for Ethereum and the faucet")
|
||||
)
|
||||
|
||||
var (
|
||||
ether = new(big.Int).Exp(big.NewInt(10), big.NewInt(18), nil)
|
||||
)
|
||||
|
||||
func main() {
|
||||
// Parse the flags and set up the logger to print everything requested
|
||||
flag.Parse()
|
||||
log.Root().SetHandler(log.LvlFilterHandler(log.Lvl(*logFlag), log.StreamHandler(os.Stderr, log.TerminalFormat(true))))
|
||||
|
||||
// Construct the payout tiers
|
||||
amounts := make([]string, *tiersFlag)
|
||||
periods := make([]string, *tiersFlag)
|
||||
for i := 0; i < *tiersFlag; i++ {
|
||||
// Calculate the amount for the next tier and format it
|
||||
amount := float64(*payoutFlag) * math.Pow(2.5, float64(i))
|
||||
amounts[i] = fmt.Sprintf("%s Ethers", strconv.FormatFloat(amount, 'f', -1, 64))
|
||||
if amount == 1 {
|
||||
amounts[i] = strings.TrimSuffix(amounts[i], "s")
|
||||
}
|
||||
// Calculate the period for the next tier and format it
|
||||
period := *minutesFlag * int(math.Pow(3, float64(i)))
|
||||
periods[i] = fmt.Sprintf("%d mins", period)
|
||||
if period%60 == 0 {
|
||||
period /= 60
|
||||
periods[i] = fmt.Sprintf("%d hours", period)
|
||||
|
||||
if period%24 == 0 {
|
||||
period /= 24
|
||||
periods[i] = fmt.Sprintf("%d days", period)
|
||||
}
|
||||
}
|
||||
if period == 1 {
|
||||
periods[i] = strings.TrimSuffix(periods[i], "s")
|
||||
}
|
||||
}
|
||||
// Load up and render the faucet website
|
||||
tmpl, err := Asset("faucet.html")
|
||||
if err != nil {
|
||||
log.Crit("Failed to load the faucet template", "err", err)
|
||||
}
|
||||
website := new(bytes.Buffer)
|
||||
err = template.Must(template.New("").Parse(string(tmpl))).Execute(website, map[string]interface{}{
|
||||
"Network": *netnameFlag,
|
||||
"Amounts": amounts,
|
||||
"Periods": periods,
|
||||
"Recaptcha": *captchaToken,
|
||||
"NoAuth": *noauthFlag,
|
||||
})
|
||||
if err != nil {
|
||||
log.Crit("Failed to render the faucet template", "err", err)
|
||||
}
|
||||
// Load and parse the genesis block requested by the user
|
||||
blob, err := ioutil.ReadFile(*genesisFlag)
|
||||
if err != nil {
|
||||
log.Crit("Failed to read genesis block contents", "genesis", *genesisFlag, "err", err)
|
||||
}
|
||||
genesis := new(core.Genesis)
|
||||
if err = json.Unmarshal(blob, genesis); err != nil {
|
||||
log.Crit("Failed to parse genesis block json", "err", err)
|
||||
}
|
||||
// Convert the bootnodes to internal enode representations
|
||||
var enodes []*discv5.Node
|
||||
for _, boot := range strings.Split(*bootFlag, ",") {
|
||||
if url, err := discv5.ParseNode(boot); err == nil {
|
||||
enodes = append(enodes, url)
|
||||
} else {
|
||||
log.Error("Failed to parse bootnode URL", "url", boot, "err", err)
|
||||
}
|
||||
}
|
||||
// Load up the account key and decrypt its password
|
||||
if blob, err = ioutil.ReadFile(*accPassFlag); err != nil {
|
||||
log.Crit("Failed to read account password contents", "file", *accPassFlag, "err", err)
|
||||
}
|
||||
// Delete trailing newline in password
|
||||
pass := strings.TrimSuffix(string(blob), "\n")
|
||||
|
||||
ks := keystore.NewKeyStore(filepath.Join(os.Getenv("HOME"), ".faucet", "keys"), keystore.StandardScryptN, keystore.StandardScryptP)
|
||||
if blob, err = ioutil.ReadFile(*accJSONFlag); err != nil {
|
||||
log.Crit("Failed to read account key contents", "file", *accJSONFlag, "err", err)
|
||||
}
|
||||
acc, err := ks.Import(blob, pass, pass)
|
||||
if err != nil {
|
||||
log.Crit("Failed to import faucet signer account", "err", err)
|
||||
}
|
||||
ks.Unlock(acc, pass)
|
||||
|
||||
// Assemble and start the faucet light service
|
||||
faucet, err := newFaucet(genesis, *ethPortFlag, enodes, *netFlag, *statsFlag, ks, website.Bytes())
|
||||
if err != nil {
|
||||
log.Crit("Failed to start faucet", "err", err)
|
||||
}
|
||||
defer faucet.close()
|
||||
|
||||
if err := faucet.listenAndServe(*apiPortFlag); err != nil {
|
||||
log.Crit("Failed to launch faucet API", "err", err)
|
||||
}
|
||||
}
|
||||
|
||||
// request represents an accepted funding request.
|
||||
type request struct {
|
||||
Avatar string `json:"avatar"` // Avatar URL to make the UI nicer
|
||||
Account common.Address `json:"account"` // Ethereum address being funded
|
||||
Time time.Time `json:"time"` // Timestamp when the request was accepted
|
||||
Tx *types.Transaction `json:"tx"` // Transaction funding the account
|
||||
}
|
||||
|
||||
// faucet represents a crypto faucet backed by an Ethereum light client.
|
||||
type faucet struct {
|
||||
config *params.ChainConfig // Chain configurations for signing
|
||||
stack *node.Node // Ethereum protocol stack
|
||||
client *ethclient.Client // Client connection to the Ethereum chain
|
||||
index []byte // Index page to serve up on the web
|
||||
|
||||
keystore *keystore.KeyStore // Keystore containing the single signer
|
||||
account accounts.Account // Account funding user faucet requests
|
||||
head *types.Header // Current head header of the faucet
|
||||
balance *big.Int // Current balance of the faucet
|
||||
nonce uint64 // Current pending nonce of the faucet
|
||||
price *big.Int // Current gas price to issue funds with
|
||||
|
||||
conns []*websocket.Conn // Currently live websocket connections
|
||||
timeouts map[string]time.Time // History of users and their funding timeouts
|
||||
reqs []*request // Currently pending funding requests
|
||||
update chan struct{} // Channel to signal request updates
|
||||
|
||||
lock sync.RWMutex // Lock protecting the faucet's internals
|
||||
}
|
||||
|
||||
func newFaucet(genesis *core.Genesis, port int, enodes []*discv5.Node, network uint64, stats string, ks *keystore.KeyStore, index []byte) (*faucet, error) {
|
||||
// Assemble the raw devp2p protocol stack
|
||||
stack, err := node.New(&node.Config{
|
||||
Name: "geth",
|
||||
Version: params.VersionWithMeta,
|
||||
DataDir: filepath.Join(os.Getenv("HOME"), ".faucet"),
|
||||
P2P: p2p.Config{
|
||||
NAT: nat.Any(),
|
||||
NoDiscovery: true,
|
||||
DiscoveryV5: true,
|
||||
ListenAddr: fmt.Sprintf(":%d", port),
|
||||
MaxPeers: 25,
|
||||
BootstrapNodesV5: enodes,
|
||||
},
|
||||
})
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
// Assemble the Ethereum light client protocol
|
||||
if err := stack.Register(func(ctx *node.ServiceContext) (node.Service, error) {
|
||||
cfg := eth.DefaultConfig
|
||||
cfg.SyncMode = downloader.LightSync
|
||||
cfg.NetworkId = network
|
||||
cfg.Genesis = genesis
|
||||
return les.New(ctx, &cfg)
|
||||
}); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
// Assemble the ethstats monitoring and reporting service
|
||||
if stats != "" {
|
||||
if err := stack.Register(func(ctx *node.ServiceContext) (node.Service, error) {
|
||||
var serv *les.LightEthereum
|
||||
ctx.Service(&serv)
|
||||
return ethstats.New(stats, nil, serv)
|
||||
}); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
// Boot up the client and ensure it connects to bootnodes
|
||||
if err := stack.Start(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
for _, boot := range enodes {
|
||||
old, err := enode.ParseV4(boot.String())
|
||||
if err == nil {
|
||||
stack.Server().AddPeer(old)
|
||||
}
|
||||
}
|
||||
// Attach to the client and retrieve any interesting metadata
|
||||
api, err := stack.Attach()
|
||||
if err != nil {
|
||||
stack.Stop()
|
||||
return nil, err
|
||||
}
|
||||
client := ethclient.NewClient(api)
|
||||
|
||||
return &faucet{
|
||||
config: genesis.Config,
|
||||
stack: stack,
|
||||
client: client,
|
||||
index: index,
|
||||
keystore: ks,
|
||||
account: ks.Accounts()[0],
|
||||
timeouts: make(map[string]time.Time),
|
||||
update: make(chan struct{}, 1),
|
||||
}, nil
|
||||
}
|
||||
|
||||
// close terminates the Ethereum connection and tears down the faucet.
|
||||
func (f *faucet) close() error {
|
||||
return f.stack.Close()
|
||||
}
|
||||
|
||||
// listenAndServe registers the HTTP handlers for the faucet and boots it up
|
||||
// to service user funding requests.
|
||||
func (f *faucet) listenAndServe(port int) error {
|
||||
go f.loop()
|
||||
|
||||
http.HandleFunc("/", f.webHandler)
|
||||
http.Handle("/api", websocket.Handler(f.apiHandler))
|
||||
|
||||
return http.ListenAndServe(fmt.Sprintf(":%d", port), nil)
|
||||
}
|
||||
|
||||
// webHandler handles all non-api requests, simply flattening and returning the
|
||||
// faucet website.
|
||||
func (f *faucet) webHandler(w http.ResponseWriter, r *http.Request) {
|
||||
w.Write(f.index)
|
||||
}
|
||||
|
||||
// apiHandler handles requests for Ether grants and transaction statuses.
|
||||
func (f *faucet) apiHandler(conn *websocket.Conn) {
|
||||
// Start tracking the connection and drop at the end
|
||||
defer conn.Close()
|
||||
|
||||
f.lock.Lock()
|
||||
f.conns = append(f.conns, conn)
|
||||
f.lock.Unlock()
|
||||
|
||||
defer func() {
|
||||
f.lock.Lock()
|
||||
for i, c := range f.conns {
|
||||
if c == conn {
|
||||
f.conns = append(f.conns[:i], f.conns[i+1:]...)
|
||||
break
|
||||
}
|
||||
}
|
||||
f.lock.Unlock()
|
||||
}()
|
||||
// Gather the initial stats from the network to report
|
||||
var (
|
||||
head *types.Header
|
||||
balance *big.Int
|
||||
nonce uint64
|
||||
err error
|
||||
)
|
||||
for head == nil || balance == nil {
|
||||
// Retrieve the current stats cached by the faucet
|
||||
f.lock.RLock()
|
||||
if f.head != nil {
|
||||
head = types.CopyHeader(f.head)
|
||||
}
|
||||
if f.balance != nil {
|
||||
balance = new(big.Int).Set(f.balance)
|
||||
}
|
||||
nonce = f.nonce
|
||||
f.lock.RUnlock()
|
||||
|
||||
if head == nil || balance == nil {
|
||||
// Report the faucet offline until initial stats are ready
|
||||
if err = sendError(conn, errors.New("Faucet offline")); err != nil {
|
||||
log.Warn("Failed to send faucet error to client", "err", err)
|
||||
return
|
||||
}
|
||||
time.Sleep(3 * time.Second)
|
||||
}
|
||||
}
|
||||
// Send over the initial stats and the latest header
|
||||
if err = send(conn, map[string]interface{}{
|
||||
"funds": new(big.Int).Div(balance, ether),
|
||||
"funded": nonce,
|
||||
"peers": f.stack.Server().PeerCount(),
|
||||
"requests": f.reqs,
|
||||
}, 3*time.Second); err != nil {
|
||||
log.Warn("Failed to send initial stats to client", "err", err)
|
||||
return
|
||||
}
|
||||
if err = send(conn, head, 3*time.Second); err != nil {
|
||||
log.Warn("Failed to send initial header to client", "err", err)
|
||||
return
|
||||
}
|
||||
// Keep reading requests from the websocket until the connection breaks
|
||||
for {
|
||||
// Fetch the next funding request and validate against github
|
||||
var msg struct {
|
||||
URL string `json:"url"`
|
||||
Tier uint `json:"tier"`
|
||||
Captcha string `json:"captcha"`
|
||||
}
|
||||
if err = websocket.JSON.Receive(conn, &msg); err != nil {
|
||||
return
|
||||
}
|
||||
if !*noauthFlag && !strings.HasPrefix(msg.URL, "https://gist.github.com/") && !strings.HasPrefix(msg.URL, "https://twitter.com/") &&
|
||||
!strings.HasPrefix(msg.URL, "https://plus.google.com/") && !strings.HasPrefix(msg.URL, "https://www.facebook.com/") {
|
||||
if err = sendError(conn, errors.New("URL doesn't link to supported services")); err != nil {
|
||||
log.Warn("Failed to send URL error to client", "err", err)
|
||||
return
|
||||
}
|
||||
continue
|
||||
}
|
||||
if msg.Tier >= uint(*tiersFlag) {
|
||||
if err = sendError(conn, errors.New("Invalid funding tier requested")); err != nil {
|
||||
log.Warn("Failed to send tier error to client", "err", err)
|
||||
return
|
||||
}
|
||||
continue
|
||||
}
|
||||
log.Info("Faucet funds requested", "url", msg.URL, "tier", msg.Tier)
|
||||
|
||||
// If captcha verifications are enabled, make sure we're not dealing with a robot
|
||||
if *captchaToken != "" {
|
||||
form := url.Values{}
|
||||
form.Add("secret", *captchaSecret)
|
||||
form.Add("response", msg.Captcha)
|
||||
|
||||
res, err := http.PostForm("https://www.google.com/recaptcha/api/siteverify", form)
|
||||
if err != nil {
|
||||
if err = sendError(conn, err); err != nil {
|
||||
log.Warn("Failed to send captcha post error to client", "err", err)
|
||||
return
|
||||
}
|
||||
continue
|
||||
}
|
||||
var result struct {
|
||||
Success bool `json:"success"`
|
||||
Errors json.RawMessage `json:"error-codes"`
|
||||
}
|
||||
err = json.NewDecoder(res.Body).Decode(&result)
|
||||
res.Body.Close()
|
||||
if err != nil {
|
||||
if err = sendError(conn, err); err != nil {
|
||||
log.Warn("Failed to send captcha decode error to client", "err", err)
|
||||
return
|
||||
}
|
||||
continue
|
||||
}
|
||||
if !result.Success {
|
||||
log.Warn("Captcha verification failed", "err", string(result.Errors))
|
||||
if err = sendError(conn, errors.New("Beep-bop, you're a robot!")); err != nil {
|
||||
log.Warn("Failed to send captcha failure to client", "err", err)
|
||||
return
|
||||
}
|
||||
continue
|
||||
}
|
||||
}
|
||||
// Retrieve the Ethereum address to fund, the requesting user and a profile picture
|
||||
var (
|
||||
username string
|
||||
avatar string
|
||||
address common.Address
|
||||
)
|
||||
switch {
|
||||
case strings.HasPrefix(msg.URL, "https://gist.github.com/"):
|
||||
if err = sendError(conn, errors.New("GitHub authentication discontinued at the official request of GitHub")); err != nil {
|
||||
log.Warn("Failed to send GitHub deprecation to client", "err", err)
|
||||
return
|
||||
}
|
||||
continue
|
||||
case strings.HasPrefix(msg.URL, "https://twitter.com/"):
|
||||
username, avatar, address, err = authTwitter(msg.URL)
|
||||
case strings.HasPrefix(msg.URL, "https://plus.google.com/"):
|
||||
username, avatar, address, err = authGooglePlus(msg.URL)
|
||||
case strings.HasPrefix(msg.URL, "https://www.facebook.com/"):
|
||||
username, avatar, address, err = authFacebook(msg.URL)
|
||||
case *noauthFlag:
|
||||
username, avatar, address, err = authNoAuth(msg.URL)
|
||||
default:
|
||||
err = errors.New("Something funky happened, please open an issue at https://github.com/ethereum/go-ethereum/issues")
|
||||
}
|
||||
if err != nil {
|
||||
if err = sendError(conn, err); err != nil {
|
||||
log.Warn("Failed to send prefix error to client", "err", err)
|
||||
return
|
||||
}
|
||||
continue
|
||||
}
|
||||
log.Info("Faucet request valid", "url", msg.URL, "tier", msg.Tier, "user", username, "address", address)
|
||||
|
||||
// Ensure the user didn't request funds too recently
|
||||
f.lock.Lock()
|
||||
var (
|
||||
fund bool
|
||||
timeout time.Time
|
||||
)
|
||||
if timeout = f.timeouts[username]; time.Now().After(timeout) {
|
||||
// User wasn't funded recently, create the funding transaction
|
||||
amount := new(big.Int).Mul(big.NewInt(int64(*payoutFlag)), ether)
|
||||
amount = new(big.Int).Mul(amount, new(big.Int).Exp(big.NewInt(5), big.NewInt(int64(msg.Tier)), nil))
|
||||
amount = new(big.Int).Div(amount, new(big.Int).Exp(big.NewInt(2), big.NewInt(int64(msg.Tier)), nil))
|
||||
|
||||
tx := types.NewTransaction(f.nonce+uint64(len(f.reqs)), address, amount, 21000, f.price, nil)
|
||||
signed, err := f.keystore.SignTx(f.account, tx, f.config.ChainID)
|
||||
if err != nil {
|
||||
f.lock.Unlock()
|
||||
if err = sendError(conn, err); err != nil {
|
||||
log.Warn("Failed to send transaction creation error to client", "err", err)
|
||||
return
|
||||
}
|
||||
continue
|
||||
}
|
||||
// Submit the transaction and mark as funded if successful
|
||||
if err := f.client.SendTransaction(context.Background(), signed); err != nil {
|
||||
f.lock.Unlock()
|
||||
if err = sendError(conn, err); err != nil {
|
||||
log.Warn("Failed to send transaction transmission error to client", "err", err)
|
||||
return
|
||||
}
|
||||
continue
|
||||
}
|
||||
f.reqs = append(f.reqs, &request{
|
||||
Avatar: avatar,
|
||||
Account: address,
|
||||
Time: time.Now(),
|
||||
Tx: signed,
|
||||
})
|
||||
f.timeouts[username] = time.Now().Add(time.Duration(*minutesFlag*int(math.Pow(3, float64(msg.Tier)))) * time.Minute)
|
||||
fund = true
|
||||
}
|
||||
f.lock.Unlock()
|
||||
|
||||
// Send an error if funding was requested too frequently, otherwise a success
|
||||
if !fund {
|
||||
if err = sendError(conn, fmt.Errorf("%s left until next allowance", common.PrettyDuration(timeout.Sub(time.Now())))); err != nil { // nolint: gosimple
|
||||
log.Warn("Failed to send funding error to client", "err", err)
|
||||
return
|
||||
}
|
||||
continue
|
||||
}
|
||||
if err = sendSuccess(conn, fmt.Sprintf("Funding request accepted for %s into %s", username, address.Hex())); err != nil {
|
||||
log.Warn("Failed to send funding success to client", "err", err)
|
||||
return
|
||||
}
|
||||
select {
|
||||
case f.update <- struct{}{}:
|
||||
default:
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// refresh attempts to retrieve the latest header from the chain and extract the
|
||||
// associated faucet balance and nonce for connectivity caching.
|
||||
func (f *faucet) refresh(head *types.Header) error {
|
||||
// Ensure a state update does not run for too long
|
||||
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
|
||||
defer cancel()
|
||||
|
||||
// If no header was specified, use the current chain head
|
||||
var err error
|
||||
if head == nil {
|
||||
if head, err = f.client.HeaderByNumber(ctx, nil); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
// Retrieve the balance, nonce and gas price from the current head
|
||||
var (
|
||||
balance *big.Int
|
||||
nonce uint64
|
||||
price *big.Int
|
||||
)
|
||||
if balance, err = f.client.BalanceAt(ctx, f.account.Address, head.Number); err != nil {
|
||||
return err
|
||||
}
|
||||
if nonce, err = f.client.NonceAt(ctx, f.account.Address, head.Number); err != nil {
|
||||
return err
|
||||
}
|
||||
if price, err = f.client.SuggestGasPrice(ctx); err != nil {
|
||||
return err
|
||||
}
|
||||
// Everything succeeded, update the cached stats and eject old requests
|
||||
f.lock.Lock()
|
||||
f.head, f.balance = head, balance
|
||||
f.price, f.nonce = price, nonce
|
||||
for len(f.reqs) > 0 && f.reqs[0].Tx.Nonce() < f.nonce {
|
||||
f.reqs = f.reqs[1:]
|
||||
}
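// The loop above drops requests whose transaction nonce is already below the
// account nonce, i.e. transactions that have been mined.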
|
||||
f.lock.Unlock()
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// loop keeps waiting for interesting events and pushes them out to connected
|
||||
// websockets.
|
||||
func (f *faucet) loop() {
|
||||
// Wait for chain events and push them to clients
|
||||
heads := make(chan *types.Header, 16)
|
||||
sub, err := f.client.SubscribeNewHead(context.Background(), heads)
|
||||
if err != nil {
|
||||
log.Crit("Failed to subscribe to head events", "err", err)
|
||||
}
|
||||
defer sub.Unsubscribe()
|
||||
|
||||
// Start a goroutine to update the state from head notifications in the background
|
||||
update := make(chan *types.Header)
|
||||
|
||||
go func() {
|
||||
for head := range update {
|
||||
// New chain head arrived, query the current stats and stream to clients
|
||||
timestamp := time.Unix(int64(head.Time), 0)
|
||||
if time.Since(timestamp) > time.Hour {
|
||||
log.Warn("Skipping faucet refresh, head too old", "number", head.Number, "hash", head.Hash(), "age", common.PrettyAge(timestamp))
|
||||
continue
|
||||
}
|
||||
if err := f.refresh(head); err != nil {
|
||||
log.Warn("Failed to update faucet state", "block", head.Number, "hash", head.Hash(), "err", err)
|
||||
continue
|
||||
}
|
||||
// Faucet state retrieved, update locally and send to clients
|
||||
f.lock.RLock()
|
||||
log.Info("Updated faucet state", "number", head.Number, "hash", head.Hash(), "age", common.PrettyAge(timestamp), "balance", f.balance, "nonce", f.nonce, "price", f.price)
|
||||
|
||||
balance := new(big.Int).Div(f.balance, ether)
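// Report funds to clients in whole ether (the cached wei balance divided by the ether unit).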
|
||||
peers := f.stack.Server().PeerCount()
|
||||
|
||||
for _, conn := range f.conns {
|
||||
if err := send(conn, map[string]interface{}{
|
||||
"funds": balance,
|
||||
"funded": f.nonce,
|
||||
"peers": peers,
|
||||
"requests": f.reqs,
|
||||
}, time.Second); err != nil {
|
||||
log.Warn("Failed to send stats to client", "err", err)
|
||||
conn.Close()
|
||||
continue
|
||||
}
|
||||
if err := send(conn, head, time.Second); err != nil {
|
||||
log.Warn("Failed to send header to client", "err", err)
|
||||
conn.Close()
|
||||
}
|
||||
}
|
||||
f.lock.RUnlock()
|
||||
}
|
||||
}()
|
||||
// Wait for various events and assign them to the appropriate background threads
|
||||
for {
|
||||
select {
|
||||
case head := <-heads:
|
||||
// New head arrived, send it for a state update if there's none running
|
||||
select {
|
||||
case update <- head:
|
||||
default:
|
||||
}
|
||||
|
||||
case <-f.update:
|
||||
// Pending requests updated, stream to clients
|
||||
f.lock.RLock()
|
||||
for _, conn := range f.conns {
|
||||
if err := send(conn, map[string]interface{}{"requests": f.reqs}, time.Second); err != nil {
|
||||
log.Warn("Failed to send requests to client", "err", err)
|
||||
conn.Close()
|
||||
}
|
||||
}
|
||||
f.lock.RUnlock()
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// send transmits a data packet to the remote end of the websocket, also setting
// a write deadline to prevent waiting forever on the remote node.
|
||||
func send(conn *websocket.Conn, value interface{}, timeout time.Duration) error {
|
||||
if timeout == 0 {
|
||||
timeout = 60 * time.Second
|
||||
}
|
||||
conn.SetWriteDeadline(time.Now().Add(timeout))
|
||||
return websocket.JSON.Send(conn, value)
|
||||
}
|
||||
|
||||
// sendError transmits an error to the remote end of the websocket, also setting
|
||||
// the write deadline to 1 second to prevent waiting forever.
|
||||
func sendError(conn *websocket.Conn, err error) error {
|
||||
return send(conn, map[string]string{"error": err.Error()}, time.Second)
|
||||
}
|
||||
|
||||
// sendSuccess transmits a success message to the remote end of the websocket, also
|
||||
// setting the write deadline to 1 second to prevent waiting forever.
|
||||
func sendSuccess(conn *websocket.Conn, msg string) error {
|
||||
return send(conn, map[string]string{"success": msg}, time.Second)
|
||||
}
|
||||
|
||||
// authTwitter tries to authenticate a faucet request using Twitter posts, returning
|
||||
// the username, avatar URL and Ethereum address to fund on success.
|
||||
func authTwitter(url string) (string, string, common.Address, error) {
|
||||
// Ensure the user specified a meaningful URL, no fancy nonsense
|
||||
parts := strings.Split(url, "/")
|
||||
if len(parts) < 4 || parts[len(parts)-2] != "status" {
|
||||
return "", "", common.Address{}, errors.New("Invalid Twitter status URL")
|
||||
}
|
||||
// Twitter's API isn't really friendly with direct links. Still, we don't
// want to ask users for read permissions, so just load the public post and
// scrape it for the Ethereum address and profile URL.
|
||||
res, err := http.Get(url)
|
||||
if err != nil {
|
||||
return "", "", common.Address{}, err
|
||||
}
|
||||
defer res.Body.Close()
|
||||
|
||||
// Resolve the username from the final redirect, no intermediate junk
|
||||
parts = strings.Split(res.Request.URL.String(), "/")
|
||||
if len(parts) < 4 || parts[len(parts)-2] != "status" {
|
||||
return "", "", common.Address{}, errors.New("Invalid Twitter status URL")
|
||||
}
|
||||
username := parts[len(parts)-3]
|
||||
|
||||
body, err := ioutil.ReadAll(res.Body)
|
||||
if err != nil {
|
||||
return "", "", common.Address{}, err
|
||||
}
|
||||
address := common.HexToAddress(string(regexp.MustCompile("0x[0-9a-fA-F]{40}").Find(body)))
|
||||
if address == (common.Address{}) {
|
||||
return "", "", common.Address{}, errors.New("No Ethereum address found to fund")
|
||||
}
|
||||
var avatar string
|
||||
if parts = regexp.MustCompile("src=\"([^\"]+twimg.com/profile_images[^\"]+)\"").FindStringSubmatch(string(body)); len(parts) == 2 {
|
||||
avatar = parts[1]
|
||||
}
|
||||
return username + "@twitter", avatar, address, nil
|
||||
}
|
||||
|
||||
// authGooglePlus tries to authenticate a faucet request using GooglePlus posts,
|
||||
// returning the username, avatar URL and Ethereum address to fund on success.
|
||||
func authGooglePlus(url string) (string, string, common.Address, error) {
|
||||
// Ensure the user specified a meaningful URL, no fancy nonsense
|
||||
parts := strings.Split(url, "/")
|
||||
if len(parts) < 4 || parts[len(parts)-2] != "posts" {
|
||||
return "", "", common.Address{}, errors.New("Invalid Google+ post URL")
|
||||
}
|
||||
username := parts[len(parts)-3]
|
||||
|
||||
// Google's API isn't really friendly with direct links. Still, we don't
// want to ask users for read permissions, so just load the public post and
// scrape it for the Ethereum address and profile URL.
|
||||
res, err := http.Get(url)
|
||||
if err != nil {
|
||||
return "", "", common.Address{}, err
|
||||
}
|
||||
defer res.Body.Close()
|
||||
|
||||
body, err := ioutil.ReadAll(res.Body)
|
||||
if err != nil {
|
||||
return "", "", common.Address{}, err
|
||||
}
|
||||
address := common.HexToAddress(string(regexp.MustCompile("0x[0-9a-fA-F]{40}").Find(body)))
|
||||
if address == (common.Address{}) {
|
||||
return "", "", common.Address{}, errors.New("No Ethereum address found to fund")
|
||||
}
|
||||
var avatar string
|
||||
if parts = regexp.MustCompile("src=\"([^\"]+googleusercontent.com[^\"]+photo.jpg)\"").FindStringSubmatch(string(body)); len(parts) == 2 {
|
||||
avatar = parts[1]
|
||||
}
|
||||
return username + "@google+", avatar, address, nil
|
||||
}
|
||||
|
||||
// authFacebook tries to authenticate a faucet request using Facebook posts,
|
||||
// returning the username, avatar URL and Ethereum address to fund on success.
|
||||
func authFacebook(url string) (string, string, common.Address, error) {
|
||||
// Ensure the user specified a meaningful URL, no fancy nonsense
|
||||
parts := strings.Split(url, "/")
|
||||
if len(parts) < 4 || parts[len(parts)-2] != "posts" {
|
||||
return "", "", common.Address{}, errors.New("Invalid Facebook post URL")
|
||||
}
|
||||
username := parts[len(parts)-3]
|
||||
|
||||
// Facebook's Graph API isn't really friendly with direct links. Still, we don't
// want to ask users for read permissions, so just load the public post and
// scrape it for the Ethereum address and profile URL.
|
||||
res, err := http.Get(url)
|
||||
if err != nil {
|
||||
return "", "", common.Address{}, err
|
||||
}
|
||||
defer res.Body.Close()
|
||||
|
||||
body, err := ioutil.ReadAll(res.Body)
|
||||
if err != nil {
|
||||
return "", "", common.Address{}, err
|
||||
}
|
||||
address := common.HexToAddress(string(regexp.MustCompile("0x[0-9a-fA-F]{40}").Find(body)))
|
||||
if address == (common.Address{}) {
|
||||
return "", "", common.Address{}, errors.New("No Ethereum address found to fund")
|
||||
}
|
||||
var avatar string
|
||||
if parts = regexp.MustCompile("src=\"([^\"]+fbcdn.net[^\"]+)\"").FindStringSubmatch(string(body)); len(parts) == 2 {
|
||||
avatar = parts[1]
|
||||
}
|
||||
return username + "@facebook", avatar, address, nil
|
||||
}
|
||||
|
||||
// authNoAuth tries to interpret a faucet request as a plain Ethereum address,
|
||||
// without actually performing any remote authentication. This mode is prone to
|
||||
// Byzantine attacks, so it should only ever be used on truly private networks.
|
||||
func authNoAuth(url string) (string, string, common.Address, error) {
|
||||
address := common.HexToAddress(regexp.MustCompile("0x[0-9a-fA-F]{40}").FindString(url))
|
||||
if address == (common.Address{}) {
|
||||
return "", "", common.Address{}, errors.New("No Ethereum address found to fund")
|
||||
}
|
||||
return address.Hex() + "@noauth", "", address, nil
|
||||
}
|
379 vendor/github.com/ethereum/go-ethereum/cmd/geth/accountcmd.go generated vendored Normal file
@ -0,0 +1,379 @@
|
||||
// Copyright 2016 The go-ethereum Authors
|
||||
// This file is part of go-ethereum.
|
||||
//
|
||||
// go-ethereum is free software: you can redistribute it and/or modify
|
||||
// it under the terms of the GNU General Public License as published by
|
||||
// the Free Software Foundation, either version 3 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// go-ethereum is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU General Public License
|
||||
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
package main
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"io/ioutil"
|
||||
|
||||
"github.com/ethereum/go-ethereum/accounts"
|
||||
"github.com/ethereum/go-ethereum/accounts/keystore"
|
||||
"github.com/ethereum/go-ethereum/cmd/utils"
|
||||
"github.com/ethereum/go-ethereum/console"
|
||||
"github.com/ethereum/go-ethereum/crypto"
|
||||
"github.com/ethereum/go-ethereum/log"
|
||||
"gopkg.in/urfave/cli.v1"
|
||||
)
|
||||
|
||||
var (
|
||||
walletCommand = cli.Command{
|
||||
Name: "wallet",
|
||||
Usage: "Manage Ethereum presale wallets",
|
||||
ArgsUsage: "",
|
||||
Category: "ACCOUNT COMMANDS",
|
||||
Description: `
|
||||
geth wallet import /path/to/my/presale.wallet
|
||||
|
||||
will prompt for your password and imports your ether presale account.
|
||||
It can be used non-interactively with the --password option taking a
|
||||
passwordfile as argument containing the wallet password in plaintext.`,
|
||||
Subcommands: []cli.Command{
|
||||
{
|
||||
|
||||
Name: "import",
|
||||
Usage: "Import Ethereum presale wallet",
|
||||
ArgsUsage: "<keyFile>",
|
||||
Action: utils.MigrateFlags(importWallet),
|
||||
Category: "ACCOUNT COMMANDS",
|
||||
Flags: []cli.Flag{
|
||||
utils.DataDirFlag,
|
||||
utils.KeyStoreDirFlag,
|
||||
utils.PasswordFileFlag,
|
||||
utils.LightKDFFlag,
|
||||
},
|
||||
Description: `
|
||||
geth wallet [options] /path/to/my/presale.wallet
|
||||
|
||||
will prompt for your password and imports your ether presale account.
|
||||
It can be used non-interactively with the --password option taking a
|
||||
passwordfile as argument containing the wallet password in plaintext.`,
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
accountCommand = cli.Command{
|
||||
Name: "account",
|
||||
Usage: "Manage accounts",
|
||||
Category: "ACCOUNT COMMANDS",
|
||||
Description: `
|
||||
|
||||
Manage accounts, list all existing accounts, import a private key into a new
|
||||
account, create a new account or update an existing account.
|
||||
|
||||
It supports interactive mode, when you are prompted for password as well as
|
||||
non-interactive mode where passwords are supplied via a given password file.
|
||||
Non-interactive mode is only meant for scripted use on test networks or known
|
||||
safe environments.
|
||||
|
||||
Make sure you remember the password you gave when creating a new account (with
|
||||
either new or import). Without it you are not able to unlock your account.
|
||||
|
||||
Note that exporting your key in unencrypted format is NOT supported.
|
||||
|
||||
Keys are stored under <DATADIR>/keystore.
|
||||
It is safe to transfer the entire directory or the individual keys therein
|
||||
between ethereum nodes by simply copying.
|
||||
|
||||
Make sure you backup your keys regularly.`,
|
||||
Subcommands: []cli.Command{
|
||||
{
|
||||
Name: "list",
|
||||
Usage: "Print summary of existing accounts",
|
||||
Action: utils.MigrateFlags(accountList),
|
||||
Flags: []cli.Flag{
|
||||
utils.DataDirFlag,
|
||||
utils.KeyStoreDirFlag,
|
||||
},
|
||||
Description: `
|
||||
Print a short summary of all accounts`,
|
||||
},
|
||||
{
|
||||
Name: "new",
|
||||
Usage: "Create a new account",
|
||||
Action: utils.MigrateFlags(accountCreate),
|
||||
Flags: []cli.Flag{
|
||||
utils.DataDirFlag,
|
||||
utils.KeyStoreDirFlag,
|
||||
utils.PasswordFileFlag,
|
||||
utils.LightKDFFlag,
|
||||
},
|
||||
Description: `
|
||||
geth account new
|
||||
|
||||
Creates a new account and prints the address.
|
||||
|
||||
The account is saved in encrypted format, you are prompted for a passphrase.
|
||||
|
||||
You must remember this passphrase to unlock your account in the future.
|
||||
|
||||
For non-interactive use the passphrase can be specified with the --password flag:
|
||||
|
||||
Note, this is meant to be used for testing only, it is a bad idea to save your
|
||||
password to file or expose in any other way.
|
||||
`,
|
||||
},
|
||||
{
|
||||
Name: "update",
|
||||
Usage: "Update an existing account",
|
||||
Action: utils.MigrateFlags(accountUpdate),
|
||||
ArgsUsage: "<address>",
|
||||
Flags: []cli.Flag{
|
||||
utils.DataDirFlag,
|
||||
utils.KeyStoreDirFlag,
|
||||
utils.LightKDFFlag,
|
||||
},
|
||||
Description: `
|
||||
geth account update <address>
|
||||
|
||||
Update an existing account.
|
||||
|
||||
The account is saved in the newest version in encrypted format, you are prompted
|
||||
for a passphrase to unlock the account and another to save the updated file.
|
||||
|
||||
This same command can therefore be used to migrate an account of a deprecated
|
||||
format to the newest format or change the password for an account.
|
||||
|
||||
For non-interactive use the passphrase can be specified with the --password flag:
|
||||
|
||||
geth account update [options] <address>
|
||||
|
||||
Since only one password can be given, only format update can be performed,
|
||||
changing your password is only possible interactively.
|
||||
`,
|
||||
},
|
||||
{
|
||||
Name: "import",
|
||||
Usage: "Import a private key into a new account",
|
||||
Action: utils.MigrateFlags(accountImport),
|
||||
Flags: []cli.Flag{
|
||||
utils.DataDirFlag,
|
||||
utils.KeyStoreDirFlag,
|
||||
utils.PasswordFileFlag,
|
||||
utils.LightKDFFlag,
|
||||
},
|
||||
ArgsUsage: "<keyFile>",
|
||||
Description: `
|
||||
geth account import <keyfile>
|
||||
|
||||
Imports an unencrypted private key from <keyfile> and creates a new account.
|
||||
Prints the address.
|
||||
|
||||
The keyfile is assumed to contain an unencrypted private key in hexadecimal format.
|
||||
|
||||
The account is saved in encrypted format, you are prompted for a passphrase.
|
||||
|
||||
You must remember this passphrase to unlock your account in the future.
|
||||
|
||||
For non-interactive use the passphrase can be specified with the -password flag:
|
||||
|
||||
geth account import [options] <keyfile>
|
||||
|
||||
Note:
|
||||
As you can directly copy your encrypted accounts to another ethereum instance,
|
||||
this import mechanism is not needed when you transfer an account between
|
||||
nodes.
|
||||
`,
|
||||
},
|
||||
},
|
||||
}
|
||||
)
|
||||
|
||||
func accountList(ctx *cli.Context) error {
|
||||
stack, _ := makeConfigNode(ctx)
|
||||
var index int
|
||||
for _, wallet := range stack.AccountManager().Wallets() {
|
||||
for _, account := range wallet.Accounts() {
|
||||
fmt.Printf("Account #%d: {%x} %s\n", index, account.Address, &account.URL)
|
||||
index++
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// tries unlocking the specified account a few times.
|
||||
func unlockAccount(ks *keystore.KeyStore, address string, i int, passwords []string) (accounts.Account, string) {
|
||||
account, err := utils.MakeAddress(ks, address)
|
||||
if err != nil {
|
||||
utils.Fatalf("Could not list accounts: %v", err)
|
||||
}
|
||||
for trials := 0; trials < 3; trials++ {
|
||||
prompt := fmt.Sprintf("Unlocking account %s | Attempt %d/%d", address, trials+1, 3)
|
||||
password := getPassPhrase(prompt, false, i, passwords)
|
||||
err = ks.Unlock(account, password)
|
||||
if err == nil {
|
||||
log.Info("Unlocked account", "address", account.Address.Hex())
|
||||
return account, password
|
||||
}
|
||||
if err, ok := err.(*keystore.AmbiguousAddrError); ok {
|
||||
log.Info("Unlocked account", "address", account.Address.Hex())
|
||||
return ambiguousAddrRecovery(ks, err, password), password
|
||||
}
|
||||
if err != keystore.ErrDecrypt {
|
||||
// No need to prompt again if the error is not decryption-related.
|
||||
break
|
||||
}
|
||||
}
|
||||
// All trials expended to unlock account, bail out
|
||||
utils.Fatalf("Failed to unlock account %s (%v)", address, err)
|
||||
|
||||
return accounts.Account{}, ""
|
||||
}
|
||||
|
||||
// getPassPhrase retrieves the password associated with an account, either fetched
|
||||
// from a list of preloaded passphrases, or requested interactively from the user.
|
||||
func getPassPhrase(prompt string, confirmation bool, i int, passwords []string) string {
|
||||
// If a list of passwords was supplied, retrieve from them
|
||||
if len(passwords) > 0 {
|
||||
if i < len(passwords) {
|
||||
return passwords[i]
|
||||
}
|
||||
return passwords[len(passwords)-1]
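// If fewer passwords than accounts were supplied, the last one is reused for the remaining accounts.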
|
||||
}
|
||||
// Otherwise prompt the user for the password
|
||||
if prompt != "" {
|
||||
fmt.Println(prompt)
|
||||
}
|
||||
password, err := console.Stdin.PromptPassword("Passphrase: ")
|
||||
if err != nil {
|
||||
utils.Fatalf("Failed to read passphrase: %v", err)
|
||||
}
|
||||
if confirmation {
|
||||
confirm, err := console.Stdin.PromptPassword("Repeat passphrase: ")
|
||||
if err != nil {
|
||||
utils.Fatalf("Failed to read passphrase confirmation: %v", err)
|
||||
}
|
||||
if password != confirm {
|
||||
utils.Fatalf("Passphrases do not match")
|
||||
}
|
||||
}
|
||||
return password
|
||||
}
|
||||
|
||||
func ambiguousAddrRecovery(ks *keystore.KeyStore, err *keystore.AmbiguousAddrError, auth string) accounts.Account {
|
||||
fmt.Printf("Multiple key files exist for address %x:\n", err.Addr)
|
||||
for _, a := range err.Matches {
|
||||
fmt.Println(" ", a.URL)
|
||||
}
|
||||
fmt.Println("Testing your passphrase against all of them...")
|
||||
var match *accounts.Account
|
||||
for _, a := range err.Matches {
|
||||
if err := ks.Unlock(a, auth); err == nil {
|
||||
match = &a
|
||||
break
|
||||
}
|
||||
}
|
||||
if match == nil {
|
||||
utils.Fatalf("None of the listed files could be unlocked.")
|
||||
}
|
||||
fmt.Printf("Your passphrase unlocked %s\n", match.URL)
|
||||
fmt.Println("In order to avoid this warning, you need to remove the following duplicate key files:")
|
||||
for _, a := range err.Matches {
|
||||
if a != *match {
|
||||
fmt.Println(" ", a.URL)
|
||||
}
|
||||
}
|
||||
return *match
|
||||
}
|
||||
|
||||
// accountCreate creates a new account into the keystore defined by the CLI flags.
|
||||
func accountCreate(ctx *cli.Context) error {
|
||||
cfg := gethConfig{Node: defaultNodeConfig()}
|
||||
// Load config file.
|
||||
if file := ctx.GlobalString(configFileFlag.Name); file != "" {
|
||||
if err := loadConfig(file, &cfg); err != nil {
|
||||
utils.Fatalf("%v", err)
|
||||
}
|
||||
}
|
||||
utils.SetNodeConfig(ctx, &cfg.Node)
|
||||
scryptN, scryptP, keydir, err := cfg.Node.AccountConfig()
|
||||
|
||||
if err != nil {
|
||||
utils.Fatalf("Failed to read configuration: %v", err)
|
||||
}
|
||||
|
||||
password := getPassPhrase("Your new account is locked with a password. Please give a password. Do not forget this password.", true, 0, utils.MakePasswordList(ctx))
|
||||
|
||||
address, err := keystore.StoreKey(keydir, password, scryptN, scryptP)
|
||||
|
||||
if err != nil {
|
||||
utils.Fatalf("Failed to create account: %v", err)
|
||||
}
|
||||
fmt.Printf("Address: {%x}\n", address)
|
||||
return nil
|
||||
}
|
||||
|
||||
// accountUpdate transitions an account from a previous format to the current
|
||||
// one, also providing the possibility to change the pass-phrase.
|
||||
func accountUpdate(ctx *cli.Context) error {
|
||||
if len(ctx.Args()) == 0 {
|
||||
utils.Fatalf("No accounts specified to update")
|
||||
}
|
||||
stack, _ := makeConfigNode(ctx)
|
||||
ks := stack.AccountManager().Backends(keystore.KeyStoreType)[0].(*keystore.KeyStore)
|
||||
|
||||
for _, addr := range ctx.Args() {
|
||||
account, oldPassword := unlockAccount(ks, addr, 0, nil)
|
||||
newPassword := getPassPhrase("Please give a new password. Do not forget this password.", true, 0, nil)
|
||||
if err := ks.Update(account, oldPassword, newPassword); err != nil {
|
||||
utils.Fatalf("Could not update the account: %v", err)
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func importWallet(ctx *cli.Context) error {
|
||||
keyfile := ctx.Args().First()
|
||||
if len(keyfile) == 0 {
|
||||
utils.Fatalf("keyfile must be given as argument")
|
||||
}
|
||||
keyJSON, err := ioutil.ReadFile(keyfile)
|
||||
if err != nil {
|
||||
utils.Fatalf("Could not read wallet file: %v", err)
|
||||
}
|
||||
|
||||
stack, _ := makeConfigNode(ctx)
|
||||
passphrase := getPassPhrase("", false, 0, utils.MakePasswordList(ctx))
|
||||
|
||||
ks := stack.AccountManager().Backends(keystore.KeyStoreType)[0].(*keystore.KeyStore)
|
||||
acct, err := ks.ImportPreSaleKey(keyJSON, passphrase)
|
||||
if err != nil {
|
||||
utils.Fatalf("%v", err)
|
||||
}
|
||||
fmt.Printf("Address: {%x}\n", acct.Address)
|
||||
return nil
|
||||
}
|
||||
|
||||
func accountImport(ctx *cli.Context) error {
|
||||
keyfile := ctx.Args().First()
|
||||
if len(keyfile) == 0 {
|
||||
utils.Fatalf("keyfile must be given as argument")
|
||||
}
|
||||
key, err := crypto.LoadECDSA(keyfile)
|
||||
if err != nil {
|
||||
utils.Fatalf("Failed to load the private key: %v", err)
|
||||
}
|
||||
stack, _ := makeConfigNode(ctx)
|
||||
passphrase := getPassPhrase("Your new account is locked with a password. Please give a password. Do not forget this password.", true, 0, utils.MakePasswordList(ctx))
|
||||
|
||||
ks := stack.AccountManager().Backends(keystore.KeyStoreType)[0].(*keystore.KeyStore)
|
||||
acct, err := ks.ImportECDSA(key, passphrase)
|
||||
if err != nil {
|
||||
utils.Fatalf("Could not create the account: %v", err)
|
||||
}
|
||||
fmt.Printf("Address: {%x}\n", acct.Address)
|
||||
return nil
|
||||
}
|
229 vendor/github.com/ethereum/go-ethereum/cmd/geth/config.go generated vendored Normal file
@ -0,0 +1,229 @@
|
||||
// Copyright 2017 The go-ethereum Authors
|
||||
// This file is part of go-ethereum.
|
||||
//
|
||||
// go-ethereum is free software: you can redistribute it and/or modify
|
||||
// it under the terms of the GNU General Public License as published by
|
||||
// the Free Software Foundation, either version 3 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// go-ethereum is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU General Public License
|
||||
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
package main
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"errors"
|
||||
"fmt"
|
||||
"math/big"
|
||||
"os"
|
||||
"reflect"
|
||||
"unicode"
|
||||
|
||||
cli "gopkg.in/urfave/cli.v1"
|
||||
|
||||
"github.com/ethereum/go-ethereum/cmd/utils"
|
||||
"github.com/ethereum/go-ethereum/dashboard"
|
||||
"github.com/ethereum/go-ethereum/eth"
|
||||
"github.com/ethereum/go-ethereum/graphql"
|
||||
"github.com/ethereum/go-ethereum/node"
|
||||
"github.com/ethereum/go-ethereum/params"
|
||||
whisper "github.com/ethereum/go-ethereum/whisper/whisperv6"
|
||||
"github.com/naoina/toml"
|
||||
)
|
||||
|
||||
var (
|
||||
dumpConfigCommand = cli.Command{
|
||||
Action: utils.MigrateFlags(dumpConfig),
|
||||
Name: "dumpconfig",
|
||||
Usage: "Show configuration values",
|
||||
ArgsUsage: "",
|
||||
Flags: append(append(nodeFlags, rpcFlags...), whisperFlags...),
|
||||
Category: "MISCELLANEOUS COMMANDS",
|
||||
Description: `The dumpconfig command shows configuration values.`,
|
||||
}
|
||||
|
||||
configFileFlag = cli.StringFlag{
|
||||
Name: "config",
|
||||
Usage: "TOML configuration file",
|
||||
}
|
||||
)
|
||||
|
||||
// These settings ensure that TOML keys use the same names as Go struct fields.
|
||||
var tomlSettings = toml.Config{
|
||||
NormFieldName: func(rt reflect.Type, key string) string {
|
||||
return key
|
||||
},
|
||||
FieldToKey: func(rt reflect.Type, field string) string {
|
||||
return field
|
||||
},
|
||||
MissingField: func(rt reflect.Type, field string) error {
|
||||
link := ""
|
||||
if unicode.IsUpper(rune(rt.Name()[0])) && rt.PkgPath() != "main" {
|
||||
link = fmt.Sprintf(", see https://godoc.org/%s#%s for available fields", rt.PkgPath(), rt.Name())
|
||||
}
|
||||
return fmt.Errorf("field '%s' is not defined in %s%s", field, rt.String(), link)
|
||||
},
|
||||
}
|
||||
|
||||
type ethstatsConfig struct {
|
||||
URL string `toml:",omitempty"`
|
||||
}
|
||||
|
||||
type gethConfig struct {
|
||||
Eth eth.Config
|
||||
Shh whisper.Config
|
||||
Node node.Config
|
||||
Ethstats ethstatsConfig
|
||||
Dashboard dashboard.Config
|
||||
}
|
||||
|
||||
func loadConfig(file string, cfg *gethConfig) error {
|
||||
f, err := os.Open(file)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer f.Close()
|
||||
|
||||
err = tomlSettings.NewDecoder(bufio.NewReader(f)).Decode(cfg)
|
||||
// Add file name to errors that have a line number.
|
||||
if _, ok := err.(*toml.LineError); ok {
|
||||
err = errors.New(file + ", " + err.Error())
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
func defaultNodeConfig() node.Config {
|
||||
cfg := node.DefaultConfig
|
||||
cfg.Name = clientIdentifier
|
||||
cfg.Version = params.VersionWithCommit(gitCommit)
|
||||
cfg.HTTPModules = append(cfg.HTTPModules, "eth", "shh")
|
||||
cfg.WSModules = append(cfg.WSModules, "eth", "shh")
|
||||
cfg.IPCPath = "geth.ipc"
|
||||
return cfg
|
||||
}
|
||||
|
||||
func makeConfigNode(ctx *cli.Context) (*node.Node, gethConfig) {
|
||||
// Load defaults.
|
||||
cfg := gethConfig{
|
||||
Eth: eth.DefaultConfig,
|
||||
Shh: whisper.DefaultConfig,
|
||||
Node: defaultNodeConfig(),
|
||||
Dashboard: dashboard.DefaultConfig,
|
||||
}
|
||||
|
||||
// Load config file.
|
||||
if file := ctx.GlobalString(configFileFlag.Name); file != "" {
|
||||
if err := loadConfig(file, &cfg); err != nil {
|
||||
utils.Fatalf("%v", err)
|
||||
}
|
||||
}
|
||||
|
||||
// Apply flags.
|
||||
utils.SetULC(ctx, &cfg.Eth)
|
||||
utils.SetNodeConfig(ctx, &cfg.Node)
|
||||
stack, err := node.New(&cfg.Node)
|
||||
if err != nil {
|
||||
utils.Fatalf("Failed to create the protocol stack: %v", err)
|
||||
}
|
||||
utils.SetEthConfig(ctx, stack, &cfg.Eth)
|
||||
if ctx.GlobalIsSet(utils.EthStatsURLFlag.Name) {
|
||||
cfg.Ethstats.URL = ctx.GlobalString(utils.EthStatsURLFlag.Name)
|
||||
}
|
||||
|
||||
utils.SetShhConfig(ctx, stack, &cfg.Shh)
|
||||
utils.SetDashboardConfig(ctx, &cfg.Dashboard)
|
||||
|
||||
return stack, cfg
|
||||
}
|
||||
|
||||
// enableWhisper returns true in case one of the whisper flags is set.
|
||||
func enableWhisper(ctx *cli.Context) bool {
|
||||
for _, flag := range whisperFlags {
|
||||
if ctx.GlobalIsSet(flag.GetName()) {
|
||||
return true
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
func makeFullNode(ctx *cli.Context) *node.Node {
|
||||
stack, cfg := makeConfigNode(ctx)
|
||||
if ctx.GlobalIsSet(utils.ConstantinopleOverrideFlag.Name) {
|
||||
cfg.Eth.ConstantinopleOverride = new(big.Int).SetUint64(ctx.GlobalUint64(utils.ConstantinopleOverrideFlag.Name))
|
||||
}
|
||||
|
||||
utils.RegisterEthService(stack, &cfg.Eth)
|
||||
|
||||
if ctx.GlobalBool(utils.StateDiffFlag.Name) {
|
||||
cfg.Eth.StateDiff = true
|
||||
utils.RegisterStateDiffService(stack, ctx)
|
||||
}
|
||||
|
||||
if ctx.GlobalBool(utils.DashboardEnabledFlag.Name) {
|
||||
utils.RegisterDashboardService(stack, &cfg.Dashboard, gitCommit)
|
||||
}
|
||||
// Whisper must be explicitly enabled by specifying at least 1 whisper flag or in dev mode
|
||||
shhEnabled := enableWhisper(ctx)
|
||||
shhAutoEnabled := !ctx.GlobalIsSet(utils.WhisperEnabledFlag.Name) && ctx.GlobalIsSet(utils.DeveloperFlag.Name)
|
||||
if shhEnabled || shhAutoEnabled {
|
||||
if ctx.GlobalIsSet(utils.WhisperMaxMessageSizeFlag.Name) {
|
||||
cfg.Shh.MaxMessageSize = uint32(ctx.Int(utils.WhisperMaxMessageSizeFlag.Name))
|
||||
}
|
||||
if ctx.GlobalIsSet(utils.WhisperMinPOWFlag.Name) {
|
||||
cfg.Shh.MinimumAcceptedPOW = ctx.Float64(utils.WhisperMinPOWFlag.Name)
|
||||
}
|
||||
if ctx.GlobalIsSet(utils.WhisperRestrictConnectionBetweenLightClientsFlag.Name) {
|
||||
cfg.Shh.RestrictConnectionBetweenLightClients = true
|
||||
}
|
||||
utils.RegisterShhService(stack, &cfg.Shh)
|
||||
}
|
||||
|
||||
// Configure GraphQL if required
|
||||
if ctx.GlobalIsSet(utils.GraphQLEnabledFlag.Name) {
|
||||
if err := graphql.RegisterGraphQLService(stack, cfg.Node.GraphQLEndpoint(), cfg.Node.GraphQLCors, cfg.Node.GraphQLVirtualHosts, cfg.Node.HTTPTimeouts); err != nil {
|
||||
utils.Fatalf("Failed to register the Ethereum service: %v", err)
|
||||
}
|
||||
}
|
||||
|
||||
// Add the Ethereum Stats daemon if requested.
|
||||
if cfg.Ethstats.URL != "" {
|
||||
utils.RegisterEthStatsService(stack, cfg.Ethstats.URL)
|
||||
}
|
||||
|
||||
return stack
|
||||
}
|
||||
|
||||
// dumpConfig is the dumpconfig command.
|
||||
func dumpConfig(ctx *cli.Context) error {
|
||||
_, cfg := makeConfigNode(ctx)
|
||||
comment := ""
|
||||
|
||||
if cfg.Eth.Genesis != nil {
|
||||
cfg.Eth.Genesis = nil
|
||||
comment += "# Note: this config doesn't contain the genesis block.\n\n"
|
||||
}
|
||||
|
||||
out, err := tomlSettings.Marshal(&cfg)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
dump := os.Stdout
|
||||
if ctx.NArg() > 0 {
|
||||
dump, err = os.OpenFile(ctx.Args().Get(0), os.O_RDWR|os.O_CREATE|os.O_TRUNC, 0644)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer dump.Close()
|
||||
}
|
||||
dump.WriteString(comment)
|
||||
dump.Write(out)
|
||||
|
||||
return nil
|
||||
}
|
420 vendor/github.com/ethereum/go-ethereum/cmd/geth/main.go generated vendored Normal file
@ -0,0 +1,420 @@
|
||||
// Copyright 2014 The go-ethereum Authors
|
||||
// This file is part of go-ethereum.
|
||||
//
|
||||
// go-ethereum is free software: you can redistribute it and/or modify
|
||||
// it under the terms of the GNU General Public License as published by
|
||||
// the Free Software Foundation, either version 3 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// go-ethereum is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU General Public License
|
||||
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
// geth is the official command-line client for Ethereum.
|
||||
package main
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"math"
|
||||
"os"
|
||||
godebug "runtime/debug"
|
||||
"sort"
|
||||
"strconv"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/elastic/gosigar"
|
||||
"github.com/ethereum/go-ethereum/accounts"
|
||||
"github.com/ethereum/go-ethereum/accounts/keystore"
|
||||
"github.com/ethereum/go-ethereum/cmd/utils"
|
||||
"github.com/ethereum/go-ethereum/common"
|
||||
"github.com/ethereum/go-ethereum/console"
|
||||
"github.com/ethereum/go-ethereum/eth"
|
||||
"github.com/ethereum/go-ethereum/eth/downloader"
|
||||
"github.com/ethereum/go-ethereum/ethclient"
|
||||
"github.com/ethereum/go-ethereum/internal/debug"
|
||||
"github.com/ethereum/go-ethereum/log"
|
||||
"github.com/ethereum/go-ethereum/metrics"
|
||||
"github.com/ethereum/go-ethereum/node"
|
||||
cli "gopkg.in/urfave/cli.v1"
|
||||
)
|
||||
|
||||
const (
|
||||
clientIdentifier = "geth" // Client identifier to advertise over the network
|
||||
)
|
||||
|
||||
var (
|
||||
// Git SHA1 commit hash of the release (set via linker flags)
|
||||
gitCommit = ""
|
||||
// The app that holds all commands and flags.
|
||||
app = utils.NewApp(gitCommit, "the go-ethereum command line interface")
|
||||
// flags that configure the node
|
||||
nodeFlags = []cli.Flag{
|
||||
utils.IdentityFlag,
|
||||
utils.UnlockedAccountFlag,
|
||||
utils.PasswordFileFlag,
|
||||
utils.BootnodesFlag,
|
||||
utils.BootnodesV4Flag,
|
||||
utils.BootnodesV5Flag,
|
||||
utils.DataDirFlag,
|
||||
utils.KeyStoreDirFlag,
|
||||
utils.ExternalSignerFlag,
|
||||
utils.NoUSBFlag,
|
||||
utils.DashboardEnabledFlag,
|
||||
utils.DashboardAddrFlag,
|
||||
utils.DashboardPortFlag,
|
||||
utils.DashboardRefreshFlag,
|
||||
utils.EthashCacheDirFlag,
|
||||
utils.EthashCachesInMemoryFlag,
|
||||
utils.EthashCachesOnDiskFlag,
|
||||
utils.EthashDatasetDirFlag,
|
||||
utils.EthashDatasetsInMemoryFlag,
|
||||
utils.EthashDatasetsOnDiskFlag,
|
||||
utils.TxPoolLocalsFlag,
|
||||
utils.TxPoolNoLocalsFlag,
|
||||
utils.TxPoolJournalFlag,
|
||||
utils.TxPoolRejournalFlag,
|
||||
utils.TxPoolPriceLimitFlag,
|
||||
utils.TxPoolPriceBumpFlag,
|
||||
utils.TxPoolAccountSlotsFlag,
|
||||
utils.TxPoolGlobalSlotsFlag,
|
||||
utils.TxPoolAccountQueueFlag,
|
||||
utils.TxPoolGlobalQueueFlag,
|
||||
utils.TxPoolLifetimeFlag,
|
||||
utils.ULCModeConfigFlag,
|
||||
utils.OnlyAnnounceModeFlag,
|
||||
utils.ULCTrustedNodesFlag,
|
||||
utils.ULCMinTrustedFractionFlag,
|
||||
utils.SyncModeFlag,
|
||||
utils.ExitWhenSyncedFlag,
|
||||
utils.GCModeFlag,
|
||||
utils.LightServFlag,
|
||||
utils.LightBandwidthInFlag,
|
||||
utils.LightBandwidthOutFlag,
|
||||
utils.LightPeersFlag,
|
||||
utils.LightKDFFlag,
|
||||
utils.WhitelistFlag,
|
||||
utils.CacheFlag,
|
||||
utils.CacheDatabaseFlag,
|
||||
utils.CacheTrieFlag,
|
||||
utils.CacheGCFlag,
|
||||
utils.CacheNoPrefetchFlag,
|
||||
utils.ListenPortFlag,
|
||||
utils.MaxPeersFlag,
|
||||
utils.MaxPendingPeersFlag,
|
||||
utils.MiningEnabledFlag,
|
||||
utils.MinerThreadsFlag,
|
||||
utils.MinerLegacyThreadsFlag,
|
||||
utils.MinerNotifyFlag,
|
||||
utils.MinerGasTargetFlag,
|
||||
utils.MinerLegacyGasTargetFlag,
|
||||
utils.MinerGasLimitFlag,
|
||||
utils.MinerGasPriceFlag,
|
||||
utils.MinerLegacyGasPriceFlag,
|
||||
utils.MinerEtherbaseFlag,
|
||||
utils.MinerLegacyEtherbaseFlag,
|
||||
utils.MinerExtraDataFlag,
|
||||
utils.MinerLegacyExtraDataFlag,
|
||||
utils.MinerRecommitIntervalFlag,
|
||||
utils.MinerNoVerfiyFlag,
|
||||
utils.NATFlag,
|
||||
utils.NoDiscoverFlag,
|
||||
utils.DiscoveryV5Flag,
|
||||
utils.NetrestrictFlag,
|
||||
utils.NodeKeyFileFlag,
|
||||
utils.NodeKeyHexFlag,
|
||||
utils.DeveloperFlag,
|
||||
utils.DeveloperPeriodFlag,
|
||||
utils.TestnetFlag,
|
||||
utils.RinkebyFlag,
|
||||
utils.GoerliFlag,
|
||||
utils.VMEnableDebugFlag,
|
||||
utils.NetworkIdFlag,
|
||||
utils.ConstantinopleOverrideFlag,
|
||||
utils.EthStatsURLFlag,
|
||||
utils.FakePoWFlag,
|
||||
utils.NoCompactionFlag,
|
||||
utils.GpoBlocksFlag,
|
||||
utils.GpoPercentileFlag,
|
||||
utils.EWASMInterpreterFlag,
|
||||
utils.EVMInterpreterFlag,
|
||||
utils.StateDiffFlag,
|
||||
configFileFlag,
|
||||
}
|
||||
|
||||
rpcFlags = []cli.Flag{
|
||||
utils.RPCEnabledFlag,
|
||||
utils.RPCListenAddrFlag,
|
||||
utils.RPCPortFlag,
|
||||
utils.RPCCORSDomainFlag,
|
||||
utils.RPCVirtualHostsFlag,
|
||||
utils.GraphQLEnabledFlag,
|
||||
utils.GraphQLListenAddrFlag,
|
||||
utils.GraphQLPortFlag,
|
||||
utils.GraphQLCORSDomainFlag,
|
||||
utils.GraphQLVirtualHostsFlag,
|
||||
utils.RPCApiFlag,
|
||||
utils.WSEnabledFlag,
|
||||
utils.WSListenAddrFlag,
|
||||
utils.WSPortFlag,
|
||||
utils.WSApiFlag,
|
||||
utils.WSAllowedOriginsFlag,
|
||||
utils.IPCDisabledFlag,
|
||||
utils.IPCPathFlag,
|
||||
utils.InsecureUnlockAllowedFlag,
|
||||
utils.RPCGlobalGasCap,
|
||||
}
|
||||
|
||||
whisperFlags = []cli.Flag{
|
||||
utils.WhisperEnabledFlag,
|
||||
utils.WhisperMaxMessageSizeFlag,
|
||||
utils.WhisperMinPOWFlag,
|
||||
utils.WhisperRestrictConnectionBetweenLightClientsFlag,
|
||||
}
|
||||
|
||||
metricsFlags = []cli.Flag{
|
||||
utils.MetricsEnabledFlag,
|
||||
utils.MetricsEnabledExpensiveFlag,
|
||||
utils.MetricsEnableInfluxDBFlag,
|
||||
utils.MetricsInfluxDBEndpointFlag,
|
||||
utils.MetricsInfluxDBDatabaseFlag,
|
||||
utils.MetricsInfluxDBUsernameFlag,
|
||||
utils.MetricsInfluxDBPasswordFlag,
|
||||
utils.MetricsInfluxDBTagsFlag,
|
||||
}
|
||||
)
|
||||
|
||||
func init() {
|
||||
// Initialize the CLI app and start Geth
|
||||
app.Action = geth
|
||||
app.HideVersion = true // we have a command to print the version
|
||||
app.Copyright = "Copyright 2013-2019 The go-ethereum Authors"
|
||||
app.Commands = []cli.Command{
|
||||
// See chaincmd.go:
|
||||
initCommand,
|
||||
importCommand,
|
||||
exportCommand,
|
||||
importPreimagesCommand,
|
||||
exportPreimagesCommand,
|
||||
copydbCommand,
|
||||
removedbCommand,
|
||||
dumpCommand,
|
||||
// See accountcmd.go:
|
||||
accountCommand,
|
||||
walletCommand,
|
||||
// See consolecmd.go:
|
||||
consoleCommand,
|
||||
attachCommand,
|
||||
javascriptCommand,
|
||||
// See misccmd.go:
|
||||
makecacheCommand,
|
||||
makedagCommand,
|
||||
versionCommand,
|
||||
licenseCommand,
|
||||
// See config.go
|
||||
dumpConfigCommand,
|
||||
}
|
||||
sort.Sort(cli.CommandsByName(app.Commands))
|
||||
|
||||
app.Flags = append(app.Flags, nodeFlags...)
|
||||
app.Flags = append(app.Flags, rpcFlags...)
|
||||
app.Flags = append(app.Flags, consoleFlags...)
|
||||
app.Flags = append(app.Flags, debug.Flags...)
|
||||
app.Flags = append(app.Flags, whisperFlags...)
|
||||
app.Flags = append(app.Flags, metricsFlags...)
|
||||
|
||||
app.Before = func(ctx *cli.Context) error {
|
||||
logdir := ""
|
||||
if ctx.GlobalBool(utils.DashboardEnabledFlag.Name) {
|
||||
logdir = (&node.Config{DataDir: utils.MakeDataDir(ctx)}).ResolvePath("logs")
|
||||
}
|
||||
if err := debug.Setup(ctx, logdir); err != nil {
|
||||
return err
|
||||
}
|
||||
// Cap the cache allowance and tune the garbage collector
|
||||
var mem gosigar.Mem
|
||||
if err := mem.Get(); err == nil {
|
||||
allowance := int(mem.Total / 1024 / 1024 / 3)
|
||||
if cache := ctx.GlobalInt(utils.CacheFlag.Name); cache > allowance {
|
||||
log.Warn("Sanitizing cache to Go's GC limits", "provided", cache, "updated", allowance)
|
||||
ctx.GlobalSet(utils.CacheFlag.Name, strconv.Itoa(allowance))
|
||||
}
|
||||
}
|
||||
// Ensure Go's GC ignores the database cache for trigger percentage
|
||||
cache := ctx.GlobalInt(utils.CacheFlag.Name)
|
||||
gogc := math.Max(20, math.Min(100, 100/(float64(cache)/1024)))
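// cache is in MiB, so GOGC stays at 100 for caches up to 1 GiB and scales down
// for larger caches, with a floor of 20.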
|
||||
|
||||
log.Debug("Sanitizing Go's GC trigger", "percent", int(gogc))
|
||||
godebug.SetGCPercent(int(gogc))
|
||||
|
||||
// Start metrics export if enabled
|
||||
utils.SetupMetrics(ctx)
|
||||
|
||||
// Start system runtime metrics collection
|
||||
go metrics.CollectProcessMetrics(3 * time.Second)
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
app.After = func(ctx *cli.Context) error {
|
||||
debug.Exit()
|
||||
console.Stdin.Close() // Resets terminal mode.
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
||||
func main() {
|
||||
if err := app.Run(os.Args); err != nil {
|
||||
fmt.Fprintln(os.Stderr, err)
|
||||
os.Exit(1)
|
||||
}
|
||||
}
|
||||
|
||||
// geth is the main entry point into the system if no special subcommand is run.
|
||||
// It creates a default node based on the command line arguments and runs it in
|
||||
// blocking mode, waiting for it to be shut down.
|
||||
func geth(ctx *cli.Context) error {
|
||||
if args := ctx.Args(); len(args) > 0 {
|
||||
return fmt.Errorf("invalid command: %q", args[0])
|
||||
}
|
||||
node := makeFullNode(ctx)
|
||||
defer node.Close()
|
||||
startNode(ctx, node)
|
||||
node.Wait()
|
||||
return nil
|
||||
}
|
||||
|
||||
// startNode boots up the system node and all registered protocols, after which
|
||||
// it unlocks any requested accounts, and starts the RPC/IPC interfaces and the
|
||||
// miner.
|
||||
func startNode(ctx *cli.Context, stack *node.Node) {
|
||||
debug.Memsize.Add("node", stack)
|
||||
|
||||
// Start up the node itself
|
||||
utils.StartNode(stack)
|
||||
|
||||
// Unlock any account specifically requested
|
||||
unlockAccounts(ctx, stack)
|
||||
|
||||
// Register wallet event handlers to open and auto-derive wallets
|
||||
events := make(chan accounts.WalletEvent, 16)
|
||||
stack.AccountManager().Subscribe(events)
|
||||
|
||||
go func() {
|
||||
// Create a chain state reader for self-derivation
|
||||
rpcClient, err := stack.Attach()
|
||||
if err != nil {
|
||||
utils.Fatalf("Failed to attach to self: %v", err)
|
||||
}
|
||||
stateReader := ethclient.NewClient(rpcClient)
|
||||
|
||||
// Open any wallets already attached
|
||||
for _, wallet := range stack.AccountManager().Wallets() {
|
||||
if err := wallet.Open(""); err != nil {
|
||||
log.Warn("Failed to open wallet", "url", wallet.URL(), "err", err)
|
||||
}
|
||||
}
|
||||
// Listen for wallet events until termination
|
||||
for event := range events {
|
||||
switch event.Kind {
|
||||
case accounts.WalletArrived:
|
||||
if err := event.Wallet.Open(""); err != nil {
|
||||
log.Warn("New wallet appeared, failed to open", "url", event.Wallet.URL(), "err", err)
|
||||
}
|
||||
case accounts.WalletOpened:
|
||||
status, _ := event.Wallet.Status()
|
||||
log.Info("New wallet appeared", "url", event.Wallet.URL(), "status", status)
|
||||
|
||||
derivationPath := accounts.DefaultBaseDerivationPath
|
||||
if event.Wallet.URL().Scheme == "ledger" {
|
||||
derivationPath = accounts.DefaultLedgerBaseDerivationPath
|
||||
}
|
||||
event.Wallet.SelfDerive(derivationPath, stateReader)
|
||||
|
||||
case accounts.WalletDropped:
|
||||
log.Info("Old wallet dropped", "url", event.Wallet.URL())
|
||||
event.Wallet.Close()
|
||||
}
|
||||
}
|
||||
}()
|
||||
|
||||
// Spawn a standalone goroutine for status synchronization monitoring,
|
||||
// close the node when synchronization is complete if user required.
|
||||
if ctx.GlobalBool(utils.ExitWhenSyncedFlag.Name) {
|
||||
go func() {
|
||||
sub := stack.EventMux().Subscribe(downloader.DoneEvent{})
|
||||
defer sub.Unsubscribe()
|
||||
for {
|
||||
event := <-sub.Chan()
|
||||
if event == nil {
|
||||
continue
|
||||
}
|
||||
done, ok := event.Data.(downloader.DoneEvent)
|
||||
if !ok {
|
||||
continue
|
||||
}
|
||||
if timestamp := time.Unix(int64(done.Latest.Time), 0); time.Since(timestamp) < 10*time.Minute {
|
||||
log.Info("Synchronisation completed", "latestnum", done.Latest.Number, "latesthash", done.Latest.Hash(),
|
||||
"age", common.PrettyAge(timestamp))
|
||||
stack.Stop()
|
||||
}
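// Only stop the node when the synced head is recent (under 10 minutes old), so a
// stale done event during a long sync doesn't shut it down prematurely.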
|
||||
|
||||
}
|
||||
}()
|
||||
}
|
||||
|
||||
// Start auxiliary services if enabled
|
||||
if ctx.GlobalBool(utils.MiningEnabledFlag.Name) || ctx.GlobalBool(utils.DeveloperFlag.Name) {
|
||||
// Mining only makes sense if a full Ethereum node is running
|
||||
if ctx.GlobalString(utils.SyncModeFlag.Name) == "light" {
|
||||
utils.Fatalf("Light clients do not support mining")
|
||||
}
|
||||
var ethereum *eth.Ethereum
|
||||
if err := stack.Service(ðereum); err != nil {
|
||||
utils.Fatalf("Ethereum service not running: %v", err)
|
||||
}
|
||||
// Set the gas price to the limits from the CLI and start mining
|
||||
gasprice := utils.GlobalBig(ctx, utils.MinerLegacyGasPriceFlag.Name)
|
||||
if ctx.IsSet(utils.MinerGasPriceFlag.Name) {
|
||||
gasprice = utils.GlobalBig(ctx, utils.MinerGasPriceFlag.Name)
|
||||
}
|
||||
ethereum.TxPool().SetGasPrice(gasprice)
|
||||
|
||||
threads := ctx.GlobalInt(utils.MinerLegacyThreadsFlag.Name)
|
||||
if ctx.GlobalIsSet(utils.MinerThreadsFlag.Name) {
|
||||
threads = ctx.GlobalInt(utils.MinerThreadsFlag.Name)
|
||||
}
|
||||
if err := ethereum.StartMining(threads); err != nil {
|
||||
utils.Fatalf("Failed to start mining: %v", err)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// unlockAccounts unlocks any account specifically requested.
|
||||
func unlockAccounts(ctx *cli.Context, stack *node.Node) {
|
||||
var unlocks []string
|
||||
inputs := strings.Split(ctx.GlobalString(utils.UnlockedAccountFlag.Name), ",")
|
||||
for _, input := range inputs {
|
||||
if trimmed := strings.TrimSpace(input); trimmed != "" {
|
||||
unlocks = append(unlocks, trimmed)
|
||||
}
|
||||
}
|
||||
// Short circuit if there is no account to unlock.
|
||||
if len(unlocks) == 0 {
|
||||
return
|
||||
}
|
||||
// If insecure account unlocking is not allowed and the node's APIs are exposed externally,
// print a warning to the user and refuse to unlock.
|
||||
if !stack.Config().InsecureUnlockAllowed && stack.Config().ExtRPCEnabled() {
|
||||
utils.Fatalf("Account unlock with HTTP access is forbidden!")
|
||||
}
|
||||
ks := stack.AccountManager().Backends(keystore.KeyStoreType)[0].(*keystore.KeyStore)
|
||||
passwords := utils.MakePasswordList(ctx)
|
||||
for i, account := range unlocks {
|
||||
unlockAccount(ks, account, i, passwords)
|
||||
}
|
||||
}
|
359 vendor/github.com/ethereum/go-ethereum/cmd/geth/usage.go generated vendored Normal file
@ -0,0 +1,359 @@
|
||||
// Copyright 2015 The go-ethereum Authors
|
||||
// This file is part of go-ethereum.
|
||||
//
|
||||
// go-ethereum is free software: you can redistribute it and/or modify
|
||||
// it under the terms of the GNU General Public License as published by
|
||||
// the Free Software Foundation, either version 3 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// go-ethereum is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU General Public License
|
||||
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
// Contains the geth command usage template and generator.
|
||||
|
||||
package main
|
||||
|
||||
import (
|
||||
"io"
|
||||
"sort"
|
||||
|
||||
"strings"
|
||||
|
||||
"github.com/ethereum/go-ethereum/cmd/utils"
|
||||
"github.com/ethereum/go-ethereum/internal/debug"
|
||||
cli "gopkg.in/urfave/cli.v1"
|
||||
)
|
||||
|
||||
// AppHelpTemplate is the text template for the default, global app help topic.
|
||||
var AppHelpTemplate = `NAME:
|
||||
{{.App.Name}} - {{.App.Usage}}
|
||||
|
||||
Copyright 2013-2019 The go-ethereum Authors
|
||||
|
||||
USAGE:
|
||||
{{.App.HelpName}} [options]{{if .App.Commands}} command [command options]{{end}} {{if .App.ArgsUsage}}{{.App.ArgsUsage}}{{else}}[arguments...]{{end}}
|
||||
{{if .App.Version}}
|
||||
VERSION:
|
||||
{{.App.Version}}
|
||||
{{end}}{{if len .App.Authors}}
|
||||
AUTHOR(S):
|
||||
{{range .App.Authors}}{{ . }}{{end}}
|
||||
{{end}}{{if .App.Commands}}
|
||||
COMMANDS:
|
||||
{{range .App.Commands}}{{join .Names ", "}}{{ "\t" }}{{.Usage}}
|
||||
{{end}}{{end}}{{if .FlagGroups}}
|
||||
{{range .FlagGroups}}{{.Name}} OPTIONS:
|
||||
{{range .Flags}}{{.}}
|
||||
{{end}}
|
||||
{{end}}{{end}}{{if .App.Copyright }}
|
||||
COPYRIGHT:
|
||||
{{.App.Copyright}}
|
||||
{{end}}
|
||||
`
|
||||
|
||||
// flagGroup is a collection of flags belonging to a single topic.
|
||||
type flagGroup struct {
|
||||
Name string
|
||||
Flags []cli.Flag
|
||||
}
|
||||
|
||||
// AppHelpFlagGroups is the application flags, grouped by functionality.
|
||||
var AppHelpFlagGroups = []flagGroup{
|
||||
{
|
||||
Name: "ETHEREUM",
|
||||
Flags: []cli.Flag{
|
||||
configFileFlag,
|
||||
utils.DataDirFlag,
|
||||
utils.KeyStoreDirFlag,
|
||||
utils.NoUSBFlag,
|
||||
utils.NetworkIdFlag,
|
||||
utils.TestnetFlag,
|
||||
utils.RinkebyFlag,
|
||||
utils.GoerliFlag,
|
||||
utils.SyncModeFlag,
|
||||
utils.ExitWhenSyncedFlag,
|
||||
utils.GCModeFlag,
|
||||
utils.EthStatsURLFlag,
|
||||
utils.IdentityFlag,
|
||||
utils.LightServFlag,
|
||||
utils.LightBandwidthInFlag,
|
||||
utils.LightBandwidthOutFlag,
|
||||
utils.LightPeersFlag,
|
||||
utils.LightKDFFlag,
|
||||
utils.WhitelistFlag,
|
||||
},
|
||||
},
|
||||
{
|
||||
Name: "DEVELOPER CHAIN",
|
||||
Flags: []cli.Flag{
|
||||
utils.DeveloperFlag,
|
||||
utils.DeveloperPeriodFlag,
|
||||
},
|
||||
},
|
||||
{
|
||||
Name: "ETHASH",
|
||||
Flags: []cli.Flag{
|
||||
utils.EthashCacheDirFlag,
|
||||
utils.EthashCachesInMemoryFlag,
|
||||
utils.EthashCachesOnDiskFlag,
|
||||
utils.EthashDatasetDirFlag,
|
||||
utils.EthashDatasetsInMemoryFlag,
|
||||
utils.EthashDatasetsOnDiskFlag,
|
||||
},
|
||||
},
|
||||
//{
|
||||
// Name: "DASHBOARD",
|
||||
// Flags: []cli.Flag{
|
||||
// utils.DashboardEnabledFlag,
|
||||
// utils.DashboardAddrFlag,
|
||||
// utils.DashboardPortFlag,
|
||||
// utils.DashboardRefreshFlag,
|
||||
// utils.DashboardAssetsFlag,
|
||||
// },
|
||||
//},
|
||||
{
|
||||
Name: "TRANSACTION POOL",
|
||||
Flags: []cli.Flag{
|
||||
utils.TxPoolLocalsFlag,
|
||||
utils.TxPoolNoLocalsFlag,
|
||||
utils.TxPoolJournalFlag,
|
||||
utils.TxPoolRejournalFlag,
|
||||
utils.TxPoolPriceLimitFlag,
|
||||
utils.TxPoolPriceBumpFlag,
|
||||
utils.TxPoolAccountSlotsFlag,
|
||||
utils.TxPoolGlobalSlotsFlag,
|
||||
utils.TxPoolAccountQueueFlag,
|
||||
utils.TxPoolGlobalQueueFlag,
|
||||
utils.TxPoolLifetimeFlag,
|
||||
},
|
||||
},
|
||||
{
|
||||
Name: "PERFORMANCE TUNING",
|
||||
Flags: []cli.Flag{
|
||||
utils.CacheFlag,
|
||||
utils.CacheDatabaseFlag,
|
||||
utils.CacheTrieFlag,
|
||||
utils.CacheGCFlag,
|
||||
utils.CacheNoPrefetchFlag,
|
||||
},
|
||||
},
|
||||
{
|
||||
Name: "ACCOUNT",
|
||||
Flags: []cli.Flag{
|
||||
utils.UnlockedAccountFlag,
|
||||
utils.PasswordFileFlag,
|
||||
utils.ExternalSignerFlag,
|
||||
utils.InsecureUnlockAllowedFlag,
|
||||
},
|
||||
},
|
||||
{
|
||||
Name: "API AND CONSOLE",
|
||||
Flags: []cli.Flag{
|
||||
utils.RPCEnabledFlag,
|
||||
utils.RPCListenAddrFlag,
|
||||
utils.RPCPortFlag,
|
||||
utils.RPCApiFlag,
|
||||
utils.RPCGlobalGasCap,
|
||||
utils.WSEnabledFlag,
|
||||
utils.WSListenAddrFlag,
|
||||
utils.WSPortFlag,
|
||||
utils.WSApiFlag,
|
||||
utils.WSAllowedOriginsFlag,
|
||||
utils.IPCDisabledFlag,
|
||||
utils.IPCPathFlag,
|
||||
utils.RPCCORSDomainFlag,
|
||||
utils.RPCVirtualHostsFlag,
|
||||
utils.JSpathFlag,
|
||||
utils.ExecFlag,
|
||||
utils.PreloadJSFlag,
|
||||
},
|
||||
},
|
||||
{
|
||||
Name: "NETWORKING",
|
||||
Flags: []cli.Flag{
|
||||
utils.BootnodesFlag,
|
||||
utils.BootnodesV4Flag,
|
||||
utils.BootnodesV5Flag,
|
||||
utils.ListenPortFlag,
|
||||
utils.MaxPeersFlag,
|
||||
utils.MaxPendingPeersFlag,
|
||||
utils.NATFlag,
|
||||
utils.NoDiscoverFlag,
|
||||
utils.DiscoveryV5Flag,
|
||||
utils.NetrestrictFlag,
|
||||
utils.NodeKeyFileFlag,
|
||||
utils.NodeKeyHexFlag,
|
||||
},
|
||||
},
|
||||
{
|
||||
Name: "MINER",
|
||||
Flags: []cli.Flag{
|
||||
utils.MiningEnabledFlag,
|
||||
utils.MinerThreadsFlag,
|
||||
utils.MinerNotifyFlag,
|
||||
utils.MinerGasPriceFlag,
|
||||
utils.MinerGasTargetFlag,
|
||||
utils.MinerGasLimitFlag,
|
||||
utils.MinerEtherbaseFlag,
|
||||
utils.MinerExtraDataFlag,
|
||||
utils.MinerRecommitIntervalFlag,
|
||||
utils.MinerNoVerfiyFlag,
|
||||
},
|
||||
},
|
||||
{
|
||||
Name: "GAS PRICE ORACLE",
|
||||
Flags: []cli.Flag{
|
||||
utils.GpoBlocksFlag,
|
||||
utils.GpoPercentileFlag,
|
||||
},
|
||||
},
|
||||
{
|
||||
Name: "VIRTUAL MACHINE",
|
||||
Flags: []cli.Flag{
|
||||
utils.VMEnableDebugFlag,
|
||||
utils.EVMInterpreterFlag,
|
||||
utils.EWASMInterpreterFlag,
|
||||
},
|
||||
},
|
||||
{
|
||||
Name: "LOGGING AND DEBUGGING",
|
||||
Flags: append([]cli.Flag{
|
||||
utils.FakePoWFlag,
|
||||
utils.NoCompactionFlag,
|
||||
}, debug.Flags...),
|
||||
},
|
||||
{
|
||||
Name: "METRICS AND STATS",
|
||||
Flags: metricsFlags,
|
||||
},
|
||||
{
|
||||
Name: "WHISPER (EXPERIMENTAL)",
|
||||
Flags: whisperFlags,
|
||||
},
|
||||
{
|
||||
Name: "DEPRECATED",
|
||||
Flags: []cli.Flag{
|
||||
utils.MinerLegacyThreadsFlag,
|
||||
utils.MinerLegacyGasTargetFlag,
|
||||
utils.MinerLegacyGasPriceFlag,
|
||||
utils.MinerLegacyEtherbaseFlag,
|
||||
utils.MinerLegacyExtraDataFlag,
|
||||
},
|
||||
},
|
||||
{
|
||||
Name: "STATE DIFF",
|
||||
Flags: []cli.Flag{
|
||||
utils.StateDiffFlag,
|
||||
},
|
||||
},
|
||||
{
|
||||
Name: "MISC",
|
||||
},
|
||||
}
|
||||
|
||||
// byCategory sorts an array of flagGroup by Name in the order
// defined in AppHelpFlagGroups.
type byCategory []flagGroup

func (a byCategory) Len() int { return len(a) }
func (a byCategory) Swap(i, j int) { a[i], a[j] = a[j], a[i] }
func (a byCategory) Less(i, j int) bool {
iCat, jCat := a[i].Name, a[j].Name
iIdx, jIdx := len(AppHelpFlagGroups), len(AppHelpFlagGroups) // ensure non-categorized flags come last

for i, group := range AppHelpFlagGroups {
if iCat == group.Name {
iIdx = i
}
if jCat == group.Name {
jIdx = i
}
}

return iIdx < jIdx
}

func flagCategory(flag cli.Flag) string {
for _, category := range AppHelpFlagGroups {
for _, flg := range category.Flags {
if flg.GetName() == flag.GetName() {
return category.Name
}
}
}
return "MISC"
}

func init() {
// Override the default app help template
cli.AppHelpTemplate = AppHelpTemplate

// Define a one shot struct to pass to the usage template
type helpData struct {
App interface{}
FlagGroups []flagGroup
}

// Override the default app help printer, but only for the global app help
originalHelpPrinter := cli.HelpPrinter
cli.HelpPrinter = func(w io.Writer, tmpl string, data interface{}) {
if tmpl == AppHelpTemplate {
// Iterate over all the flags and add any uncategorized ones
categorized := make(map[string]struct{})
for _, group := range AppHelpFlagGroups {
for _, flag := range group.Flags {
categorized[flag.String()] = struct{}{}
}
}
var uncategorized []cli.Flag
for _, flag := range data.(*cli.App).Flags {
if _, ok := categorized[flag.String()]; !ok {
if strings.HasPrefix(flag.GetName(), "dashboard") {
continue
}
uncategorized = append(uncategorized, flag)
}
}
if len(uncategorized) > 0 {
// Append all uncategorized options to the misc group
miscs := len(AppHelpFlagGroups[len(AppHelpFlagGroups)-1].Flags)
AppHelpFlagGroups[len(AppHelpFlagGroups)-1].Flags = append(AppHelpFlagGroups[len(AppHelpFlagGroups)-1].Flags, uncategorized...)

// Make sure they are removed afterwards
defer func() {
AppHelpFlagGroups[len(AppHelpFlagGroups)-1].Flags = AppHelpFlagGroups[len(AppHelpFlagGroups)-1].Flags[:miscs]
}()
}
// Render out custom usage screen
originalHelpPrinter(w, tmpl, helpData{data, AppHelpFlagGroups})
} else if tmpl == utils.CommandHelpTemplate {
// Iterate over all command specific flags and categorize them
categorized := make(map[string][]cli.Flag)
for _, flag := range data.(cli.Command).Flags {
if _, ok := categorized[flag.String()]; !ok {
categorized[flagCategory(flag)] = append(categorized[flagCategory(flag)], flag)
}
}

// sort to get a stable ordering
sorted := make([]flagGroup, 0, len(categorized))
for cat, flgs := range categorized {
sorted = append(sorted, flagGroup{cat, flgs})
}
sort.Sort(byCategory(sorted))

// add sorted array to data and render with default printer
originalHelpPrinter(w, tmpl, map[string]interface{}{
"cmd": data,
"categorizedFlags": sorted,
})
} else {
originalHelpPrinter(w, tmpl, data)
}
}
}
617
vendor/github.com/ethereum/go-ethereum/cmd/swarm/access_test.go
generated
vendored
Normal file
@ -0,0 +1,617 @@
|
||||
// Copyright 2018 The go-ethereum Authors
|
||||
// This file is part of go-ethereum.
|
||||
//
|
||||
// go-ethereum is free software: you can redistribute it and/or modify
|
||||
// it under the terms of the GNU General Public License as published by
|
||||
// the Free Software Foundation, either version 3 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// go-ethereum is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU General Public License
|
||||
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
package main
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"crypto/rand"
|
||||
"encoding/hex"
|
||||
"encoding/json"
|
||||
"io"
|
||||
"io/ioutil"
|
||||
gorand "math/rand"
|
||||
"net/http"
|
||||
"os"
|
||||
"runtime"
|
||||
"strings"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/ethereum/go-ethereum/crypto"
|
||||
"github.com/ethereum/go-ethereum/crypto/ecies"
|
||||
"github.com/ethereum/go-ethereum/log"
|
||||
"github.com/ethereum/go-ethereum/swarm/api"
|
||||
swarmapi "github.com/ethereum/go-ethereum/swarm/api/client"
|
||||
"github.com/ethereum/go-ethereum/swarm/testutil"
|
||||
"golang.org/x/crypto/sha3"
|
||||
)
|
||||
|
||||
const (
|
||||
hashRegexp = `[a-f\d]{128}`
|
||||
data = "notsorandomdata"
|
||||
)
|
||||
|
||||
var DefaultCurve = crypto.S256()
|
||||
|
||||
func TestACT(t *testing.T) {
|
||||
if runtime.GOOS == "windows" {
|
||||
t.Skip()
|
||||
}
|
||||
|
||||
cluster := newTestCluster(t, clusterSize)
|
||||
defer cluster.Shutdown()
|
||||
|
||||
cases := []struct {
|
||||
name string
|
||||
f func(t *testing.T, cluster *testCluster)
|
||||
}{
|
||||
{"Password", testPassword},
|
||||
{"PK", testPK},
|
||||
{"ACTWithoutBogus", testACTWithoutBogus},
|
||||
{"ACTWithBogus", testACTWithBogus},
|
||||
}
|
||||
|
||||
for _, tc := range cases {
|
||||
t.Run(tc.name, func(t *testing.T) {
|
||||
tc.f(t, cluster)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
// testPassword tests for the correct creation of an ACT manifest protected by a password.
|
||||
// The test creates bogus content, uploads it encrypted, then creates the wrapping manifest with the Access entry
|
||||
// The parties participating: a publisher node uploads to a second node and then disappears. The content that was uploaded
|
||||
// is then fetched through the 2nd node. Since the tested code is not key-aware, we can just
|
||||
// fetch from the 2nd node using HTTP BasicAuth
|
||||
func testPassword(t *testing.T, cluster *testCluster) {
|
||||
dataFilename := testutil.TempFileWithContent(t, data)
|
||||
defer os.RemoveAll(dataFilename)
|
||||
|
||||
// upload the file with 'swarm up' and expect a hash
|
||||
up := runSwarm(t,
|
||||
"--bzzapi",
|
||||
cluster.Nodes[0].URL,
|
||||
"up",
|
||||
"--encrypt",
|
||||
dataFilename)
|
||||
_, matches := up.ExpectRegexp(hashRegexp)
|
||||
up.ExpectExit()
|
||||
|
||||
if len(matches) < 1 {
|
||||
t.Fatal("no matches found")
|
||||
}
|
||||
|
||||
ref := matches[0]
|
||||
tmp, err := ioutil.TempDir("", "swarm-test")
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
defer os.RemoveAll(tmp)
|
||||
password := "smth"
|
||||
passwordFilename := testutil.TempFileWithContent(t, "smth")
|
||||
defer os.RemoveAll(passwordFilename)
|
||||
|
||||
up = runSwarm(t,
|
||||
"access",
|
||||
"new",
|
||||
"pass",
|
||||
"--dry-run",
|
||||
"--password",
|
||||
passwordFilename,
|
||||
ref,
|
||||
)
|
||||
|
||||
_, matches = up.ExpectRegexp(".+")
|
||||
up.ExpectExit()
|
||||
|
||||
if len(matches) == 0 {
|
||||
t.Fatalf("stdout not matched")
|
||||
}
|
||||
|
||||
var m api.Manifest
|
||||
|
||||
err = json.Unmarshal([]byte(matches[0]), &m)
|
||||
if err != nil {
|
||||
t.Fatalf("unmarshal manifest: %v", err)
|
||||
}
|
||||
|
||||
if len(m.Entries) != 1 {
|
||||
t.Fatalf("expected one manifest entry, got %v", len(m.Entries))
|
||||
}
|
||||
|
||||
e := m.Entries[0]
|
||||
|
||||
ct := "application/bzz-manifest+json"
|
||||
if e.ContentType != ct {
|
||||
t.Errorf("expected %q content type, got %q", ct, e.ContentType)
|
||||
}
|
||||
|
||||
if e.Access == nil {
|
||||
t.Fatal("manifest access is nil")
|
||||
}
|
||||
|
||||
a := e.Access
|
||||
|
||||
if a.Type != "pass" {
|
||||
t.Errorf(`got access type %q, expected "pass"`, a.Type)
|
||||
}
|
||||
if len(a.Salt) < 32 {
|
||||
t.Errorf(`got salt with length %v, expected not less the 32 bytes`, len(a.Salt))
|
||||
}
|
||||
if a.KdfParams == nil {
|
||||
t.Fatal("manifest access kdf params is nil")
|
||||
}
|
||||
if a.Publisher != "" {
|
||||
t.Fatal("should be empty")
|
||||
}
|
||||
|
||||
client := swarmapi.NewClient(cluster.Nodes[0].URL)
|
||||
|
||||
hash, err := client.UploadManifest(&m, false)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
url := cluster.Nodes[0].URL + "/" + "bzz:/" + hash
|
||||
|
||||
httpClient := &http.Client{}
|
||||
response, err := httpClient.Get(url)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if response.StatusCode != http.StatusUnauthorized {
|
||||
t.Fatal("should be a 401")
|
||||
}
|
||||
authHeader := response.Header.Get("WWW-Authenticate")
|
||||
if authHeader == "" {
|
||||
t.Fatal("should be something here")
|
||||
}
|
||||
|
||||
req, err := http.NewRequest(http.MethodGet, url, nil)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
req.SetBasicAuth("", password)
|
||||
|
||||
response, err = http.DefaultClient.Do(req)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
defer response.Body.Close()
|
||||
|
||||
if response.StatusCode != http.StatusOK {
|
||||
t.Errorf("expected status %v, got %v", http.StatusOK, response.StatusCode)
|
||||
}
|
||||
d, err := ioutil.ReadAll(response.Body)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if string(d) != data {
|
||||
t.Errorf("expected decrypted data %q, got %q", data, string(d))
|
||||
}
|
||||
|
||||
wrongPasswordFilename := testutil.TempFileWithContent(t, "just wr0ng")
|
||||
defer os.RemoveAll(wrongPasswordFilename)
|
||||
|
||||
//download file with 'swarm down' with wrong password
|
||||
up = runSwarm(t,
|
||||
"--bzzapi",
|
||||
cluster.Nodes[0].URL,
|
||||
"down",
|
||||
"bzz:/"+hash,
|
||||
tmp,
|
||||
"--password",
|
||||
wrongPasswordFilename)
|
||||
|
||||
_, matches = up.ExpectRegexp("unauthorized")
|
||||
if len(matches) != 1 && matches[0] != "unauthorized" {
|
||||
t.Fatal(`"unauthorized" not found in output"`)
|
||||
}
|
||||
up.ExpectExit()
|
||||
}
|
||||
|
||||
// testPK tests for the correct creation of an ACT manifest between two parties (publisher and grantee).
|
||||
// The test creates bogus content, uploads it encrypted, then creates the wrapping manifest with the Access entry
|
||||
// The parties participating - node (publisher), uploads to second node (which is also the grantee) then disappears.
|
||||
// Content which was uploaded is then fetched through the grantee's http proxy. Since the tested code is private-key aware,
|
||||
// the test will fail if the proxy's given private key is not granted on the ACT.
|
||||
func testPK(t *testing.T, cluster *testCluster) {
|
||||
dataFilename := testutil.TempFileWithContent(t, data)
|
||||
defer os.RemoveAll(dataFilename)
|
||||
|
||||
// upload the file with 'swarm up' and expect a hash
|
||||
up := runSwarm(t,
|
||||
"--bzzapi",
|
||||
cluster.Nodes[0].URL,
|
||||
"up",
|
||||
"--encrypt",
|
||||
dataFilename)
|
||||
_, matches := up.ExpectRegexp(hashRegexp)
|
||||
up.ExpectExit()
|
||||
|
||||
if len(matches) < 1 {
|
||||
t.Fatal("no matches found")
|
||||
}
|
||||
|
||||
ref := matches[0]
|
||||
pk := cluster.Nodes[0].PrivateKey
|
||||
granteePubKey := crypto.CompressPubkey(&pk.PublicKey)
|
||||
|
||||
publisherDir, err := ioutil.TempDir("", "swarm-account-dir-temp")
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
passwordFilename := testutil.TempFileWithContent(t, testPassphrase)
|
||||
defer os.RemoveAll(passwordFilename)
|
||||
|
||||
_, publisherAccount := getTestAccount(t, publisherDir)
|
||||
up = runSwarm(t,
|
||||
"--bzzaccount",
|
||||
publisherAccount.Address.String(),
|
||||
"--password",
|
||||
passwordFilename,
|
||||
"--datadir",
|
||||
publisherDir,
|
||||
"--bzzapi",
|
||||
cluster.Nodes[0].URL,
|
||||
"access",
|
||||
"new",
|
||||
"pk",
|
||||
"--dry-run",
|
||||
"--grant-key",
|
||||
hex.EncodeToString(granteePubKey),
|
||||
ref,
|
||||
)
|
||||
|
||||
_, matches = up.ExpectRegexp(".+")
|
||||
up.ExpectExit()
|
||||
|
||||
if len(matches) == 0 {
|
||||
t.Fatalf("stdout not matched")
|
||||
}
|
||||
|
||||
//get the public key from the publisher directory
|
||||
publicKeyFromDataDir := runSwarm(t,
|
||||
"--bzzaccount",
|
||||
publisherAccount.Address.String(),
|
||||
"--password",
|
||||
passwordFilename,
|
||||
"--datadir",
|
||||
publisherDir,
|
||||
"print-keys",
|
||||
"--compressed",
|
||||
)
|
||||
_, publicKeyString := publicKeyFromDataDir.ExpectRegexp(".+")
|
||||
publicKeyFromDataDir.ExpectExit()
|
||||
pkComp := strings.Split(publicKeyString[0], "=")[1]
|
||||
var m api.Manifest
|
||||
|
||||
err = json.Unmarshal([]byte(matches[0]), &m)
|
||||
if err != nil {
|
||||
t.Fatalf("unmarshal manifest: %v", err)
|
||||
}
|
||||
|
||||
if len(m.Entries) != 1 {
|
||||
t.Fatalf("expected one manifest entry, got %v", len(m.Entries))
|
||||
}
|
||||
|
||||
e := m.Entries[0]
|
||||
|
||||
ct := "application/bzz-manifest+json"
|
||||
if e.ContentType != ct {
|
||||
t.Errorf("expected %q content type, got %q", ct, e.ContentType)
|
||||
}
|
||||
|
||||
if e.Access == nil {
|
||||
t.Fatal("manifest access is nil")
|
||||
}
|
||||
|
||||
a := e.Access
|
||||
|
||||
if a.Type != "pk" {
|
||||
t.Errorf(`got access type %q, expected "pk"`, a.Type)
|
||||
}
|
||||
if len(a.Salt) < 32 {
|
||||
t.Errorf(`got salt with length %v, expected not less the 32 bytes`, len(a.Salt))
|
||||
}
|
||||
if a.KdfParams != nil {
|
||||
t.Fatal("manifest access kdf params should be nil")
|
||||
}
|
||||
if a.Publisher != pkComp {
|
||||
t.Fatal("publisher key did not match")
|
||||
}
|
||||
client := swarmapi.NewClient(cluster.Nodes[0].URL)
|
||||
|
||||
hash, err := client.UploadManifest(&m, false)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
httpClient := &http.Client{}
|
||||
|
||||
url := cluster.Nodes[0].URL + "/" + "bzz:/" + hash
|
||||
response, err := httpClient.Get(url)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if response.StatusCode != http.StatusOK {
|
||||
t.Fatal("should be a 200")
|
||||
}
|
||||
d, err := ioutil.ReadAll(response.Body)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if string(d) != data {
|
||||
t.Errorf("expected decrypted data %q, got %q", data, string(d))
|
||||
}
|
||||
}
|
||||
|
||||
// testACTWithoutBogus tests the creation of the ACT manifest end-to-end, without any bogus entries (i.e. default scenario = 3 nodes 1 unauthorized)
|
||||
func testACTWithoutBogus(t *testing.T, cluster *testCluster) {
|
||||
testACT(t, cluster, 0)
|
||||
}
|
||||
|
||||
// testACTWithBogus tests the creation of the ACT manifest end-to-end, with 100 bogus entries (i.e. 100 EC keys + default scenario = 3 nodes 1 unauthorized = 103 keys in the ACT manifest)
|
||||
func testACTWithBogus(t *testing.T, cluster *testCluster) {
|
||||
testACT(t, cluster, 100)
|
||||
}
|
||||
|
||||
// testACT tests the e2e creation, uploading and downloading of an ACT access control with both EC keys AND password protection.
|
||||
// The test fires up a 3-node cluster, then randomly picks 2 nodes which will be acting as grantees to the data
|
||||
// set and also protects the ACT with a password. The third node should fail to decode the reference, as it will not be granted access.
|
||||
// The third node then tries to download using a correct password (and succeeds), then uses a wrong password and fails.
|
||||
// The publisher uploads through one of the nodes, then disappears.
|
||||
func testACT(t *testing.T, cluster *testCluster, bogusEntries int) {
|
||||
var uploadThroughNode = cluster.Nodes[0]
|
||||
client := swarmapi.NewClient(uploadThroughNode.URL)
|
||||
|
||||
r1 := gorand.New(gorand.NewSource(time.Now().UnixNano()))
|
||||
nodeToSkip := r1.Intn(clusterSize) // a number between 0 and 2 (node indices in `cluster`)
|
||||
dataFilename := testutil.TempFileWithContent(t, data)
|
||||
defer os.RemoveAll(dataFilename)
|
||||
|
||||
// upload the file with 'swarm up' and expect a hash
|
||||
up := runSwarm(t,
|
||||
"--bzzapi",
|
||||
cluster.Nodes[0].URL,
|
||||
"up",
|
||||
"--encrypt",
|
||||
dataFilename)
|
||||
_, matches := up.ExpectRegexp(hashRegexp)
|
||||
up.ExpectExit()
|
||||
|
||||
if len(matches) < 1 {
|
||||
t.Fatal("no matches found")
|
||||
}
|
||||
|
||||
ref := matches[0]
|
||||
var grantees []string
|
||||
for i, v := range cluster.Nodes {
|
||||
if i == nodeToSkip {
|
||||
continue
|
||||
}
|
||||
pk := v.PrivateKey
|
||||
granteePubKey := crypto.CompressPubkey(&pk.PublicKey)
|
||||
grantees = append(grantees, hex.EncodeToString(granteePubKey))
|
||||
}
|
||||
|
||||
if bogusEntries > 0 {
|
||||
var bogusGrantees []string
|
||||
|
||||
for i := 0; i < bogusEntries; i++ {
|
||||
prv, err := ecies.GenerateKey(rand.Reader, DefaultCurve, nil)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
bogusGrantees = append(bogusGrantees, hex.EncodeToString(crypto.CompressPubkey(&prv.ExportECDSA().PublicKey)))
|
||||
}
|
||||
r2 := gorand.New(gorand.NewSource(time.Now().UnixNano()))
|
||||
for i := 0; i < len(grantees); i++ {
|
||||
insertAtIdx := r2.Intn(len(bogusGrantees))
|
||||
bogusGrantees = append(bogusGrantees[:insertAtIdx], append([]string{grantees[i]}, bogusGrantees[insertAtIdx:]...)...)
|
||||
}
|
||||
grantees = bogusGrantees
|
||||
}
|
||||
granteesPubkeyListFile := testutil.TempFileWithContent(t, strings.Join(grantees, "\n"))
|
||||
defer os.RemoveAll(granteesPubkeyListFile)
|
||||
|
||||
publisherDir, err := ioutil.TempDir("", "swarm-account-dir-temp")
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
defer os.RemoveAll(publisherDir)
|
||||
|
||||
passwordFilename := testutil.TempFileWithContent(t, testPassphrase)
|
||||
defer os.RemoveAll(passwordFilename)
|
||||
actPasswordFilename := testutil.TempFileWithContent(t, "smth")
|
||||
defer os.RemoveAll(actPasswordFilename)
|
||||
_, publisherAccount := getTestAccount(t, publisherDir)
|
||||
up = runSwarm(t,
|
||||
"--bzzaccount",
|
||||
publisherAccount.Address.String(),
|
||||
"--password",
|
||||
passwordFilename,
|
||||
"--datadir",
|
||||
publisherDir,
|
||||
"--bzzapi",
|
||||
cluster.Nodes[0].URL,
|
||||
"access",
|
||||
"new",
|
||||
"act",
|
||||
"--grant-keys",
|
||||
granteesPubkeyListFile,
|
||||
"--password",
|
||||
actPasswordFilename,
|
||||
ref,
|
||||
)
|
||||
|
||||
_, matches = up.ExpectRegexp(`[a-f\d]{64}`)
|
||||
up.ExpectExit()
|
||||
|
||||
if len(matches) == 0 {
|
||||
t.Fatalf("stdout not matched")
|
||||
}
|
||||
|
||||
//get the public key from the publisher directory
|
||||
publicKeyFromDataDir := runSwarm(t,
|
||||
"--bzzaccount",
|
||||
publisherAccount.Address.String(),
|
||||
"--password",
|
||||
passwordFilename,
|
||||
"--datadir",
|
||||
publisherDir,
|
||||
"print-keys",
|
||||
"--compressed",
|
||||
)
|
||||
_, publicKeyString := publicKeyFromDataDir.ExpectRegexp(".+")
|
||||
publicKeyFromDataDir.ExpectExit()
|
||||
pkComp := strings.Split(publicKeyString[0], "=")[1]
|
||||
|
||||
hash := matches[0]
|
||||
m, _, err := client.DownloadManifest(hash)
|
||||
if err != nil {
|
||||
t.Fatalf("unmarshal manifest: %v", err)
|
||||
}
|
||||
|
||||
if len(m.Entries) != 1 {
|
||||
t.Fatalf("expected one manifest entry, got %v", len(m.Entries))
|
||||
}
|
||||
|
||||
e := m.Entries[0]
|
||||
|
||||
ct := "application/bzz-manifest+json"
|
||||
if e.ContentType != ct {
|
||||
t.Errorf("expected %q content type, got %q", ct, e.ContentType)
|
||||
}
|
||||
|
||||
if e.Access == nil {
|
||||
t.Fatal("manifest access is nil")
|
||||
}
|
||||
|
||||
a := e.Access
|
||||
|
||||
if a.Type != "act" {
|
||||
t.Fatalf(`got access type %q, expected "act"`, a.Type)
|
||||
}
|
||||
if len(a.Salt) < 32 {
|
||||
t.Fatalf(`got salt with length %v, expected not less the 32 bytes`, len(a.Salt))
|
||||
}
|
||||
|
||||
if a.Publisher != pkComp {
|
||||
t.Fatal("publisher key did not match")
|
||||
}
|
||||
httpClient := &http.Client{}
|
||||
|
||||
// all nodes except the skipped node should be able to decrypt the content
|
||||
for i, node := range cluster.Nodes {
|
||||
log.Debug("trying to fetch from node", "node index", i)
|
||||
|
||||
url := node.URL + "/" + "bzz:/" + hash
|
||||
response, err := httpClient.Get(url)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
log.Debug("got response from node", "response code", response.StatusCode)
|
||||
|
||||
if i == nodeToSkip {
|
||||
log.Debug("reached node to skip", "status code", response.StatusCode)
|
||||
|
||||
if response.StatusCode != http.StatusUnauthorized {
|
||||
t.Fatalf("should be a 401")
|
||||
}
|
||||
|
||||
// try downloading using a password instead, using the unauthorized node
|
||||
passwordUrl := strings.Replace(url, "http://", "http://:smth@", -1)
|
||||
response, err = httpClient.Get(passwordUrl)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if response.StatusCode != http.StatusOK {
|
||||
t.Fatal("should be a 200")
|
||||
}
|
||||
|
||||
// now try with the wrong password, expect 401
|
||||
passwordUrl = strings.Replace(url, "http://", "http://:smthWrong@", -1)
|
||||
response, err = httpClient.Get(passwordUrl)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if response.StatusCode != http.StatusUnauthorized {
|
||||
t.Fatal("should be a 401")
|
||||
}
|
||||
continue
|
||||
}
|
||||
|
||||
if response.StatusCode != http.StatusOK {
|
||||
t.Fatal("should be a 200")
|
||||
}
|
||||
d, err := ioutil.ReadAll(response.Body)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if string(d) != data {
|
||||
t.Errorf("expected decrypted data %q, got %q", data, string(d))
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// TestKeypairSanity is a sanity test for the crypto scheme for ACT. It asserts the correct shared secret according to
|
||||
// the specs at https://github.com/ethersphere/swarm-docs/blob/eb857afda906c6e7bb90d37f3f334ccce5eef230/act.md
|
||||
func TestKeypairSanity(t *testing.T) {
|
||||
salt := make([]byte, 32)
|
||||
if _, err := io.ReadFull(rand.Reader, salt); err != nil {
|
||||
t.Fatalf("reading from crypto/rand failed: %v", err.Error())
|
||||
}
|
||||
sharedSecret := "a85586744a1ddd56a7ed9f33fa24f40dd745b3a941be296a0d60e329dbdb896d"
|
||||
|
||||
for i, v := range []struct {
|
||||
publisherPriv string
|
||||
granteePub string
|
||||
}{
|
||||
{
|
||||
publisherPriv: "ec5541555f3bc6376788425e9d1a62f55a82901683fd7062c5eddcc373a73459",
|
||||
granteePub: "0226f213613e843a413ad35b40f193910d26eb35f00154afcde9ded57479a6224a",
|
||||
},
|
||||
{
|
||||
publisherPriv: "70c7a73011aa56584a0009ab874794ee7e5652fd0c6911cd02f8b6267dd82d2d",
|
||||
granteePub: "02e6f8d5e28faaa899744972bb847b6eb805a160494690c9ee7197ae9f619181db",
|
||||
},
|
||||
} {
|
||||
b, _ := hex.DecodeString(v.granteePub)
|
||||
granteePub, _ := crypto.DecompressPubkey(b)
|
||||
publisherPrivate, _ := crypto.HexToECDSA(v.publisherPriv)
|
||||
|
||||
ssKey, err := api.NewSessionKeyPK(publisherPrivate, granteePub, salt)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
hasher := sha3.NewLegacyKeccak256()
|
||||
hasher.Write(salt)
|
||||
shared, err := hex.DecodeString(sharedSecret)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
hasher.Write(shared)
|
||||
sum := hasher.Sum(nil)
|
||||
|
||||
if !bytes.Equal(ssKey, sum) {
|
||||
t.Fatalf("%d: got a session key mismatch", i)
|
||||
}
|
||||
}
|
||||
}
|
502
vendor/github.com/ethereum/go-ethereum/cmd/swarm/run_test.go
generated
vendored
Normal file
@ -0,0 +1,502 @@
|
||||
// Copyright 2017 The go-ethereum Authors
|
||||
// This file is part of go-ethereum.
|
||||
//
|
||||
// go-ethereum is free software: you can redistribute it and/or modify
|
||||
// it under the terms of the GNU General Public License as published by
|
||||
// the Free Software Foundation, either version 3 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// go-ethereum is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU General Public License
|
||||
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
package main
|
||||
|
||||
import (
|
||||
"context"
|
||||
"crypto/ecdsa"
|
||||
"flag"
|
||||
"fmt"
|
||||
"io/ioutil"
|
||||
"net"
|
||||
"os"
|
||||
"path"
|
||||
"path/filepath"
|
||||
"runtime"
|
||||
"sync"
|
||||
"syscall"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/docker/docker/pkg/reexec"
|
||||
"github.com/ethereum/go-ethereum/accounts"
|
||||
"github.com/ethereum/go-ethereum/accounts/keystore"
|
||||
"github.com/ethereum/go-ethereum/internal/cmdtest"
|
||||
"github.com/ethereum/go-ethereum/node"
|
||||
"github.com/ethereum/go-ethereum/p2p"
|
||||
"github.com/ethereum/go-ethereum/rpc"
|
||||
"github.com/ethereum/go-ethereum/swarm"
|
||||
"github.com/ethereum/go-ethereum/swarm/api"
|
||||
swarmhttp "github.com/ethereum/go-ethereum/swarm/api/http"
|
||||
)
|
||||
|
||||
var loglevel = flag.Int("loglevel", 3, "verbosity of logs")
|
||||
|
||||
func init() {
|
||||
// Run the app if we've been exec'd as "swarm-test" in runSwarm.
|
||||
reexec.Register("swarm-test", func() {
|
||||
if err := app.Run(os.Args); err != nil {
|
||||
fmt.Fprintln(os.Stderr, err)
|
||||
os.Exit(1)
|
||||
}
|
||||
os.Exit(0)
|
||||
})
|
||||
}
|
||||
|
||||
const clusterSize = 3
|
||||
|
||||
func serverFunc(api *api.API) swarmhttp.TestServer {
|
||||
return swarmhttp.NewServer(api, "")
|
||||
}
|
||||
func TestMain(m *testing.M) {
|
||||
// check if we have been reexec'd
|
||||
if reexec.Init() {
|
||||
return
|
||||
}
|
||||
os.Exit(m.Run())
|
||||
}
|
||||
|
||||
func runSwarm(t *testing.T, args ...string) *cmdtest.TestCmd {
|
||||
tt := cmdtest.NewTestCmd(t, nil)
|
||||
|
||||
found := false
|
||||
for _, v := range args {
|
||||
if v == "--bootnodes" {
|
||||
found = true
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
if !found {
|
||||
args = append([]string{"--bootnodes", ""}, args...)
|
||||
}
|
||||
|
||||
// Boot "swarm". This actually runs the test binary but the TestMain
|
||||
// function will prevent any tests from running.
|
||||
tt.Run("swarm-test", args...)
|
||||
|
||||
return tt
|
||||
}
|
||||
|
||||
type testCluster struct {
|
||||
Nodes []*testNode
|
||||
TmpDir string
|
||||
}
|
||||
|
||||
// newTestCluster starts a test swarm cluster of the given size.
|
||||
//
|
||||
// A temporary directory is created and each node gets a data directory inside
|
||||
// it.
|
||||
//
|
||||
// Each node listens on 127.0.0.1 with random ports for both the HTTP and p2p
|
||||
// ports (assigned by first listening on 127.0.0.1:0 and then passing the ports
|
||||
// as flags).
|
||||
//
|
||||
// When starting more than one node, they are connected together using the
|
||||
// admin addPeer RPC method.
|
||||
|
||||
func newTestCluster(t *testing.T, size int) *testCluster {
|
||||
cluster := &testCluster{}
|
||||
defer func() {
|
||||
if t.Failed() {
|
||||
cluster.Shutdown()
|
||||
}
|
||||
}()
|
||||
|
||||
tmpdir, err := ioutil.TempDir("", "swarm-test")
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
cluster.TmpDir = tmpdir
|
||||
|
||||
// start the nodes
|
||||
cluster.StartNewNodes(t, size)
|
||||
|
||||
if size == 1 {
|
||||
return cluster
|
||||
}
|
||||
|
||||
// connect the nodes together
|
||||
for _, node := range cluster.Nodes {
|
||||
if err := node.Client.Call(nil, "admin_addPeer", cluster.Nodes[0].Enode); err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
}
|
||||
|
||||
// wait until all nodes have the correct number of peers
|
||||
outer:
|
||||
for _, node := range cluster.Nodes {
|
||||
var peers []*p2p.PeerInfo
|
||||
for start := time.Now(); time.Since(start) < time.Minute; time.Sleep(50 * time.Millisecond) {
|
||||
if err := node.Client.Call(&peers, "admin_peers"); err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if len(peers) == len(cluster.Nodes)-1 {
|
||||
continue outer
|
||||
}
|
||||
}
|
||||
t.Fatalf("%s only has %d / %d peers", node.Name, len(peers), len(cluster.Nodes)-1)
|
||||
}
|
||||
|
||||
return cluster
|
||||
}
|
||||
|
||||
func (c *testCluster) Shutdown() {
|
||||
c.Stop()
|
||||
c.Cleanup()
|
||||
}
|
||||
|
||||
func (c *testCluster) Stop() {
|
||||
for _, node := range c.Nodes {
|
||||
node.Shutdown()
|
||||
}
|
||||
}
|
||||
|
||||
func (c *testCluster) StartNewNodes(t *testing.T, size int) {
|
||||
c.Nodes = make([]*testNode, 0, size)
|
||||
|
||||
errors := make(chan error, size)
|
||||
nodes := make(chan *testNode, size)
|
||||
for i := 0; i < size; i++ {
|
||||
go func(nodeIndex int) {
|
||||
dir := filepath.Join(c.TmpDir, fmt.Sprintf("swarm%02d", nodeIndex))
|
||||
if err := os.Mkdir(dir, 0700); err != nil {
|
||||
errors <- err
|
||||
return
|
||||
}
|
||||
|
||||
node := newTestNode(t, dir)
|
||||
node.Name = fmt.Sprintf("swarm%02d", nodeIndex)
|
||||
nodes <- node
|
||||
}(i)
|
||||
}
|
||||
|
||||
for i := 0; i < size; i++ {
|
||||
select {
|
||||
case node := <-nodes:
|
||||
c.Nodes = append(c.Nodes, node)
|
||||
case err := <-errors:
|
||||
t.Error(err)
|
||||
}
|
||||
}
|
||||
|
||||
if t.Failed() {
|
||||
c.Shutdown()
|
||||
t.FailNow()
|
||||
}
|
||||
}
|
||||
|
||||
func (c *testCluster) StartExistingNodes(t *testing.T, size int, bzzaccount string) {
|
||||
c.Nodes = make([]*testNode, 0, size)
|
||||
for i := 0; i < size; i++ {
|
||||
dir := filepath.Join(c.TmpDir, fmt.Sprintf("swarm%02d", i))
|
||||
node := existingTestNode(t, dir, bzzaccount)
|
||||
node.Name = fmt.Sprintf("swarm%02d", i)
|
||||
|
||||
c.Nodes = append(c.Nodes, node)
|
||||
}
|
||||
}
|
||||
|
||||
func (c *testCluster) Cleanup() {
|
||||
os.RemoveAll(c.TmpDir)
|
||||
}
|
||||
|
||||
type testNode struct {
|
||||
Name string
|
||||
Addr string
|
||||
URL string
|
||||
Enode string
|
||||
Dir string
|
||||
IpcPath string
|
||||
PrivateKey *ecdsa.PrivateKey
|
||||
Client *rpc.Client
|
||||
Cmd *cmdtest.TestCmd
|
||||
}
|
||||
|
||||
const testPassphrase = "swarm-test-passphrase"
|
||||
|
||||
func getTestAccount(t *testing.T, dir string) (conf *node.Config, account accounts.Account) {
|
||||
// create key
|
||||
conf = &node.Config{
|
||||
DataDir: dir,
|
||||
IPCPath: "bzzd.ipc",
|
||||
NoUSB: true,
|
||||
}
|
||||
n, err := node.New(conf)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
account, err = n.AccountManager().Backends(keystore.KeyStoreType)[0].(*keystore.KeyStore).NewAccount(testPassphrase)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
// use a unique IPCPath when running tests on Windows
|
||||
if runtime.GOOS == "windows" {
|
||||
conf.IPCPath = fmt.Sprintf("bzzd-%s.ipc", account.Address.String())
|
||||
}
|
||||
|
||||
return conf, account
|
||||
}
|
||||
|
||||
func existingTestNode(t *testing.T, dir string, bzzaccount string) *testNode {
|
||||
conf, _ := getTestAccount(t, dir)
|
||||
node := &testNode{Dir: dir}
|
||||
|
||||
// use a unique IPCPath when running tests on Windows
|
||||
if runtime.GOOS == "windows" {
|
||||
conf.IPCPath = fmt.Sprintf("bzzd-%s.ipc", bzzaccount)
|
||||
}
|
||||
|
||||
// assign ports
|
||||
ports, err := getAvailableTCPPorts(2)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
p2pPort := ports[0]
|
||||
httpPort := ports[1]
|
||||
|
||||
// start the node
|
||||
node.Cmd = runSwarm(t,
|
||||
"--bootnodes", "",
|
||||
"--port", p2pPort,
|
||||
"--nat", "extip:127.0.0.1",
|
||||
"--datadir", dir,
|
||||
"--ipcpath", conf.IPCPath,
|
||||
"--ens-api", "",
|
||||
"--bzzaccount", bzzaccount,
|
||||
"--bzznetworkid", "321",
|
||||
"--bzzport", httpPort,
|
||||
"--verbosity", fmt.Sprint(*loglevel),
|
||||
)
|
||||
node.Cmd.InputLine(testPassphrase)
|
||||
defer func() {
|
||||
if t.Failed() {
|
||||
node.Shutdown()
|
||||
}
|
||||
}()
|
||||
|
||||
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
|
||||
defer cancel()
|
||||
|
||||
// ensure that all ports have active listeners
|
||||
// so that the next node will not get the same ports
|
||||
// when calling getAvailableTCPPorts
|
||||
err = waitTCPPorts(ctx, ports...)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
// wait for the node to start
|
||||
for start := time.Now(); time.Since(start) < 10*time.Second; time.Sleep(50 * time.Millisecond) {
|
||||
node.Client, err = rpc.Dial(conf.IPCEndpoint())
|
||||
if err == nil {
|
||||
break
|
||||
}
|
||||
}
|
||||
if node.Client == nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
// load info
|
||||
var info swarm.Info
|
||||
if err := node.Client.Call(&info, "bzz_info"); err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
node.Addr = net.JoinHostPort("127.0.0.1", info.Port)
|
||||
node.URL = "http://" + node.Addr
|
||||
|
||||
var nodeInfo p2p.NodeInfo
|
||||
if err := node.Client.Call(&nodeInfo, "admin_nodeInfo"); err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
node.Enode = nodeInfo.Enode
|
||||
node.IpcPath = conf.IPCPath
|
||||
return node
|
||||
}
|
||||
|
||||
func newTestNode(t *testing.T, dir string) *testNode {
|
||||
|
||||
conf, account := getTestAccount(t, dir)
|
||||
ks := keystore.NewKeyStore(path.Join(dir, "keystore"), 1<<18, 1)
|
||||
|
||||
pk := decryptStoreAccount(ks, account.Address.Hex(), []string{testPassphrase})
|
||||
|
||||
node := &testNode{Dir: dir, PrivateKey: pk}
|
||||
|
||||
// assign ports
|
||||
ports, err := getAvailableTCPPorts(2)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
p2pPort := ports[0]
|
||||
httpPort := ports[1]
|
||||
|
||||
// start the node
|
||||
node.Cmd = runSwarm(t,
|
||||
"--bootnodes", "",
|
||||
"--port", p2pPort,
|
||||
"--nat", "extip:127.0.0.1",
|
||||
"--datadir", dir,
|
||||
"--ipcpath", conf.IPCPath,
|
||||
"--ens-api", "",
|
||||
"--bzzaccount", account.Address.String(),
|
||||
"--bzznetworkid", "321",
|
||||
"--bzzport", httpPort,
|
||||
"--verbosity", fmt.Sprint(*loglevel),
|
||||
)
|
||||
node.Cmd.InputLine(testPassphrase)
|
||||
defer func() {
|
||||
if t.Failed() {
|
||||
node.Shutdown()
|
||||
}
|
||||
}()
|
||||
|
||||
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
|
||||
defer cancel()
|
||||
|
||||
// ensure that all ports have active listeners
|
||||
// so that the next node will not get the same ports
|
||||
// when calling getAvailableTCPPorts
|
||||
err = waitTCPPorts(ctx, ports...)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
// wait for the node to start
|
||||
for start := time.Now(); time.Since(start) < 10*time.Second; time.Sleep(50 * time.Millisecond) {
|
||||
node.Client, err = rpc.Dial(conf.IPCEndpoint())
|
||||
if err == nil {
|
||||
break
|
||||
}
|
||||
}
|
||||
if node.Client == nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
// load info
|
||||
var info swarm.Info
|
||||
if err := node.Client.Call(&info, "bzz_info"); err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
node.Addr = net.JoinHostPort("127.0.0.1", info.Port)
|
||||
node.URL = "http://" + node.Addr
|
||||
|
||||
var nodeInfo p2p.NodeInfo
|
||||
if err := node.Client.Call(&nodeInfo, "admin_nodeInfo"); err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
node.Enode = nodeInfo.Enode
|
||||
node.IpcPath = conf.IPCPath
|
||||
return node
|
||||
}
|
||||
|
||||
func (n *testNode) Shutdown() {
|
||||
if n.Cmd != nil {
|
||||
n.Cmd.Kill()
|
||||
}
|
||||
}
|
||||
|
||||
// getAvailableTCPPorts returns a set of ports that
|
||||
// nothing is listening on at the time.
|
||||
//
|
||||
// Function assignTCPPort cannot be called in sequence
|
||||
// and guarantee that the same port will be returned in
|
||||
// different calls as the listener is closed within the function,
|
||||
// not after all listeners are started and selected unique
|
||||
// available ports.
|
||||
func getAvailableTCPPorts(count int) (ports []string, err error) {
|
||||
for i := 0; i < count; i++ {
|
||||
l, err := net.Listen("tcp", "127.0.0.1:0")
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
// defer close in the loop to be sure the same port will not
|
||||
// be selected in the next iteration
|
||||
defer l.Close()
|
||||
|
||||
_, port, err := net.SplitHostPort(l.Addr().String())
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
ports = append(ports, port)
|
||||
}
|
||||
return ports, nil
|
||||
}
|
||||
|
||||
// waitTCPPorts blocks until tcp connections can be
|
||||
// established on all provided ports. It runs all
|
||||
// ports dialers in parallel, and returns the first
|
||||
// encountered error.
|
||||
// See waitTCPPort also.
|
||||
func waitTCPPorts(ctx context.Context, ports ...string) error {
|
||||
var err error
|
||||
// mu locks err variable that is assigned in
|
||||
// other goroutines
|
||||
var mu sync.Mutex
|
||||
|
||||
// cancel is canceling all goroutines
|
||||
// when the first error is returned
|
||||
// to prevent unnecessary waiting
|
||||
ctx, cancel := context.WithCancel(ctx)
|
||||
defer cancel()
|
||||
|
||||
var wg sync.WaitGroup
|
||||
for _, port := range ports {
|
||||
wg.Add(1)
|
||||
go func(port string) {
|
||||
defer wg.Done()
|
||||
|
||||
e := waitTCPPort(ctx, port)
|
||||
|
||||
mu.Lock()
|
||||
defer mu.Unlock()
|
||||
if e != nil && err == nil {
|
||||
err = e
|
||||
cancel()
|
||||
}
|
||||
}(port)
|
||||
}
|
||||
wg.Wait()
|
||||
|
||||
return err
|
||||
}
|
||||
|
||||
// waitTCPPort blocks until tcp connection can be established
|
||||
// on a provided port. It has a 3 minute timeout as maximum,
|
||||
// to prevent long waiting, but it can be shortened with
|
||||
// a provided context instance. Dialer has a 10 second timeout
|
||||
// in every iteration, and connection refused error will be
|
||||
// retried in 100 milliseconds periods.
|
||||
func waitTCPPort(ctx context.Context, port string) error {
|
||||
ctx, cancel := context.WithTimeout(ctx, 3*time.Minute)
|
||||
defer cancel()
|
||||
|
||||
for {
|
||||
c, err := (&net.Dialer{Timeout: 10 * time.Second}).DialContext(ctx, "tcp", "127.0.0.1:"+port)
|
||||
if err != nil {
|
||||
if operr, ok := err.(*net.OpError); ok {
|
||||
if syserr, ok := operr.Err.(*os.SyscallError); ok && syserr.Err == syscall.ECONNREFUSED {
|
||||
time.Sleep(100 * time.Millisecond)
|
||||
continue
|
||||
}
|
||||
}
|
||||
return err
|
||||
}
|
||||
return c.Close()
|
||||
}
|
||||
}
|
359
vendor/github.com/ethereum/go-ethereum/cmd/swarm/upload_test.go
generated
vendored
Normal file
@ -0,0 +1,359 @@
|
||||
// Copyright 2017 The go-ethereum Authors
|
||||
// This file is part of go-ethereum.
|
||||
//
|
||||
// go-ethereum is free software: you can redistribute it and/or modify
|
||||
// it under the terms of the GNU General Public License as published by
|
||||
// the Free Software Foundation, either version 3 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// go-ethereum is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU General Public License
|
||||
// along with go-ethereum. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
package main
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"fmt"
|
||||
"io"
|
||||
"io/ioutil"
|
||||
"net/http"
|
||||
"os"
|
||||
"path"
|
||||
"path/filepath"
|
||||
"runtime"
|
||||
"strings"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/ethereum/go-ethereum/log"
|
||||
swarmapi "github.com/ethereum/go-ethereum/swarm/api/client"
|
||||
"github.com/ethereum/go-ethereum/swarm/testutil"
|
||||
"github.com/mattn/go-colorable"
|
||||
)
|
||||
|
||||
func init() {
|
||||
log.PrintOrigins(true)
|
||||
log.Root().SetHandler(log.LvlFilterHandler(log.Lvl(*loglevel), log.StreamHandler(colorable.NewColorableStderr(), log.TerminalFormat(true))))
|
||||
}
|
||||
|
||||
func TestSwarmUp(t *testing.T) {
|
||||
if runtime.GOOS == "windows" {
|
||||
t.Skip()
|
||||
}
|
||||
|
||||
cluster := newTestCluster(t, clusterSize)
|
||||
defer cluster.Shutdown()
|
||||
|
||||
cases := []struct {
|
||||
name string
|
||||
f func(t *testing.T, cluster *testCluster)
|
||||
}{
|
||||
{"NoEncryption", testNoEncryption},
|
||||
{"Encrypted", testEncrypted},
|
||||
{"RecursiveNoEncryption", testRecursiveNoEncryption},
|
||||
{"RecursiveEncrypted", testRecursiveEncrypted},
|
||||
{"DefaultPathAll", testDefaultPathAll},
|
||||
}
|
||||
|
||||
for _, tc := range cases {
|
||||
t.Run(tc.name, func(t *testing.T) {
|
||||
tc.f(t, cluster)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
// testNoEncryption tests that running 'swarm up' makes the resulting file
|
||||
// available from all nodes via the HTTP API
|
||||
func testNoEncryption(t *testing.T, cluster *testCluster) {
|
||||
testDefault(t, cluster, false)
|
||||
}
|
||||
|
||||
// testEncrypted tests that running 'swarm up --encrypt' makes the resulting file
|
||||
// available from all nodes via the HTTP API
|
||||
func testEncrypted(t *testing.T, cluster *testCluster) {
|
||||
testDefault(t, cluster, true)
|
||||
}
|
||||
|
||||
func testRecursiveNoEncryption(t *testing.T, cluster *testCluster) {
|
||||
testRecursive(t, cluster, false)
|
||||
}
|
||||
|
||||
func testRecursiveEncrypted(t *testing.T, cluster *testCluster) {
|
||||
testRecursive(t, cluster, true)
|
||||
}
|
||||
|
||||
func testDefault(t *testing.T, cluster *testCluster, toEncrypt bool) {
|
||||
tmpFileName := testutil.TempFileWithContent(t, data)
|
||||
defer os.Remove(tmpFileName)
|
||||
|
||||
// write data to file
|
||||
hashRegexp := `[a-f\d]{64}`
|
||||
flags := []string{
|
||||
"--bzzapi", cluster.Nodes[0].URL,
|
||||
"up",
|
||||
tmpFileName}
|
||||
if toEncrypt {
|
||||
hashRegexp = `[a-f\d]{128}`
|
||||
flags = []string{
|
||||
"--bzzapi", cluster.Nodes[0].URL,
|
||||
"up",
|
||||
"--encrypt",
|
||||
tmpFileName}
|
||||
}
|
||||
// upload the file with 'swarm up' and expect a hash
|
||||
log.Info(fmt.Sprintf("uploading file with 'swarm up'"))
|
||||
up := runSwarm(t, flags...)
|
||||
_, matches := up.ExpectRegexp(hashRegexp)
|
||||
up.ExpectExit()
|
||||
hash := matches[0]
|
||||
log.Info("file uploaded", "hash", hash)
|
||||
|
||||
// get the file from the HTTP API of each node
|
||||
for _, node := range cluster.Nodes {
|
||||
log.Info("getting file from node", "node", node.Name)
|
||||
|
||||
res, err := http.Get(node.URL + "/bzz:/" + hash)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
defer res.Body.Close()
|
||||
|
||||
reply, err := ioutil.ReadAll(res.Body)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if res.StatusCode != 200 {
|
||||
t.Fatalf("expected HTTP status 200, got %s", res.Status)
|
||||
}
|
||||
if string(reply) != data {
|
||||
t.Fatalf("expected HTTP body %q, got %q", data, reply)
|
||||
}
|
||||
log.Debug("verifying uploaded file using `swarm down`")
|
||||
//try to get the content with `swarm down`
|
||||
tmpDownload, err := ioutil.TempDir("", "swarm-test")
|
||||
tmpDownload = path.Join(tmpDownload, "tmpfile.tmp")
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
defer os.RemoveAll(tmpDownload)
|
||||
|
||||
bzzLocator := "bzz:/" + hash
|
||||
flags = []string{
|
||||
"--bzzapi", cluster.Nodes[0].URL,
|
||||
"down",
|
||||
bzzLocator,
|
||||
tmpDownload,
|
||||
}
|
||||
|
||||
down := runSwarm(t, flags...)
|
||||
down.ExpectExit()
|
||||
|
||||
fi, err := os.Stat(tmpDownload)
|
||||
if err != nil {
|
||||
t.Fatalf("could not stat path: %v", err)
|
||||
}
|
||||
|
||||
switch mode := fi.Mode(); {
|
||||
case mode.IsRegular():
|
||||
downloadedBytes, err := ioutil.ReadFile(tmpDownload)
|
||||
if err != nil {
|
||||
t.Fatalf("had an error reading the downloaded file: %v", err)
|
||||
}
|
||||
if !bytes.Equal(downloadedBytes, bytes.NewBufferString(data).Bytes()) {
|
||||
t.Fatalf("retrieved data and posted data not equal!")
|
||||
}
|
||||
|
||||
default:
|
||||
t.Fatalf("expected to download regular file, got %s", fi.Mode())
|
||||
}
|
||||
}
|
||||
|
||||
timeout := time.Duration(2 * time.Second)
|
||||
httpClient := http.Client{
|
||||
Timeout: timeout,
|
||||
}
|
||||
|
||||
// try to squeeze a timeout by getting an non-existent hash from each node
|
||||
for _, node := range cluster.Nodes {
|
||||
_, err := httpClient.Get(node.URL + "/bzz:/1023e8bae0f70be7d7b5f74343088ba408a218254391490c85ae16278e230340")
|
||||
// we're speeding up the timeout here since netstore has a 60 second timeout on a request
|
||||
if err != nil && !strings.Contains(err.Error(), "Client.Timeout exceeded while awaiting headers") {
|
||||
t.Fatal(err)
|
||||
}
|
||||
// this is disabled since it takes 60s due to netstore timeout
|
||||
// if res.StatusCode != 404 {
|
||||
// t.Fatalf("expected HTTP status 404, got %s", res.Status)
|
||||
// }
|
||||
}
|
||||
}
|
||||
|
||||
func testRecursive(t *testing.T, cluster *testCluster, toEncrypt bool) {
|
||||
tmpUploadDir, err := ioutil.TempDir("", "swarm-test")
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
defer os.RemoveAll(tmpUploadDir)
|
||||
// create tmp files
|
||||
for _, path := range []string{"tmp1", "tmp2"} {
|
||||
if err := ioutil.WriteFile(filepath.Join(tmpUploadDir, path), bytes.NewBufferString(data).Bytes(), 0644); err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
}
|
||||
|
||||
hashRegexp := `[a-f\d]{64}`
|
||||
flags := []string{
|
||||
"--bzzapi", cluster.Nodes[0].URL,
|
||||
"--recursive",
|
||||
"up",
|
||||
tmpUploadDir}
|
||||
if toEncrypt {
|
||||
hashRegexp = `[a-f\d]{128}`
|
||||
flags = []string{
|
||||
"--bzzapi", cluster.Nodes[0].URL,
|
||||
"--recursive",
|
||||
"up",
|
||||
"--encrypt",
|
||||
tmpUploadDir}
|
||||
}
|
||||
// upload the file with 'swarm up' and expect a hash
|
||||
log.Info(fmt.Sprintf("uploading file with 'swarm up'"))
|
||||
up := runSwarm(t, flags...)
|
||||
_, matches := up.ExpectRegexp(hashRegexp)
|
||||
up.ExpectExit()
|
||||
hash := matches[0]
|
||||
log.Info("dir uploaded", "hash", hash)
|
||||
|
||||
// get the file from the HTTP API of each node
|
||||
for _, node := range cluster.Nodes {
|
||||
log.Info("getting file from node", "node", node.Name)
|
||||
//try to get the content with `swarm down`
|
||||
tmpDownload, err := ioutil.TempDir("", "swarm-test")
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
defer os.RemoveAll(tmpDownload)
|
||||
bzzLocator := "bzz:/" + hash
|
||||
flagss := []string{
|
||||
"--bzzapi", cluster.Nodes[0].URL,
|
||||
"down",
|
||||
"--recursive",
|
||||
bzzLocator,
|
||||
tmpDownload,
|
||||
}
|
||||
|
||||
fmt.Println("downloading from swarm with recursive")
|
||||
down := runSwarm(t, flagss...)
|
||||
down.ExpectExit()
|
||||
|
||||
files, err := ioutil.ReadDir(tmpDownload)
|
||||
for _, v := range files {
|
||||
fi, err := os.Stat(path.Join(tmpDownload, v.Name()))
|
||||
if err != nil {
|
||||
t.Fatalf("got an error: %v", err)
|
||||
}
|
||||
|
||||
switch mode := fi.Mode(); {
|
||||
case mode.IsRegular():
|
||||
if file, err := swarmapi.Open(path.Join(tmpDownload, v.Name())); err != nil {
|
||||
t.Fatalf("encountered an error opening the file returned from the CLI: %v", err)
|
||||
} else {
|
||||
ff := make([]byte, len(data))
|
||||
io.ReadFull(file, ff)
|
||||
buf := bytes.NewBufferString(data)
|
||||
|
||||
if !bytes.Equal(ff, buf.Bytes()) {
|
||||
t.Fatalf("retrieved data and posted data not equal!")
|
||||
}
|
||||
}
|
||||
default:
|
||||
t.Fatalf("this shouldnt happen")
|
||||
}
|
||||
}
|
||||
if err != nil {
|
||||
t.Fatalf("could not list files at: %v", files)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// testDefaultPathAll tests swarm recursive upload with relative and absolute
|
||||
// default paths and with encryption.
|
||||
func testDefaultPathAll(t *testing.T, cluster *testCluster) {
|
||||
testDefaultPath(t, cluster, false, false)
|
||||
testDefaultPath(t, cluster, false, true)
|
||||
testDefaultPath(t, cluster, true, false)
|
||||
testDefaultPath(t, cluster, true, true)
|
||||
}
|
||||
|
||||
func testDefaultPath(t *testing.T, cluster *testCluster, toEncrypt bool, absDefaultPath bool) {
|
||||
tmp, err := ioutil.TempDir("", "swarm-defaultpath-test")
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
defer os.RemoveAll(tmp)
|
||||
|
||||
err = ioutil.WriteFile(filepath.Join(tmp, "index.html"), []byte("<h1>Test</h1>"), 0666)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
err = ioutil.WriteFile(filepath.Join(tmp, "robots.txt"), []byte("Disallow: /"), 0666)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
defaultPath := "index.html"
|
||||
if absDefaultPath {
|
||||
defaultPath = filepath.Join(tmp, defaultPath)
|
||||
}
|
||||
|
||||
args := []string{
|
||||
"--bzzapi",
|
||||
cluster.Nodes[0].URL,
|
||||
"--recursive",
|
||||
"--defaultpath",
|
||||
defaultPath,
|
||||
"up",
|
||||
tmp,
|
||||
}
|
||||
if toEncrypt {
|
||||
args = append(args, "--encrypt")
|
||||
}
|
||||
|
||||
up := runSwarm(t, args...)
|
||||
hashRegexp := `[a-f\d]{64,128}`
|
||||
_, matches := up.ExpectRegexp(hashRegexp)
|
||||
up.ExpectExit()
|
||||
hash := matches[0]
|
||||
|
||||
client := swarmapi.NewClient(cluster.Nodes[0].URL)
|
||||
|
||||
m, isEncrypted, err := client.DownloadManifest(hash)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
if toEncrypt != isEncrypted {
|
||||
t.Error("downloaded manifest is not encrypted")
|
||||
}
|
||||
|
||||
var found bool
|
||||
var entriesCount int
|
||||
for _, e := range m.Entries {
|
||||
entriesCount++
|
||||
if e.Path == "" {
|
||||
found = true
|
||||
}
|
||||
}
|
||||
|
||||
if !found {
|
||||
t.Error("manifest default entry was not found")
|
||||
}
|
||||
|
||||
if entriesCount != 3 {
|
||||
t.Errorf("manifest contains %v entries, expected %v", entriesCount, 3)
|
||||
}
|
||||
}
|
1699
vendor/github.com/ethereum/go-ethereum/cmd/utils/flags.go
generated
vendored
Normal file
File diff suppressed because it is too large
727
vendor/github.com/ethereum/go-ethereum/consensus/clique/clique.go
generated
vendored
Normal file
@ -0,0 +1,727 @@
|
||||
// Copyright 2017 The go-ethereum Authors
|
||||
// This file is part of the go-ethereum library.
|
||||
//
|
||||
// The go-ethereum library is free software: you can redistribute it and/or modify
|
||||
// it under the terms of the GNU Lesser General Public License as published by
|
||||
// the Free Software Foundation, either version 3 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// The go-ethereum library is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU Lesser General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU Lesser General Public License
|
||||
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
// Package clique implements the proof-of-authority consensus engine.
|
||||
package clique
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"errors"
|
||||
"io"
|
||||
"math/big"
|
||||
"math/rand"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/ethereum/go-ethereum/accounts"
|
||||
"github.com/ethereum/go-ethereum/common"
|
||||
"github.com/ethereum/go-ethereum/common/hexutil"
|
||||
"github.com/ethereum/go-ethereum/consensus"
|
||||
"github.com/ethereum/go-ethereum/consensus/misc"
|
||||
"github.com/ethereum/go-ethereum/core/state"
|
||||
"github.com/ethereum/go-ethereum/core/types"
|
||||
"github.com/ethereum/go-ethereum/crypto"
|
||||
"github.com/ethereum/go-ethereum/ethdb"
|
||||
"github.com/ethereum/go-ethereum/log"
|
||||
"github.com/ethereum/go-ethereum/params"
|
||||
"github.com/ethereum/go-ethereum/rlp"
|
||||
"github.com/ethereum/go-ethereum/rpc"
|
||||
lru "github.com/hashicorp/golang-lru"
|
||||
"golang.org/x/crypto/sha3"
|
||||
)
|
||||
|
||||
const (
|
||||
checkpointInterval = 1024 // Number of blocks after which to save the vote snapshot to the database
|
||||
inmemorySnapshots = 128 // Number of recent vote snapshots to keep in memory
|
||||
inmemorySignatures = 4096 // Number of recent block signatures to keep in memory
|
||||
|
||||
wiggleTime = 500 * time.Millisecond // Random delay (per signer) to allow concurrent signers
|
||||
)
|
||||
|
||||
// Clique proof-of-authority protocol constants.
|
||||
var (
|
||||
epochLength = uint64(30000) // Default number of blocks after which to checkpoint and reset the pending votes
|
||||
|
||||
extraVanity = 32 // Fixed number of extra-data prefix bytes reserved for signer vanity
|
||||
extraSeal = 65 // Fixed number of extra-data suffix bytes reserved for signer seal
|
||||
|
||||
nonceAuthVote = hexutil.MustDecode("0xffffffffffffffff") // Magic nonce number to vote on adding a new signer
|
||||
nonceDropVote = hexutil.MustDecode("0x0000000000000000") // Magic nonce number to vote on removing a signer.
|
||||
|
||||
uncleHash = types.CalcUncleHash(nil) // Always Keccak256(RLP([])) as uncles are meaningless outside of PoW.
|
||||
|
||||
diffInTurn = big.NewInt(2) // Block difficulty for in-turn signatures
|
||||
diffNoTurn = big.NewInt(1) // Block difficulty for out-of-turn signatures
|
||||
)
|
||||
|
||||
// Various error messages to mark blocks invalid. These should be private to
|
||||
// prevent engine specific errors from being referenced in the remainder of the
|
||||
// codebase, inherently breaking if the engine is swapped out. Please put common
|
||||
// error types into the consensus package.
|
||||
var (
|
||||
// errUnknownBlock is returned when the list of signers is requested for a block
|
||||
// that is not part of the local blockchain.
|
||||
errUnknownBlock = errors.New("unknown block")
|
||||
|
||||
// errInvalidCheckpointBeneficiary is returned if a checkpoint/epoch transition
|
||||
// block has a beneficiary set to non-zeroes.
|
||||
errInvalidCheckpointBeneficiary = errors.New("beneficiary in checkpoint block non-zero")
|
||||
|
||||
// errInvalidVote is returned if a nonce value is something other than the two
|
||||
// allowed constants of 0x00..0 or 0xff..f.
|
||||
errInvalidVote = errors.New("vote nonce not 0x00..0 or 0xff..f")
|
||||
|
||||
// errInvalidCheckpointVote is returned if a checkpoint/epoch transition block
|
||||
// has a vote nonce set to non-zeroes.
|
||||
errInvalidCheckpointVote = errors.New("vote nonce in checkpoint block non-zero")
|
||||
|
||||
// errMissingVanity is returned if a block's extra-data section is shorter than
|
||||
// 32 bytes, which is required to store the signer vanity.
|
||||
errMissingVanity = errors.New("extra-data 32 byte vanity prefix missing")
|
||||
|
||||
// errMissingSignature is returned if a block's extra-data section doesn't seem
|
||||
// to contain a 65 byte secp256k1 signature.
|
||||
errMissingSignature = errors.New("extra-data 65 byte signature suffix missing")
|
||||
|
||||
// errExtraSigners is returned if non-checkpoint block contain signer data in
|
||||
// their extra-data fields.
|
||||
errExtraSigners = errors.New("non-checkpoint block contains extra signer list")
|
||||
|
||||
// errInvalidCheckpointSigners is returned if a checkpoint block contains an
|
||||
// invalid list of signers (i.e. non divisible by 20 bytes).
|
||||
errInvalidCheckpointSigners = errors.New("invalid signer list on checkpoint block")
|
||||
|
||||
// errMismatchingCheckpointSigners is returned if a checkpoint block contains a
|
||||
// list of signers different than the one the local node calculated.
|
||||
errMismatchingCheckpointSigners = errors.New("mismatching signer list on checkpoint block")
|
||||
|
||||
// errInvalidMixDigest is returned if a block's mix digest is non-zero.
|
||||
errInvalidMixDigest = errors.New("non-zero mix digest")
|
||||
|
||||
// errInvalidUncleHash is returned if a block contains a non-empty uncle list.
|
||||
errInvalidUncleHash = errors.New("non empty uncle hash")
|
||||
|
||||
// errInvalidDifficulty is returned if the difficulty of a block is neither 1 nor 2.
|
||||
errInvalidDifficulty = errors.New("invalid difficulty")
|
||||
|
||||
// errWrongDifficulty is returned if the difficulty of a block doesn't match the
|
||||
// turn of the signer.
|
||||
errWrongDifficulty = errors.New("wrong difficulty")
|
||||
|
||||
// ErrInvalidTimestamp is returned if the timestamp of a block is lower than
|
||||
// the previous block's timestamp + the minimum block period.
|
||||
ErrInvalidTimestamp = errors.New("invalid timestamp")
|
||||
|
||||
// errInvalidVotingChain is returned if an authorization list is attempted to
|
||||
// be modified via out-of-range or non-contiguous headers.
|
||||
errInvalidVotingChain = errors.New("invalid voting chain")
|
||||
|
||||
// errUnauthorizedSigner is returned if a header is signed by a non-authorized entity.
|
||||
errUnauthorizedSigner = errors.New("unauthorized signer")
|
||||
|
||||
// errRecentlySigned is returned if a header is signed by an authorized entity
|
||||
// that already signed a header recently, thus is temporarily not allowed to.
|
||||
errRecentlySigned = errors.New("recently signed")
|
||||
)
|
||||
|
||||
// SignerFn is a signer callback function to request a header to be signed by a
|
||||
// backing account.
|
||||
type SignerFn func(accounts.Account, string, []byte) ([]byte, error)
|
||||
|
||||
// ecrecover extracts the Ethereum account address from a signed header.
|
||||
func ecrecover(header *types.Header, sigcache *lru.ARCCache) (common.Address, error) {
|
||||
// If the signature's already cached, return that
|
||||
hash := header.Hash()
|
||||
if address, known := sigcache.Get(hash); known {
|
||||
return address.(common.Address), nil
|
||||
}
|
||||
// Retrieve the signature from the header extra-data
|
||||
if len(header.Extra) < extraSeal {
|
||||
return common.Address{}, errMissingSignature
|
||||
}
|
||||
signature := header.Extra[len(header.Extra)-extraSeal:]
|
||||
|
||||
// Recover the public key and the Ethereum address
|
||||
pubkey, err := crypto.Ecrecover(SealHash(header).Bytes(), signature)
|
||||
if err != nil {
|
||||
return common.Address{}, err
|
||||
}
|
||||
var signer common.Address
|
||||
copy(signer[:], crypto.Keccak256(pubkey[1:])[12:])
|
||||
|
||||
sigcache.Add(hash, signer)
|
||||
return signer, nil
|
||||
}
|
||||
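The address derivation at the end of ecrecover is the usual Ethereum scheme: Keccak-256 over the 64-byte public key body (the leading 0x04 uncompressed-point marker dropped), keeping the last 20 bytes of the digest. A minimal standalone sketch of just that step, using golang.org/x/crypto/sha3 (the same package SealHash uses further down) and a placeholder public key rather than real key material:

package main

import (
	"encoding/hex"
	"fmt"

	"golang.org/x/crypto/sha3"
)

// pubkeyToAddress mirrors the copy(signer[:], crypto.Keccak256(pubkey[1:])[12:])
// step in ecrecover: hash the 64-byte public key body and keep the last 20 bytes.
func pubkeyToAddress(pubkey []byte) [20]byte {
	h := sha3.NewLegacyKeccak256()
	h.Write(pubkey[1:]) // drop the 0x04 uncompressed-point prefix
	sum := h.Sum(nil)

	var addr [20]byte
	copy(addr[:], sum[12:]) // last 20 of the 32 digest bytes
	return addr
}

func main() {
	// Placeholder 65-byte uncompressed public key (0x04 || X || Y), not a real key.
	pubkey := make([]byte, 65)
	pubkey[0] = 0x04
	addr := pubkeyToAddress(pubkey)
	fmt.Println("address: 0x" + hex.EncodeToString(addr[:]))
}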
|
||||
// Clique is the proof-of-authority consensus engine proposed to support the
|
||||
// Ethereum testnet following the Ropsten attacks.
|
||||
type Clique struct {
|
||||
config *params.CliqueConfig // Consensus engine configuration parameters
|
||||
db ethdb.Database // Database to store and retrieve snapshot checkpoints
|
||||
|
||||
recents *lru.ARCCache // Snapshots for recent blocks to speed up reorgs
|
||||
signatures *lru.ARCCache // Signatures of recent blocks to speed up mining
|
||||
|
||||
proposals map[common.Address]bool // Current list of proposals we are pushing
|
||||
|
||||
signer common.Address // Ethereum address of the signing key
|
||||
signFn SignerFn // Signer function to authorize hashes with
|
||||
lock sync.RWMutex // Protects the signer fields
|
||||
|
||||
// The fields below are for testing only
|
||||
fakeDiff bool // Skip difficulty verifications
|
||||
}
|
||||
|
||||
// New creates a Clique proof-of-authority consensus engine with the initial
|
||||
// signers set to the ones provided by the user.
|
||||
func New(config *params.CliqueConfig, db ethdb.Database) *Clique {
|
||||
// Set any missing consensus parameters to their defaults
|
||||
conf := *config
|
||||
if conf.Epoch == 0 {
|
||||
conf.Epoch = epochLength
|
||||
}
|
||||
// Allocate the snapshot caches and create the engine
|
||||
recents, _ := lru.NewARC(inmemorySnapshots)
|
||||
signatures, _ := lru.NewARC(inmemorySignatures)
|
||||
|
||||
return &Clique{
|
||||
config: &conf,
|
||||
db: db,
|
||||
recents: recents,
|
||||
signatures: signatures,
|
||||
proposals: make(map[common.Address]bool),
|
||||
}
|
||||
}
|
||||
|
||||
// Author implements consensus.Engine, returning the Ethereum address recovered
|
||||
// from the signature in the header's extra-data section.
|
||||
func (c *Clique) Author(header *types.Header) (common.Address, error) {
|
||||
return ecrecover(header, c.signatures)
|
||||
}
|
||||
|
||||
// VerifyHeader checks whether a header conforms to the consensus rules.
|
||||
func (c *Clique) VerifyHeader(chain consensus.ChainReader, header *types.Header, seal bool) error {
|
||||
return c.verifyHeader(chain, header, nil)
|
||||
}
|
||||
|
||||
// VerifyHeaders is similar to VerifyHeader, but verifies a batch of headers. The
|
||||
// method returns a quit channel to abort the operations and a results channel to
|
||||
// retrieve the async verifications (the order is that of the input slice).
|
||||
func (c *Clique) VerifyHeaders(chain consensus.ChainReader, headers []*types.Header, seals []bool) (chan<- struct{}, <-chan error) {
|
||||
abort := make(chan struct{})
|
||||
results := make(chan error, len(headers))
|
||||
|
||||
go func() {
|
||||
for i, header := range headers {
|
||||
err := c.verifyHeader(chain, header, headers[:i])
|
||||
|
||||
select {
|
||||
case <-abort:
|
||||
return
|
||||
case results <- err:
|
||||
}
|
||||
}
|
||||
}()
|
||||
return abort, results
|
||||
}
|
||||
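A caller of VerifyHeaders reads one error per submitted header from the results channel, in input order, and can signal the abort channel to stop the background loop early. A stripped-down sketch of that channel discipline, with the Ethereum types replaced by plain ints and a stand-in verification rule instead of Clique's:

package main

import (
	"errors"
	"fmt"
)

// verifyAll mirrors the VerifyHeaders channel discipline: one buffered result
// per input (in input order), plus an abort channel the caller can close to
// stop the background worker early.
func verifyAll(items []int) (chan<- struct{}, <-chan error) {
	abort := make(chan struct{})
	results := make(chan error, len(items))

	go func() {
		for _, it := range items {
			var err error
			if it%2 != 0 { // stand-in for a real verification rule
				err = errors.New("odd item rejected")
			}
			select {
			case <-abort:
				return
			case results <- err:
			}
		}
	}()
	return abort, results
}

func main() {
	abort, results := verifyAll([]int{2, 4, 5, 6})
	defer close(abort) // closing abort releases the worker once we stop reading

	for i := 0; i < 4; i++ {
		fmt.Printf("item %d: %v\n", i, <-results)
	}
}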
|
||||
// verifyHeader checks whether a header conforms to the consensus rules. The
|
||||
// caller may optionally pass in a batch of parents (ascending order) to avoid
|
||||
// looking those up from the database. This is useful for concurrently verifying
|
||||
// a batch of new headers.
|
||||
func (c *Clique) verifyHeader(chain consensus.ChainReader, header *types.Header, parents []*types.Header) error {
|
||||
if header.Number == nil {
|
||||
return errUnknownBlock
|
||||
}
|
||||
number := header.Number.Uint64()
|
||||
|
||||
// Don't waste time checking blocks from the future
|
||||
if header.Time > uint64(time.Now().Unix()) {
|
||||
return consensus.ErrFutureBlock
|
||||
}
|
||||
// Checkpoint blocks need to enforce zero beneficiary
|
||||
checkpoint := (number % c.config.Epoch) == 0
|
||||
if checkpoint && header.Coinbase != (common.Address{}) {
|
||||
return errInvalidCheckpointBeneficiary
|
||||
}
|
||||
// Nonces must be 0x00..0 or 0xff..f, zeroes enforced on checkpoints
|
||||
if !bytes.Equal(header.Nonce[:], nonceAuthVote) && !bytes.Equal(header.Nonce[:], nonceDropVote) {
|
||||
return errInvalidVote
|
||||
}
|
||||
if checkpoint && !bytes.Equal(header.Nonce[:], nonceDropVote) {
|
||||
return errInvalidCheckpointVote
|
||||
}
|
||||
// Check that the extra-data contains both the vanity and signature
|
||||
if len(header.Extra) < extraVanity {
|
||||
return errMissingVanity
|
||||
}
|
||||
if len(header.Extra) < extraVanity+extraSeal {
|
||||
return errMissingSignature
|
||||
}
|
||||
// Ensure that the extra-data contains a signer list on checkpoint, but none otherwise
|
||||
signersBytes := len(header.Extra) - extraVanity - extraSeal
|
||||
if !checkpoint && signersBytes != 0 {
|
||||
return errExtraSigners
|
||||
}
|
||||
if checkpoint && signersBytes%common.AddressLength != 0 {
|
||||
return errInvalidCheckpointSigners
|
||||
}
|
||||
// Ensure that the mix digest is zero as we don't have fork protection currently
|
||||
if header.MixDigest != (common.Hash{}) {
|
||||
return errInvalidMixDigest
|
||||
}
|
||||
// Ensure that the block doesn't contain any uncles which are meaningless in PoA
|
||||
if header.UncleHash != uncleHash {
|
||||
return errInvalidUncleHash
|
||||
}
|
||||
// Ensure that the block's difficulty is meaningful (may not be correct at this point)
|
||||
if number > 0 {
|
||||
if header.Difficulty == nil || (header.Difficulty.Cmp(diffInTurn) != 0 && header.Difficulty.Cmp(diffNoTurn) != 0) {
|
||||
return errInvalidDifficulty
|
||||
}
|
||||
}
|
||||
// If all checks passed, validate any special fields for hard forks
|
||||
if err := misc.VerifyForkHashes(chain.Config(), header, false); err != nil {
|
||||
return err
|
||||
}
|
||||
// All basic checks passed, verify cascading fields
|
||||
return c.verifyCascadingFields(chain, header, parents)
|
||||
}
|
||||
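Taken together, the extra-data checks encode a fixed layout: a 32-byte vanity prefix, an optional signer section that only checkpoint blocks may carry (a whole number of 20-byte addresses), and a 65-byte seal suffix. A small sketch of the same arithmetic; the constant values are written out here because their declarations sit earlier in clique.go, outside this excerpt:

package main

import "fmt"

const (
	extraVanity   = 32 // bytes reserved for signer vanity
	extraSeal     = 65 // bytes reserved for the secp256k1 seal signature
	addressLength = 20 // bytes per signer address
)

// signerCount returns how many signer addresses an extra-data blob carries,
// or an error if the layout rules enforced by verifyHeader would reject it.
func signerCount(extra []byte, checkpoint bool) (int, error) {
	if len(extra) < extraVanity+extraSeal {
		return 0, fmt.Errorf("extra-data too short: %d bytes", len(extra))
	}
	signersBytes := len(extra) - extraVanity - extraSeal
	if !checkpoint && signersBytes != 0 {
		return 0, fmt.Errorf("non-checkpoint block carries %d signer bytes", signersBytes)
	}
	if checkpoint && signersBytes%addressLength != 0 {
		return 0, fmt.Errorf("signer section (%d bytes) not a multiple of %d", signersBytes, addressLength)
	}
	return signersBytes / addressLength, nil
}

func main() {
	// A checkpoint header with 3 signers: 32 + 3*20 + 65 = 157 bytes of extra-data.
	extra := make([]byte, extraVanity+3*addressLength+extraSeal)
	fmt.Println(signerCount(extra, true)) // 3 <nil>
}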
|
||||
// verifyCascadingFields verifies all the header fields that are not standalone,
|
||||
// rather depend on a batch of previous headers. The caller may optionally pass
|
||||
// in a batch of parents (ascending order) to avoid looking those up from the
|
||||
// database. This is useful for concurrently verifying a batch of new headers.
|
||||
func (c *Clique) verifyCascadingFields(chain consensus.ChainReader, header *types.Header, parents []*types.Header) error {
|
||||
// The genesis block is the always valid dead-end
|
||||
number := header.Number.Uint64()
|
||||
if number == 0 {
|
||||
return nil
|
||||
}
|
||||
// Ensure that the block's timestamp isn't too close to its parent
|
||||
var parent *types.Header
|
||||
if len(parents) > 0 {
|
||||
parent = parents[len(parents)-1]
|
||||
} else {
|
||||
parent = chain.GetHeader(header.ParentHash, number-1)
|
||||
}
|
||||
if parent == nil || parent.Number.Uint64() != number-1 || parent.Hash() != header.ParentHash {
|
||||
return consensus.ErrUnknownAncestor
|
||||
}
|
||||
if parent.Time+c.config.Period > header.Time {
|
||||
return ErrInvalidTimestamp
|
||||
}
|
||||
// Retrieve the snapshot needed to verify this header and cache it
|
||||
snap, err := c.snapshot(chain, number-1, header.ParentHash, parents)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
// If the block is a checkpoint block, verify the signer list
|
||||
if number%c.config.Epoch == 0 {
|
||||
signers := make([]byte, len(snap.Signers)*common.AddressLength)
|
||||
for i, signer := range snap.signers() {
|
||||
copy(signers[i*common.AddressLength:], signer[:])
|
||||
}
|
||||
extraSuffix := len(header.Extra) - extraSeal
|
||||
if !bytes.Equal(header.Extra[extraVanity:extraSuffix], signers) {
|
||||
return errMismatchingCheckpointSigners
|
||||
}
|
||||
}
|
||||
// All basic checks passed, verify the seal and return
|
||||
return c.verifySeal(chain, header, parents)
|
||||
}
|
||||
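The ErrInvalidTimestamp branch enforces minimum block spacing: a header is only acceptable if at least Period seconds separate it from its parent. A tiny sketch of that comparison (the 15-second period below is only an example value, not taken from this file):

package main

import "fmt"

// validSpacing reflects the ErrInvalidTimestamp rule: a header is acceptable
// only if at least `period` seconds have passed since its parent.
func validSpacing(parentTime, headerTime, period uint64) bool {
	return parentTime+period <= headerTime
}

func main() {
	// With an assumed 15s period, a child stamped 14s after its parent is
	// rejected, one stamped 15s after is accepted.
	fmt.Println(validSpacing(1000, 1014, 15)) // false
	fmt.Println(validSpacing(1000, 1015, 15)) // true
}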
|
||||
// snapshot retrieves the authorization snapshot at a given point in time.
|
||||
func (c *Clique) snapshot(chain consensus.ChainReader, number uint64, hash common.Hash, parents []*types.Header) (*Snapshot, error) {
|
||||
// Search for a snapshot in memory or on disk for checkpoints
|
||||
var (
|
||||
headers []*types.Header
|
||||
snap *Snapshot
|
||||
)
|
||||
for snap == nil {
|
||||
// If an in-memory snapshot was found, use that
|
||||
if s, ok := c.recents.Get(hash); ok {
|
||||
snap = s.(*Snapshot)
|
||||
break
|
||||
}
|
||||
// If an on-disk checkpoint snapshot can be found, use that
|
||||
if number%checkpointInterval == 0 {
|
||||
if s, err := loadSnapshot(c.config, c.signatures, c.db, hash); err == nil {
|
||||
log.Trace("Loaded voting snapshot from disk", "number", number, "hash", hash)
|
||||
snap = s
|
||||
break
|
||||
}
|
||||
}
|
||||
// If we're at a checkpoint block, make a snapshot if it's known
|
||||
if number == 0 || (number%c.config.Epoch == 0 && chain.GetHeaderByNumber(number-1) == nil) {
|
||||
checkpoint := chain.GetHeaderByNumber(number)
|
||||
if checkpoint != nil {
|
||||
hash := checkpoint.Hash()
|
||||
|
||||
signers := make([]common.Address, (len(checkpoint.Extra)-extraVanity-extraSeal)/common.AddressLength)
|
||||
for i := 0; i < len(signers); i++ {
|
||||
copy(signers[i][:], checkpoint.Extra[extraVanity+i*common.AddressLength:])
|
||||
}
|
||||
snap = newSnapshot(c.config, c.signatures, number, hash, signers)
|
||||
if err := snap.store(c.db); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
log.Info("Stored checkpoint snapshot to disk", "number", number, "hash", hash)
|
||||
break
|
||||
}
|
||||
}
|
||||
// No snapshot for this header, gather the header and move backward
|
||||
var header *types.Header
|
||||
if len(parents) > 0 {
|
||||
// If we have explicit parents, pick from there (enforced)
|
||||
header = parents[len(parents)-1]
|
||||
if header.Hash() != hash || header.Number.Uint64() != number {
|
||||
return nil, consensus.ErrUnknownAncestor
|
||||
}
|
||||
parents = parents[:len(parents)-1]
|
||||
} else {
|
||||
// No explicit parents (or no more left), reach out to the database
|
||||
header = chain.GetHeader(hash, number)
|
||||
if header == nil {
|
||||
return nil, consensus.ErrUnknownAncestor
|
||||
}
|
||||
}
|
||||
headers = append(headers, header)
|
||||
number, hash = number-1, header.ParentHash
|
||||
}
|
||||
// Previous snapshot found, apply any pending headers on top of it
|
||||
for i := 0; i < len(headers)/2; i++ {
|
||||
headers[i], headers[len(headers)-1-i] = headers[len(headers)-1-i], headers[i]
|
||||
}
|
||||
snap, err := snap.apply(headers)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
c.recents.Add(snap.Hash, snap)
|
||||
|
||||
// If we've generated a new checkpoint snapshot, save to disk
|
||||
if snap.Number%checkpointInterval == 0 && len(headers) > 0 {
|
||||
if err = snap.store(c.db); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
log.Trace("Stored voting snapshot to disk", "number", snap.Number, "hash", snap.Hash)
|
||||
}
|
||||
return snap, err
|
||||
}
|
||||
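Because snapshot collects headers newest-first while walking back and snap.apply needs them oldest-first, the loop above reverses the slice in place before applying. The same swap pattern in isolation:

package main

import "fmt"

// reverse mirrors the swap loop in snapshot: headers gathered while walking
// back toward the last snapshot must be applied in ascending order.
func reverse(s []uint64) {
	for i := 0; i < len(s)/2; i++ {
		s[i], s[len(s)-1-i] = s[len(s)-1-i], s[i]
	}
}

func main() {
	headers := []uint64{105, 104, 103, 102, 101} // collected newest-first
	reverse(headers)
	fmt.Println(headers) // [101 102 103 104 105], ready to apply in order
}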
|
||||
// VerifyUncles implements consensus.Engine, always returning an error for any
|
||||
// uncles as this consensus mechanism doesn't permit uncles.
|
||||
func (c *Clique) VerifyUncles(chain consensus.ChainReader, block *types.Block) error {
|
||||
if len(block.Uncles()) > 0 {
|
||||
return errors.New("uncles not allowed")
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// VerifySeal implements consensus.Engine, checking whether the signature contained
|
||||
// in the header satisfies the consensus protocol requirements.
|
||||
func (c *Clique) VerifySeal(chain consensus.ChainReader, header *types.Header) error {
|
||||
return c.verifySeal(chain, header, nil)
|
||||
}
|
||||
|
||||
// verifySeal checks whether the signature contained in the header satisfies the
|
||||
// consensus protocol requirements. The method accepts an optional list of parent
|
||||
// headers that aren't yet part of the local blockchain to generate the snapshots
|
||||
// from.
|
||||
func (c *Clique) verifySeal(chain consensus.ChainReader, header *types.Header, parents []*types.Header) error {
|
||||
// Verifying the genesis block is not supported
|
||||
number := header.Number.Uint64()
|
||||
if number == 0 {
|
||||
return errUnknownBlock
|
||||
}
|
||||
// Retrieve the snapshot needed to verify this header and cache it
|
||||
snap, err := c.snapshot(chain, number-1, header.ParentHash, parents)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// Resolve the authorization key and check against signers
|
||||
signer, err := ecrecover(header, c.signatures)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if _, ok := snap.Signers[signer]; !ok {
|
||||
return errUnauthorizedSigner
|
||||
}
|
||||
for seen, recent := range snap.Recents {
|
||||
if recent == signer {
|
||||
// Signer is among recents, only fail if the current block doesn't shift it out
|
||||
if limit := uint64(len(snap.Signers)/2 + 1); seen > number-limit {
|
||||
return errRecentlySigned
|
||||
}
|
||||
}
|
||||
}
|
||||
// Ensure that the difficulty corresponds to the turn-ness of the signer
|
||||
if !c.fakeDiff {
|
||||
inturn := snap.inturn(header.Number.Uint64(), signer)
|
||||
if inturn && header.Difficulty.Cmp(diffInTurn) != 0 {
|
||||
return errWrongDifficulty
|
||||
}
|
||||
if !inturn && header.Difficulty.Cmp(diffNoTurn) != 0 {
|
||||
return errWrongDifficulty
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
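The recently-signed rule means an authorized signer has to sit out roughly half the signer set: with N signers the window is N/2+1 blocks, and a signer seen inside that window is rejected. A sketch of the same comparison with concrete numbers; the explicit early-block guard below has the same effect as the wrapped unsigned comparison in verifySeal, where the check simply never fires before the window fills:

package main

import "fmt"

// recentlySigned mirrors the verifySeal check: with `signers` authorized
// accounts the window is signers/2+1 blocks, and a signer whose last block
// `seen` still falls inside that window may not sign block `number`.
func recentlySigned(signers int, seen, number uint64) bool {
	limit := uint64(signers/2 + 1)
	if number < limit {
		return false // window not filled yet
	}
	return seen > number-limit
}

func main() {
	// 7 signers -> window of 4 blocks: having signed block 98, signing block
	// 100 is still too soon (98 > 100-4), block 102 is fine (98 <= 102-4).
	fmt.Println(recentlySigned(7, 98, 100)) // true
	fmt.Println(recentlySigned(7, 98, 102)) // false
}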
|
||||
// Prepare implements consensus.Engine, preparing all the consensus fields of the
|
||||
// header for running the transactions on top.
|
||||
func (c *Clique) Prepare(chain consensus.ChainReader, header *types.Header) error {
|
||||
// If the block isn't a checkpoint, cast a random vote (good enough for now)
|
||||
header.Coinbase = common.Address{}
|
||||
header.Nonce = types.BlockNonce{}
|
||||
|
||||
number := header.Number.Uint64()
|
||||
// Assemble the voting snapshot to check which votes make sense
|
||||
snap, err := c.snapshot(chain, number-1, header.ParentHash, nil)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if number%c.config.Epoch != 0 {
|
||||
c.lock.RLock()
|
||||
|
||||
// Gather all the proposals that make sense voting on
|
||||
addresses := make([]common.Address, 0, len(c.proposals))
|
||||
for address, authorize := range c.proposals {
|
||||
if snap.validVote(address, authorize) {
|
||||
addresses = append(addresses, address)
|
||||
}
|
||||
}
|
||||
// If there are pending proposals, cast a vote on them
|
||||
if len(addresses) > 0 {
|
||||
header.Coinbase = addresses[rand.Intn(len(addresses))]
|
||||
if c.proposals[header.Coinbase] {
|
||||
copy(header.Nonce[:], nonceAuthVote)
|
||||
} else {
|
||||
copy(header.Nonce[:], nonceDropVote)
|
||||
}
|
||||
}
|
||||
c.lock.RUnlock()
|
||||
}
|
||||
// Set the correct difficulty
|
||||
header.Difficulty = CalcDifficulty(snap, c.signer)
|
||||
|
||||
// Ensure the extra data has all its components
|
||||
if len(header.Extra) < extraVanity {
|
||||
header.Extra = append(header.Extra, bytes.Repeat([]byte{0x00}, extraVanity-len(header.Extra))...)
|
||||
}
|
||||
header.Extra = header.Extra[:extraVanity]
|
||||
|
||||
if number%c.config.Epoch == 0 {
|
||||
for _, signer := range snap.signers() {
|
||||
header.Extra = append(header.Extra, signer[:]...)
|
||||
}
|
||||
}
|
||||
header.Extra = append(header.Extra, make([]byte, extraSeal)...)
|
||||
|
||||
// Mix digest is reserved for now, set to empty
|
||||
header.MixDigest = common.Hash{}
|
||||
|
||||
// Ensure the timestamp has the correct delay
|
||||
parent := chain.GetHeader(header.ParentHash, number-1)
|
||||
if parent == nil {
|
||||
return consensus.ErrUnknownAncestor
|
||||
}
|
||||
header.Time = parent.Time + c.config.Period
|
||||
if header.Time < uint64(time.Now().Unix()) {
|
||||
header.Time = uint64(time.Now().Unix())
|
||||
}
|
||||
return nil
|
||||
}
|
||||
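Votes are packed into existing header fields: the proposed address goes into Coinbase, and the 8-byte nonce is all-0xff to authorize or all-0x00 to drop, matching the nonceAuthVote/nonceDropVote constants referenced above (their declarations sit earlier in the file, outside this excerpt). A minimal sketch of that encoding:

package main

import (
	"bytes"
	"fmt"
)

var (
	nonceAuthVote = bytes.Repeat([]byte{0xff}, 8) // vote to add a signer
	nonceDropVote = bytes.Repeat([]byte{0x00}, 8) // vote to drop a signer
)

// voteNonce mirrors the Prepare logic: the proposed address is placed in the
// coinbase field, and the nonce alone says whether to authorize or drop it.
func voteNonce(authorize bool) [8]byte {
	var nonce [8]byte
	if authorize {
		copy(nonce[:], nonceAuthVote)
	} else {
		copy(nonce[:], nonceDropVote)
	}
	return nonce
}

func main() {
	fmt.Printf("authorize: %x\n", voteNonce(true))  // ffffffffffffffff
	fmt.Printf("drop:      %x\n", voteNonce(false)) // 0000000000000000
}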
|
||||
// Finalize implements consensus.Engine, ensuring no uncles are set, nor block
|
||||
// rewards given, and returns the final block.
|
||||
func (c *Clique) Finalize(chain consensus.ChainReader, header *types.Header, state *state.StateDB, txs []*types.Transaction, uncles []*types.Header, receipts []*types.Receipt) (*types.Block, error) {
|
||||
// No block rewards in PoA, so the state remains as is and uncles are dropped
|
||||
header.Root = state.IntermediateRoot(chain.Config().IsEIP158(header.Number))
|
||||
header.UncleHash = types.CalcUncleHash(nil)
|
||||
|
||||
// Assemble and return the final block for sealing
|
||||
return types.NewBlock(header, txs, nil, receipts), nil
|
||||
}
|
||||
|
||||
// Authorize injects a private key into the consensus engine to mint new blocks
|
||||
// with.
|
||||
func (c *Clique) Authorize(signer common.Address, signFn SignerFn) {
|
||||
c.lock.Lock()
|
||||
defer c.lock.Unlock()
|
||||
|
||||
c.signer = signer
|
||||
c.signFn = signFn
|
||||
}
|
||||
|
||||
// Seal implements consensus.Engine, attempting to create a sealed block using
|
||||
// the local signing credentials.
|
||||
func (c *Clique) Seal(chain consensus.ChainReader, block *types.Block, results chan<- *types.Block, stop <-chan struct{}) error {
|
||||
header := block.Header()
|
||||
|
||||
// Sealing the genesis block is not supported
|
||||
number := header.Number.Uint64()
|
||||
if number == 0 {
|
||||
return errUnknownBlock
|
||||
}
|
||||
// For 0-period chains, refuse to seal empty blocks (no reward but would spin sealing)
|
||||
if c.config.Period == 0 && len(block.Transactions()) == 0 {
|
||||
log.Info("Sealing paused, waiting for transactions")
|
||||
return nil
|
||||
}
|
||||
// Don't hold the signer fields for the entire sealing procedure
|
||||
c.lock.RLock()
|
||||
signer, signFn := c.signer, c.signFn
|
||||
c.lock.RUnlock()
|
||||
|
||||
// Bail out if we're unauthorized to sign a block
|
||||
snap, err := c.snapshot(chain, number-1, header.ParentHash, nil)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if _, authorized := snap.Signers[signer]; !authorized {
|
||||
return errUnauthorizedSigner
|
||||
}
|
||||
// If we're amongst the recent signers, wait for the next block
|
||||
for seen, recent := range snap.Recents {
|
||||
if recent == signer {
|
||||
// Signer is among recents, only wait if the current block doesn't shift it out
|
||||
if limit := uint64(len(snap.Signers)/2 + 1); number < limit || seen > number-limit {
|
||||
log.Info("Signed recently, must wait for others")
|
||||
return nil
|
||||
}
|
||||
}
|
||||
}
|
||||
// Sweet, the protocol permits us to sign the block, wait for our time
|
||||
delay := time.Unix(int64(header.Time), 0).Sub(time.Now()) // nolint: gosimple
|
||||
if header.Difficulty.Cmp(diffNoTurn) == 0 {
|
||||
// It's not explicitly our turn to sign, delay it a bit
|
||||
wiggle := time.Duration(len(snap.Signers)/2+1) * wiggleTime
|
||||
delay += time.Duration(rand.Int63n(int64(wiggle)))
|
||||
|
||||
log.Trace("Out-of-turn signing requested", "wiggle", common.PrettyDuration(wiggle))
|
||||
}
|
||||
// Sign all the things!
|
||||
sighash, err := signFn(accounts.Account{Address: signer}, accounts.MimetypeClique, CliqueRLP(header))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
copy(header.Extra[len(header.Extra)-extraSeal:], sighash)
|
||||
// Wait until sealing is terminated or delay timeout.
|
||||
log.Trace("Waiting for slot to sign and propagate", "delay", common.PrettyDuration(delay))
|
||||
go func() {
|
||||
select {
|
||||
case <-stop:
|
||||
return
|
||||
case <-time.After(delay):
|
||||
}
|
||||
|
||||
select {
|
||||
case results <- block.WithSeal(header):
|
||||
default:
|
||||
log.Warn("Sealing result is not read by miner", "sealhash", SealHash(header))
|
||||
}
|
||||
}()
|
||||
|
||||
return nil
|
||||
}
|
||||
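When the difficulty marks the local signer as out of turn, Seal pads the base delay (the time remaining until the header's own timestamp) with a random wiggle of up to (len(signers)/2+1) * wiggleTime so out-of-turn signers don't all broadcast simultaneously. A sketch of that delay computation; the 500ms wiggleTime is an assumed value, since the real constant is declared earlier in the file, outside this excerpt:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// Assumed example value: the real wiggleTime constant lives earlier in clique.go.
const wiggleTime = 500 * time.Millisecond

// sealDelay mirrors the out-of-turn path in Seal: the base delay until the
// header's timestamp, plus a random extra of at most (signers/2+1) * wiggleTime.
func sealDelay(untilHeaderTime time.Duration, signers int, inTurn bool) time.Duration {
	delay := untilHeaderTime
	if !inTurn {
		wiggle := time.Duration(signers/2+1) * wiggleTime
		delay += time.Duration(rand.Int63n(int64(wiggle)))
	}
	return delay
}

func main() {
	// 7 signers, out of turn: 2s base plus up to ~2s of random wiggle.
	fmt.Println(sealDelay(2*time.Second, 7, false))
}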
|
||||
// CalcDifficulty is the difficulty adjustment algorithm. It returns the difficulty
|
||||
// that a new block should have based on the previous blocks in the chain and the
|
||||
// current signer.
|
||||
func (c *Clique) CalcDifficulty(chain consensus.ChainReader, time uint64, parent *types.Header) *big.Int {
|
||||
snap, err := c.snapshot(chain, parent.Number.Uint64(), parent.Hash(), nil)
|
||||
if err != nil {
|
||||
return nil
|
||||
}
|
||||
return CalcDifficulty(snap, c.signer)
|
||||
}
|
||||
|
||||
// CalcDifficulty is the difficulty adjustment algorithm. It returns the difficulty
|
||||
// that a new block should have based on the previous blocks in the chain and the
|
||||
// current signer.
|
||||
func CalcDifficulty(snap *Snapshot, signer common.Address) *big.Int {
|
||||
if snap.inturn(snap.Number+1, signer) {
|
||||
return new(big.Int).Set(diffInTurn)
|
||||
}
|
||||
return new(big.Int).Set(diffNoTurn)
|
||||
}
|
||||
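CalcDifficulty therefore reduces to a two-valued scheme: 2 (diffInTurn) when the signer is expected at the next block, 1 (diffNoTurn) otherwise, and the wrong choice is later rejected by verifySeal as errWrongDifficulty. A standalone sketch of the selection; the in-turn rule is written out here under the usual Clique convention that the expected signer is the one at index number % len(signers) of the sorted signer list, which is what snap.inturn (defined elsewhere in this package) encodes:

package main

import (
	"fmt"
	"math/big"
	"sort"
)

var (
	diffInTurn = big.NewInt(2) // the "neither 1 nor 2" check above confirms these values
	diffNoTurn = big.NewInt(1)
)

// inTurn spells out the usual snapshot rule: sort the signer set and expect
// the signer sitting at index number % len(signers).
func inTurn(number uint64, signer string, signers []string) bool {
	sorted := append([]string(nil), signers...)
	sort.Strings(sorted)
	for i, s := range sorted {
		if s == signer {
			return number%uint64(len(sorted)) == uint64(i)
		}
	}
	return false
}

func calcDifficulty(number uint64, signer string, signers []string) *big.Int {
	if inTurn(number, signer, signers) {
		return new(big.Int).Set(diffInTurn)
	}
	return new(big.Int).Set(diffNoTurn)
}

func main() {
	signers := []string{"0xaaa", "0xbbb", "0xccc"}
	fmt.Println(calcDifficulty(4, "0xbbb", signers)) // 4 % 3 = 1, "0xbbb" is index 1 -> 2
	fmt.Println(calcDifficulty(5, "0xbbb", signers)) // 5 % 3 = 2, not its turn        -> 1
}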
|
||||
// SealHash returns the hash of a block prior to it being sealed.
|
||||
func (c *Clique) SealHash(header *types.Header) common.Hash {
|
||||
return SealHash(header)
|
||||
}
|
||||
|
||||
// Close implements consensus.Engine. It's a noop for clique as there are no background threads.
|
||||
func (c *Clique) Close() error {
|
||||
return nil
|
||||
}
|
||||
|
||||
// APIs implements consensus.Engine, returning the user facing RPC API to allow
|
||||
// controlling the signer voting.
|
||||
func (c *Clique) APIs(chain consensus.ChainReader) []rpc.API {
|
||||
return []rpc.API{{
|
||||
Namespace: "clique",
|
||||
Version: "1.0",
|
||||
Service: &API{chain: chain, clique: c},
|
||||
Public: false,
|
||||
}}
|
||||
}
|
||||
|
||||
// SealHash returns the hash of a block prior to it being sealed.
|
||||
func SealHash(header *types.Header) (hash common.Hash) {
|
||||
hasher := sha3.NewLegacyKeccak256()
|
||||
encodeSigHeader(hasher, header)
|
||||
hasher.Sum(hash[:0])
|
||||
return hash
|
||||
}
|
||||
|
||||
// CliqueRLP returns the rlp bytes which need to be signed for the proof-of-authority
|
||||
// sealing. The RLP to sign consists of the entire header apart from the 65 byte signature
|
||||
// contained at the end of the extra data.
|
||||
//
|
||||
// Note, the method requires the extra data to be at least 65 bytes, otherwise it
|
||||
// panics. This is done to avoid accidentally using both forms (signature present
|
||||
// or not), which could be abused to produce different hashes for the same header.
|
||||
func CliqueRLP(header *types.Header) []byte {
|
||||
b := new(bytes.Buffer)
|
||||
encodeSigHeader(b, header)
|
||||
return b.Bytes()
|
||||
}
|
||||
|
||||
func encodeSigHeader(w io.Writer, header *types.Header) {
|
||||
err := rlp.Encode(w, []interface{}{
|
||||
header.ParentHash,
|
||||
header.UncleHash,
|
||||
header.Coinbase,
|
||||
header.Root,
|
||||
header.TxHash,
|
||||
header.ReceiptHash,
|
||||
header.Bloom,
|
||||
header.Difficulty,
|
||||
header.Number,
|
||||
header.GasLimit,
|
||||
header.GasUsed,
|
||||
header.Time,
|
||||
header.Extra[:len(header.Extra)-65], // Yes, this will panic if extra is too short
|
||||
header.MixDigest,
|
||||
header.Nonce,
|
||||
})
|
||||
if err != nil {
|
||||
panic("can't encode: " + err.Error())
|
||||
}
|
||||
}
|
789
vendor/github.com/ethereum/go-ethereum/consensus/ethash/algorithm_test.go
generated
vendored
Normal file
789
vendor/github.com/ethereum/go-ethereum/consensus/ethash/algorithm_test.go
generated
vendored
Normal file
@ -0,0 +1,789 @@
|
||||
// Copyright 2017 The go-ethereum Authors
|
||||
// This file is part of the go-ethereum library.
|
||||
//
|
||||
// The go-ethereum library is free software: you can redistribute it and/or modify
|
||||
// it under the terms of the GNU Lesser General Public License as published by
|
||||
// the Free Software Foundation, either version 3 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// The go-ethereum library is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU Lesser General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU Lesser General Public License
|
||||
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
package ethash
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"encoding/binary"
|
||||
"io/ioutil"
|
||||
"math/big"
|
||||
"os"
|
||||
"reflect"
|
||||
"sync"
|
||||
"testing"
|
||||
|
||||
"github.com/ethereum/go-ethereum/common"
|
||||
"github.com/ethereum/go-ethereum/common/hexutil"
|
||||
"github.com/ethereum/go-ethereum/core/types"
|
||||
)
|
||||
|
||||
// prepare converts an ethash cache or dataset from a byte stream into the internal
|
||||
// int representation. All ethash methods work with ints to avoid constant byte to
|
||||
// int conversions as well as to handle both little and big endian systems.
|
||||
func prepare(dest []uint32, src []byte) {
|
||||
for i := 0; i < len(dest); i++ {
|
||||
dest[i] = binary.LittleEndian.Uint32(src[i*4:])
|
||||
}
|
||||
}
|
||||
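prepare is purely a view change: every four little-endian bytes of the hard-coded test vector become one uint32, so the expectation can be compared word-for-word against the generated cache or dataset. A quick standalone check of that conversion:

package main

import (
	"encoding/binary"
	"fmt"
)

// prepare converts a little-endian byte stream into uint32 words, exactly as
// the test helper above does for its hex-encoded vectors.
func prepare(dest []uint32, src []byte) {
	for i := 0; i < len(dest); i++ {
		dest[i] = binary.LittleEndian.Uint32(src[i*4:])
	}
}

func main() {
	src := []byte{0x01, 0x00, 0x00, 0x00, 0xff, 0x00, 0x00, 0x00}
	dst := make([]uint32, 2)
	prepare(dst, src)
	fmt.Println(dst) // [1 255]
}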
|
||||
// Tests whether the dataset size calculator works correctly by cross checking the
|
||||
// hard coded lookup table with the value generated by it.
|
||||
func TestSizeCalculations(t *testing.T) {
|
||||
// Verify all the cache and dataset sizes from the lookup table.
|
||||
for epoch, want := range cacheSizes {
|
||||
if size := calcCacheSize(epoch); size != want {
|
||||
t.Errorf("cache %d: cache size mismatch: have %d, want %d", epoch, size, want)
|
||||
}
|
||||
}
|
||||
for epoch, want := range datasetSizes {
|
||||
if size := calcDatasetSize(epoch); size != want {
|
||||
t.Errorf("dataset %d: dataset size mismatch: have %d, want %d", epoch, size, want)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Tests that verification caches can be correctly generated.
|
||||
func TestCacheGeneration(t *testing.T) {
|
||||
tests := []struct {
|
||||
size uint64
|
||||
epoch uint64
|
||||
cache []byte
|
||||
}{
|
||||
{
|
||||
size: 1024,
|
||||
epoch: 0,
|
||||
cache: hexutil.MustDecode("0x" +
|
||||
"7ce2991c951f7bf4c4c1bb119887ee07871eb5339d7b97b8588e85c742de90e5bafd5bbe6ce93a134fb6be9ad3e30db99d9528a2ea7846833f52e9ca119b6b54" +
|
||||
"8979480c46e19972bd0738779c932c1b43e665a2fd3122fc3ddb2691f353ceb0ed3e38b8f51fd55b6940290743563c9f8fa8822e611924657501a12aafab8a8d" +
|
||||
"88fb5fbae3a99d14792406672e783a06940a42799b1c38bc28715db6d37cb11f9f6b24e386dc52dd8c286bd8c36fa813dffe4448a9f56ebcbeea866b42f68d22" +
|
||||
"6c32aae4d695a23cab28fd74af53b0c2efcc180ceaaccc0b2e280103d097a03c1d1b0f0f26ce5f32a90238f9bc49f645db001ef9cd3d13d44743f841fad11a37" +
|
||||
"fa290c62c16042f703578921f30b9951465aae2af4a5dad43a7341d7b4a62750954965a47a1c3af638dc3495c4d62a9bab843168c9fc0114e79cffd1b2827b01" +
|
||||
"75d30ba054658f214e946cf24c43b40d3383fbb0493408e5c5392434ca21bbcf43200dfb876c713d201813934fa485f48767c5915745cf0986b1dc0f33e57748" +
|
||||
"bf483ee2aff4248dfe461ec0504a13628401020fc22638584a8f2f5206a13b2f233898c78359b21c8226024d0a7a93df5eb6c282bdbf005a4aab497e096f2847" +
|
||||
"76c71cee57932a8fb89f6d6b8743b60a4ea374899a94a2e0f218d5c55818cefb1790c8529a76dba31ebb0f4592d709b49587d2317970d39c086f18dd244291d9" +
|
||||
"eedb16705e53e3350591bd4ff4566a3595ac0f0ce24b5e112a3d033bc51b6fea0a92296dea7f5e20bf6ee6bc347d868fda193c395b9bb147e55e5a9f67cfe741" +
|
||||
"7eea7d699b155bd13804204df7ea91fa9249e4474dddf35188f77019c67d201e4c10d7079c5ad492a71afff9a23ca7e900ba7d1bdeaf3270514d8eb35eab8a0a" +
|
||||
"718bb7273aeb37768fa589ed8ab01fbf4027f4ebdbbae128d21e485f061c20183a9bc2e31edbda0727442e9d58eb0fe198440fe199e02e77c0f7b99973f1f74c" +
|
||||
"c9089a51ab96c94a84d66e6aa48b2d0a4543adb5a789039a2aa7b335ca85c91026c7d3c894da53ae364188c3fd92f78e01d080399884a47385aa792e38150cda" +
|
||||
"a8620b2ebeca41fbc773bb837b5e724d6eb2de570d99858df0d7d97067fb8103b21757873b735097b35d3bea8fd1c359a9e8a63c1540c76c9784cf8d975e995c" +
|
||||
"778401b94a2e66e6993ad67ad3ecdc2acb17779f1ea8606827ec92b11c728f8c3b6d3f04a3e6ed05ff81dd76d5dc5695a50377bc135aaf1671cf68b750315493" +
|
||||
"6c64510164d53312bf3c41740c7a237b05faf4a191bd8a95dafa068dbcf370255c725900ce5c934f36feadcfe55b687c440574c1f06f39d207a8553d39156a24" +
|
||||
"845f64fd8324bb85312979dead74f764c9677aab89801ad4f927f1c00f12e28f22422bb44200d1969d9ab377dd6b099dc6dbc3222e9321b2c1e84f8e2f07731c"),
|
||||
},
|
||||
{
|
||||
size: 1024,
|
||||
epoch: 1,
|
||||
cache: hexutil.MustDecode("0x" +
|
||||
"1f56855d59cc5a085720899b4377a0198f1abe948d85fe5820dc0e346b7c0931b9cde8e541d751de3b2b3275d0aabfae316209d5879297d8bd99f8a033c9d4df" +
|
||||
"35add1029f4e6404a022d504fb8023e42989aba985a65933b0109c7218854356f9284983c9e7de97de591828ae348b63d1fc78d8db58157344d4e06530ffd422" +
|
||||
"5c7f6080d451ff94961ec2dd9e28e6d81b49102451676dbdcb6ef1094c1e8b29e7e808d47b2ba5aeb52dabf00d5f0ee08c116289cbf56d8132e5ca557c3d6220" +
|
||||
"5ba3a48539acabfd4ca3c89e3aaa668e24ffeaeb9eb0136a9fc5a8a676b6d5ad76175eeda0a1fa44b5ff5591079e4b7f581569b6c82416adcb82d7e92980df67" +
|
||||
"2248c4024013e7be52cf91a82491627d9e6d80eda2770ab82badc5e120cd33a4c84495f718b57396a8f397e797087fad81fa50f0e2f5da71e40816a85de35a96" +
|
||||
"3cd351364905c45b3116ff25851d43a2ca1d2aa5cdb408440dabef8c57778fc18608bf431d0c7ffd37649a21a7bb9d90def39c821669dbaf165c0262434dfb08" +
|
||||
"5d057a12de4a7a59fd2dfc931c29c20371abf748b69b618a9bd485b3fb3166cad4d3d27edf0197aabeceb28b96670bdf020f26d1bb9b564aaf82d866bdffd6d4" +
|
||||
"1aea89e20b15a5d1264ab01d1556bfc2a266081609d60928216bd9646038f07de9fedcc9f2b86ab1b07d7bd88ba1df08b3d89b2ac789001b48a723f217debcb7" +
|
||||
"090303a3ef50c1d5d99a75c640ec2b401ab149e06511753d8c49cafdde2929ae61e09cc0f0319d262869d21ead9e0cf5ff2de3dbedfb994f32432d2e4aa44c82" +
|
||||
"7c42781d1477fe03ea0772998e776d63363c6c3edd2d52c89b4d2c9d89cdd90fa33b2b41c8e3f78ef06fe90bcf5cc5756d33a032f16b744141aaa8852bb4cb3a" +
|
||||
"40792b93489c6d6e56c235ec4aa36c263e9b766a4daaff34b2ea709f9f811aef498a65bfbc1deffd36fcc4d1a123345fac7bf57a1fb50394843cd28976a6c7ff" +
|
||||
"fe70f7b8d8f384aa06e2c9964c92a8788cef397fffdd35181b42a35d5d98cd7244bbd09e802888d7efc0311ae58e0961e3656205df4bdc553f317df4b6ede4ca" +
|
||||
"846294a32aec830ab1aa5aac4e78b821c35c70fd752fec353e373bf9be656e775a0111bcbeffdfebd3bd5251d27b9f6971aa561a2bd27a99d61b2ce3965c3726" +
|
||||
"1e114353e6a31b09340f4078b8a8c6ce6ff4213067a8f21020f78aff4f8b472b701ef730aacb8ce7806ea31b14abe8f8efdd6357ca299d339abc4e43ba324ad1" +
|
||||
"efe6eb1a5a6e137daa6ec9f6be30931ca368a944cfcf2a0a29f9a9664188f0466e6f078c347f9fe26a9a89d2029462b19245f24ace47aecace6ef85a4e96b31b" +
|
||||
"5f470eb0165c6375eb8f245d50a25d521d1e569e3b2dccce626752bb26eae624a24511e831a81fab6898a791579f462574ca4851e6588116493dbccc3072e0c5"),
|
||||
},
|
||||
}
|
||||
for i, tt := range tests {
|
||||
cache := make([]uint32, tt.size/4)
|
||||
generateCache(cache, tt.epoch, seedHash(tt.epoch*epochLength+1))
|
||||
|
||||
want := make([]uint32, tt.size/4)
|
||||
prepare(want, tt.cache)
|
||||
|
||||
if !reflect.DeepEqual(cache, want) {
|
||||
t.Errorf("cache %d: content mismatch: have %x, want %x", i, cache, want)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestDatasetGeneration(t *testing.T) {
|
||||
tests := []struct {
|
||||
epoch uint64
|
||||
cacheSize uint64
|
||||
datasetSize uint64
|
||||
dataset []byte
|
||||
}{
|
||||
{
|
||||
epoch: 0,
|
||||
cacheSize: 1024,
|
||||
datasetSize: 32 * 1024,
|
||||
dataset: hexutil.MustDecode("0x" +
|
||||
"4bc09fbd530a041dd2ec296110a29e8f130f179c59d223f51ecce3126e8b0c74326abc2f32ccd9d7f976bd0944e3ccf8479db39343cbbffa467046ca97e2da63" +
|
||||
"da5f9d9688c7c33ab7b8aace570e422fa48b24659b72fc534669209d66389ca15b099c5604601e7581488e3bd6925cec0f12d465f8004d4fa84793f8e1e46a1b" +
|
||||
"31b7298991c6142f4f0b6e6b296728ae5fa63ccb667b61fbb1b078003d18d97b906af157debed5e6c55d5a61cae90c85f9e97d565314a2f9fd9e0c08430547d0" +
|
||||
"7cfcee3271f921b95c32a11596219abaa30abc62c2c72c6725078c436c677320594df6bcb92134c1b114fffec982a1f68f13a9f812f074b9fb9c78f2cd4c1c90" +
|
||||
"7ebf1e447f7a422b06303921e3d54f430584d849eaa4b7d652e92a5d659bdfc462adcdd7991e8c66a19da4ddb5390463d073941491859397f135ebbbdbdf5801" +
|
||||
"cafb873c383893390141ae385515504d74a33608273310c312ba468046d2e20c271a38cc0e3920b39705050e752f34f244fc23ddd17ff18677756a87671d4145" +
|
||||
"3aebf97e4890da1d645f41eb20da92a8537c787ce419580073c46aa3bb154952993142ec5b4fb6e8f108fd15fc618cd5c27b45a37ee6dcd52a4ce656c0f58604" +
|
||||
"717ec55f5e592355f1f20e8316f8fd77243734a8b0f50ad93c1d95b5b0482afb22cd0667d935bd6053d7198b54974e10d100df7ca3ec2e0bb5ccce5807b266e0" +
|
||||
"8429d5fec2ae6ae1cc7c5efc27f19c89d4b4a6c5c0b9397886dac635ba37446ff528b582457a4fe7f803f1a47903574f8982d4a679b627396a4e97aaa12fa179" +
|
||||
"0d31ba52e9010bc3c26ace81f702f86649fe9eeda9ec03b74a8a5cf540d82e22af33ab893564397dfc4edd8b1677350df5b82ab61d24db95f58fd2d78afb49c7" +
|
||||
"2d2b1fefa8ff6606b8623829cc752ea37d663b945f3f1d48ad07b1416af252f81b55acd8f164da4faa9d9453721b3b795041ce7df7c77edc13865dbe04fee331" +
|
||||
"47daebe18c183c4a6594a6df3a4d2dc5e3811d805102c9c49286e3d12b38927fa49a7b0cdcb1d799f57118953e31c560aae213a1799d59a78ae68f0590347061" +
|
||||
"fc2668caf08f860452f6b7d3ebc1efecc2e1227d33296b1f1850360dee7236e85274eaede4d18a58b4261ce1f6a7d283dcf64e6d021813f82a566354445327e5" +
|
||||
"6217279b2393fe5aa0f9eb149d4866e1105106bcc221810ceaf053f2ec733d8a22f409c1baf955e50184005c5d55de907de97f5f713b62ae10937e1a7af6267b" +
|
||||
"d2a239e8589017197c343b81540bc26bc52bffd5336fb1da1202a511c7175014d2f500b9d9ce78e4b9f2b158d0fb27af352b6f78c129cad642fe909612c9d658" +
|
||||
"17a8d7f9195ee97201675a918e3cf520fdc19f92b7e6a3db806d4f3799361334082cc58a22ddb4e4f5760bd1667c177b26be325166c6bbed669a158fc87acd43" +
|
||||
"a2462e12578d72db6606f9e24ae659ff411ac9b31d696b8354fd08a591622967a14f8468eaaae3907b7818154ba2d6e4581589354d178bb6ae1c03651c44bbf0" +
|
||||
"e7fa52cb0da09508b5a444aed05a54f416841247a4fe36bd5529029e3adf78b105e22468ed775f4d0954504dd55f2c9b9e6b3a086370b2c0b6fec7efd6914e07" +
|
||||
"26627edb7a04869a874e31f448271077a7de3031cf81bdbc39848efee6075e0d65fa3a32640e9f0395cf7ec12139992aff0a54e0a7dfe5048b3cc03246b56f7d" +
|
||||
"3093538a7b87538d8792a665bc589373621b2f3cf47d2c1f8f580fe34d79c6b2a66323ce89808ce0e5cf77700f5a4446c4be01a310e8f7c7ebefe756b0044886" +
|
||||
"a0477c88ee8ea8c71503748a4cf9eb40ad5c1c8accf7c63c0f43a94ed2b8a5999df3ab9b11b80de73310e036ca88668e640015fcf9cd18eed05517d54896f43e" +
|
||||
"25e7931b44872c4e4183500e0e8c5103292bca1c0d6b0b00c9acce25d31204bb3e4f255c03a0a0916664e9c831b28b364078109a74411a11afb1e610c7d1c9d4" +
|
||||
"ba5e10d0ee0da409654d9e7308395e17caeb9caebccb0192679866e6f2ecb5f10044333bb70d61712adb6d74cdec6918ed9a71d9925da576a1e6f4e906a5cd5f" +
|
||||
"0e94a25e48a4141e4e2770144b63e2449b0f84c82879f34d78440cc430196ba85a213fdac1bcf279a46d7592fa29a876bb7a2efb7081365522a3f06fdceaedd3" +
|
||||
"cc0335cef9ea570733fe8799bb1b918aa7732b4d175929d80c7844a78e19f2dc6a6febf648f49b40320b0f7d784e7f84e45408d70b046bd01cbd8fdaf606fcd3" +
|
||||
"02f4e5a48ab8d13e93a246adfcc94f3109e02a7a969986e75b6ced6bf2d11a55ab77488e131b65a06398fa8e384dc90d875584c9b17cdcf2da5dd72a461cd07c" +
|
||||
"4a955c5fe48509b3284476c42247e086de7d63839b7358cf4ebd9edf9ac8b6fd0c096166405de19c51e8785009d30feb67cdb8ff9ba55459dfdffba8c022e26c" +
|
||||
"0ebd399e4b76ccb4d5491a862c2c4d8cdf1461a96c9b98150e170efacec980edc00a2c7f6d7c6bea3075627e1eb386a7f1ede1059da81a4ac5cf35aa173c88c5" +
|
||||
"1818dc0fbc688b68b82ddc225b6c87588e0c680e303e737c82a13e34be58df8b0cb336aeacc698c79e7682ebb69e6cd6bdc5d11790c96afcfa9290f39515142f" +
|
||||
"5f90b938216a1d14bc049ce3f0ac135722208b989d2557d3520c2186479f179e50fe5b125b8d6638a65047729c6249b9b2c6381c9103c97d1b389cc9cdb31c21" +
|
||||
"8a2eecbf4b9ad1dcfa57446cde88f96563a544c49d6f5303a84a1b7cf074fca78e67e72c9ffa0c542fb646418c6434b16b771088140725cf2dc723c1a975c4ca" +
|
||||
"8a80e633721274907353f51e95952c2b403b45750b42ad10961f60473eb54616f61f7b038c5b7eca475d6a2b844994a9eeddce4f7bb49782e50ef78bc13b85d1" +
|
||||
"9e956f47c60823f3d1981413cb78d309f63a844694861b11b5238961c71f61d82daef6795734f0961e92b9167c57f48e91693e9656fcc6e88f9ce2d373da26bf" +
|
||||
"45b3dff50211fec72387005a7e04828e4ae7ddd10fc2332acf5f1b0f67adcd863752573c2d24488857bfc58c41af45be7641f5cfff611f184612fc0d695866f4" +
|
||||
"2b396b1d9881f442c4a995f4b500f02d4ab4b53ad6e01776ab0e244583f01301203a1515f3dbb73906014e36c7143bf882b005f0228ca0562623893c8a24b7c6" +
|
||||
"4c2c561912010c121b5c3a1e35e75c0b094731e9c0d6acf5a2b1e5b179355525a175640579705f898feb98bffa25633bc126613fa27d2ceb214812902ada23f4" +
|
||||
"367a78655d0d2276095c9e83dfa79153730103963499c367c5621fecfd0888253df82b3d5716703ef92594cf269310b9e6c892c488edb3bba1d0b216e92f622a" +
|
||||
"7f8f7f00d2926d81a4c7ca6cef40d240576a8d5541ccf561c8e0e699925d20347ba7493ed6e182cfe3b633e70b3ce3a0d90813574f6fe329c495d3cd46fd5d7e" +
|
||||
"bdde58d7eafcb134a9a5d3e5d66136e8c9b5d9ecac195dcc44158941c9fe2d87db52a7ddcedc9f82ec160901cc36a9c877af80ceae0563dfa75cabde5d7a7c94" +
|
||||
"9f24bc190f7c2045368356474ff6eee284e7125d1c5f9a036fbde24cecfd3a30481ce077f20cbcb31924368296abf66ce4834102cf7cb949d1b4c6faa6d006ef" +
|
||||
"21379cead5d5a39324d41555c46e0b42a1e871143e47f8e6b3d794e75d7a43c282732766d856e04e666ea346657b157404b0fc8534a2dee8243d40a5e37609e6" +
|
||||
"18bc1d52b91a7623aaf8214a97e4c8c5d860b31c3792b129354a121a7a7e42b50dfbe3ab6590769401eb280545547a43c3a1455355d5d5fdedccb472abfe75b8" +
|
||||
"f5e7d62b0b31553d8d55de0c3c71e6f5a2abba6fe81e9a42ec1968f235bc4296c1ac5df7430917453384450ab56dafa7c7af764cefa3b0bc861c52ae27692365" +
|
||||
"9d7d9ed7609958884147acca867909a75bb6a2c364debefaf98c7ff70c7f4acb5cdb81100fd79a48c5139f8bbdc6553b509f1eb0f5d5d31886a602cd669b3f9f" +
|
||||
"59195a1fa2bcff1170003ba1b2e5e9ad7f2bfcd0573d0f2be9d8fc1773c3a63a2b9292cdbf9b4515c0b1d51772e5ee95303ff493d85314c989e269df4ec3a916" +
|
||||
"40988a11c6a4ad96f7d0541a150edf444c2b1672aa6d37564453b835c2d39864c05c4366492fc9164bf73795410e7aae8206430403357fec6389142b4976b218" +
|
||||
"d70622b4098e322f73020a0d045f07668d1e512c6eeed6e2befbfc3a6ac64054396df96fd41f7aeefa0ab1f66bb52ee1a1df066f365fc43ff0800b0398b621f9" +
|
||||
"a415895268505a81517c44a56dc94e76580fd107dba034bab9f4f4b8a9f881ff34c60c406c47b6d4a998894401006aa88f328393f9cd55a2b4d24db5abbcb05e" +
|
||||
"20d392f3feab3ca12dac475eb3690f2bf9c699d7d90900d9a840068c8cdda2ca7a27bebd685a26eb01a768259a65ab4d7efc1811c87a5a1f4e5038f6b3dc74a6" +
|
||||
"b46d9ac58d31bfc22dac23645aeef819329c9419326f22e1c24c53457baf62ae9b92ab5f999d4ef0ccfb5a21b7598340eb2d399ec81588b6a674c5a1e45aa238" +
|
||||
"c55cae8e1af0f5d64ea378b8afeab263a3a2e5c71cdda4cdb824ae55df2b0260aadf386275ef57781d46f6da3d0b300ea68c14a620c25b5df738c54aef04d63b" +
|
||||
"7dee06cd225e9ed00e78abcddca5a133d8b5e0d9b287e6436014c5da729442239bddb7ecd3fe34e6f6e530134a03ef45de4ae4fe3bf507f16cdfb9bab1fc90e8" +
|
||||
"dc565e4a7ead95352c5a894661e5d82c6d0fc47843d5cab12c4013db76c90734cbff34c73d0d873ac9b27b417665f4948469865f33179624860604a9aba2ceb1" +
|
||||
"68e74b6af3d1ad0bfcac4180ea844339a034b6b2c3e2f61f0c7afbaa76c1ebe93727df1d3db27d59a5cf51b2baaf637b6eb8a20302ef9af0b25dbe3a5e74331c" +
|
||||
"6b0c8a0cf2a2ad72d2e19797983e09468ea95270dc229f2fa084dd2aa96e722016504f6d82508572d9c30711c3ef41ae3ae2f36cc6f5dddbcb0b40d9499b24c5" +
|
||||
"4cd36d2927a6b9d57e335e4fca7f0f16887711a8c1ffa0b48bda46c506ca444b7c23e2c8dd086c2a87283d5fc0d58e9a384106837318dc84ffe65b52d4cb9141" +
|
||||
"2672adfe139c3327350fe3cf355a08c0ca43598a253833e114243c5253077d65643323f5d69b3c7902d91bab7a0928754e7d80afab8d48539fcbe0d9ab83b4db" +
|
||||
"43a6594c4071df2ef35acd1f53006a570f09104f1776b26a303e2aec93a00d2fd8c952d1ca0e54504cd9b469be7c1e71557ec31467ecc773ee817b17c4418712" +
|
||||
"163ae86646b20b80c85860e828c48e88f1309c9ff018e6a95f4c1178de6a4f9f5860039511845da7d8727b5d824ba2502d0a3d76ce74372db77c2050c728dd65" +
|
||||
"b3a15da4f1e1e41c3c2acfebc5618e5e923d503c43a3421d2628ac037c5ce13c74c4ee14d47af02323872f6bf2e8bf09d017ea6e8ec4d3f9fc4fb203ac4e1663" +
|
||||
"756b11629224c676713a42b1f43dfd6362876be1c4865928688765589e26c8dd8bc04ca18d76ced7f786cdb0fa5028ae53991d5b7b45f93bbd50aeb97300f04e" +
|
||||
"69c6736f270907f6a7ad76dde0a365183a961bc8385511e0f22ce0cb8f3c42c5d3928621841e30285fb625294865409267dbb0cf91730ba2fb1088fb79789a54" +
|
||||
"a856311bdca5b0ac0e95fbc79b11c561dc03ea82db182808031e86ec327097143ee761bb62dae8a9f4101fabcac1fc87b3c2080820582dc8a7a8287364550013" +
|
||||
"08053c781b3eb279c89e817fe97103b6930fef2dbf7728def389403a4283f63ec04ae953784b749f0ea6f08749781cd17fadfd15bb197afd2f4e0a8aade2b1ad" +
|
||||
"5100cbdce49ed59658993c00e06bf57c0026b97beadc30cd25f586ff03ab40fcd731535c9a1ccb2c99dc7f8815feab767e1237cb069981f28d8fe26bdec24218" +
|
||||
"488e6086c0ab0efc5d4211fa0726b3a11387df9bb62b863a7b154ca390a268f5e49f50dec45d24bece2a06575cc07a24bfff017d7445024739efb050ace5f345" +
|
||||
"98dacda843d4ef5bfb2c931dc16ee3dd8b61a6f01d9a7de8bbb6d89ca8695f8ef8bd1cc6e0455848fac7691e6789218790270aef40fba114557fd88ff74fe8fc" +
|
||||
"476d9b9665d7e45582540710ce92c8dcad1ad8c05642a23a0d58c02db37ae1a0e70fbc5f71b1300fe398c74cbad37fd57379f58dd3e2d3de6860a17acf3c9321" +
|
||||
"02eb4f9d596497bd849c5bfaf59a83113ef389b6896aa4d4665504a22486299993a9987b2bbdb47d59b3f6ce5d2c9f9ba33b5f0760388ca7f8d8af07c1cd28f5" +
|
||||
"67a417a59ebde4bb9867d4e7b7b79dd8665602c029e9a16a7718efde3d034f13f7f0b9af1702c335893526cb87afc2100e874b25c37fd666bf34bf6a653c7cf5" +
|
||||
"44e1fe0286a6723c7d33461dea380b392dad68f79a78fe1b785d7833ca0d1cd68cff472991a625e3099f3ad2cdc99bd37eae35353cecf424098389dbaf1885fd" +
|
||||
"7db54909a92ee879609eb2e9ef4de1f4338f0df53dbde486ede944ae69869fac701d4f1f48c83757b470ea28c9de2ae5f1ef5d1c91118d16ca0d80b1baf3d314" +
|
||||
"056949df27a09eff70c9ac50b54feff67a165ce5e22ba2222defedc7c39e02356c3553e97524c1506441527da4f5de121142ccd494f83114b3ca2dc37e15c752" +
|
||||
"e2faed7d50254124d68f67e26f4f50c9f0edf6e58b916ca830c4e33801dc11039b18292b87b08f4f2edbaaacddcdab78ff3a0004f86034080f2ca4394b14aed4" +
|
||||
"31e38e3605e6b257bd772954d2f4b846a17df7ed6e5dafa33964d9e56a07a19898fb4dfe8b2ddbd11fa0013e6ebc0e429a5166a43d1ec45557cd1fc1bddbec4b" +
|
||||
"2e9ca26395394c96395ff8f557bd0f7f805c09f0c18534585b7c7fc1d07f145372983ad77fa804fbb7765934e72beae0929a87cc6bf7f6c242ff5db2d4d5541c" +
|
||||
"8c366d22e24e1da5379836fc0eb484683285f99f178b98ca170464bdff60ee04584c12c65408102ac6dc7d10bf58a7d770bf1b3c636a48f934f6f4bbdbcc75d3" +
|
||||
"fc551de3ebaf77006707f6120b3804f2bef9b4bd59f5996610c09ba3953994d1b78a9f3bc3bafeb52266f10755ea842e5b4370c937c09afd34a092ff9b98b4d3" +
|
||||
"518bc2480d4b132455b7f03774ad76b83b254742117921c31cebeab5f39c145f7f373a5603d17dd95217ba1af37a0aa95b2992efcd02d0bb4ad08ebafb31440f" +
|
||||
"1ccfce45882b547ee4bf6ec7ecae11ed79fc63b03636c8a14ec4e0f6877eb658d839be2eac0f10a8948e74203f46078ce66aad2764ff05590e2ac7a8dd8b3036" +
|
||||
"901fcb7ff3369ee989a28e34b9b62e1e607d14da3049ded1a4ee50257195eaaef995bed79ec85111abb522aba1fb306869a1ab381e82943f35345bb5502bb90a" +
|
||||
"e2a0af77526a84754ee4d600ba7f8ac98705ee687bab949a081849889d7b83a21a3dd34af84dc2b9458230ef0ff44c6398d3c6e48e5c09c399ac4d4c7b285549" +
|
||||
"e0bcab7fd96de42f072f1cb633e3e250745321049d0d7ecdef4636e70e94c8414e76ecaedd6ee0792e97de11e7dc1e1e1801ad68f9147278e268d7ad76c5bbb7" +
|
||||
"98386fdc13ca8c77569d96e0debba8ea3b751352136c8f1c8d611a69f1baa9aa4b9d0a476ebd5dd21339ef7f97f09aa86b69a7b114cebe17a6b0e58bf52803d6" +
|
||||
"fd47d9eac3a988b51e9bca95c546d49367a3126bf8ee44fbd0e77611473a1d3d2de0ce4ea54f9bb7f9dd0d0c065f613a623fad43a445eba294fd00037492914f" +
|
||||
"b74d10d0b97a0cf9bd3151c3cade89521f36b6fe1aca7f352e79a77d063da5337a7c88d90e9e566bcd97732baa4459305967c2f65adf1a4a4c7991cbc99df3b7" +
|
||||
"14335a107a97a4ab104bc94fecd1d003fe6d2f22e717853c449881c4ccaa7e7a1e44961a14a47a0d0aa1b1493dd02760ff4d31fbddf5941f93c8e5925d1886e2" +
|
||||
"8761baef8610fa6be016c8f4fe65bb0f335152d5e94893e274f2ab90118e4c07957d252963755b4b638ffc0a734fbe6e32c2e304b10a46a4eed330d101c4f0ae" +
|
||||
"011e7f94b89bc0eb9d358a6548b3f0c47ccc3c2d986d381437c49041629c6cbf61bdf0825efe17e4abef128003681450ceeff0e28842895d8e338c247abf81cb" +
|
||||
"7260fd45042c1f6c630a4b195579721392e577fbfdb9f5b003a8b9a6bc15ae754f6255131a0be600c7b07e2cee1ffe32aad4687f9a429998ed9059a99fd879ea" +
|
||||
"c4dcb55f4551bbb70c187cf1b162e2ca4a929edd6ec9260877df652622ae073fc63c0d8522d3882ba888ac50a67a68fb6530193f93165093a1d8132e87d8887e" +
|
||||
"ff2fdab0fbae6ab9506dae61fada4023133d166bcf1956aedc3237c77d1c81dcc84ae957d89367b0fc950c78e58f2cb9c4fd93e16b94421fdecd46c3ff55592e" +
|
||||
"4374a7f7d8ede9923115770cb416071e8f102d4ad78b891464ffd14f589c238c8e13a4e2a81744d179e7d3ae36cffee75ceb99633face85d077d0c15b3970930" +
|
||||
"075dc08b420e0a545200895207c5a746a18ce9ab64a50d3dcf44da857fb65e4efc29b2b4d532dc6a03b699dcfd77030a4945e6431273e25f06ad8f913c2a9eb7" +
|
||||
"59d8d3049868d337e451726d95c4cf8baf381096fc9b62679175dc8f14e52f8b99f212cab6544414c62f17c8323256cce95356dcd351e34c7a1576b17c1406d7" +
|
||||
"5b8bcca8099a1993df1541ded61b876ae83396b191b719c4b1cbe50d73fc13da352d827ba09aa7fdfef3e4e0273c31ef4fd38b93cf64199c3969a7c09dd5e0f3" +
|
||||
"ff93a5a7db9c2c1ec25e3060bcb5481c6802e1eca78f31862842ea08e8f92b8e52856c4c9fedd0bf20e386cfdf926425f7756ff41fd3567c5bf334e96e3f492f" +
|
||||
"74bd0519d8d98efa0b427ba681b8b1be8fab041ff084dc5f8c4d5d48f481115d7e407ad8a6034f481c2be86f8451980c3aa83a3fff245d90d13801a54527e97b" +
|
||||
"e392b25867882d43e3819f4a8aa380db63954ec23d2f0c11a7aa5bc7a3aedc43ecd3b024280ed8843399e28deb954bfc11a3197fb14a9c9a895859e390e9586e" +
|
||||
"2ad21da39bb9ba79a62222d228a0fc96a24e801f00afc3f98d2168a8a253f24deffe461f6313de9b433e1d2e307239c0e3fd5d9fe4c8352c2c6797b1737e93fc" +
|
||||
"14d411bc69bbc9d78cf91734052b8aa1dab348e4c243b8e6d623865c135f807de8d5fd88f3921327affa37066dd538351bc4ec52eece88856de0a424a87d062a" +
|
||||
"f68cf24db37dbaa8e8e96a812fbf32ccafdf1b9d27f11bea23df02143bd09061a881c010819a315a5b6ee44b3c60979b3f7b41f488b2429d49377d6542fb0e22" +
|
||||
"d09a0ef5b81aa7c8134c0aecdc7a4f9228559d0bb826d30fd77fe0f834212647ce61e22fef0a1c10eb4177de81c31c12054a15f81b605619f3045646110673d0" +
|
||||
"b2d79d80577fa43284266fd2ed54f9a3b9df3509f79559c5bc51a58521bfeb2f95d8851527b7ea47b92a694f6ea2b67dc2d4f506d11d2db32c2929cdf5c8816b" +
|
||||
"7f0c310cceb7ede08d5965ed2c7be6c0a317251c7d31cc4a15f6d7976a8a1e6a2f386fe0071d43a50bd0ce5e864a8e449fe9600c6e4a84866879c490de9f9d46" +
|
||||
"3f22708abf34d3e180dbb6005484a6afad373838cdac335f05c034e3090b2fbaaa53fa2db1f96bbe141d570f17363ff98672500e16994b79be74634755b09e66" +
|
||||
"f1b37e338c946bf85e06c97e31dbddf257d58fd10468278648d86f38710c2ca0b6ea7cac4ea0e2c49b96bf1998bde1b3d38aa853736308e12b4a0d467fdb8a73" +
|
||||
"0d810ce45518614bd5845f58a9835a5cfbe745f45ef59ce9a677d10d8c9f6294f1a0565301efb3c6610afda35167150bd326c77057e530c213da63af3e6a600a" +
|
||||
"d87b16ec5cbf76a13764f71b3e7e0c867086ebd9fad02e1d747030064e071a13da4758cd0fa20872b3dc350f4cbfcde1b68a97aca41e32207b40beddec412c0e" +
|
||||
"c75d87c6671ed94bda5170aa2866509161c28d550190675f60139a7b460469f3d4829b3c65f5d185936582629160522fcfdcc53fd0dcc8fc46de11d52bfcc5e6" +
|
||||
"3407ecbbb682cc1693d6543756fa4e068e92ae1a94924a1ff6891361e5f262b7d3c3a3bc2866f54e6d03ebd5479afa3f424077d51668cc60e23b35fb0222ae22" +
|
||||
"5223ba8a8c416b68c8853022d150c951f06f8f85c2078d3035b8ac3ae984ffcfb024431acaae8bfbeb981870f9ad6bbb88d7d5ff34ba21a44cbffd0aeaa435ba" +
|
||||
"7d40d22602e807ac9a69db514ab13248133142cf03fac999a2b185f34d83fdb495ef042d4a5e92f2624193c88858d91c0812b18fd67046cf50635e6ab1ea9ade" +
|
||||
"7b1fe783dc5147f14f9194cfa92c03a0456f4171f9e5c156fee1c607a1e9e06535f2dac49b92ddf5fdacbf88a062bd7ca5439bae645100121e598deee6043baa" +
|
||||
"85cc0d727f08d37a766a55a9ca21ffb6594fb73f9aad15be4a64bafddb6c85d00f7bb5705d9e56b410dd80df8b087b6d67c7ca84eff2ad699f901415fab21343" +
|
||||
"6351a9bdf83b440e29f3950c7e4c49963ab109686d78fce629e9207db2e17eb5f02f01db6441002d72c06c6c6bbcdc0a7443589ba29909a5a78864ad51e1dfda" +
|
||||
"14782d869e4989ac3c5ef0aa1eabe540e9e7cd4e8eabe25b07f300a134a92718186f085d5c10a711ed0e574bf7550f6bccfc3c094d6e59619bde9fd892af8ef2" +
|
||||
"50e1cc3afdcd9c84ccb97344542843028b00064b0c3d18ac0f0703fe6f9683d40813abbb883e164c5797bc1555338566cf8cdd358e9fcb0e93f08f7ae06a5121" +
|
||||
"c67a231106ad8fd42f0798d7185c2de78b8b76c10e82272a405212ce3b904f90236eeea02054953b967cb614e8f8ac49b977152a52df981c86fa4a92f7f70eb6" +
|
||||
"cd4eb65986564039b0d77f8bafedb4fcbf9c34b8fe9c5fa87b0785c118a8624498fb0184a0dbbfb16777579e1964330c12e494449f6aa5cf69ec4a32054be553" +
|
||||
"6027e0d27c7044abd4c0b8e43db703209037efcfd08944647a90a1ab0c71011753354990cac5a472fae44dc370aac8131ebdf31456a8484e7fdefd268cbf5cb5" +
|
||||
"85ac615039d3655b1348fc0b3b078ac41cbcaf6aaedcc1153bb8d55c307f45405ad6a959abb37bf8891c8dec79a9d7ccd9b791cb60361d4a28f33ec0dfd13fa8" +
|
||||
"e0b9b29e14bf36f5047e51a39c2efcefcc156bd08e46c5c1000a3cdc2bb20713e19d6f492c40e51eb93628cf85d07041ae5353e7decc824cbb1db8ab3a7a7fca" +
|
||||
"ff04c2af423bcfb1864ddc864624b827ddcff2a2f8fdb7a3d86d76e72b4f850ec1262d8fc89e7b12e4cc618afe6a2bdf205075c2008f93b7281d80180199409c" +
|
||||
"de850d1f14ca0ff960f69772385cf0f0a0f47cafd5489ea4fd8b68ec7aa539b942379139756c95bb90818842cd43511edbb7577ae469f46728b13a61e6eede06" +
|
||||
"3a4cdfab5ed590feb807d55d76e518d1d74bfa6704f7c8ccc672824b4d5ef5fa5b3ab8fdf2b6c1753404ba35b76aaa931a4e0e5ca7e440524166b23e9a8be9e8" +
|
||||
"635381f6c9086802d428fece81395dada6b3b866e905ec00ccc4fb9b8415dd15e443f84b7220e3b28700ce3d88f9c6df2afea39e0ead537a50ee11f0c247ee86" +
|
||||
"d4b3074e8761de4de611c409c6d4c369c2c11742a7763f6550edfaae49afeec33353a14d2ae60687dbeefd2fe29689da6ae79d7b06042dfd25a68bde9182fba4" +
|
||||
"1ac53706a8b96535057fc2f99ac84a9cfc6549920c3e2cab44e48a08e77207b6a95b2f6179d6dfd6c2d9e3c91106a7a687e40bb2a1c5ccf566c0e31a0fdbd0a4" +
|
||||
"f270f9812208f939efd9698a8b28ce9c5633f18ace7ab0a7550d9e7e26cf62eef49200331e19a64bed648b5d18ceb389bafbcb3f280ba78e4cf03b053f2a5f08" +
|
||||
"3c852452837138004073cf6726143179386279f1a8f15d44876c19bf6c2e2992ce6056191bb1a386f0e1f6f249495cff126991c6560e3f613e56525c0c49b5cc" +
|
||||
"2ea4e736d83480f2b45d7dc840b849887f54a2aa072e72e3fd0db34e5cddb02221fdf2a40fb6ec271ba3a09de8dc73c24328c5d9a33ae0adc9874902f25d5bef" +
|
||||
"4d85914557e2983c93fba16cdd4bd929e878b5d51b142b6e9aa0ce84871b7b03ee6cc13251e17547c2d20a7d4e948760e207e29de58a7ccb71b87f99d79837db" +
|
||||
"d0f293ad3d33ffe91435598e8a4584b7b7ef5b1a895a2827b4976f81d335e4aa6feda3539690899619a4cb34fdcbbecf1b8b38cec2ec7c07ce84ec3044f49656" +
|
||||
"28fdba8971585afb509526640d36425777b6ddf5b2a49d795fdcf71e57fd35f29fff37890541b6e152f14fb6ea4c70a1b9f159d02ed895a68dcc276f5d5ae83e" +
|
||||
"47c021392ee22a398c8c73b3446d61562b3ec596036959aa645a65e5d24f733e142ec0e184b72a2adcbe3913932b2c9503c856a7e989d24f306e01e99268188d" +
|
||||
"f858694e297803effeb8e28bf8fb63ed6787acc2c61f509e19099607512d40928a08e649474a43728b63523175fad12ad088aade0c1e20815c7c12773bc959e8" +
|
||||
"640ee23eef2b1653ae8918615b45158a01be5a5f39a75a7c6cd8f1f6b463516539771ad251d5c2d40c5049877765512c44e58bd3b9ac3a0ac281771097880fe2" +
|
||||
"c9516dcd6f1373e1e8a52fc485d104004dcc839fe3d120f1432b213388dd37980ce8238c87a70d5abe95d78d696d2436eb23a8f620ce74335d5e47f6524b11c3" +
|
||||
"e22288644b539e3ab664dd5fd6bafb02897aab35adaef204f82d9318b22f45b787f5bacd74b01d23537973060868a47f2e3a45c1d8805a1d657f2332af8170e2" +
|
||||
"9435d7540e70e92a8c8794bf22d3e11d54ff2d48cbc7a1ac3cecfc48f80fe521f6852f97aafa0605f3e7084b15e61a74869512c9c2d84180686ea07b562cf35b" +
|
||||
"5a0ca529481ddbdba9c60729f821dc7a5a8b5c7eaef1ea7927d455a702aab538e7441933c4fff2d27de5659d6fa41f0ee72bb13a829839267f3a7b51a81a85b0" +
|
||||
"d737194d94e1bf8173248cb057cee19eb5e2cdda38c529298f3c4d3b95400198063c5b27e9262f9c66425c65568a09035bed9cd55c1f2ec4becb6b9c59445398" +
|
||||
"ad5b7c85142e713b6dd32493dcb817c8bcdbd728e325c25c5a14d764b63f960d1e48a0bc7f4d2bf51060f83b1d1f2591c6a9b79182e686b887a2c1461442e2f9" +
|
||||
"16e8582e298f87ca95a8052df33af20ebded7bb1c528920233d1aca3b3789494d97084890fa3db0ea7eb561b0087c4a90000db41ea072613f91ebba82790f33c" +
|
||||
"fd52cdd92d2ef1246911ef1dd82ad083881b72a08a40ee55884380dd136a7c0724cded69c6abf1f156b14ecd7284abcbf66522264145ee78ab0ef0d2a74eb390" +
|
||||
"10946d5efefb7175164e96621d3f158de8b57956b8b1155c35b32007e47d915cb61dabd556a370537737574741fcf9a8a23f7155bf1f0e3d3c0d2088d1191d9c" +
|
||||
"9c974139303f3dda55a70ab4810fddca3561114969d370f4e6bad60a53815eab1c4613854d04ba8b049dd7ab1a935c728299d1502ff9aa3fbb356f87f2a52b6e" +
|
||||
"947dc79b5fd211ed31dee722d3fd857f43aad973fbfacb7cbfe1b2553bdc76142ccae5b4021a4647b8d8087925dd3191a57198792b6f918de87a92705ce57905" +
|
||||
"f2dcfb67a20f8c77e700933432d60a4536d0959415f15f3eb8a788f1b19c497d3b68194e27ee736231835469d8bf0ce1717ecf533ab77dd97b35881d8eda959f" +
|
||||
"54a7935b1bc11d7f2e472757734afaf0463da3fad9804eb948e8d6444e8394b33f1c187618c7c02371ee6d378ebb7a20b6049a5504daa71999d15944ee82650a" +
|
||||
"2388f374f3ec3afd4ca58ef3f2588997d194a2741252cf6562e00cd6b5c5fe4066454d2b3150317694052b4dafb40c2f04c850e4062cd8f0af2da75280046850" +
|
||||
"77990788b27fa457ae9d0b622d18fc070f1d2661ecab529b5cb82f30a29610dc6a9e93ca9a2617ab0109957a45c1204e5eedb8860c6f4d57122060f39a4194fc" +
|
||||
"a285f1e9e7a75cc3511b8cb4865719c2260a630845051876e7795bba59573b6ce5faf7e5708eda7be25dd49c8cace4c04c541074d703e6601e043f6c63a0a371" +
|
||||
"1a381f0ff83d136f4aa29de266169ce5b3105cbfeffba370fa306a93830e3c0519a495b8b9f4b72078e2c45421b4b0667f903676a1339c70ddd1a90dbd21853b" +
|
||||
"2826ac3fa5add5073c634d4c5e87db0efe18638ee93c460257e52aacb8600ff36739818056110b2e974a1959e3784903aa97b0fcd9264f7d8f6bb5d8b7d9f03c" +
|
||||
"4b643955bf7966250936d4e7d651712db5e695a6a36b5e6f56c651ff737042b5bb73638e21ca6ce9a3e63fbb1906675d97001d7ee240d277d62df18acb169677" +
|
||||
"963d231c5276bdf5767ec35fbedb062e61c23d759aefd287b2dd62a0d6f0518d90b3c1756fde50afd33cab395ddf3cd538b9ad8862a199141331c63110c9ddaf" +
|
||||
"fa3d6c63a1fb1b45529eace826cc29a1df5df327bb782e573c41864c18e6d31401d19719326e5c35bb50de7fdc67177a6a6015b4264fecba2360ab72ae8b060a" +
|
||||
"6c66c5a05782a15fe3c1833b47e3495d29f2cfa579fcb08f02fd064e9ef2ef5564ac6a43cfbcae7d79e9f87ebc2176611823c6624db11892f8c47f8c96a49539" +
|
||||
"1c18f821ecdefb343eae3fd98dae1ef96fa3527788543c0d06d9793579cc62d91dc4d25312901c6368ba81c8536c6287230e8f97d25f6c77366609580cf26a27" +
|
||||
"88502a9aada84a794d3674ae11cd1742cf245e9d9502dbb5b340c2a6c79e3607f6b47666e1ea991ccfbdf6cc41ede46d043bc4d3e5e6882414dc65d62f9f47b9" +
|
||||
"fb7b828a89afd6361ae458c2cdc82f459c54977072702ee5a4c22955b8019d8b8d91f558897c4b661f8e5412ccdc10c40521303c0ffd353a0c04cebca5622a71" +
|
||||
"192b144d0f9c5c0706a130df887526b7b6e0f358ad9f7d0fd4d87c5fdb29a7453388c0d009da0d4c47a5d6cf8363892ac42b6ce3388771f698802b4dbfd66aa3" +
|
||||
"5fa6a6f8b42dd8446324501c807b6e72cdd35cfe08956a52f86bb4709fe2980f62152dba3571f18fcc4c1cf7a25384c4b5174e93e5afc9b9f12db2bd505ddade" +
|
||||
"d670d0d71b9548f9a07ef98521961cd96e8f363cf3222336bc4baa284b5305aab47dace615c1b3f3fb1ee23ad9ca3f58b086d9169ee5b2d3c2831e1db4f905da" +
|
||||
"11e1fe79e3d48c01bd9879ed68391e4d24d6db8d6774cb8747e7ea368aba3bbf355386408af4a59b23fce74a5e673a1044db66ed8529a65462269480736cdaa5" +
|
||||
"0784fbd77e1c41197335b4c517af8a67eef5b7165c5fd6022cceed0396089c3985c36595497db0a0fcae478e4e4d68c57b93f466aae86dd4244633beaa8116a0" +
|
||||
"de25d2a54353b7ee85fee58ad4780a2957d69816585a64f65e75f332614aa6786d1a1432f6acde385d3d6e870bc968c60c81401726a958f0caae336c83a9523a" +
|
||||
"c174faed43ec67473dcd151506e334a6aaf1731dd3aaa831f934be83beaefafa11810e7eb140f4fe80cfba574e6106c1bfe9f0b20173a4ec2663ce0580df6daa" +
|
||||
"7966a3a8906677ab680025782c61b95cec6a73b5deb16599e6521f9c6c4cae0d9286566388d5181d6ba11c51a25c62b510d9b1793f3ce9f73ff0c9226c8aae69" +
|
||||
"5d014287df074a244014720ee38e3968557db00aa63dab71854b8573c42c65116e3d88bf040d53ef3165a5827c717179e2939e310be5eaf6fb75447ba98ce925" +
|
||||
"98e83a32a90eea848500a30eaaaceb307d37b1201b83a744468a1a52632ce5525c1fce5f702421e42e7cc4c61caed539dc09001cd31a8a2b48a783c36c56a3a2" +
|
||||
"d50de42c63981c86642cc92bcceeec8a66b4afad3c1be1df4bcb8beedd442c281080c94692bf453196ed1a66a074d56a8e7f60238ce18358373efc173e70c691" +
|
||||
"f832e1139bc04e6258d77cf7529af7ce5eca28ca5cda818625c0bb5beca96d99fc9b6689a7771434aa96e23c55a41cff7b7b718df58260b3bc91762034debf49" +
|
||||
"7d8ca8d5764c52bc9665bf86db5407ee1b786d90f8d7772597eceb98f0121e3996e771d951568a162f6b71042998db8208ece5b8b0c68107b8e2079765b0d8c3" +
|
||||
"2747597072756208b0d84415a5334a88d916bda390e26ccf3046b860e7ccbe22c48cd3d3f51bb65a98ace74d52613f782db726babd02780b8d620655bf9d551c" +
|
||||
"ae9ef3056e3d24f5e7c3105c4857492fedd244ac2b8c30a874c1446630b042d819bc6b6d2d96829de903db22af706e93c5ae876d72c633600222443d1765bc62" +
|
||||
"a8a20c458ae55bce8cbbef753cccc5e7d929408d6a3709467373651f0163128aba4142ecc56ef11ff1fabf5eaf6e955b4252d1350e9002300a1236ab2fa0ed34" +
|
||||
"c9cc7dc1d4f09bd31296cec1493e725b57cc496fdac4e8d26197376bda7f74c0965c4352bc9d5c731df04f9908899cce6ec3afe15210d115992b2d95308dd032" +
|
||||
"13c557ac527424c7db02475a2fc78b88d022d212c3d02d5ee490e2436e6e572e8a1330465b9052f8a3de01aab76662d18fa3d076fb77103fe432d549bc861fcf" +
|
||||
"f63f3401cda31673ee48826b68b387802fea4471deb1fc928586f1b1614c16311c9820b563ab0112c28af5c1af5121818540c4b7d7f549b33906c1b86c6674ad" +
|
||||
"799acee7342e4a79d9295493b2430fd08f373338795764621bca444868f3f42b0e40abd4b8e148cad2861fb4980b83bb58d40eeecd8d8cb1ef1ece17b0ab72e5" +
|
||||
"06c6e650a3a43081f545acbac51ed7e121df51edb75120cce30ef7dbf41fad331120e537fb35be45d93de4fac0cadc7e5f644e2b767a285facd5f12845559785" +
|
||||
"57f4afc276e21d77f6162062430dc8918435f035f435ea419ae9f1ddb6afd46b243f8bd6a3a33e7970e7e76fab9ba6afa72a4806189462f9d0f231a23e3ee1cc" +
|
||||
"51cd10cb9043a27deecaca866751f971254fbe3084c243ef5f516bb652988b770896ae5abfa12db2eb2abe404cf694e9f60d47e734e260ae668b750e11b26001" +
|
||||
"0d2bee5ca555a44523742fb069e484f7a69c12d4bad026c03ed7af10ebc9cf2f54d143fbe4de83448df80668217a11f5a1187f35ff306e6c685cfc2417c14aa7" +
|
||||
"aeba1fb7dab05c913fbcbb8e677dd0f89324048862220ab6f5340c38b70804f625f5a526d6675a49fdc22ea6ceed477097fc723a7b6eaffd65c48dbee13df566" +
|
||||
"f8f3449d91abb367cf37a8460fc8072c4ac75f88be8b9c840ef438cbf12a2e7d55799f641316e3381f72265425f3e90fbeaa9919533d8f9262da27f1f933d4f9" +
|
||||
"a83e07aeb968016fed89e7b16babf0b6af3800a27c9c3d330b6bf8be447d31bedcc526b1bb53ecb10c3ea098bfa7d014d93274bec70b6e82bd5c443e860835f0" +
|
||||
"ae82b7be7c78cd996e0990e3cac8c1c431481c8159ae1dbc40c03f4ac543e5758f347e12715822d86c881030de83a76ba1c49e4d4836bab7b5287122ccf523d2" +
|
||||
"33935d802d2bca303cf57b36a5ff17e7c611f1cf99699881ae464da2911d77580587a7228db8325f204adb14413a13fe318e995d60e35c88bb47b99ba9ee8daa" +
|
||||
"3e40ce5818876a3911107a159125dcf768ba04074e5771334e0de430c439070422508577e474e9532f7dfbc489d0c87d37103920415b6c116a422ac15e0736a8" +
|
||||
"1e1e317adc87005f868815950882fc7497794c5eaf76f9def434d198304ff495bd2f9f4026aea330450741fb969700b953ab265aabf1fe146d861ba2aedc53d4" +
|
||||
"f929abec2dee710aed8fa605fbb9bba914eaff01fdc113836d34d855383e4a311b4ec6ef6e80dfe32bc8035d84ddc4e2c305c112b93560112c1f3dff800d6043" +
|
||||
"7eab01991f924075b4dea4db01c377ee1ca374d383ff1fbb463bf7078f6cc7509a0ecf536871abe7c95bf89f29c71f72f1a2002854113cb0d6d2192c00123010" +
|
||||
"8dc9477808a218f84afb81f0274718c024393d5be66edaac7406e520b0c8e2c02ab98ee7b290db261f2122ea68bd79f2cc6dc64936af5064cce2b4d1b7078703" +
|
||||
"951b6b81b9b60b99da4c2d12bbb50351a5b7713541db0958740910ff69e748c71bc7470a3c05489febefd384e06d267371935f652736bbcaacb20c34bd50144c" +
|
||||
"71923b5a521ac4b1ba694d024ba51b4bef3ffcff74d5dc63810b2c0f529073e13ec3232d8647ad124b21ff73402d371c0db39d46cf4d2d4cf7ad43fd8dd365f6" +
|
||||
"9b6b7bcdf664df0e62ba58f3ca0c62ad6fdcc9b091fb4926cb47b5ff8de7d3b12bd8709a46e5c3d5f0d22934c7a0574ee70b87af97d0fa46f7d9673915fed1d5" +
|
||||
"a6c57197524ec9978d1bdf65633721ea2ccc25626dcb5e7f5e090b00e413c10a6d20b45fb8e98c22928de6dda184e856c86792c7cd09d38e4333a76882d363f1" +
|
||||
"7f4d773ba104b2d04fd81027da087258fb175bfa8005c035a4719bac5b9630ae57889fb3b52a0fd47ec4060137b0f95fa5d5684172d07ca91e91eaf20dbfdea8" +
|
||||
"a3e23937f33d8774f30c7e8e5d4b2d5371e5ea5e8d290970904c4c1ff33baf675ed79599653808f652ec4fd0088877f7dd7973023ccc8377d1ada2b80c07d077" +
|
||||
"d7208686354f511925a3514c9e93c13525353b3d9528ab678e3e783c290ead88c2c3d6230bd4cb3bf79fce6dc3e95bfebda41e5d994e61ab083d73408ff6b627" +
|
||||
"6996a263d2920170fff6869c2311441837a2fc190bee104328591b402defa38b421b972b01d020bd20b1b6a6ae884b23eb829fdf032a81d4f199a87ef125d4cc" +
|
||||
"8662e24deb93700980e6ebc6882bcbaaa0283492e81f81e76bbe2ce18df4fb665436310658918ee217b5da262f1a1adbd59eb3c555cfebb12280058c75b5b33f" +
|
||||
"8aa8c2d7cebf12ce46c5f49ecec5a865a9f0b65476793884f0021f8731b1bd288f55dfa1665776b2aee1007bcaa6d92a76a2ba9925bcfa68db7cc727b2a07ebc" +
|
||||
"e24c0314c96ee4d6164c699e585461388dd73476a1e0519d92f51b64eb2842a7b17bb55d512d52da802df63206ee926f6a6a8c32de7b30e7cd3f23e37e0fd82a" +
|
||||
"556323736ecd9de77494a2f8702463f40fb837c2a99270b9050b0cbbc2c305a32380ff5fa94bf9c101c667f36293c12ff9aaf6e0a810b75230caf915135cbe6e" +
|
||||
"63ffb2a0e8632d32f72a65aa965fc556e10ddf6d5e40be919066eebda09d581a32156e1675300f52c8b355e88696fc2a67dd8e350a6e902e082af28a9809ba11" +
|
||||
"ae0a5fd9c6627fb808d757147e5d59cffd9c45874478ab226e72909ccba6592a54391d072c7eb0221f1ff7be9924b9d037e4f8c31e94fdc814a8c4cc7ad4c9f6" +
|
||||
"eacd5af66dd76bb6222b2fd3ea50a828fc3a91ef8b084214bfdcca56348517be18ca472166dd7f18c8e444e3641486e7dada626ced8710fc73a2b09b6e9395b0" +
|
||||
"31ee2c48c9183851357d230204c911b345457de602824273193b795fc21e90a0c1cdaaba36787424b23ce73e2116947f143f9641d39a4c07c2e40e02f3bd7c68" +
|
||||
"6899fd57e3eb23c6f5615c9dbc279fca0d4218bc79d928e70018533a85b4646bdc78015149b4d41d77ec7b46900e7fd5250116ce978f825569bd887bf3fd0365" +
|
||||
"e1259a7514116fdcdd6da3ffdf432bbe8e59b9bca9222c5dca1eaa61caf29b8461ddced6f312838fe490f742db696fadddd19bab8de6bedaade878be07aca4ac" +
|
||||
"76d69b81a6890e66dccd702720c3bd5601c6abdab95fbe4ccde6e35385b75e1977d5085ace928adfa382ea2890889017b9c4c81d9ba4629771f84cced6280db7" +
|
||||
"a6cd83ff9375ffb0a75a6bebba9a209f048788ba39127c1036e4bd0aad9be40754fd75295611e455909a818a3541af32eae98df7222353a4405da0e7be9f1cf1" +
|
||||
"bcb823fdea7976a810e8a3c7bf93fd947f961a344a93aa1ba99bf2df48ec82769d8c08e7b14191050d5706a9467c9122f34e27f060dd4d6e936c414c4e551b9e" +
|
||||
"5d6b5b58347ed0012a8a323f41b43bf5e960b2806de59da85b998affdb490fbc965d569114223db3ca65df69a617f6808bea23017327ddaf32990070aaf5f444" +
|
||||
"a9db44a57b5c92bc27bc71c5f8a2b6929edfed8e182bf5942564ef045c75448450eb1a4e4e09a1875e8a4a74f229879ccb7a2f2cd0359abd91a782c2ec1f68bb" +
|
||||
"40ce0a63bcc014b198adc222fc957eec0483f5b93f0db91b7ab3b3e3c59841dae057eec97abb55fc42b2de124946e66ed2a7fe8cd047cb79051b55f82594ab45" +
|
||||
"711c92364f932a5fd274fe184c85583ac7cfaf258c57e296f9c18fd181308565315e27272cbad3b21cb4490ca0e5f675365caac42f299e22d8a74ca51a9d0883" +
|
||||
"bb376804e234502db66067e7a434d38c3dc075346e888e4558b1745d00458df99db02f0e4c37702fb0989387f74d002a924790a6b7351ee0f41684bef079be26" +
|
||||
"ee9d70b560c006cff4b08b9578afb5019c21ab9418ae4ecaa7a1cfed2d880a06a03c2c7711b601a2cb3d9193e1577b4f1d0e614c0be1f69205fa6524fee80bf1" +
|
||||
"e1f1906b50e75fea2d19b8a83071a460145e1730581e5e9538888d2e797ee3cbd3b31399ecb4d6244ee44362493802b142ea397c2e7a3c1bc86f0ea0546a38ce" +
|
||||
"574e1df0c27ad8a28dca70f659ae6a1369d8b3aee7d0dd24ea370cc2bc1b1a4dc9f63911b63e60fe4ed8552bbca10e01c82d11b0ddf748d234b4aa3b31683c09" +
|
||||
"86358fad680dd2178902beadc4646b3eceff572631ff9e6b64d8a622ad9f0308cc46b7d422ce792fe5573e9b9480e1ae9fedf31edaaac3b08c5a2c6c27d6b033" +
|
||||
"6b92a3da7b838bb0a2916ebb6ee72bf33a7fa70630491f49c67031ce4b9dec2315088d0a5cbf7473fd121e0ef5f4e92d43114014c9f8c6e671086a446eb1f66f" +
|
||||
"70f0cb0c668998ed96ee0ad2687946681fe40dc46cbd170e0cabb6f6216be61221f171fb2f4273f58c10d5c4eccafd1df62fdc8ac2c5c8f6d5eb637b71fa89e3" +
|
||||
"f8347343f89667a4450c5c6e3791034d2dc3a593185b55bb95d8f8f2984ef981e4b692c1383ace4cb2c4adb80d5d582857b5d0e3ccb12845a59587b47232ad20" +
|
||||
"926efa78e05a57b136e284401c516296b6b194d541ec165d11ef94f166cb52f45145d745ff3deaf643b5c45573ed0e69a22f0e0c9c5367f6d1398105516729b6" +
|
||||
"3f2edf1b01ad9633edf80efbba6555d4253fd99b45a36f16ba98ea0bb0d80533aed806544a084a398a692f698c78b9bcfc9b4d3328dd869dbf7085893b8dafe1" +
|
||||
"59e0517c2f6a3ddfd4a8c670072b30c96b90f81fcc08523e4fd75919752bfa52a1db7c374debbd83ca8e311b98b0d8275bedad215847fa8984cb50e108f69550" +
|
||||
"f6517d719dbb5dade1d3c283357e14b6d9e85d61e33813546517e1262a7cbac814d79cf6b7e21b0fbbee9b6314f02b2d4e6995d2231670884c78cfd86a2acbcf" +
|
||||
"0a178ba64de2f13f022e22b9b968ceefaff374aff02b703811f3dc541a69a21d6e1c5d1aca48889b125ff1274e65413f61e42bb0194b60b65a3454c696033cc8" +
|
||||
"e3cc3613a52850296a0154bde0e2a81b7a6489bfce505dbe1bc44e0e1052f678297bb19cbdf7970bfa5268af8a54eee004063f9894118ddce7fae8bbba53a428" +
|
||||
"678cec8a2bf6cca2b1a5f4a2e95562437e4eae41167f39d2a150f7c46c1eb6da35587f7234d870b16ed91c7db548ddc99967381b4bb4f3a2b0a5ebcbc7ab1b06" +
|
||||
"7d5418768eaf7d526ca116e239ceb3ab393c45f3b32b713c11fa8e5ae8d7611e6008fa08d1305d5655315a72c85a04dc853da3e8ea9d46674194e15226f126c1" +
|
||||
"a233c26dd7d3cc04ae572320d0c351911b6fcdbc0b8450523e96022f4b964d4e479b6cb1c40a6d27699b57ed2952ef7fb3172c69ba7beb8c8633a01070ac4344" +
|
||||
"d4c401acf8ca7fcafeaa59e1d4c2ff251bb67dbe10a862103df1b416fd2097fe412b3da9d4095b48ea094fc3bbf2ca41e4452af3a179580e3bc11a7d97ba050c" +
|
||||
"ae1d6b8075da267b3ae2231a1fcfce0c976402f34963c007d4f85d9ca95646990d1bb09691ceea3b34211dc58409e052d0acf8c2296a7e8fb52d7c673506d89b" +
|
||||
"847c369daec7909da8657e8976f59f2ef4c8a049b46fdf30d6d223ea4175e4d60e469bcea0eb3bdcaa4d6024f2b43cf6de9bb40efa9172381291079dd82ac5b2" +
|
||||
"39f2051a7f1aabcb8d50333e8c160de19ce1c76ced8056a0724ac630dd45ec4e315437391158a633c179a3d1f364b475454fd29c1e539077b9d5f7227786a5d9" +
|
||||
"d8ec78e5615c25e517e9fcaf07611b85dff2c131a1b11a901a431a601854e5cb627cf7b8b0c5e66ad6cf60b7ffd6c6441f9ecd58f414013279e9de533d8d797b" +
|
||||
"936cfdbfcc78342b7ab586457541df5f3b7d1873612df200896e2929f44c6fe10d24f7e6dbe52b6c42c0a40c947c1cbda2a41437079eebcdc29716d80957c159" +
|
||||
"627e7366cc16df92cdedfa9f52edc848335f1c7152652fe24661a469fd503393229063c7ab20d8d895139a2f580dceac9f6dd4c4ac652b1d60c2b8a1b0b2923a" +
|
||||
"86c31742807549e6d523b3c88d31e8534b9e05a6c63f6c8fb8a1eb4dad733d92e7071e410f0087ca3074f4a2df511ae89cefe9ed09a8df603d61f23754e43cc2" +
|
||||
"e42bcdcbe58b0587aba9a62f32c7507116fdc8a9db3d65d6c0097c8f473eb7f3bcd11ab81d5b636b0812b7982201a63d0b8d40f2c38f65ffd953668eaa5751b3" +
|
||||
"dab7f038aa7adbcd1f1102267c9d55d43649f9b4f65f1851546c5a9ef2c7ef56e84b16f12641e9d5ddaa78ec778b5f113b2e06bad5821e1a5203b006a774e36f" +
|
||||
"56c9336d92c8cd8bddcf014b6d58c394e2a93554af6361fc1bbd13c359fed98bb5adfa4dd1266e2744e126e1bc029ab28fd68b648a2ab26ac23252171b298641" +
|
||||
"2621f2a8697a00ab3fdc1b3b04921390ee16d213601ab249a51830661051d34eb777f690fc2d8dfb8e0898567e388830bac8b0bc896f43003feadf34256a927e" +
|
||||
"b4d9293e32ca135351a19d1246cda30551c87de1e148ff5ea576b67e19e1a0389b88a5548b3b1a8cbee19eecc7de5c2333264c711d50d688a1c57eebc28dd6f3" +
|
||||
"3dc0e4cb857973c3d0f28683a6f3c09db9f54b8fabeca9e4f9b86d794ca55d6611858f0d48736adc10dd6763ed7199bad81369ab1f3de30f521d43382bcccb7b" +
|
||||
"be0178f716d5c3cb87488cebd7d9e2bbe671dfcf2512f1b815075777ea92a867f35e09ff0110e61db24423d0598eb6fd078dde0dc2b5d7f5e0bb6fee207da109" +
|
||||
"2e656b5c982866d5fe01e6db79809646559a6f2b9088e977789aac74435dc625b54296b25788bfbbda9bbb25247d428f5141b03172fa11f12339b91ca96c92e7" +
|
||||
"ea5a128c8046087dc7a7eba63e3bdb200565d8a103e7b3c292b088eb06aa27b43688c8516bbffcf123499574f00908ff43d66b79106cebaf16725f1dee600a29" +
|
||||
"7b3a3da878940867f9549e65c73ea798ca923b012fb8a7ef3e2ded1d2c4e85635219f627dc4feb90f884ae6436e7b44f9159f9889d8e194828e079cd2ee60a7a" +
|
||||
"6fbb6b8fc1f7355d7322709fabddd76e4283ddda3018b7882ad79b32bac133da415453eecd5bb1f0deb4f3b987a71a2f2e60194cde63a42b91b39bfe51b4aa8c" +
|
||||
"20952b601df11d170c65a7fe935915890849a367936e97bd242edf305eaf2f4f4fb9e5ee1464c51a899ba5cc69cf56731502c1b75d0d565b1dce15440b0de0f5" +
|
||||
"58bd4f810bf058af99c158a2be0dd02a01bb5317f55675f4d42c6766fc61271954b6988c33a84518bcedbac8de305946d060d19c4691c026953ebd680a4c9012" +
|
||||
"0e8bd54675d6c33cc86e65f5cd3c34cb1e6fd47784a64f39e95a1945b5c21df2b3288f963863b33366908b05c2bd499dd25c1b8e97329d7e435899afeaed174d" +
|
||||
"2a2471b6e8d6ad7a0b1b6a8b19fbd976362283e5abffcbd2cd310245092749b23e0d114e727622953487f373c833281a74a1b97742ca99e49cac14d9102e3680" +
|
||||
"404509889ace009c47d075ba9891e7f67b89aca3e213150f3c715cbab1869135601612d7dffda3cc104b6508f56eb8b7e7f379b21e1ce290ce5fb96f53e3a7eb" +
|
||||
"c7f7bddcbdfc266f23b775602d8d12527d30446cb4144df7fe3c2756e232a8ffca625d7b6ea2c8c0a92e6425ba67ab75160623c39f01fd96856b582e257a6930" +
|
||||
"224c6da90a6eac4249214c3b85aef52835d904a8a5e224d59eae0c80a33b3141ffb31a7d8e62833fa4c850fa6be135558fff5434777df45feed00316c475759f" +
|
||||
"ac6e014e9d3cf23e7322281ed75623ed69a81d6f05ee7de193f6b44ede4a94ced27aef5ab9056144593a836da80f5297875e7bd84d8ca6df95de8650b00b3528" +
|
||||
"123132f26aabf755d00450648e44f3beafa4dc746775958c6dd88bee825c29112a3af582bb2ebe628d70364fe9ad01b8a9961d5b71018690440151486114af1a" +
|
||||
"d85679bcd3eca510c6d6887e70e0d04b04fc2db5ab1eb21fff925b66f08f4fcbf31be3d743154056ba137727b63576e72f1756029c86bbcf9452fc6cfd89f3b5" +
|
||||
"9f243d84c410253ba7c9284191a0ed87b2513901a93606f1aeb736c90dfe40c0a343d45e9a992ea894b22ee5d49e0f7d55d9bddaa6c74bde8ca5839db67b77a9" +
|
||||
"ef740f9a47241f05e5dc1b9c95c459cc9db560b1db090daa3f4c6de46f695a158baaf357a1fc63ebc0d9db8144137ec4bd69c5af89cdf9cfa66e06bff6339d62" +
|
||||
"2c372fbe5a855d14fa7ff3726512f966e4da0556b29ca6d7517803f897d0e1911f9b46a291002a8320091aa7016cb7ac993e35c8b0f5aed3c94ff0b5dadd8b77" +
|
||||
"056d06d1bed59aaf7bca8516c3bba6b33e12df2e5ca4aa40664b3bf48c4dc2c57cfd74c765fe9f794f55b5df6ac6dd2b3592bbc71354c8dd9ae41b0a05e1c7c0" +
|
||||
"d3bd1a0ac6b671c48c01d4a0fec7a01ad11040f213461759f9e029c835ca1d22f9a661b69d72bc46e34b1be7ed85a21830fb87baa74d7ab145ac1647f5f2df68" +
|
||||
"671100d4d9e41082d3c81f3b5a6e603bb33fd56c1dbcbdce5e213c651da45d9d1dd7532d9a955202338387af6315137dc458fed62920a0e721aa7ff1660981c0" +
|
||||
"e4c3de0a4863f6f660a7c1b9745ea26036a25cfa37e1337ded405ebb0401d7041a7938800a97a032fcababcc06391a77a580b1a61de014db9d7e280ffa6b2381" +
|
||||
"ab6969ae5cfcca00a47ac2fa05be02aae7beb806d2afcc11dc0642d2a12ecde2d2926efd9fe790e1bee19f9114d22ca42f438ef656a1311e4931ab7fac93ed17" +
|
||||
"3f68ea0abed18cc2c8905bb2d599780690eabe4996e38872a3190fee361df9fecd5906f664106de4835f8fbb657366327871a2d38cbb671df04e0d14fe97e260" +
|
||||
"c42eb07bd1d70514913c7a64a51e405cc92e06845e5a78981fef9822fc79e9937ce0513138f6bbf247f5c457da708cf84e30d083b4ba48d2d43d70e7c31e9482" +
|
||||
"4472617910a3de4369217b4daf892c2c3250d1de0457e88b3bcb5c4568f9b26aa675c551a9a730fe9ea8145ce7f8e23ec825be9be3b9edd588c391295fe31ac5" +
|
||||
"bfc97d2e438ca9bf6551728b3be6d6c6ac064baca763e0eaa24f754f4bbc84a4377de45fb6a8f37150865df18749df1af4ea911b62f616dd4dd4b25b27c7b6fd" +
|
||||
"99d8c00ce8a53fde3ced091891e8daf43cade10086be046ee5607003de24101db49b1a4fb0ac270d05bab12583e263e903e94dab8bba7c785e40499ab01ff92b" +
|
||||
"b82c2e5342dce84881adedf77cab593f541e4c963f4f9ffc80a16bd4eb7f20ef4bf3f57abc7cbd86332d8be80f0794fc82767d13c71d8ee20468ee35c13308b0" +
|
||||
"dc29ebe8c6a81e02ee9a21807ff57e4d932edcaf59ae9e76f7cdad46b32f94a74982f0887d7083c90ff54058e873b10cec67fba1b717deba5356e170dec1a40d" +
|
||||
"36c57674ad8d43c5c98022b553fe060251b994271585f702de3e71fb1c8e36293dd44a4b99a1baf33f6205e9fbc9acdfe8cfdf007224f93a7104e7803454fdc0" +
|
||||
"9fc5a20be59f600ee734847257a5ad62c599a7fa836d1174a6291e61c1be4b310bd4d7b7cb9be976dbdfdd2b99340a9863c8c0e5009165d7097317e6c3a29cfc" +
|
||||
"dc84b19bc68f38694998f626567b80ce6699124b12bae4bb9e661c2484f5109517318341287e142a849d61d0d7b11d4996547e7325f28842dcaed26367f7a888" +
|
||||
"e58c24c857da2f48a9fb91c78cf351a23e82ae443223580a9fe15a6a778f6c13be66888219e3e15971170712b6c356520cc15e4e75167993b66e6f125799cd40" +
|
||||
"86c72588a85f68361f1c2f09e87f9a4de95ef9a3b92c3313664a706cb72916b96a9cb50771f6917ddcf696ce8d7f2525745fb6edc30bf3fdaad66ca5b013300a" +
|
||||
"7ec7cd274327b1b9cd931c068d8fa9fd6336d59f6ac79b84a24b34c47e408b3bcb8ead49428c123922e54bfcaec7e39c4d6ef79e5645a35f715d151e679ef5c6" +
|
||||
"6f86cd013fdaab978ee4e52eea5e2753e693271344a1f215e1c690de06f29c856c469ccb484d445bacb16694f4def1537cdb32260705e8a50fd65e98a24967a2" +
|
||||
"456af6cf90643638999389a35de6e192068fd2e2ec29aced58611560c792ea5c7fa37583ebd5452a8d94cbf1898937dd8aa6656047e6e03f84dfd0bded514a6c" +
|
||||
"b47ca71c2cf1e76f606c04374663712fd96925eecf0ac1c38392390c8cb095f39e1814252ded78b55ebeb9915dc5e2ec14fe99e3a075bd389ac601681f154286" +
|
||||
"885289e568a8646d94abc806b4637492e3a407cde582d42764eef0d56ab14b00e9aa1f64d8fdd533d4314145c8255c44d0c746af6da844d285eb044d57e8cafb" +
|
||||
"ab6c3b962e0177f11a839f4a5c0d2c2e8d5f76375ac115e0a89f460ea1be238f974a68e0693d15790117106c1a65ab5f7aa08e738aa888d5b56be39d2078837d" +
|
||||
"fb2357d86f5be85a9af41aed611b231495564493e46acc90c6a3e67d5b055320290aef508aa6a1896f19cb5633edc0fec023216726e50960a44d81e0614ce748" +
|
||||
"6ccfdaf620eaac0517e8cdeb1095d55f3a60d61dd27d967eb26128b84c9ea8418779e074cee8961c5dac811ce5ee8134d3910a47de7a1344293f5c64ae8f1b38" +
|
||||
"9d6c457dc74e7005c339394f5f24630f5e40cf270640d1e4c27cb6a74fb440f3203026acfcd31f39cd4844ede7e785290878fac8770f930e96c3edf61748dc6f" +
|
||||
"b7476832cf77ddfbe8eb8e12fd002038630301439ef8a7659bb10593a92cb84018e1ec78856f403e1eb9d6343aa0bbd77a63d776f1d12838f27f3cf6296ab0b3" +
|
||||
"b4436f0ec545a5a1e92a5351fb273b3ed56a40e5a5d25e0057f4077bfeba2e2d8cb17a553b157609b20bfb5cd2699af9936f50d823bb59a950a24b8fd15ed705" +
|
||||
"b1628663f0eb5b5c2b18f000ab039bc425ebafb2010e1a2264c38fa2bbd0f77e38eac8acd670565490fd60cac7fd28d988c8dc0658505dd98425f22c94647d44" +
|
||||
"5d0236b97ea58b3c71feee90be0055ce1fabda5ebfced9d9bf5efaeac8408c4b6bcdb39851cfe038d88ada5211de2f0f69e9e3c62453106c366cf0c40971c0e8" +
|
||||
"e8f2a790aa66999a0cb4cdb57a8c2d812e9e4a66df2f001a57e291864339257ec26c9bc2dc6cb2eb5c3301c167e1ed0387f9ce9f76c6759ebe5c68e8be378c42" +
|
||||
"e0350b344acbc8b40c95cee9e82bb43cef5e91a32a6be8a727d5fbe089321ede3abee4da6b9f41775d7e9abc36f6a5d26ab88ba32978b5ea0ad63f0ce8a772be" +
|
||||
"5aa51143bcd00d78bdcbd69beb652139ad658dc7ad242b2057eacee092aab4940d6ff993a8c7d8fbe93c08c93c45d5f3a01058f3c75c94be9da1a19a97754734" +
|
||||
"b713e1ad6b7cd472619ec1abd4cf42f50b0648661c2b8dbe8976037c094c7176090ba94618e1918db44f5d2c367a0c7f911132d9a8b2398b9417542c7ad99b53" +
|
||||
"a7ca48253bab8382a1a24d35b9b9818bda513f4b52fc576a71fa63e72aa8042ee1fc806c6fd3fc16e07ed2caf9f82bd3bb6b393b2708c051c24c2e05aaf72531" +
|
||||
"d865888db06f719314d6094b2c4f0718c151c88958d2d6c8a6f49464f81cc46709dde026f4e05325ea4ca2dddf9a79bf98bff3aa5eb412434f0b7457b4ed47ab" +
|
||||
"85a212e0c7720c78c961d56141bff0f964622d4d3984c1017de6f5846c72fae0c771a819ba6c111bd739fcf16f4b85f8101e7c3f0daefe753ec130a6f34c7697" +
|
||||
"4dc531f83715ecae28bf2e55111778ae42aef17fa95340584cfae3d4599af9dbd10211baf3aafa8ac8a07edf8243daffd6a6486b1e3be4b60711194261e2b646" +
|
||||
"e2667554cc0bb2fc07054b653231cede43154c9002890ca20b0ac81c4788847c6ecf7c174e528f36f8cfc53f3366fa9ce07b1843939cf6d318ed11f7ba6eb791" +
|
||||
"ce25e75cbe37d2ee3d45bea487d969de041011959c0fed4e6c86802a7485fad70059ece14a29b03d4df41677acf71419ee63f1101060ca5e4ca0ab2edd71fe77" +
|
||||
"46c6bd9f36bdbbf0a9956eaaf974f7bef982cd34881abd686fe77b536c85d042d77dadd00c5cb0130737e5318a025e6ae6af96ca28cbd41094d86a85765ff891" +
|
||||
"af825793910c406470cc61be5d9282911d2faf82abfb309598fce0101ca64abe3920701a958c20ac35927733466a23de809afa2bdf331f68c3ab0cfa08b0c549" +
|
||||
"a20e9b50dbe85d22d215d0e5fef854ba271a4c0f95e6abca19018bdd4a042721887418136b4a60cf291bf06ec47a5a4d2f9b29f988733c6bf6f65da5a95f8939" +
|
||||
"fe0f2bab0bdce98569a81f861014e532f6a995542db02b6bdf3169191d300fb0429c1cae1d2dd4d29e0b61751576e04b558d38d3afcce8326c2871e969c1492a" +
|
||||
"8391c0becec29edcf7f038a8093471763db9f13b97114acf7a979f5ba3bf6f990317890ee0705850fb97bfacf306a0ad621b2c3b633af01fc5aa059c0e22ed17" +
|
||||
"23584dde6cf140bd1d0087ca9090ca9f07d3b93c60938af8df976555455bafbd8cc986ba32fc3f15b5962dbb2d37b6ae55a7de0c0c6f2366be0278e26bf9a725" +
|
||||
"f61f2bcb545d66f79261783f7f03395f2a5d27e56af62a01ffcf778c3c686e244bd9b7e5029d1d40dd2250705c6825bf78e83730212640cb5ba54191b61fce33" +
|
||||
"ce6df7721b15662162b631d99e6431efd24ec35639c2b97f10374fa5b9e2ca4231f523195206fb9695ec7721c98d74f29533cf714866adae8edbe8ed2d0969c4" +
|
||||
"9ed36200c4b8b75131e6d1efa913106bb0759aa8255bd6a9dc2b00407358f4523486575b111676730094f46d0a7b95427df74f053c6611b4c465efa5310f760c" +
|
||||
"5ff081e841e5f90c2de35855d45a7f35ce73d7c7f9f61fbfa953398e042c3946aaa4b7a2094d95410b8a5ef76c8b57d49f77311192b3f4578f37bda1a426de7c" +
|
||||
"7cc54b5400bd16bb30cd8d1b7b42ff31c5e3759e3c9a7668174c02bc5a08f1bcc7e3ba145fa5f5c41e48877b41b0ef8fffd0f75c6547047c2e7b7c7e1aef2cda" +
|
||||
"c4a778adbf71257618b4eb3c6dbd8211f829c1d6373415b969cc48f33d586d2678e7c1b441364a9fe2bb426a33b2a132741fac547766d196df3505fdb17977de" +
|
||||
"7853cdcd8d9932eb9452620aa4921b4416f65055d77573b132a40795bf142815b655e670bf2c4464adb5d826a1744c8049d7a6cfbc8a4634e66eb32f0cb6fa17" +
|
||||
"ffa8925131c3a253101733406a2a3a0dc61ec3ca1448623b6295791d4e2d65d303f78038e15d0ef75d823759bcb4b277b51410c37d5efbbb2e3a9e0cd78a8475" +
|
||||
"05d44bb1fed7f72b1bf1a96ad0148e816d34c66b1b5faf172b8141ba007bf2e5dbbfee4b09ef66656ea3cde54f086040d14116aa7f3584ab6773f6091a2fbcee" +
|
||||
"f59d6ea115f88ef9fcb358c87c35caf7c1a6022e141a3c688beef17da5a619e733d854218b30d5edc39b933b19dedd6750acabc52234934b08f930b608a18008" +
|
||||
"838cb0fb73d4c78af0c468d9fa4fc5852135ae91ae00a99a6c603371d09b031ee37f87586cdc83897d8fd8ee2e07b9d0478a812d3f7eddca08860386e3ad9521" +
|
||||
"98d5fc04fd0aba4b3da6ab8bdd9eca8e0399a2012d6158ed75ced5f432a223449b4e3db3fd4b19c494a69e9f2670833f8a88f7b2873319e9495f03fc69b6d098" +
|
||||
"6006e3ffd8cdb9c1b98f72345848deea1b98ff6ef766f4398e642e5f2b217c1a87a608c1dc701bbb79d75a4433ca1d600061836888a220ee262124d145d371f6" +
|
||||
"576f04cf71701133787a97aaa615ca98138c2be1046604d885e2f274b0de8743af50ad5dfc4c3a09164448e102be577eecf77ffaec1724f91f00f908ff6af41e" +
|
||||
"57056dfa8f5dbcade85a66c10e524bae55922b4084407fb36ca8d6b7322f76a8139be9455a34440c719d0db8a36385efa48841170c8d35046407b586f5bbd169" +
|
||||
"7cbaf6819b663fb17d0f0ae89691a099a8ccf47fa61fb6dbb22b3298e5cf2465e4b93c49da70fa76924fdf29389194cc5c61cb4b3084d0851bc3018270d1a24c" +
|
||||
"b4b04e8af927d9fec9ea1c9ce18d4dbe61f7aac0ffd4e7c2e9729b49ed9874b883ec644864c6d9ad0422c4d89f87df1dfb2c96314b6a3e19afd21783f365445f" +
|
||||
"bef10562a26b48df42dc344ddd63fcc03220dbde98f1109cade221027c40f0f996f4beb29513c3979ba374c4c6a2c2dc6276ca6be66eecf1dcb245d6efe78aea" +
|
||||
"e49ece37f87894bef3c0cb1b993d974685564e2476c12c8d8f63a1aaf142fe34a6840be340b64f96d441f4537dff434ddce630101ed9f78e807881f6b7590697" +
|
||||
"bc97e60accd7a135d8915781f4fc22e437145154dad0a39e5e306c117b11deb10462ba74d58e81de7674ef0bcb20b38511991447f63ad906b11abd4ba88df3c9" +
|
||||
"e6931f87fece49f48543fed0439c88ad78f82aabb32dea03d030bdd76efef6b737daac2de2db1cce10e2ec74565b0a609606cbb6aa259ba88715229b8176c874" +
|
||||
"db3fc4f6db9f167e7b2d55b33a261f9eecb69a0d36ecf9ec4f8f9cee5b74bcdc5d77b02ada89f56259edeec0d9ea866ccf454b9abd29d5d21041179912a5c302" +
|
||||
"1862d850c3ff483e09479957df5bde03a29504b4a43e1fd40af2b8a2653a37cae89c8d917aecdec3959fecd32b7fd313a61e134abc15ad008aa993aba9629a5f" +
|
||||
"0af0ec713f742bee096e171729e70530b60f910ef83746a61580f0cc6d67723792c0e0e94775d5b1edf37864a50678d197bb34a97e84d7f764c0bfe05f4b2d0c" +
|
||||
"dc431d1f4410500dbe2758eb05bb6b19b154707c255a97cedc6aec1841f1817f6bcba0b9a9c1d3aebf747bec4423c71309fb8b4ada90dd9f7adbcffebbc905de" +
|
||||
"74ce531403df33457c4d0b970fea5df4f85732e3c33c5b8242b041141a8c51a62f0bb14dbe07b14d3f5ce646d76e87b258e9b62128f9c0c0a8014f2c5b3d3dbd" +
|
||||
"a3a77be6222419cd3fbbd3b842c46c099f142bcd36442961e8205ec5d7fd159befdbff12693953307026f1e06fd57b6447dd3cb52df466f0352cc46f27d1fc56" +
|
||||
"56e06f68ca2847d291421bc9e0af6bbcfb7b3ce07600827809506ba3f96f40ca22766f8cba32d4461488f6596082a52c11e9ac908922075a7b443c41e55b719d" +
|
||||
"9cac9fb587cf02432e1accf3cb7a16de0d5bc3a1c0aeff5a1795680b4551316e3d7b5a9bc63a09c6f75b0f00eb69fb6ef5130c1ec40c7a7d5d6ccce364b74f63" +
|
||||
"a836a4a711027e592d6a70e10e573cc6d730a0def4a7a2d4dfcf3b0aba37fa2060ae6935710191c023a0b8e123a67ee811ed43b5127a1c4cf82d52ad6c40fd66" +
|
||||
"1160f77dc320bbed349c8b6d08b2a7a6234a8dc88e4744b51d2d7c56e02f1c3bea9e6c2c3d5522ca02ec7e0b8160555eaf28797ed30b5c931a73562791f5f0a1" +
|
||||
"b7ce83824bae17de449cff41312bd441f34df62904f4a0265d6fb9b8a352895ac6f0025d6b2074570970b4e679c559d03ef40794708eae36567008d9c33f7fc3" +
|
||||
"5f8df7e901c27f408cc7fcc52631f1178695ea660d07df541e5a32721d145a32e8d32f06301b5073149e8798371fdb1a2daa5e1d02c24da07682a2cbacf5af55" +
|
||||
"64810e479e5966dc6bfe14b4472c42cb154e19f7b8659d42de5ac926224cc6b0d8f3fa797058fd6e21ea85146838c4612760d84e24341825b6931a6417327394" +
|
||||
"0154125254d4e11ac80e475a178605e851e1be39695cdc0781da241f232cedb32a04b1cc7352882fb635162ec3f5fc5004cfa7d03780753c14173ae7b12a71cd" +
|
||||
"b40d4835023a00a4803bdfb6916956ade9f687af567e6f29981120d306084ad061ca1585f0e9497fdb27f9d54cbac8fecff176145114ebfc17e3f346b246ab91" +
|
||||
"094dac0e684a708b45dcea16378fc29683dd033310068339b13d995dce77a50f9ee9cb4cf564566b05ce352a21159ad21e720e05ce6069a5ef4e9fe8ffd28516" +
|
||||
"8356a0b80c4d1da547776f486a117f6f7ff6557edae7d68834cd71973517cfe4af045ad0fecdead68edc8017000958b69410247a51bd9bd3152dd57389f25223" +
|
||||
"d5e88c0d343ab3aeb89b763eeb7ee48b3966fee147a4614e436c9a1a398487c80a001700666251b3dd6a2e5dc96814d21e6498e75811ba4c51160cbecb7d5510" +
|
||||
"62697171a25a6abbc41fd806c3dfc83daaa10d7ce47f5a29ef0d85dda5a61429c637520e6a4048624cbb25f53977254cf803848ad81f25eda07690fe7a0466e4" +
|
||||
"d18a2fd145dda1c94a994bc4ba5ce1aa1b50c38151febee757afceeaa91c7b35e57b90ff7b62efa929dcb962d32dde5a0bc3159524728621a3d7487eb7c3edd8" +
|
||||
"6df3f8a18e590039bfc84a22b23b11468c90dcdc8506187233d8a6b3dc9785ddf6f341709fefccde91a7a0925a8446b1896aabd6a6826ef88b756a9711cb3b78" +
|
||||
"1ab1f4df4d0515070e41fd5b0c5483270307e60eaaaf0b3a48f6bb96eef6141075445285675bb12f2ce38b42c91c1e063400d7bba9b322a0783e7d2f5d3f8874" +
|
||||
"52ceb65bdedaa032336d969d2e0e3007d2ac07bcf054b9d0330f2e26c486c054bfa709fdabe283ed9a4ae67cee24f40f2a5c4e70160e6ceead208ca400959922" +
|
||||
"70bc35be104c9ad94cdbe288b1c599db1758331340c9e927bc9d688e4186d5badd463bd3ba116bdc22a39c604778cd95503ce4ce642041e89bcdeb86fb19ab25" +
|
||||
"e1f94ed2a2f857b228ed4a582ad411d7273a0d5189bf7a2b87a135753e03383033b989ea532041ab9ed397ecb3ce61e01923b3729068f6828ffd12e2ab1d28db" +
|
||||
"6ed7423d458decb00476657a0580b4af3aa5615bf07df55beaa2bec71447aeb39791477dd09349bf573e29e9c4fd454b4bbf1e19591bf38dc47c83bf39babdc8" +
|
||||
"737d69ce4b586cd10ed406426b88e686c11072f04c680e8b6275166e2dbe91f701642b1b4ed92d23d6fe14f39ff7f5a09401f3a398eb4bb742f6cf10aa35e767" +
|
||||
"7e6e92aec791e94f8122e8c9cc9d0bc979e3eac6562ab614ff20330b00d9cdbd08e9deaeaf5cd67b49164f550c5f5c2d7523fe5ad71a2bd03fe2a97329980cb3" +
|
||||
"049ecf6d677d815e56f7cc27407cf73528549ea98265ef90277c14763d5acb3572f5a482432cd8288972af580fdd3418889bfa6a373c4813c4c45e933ea4ec72" +
|
||||
"cdd068089c2a30897dd57031445834de9f23faf506ad930d843b1cabad2c0aa8965d1b5e57032c969f9f55fe2a3049f4e63d5b5c6f5f760da5ba44e3bb9307e9" +
|
||||
"ea39973d05a74a49e15bb71eaecf62373153ca316fdd40b1c64ce2896c95a7b5df970c2bec85edbd5ed84fa7949c08d5ec4d987052fffe357d444e2408a22295" +
|
||||
"6ac1fb272f5023740b381e00dae9f09751a33bc6ad673c4221ce3f932164deb99f1da3eb3581caf475e385bcc56d47a7a1615a9543403750f0121d5482c4ea5c" +
|
||||
"94fa3428178f6a4deea08d754ba2abb3d1aa48c3e06f06ffcdf4571579d398cd991e60599e9633fae6a0c07e10e538aebb7d33aa127c830f14b083728f6ad7eb" +
|
||||
"c9a60a0ba222f47780eaa82a21393a49defee97aa8c3aa2fa53a2af86059a7587074128c2fee7060f398ae70b156d53aba0bf1af4bff10a966ce7e6382cec4b6" +
|
||||
"054a8f6b9ef0e8729ee182f86c862f9b7d5ea36ef7e15bed10ed41b25738c380e58cf42795e3202749074fe5cb6e8fcb49a116f54d84734a834609a3443b8b42" +
|
||||
"97c05ced428f5756ba59bfc1535bc7e16d370d81b72b1f3f78ba75c820b22e485dc042e4f38e93cc2918a491750c92998f03aee571cbe9abce4d00fdf9801f9e" +
|
||||
"8e0fa276822e1e5349945f1d337e656b431c48c1a2e9d4142ea14e9427881bf201ad8cd8effaa6fe6a7e07c8112299db1b327a0cc34c9fbd35596f4ee25caeee" +
|
||||
"221afad93ce7df64aa6ac766cf4fe1660446dcbafdfb86b4e0fea78c29c3e84ce42da4a503178bd250a6dbc4fc65e397062229001da05d5be118dea7ca5ce67f" +
|
||||
"b4ee07a8b01e408aebef2c913434921df686a242b7d015a559f9efdc54ad62d7f31ceb72463041843d7fb60f948fed03ff143fe24ab81bd4ef6bdabb856ef1b7" +
|
||||
"174cc987436322271bf48423114e05758a08cdbf300931fc7e950830b7ee920f7033541f1db9b0d2b91cad80d06c049b05fd0a76d6dc211bef2a08d53b1d16d4" +
|
||||
"2232fb263941daac4e004542517807341bde98e9990a97739ef86d66c7a51324f1f6911cce4c3db37ebacb6e58eb02d8f7d6ea31338b56a99649c4e730a01bca" +
|
||||
"deb6fc87cabe00addf1bf76b83927de26bc2bb3f0cd5945d863b0c31cfe8fd4b60462000a911755cbecdb6a98139041d52df498aa99aa3876836ce5b5bb426e7" +
|
||||
"c22b5977902e0b3425fdbdb8f44e8758b207b469c3e5363f552c89fbf778e95e8b7ff6566ab591fb68a8bde38d8169c708a321b669c08d9ecf1a06c5321bb1cc" +
|
||||
"9c8a585b6381645edfbd1ea4a2ad7e7eb8be6c431958add393c0a257aeb283644c6fe97580aef613f1b9d83e5b009f7a4d059025c11e0a0a67801be511dc097e" +
|
||||
"4e7c065684effcafab83e0e716e2d0862e83b295f82089ed3ba4f6897c8d8eb2b358231f95eef840e1fe22e9065de2b3dfb3633e2968135756cd9c109e8acbb3" +
|
||||
"172bbb6680c2e4fd69e179916a7849315c9f4dc86991d75cc6358617454694b3fcf2471ec7fca6ea2d99f704b9aa37a25a3b3183c5e32e3711346ba2336d6001" +
|
||||
"489afb9cbd8822dbe4f0323ebf7cfa9367b6548213d473c0f07b1bb6d16e1c66fd2bfa1ca623e03149fc81eb6f71c12e7b4b76ca588548bb4872469687f334f9" +
|
||||
"7e114a16a0a58ec70ed74ef69dd96666a85aa52d1ca812235796d90b9af4296247f4c1ab632effeaaef6acbb637f1aa9379195b3b668ca541bc6eb595bbc430b" +
|
||||
"28adc5d1a26fd4cc2239516ac9ca9c0c028110926a2f88881a5886554c31539f4c8260e16364f4ac27710d2becdadf573f4a2b7b55d76ef059432c91c6f5beb1" +
|
||||
"56686a620bdb4aea50df564cc0c5ccd8a93c454e06b8969a0f59d63ae5a29105149c08a5de65e87b0dc445dc5d86db8788ba77b83e22125c69621140d7f17906" +
|
||||
"4ec0157a877cc51ce3c0d565bdf6c884f69b0ca631d6863770f6db30eff847e33c8b30d5714668a38a09f454ee44ff2b7f97207f10efcba74325378f6f272ef9" +
|
||||
"9f09c501c12bd0a4155f559a604204b36751ce8d4c0af35a8b445a9290c305d5d3ea21f944e31df9a711ee90bd16a37850e2a87c3bd3fdecdc6e2f261a5d6d0d" +
|
||||
"580990fcab9228cbb39f8c79608d821ce27c10b0ee0b5a96474759f67970cbe03cec9fe594765bc935abccf867b9717a4087465c8604eae89497c8ffae7e46f7" +
|
||||
"ade2848916b54aa796809cb98a4364b7b44c17944dbc408909a92d4cbb24a514b72fe8de7d1cbba0a101973fa9b29d97dcf1f4ed8a05d5e0cb38849dc6e2d041" +
|
||||
"16892ce649e0a553a727bfbb1d5794a059d6a411e43876e561d83bd22c054680cc8fa928f5f4be2d849f02ddf9c6d11ba35810b81553e1938ab013663f6ef35b" +
|
||||
"08f06260932d7acf99f57967eec57a61f03d880c3225e53102a672f5842da21aaaee02444d372ab8ed7096235a4926e3288912d9c736c2c4dc49918abdfdd6d6" +
|
||||
"d0df5be0133abd61b02a6f008909c5350f9077598ab2e612603431bddd3826e314feb280585b37eb89e597f7f0bdb738a9a93d9af224659d50c8f7479b240487" +
|
||||
"76c2a960eb18923fa2d3b31b3d20ef538759cf22f5b415d19bdae689f2bab651d79ff99f77a721bc1b2233da12c12be0c9881ad82fc97a6343b3af8207dda8b6" +
|
||||
"5c600644d741b8a16750964e341e060260c4de26f991f3a1f6a606d1153565f1c9cfee58eef327edc0e9cfaa206ce930b191f521be2344949bc75d583a413a96" +
|
||||
"ee4edac424cbf9bdad2883c96a1306b96ee059d8044e3b7af4e7138697f142774ed6409a86a3c6c456600d4e405e6117ec759f4b22d7e5a185b0f9c67ad987bc" +
|
||||
"58d2e8c929d4a487e5b77201d7c1416878e8d63258b2f58727cb631494cf1d68b99c28493b99b0409ccc1f9c218a2b95c45ff36563f0045ae5c3098f641ea6a9" +
|
||||
"b48a3e1489831b2d176a1e0cb2afe6bd8cc5e797de01391e47e798c1aa945d33d5e7dd607aa73c9efe93f0646adcd7e211303ac7deea4d02c80370e8e867e2ad" +
|
||||
"9942bfd5a66143560a1f59e5be1f3aeecd7eab689a4a481aec78045ae0604f69d9eea550152f6e2bc692529357b509d60e5a497bd94e63dc698cdfa2a3a55976" +
|
||||
"0b2d072a2fe9c1fd41f31518aae0edaab532591476a9c5a61cd76937575cef71ff5dd66e158e7820b4b6bff4067cc26ee9aa66f41b80f078645b920512b5efd8" +
|
||||
"88b3644601a72e3f665b9c8f0ee246593667379b8fa043718f2d75c21d2a11640c328971c32d5743c11ada6c95cfabf1c6b66e0b09342afc899e1f272ec48a7f" +
|
||||
"ba5a51943763bf969cbac879363e14dad1952517d8f4b463511adccf25e655bced7cd9666d01dd4f2a0a21729ac4f44970c9c478a995d1c3b358a244110f1db9" +
|
||||
"fe6335685701e0c2660ae69d33a93a75e44f5374b979a5af140200db43ff612be2728548192ebfd0a3860a9e135b910fe3fb249926d334167622bf4123bdf0d5" +
|
||||
"38e9ff2a3bb67a44f8407328e3c94b47d92e0401aa1db85459967699804df245a7808f972683afded9cad8fbce15c1be38fd10c62c7abc302eb0537d5cc573ec" +
|
||||
"245513a87c1a8d386f7ef0c4a91ec3c602b14a14ae395da13284df3391b929c7379e181c5d3d4597e3c955ef6e3dc2fc55890df04785bdd4e3fa35ac775f44ef" +
|
||||
"9d7813cc036d6bcc316e869eeddf7b30e4b837e9285eb20397b4d7e0d12080c502c750268bcd6ffc323cb094afbe8304ae840d37be833878697f2cf931faf06d" +
|
||||
"28dc6c7e1b5df69327127b47eddd0237f1bb5942ee5903d8cbfe1b11484199e90fe7c8e7f2f725deb2293630bd8c8a377d539736e2ccc2b90c08b97abd8c5ce4" +
|
||||
"ea91a6219ab06c61c31eb48a35587b3c1719f387bd8c2063c5a79d041ca8a9ffac2e3c728f74efdb74ee0730f84cb3a8aefff7c8a1b570127cfc93eb6d3327f5" +
|
||||
"ba7f886dee8be0548f710d6bfb18cbe5910bf61aed2c95028006f419241d968933aa00bb0760a41d2693465827a00837a84cadaf8a8e804d175adc5915c6cb6e" +
|
||||
"fefb2cf70db063f2f3812da17586436c176aa0a815dfc7983eb88bfb1b6d1db7ab119cd3058c0db4d1910034f70f6eedfee8b742ea45af9780f415be2f851061" +
|
||||
"313a218ad48e992b75afaa07c33ca47ee0155fe72e13d7e5736e512c5e5a45d351f7c7902d8b0fa31b34569a9aea31b018d63d572a9898c389d07caa427f114d" +
|
||||
"251263d56cf5d6663159c2b32683b266fb909ba9d4caadaeda6700c03b25307cdea597a3287fd76082dbf33f073482872fdb494b892112c594d7f265d2799b5e" +
|
||||
"5ec46a30fbf1557fa344a664a7af457a4e8ce2c014a270215d3f95d47a42d8f86a61d6d6b363d04a99a0d8f06c5b15cd803d951aea0ca185a807ca4c677db789" +
|
||||
"fca64f0c5ba95b8c64f930eda658f9f773a9e1c8669589a7d98ade8dbc2c2c4cbbaf6ea2bbc6e762d4098f4db0d3f055958ae9da15ae57ee0b60fb9513dacf5a" +
|
||||
"d65e34613570186acecf9e165bfa470aabcd35f22620497fbcabf220c53cff84eec12cf9965297b364f0e9122895c175d213fc2a9c9cbf27ebe1cf96fdacaf1c" +
|
||||
"1c79ede66cfaa5057d33e09b31b43869812e5a0ce730663c18c4333141ae9565e437d99ade6b2cbe005214e8b3392c55bf4d7b38ef16e7f84b4ba3c85e1dfd1a" +
|
||||
"ca8da1a5c75fd190e7752926533327880aed1461c7e9de2443ba0a2d094f4a14d5fffd3b102ea78acd34d162e82ab78fbb82bfbc8a9708ab532aa861643c39cd" +
|
||||
"2bc89f2be53c583f9930fb2da14f1c5d5f218384b1740a76bd8b7ddd2c9888c8d7e7e78cc7a3304fa41995c7c1c3316894296caeeb9711f0e6bf16abc380bd41" +
|
||||
"10448be3cb03cc3246ee7b9559c858307001033c84ecf89690526544c05c146f206d4a21e710597bb57759d232154a1f9d88eb3f3440374bad1e901da7a154b8" +
|
||||
"39a6d1b1b6b2ab0be872ff036a9f9f769a169fbf91bada732d8f28c453b9be49011b211155fa5c588b43018775f99e3b92b322a4c41282326b79fd26541ccafc" +
|
||||
"c0e2f09797e3217fb0e5785b72e654dbcde8ba14b2d56faa2218748c6789c158bb635d43c9a64690b004ab70f457e9fd959b2d90875966968c7ac44b103283e7" +
|
||||
"50b60deeb1f89444aee25fbdb7fa3a96d70c3dce38246f111e466cdfa3b807c54ed584f5b1a64456e923dcf37f45b36cea3d602ba3a55a4fd883ebb6dc198650" +
|
||||
"b522461614656897b9b7d408d48b12e594af06c91f715b32a4ed65a379f0ab461acb9b8b20d1f1b12e9f7fea422c0c7d545eff4152e06002cbd120fd74b483d3" +
|
||||
"a0ee30cfd851c98e9aa8fb19b60528de4a75b412bed656933ae8ab600aeaef5befdcca4d35fa472ed38ffb91a9017c19c5d500426f262ba379034c45cf5d1627" +
|
||||
"48da223207721b4bc4504b79309f3d622c53dfe3c83ff8866dd7614a2e90a85c077b2e18bf1cb6008f0d785d6a0ffd5f15a83a343036f3fdd25314bfe47b5a12" +
|
||||
"58a7c89475f39a58a671d0a17f6fd100a8928181b94d8d53149316d5addf14bd398b538e2593273f02cf296fd73ff92d02230de939dae94e03d44ce93dd4dfa1" +
|
||||
"b9219fd369c854ec409d7bf94b316e5e9c16e1ba525a783e24bd3fc0ecc949be245c402efae8ea77aaca74c78703506cfd5a5a614793e04c76652b4f344f79fd" +
|
||||
"f2da1e34f650fc1094116ead723813d204ffe375d20707fd94d90f21c009194201c88d22afaee83a8a6be7518dc915331b863664e033d397c64e1516c0fd9324" +
|
||||
"11614a1bdf2feb86e0d0ae21e784a55086c596c7eedd44d3afd7295455450f507f1c1a33c9ba94d50931ec054d8740510ea23990c266f30678a74fdd485b482b" +
|
||||
"cbfb4070e3f10b66c65a4210794a3137adabe887ffb9bcf2a30c625138f840b2666610e76e5a0abc183088a94930c025836653eddbc440621bbf94761c74e108" +
|
||||
"3672c6a914a753fd452e8e7a02c54b21d7bab4b705b4509b9b5b27e2e5144289eece950c3634b410b5e3cf8c5a5f74d98b55d17d45d7014390cf696a7e693777" +
|
||||
"4c028517062a69276910cf5f139078e8ef6e77aa8b35aa55fd4f53e48ae6b4875d1732b286ffe8bf852b73af7b964fdf1aa4c4f16d9f14485a2b1a704c2615ac" +
|
||||
"8ac74eaaacec7e8e4e506e1b418d377e4d5a271dfab47b3d3c11a809beda596fdf37935dfe06c147dfe7d5be696ffb2a0cff907d1eb2a88477c261d5a7aba06c" +
|
||||
"d70dc52d00b9a9d851e849f86e1cba91b4c40d1ae3d4f21a2763369dde34d084adfc09d2a6cb5f09114cd8d6fa26d15f1ec428adc245064e5b8e80f21b0b3ff2" +
|
||||
"6690398d3080f5355fc082bc4bf3a38576c7da00efbc80839dc9a06fab2b998a152553c36fab42e03e3e4b54456ed954e53bd63902d89e2617a263e70146d1eb" +
|
||||
"71557baf43aeb0a681f600a784778c895afce26fe17e3ac33990c54cd96fcb2432de79d4f95ab2fb96effdd37f4e4247ae5b4c1fa461ca3269d45a90af090333" +
|
||||
"fc3ab5152bd5aed4445eab93466701382ba76fc8745abd911bdb45a494e1c62647670380c04377bcdb5e631318dfa79850469a988094acd48a4110bbc7289617" +
|
||||
"ce436294ff242302d154ad75437ae2f551df5b84f884c87497de0bb2ef7bd41a8c758e4b158084c78ef047389d88974faafa00ce7396e849509d39c403fdcca6" +
|
||||
"8f47e1d0fc294e5510a07af24c165e1a4b4ba9498e7b333c4e8624c552801079775fc684b6e98b72ff133164a2052c2aadcef168d9cdeab8a935c98f08e23b95" +
|
||||
"859277381a2ce23ea61fbe9ec1439a489523161ed370b0069aa6a5c7981e4a80c04e304ff2fd85f80b51e3de3484b53084f376cc72a390aaefc49baddf4d2545" +
|
||||
"93dcb5a49326c9c15c3d1c0e0709c9879d68bee07b956d018a995bf1e7f8fa03ef2079d01e0bec601519704cced98854c94f1f0ae837653f14c0221e12f2cbdb" +
|
||||
"1564066062bc1d4dcf7ed8b2c980b90e8101842d5844375cb370f402d858dffd9eb52572f8420d4d246462230ca0dbd567250e4f065730a6aecbd804b1acf949" +
|
||||
"30e2890a39fdd4c1eb693f7e345504dbad5ac207f1a649968c3a7b416bd972b6a6bfde04375337a93b0ac08f6fae62c0fa7df8ae9deeee421f7ac62d8cf5ecf3" +
|
||||
"b5ff39877ee4abaeb9db03d8a8f13f7925e54267a2651c55ecf580d5cbb24bf504fb01291e3e97ad1696ed995608fceda79f2441ca67bfe3c31f4f4bf0fffcd6" +
|
||||
"55408744524412cd4d3cbdbdd216694aa7477e88b25f7efeb34abf491a50695ff686829a8fea9e999877bcb37291b8dbeeddfd44465a2c28a215aa532590c487" +
|
||||
"d4747b6ece4e1aeaef725cb305d11b965a9647bef36a5c2fb45cc334d35ff4e308cd8813b6de3953b35a4ef6a3ae07794f8b54ef6365a573135320612bd1acfe" +
|
||||
"6cac5524c0e98b6f2a33a790b94f5134f0cba075a6fa93c191f4176ca62ea2e365557d6b3363a17b9ee52f3c347c82cd19f8432d16a934ae9c5d4d4505e7d20e" +
|
||||
"1ae31bb64ccb084f7a59744b27d58c2388d449ff4b63604878ae858240348ecfcb51761678265bd60d5dd7d51e25e91668fdf80f6b726b29ef6c3f0f229d8af4" +
|
||||
"b2cdadc3ca7fbadab49b28819b9c9b92b49cbe9a281e5891f4eae7616013777605a0623dd7a650baf9a9dad66ca9aae3c76ef1e27db32bd9514a2776eb0c8d05" +
|
||||
"65eec06fc4c8a69c417efa336842e248e5a51e3b5f3ba3227e3f78f1bd12d81595e03a01f4259c772fd481ab5f3d7a945e1c95fe0dc3c4742eeb7e15c9426ec3" +
|
||||
"ed4c90ee07d56acc78fecfd7c5ce1e04e7db1a970091f15c90f0aae2865d135395d27787aaf68c6a179064d82691e0b6c795f61875f317dc6d2e8feea55a28f2" +
|
||||
"461d74e14e350351720b6f536adfe3addd4111f08e3a84da2656fd4bc83989b329b383da9f01cf2392aa0b19577884d1281f2e6c106df451c078a472b36057d3" +
|
||||
"065dfc4bbb47ce4e5dce4acf6da095bdd10322f3ae12bcdd1f805e73b303f1fc7a7e16cf3ffd822bd8b25fbc93be065019e394113182713f1ad299ae6537f6bf" +
|
||||
"57116e8dc9ec775519b797ab4107c2ac5129ba85188852c3bc5f116044bbd8985b6dc8b8da4659589bf9d2351c4c3adaed87fe2ea20ef6bf62224c7af86fe8b4" +
|
||||
"973e558f39465dff43bf23cf1f78957514e4e82a3009d40d97bf8d8442a11deabde806e2fa84c1ba75273da75ce8dad3b2a34786b2958ac4bfd248ebe604a173" +
|
||||
"83c727b11dd922b1f72476af700b663fbd7033d0ac74b463d40a92d26c938b69f96fb4a9cb7a9ca2bd9496251270c0c5fcae6b3c2eda5377b897891648a97125" +
|
||||
"8ac71fed8dce8e02c30961a299cb7f3145845dbe8f4dfaaaf4baf0ca3fb730abdd258e98215f072a943d5aee8d8bc4c86023524f7b69186d99ad88ccdfc0b4bd" +
|
||||
"7db422bbad7eaa0824ce24b5c186e172c8c584f1cc5c126c901a69ebee8dbd230a653a3643b7875672d22fd86079daf8d834ba33664f5ad0b6eec767b4f58b45" +
|
||||
"e67b776b90e0a5e130aa5365003eb7fd78b757b1cf9133f6a1d51064b293cb42c8c41b15b7e95e2a39fa5dae19c6e20031d2bfa4632c37779163fdecc6b45624" +
|
||||
"4d6bfd01a8877f6fe7739591917a86e7dd795ad85cc3f256cff5961e8b62e92a0754a51f2c6d59819446eec8bdd08b87cf9f4fb5373e809d52240d2dd691cd50" +
|
||||
"37fc79d35b61d63851917cfdba164868a3f79e2061bd4610c1f5216ed77df00baa75f949ad37142db4fd282a5c7d2e2636ca590f92fc4781d4f51efa69f53947" +
|
||||
"d4fca1dc7dd2429837b6d7e5c9528effdecf6f731f676587785e5c4096bdb6f1f44e72f5f77d9025813e848881506f65bfb0f2b2d3ae6f9e00731929b5ac083d" +
|
||||
"b1c9c324703e63fef6319e1d8150aa0ff7d9a2049961df9158f3e1f2e540a91feb742625d2a859a452186d2ccaa3ec2ba086ee0868a4dc24ae6818fc02f9c1df" +
|
||||
"dc326cc31c46feefda97265238592f638968627ec24903b97513ab05ed58ce5b516decda0e2fbf01a70e6cb2e53c3e7b8855f80cd7e007b78da727ef0893e099" +
|
||||
"592ba684d62ae2d1f06ad148fa7f34cfc724d804149cda21aee7eac064ad20d29132b260c2c2867fa6a2e747739fc30df2f002c2a99da6c7e64ee51e806af7d9" +
|
||||
"768aec456b93a05002666cb61b2229c99f2cdef9afc9cea1c4ee3a85dd189823399781ee33cde2abedff09c47960c035e075a29156005d75845a11fa06abcc50" +
|
||||
"5f7f849a0caaf683f334e9e7bbbef90fe6cf94cbf87767219440d31713daef6ad1e4a1cc720ce59fee4cf7731e46bbba9ec1648908ea345030aa8f10ade10ffa" +
|
||||
"3d2acfd480b0b11eadc4fb2b740596b204e911456cb2f35ad9993ab7dd6a48b35ba0c207625384bb3c2ff24437810bd13c7ee96cd6f97f19ffd537ad182a3657" +
|
||||
"b0e83d42fd6e2ebac6cbf5ea1bde97465b7cec6954ff5b5be049e59a49ea25ed6667dbace765401bde12031e5cfabf2df7afb728d2a0b2a38b24d79bf23a313c" +
|
||||
"40fe5adef97487641c6088dd8712c0c352708e474b02c08fd2d71b6d44f16d82f291ccd61c43a339408379a8de54cfdbbae5e421e084112fbc17fb5561e084d1" +
|
||||
"4149bf4bb06fd161878d8574f856867cff974d5898e161923d55bdac8699c9df6a220bcb6c800d3ae7f107b8c4acab206d780aafaf6c2e2379de8c900700d9c9" +
|
||||
"c87d464772514c5aa3e5f5bfc00fb54f2b74702838b4731c5ac8a070b50783e81dd97fa8d55c739d026b607a2a78aba1bb79b1a7a3c22c78368672ac020061e6" +
|
||||
"d9683d57d6989c6c6f08b8d5d74720f5cb25505fbe81d2bf53a68e972a54784705b20f83fd1ab5afff30764ef89dba4465b56f48b325ab3810bf8dcbf4faa61f" +
|
||||
"676e2043ac8540df9e3af4c0f51d816e89c09bf67253be45fc5f75f64be97f6c7dc0c6392af6fa8e75aab58eda976b36773cd37d315771400a2cb846fdef3d8a" +
|
||||
"a15bce5dfda0379e526f87cf67767a2ab93d41c85b4ed016ed0a89d2f94737433a3ebade813def29eaa18a1fb925fca7d08d1020f64caebc562cb4ad2fb241e2" +
|
||||
"94923b2f2df5e6c4953c4b73be0f5568defe57ce49d16e2a205323e46cbb5a3e77fff1557671503bd7b5de5320f1fb951fbe26400cfa854af2d12fab0215310c" +
|
||||
"f070add34dc4565d1757d7e10a03e3bb73a607ed7e10861b1274ddee76183cf7e56c1de7162c805c2dba0e0331d36f3a4e2019a2e0705ee2747ed1e52bc3a6fb" +
|
||||
"3b061f784348204cdf8d643ff6c271fa72b56900edcc2f77201f3bd4fc296ad6534a7029ea66761bb9a3589a1f6ef566504c70297b98fbb603214fed2e4b7ca1" +
|
||||
"06e3f0e993118897fa641fb9722d4667fa98d07a6837e5ab2144e5ec1548a7dbca28c559f2a9a99d54b8e55f56d0e59bcef1ac45e2046835b60579da0d2261e7" +
|
||||
"30dab9009d138421c6458d146e870358b0b3fa20257e50b58f167c6b47edf7053513d58f33547d06ce52458baaa4dcf15f77b103565c66a81f183c827801b455" +
|
||||
"b61b6287a46a37a96884075a7eada9ba7f0ddcc14654bf87a26d2e27a978b415257773796923a220e06b25af16fb5aaded9b2d081a4c64106df460ddce9c3b2a" +
|
||||
"c8553e1521e501ad29a4b7f7681c9b60576a127087a5237c4c2bacf9b163dd590e63f2bc80e7f1e613773f87d034313064710404739d63363d204be7b14800c4" +
|
||||
"d8c1b6a2a21da70223be51d281fee302ef806454f9d7d28244ba537c1d9f8f1bcc5d47038d986a8f95ca48437ffe94fd44a90bb03014a259112a97508adb3db4" +
|
||||
"34f72a5268c1af6bc6d5801e579aab2228ca33600ebbf1a1959081c3a4ca99e444f97409f5e0ca4779241c9aacad1f4ee7fb4369bd6ae076378e4f63000b9a5c" +
|
||||
"849ba6e72e47e2454a44659149338ac0767cd25d8693c0d143e354bd600f1c1d3a44eeb024923ea659060665d5cd9a4ca1ca86162819556535fd59b9fde90caa" +
|
||||
"29920efe99479fe7e4b4e5371e13ccb43a1419cf023433239d840900d31bab37fabc3fd20e31bb7dbcb3ae8df66f67e2844944bcf544b658364f9e3d0b6d84b4" +
|
||||
"63ad4eb621644fd7d774b501407a1178814b15149345d551b474347188067db2ab4d7f4d1abd3027133039e855e129f3f5649550da8c04fe2db57cb89bf1bf4f" +
|
||||
"72eb35ccfe31afb92f6136d4c2a1c115b07b721b2da43151f11c356256230408979c5d95243714429e2c9500e7b043b20dac8a9763e5b487d1cbdb34ac379b9c" +
|
||||
"6409454c79385b6e562459c4fdaad1b7f9297c1473e9b90fffe6d1c5390e241a187a4cefa2eb0cb0c11f4ca6c5b961c18ceb57892295290dbc991692556bffa3" +
|
||||
"b8c405cf285e6bcb8a90246ad0ac15122f4cf73adc129d23aa2240733404beb6d74bf698e5589288a522573c774ce9f514b5d5c086382ea1dd4e89ff5facbf23" +
|
||||
"d36bc3d203941e17747ede4b82820351f4df278ddb787ce8f6f1cc468ef953399efb072ce706e253f1bab76444bb70be6443cd0db633e958dc57bd223e00418e" +
|
||||
"915a7c2e4d94c0623f9788276480cdfe798387d35e2ea2d304d066aec7627794cdd4200a44208d6c87f242c76e2d4a3f966b6fb96eaa63d892c1a177bef249b4" +
|
||||
"fdd1a4c06c791f677dd9919f739ccf318bd77835330b0219786249c9c9736161dac771a838724f2dd70afba46a6782fd27601cf8a7126ae95a66e526131a68d7" +
|
||||
"7a809e513533ed8021eecdbc5851dfcf95e10f1bbe47b5c7f079275a1837836245266b66d89fab25ac4bd6c1225560bea3259b67bf50a58ee056754d574da79e" +
|
||||
"f9a1a0df3a5defed0f74fe74ce0bf65a04086f17e94a8451828c723c97932f26f9349f1a2c7866c617a528602721de4f3cc8916bcfc66cdc106bafa26ea87a13" +
|
||||
"94dfa37e396365fb7f92df007b46a50ff04c7f85bfa679230ebedf18c2fb876fc7098dd1c4328adf85de71c31d94687a308053bfcdc158cfa7772170fbed63f5" +
|
||||
"37dda41f65196dfacdd1186b5de0f3369d841ce6502192292d05a19ce7464f5bcab3015c721cac13ddca561b92dc1ee25d3068dc1945a1b4e2bd1e6604c42e4c" +
|
||||
"3c04b490f6365828957990007394557854a903e19feb06906e41cbc8766bf37bd7dece90f4cdc987709b1129e84bfdc502543b5bfa887bf78553a5ec10ad68c5" +
|
||||
"d10eff75f7aa495e7d934a55577fdc0aead31aee4522db0259d7d4ea8438a7996d80a787465a2980457193d1c4bf1a0a1e01741d72e5fc4dfe58475c1c01026b" +
|
||||
"5a3bc973b902280753e9c3226db9cc778e2506c56ee86ae85b4d54dbf05394107329b2d1ee56522cb1ce562fb1aa4e592199d9c29f64cc3ab1d757531e209eec" +
|
||||
"aa138d8388169b5e28c45f5aba267eeaa57f69869f0b6855d82b0eafcde63895251f41e8e676a0ab12ef3f569bb7de91b79fa46ad9637da01ca004f4d30259c1" +
|
||||
"f5b00761f6ca9c17721a6718390624a10a11f7f52d7afb71ee5f8338828910e48f94a1347761abac87897b2dd0e23f1d325aec5031ef58f2972e8b402e05f8c1" +
|
||||
"ae7053a90380a1ae0d4d06645548c23e13afa31aac8ff83b10f8341418af4114632f6406d6e33076391696c9161d63c8bcfd1c625fc737f68198046212d1638a" +
|
||||
"d2d2d42ff7029c1fcc682a046edc4d4f24862d82c600180b1e8f57ff6a3865dfe9274f9886d00efa523a1b3b3757c4489200fec3dc5583854c955492336253dd" +
|
||||
"767f2a60ce3d224afcff9cdc19e9b28830d33affda6af99942a8fe39562055f3e884fd6c1ebc1908ac159061f35e9b0da80434ce9673d9c6b87265170077c670" +
|
||||
"743e37474d7605cd01c44af600f16d9ffaf24acf87fbe5ccf39bac41047a810d210051c87f06147a0bb8f1427a406700483679638f1af23f1dafb7aa0c468669" +
|
||||
"71c3a82f535c26cf6cb335e8e915fda393799d3dbe0e04b907ed3612d12ac95783a6876cd986d2a13b82192532e02c250eaa42f891d2481655fa4494c723fe00" +
|
||||
"87d224444245eb5b0eade5f741b025db1992a8ad0dce51b0c1af4a18a9e244f9f755891adf0f19179c7baa6c32bffc91e0b03c4ed3aaee1978b6a1f03b87ac6a" +
|
||||
"fc3b9e7030bb212b17de198edfccde29d04224798c1204e47ea235f048724fac62d637d1ba0ee3922048fcf79c746b6c0c036d882e3491fd72bad6e009c6403e" +
|
||||
"55876f4d31330caa02aedd0b0c121c3c41e736853a08071f0dd4ddc7412db0bbe274a9ac2932552bb37c40e72c2ef1d7cca8236942e480d709d3ea9d5ae0a1b7",
),
},
}
for i, tt := range tests {
cache := make([]uint32, tt.cacheSize/4)
generateCache(cache, tt.epoch, seedHash(tt.epoch*epochLength+1))

dataset := make([]uint32, tt.datasetSize/4)
generateDataset(dataset, tt.epoch, cache)

want := make([]uint32, tt.datasetSize/4)
prepare(want, tt.dataset)

if !reflect.DeepEqual(dataset, want) {
t.Errorf("dataset %d: content mismatch: have %x, want %x", i, dataset, want)
}
}
}

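The loop above regenerates each test case's cache and dataset and compares them word-for-word against the expected blob. The prepare helper it calls is defined earlier in this test file, outside the lines shown here; a minimal sketch of the little-endian decoding it is assumed to perform (prepareSketch is a hypothetical stand-in name, and encoding/binary would be an extra import):

func prepareSketch(dest []uint32, src []byte) {
    // Decode the raw expected byte blob into 32-bit little-endian words,
    // so it can be compared against the generated []uint32 dataset.
    for i := 0; i < len(dest); i++ {
        dest[i] = binary.LittleEndian.Uint32(src[i*4:])
    }
}
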
// Tests whether the hashimoto lookup works for both light as well as the full
// datasets.
func TestHashimoto(t *testing.T) {
// Create the verification cache and mining dataset
cache := make([]uint32, 1024/4)
generateCache(cache, 0, make([]byte, 32))

dataset := make([]uint32, 32*1024/4)
generateDataset(dataset, 0, cache)

// Create a block to verify
hash := hexutil.MustDecode("0xc9149cc0386e689d789a1c2f3d5d169a61a6218ed30e74414dc736e442ef3d1f")
nonce := uint64(0)

wantDigest := hexutil.MustDecode("0xe4073cffaef931d37117cefd9afd27ea0f1cad6a981dd2605c4a1ac97c519800")
wantResult := hexutil.MustDecode("0xd3539235ee2e6f8db665c0a72169f55b7f6c605712330b778ec3944f0eb5a557")

digest, result := hashimotoLight(32*1024, cache, hash, nonce)
if !bytes.Equal(digest, wantDigest) {
t.Errorf("light hashimoto digest mismatch: have %x, want %x", digest, wantDigest)
}
if !bytes.Equal(result, wantResult) {
t.Errorf("light hashimoto result mismatch: have %x, want %x", result, wantResult)
}
digest, result = hashimotoFull(dataset, hash, nonce)
if !bytes.Equal(digest, wantDigest) {
t.Errorf("full hashimoto digest mismatch: have %x, want %x", digest, wantDigest)
}
if !bytes.Equal(result, wantResult) {
t.Errorf("full hashimoto result mismatch: have %x, want %x", result, wantResult)
}
}

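TestHashimoto above only checks that the light (cache-based) and full (dataset-based) lookups agree on the same digest and result. In actual seal verification the result is additionally compared against a difficulty target of 2^256 / difficulty; a minimal sketch of that check, using only the bytes, errors and math/big packages (checkPoW is a hypothetical helper, not part of this package):

func checkPoW(digest, result, wantMixDigest []byte, difficulty *big.Int) error {
    // The mix digest must match the value committed in the block header.
    if !bytes.Equal(digest, wantMixDigest) {
        return errors.New("invalid mix digest")
    }
    // The result, read as a big-endian integer, must not exceed 2^256 / difficulty.
    target := new(big.Int).Div(new(big.Int).Lsh(big.NewInt(1), 256), difficulty)
    if new(big.Int).SetBytes(result).Cmp(target) > 0 {
        return errors.New("proof-of-work result above target")
    }
    return nil
}
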
// Tests that caches generated on disk may be done concurrently.
func TestConcurrentDiskCacheGeneration(t *testing.T) {
// Create a temp folder to generate the caches into
cachedir, err := ioutil.TempDir("", "")
if err != nil {
t.Fatalf("Failed to create temporary cache dir: %v", err)
}
defer os.RemoveAll(cachedir)

// Define a heavy enough block, one from mainnet should do
block := types.NewBlockWithHeader(&types.Header{
Number: big.NewInt(3311058),
ParentHash: common.HexToHash("0xd783efa4d392943503f28438ad5830b2d5964696ffc285f338585e9fe0a37a05"),
UncleHash: common.HexToHash("0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347"),
Coinbase: common.HexToAddress("0xc0ea08a2d404d3172d2add29a45be56da40e2949"),
Root: common.HexToHash("0x77d14e10470b5850332524f8cd6f69ad21f070ce92dca33ab2858300242ef2f1"),
TxHash: common.HexToHash("0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421"),
ReceiptHash: common.HexToHash("0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421"),
Difficulty: big.NewInt(167925187834220),
GasLimit: 4015682,
GasUsed: 0,
Time: 1488928920,
Extra: []byte("www.bw.com"),
MixDigest: common.HexToHash("0x3e140b0784516af5e5ec6730f2fb20cca22f32be399b9e4ad77d32541f798cd0"),
Nonce: types.EncodeNonce(0xf400cd0006070c49),
})
// Simulate multiple processes sharing the same datadir
var pend sync.WaitGroup

for i := 0; i < 3; i++ {
pend.Add(1)

go func(idx int) {
defer pend.Done()
ethash := New(Config{cachedir, 0, 1, "", 0, 0, ModeNormal}, nil, false)
defer ethash.Close()
if err := ethash.VerifySeal(nil, block.Header()); err != nil {
t.Errorf("proc %d: block verification failed: %v", idx, err)
}
}(i)
}
pend.Wait()
}

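The test above constructs the engine with a positional Config literal. Assuming the field layout of this vendored version (CacheDir, CachesInMem, CachesOnDisk, DatasetDir, DatasetsInMem, DatasetsOnDisk, PowMode), the same call reads more clearly with named fields; a sketch:

ethash := New(Config{
    CacheDir:     cachedir,   // one cache persisted to disk per process
    CachesInMem:  0,
    CachesOnDisk: 1,
    PowMode:      ModeNormal, // remaining fields keep their zero values
}, nil, false)
defer ethash.Close()
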
// Benchmarks the cache generation performance.
func BenchmarkCacheGeneration(b *testing.B) {
for i := 0; i < b.N; i++ {
cache := make([]uint32, cacheSize(1)/4)
generateCache(cache, 0, make([]byte, 32))
}
}

// Benchmarks the dataset (small) generation performance.
func BenchmarkSmallDatasetGeneration(b *testing.B) {
cache := make([]uint32, 65536/4)
generateCache(cache, 0, make([]byte, 32))

b.ResetTimer()
for i := 0; i < b.N; i++ {
dataset := make([]uint32, 32*65536/4)
generateDataset(dataset, 0, cache)
}
}

// Benchmarks the light verification performance.
func BenchmarkHashimotoLight(b *testing.B) {
cache := make([]uint32, cacheSize(1)/4)
generateCache(cache, 0, make([]byte, 32))

hash := hexutil.MustDecode("0xc9149cc0386e689d789a1c2f3d5d169a61a6218ed30e74414dc736e442ef3d1f")

b.ResetTimer()
for i := 0; i < b.N; i++ {
hashimotoLight(datasetSize(1), cache, hash, 0)
}
}

// Benchmarks the full (small) verification performance.
func BenchmarkHashimotoFullSmall(b *testing.B) {
cache := make([]uint32, 65536/4)
generateCache(cache, 0, make([]byte, 32))

dataset := make([]uint32, 32*65536/4)
generateDataset(dataset, 0, cache)

hash := hexutil.MustDecode("0xc9149cc0386e689d789a1c2f3d5d169a61a6218ed30e74414dc736e442ef3d1f")

b.ResetTimer()
for i := 0; i < b.N; i++ {
hashimotoFull(dataset, hash, 0)
}
}
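These benchmarks are normally driven by the standard go test -bench tooling; from inside the same package they can also be invoked programmatically through testing.Benchmark. A sketch (the wrapper name is hypothetical, and fmt would be an extra import for this test file):

func runHashimotoLightBenchmark() {
    // Runs BenchmarkHashimotoLight once with an auto-scaled b.N and prints
    // the usual "iterations  ns/op" summary line.
    res := testing.Benchmark(BenchmarkHashimotoLight)
    fmt.Println(res)
}
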

629
vendor/github.com/ethereum/go-ethereum/consensus/ethash/consensus.go
generated
vendored
Normal file
@ -0,0 +1,629 @@
// Copyright 2017 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.

package ethash

import (
"bytes"
"errors"
"fmt"
"math/big"
"runtime"
"time"

mapset "github.com/deckarep/golang-set"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/math"
"github.com/ethereum/go-ethereum/consensus"
"github.com/ethereum/go-ethereum/consensus/misc"
"github.com/ethereum/go-ethereum/core/state"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/params"
"github.com/ethereum/go-ethereum/rlp"
"golang.org/x/crypto/sha3"
)

// Ethash proof-of-work protocol constants.
var (
FrontierBlockReward = big.NewInt(5e+18) // Block reward in wei for successfully mining a block
ByzantiumBlockReward = big.NewInt(3e+18) // Block reward in wei for successfully mining a block upward from Byzantium
ConstantinopleBlockReward = big.NewInt(2e+18) // Block reward in wei for successfully mining a block upward from Constantinople
maxUncles = 2 // Maximum number of uncles allowed in a single block
allowedFutureBlockTime = 15 * time.Second // Max time from current time allowed for blocks, before they're considered future blocks

// calcDifficultyConstantinople is the difficulty adjustment algorithm for Constantinople.
// It returns the difficulty that a new block should have when created at time given the
// parent block's time and difficulty. The calculation uses the Byzantium rules, but with
// bomb offset 5M.
// Specification EIP-1234: https://eips.ethereum.org/EIPS/eip-1234
calcDifficultyConstantinople = makeDifficultyCalculator(big.NewInt(5000000))

// calcDifficultyByzantium is the difficulty adjustment algorithm. It returns
// the difficulty that a new block should have when created at time given the
// parent block's time and difficulty. The calculation uses the Byzantium rules.
// Specification EIP-649: https://eips.ethereum.org/EIPS/eip-649
calcDifficultyByzantium = makeDifficultyCalculator(big.NewInt(3000000))
)

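The three block-reward constants above are selected per block according to which fork rules are active for that block number. A sketch of the usual selection pattern, assuming a *params.ChainConfig is available (blockRewardFor is a hypothetical helper, not part of this file):

func blockRewardFor(config *params.ChainConfig, num *big.Int) *big.Int {
    // Later forks override earlier ones, so check them in activation order.
    reward := FrontierBlockReward
    if config.IsByzantium(num) {
        reward = ByzantiumBlockReward
    }
    if config.IsConstantinople(num) {
        reward = ConstantinopleBlockReward
    }
    return reward
}
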
// Various error messages to mark blocks invalid. These should be private to
// prevent engine specific errors from being referenced in the remainder of the
// codebase, inherently breaking if the engine is swapped out. Please put common
// error types into the consensus package.
var (
errZeroBlockTime = errors.New("timestamp equals parent's")
errTooManyUncles = errors.New("too many uncles")
errDuplicateUncle = errors.New("duplicate uncle")
errUncleIsAncestor = errors.New("uncle is ancestor")
errDanglingUncle = errors.New("uncle's parent is not ancestor")
errInvalidDifficulty = errors.New("non-positive difficulty")
errInvalidMixDigest = errors.New("invalid mix digest")
errInvalidPoW = errors.New("invalid proof-of-work")
)

// Author implements consensus.Engine, returning the header's coinbase as the
|
||||
// proof-of-work verified author of the block.
|
||||
func (ethash *Ethash) Author(header *types.Header) (common.Address, error) {
|
||||
return header.Coinbase, nil
|
||||
}
|
||||
|
||||
// VerifyHeader checks whether a header conforms to the consensus rules of the
|
||||
// stock Ethereum ethash engine.
|
||||
func (ethash *Ethash) VerifyHeader(chain consensus.ChainReader, header *types.Header, seal bool) error {
|
||||
// If we're running a full engine faking, accept any input as valid
|
||||
if ethash.config.PowMode == ModeFullFake {
|
||||
return nil
|
||||
}
|
||||
// Short circuit if the header is known, or its parent is not
|
||||
number := header.Number.Uint64()
|
||||
if chain.GetHeader(header.Hash(), number) != nil {
|
||||
return nil
|
||||
}
|
||||
parent := chain.GetHeader(header.ParentHash, number-1)
|
||||
if parent == nil {
|
||||
return consensus.ErrUnknownAncestor
|
||||
}
|
||||
// Sanity checks passed, do a proper verification
|
||||
return ethash.verifyHeader(chain, header, parent, false, seal)
|
||||
}
|
||||
|
||||
// VerifyHeaders is similar to VerifyHeader, but verifies a batch of headers
|
||||
// concurrently. The method returns a quit channel to abort the operations and
|
||||
// a results channel to retrieve the async verifications.
|
||||
func (ethash *Ethash) VerifyHeaders(chain consensus.ChainReader, headers []*types.Header, seals []bool) (chan<- struct{}, <-chan error) {
|
||||
// If we're running a full engine faking, accept any input as valid
|
||||
if ethash.config.PowMode == ModeFullFake || len(headers) == 0 {
|
||||
abort, results := make(chan struct{}), make(chan error, len(headers))
|
||||
for i := 0; i < len(headers); i++ {
|
||||
results <- nil
|
||||
}
|
||||
return abort, results
|
||||
}
|
||||
|
||||
// Spawn as many workers as allowed threads
|
||||
workers := runtime.GOMAXPROCS(0)
|
||||
if len(headers) < workers {
|
||||
workers = len(headers)
|
||||
}
|
||||
|
||||
// Create a task channel and spawn the verifiers
|
||||
var (
|
||||
inputs = make(chan int)
|
||||
done = make(chan int, workers)
|
||||
errors = make([]error, len(headers))
|
||||
abort = make(chan struct{})
|
||||
)
|
||||
for i := 0; i < workers; i++ {
|
||||
go func() {
|
||||
for index := range inputs {
|
||||
errors[index] = ethash.verifyHeaderWorker(chain, headers, seals, index)
|
||||
done <- index
|
||||
}
|
||||
}()
|
||||
}
|
||||
|
||||
errorsOut := make(chan error, len(headers))
|
||||
go func() {
|
||||
defer close(inputs)
|
||||
var (
|
||||
in, out = 0, 0
|
||||
checked = make([]bool, len(headers))
|
||||
inputs = inputs
|
||||
)
|
||||
for {
|
||||
select {
|
||||
case inputs <- in:
|
||||
if in++; in == len(headers) {
|
||||
// Reached end of headers. Stop sending to workers.
|
||||
inputs = nil
|
||||
}
|
||||
case index := <-done:
|
||||
for checked[index] = true; checked[out]; out++ {
|
||||
errorsOut <- errors[out]
|
||||
if out == len(headers)-1 {
|
||||
return
|
||||
}
|
||||
}
|
||||
case <-abort:
|
||||
return
|
||||
}
|
||||
}
|
||||
}()
|
||||
return abort, errorsOut
|
||||
}
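The goroutine above fans verification out to GOMAXPROCS workers while still delivering results to errorsOut in strict header order, via the checked/out bookkeeping. Below is a minimal standalone sketch of that same ordering pattern with a stand-in task instead of header verification; every name and value in it is illustrative, not part of the engine.

package main

import (
	"fmt"
	"runtime"
)

// Stand-in for the ordered fan-out/fan-in pattern used by VerifyHeaders:
// work is done concurrently, but results are emitted strictly in input order.
func main() {
	items := []int{10, 20, 30, 40, 50}

	workers := runtime.GOMAXPROCS(0)
	if len(items) < workers {
		workers = len(items)
	}

	var (
		inputs  = make(chan int)
		done    = make(chan int, workers)
		results = make([]int, len(items))
		ordered = make(chan int, len(items))
	)
	for i := 0; i < workers; i++ {
		go func() {
			for idx := range inputs {
				results[idx] = items[idx] * items[idx] // placeholder work
				done <- idx
			}
		}()
	}
	go func() {
		defer close(inputs)
		var (
			in, next = 0, 0
			checked  = make([]bool, len(items))
			feed     = inputs
		)
		for {
			select {
			case feed <- in:
				if in++; in == len(items) {
					feed = nil // every index dispatched, stop feeding
				}
			case idx := <-done:
				// Flush the longest finished prefix, preserving input order.
				for checked[idx] = true; next < len(items) && checked[next]; next++ {
					ordered <- results[next]
				}
				if next == len(items) {
					return
				}
			}
		}
	}()
	for i := 0; i < len(items); i++ {
		fmt.Println(<-ordered)
	}
}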
|
||||
|
||||
func (ethash *Ethash) verifyHeaderWorker(chain consensus.ChainReader, headers []*types.Header, seals []bool, index int) error {
|
||||
var parent *types.Header
|
||||
if index == 0 {
|
||||
parent = chain.GetHeader(headers[0].ParentHash, headers[0].Number.Uint64()-1)
|
||||
} else if headers[index-1].Hash() == headers[index].ParentHash {
|
||||
parent = headers[index-1]
|
||||
}
|
||||
if parent == nil {
|
||||
return consensus.ErrUnknownAncestor
|
||||
}
|
||||
if chain.GetHeader(headers[index].Hash(), headers[index].Number.Uint64()) != nil {
|
||||
return nil // known block
|
||||
}
|
||||
return ethash.verifyHeader(chain, headers[index], parent, false, seals[index])
|
||||
}
|
||||
|
||||
// VerifyUncles verifies that the given block's uncles conform to the consensus
|
||||
// rules of the stock Ethereum ethash engine.
|
||||
func (ethash *Ethash) VerifyUncles(chain consensus.ChainReader, block *types.Block) error {
|
||||
// If we're running a full engine faking, accept any input as valid
|
||||
if ethash.config.PowMode == ModeFullFake {
|
||||
return nil
|
||||
}
|
||||
// Verify that there are at most 2 uncles included in this block
|
||||
if len(block.Uncles()) > maxUncles {
|
||||
return errTooManyUncles
|
||||
}
|
||||
if len(block.Uncles()) == 0 {
|
||||
return nil
|
||||
}
|
||||
// Gather the set of past uncles and ancestors
|
||||
uncles, ancestors := mapset.NewSet(), make(map[common.Hash]*types.Header)
|
||||
|
||||
number, parent := block.NumberU64()-1, block.ParentHash()
|
||||
for i := 0; i < 7; i++ {
|
||||
ancestor := chain.GetBlock(parent, number)
|
||||
if ancestor == nil {
|
||||
break
|
||||
}
|
||||
ancestors[ancestor.Hash()] = ancestor.Header()
|
||||
for _, uncle := range ancestor.Uncles() {
|
||||
uncles.Add(uncle.Hash())
|
||||
}
|
||||
parent, number = ancestor.ParentHash(), number-1
|
||||
}
|
||||
ancestors[block.Hash()] = block.Header()
|
||||
uncles.Add(block.Hash())
|
||||
|
||||
// Verify that each uncle is recent, but not an ancestor
|
||||
for _, uncle := range block.Uncles() {
|
||||
// Make sure every uncle is rewarded only once
|
||||
hash := uncle.Hash()
|
||||
if uncles.Contains(hash) {
|
||||
return errDuplicateUncle
|
||||
}
|
||||
uncles.Add(hash)
|
||||
|
||||
// Make sure the uncle has a valid ancestry
|
||||
if ancestors[hash] != nil {
|
||||
return errUncleIsAncestor
|
||||
}
|
||||
if ancestors[uncle.ParentHash] == nil || uncle.ParentHash == block.ParentHash() {
|
||||
return errDanglingUncle
|
||||
}
|
||||
if err := ethash.verifyHeader(chain, uncle, ancestors[uncle.ParentHash], true, true); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// verifyHeader checks whether a header conforms to the consensus rules of the
|
||||
// stock Ethereum ethash engine.
|
||||
// See YP section 4.3.4. "Block Header Validity"
|
||||
func (ethash *Ethash) verifyHeader(chain consensus.ChainReader, header, parent *types.Header, uncle bool, seal bool) error {
|
||||
// Ensure that the header's extra-data section is of a reasonable size
|
||||
if uint64(len(header.Extra)) > params.MaximumExtraDataSize {
|
||||
return fmt.Errorf("extra-data too long: %d > %d", len(header.Extra), params.MaximumExtraDataSize)
|
||||
}
|
||||
// Verify the header's timestamp
|
||||
if !uncle {
|
||||
if header.Time > uint64(time.Now().Add(allowedFutureBlockTime).Unix()) {
|
||||
return consensus.ErrFutureBlock
|
||||
}
|
||||
}
|
||||
if header.Time <= parent.Time {
|
||||
return errZeroBlockTime
|
||||
}
|
||||
// Verify the block's difficulty based on its timestamp and the parent's difficulty
|
||||
expected := ethash.CalcDifficulty(chain, header.Time, parent)
|
||||
|
||||
if expected.Cmp(header.Difficulty) != 0 {
|
||||
return fmt.Errorf("invalid difficulty: have %v, want %v", header.Difficulty, expected)
|
||||
}
|
||||
// Verify that the gas limit is <= 2^63-1
|
||||
cap := uint64(0x7fffffffffffffff)
|
||||
if header.GasLimit > cap {
|
||||
return fmt.Errorf("invalid gasLimit: have %v, max %v", header.GasLimit, cap)
|
||||
}
|
||||
// Verify that the gasUsed is <= gasLimit
|
||||
if header.GasUsed > header.GasLimit {
|
||||
return fmt.Errorf("invalid gasUsed: have %d, gasLimit %d", header.GasUsed, header.GasLimit)
|
||||
}
|
||||
|
||||
// Verify that the gas limit remains within allowed bounds
|
||||
diff := int64(parent.GasLimit) - int64(header.GasLimit)
|
||||
if diff < 0 {
|
||||
diff *= -1
|
||||
}
|
||||
limit := parent.GasLimit / params.GasLimitBoundDivisor
|
||||
|
||||
if uint64(diff) >= limit || header.GasLimit < params.MinGasLimit {
|
||||
return fmt.Errorf("invalid gas limit: have %d, want %d += %d", header.GasLimit, parent.GasLimit, limit)
|
||||
}
|
||||
// Verify that the block number is parent's +1
|
||||
if diff := new(big.Int).Sub(header.Number, parent.Number); diff.Cmp(big.NewInt(1)) != 0 {
|
||||
return consensus.ErrInvalidNumber
|
||||
}
|
||||
// Verify the engine specific seal securing the block
|
||||
if seal {
|
||||
if err := ethash.VerifySeal(chain, header); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
// If all checks passed, validate any special fields for hard forks
|
||||
if err := misc.VerifyDAOHeaderExtraData(chain.Config(), header); err != nil {
|
||||
return err
|
||||
}
|
||||
if err := misc.VerifyForkHashes(chain.Config(), header, uncle); err != nil {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// CalcDifficulty is the difficulty adjustment algorithm. It returns
|
||||
// the difficulty that a new block should have when created at time
|
||||
// given the parent block's time and difficulty.
|
||||
func (ethash *Ethash) CalcDifficulty(chain consensus.ChainReader, time uint64, parent *types.Header) *big.Int {
|
||||
return CalcDifficulty(chain.Config(), time, parent)
|
||||
}
|
||||
|
||||
// CalcDifficulty is the difficulty adjustment algorithm. It returns
|
||||
// the difficulty that a new block should have when created at time
|
||||
// given the parent block's time and difficulty.
|
||||
func CalcDifficulty(config *params.ChainConfig, time uint64, parent *types.Header) *big.Int {
|
||||
next := new(big.Int).Add(parent.Number, big1)
|
||||
switch {
|
||||
case config.IsConstantinople(next):
|
||||
return calcDifficultyConstantinople(time, parent)
|
||||
case config.IsByzantium(next):
|
||||
return calcDifficultyByzantium(time, parent)
|
||||
case config.IsHomestead(next):
|
||||
return calcDifficultyHomestead(time, parent)
|
||||
default:
|
||||
return calcDifficultyFrontier(time, parent)
|
||||
}
|
||||
}
|
||||
|
||||
// Some weird constants to avoid constant memory allocs for them.
|
||||
var (
|
||||
expDiffPeriod = big.NewInt(100000)
|
||||
big1 = big.NewInt(1)
|
||||
big2 = big.NewInt(2)
|
||||
big9 = big.NewInt(9)
|
||||
big10 = big.NewInt(10)
|
||||
bigMinus99 = big.NewInt(-99)
|
||||
)
|
||||
|
||||
// makeDifficultyCalculator creates a difficultyCalculator with the given bomb-delay.
|
||||
// The difficulty is calculated with Byzantium rules, which differ from Homestead in
|
||||
// how uncles affect the calculation
|
||||
func makeDifficultyCalculator(bombDelay *big.Int) func(time uint64, parent *types.Header) *big.Int {
|
||||
// Note: the calculations below look at the parent number, which is 1 below
|
||||
// the block number. Thus we remove one from the delay given
|
||||
bombDelayFromParent := new(big.Int).Sub(bombDelay, big1)
|
||||
return func(time uint64, parent *types.Header) *big.Int {
|
||||
// https://github.com/ethereum/EIPs/issues/100.
|
||||
// algorithm:
|
||||
// diff = (parent_diff +
|
||||
// (parent_diff / 2048 * max((2 if len(parent.uncles) else 1) - ((timestamp - parent.timestamp) // 9), -99))
|
||||
// ) + 2^(periodCount - 2)
|
||||
|
||||
bigTime := new(big.Int).SetUint64(time)
|
||||
bigParentTime := new(big.Int).SetUint64(parent.Time)
|
||||
|
||||
// holds intermediate values to make the algo easier to read & audit
|
||||
x := new(big.Int)
|
||||
y := new(big.Int)
|
||||
|
||||
// (2 if len(parent_uncles) else 1) - (block_timestamp - parent_timestamp) // 9
|
||||
x.Sub(bigTime, bigParentTime)
|
||||
x.Div(x, big9)
|
||||
if parent.UncleHash == types.EmptyUncleHash {
|
||||
x.Sub(big1, x)
|
||||
} else {
|
||||
x.Sub(big2, x)
|
||||
}
|
||||
// max((2 if len(parent_uncles) else 1) - (block_timestamp - parent_timestamp) // 9, -99)
|
||||
if x.Cmp(bigMinus99) < 0 {
|
||||
x.Set(bigMinus99)
|
||||
}
|
||||
// parent_diff + (parent_diff / 2048 * max((2 if len(parent.uncles) else 1) - ((timestamp - parent.timestamp) // 9), -99))
|
||||
y.Div(parent.Difficulty, params.DifficultyBoundDivisor)
|
||||
x.Mul(y, x)
|
||||
x.Add(parent.Difficulty, x)
|
||||
|
||||
// minimum difficulty can ever be (before exponential factor)
|
||||
if x.Cmp(params.MinimumDifficulty) < 0 {
|
||||
x.Set(params.MinimumDifficulty)
|
||||
}
|
||||
// calculate a fake block number for the ice-age delay
|
||||
// Specification: https://eips.ethereum.org/EIPS/eip-1234
|
||||
fakeBlockNumber := new(big.Int)
|
||||
if parent.Number.Cmp(bombDelayFromParent) >= 0 {
|
||||
fakeBlockNumber = fakeBlockNumber.Sub(parent.Number, bombDelayFromParent)
|
||||
}
|
||||
// for the exponential factor
|
||||
periodCount := fakeBlockNumber
|
||||
periodCount.Div(periodCount, expDiffPeriod)
|
||||
|
||||
// the exponential factor, commonly referred to as "the bomb"
|
||||
// diff = diff + 2^(periodCount - 2)
|
||||
if periodCount.Cmp(big1) > 0 {
|
||||
y.Sub(periodCount, big2)
|
||||
y.Exp(big2, y, nil)
|
||||
x.Add(x, y)
|
||||
}
|
||||
return x
|
||||
}
|
||||
}
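To make the closure's arithmetic concrete, the standalone sketch below reproduces the EIP-100 adjustment plus the delayed ice-age bomb with plain big.Int values. The parent values are invented, the 2048 divisor and 131072 floor stand in for params.DifficultyBoundDivisor and params.MinimumDifficulty, and the bomb delay used is Byzantium's 3,000,000 blocks; nothing here is taken from a real chain.

package main

import (
	"fmt"
	"math/big"
)

func main() {
	var (
		parentNumber     = big.NewInt(4_400_000)         // hypothetical parent block
		parentDifficulty = big.NewInt(3_000_000_000_000) // hypothetical parent difficulty
		parentTime       = uint64(1_500_000_000)
		blockTime        = uint64(1_500_000_014) // 14 seconds after the parent
		bombDelay        = big.NewInt(3_000_000) // Byzantium bomb delay
	)

	// (2 if parent has uncles else 1) - (blockTime-parentTime)//9, floored at -99.
	// This sketch assumes the parent has no uncles, so the leading term is 1.
	adj := big.NewInt(1 - int64((blockTime-parentTime)/9))
	if adj.Cmp(big.NewInt(-99)) < 0 {
		adj.SetInt64(-99)
	}

	// diff = parent_diff + parent_diff/2048 * adjustment, floored at the minimum difficulty.
	diff := new(big.Int).Div(parentDifficulty, big.NewInt(2048))
	diff.Mul(diff, adj)
	diff.Add(diff, parentDifficulty)
	if diff.Cmp(big.NewInt(131072)) < 0 {
		diff.SetInt64(131072)
	}

	// Ice-age bomb: fakeBlock = max(parentNumber - (bombDelay-1), 0), then add 2^(fakeBlock/100000 - 2).
	fake := new(big.Int).Sub(parentNumber, new(big.Int).Sub(bombDelay, big.NewInt(1)))
	if fake.Sign() < 0 {
		fake.SetInt64(0)
	}
	period := new(big.Int).Div(fake, big.NewInt(100_000))
	if period.Cmp(big.NewInt(1)) > 0 {
		bomb := new(big.Int).Exp(big.NewInt(2), new(big.Int).Sub(period, big.NewInt(2)), nil)
		diff.Add(diff, bomb)
	}
	fmt.Println("next difficulty:", diff)
}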
|
||||
|
||||
// calcDifficultyHomestead is the difficulty adjustment algorithm. It returns
|
||||
// the difficulty that a new block should have when created at time given the
|
||||
// parent block's time and difficulty. The calculation uses the Homestead rules.
|
||||
func calcDifficultyHomestead(time uint64, parent *types.Header) *big.Int {
|
||||
// https://github.com/ethereum/EIPs/blob/master/EIPS/eip-2.md
|
||||
// algorithm:
|
||||
// diff = (parent_diff +
|
||||
// (parent_diff / 2048 * max(1 - (block_timestamp - parent_timestamp) // 10, -99))
|
||||
// ) + 2^(periodCount - 2)
|
||||
|
||||
bigTime := new(big.Int).SetUint64(time)
|
||||
bigParentTime := new(big.Int).SetUint64(parent.Time)
|
||||
|
||||
// holds intermediate values to make the algo easier to read & audit
|
||||
x := new(big.Int)
|
||||
y := new(big.Int)
|
||||
|
||||
// 1 - (block_timestamp - parent_timestamp) // 10
|
||||
x.Sub(bigTime, bigParentTime)
|
||||
x.Div(x, big10)
|
||||
x.Sub(big1, x)
|
||||
|
||||
// max(1 - (block_timestamp - parent_timestamp) // 10, -99)
|
||||
if x.Cmp(bigMinus99) < 0 {
|
||||
x.Set(bigMinus99)
|
||||
}
|
||||
// (parent_diff + parent_diff // 2048 * max(1 - (block_timestamp - parent_timestamp) // 10, -99))
|
||||
y.Div(parent.Difficulty, params.DifficultyBoundDivisor)
|
||||
x.Mul(y, x)
|
||||
x.Add(parent.Difficulty, x)
|
||||
|
||||
// minimum difficulty can ever be (before exponential factor)
|
||||
if x.Cmp(params.MinimumDifficulty) < 0 {
|
||||
x.Set(params.MinimumDifficulty)
|
||||
}
|
||||
// for the exponential factor
|
||||
periodCount := new(big.Int).Add(parent.Number, big1)
|
||||
periodCount.Div(periodCount, expDiffPeriod)
|
||||
|
||||
// the exponential factor, commonly referred to as "the bomb"
|
||||
// diff = diff + 2^(periodCount - 2)
|
||||
if periodCount.Cmp(big1) > 0 {
|
||||
y.Sub(periodCount, big2)
|
||||
y.Exp(big2, y, nil)
|
||||
x.Add(x, y)
|
||||
}
|
||||
return x
|
||||
}
|
||||
|
||||
// calcDifficultyFrontier is the difficulty adjustment algorithm. It returns the
|
||||
// difficulty that a new block should have when created at time given the parent
|
||||
// block's time and difficulty. The calculation uses the Frontier rules.
|
||||
func calcDifficultyFrontier(time uint64, parent *types.Header) *big.Int {
|
||||
diff := new(big.Int)
|
||||
adjust := new(big.Int).Div(parent.Difficulty, params.DifficultyBoundDivisor)
|
||||
bigTime := new(big.Int)
|
||||
bigParentTime := new(big.Int)
|
||||
|
||||
bigTime.SetUint64(time)
|
||||
bigParentTime.SetUint64(parent.Time)
|
||||
|
||||
if bigTime.Sub(bigTime, bigParentTime).Cmp(params.DurationLimit) < 0 {
|
||||
diff.Add(parent.Difficulty, adjust)
|
||||
} else {
|
||||
diff.Sub(parent.Difficulty, adjust)
|
||||
}
|
||||
if diff.Cmp(params.MinimumDifficulty) < 0 {
|
||||
diff.Set(params.MinimumDifficulty)
|
||||
}
|
||||
|
||||
periodCount := new(big.Int).Add(parent.Number, big1)
|
||||
periodCount.Div(periodCount, expDiffPeriod)
|
||||
if periodCount.Cmp(big1) > 0 {
|
||||
// diff = diff + 2^(periodCount - 2)
|
||||
expDiff := periodCount.Sub(periodCount, big2)
|
||||
expDiff.Exp(big2, expDiff, nil)
|
||||
diff.Add(diff, expDiff)
|
||||
diff = math.BigMax(diff, params.MinimumDifficulty)
|
||||
}
|
||||
return diff
|
||||
}
|
||||
|
||||
// VerifySeal implements consensus.Engine, checking whether the given block satisfies
|
||||
// the PoW difficulty requirements.
|
||||
func (ethash *Ethash) VerifySeal(chain consensus.ChainReader, header *types.Header) error {
|
||||
return ethash.verifySeal(chain, header, false)
|
||||
}
|
||||
|
||||
// verifySeal checks whether a block satisfies the PoW difficulty requirements,
|
||||
// either using the usual ethash cache for it, or alternatively using a full DAG
|
||||
// to make remote mining fast.
|
||||
func (ethash *Ethash) verifySeal(chain consensus.ChainReader, header *types.Header, fulldag bool) error {
|
||||
// If we're running a fake PoW, accept any seal as valid
|
||||
if ethash.config.PowMode == ModeFake || ethash.config.PowMode == ModeFullFake {
|
||||
time.Sleep(ethash.fakeDelay)
|
||||
if ethash.fakeFail == header.Number.Uint64() {
|
||||
return errInvalidPoW
|
||||
}
|
||||
return nil
|
||||
}
|
||||
// If we're running a shared PoW, delegate verification to it
|
||||
if ethash.shared != nil {
|
||||
return ethash.shared.verifySeal(chain, header, fulldag)
|
||||
}
|
||||
// Ensure that we have a valid difficulty for the block
|
||||
if header.Difficulty.Sign() <= 0 {
|
||||
return errInvalidDifficulty
|
||||
}
|
||||
// Recompute the digest and PoW values
|
||||
number := header.Number.Uint64()
|
||||
|
||||
var (
|
||||
digest []byte
|
||||
result []byte
|
||||
)
|
||||
// If fast-but-heavy PoW verification was requested, use an ethash dataset
|
||||
if fulldag {
|
||||
dataset := ethash.dataset(number, true)
|
||||
if dataset.generated() {
|
||||
digest, result = hashimotoFull(dataset.dataset, ethash.SealHash(header).Bytes(), header.Nonce.Uint64())
|
||||
|
||||
// Datasets are unmapped in a finalizer. Ensure that the dataset stays alive
|
||||
// until after the call to hashimotoFull so it's not unmapped while being used.
|
||||
runtime.KeepAlive(dataset)
|
||||
} else {
|
||||
// Dataset not yet generated, don't hang, use a cache instead
|
||||
fulldag = false
|
||||
}
|
||||
}
|
||||
// If slow-but-light PoW verification was requested (or DAG not yet ready), use an ethash cache
|
||||
if !fulldag {
|
||||
cache := ethash.cache(number)
|
||||
|
||||
size := datasetSize(number)
|
||||
if ethash.config.PowMode == ModeTest {
|
||||
size = 32 * 1024
|
||||
}
|
||||
digest, result = hashimotoLight(size, cache.cache, ethash.SealHash(header).Bytes(), header.Nonce.Uint64())
|
||||
|
||||
// Caches are unmapped in a finalizer. Ensure that the cache stays alive
|
||||
// until after the call to hashimotoLight so it's not unmapped while being used.
|
||||
runtime.KeepAlive(cache)
|
||||
}
|
||||
// Verify the calculated values against the ones provided in the header
|
||||
if !bytes.Equal(header.MixDigest[:], digest) {
|
||||
return errInvalidMixDigest
|
||||
}
|
||||
target := new(big.Int).Div(two256, header.Difficulty)
|
||||
if new(big.Int).SetBytes(result).Cmp(target) > 0 {
|
||||
return errInvalidPoW
|
||||
}
|
||||
return nil
|
||||
}
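For reference, the final comparison treats the 32-byte hashimoto result as a big-endian integer and accepts the seal only when it does not exceed 2^256 divided by the header difficulty. A self-contained sketch with fabricated values follows; the result bytes below are made up, not produced by hashimoto.

package main

import (
	"fmt"
	"math/big"
)

func main() {
	two256 := new(big.Int).Exp(big.NewInt(2), big.NewInt(256), nil)
	difficulty := big.NewInt(2_000_000_000_000) // hypothetical header difficulty

	// target = 2^256 / difficulty; a seal is valid when result <= target.
	target := new(big.Int).Div(two256, difficulty)

	// A 32-byte stand-in result with enough leading zero bytes to pass
	// (big-endian value of roughly 2^200, well under the ~2^215 target).
	result := make([]byte, 32)
	result[6] = 0x01

	ok := new(big.Int).SetBytes(result).Cmp(target) <= 0
	fmt.Println("target bits:", target.BitLen(), "valid seal:", ok)
}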
|
||||
|
||||
// Prepare implements consensus.Engine, initializing the difficulty field of a
|
||||
// header to conform to the ethash protocol. The changes are done inline.
|
||||
func (ethash *Ethash) Prepare(chain consensus.ChainReader, header *types.Header) error {
|
||||
parent := chain.GetHeader(header.ParentHash, header.Number.Uint64()-1)
|
||||
if parent == nil {
|
||||
return consensus.ErrUnknownAncestor
|
||||
}
|
||||
header.Difficulty = ethash.CalcDifficulty(chain, header.Time, parent)
|
||||
return nil
|
||||
}
|
||||
|
||||
// Finalize implements consensus.Engine, accumulating the block and uncle rewards,
|
||||
// setting the final state and assembling the block.
|
||||
func (ethash *Ethash) Finalize(chain consensus.ChainReader, header *types.Header, state *state.StateDB, txs []*types.Transaction, uncles []*types.Header, receipts []*types.Receipt) (*types.Block, error) {
|
||||
// Accumulate any block and uncle rewards and commit the final state root
|
||||
accumulateRewards(chain.Config(), state, header, uncles)
|
||||
header.Root = state.IntermediateRoot(chain.Config().IsEIP158(header.Number))
|
||||
|
||||
// Header seems complete, assemble into a block and return
|
||||
return types.NewBlock(header, txs, uncles, receipts), nil
|
||||
}
|
||||
|
||||
// SealHash returns the hash of a block prior to it being sealed.
|
||||
func (ethash *Ethash) SealHash(header *types.Header) (hash common.Hash) {
|
||||
hasher := sha3.NewLegacyKeccak256()
|
||||
|
||||
rlp.Encode(hasher, []interface{}{
|
||||
header.ParentHash,
|
||||
header.UncleHash,
|
||||
header.Coinbase,
|
||||
header.Root,
|
||||
header.TxHash,
|
||||
header.ReceiptHash,
|
||||
header.Bloom,
|
||||
header.Difficulty,
|
||||
header.Number,
|
||||
header.GasLimit,
|
||||
header.GasUsed,
|
||||
header.Time,
|
||||
header.Extra,
|
||||
})
|
||||
hasher.Sum(hash[:0])
|
||||
return hash
|
||||
}
|
||||
|
||||
// Some weird constants to avoid constant memory allocs for them.
|
||||
var (
|
||||
big8 = big.NewInt(8)
|
||||
big32 = big.NewInt(32)
|
||||
)
|
||||
|
||||
// AccumulateRewards credits the coinbase of the given block with the mining
|
||||
// reward. The total reward consists of the static block reward and rewards for
|
||||
// included uncles. The coinbase of each uncle block is also rewarded.
|
||||
func accumulateRewards(config *params.ChainConfig, state *state.StateDB, header *types.Header, uncles []*types.Header) {
|
||||
// Select the correct block reward based on chain progression
|
||||
blockReward := FrontierBlockReward
|
||||
if config.IsByzantium(header.Number) {
|
||||
blockReward = ByzantiumBlockReward
|
||||
}
|
||||
if config.IsConstantinople(header.Number) {
|
||||
blockReward = ConstantinopleBlockReward
|
||||
}
|
||||
// Accumulate the rewards for the miner and any included uncles
|
||||
reward := new(big.Int).Set(blockReward)
|
||||
r := new(big.Int)
|
||||
for _, uncle := range uncles {
|
||||
r.Add(uncle.Number, big8)
|
||||
r.Sub(r, header.Number)
|
||||
r.Mul(r, blockReward)
|
||||
r.Div(r, big8)
|
||||
state.AddBalance(uncle.Coinbase, r)
|
||||
|
||||
r.Div(blockReward, big32)
|
||||
reward.Add(reward, r)
|
||||
}
|
||||
state.AddBalance(header.Coinbase, reward)
|
||||
}
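The loop above gives each uncle (uncleNumber + 8 - blockNumber)/8 of the block reward, and the including miner an extra 1/32 of the reward per uncle on top of the full reward. A standalone sketch of that arithmetic with the Byzantium reward and illustrative block numbers:

package main

import (
	"fmt"
	"math/big"
)

func main() {
	var (
		blockReward = big.NewInt(3e18)      // ByzantiumBlockReward
		headerNum   = big.NewInt(5_000_002) // hypothetical including block
		uncleNum    = big.NewInt(5_000_000) // uncle mined two blocks earlier
		big8        = big.NewInt(8)
		big32       = big.NewInt(32)
	)

	// Uncle miner: (uncleNum + 8 - headerNum) * reward / 8, i.e. 6/8 of the reward here.
	r := new(big.Int).Add(uncleNum, big8)
	r.Sub(r, headerNum)
	r.Mul(r, blockReward)
	r.Div(r, big8)
	fmt.Println("uncle reward:", r) // 2.25e18 wei

	// Including miner: the full reward plus reward/32 for the one included uncle.
	nephew := new(big.Int).Div(blockReward, big32)
	total := new(big.Int).Add(blockReward, nephew)
	fmt.Println("miner reward:", total) // 3.09375e18 wei
}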
|
86
vendor/github.com/ethereum/go-ethereum/consensus/ethash/consensus_test.go
generated
vendored
Normal file
@ -0,0 +1,86 @@
|
||||
// Copyright 2017 The go-ethereum Authors
|
||||
// This file is part of the go-ethereum library.
|
||||
//
|
||||
// The go-ethereum library is free software: you can redistribute it and/or modify
|
||||
// it under the terms of the GNU Lesser General Public License as published by
|
||||
// the Free Software Foundation, either version 3 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// The go-ethereum library is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU Lesser General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU Lesser General Public License
|
||||
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
package ethash
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"math/big"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"testing"
|
||||
|
||||
"github.com/ethereum/go-ethereum/common/math"
|
||||
"github.com/ethereum/go-ethereum/core/types"
|
||||
"github.com/ethereum/go-ethereum/params"
|
||||
)
|
||||
|
||||
type diffTest struct {
|
||||
ParentTimestamp uint64
|
||||
ParentDifficulty *big.Int
|
||||
CurrentTimestamp uint64
|
||||
CurrentBlocknumber *big.Int
|
||||
CurrentDifficulty *big.Int
|
||||
}
|
||||
|
||||
func (d *diffTest) UnmarshalJSON(b []byte) (err error) {
|
||||
var ext struct {
|
||||
ParentTimestamp string
|
||||
ParentDifficulty string
|
||||
CurrentTimestamp string
|
||||
CurrentBlocknumber string
|
||||
CurrentDifficulty string
|
||||
}
|
||||
if err := json.Unmarshal(b, &ext); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
d.ParentTimestamp = math.MustParseUint64(ext.ParentTimestamp)
|
||||
d.ParentDifficulty = math.MustParseBig256(ext.ParentDifficulty)
|
||||
d.CurrentTimestamp = math.MustParseUint64(ext.CurrentTimestamp)
|
||||
d.CurrentBlocknumber = math.MustParseBig256(ext.CurrentBlocknumber)
|
||||
d.CurrentDifficulty = math.MustParseBig256(ext.CurrentDifficulty)
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func TestCalcDifficulty(t *testing.T) {
|
||||
file, err := os.Open(filepath.Join("..", "..", "tests", "testdata", "BasicTests", "difficulty.json"))
|
||||
if err != nil {
|
||||
t.Skip(err)
|
||||
}
|
||||
defer file.Close()
|
||||
|
||||
tests := make(map[string]diffTest)
|
||||
err = json.NewDecoder(file).Decode(&tests)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
config := &params.ChainConfig{HomesteadBlock: big.NewInt(1150000)}
|
||||
|
||||
for name, test := range tests {
|
||||
number := new(big.Int).Sub(test.CurrentBlocknumber, big.NewInt(1))
|
||||
diff := CalcDifficulty(config, test.CurrentTimestamp, &types.Header{
|
||||
Number: number,
|
||||
Time: test.ParentTimestamp,
|
||||
Difficulty: test.ParentDifficulty,
|
||||
})
|
||||
if diff.Cmp(test.CurrentDifficulty) != 0 {
|
||||
t.Error(name, "failed. Expected", test.CurrentDifficulty, "and calculated", diff)
|
||||
}
|
||||
}
|
||||
}
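The test above decodes difficulty.json as a map of named cases whose fields are all strings, later parsed with math.MustParseUint64 and math.MustParseBig256. The snippet below sketches that shape with an entirely made-up case; the real fixtures ship with the ethereum/tests repository and the test skips itself when they are absent.

package main

import (
	"encoding/json"
	"fmt"
)

// Mirror of the string-typed fields decoded by diffTest.UnmarshalJSON above.
type rawDiffCase struct {
	ParentTimestamp    string
	ParentDifficulty   string
	CurrentTimestamp   string
	CurrentBlocknumber string
	CurrentDifficulty  string
}

// A fabricated fixture entry; only the shape matters here, not the values.
const sample = `{
	"exampleCase": {
		"parentTimestamp":    "42",
		"parentDifficulty":   "1000000",
		"currentTimestamp":   "56",
		"currentBlocknumber": "100001",
		"currentDifficulty":  "1000488"
	}
}`

func main() {
	cases := make(map[string]rawDiffCase)
	if err := json.Unmarshal([]byte(sample), &cases); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", cases["exampleCase"])
}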
|
438
vendor/github.com/ethereum/go-ethereum/console/bridge.go
generated
vendored
Normal file
@ -0,0 +1,438 @@
|
||||
// Copyright 2016 The go-ethereum Authors
|
||||
// This file is part of the go-ethereum library.
|
||||
//
|
||||
// The go-ethereum library is free software: you can redistribute it and/or modify
|
||||
// it under the terms of the GNU Lesser General Public License as published by
|
||||
// the Free Software Foundation, either version 3 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// The go-ethereum library is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU Lesser General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU Lesser General Public License
|
||||
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
package console
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"io"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/ethereum/go-ethereum/accounts/scwallet"
|
||||
"github.com/ethereum/go-ethereum/accounts/usbwallet"
|
||||
"github.com/ethereum/go-ethereum/log"
|
||||
"github.com/ethereum/go-ethereum/rpc"
|
||||
"github.com/robertkrimen/otto"
|
||||
)
|
||||
|
||||
// bridge is a collection of JavaScript utility methods to bridge the .js runtime
|
||||
// environment and the Go RPC connection backing the remote method calls.
|
||||
type bridge struct {
|
||||
client *rpc.Client // RPC client to execute Ethereum requests through
|
||||
prompter UserPrompter // Input prompter to allow interactive user feedback
|
||||
printer io.Writer // Output writer to serialize any display strings to
|
||||
}
|
||||
|
||||
// newBridge creates a new JavaScript wrapper around an RPC client.
|
||||
func newBridge(client *rpc.Client, prompter UserPrompter, printer io.Writer) *bridge {
|
||||
return &bridge{
|
||||
client: client,
|
||||
prompter: prompter,
|
||||
printer: printer,
|
||||
}
|
||||
}
|
||||
|
||||
// NewAccount is a wrapper around the personal.newAccount RPC method that uses a
|
||||
// non-echoing password prompt to acquire the passphrase and executes the original
|
||||
// RPC method (saved in jeth.newAccount) with it to actually execute the RPC call.
|
||||
func (b *bridge) NewAccount(call otto.FunctionCall) (response otto.Value) {
|
||||
var (
|
||||
password string
|
||||
confirm string
|
||||
err error
|
||||
)
|
||||
switch {
|
||||
// No password was specified, prompt the user for it
|
||||
case len(call.ArgumentList) == 0:
|
||||
if password, err = b.prompter.PromptPassword("Passphrase: "); err != nil {
|
||||
throwJSException(err.Error())
|
||||
}
|
||||
if confirm, err = b.prompter.PromptPassword("Repeat passphrase: "); err != nil {
|
||||
throwJSException(err.Error())
|
||||
}
|
||||
if password != confirm {
|
||||
throwJSException("passphrases don't match!")
|
||||
}
|
||||
|
||||
// A single string password was specified, use that
|
||||
case len(call.ArgumentList) == 1 && call.Argument(0).IsString():
|
||||
password, _ = call.Argument(0).ToString()
|
||||
|
||||
// Otherwise fail with some error
|
||||
default:
|
||||
throwJSException("expected 0 or 1 string argument")
|
||||
}
|
||||
// Password acquired, execute the call and return
|
||||
ret, err := call.Otto.Call("jeth.newAccount", nil, password)
|
||||
if err != nil {
|
||||
throwJSException(err.Error())
|
||||
}
|
||||
return ret
|
||||
}
|
||||
|
||||
// OpenWallet is a wrapper around personal.openWallet which can interpret and
|
||||
// react to certain error messages, such as the Trezor PIN matrix request.
|
||||
func (b *bridge) OpenWallet(call otto.FunctionCall) (response otto.Value) {
|
||||
// Make sure we have a wallet specified to open
|
||||
if !call.Argument(0).IsString() {
|
||||
throwJSException("first argument must be the wallet URL to open")
|
||||
}
|
||||
wallet := call.Argument(0)
|
||||
|
||||
var passwd otto.Value
|
||||
if call.Argument(1).IsUndefined() || call.Argument(1).IsNull() {
|
||||
passwd, _ = otto.ToValue("")
|
||||
} else {
|
||||
passwd = call.Argument(1)
|
||||
}
|
||||
// Open the wallet and return if successful in itself
|
||||
val, err := call.Otto.Call("jeth.openWallet", nil, wallet, passwd)
|
||||
if err == nil {
|
||||
return val
|
||||
}
|
||||
|
||||
// Wallet open failed, report error unless it's a PIN or PUK entry
|
||||
switch {
|
||||
case strings.HasSuffix(err.Error(), usbwallet.ErrTrezorPINNeeded.Error()):
|
||||
val, err = b.readPinAndReopenWallet(call)
|
||||
if err == nil {
|
||||
return val
|
||||
}
|
||||
val, err = b.readPassphraseAndReopenWallet(call)
|
||||
if err != nil {
|
||||
throwJSException(err.Error())
|
||||
}
|
||||
|
||||
case strings.HasSuffix(err.Error(), scwallet.ErrPairingPasswordNeeded.Error()):
|
||||
// PUK input requested, fetch from the user and call open again
|
||||
if input, err := b.prompter.PromptPassword("Please enter the pairing password: "); err != nil {
|
||||
throwJSException(err.Error())
|
||||
} else {
|
||||
passwd, _ = otto.ToValue(input)
|
||||
}
|
||||
if val, err = call.Otto.Call("jeth.openWallet", nil, wallet, passwd); err != nil {
|
||||
if !strings.HasSuffix(err.Error(), scwallet.ErrPINNeeded.Error()) {
|
||||
throwJSException(err.Error())
|
||||
} else {
|
||||
// PIN input requested, fetch from the user and call open again
|
||||
if input, err := b.prompter.PromptPassword("Please enter current PIN: "); err != nil {
|
||||
throwJSException(err.Error())
|
||||
} else {
|
||||
passwd, _ = otto.ToValue(input)
|
||||
}
|
||||
if val, err = call.Otto.Call("jeth.openWallet", nil, wallet, passwd); err != nil {
|
||||
throwJSException(err.Error())
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
case strings.HasSuffix(err.Error(), scwallet.ErrPINUnblockNeeded.Error()):
|
||||
// PIN unblock requested, fetch PUK and new PIN from the user
|
||||
var pukpin string
|
||||
if input, err := b.prompter.PromptPassword("Please enter current PUK: "); err != nil {
|
||||
throwJSException(err.Error())
|
||||
} else {
|
||||
pukpin = input
|
||||
}
|
||||
if input, err := b.prompter.PromptPassword("Please enter new PIN: "); err != nil {
|
||||
throwJSException(err.Error())
|
||||
} else {
|
||||
pukpin += input
|
||||
}
|
||||
passwd, _ = otto.ToValue(pukpin)
|
||||
if val, err = call.Otto.Call("jeth.openWallet", nil, wallet, passwd); err != nil {
|
||||
throwJSException(err.Error())
|
||||
}
|
||||
|
||||
case strings.HasSuffix(err.Error(), scwallet.ErrPINNeeded.Error()):
|
||||
// PIN input requested, fetch from the user and call open again
|
||||
if input, err := b.prompter.PromptPassword("Please enter current PIN: "); err != nil {
|
||||
throwJSException(err.Error())
|
||||
} else {
|
||||
passwd, _ = otto.ToValue(input)
|
||||
}
|
||||
if val, err = call.Otto.Call("jeth.openWallet", nil, wallet, passwd); err != nil {
|
||||
throwJSException(err.Error())
|
||||
}
|
||||
|
||||
default:
|
||||
// Unknown error occurred, drop to the user
|
||||
throwJSException(err.Error())
|
||||
}
|
||||
return val
|
||||
}
|
||||
|
||||
func (b *bridge) readPassphraseAndReopenWallet(call otto.FunctionCall) (otto.Value, error) {
|
||||
var passwd otto.Value
|
||||
wallet := call.Argument(0)
|
||||
if input, err := b.prompter.PromptPassword("Please enter your passphrase: "); err != nil {
|
||||
throwJSException(err.Error())
|
||||
} else {
|
||||
passwd, _ = otto.ToValue(input)
|
||||
}
|
||||
return call.Otto.Call("jeth.openWallet", nil, wallet, passwd)
|
||||
}
|
||||
|
||||
func (b *bridge) readPinAndReopenWallet(call otto.FunctionCall) (otto.Value, error) {
|
||||
var passwd otto.Value
|
||||
wallet := call.Argument(0)
|
||||
// Trezor PIN matrix input requested, display the matrix to the user and fetch the data
|
||||
fmt.Fprintf(b.printer, "Look at the device for number positions\n\n")
|
||||
fmt.Fprintf(b.printer, "7 | 8 | 9\n")
|
||||
fmt.Fprintf(b.printer, "--+---+--\n")
|
||||
fmt.Fprintf(b.printer, "4 | 5 | 6\n")
|
||||
fmt.Fprintf(b.printer, "--+---+--\n")
|
||||
fmt.Fprintf(b.printer, "1 | 2 | 3\n\n")
|
||||
|
||||
if input, err := b.prompter.PromptPassword("Please enter current PIN: "); err != nil {
|
||||
throwJSException(err.Error())
|
||||
} else {
|
||||
passwd, _ = otto.ToValue(input)
|
||||
}
|
||||
return call.Otto.Call("jeth.openWallet", nil, wallet, passwd)
|
||||
}
|
||||
|
||||
// UnlockAccount is a wrapper around the personal.unlockAccount RPC method that
|
||||
// uses a non-echoing password prompt to acquire the passphrase and executes the
|
||||
// original RPC method (saved in jeth.unlockAccount) with it to actually execute
|
||||
// the RPC call.
|
||||
func (b *bridge) UnlockAccount(call otto.FunctionCall) (response otto.Value) {
|
||||
// Make sure we have an account specified to unlock
|
||||
if !call.Argument(0).IsString() {
|
||||
throwJSException("first argument must be the account to unlock")
|
||||
}
|
||||
account := call.Argument(0)
|
||||
|
||||
// If password is not given or is the null value, prompt the user for it
|
||||
var passwd otto.Value
|
||||
|
||||
if call.Argument(1).IsUndefined() || call.Argument(1).IsNull() {
|
||||
fmt.Fprintf(b.printer, "Unlock account %s\n", account)
|
||||
if input, err := b.prompter.PromptPassword("Passphrase: "); err != nil {
|
||||
throwJSException(err.Error())
|
||||
} else {
|
||||
passwd, _ = otto.ToValue(input)
|
||||
}
|
||||
} else {
|
||||
if !call.Argument(1).IsString() {
|
||||
throwJSException("password must be a string")
|
||||
}
|
||||
passwd = call.Argument(1)
|
||||
}
|
||||
// The third argument is how long the account should remain unlocked.
|
||||
duration := otto.NullValue()
|
||||
if call.Argument(2).IsDefined() && !call.Argument(2).IsNull() {
|
||||
if !call.Argument(2).IsNumber() {
|
||||
throwJSException("unlock duration must be a number")
|
||||
}
|
||||
duration = call.Argument(2)
|
||||
}
|
||||
// Send the request to the backend and return
|
||||
val, err := call.Otto.Call("jeth.unlockAccount", nil, account, passwd, duration)
|
||||
if err != nil {
|
||||
throwJSException(err.Error())
|
||||
}
|
||||
return val
|
||||
}
|
||||
|
||||
// Sign is a wrapper around the personal.sign RPC method that uses a non-echoing password
|
||||
// prompt to acquire the passphrase and executes the original RPC method (saved in
|
||||
// jeth.sign) with it to actually execute the RPC call.
|
||||
func (b *bridge) Sign(call otto.FunctionCall) (response otto.Value) {
|
||||
var (
|
||||
message = call.Argument(0)
|
||||
account = call.Argument(1)
|
||||
passwd = call.Argument(2)
|
||||
)
|
||||
|
||||
if !message.IsString() {
|
||||
throwJSException("first argument must be the message to sign")
|
||||
}
|
||||
if !account.IsString() {
|
||||
throwJSException("second argument must be the account to sign with")
|
||||
}
|
||||
|
||||
// if the password is not given or null ask the user and ensure password is a string
|
||||
if passwd.IsUndefined() || passwd.IsNull() {
|
||||
fmt.Fprintf(b.printer, "Give password for account %s\n", account)
|
||||
if input, err := b.prompter.PromptPassword("Passphrase: "); err != nil {
|
||||
throwJSException(err.Error())
|
||||
} else {
|
||||
passwd, _ = otto.ToValue(input)
|
||||
}
|
||||
}
|
||||
if !passwd.IsString() {
|
||||
throwJSException("third argument must be the password to unlock the account")
|
||||
}
|
||||
|
||||
// Send the request to the backend and return
|
||||
val, err := call.Otto.Call("jeth.sign", nil, message, account, passwd)
|
||||
if err != nil {
|
||||
throwJSException(err.Error())
|
||||
}
|
||||
return val
|
||||
}
|
||||
|
||||
// Sleep will block the console for the specified number of seconds.
|
||||
func (b *bridge) Sleep(call otto.FunctionCall) (response otto.Value) {
|
||||
if call.Argument(0).IsNumber() {
|
||||
sleep, _ := call.Argument(0).ToInteger()
|
||||
time.Sleep(time.Duration(sleep) * time.Second)
|
||||
return otto.TrueValue()
|
||||
}
|
||||
return throwJSException("usage: sleep(<number of seconds>)")
|
||||
}
|
||||
|
||||
// SleepBlocks will block the console for a specified number of new blocks optionally
|
||||
// until the given timeout is reached.
|
||||
func (b *bridge) SleepBlocks(call otto.FunctionCall) (response otto.Value) {
|
||||
var (
|
||||
blocks = int64(0)
|
||||
sleep = int64(9999999999999999) // indefinitely
|
||||
)
|
||||
// Parse the input parameters for the sleep
|
||||
nArgs := len(call.ArgumentList)
|
||||
if nArgs == 0 {
|
||||
throwJSException("usage: sleepBlocks(<n blocks>[, max sleep in seconds])")
|
||||
}
|
||||
if nArgs >= 1 {
|
||||
if call.Argument(0).IsNumber() {
|
||||
blocks, _ = call.Argument(0).ToInteger()
|
||||
} else {
|
||||
throwJSException("expected number as first argument")
|
||||
}
|
||||
}
|
||||
if nArgs >= 2 {
|
||||
if call.Argument(1).IsNumber() {
|
||||
sleep, _ = call.Argument(1).ToInteger()
|
||||
} else {
|
||||
throwJSException("expected number as second argument")
|
||||
}
|
||||
}
|
||||
// go through the console, this will allow web3 to call the appropriate
|
||||
// callbacks if a delayed response or notification is received.
|
||||
blockNumber := func() int64 {
|
||||
result, err := call.Otto.Run("eth.blockNumber")
|
||||
if err != nil {
|
||||
throwJSException(err.Error())
|
||||
}
|
||||
block, err := result.ToInteger()
|
||||
if err != nil {
|
||||
throwJSException(err.Error())
|
||||
}
|
||||
return block
|
||||
}
|
||||
// Poll the current block number until either it or the timeout is reached
|
||||
targetBlockNr := blockNumber() + blocks
|
||||
deadline := time.Now().Add(time.Duration(sleep) * time.Second)
|
||||
|
||||
for time.Now().Before(deadline) {
|
||||
if blockNumber() >= targetBlockNr {
|
||||
return otto.TrueValue()
|
||||
}
|
||||
time.Sleep(time.Second)
|
||||
}
|
||||
return otto.FalseValue()
|
||||
}
|
||||
|
||||
type jsonrpcCall struct {
|
||||
ID int64
|
||||
Method string
|
||||
Params []interface{}
|
||||
}
|
||||
|
||||
// Send implements the web3 provider "send" method.
|
||||
func (b *bridge) Send(call otto.FunctionCall) (response otto.Value) {
|
||||
// Remarshal the request into a Go value.
|
||||
JSON, _ := call.Otto.Object("JSON")
|
||||
reqVal, err := JSON.Call("stringify", call.Argument(0))
|
||||
if err != nil {
|
||||
throwJSException(err.Error())
|
||||
}
|
||||
var (
|
||||
rawReq = reqVal.String()
|
||||
dec = json.NewDecoder(strings.NewReader(rawReq))
|
||||
reqs []jsonrpcCall
|
||||
batch bool
|
||||
)
|
||||
dec.UseNumber() // avoid float64s
|
||||
if rawReq[0] == '[' {
|
||||
batch = true
|
||||
dec.Decode(&reqs)
|
||||
} else {
|
||||
batch = false
|
||||
reqs = make([]jsonrpcCall, 1)
|
||||
dec.Decode(&reqs[0])
|
||||
}
|
||||
|
||||
// Execute the requests.
|
||||
resps, _ := call.Otto.Object("new Array()")
|
||||
for _, req := range reqs {
|
||||
resp, _ := call.Otto.Object(`({"jsonrpc":"2.0"})`)
|
||||
resp.Set("id", req.ID)
|
||||
var result json.RawMessage
|
||||
err = b.client.Call(&result, req.Method, req.Params...)
|
||||
switch err := err.(type) {
|
||||
case nil:
|
||||
if result == nil {
|
||||
// Special case null because it is decoded as an empty
|
||||
// raw message for some reason.
|
||||
resp.Set("result", otto.NullValue())
|
||||
} else {
|
||||
resultVal, err := JSON.Call("parse", string(result))
|
||||
if err != nil {
|
||||
setError(resp, -32603, err.Error())
|
||||
} else {
|
||||
resp.Set("result", resultVal)
|
||||
}
|
||||
}
|
||||
case rpc.Error:
|
||||
setError(resp, err.ErrorCode(), err.Error())
|
||||
default:
|
||||
setError(resp, -32603, err.Error())
|
||||
}
|
||||
resps.Call("push", resp)
|
||||
}
|
||||
|
||||
// Return the responses either to the callback (if supplied)
|
||||
// or directly as the return value.
|
||||
if batch {
|
||||
response = resps.Value()
|
||||
} else {
|
||||
response, _ = resps.Get("0")
|
||||
}
|
||||
if fn := call.Argument(1); fn.Class() == "Function" {
|
||||
fn.Call(otto.NullValue(), otto.NullValue(), response)
|
||||
return otto.UndefinedValue()
|
||||
}
|
||||
return response
|
||||
}
|
||||
|
||||
func setError(resp *otto.Object, code int, msg string) {
|
||||
resp.Set("error", map[string]interface{}{"code": code, "message": msg})
|
||||
}
|
||||
|
||||
// throwJSException panics on an otto.Value. The Otto VM will recover from the
|
||||
// Go panic and throw msg as a JavaScript error.
|
||||
func throwJSException(msg interface{}) otto.Value {
|
||||
val, err := otto.ToValue(msg)
|
||||
if err != nil {
|
||||
log.Error("Failed to serialize JavaScript exception", "exception", msg, "err", err)
|
||||
}
|
||||
panic(val)
|
||||
}
|
450
vendor/github.com/ethereum/go-ethereum/console/console.go
generated
vendored
Normal file
@ -0,0 +1,450 @@
|
||||
// Copyright 2016 The go-ethereum Authors
|
||||
// This file is part of the go-ethereum library.
|
||||
//
|
||||
// The go-ethereum library is free software: you can redistribute it and/or modify
|
||||
// it under the terms of the GNU Lesser General Public License as published by
|
||||
// the Free Software Foundation, either version 3 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// The go-ethereum library is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU Lesser General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU Lesser General Public License
|
||||
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
package console
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"io"
|
||||
"io/ioutil"
|
||||
"os"
|
||||
"os/signal"
|
||||
"path/filepath"
|
||||
"regexp"
|
||||
"sort"
|
||||
"strings"
|
||||
"syscall"
|
||||
|
||||
"github.com/ethereum/go-ethereum/internal/jsre"
|
||||
"github.com/ethereum/go-ethereum/internal/web3ext"
|
||||
"github.com/ethereum/go-ethereum/rpc"
|
||||
"github.com/mattn/go-colorable"
|
||||
"github.com/peterh/liner"
|
||||
"github.com/robertkrimen/otto"
|
||||
)
|
||||
|
||||
var (
|
||||
passwordRegexp = regexp.MustCompile(`personal.[nus]`)
|
||||
onlyWhitespace = regexp.MustCompile(`^\s*$`)
|
||||
exit = regexp.MustCompile(`^\s*exit\s*;*\s*$`)
|
||||
)
|
||||
|
||||
// HistoryFile is the file within the data directory to store input scrollback.
|
||||
const HistoryFile = "history"
|
||||
|
||||
// DefaultPrompt is the default prompt line prefix to use for user input querying.
|
||||
const DefaultPrompt = "> "
|
||||
|
||||
// Config is the collection of configurations to fine tune the behavior of the
|
||||
// JavaScript console.
|
||||
type Config struct {
|
||||
DataDir string // Data directory to store the console history at
|
||||
DocRoot string // Filesystem path from where to load JavaScript files from
|
||||
Client *rpc.Client // RPC client to execute Ethereum requests through
|
||||
Prompt string // Input prompt prefix string (defaults to DefaultPrompt)
|
||||
Prompter UserPrompter // Input prompter to allow interactive user feedback (defaults to TerminalPrompter)
|
||||
Printer io.Writer // Output writer to serialize any display strings to (defaults to os.Stdout)
|
||||
Preload []string // Absolute paths to JavaScript files to preload
|
||||
}
|
||||
|
||||
// Console is a JavaScript interpreted runtime environment. It is a fully fledged
|
||||
// JavaScript console attached to a running node via an external or in-process RPC
|
||||
// client.
|
||||
type Console struct {
|
||||
client *rpc.Client // RPC client to execute Ethereum requests through
|
||||
jsre *jsre.JSRE // JavaScript runtime environment running the interpreter
|
||||
prompt string // Input prompt prefix string
|
||||
prompter UserPrompter // Input prompter to allow interactive user feedback
|
||||
histPath string // Absolute path to the console scrollback history
|
||||
history []string // Scroll history maintained by the console
|
||||
printer io.Writer // Output writer to serialize any display strings to
|
||||
}
|
||||
|
||||
// New initializes a JavaScript interpreted runtime environment and sets defaults
|
||||
// with the config struct.
|
||||
func New(config Config) (*Console, error) {
|
||||
// Handle unset config values gracefully
|
||||
if config.Prompter == nil {
|
||||
config.Prompter = Stdin
|
||||
}
|
||||
if config.Prompt == "" {
|
||||
config.Prompt = DefaultPrompt
|
||||
}
|
||||
if config.Printer == nil {
|
||||
config.Printer = colorable.NewColorableStdout()
|
||||
}
|
||||
// Initialize the console and return
|
||||
console := &Console{
|
||||
client: config.Client,
|
||||
jsre: jsre.New(config.DocRoot, config.Printer),
|
||||
prompt: config.Prompt,
|
||||
prompter: config.Prompter,
|
||||
printer: config.Printer,
|
||||
histPath: filepath.Join(config.DataDir, HistoryFile),
|
||||
}
|
||||
if err := os.MkdirAll(config.DataDir, 0700); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if err := console.init(config.Preload); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return console, nil
|
||||
}
|
||||
|
||||
// init retrieves the available APIs from the remote RPC provider and initializes
|
||||
// the console's JavaScript namespaces based on the exposed modules.
|
||||
func (c *Console) init(preload []string) error {
|
||||
// Initialize the JavaScript <-> Go RPC bridge
|
||||
bridge := newBridge(c.client, c.prompter, c.printer)
|
||||
c.jsre.Set("jeth", struct{}{})
|
||||
|
||||
jethObj, _ := c.jsre.Get("jeth")
|
||||
jethObj.Object().Set("send", bridge.Send)
|
||||
jethObj.Object().Set("sendAsync", bridge.Send)
|
||||
|
||||
consoleObj, _ := c.jsre.Get("console")
|
||||
consoleObj.Object().Set("log", c.consoleOutput)
|
||||
consoleObj.Object().Set("error", c.consoleOutput)
|
||||
|
||||
// Load all the internal utility JavaScript libraries
|
||||
if err := c.jsre.Compile("bignumber.js", jsre.BignumberJs); err != nil {
|
||||
return fmt.Errorf("bignumber.js: %v", err)
|
||||
}
|
||||
if err := c.jsre.Compile("web3.js", jsre.Web3Js); err != nil {
|
||||
return fmt.Errorf("web3.js: %v", err)
|
||||
}
|
||||
if _, err := c.jsre.Run("var Web3 = require('web3');"); err != nil {
|
||||
return fmt.Errorf("web3 require: %v", err)
|
||||
}
|
||||
if _, err := c.jsre.Run("var web3 = new Web3(jeth);"); err != nil {
|
||||
return fmt.Errorf("web3 provider: %v", err)
|
||||
}
|
||||
// Load the supported APIs into the JavaScript runtime environment
|
||||
apis, err := c.client.SupportedModules()
|
||||
if err != nil {
|
||||
return fmt.Errorf("api modules: %v", err)
|
||||
}
|
||||
flatten := "var eth = web3.eth; var personal = web3.personal; "
|
||||
for api := range apis {
|
||||
if api == "web3" {
|
||||
continue // manually mapped or ignore
|
||||
}
|
||||
if file, ok := web3ext.Modules[api]; ok {
|
||||
// Load our extension for the module.
|
||||
if err = c.jsre.Compile(fmt.Sprintf("%s.js", api), file); err != nil {
|
||||
return fmt.Errorf("%s.js: %v", api, err)
|
||||
}
|
||||
flatten += fmt.Sprintf("var %s = web3.%s; ", api, api)
|
||||
} else if obj, err := c.jsre.Run("web3." + api); err == nil && obj.IsObject() {
|
||||
// Enable web3.js built-in extension if available.
|
||||
flatten += fmt.Sprintf("var %s = web3.%s; ", api, api)
|
||||
}
|
||||
}
|
||||
if _, err = c.jsre.Run(flatten); err != nil {
|
||||
return fmt.Errorf("namespace flattening: %v", err)
|
||||
}
|
||||
// Initialize the global name register (disabled for now)
|
||||
//c.jsre.Run(`var GlobalRegistrar = eth.contract(` + registrar.GlobalRegistrarAbi + `); registrar = GlobalRegistrar.at("` + registrar.GlobalRegistrarAddr + `");`)
|
||||
|
||||
// If the console is in interactive mode, instrument password related methods to query the user
|
||||
if c.prompter != nil {
|
||||
// Retrieve the account management object to instrument
|
||||
personal, err := c.jsre.Get("personal")
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
// Override the openWallet, unlockAccount, newAccount and sign methods since
|
||||
// these require user interaction. Assign to these methods in the Console the
|
||||
// original web3 callbacks. These will be called by the jeth.* methods after
|
||||
// they got the password from the user and send the original web3 request to
|
||||
// the backend.
|
||||
if obj := personal.Object(); obj != nil { // make sure the personal api is enabled over the interface
|
||||
if _, err = c.jsre.Run(`jeth.openWallet = personal.openWallet;`); err != nil {
|
||||
return fmt.Errorf("personal.openWallet: %v", err)
|
||||
}
|
||||
if _, err = c.jsre.Run(`jeth.unlockAccount = personal.unlockAccount;`); err != nil {
|
||||
return fmt.Errorf("personal.unlockAccount: %v", err)
|
||||
}
|
||||
if _, err = c.jsre.Run(`jeth.newAccount = personal.newAccount;`); err != nil {
|
||||
return fmt.Errorf("personal.newAccount: %v", err)
|
||||
}
|
||||
if _, err = c.jsre.Run(`jeth.sign = personal.sign;`); err != nil {
|
||||
return fmt.Errorf("personal.sign: %v", err)
|
||||
}
|
||||
obj.Set("openWallet", bridge.OpenWallet)
|
||||
obj.Set("unlockAccount", bridge.UnlockAccount)
|
||||
obj.Set("newAccount", bridge.NewAccount)
|
||||
obj.Set("sign", bridge.Sign)
|
||||
}
|
||||
}
|
||||
// The admin.sleep and admin.sleepBlocks are offered by the console and not by the RPC layer.
|
||||
admin, err := c.jsre.Get("admin")
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if obj := admin.Object(); obj != nil { // make sure the admin api is enabled over the interface
|
||||
obj.Set("sleepBlocks", bridge.SleepBlocks)
|
||||
obj.Set("sleep", bridge.Sleep)
|
||||
obj.Set("clearHistory", c.clearHistory)
|
||||
}
|
||||
// Preload any JavaScript files before starting the console
|
||||
for _, path := range preload {
|
||||
if err := c.jsre.Exec(path); err != nil {
|
||||
failure := err.Error()
|
||||
if ottoErr, ok := err.(*otto.Error); ok {
|
||||
failure = ottoErr.String()
|
||||
}
|
||||
return fmt.Errorf("%s: %v", path, failure)
|
||||
}
|
||||
}
|
||||
// Configure the console's input prompter for scrollback and tab completion
|
||||
if c.prompter != nil {
|
||||
if content, err := ioutil.ReadFile(c.histPath); err != nil {
|
||||
c.prompter.SetHistory(nil)
|
||||
} else {
|
||||
c.history = strings.Split(string(content), "\n")
|
||||
c.prompter.SetHistory(c.history)
|
||||
}
|
||||
c.prompter.SetWordCompleter(c.AutoCompleteInput)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (c *Console) clearHistory() {
|
||||
c.history = nil
|
||||
c.prompter.ClearHistory()
|
||||
if err := os.Remove(c.histPath); err != nil {
|
||||
fmt.Fprintln(c.printer, "can't delete history file:", err)
|
||||
} else {
|
||||
fmt.Fprintln(c.printer, "history file deleted")
|
||||
}
|
||||
}
|
||||
|
||||
// consoleOutput is an override for the console.log and console.error methods to
|
||||
// stream the output into the configured output stream instead of stdout.
|
||||
func (c *Console) consoleOutput(call otto.FunctionCall) otto.Value {
|
||||
var output []string
|
||||
for _, argument := range call.ArgumentList {
|
||||
output = append(output, fmt.Sprintf("%v", argument))
|
||||
}
|
||||
fmt.Fprintln(c.printer, strings.Join(output, " "))
|
||||
return otto.Value{}
|
||||
}
|
||||
|
||||
// AutoCompleteInput is a pre-assembled word completer to be used by the user
|
||||
// input prompter to provide hints to the user about the methods available.
|
||||
func (c *Console) AutoCompleteInput(line string, pos int) (string, []string, string) {
|
||||
// No completions can be provided for empty inputs
|
||||
if len(line) == 0 || pos == 0 {
|
||||
return "", nil, ""
|
||||
}
|
||||
// Chunk the data down to the part relevant for autocompletion
|
||||
// E.g. in case of nested lines eth.getBalance(eth.coinb<tab><tab>
|
||||
start := pos - 1
|
||||
for ; start > 0; start-- {
|
||||
// Skip all methods and namespaces (i.e. including the dot)
|
||||
if line[start] == '.' || (line[start] >= 'a' && line[start] <= 'z') || (line[start] >= 'A' && line[start] <= 'Z') {
|
||||
continue
|
||||
}
|
||||
// Handle web3 in a special way (i.e. other numbers aren't auto completed)
|
||||
if start >= 3 && line[start-3:start] == "web3" {
|
||||
start -= 3
|
||||
continue
|
||||
}
|
||||
// We've hit an unexpected character, autocomplete from here
|
||||
start++
|
||||
break
|
||||
}
|
||||
return line[:start], c.jsre.CompleteKeywords(line[start:pos]), line[pos:]
|
||||
}
|
||||
|
||||
// Welcome shows a summary of the current Geth instance and some metadata about the
|
||||
// console's available modules.
|
||||
func (c *Console) Welcome() {
|
||||
message := "Welcome to the Geth JavaScript console!\n\n"
|
||||
|
||||
// Print some generic Geth metadata
|
||||
if res, err := c.jsre.Run(`
|
||||
var message = "instance: " + web3.version.node + "\n";
|
||||
try {
|
||||
message += "coinbase: " + eth.coinbase + "\n";
|
||||
} catch (err) {}
|
||||
message += "at block: " + eth.blockNumber + " (" + new Date(1000 * eth.getBlock(eth.blockNumber).timestamp) + ")\n";
|
||||
try {
|
||||
message += " datadir: " + admin.datadir + "\n";
|
||||
} catch (err) {}
|
||||
message
|
||||
`); err == nil {
|
||||
message += res.String()
|
||||
}
|
||||
// List all the supported modules for the user to call
|
||||
if apis, err := c.client.SupportedModules(); err == nil {
|
||||
modules := make([]string, 0, len(apis))
|
||||
for api, version := range apis {
|
||||
modules = append(modules, fmt.Sprintf("%s:%s", api, version))
|
||||
}
|
||||
sort.Strings(modules)
|
||||
message += " modules: " + strings.Join(modules, " ") + "\n"
|
||||
}
|
||||
fmt.Fprintln(c.printer, message)
|
||||
}
|
||||
|
||||
// Evaluate executes code and pretty prints the result to the specified output
|
||||
// stream.
|
||||
func (c *Console) Evaluate(statement string) error {
|
||||
defer func() {
|
||||
if r := recover(); r != nil {
|
||||
fmt.Fprintf(c.printer, "[native] error: %v\n", r)
|
||||
}
|
||||
}()
|
||||
return c.jsre.Evaluate(statement, c.printer)
|
||||
}
|
||||
|
||||
// Interactive starts an interactive user session, where input is prompted from
|
||||
// the configured user prompter.
|
||||
func (c *Console) Interactive() {
|
||||
var (
|
||||
prompt = c.prompt // Current prompt line (used for multi-line inputs)
|
||||
indents = 0 // Current number of input indents (used for multi-line inputs)
|
||||
input = "" // Current user input
|
||||
scheduler = make(chan string) // Channel to send the next prompt on and receive the input
|
||||
)
|
||||
// Start a goroutine to listen for prompt requests and send back inputs
|
||||
go func() {
|
||||
for {
|
||||
// Read the next user input
|
||||
line, err := c.prompter.PromptInput(<-scheduler)
|
||||
if err != nil {
|
||||
// In case of an error, either clear the prompt or fail
|
||||
if err == liner.ErrPromptAborted { // ctrl-C
|
||||
prompt, indents, input = c.prompt, 0, ""
|
||||
scheduler <- ""
|
||||
continue
|
||||
}
|
||||
close(scheduler)
|
||||
return
|
||||
}
|
||||
// User input retrieved, send for interpretation and loop
|
||||
scheduler <- line
|
||||
}
|
||||
}()
|
||||
// Monitor Ctrl-C too in case the input is empty and we need to bail
|
||||
abort := make(chan os.Signal, 1)
|
||||
signal.Notify(abort, syscall.SIGINT, syscall.SIGTERM)
|
||||
|
||||
// Start sending prompts to the user and reading back inputs
|
||||
for {
|
||||
// Send the next prompt, triggering an input read and process the result
|
||||
scheduler <- prompt
|
||||
select {
|
||||
case <-abort:
|
||||
// User forcefully quit the console
|
||||
fmt.Fprintln(c.printer, "caught interrupt, exiting")
|
||||
return
|
||||
|
||||
case line, ok := <-scheduler:
|
||||
// User input was returned by the prompter, handle special cases
|
||||
if !ok || (indents <= 0 && exit.MatchString(line)) {
|
||||
return
|
||||
}
|
||||
if onlyWhitespace.MatchString(line) {
|
||||
continue
|
||||
}
|
||||
// Append the line to the input and check for multi-line interpretation
|
||||
input += line + "\n"
|
||||
|
||||
indents = countIndents(input)
|
||||
if indents <= 0 {
|
||||
prompt = c.prompt
|
||||
} else {
|
||||
prompt = strings.Repeat(".", indents*3) + " "
|
||||
}
|
||||
// If all the needed lines are present, save the command and run
|
||||
if indents <= 0 {
|
||||
if len(input) > 0 && input[0] != ' ' && !passwordRegexp.MatchString(input) {
|
||||
if command := strings.TrimSpace(input); len(c.history) == 0 || command != c.history[len(c.history)-1] {
|
||||
c.history = append(c.history, command)
|
||||
if c.prompter != nil {
|
||||
c.prompter.AppendHistory(command)
|
||||
}
|
||||
}
|
||||
}
|
||||
c.Evaluate(input)
|
||||
input = ""
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// countIndents returns the number of indentations for the given input.
|
||||
// In case of invalid input such as var a = } the result can be negative.
|
||||
func countIndents(input string) int {
|
||||
var (
|
||||
indents = 0
|
||||
inString = false
|
||||
strOpenChar = ' ' // keep track of the string open char to allow var str = "I'm ....";
|
||||
charEscaped = false // keep track if the previous char was the '\' char, allow var str = "abc\"def";
|
||||
)
|
||||
|
||||
for _, c := range input {
|
||||
switch c {
|
||||
case '\\':
|
||||
// indicate next char as escaped when in string and previous char isn't escaping this backslash
|
||||
if !charEscaped && inString {
|
||||
charEscaped = true
|
||||
}
|
||||
case '\'', '"':
|
||||
if inString && !charEscaped && strOpenChar == c { // end string
|
||||
inString = false
|
||||
} else if !inString && !charEscaped { // begin string
|
||||
inString = true
|
||||
strOpenChar = c
|
||||
}
|
||||
charEscaped = false
|
||||
case '{', '(':
|
||||
if !inString { // ignore brackets when in string, allow var str = "a{"; without indenting
|
||||
indents++
|
||||
}
|
||||
charEscaped = false
|
||||
case '}', ')':
|
||||
if !inString {
|
||||
indents--
|
||||
}
|
||||
charEscaped = false
|
||||
default:
|
||||
charEscaped = false
|
||||
}
|
||||
}
|
||||
|
||||
return indents
|
||||
}
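// Illustrative sketch (not part of the vendored go-ethereum source): expected
// results of countIndents on a few inputs, assuming it is invoked from within
// this package (fmt is already imported by console.go).
func exampleCountIndents() {
	fmt.Println(countIndents("function f() {")) // 1: the '(' and ')' cancel, the '{' is unmatched
	fmt.Println(countIndents(`var s = "a{";`))  // 0: the brace inside the string literal is ignored
	fmt.Println(countIndents("}"))              // -1: invalid input can drive the count negative
}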
|
||||
|
||||
// Execute runs the JavaScript file specified as the argument.
|
||||
func (c *Console) Execute(path string) error {
|
||||
return c.jsre.Exec(path)
|
||||
}
|
||||
|
||||
// Stop cleans up the console and terminates the runtime environment.
|
||||
func (c *Console) Stop(graceful bool) error {
|
||||
if err := ioutil.WriteFile(c.histPath, []byte(strings.Join(c.history, "\n")), 0600); err != nil {
|
||||
return err
|
||||
}
|
||||
if err := os.Chmod(c.histPath, 0600); err != nil { // Force 0600, even if it was different previously
|
||||
return err
|
||||
}
|
||||
c.jsre.Stop(graceful)
|
||||
return nil
|
||||
}
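// Hedged usage sketch (not part of the vendored go-ethereum source): a typical
// lifecycle of the Console type above, assuming the *Console has already been
// constructed elsewhere in this package.
func runConsole(c *Console) error {
	c.Welcome()          // print instance metadata and the available modules
	c.Interactive()      // prompt the user until exit or Ctrl-C
	return c.Stop(false) // persist history and shut the JS runtime down
}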
|
1831 vendor/github.com/ethereum/go-ethereum/core/blockchain.go generated vendored Normal file
File diff suppressed because it is too large
166 vendor/github.com/ethereum/go-ethereum/core/blockchain_insert.go generated vendored Normal file
@ -0,0 +1,166 @@
|
||||
// Copyright 2018 The go-ethereum Authors
|
||||
// This file is part of the go-ethereum library.
|
||||
//
|
||||
// The go-ethereum library is free software: you can redistribute it and/or modify
|
||||
// it under the terms of the GNU Lesser General Public License as published by
|
||||
// the Free Software Foundation, either version 3 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// The go-ethereum library is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU Lesser General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU Lesser General Public License
|
||||
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
package core
|
||||
|
||||
import (
|
||||
"time"
|
||||
|
||||
"github.com/ethereum/go-ethereum/common"
|
||||
"github.com/ethereum/go-ethereum/common/mclock"
|
||||
"github.com/ethereum/go-ethereum/core/types"
|
||||
"github.com/ethereum/go-ethereum/log"
|
||||
)
|
||||
|
||||
// insertStats tracks and reports on block insertion.
|
||||
type insertStats struct {
|
||||
queued, processed, ignored int
|
||||
usedGas uint64
|
||||
lastIndex int
|
||||
startTime mclock.AbsTime
|
||||
}
|
||||
|
||||
// statsReportLimit is the time limit during import and export after which we
|
||||
// always print out progress. This avoids the user wondering what's going on.
|
||||
const statsReportLimit = 8 * time.Second
|
||||
|
||||
// report prints statistics if some number of blocks have been processed
|
||||
// or more than a few seconds have passed since the last message.
|
||||
func (st *insertStats) report(chain []*types.Block, index int, dirty common.StorageSize) {
|
||||
// Fetch the timings for the batch
|
||||
var (
|
||||
now = mclock.Now()
|
||||
elapsed = time.Duration(now) - time.Duration(st.startTime)
|
||||
)
|
||||
// If we're at the last block of the batch or report period reached, log
|
||||
if index == len(chain)-1 || elapsed >= statsReportLimit {
|
||||
// Count the number of transactions in this segment
|
||||
var txs int
|
||||
for _, block := range chain[st.lastIndex : index+1] {
|
||||
txs += len(block.Transactions())
|
||||
}
|
||||
end := chain[index]
|
||||
|
||||
// Assemble the log context and send it to the logger
|
||||
context := []interface{}{
|
||||
"blocks", st.processed, "txs", txs, "mgas", float64(st.usedGas) / 1000000,
|
||||
"elapsed", common.PrettyDuration(elapsed), "mgasps", float64(st.usedGas) * 1000 / float64(elapsed),
|
||||
"number", end.Number(), "hash", end.Hash(),
|
||||
}
|
||||
if timestamp := time.Unix(int64(end.Time()), 0); time.Since(timestamp) > time.Minute {
|
||||
context = append(context, []interface{}{"age", common.PrettyAge(timestamp)}...)
|
||||
}
|
||||
context = append(context, []interface{}{"dirty", dirty}...)
|
||||
|
||||
if st.queued > 0 {
|
||||
context = append(context, []interface{}{"queued", st.queued}...)
|
||||
}
|
||||
if st.ignored > 0 {
|
||||
context = append(context, []interface{}{"ignored", st.ignored}...)
|
||||
}
|
||||
log.Info("Imported new chain segment", context...)
|
||||
|
||||
// Bump the stats reported to the next section
|
||||
*st = insertStats{startTime: now, lastIndex: index + 1}
|
||||
}
|
||||
}
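// Worked example (not part of the vendored go-ethereum source): with
// usedGas = 8,000,000 and elapsed = 2s (2e9 ns), the figures logged above are
// mgas = 8e6/1e6 = 8 and mgasps = 8e6*1000/2e9 = 4, i.e. 4 million gas per second.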
|
||||
|
||||
// insertIterator is a helper to assist during chain import.
|
||||
type insertIterator struct {
|
||||
chain types.Blocks // Chain of blocks being iterated over
|
||||
|
||||
results <-chan error // Verification result sink from the consensus engine
|
||||
errors []error // Header verification errors for the blocks
|
||||
|
||||
index int // Current offset of the iterator
|
||||
validator Validator // Validator to run if verification succeeds
|
||||
}
|
||||
|
||||
// newInsertIterator creates a new iterator based on the given blocks, which are
|
||||
// assumed to be a contiguous chain.
|
||||
func newInsertIterator(chain types.Blocks, results <-chan error, validator Validator) *insertIterator {
|
||||
return &insertIterator{
|
||||
chain: chain,
|
||||
results: results,
|
||||
errors: make([]error, 0, len(chain)),
|
||||
index: -1,
|
||||
validator: validator,
|
||||
}
|
||||
}
|
||||
|
||||
// next returns the next block in the iterator, along with any potential validation
|
||||
// error for that block. When the end is reached, it will return (nil, nil).
|
||||
func (it *insertIterator) next() (*types.Block, error) {
|
||||
// If we reached the end of the chain, abort
|
||||
if it.index+1 >= len(it.chain) {
|
||||
it.index = len(it.chain)
|
||||
return nil, nil
|
||||
}
|
||||
// Advance the iterator and wait for verification result if not yet done
|
||||
it.index++
|
||||
if len(it.errors) <= it.index {
|
||||
it.errors = append(it.errors, <-it.results)
|
||||
}
|
||||
if it.errors[it.index] != nil {
|
||||
return it.chain[it.index], it.errors[it.index]
|
||||
}
|
||||
// Block header valid, run body validation and return
|
||||
return it.chain[it.index], it.validator.ValidateBody(it.chain[it.index])
|
||||
}
|
||||
|
||||
// peek returns the next block in the iterator, along with any potential validation
|
||||
// error for that block, but does **not** advance the iterator.
|
||||
//
|
||||
// Both header and body validation errors (nil too) are cached into the iterator
|
||||
// to avoid duplicating work on the following next() call.
|
||||
func (it *insertIterator) peek() (*types.Block, error) {
|
||||
// If we reached the end of the chain, abort
|
||||
if it.index+1 >= len(it.chain) {
|
||||
return nil, nil
|
||||
}
|
||||
// Wait for verification result if not yet done
|
||||
if len(it.errors) <= it.index+1 {
|
||||
it.errors = append(it.errors, <-it.results)
|
||||
}
|
||||
if it.errors[it.index+1] != nil {
|
||||
return it.chain[it.index+1], it.errors[it.index+1]
|
||||
}
|
||||
// Block header valid, ignore body validation since we don't have a parent anyway
|
||||
return it.chain[it.index+1], nil
|
||||
}
|
||||
|
||||
// previous returns the previous header that was being processed, or nil.
|
||||
func (it *insertIterator) previous() *types.Header {
|
||||
if it.index < 1 {
|
||||
return nil
|
||||
}
|
||||
return it.chain[it.index-1].Header()
|
||||
}
|
||||
|
||||
// first returns the first block in the iterator.
|
||||
func (it *insertIterator) first() *types.Block {
|
||||
return it.chain[0]
|
||||
}
|
||||
|
||||
// remaining returns the number of remaining blocks.
|
||||
func (it *insertIterator) remaining() int {
|
||||
return len(it.chain) - it.index
|
||||
}
|
||||
|
||||
// processed returns the number of processed blocks.
|
||||
func (it *insertIterator) processed() int {
|
||||
return it.index + 1
|
||||
}
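// Hedged usage sketch (not part of the vendored go-ethereum source): draining
// the iterator, assuming chain, results and validator were prepared by the
// caller as described for newInsertIterator.
func drainIterator(it *insertIterator) error {
	for block, err := it.next(); block != nil; block, err = it.next() {
		if err != nil {
			return err // header or body validation failed for this block
		}
		// ... import the verified block here ...
	}
	return nil
}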
|
1893 vendor/github.com/ethereum/go-ethereum/core/blockchain_test.go generated vendored Normal file
File diff suppressed because it is too large
283 vendor/github.com/ethereum/go-ethereum/core/chain_makers.go generated vendored Normal file
@ -0,0 +1,283 @@
|
||||
// Copyright 2015 The go-ethereum Authors
|
||||
// This file is part of the go-ethereum library.
|
||||
//
|
||||
// The go-ethereum library is free software: you can redistribute it and/or modify
|
||||
// it under the terms of the GNU Lesser General Public License as published by
|
||||
// the Free Software Foundation, either version 3 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// The go-ethereum library is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU Lesser General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU Lesser General Public License
|
||||
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
package core
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"math/big"
|
||||
|
||||
"github.com/ethereum/go-ethereum/common"
|
||||
"github.com/ethereum/go-ethereum/consensus"
|
||||
"github.com/ethereum/go-ethereum/consensus/misc"
|
||||
"github.com/ethereum/go-ethereum/core/state"
|
||||
"github.com/ethereum/go-ethereum/core/types"
|
||||
"github.com/ethereum/go-ethereum/core/vm"
|
||||
"github.com/ethereum/go-ethereum/ethdb"
|
||||
"github.com/ethereum/go-ethereum/params"
|
||||
)
|
||||
|
||||
// BlockGen creates blocks for testing.
|
||||
// See GenerateChain for a detailed explanation.
|
||||
type BlockGen struct {
|
||||
i int
|
||||
parent *types.Block
|
||||
chain []*types.Block
|
||||
header *types.Header
|
||||
statedb *state.StateDB
|
||||
|
||||
gasPool *GasPool
|
||||
txs []*types.Transaction
|
||||
receipts []*types.Receipt
|
||||
uncles []*types.Header
|
||||
|
||||
config *params.ChainConfig
|
||||
engine consensus.Engine
|
||||
}
|
||||
|
||||
// SetCoinbase sets the coinbase of the generated block.
|
||||
// It can be called at most once.
|
||||
func (b *BlockGen) SetCoinbase(addr common.Address) {
|
||||
if b.gasPool != nil {
|
||||
if len(b.txs) > 0 {
|
||||
panic("coinbase must be set before adding transactions")
|
||||
}
|
||||
panic("coinbase can only be set once")
|
||||
}
|
||||
b.header.Coinbase = addr
|
||||
b.gasPool = new(GasPool).AddGas(b.header.GasLimit)
|
||||
}
|
||||
|
||||
// SetExtra sets the extra data field of the generated block.
|
||||
func (b *BlockGen) SetExtra(data []byte) {
|
||||
b.header.Extra = data
|
||||
}
|
||||
|
||||
// SetNonce sets the nonce field of the generated block.
|
||||
func (b *BlockGen) SetNonce(nonce types.BlockNonce) {
|
||||
b.header.Nonce = nonce
|
||||
}
|
||||
|
||||
// AddTx adds a transaction to the generated block. If no coinbase has
|
||||
// been set, the block's coinbase is set to the zero address.
|
||||
//
|
||||
// AddTx panics if the transaction cannot be executed. In addition to
|
||||
// the protocol-imposed limitations (gas limit, etc.), there are some
|
||||
// further limitations on the content of transactions that can be
|
||||
// added. Notably, contract code relying on the BLOCKHASH instruction
|
||||
// will panic during execution.
|
||||
func (b *BlockGen) AddTx(tx *types.Transaction) {
|
||||
b.AddTxWithChain(nil, tx)
|
||||
}
|
||||
|
||||
// AddTxWithChain adds a transaction to the generated block. If no coinbase has
|
||||
// been set, the block's coinbase is set to the zero address.
|
||||
//
|
||||
// AddTxWithChain panics if the transaction cannot be executed. In addition to
|
||||
// the protocol-imposed limitations (gas limit, etc.), there are some
|
||||
// further limitations on the content of transactions that can be
|
||||
// added. If contract code relies on the BLOCKHASH instruction,
|
||||
// the block in chain will be returned.
|
||||
func (b *BlockGen) AddTxWithChain(bc *BlockChain, tx *types.Transaction) {
|
||||
if b.gasPool == nil {
|
||||
b.SetCoinbase(common.Address{})
|
||||
}
|
||||
b.statedb.Prepare(tx.Hash(), common.Hash{}, len(b.txs))
|
||||
receipt, _, err := ApplyTransaction(b.config, bc, &b.header.Coinbase, b.gasPool, b.statedb, b.header, tx, &b.header.GasUsed, vm.Config{})
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
b.txs = append(b.txs, tx)
|
||||
b.receipts = append(b.receipts, receipt)
|
||||
}
|
||||
|
||||
// Number returns the block number of the block being generated.
|
||||
func (b *BlockGen) Number() *big.Int {
|
||||
return new(big.Int).Set(b.header.Number)
|
||||
}
|
||||
|
||||
// AddUncheckedReceipt forcefully adds a receipt to the block without a
|
||||
// backing transaction.
|
||||
//
|
||||
// AddUncheckedReceipt will cause consensus failures when used during real
|
||||
// chain processing. This is best used in conjunction with raw block insertion.
|
||||
func (b *BlockGen) AddUncheckedReceipt(receipt *types.Receipt) {
|
||||
b.receipts = append(b.receipts, receipt)
|
||||
}
|
||||
|
||||
// TxNonce returns the next valid transaction nonce for the
|
||||
// account at addr. It panics if the account does not exist.
|
||||
func (b *BlockGen) TxNonce(addr common.Address) uint64 {
|
||||
if !b.statedb.Exist(addr) {
|
||||
panic("account does not exist")
|
||||
}
|
||||
return b.statedb.GetNonce(addr)
|
||||
}
|
||||
|
||||
// AddUncle adds an uncle header to the generated block.
|
||||
func (b *BlockGen) AddUncle(h *types.Header) {
|
||||
b.uncles = append(b.uncles, h)
|
||||
}
|
||||
|
||||
// PrevBlock returns a previously generated block by number. It panics if
|
||||
// index is greater than or equal to the number of the block being generated.
|
||||
// For index -1, PrevBlock returns the parent block given to GenerateChain.
|
||||
func (b *BlockGen) PrevBlock(index int) *types.Block {
|
||||
if index >= b.i {
|
||||
panic(fmt.Errorf("block index %d out of range (%d,%d)", index, -1, b.i))
|
||||
}
|
||||
if index == -1 {
|
||||
return b.parent
|
||||
}
|
||||
return b.chain[index]
|
||||
}
|
||||
|
||||
// OffsetTime modifies the time instance of a block, implicitly changing its
|
||||
// associated difficulty. It's useful to test scenarios where forking is not
|
||||
// tied to chain length directly.
|
||||
func (b *BlockGen) OffsetTime(seconds int64) {
|
||||
b.header.Time += uint64(seconds)
|
||||
if b.header.Time <= b.parent.Header().Time {
|
||||
panic("block time out of range")
|
||||
}
|
||||
chainreader := &fakeChainReader{config: b.config}
|
||||
b.header.Difficulty = b.engine.CalcDifficulty(chainreader, b.header.Time, b.parent.Header())
|
||||
}
|
||||
|
||||
// GenerateChain creates a chain of n blocks. The first block's
|
||||
// parent will be the provided parent. db is used to store
|
||||
// intermediate states and should contain the parent's state trie.
|
||||
//
|
||||
// The generator function is called with a new block generator for
|
||||
// every block. Any transactions and uncles added to the generator
|
||||
// become part of the block. If gen is nil, the blocks will be empty
|
||||
// and their coinbase will be the zero address.
|
||||
//
|
||||
// Blocks created by GenerateChain do not contain valid proof of work
|
||||
// values. Inserting them into BlockChain requires use of FakePow or
|
||||
// a similar non-validating proof of work implementation.
|
||||
func GenerateChain(config *params.ChainConfig, parent *types.Block, engine consensus.Engine, db ethdb.Database, n int, gen func(int, *BlockGen)) ([]*types.Block, []types.Receipts) {
|
||||
if config == nil {
|
||||
config = params.TestChainConfig
|
||||
}
|
||||
blocks, receipts := make(types.Blocks, n), make([]types.Receipts, n)
|
||||
chainreader := &fakeChainReader{config: config}
|
||||
genblock := func(i int, parent *types.Block, statedb *state.StateDB) (*types.Block, types.Receipts) {
|
||||
b := &BlockGen{i: i, chain: blocks, parent: parent, statedb: statedb, config: config, engine: engine}
|
||||
b.header = makeHeader(chainreader, parent, statedb, b.engine)
|
||||
|
||||
// Mutate the state and block according to any hard-fork specs
|
||||
if daoBlock := config.DAOForkBlock; daoBlock != nil {
|
||||
limit := new(big.Int).Add(daoBlock, params.DAOForkExtraRange)
|
||||
if b.header.Number.Cmp(daoBlock) >= 0 && b.header.Number.Cmp(limit) < 0 {
|
||||
if config.DAOForkSupport {
|
||||
b.header.Extra = common.CopyBytes(params.DAOForkBlockExtra)
|
||||
}
|
||||
}
|
||||
}
|
||||
if config.DAOForkSupport && config.DAOForkBlock != nil && config.DAOForkBlock.Cmp(b.header.Number) == 0 {
|
||||
misc.ApplyDAOHardFork(statedb)
|
||||
}
|
||||
// Execute any user modifications to the block
|
||||
if gen != nil {
|
||||
gen(i, b)
|
||||
}
|
||||
if b.engine != nil {
|
||||
// Finalize and seal the block
|
||||
block, _ := b.engine.Finalize(chainreader, b.header, statedb, b.txs, b.uncles, b.receipts)
|
||||
|
||||
// Write state changes to db
|
||||
root, err := statedb.Commit(config.IsEIP158(b.header.Number))
|
||||
if err != nil {
|
||||
panic(fmt.Sprintf("state write error: %v", err))
|
||||
}
|
||||
if err := statedb.Database().TrieDB().Commit(root, false); err != nil {
|
||||
panic(fmt.Sprintf("trie write error: %v", err))
|
||||
}
|
||||
return block, b.receipts
|
||||
}
|
||||
return nil, nil
|
||||
}
|
||||
for i := 0; i < n; i++ {
|
||||
statedb, err := state.New(parent.Root(), state.NewDatabase(db))
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
block, receipt := genblock(i, parent, statedb)
|
||||
blocks[i] = block
|
||||
receipts[i] = receipt
|
||||
parent = block
|
||||
}
|
||||
return blocks, receipts
|
||||
}
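// Hedged usage sketch (not part of the vendored go-ethereum source): the gen
// callback is where transactions are attached. It assumes the consensus/ethash
// and crypto/ecdsa packages are imported, and that addr is pre-funded in the
// genesis alloc and controlled by key.
func exampleGenerateChain(db ethdb.Database, genesis *types.Block, key *ecdsa.PrivateKey, addr common.Address) ([]*types.Block, []types.Receipts) {
	return GenerateChain(params.TestChainConfig, genesis, ethash.NewFaker(), db, 2, func(i int, b *BlockGen) {
		tx, _ := types.SignTx(types.NewTransaction(b.TxNonce(addr), common.Address{}, big.NewInt(1), 21000, big.NewInt(1), nil), types.HomesteadSigner{}, key)
		b.AddTx(tx)
	})
}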
|
||||
|
||||
func makeHeader(chain consensus.ChainReader, parent *types.Block, state *state.StateDB, engine consensus.Engine) *types.Header {
|
||||
var time uint64
|
||||
if parent.Time() == 0 {
|
||||
time = 10
|
||||
} else {
|
||||
time = parent.Time() + 10 // block time is fixed at 10 seconds
|
||||
}
|
||||
|
||||
return &types.Header{
|
||||
Root: state.IntermediateRoot(chain.Config().IsEIP158(parent.Number())),
|
||||
ParentHash: parent.Hash(),
|
||||
Coinbase: parent.Coinbase(),
|
||||
Difficulty: engine.CalcDifficulty(chain, time, &types.Header{
|
||||
Number: parent.Number(),
|
||||
Time: time - 10,
|
||||
Difficulty: parent.Difficulty(),
|
||||
UncleHash: parent.UncleHash(),
|
||||
}),
|
||||
GasLimit: CalcGasLimit(parent, parent.GasLimit(), parent.GasLimit()),
|
||||
Number: new(big.Int).Add(parent.Number(), common.Big1),
|
||||
Time: time,
|
||||
}
|
||||
}
|
||||
|
||||
// makeHeaderChain creates a deterministic chain of headers rooted at parent.
|
||||
func makeHeaderChain(parent *types.Header, n int, engine consensus.Engine, db ethdb.Database, seed int) []*types.Header {
|
||||
blocks := makeBlockChain(types.NewBlockWithHeader(parent), n, engine, db, seed)
|
||||
headers := make([]*types.Header, len(blocks))
|
||||
for i, block := range blocks {
|
||||
headers[i] = block.Header()
|
||||
}
|
||||
return headers
|
||||
}
|
||||
|
||||
// makeBlockChain creates a deterministic chain of blocks rooted at parent.
|
||||
func makeBlockChain(parent *types.Block, n int, engine consensus.Engine, db ethdb.Database, seed int) []*types.Block {
|
||||
blocks, _ := GenerateChain(params.TestChainConfig, parent, engine, db, n, func(i int, b *BlockGen) {
|
||||
b.SetCoinbase(common.Address{0: byte(seed), 19: byte(i)})
|
||||
})
|
||||
return blocks
|
||||
}
|
||||
|
||||
type fakeChainReader struct {
|
||||
config *params.ChainConfig
|
||||
genesis *types.Block
|
||||
}
|
||||
|
||||
// Config returns the chain configuration.
|
||||
func (cr *fakeChainReader) Config() *params.ChainConfig {
|
||||
return cr.config
|
||||
}
|
||||
|
||||
func (cr *fakeChainReader) CurrentHeader() *types.Header { return nil }
|
||||
func (cr *fakeChainReader) GetHeaderByNumber(number uint64) *types.Header { return nil }
|
||||
func (cr *fakeChainReader) GetHeaderByHash(hash common.Hash) *types.Header { return nil }
|
||||
func (cr *fakeChainReader) GetHeader(hash common.Hash, number uint64) *types.Header { return nil }
|
||||
func (cr *fakeChainReader) GetBlock(hash common.Hash, number uint64) *types.Block { return nil }
|
97 vendor/github.com/ethereum/go-ethereum/core/evm.go generated vendored Normal file
@ -0,0 +1,97 @@
|
||||
// Copyright 2016 The go-ethereum Authors
|
||||
// This file is part of the go-ethereum library.
|
||||
//
|
||||
// The go-ethereum library is free software: you can redistribute it and/or modify
|
||||
// it under the terms of the GNU Lesser General Public License as published by
|
||||
// the Free Software Foundation, either version 3 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// The go-ethereum library is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU Lesser General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU Lesser General Public License
|
||||
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
package core
|
||||
|
||||
import (
|
||||
"math/big"
|
||||
|
||||
"github.com/ethereum/go-ethereum/common"
|
||||
"github.com/ethereum/go-ethereum/consensus"
|
||||
"github.com/ethereum/go-ethereum/core/types"
|
||||
"github.com/ethereum/go-ethereum/core/vm"
|
||||
)
|
||||
|
||||
// ChainContext supports retrieving headers and consensus parameters from the
|
||||
// current blockchain to be used during transaction processing.
|
||||
type ChainContext interface {
|
||||
// Engine retrieves the chain's consensus engine.
|
||||
Engine() consensus.Engine
|
||||
|
||||
// GetHeader returns the header corresponding to the given hash and number.
|
||||
GetHeader(common.Hash, uint64) *types.Header
|
||||
}
|
||||
|
||||
// NewEVMContext creates a new context for use in the EVM.
|
||||
func NewEVMContext(msg Message, header *types.Header, chain ChainContext, author *common.Address) vm.Context {
|
||||
// If we don't have an explicit author (i.e. not mining), extract from the header
|
||||
var beneficiary common.Address
|
||||
if author == nil {
|
||||
beneficiary, _ = chain.Engine().Author(header) // Ignore error, we're past header validation
|
||||
} else {
|
||||
beneficiary = *author
|
||||
}
|
||||
return vm.Context{
|
||||
CanTransfer: CanTransfer,
|
||||
Transfer: Transfer,
|
||||
GetHash: GetHashFn(header, chain),
|
||||
Origin: msg.From(),
|
||||
Coinbase: beneficiary,
|
||||
BlockNumber: new(big.Int).Set(header.Number),
|
||||
Time: new(big.Int).SetUint64(header.Time),
|
||||
Difficulty: new(big.Int).Set(header.Difficulty),
|
||||
GasLimit: header.GasLimit,
|
||||
GasPrice: new(big.Int).Set(msg.GasPrice()),
|
||||
}
|
||||
}
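// Hedged usage sketch (not part of the vendored go-ethereum source): the
// context built above is what an EVM instance is constructed from during state
// processing; msg, header, bc and statedb are assumed to be supplied by the
// caller, and the params package is assumed to be imported.
func exampleNewEVM(msg Message, header *types.Header, bc ChainContext, statedb vm.StateDB, cfg *params.ChainConfig) *vm.EVM {
	ctx := NewEVMContext(msg, header, bc, nil) // nil author: derive the beneficiary from the header
	return vm.NewEVM(ctx, statedb, cfg, vm.Config{})
}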
|
||||
|
||||
// GetHashFn returns a GetHashFunc which retrieves header hashes by number
|
||||
func GetHashFn(ref *types.Header, chain ChainContext) func(n uint64) common.Hash {
|
||||
var cache map[uint64]common.Hash
|
||||
|
||||
return func(n uint64) common.Hash {
|
||||
// If there's no hash cache yet, make one
|
||||
if cache == nil {
|
||||
cache = map[uint64]common.Hash{
|
||||
ref.Number.Uint64() - 1: ref.ParentHash,
|
||||
}
|
||||
}
|
||||
// Try to fulfill the request from the cache
|
||||
if hash, ok := cache[n]; ok {
|
||||
return hash
|
||||
}
|
||||
// Not cached, iterate the blocks and cache the hashes
|
||||
for header := chain.GetHeader(ref.ParentHash, ref.Number.Uint64()-1); header != nil; header = chain.GetHeader(header.ParentHash, header.Number.Uint64()-1) {
|
||||
cache[header.Number.Uint64()-1] = header.ParentHash
|
||||
if n == header.Number.Uint64()-1 {
|
||||
return header.ParentHash
|
||||
}
|
||||
}
|
||||
return common.Hash{}
|
||||
}
|
||||
}
|
||||
|
||||
// CanTransfer checks whether there are enough funds in the address' account to make a transfer.
|
||||
// This does not take the necessary gas into account to make the transfer valid.
|
||||
func CanTransfer(db vm.StateDB, addr common.Address, amount *big.Int) bool {
|
||||
return db.GetBalance(addr).Cmp(amount) >= 0
|
||||
}
|
||||
|
||||
// Transfer subtracts amount from sender and adds amount to recipient using the given Db
|
||||
func Transfer(db vm.StateDB, sender, recipient common.Address, amount *big.Int) {
|
||||
db.SubBalance(sender, amount)
|
||||
db.AddBalance(recipient, amount)
|
||||
}
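// Hedged sketch (not part of the vendored go-ethereum source): the two helpers
// above compose into a guarded value transfer, mirroring how the EVM uses them
// when handling calls.
func transferIfAble(db vm.StateDB, from, to common.Address, amount *big.Int) bool {
	if !CanTransfer(db, from, amount) {
		return false // insufficient balance; gas costs are not considered here
	}
	Transfer(db, from, to, amount)
	return true
}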
|
391 vendor/github.com/ethereum/go-ethereum/core/genesis.go generated vendored Normal file
@ -0,0 +1,391 @@
|
||||
// Copyright 2014 The go-ethereum Authors
|
||||
// This file is part of the go-ethereum library.
|
||||
//
|
||||
// The go-ethereum library is free software: you can redistribute it and/or modify
|
||||
// it under the terms of the GNU Lesser General Public License as published by
|
||||
// the Free Software Foundation, either version 3 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// The go-ethereum library is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU Lesser General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU Lesser General Public License
|
||||
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
package core
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"encoding/hex"
|
||||
"encoding/json"
|
||||
"errors"
|
||||
"fmt"
|
||||
"math/big"
|
||||
"strings"
|
||||
|
||||
"github.com/ethereum/go-ethereum/common"
|
||||
"github.com/ethereum/go-ethereum/common/hexutil"
|
||||
"github.com/ethereum/go-ethereum/common/math"
|
||||
"github.com/ethereum/go-ethereum/core/rawdb"
|
||||
"github.com/ethereum/go-ethereum/core/state"
|
||||
"github.com/ethereum/go-ethereum/core/types"
|
||||
"github.com/ethereum/go-ethereum/ethdb"
|
||||
"github.com/ethereum/go-ethereum/log"
|
||||
"github.com/ethereum/go-ethereum/params"
|
||||
"github.com/ethereum/go-ethereum/rlp"
|
||||
)
|
||||
|
||||
//go:generate gencodec -type Genesis -field-override genesisSpecMarshaling -out gen_genesis.go
|
||||
//go:generate gencodec -type GenesisAccount -field-override genesisAccountMarshaling -out gen_genesis_account.go
|
||||
|
||||
var errGenesisNoConfig = errors.New("genesis has no chain configuration")
|
||||
|
||||
// Genesis specifies the header fields, state of a genesis block. It also defines hard
|
||||
// fork switch-over blocks through the chain configuration.
|
||||
type Genesis struct {
|
||||
Config *params.ChainConfig `json:"config"`
|
||||
Nonce uint64 `json:"nonce"`
|
||||
Timestamp uint64 `json:"timestamp"`
|
||||
ExtraData []byte `json:"extraData"`
|
||||
GasLimit uint64 `json:"gasLimit" gencodec:"required"`
|
||||
Difficulty *big.Int `json:"difficulty" gencodec:"required"`
|
||||
Mixhash common.Hash `json:"mixHash"`
|
||||
Coinbase common.Address `json:"coinbase"`
|
||||
Alloc GenesisAlloc `json:"alloc" gencodec:"required"`
|
||||
|
||||
// These fields are used for consensus tests. Please don't use them
|
||||
// in actual genesis blocks.
|
||||
Number uint64 `json:"number"`
|
||||
GasUsed uint64 `json:"gasUsed"`
|
||||
ParentHash common.Hash `json:"parentHash"`
|
||||
}
|
||||
|
||||
// GenesisAlloc specifies the initial state that is part of the genesis block.
|
||||
type GenesisAlloc map[common.Address]GenesisAccount
|
||||
|
||||
func (ga *GenesisAlloc) UnmarshalJSON(data []byte) error {
|
||||
m := make(map[common.UnprefixedAddress]GenesisAccount)
|
||||
if err := json.Unmarshal(data, &m); err != nil {
|
||||
return err
|
||||
}
|
||||
*ga = make(GenesisAlloc)
|
||||
for addr, a := range m {
|
||||
(*ga)[common.Address(addr)] = a
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// GenesisAccount is an account in the state of the genesis block.
|
||||
type GenesisAccount struct {
|
||||
Code []byte `json:"code,omitempty"`
|
||||
Storage map[common.Hash]common.Hash `json:"storage,omitempty"`
|
||||
Balance *big.Int `json:"balance" gencodec:"required"`
|
||||
Nonce uint64 `json:"nonce,omitempty"`
|
||||
PrivateKey []byte `json:"secretKey,omitempty"` // for tests
|
||||
}
|
||||
|
||||
// field type overrides for gencodec
|
||||
type genesisSpecMarshaling struct {
|
||||
Nonce math.HexOrDecimal64
|
||||
Timestamp math.HexOrDecimal64
|
||||
ExtraData hexutil.Bytes
|
||||
GasLimit math.HexOrDecimal64
|
||||
GasUsed math.HexOrDecimal64
|
||||
Number math.HexOrDecimal64
|
||||
Difficulty *math.HexOrDecimal256
|
||||
Alloc map[common.UnprefixedAddress]GenesisAccount
|
||||
}
|
||||
|
||||
type genesisAccountMarshaling struct {
|
||||
Code hexutil.Bytes
|
||||
Balance *math.HexOrDecimal256
|
||||
Nonce math.HexOrDecimal64
|
||||
Storage map[storageJSON]storageJSON
|
||||
PrivateKey hexutil.Bytes
|
||||
}
|
||||
|
||||
// storageJSON represents a 256 bit byte array, but allows less than 256 bits when
|
||||
// unmarshaling from hex.
|
||||
type storageJSON common.Hash
|
||||
|
||||
func (h *storageJSON) UnmarshalText(text []byte) error {
|
||||
text = bytes.TrimPrefix(text, []byte("0x"))
|
||||
if len(text) > 64 {
|
||||
return fmt.Errorf("too many hex characters in storage key/value %q", text)
|
||||
}
|
||||
offset := len(h) - len(text)/2 // pad on the left
|
||||
if _, err := hex.Decode(h[offset:], text); err != nil {
|
||||
fmt.Println(err)
|
||||
return fmt.Errorf("invalid hex storage key/value %q", text)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (h storageJSON) MarshalText() ([]byte, error) {
|
||||
return hexutil.Bytes(h[:]).MarshalText()
|
||||
}
|
||||
|
||||
// GenesisMismatchError is raised when trying to overwrite an existing
|
||||
// genesis block with an incompatible one.
|
||||
type GenesisMismatchError struct {
|
||||
Stored, New common.Hash
|
||||
}
|
||||
|
||||
func (e *GenesisMismatchError) Error() string {
|
||||
return fmt.Sprintf("database contains incompatible genesis (have %x, new %x)", e.Stored, e.New)
|
||||
}
|
||||
|
||||
// SetupGenesisBlock writes or updates the genesis block in db.
|
||||
// The block that will be used is:
|
||||
//
|
||||
// genesis == nil genesis != nil
|
||||
// +------------------------------------------
|
||||
// db has no genesis | main-net default | genesis
|
||||
// db has genesis | from DB | genesis (if compatible)
|
||||
//
|
||||
// The stored chain configuration will be updated if it is compatible (i.e. does not
|
||||
// specify a fork block below the local head block). In case of a conflict, the
|
||||
// error is a *params.ConfigCompatError and the new, unwritten config is returned.
|
||||
//
|
||||
// The returned chain configuration is never nil.
|
||||
func SetupGenesisBlock(db ethdb.Database, genesis *Genesis) (*params.ChainConfig, common.Hash, error) {
|
||||
return SetupGenesisBlockWithOverride(db, genesis, nil)
|
||||
}
|
||||
func SetupGenesisBlockWithOverride(db ethdb.Database, genesis *Genesis, constantinopleOverride *big.Int) (*params.ChainConfig, common.Hash, error) {
|
||||
if genesis != nil && genesis.Config == nil {
|
||||
return params.AllEthashProtocolChanges, common.Hash{}, errGenesisNoConfig
|
||||
}
|
||||
// Just commit the new block if there is no stored genesis block.
|
||||
stored := rawdb.ReadCanonicalHash(db, 0)
|
||||
if (stored == common.Hash{}) {
|
||||
if genesis == nil {
|
||||
log.Info("Writing default main-net genesis block")
|
||||
genesis = DefaultGenesisBlock()
|
||||
} else {
|
||||
log.Info("Writing custom genesis block")
|
||||
}
|
||||
block, err := genesis.Commit(db)
|
||||
return genesis.Config, block.Hash(), err
|
||||
}
|
||||
|
||||
// Check whether the genesis block is already written.
|
||||
if genesis != nil {
|
||||
hash := genesis.ToBlock(nil).Hash()
|
||||
if hash != stored {
|
||||
return genesis.Config, hash, &GenesisMismatchError{stored, hash}
|
||||
}
|
||||
}
|
||||
|
||||
// Get the existing chain configuration.
|
||||
newcfg := genesis.configOrDefault(stored)
|
||||
if constantinopleOverride != nil {
|
||||
newcfg.ConstantinopleBlock = constantinopleOverride
|
||||
newcfg.PetersburgBlock = constantinopleOverride
|
||||
}
|
||||
storedcfg := rawdb.ReadChainConfig(db, stored)
|
||||
if storedcfg == nil {
|
||||
log.Warn("Found genesis block without chain config")
|
||||
rawdb.WriteChainConfig(db, stored, newcfg)
|
||||
return newcfg, stored, nil
|
||||
}
|
||||
// Special case: don't change the existing config of a non-mainnet chain if no new
|
||||
// config is supplied. These chains would get AllProtocolChanges (and a compat error)
|
||||
// if we just continued here.
|
||||
if genesis == nil && stored != params.MainnetGenesisHash {
|
||||
return storedcfg, stored, nil
|
||||
}
|
||||
|
||||
// Check config compatibility and write the config. Compatibility errors
|
||||
// are returned to the caller unless we're already at block zero.
|
||||
height := rawdb.ReadHeaderNumber(db, rawdb.ReadHeadHeaderHash(db))
|
||||
if height == nil {
|
||||
return newcfg, stored, fmt.Errorf("missing block number for head header hash")
|
||||
}
|
||||
compatErr := storedcfg.CheckCompatible(newcfg, *height)
|
||||
if compatErr != nil && *height != 0 && compatErr.RewindTo != 0 {
|
||||
return newcfg, stored, compatErr
|
||||
}
|
||||
rawdb.WriteChainConfig(db, stored, newcfg)
|
||||
return newcfg, stored, nil
|
||||
}
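// Hedged usage sketch (not part of the vendored go-ethereum source): wiring a
// node database to the Ropsten test network; per the table above, a nil
// genesis on an empty database would fall back to the main-net default instead.
func exampleSetupTestnet(db ethdb.Database) (*params.ChainConfig, common.Hash, error) {
	return SetupGenesisBlock(db, DefaultTestnetGenesisBlock())
}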
|
||||
|
||||
func (g *Genesis) configOrDefault(ghash common.Hash) *params.ChainConfig {
|
||||
switch {
|
||||
case g != nil:
|
||||
return g.Config
|
||||
case ghash == params.MainnetGenesisHash:
|
||||
return params.MainnetChainConfig
|
||||
case ghash == params.TestnetGenesisHash:
|
||||
return params.TestnetChainConfig
|
||||
default:
|
||||
return params.AllEthashProtocolChanges
|
||||
}
|
||||
}
|
||||
|
||||
// ToBlock creates the genesis block and writes state of a genesis specification
|
||||
// to the given database (or discards it if nil).
|
||||
func (g *Genesis) ToBlock(db ethdb.Database) *types.Block {
|
||||
if db == nil {
|
||||
db = rawdb.NewMemoryDatabase()
|
||||
}
|
||||
statedb, _ := state.New(common.Hash{}, state.NewDatabase(db))
|
||||
for addr, account := range g.Alloc {
|
||||
statedb.AddBalance(addr, account.Balance)
|
||||
statedb.SetCode(addr, account.Code)
|
||||
statedb.SetNonce(addr, account.Nonce)
|
||||
for key, value := range account.Storage {
|
||||
statedb.SetState(addr, key, value)
|
||||
}
|
||||
}
|
||||
root := statedb.IntermediateRoot(false)
|
||||
head := &types.Header{
|
||||
Number: new(big.Int).SetUint64(g.Number),
|
||||
Nonce: types.EncodeNonce(g.Nonce),
|
||||
Time: g.Timestamp,
|
||||
ParentHash: g.ParentHash,
|
||||
Extra: g.ExtraData,
|
||||
GasLimit: g.GasLimit,
|
||||
GasUsed: g.GasUsed,
|
||||
Difficulty: g.Difficulty,
|
||||
MixDigest: g.Mixhash,
|
||||
Coinbase: g.Coinbase,
|
||||
Root: root,
|
||||
}
|
||||
if g.GasLimit == 0 {
|
||||
head.GasLimit = params.GenesisGasLimit
|
||||
}
|
||||
if g.Difficulty == nil {
|
||||
head.Difficulty = params.GenesisDifficulty
|
||||
}
|
||||
statedb.Commit(false)
|
||||
statedb.Database().TrieDB().Commit(root, true)
|
||||
|
||||
return types.NewBlock(head, nil, nil, nil)
|
||||
}
|
||||
|
||||
// Commit writes the block and state of a genesis specification to the database.
|
||||
// The block is committed as the canonical head block.
|
||||
func (g *Genesis) Commit(db ethdb.Database) (*types.Block, error) {
|
||||
block := g.ToBlock(db)
|
||||
if block.Number().Sign() != 0 {
|
||||
return nil, fmt.Errorf("can't commit genesis block with number > 0")
|
||||
}
|
||||
rawdb.WriteTd(db, block.Hash(), block.NumberU64(), g.Difficulty)
|
||||
rawdb.WriteBlock(db, block)
|
||||
rawdb.WriteReceipts(db, block.Hash(), block.NumberU64(), nil)
|
||||
rawdb.WriteCanonicalHash(db, block.Hash(), block.NumberU64())
|
||||
rawdb.WriteHeadBlockHash(db, block.Hash())
|
||||
rawdb.WriteHeadHeaderHash(db, block.Hash())
|
||||
|
||||
config := g.Config
|
||||
if config == nil {
|
||||
config = params.AllEthashProtocolChanges
|
||||
}
|
||||
rawdb.WriteChainConfig(db, block.Hash(), config)
|
||||
return block, nil
|
||||
}
|
||||
|
||||
// MustCommit writes the genesis block and state to db, panicking on error.
|
||||
// The block is committed as the canonical head block.
|
||||
func (g *Genesis) MustCommit(db ethdb.Database) *types.Block {
|
||||
block, err := g.Commit(db)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
return block
|
||||
}
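// Hedged usage sketch (not part of the vendored go-ethereum source): committing
// a minimal custom genesis with a single pre-funded account; the database and
// faucet address are supplied by the caller.
func exampleCustomGenesis(db ethdb.Database, faucet common.Address) *types.Block {
	g := &Genesis{
		Config:     params.AllEthashProtocolChanges,
		GasLimit:   8000000,
		Difficulty: big.NewInt(1),
		Alloc:      GenesisAlloc{faucet: {Balance: big.NewInt(params.Ether)}},
	}
	return g.MustCommit(db)
}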
|
||||
|
||||
// GenesisBlockForTesting creates and writes a block in which addr has the given wei balance.
|
||||
func GenesisBlockForTesting(db ethdb.Database, addr common.Address, balance *big.Int) *types.Block {
|
||||
g := Genesis{Alloc: GenesisAlloc{addr: {Balance: balance}}}
|
||||
return g.MustCommit(db)
|
||||
}
|
||||
|
||||
// DefaultGenesisBlock returns the Ethereum main net genesis block.
|
||||
func DefaultGenesisBlock() *Genesis {
|
||||
return &Genesis{
|
||||
Config: params.MainnetChainConfig,
|
||||
Nonce: 66,
|
||||
ExtraData: hexutil.MustDecode("0x11bbe8db4e347b4e8c937c1c8370e4b5ed33adb3db69cbdb7a38e1e50b1b82fa"),
|
||||
GasLimit: 5000,
|
||||
Difficulty: big.NewInt(17179869184),
|
||||
Alloc: decodePrealloc(mainnetAllocData),
|
||||
}
|
||||
}
|
||||
|
||||
// DefaultTestnetGenesisBlock returns the Ropsten network genesis block.
|
||||
func DefaultTestnetGenesisBlock() *Genesis {
|
||||
return &Genesis{
|
||||
Config: params.TestnetChainConfig,
|
||||
Nonce: 66,
|
||||
ExtraData: hexutil.MustDecode("0x3535353535353535353535353535353535353535353535353535353535353535"),
|
||||
GasLimit: 16777216,
|
||||
Difficulty: big.NewInt(1048576),
|
||||
Alloc: decodePrealloc(testnetAllocData),
|
||||
}
|
||||
}
|
||||
|
||||
// DefaultRinkebyGenesisBlock returns the Rinkeby network genesis block.
|
||||
func DefaultRinkebyGenesisBlock() *Genesis {
|
||||
return &Genesis{
|
||||
Config: params.RinkebyChainConfig,
|
||||
Timestamp: 1492009146,
|
||||
ExtraData: hexutil.MustDecode("0x52657370656374206d7920617574686f7269746168207e452e436172746d616e42eb768f2244c8811c63729a21a3569731535f067ffc57839b00206d1ad20c69a1981b489f772031b279182d99e65703f0076e4812653aab85fca0f00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"),
|
||||
GasLimit: 4700000,
|
||||
Difficulty: big.NewInt(1),
|
||||
Alloc: decodePrealloc(rinkebyAllocData),
|
||||
}
|
||||
}
|
||||
|
||||
// DefaultGoerliGenesisBlock returns the Görli network genesis block.
|
||||
func DefaultGoerliGenesisBlock() *Genesis {
|
||||
return &Genesis{
|
||||
Config: params.GoerliChainConfig,
|
||||
Timestamp: 1548854791,
|
||||
ExtraData: hexutil.MustDecode("0x22466c6578692069732061207468696e6722202d204166726900000000000000e0a2bd4258d2768837baa26a28fe71dc079f84c70000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"),
|
||||
GasLimit: 10485760,
|
||||
Difficulty: big.NewInt(1),
|
||||
Alloc: decodePrealloc(goerliAllocData),
|
||||
}
|
||||
}
|
||||
|
||||
// DeveloperGenesisBlock returns the 'geth --dev' genesis block. Note, this must
|
||||
// be seeded with the
|
||||
func DeveloperGenesisBlock(period uint64, faucet common.Address) *Genesis {
|
||||
// Override the default period to the user requested one
|
||||
config := *params.AllCliqueProtocolChanges
|
||||
config.Clique.Period = period
|
||||
|
||||
// Assemble and return the genesis with the precompiles and faucet pre-funded
|
||||
return &Genesis{
|
||||
Config: &config,
|
||||
ExtraData: append(append(make([]byte, 32), faucet[:]...), make([]byte, 65)...),
|
||||
GasLimit: 6283185,
|
||||
Difficulty: big.NewInt(1),
|
||||
Alloc: map[common.Address]GenesisAccount{
|
||||
common.BytesToAddress([]byte{1}): {Balance: big.NewInt(1)}, // ECRecover
|
||||
common.BytesToAddress([]byte{2}): {Balance: big.NewInt(1)}, // SHA256
|
||||
common.BytesToAddress([]byte{3}): {Balance: big.NewInt(1)}, // RIPEMD
|
||||
common.BytesToAddress([]byte{4}): {Balance: big.NewInt(1)}, // Identity
|
||||
common.BytesToAddress([]byte{5}): {Balance: big.NewInt(1)}, // ModExp
|
||||
common.BytesToAddress([]byte{6}): {Balance: big.NewInt(1)}, // ECAdd
|
||||
common.BytesToAddress([]byte{7}): {Balance: big.NewInt(1)}, // ECScalarMul
|
||||
common.BytesToAddress([]byte{8}): {Balance: big.NewInt(1)}, // ECPairing
|
||||
faucet: {Balance: new(big.Int).Sub(new(big.Int).Lsh(big.NewInt(1), 256), big.NewInt(9))},
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
func decodePrealloc(data string) GenesisAlloc {
|
||||
var p []struct{ Addr, Balance *big.Int }
|
||||
if err := rlp.NewStream(strings.NewReader(data), 0).Decode(&p); err != nil {
|
||||
panic(err)
|
||||
}
|
||||
ga := make(GenesisAlloc, len(p))
|
||||
for _, account := range p {
|
||||
ga[common.BigToAddress(account.Addr)] = GenesisAccount{Balance: account.Balance}
|
||||
}
|
||||
return ga
|
||||
}
|
514 vendor/github.com/ethereum/go-ethereum/core/headerchain.go generated vendored Normal file
@ -0,0 +1,514 @@
|
||||
// Copyright 2015 The go-ethereum Authors
|
||||
// This file is part of the go-ethereum library.
|
||||
//
|
||||
// The go-ethereum library is free software: you can redistribute it and/or modify
|
||||
// it under the terms of the GNU Lesser General Public License as published by
|
||||
// the Free Software Foundation, either version 3 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// The go-ethereum library is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU Lesser General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU Lesser General Public License
|
||||
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
package core
|
||||
|
||||
import (
|
||||
crand "crypto/rand"
|
||||
"errors"
|
||||
"fmt"
|
||||
"math"
|
||||
"math/big"
|
||||
mrand "math/rand"
|
||||
"sync/atomic"
|
||||
"time"
|
||||
|
||||
"github.com/ethereum/go-ethereum/common"
|
||||
"github.com/ethereum/go-ethereum/consensus"
|
||||
"github.com/ethereum/go-ethereum/core/rawdb"
|
||||
"github.com/ethereum/go-ethereum/core/types"
|
||||
"github.com/ethereum/go-ethereum/ethdb"
|
||||
"github.com/ethereum/go-ethereum/log"
|
||||
"github.com/ethereum/go-ethereum/params"
|
||||
lru "github.com/hashicorp/golang-lru"
|
||||
)
|
||||
|
||||
const (
|
||||
headerCacheLimit = 512
|
||||
tdCacheLimit = 1024
|
||||
numberCacheLimit = 2048
|
||||
)
|
||||
|
||||
// HeaderChain implements the basic block header chain logic that is shared by
|
||||
// core.BlockChain and light.LightChain. It is not usable in itself, only as
|
||||
// a part of either structure.
|
||||
// It is not thread safe either, the encapsulating chain structures should do
|
||||
// the necessary mutex locking/unlocking.
|
||||
type HeaderChain struct {
|
||||
config *params.ChainConfig
|
||||
|
||||
chainDb ethdb.Database
|
||||
genesisHeader *types.Header
|
||||
|
||||
currentHeader atomic.Value // Current head of the header chain (may be above the block chain!)
|
||||
currentHeaderHash common.Hash // Hash of the current head of the header chain (prevent recomputing all the time)
|
||||
|
||||
headerCache *lru.Cache // Cache for the most recent block headers
|
||||
tdCache *lru.Cache // Cache for the most recent block total difficulties
|
||||
numberCache *lru.Cache // Cache for the most recent block numbers
|
||||
|
||||
procInterrupt func() bool
|
||||
|
||||
rand *mrand.Rand
|
||||
engine consensus.Engine
|
||||
}
|
||||
|
||||
// NewHeaderChain creates a new HeaderChain structure.
|
||||
// getValidator should return the parent's validator
|
||||
// procInterrupt points to the parent's interrupt semaphore
|
||||
// wg points to the parent's shutdown wait group
|
||||
func NewHeaderChain(chainDb ethdb.Database, config *params.ChainConfig, engine consensus.Engine, procInterrupt func() bool) (*HeaderChain, error) {
|
||||
headerCache, _ := lru.New(headerCacheLimit)
|
||||
tdCache, _ := lru.New(tdCacheLimit)
|
||||
numberCache, _ := lru.New(numberCacheLimit)
|
||||
|
||||
// Seed a fast but crypto originating random generator
|
||||
seed, err := crand.Int(crand.Reader, big.NewInt(math.MaxInt64))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
hc := &HeaderChain{
|
||||
config: config,
|
||||
chainDb: chainDb,
|
||||
headerCache: headerCache,
|
||||
tdCache: tdCache,
|
||||
numberCache: numberCache,
|
||||
procInterrupt: procInterrupt,
|
||||
rand: mrand.New(mrand.NewSource(seed.Int64())),
|
||||
engine: engine,
|
||||
}
|
||||
|
||||
hc.genesisHeader = hc.GetHeaderByNumber(0)
|
||||
if hc.genesisHeader == nil {
|
||||
return nil, ErrNoGenesis
|
||||
}
|
||||
|
||||
hc.currentHeader.Store(hc.genesisHeader)
|
||||
if head := rawdb.ReadHeadBlockHash(chainDb); head != (common.Hash{}) {
|
||||
if chead := hc.GetHeaderByHash(head); chead != nil {
|
||||
hc.currentHeader.Store(chead)
|
||||
}
|
||||
}
|
||||
hc.currentHeaderHash = hc.CurrentHeader().Hash()
|
||||
|
||||
return hc, nil
|
||||
}
|
||||
|
||||
// GetBlockNumber retrieves the block number belonging to the given hash
|
||||
// from the cache or database
|
||||
func (hc *HeaderChain) GetBlockNumber(hash common.Hash) *uint64 {
|
||||
if cached, ok := hc.numberCache.Get(hash); ok {
|
||||
number := cached.(uint64)
|
||||
return &number
|
||||
}
|
||||
number := rawdb.ReadHeaderNumber(hc.chainDb, hash)
|
||||
if number != nil {
|
||||
hc.numberCache.Add(hash, *number)
|
||||
}
|
||||
return number
|
||||
}
|
||||
|
||||
// WriteHeader writes a header into the local chain, given that its parent is
|
||||
// already known. If the total difficulty of the newly inserted header becomes
|
||||
// greater than the current known TD, the canonical chain is re-routed.
|
||||
//
|
||||
// Note: This method is not concurrent-safe with inserting blocks simultaneously
|
||||
// into the chain, as side effects caused by reorganisations cannot be emulated
|
||||
// without the real blocks. Hence, writing headers directly should only be done
|
||||
// in two scenarios: pure-header mode of operation (light clients), or properly
|
||||
// separated header/block phases (non-archive clients).
|
||||
func (hc *HeaderChain) WriteHeader(header *types.Header) (status WriteStatus, err error) {
|
||||
// Cache some values to prevent constant recalculation
|
||||
var (
|
||||
hash = header.Hash()
|
||||
number = header.Number.Uint64()
|
||||
)
|
||||
// Calculate the total difficulty of the header
|
||||
ptd := hc.GetTd(header.ParentHash, number-1)
|
||||
if ptd == nil {
|
||||
return NonStatTy, consensus.ErrUnknownAncestor
|
||||
}
|
||||
localTd := hc.GetTd(hc.currentHeaderHash, hc.CurrentHeader().Number.Uint64())
|
||||
externTd := new(big.Int).Add(header.Difficulty, ptd)
|
||||
|
||||
// Regardless of the canonical status, write the td and header to the database
|
||||
if err := hc.WriteTd(hash, number, externTd); err != nil {
|
||||
log.Crit("Failed to write header total difficulty", "err", err)
|
||||
}
|
||||
rawdb.WriteHeader(hc.chainDb, header)
|
||||
|
||||
// If the total difficulty is higher than our known, add it to the canonical chain
|
||||
// Second clause in the if statement reduces the vulnerability to selfish mining.
|
||||
// Please refer to http://www.cs.cornell.edu/~ie53/publications/btcProcFC.pdf
|
||||
if externTd.Cmp(localTd) > 0 || (externTd.Cmp(localTd) == 0 && mrand.Float64() < 0.5) {
|
||||
// Delete any canonical number assignments above the new head
|
||||
batch := hc.chainDb.NewBatch()
|
||||
for i := number + 1; ; i++ {
|
||||
hash := rawdb.ReadCanonicalHash(hc.chainDb, i)
|
||||
if hash == (common.Hash{}) {
|
||||
break
|
||||
}
|
||||
rawdb.DeleteCanonicalHash(batch, i)
|
||||
}
|
||||
batch.Write()
|
||||
|
||||
// Overwrite any stale canonical number assignments
|
||||
var (
|
||||
headHash = header.ParentHash
|
||||
headNumber = header.Number.Uint64() - 1
|
||||
headHeader = hc.GetHeader(headHash, headNumber)
|
||||
)
|
||||
for rawdb.ReadCanonicalHash(hc.chainDb, headNumber) != headHash {
|
||||
rawdb.WriteCanonicalHash(hc.chainDb, headHash, headNumber)
|
||||
|
||||
headHash = headHeader.ParentHash
|
||||
headNumber = headHeader.Number.Uint64() - 1
|
||||
headHeader = hc.GetHeader(headHash, headNumber)
|
||||
}
|
||||
// Extend the canonical chain with the new header
|
||||
rawdb.WriteCanonicalHash(hc.chainDb, hash, number)
|
||||
rawdb.WriteHeadHeaderHash(hc.chainDb, hash)
|
||||
|
||||
hc.currentHeaderHash = hash
|
||||
hc.currentHeader.Store(types.CopyHeader(header))
|
||||
|
||||
status = CanonStatTy
|
||||
} else {
|
||||
status = SideStatTy
|
||||
}
|
||||
|
||||
hc.headerCache.Add(hash, header)
|
||||
hc.numberCache.Add(hash, number)
|
||||
|
||||
return
|
||||
}
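// Worked example (not part of the vendored go-ethereum source): if the local
// head has TD 1,000,000 and the new header's parent TD is 999,900 with
// header.Difficulty = 200, then externTd = 1,000,100 > localTd, so the header
// becomes the canonical head (CanonStatTy); at exactly equal TD the coin flip
// above decides, which blunts selfish-mining strategies.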
|
||||
|
||||
// WhCallback is a callback function for inserting individual headers.
|
||||
// A callback is used for two reasons: first, in a LightChain, status should be
|
||||
// processed and light chain events sent, while in a BlockChain this is not
|
||||
// necessary since chain events are sent after inserting blocks. Second, the
|
||||
// header writes should be protected by the parent chain mutex individually.
|
||||
type WhCallback func(*types.Header) error
|
||||
|
||||
func (hc *HeaderChain) ValidateHeaderChain(chain []*types.Header, checkFreq int) (int, error) {
|
||||
// Do a sanity check that the provided chain is actually ordered and linked
|
||||
for i := 1; i < len(chain); i++ {
|
||||
if chain[i].Number.Uint64() != chain[i-1].Number.Uint64()+1 || chain[i].ParentHash != chain[i-1].Hash() {
|
||||
// Chain broke ancestry, log a message (programming error) and skip insertion
|
||||
log.Error("Non contiguous header insert", "number", chain[i].Number, "hash", chain[i].Hash(),
|
||||
"parent", chain[i].ParentHash, "prevnumber", chain[i-1].Number, "prevhash", chain[i-1].Hash())
|
||||
|
||||
return 0, fmt.Errorf("non contiguous insert: item %d is #%d [%x…], item %d is #%d [%x…] (parent [%x…])", i-1, chain[i-1].Number,
|
||||
chain[i-1].Hash().Bytes()[:4], i, chain[i].Number, chain[i].Hash().Bytes()[:4], chain[i].ParentHash[:4])
|
||||
}
|
||||
}
|
||||
|
||||
// Generate the list of seal verification requests, and start the parallel verifier
|
||||
seals := make([]bool, len(chain))
|
||||
if checkFreq != 0 {
|
||||
// In case of checkFreq == 0 all seals are left false.
|
||||
for i := 0; i < len(seals)/checkFreq; i++ {
|
||||
index := i*checkFreq + hc.rand.Intn(checkFreq)
|
||||
if index >= len(seals) {
|
||||
index = len(seals) - 1
|
||||
}
|
||||
seals[index] = true
|
||||
}
|
||||
// Last should always be verified to avoid junk.
|
||||
seals[len(seals)-1] = true
|
||||
}
|
||||
|
||||
abort, results := hc.engine.VerifyHeaders(hc, chain, seals)
|
||||
defer close(abort)
|
||||
|
||||
// Iterate over the headers and ensure they all check out
|
||||
for i, header := range chain {
|
||||
// If the chain is terminating, stop processing blocks
|
||||
if hc.procInterrupt() {
|
||||
log.Debug("Premature abort during headers verification")
|
||||
return 0, errors.New("aborted")
|
||||
}
|
||||
// If the header is a banned one, straight out abort
|
||||
if BadHashes[header.Hash()] {
|
||||
return i, ErrBlacklistedHash
|
||||
}
|
||||
// Otherwise wait for headers checks and ensure they pass
|
||||
if err := <-results; err != nil {
|
||||
return i, err
|
||||
}
|
||||
}
|
||||
|
||||
return 0, nil
|
||||
}
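// Worked example (not part of the vendored go-ethereum source): with
// len(chain) = 100 and checkFreq = 10, one randomly chosen header in each
// group of ten has its seal verified (10 in total) plus the last header,
// which is always verified; checkFreq = 0 skips seal verification entirely.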
|
||||
|
||||
// InsertHeaderChain attempts to insert the given header chain in to the local
|
||||
// chain, possibly creating a reorg. If an error is returned, it will return the
|
||||
// index number of the failing header as well as an error describing what went wrong.
|
||||
//
|
||||
// The verify parameter can be used to fine tune whether nonce verification
|
||||
// should be done or not. The reason behind the optional check is because some
|
||||
// of the header retrieval mechanisms already need to verify nonces, as well as
|
||||
// because nonces can be verified sparsely, not needing to check each.
|
||||
func (hc *HeaderChain) InsertHeaderChain(chain []*types.Header, writeHeader WhCallback, start time.Time) (int, error) {
|
||||
// Collect some import statistics to report on
|
||||
stats := struct{ processed, ignored int }{}
|
||||
// All headers passed verification, import them into the database
|
||||
for i, header := range chain {
|
||||
// Short circuit insertion if shutting down
|
||||
if hc.procInterrupt() {
|
||||
log.Debug("Premature abort during headers import")
|
||||
return i, errors.New("aborted")
|
||||
}
|
||||
// If the header's already known, skip it, otherwise store
|
||||
if hc.HasHeader(header.Hash(), header.Number.Uint64()) {
|
||||
stats.ignored++
|
||||
continue
|
||||
}
|
||||
if err := writeHeader(header); err != nil {
|
||||
return i, err
|
||||
}
|
||||
stats.processed++
|
||||
}
|
||||
// Report some public statistics so the user has a clue what's going on
|
||||
last := chain[len(chain)-1]
|
||||
|
||||
context := []interface{}{
|
||||
"count", stats.processed, "elapsed", common.PrettyDuration(time.Since(start)),
|
||||
"number", last.Number, "hash", last.Hash(),
|
||||
}
|
||||
if timestamp := time.Unix(int64(last.Time), 0); time.Since(timestamp) > time.Minute {
|
||||
context = append(context, []interface{}{"age", common.PrettyAge(timestamp)}...)
|
||||
}
|
||||
if stats.ignored > 0 {
|
||||
context = append(context, []interface{}{"ignored", stats.ignored}...)
|
||||
}
|
||||
log.Info("Imported new block headers", context...)
|
||||
|
||||
return 0, nil
|
||||
}
|
||||
|
||||
// GetBlockHashesFromHash retrieves a number of block hashes starting at a given
|
||||
// hash, fetching towards the genesis block.
|
||||
func (hc *HeaderChain) GetBlockHashesFromHash(hash common.Hash, max uint64) []common.Hash {
|
||||
// Get the origin header from which to fetch
|
||||
header := hc.GetHeaderByHash(hash)
|
||||
if header == nil {
|
||||
return nil
|
||||
}
|
||||
// Iterate the headers until enough is collected or the genesis reached
|
||||
chain := make([]common.Hash, 0, max)
|
||||
for i := uint64(0); i < max; i++ {
|
||||
next := header.ParentHash
|
||||
if header = hc.GetHeader(next, header.Number.Uint64()-1); header == nil {
|
||||
break
|
||||
}
|
||||
chain = append(chain, next)
|
||||
if header.Number.Sign() == 0 {
|
||||
break
|
||||
}
|
||||
}
|
||||
return chain
|
||||
}
|
||||
|
||||
// GetAncestor retrieves the Nth ancestor of a given block. It assumes that either the given block or
|
||||
// a close ancestor of it is canonical. maxNonCanonical points to a downwards counter limiting the
|
||||
// number of blocks to be individually checked before we reach the canonical chain.
|
||||
//
|
||||
// Note: ancestor == 0 returns the same block, 1 returns its parent and so on.
|
||||
func (hc *HeaderChain) GetAncestor(hash common.Hash, number, ancestor uint64, maxNonCanonical *uint64) (common.Hash, uint64) {
|
||||
if ancestor > number {
|
||||
return common.Hash{}, 0
|
||||
}
|
||||
if ancestor == 1 {
|
||||
// in this case it is cheaper to just read the header
|
||||
if header := hc.GetHeader(hash, number); header != nil {
|
||||
return header.ParentHash, number - 1
|
||||
} else {
|
||||
return common.Hash{}, 0
|
||||
}
|
||||
}
|
||||
for ancestor != 0 {
|
||||
if rawdb.ReadCanonicalHash(hc.chainDb, number) == hash {
|
||||
number -= ancestor
|
||||
return rawdb.ReadCanonicalHash(hc.chainDb, number), number
|
||||
}
|
||||
if *maxNonCanonical == 0 {
|
||||
return common.Hash{}, 0
|
||||
}
|
||||
*maxNonCanonical--
|
||||
ancestor--
|
||||
header := hc.GetHeader(hash, number)
|
||||
if header == nil {
|
||||
return common.Hash{}, 0
|
||||
}
|
||||
hash = header.ParentHash
|
||||
number--
|
||||
}
|
||||
return hash, number
|
||||
}
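A small hypothetical wrapper showing the intended use of `GetAncestor`: walk `n` blocks back from a known (hash, number) pair while capping how many headers may be read individually before the walk must join the canonical chain. The `nthAncestor` name and the budget of 100 are assumptions, not part of the library.

```go
package main

import (
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core"
)

// nthAncestor walks n blocks back from (hash, number), spending at most
// `budget` individual header lookups before the walk must hit the canonical
// chain; a zero hash result means the ancestor could not be resolved.
func nthAncestor(hc *core.HeaderChain, hash common.Hash, number, n uint64) (common.Hash, uint64) {
	budget := uint64(100) // assumed budget, tune to taste
	return hc.GetAncestor(hash, number, n, &budget)
}
```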
|
||||
|
||||
// GetTd retrieves a block's total difficulty in the canonical chain from the
|
||||
// database by hash and number, caching it if found.
|
||||
func (hc *HeaderChain) GetTd(hash common.Hash, number uint64) *big.Int {
|
||||
// Short circuit if the td's already in the cache, retrieve otherwise
|
||||
if cached, ok := hc.tdCache.Get(hash); ok {
|
||||
return cached.(*big.Int)
|
||||
}
|
||||
td := rawdb.ReadTd(hc.chainDb, hash, number)
|
||||
if td == nil {
|
||||
return nil
|
||||
}
|
||||
// Cache the found body for next time and return
|
||||
hc.tdCache.Add(hash, td)
|
||||
return td
|
||||
}
|
||||
|
||||
// GetTdByHash retrieves a block's total difficulty in the canonical chain from the
|
||||
// database by hash, caching it if found.
|
||||
func (hc *HeaderChain) GetTdByHash(hash common.Hash) *big.Int {
|
||||
number := hc.GetBlockNumber(hash)
|
||||
if number == nil {
|
||||
return nil
|
||||
}
|
||||
return hc.GetTd(hash, *number)
|
||||
}
|
||||
|
||||
// WriteTd stores a block's total difficulty into the database, also caching it
|
||||
// along the way.
|
||||
func (hc *HeaderChain) WriteTd(hash common.Hash, number uint64, td *big.Int) error {
|
||||
rawdb.WriteTd(hc.chainDb, hash, number, td)
|
||||
hc.tdCache.Add(hash, new(big.Int).Set(td))
|
||||
return nil
|
||||
}
|
||||
|
||||
// GetHeader retrieves a block header from the database by hash and number,
|
||||
// caching it if found.
|
||||
func (hc *HeaderChain) GetHeader(hash common.Hash, number uint64) *types.Header {
|
||||
// Short circuit if the header's already in the cache, retrieve otherwise
|
||||
if header, ok := hc.headerCache.Get(hash); ok {
|
||||
return header.(*types.Header)
|
||||
}
|
||||
header := rawdb.ReadHeader(hc.chainDb, hash, number)
|
||||
if header == nil {
|
||||
return nil
|
||||
}
|
||||
// Cache the found header for next time and return
|
||||
hc.headerCache.Add(hash, header)
|
||||
return header
|
||||
}
|
||||
|
||||
// GetHeaderByHash retrieves a block header from the database by hash, caching it if
|
||||
// found.
|
||||
func (hc *HeaderChain) GetHeaderByHash(hash common.Hash) *types.Header {
|
||||
number := hc.GetBlockNumber(hash)
|
||||
if number == nil {
|
||||
return nil
|
||||
}
|
||||
return hc.GetHeader(hash, *number)
|
||||
}
|
||||
|
||||
// HasHeader checks if a block header is present in the database or not.
|
||||
func (hc *HeaderChain) HasHeader(hash common.Hash, number uint64) bool {
|
||||
if hc.numberCache.Contains(hash) || hc.headerCache.Contains(hash) {
|
||||
return true
|
||||
}
|
||||
return rawdb.HasHeader(hc.chainDb, hash, number)
|
||||
}
|
||||
|
||||
// GetHeaderByNumber retrieves a block header from the database by number,
|
||||
// caching it (associated with its hash) if found.
|
||||
func (hc *HeaderChain) GetHeaderByNumber(number uint64) *types.Header {
|
||||
hash := rawdb.ReadCanonicalHash(hc.chainDb, number)
|
||||
if hash == (common.Hash{}) {
|
||||
return nil
|
||||
}
|
||||
return hc.GetHeader(hash, number)
|
||||
}
|
||||
|
||||
// CurrentHeader retrieves the current head header of the canonical chain. The
|
||||
// header is retrieved from the HeaderChain's internal cache.
|
||||
func (hc *HeaderChain) CurrentHeader() *types.Header {
|
||||
return hc.currentHeader.Load().(*types.Header)
|
||||
}
|
||||
|
||||
// SetCurrentHeader sets the current head header of the canonical chain.
|
||||
func (hc *HeaderChain) SetCurrentHeader(head *types.Header) {
|
||||
rawdb.WriteHeadHeaderHash(hc.chainDb, head.Hash())
|
||||
|
||||
hc.currentHeader.Store(head)
|
||||
hc.currentHeaderHash = head.Hash()
|
||||
}
|
||||
|
||||
// DeleteCallback is a callback function that is called by SetHead before
|
||||
// each header is deleted.
|
||||
type DeleteCallback func(ethdb.Writer, common.Hash, uint64)
|
||||
|
||||
// SetHead rewinds the local chain to a new head. Everything above the new head
|
||||
// will be deleted and the new one set.
|
||||
func (hc *HeaderChain) SetHead(head uint64, delFn DeleteCallback) {
|
||||
height := uint64(0)
|
||||
|
||||
if hdr := hc.CurrentHeader(); hdr != nil {
|
||||
height = hdr.Number.Uint64()
|
||||
}
|
||||
batch := hc.chainDb.NewBatch()
|
||||
for hdr := hc.CurrentHeader(); hdr != nil && hdr.Number.Uint64() > head; hdr = hc.CurrentHeader() {
|
||||
hash := hdr.Hash()
|
||||
num := hdr.Number.Uint64()
|
||||
if delFn != nil {
|
||||
delFn(batch, hash, num)
|
||||
}
|
||||
rawdb.DeleteHeader(batch, hash, num)
|
||||
rawdb.DeleteTd(batch, hash, num)
|
||||
|
||||
hc.currentHeader.Store(hc.GetHeader(hdr.ParentHash, hdr.Number.Uint64()-1))
|
||||
}
|
||||
// Roll back the canonical chain numbering
|
||||
for i := height; i > head; i-- {
|
||||
rawdb.DeleteCanonicalHash(batch, i)
|
||||
}
|
||||
batch.Write()
|
||||
|
||||
// Clear out any stale content from the caches
|
||||
hc.headerCache.Purge()
|
||||
hc.tdCache.Purge()
|
||||
hc.numberCache.Purge()
|
||||
|
||||
if hc.CurrentHeader() == nil {
|
||||
hc.currentHeader.Store(hc.genesisHeader)
|
||||
}
|
||||
hc.currentHeaderHash = hc.CurrentHeader().Hash()
|
||||
|
||||
rawdb.WriteHeadHeaderHash(hc.chainDb, hc.currentHeaderHash)
|
||||
}
|
||||
|
||||
// SetGenesis sets a new genesis block header for the chain
|
||||
func (hc *HeaderChain) SetGenesis(head *types.Header) {
|
||||
hc.genesisHeader = head
|
||||
}
|
||||
|
||||
// Config retrieves the header chain's chain configuration.
|
||||
func (hc *HeaderChain) Config() *params.ChainConfig { return hc.config }
|
||||
|
||||
// Engine retrieves the header chain's consensus engine.
|
||||
func (hc *HeaderChain) Engine() consensus.Engine { return hc.engine }
|
||||
|
||||
// GetBlock implements consensus.ChainReader, and returns nil for every input as
|
||||
// a header chain does not have blocks available for retrieval.
|
||||
func (hc *HeaderChain) GetBlock(hash common.Hash, number uint64) *types.Block {
|
||||
return nil
|
||||
}
|
728
vendor/github.com/ethereum/go-ethereum/core/state/statedb.go
generated
vendored
Normal file
@ -0,0 +1,728 @@
|
||||
// Copyright 2014 The go-ethereum Authors
|
||||
// This file is part of the go-ethereum library.
|
||||
//
|
||||
// The go-ethereum library is free software: you can redistribute it and/or modify
|
||||
// it under the terms of the GNU Lesser General Public License as published by
|
||||
// the Free Software Foundation, either version 3 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// The go-ethereum library is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU Lesser General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU Lesser General Public License
|
||||
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
// Package state provides a caching layer atop the Ethereum state trie.
|
||||
package state
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"math/big"
|
||||
"sort"
|
||||
"time"
|
||||
|
||||
"github.com/ethereum/go-ethereum/common"
|
||||
"github.com/ethereum/go-ethereum/core/types"
|
||||
"github.com/ethereum/go-ethereum/crypto"
|
||||
"github.com/ethereum/go-ethereum/log"
|
||||
"github.com/ethereum/go-ethereum/metrics"
|
||||
"github.com/ethereum/go-ethereum/rlp"
|
||||
"github.com/ethereum/go-ethereum/trie"
|
||||
)
|
||||
|
||||
type revision struct {
|
||||
id int
|
||||
journalIndex int
|
||||
}
|
||||
|
||||
var (
|
||||
// emptyRoot is the known root hash of an empty trie.
|
||||
emptyRoot = common.HexToHash("56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421")
|
||||
|
||||
// emptyCode is the known hash of the empty EVM bytecode.
|
||||
emptyCode = crypto.Keccak256Hash(nil)
|
||||
)
|
||||
|
||||
type proofList [][]byte
|
||||
|
||||
func (n *proofList) Put(key []byte, value []byte) error {
|
||||
*n = append(*n, value)
|
||||
return nil
|
||||
}
|
||||
|
||||
func (n *proofList) Delete(key []byte) error {
|
||||
panic("not supported")
|
||||
}
|
||||
|
||||
// StateDBs within the ethereum protocol are used to store anything
|
||||
// within the merkle trie. StateDBs take care of caching and storing
|
||||
// nested states. It's the general query interface to retrieve:
|
||||
// * Contracts
|
||||
// * Accounts
|
||||
type StateDB struct {
|
||||
db Database
|
||||
trie Trie
|
||||
|
||||
// This map holds 'live' objects, which will get modified while processing a state transition.
|
||||
stateObjects map[common.Address]*stateObject
|
||||
stateObjectsDirty map[common.Address]struct{}
|
||||
|
||||
// DB error.
|
||||
// State objects are used by the consensus core and VM which are
|
||||
// unable to deal with database-level errors. Any error that occurs
|
||||
// during a database read is memoized here and will eventually be returned
|
||||
// by StateDB.Commit.
|
||||
dbErr error
|
||||
|
||||
// The refund counter, also used by state transitioning.
|
||||
refund uint64
|
||||
|
||||
thash, bhash common.Hash
|
||||
txIndex int
|
||||
logs map[common.Hash][]*types.Log
|
||||
logSize uint
|
||||
|
||||
preimages map[common.Hash][]byte
|
||||
|
||||
// Journal of state modifications. This is the backbone of
|
||||
// Snapshot and RevertToSnapshot.
|
||||
journal *journal
|
||||
validRevisions []revision
|
||||
nextRevisionId int
|
||||
|
||||
// Measurements gathered during execution for debugging purposes
|
||||
AccountReads time.Duration
|
||||
AccountHashes time.Duration
|
||||
AccountUpdates time.Duration
|
||||
AccountCommits time.Duration
|
||||
StorageReads time.Duration
|
||||
StorageHashes time.Duration
|
||||
StorageUpdates time.Duration
|
||||
StorageCommits time.Duration
|
||||
}
|
||||
|
||||
// Create a new state from a given trie.
|
||||
func New(root common.Hash, db Database) (*StateDB, error) {
|
||||
tr, err := db.OpenTrie(root)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return &StateDB{
|
||||
db: db,
|
||||
trie: tr,
|
||||
stateObjects: make(map[common.Address]*stateObject),
|
||||
stateObjectsDirty: make(map[common.Address]struct{}),
|
||||
logs: make(map[common.Hash][]*types.Log),
|
||||
preimages: make(map[common.Hash][]byte),
|
||||
journal: newJournal(),
|
||||
}, nil
|
||||
}
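As a sketch of how the constructor is typically wired up, the helper below opens a `StateDB` at a given root over an in-memory key-value store. It assumes `rawdb.NewMemoryDatabase` and `state.NewDatabase` are available in this vendored version; the `openState` name is hypothetical.

```go
package main

import (
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/rawdb"
	"github.com/ethereum/go-ethereum/core/state"
)

// openState opens a StateDB at the given root on top of an in-memory
// key-value store (handy for tests and scratch work).
// Assumes rawdb.NewMemoryDatabase exists at this vendored version.
func openState(root common.Hash) (*state.StateDB, error) {
	db := state.NewDatabase(rawdb.NewMemoryDatabase())
	return state.New(root, db)
}
```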
|
||||
|
||||
// setError remembers the first non-nil error it is called with.
|
||||
func (self *StateDB) setError(err error) {
|
||||
if self.dbErr == nil {
|
||||
self.dbErr = err
|
||||
}
|
||||
}
|
||||
|
||||
func (self *StateDB) Error() error {
|
||||
return self.dbErr
|
||||
}
|
||||
|
||||
// Reset clears out all ephemeral state objects from the state db, but keeps
|
||||
// the underlying state trie to avoid reloading data for the next operations.
|
||||
func (self *StateDB) Reset(root common.Hash) error {
|
||||
tr, err := self.db.OpenTrie(root)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
self.trie = tr
|
||||
self.stateObjects = make(map[common.Address]*stateObject)
|
||||
self.stateObjectsDirty = make(map[common.Address]struct{})
|
||||
self.thash = common.Hash{}
|
||||
self.bhash = common.Hash{}
|
||||
self.txIndex = 0
|
||||
self.logs = make(map[common.Hash][]*types.Log)
|
||||
self.logSize = 0
|
||||
self.preimages = make(map[common.Hash][]byte)
|
||||
self.clearJournalAndRefund()
|
||||
return nil
|
||||
}
|
||||
|
||||
func (self *StateDB) AddLog(log *types.Log) {
|
||||
self.journal.append(addLogChange{txhash: self.thash})
|
||||
|
||||
log.TxHash = self.thash
|
||||
log.BlockHash = self.bhash
|
||||
log.TxIndex = uint(self.txIndex)
|
||||
log.Index = self.logSize
|
||||
self.logs[self.thash] = append(self.logs[self.thash], log)
|
||||
self.logSize++
|
||||
}
|
||||
|
||||
func (self *StateDB) GetLogs(hash common.Hash) []*types.Log {
|
||||
return self.logs[hash]
|
||||
}
|
||||
|
||||
func (self *StateDB) Logs() []*types.Log {
|
||||
var logs []*types.Log
|
||||
for _, lgs := range self.logs {
|
||||
logs = append(logs, lgs...)
|
||||
}
|
||||
return logs
|
||||
}
|
||||
|
||||
// AddPreimage records a SHA3 preimage seen by the VM.
|
||||
func (self *StateDB) AddPreimage(hash common.Hash, preimage []byte) {
|
||||
if _, ok := self.preimages[hash]; !ok {
|
||||
self.journal.append(addPreimageChange{hash: hash})
|
||||
pi := make([]byte, len(preimage))
|
||||
copy(pi, preimage)
|
||||
self.preimages[hash] = pi
|
||||
}
|
||||
}
|
||||
|
||||
// Preimages returns a list of SHA3 preimages that have been submitted.
|
||||
func (self *StateDB) Preimages() map[common.Hash][]byte {
|
||||
return self.preimages
|
||||
}
|
||||
|
||||
// AddRefund adds gas to the refund counter
|
||||
func (self *StateDB) AddRefund(gas uint64) {
|
||||
self.journal.append(refundChange{prev: self.refund})
|
||||
self.refund += gas
|
||||
}
|
||||
|
||||
// SubRefund removes gas from the refund counter.
|
||||
// This method will panic if the refund counter goes below zero
|
||||
func (self *StateDB) SubRefund(gas uint64) {
|
||||
self.journal.append(refundChange{prev: self.refund})
|
||||
if gas > self.refund {
|
||||
panic("Refund counter below zero")
|
||||
}
|
||||
self.refund -= gas
|
||||
}
|
||||
|
||||
// Exist reports whether the given account address exists in the state.
|
||||
// Notably this also returns true for suicided accounts.
|
||||
func (self *StateDB) Exist(addr common.Address) bool {
|
||||
return self.getStateObject(addr) != nil
|
||||
}
|
||||
|
||||
// Empty returns whether the state object is either non-existent
|
||||
// or empty according to the EIP161 specification (balance = nonce = code = 0)
|
||||
func (self *StateDB) Empty(addr common.Address) bool {
|
||||
so := self.getStateObject(addr)
|
||||
return so == nil || so.empty()
|
||||
}
|
||||
|
||||
// Retrieve the balance from the given address or 0 if object not found
|
||||
func (self *StateDB) GetBalance(addr common.Address) *big.Int {
|
||||
stateObject := self.getStateObject(addr)
|
||||
if stateObject != nil {
|
||||
return stateObject.Balance()
|
||||
}
|
||||
return common.Big0
|
||||
}
|
||||
|
||||
func (self *StateDB) GetNonce(addr common.Address) uint64 {
|
||||
stateObject := self.getStateObject(addr)
|
||||
if stateObject != nil {
|
||||
return stateObject.Nonce()
|
||||
}
|
||||
|
||||
return 0
|
||||
}
|
||||
|
||||
// TxIndex returns the current transaction index set by Prepare.
|
||||
func (self *StateDB) TxIndex() int {
|
||||
return self.txIndex
|
||||
}
|
||||
|
||||
// BlockHash returns the current block hash set by Prepare.
|
||||
func (self *StateDB) BlockHash() common.Hash {
|
||||
return self.bhash
|
||||
}
|
||||
|
||||
func (self *StateDB) GetCode(addr common.Address) []byte {
|
||||
stateObject := self.getStateObject(addr)
|
||||
if stateObject != nil {
|
||||
return stateObject.Code(self.db)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (self *StateDB) GetCodeSize(addr common.Address) int {
|
||||
stateObject := self.getStateObject(addr)
|
||||
if stateObject == nil {
|
||||
return 0
|
||||
}
|
||||
if stateObject.code != nil {
|
||||
return len(stateObject.code)
|
||||
}
|
||||
size, err := self.db.ContractCodeSize(stateObject.addrHash, common.BytesToHash(stateObject.CodeHash()))
|
||||
if err != nil {
|
||||
self.setError(err)
|
||||
}
|
||||
return size
|
||||
}
|
||||
|
||||
func (self *StateDB) GetCodeHash(addr common.Address) common.Hash {
|
||||
stateObject := self.getStateObject(addr)
|
||||
if stateObject == nil {
|
||||
return common.Hash{}
|
||||
}
|
||||
return common.BytesToHash(stateObject.CodeHash())
|
||||
}
|
||||
|
||||
// GetState retrieves a value from the given account's storage trie.
|
||||
func (self *StateDB) GetState(addr common.Address, hash common.Hash) common.Hash {
|
||||
stateObject := self.getStateObject(addr)
|
||||
if stateObject != nil {
|
||||
return stateObject.GetState(self.db, hash)
|
||||
}
|
||||
return common.Hash{}
|
||||
}
|
||||
|
||||
// GetProof returns the MerkleProof for a given Account
|
||||
func (self *StateDB) GetProof(a common.Address) ([][]byte, error) {
|
||||
var proof proofList
|
||||
err := self.trie.Prove(crypto.Keccak256(a.Bytes()), 0, &proof)
|
||||
return [][]byte(proof), err
|
||||
}
|
||||
|
||||
// GetStorageProof returns the StorageProof for a given key
|
||||
func (self *StateDB) GetStorageProof(a common.Address, key common.Hash) ([][]byte, error) {
|
||||
var proof proofList
|
||||
trie := self.StorageTrie(a)
|
||||
if trie == nil {
|
||||
return proof, errors.New("storage trie for requested address does not exist")
|
||||
}
|
||||
err := trie.Prove(crypto.Keccak256(key.Bytes()), 0, &proof)
|
||||
return [][]byte(proof), err
|
||||
}
|
||||
|
||||
// GetCommittedState retrieves a value from the given account's committed storage trie.
|
||||
func (self *StateDB) GetCommittedState(addr common.Address, hash common.Hash) common.Hash {
|
||||
stateObject := self.getStateObject(addr)
|
||||
if stateObject != nil {
|
||||
return stateObject.GetCommittedState(self.db, hash)
|
||||
}
|
||||
return common.Hash{}
|
||||
}
|
||||
|
||||
// Database retrieves the low level database supporting the lower level trie ops.
|
||||
func (self *StateDB) Database() Database {
|
||||
return self.db
|
||||
}
|
||||
|
||||
// StorageTrie returns the storage trie of an account.
|
||||
// The return value is a copy and is nil for non-existent accounts.
|
||||
func (self *StateDB) StorageTrie(addr common.Address) Trie {
|
||||
stateObject := self.getStateObject(addr)
|
||||
if stateObject == nil {
|
||||
return nil
|
||||
}
|
||||
cpy := stateObject.deepCopy(self)
|
||||
return cpy.updateTrie(self.db)
|
||||
}
|
||||
|
||||
func (self *StateDB) HasSuicided(addr common.Address) bool {
|
||||
stateObject := self.getStateObject(addr)
|
||||
if stateObject != nil {
|
||||
return stateObject.suicided
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
/*
|
||||
* SETTERS
|
||||
*/
|
||||
|
||||
// AddBalance adds amount to the account associated with addr.
|
||||
func (self *StateDB) AddBalance(addr common.Address, amount *big.Int) {
|
||||
stateObject := self.GetOrNewStateObject(addr)
|
||||
if stateObject != nil {
|
||||
stateObject.AddBalance(amount)
|
||||
}
|
||||
}
|
||||
|
||||
// SubBalance subtracts amount from the account associated with addr.
|
||||
func (self *StateDB) SubBalance(addr common.Address, amount *big.Int) {
|
||||
stateObject := self.GetOrNewStateObject(addr)
|
||||
if stateObject != nil {
|
||||
stateObject.SubBalance(amount)
|
||||
}
|
||||
}
|
||||
|
||||
func (self *StateDB) SetBalance(addr common.Address, amount *big.Int) {
|
||||
stateObject := self.GetOrNewStateObject(addr)
|
||||
if stateObject != nil {
|
||||
stateObject.SetBalance(amount)
|
||||
}
|
||||
}
|
||||
|
||||
func (self *StateDB) SetNonce(addr common.Address, nonce uint64) {
|
||||
stateObject := self.GetOrNewStateObject(addr)
|
||||
if stateObject != nil {
|
||||
stateObject.SetNonce(nonce)
|
||||
}
|
||||
}
|
||||
|
||||
func (self *StateDB) SetCode(addr common.Address, code []byte) {
|
||||
stateObject := self.GetOrNewStateObject(addr)
|
||||
if stateObject != nil {
|
||||
stateObject.SetCode(crypto.Keccak256Hash(code), code)
|
||||
}
|
||||
}
|
||||
|
||||
func (self *StateDB) SetState(addr common.Address, key, value common.Hash) {
|
||||
stateObject := self.GetOrNewStateObject(addr)
|
||||
if stateObject != nil {
|
||||
stateObject.SetState(self.db, key, value)
|
||||
}
|
||||
}
|
||||
|
||||
// Suicide marks the given account as suicided.
|
||||
// This clears the account balance.
|
||||
//
|
||||
// The account's state object is still available until the state is committed;
|
||||
// getStateObject will return a non-nil account after Suicide.
|
||||
func (self *StateDB) Suicide(addr common.Address) bool {
|
||||
stateObject := self.getStateObject(addr)
|
||||
if stateObject == nil {
|
||||
return false
|
||||
}
|
||||
self.journal.append(suicideChange{
|
||||
account: &addr,
|
||||
prev: stateObject.suicided,
|
||||
prevbalance: new(big.Int).Set(stateObject.Balance()),
|
||||
})
|
||||
stateObject.markSuicided()
|
||||
stateObject.data.Balance = new(big.Int)
|
||||
|
||||
return true
|
||||
}
|
||||
|
||||
//
|
||||
// Setting, updating & deleting state object methods.
|
||||
//
|
||||
|
||||
// updateStateObject writes the given object to the trie.
|
||||
func (s *StateDB) updateStateObject(stateObject *stateObject) {
|
||||
// Track the amount of time wasted on updating the account from the trie
|
||||
if metrics.EnabledExpensive {
|
||||
defer func(start time.Time) { s.AccountUpdates += time.Since(start) }(time.Now())
|
||||
}
|
||||
// Encode the account and update the account trie
|
||||
addr := stateObject.Address()
|
||||
|
||||
data, err := rlp.EncodeToBytes(stateObject)
|
||||
if err != nil {
|
||||
panic(fmt.Errorf("can't encode object at %x: %v", addr[:], err))
|
||||
}
|
||||
s.setError(s.trie.TryUpdate(addr[:], data))
|
||||
}
|
||||
|
||||
// deleteStateObject removes the given object from the state trie.
|
||||
func (s *StateDB) deleteStateObject(stateObject *stateObject) {
|
||||
// Track the amount of time wasted on deleting the account from the trie
|
||||
if metrics.EnabledExpensive {
|
||||
defer func(start time.Time) { s.AccountUpdates += time.Since(start) }(time.Now())
|
||||
}
|
||||
// Delete the account from the trie
|
||||
stateObject.deleted = true
|
||||
|
||||
addr := stateObject.Address()
|
||||
s.setError(s.trie.TryDelete(addr[:]))
|
||||
}
|
||||
|
||||
// Retrieve a state object given by the address. Returns nil if not found.
|
||||
func (s *StateDB) getStateObject(addr common.Address) (stateObject *stateObject) {
|
||||
// Prefer live objects
|
||||
if obj := s.stateObjects[addr]; obj != nil {
|
||||
if obj.deleted {
|
||||
return nil
|
||||
}
|
||||
return obj
|
||||
}
|
||||
// Track the amount of time wasted on loading the object from the database
|
||||
if metrics.EnabledExpensive {
|
||||
defer func(start time.Time) { s.AccountReads += time.Since(start) }(time.Now())
|
||||
}
|
||||
// Load the object from the database
|
||||
enc, err := s.trie.TryGet(addr[:])
|
||||
if len(enc) == 0 {
|
||||
s.setError(err)
|
||||
return nil
|
||||
}
|
||||
var data Account
|
||||
if err := rlp.DecodeBytes(enc, &data); err != nil {
|
||||
log.Error("Failed to decode state object", "addr", addr, "err", err)
|
||||
return nil
|
||||
}
|
||||
// Insert into the live set
|
||||
obj := newObject(s, addr, data)
|
||||
s.setStateObject(obj)
|
||||
return obj
|
||||
}
|
||||
|
||||
func (self *StateDB) setStateObject(object *stateObject) {
|
||||
self.stateObjects[object.Address()] = object
|
||||
}
|
||||
|
||||
// Retrieve a state object or create a new state object if nil.
|
||||
func (self *StateDB) GetOrNewStateObject(addr common.Address) *stateObject {
|
||||
stateObject := self.getStateObject(addr)
|
||||
if stateObject == nil || stateObject.deleted {
|
||||
stateObject, _ = self.createObject(addr)
|
||||
}
|
||||
return stateObject
|
||||
}
|
||||
|
||||
// createObject creates a new state object. If there is an existing account with
|
||||
// the given address, it is overwritten and returned as the second return value.
|
||||
func (self *StateDB) createObject(addr common.Address) (newobj, prev *stateObject) {
|
||||
prev = self.getStateObject(addr)
|
||||
newobj = newObject(self, addr, Account{})
|
||||
newobj.setNonce(0) // sets the object to dirty
|
||||
if prev == nil {
|
||||
self.journal.append(createObjectChange{account: &addr})
|
||||
} else {
|
||||
self.journal.append(resetObjectChange{prev: prev})
|
||||
}
|
||||
self.setStateObject(newobj)
|
||||
return newobj, prev
|
||||
}
|
||||
|
||||
// CreateAccount explicitly creates a state object. If a state object with the address
|
||||
// already exists the balance is carried over to the new account.
|
||||
//
|
||||
// CreateAccount is called during the EVM CREATE operation. The situation might arise that
|
||||
// a contract does the following:
|
||||
//
|
||||
// 1. sends funds to sha(account ++ (nonce + 1))
|
||||
// 2. tx_create(sha(account ++ nonce)) (note that this gets the address of 1)
|
||||
//
|
||||
// Carrying over the balance ensures that Ether doesn't disappear.
|
||||
func (self *StateDB) CreateAccount(addr common.Address) {
|
||||
newObj, prev := self.createObject(addr)
|
||||
if prev != nil {
|
||||
newObj.setBalance(prev.data.Balance)
|
||||
}
|
||||
}
|
||||
|
||||
func (db *StateDB) ForEachStorage(addr common.Address, cb func(key, value common.Hash) bool) error {
|
||||
so := db.getStateObject(addr)
|
||||
if so == nil {
|
||||
return nil
|
||||
}
|
||||
it := trie.NewIterator(so.getTrie(db.db).NodeIterator(nil))
|
||||
|
||||
for it.Next() {
|
||||
key := common.BytesToHash(db.trie.GetKey(it.Key))
|
||||
if value, dirty := so.dirtyStorage[key]; dirty {
|
||||
if !cb(key, value) {
|
||||
return nil
|
||||
}
|
||||
continue
|
||||
}
|
||||
|
||||
if len(it.Value) > 0 {
|
||||
_, content, _, err := rlp.Split(it.Value)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if !cb(key, common.BytesToHash(content)) {
|
||||
return nil
|
||||
}
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
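A hypothetical iteration example for `ForEachStorage`: the callback sees dirty (uncommitted) slots overlaid on the committed trie contents, and returning `false` stops the walk early. The `dumpStorage` helper is illustrative only.

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/state"
)

// dumpStorage prints every storage slot of an account, dirty values taking
// precedence over the committed trie contents.
func dumpStorage(statedb *state.StateDB, addr common.Address) error {
	return statedb.ForEachStorage(addr, func(key, value common.Hash) bool {
		fmt.Printf("%x => %x\n", key, value)
		return true // keep iterating; return false to stop early
	})
}
```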
|
||||
|
||||
// Copy creates a deep, independent copy of the state.
|
||||
// Snapshots of the copied state cannot be applied to the copy.
|
||||
func (self *StateDB) Copy() *StateDB {
|
||||
// Copy all the basic fields, initialize the memory ones
|
||||
state := &StateDB{
|
||||
db: self.db,
|
||||
trie: self.db.CopyTrie(self.trie),
|
||||
stateObjects: make(map[common.Address]*stateObject, len(self.journal.dirties)),
|
||||
stateObjectsDirty: make(map[common.Address]struct{}, len(self.journal.dirties)),
|
||||
refund: self.refund,
|
||||
logs: make(map[common.Hash][]*types.Log, len(self.logs)),
|
||||
logSize: self.logSize,
|
||||
preimages: make(map[common.Hash][]byte, len(self.preimages)),
|
||||
journal: newJournal(),
|
||||
}
|
||||
// Copy the dirty states, logs, and preimages
|
||||
for addr := range self.journal.dirties {
|
||||
// As documented [here](https://github.com/ethereum/go-ethereum/pull/16485#issuecomment-380438527),
|
||||
// and in the Finalise-method, there is a case where an object is in the journal but not
|
||||
// in the stateObjects: OOG after touch on ripeMD prior to Byzantium. Thus, we need to check for
|
||||
// nil
|
||||
if object, exist := self.stateObjects[addr]; exist {
|
||||
state.stateObjects[addr] = object.deepCopy(state)
|
||||
state.stateObjectsDirty[addr] = struct{}{}
|
||||
}
|
||||
}
|
||||
// Above, we don't copy the actual journal. This means that if the copy is copied, the
|
||||
// loop above will be a no-op, since the copy's journal is empty.
|
||||
// Thus, here we iterate over stateObjects, to enable copies of copies
|
||||
for addr := range self.stateObjectsDirty {
|
||||
if _, exist := state.stateObjects[addr]; !exist {
|
||||
state.stateObjects[addr] = self.stateObjects[addr].deepCopy(state)
|
||||
state.stateObjectsDirty[addr] = struct{}{}
|
||||
}
|
||||
}
|
||||
for hash, logs := range self.logs {
|
||||
cpy := make([]*types.Log, len(logs))
|
||||
for i, l := range logs {
|
||||
cpy[i] = new(types.Log)
|
||||
*cpy[i] = *l
|
||||
}
|
||||
state.logs[hash] = cpy
|
||||
}
|
||||
for hash, preimage := range self.preimages {
|
||||
state.preimages[hash] = preimage
|
||||
}
|
||||
return state
|
||||
}
|
||||
|
||||
// Snapshot returns an identifier for the current revision of the state.
|
||||
func (self *StateDB) Snapshot() int {
|
||||
id := self.nextRevisionId
|
||||
self.nextRevisionId++
|
||||
self.validRevisions = append(self.validRevisions, revision{id, self.journal.length()})
|
||||
return id
|
||||
}
|
||||
|
||||
// RevertToSnapshot reverts all state changes made since the given revision.
|
||||
func (self *StateDB) RevertToSnapshot(revid int) {
|
||||
// Find the snapshot in the stack of valid snapshots.
|
||||
idx := sort.Search(len(self.validRevisions), func(i int) bool {
|
||||
return self.validRevisions[i].id >= revid
|
||||
})
|
||||
if idx == len(self.validRevisions) || self.validRevisions[idx].id != revid {
|
||||
panic(fmt.Errorf("revision id %v cannot be reverted", revid))
|
||||
}
|
||||
snapshot := self.validRevisions[idx].journalIndex
|
||||
|
||||
// Replay the journal to undo changes and remove invalidated snapshots
|
||||
self.journal.revert(self, snapshot)
|
||||
self.validRevisions = self.validRevisions[:idx]
|
||||
}
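The snapshot/revert pair is meant to wrap tentative state mutations. The sketch below, with a hypothetical `applyTentatively` helper, shows the pattern: record a revision id, mutate, and replay the journal backwards if the change should be discarded.

```go
package main

import (
	"math/big"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/state"
)

// applyTentatively credits an account and rolls the change back via the
// journal if the caller-supplied validity check fails.
func applyTentatively(statedb *state.StateDB, addr common.Address, amount *big.Int, valid func() bool) {
	snap := statedb.Snapshot()
	statedb.AddBalance(addr, amount)
	if !valid() {
		statedb.RevertToSnapshot(snap) // undoes the AddBalance via the journal
	}
}
```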
|
||||
|
||||
// GetRefund returns the current value of the refund counter.
|
||||
func (self *StateDB) GetRefund() uint64 {
|
||||
return self.refund
|
||||
}
|
||||
|
||||
// Finalise finalises the state by removing the self destructed objects
|
||||
// and clears the journal as well as the refunds.
|
||||
func (s *StateDB) Finalise(deleteEmptyObjects bool) {
|
||||
for addr := range s.journal.dirties {
|
||||
stateObject, exist := s.stateObjects[addr]
|
||||
if !exist {
|
||||
// ripeMD is 'touched' at block 1714175, in tx 0x1237f737031e40bcde4a8b7e717b2d15e3ecadfe49bb1bbc71ee9deb09c6fcf2
|
||||
// That tx goes out of gas, and although the notion of 'touched' does not exist there, the
|
||||
// touch-event will still be recorded in the journal. Since ripeMD is a special snowflake,
|
||||
// it will persist in the journal even though the journal is reverted. In this special circumstance,
|
||||
// it may exist in `s.journal.dirties` but not in `s.stateObjects`.
|
||||
// Thus, we can safely ignore it here
|
||||
continue
|
||||
}
|
||||
|
||||
if stateObject.suicided || (deleteEmptyObjects && stateObject.empty()) {
|
||||
s.deleteStateObject(stateObject)
|
||||
} else {
|
||||
stateObject.updateRoot(s.db)
|
||||
s.updateStateObject(stateObject)
|
||||
}
|
||||
s.stateObjectsDirty[addr] = struct{}{}
|
||||
}
|
||||
// Invalidate journal because reverting across transactions is not allowed.
|
||||
s.clearJournalAndRefund()
|
||||
}
|
||||
|
||||
// IntermediateRoot computes the current root hash of the state trie.
|
||||
// It is called in between transactions to get the root hash that
|
||||
// goes into transaction receipts.
|
||||
func (s *StateDB) IntermediateRoot(deleteEmptyObjects bool) common.Hash {
|
||||
s.Finalise(deleteEmptyObjects)
|
||||
|
||||
// Track the amount of time wasted on hashing the account trie
|
||||
if metrics.EnabledExpensive {
|
||||
defer func(start time.Time) { s.AccountHashes += time.Since(start) }(time.Now())
|
||||
}
|
||||
return s.trie.Hash()
|
||||
}
|
||||
|
||||
// Prepare sets the current transaction hash and index and block hash which is
|
||||
// used when the EVM emits new state logs.
|
||||
func (self *StateDB) Prepare(thash, bhash common.Hash, ti int) {
|
||||
self.thash = thash
|
||||
self.bhash = bhash
|
||||
self.txIndex = ti
|
||||
}
|
||||
|
||||
func (s *StateDB) clearJournalAndRefund() {
|
||||
s.journal = newJournal()
|
||||
s.validRevisions = s.validRevisions[:0]
|
||||
s.refund = 0
|
||||
}
|
||||
|
||||
// Commit writes the state to the underlying in-memory trie database.
|
||||
func (s *StateDB) Commit(deleteEmptyObjects bool) (root common.Hash, err error) {
|
||||
defer s.clearJournalAndRefund()
|
||||
|
||||
for addr := range s.journal.dirties {
|
||||
s.stateObjectsDirty[addr] = struct{}{}
|
||||
}
|
||||
// Commit objects to the trie, measuring the elapsed time
|
||||
for addr, stateObject := range s.stateObjects {
|
||||
_, isDirty := s.stateObjectsDirty[addr]
|
||||
switch {
|
||||
case stateObject.suicided || (isDirty && deleteEmptyObjects && stateObject.empty()):
|
||||
// If the object has been removed, don't bother syncing it
|
||||
// and just mark it for deletion in the trie.
|
||||
s.deleteStateObject(stateObject)
|
||||
case isDirty:
|
||||
// Write any contract code associated with the state object
|
||||
if stateObject.code != nil && stateObject.dirtyCode {
|
||||
s.db.TrieDB().InsertBlob(common.BytesToHash(stateObject.CodeHash()), stateObject.code)
|
||||
stateObject.dirtyCode = false
|
||||
}
|
||||
// Write any storage changes in the state object to its storage trie.
|
||||
if err := stateObject.CommitTrie(s.db); err != nil {
|
||||
return common.Hash{}, err
|
||||
}
|
||||
// Update the object in the main account trie.
|
||||
s.updateStateObject(stateObject)
|
||||
}
|
||||
delete(s.stateObjectsDirty, addr)
|
||||
}
|
||||
// Write the account trie changes, measuring the amount of wasted time
|
||||
if metrics.EnabledExpensive {
|
||||
defer func(start time.Time) { s.AccountCommits += time.Since(start) }(time.Now())
|
||||
}
|
||||
root, err = s.trie.Commit(func(leaf []byte, parent common.Hash) error {
|
||||
var account Account
|
||||
if err := rlp.DecodeBytes(leaf, &account); err != nil {
|
||||
return nil
|
||||
}
|
||||
if account.Root != emptyRoot {
|
||||
s.db.TrieDB().Reference(account.Root, parent)
|
||||
}
|
||||
code := common.BytesToHash(account.CodeHash)
|
||||
if code != emptyCode {
|
||||
s.db.TrieDB().Reference(code, parent)
|
||||
}
|
||||
return nil
|
||||
})
|
||||
return root, err
|
||||
}
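A hedged sketch of a typical call site: `Commit` only writes into the in-memory trie database, so persisting the nodes to disk is a separate step. The flush via `Database().TrieDB().Commit(root, false)` is an assumption about the backing trie database API at this vintage, and `commitAndFlush` is a hypothetical helper.

```go
package main

import (
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/state"
)

// commitAndFlush commits the dirty objects into the in-memory trie database,
// then flushes the resulting trie nodes to disk (assumed two-argument
// trie database Commit signature).
func commitAndFlush(statedb *state.StateDB) (common.Hash, error) {
	root, err := statedb.Commit(true) // true: drop empty accounts (EIP-158/161)
	if err != nil {
		return common.Hash{}, err
	}
	return root, statedb.Database().TrieDB().Commit(root, false)
}
```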
|
85
vendor/github.com/ethereum/go-ethereum/core/state_prefetcher.go
generated
vendored
Normal file
@ -0,0 +1,85 @@
|
||||
// Copyright 2019 The go-ethereum Authors
|
||||
// This file is part of the go-ethereum library.
|
||||
//
|
||||
// The go-ethereum library is free software: you can redistribute it and/or modify
|
||||
// it under the terms of the GNU Lesser General Public License as published by
|
||||
// the Free Software Foundation, either version 3 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// The go-ethereum library is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU Lesser General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU Lesser General Public License
|
||||
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
package core
|
||||
|
||||
import (
|
||||
"sync/atomic"
|
||||
|
||||
"github.com/ethereum/go-ethereum/common"
|
||||
"github.com/ethereum/go-ethereum/consensus"
|
||||
"github.com/ethereum/go-ethereum/core/state"
|
||||
"github.com/ethereum/go-ethereum/core/types"
|
||||
"github.com/ethereum/go-ethereum/core/vm"
|
||||
"github.com/ethereum/go-ethereum/params"
|
||||
)
|
||||
|
||||
// statePrefetcher is a basic Prefetcher, which blindly executes a block on top
|
||||
// of an arbitrary state with the goal of prefetching potentially useful state
|
||||
// data from disk before the main block processor starts executing.
|
||||
type statePrefetcher struct {
|
||||
config *params.ChainConfig // Chain configuration options
|
||||
bc *BlockChain // Canonical block chain
|
||||
engine consensus.Engine // Consensus engine used for block rewards
|
||||
}
|
||||
|
||||
// newStatePrefetcher initialises a new statePrefetcher.
|
||||
func newStatePrefetcher(config *params.ChainConfig, bc *BlockChain, engine consensus.Engine) *statePrefetcher {
|
||||
return &statePrefetcher{
|
||||
config: config,
|
||||
bc: bc,
|
||||
engine: engine,
|
||||
}
|
||||
}
|
||||
|
||||
// Prefetch processes the state changes according to the Ethereum rules by running
|
||||
// the transaction messages using the statedb, but any changes are discarded. The
|
||||
// only goal is to pre-cache transaction signatures and state trie nodes.
|
||||
func (p *statePrefetcher) Prefetch(block *types.Block, statedb *state.StateDB, cfg vm.Config, interrupt *uint32) {
|
||||
var (
|
||||
header = block.Header()
|
||||
gaspool = new(GasPool).AddGas(block.GasLimit())
|
||||
)
|
||||
// Iterate over and process the individual transactions
|
||||
for i, tx := range block.Transactions() {
|
||||
// If block precaching was interrupted, abort
|
||||
if interrupt != nil && atomic.LoadUint32(interrupt) == 1 {
|
||||
return
|
||||
}
|
||||
// Block precaching permitted to continue, execute the transaction
|
||||
statedb.Prepare(tx.Hash(), block.Hash(), i)
|
||||
if err := precacheTransaction(p.config, p.bc, nil, gaspool, statedb, header, tx, cfg); err != nil {
|
||||
return // Ugh, something went horribly wrong, bail out
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// precacheTransaction attempts to apply a transaction to the given state database
|
||||
// and uses the input parameters for its environment. The goal is not to execute
|
||||
// the transaction successfully, rather to warm up touched data slots.
|
||||
func precacheTransaction(config *params.ChainConfig, bc ChainContext, author *common.Address, gaspool *GasPool, statedb *state.StateDB, header *types.Header, tx *types.Transaction, cfg vm.Config) error {
|
||||
// Convert the transaction into an executable message and pre-cache its sender
|
||||
msg, err := tx.AsMessage(types.MakeSigner(config, header.Number))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
// Create the EVM and execute the transaction
|
||||
context := NewEVMContext(msg, header, bc, author)
|
||||
vm := vm.NewEVM(context, statedb, config, cfg)
|
||||
|
||||
_, _, _, err = ApplyMessage(vm, msg, gaspool)
|
||||
return err
|
||||
}
|
51
vendor/github.com/ethereum/go-ethereum/core/types.go
generated
vendored
Normal file
@ -0,0 +1,51 @@
|
||||
// Copyright 2015 The go-ethereum Authors
|
||||
// This file is part of the go-ethereum library.
|
||||
//
|
||||
// The go-ethereum library is free software: you can redistribute it and/or modify
|
||||
// it under the terms of the GNU Lesser General Public License as published by
|
||||
// the Free Software Foundation, either version 3 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// The go-ethereum library is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU Lesser General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU Lesser General Public License
|
||||
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
package core
|
||||
|
||||
import (
|
||||
"github.com/ethereum/go-ethereum/core/state"
|
||||
"github.com/ethereum/go-ethereum/core/types"
|
||||
"github.com/ethereum/go-ethereum/core/vm"
|
||||
)
|
||||
|
||||
// Validator is an interface which defines the standard for block validation. It
|
||||
// is only responsible for validating block contents, as the header validation is
|
||||
// done by the specific consensus engines.
|
||||
type Validator interface {
|
||||
// ValidateBody validates the given block's content.
|
||||
ValidateBody(block *types.Block) error
|
||||
|
||||
// ValidateState validates the given statedb and optionally the receipts and
|
||||
// gas used.
|
||||
ValidateState(block *types.Block, state *state.StateDB, receipts types.Receipts, usedGas uint64) error
|
||||
}
|
||||
|
||||
// Prefetcher is an interface for pre-caching transaction signatures and state.
|
||||
type Prefetcher interface {
|
||||
// Prefetch processes the state changes according to the Ethereum rules by running
|
||||
// the transaction messages using the statedb, but any changes are discarded. The
|
||||
// only goal is to pre-cache transaction signatures and state trie nodes.
|
||||
Prefetch(block *types.Block, statedb *state.StateDB, cfg vm.Config, interrupt *uint32)
|
||||
}
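A hypothetical wrapper showing how the `interrupt` flag of `Prefetch` is meant to be driven: run the prefetcher in the background and flip the flag so it bails out at the next transaction boundary. `prefetchWithCancel` is illustrative only.

```go
package main

import (
	"sync/atomic"

	"github.com/ethereum/go-ethereum/core"
	"github.com/ethereum/go-ethereum/core/state"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/core/vm"
)

// prefetchWithCancel launches Prefetch in the background and returns a cancel
// function; flipping the interrupt flag makes the prefetcher return at the
// next transaction boundary.
func prefetchWithCancel(p core.Prefetcher, block *types.Block, statedb *state.StateDB) (cancel func()) {
	var interrupt uint32
	go p.Prefetch(block, statedb, vm.Config{}, &interrupt)
	return func() { atomic.StoreUint32(&interrupt, 1) }
}
```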
|
||||
|
||||
// Processor is an interface for processing blocks using a given initial state.
|
||||
type Processor interface {
|
||||
// Process processes the state changes according to the Ethereum rules by running
|
||||
// the transaction messages using the statedb and applying any rewards to both
|
||||
// the processor (coinbase) and any included uncles.
|
||||
Process(block *types.Block, statedb *state.StateDB, cfg vm.Config) (types.Receipts, []*types.Log, uint64, error)
|
||||
}
|
393
vendor/github.com/ethereum/go-ethereum/core/types/block.go
generated
vendored
Normal file
@ -0,0 +1,393 @@
|
||||
// Copyright 2014 The go-ethereum Authors
|
||||
// This file is part of the go-ethereum library.
|
||||
//
|
||||
// The go-ethereum library is free software: you can redistribute it and/or modify
|
||||
// it under the terms of the GNU Lesser General Public License as published by
|
||||
// the Free Software Foundation, either version 3 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// The go-ethereum library is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU Lesser General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU Lesser General Public License
|
||||
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
// Package types contains data types related to Ethereum consensus.
|
||||
package types
|
||||
|
||||
import (
|
||||
"encoding/binary"
|
||||
"io"
|
||||
"math/big"
|
||||
"reflect"
|
||||
"sort"
|
||||
"sync/atomic"
|
||||
"time"
|
||||
|
||||
"github.com/ethereum/go-ethereum/common"
|
||||
"github.com/ethereum/go-ethereum/common/hexutil"
|
||||
"github.com/ethereum/go-ethereum/rlp"
|
||||
"golang.org/x/crypto/sha3"
|
||||
)
|
||||
|
||||
var (
|
||||
EmptyRootHash = DeriveSha(Transactions{})
|
||||
EmptyUncleHash = rlpHash([]*Header(nil))
|
||||
)
|
||||
|
||||
// A BlockNonce is a 64-bit hash which proves (combined with the
|
||||
// mix-hash) that a sufficient amount of computation has been carried
|
||||
// out on a block.
|
||||
type BlockNonce [8]byte
|
||||
|
||||
// EncodeNonce converts the given integer to a block nonce.
|
||||
func EncodeNonce(i uint64) BlockNonce {
|
||||
var n BlockNonce
|
||||
binary.BigEndian.PutUint64(n[:], i)
|
||||
return n
|
||||
}
|
||||
|
||||
// Uint64 returns the integer value of a block nonce.
|
||||
func (n BlockNonce) Uint64() uint64 {
|
||||
return binary.BigEndian.Uint64(n[:])
|
||||
}
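A minimal round-trip example for `BlockNonce`: the nonce is simply a big-endian `uint64` stored in an 8-byte array (the value below is an arbitrary 64-bit number used purely for illustration).

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/core/types"
)

func main() {
	// Encode a uint64 into a nonce and read it back.
	n := types.EncodeNonce(0xa13a5a8c8f2bb1c4)
	fmt.Printf("%x\n", n[:])                      // a13a5a8c8f2bb1c4
	fmt.Println(n.Uint64() == 0xa13a5a8c8f2bb1c4) // true
}
```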
|
||||
|
||||
// MarshalText encodes n as a hex string with 0x prefix.
|
||||
func (n BlockNonce) MarshalText() ([]byte, error) {
|
||||
return hexutil.Bytes(n[:]).MarshalText()
|
||||
}
|
||||
|
||||
// UnmarshalText implements encoding.TextUnmarshaler.
|
||||
func (n *BlockNonce) UnmarshalText(input []byte) error {
|
||||
return hexutil.UnmarshalFixedText("BlockNonce", input, n[:])
|
||||
}
|
||||
|
||||
//go:generate gencodec -type Header -field-override headerMarshaling -out gen_header_json.go
|
||||
|
||||
// Header represents a block header in the Ethereum blockchain.
|
||||
type Header struct {
|
||||
ParentHash common.Hash `json:"parentHash" gencodec:"required"`
|
||||
UncleHash common.Hash `json:"sha3Uncles" gencodec:"required"`
|
||||
Coinbase common.Address `json:"miner" gencodec:"required"`
|
||||
Root common.Hash `json:"stateRoot" gencodec:"required"`
|
||||
TxHash common.Hash `json:"transactionsRoot" gencodec:"required"`
|
||||
ReceiptHash common.Hash `json:"receiptsRoot" gencodec:"required"`
|
||||
Bloom Bloom `json:"logsBloom" gencodec:"required"`
|
||||
Difficulty *big.Int `json:"difficulty" gencodec:"required"`
|
||||
Number *big.Int `json:"number" gencodec:"required"`
|
||||
GasLimit uint64 `json:"gasLimit" gencodec:"required"`
|
||||
GasUsed uint64 `json:"gasUsed" gencodec:"required"`
|
||||
Time uint64 `json:"timestamp" gencodec:"required"`
|
||||
Extra []byte `json:"extraData" gencodec:"required"`
|
||||
MixDigest common.Hash `json:"mixHash"`
|
||||
Nonce BlockNonce `json:"nonce"`
|
||||
}
|
||||
|
||||
// field type overrides for gencodec
|
||||
type headerMarshaling struct {
|
||||
Difficulty *hexutil.Big
|
||||
Number *hexutil.Big
|
||||
GasLimit hexutil.Uint64
|
||||
GasUsed hexutil.Uint64
|
||||
Time hexutil.Uint64
|
||||
Extra hexutil.Bytes
|
||||
Hash common.Hash `json:"hash"` // adds call to Hash() in MarshalJSON
|
||||
}
|
||||
|
||||
// Hash returns the block hash of the header, which is simply the keccak256 hash of its
|
||||
// RLP encoding.
|
||||
func (h *Header) Hash() common.Hash {
|
||||
return rlpHash(h)
|
||||
}
|
||||
|
||||
var headerSize = common.StorageSize(reflect.TypeOf(Header{}).Size())
|
||||
|
||||
// Size returns the approximate memory used by all internal contents. It is used
|
||||
// to approximate and limit the memory consumption of various caches.
|
||||
func (h *Header) Size() common.StorageSize {
|
||||
return headerSize + common.StorageSize(len(h.Extra)+(h.Difficulty.BitLen()+h.Number.BitLen())/8)
|
||||
}
|
||||
|
||||
func rlpHash(x interface{}) (h common.Hash) {
|
||||
hw := sha3.NewLegacyKeccak256()
|
||||
rlp.Encode(hw, x)
|
||||
hw.Sum(h[:0])
|
||||
return h
|
||||
}
|
||||
|
||||
// Body is a simple (mutable, non-safe) data container for storing and moving
|
||||
// a block's data contents (transactions and uncles) together.
|
||||
type Body struct {
|
||||
Transactions []*Transaction
|
||||
Uncles []*Header
|
||||
}
|
||||
|
||||
// Block represents an entire block in the Ethereum blockchain.
|
||||
type Block struct {
|
||||
header *Header
|
||||
uncles []*Header
|
||||
transactions Transactions
|
||||
|
||||
// caches
|
||||
hash atomic.Value
|
||||
size atomic.Value
|
||||
|
||||
// Td is used by package core to store the total difficulty
|
||||
// of the chain up to and including the block.
|
||||
td *big.Int
|
||||
|
||||
// These fields are used by package eth to track
|
||||
// inter-peer block relay.
|
||||
ReceivedAt time.Time
|
||||
ReceivedFrom interface{}
|
||||
}
|
||||
|
||||
// DeprecatedTd is an old relic for extracting the TD of a block. It is in the
|
||||
// code solely to facilitate upgrading the database from the old format to the
|
||||
// new, after which it should be deleted. Do not use!
|
||||
func (b *Block) DeprecatedTd() *big.Int {
|
||||
return b.td
|
||||
}
|
||||
|
||||
// [deprecated by eth/63]
|
||||
// StorageBlock defines the RLP encoding of a Block stored in the
|
||||
// state database. The StorageBlock encoding contains fields that
|
||||
// would otherwise need to be recomputed.
|
||||
type StorageBlock Block
|
||||
|
||||
// "external" block encoding. used for eth protocol, etc.
|
||||
type extblock struct {
|
||||
Header *Header
|
||||
Txs []*Transaction
|
||||
Uncles []*Header
|
||||
}
|
||||
|
||||
// [deprecated by eth/63]
|
||||
// "storage" block encoding. used for database.
|
||||
type storageblock struct {
|
||||
Header *Header
|
||||
Txs []*Transaction
|
||||
Uncles []*Header
|
||||
TD *big.Int
|
||||
}
|
||||
|
||||
// NewBlock creates a new block. The input data is copied;
|
||||
// changes to header and to the field values will not affect the
|
||||
// block.
|
||||
//
|
||||
// The values of TxHash, UncleHash, ReceiptHash and Bloom in header
|
||||
// are ignored and set to values derived from the given txs, uncles
|
||||
// and receipts.
|
||||
func NewBlock(header *Header, txs []*Transaction, uncles []*Header, receipts []*Receipt) *Block {
|
||||
b := &Block{header: CopyHeader(header), td: new(big.Int)}
|
||||
|
||||
// TODO: panic if len(txs) != len(receipts)
|
||||
if len(txs) == 0 {
|
||||
b.header.TxHash = EmptyRootHash
|
||||
} else {
|
||||
b.header.TxHash = DeriveSha(Transactions(txs))
|
||||
b.transactions = make(Transactions, len(txs))
|
||||
copy(b.transactions, txs)
|
||||
}
|
||||
|
||||
if len(receipts) == 0 {
|
||||
b.header.ReceiptHash = EmptyRootHash
|
||||
} else {
|
||||
b.header.ReceiptHash = DeriveSha(Receipts(receipts))
|
||||
b.header.Bloom = CreateBloom(receipts)
|
||||
}
|
||||
|
||||
if len(uncles) == 0 {
|
||||
b.header.UncleHash = EmptyUncleHash
|
||||
} else {
|
||||
b.header.UncleHash = CalcUncleHash(uncles)
|
||||
b.uncles = make([]*Header, len(uncles))
|
||||
for i := range uncles {
|
||||
b.uncles[i] = CopyHeader(uncles[i])
|
||||
}
|
||||
}
|
||||
|
||||
return b
|
||||
}
|
||||
|
||||
// NewBlockWithHeader creates a block with the given header data. The
|
||||
// header data is copied; changes to header and to the field values
|
||||
// will not affect the block.
|
||||
func NewBlockWithHeader(header *Header) *Block {
|
||||
return &Block{header: CopyHeader(header)}
|
||||
}
|
||||
|
||||
// CopyHeader creates a deep copy of a block header to prevent side effects from
|
||||
// modifying a header variable.
|
||||
func CopyHeader(h *Header) *Header {
|
||||
cpy := *h
|
||||
if cpy.Difficulty = new(big.Int); h.Difficulty != nil {
|
||||
cpy.Difficulty.Set(h.Difficulty)
|
||||
}
|
||||
if cpy.Number = new(big.Int); h.Number != nil {
|
||||
cpy.Number.Set(h.Number)
|
||||
}
|
||||
if len(h.Extra) > 0 {
|
||||
cpy.Extra = make([]byte, len(h.Extra))
|
||||
copy(cpy.Extra, h.Extra)
|
||||
}
|
||||
return &cpy
|
||||
}
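A small example of the deep-copy semantics of `CopyHeader`: the big.Int and byte-slice fields are duplicated, so later mutation of the source header does not leak into the copy. This is an illustrative `main`, not code from the library.

```go
package main

import (
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum/core/types"
)

func main() {
	h := &types.Header{Number: big.NewInt(100), Difficulty: big.NewInt(1), Extra: []byte{1, 2, 3}}
	cpy := types.CopyHeader(h)

	// Mutate the original after copying.
	h.Number.SetUint64(200)
	h.Extra[0] = 9

	fmt.Println(cpy.Number, cpy.Extra) // 100 [1 2 3]
}
```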
|
||||
|
||||
// DecodeRLP decodes a block from the Ethereum RLP block format.
|
||||
func (b *Block) DecodeRLP(s *rlp.Stream) error {
|
||||
var eb extblock
|
||||
_, size, _ := s.Kind()
|
||||
if err := s.Decode(&eb); err != nil {
|
||||
return err
|
||||
}
|
||||
b.header, b.uncles, b.transactions = eb.Header, eb.Uncles, eb.Txs
|
||||
b.size.Store(common.StorageSize(rlp.ListSize(size)))
|
||||
return nil
|
||||
}
|
||||
|
||||
// EncodeRLP serializes b into the Ethereum RLP block format.
|
||||
func (b *Block) EncodeRLP(w io.Writer) error {
|
||||
return rlp.Encode(w, extblock{
|
||||
Header: b.header,
|
||||
Txs: b.transactions,
|
||||
Uncles: b.uncles,
|
||||
})
|
||||
}
|
||||
|
||||
// [deprecated by eth/63]
|
||||
func (b *StorageBlock) DecodeRLP(s *rlp.Stream) error {
|
||||
var sb storageblock
|
||||
if err := s.Decode(&sb); err != nil {
|
||||
return err
|
||||
}
|
||||
b.header, b.uncles, b.transactions, b.td = sb.Header, sb.Uncles, sb.Txs, sb.TD
|
||||
return nil
|
||||
}
|
||||
|
||||
// TODO: copies
|
||||
|
||||
func (b *Block) Uncles() []*Header { return b.uncles }
|
||||
func (b *Block) Transactions() Transactions { return b.transactions }
|
||||
|
||||
func (b *Block) Transaction(hash common.Hash) *Transaction {
|
||||
for _, transaction := range b.transactions {
|
||||
if transaction.Hash() == hash {
|
||||
return transaction
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (b *Block) Number() *big.Int { return new(big.Int).Set(b.header.Number) }
|
||||
func (b *Block) GasLimit() uint64 { return b.header.GasLimit }
|
||||
func (b *Block) GasUsed() uint64 { return b.header.GasUsed }
|
||||
func (b *Block) Difficulty() *big.Int { return new(big.Int).Set(b.header.Difficulty) }
|
||||
func (b *Block) Time() uint64 { return b.header.Time }
|
||||
|
||||
func (b *Block) NumberU64() uint64 { return b.header.Number.Uint64() }
|
||||
func (b *Block) MixDigest() common.Hash { return b.header.MixDigest }
|
||||
func (b *Block) Nonce() uint64 { return binary.BigEndian.Uint64(b.header.Nonce[:]) }
|
||||
func (b *Block) Bloom() Bloom { return b.header.Bloom }
|
||||
func (b *Block) Coinbase() common.Address { return b.header.Coinbase }
|
||||
func (b *Block) Root() common.Hash { return b.header.Root }
|
||||
func (b *Block) ParentHash() common.Hash { return b.header.ParentHash }
|
||||
func (b *Block) TxHash() common.Hash { return b.header.TxHash }
|
||||
func (b *Block) ReceiptHash() common.Hash { return b.header.ReceiptHash }
|
||||
func (b *Block) UncleHash() common.Hash { return b.header.UncleHash }
|
||||
func (b *Block) Extra() []byte { return common.CopyBytes(b.header.Extra) }
|
||||
|
||||
func (b *Block) Header() *Header { return CopyHeader(b.header) }
|
||||
|
||||
// Body returns the non-header content of the block.
|
||||
func (b *Block) Body() *Body { return &Body{b.transactions, b.uncles} }
|
||||
|
||||
// Size returns the true RLP encoded storage size of the block, either by encoding
|
||||
// and returning it, or returning a previously cached value.
|
||||
func (b *Block) Size() common.StorageSize {
|
||||
if size := b.size.Load(); size != nil {
|
||||
return size.(common.StorageSize)
|
||||
}
|
||||
c := writeCounter(0)
|
||||
rlp.Encode(&c, b)
|
||||
b.size.Store(common.StorageSize(c))
|
||||
return common.StorageSize(c)
|
||||
}
|
||||
|
||||
type writeCounter common.StorageSize
|
||||
|
||||
func (c *writeCounter) Write(b []byte) (int, error) {
|
||||
*c += writeCounter(len(b))
|
||||
return len(b), nil
|
||||
}
|
||||
|
||||
func CalcUncleHash(uncles []*Header) common.Hash {
|
||||
if len(uncles) == 0 {
|
||||
return EmptyUncleHash
|
||||
}
|
||||
return rlpHash(uncles)
|
||||
}
|
||||
|
||||
// WithSeal returns a new block with the data from b but the header replaced with
|
||||
// the sealed one.
|
||||
func (b *Block) WithSeal(header *Header) *Block {
|
||||
cpy := *header
|
||||
|
||||
return &Block{
|
||||
header: &cpy,
|
||||
transactions: b.transactions,
|
||||
uncles: b.uncles,
|
||||
}
|
||||
}
|
||||
|
||||
// WithBody returns a new block with the given transaction and uncle contents.
|
||||
func (b *Block) WithBody(transactions []*Transaction, uncles []*Header) *Block {
|
||||
block := &Block{
|
||||
header: CopyHeader(b.header),
|
||||
transactions: make([]*Transaction, len(transactions)),
|
||||
uncles: make([]*Header, len(uncles)),
|
||||
}
|
||||
copy(block.transactions, transactions)
|
||||
for i := range uncles {
|
||||
block.uncles[i] = CopyHeader(uncles[i])
|
||||
}
|
||||
return block
|
||||
}
|
||||
|
||||
// Hash returns the keccak256 hash of b's header.
|
||||
// The hash is computed on the first call and cached thereafter.
|
||||
func (b *Block) Hash() common.Hash {
|
||||
if hash := b.hash.Load(); hash != nil {
|
||||
return hash.(common.Hash)
|
||||
}
|
||||
v := b.header.Hash()
|
||||
b.hash.Store(v)
|
||||
return v
|
||||
}
|
||||
|
||||
type Blocks []*Block
|
||||
|
||||
type BlockBy func(b1, b2 *Block) bool
|
||||
|
||||
func (self BlockBy) Sort(blocks Blocks) {
|
||||
bs := blockSorter{
|
||||
blocks: blocks,
|
||||
by: self,
|
||||
}
|
||||
sort.Sort(bs)
|
||||
}
|
||||
|
||||
type blockSorter struct {
|
||||
blocks Blocks
|
||||
by func(b1, b2 *Block) bool
|
||||
}
|
||||
|
||||
func (self blockSorter) Len() int { return len(self.blocks) }
|
||||
func (self blockSorter) Swap(i, j int) {
|
||||
self.blocks[i], self.blocks[j] = self.blocks[j], self.blocks[i]
|
||||
}
|
||||
func (self blockSorter) Less(i, j int) bool { return self.by(self.blocks[i], self.blocks[j]) }
|
||||
|
||||
func Number(b1, b2 *Block) bool { return b1.header.Number.Cmp(b2.header.Number) < 0 }
|
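For reference, the BlockBy/Number helpers above plug straight into sort.Sort. A minimal usage sketch, assuming the vendored package is imported as github.com/ethereum/go-ethereum/core/types and that its NewBlockWithHeader constructor is available:

package main

import (
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum/core/types"
)

func main() {
	blocks := types.Blocks{
		types.NewBlockWithHeader(&types.Header{Number: big.NewInt(3)}),
		types.NewBlockWithHeader(&types.Header{Number: big.NewInt(1)}),
		types.NewBlockWithHeader(&types.Header{Number: big.NewInt(2)}),
	}
	// Number is the comparator defined above; BlockBy adapts it to sort.Sort.
	types.BlockBy(types.Number).Sort(blocks)
	for _, b := range blocks {
		fmt.Println(b.Number()) // 1, 2, 3
	}
}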
86
vendor/github.com/ethereum/go-ethereum/core/types/block_test.go
generated
vendored
Normal file
@ -0,0 +1,86 @@
|
||||
// Copyright 2014 The go-ethereum Authors
|
||||
// This file is part of the go-ethereum library.
|
||||
//
|
||||
// The go-ethereum library is free software: you can redistribute it and/or modify
|
||||
// it under the terms of the GNU Lesser General Public License as published by
|
||||
// the Free Software Foundation, either version 3 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// The go-ethereum library is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU Lesser General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU Lesser General Public License
|
||||
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
package types
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"fmt"
|
||||
"math/big"
|
||||
"reflect"
|
||||
"testing"
|
||||
|
||||
"github.com/ethereum/go-ethereum/common"
|
||||
"github.com/ethereum/go-ethereum/rlp"
|
||||
)
|
||||
|
||||
// from bcValidBlockTest.json, "SimpleTx"
|
||||
func TestBlockEncoding(t *testing.T) {
|
||||
blockEnc := common.FromHex("f90260f901f9a083cafc574e1f51ba9dc0568fc617a08ea2429fb384059c972f13b19fa1c8dd55a01dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347948888f1f195afa192cfee860698584c030f4c9db1a0ef1552a40b7165c3cd773806b9e0c165b75356e0314bf0706f279c729f51e017a05fe50b260da6308036625b850b5d6ced6d0a9f814c0688bc91ffb7b7a3a54b67a0bc37d79753ad738a6dac4921e57392f145d8887476de3f783dfa7edae9283e52b90100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000008302000001832fefd8825208845506eb0780a0bd4472abb6659ebe3ee06ee4d7b72a00a9f4d001caca51342001075469aff49888a13a5a8c8f2bb1c4f861f85f800a82c35094095e7baea6a6c7c4c2dfeb977efac326af552d870a801ba09bea4c4daac7c7c52e093e6a4c35dbbcf8856f1af7b059ba20253e70848d094fa08a8fae537ce25ed8cb5af9adac3f141af69bd515bd2ba031522df09b97dd72b1c0")
|
||||
var block Block
|
||||
if err := rlp.DecodeBytes(blockEnc, &block); err != nil {
|
||||
t.Fatal("decode error: ", err)
|
||||
}
|
||||
|
||||
check := func(f string, got, want interface{}) {
|
||||
if !reflect.DeepEqual(got, want) {
|
||||
t.Errorf("%s mismatch: got %v, want %v", f, got, want)
|
||||
}
|
||||
}
|
||||
check("Difficulty", block.Difficulty(), big.NewInt(131072))
|
||||
check("GasLimit", block.GasLimit(), uint64(3141592))
|
||||
check("GasUsed", block.GasUsed(), uint64(21000))
|
||||
check("Coinbase", block.Coinbase(), common.HexToAddress("8888f1f195afa192cfee860698584c030f4c9db1"))
|
||||
check("MixDigest", block.MixDigest(), common.HexToHash("bd4472abb6659ebe3ee06ee4d7b72a00a9f4d001caca51342001075469aff498"))
|
||||
check("Root", block.Root(), common.HexToHash("ef1552a40b7165c3cd773806b9e0c165b75356e0314bf0706f279c729f51e017"))
|
||||
check("Hash", block.Hash(), common.HexToHash("0a5843ac1cb04865017cb35a57b50b07084e5fcee39b5acadade33149f4fff9e"))
|
||||
check("Nonce", block.Nonce(), uint64(0xa13a5a8c8f2bb1c4))
|
||||
check("Time", block.Time(), uint64(1426516743))
|
||||
check("Size", block.Size(), common.StorageSize(len(blockEnc)))
|
||||
|
||||
tx1 := NewTransaction(0, common.HexToAddress("095e7baea6a6c7c4c2dfeb977efac326af552d87"), big.NewInt(10), 50000, big.NewInt(10), nil)
|
||||
|
||||
tx1, _ = tx1.WithSignature(HomesteadSigner{}, common.Hex2Bytes("9bea4c4daac7c7c52e093e6a4c35dbbcf8856f1af7b059ba20253e70848d094f8a8fae537ce25ed8cb5af9adac3f141af69bd515bd2ba031522df09b97dd72b100"))
|
||||
fmt.Println(block.Transactions()[0].Hash())
|
||||
fmt.Println(tx1.data)
|
||||
fmt.Println(tx1.Hash())
|
||||
check("len(Transactions)", len(block.Transactions()), 1)
|
||||
check("Transactions[0].Hash", block.Transactions()[0].Hash(), tx1.Hash())
|
||||
|
||||
ourBlockEnc, err := rlp.EncodeToBytes(&block)
|
||||
if err != nil {
|
||||
t.Fatal("encode error: ", err)
|
||||
}
|
||||
if !bytes.Equal(ourBlockEnc, blockEnc) {
|
||||
t.Errorf("encoded block mismatch:\ngot: %x\nwant: %x", ourBlockEnc, blockEnc)
|
||||
}
|
||||
}
|
||||
|
||||
func TestUncleHash(t *testing.T) {
|
||||
uncles := make([]*Header, 0)
|
||||
h := CalcUncleHash(uncles)
|
||||
exp := common.HexToHash("1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347")
|
||||
if h != exp {
|
||||
t.Fatalf("empty uncle hash is wrong, got %x != %x", h, exp)
|
||||
}
|
||||
}
|
||||
func BenchmarkUncleHash(b *testing.B) {
|
||||
uncles := make([]*Header, 0)
|
||||
b.ResetTimer()
|
||||
for i := 0; i < b.N; i++ {
|
||||
CalcUncleHash(uncles)
|
||||
}
|
||||
}
|
138
vendor/github.com/ethereum/go-ethereum/core/types/gen_header_json.go
generated
vendored
Normal file
@ -0,0 +1,138 @@
|
||||
// Code generated by github.com/fjl/gencodec. DO NOT EDIT.
|
||||
|
||||
package types
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"errors"
|
||||
"math/big"
|
||||
|
||||
"github.com/ethereum/go-ethereum/common"
|
||||
"github.com/ethereum/go-ethereum/common/hexutil"
|
||||
)
|
||||
|
||||
var _ = (*headerMarshaling)(nil)
|
||||
|
||||
// MarshalJSON marshals as JSON.
|
||||
func (h Header) MarshalJSON() ([]byte, error) {
|
||||
type Header struct {
|
||||
ParentHash common.Hash `json:"parentHash" gencodec:"required"`
|
||||
UncleHash common.Hash `json:"sha3Uncles" gencodec:"required"`
|
||||
Coinbase common.Address `json:"miner" gencodec:"required"`
|
||||
Root common.Hash `json:"stateRoot" gencodec:"required"`
|
||||
TxHash common.Hash `json:"transactionsRoot" gencodec:"required"`
|
||||
ReceiptHash common.Hash `json:"receiptsRoot" gencodec:"required"`
|
||||
Bloom Bloom `json:"logsBloom" gencodec:"required"`
|
||||
Difficulty *hexutil.Big `json:"difficulty" gencodec:"required"`
|
||||
Number *hexutil.Big `json:"number" gencodec:"required"`
|
||||
GasLimit hexutil.Uint64 `json:"gasLimit" gencodec:"required"`
|
||||
GasUsed hexutil.Uint64 `json:"gasUsed" gencodec:"required"`
|
||||
Time hexutil.Uint64 `json:"timestamp" gencodec:"required"`
|
||||
Extra hexutil.Bytes `json:"extraData" gencodec:"required"`
|
||||
MixDigest common.Hash `json:"mixHash"`
|
||||
Nonce BlockNonce `json:"nonce"`
|
||||
Hash common.Hash `json:"hash"`
|
||||
}
|
||||
var enc Header
|
||||
enc.ParentHash = h.ParentHash
|
||||
enc.UncleHash = h.UncleHash
|
||||
enc.Coinbase = h.Coinbase
|
||||
enc.Root = h.Root
|
||||
enc.TxHash = h.TxHash
|
||||
enc.ReceiptHash = h.ReceiptHash
|
||||
enc.Bloom = h.Bloom
|
||||
enc.Difficulty = (*hexutil.Big)(h.Difficulty)
|
||||
enc.Number = (*hexutil.Big)(h.Number)
|
||||
enc.GasLimit = hexutil.Uint64(h.GasLimit)
|
||||
enc.GasUsed = hexutil.Uint64(h.GasUsed)
|
||||
enc.Time = hexutil.Uint64(h.Time)
|
||||
enc.Extra = h.Extra
|
||||
enc.MixDigest = h.MixDigest
|
||||
enc.Nonce = h.Nonce
|
||||
enc.Hash = h.Hash()
|
||||
return json.Marshal(&enc)
|
||||
}
|
||||
|
||||
// UnmarshalJSON unmarshals from JSON.
|
||||
func (h *Header) UnmarshalJSON(input []byte) error {
|
||||
type Header struct {
|
||||
ParentHash *common.Hash `json:"parentHash" gencodec:"required"`
|
||||
UncleHash *common.Hash `json:"sha3Uncles" gencodec:"required"`
|
||||
Coinbase *common.Address `json:"miner" gencodec:"required"`
|
||||
Root *common.Hash `json:"stateRoot" gencodec:"required"`
|
||||
TxHash *common.Hash `json:"transactionsRoot" gencodec:"required"`
|
||||
ReceiptHash *common.Hash `json:"receiptsRoot" gencodec:"required"`
|
||||
Bloom *Bloom `json:"logsBloom" gencodec:"required"`
|
||||
Difficulty *hexutil.Big `json:"difficulty" gencodec:"required"`
|
||||
Number *hexutil.Big `json:"number" gencodec:"required"`
|
||||
GasLimit *hexutil.Uint64 `json:"gasLimit" gencodec:"required"`
|
||||
GasUsed *hexutil.Uint64 `json:"gasUsed" gencodec:"required"`
|
||||
Time *hexutil.Uint64 `json:"timestamp" gencodec:"required"`
|
||||
Extra *hexutil.Bytes `json:"extraData" gencodec:"required"`
|
||||
MixDigest *common.Hash `json:"mixHash"`
|
||||
Nonce *BlockNonce `json:"nonce"`
|
||||
}
|
||||
var dec Header
|
||||
if err := json.Unmarshal(input, &dec); err != nil {
|
||||
return err
|
||||
}
|
||||
if dec.ParentHash == nil {
|
||||
return errors.New("missing required field 'parentHash' for Header")
|
||||
}
|
||||
h.ParentHash = *dec.ParentHash
|
||||
if dec.UncleHash == nil {
|
||||
return errors.New("missing required field 'sha3Uncles' for Header")
|
||||
}
|
||||
h.UncleHash = *dec.UncleHash
|
||||
if dec.Coinbase == nil {
|
||||
return errors.New("missing required field 'miner' for Header")
|
||||
}
|
||||
h.Coinbase = *dec.Coinbase
|
||||
if dec.Root == nil {
|
||||
return errors.New("missing required field 'stateRoot' for Header")
|
||||
}
|
||||
h.Root = *dec.Root
|
||||
if dec.TxHash == nil {
|
||||
return errors.New("missing required field 'transactionsRoot' for Header")
|
||||
}
|
||||
h.TxHash = *dec.TxHash
|
||||
if dec.ReceiptHash == nil {
|
||||
return errors.New("missing required field 'receiptsRoot' for Header")
|
||||
}
|
||||
h.ReceiptHash = *dec.ReceiptHash
|
||||
if dec.Bloom == nil {
|
||||
return errors.New("missing required field 'logsBloom' for Header")
|
||||
}
|
||||
h.Bloom = *dec.Bloom
|
||||
if dec.Difficulty == nil {
|
||||
return errors.New("missing required field 'difficulty' for Header")
|
||||
}
|
||||
h.Difficulty = (*big.Int)(dec.Difficulty)
|
||||
if dec.Number == nil {
|
||||
return errors.New("missing required field 'number' for Header")
|
||||
}
|
||||
h.Number = (*big.Int)(dec.Number)
|
||||
if dec.GasLimit == nil {
|
||||
return errors.New("missing required field 'gasLimit' for Header")
|
||||
}
|
||||
h.GasLimit = uint64(*dec.GasLimit)
|
||||
if dec.GasUsed == nil {
|
||||
return errors.New("missing required field 'gasUsed' for Header")
|
||||
}
|
||||
h.GasUsed = uint64(*dec.GasUsed)
|
||||
if dec.Time == nil {
|
||||
return errors.New("missing required field 'timestamp' for Header")
|
||||
}
|
||||
h.Time = uint64(*dec.Time)
|
||||
if dec.Extra == nil {
|
||||
return errors.New("missing required field 'extraData' for Header")
|
||||
}
|
||||
h.Extra = *dec.Extra
|
||||
if dec.MixDigest != nil {
|
||||
h.MixDigest = *dec.MixDigest
|
||||
}
|
||||
if dec.Nonce != nil {
|
||||
h.Nonce = *dec.Nonce
|
||||
}
|
||||
return nil
|
||||
}
|
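The generated codec above maps the Go field names onto their JSON-RPC names (parentHash, sha3Uncles, miner, and so on) with hex-encoded numbers. A small sketch of marshalling a Header through it, again assuming the vendored package is importable as github.com/ethereum/go-ethereum/core/types:

package main

import (
	"encoding/json"
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum/core/types"
)

func main() {
	h := &types.Header{
		Number:     big.NewInt(1),
		Difficulty: big.NewInt(131072),
		GasLimit:   3141592,
		Time:       1426516743,
		Extra:      []byte{},
	}
	out, err := json.Marshal(h) // dispatches to the generated MarshalJSON above
	if err != nil {
		panic(err)
	}
	// Field names follow the JSON-RPC form, e.g. "number":"0x1", "gasLimit":"0x2fefd8".
	fmt.Println(string(out))
}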
513
vendor/github.com/ethereum/go-ethereum/core/vm/gas_table.go
generated
vendored
Normal file
@ -0,0 +1,513 @@
|
||||
// Copyright 2017 The go-ethereum Authors
|
||||
// This file is part of the go-ethereum library.
|
||||
//
|
||||
// The go-ethereum library is free software: you can redistribute it and/or modify
|
||||
// it under the terms of the GNU Lesser General Public License as published by
|
||||
// the Free Software Foundation, either version 3 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// The go-ethereum library is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU Lesser General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU Lesser General Public License
|
||||
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
package vm
|
||||
|
||||
import (
|
||||
"github.com/ethereum/go-ethereum/common"
|
||||
"github.com/ethereum/go-ethereum/common/math"
|
||||
"github.com/ethereum/go-ethereum/params"
|
||||
)
|
||||
|
||||
// memoryGasCost calculates the quadratic gas for memory expansion. It does so
|
||||
// only for the memory region that is expanded, not the total memory.
|
||||
func memoryGasCost(mem *Memory, newMemSize uint64) (uint64, error) {
|
||||
if newMemSize == 0 {
|
||||
return 0, nil
|
||||
}
|
||||
// The maximum that will fit in a uint64 is max_word_count - 1. Anything above
|
||||
// that will result in an overflow. Additionally, a newMemSize which results in
|
||||
// a newMemSizeWords larger than 0xFFFFFFFF will cause the square operation to
|
||||
// overflow. The constant 0x1FFFFFFFE0 is the highest number that can be used
|
||||
// without overflowing the gas calculation.
|
||||
if newMemSize > 0x1FFFFFFFE0 {
|
||||
return 0, errGasUintOverflow
|
||||
}
|
||||
newMemSizeWords := toWordSize(newMemSize)
|
||||
newMemSize = newMemSizeWords * 32
|
||||
|
||||
if newMemSize > uint64(mem.Len()) {
|
||||
square := newMemSizeWords * newMemSizeWords
|
||||
linCoef := newMemSizeWords * params.MemoryGas
|
||||
quadCoef := square / params.QuadCoeffDiv
|
||||
newTotalFee := linCoef + quadCoef
|
||||
|
||||
fee := newTotalFee - mem.lastGasCost
|
||||
mem.lastGasCost = newTotalFee
|
||||
|
||||
return fee, nil
|
||||
}
|
||||
return 0, nil
|
||||
}
|
||||
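Worked numbers make the formula above concrete: with MemoryGas = 3 and QuadCoeffDiv = 512 (the values these params constants carry on mainnet, stated here as an assumption), expanding memory to m words costs m*3 + m*m/512 in total, and memoryGasCost charges only the delta against lastGasCost. A standalone re-derivation with the constants inlined:

package main

import "fmt"

// memFee re-derives the total memory fee for a size of `words` 32-byte words,
// assuming MemoryGas = 3 and QuadCoeffDiv = 512.
func memFee(words uint64) uint64 {
	return words*3 + (words*words)/512
}

func main() {
	// Growing from 0 to 32 words (1 KiB) costs 32*3 + 32*32/512 = 98 gas;
	// memoryGasCost would charge this minus whatever was already paid (lastGasCost).
	fmt.Println(memFee(32) - memFee(0))
}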
|
||||
func gasCallDataCopy(gt params.GasTable, evm *EVM, contract *Contract, stack *Stack, mem *Memory, memorySize uint64) (uint64, error) {
|
||||
gas, err := memoryGasCost(mem, memorySize)
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
|
||||
var overflow bool
|
||||
if gas, overflow = math.SafeAdd(gas, GasFastestStep); overflow {
|
||||
return 0, errGasUintOverflow
|
||||
}
|
||||
|
||||
words, overflow := bigUint64(stack.Back(2))
|
||||
if overflow {
|
||||
return 0, errGasUintOverflow
|
||||
}
|
||||
|
||||
if words, overflow = math.SafeMul(toWordSize(words), params.CopyGas); overflow {
|
||||
return 0, errGasUintOverflow
|
||||
}
|
||||
|
||||
if gas, overflow = math.SafeAdd(gas, words); overflow {
|
||||
return 0, errGasUintOverflow
|
||||
}
|
||||
return gas, nil
|
||||
}
|
||||
|
||||
func gasReturnDataCopy(gt params.GasTable, evm *EVM, contract *Contract, stack *Stack, mem *Memory, memorySize uint64) (uint64, error) {
|
||||
gas, err := memoryGasCost(mem, memorySize)
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
|
||||
var overflow bool
|
||||
if gas, overflow = math.SafeAdd(gas, GasFastestStep); overflow {
|
||||
return 0, errGasUintOverflow
|
||||
}
|
||||
|
||||
words, overflow := bigUint64(stack.Back(2))
|
||||
if overflow {
|
||||
return 0, errGasUintOverflow
|
||||
}
|
||||
|
||||
if words, overflow = math.SafeMul(toWordSize(words), params.CopyGas); overflow {
|
||||
return 0, errGasUintOverflow
|
||||
}
|
||||
|
||||
if gas, overflow = math.SafeAdd(gas, words); overflow {
|
||||
return 0, errGasUintOverflow
|
||||
}
|
||||
return gas, nil
|
||||
}
|
||||
|
||||
func gasSStore(gt params.GasTable, evm *EVM, contract *Contract, stack *Stack, mem *Memory, memorySize uint64) (uint64, error) {
|
||||
var (
|
||||
y, x = stack.Back(1), stack.Back(0)
|
||||
current = evm.StateDB.GetState(contract.Address(), common.BigToHash(x))
|
||||
)
|
||||
// The legacy gas metering only takes into consideration the current state
|
||||
// Legacy rules should be applied if we are in Petersburg (removal of EIP-1283)
|
||||
// OR Constantinople is not active
|
||||
if evm.chainRules.IsPetersburg || !evm.chainRules.IsConstantinople {
|
||||
// This checks for 3 scenarios and calculates gas accordingly:
|
||||
//
|
||||
// 1. From a zero-value address to a non-zero value (NEW VALUE)
|
||||
// 2. From a non-zero value address to a zero-value address (DELETE)
|
||||
// 3. From a non-zero to a non-zero (CHANGE)
|
||||
switch {
|
||||
case current == (common.Hash{}) && y.Sign() != 0: // 0 => non 0
|
||||
return params.SstoreSetGas, nil
|
||||
case current != (common.Hash{}) && y.Sign() == 0: // non 0 => 0
|
||||
evm.StateDB.AddRefund(params.SstoreRefundGas)
|
||||
return params.SstoreClearGas, nil
|
||||
default: // non 0 => non 0 (or 0 => 0)
|
||||
return params.SstoreResetGas, nil
|
||||
}
|
||||
}
|
||||
// The new gas metering is based on net gas costs (EIP-1283):
|
||||
//
|
||||
// 1. If current value equals new value (this is a no-op), 200 gas is deducted.
|
||||
// 2. If current value does not equal new value
|
||||
// 2.1. If original value equals current value (this storage slot has not been changed by the current execution context)
|
||||
// 2.1.1. If original value is 0, 20000 gas is deducted.
|
||||
// 2.1.2. Otherwise, 5000 gas is deducted. If new value is 0, add 15000 gas to refund counter.
|
||||
// 2.2. If original value does not equal current value (this storage slot is dirty), 200 gas is deducted. Apply both of the following clauses.
|
||||
// 2.2.1. If original value is not 0
|
||||
// 2.2.1.1. If current value is 0 (also means that new value is not 0), remove 15000 gas from refund counter. We can prove that refund counter will never go below 0.
|
||||
// 2.2.1.2. If new value is 0 (also means that current value is not 0), add 15000 gas to refund counter.
|
||||
// 2.2.2. If original value equals new value (this storage slot is reset)
|
||||
// 2.2.2.1. If original value is 0, add 19800 gas to refund counter.
|
||||
// 2.2.2.2. Otherwise, add 4800 gas to refund counter.
|
||||
value := common.BigToHash(y)
|
||||
if current == value { // noop (1)
|
||||
return params.NetSstoreNoopGas, nil
|
||||
}
|
||||
original := evm.StateDB.GetCommittedState(contract.Address(), common.BigToHash(x))
|
||||
if original == current {
|
||||
if original == (common.Hash{}) { // create slot (2.1.1)
|
||||
return params.NetSstoreInitGas, nil
|
||||
}
|
||||
if value == (common.Hash{}) { // delete slot (2.1.2b)
|
||||
evm.StateDB.AddRefund(params.NetSstoreClearRefund)
|
||||
}
|
||||
return params.NetSstoreCleanGas, nil // write existing slot (2.1.2)
|
||||
}
|
||||
if original != (common.Hash{}) {
|
||||
if current == (common.Hash{}) { // recreate slot (2.2.1.1)
|
||||
evm.StateDB.SubRefund(params.NetSstoreClearRefund)
|
||||
} else if value == (common.Hash{}) { // delete slot (2.2.1.2)
|
||||
evm.StateDB.AddRefund(params.NetSstoreClearRefund)
|
||||
}
|
||||
}
|
||||
if original == value {
|
||||
if original == (common.Hash{}) { // reset to original inexistent slot (2.2.2.1)
|
||||
evm.StateDB.AddRefund(params.NetSstoreResetClearRefund)
|
||||
} else { // reset to original existing slot (2.2.2.2)
|
||||
evm.StateDB.AddRefund(params.NetSstoreResetRefund)
|
||||
}
|
||||
}
|
||||
return params.NetSstoreDirtyGas, nil
|
||||
}
|
||||
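The numbered comment block above is easier to follow as a bare decision tree. A hypothetical sketch that reduces the original/current/new comparison to plain strings and returns only the gas charged (refund bookkeeping omitted), with the EIP-1283 constants from the comment inlined:

package main

import "fmt"

// Gas charged per the rules spelled out in the comment above, refunds left out.
const (
	noopGas  = 200   // 1:     current == new
	initGas  = 20000 // 2.1.1: slot was empty and is still clean
	cleanGas = 5000  // 2.1.2: slot was non-empty and is still clean
	dirtyGas = 200   // 2.2:   slot already changed in this execution context
)

func netSStoreGas(original, current, newVal string) uint64 {
	if current == newVal {
		return noopGas
	}
	if original == current {
		if original == "" { // "" stands in for the all-zero hash
			return initGas
		}
		return cleanGas
	}
	return dirtyGas
}

func main() {
	fmt.Println(netSStoreGas("", "", "01"))     // 20000: creating a fresh slot
	fmt.Println(netSStoreGas("01", "01", ""))   // 5000: clearing a clean slot (plus a 15000 refund)
	fmt.Println(netSStoreGas("01", "02", "03")) // 200: the slot is already dirty
}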
|
||||
func makeGasLog(n uint64) gasFunc {
|
||||
return func(gt params.GasTable, evm *EVM, contract *Contract, stack *Stack, mem *Memory, memorySize uint64) (uint64, error) {
|
||||
requestedSize, overflow := bigUint64(stack.Back(1))
|
||||
if overflow {
|
||||
return 0, errGasUintOverflow
|
||||
}
|
||||
|
||||
gas, err := memoryGasCost(mem, memorySize)
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
|
||||
if gas, overflow = math.SafeAdd(gas, params.LogGas); overflow {
|
||||
return 0, errGasUintOverflow
|
||||
}
|
||||
if gas, overflow = math.SafeAdd(gas, n*params.LogTopicGas); overflow {
|
||||
return 0, errGasUintOverflow
|
||||
}
|
||||
|
||||
var memorySizeGas uint64
|
||||
if memorySizeGas, overflow = math.SafeMul(requestedSize, params.LogDataGas); overflow {
|
||||
return 0, errGasUintOverflow
|
||||
}
|
||||
if gas, overflow = math.SafeAdd(gas, memorySizeGas); overflow {
|
||||
return 0, errGasUintOverflow
|
||||
}
|
||||
return gas, nil
|
||||
}
|
||||
}
|
||||
|
||||
func gasSha3(gt params.GasTable, evm *EVM, contract *Contract, stack *Stack, mem *Memory, memorySize uint64) (uint64, error) {
|
||||
var overflow bool
|
||||
gas, err := memoryGasCost(mem, memorySize)
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
|
||||
if gas, overflow = math.SafeAdd(gas, params.Sha3Gas); overflow {
|
||||
return 0, errGasUintOverflow
|
||||
}
|
||||
|
||||
wordGas, overflow := bigUint64(stack.Back(1))
|
||||
if overflow {
|
||||
return 0, errGasUintOverflow
|
||||
}
|
||||
if wordGas, overflow = math.SafeMul(toWordSize(wordGas), params.Sha3WordGas); overflow {
|
||||
return 0, errGasUintOverflow
|
||||
}
|
||||
if gas, overflow = math.SafeAdd(gas, wordGas); overflow {
|
||||
return 0, errGasUintOverflow
|
||||
}
|
||||
return gas, nil
|
||||
}
|
||||
|
||||
func gasCodeCopy(gt params.GasTable, evm *EVM, contract *Contract, stack *Stack, mem *Memory, memorySize uint64) (uint64, error) {
|
||||
gas, err := memoryGasCost(mem, memorySize)
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
|
||||
var overflow bool
|
||||
if gas, overflow = math.SafeAdd(gas, GasFastestStep); overflow {
|
||||
return 0, errGasUintOverflow
|
||||
}
|
||||
|
||||
wordGas, overflow := bigUint64(stack.Back(2))
|
||||
if overflow {
|
||||
return 0, errGasUintOverflow
|
||||
}
|
||||
if wordGas, overflow = math.SafeMul(toWordSize(wordGas), params.CopyGas); overflow {
|
||||
return 0, errGasUintOverflow
|
||||
}
|
||||
if gas, overflow = math.SafeAdd(gas, wordGas); overflow {
|
||||
return 0, errGasUintOverflow
|
||||
}
|
||||
return gas, nil
|
||||
}
|
||||
|
||||
func gasExtCodeCopy(gt params.GasTable, evm *EVM, contract *Contract, stack *Stack, mem *Memory, memorySize uint64) (uint64, error) {
|
||||
gas, err := memoryGasCost(mem, memorySize)
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
|
||||
var overflow bool
|
||||
if gas, overflow = math.SafeAdd(gas, gt.ExtcodeCopy); overflow {
|
||||
return 0, errGasUintOverflow
|
||||
}
|
||||
|
||||
wordGas, overflow := bigUint64(stack.Back(3))
|
||||
if overflow {
|
||||
return 0, errGasUintOverflow
|
||||
}
|
||||
|
||||
if wordGas, overflow = math.SafeMul(toWordSize(wordGas), params.CopyGas); overflow {
|
||||
return 0, errGasUintOverflow
|
||||
}
|
||||
|
||||
if gas, overflow = math.SafeAdd(gas, wordGas); overflow {
|
||||
return 0, errGasUintOverflow
|
||||
}
|
||||
return gas, nil
|
||||
}
|
||||
|
||||
func gasExtCodeHash(gt params.GasTable, evm *EVM, contract *Contract, stack *Stack, mem *Memory, memorySize uint64) (uint64, error) {
|
||||
return gt.ExtcodeHash, nil
|
||||
}
|
||||
|
||||
func gasMLoad(gt params.GasTable, evm *EVM, contract *Contract, stack *Stack, mem *Memory, memorySize uint64) (uint64, error) {
|
||||
var overflow bool
|
||||
gas, err := memoryGasCost(mem, memorySize)
|
||||
if err != nil {
|
||||
return 0, errGasUintOverflow
|
||||
}
|
||||
if gas, overflow = math.SafeAdd(gas, GasFastestStep); overflow {
|
||||
return 0, errGasUintOverflow
|
||||
}
|
||||
return gas, nil
|
||||
}
|
||||
|
||||
func gasMStore8(gt params.GasTable, evm *EVM, contract *Contract, stack *Stack, mem *Memory, memorySize uint64) (uint64, error) {
|
||||
var overflow bool
|
||||
gas, err := memoryGasCost(mem, memorySize)
|
||||
if err != nil {
|
||||
return 0, errGasUintOverflow
|
||||
}
|
||||
if gas, overflow = math.SafeAdd(gas, GasFastestStep); overflow {
|
||||
return 0, errGasUintOverflow
|
||||
}
|
||||
return gas, nil
|
||||
}
|
||||
|
||||
func gasMStore(gt params.GasTable, evm *EVM, contract *Contract, stack *Stack, mem *Memory, memorySize uint64) (uint64, error) {
|
||||
var overflow bool
|
||||
gas, err := memoryGasCost(mem, memorySize)
|
||||
if err != nil {
|
||||
return 0, errGasUintOverflow
|
||||
}
|
||||
if gas, overflow = math.SafeAdd(gas, GasFastestStep); overflow {
|
||||
return 0, errGasUintOverflow
|
||||
}
|
||||
return gas, nil
|
||||
}
|
||||
|
||||
func gasCreate(gt params.GasTable, evm *EVM, contract *Contract, stack *Stack, mem *Memory, memorySize uint64) (uint64, error) {
|
||||
var overflow bool
|
||||
gas, err := memoryGasCost(mem, memorySize)
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
if gas, overflow = math.SafeAdd(gas, params.CreateGas); overflow {
|
||||
return 0, errGasUintOverflow
|
||||
}
|
||||
return gas, nil
|
||||
}
|
||||
|
||||
func gasCreate2(gt params.GasTable, evm *EVM, contract *Contract, stack *Stack, mem *Memory, memorySize uint64) (uint64, error) {
|
||||
var overflow bool
|
||||
gas, err := memoryGasCost(mem, memorySize)
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
if gas, overflow = math.SafeAdd(gas, params.Create2Gas); overflow {
|
||||
return 0, errGasUintOverflow
|
||||
}
|
||||
wordGas, overflow := bigUint64(stack.Back(2))
|
||||
if overflow {
|
||||
return 0, errGasUintOverflow
|
||||
}
|
||||
if wordGas, overflow = math.SafeMul(toWordSize(wordGas), params.Sha3WordGas); overflow {
|
||||
return 0, errGasUintOverflow
|
||||
}
|
||||
if gas, overflow = math.SafeAdd(gas, wordGas); overflow {
|
||||
return 0, errGasUintOverflow
|
||||
}
|
||||
|
||||
return gas, nil
|
||||
}
|
||||
|
||||
func gasBalance(gt params.GasTable, evm *EVM, contract *Contract, stack *Stack, mem *Memory, memorySize uint64) (uint64, error) {
|
||||
return gt.Balance, nil
|
||||
}
|
||||
|
||||
func gasExtCodeSize(gt params.GasTable, evm *EVM, contract *Contract, stack *Stack, mem *Memory, memorySize uint64) (uint64, error) {
|
||||
return gt.ExtcodeSize, nil
|
||||
}
|
||||
|
||||
func gasSLoad(gt params.GasTable, evm *EVM, contract *Contract, stack *Stack, mem *Memory, memorySize uint64) (uint64, error) {
|
||||
return gt.SLoad, nil
|
||||
}
|
||||
|
||||
func gasExp(gt params.GasTable, evm *EVM, contract *Contract, stack *Stack, mem *Memory, memorySize uint64) (uint64, error) {
|
||||
expByteLen := uint64((stack.data[stack.len()-2].BitLen() + 7) / 8)
|
||||
|
||||
var (
|
||||
gas = expByteLen * gt.ExpByte // no overflow check required. Max is 256 * ExpByte gas
|
||||
overflow bool
|
||||
)
|
||||
if gas, overflow = math.SafeAdd(gas, params.ExpGas); overflow {
|
||||
return 0, errGasUintOverflow
|
||||
}
|
||||
return gas, nil
|
||||
}
|
||||
|
||||
func gasCall(gt params.GasTable, evm *EVM, contract *Contract, stack *Stack, mem *Memory, memorySize uint64) (uint64, error) {
|
||||
var (
|
||||
gas = gt.Calls
|
||||
transfersValue = stack.Back(2).Sign() != 0
|
||||
address = common.BigToAddress(stack.Back(1))
|
||||
eip158 = evm.ChainConfig().IsEIP158(evm.BlockNumber)
|
||||
)
|
||||
if eip158 {
|
||||
if transfersValue && evm.StateDB.Empty(address) {
|
||||
gas += params.CallNewAccountGas
|
||||
}
|
||||
} else if !evm.StateDB.Exist(address) {
|
||||
gas += params.CallNewAccountGas
|
||||
}
|
||||
if transfersValue {
|
||||
gas += params.CallValueTransferGas
|
||||
}
|
||||
memoryGas, err := memoryGasCost(mem, memorySize)
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
var overflow bool
|
||||
if gas, overflow = math.SafeAdd(gas, memoryGas); overflow {
|
||||
return 0, errGasUintOverflow
|
||||
}
|
||||
|
||||
evm.callGasTemp, err = callGas(gt, contract.Gas, gas, stack.Back(0))
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
if gas, overflow = math.SafeAdd(gas, evm.callGasTemp); overflow {
|
||||
return 0, errGasUintOverflow
|
||||
}
|
||||
return gas, nil
|
||||
}
|
||||
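gasCall adds back evm.callGasTemp, which callGas (defined elsewhere in this package, not in this hunk) caps under the EIP-150 "all but one 64th" rule once that fork is active. A hedged, illustrative stand-in for that cap, not the vendored implementation:

package main

import "fmt"

// callGasCap mirrors the EIP-150 rule: of the gas remaining after the base cost,
// at most 63/64 can be forwarded to the callee.
func callGasCap(availableGas, baseCost, requested uint64) uint64 {
	remaining := availableGas - baseCost
	limit := remaining - remaining/64
	if requested > limit {
		return limit
	}
	return requested
}

func main() {
	// With 100000 gas remaining and a 700 gas base cost, at most 97749 gas can be forwarded.
	fmt.Println(callGasCap(100000, 700, 1000000))
}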
|
||||
func gasCallCode(gt params.GasTable, evm *EVM, contract *Contract, stack *Stack, mem *Memory, memorySize uint64) (uint64, error) {
|
||||
gas := gt.Calls
|
||||
if stack.Back(2).Sign() != 0 {
|
||||
gas += params.CallValueTransferGas
|
||||
}
|
||||
memoryGas, err := memoryGasCost(mem, memorySize)
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
var overflow bool
|
||||
if gas, overflow = math.SafeAdd(gas, memoryGas); overflow {
|
||||
return 0, errGasUintOverflow
|
||||
}
|
||||
|
||||
evm.callGasTemp, err = callGas(gt, contract.Gas, gas, stack.Back(0))
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
if gas, overflow = math.SafeAdd(gas, evm.callGasTemp); overflow {
|
||||
return 0, errGasUintOverflow
|
||||
}
|
||||
return gas, nil
|
||||
}
|
||||
|
||||
func gasReturn(gt params.GasTable, evm *EVM, contract *Contract, stack *Stack, mem *Memory, memorySize uint64) (uint64, error) {
|
||||
return memoryGasCost(mem, memorySize)
|
||||
}
|
||||
|
||||
func gasRevert(gt params.GasTable, evm *EVM, contract *Contract, stack *Stack, mem *Memory, memorySize uint64) (uint64, error) {
|
||||
return memoryGasCost(mem, memorySize)
|
||||
}
|
||||
|
||||
func gasSuicide(gt params.GasTable, evm *EVM, contract *Contract, stack *Stack, mem *Memory, memorySize uint64) (uint64, error) {
|
||||
var gas uint64
|
||||
// EIP150 homestead gas reprice fork:
|
||||
if evm.ChainConfig().IsEIP150(evm.BlockNumber) {
|
||||
gas = gt.Suicide
|
||||
var (
|
||||
address = common.BigToAddress(stack.Back(0))
|
||||
eip158 = evm.ChainConfig().IsEIP158(evm.BlockNumber)
|
||||
)
|
||||
|
||||
if eip158 {
|
||||
// if empty and transfers value
|
||||
if evm.StateDB.Empty(address) && evm.StateDB.GetBalance(contract.Address()).Sign() != 0 {
|
||||
gas += gt.CreateBySuicide
|
||||
}
|
||||
} else if !evm.StateDB.Exist(address) {
|
||||
gas += gt.CreateBySuicide
|
||||
}
|
||||
}
|
||||
|
||||
if !evm.StateDB.HasSuicided(contract.Address()) {
|
||||
evm.StateDB.AddRefund(params.SuicideRefundGas)
|
||||
}
|
||||
return gas, nil
|
||||
}
|
||||
|
||||
func gasDelegateCall(gt params.GasTable, evm *EVM, contract *Contract, stack *Stack, mem *Memory, memorySize uint64) (uint64, error) {
|
||||
gas, err := memoryGasCost(mem, memorySize)
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
var overflow bool
|
||||
if gas, overflow = math.SafeAdd(gas, gt.Calls); overflow {
|
||||
return 0, errGasUintOverflow
|
||||
}
|
||||
|
||||
evm.callGasTemp, err = callGas(gt, contract.Gas, gas, stack.Back(0))
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
if gas, overflow = math.SafeAdd(gas, evm.callGasTemp); overflow {
|
||||
return 0, errGasUintOverflow
|
||||
}
|
||||
return gas, nil
|
||||
}
|
||||
|
||||
func gasStaticCall(gt params.GasTable, evm *EVM, contract *Contract, stack *Stack, mem *Memory, memorySize uint64) (uint64, error) {
|
||||
gas, err := memoryGasCost(mem, memorySize)
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
var overflow bool
|
||||
if gas, overflow = math.SafeAdd(gas, gt.Calls); overflow {
|
||||
return 0, errGasUintOverflow
|
||||
}
|
||||
|
||||
evm.callGasTemp, err = callGas(gt, contract.Gas, gas, stack.Back(0))
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
if gas, overflow = math.SafeAdd(gas, evm.callGasTemp); overflow {
|
||||
return 0, errGasUintOverflow
|
||||
}
|
||||
return gas, nil
|
||||
}
|
39
vendor/github.com/ethereum/go-ethereum/core/vm/gas_table_test.go
generated
vendored
Normal file
@ -0,0 +1,39 @@
|
||||
// Copyright 2017 The go-ethereum Authors
|
||||
// This file is part of the go-ethereum library.
|
||||
//
|
||||
// The go-ethereum library is free software: you can redistribute it and/or modify
|
||||
// it under the terms of the GNU Lesser General Public License as published by
|
||||
// the Free Software Foundation, either version 3 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// The go-ethereum library is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU Lesser General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU Lesser General Public License
|
||||
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
package vm
|
||||
|
||||
import "testing"
|
||||
|
||||
func TestMemoryGasCost(t *testing.T) {
|
||||
tests := []struct {
|
||||
size uint64
|
||||
cost uint64
|
||||
overflow bool
|
||||
}{
|
||||
{0x1fffffffe0, 36028809887088637, false},
|
||||
{0x1fffffffe1, 0, true},
|
||||
}
|
||||
for i, tt := range tests {
|
||||
v, err := memoryGasCost(&Memory{}, tt.size)
|
||||
if (err == errGasUintOverflow) != tt.overflow {
|
||||
t.Errorf("test %d: overflow mismatch: have %v, want %v", i, err == errGasUintOverflow, tt.overflow)
|
||||
}
|
||||
if v != tt.cost {
|
||||
t.Errorf("test %d: gas cost mismatch: have %v, want %v", i, v, tt.cost)
|
||||
}
|
||||
}
|
||||
}
|
640
vendor/github.com/ethereum/go-ethereum/core/vm/instructions_test.go
generated
vendored
Normal file
@ -0,0 +1,640 @@
|
||||
// Copyright 2017 The go-ethereum Authors
|
||||
// This file is part of the go-ethereum library.
|
||||
//
|
||||
// The go-ethereum library is free software: you can redistribute it and/or modify
|
||||
// it under the terms of the GNU Lesser General Public License as published by
|
||||
// the Free Software Foundation, either version 3 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// The go-ethereum library is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU Lesser General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU Lesser General Public License
|
||||
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
package vm
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"io/ioutil"
|
||||
"math/big"
|
||||
"testing"
|
||||
|
||||
"github.com/ethereum/go-ethereum/common"
|
||||
"github.com/ethereum/go-ethereum/crypto"
|
||||
"github.com/ethereum/go-ethereum/params"
|
||||
)
|
||||
|
||||
type TwoOperandTestcase struct {
|
||||
X string
|
||||
Y string
|
||||
Expected string
|
||||
}
|
||||
|
||||
type twoOperandParams struct {
|
||||
x string
|
||||
y string
|
||||
}
|
||||
|
||||
var commonParams []*twoOperandParams
|
||||
var twoOpMethods map[string]executionFunc
|
||||
|
||||
func init() {
|
||||
|
||||
// Params is a list of common edge cases that should be used for some common tests
|
||||
params := []string{
|
||||
"0000000000000000000000000000000000000000000000000000000000000000", // 0
|
||||
"0000000000000000000000000000000000000000000000000000000000000001", // +1
|
||||
"0000000000000000000000000000000000000000000000000000000000000005", // +5
|
||||
"7ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffe", // + max -1
|
||||
"7fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff", // + max
|
||||
"8000000000000000000000000000000000000000000000000000000000000000", // - max
|
||||
"8000000000000000000000000000000000000000000000000000000000000001", // - max+1
|
||||
"fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffb", // - 5
|
||||
"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff", // - 1
|
||||
}
|
||||
// Params are combined so each param is used on each 'side'
|
||||
commonParams = make([]*twoOperandParams, len(params)*len(params))
|
||||
for i, x := range params {
|
||||
for j, y := range params {
|
||||
commonParams[i*len(params)+j] = &twoOperandParams{x, y}
|
||||
}
|
||||
}
|
||||
twoOpMethods = map[string]executionFunc{
|
||||
"add": opAdd,
|
||||
"sub": opSub,
|
||||
"mul": opMul,
|
||||
"div": opDiv,
|
||||
"sdiv": opSdiv,
|
||||
"mod": opMod,
|
||||
"smod": opSmod,
|
||||
"exp": opExp,
|
||||
"signext": opSignExtend,
|
||||
"lt": opLt,
|
||||
"gt": opGt,
|
||||
"slt": opSlt,
|
||||
"sgt": opSgt,
|
||||
"eq": opEq,
|
||||
"and": opAnd,
|
||||
"or": opOr,
|
||||
"xor": opXor,
|
||||
"byte": opByte,
|
||||
"shl": opSHL,
|
||||
"shr": opSHR,
|
||||
"sar": opSAR,
|
||||
}
|
||||
}
|
||||
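The init above builds the full cartesian product of the parameter list, laid out flat at index i*len(params)+j. A tiny standalone check of that layout on a two-element slice:

package main

import "fmt"

func main() {
	params := []string{"a", "b"}
	pairs := make([][2]string, len(params)*len(params))
	for i, x := range params {
		for j, y := range params {
			pairs[i*len(params)+j] = [2]string{x, y}
		}
	}
	fmt.Println(pairs) // [[a a] [a b] [b a] [b b]]
}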
|
||||
func testTwoOperandOp(t *testing.T, tests []TwoOperandTestcase, opFn executionFunc, name string) {
|
||||
|
||||
var (
|
||||
env = NewEVM(Context{}, nil, params.TestChainConfig, Config{})
|
||||
stack = newstack()
|
||||
pc = uint64(0)
|
||||
evmInterpreter = env.interpreter.(*EVMInterpreter)
|
||||
)
|
||||
// Stuff a couple of nonzero bigints into pool, to ensure that ops do not rely on pooled integers to be zero
|
||||
evmInterpreter.intPool = poolOfIntPools.get()
|
||||
evmInterpreter.intPool.put(big.NewInt(-1337))
|
||||
evmInterpreter.intPool.put(big.NewInt(-1337))
|
||||
evmInterpreter.intPool.put(big.NewInt(-1337))
|
||||
|
||||
for i, test := range tests {
|
||||
x := new(big.Int).SetBytes(common.Hex2Bytes(test.X))
|
||||
y := new(big.Int).SetBytes(common.Hex2Bytes(test.Y))
|
||||
expected := new(big.Int).SetBytes(common.Hex2Bytes(test.Expected))
|
||||
stack.push(x)
|
||||
stack.push(y)
|
||||
opFn(&pc, evmInterpreter, nil, nil, stack)
|
||||
actual := stack.pop()
|
||||
|
||||
if actual.Cmp(expected) != 0 {
|
||||
t.Errorf("Testcase %v %d, %v(%x, %x): expected %x, got %x", name, i, name, x, y, expected, actual)
|
||||
}
|
||||
// Check pool usage
|
||||
// 1.pool is not allowed to contain anything on the stack
|
||||
// 2.pool is not allowed to contain the same pointers twice
|
||||
if evmInterpreter.intPool.pool.len() > 0 {
|
||||
|
||||
poolvals := make(map[*big.Int]struct{})
|
||||
poolvals[actual] = struct{}{}
|
||||
|
||||
for evmInterpreter.intPool.pool.len() > 0 {
|
||||
key := evmInterpreter.intPool.get()
|
||||
if _, exist := poolvals[key]; exist {
|
||||
t.Errorf("Testcase %v %d, pool contains double-entry", name, i)
|
||||
}
|
||||
poolvals[key] = struct{}{}
|
||||
}
|
||||
}
|
||||
}
|
||||
poolOfIntPools.put(evmInterpreter.intPool)
|
||||
}
|
||||
|
||||
func TestByteOp(t *testing.T) {
|
||||
tests := []TwoOperandTestcase{
|
||||
{"ABCDEF0908070605040302010000000000000000000000000000000000000000", "00", "AB"},
|
||||
{"ABCDEF0908070605040302010000000000000000000000000000000000000000", "01", "CD"},
|
||||
{"00CDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff", "00", "00"},
|
||||
{"00CDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff", "01", "CD"},
|
||||
{"0000000000000000000000000000000000000000000000000000000000102030", "1F", "30"},
|
||||
{"0000000000000000000000000000000000000000000000000000000000102030", "1E", "20"},
|
||||
{"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff", "20", "00"},
|
||||
{"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff", "FFFFFFFFFFFFFFFF", "00"},
|
||||
}
|
||||
testTwoOperandOp(t, tests, opByte, "byte")
|
||||
}
|
||||
|
||||
func TestSHL(t *testing.T) {
|
||||
// Testcases from https://github.com/ethereum/EIPs/blob/master/EIPS/eip-145.md#shl-shift-left
|
||||
tests := []TwoOperandTestcase{
|
||||
{"0000000000000000000000000000000000000000000000000000000000000001", "01", "0000000000000000000000000000000000000000000000000000000000000002"},
|
||||
{"0000000000000000000000000000000000000000000000000000000000000001", "ff", "8000000000000000000000000000000000000000000000000000000000000000"},
|
||||
{"0000000000000000000000000000000000000000000000000000000000000001", "0100", "0000000000000000000000000000000000000000000000000000000000000000"},
|
||||
{"0000000000000000000000000000000000000000000000000000000000000001", "0101", "0000000000000000000000000000000000000000000000000000000000000000"},
|
||||
{"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff", "00", "ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"},
|
||||
{"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff", "01", "fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffe"},
|
||||
{"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff", "ff", "8000000000000000000000000000000000000000000000000000000000000000"},
|
||||
{"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff", "0100", "0000000000000000000000000000000000000000000000000000000000000000"},
|
||||
{"0000000000000000000000000000000000000000000000000000000000000000", "01", "0000000000000000000000000000000000000000000000000000000000000000"},
|
||||
{"7fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff", "01", "fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffe"},
|
||||
}
|
||||
testTwoOperandOp(t, tests, opSHL, "shl")
|
||||
}
|
||||
|
||||
func TestSHR(t *testing.T) {
|
||||
// Testcases from https://github.com/ethereum/EIPs/blob/master/EIPS/eip-145.md#shr-logical-shift-right
|
||||
tests := []TwoOperandTestcase{
|
||||
{"0000000000000000000000000000000000000000000000000000000000000001", "00", "0000000000000000000000000000000000000000000000000000000000000001"},
|
||||
{"0000000000000000000000000000000000000000000000000000000000000001", "01", "0000000000000000000000000000000000000000000000000000000000000000"},
|
||||
{"8000000000000000000000000000000000000000000000000000000000000000", "01", "4000000000000000000000000000000000000000000000000000000000000000"},
|
||||
{"8000000000000000000000000000000000000000000000000000000000000000", "ff", "0000000000000000000000000000000000000000000000000000000000000001"},
|
||||
{"8000000000000000000000000000000000000000000000000000000000000000", "0100", "0000000000000000000000000000000000000000000000000000000000000000"},
|
||||
{"8000000000000000000000000000000000000000000000000000000000000000", "0101", "0000000000000000000000000000000000000000000000000000000000000000"},
|
||||
{"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff", "00", "ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"},
|
||||
{"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff", "01", "7fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"},
|
||||
{"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff", "ff", "0000000000000000000000000000000000000000000000000000000000000001"},
|
||||
{"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff", "0100", "0000000000000000000000000000000000000000000000000000000000000000"},
|
||||
{"0000000000000000000000000000000000000000000000000000000000000000", "01", "0000000000000000000000000000000000000000000000000000000000000000"},
|
||||
}
|
||||
testTwoOperandOp(t, tests, opSHR, "shr")
|
||||
}
|
||||
|
||||
func TestSAR(t *testing.T) {
|
||||
// Testcases from https://github.com/ethereum/EIPs/blob/master/EIPS/eip-145.md#sar-arithmetic-shift-right
|
||||
tests := []TwoOperandTestcase{
|
||||
{"0000000000000000000000000000000000000000000000000000000000000001", "00", "0000000000000000000000000000000000000000000000000000000000000001"},
|
||||
{"0000000000000000000000000000000000000000000000000000000000000001", "01", "0000000000000000000000000000000000000000000000000000000000000000"},
|
||||
{"8000000000000000000000000000000000000000000000000000000000000000", "01", "c000000000000000000000000000000000000000000000000000000000000000"},
|
||||
{"8000000000000000000000000000000000000000000000000000000000000000", "ff", "ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"},
|
||||
{"8000000000000000000000000000000000000000000000000000000000000000", "0100", "ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"},
|
||||
{"8000000000000000000000000000000000000000000000000000000000000000", "0101", "ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"},
|
||||
{"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff", "00", "ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"},
|
||||
{"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff", "01", "ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"},
|
||||
{"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff", "ff", "ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"},
|
||||
{"ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff", "0100", "ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"},
|
||||
{"0000000000000000000000000000000000000000000000000000000000000000", "01", "0000000000000000000000000000000000000000000000000000000000000000"},
|
||||
{"4000000000000000000000000000000000000000000000000000000000000000", "fe", "0000000000000000000000000000000000000000000000000000000000000001"},
|
||||
{"7fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff", "f8", "000000000000000000000000000000000000000000000000000000000000007f"},
|
||||
{"7fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff", "fe", "0000000000000000000000000000000000000000000000000000000000000001"},
|
||||
{"7fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff", "ff", "0000000000000000000000000000000000000000000000000000000000000000"},
|
||||
{"7fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff", "0100", "0000000000000000000000000000000000000000000000000000000000000000"},
|
||||
}
|
||||
|
||||
testTwoOperandOp(t, tests, opSAR, "sar")
|
||||
}
|
||||
|
||||
// getResult is a convenience function to generate the expected values
|
||||
func getResult(args []*twoOperandParams, opFn executionFunc) []TwoOperandTestcase {
|
||||
var (
|
||||
env = NewEVM(Context{}, nil, params.TestChainConfig, Config{})
|
||||
stack = newstack()
|
||||
pc = uint64(0)
|
||||
interpreter = env.interpreter.(*EVMInterpreter)
|
||||
)
|
||||
interpreter.intPool = poolOfIntPools.get()
|
||||
result := make([]TwoOperandTestcase, len(args))
|
||||
for i, param := range args {
|
||||
x := new(big.Int).SetBytes(common.Hex2Bytes(param.x))
|
||||
y := new(big.Int).SetBytes(common.Hex2Bytes(param.y))
|
||||
stack.push(x)
|
||||
stack.push(y)
|
||||
opFn(&pc, interpreter, nil, nil, stack)
|
||||
actual := stack.pop()
|
||||
result[i] = TwoOperandTestcase{param.x, param.y, fmt.Sprintf("%064x", actual)}
|
||||
}
|
||||
return result
|
||||
}
|
||||
|
||||
// utility function to fill the json-file with testcases
|
||||
// Enable this test to generate the 'testcases_xx.json' files
|
||||
func xTestWriteExpectedValues(t *testing.T) {
|
||||
for name, method := range twoOpMethods {
|
||||
data, err := json.Marshal(getResult(commonParams, method))
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
_ = ioutil.WriteFile(fmt.Sprintf("testdata/testcases_%v.json", name), data, 0644)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
}
|
||||
t.Fatal("This test should not be activated")
|
||||
}
|
||||
|
||||
// TestJsonTestcases runs through all the testcases defined as json-files
|
||||
func TestJsonTestcases(t *testing.T) {
|
||||
for name := range twoOpMethods {
|
||||
data, err := ioutil.ReadFile(fmt.Sprintf("testdata/testcases_%v.json", name))
|
||||
if err != nil {
|
||||
t.Fatal("Failed to read file", err)
|
||||
}
|
||||
var testcases []TwoOperandTestcase
|
||||
json.Unmarshal(data, &testcases)
|
||||
testTwoOperandOp(t, testcases, twoOpMethods[name], name)
|
||||
}
|
||||
}
|
||||
|
||||
func opBenchmark(bench *testing.B, op func(pc *uint64, interpreter *EVMInterpreter, contract *Contract, memory *Memory, stack *Stack) ([]byte, error), args ...string) {
|
||||
var (
|
||||
env = NewEVM(Context{}, nil, params.TestChainConfig, Config{})
|
||||
stack = newstack()
|
||||
evmInterpreter = NewEVMInterpreter(env, env.vmConfig)
|
||||
)
|
||||
|
||||
env.interpreter = evmInterpreter
|
||||
evmInterpreter.intPool = poolOfIntPools.get()
|
||||
// convert args
|
||||
byteArgs := make([][]byte, len(args))
|
||||
for i, arg := range args {
|
||||
byteArgs[i] = common.Hex2Bytes(arg)
|
||||
}
|
||||
pc := uint64(0)
|
||||
bench.ResetTimer()
|
||||
for i := 0; i < bench.N; i++ {
|
||||
for _, arg := range byteArgs {
|
||||
a := new(big.Int).SetBytes(arg)
|
||||
stack.push(a)
|
||||
}
|
||||
op(&pc, evmInterpreter, nil, nil, stack)
|
||||
stack.pop()
|
||||
}
|
||||
poolOfIntPools.put(evmInterpreter.intPool)
|
||||
}
|
||||
|
||||
func BenchmarkOpAdd64(b *testing.B) {
|
||||
x := "ffffffff"
|
||||
y := "fd37f3e2bba2c4f"
|
||||
|
||||
opBenchmark(b, opAdd, x, y)
|
||||
}
|
||||
|
||||
func BenchmarkOpAdd128(b *testing.B) {
|
||||
x := "ffffffffffffffff"
|
||||
y := "f5470b43c6549b016288e9a65629687"
|
||||
|
||||
opBenchmark(b, opAdd, x, y)
|
||||
}
|
||||
|
||||
func BenchmarkOpAdd256(b *testing.B) {
|
||||
x := "0802431afcbce1fc194c9eaa417b2fb67dc75a95db0bc7ec6b1c8af11df6a1da9"
|
||||
y := "a1f5aac137876480252e5dcac62c354ec0d42b76b0642b6181ed099849ea1d57"
|
||||
|
||||
opBenchmark(b, opAdd, x, y)
|
||||
}
|
||||
|
||||
func BenchmarkOpSub64(b *testing.B) {
|
||||
x := "51022b6317003a9d"
|
||||
y := "a20456c62e00753a"
|
||||
|
||||
opBenchmark(b, opSub, x, y)
|
||||
}
|
||||
|
||||
func BenchmarkOpSub128(b *testing.B) {
|
||||
x := "4dde30faaacdc14d00327aac314e915d"
|
||||
y := "9bbc61f5559b829a0064f558629d22ba"
|
||||
|
||||
opBenchmark(b, opSub, x, y)
|
||||
}
|
||||
|
||||
func BenchmarkOpSub256(b *testing.B) {
|
||||
x := "4bfcd8bb2ac462735b48a17580690283980aa2d679f091c64364594df113ea37"
|
||||
y := "97f9b1765588c4e6b69142eb00d20507301545acf3e1238c86c8b29be227d46e"
|
||||
|
||||
opBenchmark(b, opSub, x, y)
|
||||
}
|
||||
|
||||
func BenchmarkOpMul(b *testing.B) {
|
||||
x := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff"
|
||||
y := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff"
|
||||
|
||||
opBenchmark(b, opMul, x, y)
|
||||
}
|
||||
|
||||
func BenchmarkOpDiv256(b *testing.B) {
|
||||
x := "ff3f9014f20db29ae04af2c2d265de17"
|
||||
y := "fe7fb0d1f59dfe9492ffbf73683fd1e870eec79504c60144cc7f5fc2bad1e611"
|
||||
opBenchmark(b, opDiv, x, y)
|
||||
}
|
||||
|
||||
func BenchmarkOpDiv128(b *testing.B) {
|
||||
x := "fdedc7f10142ff97"
|
||||
y := "fbdfda0e2ce356173d1993d5f70a2b11"
|
||||
opBenchmark(b, opDiv, x, y)
|
||||
}
|
||||
|
||||
func BenchmarkOpDiv64(b *testing.B) {
|
||||
x := "fcb34eb3"
|
||||
y := "f97180878e839129"
|
||||
opBenchmark(b, opDiv, x, y)
|
||||
}
|
||||
|
||||
func BenchmarkOpSdiv(b *testing.B) {
|
||||
x := "ff3f9014f20db29ae04af2c2d265de17"
|
||||
y := "fe7fb0d1f59dfe9492ffbf73683fd1e870eec79504c60144cc7f5fc2bad1e611"
|
||||
|
||||
opBenchmark(b, opSdiv, x, y)
|
||||
}
|
||||
|
||||
func BenchmarkOpMod(b *testing.B) {
|
||||
x := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff"
|
||||
y := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff"
|
||||
|
||||
opBenchmark(b, opMod, x, y)
|
||||
}
|
||||
|
||||
func BenchmarkOpSmod(b *testing.B) {
|
||||
x := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff"
|
||||
y := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff"
|
||||
|
||||
opBenchmark(b, opSmod, x, y)
|
||||
}
|
||||
|
||||
func BenchmarkOpExp(b *testing.B) {
|
||||
x := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff"
|
||||
y := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff"
|
||||
|
||||
opBenchmark(b, opExp, x, y)
|
||||
}
|
||||
|
||||
func BenchmarkOpSignExtend(b *testing.B) {
|
||||
x := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff"
|
||||
y := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff"
|
||||
|
||||
opBenchmark(b, opSignExtend, x, y)
|
||||
}
|
||||
|
||||
func BenchmarkOpLt(b *testing.B) {
|
||||
x := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff"
|
||||
y := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff"
|
||||
|
||||
opBenchmark(b, opLt, x, y)
|
||||
}
|
||||
|
||||
func BenchmarkOpGt(b *testing.B) {
|
||||
x := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff"
|
||||
y := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff"
|
||||
|
||||
opBenchmark(b, opGt, x, y)
|
||||
}
|
||||
|
||||
func BenchmarkOpSlt(b *testing.B) {
|
||||
x := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff"
|
||||
y := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff"
|
||||
|
||||
opBenchmark(b, opSlt, x, y)
|
||||
}
|
||||
|
||||
func BenchmarkOpSgt(b *testing.B) {
|
||||
x := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff"
|
||||
y := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff"
|
||||
|
||||
opBenchmark(b, opSgt, x, y)
|
||||
}
|
||||
|
||||
func BenchmarkOpEq(b *testing.B) {
|
||||
x := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff"
|
||||
y := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff"
|
||||
|
||||
opBenchmark(b, opEq, x, y)
|
||||
}
|
||||
func BenchmarkOpEq2(b *testing.B) {
|
||||
x := "FBCDEF090807060504030201ffffffffFBCDEF090807060504030201ffffffff"
|
||||
y := "FBCDEF090807060504030201ffffffffFBCDEF090807060504030201fffffffe"
|
||||
opBenchmark(b, opEq, x, y)
|
||||
}
|
||||
func BenchmarkOpAnd(b *testing.B) {
|
||||
x := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff"
|
||||
y := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff"
|
||||
|
||||
opBenchmark(b, opAnd, x, y)
|
||||
}
|
||||
|
||||
func BenchmarkOpOr(b *testing.B) {
|
||||
x := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff"
|
||||
y := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff"
|
||||
|
||||
opBenchmark(b, opOr, x, y)
|
||||
}
|
||||
|
||||
func BenchmarkOpXor(b *testing.B) {
|
||||
x := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff"
|
||||
y := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff"
|
||||
|
||||
opBenchmark(b, opXor, x, y)
|
||||
}
|
||||
|
||||
func BenchmarkOpByte(b *testing.B) {
|
||||
x := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff"
|
||||
y := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff"
|
||||
|
||||
opBenchmark(b, opByte, x, y)
|
||||
}
|
||||
|
||||
func BenchmarkOpAddmod(b *testing.B) {
|
||||
x := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff"
|
||||
y := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff"
|
||||
z := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff"
|
||||
|
||||
opBenchmark(b, opAddmod, x, y, z)
|
||||
}
|
||||
|
||||
func BenchmarkOpMulmod(b *testing.B) {
|
||||
x := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff"
|
||||
y := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff"
|
||||
z := "ABCDEF090807060504030201ffffffffffffffffffffffffffffffffffffffff"
|
||||
|
||||
opBenchmark(b, opMulmod, x, y, z)
|
||||
}
|
||||
|
||||
func BenchmarkOpSHL(b *testing.B) {
|
||||
x := "FBCDEF090807060504030201ffffffffFBCDEF090807060504030201ffffffff"
|
||||
y := "ff"
|
||||
|
||||
opBenchmark(b, opSHL, x, y)
|
||||
}
|
||||
func BenchmarkOpSHR(b *testing.B) {
|
||||
x := "FBCDEF090807060504030201ffffffffFBCDEF090807060504030201ffffffff"
|
||||
y := "ff"
|
||||
|
||||
opBenchmark(b, opSHR, x, y)
|
||||
}
|
||||
func BenchmarkOpSAR(b *testing.B) {
|
||||
x := "FBCDEF090807060504030201ffffffffFBCDEF090807060504030201ffffffff"
|
||||
y := "ff"
|
||||
|
||||
opBenchmark(b, opSAR, x, y)
|
||||
}
|
||||
func BenchmarkOpIsZero(b *testing.B) {
|
||||
x := "FBCDEF090807060504030201ffffffffFBCDEF090807060504030201ffffffff"
|
||||
opBenchmark(b, opIszero, x)
|
||||
}
|
||||
|
||||
func TestOpMstore(t *testing.T) {
|
||||
var (
|
||||
env = NewEVM(Context{}, nil, params.TestChainConfig, Config{})
|
||||
stack = newstack()
|
||||
mem = NewMemory()
|
||||
evmInterpreter = NewEVMInterpreter(env, env.vmConfig)
|
||||
)
|
||||
|
||||
env.interpreter = evmInterpreter
|
||||
evmInterpreter.intPool = poolOfIntPools.get()
|
||||
mem.Resize(64)
|
||||
pc := uint64(0)
|
||||
v := "abcdef00000000000000abba000000000deaf000000c0de00100000000133700"
|
||||
stack.pushN(new(big.Int).SetBytes(common.Hex2Bytes(v)), big.NewInt(0))
|
||||
opMstore(&pc, evmInterpreter, nil, mem, stack)
|
||||
if got := common.Bytes2Hex(mem.Get(0, 32)); got != v {
|
||||
t.Fatalf("Mstore fail, got %v, expected %v", got, v)
|
||||
}
|
||||
stack.pushN(big.NewInt(0x1), big.NewInt(0))
|
||||
opMstore(&pc, evmInterpreter, nil, mem, stack)
|
||||
if common.Bytes2Hex(mem.Get(0, 32)) != "0000000000000000000000000000000000000000000000000000000000000001" {
|
||||
t.Fatalf("Mstore failed to overwrite previous value")
|
||||
}
|
||||
poolOfIntPools.put(evmInterpreter.intPool)
|
||||
}

func BenchmarkOpMstore(bench *testing.B) {
	var (
		env            = NewEVM(Context{}, nil, params.TestChainConfig, Config{})
		stack          = newstack()
		mem            = NewMemory()
		evmInterpreter = NewEVMInterpreter(env, env.vmConfig)
	)

	env.interpreter = evmInterpreter
	evmInterpreter.intPool = poolOfIntPools.get()
	mem.Resize(64)
	pc := uint64(0)
	memStart := big.NewInt(0)
	value := big.NewInt(0x1337)

	bench.ResetTimer()
	for i := 0; i < bench.N; i++ {
		stack.pushN(value, memStart)
		opMstore(&pc, evmInterpreter, nil, mem, stack)
	}
	poolOfIntPools.put(evmInterpreter.intPool)
}

func BenchmarkOpSHA3(bench *testing.B) {
	var (
		env            = NewEVM(Context{}, nil, params.TestChainConfig, Config{})
		stack          = newstack()
		mem            = NewMemory()
		evmInterpreter = NewEVMInterpreter(env, env.vmConfig)
	)
	env.interpreter = evmInterpreter
	evmInterpreter.intPool = poolOfIntPools.get()
	mem.Resize(32)
	pc := uint64(0)
	start := big.NewInt(0)

	bench.ResetTimer()
	for i := 0; i < bench.N; i++ {
		stack.pushN(big.NewInt(32), start)
		opSha3(&pc, evmInterpreter, nil, mem, stack)
	}
	poolOfIntPools.put(evmInterpreter.intPool)
}

func TestCreate2Addreses(t *testing.T) {
	type testcase struct {
		origin   string
		salt     string
		code     string
		expected string
	}

	for i, tt := range []testcase{
		{
			origin:   "0x0000000000000000000000000000000000000000",
			salt:     "0x0000000000000000000000000000000000000000",
			code:     "0x00",
			expected: "0x4d1a2e2bb4f88f0250f26ffff098b0b30b26bf38",
		},
		{
			origin:   "0xdeadbeef00000000000000000000000000000000",
			salt:     "0x0000000000000000000000000000000000000000",
			code:     "0x00",
			expected: "0xB928f69Bb1D91Cd65274e3c79d8986362984fDA3",
		},
		{
			origin:   "0xdeadbeef00000000000000000000000000000000",
			salt:     "0xfeed000000000000000000000000000000000000",
			code:     "0x00",
			expected: "0xD04116cDd17beBE565EB2422F2497E06cC1C9833",
		},
		{
			origin:   "0x0000000000000000000000000000000000000000",
			salt:     "0x0000000000000000000000000000000000000000",
			code:     "0xdeadbeef",
			expected: "0x70f2b2914A2a4b783FaEFb75f459A580616Fcb5e",
		},
		{
			origin:   "0x00000000000000000000000000000000deadbeef",
			salt:     "0xcafebabe",
			code:     "0xdeadbeef",
			expected: "0x60f3f640a8508fC6a86d45DF051962668E1e8AC7",
		},
		{
			origin:   "0x00000000000000000000000000000000deadbeef",
			salt:     "0xcafebabe",
			code:     "0xdeadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeef",
			expected: "0x1d8bfDC5D46DC4f61D6b6115972536eBE6A8854C",
		},
		{
			origin:   "0x0000000000000000000000000000000000000000",
			salt:     "0x0000000000000000000000000000000000000000",
			code:     "0x",
			expected: "0xE33C0C7F7df4809055C3ebA6c09CFe4BaF1BD9e0",
		},
	} {

		origin := common.BytesToAddress(common.FromHex(tt.origin))
		salt := common.BytesToHash(common.FromHex(tt.salt))
		code := common.FromHex(tt.code)
		codeHash := crypto.Keccak256(code)
		address := crypto.CreateAddress2(origin, salt, codeHash)
		/*
			stack := newstack()
			// salt, but we don't need that for this test
			stack.push(big.NewInt(int64(len(code)))) //size
			stack.push(big.NewInt(0)) // memstart
			stack.push(big.NewInt(0)) // value
			gas, _ := gasCreate2(params.GasTable{}, nil, nil, stack, nil, 0)
			fmt.Printf("Example %d\n* address `0x%x`\n* salt `0x%x`\n* init_code `0x%x`\n* gas (assuming no mem expansion): `%v`\n* result: `%s`\n\n", i, origin, salt, code, gas, address.String())
		*/
		expected := common.BytesToAddress(common.FromHex(tt.expected))
		if !bytes.Equal(expected.Bytes(), address.Bytes()) {
			t.Errorf("test %d: expected %s, got %s", i, expected.String(), address.String())
		}

	}
}
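// A sketch of what CreateAddress2 computes above (the EIP-1014 / CREATE2
// derivation), expressed directly in terms of crypto.Keccak256; illustrative
// only, not part of the vendored file:
//
//	address := common.BytesToAddress(
//		crypto.Keccak256([]byte{0xff}, origin.Bytes(), salt.Bytes(), crypto.Keccak256(code))[12:],
//	)
//
// i.e. keccak256(0xff ++ origin ++ salt ++ keccak256(init_code)) truncated to
// the last 20 bytes; the expected addresses in the table follow from this.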
80
vendor/github.com/ethereum/go-ethereum/core/vm/interface.go
generated
vendored
Normal file
@ -0,0 +1,80 @@
// Copyright 2016 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.

package vm

import (
	"math/big"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
)

// StateDB is an EVM database for full state querying.
type StateDB interface {
	CreateAccount(common.Address)

	SubBalance(common.Address, *big.Int)
	AddBalance(common.Address, *big.Int)
	GetBalance(common.Address) *big.Int

	GetNonce(common.Address) uint64
	SetNonce(common.Address, uint64)

	GetCodeHash(common.Address) common.Hash
	GetCode(common.Address) []byte
	SetCode(common.Address, []byte)
	GetCodeSize(common.Address) int

	AddRefund(uint64)
	SubRefund(uint64)
	GetRefund() uint64

	GetCommittedState(common.Address, common.Hash) common.Hash
	GetState(common.Address, common.Hash) common.Hash
	SetState(common.Address, common.Hash, common.Hash)

	Suicide(common.Address) bool
	HasSuicided(common.Address) bool

	// Exist reports whether the given account exists in state.
	// Notably this should also return true for suicided accounts.
	Exist(common.Address) bool
	// Empty returns whether the given account is empty. Empty
	// is defined according to EIP161 (balance = nonce = code = 0).
	Empty(common.Address) bool

	RevertToSnapshot(int)
	Snapshot() int

	AddLog(*types.Log)
	AddPreimage(common.Hash, []byte)

	ForEachStorage(common.Address, func(common.Hash, common.Hash) bool) error
}
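// A hedged note (an assumption about the wider go-ethereum tree, not something
// defined in this file): the canonical implementation of StateDB is
// core/state.StateDB, and a custom backend can assert conformance at compile
// time, e.g.
//
//	var _ vm.StateDB = (*state.StateDB)(nil)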

// CallContext provides a basic interface for the EVM calling conventions. The EVM
// depends on this context being implemented for doing subcalls and initialising new EVM contracts.
type CallContext interface {
	// Call another contract
	Call(env *EVM, me ContractRef, addr common.Address, data []byte, gas, value *big.Int) ([]byte, error)
	// Take another contract's code and execute it within our own context
	CallCode(env *EVM, me ContractRef, addr common.Address, data []byte, gas, value *big.Int) ([]byte, error)
	// Same as CallCode except sender and value are propagated from parent to child scope
	DelegateCall(env *EVM, me ContractRef, addr common.Address, data []byte, gas *big.Int) ([]byte, error)
	// Create a new contract
	Create(env *EVM, me ContractRef, data []byte, gas, value *big.Int) ([]byte, common.Address, error)
}
282
vendor/github.com/ethereum/go-ethereum/core/vm/interpreter.go
generated
vendored
Normal file
@ -0,0 +1,282 @@
// Copyright 2014 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.

package vm

import (
	"fmt"
	"hash"
	"sync/atomic"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/common/math"
	"github.com/ethereum/go-ethereum/params"
)

// Config holds the configuration options for the Interpreter
type Config struct {
	Debug                   bool   // Enables debugging
	Tracer                  Tracer // Opcode logger
	NoRecursion             bool   // Disables call, callcode, delegate call and create
	EnablePreimageRecording bool   // Enables recording of SHA3/keccak preimages

	JumpTable [256]operation // EVM instruction table, automatically populated if unset

	EWASMInterpreter string // External EWASM interpreter options
	EVMInterpreter   string // External EVM interpreter options
}
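// A sketch of a typical tracing configuration (illustrative only, not part of
// the vendored file); NewStructLogger comes from logger.go later in this diff:
//
//	cfg := Config{
//		Debug:  true,
//		Tracer: NewStructLogger(nil),
//	}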

// Interpreter is used to run Ethereum based contracts and will utilise the
// passed environment to query external sources for state information.
// The Interpreter will run the byte code VM based on the passed
// configuration.
type Interpreter interface {
	// Run loops and evaluates the contract's code with the given input data and returns
	// the return byte-slice and an error if one occurred.
	Run(contract *Contract, input []byte, static bool) ([]byte, error)
	// CanRun tells if the contract, passed as an argument, can be
	// run by the current interpreter. This is meant so that the
	// caller can do something like:
	//
	// ```golang
	// for _, interpreter := range interpreters {
	//   if interpreter.CanRun(contract.code) {
	//     interpreter.Run(contract.code, input)
	//   }
	// }
	// ```
	CanRun([]byte) bool
}

// keccakState wraps sha3.state. In addition to the usual hash methods, it also supports
// Read to get a variable amount of data from the hash state. Read is faster than Sum
// because it doesn't copy the internal state, but also modifies the internal state.
type keccakState interface {
	hash.Hash
	Read([]byte) (int, error)
}
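// A hedged note (an assumption about instructions.go, which is not shown in
// this hunk): the concrete keccakState is the Keccak-256 sponge from the sha3
// package, whose state type exposes the extra Read method, roughly
//
//	interpreter.hasher = sha3.NewKeccak256().(keccakState)
//	interpreter.hasher.Write(data)
//	interpreter.hasher.Read(interpreter.hasherBuf[:])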

// EVMInterpreter represents an EVM interpreter
type EVMInterpreter struct {
	evm      *EVM
	cfg      Config
	gasTable params.GasTable

	intPool *intPool

	hasher    keccakState // Keccak256 hasher instance shared across opcodes
	hasherBuf common.Hash // Keccak256 hasher result array shared across opcodes

	readOnly   bool   // Whether to throw on stateful modifications
	returnData []byte // Last CALL's return data for subsequent reuse
}

// NewEVMInterpreter returns a new instance of the Interpreter.
func NewEVMInterpreter(evm *EVM, cfg Config) *EVMInterpreter {
	// We use the STOP instruction to see whether the jump table
	// was initialised. If it was not, we'll set the default jump table.
	if !cfg.JumpTable[STOP].valid {
		switch {
		case evm.ChainConfig().IsConstantinople(evm.BlockNumber):
			cfg.JumpTable = constantinopleInstructionSet
		case evm.ChainConfig().IsByzantium(evm.BlockNumber):
			cfg.JumpTable = byzantiumInstructionSet
		case evm.ChainConfig().IsHomestead(evm.BlockNumber):
			cfg.JumpTable = homesteadInstructionSet
		default:
			cfg.JumpTable = frontierInstructionSet
		}
	}

	return &EVMInterpreter{
		evm:      evm,
		cfg:      cfg,
		gasTable: evm.ChainConfig().GasTable(evm.BlockNumber),
	}
}

// Run loops and evaluates the contract's code with the given input data and returns
// the return byte-slice and an error if one occurred.
//
// It's important to note that any errors returned by the interpreter should be
// considered a revert-and-consume-all-gas operation except for
// errExecutionReverted which means revert-and-keep-gas-left.
func (in *EVMInterpreter) Run(contract *Contract, input []byte, readOnly bool) (ret []byte, err error) {
	if in.intPool == nil {
		in.intPool = poolOfIntPools.get()
		defer func() {
			poolOfIntPools.put(in.intPool)
			in.intPool = nil
		}()
	}

	// Increment the call depth which is restricted to 1024
	in.evm.depth++
	defer func() { in.evm.depth-- }()

	// Make sure the readOnly is only set if we aren't in readOnly yet.
	// This also makes sure that the readOnly flag isn't removed for child calls.
	if readOnly && !in.readOnly {
		in.readOnly = true
		defer func() { in.readOnly = false }()
	}

	// Reset the previous call's return data. It's unimportant to preserve the old buffer
	// as every returning call will return new data anyway.
	in.returnData = nil

	// Don't bother with the execution if there's no code.
	if len(contract.Code) == 0 {
		return nil, nil
	}

	var (
		op    OpCode        // current opcode
		mem   = NewMemory() // bound memory
		stack = newstack()  // local stack
		// For optimisation reasons we're using uint64 as the program counter.
		// It's theoretically possible to go above 2^64. The YP defines the PC
		// to be uint256, but in practice that is far from feasible.
		pc   = uint64(0) // program counter
		cost uint64
		// copies used by tracer
		pcCopy  uint64 // needed for the deferred Tracer
		gasCopy uint64 // for Tracer to log gas remaining before execution
		logged  bool   // deferred Tracer should ignore already logged steps
		res     []byte // result of the opcode execution function
	)
	contract.Input = input

	// Reclaim the stack as an int pool when the execution stops
	defer func() { in.intPool.put(stack.data...) }()

	if in.cfg.Debug {
		defer func() {
			if err != nil {
				if !logged {
					in.cfg.Tracer.CaptureState(in.evm, pcCopy, op, gasCopy, cost, mem, stack, contract, in.evm.depth, err)
				} else {
					in.cfg.Tracer.CaptureFault(in.evm, pcCopy, op, gasCopy, cost, mem, stack, contract, in.evm.depth, err)
				}
			}
		}()
	}
	// The Interpreter main run loop (contextual). This loop runs until either an
	// explicit STOP, RETURN or SELFDESTRUCT is executed, an error occurred during
	// the execution of one of the operations or until the done flag is set by the
	// parent context.
	for atomic.LoadInt32(&in.evm.abort) == 0 {
		if in.cfg.Debug {
			// Capture pre-execution values for tracing.
			logged, pcCopy, gasCopy = false, pc, contract.Gas
		}

		// Get the operation from the jump table and validate the stack to ensure there are
		// enough stack items available to perform the operation.
		op = contract.GetOp(pc)
		operation := in.cfg.JumpTable[op]
		if !operation.valid {
			return nil, fmt.Errorf("invalid opcode 0x%x", int(op))
		}
		// Validate stack
		if sLen := stack.len(); sLen < operation.minStack {
			return nil, fmt.Errorf("stack underflow (%d <=> %d)", sLen, operation.minStack)
		} else if sLen > operation.maxStack {
			return nil, fmt.Errorf("stack limit reached %d (%d)", sLen, operation.maxStack)
		}
		// If the operation is valid, enforce any write restrictions
		if in.readOnly && in.evm.chainRules.IsByzantium {
			// If the interpreter is operating in readonly mode, make sure no
			// state-modifying operation is performed. The 3rd stack item
			// for a call operation is the value. Transferring value from one
			// account to another means the state is modified and should also
			// return with an error.
			if operation.writes || (op == CALL && stack.Back(2).Sign() != 0) {
				return nil, errWriteProtection
			}
		}
		// Static portion of gas
		if !contract.UseGas(operation.constantGas) {
			return nil, ErrOutOfGas
		}

		var memorySize uint64
		// calculate the new memory size and expand the memory to fit
		// the operation
		// Memory check needs to be done prior to evaluating the dynamic gas portion,
		// to detect calculation overflows
		if operation.memorySize != nil {
			memSize, overflow := operation.memorySize(stack)
			if overflow {
				return nil, errGasUintOverflow
			}
			// memory is expanded in words of 32 bytes. Gas
			// is also calculated in words.
			if memorySize, overflow = math.SafeMul(toWordSize(memSize), 32); overflow {
				return nil, errGasUintOverflow
			}
		}
		// Dynamic portion of gas
		// consume the gas and return an error if not enough gas is available.
		// cost is explicitly set so that the capture state defer method can get the proper cost
		if operation.dynamicGas != nil {
			cost, err = operation.dynamicGas(in.gasTable, in.evm, contract, stack, mem, memorySize)
			if err != nil || !contract.UseGas(cost) {
				return nil, ErrOutOfGas
			}
		}
		if memorySize > 0 {
			mem.Resize(memorySize)
		}

		if in.cfg.Debug {
			in.cfg.Tracer.CaptureState(in.evm, pc, op, gasCopy, cost, mem, stack, contract, in.evm.depth, err)
			logged = true
		}

		// execute the operation
		res, err = operation.execute(&pc, in, contract, mem, stack)
		// verifyPool is a build flag. Pool verification checks the integrity
		// of the integer pool by comparing values to a default value.
		if verifyPool {
			verifyIntegerPool(in.intPool)
		}
		// if the operation returns data, set the last return
		// to the result of the operation.
		if operation.returns {
			in.returnData = res
		}

		switch {
		case err != nil:
			return nil, err
		case operation.reverts:
			return res, errExecutionReverted
		case operation.halts:
			return res, nil
		case !operation.jumps:
			pc++
		}
	}
	return nil, nil
}
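// A small worked example of the memory sizing in the loop above (a sketch,
// assuming toWordSize from gas.go rounds a byte count up to whole 32-byte words):
//
//	memSize        = 33                  // the operation touches bytes [0, 33)
//	toWordSize(33) = (33 + 31) / 32 = 2  // two full words
//	memorySize     = 2 * 32 = 64         // memory is resized to 64 bytes before execution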

// CanRun tells if the contract, passed as an argument, can be
// run by the current interpreter.
func (in *EVMInterpreter) CanRun(code []byte) bool {
	return true
}
256
vendor/github.com/ethereum/go-ethereum/core/vm/logger.go
generated
vendored
Normal file
@ -0,0 +1,256 @@
// Copyright 2015 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.

package vm

import (
	"encoding/hex"
	"fmt"
	"io"
	"math/big"
	"time"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/common/hexutil"
	"github.com/ethereum/go-ethereum/common/math"
	"github.com/ethereum/go-ethereum/core/types"
)

// Storage represents a contract's storage.
type Storage map[common.Hash]common.Hash

// Copy duplicates the current storage.
func (s Storage) Copy() Storage {
	cpy := make(Storage)
	for key, value := range s {
		cpy[key] = value
	}

	return cpy
}

// LogConfig holds the configuration options for the EVM's structured logger
type LogConfig struct {
	DisableMemory  bool // disable memory capture
	DisableStack   bool // disable stack capture
	DisableStorage bool // disable storage capture
	Debug          bool // print output during capture end
	Limit          int  // maximum length of output, but zero means unlimited
}

//go:generate gencodec -type StructLog -field-override structLogMarshaling -out gen_structlog.go

// StructLog is emitted to the EVM each cycle and lists information about the current internal state
// prior to the execution of the statement.
type StructLog struct {
	Pc            uint64                      `json:"pc"`
	Op            OpCode                      `json:"op"`
	Gas           uint64                      `json:"gas"`
	GasCost       uint64                      `json:"gasCost"`
	Memory        []byte                      `json:"memory"`
	MemorySize    int                         `json:"memSize"`
	Stack         []*big.Int                  `json:"stack"`
	Storage       map[common.Hash]common.Hash `json:"-"`
	Depth         int                         `json:"depth"`
	RefundCounter uint64                      `json:"refund"`
	Err           error                       `json:"-"`
}

// overrides for gencodec
type structLogMarshaling struct {
	Stack       []*math.HexOrDecimal256
	Gas         math.HexOrDecimal64
	GasCost     math.HexOrDecimal64
	Memory      hexutil.Bytes
	OpName      string `json:"opName"` // adds call to OpName() in MarshalJSON
	ErrorString string `json:"error"`  // adds call to ErrorString() in MarshalJSON
}

// OpName formats the operand name in a human-readable format.
func (s *StructLog) OpName() string {
	return s.Op.String()
}

// ErrorString formats the log's error as a string.
func (s *StructLog) ErrorString() string {
	if s.Err != nil {
		return s.Err.Error()
	}
	return ""
}

// Tracer is used to collect execution traces from an EVM transaction
// execution. CaptureState is called for each step of the VM with the
// current VM state.
// Note that reference types are actual VM data structures; make copies
// if you need to retain them beyond the current call.
type Tracer interface {
	CaptureStart(from common.Address, to common.Address, call bool, input []byte, gas uint64, value *big.Int) error
	CaptureState(env *EVM, pc uint64, op OpCode, gas, cost uint64, memory *Memory, stack *Stack, contract *Contract, depth int, err error) error
	CaptureFault(env *EVM, pc uint64, op OpCode, gas, cost uint64, memory *Memory, stack *Stack, contract *Contract, depth int, err error) error
	CaptureEnd(output []byte, gasUsed uint64, t time.Duration, err error) error
}
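// A minimal sketch of a custom Tracer (illustrative only, not part of the
// vendored file): it counts executed opcodes and leaves every other hook as a
// no-op. It satisfies the interface above and could be passed via Config.Tracer.
type opCounter struct {
	counts map[OpCode]uint64
}

func (c *opCounter) CaptureStart(from common.Address, to common.Address, call bool, input []byte, gas uint64, value *big.Int) error {
	return nil
}

func (c *opCounter) CaptureState(env *EVM, pc uint64, op OpCode, gas, cost uint64, memory *Memory, stack *Stack, contract *Contract, depth int, err error) error {
	if c.counts == nil {
		c.counts = make(map[OpCode]uint64) // lazily initialise on the first step
	}
	c.counts[op]++
	return nil
}

func (c *opCounter) CaptureFault(env *EVM, pc uint64, op OpCode, gas, cost uint64, memory *Memory, stack *Stack, contract *Contract, depth int, err error) error {
	return nil
}

func (c *opCounter) CaptureEnd(output []byte, gasUsed uint64, t time.Duration, err error) error {
	return nil
}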

// StructLogger is an EVM state logger and implements Tracer.
//
// StructLogger can capture state based on the given Log configuration and also keeps
// a track record of modified storage which is used in reporting snapshots of the
// contract's storage.
type StructLogger struct {
	cfg LogConfig

	logs          []StructLog
	changedValues map[common.Address]Storage
	output        []byte
	err           error
}

// NewStructLogger returns a new logger
func NewStructLogger(cfg *LogConfig) *StructLogger {
	logger := &StructLogger{
		changedValues: make(map[common.Address]Storage),
	}
	if cfg != nil {
		logger.cfg = *cfg
	}
	return logger
}

// CaptureStart implements the Tracer interface to initialize the tracing operation.
func (l *StructLogger) CaptureStart(from common.Address, to common.Address, create bool, input []byte, gas uint64, value *big.Int) error {
	return nil
}

// CaptureState logs a new structured log message and pushes it out to the environment
//
// CaptureState also tracks SSTORE ops to track dirty values.
func (l *StructLogger) CaptureState(env *EVM, pc uint64, op OpCode, gas, cost uint64, memory *Memory, stack *Stack, contract *Contract, depth int, err error) error {
	// check if already accumulated the specified number of logs
	if l.cfg.Limit != 0 && l.cfg.Limit <= len(l.logs) {
		return ErrTraceLimitReached
	}

	// initialise new changed values storage container for this contract
	// if not present.
	if l.changedValues[contract.Address()] == nil {
		l.changedValues[contract.Address()] = make(Storage)
	}

	// capture SSTORE opcodes and determine the changed value and store
	// it in the local storage container.
	if op == SSTORE && stack.len() >= 2 {
		var (
			value   = common.BigToHash(stack.data[stack.len()-2])
			address = common.BigToHash(stack.data[stack.len()-1])
		)
		l.changedValues[contract.Address()][address] = value
	}
	// Copy a snapshot of the current memory state to a new buffer
	var mem []byte
	if !l.cfg.DisableMemory {
		mem = make([]byte, len(memory.Data()))
		copy(mem, memory.Data())
	}
	// Copy a snapshot of the current stack state to a new buffer
	var stck []*big.Int
	if !l.cfg.DisableStack {
		stck = make([]*big.Int, len(stack.Data()))
		for i, item := range stack.Data() {
			stck[i] = new(big.Int).Set(item)
		}
	}
	// Copy a snapshot of the current storage to a new container
	var storage Storage
	if !l.cfg.DisableStorage {
		storage = l.changedValues[contract.Address()].Copy()
	}
	// create a new snapshot of the EVM.
	log := StructLog{pc, op, gas, cost, mem, memory.Len(), stck, storage, depth, env.StateDB.GetRefund(), err}

	l.logs = append(l.logs, log)
	return nil
}

// CaptureFault implements the Tracer interface to trace an execution fault
// while running an opcode.
func (l *StructLogger) CaptureFault(env *EVM, pc uint64, op OpCode, gas, cost uint64, memory *Memory, stack *Stack, contract *Contract, depth int, err error) error {
	return nil
}

// CaptureEnd is called after the call finishes to finalize the tracing.
func (l *StructLogger) CaptureEnd(output []byte, gasUsed uint64, t time.Duration, err error) error {
	l.output = output
	l.err = err
	if l.cfg.Debug {
		fmt.Printf("0x%x\n", output)
		if err != nil {
			fmt.Printf(" error: %v\n", err)
		}
	}
	return nil
}

// StructLogs returns the captured log entries.
func (l *StructLogger) StructLogs() []StructLog { return l.logs }

// Error returns the VM error captured by the trace.
func (l *StructLogger) Error() error { return l.err }

// Output returns the VM return value captured by the trace.
func (l *StructLogger) Output() []byte { return l.output }

// WriteTrace writes a formatted trace to the given writer
func WriteTrace(writer io.Writer, logs []StructLog) {
	for _, log := range logs {
		fmt.Fprintf(writer, "%-16spc=%08d gas=%v cost=%v", log.Op, log.Pc, log.Gas, log.GasCost)
		if log.Err != nil {
			fmt.Fprintf(writer, " ERROR: %v", log.Err)
		}
		fmt.Fprintln(writer)

		if len(log.Stack) > 0 {
			fmt.Fprintln(writer, "Stack:")
			for i := len(log.Stack) - 1; i >= 0; i-- {
				fmt.Fprintf(writer, "%08d %x\n", len(log.Stack)-i-1, math.PaddedBigBytes(log.Stack[i], 32))
			}
		}
		if len(log.Memory) > 0 {
			fmt.Fprintln(writer, "Memory:")
			fmt.Fprint(writer, hex.Dump(log.Memory))
		}
		if len(log.Storage) > 0 {
			fmt.Fprintln(writer, "Storage:")
			for h, item := range log.Storage {
				fmt.Fprintf(writer, "%x: %x\n", h, item)
			}
		}
		fmt.Fprintln(writer)
	}
}

// WriteLogs writes vm logs in a readable format to the given writer
func WriteLogs(writer io.Writer, logs []*types.Log) {
	for _, log := range logs {
		fmt.Fprintf(writer, "LOG%d: %x bn=%d txi=%x\n", len(log.Topics), log.Address, log.BlockNumber, log.TxIndex)

		for i, topic := range log.Topics {
			fmt.Fprintf(writer, "%08d %x\n", i, topic)
		}

		fmt.Fprint(writer, hex.Dump(log.Data))
		fmt.Fprintln(writer)
	}
}
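// A hedged usage sketch for the logger above (illustrative only; the
// core/vm/runtime helper and its Execute signature are assumptions about the
// wider go-ethereum tree, not something defined in this file):
//
//	logger := vm.NewStructLogger(&vm.LogConfig{DisableMemory: true})
//	code := []byte{0x60, 0x01, 0x60, 0x00, 0x55, 0x00} // PUSH1 0x01, PUSH1 0x00, SSTORE, STOP
//	_, _, err := runtime.Execute(code, nil, &runtime.Config{
//		EVMConfig: vm.Config{Debug: true, Tracer: logger},
//	})
//	if err != nil {
//		// the trace still contains every step captured before the failure
//	}
//	vm.WriteTrace(os.Stdout, logger.StructLogs())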
1
vendor/github.com/ethereum/go-ethereum/core/vm/testdata/testcases_add.json
generated
vendored
Normal file
File diff suppressed because one or more lines are too long
1
vendor/github.com/ethereum/go-ethereum/core/vm/testdata/testcases_and.json
generated
vendored
Normal file
File diff suppressed because one or more lines are too long
1
vendor/github.com/ethereum/go-ethereum/core/vm/testdata/testcases_byte.json
generated
vendored
Normal file
File diff suppressed because one or more lines are too long
1
vendor/github.com/ethereum/go-ethereum/core/vm/testdata/testcases_div.json
generated
vendored
Normal file
File diff suppressed because one or more lines are too long
1
vendor/github.com/ethereum/go-ethereum/core/vm/testdata/testcases_eq.json
generated
vendored
Normal file
File diff suppressed because one or more lines are too long
1
vendor/github.com/ethereum/go-ethereum/core/vm/testdata/testcases_exp.json
generated
vendored
Normal file
File diff suppressed because one or more lines are too long
1
vendor/github.com/ethereum/go-ethereum/core/vm/testdata/testcases_gt.json
generated
vendored
Normal file
File diff suppressed because one or more lines are too long
1
vendor/github.com/ethereum/go-ethereum/core/vm/testdata/testcases_lt.json
generated
vendored
Normal file
File diff suppressed because one or more lines are too long
1
vendor/github.com/ethereum/go-ethereum/core/vm/testdata/testcases_mod.json
generated
vendored
Normal file
File diff suppressed because one or more lines are too long
1
vendor/github.com/ethereum/go-ethereum/core/vm/testdata/testcases_mul.json
generated
vendored
Normal file
File diff suppressed because one or more lines are too long
1
vendor/github.com/ethereum/go-ethereum/core/vm/testdata/testcases_or.json
generated
vendored
Normal file
File diff suppressed because one or more lines are too long
1
vendor/github.com/ethereum/go-ethereum/core/vm/testdata/testcases_sar.json
generated
vendored
Normal file
File diff suppressed because one or more lines are too long
1
vendor/github.com/ethereum/go-ethereum/core/vm/testdata/testcases_sdiv.json
generated
vendored
Normal file
File diff suppressed because one or more lines are too long
1
vendor/github.com/ethereum/go-ethereum/core/vm/testdata/testcases_sgt.json
generated
vendored
Normal file
File diff suppressed because one or more lines are too long
1
vendor/github.com/ethereum/go-ethereum/core/vm/testdata/testcases_shl.json
generated
vendored
Normal file
File diff suppressed because one or more lines are too long
1
vendor/github.com/ethereum/go-ethereum/core/vm/testdata/testcases_shr.json
generated
vendored
Normal file
File diff suppressed because one or more lines are too long
1
vendor/github.com/ethereum/go-ethereum/core/vm/testdata/testcases_signext.json
generated
vendored
Normal file
File diff suppressed because one or more lines are too long
1
vendor/github.com/ethereum/go-ethereum/core/vm/testdata/testcases_slt.json
generated
vendored
Normal file
File diff suppressed because one or more lines are too long
1
vendor/github.com/ethereum/go-ethereum/core/vm/testdata/testcases_smod.json
generated
vendored
Normal file
File diff suppressed because one or more lines are too long
1
vendor/github.com/ethereum/go-ethereum/core/vm/testdata/testcases_sub.json
generated
vendored
Normal file
File diff suppressed because one or more lines are too long
1
vendor/github.com/ethereum/go-ethereum/core/vm/testdata/testcases_xor.json
generated
vendored
Normal file
File diff suppressed because one or more lines are too long
Some files were not shown because too many files have changed in this diff