7dc100714d
This PR adds counter metrics for the CPU system and the Geth process. Currently the only metrics available for these items are gauges. Gauges are fine when the consumer scrapes metrics data at the same interval as Geth produces new values (every 3 seconds), but it is likely that most consumers will not scrape that often; intervals of 10, 15, or even 30 seconds are probably more common. So the problem is: how does the consumer estimate what the CPU was doing between scrapes? With a counter it's easy: you just subtract two successive values and divide by the elapsed time to get a nice, accurate average. With a gauge you can't do that. A gauge reading is an instantaneous picture of what was happening at that moment, but it tells you nothing about what went on between scrapes, so taking an average of gauge values is meaningless.
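As a rough illustration of the counter arithmetic described above (the scrape type, metric values, and numbers here are hypothetical, not part of the PR):

package main

import (
	"fmt"
	"time"
)

// scrape is a hypothetical snapshot of a monotonically increasing CPU-time
// counter, expressed as seconds of CPU time consumed so far.
type scrape struct {
	at      time.Time
	cpuSecs float64
}

// avgCPU returns the average CPU utilisation between two successive scrapes:
// the counter delta divided by the wall-clock time that elapsed.
func avgCPU(prev, cur scrape) float64 {
	wall := cur.at.Sub(prev.at).Seconds()
	if wall <= 0 {
		return 0
	}
	return (cur.cpuSecs - prev.cpuSecs) / wall
}

func main() {
	// Two scrapes 15 seconds apart; the process consumed 6 CPU-seconds in between.
	prev := scrape{at: time.Now(), cpuSecs: 100}
	cur := scrape{at: prev.at.Add(15 * time.Second), cpuSecs: 106}
	fmt.Printf("average CPU usage: %.0f%%\n", avgCPU(prev, cur)*100) // prints 40%
}

With a gauge there is no equivalent calculation: two instantaneous readings 15 seconds apart say nothing reliable about the usage in between.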
go-metrics
Go port of Coda Hale's Metrics library: https://github.com/dropwizard/metrics.
Documentation: https://godoc.org/github.com/rcrowley/go-metrics.
Usage
Create and update metrics:
// Counter
c := metrics.NewCounter()
metrics.Register("foo", c)
c.Inc(47)

// Gauge
g := metrics.NewGauge()
metrics.Register("bar", g)
g.Update(47)

// Functional gauge registered on a custom registry
r := metrics.NewRegistry()
g := metrics.NewRegisteredFunctionalGauge("cache-evictions", r, func() int64 { return cache.getEvictionsCount() })

// Histogram with an exponentially-decaying sample
s := metrics.NewExpDecaySample(1028, 0.015) // or metrics.NewUniformSample(1028)
h := metrics.NewHistogram(s)
metrics.Register("baz", h)
h.Update(47)

// Meter
m := metrics.NewMeter()
metrics.Register("quux", m)
m.Mark(47)

// Timer
t := metrics.NewTimer()
metrics.Register("bang", t)
t.Time(func() {})
t.Update(47)
Register() is not thread-safe. For thread-safe metric registration, use GetOrRegister:
t := metrics.GetOrRegisterTimer("account.create.latency", nil)
t.Time(func() {})
t.Update(47)
NOTE: Be sure to unregister short-lived meters and timers; otherwise they will leak memory:
// Will call Stop() on the Meter to allow for garbage collection
metrics.Unregister("quux")
// Or similarly for a Timer that embeds a Meter
metrics.Unregister("bang")
Periodically log every metric in human-readable form to standard error:
go metrics.Log(metrics.DefaultRegistry, 5 * time.Second, log.New(os.Stderr, "metrics: ", log.Lmicroseconds))
Periodically log every metric in slightly-more-parseable form to syslog:
w, _ := syslog.Dial("unixgram", "/dev/log", syslog.LOG_INFO, "metrics")
go metrics.Syslog(metrics.DefaultRegistry, 60e9, w)
Periodically emit every metric to Graphite using the Graphite client:
import "github.com/cyberdelia/go-metrics-graphite"
addr, _ := net.ResolveTCPAddr("tcp", "127.0.0.1:2003")
go graphite.Graphite(metrics.DefaultRegistry, 10e9, "metrics", addr)
Periodically emit every metric into InfluxDB:
NOTE: this has been pulled out of the library due to constant fluctuations in the InfluxDB API. In fact, all client libraries are on their way out. See issues #121 and #124 for progress and details.
import "github.com/vrischmann/go-metrics-influxdb"
go influxdb.InfluxDB(metrics.DefaultRegistry,
    10e9,
    "127.0.0.1:8086",
    "database-name",
    "username",
    "password",
)
Periodically upload every metric to Librato using the Librato client:
Note: the client included with this repository under the librato package has been deprecated and moved to github.com/mihasya/go-metrics-librato (imported below).
import "github.com/mihasya/go-metrics-librato"
go librato.Librato(metrics.DefaultRegistry,
10e9, // interval
"example@example.com", // account owner email address
"token", // Librato API token
"hostname", // source
[]float64{0.95}, // percentiles to send
time.Millisecond, // time unit
)
Periodically emit every metric to StatHat:
import "github.com/rcrowley/go-metrics/stathat"
go stathat.Stathat(metrics.DefaultRegistry, 10e9, "example@example.com")
Maintain all metrics along with expvars at /debug/metrics:
This uses the same mechanism as the official expvar package but is exposed under /debug/metrics, which shows a JSON representation of all your usual expvars as well as all your go-metrics.
import "github.com/rcrowley/go-metrics/exp"
exp.Exp(metrics.DefaultRegistry)
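For the /debug/metrics endpoint to actually be reachable, the process also has to serve HTTP. A minimal sketch, assuming exp.Exp registers its handler on the default mux (as the comparison with expvar suggests); the listen address is arbitrary:

import (
	"net/http"

	"github.com/rcrowley/go-metrics"
	"github.com/rcrowley/go-metrics/exp"
)

func init() {
	exp.Exp(metrics.DefaultRegistry) // registers the /debug/metrics handler
	// Serve http.DefaultServeMux so the JSON snapshot can be scraped.
	go http.ListenAndServe("127.0.0.1:8080", nil)
}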
Installation
go get github.com/rcrowley/go-metrics
StatHat support additionally requires their Go client:
go get github.com/stathat/go
Publishing Metrics
Clients are available for the following destinations:
- Librato - https://github.com/mihasya/go-metrics-librato
- Graphite - https://github.com/cyberdelia/go-metrics-graphite
- InfluxDB - https://github.com/vrischmann/go-metrics-influxdb
- Ganglia - https://github.com/appscode/metlia
- Prometheus - https://github.com/deathowl/go-metrics-prometheus
- DataDog - https://github.com/syntaqx/go-metrics-datadog
- SignalFX - https://github.com/pascallouisperez/go-metrics-signalfx