Add new examples to kompose (#1803)

#### What type of PR is this?

<!--
Add one of the following kinds:
/kind bug
/kind documentation
/kind feature
-->

/kind cleanup

#### What this PR does / why we need it:

Fixes the currently broken examples by:

* Removing all of the old, incompatible examples (we no longer really
  support the v2 or v3 compose formats since switching libraries)
* Using quay.io/kompose/web as our front-end example, which is a fork of
  the guestbook-go Kubernetes example

#### Which issue(s) this PR fixes:
<!--
*Automatically closes linked issue when PR is merged.
Usage: `Fixes #<issue number>`, or `Fixes (paste link of issue)`.
-->

Closes https://github.com/kubernetes/kompose/issues/1757

#### Special notes for your reviewer:

Test using docker-compose (you'll see it come up!), then try with
kompose :)
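
For reviewers, here is a rough sketch of that flow (assumptions: the new `examples/compose.yaml`, a local cluster such as minikube, and `kompose` on your PATH; exact output will vary):

```sh
cd examples

# Bring the stack up with Docker Compose and check the front end on port 8080.
docker compose up -d        # or: docker-compose up -d
curl http://localhost:8080

# Then convert the same file with kompose and apply the result to the cluster.
kompose convert -f compose.yaml
kubectl apply -f .
```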

Signed-off-by: Charlie Drage <charlie@charliedrage.com>
Commit 575066d3ed (parent 5f0c1c9c50) by Charlie Drage, 2024-01-16 09:37:31 -05:00, committed by GitHub.
74 changed files with 9967 additions and 324 deletions.

@ -12,7 +12,7 @@ redirect_from:
* TOC
{:toc}
This document outlines all possible conversion details regarding `docker-compose.yaml` values to Kubernetes / OpenShift artifacts.
This document outlines all possible conversion details regarding `compose.yaml` values to Kubernetes / OpenShift artifacts.
## Version Support

@ -77,7 +77,7 @@ Currently, it is not possible to use a different Kubernetes version from the ver
### Adding CLI tests
[Kompose CLI tests](https://github.com/kubernetes/kompose/tree/main/script/test/cmd) run `kompose convert` with docker-compose files, and cross-check the k8s and OpenShift artifacts generated with the template files.
[Kompose CLI tests](https://github.com/kubernetes/kompose/tree/main/script/test/cmd) run `kompose convert` with compose files, and cross-check the k8s and OpenShift artifacts generated with the template files.
To generate CLI tests, please run `make gen-cmd`.

@ -20,7 +20,7 @@ For beginners and the most compatibility, follow the _Minikube and Kompose_ guid
## Minikube and Kompose
In this guide, we'll deploy a sample `docker-compose.yaml` file to a Kubernetes cluster.
In this guide, we'll deploy a sample `compose.yaml` file to a Kubernetes cluster.
Requirements:
@ -44,15 +44,15 @@ Starting cluster components...
Kubectl is now configured to use the cluster
```
**Download an [example Docker Compose file](https://raw.githubusercontent.com/kubernetes/kompose/main/examples/docker-compose.yaml), or use your own:**
**Download an [example Docker Compose file](https://raw.githubusercontent.com/kubernetes/kompose/main/examples/compose.yaml), or use your own:**
```sh
wget https://raw.githubusercontent.com/kubernetes/kompose/main/examples/docker-compose.yaml
wget https://raw.githubusercontent.com/kubernetes/kompose/main/examples/compose.yaml
```
**Convert your Docker Compose file to Kubernetes:**
Run `kompose convert` in the same directory as your `docker-compose.yaml` file.
Run `kompose convert` in the same directory as your `compose.yaml` file.
```sh
$ kompose convert
@ -110,7 +110,7 @@ $ curl http://123.45.67.89
## Minishift and Kompose
In this guide, we'll deploy a sample `docker-compose.yaml` file to an OpenShift cluster.
In this guide, we'll deploy a sample `compose.yaml` file to an OpenShift cluster.
Requirements:
@ -134,15 +134,15 @@ Starting local OpenShift cluster using 'kvm' hypervisor...
...
```
**Download an [example Docker Compose file](https://raw.githubusercontent.com/kubernetes/kompose/main/examples/docker-compose.yaml), or use your own:**
**Download an [example Docker Compose file](https://raw.githubusercontent.com/kubernetes/kompose/main/examples/compose.yaml), or use your own:**
```sh
wget https://raw.githubusercontent.com/kubernetes/kompose/main/examples/docker-compose.yaml
wget https://raw.githubusercontent.com/kubernetes/kompose/main/examples/compose.yaml
```
**Convert your Docker Compose file to OpenShift:**
Run `kompose convert --provider=openshift` in the same directory as your `docker-compose.yaml` file.
Run `kompose convert --provider=openshift` in the same directory as your `compose.yaml` file.
```sh
$ kompose convert --provider=openshift
@ -188,7 +188,7 @@ Opening the OpenShift Web console in the default browser...
## RHEL and Kompose
In this guide, we'll deploy a sample `docker-compose.yaml` file using both RHEL (Red Hat Enterprise Linux) and OpenShift.
In this guide, we'll deploy a sample `compose.yaml` file using both RHEL (Red Hat Enterprise Linux) and OpenShift.
Requirements:
@ -254,15 +254,15 @@ Starting local OpenShift cluster using 'kvm' hypervisor...
...
```
**Download an [example Docker Compose file](https://raw.githubusercontent.com/kubernetes/kompose/main/examples/docker-compose.yaml), or use your own:**
**Download an [example Docker Compose file](https://raw.githubusercontent.com/kubernetes/kompose/main/examples/compose.yaml), or use your own:**
```sh
wget https://raw.githubusercontent.com/kubernetes/kompose/main/examples/docker-compose.yaml
wget https://raw.githubusercontent.com/kubernetes/kompose/main/examples/compose.yaml
```
**Convert your Docker Compose file to OpenShift:**
Run `kompose convert --provider=openshift` in the same directory as your `docker-compose.yaml` file.
Run `kompose convert --provider=openshift` in the same directory as your `compose.yaml` file.
```sh
$ kompose convert --provider=openshift

@ -6,7 +6,7 @@ layout: index
---
```sh
$ kompose convert -f docker-compose.yaml
$ kompose convert -f compose.yaml
$ kubectl apply -f .

@ -22,7 +22,7 @@ There are some projects out there known to use Kompose integrated in some form o
### Kompose Docker Container by Cloudfind
**Description:** "A Docker container for the Kompose translator for docker-compose"
**Description:** "A Docker container for the Kompose translator for compose"
**Link:** [https://github.com/cloudfind/kompose-docker](https://github.com/cloudfind/kompose-docker)
@ -49,9 +49,9 @@ There are some projects out there known to use Kompose integrated in some form o
**Description:** "Maven is one of the widely used build tools for Java applications. The Fabric8 Maven Plugin is a maven extension that simplifies the deployment of Java applications to Kubernetes or OpenShift clusters.
The main task of this plugin is to build Docker images, generate Kubernetes or OpenShift resource descriptors and run/deploy the application on Kubernetes or OpenShift cluster.
The plugin has a wide range of configuration options. Docker Compose is one of the options to bring up deployments on Kubernetes or OpenShift clusters.
Technically, Fabric8 Maven Plugin processes the external docker-compose.yml file and generates Kubernetes or OpenShift resources via Kompose."
Technically, Fabric8 Maven Plugin processes the external compose.yml file and generates Kubernetes or OpenShift resources via Kompose."
**Links:**
* [Quickstart](/maven-example)
* [Documentation](https://maven.fabric8.io/#docker-compose)
* [Documentation](https://maven.fabric8.io/#compose)

@ -87,20 +87,20 @@ Now that your service has been deployed, let's access it by querying `pod`, `ser
```bash
$ oc get pods
NAME READY STATUS RESTARTS AGE
springboot-docker-compose-1-xl0vb 1/1 Running 0 5m
springboot-docker-compose-s2i-1-build 0/1 Completed 0 7m
springboot-compose-1-xl0vb 1/1 Running 0 5m
springboot-compose-s2i-1-build 0/1 Completed 0 7m
```
```bash
$ oc get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
springboot-docker-compose 172.30.205.137 <none> 8080/TCP 6m
springboot-compose 172.30.205.137 <none> 8080/TCP 6m
```
Let's access the Springboot service.
```bash
$ minishift openshift service --in-browser springboot-docker-compose
$ minishift openshift service --in-browser springboot-compose
Created the new window in existing browser session.
```

@ -37,14 +37,14 @@ INFO Kubernetes file "worker-deployment.yaml" created
INFO Kubernetes file "db-deployment.yaml" created
$ ls
db-deployment.yaml docker-compose.yml docker-gitlab.yml redis-deployment.yaml result-deployment.yaml vote-deployment.yaml worker-deployment.yaml
db-deployment.yaml compose.yml docker-gitlab.yml redis-deployment.yaml result-deployment.yaml vote-deployment.yaml worker-deployment.yaml
db-svc.yaml docker-voting.yml redis-svc.yaml result-svc.yaml vote-svc.yaml worker-svc.yaml
```
You can also provide multiple docker-compose files at the same time:
You can also provide multiple compose files at the same time:
```sh
$ kompose -f docker-compose.yml -f docker-guestbook.yml convert
$ kompose -f compose.yml -f docker-guestbook.yml convert
INFO Kubernetes file "frontend-service.yaml" created
INFO Kubernetes file "mlbparks-service.yaml" created
INFO Kubernetes file "mongodb-service.yaml" created
@ -64,11 +64,11 @@ frontend-service.yaml mongodb-deployment.yaml redis-repli
redis-master-deployment.yaml
```
When multiple docker-compose files are provided the configuration is merged. Any configuration that is common will be over ridden by subsequent file.
When multiple compose files are provided, the configuration is merged. Any configuration that is common will be overridden by the subsequent file.
You can provide your docker-compose files via environment variables as following:
You can provide your compose files via environment variables as follows:
```sh
$ COMPOSE_FILE="docker-compose.yaml alternative-docker-compose.yaml" kompose convert
$ COMPOSE_FILE="compose.yaml alternative-compose.yaml" kompose convert
```
### OpenShift
@ -95,7 +95,7 @@ INFO OpenShift file "result-imagestream.yaml" created
It also supports creating buildconfig for build directive in a service. By default, it uses the remote repository for the current git branch as the source repository, and the current branch as the source branch for the build. You can specify a different source repository and branch using `--build-repo` and `--build-branch` options respectively.
```sh
$ kompose --provider openshift --file buildconfig/docker-compose.yml convert
$ kompose --provider openshift --file buildconfig/compose.yml convert
WARN [foo] Service cannot be created because of missing port.
INFO OpenShift Buildconfig using git@github.com:rtnpro/kompose.git::master as source.
INFO OpenShift file "foo-deploymentconfig.yaml" created
@ -157,10 +157,10 @@ INFO Kubernetes file "web-svc.yaml" created
INFO Kubernetes file "redis-svc.yaml" created
INFO Kubernetes file "web-deployment.yaml" created
INFO Kubernetes file "redis-deployment.yaml" created
chart created in "./docker-compose/"
chart created in "./compose/"
$ tree docker-compose/
docker-compose
$ tree compose/
compose
├── Chart.yaml
├── README.md
└── templates
@ -174,7 +174,7 @@ The chart structure is aimed at providing a skeleton for building your Helm char
## Labels
`kompose` supports Kompose-specific labels within the `docker-compose.yml` file to
`kompose` supports Kompose-specific labels within the `compose.yml` file to
explicitly define the generated resources' behavior upon conversion, like Service, PersistentVolumeClaim...
The currently supported options are:
@ -428,11 +428,9 @@ version: "3.3"
services:
front-end:
image: gcr.io/google-samples/gb-frontend:v4
environment:
- GET_HOSTS_FROM=dns
image: quay.io/kompose/web
ports:
- 80:80
- 8080:8080
labels:
kompose.service.expose: lb
kompose.service.external-traffic-policy: local
@ -468,9 +466,9 @@ services:
```
## Restart
If you want to create normal pods without controller you can use `restart` construct of docker-compose to define that. Follow table below to see what happens on the `restart` value.
If you want to create normal pods without a controller, you can use the `restart` construct of compose to define that. See the table below for what happens with each `restart` value.
| `docker-compose` `restart` | object created | Pod `restartPolicy` |
| `compose` `restart` | object created | Pod `restartPolicy` |
| -------------------------- | ----------------- | ------------------- |
| `""` | controller object | `Always` |
| `always` | controller object | `Always` |
@ -498,7 +496,7 @@ If the Docker Compose file has a volume specified for a service, the Deployment
If the Docker Compose file has service name with `_` or `.` in it (eg.`web_service` or `web.service`), then it will be replaced by `-` and the service name will be renamed accordingly (eg.`web-service`). Kompose does this because "Kubernetes" doesn't allow `_` in object name.
Please note that changing service name might break some `docker-compose` files.
Please note that changing the service name might break some `compose` files.
## Network policies generation
[Network policies](https://kubernetes.io/docs/concepts/services-networking/network-policies) are not generated by default, because it's not mandatory to deploy your application. However, it's one of the best practices when it comes to deploy secure applications on top of Kubernetes.

@ -1,4 +0,0 @@
FROM busybox:1.26.2
RUN touch /test

@ -1,6 +0,0 @@
version: "2"
services:
foo:
build: "./build"
image: docker.io/foo/bar

examples/compose.yaml Normal file
@ -0,0 +1,20 @@
services:
redis-leader:
container_name: redis-leader
image: redis:latest
ports:
- "6379"
redis-replica:
container_name: redis-replica
image: redis:latest
ports:
- "6379"
command: redis-server --replicaof redis-leader 6379
web:
container_name: web
image: quay.io/kompose/web
ports:
- "8080:8080"

@ -1,19 +0,0 @@
version: "3"
services:
web:
image: tuna/docker-counter23
ports:
- "5000:5000"
deploy:
replicas: 1
restart_policy:
condition: any
labels:
kompose.service.type: NodePort
redis:
image: redis:3.0
ports:
- "6379"

@ -1,10 +0,0 @@
web:
image: tuna/docker-counter23
ports:
- "5000:5000"
links:
- redis
redis:
image: redis:3.0
ports:
- "6379"

@ -1,13 +0,0 @@
version: "2"
services:
nginx:
image: nginx
build: ./foobar
ports:
- "6060:6060/udp"
- "5000:5000"
cap_add:
- ALL
container_name: foobar
labels:
kompose.service.type: loadbalancer

@ -1,24 +0,0 @@
version: "3"
services:
redis-master:
image: registry.k8s.io/redis:e2e
ports:
- "6379"
redis-replica:
image: registry.k8s.io/redis-slave:v2
ports:
- "6379"
environment:
- GET_HOSTS_FROM=dns
frontend:
image: registry.k8s.io/guestbook:v3
ports:
- "80:80"
environment:
- GET_HOSTS_FROM=dns
labels:
kompose.service.type: LoadBalancer

@ -1,25 +0,0 @@
version: "2"
services:
redis-master:
image: registry.k8s.io/redis:e2e
ports:
- "6379"
redis-replica:
image: registry.k8s.io/redis-slave:v2
ports:
- "6379"
environment:
- GET_HOSTS_FROM=dns
frontend:
image: registry.k8s.io/guestbook:v3
ports:
- "80:80"
environment:
- GET_HOSTS_FROM=dns
labels:
kompose.service.type: LoadBalancer

@ -1,156 +0,0 @@
version: '2'
services:
redis:
restart: always
image: sameersbn/redis:latest
command:
- --loglevel warning
volumes:
- /srv/docker/gitlab/redis:/var/lib/redis
ports:
- "6379"
postgresql:
restart: always
image: sameersbn/postgresql:9.5-3
volumes:
- /srv/docker/gitlab/postgresql:/var/lib/postgresql
environment:
- DB_USER=gitlab
- DB_PASS=password
- DB_NAME=gitlabhq_production
- DB_EXTENSION=pg_trgm
ports:
- "5432"
gitlab:
restart: always
image: sameersbn/gitlab:8.13.3
depends_on:
- redis
- postgresql
ports:
- "10080:80"
- "10022:22"
volumes:
- /srv/docker/gitlab/gitlab:/home/git/data
environment:
- DEBUG=false
- DB_ADAPTER=postgresql
- DB_HOST=postgresql
- DB_PORT=5432
- DB_USER=gitlab
- DB_PASS=password
- DB_NAME=gitlabhq_production
- REDIS_HOST=redis
- REDIS_PORT=6379
- TZ=Asia/Kolkata
- GITLAB_TIMEZONE=Kolkata
- GITLAB_HTTPS=false
- SSL_SELF_SIGNED=false
- GITLAB_HOST=localhost
- GITLAB_PORT=10080
- GITLAB_SSH_PORT=10022
- GITLAB_RELATIVE_URL_ROOT=
- GITLAB_SECRETS_DB_KEY_BASE=long-and-random-alphanumeric-string
- GITLAB_SECRETS_SECRET_KEY_BASE=long-and-random-alphanumeric-string
- GITLAB_SECRETS_OTP_KEY_BASE=long-and-random-alphanumeric-string
- GITLAB_ROOT_PASSWORD=
- GITLAB_ROOT_EMAIL=
- GITLAB_NOTIFY_ON_BROKEN_BUILDS=true
- GITLAB_NOTIFY_PUSHER=false
- GITLAB_EMAIL=notifications@example.com
- GITLAB_EMAIL_REPLY_TO=noreply@example.com
- GITLAB_INCOMING_EMAIL_ADDRESS=reply@example.com
- GITLAB_BACKUP_SCHEDULE=daily
- GITLAB_BACKUP_TIME=01:00
- SMTP_ENABLED=false
- SMTP_DOMAIN=www.example.com
- SMTP_HOST=smtp.gmail.com
- SMTP_PORT=587
- SMTP_USER=mailer@example.com
- SMTP_PASS=password
- SMTP_STARTTLS=true
- SMTP_AUTHENTICATION=login
- IMAP_ENABLED=false
- IMAP_HOST=imap.gmail.com
- IMAP_PORT=993
- IMAP_USER=mailer@example.com
- IMAP_PASS=password
- IMAP_SSL=true
- IMAP_STARTTLS=false
- OAUTH_ENABLED=false
- OAUTH_AUTO_SIGN_IN_WITH_PROVIDER=
- OAUTH_ALLOW_SSO=
- OAUTH_BLOCK_AUTO_CREATED_USERS=true
- OAUTH_AUTO_LINK_LDAP_USER=false
- OAUTH_AUTO_LINK_SAML_USER=false
- OAUTH_EXTERNAL_PROVIDERS=
- OAUTH_CAS3_LABEL=cas3
- OAUTH_CAS3_SERVER=
- OAUTH_CAS3_DISABLE_SSL_VERIFICATION=false
- OAUTH_CAS3_LOGIN_URL=/cas/login
- OAUTH_CAS3_VALIDATE_URL=/cas/p3/serviceValidate
- OAUTH_CAS3_LOGOUT_URL=/cas/logout
- OAUTH_GOOGLE_API_KEY=
- OAUTH_GOOGLE_APP_SECRET=
- OAUTH_GOOGLE_RESTRICT_DOMAIN=
- OAUTH_FACEBOOK_API_KEY=
- OAUTH_FACEBOOK_APP_SECRET=
- OAUTH_TWITTER_API_KEY=
- OAUTH_TWITTER_APP_SECRET=
- OAUTH_GITHUB_API_KEY=
- OAUTH_GITHUB_APP_SECRET=
- OAUTH_GITHUB_URL=
- OAUTH_GITHUB_VERIFY_SSL=
- OAUTH_GITLAB_API_KEY=
- OAUTH_GITLAB_APP_SECRET=
- OAUTH_BITBUCKET_API_KEY=
- OAUTH_BITBUCKET_APP_SECRET=
- OAUTH_SAML_ASSERTION_CONSUMER_SERVICE_URL=
- OAUTH_SAML_IDP_CERT_FINGERPRINT=
- OAUTH_SAML_IDP_SSO_TARGET_URL=
- OAUTH_SAML_ISSUER=
- OAUTH_SAML_LABEL="Our SAML Provider"
- OAUTH_SAML_NAME_IDENTIFIER_FORMAT=urn:oasis:names:tc:SAML:2.0:nameid-format:transient
- OAUTH_SAML_GROUPS_ATTRIBUTE=
- OAUTH_SAML_EXTERNAL_GROUPS=
- OAUTH_SAML_ATTRIBUTE_STATEMENTS_EMAIL=
- OAUTH_SAML_ATTRIBUTE_STATEMENTS_NAME=
- OAUTH_SAML_ATTRIBUTE_STATEMENTS_FIRST_NAME=
- OAUTH_SAML_ATTRIBUTE_STATEMENTS_LAST_NAME=
- OAUTH_CROWD_SERVER_URL=
- OAUTH_CROWD_APP_NAME=
- OAUTH_CROWD_APP_PASSWORD=
- OAUTH_AUTH0_CLIENT_ID=
- OAUTH_AUTH0_CLIENT_SECRET=
- OAUTH_AUTH0_DOMAIN=
- OAUTH_AZURE_API_KEY=
- OAUTH_AZURE_API_SECRET=
- OAUTH_AZURE_TENANT_ID=
labels:
kompose.service.type: NodePort

@ -1,27 +0,0 @@
version: "2"
services:
vote:
image: docker/example-voting-app-vote:latest
labels:
- "com.example.description=Vote"
ports:
- "5000:80"
redis:
image: redis:alpine
ports: ["6379"]
worker:
image: docker/example-voting-app-worker:latest
db:
image: postgres:9.4
ports: ["5432"]
labels:
- "com.example.description=Postgres Database"
result:
image: tmadams333/example-voting-app-result:latest
ports:
- "5001:80"

examples/web/Dockerfile Normal file
@ -0,0 +1,11 @@
FROM golang:1.21.2
WORKDIR /app
# Copy the entire project which includes the public directory, vendoring, etc.
COPY . .
# Build your application
RUN CGO_ENABLED=0 GOOS=linux go build -o /frontend
EXPOSE 8080
CMD ["/frontend"]

examples/web/README.md Normal file
@ -0,0 +1,5 @@
A fork of https://github.com/kubernetes/examples/blob/master/guestbook-go/README.md
A simple example that shows the functionality of Kompose
Pushed to https://docker.io/kompose/web

examples/web/compose.yaml Normal file
@ -0,0 +1,20 @@
services:
redis-leader:
container_name: redis-leader
image: redis
ports:
- "6379"
redis-replica:
container_name: redis-replica
image: redis
ports:
- "6379"
command: redis-server --replicaof redis-leader 6379
web:
container_name: web
build: ./web
ports:
- "8080:8080"

examples/web/go.mod Normal file
@ -0,0 +1,14 @@
module github.com/redhat-developer/podman-desktop-demo
go 1.21.2
require (
github.com/codegangsta/negroni v1.0.0
github.com/gorilla/mux v1.8.1
github.com/xyproto/simpleredis/v2 v2.6.5
)
require (
github.com/gomodule/redigo v1.8.9 // indirect
github.com/xyproto/pinterface v1.5.3 // indirect
)

examples/web/go.sum Normal file
@ -0,0 +1,20 @@
github.com/codegangsta/negroni v1.0.0 h1:+aYywywx4bnKXWvoWtRfJ91vC59NbEhEY03sZjQhbVY=
github.com/codegangsta/negroni v1.0.0/go.mod h1:v0y3T5G7Y1UlFfyxFn/QLRU4a2EuNau2iZY63YTKWo0=
github.com/davecgh/go-spew v1.1.0 h1:ZDRjVQ15GmhC3fiQ8ni8+OwkZQO4DARzQgrnXU1Liz8=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/gomodule/redigo v1.8.9 h1:Sl3u+2BI/kk+VEatbj0scLdrFhjPmbxOc1myhDP41ws=
github.com/gomodule/redigo v1.8.9/go.mod h1:7ArFNvsTjH8GMMzB4uy1snslv2BwmginuMs06a1uzZE=
github.com/gorilla/mux v1.8.1 h1:TuBL49tXwgrFYWhqrNgrUNEY92u81SPhu7sTdzQEiWY=
github.com/gorilla/mux v1.8.1/go.mod h1:AKf9I4AEqPTmMytcMc0KkNouC66V3BtZ4qD5fmWSiMQ=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.7.0 h1:nwc3DEeHmmLAfoZucVR881uASk0Mfjw8xYJ99tb5CcY=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/xyproto/pinterface v1.5.3 h1:RKkNT88cwrSqD9hU4cYAO5yeo8srg4TG+74Pcj88iz0=
github.com/xyproto/pinterface v1.5.3/go.mod h1:X5B5pKE49ak7SpyDh5QvJvLH9cC9XuZNDcl5hEyYc34=
github.com/xyproto/simpleredis/v2 v2.6.5 h1:PtAz5j2UUACNHx5LetI60dbCsVMhIv1H878bND4VQK4=
github.com/xyproto/simpleredis/v2 v2.6.5/go.mod h1:OrWVubZGr0SbpAjkAQkFn/iiex2AsblBm80uhYxbjQY=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c h1:dUUwHk2QECo/6vqA44rthZ8ie2QXMNeKRTHCNY2nXvo=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=

examples/web/main.go Normal file
@ -0,0 +1,133 @@
package main
import (
"encoding/json"
"net/http"
"os"
"strings"
"github.com/codegangsta/negroni"
"github.com/gorilla/mux"
"github.com/xyproto/simpleredis/v2"
)
var (
leaderPool *simpleredis.ConnectionPool
replicaPools []*simpleredis.ConnectionPool
)
func ListRangeHandler(rw http.ResponseWriter, req *http.Request) {
key := mux.Vars(req)["key"]
var members []string
var err error
for _, replicaPool := range replicaPools {
list := simpleredis.NewList(replicaPool, key)
members, err = list.GetAll()
if err == nil {
break // Found a replica with data, exit loop
}
}
if err != nil {
http.Error(rw, "Failed to retrieve data from replicas", http.StatusInternalServerError)
return
}
membersJSON, err := json.MarshalIndent(members, "", " ")
if err != nil {
http.Error(rw, "Failed to marshal JSON", http.StatusInternalServerError)
return
}
rw.Write(membersJSON)
}
func ListPushHandler(rw http.ResponseWriter, req *http.Request) {
key := mux.Vars(req)["key"]
value := mux.Vars(req)["value"]
list := simpleredis.NewList(leaderPool, key)
HandleError(nil, list.Add(value))
ListRangeHandler(rw, req)
}
func InfoHandler(rw http.ResponseWriter, req *http.Request) {
info := HandleError(leaderPool.Get(0).Do("INFO")).([]byte)
rw.Write(info)
}
func EnvHandler(rw http.ResponseWriter, req *http.Request) {
environment := make(map[string]string)
for _, item := range os.Environ() {
splits := strings.Split(item, "=")
key := splits[0]
val := strings.Join(splits[1:], "=")
environment[key] = val
}
envJSON := HandleError(json.MarshalIndent(environment, "", " ")).([]byte)
rw.Write(envJSON)
}
func HandleError(result interface{}, err error) (r interface{}) {
if err != nil {
panic(err)
}
return result
}
func getReplicaPool() *simpleredis.ConnectionPool {
// Use the first replica as the primary replica for read operations
return replicaPools[0]
}
func main() {
// Read the Redis replica addresses from an environment variable
replicaAddresses := os.Getenv("REDIS_REPLICAS")
if replicaAddresses == "" {
// Use default values if not set
replicaAddresses = "redis-replica"
}
// Read the Redis port number from an environment variable
redisPort := os.Getenv("REDIS_PORT")
if redisPort == "" {
// Use default port if not set
redisPort = "6379"
}
// Read the Redis leader address from an environment variable
redisLeaderAddress := os.Getenv("REDIS_LEADER")
if redisLeaderAddress == "" {
// Use default leader address if not set
redisLeaderAddress = "redis-leader"
}
// Read the server port number from an environment variable
serverPort := os.Getenv("SERVER_PORT")
if serverPort == "" {
// Use default port if not set
serverPort = "8080"
}
// Construct the Redis leader and replica addresses
leaderAddress := redisLeaderAddress + ":" + redisPort
replicaAddressesArr := strings.Split(replicaAddresses, ",")
replicaPools = make([]*simpleredis.ConnectionPool, len(replicaAddressesArr))
for i, addr := range replicaAddressesArr {
replicaPools[i] = simpleredis.NewConnectionPoolHost(addr + ":" + redisPort)
defer replicaPools[i].Close()
}
// Create a connection pool for the leader using the constructed address
leaderPool = simpleredis.NewConnectionPoolHost(leaderAddress)
defer leaderPool.Close()
r := mux.NewRouter()
r.Path("/lrange/{key}").Methods("GET").HandlerFunc(ListRangeHandler)
r.Path("/rpush/{key}/{value}").Methods("GET").HandlerFunc(ListPushHandler)
r.Path("/info").Methods("GET").HandlerFunc(InfoHandler)
r.Path("/env").Methods("GET").HandlerFunc(EnvHandler)
n := negroni.Classic()
n.UseHandler(r)
n.Run(":" + serverPort)
}
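
(Not part of the file above: a hedged sketch of exercising the four routes registered in `main()` once the stack from `examples/compose.yaml` is running. Port 8080 is the default `SERVER_PORT`; `guestbook` is just an example key.)

```sh
# Append an entry via the Redis leader, then read the list back through a replica.
curl http://localhost:8080/rpush/guestbook/hello
curl http://localhost:8080/lrange/guestbook

# Redis INFO from the leader, and the container environment as JSON.
curl http://localhost:8080/info
curl http://localhost:8080/env
```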

@ -0,0 +1,35 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type">
<meta charset="utf-8">
<meta content="width=device-width" name="viewport">
<link href="style.css" rel="stylesheet">
<title>Kompose Guestbook</title>
</head>
<body>
<div id="header">
<img src="logo.png" width="300px">
<h1>Kompose Guestbook</h1>
</div>
<div id="guestbook-entries">
<p>Waiting for database connection...</p>
</div>
<div>
<form id="guestbook-form">
<input autocomplete="off" id="guestbook-entry-content" type="text">
<a href="#" id="guestbook-submit">Submit</a>
</form>
</div>
<div>
<p><h2 id="guestbook-host-address"></h2></p>
<p><a href="env">/env</a>
<a href="info">/info</a></p>
</div>
<script src="//ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<script src="script.js"></script>
</body>
</html>

Binary file not shown (added, 107 KiB).
@ -0,0 +1,47 @@
$(document).ready(function() {
var headerTitleElement = $("#header h1");
var entriesElement = $("#guestbook-entries");
var formElement = $("#guestbook-form");
var submitElement = $("#guestbook-submit");
var entryContentElement = $("#guestbook-entry-content");
var hostAddressElement = $("#guestbook-host-address");
var appendGuestbookEntries = function(data) {
entriesElement.empty();
$.each(data, function(key, val) {
entriesElement.append("<p>" + val + "</p>");
});
}
// On submission, clear the input field and add the entry to the guestbook.
var handleSubmission = function(e) {
e.preventDefault();
var entryValue = entryContentElement.val()
if (entryValue.length > 0) {
entriesElement.append("<p>...</p>");
$.getJSON("rpush/guestbook/" + entryValue, appendGuestbookEntries);
}
// Clear entryContentElement input field on click.
entryContentElement.val("");
return false;
}
var colors = ["#1f77b4", "#2ca02c", "#d62728", "#9467bd", "#ff7f0e"];
var randomColor = colors[Math.floor(5 * Math.random())];
(function setElementsColor(color) {
headerTitleElement.css("color", color);
entryContentElement.css("box-shadow", "inset 0 0 0 2px " + color);
submitElement.css("background-color", color);
})(randomColor);
submitElement.click(handleSubmission);
formElement.submit(handleSubmission);
hostAddressElement.append(document.URL);
// Poll every second.
(function fetchGuestbook() {
$.getJSON("lrange/guestbook").done(appendGuestbookEntries).always(
function() {
setTimeout(fetchGuestbook, 1000);
});
})();
});

@ -0,0 +1,61 @@
body, input {
color: #123;
font-family: "Gill Sans", sans-serif;
}
div {
overflow: hidden;
padding: 1em 0;
position: relative;
text-align: center;
}
h1, h2, p, input, a {
font-weight: 300;
margin: 0;
}
h1 {
color: #BDB76B;
font-size: 3.5em;
}
h2 {
color: #999;
}
form {
margin: 0 auto;
max-width: 50em;
text-align: center;
}
input {
border: 0;
border-radius: 1000px;
box-shadow: inset 0 0 0 2px #BDB76B;
display: inline;
font-size: 1.5em;
margin-bottom: 1em;
outline: none;
padding: .5em 5%;
width: 55%;
}
form a {
background: #BDB76B;
border: 0;
border-radius: 1000px;
color: #FFF;
font-size: 1.25em;
font-weight: 400;
padding: .75em 2em;
text-decoration: none;
text-transform: uppercase;
white-space: normal;
}
p {
font-size: 1.5em;
line-height: 1.5;
}

@ -0,0 +1 @@
/coverage.txt

@ -0,0 +1,27 @@
language: go
sudo: false
dist: trusty
go:
- 1.x
- 1.2.x
- 1.3.x
- 1.4.x
- 1.5.x
- 1.6.x
- 1.7.x
- 1.8.x
- master
before_install:
- find "${GOPATH%%:*}" -name '*.a' -delete
- rm -rf "${GOPATH%%:*}/src/golang.org"
- go get golang.org/x/tools/cover
- go get golang.org/x/tools/cmd/cover
script:
- go test -race -coverprofile=coverage.txt -covermode=atomic
after_success:
- bash <(curl -s "https://codecov.io/bash")

@ -0,0 +1,69 @@
# Change Log
**ATTN**: This project uses [semantic versioning](http://semver.org/).
## [Unreleased] -
## [1.0.0] - 2018-09-01
### Fixed
- `Logger` middleware now correctly handles paths containing a `%` instead of trying to treat it as a format specifier
## [0.3.0] - 2017-11-11
### Added
- `With()` helper for building a new `Negroni` struct chaining handlers from
existing `Negroni` structs
- Format log output in `Logger` middleware via a configurable `text/template`
string injectable via `.SetFormat`. Added `LoggerDefaultFormat` and
`LoggerDefaultDateFormat` to configure the default template and date format
used by the `Logger` middleware.
- Support for HTTP/2 pusher support via `http.Pusher` interface for Go 1.8+.
- `WrapFunc` to convert `http.HandlerFunc` into a `negroni.Handler`
- `Formatter` field added to `Recovery` middleware to allow configuring how
`panic`s are output. Default of `TextFormatter` (how it was output in
`0.2.0`) used. `HTMLPanicFormatter` also added to allow easy outputing of
`panic`s as HTML.
### Fixed
- `Written()` correct returns `false` if no response header has been written
- Only implement `http.CloseNotifier` with the `negroni.ResponseWriter` if the
underlying `http.ResponseWriter` implements it (previously would always
implement it and panic if the underlying `http.ResponseWriter` did not.
### Changed
- Set default status to `0` in the case that no handler writes status -- was
previously `200` (in 0.2.0, before that it was `0` so this reestablishes that
behavior)
- Catch `panic`s thrown by callbacks provided to the `Recovery` handler
- Recovery middleware will set `text/plain` content-type if none is set
- `ALogger` interface to allow custom logger outputs to be used with the
`Logger` middleware. Changes embeded field in `negroni.Logger` from `Logger`
to `ALogger`.
- Default `Logger` middleware output changed to be more structure and verbose
(also now configurable, see `Added`)
- Automatically bind to port specified in `$PORT` in `.Run()` if an address is
not passed in. Fall back to binding to `:8080` if no address specified
(configuable via `DefaultAddress`).
- `PanicHandlerFunc` added to `Recovery` middleware to enhance custom handling
of `panic`s by providing additional information to the handler including the
stack and the `http.Request`. `Recovery.ErrorHandlerFunc` was also added, but
deprecated in favor of the new `PanicHandlerFunc`.
## [0.2.0] - 2016-05-10
### Added
- Support for variadic handlers in `New()`
- Added `Negroni.Handlers()` to fetch all of the handlers for a given chain
- Allowed size in `Recovery` handler was bumped to 8k
- `Negroni.UseFunc` to push another handler onto the chain
### Changed
- Set the status before calling `beforeFuncs` so the information is available to them
- Set default status to `200` in the case that no handler writes status -- was previously `0`
- Panic if `nil` handler is given to `negroni.Use`
## 0.1.0 - 2013-07-22
### Added
- Initial implementation.
[Unreleased]: https://github.com/urfave/negroni/compare/v0.2.0...HEAD
[0.2.0]: https://github.com/urfave/negroni/compare/v0.1.0...v0.2.0

@ -0,0 +1,21 @@
The MIT License (MIT)
Copyright (c) 2014 Jeremy Saenz
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

@ -0,0 +1,549 @@
# Negroni
[![GoDoc](https://godoc.org/github.com/urfave/negroni?status.svg)](http://godoc.org/github.com/urfave/negroni)
[![Build Status](https://travis-ci.org/urfave/negroni.svg?branch=master)](https://travis-ci.org/urfave/negroni)
[![codebeat](https://codebeat.co/badges/47d320b1-209e-45e8-bd99-9094bc5111e2)](https://codebeat.co/projects/github-com-urfave-negroni)
[![codecov](https://codecov.io/gh/urfave/negroni/branch/master/graph/badge.svg)](https://codecov.io/gh/urfave/negroni)
**Notice:** This is the library formerly known as
`github.com/codegangsta/negroni` -- Github will automatically redirect requests
to this repository, but we recommend updating your references for clarity.
Negroni is an idiomatic approach to web middleware in Go. It is tiny,
non-intrusive, and encourages use of `net/http` Handlers.
If you like the idea of [Martini](https://github.com/go-martini/martini), but
you think it contains too much magic, then Negroni is a great fit.
Language Translations:
* [Deutsch (de_DE)](translations/README_de_de.md)
* [Português Brasileiro (pt_BR)](translations/README_pt_br.md)
* [简体中文 (zh_CN)](translations/README_zh_CN.md)
* [繁體中文 (zh_TW)](translations/README_zh_tw.md)
* [日本語 (ja_JP)](translations/README_ja_JP.md)
* [Français (fr_FR)](translations/README_fr_FR.md)
## Getting Started
After installing Go and setting up your
[GOPATH](http://golang.org/doc/code.html#GOPATH), create your first `.go` file.
We'll call it `server.go`.
<!-- { "interrupt": true } -->
``` go
package main
import (
"fmt"
"net/http"
"github.com/urfave/negroni"
)
func main() {
mux := http.NewServeMux()
mux.HandleFunc("/", func(w http.ResponseWriter, req *http.Request) {
fmt.Fprintf(w, "Welcome to the home page!")
})
n := negroni.Classic() // Includes some default middlewares
n.UseHandler(mux)
http.ListenAndServe(":3000", n)
}
```
Then install the Negroni package (**NOTE**: &gt;= **go 1.1** is required):
```
go get github.com/urfave/negroni
```
Then run your server:
```
go run server.go
```
You will now have a Go `net/http` webserver running on `localhost:3000`.
### Packaging
If you are on Debian, `negroni` is also available as [a
package](https://packages.debian.org/sid/golang-github-urfave-negroni-dev) that
you can install via `apt install golang-github-urfave-negroni-dev` (at the time
of writing, it is in the `sid` repositories).
## Is Negroni a Framework?
Negroni is **not** a framework. It is a middleware-focused library that is
designed to work directly with `net/http`.
## Routing?
Negroni is BYOR (Bring your own Router). The Go community already has a number
of great http routers available, and Negroni tries to play well with all of them
by fully supporting `net/http`. For instance, integrating with [Gorilla Mux]
looks like so:
``` go
router := mux.NewRouter()
router.HandleFunc("/", HomeHandler)
n := negroni.New(Middleware1, Middleware2)
// Or use a middleware with the Use() function
n.Use(Middleware3)
// router goes last
n.UseHandler(router)
http.ListenAndServe(":3001", n)
```
## `negroni.Classic()`
`negroni.Classic()` provides some default middleware that is useful for most
applications:
* [`negroni.Recovery`](#recovery) - Panic Recovery Middleware.
* [`negroni.Logger`](#logger) - Request/Response Logger Middleware.
* [`negroni.Static`](#static) - Static File serving under the "public"
directory.
This makes it really easy to get started with some useful features from Negroni.
## Handlers
Negroni provides a bidirectional middleware flow. This is done through the
`negroni.Handler` interface:
``` go
type Handler interface {
ServeHTTP(rw http.ResponseWriter, r *http.Request, next http.HandlerFunc)
}
```
If a middleware hasn't already written to the `ResponseWriter`, it should call
the next `http.HandlerFunc` in the chain to yield to the next middleware
handler. This can be used for great good:
``` go
func MyMiddleware(rw http.ResponseWriter, r *http.Request, next http.HandlerFunc) {
// do some stuff before
next(rw, r)
// do some stuff after
}
```
And you can map it to the handler chain with the `Use` function:
``` go
n := negroni.New()
n.Use(negroni.HandlerFunc(MyMiddleware))
```
You can also map plain old `http.Handler`s:
``` go
n := negroni.New()
mux := http.NewServeMux()
// map your routes
n.UseHandler(mux)
http.ListenAndServe(":3000", n)
```
## `With()`
Negroni has a convenience function called `With`. `With` takes one or more
`Handler` instances and returns a new `Negroni` with the combination of the
receiver's handlers and the new handlers.
```go
// middleware we want to reuse
common := negroni.New()
common.Use(MyMiddleware1)
common.Use(MyMiddleware2)
// `specific` is a new negroni with the handlers from `common` combined with the
// the handlers passed in
specific := common.With(
SpecificMiddleware1,
SpecificMiddleware2
)
```
## `Run()`
Negroni has a convenience function called `Run`. `Run` takes an addr string
identical to [`http.ListenAndServe`](https://godoc.org/net/http#ListenAndServe).
<!-- { "interrupt": true } -->
``` go
package main
import (
"github.com/urfave/negroni"
)
func main() {
n := negroni.Classic()
n.Run(":8080")
}
```
If no address is provided, the `PORT` environment variable is used instead.
If the `PORT` environment variable is not defined, the default address will be used.
See [Run](https://godoc.org/github.com/urfave/negroni#Negroni.Run) for a complete description.
In general, you will want to use `net/http` methods and pass `negroni` as a
`Handler`, as this is more flexible, e.g.:
<!-- { "interrupt": true } -->
``` go
package main
import (
"fmt"
"log"
"net/http"
"time"
"github.com/urfave/negroni"
)
func main() {
mux := http.NewServeMux()
mux.HandleFunc("/", func(w http.ResponseWriter, req *http.Request) {
fmt.Fprintf(w, "Welcome to the home page!")
})
n := negroni.Classic() // Includes some default middlewares
n.UseHandler(mux)
s := &http.Server{
Addr: ":8080",
Handler: n,
ReadTimeout: 10 * time.Second,
WriteTimeout: 10 * time.Second,
MaxHeaderBytes: 1 << 20,
}
log.Fatal(s.ListenAndServe())
}
```
## Route Specific Middleware
If you have a route group of routes that need specific middleware to be
executed, you can simply create a new Negroni instance and use it as your route
handler.
``` go
router := mux.NewRouter()
adminRoutes := mux.NewRouter()
// add admin routes here
// Create a new negroni for the admin middleware
router.PathPrefix("/admin").Handler(negroni.New(
Middleware1,
Middleware2,
negroni.Wrap(adminRoutes),
))
```
If you are using [Gorilla Mux], here is an example using a subrouter:
``` go
router := mux.NewRouter()
subRouter := mux.NewRouter().PathPrefix("/subpath").Subrouter().StrictSlash(true)
subRouter.HandleFunc("/", someSubpathHandler) // "/subpath/"
subRouter.HandleFunc("/:id", someSubpathHandler) // "/subpath/:id"
// "/subpath" is necessary to ensure the subRouter and main router linkup
router.PathPrefix("/subpath").Handler(negroni.New(
Middleware1,
Middleware2,
negroni.Wrap(subRouter),
))
```
`With()` can be used to eliminate redundancy for middlewares shared across
routes.
``` go
router := mux.NewRouter()
apiRoutes := mux.NewRouter()
// add api routes here
webRoutes := mux.NewRouter()
// add web routes here
// create common middleware to be shared across routes
common := negroni.New(
Middleware1,
Middleware2,
)
// create a new negroni for the api middleware
// using the common middleware as a base
router.PathPrefix("/api").Handler(common.With(
APIMiddleware1,
negroni.Wrap(apiRoutes),
))
// create a new negroni for the web middleware
// using the common middleware as a base
router.PathPrefix("/web").Handler(common.With(
WebMiddleware1,
negroni.Wrap(webRoutes),
))
```
## Bundled Middleware
### Static
This middleware will serve files on the filesystem. If the files do not exist,
it proxies the request to the next middleware. If you want the requests for
non-existent files to return a `404 File Not Found` to the user you should look
at using [http.FileServer](https://golang.org/pkg/net/http/#FileServer) as
a handler.
Example:
<!-- { "interrupt": true } -->
``` go
package main
import (
"fmt"
"net/http"
"github.com/urfave/negroni"
)
func main() {
mux := http.NewServeMux()
mux.HandleFunc("/", func(w http.ResponseWriter, req *http.Request) {
fmt.Fprintf(w, "Welcome to the home page!")
})
// Example of using a http.FileServer if you want "server-like" rather than "middleware" behavior
// mux.Handle("/public", http.FileServer(http.Dir("/home/public")))
n := negroni.New()
n.Use(negroni.NewStatic(http.Dir("/tmp")))
n.UseHandler(mux)
http.ListenAndServe(":3002", n)
}
```
Will serve files from the `/tmp` directory first, but proxy calls to the next
handler if the request does not match a file on the filesystem.
### Recovery
This middleware catches `panic`s and responds with a `500` response code. If
any other middleware has written a response code or body, this middleware will
fail to properly send a 500 to the client, as the client has already received
the HTTP response code. Additionally, an `PanicHandlerFunc` can be attached
to report 500's to an error reporting service such as Sentry or Airbrake.
Example:
<!-- { "interrupt": true } -->
``` go
package main
import (
"net/http"
"github.com/urfave/negroni"
)
func main() {
mux := http.NewServeMux()
mux.HandleFunc("/", func(w http.ResponseWriter, req *http.Request) {
panic("oh no")
})
n := negroni.New()
n.Use(negroni.NewRecovery())
n.UseHandler(mux)
http.ListenAndServe(":3003", n)
}
```
Will return a `500 Internal Server Error` to each request. It will also log the
stack traces as well as print the stack trace to the requester if `PrintStack`
is set to `true` (the default).
Example with error handler:
``` go
package main
import (
"net/http"
"github.com/urfave/negroni"
)
func main() {
mux := http.NewServeMux()
mux.HandleFunc("/", func(w http.ResponseWriter, req *http.Request) {
panic("oh no")
})
n := negroni.New()
recovery := negroni.NewRecovery()
recovery.PanicHandlerFunc = reportToSentry
n.Use(recovery)
n.UseHandler(mux)
http.ListenAndServe(":3003", n)
}
func reportToSentry(info *negroni.PanicInformation) {
// write code here to report error to Sentry
}
```
The middleware simply output the informations on STDOUT by default.
You can customize the output process by using the `SetFormatter()` function.
You can use also the `HTMLPanicFormatter` to display a pretty HTML when a crash occurs.
<!-- { "interrupt": true } -->
``` go
package main
import (
"net/http"
"github.com/urfave/negroni"
)
func main() {
mux := http.NewServeMux()
mux.HandleFunc("/", func(w http.ResponseWriter, req *http.Request) {
panic("oh no")
})
n := negroni.New()
recovery := negroni.NewRecovery()
recovery.Formatter = &negroni.HTMLPanicFormatter{}
n.Use(recovery)
n.UseHandler(mux)
http.ListenAndServe(":3003", n)
}
```
## Logger
This middleware logs each incoming request and response.
Example:
<!-- { "interrupt": true } -->
``` go
package main
import (
"fmt"
"net/http"
"github.com/urfave/negroni"
)
func main() {
mux := http.NewServeMux()
mux.HandleFunc("/", func(w http.ResponseWriter, req *http.Request) {
fmt.Fprintf(w, "Welcome to the home page!")
})
n := negroni.New()
n.Use(negroni.NewLogger())
n.UseHandler(mux)
http.ListenAndServe(":3004", n)
}
```
Will print a log similar to:
```
[negroni] 2017-10-04T14:56:25+02:00 | 200 | 378µs | localhost:3004 | GET /
```
on each request.
You can also set your own log format by calling the `SetFormat` function. The format is a template string with fields as mentioned in the `LoggerEntry` struct. So, as an example -
```go
l.SetFormat("[{{.Status}} {{.Duration}}] - {{.Request.UserAgent}}")
```
will show something like - `[200 18.263µs] - Go-User-Agent/1.1 `
## Third Party Middleware
Here is a current list of Negroni compatible middlware. Feel free to put up a PR
linking your middleware if you have built one:
| Middleware | Author | Description |
| -----------|--------|-------------|
| [authz](https://github.com/casbin/negroni-authz) | [Yang Luo](https://github.com/hsluoyz) | ACL, RBAC, ABAC Authorization middlware based on [Casbin](https://github.com/casbin/casbin) |
| [binding](https://github.com/mholt/binding) | [Matt Holt](https://github.com/mholt) | Data binding from HTTP requests into structs |
| [cloudwatch](https://github.com/cvillecsteele/negroni-cloudwatch) | [Colin Steele](https://github.com/cvillecsteele) | AWS cloudwatch metrics middleware |
| [cors](https://github.com/rs/cors) | [Olivier Poitrey](https://github.com/rs) | [Cross Origin Resource Sharing](http://www.w3.org/TR/cors/) (CORS) support |
| [csp](https://github.com/awakenetworks/csp) | [Awake Networks](https://github.com/awakenetworks) | [Content Security Policy](https://www.w3.org/TR/CSP2/) (CSP) support |
| [delay](https://github.com/jeffbmartinez/delay) | [Jeff Martinez](https://github.com/jeffbmartinez) | Add delays/latency to endpoints. Useful when testing effects of high latency |
| [New Relic Go Agent](https://github.com/yadvendar/negroni-newrelic-go-agent) | [Yadvendar Champawat](https://github.com/yadvendar) | Official [New Relic Go Agent](https://github.com/newrelic/go-agent) (currently in beta) |
| [gorelic](https://github.com/jingweno/negroni-gorelic) | [Jingwen Owen Ou](https://github.com/jingweno) | New Relic agent for Go runtime |
| [Graceful](https://github.com/tylerb/graceful) | [Tyler Bunnell](https://github.com/tylerb) | Graceful HTTP Shutdown |
| [gzip](https://github.com/phyber/negroni-gzip) | [phyber](https://github.com/phyber) | GZIP response compression |
| [JWT Middleware](https://github.com/auth0/go-jwt-middleware) | [Auth0](https://github.com/auth0) | Middleware checks for a JWT on the `Authorization` header on incoming requests and decodes it|
| [JWT Middleware](https://github.com/mfuentesg/go-jwtmiddleware) | [Marcelo Fuentes](https://github.com/mfuentesg) | JWT middleware for golang |
| [logrus](https://github.com/meatballhat/negroni-logrus) | [Dan Buch](https://github.com/meatballhat) | Logrus-based logger |
| [oauth2](https://github.com/goincremental/negroni-oauth2) | [David Bochenski](https://github.com/bochenski) | oAuth2 middleware |
| [onthefly](https://github.com/xyproto/onthefly) | [Alexander Rødseth](https://github.com/xyproto) | Generate TinySVG, HTML and CSS on the fly |
| [permissions2](https://github.com/xyproto/permissions2) | [Alexander Rødseth](https://github.com/xyproto) | Cookies, users and permissions |
| [prometheus](https://github.com/zbindenren/negroni-prometheus) | [Rene Zbinden](https://github.com/zbindenren) | Easily create metrics endpoint for the [prometheus](http://prometheus.io) instrumentation tool |
| [render](https://github.com/unrolled/render) | [Cory Jacobsen](https://github.com/unrolled) | Render JSON, XML and HTML templates |
| [RestGate](https://github.com/pjebs/restgate) | [Prasanga Siripala](https://github.com/pjebs) | Secure authentication for REST API endpoints |
| [secure](https://github.com/unrolled/secure) | [Cory Jacobsen](https://github.com/unrolled) | Middleware that implements a few quick security wins |
| [sessions](https://github.com/goincremental/negroni-sessions) | [David Bochenski](https://github.com/bochenski) | Session Management |
| [stats](https://github.com/thoas/stats) | [Florent Messa](https://github.com/thoas) | Store information about your web application (response time, etc.) |
| [VanGoH](https://github.com/auroratechnologies/vangoh) | [Taylor Wrobel](https://github.com/twrobel3) | Configurable [AWS-Style](http://docs.aws.amazon.com/AmazonS3/latest/dev/RESTAuthentication.html) HMAC authentication middleware |
| [xrequestid](https://github.com/pilu/xrequestid) | [Andrea Franz](https://github.com/pilu) | Middleware that assigns a random X-Request-Id header to each request |
| [mgo session](https://github.com/joeljames/nigroni-mgo-session) | [Joel James](https://github.com/joeljames) | Middleware that handles creating and closing mgo sessions per request |
| [digits](https://github.com/bamarni/digits) | [Bilal Amarni](https://github.com/bamarni) | Middleware that handles [Twitter Digits](https://get.digits.com/) authentication |
| [stats](https://github.com/guptachirag/stats) | [Chirag Gupta](https://github.com/guptachirag/stats) | Middleware that manages qps and latency stats for your endpoints and asynchronously flushes them to influx db |
| [Chaos](https://github.com/falzm/chaos) | [Marc Falzon](https://github.com/falzm) | Middleware for injecting chaotic behavior into application in a programmatic way |
## Examples
[Alexander Rødseth](https://github.com/xyproto) created
[mooseware](https://github.com/xyproto/mooseware), a skeleton for writing a
Negroni middleware handler.
[Prasanga Siripala](https://github.com/pjebs) created an effective skeleton structure for web-based Go/Negroni projects: [Go-Skeleton](https://github.com/pjebs/go-skeleton)
## Live code reload?
[gin](https://github.com/codegangsta/gin) and
[fresh](https://github.com/pilu/fresh) both live reload negroni apps.
## Essential Reading for Beginners of Go & Negroni
* [Using a Context to pass information from middleware to end handler](http://elithrar.github.io/article/map-string-interface/)
* [Understanding middleware](https://mattstauffer.co/blog/laravel-5.0-middleware-filter-style)
## About
Negroni is obsessively designed by none other than the [Code
Gangsta](https://codegangsta.io/)
[Gorilla Mux]: https://github.com/gorilla/mux
[`http.FileSystem`]: https://godoc.org/net/http#FileSystem

@ -0,0 +1,25 @@
// Package negroni is an idiomatic approach to web middleware in Go. It is tiny, non-intrusive, and encourages use of net/http Handlers.
//
// If you like the idea of Martini, but you think it contains too much magic, then Negroni is a great fit.
//
// For a full guide visit http://github.com/urfave/negroni
//
// package main
//
// import (
// "github.com/urfave/negroni"
// "net/http"
// "fmt"
// )
//
// func main() {
// mux := http.NewServeMux()
// mux.HandleFunc("/", func(w http.ResponseWriter, req *http.Request) {
// fmt.Fprintf(w, "Welcome to the home page!")
// })
//
// n := negroni.Classic()
// n.UseHandler(mux)
// n.Run(":3000")
// }
package negroni

@ -0,0 +1,78 @@
package negroni
import (
"bytes"
"log"
"net/http"
"os"
"text/template"
"time"
)
// LoggerEntry is the structure passed to the template.
type LoggerEntry struct {
StartTime string
Status int
Duration time.Duration
Hostname string
Method string
Path string
Request *http.Request
}
// LoggerDefaultFormat is the format logged used by the default Logger instance.
var LoggerDefaultFormat = "{{.StartTime}} | {{.Status}} | \t {{.Duration}} | {{.Hostname}} | {{.Method}} {{.Path}}"
// LoggerDefaultDateFormat is the format used for date by the default Logger instance.
var LoggerDefaultDateFormat = time.RFC3339
// ALogger interface
type ALogger interface {
Println(v ...interface{})
Printf(format string, v ...interface{})
}
// Logger is a middleware handler that logs the request as it goes in and the response as it goes out.
type Logger struct {
// ALogger implements just enough log.Logger interface to be compatible with other implementations
ALogger
dateFormat string
template *template.Template
}
// NewLogger returns a new Logger instance
func NewLogger() *Logger {
logger := &Logger{ALogger: log.New(os.Stdout, "[negroni] ", 0), dateFormat: LoggerDefaultDateFormat}
logger.SetFormat(LoggerDefaultFormat)
return logger
}
func (l *Logger) SetFormat(format string) {
l.template = template.Must(template.New("negroni_parser").Parse(format))
}
func (l *Logger) SetDateFormat(format string) {
l.dateFormat = format
}
func (l *Logger) ServeHTTP(rw http.ResponseWriter, r *http.Request, next http.HandlerFunc) {
start := time.Now()
next(rw, r)
res := rw.(ResponseWriter)
log := LoggerEntry{
StartTime: start.Format(l.dateFormat),
Status: res.Status(),
Duration: time.Since(start),
Hostname: r.Host,
Method: r.Method,
Path: r.URL.Path,
Request: r,
}
buff := &bytes.Buffer{}
l.template.Execute(buff, log)
l.Println(buff.String())
}

@ -0,0 +1,169 @@
package negroni
import (
"log"
"net/http"
"os"
)
const (
// DefaultAddress is used if no other is specified.
DefaultAddress = ":8080"
)
// Handler handler is an interface that objects can implement to be registered to serve as middleware
// in the Negroni middleware stack.
// ServeHTTP should yield to the next middleware in the chain by invoking the next http.HandlerFunc
// passed in.
//
// If the Handler writes to the ResponseWriter, the next http.HandlerFunc should not be invoked.
type Handler interface {
ServeHTTP(rw http.ResponseWriter, r *http.Request, next http.HandlerFunc)
}
// HandlerFunc is an adapter to allow the use of ordinary functions as Negroni handlers.
// If f is a function with the appropriate signature, HandlerFunc(f) is a Handler object that calls f.
type HandlerFunc func(rw http.ResponseWriter, r *http.Request, next http.HandlerFunc)
func (h HandlerFunc) ServeHTTP(rw http.ResponseWriter, r *http.Request, next http.HandlerFunc) {
h(rw, r, next)
}
type middleware struct {
handler Handler
next *middleware
}
func (m middleware) ServeHTTP(rw http.ResponseWriter, r *http.Request) {
m.handler.ServeHTTP(rw, r, m.next.ServeHTTP)
}
// Wrap converts a http.Handler into a negroni.Handler so it can be used as a Negroni
// middleware. The next http.HandlerFunc is automatically called after the Handler
// is executed.
func Wrap(handler http.Handler) Handler {
return HandlerFunc(func(rw http.ResponseWriter, r *http.Request, next http.HandlerFunc) {
handler.ServeHTTP(rw, r)
next(rw, r)
})
}
// WrapFunc converts a http.HandlerFunc into a negroni.Handler so it can be used as a Negroni
// middleware. The next http.HandlerFunc is automatically called after the Handler
// is executed.
func WrapFunc(handlerFunc http.HandlerFunc) Handler {
return HandlerFunc(func(rw http.ResponseWriter, r *http.Request, next http.HandlerFunc) {
handlerFunc(rw, r)
next(rw, r)
})
}
// Negroni is a stack of Middleware Handlers that can be invoked as an http.Handler.
// Negroni middleware is evaluated in the order that they are added to the stack using
// the Use and UseHandler methods.
type Negroni struct {
middleware middleware
handlers []Handler
}
// New returns a new Negroni instance with no middleware preconfigured.
func New(handlers ...Handler) *Negroni {
return &Negroni{
handlers: handlers,
middleware: build(handlers),
}
}
// With returns a new Negroni instance that is a combination of the negroni
// receiver's handlers and the provided handlers.
func (n *Negroni) With(handlers ...Handler) *Negroni {
return New(
append(n.handlers, handlers...)...,
)
}
// Classic returns a new Negroni instance with the default middleware already
// in the stack.
//
// Recovery - Panic Recovery Middleware
// Logger - Request/Response Logging
// Static - Static File Serving
func Classic() *Negroni {
return New(NewRecovery(), NewLogger(), NewStatic(http.Dir("public")))
}
func (n *Negroni) ServeHTTP(rw http.ResponseWriter, r *http.Request) {
n.middleware.ServeHTTP(NewResponseWriter(rw), r)
}
// Use adds a Handler onto the middleware stack. Handlers are invoked in the order they are added to a Negroni.
func (n *Negroni) Use(handler Handler) {
if handler == nil {
panic("handler cannot be nil")
}
n.handlers = append(n.handlers, handler)
n.middleware = build(n.handlers)
}
// UseFunc adds a Negroni-style handler function onto the middleware stack.
func (n *Negroni) UseFunc(handlerFunc func(rw http.ResponseWriter, r *http.Request, next http.HandlerFunc)) {
n.Use(HandlerFunc(handlerFunc))
}
// UseHandler adds a http.Handler onto the middleware stack. Handlers are invoked in the order they are added to a Negroni.
func (n *Negroni) UseHandler(handler http.Handler) {
n.Use(Wrap(handler))
}
// UseHandlerFunc adds a http.HandlerFunc-style handler function onto the middleware stack.
func (n *Negroni) UseHandlerFunc(handlerFunc func(rw http.ResponseWriter, r *http.Request)) {
n.UseHandler(http.HandlerFunc(handlerFunc))
}
// Run is a convenience function that runs the negroni stack as an HTTP
// server. The addr string, if provided, takes the same format as http.ListenAndServe.
// If no address is provided but the PORT environment variable is set, the PORT value is used.
// If neither is provided, DefaultAddress is used.
func (n *Negroni) Run(addr ...string) {
l := log.New(os.Stdout, "[negroni] ", 0)
finalAddr := detectAddress(addr...)
l.Printf("listening on %s", finalAddr)
l.Fatal(http.ListenAndServe(finalAddr, n))
}
func detectAddress(addr ...string) string {
if len(addr) > 0 {
return addr[0]
}
if port := os.Getenv("PORT"); port != "" {
return ":" + port
}
return DefaultAddress
}
// Handlers returns a list of all the handlers in the current Negroni middleware chain.
func (n *Negroni) Handlers() []Handler {
return n.handlers
}
func build(handlers []Handler) middleware {
var next middleware
if len(handlers) == 0 {
return voidMiddleware()
} else if len(handlers) > 1 {
next = build(handlers[1:])
} else {
next = voidMiddleware()
}
return middleware{handlers[0], &next}
}
func voidMiddleware() middleware {
return middleware{
HandlerFunc(func(rw http.ResponseWriter, r *http.Request, next http.HandlerFunc) {}),
&middleware{},
}
}
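
For reviewers skimming the vendored negroni code above, here is a minimal usage sketch of the API it exposes (New, Classic, UseFunc, UseHandler, Run). The import path github.com/urfave/negroni, the routes, and the listen address are illustrative assumptions, not code shipped in this PR.

```go
package main

import (
	"fmt"
	"net/http"

	"github.com/urfave/negroni" // assumed upstream module path for the vendored package
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello from negroni")
	})

	// Classic() preloads Recovery, Logger and Static (serving ./public).
	n := negroni.Classic()

	// A Negroni-style middleware must call next() to hand the request
	// down the chain; skipping next() short-circuits the stack.
	n.UseFunc(func(w http.ResponseWriter, r *http.Request, next http.HandlerFunc) {
		w.Header().Set("X-Example", "1")
		next(w, r)
	})

	// The application router goes last; Wrap/UseHandler call next()
	// automatically after the wrapped handler returns.
	n.UseHandler(mux)

	// Run falls back to $PORT and then DefaultAddress (":8080") when no
	// address is passed.
	n.Run(":3000")
}
```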

View File

@ -0,0 +1,194 @@
package negroni
import (
"fmt"
"log"
"net/http"
"os"
"runtime"
"runtime/debug"
"text/template"
)
const (
panicText = "PANIC: %s\n%s"
panicHTML = `<html>
<head><title>PANIC: {{.RecoveredPanic}}</title></head>
<style type="text/css">
html, body {
font-family: Helvetica, Arial, Sans;
color: #333333;
background-color: #ffffff;
margin: 0px;
}
h1 {
color: #ffffff;
background-color: #f14c4c;
padding: 20px;
border-bottom: 1px solid #2b3848;
}
.block {
margin: 2em;
}
.panic-interface {
}
.panic-stack-raw pre {
padding: 1em;
background: #f6f8fa;
border: dashed 1px;
}
.panic-interface-title {
font-weight: bold;
}
</style>
<body>
<h1>Negroni - PANIC</h1>
<div class="panic-interface block">
<h3>{{.RequestDescription}}</h3>
<span class="panic-interface-title">Runtime error:</span> <span class="panic-interface-element">{{.RecoveredPanic}}</span>
</div>
{{ if .Stack }}
<div class="panic-stack-raw block">
<h3>Runtime Stack</h3>
<pre>{{.StackAsString}}</pre>
</div>
{{ end }}
</body>
</html>`
nilRequestMessage = "Request is nil"
)
var panicHTMLTemplate = template.Must(template.New("PanicPage").Parse(panicHTML))
// PanicInformation contains all elements needed to print
// stack information for a recovered panic.
type PanicInformation struct {
RecoveredPanic interface{}
Stack []byte
Request *http.Request
}
// StackAsString returns a printable version of the stack
func (p *PanicInformation) StackAsString() string {
return string(p.Stack)
}
// RequestDescription returns a printable description of the request URL
func (p *PanicInformation) RequestDescription() string {
if p.Request == nil {
return nilRequestMessage
}
var queryOutput string
if p.Request.URL.RawQuery != "" {
queryOutput = "?" + p.Request.URL.RawQuery
}
return fmt.Sprintf("%s %s%s", p.Request.Method, p.Request.URL.Path, queryOutput)
}
// PanicFormatter is an interface that objects can implement
// to control how a recovered panic and its stack trace are written out.
type PanicFormatter interface {
// FormatPanicError writes the panic and, when available, the stack trace to the response.
// When the middleware should not output the stack trace,
// the `Stack` field of the passed `PanicInformation` instance equals `[]byte{}`.
FormatPanicError(rw http.ResponseWriter, r *http.Request, infos *PanicInformation)
}
// TextPanicFormatter outputs the panic and stack as plain text.
// If no `Content-Type` header is set, it defaults to
// `text/plain; charset=utf-8`; otherwise the original
// `Content-Type` is kept.
type TextPanicFormatter struct{}
func (t *TextPanicFormatter) FormatPanicError(rw http.ResponseWriter, r *http.Request, infos *PanicInformation) {
if rw.Header().Get("Content-Type") == "" {
rw.Header().Set("Content-Type", "text/plain; charset=utf-8")
}
fmt.Fprintf(rw, panicText, infos.RecoveredPanic, infos.Stack)
}
// HTMLPanicFormatter outputs the panic and stack inside
// an HTML page. This has been largely inspired by
// https://github.com/go-martini/martini/pull/156/commits.
type HTMLPanicFormatter struct{}
func (t *HTMLPanicFormatter) FormatPanicError(rw http.ResponseWriter, r *http.Request, infos *PanicInformation) {
if rw.Header().Get("Content-Type") == "" {
rw.Header().Set("Content-Type", "text/html; charset=utf-8")
}
panicHTMLTemplate.Execute(rw, infos)
}
// Recovery is a Negroni middleware that recovers from any panics and writes a 500 if there was one.
type Recovery struct {
Logger ALogger
PrintStack bool
PanicHandlerFunc func(*PanicInformation)
StackAll bool
StackSize int
Formatter PanicFormatter
// Deprecated: Use PanicHandlerFunc instead to receive panic
// error with additional information (see PanicInformation)
ErrorHandlerFunc func(interface{})
}
// NewRecovery returns a new instance of Recovery
func NewRecovery() *Recovery {
return &Recovery{
Logger: log.New(os.Stdout, "[negroni] ", 0),
PrintStack: true,
StackAll: false,
StackSize: 1024 * 8,
Formatter: &TextPanicFormatter{},
}
}
func (rec *Recovery) ServeHTTP(rw http.ResponseWriter, r *http.Request, next http.HandlerFunc) {
defer func() {
if err := recover(); err != nil {
rw.WriteHeader(http.StatusInternalServerError)
stack := make([]byte, rec.StackSize)
stack = stack[:runtime.Stack(stack, rec.StackAll)]
infos := &PanicInformation{RecoveredPanic: err, Request: r}
if rec.PrintStack {
infos.Stack = stack
}
rec.Logger.Printf(panicText, err, stack)
rec.Formatter.FormatPanicError(rw, r, infos)
if rec.ErrorHandlerFunc != nil {
func() {
defer func() {
if err := recover(); err != nil {
rec.Logger.Printf("provided ErrorHandlerFunc panic'd: %s, trace:\n%s", err, debug.Stack())
rec.Logger.Printf("%s\n", debug.Stack())
}
}()
rec.ErrorHandlerFunc(err)
}()
}
if rec.PanicHandlerFunc != nil {
func() {
defer func() {
if err := recover(); err != nil {
rec.Logger.Printf("provided PanicHandlerFunc panic'd: %s, trace:\n%s", err, debug.Stack())
rec.Logger.Printf("%s\n", debug.Stack())
}
}()
rec.PanicHandlerFunc(infos)
}()
}
}
}()
next(rw, r)
}
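
A short, hedged sketch of how the Recovery middleware above can be customized: swapping in HTMLPanicFormatter and attaching a PanicHandlerFunc. The logger destination, route, and listen address are assumptions for illustration.

```go
package main

import (
	"log"
	"net/http"

	"github.com/urfave/negroni" // assumed import path
)

func main() {
	rec := negroni.NewRecovery()
	// Render recovered panics as the built-in HTML page instead of plain text.
	rec.Formatter = &negroni.HTMLPanicFormatter{}
	// Report every recovered panic to a central place (here: the stdlib logger).
	rec.PanicHandlerFunc = func(info *negroni.PanicInformation) {
		log.Printf("panic on %s: %v", info.RequestDescription(), info.RecoveredPanic)
	}

	mux := http.NewServeMux()
	mux.HandleFunc("/boom", func(w http.ResponseWriter, r *http.Request) {
		panic("something broke")
	})

	n := negroni.New(rec)
	n.UseHandler(mux)
	n.Run(":3000")
}
```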

View File

@ -0,0 +1,113 @@
package negroni
import (
"bufio"
"fmt"
"net"
"net/http"
)
// ResponseWriter is a wrapper around http.ResponseWriter that provides extra information about
// the response. It is recommended that middleware handlers use this construct to wrap a ResponseWriter
// if the functionality calls for it.
type ResponseWriter interface {
http.ResponseWriter
http.Flusher
// Status returns the status code of the response or 0 if the response has
// not been written
Status() int
// Written returns whether or not the ResponseWriter has been written to.
Written() bool
// Size returns the size of the response body.
Size() int
// Before allows for a function to be called before the ResponseWriter has been written to. This is
// useful for setting headers or any other operations that must happen before a response has been written.
Before(func(ResponseWriter))
}
type beforeFunc func(ResponseWriter)
// NewResponseWriter creates a ResponseWriter that wraps an http.ResponseWriter
func NewResponseWriter(rw http.ResponseWriter) ResponseWriter {
nrw := &responseWriter{
ResponseWriter: rw,
}
if _, ok := rw.(http.CloseNotifier); ok {
return &responseWriterCloseNotifer{nrw}
}
return nrw
}
type responseWriter struct {
http.ResponseWriter
status int
size int
beforeFuncs []beforeFunc
}
func (rw *responseWriter) WriteHeader(s int) {
rw.status = s
rw.callBefore()
rw.ResponseWriter.WriteHeader(s)
}
func (rw *responseWriter) Write(b []byte) (int, error) {
if !rw.Written() {
// The status will be StatusOK if WriteHeader has not been called yet
rw.WriteHeader(http.StatusOK)
}
size, err := rw.ResponseWriter.Write(b)
rw.size += size
return size, err
}
func (rw *responseWriter) Status() int {
return rw.status
}
func (rw *responseWriter) Size() int {
return rw.size
}
func (rw *responseWriter) Written() bool {
return rw.status != 0
}
func (rw *responseWriter) Before(before func(ResponseWriter)) {
rw.beforeFuncs = append(rw.beforeFuncs, before)
}
func (rw *responseWriter) Hijack() (net.Conn, *bufio.ReadWriter, error) {
hijacker, ok := rw.ResponseWriter.(http.Hijacker)
if !ok {
return nil, nil, fmt.Errorf("the ResponseWriter doesn't support the Hijacker interface")
}
return hijacker.Hijack()
}
func (rw *responseWriter) callBefore() {
for i := len(rw.beforeFuncs) - 1; i >= 0; i-- {
rw.beforeFuncs[i](rw)
}
}
func (rw *responseWriter) Flush() {
flusher, ok := rw.ResponseWriter.(http.Flusher)
if ok {
if !rw.Written() {
// The status will be StatusOK if WriteHeader has not been called yet
rw.WriteHeader(http.StatusOK)
}
flusher.Flush()
}
}
type responseWriterCloseNotifer struct {
*responseWriter
}
func (rw *responseWriterCloseNotifer) CloseNotify() <-chan bool {
return rw.ResponseWriter.(http.CloseNotifier).CloseNotify()
}
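
The wrapped ResponseWriter above is what lets middleware observe the status code and body size after the rest of the chain has run. A minimal sketch, assuming the usual github.com/urfave/negroni import path and an illustrative route:

```go
package main

import (
	"log"
	"net/http"
	"time"

	"github.com/urfave/negroni" // assumed import path
)

// logStatus is a Negroni-style middleware that reads the status code and
// body size recorded by the wrapped negroni.ResponseWriter.
func logStatus(w http.ResponseWriter, r *http.Request, next http.HandlerFunc) {
	start := time.Now()
	next(w, r)

	// Negroni.ServeHTTP wraps the writer with NewResponseWriter before the
	// chain runs, so this assertion succeeds for requests served through it.
	if nw, ok := w.(negroni.ResponseWriter); ok {
		log.Printf("%s %s -> %d (%d bytes) in %s",
			r.Method, r.URL.Path, nw.Status(), nw.Size(), time.Since(start))
	}
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})

	n := negroni.New()
	n.UseFunc(logStatus)
	n.UseHandler(mux)
	n.Run(":3000")
}
```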

View File

@ -0,0 +1,17 @@
//go:build go1.8
// +build go1.8
package negroni
import (
"fmt"
"net/http"
)
func (rw *responseWriter) Push(target string, opts *http.PushOptions) error {
pusher, ok := rw.ResponseWriter.(http.Pusher)
if ok {
return pusher.Push(target, opts)
}
return fmt.Errorf("the ResponseWriter doesn't support the Pusher interface")
}

View File

@ -0,0 +1,88 @@
package negroni
import (
"net/http"
"path"
"strings"
)
// Static is a middleware handler that serves static files in the given
// directory/filesystem. If the file does not exist on the filesystem, it
// passes the request along to the next middleware in the chain. If you want
// "fileserver"-type behavior that returns a 404 for files that are not found,
// consider using http.FileServer from the Go stdlib.
type Static struct {
// Dir is the directory to serve static files from
Dir http.FileSystem
// Prefix is the optional prefix used to serve the static directory content
Prefix string
// IndexFile defines which file to serve as index if it exists.
IndexFile string
}
// NewStatic returns a new instance of Static
func NewStatic(directory http.FileSystem) *Static {
return &Static{
Dir: directory,
Prefix: "",
IndexFile: "index.html",
}
}
func (s *Static) ServeHTTP(rw http.ResponseWriter, r *http.Request, next http.HandlerFunc) {
if r.Method != "GET" && r.Method != "HEAD" {
next(rw, r)
return
}
file := r.URL.Path
// if we have a prefix, filter requests by stripping the prefix
if s.Prefix != "" {
if !strings.HasPrefix(file, s.Prefix) {
next(rw, r)
return
}
file = file[len(s.Prefix):]
if file != "" && file[0] != '/' {
next(rw, r)
return
}
}
f, err := s.Dir.Open(file)
if err != nil {
// file could not be opened; fall through to the next handler
next(rw, r)
return
}
defer f.Close()
fi, err := f.Stat()
if err != nil {
next(rw, r)
return
}
// try to serve index file
if fi.IsDir() {
// redirect if missing trailing slash
if !strings.HasSuffix(r.URL.Path, "/") {
http.Redirect(rw, r, r.URL.Path+"/", http.StatusFound)
return
}
file = path.Join(file, s.IndexFile)
f, err = s.Dir.Open(file)
if err != nil {
next(rw, r)
return
}
defer f.Close()
fi, err = f.Stat()
if err != nil || fi.IsDir() {
next(rw, r)
return
}
}
http.ServeContent(rw, r, file, fi.ModTime(), f)
}
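
A small sketch of the Static middleware above with a custom Prefix; the ./assets directory, the /static prefix, and the listen address are illustrative assumptions.

```go
package main

import (
	"net/http"

	"github.com/urfave/negroni" // assumed import path
)

func main() {
	// Serve files from ./assets under the /static prefix; requests that do
	// not match a file fall through to the mux below.
	static := negroni.NewStatic(http.Dir("assets"))
	static.Prefix = "/static"

	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("not a static file"))
	})

	n := negroni.New(static)
	n.UseHandler(mux)
	n.Run(":3000")
}
```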

177
examples/web/vendor/github.com/gomodule/redigo/LICENSE generated vendored Normal file
View File

@ -0,0 +1,177 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS

View File

@ -0,0 +1,55 @@
// Copyright 2014 Gary Burd
//
// Licensed under the Apache License, Version 2.0 (the "License"): you may
// not use this file except in compliance with the License. You may obtain
// a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
// License for the specific language governing permissions and limitations
// under the License.
package redis
import (
"strings"
)
const (
connectionWatchState = 1 << iota
connectionMultiState
connectionSubscribeState
connectionMonitorState
)
type commandInfo struct {
// Set or Clear these states on connection.
Set, Clear int
}
var commandInfos = map[string]commandInfo{
"WATCH": {Set: connectionWatchState},
"UNWATCH": {Clear: connectionWatchState},
"MULTI": {Set: connectionMultiState},
"EXEC": {Clear: connectionWatchState | connectionMultiState},
"DISCARD": {Clear: connectionWatchState | connectionMultiState},
"PSUBSCRIBE": {Set: connectionSubscribeState},
"SUBSCRIBE": {Set: connectionSubscribeState},
"MONITOR": {Set: connectionMonitorState},
}
func init() {
for n, ci := range commandInfos {
commandInfos[strings.ToLower(n)] = ci
}
}
func lookupCommandInfo(commandName string) commandInfo {
if ci, ok := commandInfos[commandName]; ok {
return ci
}
return commandInfos[strings.ToUpper(commandName)]
}

View File

@ -0,0 +1,865 @@
// Copyright 2012 Gary Burd
//
// Licensed under the Apache License, Version 2.0 (the "License"): you may
// not use this file except in compliance with the License. You may obtain
// a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
// License for the specific language governing permissions and limitations
// under the License.
package redis
import (
"bufio"
"bytes"
"context"
"crypto/tls"
"errors"
"fmt"
"io"
"net"
"net/url"
"regexp"
"strconv"
"sync"
"time"
)
var (
_ ConnWithTimeout = (*conn)(nil)
)
// conn is the low-level implementation of Conn
type conn struct {
// Shared
mu sync.Mutex
pending int
err error
conn net.Conn
// Read
readTimeout time.Duration
br *bufio.Reader
// Write
writeTimeout time.Duration
bw *bufio.Writer
// Scratch space for formatting argument length.
// '*' or '$', length, "\r\n"
lenScratch [32]byte
// Scratch space for formatting integers and floats.
numScratch [40]byte
}
// DialTimeout acts like Dial but takes timeouts for establishing the
// connection to the server, writing a command and reading a reply.
//
// Deprecated: Use Dial with options instead.
func DialTimeout(network, address string, connectTimeout, readTimeout, writeTimeout time.Duration) (Conn, error) {
return Dial(network, address,
DialConnectTimeout(connectTimeout),
DialReadTimeout(readTimeout),
DialWriteTimeout(writeTimeout))
}
// DialOption specifies an option for dialing a Redis server.
type DialOption struct {
f func(*dialOptions)
}
type dialOptions struct {
readTimeout time.Duration
writeTimeout time.Duration
tlsHandshakeTimeout time.Duration
dialer *net.Dialer
dialContext func(ctx context.Context, network, addr string) (net.Conn, error)
db int
username string
password string
clientName string
useTLS bool
skipVerify bool
tlsConfig *tls.Config
}
// DialTLSHandshakeTimeout specifies the maximum amount of time to
// wait for a TLS handshake to complete. Zero means no timeout.
// If no DialTLSHandshakeTimeout option is specified then the default is 10 seconds.
func DialTLSHandshakeTimeout(d time.Duration) DialOption {
return DialOption{func(do *dialOptions) {
do.tlsHandshakeTimeout = d
}}
}
// DialReadTimeout specifies the timeout for reading a single command reply.
func DialReadTimeout(d time.Duration) DialOption {
return DialOption{func(do *dialOptions) {
do.readTimeout = d
}}
}
// DialWriteTimeout specifies the timeout for writing a single command.
func DialWriteTimeout(d time.Duration) DialOption {
return DialOption{func(do *dialOptions) {
do.writeTimeout = d
}}
}
// DialConnectTimeout specifies the timeout for connecting to the Redis server when
// no DialNetDial option is specified.
// If no DialConnectTimeout option is specified then the default is 30 seconds.
func DialConnectTimeout(d time.Duration) DialOption {
return DialOption{func(do *dialOptions) {
do.dialer.Timeout = d
}}
}
// DialKeepAlive specifies the keep-alive period for TCP connections to the Redis server
// when no DialNetDial option is specified.
// If zero, keep-alives are not enabled. If no DialKeepAlive option is specified then
// the default of 5 minutes is used to ensure that half-closed TCP sessions are detected.
func DialKeepAlive(d time.Duration) DialOption {
return DialOption{func(do *dialOptions) {
do.dialer.KeepAlive = d
}}
}
// DialNetDial specifies a custom dial function for creating TCP
// connections, otherwise a net.Dialer customized via the other options is used.
// DialNetDial overrides DialConnectTimeout and DialKeepAlive.
func DialNetDial(dial func(network, addr string) (net.Conn, error)) DialOption {
return DialOption{func(do *dialOptions) {
do.dialContext = func(ctx context.Context, network, addr string) (net.Conn, error) {
return dial(network, addr)
}
}}
}
// DialContextFunc specifies a custom dial function with context for creating TCP
// connections, otherwise a net.Dialer customized via the other options is used.
// DialContextFunc overrides DialConnectTimeout and DialKeepAlive.
func DialContextFunc(f func(ctx context.Context, network, addr string) (net.Conn, error)) DialOption {
return DialOption{func(do *dialOptions) {
do.dialContext = f
}}
}
// DialDatabase specifies the database to select when dialing a connection.
func DialDatabase(db int) DialOption {
return DialOption{func(do *dialOptions) {
do.db = db
}}
}
// DialPassword specifies the password to use when connecting to
// the Redis server.
func DialPassword(password string) DialOption {
return DialOption{func(do *dialOptions) {
do.password = password
}}
}
// DialUsername specifies the username to use when connecting to
// the Redis server when Redis ACLs are used.
// A DialPassword must also be passed, otherwise this option will have no effect.
func DialUsername(username string) DialOption {
return DialOption{func(do *dialOptions) {
do.username = username
}}
}
// DialClientName specifies a client name to be used
// by the Redis server connection.
func DialClientName(name string) DialOption {
return DialOption{func(do *dialOptions) {
do.clientName = name
}}
}
// DialTLSConfig specifies the config to use when a TLS connection is dialed.
// Has no effect when not dialing a TLS connection.
func DialTLSConfig(c *tls.Config) DialOption {
return DialOption{func(do *dialOptions) {
do.tlsConfig = c
}}
}
// DialTLSSkipVerify disables server name verification when connecting over
// TLS. Has no effect when not dialing a TLS connection.
func DialTLSSkipVerify(skip bool) DialOption {
return DialOption{func(do *dialOptions) {
do.skipVerify = skip
}}
}
// DialUseTLS specifies whether TLS should be used when connecting to the
// server. This option is ignored by DialURL.
func DialUseTLS(useTLS bool) DialOption {
return DialOption{func(do *dialOptions) {
do.useTLS = useTLS
}}
}
// Dial connects to the Redis server at the given network and
// address using the specified options.
func Dial(network, address string, options ...DialOption) (Conn, error) {
return DialContext(context.Background(), network, address, options...)
}
type tlsHandshakeTimeoutError struct{}
func (tlsHandshakeTimeoutError) Timeout() bool { return true }
func (tlsHandshakeTimeoutError) Temporary() bool { return true }
func (tlsHandshakeTimeoutError) Error() string { return "TLS handshake timeout" }
// DialContext connects to the Redis server at the given network and
// address using the specified options and context.
func DialContext(ctx context.Context, network, address string, options ...DialOption) (Conn, error) {
do := dialOptions{
dialer: &net.Dialer{
Timeout: time.Second * 30,
KeepAlive: time.Minute * 5,
},
tlsHandshakeTimeout: time.Second * 10,
}
for _, option := range options {
option.f(&do)
}
if do.dialContext == nil {
do.dialContext = do.dialer.DialContext
}
netConn, err := do.dialContext(ctx, network, address)
if err != nil {
return nil, err
}
if do.useTLS {
var tlsConfig *tls.Config
if do.tlsConfig == nil {
tlsConfig = &tls.Config{InsecureSkipVerify: do.skipVerify}
} else {
tlsConfig = do.tlsConfig.Clone()
}
if tlsConfig.ServerName == "" {
host, _, err := net.SplitHostPort(address)
if err != nil {
netConn.Close()
return nil, err
}
tlsConfig.ServerName = host
}
tlsConn := tls.Client(netConn, tlsConfig)
errc := make(chan error, 2) // buffered so we don't block timeout or Handshake
if d := do.tlsHandshakeTimeout; d != 0 {
timer := time.AfterFunc(d, func() {
errc <- tlsHandshakeTimeoutError{}
})
defer timer.Stop()
}
go func() {
errc <- tlsConn.Handshake()
}()
if err := <-errc; err != nil {
// Timeout or Handshake error.
netConn.Close() // nolint: errcheck
return nil, err
}
netConn = tlsConn
}
c := &conn{
conn: netConn,
bw: bufio.NewWriter(netConn),
br: bufio.NewReader(netConn),
readTimeout: do.readTimeout,
writeTimeout: do.writeTimeout,
}
if do.password != "" {
authArgs := make([]interface{}, 0, 2)
if do.username != "" {
authArgs = append(authArgs, do.username)
}
authArgs = append(authArgs, do.password)
if _, err := c.DoContext(ctx, "AUTH", authArgs...); err != nil {
netConn.Close()
return nil, err
}
}
if do.clientName != "" {
if _, err := c.DoContext(ctx, "CLIENT", "SETNAME", do.clientName); err != nil {
netConn.Close()
return nil, err
}
}
if do.db != 0 {
if _, err := c.DoContext(ctx, "SELECT", do.db); err != nil {
netConn.Close()
return nil, err
}
}
return c, nil
}
var pathDBRegexp = regexp.MustCompile(`/(\d*)\z`)
// DialURL wraps DialURLContext using context.Background.
func DialURL(rawurl string, options ...DialOption) (Conn, error) {
ctx := context.Background()
return DialURLContext(ctx, rawurl, options...)
}
// DialURLContext connects to a Redis server at the given URL using the Redis
// URI scheme. URLs should follow the draft IANA specification for the
// scheme (https://www.iana.org/assignments/uri-schemes/prov/redis).
func DialURLContext(ctx context.Context, rawurl string, options ...DialOption) (Conn, error) {
u, err := url.Parse(rawurl)
if err != nil {
return nil, err
}
if u.Scheme != "redis" && u.Scheme != "rediss" {
return nil, fmt.Errorf("invalid redis URL scheme: %s", u.Scheme)
}
if u.Opaque != "" {
return nil, fmt.Errorf("invalid redis URL, url is opaque: %s", rawurl)
}
// As per the IANA draft spec, the host defaults to localhost and
// the port defaults to 6379.
host, port, err := net.SplitHostPort(u.Host)
if err != nil {
// assume port is missing
host = u.Host
port = "6379"
}
if host == "" {
host = "localhost"
}
address := net.JoinHostPort(host, port)
if u.User != nil {
password, isSet := u.User.Password()
username := u.User.Username()
if isSet {
if username != "" {
// ACL
options = append(options, DialUsername(username), DialPassword(password))
} else {
// requirepass - user-info username:password with blank username
options = append(options, DialPassword(password))
}
} else if username != "" {
// requirepass - for redis-cli compatibility, a single arg in user-info is treated as the password
options = append(options, DialPassword(username))
}
}
match := pathDBRegexp.FindStringSubmatch(u.Path)
if len(match) == 2 {
db := 0
if len(match[1]) > 0 {
db, err = strconv.Atoi(match[1])
if err != nil {
return nil, fmt.Errorf("invalid database: %s", u.Path[1:])
}
}
if db != 0 {
options = append(options, DialDatabase(db))
}
} else if u.Path != "" {
return nil, fmt.Errorf("invalid database: %s", u.Path[1:])
}
options = append(options, DialUseTLS(u.Scheme == "rediss"))
return DialContext(ctx, "tcp", address, options...)
}
// NewConn returns a new Redigo connection for the given net connection.
func NewConn(netConn net.Conn, readTimeout, writeTimeout time.Duration) Conn {
return &conn{
conn: netConn,
bw: bufio.NewWriter(netConn),
br: bufio.NewReader(netConn),
readTimeout: readTimeout,
writeTimeout: writeTimeout,
}
}
func (c *conn) Close() error {
c.mu.Lock()
err := c.err
if c.err == nil {
c.err = errors.New("redigo: closed")
err = c.conn.Close()
}
c.mu.Unlock()
return err
}
func (c *conn) fatal(err error) error {
c.mu.Lock()
if c.err == nil {
c.err = err
// Close connection to force errors on subsequent calls and to unblock
// other reader or writer.
c.conn.Close()
}
c.mu.Unlock()
return err
}
func (c *conn) Err() error {
c.mu.Lock()
err := c.err
c.mu.Unlock()
return err
}
func (c *conn) writeLen(prefix byte, n int) error {
c.lenScratch[len(c.lenScratch)-1] = '\n'
c.lenScratch[len(c.lenScratch)-2] = '\r'
i := len(c.lenScratch) - 3
for {
c.lenScratch[i] = byte('0' + n%10)
i -= 1
n = n / 10
if n == 0 {
break
}
}
c.lenScratch[i] = prefix
_, err := c.bw.Write(c.lenScratch[i:])
return err
}
func (c *conn) writeString(s string) error {
if err := c.writeLen('$', len(s)); err != nil {
return err
}
if _, err := c.bw.WriteString(s); err != nil {
return err
}
_, err := c.bw.WriteString("\r\n")
return err
}
func (c *conn) writeBytes(p []byte) error {
if err := c.writeLen('$', len(p)); err != nil {
return err
}
if _, err := c.bw.Write(p); err != nil {
return err
}
_, err := c.bw.WriteString("\r\n")
return err
}
func (c *conn) writeInt64(n int64) error {
return c.writeBytes(strconv.AppendInt(c.numScratch[:0], n, 10))
}
func (c *conn) writeFloat64(n float64) error {
return c.writeBytes(strconv.AppendFloat(c.numScratch[:0], n, 'g', -1, 64))
}
func (c *conn) writeCommand(cmd string, args []interface{}) error {
if err := c.writeLen('*', 1+len(args)); err != nil {
return err
}
if err := c.writeString(cmd); err != nil {
return err
}
for _, arg := range args {
if err := c.writeArg(arg, true); err != nil {
return err
}
}
return nil
}
func (c *conn) writeArg(arg interface{}, argumentTypeOK bool) (err error) {
switch arg := arg.(type) {
case string:
return c.writeString(arg)
case []byte:
return c.writeBytes(arg)
case int:
return c.writeInt64(int64(arg))
case int64:
return c.writeInt64(arg)
case float64:
return c.writeFloat64(arg)
case bool:
if arg {
return c.writeString("1")
} else {
return c.writeString("0")
}
case nil:
return c.writeString("")
case Argument:
if argumentTypeOK {
return c.writeArg(arg.RedisArg(), false)
}
// See comment in default clause below.
var buf bytes.Buffer
fmt.Fprint(&buf, arg)
return c.writeBytes(buf.Bytes())
default:
// This default clause is intended to handle builtin numeric types.
// The function should return an error for other types, but this is not
// done for compatibility with previous versions of the package.
var buf bytes.Buffer
fmt.Fprint(&buf, arg)
return c.writeBytes(buf.Bytes())
}
}
type protocolError string
func (pe protocolError) Error() string {
return fmt.Sprintf("redigo: %s (possible server error or unsupported concurrent read by application)", string(pe))
}
// readLine reads a line of input from the RESP stream.
func (c *conn) readLine() ([]byte, error) {
// To avoid allocations, attempt to read the line using ReadSlice. This
// call typically succeeds. The known case where the call fails is when
// reading the output from the MONITOR command.
p, err := c.br.ReadSlice('\n')
if err == bufio.ErrBufferFull {
// The line does not fit in the bufio.Reader's buffer. Fall back to
// allocating a buffer for the line.
buf := append([]byte{}, p...)
for err == bufio.ErrBufferFull {
p, err = c.br.ReadSlice('\n')
buf = append(buf, p...)
}
p = buf
}
if err != nil {
return nil, err
}
i := len(p) - 2
if i < 0 || p[i] != '\r' {
return nil, protocolError("bad response line terminator")
}
return p[:i], nil
}
// parseLen parses bulk string and array lengths.
func parseLen(p []byte) (int, error) {
if len(p) == 0 {
return -1, protocolError("malformed length")
}
if p[0] == '-' && len(p) == 2 && p[1] == '1' {
// handle $-1 and *-1 null replies.
return -1, nil
}
var n int
for _, b := range p {
n *= 10
if b < '0' || b > '9' {
return -1, protocolError("illegal bytes in length")
}
n += int(b - '0')
}
return n, nil
}
// parseInt parses an integer reply.
func parseInt(p []byte) (interface{}, error) {
if len(p) == 0 {
return 0, protocolError("malformed integer")
}
var negate bool
if p[0] == '-' {
negate = true
p = p[1:]
if len(p) == 0 {
return 0, protocolError("malformed integer")
}
}
var n int64
for _, b := range p {
n *= 10
if b < '0' || b > '9' {
return 0, protocolError("illegal bytes in length")
}
n += int64(b - '0')
}
if negate {
n = -n
}
return n, nil
}
var (
okReply interface{} = "OK"
pongReply interface{} = "PONG"
)
func (c *conn) readReply() (interface{}, error) {
line, err := c.readLine()
if err != nil {
return nil, err
}
if len(line) == 0 {
return nil, protocolError("short response line")
}
switch line[0] {
case '+':
switch string(line[1:]) {
case "OK":
// Avoid allocation for frequent "+OK" response.
return okReply, nil
case "PONG":
// Avoid allocation in PING command benchmarks :)
return pongReply, nil
default:
return string(line[1:]), nil
}
case '-':
return Error(line[1:]), nil
case ':':
return parseInt(line[1:])
case '$':
n, err := parseLen(line[1:])
if n < 0 || err != nil {
return nil, err
}
p := make([]byte, n)
_, err = io.ReadFull(c.br, p)
if err != nil {
return nil, err
}
if line, err := c.readLine(); err != nil {
return nil, err
} else if len(line) != 0 {
return nil, protocolError("bad bulk string format")
}
return p, nil
case '*':
n, err := parseLen(line[1:])
if n < 0 || err != nil {
return nil, err
}
r := make([]interface{}, n)
for i := range r {
r[i], err = c.readReply()
if err != nil {
return nil, err
}
}
return r, nil
}
return nil, protocolError("unexpected response line")
}
func (c *conn) Send(cmd string, args ...interface{}) error {
c.mu.Lock()
c.pending += 1
c.mu.Unlock()
if c.writeTimeout != 0 {
if err := c.conn.SetWriteDeadline(time.Now().Add(c.writeTimeout)); err != nil {
return c.fatal(err)
}
}
if err := c.writeCommand(cmd, args); err != nil {
return c.fatal(err)
}
return nil
}
func (c *conn) Flush() error {
if c.writeTimeout != 0 {
if err := c.conn.SetWriteDeadline(time.Now().Add(c.writeTimeout)); err != nil {
return c.fatal(err)
}
}
if err := c.bw.Flush(); err != nil {
return c.fatal(err)
}
return nil
}
func (c *conn) Receive() (interface{}, error) {
return c.ReceiveWithTimeout(c.readTimeout)
}
func (c *conn) ReceiveContext(ctx context.Context) (interface{}, error) {
var realTimeout time.Duration
if dl, ok := ctx.Deadline(); ok {
timeout := time.Until(dl)
if timeout >= c.readTimeout && c.readTimeout != 0 {
realTimeout = c.readTimeout
} else if timeout <= 0 {
return nil, c.fatal(context.DeadlineExceeded)
} else {
realTimeout = timeout
}
} else {
realTimeout = c.readTimeout
}
endch := make(chan struct{})
var r interface{}
var e error
go func() {
defer close(endch)
r, e = c.ReceiveWithTimeout(realTimeout)
}()
select {
case <-ctx.Done():
return nil, c.fatal(ctx.Err())
case <-endch:
return r, e
}
}
func (c *conn) ReceiveWithTimeout(timeout time.Duration) (reply interface{}, err error) {
var deadline time.Time
if timeout != 0 {
deadline = time.Now().Add(timeout)
}
if err := c.conn.SetReadDeadline(deadline); err != nil {
return nil, c.fatal(err)
}
if reply, err = c.readReply(); err != nil {
return nil, c.fatal(err)
}
// When using pub/sub, the number of receives can be greater than the
// number of sends. To enable normal use of the connection after
// unsubscribing from all channels, we do not decrement pending to a
// negative value.
//
// The pending field is decremented after the reply is read to handle the
// case where Receive is called before Send.
c.mu.Lock()
if c.pending > 0 {
c.pending -= 1
}
c.mu.Unlock()
if err, ok := reply.(Error); ok {
return nil, err
}
return
}
func (c *conn) Do(cmd string, args ...interface{}) (interface{}, error) {
return c.DoWithTimeout(c.readTimeout, cmd, args...)
}
func (c *conn) DoContext(ctx context.Context, cmd string, args ...interface{}) (interface{}, error) {
var realTimeout time.Duration
if dl, ok := ctx.Deadline(); ok {
timeout := time.Until(dl)
if timeout >= c.readTimeout && c.readTimeout != 0 {
realTimeout = c.readTimeout
} else if timeout <= 0 {
return nil, c.fatal(context.DeadlineExceeded)
} else {
realTimeout = timeout
}
} else {
realTimeout = c.readTimeout
}
endch := make(chan struct{})
var r interface{}
var e error
go func() {
defer close(endch)
r, e = c.DoWithTimeout(realTimeout, cmd, args...)
}()
select {
case <-ctx.Done():
return nil, c.fatal(ctx.Err())
case <-endch:
return r, e
}
}
func (c *conn) DoWithTimeout(readTimeout time.Duration, cmd string, args ...interface{}) (interface{}, error) {
c.mu.Lock()
pending := c.pending
c.pending = 0
c.mu.Unlock()
if cmd == "" && pending == 0 {
return nil, nil
}
if c.writeTimeout != 0 {
if err := c.conn.SetWriteDeadline(time.Now().Add(c.writeTimeout)); err != nil {
return nil, c.fatal(err)
}
}
if cmd != "" {
if err := c.writeCommand(cmd, args); err != nil {
return nil, c.fatal(err)
}
}
if err := c.bw.Flush(); err != nil {
return nil, c.fatal(err)
}
var deadline time.Time
if readTimeout != 0 {
deadline = time.Now().Add(readTimeout)
}
if err := c.conn.SetReadDeadline(deadline); err != nil {
return nil, c.fatal(err)
}
if cmd == "" {
reply := make([]interface{}, pending)
for i := range reply {
r, e := c.readReply()
if e != nil {
return nil, c.fatal(e)
}
reply[i] = r
}
return reply, nil
}
var err error
var reply interface{}
for i := 0; i <= pending; i++ {
var e error
if reply, e = c.readReply(); e != nil {
return nil, c.fatal(e)
}
if e, ok := reply.(Error); ok && err == nil {
err = e
}
}
return reply, err
}
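
To make the dial options above concrete, here is a minimal sketch of opening a connection with explicit timeouts and a database selection; the address, timeout values, and database number are assumptions for illustration.

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/gomodule/redigo/redis"
)

func main() {
	// Dial with explicit options; redis.DialURL("redis://:password@localhost:6379/1")
	// would be the equivalent URL form handled by DialURLContext above.
	c, err := redis.Dial("tcp", "localhost:6379",
		redis.DialConnectTimeout(5*time.Second),
		redis.DialReadTimeout(2*time.Second),
		redis.DialWriteTimeout(2*time.Second),
		redis.DialDatabase(1),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close()

	// String is one of the reply helpers mentioned in the package docs.
	reply, err := redis.String(c.Do("PING"))
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(reply) // PONG
}
```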

View File

@ -0,0 +1,177 @@
// Copyright 2012 Gary Burd
//
// Licensed under the Apache License, Version 2.0 (the "License"): you may
// not use this file except in compliance with the License. You may obtain
// a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
// License for the specific language governing permissions and limitations
// under the License.
// Package redis is a client for the Redis database.
//
// The Redigo FAQ (https://github.com/gomodule/redigo/wiki/FAQ) contains more
// documentation about this package.
//
// # Connections
//
// The Conn interface is the primary interface for working with Redis.
// Applications create connections by calling the Dial, DialWithTimeout or
// NewConn functions. In the future, functions will be added for creating
// sharded and other types of connections.
//
// The application must call the connection Close method when the application
// is done with the connection.
//
// # Executing Commands
//
// The Conn interface has a generic method for executing Redis commands:
//
// Do(commandName string, args ...interface{}) (reply interface{}, err error)
//
// The Redis command reference (http://redis.io/commands) lists the available
// commands. An example of using the Redis APPEND command is:
//
// n, err := conn.Do("APPEND", "key", "value")
//
// The Do method converts command arguments to bulk strings for transmission
// to the server as follows:
//
// Go Type Conversion
// []byte Sent as is
// string Sent as is
// int, int64 strconv.FormatInt(v)
// float64 strconv.FormatFloat(v, 'g', -1, 64)
// bool true -> "1", false -> "0"
// nil ""
// all other types fmt.Fprint(w, v)
//
// Redis command reply types are represented using the following Go types:
//
// Redis type Go type
// error redis.Error
// integer int64
// simple string string
// bulk string []byte or nil if value not present.
// array []interface{} or nil if value not present.
//
// Use type assertions or the reply helper functions to convert from
// interface{} to the specific Go type for the command result.
//
// # Pipelining
//
// Connections support pipelining using the Send, Flush and Receive methods.
//
// Send(commandName string, args ...interface{}) error
// Flush() error
// Receive() (reply interface{}, err error)
//
// Send writes the command to the connection's output buffer. Flush flushes the
// connection's output buffer to the server. Receive reads a single reply from
// the server. The following example shows a simple pipeline.
//
// c.Send("SET", "foo", "bar")
// c.Send("GET", "foo")
// c.Flush()
// c.Receive() // reply from SET
// v, err = c.Receive() // reply from GET
//
// The Do method combines the functionality of the Send, Flush and Receive
// methods. The Do method starts by writing the command and flushing the output
// buffer. Next, the Do method receives all pending replies including the reply
// for the command just sent by Do. If any of the received replies is an error,
// then Do returns the error. If there are no errors, then Do returns the last
// reply. If the command argument to the Do method is "", then the Do method
// will flush the output buffer and receive pending replies without sending a
// command.
//
// Use the Send and Do methods to implement pipelined transactions.
//
// c.Send("MULTI")
// c.Send("INCR", "foo")
// c.Send("INCR", "bar")
// r, err := c.Do("EXEC")
// fmt.Println(r) // prints [1, 1]
//
// # Concurrency
//
// Connections support one concurrent caller to the Receive method and one
// concurrent caller to the Send and Flush methods. No other concurrency is
// supported including concurrent calls to the Do and Close methods.
//
// For full concurrent access to Redis, use the thread-safe Pool to get, use
// and release a connection from within a goroutine. Connections returned from
// a Pool have the concurrency restrictions described in the previous
// paragraph.
//
// # Publish and Subscribe
//
// Use the Send, Flush and Receive methods to implement Pub/Sub subscribers.
//
// c.Send("SUBSCRIBE", "example")
// c.Flush()
// for {
// reply, err := c.Receive()
// if err != nil {
// return err
// }
// // process pushed message
// }
//
// The PubSubConn type wraps a Conn with convenience methods for implementing
// subscribers. The Subscribe, PSubscribe, Unsubscribe and PUnsubscribe methods
// send and flush a subscription management command. The receive method
// converts a pushed message to convenient types for use in a type switch.
//
// psc := redis.PubSubConn{Conn: c}
// psc.Subscribe("example")
// for {
// switch v := psc.Receive().(type) {
// case redis.Message:
// fmt.Printf("%s: message: %s\n", v.Channel, v.Data)
// case redis.Subscription:
// fmt.Printf("%s: %s %d\n", v.Channel, v.Kind, v.Count)
// case error:
// return v
// }
// }
//
// # Reply Helpers
//
// The Bool, Int, Bytes, String, Strings and Values functions convert a reply
// to a value of a specific type. To allow convenient wrapping of calls to the
// connection Do and Receive methods, the functions take a second argument of
// type error. If the error is non-nil, then the helper function returns the
// error. If the error is nil, the function converts the reply to the specified
// type:
//
// exists, err := redis.Bool(c.Do("EXISTS", "foo"))
// if err != nil {
// // handle error return from c.Do or type conversion error.
// }
//
// The Scan function converts elements of an array reply to Go types:
//
// var value1 int
// var value2 string
// reply, err := redis.Values(c.Do("MGET", "key1", "key2"))
// if err != nil {
// // handle error
// }
// if _, err := redis.Scan(reply, &value1, &value2); err != nil {
// // handle error
// }
//
// # Errors
//
// Connection methods return error replies from the server as type redis.Error.
//
// Call the connection Err() method to determine if the connection encountered a
// non-recoverable error such as a network error or protocol parsing error. If
// Err() returns a non-nil value, then the connection is not usable and should
// be closed.
package redis
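
The package documentation above shows pipelining and the reply helpers as fragments; the sketch below stitches them into one runnable program. The server address and key names are assumptions for illustration.

```go
package main

import (
	"fmt"
	"log"

	"github.com/gomodule/redigo/redis"
)

func main() {
	c, err := redis.Dial("tcp", "localhost:6379")
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close()

	// Pipelined MULTI/EXEC transaction as described above: Send buffers the
	// commands, Do("EXEC") flushes the buffer and returns the queued replies.
	c.Send("MULTI")
	c.Send("INCR", "foo")
	c.Send("INCR", "bar")
	values, err := redis.Values(c.Do("EXEC"))
	if err != nil {
		log.Fatal(err)
	}

	// Scan converts the array reply into Go values.
	var foo, bar int
	if _, err := redis.Scan(values, &foo, &bar); err != nil {
		log.Fatal(err)
	}
	fmt.Println(foo, bar) // e.g. 1 1 on a fresh database
}
```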

View File

@ -0,0 +1,159 @@
// Copyright 2012 Gary Burd
//
// Licensed under the Apache License, Version 2.0 (the "License"): you may
// not use this file except in compliance with the License. You may obtain
// a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
// License for the specific language governing permissions and limitations
// under the License.
package redis
import (
"bytes"
"context"
"fmt"
"log"
"time"
)
var (
_ ConnWithTimeout = (*loggingConn)(nil)
)
// NewLoggingConn returns a logging wrapper around a connection.
func NewLoggingConn(conn Conn, logger *log.Logger, prefix string) Conn {
if prefix != "" {
prefix = prefix + "."
}
return &loggingConn{conn, logger, prefix, nil}
}
// NewLoggingConnFilter returns a logging wrapper around a connection; commands for which skip returns true are not logged.
func NewLoggingConnFilter(conn Conn, logger *log.Logger, prefix string, skip func(cmdName string) bool) Conn {
if prefix != "" {
prefix = prefix + "."
}
return &loggingConn{conn, logger, prefix, skip}
}
type loggingConn struct {
Conn
logger *log.Logger
prefix string
skip func(cmdName string) bool
}
func (c *loggingConn) Close() error {
err := c.Conn.Close()
var buf bytes.Buffer
fmt.Fprintf(&buf, "%sClose() -> (%v)", c.prefix, err)
c.logger.Output(2, buf.String()) // nolint: errcheck
return err
}
func (c *loggingConn) printValue(buf *bytes.Buffer, v interface{}) {
const chop = 32
switch v := v.(type) {
case []byte:
if len(v) > chop {
fmt.Fprintf(buf, "%q...", v[:chop])
} else {
fmt.Fprintf(buf, "%q", v)
}
case string:
if len(v) > chop {
fmt.Fprintf(buf, "%q...", v[:chop])
} else {
fmt.Fprintf(buf, "%q", v)
}
case []interface{}:
if len(v) == 0 {
buf.WriteString("[]")
} else {
sep := "["
fin := "]"
if len(v) > chop {
v = v[:chop]
fin = "...]"
}
for _, vv := range v {
buf.WriteString(sep)
c.printValue(buf, vv)
sep = ", "
}
buf.WriteString(fin)
}
default:
fmt.Fprint(buf, v)
}
}
func (c *loggingConn) print(method, commandName string, args []interface{}, reply interface{}, err error) {
if c.skip != nil && c.skip(commandName) {
return
}
var buf bytes.Buffer
fmt.Fprintf(&buf, "%s%s(", c.prefix, method)
if method != "Receive" {
buf.WriteString(commandName)
for _, arg := range args {
buf.WriteString(", ")
c.printValue(&buf, arg)
}
}
buf.WriteString(") -> (")
if method != "Send" {
c.printValue(&buf, reply)
buf.WriteString(", ")
}
fmt.Fprintf(&buf, "%v)", err)
c.logger.Output(3, buf.String()) // nolint: errcheck
}
func (c *loggingConn) Do(commandName string, args ...interface{}) (interface{}, error) {
reply, err := c.Conn.Do(commandName, args...)
c.print("Do", commandName, args, reply, err)
return reply, err
}
func (c *loggingConn) DoContext(ctx context.Context, commandName string, args ...interface{}) (interface{}, error) {
reply, err := DoContext(c.Conn, ctx, commandName, args...)
c.print("DoContext", commandName, args, reply, err)
return reply, err
}
func (c *loggingConn) DoWithTimeout(timeout time.Duration, commandName string, args ...interface{}) (interface{}, error) {
reply, err := DoWithTimeout(c.Conn, timeout, commandName, args...)
c.print("DoWithTimeout", commandName, args, reply, err)
return reply, err
}
func (c *loggingConn) Send(commandName string, args ...interface{}) error {
err := c.Conn.Send(commandName, args...)
c.print("Send", commandName, args, nil, err)
return err
}
func (c *loggingConn) Receive() (interface{}, error) {
reply, err := c.Conn.Receive()
c.print("Receive", "", nil, reply, err)
return reply, err
}
func (c *loggingConn) ReceiveContext(ctx context.Context) (interface{}, error) {
reply, err := ReceiveContext(c.Conn, ctx)
c.print("ReceiveContext", "", nil, reply, err)
return reply, err
}
func (c *loggingConn) ReceiveWithTimeout(timeout time.Duration) (interface{}, error) {
reply, err := ReceiveWithTimeout(c.Conn, timeout)
c.print("ReceiveWithTimeout", "", nil, reply, err)
return reply, err
}
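
A brief sketch of wrapping a connection with the logging decorator above; the logger configuration, server address, and the command issued are illustrative assumptions.

```go
package main

import (
	"log"
	"os"

	"github.com/gomodule/redigo/redis"
)

func main() {
	base, err := redis.Dial("tcp", "localhost:6379")
	if err != nil {
		log.Fatal(err)
	}
	defer base.Close()

	// Every Do/Send/Receive on c is now traced to stderr with the "redis." prefix.
	c := redis.NewLoggingConn(base, log.New(os.Stderr, "", log.LstdFlags), "redis")

	if _, err := c.Do("SET", "greeting", "hello"); err != nil {
		log.Fatal(err)
	}
}
```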

View File

@ -0,0 +1,665 @@
// Copyright 2012 Gary Burd
//
// Licensed under the Apache License, Version 2.0 (the "License"): you may
// not use this file except in compliance with the License. You may obtain
// a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
// License for the specific language governing permissions and limitations
// under the License.
package redis
import (
"bytes"
"context"
"crypto/rand"
"crypto/sha1"
"errors"
"io"
"strconv"
"sync"
"time"
)
var (
_ ConnWithTimeout = (*activeConn)(nil)
_ ConnWithTimeout = (*errorConn)(nil)
)
var nowFunc = time.Now // for testing
// ErrPoolExhausted is returned from a pool connection method (Do, Send,
// Receive, Flush, Err) when the maximum number of database connections in the
// pool has been reached.
var ErrPoolExhausted = errors.New("redigo: connection pool exhausted")
var (
errConnClosed = errors.New("redigo: connection closed")
)
// Pool maintains a pool of connections. The application calls the Get method
// to get a connection from the pool and the connection's Close method to
// return the connection's resources to the pool.
//
// The following example shows how to use a pool in a web application. The
// application creates a pool at application startup and makes it available to
// request handlers using a package level variable. The pool configuration used
// here is an example, not a recommendation.
//
// func newPool(addr string) *redis.Pool {
// return &redis.Pool{
// MaxIdle: 3,
// IdleTimeout: 240 * time.Second,
// // Dial or DialContext must be set. When both are set, DialContext takes precedence over Dial.
// Dial: func () (redis.Conn, error) { return redis.Dial("tcp", addr) },
// }
// }
//
// var (
// pool *redis.Pool
// redisServer = flag.String("redisServer", ":6379", "")
// )
//
// func main() {
// flag.Parse()
// pool = newPool(*redisServer)
// ...
// }
//
// A request handler gets a connection from the pool and closes the connection
// when the handler is done:
//
// func serveHome(w http.ResponseWriter, r *http.Request) {
// conn := pool.Get()
// defer conn.Close()
// ...
// }
//
// Use the Dial function to authenticate connections with the AUTH command or
// select a database with the SELECT command:
//
// pool := &redis.Pool{
// // Other pool configuration not shown in this example.
// Dial: func () (redis.Conn, error) {
// c, err := redis.Dial("tcp", server)
// if err != nil {
// return nil, err
// }
// if _, err := c.Do("AUTH", password); err != nil {
// c.Close()
// return nil, err
// }
// if _, err := c.Do("SELECT", db); err != nil {
// c.Close()
// return nil, err
// }
// return c, nil
// },
// }
//
// Use the TestOnBorrow function to check the health of an idle connection
// before the connection is returned to the application. This example PINGs
// connections that have been idle more than a minute:
//
// pool := &redis.Pool{
// // Other pool configuration not shown in this example.
// TestOnBorrow: func(c redis.Conn, t time.Time) error {
// if time.Since(t) < time.Minute {
// return nil
// }
// _, err := c.Do("PING")
// return err
// },
// }
type Pool struct {
// Dial is an application supplied function for creating and configuring a
// connection.
//
// The connection returned from Dial must not be in a special state
// (subscribed to pubsub channel, transaction started, ...).
Dial func() (Conn, error)
// DialContext is an application supplied function for creating and configuring a
// connection with the given context.
//
// The connection returned from Dial must not be in a special state
// (subscribed to pubsub channel, transaction started, ...).
DialContext func(ctx context.Context) (Conn, error)
// TestOnBorrow is an optional application supplied function for checking
// the health of an idle connection before the connection is used again by
// the application. Argument t is the time that the connection was returned
// to the pool. If the function returns an error, then the connection is
// closed.
TestOnBorrow func(c Conn, t time.Time) error
// Maximum number of idle connections in the pool.
MaxIdle int
// Maximum number of connections allocated by the pool at a given time.
// When zero, there is no limit on the number of connections in the pool.
MaxActive int
// Close connections after remaining idle for this duration. If the value
// is zero, then idle connections are not closed. Applications should set
// the timeout to a value less than the server's timeout.
IdleTimeout time.Duration
// If Wait is true and the pool is at the MaxActive limit, then Get() waits
// for a connection to be returned to the pool before returning.
Wait bool
// Close connections older than this duration. If the value is zero, then
// the pool does not close connections based on age.
MaxConnLifetime time.Duration
mu sync.Mutex // mu protects the following fields
closed bool // set to true when the pool is closed.
active int // the number of open connections in the pool
initOnce sync.Once // the init ch once func
ch chan struct{} // limits open connections when p.Wait is true
idle idleList // idle connections
waitCount int64 // total number of connections waited for.
waitDuration time.Duration // total time waited for new connections.
}
// NewPool creates a new pool.
//
// Deprecated: Initialize the Pool directly as shown in the example.
func NewPool(newFn func() (Conn, error), maxIdle int) *Pool {
return &Pool{Dial: newFn, MaxIdle: maxIdle}
}
// Get gets a connection. The application must close the returned connection.
// This method always returns a valid connection so that applications can defer
// error handling to the first use of the connection. If there is an error
// getting an underlying connection, then the connection Err, Do, Send, Flush
// and Receive methods return that error.
func (p *Pool) Get() Conn {
// GetContext returns errorConn in the first argument when an error occurs.
c, _ := p.GetContext(context.Background())
return c
}
// GetContext gets a connection using the provided context.
//
// The provided Context must be non-nil. If the context expires before the
// connection is complete, an error is returned. Any expiration on the context
// will not affect the returned connection.
//
// If the function completes without error, then the application must close the
// returned connection.
func (p *Pool) GetContext(ctx context.Context) (Conn, error) {
// Wait until there is a vacant connection in the pool.
waited, err := p.waitVacantConn(ctx)
if err != nil {
return errorConn{err}, err
}
p.mu.Lock()
if waited > 0 {
p.waitCount++
p.waitDuration += waited
}
// Prune stale connections at the back of the idle list.
if p.IdleTimeout > 0 {
n := p.idle.count
for i := 0; i < n && p.idle.back != nil && p.idle.back.t.Add(p.IdleTimeout).Before(nowFunc()); i++ {
pc := p.idle.back
p.idle.popBack()
p.mu.Unlock()
pc.c.Close()
p.mu.Lock()
p.active--
}
}
// Get idle connection from the front of idle list.
for p.idle.front != nil {
pc := p.idle.front
p.idle.popFront()
p.mu.Unlock()
if (p.TestOnBorrow == nil || p.TestOnBorrow(pc.c, pc.t) == nil) &&
(p.MaxConnLifetime == 0 || nowFunc().Sub(pc.created) < p.MaxConnLifetime) {
return &activeConn{p: p, pc: pc}, nil
}
pc.c.Close()
p.mu.Lock()
p.active--
}
// Check for pool closed before dialing a new connection.
if p.closed {
p.mu.Unlock()
err := errors.New("redigo: get on closed pool")
return errorConn{err}, err
}
// Handle limit for p.Wait == false.
if !p.Wait && p.MaxActive > 0 && p.active >= p.MaxActive {
p.mu.Unlock()
return errorConn{ErrPoolExhausted}, ErrPoolExhausted
}
p.active++
p.mu.Unlock()
c, err := p.dial(ctx)
if err != nil {
p.mu.Lock()
p.active--
if p.ch != nil && !p.closed {
p.ch <- struct{}{}
}
p.mu.Unlock()
return errorConn{err}, err
}
return &activeConn{p: p, pc: &poolConn{c: c, created: nowFunc()}}, nil
}
// PoolStats contains pool statistics.
type PoolStats struct {
// ActiveCount is the number of connections in the pool. The count includes
// idle connections and connections in use.
ActiveCount int
// IdleCount is the number of idle connections in the pool.
IdleCount int
// WaitCount is the total number of connections waited for.
// This value is currently not guaranteed to be 100% accurate.
WaitCount int64
// WaitDuration is the total time blocked waiting for a new connection.
// This value is currently not guaranteed to be 100% accurate.
WaitDuration time.Duration
}
// Stats returns pool's statistics.
func (p *Pool) Stats() PoolStats {
p.mu.Lock()
stats := PoolStats{
ActiveCount: p.active,
IdleCount: p.idle.count,
WaitCount: p.waitCount,
WaitDuration: p.waitDuration,
}
p.mu.Unlock()
return stats
}
// ActiveCount returns the number of connections in the pool. The count
// includes idle connections and connections in use.
func (p *Pool) ActiveCount() int {
p.mu.Lock()
active := p.active
p.mu.Unlock()
return active
}
// IdleCount returns the number of idle connections in the pool.
func (p *Pool) IdleCount() int {
p.mu.Lock()
idle := p.idle.count
p.mu.Unlock()
return idle
}
// Close releases the resources used by the pool.
func (p *Pool) Close() error {
p.mu.Lock()
if p.closed {
p.mu.Unlock()
return nil
}
p.closed = true
p.active -= p.idle.count
pc := p.idle.front
p.idle.count = 0
p.idle.front, p.idle.back = nil, nil
if p.ch != nil {
close(p.ch)
}
p.mu.Unlock()
for ; pc != nil; pc = pc.next {
pc.c.Close()
}
return nil
}
func (p *Pool) lazyInit() {
p.initOnce.Do(func() {
p.ch = make(chan struct{}, p.MaxActive)
if p.closed {
close(p.ch)
} else {
for i := 0; i < p.MaxActive; i++ {
p.ch <- struct{}{}
}
}
})
}
// waitVacantConn waits for a vacant connection in the pool if waiting
// is enabled and the pool size is limited; otherwise it returns instantly.
// If ctx expires before that, an error is returned.
//
// If there was no vacant connection in the pool right away, it returns the
// time spent waiting for that connection to appear in the pool.
func (p *Pool) waitVacantConn(ctx context.Context) (waited time.Duration, err error) {
if !p.Wait || p.MaxActive <= 0 {
// No wait or no connection limit.
return 0, nil
}
p.lazyInit()
// wait indicates whether we believe the call will block; this is not 100%
// accurate, but for stats it is good enough.
wait := len(p.ch) == 0
var start time.Time
if wait {
start = time.Now()
}
select {
case <-p.ch:
// Additionally check that context hasn't expired while we were waiting,
// because `select` picks a random `case` if several of them are "ready".
select {
case <-ctx.Done():
p.ch <- struct{}{}
return 0, ctx.Err()
default:
}
case <-ctx.Done():
return 0, ctx.Err()
}
if wait {
return time.Since(start), nil
}
return 0, nil
}
func (p *Pool) dial(ctx context.Context) (Conn, error) {
if p.DialContext != nil {
return p.DialContext(ctx)
}
if p.Dial != nil {
return p.Dial()
}
return nil, errors.New("redigo: must pass Dial or DialContext to pool")
}
func (p *Pool) put(pc *poolConn, forceClose bool) error {
p.mu.Lock()
if !p.closed && !forceClose {
pc.t = nowFunc()
p.idle.pushFront(pc)
if p.idle.count > p.MaxIdle {
pc = p.idle.back
p.idle.popBack()
} else {
pc = nil
}
}
if pc != nil {
p.mu.Unlock()
pc.c.Close()
p.mu.Lock()
p.active--
}
if p.ch != nil && !p.closed {
p.ch <- struct{}{}
}
p.mu.Unlock()
return nil
}
type activeConn struct {
p *Pool
pc *poolConn
state int
}
var (
sentinel []byte
sentinelOnce sync.Once
)
func initSentinel() {
p := make([]byte, 64)
if _, err := rand.Read(p); err == nil {
sentinel = p
} else {
h := sha1.New()
io.WriteString(h, "Oops, rand failed. Use time instead.") // nolint: errcheck
io.WriteString(h, strconv.FormatInt(time.Now().UnixNano(), 10)) // nolint: errcheck
sentinel = h.Sum(nil)
}
}
func (ac *activeConn) firstError(errs ...error) error {
for _, err := range errs[:len(errs)-1] {
if err != nil {
return err
}
}
return errs[len(errs)-1]
}
func (ac *activeConn) Close() (err error) {
pc := ac.pc
if pc == nil {
return nil
}
ac.pc = nil
if ac.state&connectionMultiState != 0 {
err = pc.c.Send("DISCARD")
ac.state &^= (connectionMultiState | connectionWatchState)
} else if ac.state&connectionWatchState != 0 {
err = pc.c.Send("UNWATCH")
ac.state &^= connectionWatchState
}
if ac.state&connectionSubscribeState != 0 {
err = ac.firstError(err,
pc.c.Send("UNSUBSCRIBE"),
pc.c.Send("PUNSUBSCRIBE"),
)
// To detect the end of the message stream, ask the server to echo
// a sentinel value and read until we see that value.
sentinelOnce.Do(initSentinel)
err = ac.firstError(err,
pc.c.Send("ECHO", sentinel),
pc.c.Flush(),
)
for {
p, err2 := pc.c.Receive()
if err2 != nil {
err = ac.firstError(err, err2)
break
}
if p, ok := p.([]byte); ok && bytes.Equal(p, sentinel) {
ac.state &^= connectionSubscribeState
break
}
}
}
_, err2 := pc.c.Do("")
return ac.firstError(
err,
err2,
ac.p.put(pc, ac.state != 0 || pc.c.Err() != nil),
)
}
func (ac *activeConn) Err() error {
pc := ac.pc
if pc == nil {
return errConnClosed
}
return pc.c.Err()
}
func (ac *activeConn) DoContext(ctx context.Context, commandName string, args ...interface{}) (reply interface{}, err error) {
pc := ac.pc
if pc == nil {
return nil, errConnClosed
}
cwt, ok := pc.c.(ConnWithContext)
if !ok {
return nil, errContextNotSupported
}
ci := lookupCommandInfo(commandName)
ac.state = (ac.state | ci.Set) &^ ci.Clear
return cwt.DoContext(ctx, commandName, args...)
}
func (ac *activeConn) Do(commandName string, args ...interface{}) (reply interface{}, err error) {
pc := ac.pc
if pc == nil {
return nil, errConnClosed
}
ci := lookupCommandInfo(commandName)
ac.state = (ac.state | ci.Set) &^ ci.Clear
return pc.c.Do(commandName, args...)
}
func (ac *activeConn) DoWithTimeout(timeout time.Duration, commandName string, args ...interface{}) (reply interface{}, err error) {
pc := ac.pc
if pc == nil {
return nil, errConnClosed
}
cwt, ok := pc.c.(ConnWithTimeout)
if !ok {
return nil, errTimeoutNotSupported
}
ci := lookupCommandInfo(commandName)
ac.state = (ac.state | ci.Set) &^ ci.Clear
return cwt.DoWithTimeout(timeout, commandName, args...)
}
func (ac *activeConn) Send(commandName string, args ...interface{}) error {
pc := ac.pc
if pc == nil {
return errConnClosed
}
ci := lookupCommandInfo(commandName)
ac.state = (ac.state | ci.Set) &^ ci.Clear
return pc.c.Send(commandName, args...)
}
func (ac *activeConn) Flush() error {
pc := ac.pc
if pc == nil {
return errConnClosed
}
return pc.c.Flush()
}
func (ac *activeConn) Receive() (reply interface{}, err error) {
pc := ac.pc
if pc == nil {
return nil, errConnClosed
}
return pc.c.Receive()
}
func (ac *activeConn) ReceiveContext(ctx context.Context) (reply interface{}, err error) {
pc := ac.pc
if pc == nil {
return nil, errConnClosed
}
cwt, ok := pc.c.(ConnWithContext)
if !ok {
return nil, errContextNotSupported
}
return cwt.ReceiveContext(ctx)
}
func (ac *activeConn) ReceiveWithTimeout(timeout time.Duration) (reply interface{}, err error) {
pc := ac.pc
if pc == nil {
return nil, errConnClosed
}
cwt, ok := pc.c.(ConnWithTimeout)
if !ok {
return nil, errTimeoutNotSupported
}
return cwt.ReceiveWithTimeout(timeout)
}
type errorConn struct{ err error }
func (ec errorConn) Do(string, ...interface{}) (interface{}, error) { return nil, ec.err }
func (ec errorConn) DoContext(context.Context, string, ...interface{}) (interface{}, error) {
return nil, ec.err
}
func (ec errorConn) DoWithTimeout(time.Duration, string, ...interface{}) (interface{}, error) {
return nil, ec.err
}
func (ec errorConn) Send(string, ...interface{}) error { return ec.err }
func (ec errorConn) Err() error { return ec.err }
func (ec errorConn) Close() error { return nil }
func (ec errorConn) Flush() error { return ec.err }
func (ec errorConn) Receive() (interface{}, error) { return nil, ec.err }
func (ec errorConn) ReceiveContext(context.Context) (interface{}, error) { return nil, ec.err }
func (ec errorConn) ReceiveWithTimeout(time.Duration) (interface{}, error) { return nil, ec.err }
type idleList struct {
count int
front, back *poolConn
}
type poolConn struct {
c Conn
t time.Time
created time.Time
next, prev *poolConn
}
func (l *idleList) pushFront(pc *poolConn) {
pc.next = l.front
pc.prev = nil
if l.count == 0 {
l.back = pc
} else {
l.front.prev = pc
}
l.front = pc
l.count++
}
func (l *idleList) popFront() {
pc := l.front
l.count--
if l.count == 0 {
l.front, l.back = nil, nil
} else {
pc.next.prev = nil
l.front = pc.next
}
pc.next, pc.prev = nil, nil
}
func (l *idleList) popBack() {
pc := l.back
l.count--
if l.count == 0 {
l.front, l.back = nil, nil
} else {
pc.prev.next = nil
l.back = pc.prev
}
pc.next, pc.prev = nil, nil
}
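// The following is an illustrative sketch, not part of the upstream redigo
// sources: it shows one way to combine GetContext with pool statistics. The
// handler shape and the use of PING below are assumptions, not recommendations.
//
//	func handle(ctx context.Context, pool *Pool) error {
//		conn, err := pool.GetContext(ctx)
//		if err != nil {
//			return err
//		}
//		defer conn.Close()
//
//		if _, err := conn.Do("PING"); err != nil {
//			return err
//		}
//
//		stats := pool.Stats()
//		_ = stats.ActiveCount // e.g. export to a metrics system
//		return nil
//	}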

View File

@ -0,0 +1,166 @@
// Copyright 2012 Gary Burd
//
// Licensed under the Apache License, Version 2.0 (the "License"): you may
// not use this file except in compliance with the License. You may obtain
// a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
// License for the specific language governing permissions and limitations
// under the License.
package redis
import (
"context"
"errors"
"time"
)
// Subscription represents a subscribe or unsubscribe notification.
type Subscription struct {
// Kind is "subscribe", "unsubscribe", "psubscribe" or "punsubscribe"
Kind string
// The channel that was changed.
Channel string
// The current number of subscriptions for connection.
Count int
}
// Message represents a message notification.
type Message struct {
// The originating channel.
Channel string
// The matched pattern, if any
Pattern string
// The message data.
Data []byte
}
// Pong represents a pubsub pong notification.
type Pong struct {
Data string
}
// PubSubConn wraps a Conn with convenience methods for subscribers.
type PubSubConn struct {
Conn Conn
}
// Close closes the connection.
func (c PubSubConn) Close() error {
return c.Conn.Close()
}
// Subscribe subscribes the connection to the specified channels.
func (c PubSubConn) Subscribe(channel ...interface{}) error {
if err := c.Conn.Send("SUBSCRIBE", channel...); err != nil {
return err
}
return c.Conn.Flush()
}
// PSubscribe subscribes the connection to the given patterns.
func (c PubSubConn) PSubscribe(channel ...interface{}) error {
if err := c.Conn.Send("PSUBSCRIBE", channel...); err != nil {
return err
}
return c.Conn.Flush()
}
// Unsubscribe unsubscribes the connection from the given channels, or from all
// of them if none is given.
func (c PubSubConn) Unsubscribe(channel ...interface{}) error {
if err := c.Conn.Send("UNSUBSCRIBE", channel...); err != nil {
return err
}
return c.Conn.Flush()
}
// PUnsubscribe unsubscribes the connection from the given patterns, or from all
// of them if none is given.
func (c PubSubConn) PUnsubscribe(channel ...interface{}) error {
if err := c.Conn.Send("PUNSUBSCRIBE", channel...); err != nil {
return err
}
return c.Conn.Flush()
}
// Ping sends a PING to the server with the specified data.
//
// The connection must be subscribed to at least one channel or pattern when
// calling this method.
func (c PubSubConn) Ping(data string) error {
if err := c.Conn.Send("PING", data); err != nil {
return err
}
return c.Conn.Flush()
}
// Receive returns a pushed message as a Subscription, Message, Pong or error.
// The return value is intended to be used directly in a type switch as
// illustrated in the PubSubConn example.
func (c PubSubConn) Receive() interface{} {
return c.receiveInternal(c.Conn.Receive())
}
// ReceiveWithTimeout is like Receive, but it allows the application to
// override the connection's default timeout.
func (c PubSubConn) ReceiveWithTimeout(timeout time.Duration) interface{} {
return c.receiveInternal(ReceiveWithTimeout(c.Conn, timeout))
}
// ReceiveContext is like Receive, but it allows termination of the receive
// via a Context. If the call returns due to closure of the context's Done
// channel the underlying Conn will have been closed.
func (c PubSubConn) ReceiveContext(ctx context.Context) interface{} {
return c.receiveInternal(ReceiveContext(c.Conn, ctx))
}
func (c PubSubConn) receiveInternal(replyArg interface{}, errArg error) interface{} {
reply, err := Values(replyArg, errArg)
if err != nil {
return err
}
var kind string
reply, err = Scan(reply, &kind)
if err != nil {
return err
}
switch kind {
case "message":
var m Message
if _, err := Scan(reply, &m.Channel, &m.Data); err != nil {
return err
}
return m
case "pmessage":
var m Message
if _, err := Scan(reply, &m.Pattern, &m.Channel, &m.Data); err != nil {
return err
}
return m
case "subscribe", "psubscribe", "unsubscribe", "punsubscribe":
s := Subscription{Kind: kind}
if _, err := Scan(reply, &s.Channel, &s.Count); err != nil {
return err
}
return s
case "pong":
var p Pong
if _, err := Scan(reply, &p.Data); err != nil {
return err
}
return p
}
return errors.New("redigo: unknown pubsub notification")
}
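// An illustrative sketch, not part of the upstream redigo sources, of the type
// switch mentioned in the Receive documentation. The connection c and the
// channel name "events" are assumptions for the example.
//
//	func listen(c Conn) error {
//		psc := PubSubConn{Conn: c}
//		if err := psc.Subscribe("events"); err != nil {
//			return err
//		}
//		for {
//			switch v := psc.Receive().(type) {
//			case Message:
//				// v.Channel and v.Data carry the published message.
//			case Subscription:
//				// v.Count is the current number of subscriptions.
//			case error:
//				return v
//			}
//		}
//	}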

View File

@ -0,0 +1,213 @@
// Copyright 2012 Gary Burd
//
// Licensed under the Apache License, Version 2.0 (the "License"): you may
// not use this file except in compliance with the License. You may obtain
// a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
// License for the specific language governing permissions and limitations
// under the License.
package redis
import (
"context"
"errors"
"time"
)
// Error represents an error returned in a command reply.
type Error string
func (err Error) Error() string { return string(err) }
// Conn represents a connection to a Redis server.
type Conn interface {
// Close closes the connection.
Close() error
// Err returns a non-nil value when the connection is not usable.
Err() error
// Do sends a command to the server and returns the received reply.
// This function uses the timeout that was set when the connection was created.
Do(commandName string, args ...interface{}) (reply interface{}, err error)
// Send writes the command to the client's output buffer.
Send(commandName string, args ...interface{}) error
// Flush flushes the output buffer to the Redis server.
Flush() error
// Receive receives a single reply from the Redis server
Receive() (reply interface{}, err error)
}
// Argument is the interface implemented by an object which wants to control how
// the object is converted to Redis bulk strings.
type Argument interface {
// RedisArg returns a value to be encoded as a bulk string per the
// conversions listed in the section 'Executing Commands'.
// Implementations should typically return a []byte or string.
RedisArg() interface{}
}
// Scanner is implemented by an object which wants to control how its value is
// interpreted when read from Redis.
type Scanner interface {
// RedisScan assigns a value from a Redis value. The argument src is one of
// the reply types listed in the section `Executing Commands`.
//
// An error should be returned if the value cannot be stored without
// loss of information.
RedisScan(src interface{}) error
}
// ConnWithTimeout is an optional interface that allows the caller to override
// a connection's default read timeout. This interface is useful for executing
// the BLPOP, BRPOP, BRPOPLPUSH, XREAD and other commands that block at the
// server.
//
// A connection's default read timeout is set with the DialReadTimeout dial
// option. Applications should rely on the default timeout for commands that do
// not block at the server.
//
// All of the Conn implementations in this package satisfy the ConnWithTimeout
// interface.
//
// Use the DoWithTimeout and ReceiveWithTimeout helper functions to simplify
// use of this interface.
type ConnWithTimeout interface {
Conn
// DoWithTimeout sends a command to the server and returns the received reply.
// The timeout overrides the read timeout set when dialing the connection.
DoWithTimeout(timeout time.Duration, commandName string, args ...interface{}) (reply interface{}, err error)
// ReceiveWithTimeout receives a single reply from the Redis server.
// The timeout overrides the read timeout set when dialing the connection.
ReceiveWithTimeout(timeout time.Duration) (reply interface{}, err error)
}
// ConnWithContext is an optional interface that allows the caller to control the command's life with context.
type ConnWithContext interface {
Conn
// DoContext sends a command to the server and returns the received reply.
// min(ctx deadline, DialReadTimeout()) is used as the deadline.
// The connection is closed if DialReadTimeout() elapses, or if ctx times out
// or is canceled, while this function is running.
// A DialReadTimeout() timeout can be detected with errors.Is(err, os.ErrDeadlineExceeded);
// a ctx timeout returns context.DeadlineExceeded and a canceled ctx returns context.Canceled.
DoContext(ctx context.Context, commandName string, args ...interface{}) (reply interface{}, err error)
// ReceiveContext receives a single reply from the Redis server.
// min(ctx deadline, DialReadTimeout()) is used as the deadline.
// The connection is closed if DialReadTimeout() elapses, or if ctx times out
// or is canceled, while this function is running.
// A DialReadTimeout() timeout can be detected with errors.Is(err, os.ErrDeadlineExceeded);
// a ctx timeout returns context.DeadlineExceeded and a canceled ctx returns context.Canceled.
ReceiveContext(ctx context.Context) (reply interface{}, err error)
}
var errTimeoutNotSupported = errors.New("redis: connection does not support ConnWithTimeout")
var errContextNotSupported = errors.New("redis: connection does not support ConnWithContext")
// DoContext sends a command to the server and returns the received reply.
// min(ctx deadline, DialReadTimeout()) is used as the deadline.
// The connection is closed if DialReadTimeout() elapses, or if ctx times out
// or is canceled, while this function is running.
// A DialReadTimeout() timeout can be detected with errors.Is(err, os.ErrDeadlineExceeded);
// a ctx timeout returns context.DeadlineExceeded and a canceled ctx returns context.Canceled.
func DoContext(c Conn, ctx context.Context, cmd string, args ...interface{}) (interface{}, error) {
cwt, ok := c.(ConnWithContext)
if !ok {
return nil, errContextNotSupported
}
return cwt.DoContext(ctx, cmd, args...)
}
// DoWithTimeout executes a Redis command with the specified read timeout. If
// the connection does not satisfy the ConnWithTimeout interface, then an error
// is returned.
func DoWithTimeout(c Conn, timeout time.Duration, cmd string, args ...interface{}) (interface{}, error) {
cwt, ok := c.(ConnWithTimeout)
if !ok {
return nil, errTimeoutNotSupported
}
return cwt.DoWithTimeout(timeout, cmd, args...)
}
// ReceiveContext receives a single reply from the Redis server.
// min(ctx deadline, DialReadTimeout()) is used as the deadline.
// The connection is closed if DialReadTimeout() elapses, or if ctx times out
// or is canceled, while this function is running.
// A DialReadTimeout() timeout can be detected with errors.Is(err, os.ErrDeadlineExceeded);
// a ctx timeout returns context.DeadlineExceeded and a canceled ctx returns context.Canceled.
func ReceiveContext(c Conn, ctx context.Context) (interface{}, error) {
cwt, ok := c.(ConnWithContext)
if !ok {
return nil, errContextNotSupported
}
return cwt.ReceiveContext(ctx)
}
// ReceiveWithTimeout receives a reply with the specified read timeout. If the
// connection does not satisfy the ConnWithTimeout interface, then an error is
// returned.
func ReceiveWithTimeout(c Conn, timeout time.Duration) (interface{}, error) {
cwt, ok := c.(ConnWithTimeout)
if !ok {
return nil, errTimeoutNotSupported
}
return cwt.ReceiveWithTimeout(timeout)
}
// SlowLog represents a redis SlowLog
type SlowLog struct {
// ID is a unique progressive identifier for every slow log entry.
ID int64
// Time is the unix timestamp at which the logged command was processed.
Time time.Time
// ExecutionTime is the amount of time needed for the command execution.
ExecutionTime time.Duration
// Args is the command name and arguments
Args []string
// ClientAddr is the client IP address (4.0 only).
ClientAddr string
// ClientName is the name set via the CLIENT SETNAME command (4.0 only).
ClientName string
}
// Latency represents a redis LATENCY LATEST.
type Latency struct {
// Name of the latest latency spike event.
Name string
// Time of the latest latency spike for the event.
Time time.Time
// Latest is the latest recorded latency for the named event.
Latest time.Duration
// Max is the maximum latency for the named event.
Max time.Duration
}
// LatencyHistory represents a redis LATENCY HISTORY.
type LatencyHistory struct {
// Time is the unix timestamp at which the event was processed.
Time time.Time
// ExecutionTime is the amount of time needed for the command execution.
ExecutionTime time.Duration
}
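// An illustrative sketch, not part of the upstream redigo sources, showing the
// DoWithTimeout helper with a blocking command. The key name "jobs" and the
// timeout values are assumptions for the example.
//
//	// Override the connection's default read timeout for a single BLPOP call.
//	// The server-side BLPOP timeout (4s) is kept below the read timeout (5s).
//	reply, err := DoWithTimeout(c, 5*time.Second, "BLPOP", "jobs", 4)
//	if err != nil {
//		// handle error; a plain Conn that does not implement ConnWithTimeout
//		// returns an error here.
//	}
//	_ = reply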

View File

@ -0,0 +1,48 @@
package redis
import (
"reflect"
"runtime"
)
// methodName returns the name of the calling method,
// assumed to be two stack frames above.
func methodName() string {
pc, _, _, _ := runtime.Caller(2)
f := runtime.FuncForPC(pc)
if f == nil {
return "unknown method"
}
return f.Name()
}
// mustBe panics if v's kind is not the expected kind.
func mustBe(v reflect.Value, expected reflect.Kind) {
if v.Kind() != expected {
panic(&reflect.ValueError{Method: methodName(), Kind: v.Kind()})
}
}
// fieldByIndexCreate returns the nested field corresponding
// to index, creating elements that are nil when stepping through.
// It panics if v is not a struct.
func fieldByIndexCreate(v reflect.Value, index []int) reflect.Value {
if len(index) == 1 {
return v.Field(index[0])
}
mustBe(v, reflect.Struct)
for i, x := range index {
if i > 0 {
if v.Kind() == reflect.Ptr && v.Type().Elem().Kind() == reflect.Struct {
if v.IsNil() {
v.Set(reflect.New(v.Type().Elem()))
}
v = v.Elem()
}
}
v = v.Field(x)
}
return v
}

View File

@ -0,0 +1,34 @@
//go:build !go1.18
// +build !go1.18
package redis
import (
"errors"
"reflect"
)
// fieldByIndexErr returns the nested field corresponding to index.
// It returns an error if evaluation requires stepping through a nil
// pointer, but panics if it must step through a field that
// is not a struct.
func fieldByIndexErr(v reflect.Value, index []int) (reflect.Value, error) {
if len(index) == 1 {
return v.Field(index[0]), nil
}
mustBe(v, reflect.Struct)
for i, x := range index {
if i > 0 {
if v.Kind() == reflect.Ptr && v.Type().Elem().Kind() == reflect.Struct {
if v.IsNil() {
return reflect.Value{}, errors.New("reflect: indirection through nil pointer to embedded struct field " + v.Type().Elem().Name())
}
v = v.Elem()
}
}
v = v.Field(x)
}
return v, nil
}

View File

@ -0,0 +1,16 @@
//go:build go1.18
// +build go1.18
package redis
import (
"reflect"
)
// fieldByIndexErr returns the nested field corresponding to index.
// It returns an error if evaluation requires stepping through a nil
// pointer, but panics if it must step through a field that
// is not a struct.
func fieldByIndexErr(v reflect.Value, index []int) (reflect.Value, error) {
return v.FieldByIndexErr(index)
}

View File

@ -0,0 +1,735 @@
// Copyright 2012 Gary Burd
//
// Licensed under the Apache License, Version 2.0 (the "License"): you may
// not use this file except in compliance with the License. You may obtain
// a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
// License for the specific language governing permissions and limitations
// under the License.
package redis
import (
"errors"
"fmt"
"strconv"
"time"
)
// ErrNil indicates that a reply value is nil.
var ErrNil = errors.New("redigo: nil returned")
// Int is a helper that converts a command reply to an integer. If err is not
// equal to nil, then Int returns 0, err. Otherwise, Int converts the
// reply to an int as follows:
//
// Reply type Result
// integer int(reply), nil
// bulk string parsed reply, nil
// nil 0, ErrNil
// other 0, error
func Int(reply interface{}, err error) (int, error) {
if err != nil {
return 0, err
}
switch reply := reply.(type) {
case int64:
x := int(reply)
if int64(x) != reply {
return 0, strconv.ErrRange
}
return x, nil
case []byte:
n, err := strconv.ParseInt(string(reply), 10, 0)
return int(n), err
case nil:
return 0, ErrNil
case Error:
return 0, reply
}
return 0, fmt.Errorf("redigo: unexpected type for Int, got type %T", reply)
}
// Int64 is a helper that converts a command reply to 64 bit integer. If err is
// not equal to nil, then Int64 returns 0, err. Otherwise, Int64 converts the
// reply to an int64 as follows:
//
// Reply type Result
// integer reply, nil
// bulk string parsed reply, nil
// nil 0, ErrNil
// other 0, error
func Int64(reply interface{}, err error) (int64, error) {
if err != nil {
return 0, err
}
switch reply := reply.(type) {
case int64:
return reply, nil
case []byte:
n, err := strconv.ParseInt(string(reply), 10, 64)
return n, err
case nil:
return 0, ErrNil
case Error:
return 0, reply
}
return 0, fmt.Errorf("redigo: unexpected type for Int64, got type %T", reply)
}
func errNegativeInt(v int64) error {
return fmt.Errorf("redigo: unexpected negative value %v for Uint64", v)
}
// Uint64 is a helper that converts a command reply to 64 bit unsigned integer.
// If err is not equal to nil, then Uint64 returns 0, err. Otherwise, Uint64 converts the
// reply to an uint64 as follows:
//
// Reply type Result
// +integer reply, nil
// bulk string parsed reply, nil
// nil 0, ErrNil
// other 0, error
func Uint64(reply interface{}, err error) (uint64, error) {
if err != nil {
return 0, err
}
switch reply := reply.(type) {
case int64:
if reply < 0 {
return 0, errNegativeInt(reply)
}
return uint64(reply), nil
case []byte:
n, err := strconv.ParseUint(string(reply), 10, 64)
return n, err
case nil:
return 0, ErrNil
case Error:
return 0, reply
}
return 0, fmt.Errorf("redigo: unexpected type for Uint64, got type %T", reply)
}
// Float64 is a helper that converts a command reply to 64 bit float. If err is
// not equal to nil, then Float64 returns 0, err. Otherwise, Float64 converts
// the reply to a float64 as follows:
//
// Reply type Result
// bulk string parsed reply, nil
// nil 0, ErrNil
// other 0, error
func Float64(reply interface{}, err error) (float64, error) {
if err != nil {
return 0, err
}
switch reply := reply.(type) {
case []byte:
n, err := strconv.ParseFloat(string(reply), 64)
return n, err
case nil:
return 0, ErrNil
case Error:
return 0, reply
}
return 0, fmt.Errorf("redigo: unexpected type for Float64, got type %T", reply)
}
// String is a helper that converts a command reply to a string. If err is not
// equal to nil, then String returns "", err. Otherwise String converts the
// reply to a string as follows:
//
// Reply type Result
// bulk string string(reply), nil
// simple string reply, nil
// nil "", ErrNil
// other "", error
func String(reply interface{}, err error) (string, error) {
if err != nil {
return "", err
}
switch reply := reply.(type) {
case []byte:
return string(reply), nil
case string:
return reply, nil
case nil:
return "", ErrNil
case Error:
return "", reply
}
return "", fmt.Errorf("redigo: unexpected type for String, got type %T", reply)
}
// Bytes is a helper that converts a command reply to a slice of bytes. If err
// is not equal to nil, then Bytes returns nil, err. Otherwise Bytes converts
// the reply to a slice of bytes as follows:
//
// Reply type Result
// bulk string reply, nil
// simple string []byte(reply), nil
// nil nil, ErrNil
// other nil, error
func Bytes(reply interface{}, err error) ([]byte, error) {
if err != nil {
return nil, err
}
switch reply := reply.(type) {
case []byte:
return reply, nil
case string:
return []byte(reply), nil
case nil:
return nil, ErrNil
case Error:
return nil, reply
}
return nil, fmt.Errorf("redigo: unexpected type for Bytes, got type %T", reply)
}
// Bool is a helper that converts a command reply to a boolean. If err is not
// equal to nil, then Bool returns false, err. Otherwise Bool converts the
// reply to boolean as follows:
//
// Reply type Result
// integer value != 0, nil
// bulk string strconv.ParseBool(reply)
// nil false, ErrNil
// other false, error
func Bool(reply interface{}, err error) (bool, error) {
if err != nil {
return false, err
}
switch reply := reply.(type) {
case int64:
return reply != 0, nil
case []byte:
return strconv.ParseBool(string(reply))
case nil:
return false, ErrNil
case Error:
return false, reply
}
return false, fmt.Errorf("redigo: unexpected type for Bool, got type %T", reply)
}
// MultiBulk is a helper that converts an array command reply to a []interface{}.
//
// Deprecated: Use Values instead.
func MultiBulk(reply interface{}, err error) ([]interface{}, error) { return Values(reply, err) }
// Values is a helper that converts an array command reply to a []interface{}.
// If err is not equal to nil, then Values returns nil, err. Otherwise, Values
// converts the reply as follows:
//
// Reply type Result
// array reply, nil
// nil nil, ErrNil
// other nil, error
func Values(reply interface{}, err error) ([]interface{}, error) {
if err != nil {
return nil, err
}
switch reply := reply.(type) {
case []interface{}:
return reply, nil
case nil:
return nil, ErrNil
case Error:
return nil, reply
}
return nil, fmt.Errorf("redigo: unexpected type for Values, got type %T", reply)
}
func sliceHelper(reply interface{}, err error, name string, makeSlice func(int), assign func(int, interface{}) error) error {
if err != nil {
return err
}
switch reply := reply.(type) {
case []interface{}:
makeSlice(len(reply))
for i := range reply {
if reply[i] == nil {
continue
}
if err := assign(i, reply[i]); err != nil {
return err
}
}
return nil
case nil:
return ErrNil
case Error:
return reply
}
return fmt.Errorf("redigo: unexpected type for %s, got type %T", name, reply)
}
// Float64s is a helper that converts an array command reply to a []float64. If
// err is not equal to nil, then Float64s returns nil, err. Nil array items are
// converted to 0 in the output slice. Float64s returns an error if an array
// item is not a bulk string or nil.
func Float64s(reply interface{}, err error) ([]float64, error) {
var result []float64
err = sliceHelper(reply, err, "Float64s", func(n int) { result = make([]float64, n) }, func(i int, v interface{}) error {
switch v := v.(type) {
case []byte:
f, err := strconv.ParseFloat(string(v), 64)
result[i] = f
return err
case Error:
return v
default:
return fmt.Errorf("redigo: unexpected element type for Float64s, got type %T", v)
}
})
return result, err
}
// Strings is a helper that converts an array command reply to a []string. If
// err is not equal to nil, then Strings returns nil, err. Nil array items are
// converted to "" in the output slice. Strings returns an error if an array
// item is not a bulk string or nil.
func Strings(reply interface{}, err error) ([]string, error) {
var result []string
err = sliceHelper(reply, err, "Strings", func(n int) { result = make([]string, n) }, func(i int, v interface{}) error {
switch v := v.(type) {
case string:
result[i] = v
return nil
case []byte:
result[i] = string(v)
return nil
case Error:
return v
default:
return fmt.Errorf("redigo: unexpected element type for Strings, got type %T", v)
}
})
return result, err
}
// ByteSlices is a helper that converts an array command reply to a [][]byte.
// If err is not equal to nil, then ByteSlices returns nil, err. Nil array
// items stay nil. ByteSlices returns an error if an array item is not a
// bulk string or nil.
func ByteSlices(reply interface{}, err error) ([][]byte, error) {
var result [][]byte
err = sliceHelper(reply, err, "ByteSlices", func(n int) { result = make([][]byte, n) }, func(i int, v interface{}) error {
switch v := v.(type) {
case []byte:
result[i] = v
return nil
case Error:
return v
default:
return fmt.Errorf("redigo: unexpected element type for ByteSlices, got type %T", v)
}
})
return result, err
}
// Int64s is a helper that converts an array command reply to a []int64.
// If err is not equal to nil, then Int64s returns nil, err. Nil array
// items are left as 0 in the output slice. Int64s returns an error if an
// array item is not a bulk string or nil.
func Int64s(reply interface{}, err error) ([]int64, error) {
var result []int64
err = sliceHelper(reply, err, "Int64s", func(n int) { result = make([]int64, n) }, func(i int, v interface{}) error {
switch v := v.(type) {
case int64:
result[i] = v
return nil
case []byte:
n, err := strconv.ParseInt(string(v), 10, 64)
result[i] = n
return err
case Error:
return v
default:
return fmt.Errorf("redigo: unexpected element type for Int64s, got type %T", v)
}
})
return result, err
}
// Ints is a helper that converts an array command reply to a []int.
// If err is not equal to nil, then Ints returns nil, err. Nil array
// items are left as 0 in the output slice. Ints returns an error if an
// array item is not a bulk string or nil.
func Ints(reply interface{}, err error) ([]int, error) {
var result []int
err = sliceHelper(reply, err, "Ints", func(n int) { result = make([]int, n) }, func(i int, v interface{}) error {
switch v := v.(type) {
case int64:
n := int(v)
if int64(n) != v {
return strconv.ErrRange
}
result[i] = n
return nil
case []byte:
n, err := strconv.Atoi(string(v))
result[i] = n
return err
case Error:
return v
default:
return fmt.Errorf("redigo: unexpected element type for Ints, got type %T", v)
}
})
return result, err
}
// mapHelper builds a map from the data in reply.
func mapHelper(reply interface{}, err error, name string, makeMap func(int), assign func(key string, value interface{}) error) error {
values, err := Values(reply, err)
if err != nil {
return err
}
if len(values)%2 != 0 {
return fmt.Errorf("redigo: %s expects even number of values result, got %d", name, len(values))
}
makeMap(len(values) / 2)
for i := 0; i < len(values); i += 2 {
key, ok := values[i].([]byte)
if !ok {
return fmt.Errorf("redigo: %s key[%d] not a bulk string value, got %T", name, i, values[i])
}
if err := assign(string(key), values[i+1]); err != nil {
return err
}
}
return nil
}
// StringMap is a helper that converts an array of strings (alternating key, value)
// into a map[string]string. The HGETALL and CONFIG GET commands return replies in this format.
// Requires an even number of values in result.
func StringMap(reply interface{}, err error) (map[string]string, error) {
var result map[string]string
err = mapHelper(reply, err, "StringMap",
func(n int) {
result = make(map[string]string, n)
}, func(key string, v interface{}) error {
value, ok := v.([]byte)
if !ok {
return fmt.Errorf("redigo: StringMap for %q not a bulk string value, got %T", key, v)
}
result[key] = string(value)
return nil
},
)
return result, err
}
// IntMap is a helper that converts an array of strings (alternating key, value)
// into a map[string]int. The HGETALL commands return replies in this format.
// Requires an even number of values in result.
func IntMap(result interface{}, err error) (map[string]int, error) {
var m map[string]int
err = mapHelper(result, err, "IntMap",
func(n int) {
m = make(map[string]int, n)
}, func(key string, v interface{}) error {
value, err := Int(v, nil)
if err != nil {
return err
}
m[key] = value
return nil
},
)
return m, err
}
// Int64Map is a helper that converts an array of strings (alternating key, value)
// into a map[string]int64. The HGETALL commands return replies in this format.
// Requires an even number of values in result.
func Int64Map(result interface{}, err error) (map[string]int64, error) {
var m map[string]int64
err = mapHelper(result, err, "Int64Map",
func(n int) {
m = make(map[string]int64, n)
}, func(key string, v interface{}) error {
value, err := Int64(v, nil)
if err != nil {
return err
}
m[key] = value
return nil
},
)
return m, err
}
// Float64Map is a helper that converts an array of strings (alternating key, value)
// into a map[string]float64. The HGETALL commands return replies in this format.
// Requires an even number of values in result.
func Float64Map(result interface{}, err error) (map[string]float64, error) {
var m map[string]float64
err = mapHelper(result, err, "Float64Map",
func(n int) {
m = make(map[string]float64, n)
}, func(key string, v interface{}) error {
value, err := Float64(v, nil)
if err != nil {
return err
}
m[key] = value
return nil
},
)
return m, err
}
// Positions is a helper that converts an array of positions (lat, long)
// into a []*[2]float64. The GEOPOS command returns replies in this format.
func Positions(result interface{}, err error) ([]*[2]float64, error) {
values, err := Values(result, err)
if err != nil {
return nil, err
}
positions := make([]*[2]float64, len(values))
for i := range values {
if values[i] == nil {
continue
}
p, ok := values[i].([]interface{})
if !ok {
return nil, fmt.Errorf("redigo: unexpected element type for interface slice, got type %T", values[i])
}
if len(p) != 2 {
return nil, fmt.Errorf("redigo: unexpected number of values for a member position, got %d", len(p))
}
lat, err := Float64(p[0], nil)
if err != nil {
return nil, err
}
long, err := Float64(p[1], nil)
if err != nil {
return nil, err
}
positions[i] = &[2]float64{lat, long}
}
return positions, nil
}
// Uint64s is a helper that converts an array command reply to a []uint64.
// If err is not equal to nil, then Uint64s returns nil, err. Nil array
// items are left as 0 in the output slice. Uint64s returns an error if an
// array item is not a bulk string or nil.
func Uint64s(reply interface{}, err error) ([]uint64, error) {
var result []uint64
err = sliceHelper(reply, err, "Uint64s", func(n int) { result = make([]uint64, n) }, func(i int, v interface{}) error {
switch v := v.(type) {
case uint64:
result[i] = v
return nil
case []byte:
n, err := strconv.ParseUint(string(v), 10, 64)
result[i] = n
return err
case Error:
return v
default:
return fmt.Errorf("redigo: unexpected element type for Uint64s, got type %T", v)
}
})
return result, err
}
// Uint64Map is a helper that converts an array of strings (alternating key, value)
// into a map[string]uint64. The HGETALL commands return replies in this format.
// Requires an even number of values in result.
func Uint64Map(result interface{}, err error) (map[string]uint64, error) {
var m map[string]uint64
err = mapHelper(result, err, "Uint64Map",
func(n int) {
m = make(map[string]uint64, n)
}, func(key string, v interface{}) error {
value, err := Uint64(v, nil)
if err != nil {
return err
}
m[key] = value
return nil
},
)
return m, err
}
// SlowLogs is a helper that parses the SLOWLOG GET command output and
// returns the array of SlowLog entries.
func SlowLogs(result interface{}, err error) ([]SlowLog, error) {
rawLogs, err := Values(result, err)
if err != nil {
return nil, err
}
logs := make([]SlowLog, len(rawLogs))
for i, e := range rawLogs {
rawLog, ok := e.([]interface{})
if !ok {
return nil, fmt.Errorf("redigo: slowlog element is not an array, got %T", e)
}
var log SlowLog
if len(rawLog) < 4 {
return nil, fmt.Errorf("redigo: slowlog element has %d elements, expected at least 4", len(rawLog))
}
log.ID, ok = rawLog[0].(int64)
if !ok {
return nil, fmt.Errorf("redigo: slowlog element[0] not an int64, got %T", rawLog[0])
}
timestamp, ok := rawLog[1].(int64)
if !ok {
return nil, fmt.Errorf("redigo: slowlog element[1] not an int64, got %T", rawLog[1])
}
log.Time = time.Unix(timestamp, 0)
duration, ok := rawLog[2].(int64)
if !ok {
return nil, fmt.Errorf("redigo: slowlog element[2] not an int64, got %T", rawLog[2])
}
log.ExecutionTime = time.Duration(duration) * time.Microsecond
log.Args, err = Strings(rawLog[3], nil)
if err != nil {
return nil, fmt.Errorf("redigo: slowlog element[3] is not array of strings: %w", err)
}
if len(rawLog) >= 6 {
log.ClientAddr, err = String(rawLog[4], nil)
if err != nil {
return nil, fmt.Errorf("redigo: slowlog element[4] is not a string: %w", err)
}
log.ClientName, err = String(rawLog[5], nil)
if err != nil {
return nil, fmt.Errorf("redigo: slowlog element[5] is not a string: %w", err)
}
}
logs[i] = log
}
return logs, nil
}
// Latencies is a helper that parses the LATENCY LATEST command output and
// returns the slice of Latency values.
func Latencies(result interface{}, err error) ([]Latency, error) {
rawLatencies, err := Values(result, err)
if err != nil {
return nil, err
}
latencies := make([]Latency, len(rawLatencies))
for i, e := range rawLatencies {
rawLatency, ok := e.([]interface{})
if !ok {
return nil, fmt.Errorf("redigo: latencies element is not slice, got %T", e)
}
var event Latency
if len(rawLatency) != 4 {
return nil, fmt.Errorf("redigo: latencies element has %d elements, expected 4", len(rawLatency))
}
event.Name, err = String(rawLatency[0], nil)
if err != nil {
return nil, fmt.Errorf("redigo: latencies element[0] is not a string: %w", err)
}
timestamp, ok := rawLatency[1].(int64)
if !ok {
return nil, fmt.Errorf("redigo: latencies element[1] not an int64, got %T", rawLatency[1])
}
event.Time = time.Unix(timestamp, 0)
latestDuration, ok := rawLatency[2].(int64)
if !ok {
return nil, fmt.Errorf("redigo: latencies element[2] not an int64, got %T", rawLatency[2])
}
event.Latest = time.Duration(latestDuration) * time.Millisecond
maxDuration, ok := rawLatency[3].(int64)
if !ok {
return nil, fmt.Errorf("redigo: latencies element[3] not an int64, got %T", rawLatency[3])
}
event.Max = time.Duration(maxDuration) * time.Millisecond
latencies[i] = event
}
return latencies, nil
}
// LatencyHistories is a helper that parses the LATENCY HISTORY command output and
// returns a LatencyHistory slice.
func LatencyHistories(result interface{}, err error) ([]LatencyHistory, error) {
rawLogs, err := Values(result, err)
if err != nil {
return nil, err
}
latencyHistories := make([]LatencyHistory, len(rawLogs))
for i, e := range rawLogs {
rawLog, ok := e.([]interface{})
if !ok {
return nil, fmt.Errorf("redigo: latency history element is not an slice, got %T", e)
}
var event LatencyHistory
timestamp, ok := rawLog[0].(int64)
if !ok {
return nil, fmt.Errorf("redigo: latency history element[0] not an int64, got %T", rawLog[0])
}
event.Time = time.Unix(timestamp, 0)
duration, ok := rawLog[1].(int64)
if !ok {
return nil, fmt.Errorf("redigo: latency history element[1] not an int64, got %T", rawLog[1])
}
event.ExecutionTime = time.Duration(duration) * time.Millisecond
latencyHistories[i] = event
}
return latencyHistories, nil
}
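// An illustrative sketch, not part of the upstream redigo sources, showing the
// reply helpers wrapping command calls. The key names below are assumptions
// for the example.
//
//	title, err := String(c.Do("GET", "title"))
//	if err != nil {
//		// handle error; ErrNil is returned when the key does not exist.
//	}
//	fields, err := StringMap(c.Do("HGETALL", "profile:1"))
//	if err != nil {
//		// handle error
//	}
//	_, _ = title, fields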

View File

@ -0,0 +1,716 @@
// Copyright 2012 Gary Burd
//
// Licensed under the Apache License, Version 2.0 (the "License"): you may
// not use this file except in compliance with the License. You may obtain
// a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
// License for the specific language governing permissions and limitations
// under the License.
package redis
import (
"errors"
"fmt"
"reflect"
"strconv"
"strings"
"sync"
)
var (
scannerType = reflect.TypeOf((*Scanner)(nil)).Elem()
)
func ensureLen(d reflect.Value, n int) {
if n > d.Cap() {
d.Set(reflect.MakeSlice(d.Type(), n, n))
} else {
d.SetLen(n)
}
}
func cannotConvert(d reflect.Value, s interface{}) error {
var sname string
switch s.(type) {
case string:
sname = "Redis simple string"
case Error:
sname = "Redis error"
case int64:
sname = "Redis integer"
case []byte:
sname = "Redis bulk string"
case []interface{}:
sname = "Redis array"
case nil:
sname = "Redis nil"
default:
sname = reflect.TypeOf(s).String()
}
return fmt.Errorf("cannot convert from %s to %s", sname, d.Type())
}
func convertAssignNil(d reflect.Value) (err error) {
switch d.Type().Kind() {
case reflect.Slice, reflect.Interface:
d.Set(reflect.Zero(d.Type()))
default:
err = cannotConvert(d, nil)
}
return err
}
func convertAssignError(d reflect.Value, s Error) (err error) {
if d.Kind() == reflect.String {
d.SetString(string(s))
} else if d.Kind() == reflect.Slice && d.Type().Elem().Kind() == reflect.Uint8 {
d.SetBytes([]byte(s))
} else {
err = cannotConvert(d, s)
}
return
}
func convertAssignString(d reflect.Value, s string) (err error) {
switch d.Type().Kind() {
case reflect.Float32, reflect.Float64:
var x float64
x, err = strconv.ParseFloat(s, d.Type().Bits())
d.SetFloat(x)
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
var x int64
x, err = strconv.ParseInt(s, 10, d.Type().Bits())
d.SetInt(x)
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
var x uint64
x, err = strconv.ParseUint(s, 10, d.Type().Bits())
d.SetUint(x)
case reflect.Bool:
var x bool
x, err = strconv.ParseBool(s)
d.SetBool(x)
case reflect.String:
d.SetString(s)
case reflect.Slice:
if d.Type().Elem().Kind() == reflect.Uint8 {
d.SetBytes([]byte(s))
} else {
err = cannotConvert(d, s)
}
case reflect.Ptr:
err = convertAssignString(d.Elem(), s)
default:
err = cannotConvert(d, s)
}
return
}
func convertAssignBulkString(d reflect.Value, s []byte) (err error) {
switch d.Type().Kind() {
case reflect.Slice:
// Handle []byte destination here to avoid unnecessary
// []byte -> string -> []byte conversion.
if d.Type().Elem().Kind() == reflect.Uint8 {
d.SetBytes(s)
} else {
err = cannotConvert(d, s)
}
case reflect.Ptr:
if d.CanInterface() && d.CanSet() {
if s == nil {
if d.IsNil() {
return nil
}
d.Set(reflect.Zero(d.Type()))
return nil
}
if d.IsNil() {
d.Set(reflect.New(d.Type().Elem()))
}
if sc, ok := d.Interface().(Scanner); ok {
return sc.RedisScan(s)
}
}
err = convertAssignString(d, string(s))
default:
err = convertAssignString(d, string(s))
}
return err
}
func convertAssignInt(d reflect.Value, s int64) (err error) {
switch d.Type().Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
d.SetInt(s)
if d.Int() != s {
err = strconv.ErrRange
d.SetInt(0)
}
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
if s < 0 {
err = strconv.ErrRange
} else {
x := uint64(s)
d.SetUint(x)
if d.Uint() != x {
err = strconv.ErrRange
d.SetUint(0)
}
}
case reflect.Bool:
d.SetBool(s != 0)
default:
err = cannotConvert(d, s)
}
return
}
func convertAssignValue(d reflect.Value, s interface{}) (err error) {
if d.Kind() != reflect.Ptr {
if d.CanAddr() {
d2 := d.Addr()
if d2.CanInterface() {
if scanner, ok := d2.Interface().(Scanner); ok {
return scanner.RedisScan(s)
}
}
}
} else if d.CanInterface() {
// Already a reflect.Ptr
if d.IsNil() {
d.Set(reflect.New(d.Type().Elem()))
}
if scanner, ok := d.Interface().(Scanner); ok {
return scanner.RedisScan(s)
}
}
switch s := s.(type) {
case nil:
err = convertAssignNil(d)
case []byte:
err = convertAssignBulkString(d, s)
case int64:
err = convertAssignInt(d, s)
case string:
err = convertAssignString(d, s)
case Error:
err = convertAssignError(d, s)
default:
err = cannotConvert(d, s)
}
return err
}
func convertAssignArray(d reflect.Value, s []interface{}) error {
if d.Type().Kind() != reflect.Slice {
return cannotConvert(d, s)
}
ensureLen(d, len(s))
for i := 0; i < len(s); i++ {
if err := convertAssignValue(d.Index(i), s[i]); err != nil {
return err
}
}
return nil
}
func convertAssign(d interface{}, s interface{}) (err error) {
if scanner, ok := d.(Scanner); ok {
return scanner.RedisScan(s)
}
// Handle the most common destination types using type switches and
// fall back to reflection for all other types.
switch s := s.(type) {
case nil:
// ignore
case []byte:
switch d := d.(type) {
case *string:
*d = string(s)
case *int:
*d, err = strconv.Atoi(string(s))
case *bool:
*d, err = strconv.ParseBool(string(s))
case *[]byte:
*d = s
case *interface{}:
*d = s
case nil:
// skip value
default:
if d := reflect.ValueOf(d); d.Type().Kind() != reflect.Ptr {
err = cannotConvert(d, s)
} else {
err = convertAssignBulkString(d.Elem(), s)
}
}
case int64:
switch d := d.(type) {
case *int:
x := int(s)
if int64(x) != s {
err = strconv.ErrRange
x = 0
}
*d = x
case *bool:
*d = s != 0
case *interface{}:
*d = s
case nil:
// skip value
default:
if d := reflect.ValueOf(d); d.Type().Kind() != reflect.Ptr {
err = cannotConvert(d, s)
} else {
err = convertAssignInt(d.Elem(), s)
}
}
case string:
switch d := d.(type) {
case *string:
*d = s
case *interface{}:
*d = s
case nil:
// skip value
default:
err = cannotConvert(reflect.ValueOf(d), s)
}
case []interface{}:
switch d := d.(type) {
case *[]interface{}:
*d = s
case *interface{}:
*d = s
case nil:
// skip value
default:
if d := reflect.ValueOf(d); d.Type().Kind() != reflect.Ptr {
err = cannotConvert(d, s)
} else {
err = convertAssignArray(d.Elem(), s)
}
}
case Error:
err = s
default:
err = cannotConvert(reflect.ValueOf(d), s)
}
return
}
// Scan copies from src to the values pointed at by dest.
//
// Scan uses RedisScan if available otherwise:
//
// The values pointed at by dest must be an integer, float, boolean, string,
// []byte, interface{} or slices of these types. Scan uses the standard strconv
// package to convert bulk strings to numeric and boolean types.
//
// If a dest value is nil, then the corresponding src value is skipped.
//
// If a src element is nil, then the corresponding dest value is not modified.
//
// To enable easy use of Scan in a loop, Scan returns the slice of src
// following the copied values.
func Scan(src []interface{}, dest ...interface{}) ([]interface{}, error) {
if len(src) < len(dest) {
return nil, errors.New("redigo.Scan: array short")
}
var err error
for i, d := range dest {
err = convertAssign(d, src[i])
if err != nil {
err = fmt.Errorf("redigo.Scan: cannot assign to dest %d: %v", i, err)
break
}
}
return src[len(dest):], err
}
type fieldSpec struct {
name string
index []int
omitEmpty bool
}
type structSpec struct {
m map[string]*fieldSpec
l []*fieldSpec
}
func (ss *structSpec) fieldSpec(name []byte) *fieldSpec {
return ss.m[string(name)]
}
func compileStructSpec(t reflect.Type, depth map[string]int, index []int, ss *structSpec, seen map[reflect.Type]struct{}) error {
if _, ok := seen[t]; ok {
// Protect against infinite recursion.
return fmt.Errorf("recursive struct definition for %v", t)
}
seen[t] = struct{}{}
LOOP:
for i := 0; i < t.NumField(); i++ {
f := t.Field(i)
switch {
case f.PkgPath != "" && !f.Anonymous:
// Ignore unexported fields.
case f.Anonymous:
switch f.Type.Kind() {
case reflect.Struct:
if err := compileStructSpec(f.Type, depth, append(index, i), ss, seen); err != nil {
return err
}
case reflect.Ptr:
if f.Type.Elem().Kind() == reflect.Struct {
if err := compileStructSpec(f.Type.Elem(), depth, append(index, i), ss, seen); err != nil {
return err
}
}
}
default:
fs := &fieldSpec{name: f.Name}
tag := f.Tag.Get("redis")
var p string
first := true
for len(tag) > 0 {
i := strings.IndexByte(tag, ',')
if i < 0 {
p, tag = tag, ""
} else {
p, tag = tag[:i], tag[i+1:]
}
if p == "-" {
continue LOOP
}
if first && len(p) > 0 {
fs.name = p
first = false
} else {
switch p {
case "omitempty":
fs.omitEmpty = true
default:
panic(fmt.Errorf("redigo: unknown field tag %s for type %s", p, t.Name()))
}
}
}
d, found := depth[fs.name]
if !found {
d = 1 << 30
}
switch {
case len(index) == d:
// At same depth, remove from result.
delete(ss.m, fs.name)
j := 0
for i := 0; i < len(ss.l); i++ {
if fs.name != ss.l[i].name {
ss.l[j] = ss.l[i]
j += 1
}
}
ss.l = ss.l[:j]
case len(index) < d:
fs.index = make([]int, len(index)+1)
copy(fs.index, index)
fs.index[len(index)] = i
depth[fs.name] = len(index)
ss.m[fs.name] = fs
ss.l = append(ss.l, fs)
}
}
}
return nil
}
var (
structSpecMutex sync.RWMutex
structSpecCache = make(map[reflect.Type]*structSpec)
)
func structSpecForType(t reflect.Type) (*structSpec, error) {
structSpecMutex.RLock()
ss, found := structSpecCache[t]
structSpecMutex.RUnlock()
if found {
return ss, nil
}
structSpecMutex.Lock()
defer structSpecMutex.Unlock()
ss, found = structSpecCache[t]
if found {
return ss, nil
}
ss = &structSpec{m: make(map[string]*fieldSpec)}
if err := compileStructSpec(t, make(map[string]int), nil, ss, make(map[reflect.Type]struct{})); err != nil {
return nil, fmt.Errorf("compile struct: %s: %w", t, err)
}
structSpecCache[t] = ss
return ss, nil
}
var errScanStructValue = errors.New("redigo.ScanStruct: value must be non-nil pointer to a struct")
// ScanStruct scans alternating names and values from src to a struct. The
// HGETALL and CONFIG GET commands return replies in this format.
//
// ScanStruct uses exported field names to match values in the response. Use
// 'redis' field tag to override the name:
//
// Field int `redis:"myName"`
//
// Fields with the tag redis:"-" are ignored.
//
// Each field uses RedisScan if available otherwise:
// Integer, float, boolean, string and []byte fields are supported. Scan uses the
// standard strconv package to convert bulk string values to numeric and
// boolean types.
//
// If a src element is nil, then the corresponding field is not modified.
func ScanStruct(src []interface{}, dest interface{}) error {
d := reflect.ValueOf(dest)
if d.Kind() != reflect.Ptr || d.IsNil() {
return errScanStructValue
}
d = d.Elem()
if d.Kind() != reflect.Struct {
return errScanStructValue
}
if len(src)%2 != 0 {
return errors.New("redigo.ScanStruct: number of values not a multiple of 2")
}
ss, err := structSpecForType(d.Type())
if err != nil {
return fmt.Errorf("redigo.ScanStruct: %w", err)
}
for i := 0; i < len(src); i += 2 {
s := src[i+1]
if s == nil {
continue
}
name, ok := src[i].([]byte)
if !ok {
return fmt.Errorf("redigo.ScanStruct: key %d not a bulk string value", i)
}
fs := ss.fieldSpec(name)
if fs == nil {
continue
}
if err := convertAssignValue(fieldByIndexCreate(d, fs.index), s); err != nil {
return fmt.Errorf("redigo.ScanStruct: cannot assign field %s: %v", fs.name, err)
}
}
return nil
}
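A short sketch of the HGETALL-to-struct round trip described in the comment above; the key name, the `entry` type, and the import path are illustrative assumptions:
```go
package main

import (
	"fmt"
	"log"

	"github.com/gomodule/redigo/redis" // assumed import path
)

// entry is an illustrative type; the field tags follow the 'redis' tag rules above.
type entry struct {
	Title string `redis:"title"`
	Score int    `redis:"score"`
}

func main() {
	c, err := redis.Dial("tcp", "localhost:6379") // assumes a local Redis server
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close()

	if _, err := c.Do("HSET", "entry:1", "title", "hello", "score", 42); err != nil {
		log.Fatal(err)
	}

	// HGETALL replies with alternating field names and values,
	// which is exactly the shape ScanStruct expects.
	values, err := redis.Values(c.Do("HGETALL", "entry:1"))
	if err != nil {
		log.Fatal(err)
	}

	var e entry
	if err := redis.ScanStruct(values, &e); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%+v\n", e) // {Title:hello Score:42}
}
```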
var (
errScanSliceValue = errors.New("redigo.ScanSlice: dest must be non-nil pointer to a slice")
)
// ScanSlice scans src to the slice pointed to by dest.
//
// If the target is a slice of types which implement Scanner then the custom
// RedisScan method is used otherwise the following rules apply:
//
// The elements in the dest slice must be integer, float, boolean, string, struct
// or pointer to struct values.
//
// Struct fields must be integer, float, boolean or string values. All struct
// fields are used unless a subset is specified using fieldNames.
func ScanSlice(src []interface{}, dest interface{}, fieldNames ...string) error {
d := reflect.ValueOf(dest)
if d.Kind() != reflect.Ptr || d.IsNil() {
return errScanSliceValue
}
d = d.Elem()
if d.Kind() != reflect.Slice {
return errScanSliceValue
}
isPtr := false
t := d.Type().Elem()
st := t
if t.Kind() == reflect.Ptr && t.Elem().Kind() == reflect.Struct {
isPtr = true
t = t.Elem()
}
if t.Kind() != reflect.Struct || st.Implements(scannerType) {
ensureLen(d, len(src))
for i, s := range src {
if s == nil {
continue
}
if err := convertAssignValue(d.Index(i), s); err != nil {
return fmt.Errorf("redigo.ScanSlice: cannot assign element %d: %v", i, err)
}
}
return nil
}
ss, err := structSpecForType(t)
if err != nil {
return fmt.Errorf("redigo.ScanSlice: %w", err)
}
fss := ss.l
if len(fieldNames) > 0 {
fss = make([]*fieldSpec, len(fieldNames))
for i, name := range fieldNames {
fss[i] = ss.m[name]
if fss[i] == nil {
return fmt.Errorf("redigo.ScanSlice: ScanSlice bad field name %s", name)
}
}
}
if len(fss) == 0 {
return errors.New("redigo.ScanSlice: no struct fields")
}
n := len(src) / len(fss)
if n*len(fss) != len(src) {
return errors.New("redigo.ScanSlice: length not a multiple of struct field count")
}
ensureLen(d, n)
for i := 0; i < n; i++ {
d := d.Index(i)
if isPtr {
if d.IsNil() {
d.Set(reflect.New(t))
}
d = d.Elem()
}
for j, fs := range fss {
s := src[i*len(fss)+j]
if s == nil {
continue
}
if err := convertAssignValue(d.FieldByIndex(fs.index), s); err != nil {
return fmt.Errorf("redigo.ScanSlice: cannot assign element %d to field %s: %v", i*len(fss)+j, fs.name, err)
}
}
}
return nil
}
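One way to produce the flat reply that `ScanSlice` consumes is `SORT ... GET` against hash fields; the keys, patterns, and `row` type below are illustrative assumptions:
```go
package main

import (
	"fmt"
	"log"

	"github.com/gomodule/redigo/redis" // assumed import path
)

// row is an illustrative type; fields are filled in declaration order.
type row struct {
	Title string
	Score int
}

func main() {
	c, err := redis.Dial("tcp", "localhost:6379") // assumes a local Redis server
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close()

	// Seed two hashes and an index list (errors elided for brevity).
	c.Do("HSET", "row:1", "title", "first", "score", 10)
	c.Do("HSET", "row:2", "title", "second", "score", 20)
	c.Do("DEL", "rows")
	c.Do("RPUSH", "rows", 1, 2)

	// SORT ... GET returns a flat list: title, score, title, score, ...
	values, err := redis.Values(c.Do("SORT", "rows",
		"BY", "nosort",
		"GET", "row:*->title",
		"GET", "row:*->score"))
	if err != nil {
		log.Fatal(err)
	}

	var rows []row
	// Consecutive pairs of reply values are assigned to Title and Score.
	if err := redis.ScanSlice(values, &rows); err != nil {
		log.Fatal(err)
	}
	fmt.Println(rows) // [{first 10} {second 20}]
}
```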
// Args is a helper for constructing command arguments from structured values.
type Args []interface{}
// Add returns the result of appending value to args.
func (args Args) Add(value ...interface{}) Args {
return append(args, value...)
}
// AddFlat returns the result of appending the flattened value of v to args.
//
// Maps are flattened by appending the alternating keys and map values to args.
//
// Slices are flattened by appending the slice elements to args.
//
// Structs are flattened by appending the alternating names and values of
// exported fields to args. If v is a nil struct pointer, then nothing is
// appended. The 'redis' field tag overrides struct field names. See ScanStruct
// for more information on the use of the 'redis' field tag.
//
// Other types are appended to args as is.
// AddFlat panics if v includes a recursive anonymous struct.
func (args Args) AddFlat(v interface{}) Args {
rv := reflect.ValueOf(v)
switch rv.Kind() {
case reflect.Struct:
args = flattenStruct(args, rv)
case reflect.Slice:
for i := 0; i < rv.Len(); i++ {
args = append(args, rv.Index(i).Interface())
}
case reflect.Map:
for _, k := range rv.MapKeys() {
args = append(args, k.Interface(), rv.MapIndex(k).Interface())
}
case reflect.Ptr:
if rv.Type().Elem().Kind() == reflect.Struct {
if !rv.IsNil() {
args = flattenStruct(args, rv.Elem())
}
} else {
args = append(args, v)
}
default:
args = append(args, v)
}
return args
}
func flattenStruct(args Args, v reflect.Value) Args {
ss, err := structSpecForType(v.Type())
if err != nil {
panic(fmt.Errorf("redigo.AddFlat: %w", err))
}
for _, fs := range ss.l {
fv, err := fieldByIndexErr(v, fs.index)
if err != nil {
// Skip fields reached through a nil embedded struct pointer.
continue
}
if fs.omitEmpty {
var empty = false
switch fv.Kind() {
case reflect.Array, reflect.Map, reflect.Slice, reflect.String:
empty = fv.Len() == 0
case reflect.Bool:
empty = !fv.Bool()
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
empty = fv.Int() == 0
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
empty = fv.Uint() == 0
case reflect.Float32, reflect.Float64:
empty = fv.Float() == 0
case reflect.Interface, reflect.Ptr:
empty = fv.IsNil()
}
if empty {
continue
}
}
if arg, ok := fv.Interface().(Argument); ok {
args = append(args, fs.name, arg.RedisArg())
} else if fv.Kind() == reflect.Ptr {
if !fv.IsNil() {
args = append(args, fs.name, fv.Elem().Interface())
}
} else {
args = append(args, fs.name, fv.Interface())
}
}
return args
}
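A small sketch of `Args.AddFlat` feeding a struct into HSET; the key, the `entry` type, and the import path are illustrative assumptions:
```go
package main

import (
	"log"

	"github.com/gomodule/redigo/redis" // assumed import path
)

// entry is an illustrative type; Note is dropped when empty because of omitempty.
type entry struct {
	Title string `redis:"title"`
	Score int    `redis:"score"`
	Note  string `redis:"note,omitempty"`
}

func main() {
	c, err := redis.Dial("tcp", "localhost:6379") // assumes a local Redis server
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close()

	e := entry{Title: "hello", Score: 42}
	// AddFlat flattens the struct to: "title" "hello" "score" 42,
	// so the full command is HSET entry:1 title hello score 42.
	if _, err := c.Do("HSET", redis.Args{}.Add("entry:1").AddFlat(&e)...); err != nil {
		log.Fatal(err)
	}
}
```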

View File

@ -0,0 +1,104 @@
// Copyright 2012 Gary Burd
//
// Licensed under the Apache License, Version 2.0 (the "License"): you may
// not use this file except in compliance with the License. You may obtain
// a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
// License for the specific language governing permissions and limitations
// under the License.
package redis
import (
"context"
"crypto/sha1"
"encoding/hex"
"io"
"strings"
)
// Script encapsulates the source, hash and key count for a Lua script. See
// http://redis.io/commands/eval for information on scripts in Redis.
type Script struct {
keyCount int
src string
hash string
}
// NewScript returns a new script object. If keyCount is greater than or equal
// to zero, then the count is automatically inserted in the EVAL command
// argument list. If keyCount is less than zero, then the application supplies
// the count as the first value in the keysAndArgs argument to the Do, Send and
// SendHash methods.
func NewScript(keyCount int, src string) *Script {
h := sha1.New()
io.WriteString(h, src) // nolint: errcheck
return &Script{keyCount, src, hex.EncodeToString(h.Sum(nil))}
}
func (s *Script) args(spec string, keysAndArgs []interface{}) []interface{} {
var args []interface{}
if s.keyCount < 0 {
args = make([]interface{}, 1+len(keysAndArgs))
args[0] = spec
copy(args[1:], keysAndArgs)
} else {
args = make([]interface{}, 2+len(keysAndArgs))
args[0] = spec
args[1] = s.keyCount
copy(args[2:], keysAndArgs)
}
return args
}
// Hash returns the script hash.
func (s *Script) Hash() string {
return s.hash
}
// DoContext evaluates the script like Do, but with a context. It first tries
// EVALSHA and falls back to EVAL if the script is not loaded. The connection
// must implement ConnWithContext.
func (s *Script) DoContext(ctx context.Context, c Conn, keysAndArgs ...interface{}) (interface{}, error) {
cwt, ok := c.(ConnWithContext)
if !ok {
return nil, errContextNotSupported
}
v, err := cwt.DoContext(ctx, "EVALSHA", s.args(s.hash, keysAndArgs)...)
if e, ok := err.(Error); ok && strings.HasPrefix(string(e), "NOSCRIPT ") {
v, err = cwt.DoContext(ctx, "EVAL", s.args(s.src, keysAndArgs)...)
}
return v, err
}
// Do evaluates the script. Under the covers, Do optimistically evaluates the
// script using the EVALSHA command. If the command fails because the script is
// not loaded, then Do evaluates the script using the EVAL command (thus
// causing the script to load).
func (s *Script) Do(c Conn, keysAndArgs ...interface{}) (interface{}, error) {
v, err := c.Do("EVALSHA", s.args(s.hash, keysAndArgs)...)
if err != nil && strings.HasPrefix(err.Error(), "NOSCRIPT ") {
v, err = c.Do("EVAL", s.args(s.src, keysAndArgs)...)
}
return v, err
}
// SendHash evaluates the script without waiting for the reply. The script is
// evaluated with the EVALSHA command. The application must ensure that the
// script is loaded by a previous call to Send, Do or Load methods.
func (s *Script) SendHash(c Conn, keysAndArgs ...interface{}) error {
return c.Send("EVALSHA", s.args(s.hash, keysAndArgs)...)
}
// Send evaluates the script without waiting for the reply.
func (s *Script) Send(c Conn, keysAndArgs ...interface{}) error {
return c.Send("EVAL", s.args(s.src, keysAndArgs)...)
}
// Load loads the script without evaluating it.
func (s *Script) Load(c Conn) error {
_, err := c.Do("SCRIPT", "LOAD", s.src)
return err
}
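A hedged sketch of the Script lifecycle (EVALSHA with an EVAL fallback) as driven by `Do`; the key names, the Lua body, and the import path are illustrative assumptions:
```go
package main

import (
	"fmt"
	"log"

	"github.com/gomodule/redigo/redis" // assumed import path
)

// setIfMatch sets KEYS[1] to ARGV[2] only if its current value equals ARGV[1].
var setIfMatch = redis.NewScript(1, `
if redis.call("GET", KEYS[1]) == ARGV[1] then
  return redis.call("SET", KEYS[1], ARGV[2])
else
  return nil
end`)

func main() {
	c, err := redis.Dial("tcp", "localhost:6379") // assumes a local Redis server
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close()

	if _, err := c.Do("SET", "color", "blue"); err != nil {
		log.Fatal(err)
	}

	// Do tries EVALSHA first and falls back to EVAL (loading the script) on NOSCRIPT.
	reply, err := setIfMatch.Do(c, "color", "blue", "green")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(reply) // "OK" on the first run; <nil> once the value no longer matches
}
```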

View File

@ -0,0 +1,20 @@
; https://editorconfig.org/
root = true
[*]
insert_final_newline = true
charset = utf-8
trim_trailing_whitespace = true
indent_style = space
indent_size = 2
[{Makefile,go.mod,go.sum,*.go,.gitmodules}]
indent_style = tab
indent_size = 4
[*.md]
indent_size = 4
trim_trailing_whitespace = false
eclint_indent_style = unset

View File

@ -0,0 +1 @@
coverage.coverprofile

27
examples/web/vendor/github.com/gorilla/mux/LICENSE generated vendored Normal file
View File

@ -0,0 +1,27 @@
Copyright (c) 2023 The Gorilla Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

34
examples/web/vendor/github.com/gorilla/mux/Makefile generated vendored Normal file
View File

@ -0,0 +1,34 @@
GO_LINT=$(shell which golangci-lint 2> /dev/null || echo '')
GO_LINT_URI=github.com/golangci/golangci-lint/cmd/golangci-lint@latest
GO_SEC=$(shell which gosec 2> /dev/null || echo '')
GO_SEC_URI=github.com/securego/gosec/v2/cmd/gosec@latest
GO_VULNCHECK=$(shell which govulncheck 2> /dev/null || echo '')
GO_VULNCHECK_URI=golang.org/x/vuln/cmd/govulncheck@latest
.PHONY: golangci-lint
golangci-lint:
$(if $(GO_LINT), ,go install $(GO_LINT_URI))
@echo "##### Running golangci-lint"
golangci-lint run -v
.PHONY: gosec
gosec:
$(if $(GO_SEC), ,go install $(GO_SEC_URI))
@echo "##### Running gosec"
gosec ./...
.PHONY: govulncheck
govulncheck:
$(if $(GO_VULNCHECK), ,go install $(GO_VULNCHECK_URI))
@echo "##### Running govulncheck"
govulncheck ./...
.PHONY: verify
verify: golangci-lint gosec govulncheck
.PHONY: test
test:
@echo "##### Running tests"
go test -race -cover -coverprofile=coverage.coverprofile -covermode=atomic -v ./...

812
examples/web/vendor/github.com/gorilla/mux/README.md generated vendored Normal file
View File

@ -0,0 +1,812 @@
# gorilla/mux
![testing](https://github.com/gorilla/mux/actions/workflows/test.yml/badge.svg)
[![codecov](https://codecov.io/github/gorilla/mux/branch/main/graph/badge.svg)](https://codecov.io/github/gorilla/mux)
[![godoc](https://godoc.org/github.com/gorilla/mux?status.svg)](https://godoc.org/github.com/gorilla/mux)
[![sourcegraph](https://sourcegraph.com/github.com/gorilla/mux/-/badge.svg)](https://sourcegraph.com/github.com/gorilla/mux?badge)
![Gorilla Logo](https://github.com/gorilla/.github/assets/53367916/d92caabf-98e0-473e-bfbf-ab554ba435e5)
Package `gorilla/mux` implements a request router and dispatcher for matching incoming requests to
their respective handler.
The name mux stands for "HTTP request multiplexer". Like the standard `http.ServeMux`, `mux.Router` matches incoming requests against a list of registered routes and calls a handler for the route that matches the URL or other conditions. The main features are:
* It implements the `http.Handler` interface so it is compatible with the standard `http.ServeMux`.
* Requests can be matched based on URL host, path, path prefix, schemes, header and query values, HTTP methods or using custom matchers.
* URL hosts, paths and query values can have variables with an optional regular expression.
* Registered URLs can be built, or "reversed", which helps maintaining references to resources.
* Routes can be used as subrouters: nested routes are only tested if the parent route matches. This is useful to define groups of routes that share common conditions like a host, a path prefix or other repeated attributes. As a bonus, this optimizes request matching.
---
* [Install](#install)
* [Examples](#examples)
* [Matching Routes](#matching-routes)
* [Static Files](#static-files)
* [Serving Single Page Applications](#serving-single-page-applications) (e.g. React, Vue, Ember.js, etc.)
* [Registered URLs](#registered-urls)
* [Walking Routes](#walking-routes)
* [Graceful Shutdown](#graceful-shutdown)
* [Middleware](#middleware)
* [Handling CORS Requests](#handling-cors-requests)
* [Testing Handlers](#testing-handlers)
* [Full Example](#full-example)
---
## Install
With a [correctly configured](https://golang.org/doc/install#testing) Go toolchain:
```sh
go get -u github.com/gorilla/mux
```
## Examples
Let's start registering a couple of URL paths and handlers:
```go
func main() {
r := mux.NewRouter()
r.HandleFunc("/", HomeHandler)
r.HandleFunc("/products", ProductsHandler)
r.HandleFunc("/articles", ArticlesHandler)
http.Handle("/", r)
}
```
Here we register three routes mapping URL paths to handlers. This is equivalent to how `http.HandleFunc()` works: if an incoming request URL matches one of the paths, the corresponding handler is called passing (`http.ResponseWriter`, `*http.Request`) as parameters.
Paths can have variables. They are defined using the format `{name}` or `{name:pattern}`. If a regular expression pattern is not defined, the matched variable will be anything until the next slash. For example:
```go
r := mux.NewRouter()
r.HandleFunc("/products/{key}", ProductHandler)
r.HandleFunc("/articles/{category}/", ArticlesCategoryHandler)
r.HandleFunc("/articles/{category}/{id:[0-9]+}", ArticleHandler)
```
The names are used to create a map of route variables which can be retrieved calling `mux.Vars()`:
```go
func ArticlesCategoryHandler(w http.ResponseWriter, r *http.Request) {
vars := mux.Vars(r)
w.WriteHeader(http.StatusOK)
fmt.Fprintf(w, "Category: %v\n", vars["category"])
}
```
And this is all you need to know about the basic usage. More advanced options are explained below.
### Matching Routes
Routes can also be restricted to a domain or subdomain. Just define a host pattern to be matched. They can also have variables:
```go
r := mux.NewRouter()
// Only matches if domain is "www.example.com".
r.Host("www.example.com")
// Matches a dynamic subdomain.
r.Host("{subdomain:[a-z]+}.example.com")
```
There are several other matchers that can be added. To match path prefixes:
```go
r.PathPrefix("/products/")
```
...or HTTP methods:
```go
r.Methods("GET", "POST")
```
...or URL schemes:
```go
r.Schemes("https")
```
...or header values:
```go
r.Headers("X-Requested-With", "XMLHttpRequest")
```
...or query values:
```go
r.Queries("key", "value")
```
...or to use a custom matcher function:
```go
r.MatcherFunc(func(r *http.Request, rm *RouteMatch) bool {
return r.ProtoMajor == 0
})
```
...and finally, it is possible to combine several matchers in a single route:
```go
r.HandleFunc("/products", ProductsHandler).
Host("www.example.com").
Methods("GET").
Schemes("http")
```
Routes are tested in the order they were added to the router. If two routes match, the first one wins:
```go
r := mux.NewRouter()
r.HandleFunc("/specific", specificHandler)
r.PathPrefix("/").Handler(catchAllHandler)
```
Setting the same matching conditions again and again can be boring, so we have a way to group several routes that share the same requirements. We call it "subrouting".
For example, let's say we have several URLs that should only match when the host is `www.example.com`. Create a route for that host and get a "subrouter" from it:
```go
r := mux.NewRouter()
s := r.Host("www.example.com").Subrouter()
```
Then register routes in the subrouter:
```go
s.HandleFunc("/products/", ProductsHandler)
s.HandleFunc("/products/{key}", ProductHandler)
s.HandleFunc("/articles/{category}/{id:[0-9]+}", ArticleHandler)
```
The three URL paths we registered above will only be tested if the domain is `www.example.com`, because the subrouter is tested first. This is not only convenient, but also optimizes request matching. You can create subrouters combining any attribute matchers accepted by a route.
Subrouters can be used to create domain or path "namespaces": you define subrouters in a central place and then parts of the app can register its paths relatively to a given subrouter.
There's one more thing about subroutes. When a subrouter has a path prefix, the inner routes use it as base for their paths:
```go
r := mux.NewRouter()
s := r.PathPrefix("/products").Subrouter()
// "/products/"
s.HandleFunc("/", ProductsHandler)
// "/products/{key}/"
s.HandleFunc("/{key}/", ProductHandler)
// "/products/{key}/details"
s.HandleFunc("/{key}/details", ProductDetailsHandler)
```
### Static Files
Note that the path provided to `PathPrefix()` represents a "wildcard": calling
`PathPrefix("/static/").Handler(...)` means that the handler will be passed any
request that matches "/static/\*". This makes it easy to serve static files with mux:
```go
func main() {
var dir string
flag.StringVar(&dir, "dir", ".", "the directory to serve files from. Defaults to the current dir")
flag.Parse()
r := mux.NewRouter()
// This will serve files under http://localhost:8000/static/<filename>
r.PathPrefix("/static/").Handler(http.StripPrefix("/static/", http.FileServer(http.Dir(dir))))
srv := &http.Server{
Handler: r,
Addr: "127.0.0.1:8000",
// Good practice: enforce timeouts for servers you create!
WriteTimeout: 15 * time.Second,
ReadTimeout: 15 * time.Second,
}
log.Fatal(srv.ListenAndServe())
}
```
### Serving Single Page Applications
Most of the time it makes sense to serve your SPA on a separate web server from your API,
but sometimes it's desirable to serve them both from one place. It's possible to write a simple
handler for serving your SPA (for use with React Router's [BrowserRouter](https://reacttraining.com/react-router/web/api/BrowserRouter) for example), and leverage
mux's powerful routing for your API endpoints.
```go
package main
import (
"encoding/json"
"log"
"net/http"
"os"
"path/filepath"
"time"
"github.com/gorilla/mux"
)
// spaHandler implements the http.Handler interface, so we can use it
// to respond to HTTP requests. The path to the static directory and
// path to the index file within that static directory are used to
// serve the SPA in the given static directory.
type spaHandler struct {
staticPath string
indexPath string
}
// ServeHTTP inspects the URL path to locate a file within the static dir
// on the SPA handler. If a file is found, it will be served. If not, the
// file located at the index path on the SPA handler will be served. This
// is suitable behavior for serving an SPA (single page application).
func (h spaHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
// Join internally calls path.Clean to prevent directory traversal
path := filepath.Join(h.staticPath, r.URL.Path)
// check whether a file exists or is a directory at the given path
fi, err := os.Stat(path)
if os.IsNotExist(err) {
// file does not exist, serve index.html
http.ServeFile(w, r, filepath.Join(h.staticPath, h.indexPath))
return
}
if err != nil {
// if we got an error (that wasn't that the file doesn't exist) stat-ing the
// file, return a 500 internal server error and stop
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
if fi.IsDir() {
// the path is a directory, serve index.html
http.ServeFile(w, r, filepath.Join(h.staticPath, h.indexPath))
return
}
// otherwise, use http.FileServer to serve the static file
http.FileServer(http.Dir(h.staticPath)).ServeHTTP(w, r)
}
func main() {
router := mux.NewRouter()
router.HandleFunc("/api/health", func(w http.ResponseWriter, r *http.Request) {
// an example API handler
json.NewEncoder(w).Encode(map[string]bool{"ok": true})
})
spa := spaHandler{staticPath: "build", indexPath: "index.html"}
router.PathPrefix("/").Handler(spa)
srv := &http.Server{
Handler: router,
Addr: "127.0.0.1:8000",
// Good practice: enforce timeouts for servers you create!
WriteTimeout: 15 * time.Second,
ReadTimeout: 15 * time.Second,
}
log.Fatal(srv.ListenAndServe())
}
```
### Registered URLs
Now let's see how to build registered URLs.
Routes can be named. All routes that define a name can have their URLs built, or "reversed". We define a name calling `Name()` on a route. For example:
```go
r := mux.NewRouter()
r.HandleFunc("/articles/{category}/{id:[0-9]+}", ArticleHandler).
Name("article")
```
To build a URL, get the route and call the `URL()` method, passing a sequence of key/value pairs for the route variables. For the previous route, we would do:
```go
url, err := r.Get("article").URL("category", "technology", "id", "42")
```
...and the result will be a `url.URL` with the following path:
```
"/articles/technology/42"
```
This also works for host and query value variables:
```go
r := mux.NewRouter()
r.Host("{subdomain}.example.com").
Path("/articles/{category}/{id:[0-9]+}").
Queries("filter", "{filter}").
HandlerFunc(ArticleHandler).
Name("article")
// url.String() will be "http://news.example.com/articles/technology/42?filter=gorilla"
url, err := r.Get("article").URL("subdomain", "news",
"category", "technology",
"id", "42",
"filter", "gorilla")
```
All variables defined in the route are required, and their values must conform to the corresponding patterns. These requirements guarantee that a generated URL will always match a registered route -- the only exception is for explicitly defined "build-only" routes which never match.
Regex support also exists for matching Headers within a route. For example, we could do:
```go
r.HeadersRegexp("Content-Type", "application/(text|json)")
```
...and the route will match both requests with a Content-Type of `application/json` as well as `application/text`
There's also a way to build only the URL host or path for a route: use the methods `URLHost()` or `URLPath()` instead. For the previous route, we would do:
```go
// "http://news.example.com/"
host, err := r.Get("article").URLHost("subdomain", "news")
// "/articles/technology/42"
path, err := r.Get("article").URLPath("category", "technology", "id", "42")
```
And if you use subrouters, host and path defined separately can be built as well:
```go
r := mux.NewRouter()
s := r.Host("{subdomain}.example.com").Subrouter()
s.Path("/articles/{category}/{id:[0-9]+}").
HandlerFunc(ArticleHandler).
Name("article")
// "http://news.example.com/articles/technology/42"
url, err := r.Get("article").URL("subdomain", "news",
"category", "technology",
"id", "42")
```
To find all the required variables for a given route when calling `URL()`, the method `GetVarNames()` is available:
```go
r := mux.NewRouter()
r.Host("{domain}").
Path("/{group}/{item_id}").
Queries("some_data1", "{some_data1}").
Queries("some_data2", "{some_data2}").
Name("article")
// Will print [domain group item_id some_data1 some_data2] <nil>
fmt.Println(r.Get("article").GetVarNames())
```
### Walking Routes
The `Walk` function on `mux.Router` can be used to visit all of the routes that are registered on a router. For example,
the following prints all of the registered routes:
```go
package main
import (
"fmt"
"net/http"
"strings"
"github.com/gorilla/mux"
)
func handler(w http.ResponseWriter, r *http.Request) {
return
}
func main() {
r := mux.NewRouter()
r.HandleFunc("/", handler)
r.HandleFunc("/products", handler).Methods("POST")
r.HandleFunc("/articles", handler).Methods("GET")
r.HandleFunc("/articles/{id}", handler).Methods("GET", "PUT")
r.HandleFunc("/authors", handler).Queries("surname", "{surname}")
err := r.Walk(func(route *mux.Route, router *mux.Router, ancestors []*mux.Route) error {
pathTemplate, err := route.GetPathTemplate()
if err == nil {
fmt.Println("ROUTE:", pathTemplate)
}
pathRegexp, err := route.GetPathRegexp()
if err == nil {
fmt.Println("Path regexp:", pathRegexp)
}
queriesTemplates, err := route.GetQueriesTemplates()
if err == nil {
fmt.Println("Queries templates:", strings.Join(queriesTemplates, ","))
}
queriesRegexps, err := route.GetQueriesRegexp()
if err == nil {
fmt.Println("Queries regexps:", strings.Join(queriesRegexps, ","))
}
methods, err := route.GetMethods()
if err == nil {
fmt.Println("Methods:", strings.Join(methods, ","))
}
fmt.Println()
return nil
})
if err != nil {
fmt.Println(err)
}
http.Handle("/", r)
}
```
### Graceful Shutdown
Go 1.8 introduced the ability to [gracefully shutdown](https://golang.org/doc/go1.8#http_shutdown) a `*http.Server`. Here's how to do that alongside `mux`:
```go
package main
import (
"context"
"flag"
"log"
"net/http"
"os"
"os/signal"
"time"
"github.com/gorilla/mux"
)
func main() {
var wait time.Duration
flag.DurationVar(&wait, "graceful-timeout", time.Second * 15, "the duration for which the server gracefully wait for existing connections to finish - e.g. 15s or 1m")
flag.Parse()
r := mux.NewRouter()
// Add your routes as needed
srv := &http.Server{
Addr: "0.0.0.0:8080",
// Good practice to set timeouts to avoid Slowloris attacks.
WriteTimeout: time.Second * 15,
ReadTimeout: time.Second * 15,
IdleTimeout: time.Second * 60,
Handler: r, // Pass our instance of gorilla/mux in.
}
// Run our server in a goroutine so that it doesn't block.
go func() {
if err := srv.ListenAndServe(); err != nil {
log.Println(err)
}
}()
c := make(chan os.Signal, 1)
// We'll accept graceful shutdowns when quit via SIGINT (Ctrl+C)
// SIGKILL, SIGQUIT or SIGTERM (Ctrl+/) will not be caught.
signal.Notify(c, os.Interrupt)
// Block until we receive our signal.
<-c
// Create a deadline to wait for.
ctx, cancel := context.WithTimeout(context.Background(), wait)
defer cancel()
// Doesn't block if no connections, but will otherwise wait
// until the timeout deadline.
srv.Shutdown(ctx)
// Optionally, you could run srv.Shutdown in a goroutine and block on
// <-ctx.Done() if your application should wait for other services
// to finalize based on context cancellation.
log.Println("shutting down")
os.Exit(0)
}
```
### Middleware
Mux supports the addition of middlewares to a [Router](https://godoc.org/github.com/gorilla/mux#Router), which are executed in the order they are added if a match is found, including its subrouters.
Middlewares are (typically) small pieces of code which take one request, do something with it, and pass it down to another middleware or the final handler. Some common use cases for middleware are request logging, header manipulation, or `ResponseWriter` hijacking.
Mux middlewares are defined using the de facto standard type:
```go
type MiddlewareFunc func(http.Handler) http.Handler
```
Typically, the returned handler is a closure which does something with the http.ResponseWriter and http.Request passed to it, and then calls the handler passed as parameter to the MiddlewareFunc. This takes advantage of closures being able to access variables from the context where they are created, while retaining the signature enforced by the receivers.
A very basic middleware which logs the URI of the request being handled could be written as:
```go
func loggingMiddleware(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// Do stuff here
log.Println(r.RequestURI)
// Call the next handler, which can be another middleware in the chain, or the final handler.
next.ServeHTTP(w, r)
})
}
```
Middlewares can be added to a router using `Router.Use()`:
```go
r := mux.NewRouter()
r.HandleFunc("/", handler)
r.Use(loggingMiddleware)
```
A more complex authentication middleware, which maps session token to users, could be written as:
```go
// Define our struct
type authenticationMiddleware struct {
tokenUsers map[string]string
}
// Initialize it somewhere
func (amw *authenticationMiddleware) Populate() {
amw.tokenUsers["00000000"] = "user0"
amw.tokenUsers["aaaaaaaa"] = "userA"
amw.tokenUsers["05f717e5"] = "randomUser"
amw.tokenUsers["deadbeef"] = "user0"
}
// Middleware function, which will be called for each request
func (amw *authenticationMiddleware) Middleware(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
token := r.Header.Get("X-Session-Token")
if user, found := amw.tokenUsers[token]; found {
// We found the token in our map
log.Printf("Authenticated user %s\n", user)
// Pass down the request to the next middleware (or final handler)
next.ServeHTTP(w, r)
} else {
// Write an error and stop the handler chain
http.Error(w, "Forbidden", http.StatusForbidden)
}
})
}
```
```go
r := mux.NewRouter()
r.HandleFunc("/", handler)
amw := authenticationMiddleware{tokenUsers: make(map[string]string)}
amw.Populate()
r.Use(amw.Middleware)
```
Note: The handler chain will be stopped if your middleware doesn't call `next.ServeHTTP()` with the corresponding parameters. This can be used to abort a request if the middleware writer wants to. Middlewares _should_ write to `ResponseWriter` if they _are_ going to terminate the request, and they _should not_ write to `ResponseWriter` if they _are not_ going to terminate it.
### Handling CORS Requests
[CORSMethodMiddleware](https://godoc.org/github.com/gorilla/mux#CORSMethodMiddleware) intends to make it easier to strictly set the `Access-Control-Allow-Methods` response header.
* You will still need to use your own CORS handler to set the other CORS headers such as `Access-Control-Allow-Origin`
* The middleware will set the `Access-Control-Allow-Methods` header to all the method matchers (e.g. `r.Methods(http.MethodGet, http.MethodPut, http.MethodOptions)` -> `Access-Control-Allow-Methods: GET,PUT,OPTIONS`) on a route
* If you do not specify any methods, the middleware will not set the header for that route.
> _Important_: there must be an `OPTIONS` method matcher for the middleware to set the headers.
Here is an example of using `CORSMethodMiddleware` along with a custom `OPTIONS` handler to set all the required CORS headers:
```go
package main
import (
"net/http"
"github.com/gorilla/mux"
)
func main() {
r := mux.NewRouter()
// IMPORTANT: you must specify an OPTIONS method matcher for the middleware to set CORS headers
r.HandleFunc("/foo", fooHandler).Methods(http.MethodGet, http.MethodPut, http.MethodPatch, http.MethodOptions)
r.Use(mux.CORSMethodMiddleware(r))
http.ListenAndServe(":8080", r)
}
func fooHandler(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Access-Control-Allow-Origin", "*")
if r.Method == http.MethodOptions {
return
}
w.Write([]byte("foo"))
}
```
And a request to `/foo` using something like:
```bash
curl localhost:8080/foo -v
```
Would look like:
```bash
* Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 8080 (#0)
> GET /foo HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.59.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Access-Control-Allow-Methods: GET,PUT,PATCH,OPTIONS
< Access-Control-Allow-Origin: *
< Date: Fri, 28 Jun 2019 20:13:30 GMT
< Content-Length: 3
< Content-Type: text/plain; charset=utf-8
<
* Connection #0 to host localhost left intact
foo
```
### Testing Handlers
Testing handlers in a Go web application is straightforward, and _mux_ doesn't complicate this any further. Given two files: `endpoints.go` and `endpoints_test.go`, here's how we'd test an application using _mux_.
First, our simple HTTP handler:
```go
// endpoints.go
package main
import (
"io"
"log"
"net/http"
"github.com/gorilla/mux"
)
func HealthCheckHandler(w http.ResponseWriter, r *http.Request) {
// A very simple health check.
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusOK)
// In the future we could report back on the status of our DB, or our cache
// (e.g. Redis) by performing a simple PING, and include them in the response.
io.WriteString(w, `{"alive": true}`)
}
func main() {
r := mux.NewRouter()
r.HandleFunc("/health", HealthCheckHandler)
log.Fatal(http.ListenAndServe("localhost:8080", r))
}
```
Our test code:
```go
// endpoints_test.go
package main
import (
"net/http"
"net/http/httptest"
"testing"
)
func TestHealthCheckHandler(t *testing.T) {
// Create a request to pass to our handler. We don't have any query parameters for now, so we'll
// pass 'nil' as the third parameter.
req, err := http.NewRequest("GET", "/health", nil)
if err != nil {
t.Fatal(err)
}
// We create a ResponseRecorder (which satisfies http.ResponseWriter) to record the response.
rr := httptest.NewRecorder()
handler := http.HandlerFunc(HealthCheckHandler)
// Our handlers satisfy http.Handler, so we can call their ServeHTTP method
// directly and pass in our Request and ResponseRecorder.
handler.ServeHTTP(rr, req)
// Check the status code is what we expect.
if status := rr.Code; status != http.StatusOK {
t.Errorf("handler returned wrong status code: got %v want %v",
status, http.StatusOK)
}
// Check the response body is what we expect.
expected := `{"alive": true}`
if rr.Body.String() != expected {
t.Errorf("handler returned unexpected body: got %v want %v",
rr.Body.String(), expected)
}
}
```
In the case that our routes have [variables](#examples), we can pass those in the request. We could write
[table-driven tests](https://dave.cheney.net/2013/06/09/writing-table-driven-tests-in-go) to test multiple
possible route variables as needed.
```go
// endpoints.go
func main() {
r := mux.NewRouter()
// A route with a route variable:
r.HandleFunc("/metrics/{type}", MetricsHandler)
log.Fatal(http.ListenAndServe("localhost:8080", r))
}
```
Our test file, with a table-driven test of `routeVariables`:
```go
// endpoints_test.go
func TestMetricsHandler(t *testing.T) {
tt := []struct{
routeVariable string
shouldPass bool
}{
{"goroutines", true},
{"heap", true},
{"counters", true},
{"queries", true},
{"adhadaeqm3k", false},
}
for _, tc := range tt {
path := fmt.Sprintf("/metrics/%s", tc.routeVariable)
req, err := http.NewRequest("GET", path, nil)
if err != nil {
t.Fatal(err)
}
rr := httptest.NewRecorder()
// To add the vars to the context,
// we need to create a router through which we can pass the request.
router := mux.NewRouter()
router.HandleFunc("/metrics/{type}", MetricsHandler)
router.ServeHTTP(rr, req)
// In this case, our MetricsHandler returns a non-200 response
// for a route variable it doesn't know about.
if rr.Code == http.StatusOK && !tc.shouldPass {
t.Errorf("handler should have failed on routeVariable %s: got %v want %v",
tc.routeVariable, rr.Code, http.StatusOK)
}
}
}
```
## Full Example
Here's a complete, runnable example of a small `mux` based server:
```go
package main
import (
"net/http"
"log"
"github.com/gorilla/mux"
)
func YourHandler(w http.ResponseWriter, r *http.Request) {
w.Write([]byte("Gorilla!\n"))
}
func main() {
r := mux.NewRouter()
// Routes consist of a path and a handler function.
r.HandleFunc("/", YourHandler)
// Bind to a port and pass our router in
log.Fatal(http.ListenAndServe(":8000", r))
}
```
## License
BSD licensed. See the LICENSE file for details.

305
examples/web/vendor/github.com/gorilla/mux/doc.go generated vendored Normal file
View File

@ -0,0 +1,305 @@
// Copyright 2012 The Gorilla Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
/*
Package mux implements a request router and dispatcher.
The name mux stands for "HTTP request multiplexer". Like the standard
http.ServeMux, mux.Router matches incoming requests against a list of
registered routes and calls a handler for the route that matches the URL
or other conditions. The main features are:
- Requests can be matched based on URL host, path, path prefix, schemes,
header and query values, HTTP methods or using custom matchers.
- URL hosts, paths and query values can have variables with an optional
regular expression.
- Registered URLs can be built, or "reversed", which helps maintaining
references to resources.
- Routes can be used as subrouters: nested routes are only tested if the
parent route matches. This is useful to define groups of routes that
share common conditions like a host, a path prefix or other repeated
attributes. As a bonus, this optimizes request matching.
- It implements the http.Handler interface so it is compatible with the
standard http.ServeMux.
Let's start registering a couple of URL paths and handlers:
func main() {
r := mux.NewRouter()
r.HandleFunc("/", HomeHandler)
r.HandleFunc("/products", ProductsHandler)
r.HandleFunc("/articles", ArticlesHandler)
http.Handle("/", r)
}
Here we register three routes mapping URL paths to handlers. This is
equivalent to how http.HandleFunc() works: if an incoming request URL matches
one of the paths, the corresponding handler is called passing
(http.ResponseWriter, *http.Request) as parameters.
Paths can have variables. They are defined using the format {name} or
{name:pattern}. If a regular expression pattern is not defined, the matched
variable will be anything until the next slash. For example:
r := mux.NewRouter()
r.HandleFunc("/products/{key}", ProductHandler)
r.HandleFunc("/articles/{category}/", ArticlesCategoryHandler)
r.HandleFunc("/articles/{category}/{id:[0-9]+}", ArticleHandler)
Groups can be used inside patterns, as long as they are non-capturing (?:re). For example:
r.HandleFunc("/articles/{category}/{sort:(?:asc|desc|new)}", ArticlesCategoryHandler)
The names are used to create a map of route variables which can be retrieved
calling mux.Vars():
vars := mux.Vars(request)
category := vars["category"]
Note that if any capturing groups are present, mux will panic() during parsing. To prevent
this, convert any capturing groups to non-capturing, e.g. change "/{sort:(asc|desc)}" to
"/{sort:(?:asc|desc)}". This is a change from prior versions which behaved unpredictably
when capturing groups were present.
And this is all you need to know about the basic usage. More advanced options
are explained below.
Routes can also be restricted to a domain or subdomain. Just define a host
pattern to be matched. They can also have variables:
r := mux.NewRouter()
// Only matches if domain is "www.example.com".
r.Host("www.example.com")
// Matches a dynamic subdomain.
r.Host("{subdomain:[a-z]+}.domain.com")
There are several other matchers that can be added. To match path prefixes:
r.PathPrefix("/products/")
...or HTTP methods:
r.Methods("GET", "POST")
...or URL schemes:
r.Schemes("https")
...or header values:
r.Headers("X-Requested-With", "XMLHttpRequest")
...or query values:
r.Queries("key", "value")
...or to use a custom matcher function:
r.MatcherFunc(func(r *http.Request, rm *RouteMatch) bool {
return r.ProtoMajor == 0
})
...and finally, it is possible to combine several matchers in a single route:
r.HandleFunc("/products", ProductsHandler).
Host("www.example.com").
Methods("GET").
Schemes("http")
Setting the same matching conditions again and again can be boring, so we have
a way to group several routes that share the same requirements.
We call it "subrouting".
For example, let's say we have several URLs that should only match when the
host is "www.example.com". Create a route for that host and get a "subrouter"
from it:
r := mux.NewRouter()
s := r.Host("www.example.com").Subrouter()
Then register routes in the subrouter:
s.HandleFunc("/products/", ProductsHandler)
s.HandleFunc("/products/{key}", ProductHandler)
s.HandleFunc("/articles/{category}/{id:[0-9]+}"), ArticleHandler)
The three URL paths we registered above will only be tested if the domain is
"www.example.com", because the subrouter is tested first. This is not
only convenient, but also optimizes request matching. You can create
subrouters combining any attribute matchers accepted by a route.
Subrouters can be used to create domain or path "namespaces": you define
subrouters in a central place and then parts of the app can register its
paths relatively to a given subrouter.
There's one more thing about subroutes. When a subrouter has a path prefix,
the inner routes use it as base for their paths:
r := mux.NewRouter()
s := r.PathPrefix("/products").Subrouter()
// "/products/"
s.HandleFunc("/", ProductsHandler)
// "/products/{key}/"
s.HandleFunc("/{key}/", ProductHandler)
// "/products/{key}/details"
s.HandleFunc("/{key}/details", ProductDetailsHandler)
Note that the path provided to PathPrefix() represents a "wildcard": calling
PathPrefix("/static/").Handler(...) means that the handler will be passed any
request that matches "/static/*". This makes it easy to serve static files with mux:
func main() {
var dir string
flag.StringVar(&dir, "dir", ".", "the directory to serve files from. Defaults to the current dir")
flag.Parse()
r := mux.NewRouter()
// This will serve files under http://localhost:8000/static/<filename>
r.PathPrefix("/static/").Handler(http.StripPrefix("/static/", http.FileServer(http.Dir(dir))))
srv := &http.Server{
Handler: r,
Addr: "127.0.0.1:8000",
// Good practice: enforce timeouts for servers you create!
WriteTimeout: 15 * time.Second,
ReadTimeout: 15 * time.Second,
}
log.Fatal(srv.ListenAndServe())
}
Now let's see how to build registered URLs.
Routes can be named. All routes that define a name can have their URLs built,
or "reversed". We define a name calling Name() on a route. For example:
r := mux.NewRouter()
r.HandleFunc("/articles/{category}/{id:[0-9]+}", ArticleHandler).
Name("article")
To build a URL, get the route and call the URL() method, passing a sequence of
key/value pairs for the route variables. For the previous route, we would do:
url, err := r.Get("article").URL("category", "technology", "id", "42")
...and the result will be a url.URL with the following path:
"/articles/technology/42"
This also works for host and query value variables:
r := mux.NewRouter()
r.Host("{subdomain}.domain.com").
Path("/articles/{category}/{id:[0-9]+}").
Queries("filter", "{filter}").
HandlerFunc(ArticleHandler).
Name("article")
// url.String() will be "http://news.domain.com/articles/technology/42?filter=gorilla"
url, err := r.Get("article").URL("subdomain", "news",
"category", "technology",
"id", "42",
"filter", "gorilla")
All variables defined in the route are required, and their values must
conform to the corresponding patterns. These requirements guarantee that a
generated URL will always match a registered route -- the only exception is
for explicitly defined "build-only" routes which never match.
Regex support also exists for matching Headers within a route. For example, we could do:
r.HeadersRegexp("Content-Type", "application/(text|json)")
...and the route will match both requests with a Content-Type of `application/json` as well as
`application/text`
There's also a way to build only the URL host or path for a route:
use the methods URLHost() or URLPath() instead. For the previous route,
we would do:
// "http://news.domain.com/"
host, err := r.Get("article").URLHost("subdomain", "news")
// "/articles/technology/42"
path, err := r.Get("article").URLPath("category", "technology", "id", "42")
And if you use subrouters, host and path defined separately can be built
as well:
r := mux.NewRouter()
s := r.Host("{subdomain}.domain.com").Subrouter()
s.Path("/articles/{category}/{id:[0-9]+}").
HandlerFunc(ArticleHandler).
Name("article")
// "http://news.domain.com/articles/technology/42"
url, err := r.Get("article").URL("subdomain", "news",
"category", "technology",
"id", "42")
Mux supports the addition of middlewares to a Router, which are executed in the order they are added if a match is found, including its subrouters. Middlewares are (typically) small pieces of code which take one request, do something with it, and pass it down to another middleware or the final handler. Some common use cases for middleware are request logging, header manipulation, or ResponseWriter hijacking.
type MiddlewareFunc func(http.Handler) http.Handler
Typically, the returned handler is a closure which does something with the http.ResponseWriter and http.Request passed to it, and then calls the handler passed as parameter to the MiddlewareFunc (closures can access variables from the context where they are created).
A very basic middleware which logs the URI of the request being handled could be written as:
func simpleMw(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// Do stuff here
log.Println(r.RequestURI)
// Call the next handler, which can be another middleware in the chain, or the final handler.
next.ServeHTTP(w, r)
})
}
Middlewares can be added to a router using `Router.Use()`:
r := mux.NewRouter()
r.HandleFunc("/", handler)
r.Use(simpleMw)
A more complex authentication middleware, which maps session token to users, could be written as:
// Define our struct
type authenticationMiddleware struct {
tokenUsers map[string]string
}
// Initialize it somewhere
func (amw *authenticationMiddleware) Populate() {
amw.tokenUsers["00000000"] = "user0"
amw.tokenUsers["aaaaaaaa"] = "userA"
amw.tokenUsers["05f717e5"] = "randomUser"
amw.tokenUsers["deadbeef"] = "user0"
}
// Middleware function, which will be called for each request
func (amw *authenticationMiddleware) Middleware(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
token := r.Header.Get("X-Session-Token")
if user, found := amw.tokenUsers[token]; found {
// We found the token in our map
log.Printf("Authenticated user %s\n", user)
next.ServeHTTP(w, r)
} else {
http.Error(w, "Forbidden", http.StatusForbidden)
}
})
}
r := mux.NewRouter()
r.HandleFunc("/", handler)
amw := authenticationMiddleware{tokenUsers: make(map[string]string)}
amw.Populate()
r.Use(amw.Middleware)
Note: The handler chain will be stopped if your middleware doesn't call `next.ServeHTTP()` with the corresponding parameters. This can be used to abort a request if the middleware writer wants to.
*/
package mux

View File

@ -0,0 +1,74 @@
package mux
import (
"net/http"
"strings"
)
// MiddlewareFunc is a function which receives an http.Handler and returns another http.Handler.
// Typically, the returned handler is a closure which does something with the http.ResponseWriter and http.Request passed
// to it, and then calls the handler passed as parameter to the MiddlewareFunc.
type MiddlewareFunc func(http.Handler) http.Handler
// middleware interface is anything which implements a MiddlewareFunc named Middleware.
type middleware interface {
Middleware(handler http.Handler) http.Handler
}
// Middleware allows MiddlewareFunc to implement the middleware interface.
func (mw MiddlewareFunc) Middleware(handler http.Handler) http.Handler {
return mw(handler)
}
// Use appends a MiddlewareFunc to the chain. Middleware can be used to intercept or otherwise modify requests and/or responses, and are executed in the order that they are applied to the Router.
func (r *Router) Use(mwf ...MiddlewareFunc) {
for _, fn := range mwf {
r.middlewares = append(r.middlewares, fn)
}
}
// useInterface appends a middleware to the chain. Middleware can be used to intercept or otherwise modify requests and/or responses, and are executed in the order that they are applied to the Router.
func (r *Router) useInterface(mw middleware) {
r.middlewares = append(r.middlewares, mw)
}
// CORSMethodMiddleware automatically sets the Access-Control-Allow-Methods response header
// on requests for routes that have an OPTIONS method matcher to all the method matchers on
// the route. Routes that do not explicitly handle OPTIONS requests will not be processed
// by the middleware. See examples for usage.
func CORSMethodMiddleware(r *Router) MiddlewareFunc {
return func(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
allMethods, err := getAllMethodsForRoute(r, req)
if err == nil {
for _, v := range allMethods {
if v == http.MethodOptions {
w.Header().Set("Access-Control-Allow-Methods", strings.Join(allMethods, ","))
}
}
}
next.ServeHTTP(w, req)
})
}
}
// getAllMethodsForRoute returns all the methods from method matchers matching a given
// request.
func getAllMethodsForRoute(r *Router, req *http.Request) ([]string, error) {
var allMethods []string
for _, route := range r.routes {
var match RouteMatch
if route.Match(req, &match) || match.MatchErr == ErrMethodMismatch {
methods, err := route.GetMethods()
if err != nil {
return nil, err
}
allMethods = append(allMethods, methods...)
}
}
return allMethods, nil
}

608
examples/web/vendor/github.com/gorilla/mux/mux.go generated vendored Normal file
View File

@ -0,0 +1,608 @@
// Copyright 2012 The Gorilla Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package mux
import (
"context"
"errors"
"fmt"
"net/http"
"path"
"regexp"
)
var (
// ErrMethodMismatch is returned when the method in the request does not match
// the method defined against the route.
ErrMethodMismatch = errors.New("method is not allowed")
// ErrNotFound is returned when no route match is found.
ErrNotFound = errors.New("no matching route was found")
)
// NewRouter returns a new router instance.
func NewRouter() *Router {
return &Router{namedRoutes: make(map[string]*Route)}
}
// Router registers routes to be matched and dispatches a handler.
//
// It implements the http.Handler interface, so it can be registered to serve
// requests:
//
// var router = mux.NewRouter()
//
// func main() {
// http.Handle("/", router)
// }
//
// Or, for Google App Engine, register it in an init() function:
//
// func init() {
// http.Handle("/", router)
// }
//
// This will send all incoming requests to the router.
type Router struct {
// Configurable Handler to be used when no route matches.
// This can be used to render your own 404 Not Found errors.
NotFoundHandler http.Handler
// Configurable Handler to be used when the request method does not match the route.
// This can be used to render your own 405 Method Not Allowed errors.
MethodNotAllowedHandler http.Handler
// Routes to be matched, in order.
routes []*Route
// Routes by name for URL building.
namedRoutes map[string]*Route
// If true, do not clear the request context after handling the request.
//
// Deprecated: No effect, since the context is stored on the request itself.
KeepContext bool
// Slice of middlewares to be called after a match is found
middlewares []middleware
// configuration shared with `Route`
routeConf
}
// common route configuration shared between `Router` and `Route`
type routeConf struct {
// If true, "/path/foo%2Fbar/to" will match the path "/path/{var}/to"
useEncodedPath bool
// If true, when the path pattern is "/path/", accessing "/path" will
// redirect to the former and vice versa.
strictSlash bool
// If true, when the path pattern is "/path//to", accessing "/path//to"
// will not redirect
skipClean bool
// Manager for the variables from host and path.
regexp routeRegexpGroup
// List of matchers.
matchers []matcher
// The scheme used when building URLs.
buildScheme string
buildVarsFunc BuildVarsFunc
}
// returns an effective deep copy of `routeConf`
func copyRouteConf(r routeConf) routeConf {
c := r
if r.regexp.path != nil {
c.regexp.path = copyRouteRegexp(r.regexp.path)
}
if r.regexp.host != nil {
c.regexp.host = copyRouteRegexp(r.regexp.host)
}
c.regexp.queries = make([]*routeRegexp, 0, len(r.regexp.queries))
for _, q := range r.regexp.queries {
c.regexp.queries = append(c.regexp.queries, copyRouteRegexp(q))
}
c.matchers = make([]matcher, len(r.matchers))
copy(c.matchers, r.matchers)
return c
}
func copyRouteRegexp(r *routeRegexp) *routeRegexp {
c := *r
return &c
}
// Match attempts to match the given request against the router's registered routes.
//
// If the request matches a route of this router or one of its subrouters the Route,
// Handler, and Vars fields of the match argument are filled and this function
// returns true.
//
// If the request does not match any of this router's or its subrouters' routes
// then this function returns false. If available, a reason for the match failure
// will be filled in the match argument's MatchErr field. If the match failure type
// (eg: not found) has a registered handler, the handler is assigned to the Handler
// field of the match argument.
func (r *Router) Match(req *http.Request, match *RouteMatch) bool {
for _, route := range r.routes {
if route.Match(req, match) {
// Build middleware chain if no error was found
if match.MatchErr == nil {
for i := len(r.middlewares) - 1; i >= 0; i-- {
match.Handler = r.middlewares[i].Middleware(match.Handler)
}
}
return true
}
}
if match.MatchErr == ErrMethodMismatch {
if r.MethodNotAllowedHandler != nil {
match.Handler = r.MethodNotAllowedHandler
return true
}
return false
}
// Closest match for a router (includes sub-routers)
if r.NotFoundHandler != nil {
match.Handler = r.NotFoundHandler
match.MatchErr = ErrNotFound
return true
}
match.MatchErr = ErrNotFound
return false
}
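A brief sketch of calling `Match` directly, e.g. from a test, to check whether a request would be routed; the route and URL are illustrative:
```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"

	"github.com/gorilla/mux"
)

func main() {
	r := mux.NewRouter()
	r.HandleFunc("/articles/{id:[0-9]+}", func(w http.ResponseWriter, r *http.Request) {}).
		Methods("GET")

	req := httptest.NewRequest("GET", "http://example.com/articles/42", nil)

	var match mux.RouteMatch
	if r.Match(req, &match) {
		// Match fills Vars just as it would for a served request.
		fmt.Println(match.Vars["id"]) // 42
	}
}
```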
// ServeHTTP dispatches the handler registered in the matched route.
//
// When there is a match, the route variables can be retrieved calling
// mux.Vars(request).
func (r *Router) ServeHTTP(w http.ResponseWriter, req *http.Request) {
if !r.skipClean {
path := req.URL.Path
if r.useEncodedPath {
path = req.URL.EscapedPath()
}
// Clean path to canonical form and redirect.
if p := cleanPath(path); p != path {
// Added 3 lines (Philip Schlump) - It was dropping the query string and #whatever from query.
// This matches with fix in go 1.2 r.c. 4 for same problem. Go Issue:
// http://code.google.com/p/go/issues/detail?id=5252
url := *req.URL
url.Path = p
p = url.String()
w.Header().Set("Location", p)
w.WriteHeader(http.StatusMovedPermanently)
return
}
}
var match RouteMatch
var handler http.Handler
if r.Match(req, &match) {
handler = match.Handler
req = requestWithVars(req, match.Vars)
req = requestWithRoute(req, match.Route)
}
if handler == nil && match.MatchErr == ErrMethodMismatch {
handler = methodNotAllowedHandler()
}
if handler == nil {
handler = http.NotFoundHandler()
}
handler.ServeHTTP(w, req)
}
// Get returns a route registered with the given name.
func (r *Router) Get(name string) *Route {
return r.namedRoutes[name]
}
// GetRoute returns a route registered with the given name. This method
// was renamed to Get() and remains here for backwards compatibility.
func (r *Router) GetRoute(name string) *Route {
return r.namedRoutes[name]
}
// StrictSlash defines the trailing slash behavior for new routes. The initial
// value is false.
//
// When true, if the route path is "/path/", accessing "/path" will perform a redirect
// to the former and vice versa. In other words, your application will always
// see the path as specified in the route.
//
// When false, if the route path is "/path", accessing "/path/" will not match
// this route and vice versa.
//
// The redirect is an HTTP 301 (Moved Permanently). Note that when this is set for
// routes with a non-idempotent method (e.g. POST, PUT), the subsequent redirected
// request will be made as a GET by most clients. Use middleware or client settings
// to modify this behaviour as needed.
//
// Special case: when a route sets a path prefix using the PathPrefix() method,
// strict slash is ignored for that route because the redirect behavior can't
// be determined from a prefix alone. However, any subrouters created from that
// route inherit the original StrictSlash setting.
func (r *Router) StrictSlash(value bool) *Router {
r.strictSlash = value
return r
}
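// exampleStrictSlash is a minimal sketch of the trailing-slash redirect
// described above; the path and handler are illustrative.
func exampleStrictSlash() *Router {
	r := NewRouter()
	r.StrictSlash(true)
	// A GET request for "/posts" now receives a 301 redirect to "/posts/".
	r.HandleFunc("/posts/", func(w http.ResponseWriter, req *http.Request) {})
	return r
}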
// SkipClean defines the path cleaning behaviour for new routes. The initial
// value is false. Users should be careful about which routes are not cleaned
//
// When true, if the route path is "/path//to", it will remain with the double
// slash. This is helpful if you have a route like: /fetch/http://xkcd.com/534/
//
// When false, the path will be cleaned, so /fetch/http://xkcd.com/534/ will
// become /fetch/http/xkcd.com/534
func (r *Router) SkipClean(value bool) *Router {
r.skipClean = value
return r
}
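// exampleSkipClean is a minimal sketch: with SkipClean(true) the double slash
// in the proxied URL below is preserved instead of being collapsed.
func exampleSkipClean() *Router {
	r := NewRouter()
	r.SkipClean(true)
	r.PathPrefix("/fetch/").HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
		// req.URL.Path still contains e.g. "/fetch/http://xkcd.com/534/".
	})
	return r
}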
// UseEncodedPath tells the router to match the encoded original path
// to the routes.
// For eg. "/path/foo%2Fbar/to" will match the path "/path/{var}/to".
//
// If not called, the router will match the unencoded path to the routes.
// For eg. "/path/foo%2Fbar/to" will match the path "/path/foo/bar/to"
func (r *Router) UseEncodedPath() *Router {
r.useEncodedPath = true
return r
}
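// exampleUseEncodedPath is a minimal sketch: with UseEncodedPath the encoded
// form "/path/foo%2Fbar/to" is matched, so {var} captures "foo%2Fbar".
func exampleUseEncodedPath() *Router {
	r := NewRouter()
	r.UseEncodedPath()
	r.HandleFunc("/path/{var}/to", func(w http.ResponseWriter, req *http.Request) {})
	return r
}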
// ----------------------------------------------------------------------------
// Route factories
// ----------------------------------------------------------------------------
// NewRoute registers an empty route.
func (r *Router) NewRoute() *Route {
// initialize a route with a copy of the parent router's configuration
route := &Route{routeConf: copyRouteConf(r.routeConf), namedRoutes: r.namedRoutes}
r.routes = append(r.routes, route)
return route
}
// Name registers a new route with a name.
// See Route.Name().
func (r *Router) Name(name string) *Route {
return r.NewRoute().Name(name)
}
// Handle registers a new route with a matcher for the URL path.
// See Route.Path() and Route.Handler().
func (r *Router) Handle(path string, handler http.Handler) *Route {
return r.NewRoute().Path(path).Handler(handler)
}
// HandleFunc registers a new route with a matcher for the URL path.
// See Route.Path() and Route.HandlerFunc().
func (r *Router) HandleFunc(path string, f func(http.ResponseWriter,
*http.Request)) *Route {
return r.NewRoute().Path(path).HandlerFunc(f)
}
// Headers registers a new route with a matcher for request header values.
// See Route.Headers().
func (r *Router) Headers(pairs ...string) *Route {
return r.NewRoute().Headers(pairs...)
}
// Host registers a new route with a matcher for the URL host.
// See Route.Host().
func (r *Router) Host(tpl string) *Route {
return r.NewRoute().Host(tpl)
}
// MatcherFunc registers a new route with a custom matcher function.
// See Route.MatcherFunc().
func (r *Router) MatcherFunc(f MatcherFunc) *Route {
return r.NewRoute().MatcherFunc(f)
}
// Methods registers a new route with a matcher for HTTP methods.
// See Route.Methods().
func (r *Router) Methods(methods ...string) *Route {
return r.NewRoute().Methods(methods...)
}
// Path registers a new route with a matcher for the URL path.
// See Route.Path().
func (r *Router) Path(tpl string) *Route {
return r.NewRoute().Path(tpl)
}
// PathPrefix registers a new route with a matcher for the URL path prefix.
// See Route.PathPrefix().
func (r *Router) PathPrefix(tpl string) *Route {
return r.NewRoute().PathPrefix(tpl)
}
// Queries registers a new route with a matcher for URL query values.
// See Route.Queries().
func (r *Router) Queries(pairs ...string) *Route {
return r.NewRoute().Queries(pairs...)
}
// Schemes registers a new route with a matcher for URL schemes.
// See Route.Schemes().
func (r *Router) Schemes(schemes ...string) *Route {
return r.NewRoute().Schemes(schemes...)
}
// BuildVarsFunc registers a new route with a custom function for modifying
// route variables before building a URL.
func (r *Router) BuildVarsFunc(f BuildVarsFunc) *Route {
return r.NewRoute().BuildVarsFunc(f)
}
// Walk walks the router and all its sub-routers, calling walkFn for each route
// in the tree. The routes are walked in the order they were added. Sub-routers
// are explored depth-first.
func (r *Router) Walk(walkFn WalkFunc) error {
return r.walk(walkFn, []*Route{})
}
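// exampleWalk is a minimal sketch that prints the path template of every
// registered route; error handling is kept deliberately small.
func exampleWalk(r *Router) error {
	return r.Walk(func(route *Route, router *Router, ancestors []*Route) error {
		if tpl, err := route.GetPathTemplate(); err == nil {
			fmt.Println(tpl)
		}
		return nil
	})
}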
// SkipRouter is used as a return value from WalkFuncs to indicate that the
// router that walk is about to descend down to should be skipped.
var SkipRouter = errors.New("skip this router")
// WalkFunc is the type of the function called for each route visited by Walk.
// At every invocation, it is given the current route, the current router,
// and the list of ancestor routes that lead to the current route.
type WalkFunc func(route *Route, router *Router, ancestors []*Route) error
func (r *Router) walk(walkFn WalkFunc, ancestors []*Route) error {
for _, t := range r.routes {
err := walkFn(t, r, ancestors)
if err == SkipRouter {
continue
}
if err != nil {
return err
}
for _, sr := range t.matchers {
if h, ok := sr.(*Router); ok {
ancestors = append(ancestors, t)
err := h.walk(walkFn, ancestors)
if err != nil {
return err
}
ancestors = ancestors[:len(ancestors)-1]
}
}
if h, ok := t.handler.(*Router); ok {
ancestors = append(ancestors, t)
err := h.walk(walkFn, ancestors)
if err != nil {
return err
}
ancestors = ancestors[:len(ancestors)-1]
}
}
return nil
}
// ----------------------------------------------------------------------------
// Context
// ----------------------------------------------------------------------------
// RouteMatch stores information about a matched route.
type RouteMatch struct {
Route *Route
Handler http.Handler
Vars map[string]string
// MatchErr is set to the appropriate matching error.
// It is set to ErrMethodMismatch if the request method does not
// match the methods accepted by the route.
MatchErr error
}
type contextKey int
const (
varsKey contextKey = iota
routeKey
)
// Vars returns the route variables for the current request, if any.
func Vars(r *http.Request) map[string]string {
if rv := r.Context().Value(varsKey); rv != nil {
return rv.(map[string]string)
}
return nil
}
// CurrentRoute returns the matched route for the current request, if any.
// This only works when called inside the handler of the matched route
// because the matched route is stored in the request context which is cleared
// after the handler returns.
func CurrentRoute(r *http.Request) *Route {
if rv := r.Context().Value(routeKey); rv != nil {
return rv.(*Route)
}
return nil
}
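// exampleVarsHandler is a minimal sketch of reading route variables and the
// matched route inside a handler; the "title" variable is illustrative.
func exampleVarsHandler(w http.ResponseWriter, req *http.Request) {
	vars := Vars(req)
	title := vars["title"]
	if route := CurrentRoute(req); route != nil {
		// route.GetName() returns the name given via Route.Name(), if any.
		_ = route.GetName()
	}
	fmt.Fprintf(w, "requested title: %s", title)
}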
func requestWithVars(r *http.Request, vars map[string]string) *http.Request {
ctx := context.WithValue(r.Context(), varsKey, vars)
return r.WithContext(ctx)
}
func requestWithRoute(r *http.Request, route *Route) *http.Request {
ctx := context.WithValue(r.Context(), routeKey, route)
return r.WithContext(ctx)
}
// ----------------------------------------------------------------------------
// Helpers
// ----------------------------------------------------------------------------
// cleanPath returns the canonical path for p, eliminating . and .. elements.
// Borrowed from the net/http package.
func cleanPath(p string) string {
if p == "" {
return "/"
}
if p[0] != '/' {
p = "/" + p
}
np := path.Clean(p)
// path.Clean removes trailing slash except for root;
// put the trailing slash back if necessary.
if p[len(p)-1] == '/' && np != "/" {
np += "/"
}
return np
}
// uniqueVars returns an error if two slices contain duplicated strings.
func uniqueVars(s1, s2 []string) error {
for _, v1 := range s1 {
for _, v2 := range s2 {
if v1 == v2 {
return fmt.Errorf("mux: duplicated route variable %q", v2)
}
}
}
return nil
}
// checkPairs returns the count of strings passed in, and an error if
// the count is not an even number.
func checkPairs(pairs ...string) (int, error) {
length := len(pairs)
if length%2 != 0 {
return length, fmt.Errorf(
"mux: number of parameters must be multiple of 2, got %v", pairs)
}
return length, nil
}
// mapFromPairsToString converts variadic string parameters to a
// string to string map.
func mapFromPairsToString(pairs ...string) (map[string]string, error) {
length, err := checkPairs(pairs...)
if err != nil {
return nil, err
}
m := make(map[string]string, length/2)
for i := 0; i < length; i += 2 {
m[pairs[i]] = pairs[i+1]
}
return m, nil
}
// mapFromPairsToRegex converts variadic string parameters to a
// string to regex map.
func mapFromPairsToRegex(pairs ...string) (map[string]*regexp.Regexp, error) {
length, err := checkPairs(pairs...)
if err != nil {
return nil, err
}
m := make(map[string]*regexp.Regexp, length/2)
for i := 0; i < length; i += 2 {
regex, err := regexp.Compile(pairs[i+1])
if err != nil {
return nil, err
}
m[pairs[i]] = regex
}
return m, nil
}
// matchInArray returns true if the given string value is in the array.
func matchInArray(arr []string, value string) bool {
for _, v := range arr {
if v == value {
return true
}
}
return false
}
// matchMapWithString returns true if the given key/value pairs exist in a given map.
func matchMapWithString(toCheck map[string]string, toMatch map[string][]string, canonicalKey bool) bool {
for k, v := range toCheck {
// Check if key exists.
if canonicalKey {
k = http.CanonicalHeaderKey(k)
}
if values := toMatch[k]; values == nil {
return false
} else if v != "" {
// If value was defined as an empty string we only check that the
// key exists. Otherwise we also check for equality.
valueExists := false
for _, value := range values {
if v == value {
valueExists = true
break
}
}
if !valueExists {
return false
}
}
}
return true
}
// matchMapWithRegex returns true if the given key/value pairs exist in a given map compiled against
// the given regex
func matchMapWithRegex(toCheck map[string]*regexp.Regexp, toMatch map[string][]string, canonicalKey bool) bool {
for k, v := range toCheck {
// Check if key exists.
if canonicalKey {
k = http.CanonicalHeaderKey(k)
}
if values := toMatch[k]; values == nil {
return false
} else if v != nil {
// If the value was defined as nil we only check that the
// key exists. Otherwise we also check that at least one value matches the regex.
valueExists := false
for _, value := range values {
if v.MatchString(value) {
valueExists = true
break
}
}
if !valueExists {
return false
}
}
}
return true
}
// methodNotAllowed replies to the request with an HTTP status code 405.
func methodNotAllowed(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusMethodNotAllowed)
}
// methodNotAllowedHandler returns a simple request handler
// that replies to each request with a status code 405.
func methodNotAllowedHandler() http.Handler { return http.HandlerFunc(methodNotAllowed) }

388
examples/web/vendor/github.com/gorilla/mux/regexp.go generated vendored Normal file
View File

@ -0,0 +1,388 @@
// Copyright 2012 The Gorilla Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package mux
import (
"bytes"
"fmt"
"net/http"
"net/url"
"regexp"
"strconv"
"strings"
)
type routeRegexpOptions struct {
strictSlash bool
useEncodedPath bool
}
type regexpType int
const (
regexpTypePath regexpType = iota
regexpTypeHost
regexpTypePrefix
regexpTypeQuery
)
// newRouteRegexp parses a route template and returns a routeRegexp,
// used to match a host, a path or a query string.
//
// It will extract named variables, assemble a regexp to be matched, create
// a "reverse" template to build URLs and compile regexps to validate variable
// values used in URL building.
//
// Previously we accepted only Python-like identifiers for variable
// names ([a-zA-Z_][a-zA-Z0-9_]*), but currently the only restriction is that
// name and pattern can't be empty, and names can't contain a colon.
func newRouteRegexp(tpl string, typ regexpType, options routeRegexpOptions) (*routeRegexp, error) {
// Check if it is well-formed.
idxs, errBraces := braceIndices(tpl)
if errBraces != nil {
return nil, errBraces
}
// Backup the original.
template := tpl
// Now let's parse it.
defaultPattern := "[^/]+"
if typ == regexpTypeQuery {
defaultPattern = ".*"
} else if typ == regexpTypeHost {
defaultPattern = "[^.]+"
}
// strictSlash only applies when matching a path.
if typ != regexpTypePath {
options.strictSlash = false
}
// Set a flag for strictSlash.
endSlash := false
if options.strictSlash && strings.HasSuffix(tpl, "/") {
tpl = tpl[:len(tpl)-1]
endSlash = true
}
varsN := make([]string, len(idxs)/2)
varsR := make([]*regexp.Regexp, len(idxs)/2)
pattern := bytes.NewBufferString("")
pattern.WriteByte('^')
reverse := bytes.NewBufferString("")
var end int
var err error
for i := 0; i < len(idxs); i += 2 {
// Set all values we are interested in.
raw := tpl[end:idxs[i]]
end = idxs[i+1]
parts := strings.SplitN(tpl[idxs[i]+1:end-1], ":", 2)
name := parts[0]
patt := defaultPattern
if len(parts) == 2 {
patt = parts[1]
}
// Name or pattern can't be empty.
if name == "" || patt == "" {
return nil, fmt.Errorf("mux: missing name or pattern in %q",
tpl[idxs[i]:end])
}
// Build the regexp pattern.
fmt.Fprintf(pattern, "%s(?P<%s>%s)", regexp.QuoteMeta(raw), varGroupName(i/2), patt)
// Build the reverse template.
fmt.Fprintf(reverse, "%s%%s", raw)
// Append variable name and compiled pattern.
varsN[i/2] = name
varsR[i/2], err = regexp.Compile(fmt.Sprintf("^%s$", patt))
if err != nil {
return nil, err
}
}
// Add the remaining.
raw := tpl[end:]
pattern.WriteString(regexp.QuoteMeta(raw))
if options.strictSlash {
pattern.WriteString("[/]?")
}
if typ == regexpTypeQuery {
// Add the default pattern if the query value is empty
if queryVal := strings.SplitN(template, "=", 2)[1]; queryVal == "" {
pattern.WriteString(defaultPattern)
}
}
if typ != regexpTypePrefix {
pattern.WriteByte('$')
}
var wildcardHostPort bool
if typ == regexpTypeHost {
if !strings.Contains(pattern.String(), ":") {
wildcardHostPort = true
}
}
reverse.WriteString(raw)
if endSlash {
reverse.WriteByte('/')
}
// Compile full regexp.
reg, errCompile := regexp.Compile(pattern.String())
if errCompile != nil {
return nil, errCompile
}
// Check for capturing groups which used to work in older versions
if reg.NumSubexp() != len(idxs)/2 {
panic(fmt.Sprintf("route %s contains capture groups in its regexp. ", template) +
"Only non-capturing groups are accepted: e.g. (?:pattern) instead of (pattern)")
}
// Done!
return &routeRegexp{
template: template,
regexpType: typ,
options: options,
regexp: reg,
reverse: reverse.String(),
varsN: varsN,
varsR: varsR,
wildcardHostPort: wildcardHostPort,
}, nil
}
// routeRegexp stores a regexp to match a host or path and information to
// collect and validate route variables.
type routeRegexp struct {
// The unmodified template.
template string
// The type of match
regexpType regexpType
// Options for matching
options routeRegexpOptions
// Expanded regexp.
regexp *regexp.Regexp
// Reverse template.
reverse string
// Variable names.
varsN []string
// Variable regexps (validators).
varsR []*regexp.Regexp
// Wildcard host-port (no strict port match in hostname)
wildcardHostPort bool
}
// Match matches the regexp against the URL host or path.
func (r *routeRegexp) Match(req *http.Request, match *RouteMatch) bool {
if r.regexpType == regexpTypeHost {
host := getHost(req)
if r.wildcardHostPort {
// Don't be strict on the port match
if i := strings.Index(host, ":"); i != -1 {
host = host[:i]
}
}
return r.regexp.MatchString(host)
}
if r.regexpType == regexpTypeQuery {
return r.matchQueryString(req)
}
path := req.URL.Path
if r.options.useEncodedPath {
path = req.URL.EscapedPath()
}
return r.regexp.MatchString(path)
}
// url builds a URL part using the given values.
func (r *routeRegexp) url(values map[string]string) (string, error) {
urlValues := make([]interface{}, len(r.varsN))
for k, v := range r.varsN {
value, ok := values[v]
if !ok {
return "", fmt.Errorf("mux: missing route variable %q", v)
}
if r.regexpType == regexpTypeQuery {
value = url.QueryEscape(value)
}
urlValues[k] = value
}
rv := fmt.Sprintf(r.reverse, urlValues...)
if !r.regexp.MatchString(rv) {
// The URL is checked against the full regexp, instead of checking
// individual variables. This is faster but to provide a good error
// message, we check individual regexps if the URL doesn't match.
for k, v := range r.varsN {
if !r.varsR[k].MatchString(values[v]) {
return "", fmt.Errorf(
"mux: variable %q doesn't match, expected %q", values[v],
r.varsR[k].String())
}
}
}
return rv, nil
}
// getURLQuery returns a single query parameter from a request URL.
// For a URL with foo=bar&baz=ding, we return only the relevant key
// value pair for the routeRegexp.
func (r *routeRegexp) getURLQuery(req *http.Request) string {
if r.regexpType != regexpTypeQuery {
return ""
}
templateKey := strings.SplitN(r.template, "=", 2)[0]
val, ok := findFirstQueryKey(req.URL.RawQuery, templateKey)
if ok {
return templateKey + "=" + val
}
return ""
}
// findFirstQueryKey returns the same result as (*url.URL).Query()[key][0].
// If key was not found, empty string and false is returned.
func findFirstQueryKey(rawQuery, key string) (value string, ok bool) {
query := []byte(rawQuery)
for len(query) > 0 {
foundKey := query
if i := bytes.IndexAny(foundKey, "&;"); i >= 0 {
foundKey, query = foundKey[:i], foundKey[i+1:]
} else {
query = query[:0]
}
if len(foundKey) == 0 {
continue
}
var value []byte
if i := bytes.IndexByte(foundKey, '='); i >= 0 {
foundKey, value = foundKey[:i], foundKey[i+1:]
}
if len(foundKey) < len(key) {
// Cannot possibly be key.
continue
}
keyString, err := url.QueryUnescape(string(foundKey))
if err != nil {
continue
}
if keyString != key {
continue
}
valueString, err := url.QueryUnescape(string(value))
if err != nil {
continue
}
return valueString, true
}
return "", false
}
func (r *routeRegexp) matchQueryString(req *http.Request) bool {
return r.regexp.MatchString(r.getURLQuery(req))
}
// braceIndices returns the first level curly brace indices from a string.
// It returns an error in case of unbalanced braces.
func braceIndices(s string) ([]int, error) {
var level, idx int
var idxs []int
for i := 0; i < len(s); i++ {
switch s[i] {
case '{':
if level++; level == 1 {
idx = i
}
case '}':
if level--; level == 0 {
idxs = append(idxs, idx, i+1)
} else if level < 0 {
return nil, fmt.Errorf("mux: unbalanced braces in %q", s)
}
}
}
if level != 0 {
return nil, fmt.Errorf("mux: unbalanced braces in %q", s)
}
return idxs, nil
}
// varGroupName builds a capturing group name for the indexed variable.
func varGroupName(idx int) string {
return "v" + strconv.Itoa(idx)
}
// ----------------------------------------------------------------------------
// routeRegexpGroup
// ----------------------------------------------------------------------------
// routeRegexpGroup groups the route matchers that carry variables.
type routeRegexpGroup struct {
host *routeRegexp
path *routeRegexp
queries []*routeRegexp
}
// setMatch extracts the variables from the URL once a route matches.
func (v routeRegexpGroup) setMatch(req *http.Request, m *RouteMatch, r *Route) {
// Store host variables.
if v.host != nil {
host := getHost(req)
if v.host.wildcardHostPort {
// Don't be strict on the port match
if i := strings.Index(host, ":"); i != -1 {
host = host[:i]
}
}
matches := v.host.regexp.FindStringSubmatchIndex(host)
if len(matches) > 0 {
extractVars(host, matches, v.host.varsN, m.Vars)
}
}
path := req.URL.Path
if r.useEncodedPath {
path = req.URL.EscapedPath()
}
// Store path variables.
if v.path != nil {
matches := v.path.regexp.FindStringSubmatchIndex(path)
if len(matches) > 0 {
extractVars(path, matches, v.path.varsN, m.Vars)
// Check if we should redirect.
if v.path.options.strictSlash {
p1 := strings.HasSuffix(path, "/")
p2 := strings.HasSuffix(v.path.template, "/")
if p1 != p2 {
u, _ := url.Parse(req.URL.String())
if p1 {
u.Path = u.Path[:len(u.Path)-1]
} else {
u.Path += "/"
}
m.Handler = http.RedirectHandler(u.String(), http.StatusMovedPermanently)
}
}
}
}
// Store query string variables.
for _, q := range v.queries {
queryURL := q.getURLQuery(req)
matches := q.regexp.FindStringSubmatchIndex(queryURL)
if len(matches) > 0 {
extractVars(queryURL, matches, q.varsN, m.Vars)
}
}
}
// getHost tries its best to return the request host.
// According to section 14.23 of RFC 2616 the Host header
// can include the port number if the default value of 80 is not used.
func getHost(r *http.Request) string {
if r.URL.IsAbs() {
return r.URL.Host
}
return r.Host
}
func extractVars(input string, matches []int, names []string, output map[string]string) {
for i, name := range names {
output[name] = input[matches[2*i+2]:matches[2*i+3]]
}
}

765
examples/web/vendor/github.com/gorilla/mux/route.go generated vendored Normal file
View File

@ -0,0 +1,765 @@
// Copyright 2012 The Gorilla Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package mux
import (
"errors"
"fmt"
"net/http"
"net/url"
"regexp"
"strings"
)
// Route stores information to match a request and build URLs.
type Route struct {
// Request handler for the route.
handler http.Handler
// If true, this route never matches: it is only used to build URLs.
buildOnly bool
// The name used to build URLs.
name string
// Error resulted from building a route.
err error
// "global" reference to all named routes
namedRoutes map[string]*Route
// config possibly passed in from `Router`
routeConf
}
// SkipClean reports whether path cleaning is enabled for this route via
// Router.SkipClean.
func (r *Route) SkipClean() bool {
return r.skipClean
}
// Match matches the route against the request.
func (r *Route) Match(req *http.Request, match *RouteMatch) bool {
if r.buildOnly || r.err != nil {
return false
}
var matchErr error
// Match everything.
for _, m := range r.matchers {
if matched := m.Match(req, match); !matched {
if _, ok := m.(methodMatcher); ok {
matchErr = ErrMethodMismatch
continue
}
// Ignore ErrNotFound errors. These errors arise from match call
// to Subrouters.
//
// This prevents subsequent matching subrouters from failing to
// run middleware. If not ignored, the middleware would see a
// non-nil MatchErr and be skipped, even when there was a
// matching route.
if match.MatchErr == ErrNotFound {
match.MatchErr = nil
}
matchErr = nil // nolint:ineffassign
return false
} else {
// Multiple routes may share the same path but use different HTTP methods. For instance:
// Route 1: POST "/users/{id}".
// Route 2: GET "/users/{id}", parameters: "id": "[0-9]+".
//
// The router must handle these cases correctly. For a GET request to "/users/abc" with "id" as "-2",
// The router should return a "Not Found" error as no route fully matches this request.
if match.MatchErr == ErrMethodMismatch {
match.MatchErr = nil
}
}
}
if matchErr != nil {
match.MatchErr = matchErr
return false
}
if match.MatchErr == ErrMethodMismatch && r.handler != nil {
// We found a route which matches request method, clear MatchErr
match.MatchErr = nil
// Then override the mis-matched handler
match.Handler = r.handler
}
// Yay, we have a match. Let's collect some info about it.
if match.Route == nil {
match.Route = r
}
if match.Handler == nil {
match.Handler = r.handler
}
if match.Vars == nil {
match.Vars = make(map[string]string)
}
// Set variables.
r.regexp.setMatch(req, match, r)
return true
}
// ----------------------------------------------------------------------------
// Route attributes
// ----------------------------------------------------------------------------
// GetError returns an error resulted from building the route, if any.
func (r *Route) GetError() error {
return r.err
}
// BuildOnly sets the route to never match: it is only used to build URLs.
func (r *Route) BuildOnly() *Route {
r.buildOnly = true
return r
}
// Handler --------------------------------------------------------------------
// Handler sets a handler for the route.
func (r *Route) Handler(handler http.Handler) *Route {
if r.err == nil {
r.handler = handler
}
return r
}
// HandlerFunc sets a handler function for the route.
func (r *Route) HandlerFunc(f func(http.ResponseWriter, *http.Request)) *Route {
return r.Handler(http.HandlerFunc(f))
}
// GetHandler returns the handler for the route, if any.
func (r *Route) GetHandler() http.Handler {
return r.handler
}
// Name -----------------------------------------------------------------------
// Name sets the name for the route, used to build URLs.
// It is an error to call Name more than once on a route.
func (r *Route) Name(name string) *Route {
if r.name != "" {
r.err = fmt.Errorf("mux: route already has name %q, can't set %q",
r.name, name)
}
if r.err == nil {
r.name = name
r.namedRoutes[name] = r
}
return r
}
// GetName returns the name for the route, if any.
func (r *Route) GetName() string {
return r.name
}
// ----------------------------------------------------------------------------
// Matchers
// ----------------------------------------------------------------------------
// matcher types try to match a request.
type matcher interface {
Match(*http.Request, *RouteMatch) bool
}
// addMatcher adds a matcher to the route.
func (r *Route) addMatcher(m matcher) *Route {
if r.err == nil {
r.matchers = append(r.matchers, m)
}
return r
}
// addRegexpMatcher adds a host or path matcher and builder to a route.
func (r *Route) addRegexpMatcher(tpl string, typ regexpType) error {
if r.err != nil {
return r.err
}
if typ == regexpTypePath || typ == regexpTypePrefix {
if len(tpl) > 0 && tpl[0] != '/' {
return fmt.Errorf("mux: path must start with a slash, got %q", tpl)
}
if r.regexp.path != nil {
tpl = strings.TrimRight(r.regexp.path.template, "/") + tpl
}
}
rr, err := newRouteRegexp(tpl, typ, routeRegexpOptions{
strictSlash: r.strictSlash,
useEncodedPath: r.useEncodedPath,
})
if err != nil {
return err
}
for _, q := range r.regexp.queries {
if err = uniqueVars(rr.varsN, q.varsN); err != nil {
return err
}
}
if typ == regexpTypeHost {
if r.regexp.path != nil {
if err = uniqueVars(rr.varsN, r.regexp.path.varsN); err != nil {
return err
}
}
r.regexp.host = rr
} else {
if r.regexp.host != nil {
if err = uniqueVars(rr.varsN, r.regexp.host.varsN); err != nil {
return err
}
}
if typ == regexpTypeQuery {
r.regexp.queries = append(r.regexp.queries, rr)
} else {
r.regexp.path = rr
}
}
r.addMatcher(rr)
return nil
}
// Headers --------------------------------------------------------------------
// headerMatcher matches the request against header values.
type headerMatcher map[string]string
func (m headerMatcher) Match(r *http.Request, match *RouteMatch) bool {
return matchMapWithString(m, r.Header, true)
}
// Headers adds a matcher for request header values.
// It accepts a sequence of key/value pairs to be matched. For example:
//
// r := mux.NewRouter().NewRoute()
// r.Headers("Content-Type", "application/json",
// "X-Requested-With", "XMLHttpRequest")
//
// The above route will only match if both request header values match.
// If the value is an empty string, it will match any value if the key is set.
func (r *Route) Headers(pairs ...string) *Route {
if r.err == nil {
var headers map[string]string
headers, r.err = mapFromPairsToString(pairs...)
return r.addMatcher(headerMatcher(headers))
}
return r
}
// headerRegexMatcher matches the request against the route given a regex for the header
type headerRegexMatcher map[string]*regexp.Regexp
func (m headerRegexMatcher) Match(r *http.Request, match *RouteMatch) bool {
return matchMapWithRegex(m, r.Header, true)
}
// HeadersRegexp accepts a sequence of key/value pairs, where the value has regex
// support. For example:
//
// r := mux.NewRouter().NewRoute()
// r.HeadersRegexp("Content-Type", "application/(text|json)",
// "X-Requested-With", "XMLHttpRequest")
//
// The above route will only match if both request headers match their regular expressions.
// If the value is an empty string, it will match any value if the key is set.
// Use the start and end of string anchors (^ and $) to match an exact value.
func (r *Route) HeadersRegexp(pairs ...string) *Route {
if r.err == nil {
var headers map[string]*regexp.Regexp
headers, r.err = mapFromPairsToRegex(pairs...)
return r.addMatcher(headerRegexMatcher(headers))
}
return r
}
// Host -----------------------------------------------------------------------
// Host adds a matcher for the URL host.
// It accepts a template with zero or more URL variables enclosed by {}.
// Variables can define an optional regexp pattern to be matched:
//
// - {name} matches anything until the next dot.
//
// - {name:pattern} matches the given regexp pattern.
//
// For example:
//
// r := mux.NewRouter().NewRoute()
// r.Host("www.example.com")
// r.Host("{subdomain}.domain.com")
// r.Host("{subdomain:[a-z]+}.domain.com")
//
// Variable names must be unique in a given route. They can be retrieved
// calling mux.Vars(request).
func (r *Route) Host(tpl string) *Route {
r.err = r.addRegexpMatcher(tpl, regexpTypeHost)
return r
}
// MatcherFunc ----------------------------------------------------------------
// MatcherFunc is the function signature used by custom matchers.
type MatcherFunc func(*http.Request, *RouteMatch) bool
// Match returns the match for a given request.
func (m MatcherFunc) Match(r *http.Request, match *RouteMatch) bool {
return m(r, match)
}
// MatcherFunc adds a custom function to be used as request matcher.
func (r *Route) MatcherFunc(f MatcherFunc) *Route {
return r.addMatcher(f)
}
// Methods --------------------------------------------------------------------
// methodMatcher matches the request against HTTP methods.
type methodMatcher []string
func (m methodMatcher) Match(r *http.Request, match *RouteMatch) bool {
return matchInArray(m, r.Method)
}
// Methods adds a matcher for HTTP methods.
// It accepts a sequence of one or more methods to be matched, e.g.:
// "GET", "POST", "PUT".
func (r *Route) Methods(methods ...string) *Route {
for k, v := range methods {
methods[k] = strings.ToUpper(v)
}
return r.addMatcher(methodMatcher(methods))
}
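// exampleMethods is a minimal sketch: the same path registered twice with
// different methods; the handler arguments are illustrative.
func exampleMethods(listHandler, createHandler http.Handler) *Router {
	r := NewRouter()
	r.Handle("/users", listHandler).Methods("GET")
	r.Handle("/users", createHandler).Methods("POST")
	return r
}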
// Path -----------------------------------------------------------------------
// Path adds a matcher for the URL path.
// It accepts a template with zero or more URL variables enclosed by {}. The
// template must start with a "/".
// Variables can define an optional regexp pattern to be matched:
//
// - {name} matches anything until the next slash.
//
// - {name:pattern} matches the given regexp pattern.
//
// For example:
//
// r := mux.NewRouter().NewRoute()
// r.Path("/products/").Handler(ProductsHandler)
// r.Path("/products/{key}").Handler(ProductsHandler)
// r.Path("/articles/{category}/{id:[0-9]+}").
// Handler(ArticleHandler)
//
// Variable names must be unique in a given route. They can be retrieved
// calling mux.Vars(request).
func (r *Route) Path(tpl string) *Route {
r.err = r.addRegexpMatcher(tpl, regexpTypePath)
return r
}
// PathPrefix -----------------------------------------------------------------
// PathPrefix adds a matcher for the URL path prefix. This matches if the given
// template is a prefix of the full URL path. See Route.Path() for details on
// the tpl argument.
//
// Note that it does not treat slashes specially ("/foobar/" will be matched by
// the prefix "/foo") so you may want to use a trailing slash here.
//
// Also note that the setting of Router.StrictSlash() has no effect on routes
// with a PathPrefix matcher.
func (r *Route) PathPrefix(tpl string) *Route {
r.err = r.addRegexpMatcher(tpl, regexpTypePrefix)
return r
}
// Query ----------------------------------------------------------------------
// Queries adds a matcher for URL query values.
// It accepts a sequence of key/value pairs. Values may define variables.
// For example:
//
// r := mux.NewRouter().NewRoute()
// r.Queries("foo", "bar", "id", "{id:[0-9]+}")
//
// The above route will only match if the URL contains the defined queries
// values, e.g.: ?foo=bar&id=42.
//
// If the value is an empty string, it will match any value if the key is set.
//
// Variables can define an optional regexp pattern to be matched:
//
// - {name} matches anything until the next slash.
//
// - {name:pattern} matches the given regexp pattern.
func (r *Route) Queries(pairs ...string) *Route {
length := len(pairs)
if length%2 != 0 {
r.err = fmt.Errorf(
"mux: number of parameters must be multiple of 2, got %v", pairs)
return nil
}
for i := 0; i < length; i += 2 {
if r.err = r.addRegexpMatcher(pairs[i]+"="+pairs[i+1], regexpTypeQuery); r.err != nil {
return r
}
}
return r
}
// Schemes --------------------------------------------------------------------
// schemeMatcher matches the request against URL schemes.
type schemeMatcher []string
func (m schemeMatcher) Match(r *http.Request, match *RouteMatch) bool {
scheme := r.URL.Scheme
// https://golang.org/pkg/net/http/#Request
// "For [most] server requests, fields other than Path and RawQuery will be
// empty."
// Since we're an http muxer, the scheme is either going to be http or https
// though, so we can just set it based on the tls termination state.
if scheme == "" {
if r.TLS == nil {
scheme = "http"
} else {
scheme = "https"
}
}
return matchInArray(m, scheme)
}
// Schemes adds a matcher for URL schemes.
// It accepts a sequence of schemes to be matched, e.g.: "http", "https".
// If the request's URL has a scheme set, it will be matched against.
// Generally, the URL scheme will only be set if a previous handler set it,
// such as the ProxyHeaders handler from gorilla/handlers.
// If unset, the scheme will be determined based on the request's TLS
// termination state.
// The first argument to Schemes will be used when constructing a route URL.
func (r *Route) Schemes(schemes ...string) *Route {
for k, v := range schemes {
schemes[k] = strings.ToLower(v)
}
if len(schemes) > 0 {
r.buildScheme = schemes[0]
}
return r.addMatcher(schemeMatcher(schemes))
}
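// exampleSchemes is a minimal sketch: the route below only matches requests
// whose scheme is "https" (or whose TLS state implies it); the handler is illustrative.
func exampleSchemes(secureHandler http.Handler) *Router {
	r := NewRouter()
	r.Schemes("https").Handler(secureHandler)
	return r
}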
// BuildVarsFunc --------------------------------------------------------------
// BuildVarsFunc is the function signature used by custom build variable
// functions (which can modify route variables before a route's URL is built).
type BuildVarsFunc func(map[string]string) map[string]string
// BuildVarsFunc adds a custom function to be used to modify build variables
// before a route's URL is built.
func (r *Route) BuildVarsFunc(f BuildVarsFunc) *Route {
if r.buildVarsFunc != nil {
// compose the old and new functions
old := r.buildVarsFunc
r.buildVarsFunc = func(m map[string]string) map[string]string {
return f(old(m))
}
} else {
r.buildVarsFunc = f
}
return r
}
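// exampleBuildVarsFunc is a minimal sketch: the build-vars function below
// lowercases the "category" variable before URLs are built for the route.
func exampleBuildVarsFunc() *Route {
	r := NewRouter()
	return r.Path("/articles/{category}").BuildVarsFunc(func(vars map[string]string) map[string]string {
		vars["category"] = strings.ToLower(vars["category"])
		return vars
	})
}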
// Subrouter ------------------------------------------------------------------
// Subrouter creates a subrouter for the route.
//
// It will test the inner routes only if the parent route matched. For example:
//
// r := mux.NewRouter().NewRoute()
// s := r.Host("www.example.com").Subrouter()
// s.HandleFunc("/products/", ProductsHandler)
// s.HandleFunc("/products/{key}", ProductHandler)
// s.HandleFunc("/articles/{category}/{id:[0-9]+}"), ArticleHandler)
//
// Here, the routes registered in the subrouter won't be tested if the host
// doesn't match.
func (r *Route) Subrouter() *Router {
// initialize a subrouter with a copy of the parent route's configuration
router := &Router{routeConf: copyRouteConf(r.routeConf), namedRoutes: r.namedRoutes}
r.addMatcher(router)
return router
}
// ----------------------------------------------------------------------------
// URL building
// ----------------------------------------------------------------------------
// URL builds a URL for the route.
//
// It accepts a sequence of key/value pairs for the route variables. For
// example, given this route:
//
// r := mux.NewRouter()
// r.HandleFunc("/articles/{category}/{id:[0-9]+}", ArticleHandler).
// Name("article")
//
// ...a URL for it can be built using:
//
// url, err := r.Get("article").URL("category", "technology", "id", "42")
//
// ...which will return an url.URL with the following path:
//
// "/articles/technology/42"
//
// This also works for host variables:
//
// r := mux.NewRouter()
// r.HandleFunc("/articles/{category}/{id:[0-9]+}", ArticleHandler).
// Host("{subdomain}.domain.com").
// Name("article")
//
// // url.String() will be "http://news.domain.com/articles/technology/42"
// url, err := r.Get("article").URL("subdomain", "news",
// "category", "technology",
// "id", "42")
//
// The scheme of the resulting url will be the first argument that was passed to Schemes:
//
// // url.String() will be "https://example.com"
// r := mux.NewRouter().NewRoute()
// url, err := r.Host("example.com").Schemes("https", "http").URL()
//
// All variables defined in the route are required, and their values must
// conform to the corresponding patterns.
func (r *Route) URL(pairs ...string) (*url.URL, error) {
if r.err != nil {
return nil, r.err
}
values, err := r.prepareVars(pairs...)
if err != nil {
return nil, err
}
var scheme, host, path string
queries := make([]string, 0, len(r.regexp.queries))
if r.regexp.host != nil {
if host, err = r.regexp.host.url(values); err != nil {
return nil, err
}
scheme = "http"
if r.buildScheme != "" {
scheme = r.buildScheme
}
}
if r.regexp.path != nil {
if path, err = r.regexp.path.url(values); err != nil {
return nil, err
}
}
for _, q := range r.regexp.queries {
var query string
if query, err = q.url(values); err != nil {
return nil, err
}
queries = append(queries, query)
}
return &url.URL{
Scheme: scheme,
Host: host,
Path: path,
RawQuery: strings.Join(queries, "&"),
}, nil
}
// URLHost builds the host part of the URL for a route. See Route.URL().
//
// The route must have a host defined.
func (r *Route) URLHost(pairs ...string) (*url.URL, error) {
if r.err != nil {
return nil, r.err
}
if r.regexp.host == nil {
return nil, errors.New("mux: route doesn't have a host")
}
values, err := r.prepareVars(pairs...)
if err != nil {
return nil, err
}
host, err := r.regexp.host.url(values)
if err != nil {
return nil, err
}
u := &url.URL{
Scheme: "http",
Host: host,
}
if r.buildScheme != "" {
u.Scheme = r.buildScheme
}
return u, nil
}
// URLPath builds the path part of the URL for a route. See Route.URL().
//
// The route must have a path defined.
func (r *Route) URLPath(pairs ...string) (*url.URL, error) {
if r.err != nil {
return nil, r.err
}
if r.regexp.path == nil {
return nil, errors.New("mux: route doesn't have a path")
}
values, err := r.prepareVars(pairs...)
if err != nil {
return nil, err
}
path, err := r.regexp.path.url(values)
if err != nil {
return nil, err
}
return &url.URL{
Path: path,
}, nil
}
// GetPathTemplate returns the template used to build the
// route match.
// This is useful for building simple REST API documentation and for instrumentation
// against third-party services.
// An error will be returned if the route does not define a path.
func (r *Route) GetPathTemplate() (string, error) {
if r.err != nil {
return "", r.err
}
if r.regexp.path == nil {
return "", errors.New("mux: route doesn't have a path")
}
return r.regexp.path.template, nil
}
// GetPathRegexp returns the expanded regular expression used to match route path.
// This is useful for building simple REST API documentation and for instrumentation
// against third-party services.
// An error will be returned if the route does not define a path.
func (r *Route) GetPathRegexp() (string, error) {
if r.err != nil {
return "", r.err
}
if r.regexp.path == nil {
return "", errors.New("mux: route does not have a path")
}
return r.regexp.path.regexp.String(), nil
}
// GetQueriesRegexp returns the expanded regular expressions used to match the
// route queries.
// This is useful for building simple REST API documentation and for instrumentation
// against third-party services.
// An error will be returned if the route does not have queries.
func (r *Route) GetQueriesRegexp() ([]string, error) {
if r.err != nil {
return nil, r.err
}
if r.regexp.queries == nil {
return nil, errors.New("mux: route doesn't have queries")
}
queries := make([]string, 0, len(r.regexp.queries))
for _, query := range r.regexp.queries {
queries = append(queries, query.regexp.String())
}
return queries, nil
}
// GetQueriesTemplates returns the templates used to build the
// query matching.
// This is useful for building simple REST API documentation and for instrumentation
// against third-party services.
// An error will be returned if the route does not define queries.
func (r *Route) GetQueriesTemplates() ([]string, error) {
if r.err != nil {
return nil, r.err
}
if r.regexp.queries == nil {
return nil, errors.New("mux: route doesn't have queries")
}
queries := make([]string, 0, len(r.regexp.queries))
for _, query := range r.regexp.queries {
queries = append(queries, query.template)
}
return queries, nil
}
// GetMethods returns the methods the route matches against
// This is useful for building simple REST API documentation and for instrumentation
// against third-party services.
// An error will be returned if route does not have methods.
func (r *Route) GetMethods() ([]string, error) {
if r.err != nil {
return nil, r.err
}
for _, m := range r.matchers {
if methods, ok := m.(methodMatcher); ok {
return []string(methods), nil
}
}
return nil, errors.New("mux: route doesn't have methods")
}
// GetHostTemplate returns the template used to build the
// route match.
// This is useful for building simple REST API documentation and for instrumentation
// against third-party services.
// An error will be returned if the route does not define a host.
func (r *Route) GetHostTemplate() (string, error) {
if r.err != nil {
return "", r.err
}
if r.regexp.host == nil {
return "", errors.New("mux: route doesn't have a host")
}
return r.regexp.host.template, nil
}
// GetVarNames returns the names of all variables added by regexp matchers
// These can be used to know which route variables should be passed into r.URL()
func (r *Route) GetVarNames() ([]string, error) {
if r.err != nil {
return nil, r.err
}
var varNames []string
if r.regexp.host != nil {
varNames = append(varNames, r.regexp.host.varsN...)
}
if r.regexp.path != nil {
varNames = append(varNames, r.regexp.path.varsN...)
}
for _, regx := range r.regexp.queries {
varNames = append(varNames, regx.varsN...)
}
return varNames, nil
}
// prepareVars converts the route variable pairs into a map. If the route has a
// BuildVarsFunc, it is invoked.
func (r *Route) prepareVars(pairs ...string) (map[string]string, error) {
m, err := mapFromPairsToString(pairs...)
if err != nil {
return nil, err
}
return r.buildVars(m), nil
}
func (r *Route) buildVars(m map[string]string) map[string]string {
if r.buildVarsFunc != nil {
m = r.buildVarsFunc(m)
}
return m
}

View File

@ -0,0 +1,19 @@
// Copyright 2012 The Gorilla Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package mux
import "net/http"
// SetURLVars sets the URL variables for the given request, to be accessed via
// mux.Vars for testing route behaviour. Arguments are not modified, a shallow
// copy is returned.
//
// This API should only be used for testing purposes; it provides a way to
// inject variables into the request context. Alternatively, URL variables
// can be set by making a route that captures the required variables,
// starting a server and sending the request to that server.
func SetURLVars(r *http.Request, val map[string]string) *http.Request {
return requestWithVars(r, val)
}
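// exampleSetURLVars is a minimal sketch of how a test might inject variables
// before calling a handler directly; the handler and "id" variable are illustrative.
func exampleSetURLVars(handler http.HandlerFunc, w http.ResponseWriter, req *http.Request) {
	req = SetURLVars(req, map[string]string{"id": "42"})
	// Inside the handler, Vars(req)["id"] now returns "42".
	handler(w, req)
}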

View File

@ -0,0 +1,24 @@
# Compiled Object files, Static and Dynamic libs (Shared Objects)
*.o
*.a
*.so
# Folders
_obj
_test
# Architecture specific extensions/prefixes
*.[568vq]
[568vq].out
*.cgo1.go
*.cgo2.c
_cgo_defun.c
_cgo_gotypes.go
_cgo_export.*
_testmain.go
*.exe
*.test
*.prof

View File

@ -0,0 +1,11 @@
language: go
go:
- "1.8"
- "1.9"
- "1.10"
- "1.11"
- "1.12"
- "1.13"
- "1.14"
- tip

View File

@ -0,0 +1,11 @@
Copyright 2021 Alexander F. Rødseth
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

View File

@ -0,0 +1,22 @@
# pinterface
[![Go Report Card](https://goreportcard.com/badge/github.com/xyproto/pinterface)](https://goreportcard.com/report/github.com/xyproto/pinterface) [![License](https://img.shields.io/badge/license-BSD-green.svg?style=flat)](https://raw.githubusercontent.com/xyproto/pinterface/main/LICENSE)
Interface types for `simple*` and `permission*` packages.
Interfaces for:
---------------
* Redis: [permissions2](https://github.com/xyproto/permissions2) and [simpleredis](https://github.com/xyproto/simpleredis)
* Bolt: [permissionbolt](https://github.com/xyproto/permissionbolt) and [simplebolt](https://github.com/xyproto/simplebolt)
* MariaDB/MySQL: [permissionsql](https://github.com/xyproto/permissionsql) and [simplemaria](https://github.com/xyproto/simplemaria)
* PostgreSQL: [pstore](https://github.com/xyproto/pstore) and [simplehstore](https://github.com/xyproto/simplehstore)
[![Packaging status](https://repology.org/badge/vertical-allrepos/go:github-xyproto-pinterface.svg)](https://repology.org/project/go:github-xyproto-pinterface/versions)
General information
-------------------
* Version: 1.5.3 (The tag is `v1.5.3` to work better with `go mod`. The API has version `5.3`.)
* License: BSD-3
* Author: Alexander F. Rødseth &lt;xyproto@archlinux.org&gt;

View File

@ -0,0 +1,161 @@
// Package pinterface provides interface types for the xyproto/simple* and xyproto/permission* packages
package pinterface
import "net/http"
// Version is the API version. The API is stable within the same major version number
const Version = 5.3
// Database interfaces
type IList interface {
Add(value string) error
All() ([]string, error)
Clear() error
LastN(n int) ([]string, error)
Last() (string, error)
Remove() error
}
type ISet interface {
Add(value string) error
All() ([]string, error)
Clear() error
Del(value string) error
Has(value string) (bool, error)
Remove() error
}
type IHashMap interface {
All() ([]string, error)
Clear() error
DelKey(owner, key string) error
Del(key string) error
Exists(owner string) (bool, error)
Get(owner, key string) (string, error)
Has(owner, key string) (bool, error)
Keys(owner string) ([]string, error)
Remove() error
Set(owner, key, value string) error
}
type IHashMap2 interface {
All() ([]string, error)
AllWhere(key, value string) ([]string, error)
Clear() error
Count() (int64, error)
DelKey(owner, key string) error
Del(key string) error
Empty() (bool, error)
Exists(owner string) (bool, error)
GetMap(owner string, keys []string) (map[string]string, error)
Get(owner, key string) (string, error)
Has(owner, key string) (bool, error)
Keys(owner string) ([]string, error)
Remove() error
SetLargeMap(all map[string]map[string]string) error
SetMap(owner string, m map[string]string) error
Set(owner, key, value string) error
}
type IKeyValue interface {
Clear() error
Del(key string) error
Get(key string) (string, error)
Inc(key string) (string, error)
Remove() error
Set(key, value string) error
}
// IUserState makes it possible to depend on different versions of the permission package,
// or on other packages that implement user states.
type IUserState interface {
AddUnconfirmed(username, confirmationCode string)
AddUser(username, password, email string)
AdminRights(req *http.Request) bool
AllUnconfirmedUsernames() ([]string, error)
AllUsernames() ([]string, error)
AlreadyHasConfirmationCode(confirmationCode string) bool
BooleanField(username, fieldname string) bool
ClearCookie(w http.ResponseWriter)
ConfirmationCode(username string) (string, error)
ConfirmUserByConfirmationCode(confirmationcode string) error
Confirm(username string)
CookieSecret() string
CookieTimeout(username string) int64
CorrectPassword(username, password string) bool
Email(username string) (string, error)
FindUserByConfirmationCode(confirmationcode string) (string, error)
GenerateUniqueConfirmationCode() (string, error)
HashPassword(username, password string) string
HasUser(username string) bool
IsAdmin(username string) bool
IsConfirmed(username string) bool
IsLoggedIn(username string) bool
Login(w http.ResponseWriter, username string) error
Logout(username string)
MarkConfirmed(username string)
PasswordAlgo() string
PasswordHash(username string) (string, error)
RemoveAdminStatus(username string)
RemoveUnconfirmed(username string)
RemoveUser(username string)
SetAdminStatus(username string)
SetBooleanField(username, fieldname string, val bool)
SetCookieSecret(cookieSecret string)
SetCookieTimeout(cookieTime int64)
SetLoggedIn(username string)
SetLoggedOut(username string)
SetMinimumConfirmationCodeLength(length int)
SetPasswordAlgo(algorithm string) error
SetPassword(username, password string)
SetUsernameCookie(w http.ResponseWriter, username string) error
UsernameCookie(req *http.Request) (string, error)
Username(req *http.Request) string
UserRights(req *http.Request) bool
Creator() ICreator
Host() IHost
Users() IHashMap
}
// Data structure creator
type ICreator interface {
NewHashMap(id string) (IHashMap, error)
NewKeyValue(id string) (IKeyValue, error)
NewList(id string) (IList, error)
NewSet(id string) (ISet, error)
}
// Database host (or file)
type IHost interface {
Close()
Ping() error
}
// Redis host (implemented structures can also be an IHost, of course)
type IRedisHost interface {
DatabaseIndex()
Pool()
}
// Redis data structure creator
type IRedisCreator interface {
SelectDatabase(dbindex int)
}
// Middleware for permissions
type IPermissions interface {
AddAdminPath(prefix string)
AddPublicPath(prefix string)
AddUserPath(prefix string)
Clear()
DenyFunction() http.HandlerFunc
Rejected(w http.ResponseWriter, req *http.Request) bool
ServeHTTP(w http.ResponseWriter, req *http.Request, next http.HandlerFunc)
SetAdminPath(pathPrefixes []string)
SetDenyFunction(f http.HandlerFunc)
SetPublicPath(pathPrefixes []string)
SetUserPath(pathPrefixes []string)
UserState() IUserState
}

View File

@ -0,0 +1,11 @@
Copyright 2023 Alexander F. Rødseth
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

View File

@ -0,0 +1,36 @@
package simpleredis
import (
"github.com/xyproto/pinterface"
)
// For implementing pinterface.ICreator
type RedisCreator struct {
pool *ConnectionPool
dbindex int
}
func NewCreator(pool *ConnectionPool, dbindex int) *RedisCreator {
return &RedisCreator{pool, dbindex}
}
func (c *RedisCreator) SelectDatabase(dbindex int) {
c.dbindex = dbindex
}
func (c *RedisCreator) NewList(id string) (pinterface.IList, error) {
return &List{c.pool, id, c.dbindex}, nil
}
func (c *RedisCreator) NewSet(id string) (pinterface.ISet, error) {
return &Set{c.pool, id, c.dbindex}, nil
}
func (c *RedisCreator) NewHashMap(id string) (pinterface.IHashMap, error) {
return &HashMap{c.pool, id, c.dbindex}, nil
}
func (c *RedisCreator) NewKeyValue(id string) (pinterface.IKeyValue, error) {
return &KeyValue{c.pool, id, c.dbindex}, nil
}

View File

@ -0,0 +1,737 @@
// Package simpleredis provides an easy way to use Redis.
package simpleredis
import (
"errors"
"strconv"
"strings"
"time"
"github.com/gomodule/redigo/redis"
)
const (
// Version number. Stable API within major version numbers.
Version = 2.6
// The default [url]:port that Redis is running at
defaultRedisServer = ":6379"
)
// Common for each of the Redis data structures used here
type redisDatastructure struct {
pool *ConnectionPool
id string
dbindex int
}
type (
// A pool of readily available Redis connections
ConnectionPool redis.Pool
List redisDatastructure
Set redisDatastructure
HashMap redisDatastructure
KeyValue redisDatastructure
)
var (
// Timeout settings for new connections
connectTimeout = 7 * time.Second
readTimeout = 7 * time.Second
writeTimeout = 7 * time.Second
idleTimeout = 240 * time.Second
// How many connections should stay ready for requests, at a maximum?
// When an idle connection is used, new idle connections are created.
maxIdleConnections = 3
)
/* --- Helper functions --- */
// Connect to the local instance of Redis at port 6379
func newRedisConnection() (redis.Conn, error) {
return newRedisConnectionTo(defaultRedisServer)
}
// Connect to host:port, host may be omitted, so ":6379" is valid.
// Will not try to AUTH with any given password (password@host:port).
func newRedisConnectionTo(hostColonPort string) (redis.Conn, error) {
// Discard the password, if provided
if _, theRest, ok := twoFields(hostColonPort, "@"); ok {
hostColonPort = theRest
}
hostColonPort = strings.TrimSpace(hostColonPort)
c, err := redis.Dial("tcp", hostColonPort, redis.DialConnectTimeout(connectTimeout), redis.DialReadTimeout(readTimeout), redis.DialWriteTimeout(writeTimeout))
if err != nil {
if c != nil {
c.Close()
}
return nil, err
}
return c, nil
}
// Get a string from a list of results at a given position
func getString(bi []interface{}, i int) string {
return string(bi[i].([]uint8))
}
// Test if the local Redis server is up and running
func TestConnection() (err error) {
return TestConnectionHost(defaultRedisServer)
}
// Test if a given Redis server at host:port is up and running.
// Does not try to PING or AUTH.
func TestConnectionHost(hostColonPort string) (err error) {
// Connect to the given host:port
conn, err := newRedisConnectionTo(hostColonPort)
defer func() {
if conn != nil {
conn.Close()
}
if r := recover(); r != nil {
err = errors.New("Could not connect to redis server: " + hostColonPort)
}
}()
return err
}
/* --- ConnectionPool functions --- */
func copyPoolValues(src *redis.Pool) ConnectionPool {
return ConnectionPool{
Dial: src.Dial,
DialContext: src.DialContext,
TestOnBorrow: src.TestOnBorrow,
MaxIdle: src.MaxIdle,
MaxActive: src.MaxActive,
IdleTimeout: src.IdleTimeout,
Wait: src.Wait,
MaxConnLifetime: src.MaxConnLifetime,
}
}
// Create a new connection pool
func NewConnectionPool() *ConnectionPool {
// The second argument is the maximum number of idle connections
redisPool := &redis.Pool{
MaxIdle: maxIdleConnections,
IdleTimeout: idleTimeout,
Dial: newRedisConnection,
}
pool := copyPoolValues(redisPool)
return &pool
}
// Split a string into two parts, given a delimiter.
// Returns the two parts and true if it works out.
func twoFields(s, delim string) (string, string, bool) {
if strings.Count(s, delim) != 1 {
return s, "", false
}
fields := strings.Split(s, delim)
return fields[0], fields[1], true
}
// Create a new connection pool given a host:port string.
// A password may be supplied as well, on the form "password@host:port".
func NewConnectionPoolHost(hostColonPort string) *ConnectionPool {
// Create a redis Pool
redisPool := &redis.Pool{
// Maximum number of idle connections to the redis database
MaxIdle: maxIdleConnections,
IdleTimeout: idleTimeout,
// Anonymous function for calling new RedisConnectionTo with the host:port
Dial: func() (redis.Conn, error) {
conn, err := newRedisConnectionTo(hostColonPort)
if err != nil {
return nil, err
}
// If a password is given, use it to authenticate
if password, _, ok := twoFields(hostColonPort, "@"); ok {
if password != "" {
if _, err := conn.Do("AUTH", password); err != nil {
conn.Close()
return nil, err
}
}
}
return conn, err
},
}
pool := copyPoolValues(redisPool)
return &pool
}
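// Editor's illustration, not part of the upstream simpleredis source: build a
// pool for a non-local server, check that it answers, and close it when done.
// The "redis:6379" address is an assumed example value; the
// "password@host:port" form is also accepted, as documented above.
func examplePoolFromHost() error {
	pool := NewConnectionPoolHost("redis:6379")
	defer pool.Close()
	// Ping fails fast if the server cannot be reached.
	return pool.Ping()
}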
// Set the maximum number of *idle* connections standing ready when
// creating new connection pools. When an idle connection is used,
// a new idle connection is created. The default is 3 and should be fine
// for most cases.
func SetMaxIdleConnections(maximum int) {
maxIdleConnections = maximum
}
// Get one of the available connections from the connection pool, given a database index
func (pool *ConnectionPool) Get(dbindex int) redis.Conn {
redisPool := (*redis.Pool)(pool)
conn := redisPool.Get()
// The default database index is 0
if dbindex != 0 {
// SELECT is not critical, ignore the return values
conn.Do("SELECT", strconv.Itoa(dbindex))
}
return conn
}
// Ping the server by sending a PING command
func (pool *ConnectionPool) Ping() error {
redisPool := (*redis.Pool)(pool)
conn := redisPool.Get()
_, err := conn.Do("PING")
return err
}
// Close down the connection pool
func (pool *ConnectionPool) Close() {
redisPool := (*redis.Pool)(pool)
redisPool.Close()
}
/* --- List functions --- */
// Create a new list
func NewList(pool *ConnectionPool, id string) *List {
return &List{pool, id, 0}
}
// Select a different database
func (rl *List) SelectDatabase(dbindex int) {
rl.dbindex = dbindex
}
// Returns the element at the given index in the list
func (rl *List) Get(index int64) (string, error) {
conn := rl.pool.Get(rl.dbindex)
result, err := conn.Do("LINDEX", rl.id, index)
if err != nil {
panic(err)
}
return redis.String(result, err)
}
// Get the size of the list
func (rl *List) Size() (int64, error) {
conn := rl.pool.Get(rl.dbindex)
size, err := conn.Do("LLEN", rl.id)
if err != nil {
panic(err)
}
return redis.Int64(size, err)
}
// Removes and returns the first element of the list
func (rl *List) PopFirst() (string, error) {
conn := rl.pool.Get(rl.dbindex)
result, err := conn.Do("LPOP", rl.id)
if err != nil {
panic(err)
}
return redis.String(result, err)
}
// Removes and returns the last element of the list
func (rl *List) PopLast() (string, error) {
conn := rl.pool.Get(rl.dbindex)
result, err := conn.Do("LPOP", rl.id)
if err != nil {
panic(err)
}
return redis.String(result, err)
}
// Add an element to the start of the list
func (rl *List) AddStart(value string) error {
conn := rl.pool.Get(rl.dbindex)
_, err := conn.Do("RPUSH", rl.id, value)
return err
}
// Add an element to the end of the list
func (rl *List) AddEnd(value string) error {
conn := rl.pool.Get(rl.dbindex)
_, err := conn.Do("LPUSH", rl.id, value)
return err
}
// Default Add, aliased to List.AddStart
func (rl *List) Add(value string) error {
return rl.AddStart(value)
}
// Get all elements of a list
func (rl *List) All() ([]string, error) {
conn := rl.pool.Get(rl.dbindex)
result, err := redis.Values(conn.Do("LRANGE", rl.id, "0", "-1"))
strs := make([]string, len(result))
for i := 0; i < len(result); i++ {
strs[i] = getString(result, i)
}
return strs, err
}
// Deprecated: use All instead.
func (rl *List) GetAll() ([]string, error) {
return rl.All()
}
// Get the last element of a list
func (rl *List) Last() (string, error) {
conn := rl.pool.Get(rl.dbindex)
result, err := redis.Values(conn.Do("LRANGE", rl.id, "-1", "-1"))
if len(result) == 1 {
return getString(result, 0), err
}
return "", err
}
// Deprecated: use Last instead.
func (rl *List) GetLast() (string, error) {
return rl.Last()
}
// Get the last N elements of a list
func (rl *List) LastN(n int) ([]string, error) {
conn := rl.pool.Get(rl.dbindex)
result, err := redis.Values(conn.Do("LRANGE", rl.id, "-"+strconv.Itoa(n), "-1"))
strs := make([]string, len(result))
for i := 0; i < len(result); i++ {
strs[i] = getString(result, i)
}
return strs, err
}
// Deprecated: use LastN instead.
func (rl *List) GetLastN(n int) ([]string, error) {
return rl.LastN(n)
}
// Remove the first occurrence of an element from the list
func (rl *List) RemoveElement(value string) error {
conn := rl.pool.Get(rl.dbindex)
_, err := conn.Do("LREM", rl.id, value)
return err
}
// Set the element of the list at the given index to value
func (rl *List) Set(index int64, value string) error {
conn := rl.pool.Get(rl.dbindex)
_, err := conn.Do("LSET", rl.id, index, value)
return err
}
// Trim an existing list so that it will contain only the specified range of
// elements.
func (rl *List) Trim(start, stop int64) error {
conn := rl.pool.Get(rl.dbindex)
_, err := conn.Do("LTRIM", rl.id, start, stop)
return err
}
// Remove this list
func (rl *List) Remove() error {
conn := rl.pool.Get(rl.dbindex)
_, err := conn.Do("DEL", rl.id)
return err
}
// Clear the contents
func (rl *List) Clear() error {
return rl.Remove()
}
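// Editor's illustration, not part of the upstream simpleredis source: a
// minimal round trip with the List type, assuming a pool created elsewhere
// and an arbitrary list id "greetings".
func exampleListUsage(pool *ConnectionPool) ([]string, error) {
	list := NewList(pool, "greetings")
	defer list.Remove() // clean up the example key
	// Add stores elements; they can then be read back in order with All.
	if err := list.Add("hello"); err != nil {
		return nil, err
	}
	if err := list.Add("world"); err != nil {
		return nil, err
	}
	return list.All()
}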
/* --- Set functions --- */
// Create a new set
func NewSet(pool *ConnectionPool, id string) *Set {
return &Set{pool, id, 0}
}
// Select a different database
func (rs *Set) SelectDatabase(dbindex int) {
rs.dbindex = dbindex
}
// Add an element to the set
func (rs *Set) Add(value string) error {
conn := rs.pool.Get(rs.dbindex)
_, err := conn.Do("SADD", rs.id, value)
return err
}
// Returns the cardinality (number of elements) of the set
func (rs *Set) Size() (int64, error) {
conn := rs.pool.Get(rs.dbindex)
size, err := conn.Do("SCARD", rs.id)
if err != nil {
panic(err)
}
return redis.Int64(size, err)
}
// Check if a given value is in the set
func (rs *Set) Has(value string) (bool, error) {
conn := rs.pool.Get(rs.dbindex)
retval, err := conn.Do("SISMEMBER", rs.id, value)
if err != nil {
panic(err)
}
return redis.Bool(retval, err)
}
// Get all elements of the set
func (rs *Set) All() ([]string, error) {
conn := rs.pool.Get(rs.dbindex)
result, err := redis.Values(conn.Do("SMEMBERS", rs.id))
strs := make([]string, len(result))
for i := 0; i < len(result); i++ {
strs[i] = getString(result, i)
}
return strs, err
}
// Deprecated: use All instead.
func (rs *Set) GetAll() ([]string, error) {
return rs.All()
}
// Remove a random member from the set
func (rs *Set) Pop() (string, error) {
conn := rs.pool.Get(rs.dbindex)
result, err := conn.Do("SPOP", rs.id)
if err != nil {
panic(err)
}
return redis.String(result, err)
}
// Get a random member of the set
func (rs *Set) Random() (string, error) {
conn := rs.pool.Get(rs.dbindex)
result, err := conn.Do("SRANDMEMBER", rs.id)
if err != nil {
panic(err)
}
return redis.String(result, err)
}
// Remove an element from the set
func (rs *Set) Del(value string) error {
conn := rs.pool.Get(rs.dbindex)
_, err := conn.Do("SREM", rs.id, value)
return err
}
// Remove this set
func (rs *Set) Remove() error {
conn := rs.pool.Get(rs.dbindex)
_, err := conn.Do("DEL", rs.id)
return err
}
// Clear the contents
func (rs *Set) Clear() error {
return rs.Remove()
}
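// Editor's illustration, not part of the upstream simpleredis source: basic
// Set membership, assuming a pool created elsewhere and the example set id
// "colors".
func exampleSetUsage(pool *ConnectionPool) (bool, error) {
	set := NewSet(pool, "colors")
	defer set.Remove() // clean up the example key
	if err := set.Add("red"); err != nil {
		return false, err
	}
	// Has reports whether "red" is a member of the set.
	return set.Has("red")
}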
/* --- HashMap functions --- */
// Create a new hashmap
func NewHashMap(pool *ConnectionPool, id string) *HashMap {
return &HashMap{pool, id, 0}
}
// Select a different database
func (rh *HashMap) SelectDatabase(dbindex int) {
rh.dbindex = dbindex
}
// Set a value in a hashmap given the element id (for instance a user id) and the key (for instance "password")
func (rh *HashMap) Set(elementid, key, value string) error {
conn := rh.pool.Get(rh.dbindex)
_, err := conn.Do("HSET", rh.id+":"+elementid, key, value)
return err
}
// Given an element id, set a key and a value together with an expiration time
func (rh *HashMap) SetExpire(elementid, key, value string, expire time.Duration) error {
conn := rh.pool.Get(rh.dbindex)
if _, err := conn.Do("HSET", rh.id+":"+elementid, key, value); err != nil {
return err
}
// No EXPIRE in Redis for hash keys, as far as I can tell from the documentation.
// This is the manual way.
go func() {
time.Sleep(expire)
rh.DelKey(elementid, key)
}()
// Set the elementid to expire in the given duration (as milliseconds)
//expireMilliseconds := expire.Nanoseconds() / 1000000
//if _, err := conn.Do("PEXPIRE", rh.id+":"+elementid, expireMilliseconds); err != nil {
// return err
//}
return nil
}
// Commented out because this would only return TTL for the elementid, not for the key
// TimeToLive returns how long a key has to live until it expires
// Returns a duration of 0 when the time has passed
//func (rh *HashMap) TimeToLive(elementid string) (time.Duration, error) {
// conn := rh.pool.Get(rh.dbindex)
// ttlSecondsInterface, err := conn.Do("TTL", rh.id+":"+elementid)
// if err != nil || ttlSecondsInterface.(int64) <= 0 {
// return time.Duration(0), err
// }
// ns := time.Duration(ttlSecondsInterface.(int64)) * time.Second
// return ns, nil
//}
// Get a value from a hashmap given the element id (for instance a user id) and the key (for instance "password")
func (rh *HashMap) Get(elementid, key string) (string, error) {
conn := rh.pool.Get(rh.dbindex)
result, err := redis.String(conn.Do("HGET", rh.id+":"+elementid, key))
if err != nil {
return "", err
}
return result, nil
}
// Check if a given elementid + key is in the hash map
func (rh *HashMap) Has(elementid, key string) (bool, error) {
conn := rh.pool.Get(rh.dbindex)
retval, err := conn.Do("HEXISTS", rh.id+":"+elementid, key)
if err != nil {
panic(err)
}
return redis.Bool(retval, err)
}
// Keys returns the keys of the given elementid.
func (rh *HashMap) Keys(elementid string) ([]string, error) {
conn := rh.pool.Get(rh.dbindex)
result, err := redis.Values(conn.Do("HKEYS", rh.id+":"+elementid))
strs := make([]string, len(result))
for i := 0; i < len(result); i++ {
strs[i] = getString(result, i)
}
return strs, err
}
// Check if a given elementid exists as a hash map at all
func (rh *HashMap) Exists(elementid string) (bool, error) {
// TODO: key is not meant to be a wildcard, check for "*"
return hasKey(rh.pool, rh.id+":"+elementid, rh.dbindex)
}
// Get all element ids for all hash elements
func (rh *HashMap) All() ([]string, error) {
conn := rh.pool.Get(rh.dbindex)
result, err := redis.Values(conn.Do("KEYS", rh.id+":*"))
strs := make([]string, len(result))
idlen := len(rh.id)
for i := 0; i < len(result); i++ {
strs[i] = getString(result, i)[idlen+1:]
}
return strs, err
}
// Deprecated: use All instead.
func (rh *HashMap) GetAll() ([]string, error) {
return rh.All()
}
// Remove a key for an entry in a hashmap (for instance the email field for a user)
func (rh *HashMap) DelKey(elementid, key string) error {
conn := rh.pool.Get(rh.dbindex)
_, err := conn.Do("HDEL", rh.id+":"+elementid, key)
return err
}
// Remove an element (for instance a user)
func (rh *HashMap) Del(elementid string) error {
conn := rh.pool.Get(rh.dbindex)
_, err := conn.Do("DEL", rh.id+":"+elementid)
return err
}
// Remove this hashmap (all keys that start with this hashmap id and a colon)
func (rh *HashMap) Remove() error {
conn := rh.pool.Get(rh.dbindex)
// Find all hashmap keys that start with rh.id+":"
results, err := redis.Values(conn.Do("KEYS", rh.id+":*"))
if err != nil {
return err
}
// For each key id
for i := 0; i < len(results); i++ {
// Delete this key
if _, err = conn.Do("DEL", getString(results, i)); err != nil {
return err
}
}
return nil
}
// Clear the contents
func (rh *HashMap) Clear() error {
return rh.Remove()
}
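// Editor's illustration, not part of the upstream simpleredis source: the
// HashMap type keys entries as "<id>:<elementid>", so a "users" hash map with
// element id "alice" is stored under "users:alice". The ids and values below
// are assumed example data.
func exampleHashMapUsage(pool *ConnectionPool) (string, error) {
	users := NewHashMap(pool, "users")
	defer users.Remove() // clean up the example keys
	if err := users.Set("alice", "email", "alice@example.com"); err != nil {
		return "", err
	}
	// Get reads a single field back for the given element id.
	return users.Get("alice", "email")
}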
/* --- KeyValue functions --- */
// Create a new key/value
func NewKeyValue(pool *ConnectionPool, id string) *KeyValue {
return &KeyValue{pool, id, 0}
}
// Select a different database
func (rkv *KeyValue) SelectDatabase(dbindex int) {
rkv.dbindex = dbindex
}
// Set a key and value
func (rkv *KeyValue) Set(key, value string) error {
conn := rkv.pool.Get(rkv.dbindex)
_, err := conn.Do("SET", rkv.id+":"+key, value)
return err
}
// Set a key and value, with expiry
func (rkv *KeyValue) SetExpire(key, value string, expire time.Duration) error {
conn := rkv.pool.Get(rkv.dbindex)
// Convert from nanoseconds to milliseconds
expireMilliseconds := expire.Nanoseconds() / 1000000
// Set the value, together with an expiry time, given in milliseconds
_, err := conn.Do("SET", rkv.id+":"+key, value, "PX", expireMilliseconds)
return err
}
// TimeToLive returns how long a key has to live until it expires
// Returns a duration of 0 when the time has passed
func (rkv *KeyValue) TimeToLive(key string) (time.Duration, error) {
conn := rkv.pool.Get(rkv.dbindex)
ttlSecondsInterface, err := conn.Do("TTL", rkv.id+":"+key)
if err != nil || ttlSecondsInterface.(int64) <= 0 {
return time.Duration(0), err
}
ns := time.Duration(ttlSecondsInterface.(int64)) * time.Second
return ns, nil
}
// Get a value given a key
func (rkv *KeyValue) Get(key string) (string, error) {
conn := rkv.pool.Get(rkv.dbindex)
result, err := redis.String(conn.Do("GET", rkv.id+":"+key))
if err != nil {
return "", err
}
return result, nil
}
// Remove a key
func (rkv *KeyValue) Del(key string) error {
conn := rkv.pool.Get(rkv.dbindex)
_, err := conn.Do("DEL", rkv.id+":"+key)
return err
}
// Increase the value of a key and return the new value as a string.
// Returns "0" together with the error if the INCR command fails.
func (rkv *KeyValue) Inc(key string) (string, error) {
conn := rkv.pool.Get(rkv.dbindex)
result, err := redis.Int64(conn.Do("INCR", rkv.id+":"+key))
if err != nil {
return "0", err
}
return strconv.FormatInt(result, 10), nil
}
// Remove this key/value
func (rkv *KeyValue) Remove() error {
conn := rkv.pool.Get(rkv.dbindex)
// Find all keys that start with rkv.id+":"
results, err := redis.Values(conn.Do("KEYS", rkv.id+":*"))
if err != nil {
return err
}
// For each key id
for i := 0; i < len(results); i++ {
// Delete this key
if _, err = conn.Do("DEL", getString(results, i)); err != nil {
return err
}
}
return nil
}
// Clear the contents
func (rkv *KeyValue) Clear() error {
return rkv.Remove()
}
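// Editor's illustration, not part of the upstream simpleredis source: a
// KeyValue entry written with SetExpire is stored with a millisecond TTL, so
// TimeToLive should report a value close to the requested duration. The
// "session" id, "token" key, and value are assumed example data.
func exampleKeyValueUsage(pool *ConnectionPool) (time.Duration, error) {
	kv := NewKeyValue(pool, "session")
	defer kv.Remove() // clean up the example keys
	if err := kv.SetExpire("token", "abc123", 10*time.Minute); err != nil {
		return 0, err
	}
	return kv.TimeToLive("token")
}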
// --- Generic redis functions ---
// Check if a key exists. The key can be a wildcard (e.g. "user*").
func hasKey(pool *ConnectionPool, wildcard string, dbindex int) (bool, error) {
conn := pool.Get(dbindex)
result, err := redis.Values(conn.Do("KEYS", wildcard))
if err != nil {
return false, err
}
return len(result) > 0, nil
}
// --- Related to setting and retrieving timeout values
// SetConnectTimeout sets the connect timeout for new connections
func SetConnectTimeout(t time.Duration) {
connectTimeout = t
}
// SetReadTimeout sets the read timeout for new connections
func SetReadTimeout(t time.Duration) {
readTimeout = t
}
// SetWriteTimeout sets the write timeout for new connections
func SetWriteTimeout(t time.Duration) {
writeTimeout = t
}
// SetIdleTimeout sets the idle timeout for new connections
func SetIdleTimeout(t time.Duration) {
idleTimeout = t
}
// ConnectTimeout returns the current connect timeout for new connections
func ConnectTimeout() time.Duration {
return connectTimeout
}
// ReadTimeout returns the current read timeout for new connections
func ReadTimeout() time.Duration {
return readTimeout
}
// WriteTimeout returns the current write timeout for new connections
func WriteTimeout() time.Duration {
return writeTimeout
}
// IdleTimeout returns the current idle timeout for new connections
func IdleTimeout() time.Duration {
return idleTimeout
}

15
examples/web/vendor/modules.txt vendored Normal file
View File

@ -0,0 +1,15 @@
# github.com/codegangsta/negroni v1.0.0
## explicit
github.com/codegangsta/negroni
# github.com/gomodule/redigo v1.8.9
## explicit; go 1.16
github.com/gomodule/redigo/redis
# github.com/gorilla/mux v1.8.1
## explicit; go 1.20
github.com/gorilla/mux
# github.com/xyproto/pinterface v1.5.3
## explicit; go 1.8
github.com/xyproto/pinterface
# github.com/xyproto/simpleredis/v2 v2.6.5
## explicit; go 1.17
github.com/xyproto/simpleredis/v2

View File

@ -17,6 +17,7 @@
# This checks if all go source files in the current directory are formatted using gofmt
# Ignore vendor directory that's NOT in the main path
GO_FILES=$(find . -path ./vendor -prune -o -name '*.go' -print )
for file in $GO_FILES; do