author     Jhon Honce <jhonce@redhat.com>             2019-11-01 13:03:34 -0700
committer  baude <bbaude@redhat.com>                  2020-01-10 09:41:39 -0600
commit     d924494f561bb878a2b3a7ce438d87ecb934b5fb
tree       29983e7411c8108e74e4286b90a535a114dee755 /vendor/github.com/gorilla/schema
parent     6ed88e047579bd2d1eac99a6089cc617f0c4773d
Initial commit on compatible API
Signed-off-by: Jhon Honce <jhonce@redhat.com>
Create service command
Build the service with:
$ cd cmd/service && go build .
$ systemd-socket-activate -l 8081 cmd/service/service &
$ curl http://localhost:8081/v1.24/images/json
Signed-off-by: Jhon Honce <jhonce@redhat.com>
Correct Makefile
Signed-off-by: Jhon Honce <jhonce@redhat.com>
Two more stragglers
Signed-off-by: Jhon Honce <jhonce@redhat.com>
Report errors back as http headers
Signed-off-by: Jhon Honce <jhonce@redhat.com>
Split out handlers, updated output
Output aligned to docker structures
Signed-off-by: Jhon Honce <jhonce@redhat.com>
Refactored routing, added more endpoints and types
* Encapsulated all the routing information in the handler_* files.
* Added more serviceapi/types, including podman additions. See Info
Signed-off-by: Jhon Honce <jhonce@redhat.com>
Cleaned up code, implemented info content
* Move Content-Type check into serviceHandler
* Custom 404 handler showing the url, mostly for debugging
* Refactored images: better method names and explicit http codes
* Added content to /info
* Added podman fields to Info struct
* Added Container struct
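The Info and Container additions are easiest to picture as a docker-compatible struct extended with podman-specific fields. The sketch below is only illustrative; the struct and field names are assumptions, not the actual serviceapi types.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// DockerInfo stands in for the docker-compatible part of the /info payload.
// Field names are placeholders, not the real serviceapi types.
type DockerInfo struct {
	Containers      int    `json:"Containers"`
	Images          int    `json:"Images"`
	OperatingSystem string `json:"OperatingSystem"`
	ServerVersion   string `json:"ServerVersion"`
}

// Info embeds the compat fields and adds podman-specific ones, mirroring
// the "Added podman fields to Info struct" change above.
type Info struct {
	DockerInfo
	BuildahVersion string `json:"BuildahVersion,omitempty"`
	Rootless       bool   `json:"Rootless,omitempty"`
}

func main() {
	i := Info{
		DockerInfo:     DockerInfo{Containers: 3, Images: 12, ServerVersion: "1.7.0-dev"},
		BuildahVersion: "1.12.0",
		Rootless:       true,
	}
	out, _ := json.MarshalIndent(i, "", "  ")
	fmt.Println(string(out))
}
```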
Signed-off-by: Jhon Honce <jhonce@redhat.com>
Add a bunch of endpoints
containers: stop, pause, unpause, wait, rm
images: tag, rmi, create (pull only)
Signed-off-by: baude <bbaude@redhat.com>
Add even more handlers
* Add serviceapi/Error() to improve error handling
* Better support for API return payloads
* Renamed unimplemented to unsupported; these are generic endpoints
we don't intend to ever support. Swarm is broken out since it uses
different HTTP codes to signal that the node is not in a swarm.
* Added more types
* API Version broken out so it can be validated in the future
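A helper along the lines of serviceapi/Error() usually just pairs an HTTP status with a small JSON error body so every handler fails the same way. A minimal sketch, with hypothetical names and fields rather than the real serviceapi code:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// ErrorModel is a stand-in for the API error payload; the real field
// names in serviceapi may differ.
type ErrorModel struct {
	Message string `json:"message"`
	Cause   string `json:"cause,omitempty"`
}

// Error writes err as a JSON body with the given status code, so handlers
// report failures consistently instead of returning bare 500s.
func Error(w http.ResponseWriter, status int, err error) {
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(status)
	_ = json.NewEncoder(w).Encode(ErrorModel{
		Message: http.StatusText(status),
		Cause:   err.Error(),
	})
}

func main() {
	http.HandleFunc("/demo", func(w http.ResponseWriter, r *http.Request) {
		Error(w, http.StatusNotFound, fmt.Errorf("no such container %q", "web"))
	})
	_ = http.ListenAndServe("127.0.0.1:8081", nil)
}
```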
Signed-off-by: Jhon Honce <jhonce@redhat.com>
Refactor to introduce ServiceWriter
Signed-off-by: Jhon Honce <jhonce@redhat.com>
populate pods endpoints
/libpod/pods/..
exists, kill, pause, prune, restart, remove, start, stop, unpause
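Registering those verbs is mostly one route per action under /libpod/pods/{name}. A rough sketch using gorilla/mux; handler names and HTTP methods here are illustrative, not the actual podman routes:

```go
package main

import (
	"fmt"
	"net/http"

	"github.com/gorilla/mux"
)

// podAction returns a trivial handler so the routing itself can be shown;
// the real handlers call into libpod.
func podAction(name string) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "%s pod %s\n", name, mux.Vars(r)["name"])
	}
}

func main() {
	r := mux.NewRouter()
	// One route per verb under /libpod/pods/{name}/... (methods are assumptions).
	for _, action := range []string{"exists", "kill", "pause", "restart", "start", "stop", "unpause"} {
		r.HandleFunc("/libpod/pods/{name}/"+action, podAction(action)).Methods(http.MethodPost)
	}
	r.HandleFunc("/libpod/pods/prune", podAction("prune")).Methods(http.MethodPost)
	r.HandleFunc("/libpod/pods/{name}", podAction("remove")).Methods(http.MethodDelete)
	_ = http.ListenAndServe("127.0.0.1:8081", r)
}
```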
Signed-off-by: baude <bbaude@redhat.com>
Add components to Version, fix Error body
Signed-off-by: Jhon Honce <jhonce@redhat.com>
Add images pull output, fix swarm routes
* docker-py tests/integration/api_client_test.py pass 100%
* docker-py tests/integration/api_image_test.py pass 4/16
+ Test failures include services podman does not support
Signed-off-by: Jhon Honce <jhonce@redhat.com>
pods endpoint submission 2
add create and others; only top and stats are left.
Signed-off-by: baude <bbaude@redhat.com>
Update pull image to work from empty registry
Signed-off-by: Jhon Honce <jhonce@redhat.com>
pod create and container create
first pass at pod and container create. the container create does not
quite work yet, but it is very close. pod create needs a partial
rewrite. also broke the DELETE (rm/rmi) handling out into specific handler funcs.
Signed-off-by: baude <bbaude@redhat.com>
Add docker-py demos, GET .../containers/json
* Update serviceapi/types to reflect libpod not podman
* Refactored removeImage() to provide non-streaming return
Signed-off-by: Jhon Honce <jhonce@redhat.com>
create container part2
finished minimal config needed for create container. started demo.py
for upcoming talk
Signed-off-by: baude <bbaude@redhat.com>
Stop server after honoring request
* Remove casting for method calls
* Improve WriteResponse()
* Update Container API type to match docker API
Signed-off-by: Jhon Honce <jhonce@redhat.com>
fix namespace assumptions
cleaned up namespace issues with libpod.
Signed-off-by: baude <bbaude@redhat.com>
wip
Signed-off-by: baude <bbaude@redhat.com>
Add sliding window when shutting down server
* Added a Timeout rather than closing down service on each call
* Added gorilla/schema dependency for Decode'ing query parameters
* Improved error handling
* Container logs returned and multiplexed for stdout and stderr
* .../containers/{name}/logs?stdout=True&stderr=True
* Container stats
* .../containers/{name}/stats
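Two of these points are worth sketching. First, the sliding-window shutdown: each request resets a timer, and the service only exits after a full quiet period. Names below are illustrative, not the actual server code:

```go
package main

import (
	"context"
	"net/http"
	"time"
)

func main() {
	const window = 5 * time.Second
	mux := http.NewServeMux()
	srv := &http.Server{Addr: "127.0.0.1:8081", Handler: mux}

	// Fire only after a full quiet window; every request pushes it back.
	timer := time.AfterFunc(window, func() { _ = srv.Shutdown(context.Background()) })

	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		timer.Reset(window) // each request slides the window forward
		w.WriteHeader(http.StatusOK)
	})
	_ = srv.ListenAndServe() // returns http.ErrServerClosed once the timer fires
}
```

Second, gorilla/schema (the package vendored by this very commit) decodes query strings such as ?stdout=True&stderr=True directly into a struct. A hedged sketch of how a logs handler might use it; the struct and its fields are assumptions, not the real handler types:

```go
package main

import (
	"fmt"
	"net/http"

	"github.com/gorilla/schema"
)

// logsQuery models .../containers/{name}/logs query parameters.
// Names are illustrative; the real handler defines its own struct.
type logsQuery struct {
	Stdout     bool   `schema:"stdout"`
	Stderr     bool   `schema:"stderr"`
	Follow     bool   `schema:"follow"`
	Tail       string `schema:"tail"`
	Timestamps bool   `schema:"timestamps"`
}

var decoder = schema.NewDecoder()

func logsHandler(w http.ResponseWriter, r *http.Request) {
	query := logsQuery{Tail: "all"} // defaults before decoding
	if err := decoder.Decode(&query, r.URL.Query()); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	if !query.Stdout && !query.Stderr {
		// mirrors the "at least one std stream required" check mentioned below
		http.Error(w, "at least one of stdout or stderr must be requested", http.StatusBadRequest)
		return
	}
	fmt.Fprintf(w, "would stream logs: %+v\n", query)
}

func main() {
	http.HandleFunc("/containers/demo/logs", logsHandler)
	_ = http.ListenAndServe("127.0.0.1:8081", nil)
}
```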
Signed-off-by: Jhon Honce <jhonce@redhat.com>
Improve error handling
* Add check for at least one std stream required for /containers/{id}/logs
* Add check for state in /containers/{id}/top
* Fill in more fields for /info
* Fixed error checking in service start code
Signed-off-by: Jhon Honce <jhonce@redhat.com>
get rest of image tests for pass
Signed-off-by: baude <bbaude@redhat.com>
linting our content
Signed-off-by: baude <bbaude@redhat.com>
more linting
Signed-off-by: baude <bbaude@redhat.com>
more linting
Signed-off-by: baude <bbaude@redhat.com>
pruning
Signed-off-by: baude <bbaude@redhat.com>
[CI:DOCS]apiv2 pods
migrate from using args in the URL to using a JSON struct in the body for
pod create.
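With the JSON-body approach, the pod create handler just decodes the request body into a config struct. A minimal sketch; PodCreateConfig and its fields are placeholders rather than the actual podman types:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// PodCreateConfig is a placeholder for the body sent to pod create;
// the real struct lives in the podman API types.
type PodCreateConfig struct {
	Name   string            `json:"name"`
	Share  []string          `json:"share,omitempty"`
	Labels map[string]string `json:"labels,omitempty"`
}

func podCreate(w http.ResponseWriter, r *http.Request) {
	var cfg PodCreateConfig
	if err := json.NewDecoder(r.Body).Decode(&cfg); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	w.WriteHeader(http.StatusCreated)
	fmt.Fprintf(w, `{"Id":%q}`, "pod-"+cfg.Name) // placeholder response body
}

func main() {
	http.HandleFunc("/libpod/pods/create", podCreate)
	_ = http.ListenAndServe("127.0.0.1:8081", nil)
}
```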
Signed-off-by: baude <bbaude@redhat.com>
fix handler_images prune
prune's api changed slightly to deal with filters.
Signed-off-by: baude <bbaude@redhat.com>
[CI:DOCS]enabled base container create tests
enabling the base container create tests, which allow us to get further into
the stop, kill, etc. tests. many new tests now pass.
Signed-off-by: baude <bbaude@redhat.com>
serviceapi errors: append error message to API message
I dearly hope this is not breaking any other tests, but debugging
"Internal Server Error" is not helpful to any user. In case it
breaks tests, we can revert the commit - that's why it's a small one.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
serviceAPI: add containers/prune endpoint
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
add `service` make target
Also remove the non-functional sub-Makefile.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
add make targets for testing the service
* `sudo make run-service` for running the service.
* `DOCKERPY_TEST="tests/integration/api_container_test.py::ListContainersTest" \
make run-docker-py-tests`
for running a specific test. Run all tests by leaving the env
variable empty.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
Split handlers and server packages
The files were split to help contain bloat. The api/server package will
contain all code related to the functioning of the server, while
api/handlers will have all the code related to implementing the
endpoints.
api/server/register_* will contain the methods for registering
endpoints. Additionally, they will have the comments for generating the
swagger spec file.
See api/handlers/version.go for a small example handler,
api/handlers/containers.go contains much more complex handlers.
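A trimmed-down sketch of that layout: a small version handler plus a register_* style function carrying the swagger comments. Names and annotations are simplified stand-ins, not the real podman files:

```go
package main

import (
	"encoding/json"
	"net/http"

	"github.com/gorilla/mux"
)

// VersionHandler is the kind of small handler api/handlers/version.go holds;
// the payload here is trimmed down for the sketch.
func VersionHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	_ = json.NewEncoder(w).Encode(map[string]string{
		"Version":    "1.7.0-dev",
		"ApiVersion": "1.24",
	})
}

// registerVersionHandlers mirrors the api/server/register_* pattern: routing
// lives here, next to the swagger comments used for spec generation.
func registerVersionHandlers(r *mux.Router) {
	// swagger:operation GET /version compat getVersion
	// ---
	// summary: Component versions
	// produces:
	// - application/json
	// responses:
	//   200:
	//     description: version data
	r.HandleFunc("/version", VersionHandler).Methods(http.MethodGet)
}

func main() {
	r := mux.NewRouter()
	registerVersionHandlers(r)
	_ = http.ListenAndServe("127.0.0.1:8081", r)
}
```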
Signed-off-by: Jhon Honce <jhonce@redhat.com>
[CI:DOCS]enabled more tests
Signed-off-by: baude <bbaude@redhat.com>
[CI:DOCS]libpod endpoints
small refactor for libpod inclusion and began adding endpoints.
Signed-off-by: baude <bbaude@redhat.com>
Implement /build and /events
* Include crypto libraries for future ssh work
Signed-off-by: Jhon Honce <jhonce@redhat.com>
[CI:DOCS]more image implementations
convert from using for to query structs, among other changes including
new endpoints.
Signed-off-by: baude <bbaude@redhat.com>
[CI:DOCS]add bindings for golang
Signed-off-by: baude <bbaude@redhat.com>
[CI:DOCS]add volume endpoints for libpod
create, inspect, ls, prune, and rm
Signed-off-by: baude <bbaude@redhat.com>
[CI:DOCS]apiv2 healthcheck enablement
wire up container healthchecks for the api.
Signed-off-by: baude <bbaude@redhat.com>
[CI:DOCS]Add mount endpoints
via the api, add the ability to mount a container and list container
mounts.
Signed-off-by: baude <bbaude@redhat.com>
[CI:DOCS]Add search endpoint
add search endpoint with golang bindings
Signed-off-by: baude <bbaude@redhat.com>
[CI:DOCS]more apiv2 development
misc population of methods, etc
Signed-off-by: baude <bbaude@redhat.com>
rebase cleanup and epoch reset
Signed-off-by: baude <bbaude@redhat.com>
[CI:DOCS]add more network endpoints
also, add some initial error handling and convenience functions for
standard endpoints.
Signed-off-by: baude <bbaude@redhat.com>
[CI:DOCS]use helper funcs for bindings
use the methods developed to make writing bindings less duplicative and
easier to use.
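On the bindings side, the idea is a shared helper that issues the request and decodes the JSON response, so each individual binding stays a few lines long. A sketch under assumed names; the real helpers live in the podman bindings package and take a connection context:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// getJSON is a stand-in for the shared binding helper: perform a GET against
// the service and decode the response body into out.
func getJSON(base, endpoint string, out interface{}) error {
	resp, err := http.Get(base + endpoint)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unexpected status %d for %s", resp.StatusCode, endpoint)
	}
	return json.NewDecoder(resp.Body).Decode(out)
}

// ListPods shows how thin an individual binding becomes with the helper;
// the endpoint path here is an assumption, not the canonical route.
func ListPods(base string) ([]map[string]interface{}, error) {
	var pods []map[string]interface{}
	err := getJSON(base, "/libpod/pods/json", &pods)
	return pods, err
}

func main() {
	var version map[string]string
	if err := getJSON("http://localhost:8081", "/version", &version); err != nil {
		fmt.Println("service not running:", err)
		return
	}
	fmt.Println("version:", version)

	pods, err := ListPods("http://localhost:8081")
	if err != nil {
		fmt.Println("list pods:", err)
		return
	}
	fmt.Println("pods:", pods)
}
```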
Signed-off-by: baude <bbaude@redhat.com>
[CI:DOCS]add return info for prereview
begin to add return info and status codes for errors so that we can
review the apiv2
Signed-off-by: baude <bbaude@redhat.com>
[CI:DOCS]first pass at adding swagger docs for api
Signed-off-by: baude <bbaude@redhat.com>
Diffstat (limited to 'vendor/github.com/gorilla/schema')
-rw-r--r-- | vendor/github.com/gorilla/schema/.travis.yml  |  18
-rw-r--r-- | vendor/github.com/gorilla/schema/LICENSE      |  27
-rw-r--r-- | vendor/github.com/gorilla/schema/README.md    |  90
-rw-r--r-- | vendor/github.com/gorilla/schema/cache.go     | 305
-rw-r--r-- | vendor/github.com/gorilla/schema/converter.go | 145
-rw-r--r-- | vendor/github.com/gorilla/schema/decoder.go   | 504
-rw-r--r-- | vendor/github.com/gorilla/schema/doc.go       | 148
-rw-r--r-- | vendor/github.com/gorilla/schema/encoder.go   | 195
8 files changed, 1432 insertions, 0 deletions
diff --git a/vendor/github.com/gorilla/schema/.travis.yml b/vendor/github.com/gorilla/schema/.travis.yml new file mode 100644 index 000000000..5f51dce4e --- /dev/null +++ b/vendor/github.com/gorilla/schema/.travis.yml @@ -0,0 +1,18 @@ +language: go +sudo: false + +matrix: + include: + - go: 1.5 + - go: 1.6 + - go: 1.7 + - go: 1.8 + - go: tip + allow_failures: + - go: tip + +script: + - go get -t -v ./... + - diff -u <(echo -n) <(gofmt -d .) + - go vet $(go list ./... | grep -v /vendor/) + - go test -v -race ./... diff --git a/vendor/github.com/gorilla/schema/LICENSE b/vendor/github.com/gorilla/schema/LICENSE new file mode 100644 index 000000000..0e5fb8728 --- /dev/null +++ b/vendor/github.com/gorilla/schema/LICENSE @@ -0,0 +1,27 @@ +Copyright (c) 2012 Rodrigo Moraes. All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + + * Redistributions of source code must retain the above copyright +notice, this list of conditions and the following disclaimer. + * Redistributions in binary form must reproduce the above +copyright notice, this list of conditions and the following disclaimer +in the documentation and/or other materials provided with the +distribution. + * Neither the name of Google Inc. nor the names of its +contributors may be used to endorse or promote products derived from +this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/vendor/github.com/gorilla/schema/README.md b/vendor/github.com/gorilla/schema/README.md new file mode 100644 index 000000000..aefdd6699 --- /dev/null +++ b/vendor/github.com/gorilla/schema/README.md @@ -0,0 +1,90 @@ +schema +====== +[![GoDoc](https://godoc.org/github.com/gorilla/schema?status.svg)](https://godoc.org/github.com/gorilla/schema) [![Build Status](https://travis-ci.org/gorilla/schema.png?branch=master)](https://travis-ci.org/gorilla/schema) +[![Sourcegraph](https://sourcegraph.com/github.com/gorilla/schema/-/badge.svg)](https://sourcegraph.com/github.com/gorilla/schema?badge) + + +Package gorilla/schema converts structs to and from form values. + +## Example + +Here's a quick example: we parse POST form values and then decode them into a struct: + +```go +// Set a Decoder instance as a package global, because it caches +// meta-data about structs, and an instance can be shared safely. 
+var decoder = schema.NewDecoder() + +type Person struct { + Name string + Phone string +} + +func MyHandler(w http.ResponseWriter, r *http.Request) { + err := r.ParseForm() + if err != nil { + // Handle error + } + + var person Person + + // r.PostForm is a map of our POST form values + err = decoder.Decode(&person, r.PostForm) + if err != nil { + // Handle error + } + + // Do something with person.Name or person.Phone +} +``` + +Conversely, contents of a struct can be encoded into form values. Here's a variant of the previous example using the Encoder: + +```go +var encoder = schema.NewEncoder() + +func MyHttpRequest() { + person := Person{"Jane Doe", "555-5555"} + form := url.Values{} + + err := encoder.Encode(person, form) + + if err != nil { + // Handle error + } + + // Use form values, for example, with an http client + client := new(http.Client) + res, err := client.PostForm("http://my-api.test", form) +} + +``` + +To define custom names for fields, use a struct tag "schema". To not populate certain fields, use a dash for the name and it will be ignored: + +```go +type Person struct { + Name string `schema:"name,required"` // custom name, must be supplied + Phone string `schema:"phone"` // custom name + Admin bool `schema:"-"` // this field is never set +} +``` + +The supported field types in the struct are: + +* bool +* float variants (float32, float64) +* int variants (int, int8, int16, int32, int64) +* string +* uint variants (uint, uint8, uint16, uint32, uint64) +* struct +* a pointer to one of the above types +* a slice or a pointer to a slice of one of the above types + +Unsupported types are simply ignored, however custom types can be registered to be converted. + +More examples are available on the Gorilla website: https://www.gorillatoolkit.org/pkg/schema + +## License + +BSD licensed. See the LICENSE file for details. diff --git a/vendor/github.com/gorilla/schema/cache.go b/vendor/github.com/gorilla/schema/cache.go new file mode 100644 index 000000000..0746c1202 --- /dev/null +++ b/vendor/github.com/gorilla/schema/cache.go @@ -0,0 +1,305 @@ +// Copyright 2012 The Gorilla Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package schema + +import ( + "errors" + "reflect" + "strconv" + "strings" + "sync" +) + +var invalidPath = errors.New("schema: invalid path") + +// newCache returns a new cache. +func newCache() *cache { + c := cache{ + m: make(map[reflect.Type]*structInfo), + regconv: make(map[reflect.Type]Converter), + tag: "schema", + } + return &c +} + +// cache caches meta-data about a struct. +type cache struct { + l sync.RWMutex + m map[reflect.Type]*structInfo + regconv map[reflect.Type]Converter + tag string +} + +// registerConverter registers a converter function for a custom type. +func (c *cache) registerConverter(value interface{}, converterFunc Converter) { + c.regconv[reflect.TypeOf(value)] = converterFunc +} + +// parsePath parses a path in dotted notation verifying that it is a valid +// path to a struct field. +// +// It returns "path parts" which contain indices to fields to be used by +// reflect.Value.FieldByString(). Multiple parts are required for slices of +// structs. 
+func (c *cache) parsePath(p string, t reflect.Type) ([]pathPart, error) { + var struc *structInfo + var field *fieldInfo + var index64 int64 + var err error + parts := make([]pathPart, 0) + path := make([]string, 0) + keys := strings.Split(p, ".") + for i := 0; i < len(keys); i++ { + if t.Kind() != reflect.Struct { + return nil, invalidPath + } + if struc = c.get(t); struc == nil { + return nil, invalidPath + } + if field = struc.get(keys[i]); field == nil { + return nil, invalidPath + } + // Valid field. Append index. + path = append(path, field.name) + if field.isSliceOfStructs && (!field.unmarshalerInfo.IsValid || (field.unmarshalerInfo.IsValid && field.unmarshalerInfo.IsSliceElement)) { + // Parse a special case: slices of structs. + // i+1 must be the slice index. + // + // Now that struct can implements TextUnmarshaler interface, + // we don't need to force the struct's fields to appear in the path. + // So checking i+2 is not necessary anymore. + i++ + if i+1 > len(keys) { + return nil, invalidPath + } + if index64, err = strconv.ParseInt(keys[i], 10, 0); err != nil { + return nil, invalidPath + } + parts = append(parts, pathPart{ + path: path, + field: field, + index: int(index64), + }) + path = make([]string, 0) + + // Get the next struct type, dropping ptrs. + if field.typ.Kind() == reflect.Ptr { + t = field.typ.Elem() + } else { + t = field.typ + } + if t.Kind() == reflect.Slice { + t = t.Elem() + if t.Kind() == reflect.Ptr { + t = t.Elem() + } + } + } else if field.typ.Kind() == reflect.Ptr { + t = field.typ.Elem() + } else { + t = field.typ + } + } + // Add the remaining. + parts = append(parts, pathPart{ + path: path, + field: field, + index: -1, + }) + return parts, nil +} + +// get returns a cached structInfo, creating it if necessary. +func (c *cache) get(t reflect.Type) *structInfo { + c.l.RLock() + info := c.m[t] + c.l.RUnlock() + if info == nil { + info = c.create(t, "") + c.l.Lock() + c.m[t] = info + c.l.Unlock() + } + return info +} + +// create creates a structInfo with meta-data about a struct. +func (c *cache) create(t reflect.Type, parentAlias string) *structInfo { + info := &structInfo{} + var anonymousInfos []*structInfo + for i := 0; i < t.NumField(); i++ { + if f := c.createField(t.Field(i), parentAlias); f != nil { + info.fields = append(info.fields, f) + if ft := indirectType(f.typ); ft.Kind() == reflect.Struct && f.isAnonymous { + anonymousInfos = append(anonymousInfos, c.create(ft, f.canonicalAlias)) + } + } + } + for i, a := range anonymousInfos { + others := []*structInfo{info} + others = append(others, anonymousInfos[:i]...) + others = append(others, anonymousInfos[i+1:]...) + for _, f := range a.fields { + if !containsAlias(others, f.alias) { + info.fields = append(info.fields, f) + } + } + } + return info +} + +// createField creates a fieldInfo for the given field. +func (c *cache) createField(field reflect.StructField, parentAlias string) *fieldInfo { + alias, options := fieldAlias(field, c.tag) + if alias == "-" { + // Ignore this field. + return nil + } + canonicalAlias := alias + if parentAlias != "" { + canonicalAlias = parentAlias + "." + alias + } + // Check if the type is supported and don't cache it if not. + // First let's get the basic type. 
+ isSlice, isStruct := false, false + ft := field.Type + m := isTextUnmarshaler(reflect.Zero(ft)) + if ft.Kind() == reflect.Ptr { + ft = ft.Elem() + } + if isSlice = ft.Kind() == reflect.Slice; isSlice { + ft = ft.Elem() + if ft.Kind() == reflect.Ptr { + ft = ft.Elem() + } + } + if ft.Kind() == reflect.Array { + ft = ft.Elem() + if ft.Kind() == reflect.Ptr { + ft = ft.Elem() + } + } + if isStruct = ft.Kind() == reflect.Struct; !isStruct { + if c.converter(ft) == nil && builtinConverters[ft.Kind()] == nil { + // Type is not supported. + return nil + } + } + + return &fieldInfo{ + typ: field.Type, + name: field.Name, + alias: alias, + canonicalAlias: canonicalAlias, + unmarshalerInfo: m, + isSliceOfStructs: isSlice && isStruct, + isAnonymous: field.Anonymous, + isRequired: options.Contains("required"), + } +} + +// converter returns the converter for a type. +func (c *cache) converter(t reflect.Type) Converter { + return c.regconv[t] +} + +// ---------------------------------------------------------------------------- + +type structInfo struct { + fields []*fieldInfo +} + +func (i *structInfo) get(alias string) *fieldInfo { + for _, field := range i.fields { + if strings.EqualFold(field.alias, alias) { + return field + } + } + return nil +} + +func containsAlias(infos []*structInfo, alias string) bool { + for _, info := range infos { + if info.get(alias) != nil { + return true + } + } + return false +} + +type fieldInfo struct { + typ reflect.Type + // name is the field name in the struct. + name string + alias string + // canonicalAlias is almost the same as the alias, but is prefixed with + // an embedded struct field alias in dotted notation if this field is + // promoted from the struct. + // For instance, if the alias is "N" and this field is an embedded field + // in a struct "X", canonicalAlias will be "X.N". + canonicalAlias string + // unmarshalerInfo contains information regarding the + // encoding.TextUnmarshaler implementation of the field type. + unmarshalerInfo unmarshaler + // isSliceOfStructs indicates if the field type is a slice of structs. + isSliceOfStructs bool + // isAnonymous indicates whether the field is embedded in the struct. + isAnonymous bool + isRequired bool +} + +func (f *fieldInfo) paths(prefix string) []string { + if f.alias == f.canonicalAlias { + return []string{prefix + f.alias} + } + return []string{prefix + f.alias, prefix + f.canonicalAlias} +} + +type pathPart struct { + field *fieldInfo + path []string // path to the field: walks structs using field names. + index int // struct index in slices of structs. +} + +// ---------------------------------------------------------------------------- + +func indirectType(typ reflect.Type) reflect.Type { + if typ.Kind() == reflect.Ptr { + return typ.Elem() + } + return typ +} + +// fieldAlias parses a field tag to get a field alias. +func fieldAlias(field reflect.StructField, tagName string) (alias string, options tagOptions) { + if tag := field.Tag.Get(tagName); tag != "" { + alias, options = parseTag(tag) + } + if alias == "" { + alias = field.Name + } + return alias, options +} + +// tagOptions is the string following a comma in a struct field's tag, or +// the empty string. It does not include the leading comma. +type tagOptions []string + +// parseTag splits a struct field's url tag into its name and comma-separated +// options. +func parseTag(tag string) (string, tagOptions) { + s := strings.Split(tag, ",") + return s[0], s[1:] +} + +// Contains checks whether the tagOptions contains the specified option. 
+func (o tagOptions) Contains(option string) bool { + for _, s := range o { + if s == option { + return true + } + } + return false +} diff --git a/vendor/github.com/gorilla/schema/converter.go b/vendor/github.com/gorilla/schema/converter.go new file mode 100644 index 000000000..4f2116a15 --- /dev/null +++ b/vendor/github.com/gorilla/schema/converter.go @@ -0,0 +1,145 @@ +// Copyright 2012 The Gorilla Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package schema + +import ( + "reflect" + "strconv" +) + +type Converter func(string) reflect.Value + +var ( + invalidValue = reflect.Value{} + boolType = reflect.Bool + float32Type = reflect.Float32 + float64Type = reflect.Float64 + intType = reflect.Int + int8Type = reflect.Int8 + int16Type = reflect.Int16 + int32Type = reflect.Int32 + int64Type = reflect.Int64 + stringType = reflect.String + uintType = reflect.Uint + uint8Type = reflect.Uint8 + uint16Type = reflect.Uint16 + uint32Type = reflect.Uint32 + uint64Type = reflect.Uint64 +) + +// Default converters for basic types. +var builtinConverters = map[reflect.Kind]Converter{ + boolType: convertBool, + float32Type: convertFloat32, + float64Type: convertFloat64, + intType: convertInt, + int8Type: convertInt8, + int16Type: convertInt16, + int32Type: convertInt32, + int64Type: convertInt64, + stringType: convertString, + uintType: convertUint, + uint8Type: convertUint8, + uint16Type: convertUint16, + uint32Type: convertUint32, + uint64Type: convertUint64, +} + +func convertBool(value string) reflect.Value { + if value == "on" { + return reflect.ValueOf(true) + } else if v, err := strconv.ParseBool(value); err == nil { + return reflect.ValueOf(v) + } + return invalidValue +} + +func convertFloat32(value string) reflect.Value { + if v, err := strconv.ParseFloat(value, 32); err == nil { + return reflect.ValueOf(float32(v)) + } + return invalidValue +} + +func convertFloat64(value string) reflect.Value { + if v, err := strconv.ParseFloat(value, 64); err == nil { + return reflect.ValueOf(v) + } + return invalidValue +} + +func convertInt(value string) reflect.Value { + if v, err := strconv.ParseInt(value, 10, 0); err == nil { + return reflect.ValueOf(int(v)) + } + return invalidValue +} + +func convertInt8(value string) reflect.Value { + if v, err := strconv.ParseInt(value, 10, 8); err == nil { + return reflect.ValueOf(int8(v)) + } + return invalidValue +} + +func convertInt16(value string) reflect.Value { + if v, err := strconv.ParseInt(value, 10, 16); err == nil { + return reflect.ValueOf(int16(v)) + } + return invalidValue +} + +func convertInt32(value string) reflect.Value { + if v, err := strconv.ParseInt(value, 10, 32); err == nil { + return reflect.ValueOf(int32(v)) + } + return invalidValue +} + +func convertInt64(value string) reflect.Value { + if v, err := strconv.ParseInt(value, 10, 64); err == nil { + return reflect.ValueOf(v) + } + return invalidValue +} + +func convertString(value string) reflect.Value { + return reflect.ValueOf(value) +} + +func convertUint(value string) reflect.Value { + if v, err := strconv.ParseUint(value, 10, 0); err == nil { + return reflect.ValueOf(uint(v)) + } + return invalidValue +} + +func convertUint8(value string) reflect.Value { + if v, err := strconv.ParseUint(value, 10, 8); err == nil { + return reflect.ValueOf(uint8(v)) + } + return invalidValue +} + +func convertUint16(value string) reflect.Value { + if v, err := strconv.ParseUint(value, 10, 16); err == nil { + 
return reflect.ValueOf(uint16(v)) + } + return invalidValue +} + +func convertUint32(value string) reflect.Value { + if v, err := strconv.ParseUint(value, 10, 32); err == nil { + return reflect.ValueOf(uint32(v)) + } + return invalidValue +} + +func convertUint64(value string) reflect.Value { + if v, err := strconv.ParseUint(value, 10, 64); err == nil { + return reflect.ValueOf(v) + } + return invalidValue +} diff --git a/vendor/github.com/gorilla/schema/decoder.go b/vendor/github.com/gorilla/schema/decoder.go new file mode 100644 index 000000000..5afbd921f --- /dev/null +++ b/vendor/github.com/gorilla/schema/decoder.go @@ -0,0 +1,504 @@ +// Copyright 2012 The Gorilla Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package schema + +import ( + "encoding" + "errors" + "fmt" + "reflect" + "strings" +) + +// NewDecoder returns a new Decoder. +func NewDecoder() *Decoder { + return &Decoder{cache: newCache()} +} + +// Decoder decodes values from a map[string][]string to a struct. +type Decoder struct { + cache *cache + zeroEmpty bool + ignoreUnknownKeys bool +} + +// SetAliasTag changes the tag used to locate custom field aliases. +// The default tag is "schema". +func (d *Decoder) SetAliasTag(tag string) { + d.cache.tag = tag +} + +// ZeroEmpty controls the behaviour when the decoder encounters empty values +// in a map. +// If z is true and a key in the map has the empty string as a value +// then the corresponding struct field is set to the zero value. +// If z is false then empty strings are ignored. +// +// The default value is false, that is empty values do not change +// the value of the struct field. +func (d *Decoder) ZeroEmpty(z bool) { + d.zeroEmpty = z +} + +// IgnoreUnknownKeys controls the behaviour when the decoder encounters unknown +// keys in the map. +// If i is true and an unknown field is encountered, it is ignored. This is +// similar to how unknown keys are handled by encoding/json. +// If i is false then Decode will return an error. Note that any valid keys +// will still be decoded in to the target struct. +// +// To preserve backwards compatibility, the default value is false. +func (d *Decoder) IgnoreUnknownKeys(i bool) { + d.ignoreUnknownKeys = i +} + +// RegisterConverter registers a converter function for a custom type. +func (d *Decoder) RegisterConverter(value interface{}, converterFunc Converter) { + d.cache.registerConverter(value, converterFunc) +} + +// Decode decodes a map[string][]string to a struct. +// +// The first parameter must be a pointer to a struct. +// +// The second parameter is a map, typically url.Values from an HTTP request. +// Keys are "paths" in dotted notation to the struct fields and nested structs. +// +// See the package documentation for a full explanation of the mechanics. 
+func (d *Decoder) Decode(dst interface{}, src map[string][]string) error { + v := reflect.ValueOf(dst) + if v.Kind() != reflect.Ptr || v.Elem().Kind() != reflect.Struct { + return errors.New("schema: interface must be a pointer to struct") + } + v = v.Elem() + t := v.Type() + errors := MultiError{} + for path, values := range src { + if parts, err := d.cache.parsePath(path, t); err == nil { + if err = d.decode(v, path, parts, values); err != nil { + errors[path] = err + } + } else if !d.ignoreUnknownKeys { + errors[path] = UnknownKeyError{Key: path} + } + } + errors.merge(d.checkRequired(t, src)) + if len(errors) > 0 { + return errors + } + return nil +} + +// checkRequired checks whether required fields are empty +// +// check type t recursively if t has struct fields. +// +// src is the source map for decoding, we use it here to see if those required fields are included in src +func (d *Decoder) checkRequired(t reflect.Type, src map[string][]string) MultiError { + m, errs := d.findRequiredFields(t, "", "") + for key, fields := range m { + if isEmptyFields(fields, src) { + errs[key] = EmptyFieldError{Key: key} + } + } + return errs +} + +// findRequiredFields recursively searches the struct type t for required fields. +// +// canonicalPrefix and searchPrefix are used to resolve full paths in dotted notation +// for nested struct fields. canonicalPrefix is a complete path which never omits +// any embedded struct fields. searchPrefix is a user-friendly path which may omit +// some embedded struct fields to point promoted fields. +func (d *Decoder) findRequiredFields(t reflect.Type, canonicalPrefix, searchPrefix string) (map[string][]fieldWithPrefix, MultiError) { + struc := d.cache.get(t) + if struc == nil { + // unexpect, cache.get never return nil + return nil, MultiError{canonicalPrefix + "*": errors.New("cache fail")} + } + + m := map[string][]fieldWithPrefix{} + errs := MultiError{} + for _, f := range struc.fields { + if f.typ.Kind() == reflect.Struct { + fcprefix := canonicalPrefix + f.canonicalAlias + "." + for _, fspath := range f.paths(searchPrefix) { + fm, ferrs := d.findRequiredFields(f.typ, fcprefix, fspath+".") + for key, fields := range fm { + m[key] = append(m[key], fields...) + } + errs.merge(ferrs) + } + } + if f.isRequired { + key := canonicalPrefix + f.canonicalAlias + m[key] = append(m[key], fieldWithPrefix{ + fieldInfo: f, + prefix: searchPrefix, + }) + } + } + return m, errs +} + +type fieldWithPrefix struct { + *fieldInfo + prefix string +} + +// isEmptyFields returns true if all of specified fields are empty. +func isEmptyFields(fields []fieldWithPrefix, src map[string][]string) bool { + for _, f := range fields { + for _, path := range f.paths(f.prefix) { + if !isEmpty(f.typ, src[path]) { + return false + } + } + } + return true +} + +// isEmpty returns true if value is empty for specific type +func isEmpty(t reflect.Type, value []string) bool { + if len(value) == 0 { + return true + } + switch t.Kind() { + case boolType, float32Type, float64Type, intType, int8Type, int32Type, int64Type, stringType, uint8Type, uint16Type, uint32Type, uint64Type: + return len(value[0]) == 0 + } + return false +} + +// decode fills a struct field using a parsed path. +func (d *Decoder) decode(v reflect.Value, path string, parts []pathPart, values []string) error { + // Get the field walking the struct fields by index. 
+ for _, name := range parts[0].path { + if v.Type().Kind() == reflect.Ptr { + if v.IsNil() { + v.Set(reflect.New(v.Type().Elem())) + } + v = v.Elem() + } + v = v.FieldByName(name) + } + // Don't even bother for unexported fields. + if !v.CanSet() { + return nil + } + + // Dereference if needed. + t := v.Type() + if t.Kind() == reflect.Ptr { + t = t.Elem() + if v.IsNil() { + v.Set(reflect.New(t)) + } + v = v.Elem() + } + + // Slice of structs. Let's go recursive. + if len(parts) > 1 { + idx := parts[0].index + if v.IsNil() || v.Len() < idx+1 { + value := reflect.MakeSlice(t, idx+1, idx+1) + if v.Len() < idx+1 { + // Resize it. + reflect.Copy(value, v) + } + v.Set(value) + } + return d.decode(v.Index(idx), path, parts[1:], values) + } + + // Get the converter early in case there is one for a slice type. + conv := d.cache.converter(t) + m := isTextUnmarshaler(v) + if conv == nil && t.Kind() == reflect.Slice && m.IsSliceElement { + var items []reflect.Value + elemT := t.Elem() + isPtrElem := elemT.Kind() == reflect.Ptr + if isPtrElem { + elemT = elemT.Elem() + } + + // Try to get a converter for the element type. + conv := d.cache.converter(elemT) + if conv == nil { + conv = builtinConverters[elemT.Kind()] + if conv == nil { + // As we are not dealing with slice of structs here, we don't need to check if the type + // implements TextUnmarshaler interface + return fmt.Errorf("schema: converter not found for %v", elemT) + } + } + + for key, value := range values { + if value == "" { + if d.zeroEmpty { + items = append(items, reflect.Zero(elemT)) + } + } else if m.IsValid { + u := reflect.New(elemT) + if m.IsSliceElementPtr { + u = reflect.New(reflect.PtrTo(elemT).Elem()) + } + if err := u.Interface().(encoding.TextUnmarshaler).UnmarshalText([]byte(value)); err != nil { + return ConversionError{ + Key: path, + Type: t, + Index: key, + Err: err, + } + } + if m.IsSliceElementPtr { + items = append(items, u.Elem().Addr()) + } else if u.Kind() == reflect.Ptr { + items = append(items, u.Elem()) + } else { + items = append(items, u) + } + } else if item := conv(value); item.IsValid() { + if isPtrElem { + ptr := reflect.New(elemT) + ptr.Elem().Set(item) + item = ptr + } + if item.Type() != elemT && !isPtrElem { + item = item.Convert(elemT) + } + items = append(items, item) + } else { + if strings.Contains(value, ",") { + values := strings.Split(value, ",") + for _, value := range values { + if value == "" { + if d.zeroEmpty { + items = append(items, reflect.Zero(elemT)) + } + } else if item := conv(value); item.IsValid() { + if isPtrElem { + ptr := reflect.New(elemT) + ptr.Elem().Set(item) + item = ptr + } + if item.Type() != elemT && !isPtrElem { + item = item.Convert(elemT) + } + items = append(items, item) + } else { + return ConversionError{ + Key: path, + Type: elemT, + Index: key, + } + } + } + } else { + return ConversionError{ + Key: path, + Type: elemT, + Index: key, + } + } + } + } + value := reflect.Append(reflect.MakeSlice(t, 0, 0), items...) 
+ v.Set(value) + } else { + val := "" + // Use the last value provided if any values were provided + if len(values) > 0 { + val = values[len(values)-1] + } + + if conv != nil { + if value := conv(val); value.IsValid() { + v.Set(value.Convert(t)) + } else { + return ConversionError{ + Key: path, + Type: t, + Index: -1, + } + } + } else if m.IsValid { + if m.IsPtr { + u := reflect.New(v.Type()) + if err := u.Interface().(encoding.TextUnmarshaler).UnmarshalText([]byte(val)); err != nil { + return ConversionError{ + Key: path, + Type: t, + Index: -1, + Err: err, + } + } + v.Set(reflect.Indirect(u)) + } else { + // If the value implements the encoding.TextUnmarshaler interface + // apply UnmarshalText as the converter + if err := m.Unmarshaler.UnmarshalText([]byte(val)); err != nil { + return ConversionError{ + Key: path, + Type: t, + Index: -1, + Err: err, + } + } + } + } else if val == "" { + if d.zeroEmpty { + v.Set(reflect.Zero(t)) + } + } else if conv := builtinConverters[t.Kind()]; conv != nil { + if value := conv(val); value.IsValid() { + v.Set(value.Convert(t)) + } else { + return ConversionError{ + Key: path, + Type: t, + Index: -1, + } + } + } else { + return fmt.Errorf("schema: converter not found for %v", t) + } + } + return nil +} + +func isTextUnmarshaler(v reflect.Value) unmarshaler { + // Create a new unmarshaller instance + m := unmarshaler{} + if m.Unmarshaler, m.IsValid = v.Interface().(encoding.TextUnmarshaler); m.IsValid { + return m + } + // As the UnmarshalText function should be applied to the pointer of the + // type, we check that type to see if it implements the necessary + // method. + if m.Unmarshaler, m.IsValid = reflect.New(v.Type()).Interface().(encoding.TextUnmarshaler); m.IsValid { + m.IsPtr = true + return m + } + + // if v is []T or *[]T create new T + t := v.Type() + if t.Kind() == reflect.Ptr { + t = t.Elem() + } + if t.Kind() == reflect.Slice { + // Check if the slice implements encoding.TextUnmarshaller + if m.Unmarshaler, m.IsValid = v.Interface().(encoding.TextUnmarshaler); m.IsValid { + return m + } + // If t is a pointer slice, check if its elements implement + // encoding.TextUnmarshaler + m.IsSliceElement = true + if t = t.Elem(); t.Kind() == reflect.Ptr { + t = reflect.PtrTo(t.Elem()) + v = reflect.Zero(t) + m.IsSliceElementPtr = true + m.Unmarshaler, m.IsValid = v.Interface().(encoding.TextUnmarshaler) + return m + } + } + + v = reflect.New(t) + m.Unmarshaler, m.IsValid = v.Interface().(encoding.TextUnmarshaler) + return m +} + +// TextUnmarshaler helpers ---------------------------------------------------- +// unmarshaller contains information about a TextUnmarshaler type +type unmarshaler struct { + Unmarshaler encoding.TextUnmarshaler + // IsValid indicates whether the resolved type indicated by the other + // flags implements the encoding.TextUnmarshaler interface. + IsValid bool + // IsPtr indicates that the resolved type is the pointer of the original + // type. + IsPtr bool + // IsSliceElement indicates that the resolved type is a slice element of + // the original type. + IsSliceElement bool + // IsSliceElementPtr indicates that the resolved type is a pointer to a + // slice element of the original type. + IsSliceElementPtr bool +} + +// Errors --------------------------------------------------------------------- + +// ConversionError stores information about a failed conversion. +type ConversionError struct { + Key string // key from the source map. 
+ Type reflect.Type // expected type of elem + Index int // index for multi-value fields; -1 for single-value fields. + Err error // low-level error (when it exists) +} + +func (e ConversionError) Error() string { + var output string + + if e.Index < 0 { + output = fmt.Sprintf("schema: error converting value for %q", e.Key) + } else { + output = fmt.Sprintf("schema: error converting value for index %d of %q", + e.Index, e.Key) + } + + if e.Err != nil { + output = fmt.Sprintf("%s. Details: %s", output, e.Err) + } + + return output +} + +// UnknownKeyError stores information about an unknown key in the source map. +type UnknownKeyError struct { + Key string // key from the source map. +} + +func (e UnknownKeyError) Error() string { + return fmt.Sprintf("schema: invalid path %q", e.Key) +} + +// EmptyFieldError stores information about an empty required field. +type EmptyFieldError struct { + Key string // required key in the source map. +} + +func (e EmptyFieldError) Error() string { + return fmt.Sprintf("%v is empty", e.Key) +} + +// MultiError stores multiple decoding errors. +// +// Borrowed from the App Engine SDK. +type MultiError map[string]error + +func (e MultiError) Error() string { + s := "" + for _, err := range e { + s = err.Error() + break + } + switch len(e) { + case 0: + return "(0 errors)" + case 1: + return s + case 2: + return s + " (and 1 other error)" + } + return fmt.Sprintf("%s (and %d other errors)", s, len(e)-1) +} + +func (e MultiError) merge(errors MultiError) { + for key, err := range errors { + if e[key] == nil { + e[key] = err + } + } +} diff --git a/vendor/github.com/gorilla/schema/doc.go b/vendor/github.com/gorilla/schema/doc.go new file mode 100644 index 000000000..aae9f33f9 --- /dev/null +++ b/vendor/github.com/gorilla/schema/doc.go @@ -0,0 +1,148 @@ +// Copyright 2012 The Gorilla Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +/* +Package gorilla/schema fills a struct with form values. + +The basic usage is really simple. Given this struct: + + type Person struct { + Name string + Phone string + } + +...we can fill it passing a map to the Decode() function: + + values := map[string][]string{ + "Name": {"John"}, + "Phone": {"999-999-999"}, + } + person := new(Person) + decoder := schema.NewDecoder() + decoder.Decode(person, values) + +This is just a simple example and it doesn't make a lot of sense to create +the map manually. Typically it will come from a http.Request object and +will be of type url.Values, http.Request.Form, or http.Request.MultipartForm: + + func MyHandler(w http.ResponseWriter, r *http.Request) { + err := r.ParseForm() + + if err != nil { + // Handle error + } + + decoder := schema.NewDecoder() + // r.PostForm is a map of our POST form values + err := decoder.Decode(person, r.PostForm) + + if err != nil { + // Handle error + } + + // Do something with person.Name or person.Phone + } + +Note: it is a good idea to set a Decoder instance as a package global, +because it caches meta-data about structs, and an instance can be shared safely: + + var decoder = schema.NewDecoder() + +To define custom names for fields, use a struct tag "schema". 
To not populate +certain fields, use a dash for the name and it will be ignored: + + type Person struct { + Name string `schema:"name"` // custom name + Phone string `schema:"phone"` // custom name + Admin bool `schema:"-"` // this field is never set + } + +The supported field types in the destination struct are: + + * bool + * float variants (float32, float64) + * int variants (int, int8, int16, int32, int64) + * string + * uint variants (uint, uint8, uint16, uint32, uint64) + * struct + * a pointer to one of the above types + * a slice or a pointer to a slice of one of the above types + +Non-supported types are simply ignored, however custom types can be registered +to be converted. + +To fill nested structs, keys must use a dotted notation as the "path" for the +field. So for example, to fill the struct Person below: + + type Phone struct { + Label string + Number string + } + + type Person struct { + Name string + Phone Phone + } + +...the source map must have the keys "Name", "Phone.Label" and "Phone.Number". +This means that an HTML form to fill a Person struct must look like this: + + <form> + <input type="text" name="Name"> + <input type="text" name="Phone.Label"> + <input type="text" name="Phone.Number"> + </form> + +Single values are filled using the first value for a key from the source map. +Slices are filled using all values for a key from the source map. So to fill +a Person with multiple Phone values, like: + + type Person struct { + Name string + Phones []Phone + } + +...an HTML form that accepts three Phone values would look like this: + + <form> + <input type="text" name="Name"> + <input type="text" name="Phones.0.Label"> + <input type="text" name="Phones.0.Number"> + <input type="text" name="Phones.1.Label"> + <input type="text" name="Phones.1.Number"> + <input type="text" name="Phones.2.Label"> + <input type="text" name="Phones.2.Number"> + </form> + +Notice that only for slices of structs the slice index is required. +This is needed for disambiguation: if the nested struct also had a slice +field, we could not translate multiple values to it if we did not use an +index for the parent struct. + +There's also the possibility to create a custom type that implements the +TextUnmarshaler interface, and in this case there's no need to register +a converter, like: + + type Person struct { + Emails []Email + } + + type Email struct { + *mail.Address + } + + func (e *Email) UnmarshalText(text []byte) (err error) { + e.Address, err = mail.ParseAddress(string(text)) + return + } + +...an HTML form that accepts three Email values would look like this: + + <form> + <input type="email" name="Emails.0"> + <input type="email" name="Emails.1"> + <input type="email" name="Emails.2"> + </form> +*/ +package schema diff --git a/vendor/github.com/gorilla/schema/encoder.go b/vendor/github.com/gorilla/schema/encoder.go new file mode 100644 index 000000000..bf1d511e6 --- /dev/null +++ b/vendor/github.com/gorilla/schema/encoder.go @@ -0,0 +1,195 @@ +package schema + +import ( + "errors" + "fmt" + "reflect" + "strconv" +) + +type encoderFunc func(reflect.Value) string + +// Encoder encodes values from a struct into url.Values. +type Encoder struct { + cache *cache + regenc map[reflect.Type]encoderFunc +} + +// NewEncoder returns a new Encoder with defaults. +func NewEncoder() *Encoder { + return &Encoder{cache: newCache(), regenc: make(map[reflect.Type]encoderFunc)} +} + +// Encode encodes a struct into map[string][]string. +// +// Intended for use with url.Values. 
+func (e *Encoder) Encode(src interface{}, dst map[string][]string) error { + v := reflect.ValueOf(src) + + return e.encode(v, dst) +} + +// RegisterEncoder registers a converter for encoding a custom type. +func (e *Encoder) RegisterEncoder(value interface{}, encoder func(reflect.Value) string) { + e.regenc[reflect.TypeOf(value)] = encoder +} + +// SetAliasTag changes the tag used to locate custom field aliases. +// The default tag is "schema". +func (e *Encoder) SetAliasTag(tag string) { + e.cache.tag = tag +} + +// isValidStructPointer test if input value is a valid struct pointer. +func isValidStructPointer(v reflect.Value) bool { + return v.Type().Kind() == reflect.Ptr && v.Elem().IsValid() && v.Elem().Type().Kind() == reflect.Struct +} + +func isZero(v reflect.Value) bool { + switch v.Kind() { + case reflect.Func: + case reflect.Map, reflect.Slice: + return v.IsNil() || v.Len() == 0 + case reflect.Array: + z := true + for i := 0; i < v.Len(); i++ { + z = z && isZero(v.Index(i)) + } + return z + case reflect.Struct: + z := true + for i := 0; i < v.NumField(); i++ { + z = z && isZero(v.Field(i)) + } + return z + } + // Compare other types directly: + z := reflect.Zero(v.Type()) + return v.Interface() == z.Interface() +} + +func (e *Encoder) encode(v reflect.Value, dst map[string][]string) error { + if v.Kind() == reflect.Ptr { + v = v.Elem() + } + if v.Kind() != reflect.Struct { + return errors.New("schema: interface must be a struct") + } + t := v.Type() + + errors := MultiError{} + + for i := 0; i < v.NumField(); i++ { + name, opts := fieldAlias(t.Field(i), e.cache.tag) + if name == "-" { + continue + } + + // Encode struct pointer types if the field is a valid pointer and a struct. + if isValidStructPointer(v.Field(i)) { + e.encode(v.Field(i).Elem(), dst) + continue + } + + encFunc := typeEncoder(v.Field(i).Type(), e.regenc) + + // Encode non-slice types and custom implementations immediately. + if encFunc != nil { + value := encFunc(v.Field(i)) + if opts.Contains("omitempty") && isZero(v.Field(i)) { + continue + } + + dst[name] = append(dst[name], value) + continue + } + + if v.Field(i).Type().Kind() == reflect.Struct { + e.encode(v.Field(i), dst) + continue + } + + if v.Field(i).Type().Kind() == reflect.Slice { + encFunc = typeEncoder(v.Field(i).Type().Elem(), e.regenc) + } + + if encFunc == nil { + errors[v.Field(i).Type().String()] = fmt.Errorf("schema: encoder not found for %v", v.Field(i)) + continue + } + + // Encode a slice. 
+ if v.Field(i).Len() == 0 && opts.Contains("omitempty") { + continue + } + + dst[name] = []string{} + for j := 0; j < v.Field(i).Len(); j++ { + dst[name] = append(dst[name], encFunc(v.Field(i).Index(j))) + } + } + + if len(errors) > 0 { + return errors + } + return nil +} + +func typeEncoder(t reflect.Type, reg map[reflect.Type]encoderFunc) encoderFunc { + if f, ok := reg[t]; ok { + return f + } + + switch t.Kind() { + case reflect.Bool: + return encodeBool + case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: + return encodeInt + case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64: + return encodeUint + case reflect.Float32: + return encodeFloat32 + case reflect.Float64: + return encodeFloat64 + case reflect.Ptr: + f := typeEncoder(t.Elem(), reg) + return func(v reflect.Value) string { + if v.IsNil() { + return "null" + } + return f(v.Elem()) + } + case reflect.String: + return encodeString + default: + return nil + } +} + +func encodeBool(v reflect.Value) string { + return strconv.FormatBool(v.Bool()) +} + +func encodeInt(v reflect.Value) string { + return strconv.FormatInt(int64(v.Int()), 10) +} + +func encodeUint(v reflect.Value) string { + return strconv.FormatUint(uint64(v.Uint()), 10) +} + +func encodeFloat(v reflect.Value, bits int) string { + return strconv.FormatFloat(v.Float(), 'f', 6, bits) +} + +func encodeFloat32(v reflect.Value) string { + return encodeFloat(v, 32) +} + +func encodeFloat64(v reflect.Value) string { + return encodeFloat(v, 64) +} + +func encodeString(v reflect.Value) string { + return v.String() +} |