vendor: Add all vendor dependencies

These are all at latest upstream as of now, except:

 - github.com/s-rah/go-ricochet
   - Using github.com/special/go-ricochet on the api-handlers branch
 - github.com/chzyer/readline
   - Using github.com/special/readline on the refresh-race branch

Both of these are for PRs that are pending with upstream.
John Brooks 2016-12-03 16:34:58 -08:00
parent df56cd1757
commit 580d5f67d3
617 changed files with 282289 additions and 0 deletions

202
vendor/cloud.google.com/go/compute/metadata/LICENSE generated vendored Normal file

@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2014 Google Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

438
vendor/cloud.google.com/go/compute/metadata/metadata.go generated vendored Normal file

@@ -0,0 +1,438 @@
// Copyright 2014 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package metadata provides access to Google Compute Engine (GCE)
// metadata and API service accounts.
//
// This package is a wrapper around the GCE metadata service,
// as documented at https://developers.google.com/compute/docs/metadata.
package metadata // import "cloud.google.com/go/compute/metadata"
import (
"encoding/json"
"fmt"
"io/ioutil"
"net"
"net/http"
"net/url"
"os"
"runtime"
"strings"
"sync"
"time"
"golang.org/x/net/context"
"golang.org/x/net/context/ctxhttp"
"cloud.google.com/go/internal"
)
const (
// metadataIP is the documented metadata server IP address.
metadataIP = "169.254.169.254"
// metadataHostEnv is the environment variable specifying the
// GCE metadata hostname. If empty, the default value of
// metadataIP ("169.254.169.254") is used instead.
// This variable name is not defined by any spec, as far as
// I know; it was made up for the Go package.
metadataHostEnv = "GCE_METADATA_HOST"
)
type cachedValue struct {
k string
trim bool
mu sync.Mutex
v string
}
var (
projID = &cachedValue{k: "project/project-id", trim: true}
projNum = &cachedValue{k: "project/numeric-project-id", trim: true}
instID = &cachedValue{k: "instance/id", trim: true}
)
var (
metaClient = &http.Client{
Transport: &internal.Transport{
Base: &http.Transport{
Dial: (&net.Dialer{
Timeout: 2 * time.Second,
KeepAlive: 30 * time.Second,
}).Dial,
ResponseHeaderTimeout: 2 * time.Second,
},
},
}
subscribeClient = &http.Client{
Transport: &internal.Transport{
Base: &http.Transport{
Dial: (&net.Dialer{
Timeout: 2 * time.Second,
KeepAlive: 30 * time.Second,
}).Dial,
},
},
}
)
// NotDefinedError is returned when requested metadata is not defined.
//
// The underlying string is the suffix after "/computeMetadata/v1/".
//
// This error is not returned if the value is defined to be the empty
// string.
type NotDefinedError string
func (suffix NotDefinedError) Error() string {
return fmt.Sprintf("metadata: GCE metadata %q not defined", string(suffix))
}
// Get returns a value from the metadata service.
// The suffix is appended to "http://${GCE_METADATA_HOST}/computeMetadata/v1/".
//
// If the GCE_METADATA_HOST environment variable is not defined, a default of
// 169.254.169.254 will be used instead.
//
// If the requested metadata is not defined, the returned error will
// be of type NotDefinedError.
func Get(suffix string) (string, error) {
val, _, err := getETag(metaClient, suffix)
return val, err
}
// getETag returns a value from the metadata service as well as the associated
// ETag using the provided client. This func is otherwise equivalent to Get.
func getETag(client *http.Client, suffix string) (value, etag string, err error) {
// Using a fixed IP makes it very difficult to spoof the metadata service in
// a container, which is an important use-case for local testing of cloud
// deployments. To enable spoofing of the metadata service, the environment
// variable GCE_METADATA_HOST is first inspected to decide where metadata
// requests shall go.
host := os.Getenv(metadataHostEnv)
if host == "" {
// Using 169.254.169.254 instead of "metadata" here because Go
// binaries built with the "netgo" tag and without cgo won't
// know the search suffix for "metadata" is
// ".google.internal", and this IP address is documented as
// being stable anyway.
host = metadataIP
}
url := "http://" + host + "/computeMetadata/v1/" + suffix
req, _ := http.NewRequest("GET", url, nil)
req.Header.Set("Metadata-Flavor", "Google")
res, err := client.Do(req)
if err != nil {
return "", "", err
}
defer res.Body.Close()
if res.StatusCode == http.StatusNotFound {
return "", "", NotDefinedError(suffix)
}
if res.StatusCode != 200 {
return "", "", fmt.Errorf("status code %d trying to fetch %s", res.StatusCode, url)
}
all, err := ioutil.ReadAll(res.Body)
if err != nil {
return "", "", err
}
return string(all), res.Header.Get("Etag"), nil
}
func getTrimmed(suffix string) (s string, err error) {
s, err = Get(suffix)
s = strings.TrimSpace(s)
return
}
func (c *cachedValue) get() (v string, err error) {
defer c.mu.Unlock()
c.mu.Lock()
if c.v != "" {
return c.v, nil
}
if c.trim {
v, err = getTrimmed(c.k)
} else {
v, err = Get(c.k)
}
if err == nil {
c.v = v
}
return
}
var (
onGCEOnce sync.Once
onGCE bool
)
// OnGCE reports whether this process is running on Google Compute Engine.
func OnGCE() bool {
onGCEOnce.Do(initOnGCE)
return onGCE
}
func initOnGCE() {
onGCE = testOnGCE()
}
func testOnGCE() bool {
// The user explicitly said they're on GCE, so trust them.
if os.Getenv(metadataHostEnv) != "" {
return true
}
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
resc := make(chan bool, 2)
// Try two strategies in parallel.
// See https://github.com/GoogleCloudPlatform/google-cloud-go/issues/194
go func() {
res, err := ctxhttp.Get(ctx, metaClient, "http://"+metadataIP)
if err != nil {
resc <- false
return
}
defer res.Body.Close()
resc <- res.Header.Get("Metadata-Flavor") == "Google"
}()
go func() {
addrs, err := net.LookupHost("metadata.google.internal")
if err != nil || len(addrs) == 0 {
resc <- false
return
}
resc <- strsContains(addrs, metadataIP)
}()
tryHarder := systemInfoSuggestsGCE()
if tryHarder {
res := <-resc
if res {
// The first strategy succeeded, so let's use it.
return true
}
// Wait for either the DNS or metadata server probe to
// contradict the other one and say we are running on
// GCE. Give it a lot of time to do so, since the system
// info already suggests we're running on a GCE BIOS.
timer := time.NewTimer(5 * time.Second)
defer timer.Stop()
select {
case res = <-resc:
return res
case <-timer.C:
// Too slow. Who knows what this system is.
return false
}
}
// There's no hint from the system info that we're running on
// GCE, so use the first probe's result as truth, whether it's
// true or false. The goal here is to optimize for speed for
// users who are NOT running on GCE. We can't assume that
// either a DNS lookup or an HTTP request to a blackholed IP
// address is fast. Worst case this should return when the
// metaClient's Transport.ResponseHeaderTimeout or
// Transport.Dial.Timeout fires (in two seconds).
return <-resc
}
// systemInfoSuggestsGCE reports whether the local system (without
// doing network requests) suggests that we're running on GCE. If this
// returns true, testOnGCE tries a bit harder to reach its metadata
// server.
func systemInfoSuggestsGCE() bool {
if runtime.GOOS != "linux" {
// We don't have any non-Linux clues available, at least yet.
return false
}
slurp, _ := ioutil.ReadFile("/sys/class/dmi/id/product_name")
name := strings.TrimSpace(string(slurp))
return name == "Google" || name == "Google Compute Engine"
}
// Subscribe subscribes to a value from the metadata service.
// The suffix is appended to "http://${GCE_METADATA_HOST}/computeMetadata/v1/".
// The suffix may contain query parameters.
//
// Subscribe calls fn with the latest metadata value indicated by the provided
// suffix. If the metadata value is deleted, fn is called with the empty string
// and ok false. Subscribe blocks until fn returns a non-nil error or the value
// is deleted. Subscribe returns the error value returned from the last call to
// fn, which may be nil when ok == false.
func Subscribe(suffix string, fn func(v string, ok bool) error) error {
const failedSubscribeSleep = time.Second * 5
// First check to see if the metadata value exists at all.
val, lastETag, err := getETag(subscribeClient, suffix)
if err != nil {
return err
}
if err := fn(val, true); err != nil {
return err
}
ok := true
if strings.ContainsRune(suffix, '?') {
suffix += "&wait_for_change=true&last_etag="
} else {
suffix += "?wait_for_change=true&last_etag="
}
for {
val, etag, err := getETag(subscribeClient, suffix+url.QueryEscape(lastETag))
if err != nil {
if _, deleted := err.(NotDefinedError); !deleted {
time.Sleep(failedSubscribeSleep)
continue // Retry on other errors.
}
ok = false
}
lastETag = etag
if err := fn(val, ok); err != nil || !ok {
return err
}
}
}
// ProjectID returns the current instance's project ID string.
func ProjectID() (string, error) { return projID.get() }
// NumericProjectID returns the current instance's numeric project ID.
func NumericProjectID() (string, error) { return projNum.get() }
// InternalIP returns the instance's primary internal IP address.
func InternalIP() (string, error) {
return getTrimmed("instance/network-interfaces/0/ip")
}
// ExternalIP returns the instance's primary external (public) IP address.
func ExternalIP() (string, error) {
return getTrimmed("instance/network-interfaces/0/access-configs/0/external-ip")
}
// Hostname returns the instance's hostname. This will be of the form
// "<instanceID>.c.<projID>.internal".
func Hostname() (string, error) {
return getTrimmed("instance/hostname")
}
// InstanceTags returns the list of user-defined instance tags,
// assigned when initially creating a GCE instance.
func InstanceTags() ([]string, error) {
var s []string
j, err := Get("instance/tags")
if err != nil {
return nil, err
}
if err := json.NewDecoder(strings.NewReader(j)).Decode(&s); err != nil {
return nil, err
}
return s, nil
}
// InstanceID returns the current VM's numeric instance ID.
func InstanceID() (string, error) {
return instID.get()
}
// InstanceName returns the current VM's instance ID string.
func InstanceName() (string, error) {
host, err := Hostname()
if err != nil {
return "", err
}
return strings.Split(host, ".")[0], nil
}
// Zone returns the current VM's zone, such as "us-central1-b".
func Zone() (string, error) {
zone, err := getTrimmed("instance/zone")
// zone is of the form "projects/<projNum>/zones/<zoneName>".
if err != nil {
return "", err
}
return zone[strings.LastIndex(zone, "/")+1:], nil
}
// InstanceAttributes returns the list of user-defined attributes,
// assigned when initially creating a GCE VM instance. The value of an
// attribute can be obtained with InstanceAttributeValue.
func InstanceAttributes() ([]string, error) { return lines("instance/attributes/") }
// ProjectAttributes returns the list of user-defined attributes
// applying to the project as a whole, not just this VM. The value of
// an attribute can be obtained with ProjectAttributeValue.
func ProjectAttributes() ([]string, error) { return lines("project/attributes/") }
func lines(suffix string) ([]string, error) {
j, err := Get(suffix)
if err != nil {
return nil, err
}
s := strings.Split(strings.TrimSpace(j), "\n")
for i := range s {
s[i] = strings.TrimSpace(s[i])
}
return s, nil
}
// InstanceAttributeValue returns the value of the provided VM
// instance attribute.
//
// If the requested attribute is not defined, the returned error will
// be of type NotDefinedError.
//
// InstanceAttributeValue may return ("", nil) if the attribute was
// defined to be the empty string.
func InstanceAttributeValue(attr string) (string, error) {
return Get("instance/attributes/" + attr)
}
// ProjectAttributeValue returns the value of the provided
// project attribute.
//
// If the requested attribute is not defined, the returned error will
// be of type NotDefinedError.
//
// ProjectAttributeValue may return ("", nil) if the attribute was
// defined to be the empty string.
func ProjectAttributeValue(attr string) (string, error) {
return Get("project/attributes/" + attr)
}
// Scopes returns the service account scopes for the given account.
// The account may be empty or the string "default" to use the instance's
// main account.
func Scopes(serviceAccount string) ([]string, error) {
if serviceAccount == "" {
serviceAccount = "default"
}
return lines("instance/service-accounts/" + serviceAccount + "/scopes")
}
func strsContains(ss []string, s string) bool {
for _, v := range ss {
if v == s {
return true
}
}
return false
}
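
A minimal sketch of how a consumer might call into this vendored package, assuming the import path cloud.google.com/go/compute/metadata declared above; the attribute name "my-key" is hypothetical.

package main

import (
	"fmt"
	"log"

	"cloud.google.com/go/compute/metadata"
)

func main() {
	// OnGCE probes the metadata server (and its DNS name) as implemented above.
	if !metadata.OnGCE() {
		log.Fatal("not on GCE; metadata service unavailable")
	}
	proj, err := metadata.ProjectID()
	if err != nil {
		log.Fatal(err)
	}
	zone, err := metadata.Zone()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("project %s, zone %s\n", proj, zone)

	// Missing keys come back as NotDefinedError rather than an empty value.
	val, err := metadata.InstanceAttributeValue("my-key")
	switch err.(type) {
	case nil:
		fmt.Println("my-key =", val)
	case metadata.NotDefinedError:
		fmt.Println("my-key is not set")
	default:
		log.Fatal(err)
	}
}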

202
vendor/cloud.google.com/go/internal/LICENSE generated vendored Normal file

@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2014 Google Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

64
vendor/cloud.google.com/go/internal/cloud.go generated vendored Normal file

@@ -0,0 +1,64 @@
// Copyright 2014 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package internal provides support for the cloud packages.
//
// Users should not import this package directly.
package internal
import (
"fmt"
"net/http"
)
const userAgent = "gcloud-golang/0.1"
// Transport is an http.RoundTripper that appends Google Cloud client's
// user-agent to the original request's user-agent header.
type Transport struct {
// TODO(bradfitz): delete internal.Transport. It's too wrappy for what it does.
// Do User-Agent some other way.
// Base is the actual http.RoundTripper
// requests will use. It must not be nil.
Base http.RoundTripper
}
// RoundTrip appends a user-agent to the existing user-agent
// header and delegates the request to the base http.RoundTripper.
func (t *Transport) RoundTrip(req *http.Request) (*http.Response, error) {
req = cloneRequest(req)
ua := req.Header.Get("User-Agent")
if ua == "" {
ua = userAgent
} else {
ua = fmt.Sprintf("%s %s", ua, userAgent)
}
req.Header.Set("User-Agent", ua)
return t.Base.RoundTrip(req)
}
// cloneRequest returns a clone of the provided *http.Request.
// The clone is a shallow copy of the struct and its Header map.
func cloneRequest(r *http.Request) *http.Request {
// shallow copy of the struct
r2 := new(http.Request)
*r2 = *r
// deep copy of the Header
r2.Header = make(http.Header)
for k, s := range r.Header {
r2.Header[k] = s
}
return r2
}
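
The RoundTripper above is a small, reusable pattern: clone the request, adjust one header, delegate to a base transport. A sketch of the same idea outside this internal package (uaTransport and the token string are hypothetical names, not part of this code):

package main

import (
	"fmt"
	"net/http"
)

// uaTransport mirrors internal.Transport: it appends a fixed token to the
// User-Agent header of a cloned request and delegates to Base.
type uaTransport struct {
	Base http.RoundTripper // must not be nil
}

const exampleAgent = "example-client/0.1"

func (t *uaTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	// Shallow-copy the request and deep-copy the header, as cloneRequest does,
	// so the caller's request is never mutated.
	r2 := new(http.Request)
	*r2 = *req
	r2.Header = make(http.Header, len(req.Header))
	for k, v := range req.Header {
		r2.Header[k] = v
	}
	if ua := r2.Header.Get("User-Agent"); ua == "" {
		r2.Header.Set("User-Agent", exampleAgent)
	} else {
		r2.Header.Set("User-Agent", fmt.Sprintf("%s %s", ua, exampleAgent))
	}
	return t.Base.RoundTrip(r2)
}

func main() {
	client := &http.Client{Transport: &uaTransport{Base: http.DefaultTransport}}
	_ = client // client.Get(...) and friends now carry the extra User-Agent token
}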

402
vendor/cloud.google.com/go/internal/fields/fields.go generated vendored Normal file

@@ -0,0 +1,402 @@
// Copyright 2016 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package fields provides a view of the fields of a struct that follows the Go
// rules, amended to consider tags and case insensitivity.
//
// Usage
//
// First define a function that interprets tags:
//
// func parseTag(st reflect.StructTag) (name string, keep bool, other interface{}) { ... }
//
// The function's return values describe whether to ignore the field
// completely or provide an alternate name, as well as other data from the
// parse that is stored to avoid re-parsing.
//
// Next, construct a Cache, passing your function. As its name suggests, a
// Cache remembers field information for a type, so subsequent calls with the
// same type are very fast.
//
// cache := fields.NewCache(parseTag)
//
// To get the fields of a struct type as determined by the above rules, call
// the Fields method:
//
// fields := cache.Fields(reflect.TypeOf(MyStruct{}))
//
// The return value can be treated as a slice of Fields.
//
// Given a string, such as a key or column name obtained during unmarshalling,
// call Match on the list of fields to find a field whose name is the best
// match:
//
// field := fields.Match(name)
//
// Match looks for an exact match first, then falls back to a case-insensitive
// comparison.
package fields
import (
"bytes"
"reflect"
"sort"
"sync"
"sync/atomic"
)
// A Field records information about a struct field.
type Field struct {
Name string // effective field name
NameFromTag bool // did Name come from a tag?
Type reflect.Type // field type
Index []int // index sequence, for reflect.Value.FieldByIndex
ParsedTag interface{} // third return value of the parseTag function
nameBytes []byte
equalFold func(s, t []byte) bool
}
// A Cache records information about the fields of struct types.
//
// A Cache is safe for use by multiple goroutines.
type Cache struct {
parseTag func(reflect.StructTag) (name string, keep bool, other interface{})
cache atomic.Value // map[reflect.Type][]Field
mu sync.Mutex // used only by writers of cache
}
// NewCache constructs a Cache. Its argument should be a function that accepts
// a struct tag and returns three values: an alternative name for the field
// extracted from the tag, a boolean saying whether to keep the field or ignore
// it, and additional data that is stored with the field information to avoid
// having to parse the tag again.
func NewCache(parseTag func(reflect.StructTag) (name string, keep bool, other interface{})) *Cache {
if parseTag == nil {
parseTag = func(reflect.StructTag) (string, bool, interface{}) {
return "", true, nil
}
}
return &Cache{parseTag: parseTag}
}
// A fieldScan represents an item on the fieldByNameFunc scan work list.
type fieldScan struct {
typ reflect.Type
index []int
}
// Fields returns all the exported fields of t, which must be a struct type. It
// follows the standard Go rules for embedded fields, modified by the presence
// of tags. The result is sorted lexicographically by index.
//
// If not nil, the given parseTag function should extract and return a name
// from the struct tag. This name is used instead of the field's declared name.
//
// These rules apply in the absence of tags:
// Anonymous struct fields are treated as if their inner exported fields were
// fields in the outer struct (embedding). The result includes all fields that
// aren't shadowed by fields at a higher level of embedding. If more than one
// field with the same name exists at the same level of embedding, it is
// excluded. An anonymous field that is not of struct type is treated as having
// its type as its name.
//
// Tags modify these rules as follows:
// A field's tag is used as its name.
// An anonymous struct field with a name given in its tag is treated as
// a field having that name, rather than an embedded struct (the struct's
// fields will not be returned).
// If more than one field with the same name exists at the same level of embedding,
// but exactly one of them is tagged, then the tagged field is reported and the others
// are ignored.
func (c *Cache) Fields(t reflect.Type) List {
if t.Kind() != reflect.Struct {
panic("fields: Fields of non-struct type")
}
return List(c.cachedTypeFields(t))
}
// A List is a list of Fields.
type List []Field
// Match returns the field in the list whose name best matches the supplied
// name, or nil if no field does. If there is a field with the exact name, it
// is returned. Otherwise the first field (sorted by index) whose name matches
// case-insensitively is returned.
func (l List) Match(name string) *Field {
return l.MatchBytes([]byte(name))
}
// MatchBytes is identical to Match, except that the argument is a byte slice.
func (l List) MatchBytes(name []byte) *Field {
var f *Field
for i := range l {
ff := &l[i]
if bytes.Equal(ff.nameBytes, name) {
return ff
}
if f == nil && ff.equalFold(ff.nameBytes, name) {
f = ff
}
}
return f
}
// cachedTypeFields is like typeFields but uses a cache to avoid repeated work.
// This code has been copied and modified from
// https://go.googlesource.com/go/+/go1.7.3/src/encoding/json/encode.go.
func (c *Cache) cachedTypeFields(t reflect.Type) []Field {
mp, _ := c.cache.Load().(map[reflect.Type][]Field)
f := mp[t]
if f != nil {
return f
}
// Compute fields without lock.
// Might duplicate effort but won't hold other computations back.
f = c.typeFields(t)
if f == nil {
f = []Field{}
}
c.mu.Lock()
mp, _ = c.cache.Load().(map[reflect.Type][]Field)
newM := make(map[reflect.Type][]Field, len(mp)+1)
for k, v := range mp {
newM[k] = v
}
newM[t] = f
c.cache.Store(newM)
c.mu.Unlock()
return f
}
func (c *Cache) typeFields(t reflect.Type) []Field {
fields := c.listFields(t)
sort.Sort(byName(fields))
// Delete all fields that are hidden by the Go rules for embedded fields.
// The fields are sorted in primary order of name, secondary order of field
// index length. So the first field with a given name is the dominant one.
var out []Field
for advance, i := 0, 0; i < len(fields); i += advance {
// One iteration per name.
// Find the sequence of fields with the name of this first field.
fi := fields[i]
name := fi.Name
for advance = 1; i+advance < len(fields); advance++ {
fj := fields[i+advance]
if fj.Name != name {
break
}
}
// Find the dominant field, if any, out of all fields that have the same name.
dominant, ok := dominantField(fields[i : i+advance])
if ok {
out = append(out, dominant)
}
}
sort.Sort(byIndex(out))
return out
}
func (c *Cache) listFields(t reflect.Type) []Field {
// This uses the same condition that the Go language does: there must be a unique instance
// of the match at a given depth level. If there are multiple instances of a match at the
// same depth, they annihilate each other and inhibit any possible match at a lower level.
// The algorithm is breadth first search, one depth level at a time.
// The current and next slices are work queues:
// current lists the fields to visit on this depth level,
// and next lists the fields on the next lower level.
current := []fieldScan{}
next := []fieldScan{{typ: t}}
// nextCount records the number of times an embedded type has been
// encountered and considered for queueing in the 'next' slice.
// We only queue the first one, but we increment the count on each.
// If a struct type T can be reached more than once at a given depth level,
// then it annihilates itself and need not be considered at all when we
// process that next depth level.
var nextCount map[reflect.Type]int
// visited records the structs that have been considered already.
// Embedded pointer fields can create cycles in the graph of
// reachable embedded types; visited avoids following those cycles.
// It also avoids duplicated effort: if we didn't find the field in an
// embedded type T at level 2, we won't find it in one at level 4 either.
visited := map[reflect.Type]bool{}
var fields []Field // Fields found.
for len(next) > 0 {
current, next = next, current[:0]
count := nextCount
nextCount = nil
// Process all the fields at this depth, now listed in 'current'.
// The loop queues embedded fields found in 'next', for processing during the next
// iteration. The multiplicity of the 'current' field counts is recorded
// in 'count'; the multiplicity of the 'next' field counts is recorded in 'nextCount'.
for _, scan := range current {
t := scan.typ
if visited[t] {
// We've looked through this type before, at a higher level.
// That higher level would shadow the lower level we're now at,
// so this one can't be useful to us. Ignore it.
continue
}
visited[t] = true
for i := 0; i < t.NumField(); i++ {
f := t.Field(i)
exported := (f.PkgPath == "")
// If a named field is unexported, ignore it. An anonymous
// unexported field is processed, because it may contain
// exported fields, which are visible.
if !exported && !f.Anonymous {
continue
}
// Examine the tag.
tagName, keep, other := c.parseTag(f.Tag)
if !keep {
continue
}
var ntyp reflect.Type
if f.Anonymous {
// Anonymous field of type T or *T.
ntyp = f.Type
if ntyp.Kind() == reflect.Ptr {
ntyp = ntyp.Elem()
}
}
// Record fields with a tag name, non-anonymous fields, or
// anonymous non-struct fields.
if tagName != "" || ntyp == nil || ntyp.Kind() != reflect.Struct {
if !exported {
continue
}
fields = append(fields, newField(f, tagName, other, scan.index, i))
if count[t] > 1 {
// If there were multiple instances, add a second,
// so that the annihilation code will see a duplicate.
fields = append(fields, fields[len(fields)-1])
}
continue
}
// Queue embedded struct fields for processing with next level,
// but only if the embedded types haven't already been queued.
if nextCount[ntyp] > 0 {
nextCount[ntyp] = 2 // exact multiple doesn't matter
continue
}
if nextCount == nil {
nextCount = map[reflect.Type]int{}
}
nextCount[ntyp] = 1
if count[t] > 1 {
nextCount[ntyp] = 2 // exact multiple doesn't matter
}
var index []int
index = append(index, scan.index...)
index = append(index, i)
next = append(next, fieldScan{ntyp, index})
}
}
}
return fields
}
func newField(f reflect.StructField, tagName string, other interface{}, index []int, i int) Field {
name := tagName
if name == "" {
name = f.Name
}
sf := Field{
Name: name,
NameFromTag: tagName != "",
Type: f.Type,
ParsedTag: other,
nameBytes: []byte(name),
}
sf.equalFold = foldFunc(sf.nameBytes)
sf.Index = append(sf.Index, index...)
sf.Index = append(sf.Index, i)
return sf
}
// byName sorts fields using the following criteria, in order:
// 1. name
// 2. embedding depth
// 3. tag presence (preferring a tagged field)
// 4. index sequence.
type byName []Field
func (x byName) Len() int { return len(x) }
func (x byName) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
func (x byName) Less(i, j int) bool {
if x[i].Name != x[j].Name {
return x[i].Name < x[j].Name
}
if len(x[i].Index) != len(x[j].Index) {
return len(x[i].Index) < len(x[j].Index)
}
if x[i].NameFromTag != x[j].NameFromTag {
return x[i].NameFromTag
}
return byIndex(x).Less(i, j)
}
// byIndex sorts field by index sequence.
type byIndex []Field
func (x byIndex) Len() int { return len(x) }
func (x byIndex) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
func (x byIndex) Less(i, j int) bool {
xi := x[i].Index
xj := x[j].Index
ln := len(xi)
if l := len(xj); l < ln {
ln = l
}
for k := 0; k < ln; k++ {
if xi[k] != xj[k] {
return xi[k] < xj[k]
}
}
return len(xi) < len(xj)
}
// dominantField looks through the fields, all of which are known to have the
// same name, to find the single field that dominates the others using Go's
// embedding rules, modified by the presence of tags. If there are multiple
// top-level fields, the boolean will be false: This condition is an error in
// Go and we skip all the fields.
func dominantField(fs []Field) (Field, bool) {
// The fields are sorted in increasing index-length order, then by presence of tag.
// That means that the first field is the dominant one. We need only check
// for error cases: two fields at top level, either both tagged or neither tagged.
if len(fs) > 1 && len(fs[0].Index) == len(fs[1].Index) && fs[0].NameFromTag == fs[1].NameFromTag {
return Field{}, false
}
return fs[0], true
}
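
A sketch of the Cache/Fields/Match flow the package comment describes. The record struct, the "mytag" tag key, and parseTag below are hypothetical; note also that, being internal, this package is only importable from within cloud.google.com/go.

package example

import (
	"reflect"
	"strings"

	"cloud.google.com/go/internal/fields"
)

type record struct {
	Name  string `mytag:"full_name"`
	Email string
	Age   int `mytag:"-"` // dropped by parseTag below
}

// parseTag drops fields tagged "-" and otherwise uses the tag text
// (up to the first comma) as the effective field name.
func parseTag(t reflect.StructTag) (name string, keep bool, other interface{}) {
	s := t.Get("mytag")
	if s == "-" {
		return "", false, nil
	}
	return strings.Split(s, ",")[0], true, nil
}

var cache = fields.NewCache(parseTag)

// lookup resolves a column name to a struct field: exact match first,
// then case-insensitive, as implemented by List.Match above.
func lookup(col string) *fields.Field {
	return cache.Fields(reflect.TypeOf(record{})).Match(col)
}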

156
vendor/cloud.google.com/go/internal/fields/fold.go generated vendored Normal file

@@ -0,0 +1,156 @@
// Copyright 2016 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package fields
// This file was copied from https://go.googlesource.com/go/+/go1.7.3/src/encoding/json/fold.go.
// Only the license and package were changed.
import (
"bytes"
"unicode/utf8"
)
const (
caseMask = ^byte(0x20) // Mask to ignore case in ASCII.
kelvin = '\u212a'
smallLongEss = '\u017f'
)
// foldFunc returns one of four different case folding equivalence
// functions, from most general (and slow) to fastest:
//
// 1) bytes.EqualFold, if the key s contains any non-ASCII UTF-8
// 2) equalFoldRight, if s contains special folding ASCII ('k', 'K', 's', 'S')
// 3) asciiEqualFold, no special, but includes non-letters (including _)
// 4) simpleLetterEqualFold, no specials, no non-letters.
//
// The letters S and K are special because they map to 3 runes, not just 2:
// * S maps to s and to U+017F 'ſ' Latin small letter long s
// * k maps to K and to U+212A 'K' Kelvin sign
// See https://play.golang.org/p/tTxjOc0OGo
//
// The returned function is specialized for matching against s and
// should only be given s. It's not curried for performance reasons.
func foldFunc(s []byte) func(s, t []byte) bool {
nonLetter := false
special := false // special letter
for _, b := range s {
if b >= utf8.RuneSelf {
return bytes.EqualFold
}
upper := b & caseMask
if upper < 'A' || upper > 'Z' {
nonLetter = true
} else if upper == 'K' || upper == 'S' {
// See above for why these letters are special.
special = true
}
}
if special {
return equalFoldRight
}
if nonLetter {
return asciiEqualFold
}
return simpleLetterEqualFold
}
// equalFoldRight is a specialization of bytes.EqualFold when s is
// known to be all ASCII (including punctuation), but contains an 's',
// 'S', 'k', or 'K', requiring a Unicode fold on the bytes in t.
// See comments on foldFunc.
func equalFoldRight(s, t []byte) bool {
for _, sb := range s {
if len(t) == 0 {
return false
}
tb := t[0]
if tb < utf8.RuneSelf {
if sb != tb {
sbUpper := sb & caseMask
if 'A' <= sbUpper && sbUpper <= 'Z' {
if sbUpper != tb&caseMask {
return false
}
} else {
return false
}
}
t = t[1:]
continue
}
// sb is ASCII and t is not. t must be either kelvin
// sign or long s; sb must be s, S, k, or K.
tr, size := utf8.DecodeRune(t)
switch sb {
case 's', 'S':
if tr != smallLongEss {
return false
}
case 'k', 'K':
if tr != kelvin {
return false
}
default:
return false
}
t = t[size:]
}
if len(t) > 0 {
return false
}
return true
}
// asciiEqualFold is a specialization of bytes.EqualFold for use when
// s is all ASCII (but may contain non-letters) and contains no
// special-folding letters.
// See comments on foldFunc.
func asciiEqualFold(s, t []byte) bool {
if len(s) != len(t) {
return false
}
for i, sb := range s {
tb := t[i]
if sb == tb {
continue
}
if ('a' <= sb && sb <= 'z') || ('A' <= sb && sb <= 'Z') {
if sb&caseMask != tb&caseMask {
return false
}
} else {
return false
}
}
return true
}
// simpleLetterEqualFold is a specialization of bytes.EqualFold for
// use when s is all ASCII letters (no underscores, etc) and also
// doesn't contain 'k', 'K', 's', or 'S'.
// See comments on foldFunc.
func simpleLetterEqualFold(s, t []byte) bool {
if len(s) != len(t) {
return false
}
for i, b := range s {
if b&caseMask != t[i]&caseMask {
return false
}
}
return true
}
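
A small sketch of which comparisons the selected fold function accepts; since foldFunc is unexported this could only live in the fields package itself (for instance in a test file), and the helper name is hypothetical.

package fields

import "fmt"

// demoFold shows foldFunc picking the Unicode-aware matcher for a key
// containing 'K', which then folds the U+212A Kelvin sign back to 'k'.
func demoFold() {
	eq := foldFunc([]byte("Kelvin"))
	fmt.Println(eq([]byte("Kelvin"), []byte("kelvin")))      // true: plain ASCII case fold
	fmt.Println(eq([]byte("Kelvin"), []byte("\u212aelvin"))) // true: Kelvin sign folds to 'k'
}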


@@ -0,0 +1,94 @@
// Copyright 2016 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package optional provides versions of primitive types that can
// be nil. These are useful in methods that update some of an API object's
// fields.
package optional
import (
"fmt"
"strings"
)
type (
// Bool is either a bool or nil.
Bool interface{}
// String is either a string or nil.
String interface{}
// Int is either an int or nil.
Int interface{}
// Uint is either a uint or nil.
Uint interface{}
// Float64 is either a float64 or nil.
Float64 interface{}
)
// ToBool returns its argument as a bool.
// It panics if its argument is nil or not a bool.
func ToBool(v Bool) bool {
x, ok := v.(bool)
if !ok {
doPanic("Bool", v)
}
return x
}
// ToString returns its argument as a string.
// It panics if its argument is nil or not a string.
func ToString(v String) string {
x, ok := v.(string)
if !ok {
doPanic("String", v)
}
return x
}
// ToInt returns its argument as an int.
// It panics if its argument is nil or not an int.
func ToInt(v Int) int {
x, ok := v.(int)
if !ok {
doPanic("Int", v)
}
return x
}
// ToUint returns its argument as a uint.
// It panics if its argument is nil or not a uint.
func ToUint(v Uint) uint {
x, ok := v.(uint)
if !ok {
doPanic("Uint", v)
}
return x
}
// ToFloat64 returns its argument as a float64.
// It panics if its argument is nil or not a float64.
func ToFloat64(v Float64) float64 {
x, ok := v.(float64)
if !ok {
doPanic("Float64", v)
}
return x
}
func doPanic(capType string, v interface{}) {
panic(fmt.Sprintf("optional.%s value should be %s, got %T", capType, strings.ToLower(capType), v))
}
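
A sketch of the update-call pattern the package comment has in mind: each optional field is left nil to mean "no change" and converted with the To* helpers when set. The struct and function names are hypothetical, and the internal import path is shown only for illustration.

package example

import "cloud.google.com/go/internal/optional"

// bucketAttrsToUpdate is a hypothetical update request; a nil field means
// "leave this attribute unchanged".
type bucketAttrsToUpdate struct {
	VersioningEnabled optional.Bool
	StorageClass      optional.String
}

func apply(u bucketAttrsToUpdate) map[string]interface{} {
	changes := map[string]interface{}{}
	if u.VersioningEnabled != nil {
		// ToBool panics if the value is present but not actually a bool.
		changes["versioning"] = optional.ToBool(u.VersioningEnabled)
	}
	if u.StorageClass != nil {
		changes["storageClass"] = optional.ToString(u.StorageClass)
	}
	return changes
}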

78
vendor/cloud.google.com/go/internal/pretty/diff.go generated vendored Normal file

@@ -0,0 +1,78 @@
// Copyright 2016 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package pretty
import (
"fmt"
"io/ioutil"
"os"
"os/exec"
"syscall"
)
// Diff compares the pretty-printed representation of two values. The second
// return value reports whether the two values' representations are identical.
// If it is false, the first return value contains the diffs.
//
// The output labels the first value "want" and the second "got".
//
// Diff works by invoking the "diff" command. It will only succeed in
// environments where "diff" is on the shell path.
func Diff(want, got interface{}) (string, bool, error) {
fname1, err := writeToTemp(want)
if err != nil {
return "", false, err
}
defer os.Remove(fname1)
fname2, err := writeToTemp(got)
if err != nil {
return "", false, err
}
defer os.Remove(fname2)
cmd := exec.Command("diff", "-u", "--label=want", "--label=got", fname1, fname2)
out, err := cmd.Output()
if err == nil {
return string(out), true, nil
}
eerr, ok := err.(*exec.ExitError)
if !ok {
return "", false, err
}
ws, ok := eerr.Sys().(syscall.WaitStatus)
if !ok {
return "", false, err
}
if ws.ExitStatus() != 1 {
return "", false, err
}
// Exit status of 1 means no error, but diffs were found.
return string(out), false, nil
}
func writeToTemp(v interface{}) (string, error) {
f, err := ioutil.TempFile("", "prettyDiff")
if err != nil {
return "", err
}
if _, err := fmt.Fprintf(f, "%+v\n", Value(v)); err != nil {
return "", err
}
if err := f.Close(); err != nil {
return "", err
}
return f.Name(), nil
}
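
A sketch of using Diff from a test helper, assuming the "diff" binary is on PATH as the comment above requires; assertEqual is a hypothetical name.

package example

import (
	"testing"

	"cloud.google.com/go/internal/pretty"
)

// assertEqual fails the test with a unified want/got diff when the
// pretty-printed representations differ.
func assertEqual(t *testing.T, want, got interface{}) {
	d, equal, err := pretty.Diff(want, got)
	if err != nil {
		t.Fatalf("running diff: %v", err)
	}
	if !equal {
		t.Errorf("values differ (want vs. got):\n%s", d)
	}
}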

238
vendor/cloud.google.com/go/internal/pretty/pretty.go generated vendored Normal file

@@ -0,0 +1,238 @@
// Copyright 2016 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package pretty implements a simple pretty-printer. It is intended for
// debugging the output of tests.
//
// It follows pointers and produces multi-line output for complex values like
// slices, maps and structs.
package pretty
import (
"fmt"
"io"
"reflect"
"sort"
"strings"
)
// Indent is the string output at each level of indentation.
var Indent = " "
// Value returns a value that will print prettily when used as an
// argument for the %v or %s format specifiers.
// With no flags, struct fields and map keys with default values are omitted.
// With the '+' or '#' flags, all values are displayed.
//
// This package does not detect cycles. Attempting to print a Value that
// contains cycles will result in unbounded recursion.
func Value(v interface{}) val { return val{v: v} }
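// For illustration (hypothetical caller, not part of the original file):
//
//	type point struct{ X, Y int }
//	fmt.Printf("%v\n", Value(point{X: 1}))  // omits Y, which has its zero value
//	fmt.Printf("%+v\n", Value(point{X: 1})) // shows all fields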
type val struct{ v interface{} }
// Format implements the fmt.Formatter interface.
func (v val) Format(s fmt.State, c rune) {
if c == 'v' || c == 's' {
fprint(s, reflect.ValueOf(v.v), state{
defaults: s.Flag('+') || s.Flag('#'),
})
} else {
fmt.Fprintf(s, "%%!%c(pretty.Val)", c)
}
}
type state struct {
level int
prefix, suffix string
defaults bool
}
func fprint(w io.Writer, v reflect.Value, s state) {
indent := strings.Repeat(Indent, s.level)
fmt.Fprintf(w, "%s%s", indent, s.prefix)
if isNil(v) {
fmt.Fprintf(w, "nil%s", s.suffix)
return
}
for v.Type().Kind() == reflect.Ptr {
fmt.Fprintf(w, "&")
v = v.Elem()
}
switch v.Type().Kind() {
default:
fmt.Fprintf(w, "%s%s", short(v), s.suffix)
case reflect.Array:
fmt.Fprintf(w, "%s{\n", v.Type())
for i := 0; i < v.Len(); i++ {
fprint(w, v.Index(i), state{
level: s.level + 1,
prefix: "",
suffix: ",",
defaults: s.defaults,
})
fmt.Fprintln(w)
}
fmt.Fprintf(w, "%s}", indent)
case reflect.Slice:
fmt.Fprintf(w, "%s{", v.Type())
if v.Len() > 0 {
fmt.Fprintln(w)
for i := 0; i < v.Len(); i++ {
fprint(w, v.Index(i), state{
level: s.level + 1,
prefix: "",
suffix: ",",
defaults: s.defaults,
})
fmt.Fprintln(w)
}
}
fmt.Fprintf(w, "%s}%s", indent, s.suffix)
case reflect.Map:
fmt.Fprintf(w, "%s{", v.Type())
if v.Len() > 0 {
fmt.Fprintln(w)
keys := v.MapKeys()
maybeSort(keys, v.Type().Key())
for _, key := range keys {
val := v.MapIndex(key)
if s.defaults || !isDefault(val) {
fprint(w, val, state{
level: s.level + 1,
prefix: short(key) + ": ",
suffix: ",",
defaults: s.defaults,
})
fmt.Fprintln(w)
}
}
}
fmt.Fprintf(w, "%s}%s", indent, s.suffix)
case reflect.Struct:
t := v.Type()
fmt.Fprintf(w, "%s{\n", t)
for i := 0; i < t.NumField(); i++ {
f := v.Field(i)
if s.defaults || !isDefault(f) {
fprint(w, f, state{
level: s.level + 1,
prefix: t.Field(i).Name + ": ",
suffix: ",",
defaults: s.defaults,
})
fmt.Fprintln(w)
}
}
fmt.Fprintf(w, "%s}%s", indent, s.suffix)
}
}
func isNil(v reflect.Value) bool {
if !v.IsValid() {
return true
}
switch v.Type().Kind() {
case reflect.Chan, reflect.Func, reflect.Interface, reflect.Map, reflect.Ptr, reflect.Slice:
return v.IsNil()
default:
return false
}
}
func isDefault(v reflect.Value) bool {
if !v.IsValid() {
return true
}
t := v.Type()
switch t.Kind() {
case reflect.Chan, reflect.Func, reflect.Interface, reflect.Map, reflect.Ptr, reflect.Slice:
return v.IsNil()
default:
if !v.CanInterface() {
return false
}
return t.Comparable() && v.Interface() == reflect.Zero(t).Interface()
}
}
// short returns a short, one-line string for v.
func short(v reflect.Value) string {
if !v.IsValid() {
return "nil"
}
if v.Type().Kind() == reflect.String {
return fmt.Sprintf("%q", v)
}
return fmt.Sprintf("%v", v)
}
func indent(w io.Writer, level int) {
for i := 0; i < level; i++ {
io.WriteString(w, Indent) // ignore errors
}
}
func maybeSort(vs []reflect.Value, t reflect.Type) {
if less := lessFunc(t); less != nil {
sort.Sort(&sorter{vs, less})
}
}
// lessFunc returns a function that implements the "<" operator
// for the given type, or nil if the type doesn't support "<".
func lessFunc(t reflect.Type) func(v1, v2 interface{}) bool {
switch t.Kind() {
case reflect.String:
return func(v1, v2 interface{}) bool { return v1.(string) < v2.(string) }
case reflect.Int:
return func(v1, v2 interface{}) bool { return v1.(int) < v2.(int) }
case reflect.Int8:
return func(v1, v2 interface{}) bool { return v1.(int8) < v2.(int8) }
case reflect.Int16:
return func(v1, v2 interface{}) bool { return v1.(int16) < v2.(int16) }
case reflect.Int32:
return func(v1, v2 interface{}) bool { return v1.(int32) < v2.(int32) }
case reflect.Int64:
return func(v1, v2 interface{}) bool { return v1.(int64) < v2.(int64) }
case reflect.Uint:
return func(v1, v2 interface{}) bool { return v1.(uint) < v2.(uint) }
case reflect.Uint8:
return func(v1, v2 interface{}) bool { return v1.(uint8) < v2.(uint8) }
case reflect.Uint16:
return func(v1, v2 interface{}) bool { return v1.(uint16) < v2.(uint16) }
case reflect.Uint32:
return func(v1, v2 interface{}) bool { return v1.(uint32) < v2.(uint32) }
case reflect.Uint64:
return func(v1, v2 interface{}) bool { return v1.(uint64) < v2.(uint64) }
case reflect.Float32:
return func(v1, v2 interface{}) bool { return v1.(float32) < v2.(float32) }
case reflect.Float64:
return func(v1, v2 interface{}) bool { return v1.(float64) < v2.(float64) }
default:
return nil
}
}
type sorter struct {
vs []reflect.Value
less func(v1, v2 interface{}) bool
}
func (s *sorter) Len() int { return len(s.vs) }
func (s *sorter) Swap(i, j int) { s.vs[i], s.vs[j] = s.vs[j], s.vs[i] }
func (s *sorter) Less(i, j int) bool { return s.less(s.vs[i].Interface(), s.vs[j].Interface()) }

55
vendor/cloud.google.com/go/internal/retry.go generated vendored Normal file
View File

@ -0,0 +1,55 @@
// Copyright 2016 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package internal
import (
"fmt"
"time"
gax "github.com/googleapis/gax-go"
"golang.org/x/net/context"
)
// Retry calls the supplied function f repeatedly according to the provided
// backoff parameters. It returns when one of the following occurs:
// When f's first return value is true, Retry immediately returns with f's second
// return value.
// When the provided context is done, Retry returns with ctx.Err().
func Retry(ctx context.Context, bo gax.Backoff, f func() (stop bool, err error)) error {
return retry(ctx, bo, f, gax.Sleep)
}
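// A hedged usage sketch (the caller code, doRPC and isRetryable are
// assumptions, not part of this file):
//
//	bo := gax.Backoff{Initial: 100 * time.Millisecond, Max: 5 * time.Second, Multiplier: 2}
//	err := Retry(ctx, bo, func() (bool, error) {
//		err := doRPC()                // hypothetical operation
//		return !isRetryable(err), err // stop when the error is permanent (or nil)
//	})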
func retry(ctx context.Context, bo gax.Backoff, f func() (stop bool, err error),
sleep func(context.Context, time.Duration) error) error {
var lastErr error
for {
stop, err := f()
if stop {
return err
}
// Remember the last "real" error from f.
if err != nil && err != context.Canceled && err != context.DeadlineExceeded {
lastErr = err
}
p := bo.Pause()
if cerr := sleep(ctx, p); cerr != nil {
if lastErr != nil {
return fmt.Errorf("%v; last function err: %v", cerr, lastErr)
}
return cerr
}
}
}

View File

@ -0,0 +1,60 @@
// Copyright 2014 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package testutil contains helper functions for writing tests.
package testutil
import (
"io/ioutil"
"log"
"os"
"golang.org/x/net/context"
"golang.org/x/oauth2"
"golang.org/x/oauth2/google"
)
const (
envProjID = "GCLOUD_TESTS_GOLANG_PROJECT_ID"
envPrivateKey = "GCLOUD_TESTS_GOLANG_KEY"
)
// ProjID returns the project ID to use in integration tests, or the empty
// string if none is configured.
func ProjID() string {
projID := os.Getenv(envProjID)
if projID == "" {
return ""
}
return projID
}
// TokenSource returns the OAuth2 token source to use in integration tests,
// or nil if none is configured. TokenSource will log.Fatal if the token
// source is specified but missing or invalid.
func TokenSource(ctx context.Context, scopes ...string) oauth2.TokenSource {
key := os.Getenv(envPrivateKey)
if key == "" {
return nil
}
jsonKey, err := ioutil.ReadFile(key)
if err != nil {
log.Fatalf("Cannot read the JSON key file, err: %v", err)
}
conf, err := google.JWTConfigFromJSON(jsonKey, scopes...)
if err != nil {
log.Fatalf("google.JWTConfigFromJSON: %v", err)
}
return conf.TokenSource(ctx)
}
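// Typical use in an integration test (a sketch; the scope and the testing
// context are assumptions, not part of the original file):
//
//	ts := TokenSource(ctx, "https://www.googleapis.com/auth/cloud-platform")
//	if ts == nil {
//		t.Skip("integration test credentials not configured")
//	}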

73
vendor/cloud.google.com/go/internal/testutil/server.go generated vendored Normal file
View File

@ -0,0 +1,73 @@
/*
Copyright 2016 Google Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package testutil
import (
"net"
grpc "google.golang.org/grpc"
)
// A Server is an in-process gRPC server, listening on a system-chosen port on
// the local loopback interface. Servers are for testing only and are not
// intended to be used in production code.
//
// To create a server, make a new Server, register your handlers, then call
// Start:
//
// srv, err := NewServer()
// ...
// mypb.RegisterMyServiceServer(srv.Gsrv, &myHandler)
// ....
// srv.Start()
//
// Clients should connect to the server with no security:
//
// conn, err := grpc.Dial(srv.Addr, grpc.WithInsecure())
// ...
type Server struct {
Addr string
l net.Listener
Gsrv *grpc.Server
}
// NewServer creates a new Server. The Server will be listening for gRPC connections
// at the address named by the Addr field, without TLS.
func NewServer() (*Server, error) {
l, err := net.Listen("tcp", "127.0.0.1:0")
if err != nil {
return nil, err
}
s := &Server{
Addr: l.Addr().String(),
l: l,
Gsrv: grpc.NewServer(),
}
return s, nil
}
// Start causes the server to start accepting incoming connections.
// Call Start after registering handlers.
func (s *Server) Start() {
go s.Gsrv.Serve(s.l)
}
// Close shuts down the server.
func (s *Server) Close() {
s.Gsrv.Stop()
s.l.Close()
}

22
vendor/github.com/chzyer/readline/LICENSE generated vendored Normal file
View File

@ -0,0 +1,22 @@
The MIT License (MIT)
Copyright (c) 2015 Chzyer
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

246
vendor/github.com/chzyer/readline/ansi_windows.go generated vendored Normal file
View File

@ -0,0 +1,246 @@
// +build windows
package readline
import (
"bufio"
"io"
"strconv"
"strings"
"sync"
"unicode/utf8"
"unsafe"
)
const (
_ = uint16(0)
COLOR_FBLUE = 0x0001
COLOR_FGREEN = 0x0002
COLOR_FRED = 0x0004
COLOR_FINTENSITY = 0x0008
COLOR_BBLUE = 0x0010
COLOR_BGREEN = 0x0020
COLOR_BRED = 0x0040
COLOR_BINTENSITY = 0x0080
COMMON_LVB_UNDERSCORE = 0x8000
)
var ColorTableFg = []word{
0, // 30: Black
COLOR_FRED, // 31: Red
COLOR_FGREEN, // 32: Green
COLOR_FRED | COLOR_FGREEN, // 33: Yellow
COLOR_FBLUE, // 34: Blue
COLOR_FRED | COLOR_FBLUE, // 35: Magenta
COLOR_FGREEN | COLOR_FBLUE, // 36: Cyan
COLOR_FRED | COLOR_FBLUE | COLOR_FGREEN, // 37: White
}
var ColorTableBg = []word{
0, // 40: Black
COLOR_BRED, // 41: Red
COLOR_BGREEN, // 42: Green
COLOR_BRED | COLOR_BGREEN, // 43: Yellow
COLOR_BBLUE, // 44: Blue
COLOR_BRED | COLOR_BBLUE, // 45: Magenta
COLOR_BGREEN | COLOR_BBLUE, // 46: Cyan
COLOR_BRED | COLOR_BBLUE | COLOR_BGREEN, // 47: White
}
type ANSIWriter struct {
target io.Writer
wg sync.WaitGroup
ctx *ANSIWriterCtx
sync.Mutex
}
func NewANSIWriter(w io.Writer) *ANSIWriter {
a := &ANSIWriter{
target: w,
ctx: NewANSIWriterCtx(w),
}
return a
}
func (a *ANSIWriter) Close() error {
a.wg.Wait()
return nil
}
type ANSIWriterCtx struct {
isEsc bool
isEscSeq bool
arg []string
target *bufio.Writer
wantFlush bool
}
func NewANSIWriterCtx(target io.Writer) *ANSIWriterCtx {
return &ANSIWriterCtx{
target: bufio.NewWriter(target),
}
}
func (a *ANSIWriterCtx) Flush() {
a.target.Flush()
}
func (a *ANSIWriterCtx) process(r rune) bool {
if a.wantFlush {
if r == 0 || r == CharEsc {
a.wantFlush = false
a.target.Flush()
}
}
if a.isEscSeq {
a.isEscSeq = a.ioloopEscSeq(a.target, r, &a.arg)
return true
}
switch r {
case CharEsc:
a.isEsc = true
case '[':
if a.isEsc {
a.arg = nil
a.isEscSeq = true
a.isEsc = false
break
}
fallthrough
default:
a.target.WriteRune(r)
a.wantFlush = true
}
return true
}
func (a *ANSIWriterCtx) ioloopEscSeq(w *bufio.Writer, r rune, argptr *[]string) bool {
arg := *argptr
var err error
if r >= 'A' && r <= 'D' {
count := short(GetInt(arg, 1))
info, err := GetConsoleScreenBufferInfo()
if err != nil {
return false
}
switch r {
case 'A': // up
info.dwCursorPosition.y -= count
case 'B': // down
info.dwCursorPosition.y += count
case 'C': // right
info.dwCursorPosition.x += count
case 'D': // left
info.dwCursorPosition.x -= count
}
SetConsoleCursorPosition(&info.dwCursorPosition)
return false
}
switch r {
case 'J':
killLines()
case 'K':
eraseLine()
case 'm':
color := word(0)
for _, item := range arg {
var c int
c, err = strconv.Atoi(item)
if err != nil {
w.WriteString("[" + strings.Join(arg, ";") + "m")
break
}
if c >= 30 && c < 40 {
color ^= COLOR_FINTENSITY
color |= ColorTableFg[c-30]
} else if c >= 40 && c < 50 {
color ^= COLOR_BINTENSITY
color |= ColorTableBg[c-40]
} else if c == 4 {
color |= COMMON_LVB_UNDERSCORE | ColorTableFg[7]
} else { // unknown code, treat as reset
color = ColorTableFg[7]
}
}
if err != nil {
break
}
kernel.SetConsoleTextAttribute(stdout, uintptr(color))
case '\007': // set title
case ';':
if len(arg) == 0 || arg[len(arg)-1] != "" {
arg = append(arg, "")
*argptr = arg
}
return true
default:
if len(arg) == 0 {
arg = append(arg, "")
}
arg[len(arg)-1] += string(r)
*argptr = arg
return true
}
*argptr = nil
return false
}
func (a *ANSIWriter) Write(b []byte) (int, error) {
a.Lock()
defer a.Unlock()
off := 0
for len(b) > off {
r, size := utf8.DecodeRune(b[off:])
if size == 0 {
return off, io.ErrShortWrite
}
off += size
a.ctx.process(r)
}
a.ctx.Flush()
return off, nil
}
func killLines() error {
sbi, err := GetConsoleScreenBufferInfo()
if err != nil {
return err
}
size := (sbi.dwCursorPosition.y - sbi.dwSize.y) * sbi.dwSize.x
size += sbi.dwCursorPosition.x
var written int
kernel.FillConsoleOutputAttribute(stdout, uintptr(ColorTableFg[7]),
uintptr(size),
sbi.dwCursorPosition.ptr(),
uintptr(unsafe.Pointer(&written)),
)
return kernel.FillConsoleOutputCharacterW(stdout, uintptr(' '),
uintptr(size),
sbi.dwCursorPosition.ptr(),
uintptr(unsafe.Pointer(&written)),
)
}
func eraseLine() error {
sbi, err := GetConsoleScreenBufferInfo()
if err != nil {
return err
}
size := sbi.dwSize.x
sbi.dwCursorPosition.x = 0
var written int
return kernel.FillConsoleOutputCharacterW(stdout, uintptr(' '),
uintptr(size),
sbi.dwCursorPosition.ptr(),
uintptr(unsafe.Pointer(&written)),
)
}

283
vendor/github.com/chzyer/readline/complete.go generated vendored Normal file
View File

@ -0,0 +1,283 @@
package readline
import (
"bufio"
"bytes"
"fmt"
"io"
)
type AutoCompleter interface {
// Readline will pass the whole line and the current offset to it.
// The completer should return all candidates, together with the length of the
// characters they share with the line.
// Example:
// [go, git, git-shell, grep]
// Do("g", 1) => ["o", "it", "it-shell", "rep"], 1
// Do("gi", 2) => ["t", "t-shell"], 2
// Do("git", 3) => ["", "-shell"], 3
Do(line []rune, pos int) (newLine [][]rune, length int)
}
type TabCompleter struct{}
func (t *TabCompleter) Do([]rune, int) ([][]rune, int) {
return [][]rune{[]rune("\t")}, 0
}
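// A minimal custom completer implementing the contract above (an illustrative
// sketch, not part of the original file; the word list and the use of the
// strings package are assumptions):
//
//	type wordCompleter struct{ words []string }
//
//	func (w *wordCompleter) Do(line []rune, pos int) ([][]rune, int) {
//		prefix := string(line[:pos])
//		var out [][]rune
//		for _, cand := range w.words {
//			if strings.HasPrefix(cand, prefix) {
//				out = append(out, []rune(cand[len(prefix):]))
//			}
//		}
//		return out, pos // pos = number of runes the candidates continue from
//	}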
type opCompleter struct {
w io.Writer
op *Operation
width int
inCompleteMode bool
inSelectMode bool
candidate [][]rune
candidateSource []rune
candidateOff int
candidateChoise int
candidateColNum int
}
func newOpCompleter(w io.Writer, op *Operation, width int) *opCompleter {
return &opCompleter{
w: w,
op: op,
width: width,
}
}
func (o *opCompleter) doSelect() {
if len(o.candidate) == 1 {
o.op.buf.WriteRunes(o.candidate[0])
o.ExitCompleteMode(false)
return
}
o.nextCandidate(1)
o.CompleteRefresh()
}
func (o *opCompleter) nextCandidate(i int) {
o.candidateChoise += i
o.candidateChoise = o.candidateChoise % len(o.candidate)
if o.candidateChoise < 0 {
o.candidateChoise = len(o.candidate) + o.candidateChoise
}
}
func (o *opCompleter) OnComplete() bool {
if o.width == 0 {
return false
}
if o.IsInCompleteSelectMode() {
o.doSelect()
return true
}
buf := o.op.buf
rs := buf.Runes()
if o.IsInCompleteMode() && o.candidateSource != nil && runes.Equal(rs, o.candidateSource) {
o.EnterCompleteSelectMode()
o.doSelect()
return true
}
o.ExitCompleteSelectMode()
o.candidateSource = rs
newLines, offset := o.op.cfg.AutoComplete.Do(rs, buf.idx)
if len(newLines) == 0 {
o.ExitCompleteMode(false)
return true
}
// only Aggregate candidates in non-complete mode
if !o.IsInCompleteMode() {
if len(newLines) == 1 {
buf.WriteRunes(newLines[0])
o.ExitCompleteMode(false)
return true
}
same, size := runes.Aggregate(newLines)
if size > 0 {
buf.WriteRunes(same)
o.ExitCompleteMode(false)
return true
}
}
o.EnterCompleteMode(offset, newLines)
return true
}
func (o *opCompleter) IsInCompleteSelectMode() bool {
return o.inSelectMode
}
func (o *opCompleter) IsInCompleteMode() bool {
return o.inCompleteMode
}
func (o *opCompleter) HandleCompleteSelect(r rune) bool {
next := true
switch r {
case CharEnter, CharCtrlJ:
next = false
o.op.buf.WriteRunes(o.op.candidate[o.op.candidateChoise])
o.ExitCompleteMode(false)
case CharLineStart:
num := o.candidateChoise % o.candidateColNum
o.nextCandidate(-num)
case CharLineEnd:
num := o.candidateColNum - o.candidateChoise%o.candidateColNum - 1
o.candidateChoise += num
if o.candidateChoise >= len(o.candidate) {
o.candidateChoise = len(o.candidate) - 1
}
case CharBackspace:
o.ExitCompleteSelectMode()
next = false
case CharTab, CharForward:
o.doSelect()
case CharBell, CharInterrupt:
o.ExitCompleteMode(true)
next = false
case CharNext:
tmpChoise := o.candidateChoise + o.candidateColNum
if tmpChoise >= o.getMatrixSize() {
tmpChoise -= o.getMatrixSize()
} else if tmpChoise >= len(o.candidate) {
tmpChoise += o.candidateColNum
tmpChoise -= o.getMatrixSize()
}
o.candidateChoise = tmpChoise
case CharBackward:
o.nextCandidate(-1)
case CharPrev:
tmpChoise := o.candidateChoise - o.candidateColNum
if tmpChoise < 0 {
tmpChoise += o.getMatrixSize()
if tmpChoise >= len(o.candidate) {
tmpChoise -= o.candidateColNum
}
}
o.candidateChoise = tmpChoise
default:
next = false
o.ExitCompleteSelectMode()
}
if next {
o.CompleteRefresh()
return true
}
return false
}
func (o *opCompleter) getMatrixSize() int {
line := len(o.candidate) / o.candidateColNum
if len(o.candidate)%o.candidateColNum != 0 {
line++
}
return line * o.candidateColNum
}
func (o *opCompleter) OnWidthChange(newWidth int) {
o.width = newWidth
}
func (o *opCompleter) CompleteRefresh() {
if !o.inCompleteMode {
return
}
lineCnt := o.op.buf.CursorLineCount()
colWidth := 0
for _, c := range o.candidate {
w := runes.WidthAll(c)
if w > colWidth {
colWidth = w
}
}
colWidth += o.candidateOff + 1
same := o.op.buf.RuneSlice(-o.candidateOff)
// -1 to avoid reaching the end of the line
width := o.width - 1
colNum := width / colWidth
colWidth += (width - (colWidth * colNum)) / colNum
o.candidateColNum = colNum
buf := bufio.NewWriter(o.w)
buf.Write(bytes.Repeat([]byte("\n"), lineCnt))
colIdx := 0
lines := 1
buf.WriteString("\033[J")
for idx, c := range o.candidate {
inSelect := idx == o.candidateChoise && o.IsInCompleteSelectMode()
if inSelect {
buf.WriteString("\033[30;47m")
}
buf.WriteString(string(same))
buf.WriteString(string(c))
buf.Write(bytes.Repeat([]byte(" "), colWidth-len(c)-len(same)))
if inSelect {
buf.WriteString("\033[0m")
}
colIdx++
if colIdx == colNum {
buf.WriteString("\n")
lines++
colIdx = 0
}
}
// move back
fmt.Fprintf(buf, "\033[%dA\r", lineCnt-1+lines)
fmt.Fprintf(buf, "\033[%dC", o.op.buf.idx+o.op.buf.PromptLen())
buf.Flush()
}
func (o *opCompleter) aggCandidate(candidate [][]rune) int {
offset := 0
for i := 0; i < len(candidate[0]); i++ {
for j := 0; j < len(candidate)-1; j++ {
if i > len(candidate[j]) {
goto aggregate
}
if candidate[j][i] != candidate[j+1][i] {
goto aggregate
}
}
offset = i
}
aggregate:
return offset
}
func (o *opCompleter) EnterCompleteSelectMode() {
o.inSelectMode = true
o.candidateChoise = -1
o.CompleteRefresh()
}
func (o *opCompleter) EnterCompleteMode(offset int, candidate [][]rune) {
o.inCompleteMode = true
o.candidate = candidate
o.candidateOff = offset
o.CompleteRefresh()
}
func (o *opCompleter) ExitCompleteSelectMode() {
o.inSelectMode = false
o.candidate = nil
o.candidateChoise = -1
o.candidateOff = -1
o.candidateSource = nil
}
func (o *opCompleter) ExitCompleteMode(revent bool) {
o.inCompleteMode = false
o.ExitCompleteSelectMode()
}

165
vendor/github.com/chzyer/readline/complete_helper.go generated vendored Normal file
View File

@ -0,0 +1,165 @@
package readline
import (
"bytes"
"strings"
)
// DynamicCompleteFunc is the callback type used for dynamic completion
type DynamicCompleteFunc func(string) []string
type PrefixCompleterInterface interface {
Print(prefix string, level int, buf *bytes.Buffer)
Do(line []rune, pos int) (newLine [][]rune, length int)
GetName() []rune
GetChildren() []PrefixCompleterInterface
SetChildren(children []PrefixCompleterInterface)
}
type DynamicPrefixCompleterInterface interface {
PrefixCompleterInterface
IsDynamic() bool
GetDynamicNames(line []rune) [][]rune
}
type PrefixCompleter struct {
Name []rune
Dynamic bool
Callback DynamicCompleteFunc
Children []PrefixCompleterInterface
}
func (p *PrefixCompleter) Tree(prefix string) string {
buf := bytes.NewBuffer(nil)
p.Print(prefix, 0, buf)
return buf.String()
}
func Print(p PrefixCompleterInterface, prefix string, level int, buf *bytes.Buffer) {
if strings.TrimSpace(string(p.GetName())) != "" {
buf.WriteString(prefix)
if level > 0 {
buf.WriteString("├")
buf.WriteString(strings.Repeat("─", (level*4)-2))
buf.WriteString(" ")
}
buf.WriteString(string(p.GetName()) + "\n")
level++
}
for _, ch := range p.GetChildren() {
ch.Print(prefix, level, buf)
}
}
func (p *PrefixCompleter) Print(prefix string, level int, buf *bytes.Buffer) {
Print(p, prefix, level, buf)
}
func (p *PrefixCompleter) IsDynamic() bool {
return p.Dynamic
}
func (p *PrefixCompleter) GetName() []rune {
return p.Name
}
func (p *PrefixCompleter) GetDynamicNames(line []rune) [][]rune {
var names = [][]rune{}
for _, name := range p.Callback(string(line)) {
names = append(names, []rune(name+" "))
}
return names
}
func (p *PrefixCompleter) GetChildren() []PrefixCompleterInterface {
return p.Children
}
func (p *PrefixCompleter) SetChildren(children []PrefixCompleterInterface) {
p.Children = children
}
func NewPrefixCompleter(pc ...PrefixCompleterInterface) *PrefixCompleter {
return PcItem("", pc...)
}
func PcItem(name string, pc ...PrefixCompleterInterface) *PrefixCompleter {
name += " "
return &PrefixCompleter{
Name: []rune(name),
Dynamic: false,
Children: pc,
}
}
func PcItemDynamic(callback DynamicCompleteFunc, pc ...PrefixCompleterInterface) *PrefixCompleter {
return &PrefixCompleter{
Callback: callback,
Dynamic: true,
Children: pc,
}
}
func (p *PrefixCompleter) Do(line []rune, pos int) (newLine [][]rune, offset int) {
return doInternal(p, line, pos, line)
}
func Do(p PrefixCompleterInterface, line []rune, pos int) (newLine [][]rune, offset int) {
return doInternal(p, line, pos, line)
}
func doInternal(p PrefixCompleterInterface, line []rune, pos int, origLine []rune) (newLine [][]rune, offset int) {
line = runes.TrimSpaceLeft(line[:pos])
goNext := false
var lineCompleter PrefixCompleterInterface
for _, child := range p.GetChildren() {
childNames := make([][]rune, 1)
childDynamic, ok := child.(DynamicPrefixCompleterInterface)
if ok && childDynamic.IsDynamic() {
childNames = childDynamic.GetDynamicNames(origLine)
} else {
childNames[0] = child.GetName()
}
for _, childName := range childNames {
if len(line) >= len(childName) {
if runes.HasPrefix(line, childName) {
if len(line) == len(childName) {
newLine = append(newLine, []rune{' '})
} else {
newLine = append(newLine, childName)
}
offset = len(childName)
lineCompleter = child
goNext = true
}
} else {
if runes.HasPrefix(childName, line) {
newLine = append(newLine, childName[len(line):])
offset = len(line)
lineCompleter = child
}
}
}
}
if len(newLine) != 1 {
return
}
tmpLine := make([]rune, 0, len(line))
for i := offset; i < len(line); i++ {
if line[i] == ' ' {
continue
}
tmpLine = append(tmpLine, line[i:]...)
return doInternal(lineCompleter, tmpLine, len(tmpLine), origLine)
}
if goNext {
return doInternal(lineCompleter, nil, 0, origLine)
}
return
}

82
vendor/github.com/chzyer/readline/complete_segment.go generated vendored Normal file
View File

@ -0,0 +1,82 @@
package readline
type SegmentCompleter interface {
// a
// |- a1
// |--- a11
// |- a2
// b
// input:
// DoTree([], 0) [a, b]
// DoTree([a], 1) [a]
// DoTree([a, ], 0) [a1, a2]
// DoTree([a, a], 1) [a1, a2]
// DoTree([a, a1], 2) [a1]
// DoTree([a, a1, ], 0) [a11]
// DoTree([a, a1, a], 1) [a11]
DoSegment([][]rune, int) [][]rune
}
type dumpSegmentCompleter struct {
f func([][]rune, int) [][]rune
}
func (d *dumpSegmentCompleter) DoSegment(segment [][]rune, n int) [][]rune {
return d.f(segment, n)
}
func SegmentFunc(f func([][]rune, int) [][]rune) AutoCompleter {
return &SegmentComplete{&dumpSegmentCompleter{f}}
}
func SegmentAutoComplete(completer SegmentCompleter) *SegmentComplete {
return &SegmentComplete{
SegmentCompleter: completer,
}
}
type SegmentComplete struct {
SegmentCompleter
}
func RetSegment(segments [][]rune, cands [][]rune, idx int) ([][]rune, int) {
ret := make([][]rune, 0, len(cands))
lastSegment := segments[len(segments)-1]
for _, cand := range cands {
if !runes.HasPrefix(cand, lastSegment) {
continue
}
ret = append(ret, cand[len(lastSegment):])
}
return ret, idx
}
func SplitSegment(line []rune, pos int) ([][]rune, int) {
segs := [][]rune{}
lastIdx := -1
line = line[:pos]
pos = 0
for idx, l := range line {
if l == ' ' {
pos = 0
segs = append(segs, line[lastIdx+1:idx])
lastIdx = idx
} else {
pos++
}
}
segs = append(segs, line[lastIdx+1:])
return segs, pos
}
func (c *SegmentComplete) Do(line []rune, pos int) (newLine [][]rune, offset int) {
segment, idx := SplitSegment(line, pos)
cands := c.DoSegment(segment, idx)
newLine, offset = RetSegment(segment, cands, idx)
for idx := range newLine {
newLine[idx] = append(newLine[idx], ' ')
}
return newLine, offset
}
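// Usage sketch (the candidate lists are assumptions, for illustration only):
//
//	completer := SegmentFunc(func(segments [][]rune, n int) [][]rune {
//		if len(segments) == 1 {
//			return [][]rune{[]rune("show"), []rune("set")} // first word
//		}
//		return [][]rune{[]rune("color"), []rune("prompt")} // later words
//	})
//	// completer can then be passed as Config.AutoComplete.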

View File

@ -0,0 +1,158 @@
package main
import (
"fmt"
"io"
"io/ioutil"
"log"
"strconv"
"strings"
"time"
"github.com/chzyer/readline"
)
func usage(w io.Writer) {
io.WriteString(w, "commands:\n")
io.WriteString(w, completer.Tree(" "))
}
// Function constructor - constructs a new function for listing the given directory
func listFiles(path string) func(string) []string {
return func(line string) []string {
names := make([]string, 0)
files, _ := ioutil.ReadDir(path)
for _, f := range files {
names = append(names, f.Name())
}
return names
}
}
var completer = readline.NewPrefixCompleter(
readline.PcItem("mode",
readline.PcItem("vi"),
readline.PcItem("emacs"),
),
readline.PcItem("login"),
readline.PcItem("say",
readline.PcItemDynamic(listFiles("./"),
readline.PcItem("with",
readline.PcItem("following"),
readline.PcItem("items"),
),
),
readline.PcItem("hello"),
readline.PcItem("bye"),
),
readline.PcItem("setprompt"),
readline.PcItem("setpassword"),
readline.PcItem("bye"),
readline.PcItem("help"),
readline.PcItem("go",
readline.PcItem("build", readline.PcItem("-o"), readline.PcItem("-v")),
readline.PcItem("install",
readline.PcItem("-v"),
readline.PcItem("-vv"),
readline.PcItem("-vvv"),
),
readline.PcItem("test"),
),
readline.PcItem("sleep"),
)
func main() {
l, err := readline.NewEx(&readline.Config{
Prompt: "\033[31m»\033[0m ",
HistoryFile: "/tmp/readline.tmp",
AutoComplete: completer,
InterruptPrompt: "^C",
EOFPrompt: "exit",
HistorySearchFold: true,
})
if err != nil {
panic(err)
}
defer l.Close()
setPasswordCfg := l.GenPasswordConfig()
setPasswordCfg.SetListener(func(line []rune, pos int, key rune) (newLine []rune, newPos int, ok bool) {
l.SetPrompt(fmt.Sprintf("Enter password(%v): ", len(line)))
l.Refresh()
return nil, 0, false
})
log.SetOutput(l.Stderr())
for {
line, err := l.Readline()
if err == readline.ErrInterrupt {
if len(line) == 0 {
break
} else {
continue
}
} else if err == io.EOF {
break
}
line = strings.TrimSpace(line)
switch {
case strings.HasPrefix(line, "mode "):
switch line[5:] {
case "vi":
l.SetVimMode(true)
case "emacs":
l.SetVimMode(false)
default:
println("invalid mode:", line[5:])
}
case line == "mode":
if l.IsVimMode() {
println("current mode: vim")
} else {
println("current mode: emacs")
}
case line == "login":
pswd, err := l.ReadPassword("please enter your password: ")
if err != nil {
break
}
println("you enter:", strconv.Quote(string(pswd)))
case line == "help":
usage(l.Stderr())
case line == "setpassword":
pswd, err := l.ReadPasswordWithConfig(setPasswordCfg)
if err == nil {
println("you set:", strconv.Quote(string(pswd)))
}
case strings.HasPrefix(line, "setprompt"):
prompt := line[10:]
if prompt == "" {
log.Println("setprompt <prompt>")
break
}
l.SetPrompt(prompt)
case strings.HasPrefix(line, "say"):
line := strings.TrimSpace(line[3:])
if len(line) == 0 {
log.Println("say what?")
break
}
go func() {
for range time.Tick(time.Second) {
log.Println(line)
}
}()
case line == "bye":
goto exit
case line == "sleep":
log.Println("sleep 4 second")
time.Sleep(4 * time.Second)
case line == "":
default:
log.Println("you said:", strconv.Quote(line))
}
}
exit:
}

View File

@ -0,0 +1,60 @@
package main
import (
"fmt"
"math/rand"
"time"
"github.com/chzyer/readline"
)
import "log"
func main() {
rl, err := readline.NewEx(&readline.Config{
UniqueEditLine: true,
})
if err != nil {
panic(err)
}
defer rl.Close()
rl.SetPrompt("username: ")
username, err := rl.Readline()
if err != nil {
return
}
rl.ResetHistory()
log.SetOutput(rl.Stderr())
fmt.Fprintln(rl, "Hi,", username+"! My name is Dave.")
rl.SetPrompt(username + "> ")
done := make(chan struct{})
go func() {
rand.Seed(time.Now().Unix())
loop:
for {
select {
case <-time.After(time.Duration(rand.Intn(20)) * 100 * time.Millisecond):
case <-done:
break loop
}
log.Println("Dave:", "hello")
}
log.Println("Dave:", "bye")
done <- struct{}{}
}()
for {
ln := rl.Line()
if ln.CanContinue() {
continue
} else if ln.CanBreak() {
break
}
log.Println(username+":", ln.Line)
}
rl.Clean()
done <- struct{}{}
<-done
}

View File

@ -0,0 +1,41 @@
package main
import (
"strings"
"github.com/chzyer/readline"
)
func main() {
rl, err := readline.NewEx(&readline.Config{
Prompt: "> ",
HistoryFile: "/tmp/readline-multiline",
DisableAutoSaveHistory: true,
})
if err != nil {
panic(err)
}
defer rl.Close()
var cmds []string
for {
line, err := rl.Readline()
if err != nil {
break
}
line = strings.TrimSpace(line)
if len(line) == 0 {
continue
}
cmds = append(cmds, line)
if !strings.HasSuffix(line, ";") {
rl.SetPrompt(">>> ")
continue
}
cmd := strings.Join(cmds, " ")
cmds = cmds[:0]
rl.SetPrompt("> ")
rl.SaveHistory(cmd)
println(cmd)
}
}

View File

@ -0,0 +1,102 @@
// This is a small example using readline to read a password
// and check its strength while typing, using the zxcvbn library.
// Depending on the strength, the prompt is colored to indicate it.
//
// This file is licensed under the WTFPL:
//
// DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE
// Version 2, December 2004
//
// Copyright (C) 2004 Sam Hocevar <sam@hocevar.net>
//
// Everyone is permitted to copy and distribute verbatim or modified
// copies of this license document, and changing it is allowed as long
// as the name is changed.
//
// DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE
// TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
//
// 0. You just DO WHAT THE FUCK YOU WANT TO.
package main
import (
"fmt"
"github.com/chzyer/readline"
zxcvbn "github.com/nbutton23/zxcvbn-go"
)
const (
Cyan = 36
Green = 32
Magenta = 35
Red = 31
Yellow = 33
BackgroundRed = 41
)
// Reset sequence
var ColorResetEscape = "\033[0m"
// ColorEscape translates an ANSI color number to a color escape sequence.
func ColorEscape(color int) string {
return fmt.Sprintf("\033[0;%dm", color)
}
// Colorize the msg using ANSI color escapes
func Colorize(msg string, color int) string {
return ColorEscape(color) + msg + ColorResetEscape
}
func createStrengthPrompt(password []rune) string {
symbol, color := "", Red
strength := zxcvbn.PasswordStrength(string(password), nil)
switch {
case strength.Score <= 1:
symbol = "✗"
color = Red
case strength.Score <= 2:
symbol = "⚡"
color = Magenta
case strength.Score <= 3:
symbol = "⚠"
color = Yellow
case strength.Score <= 4:
symbol = "✔"
color = Green
}
prompt := Colorize(symbol, color)
if strength.Entropy > 0 {
entropy := fmt.Sprintf(" %3.0f", strength.Entropy)
prompt += Colorize(entropy, Cyan)
} else {
prompt += Colorize(" ENT", Cyan)
}
prompt += Colorize(" New Password: ", color)
return prompt
}
func main() {
rl, err := readline.New("")
if err != nil {
return
}
defer rl.Close()
setPasswordCfg := rl.GenPasswordConfig()
setPasswordCfg.SetListener(func(line []rune, pos int, key rune) (newLine []rune, newPos int, ok bool) {
rl.SetPrompt(createStrengthPrompt(line))
rl.Refresh()
return nil, 0, false
})
pswd, err := rl.ReadPasswordWithConfig(setPasswordCfg)
if err != nil {
return
}
fmt.Println("Your password was:", string(pswd))
}

View File

@ -0,0 +1,9 @@
package main
import "github.com/chzyer/readline"
func main() {
if err := readline.DialRemote("tcp", ":12344"); err != nil {
println(err.Error())
}
}

View File

@ -0,0 +1,26 @@
package main
import (
"fmt"
"github.com/chzyer/readline"
)
func main() {
cfg := &readline.Config{
Prompt: "readline-remote: ",
}
handleFunc := func(rl *readline.Instance) {
for {
line, err := rl.Readline()
if err != nil {
break
}
fmt.Fprintln(rl.Stdout(), "receive:"+line)
}
}
err := readline.ListenRemote("tcp", ":12344", cfg, handleFunc)
if err != nil {
println(err.Error())
}
}

312
vendor/github.com/chzyer/readline/history.go generated vendored Normal file
View File

@ -0,0 +1,312 @@
package readline
import (
"bufio"
"container/list"
"fmt"
"os"
"strings"
"sync"
)
type hisItem struct {
Source []rune
Version int64
Tmp []rune
}
func (h *hisItem) Clean() {
h.Source = nil
h.Tmp = nil
}
type opHistory struct {
cfg *Config
history *list.List
historyVer int64
current *list.Element
fd *os.File
fdLock sync.Mutex
}
func newOpHistory(cfg *Config) (o *opHistory) {
o = &opHistory{
cfg: cfg,
history: list.New(),
}
return o
}
func (o *opHistory) Reset() {
o.history = list.New()
o.current = nil
}
func (o *opHistory) IsHistoryClosed() bool {
o.fdLock.Lock()
defer o.fdLock.Unlock()
return o.fd.Fd() == ^(uintptr(0))
}
func (o *opHistory) Init() {
if o.IsHistoryClosed() {
o.initHistory()
}
}
func (o *opHistory) initHistory() {
if o.cfg.HistoryFile != "" {
o.historyUpdatePath(o.cfg.HistoryFile)
}
}
// only called by initHistory
func (o *opHistory) historyUpdatePath(path string) {
o.fdLock.Lock()
defer o.fdLock.Unlock()
f, err := os.OpenFile(path, os.O_APPEND|os.O_CREATE|os.O_RDWR, 0666)
if err != nil {
return
}
o.fd = f
r := bufio.NewReader(o.fd)
total := 0
for ; ; total++ {
line, err := r.ReadString('\n')
if err != nil {
break
}
// ignore the empty line
line = strings.TrimSpace(line)
if len(line) == 0 {
continue
}
o.Push([]rune(line))
o.Compact()
}
if total > o.cfg.HistoryLimit {
o.rewriteLocked()
}
o.historyVer++
o.Push(nil)
return
}
func (o *opHistory) Compact() {
for o.history.Len() > o.cfg.HistoryLimit && o.history.Len() > 0 {
o.history.Remove(o.history.Front())
}
}
func (o *opHistory) Rewrite() {
o.fdLock.Lock()
defer o.fdLock.Unlock()
o.rewriteLocked()
}
func (o *opHistory) rewriteLocked() {
if o.cfg.HistoryFile == "" {
return
}
tmpFile := o.cfg.HistoryFile + ".tmp"
fd, err := os.OpenFile(tmpFile, os.O_CREATE|os.O_WRONLY|os.O_TRUNC|os.O_APPEND, 0666)
if err != nil {
return
}
buf := bufio.NewWriter(fd)
for elem := o.history.Front(); elem != nil; elem = elem.Next() {
buf.WriteString(string(elem.Value.(*hisItem).Source))
}
buf.Flush()
// replace history file
if err = os.Rename(tmpFile, o.cfg.HistoryFile); err != nil {
fd.Close()
return
}
if o.fd != nil {
o.fd.Close()
}
// fd is write-only, which is all we need here.
o.fd = fd
}
func (o *opHistory) Close() {
o.fdLock.Lock()
defer o.fdLock.Unlock()
if o.fd != nil {
o.fd.Close()
}
}
func (o *opHistory) FindBck(isNewSearch bool, rs []rune, start int) (int, *list.Element) {
for elem := o.current; elem != nil; elem = elem.Prev() {
item := o.showItem(elem.Value)
if isNewSearch {
start += len(rs)
}
if elem == o.current {
if len(item) >= start {
item = item[:start]
}
}
idx := runes.IndexAllBckEx(item, rs, o.cfg.HistorySearchFold)
if idx < 0 {
continue
}
return idx, elem
}
return -1, nil
}
func (o *opHistory) FindFwd(isNewSearch bool, rs []rune, start int) (int, *list.Element) {
for elem := o.current; elem != nil; elem = elem.Next() {
item := o.showItem(elem.Value)
if isNewSearch {
start -= len(rs)
if start < 0 {
start = 0
}
}
if elem == o.current {
if len(item)-1 >= start {
item = item[start:]
} else {
continue
}
}
idx := runes.IndexAllEx(item, rs, o.cfg.HistorySearchFold)
if idx < 0 {
continue
}
if elem == o.current {
idx += start
}
return idx, elem
}
return -1, nil
}
func (o *opHistory) showItem(obj interface{}) []rune {
item := obj.(*hisItem)
if item.Version == o.historyVer {
return item.Tmp
}
return item.Source
}
func (o *opHistory) Prev() []rune {
if o.current == nil {
return nil
}
current := o.current.Prev()
if current == nil {
return nil
}
o.current = current
return runes.Copy(o.showItem(current.Value))
}
func (o *opHistory) Next() ([]rune, bool) {
if o.current == nil {
return nil, false
}
current := o.current.Next()
if current == nil {
return nil, false
}
o.current = current
return runes.Copy(o.showItem(current.Value)), true
}
func (o *opHistory) debug() {
Debug("-------")
for item := o.history.Front(); item != nil; item = item.Next() {
Debug(fmt.Sprintf("%+v", item.Value))
}
}
// save history
func (o *opHistory) New(current []rune) (err error) {
current = runes.Copy(current)
// if the last command is reused without modification,
// just clean the latest history entry
if back := o.history.Back(); back != nil {
prev := back.Prev()
if prev != nil {
if runes.Equal(current, prev.Value.(*hisItem).Source) {
o.current = o.history.Back()
o.current.Value.(*hisItem).Clean()
o.historyVer++
return nil
}
}
}
if len(current) == 0 {
o.current = o.history.Back()
if o.current != nil {
o.current.Value.(*hisItem).Clean()
o.historyVer++
return nil
}
}
if o.current != o.history.Back() {
// move history item to current command
currentItem := o.current.Value.(*hisItem)
// set current to last item
o.current = o.history.Back()
current = runes.Copy(currentItem.Tmp)
}
// err can only be an IO error; just report it
err = o.Update(current, true)
// push a new one to commit current command
o.historyVer++
o.Push(nil)
return
}
func (o *opHistory) Revert() {
o.historyVer++
o.current = o.history.Back()
}
func (o *opHistory) Update(s []rune, commit bool) (err error) {
o.fdLock.Lock()
defer o.fdLock.Unlock()
s = runes.Copy(s)
if o.current == nil {
o.Push(s)
o.Compact()
return
}
r := o.current.Value.(*hisItem)
r.Version = o.historyVer
if commit {
r.Source = s
if o.fd != nil {
// just report the error
_, err = o.fd.Write([]byte(string(r.Source) + "\n"))
}
} else {
r.Tmp = append(r.Tmp[:0], s...)
}
o.current.Value = r
o.Compact()
return
}
func (o *opHistory) Push(s []rune) {
s = runes.Copy(s)
elem := o.history.PushBack(&hisItem{Source: s})
o.current = elem
}

485
vendor/github.com/chzyer/readline/operation.go generated vendored Normal file
View File

@ -0,0 +1,485 @@
package readline
import (
"errors"
"io"
)
var (
ErrInterrupt = errors.New("Interrupt")
)
type InterruptError struct {
Line []rune
}
func (*InterruptError) Error() string {
return "Interrupted"
}
type Operation struct {
cfg *Config
t *Terminal
buf *RuneBuffer
outchan chan []rune
errchan chan error
w io.Writer
history *opHistory
*opSearch
*opCompleter
*opPassword
*opVim
}
type wrapWriter struct {
r *Operation
t *Terminal
target io.Writer
}
func (w *wrapWriter) Write(b []byte) (int, error) {
var (
n int
err error
)
w.r.buf.Refresh(func() {
n, err = w.target.Write(b)
})
if w.r.IsSearchMode() {
w.r.SearchRefresh(-1)
}
if w.r.IsInCompleteMode() {
w.r.CompleteRefresh()
}
return n, err
}
func NewOperation(t *Terminal, cfg *Config) *Operation {
width := cfg.FuncGetWidth()
op := &Operation{
t: t,
buf: NewRuneBuffer(t, cfg.Prompt, cfg, width),
outchan: make(chan []rune),
errchan: make(chan error),
}
op.w = op.buf.w
op.SetConfig(cfg)
op.opVim = newVimMode(op)
op.opCompleter = newOpCompleter(op.buf.w, op, width)
op.opPassword = newOpPassword(op)
op.cfg.FuncOnWidthChanged(func() {
newWidth := cfg.FuncGetWidth()
op.opCompleter.OnWidthChange(newWidth)
op.opSearch.OnWidthChange(newWidth)
op.buf.OnWidthChange(newWidth)
})
go op.ioloop()
return op
}
func (o *Operation) SetPrompt(s string) {
o.buf.SetPrompt(s)
}
func (o *Operation) SetMaskRune(r rune) {
o.buf.SetMask(r)
}
func (o *Operation) ioloop() {
for {
keepInSearchMode := false
keepInCompleteMode := false
r := o.t.ReadRune()
if r == 0 { // io.EOF
if o.buf.Len() == 0 {
o.buf.Clean()
select {
case o.errchan <- io.EOF:
}
break
} else {
// if stdin got io.EOF and there is something left in the buffer,
// flush it by sending CharEnter.
// We will get io.EOF in the next loop.
r = CharEnter
}
}
isUpdateHistory := true
if o.IsInCompleteSelectMode() {
keepInCompleteMode = o.HandleCompleteSelect(r)
if keepInCompleteMode {
continue
}
o.buf.Refresh(nil)
switch r {
case CharEnter, CharCtrlJ:
o.history.Update(o.buf.Runes(), false)
fallthrough
case CharInterrupt:
o.t.KickRead()
fallthrough
case CharBell:
continue
}
}
if o.IsEnableVimMode() {
r = o.HandleVim(r, o.t.ReadRune)
if r == 0 {
continue
}
}
switch r {
case CharBell:
if o.IsSearchMode() {
o.ExitSearchMode(true)
o.buf.Refresh(nil)
}
if o.IsInCompleteMode() {
o.ExitCompleteMode(true)
o.buf.Refresh(nil)
}
case CharTab:
if o.cfg.AutoComplete == nil {
o.t.Bell()
break
}
if o.OnComplete() {
keepInCompleteMode = true
} else {
o.t.Bell()
break
}
case CharBckSearch:
if !o.SearchMode(S_DIR_BCK) {
o.t.Bell()
break
}
keepInSearchMode = true
case CharCtrlU:
o.buf.KillFront()
case CharFwdSearch:
if !o.SearchMode(S_DIR_FWD) {
o.t.Bell()
break
}
keepInSearchMode = true
case CharKill:
o.buf.Kill()
keepInCompleteMode = true
case MetaForward:
o.buf.MoveToNextWord()
case CharTranspose:
o.buf.Transpose()
case MetaBackward:
o.buf.MoveToPrevWord()
case MetaDelete:
o.buf.DeleteWord()
case CharLineStart:
o.buf.MoveToLineStart()
case CharLineEnd:
o.buf.MoveToLineEnd()
case CharBackspace, CharCtrlH:
if o.IsSearchMode() {
o.SearchBackspace()
keepInSearchMode = true
break
}
if o.buf.Len() == 0 {
o.t.Bell()
break
}
o.buf.Backspace()
if o.IsInCompleteMode() {
o.OnComplete()
}
case CharCtrlZ:
o.buf.Clean()
o.t.SleepToResume()
o.Refresh()
case CharCtrlL:
ClearScreen(o.w)
o.Refresh()
case MetaBackspace, CharCtrlW:
o.buf.BackEscapeWord()
case CharEnter, CharCtrlJ:
if o.IsSearchMode() {
o.ExitSearchMode(false)
}
o.buf.MoveToLineEnd()
var data []rune
if !o.cfg.UniqueEditLine {
o.buf.WriteRune('\n')
data = o.buf.Reset()
data = data[:len(data)-1] // trim \n
} else {
o.buf.Clean()
data = o.buf.Reset()
}
o.outchan <- data
if !o.cfg.DisableAutoSaveHistory {
// ignore IO error
_ = o.history.New(data)
} else {
isUpdateHistory = false
}
case CharBackward:
o.buf.MoveBackward()
case CharForward:
o.buf.MoveForward()
case CharPrev:
buf := o.history.Prev()
if buf != nil {
o.buf.Set(buf)
} else {
o.t.Bell()
}
case CharNext:
buf, ok := o.history.Next()
if ok {
o.buf.Set(buf)
} else {
o.t.Bell()
}
case CharDelete:
if o.buf.Len() > 0 || !o.IsNormalMode() {
o.t.KickRead()
if !o.buf.Delete() {
o.t.Bell()
}
break
}
// treat as EOF
if !o.cfg.UniqueEditLine {
o.buf.WriteString(o.cfg.EOFPrompt + "\n")
}
o.buf.Reset()
isUpdateHistory = false
o.history.Revert()
o.errchan <- io.EOF
if o.cfg.UniqueEditLine {
o.buf.Clean()
}
case CharInterrupt:
if o.IsSearchMode() {
o.t.KickRead()
o.ExitSearchMode(true)
break
}
if o.IsInCompleteMode() {
o.t.KickRead()
o.ExitCompleteMode(true)
o.buf.Refresh(nil)
break
}
o.buf.MoveToLineEnd()
o.buf.Refresh(nil)
hint := o.cfg.InterruptPrompt + "\n"
if !o.cfg.UniqueEditLine {
o.buf.WriteString(hint)
}
remain := o.buf.Reset()
if !o.cfg.UniqueEditLine {
remain = remain[:len(remain)-len([]rune(hint))]
}
isUpdateHistory = false
o.history.Revert()
o.errchan <- &InterruptError{remain}
default:
if o.IsSearchMode() {
o.SearchChar(r)
keepInSearchMode = true
break
}
o.buf.WriteRune(r)
if o.IsInCompleteMode() {
o.OnComplete()
keepInCompleteMode = true
}
}
if o.cfg.Listener != nil {
newLine, newPos, ok := o.cfg.Listener.OnChange(o.buf.Runes(), o.buf.Pos(), r)
if ok {
o.buf.SetWithIdx(newPos, newLine)
}
}
if !keepInSearchMode && o.IsSearchMode() {
o.ExitSearchMode(false)
o.buf.Refresh(nil)
} else if o.IsInCompleteMode() {
if !keepInCompleteMode {
o.ExitCompleteMode(false)
o.Refresh()
} else {
o.buf.Refresh(nil)
o.CompleteRefresh()
}
}
if isUpdateHistory && !o.IsSearchMode() {
// it will cause null history
o.history.Update(o.buf.Runes(), false)
}
}
}
func (o *Operation) Stderr() io.Writer {
return &wrapWriter{target: o.cfg.Stderr, r: o, t: o.t}
}
func (o *Operation) Stdout() io.Writer {
return &wrapWriter{target: o.cfg.Stdout, r: o, t: o.t}
}
func (o *Operation) String() (string, error) {
r, err := o.Runes()
return string(r), err
}
func (o *Operation) Runes() ([]rune, error) {
o.t.EnterRawMode()
defer o.t.ExitRawMode()
if o.cfg.Listener != nil {
o.cfg.Listener.OnChange(nil, 0, 0)
}
o.buf.Refresh(nil) // print prompt
o.t.KickRead()
select {
case r := <-o.outchan:
return r, nil
case err := <-o.errchan:
if e, ok := err.(*InterruptError); ok {
return e.Line, ErrInterrupt
}
return nil, err
}
}
func (o *Operation) PasswordEx(prompt string, l Listener) ([]byte, error) {
cfg := o.GenPasswordConfig()
cfg.Prompt = prompt
cfg.Listener = l
return o.PasswordWithConfig(cfg)
}
func (o *Operation) GenPasswordConfig() *Config {
return o.opPassword.PasswordConfig()
}
func (o *Operation) PasswordWithConfig(cfg *Config) ([]byte, error) {
if err := o.opPassword.EnterPasswordMode(cfg); err != nil {
return nil, err
}
defer o.opPassword.ExitPasswordMode()
return o.Slice()
}
func (o *Operation) Password(prompt string) ([]byte, error) {
return o.PasswordEx(prompt, nil)
}
func (o *Operation) SetTitle(t string) {
o.w.Write([]byte("\033[2;" + t + "\007"))
}
func (o *Operation) Slice() ([]byte, error) {
r, err := o.Runes()
if err != nil {
return nil, err
}
return []byte(string(r)), nil
}
func (o *Operation) Close() {
o.history.Close()
}
func (o *Operation) SetHistoryPath(path string) {
if o.history != nil {
o.history.Close()
}
o.cfg.HistoryFile = path
o.history = newOpHistory(o.cfg)
}
func (o *Operation) IsNormalMode() bool {
return !o.IsInCompleteMode() && !o.IsSearchMode()
}
func (op *Operation) SetConfig(cfg *Config) (*Config, error) {
if op.cfg == cfg {
return op.cfg, nil
}
if err := cfg.Init(); err != nil {
return op.cfg, err
}
old := op.cfg
op.cfg = cfg
op.SetPrompt(cfg.Prompt)
op.SetMaskRune(cfg.MaskRune)
op.buf.SetConfig(cfg)
width := op.cfg.FuncGetWidth()
if cfg.opHistory == nil {
op.SetHistoryPath(cfg.HistoryFile)
cfg.opHistory = op.history
cfg.opSearch = newOpSearch(op.buf.w, op.buf, op.history, cfg, width)
}
op.history = cfg.opHistory
// SetHistoryPath closes any opHistory that already exists,
// so before reusing it we need to reopen it via Init()
op.history.Init()
if op.cfg.AutoComplete != nil {
op.opCompleter = newOpCompleter(op.buf.w, op, width)
}
op.opSearch = cfg.opSearch
return old, nil
}
func (o *Operation) ResetHistory() {
o.history.Reset()
}
// If err is not nil, it only means the write to the history file failed;
// everything else still works.
func (o *Operation) SaveHistory(content string) error {
return o.history.New([]rune(content))
}
func (o *Operation) Refresh() {
o.buf.Refresh(nil)
}
func (o *Operation) Clean() {
o.buf.Clean()
}
func FuncListener(f func(line []rune, pos int, key rune) (newLine []rune, newPos int, ok bool)) Listener {
return &DumpListener{f: f}
}
type DumpListener struct {
f func(line []rune, pos int, key rune) (newLine []rune, newPos int, ok bool)
}
func (d *DumpListener) OnChange(line []rune, pos int, key rune) (newLine []rune, newPos int, ok bool) {
return d.f(line, pos, key)
}
type Listener interface {
OnChange(line []rune, pos int, key rune) (newLine []rune, newPos int, ok bool)
}

32
vendor/github.com/chzyer/readline/password.go generated vendored Normal file
View File

@ -0,0 +1,32 @@
package readline
type opPassword struct {
o *Operation
backupCfg *Config
}
func newOpPassword(o *Operation) *opPassword {
return &opPassword{o: o}
}
func (o *opPassword) ExitPasswordMode() {
o.o.SetConfig(o.backupCfg)
o.backupCfg = nil
}
func (o *opPassword) EnterPasswordMode(cfg *Config) (err error) {
o.backupCfg, err = o.o.SetConfig(cfg)
return
}
func (o *opPassword) PasswordConfig() *Config {
return &Config{
EnableMask: true,
InterruptPrompt: "\n",
EOFPrompt: "\n",
HistoryLimit: -1,
Stdout: o.o.cfg.Stdout,
Stderr: o.o.cfg.Stderr,
}
}

125
vendor/github.com/chzyer/readline/rawreader_windows.go generated vendored Normal file
View File

@ -0,0 +1,125 @@
// +build windows
package readline
import "unsafe"
const (
VK_CANCEL = 0x03
VK_BACK = 0x08
VK_TAB = 0x09
VK_RETURN = 0x0D
VK_SHIFT = 0x10
VK_CONTROL = 0x11
VK_MENU = 0x12
VK_ESCAPE = 0x1B
VK_LEFT = 0x25
VK_UP = 0x26
VK_RIGHT = 0x27
VK_DOWN = 0x28
VK_DELETE = 0x2E
VK_LSHIFT = 0xA0
VK_RSHIFT = 0xA1
VK_LCONTROL = 0xA2
VK_RCONTROL = 0xA3
)
// RawReader translates input records into ANSI escape sequences
// to provide the same behavior as a unix terminal.
type RawReader struct {
ctrlKey bool
altKey bool
}
func NewRawReader() *RawReader {
r := new(RawReader)
return r
}
// only process one action in one read
func (r *RawReader) Read(buf []byte) (int, error) {
ir := new(_INPUT_RECORD)
var read int
var err error
next:
err = kernel.ReadConsoleInputW(stdin,
uintptr(unsafe.Pointer(ir)),
1,
uintptr(unsafe.Pointer(&read)),
)
if err != nil {
return 0, err
}
if ir.EventType != EVENT_KEY {
goto next
}
ker := (*_KEY_EVENT_RECORD)(unsafe.Pointer(&ir.Event[0]))
if ker.bKeyDown == 0 { // keyup
if r.ctrlKey || r.altKey {
switch ker.wVirtualKeyCode {
case VK_RCONTROL, VK_LCONTROL:
r.ctrlKey = false
case VK_MENU: //alt
r.altKey = false
}
}
goto next
}
if ker.unicodeChar == 0 {
var target rune
switch ker.wVirtualKeyCode {
case VK_RCONTROL, VK_LCONTROL:
r.ctrlKey = true
case VK_MENU: //alt
r.altKey = true
case VK_LEFT:
target = CharBackward
case VK_RIGHT:
target = CharForward
case VK_UP:
target = CharPrev
case VK_DOWN:
target = CharNext
}
if target != 0 {
return r.write(buf, target)
}
goto next
}
char := rune(ker.unicodeChar)
if r.ctrlKey {
switch char {
case 'A':
char = CharLineStart
case 'E':
char = CharLineEnd
case 'R':
char = CharBckSearch
case 'S':
char = CharFwdSearch
}
} else if r.altKey {
switch char {
case VK_BACK:
char = CharBackspace
}
return r.writeEsc(buf, char)
}
return r.write(buf, char)
}
func (r *RawReader) writeEsc(b []byte, char rune) (int, error) {
b[0] = '\033'
n := copy(b[1:], []byte(string(char)))
return n + 1, nil
}
func (r *RawReader) write(b []byte, char rune) (int, error) {
n := copy(b, []byte(string(char)))
return n, nil
}
func (r *RawReader) Close() error {
return nil
}

279
vendor/github.com/chzyer/readline/readline.go generated vendored Normal file
View File

@ -0,0 +1,279 @@
// Readline is a pure Go implementation of a GNU-Readline-like library.
//
// example:
// rl, err := readline.New("> ")
// if err != nil {
// panic(err)
// }
// defer rl.Close()
//
// for {
// line, err := rl.Readline()
// if err != nil { // io.EOF
// break
// }
// println(line)
// }
//
package readline
import "io"
type Instance struct {
Config *Config
Terminal *Terminal
Operation *Operation
}
type Config struct {
// Prompt supports ANSI escape sequences, so we can color some characters even on Windows
Prompt string
// readline will persist history to the file specified by HistoryFile
HistoryFile string
// specify the max length of the history; it is 500 by default; set it to -1 to disable history
HistoryLimit int
DisableAutoSaveHistory bool
// enable case-insensitive history searching
HistorySearchFold bool
// AutoComplete will be called once the user presses TAB
AutoComplete AutoCompleter
// Any key press will be passed to Listener
// NOTE: Listener is triggered with (nil, 0, 0) immediately
Listener Listener
// If VimMode is true, readline will be in vim insert mode by default
VimMode bool
InterruptPrompt string
EOFPrompt string
FuncGetWidth func() int
Stdin io.Reader
Stdout io.Writer
Stderr io.Writer
EnableMask bool
MaskRune rune
// erase the editing line after the user submits it;
// typically used in IM-style interfaces.
UniqueEditLine bool
// force interactive mode even if stdout is not a tty
FuncIsTerminal func() bool
FuncMakeRaw func() error
FuncExitRaw func() error
FuncOnWidthChanged func(func())
ForceUseInteractive bool
// private fields
inited bool
opHistory *opHistory
opSearch *opSearch
}
func (c *Config) useInteractive() bool {
if c.ForceUseInteractive {
return true
}
return c.FuncIsTerminal()
}
func (c *Config) Init() error {
if c.inited {
return nil
}
c.inited = true
if c.Stdin == nil {
c.Stdin = NewCancelableStdin(Stdin)
}
if c.Stdout == nil {
c.Stdout = Stdout
}
if c.Stderr == nil {
c.Stderr = Stderr
}
if c.HistoryLimit == 0 {
c.HistoryLimit = 500
}
if c.InterruptPrompt == "" {
c.InterruptPrompt = "^C"
} else if c.InterruptPrompt == "\n" {
c.InterruptPrompt = ""
}
if c.EOFPrompt == "" {
c.EOFPrompt = "^D"
} else if c.EOFPrompt == "\n" {
c.EOFPrompt = ""
}
if c.AutoComplete == nil {
c.AutoComplete = &TabCompleter{}
}
if c.FuncGetWidth == nil {
c.FuncGetWidth = GetScreenWidth
}
if c.FuncIsTerminal == nil {
c.FuncIsTerminal = DefaultIsTerminal
}
rm := new(RawMode)
if c.FuncMakeRaw == nil {
c.FuncMakeRaw = rm.Enter
}
if c.FuncExitRaw == nil {
c.FuncExitRaw = rm.Exit
}
if c.FuncOnWidthChanged == nil {
c.FuncOnWidthChanged = DefaultOnWidthChanged
}
return nil
}
func (c Config) Clone() *Config {
c.opHistory = nil
c.opSearch = nil
return &c
}
func (c *Config) SetListener(f func(line []rune, pos int, key rune) (newLine []rune, newPos int, ok bool)) {
c.Listener = FuncListener(f)
}
func NewEx(cfg *Config) (*Instance, error) {
t, err := NewTerminal(cfg)
if err != nil {
return nil, err
}
rl := t.Readline()
return &Instance{
Config: cfg,
Terminal: t,
Operation: rl,
}, nil
}
func New(prompt string) (*Instance, error) {
return NewEx(&Config{Prompt: prompt})
}
func (i *Instance) ResetHistory() {
i.Operation.ResetHistory()
}
func (i *Instance) SetPrompt(s string) {
i.Operation.SetPrompt(s)
}
func (i *Instance) SetMaskRune(r rune) {
i.Operation.SetMaskRune(r)
}
// change history persistence at runtime
func (i *Instance) SetHistoryPath(p string) {
i.Operation.SetHistoryPath(p)
}
// readline will refresh automatically when writing through Stdout()
func (i *Instance) Stdout() io.Writer {
return i.Operation.Stdout()
}
// readline will refresh automatically when writing through Stderr()
func (i *Instance) Stderr() io.Writer {
return i.Operation.Stderr()
}
// switch VimMode at runtime
func (i *Instance) SetVimMode(on bool) {
i.Operation.SetVimMode(on)
}
func (i *Instance) IsVimMode() bool {
return i.Operation.IsEnableVimMode()
}
func (i *Instance) GenPasswordConfig() *Config {
return i.Operation.GenPasswordConfig()
}
// a suitable config can be generated with `i.GenPasswordConfig()`
func (i *Instance) ReadPasswordWithConfig(cfg *Config) ([]byte, error) {
return i.Operation.PasswordWithConfig(cfg)
}
func (i *Instance) ReadPasswordEx(prompt string, l Listener) ([]byte, error) {
return i.Operation.PasswordEx(prompt, l)
}
func (i *Instance) ReadPassword(prompt string) ([]byte, error) {
return i.Operation.Password(prompt)
}
type Result struct {
Line string
Error error
}
func (l *Result) CanContinue() bool {
return len(l.Line) != 0 && l.Error == ErrInterrupt
}
func (l *Result) CanBreak() bool {
return !l.CanContinue() && l.Error != nil
}
func (i *Instance) Line() *Result {
ret, err := i.Readline()
return &Result{ret, err}
}
// err is one of (nil, io.EOF, readline.ErrInterrupt)
func (i *Instance) Readline() (string, error) {
return i.Operation.String()
}
func (i *Instance) SaveHistory(content string) error {
return i.Operation.SaveHistory(content)
}
// same as Readline, but returns the line as a byte slice
func (i *Instance) ReadSlice() ([]byte, error) {
return i.Operation.Slice()
}
// Close must be called before the process exits.
func (i *Instance) Close() error {
if err := i.Terminal.Close(); err != nil {
return err
}
i.Operation.Close()
return nil
}
func (i *Instance) Clean() {
i.Operation.Clean()
}
func (i *Instance) Write(b []byte) (int, error) {
return i.Stdout().Write(b)
}
func (i *Instance) SetConfig(cfg *Config) *Config {
if i.Config == cfg {
return cfg
}
old := i.Config
i.Config = cfg
i.Operation.SetConfig(cfg)
i.Terminal.SetConfig(cfg)
return old
}
func (i *Instance) Refresh() {
i.Operation.Refresh()
}
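
A minimal usage sketch of the Config-driven API above (not part of the vendored file); the prompt text and history path are illustrative assumptions.

package main

import (
	"log"

	"github.com/chzyer/readline"
)

func main() {
	rl, err := readline.NewEx(&readline.Config{
		Prompt:      "demo> ",         // illustrative prompt
		HistoryFile: "/tmp/demo.hist", // illustrative history path
	})
	if err != nil {
		log.Fatal(err)
	}
	defer rl.Close()

	for {
		// Readline returns io.EOF or readline.ErrInterrupt when input ends.
		line, err := rl.Readline()
		if err != nil {
			break
		}
		// Writing through Stdout() lets readline refresh the prompt correctly.
		rl.Stdout().Write([]byte("you typed: " + line + "\n"))
	}
}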

474
vendor/github.com/chzyer/readline/remote.go generated vendored Normal file
View File

@ -0,0 +1,474 @@
package readline
import (
"bufio"
"bytes"
"encoding/binary"
"fmt"
"io"
"net"
"os"
"sync"
"sync/atomic"
)
type MsgType int16
const (
T_DATA = MsgType(iota)
T_WIDTH
T_WIDTH_REPORT
T_ISTTY_REPORT
T_RAW
T_ERAW // exit raw
T_EOF
)
type RemoteSvr struct {
eof int32
closed int32
width int32
reciveChan chan struct{}
writeChan chan *writeCtx
conn net.Conn
isTerminal bool
funcWidthChan func()
stopChan chan struct{}
dataBufM sync.Mutex
dataBuf bytes.Buffer
}
type writeReply struct {
n int
err error
}
type writeCtx struct {
msg *Message
reply chan *writeReply
}
func newWriteCtx(msg *Message) *writeCtx {
return &writeCtx{
msg: msg,
reply: make(chan *writeReply),
}
}
func NewRemoteSvr(conn net.Conn) (*RemoteSvr, error) {
rs := &RemoteSvr{
width: -1,
conn: conn,
writeChan: make(chan *writeCtx),
reciveChan: make(chan struct{}),
stopChan: make(chan struct{}),
}
buf := bufio.NewReader(rs.conn)
if err := rs.init(buf); err != nil {
return nil, err
}
go rs.readLoop(buf)
go rs.writeLoop()
return rs, nil
}
func (r *RemoteSvr) init(buf *bufio.Reader) error {
m, err := ReadMessage(buf)
if err != nil {
return err
}
// receive isTerminal
if m.Type != T_ISTTY_REPORT {
return fmt.Errorf("unexpected init message")
}
r.GotIsTerminal(m.Data)
// receive width
m, err = ReadMessage(buf)
if err != nil {
return err
}
if m.Type != T_WIDTH_REPORT {
return fmt.Errorf("unexpected init message")
}
r.GotReportWidth(m.Data)
return nil
}
func (r *RemoteSvr) HandleConfig(cfg *Config) {
cfg.Stderr = r
cfg.Stdout = r
cfg.Stdin = r
cfg.FuncExitRaw = r.ExitRawMode
cfg.FuncIsTerminal = r.IsTerminal
cfg.FuncMakeRaw = r.EnterRawMode
cfg.FuncExitRaw = r.ExitRawMode
cfg.FuncGetWidth = r.GetWidth
cfg.FuncOnWidthChanged = func(f func()) {
r.funcWidthChan = f
}
}
func (r *RemoteSvr) IsTerminal() bool {
return r.isTerminal
}
func (r *RemoteSvr) checkEOF() error {
if atomic.LoadInt32(&r.eof) == 1 {
return io.EOF
}
return nil
}
func (r *RemoteSvr) Read(b []byte) (int, error) {
r.dataBufM.Lock()
n, err := r.dataBuf.Read(b)
r.dataBufM.Unlock()
if n == 0 {
if err := r.checkEOF(); err != nil {
return 0, err
}
}
if n == 0 && err == io.EOF {
<-r.reciveChan
r.dataBufM.Lock()
n, err = r.dataBuf.Read(b)
r.dataBufM.Unlock()
}
if n == 0 {
if err := r.checkEOF(); err != nil {
return 0, err
}
}
return n, err
}
func (r *RemoteSvr) writeMsg(m *Message) error {
ctx := newWriteCtx(m)
r.writeChan <- ctx
reply := <-ctx.reply
return reply.err
}
func (r *RemoteSvr) Write(b []byte) (int, error) {
ctx := newWriteCtx(NewMessage(T_DATA, b))
r.writeChan <- ctx
reply := <-ctx.reply
return reply.n, reply.err
}
func (r *RemoteSvr) EnterRawMode() error {
return r.writeMsg(NewMessage(T_RAW, nil))
}
func (r *RemoteSvr) ExitRawMode() error {
return r.writeMsg(NewMessage(T_ERAW, nil))
}
func (r *RemoteSvr) writeLoop() {
defer r.Close()
loop:
for {
select {
case ctx, ok := <-r.writeChan:
if !ok {
break
}
n, err := ctx.msg.WriteTo(r.conn)
ctx.reply <- &writeReply{n, err}
case <-r.stopChan:
break loop
}
}
}
func (r *RemoteSvr) Close() {
if atomic.CompareAndSwapInt32(&r.closed, 0, 1) {
close(r.stopChan)
r.conn.Close()
}
}
func (r *RemoteSvr) readLoop(buf *bufio.Reader) {
defer r.Close()
for {
m, err := ReadMessage(buf)
if err != nil {
break
}
switch m.Type {
case T_EOF:
atomic.StoreInt32(&r.eof, 1)
select {
case r.reciveChan <- struct{}{}:
default:
}
case T_DATA:
r.dataBufM.Lock()
r.dataBuf.Write(m.Data)
r.dataBufM.Unlock()
select {
case r.reciveChan <- struct{}{}:
default:
}
case T_WIDTH_REPORT:
r.GotReportWidth(m.Data)
case T_ISTTY_REPORT:
r.GotIsTerminal(m.Data)
}
}
}
func (r *RemoteSvr) GotIsTerminal(data []byte) {
if binary.BigEndian.Uint16(data) == 0 {
r.isTerminal = false
} else {
r.isTerminal = true
}
}
func (r *RemoteSvr) GotReportWidth(data []byte) {
atomic.StoreInt32(&r.width, int32(binary.BigEndian.Uint16(data)))
if r.funcWidthChan != nil {
r.funcWidthChan()
}
}
func (r *RemoteSvr) GetWidth() int {
return int(atomic.LoadInt32(&r.width))
}
// -----------------------------------------------------------------------------
type Message struct {
Type MsgType
Data []byte
}
func ReadMessage(r io.Reader) (*Message, error) {
m := new(Message)
var length int32
if err := binary.Read(r, binary.BigEndian, &length); err != nil {
return nil, err
}
if err := binary.Read(r, binary.BigEndian, &m.Type); err != nil {
return nil, err
}
m.Data = make([]byte, int(length)-2)
if _, err := io.ReadFull(r, m.Data); err != nil {
return nil, err
}
return m, nil
}
func NewMessage(t MsgType, data []byte) *Message {
return &Message{t, data}
}
func (m *Message) WriteTo(w io.Writer) (int, error) {
buf := bytes.NewBuffer(make([]byte, 0, len(m.Data)+2+4))
binary.Write(buf, binary.BigEndian, int32(len(m.Data)+2))
binary.Write(buf, binary.BigEndian, m.Type)
buf.Write(m.Data)
n, err := buf.WriteTo(w)
return int(n), err
}
// -----------------------------------------------------------------------------
type RemoteCli struct {
conn net.Conn
raw RawMode
receiveChan chan struct{}
inited int32
isTerminal *bool
data bytes.Buffer
dataM sync.Mutex
}
func NewRemoteCli(conn net.Conn) (*RemoteCli, error) {
r := &RemoteCli{
conn: conn,
receiveChan: make(chan struct{}),
}
return r, nil
}
func (r *RemoteCli) MarkIsTerminal(is bool) {
r.isTerminal = &is
}
func (r *RemoteCli) init() error {
if !atomic.CompareAndSwapInt32(&r.inited, 0, 1) {
return nil
}
if err := r.reportIsTerminal(); err != nil {
return err
}
if err := r.reportWidth(); err != nil {
return err
}
// register a signal handler for width changes
DefaultOnWidthChanged(func() {
r.reportWidth()
})
return nil
}
func (r *RemoteCli) writeMsg(m *Message) error {
r.dataM.Lock()
_, err := m.WriteTo(r.conn)
r.dataM.Unlock()
return err
}
func (r *RemoteCli) Write(b []byte) (int, error) {
m := NewMessage(T_DATA, b)
r.dataM.Lock()
_, err := m.WriteTo(r.conn)
r.dataM.Unlock()
return len(b), err
}
func (r *RemoteCli) reportWidth() error {
screenWidth := GetScreenWidth()
data := make([]byte, 2)
binary.BigEndian.PutUint16(data, uint16(screenWidth))
msg := NewMessage(T_WIDTH_REPORT, data)
if err := r.writeMsg(msg); err != nil {
return err
}
return nil
}
func (r *RemoteCli) reportIsTerminal() error {
var isTerminal bool
if r.isTerminal != nil {
isTerminal = *r.isTerminal
} else {
isTerminal = DefaultIsTerminal()
}
data := make([]byte, 2)
if isTerminal {
binary.BigEndian.PutUint16(data, 1)
} else {
binary.BigEndian.PutUint16(data, 0)
}
msg := NewMessage(T_ISTTY_REPORT, data)
if err := r.writeMsg(msg); err != nil {
return err
}
return nil
}
func (r *RemoteCli) readLoop() {
buf := bufio.NewReader(r.conn)
for {
msg, err := ReadMessage(buf)
if err != nil {
break
}
switch msg.Type {
case T_ERAW:
r.raw.Exit()
case T_RAW:
r.raw.Enter()
case T_DATA:
os.Stdout.Write(msg.Data)
}
}
}
func (r *RemoteCli) ServeBy(source io.Reader) error {
if err := r.init(); err != nil {
return err
}
go func() {
defer r.Close()
for {
n, _ := io.Copy(r, source)
if n == 0 {
break
}
}
}()
defer r.raw.Exit()
r.readLoop()
return nil
}
func (r *RemoteCli) Close() {
r.writeMsg(NewMessage(T_EOF, nil))
}
func (r *RemoteCli) Serve() error {
return r.ServeBy(os.Stdin)
}
func ListenRemote(n, addr string, cfg *Config, h func(*Instance), onListen ...func(net.Listener) error) error {
ln, err := net.Listen(n, addr)
if err != nil {
return err
}
if len(onListen) > 0 {
if err := onListen[0](ln); err != nil {
return err
}
}
for {
conn, err := ln.Accept()
if err != nil {
break
}
go func() {
defer conn.Close()
rl, err := HandleConn(*cfg, conn)
if err != nil {
return
}
h(rl)
}()
}
return nil
}
func HandleConn(cfg Config, conn net.Conn) (*Instance, error) {
r, err := NewRemoteSvr(conn)
if err != nil {
return nil, err
}
r.HandleConfig(&cfg)
rl, err := NewEx(&cfg)
if err != nil {
return nil, err
}
return rl, nil
}
func DialRemote(n, addr string) error {
conn, err := net.Dial(n, addr)
if err != nil {
return err
}
defer conn.Close()
cli, err := NewRemoteCli(conn)
if err != nil {
return err
}
return cli.Serve()
}
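
A hedged sketch of serving readline sessions over a socket with ListenRemote, as defined above; the address, prompt, and echo loop are illustrative assumptions, not part of the vendored file.

package main

import (
	"log"

	"github.com/chzyer/readline"
)

func main() {
	cfg := &readline.Config{Prompt: "remote> "} // illustrative prompt
	// Each accepted connection gets its own *Instance backed by a RemoteSvr.
	err := readline.ListenRemote("tcp", "127.0.0.1:12344", cfg, func(rl *readline.Instance) {
		for {
			line, err := rl.Readline()
			if err != nil {
				return
			}
			rl.Stdout().Write([]byte("echo: " + line + "\n"))
		}
	})
	if err != nil {
		log.Fatal(err)
	}
}

The matching client side would call readline.DialRemote("tcp", "127.0.0.1:12344"), which proxies the local terminal over the connection using the Message framing defined above.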

572
vendor/github.com/chzyer/readline/runebuf.go generated vendored Normal file
View File

@ -0,0 +1,572 @@
package readline
import (
"bufio"
"bytes"
"io"
"strings"
"sync"
)
type runeBufferBck struct {
buf []rune
idx int
}
type RuneBuffer struct {
buf []rune
idx int
prompt []rune
w io.Writer
hadClean bool
interactive bool
cfg *Config
width int
bck *runeBufferBck
offset string
sync.Mutex
}
func (r *RuneBuffer) OnWidthChange(newWidth int) {
r.Lock()
r.width = newWidth
r.Unlock()
}
func (r *RuneBuffer) Backup() {
r.Lock()
r.bck = &runeBufferBck{r.buf, r.idx}
r.Unlock()
}
func (r *RuneBuffer) Restore() {
r.Refresh(func() {
if r.bck == nil {
return
}
r.buf = r.bck.buf
r.idx = r.bck.idx
})
}
func NewRuneBuffer(w io.Writer, prompt string, cfg *Config, width int) *RuneBuffer {
rb := &RuneBuffer{
w: w,
interactive: cfg.useInteractive(),
cfg: cfg,
width: width,
}
rb.SetPrompt(prompt)
return rb
}
func (r *RuneBuffer) SetConfig(cfg *Config) {
r.Lock()
r.cfg = cfg
r.interactive = cfg.useInteractive()
r.Unlock()
}
func (r *RuneBuffer) SetMask(m rune) {
r.Lock()
r.cfg.MaskRune = m
r.Unlock()
}
func (r *RuneBuffer) CurrentWidth(x int) int {
r.Lock()
defer r.Unlock()
return runes.WidthAll(r.buf[:x])
}
func (r *RuneBuffer) PromptLen() int {
r.Lock()
width := r.promptLen()
r.Unlock()
return width
}
func (r *RuneBuffer) promptLen() int {
return runes.WidthAll(runes.ColorFilter(r.prompt))
}
func (r *RuneBuffer) RuneSlice(i int) []rune {
r.Lock()
defer r.Unlock()
if i > 0 {
rs := make([]rune, i)
copy(rs, r.buf[r.idx:r.idx+i])
return rs
}
rs := make([]rune, -i)
copy(rs, r.buf[r.idx+i:r.idx])
return rs
}
func (r *RuneBuffer) Runes() []rune {
r.Lock()
newr := make([]rune, len(r.buf))
copy(newr, r.buf)
r.Unlock()
return newr
}
func (r *RuneBuffer) Pos() int {
r.Lock()
defer r.Unlock()
return r.idx
}
func (r *RuneBuffer) Len() int {
r.Lock()
defer r.Unlock()
return len(r.buf)
}
func (r *RuneBuffer) MoveToLineStart() {
r.Refresh(func() {
if r.idx == 0 {
return
}
r.idx = 0
})
}
func (r *RuneBuffer) MoveBackward() {
r.Refresh(func() {
if r.idx == 0 {
return
}
r.idx--
})
}
func (r *RuneBuffer) WriteString(s string) {
r.WriteRunes([]rune(s))
}
func (r *RuneBuffer) WriteRune(s rune) {
r.WriteRunes([]rune{s})
}
func (r *RuneBuffer) WriteRunes(s []rune) {
r.Refresh(func() {
tail := append(s, r.buf[r.idx:]...)
r.buf = append(r.buf[:r.idx], tail...)
r.idx += len(s)
})
}
func (r *RuneBuffer) MoveForward() {
r.Refresh(func() {
if r.idx == len(r.buf) {
return
}
r.idx++
})
}
func (r *RuneBuffer) IsCursorInEnd() bool {
r.Lock()
defer r.Unlock()
return r.idx == len(r.buf)
}
func (r *RuneBuffer) Replace(ch rune) {
r.Refresh(func() {
r.buf[r.idx] = ch
})
}
func (r *RuneBuffer) Erase() {
r.Refresh(func() {
r.idx = 0
r.buf = r.buf[:0]
})
}
func (r *RuneBuffer) Delete() (success bool) {
r.Refresh(func() {
if r.idx == len(r.buf) {
return
}
r.buf = append(r.buf[:r.idx], r.buf[r.idx+1:]...)
success = true
})
return
}
func (r *RuneBuffer) DeleteWord() {
if r.idx == len(r.buf) {
return
}
init := r.idx
for init < len(r.buf) && IsWordBreak(r.buf[init]) {
init++
}
for i := init + 1; i < len(r.buf); i++ {
if !IsWordBreak(r.buf[i]) && IsWordBreak(r.buf[i-1]) {
r.Refresh(func() {
r.buf = append(r.buf[:r.idx], r.buf[i-1:]...)
})
return
}
}
r.Kill()
}
func (r *RuneBuffer) MoveToPrevWord() (success bool) {
r.Refresh(func() {
if r.idx == 0 {
return
}
for i := r.idx - 1; i > 0; i-- {
if !IsWordBreak(r.buf[i]) && IsWordBreak(r.buf[i-1]) {
r.idx = i
success = true
return
}
}
r.idx = 0
success = true
})
return
}
func (r *RuneBuffer) KillFront() {
r.Refresh(func() {
if r.idx == 0 {
return
}
length := len(r.buf) - r.idx
copy(r.buf[:length], r.buf[r.idx:])
r.idx = 0
r.buf = r.buf[:length]
})
}
func (r *RuneBuffer) Kill() {
r.Refresh(func() {
r.buf = r.buf[:r.idx]
})
}
func (r *RuneBuffer) Transpose() {
r.Refresh(func() {
if len(r.buf) == 1 {
r.idx++
}
if len(r.buf) < 2 {
return
}
if r.idx == 0 {
r.idx = 1
} else if r.idx >= len(r.buf) {
r.idx = len(r.buf) - 1
}
r.buf[r.idx], r.buf[r.idx-1] = r.buf[r.idx-1], r.buf[r.idx]
r.idx++
})
}
func (r *RuneBuffer) MoveToNextWord() {
r.Refresh(func() {
for i := r.idx + 1; i < len(r.buf); i++ {
if !IsWordBreak(r.buf[i]) && IsWordBreak(r.buf[i-1]) {
r.idx = i
return
}
}
r.idx = len(r.buf)
})
}
func (r *RuneBuffer) MoveToEndWord() {
r.Refresh(func() {
// already at the end, so do nothing
if r.idx == len(r.buf) {
return
}
// if we are at the end of a word already, go to next
if !IsWordBreak(r.buf[r.idx]) && IsWordBreak(r.buf[r.idx+1]) {
r.idx++
}
// keep going until at the end of a word
for i := r.idx + 1; i < len(r.buf); i++ {
if IsWordBreak(r.buf[i]) && !IsWordBreak(r.buf[i-1]) {
r.idx = i - 1
return
}
}
r.idx = len(r.buf)
})
}
func (r *RuneBuffer) BackEscapeWord() {
r.Refresh(func() {
if r.idx == 0 {
return
}
for i := r.idx - 1; i > 0; i-- {
if !IsWordBreak(r.buf[i]) && IsWordBreak(r.buf[i-1]) {
r.buf = append(r.buf[:i], r.buf[r.idx:]...)
r.idx = i
return
}
}
r.buf = r.buf[:0]
r.idx = 0
})
}
func (r *RuneBuffer) Backspace() {
r.Refresh(func() {
if r.idx == 0 {
return
}
r.idx--
r.buf = append(r.buf[:r.idx], r.buf[r.idx+1:]...)
})
}
func (r *RuneBuffer) MoveToLineEnd() {
r.Refresh(func() {
if r.idx == len(r.buf) {
return
}
r.idx = len(r.buf)
})
}
func (r *RuneBuffer) LineCount(width int) int {
if width == -1 {
width = r.width
}
return LineCount(width,
runes.WidthAll(r.buf)+r.PromptLen())
}
func (r *RuneBuffer) MoveTo(ch rune, prevChar, reverse bool) (success bool) {
r.Refresh(func() {
if reverse {
for i := r.idx - 1; i >= 0; i-- {
if r.buf[i] == ch {
r.idx = i
if prevChar {
r.idx++
}
success = true
return
}
}
return
}
for i := r.idx + 1; i < len(r.buf); i++ {
if r.buf[i] == ch {
r.idx = i
if prevChar {
r.idx--
}
success = true
return
}
}
})
return
}
func (r *RuneBuffer) isInLineEdge() bool {
if isWindows {
return false
}
sp := r.getSplitByLine(r.buf)
return len(sp[len(sp)-1]) == 0
}
func (r *RuneBuffer) getSplitByLine(rs []rune) []string {
return SplitByLine(r.promptLen(), r.width, rs)
}
func (r *RuneBuffer) IdxLine(width int) int {
r.Lock()
defer r.Unlock()
return r.idxLine(width)
}
func (r *RuneBuffer) idxLine(width int) int {
if width == 0 {
return 0
}
sp := r.getSplitByLine(r.buf[:r.idx])
return len(sp) - 1
}
func (r *RuneBuffer) CursorLineCount() int {
return r.LineCount(r.width) - r.IdxLine(r.width)
}
func (r *RuneBuffer) Refresh(f func()) {
r.Lock()
defer r.Unlock()
if !r.interactive {
if f != nil {
f()
}
return
}
r.clean()
if f != nil {
f()
}
r.print()
}
func (r *RuneBuffer) SetOffset(offset string) {
r.Lock()
r.offset = offset
r.Unlock()
}
func (r *RuneBuffer) print() {
r.w.Write(r.output())
r.hadClean = false
}
func (r *RuneBuffer) output() []byte {
buf := bytes.NewBuffer(nil)
buf.WriteString(string(r.prompt))
if r.cfg.EnableMask && len(r.buf) > 0 {
buf.Write([]byte(strings.Repeat(string(r.cfg.MaskRune), len(r.buf)-1)))
if r.buf[len(r.buf)-1] == '\n' {
buf.Write([]byte{'\n'})
} else {
buf.Write([]byte(string(r.cfg.MaskRune)))
}
if len(r.buf) > r.idx {
buf.Write(runes.Backspace(r.buf[r.idx:]))
}
} else {
for idx := range r.buf {
if r.buf[idx] == '\t' {
buf.WriteString(strings.Repeat(" ", TabWidth))
} else {
buf.WriteRune(r.buf[idx])
}
}
if r.isInLineEdge() {
buf.Write([]byte(" \b"))
}
}
if len(r.buf) > r.idx {
buf.Write(runes.Backspace(r.buf[r.idx:]))
}
return buf.Bytes()
}
func (r *RuneBuffer) Reset() []rune {
ret := runes.Copy(r.buf)
r.buf = r.buf[:0]
r.idx = 0
return ret
}
func (r *RuneBuffer) calWidth(m int) int {
if m > 0 {
return runes.WidthAll(r.buf[r.idx : r.idx+m])
}
return runes.WidthAll(r.buf[r.idx+m : r.idx])
}
func (r *RuneBuffer) SetStyle(start, end int, style string) {
if end < start {
panic("end < start")
}
// goto start
move := start - r.idx
if move > 0 {
r.w.Write([]byte(string(r.buf[r.idx : r.idx+move])))
} else {
r.w.Write(bytes.Repeat([]byte("\b"), r.calWidth(move)))
}
r.w.Write([]byte("\033[" + style + "m"))
r.w.Write([]byte(string(r.buf[start:end])))
r.w.Write([]byte("\033[0m"))
// TODO: move back
}
func (r *RuneBuffer) SetWithIdx(idx int, buf []rune) {
r.Refresh(func() {
r.buf = buf
r.idx = idx
})
}
func (r *RuneBuffer) Set(buf []rune) {
r.SetWithIdx(len(buf), buf)
}
func (r *RuneBuffer) SetPrompt(prompt string) {
r.Lock()
r.prompt = []rune(prompt)
r.Unlock()
}
func (r *RuneBuffer) cleanOutput(w io.Writer, idxLine int) {
buf := bufio.NewWriter(w)
if r.width == 0 {
buf.WriteString(strings.Repeat("\r\b", len(r.buf)+r.promptLen()))
buf.Write([]byte("\033[J"))
} else {
buf.Write([]byte("\033[J")) // just like ^k :)
if idxLine == 0 {
buf.WriteString("\033[2K")
buf.WriteString("\r")
} else {
for i := 0; i < idxLine; i++ {
io.WriteString(buf, "\033[2K\r\033[A")
}
io.WriteString(buf, "\033[2K\r")
}
}
buf.Flush()
return
}
func (r *RuneBuffer) Clean() {
r.Lock()
r.clean()
r.Unlock()
}
func (r *RuneBuffer) clean() {
r.cleanWithIdxLine(r.idxLine(r.width))
}
func (r *RuneBuffer) cleanWithIdxLine(idxLine int) {
if r.hadClean || !r.interactive {
return
}
r.hadClean = true
r.cleanOutput(r.w, idxLine)
}

223
vendor/github.com/chzyer/readline/runes.go generated vendored Normal file
View File

@ -0,0 +1,223 @@
package readline
import (
"bytes"
"unicode"
"unicode/utf8"
)
var runes = Runes{}
var TabWidth = 4
type Runes struct{}
func (Runes) EqualRune(a, b rune, fold bool) bool {
if a == b {
return true
}
if !fold {
return false
}
if a > b {
a, b = b, a
}
if b < utf8.RuneSelf && 'A' <= a && a <= 'Z' {
if b == a+'a'-'A' {
return true
}
}
return false
}
func (r Runes) EqualRuneFold(a, b rune) bool {
return r.EqualRune(a, b, true)
}
func (r Runes) EqualFold(a, b []rune) bool {
if len(a) != len(b) {
return false
}
for i := 0; i < len(a); i++ {
if r.EqualRuneFold(a[i], b[i]) {
continue
}
return false
}
return true
}
func (Runes) Equal(a, b []rune) bool {
if len(a) != len(b) {
return false
}
for i := 0; i < len(a); i++ {
if a[i] != b[i] {
return false
}
}
return true
}
func (rs Runes) IndexAllBckEx(r, sub []rune, fold bool) int {
for i := len(r) - len(sub); i >= 0; i-- {
found := true
for j := 0; j < len(sub); j++ {
if !rs.EqualRune(r[i+j], sub[j], fold) {
found = false
break
}
}
if found {
return i
}
}
return -1
}
// Search in runes from end to front
func (rs Runes) IndexAllBck(r, sub []rune) int {
return rs.IndexAllBckEx(r, sub, false)
}
// Search in runes from front to end
func (rs Runes) IndexAll(r, sub []rune) int {
return rs.IndexAllEx(r, sub, false)
}
func (rs Runes) IndexAllEx(r, sub []rune, fold bool) int {
for i := 0; i < len(r); i++ {
found := true
if len(r[i:]) < len(sub) {
return -1
}
for j := 0; j < len(sub); j++ {
if !rs.EqualRune(r[i+j], sub[j], fold) {
found = false
break
}
}
if found {
return i
}
}
return -1
}
func (Runes) Index(r rune, rs []rune) int {
for i := 0; i < len(rs); i++ {
if rs[i] == r {
return i
}
}
return -1
}
func (Runes) ColorFilter(r []rune) []rune {
newr := make([]rune, 0, len(r))
for pos := 0; pos < len(r); pos++ {
if r[pos] == '\033' && r[pos+1] == '[' {
idx := runes.Index('m', r[pos+2:])
if idx == -1 {
continue
}
pos += idx + 2
continue
}
newr = append(newr, r[pos])
}
return newr
}
var zeroWidth = []*unicode.RangeTable{
unicode.Mn,
unicode.Me,
unicode.Cc,
unicode.Cf,
}
var doubleWidth = []*unicode.RangeTable{
unicode.Han,
unicode.Hangul,
unicode.Hiragana,
unicode.Katakana,
}
func (Runes) Width(r rune) int {
if r == '\t' {
return TabWidth
}
if unicode.IsOneOf(zeroWidth, r) {
return 0
}
if unicode.IsOneOf(doubleWidth, r) {
return 2
}
return 1
}
func (Runes) WidthAll(r []rune) (length int) {
for i := 0; i < len(r); i++ {
length += runes.Width(r[i])
}
return
}
func (Runes) Backspace(r []rune) []byte {
return bytes.Repeat([]byte{'\b'}, runes.WidthAll(r))
}
func (Runes) Copy(r []rune) []rune {
n := make([]rune, len(r))
copy(n, r)
return n
}
func (Runes) HasPrefixFold(r, prefix []rune) bool {
if len(r) < len(prefix) {
return false
}
return runes.EqualFold(r[:len(prefix)], prefix)
}
func (Runes) HasPrefix(r, prefix []rune) bool {
if len(r) < len(prefix) {
return false
}
return runes.Equal(r[:len(prefix)], prefix)
}
func (Runes) Aggregate(candicate [][]rune) (same []rune, size int) {
for i := 0; i < len(candicate[0]); i++ {
for j := 0; j < len(candicate)-1; j++ {
if i >= len(candicate[j]) || i >= len(candicate[j+1]) {
goto aggregate
}
if candicate[j][i] != candicate[j+1][i] {
goto aggregate
}
}
size = i + 1
}
aggregate:
if size > 0 {
same = runes.Copy(candicate[0][:size])
for i := 0; i < len(candicate); i++ {
n := runes.Copy(candicate[i])
copy(n, n[size:])
candicate[i] = n[:len(n)-size]
}
}
return
}
func (Runes) TrimSpaceLeft(in []rune) []rune {
firstIndex := len(in)
for i, r := range in {
if unicode.IsSpace(r) == false {
firstIndex = i
break
}
}
return in[firstIndex:]
}
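
For illustration, a small sketch of the exported width helpers above; the sample strings are arbitrary and not part of the vendored file.

package main

import (
	"fmt"

	"github.com/chzyer/readline"
)

func main() {
	var rs readline.Runes

	// 'a' and 'b' are width 1, '\t' counts as TabWidth (4), and Han runes count as 2.
	fmt.Println(rs.WidthAll([]rune("ab\t漢"))) // 8

	// Case-insensitive prefix match.
	fmt.Println(rs.HasPrefixFold([]rune("Help me"), []rune("help"))) // true
}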

155
vendor/github.com/chzyer/readline/runes/runes.go generated vendored Normal file
View File

@ -0,0 +1,155 @@
// Deprecated: see https://github.com/chzyer/readline/issues/43;
// use github.com/chzyer/readline/runes.go instead.
package runes
import (
"bytes"
"unicode"
)
func Equal(a, b []rune) bool {
if len(a) != len(b) {
return false
}
for i := 0; i < len(a); i++ {
if a[i] != b[i] {
return false
}
}
return true
}
// Search in runes from end to front
func IndexAllBck(r, sub []rune) int {
for i := len(r) - len(sub); i >= 0; i-- {
found := true
for j := 0; j < len(sub); j++ {
if r[i+j] != sub[j] {
found = false
break
}
}
if found {
return i
}
}
return -1
}
// Search in runes from front to end
func IndexAll(r, sub []rune) int {
for i := 0; i < len(r); i++ {
found := true
if len(r[i:]) < len(sub) {
return -1
}
for j := 0; j < len(sub); j++ {
if r[i+j] != sub[j] {
found = false
break
}
}
if found {
return i
}
}
return -1
}
func Index(r rune, rs []rune) int {
for i := 0; i < len(rs); i++ {
if rs[i] == r {
return i
}
}
return -1
}
func ColorFilter(r []rune) []rune {
newr := make([]rune, 0, len(r))
for pos := 0; pos < len(r); pos++ {
if r[pos] == '\033' && r[pos+1] == '[' {
idx := Index('m', r[pos+2:])
if idx == -1 {
continue
}
pos += idx + 2
continue
}
newr = append(newr, r[pos])
}
return newr
}
var zeroWidth = []*unicode.RangeTable{
unicode.Mn,
unicode.Me,
unicode.Cc,
unicode.Cf,
}
var doubleWidth = []*unicode.RangeTable{
unicode.Han,
unicode.Hangul,
unicode.Hiragana,
unicode.Katakana,
}
func Width(r rune) int {
if unicode.IsOneOf(zeroWidth, r) {
return 0
}
if unicode.IsOneOf(doubleWidth, r) {
return 2
}
return 1
}
func WidthAll(r []rune) (length int) {
for i := 0; i < len(r); i++ {
length += Width(r[i])
}
return
}
func Backspace(r []rune) []byte {
return bytes.Repeat([]byte{'\b'}, WidthAll(r))
}
func Copy(r []rune) []rune {
n := make([]rune, len(r))
copy(n, r)
return n
}
func HasPrefix(r, prefix []rune) bool {
if len(r) < len(prefix) {
return false
}
return Equal(r[:len(prefix)], prefix)
}
func Aggregate(candicate [][]rune) (same []rune, size int) {
for i := 0; i < len(candicate[0]); i++ {
for j := 0; j < len(candicate)-1; j++ {
if i >= len(candicate[j]) || i >= len(candicate[j+1]) {
goto aggregate
}
if candicate[j][i] != candicate[j+1][i] {
goto aggregate
}
}
size = i + 1
}
aggregate:
if size > 0 {
same = Copy(candicate[0][:size])
for i := 0; i < len(candicate); i++ {
n := Copy(candicate[i])
copy(n, n[size:])
candicate[i] = n[:len(n)-size]
}
}
return
}

164
vendor/github.com/chzyer/readline/search.go generated vendored Normal file
View File

@ -0,0 +1,164 @@
package readline
import (
"bytes"
"container/list"
"fmt"
"io"
)
const (
S_STATE_FOUND = iota
S_STATE_FAILING
)
const (
S_DIR_BCK = iota
S_DIR_FWD
)
type opSearch struct {
inMode bool
state int
dir int
source *list.Element
w io.Writer
buf *RuneBuffer
data []rune
history *opHistory
cfg *Config
markStart int
markEnd int
width int
}
func newOpSearch(w io.Writer, buf *RuneBuffer, history *opHistory, cfg *Config, width int) *opSearch {
return &opSearch{
w: w,
buf: buf,
cfg: cfg,
history: history,
width: width,
}
}
func (o *opSearch) OnWidthChange(newWidth int) {
o.width = newWidth
}
func (o *opSearch) IsSearchMode() bool {
return o.inMode
}
func (o *opSearch) SearchBackspace() {
if len(o.data) > 0 {
o.data = o.data[:len(o.data)-1]
o.search(true)
}
}
func (o *opSearch) findHistoryBy(isNewSearch bool) (int, *list.Element) {
if o.dir == S_DIR_BCK {
return o.history.FindBck(isNewSearch, o.data, o.buf.idx)
}
return o.history.FindFwd(isNewSearch, o.data, o.buf.idx)
}
func (o *opSearch) search(isChange bool) bool {
if len(o.data) == 0 {
o.state = S_STATE_FOUND
o.SearchRefresh(-1)
return true
}
idx, elem := o.findHistoryBy(isChange)
if elem == nil {
o.SearchRefresh(-2)
return false
}
o.history.current = elem
item := o.history.showItem(o.history.current.Value)
start, end := 0, 0
if o.dir == S_DIR_BCK {
start, end = idx, idx+len(o.data)
} else {
start, end = idx, idx+len(o.data)
idx += len(o.data)
}
o.buf.SetWithIdx(idx, item)
o.markStart, o.markEnd = start, end
o.SearchRefresh(idx)
return true
}
func (o *opSearch) SearchChar(r rune) {
o.data = append(o.data, r)
o.search(true)
}
func (o *opSearch) SearchMode(dir int) bool {
if o.width == 0 {
return false
}
alreadyInMode := o.inMode
o.inMode = true
o.dir = dir
o.source = o.history.current
if alreadyInMode {
o.search(false)
} else {
o.SearchRefresh(-1)
}
return true
}
func (o *opSearch) ExitSearchMode(revert bool) {
if revert {
o.history.current = o.source
o.buf.Set(o.history.showItem(o.history.current.Value))
}
o.markStart, o.markEnd = 0, 0
o.state = S_STATE_FOUND
o.inMode = false
o.source = nil
o.data = nil
}
func (o *opSearch) SearchRefresh(x int) {
if x == -2 {
o.state = S_STATE_FAILING
} else if x >= 0 {
o.state = S_STATE_FOUND
}
if x < 0 {
x = o.buf.idx
}
x = o.buf.CurrentWidth(x)
x += o.buf.PromptLen()
x = x % o.width
if o.markStart > 0 {
o.buf.SetStyle(o.markStart, o.markEnd, "4")
}
lineCnt := o.buf.CursorLineCount()
buf := bytes.NewBuffer(nil)
buf.Write(bytes.Repeat([]byte("\n"), lineCnt))
buf.WriteString("\033[J")
if o.state == S_STATE_FAILING {
buf.WriteString("failing ")
}
if o.dir == S_DIR_BCK {
buf.WriteString("bck")
} else if o.dir == S_DIR_FWD {
buf.WriteString("fwd")
}
buf.WriteString("-i-search: ")
buf.WriteString(string(o.data)) // keyword
buf.WriteString("\033[4m \033[0m") // _
fmt.Fprintf(buf, "\r\033[%dA", lineCnt) // move prev
if x > 0 {
fmt.Fprintf(buf, "\033[%dC", x) // move forward
}
o.w.Write(buf.Bytes())
}

133
vendor/github.com/chzyer/readline/std.go generated vendored Normal file
View File

@ -0,0 +1,133 @@
package readline
import (
"io"
"os"
"sync"
"sync/atomic"
)
var (
Stdin io.ReadCloser = os.Stdin
Stdout io.WriteCloser = os.Stdout
Stderr io.WriteCloser = os.Stderr
)
var (
std *Instance
stdOnce sync.Once
)
// the global instance will not save history automatically
func getInstance() *Instance {
stdOnce.Do(func() {
std, _ = NewEx(&Config{
DisableAutoSaveHistory: true,
})
})
return std
}
// SetHistoryPath makes readline load history from the given file path
// and persist new entries to disk.
// Set fp to "" to prevent readline from persisting history to disk,
// in which case `AddHistory` will always return a nil error.
func SetHistoryPath(fp string) {
ins := getInstance()
cfg := ins.Config.Clone()
cfg.HistoryFile = fp
ins.SetConfig(cfg)
}
// set the auto completer on the global instance
func SetAutoComplete(completer AutoCompleter) {
ins := getInstance()
cfg := ins.Config.Clone()
cfg.AutoComplete = completer
ins.SetConfig(cfg)
}
// add history to the global instance manually;
// an error is raised only if `SetHistoryPath` was set to a non-empty path
func AddHistory(content string) error {
ins := getInstance()
return ins.SaveHistory(content)
}
func Password(prompt string) ([]byte, error) {
ins := getInstance()
return ins.ReadPassword(prompt)
}
// readline with the global config
func Line(prompt string) (string, error) {
ins := getInstance()
ins.SetPrompt(prompt)
return ins.Readline()
}
type CancelableStdin struct {
r io.Reader
mutex sync.Mutex
stop chan struct{}
closed int32
notify chan struct{}
data []byte
read int
err error
}
func NewCancelableStdin(r io.Reader) *CancelableStdin {
c := &CancelableStdin{
r: r,
notify: make(chan struct{}),
stop: make(chan struct{}),
}
go c.ioloop()
return c
}
func (c *CancelableStdin) ioloop() {
loop:
for {
select {
case <-c.notify:
c.read, c.err = c.r.Read(c.data)
select {
case c.notify <- struct{}{}:
case <-c.stop:
break loop
}
case <-c.stop:
break loop
}
}
}
func (c *CancelableStdin) Read(b []byte) (n int, err error) {
c.mutex.Lock()
defer c.mutex.Unlock()
if atomic.LoadInt32(&c.closed) == 1 {
return 0, io.EOF
}
c.data = b
select {
case c.notify <- struct{}{}:
case <-c.stop:
return 0, io.EOF
}
select {
case <-c.notify:
return c.read, c.err
case <-c.stop:
return 0, io.EOF
}
}
func (c *CancelableStdin) Close() error {
if atomic.CompareAndSwapInt32(&c.closed, 0, 1) {
close(c.stop)
}
return nil
}
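
A hedged sketch of the package-level convenience API above, which shares the lazily created global instance; the history path and prompts are illustrative assumptions.

package main

import (
	"fmt"

	"github.com/chzyer/readline"
)

func main() {
	// Point the shared global instance at a history file (illustrative path).
	readline.SetHistoryPath("/tmp/global.hist")

	line, err := readline.Line("name: ")
	if err != nil {
		return
	}
	// The global instance does not auto-save history, so record the line explicitly.
	if err := readline.AddHistory(line); err != nil {
		fmt.Println("history error:", err)
	}

	secret, _ := readline.Password("password: ")
	fmt.Println("read", len(secret), "bytes")
}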

9
vendor/github.com/chzyer/readline/std_windows.go generated vendored Normal file
View File

@ -0,0 +1,9 @@
// +build windows
package readline
func init() {
Stdin = NewRawReader()
Stdout = NewANSIWriter(Stdout)
Stderr = NewANSIWriter(Stderr)
}

134
vendor/github.com/chzyer/readline/term.go generated vendored Normal file
View File

@ -0,0 +1,134 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build darwin dragonfly freebsd linux,!appengine netbsd openbsd
// Package terminal provides support functions for dealing with terminals, as
// commonly found on UNIX systems.
//
// Putting a terminal into raw mode is the most common requirement:
//
// oldState, err := terminal.MakeRaw(0)
// if err != nil {
// panic(err)
// }
// defer terminal.Restore(0, oldState)
package readline
import (
"io"
"syscall"
"unsafe"
)
// State contains the state of a terminal.
type State struct {
termios syscall.Termios
}
// IsTerminal returns true if the given file descriptor is a terminal.
func IsTerminal(fd int) bool {
var termios syscall.Termios
_, _, err := syscall.Syscall6(syscall.SYS_IOCTL, uintptr(fd), ioctlReadTermios, uintptr(unsafe.Pointer(&termios)), 0, 0, 0)
return err == 0
}
// MakeRaw put the terminal connected to the given file descriptor into raw
// mode and returns the previous state of the terminal so that it can be
// restored.
func MakeRaw(fd int) (*State, error) {
var oldState State
if _, _, err := syscall.Syscall6(syscall.SYS_IOCTL, uintptr(fd), ioctlReadTermios, uintptr(unsafe.Pointer(&oldState.termios)), 0, 0, 0); err != 0 {
return nil, err
}
newState := oldState.termios
// This attempts to replicate the behaviour documented for cfmakeraw in
// the termios(3) manpage.
newState.Iflag &^= syscall.IGNBRK | syscall.BRKINT | syscall.PARMRK | syscall.ISTRIP | syscall.INLCR | syscall.IGNCR | syscall.ICRNL | syscall.IXON
// newState.Oflag &^= syscall.OPOST
newState.Lflag &^= syscall.ECHO | syscall.ECHONL | syscall.ICANON | syscall.ISIG | syscall.IEXTEN
newState.Cflag &^= syscall.CSIZE | syscall.PARENB
newState.Cflag |= syscall.CS8
if _, _, err := syscall.Syscall6(syscall.SYS_IOCTL, uintptr(fd), ioctlWriteTermios, uintptr(unsafe.Pointer(&newState)), 0, 0, 0); err != 0 {
return nil, err
}
return &oldState, nil
}
// GetState returns the current state of a terminal which may be useful to
// restore the terminal after a signal.
func GetState(fd int) (*State, error) {
var oldState State
if _, _, err := syscall.Syscall6(syscall.SYS_IOCTL, uintptr(fd), ioctlReadTermios, uintptr(unsafe.Pointer(&oldState.termios)), 0, 0, 0); err != 0 {
return nil, err
}
return &oldState, nil
}
// restoreTerm restores the terminal connected to the given file descriptor
// to a previous state.
func restoreTerm(fd int, state *State) error {
_, _, err := syscall.Syscall6(syscall.SYS_IOCTL, uintptr(fd), ioctlWriteTermios, uintptr(unsafe.Pointer(&state.termios)), 0, 0, 0)
return err
}
// GetSize returns the dimensions of the given terminal.
func GetSize(fd int) (width, height int, err error) {
var dimensions [4]uint16
if _, _, err := syscall.Syscall6(syscall.SYS_IOCTL, uintptr(fd), uintptr(syscall.TIOCGWINSZ), uintptr(unsafe.Pointer(&dimensions)), 0, 0, 0); err != 0 {
return -1, -1, err
}
return int(dimensions[1]), int(dimensions[0]), nil
}
// ReadPassword reads a line of input from a terminal without local echo. This
// is commonly used for inputting passwords and other sensitive data. The slice
// returned does not include the \n.
func ReadPassword(fd int) ([]byte, error) {
var oldState syscall.Termios
if _, _, err := syscall.Syscall6(syscall.SYS_IOCTL, uintptr(fd), ioctlReadTermios, uintptr(unsafe.Pointer(&oldState)), 0, 0, 0); err != 0 {
return nil, err
}
newState := oldState
newState.Lflag &^= syscall.ECHO
newState.Lflag |= syscall.ICANON | syscall.ISIG
newState.Iflag |= syscall.ICRNL
if _, _, err := syscall.Syscall6(syscall.SYS_IOCTL, uintptr(fd), ioctlWriteTermios, uintptr(unsafe.Pointer(&newState)), 0, 0, 0); err != 0 {
return nil, err
}
defer func() {
syscall.Syscall6(syscall.SYS_IOCTL, uintptr(fd), ioctlWriteTermios, uintptr(unsafe.Pointer(&oldState)), 0, 0, 0)
}()
var buf [16]byte
var ret []byte
for {
n, err := syscall.Read(fd, buf[:])
if err != nil {
return nil, err
}
if n == 0 {
if len(ret) == 0 {
return nil, io.EOF
}
break
}
if buf[n-1] == '\n' {
n--
}
ret = append(ret, buf[:n]...)
if n < len(buf) {
break
}
}
return ret, nil
}
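
A minimal sketch of the lower-level terminal helpers above on a Unix-like system, combined with Restore from utils.go further below; error handling is abbreviated and the output text is illustrative.

package main

import (
	"fmt"
	"syscall"

	"github.com/chzyer/readline"
)

func main() {
	// Put stdin into raw mode and make sure the previous state is restored.
	state, err := readline.MakeRaw(syscall.Stdin)
	if err != nil {
		panic(err)
	}
	defer readline.Restore(syscall.Stdin, state)

	// Query the terminal dimensions via TIOCGWINSZ.
	w, h, err := readline.GetSize(syscall.Stdout)
	if err == nil {
		fmt.Printf("terminal is %d x %d\r\n", w, h)
	}
}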

12
vendor/github.com/chzyer/readline/term_bsd.go generated vendored Normal file
View File

@ -0,0 +1,12 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build darwin dragonfly freebsd netbsd openbsd
package readline
import "syscall"
const ioctlReadTermios = syscall.TIOCGETA
const ioctlWriteTermios = syscall.TIOCSETA

11
vendor/github.com/chzyer/readline/term_linux.go generated vendored Normal file
View File

@ -0,0 +1,11 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package readline
// These constants are declared here, rather than importing
// them from the syscall package as some syscall packages, even
// on linux, for example gccgo, do not declare them.
const ioctlReadTermios = 0x5401 // syscall.TCGETS
const ioctlWriteTermios = 0x5402 // syscall.TCSETS

171
vendor/github.com/chzyer/readline/term_windows.go generated vendored Normal file
View File

@ -0,0 +1,171 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build windows
// Package terminal provides support functions for dealing with terminals, as
// commonly found on UNIX systems.
//
// Putting a terminal into raw mode is the most common requirement:
//
// oldState, err := terminal.MakeRaw(0)
// if err != nil {
// panic(err)
// }
// defer terminal.Restore(0, oldState)
package readline
import (
"io"
"syscall"
"unsafe"
)
const (
enableLineInput = 2
enableEchoInput = 4
enableProcessedInput = 1
enableWindowInput = 8
enableMouseInput = 16
enableInsertMode = 32
enableQuickEditMode = 64
enableExtendedFlags = 128
enableAutoPosition = 256
enableProcessedOutput = 1
enableWrapAtEolOutput = 2
)
var kernel32 = syscall.NewLazyDLL("kernel32.dll")
var (
procGetConsoleMode = kernel32.NewProc("GetConsoleMode")
procSetConsoleMode = kernel32.NewProc("SetConsoleMode")
procGetConsoleScreenBufferInfo = kernel32.NewProc("GetConsoleScreenBufferInfo")
)
type (
coord struct {
x short
y short
}
smallRect struct {
left short
top short
right short
bottom short
}
consoleScreenBufferInfo struct {
size coord
cursorPosition coord
attributes word
window smallRect
maximumWindowSize coord
}
)
type State struct {
mode uint32
}
// IsTerminal returns true if the given file descriptor is a terminal.
func IsTerminal(fd int) bool {
var st uint32
r, _, e := syscall.Syscall(procGetConsoleMode.Addr(), 2, uintptr(fd), uintptr(unsafe.Pointer(&st)), 0)
return r != 0 && e == 0
}
// MakeRaw put the terminal connected to the given file descriptor into raw
// mode and returns the previous state of the terminal so that it can be
// restored.
func MakeRaw(fd int) (*State, error) {
var st uint32
_, _, e := syscall.Syscall(procGetConsoleMode.Addr(), 2, uintptr(fd), uintptr(unsafe.Pointer(&st)), 0)
if e != 0 {
return nil, error(e)
}
raw := st &^ (enableEchoInput | enableProcessedInput | enableLineInput | enableProcessedOutput)
_, _, e = syscall.Syscall(procSetConsoleMode.Addr(), 2, uintptr(fd), uintptr(raw), 0)
if e != 0 {
return nil, error(e)
}
return &State{st}, nil
}
// GetState returns the current state of a terminal which may be useful to
// restore the terminal after a signal.
func GetState(fd int) (*State, error) {
var st uint32
_, _, e := syscall.Syscall(procGetConsoleMode.Addr(), 2, uintptr(fd), uintptr(unsafe.Pointer(&st)), 0)
if e != 0 {
return nil, error(e)
}
return &State{st}, nil
}
// restoreTerm restores the terminal connected to the given file descriptor
// to a previous state.
func restoreTerm(fd int, state *State) error {
_, _, err := syscall.Syscall(procSetConsoleMode.Addr(), 2, uintptr(fd), uintptr(state.mode), 0)
return err
}
// GetSize returns the dimensions of the given terminal.
func GetSize(fd int) (width, height int, err error) {
var info consoleScreenBufferInfo
_, _, e := syscall.Syscall(procGetConsoleScreenBufferInfo.Addr(), 2, uintptr(fd), uintptr(unsafe.Pointer(&info)), 0)
if e != 0 {
return 0, 0, error(e)
}
return int(info.size.x), int(info.size.y), nil
}
// ReadPassword reads a line of input from a terminal without local echo. This
// is commonly used for inputting passwords and other sensitive data. The slice
// returned does not include the \n.
func ReadPassword(fd int) ([]byte, error) {
var st uint32
_, _, e := syscall.Syscall(procGetConsoleMode.Addr(), 2, uintptr(fd), uintptr(unsafe.Pointer(&st)), 0)
if e != 0 {
return nil, error(e)
}
old := st
st &^= (enableEchoInput)
st |= (enableProcessedInput | enableLineInput | enableProcessedOutput)
_, _, e = syscall.Syscall(procSetConsoleMode.Addr(), 2, uintptr(fd), uintptr(st), 0)
if e != 0 {
return nil, error(e)
}
defer func() {
syscall.Syscall(procSetConsoleMode.Addr(), 2, uintptr(fd), uintptr(old), 0)
}()
var buf [16]byte
var ret []byte
for {
n, err := syscall.Read(syscall.Handle(fd), buf[:])
if err != nil {
return nil, err
}
if n == 0 {
if len(ret) == 0 {
return nil, io.EOF
}
break
}
if buf[n-1] == '\n' {
n--
}
if n > 0 && buf[n-1] == '\r' {
n--
}
ret = append(ret, buf[:n]...)
if n < len(buf) {
break
}
}
return ret, nil
}

215
vendor/github.com/chzyer/readline/terminal.go generated vendored Normal file
View File

@ -0,0 +1,215 @@
package readline
import (
"bufio"
"fmt"
"io"
"strings"
"sync"
"sync/atomic"
)
type Terminal struct {
cfg *Config
outchan chan rune
closed int32
stopChan chan struct{}
kickChan chan struct{}
wg sync.WaitGroup
isReading int32
sleeping int32
sizeChan chan string
}
func NewTerminal(cfg *Config) (*Terminal, error) {
if err := cfg.Init(); err != nil {
return nil, err
}
t := &Terminal{
cfg: cfg,
kickChan: make(chan struct{}, 1),
outchan: make(chan rune),
stopChan: make(chan struct{}, 1),
sizeChan: make(chan string, 1),
}
go t.ioloop()
return t, nil
}
// SleepToResume suspends the current process and returns only once it has been resumed.
func (t *Terminal) SleepToResume() {
if !atomic.CompareAndSwapInt32(&t.sleeping, 0, 1) {
return
}
defer atomic.StoreInt32(&t.sleeping, 0)
t.ExitRawMode()
ch := WaitForResume()
SuspendMe()
<-ch
t.EnterRawMode()
}
func (t *Terminal) EnterRawMode() (err error) {
return t.cfg.FuncMakeRaw()
}
func (t *Terminal) ExitRawMode() (err error) {
return t.cfg.FuncExitRaw()
}
func (t *Terminal) Write(b []byte) (int, error) {
return t.cfg.Stdout.Write(b)
}
type termSize struct {
left int
top int
}
func (t *Terminal) GetOffset(f func(offset string)) {
go func() {
f(<-t.sizeChan)
}()
t.Write([]byte("\033[6n"))
}
func (t *Terminal) Print(s string) {
fmt.Fprintf(t.cfg.Stdout, "%s", s)
}
func (t *Terminal) PrintRune(r rune) {
fmt.Fprintf(t.cfg.Stdout, "%c", r)
}
func (t *Terminal) Readline() *Operation {
return NewOperation(t, t.cfg)
}
// returns rune(0) on EOF
func (t *Terminal) ReadRune() rune {
ch, ok := <-t.outchan
if !ok {
return rune(0)
}
return ch
}
func (t *Terminal) IsReading() bool {
return atomic.LoadInt32(&t.isReading) == 1
}
func (t *Terminal) KickRead() {
select {
case t.kickChan <- struct{}{}:
default:
}
}
func (t *Terminal) ioloop() {
t.wg.Add(1)
defer func() {
t.wg.Done()
close(t.outchan)
}()
var (
isEscape bool
isEscapeEx bool
expectNextChar bool
)
buf := bufio.NewReader(t.cfg.Stdin)
for {
if !expectNextChar {
atomic.StoreInt32(&t.isReading, 0)
select {
case <-t.kickChan:
atomic.StoreInt32(&t.isReading, 1)
case <-t.stopChan:
return
}
}
expectNextChar = false
r, _, err := buf.ReadRune()
if err != nil {
if strings.Contains(err.Error(), "interrupted system call") {
expectNextChar = true
continue
}
break
}
if isEscape {
isEscape = false
if r == CharEscapeEx {
expectNextChar = true
isEscapeEx = true
continue
}
r = escapeKey(r, buf)
} else if isEscapeEx {
isEscapeEx = false
if key := readEscKey(r, buf); key != nil {
r = escapeExKey(key)
// offset
if key.typ == 'R' {
if _, _, ok := key.Get2(); ok {
select {
case t.sizeChan <- key.attr:
default:
}
}
expectNextChar = true
continue
}
}
if r == 0 {
expectNextChar = true
continue
}
}
expectNextChar = true
switch r {
case CharEsc:
if t.cfg.VimMode {
t.outchan <- r
break
}
isEscape = true
case CharInterrupt, CharEnter, CharCtrlJ, CharDelete:
expectNextChar = false
fallthrough
default:
t.outchan <- r
}
}
}
func (t *Terminal) Bell() {
fmt.Fprintf(t, "%c", CharBell)
}
func (t *Terminal) Close() error {
if atomic.SwapInt32(&t.closed, 1) != 0 {
return nil
}
if closer, ok := t.cfg.Stdin.(io.Closer); ok {
closer.Close()
}
close(t.stopChan)
t.wg.Wait()
return t.ExitRawMode()
}
func (t *Terminal) SetConfig(c *Config) error {
if err := c.Init(); err != nil {
return err
}
t.cfg = c
return nil
}

274
vendor/github.com/chzyer/readline/utils.go generated vendored Normal file
View File

@ -0,0 +1,274 @@
package readline
import (
"bufio"
"bytes"
"container/list"
"fmt"
"os"
"strconv"
"strings"
"sync"
"time"
"unicode"
)
var (
isWindows = false
)
const (
CharLineStart = 1
CharBackward = 2
CharInterrupt = 3
CharDelete = 4
CharLineEnd = 5
CharForward = 6
CharBell = 7
CharCtrlH = 8
CharTab = 9
CharCtrlJ = 10
CharKill = 11
CharCtrlL = 12
CharEnter = 13
CharNext = 14
CharPrev = 16
CharBckSearch = 18
CharFwdSearch = 19
CharTranspose = 20
CharCtrlU = 21
CharCtrlW = 23
CharCtrlZ = 26
CharEsc = 27
CharEscapeEx = 91
CharBackspace = 127
)
const (
MetaBackward rune = -iota - 1
MetaForward
MetaDelete
MetaBackspace
MetaTranspose
)
// WaitForResume must be called before the current process gets suspended.
// It runs a ticker and waits until an unusually long gap between ticks occurs,
// which indicates that the process has been resumed.
func WaitForResume() chan struct{} {
ch := make(chan struct{})
var wg sync.WaitGroup
wg.Add(1)
go func() {
ticker := time.NewTicker(10 * time.Millisecond)
t := time.Now()
wg.Done()
for {
now := <-ticker.C
if now.Sub(t) > 100*time.Millisecond {
break
}
t = now
}
ticker.Stop()
ch <- struct{}{}
}()
wg.Wait()
return ch
}
func Restore(fd int, state *State) error {
err := restoreTerm(fd, state)
if err != nil {
// errno 0 means everything is ok :)
if err.Error() == "errno 0" {
err = nil
}
}
return err
}
func IsPrintable(key rune) bool {
isInSurrogateArea := key >= 0xd800 && key <= 0xdbff
return key >= 32 && !isInSurrogateArea
}
// translate Esc[X
func escapeExKey(key *escapeKeyPair) rune {
var r rune
switch key.typ {
case 'D':
r = CharBackward
case 'C':
r = CharForward
case 'A':
r = CharPrev
case 'B':
r = CharNext
case 'H':
r = CharLineStart
case 'F':
r = CharLineEnd
case '~':
if key.attr == "3" {
r = CharDelete
}
default:
}
return r
}
type escapeKeyPair struct {
attr string
typ rune
}
func (e *escapeKeyPair) Get2() (int, int, bool) {
sp := strings.Split(e.attr, ";")
if len(sp) < 2 {
return -1, -1, false
}
s1, err := strconv.Atoi(sp[0])
if err != nil {
return -1, -1, false
}
s2, err := strconv.Atoi(sp[1])
if err != nil {
return -1, -1, false
}
return s1, s2, true
}
func readEscKey(r rune, reader *bufio.Reader) *escapeKeyPair {
p := escapeKeyPair{}
buf := bytes.NewBuffer(nil)
for {
if r == ';' {
} else if unicode.IsNumber(r) {
} else {
p.typ = r
break
}
buf.WriteRune(r)
r, _, _ = reader.ReadRune()
}
p.attr = buf.String()
return &p
}
// translate EscX to Meta+X
func escapeKey(r rune, reader *bufio.Reader) rune {
switch r {
case 'b':
r = MetaBackward
case 'f':
r = MetaForward
case 'd':
r = MetaDelete
case CharTranspose:
r = MetaTranspose
case CharBackspace:
r = MetaBackspace
case 'O':
d, _, _ := reader.ReadRune()
switch d {
case 'H':
r = CharLineStart
case 'F':
r = CharLineEnd
default:
reader.UnreadRune()
}
case CharEsc:
}
return r
}
func SplitByLine(start, screenWidth int, rs []rune) []string {
var ret []string
buf := bytes.NewBuffer(nil)
currentWidth := start
for _, r := range rs {
w := runes.Width(r)
currentWidth += w
buf.WriteRune(r)
if currentWidth >= screenWidth {
ret = append(ret, buf.String())
buf.Reset()
currentWidth = 0
}
}
ret = append(ret, buf.String())
return ret
}
// calculate how many screen lines are needed for w character cells
func LineCount(screenWidth, w int) int {
r := w / screenWidth
if w%screenWidth != 0 {
r++
}
return r
}
func IsWordBreak(i rune) bool {
switch {
case i >= 'a' && i <= 'z':
case i >= 'A' && i <= 'Z':
case i >= '0' && i <= '9':
default:
return true
}
return false
}
func GetInt(s []string, def int) int {
if len(s) == 0 {
return def
}
c, err := strconv.Atoi(s[0])
if err != nil {
return def
}
return c
}
type RawMode struct {
state *State
}
func (r *RawMode) Enter() (err error) {
r.state, err = MakeRaw(GetStdin())
return err
}
func (r *RawMode) Exit() error {
if r.state == nil {
return nil
}
return Restore(GetStdin(), r.state)
}
// -----------------------------------------------------------------------------
func sleep(n int) {
Debug(n)
time.Sleep(2000 * time.Millisecond)
}
// print a linked list to Debug()
func debugList(l *list.List) {
idx := 0
for e := l.Front(); e != nil; e = e.Next() {
Debug(idx, fmt.Sprintf("%+v", e.Value))
idx++
}
}
// Debug appends log output to a separate debug file (debug.tmp)
func Debug(o ...interface{}) {
f, _ := os.OpenFile("debug.tmp", os.O_RDWR|os.O_CREATE|os.O_APPEND, 0666)
fmt.Fprintln(f, o...)
f.Close()
}
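
As a small illustration of the wrapping arithmetic above (a sketch with arbitrary numbers, not part of the vendored file):

package main

import (
	"fmt"

	"github.com/chzyer/readline"
)

func main() {
	// 100 display cells on an 80-column terminal occupy 2 lines.
	fmt.Println(readline.LineCount(80, 100)) // 2

	// Split a rune slice into screen lines, starting 8 cells in (e.g. after a prompt).
	for _, l := range readline.SplitByLine(8, 20, []rune("the quick brown fox jumps")) {
		fmt.Printf("%q\n", l)
	}
}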

90
vendor/github.com/chzyer/readline/utils_unix.go generated vendored Normal file
View File

@ -0,0 +1,90 @@
// +build darwin dragonfly freebsd linux,!appengine netbsd openbsd
package readline
import (
"io"
"os"
"os/signal"
"sync"
"syscall"
"unsafe"
)
type winsize struct {
Row uint16
Col uint16
Xpixel uint16
Ypixel uint16
}
// SuspendMe sends a suspend signal to the current process while in raw mode.
// On OSX the signal needs to go to the parent's pid;
// on Linux it needs to go to the process itself.
func SuspendMe() {
p, _ := os.FindProcess(os.Getppid())
p.Signal(syscall.SIGTSTP)
p, _ = os.FindProcess(os.Getpid())
p.Signal(syscall.SIGTSTP)
}
// get width of the terminal
func getWidth(stdoutFd int) int {
ws := &winsize{}
retCode, _, errno := syscall.Syscall(syscall.SYS_IOCTL,
uintptr(stdoutFd),
uintptr(syscall.TIOCGWINSZ),
uintptr(unsafe.Pointer(ws)))
if int(retCode) == -1 {
_ = errno
return -1
}
return int(ws.Col)
}
func GetScreenWidth() int {
w := getWidth(syscall.Stdout)
if w < 0 {
w = getWidth(syscall.Stderr)
}
return w
}
// ClearScreen clears the console screen
func ClearScreen(w io.Writer) (int, error) {
return w.Write([]byte("\033[H"))
}
func DefaultIsTerminal() bool {
return IsTerminal(syscall.Stdin) && (IsTerminal(syscall.Stdout) || IsTerminal(syscall.Stderr))
}
func GetStdin() int {
return syscall.Stdin
}
// -----------------------------------------------------------------------------
var (
widthChange sync.Once
widthChangeCallback func()
)
func DefaultOnWidthChanged(f func()) {
widthChangeCallback = f
widthChange.Do(func() {
ch := make(chan os.Signal, 1)
signal.Notify(ch, syscall.SIGWINCH)
go func() {
for {
_, ok := <-ch
if !ok {
break
}
widthChangeCallback()
}
}()
})
}

41
vendor/github.com/chzyer/readline/utils_windows.go generated vendored Normal file
View File

@ -0,0 +1,41 @@
// +build windows
package readline
import (
"io"
"syscall"
)
func SuspendMe() {
}
func GetStdin() int {
return int(syscall.Stdin)
}
func init() {
isWindows = true
}
// get width of the terminal
func GetScreenWidth() int {
info, _ := GetConsoleScreenBufferInfo()
if info == nil {
return -1
}
return int(info.dwSize.x)
}
// ClearScreen clears the console screen
func ClearScreen(_ io.Writer) error {
return SetConsoleCursorPosition(&_COORD{0, 0})
}
func DefaultIsTerminal() bool {
return true
}
func DefaultOnWidthChanged(func()) {
}

174
vendor/github.com/chzyer/readline/vim.go generated vendored Normal file
View File

@ -0,0 +1,174 @@
package readline
const (
VIM_NORMAL = iota
VIM_INSERT
VIM_VISUAL
)
type opVim struct {
cfg *Config
op *Operation
vimMode int
}
func newVimMode(op *Operation) *opVim {
ov := &opVim{
cfg: op.cfg,
op: op,
}
ov.SetVimMode(ov.cfg.VimMode)
return ov
}
func (o *opVim) SetVimMode(on bool) {
if o.cfg.VimMode && !on { // turn off
o.ExitVimMode()
}
o.cfg.VimMode = on
o.vimMode = VIM_INSERT
}
func (o *opVim) ExitVimMode() {
o.vimMode = VIM_INSERT
}
func (o *opVim) IsEnableVimMode() bool {
return o.cfg.VimMode
}
func (o *opVim) handleVimNormalMovement(r rune, readNext func() rune) (t rune, handled bool) {
rb := o.op.buf
handled = true
switch r {
case 'h':
t = CharBackward
case 'j':
t = CharNext
case 'k':
t = CharPrev
case 'l':
t = CharForward
case '0', '^':
rb.MoveToLineStart()
case '$':
rb.MoveToLineEnd()
case 'x':
rb.Delete()
if rb.IsCursorInEnd() {
rb.MoveBackward()
}
case 'r':
rb.Replace(readNext())
case 'd':
next := readNext()
switch next {
case 'd':
rb.Erase()
case 'w':
rb.DeleteWord()
case 'h':
rb.Backspace()
case 'l':
rb.Delete()
}
case 'b', 'B':
rb.MoveToPrevWord()
case 'w', 'W':
rb.MoveToNextWord()
case 'e', 'E':
rb.MoveToEndWord()
case 'f', 'F', 't', 'T':
next := readNext()
prevChar := r == 't' || r == 'T'
reverse := r == 'F' || r == 'T'
switch next {
case CharEsc:
default:
rb.MoveTo(next, prevChar, reverse)
}
default:
return r, false
}
return t, true
}
func (o *opVim) handleVimNormalEnterInsert(r rune, readNext func() rune) (t rune, handled bool) {
rb := o.op.buf
handled = true
switch r {
case 'i':
case 'I':
rb.MoveToLineStart()
case 'a':
rb.MoveForward()
case 'A':
rb.MoveToLineEnd()
case 's':
rb.Delete()
case 'S':
rb.Erase()
case 'c':
next := readNext()
switch next {
case 'c':
rb.Erase()
case 'w':
rb.DeleteWord()
case 'h':
rb.Backspace()
case 'l':
rb.Delete()
}
default:
return r, false
}
o.EnterVimInsertMode()
return
}
func (o *opVim) HandleVimNormal(r rune, readNext func() rune) (t rune) {
switch r {
case CharEnter, CharInterrupt:
o.ExitVimMode()
return r
}
if r, handled := o.handleVimNormalMovement(r, readNext); handled {
return r
}
if r, handled := o.handleVimNormalEnterInsert(r, readNext); handled {
return r
}
// invalid operation
o.op.t.Bell()
return 0
}
func (o *opVim) EnterVimInsertMode() {
o.vimMode = VIM_INSERT
}
func (o *opVim) ExitVimInsertMode() {
o.vimMode = VIM_NORMAL
}
func (o *opVim) HandleVim(r rune, readNext func() rune) rune {
if o.vimMode == VIM_NORMAL {
return o.HandleVimNormal(r, readNext)
}
if r == CharEsc {
o.ExitVimInsertMode()
return 0
}
switch o.vimMode {
case VIM_INSERT:
return r
case VIM_VISUAL:
}
return r
}
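
A brief sketch of enabling the vim bindings above through Config.VimMode; the prompt is illustrative and the program is not part of the vendored file.

package main

import "github.com/chzyer/readline"

func main() {
	rl, err := readline.NewEx(&readline.Config{
		Prompt:  "> ",
		VimMode: true, // start in vim insert mode; Esc switches to normal mode
	})
	if err != nil {
		panic(err)
	}
	defer rl.Close()

	// Vim editing can also be toggled at runtime.
	rl.SetVimMode(false)

	for {
		if _, err := rl.Readline(); err != nil {
			break
		}
	}
}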

152
vendor/github.com/chzyer/readline/windows_api.go generated vendored Normal file
View File

@ -0,0 +1,152 @@
// +build windows
package readline
import (
"reflect"
"syscall"
"unsafe"
)
var (
kernel = NewKernel()
stdout = uintptr(syscall.Stdout)
stdin = uintptr(syscall.Stdin)
)
type Kernel struct {
SetConsoleCursorPosition,
SetConsoleTextAttribute,
FillConsoleOutputCharacterW,
FillConsoleOutputAttribute,
ReadConsoleInputW,
GetConsoleScreenBufferInfo,
GetConsoleCursorInfo,
GetStdHandle CallFunc
}
type short int16
type word uint16
type dword uint32
type wchar uint16
type _COORD struct {
x short
y short
}
func (c *_COORD) ptr() uintptr {
return uintptr(*(*int32)(unsafe.Pointer(c)))
}
const (
EVENT_KEY = 0x0001
EVENT_MOUSE = 0x0002
EVENT_WINDOW_BUFFER_SIZE = 0x0004
EVENT_MENU = 0x0008
EVENT_FOCUS = 0x0010
)
type _KEY_EVENT_RECORD struct {
bKeyDown int32
wRepeatCount word
wVirtualKeyCode word
wVirtualScanCode word
unicodeChar wchar
dwControlKeyState dword
}
// KEY_EVENT_RECORD KeyEvent;
// MOUSE_EVENT_RECORD MouseEvent;
// WINDOW_BUFFER_SIZE_RECORD WindowBufferSizeEvent;
// MENU_EVENT_RECORD MenuEvent;
// FOCUS_EVENT_RECORD FocusEvent;
type _INPUT_RECORD struct {
EventType word
Padding uint16
Event [16]byte
}
type _CONSOLE_SCREEN_BUFFER_INFO struct {
dwSize _COORD
dwCursorPosition _COORD
wAttributes word
srWindow _SMALL_RECT
dwMaximumWindowSize _COORD
}
type _SMALL_RECT struct {
left short
top short
right short
bottom short
}
type _CONSOLE_CURSOR_INFO struct {
dwSize dword
bVisible bool
}
type CallFunc func(u ...uintptr) error
func NewKernel() *Kernel {
k := &Kernel{}
kernel32 := syscall.NewLazyDLL("kernel32.dll")
v := reflect.ValueOf(k).Elem()
t := v.Type()
for i := 0; i < t.NumField(); i++ {
name := t.Field(i).Name
f := kernel32.NewProc(name)
v.Field(i).Set(reflect.ValueOf(k.Wrap(f)))
}
return k
}
func (k *Kernel) Wrap(p *syscall.LazyProc) CallFunc {
return func(args ...uintptr) error {
var r0 uintptr
var e1 syscall.Errno
size := uintptr(len(args))
if len(args) <= 3 {
buf := make([]uintptr, 3)
copy(buf, args)
r0, _, e1 = syscall.Syscall(p.Addr(), size,
buf[0], buf[1], buf[2])
} else {
buf := make([]uintptr, 6)
copy(buf, args)
r0, _, e1 = syscall.Syscall6(p.Addr(), size,
buf[0], buf[1], buf[2], buf[3], buf[4], buf[5],
)
}
if int(r0) == 0 {
if e1 != 0 {
return error(e1)
} else {
return syscall.EINVAL
}
}
return nil
}
}
func GetConsoleScreenBufferInfo() (*_CONSOLE_SCREEN_BUFFER_INFO, error) {
t := new(_CONSOLE_SCREEN_BUFFER_INFO)
err := kernel.GetConsoleScreenBufferInfo(
stdout,
uintptr(unsafe.Pointer(t)),
)
return t, err
}
func GetConsoleCursorInfo() (*_CONSOLE_CURSOR_INFO, error) {
t := new(_CONSOLE_CURSOR_INFO)
err := kernel.GetConsoleCursorInfo(stdout, uintptr(unsafe.Pointer(t)))
return t, err
}
func SetConsoleCursorPosition(c *_COORD) error {
return kernel.SetConsoleCursorPosition(stdout, c.ptr())
}

191
vendor/github.com/golang/glog/LICENSE generated vendored Normal file
View File

@ -0,0 +1,191 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction, and
distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by the copyright
owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all other entities
that control, are controlled by, or are under common control with that entity.
For the purposes of this definition, "control" means (i) the power, direct or
indirect, to cause the direction or management of such entity, whether by
contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity exercising
permissions granted by this License.
"Source" form shall mean the preferred form for making modifications, including
but not limited to software source code, documentation source, and configuration
files.
"Object" form shall mean any form resulting from mechanical transformation or
translation of a Source form, including but not limited to compiled object code,
generated documentation, and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or Object form, made
available under the License, as indicated by a copyright notice that is included
in or attached to the work (an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object form, that
is based on (or derived from) the Work and for which the editorial revisions,
annotations, elaborations, or other modifications represent, as a whole, an
original work of authorship. For the purposes of this License, Derivative Works
shall not include works that remain separable from, or merely link (or bind by
name) to the interfaces of, the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including the original version
of the Work and any modifications or additions to that Work or Derivative Works
thereof, that is intentionally submitted to Licensor for inclusion in the Work
by the copyright owner or by an individual or Legal Entity authorized to submit
on behalf of the copyright owner. For the purposes of this definition,
"submitted" means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems, and
issue tracking systems that are managed by, or on behalf of, the Licensor for
the purpose of discussing and improving the Work, but excluding communication
that is conspicuously marked or otherwise designated in writing by the copyright
owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity on behalf
of whom a Contribution has been received by Licensor and subsequently
incorporated within the Work.
2. Grant of Copyright License.
Subject to the terms and conditions of this License, each Contributor hereby
grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free,
irrevocable copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the Work and such
Derivative Works in Source or Object form.
3. Grant of Patent License.
Subject to the terms and conditions of this License, each Contributor hereby
grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free,
irrevocable (except as stated in this section) patent license to make, have
made, use, offer to sell, sell, import, and otherwise transfer the Work, where
such license applies only to those patent claims licensable by such Contributor
that are necessarily infringed by their Contribution(s) alone or by combination
of their Contribution(s) with the Work to which such Contribution(s) was
submitted. If You institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work or a
Contribution incorporated within the Work constitutes direct or contributory
patent infringement, then any patent licenses granted to You under this License
for that Work shall terminate as of the date such litigation is filed.
4. Redistribution.
You may reproduce and distribute copies of the Work or Derivative Works thereof
in any medium, with or without modifications, and in Source or Object form,
provided that You meet the following conditions:
You must give any other recipients of the Work or Derivative Works a copy of
this License; and
You must cause any modified files to carry prominent notices stating that You
changed the files; and
You must retain, in the Source form of any Derivative Works that You distribute,
all copyright, patent, trademark, and attribution notices from the Source form
of the Work, excluding those notices that do not pertain to any part of the
Derivative Works; and
If the Work includes a "NOTICE" text file as part of its distribution, then any
Derivative Works that You distribute must include a readable copy of the
attribution notices contained within such NOTICE file, excluding those notices
that do not pertain to any part of the Derivative Works, in at least one of the
following places: within a NOTICE text file distributed as part of the
Derivative Works; within the Source form or documentation, if provided along
with the Derivative Works; or, within a display generated by the Derivative
Works, if and wherever such third-party notices normally appear. The contents of
the NOTICE file are for informational purposes only and do not modify the
License. You may add Your own attribution notices within Derivative Works that
You distribute, alongside or as an addendum to the NOTICE text from the Work,
provided that such additional attribution notices cannot be construed as
modifying the License.
You may add Your own copyright statement to Your modifications and may provide
additional or different license terms and conditions for use, reproduction, or
distribution of Your modifications, or for any such Derivative Works as a whole,
provided Your use, reproduction, and distribution of the Work otherwise complies
with the conditions stated in this License.
5. Submission of Contributions.
Unless You explicitly state otherwise, any Contribution intentionally submitted
for inclusion in the Work by You to the Licensor shall be under the terms and
conditions of this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify the terms of
any separate license agreement you may have executed with Licensor regarding
such Contributions.
6. Trademarks.
This License does not grant permission to use the trade names, trademarks,
service marks, or product names of the Licensor, except as required for
reasonable and customary use in describing the origin of the Work and
reproducing the content of the NOTICE file.
7. Disclaimer of Warranty.
Unless required by applicable law or agreed to in writing, Licensor provides the
Work (and each Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied,
including, without limitation, any warranties or conditions of TITLE,
NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are
solely responsible for determining the appropriateness of using or
redistributing the Work and assume any risks associated with Your exercise of
permissions under this License.
8. Limitation of Liability.
In no event and under no legal theory, whether in tort (including negligence),
contract, or otherwise, unless required by applicable law (such as deliberate
and grossly negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special, incidental,
or consequential damages of any character arising as a result of this License or
out of the use or inability to use the Work (including but not limited to
damages for loss of goodwill, work stoppage, computer failure or malfunction, or
any and all other commercial damages or losses), even if such Contributor has
been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability.
While redistributing the Work or Derivative Works thereof, You may choose to
offer, and charge a fee for, acceptance of support, warranty, indemnity, or
other liability obligations and/or rights consistent with this License. However,
in accepting such obligations, You may act only on Your own behalf and on Your
sole responsibility, not on behalf of any other Contributor, and only if You
agree to indemnify, defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason of your
accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work
To apply the Apache License to your work, attach the following boilerplate
notice, with the fields enclosed by brackets "[]" replaced with your own
identifying information. (Don't include the brackets!) The text should be
enclosed in the appropriate comment syntax for the file format. We also
recommend that a file or class name and description of purpose be included on
the same "printed page" as the copyright notice for easier identification within
third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

1180
vendor/github.com/golang/glog/glog.go generated vendored Normal file

File diff suppressed because it is too large

124
vendor/github.com/golang/glog/glog_file.go generated vendored Normal file
View File

@ -0,0 +1,124 @@
// Go support for leveled logs, analogous to https://code.google.com/p/google-glog/
//
// Copyright 2013 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// File I/O for logs.
package glog
import (
"errors"
"flag"
"fmt"
"os"
"os/user"
"path/filepath"
"strings"
"sync"
"time"
)
// MaxSize is the maximum size of a log file in bytes.
var MaxSize uint64 = 1024 * 1024 * 1800
// logDirs lists the candidate directories for new log files.
var logDirs []string
// If non-empty, overrides the choice of directory in which to write logs.
// See createLogDirs for the full list of possible destinations.
var logDir = flag.String("log_dir", "", "If non-empty, write log files in this directory")
func createLogDirs() {
if *logDir != "" {
logDirs = append(logDirs, *logDir)
}
logDirs = append(logDirs, os.TempDir())
}
var (
pid = os.Getpid()
program = filepath.Base(os.Args[0])
host = "unknownhost"
userName = "unknownuser"
)
func init() {
h, err := os.Hostname()
if err == nil {
host = shortHostname(h)
}
current, err := user.Current()
if err == nil {
userName = current.Username
}
// Sanitize userName since it may contain filepath separators on Windows.
userName = strings.Replace(userName, `\`, "_", -1)
}
// shortHostname returns its argument, truncating at the first period.
// For instance, given "www.google.com" it returns "www".
func shortHostname(hostname string) string {
if i := strings.Index(hostname, "."); i >= 0 {
return hostname[:i]
}
return hostname
}
// logName returns a new log file name containing tag, with start time t, and
// the name for the symlink for tag.
func logName(tag string, t time.Time) (name, link string) {
name = fmt.Sprintf("%s.%s.%s.log.%s.%04d%02d%02d-%02d%02d%02d.%d",
program,
host,
userName,
tag,
t.Year(),
t.Month(),
t.Day(),
t.Hour(),
t.Minute(),
t.Second(),
pid)
return name, program + "." + tag
}
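// Illustrative only (not part of the vendored file): for a hypothetical
// process "server" running as user "alice" on host "www" with pid 1234,
// logName("INFO", t) for t = 2016-12-03 16:34:58 would yield roughly:
//
//   name: "server.www.alice.log.INFO.20161203-163458.1234"
//   link: "server.INFO"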
var onceLogDirs sync.Once
// create creates a new log file and returns the file and its filename, which
// contains tag ("INFO", "FATAL", etc.) and t. If the file is created
// successfully, create also attempts to update the symlink for that tag, ignoring
// errors.
func create(tag string, t time.Time) (f *os.File, filename string, err error) {
onceLogDirs.Do(createLogDirs)
if len(logDirs) == 0 {
return nil, "", errors.New("log: no log dirs")
}
name, link := logName(tag, t)
var lastErr error
for _, dir := range logDirs {
fname := filepath.Join(dir, name)
f, err := os.Create(fname)
if err == nil {
symlink := filepath.Join(dir, link)
os.Remove(symlink) // ignore err
os.Symlink(name, symlink) // ignore err
return f, fname, nil
}
lastErr = err
}
return nil, "", fmt.Errorf("log: cannot create log: %v", lastErr)
}
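// Illustrative only (not part of the vendored file): a minimal sketch of
// driving create directly, assuming a tag such as "INFO".
//
//   if f, fname, err := create("INFO", time.Now()); err == nil {
//       fmt.Fprintln(f, "opened", fname)
//       f.Close()
//   }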

31
vendor/github.com/golang/protobuf/LICENSE generated vendored Normal file
View File

@ -0,0 +1,31 @@
Go support for Protocol Buffers - Google's data interchange format
Copyright 2010 The Go Authors. All rights reserved.
https://github.com/golang/protobuf
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

View File

@ -0,0 +1,93 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2016 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
// Package descriptor provides functions for obtaining protocol buffer
// descriptors for generated Go types.
//
// These functions cannot go in package proto because they depend on the
// generated protobuf descriptor messages, which themselves depend on proto.
package descriptor
import (
"bytes"
"compress/gzip"
"fmt"
"io/ioutil"
"github.com/golang/protobuf/proto"
protobuf "google.golang.org/genproto/protobuf"
)
// extractFile extracts a FileDescriptorProto from a gzip'd buffer.
func extractFile(gz []byte) (*protobuf.FileDescriptorProto, error) {
r, err := gzip.NewReader(bytes.NewReader(gz))
if err != nil {
return nil, fmt.Errorf("failed to open gzip reader: %v", err)
}
defer r.Close()
b, err := ioutil.ReadAll(r)
if err != nil {
return nil, fmt.Errorf("failed to uncompress descriptor: %v", err)
}
fd := new(protobuf.FileDescriptorProto)
if err := proto.Unmarshal(b, fd); err != nil {
return nil, fmt.Errorf("malformed FileDescriptorProto: %v", err)
}
return fd, nil
}
// Message is a proto.Message with a method to return its descriptor.
//
// Message types generated by the protocol compiler always satisfy
// the Message interface.
type Message interface {
proto.Message
Descriptor() ([]byte, []int)
}
// ForMessage returns a FileDescriptorProto and a DescriptorProto from within it
// describing the given message.
func ForMessage(msg Message) (fd *protobuf.FileDescriptorProto, md *protobuf.DescriptorProto) {
gz, path := msg.Descriptor()
fd, err := extractFile(gz)
if err != nil {
panic(fmt.Sprintf("invalid FileDescriptorProto for %T: %v", msg, err))
}
md = fd.MessageType[path[0]]
for _, i := range path[1:] {
md = md.NestedType[i]
}
return fd, md
}
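// Illustrative only (not part of the vendored file): a minimal sketch of
// looking up descriptors for a hypothetical generated type pb.MyMessage.
//
//   fd, md := ForMessage(&pb.MyMessage{})
//   fmt.Println(fd.GetName(), md.GetName()) // .proto file name and message name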

843
vendor/github.com/golang/protobuf/jsonpb/jsonpb.go generated vendored Normal file
View File

@ -0,0 +1,843 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2015 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
/*
Package jsonpb provides marshaling and unmarshaling between protocol buffers and JSON.
It follows the specification at https://developers.google.com/protocol-buffers/docs/proto3#json.
This package produces a different output than the standard "encoding/json" package,
which does not operate correctly on protocol buffers.
*/
package jsonpb
import (
"bytes"
"encoding/json"
"errors"
"fmt"
"io"
"reflect"
"sort"
"strconv"
"strings"
"time"
"github.com/golang/protobuf/proto"
)
// Marshaler is a configurable object for converting between
// protocol buffer objects and a JSON representation for them.
type Marshaler struct {
// Whether to render enum values as integers, as opposed to string values.
EnumsAsInts bool
// Whether to render fields with zero values.
EmitDefaults bool
// A string to indent each level by. The presence of this field will
// also cause a space to appear between the field separator and
// value, and for newlines to appear between fields and array
// elements.
Indent string
// Whether to use the original (.proto) name for fields.
OrigName bool
}
// Marshal marshals a protocol buffer into JSON.
func (m *Marshaler) Marshal(out io.Writer, pb proto.Message) error {
writer := &errWriter{writer: out}
return m.marshalObject(writer, pb, "", "")
}
// MarshalToString converts a protocol buffer object to JSON string.
func (m *Marshaler) MarshalToString(pb proto.Message) (string, error) {
var buf bytes.Buffer
if err := m.Marshal(&buf, pb); err != nil {
return "", err
}
return buf.String(), nil
}
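// Illustrative only (not part of the vendored file): a minimal sketch of
// marshaling a hypothetical generated message pb.MyMessage to indented JSON.
//
//   m := &Marshaler{Indent: "  ", EmitDefaults: true}
//   s, err := m.MarshalToString(msg) // msg is a *pb.MyMessage
//   if err == nil {
//       fmt.Println(s)
//   }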
type int32Slice []int32
// For sorting extension ids to ensure stable output.
func (s int32Slice) Len() int { return len(s) }
func (s int32Slice) Less(i, j int) bool { return s[i] < s[j] }
func (s int32Slice) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
type wkt interface {
XXX_WellKnownType() string
}
// marshalObject writes a struct to the Writer.
func (m *Marshaler) marshalObject(out *errWriter, v proto.Message, indent, typeURL string) error {
s := reflect.ValueOf(v).Elem()
// Handle well-known types.
if wkt, ok := v.(wkt); ok {
switch wkt.XXX_WellKnownType() {
case "DoubleValue", "FloatValue", "Int64Value", "UInt64Value",
"Int32Value", "UInt32Value", "BoolValue", "StringValue", "BytesValue":
// "Wrappers use the same representation in JSON
// as the wrapped primitive type, ..."
sprop := proto.GetProperties(s.Type())
return m.marshalValue(out, sprop.Prop[0], s.Field(0), indent)
case "Any":
// Any is a bit more involved.
return m.marshalAny(out, v, indent)
case "Duration":
// "Generated output always contains 3, 6, or 9 fractional digits,
// depending on required precision."
s, ns := s.Field(0).Int(), s.Field(1).Int()
d := time.Duration(s)*time.Second + time.Duration(ns)*time.Nanosecond
x := fmt.Sprintf("%.9f", d.Seconds())
x = strings.TrimSuffix(x, "000")
x = strings.TrimSuffix(x, "000")
out.write(`"`)
out.write(x)
out.write(`s"`)
return out.err
case "Struct":
// Let marshalValue handle the `fields` map.
// TODO: pass the correct Properties if needed.
return m.marshalValue(out, &proto.Properties{}, s.Field(0), indent)
case "Timestamp":
// "RFC 3339, where generated output will always be Z-normalized
// and uses 3, 6 or 9 fractional digits."
s, ns := s.Field(0).Int(), s.Field(1).Int()
t := time.Unix(s, ns).UTC()
// time.RFC3339Nano isn't exactly right (we need to get 3/6/9 fractional digits).
x := t.Format("2006-01-02T15:04:05.000000000")
x = strings.TrimSuffix(x, "000")
x = strings.TrimSuffix(x, "000")
out.write(`"`)
out.write(x)
out.write(`Z"`)
return out.err
case "Value":
// Value has a single oneof.
kind := s.Field(0)
if kind.IsNil() {
// "absence of any variant indicates an error"
return errors.New("nil Value")
}
// oneof -> *T -> T -> T.F
x := kind.Elem().Elem().Field(0)
// TODO: pass the correct Properties if needed.
return m.marshalValue(out, &proto.Properties{}, x, indent)
}
}
out.write("{")
if m.Indent != "" {
out.write("\n")
}
firstField := true
if typeURL != "" {
if err := m.marshalTypeURL(out, indent, typeURL); err != nil {
return err
}
firstField = false
}
for i := 0; i < s.NumField(); i++ {
value := s.Field(i)
valueField := s.Type().Field(i)
if strings.HasPrefix(valueField.Name, "XXX_") {
continue
}
// IsNil will panic on most value kinds.
switch value.Kind() {
case reflect.Chan, reflect.Func, reflect.Interface, reflect.Map, reflect.Ptr, reflect.Slice:
if value.IsNil() {
continue
}
}
if !m.EmitDefaults {
switch value.Kind() {
case reflect.Bool:
if !value.Bool() {
continue
}
case reflect.Int32, reflect.Int64:
if value.Int() == 0 {
continue
}
case reflect.Uint32, reflect.Uint64:
if value.Uint() == 0 {
continue
}
case reflect.Float32, reflect.Float64:
if value.Float() == 0 {
continue
}
case reflect.String:
if value.Len() == 0 {
continue
}
}
}
// Oneof fields need special handling.
if valueField.Tag.Get("protobuf_oneof") != "" {
// value is an interface containing &T{real_value}.
sv := value.Elem().Elem() // interface -> *T -> T
value = sv.Field(0)
valueField = sv.Type().Field(0)
}
prop := jsonProperties(valueField, m.OrigName)
if !firstField {
m.writeSep(out)
}
if err := m.marshalField(out, prop, value, indent); err != nil {
return err
}
firstField = false
}
// Handle proto2 extensions.
if ep, ok := v.(proto.Message); ok {
extensions := proto.RegisteredExtensions(v)
// Sort extensions for stable output.
ids := make([]int32, 0, len(extensions))
for id, desc := range extensions {
if !proto.HasExtension(ep, desc) {
continue
}
ids = append(ids, id)
}
sort.Sort(int32Slice(ids))
for _, id := range ids {
desc := extensions[id]
if desc == nil {
// unknown extension
continue
}
ext, extErr := proto.GetExtension(ep, desc)
if extErr != nil {
return extErr
}
value := reflect.ValueOf(ext)
var prop proto.Properties
prop.Parse(desc.Tag)
prop.JSONName = fmt.Sprintf("[%s]", desc.Name)
if !firstField {
m.writeSep(out)
}
if err := m.marshalField(out, &prop, value, indent); err != nil {
return err
}
firstField = false
}
}
if m.Indent != "" {
out.write("\n")
out.write(indent)
}
out.write("}")
return out.err
}
func (m *Marshaler) writeSep(out *errWriter) {
if m.Indent != "" {
out.write(",\n")
} else {
out.write(",")
}
}
func (m *Marshaler) marshalAny(out *errWriter, any proto.Message, indent string) error {
// "If the Any contains a value that has a special JSON mapping,
// it will be converted as follows: {"@type": xxx, "value": yyy}.
// Otherwise, the value will be converted into a JSON object,
// and the "@type" field will be inserted to indicate the actual data type."
v := reflect.ValueOf(any).Elem()
turl := v.Field(0).String()
val := v.Field(1).Bytes()
// Only the part of type_url after the last slash is relevant.
mname := turl
if slash := strings.LastIndex(mname, "/"); slash >= 0 {
mname = mname[slash+1:]
}
mt := proto.MessageType(mname)
if mt == nil {
return fmt.Errorf("unknown message type %q", mname)
}
msg := reflect.New(mt.Elem()).Interface().(proto.Message)
if err := proto.Unmarshal(val, msg); err != nil {
return err
}
if _, ok := msg.(wkt); ok {
out.write("{")
if m.Indent != "" {
out.write("\n")
}
if err := m.marshalTypeURL(out, indent, turl); err != nil {
return err
}
m.writeSep(out)
if m.Indent != "" {
out.write(indent)
out.write(m.Indent)
out.write(`"value": `)
} else {
out.write(`"value":`)
}
if err := m.marshalObject(out, msg, indent+m.Indent, ""); err != nil {
return err
}
if m.Indent != "" {
out.write("\n")
out.write(indent)
}
out.write("}")
return out.err
}
return m.marshalObject(out, msg, indent, turl)
}
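// Illustrative only (not part of the vendored file): per the proto3 JSON
// mapping, an Any wrapping a message with a special JSON form (such as
// Duration) is rendered with an explicit "value" field, e.g.
//
//   {"@type": "type.googleapis.com/google.protobuf.Duration", "value": "1.212s"}
//
// while an Any wrapping an ordinary message is rendered as that message's
// JSON object with an extra "@type" field, which is what the code above does.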
func (m *Marshaler) marshalTypeURL(out *errWriter, indent, typeURL string) error {
if m.Indent != "" {
out.write(indent)
out.write(m.Indent)
}
out.write(`"@type":`)
if m.Indent != "" {
out.write(" ")
}
b, err := json.Marshal(typeURL)
if err != nil {
return err
}
out.write(string(b))
return out.err
}
// marshalField writes field description and value to the Writer.
func (m *Marshaler) marshalField(out *errWriter, prop *proto.Properties, v reflect.Value, indent string) error {
if m.Indent != "" {
out.write(indent)
out.write(m.Indent)
}
out.write(`"`)
out.write(prop.JSONName)
out.write(`":`)
if m.Indent != "" {
out.write(" ")
}
if err := m.marshalValue(out, prop, v, indent); err != nil {
return err
}
return nil
}
// marshalValue writes the value to the Writer.
func (m *Marshaler) marshalValue(out *errWriter, prop *proto.Properties, v reflect.Value, indent string) error {
var err error
v = reflect.Indirect(v)
// Handle repeated elements.
if v.Kind() == reflect.Slice && v.Type().Elem().Kind() != reflect.Uint8 {
out.write("[")
comma := ""
for i := 0; i < v.Len(); i++ {
sliceVal := v.Index(i)
out.write(comma)
if m.Indent != "" {
out.write("\n")
out.write(indent)
out.write(m.Indent)
out.write(m.Indent)
}
if err := m.marshalValue(out, prop, sliceVal, indent+m.Indent); err != nil {
return err
}
comma = ","
}
if m.Indent != "" {
out.write("\n")
out.write(indent)
out.write(m.Indent)
}
out.write("]")
return out.err
}
// Handle well-known types.
// Most are handled up in marshalObject (because 99% are messages).
type wkt interface {
XXX_WellKnownType() string
}
if wkt, ok := v.Interface().(wkt); ok {
switch wkt.XXX_WellKnownType() {
case "NullValue":
out.write("null")
return out.err
}
}
// Handle enumerations.
if !m.EnumsAsInts && prop.Enum != "" {
// Unknown enum values are stringified by the proto library as their
// value. Such values should _not_ be quoted or they will be interpreted
// as an enum string instead of their value.
enumStr := v.Interface().(fmt.Stringer).String()
var valStr string
if v.Kind() == reflect.Ptr {
valStr = strconv.Itoa(int(v.Elem().Int()))
} else {
valStr = strconv.Itoa(int(v.Int()))
}
isKnownEnum := enumStr != valStr
if isKnownEnum {
out.write(`"`)
}
out.write(enumStr)
if isKnownEnum {
out.write(`"`)
}
return out.err
}
// Handle nested messages.
if v.Kind() == reflect.Struct {
return m.marshalObject(out, v.Addr().Interface().(proto.Message), indent+m.Indent, "")
}
// Handle maps.
// Since Go randomizes map iteration, we sort keys for stable output.
if v.Kind() == reflect.Map {
out.write(`{`)
keys := v.MapKeys()
sort.Sort(mapKeys(keys))
for i, k := range keys {
if i > 0 {
out.write(`,`)
}
if m.Indent != "" {
out.write("\n")
out.write(indent)
out.write(m.Indent)
out.write(m.Indent)
}
b, err := json.Marshal(k.Interface())
if err != nil {
return err
}
s := string(b)
// If the JSON is not a string value, encode it again to make it one.
if !strings.HasPrefix(s, `"`) {
b, err := json.Marshal(s)
if err != nil {
return err
}
s = string(b)
}
out.write(s)
out.write(`:`)
if m.Indent != "" {
out.write(` `)
}
if err := m.marshalValue(out, prop, v.MapIndex(k), indent+m.Indent); err != nil {
return err
}
}
if m.Indent != "" {
out.write("\n")
out.write(indent)
out.write(m.Indent)
}
out.write(`}`)
return out.err
}
// Default handling defers to the encoding/json library.
b, err := json.Marshal(v.Interface())
if err != nil {
return err
}
needToQuote := string(b[0]) != `"` && (v.Kind() == reflect.Int64 || v.Kind() == reflect.Uint64)
if needToQuote {
out.write(`"`)
}
out.write(string(b))
if needToQuote {
out.write(`"`)
}
return out.err
}
// Unmarshaler is a configurable object for converting from a JSON
// representation to a protocol buffer object.
type Unmarshaler struct {
// Whether to allow messages to contain unknown fields, as opposed to
// failing to unmarshal.
AllowUnknownFields bool
}
// UnmarshalNext unmarshals the next protocol buffer from a JSON object stream.
// This function is lenient and will decode any option permutations of the
// related Marshaler.
func (u *Unmarshaler) UnmarshalNext(dec *json.Decoder, pb proto.Message) error {
inputValue := json.RawMessage{}
if err := dec.Decode(&inputValue); err != nil {
return err
}
return u.unmarshalValue(reflect.ValueOf(pb).Elem(), inputValue, nil)
}
// Unmarshal unmarshals a JSON object stream into a protocol
// buffer. This function is lenient and will decode any option
// permutations of the related Marshaler.
func (u *Unmarshaler) Unmarshal(r io.Reader, pb proto.Message) error {
dec := json.NewDecoder(r)
return u.UnmarshalNext(dec, pb)
}
// UnmarshalNext unmarshals the next protocol buffer from a JSON object stream.
// This function is lenient and will decode any option permutations of the
// related Marshaler.
func UnmarshalNext(dec *json.Decoder, pb proto.Message) error {
return new(Unmarshaler).UnmarshalNext(dec, pb)
}
// Unmarshal unmarshals a JSON object stream into a protocol
// buffer. This function is lenient and will decode any option
// permutations of the related Marshaler.
func Unmarshal(r io.Reader, pb proto.Message) error {
return new(Unmarshaler).Unmarshal(r, pb)
}
// UnmarshalString will populate the fields of a protocol buffer based
// on a JSON string. This function is lenient and will decode any option
// permutations of the related Marshaler.
func UnmarshalString(str string, pb proto.Message) error {
return new(Unmarshaler).Unmarshal(strings.NewReader(str), pb)
}
// unmarshalValue converts/copies a value into the target.
// prop may be nil.
func (u *Unmarshaler) unmarshalValue(target reflect.Value, inputValue json.RawMessage, prop *proto.Properties) error {
targetType := target.Type()
// Allocate memory for pointer fields.
if targetType.Kind() == reflect.Ptr {
target.Set(reflect.New(targetType.Elem()))
return u.unmarshalValue(target.Elem(), inputValue, prop)
}
// Handle well-known types.
type wkt interface {
XXX_WellKnownType() string
}
if wkt, ok := target.Addr().Interface().(wkt); ok {
switch wkt.XXX_WellKnownType() {
case "DoubleValue", "FloatValue", "Int64Value", "UInt64Value",
"Int32Value", "UInt32Value", "BoolValue", "StringValue", "BytesValue":
// "Wrappers use the same representation in JSON
// as the wrapped primitive type, except that null is allowed."
// encoding/json will turn JSON `null` into Go `nil`,
// so we don't have to do any extra work.
return u.unmarshalValue(target.Field(0), inputValue, prop)
case "Any":
return fmt.Errorf("unmarshaling Any not supported yet")
case "Duration":
ivStr := string(inputValue)
if ivStr == "null" {
target.Field(0).SetInt(0)
target.Field(1).SetInt(0)
return nil
}
unq, err := strconv.Unquote(ivStr)
if err != nil {
return err
}
d, err := time.ParseDuration(unq)
if err != nil {
return fmt.Errorf("bad Duration: %v", err)
}
ns := d.Nanoseconds()
s := ns / 1e9
ns %= 1e9
target.Field(0).SetInt(s)
target.Field(1).SetInt(ns)
return nil
case "Timestamp":
ivStr := string(inputValue)
if ivStr == "null" {
target.Field(0).SetInt(0)
target.Field(1).SetInt(0)
return nil
}
unq, err := strconv.Unquote(ivStr)
if err != nil {
return err
}
t, err := time.Parse(time.RFC3339Nano, unq)
if err != nil {
return fmt.Errorf("bad Timestamp: %v", err)
}
target.Field(0).SetInt(int64(t.Unix()))
target.Field(1).SetInt(int64(t.Nanosecond()))
return nil
}
}
// Handle enums, which have an underlying type of int32,
// and may appear as strings.
// The case of an enum appearing as a number is handled
// at the bottom of this function.
if inputValue[0] == '"' && prop != nil && prop.Enum != "" {
vmap := proto.EnumValueMap(prop.Enum)
// Don't need to do unquoting; valid enum names
// are from a limited character set.
s := inputValue[1 : len(inputValue)-1]
n, ok := vmap[string(s)]
if !ok {
return fmt.Errorf("unknown value %q for enum %s", s, prop.Enum)
}
if target.Kind() == reflect.Ptr { // proto2
target.Set(reflect.New(targetType.Elem()))
target = target.Elem()
}
target.SetInt(int64(n))
return nil
}
// Handle nested messages.
if targetType.Kind() == reflect.Struct {
var jsonFields map[string]json.RawMessage
if err := json.Unmarshal(inputValue, &jsonFields); err != nil {
return err
}
consumeField := func(prop *proto.Properties) (json.RawMessage, bool) {
// Be liberal in what names we accept; both orig_name and camelName are okay.
fieldNames := acceptedJSONFieldNames(prop)
vOrig, okOrig := jsonFields[fieldNames.orig]
vCamel, okCamel := jsonFields[fieldNames.camel]
if !okOrig && !okCamel {
return nil, false
}
// If, for some reason, both are present in the data, favour the camelName.
var raw json.RawMessage
if okOrig {
raw = vOrig
delete(jsonFields, fieldNames.orig)
}
if okCamel {
raw = vCamel
delete(jsonFields, fieldNames.camel)
}
return raw, true
}
sprops := proto.GetProperties(targetType)
for i := 0; i < target.NumField(); i++ {
ft := target.Type().Field(i)
if strings.HasPrefix(ft.Name, "XXX_") {
continue
}
valueForField, ok := consumeField(sprops.Prop[i])
if !ok {
continue
}
if err := u.unmarshalValue(target.Field(i), valueForField, sprops.Prop[i]); err != nil {
return err
}
}
// Check for any oneof fields.
if len(jsonFields) > 0 {
for _, oop := range sprops.OneofTypes {
raw, ok := consumeField(oop.Prop)
if !ok {
continue
}
nv := reflect.New(oop.Type.Elem())
target.Field(oop.Field).Set(nv)
if err := u.unmarshalValue(nv.Elem().Field(0), raw, oop.Prop); err != nil {
return err
}
}
}
if !u.AllowUnknownFields && len(jsonFields) > 0 {
// Pick any field to be the scapegoat.
var f string
for fname := range jsonFields {
f = fname
break
}
return fmt.Errorf("unknown field %q in %v", f, targetType)
}
return nil
}
// Handle arrays (which aren't encoded bytes)
if targetType.Kind() == reflect.Slice && targetType.Elem().Kind() != reflect.Uint8 {
var slc []json.RawMessage
if err := json.Unmarshal(inputValue, &slc); err != nil {
return err
}
len := len(slc)
target.Set(reflect.MakeSlice(targetType, len, len))
for i := 0; i < len; i++ {
if err := u.unmarshalValue(target.Index(i), slc[i], prop); err != nil {
return err
}
}
return nil
}
// Handle maps (whose keys are always strings)
if targetType.Kind() == reflect.Map {
var mp map[string]json.RawMessage
if err := json.Unmarshal(inputValue, &mp); err != nil {
return err
}
target.Set(reflect.MakeMap(targetType))
var keyprop, valprop *proto.Properties
if prop != nil {
// These could still be nil if the protobuf metadata is broken somehow.
// TODO: This won't work because the fields are unexported.
// We should probably just reparse them.
//keyprop, valprop = prop.mkeyprop, prop.mvalprop
}
for ks, raw := range mp {
// Unmarshal map key. The core json library already decoded the key into a
// string, so we handle that specially. Other types were quoted post-serialization.
var k reflect.Value
if targetType.Key().Kind() == reflect.String {
k = reflect.ValueOf(ks)
} else {
k = reflect.New(targetType.Key()).Elem()
if err := u.unmarshalValue(k, json.RawMessage(ks), keyprop); err != nil {
return err
}
}
// Unmarshal map value.
v := reflect.New(targetType.Elem()).Elem()
if err := u.unmarshalValue(v, raw, valprop); err != nil {
return err
}
target.SetMapIndex(k, v)
}
return nil
}
// 64-bit integers can be encoded as strings. In this case we drop
// the quotes and proceed as normal.
isNum := targetType.Kind() == reflect.Int64 || targetType.Kind() == reflect.Uint64
if isNum && strings.HasPrefix(string(inputValue), `"`) {
inputValue = inputValue[1 : len(inputValue)-1]
}
// Use the encoding/json for parsing other value types.
return json.Unmarshal(inputValue, target.Addr().Interface())
}
// jsonProperties returns parsed proto.Properties for the field and corrects the JSONName attribute.
func jsonProperties(f reflect.StructField, origName bool) *proto.Properties {
var prop proto.Properties
prop.Init(f.Type, f.Name, f.Tag.Get("protobuf"), &f)
if origName || prop.JSONName == "" {
prop.JSONName = prop.OrigName
}
return &prop
}
type fieldNames struct {
orig, camel string
}
func acceptedJSONFieldNames(prop *proto.Properties) fieldNames {
opts := fieldNames{orig: prop.OrigName, camel: prop.OrigName}
if prop.JSONName != "" {
opts.camel = prop.JSONName
}
return opts
}
// Writer wrapper inspired by https://blog.golang.org/errors-are-values
type errWriter struct {
writer io.Writer
err error
}
func (w *errWriter) write(str string) {
if w.err != nil {
return
}
_, w.err = w.writer.Write([]byte(str))
}
// Map fields may have key types of non-float scalars, strings and enums.
// The easiest way to sort them in some deterministic order is to use fmt.
// If this turns out to be inefficient we can always consider other options,
// such as doing a Schwartzian transform.
//
// Numeric keys are sorted in numeric order per
// https://developers.google.com/protocol-buffers/docs/proto#maps.
type mapKeys []reflect.Value
func (s mapKeys) Len() int { return len(s) }
func (s mapKeys) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
func (s mapKeys) Less(i, j int) bool {
if k := s[i].Kind(); k == s[j].Kind() {
switch k {
case reflect.Int32, reflect.Int64:
return s[i].Int() < s[j].Int()
case reflect.Uint32, reflect.Uint64:
return s[i].Uint() < s[j].Uint()
}
}
return fmt.Sprint(s[i].Interface()) < fmt.Sprint(s[j].Interface())
}
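// Illustrative only (not part of the vendored file): with this ordering,
// int64 keys such as 2 and 10 sort numerically (2 before 10), whereas the
// fmt-based fallback would sort them lexically ("10" before "2"); string and
// bool keys fall through to the fmt comparison.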

View File

@ -0,0 +1,207 @@
// Code generated by protoc-gen-go.
// source: more_test_objects.proto
// DO NOT EDIT!
/*
Package jsonpb is a generated protocol buffer package.
It is generated from these files:
more_test_objects.proto
test_objects.proto
It has these top-level messages:
Simple3
Mappy
Simple
Repeats
Widget
Maps
MsgWithOneof
Real
Complex
KnownTypes
*/
package jsonpb
import proto "github.com/golang/protobuf/proto"
import fmt "fmt"
import math "math"
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package
type Numeral int32
const (
Numeral_UNKNOWN Numeral = 0
Numeral_ARABIC Numeral = 1
Numeral_ROMAN Numeral = 2
)
var Numeral_name = map[int32]string{
0: "UNKNOWN",
1: "ARABIC",
2: "ROMAN",
}
var Numeral_value = map[string]int32{
"UNKNOWN": 0,
"ARABIC": 1,
"ROMAN": 2,
}
func (x Numeral) String() string {
return proto.EnumName(Numeral_name, int32(x))
}
func (Numeral) EnumDescriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
type Simple3 struct {
Dub float64 `protobuf:"fixed64,1,opt,name=dub" json:"dub,omitempty"`
}
func (m *Simple3) Reset() { *m = Simple3{} }
func (m *Simple3) String() string { return proto.CompactTextString(m) }
func (*Simple3) ProtoMessage() {}
func (*Simple3) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
func (m *Simple3) GetDub() float64 {
if m != nil {
return m.Dub
}
return 0
}
type Mappy struct {
Nummy map[int64]int32 `protobuf:"bytes,1,rep,name=nummy" json:"nummy,omitempty" protobuf_key:"varint,1,opt,name=key" protobuf_val:"varint,2,opt,name=value"`
Strry map[string]string `protobuf:"bytes,2,rep,name=strry" json:"strry,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
Objjy map[int32]*Simple3 `protobuf:"bytes,3,rep,name=objjy" json:"objjy,omitempty" protobuf_key:"varint,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
Buggy map[int64]string `protobuf:"bytes,4,rep,name=buggy" json:"buggy,omitempty" protobuf_key:"varint,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
Booly map[bool]bool `protobuf:"bytes,5,rep,name=booly" json:"booly,omitempty" protobuf_key:"varint,1,opt,name=key" protobuf_val:"varint,2,opt,name=value"`
Enumy map[string]Numeral `protobuf:"bytes,6,rep,name=enumy" json:"enumy,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"varint,2,opt,name=value,enum=jsonpb.Numeral"`
S32Booly map[int32]bool `protobuf:"bytes,7,rep,name=s32booly" json:"s32booly,omitempty" protobuf_key:"varint,1,opt,name=key" protobuf_val:"varint,2,opt,name=value"`
S64Booly map[int64]bool `protobuf:"bytes,8,rep,name=s64booly" json:"s64booly,omitempty" protobuf_key:"varint,1,opt,name=key" protobuf_val:"varint,2,opt,name=value"`
U32Booly map[uint32]bool `protobuf:"bytes,9,rep,name=u32booly" json:"u32booly,omitempty" protobuf_key:"varint,1,opt,name=key" protobuf_val:"varint,2,opt,name=value"`
U64Booly map[uint64]bool `protobuf:"bytes,10,rep,name=u64booly" json:"u64booly,omitempty" protobuf_key:"varint,1,opt,name=key" protobuf_val:"varint,2,opt,name=value"`
}
func (m *Mappy) Reset() { *m = Mappy{} }
func (m *Mappy) String() string { return proto.CompactTextString(m) }
func (*Mappy) ProtoMessage() {}
func (*Mappy) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{1} }
func (m *Mappy) GetNummy() map[int64]int32 {
if m != nil {
return m.Nummy
}
return nil
}
func (m *Mappy) GetStrry() map[string]string {
if m != nil {
return m.Strry
}
return nil
}
func (m *Mappy) GetObjjy() map[int32]*Simple3 {
if m != nil {
return m.Objjy
}
return nil
}
func (m *Mappy) GetBuggy() map[int64]string {
if m != nil {
return m.Buggy
}
return nil
}
func (m *Mappy) GetBooly() map[bool]bool {
if m != nil {
return m.Booly
}
return nil
}
func (m *Mappy) GetEnumy() map[string]Numeral {
if m != nil {
return m.Enumy
}
return nil
}
func (m *Mappy) GetS32Booly() map[int32]bool {
if m != nil {
return m.S32Booly
}
return nil
}
func (m *Mappy) GetS64Booly() map[int64]bool {
if m != nil {
return m.S64Booly
}
return nil
}
func (m *Mappy) GetU32Booly() map[uint32]bool {
if m != nil {
return m.U32Booly
}
return nil
}
func (m *Mappy) GetU64Booly() map[uint64]bool {
if m != nil {
return m.U64Booly
}
return nil
}
func init() {
proto.RegisterType((*Simple3)(nil), "jsonpb.Simple3")
proto.RegisterType((*Mappy)(nil), "jsonpb.Mappy")
proto.RegisterEnum("jsonpb.Numeral", Numeral_name, Numeral_value)
}
func init() { proto.RegisterFile("more_test_objects.proto", fileDescriptor0) }
var fileDescriptor0 = []byte{
// 444 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0x8c, 0x94, 0xc1, 0x6b, 0xdb, 0x30,
0x14, 0x87, 0xe7, 0xa4, 0x4e, 0xec, 0x17, 0xba, 0x19, 0x31, 0x98, 0x58, 0x2f, 0xa1, 0x30, 0x08,
0x83, 0xf9, 0x90, 0x8c, 0xad, 0x6c, 0xa7, 0x74, 0xf4, 0x50, 0x46, 0x1d, 0x70, 0x09, 0x3b, 0x96,
0x78, 0x13, 0x65, 0x9e, 0x6d, 0x19, 0xdb, 0x1a, 0xe8, 0x8f, 0x1f, 0x8c, 0x27, 0xcb, 0xb5, 0x6c,
0x14, 0xd2, 0x9b, 0xcc, 0xef, 0xfb, 0xf2, 0x9e, 0xf4, 0x1e, 0x81, 0x37, 0x39, 0xaf, 0xd8, 0x43,
0xc3, 0xea, 0xe6, 0x81, 0x27, 0x29, 0xfb, 0xd9, 0xd4, 0x61, 0x59, 0xf1, 0x86, 0x93, 0x59, 0x5a,
0xf3, 0xa2, 0x4c, 0x2e, 0x2f, 0x60, 0x7e, 0xff, 0x3b, 0x2f, 0x33, 0xb6, 0x21, 0x01, 0x4c, 0x7f,
0x89, 0x84, 0x3a, 0x4b, 0x67, 0xe5, 0xc4, 0x78, 0xbc, 0xfc, 0xe7, 0x81, 0x7b, 0x77, 0x28, 0x4b,
0x49, 0x42, 0x70, 0x0b, 0x91, 0xe7, 0x92, 0x3a, 0xcb, 0xe9, 0x6a, 0xb1, 0xa6, 0x61, 0xab, 0x87,
0x2a, 0x0d, 0x23, 0x8c, 0x6e, 0x8a, 0xa6, 0x92, 0x71, 0x8b, 0x21, 0x5f, 0x37, 0x55, 0x25, 0xe9,
0xc4, 0xc6, 0xdf, 0x63, 0xa4, 0x79, 0x85, 0x21, 0xcf, 0x93, 0x34, 0x95, 0x74, 0x6a, 0xe3, 0x77,
0x18, 0x69, 0x5e, 0x61, 0xc8, 0x27, 0xe2, 0xf1, 0x51, 0xd2, 0x33, 0x1b, 0x7f, 0x8d, 0x91, 0xe6,
0x15, 0xa6, 0x78, 0xce, 0x33, 0x49, 0x5d, 0x2b, 0x8f, 0x51, 0xc7, 0xe3, 0x19, 0x79, 0x56, 0x88,
0x5c, 0xd2, 0x99, 0x8d, 0xbf, 0xc1, 0x48, 0xf3, 0x0a, 0x23, 0x9f, 0xc1, 0xab, 0x37, 0xeb, 0xb6,
0xc4, 0x5c, 0x29, 0x17, 0xa3, 0x2b, 0xeb, 0xb4, 0xb5, 0x9e, 0x60, 0x25, 0x7e, 0xfa, 0xd8, 0x8a,
0x9e, 0x55, 0xd4, 0x69, 0x27, 0xea, 0x4f, 0x14, 0x45, 0x57, 0xd1, 0xb7, 0x89, 0xfb, 0x61, 0x45,
0x61, 0x54, 0x14, 0x5d, 0x45, 0xb0, 0x8a, 0xc3, 0x8a, 0x1d, 0xfc, 0xf6, 0x0a, 0xa0, 0x1f, 0x34,
0x6e, 0xcb, 0x1f, 0x26, 0xd5, 0xb6, 0x4c, 0x63, 0x3c, 0x92, 0xd7, 0xe0, 0xfe, 0x3d, 0x64, 0x82,
0xd1, 0xc9, 0xd2, 0x59, 0xb9, 0x71, 0xfb, 0xf1, 0x65, 0x72, 0xe5, 0xa0, 0xd9, 0x8f, 0xdc, 0x34,
0x7d, 0x8b, 0xe9, 0x9b, 0xe6, 0x2d, 0x40, 0x3f, 0x7c, 0xd3, 0x74, 0x5b, 0xf3, 0x9d, 0x69, 0x2e,
0xd6, 0xaf, 0xba, 0x9b, 0xe8, 0x9d, 0x1e, 0x35, 0xd1, 0xef, 0xc5, 0xa9, 0xf6, 0xfd, 0xb1, 0xf9,
0xf4, 0x20, 0xa6, 0xe9, 0x59, 0x4c, 0x6f, 0xd4, 0x7e, 0xbf, 0x2b, 0x96, 0x8b, 0x0f, 0xda, 0x7f,
0xd9, 0xb7, 0x1f, 0x89, 0x9c, 0x55, 0x87, 0xcc, 0xfc, 0xa9, 0xaf, 0x70, 0x3e, 0xd8, 0x21, 0xcb,
0x63, 0x1c, 0xef, 0x03, 0x65, 0x73, 0xaa, 0xa7, 0xae, 0x3f, 0x96, 0xf7, 0xc7, 0x2a, 0x9f, 0x3f,
0x47, 0x3e, 0x56, 0xf9, 0xec, 0x84, 0xfc, 0xfe, 0x03, 0xcc, 0xf5, 0x4b, 0x90, 0x05, 0xcc, 0xf7,
0xd1, 0xf7, 0x68, 0xf7, 0x23, 0x0a, 0x5e, 0x10, 0x80, 0xd9, 0x36, 0xde, 0x5e, 0xdf, 0x7e, 0x0b,
0x1c, 0xe2, 0x83, 0x1b, 0xef, 0xee, 0xb6, 0x51, 0x30, 0x49, 0x66, 0xea, 0xaf, 0x6d, 0xf3, 0x3f,
0x00, 0x00, 0xff, 0xff, 0xa2, 0x4b, 0xe1, 0x77, 0xf5, 0x04, 0x00, 0x00,
}

View File

@ -0,0 +1,771 @@
// Code generated by protoc-gen-go.
// source: test_objects.proto
// DO NOT EDIT!
package jsonpb
import proto "github.com/golang/protobuf/proto"
import fmt "fmt"
import math "math"
import google_protobuf "github.com/golang/protobuf/ptypes/any"
import google_protobuf1 "github.com/golang/protobuf/ptypes/duration"
import google_protobuf2 "github.com/golang/protobuf/ptypes/struct"
import google_protobuf3 "github.com/golang/protobuf/ptypes/timestamp"
import google_protobuf4 "github.com/golang/protobuf/ptypes/wrappers"
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
type Widget_Color int32
const (
Widget_RED Widget_Color = 0
Widget_GREEN Widget_Color = 1
Widget_BLUE Widget_Color = 2
)
var Widget_Color_name = map[int32]string{
0: "RED",
1: "GREEN",
2: "BLUE",
}
var Widget_Color_value = map[string]int32{
"RED": 0,
"GREEN": 1,
"BLUE": 2,
}
func (x Widget_Color) Enum() *Widget_Color {
p := new(Widget_Color)
*p = x
return p
}
func (x Widget_Color) String() string {
return proto.EnumName(Widget_Color_name, int32(x))
}
func (x *Widget_Color) UnmarshalJSON(data []byte) error {
value, err := proto.UnmarshalJSONEnum(Widget_Color_value, data, "Widget_Color")
if err != nil {
return err
}
*x = Widget_Color(value)
return nil
}
func (Widget_Color) EnumDescriptor() ([]byte, []int) { return fileDescriptor1, []int{2, 0} }
// Test message for holding primitive types.
type Simple struct {
OBool *bool `protobuf:"varint,1,opt,name=o_bool,json=oBool" json:"o_bool,omitempty"`
OInt32 *int32 `protobuf:"varint,2,opt,name=o_int32,json=oInt32" json:"o_int32,omitempty"`
OInt64 *int64 `protobuf:"varint,3,opt,name=o_int64,json=oInt64" json:"o_int64,omitempty"`
OUint32 *uint32 `protobuf:"varint,4,opt,name=o_uint32,json=oUint32" json:"o_uint32,omitempty"`
OUint64 *uint64 `protobuf:"varint,5,opt,name=o_uint64,json=oUint64" json:"o_uint64,omitempty"`
OSint32 *int32 `protobuf:"zigzag32,6,opt,name=o_sint32,json=oSint32" json:"o_sint32,omitempty"`
OSint64 *int64 `protobuf:"zigzag64,7,opt,name=o_sint64,json=oSint64" json:"o_sint64,omitempty"`
OFloat *float32 `protobuf:"fixed32,8,opt,name=o_float,json=oFloat" json:"o_float,omitempty"`
ODouble *float64 `protobuf:"fixed64,9,opt,name=o_double,json=oDouble" json:"o_double,omitempty"`
OString *string `protobuf:"bytes,10,opt,name=o_string,json=oString" json:"o_string,omitempty"`
OBytes []byte `protobuf:"bytes,11,opt,name=o_bytes,json=oBytes" json:"o_bytes,omitempty"`
XXX_unrecognized []byte `json:"-"`
}
func (m *Simple) Reset() { *m = Simple{} }
func (m *Simple) String() string { return proto.CompactTextString(m) }
func (*Simple) ProtoMessage() {}
func (*Simple) Descriptor() ([]byte, []int) { return fileDescriptor1, []int{0} }
func (m *Simple) GetOBool() bool {
if m != nil && m.OBool != nil {
return *m.OBool
}
return false
}
func (m *Simple) GetOInt32() int32 {
if m != nil && m.OInt32 != nil {
return *m.OInt32
}
return 0
}
func (m *Simple) GetOInt64() int64 {
if m != nil && m.OInt64 != nil {
return *m.OInt64
}
return 0
}
func (m *Simple) GetOUint32() uint32 {
if m != nil && m.OUint32 != nil {
return *m.OUint32
}
return 0
}
func (m *Simple) GetOUint64() uint64 {
if m != nil && m.OUint64 != nil {
return *m.OUint64
}
return 0
}
func (m *Simple) GetOSint32() int32 {
if m != nil && m.OSint32 != nil {
return *m.OSint32
}
return 0
}
func (m *Simple) GetOSint64() int64 {
if m != nil && m.OSint64 != nil {
return *m.OSint64
}
return 0
}
func (m *Simple) GetOFloat() float32 {
if m != nil && m.OFloat != nil {
return *m.OFloat
}
return 0
}
func (m *Simple) GetODouble() float64 {
if m != nil && m.ODouble != nil {
return *m.ODouble
}
return 0
}
func (m *Simple) GetOString() string {
if m != nil && m.OString != nil {
return *m.OString
}
return ""
}
func (m *Simple) GetOBytes() []byte {
if m != nil {
return m.OBytes
}
return nil
}
// Test message for holding repeated primitives.
type Repeats struct {
RBool []bool `protobuf:"varint,1,rep,name=r_bool,json=rBool" json:"r_bool,omitempty"`
RInt32 []int32 `protobuf:"varint,2,rep,name=r_int32,json=rInt32" json:"r_int32,omitempty"`
RInt64 []int64 `protobuf:"varint,3,rep,name=r_int64,json=rInt64" json:"r_int64,omitempty"`
RUint32 []uint32 `protobuf:"varint,4,rep,name=r_uint32,json=rUint32" json:"r_uint32,omitempty"`
RUint64 []uint64 `protobuf:"varint,5,rep,name=r_uint64,json=rUint64" json:"r_uint64,omitempty"`
RSint32 []int32 `protobuf:"zigzag32,6,rep,name=r_sint32,json=rSint32" json:"r_sint32,omitempty"`
RSint64 []int64 `protobuf:"zigzag64,7,rep,name=r_sint64,json=rSint64" json:"r_sint64,omitempty"`
RFloat []float32 `protobuf:"fixed32,8,rep,name=r_float,json=rFloat" json:"r_float,omitempty"`
RDouble []float64 `protobuf:"fixed64,9,rep,name=r_double,json=rDouble" json:"r_double,omitempty"`
RString []string `protobuf:"bytes,10,rep,name=r_string,json=rString" json:"r_string,omitempty"`
RBytes [][]byte `protobuf:"bytes,11,rep,name=r_bytes,json=rBytes" json:"r_bytes,omitempty"`
XXX_unrecognized []byte `json:"-"`
}
func (m *Repeats) Reset() { *m = Repeats{} }
func (m *Repeats) String() string { return proto.CompactTextString(m) }
func (*Repeats) ProtoMessage() {}
func (*Repeats) Descriptor() ([]byte, []int) { return fileDescriptor1, []int{1} }
func (m *Repeats) GetRBool() []bool {
if m != nil {
return m.RBool
}
return nil
}
func (m *Repeats) GetRInt32() []int32 {
if m != nil {
return m.RInt32
}
return nil
}
func (m *Repeats) GetRInt64() []int64 {
if m != nil {
return m.RInt64
}
return nil
}
func (m *Repeats) GetRUint32() []uint32 {
if m != nil {
return m.RUint32
}
return nil
}
func (m *Repeats) GetRUint64() []uint64 {
if m != nil {
return m.RUint64
}
return nil
}
func (m *Repeats) GetRSint32() []int32 {
if m != nil {
return m.RSint32
}
return nil
}
func (m *Repeats) GetRSint64() []int64 {
if m != nil {
return m.RSint64
}
return nil
}
func (m *Repeats) GetRFloat() []float32 {
if m != nil {
return m.RFloat
}
return nil
}
func (m *Repeats) GetRDouble() []float64 {
if m != nil {
return m.RDouble
}
return nil
}
func (m *Repeats) GetRString() []string {
if m != nil {
return m.RString
}
return nil
}
func (m *Repeats) GetRBytes() [][]byte {
if m != nil {
return m.RBytes
}
return nil
}
// Test message for holding enums and nested messages.
type Widget struct {
Color *Widget_Color `protobuf:"varint,1,opt,name=color,enum=jsonpb.Widget_Color" json:"color,omitempty"`
RColor []Widget_Color `protobuf:"varint,2,rep,name=r_color,json=rColor,enum=jsonpb.Widget_Color" json:"r_color,omitempty"`
Simple *Simple `protobuf:"bytes,10,opt,name=simple" json:"simple,omitempty"`
RSimple []*Simple `protobuf:"bytes,11,rep,name=r_simple,json=rSimple" json:"r_simple,omitempty"`
Repeats *Repeats `protobuf:"bytes,20,opt,name=repeats" json:"repeats,omitempty"`
RRepeats []*Repeats `protobuf:"bytes,21,rep,name=r_repeats,json=rRepeats" json:"r_repeats,omitempty"`
XXX_unrecognized []byte `json:"-"`
}
func (m *Widget) Reset() { *m = Widget{} }
func (m *Widget) String() string { return proto.CompactTextString(m) }
func (*Widget) ProtoMessage() {}
func (*Widget) Descriptor() ([]byte, []int) { return fileDescriptor1, []int{2} }
func (m *Widget) GetColor() Widget_Color {
if m != nil && m.Color != nil {
return *m.Color
}
return Widget_RED
}
func (m *Widget) GetRColor() []Widget_Color {
if m != nil {
return m.RColor
}
return nil
}
func (m *Widget) GetSimple() *Simple {
if m != nil {
return m.Simple
}
return nil
}
func (m *Widget) GetRSimple() []*Simple {
if m != nil {
return m.RSimple
}
return nil
}
func (m *Widget) GetRepeats() *Repeats {
if m != nil {
return m.Repeats
}
return nil
}
func (m *Widget) GetRRepeats() []*Repeats {
if m != nil {
return m.RRepeats
}
return nil
}
type Maps struct {
MInt64Str map[int64]string `protobuf:"bytes,1,rep,name=m_int64_str,json=mInt64Str" json:"m_int64_str,omitempty" protobuf_key:"varint,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
MBoolSimple map[bool]*Simple `protobuf:"bytes,2,rep,name=m_bool_simple,json=mBoolSimple" json:"m_bool_simple,omitempty" protobuf_key:"varint,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
XXX_unrecognized []byte `json:"-"`
}
func (m *Maps) Reset() { *m = Maps{} }
func (m *Maps) String() string { return proto.CompactTextString(m) }
func (*Maps) ProtoMessage() {}
func (*Maps) Descriptor() ([]byte, []int) { return fileDescriptor1, []int{3} }
func (m *Maps) GetMInt64Str() map[int64]string {
if m != nil {
return m.MInt64Str
}
return nil
}
func (m *Maps) GetMBoolSimple() map[bool]*Simple {
if m != nil {
return m.MBoolSimple
}
return nil
}
type MsgWithOneof struct {
// Types that are valid to be assigned to Union:
// *MsgWithOneof_Title
// *MsgWithOneof_Salary
// *MsgWithOneof_Country
// *MsgWithOneof_HomeAddress
Union isMsgWithOneof_Union `protobuf_oneof:"union"`
XXX_unrecognized []byte `json:"-"`
}
func (m *MsgWithOneof) Reset() { *m = MsgWithOneof{} }
func (m *MsgWithOneof) String() string { return proto.CompactTextString(m) }
func (*MsgWithOneof) ProtoMessage() {}
func (*MsgWithOneof) Descriptor() ([]byte, []int) { return fileDescriptor1, []int{4} }
type isMsgWithOneof_Union interface {
isMsgWithOneof_Union()
}
type MsgWithOneof_Title struct {
Title string `protobuf:"bytes,1,opt,name=title,oneof"`
}
type MsgWithOneof_Salary struct {
Salary int64 `protobuf:"varint,2,opt,name=salary,oneof"`
}
type MsgWithOneof_Country struct {
Country string `protobuf:"bytes,3,opt,name=Country,oneof"`
}
type MsgWithOneof_HomeAddress struct {
HomeAddress string `protobuf:"bytes,4,opt,name=home_address,json=homeAddress,oneof"`
}
func (*MsgWithOneof_Title) isMsgWithOneof_Union() {}
func (*MsgWithOneof_Salary) isMsgWithOneof_Union() {}
func (*MsgWithOneof_Country) isMsgWithOneof_Union() {}
func (*MsgWithOneof_HomeAddress) isMsgWithOneof_Union() {}
func (m *MsgWithOneof) GetUnion() isMsgWithOneof_Union {
if m != nil {
return m.Union
}
return nil
}
func (m *MsgWithOneof) GetTitle() string {
if x, ok := m.GetUnion().(*MsgWithOneof_Title); ok {
return x.Title
}
return ""
}
func (m *MsgWithOneof) GetSalary() int64 {
if x, ok := m.GetUnion().(*MsgWithOneof_Salary); ok {
return x.Salary
}
return 0
}
func (m *MsgWithOneof) GetCountry() string {
if x, ok := m.GetUnion().(*MsgWithOneof_Country); ok {
return x.Country
}
return ""
}
func (m *MsgWithOneof) GetHomeAddress() string {
if x, ok := m.GetUnion().(*MsgWithOneof_HomeAddress); ok {
return x.HomeAddress
}
return ""
}
// XXX_OneofFuncs is for the internal use of the proto package.
func (*MsgWithOneof) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) {
return _MsgWithOneof_OneofMarshaler, _MsgWithOneof_OneofUnmarshaler, _MsgWithOneof_OneofSizer, []interface{}{
(*MsgWithOneof_Title)(nil),
(*MsgWithOneof_Salary)(nil),
(*MsgWithOneof_Country)(nil),
(*MsgWithOneof_HomeAddress)(nil),
}
}
func _MsgWithOneof_OneofMarshaler(msg proto.Message, b *proto.Buffer) error {
m := msg.(*MsgWithOneof)
// union
switch x := m.Union.(type) {
case *MsgWithOneof_Title:
b.EncodeVarint(1<<3 | proto.WireBytes)
b.EncodeStringBytes(x.Title)
case *MsgWithOneof_Salary:
b.EncodeVarint(2<<3 | proto.WireVarint)
b.EncodeVarint(uint64(x.Salary))
case *MsgWithOneof_Country:
b.EncodeVarint(3<<3 | proto.WireBytes)
b.EncodeStringBytes(x.Country)
case *MsgWithOneof_HomeAddress:
b.EncodeVarint(4<<3 | proto.WireBytes)
b.EncodeStringBytes(x.HomeAddress)
case nil:
default:
return fmt.Errorf("MsgWithOneof.Union has unexpected type %T", x)
}
return nil
}
func _MsgWithOneof_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) {
m := msg.(*MsgWithOneof)
switch tag {
case 1: // union.title
if wire != proto.WireBytes {
return true, proto.ErrInternalBadWireType
}
x, err := b.DecodeStringBytes()
m.Union = &MsgWithOneof_Title{x}
return true, err
case 2: // union.salary
if wire != proto.WireVarint {
return true, proto.ErrInternalBadWireType
}
x, err := b.DecodeVarint()
m.Union = &MsgWithOneof_Salary{int64(x)}
return true, err
case 3: // union.Country
if wire != proto.WireBytes {
return true, proto.ErrInternalBadWireType
}
x, err := b.DecodeStringBytes()
m.Union = &MsgWithOneof_Country{x}
return true, err
case 4: // union.home_address
if wire != proto.WireBytes {
return true, proto.ErrInternalBadWireType
}
x, err := b.DecodeStringBytes()
m.Union = &MsgWithOneof_HomeAddress{x}
return true, err
default:
return false, nil
}
}
func _MsgWithOneof_OneofSizer(msg proto.Message) (n int) {
m := msg.(*MsgWithOneof)
// union
switch x := m.Union.(type) {
case *MsgWithOneof_Title:
n += proto.SizeVarint(1<<3 | proto.WireBytes)
n += proto.SizeVarint(uint64(len(x.Title)))
n += len(x.Title)
case *MsgWithOneof_Salary:
n += proto.SizeVarint(2<<3 | proto.WireVarint)
n += proto.SizeVarint(uint64(x.Salary))
case *MsgWithOneof_Country:
n += proto.SizeVarint(3<<3 | proto.WireBytes)
n += proto.SizeVarint(uint64(len(x.Country)))
n += len(x.Country)
case *MsgWithOneof_HomeAddress:
n += proto.SizeVarint(4<<3 | proto.WireBytes)
n += proto.SizeVarint(uint64(len(x.HomeAddress)))
n += len(x.HomeAddress)
case nil:
default:
panic(fmt.Sprintf("proto: unexpected type %T in oneof", x))
}
return n
}
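// Illustrative sketch (not part of the upstream file): how the generated oneof
// plumbing above is used. Exactly one wrapper type is held in Union at a time;
// the matching getter returns its value and the others return zero values.
func exampleOneof() {
	m := &MsgWithOneof{Union: &MsgWithOneof_Title{Title: "gopher"}}
	_ = m.GetTitle()  // "gopher"
	_ = m.GetSalary() // 0, because Union currently holds a Title

	m.Union = &MsgWithOneof_Salary{Salary: 42} // switching the case replaces the old value
	_ = m.GetTitle()                           // "" now
}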
type Real struct {
Value *float64 `protobuf:"fixed64,1,opt,name=value" json:"value,omitempty"`
proto.XXX_InternalExtensions `json:"-"`
XXX_unrecognized []byte `json:"-"`
}
func (m *Real) Reset() { *m = Real{} }
func (m *Real) String() string { return proto.CompactTextString(m) }
func (*Real) ProtoMessage() {}
func (*Real) Descriptor() ([]byte, []int) { return fileDescriptor1, []int{5} }
var extRange_Real = []proto.ExtensionRange{
{100, 536870911},
}
func (*Real) ExtensionRangeArray() []proto.ExtensionRange {
return extRange_Real
}
func (m *Real) GetValue() float64 {
if m != nil && m.Value != nil {
return *m.Value
}
return 0
}
type Complex struct {
Imaginary *float64 `protobuf:"fixed64,1,opt,name=imaginary" json:"imaginary,omitempty"`
proto.XXX_InternalExtensions `json:"-"`
XXX_unrecognized []byte `json:"-"`
}
func (m *Complex) Reset() { *m = Complex{} }
func (m *Complex) String() string { return proto.CompactTextString(m) }
func (*Complex) ProtoMessage() {}
func (*Complex) Descriptor() ([]byte, []int) { return fileDescriptor1, []int{6} }
var extRange_Complex = []proto.ExtensionRange{
{100, 536870911},
}
func (*Complex) ExtensionRangeArray() []proto.ExtensionRange {
return extRange_Complex
}
func (m *Complex) GetImaginary() float64 {
if m != nil && m.Imaginary != nil {
return *m.Imaginary
}
return 0
}
var E_Complex_RealExtension = &proto.ExtensionDesc{
ExtendedType: (*Real)(nil),
ExtensionType: (*Complex)(nil),
Field: 123,
Name: "jsonpb.Complex.real_extension",
Tag: "bytes,123,opt,name=real_extension,json=realExtension",
Filename: "test_objects.proto",
}
type KnownTypes struct {
An *google_protobuf.Any `protobuf:"bytes,14,opt,name=an" json:"an,omitempty"`
Dur *google_protobuf1.Duration `protobuf:"bytes,1,opt,name=dur" json:"dur,omitempty"`
St *google_protobuf2.Struct `protobuf:"bytes,12,opt,name=st" json:"st,omitempty"`
Ts *google_protobuf3.Timestamp `protobuf:"bytes,2,opt,name=ts" json:"ts,omitempty"`
Dbl *google_protobuf4.DoubleValue `protobuf:"bytes,3,opt,name=dbl" json:"dbl,omitempty"`
Flt *google_protobuf4.FloatValue `protobuf:"bytes,4,opt,name=flt" json:"flt,omitempty"`
I64 *google_protobuf4.Int64Value `protobuf:"bytes,5,opt,name=i64" json:"i64,omitempty"`
U64 *google_protobuf4.UInt64Value `protobuf:"bytes,6,opt,name=u64" json:"u64,omitempty"`
I32 *google_protobuf4.Int32Value `protobuf:"bytes,7,opt,name=i32" json:"i32,omitempty"`
U32 *google_protobuf4.UInt32Value `protobuf:"bytes,8,opt,name=u32" json:"u32,omitempty"`
Bool *google_protobuf4.BoolValue `protobuf:"bytes,9,opt,name=bool" json:"bool,omitempty"`
Str *google_protobuf4.StringValue `protobuf:"bytes,10,opt,name=str" json:"str,omitempty"`
Bytes *google_protobuf4.BytesValue `protobuf:"bytes,11,opt,name=bytes" json:"bytes,omitempty"`
XXX_unrecognized []byte `json:"-"`
}
func (m *KnownTypes) Reset() { *m = KnownTypes{} }
func (m *KnownTypes) String() string { return proto.CompactTextString(m) }
func (*KnownTypes) ProtoMessage() {}
func (*KnownTypes) Descriptor() ([]byte, []int) { return fileDescriptor1, []int{7} }
func (m *KnownTypes) GetAn() *google_protobuf.Any {
if m != nil {
return m.An
}
return nil
}
func (m *KnownTypes) GetDur() *google_protobuf1.Duration {
if m != nil {
return m.Dur
}
return nil
}
func (m *KnownTypes) GetSt() *google_protobuf2.Struct {
if m != nil {
return m.St
}
return nil
}
func (m *KnownTypes) GetTs() *google_protobuf3.Timestamp {
if m != nil {
return m.Ts
}
return nil
}
func (m *KnownTypes) GetDbl() *google_protobuf4.DoubleValue {
if m != nil {
return m.Dbl
}
return nil
}
func (m *KnownTypes) GetFlt() *google_protobuf4.FloatValue {
if m != nil {
return m.Flt
}
return nil
}
func (m *KnownTypes) GetI64() *google_protobuf4.Int64Value {
if m != nil {
return m.I64
}
return nil
}
func (m *KnownTypes) GetU64() *google_protobuf4.UInt64Value {
if m != nil {
return m.U64
}
return nil
}
func (m *KnownTypes) GetI32() *google_protobuf4.Int32Value {
if m != nil {
return m.I32
}
return nil
}
func (m *KnownTypes) GetU32() *google_protobuf4.UInt32Value {
if m != nil {
return m.U32
}
return nil
}
func (m *KnownTypes) GetBool() *google_protobuf4.BoolValue {
if m != nil {
return m.Bool
}
return nil
}
func (m *KnownTypes) GetStr() *google_protobuf4.StringValue {
if m != nil {
return m.Str
}
return nil
}
func (m *KnownTypes) GetBytes() *google_protobuf4.BytesValue {
if m != nil {
return m.Bytes
}
return nil
}
var E_Name = &proto.ExtensionDesc{
ExtendedType: (*Real)(nil),
ExtensionType: (*string)(nil),
Field: 124,
Name: "jsonpb.name",
Tag: "bytes,124,opt,name=name",
Filename: "test_objects.proto",
}
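// Illustrative sketch (not part of the upstream file): using the extension
// descriptors above. Real reserves field numbers 100-536870911 for extensions,
// and E_Name declares a string extension of Real at field 124. SetExtension
// and GetExtension are the proto package's extension API as vendored here;
// error handling is kept minimal for brevity.
func exampleExtensions() {
	r := &Real{Value: proto.Float64(3.14)}
	if err := proto.SetExtension(r, E_Name, proto.String("pi")); err == nil {
		if v, err := proto.GetExtension(r, E_Name); err == nil {
			_ = *(v.(*string)) // "pi"
		}
	}
}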
func init() {
proto.RegisterType((*Simple)(nil), "jsonpb.Simple")
proto.RegisterType((*Repeats)(nil), "jsonpb.Repeats")
proto.RegisterType((*Widget)(nil), "jsonpb.Widget")
proto.RegisterType((*Maps)(nil), "jsonpb.Maps")
proto.RegisterType((*MsgWithOneof)(nil), "jsonpb.MsgWithOneof")
proto.RegisterType((*Real)(nil), "jsonpb.Real")
proto.RegisterType((*Complex)(nil), "jsonpb.Complex")
proto.RegisterType((*KnownTypes)(nil), "jsonpb.KnownTypes")
proto.RegisterEnum("jsonpb.Widget_Color", Widget_Color_name, Widget_Color_value)
proto.RegisterExtension(E_Complex_RealExtension)
proto.RegisterExtension(E_Name)
}
func init() { proto.RegisterFile("test_objects.proto", fileDescriptor1) }
var fileDescriptor1 = []byte{
// 1054 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0x74, 0x95, 0xdf, 0x72, 0xdb, 0x44,
0x14, 0xc6, 0x23, 0xc9, 0x96, 0xed, 0x75, 0x12, 0xcc, 0x4e, 0x4a, 0x15, 0x13, 0x40, 0x63, 0x4a,
0x11, 0x85, 0xba, 0x83, 0xe2, 0xf1, 0x30, 0x85, 0x9b, 0xa4, 0x31, 0x94, 0x81, 0x94, 0x99, 0x4d,
0x43, 0x2f, 0x3d, 0x72, 0xbc, 0x71, 0x55, 0x64, 0xad, 0x67, 0x77, 0x45, 0xea, 0x81, 0x8b, 0x5c,
0x73, 0xcd, 0x2b, 0xc0, 0x23, 0xf0, 0x44, 0x3c, 0x48, 0xe7, 0x9c, 0xd5, 0x9f, 0xc4, 0x8e, 0xaf,
0xe2, 0xb3, 0xe7, 0x3b, 0x5f, 0x56, 0xbf, 0x3d, 0xbb, 0x87, 0x50, 0xcd, 0x95, 0x1e, 0x8b, 0xc9,
0x1b, 0x7e, 0xa1, 0x55, 0x7f, 0x21, 0x85, 0x16, 0xd4, 0x7d, 0xa3, 0x44, 0xba, 0x98, 0x74, 0xf7,
0x67, 0x42, 0xcc, 0x12, 0xfe, 0x04, 0x57, 0x27, 0xd9, 0xe5, 0x93, 0x28, 0x5d, 0x1a, 0x49, 0xf7,
0xe3, 0xd5, 0xd4, 0x34, 0x93, 0x91, 0x8e, 0x45, 0x9a, 0xe7, 0x0f, 0x56, 0xf3, 0x4a, 0xcb, 0xec,
0x42, 0xe7, 0xd9, 0x4f, 0x56, 0xb3, 0x3a, 0x9e, 0x73, 0xa5, 0xa3, 0xf9, 0x62, 0x93, 0xfd, 0x95,
0x8c, 0x16, 0x0b, 0x2e, 0xf3, 0x1d, 0xf6, 0xfe, 0xb1, 0x89, 0x7b, 0x16, 0xcf, 0x17, 0x09, 0xa7,
0xf7, 0x88, 0x2b, 0xc6, 0x13, 0x21, 0x12, 0xcf, 0xf2, 0xad, 0xa0, 0xc9, 0xea, 0xe2, 0x58, 0x88,
0x84, 0xde, 0x27, 0x0d, 0x31, 0x8e, 0x53, 0x7d, 0x18, 0x7a, 0xb6, 0x6f, 0x05, 0x75, 0xe6, 0x8a,
0x1f, 0x21, 0x2a, 0x13, 0xc3, 0x81, 0xe7, 0xf8, 0x56, 0xe0, 0x98, 0xc4, 0x70, 0x40, 0xf7, 0x49,
0x53, 0x8c, 0x33, 0x53, 0x52, 0xf3, 0xad, 0x60, 0x87, 0x35, 0xc4, 0x39, 0x86, 0x55, 0x6a, 0x38,
0xf0, 0xea, 0xbe, 0x15, 0xd4, 0xf2, 0x54, 0x51, 0xa5, 0x4c, 0x95, 0xeb, 0x5b, 0xc1, 0xfb, 0xac,
0x21, 0xce, 0x6e, 0x54, 0x29, 0x53, 0xd5, 0xf0, 0xad, 0x80, 0xe6, 0xa9, 0xe1, 0xc0, 0x6c, 0xe2,
0x32, 0x11, 0x91, 0xf6, 0x9a, 0xbe, 0x15, 0xd8, 0xcc, 0x15, 0xdf, 0x43, 0x64, 0x6a, 0xa6, 0x22,
0x9b, 0x24, 0xdc, 0x6b, 0xf9, 0x56, 0x60, 0xb1, 0x86, 0x38, 0xc1, 0x30, 0xb7, 0xd3, 0x32, 0x4e,
0x67, 0x1e, 0xf1, 0xad, 0xa0, 0x05, 0x76, 0x18, 0x1a, 0xbb, 0xc9, 0x52, 0x73, 0xe5, 0xb5, 0x7d,
0x2b, 0xd8, 0x66, 0xae, 0x38, 0x86, 0xa8, 0xf7, 0xaf, 0x4d, 0x1a, 0x8c, 0x2f, 0x78, 0xa4, 0x15,
0x80, 0x92, 0x05, 0x28, 0x07, 0x40, 0xc9, 0x02, 0x94, 0x2c, 0x41, 0x39, 0x00, 0x4a, 0x96, 0xa0,
0x64, 0x09, 0xca, 0x01, 0x50, 0xb2, 0x04, 0x25, 0x2b, 0x50, 0x0e, 0x80, 0x92, 0x15, 0x28, 0x59,
0x81, 0x72, 0x00, 0x94, 0xac, 0x40, 0xc9, 0x0a, 0x94, 0x03, 0xa0, 0xe4, 0xd9, 0x8d, 0xaa, 0x12,
0x94, 0x03, 0xa0, 0x64, 0x05, 0x4a, 0x96, 0xa0, 0x1c, 0x00, 0x25, 0x4b, 0x50, 0xb2, 0x02, 0xe5,
0x00, 0x28, 0x59, 0x81, 0x92, 0x15, 0x28, 0x07, 0x40, 0xc9, 0x0a, 0x94, 0x2c, 0x41, 0x39, 0x00,
0x4a, 0x1a, 0x50, 0xff, 0xd9, 0xc4, 0x7d, 0x15, 0x4f, 0x67, 0x5c, 0xd3, 0x47, 0xa4, 0x7e, 0x21,
0x12, 0x21, 0xb1, 0x9f, 0x76, 0xc3, 0xbd, 0xbe, 0xb9, 0x0d, 0x7d, 0x93, 0xee, 0x3f, 0x83, 0x1c,
0x33, 0x12, 0xfa, 0x18, 0xfc, 0x8c, 0x1a, 0xe0, 0x6d, 0x52, 0xbb, 0x12, 0xff, 0xd2, 0x87, 0xc4,
0x55, 0xd8, 0xb5, 0x78, 0x80, 0xed, 0x70, 0xb7, 0x50, 0x9b, 0x5e, 0x66, 0x79, 0x96, 0x7e, 0x61,
0x80, 0xa0, 0x12, 0xf6, 0xb9, 0xae, 0x04, 0x40, 0xb9, 0xb4, 0x21, 0xcd, 0x01, 0x7b, 0x7b, 0xe8,
0xf9, 0x5e, 0xa1, 0xcc, 0xcf, 0x9d, 0x15, 0x79, 0xfa, 0x15, 0x69, 0xc9, 0x71, 0x21, 0xbe, 0x87,
0xb6, 0x6b, 0xe2, 0xa6, 0xcc, 0x7f, 0xf5, 0x3e, 0x23, 0x75, 0xb3, 0xe9, 0x06, 0x71, 0xd8, 0xe8,
0xa4, 0xb3, 0x45, 0x5b, 0xa4, 0xfe, 0x03, 0x1b, 0x8d, 0x5e, 0x74, 0x2c, 0xda, 0x24, 0xb5, 0xe3,
0x9f, 0xcf, 0x47, 0x1d, 0xbb, 0xf7, 0xb7, 0x4d, 0x6a, 0xa7, 0xd1, 0x42, 0xd1, 0x6f, 0x49, 0x7b,
0x6e, 0xda, 0x05, 0xd8, 0x63, 0x8f, 0xb5, 0xc3, 0x0f, 0x0b, 0x7f, 0x90, 0xf4, 0x4f, 0xb1, 0x7f,
0xce, 0xb4, 0x1c, 0xa5, 0x5a, 0x2e, 0x59, 0x6b, 0x5e, 0xc4, 0xf4, 0x88, 0xec, 0xcc, 0xb1, 0x37,
0x8b, 0xaf, 0xb6, 0xb1, 0xfc, 0xa3, 0xdb, 0xe5, 0xd0, 0xaf, 0xe6, 0xb3, 0x8d, 0x41, 0x7b, 0x5e,
0xad, 0x74, 0xbf, 0x23, 0xbb, 0xb7, 0xfd, 0x69, 0x87, 0x38, 0xbf, 0xf1, 0x25, 0x1e, 0xa3, 0xc3,
0xe0, 0x27, 0xdd, 0x23, 0xf5, 0xdf, 0xa3, 0x24, 0xe3, 0xf8, 0x24, 0xb4, 0x98, 0x09, 0x9e, 0xda,
0xdf, 0x58, 0xdd, 0x17, 0xa4, 0xb3, 0x6a, 0x7f, 0xb3, 0xbe, 0x69, 0xea, 0x1f, 0xdc, 0xac, 0x5f,
0x3f, 0x94, 0xca, 0xaf, 0xf7, 0x97, 0x45, 0xb6, 0x4f, 0xd5, 0xec, 0x55, 0xac, 0x5f, 0xff, 0x92,
0x72, 0x71, 0x49, 0x3f, 0x20, 0x75, 0x1d, 0xeb, 0x84, 0xa3, 0x5d, 0xeb, 0xf9, 0x16, 0x33, 0x21,
0xf5, 0x88, 0xab, 0xa2, 0x24, 0x92, 0x4b, 0xf4, 0x74, 0x9e, 0x6f, 0xb1, 0x3c, 0xa6, 0x5d, 0xd2,
0x78, 0x26, 0x32, 0xd8, 0x09, 0x3e, 0x54, 0x50, 0x53, 0x2c, 0xd0, 0x4f, 0xc9, 0xf6, 0x6b, 0x31,
0xe7, 0xe3, 0x68, 0x3a, 0x95, 0x5c, 0x29, 0x7c, 0xaf, 0x40, 0xd0, 0x86, 0xd5, 0x23, 0xb3, 0x78,
0xdc, 0x20, 0xf5, 0x2c, 0x8d, 0x45, 0xda, 0x7b, 0x48, 0x6a, 0x8c, 0x47, 0x49, 0xf5, 0xf9, 0x16,
0xbe, 0x2c, 0x26, 0x78, 0xd4, 0x6c, 0x4e, 0x3b, 0xd7, 0xd7, 0xd7, 0xd7, 0x76, 0xef, 0x0a, 0xfe,
0x23, 0x7c, 0xc9, 0x5b, 0x7a, 0x40, 0x5a, 0xf1, 0x3c, 0x9a, 0xc5, 0x29, 0xec, 0xcc, 0xc8, 0xab,
0x85, 0xaa, 0x24, 0x3c, 0x21, 0xbb, 0x92, 0x47, 0xc9, 0x98, 0xbf, 0xd5, 0x3c, 0x55, 0xb1, 0x48,
0xe9, 0x76, 0xd5, 0x52, 0x51, 0xe2, 0xfd, 0x71, 0xbb, 0x27, 0x73, 0x7b, 0xb6, 0x03, 0x45, 0xa3,
0xa2, 0xa6, 0xf7, 0x7f, 0x8d, 0x90, 0x9f, 0x52, 0x71, 0x95, 0xbe, 0x5c, 0x2e, 0xb8, 0xa2, 0x0f,
0x88, 0x1d, 0xa5, 0xde, 0x2e, 0x96, 0xee, 0xf5, 0xcd, 0x28, 0xe8, 0x17, 0xa3, 0xa0, 0x7f, 0x94,
0x2e, 0x99, 0x1d, 0xa5, 0xf4, 0x4b, 0xe2, 0x4c, 0x33, 0x73, 0x4b, 0xdb, 0xe1, 0xfe, 0x9a, 0xec,
0x24, 0x1f, 0x48, 0x0c, 0x54, 0xf4, 0x73, 0x62, 0x2b, 0xed, 0x6d, 0xa3, 0xf6, 0xfe, 0x9a, 0xf6,
0x0c, 0x87, 0x13, 0xb3, 0x15, 0xdc, 0x7e, 0x5b, 0xab, 0xfc, 0x7c, 0xbb, 0x6b, 0xc2, 0x97, 0xc5,
0x9c, 0x62, 0xb6, 0x56, 0xb4, 0x4f, 0x9c, 0xe9, 0x24, 0xc1, 0xd3, 0x69, 0x87, 0x07, 0xeb, 0x3b,
0xc0, 0xe7, 0xe8, 0x57, 0x80, 0xcc, 0x40, 0x48, 0x1f, 0x13, 0xe7, 0x32, 0xd1, 0x78, 0x58, 0x70,
0x35, 0x56, 0xf5, 0xf8, 0xb0, 0xe5, 0xf2, 0xcb, 0x44, 0x83, 0x3c, 0xce, 0x07, 0xce, 0x5d, 0x72,
0x6c, 0xf6, 0x5c, 0x1e, 0x0f, 0x07, 0xb0, 0x9b, 0x6c, 0x38, 0xc0, 0x21, 0x74, 0xd7, 0x6e, 0xce,
0x6f, 0xea, 0xb3, 0xe1, 0x00, 0xed, 0x0f, 0x43, 0x9c, 0x4c, 0x1b, 0xec, 0x0f, 0xc3, 0xc2, 0xfe,
0x30, 0x44, 0xfb, 0xc3, 0x10, 0xc7, 0xd5, 0x26, 0xfb, 0x52, 0x9f, 0xa1, 0xbe, 0x86, 0xc3, 0xa6,
0xb5, 0x01, 0x25, 0xdc, 0x36, 0x23, 0x47, 0x1d, 0xf8, 0xc3, 0xbb, 0x41, 0x36, 0xf8, 0x9b, 0x07,
0x3c, 0xf7, 0x57, 0x5a, 0xd2, 0xaf, 0x49, 0xbd, 0x9a, 0x78, 0x77, 0x7d, 0x00, 0x3e, 0xec, 0xa6,
0xc0, 0x28, 0x9f, 0xfa, 0xa4, 0x96, 0x46, 0x73, 0xbe, 0xd2, 0xa2, 0x7f, 0xe2, 0x5b, 0x80, 0x99,
0x77, 0x01, 0x00, 0x00, 0xff, 0xff, 0x7e, 0xe7, 0x47, 0x52, 0x0e, 0x09, 0x00, 0x00,
}
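// Illustrative sketch (not part of the upstream file): a wire round-trip
// through the test messages defined above. proto.Marshal lives in encode.go
// (whose diff is suppressed later in this change); proto.Unmarshal and
// proto.Equal appear in decode.go and equal.go below.
func exampleRoundTrip() {
	in := &Repeats{
		RString: []string{"a", "b"},
		RDouble: []float64{1.5, 2.5},
	}
	data, err := proto.Marshal(in)
	if err != nil {
		panic(err)
	}
	out := &Repeats{}
	if err := proto.Unmarshal(data, out); err != nil {
		panic(err)
	}
	_ = proto.Equal(in, out) // true: the decoded message matches the original
}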

229
vendor/github.com/golang/protobuf/proto/clone.go generated vendored Normal file
View File

@ -0,0 +1,229 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2011 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
// Protocol buffer deep copy and merge.
// TODO: RawMessage.
package proto
import (
"log"
"reflect"
"strings"
)
// Clone returns a deep copy of a protocol buffer.
func Clone(pb Message) Message {
in := reflect.ValueOf(pb)
if in.IsNil() {
return pb
}
out := reflect.New(in.Type().Elem())
// out is empty so a merge is a deep copy.
mergeStruct(out.Elem(), in.Elem())
return out.Interface().(Message)
}
// Merge merges src into dst.
// Required and optional fields that are set in src will be set to that value in dst.
// Elements of repeated fields will be appended.
// Merge panics if src and dst are not the same type, or if dst is nil.
func Merge(dst, src Message) {
in := reflect.ValueOf(src)
out := reflect.ValueOf(dst)
if out.IsNil() {
panic("proto: nil destination")
}
if in.Type() != out.Type() {
// Explicit test prior to mergeStruct so that mistyped nils will fail
panic("proto: type mismatch")
}
if in.IsNil() {
// Merging nil into non-nil is a quiet no-op
return
}
mergeStruct(out.Elem(), in.Elem())
}
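// Illustrative note (not part of the upstream file): typical use of Clone and
// Merge, written against the Repeats test message from the jsonpb test objects
// earlier in this commit (package qualifiers shown for clarity):
//
//	a := &jsonpb.Repeats{RString: []string{"x"}}
//	b := proto.Clone(a).(*jsonpb.Repeats) // deep copy; b.RString is a fresh slice
//	proto.Merge(b, &jsonpb.Repeats{RString: []string{"y"}})
//	// b.RString is now ["x", "y"]: repeated fields are appended, per the doc comment above.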
func mergeStruct(out, in reflect.Value) {
sprop := GetProperties(in.Type())
for i := 0; i < in.NumField(); i++ {
f := in.Type().Field(i)
if strings.HasPrefix(f.Name, "XXX_") {
continue
}
mergeAny(out.Field(i), in.Field(i), false, sprop.Prop[i])
}
if emIn, ok := extendable(in.Addr().Interface()); ok {
emOut, _ := extendable(out.Addr().Interface())
mIn, muIn := emIn.extensionsRead()
if mIn != nil {
mOut := emOut.extensionsWrite()
muIn.Lock()
mergeExtension(mOut, mIn)
muIn.Unlock()
}
}
uf := in.FieldByName("XXX_unrecognized")
if !uf.IsValid() {
return
}
uin := uf.Bytes()
if len(uin) > 0 {
out.FieldByName("XXX_unrecognized").SetBytes(append([]byte(nil), uin...))
}
}
// mergeAny performs a merge between two values of the same type.
// viaPtr indicates whether the values were indirected through a pointer (implying proto2).
// prop is set if this is a struct field (it may be nil).
func mergeAny(out, in reflect.Value, viaPtr bool, prop *Properties) {
if in.Type() == protoMessageType {
if !in.IsNil() {
if out.IsNil() {
out.Set(reflect.ValueOf(Clone(in.Interface().(Message))))
} else {
Merge(out.Interface().(Message), in.Interface().(Message))
}
}
return
}
switch in.Kind() {
case reflect.Bool, reflect.Float32, reflect.Float64, reflect.Int32, reflect.Int64,
reflect.String, reflect.Uint32, reflect.Uint64:
if !viaPtr && isProto3Zero(in) {
return
}
out.Set(in)
case reflect.Interface:
// Probably a oneof field; copy non-nil values.
if in.IsNil() {
return
}
// Allocate destination if it is not set, or set to a different type.
// Otherwise we will merge as normal.
if out.IsNil() || out.Elem().Type() != in.Elem().Type() {
out.Set(reflect.New(in.Elem().Elem().Type())) // interface -> *T -> T -> new(T)
}
mergeAny(out.Elem(), in.Elem(), false, nil)
case reflect.Map:
if in.Len() == 0 {
return
}
if out.IsNil() {
out.Set(reflect.MakeMap(in.Type()))
}
// For maps with value types of *T or []byte we need to deep copy each value.
elemKind := in.Type().Elem().Kind()
for _, key := range in.MapKeys() {
var val reflect.Value
switch elemKind {
case reflect.Ptr:
val = reflect.New(in.Type().Elem().Elem())
mergeAny(val, in.MapIndex(key), false, nil)
case reflect.Slice:
val = in.MapIndex(key)
val = reflect.ValueOf(append([]byte{}, val.Bytes()...))
default:
val = in.MapIndex(key)
}
out.SetMapIndex(key, val)
}
case reflect.Ptr:
if in.IsNil() {
return
}
if out.IsNil() {
out.Set(reflect.New(in.Elem().Type()))
}
mergeAny(out.Elem(), in.Elem(), true, nil)
case reflect.Slice:
if in.IsNil() {
return
}
if in.Type().Elem().Kind() == reflect.Uint8 {
// []byte is a scalar bytes field, not a repeated field.
// Edge case: if this is in a proto3 message, a zero length
// bytes field is considered the zero value, and should not
// be merged.
if prop != nil && prop.proto3 && in.Len() == 0 {
return
}
// Make a deep copy.
// Append to []byte{} instead of []byte(nil) so that we never end up
// with a nil result.
out.SetBytes(append([]byte{}, in.Bytes()...))
return
}
n := in.Len()
if out.IsNil() {
out.Set(reflect.MakeSlice(in.Type(), 0, n))
}
switch in.Type().Elem().Kind() {
case reflect.Bool, reflect.Float32, reflect.Float64, reflect.Int32, reflect.Int64,
reflect.String, reflect.Uint32, reflect.Uint64:
out.Set(reflect.AppendSlice(out, in))
default:
for i := 0; i < n; i++ {
x := reflect.Indirect(reflect.New(in.Type().Elem()))
mergeAny(x, in.Index(i), false, nil)
out.Set(reflect.Append(out, x))
}
}
case reflect.Struct:
mergeStruct(out, in)
default:
// unknown type, so not a protocol buffer
log.Printf("proto: don't know how to copy %v", in)
}
}
func mergeExtension(out, in map[int32]Extension) {
for extNum, eIn := range in {
eOut := Extension{desc: eIn.desc}
if eIn.value != nil {
v := reflect.New(reflect.TypeOf(eIn.value)).Elem()
mergeAny(v, reflect.ValueOf(eIn.value), false, nil)
eOut.value = v.Interface()
}
if eIn.enc != nil {
eOut.enc = make([]byte, len(eIn.enc))
copy(eOut.enc, eIn.enc)
}
out[extNum] = eOut
}
}

970
vendor/github.com/golang/protobuf/proto/decode.go generated vendored Normal file
View File

@ -0,0 +1,970 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2010 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package proto
/*
* Routines for decoding protocol buffer data to construct in-memory representations.
*/
import (
"errors"
"fmt"
"io"
"os"
"reflect"
)
// errOverflow is returned when an integer is too large to be represented.
var errOverflow = errors.New("proto: integer overflow")
// ErrInternalBadWireType is returned by generated code when an incorrect
// wire type is encountered. It does not get returned to user code.
var ErrInternalBadWireType = errors.New("proto: internal error: bad wiretype for oneof")
// The fundamental decoders that interpret bytes on the wire.
// Those that take integer types all return uint64 and are
// therefore of type valueDecoder.
// DecodeVarint reads a varint-encoded integer from the slice.
// It returns the integer and the number of bytes consumed, or
// zero if there is not enough.
// This is the format for the
// int32, int64, uint32, uint64, bool, and enum
// protocol buffer types.
func DecodeVarint(buf []byte) (x uint64, n int) {
for shift := uint(0); shift < 64; shift += 7 {
if n >= len(buf) {
return 0, 0
}
b := uint64(buf[n])
n++
x |= (b & 0x7F) << shift
if (b & 0x80) == 0 {
return x, n
}
}
// The number is too large to represent in a 64-bit value.
return 0, 0
}
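// Illustrative sketch (not part of the upstream file): decoding the canonical
// varint example. 300 is encoded as the two bytes 0xAC 0x02: each byte carries
// seven payload bits (least-significant group first) and the high bit marks
// continuation.
func exampleDecodeVarint() {
	x, n := DecodeVarint([]byte{0xAC, 0x02})
	_ = x // 300
	_ = n // 2 bytes consumed
}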
func (p *Buffer) decodeVarintSlow() (x uint64, err error) {
i := p.index
l := len(p.buf)
for shift := uint(0); shift < 64; shift += 7 {
if i >= l {
err = io.ErrUnexpectedEOF
return
}
b := p.buf[i]
i++
x |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
p.index = i
return
}
}
// The number is too large to represent in a 64-bit value.
err = errOverflow
return
}
// DecodeVarint reads a varint-encoded integer from the Buffer.
// This is the format for the
// int32, int64, uint32, uint64, bool, and enum
// protocol buffer types.
func (p *Buffer) DecodeVarint() (x uint64, err error) {
i := p.index
buf := p.buf
if i >= len(buf) {
return 0, io.ErrUnexpectedEOF
} else if buf[i] < 0x80 {
p.index++
return uint64(buf[i]), nil
} else if len(buf)-i < 10 {
return p.decodeVarintSlow()
}
var b uint64
// we already checked the first byte
x = uint64(buf[i]) - 0x80
i++
b = uint64(buf[i])
i++
x += b << 7
if b&0x80 == 0 {
goto done
}
x -= 0x80 << 7
b = uint64(buf[i])
i++
x += b << 14
if b&0x80 == 0 {
goto done
}
x -= 0x80 << 14
b = uint64(buf[i])
i++
x += b << 21
if b&0x80 == 0 {
goto done
}
x -= 0x80 << 21
b = uint64(buf[i])
i++
x += b << 28
if b&0x80 == 0 {
goto done
}
x -= 0x80 << 28
b = uint64(buf[i])
i++
x += b << 35
if b&0x80 == 0 {
goto done
}
x -= 0x80 << 35
b = uint64(buf[i])
i++
x += b << 42
if b&0x80 == 0 {
goto done
}
x -= 0x80 << 42
b = uint64(buf[i])
i++
x += b << 49
if b&0x80 == 0 {
goto done
}
x -= 0x80 << 49
b = uint64(buf[i])
i++
x += b << 56
if b&0x80 == 0 {
goto done
}
x -= 0x80 << 56
b = uint64(buf[i])
i++
x += b << 63
if b&0x80 == 0 {
goto done
}
// x -= 0x80 << 63 // Always zero.
return 0, errOverflow
done:
p.index = i
return x, nil
}
// DecodeFixed64 reads a 64-bit integer from the Buffer.
// This is the format for the
// fixed64, sfixed64, and double protocol buffer types.
func (p *Buffer) DecodeFixed64() (x uint64, err error) {
// x, err already 0
i := p.index + 8
if i < 0 || i > len(p.buf) {
err = io.ErrUnexpectedEOF
return
}
p.index = i
x = uint64(p.buf[i-8])
x |= uint64(p.buf[i-7]) << 8
x |= uint64(p.buf[i-6]) << 16
x |= uint64(p.buf[i-5]) << 24
x |= uint64(p.buf[i-4]) << 32
x |= uint64(p.buf[i-3]) << 40
x |= uint64(p.buf[i-2]) << 48
x |= uint64(p.buf[i-1]) << 56
return
}
// DecodeFixed32 reads a 32-bit integer from the Buffer.
// This is the format for the
// fixed32, sfixed32, and float protocol buffer types.
func (p *Buffer) DecodeFixed32() (x uint64, err error) {
// x, err already 0
i := p.index + 4
if i < 0 || i > len(p.buf) {
err = io.ErrUnexpectedEOF
return
}
p.index = i
x = uint64(p.buf[i-4])
x |= uint64(p.buf[i-3]) << 8
x |= uint64(p.buf[i-2]) << 16
x |= uint64(p.buf[i-1]) << 24
return
}
// DecodeZigzag64 reads a zigzag-encoded 64-bit integer
// from the Buffer.
// This is the format used for the sint64 protocol buffer type.
func (p *Buffer) DecodeZigzag64() (x uint64, err error) {
x, err = p.DecodeVarint()
if err != nil {
return
}
x = (x >> 1) ^ uint64((int64(x&1)<<63)>>63)
return
}
// DecodeZigzag32 reads a zigzag-encoded 32-bit integer
// from the Buffer.
// This is the format used for the sint32 protocol buffer type.
func (p *Buffer) DecodeZigzag32() (x uint64, err error) {
x, err = p.DecodeVarint()
if err != nil {
return
}
x = uint64((uint32(x) >> 1) ^ uint32((int32(x&1)<<31)>>31))
return
}
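// Illustrative sketch (not part of the upstream file): zigzag decoding maps the
// unsigned wire values 0, 1, 2, 3, 4, ... back onto the signed integers
// 0, -1, 1, -2, 2, ..., so small negative numbers stay small on the wire.
func exampleDecodeZigzag() {
	b := NewBuffer([]byte{0x03})
	u, _ := b.DecodeZigzag64()
	_ = int64(u) // -2: the wire value 3 zigzag-decodes to -2
}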
// These are not ValueDecoders: they produce an array of bytes or a string.
// bytes, embedded messages
// DecodeRawBytes reads a count-delimited byte buffer from the Buffer.
// This is the format used for the bytes protocol buffer
// type and for embedded messages.
func (p *Buffer) DecodeRawBytes(alloc bool) (buf []byte, err error) {
n, err := p.DecodeVarint()
if err != nil {
return nil, err
}
nb := int(n)
if nb < 0 {
return nil, fmt.Errorf("proto: bad byte length %d", nb)
}
end := p.index + nb
if end < p.index || end > len(p.buf) {
return nil, io.ErrUnexpectedEOF
}
if !alloc {
// TODO: check if we can get more uses of alloc=false

buf = p.buf[p.index:end]
p.index += nb
return
}
buf = make([]byte, nb)
copy(buf, p.buf[p.index:])
p.index += nb
return
}
// DecodeStringBytes reads an encoded string from the Buffer.
// This is the format used for the proto2 string type.
func (p *Buffer) DecodeStringBytes() (s string, err error) {
buf, err := p.DecodeRawBytes(false)
if err != nil {
return
}
return string(buf), nil
}
// Skip the next item in the buffer. Its wire type is decoded and presented as an argument.
// If the protocol buffer has extensions, and the field matches, add it as an extension.
// Otherwise, if the XXX_unrecognized field exists, append the skipped data there.
func (o *Buffer) skipAndSave(t reflect.Type, tag, wire int, base structPointer, unrecField field) error {
oi := o.index
err := o.skip(t, tag, wire)
if err != nil {
return err
}
if !unrecField.IsValid() {
return nil
}
ptr := structPointer_Bytes(base, unrecField)
// Add the skipped field to struct field
obuf := o.buf
o.buf = *ptr
o.EncodeVarint(uint64(tag<<3 | wire))
*ptr = append(o.buf, obuf[oi:o.index]...)
o.buf = obuf
return nil
}
// Skip the next item in the buffer. Its wire type is decoded and presented as an argument.
func (o *Buffer) skip(t reflect.Type, tag, wire int) error {
var u uint64
var err error
switch wire {
case WireVarint:
_, err = o.DecodeVarint()
case WireFixed64:
_, err = o.DecodeFixed64()
case WireBytes:
_, err = o.DecodeRawBytes(false)
case WireFixed32:
_, err = o.DecodeFixed32()
case WireStartGroup:
for {
u, err = o.DecodeVarint()
if err != nil {
break
}
fwire := int(u & 0x7)
if fwire == WireEndGroup {
break
}
ftag := int(u >> 3)
err = o.skip(t, ftag, fwire)
if err != nil {
break
}
}
default:
err = fmt.Errorf("proto: can't skip unknown wire type %d for %s", wire, t)
}
return err
}
// Unmarshaler is the interface representing objects that can
// unmarshal themselves. The method should reset the receiver before
// decoding starts. The argument points to data that may be
// overwritten, so implementations should not keep references to the
// buffer.
type Unmarshaler interface {
Unmarshal([]byte) error
}
// Unmarshal parses the protocol buffer representation in buf and places the
// decoded result in pb. If the struct underlying pb does not match
// the data in buf, the results can be unpredictable.
//
// Unmarshal resets pb before starting to unmarshal, so any
// existing data in pb is always removed. Use UnmarshalMerge
// to preserve and append to existing data.
func Unmarshal(buf []byte, pb Message) error {
pb.Reset()
return UnmarshalMerge(buf, pb)
}
// UnmarshalMerge parses the protocol buffer representation in buf and
// writes the decoded result to pb. If the struct underlying pb does not match
// the data in buf, the results can be unpredictable.
//
// UnmarshalMerge merges into existing data in pb.
// Most code should use Unmarshal instead.
func UnmarshalMerge(buf []byte, pb Message) error {
// If the object can unmarshal itself, let it.
if u, ok := pb.(Unmarshaler); ok {
return u.Unmarshal(buf)
}
return NewBuffer(buf).Unmarshal(pb)
}
// DecodeMessage reads a count-delimited message from the Buffer.
func (p *Buffer) DecodeMessage(pb Message) error {
enc, err := p.DecodeRawBytes(false)
if err != nil {
return err
}
return NewBuffer(enc).Unmarshal(pb)
}
// DecodeGroup reads a tag-delimited group from the Buffer.
func (p *Buffer) DecodeGroup(pb Message) error {
typ, base, err := getbase(pb)
if err != nil {
return err
}
return p.unmarshalType(typ.Elem(), GetProperties(typ.Elem()), true, base)
}
// Unmarshal parses the protocol buffer representation in the
// Buffer and places the decoded result in pb. If the struct
// underlying pb does not match the data in the buffer, the results can be
// unpredictable.
//
// Unlike proto.Unmarshal, this does not reset pb before starting to unmarshal.
func (p *Buffer) Unmarshal(pb Message) error {
// If the object can unmarshal itself, let it.
if u, ok := pb.(Unmarshaler); ok {
err := u.Unmarshal(p.buf[p.index:])
p.index = len(p.buf)
return err
}
typ, base, err := getbase(pb)
if err != nil {
return err
}
err = p.unmarshalType(typ.Elem(), GetProperties(typ.Elem()), false, base)
if collectStats {
stats.Decode++
}
return err
}
// unmarshalType does the work of unmarshaling a structure.
func (o *Buffer) unmarshalType(st reflect.Type, prop *StructProperties, is_group bool, base structPointer) error {
var state errorState
required, reqFields := prop.reqCount, uint64(0)
var err error
for err == nil && o.index < len(o.buf) {
oi := o.index
var u uint64
u, err = o.DecodeVarint()
if err != nil {
break
}
wire := int(u & 0x7)
if wire == WireEndGroup {
if is_group {
if required > 0 {
// Not enough information to determine the exact field.
// (See below.)
return &RequiredNotSetError{"{Unknown}"}
}
return nil // input is satisfied
}
return fmt.Errorf("proto: %s: wiretype end group for non-group", st)
}
tag := int(u >> 3)
if tag <= 0 {
return fmt.Errorf("proto: %s: illegal tag %d (wire type %d)", st, tag, wire)
}
fieldnum, ok := prop.decoderTags.get(tag)
if !ok {
// Maybe it's an extension?
if prop.extendable {
if e, _ := extendable(structPointer_Interface(base, st)); isExtensionField(e, int32(tag)) {
if err = o.skip(st, tag, wire); err == nil {
extmap := e.extensionsWrite()
ext := extmap[int32(tag)] // may be missing
ext.enc = append(ext.enc, o.buf[oi:o.index]...)
extmap[int32(tag)] = ext
}
continue
}
}
// Maybe it's a oneof?
if prop.oneofUnmarshaler != nil {
m := structPointer_Interface(base, st).(Message)
// First return value indicates whether tag is a oneof field.
ok, err = prop.oneofUnmarshaler(m, tag, wire, o)
if err == ErrInternalBadWireType {
// Map the error to something more descriptive.
// Do the formatting here to save generated code space.
err = fmt.Errorf("bad wiretype for oneof field in %T", m)
}
if ok {
continue
}
}
err = o.skipAndSave(st, tag, wire, base, prop.unrecField)
continue
}
p := prop.Prop[fieldnum]
if p.dec == nil {
fmt.Fprintf(os.Stderr, "proto: no protobuf decoder for %s.%s\n", st, st.Field(fieldnum).Name)
continue
}
dec := p.dec
if wire != WireStartGroup && wire != p.WireType {
if wire == WireBytes && p.packedDec != nil {
// a packable field
dec = p.packedDec
} else {
err = fmt.Errorf("proto: bad wiretype for field %s.%s: got wiretype %d, want %d", st, st.Field(fieldnum).Name, wire, p.WireType)
continue
}
}
decErr := dec(o, p, base)
if decErr != nil && !state.shouldContinue(decErr, p) {
err = decErr
}
if err == nil && p.Required {
// Successfully decoded a required field.
if tag <= 64 {
// use bitmap for fields 1-64 to catch field reuse.
var mask uint64 = 1 << uint64(tag-1)
if reqFields&mask == 0 {
// new required field
reqFields |= mask
required--
}
} else {
// This is imprecise. It can be fooled by a required field
// with a tag > 64 that is encoded twice; that's very rare.
// A fully correct implementation would require allocating
// a data structure, which we would like to avoid.
required--
}
}
}
if err == nil {
if is_group {
return io.ErrUnexpectedEOF
}
if state.err != nil {
return state.err
}
if required > 0 {
// Not enough information to determine the exact field. If we use extra
// CPU, we could determine the field only if the missing required field
// has a tag <= 64 and we check reqFields.
return &RequiredNotSetError{"{Unknown}"}
}
}
return err
}
// Individual type decoders
// For each,
// u is the decoded value,
// v is a pointer to the field (pointer) in the struct
// Sizes of the pools to allocate inside the Buffer.
// The goal is modest amortization and allocation
// on at least 16-byte boundaries.
const (
boolPoolSize = 16
uint32PoolSize = 8
uint64PoolSize = 4
)
// Decode a bool.
func (o *Buffer) dec_bool(p *Properties, base structPointer) error {
u, err := p.valDec(o)
if err != nil {
return err
}
if len(o.bools) == 0 {
o.bools = make([]bool, boolPoolSize)
}
o.bools[0] = u != 0
*structPointer_Bool(base, p.field) = &o.bools[0]
o.bools = o.bools[1:]
return nil
}
func (o *Buffer) dec_proto3_bool(p *Properties, base structPointer) error {
u, err := p.valDec(o)
if err != nil {
return err
}
*structPointer_BoolVal(base, p.field) = u != 0
return nil
}
// Decode an int32.
func (o *Buffer) dec_int32(p *Properties, base structPointer) error {
u, err := p.valDec(o)
if err != nil {
return err
}
word32_Set(structPointer_Word32(base, p.field), o, uint32(u))
return nil
}
func (o *Buffer) dec_proto3_int32(p *Properties, base structPointer) error {
u, err := p.valDec(o)
if err != nil {
return err
}
word32Val_Set(structPointer_Word32Val(base, p.field), uint32(u))
return nil
}
// Decode an int64.
func (o *Buffer) dec_int64(p *Properties, base structPointer) error {
u, err := p.valDec(o)
if err != nil {
return err
}
word64_Set(structPointer_Word64(base, p.field), o, u)
return nil
}
func (o *Buffer) dec_proto3_int64(p *Properties, base structPointer) error {
u, err := p.valDec(o)
if err != nil {
return err
}
word64Val_Set(structPointer_Word64Val(base, p.field), o, u)
return nil
}
// Decode a string.
func (o *Buffer) dec_string(p *Properties, base structPointer) error {
s, err := o.DecodeStringBytes()
if err != nil {
return err
}
*structPointer_String(base, p.field) = &s
return nil
}
func (o *Buffer) dec_proto3_string(p *Properties, base structPointer) error {
s, err := o.DecodeStringBytes()
if err != nil {
return err
}
*structPointer_StringVal(base, p.field) = s
return nil
}
// Decode a slice of bytes ([]byte).
func (o *Buffer) dec_slice_byte(p *Properties, base structPointer) error {
b, err := o.DecodeRawBytes(true)
if err != nil {
return err
}
*structPointer_Bytes(base, p.field) = b
return nil
}
// Decode a slice of bools ([]bool).
func (o *Buffer) dec_slice_bool(p *Properties, base structPointer) error {
u, err := p.valDec(o)
if err != nil {
return err
}
v := structPointer_BoolSlice(base, p.field)
*v = append(*v, u != 0)
return nil
}
// Decode a slice of bools ([]bool) in packed format.
func (o *Buffer) dec_slice_packed_bool(p *Properties, base structPointer) error {
v := structPointer_BoolSlice(base, p.field)
nn, err := o.DecodeVarint()
if err != nil {
return err
}
nb := int(nn) // number of bytes of encoded bools
fin := o.index + nb
if fin < o.index {
return errOverflow
}
y := *v
for o.index < fin {
u, err := p.valDec(o)
if err != nil {
return err
}
y = append(y, u != 0)
}
*v = y
return nil
}
// Decode a slice of int32s ([]int32).
func (o *Buffer) dec_slice_int32(p *Properties, base structPointer) error {
u, err := p.valDec(o)
if err != nil {
return err
}
structPointer_Word32Slice(base, p.field).Append(uint32(u))
return nil
}
// Decode a slice of int32s ([]int32) in packed format.
func (o *Buffer) dec_slice_packed_int32(p *Properties, base structPointer) error {
v := structPointer_Word32Slice(base, p.field)
nn, err := o.DecodeVarint()
if err != nil {
return err
}
nb := int(nn) // number of bytes of encoded int32s
fin := o.index + nb
if fin < o.index {
return errOverflow
}
for o.index < fin {
u, err := p.valDec(o)
if err != nil {
return err
}
v.Append(uint32(u))
}
return nil
}
// Decode a slice of int64s ([]int64).
func (o *Buffer) dec_slice_int64(p *Properties, base structPointer) error {
u, err := p.valDec(o)
if err != nil {
return err
}
structPointer_Word64Slice(base, p.field).Append(u)
return nil
}
// Decode a slice of int64s ([]int64) in packed format.
func (o *Buffer) dec_slice_packed_int64(p *Properties, base structPointer) error {
v := structPointer_Word64Slice(base, p.field)
nn, err := o.DecodeVarint()
if err != nil {
return err
}
nb := int(nn) // number of bytes of encoded int64s
fin := o.index + nb
if fin < o.index {
return errOverflow
}
for o.index < fin {
u, err := p.valDec(o)
if err != nil {
return err
}
v.Append(u)
}
return nil
}
// Decode a slice of strings ([]string).
func (o *Buffer) dec_slice_string(p *Properties, base structPointer) error {
s, err := o.DecodeStringBytes()
if err != nil {
return err
}
v := structPointer_StringSlice(base, p.field)
*v = append(*v, s)
return nil
}
// Decode a slice of slice of bytes ([][]byte).
func (o *Buffer) dec_slice_slice_byte(p *Properties, base structPointer) error {
b, err := o.DecodeRawBytes(true)
if err != nil {
return err
}
v := structPointer_BytesSlice(base, p.field)
*v = append(*v, b)
return nil
}
// Decode a map field.
func (o *Buffer) dec_new_map(p *Properties, base structPointer) error {
raw, err := o.DecodeRawBytes(false)
if err != nil {
return err
}
oi := o.index // index at the end of this map entry
o.index -= len(raw) // move buffer back to start of map entry
mptr := structPointer_NewAt(base, p.field, p.mtype) // *map[K]V
if mptr.Elem().IsNil() {
mptr.Elem().Set(reflect.MakeMap(mptr.Type().Elem()))
}
v := mptr.Elem() // map[K]V
// Prepare addressable doubly-indirect placeholders for the key and value types.
// See enc_new_map for why.
keyptr := reflect.New(reflect.PtrTo(p.mtype.Key())).Elem() // addressable *K
keybase := toStructPointer(keyptr.Addr()) // **K
var valbase structPointer
var valptr reflect.Value
switch p.mtype.Elem().Kind() {
case reflect.Slice:
// []byte
var dummy []byte
valptr = reflect.ValueOf(&dummy) // *[]byte
valbase = toStructPointer(valptr) // *[]byte
case reflect.Ptr:
// message; valptr is **Msg; need to allocate the intermediate pointer
valptr = reflect.New(reflect.PtrTo(p.mtype.Elem())).Elem() // addressable *V
valptr.Set(reflect.New(valptr.Type().Elem()))
valbase = toStructPointer(valptr)
default:
// everything else
valptr = reflect.New(reflect.PtrTo(p.mtype.Elem())).Elem() // addressable *V
valbase = toStructPointer(valptr.Addr()) // **V
}
// Decode.
// This parses a restricted wire format, namely the encoding of a message
// with two fields. See enc_new_map for the format.
for o.index < oi {
// tagcode for key and value properties are always a single byte
// because they have tags 1 and 2.
tagcode := o.buf[o.index]
o.index++
switch tagcode {
case p.mkeyprop.tagcode[0]:
if err := p.mkeyprop.dec(o, p.mkeyprop, keybase); err != nil {
return err
}
case p.mvalprop.tagcode[0]:
if err := p.mvalprop.dec(o, p.mvalprop, valbase); err != nil {
return err
}
default:
// TODO: Should we silently skip this instead?
return fmt.Errorf("proto: bad map data tag %d", raw[0])
}
}
keyelem, valelem := keyptr.Elem(), valptr.Elem()
if !keyelem.IsValid() {
keyelem = reflect.Zero(p.mtype.Key())
}
if !valelem.IsValid() {
valelem = reflect.Zero(p.mtype.Elem())
}
v.SetMapIndex(keyelem, valelem)
return nil
}
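// Illustrative note (not part of the upstream file): the "message with two
// fields" parsed above is the standard map-entry encoding. For example, one
// entry of the jsonpb Maps.MInt64Str field (field 1, map<int64,string>)
// holding {42: "answer"} appears on the wire as:
//
//	0x0a 0x0a            // field 1, wire type 2 (bytes); entry length 10
//	0x08 0x2a            // entry key   (field 1, varint): 42
//	0x12 0x06 "answer"   // entry value (field 2, bytes):  length 6, then "answer"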
// Decode a group.
func (o *Buffer) dec_struct_group(p *Properties, base structPointer) error {
bas := structPointer_GetStructPointer(base, p.field)
if structPointer_IsNil(bas) {
// allocate new nested message
bas = toStructPointer(reflect.New(p.stype))
structPointer_SetStructPointer(base, p.field, bas)
}
return o.unmarshalType(p.stype, p.sprop, true, bas)
}
// Decode an embedded message.
func (o *Buffer) dec_struct_message(p *Properties, base structPointer) (err error) {
raw, e := o.DecodeRawBytes(false)
if e != nil {
return e
}
bas := structPointer_GetStructPointer(base, p.field)
if structPointer_IsNil(bas) {
// allocate new nested message
bas = toStructPointer(reflect.New(p.stype))
structPointer_SetStructPointer(base, p.field, bas)
}
// If the object can unmarshal itself, let it.
if p.isUnmarshaler {
iv := structPointer_Interface(bas, p.stype)
return iv.(Unmarshaler).Unmarshal(raw)
}
obuf := o.buf
oi := o.index
o.buf = raw
o.index = 0
err = o.unmarshalType(p.stype, p.sprop, false, bas)
o.buf = obuf
o.index = oi
return err
}
// Decode a slice of embedded messages.
func (o *Buffer) dec_slice_struct_message(p *Properties, base structPointer) error {
return o.dec_slice_struct(p, false, base)
}
// Decode a slice of embedded groups.
func (o *Buffer) dec_slice_struct_group(p *Properties, base structPointer) error {
return o.dec_slice_struct(p, true, base)
}
// Decode a slice of structs ([]*struct).
func (o *Buffer) dec_slice_struct(p *Properties, is_group bool, base structPointer) error {
v := reflect.New(p.stype)
bas := toStructPointer(v)
structPointer_StructPointerSlice(base, p.field).Append(bas)
if is_group {
err := o.unmarshalType(p.stype, p.sprop, is_group, bas)
return err
}
raw, err := o.DecodeRawBytes(false)
if err != nil {
return err
}
// If the object can unmarshal itself, let it.
if p.isUnmarshaler {
iv := v.Interface()
return iv.(Unmarshaler).Unmarshal(raw)
}
obuf := o.buf
oi := o.index
o.buf = raw
o.index = 0
err = o.unmarshalType(p.stype, p.sprop, is_group, bas)
o.buf = obuf
o.index = oi
return err
}

1362
vendor/github.com/golang/protobuf/proto/encode.go generated vendored Normal file

File diff suppressed because it is too large Load Diff

300
vendor/github.com/golang/protobuf/proto/equal.go generated vendored Normal file
View File

@ -0,0 +1,300 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2011 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
// Protocol buffer comparison.
package proto
import (
"bytes"
"log"
"reflect"
"strings"
)
/*
Equal returns true iff protocol buffers a and b are equal.
The arguments must both be pointers to protocol buffer structs.
Equality is defined in this way:
- Two messages are equal iff they are the same type,
corresponding fields are equal, unknown field sets
are equal, and extensions sets are equal.
- Two set scalar fields are equal iff their values are equal.
If the fields are of a floating-point type, remember that
NaN != x for all x, including NaN. If the message is defined
in a proto3 .proto file, fields are not "set"; specifically,
zero length proto3 "bytes" fields are equal (nil == {}).
- Two repeated fields are equal iff their lengths are the same,
and their corresponding elements are equal. Note a "bytes" field,
although represented by []byte, is not a repeated field and the
rule for the scalar fields described above applies.
- Two unset fields are equal.
- Two unknown field sets are equal if their current
encoded state is equal.
- Two extension sets are equal iff they have corresponding
elements that are pairwise equal.
- Two map fields are equal iff their lengths are the same,
and they contain the same set of elements. Zero-length map
fields are equal.
  - Every other combination of things is not equal.
The return value is undefined if a and b are not protocol buffers.
*/
func Equal(a, b Message) bool {
if a == nil || b == nil {
return a == b
}
v1, v2 := reflect.ValueOf(a), reflect.ValueOf(b)
if v1.Type() != v2.Type() {
return false
}
if v1.Kind() == reflect.Ptr {
if v1.IsNil() {
return v2.IsNil()
}
if v2.IsNil() {
return false
}
v1, v2 = v1.Elem(), v2.Elem()
}
if v1.Kind() != reflect.Struct {
return false
}
return equalStruct(v1, v2)
}
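// Illustrative note (not part of the upstream file): one non-obvious
// consequence of the floating-point rule in the doc comment above, using the
// Repeats test message from the jsonpb test objects earlier in this commit:
//
//	a := &jsonpb.Repeats{RFloat: []float32{float32(math.NaN())}}
//	b := proto.Clone(a)
//	proto.Equal(a, b) // false: NaN != NaN, even though b is a deep copy of a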
// v1 and v2 are known to have the same type.
func equalStruct(v1, v2 reflect.Value) bool {
sprop := GetProperties(v1.Type())
for i := 0; i < v1.NumField(); i++ {
f := v1.Type().Field(i)
if strings.HasPrefix(f.Name, "XXX_") {
continue
}
f1, f2 := v1.Field(i), v2.Field(i)
if f.Type.Kind() == reflect.Ptr {
if n1, n2 := f1.IsNil(), f2.IsNil(); n1 && n2 {
// both unset
continue
} else if n1 != n2 {
// set/unset mismatch
return false
}
b1, ok := f1.Interface().(raw)
if ok {
b2 := f2.Interface().(raw)
// RawMessage
if !bytes.Equal(b1.Bytes(), b2.Bytes()) {
return false
}
continue
}
f1, f2 = f1.Elem(), f2.Elem()
}
if !equalAny(f1, f2, sprop.Prop[i]) {
return false
}
}
if em1 := v1.FieldByName("XXX_InternalExtensions"); em1.IsValid() {
em2 := v2.FieldByName("XXX_InternalExtensions")
if !equalExtensions(v1.Type(), em1.Interface().(XXX_InternalExtensions), em2.Interface().(XXX_InternalExtensions)) {
return false
}
}
if em1 := v1.FieldByName("XXX_extensions"); em1.IsValid() {
em2 := v2.FieldByName("XXX_extensions")
if !equalExtMap(v1.Type(), em1.Interface().(map[int32]Extension), em2.Interface().(map[int32]Extension)) {
return false
}
}
uf := v1.FieldByName("XXX_unrecognized")
if !uf.IsValid() {
return true
}
u1 := uf.Bytes()
u2 := v2.FieldByName("XXX_unrecognized").Bytes()
if !bytes.Equal(u1, u2) {
return false
}
return true
}
// v1 and v2 are known to have the same type.
// prop may be nil.
func equalAny(v1, v2 reflect.Value, prop *Properties) bool {
if v1.Type() == protoMessageType {
m1, _ := v1.Interface().(Message)
m2, _ := v2.Interface().(Message)
return Equal(m1, m2)
}
switch v1.Kind() {
case reflect.Bool:
return v1.Bool() == v2.Bool()
case reflect.Float32, reflect.Float64:
return v1.Float() == v2.Float()
case reflect.Int32, reflect.Int64:
return v1.Int() == v2.Int()
case reflect.Interface:
// Probably a oneof field; compare the inner values.
n1, n2 := v1.IsNil(), v2.IsNil()
if n1 || n2 {
return n1 == n2
}
e1, e2 := v1.Elem(), v2.Elem()
if e1.Type() != e2.Type() {
return false
}
return equalAny(e1, e2, nil)
case reflect.Map:
if v1.Len() != v2.Len() {
return false
}
for _, key := range v1.MapKeys() {
val2 := v2.MapIndex(key)
if !val2.IsValid() {
// This key was not found in the second map.
return false
}
if !equalAny(v1.MapIndex(key), val2, nil) {
return false
}
}
return true
case reflect.Ptr:
// Maps may have nil values in them, so check for nil.
if v1.IsNil() && v2.IsNil() {
return true
}
if v1.IsNil() != v2.IsNil() {
return false
}
return equalAny(v1.Elem(), v2.Elem(), prop)
case reflect.Slice:
if v1.Type().Elem().Kind() == reflect.Uint8 {
// short circuit: []byte
// Edge case: if this is in a proto3 message, a zero length
// bytes field is considered the zero value.
if prop != nil && prop.proto3 && v1.Len() == 0 && v2.Len() == 0 {
return true
}
if v1.IsNil() != v2.IsNil() {
return false
}
return bytes.Equal(v1.Interface().([]byte), v2.Interface().([]byte))
}
if v1.Len() != v2.Len() {
return false
}
for i := 0; i < v1.Len(); i++ {
if !equalAny(v1.Index(i), v2.Index(i), prop) {
return false
}
}
return true
case reflect.String:
return v1.Interface().(string) == v2.Interface().(string)
case reflect.Struct:
return equalStruct(v1, v2)
case reflect.Uint32, reflect.Uint64:
return v1.Uint() == v2.Uint()
}
// unknown type, so not a protocol buffer
log.Printf("proto: don't know how to compare %v", v1)
return false
}
// base is the struct type that the extensions are based on.
// x1 and x2 are InternalExtensions.
func equalExtensions(base reflect.Type, x1, x2 XXX_InternalExtensions) bool {
em1, _ := x1.extensionsRead()
em2, _ := x2.extensionsRead()
return equalExtMap(base, em1, em2)
}
func equalExtMap(base reflect.Type, em1, em2 map[int32]Extension) bool {
if len(em1) != len(em2) {
return false
}
for extNum, e1 := range em1 {
e2, ok := em2[extNum]
if !ok {
return false
}
m1, m2 := e1.value, e2.value
if m1 != nil && m2 != nil {
// Both are unencoded.
if !equalAny(reflect.ValueOf(m1), reflect.ValueOf(m2), nil) {
return false
}
continue
}
// At least one is encoded. To do a semantically correct comparison
// we need to unmarshal them first.
var desc *ExtensionDesc
if m := extensionMaps[base]; m != nil {
desc = m[extNum]
}
if desc == nil {
log.Printf("proto: don't know how to compare extension %d of %v", extNum, base)
continue
}
var err error
if m1 == nil {
m1, err = decodeExtension(e1.enc, desc)
}
if m2 == nil && err == nil {
m2, err = decodeExtension(e2.enc, desc)
}
if err != nil {
// The encoded form is invalid.
log.Printf("proto: badly encoded extension %d of %v: %v", extNum, base, err)
return false
}
if !equalAny(reflect.ValueOf(m1), reflect.ValueOf(m2), nil) {
return false
}
}
return true
}

587
vendor/github.com/golang/protobuf/proto/extensions.go generated vendored Normal file
View File

@ -0,0 +1,587 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2010 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package proto
/*
* Types and routines for supporting protocol buffer extensions.
*/
import (
"errors"
"fmt"
"reflect"
"strconv"
"sync"
)
// ErrMissingExtension is the error returned by GetExtension if the named extension is not in the message.
var ErrMissingExtension = errors.New("proto: missing extension")
// ExtensionRange represents a range of message extensions for a protocol buffer.
// Used in code generated by the protocol compiler.
type ExtensionRange struct {
Start, End int32 // both inclusive
}
// extendableProto is an interface implemented by any protocol buffer generated by the current
// proto compiler that may be extended.
type extendableProto interface {
Message
ExtensionRangeArray() []ExtensionRange
extensionsWrite() map[int32]Extension
extensionsRead() (map[int32]Extension, sync.Locker)
}
// extendableProtoV1 is an interface implemented by a protocol buffer generated by the previous
// version of the proto compiler that may be extended.
type extendableProtoV1 interface {
Message
ExtensionRangeArray() []ExtensionRange
ExtensionMap() map[int32]Extension
}
// extensionAdapter is a wrapper around extendableProtoV1 that implements extendableProto.
type extensionAdapter struct {
extendableProtoV1
}
func (e extensionAdapter) extensionsWrite() map[int32]Extension {
return e.ExtensionMap()
}
func (e extensionAdapter) extensionsRead() (map[int32]Extension, sync.Locker) {
return e.ExtensionMap(), notLocker{}
}
// notLocker is a sync.Locker whose Lock and Unlock methods are nops.
type notLocker struct{}
func (n notLocker) Lock() {}
func (n notLocker) Unlock() {}
// extendable returns the extendableProto interface for the given generated proto message.
// If the proto message has the old extension format, it returns a wrapper that implements
// the extendableProto interface.
func extendable(p interface{}) (extendableProto, bool) {
if ep, ok := p.(extendableProto); ok {
return ep, ok
}
if ep, ok := p.(extendableProtoV1); ok {
return extensionAdapter{ep}, ok
}
return nil, false
}
// XXX_InternalExtensions is an internal representation of proto extensions.
//
// Each generated message struct type embeds an anonymous XXX_InternalExtensions field,
// thus gaining the unexported 'extensions' method, which can be called only from the proto package.
//
// The methods of XXX_InternalExtensions are not concurrency safe in general,
// but calls to logically read-only methods such as has and get may be executed concurrently.
type XXX_InternalExtensions struct {
// The struct must be indirect so that if a user inadvertently copies a
// generated message and its embedded XXX_InternalExtensions, they
// avoid the mayhem of a copied mutex.
//
// The mutex serializes all logically read-only operations to p.extensionMap.
// It is up to the client to ensure that write operations to p.extensionMap are
// mutually exclusive with other accesses.
p *struct {
mu sync.Mutex
extensionMap map[int32]Extension
}
}
// extensionsWrite returns the extension map, creating it on first use.
func (e *XXX_InternalExtensions) extensionsWrite() map[int32]Extension {
if e.p == nil {
e.p = new(struct {
mu sync.Mutex
extensionMap map[int32]Extension
})
e.p.extensionMap = make(map[int32]Extension)
}
return e.p.extensionMap
}
// extensionsRead returns the extensions map for read-only use. It may be nil.
// The caller must hold the returned mutex's lock when accessing Elements within the map.
func (e *XXX_InternalExtensions) extensionsRead() (map[int32]Extension, sync.Locker) {
if e.p == nil {
return nil, nil
}
return e.p.extensionMap, &e.p.mu
}
var extendableProtoType = reflect.TypeOf((*extendableProto)(nil)).Elem()
var extendableProtoV1Type = reflect.TypeOf((*extendableProtoV1)(nil)).Elem()
// ExtensionDesc represents an extension specification.
// Used in generated code from the protocol compiler.
type ExtensionDesc struct {
ExtendedType Message // nil pointer to the type that is being extended
ExtensionType interface{} // nil pointer to the extension type
Field int32 // field number
Name string // fully-qualified name of extension, for text formatting
Tag string // protobuf tag style
Filename string // name of the file in which the extension is defined
}
func (ed *ExtensionDesc) repeated() bool {
t := reflect.TypeOf(ed.ExtensionType)
return t.Kind() == reflect.Slice && t.Elem().Kind() != reflect.Uint8
}
// Extension represents an extension in a message.
type Extension struct {
// When an extension is stored in a message using SetExtension
// only desc and value are set. When the message is marshaled
// enc will be set to the encoded form of the message.
//
// When a message is unmarshaled and contains extensions, each
// extension will have only enc set. When such an extension is
// accessed using GetExtension (or GetExtensions) desc and value
// will be set.
desc *ExtensionDesc
value interface{}
enc []byte
}
// SetRawExtension is for testing only.
func SetRawExtension(base Message, id int32, b []byte) {
epb, ok := extendable(base)
if !ok {
return
}
extmap := epb.extensionsWrite()
extmap[id] = Extension{enc: b}
}
// isExtensionField returns true iff the given field number is in an extension range.
func isExtensionField(pb extendableProto, field int32) bool {
for _, er := range pb.ExtensionRangeArray() {
if er.Start <= field && field <= er.End {
return true
}
}
return false
}
// checkExtensionTypes checks that the given extension is valid for pb.
func checkExtensionTypes(pb extendableProto, extension *ExtensionDesc) error {
var pbi interface{} = pb
// Check the extended type.
if ea, ok := pbi.(extensionAdapter); ok {
pbi = ea.extendableProtoV1
}
if a, b := reflect.TypeOf(pbi), reflect.TypeOf(extension.ExtendedType); a != b {
return errors.New("proto: bad extended type; " + b.String() + " does not extend " + a.String())
}
// Check the range.
if !isExtensionField(pb, extension.Field) {
return errors.New("proto: bad extension number; not in declared ranges")
}
return nil
}
// extPropKey is sufficient to uniquely identify an extension.
type extPropKey struct {
base reflect.Type
field int32
}
var extProp = struct {
sync.RWMutex
m map[extPropKey]*Properties
}{
m: make(map[extPropKey]*Properties),
}
func extensionProperties(ed *ExtensionDesc) *Properties {
key := extPropKey{base: reflect.TypeOf(ed.ExtendedType), field: ed.Field}
extProp.RLock()
if prop, ok := extProp.m[key]; ok {
extProp.RUnlock()
return prop
}
extProp.RUnlock()
extProp.Lock()
defer extProp.Unlock()
// Check again.
if prop, ok := extProp.m[key]; ok {
return prop
}
prop := new(Properties)
prop.Init(reflect.TypeOf(ed.ExtensionType), "unknown_name", ed.Tag, nil)
extProp.m[key] = prop
return prop
}
// encode encodes any unmarshaled (unencoded) extensions in e.
func encodeExtensions(e *XXX_InternalExtensions) error {
m, mu := e.extensionsRead()
if m == nil {
return nil // fast path
}
mu.Lock()
defer mu.Unlock()
return encodeExtensionsMap(m)
}
// encode encodes any unmarshaled (unencoded) extensions in e.
func encodeExtensionsMap(m map[int32]Extension) error {
for k, e := range m {
if e.value == nil || e.desc == nil {
// Extension is only in its encoded form.
continue
}
// We don't skip extensions that have an encoded form set,
// because the extension value may have been mutated after
// the last time this function was called.
et := reflect.TypeOf(e.desc.ExtensionType)
props := extensionProperties(e.desc)
p := NewBuffer(nil)
// If e.value has type T, the encoder expects a *struct{ X T }.
// Pass a *T with a zero field and hope it all works out.
x := reflect.New(et)
x.Elem().Set(reflect.ValueOf(e.value))
if err := props.enc(p, props, toStructPointer(x)); err != nil {
return err
}
e.enc = p.buf
m[k] = e
}
return nil
}
func extensionsSize(e *XXX_InternalExtensions) (n int) {
m, mu := e.extensionsRead()
if m == nil {
return 0
}
mu.Lock()
defer mu.Unlock()
return extensionsMapSize(m)
}
func extensionsMapSize(m map[int32]Extension) (n int) {
for _, e := range m {
if e.value == nil || e.desc == nil {
// Extension is only in its encoded form.
n += len(e.enc)
continue
}
// We don't skip extensions that have an encoded form set,
// because the extension value may have been mutated after
// the last time this function was called.
et := reflect.TypeOf(e.desc.ExtensionType)
props := extensionProperties(e.desc)
// If e.value has type T, the encoder expects a *struct{ X T }.
// Pass a *T with a zero field and hope it all works out.
x := reflect.New(et)
x.Elem().Set(reflect.ValueOf(e.value))
n += props.size(props, toStructPointer(x))
}
return
}
// HasExtension returns whether the given extension is present in pb.
func HasExtension(pb Message, extension *ExtensionDesc) bool {
// TODO: Check types, field numbers, etc.?
epb, ok := extendable(pb)
if !ok {
return false
}
extmap, mu := epb.extensionsRead()
if extmap == nil {
return false
}
mu.Lock()
_, ok = extmap[extension.Field]
mu.Unlock()
return ok
}
// ClearExtension removes the given extension from pb.
func ClearExtension(pb Message, extension *ExtensionDesc) {
epb, ok := extendable(pb)
if !ok {
return
}
// TODO: Check types, field numbers, etc.?
extmap := epb.extensionsWrite()
delete(extmap, extension.Field)
}
// GetExtension parses and returns the given extension of pb.
// If the extension is not present and has no default value it returns ErrMissingExtension.
func GetExtension(pb Message, extension *ExtensionDesc) (interface{}, error) {
epb, ok := extendable(pb)
if !ok {
return nil, errors.New("proto: not an extendable proto")
}
if err := checkExtensionTypes(epb, extension); err != nil {
return nil, err
}
emap, mu := epb.extensionsRead()
if emap == nil {
return defaultExtensionValue(extension)
}
mu.Lock()
defer mu.Unlock()
e, ok := emap[extension.Field]
if !ok {
// defaultExtensionValue returns the default value or
// ErrMissingExtension if there is no default.
return defaultExtensionValue(extension)
}
if e.value != nil {
// Already decoded. Check the descriptor, though.
if e.desc != extension {
// This shouldn't happen. If it does, it means that
// GetExtension was called twice with two different
// descriptors with the same field number.
return nil, errors.New("proto: descriptor conflict")
}
return e.value, nil
}
v, err := decodeExtension(e.enc, extension)
if err != nil {
return nil, err
}
// Remember the decoded version and drop the encoded version.
// That way it is safe to mutate what we return.
e.value = v
e.desc = extension
e.enc = nil
emap[extension.Field] = e
return e.value, nil
}
// defaultExtensionValue returns the default value for extension.
// If no default for an extension is defined ErrMissingExtension is returned.
func defaultExtensionValue(extension *ExtensionDesc) (interface{}, error) {
t := reflect.TypeOf(extension.ExtensionType)
props := extensionProperties(extension)
sf, _, err := fieldDefault(t, props)
if err != nil {
return nil, err
}
if sf == nil || sf.value == nil {
// There is no default value.
return nil, ErrMissingExtension
}
if t.Kind() != reflect.Ptr {
// We do not need to return a Ptr, we can directly return sf.value.
return sf.value, nil
}
// We need to return an interface{} that is a pointer to sf.value.
value := reflect.New(t).Elem()
value.Set(reflect.New(value.Type().Elem()))
if sf.kind == reflect.Int32 {
// We may have an int32 or an enum, but the underlying data is int32.
		// Since we can't set an int32 into a non-int32 reflect.Value directly,
		// set it as an int32.
value.Elem().SetInt(int64(sf.value.(int32)))
} else {
value.Elem().Set(reflect.ValueOf(sf.value))
}
return value.Interface(), nil
}
// decodeExtension decodes an extension encoded in b.
func decodeExtension(b []byte, extension *ExtensionDesc) (interface{}, error) {
o := NewBuffer(b)
t := reflect.TypeOf(extension.ExtensionType)
props := extensionProperties(extension)
// t is a pointer to a struct, pointer to basic type or a slice.
// Allocate a "field" to store the pointer/slice itself; the
// pointer/slice will be stored here. We pass
// the address of this field to props.dec.
// This passes a zero field and a *t and lets props.dec
// interpret it as a *struct{ x t }.
value := reflect.New(t).Elem()
for {
// Discard wire type and field number varint. It isn't needed.
if _, err := o.DecodeVarint(); err != nil {
return nil, err
}
if err := props.dec(o, props, toStructPointer(value.Addr())); err != nil {
return nil, err
}
if o.index >= len(o.buf) {
break
}
}
return value.Interface(), nil
}
// GetExtensions returns a slice of the extensions present in pb that are also listed in es.
// The returned slice has the same length as es; missing extensions will appear as nil elements.
func GetExtensions(pb Message, es []*ExtensionDesc) (extensions []interface{}, err error) {
epb, ok := extendable(pb)
if !ok {
return nil, errors.New("proto: not an extendable proto")
}
extensions = make([]interface{}, len(es))
for i, e := range es {
extensions[i], err = GetExtension(epb, e)
if err == ErrMissingExtension {
err = nil
}
if err != nil {
return
}
}
return
}
// ExtensionDescs returns a new slice containing pb's extension descriptors, in undefined order.
// For non-registered extensions, ExtensionDescs returns an incomplete descriptor containing
// just the Field field, which defines the extension's field number.
func ExtensionDescs(pb Message) ([]*ExtensionDesc, error) {
epb, ok := extendable(pb)
if !ok {
return nil, fmt.Errorf("proto: %T is not an extendable proto.Message", pb)
}
registeredExtensions := RegisteredExtensions(pb)
emap, mu := epb.extensionsRead()
if emap == nil {
return nil, nil
}
mu.Lock()
defer mu.Unlock()
extensions := make([]*ExtensionDesc, 0, len(emap))
for extid, e := range emap {
desc := e.desc
if desc == nil {
desc = registeredExtensions[extid]
if desc == nil {
desc = &ExtensionDesc{Field: extid}
}
}
extensions = append(extensions, desc)
}
return extensions, nil
}
// SetExtension sets the specified extension of pb to the specified value.
func SetExtension(pb Message, extension *ExtensionDesc, value interface{}) error {
epb, ok := extendable(pb)
if !ok {
return errors.New("proto: not an extendable proto")
}
if err := checkExtensionTypes(epb, extension); err != nil {
return err
}
typ := reflect.TypeOf(extension.ExtensionType)
if typ != reflect.TypeOf(value) {
return errors.New("proto: bad extension value type")
}
// nil extension values need to be caught early, because the
// encoder can't distinguish an ErrNil due to a nil extension
// from an ErrNil due to a missing field. Extensions are
// always optional, so the encoder would just swallow the error
// and drop all the extensions from the encoded message.
if reflect.ValueOf(value).IsNil() {
return fmt.Errorf("proto: SetExtension called with nil value of type %T", value)
}
extmap := epb.extensionsWrite()
extmap[extension.Field] = Extension{desc: extension, value: value}
return nil
}
// ClearAllExtensions clears all extensions from pb.
func ClearAllExtensions(pb Message) {
epb, ok := extendable(pb)
if !ok {
return
}
m := epb.extensionsWrite()
for k := range m {
delete(m, k)
}
}
// A global registry of extensions.
// The generated code will register the generated descriptors by calling RegisterExtension.
var extensionMaps = make(map[reflect.Type]map[int32]*ExtensionDesc)
// RegisterExtension is called from the generated code.
func RegisterExtension(desc *ExtensionDesc) {
st := reflect.TypeOf(desc.ExtendedType).Elem()
m := extensionMaps[st]
if m == nil {
m = make(map[int32]*ExtensionDesc)
extensionMaps[st] = m
}
if _, ok := m[desc.Field]; ok {
panic("proto: duplicate extension registered: " + st.String() + " " + strconv.Itoa(int(desc.Field)))
}
m[desc.Field] = desc
}
// RegisteredExtensions returns a map of the registered extensions of a
// protocol buffer struct, indexed by the extension number.
// The argument pb should be a nil pointer to the struct type.
func RegisteredExtensions(pb Message) map[int32]*ExtensionDesc {
return extensionMaps[reflect.TypeOf(pb).Elem()]
}
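The extension accessors defined in this file (HasExtension, GetExtension, SetExtension, ClearExtension) are driven by *ExtensionDesc values emitted by the code generator. A hedged usage sketch (not part of the vendored files); pb.Base and pb.E_Nickname are hypothetical generated identifiers:

// Sketch only; pb.Base and pb.E_Nickname stand in for protoc-gen-go output.
package main

import (
	"fmt"
	"log"

	"github.com/golang/protobuf/proto"
	pb "example.com/hypothetical/pb"
)

func main() {
	msg := &pb.Base{}

	// SetExtension verifies the value against desc.ExtensionType and the
	// declared extension ranges (checkExtensionTypes above).
	if err := proto.SetExtension(msg, pb.E_Nickname, proto.String("fred")); err != nil {
		log.Fatal(err)
	}
	fmt.Println(proto.HasExtension(msg, pb.E_Nickname)) // true

	// GetExtension returns the decoded value, or the declared default
	// (ErrMissingExtension if there is none) when the field is unset.
	v, err := proto.GetExtension(msg, pb.E_Nickname)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(*v.(*string)) // fred

	proto.ClearExtension(msg, pb.E_Nickname)
	_, err = proto.GetExtension(msg, pb.E_Nickname)
	fmt.Println(err == proto.ErrMissingExtension) // true, assuming E_Nickname has no default
}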

898
vendor/github.com/golang/protobuf/proto/lib.go generated vendored Normal file
View File

@ -0,0 +1,898 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2010 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
/*
Package proto converts data structures to and from the wire format of
protocol buffers. It works in concert with the Go source code generated
for .proto files by the protocol compiler.
A summary of the properties of the protocol buffer interface
for a protocol buffer variable v:
- Names are turned from camel_case to CamelCase for export.
- There are no methods on v to set fields; just treat
them as structure fields.
- There are getters that return a field's value if set,
and return the field's default value if unset.
The getters work even if the receiver is a nil message.
- The zero value for a struct is its correct initialization state.
All desired fields must be set before marshaling.
- A Reset() method will restore a protobuf struct to its zero state.
- Non-repeated fields are pointers to the values; nil means unset.
That is, optional or required field int32 f becomes F *int32.
- Repeated fields are slices.
- Helper functions are available to aid the setting of fields.
msg.Foo = proto.String("hello") // set field
- Constants are defined to hold the default values of all fields that
have them. They have the form Default_StructName_FieldName.
Because the getter methods handle defaulted values,
direct use of these constants should be rare.
- Enums are given type names and maps from names to values.
Enum values are prefixed by the enclosing message's name, or by the
enum's type name if it is a top-level enum. Enum types have a String
method, and an Enum method to assist in message construction.
- Nested messages, groups and enums have type names prefixed with the name of
the surrounding message type.
- Extensions are given descriptor names that start with E_,
followed by an underscore-delimited list of the nested messages
that contain it (if any) followed by the CamelCased name of the
extension field itself. HasExtension, ClearExtension, GetExtension
and SetExtension are functions for manipulating extensions.
- Oneof field sets are given a single field in their message,
with distinguished wrapper types for each possible field value.
- Marshal and Unmarshal are functions to encode and decode the wire format.
When the .proto file specifies `syntax="proto3"`, there are some differences:
- Non-repeated fields of non-message type are values instead of pointers.
- Getters are only generated for message and oneof fields.
- Enum types do not get an Enum method.
The simplest way to describe this is to see an example.
Given file test.proto, containing
package example;
enum FOO { X = 17; }
message Test {
required string label = 1;
optional int32 type = 2 [default=77];
repeated int64 reps = 3;
optional group OptionalGroup = 4 {
required string RequiredField = 5;
}
oneof union {
int32 number = 6;
string name = 7;
}
}
The resulting file, test.pb.go, is:
package example
import proto "github.com/golang/protobuf/proto"
import math "math"
type FOO int32
const (
FOO_X FOO = 17
)
var FOO_name = map[int32]string{
17: "X",
}
var FOO_value = map[string]int32{
"X": 17,
}
func (x FOO) Enum() *FOO {
p := new(FOO)
*p = x
return p
}
func (x FOO) String() string {
return proto.EnumName(FOO_name, int32(x))
}
func (x *FOO) UnmarshalJSON(data []byte) error {
	value, err := proto.UnmarshalJSONEnum(FOO_value, data, "FOO")
if err != nil {
return err
}
*x = FOO(value)
return nil
}
type Test struct {
Label *string `protobuf:"bytes,1,req,name=label" json:"label,omitempty"`
Type *int32 `protobuf:"varint,2,opt,name=type,def=77" json:"type,omitempty"`
Reps []int64 `protobuf:"varint,3,rep,name=reps" json:"reps,omitempty"`
Optionalgroup *Test_OptionalGroup `protobuf:"group,4,opt,name=OptionalGroup" json:"optionalgroup,omitempty"`
// Types that are valid to be assigned to Union:
// *Test_Number
// *Test_Name
Union isTest_Union `protobuf_oneof:"union"`
XXX_unrecognized []byte `json:"-"`
}
func (m *Test) Reset() { *m = Test{} }
func (m *Test) String() string { return proto.CompactTextString(m) }
func (*Test) ProtoMessage() {}
type isTest_Union interface {
isTest_Union()
}
type Test_Number struct {
Number int32 `protobuf:"varint,6,opt,name=number"`
}
type Test_Name struct {
Name string `protobuf:"bytes,7,opt,name=name"`
}
func (*Test_Number) isTest_Union() {}
func (*Test_Name) isTest_Union() {}
func (m *Test) GetUnion() isTest_Union {
if m != nil {
return m.Union
}
return nil
}
const Default_Test_Type int32 = 77
func (m *Test) GetLabel() string {
if m != nil && m.Label != nil {
return *m.Label
}
return ""
}
func (m *Test) GetType() int32 {
if m != nil && m.Type != nil {
return *m.Type
}
return Default_Test_Type
}
func (m *Test) GetOptionalgroup() *Test_OptionalGroup {
if m != nil {
return m.Optionalgroup
}
return nil
}
type Test_OptionalGroup struct {
RequiredField *string `protobuf:"bytes,5,req" json:"RequiredField,omitempty"`
}
func (m *Test_OptionalGroup) Reset() { *m = Test_OptionalGroup{} }
func (m *Test_OptionalGroup) String() string { return proto.CompactTextString(m) }
func (m *Test_OptionalGroup) GetRequiredField() string {
if m != nil && m.RequiredField != nil {
return *m.RequiredField
}
return ""
}
func (m *Test) GetNumber() int32 {
if x, ok := m.GetUnion().(*Test_Number); ok {
return x.Number
}
return 0
}
func (m *Test) GetName() string {
if x, ok := m.GetUnion().(*Test_Name); ok {
return x.Name
}
return ""
}
func init() {
proto.RegisterEnum("example.FOO", FOO_name, FOO_value)
}
To create and play with a Test object:
package main
import (
"log"
"github.com/golang/protobuf/proto"
pb "./example.pb"
)
func main() {
test := &pb.Test{
Label: proto.String("hello"),
Type: proto.Int32(17),
Reps: []int64{1, 2, 3},
Optionalgroup: &pb.Test_OptionalGroup{
RequiredField: proto.String("good bye"),
},
Union: &pb.Test_Name{"fred"},
}
data, err := proto.Marshal(test)
if err != nil {
log.Fatal("marshaling error: ", err)
}
newTest := &pb.Test{}
err = proto.Unmarshal(data, newTest)
if err != nil {
log.Fatal("unmarshaling error: ", err)
}
// Now test and newTest contain the same data.
if test.GetLabel() != newTest.GetLabel() {
log.Fatalf("data mismatch %q != %q", test.GetLabel(), newTest.GetLabel())
}
// Use a type switch to determine which oneof was set.
switch u := test.Union.(type) {
case *pb.Test_Number: // u.Number contains the number.
case *pb.Test_Name: // u.Name contains the string.
}
// etc.
}
*/
package proto
import (
"encoding/json"
"fmt"
"log"
"reflect"
"sort"
"strconv"
"sync"
)
// Message is implemented by generated protocol buffer messages.
type Message interface {
Reset()
String() string
ProtoMessage()
}
// Stats records allocation details about the protocol buffer encoders
// and decoders. Useful for tuning the library itself.
type Stats struct {
Emalloc uint64 // mallocs in encode
Dmalloc uint64 // mallocs in decode
Encode uint64 // number of encodes
Decode uint64 // number of decodes
Chit uint64 // number of cache hits
Cmiss uint64 // number of cache misses
Size uint64 // number of sizes
}
// Set to true to enable stats collection.
const collectStats = false
var stats Stats
// GetStats returns a copy of the global Stats structure.
func GetStats() Stats { return stats }
// A Buffer is a buffer manager for marshaling and unmarshaling
// protocol buffers. It may be reused between invocations to
// reduce memory usage. It is not necessary to use a Buffer;
// the global functions Marshal and Unmarshal create a
// temporary Buffer and are fine for most applications.
type Buffer struct {
buf []byte // encode/decode byte stream
index int // read point
// pools of basic types to amortize allocation.
bools []bool
uint32s []uint32
uint64s []uint64
// extra pools, only used with pointer_reflect.go
int32s []int32
int64s []int64
float32s []float32
float64s []float64
}
// NewBuffer allocates a new Buffer and initializes its internal data to
// the contents of the argument slice.
func NewBuffer(e []byte) *Buffer {
return &Buffer{buf: e}
}
// Reset resets the Buffer, ready for marshaling a new protocol buffer.
func (p *Buffer) Reset() {
p.buf = p.buf[0:0] // for reading/writing
p.index = 0 // for reading
}
// SetBuf replaces the internal buffer with the slice,
// ready for unmarshaling the contents of the slice.
func (p *Buffer) SetBuf(s []byte) {
p.buf = s
p.index = 0
}
// Bytes returns the contents of the Buffer.
func (p *Buffer) Bytes() []byte { return p.buf }
/*
* Helper routines for simplifying the creation of optional fields of basic type.
*/
// Bool is a helper routine that allocates a new bool value
// to store v and returns a pointer to it.
func Bool(v bool) *bool {
return &v
}
// Int32 is a helper routine that allocates a new int32 value
// to store v and returns a pointer to it.
func Int32(v int32) *int32 {
return &v
}
// Int is a helper routine that allocates a new int32 value
// to store v and returns a pointer to it, but unlike Int32
// its argument value is an int.
func Int(v int) *int32 {
p := new(int32)
*p = int32(v)
return p
}
// Int64 is a helper routine that allocates a new int64 value
// to store v and returns a pointer to it.
func Int64(v int64) *int64 {
return &v
}
// Float32 is a helper routine that allocates a new float32 value
// to store v and returns a pointer to it.
func Float32(v float32) *float32 {
return &v
}
// Float64 is a helper routine that allocates a new float64 value
// to store v and returns a pointer to it.
func Float64(v float64) *float64 {
return &v
}
// Uint32 is a helper routine that allocates a new uint32 value
// to store v and returns a pointer to it.
func Uint32(v uint32) *uint32 {
return &v
}
// Uint64 is a helper routine that allocates a new uint64 value
// to store v and returns a pointer to it.
func Uint64(v uint64) *uint64 {
return &v
}
// String is a helper routine that allocates a new string value
// to store v and returns a pointer to it.
func String(v string) *string {
return &v
}
// EnumName is a helper function to simplify printing protocol buffer enums
// by name. Given an enum map and a value, it returns a useful string.
func EnumName(m map[int32]string, v int32) string {
s, ok := m[v]
if ok {
return s
}
return strconv.Itoa(int(v))
}
// UnmarshalJSONEnum is a helper function to simplify recovering enum int values
// from their JSON-encoded representation. Given a map from the enum's symbolic
// names to its int values, and a byte buffer containing the JSON-encoded
// value, it returns an int32 that can be cast to the enum type by the caller.
//
// The function can deal with both JSON representations, numeric and symbolic.
func UnmarshalJSONEnum(m map[string]int32, data []byte, enumName string) (int32, error) {
if data[0] == '"' {
// New style: enums are strings.
var repr string
if err := json.Unmarshal(data, &repr); err != nil {
return -1, err
}
val, ok := m[repr]
if !ok {
return 0, fmt.Errorf("unrecognized enum %s value %q", enumName, repr)
}
return val, nil
}
// Old style: enums are ints.
var val int32
if err := json.Unmarshal(data, &val); err != nil {
return 0, fmt.Errorf("cannot unmarshal %#q into enum %s", data, enumName)
}
return val, nil
}
// DebugPrint dumps the encoded data in b in a debugging format with a header
// including the string s. Used in testing but made available for general debugging.
func (p *Buffer) DebugPrint(s string, b []byte) {
var u uint64
obuf := p.buf
index := p.index
p.buf = b
p.index = 0
depth := 0
fmt.Printf("\n--- %s ---\n", s)
out:
for {
for i := 0; i < depth; i++ {
fmt.Print(" ")
}
index := p.index
if index == len(p.buf) {
break
}
op, err := p.DecodeVarint()
if err != nil {
fmt.Printf("%3d: fetching op err %v\n", index, err)
break out
}
tag := op >> 3
wire := op & 7
switch wire {
default:
fmt.Printf("%3d: t=%3d unknown wire=%d\n",
index, tag, wire)
break out
case WireBytes:
var r []byte
r, err = p.DecodeRawBytes(false)
if err != nil {
break out
}
fmt.Printf("%3d: t=%3d bytes [%d]", index, tag, len(r))
if len(r) <= 6 {
for i := 0; i < len(r); i++ {
fmt.Printf(" %.2x", r[i])
}
} else {
for i := 0; i < 3; i++ {
fmt.Printf(" %.2x", r[i])
}
fmt.Printf(" ..")
for i := len(r) - 3; i < len(r); i++ {
fmt.Printf(" %.2x", r[i])
}
}
fmt.Printf("\n")
case WireFixed32:
u, err = p.DecodeFixed32()
if err != nil {
fmt.Printf("%3d: t=%3d fix32 err %v\n", index, tag, err)
break out
}
fmt.Printf("%3d: t=%3d fix32 %d\n", index, tag, u)
case WireFixed64:
u, err = p.DecodeFixed64()
if err != nil {
fmt.Printf("%3d: t=%3d fix64 err %v\n", index, tag, err)
break out
}
fmt.Printf("%3d: t=%3d fix64 %d\n", index, tag, u)
case WireVarint:
u, err = p.DecodeVarint()
if err != nil {
fmt.Printf("%3d: t=%3d varint err %v\n", index, tag, err)
break out
}
fmt.Printf("%3d: t=%3d varint %d\n", index, tag, u)
case WireStartGroup:
fmt.Printf("%3d: t=%3d start\n", index, tag)
depth++
case WireEndGroup:
depth--
fmt.Printf("%3d: t=%3d end\n", index, tag)
}
}
if depth != 0 {
fmt.Printf("%3d: start-end not balanced %d\n", p.index, depth)
}
fmt.Printf("\n")
p.buf = obuf
p.index = index
}
// SetDefaults sets unset protocol buffer fields to their default values.
// It only modifies fields that are both unset and have defined defaults.
// It recursively sets default values in any non-nil sub-messages.
func SetDefaults(pb Message) {
setDefaults(reflect.ValueOf(pb), true, false)
}
// v is a pointer to a struct.
func setDefaults(v reflect.Value, recur, zeros bool) {
v = v.Elem()
defaultMu.RLock()
dm, ok := defaults[v.Type()]
defaultMu.RUnlock()
if !ok {
dm = buildDefaultMessage(v.Type())
defaultMu.Lock()
defaults[v.Type()] = dm
defaultMu.Unlock()
}
for _, sf := range dm.scalars {
f := v.Field(sf.index)
if !f.IsNil() {
// field already set
continue
}
dv := sf.value
if dv == nil && !zeros {
// no explicit default, and don't want to set zeros
continue
}
fptr := f.Addr().Interface() // **T
// TODO: Consider batching the allocations we do here.
switch sf.kind {
case reflect.Bool:
b := new(bool)
if dv != nil {
*b = dv.(bool)
}
*(fptr.(**bool)) = b
case reflect.Float32:
f := new(float32)
if dv != nil {
*f = dv.(float32)
}
*(fptr.(**float32)) = f
case reflect.Float64:
f := new(float64)
if dv != nil {
*f = dv.(float64)
}
*(fptr.(**float64)) = f
case reflect.Int32:
// might be an enum
if ft := f.Type(); ft != int32PtrType {
// enum
f.Set(reflect.New(ft.Elem()))
if dv != nil {
f.Elem().SetInt(int64(dv.(int32)))
}
} else {
// int32 field
i := new(int32)
if dv != nil {
*i = dv.(int32)
}
*(fptr.(**int32)) = i
}
case reflect.Int64:
i := new(int64)
if dv != nil {
*i = dv.(int64)
}
*(fptr.(**int64)) = i
case reflect.String:
s := new(string)
if dv != nil {
*s = dv.(string)
}
*(fptr.(**string)) = s
case reflect.Uint8:
// exceptional case: []byte
var b []byte
if dv != nil {
db := dv.([]byte)
b = make([]byte, len(db))
copy(b, db)
} else {
b = []byte{}
}
*(fptr.(*[]byte)) = b
case reflect.Uint32:
u := new(uint32)
if dv != nil {
*u = dv.(uint32)
}
*(fptr.(**uint32)) = u
case reflect.Uint64:
u := new(uint64)
if dv != nil {
*u = dv.(uint64)
}
*(fptr.(**uint64)) = u
default:
log.Printf("proto: can't set default for field %v (sf.kind=%v)", f, sf.kind)
}
}
for _, ni := range dm.nested {
f := v.Field(ni)
// f is *T or []*T or map[T]*T
switch f.Kind() {
case reflect.Ptr:
if f.IsNil() {
continue
}
setDefaults(f, recur, zeros)
case reflect.Slice:
for i := 0; i < f.Len(); i++ {
e := f.Index(i)
if e.IsNil() {
continue
}
setDefaults(e, recur, zeros)
}
case reflect.Map:
for _, k := range f.MapKeys() {
e := f.MapIndex(k)
if e.IsNil() {
continue
}
setDefaults(e, recur, zeros)
}
}
}
}
var (
// defaults maps a protocol buffer struct type to a slice of the fields,
// with its scalar fields set to their proto-declared non-zero default values.
defaultMu sync.RWMutex
defaults = make(map[reflect.Type]defaultMessage)
int32PtrType = reflect.TypeOf((*int32)(nil))
)
// defaultMessage represents information about the default values of a message.
type defaultMessage struct {
scalars []scalarField
nested []int // struct field index of nested messages
}
type scalarField struct {
index int // struct field index
kind reflect.Kind // element type (the T in *T or []T)
value interface{} // the proto-declared default value, or nil
}
// t is a struct type.
func buildDefaultMessage(t reflect.Type) (dm defaultMessage) {
sprop := GetProperties(t)
for _, prop := range sprop.Prop {
fi, ok := sprop.decoderTags.get(prop.Tag)
if !ok {
// XXX_unrecognized
continue
}
ft := t.Field(fi).Type
sf, nested, err := fieldDefault(ft, prop)
switch {
case err != nil:
log.Print(err)
case nested:
dm.nested = append(dm.nested, fi)
case sf != nil:
sf.index = fi
dm.scalars = append(dm.scalars, *sf)
}
}
return dm
}
// fieldDefault returns the scalarField for field type ft.
// sf will be nil if the field can not have a default.
// nestedMessage will be true if this is a nested message.
// Note that sf.index is not set on return.
func fieldDefault(ft reflect.Type, prop *Properties) (sf *scalarField, nestedMessage bool, err error) {
var canHaveDefault bool
switch ft.Kind() {
case reflect.Ptr:
if ft.Elem().Kind() == reflect.Struct {
nestedMessage = true
} else {
canHaveDefault = true // proto2 scalar field
}
case reflect.Slice:
switch ft.Elem().Kind() {
case reflect.Ptr:
nestedMessage = true // repeated message
case reflect.Uint8:
canHaveDefault = true // bytes field
}
case reflect.Map:
if ft.Elem().Kind() == reflect.Ptr {
nestedMessage = true // map with message values
}
}
if !canHaveDefault {
if nestedMessage {
return nil, true, nil
}
return nil, false, nil
}
// We now know that ft is a pointer or slice.
sf = &scalarField{kind: ft.Elem().Kind()}
// scalar fields without defaults
if !prop.HasDefault {
return sf, false, nil
}
// a scalar field: either *T or []byte
switch ft.Elem().Kind() {
case reflect.Bool:
x, err := strconv.ParseBool(prop.Default)
if err != nil {
return nil, false, fmt.Errorf("proto: bad default bool %q: %v", prop.Default, err)
}
sf.value = x
case reflect.Float32:
x, err := strconv.ParseFloat(prop.Default, 32)
if err != nil {
return nil, false, fmt.Errorf("proto: bad default float32 %q: %v", prop.Default, err)
}
sf.value = float32(x)
case reflect.Float64:
x, err := strconv.ParseFloat(prop.Default, 64)
if err != nil {
return nil, false, fmt.Errorf("proto: bad default float64 %q: %v", prop.Default, err)
}
sf.value = x
case reflect.Int32:
x, err := strconv.ParseInt(prop.Default, 10, 32)
if err != nil {
return nil, false, fmt.Errorf("proto: bad default int32 %q: %v", prop.Default, err)
}
sf.value = int32(x)
case reflect.Int64:
x, err := strconv.ParseInt(prop.Default, 10, 64)
if err != nil {
return nil, false, fmt.Errorf("proto: bad default int64 %q: %v", prop.Default, err)
}
sf.value = x
case reflect.String:
sf.value = prop.Default
case reflect.Uint8:
// []byte (not *uint8)
sf.value = []byte(prop.Default)
case reflect.Uint32:
x, err := strconv.ParseUint(prop.Default, 10, 32)
if err != nil {
return nil, false, fmt.Errorf("proto: bad default uint32 %q: %v", prop.Default, err)
}
sf.value = uint32(x)
case reflect.Uint64:
x, err := strconv.ParseUint(prop.Default, 10, 64)
if err != nil {
return nil, false, fmt.Errorf("proto: bad default uint64 %q: %v", prop.Default, err)
}
sf.value = x
default:
return nil, false, fmt.Errorf("proto: unhandled def kind %v", ft.Elem().Kind())
}
return sf, false, nil
}
// Map fields may have key types of non-float scalars, strings and enums.
// The easiest way to sort them in some deterministic order is to use fmt.
// If this turns out to be inefficient we can always consider other options,
// such as doing a Schwartzian transform.
func mapKeys(vs []reflect.Value) sort.Interface {
s := mapKeySorter{
vs: vs,
// default Less function: textual comparison
less: func(a, b reflect.Value) bool {
return fmt.Sprint(a.Interface()) < fmt.Sprint(b.Interface())
},
}
// Type specialization per https://developers.google.com/protocol-buffers/docs/proto#maps;
// numeric keys are sorted numerically.
if len(vs) == 0 {
return s
}
switch vs[0].Kind() {
case reflect.Int32, reflect.Int64:
s.less = func(a, b reflect.Value) bool { return a.Int() < b.Int() }
case reflect.Uint32, reflect.Uint64:
s.less = func(a, b reflect.Value) bool { return a.Uint() < b.Uint() }
}
return s
}
type mapKeySorter struct {
vs []reflect.Value
less func(a, b reflect.Value) bool
}
func (s mapKeySorter) Len() int { return len(s.vs) }
func (s mapKeySorter) Swap(i, j int) { s.vs[i], s.vs[j] = s.vs[j], s.vs[i] }
func (s mapKeySorter) Less(i, j int) bool {
return s.less(s.vs[i], s.vs[j])
}
// isProto3Zero reports whether v is a zero proto3 value.
func isProto3Zero(v reflect.Value) bool {
switch v.Kind() {
case reflect.Bool:
return !v.Bool()
case reflect.Int32, reflect.Int64:
return v.Int() == 0
case reflect.Uint32, reflect.Uint64:
return v.Uint() == 0
case reflect.Float32, reflect.Float64:
return v.Float() == 0
case reflect.String:
return v.String() == ""
}
return false
}
// ProtoPackageIsVersion2 is referenced from generated protocol buffer files
// to assert that that code is compatible with this version of the proto package.
const ProtoPackageIsVersion2 = true
// ProtoPackageIsVersion1 is referenced from generated protocol buffer files
// to assert that that code is compatible with this version of the proto package.
const ProtoPackageIsVersion1 = true
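The helper routines and enum utilities above can be exercised without any generated code. A small, hedged sketch (illustration only, not part of the vendored files), assuming the package is imported at its canonical path:

package main

import (
	"fmt"

	"github.com/golang/protobuf/proto"
)

func main() {
	// Pointer helpers for optional scalar fields.
	s := proto.String("hello") // *string
	n := proto.Int32(42)       // *int32
	fmt.Println(*s, *n)

	// EnumName maps a value to its symbolic name, falling back to the number.
	names := map[int32]string{17: "X"}
	fmt.Println(proto.EnumName(names, 17)) // X
	fmt.Println(proto.EnumName(names, 99)) // 99

	// UnmarshalJSONEnum accepts both the symbolic and the numeric JSON form.
	values := map[string]int32{"X": 17}
	v, err := proto.UnmarshalJSONEnum(values, []byte(`"X"`), "FOO")
	fmt.Println(v, err) // 17 <nil>
	v, err = proto.UnmarshalJSONEnum(values, []byte("17"), "FOO")
	fmt.Println(v, err) // 17 <nil>
}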

311
vendor/github.com/golang/protobuf/proto/message_set.go generated vendored Normal file
View File

@ -0,0 +1,311 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2010 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package proto
/*
* Support for message sets.
*/
import (
"bytes"
"encoding/json"
"errors"
"fmt"
"reflect"
"sort"
)
// errNoMessageTypeID occurs when a protocol buffer does not have a message type ID.
// A message type ID is required for storing a protocol buffer in a message set.
var errNoMessageTypeID = errors.New("proto does not have a message type ID")
// The first two types (_MessageSet_Item and messageSet)
// model what the protocol compiler produces for the following protocol message:
// message MessageSet {
// repeated group Item = 1 {
// required int32 type_id = 2;
// required string message = 3;
// };
// }
// That is the MessageSet wire format. We can't use a proto to generate these
// because that would introduce a circular dependency between it and this package.
type _MessageSet_Item struct {
TypeId *int32 `protobuf:"varint,2,req,name=type_id"`
Message []byte `protobuf:"bytes,3,req,name=message"`
}
type messageSet struct {
Item []*_MessageSet_Item `protobuf:"group,1,rep"`
XXX_unrecognized []byte
// TODO: caching?
}
// Make sure messageSet is a Message.
var _ Message = (*messageSet)(nil)
// messageTypeIder is an interface satisfied by a protocol buffer type
// that may be stored in a MessageSet.
type messageTypeIder interface {
MessageTypeId() int32
}
func (ms *messageSet) find(pb Message) *_MessageSet_Item {
mti, ok := pb.(messageTypeIder)
if !ok {
return nil
}
id := mti.MessageTypeId()
for _, item := range ms.Item {
if *item.TypeId == id {
return item
}
}
return nil
}
func (ms *messageSet) Has(pb Message) bool {
if ms.find(pb) != nil {
return true
}
return false
}
func (ms *messageSet) Unmarshal(pb Message) error {
if item := ms.find(pb); item != nil {
return Unmarshal(item.Message, pb)
}
if _, ok := pb.(messageTypeIder); !ok {
return errNoMessageTypeID
}
return nil // TODO: return error instead?
}
func (ms *messageSet) Marshal(pb Message) error {
msg, err := Marshal(pb)
if err != nil {
return err
}
if item := ms.find(pb); item != nil {
// reuse existing item
item.Message = msg
return nil
}
mti, ok := pb.(messageTypeIder)
if !ok {
return errNoMessageTypeID
}
mtid := mti.MessageTypeId()
ms.Item = append(ms.Item, &_MessageSet_Item{
TypeId: &mtid,
Message: msg,
})
return nil
}
func (ms *messageSet) Reset() { *ms = messageSet{} }
func (ms *messageSet) String() string { return CompactTextString(ms) }
func (*messageSet) ProtoMessage() {}
// Support for the message_set_wire_format message option.
func skipVarint(buf []byte) []byte {
i := 0
for ; buf[i]&0x80 != 0; i++ {
}
return buf[i+1:]
}
// MarshalMessageSet encodes the extension map represented by m in the message set wire format.
// It is called by generated Marshal methods on protocol buffer messages with the message_set_wire_format option.
func MarshalMessageSet(exts interface{}) ([]byte, error) {
var m map[int32]Extension
switch exts := exts.(type) {
case *XXX_InternalExtensions:
if err := encodeExtensions(exts); err != nil {
return nil, err
}
m, _ = exts.extensionsRead()
case map[int32]Extension:
if err := encodeExtensionsMap(exts); err != nil {
return nil, err
}
m = exts
default:
return nil, errors.New("proto: not an extension map")
}
// Sort extension IDs to provide a deterministic encoding.
// See also enc_map in encode.go.
ids := make([]int, 0, len(m))
for id := range m {
ids = append(ids, int(id))
}
sort.Ints(ids)
ms := &messageSet{Item: make([]*_MessageSet_Item, 0, len(m))}
for _, id := range ids {
e := m[int32(id)]
// Remove the wire type and field number varint, as well as the length varint.
msg := skipVarint(skipVarint(e.enc))
ms.Item = append(ms.Item, &_MessageSet_Item{
TypeId: Int32(int32(id)),
Message: msg,
})
}
return Marshal(ms)
}
// UnmarshalMessageSet decodes the extension map encoded in buf in the message set wire format.
// It is called by generated Unmarshal methods on protocol buffer messages with the message_set_wire_format option.
func UnmarshalMessageSet(buf []byte, exts interface{}) error {
var m map[int32]Extension
switch exts := exts.(type) {
case *XXX_InternalExtensions:
m = exts.extensionsWrite()
case map[int32]Extension:
m = exts
default:
return errors.New("proto: not an extension map")
}
ms := new(messageSet)
if err := Unmarshal(buf, ms); err != nil {
return err
}
for _, item := range ms.Item {
id := *item.TypeId
msg := item.Message
// Restore wire type and field number varint, plus length varint.
// Be careful to preserve duplicate items.
b := EncodeVarint(uint64(id)<<3 | WireBytes)
if ext, ok := m[id]; ok {
// Existing data; rip off the tag and length varint
// so we join the new data correctly.
// We can assume that ext.enc is set because we are unmarshaling.
o := ext.enc[len(b):] // skip wire type and field number
_, n := DecodeVarint(o) // calculate length of length varint
o = o[n:] // skip length varint
msg = append(o, msg...) // join old data and new data
}
b = append(b, EncodeVarint(uint64(len(msg)))...)
b = append(b, msg...)
m[id] = Extension{enc: b}
}
return nil
}
// MarshalMessageSetJSON encodes the extension map represented by m in JSON format.
// It is called by generated MarshalJSON methods on protocol buffer messages with the message_set_wire_format option.
func MarshalMessageSetJSON(exts interface{}) ([]byte, error) {
var m map[int32]Extension
switch exts := exts.(type) {
case *XXX_InternalExtensions:
m, _ = exts.extensionsRead()
case map[int32]Extension:
m = exts
default:
return nil, errors.New("proto: not an extension map")
}
var b bytes.Buffer
b.WriteByte('{')
// Process the map in key order for deterministic output.
ids := make([]int32, 0, len(m))
for id := range m {
ids = append(ids, id)
}
sort.Sort(int32Slice(ids)) // int32Slice defined in text.go
for i, id := range ids {
ext := m[id]
if i > 0 {
b.WriteByte(',')
}
msd, ok := messageSetMap[id]
if !ok {
// Unknown type; we can't render it, so skip it.
continue
}
fmt.Fprintf(&b, `"[%s]":`, msd.name)
x := ext.value
if x == nil {
x = reflect.New(msd.t.Elem()).Interface()
if err := Unmarshal(ext.enc, x.(Message)); err != nil {
return nil, err
}
}
d, err := json.Marshal(x)
if err != nil {
return nil, err
}
b.Write(d)
}
b.WriteByte('}')
return b.Bytes(), nil
}
// UnmarshalMessageSetJSON decodes the extension map encoded in buf in JSON format.
// It is called by generated UnmarshalJSON methods on protocol buffer messages with the message_set_wire_format option.
func UnmarshalMessageSetJSON(buf []byte, exts interface{}) error {
// Common-case fast path.
if len(buf) == 0 || bytes.Equal(buf, []byte("{}")) {
return nil
}
// This is fairly tricky, and it's not clear that it is needed.
return errors.New("TODO: UnmarshalMessageSetJSON not yet implemented")
}
// A global registry of types that can be used in a MessageSet.
var messageSetMap = make(map[int32]messageSetDesc)
type messageSetDesc struct {
t reflect.Type // pointer to struct
name string
}
// RegisterMessageSetType is called from the generated code.
func RegisterMessageSetType(m Message, fieldNum int32, name string) {
messageSetMap[fieldNum] = messageSetDesc{
t: reflect.TypeOf(m),
name: name,
}
}
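Each item that UnmarshalMessageSet reconstructs is framed the usual protobuf way: a key varint (field number shifted left by three, ORed with the wire type), a length varint, then the message bytes. A hedged sketch of that framing using only the exported varint helpers (illustration only; the field number and payload are made up, and this is not the full group encoding that MarshalMessageSet produces):

package main

import (
	"fmt"

	"github.com/golang/protobuf/proto"
)

func main() {
	const fieldNum = 12345
	payload := []byte("already-encoded message bytes")

	// Frame the payload as a length-delimited field, mirroring the
	// EncodeVarint(uint64(id)<<3 | WireBytes) step in UnmarshalMessageSet.
	b := proto.EncodeVarint(uint64(fieldNum)<<3 | proto.WireBytes)
	b = append(b, proto.EncodeVarint(uint64(len(payload)))...)
	b = append(b, payload...)

	// Read it back: key, then length, then the payload itself.
	key, n := proto.DecodeVarint(b)
	length, m := proto.DecodeVarint(b[n:])
	fmt.Println(key>>3, key&7, length)            // 12345 2 29
	fmt.Println(string(b[n+m : n+m+int(length)])) // already-encoded message bytes
}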

484
vendor/github.com/golang/protobuf/proto/pointer_reflect.go generated vendored Normal file
View File

@ -0,0 +1,484 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2012 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
// +build appengine js
// This file contains an implementation of proto field accesses using package reflect.
// It is slower than the code in pointer_unsafe.go but it avoids package unsafe and can
// be used on App Engine.
package proto
import (
"math"
"reflect"
)
// A structPointer is a pointer to a struct.
type structPointer struct {
v reflect.Value
}
// toStructPointer returns a structPointer equivalent to the given reflect value.
// The reflect value must itself be a pointer to a struct.
func toStructPointer(v reflect.Value) structPointer {
return structPointer{v}
}
// IsNil reports whether p is nil.
func structPointer_IsNil(p structPointer) bool {
return p.v.IsNil()
}
// Interface returns the struct pointer as an interface value.
func structPointer_Interface(p structPointer, _ reflect.Type) interface{} {
return p.v.Interface()
}
// A field identifies a field in a struct, accessible from a structPointer.
// In this implementation, a field is identified by the sequence of field indices
// passed to reflect's FieldByIndex.
type field []int
// toField returns a field equivalent to the given reflect field.
func toField(f *reflect.StructField) field {
return f.Index
}
// invalidField is an invalid field identifier.
var invalidField = field(nil)
// IsValid reports whether the field identifier is valid.
func (f field) IsValid() bool { return f != nil }
// field returns the given field in the struct as a reflect value.
func structPointer_field(p structPointer, f field) reflect.Value {
// Special case: an extension map entry with a value of type T
// passes a *T to the struct-handling code with a zero field,
// expecting that it will be treated as equivalent to *struct{ X T },
// which has the same memory layout. We have to handle that case
// specially, because reflect will panic if we call FieldByIndex on a
// non-struct.
if f == nil {
return p.v.Elem()
}
return p.v.Elem().FieldByIndex(f)
}
// ifield returns the given field in the struct as an interface value.
func structPointer_ifield(p structPointer, f field) interface{} {
return structPointer_field(p, f).Addr().Interface()
}
// Bytes returns the address of a []byte field in the struct.
func structPointer_Bytes(p structPointer, f field) *[]byte {
return structPointer_ifield(p, f).(*[]byte)
}
// BytesSlice returns the address of a [][]byte field in the struct.
func structPointer_BytesSlice(p structPointer, f field) *[][]byte {
return structPointer_ifield(p, f).(*[][]byte)
}
// Bool returns the address of a *bool field in the struct.
func structPointer_Bool(p structPointer, f field) **bool {
return structPointer_ifield(p, f).(**bool)
}
// BoolVal returns the address of a bool field in the struct.
func structPointer_BoolVal(p structPointer, f field) *bool {
return structPointer_ifield(p, f).(*bool)
}
// BoolSlice returns the address of a []bool field in the struct.
func structPointer_BoolSlice(p structPointer, f field) *[]bool {
return structPointer_ifield(p, f).(*[]bool)
}
// String returns the address of a *string field in the struct.
func structPointer_String(p structPointer, f field) **string {
return structPointer_ifield(p, f).(**string)
}
// StringVal returns the address of a string field in the struct.
func structPointer_StringVal(p structPointer, f field) *string {
return structPointer_ifield(p, f).(*string)
}
// StringSlice returns the address of a []string field in the struct.
func structPointer_StringSlice(p structPointer, f field) *[]string {
return structPointer_ifield(p, f).(*[]string)
}
// Extensions returns the address of an extension map field in the struct.
func structPointer_Extensions(p structPointer, f field) *XXX_InternalExtensions {
return structPointer_ifield(p, f).(*XXX_InternalExtensions)
}
// ExtMap returns the address of an extension map field in the struct.
func structPointer_ExtMap(p structPointer, f field) *map[int32]Extension {
return structPointer_ifield(p, f).(*map[int32]Extension)
}
// NewAt returns the reflect.Value for a pointer to a field in the struct.
func structPointer_NewAt(p structPointer, f field, typ reflect.Type) reflect.Value {
return structPointer_field(p, f).Addr()
}
// SetStructPointer writes a *struct field in the struct.
func structPointer_SetStructPointer(p structPointer, f field, q structPointer) {
structPointer_field(p, f).Set(q.v)
}
// GetStructPointer reads a *struct field in the struct.
func structPointer_GetStructPointer(p structPointer, f field) structPointer {
return structPointer{structPointer_field(p, f)}
}
// StructPointerSlice returns the address of a []*struct field in the struct.
func structPointer_StructPointerSlice(p structPointer, f field) structPointerSlice {
return structPointerSlice{structPointer_field(p, f)}
}
// A structPointerSlice represents the address of a slice of pointers to structs
// (themselves messages or groups). That is, v.Type() is *[]*struct{...}.
type structPointerSlice struct {
v reflect.Value
}
func (p structPointerSlice) Len() int { return p.v.Len() }
func (p structPointerSlice) Index(i int) structPointer { return structPointer{p.v.Index(i)} }
func (p structPointerSlice) Append(q structPointer) {
p.v.Set(reflect.Append(p.v, q.v))
}
var (
int32Type = reflect.TypeOf(int32(0))
uint32Type = reflect.TypeOf(uint32(0))
float32Type = reflect.TypeOf(float32(0))
int64Type = reflect.TypeOf(int64(0))
uint64Type = reflect.TypeOf(uint64(0))
float64Type = reflect.TypeOf(float64(0))
)
// A word32 represents a field of type *int32, *uint32, *float32, or *enum.
// That is, v.Type() is *int32, *uint32, *float32, or *enum and v is assignable.
type word32 struct {
v reflect.Value
}
// IsNil reports whether p is nil.
func word32_IsNil(p word32) bool {
return p.v.IsNil()
}
// Set sets p to point at a newly allocated word with bits set to x.
func word32_Set(p word32, o *Buffer, x uint32) {
t := p.v.Type().Elem()
switch t {
case int32Type:
if len(o.int32s) == 0 {
o.int32s = make([]int32, uint32PoolSize)
}
o.int32s[0] = int32(x)
p.v.Set(reflect.ValueOf(&o.int32s[0]))
o.int32s = o.int32s[1:]
return
case uint32Type:
if len(o.uint32s) == 0 {
o.uint32s = make([]uint32, uint32PoolSize)
}
o.uint32s[0] = x
p.v.Set(reflect.ValueOf(&o.uint32s[0]))
o.uint32s = o.uint32s[1:]
return
case float32Type:
if len(o.float32s) == 0 {
o.float32s = make([]float32, uint32PoolSize)
}
o.float32s[0] = math.Float32frombits(x)
p.v.Set(reflect.ValueOf(&o.float32s[0]))
o.float32s = o.float32s[1:]
return
}
// must be enum
p.v.Set(reflect.New(t))
p.v.Elem().SetInt(int64(int32(x)))
}
// Get gets the bits pointed at by p, as a uint32.
func word32_Get(p word32) uint32 {
elem := p.v.Elem()
switch elem.Kind() {
case reflect.Int32:
return uint32(elem.Int())
case reflect.Uint32:
return uint32(elem.Uint())
case reflect.Float32:
return math.Float32bits(float32(elem.Float()))
}
panic("unreachable")
}
// Word32 returns a reference to a *int32, *uint32, *float32, or *enum field in the struct.
func structPointer_Word32(p structPointer, f field) word32 {
return word32{structPointer_field(p, f)}
}
// A word32Val represents a field of type int32, uint32, float32, or enum.
// That is, v.Type() is int32, uint32, float32, or enum and v is assignable.
type word32Val struct {
v reflect.Value
}
// Set sets *p to x.
func word32Val_Set(p word32Val, x uint32) {
switch p.v.Type() {
case int32Type:
p.v.SetInt(int64(x))
return
case uint32Type:
p.v.SetUint(uint64(x))
return
case float32Type:
p.v.SetFloat(float64(math.Float32frombits(x)))
return
}
// must be enum
p.v.SetInt(int64(int32(x)))
}
// Get gets the bits pointed at by p, as a uint32.
func word32Val_Get(p word32Val) uint32 {
elem := p.v
switch elem.Kind() {
case reflect.Int32:
return uint32(elem.Int())
case reflect.Uint32:
return uint32(elem.Uint())
case reflect.Float32:
return math.Float32bits(float32(elem.Float()))
}
panic("unreachable")
}
// Word32Val returns a reference to an int32, uint32, float32, or enum field in the struct.
func structPointer_Word32Val(p structPointer, f field) word32Val {
return word32Val{structPointer_field(p, f)}
}
// A word32Slice is a slice of 32-bit values.
// That is, v.Type() is []int32, []uint32, []float32, or []enum.
type word32Slice struct {
v reflect.Value
}
func (p word32Slice) Append(x uint32) {
n, m := p.v.Len(), p.v.Cap()
if n < m {
p.v.SetLen(n + 1)
} else {
t := p.v.Type().Elem()
p.v.Set(reflect.Append(p.v, reflect.Zero(t)))
}
elem := p.v.Index(n)
switch elem.Kind() {
case reflect.Int32:
elem.SetInt(int64(int32(x)))
case reflect.Uint32:
elem.SetUint(uint64(x))
case reflect.Float32:
elem.SetFloat(float64(math.Float32frombits(x)))
}
}
func (p word32Slice) Len() int {
return p.v.Len()
}
func (p word32Slice) Index(i int) uint32 {
elem := p.v.Index(i)
switch elem.Kind() {
case reflect.Int32:
return uint32(elem.Int())
case reflect.Uint32:
return uint32(elem.Uint())
case reflect.Float32:
return math.Float32bits(float32(elem.Float()))
}
panic("unreachable")
}
// Word32Slice returns a reference to a []int32, []uint32, []float32, or []enum field in the struct.
func structPointer_Word32Slice(p structPointer, f field) word32Slice {
return word32Slice{structPointer_field(p, f)}
}
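// Illustrative sketch, not part of the upstream file: appending to and
// reading back a []int32 field through the reflect-based word32Slice above.
func exampleWord32SliceAppend() int32 {
	var xs []int32
	s := word32Slice{reflect.ValueOf(&xs).Elem()} // v.Type() is []int32 and v is settable
	s.Append(7)
	s.Append(9)
	return int32(s.Index(1)) // 9
}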
// word64 is like word32 but for 64-bit values.
type word64 struct {
v reflect.Value
}
func word64_Set(p word64, o *Buffer, x uint64) {
t := p.v.Type().Elem()
switch t {
case int64Type:
if len(o.int64s) == 0 {
o.int64s = make([]int64, uint64PoolSize)
}
o.int64s[0] = int64(x)
p.v.Set(reflect.ValueOf(&o.int64s[0]))
o.int64s = o.int64s[1:]
return
case uint64Type:
if len(o.uint64s) == 0 {
o.uint64s = make([]uint64, uint64PoolSize)
}
o.uint64s[0] = x
p.v.Set(reflect.ValueOf(&o.uint64s[0]))
o.uint64s = o.uint64s[1:]
return
case float64Type:
if len(o.float64s) == 0 {
o.float64s = make([]float64, uint64PoolSize)
}
o.float64s[0] = math.Float64frombits(x)
p.v.Set(reflect.ValueOf(&o.float64s[0]))
o.float64s = o.float64s[1:]
return
}
panic("unreachable")
}
func word64_IsNil(p word64) bool {
return p.v.IsNil()
}
func word64_Get(p word64) uint64 {
elem := p.v.Elem()
switch elem.Kind() {
case reflect.Int64:
return uint64(elem.Int())
case reflect.Uint64:
return elem.Uint()
case reflect.Float64:
return math.Float64bits(elem.Float())
}
panic("unreachable")
}
func structPointer_Word64(p structPointer, f field) word64 {
return word64{structPointer_field(p, f)}
}
// word64Val is like word32Val but for 64-bit values.
type word64Val struct {
v reflect.Value
}
func word64Val_Set(p word64Val, o *Buffer, x uint64) {
switch p.v.Type() {
case int64Type:
p.v.SetInt(int64(x))
return
case uint64Type:
p.v.SetUint(x)
return
case float64Type:
p.v.SetFloat(math.Float64frombits(x))
return
}
panic("unreachable")
}
func word64Val_Get(p word64Val) uint64 {
elem := p.v
switch elem.Kind() {
case reflect.Int64:
return uint64(elem.Int())
case reflect.Uint64:
return elem.Uint()
case reflect.Float64:
return math.Float64bits(elem.Float())
}
panic("unreachable")
}
func structPointer_Word64Val(p structPointer, f field) word64Val {
return word64Val{structPointer_field(p, f)}
}
type word64Slice struct {
v reflect.Value
}
func (p word64Slice) Append(x uint64) {
n, m := p.v.Len(), p.v.Cap()
if n < m {
p.v.SetLen(n + 1)
} else {
t := p.v.Type().Elem()
p.v.Set(reflect.Append(p.v, reflect.Zero(t)))
}
elem := p.v.Index(n)
switch elem.Kind() {
case reflect.Int64:
elem.SetInt(int64(int64(x)))
case reflect.Uint64:
elem.SetUint(uint64(x))
case reflect.Float64:
elem.SetFloat(float64(math.Float64frombits(x)))
}
}
func (p word64Slice) Len() int {
return p.v.Len()
}
func (p word64Slice) Index(i int) uint64 {
elem := p.v.Index(i)
switch elem.Kind() {
case reflect.Int64:
return uint64(elem.Int())
case reflect.Uint64:
return uint64(elem.Uint())
case reflect.Float64:
return math.Float64bits(float64(elem.Float()))
}
panic("unreachable")
}
func structPointer_Word64Slice(p structPointer, f field) word64Slice {
return word64Slice{structPointer_field(p, f)}
}


@ -0,0 +1,270 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2012 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
// +build !appengine,!js
// This file contains the implementation of the proto field accesses using package unsafe.
package proto
import (
"reflect"
"unsafe"
)
// NOTE: These type_Foo functions would more idiomatically be methods,
// but Go does not allow methods on pointer types, and we must preserve
// some pointer type for the garbage collector. We use these
// funcs with clunky names as our poor approximation to methods.
//
// An alternative would be
// type structPointer struct { p unsafe.Pointer }
// but that does not registerize as well.
// A structPointer is a pointer to a struct.
type structPointer unsafe.Pointer
// toStructPointer returns a structPointer equivalent to the given reflect value.
func toStructPointer(v reflect.Value) structPointer {
return structPointer(unsafe.Pointer(v.Pointer()))
}
// IsNil reports whether p is nil.
func structPointer_IsNil(p structPointer) bool {
return p == nil
}
// Interface returns the struct pointer, assumed to have element type t,
// as an interface value.
func structPointer_Interface(p structPointer, t reflect.Type) interface{} {
return reflect.NewAt(t, unsafe.Pointer(p)).Interface()
}
// A field identifies a field in a struct, accessible from a structPointer.
// In this implementation, a field is identified by its byte offset from the start of the struct.
type field uintptr
// toField returns a field equivalent to the given reflect field.
func toField(f *reflect.StructField) field {
return field(f.Offset)
}
// invalidField is an invalid field identifier.
const invalidField = ^field(0)
// IsValid reports whether the field identifier is valid.
func (f field) IsValid() bool {
return f != ^field(0)
}
// Bytes returns the address of a []byte field in the struct.
func structPointer_Bytes(p structPointer, f field) *[]byte {
return (*[]byte)(unsafe.Pointer(uintptr(p) + uintptr(f)))
}
// BytesSlice returns the address of a [][]byte field in the struct.
func structPointer_BytesSlice(p structPointer, f field) *[][]byte {
return (*[][]byte)(unsafe.Pointer(uintptr(p) + uintptr(f)))
}
// Bool returns the address of a *bool field in the struct.
func structPointer_Bool(p structPointer, f field) **bool {
return (**bool)(unsafe.Pointer(uintptr(p) + uintptr(f)))
}
// BoolVal returns the address of a bool field in the struct.
func structPointer_BoolVal(p structPointer, f field) *bool {
return (*bool)(unsafe.Pointer(uintptr(p) + uintptr(f)))
}
// BoolSlice returns the address of a []bool field in the struct.
func structPointer_BoolSlice(p structPointer, f field) *[]bool {
return (*[]bool)(unsafe.Pointer(uintptr(p) + uintptr(f)))
}
// String returns the address of a *string field in the struct.
func structPointer_String(p structPointer, f field) **string {
return (**string)(unsafe.Pointer(uintptr(p) + uintptr(f)))
}
// StringVal returns the address of a string field in the struct.
func structPointer_StringVal(p structPointer, f field) *string {
return (*string)(unsafe.Pointer(uintptr(p) + uintptr(f)))
}
// StringSlice returns the address of a []string field in the struct.
func structPointer_StringSlice(p structPointer, f field) *[]string {
return (*[]string)(unsafe.Pointer(uintptr(p) + uintptr(f)))
}
// ExtMap returns the address of an extension map field in the struct.
func structPointer_Extensions(p structPointer, f field) *XXX_InternalExtensions {
return (*XXX_InternalExtensions)(unsafe.Pointer(uintptr(p) + uintptr(f)))
}
func structPointer_ExtMap(p structPointer, f field) *map[int32]Extension {
return (*map[int32]Extension)(unsafe.Pointer(uintptr(p) + uintptr(f)))
}
// NewAt returns the reflect.Value for a pointer to a field in the struct.
func structPointer_NewAt(p structPointer, f field, typ reflect.Type) reflect.Value {
return reflect.NewAt(typ, unsafe.Pointer(uintptr(p)+uintptr(f)))
}
// SetStructPointer writes a *struct field in the struct.
func structPointer_SetStructPointer(p structPointer, f field, q structPointer) {
*(*structPointer)(unsafe.Pointer(uintptr(p) + uintptr(f))) = q
}
// GetStructPointer reads a *struct field in the struct.
func structPointer_GetStructPointer(p structPointer, f field) structPointer {
return *(*structPointer)(unsafe.Pointer(uintptr(p) + uintptr(f)))
}
// StructPointerSlice returns the address of a []*struct field in the struct.
func structPointer_StructPointerSlice(p structPointer, f field) *structPointerSlice {
return (*structPointerSlice)(unsafe.Pointer(uintptr(p) + uintptr(f)))
}
// A structPointerSlice represents a slice of pointers to structs (themselves submessages or groups).
type structPointerSlice []structPointer
func (v *structPointerSlice) Len() int { return len(*v) }
func (v *structPointerSlice) Index(i int) structPointer { return (*v)[i] }
func (v *structPointerSlice) Append(p structPointer) { *v = append(*v, p) }
// A word32 is the address of a "pointer to 32-bit value" field.
type word32 **uint32
// IsNil reports whether *p is nil.
func word32_IsNil(p word32) bool {
return *p == nil
}
// Set sets *p to point at a newly allocated word set to x.
func word32_Set(p word32, o *Buffer, x uint32) {
if len(o.uint32s) == 0 {
o.uint32s = make([]uint32, uint32PoolSize)
}
o.uint32s[0] = x
*p = &o.uint32s[0]
o.uint32s = o.uint32s[1:]
}
// Get gets the value pointed at by *p.
func word32_Get(p word32) uint32 {
return **p
}
// Word32 returns the address of a *int32, *uint32, *float32, or *enum field in the struct.
func structPointer_Word32(p structPointer, f field) word32 {
return word32((**uint32)(unsafe.Pointer(uintptr(p) + uintptr(f))))
}
// A word32Val is the address of a 32-bit value field.
type word32Val *uint32
// Set sets *p to x.
func word32Val_Set(p word32Val, x uint32) {
*p = x
}
// Get gets the value pointed at by p.
func word32Val_Get(p word32Val) uint32 {
return *p
}
// Word32Val returns the address of a *int32, *uint32, *float32, or *enum field in the struct.
func structPointer_Word32Val(p structPointer, f field) word32Val {
return word32Val((*uint32)(unsafe.Pointer(uintptr(p) + uintptr(f))))
}
// A word32Slice is a slice of 32-bit values.
type word32Slice []uint32
func (v *word32Slice) Append(x uint32) { *v = append(*v, x) }
func (v *word32Slice) Len() int { return len(*v) }
func (v *word32Slice) Index(i int) uint32 { return (*v)[i] }
// Word32Slice returns the address of a []int32, []uint32, []float32, or []enum field in the struct.
func structPointer_Word32Slice(p structPointer, f field) *word32Slice {
return (*word32Slice)(unsafe.Pointer(uintptr(p) + uintptr(f)))
}
// word64 is like word32 but for 64-bit values.
type word64 **uint64
func word64_Set(p word64, o *Buffer, x uint64) {
if len(o.uint64s) == 0 {
o.uint64s = make([]uint64, uint64PoolSize)
}
o.uint64s[0] = x
*p = &o.uint64s[0]
o.uint64s = o.uint64s[1:]
}
func word64_IsNil(p word64) bool {
return *p == nil
}
func word64_Get(p word64) uint64 {
return **p
}
func structPointer_Word64(p structPointer, f field) word64 {
return word64((**uint64)(unsafe.Pointer(uintptr(p) + uintptr(f))))
}
// word64Val is like word32Val but for 64-bit values.
type word64Val *uint64
func word64Val_Set(p word64Val, o *Buffer, x uint64) {
*p = x
}
func word64Val_Get(p word64Val) uint64 {
return *p
}
func structPointer_Word64Val(p structPointer, f field) word64Val {
return word64Val((*uint64)(unsafe.Pointer(uintptr(p) + uintptr(f))))
}
// word64Slice is like word32Slice but for 64-bit values.
type word64Slice []uint64
func (v *word64Slice) Append(x uint64) { *v = append(*v, x) }
func (v *word64Slice) Len() int { return len(*v) }
func (v *word64Slice) Index(i int) uint64 { return (*v)[i] }
func structPointer_Word64Slice(p structPointer, f field) *word64Slice {
return (*word64Slice)(unsafe.Pointer(uintptr(p) + uintptr(f)))
}
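// Illustrative sketch, not part of the upstream file: the accessors above all
// reduce to "struct base pointer + field byte offset". The struct type here
// is hypothetical.
func exampleUnsafeFieldAccess() string {
	type msg struct {
		A int32
		S string
	}
	m := &msg{S: "hi"}
	sp := toStructPointer(reflect.ValueOf(m))
	sf, _ := reflect.TypeOf(*m).FieldByName("S")
	return *structPointer_StringVal(sp, toField(&sf)) // "hi"
}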

872
vendor/github.com/golang/protobuf/proto/properties.go generated vendored Normal file

@ -0,0 +1,872 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2010 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package proto
/*
* Routines for encoding data into the wire format for protocol buffers.
*/
import (
"fmt"
"log"
"os"
"reflect"
"sort"
"strconv"
"strings"
"sync"
)
const debug bool = false
// Constants that identify the encoding of a value on the wire.
const (
WireVarint = 0
WireFixed64 = 1
WireBytes = 2
WireStartGroup = 3
WireEndGroup = 4
WireFixed32 = 5
)
const startSize = 10 // initial slice/string sizes
// Encoders are defined in encode.go
// An encoder outputs the full representation of a field, including its
// tag and encoder type.
type encoder func(p *Buffer, prop *Properties, base structPointer) error
// A valueEncoder encodes a single integer in a particular encoding.
type valueEncoder func(o *Buffer, x uint64) error
// Sizers are defined in encode.go
// A sizer returns the encoded size of a field, including its tag and encoder
// type.
type sizer func(prop *Properties, base structPointer) int
// A valueSizer returns the encoded size of a single integer in a particular
// encoding.
type valueSizer func(x uint64) int
// Decoders are defined in decode.go
// A decoder creates a value from its wire representation.
// Unrecognized subelements are saved in unrec.
type decoder func(p *Buffer, prop *Properties, base structPointer) error
// A valueDecoder decodes a single integer in a particular encoding.
type valueDecoder func(o *Buffer) (x uint64, err error)
// A oneofMarshaler does the marshaling for all oneof fields in a message.
type oneofMarshaler func(Message, *Buffer) error
// A oneofUnmarshaler does the unmarshaling for a oneof field in a message.
type oneofUnmarshaler func(Message, int, int, *Buffer) (bool, error)
// A oneofSizer does the sizing for all oneof fields in a message.
type oneofSizer func(Message) int
// tagMap is an optimization over map[int]int for typical protocol buffer
// use-cases. Encoded protocol buffers are often in tag order with small tag
// numbers.
type tagMap struct {
fastTags []int
slowTags map[int]int
}
// tagMapFastLimit is the upper bound on the tag number that will be stored in
// the tagMap slice rather than its map.
const tagMapFastLimit = 1024
func (p *tagMap) get(t int) (int, bool) {
if t > 0 && t < tagMapFastLimit {
if t >= len(p.fastTags) {
return 0, false
}
fi := p.fastTags[t]
return fi, fi >= 0
}
fi, ok := p.slowTags[t]
return fi, ok
}
func (p *tagMap) put(t int, fi int) {
if t > 0 && t < tagMapFastLimit {
for len(p.fastTags) < t+1 {
p.fastTags = append(p.fastTags, -1)
}
p.fastTags[t] = fi
return
}
if p.slowTags == nil {
p.slowTags = make(map[int]int)
}
p.slowTags[t] = fi
}
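// Illustrative sketch, not part of the upstream file: tags below
// tagMapFastLimit are indexed directly in the fastTags slice; larger tags
// fall back to the slowTags map.
func exampleTagMap() (int, bool) {
	var m tagMap
	m.put(3, 0)      // stored as fastTags[3] = 0
	m.put(100000, 1) // 100000 >= tagMapFastLimit, stored in slowTags
	return m.get(100000) // (1, true)
}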
// StructProperties represents properties for all the fields of a struct.
// decoderTags and decoderOrigNames should only be used by the decoder.
type StructProperties struct {
Prop []*Properties // properties for each field
reqCount int // required count
decoderTags tagMap // map from proto tag to struct field number
decoderOrigNames map[string]int // map from original name to struct field number
order []int // list of struct field numbers in tag order
unrecField field // field id of the XXX_unrecognized []byte field
extendable bool // is this an extendable proto
oneofMarshaler oneofMarshaler
oneofUnmarshaler oneofUnmarshaler
oneofSizer oneofSizer
stype reflect.Type
// OneofTypes contains information about the oneof fields in this message.
// It is keyed by the original name of a field.
OneofTypes map[string]*OneofProperties
}
// OneofProperties represents information about a specific field in a oneof.
type OneofProperties struct {
Type reflect.Type // pointer to generated struct type for this oneof field
Field int // struct field number of the containing oneof in the message
Prop *Properties
}
// Implement the sorting interface so we can sort the fields in tag order, as recommended by the spec.
// See encode.go, (*Buffer).enc_struct.
func (sp *StructProperties) Len() int { return len(sp.order) }
func (sp *StructProperties) Less(i, j int) bool {
return sp.Prop[sp.order[i]].Tag < sp.Prop[sp.order[j]].Tag
}
func (sp *StructProperties) Swap(i, j int) { sp.order[i], sp.order[j] = sp.order[j], sp.order[i] }
// Properties represents the protocol-specific behavior of a single struct field.
type Properties struct {
Name string // name of the field, for error messages
OrigName string // original name before protocol compiler (always set)
JSONName string // name to use for JSON; determined by protoc
Wire string
WireType int
Tag int
Required bool
Optional bool
Repeated bool
Packed bool // relevant for repeated primitives only
Enum string // set for enum types only
proto3 bool // whether this is known to be a proto3 field; set for []byte only
oneof bool // whether this is a oneof field
Default string // default value
HasDefault bool // whether an explicit default was provided
def_uint64 uint64
enc encoder
valEnc valueEncoder // set for bool and numeric types only
field field
tagcode []byte // encoding of EncodeVarint((Tag<<3)|WireType)
tagbuf [8]byte
stype reflect.Type // set for struct types only
sprop *StructProperties // set for struct types only
isMarshaler bool
isUnmarshaler bool
mtype reflect.Type // set for map types only
mkeyprop *Properties // set for map types only
mvalprop *Properties // set for map types only
size sizer
valSize valueSizer // set for bool and numeric types only
dec decoder
valDec valueDecoder // set for bool and numeric types only
// If this is a packable field, this will be the decoder for the packed version of the field.
packedDec decoder
}
// String formats the properties in the protobuf struct field tag style.
func (p *Properties) String() string {
s := p.Wire
s = ","
s += strconv.Itoa(p.Tag)
if p.Required {
s += ",req"
}
if p.Optional {
s += ",opt"
}
if p.Repeated {
s += ",rep"
}
if p.Packed {
s += ",packed"
}
s += ",name=" + p.OrigName
if p.JSONName != p.OrigName {
s += ",json=" + p.JSONName
}
if p.proto3 {
s += ",proto3"
}
if p.oneof {
s += ",oneof"
}
if len(p.Enum) > 0 {
s += ",enum=" + p.Enum
}
if p.HasDefault {
s += ",def=" + p.Default
}
return s
}
// Parse populates p by parsing a string in the protobuf struct field tag style.
func (p *Properties) Parse(s string) {
// "bytes,49,opt,name=foo,def=hello!"
fields := strings.Split(s, ",") // breaks def=, but handled below.
if len(fields) < 2 {
fmt.Fprintf(os.Stderr, "proto: tag has too few fields: %q\n", s)
return
}
p.Wire = fields[0]
switch p.Wire {
case "varint":
p.WireType = WireVarint
p.valEnc = (*Buffer).EncodeVarint
p.valDec = (*Buffer).DecodeVarint
p.valSize = sizeVarint
case "fixed32":
p.WireType = WireFixed32
p.valEnc = (*Buffer).EncodeFixed32
p.valDec = (*Buffer).DecodeFixed32
p.valSize = sizeFixed32
case "fixed64":
p.WireType = WireFixed64
p.valEnc = (*Buffer).EncodeFixed64
p.valDec = (*Buffer).DecodeFixed64
p.valSize = sizeFixed64
case "zigzag32":
p.WireType = WireVarint
p.valEnc = (*Buffer).EncodeZigzag32
p.valDec = (*Buffer).DecodeZigzag32
p.valSize = sizeZigzag32
case "zigzag64":
p.WireType = WireVarint
p.valEnc = (*Buffer).EncodeZigzag64
p.valDec = (*Buffer).DecodeZigzag64
p.valSize = sizeZigzag64
case "bytes", "group":
p.WireType = WireBytes
// no numeric converter for non-numeric types
default:
fmt.Fprintf(os.Stderr, "proto: tag has unknown wire type: %q\n", s)
return
}
var err error
p.Tag, err = strconv.Atoi(fields[1])
if err != nil {
return
}
for i := 2; i < len(fields); i++ {
f := fields[i]
switch {
case f == "req":
p.Required = true
case f == "opt":
p.Optional = true
case f == "rep":
p.Repeated = true
case f == "packed":
p.Packed = true
case strings.HasPrefix(f, "name="):
p.OrigName = f[5:]
case strings.HasPrefix(f, "json="):
p.JSONName = f[5:]
case strings.HasPrefix(f, "enum="):
p.Enum = f[5:]
case f == "proto3":
p.proto3 = true
case f == "oneof":
p.oneof = true
case strings.HasPrefix(f, "def="):
p.HasDefault = true
p.Default = f[4:] // rest of string
if i+1 < len(fields) {
// Commas aren't escaped, and def is always last.
p.Default += "," + strings.Join(fields[i+1:], ",")
break
}
}
}
}
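// Illustrative sketch, not part of the upstream file: parsing a typical
// generated struct tag with Parse above. The tag string is hypothetical but
// follows the format documented in the comment at the top of Parse.
func exampleParseTag() *Properties {
	p := new(Properties)
	p.Parse("bytes,4,opt,name=data,proto3")
	// Now p.WireType == WireBytes, p.Tag == 4, p.Optional is true,
	// p.OrigName == "data", and the field is marked as proto3.
	return p
}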
func logNoSliceEnc(t1, t2 reflect.Type) {
fmt.Fprintf(os.Stderr, "proto: no slice oenc for %T = []%T\n", t1, t2)
}
var protoMessageType = reflect.TypeOf((*Message)(nil)).Elem()
// Initialize the fields for encoding and decoding.
func (p *Properties) setEncAndDec(typ reflect.Type, f *reflect.StructField, lockGetProp bool) {
p.enc = nil
p.dec = nil
p.size = nil
switch t1 := typ; t1.Kind() {
default:
fmt.Fprintf(os.Stderr, "proto: no coders for %v\n", t1)
// proto3 scalar types
case reflect.Bool:
p.enc = (*Buffer).enc_proto3_bool
p.dec = (*Buffer).dec_proto3_bool
p.size = size_proto3_bool
case reflect.Int32:
p.enc = (*Buffer).enc_proto3_int32
p.dec = (*Buffer).dec_proto3_int32
p.size = size_proto3_int32
case reflect.Uint32:
p.enc = (*Buffer).enc_proto3_uint32
p.dec = (*Buffer).dec_proto3_int32 // can reuse
p.size = size_proto3_uint32
case reflect.Int64, reflect.Uint64:
p.enc = (*Buffer).enc_proto3_int64
p.dec = (*Buffer).dec_proto3_int64
p.size = size_proto3_int64
case reflect.Float32:
p.enc = (*Buffer).enc_proto3_uint32 // can just treat them as bits
p.dec = (*Buffer).dec_proto3_int32
p.size = size_proto3_uint32
case reflect.Float64:
p.enc = (*Buffer).enc_proto3_int64 // can just treat them as bits
p.dec = (*Buffer).dec_proto3_int64
p.size = size_proto3_int64
case reflect.String:
p.enc = (*Buffer).enc_proto3_string
p.dec = (*Buffer).dec_proto3_string
p.size = size_proto3_string
case reflect.Ptr:
switch t2 := t1.Elem(); t2.Kind() {
default:
fmt.Fprintf(os.Stderr, "proto: no encoder function for %v -> %v\n", t1, t2)
break
case reflect.Bool:
p.enc = (*Buffer).enc_bool
p.dec = (*Buffer).dec_bool
p.size = size_bool
case reflect.Int32:
p.enc = (*Buffer).enc_int32
p.dec = (*Buffer).dec_int32
p.size = size_int32
case reflect.Uint32:
p.enc = (*Buffer).enc_uint32
p.dec = (*Buffer).dec_int32 // can reuse
p.size = size_uint32
case reflect.Int64, reflect.Uint64:
p.enc = (*Buffer).enc_int64
p.dec = (*Buffer).dec_int64
p.size = size_int64
case reflect.Float32:
p.enc = (*Buffer).enc_uint32 // can just treat them as bits
p.dec = (*Buffer).dec_int32
p.size = size_uint32
case reflect.Float64:
p.enc = (*Buffer).enc_int64 // can just treat them as bits
p.dec = (*Buffer).dec_int64
p.size = size_int64
case reflect.String:
p.enc = (*Buffer).enc_string
p.dec = (*Buffer).dec_string
p.size = size_string
case reflect.Struct:
p.stype = t1.Elem()
p.isMarshaler = isMarshaler(t1)
p.isUnmarshaler = isUnmarshaler(t1)
if p.Wire == "bytes" {
p.enc = (*Buffer).enc_struct_message
p.dec = (*Buffer).dec_struct_message
p.size = size_struct_message
} else {
p.enc = (*Buffer).enc_struct_group
p.dec = (*Buffer).dec_struct_group
p.size = size_struct_group
}
}
case reflect.Slice:
switch t2 := t1.Elem(); t2.Kind() {
default:
logNoSliceEnc(t1, t2)
break
case reflect.Bool:
if p.Packed {
p.enc = (*Buffer).enc_slice_packed_bool
p.size = size_slice_packed_bool
} else {
p.enc = (*Buffer).enc_slice_bool
p.size = size_slice_bool
}
p.dec = (*Buffer).dec_slice_bool
p.packedDec = (*Buffer).dec_slice_packed_bool
case reflect.Int32:
if p.Packed {
p.enc = (*Buffer).enc_slice_packed_int32
p.size = size_slice_packed_int32
} else {
p.enc = (*Buffer).enc_slice_int32
p.size = size_slice_int32
}
p.dec = (*Buffer).dec_slice_int32
p.packedDec = (*Buffer).dec_slice_packed_int32
case reflect.Uint32:
if p.Packed {
p.enc = (*Buffer).enc_slice_packed_uint32
p.size = size_slice_packed_uint32
} else {
p.enc = (*Buffer).enc_slice_uint32
p.size = size_slice_uint32
}
p.dec = (*Buffer).dec_slice_int32
p.packedDec = (*Buffer).dec_slice_packed_int32
case reflect.Int64, reflect.Uint64:
if p.Packed {
p.enc = (*Buffer).enc_slice_packed_int64
p.size = size_slice_packed_int64
} else {
p.enc = (*Buffer).enc_slice_int64
p.size = size_slice_int64
}
p.dec = (*Buffer).dec_slice_int64
p.packedDec = (*Buffer).dec_slice_packed_int64
case reflect.Uint8:
p.dec = (*Buffer).dec_slice_byte
if p.proto3 {
p.enc = (*Buffer).enc_proto3_slice_byte
p.size = size_proto3_slice_byte
} else {
p.enc = (*Buffer).enc_slice_byte
p.size = size_slice_byte
}
case reflect.Float32, reflect.Float64:
switch t2.Bits() {
case 32:
// can just treat them as bits
if p.Packed {
p.enc = (*Buffer).enc_slice_packed_uint32
p.size = size_slice_packed_uint32
} else {
p.enc = (*Buffer).enc_slice_uint32
p.size = size_slice_uint32
}
p.dec = (*Buffer).dec_slice_int32
p.packedDec = (*Buffer).dec_slice_packed_int32
case 64:
// can just treat them as bits
if p.Packed {
p.enc = (*Buffer).enc_slice_packed_int64
p.size = size_slice_packed_int64
} else {
p.enc = (*Buffer).enc_slice_int64
p.size = size_slice_int64
}
p.dec = (*Buffer).dec_slice_int64
p.packedDec = (*Buffer).dec_slice_packed_int64
default:
logNoSliceEnc(t1, t2)
break
}
case reflect.String:
p.enc = (*Buffer).enc_slice_string
p.dec = (*Buffer).dec_slice_string
p.size = size_slice_string
case reflect.Ptr:
switch t3 := t2.Elem(); t3.Kind() {
default:
fmt.Fprintf(os.Stderr, "proto: no ptr oenc for %T -> %T -> %T\n", t1, t2, t3)
break
case reflect.Struct:
p.stype = t2.Elem()
p.isMarshaler = isMarshaler(t2)
p.isUnmarshaler = isUnmarshaler(t2)
if p.Wire == "bytes" {
p.enc = (*Buffer).enc_slice_struct_message
p.dec = (*Buffer).dec_slice_struct_message
p.size = size_slice_struct_message
} else {
p.enc = (*Buffer).enc_slice_struct_group
p.dec = (*Buffer).dec_slice_struct_group
p.size = size_slice_struct_group
}
}
case reflect.Slice:
switch t2.Elem().Kind() {
default:
fmt.Fprintf(os.Stderr, "proto: no slice elem oenc for %T -> %T -> %T\n", t1, t2, t2.Elem())
break
case reflect.Uint8:
p.enc = (*Buffer).enc_slice_slice_byte
p.dec = (*Buffer).dec_slice_slice_byte
p.size = size_slice_slice_byte
}
}
case reflect.Map:
p.enc = (*Buffer).enc_new_map
p.dec = (*Buffer).dec_new_map
p.size = size_new_map
p.mtype = t1
p.mkeyprop = &Properties{}
p.mkeyprop.init(reflect.PtrTo(p.mtype.Key()), "Key", f.Tag.Get("protobuf_key"), nil, lockGetProp)
p.mvalprop = &Properties{}
vtype := p.mtype.Elem()
if vtype.Kind() != reflect.Ptr && vtype.Kind() != reflect.Slice {
// The value type is not a message (*T) or bytes ([]byte),
// so we need encoders for the pointer to this type.
vtype = reflect.PtrTo(vtype)
}
p.mvalprop.init(vtype, "Value", f.Tag.Get("protobuf_val"), nil, lockGetProp)
}
// precalculate tag code
wire := p.WireType
if p.Packed {
wire = WireBytes
}
x := uint32(p.Tag)<<3 | uint32(wire)
i := 0
for i = 0; x > 127; i++ {
p.tagbuf[i] = 0x80 | uint8(x&0x7F)
x >>= 7
}
p.tagbuf[i] = uint8(x)
p.tagcode = p.tagbuf[0 : i+1]
if p.stype != nil {
if lockGetProp {
p.sprop = GetProperties(p.stype)
} else {
p.sprop = getPropertiesLocked(p.stype)
}
}
}
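// Illustrative sketch, not part of the upstream file: the tag-code
// precalculation above is the varint encoding of (Tag<<3 | wireType).
// For example, Tag=16 with WireVarint gives 16<<3 = 128, which encodes as the
// two bytes 0x80 0x01.
func exampleTagcode(tag, wire int) []byte {
	x := uint32(tag)<<3 | uint32(wire)
	var buf [8]byte
	i := 0
	for ; x > 127; i++ {
		buf[i] = 0x80 | uint8(x&0x7F)
		x >>= 7
	}
	buf[i] = uint8(x)
	return buf[0 : i+1]
}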
var (
marshalerType = reflect.TypeOf((*Marshaler)(nil)).Elem()
unmarshalerType = reflect.TypeOf((*Unmarshaler)(nil)).Elem()
)
// isMarshaler reports whether type t implements Marshaler.
func isMarshaler(t reflect.Type) bool {
// We're checking for (likely) pointer-receiver methods
// so if t is not a pointer, something is very wrong.
// The calls above only invoke isMarshaler on pointer types.
if t.Kind() != reflect.Ptr {
panic("proto: misuse of isMarshaler")
}
return t.Implements(marshalerType)
}
// isUnmarshaler reports whether type t implements Unmarshaler.
func isUnmarshaler(t reflect.Type) bool {
// We're checking for (likely) pointer-receiver methods
// so if t is not a pointer, something is very wrong.
// The calls above only invoke isUnmarshaler on pointer types.
if t.Kind() != reflect.Ptr {
panic("proto: misuse of isUnmarshaler")
}
return t.Implements(unmarshalerType)
}
// Init populates the properties from a protocol buffer struct tag.
func (p *Properties) Init(typ reflect.Type, name, tag string, f *reflect.StructField) {
p.init(typ, name, tag, f, true)
}
func (p *Properties) init(typ reflect.Type, name, tag string, f *reflect.StructField, lockGetProp bool) {
// "bytes,49,opt,def=hello!"
p.Name = name
p.OrigName = name
if f != nil {
p.field = toField(f)
}
if tag == "" {
return
}
p.Parse(tag)
p.setEncAndDec(typ, f, lockGetProp)
}
var (
propertiesMu sync.RWMutex
propertiesMap = make(map[reflect.Type]*StructProperties)
)
// GetProperties returns the list of properties for the type represented by t.
// t must represent a generated struct type of a protocol message.
func GetProperties(t reflect.Type) *StructProperties {
if t.Kind() != reflect.Struct {
panic("proto: type must have kind struct")
}
// Most calls to GetProperties in a long-running program will be
// retrieving details for types we have seen before.
propertiesMu.RLock()
sprop, ok := propertiesMap[t]
propertiesMu.RUnlock()
if ok {
if collectStats {
stats.Chit++
}
return sprop
}
propertiesMu.Lock()
sprop = getPropertiesLocked(t)
propertiesMu.Unlock()
return sprop
}
// getPropertiesLocked requires that propertiesMu is held.
func getPropertiesLocked(t reflect.Type) *StructProperties {
if prop, ok := propertiesMap[t]; ok {
if collectStats {
stats.Chit++
}
return prop
}
if collectStats {
stats.Cmiss++
}
prop := new(StructProperties)
// in case of recursive protos, fill this in now.
propertiesMap[t] = prop
// build properties
prop.extendable = reflect.PtrTo(t).Implements(extendableProtoType) ||
reflect.PtrTo(t).Implements(extendableProtoV1Type)
prop.unrecField = invalidField
prop.Prop = make([]*Properties, t.NumField())
prop.order = make([]int, t.NumField())
for i := 0; i < t.NumField(); i++ {
f := t.Field(i)
p := new(Properties)
name := f.Name
p.init(f.Type, name, f.Tag.Get("protobuf"), &f, false)
if f.Name == "XXX_InternalExtensions" { // special case
p.enc = (*Buffer).enc_exts
p.dec = nil // not needed
p.size = size_exts
} else if f.Name == "XXX_extensions" { // special case
p.enc = (*Buffer).enc_map
p.dec = nil // not needed
p.size = size_map
} else if f.Name == "XXX_unrecognized" { // special case
prop.unrecField = toField(&f)
}
oneof := f.Tag.Get("protobuf_oneof") // special case
if oneof != "" {
// Oneof fields don't use the traditional protobuf tag.
p.OrigName = oneof
}
prop.Prop[i] = p
prop.order[i] = i
if debug {
print(i, " ", f.Name, " ", t.String(), " ")
if p.Tag > 0 {
print(p.String())
}
print("\n")
}
if p.enc == nil && !strings.HasPrefix(f.Name, "XXX_") && oneof == "" {
fmt.Fprintln(os.Stderr, "proto: no encoder for", f.Name, f.Type.String(), "[GetProperties]")
}
}
// Re-order prop.order.
sort.Sort(prop)
type oneofMessage interface {
XXX_OneofFuncs() (func(Message, *Buffer) error, func(Message, int, int, *Buffer) (bool, error), func(Message) int, []interface{})
}
if om, ok := reflect.Zero(reflect.PtrTo(t)).Interface().(oneofMessage); ok {
var oots []interface{}
prop.oneofMarshaler, prop.oneofUnmarshaler, prop.oneofSizer, oots = om.XXX_OneofFuncs()
prop.stype = t
// Interpret oneof metadata.
prop.OneofTypes = make(map[string]*OneofProperties)
for _, oot := range oots {
oop := &OneofProperties{
Type: reflect.ValueOf(oot).Type(), // *T
Prop: new(Properties),
}
sft := oop.Type.Elem().Field(0)
oop.Prop.Name = sft.Name
oop.Prop.Parse(sft.Tag.Get("protobuf"))
// There will be exactly one interface field that
// this new value is assignable to.
for i := 0; i < t.NumField(); i++ {
f := t.Field(i)
if f.Type.Kind() != reflect.Interface {
continue
}
if !oop.Type.AssignableTo(f.Type) {
continue
}
oop.Field = i
break
}
prop.OneofTypes[oop.Prop.OrigName] = oop
}
}
// build required counts
// build tags
reqCount := 0
prop.decoderOrigNames = make(map[string]int)
for i, p := range prop.Prop {
if strings.HasPrefix(p.Name, "XXX_") {
// Internal fields should not appear in tags/origNames maps.
// They are handled specially when encoding and decoding.
continue
}
if p.Required {
reqCount++
}
prop.decoderTags.put(p.Tag, i)
prop.decoderOrigNames[p.OrigName] = i
}
prop.reqCount = reqCount
return prop
}
// Return the Properties object for the x[0]'th field of the structure.
func propByIndex(t reflect.Type, x []int) *Properties {
if len(x) != 1 {
fmt.Fprintf(os.Stderr, "proto: field index dimension %d (not 1) for type %s\n", len(x), t)
return nil
}
prop := GetProperties(t)
return prop.Prop[x[0]]
}
// Get the address and type of a pointer to a struct from an interface.
func getbase(pb Message) (t reflect.Type, b structPointer, err error) {
if pb == nil {
err = ErrNil
return
}
// get the reflect type of the pointer to the struct.
t = reflect.TypeOf(pb)
// get the address of the struct.
value := reflect.ValueOf(pb)
b = toStructPointer(value)
return
}
// A global registry of enum types.
// The generated code will register the generated maps by calling RegisterEnum.
var enumValueMaps = make(map[string]map[string]int32)
// RegisterEnum is called from the generated code to install the enum descriptor
// maps into the global table to aid parsing text format protocol buffers.
func RegisterEnum(typeName string, unusedNameMap map[int32]string, valueMap map[string]int32) {
if _, ok := enumValueMaps[typeName]; ok {
panic("proto: duplicate enum registered: " + typeName)
}
enumValueMaps[typeName] = valueMap
}
// EnumValueMap returns the mapping from names to integers of the
// enum type enumType, or nil if not found.
func EnumValueMap(enumType string) map[string]int32 {
return enumValueMaps[enumType]
}
// A registry of all linked message types.
// The string is a fully-qualified proto name ("pkg.Message").
var (
protoTypes = make(map[string]reflect.Type)
revProtoTypes = make(map[reflect.Type]string)
)
// RegisterType is called from generated code and maps from the fully qualified
// proto name to the type (pointer to struct) of the protocol buffer.
func RegisterType(x Message, name string) {
if _, ok := protoTypes[name]; ok {
// TODO: Some day, make this a panic.
log.Printf("proto: duplicate proto type registered: %s", name)
return
}
t := reflect.TypeOf(x)
protoTypes[name] = t
revProtoTypes[t] = name
}
// MessageName returns the fully-qualified proto name for the given message type.
func MessageName(x Message) string {
type xname interface {
XXX_MessageName() string
}
if m, ok := x.(xname); ok {
return m.XXX_MessageName()
}
return revProtoTypes[reflect.TypeOf(x)]
}
// MessageType returns the message type (pointer to struct) for a named message.
func MessageType(name string) reflect.Type { return protoTypes[name] }
// A registry of all linked proto files.
var (
protoFiles = make(map[string][]byte) // file name => fileDescriptor
)
// RegisterFile is called from generated code and maps from the
// full file name of a .proto file to its compressed FileDescriptorProto.
func RegisterFile(filename string, fileDescriptor []byte) {
protoFiles[filename] = fileDescriptor
}
// FileDescriptor returns the compressed FileDescriptorProto for a .proto file.
func FileDescriptor(filename string) []byte { return protoFiles[filename] }
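// Illustrative sketch, not part of the upstream file: looking up registered
// enum and message metadata by name. The names assume a generated package
// (such as the proto3_proto test package) has been linked in so its init has
// populated the registries above.
func exampleRegistryLookup() (int32, reflect.Type) {
	v := EnumValueMap("proto3_proto.Message_Humour")["PUNS"] // 1, per the generated code
	t := MessageType("proto3_proto.Message")                 // pointer-to-struct type, or nil
	return v, t
}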


@ -0,0 +1,347 @@
// Code generated by protoc-gen-go.
// source: proto3_proto/proto3.proto
// DO NOT EDIT!
/*
Package proto3_proto is a generated protocol buffer package.
It is generated from these files:
proto3_proto/proto3.proto
It has these top-level messages:
Message
Nested
MessageWithMap
IntMap
IntMaps
*/
package proto3_proto
import proto "github.com/golang/protobuf/proto"
import fmt "fmt"
import math "math"
import google_protobuf "github.com/golang/protobuf/ptypes/any"
import testdata "github.com/golang/protobuf/proto/testdata"
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package
type Message_Humour int32
const (
Message_UNKNOWN Message_Humour = 0
Message_PUNS Message_Humour = 1
Message_SLAPSTICK Message_Humour = 2
Message_BILL_BAILEY Message_Humour = 3
)
var Message_Humour_name = map[int32]string{
0: "UNKNOWN",
1: "PUNS",
2: "SLAPSTICK",
3: "BILL_BAILEY",
}
var Message_Humour_value = map[string]int32{
"UNKNOWN": 0,
"PUNS": 1,
"SLAPSTICK": 2,
"BILL_BAILEY": 3,
}
func (x Message_Humour) String() string {
return proto.EnumName(Message_Humour_name, int32(x))
}
func (Message_Humour) EnumDescriptor() ([]byte, []int) { return fileDescriptor0, []int{0, 0} }
type Message struct {
Name string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
Hilarity Message_Humour `protobuf:"varint,2,opt,name=hilarity,enum=proto3_proto.Message_Humour" json:"hilarity,omitempty"`
HeightInCm uint32 `protobuf:"varint,3,opt,name=height_in_cm,json=heightInCm" json:"height_in_cm,omitempty"`
Data []byte `protobuf:"bytes,4,opt,name=data,proto3" json:"data,omitempty"`
ResultCount int64 `protobuf:"varint,7,opt,name=result_count,json=resultCount" json:"result_count,omitempty"`
TrueScotsman bool `protobuf:"varint,8,opt,name=true_scotsman,json=trueScotsman" json:"true_scotsman,omitempty"`
Score float32 `protobuf:"fixed32,9,opt,name=score" json:"score,omitempty"`
Key []uint64 `protobuf:"varint,5,rep,packed,name=key" json:"key,omitempty"`
ShortKey []int32 `protobuf:"varint,19,rep,packed,name=short_key,json=shortKey" json:"short_key,omitempty"`
Nested *Nested `protobuf:"bytes,6,opt,name=nested" json:"nested,omitempty"`
RFunny []Message_Humour `protobuf:"varint,16,rep,packed,name=r_funny,json=rFunny,enum=proto3_proto.Message_Humour" json:"r_funny,omitempty"`
Terrain map[string]*Nested `protobuf:"bytes,10,rep,name=terrain" json:"terrain,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
Proto2Field *testdata.SubDefaults `protobuf:"bytes,11,opt,name=proto2_field,json=proto2Field" json:"proto2_field,omitempty"`
Proto2Value map[string]*testdata.SubDefaults `protobuf:"bytes,13,rep,name=proto2_value,json=proto2Value" json:"proto2_value,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
Anything *google_protobuf.Any `protobuf:"bytes,14,opt,name=anything" json:"anything,omitempty"`
ManyThings []*google_protobuf.Any `protobuf:"bytes,15,rep,name=many_things,json=manyThings" json:"many_things,omitempty"`
Submessage *Message `protobuf:"bytes,17,opt,name=submessage" json:"submessage,omitempty"`
Children []*Message `protobuf:"bytes,18,rep,name=children" json:"children,omitempty"`
}
func (m *Message) Reset() { *m = Message{} }
func (m *Message) String() string { return proto.CompactTextString(m) }
func (*Message) ProtoMessage() {}
func (*Message) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
func (m *Message) GetName() string {
if m != nil {
return m.Name
}
return ""
}
func (m *Message) GetHilarity() Message_Humour {
if m != nil {
return m.Hilarity
}
return Message_UNKNOWN
}
func (m *Message) GetHeightInCm() uint32 {
if m != nil {
return m.HeightInCm
}
return 0
}
func (m *Message) GetData() []byte {
if m != nil {
return m.Data
}
return nil
}
func (m *Message) GetResultCount() int64 {
if m != nil {
return m.ResultCount
}
return 0
}
func (m *Message) GetTrueScotsman() bool {
if m != nil {
return m.TrueScotsman
}
return false
}
func (m *Message) GetScore() float32 {
if m != nil {
return m.Score
}
return 0
}
func (m *Message) GetKey() []uint64 {
if m != nil {
return m.Key
}
return nil
}
func (m *Message) GetShortKey() []int32 {
if m != nil {
return m.ShortKey
}
return nil
}
func (m *Message) GetNested() *Nested {
if m != nil {
return m.Nested
}
return nil
}
func (m *Message) GetRFunny() []Message_Humour {
if m != nil {
return m.RFunny
}
return nil
}
func (m *Message) GetTerrain() map[string]*Nested {
if m != nil {
return m.Terrain
}
return nil
}
func (m *Message) GetProto2Field() *testdata.SubDefaults {
if m != nil {
return m.Proto2Field
}
return nil
}
func (m *Message) GetProto2Value() map[string]*testdata.SubDefaults {
if m != nil {
return m.Proto2Value
}
return nil
}
func (m *Message) GetAnything() *google_protobuf.Any {
if m != nil {
return m.Anything
}
return nil
}
func (m *Message) GetManyThings() []*google_protobuf.Any {
if m != nil {
return m.ManyThings
}
return nil
}
func (m *Message) GetSubmessage() *Message {
if m != nil {
return m.Submessage
}
return nil
}
func (m *Message) GetChildren() []*Message {
if m != nil {
return m.Children
}
return nil
}
type Nested struct {
Bunny string `protobuf:"bytes,1,opt,name=bunny" json:"bunny,omitempty"`
Cute bool `protobuf:"varint,2,opt,name=cute" json:"cute,omitempty"`
}
func (m *Nested) Reset() { *m = Nested{} }
func (m *Nested) String() string { return proto.CompactTextString(m) }
func (*Nested) ProtoMessage() {}
func (*Nested) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{1} }
func (m *Nested) GetBunny() string {
if m != nil {
return m.Bunny
}
return ""
}
func (m *Nested) GetCute() bool {
if m != nil {
return m.Cute
}
return false
}
type MessageWithMap struct {
ByteMapping map[bool][]byte `protobuf:"bytes,1,rep,name=byte_mapping,json=byteMapping" json:"byte_mapping,omitempty" protobuf_key:"varint,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value,proto3"`
}
func (m *MessageWithMap) Reset() { *m = MessageWithMap{} }
func (m *MessageWithMap) String() string { return proto.CompactTextString(m) }
func (*MessageWithMap) ProtoMessage() {}
func (*MessageWithMap) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{2} }
func (m *MessageWithMap) GetByteMapping() map[bool][]byte {
if m != nil {
return m.ByteMapping
}
return nil
}
type IntMap struct {
Rtt map[int32]int32 `protobuf:"bytes,1,rep,name=rtt" json:"rtt,omitempty" protobuf_key:"varint,1,opt,name=key" protobuf_val:"varint,2,opt,name=value"`
}
func (m *IntMap) Reset() { *m = IntMap{} }
func (m *IntMap) String() string { return proto.CompactTextString(m) }
func (*IntMap) ProtoMessage() {}
func (*IntMap) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{3} }
func (m *IntMap) GetRtt() map[int32]int32 {
if m != nil {
return m.Rtt
}
return nil
}
type IntMaps struct {
Maps []*IntMap `protobuf:"bytes,1,rep,name=maps" json:"maps,omitempty"`
}
func (m *IntMaps) Reset() { *m = IntMaps{} }
func (m *IntMaps) String() string { return proto.CompactTextString(m) }
func (*IntMaps) ProtoMessage() {}
func (*IntMaps) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{4} }
func (m *IntMaps) GetMaps() []*IntMap {
if m != nil {
return m.Maps
}
return nil
}
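// Illustrative sketch, not part of the generated file: constructing a Message
// and reading it back through the nil-safe getters above. Calling a getter on
// a nil *Message returns the zero value instead of panicking.
func exampleMessageGetters() (string, Message_Humour) {
	m := &Message{Name: "Anna", Hilarity: Message_PUNS}
	var nilMsg *Message
	_ = nilMsg.GetName() // returns "" rather than panicking
	return m.GetName(), m.GetHilarity()
}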
func init() {
proto.RegisterType((*Message)(nil), "proto3_proto.Message")
proto.RegisterType((*Nested)(nil), "proto3_proto.Nested")
proto.RegisterType((*MessageWithMap)(nil), "proto3_proto.MessageWithMap")
proto.RegisterType((*IntMap)(nil), "proto3_proto.IntMap")
proto.RegisterType((*IntMaps)(nil), "proto3_proto.IntMaps")
proto.RegisterEnum("proto3_proto.Message_Humour", Message_Humour_name, Message_Humour_value)
}
func init() { proto.RegisterFile("proto3_proto/proto3.proto", fileDescriptor0) }
var fileDescriptor0 = []byte{
// 733 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0x84, 0x53, 0x6d, 0x6f, 0xf3, 0x34,
0x14, 0x25, 0x4d, 0x5f, 0xd2, 0x9b, 0x74, 0x0b, 0x5e, 0x91, 0xbc, 0x02, 0x52, 0x28, 0x12, 0x8a,
0x78, 0x49, 0xa1, 0xd3, 0xd0, 0x84, 0x10, 0x68, 0x1b, 0x9b, 0xa8, 0xd6, 0x95, 0xca, 0xdd, 0x98,
0xf8, 0x14, 0xa5, 0xad, 0xdb, 0x46, 0x34, 0x4e, 0x49, 0x1c, 0xa4, 0xfc, 0x1d, 0xfe, 0x28, 0x8f,
0x6c, 0xa7, 0x5d, 0x36, 0x65, 0xcf, 0xf3, 0x29, 0xf6, 0xf1, 0xb9, 0xf7, 0x9c, 0x1c, 0x5f, 0xc3,
0xe9, 0x2e, 0x89, 0x79, 0x7c, 0xe6, 0xcb, 0xcf, 0x40, 0x6d, 0x3c, 0xf9, 0x41, 0x56, 0xf9, 0xa8,
0x77, 0xba, 0x8e, 0xe3, 0xf5, 0x96, 0x2a, 0xca, 0x3c, 0x5b, 0x0d, 0x02, 0x96, 0x2b, 0x62, 0xef,
0x84, 0xd3, 0x94, 0x2f, 0x03, 0x1e, 0x0c, 0xc4, 0x42, 0x81, 0xfd, 0xff, 0x5b, 0xd0, 0xba, 0xa7,
0x69, 0x1a, 0xac, 0x29, 0x42, 0x50, 0x67, 0x41, 0x44, 0xb1, 0xe6, 0x68, 0x6e, 0x9b, 0xc8, 0x35,
0xba, 0x00, 0x63, 0x13, 0x6e, 0x83, 0x24, 0xe4, 0x39, 0xae, 0x39, 0x9a, 0x7b, 0x34, 0xfc, 0xcc,
0x2b, 0x0b, 0x7a, 0x45, 0xb1, 0xf7, 0x7b, 0x16, 0xc5, 0x59, 0x42, 0x0e, 0x6c, 0xe4, 0x80, 0xb5,
0xa1, 0xe1, 0x7a, 0xc3, 0xfd, 0x90, 0xf9, 0x8b, 0x08, 0xeb, 0x8e, 0xe6, 0x76, 0x08, 0x28, 0x6c,
0xc4, 0xae, 0x23, 0xa1, 0x27, 0xec, 0xe0, 0xba, 0xa3, 0xb9, 0x16, 0x91, 0x6b, 0xf4, 0x05, 0x58,
0x09, 0x4d, 0xb3, 0x2d, 0xf7, 0x17, 0x71, 0xc6, 0x38, 0x6e, 0x39, 0x9a, 0xab, 0x13, 0x53, 0x61,
0xd7, 0x02, 0x42, 0x5f, 0x42, 0x87, 0x27, 0x19, 0xf5, 0xd3, 0x45, 0xcc, 0xd3, 0x28, 0x60, 0xd8,
0x70, 0x34, 0xd7, 0x20, 0x96, 0x00, 0x67, 0x05, 0x86, 0xba, 0xd0, 0x48, 0x17, 0x71, 0x42, 0x71,
0xdb, 0xd1, 0xdc, 0x1a, 0x51, 0x1b, 0x64, 0x83, 0xfe, 0x37, 0xcd, 0x71, 0xc3, 0xd1, 0xdd, 0x3a,
0x11, 0x4b, 0xf4, 0x29, 0xb4, 0xd3, 0x4d, 0x9c, 0x70, 0x5f, 0xe0, 0x27, 0x8e, 0xee, 0x36, 0x88,
0x21, 0x81, 0x3b, 0x9a, 0xa3, 0x6f, 0xa1, 0xc9, 0x68, 0xca, 0xe9, 0x12, 0x37, 0x1d, 0xcd, 0x35,
0x87, 0xdd, 0x97, 0xbf, 0x3e, 0x91, 0x67, 0xa4, 0xe0, 0xa0, 0x73, 0x68, 0x25, 0xfe, 0x2a, 0x63,
0x2c, 0xc7, 0xb6, 0xa3, 0x7f, 0x30, 0xa9, 0x66, 0x72, 0x2b, 0xb8, 0xe8, 0x67, 0x68, 0x71, 0x9a,
0x24, 0x41, 0xc8, 0x30, 0x38, 0xba, 0x6b, 0x0e, 0xfb, 0xd5, 0x65, 0x0f, 0x8a, 0x74, 0xc3, 0x78,
0x92, 0x93, 0x7d, 0x09, 0xba, 0x00, 0x75, 0xff, 0x43, 0x7f, 0x15, 0xd2, 0xed, 0x12, 0x9b, 0xd2,
0xe8, 0x27, 0xde, 0xfe, 0xae, 0xbd, 0x59, 0x36, 0xff, 0x8d, 0xae, 0x82, 0x6c, 0xcb, 0x53, 0x62,
0x2a, 0xea, 0xad, 0x60, 0xa2, 0xd1, 0xa1, 0xf2, 0xdf, 0x60, 0x9b, 0x51, 0xdc, 0x91, 0xe2, 0x5f,
0x55, 0x8b, 0x4f, 0x25, 0xf3, 0x4f, 0x41, 0x54, 0x06, 0x8a, 0x56, 0x12, 0x41, 0xdf, 0x83, 0x11,
0xb0, 0x9c, 0x6f, 0x42, 0xb6, 0xc6, 0x47, 0x45, 0x52, 0x6a, 0x0e, 0xbd, 0xfd, 0x1c, 0x7a, 0x97,
0x2c, 0x27, 0x07, 0x16, 0x3a, 0x07, 0x33, 0x0a, 0x58, 0xee, 0xcb, 0x5d, 0x8a, 0x8f, 0xa5, 0x76,
0x75, 0x11, 0x08, 0xe2, 0x83, 0xe4, 0xa1, 0x73, 0x80, 0x34, 0x9b, 0x47, 0xca, 0x14, 0xfe, 0xb8,
0xf8, 0xd7, 0x2a, 0xc7, 0xa4, 0x44, 0x44, 0x3f, 0x80, 0xb1, 0xd8, 0x84, 0xdb, 0x65, 0x42, 0x19,
0x46, 0x52, 0xea, 0x8d, 0xa2, 0x03, 0xad, 0x37, 0x05, 0xab, 0x1c, 0xf8, 0x7e, 0x72, 0xd4, 0xd3,
0x90, 0x93, 0xf3, 0x35, 0x34, 0x54, 0x70, 0xb5, 0xf7, 0xcc, 0x86, 0xa2, 0xfc, 0x54, 0xbb, 0xd0,
0x7a, 0x8f, 0x60, 0xbf, 0x4e, 0xb1, 0xa2, 0xeb, 0x37, 0x2f, 0xbb, 0xbe, 0x71, 0x91, 0xcf, 0x6d,
0xfb, 0xbf, 0x42, 0x53, 0x0d, 0x14, 0x32, 0xa1, 0xf5, 0x38, 0xb9, 0x9b, 0xfc, 0xf1, 0x34, 0xb1,
0x3f, 0x42, 0x06, 0xd4, 0xa7, 0x8f, 0x93, 0x99, 0xad, 0xa1, 0x0e, 0xb4, 0x67, 0xe3, 0xcb, 0xe9,
0xec, 0x61, 0x74, 0x7d, 0x67, 0xd7, 0xd0, 0x31, 0x98, 0x57, 0xa3, 0xf1, 0xd8, 0xbf, 0xba, 0x1c,
0x8d, 0x6f, 0xfe, 0xb2, 0xf5, 0xfe, 0x10, 0x9a, 0xca, 0xac, 0x78, 0x33, 0x73, 0x39, 0xbe, 0xca,
0x8f, 0xda, 0x88, 0x57, 0xba, 0xc8, 0xb8, 0x32, 0x64, 0x10, 0xb9, 0xee, 0xff, 0xa7, 0xc1, 0x51,
0x91, 0xd9, 0x53, 0xc8, 0x37, 0xf7, 0xc1, 0x0e, 0x4d, 0xc1, 0x9a, 0xe7, 0x9c, 0xfa, 0x51, 0xb0,
0xdb, 0x89, 0x39, 0xd0, 0x64, 0xce, 0xdf, 0x55, 0xe6, 0x5c, 0xd4, 0x78, 0x57, 0x39, 0xa7, 0xf7,
0x8a, 0x5f, 0x4c, 0xd5, 0xfc, 0x19, 0xe9, 0xfd, 0x02, 0xf6, 0x6b, 0x42, 0x39, 0x30, 0x43, 0x05,
0xd6, 0x2d, 0x07, 0x66, 0x95, 0x93, 0xf9, 0x07, 0x9a, 0x23, 0xc6, 0x85, 0xb7, 0x01, 0xe8, 0x09,
0xe7, 0x85, 0xa5, 0xcf, 0x5f, 0x5a, 0x52, 0x14, 0x8f, 0x70, 0xae, 0x2c, 0x08, 0x66, 0xef, 0x47,
0x30, 0xf6, 0x40, 0x59, 0xb2, 0x51, 0x21, 0xd9, 0x28, 0x4b, 0x9e, 0x41, 0x4b, 0xf5, 0x4b, 0x91,
0x0b, 0xf5, 0x28, 0xd8, 0xa5, 0x85, 0x68, 0xb7, 0x4a, 0x94, 0x48, 0xc6, 0xbc, 0xa9, 0x8e, 0xde,
0x05, 0x00, 0x00, 0xff, 0xff, 0x75, 0x38, 0xad, 0x84, 0xe4, 0x05, 0x00, 0x00,
}

854
vendor/github.com/golang/protobuf/proto/text.go generated vendored Normal file

@ -0,0 +1,854 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2010 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package proto
// Functions for writing the text protocol buffer format.
import (
"bufio"
"bytes"
"encoding"
"errors"
"fmt"
"io"
"log"
"math"
"reflect"
"sort"
"strings"
)
var (
newline = []byte("\n")
spaces = []byte(" ")
gtNewline = []byte(">\n")
endBraceNewline = []byte("}\n")
backslashN = []byte{'\\', 'n'}
backslashR = []byte{'\\', 'r'}
backslashT = []byte{'\\', 't'}
backslashDQ = []byte{'\\', '"'}
backslashBS = []byte{'\\', '\\'}
posInf = []byte("inf")
negInf = []byte("-inf")
nan = []byte("nan")
)
type writer interface {
io.Writer
WriteByte(byte) error
}
// textWriter is an io.Writer that tracks its indentation level.
type textWriter struct {
ind int
complete bool // if the current position is a complete line
compact bool // whether to write out as a one-liner
w writer
}
func (w *textWriter) WriteString(s string) (n int, err error) {
if !strings.Contains(s, "\n") {
if !w.compact && w.complete {
w.writeIndent()
}
w.complete = false
return io.WriteString(w.w, s)
}
// WriteString is typically called without newlines, so this
// codepath and its copy are rare. We copy to avoid
// duplicating all of Write's logic here.
return w.Write([]byte(s))
}
func (w *textWriter) Write(p []byte) (n int, err error) {
newlines := bytes.Count(p, newline)
if newlines == 0 {
if !w.compact && w.complete {
w.writeIndent()
}
n, err = w.w.Write(p)
w.complete = false
return n, err
}
frags := bytes.SplitN(p, newline, newlines+1)
if w.compact {
for i, frag := range frags {
if i > 0 {
if err := w.w.WriteByte(' '); err != nil {
return n, err
}
n++
}
nn, err := w.w.Write(frag)
n += nn
if err != nil {
return n, err
}
}
return n, nil
}
for i, frag := range frags {
if w.complete {
w.writeIndent()
}
nn, err := w.w.Write(frag)
n += nn
if err != nil {
return n, err
}
if i+1 < len(frags) {
if err := w.w.WriteByte('\n'); err != nil {
return n, err
}
n++
}
}
w.complete = len(frags[len(frags)-1]) == 0
return n, nil
}
func (w *textWriter) WriteByte(c byte) error {
if w.compact && c == '\n' {
c = ' '
}
if !w.compact && w.complete {
w.writeIndent()
}
err := w.w.WriteByte(c)
w.complete = c == '\n'
return err
}
func (w *textWriter) indent() { w.ind++ }
func (w *textWriter) unindent() {
if w.ind == 0 {
log.Print("proto: textWriter unindented too far")
return
}
w.ind--
}
func writeName(w *textWriter, props *Properties) error {
if _, err := w.WriteString(props.OrigName); err != nil {
return err
}
if props.Wire != "group" {
return w.WriteByte(':')
}
return nil
}
// raw is the interface satisfied by RawMessage.
type raw interface {
Bytes() []byte
}
func requiresQuotes(u string) bool {
// When a type URL contains any character outside [0-9A-Za-z./_], it must be quoted.
for _, ch := range u {
switch {
case ch == '.' || ch == '/' || ch == '_':
continue
case '0' <= ch && ch <= '9':
continue
case 'A' <= ch && ch <= 'Z':
continue
case 'a' <= ch && ch <= 'z':
continue
default:
return true
}
}
return false
}
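// Illustrative sketch, not part of the upstream file: type URLs made only of
// the allowed characters stay unquoted in the expanded Any form; anything
// else forces quoting.
func exampleRequiresQuotes() (bool, bool) {
	return requiresQuotes("type.googleapis.com/proto3_proto.Message"), // false
		requiresQuotes("type.googleapis.com/has space") // true
}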
// isAny reports whether sv is a google.protobuf.Any message
func isAny(sv reflect.Value) bool {
type wkt interface {
XXX_WellKnownType() string
}
t, ok := sv.Addr().Interface().(wkt)
return ok && t.XXX_WellKnownType() == "Any"
}
// writeProto3Any writes an expanded google.protobuf.Any message.
//
// It returns (false, nil) if sv value can't be unmarshaled (e.g. because
// required messages are not linked in).
//
// It returns (true, error) when sv was written in expanded format or an error
// was encountered.
func (tm *TextMarshaler) writeProto3Any(w *textWriter, sv reflect.Value) (bool, error) {
turl := sv.FieldByName("TypeUrl")
val := sv.FieldByName("Value")
if !turl.IsValid() || !val.IsValid() {
return true, errors.New("proto: invalid google.protobuf.Any message")
}
b, ok := val.Interface().([]byte)
if !ok {
return true, errors.New("proto: invalid google.protobuf.Any message")
}
parts := strings.Split(turl.String(), "/")
mt := MessageType(parts[len(parts)-1])
if mt == nil {
return false, nil
}
m := reflect.New(mt.Elem())
if err := Unmarshal(b, m.Interface().(Message)); err != nil {
return false, nil
}
w.Write([]byte("["))
u := turl.String()
if requiresQuotes(u) {
writeString(w, u)
} else {
w.Write([]byte(u))
}
if w.compact {
w.Write([]byte("]:<"))
} else {
w.Write([]byte("]: <\n"))
w.ind++
}
if err := tm.writeStruct(w, m.Elem()); err != nil {
return true, err
}
if w.compact {
w.Write([]byte("> "))
} else {
w.ind--
w.Write([]byte(">\n"))
}
return true, nil
}
func (tm *TextMarshaler) writeStruct(w *textWriter, sv reflect.Value) error {
if tm.ExpandAny && isAny(sv) {
if canExpand, err := tm.writeProto3Any(w, sv); canExpand {
return err
}
}
st := sv.Type()
sprops := GetProperties(st)
for i := 0; i < sv.NumField(); i++ {
fv := sv.Field(i)
props := sprops.Prop[i]
name := st.Field(i).Name
if strings.HasPrefix(name, "XXX_") {
// There are two XXX_ fields:
// XXX_unrecognized []byte
// XXX_extensions map[int32]proto.Extension
// The first is handled here;
// the second is handled at the bottom of this function.
if name == "XXX_unrecognized" && !fv.IsNil() {
if err := writeUnknownStruct(w, fv.Interface().([]byte)); err != nil {
return err
}
}
continue
}
if fv.Kind() == reflect.Ptr && fv.IsNil() {
// Field not filled in. This could be an optional field or
// a required field that wasn't filled in. Either way, there
// isn't anything we can show for it.
continue
}
if fv.Kind() == reflect.Slice && fv.IsNil() {
// Repeated field that is empty, or a bytes field that is unused.
continue
}
if props.Repeated && fv.Kind() == reflect.Slice {
// Repeated field.
for j := 0; j < fv.Len(); j++ {
if err := writeName(w, props); err != nil {
return err
}
if !w.compact {
if err := w.WriteByte(' '); err != nil {
return err
}
}
v := fv.Index(j)
if v.Kind() == reflect.Ptr && v.IsNil() {
// A nil message in a repeated field is not valid,
// but we can handle that more gracefully than panicking.
if _, err := w.Write([]byte("<nil>\n")); err != nil {
return err
}
continue
}
if err := tm.writeAny(w, v, props); err != nil {
return err
}
if err := w.WriteByte('\n'); err != nil {
return err
}
}
continue
}
if fv.Kind() == reflect.Map {
// Map fields are rendered as a repeated struct with key/value fields.
keys := fv.MapKeys()
sort.Sort(mapKeys(keys))
for _, key := range keys {
val := fv.MapIndex(key)
if err := writeName(w, props); err != nil {
return err
}
if !w.compact {
if err := w.WriteByte(' '); err != nil {
return err
}
}
// open struct
if err := w.WriteByte('<'); err != nil {
return err
}
if !w.compact {
if err := w.WriteByte('\n'); err != nil {
return err
}
}
w.indent()
// key
if _, err := w.WriteString("key:"); err != nil {
return err
}
if !w.compact {
if err := w.WriteByte(' '); err != nil {
return err
}
}
if err := tm.writeAny(w, key, props.mkeyprop); err != nil {
return err
}
if err := w.WriteByte('\n'); err != nil {
return err
}
// nil values aren't legal, but we can avoid panicking because of them.
if val.Kind() != reflect.Ptr || !val.IsNil() {
// value
if _, err := w.WriteString("value:"); err != nil {
return err
}
if !w.compact {
if err := w.WriteByte(' '); err != nil {
return err
}
}
if err := tm.writeAny(w, val, props.mvalprop); err != nil {
return err
}
if err := w.WriteByte('\n'); err != nil {
return err
}
}
// close struct
w.unindent()
if err := w.WriteByte('>'); err != nil {
return err
}
if err := w.WriteByte('\n'); err != nil {
return err
}
}
continue
}
if props.proto3 && fv.Kind() == reflect.Slice && fv.Len() == 0 {
// empty bytes field
continue
}
if fv.Kind() != reflect.Ptr && fv.Kind() != reflect.Slice {
// proto3 non-repeated scalar field; skip if zero value
if isProto3Zero(fv) {
continue
}
}
if fv.Kind() == reflect.Interface {
// Check if it is a oneof.
if st.Field(i).Tag.Get("protobuf_oneof") != "" {
// fv is nil, or holds a pointer to generated struct.
// That generated struct has exactly one field,
// which has a protobuf struct tag.
if fv.IsNil() {
continue
}
inner := fv.Elem().Elem() // interface -> *T -> T
tag := inner.Type().Field(0).Tag.Get("protobuf")
props = new(Properties) // Overwrite the outer props var, but not its pointee.
props.Parse(tag)
// Write the value in the oneof, not the oneof itself.
fv = inner.Field(0)
// Special case to cope with malformed messages gracefully:
// If the value in the oneof is a nil pointer, don't panic
// in writeAny.
if fv.Kind() == reflect.Ptr && fv.IsNil() {
// Use errors.New so writeAny won't render quotes.
msg := errors.New("/* nil */")
fv = reflect.ValueOf(&msg).Elem()
}
}
}
if err := writeName(w, props); err != nil {
return err
}
if !w.compact {
if err := w.WriteByte(' '); err != nil {
return err
}
}
if b, ok := fv.Interface().(raw); ok {
if err := writeRaw(w, b.Bytes()); err != nil {
return err
}
continue
}
// Enums have a String method, so writeAny will work fine.
if err := tm.writeAny(w, fv, props); err != nil {
return err
}
if err := w.WriteByte('\n'); err != nil {
return err
}
}
// Extensions (the XXX_extensions field).
pv := sv.Addr()
if _, ok := extendable(pv.Interface()); ok {
if err := tm.writeExtensions(w, pv); err != nil {
return err
}
}
return nil
}
// writeRaw writes an uninterpreted raw message.
func writeRaw(w *textWriter, b []byte) error {
if err := w.WriteByte('<'); err != nil {
return err
}
if !w.compact {
if err := w.WriteByte('\n'); err != nil {
return err
}
}
w.indent()
if err := writeUnknownStruct(w, b); err != nil {
return err
}
w.unindent()
if err := w.WriteByte('>'); err != nil {
return err
}
return nil
}
// writeAny writes an arbitrary field.
func (tm *TextMarshaler) writeAny(w *textWriter, v reflect.Value, props *Properties) error {
v = reflect.Indirect(v)
// Floats have special cases.
if v.Kind() == reflect.Float32 || v.Kind() == reflect.Float64 {
x := v.Float()
var b []byte
switch {
case math.IsInf(x, 1):
b = posInf
case math.IsInf(x, -1):
b = negInf
case math.IsNaN(x):
b = nan
}
if b != nil {
_, err := w.Write(b)
return err
}
// Other values are handled below.
}
// We don't attempt to serialise every possible value type; only those
// that can occur in protocol buffers.
switch v.Kind() {
case reflect.Slice:
// Should only be a []byte; repeated fields are handled in writeStruct.
if err := writeString(w, string(v.Bytes())); err != nil {
return err
}
case reflect.String:
if err := writeString(w, v.String()); err != nil {
return err
}
case reflect.Struct:
// Required/optional group/message.
var bra, ket byte = '<', '>'
if props != nil && props.Wire == "group" {
bra, ket = '{', '}'
}
if err := w.WriteByte(bra); err != nil {
return err
}
if !w.compact {
if err := w.WriteByte('\n'); err != nil {
return err
}
}
w.indent()
if etm, ok := v.Interface().(encoding.TextMarshaler); ok {
text, err := etm.MarshalText()
if err != nil {
return err
}
if _, err = w.Write(text); err != nil {
return err
}
} else if err := tm.writeStruct(w, v); err != nil {
return err
}
w.unindent()
if err := w.WriteByte(ket); err != nil {
return err
}
default:
_, err := fmt.Fprint(w, v.Interface())
return err
}
return nil
}
// equivalent to C's isprint.
func isprint(c byte) bool {
return c >= 0x20 && c < 0x7f
}
// writeString writes a string in the protocol buffer text format.
// It is similar to strconv.Quote except we don't use Go escape sequences,
// we treat the string as a byte sequence, and we use octal escapes.
// These differences are to maintain interoperability with the other
// languages' implementations of the text format.
func writeString(w *textWriter, s string) error {
// use WriteByte here to get any needed indent
if err := w.WriteByte('"'); err != nil {
return err
}
// Loop over the bytes, not the runes.
for i := 0; i < len(s); i++ {
var err error
// Divergence from C++: we don't escape apostrophes.
// There's no need to escape them, and the C++ parser
// copes with a naked apostrophe.
switch c := s[i]; c {
case '\n':
_, err = w.w.Write(backslashN)
case '\r':
_, err = w.w.Write(backslashR)
case '\t':
_, err = w.w.Write(backslashT)
case '"':
_, err = w.w.Write(backslashDQ)
case '\\':
_, err = w.w.Write(backslashBS)
default:
if isprint(c) {
err = w.w.WriteByte(c)
} else {
_, err = fmt.Fprintf(w.w, "\\%03o", c)
}
}
if err != nil {
return err
}
}
return w.WriteByte('"')
}
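// escapeTextByte is an illustrative sketch only (a hypothetical helper, not
// part of the upstream package) of the per-byte rule writeString applies:
// a handful of named escapes, printable ASCII verbatim, and a three-digit
// octal escape for every other byte.
func escapeTextByte(c byte) string {
	switch c {
	case '\n':
		return `\n`
	case '\r':
		return `\r`
	case '\t':
		return `\t`
	case '"':
		return `\"`
	case '\\':
		return `\\`
	}
	if isprint(c) {
		return string(c)
	}
	return fmt.Sprintf("\\%03o", c)
}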
func writeUnknownStruct(w *textWriter, data []byte) (err error) {
if !w.compact {
if _, err := fmt.Fprintf(w, "/* %d unknown bytes */\n", len(data)); err != nil {
return err
}
}
b := NewBuffer(data)
for b.index < len(b.buf) {
x, err := b.DecodeVarint()
if err != nil {
_, err := fmt.Fprintf(w, "/* %v */\n", err)
return err
}
wire, tag := x&7, x>>3
if wire == WireEndGroup {
w.unindent()
if _, err := w.Write(endBraceNewline); err != nil {
return err
}
continue
}
if _, err := fmt.Fprint(w, tag); err != nil {
return err
}
if wire != WireStartGroup {
if err := w.WriteByte(':'); err != nil {
return err
}
}
if !w.compact || wire == WireStartGroup {
if err := w.WriteByte(' '); err != nil {
return err
}
}
switch wire {
case WireBytes:
buf, e := b.DecodeRawBytes(false)
if e == nil {
_, err = fmt.Fprintf(w, "%q", buf)
} else {
_, err = fmt.Fprintf(w, "/* %v */", e)
}
case WireFixed32:
x, err = b.DecodeFixed32()
err = writeUnknownInt(w, x, err)
case WireFixed64:
x, err = b.DecodeFixed64()
err = writeUnknownInt(w, x, err)
case WireStartGroup:
err = w.WriteByte('{')
w.indent()
case WireVarint:
x, err = b.DecodeVarint()
err = writeUnknownInt(w, x, err)
default:
_, err = fmt.Fprintf(w, "/* unknown wire type %d */", wire)
}
if err != nil {
return err
}
if err = w.WriteByte('\n'); err != nil {
return err
}
}
return nil
}
func writeUnknownInt(w *textWriter, x uint64, err error) error {
if err == nil {
_, err = fmt.Fprint(w, x)
} else {
_, err = fmt.Fprintf(w, "/* %v */", err)
}
return err
}
type int32Slice []int32
func (s int32Slice) Len() int { return len(s) }
func (s int32Slice) Less(i, j int) bool { return s[i] < s[j] }
func (s int32Slice) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
// writeExtensions writes all the extensions in pv.
// pv is assumed to be a pointer to a protocol message struct that is extendable.
func (tm *TextMarshaler) writeExtensions(w *textWriter, pv reflect.Value) error {
emap := extensionMaps[pv.Type().Elem()]
ep, _ := extendable(pv.Interface())
// Order the extensions by ID.
// This isn't strictly necessary, but it will give us
// canonical output, which will also make testing easier.
m, mu := ep.extensionsRead()
if m == nil {
return nil
}
mu.Lock()
ids := make([]int32, 0, len(m))
for id := range m {
ids = append(ids, id)
}
sort.Sort(int32Slice(ids))
mu.Unlock()
for _, extNum := range ids {
ext := m[extNum]
var desc *ExtensionDesc
if emap != nil {
desc = emap[extNum]
}
if desc == nil {
// Unknown extension.
if err := writeUnknownStruct(w, ext.enc); err != nil {
return err
}
continue
}
pb, err := GetExtension(ep, desc)
if err != nil {
return fmt.Errorf("failed getting extension: %v", err)
}
// Repeated extensions will appear as a slice.
if !desc.repeated() {
if err := tm.writeExtension(w, desc.Name, pb); err != nil {
return err
}
} else {
v := reflect.ValueOf(pb)
for i := 0; i < v.Len(); i++ {
if err := tm.writeExtension(w, desc.Name, v.Index(i).Interface()); err != nil {
return err
}
}
}
}
return nil
}
func (tm *TextMarshaler) writeExtension(w *textWriter, name string, pb interface{}) error {
if _, err := fmt.Fprintf(w, "[%s]:", name); err != nil {
return err
}
if !w.compact {
if err := w.WriteByte(' '); err != nil {
return err
}
}
if err := tm.writeAny(w, reflect.ValueOf(pb), nil); err != nil {
return err
}
if err := w.WriteByte('\n'); err != nil {
return err
}
return nil
}
func (w *textWriter) writeIndent() {
if !w.complete {
return
}
remain := w.ind * 2
for remain > 0 {
n := remain
if n > len(spaces) {
n = len(spaces)
}
w.w.Write(spaces[:n])
remain -= n
}
w.complete = false
}
// TextMarshaler is a configurable text format marshaler.
type TextMarshaler struct {
Compact bool // use compact text format (one line).
ExpandAny bool // expand google.protobuf.Any messages of known types
}
// Marshal writes a given protocol buffer in text format.
// The only errors returned are from w.
func (tm *TextMarshaler) Marshal(w io.Writer, pb Message) error {
val := reflect.ValueOf(pb)
if pb == nil || val.IsNil() {
w.Write([]byte("<nil>"))
return nil
}
var bw *bufio.Writer
ww, ok := w.(writer)
if !ok {
bw = bufio.NewWriter(w)
ww = bw
}
aw := &textWriter{
w: ww,
complete: true,
compact: tm.Compact,
}
if etm, ok := pb.(encoding.TextMarshaler); ok {
text, err := etm.MarshalText()
if err != nil {
return err
}
if _, err = aw.Write(text); err != nil {
return err
}
if bw != nil {
return bw.Flush()
}
return nil
}
// Dereference the received pointer so we don't have outer < and >.
v := reflect.Indirect(val)
if err := tm.writeStruct(aw, v); err != nil {
return err
}
if bw != nil {
return bw.Flush()
}
return nil
}
// Text is the same as Marshal, but returns the string directly.
func (tm *TextMarshaler) Text(pb Message) string {
var buf bytes.Buffer
tm.Marshal(&buf, pb)
return buf.String()
}
var (
defaultTextMarshaler = TextMarshaler{}
compactTextMarshaler = TextMarshaler{Compact: true}
)
// TODO: consider removing some of the Marshal functions below.
// MarshalText writes a given protocol buffer in text format.
// The only errors returned are from w.
func MarshalText(w io.Writer, pb Message) error { return defaultTextMarshaler.Marshal(w, pb) }
// MarshalTextString is the same as MarshalText, but returns the string directly.
func MarshalTextString(pb Message) string { return defaultTextMarshaler.Text(pb) }
// CompactText writes a given protocol buffer in compact text format (one line).
func CompactText(w io.Writer, pb Message) error { return compactTextMarshaler.Marshal(w, pb) }
// CompactTextString is the same as CompactText, but returns the string directly.
func CompactTextString(pb Message) string { return compactTextMarshaler.Text(pb) }
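Taken together, the helpers above expose the text format through four entry points (MarshalText, MarshalTextString, CompactText, CompactTextString), plus UnmarshalText from the parser in the next file. A minimal usage sketch, assuming a hypothetical generated package pb with a Person message carrying Name and Id fields; the package and type are illustrative and do not exist in this vendor tree:

package main

import (
	"os"

	"github.com/golang/protobuf/proto"
	pb "example.com/hypothetical/pb" // hypothetical generated package
)

func main() {
	p := &pb.Person{Name: proto.String("alice"), Id: proto.Int32(7)}

	// Multi-line, indented text form, written to any io.Writer.
	proto.MarshalText(os.Stdout, p)

	// The same message on a single line.
	compact := proto.CompactTextString(p)

	// Parse the compact form back into a fresh message.
	q := &pb.Person{}
	if err := proto.UnmarshalText(compact, q); err != nil {
		panic(err)
	}
}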

895
vendor/github.com/golang/protobuf/proto/text_parser.go generated vendored Normal file
View File

@@ -0,0 +1,895 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2010 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package proto
// Functions for parsing the Text protocol buffer format.
// TODO: message sets.
import (
"encoding"
"errors"
"fmt"
"reflect"
"strconv"
"strings"
"unicode/utf8"
)
// Error string emitted when deserializing Any and fields are already set
const anyRepeatedlyUnpacked = "Any message unpacked multiple times, or %q already set"
type ParseError struct {
Message string
Line int // 1-based line number
Offset int // 0-based byte offset from start of input
}
func (p *ParseError) Error() string {
if p.Line == 1 {
// show offset only for first line
return fmt.Sprintf("line 1.%d: %v", p.Offset, p.Message)
}
return fmt.Sprintf("line %d: %v", p.Line, p.Message)
}
type token struct {
value string
err *ParseError
line int // line number
offset int // byte number from start of input, not start of line
unquoted string // the unquoted version of value, if it was a quoted string
}
func (t *token) String() string {
if t.err == nil {
return fmt.Sprintf("%q (line=%d, offset=%d)", t.value, t.line, t.offset)
}
return fmt.Sprintf("parse error: %v", t.err)
}
type textParser struct {
s string // remaining input
done bool // whether the parsing is finished (success or error)
backed bool // whether back() was called
offset, line int
cur token
}
func newTextParser(s string) *textParser {
p := new(textParser)
p.s = s
p.line = 1
p.cur.line = 1
return p
}
func (p *textParser) errorf(format string, a ...interface{}) *ParseError {
pe := &ParseError{fmt.Sprintf(format, a...), p.cur.line, p.cur.offset}
p.cur.err = pe
p.done = true
return pe
}
// Numbers and identifiers are matched by [-+._A-Za-z0-9]
func isIdentOrNumberChar(c byte) bool {
switch {
case 'A' <= c && c <= 'Z', 'a' <= c && c <= 'z':
return true
case '0' <= c && c <= '9':
return true
}
switch c {
case '-', '+', '.', '_':
return true
}
return false
}
func isWhitespace(c byte) bool {
switch c {
case ' ', '\t', '\n', '\r':
return true
}
return false
}
func isQuote(c byte) bool {
switch c {
case '"', '\'':
return true
}
return false
}
func (p *textParser) skipWhitespace() {
i := 0
for i < len(p.s) && (isWhitespace(p.s[i]) || p.s[i] == '#') {
if p.s[i] == '#' {
// comment; skip to end of line or input
for i < len(p.s) && p.s[i] != '\n' {
i++
}
if i == len(p.s) {
break
}
}
if p.s[i] == '\n' {
p.line++
}
i++
}
p.offset += i
p.s = p.s[i:len(p.s)]
if len(p.s) == 0 {
p.done = true
}
}
func (p *textParser) advance() {
// Skip whitespace
p.skipWhitespace()
if p.done {
return
}
// Start of non-whitespace
p.cur.err = nil
p.cur.offset, p.cur.line = p.offset, p.line
p.cur.unquoted = ""
switch p.s[0] {
case '<', '>', '{', '}', ':', '[', ']', ';', ',', '/':
// Single symbol
p.cur.value, p.s = p.s[0:1], p.s[1:len(p.s)]
case '"', '\'':
// Quoted string
i := 1
for i < len(p.s) && p.s[i] != p.s[0] && p.s[i] != '\n' {
if p.s[i] == '\\' && i+1 < len(p.s) {
// skip escaped char
i++
}
i++
}
if i >= len(p.s) || p.s[i] != p.s[0] {
p.errorf("unmatched quote")
return
}
unq, err := unquoteC(p.s[1:i], rune(p.s[0]))
if err != nil {
p.errorf("invalid quoted string %s: %v", p.s[0:i+1], err)
return
}
p.cur.value, p.s = p.s[0:i+1], p.s[i+1:len(p.s)]
p.cur.unquoted = unq
default:
i := 0
for i < len(p.s) && isIdentOrNumberChar(p.s[i]) {
i++
}
if i == 0 {
p.errorf("unexpected byte %#x", p.s[0])
return
}
p.cur.value, p.s = p.s[0:i], p.s[i:len(p.s)]
}
p.offset += len(p.cur.value)
}
var (
errBadUTF8 = errors.New("proto: bad UTF-8")
errBadHex = errors.New("proto: bad hexadecimal")
)
func unquoteC(s string, quote rune) (string, error) {
// This is based on C++'s tokenizer.cc.
// Despite its name, this is *not* parsing C syntax.
// For instance, "\0" is an invalid quoted string.
// Avoid allocation in trivial cases.
simple := true
for _, r := range s {
if r == '\\' || r == quote {
simple = false
break
}
}
if simple {
return s, nil
}
buf := make([]byte, 0, 3*len(s)/2)
for len(s) > 0 {
r, n := utf8.DecodeRuneInString(s)
if r == utf8.RuneError && n == 1 {
return "", errBadUTF8
}
s = s[n:]
if r != '\\' {
if r < utf8.RuneSelf {
buf = append(buf, byte(r))
} else {
buf = append(buf, string(r)...)
}
continue
}
ch, tail, err := unescape(s)
if err != nil {
return "", err
}
buf = append(buf, ch...)
s = tail
}
return string(buf), nil
}
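// unquoteExample is a hypothetical illustration (unused by the package) of
// the escape forms unquoteC accepts: named escapes plus octal (\101) and
// hex (\x42) byte escapes, all decoded as raw bytes rather than Go syntax.
func unquoteExample() (string, error) {
	return unquoteC(`\101\x42\n`, '"') // yields "AB\n"
}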
func unescape(s string) (ch string, tail string, err error) {
r, n := utf8.DecodeRuneInString(s)
if r == utf8.RuneError && n == 1 {
return "", "", errBadUTF8
}
s = s[n:]
switch r {
case 'a':
return "\a", s, nil
case 'b':
return "\b", s, nil
case 'f':
return "\f", s, nil
case 'n':
return "\n", s, nil
case 'r':
return "\r", s, nil
case 't':
return "\t", s, nil
case 'v':
return "\v", s, nil
case '?':
return "?", s, nil // trigraph workaround
case '\'', '"', '\\':
return string(r), s, nil
case '0', '1', '2', '3', '4', '5', '6', '7', 'x', 'X':
if len(s) < 2 {
return "", "", fmt.Errorf(`\%c requires 2 following digits`, r)
}
base := 8
ss := s[:2]
s = s[2:]
if r == 'x' || r == 'X' {
base = 16
} else {
ss = string(r) + ss
}
i, err := strconv.ParseUint(ss, base, 8)
if err != nil {
return "", "", err
}
return string([]byte{byte(i)}), s, nil
case 'u', 'U':
n := 4
if r == 'U' {
n = 8
}
if len(s) < n {
return "", "", fmt.Errorf(`\%c requires %d digits`, r, n)
}
bs := make([]byte, n/2)
for i := 0; i < n; i += 2 {
a, ok1 := unhex(s[i])
b, ok2 := unhex(s[i+1])
if !ok1 || !ok2 {
return "", "", errBadHex
}
bs[i/2] = a<<4 | b
}
s = s[n:]
return string(bs), s, nil
}
return "", "", fmt.Errorf(`unknown escape \%c`, r)
}
// Adapted from src/pkg/strconv/quote.go.
func unhex(b byte) (v byte, ok bool) {
switch {
case '0' <= b && b <= '9':
return b - '0', true
case 'a' <= b && b <= 'f':
return b - 'a' + 10, true
case 'A' <= b && b <= 'F':
return b - 'A' + 10, true
}
return 0, false
}
// Back off the parser by one token. Can only be done between calls to next().
// It makes the next advance() a no-op.
func (p *textParser) back() { p.backed = true }
// Advances the parser and returns the new current token.
func (p *textParser) next() *token {
if p.backed || p.done {
p.backed = false
return &p.cur
}
p.advance()
if p.done {
p.cur.value = ""
} else if len(p.cur.value) > 0 && isQuote(p.cur.value[0]) {
// Look for multiple quoted strings separated by whitespace,
// and concatenate them.
cat := p.cur
for {
p.skipWhitespace()
if p.done || !isQuote(p.s[0]) {
break
}
p.advance()
if p.cur.err != nil {
return &p.cur
}
cat.value += " " + p.cur.value
cat.unquoted += p.cur.unquoted
}
p.done = false // parser may have seen EOF, but we want to return cat
p.cur = cat
}
return &p.cur
}
func (p *textParser) consumeToken(s string) error {
tok := p.next()
if tok.err != nil {
return tok.err
}
if tok.value != s {
p.back()
return p.errorf("expected %q, found %q", s, tok.value)
}
return nil
}
// Return a RequiredNotSetError indicating which required field was not set.
func (p *textParser) missingRequiredFieldError(sv reflect.Value) *RequiredNotSetError {
st := sv.Type()
sprops := GetProperties(st)
for i := 0; i < st.NumField(); i++ {
if !isNil(sv.Field(i)) {
continue
}
props := sprops.Prop[i]
if props.Required {
return &RequiredNotSetError{fmt.Sprintf("%v.%v", st, props.OrigName)}
}
}
return &RequiredNotSetError{fmt.Sprintf("%v.<unknown field name>", st)} // should not happen
}
// Returns the index in the struct for the named field, as well as the parsed tag properties.
func structFieldByName(sprops *StructProperties, name string) (int, *Properties, bool) {
i, ok := sprops.decoderOrigNames[name]
if ok {
return i, sprops.Prop[i], true
}
return -1, nil, false
}
// Consume a ':' from the input stream (if the next token is a colon),
// returning an error if a colon is needed but not present.
func (p *textParser) checkForColon(props *Properties, typ reflect.Type) *ParseError {
tok := p.next()
if tok.err != nil {
return tok.err
}
if tok.value != ":" {
// Colon is optional when the field is a group or message.
needColon := true
switch props.Wire {
case "group":
needColon = false
case "bytes":
// A "bytes" field is either a message, a string, or a repeated field;
// those three become *T, *string and []T respectively, so we can check for
// this field being a pointer to a non-string.
if typ.Kind() == reflect.Ptr {
// *T or *string
if typ.Elem().Kind() == reflect.String {
break
}
} else if typ.Kind() == reflect.Slice {
// []T or []*T
if typ.Elem().Kind() != reflect.Ptr {
break
}
} else if typ.Kind() == reflect.String {
// The proto3 exception is for a string field,
// which requires a colon.
break
}
needColon = false
}
if needColon {
return p.errorf("expected ':', found %q", tok.value)
}
p.back()
}
return nil
}
func (p *textParser) readStruct(sv reflect.Value, terminator string) error {
st := sv.Type()
sprops := GetProperties(st)
reqCount := sprops.reqCount
var reqFieldErr error
fieldSet := make(map[string]bool)
// A struct is a sequence of "name: value", terminated by one of
// '>' or '}', or the end of the input. A name may also be
// "[extension]" or "[type/url]".
//
// The whole struct can also be an expanded Any message, like:
// [type/url] < ... struct contents ... >
for {
tok := p.next()
if tok.err != nil {
return tok.err
}
if tok.value == terminator {
break
}
if tok.value == "[" {
// Looks like an extension or an Any.
//
// TODO: Check whether we need to handle
// namespace rooted names (e.g. ".something.Foo").
extName, err := p.consumeExtName()
if err != nil {
return err
}
if s := strings.LastIndex(extName, "/"); s >= 0 {
// If it contains a slash, it's an Any type URL.
messageName := extName[s+1:]
mt := MessageType(messageName)
if mt == nil {
return p.errorf("unrecognized message %q in google.protobuf.Any", messageName)
}
tok = p.next()
if tok.err != nil {
return tok.err
}
// consume an optional colon
if tok.value == ":" {
tok = p.next()
if tok.err != nil {
return tok.err
}
}
var terminator string
switch tok.value {
case "<":
terminator = ">"
case "{":
terminator = "}"
default:
return p.errorf("expected '{' or '<', found %q", tok.value)
}
v := reflect.New(mt.Elem())
if pe := p.readStruct(v.Elem(), terminator); pe != nil {
return pe
}
b, err := Marshal(v.Interface().(Message))
if err != nil {
return p.errorf("failed to marshal message of type %q: %v", messageName, err)
}
if fieldSet["type_url"] {
return p.errorf(anyRepeatedlyUnpacked, "type_url")
}
if fieldSet["value"] {
return p.errorf(anyRepeatedlyUnpacked, "value")
}
sv.FieldByName("TypeUrl").SetString(extName)
sv.FieldByName("Value").SetBytes(b)
fieldSet["type_url"] = true
fieldSet["value"] = true
continue
}
var desc *ExtensionDesc
// This could be faster, but it's functional.
// TODO: Do something smarter than a linear scan.
for _, d := range RegisteredExtensions(reflect.New(st).Interface().(Message)) {
if d.Name == extName {
desc = d
break
}
}
if desc == nil {
return p.errorf("unrecognized extension %q", extName)
}
props := &Properties{}
props.Parse(desc.Tag)
typ := reflect.TypeOf(desc.ExtensionType)
if err := p.checkForColon(props, typ); err != nil {
return err
}
rep := desc.repeated()
// Read the extension structure, and set it in
// the value we're constructing.
var ext reflect.Value
if !rep {
ext = reflect.New(typ).Elem()
} else {
ext = reflect.New(typ.Elem()).Elem()
}
if err := p.readAny(ext, props); err != nil {
if _, ok := err.(*RequiredNotSetError); !ok {
return err
}
reqFieldErr = err
}
ep := sv.Addr().Interface().(Message)
if !rep {
SetExtension(ep, desc, ext.Interface())
} else {
old, err := GetExtension(ep, desc)
var sl reflect.Value
if err == nil {
sl = reflect.ValueOf(old) // existing slice
} else {
sl = reflect.MakeSlice(typ, 0, 1)
}
sl = reflect.Append(sl, ext)
SetExtension(ep, desc, sl.Interface())
}
if err := p.consumeOptionalSeparator(); err != nil {
return err
}
continue
}
// This is a normal, non-extension field.
name := tok.value
var dst reflect.Value
fi, props, ok := structFieldByName(sprops, name)
if ok {
dst = sv.Field(fi)
} else if oop, ok := sprops.OneofTypes[name]; ok {
// It is a oneof.
props = oop.Prop
nv := reflect.New(oop.Type.Elem())
dst = nv.Elem().Field(0)
field := sv.Field(oop.Field)
if !field.IsNil() {
return p.errorf("field '%s' would overwrite already parsed oneof '%s'", name, sv.Type().Field(oop.Field).Name)
}
field.Set(nv)
}
if !dst.IsValid() {
return p.errorf("unknown field name %q in %v", name, st)
}
if dst.Kind() == reflect.Map {
// Consume any colon.
if err := p.checkForColon(props, dst.Type()); err != nil {
return err
}
// Construct the map if it doesn't already exist.
if dst.IsNil() {
dst.Set(reflect.MakeMap(dst.Type()))
}
key := reflect.New(dst.Type().Key()).Elem()
val := reflect.New(dst.Type().Elem()).Elem()
// The map entry should be this sequence of tokens:
// < key : KEY value : VALUE >
// However, implementations may omit key or value, and technically
// we should support them in any order. See b/28924776 for a time
// this went wrong.
tok := p.next()
var terminator string
switch tok.value {
case "<":
terminator = ">"
case "{":
terminator = "}"
default:
return p.errorf("expected '{' or '<', found %q", tok.value)
}
for {
tok := p.next()
if tok.err != nil {
return tok.err
}
if tok.value == terminator {
break
}
switch tok.value {
case "key":
if err := p.consumeToken(":"); err != nil {
return err
}
if err := p.readAny(key, props.mkeyprop); err != nil {
return err
}
if err := p.consumeOptionalSeparator(); err != nil {
return err
}
case "value":
if err := p.checkForColon(props.mvalprop, dst.Type().Elem()); err != nil {
return err
}
if err := p.readAny(val, props.mvalprop); err != nil {
return err
}
if err := p.consumeOptionalSeparator(); err != nil {
return err
}
default:
p.back()
return p.errorf(`expected "key", "value", or %q, found %q`, terminator, tok.value)
}
}
dst.SetMapIndex(key, val)
continue
}
// Check that it's not already set if it's not a repeated field.
if !props.Repeated && fieldSet[name] {
return p.errorf("non-repeated field %q was repeated", name)
}
if err := p.checkForColon(props, dst.Type()); err != nil {
return err
}
// Parse into the field.
fieldSet[name] = true
if err := p.readAny(dst, props); err != nil {
if _, ok := err.(*RequiredNotSetError); !ok {
return err
}
reqFieldErr = err
}
if props.Required {
reqCount--
}
if err := p.consumeOptionalSeparator(); err != nil {
return err
}
}
if reqCount > 0 {
return p.missingRequiredFieldError(sv)
}
return reqFieldErr
}
// consumeExtName consumes extension name or expanded Any type URL and the
// following ']'. It returns the name or URL consumed.
func (p *textParser) consumeExtName() (string, error) {
tok := p.next()
if tok.err != nil {
return "", tok.err
}
// If extension name or type url is quoted, it's a single token.
if len(tok.value) > 2 && isQuote(tok.value[0]) && tok.value[len(tok.value)-1] == tok.value[0] {
name, err := unquoteC(tok.value[1:len(tok.value)-1], rune(tok.value[0]))
if err != nil {
return "", err
}
return name, p.consumeToken("]")
}
// Consume everything up to "]"
var parts []string
for tok.value != "]" {
parts = append(parts, tok.value)
tok = p.next()
if tok.err != nil {
return "", p.errorf("unrecognized type_url or extension name: %s", tok.err)
}
}
return strings.Join(parts, ""), nil
}
// consumeOptionalSeparator consumes an optional semicolon or comma.
// It is used in readStruct to provide backward compatibility.
func (p *textParser) consumeOptionalSeparator() error {
tok := p.next()
if tok.err != nil {
return tok.err
}
if tok.value != ";" && tok.value != "," {
p.back()
}
return nil
}
func (p *textParser) readAny(v reflect.Value, props *Properties) error {
tok := p.next()
if tok.err != nil {
return tok.err
}
if tok.value == "" {
return p.errorf("unexpected EOF")
}
switch fv := v; fv.Kind() {
case reflect.Slice:
at := v.Type()
if at.Elem().Kind() == reflect.Uint8 {
// Special case for []byte
if tok.value[0] != '"' && tok.value[0] != '\'' {
// Deliberately written out here, as the error after
// this switch statement would write "invalid []byte: ...",
// which is not as user-friendly.
return p.errorf("invalid string: %v", tok.value)
}
bytes := []byte(tok.unquoted)
fv.Set(reflect.ValueOf(bytes))
return nil
}
// Repeated field.
if tok.value == "[" {
// Repeated field with list notation, like [1,2,3].
for {
fv.Set(reflect.Append(fv, reflect.New(at.Elem()).Elem()))
err := p.readAny(fv.Index(fv.Len()-1), props)
if err != nil {
return err
}
tok := p.next()
if tok.err != nil {
return tok.err
}
if tok.value == "]" {
break
}
if tok.value != "," {
return p.errorf("Expected ']' or ',' found %q", tok.value)
}
}
return nil
}
// One value of the repeated field.
p.back()
fv.Set(reflect.Append(fv, reflect.New(at.Elem()).Elem()))
return p.readAny(fv.Index(fv.Len()-1), props)
case reflect.Bool:
// true/1/t/True or false/f/0/False.
switch tok.value {
case "true", "1", "t", "True":
fv.SetBool(true)
return nil
case "false", "0", "f", "False":
fv.SetBool(false)
return nil
}
case reflect.Float32, reflect.Float64:
v := tok.value
// Ignore 'f' for compatibility with output generated by C++, but don't
// remove 'f' when the value is "-inf" or "inf".
if strings.HasSuffix(v, "f") && tok.value != "-inf" && tok.value != "inf" {
v = v[:len(v)-1]
}
if f, err := strconv.ParseFloat(v, fv.Type().Bits()); err == nil {
fv.SetFloat(f)
return nil
}
case reflect.Int32:
if x, err := strconv.ParseInt(tok.value, 0, 32); err == nil {
fv.SetInt(x)
return nil
}
if len(props.Enum) == 0 {
break
}
m, ok := enumValueMaps[props.Enum]
if !ok {
break
}
x, ok := m[tok.value]
if !ok {
break
}
fv.SetInt(int64(x))
return nil
case reflect.Int64:
if x, err := strconv.ParseInt(tok.value, 0, 64); err == nil {
fv.SetInt(x)
return nil
}
case reflect.Ptr:
// A basic field (indirected through pointer), or a repeated message/group
p.back()
fv.Set(reflect.New(fv.Type().Elem()))
return p.readAny(fv.Elem(), props)
case reflect.String:
if tok.value[0] == '"' || tok.value[0] == '\'' {
fv.SetString(tok.unquoted)
return nil
}
case reflect.Struct:
var terminator string
switch tok.value {
case "{":
terminator = "}"
case "<":
terminator = ">"
default:
return p.errorf("expected '{' or '<', found %q", tok.value)
}
// TODO: Handle nested messages which implement encoding.TextUnmarshaler.
return p.readStruct(fv, terminator)
case reflect.Uint32:
if x, err := strconv.ParseUint(tok.value, 0, 32); err == nil {
fv.SetUint(uint64(x))
return nil
}
case reflect.Uint64:
if x, err := strconv.ParseUint(tok.value, 0, 64); err == nil {
fv.SetUint(x)
return nil
}
}
return p.errorf("invalid %v: %v", v.Type(), tok.value)
}
// UnmarshalText reads a protocol buffer in Text format. UnmarshalText resets pb
// before starting to unmarshal, so any existing data in pb is always removed.
// If a required field is not set and no other error occurs,
// UnmarshalText returns *RequiredNotSetError.
func UnmarshalText(s string, pb Message) error {
if um, ok := pb.(encoding.TextUnmarshaler); ok {
err := um.UnmarshalText([]byte(s))
return err
}
pb.Reset()
v := reflect.ValueOf(pb)
if pe := newTextParser(s).readStruct(v.Elem(), ""); pe != nil {
return pe
}
return nil
}
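As a usage sketch of the syntax the parser above accepts (both brace styles, optional colons before nested messages, optional separators, '#' comments, quoted-string concatenation, and list notation for repeated fields), the following input parses with UnmarshalText; the field names belong to a hypothetical generated message, not to anything in this package:

package main

import "github.com/golang/protobuf/proto"

// input exercises several of the tokenizer and readStruct features above.
const input = `
name: "ali" "ce"          # adjacent quoted strings are concatenated
id: 7;                    # ';' and ',' are accepted as optional separators
address < city: "Oslo" >  # '<' '>' and '{' '}' are interchangeable; the colon is optional
tags: ["a", "b", "c"]     # repeated fields may use list notation
`

func parseInto(m proto.Message) error {
	return proto.UnmarshalText(input, m)
}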

File diff suppressed because it is too large

51
vendor/github.com/golang/protobuf/protoc-gen-go/doc.go generated vendored Normal file
View File

@@ -0,0 +1,51 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2010 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
/*
A plugin for the Google protocol buffer compiler to generate Go code.
Run it by building this program and putting it in your path with the name
protoc-gen-go
That word 'go' at the end becomes part of the option string set for the
protocol compiler, so once the protocol compiler (protoc) is installed
you can run
protoc --go_out=output_directory input_directory/file.proto
to generate Go bindings for the protocol defined by file.proto.
With that input, the output will be written to
output_directory/file.pb.go
The generated code is documented in the package comment for
the library.
See the README and documentation for protocol buffers to learn more:
https://developers.google.com/protocol-buffers/
*/
package documentation
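In a Go project this is typically wired up with a go:generate directive rather than invoked by hand. A small sketch, assuming protoc and protoc-gen-go are both on PATH; the package name and .proto path are illustrative:

// Package mypkg is a hypothetical package whose bindings are regenerated by
// running `go generate ./...` at the repository root.
//
//go:generate protoc --go_out=. api/service.proto
package mypkg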

File diff suppressed because it is too large

463
vendor/github.com/golang/protobuf/protoc-gen-go/grpc/grpc.go generated vendored Normal file
View File

@@ -0,0 +1,463 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2015 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
// Package grpc outputs gRPC service descriptions in Go code.
// It runs as a plugin for the Go protocol buffer compiler plugin.
// It is linked in to protoc-gen-go.
package grpc
import (
"fmt"
"path"
"strconv"
"strings"
pb "github.com/golang/protobuf/protoc-gen-go/descriptor"
"github.com/golang/protobuf/protoc-gen-go/generator"
)
// generatedCodeVersion indicates a version of the generated code.
// It is incremented whenever an incompatibility between the generated code and
// the grpc package is introduced; the generated code references
// a constant, grpc.SupportPackageIsVersionN (where N is generatedCodeVersion).
const generatedCodeVersion = 4
// Paths for packages used by code generated in this file,
// relative to the import_prefix of the generator.Generator.
const (
contextPkgPath = "golang.org/x/net/context"
grpcPkgPath = "google.golang.org/grpc"
)
func init() {
generator.RegisterPlugin(new(grpc))
}
// grpc is an implementation of the Go protocol buffer compiler's
// plugin architecture. It generates bindings for gRPC support.
type grpc struct {
gen *generator.Generator
}
// Name returns the name of this plugin, "grpc".
func (g *grpc) Name() string {
return "grpc"
}
// The names for packages imported in the generated code.
// They may vary from the final path component of the import path
// if the name is used by other packages.
var (
contextPkg string
grpcPkg string
)
// Init initializes the plugin.
func (g *grpc) Init(gen *generator.Generator) {
g.gen = gen
contextPkg = generator.RegisterUniquePackageName("context", nil)
grpcPkg = generator.RegisterUniquePackageName("grpc", nil)
}
// Given a type name defined in a .proto, return its object.
// Also record that we're using it, to guarantee the associated import.
func (g *grpc) objectNamed(name string) generator.Object {
g.gen.RecordTypeUse(name)
return g.gen.ObjectNamed(name)
}
// Given a type name defined in a .proto, return its name as we will print it.
func (g *grpc) typeName(str string) string {
return g.gen.TypeName(g.objectNamed(str))
}
// P forwards to g.gen.P.
func (g *grpc) P(args ...interface{}) { g.gen.P(args...) }
// Generate generates code for the services in the given file.
func (g *grpc) Generate(file *generator.FileDescriptor) {
if len(file.FileDescriptorProto.Service) == 0 {
return
}
g.P("// Reference imports to suppress errors if they are not otherwise used.")
g.P("var _ ", contextPkg, ".Context")
g.P("var _ ", grpcPkg, ".ClientConn")
g.P()
// Assert version compatibility.
g.P("// This is a compile-time assertion to ensure that this generated file")
g.P("// is compatible with the grpc package it is being compiled against.")
g.P("const _ = ", grpcPkg, ".SupportPackageIsVersion", generatedCodeVersion)
g.P()
for i, service := range file.FileDescriptorProto.Service {
g.generateService(file, service, i)
}
}
// GenerateImports generates the import declaration for this file.
func (g *grpc) GenerateImports(file *generator.FileDescriptor) {
if len(file.FileDescriptorProto.Service) == 0 {
return
}
g.P("import (")
g.P(contextPkg, " ", strconv.Quote(path.Join(g.gen.ImportPrefix, contextPkgPath)))
g.P(grpcPkg, " ", strconv.Quote(path.Join(g.gen.ImportPrefix, grpcPkgPath)))
g.P(")")
g.P()
}
// reservedClientName records whether a client name is reserved on the client side.
var reservedClientName = map[string]bool{
// TODO: do we need any in gRPC?
}
func unexport(s string) string { return strings.ToLower(s[:1]) + s[1:] }
// generateService generates all the code for the named service.
func (g *grpc) generateService(file *generator.FileDescriptor, service *pb.ServiceDescriptorProto, index int) {
path := fmt.Sprintf("6,%d", index) // 6 means service.
origServName := service.GetName()
fullServName := origServName
if pkg := file.GetPackage(); pkg != "" {
fullServName = pkg + "." + fullServName
}
servName := generator.CamelCase(origServName)
g.P()
g.P("// Client API for ", servName, " service")
g.P()
// Client interface.
g.P("type ", servName, "Client interface {")
for i, method := range service.Method {
g.gen.PrintComments(fmt.Sprintf("%s,2,%d", path, i)) // 2 means method in a service.
g.P(g.generateClientSignature(servName, method))
}
g.P("}")
g.P()
// Client structure.
g.P("type ", unexport(servName), "Client struct {")
g.P("cc *", grpcPkg, ".ClientConn")
g.P("}")
g.P()
// NewClient factory.
g.P("func New", servName, "Client (cc *", grpcPkg, ".ClientConn) ", servName, "Client {")
g.P("return &", unexport(servName), "Client{cc}")
g.P("}")
g.P()
var methodIndex, streamIndex int
serviceDescVar := "_" + servName + "_serviceDesc"
// Client method implementations.
for _, method := range service.Method {
var descExpr string
if !method.GetServerStreaming() && !method.GetClientStreaming() {
// Unary RPC method
descExpr = fmt.Sprintf("&%s.Methods[%d]", serviceDescVar, methodIndex)
methodIndex++
} else {
// Streaming RPC method
descExpr = fmt.Sprintf("&%s.Streams[%d]", serviceDescVar, streamIndex)
streamIndex++
}
g.generateClientMethod(servName, fullServName, serviceDescVar, method, descExpr)
}
g.P("// Server API for ", servName, " service")
g.P()
// Server interface.
serverType := servName + "Server"
g.P("type ", serverType, " interface {")
for i, method := range service.Method {
g.gen.PrintComments(fmt.Sprintf("%s,2,%d", path, i)) // 2 means method in a service.
g.P(g.generateServerSignature(servName, method))
}
g.P("}")
g.P()
// Server registration.
g.P("func Register", servName, "Server(s *", grpcPkg, ".Server, srv ", serverType, ") {")
g.P("s.RegisterService(&", serviceDescVar, `, srv)`)
g.P("}")
g.P()
// Server handler implementations.
var handlerNames []string
for _, method := range service.Method {
hname := g.generateServerMethod(servName, fullServName, method)
handlerNames = append(handlerNames, hname)
}
// Service descriptor.
g.P("var ", serviceDescVar, " = ", grpcPkg, ".ServiceDesc {")
g.P("ServiceName: ", strconv.Quote(fullServName), ",")
g.P("HandlerType: (*", serverType, ")(nil),")
g.P("Methods: []", grpcPkg, ".MethodDesc{")
for i, method := range service.Method {
if method.GetServerStreaming() || method.GetClientStreaming() {
continue
}
g.P("{")
g.P("MethodName: ", strconv.Quote(method.GetName()), ",")
g.P("Handler: ", handlerNames[i], ",")
g.P("},")
}
g.P("},")
g.P("Streams: []", grpcPkg, ".StreamDesc{")
for i, method := range service.Method {
if !method.GetServerStreaming() && !method.GetClientStreaming() {
continue
}
g.P("{")
g.P("StreamName: ", strconv.Quote(method.GetName()), ",")
g.P("Handler: ", handlerNames[i], ",")
if method.GetServerStreaming() {
g.P("ServerStreams: true,")
}
if method.GetClientStreaming() {
g.P("ClientStreams: true,")
}
g.P("},")
}
g.P("},")
g.P("Metadata: \"", file.GetName(), "\",")
g.P("}")
g.P()
}
// generateClientSignature returns the client-side signature for a method.
func (g *grpc) generateClientSignature(servName string, method *pb.MethodDescriptorProto) string {
origMethName := method.GetName()
methName := generator.CamelCase(origMethName)
if reservedClientName[methName] {
methName += "_"
}
reqArg := ", in *" + g.typeName(method.GetInputType())
if method.GetClientStreaming() {
reqArg = ""
}
respName := "*" + g.typeName(method.GetOutputType())
if method.GetServerStreaming() || method.GetClientStreaming() {
respName = servName + "_" + generator.CamelCase(origMethName) + "Client"
}
return fmt.Sprintf("%s(ctx %s.Context%s, opts ...%s.CallOption) (%s, error)", methName, contextPkg, reqArg, grpcPkg, respName)
}
func (g *grpc) generateClientMethod(servName, fullServName, serviceDescVar string, method *pb.MethodDescriptorProto, descExpr string) {
sname := fmt.Sprintf("/%s/%s", fullServName, method.GetName())
methName := generator.CamelCase(method.GetName())
inType := g.typeName(method.GetInputType())
outType := g.typeName(method.GetOutputType())
g.P("func (c *", unexport(servName), "Client) ", g.generateClientSignature(servName, method), "{")
if !method.GetServerStreaming() && !method.GetClientStreaming() {
g.P("out := new(", outType, ")")
// TODO: Pass descExpr to Invoke.
g.P("err := ", grpcPkg, `.Invoke(ctx, "`, sname, `", in, out, c.cc, opts...)`)
g.P("if err != nil { return nil, err }")
g.P("return out, nil")
g.P("}")
g.P()
return
}
streamType := unexport(servName) + methName + "Client"
g.P("stream, err := ", grpcPkg, ".NewClientStream(ctx, ", descExpr, `, c.cc, "`, sname, `", opts...)`)
g.P("if err != nil { return nil, err }")
g.P("x := &", streamType, "{stream}")
if !method.GetClientStreaming() {
g.P("if err := x.ClientStream.SendMsg(in); err != nil { return nil, err }")
g.P("if err := x.ClientStream.CloseSend(); err != nil { return nil, err }")
}
g.P("return x, nil")
g.P("}")
g.P()
genSend := method.GetClientStreaming()
genRecv := method.GetServerStreaming()
genCloseAndRecv := !method.GetServerStreaming()
// Stream auxiliary types and methods.
g.P("type ", servName, "_", methName, "Client interface {")
if genSend {
g.P("Send(*", inType, ") error")
}
if genRecv {
g.P("Recv() (*", outType, ", error)")
}
if genCloseAndRecv {
g.P("CloseAndRecv() (*", outType, ", error)")
}
g.P(grpcPkg, ".ClientStream")
g.P("}")
g.P()
g.P("type ", streamType, " struct {")
g.P(grpcPkg, ".ClientStream")
g.P("}")
g.P()
if genSend {
g.P("func (x *", streamType, ") Send(m *", inType, ") error {")
g.P("return x.ClientStream.SendMsg(m)")
g.P("}")
g.P()
}
if genRecv {
g.P("func (x *", streamType, ") Recv() (*", outType, ", error) {")
g.P("m := new(", outType, ")")
g.P("if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err }")
g.P("return m, nil")
g.P("}")
g.P()
}
if genCloseAndRecv {
g.P("func (x *", streamType, ") CloseAndRecv() (*", outType, ", error) {")
g.P("if err := x.ClientStream.CloseSend(); err != nil { return nil, err }")
g.P("m := new(", outType, ")")
g.P("if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err }")
g.P("return m, nil")
g.P("}")
g.P()
}
}
// generateServerSignature returns the server-side signature for a method.
func (g *grpc) generateServerSignature(servName string, method *pb.MethodDescriptorProto) string {
origMethName := method.GetName()
methName := generator.CamelCase(origMethName)
if reservedClientName[methName] {
methName += "_"
}
var reqArgs []string
ret := "error"
if !method.GetServerStreaming() && !method.GetClientStreaming() {
reqArgs = append(reqArgs, contextPkg+".Context")
ret = "(*" + g.typeName(method.GetOutputType()) + ", error)"
}
if !method.GetClientStreaming() {
reqArgs = append(reqArgs, "*"+g.typeName(method.GetInputType()))
}
if method.GetServerStreaming() || method.GetClientStreaming() {
reqArgs = append(reqArgs, servName+"_"+generator.CamelCase(origMethName)+"Server")
}
return methName + "(" + strings.Join(reqArgs, ", ") + ") " + ret
}
func (g *grpc) generateServerMethod(servName, fullServName string, method *pb.MethodDescriptorProto) string {
methName := generator.CamelCase(method.GetName())
hname := fmt.Sprintf("_%s_%s_Handler", servName, methName)
inType := g.typeName(method.GetInputType())
outType := g.typeName(method.GetOutputType())
if !method.GetServerStreaming() && !method.GetClientStreaming() {
g.P("func ", hname, "(srv interface{}, ctx ", contextPkg, ".Context, dec func(interface{}) error, interceptor ", grpcPkg, ".UnaryServerInterceptor) (interface{}, error) {")
g.P("in := new(", inType, ")")
g.P("if err := dec(in); err != nil { return nil, err }")
g.P("if interceptor == nil { return srv.(", servName, "Server).", methName, "(ctx, in) }")
g.P("info := &", grpcPkg, ".UnaryServerInfo{")
g.P("Server: srv,")
g.P("FullMethod: ", strconv.Quote(fmt.Sprintf("/%s/%s", fullServName, methName)), ",")
g.P("}")
g.P("handler := func(ctx ", contextPkg, ".Context, req interface{}) (interface{}, error) {")
g.P("return srv.(", servName, "Server).", methName, "(ctx, req.(*", inType, "))")
g.P("}")
g.P("return interceptor(ctx, in, info, handler)")
g.P("}")
g.P()
return hname
}
streamType := unexport(servName) + methName + "Server"
g.P("func ", hname, "(srv interface{}, stream ", grpcPkg, ".ServerStream) error {")
if !method.GetClientStreaming() {
g.P("m := new(", inType, ")")
g.P("if err := stream.RecvMsg(m); err != nil { return err }")
g.P("return srv.(", servName, "Server).", methName, "(m, &", streamType, "{stream})")
} else {
g.P("return srv.(", servName, "Server).", methName, "(&", streamType, "{stream})")
}
g.P("}")
g.P()
genSend := method.GetServerStreaming()
genSendAndClose := !method.GetServerStreaming()
genRecv := method.GetClientStreaming()
// Stream auxiliary types and methods.
g.P("type ", servName, "_", methName, "Server interface {")
if genSend {
g.P("Send(*", outType, ") error")
}
if genSendAndClose {
g.P("SendAndClose(*", outType, ") error")
}
if genRecv {
g.P("Recv() (*", inType, ", error)")
}
g.P(grpcPkg, ".ServerStream")
g.P("}")
g.P()
g.P("type ", streamType, " struct {")
g.P(grpcPkg, ".ServerStream")
g.P("}")
g.P()
if genSend {
g.P("func (x *", streamType, ") Send(m *", outType, ") error {")
g.P("return x.ServerStream.SendMsg(m)")
g.P("}")
g.P()
}
if genSendAndClose {
g.P("func (x *", streamType, ") SendAndClose(m *", outType, ") error {")
g.P("return x.ServerStream.SendMsg(m)")
g.P("}")
g.P()
}
if genRecv {
g.P("func (x *", streamType, ") Recv() (*", inType, ", error) {")
g.P("m := new(", inType, ")")
g.P("if err := x.ServerStream.RecvMsg(m); err != nil { return nil, err }")
g.P("return m, nil")
g.P("}")
g.P()
}
return hname
}
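For orientation, here is roughly the shape of what generateService and its helpers above emit for a hypothetical unary method — a service Echo with rpc Ping(PingRequest) returns (PingReply) in proto package demo. The sketch is abbreviated (the server handler, stream types, and service descriptor are omitted) and the names are illustrative, not generated from any real .proto in this tree:

type EchoClient interface {
	Ping(ctx context.Context, in *PingRequest, opts ...grpc.CallOption) (*PingReply, error)
}

type echoClient struct {
	cc *grpc.ClientConn
}

func NewEchoClient(cc *grpc.ClientConn) EchoClient {
	return &echoClient{cc}
}

func (c *echoClient) Ping(ctx context.Context, in *PingRequest, opts ...grpc.CallOption) (*PingReply, error) {
	out := new(PingReply)
	err := grpc.Invoke(ctx, "/demo.Echo/Ping", in, out, c.cc, opts...)
	if err != nil {
		return nil, err
	}
	return out, nil
}

type EchoServer interface {
	Ping(context.Context, *PingRequest) (*PingReply, error)
}

func RegisterEchoServer(s *grpc.Server, srv EchoServer) {
	s.RegisterService(&_Echo_serviceDesc, srv)
}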

34
vendor/github.com/golang/protobuf/protoc-gen-go/link_grpc.go generated vendored Normal file
View File

@@ -0,0 +1,34 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2015 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package main
import _ "github.com/golang/protobuf/protoc-gen-go/grpc"
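The blank import is all that is needed: the grpc package registers itself with the generator from its init function. A rebuilt protoc-gen-go could link additional generators the same way; a sketch using a purely hypothetical extra plugin import path:

package main

import (
	_ "example.com/hypothetical/protoc-gen-myfeature/plugin" // hypothetical extra plugin
	_ "github.com/golang/protobuf/protoc-gen-go/grpc"
)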

98
vendor/github.com/golang/protobuf/protoc-gen-go/main.go generated vendored Normal file
View File

@@ -0,0 +1,98 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2010 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
// protoc-gen-go is a plugin for the Google protocol buffer compiler to generate
// Go code. Run it by building this program and putting it in your path with
// the name
// protoc-gen-go
// That word 'go' at the end becomes part of the option string set for the
// protocol compiler, so once the protocol compiler (protoc) is installed
// you can run
// protoc --go_out=output_directory input_directory/file.proto
// to generate Go bindings for the protocol defined by file.proto.
// With that input, the output will be written to
// output_directory/file.pb.go
//
// The generated code is documented in the package comment for
// the library.
//
// See the README and documentation for protocol buffers to learn more:
// https://developers.google.com/protocol-buffers/
package main
import (
"io/ioutil"
"os"
"github.com/golang/protobuf/proto"
"github.com/golang/protobuf/protoc-gen-go/generator"
)
func main() {
// Begin by allocating a generator. The request and response structures are stored there
// so we can do error handling easily - the response structure contains the field to
// report failure.
g := generator.New()
data, err := ioutil.ReadAll(os.Stdin)
if err != nil {
g.Error(err, "reading input")
}
if err := proto.Unmarshal(data, g.Request); err != nil {
g.Error(err, "parsing input proto")
}
if len(g.Request.FileToGenerate) == 0 {
g.Fail("no files to generate")
}
g.CommandLineParameters(g.Request.GetParameter())
// Create a wrapped version of the Descriptors and EnumDescriptors that
// point to the file that defines them.
g.WrapTypes()
g.SetPackageNames()
g.BuildTypeNameMap()
g.GenerateAllFiles()
// Send back the results.
data, err = proto.Marshal(g.Response)
if err != nil {
g.Error(err, "failed to marshal output proto")
}
_, err = os.Stdout.Write(data)
if err != nil {
g.Error(err, "failed to write output proto")
}
}
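The same stdin/stdout protocol can be driven without the generator package at all. A minimal sketch of a hypothetical standalone protoc plugin that emits one fixed output file per requested .proto (error handling is elided for brevity):

package main

import (
	"io/ioutil"
	"os"

	"github.com/golang/protobuf/proto"
	plugin "github.com/golang/protobuf/protoc-gen-go/plugin"
)

func main() {
	data, _ := ioutil.ReadAll(os.Stdin)

	req := new(plugin.CodeGeneratorRequest)
	if err := proto.Unmarshal(data, req); err != nil {
		os.Exit(1)
	}

	// One generated file per .proto listed on the protoc command line.
	resp := new(plugin.CodeGeneratorResponse)
	for _, name := range req.GetFileToGenerate() {
		resp.File = append(resp.File, &plugin.CodeGeneratorResponse_File{
			Name:    proto.String(name + ".txt"),
			Content: proto.String("generated from " + name + "\n"),
		})
	}

	out, _ := proto.Marshal(resp)
	os.Stdout.Write(out)
}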

229
vendor/github.com/golang/protobuf/protoc-gen-go/plugin/plugin.pb.go generated vendored Normal file
View File

@@ -0,0 +1,229 @@
// Code generated by protoc-gen-go.
// source: google/protobuf/compiler/plugin.proto
// DO NOT EDIT!
/*
Package plugin_go is a generated protocol buffer package.
It is generated from these files:
google/protobuf/compiler/plugin.proto
It has these top-level messages:
CodeGeneratorRequest
CodeGeneratorResponse
*/
package plugin_go
import proto "github.com/golang/protobuf/proto"
import fmt "fmt"
import math "math"
import google_protobuf "github.com/golang/protobuf/protoc-gen-go/descriptor"
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package
// An encoded CodeGeneratorRequest is written to the plugin's stdin.
type CodeGeneratorRequest struct {
// The .proto files that were explicitly listed on the command-line. The
// code generator should generate code only for these files. Each file's
// descriptor will be included in proto_file, below.
FileToGenerate []string `protobuf:"bytes,1,rep,name=file_to_generate,json=fileToGenerate" json:"file_to_generate,omitempty"`
// The generator parameter passed on the command-line.
Parameter *string `protobuf:"bytes,2,opt,name=parameter" json:"parameter,omitempty"`
// FileDescriptorProtos for all files in files_to_generate and everything
// they import. The files will appear in topological order, so each file
// appears before any file that imports it.
//
// protoc guarantees that all proto_files will be written after
// the fields above, even though this is not technically guaranteed by the
// protobuf wire format. This theoretically could allow a plugin to stream
// in the FileDescriptorProtos and handle them one by one rather than read
// the entire set into memory at once. However, as of this writing, this
// is not similarly optimized on protoc's end -- it will store all fields in
// memory at once before sending them to the plugin.
ProtoFile []*google_protobuf.FileDescriptorProto `protobuf:"bytes,15,rep,name=proto_file,json=protoFile" json:"proto_file,omitempty"`
XXX_unrecognized []byte `json:"-"`
}
func (m *CodeGeneratorRequest) Reset() { *m = CodeGeneratorRequest{} }
func (m *CodeGeneratorRequest) String() string { return proto.CompactTextString(m) }
func (*CodeGeneratorRequest) ProtoMessage() {}
func (*CodeGeneratorRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
func (m *CodeGeneratorRequest) GetFileToGenerate() []string {
if m != nil {
return m.FileToGenerate
}
return nil
}
func (m *CodeGeneratorRequest) GetParameter() string {
if m != nil && m.Parameter != nil {
return *m.Parameter
}
return ""
}
func (m *CodeGeneratorRequest) GetProtoFile() []*google_protobuf.FileDescriptorProto {
if m != nil {
return m.ProtoFile
}
return nil
}
// The plugin writes an encoded CodeGeneratorResponse to stdout.
type CodeGeneratorResponse struct {
// Error message. If non-empty, code generation failed. The plugin process
// should exit with status code zero even if it reports an error in this way.
//
// This should be used to indicate errors in .proto files which prevent the
// code generator from generating correct code. Errors which indicate a
// problem in protoc itself -- such as the input CodeGeneratorRequest being
// unparseable -- should be reported by writing a message to stderr and
// exiting with a non-zero status code.
Error *string `protobuf:"bytes,1,opt,name=error" json:"error,omitempty"`
File []*CodeGeneratorResponse_File `protobuf:"bytes,15,rep,name=file" json:"file,omitempty"`
XXX_unrecognized []byte `json:"-"`
}
func (m *CodeGeneratorResponse) Reset() { *m = CodeGeneratorResponse{} }
func (m *CodeGeneratorResponse) String() string { return proto.CompactTextString(m) }
func (*CodeGeneratorResponse) ProtoMessage() {}
func (*CodeGeneratorResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{1} }
func (m *CodeGeneratorResponse) GetError() string {
if m != nil && m.Error != nil {
return *m.Error
}
return ""
}
func (m *CodeGeneratorResponse) GetFile() []*CodeGeneratorResponse_File {
if m != nil {
return m.File
}
return nil
}
// Represents a single generated file.
type CodeGeneratorResponse_File struct {
// The file name, relative to the output directory. The name must not
// contain "." or ".." components and must be relative, not absolute (so,
// the file cannot lie outside the output directory). "/" must be used as
// the path separator, not "\".
//
// If the name is omitted, the content will be appended to the previous
// file. This allows the generator to break large files into small chunks,
// and allows the generated text to be streamed back to protoc so that large
// files need not reside completely in memory at one time. Note that as of
// this writing protoc does not optimize for this -- it will read the entire
// CodeGeneratorResponse before writing files to disk.
Name *string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
// If non-empty, indicates that the named file should already exist, and the
// content here is to be inserted into that file at a defined insertion
// point. This feature allows a code generator to extend the output
// produced by another code generator. The original generator may provide
// insertion points by placing special annotations in the file that look
// like:
// @@protoc_insertion_point(NAME)
// The annotation can have arbitrary text before and after it on the line,
// which allows it to be placed in a comment. NAME should be replaced with
// an identifier naming the point -- this is what other generators will use
// as the insertion_point. Code inserted at this point will be placed
// immediately above the line containing the insertion point (thus multiple
// insertions to the same point will come out in the order they were added).
// The double-@ is intended to make it unlikely that the generated code
// could contain things that look like insertion points by accident.
//
// For example, the C++ code generator places the following line in the
// .pb.h files that it generates:
// // @@protoc_insertion_point(namespace_scope)
// This line appears within the scope of the file's package namespace, but
// outside of any particular class. Another plugin can then specify the
// insertion_point "namespace_scope" to generate additional classes or
// other declarations that should be placed in this scope.
//
// Note that if the line containing the insertion point begins with
// whitespace, the same whitespace will be added to every line of the
// inserted text. This is useful for languages like Python, where
// indentation matters. In these languages, the insertion point comment
// should be indented the same amount as any inserted code will need to be
// in order to work correctly in that context.
//
// The code generator that generates the initial file and the one which
// inserts into it must both run as part of a single invocation of protoc.
// Code generators are executed in the order in which they appear on the
// command line.
//
// If |insertion_point| is present, |name| must also be present.
InsertionPoint *string `protobuf:"bytes,2,opt,name=insertion_point,json=insertionPoint" json:"insertion_point,omitempty"`
// The file contents.
Content *string `protobuf:"bytes,15,opt,name=content" json:"content,omitempty"`
XXX_unrecognized []byte `json:"-"`
}
func (m *CodeGeneratorResponse_File) Reset() { *m = CodeGeneratorResponse_File{} }
func (m *CodeGeneratorResponse_File) String() string { return proto.CompactTextString(m) }
func (*CodeGeneratorResponse_File) ProtoMessage() {}
func (*CodeGeneratorResponse_File) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{1, 0} }
func (m *CodeGeneratorResponse_File) GetName() string {
if m != nil && m.Name != nil {
return *m.Name
}
return ""
}
func (m *CodeGeneratorResponse_File) GetInsertionPoint() string {
if m != nil && m.InsertionPoint != nil {
return *m.InsertionPoint
}
return ""
}
func (m *CodeGeneratorResponse_File) GetContent() string {
if m != nil && m.Content != nil {
return *m.Content
}
return ""
}
func init() {
proto.RegisterType((*CodeGeneratorRequest)(nil), "google.protobuf.compiler.CodeGeneratorRequest")
proto.RegisterType((*CodeGeneratorResponse)(nil), "google.protobuf.compiler.CodeGeneratorResponse")
proto.RegisterType((*CodeGeneratorResponse_File)(nil), "google.protobuf.compiler.CodeGeneratorResponse.File")
}
func init() { proto.RegisterFile("google/protobuf/compiler/plugin.proto", fileDescriptor0) }
var fileDescriptor0 = []byte{
// 310 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0x74, 0x51, 0xc1, 0x4a, 0xc3, 0x40,
0x10, 0x25, 0xb6, 0x22, 0x19, 0xa5, 0x95, 0xa5, 0xc2, 0x52, 0x7a, 0x08, 0x45, 0x31, 0xa7, 0x14,
0x44, 0xf0, 0xde, 0x8a, 0x7a, 0x2c, 0xc1, 0x93, 0x20, 0x21, 0xa6, 0xd3, 0xb0, 0x90, 0xec, 0xac,
0xb3, 0xdb, 0x2f, 0xf2, 0x9f, 0xfc, 0x1e, 0xd9, 0x4d, 0x5b, 0xa5, 0xd8, 0xdb, 0xce, 0x7b, 0x6f,
0xe6, 0xbd, 0x9d, 0x81, 0x9b, 0x9a, 0xa8, 0x6e, 0x70, 0x66, 0x98, 0x1c, 0x7d, 0x6c, 0xd6, 0xb3,
0x8a, 0x5a, 0xa3, 0x1a, 0xe4, 0x99, 0x69, 0x36, 0xb5, 0xd2, 0x59, 0x20, 0x84, 0xec, 0x64, 0xd9,
0x4e, 0x96, 0xed, 0x64, 0xe3, 0xe4, 0x70, 0xc0, 0x0a, 0x6d, 0xc5, 0xca, 0x38, 0xe2, 0x4e, 0x3d,
0xfd, 0x8a, 0x60, 0xb4, 0xa0, 0x15, 0x3e, 0xa3, 0x46, 0x2e, 0x1d, 0x71, 0x8e, 0x9f, 0x1b, 0xb4,
0x4e, 0xa4, 0x70, 0xb9, 0x56, 0x0d, 0x16, 0x8e, 0x8a, 0xba, 0xe3, 0x50, 0x46, 0x49, 0x2f, 0x8d,
0xf3, 0x81, 0xc7, 0x5f, 0x69, 0xdb, 0x81, 0x62, 0x02, 0xb1, 0x29, 0xb9, 0x6c, 0xd1, 0x21, 0xcb,
0x93, 0x24, 0x4a, 0xe3, 0xfc, 0x17, 0x10, 0x0b, 0x80, 0xe0, 0x54, 0xf8, 0x2e, 0x39, 0x4c, 0x7a,
0xe9, 0xf9, 0xdd, 0x75, 0x76, 0x98, 0xf8, 0x49, 0x35, 0xf8, 0xb8, 0xcf, 0xb6, 0xf4, 0x70, 0x1e,
0x07, 0xd6, 0x33, 0xd3, 0xef, 0x08, 0xae, 0x0e, 0x52, 0x5a, 0x43, 0xda, 0xa2, 0x18, 0xc1, 0x29,
0x32, 0x13, 0xcb, 0x28, 0x18, 0x77, 0x85, 0x78, 0x81, 0xfe, 0x1f, 0xbb, 0xfb, 0xec, 0xd8, 0x82,
0xb2, 0x7f, 0x87, 0x86, 0x34, 0x79, 0x98, 0x30, 0x7e, 0x87, 0xbe, 0xaf, 0x84, 0x80, 0xbe, 0x2e,
0x5b, 0xdc, 0xda, 0x84, 0xb7, 0xb8, 0x85, 0xa1, 0xd2, 0x16, 0xd9, 0x29, 0xd2, 0x85, 0x21, 0xa5,
0xdd, 0xf6, 0xfb, 0x83, 0x3d, 0xbc, 0xf4, 0xa8, 0x90, 0x70, 0x56, 0x91, 0x76, 0xa8, 0x9d, 0x1c,
0x06, 0xc1, 0xae, 0x9c, 0x3f, 0xc0, 0xa4, 0xa2, 0xf6, 0x68, 0xbe, 0xf9, 0xc5, 0x32, 0x1c, 0x3a,
0x2c, 0xc4, 0xbe, 0xc5, 0xdd, 0xd9, 0x8b, 0x9a, 0x7e, 0x02, 0x00, 0x00, 0xff, 0xff, 0x83, 0x7b,
0x5c, 0x7c, 0x1b, 0x02, 0x00, 0x00,
}
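A hedged illustration of the insertion_point mechanism described in the comments above; it is an assumption for demonstration, not vendored code. The point name namespace_scope mirrors the C++ example in those comments, and the file name and inserted content are arbitrary.
package main

import (
	"fmt"

	"github.com/golang/protobuf/proto"
	plugin_go "github.com/golang/protobuf/protoc-gen-go/plugin"
)

func main() {
	f := &plugin_go.CodeGeneratorResponse_File{
		// Name must be present whenever InsertionPoint is set.
		Name:           proto.String("example.pb.h"),
		InsertionPoint: proto.String("namespace_scope"),
		Content:        proto.String("// additional declarations inserted here\n"),
	}
	resp := &plugin_go.CodeGeneratorResponse{
		File: []*plugin_go.CodeGeneratorResponse_File{f},
	}
	fmt.Println(resp.File[0].GetName(), resp.File[0].GetInsertionPoint())
}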

136
vendor/github.com/golang/protobuf/ptypes/any.go generated vendored Normal file
View File

@ -0,0 +1,136 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2016 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package ptypes
// This file implements functions to marshal proto.Message to/from
// google.protobuf.Any message.
import (
"fmt"
"reflect"
"strings"
"github.com/golang/protobuf/proto"
"github.com/golang/protobuf/ptypes/any"
)
const googleApis = "type.googleapis.com/"
// AnyMessageName returns the name of the message contained in a google.protobuf.Any message.
//
// Note that regular type assertions should be done using the Is
// function. AnyMessageName is provided for less common use cases like filtering a
// sequence of Any messages based on a set of allowed message type names.
func AnyMessageName(any *any.Any) (string, error) {
slash := strings.LastIndex(any.TypeUrl, "/")
if slash < 0 {
return "", fmt.Errorf("message type url %q is invalid", any.TypeUrl)
}
return any.TypeUrl[slash+1:], nil
}
// MarshalAny takes the protocol buffer and encodes it into google.protobuf.Any.
func MarshalAny(pb proto.Message) (*any.Any, error) {
value, err := proto.Marshal(pb)
if err != nil {
return nil, err
}
return &any.Any{TypeUrl: googleApis + proto.MessageName(pb), Value: value}, nil
}
// DynamicAny is a value that can be passed to UnmarshalAny to automatically
// allocate a proto.Message for the type specified in a google.protobuf.Any
// message. The allocated message is stored in the embedded proto.Message.
//
// Example:
//
// var x ptypes.DynamicAny
// if err := ptypes.UnmarshalAny(a, &x); err != nil { ... }
// fmt.Printf("unmarshaled message: %v", x.Message)
type DynamicAny struct {
proto.Message
}
// Empty returns a new proto.Message of the type specified in a
// google.protobuf.Any message. It returns an error if the corresponding message
// type isn't linked in.
func Empty(any *any.Any) (proto.Message, error) {
aname, err := AnyMessageName(any)
if err != nil {
return nil, err
}
t := proto.MessageType(aname)
if t == nil {
return nil, fmt.Errorf("any: message type %q isn't linked in", aname)
}
return reflect.New(t.Elem()).Interface().(proto.Message), nil
}
// UnmarshalAny parses the protocol buffer representation in a google.protobuf.Any
// message and places the decoded result in pb. It returns an error if the type of
// the contents of the Any message does not match the type of the pb message.
//
// pb can be a proto.Message, or a *DynamicAny.
func UnmarshalAny(any *any.Any, pb proto.Message) error {
if d, ok := pb.(*DynamicAny); ok {
if d.Message == nil {
var err error
d.Message, err = Empty(any)
if err != nil {
return err
}
}
return UnmarshalAny(any, d.Message)
}
aname, err := AnyMessageName(any)
if err != nil {
return err
}
mname := proto.MessageName(pb)
if aname != mname {
return fmt.Errorf("mismatched message type: got %q want %q", aname, mname)
}
return proto.Unmarshal(any.Value, pb)
}
// Is returns true if any value contains a given message type.
func Is(any *any.Any, pb proto.Message) bool {
aname, err := AnyMessageName(any)
if err != nil {
return false
}
return aname == proto.MessageName(pb)
}
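A hedged usage sketch for the helpers above, not part of the vendored file: it packs a registered message into an Any and unpacks it again. google.protobuf.Duration is used only because it is already linked into this vendor tree; any registered proto.Message would work the same way.
package main

import (
	"fmt"
	"log"

	"github.com/golang/protobuf/ptypes"
	durpb "github.com/golang/protobuf/ptypes/duration"
)

func main() {
	src := &durpb.Duration{Seconds: 90, Nanos: 500000000}

	// MarshalAny stores the type URL next to the serialized bytes.
	a, err := ptypes.MarshalAny(src)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(a.TypeUrl) // type.googleapis.com/google.protobuf.Duration

	// Is checks the contained type before unmarshaling.
	if ptypes.Is(a, &durpb.Duration{}) {
		dst := new(durpb.Duration)
		if err := ptypes.UnmarshalAny(a, dst); err != nil {
			log.Fatal(err)
		}
		fmt.Println(dst.Seconds, dst.Nanos) // 90 500000000
	}

	// DynamicAny allocates the concrete type from the registry instead.
	var dyn ptypes.DynamicAny
	if err := ptypes.UnmarshalAny(a, &dyn); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%T\n", dyn.Message) // *duration.Duration
}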

155
vendor/github.com/golang/protobuf/ptypes/any/any.pb.go generated vendored Normal file
View File

@ -0,0 +1,155 @@
// Code generated by protoc-gen-go.
// source: github.com/golang/protobuf/ptypes/any/any.proto
// DO NOT EDIT!
/*
Package any is a generated protocol buffer package.
It is generated from these files:
github.com/golang/protobuf/ptypes/any/any.proto
It has these top-level messages:
Any
*/
package any
import proto "github.com/golang/protobuf/proto"
import fmt "fmt"
import math "math"
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package
// `Any` contains an arbitrary serialized protocol buffer message along with a
// URL that describes the type of the serialized message.
//
// Protobuf library provides support to pack/unpack Any values in the form
// of utility functions or additional generated methods of the Any type.
//
// Example 1: Pack and unpack a message in C++.
//
// Foo foo = ...;
// Any any;
// any.PackFrom(foo);
// ...
// if (any.UnpackTo(&foo)) {
// ...
// }
//
// Example 2: Pack and unpack a message in Java.
//
// Foo foo = ...;
// Any any = Any.pack(foo);
// ...
// if (any.is(Foo.class)) {
// foo = any.unpack(Foo.class);
// }
//
// Example 3: Pack and unpack a message in Python.
//
// foo = Foo(...)
// any = Any()
// any.Pack(foo)
// ...
// if any.Is(Foo.DESCRIPTOR):
// any.Unpack(foo)
// ...
//
// The pack methods provided by protobuf library will by default use
// 'type.googleapis.com/full.type.name' as the type URL and the unpack
// methods only use the fully qualified type name after the last '/'
// in the type URL, for example "foo.bar.com/x/y.z" will yield type
// name "y.z".
//
//
// JSON
// ====
// The JSON representation of an `Any` value uses the regular
// representation of the deserialized, embedded message, with an
// additional field `@type` which contains the type URL. Example:
//
// package google.profile;
// message Person {
// string first_name = 1;
// string last_name = 2;
// }
//
// {
// "@type": "type.googleapis.com/google.profile.Person",
// "firstName": <string>,
// "lastName": <string>
// }
//
// If the embedded message type is well-known and has a custom JSON
// representation, that representation will be embedded adding a field
// `value` which holds the custom JSON in addition to the `@type`
// field. Example (for message [google.protobuf.Duration][]):
//
// {
// "@type": "type.googleapis.com/google.protobuf.Duration",
// "value": "1.212s"
// }
//
type Any struct {
// A URL/resource name whose content describes the type of the
// serialized protocol buffer message.
//
// For URLs which use the scheme `http`, `https`, or no scheme, the
// following restrictions and interpretations apply:
//
// * If no scheme is provided, `https` is assumed.
// * The last segment of the URL's path must represent the fully
// qualified name of the type (as in `path/google.protobuf.Duration`).
// The name should be in a canonical form (e.g., leading "." is
// not accepted).
// * An HTTP GET on the URL must yield a [google.protobuf.Type][]
// value in binary format, or produce an error.
// * Applications are allowed to cache lookup results based on the
// URL, or have them precompiled into a binary to avoid any
// lookup. Therefore, binary compatibility needs to be preserved
// on changes to types. (Use versioned type names to manage
// breaking changes.)
//
// Schemes other than `http`, `https` (or the empty scheme) might be
// used with implementation specific semantics.
//
TypeUrl string `protobuf:"bytes,1,opt,name=type_url,json=typeUrl" json:"type_url,omitempty"`
// Must be a valid serialized protocol buffer of the above specified type.
Value []byte `protobuf:"bytes,2,opt,name=value,proto3" json:"value,omitempty"`
}
func (m *Any) Reset() { *m = Any{} }
func (m *Any) String() string { return proto.CompactTextString(m) }
func (*Any) ProtoMessage() {}
func (*Any) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
func (*Any) XXX_WellKnownType() string { return "Any" }
func init() {
proto.RegisterType((*Any)(nil), "google.protobuf.Any")
}
func init() { proto.RegisterFile("github.com/golang/protobuf/ptypes/any/any.proto", fileDescriptor0) }
var fileDescriptor0 = []byte{
// 187 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xe2, 0xd2, 0x4f, 0xcf, 0x2c, 0xc9,
0x28, 0x4d, 0xd2, 0x4b, 0xce, 0xcf, 0xd5, 0x4f, 0xcf, 0xcf, 0x49, 0xcc, 0x4b, 0xd7, 0x2f, 0x28,
0xca, 0x2f, 0xc9, 0x4f, 0x2a, 0x4d, 0xd3, 0x2f, 0x28, 0xa9, 0x2c, 0x48, 0x2d, 0xd6, 0x4f, 0xcc,
0xab, 0x04, 0x61, 0x3d, 0xb0, 0xb8, 0x10, 0x7f, 0x7a, 0x7e, 0x7e, 0x7a, 0x4e, 0xaa, 0x1e, 0x4c,
0x95, 0x92, 0x19, 0x17, 0xb3, 0x63, 0x5e, 0xa5, 0x90, 0x24, 0x17, 0x07, 0x48, 0x79, 0x7c, 0x69,
0x51, 0x8e, 0x04, 0xa3, 0x02, 0xa3, 0x06, 0x67, 0x10, 0x3b, 0x88, 0x1f, 0x5a, 0x94, 0x23, 0x24,
0xc2, 0xc5, 0x5a, 0x96, 0x98, 0x53, 0x9a, 0x2a, 0xc1, 0xa4, 0xc0, 0xa8, 0xc1, 0x13, 0x04, 0xe1,
0x38, 0x15, 0x71, 0x09, 0x27, 0xe7, 0xe7, 0xea, 0xa1, 0x19, 0xe7, 0xc4, 0xe1, 0x98, 0x57, 0x19,
0x00, 0xe2, 0x04, 0x30, 0x46, 0xa9, 0x12, 0xe5, 0xb8, 0x05, 0x8c, 0x8c, 0x8b, 0x98, 0x98, 0xdd,
0x03, 0x9c, 0x56, 0x31, 0xc9, 0xb9, 0x43, 0x4c, 0x0b, 0x80, 0xaa, 0xd2, 0x0b, 0x4f, 0xcd, 0xc9,
0xf1, 0xce, 0xcb, 0x2f, 0xcf, 0x0b, 0x01, 0xa9, 0x4e, 0x62, 0x03, 0x6b, 0x37, 0x06, 0x04, 0x00,
0x00, 0xff, 0xff, 0xc6, 0x4d, 0x03, 0x23, 0xf6, 0x00, 0x00, 0x00,
}

35
vendor/github.com/golang/protobuf/ptypes/doc.go generated vendored Normal file
View File

@ -0,0 +1,35 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2016 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
/*
Package ptypes contains code for interacting with well-known types.
*/
package ptypes

102
vendor/github.com/golang/protobuf/ptypes/duration.go generated vendored Normal file
View File

@ -0,0 +1,102 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2016 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package ptypes
// This file implements conversions between google.protobuf.Duration
// and time.Duration.
import (
"errors"
"fmt"
"time"
durpb "github.com/golang/protobuf/ptypes/duration"
)
const (
// Range of a durpb.Duration in seconds, as specified in
// google/protobuf/duration.proto. This is about 10,000 years in seconds.
maxSeconds = int64(10000 * 365.25 * 24 * 60 * 60)
minSeconds = -maxSeconds
)
// validateDuration determines whether the durpb.Duration is valid according to the
// definition in google/protobuf/duration.proto. A valid durpb.Duration
// may still be too large to fit into a time.Duration (the range of durpb.Duration
// is about 10,000 years, and the range of time.Duration is about 290 years).
func validateDuration(d *durpb.Duration) error {
if d == nil {
return errors.New("duration: nil Duration")
}
if d.Seconds < minSeconds || d.Seconds > maxSeconds {
return fmt.Errorf("duration: %v: seconds out of range", d)
}
if d.Nanos <= -1e9 || d.Nanos >= 1e9 {
return fmt.Errorf("duration: %v: nanos out of range", d)
}
// Seconds and Nanos must have the same sign, unless d.Nanos is zero.
if (d.Seconds < 0 && d.Nanos > 0) || (d.Seconds > 0 && d.Nanos < 0) {
return fmt.Errorf("duration: %v: seconds and nanos have different signs", d)
}
return nil
}
// Duration converts a durpb.Duration to a time.Duration. Duration
// returns an error if the durpb.Duration is invalid or is too large to be
// represented in a time.Duration.
func Duration(p *durpb.Duration) (time.Duration, error) {
if err := validateDuration(p); err != nil {
return 0, err
}
d := time.Duration(p.Seconds) * time.Second
if int64(d/time.Second) != p.Seconds {
return 0, fmt.Errorf("duration: %v is out of range for time.Duration", p)
}
if p.Nanos != 0 {
d += time.Duration(p.Nanos)
if (d < 0) != (p.Nanos < 0) {
return 0, fmt.Errorf("duration: %v is out of range for time.Duration", p)
}
}
return d, nil
}
// DurationProto converts a time.Duration to a durpb.Duration.
func DurationProto(d time.Duration) *durpb.Duration {
nanos := d.Nanoseconds()
secs := nanos / 1e9
nanos -= secs * 1e9
return &durpb.Duration{
Seconds: secs,
Nanos: int32(nanos),
}
}
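A short sketch of the round trip between time.Duration and durpb.Duration, assuming nothing beyond the two helpers above; it is not part of the vendored file.
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/golang/protobuf/ptypes"
)

func main() {
	// Go duration -> proto duration.
	p := ptypes.DurationProto(90*time.Second + 250*time.Millisecond)
	fmt.Println(p.Seconds, p.Nanos) // 90 250000000

	// Proto duration -> Go duration, with range validation.
	d, err := ptypes.Duration(p)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(d) // 1m30.25s
}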

114
vendor/github.com/golang/protobuf/ptypes/duration/duration.pb.go generated vendored Normal file
View File

@ -0,0 +1,114 @@
// Code generated by protoc-gen-go.
// source: github.com/golang/protobuf/ptypes/duration/duration.proto
// DO NOT EDIT!
/*
Package duration is a generated protocol buffer package.
It is generated from these files:
github.com/golang/protobuf/ptypes/duration/duration.proto
It has these top-level messages:
Duration
*/
package duration
import proto "github.com/golang/protobuf/proto"
import fmt "fmt"
import math "math"
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package
// A Duration represents a signed, fixed-length span of time represented
// as a count of seconds and fractions of seconds at nanosecond
// resolution. It is independent of any calendar and concepts like "day"
// or "month". It is related to Timestamp in that the difference between
// two Timestamp values is a Duration and it can be added or subtracted
// from a Timestamp. Range is approximately +-10,000 years.
//
// Example 1: Compute Duration from two Timestamps in pseudo code.
//
// Timestamp start = ...;
// Timestamp end = ...;
// Duration duration = ...;
//
// duration.seconds = end.seconds - start.seconds;
// duration.nanos = end.nanos - start.nanos;
//
// if (duration.seconds < 0 && duration.nanos > 0) {
// duration.seconds += 1;
// duration.nanos -= 1000000000;
// } else if (duration.seconds > 0 && duration.nanos < 0) {
// duration.seconds -= 1;
// duration.nanos += 1000000000;
// }
//
// Example 2: Compute Timestamp from Timestamp + Duration in pseudo code.
//
// Timestamp start = ...;
// Duration duration = ...;
// Timestamp end = ...;
//
// end.seconds = start.seconds + duration.seconds;
// end.nanos = start.nanos + duration.nanos;
//
// if (end.nanos < 0) {
// end.seconds -= 1;
// end.nanos += 1000000000;
// } else if (end.nanos >= 1000000000) {
// end.seconds += 1;
// end.nanos -= 1000000000;
// }
//
//
type Duration struct {
// Signed seconds of the span of time. Must be from -315,576,000,000
// to +315,576,000,000 inclusive.
Seconds int64 `protobuf:"varint,1,opt,name=seconds" json:"seconds,omitempty"`
// Signed fractions of a second at nanosecond resolution of the span
// of time. Durations less than one second are represented with a 0
// `seconds` field and a positive or negative `nanos` field. For durations
// of one second or more, a non-zero value for the `nanos` field must be
// of the same sign as the `seconds` field. Must be from -999,999,999
// to +999,999,999 inclusive.
Nanos int32 `protobuf:"varint,2,opt,name=nanos" json:"nanos,omitempty"`
}
func (m *Duration) Reset() { *m = Duration{} }
func (m *Duration) String() string { return proto.CompactTextString(m) }
func (*Duration) ProtoMessage() {}
func (*Duration) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
func (*Duration) XXX_WellKnownType() string { return "Duration" }
func init() {
proto.RegisterType((*Duration)(nil), "google.protobuf.Duration")
}
func init() {
proto.RegisterFile("github.com/golang/protobuf/ptypes/duration/duration.proto", fileDescriptor0)
}
var fileDescriptor0 = []byte{
// 189 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xe2, 0xb2, 0x4c, 0xcf, 0x2c, 0xc9,
0x28, 0x4d, 0xd2, 0x4b, 0xce, 0xcf, 0xd5, 0x4f, 0xcf, 0xcf, 0x49, 0xcc, 0x4b, 0xd7, 0x2f, 0x28,
0xca, 0x2f, 0xc9, 0x4f, 0x2a, 0x4d, 0xd3, 0x2f, 0x28, 0xa9, 0x2c, 0x48, 0x2d, 0xd6, 0x4f, 0x29,
0x2d, 0x4a, 0x2c, 0xc9, 0xcc, 0xcf, 0x83, 0x33, 0xf4, 0xc0, 0x2a, 0x84, 0xf8, 0xd3, 0xf3, 0xf3,
0xd3, 0x73, 0x52, 0xf5, 0x60, 0xea, 0x95, 0xac, 0xb8, 0x38, 0x5c, 0xa0, 0x4a, 0x84, 0x24, 0xb8,
0xd8, 0x8b, 0x53, 0x93, 0xf3, 0xf3, 0x52, 0x8a, 0x25, 0x18, 0x15, 0x18, 0x35, 0x98, 0x83, 0x60,
0x5c, 0x21, 0x11, 0x2e, 0xd6, 0xbc, 0xc4, 0xbc, 0xfc, 0x62, 0x09, 0x26, 0x05, 0x46, 0x0d, 0xd6,
0x20, 0x08, 0xc7, 0xa9, 0x86, 0x4b, 0x38, 0x39, 0x3f, 0x57, 0x0f, 0xcd, 0x48, 0x27, 0x5e, 0x98,
0x81, 0x01, 0x20, 0x91, 0x00, 0xc6, 0x28, 0x2d, 0xe2, 0xdd, 0xbb, 0x80, 0x91, 0x71, 0x11, 0x13,
0xb3, 0x7b, 0x80, 0xd3, 0x2a, 0x26, 0x39, 0x77, 0x88, 0xb9, 0x01, 0x50, 0xa5, 0x7a, 0xe1, 0xa9,
0x39, 0x39, 0xde, 0x79, 0xf9, 0xe5, 0x79, 0x21, 0x20, 0x2d, 0x49, 0x6c, 0x60, 0x33, 0x8c, 0x01,
0x01, 0x00, 0x00, 0xff, 0xff, 0x62, 0xfb, 0xb1, 0x51, 0x0e, 0x01, 0x00, 0x00,
}

69
vendor/github.com/golang/protobuf/ptypes/empty/empty.pb.go generated vendored Normal file
View File

@ -0,0 +1,69 @@
// Code generated by protoc-gen-go.
// source: github.com/golang/protobuf/ptypes/empty/empty.proto
// DO NOT EDIT!
/*
Package empty is a generated protocol buffer package.
It is generated from these files:
github.com/golang/protobuf/ptypes/empty/empty.proto
It has these top-level messages:
Empty
*/
package empty
import proto "github.com/golang/protobuf/proto"
import fmt "fmt"
import math "math"
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package
// A generic empty message that you can re-use to avoid defining duplicated
// empty messages in your APIs. A typical example is to use it as the request
// or the response type of an API method. For instance:
//
// service Foo {
// rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty);
// }
//
// The JSON representation for `Empty` is empty JSON object `{}`.
type Empty struct {
}
func (m *Empty) Reset() { *m = Empty{} }
func (m *Empty) String() string { return proto.CompactTextString(m) }
func (*Empty) ProtoMessage() {}
func (*Empty) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
func (*Empty) XXX_WellKnownType() string { return "Empty" }
func init() {
proto.RegisterType((*Empty)(nil), "google.protobuf.Empty")
}
func init() {
proto.RegisterFile("github.com/golang/protobuf/ptypes/empty/empty.proto", fileDescriptor0)
}
var fileDescriptor0 = []byte{
// 150 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xe2, 0x32, 0x4e, 0xcf, 0x2c, 0xc9,
0x28, 0x4d, 0xd2, 0x4b, 0xce, 0xcf, 0xd5, 0x4f, 0xcf, 0xcf, 0x49, 0xcc, 0x4b, 0xd7, 0x2f, 0x28,
0xca, 0x2f, 0xc9, 0x4f, 0x2a, 0x4d, 0xd3, 0x2f, 0x28, 0xa9, 0x2c, 0x48, 0x2d, 0xd6, 0x4f, 0xcd,
0x2d, 0x28, 0xa9, 0x84, 0x90, 0x7a, 0x60, 0x39, 0x21, 0xfe, 0xf4, 0xfc, 0xfc, 0xf4, 0x9c, 0x54,
0x3d, 0x98, 0x4a, 0x25, 0x76, 0x2e, 0x56, 0x57, 0x90, 0xbc, 0x53, 0x25, 0x97, 0x70, 0x72, 0x7e,
0xae, 0x1e, 0x9a, 0xbc, 0x13, 0x17, 0x58, 0x36, 0x00, 0xc4, 0x0d, 0x60, 0x8c, 0x52, 0x27, 0xd2,
0xce, 0x05, 0x8c, 0x8c, 0x3f, 0x18, 0x19, 0x17, 0x31, 0x31, 0xbb, 0x07, 0x38, 0xad, 0x62, 0x92,
0x73, 0x87, 0x18, 0x1a, 0x00, 0x55, 0xaa, 0x17, 0x9e, 0x9a, 0x93, 0xe3, 0x9d, 0x97, 0x5f, 0x9e,
0x17, 0x02, 0xd2, 0x92, 0xc4, 0x06, 0x36, 0xc3, 0x18, 0x10, 0x00, 0x00, 0xff, 0xff, 0x7f, 0xbb,
0xf4, 0x0e, 0xd2, 0x00, 0x00, 0x00,
}

382
vendor/github.com/golang/protobuf/ptypes/struct/struct.pb.go generated vendored Normal file
View File

@ -0,0 +1,382 @@
// Code generated by protoc-gen-go.
// source: github.com/golang/protobuf/ptypes/struct/struct.proto
// DO NOT EDIT!
/*
Package structpb is a generated protocol buffer package.
It is generated from these files:
github.com/golang/protobuf/ptypes/struct/struct.proto
It has these top-level messages:
Struct
Value
ListValue
*/
package structpb
import proto "github.com/golang/protobuf/proto"
import fmt "fmt"
import math "math"
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package
// `NullValue` is a singleton enumeration to represent the null value for the
// `Value` type union.
//
// The JSON representation for `NullValue` is JSON `null`.
type NullValue int32
const (
// Null value.
NullValue_NULL_VALUE NullValue = 0
)
var NullValue_name = map[int32]string{
0: "NULL_VALUE",
}
var NullValue_value = map[string]int32{
"NULL_VALUE": 0,
}
func (x NullValue) String() string {
return proto.EnumName(NullValue_name, int32(x))
}
func (NullValue) EnumDescriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
func (NullValue) XXX_WellKnownType() string { return "NullValue" }
// `Struct` represents a structured data value, consisting of fields
// which map to dynamically typed values. In some languages, `Struct`
// might be supported by a native representation. For example, in
// scripting languages like JS a struct is represented as an
// object. The details of that representation are described together
// with the proto support for the language.
//
// The JSON representation for `Struct` is JSON object.
type Struct struct {
// Unordered map of dynamically typed values.
Fields map[string]*Value `protobuf:"bytes,1,rep,name=fields" json:"fields,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
}
func (m *Struct) Reset() { *m = Struct{} }
func (m *Struct) String() string { return proto.CompactTextString(m) }
func (*Struct) ProtoMessage() {}
func (*Struct) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
func (*Struct) XXX_WellKnownType() string { return "Struct" }
func (m *Struct) GetFields() map[string]*Value {
if m != nil {
return m.Fields
}
return nil
}
// `Value` represents a dynamically typed value which can be either
// null, a number, a string, a boolean, a recursive struct value, or a
// list of values. A producer of value is expected to set one of those
// variants; absence of any variant indicates an error.
//
// The JSON representation for `Value` is JSON value.
type Value struct {
// The kind of value.
//
// Types that are valid to be assigned to Kind:
// *Value_NullValue
// *Value_NumberValue
// *Value_StringValue
// *Value_BoolValue
// *Value_StructValue
// *Value_ListValue
Kind isValue_Kind `protobuf_oneof:"kind"`
}
func (m *Value) Reset() { *m = Value{} }
func (m *Value) String() string { return proto.CompactTextString(m) }
func (*Value) ProtoMessage() {}
func (*Value) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{1} }
func (*Value) XXX_WellKnownType() string { return "Value" }
type isValue_Kind interface {
isValue_Kind()
}
type Value_NullValue struct {
NullValue NullValue `protobuf:"varint,1,opt,name=null_value,json=nullValue,enum=google.protobuf.NullValue,oneof"`
}
type Value_NumberValue struct {
NumberValue float64 `protobuf:"fixed64,2,opt,name=number_value,json=numberValue,oneof"`
}
type Value_StringValue struct {
StringValue string `protobuf:"bytes,3,opt,name=string_value,json=stringValue,oneof"`
}
type Value_BoolValue struct {
BoolValue bool `protobuf:"varint,4,opt,name=bool_value,json=boolValue,oneof"`
}
type Value_StructValue struct {
StructValue *Struct `protobuf:"bytes,5,opt,name=struct_value,json=structValue,oneof"`
}
type Value_ListValue struct {
ListValue *ListValue `protobuf:"bytes,6,opt,name=list_value,json=listValue,oneof"`
}
func (*Value_NullValue) isValue_Kind() {}
func (*Value_NumberValue) isValue_Kind() {}
func (*Value_StringValue) isValue_Kind() {}
func (*Value_BoolValue) isValue_Kind() {}
func (*Value_StructValue) isValue_Kind() {}
func (*Value_ListValue) isValue_Kind() {}
func (m *Value) GetKind() isValue_Kind {
if m != nil {
return m.Kind
}
return nil
}
func (m *Value) GetNullValue() NullValue {
if x, ok := m.GetKind().(*Value_NullValue); ok {
return x.NullValue
}
return NullValue_NULL_VALUE
}
func (m *Value) GetNumberValue() float64 {
if x, ok := m.GetKind().(*Value_NumberValue); ok {
return x.NumberValue
}
return 0
}
func (m *Value) GetStringValue() string {
if x, ok := m.GetKind().(*Value_StringValue); ok {
return x.StringValue
}
return ""
}
func (m *Value) GetBoolValue() bool {
if x, ok := m.GetKind().(*Value_BoolValue); ok {
return x.BoolValue
}
return false
}
func (m *Value) GetStructValue() *Struct {
if x, ok := m.GetKind().(*Value_StructValue); ok {
return x.StructValue
}
return nil
}
func (m *Value) GetListValue() *ListValue {
if x, ok := m.GetKind().(*Value_ListValue); ok {
return x.ListValue
}
return nil
}
// XXX_OneofFuncs is for the internal use of the proto package.
func (*Value) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) {
return _Value_OneofMarshaler, _Value_OneofUnmarshaler, _Value_OneofSizer, []interface{}{
(*Value_NullValue)(nil),
(*Value_NumberValue)(nil),
(*Value_StringValue)(nil),
(*Value_BoolValue)(nil),
(*Value_StructValue)(nil),
(*Value_ListValue)(nil),
}
}
func _Value_OneofMarshaler(msg proto.Message, b *proto.Buffer) error {
m := msg.(*Value)
// kind
switch x := m.Kind.(type) {
case *Value_NullValue:
b.EncodeVarint(1<<3 | proto.WireVarint)
b.EncodeVarint(uint64(x.NullValue))
case *Value_NumberValue:
b.EncodeVarint(2<<3 | proto.WireFixed64)
b.EncodeFixed64(math.Float64bits(x.NumberValue))
case *Value_StringValue:
b.EncodeVarint(3<<3 | proto.WireBytes)
b.EncodeStringBytes(x.StringValue)
case *Value_BoolValue:
t := uint64(0)
if x.BoolValue {
t = 1
}
b.EncodeVarint(4<<3 | proto.WireVarint)
b.EncodeVarint(t)
case *Value_StructValue:
b.EncodeVarint(5<<3 | proto.WireBytes)
if err := b.EncodeMessage(x.StructValue); err != nil {
return err
}
case *Value_ListValue:
b.EncodeVarint(6<<3 | proto.WireBytes)
if err := b.EncodeMessage(x.ListValue); err != nil {
return err
}
case nil:
default:
return fmt.Errorf("Value.Kind has unexpected type %T", x)
}
return nil
}
func _Value_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) {
m := msg.(*Value)
switch tag {
case 1: // kind.null_value
if wire != proto.WireVarint {
return true, proto.ErrInternalBadWireType
}
x, err := b.DecodeVarint()
m.Kind = &Value_NullValue{NullValue(x)}
return true, err
case 2: // kind.number_value
if wire != proto.WireFixed64 {
return true, proto.ErrInternalBadWireType
}
x, err := b.DecodeFixed64()
m.Kind = &Value_NumberValue{math.Float64frombits(x)}
return true, err
case 3: // kind.string_value
if wire != proto.WireBytes {
return true, proto.ErrInternalBadWireType
}
x, err := b.DecodeStringBytes()
m.Kind = &Value_StringValue{x}
return true, err
case 4: // kind.bool_value
if wire != proto.WireVarint {
return true, proto.ErrInternalBadWireType
}
x, err := b.DecodeVarint()
m.Kind = &Value_BoolValue{x != 0}
return true, err
case 5: // kind.struct_value
if wire != proto.WireBytes {
return true, proto.ErrInternalBadWireType
}
msg := new(Struct)
err := b.DecodeMessage(msg)
m.Kind = &Value_StructValue{msg}
return true, err
case 6: // kind.list_value
if wire != proto.WireBytes {
return true, proto.ErrInternalBadWireType
}
msg := new(ListValue)
err := b.DecodeMessage(msg)
m.Kind = &Value_ListValue{msg}
return true, err
default:
return false, nil
}
}
func _Value_OneofSizer(msg proto.Message) (n int) {
m := msg.(*Value)
// kind
switch x := m.Kind.(type) {
case *Value_NullValue:
n += proto.SizeVarint(1<<3 | proto.WireVarint)
n += proto.SizeVarint(uint64(x.NullValue))
case *Value_NumberValue:
n += proto.SizeVarint(2<<3 | proto.WireFixed64)
n += 8
case *Value_StringValue:
n += proto.SizeVarint(3<<3 | proto.WireBytes)
n += proto.SizeVarint(uint64(len(x.StringValue)))
n += len(x.StringValue)
case *Value_BoolValue:
n += proto.SizeVarint(4<<3 | proto.WireVarint)
n += 1
case *Value_StructValue:
s := proto.Size(x.StructValue)
n += proto.SizeVarint(5<<3 | proto.WireBytes)
n += proto.SizeVarint(uint64(s))
n += s
case *Value_ListValue:
s := proto.Size(x.ListValue)
n += proto.SizeVarint(6<<3 | proto.WireBytes)
n += proto.SizeVarint(uint64(s))
n += s
case nil:
default:
panic(fmt.Sprintf("proto: unexpected type %T in oneof", x))
}
return n
}
// `ListValue` is a wrapper around a repeated field of values.
//
// The JSON representation for `ListValue` is JSON array.
type ListValue struct {
// Repeated field of dynamically typed values.
Values []*Value `protobuf:"bytes,1,rep,name=values" json:"values,omitempty"`
}
func (m *ListValue) Reset() { *m = ListValue{} }
func (m *ListValue) String() string { return proto.CompactTextString(m) }
func (*ListValue) ProtoMessage() {}
func (*ListValue) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{2} }
func (*ListValue) XXX_WellKnownType() string { return "ListValue" }
func (m *ListValue) GetValues() []*Value {
if m != nil {
return m.Values
}
return nil
}
func init() {
proto.RegisterType((*Struct)(nil), "google.protobuf.Struct")
proto.RegisterType((*Value)(nil), "google.protobuf.Value")
proto.RegisterType((*ListValue)(nil), "google.protobuf.ListValue")
proto.RegisterEnum("google.protobuf.NullValue", NullValue_name, NullValue_value)
}
func init() {
proto.RegisterFile("github.com/golang/protobuf/ptypes/struct/struct.proto", fileDescriptor0)
}
var fileDescriptor0 = []byte{
// 416 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0x8c, 0x92, 0x41, 0x8b, 0xd3, 0x40,
0x14, 0x80, 0x3b, 0xc9, 0x36, 0x98, 0x17, 0x59, 0x97, 0x11, 0xb4, 0xac, 0xa0, 0xa1, 0x7b, 0x09,
0x22, 0x09, 0x56, 0x04, 0x31, 0x5e, 0x0c, 0xac, 0xbb, 0x60, 0x58, 0x62, 0x74, 0x57, 0xf0, 0x52,
0x9a, 0x34, 0x8d, 0xa1, 0xd3, 0x99, 0x90, 0xcc, 0x28, 0x3d, 0xfa, 0x2f, 0x3c, 0x8a, 0x47, 0x8f,
0xfe, 0x42, 0x99, 0x99, 0x24, 0x4a, 0x4b, 0xc1, 0xd3, 0xf4, 0xbd, 0xf9, 0xde, 0x37, 0xef, 0xbd,
0x06, 0x9e, 0x97, 0x15, 0xff, 0x2c, 0x32, 0x3f, 0x67, 0x9b, 0xa0, 0x64, 0x64, 0x41, 0xcb, 0xa0,
0x6e, 0x18, 0x67, 0x99, 0x58, 0x05, 0x35, 0xdf, 0xd6, 0x45, 0x1b, 0xb4, 0xbc, 0x11, 0x39, 0xef,
0x0e, 0x5f, 0xdd, 0xe2, 0x3b, 0x25, 0x63, 0x25, 0x29, 0xfc, 0x9e, 0x9d, 0x7e, 0x47, 0x60, 0xbd,
0x57, 0x04, 0x0e, 0xc1, 0x5a, 0x55, 0x05, 0x59, 0xb6, 0x13, 0xe4, 0x9a, 0x9e, 0x33, 0x3b, 0xf3,
0x77, 0x60, 0x5f, 0x83, 0xfe, 0x1b, 0x45, 0x9d, 0x53, 0xde, 0x6c, 0xd3, 0xae, 0xe4, 0xf4, 0x1d,
0x38, 0xff, 0xa4, 0xf1, 0x09, 0x98, 0xeb, 0x62, 0x3b, 0x41, 0x2e, 0xf2, 0xec, 0x54, 0xfe, 0xc4,
0x4f, 0x60, 0xfc, 0x65, 0x41, 0x44, 0x31, 0x31, 0x5c, 0xe4, 0x39, 0xb3, 0x7b, 0x7b, 0xf2, 0x1b,
0x79, 0x9b, 0x6a, 0xe8, 0xa5, 0xf1, 0x02, 0x4d, 0x7f, 0x1b, 0x30, 0x56, 0x49, 0x1c, 0x02, 0x50,
0x41, 0xc8, 0x5c, 0x0b, 0xa4, 0xf4, 0x78, 0x76, 0xba, 0x27, 0xb8, 0x12, 0x84, 0x28, 0xfe, 0x72,
0x94, 0xda, 0xb4, 0x0f, 0xf0, 0x19, 0xdc, 0xa6, 0x62, 0x93, 0x15, 0xcd, 0xfc, 0xef, 0xfb, 0xe8,
0x72, 0x94, 0x3a, 0x3a, 0x3b, 0x40, 0x2d, 0x6f, 0x2a, 0x5a, 0x76, 0x90, 0x29, 0x1b, 0x97, 0x90,
0xce, 0x6a, 0xe8, 0x11, 0x40, 0xc6, 0x58, 0xdf, 0xc6, 0x91, 0x8b, 0xbc, 0x5b, 0xf2, 0x29, 0x99,
0xd3, 0xc0, 0x2b, 0x65, 0x11, 0x39, 0xef, 0x90, 0xb1, 0x1a, 0xf5, 0xfe, 0x81, 0x3d, 0x76, 0x7a,
0x91, 0xf3, 0x61, 0x4a, 0x52, 0xb5, 0x7d, 0xad, 0xa5, 0x6a, 0xf7, 0xa7, 0x8c, 0xab, 0x96, 0x0f,
0x53, 0x92, 0x3e, 0x88, 0x2c, 0x38, 0x5a, 0x57, 0x74, 0x39, 0x0d, 0xc1, 0x1e, 0x08, 0xec, 0x83,
0xa5, 0x64, 0xfd, 0x3f, 0x7a, 0x68, 0xe9, 0x1d, 0xf5, 0xf8, 0x01, 0xd8, 0xc3, 0x12, 0xf1, 0x31,
0xc0, 0xd5, 0x75, 0x1c, 0xcf, 0x6f, 0x5e, 0xc7, 0xd7, 0xe7, 0x27, 0xa3, 0xe8, 0x1b, 0x82, 0xbb,
0x39, 0xdb, 0xec, 0x2a, 0x22, 0x47, 0x4f, 0x93, 0xc8, 0x38, 0x41, 0x9f, 0x9e, 0xfe, 0xef, 0x87,
0x19, 0xea, 0xa3, 0xce, 0x7e, 0x20, 0xf4, 0xd3, 0x30, 0x2f, 0x92, 0xe8, 0x97, 0xf1, 0xf0, 0x42,
0xcb, 0x93, 0xbe, 0xbf, 0x8f, 0x05, 0x21, 0x6f, 0x29, 0xfb, 0x4a, 0x3f, 0xc8, 0xca, 0xcc, 0x52,
0xaa, 0x67, 0x7f, 0x02, 0x00, 0x00, 0xff, 0xff, 0xbc, 0xcf, 0x6d, 0x50, 0xfe, 0x02, 0x00, 0x00,
}
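A hedged sketch, not part of the vendored file, showing how the generated Struct and Value types above can be assembled by hand; the field names and values are arbitrary illustrations.
package main

import (
	"fmt"

	structpb "github.com/golang/protobuf/ptypes/struct"
)

func main() {
	// Each field wraps one of the Value oneof kinds defined above.
	s := &structpb.Struct{
		Fields: map[string]*structpb.Value{
			"name":    {Kind: &structpb.Value_StringValue{StringValue: "example"}},
			"retries": {Kind: &structpb.Value_NumberValue{NumberValue: 3}},
			"debug":   {Kind: &structpb.Value_BoolValue{BoolValue: false}},
			"extra":   {Kind: &structpb.Value_NullValue{NullValue: structpb.NullValue_NULL_VALUE}},
		},
	}
	fmt.Println(s.Fields["name"].GetStringValue())    // example
	fmt.Println(s.Fields["retries"].GetNumberValue()) // 3
}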

125
vendor/github.com/golang/protobuf/ptypes/timestamp.go generated vendored Normal file
View File

@ -0,0 +1,125 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2016 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package ptypes
// This file implements operations on google.protobuf.Timestamp.
import (
"errors"
"fmt"
"time"
tspb "github.com/golang/protobuf/ptypes/timestamp"
)
const (
// Seconds field of the earliest valid Timestamp.
// This is time.Date(1, 1, 1, 0, 0, 0, 0, time.UTC).Unix().
minValidSeconds = -62135596800
// Seconds field just after the latest valid Timestamp.
// This is time.Date(10000, 1, 1, 0, 0, 0, 0, time.UTC).Unix().
maxValidSeconds = 253402300800
)
// validateTimestamp determines whether a Timestamp is valid.
// A valid timestamp represents a time in the range
// [0001-01-01, 10000-01-01) and has a Nanos field
// in the range [0, 1e9).
//
// If the Timestamp is valid, validateTimestamp returns nil.
// Otherwise, it returns an error that describes
// the problem.
//
// Every valid Timestamp can be represented by a time.Time, but the converse is not true.
func validateTimestamp(ts *tspb.Timestamp) error {
if ts == nil {
return errors.New("timestamp: nil Timestamp")
}
if ts.Seconds < minValidSeconds {
return fmt.Errorf("timestamp: %v before 0001-01-01", ts)
}
if ts.Seconds >= maxValidSeconds {
return fmt.Errorf("timestamp: %v after 10000-01-01", ts)
}
if ts.Nanos < 0 || ts.Nanos >= 1e9 {
return fmt.Errorf("timestamp: %v: nanos not in range [0, 1e9)", ts)
}
return nil
}
// Timestamp converts a google.protobuf.Timestamp proto to a time.Time.
// It returns an error if the argument is invalid.
//
// Unlike most Go functions, if Timestamp returns an error, the first return value
// is not the zero time.Time. Instead, it is the value obtained from the
// time.Unix function when passed the contents of the Timestamp, in the UTC
// locale. This may or may not be a meaningful time; many invalid Timestamps
// do map to valid time.Times.
//
// A nil Timestamp returns an error. The first return value in that case is
// undefined.
func Timestamp(ts *tspb.Timestamp) (time.Time, error) {
// Don't return the zero value on error, because it corresponds to a valid
// timestamp. Instead return whatever time.Unix gives us.
var t time.Time
if ts == nil {
t = time.Unix(0, 0).UTC() // treat nil like the empty Timestamp
} else {
t = time.Unix(ts.Seconds, int64(ts.Nanos)).UTC()
}
return t, validateTimestamp(ts)
}
// TimestampProto converts the time.Time to a google.protobuf.Timestamp proto.
// It returns an error if the resulting Timestamp is invalid.
func TimestampProto(t time.Time) (*tspb.Timestamp, error) {
seconds := t.Unix()
nanos := int32(t.Sub(time.Unix(seconds, 0)))
ts := &tspb.Timestamp{
Seconds: seconds,
Nanos: nanos,
}
if err := validateTimestamp(ts); err != nil {
return nil, err
}
return ts, nil
}
// TimestampString returns the RFC 3339 string for valid Timestamps. For invalid
// Timestamps, it returns an error message in parentheses.
func TimestampString(ts *tspb.Timestamp) string {
t, err := Timestamp(ts)
if err != nil {
return fmt.Sprintf("(%v)", err)
}
return t.Format(time.RFC3339Nano)
}
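A hedged sketch of the conversions above, not part of the vendored file; it assumes only the helpers defined in this package.
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/golang/protobuf/ptypes"
)

func main() {
	now := time.Now()

	// time.Time -> proto Timestamp, validated against the allowed range.
	ts, err := ptypes.TimestampProto(now)
	if err != nil {
		log.Fatal(err)
	}

	// proto Timestamp -> time.Time, always returned in UTC.
	back, err := ptypes.Timestamp(ts)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(back.Equal(now))            // true; nanosecond precision is kept
	fmt.Println(ptypes.TimestampString(ts)) // RFC 3339 form
}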

127
vendor/github.com/golang/protobuf/ptypes/timestamp/timestamp.pb.go generated vendored Normal file
View File

@ -0,0 +1,127 @@
// Code generated by protoc-gen-go.
// source: github.com/golang/protobuf/ptypes/timestamp/timestamp.proto
// DO NOT EDIT!
/*
Package timestamp is a generated protocol buffer package.
It is generated from these files:
github.com/golang/protobuf/ptypes/timestamp/timestamp.proto
It has these top-level messages:
Timestamp
*/
package timestamp
import proto "github.com/golang/protobuf/proto"
import fmt "fmt"
import math "math"
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package
// A Timestamp represents a point in time independent of any time zone
// or calendar, represented as seconds and fractions of seconds at
// nanosecond resolution in UTC Epoch time. It is encoded using the
// Proleptic Gregorian Calendar which extends the Gregorian calendar
// backwards to year one. It is encoded assuming all minutes are 60
// seconds long, i.e. leap seconds are "smeared" so that no leap second
// table is needed for interpretation. Range is from
// 0001-01-01T00:00:00Z to 9999-12-31T23:59:59.999999999Z.
// By restricting to that range, we ensure that we can convert to
// and from RFC 3339 date strings.
// See [https://www.ietf.org/rfc/rfc3339.txt](https://www.ietf.org/rfc/rfc3339.txt).
//
// Example 1: Compute Timestamp from POSIX `time()`.
//
// Timestamp timestamp;
// timestamp.set_seconds(time(NULL));
// timestamp.set_nanos(0);
//
// Example 2: Compute Timestamp from POSIX `gettimeofday()`.
//
// struct timeval tv;
// gettimeofday(&tv, NULL);
//
// Timestamp timestamp;
// timestamp.set_seconds(tv.tv_sec);
// timestamp.set_nanos(tv.tv_usec * 1000);
//
// Example 3: Compute Timestamp from Win32 `GetSystemTimeAsFileTime()`.
//
// FILETIME ft;
// GetSystemTimeAsFileTime(&ft);
// UINT64 ticks = (((UINT64)ft.dwHighDateTime) << 32) | ft.dwLowDateTime;
//
// // A Windows tick is 100 nanoseconds. Windows epoch 1601-01-01T00:00:00Z
// // is 11644473600 seconds before Unix epoch 1970-01-01T00:00:00Z.
// Timestamp timestamp;
// timestamp.set_seconds((INT64) ((ticks / 10000000) - 11644473600LL));
// timestamp.set_nanos((INT32) ((ticks % 10000000) * 100));
//
// Example 4: Compute Timestamp from Java `System.currentTimeMillis()`.
//
// long millis = System.currentTimeMillis();
//
// Timestamp timestamp = Timestamp.newBuilder().setSeconds(millis / 1000)
// .setNanos((int) ((millis % 1000) * 1000000)).build();
//
//
// Example 5: Compute Timestamp from current time in Python.
//
// now = time.time()
// seconds = int(now)
// nanos = int((now - seconds) * 10**9)
// timestamp = Timestamp(seconds=seconds, nanos=nanos)
//
//
type Timestamp struct {
// Represents seconds of UTC time since Unix epoch
// 1970-01-01T00:00:00Z. Must be from 0001-01-01T00:00:00Z to
// 9999-12-31T23:59:59Z inclusive.
Seconds int64 `protobuf:"varint,1,opt,name=seconds" json:"seconds,omitempty"`
// Non-negative fractions of a second at nanosecond resolution. Negative
// second values with fractions must still have non-negative nanos values
// that count forward in time. Must be from 0 to 999,999,999
// inclusive.
Nanos int32 `protobuf:"varint,2,opt,name=nanos" json:"nanos,omitempty"`
}
func (m *Timestamp) Reset() { *m = Timestamp{} }
func (m *Timestamp) String() string { return proto.CompactTextString(m) }
func (*Timestamp) ProtoMessage() {}
func (*Timestamp) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
func (*Timestamp) XXX_WellKnownType() string { return "Timestamp" }
func init() {
proto.RegisterType((*Timestamp)(nil), "google.protobuf.Timestamp")
}
func init() {
proto.RegisterFile("github.com/golang/protobuf/ptypes/timestamp/timestamp.proto", fileDescriptor0)
}
var fileDescriptor0 = []byte{
// 194 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xe2, 0xb2, 0x4e, 0xcf, 0x2c, 0xc9,
0x28, 0x4d, 0xd2, 0x4b, 0xce, 0xcf, 0xd5, 0x4f, 0xcf, 0xcf, 0x49, 0xcc, 0x4b, 0xd7, 0x2f, 0x28,
0xca, 0x2f, 0xc9, 0x4f, 0x2a, 0x4d, 0xd3, 0x2f, 0x28, 0xa9, 0x2c, 0x48, 0x2d, 0xd6, 0x2f, 0xc9,
0xcc, 0x4d, 0x2d, 0x2e, 0x49, 0xcc, 0x2d, 0x40, 0xb0, 0xf4, 0xc0, 0x6a, 0x84, 0xf8, 0xd3, 0xf3,
0xf3, 0xd3, 0x73, 0x52, 0xf5, 0x60, 0x3a, 0x94, 0xac, 0xb9, 0x38, 0x43, 0x60, 0x6a, 0x84, 0x24,
0xb8, 0xd8, 0x8b, 0x53, 0x93, 0xf3, 0xf3, 0x52, 0x8a, 0x25, 0x18, 0x15, 0x18, 0x35, 0x98, 0x83,
0x60, 0x5c, 0x21, 0x11, 0x2e, 0xd6, 0xbc, 0xc4, 0xbc, 0xfc, 0x62, 0x09, 0x26, 0x05, 0x46, 0x0d,
0xd6, 0x20, 0x08, 0xc7, 0xa9, 0x91, 0x91, 0x4b, 0x38, 0x39, 0x3f, 0x57, 0x0f, 0xcd, 0x50, 0x27,
0x3e, 0xb8, 0x91, 0x01, 0x20, 0xa1, 0x00, 0xc6, 0x28, 0x6d, 0x12, 0x1c, 0xbd, 0x80, 0x91, 0xf1,
0x07, 0x23, 0xe3, 0x22, 0x26, 0x66, 0xf7, 0x00, 0xa7, 0x55, 0x4c, 0x72, 0xee, 0x10, 0xc3, 0x03,
0xa0, 0xca, 0xf5, 0xc2, 0x53, 0x73, 0x72, 0xbc, 0xf3, 0xf2, 0xcb, 0xf3, 0x42, 0x40, 0xda, 0x92,
0xd8, 0xc0, 0xe6, 0x18, 0x03, 0x02, 0x00, 0x00, 0xff, 0xff, 0x17, 0x5f, 0xb7, 0xdc, 0x17, 0x01,
0x00, 0x00,
}

200
vendor/github.com/golang/protobuf/ptypes/wrappers/wrappers.pb.go generated vendored Normal file
View File

@ -0,0 +1,200 @@
// Code generated by protoc-gen-go.
// source: github.com/golang/protobuf/ptypes/wrappers/wrappers.proto
// DO NOT EDIT!
/*
Package wrappers is a generated protocol buffer package.
It is generated from these files:
github.com/golang/protobuf/ptypes/wrappers/wrappers.proto
It has these top-level messages:
DoubleValue
FloatValue
Int64Value
UInt64Value
Int32Value
UInt32Value
BoolValue
StringValue
BytesValue
*/
package wrappers
import proto "github.com/golang/protobuf/proto"
import fmt "fmt"
import math "math"
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package
// Wrapper message for `double`.
//
// The JSON representation for `DoubleValue` is JSON number.
type DoubleValue struct {
// The double value.
Value float64 `protobuf:"fixed64,1,opt,name=value" json:"value,omitempty"`
}
func (m *DoubleValue) Reset() { *m = DoubleValue{} }
func (m *DoubleValue) String() string { return proto.CompactTextString(m) }
func (*DoubleValue) ProtoMessage() {}
func (*DoubleValue) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
func (*DoubleValue) XXX_WellKnownType() string { return "DoubleValue" }
// Wrapper message for `float`.
//
// The JSON representation for `FloatValue` is JSON number.
type FloatValue struct {
// The float value.
Value float32 `protobuf:"fixed32,1,opt,name=value" json:"value,omitempty"`
}
func (m *FloatValue) Reset() { *m = FloatValue{} }
func (m *FloatValue) String() string { return proto.CompactTextString(m) }
func (*FloatValue) ProtoMessage() {}
func (*FloatValue) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{1} }
func (*FloatValue) XXX_WellKnownType() string { return "FloatValue" }
// Wrapper message for `int64`.
//
// The JSON representation for `Int64Value` is JSON string.
type Int64Value struct {
// The int64 value.
Value int64 `protobuf:"varint,1,opt,name=value" json:"value,omitempty"`
}
func (m *Int64Value) Reset() { *m = Int64Value{} }
func (m *Int64Value) String() string { return proto.CompactTextString(m) }
func (*Int64Value) ProtoMessage() {}
func (*Int64Value) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{2} }
func (*Int64Value) XXX_WellKnownType() string { return "Int64Value" }
// Wrapper message for `uint64`.
//
// The JSON representation for `UInt64Value` is JSON string.
type UInt64Value struct {
// The uint64 value.
Value uint64 `protobuf:"varint,1,opt,name=value" json:"value,omitempty"`
}
func (m *UInt64Value) Reset() { *m = UInt64Value{} }
func (m *UInt64Value) String() string { return proto.CompactTextString(m) }
func (*UInt64Value) ProtoMessage() {}
func (*UInt64Value) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{3} }
func (*UInt64Value) XXX_WellKnownType() string { return "UInt64Value" }
// Wrapper message for `int32`.
//
// The JSON representation for `Int32Value` is JSON number.
type Int32Value struct {
// The int32 value.
Value int32 `protobuf:"varint,1,opt,name=value" json:"value,omitempty"`
}
func (m *Int32Value) Reset() { *m = Int32Value{} }
func (m *Int32Value) String() string { return proto.CompactTextString(m) }
func (*Int32Value) ProtoMessage() {}
func (*Int32Value) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{4} }
func (*Int32Value) XXX_WellKnownType() string { return "Int32Value" }
// Wrapper message for `uint32`.
//
// The JSON representation for `UInt32Value` is JSON number.
type UInt32Value struct {
// The uint32 value.
Value uint32 `protobuf:"varint,1,opt,name=value" json:"value,omitempty"`
}
func (m *UInt32Value) Reset() { *m = UInt32Value{} }
func (m *UInt32Value) String() string { return proto.CompactTextString(m) }
func (*UInt32Value) ProtoMessage() {}
func (*UInt32Value) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{5} }
func (*UInt32Value) XXX_WellKnownType() string { return "UInt32Value" }
// Wrapper message for `bool`.
//
// The JSON representation for `BoolValue` is JSON `true` and `false`.
type BoolValue struct {
// The bool value.
Value bool `protobuf:"varint,1,opt,name=value" json:"value,omitempty"`
}
func (m *BoolValue) Reset() { *m = BoolValue{} }
func (m *BoolValue) String() string { return proto.CompactTextString(m) }
func (*BoolValue) ProtoMessage() {}
func (*BoolValue) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{6} }
func (*BoolValue) XXX_WellKnownType() string { return "BoolValue" }
// Wrapper message for `string`.
//
// The JSON representation for `StringValue` is JSON string.
type StringValue struct {
// The string value.
Value string `protobuf:"bytes,1,opt,name=value" json:"value,omitempty"`
}
func (m *StringValue) Reset() { *m = StringValue{} }
func (m *StringValue) String() string { return proto.CompactTextString(m) }
func (*StringValue) ProtoMessage() {}
func (*StringValue) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{7} }
func (*StringValue) XXX_WellKnownType() string { return "StringValue" }
// Wrapper message for `bytes`.
//
// The JSON representation for `BytesValue` is JSON string.
type BytesValue struct {
// The bytes value.
Value []byte `protobuf:"bytes,1,opt,name=value,proto3" json:"value,omitempty"`
}
func (m *BytesValue) Reset() { *m = BytesValue{} }
func (m *BytesValue) String() string { return proto.CompactTextString(m) }
func (*BytesValue) ProtoMessage() {}
func (*BytesValue) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{8} }
func (*BytesValue) XXX_WellKnownType() string { return "BytesValue" }
func init() {
proto.RegisterType((*DoubleValue)(nil), "google.protobuf.DoubleValue")
proto.RegisterType((*FloatValue)(nil), "google.protobuf.FloatValue")
proto.RegisterType((*Int64Value)(nil), "google.protobuf.Int64Value")
proto.RegisterType((*UInt64Value)(nil), "google.protobuf.UInt64Value")
proto.RegisterType((*Int32Value)(nil), "google.protobuf.Int32Value")
proto.RegisterType((*UInt32Value)(nil), "google.protobuf.UInt32Value")
proto.RegisterType((*BoolValue)(nil), "google.protobuf.BoolValue")
proto.RegisterType((*StringValue)(nil), "google.protobuf.StringValue")
proto.RegisterType((*BytesValue)(nil), "google.protobuf.BytesValue")
}
func init() {
proto.RegisterFile("github.com/golang/protobuf/ptypes/wrappers/wrappers.proto", fileDescriptor0)
}
var fileDescriptor0 = []byte{
// 260 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xe2, 0xb2, 0x4c, 0xcf, 0x2c, 0xc9,
0x28, 0x4d, 0xd2, 0x4b, 0xce, 0xcf, 0xd5, 0x4f, 0xcf, 0xcf, 0x49, 0xcc, 0x4b, 0xd7, 0x2f, 0x28,
0xca, 0x2f, 0xc9, 0x4f, 0x2a, 0x4d, 0xd3, 0x2f, 0x28, 0xa9, 0x2c, 0x48, 0x2d, 0xd6, 0x2f, 0x2f,
0x4a, 0x2c, 0x28, 0x48, 0x2d, 0x42, 0x30, 0xf4, 0xc0, 0x2a, 0x84, 0xf8, 0xd3, 0xf3, 0xf3, 0xd3,
0x73, 0x52, 0xf5, 0x60, 0xea, 0x95, 0x94, 0xb9, 0xb8, 0x5d, 0xf2, 0x4b, 0x93, 0x72, 0x52, 0xc3,
0x12, 0x73, 0x4a, 0x53, 0x85, 0x44, 0xb8, 0x58, 0xcb, 0x40, 0x0c, 0x09, 0x46, 0x05, 0x46, 0x0d,
0xc6, 0x20, 0x08, 0x47, 0x49, 0x89, 0x8b, 0xcb, 0x2d, 0x27, 0x3f, 0xb1, 0x04, 0x8b, 0x1a, 0x26,
0x24, 0x35, 0x9e, 0x79, 0x25, 0x66, 0x26, 0x58, 0xd4, 0x30, 0xc3, 0xd4, 0x28, 0x73, 0x71, 0x87,
0xe2, 0x52, 0xc4, 0x82, 0x6a, 0x90, 0xb1, 0x11, 0x16, 0x35, 0xac, 0x68, 0x06, 0x61, 0x55, 0xc4,
0x0b, 0x53, 0xa4, 0xc8, 0xc5, 0xe9, 0x94, 0x9f, 0x9f, 0x83, 0x45, 0x09, 0x07, 0x92, 0x39, 0xc1,
0x25, 0x45, 0x99, 0x79, 0xe9, 0x58, 0x14, 0x71, 0x22, 0x39, 0xc8, 0xa9, 0xb2, 0x24, 0xb5, 0x18,
0x8b, 0x1a, 0x1e, 0xa8, 0x1a, 0xa7, 0x7a, 0x2e, 0xe1, 0xe4, 0xfc, 0x5c, 0x3d, 0xb4, 0xd0, 0x75,
0xe2, 0x0d, 0x87, 0x06, 0x7f, 0x00, 0x48, 0x24, 0x80, 0x31, 0x4a, 0x8b, 0xf8, 0xa8, 0x5b, 0xc0,
0xc8, 0xf8, 0x83, 0x91, 0x71, 0x11, 0x13, 0xb3, 0x7b, 0x80, 0xd3, 0x2a, 0x26, 0x39, 0x77, 0x88,
0xd1, 0x01, 0x50, 0xd5, 0x7a, 0xe1, 0xa9, 0x39, 0x39, 0xde, 0x79, 0xf9, 0xe5, 0x79, 0x21, 0x20,
0x5d, 0x49, 0x6c, 0x60, 0x63, 0x8c, 0x01, 0x01, 0x00, 0x00, 0xff, 0xff, 0xa9, 0xdf, 0x64, 0x4b,
0x1c, 0x02, 0x00, 0x00,
}
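The wrapper messages above exist so callers can distinguish an unset scalar from its zero value, and the proto3 JSON mapping renders each one as a bare scalar. A minimal sketch of that behaviour, assuming the jsonpb package from github.com/golang/protobuf is used for JSON encoding (the output shown in comments is illustrative):

package main

import (
    "fmt"

    "github.com/golang/protobuf/jsonpb"
    "github.com/golang/protobuf/ptypes/wrappers"
)

func main() {
    m := &jsonpb.Marshaler{}

    // A wrapped string marshals as a plain JSON string rather than an object.
    s, err := m.MarshalToString(&wrappers.StringValue{Value: "hello"})
    fmt.Println(s, err) // "hello" <nil>

    // Per the proto3 JSON mapping, a wrapped int64 marshals as a JSON string.
    n, err := m.MarshalToString(&wrappers.Int64Value{Value: 42})
    fmt.Println(n, err) // "42" <nil>
}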

27
vendor/github.com/googleapis/gax-go/LICENSE generated vendored Normal file
View File

@ -0,0 +1,27 @@
Copyright 2016, Google Inc.
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

136
vendor/github.com/googleapis/gax-go/call_option.go generated vendored Normal file
View File

@ -0,0 +1,136 @@
// Copyright 2016, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package gax
import (
"math/rand"
"time"
"google.golang.org/grpc"
"google.golang.org/grpc/codes"
)
// CallOption is an option used by Invoke to control behaviors of RPC calls.
// CallOption works by modifying relevant fields of CallSettings.
type CallOption interface {
// Resolve applies the option by modifying cs.
Resolve(cs *CallSettings)
}
// Retryer is used by Invoke to determine retry behavior.
type Retryer interface {
// Retry reports whether a request should be retried and how long to pause before retrying
// if the previous attempt returned with err. Invoke never calls Retry with nil error.
Retry(err error) (pause time.Duration, shouldRetry bool)
}
type retryerOption func() Retryer
func (o retryerOption) Resolve(s *CallSettings) {
s.Retry = o
}
// WithRetry sets CallSettings.Retry to fn.
func WithRetry(fn func() Retryer) CallOption {
return retryerOption(fn)
}
// OnCodes returns a Retryer that retries if and only if
// the previous attempt returns a GRPC error whose error code is stored in cc.
// Pause times between retries are specified by bo.
//
// bo is only used for its parameters; each Retryer has its own copy.
func OnCodes(cc []codes.Code, bo Backoff) Retryer {
return &boRetryer{
backoff: bo,
codes: append([]codes.Code(nil), cc...),
}
}
type boRetryer struct {
backoff Backoff
codes []codes.Code
}
func (r *boRetryer) Retry(err error) (time.Duration, bool) {
c := grpc.Code(err)
for _, rc := range r.codes {
if c == rc {
return r.backoff.Pause(), true
}
}
return 0, false
}
// Backoff implements exponential backoff.
// The wait time between retries is a random value between 0 and the "retry envelope".
// The envelope starts at Initial and increases by a factor of Multiplier on every retry,
// but is capped at Max.
type Backoff struct {
// Initial is the initial value of the retry envelope, defaults to 1 second.
Initial time.Duration
// Max is the maximum value of the retry envelope, defaults to 30 seconds.
Max time.Duration
// Multiplier is the factor by which the retry envelope increases.
// It should be greater than 1 and defaults to 2.
Multiplier float64
// cur is the current retry envelope
cur time.Duration
}
func (bo *Backoff) Pause() time.Duration {
if bo.Initial == 0 {
bo.Initial = time.Second
}
if bo.cur == 0 {
bo.cur = bo.Initial
}
if bo.Max == 0 {
bo.Max = 30 * time.Second
}
if bo.Multiplier < 1 {
bo.Multiplier = 2
}
d := time.Duration(rand.Int63n(int64(bo.cur)))
bo.cur = time.Duration(float64(bo.cur) * bo.Multiplier)
if bo.cur > bo.Max {
bo.cur = bo.Max
}
return d
}
type CallSettings struct {
// Retry returns a Retryer to be used to control retry logic of a method call.
// If Retry is nil or the returned Retryer is nil, the call will not be retried.
Retry func() Retryer
}
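As a hedged illustration of the retry options above (not part of the vendored file): the sketch below builds a Backoff, wraps it in a Retryer via OnCodes and WithRetry, and shows how the pause envelope grows on repeated Pause calls; the concrete durations are random by design.

package main

import (
    "fmt"
    "time"

    gax "github.com/googleapis/gax-go"
    "google.golang.org/grpc/codes"
)

func main() {
    bo := gax.Backoff{
        Initial:    100 * time.Millisecond,
        Max:        2 * time.Second,
        Multiplier: 2,
    }

    // Retry only on transient gRPC codes; OnCodes keeps its own copy of bo.
    opt := gax.WithRetry(func() gax.Retryer {
        return gax.OnCodes([]codes.Code{codes.Unavailable, codes.DeadlineExceeded}, bo)
    })
    _ = opt // would be passed to gax.Invoke(ctx, call, opt)

    // Each Pause is a random duration inside the current envelope; the envelope
    // doubles on every call until it is capped at Max.
    for i := 0; i < 4; i++ {
        fmt.Println(bo.Pause())
    }
}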

40
vendor/github.com/googleapis/gax-go/gax.go generated vendored Normal file
View File

@ -0,0 +1,40 @@
// Copyright 2016, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
// Package gax contains a set of modules which aid the development of APIs
// for clients and servers based on gRPC and Google API conventions.
//
// Application code will rarely need to use this library directly.
// However, code generated automatically from API definition files can use it
// to simplify code generation and to provide more convenient and idiomatic API surfaces.
//
// This project is currently experimental and not supported.
package gax
const Version = "0.1.0"

90
vendor/github.com/googleapis/gax-go/invoke.go generated vendored Normal file
View File

@ -0,0 +1,90 @@
// Copyright 2016, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package gax
import (
"time"
"golang.org/x/net/context"
)
// APICall is a user-defined call stub.
type APICall func(context.Context) error
// Invoke calls the given APICall,
// performing retries as specified by opts, if any.
func Invoke(ctx context.Context, call APICall, opts ...CallOption) error {
var settings CallSettings
for _, opt := range opts {
opt.Resolve(&settings)
}
return invoke(ctx, call, settings, Sleep)
}
// Sleep is similar to time.Sleep, but it can be interrupted by ctx.Done() closing.
// If interrupted, Sleep returns ctx.Err().
func Sleep(ctx context.Context, d time.Duration) error {
t := time.NewTimer(d)
select {
case <-ctx.Done():
t.Stop()
return ctx.Err()
case <-t.C:
return nil
}
}
type sleeper func(ctx context.Context, d time.Duration) error
// invoke implements Invoke, taking an additional sleeper argument for testing.
func invoke(ctx context.Context, call APICall, settings CallSettings, sp sleeper) error {
var retryer Retryer
for {
err := call(ctx)
if err == nil {
return nil
}
if settings.Retry == nil {
return err
}
if retryer == nil {
if r := settings.Retry(); r != nil {
retryer = r
} else {
return err
}
}
if d, ok := retryer.Retry(err); !ok {
return err
} else if err = sp(ctx, d); err != nil {
return err
}
}
}
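A usage sketch of Invoke under a context deadline; apiCall is a hypothetical stand-in for whatever RPC the generated client code would wrap, and the retry codes and durations are illustrative.

package main

import (
    "log"
    "time"

    gax "github.com/googleapis/gax-go"
    "golang.org/x/net/context"
    "google.golang.org/grpc/codes"
)

func main() {
    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()

    // apiCall stands in for a real RPC; Invoke re-runs it according to the options
    // and stops early if the context is cancelled while sleeping between attempts.
    apiCall := func(ctx context.Context) error {
        return nil // pretend the RPC succeeded
    }

    retry := gax.WithRetry(func() gax.Retryer {
        return gax.OnCodes([]codes.Code{codes.Unavailable}, gax.Backoff{
            Initial: 50 * time.Millisecond,
            Max:     time.Second,
        })
    })

    if err := gax.Invoke(ctx, apiCall, retry); err != nil {
        log.Fatal(err)
    }
}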

176
vendor/github.com/googleapis/gax-go/path_template.go generated vendored Normal file
View File

@ -0,0 +1,176 @@
// Copyright 2016, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package gax
import (
"errors"
"fmt"
"strings"
)
type matcher interface {
match([]string) (int, error)
String() string
}
type segment struct {
matcher
name string
}
type labelMatcher string
func (ls labelMatcher) match(segments []string) (int, error) {
if len(segments) == 0 {
return 0, fmt.Errorf("expected %s but no more segments found", ls)
}
if segments[0] != string(ls) {
return 0, fmt.Errorf("expected %s but got %s", ls, segments[0])
}
return 1, nil
}
func (ls labelMatcher) String() string {
return string(ls)
}
type wildcardMatcher int
func (wm wildcardMatcher) match(segments []string) (int, error) {
if len(segments) == 0 {
return 0, errors.New("no more segments found")
}
return 1, nil
}
func (wm wildcardMatcher) String() string {
return "*"
}
type pathWildcardMatcher int
func (pwm pathWildcardMatcher) match(segments []string) (int, error) {
length := len(segments) - int(pwm)
if length <= 0 {
return 0, errors.New("not sufficient segments are supplied for path wildcard")
}
return length, nil
}
func (pwm pathWildcardMatcher) String() string {
return "**"
}
type ParseError struct {
Pos int
Template string
Message string
}
func (pe ParseError) Error() string {
return fmt.Sprintf("at %d of template '%s', %s", pe.Pos, pe.Template, pe.Message)
}
// PathTemplate manages the template to build and match with paths used
// by API services. It holds a template and variable names in it, and
// it can extract matched patterns from a path string or build a path
// string from a binding.
//
// See http.proto in github.com/googleapis/googleapis/ for the details of
// the template syntax.
type PathTemplate struct {
segments []segment
}
// NewPathTemplate parses a path template, and returns a PathTemplate
// instance if successful.
func NewPathTemplate(template string) (*PathTemplate, error) {
return parsePathTemplate(template)
}
// MustCompilePathTemplate is like NewPathTemplate but panics if the
// template cannot be parsed. It simplifies safe initialization of
// global variables holding compiled path templates.
func MustCompilePathTemplate(template string) *PathTemplate {
pt, err := NewPathTemplate(template)
if err != nil {
panic(err)
}
return pt
}
// Match attempts to match the given path with the template, and returns
// the mapping of the variable name to the matched pattern string.
func (pt *PathTemplate) Match(path string) (map[string]string, error) {
paths := strings.Split(path, "/")
values := map[string]string{}
for _, segment := range pt.segments {
length, err := segment.match(paths)
if err != nil {
return nil, err
}
if segment.name != "" {
value := strings.Join(paths[:length], "/")
if oldValue, ok := values[segment.name]; ok {
values[segment.name] = oldValue + "/" + value
} else {
values[segment.name] = value
}
}
paths = paths[length:]
}
if len(paths) != 0 {
return nil, fmt.Errorf("Trailing path %s remains after the matching", strings.Join(paths, "/"))
}
return values, nil
}
// Render creates a path string from its template and the binding from
// the variable name to the value.
func (pt *PathTemplate) Render(binding map[string]string) (string, error) {
result := make([]string, 0, len(pt.segments))
var lastVariableName string
for _, segment := range pt.segments {
name := segment.name
if lastVariableName != "" && name == lastVariableName {
continue
}
lastVariableName = name
if name == "" {
result = append(result, segment.String())
} else if value, ok := binding[name]; ok {
result = append(result, value)
} else {
return "", fmt.Errorf("%s is not found", name)
}
}
built := strings.Join(result, "/")
return built, nil
}
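A brief sketch of the Match and Render API described above; the template and binding values are invented examples.

package main

import (
    "fmt"
    "log"

    gax "github.com/googleapis/gax-go"
)

func main() {
    pt := gax.MustCompilePathTemplate("shelves/{shelf}/books/{book}")

    // Match extracts variable bindings from a concrete path.
    vals, err := pt.Match("shelves/s1/books/b42")
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(vals["shelf"], vals["book"]) // s1 b42

    // Render rebuilds a path from a binding.
    path, err := pt.Render(map[string]string{"shelf": "s1", "book": "b42"})
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(path) // shelves/s1/books/b42
}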

View File

@ -0,0 +1,227 @@
// Copyright 2016, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package gax
import (
"fmt"
"io"
"strings"
)
// This parser follows the syntax of path templates, from
// https://github.com/googleapis/googleapis/blob/master/google/api/http.proto.
// The differences are that there is no custom verb, we allow the initial slash
// to be absent, and that we are not as strict as
// https://tools.ietf.org/html/rfc6570 about the characters allowed in
// identifiers and literals.
type pathTemplateParser struct {
r *strings.Reader
runeCount int // the number of the current rune in the original string
nextVar int // the number to use for the next unnamed variable
seenName map[string]bool // names we've seen already
seenPathWildcard bool // have we seen "**" already?
}
func parsePathTemplate(template string) (pt *PathTemplate, err error) {
p := &pathTemplateParser{
r: strings.NewReader(template),
seenName: map[string]bool{},
}
// Handle panics with strings like errors.
// See pathTemplateParser.error, below.
defer func() {
if x := recover(); x != nil {
errmsg, ok := x.(errString)
if !ok {
panic(x)
}
pt = nil
err = ParseError{p.runeCount, template, string(errmsg)}
}
}()
segs := p.template()
// If there is a path wildcard, set its length. We can't do this
// until we know how many segments we've got all together.
for i, seg := range segs {
if _, ok := seg.matcher.(pathWildcardMatcher); ok {
segs[i].matcher = pathWildcardMatcher(len(segs) - i - 1)
break
}
}
return &PathTemplate{segments: segs}, nil
}
// Used to indicate errors "thrown" by this parser. We don't use string because
// many parts of the standard library panic with strings.
type errString string
// Terminates parsing immediately with an error.
func (p *pathTemplateParser) error(msg string) {
panic(errString(msg))
}
// Template = [ "/" ] Segments
func (p *pathTemplateParser) template() []segment {
var segs []segment
if p.consume('/') {
// Initial '/' needs an initial empty matcher.
segs = append(segs, segment{matcher: labelMatcher("")})
}
return append(segs, p.segments("")...)
}
// Segments = Segment { "/" Segment }
func (p *pathTemplateParser) segments(name string) []segment {
var segs []segment
for {
subsegs := p.segment(name)
segs = append(segs, subsegs...)
if !p.consume('/') {
break
}
}
return segs
}
// Segment = "*" | "**" | LITERAL | Variable
func (p *pathTemplateParser) segment(name string) []segment {
if p.consume('*') {
if name == "" {
name = fmt.Sprintf("$%d", p.nextVar)
p.nextVar++
}
if p.consume('*') {
if p.seenPathWildcard {
p.error("multiple '**' disallowed")
}
p.seenPathWildcard = true
// We'll change 0 to the right number at the end.
return []segment{{name: name, matcher: pathWildcardMatcher(0)}}
}
return []segment{{name: name, matcher: wildcardMatcher(0)}}
}
if p.consume('{') {
if name != "" {
p.error("recursive named bindings are not allowed")
}
return p.variable()
}
return []segment{{name: name, matcher: labelMatcher(p.literal())}}
}
// Variable = "{" FieldPath [ "=" Segments ] "}"
// "{" is already consumed.
func (p *pathTemplateParser) variable() []segment {
// Simplification: treat FieldPath as LITERAL, instead of IDENT { '.' IDENT }
name := p.literal()
if p.seenName[name] {
p.error(name + " appears multiple times")
}
p.seenName[name] = true
var segs []segment
if p.consume('=') {
segs = p.segments(name)
} else {
// "{var}" is equivalent to "{var=*}"
segs = []segment{{name: name, matcher: wildcardMatcher(0)}}
}
if !p.consume('}') {
p.error("expected '}'")
}
return segs
}
// A literal is any sequence of characters other than a few special ones.
// The list of stop characters is not quite the same as in the template RFC.
func (p *pathTemplateParser) literal() string {
lit := p.consumeUntil("/*}{=")
if lit == "" {
p.error("empty literal")
}
return lit
}
// Read runes until EOF or one of the runes in stopRunes is encountered.
// If the latter, unread the stop rune. Return the accumulated runes as a string.
func (p *pathTemplateParser) consumeUntil(stopRunes string) string {
var runes []rune
for {
r, ok := p.readRune()
if !ok {
break
}
if strings.IndexRune(stopRunes, r) >= 0 {
p.unreadRune()
break
}
runes = append(runes, r)
}
return string(runes)
}
// If the next rune is r, consume it and return true.
// Otherwise, leave the input unchanged and return false.
func (p *pathTemplateParser) consume(r rune) bool {
rr, ok := p.readRune()
if !ok {
return false
}
if r == rr {
return true
}
p.unreadRune()
return false
}
// Read the next rune from the input. Return it.
// The second return value is false at EOF.
func (p *pathTemplateParser) readRune() (rune, bool) {
r, _, err := p.r.ReadRune()
if err == io.EOF {
return r, false
}
if err != nil {
p.error(err.Error())
}
p.runeCount++
return r, true
}
// Put the last rune that was read back on the input.
func (p *pathTemplateParser) unreadRune() {
if err := p.r.UnreadRune(); err != nil {
p.error(err.Error())
}
p.runeCount--
}
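To make the grammar in the comments above concrete, a small sketch (with invented template strings) exercising the '**' path wildcard and the ParseError reporting:

package main

import (
    "fmt"

    gax "github.com/googleapis/gax-go"
)

func main() {
    // "**" matches all remaining segments; the bound value keeps its slashes.
    pt := gax.MustCompilePathTemplate("buckets/{bucket}/objects/{object=**}")
    vals, _ := pt.Match("buckets/b1/objects/photos/2016/cat.png")
    fmt.Println(vals["bucket"], vals["object"]) // b1 photos/2016/cat.png

    // An unterminated variable is reported as a ParseError with its position.
    if _, err := gax.NewPathTemplate("buckets/{bucket"); err != nil {
        fmt.Println(err) // at 15 of template 'buckets/{bucket', expected '}'
    }
}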

20
vendor/github.com/nbutton23/zxcvbn-go/LICENSE.txt generated vendored Normal file
View File

@ -0,0 +1,20 @@
Copyright (c) Nathan Button
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

View File

@ -0,0 +1,96 @@
package adjacency
import (
"encoding/json"
"log"
// "fmt"
"github.com/nbutton23/zxcvbn-go/data"
)
type AdjacencyGraph struct {
Graph map[string][]string
averageDegree float64
Name string
}
var AdjacencyGph = make(map[string]AdjacencyGraph)
func init() {
AdjacencyGph["qwerty"] = BuildQwerty()
AdjacencyGph["dvorak"] = BuildDvorak()
AdjacencyGph["keypad"] = BuildKeypad()
AdjacencyGph["macKeypad"] = BuildMacKeypad()
AdjacencyGph["l33t"] = BuildLeet()
}
func BuildQwerty() AdjacencyGraph {
data, err := zxcvbn_data.Asset("data/Qwerty.json")
if err != nil {
panic("Can't find asset")
}
return GetAdjancencyGraphFromFile(data, "qwerty")
}
func BuildDvorak() AdjacencyGraph {
data, err := zxcvbn_data.Asset("data/Dvorak.json")
if err != nil {
panic("Can't find asset")
}
return GetAdjancencyGraphFromFile(data, "dvorak")
}
func BuildKeypad() AdjacencyGraph {
data, err := zxcvbn_data.Asset("data/Keypad.json")
if err != nil {
panic("Can't find asset")
}
return GetAdjancencyGraphFromFile(data, "keypad")
}
func BuildMacKeypad() AdjacencyGraph {
data, err := zxcvbn_data.Asset("data/MacKeypad.json")
if err != nil {
panic("Can't find asset")
}
return GetAdjancencyGraphFromFile(data, "mac_keypad")
}
func BuildLeet() AdjacencyGraph {
data, err := zxcvbn_data.Asset("data/L33t.json")
if err != nil {
panic("Can't find asset")
}
return GetAdjancencyGraphFromFile(data, "keypad")
}
func GetAdjancencyGraphFromFile(data []byte, name string) AdjacencyGraph {
var graph AdjacencyGraph
err := json.Unmarshal(data, &graph)
if err != nil {
log.Fatal(err)
}
graph.Name = name
return graph
}
//on qwerty, 'g' has degree 6, being adjacent to 'ftyhbv'. '\' has degree 1.
//this calculates the average over all keys.
//TODO: double-check that this was ported correctly from scoring.coffee line 5
func (adjGrp AdjacencyGraph) CalculateAvgDegree() float64 {
if adjGrp.averageDegree != float64(0) {
return adjGrp.averageDegree
}
var avg float64
var count float64
for _, value := range adjGrp.Graph {
for _, char := range value {
if char != "" && char != " " {
avg += float64(len(char))
count++
}
}
}
adjGrp.averageDegree = avg / count
return adjGrp.averageDegree
}
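A hedged illustration of the average-degree calculation above, using the graphs built from the embedded JSON assets; the printed values depend entirely on that data.

package main

import (
    "fmt"

    "github.com/nbutton23/zxcvbn-go/adjacency"
)

func main() {
    // The graphs are built from the embedded JSON assets at package init time.
    qwerty := adjacency.AdjacencyGph["qwerty"]
    keypad := adjacency.AdjacencyGph["keypad"]

    fmt.Println("qwerty keys:", len(qwerty.Graph), "avg degree:", qwerty.CalculateAvgDegree())
    fmt.Println("keypad keys:", len(keypad.Graph), "avg degree:", keypad.CalculateAvgDegree())
}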

444
vendor/github.com/nbutton23/zxcvbn-go/data/bindata.go generated vendored Normal file

File diff suppressed because one or more lines are too long

View File

@ -0,0 +1,215 @@
package entropy
import (
"github.com/nbutton23/zxcvbn-go/adjacency"
"github.com/nbutton23/zxcvbn-go/match"
"github.com/nbutton23/zxcvbn-go/utils/math"
"math"
"regexp"
"unicode"
)
const (
START_UPPER string = `^[A-Z][^A-Z]+$`
END_UPPER string = `^[^A-Z]+[A-Z]$`
ALL_UPPER string = `^[A-Z]+$`
NUM_YEARS = float64(119) // years match against 1900 - 2019
NUM_MONTHS = float64(12)
NUM_DAYS = float64(31)
)
var (
KEYPAD_STARTING_POSITIONS = len(adjacency.AdjacencyGph["keypad"].Graph)
KEYPAD_AVG_DEGREE = adjacency.AdjacencyGph["keypad"].CalculateAvgDegree()
)
func DictionaryEntropy(match match.Match, rank float64) float64 {
baseEntropy := math.Log2(rank)
upperCaseEntropy := extraUpperCaseEntropy(match)
//TODO: L33t
return baseEntropy + upperCaseEntropy
}
func extraUpperCaseEntropy(match match.Match) float64 {
word := match.Token
allLower := true
for _, char := range word {
if unicode.IsUpper(char) {
allLower = false
break
}
}
if allLower {
return float64(0)
}
//a capitalized word is the most common capitalization scheme,
//so it only doubles the search space (uncapitalized + capitalized): 1 extra bit of entropy.
//allcaps and end-capitalized are common enough too, underestimate as 1 extra bit to be safe.
for _, regex := range []string{START_UPPER, END_UPPER, ALL_UPPER} {
matcher := regexp.MustCompile(regex)
if matcher.MatchString(word) {
return float64(1)
}
}
//Otherwise calculate the number of ways to capitalize U+L uppercase+lowercase letters with U uppercase letters or
//less. Or, if there's more uppercase than lower (e.g. PASSwORD), the number of ways to lowercase U+L letters
//with L lowercase letters or less.
countUpper, countLower := float64(0), float64(0)
for _, char := range word {
if unicode.IsUpper(char) {
countUpper++
} else if unicode.IsLower(char) {
countLower++
}
}
totalLength := countLower + countUpper
var possibilities float64
for i := float64(0); i <= math.Min(countUpper, countLower); i++ {
possibilities += float64(zxcvbn_math.NChoseK(totalLength, i))
}
if possibilities < 1 {
return float64(1)
}
return float64(math.Log2(possibilities))
}
func SpatialEntropy(match match.Match, turns int, shiftCount int) float64 {
var s, d float64
if match.DictionaryName == "qwerty" || match.DictionaryName == "dvorak" {
//todo: verify qwerty and dvorak have the same length and degree
s = float64(len(adjacency.BuildQwerty().Graph))
d = adjacency.BuildQwerty().CalculateAvgDegree()
} else {
s = float64(KEYPAD_STARTING_POSITIONS)
d = KEYPAD_AVG_DEGREE
}
possibilities := float64(0)
length := float64(len(match.Token))
//TODO: Should this be <= or just < ?
//Estimate the number of possible patterns w/ length L or less with t turns or less
for i := float64(2); i <= length+1; i++ {
possibleTurns := math.Min(float64(turns), i-1)
for j := float64(1); j <= possibleTurns+1; j++ {
x := zxcvbn_math.NChoseK(i-1, j-1) * s * math.Pow(d, j)
possibilities += x
}
}
entropy := math.Log2(possibilities)
//add extra entropy for shifted keys (e.g. % instead of 5, A instead of a).
//Math is similar to extra entropy for uppercase letters in dictionary matches.
if S := float64(shiftCount); S > float64(0) {
possibilities = float64(0)
U := length - S
for i := float64(0); i < math.Min(S, U)+1; i++ {
possibilities += zxcvbn_math.NChoseK(S+U, i)
}
entropy += math.Log2(possibilities)
}
return entropy
}
func RepeatEntropy(match match.Match) float64 {
cardinality := CalcBruteForceCardinality(match.Token)
entropy := math.Log2(cardinality * float64(len(match.Token)))
return entropy
}
//TODO: Validate against python
func CalcBruteForceCardinality(password string) float64 {
lower, upper, digits, symbols := float64(0), float64(0), float64(0), float64(0)
for _, char := range password {
if unicode.IsLower(char) {
lower = float64(26)
} else if unicode.IsDigit(char) {
digits = float64(10)
} else if unicode.IsUpper(char) {
upper = float64(26)
} else {
symbols = float64(33)
}
}
cardinality := lower + upper + digits + symbols
return cardinality
}
func SequenceEntropy(match match.Match, dictionaryLength int, ascending bool) float64 {
firstChar := match.Token[0]
baseEntropy := float64(0)
if string(firstChar) == "a" || string(firstChar) == "1" {
baseEntropy = float64(0)
} else {
baseEntropy = math.Log2(float64(dictionaryLength))
//TODO: should this be just the first or any char?
if unicode.IsUpper(rune(firstChar)) {
baseEntropy++
}
}
if !ascending {
baseEntropy++
}
return baseEntropy + math.Log2(float64(len(match.Token)))
}
func ExtraLeetEntropy(match match.Match, password string) float64 {
var substitutions float64
var unsub float64
subPassword := password[match.I:match.J]
for index, char := range subPassword {
if string(char) != string(match.Token[index]) {
substitutions++
} else {
//TODO: Make this only true for 1337 chars that are not subs?
unsub++
}
}
var possibilities float64
for i := float64(0); i <= math.Min(substitutions, unsub)+1; i++ {
possibilities += zxcvbn_math.NChoseK(substitutions+unsub, i)
}
if possibilities <= 1 {
return float64(1)
}
return math.Log2(possibilities)
}
func YearEntropy(dateMatch match.DateMatch) float64 {
return math.Log2(NUM_YEARS)
}
func DateEntropy(dateMatch match.DateMatch) float64 {
var entropy float64
if dateMatch.Year < 100 {
entropy = math.Log2(NUM_DAYS * NUM_MONTHS * 100)
} else {
entropy = math.Log2(NUM_DAYS * NUM_MONTHS * NUM_YEARS)
}
if dateMatch.Separator != "" {
entropy += 2 //add two bits for separator selection [/,-,.,etc]
}
return entropy
}
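As a worked example of the capitalization reasoning above: a token such as "PASSword" has four uppercase and four lowercase letters, so the counted capitalization patterns are C(8,0)+...+C(8,4) = 163, roughly 7.3 extra bits on top of log2(rank). A sketch using the exported functions (the rank of 100 is an invented value):

package main

import (
    "fmt"

    "github.com/nbutton23/zxcvbn-go/entropy"
    "github.com/nbutton23/zxcvbn-go/match"
)

func main() {
    m := match.Match{Token: "PASSword", DictionaryName: "passwords"}

    // log2(100) is about 6.64 bits for the rank, plus roughly 7.35 bits for the
    // 163 ways to place up to four capitals among the eight letters.
    fmt.Println(entropy.DictionaryEntropy(m, 100))

    // Lowercase, uppercase and digits present: 26 + 26 + 10 = 62 symbols.
    fmt.Println(entropy.CalcBruteForceCardinality("Passw0rd"))
}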

View File

@ -0,0 +1,47 @@
package frequency
import (
"encoding/json"
"github.com/nbutton23/zxcvbn-go/data"
"log"
)
type FrequencyList struct {
Name string
List []string
}
var FrequencyLists = make(map[string]FrequencyList)
func init() {
maleFilePath := getAsset("data/MaleNames.json")
femaleFilePath := getAsset("data/FemaleNames.json")
surnameFilePath := getAsset("data/Surnames.json")
englishFilePath := getAsset("data/English.json")
passwordsFilePath := getAsset("data/Passwords.json")
FrequencyLists["MaleNames"] = GetStringListFromAsset(maleFilePath, "MaleNames")
FrequencyLists["FemaleNames"] = GetStringListFromAsset(femaleFilePath, "FemaleNames")
FrequencyLists["Surname"] = GetStringListFromAsset(surnameFilePath, "Surname")
FrequencyLists["English"] = GetStringListFromAsset(englishFilePath, "English")
FrequencyLists["Passwords"] = GetStringListFromAsset(passwordsFilePath, "Passwords")
}
func getAsset(name string) []byte {
data, err := zxcvbn_data.Asset(name)
if err != nil {
panic("Error getting asset " + name)
}
return data
}
func GetStringListFromAsset(data []byte, name string) FrequencyList {
var tempList FrequencyList
err := json.Unmarshal(data, &tempList)
if err != nil {
log.Fatal(err)
}
tempList.Name = name
return tempList
}

35
vendor/github.com/nbutton23/zxcvbn-go/match/match.go generated vendored Normal file
View File

@ -0,0 +1,35 @@
package match
type Matches []Match
func (s Matches) Len() int {
return len(s)
}
func (s Matches) Swap(i, j int) {
s[i], s[j] = s[j], s[i]
}
func (s Matches) Less(i, j int) bool {
if s[i].I < s[j].I {
return true
} else if s[i].I == s[j].I {
return s[i].J < s[j].J
} else {
return false
}
}
type Match struct {
Pattern string
I, J int
Token string
DictionaryName string
Entropy float64
}
type DateMatch struct {
Pattern string
I, J int
Token string
Separator string
Day, Month, Year int64
}

View File

@ -0,0 +1,189 @@
package matching
import (
"github.com/nbutton23/zxcvbn-go/entropy"
"github.com/nbutton23/zxcvbn-go/match"
"regexp"
"strconv"
"strings"
)
func checkDate(day, month, year int64) (bool, int64, int64, int64) {
if (12 <= month && month <= 31) && day <= 12 {
day, month = month, day
}
if day > 31 || month > 12 {
return false, 0, 0, 0
}
if !((1900 <= year && year <= 2019) || (0 <= year && year <= 99)) {
return false, 0, 0, 0
}
return true, day, month, year
}
func dateSepMatcher(password string) []match.Match {
dateMatches := dateSepMatchHelper(password)
var matches []match.Match
for _, dateMatch := range dateMatches {
match := match.Match{
I: dateMatch.I,
J: dateMatch.J,
Entropy: entropy.DateEntropy(dateMatch),
DictionaryName: "date_match",
Token: dateMatch.Token,
}
matches = append(matches, match)
}
return matches
}
func dateSepMatchHelper(password string) []match.DateMatch {
var matches []match.DateMatch
matcher := regexp.MustCompile(DATE_RX_YEAR_SUFFIX)
for _, v := range matcher.FindAllString(password, len(password)) {
splitV := matcher.FindAllStringSubmatch(v, len(v))
i := strings.Index(password, v)
j := i + len(v)
day, _ := strconv.ParseInt(splitV[0][4], 10, 16)
month, _ := strconv.ParseInt(splitV[0][2], 10, 16)
year, _ := strconv.ParseInt(splitV[0][6], 10, 16)
match := match.DateMatch{Day: day, Month: month, Year: year, Separator: splitV[0][5], I: i, J: j, Token: password[i:j]}
matches = append(matches, match)
}
matcher = regexp.MustCompile(DATE_RX_YEAR_PREFIX)
for _, v := range matcher.FindAllString(password, len(password)) {
splitV := matcher.FindAllStringSubmatch(v, len(v))
i := strings.Index(password, v)
j := i + len(v)
day, _ := strconv.ParseInt(splitV[0][4], 10, 16)
month, _ := strconv.ParseInt(splitV[0][6], 10, 16)
year, _ := strconv.ParseInt(splitV[0][2], 10, 16)
match := match.DateMatch{Day: day, Month: month, Year: year, Separator: splitV[0][5], I: i, J: j, Token: password[i:j]}
matches = append(matches, match)
}
var out []match.DateMatch
for _, match := range matches {
if valid, day, month, year := checkDate(match.Day, match.Month, match.Year); valid {
match.Pattern = "date"
match.Day = day
match.Month = month
match.Year = year
out = append(out, match)
}
}
return out
}
type DateMatchCandidate struct {
DayMonth string
Year string
I, J int
}
type DateMatchCandidateTwo struct {
Day string
Month string
Year string
I, J int
}
func dateWithoutSepMatch(password string) []match.Match {
dateMatches := dateWithoutSepMatchHelper(password)
var matches []match.Match
for _, dateMatch := range dateMatches {
match := match.Match{
I: dateMatch.I,
J: dateMatch.J,
Entropy: entropy.DateEntropy(dateMatch),
DictionaryName: "date_match",
Token: dateMatch.Token,
}
matches = append(matches, match)
}
return matches
}
//TODO Has issues with 6 digit dates
func dateWithoutSepMatchHelper(password string) (matches []match.DateMatch) {
matcher := regexp.MustCompile(DATE_WITHOUT_SEP_MATCH)
for _, v := range matcher.FindAllString(password, len(password)) {
i := strings.Index(password, v)
j := i + len(v)
length := len(v)
lastIndex := length - 1
var candidatesRoundOne []DateMatchCandidate
if length <= 6 {
//2-digit year prefix
candidatesRoundOne = append(candidatesRoundOne, buildDateMatchCandidate(v[2:], v[0:2], i, j))
//2-digit year suffix
candidatesRoundOne = append(candidatesRoundOne, buildDateMatchCandidate(v[0:lastIndex-2], v[lastIndex-2:], i, j))
}
if length >= 6 {
//4-digit year prefix
candidatesRoundOne = append(candidatesRoundOne, buildDateMatchCandidate(v[4:], v[0:4], i, j))
//4-digit year suffix
candidatesRoundOne = append(candidatesRoundOne, buildDateMatchCandidate(v[0:lastIndex-3], v[lastIndex-3:], i, j))
}
var candidatesRoundTwo []DateMatchCandidateTwo
for _, c := range candidatesRoundOne {
if len(c.DayMonth) == 2 {
candidatesRoundTwo = append(candidatesRoundTwo, buildDateMatchCandidateTwo(c.DayMonth[0:1], c.DayMonth[1:2], c.Year, c.I, c.J))
} else if len(c.DayMonth) == 3 {
candidatesRoundTwo = append(candidatesRoundTwo, buildDateMatchCandidateTwo(c.DayMonth[0:2], c.DayMonth[2:3], c.Year, c.I, c.J))
candidatesRoundTwo = append(candidatesRoundTwo, buildDateMatchCandidateTwo(c.DayMonth[0:1], c.DayMonth[1:3], c.Year, c.I, c.J))
} else if len(c.DayMonth) == 4 {
candidatesRoundTwo = append(candidatesRoundTwo, buildDateMatchCandidateTwo(c.DayMonth[0:2], c.DayMonth[2:4], c.Year, c.I, c.J))
}
}
for _, candidate := range candidatesRoundTwo {
intDay, err := strconv.ParseInt(candidate.Day, 10, 16)
if err != nil {
continue
}
intMonth, err := strconv.ParseInt(candidate.Month, 10, 16)
if err != nil {
continue
}
intYear, err := strconv.ParseInt(candidate.Year, 10, 16)
if err != nil {
continue
}
if ok, _, _, _ := checkDate(intDay, intMonth, intYear); ok {
matches = append(matches, match.DateMatch{Token: password[i:j], Pattern: "date", Day: intDay, Month: intMonth, Year: intYear, I: i, J: j})
}
}
}
return matches
}
func buildDateMatchCandidate(dayMonth, year string, i, j int) DateMatchCandidate {
return DateMatchCandidate{DayMonth: dayMonth, Year: year, I: i, J: j}
}
func buildDateMatchCandidateTwo(day, month string, year string, i, j int) DateMatchCandidateTwo {
return DateMatchCandidateTwo{Day: day, Month: month, Year: year, I: i, J: j}
}

View File

@ -0,0 +1,54 @@
package matching
import (
"github.com/nbutton23/zxcvbn-go/entropy"
"github.com/nbutton23/zxcvbn-go/match"
"strings"
)
func buildDictMatcher(dictName string, rankedDict map[string]int) func(password string) []match.Match {
return func(password string) []match.Match {
matches := dictionaryMatch(password, dictName, rankedDict)
for i := range matches {
matches[i].DictionaryName = dictName
}
return matches
}
}
func dictionaryMatch(password string, dictionaryName string, rankedDict map[string]int) []match.Match {
length := len(password)
var results []match.Match
pwLower := strings.ToLower(password)
for i := 0; i < length; i++ {
for j := i; j < length; j++ {
word := pwLower[i : j+1]
if val, ok := rankedDict[word]; ok {
matchDic := match.Match{Pattern: "dictionary",
DictionaryName: dictionaryName,
I: i,
J: j,
Token: password[i : j+1],
}
matchDic.Entropy = entropy.DictionaryEntropy(matchDic, float64(val))
results = append(results, matchDic)
}
}
}
return results
}
func buildRankedDict(unrankedList []string) map[string]int {
result := make(map[string]int)
for i, v := range unrankedList {
result[strings.ToLower(v)] = i + 1
}
return result
}

68
vendor/github.com/nbutton23/zxcvbn-go/matching/leet.go generated vendored Normal file
View File

@ -0,0 +1,68 @@
package matching
import (
"github.com/nbutton23/zxcvbn-go/entropy"
"github.com/nbutton23/zxcvbn-go/match"
"strings"
)
func l33tMatch(password string) []match.Match {
substitutions := relevantL33tSubtable(password)
permutations := getAllPermutationsOfLeetSubstitutions(password, substitutions)
var matches []match.Match
for _, permutation := range permutations {
for _, matcher := range DICTIONARY_MATCHERS {
matches = append(matches, matcher(permutation)...)
}
}
for i := range matches {
matches[i].Entropy += entropy.ExtraLeetEntropy(matches[i], password)
matches[i].DictionaryName = matches[i].DictionaryName + "_3117"
}
return matches
}
func getAllPermutationsOfLeetSubstitutions(password string, substitutionsMap map[string][]string) []string {
var permutations []string
for index, char := range password {
for value, splice := range substitutionsMap {
for _, sub := range splice {
if string(char) == sub {
var permutation string
permutation = password[:index] + value + password[index+1:]
permutations = append(permutations, permutation)
if index < len(permutation) {
tempPermutations := getAllPermutationsOfLeetSubstitutions(permutation[index+1:], substitutionsMap)
for _, temp := range tempPermutations {
permutations = append(permutations, permutation[:index+1]+temp)
}
}
}
}
}
}
return permutations
}
func relevantL33tSubtable(password string) map[string][]string {
relevantSubs := make(map[string][]string)
for key, values := range L33T_TABLE.Graph {
for _, value := range values {
if strings.Contains(password, value) {
relevantSubs[key] = append(relevantSubs[key], value)
}
}
}
return relevantSubs
}

View File

@ -0,0 +1,77 @@
package matching
import (
"github.com/nbutton23/zxcvbn-go/adjacency"
"github.com/nbutton23/zxcvbn-go/frequency"
"github.com/nbutton23/zxcvbn-go/match"
"sort"
)
var (
DICTIONARY_MATCHERS []func(password string) []match.Match
MATCHERS []func(password string) []match.Match
ADJACENCY_GRAPHS []adjacency.AdjacencyGraph
L33T_TABLE adjacency.AdjacencyGraph
SEQUENCES map[string]string
)
const (
DATE_RX_YEAR_SUFFIX string = `((\d{1,2})(\s|-|\/|\\|_|\.)(\d{1,2})(\s|-|\/|\\|_|\.)(19\d{2}|200\d|201\d|\d{2}))`
DATE_RX_YEAR_PREFIX string = `((19\d{2}|200\d|201\d|\d{2})(\s|-|/|\\|_|\.)(\d{1,2})(\s|-|/|\\|_|\.)(\d{1,2}))`
DATE_WITHOUT_SEP_MATCH string = `\d{4,8}`
)
func init() {
loadFrequencyList()
}
func Omnimatch(password string, userInputs []string) (matches []match.Match) {
//Can I run into the issue where nil is not equal to nil?
if DICTIONARY_MATCHERS == nil || ADJACENCY_GRAPHS == nil {
loadFrequencyList()
}
if userInputs != nil {
userInputMatcher := buildDictMatcher("user_inputs", buildRankedDict(userInputs))
matches = userInputMatcher(password)
}
for _, matcher := range MATCHERS {
matches = append(matches, matcher(password)...)
}
sort.Sort(match.Matches(matches))
return matches
}
func loadFrequencyList() {
for n, list := range frequency.FrequencyLists {
DICTIONARY_MATCHERS = append(DICTIONARY_MATCHERS, buildDictMatcher(n, buildRankedDict(list.List)))
}
L33T_TABLE = adjacency.AdjacencyGph["l33t"]
ADJACENCY_GRAPHS = append(ADJACENCY_GRAPHS, adjacency.AdjacencyGph["qwerty"])
ADJACENCY_GRAPHS = append(ADJACENCY_GRAPHS, adjacency.AdjacencyGph["dvorak"])
ADJACENCY_GRAPHS = append(ADJACENCY_GRAPHS, adjacency.AdjacencyGph["keypad"])
ADJACENCY_GRAPHS = append(ADJACENCY_GRAPHS, adjacency.AdjacencyGph["macKeypad"])
//l33tFilePath, _ := filepath.Abs("adjacency/L33t.json")
//L33T_TABLE = adjacency.GetAdjancencyGraphFromFile(l33tFilePath, "l33t")
SEQUENCES = make(map[string]string)
SEQUENCES["lower"] = "abcdefghijklmnopqrstuvwxyz"
SEQUENCES["upper"] = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
SEQUENCES["digits"] = "0123456789"
MATCHERS = append(MATCHERS, DICTIONARY_MATCHERS...)
MATCHERS = append(MATCHERS, spatialMatch)
MATCHERS = append(MATCHERS, repeatMatch)
MATCHERS = append(MATCHERS, sequenceMatch)
MATCHERS = append(MATCHERS, l33tMatch)
MATCHERS = append(MATCHERS, dateSepMatcher)
MATCHERS = append(MATCHERS, dateWithoutSepMatch)
}
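A hedged sketch of the exported Omnimatch entry point; the password and user inputs are invented, and the matches returned depend on the bundled frequency lists.

package main

import (
    "fmt"

    "github.com/nbutton23/zxcvbn-go/matching"
)

func main() {
    // userInputs lets callers down-rank site- or user-specific words.
    matches := matching.Omnimatch("p4ssword1991", []string{"example.com"})
    for _, m := range matches {
        fmt.Printf("%-10s %-14q %.2f bits\n", m.Pattern, m.Token, m.Entropy)
    }
}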

View File

@ -0,0 +1,59 @@
package matching
import (
"github.com/nbutton23/zxcvbn-go/entropy"
"github.com/nbutton23/zxcvbn-go/match"
"strings"
)
func repeatMatch(password string) []match.Match {
var matches []match.Match
//Loop through the password: if the current rune equals the previous one, extend the streak; when a streak longer than 2 ends, record a repeat match and reset the streak.
var current, prev string
currentStreak := 1
var i int
var char rune
for i, char = range password {
current = string(char)
if i == 0 {
prev = current
continue
}
if strings.ToLower(current) == strings.ToLower(prev) {
currentStreak++
} else if currentStreak > 2 {
iPos := i - currentStreak
jPos := i - 1
matchRepeat := match.Match{
Pattern: "repeat",
I: iPos,
J: jPos,
Token: password[iPos : jPos+1],
DictionaryName: prev}
matchRepeat.Entropy = entropy.RepeatEntropy(matchRepeat)
matches = append(matches, matchRepeat)
currentStreak = 1
} else {
currentStreak = 1
}
prev = current
}
if currentStreak > 2 {
iPos := i - currentStreak + 1
jPos := i
matchRepeat := match.Match{
Pattern: "repeat",
I: iPos,
J: jPos,
Token: password[iPos : jPos+1],
DictionaryName: prev}
matchRepeat.Entropy = entropy.RepeatEntropy(matchRepeat)
matches = append(matches, matchRepeat)
}
return matches
}

Some files were not shown because too many files have changed in this diff.