This commit is contained in:
Thomas Hargrove 2019-09-30 09:17:31 -07:00
Commit 932a45116b
No key found matching this signature
GPG key ID: 8629B6D11228B20C
112 changed files with 12384 additions and 0 deletions

6
.gitignore Vendored Normal file

@@ -0,0 +1,6 @@
.idea/
.DS_Store
/store/
/data/
coverage.out
sloop.iml

105
CODE_OF_CONDUCT.md Normal file

@@ -0,0 +1,105 @@
# Salesforce Open Source Community Code of Conduct
## About the Code of Conduct
Equality is a core value at Salesforce. We believe a diverse and inclusive
community fosters innovation and creativity, and are committed to building a
culture where everyone feels included.
Salesforce open-source projects are committed to providing a friendly, safe, and
welcoming environment for all, regardless of gender identity and expression,
sexual orientation, disability, physical appearance, body size, ethnicity, nationality,
race, age, religion, level of experience, education, socioeconomic status, or
other similar personal characteristics.
The goal of this code of conduct is to specify a baseline standard of behavior so
that people with different social values and communication styles can work
together effectively, productively, and respectfully in our open source community.
It also establishes a mechanism for reporting issues and resolving conflicts.
All questions and reports of abusive, harassing, or otherwise unacceptable behavior
in a Salesforce open-source project may be reported by contacting the Salesforce
Open Source Conduct Committee at ossconduct@salesforce.com.
## Our Pledge
In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to making participation in our project and
our community a harassment-free experience for everyone, regardless of gender
identity and expression, sexual orientation, disability, physical appearance,
body size, ethnicity, nationality, race, age, religion, level of experience, education,
socioeconomic status, or other similar personal characteristics.
## Our Standards
Examples of behavior that contributes to creating a positive environment
include:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy toward other community members
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery and unwelcome sexual attention or
advances
* Personal attacks, insulting/derogatory comments, or trolling
* Public or private harassment
* Publishing, or threatening to publish, others' private information—such as
a physical or electronic address—without explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
* Advocating for or encouraging any of the above behaviors
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable
behavior and are expected to take appropriate and fair corrective action in
response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned with this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.
## Scope
This Code of Conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community. Examples of
representing a project or community include using an official project email
address, posting via an official social media account, or acting as an appointed
representative at an online or offline event. Representation of a project may be
further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting the Salesforce Open Source Conduct Committee
at ossconduct@salesforce.com. All complaints will be reviewed and investigated
and will result in a response that is deemed necessary and appropriate to the
circumstances. The committee is obligated to maintain confidentiality with
regard to the reporter of an incident. Further details of specific enforcement
policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good
faith may face temporary or permanent repercussions as determined by other
members of the project's leadership and the Salesforce Open Source Conduct
Committee.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][contributor-covenant-home],
version 1.4, available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html.
It includes adaptations and additions from the [Go Community Code of Conduct][golang-coc],
[CNCF Code of Conduct][cncf-coc], and [Microsoft Open Source Code of Conduct][microsoft-coc].
This Code of Conduct is licensed under the [Creative Commons Attribution 3.0 License][cc-by-3-us].
[contributor-covenant-home]: https://www.contributor-covenant.org/
[golang-coc]: https://golang.org/conduct
[cncf-coc]: https://github.com/cncf/foundation/blob/master/code-of-conduct.md
[microsoft-coc]: https://opensource.microsoft.com/codeofconduct/
[cc-by-3-us]: https://creativecommons.org/licenses/by/3.0/us/

45
CONTRIBUTING.md Normal file

@@ -0,0 +1,45 @@
## Build
Sloop uses GitHub to manage reviews of pull requests.
## Steps to Contribute
ADD
## Pull Request Checklist
ADD
## Dependency Management
Sloop uses [go modules](https://golang.org/cmd/go/#hdr-Modules__module_versions__and_more).
This requires a working Go environment with version 1.13 or greater installed.
It is suggested that you set `export GO111MODULE=on`.
To add or update a dependency:
1. Use `go get` to pull in the new dependency
1. Run `go mod tidy`
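As a concrete sketch of those two steps (the module path below is just an example taken from go.mod; substitute the dependency you actually need):

```sh
$ # pull in (or upgrade) the dependency; module and version are examples
$ go get github.com/pkg/errors@v0.8.1
$ # prune go.mod and go.sum of anything no longer referenced
$ go mod tidy
```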
## Protobuf Schema Changes
When changing the schema in `pkg/sloop/store/typed/schema.proto` you will need to do the following:
1. Install protobuf. On OSX you can do `brew install protobuf`
1. Grab protoc-gen-go with `go get -u github.com/golang/protobuf/protoc-gen-go`
1. Run this makefile target: `make protobuf`
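Taken together, a full regeneration pass might look like this (OSX shown, matching the steps above; the final `git diff` is just a suggested way to review the result):

```sh
$ brew install protobuf
$ go get -u github.com/golang/protobuf/protoc-gen-go
$ make protobuf
$ git diff --stat pkg/sloop/store/typed/   # review the regenerated code
```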
## Changes to Generated Code
Sloop uses genny to code-gen typed table wrappers. Any changes to `pkg/sloop/store/typed/tabletemplate*.go` will need
to be followed with `go generate`. We have a Makefile target for this: `make generate`
## Prometheus
Sloop uses Prometheus to emit metrics, which is very helpful for performance debugging. In the root of the repo is a Prometheus config.
On OSX you can install Prometheus with `brew install prometheus`. Then start it from the sloop directory by running `prometheus`.
Open your browser to http://localhost:9090.
An example of a useful query is [rate(kubewatch_event_count[5m])](http://localhost:9090/graph?g0.range_input=1h&g0.expr=rate(kubewatch_event_count%5B1m%5D)&g0.tab=0)
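For reference, a minimal sketch of what such a scrape config could look like — the config actually shipped in the repo root may differ, and the job name and target port here are assumptions (based on sloop's default listen address of :8080):

```yaml
# Minimal sketch only; see the Prometheus config in the repo root for the real one.
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: sloop                     # job name is an assumption
    static_configs:
      - targets: ['localhost:8080']     # sloop's default listen address
```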

6
Dockerfile Normal file

@@ -0,0 +1,6 @@
FROM alpine:3.10
ADD sloop /bin/
# Place webfiles at the same relative path as in the repo, under the container
# root, which is the default working directory
ADD ./pkg/sloop/webfiles/ /pkg/sloop/webfiles/
CMD ["/bin/sloop"]

12
LICENSE.txt Normal file

@@ -0,0 +1,12 @@
Copyright (c) 2019 Salesforce.com, Inc.
All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

37
Makefile Normal file

@@ -0,0 +1,37 @@
.PHONY:perf perfasm
export GO111MODULE=on
all:
go get ./pkg/...
go fmt ./pkg/...
go install ./pkg/...
go test -cover ./pkg/...
run:
go install ./pkg/...
$(GOPATH)/bin/sloop
linux:
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go install -ldflags "-s" -installsuffix cgo -v ./pkg/...
docker: linux
cp $(GOPATH)/bin/linux_amd64/sloop .
docker build -t sloop .
rm sloop
generate:
go generate ./pkg/...
tidy:
# Run tidy whenever go.mod is changed
go mod tidy
protobuf:
# Make sure you `brew install protobuf` first
# go get -u github.com/golang/protobuf/protoc-gen-go
protoc -I=./pkg/sloop/store/typed/ --go_out=./pkg/sloop/store/typed/ ./pkg/sloop/store/typed/schema.proto
cover:
go test ./pkg/... -coverprofile=coverage.out
go tool cover -html=coverage.out

76
README.md Normal file

@@ -0,0 +1,76 @@
# sloop - Kubernetes History Visualization
Sloop monitors Kubernetes, recording histories of events and resource state changes
and providing visualizations to aid in debugging past events.
Key features:
1. Allows you to find and inspect resources that no longer exist (example: discover what host the pod from the previous deployment was using).
1. Provides timeline displays that show rollouts of related resources in updates to Deployments, ReplicaSets, and StatefulSets.
1. Helps debug transient and intermittent errors.
1. Allows you to see changes over time in a Kubernetes application.
1. Is a self-contained service with no dependencies on distributed storage.
## Screenshots
![Screenshot1](other/screenshot1.png?raw=true "Screenshot 1")
## Architecture Overview
![Architecture](other/architecture.png?raw=true "Architecture")
## Install
Sloop can be installed using any of these options:
### Precompiled Binaries
_DockerHub images coming soon._
### Helm Chart
_Helm chart coming soon._
### Build from Source
Building Sloop from source needs a working Go environment
with [version 1.13 or greater installed](https://golang.org/doc/install).
Clone the sloop repository and build using `make`:
```sh
$ mkdir -p $GOPATH/src/github.com/salesforce
$ cd $GOPATH/src/github.com/salesforce
$ git clone https://github.com/salesforce/sloop.git
$ cd sloop
$ make
$ ~/go/bin/sloop
```
When complete, you should have a running Sloop instance using the current context from your kubeconfig. Point your browser at http://localhost:8080/
Other makefile targets:
* *docker*: Builds a Docker image.
* *cover*: Runs unit tests with code coverage.
* *generate*: Updates genny templates for typed table classes.
* *protobuf*: Generates protobuf code-gen.
### Local Docker Run
To run from Docker you need to host mount your kubeconfig:
```sh
$ make docker
$ docker run --rm -it -p 8080:8080 -v ~/.kube/:/kube/ -e KUBECONFIG=/kube/config sloop
```
In this mode, data is written to a memory-backed volume and is discarded after each run. To preserve the data, you can host-mount /data with something like `-v /some_path_on_host/:/data/`
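For example, a run that keeps data between restarts might look like this (the host-side paths are examples; note that `docker run -v` takes `host_path:container_path`):

```sh
$ make docker
$ mkdir -p ~/sloop-data
$ docker run --rm -it -p 8080:8080 \
    -v ~/.kube/:/kube/ -e KUBECONFIG=/kube/config \
    -v ~/sloop-data/:/data/ \
    sloop
```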
## Contributing
Refer to [CONTRIBUTING.md](CONTRIBUTING.md)
## License
BSD 3-Clause

34
go.mod Normal file

@@ -0,0 +1,34 @@
module github.com/salesforce/sloop
go 1.12
require (
github.com/Jeffail/gabs/v2 v2.1.0
github.com/dgraph-io/badger v0.0.0-20190809121831-9d7b751e85c9
github.com/ghodss/yaml v1.0.0
github.com/gogo/protobuf v1.3.0
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b
github.com/golang/protobuf v1.3.2
github.com/google/go-cmp v0.3.1 // indirect
github.com/googleapis/gnostic v0.3.1 // indirect
github.com/hashicorp/golang-lru v0.5.3 // indirect
github.com/imdario/mergo v0.3.7 // indirect
github.com/nsf/jsondiff v0.0.0-20190712045011-8443391ee9b6
github.com/pkg/errors v0.8.1
github.com/prometheus/client_golang v1.1.0
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4 // indirect
github.com/prometheus/procfs v0.0.4 // indirect
github.com/spf13/afero v1.2.2
github.com/stretchr/testify v1.4.0
golang.org/x/crypto v0.0.0-20190911031432-227b76d455e7 // indirect
golang.org/x/net v0.0.0-20190909003024-a7b16738d86b // indirect
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45 // indirect
golang.org/x/sys v0.0.0-20190910064555-bbd175535a8b // indirect
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4 // indirect
google.golang.org/appengine v1.6.2 // indirect
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
k8s.io/api v0.0.0-20190905160310-fb749d2f1064 // indirect
k8s.io/client-go v0.0.0-20190620085101-78d2af792bab
k8s.io/utils v0.0.0-20190907131718-3d4f5b7dea0b // indirect
)

320
go.sum Normal file

@@ -0,0 +1,320 @@
cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
github.com/AndreasBriese/bbloom v0.0.0-20190306092124-e2d15f34fcf9 h1:HD8gA2tkByhMAwYaFAX9w2l7vxvBQ5NMoxDrkhqhtn4=
github.com/AndreasBriese/bbloom v0.0.0-20190306092124-e2d15f34fcf9/go.mod h1:bOvUY6CB00SOBii9/FifXqc0awNKxLFCL/+pkDPuyl8=
github.com/Azure/go-autorest v11.1.2+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/Jeffail/gabs/v2 v2.1.0 h1:6dV9GGOjoQgzWTQEltZPXlJdFloxvIq7DwqgxMCbq30=
github.com/Jeffail/gabs/v2 v2.1.0/go.mod h1:xCn81vdHKxFUuWWAaD5jCTQDNPBMh5pPs9IJ+NcziBI=
github.com/NYTimes/gziphandler v0.0.0-20170623195520-56545f4a5d46/go.mod h1:3wb06e3pkSAbeQ52E9H9iFoQsEEwGN64994WTCIhntQ=
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
github.com/OneOfOne/xxhash v1.2.5 h1:zl/OfRA6nftbBK9qTohYBJ5xvw6C/oNKizR7cZGl3cI=
github.com/OneOfOne/xxhash v1.2.5/go.mod h1:eZbhyaAYD41SGSSsnmcpxVoRiQ/MPUTjUdIIOT9Um7Q=
github.com/PuerkitoBio/purell v1.0.0/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0=
github.com/PuerkitoBio/urlesc v0.0.0-20160726150825-5bd2802263f2/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE=
github.com/VictoriaMetrics/fastcache v1.5.1/go.mod h1:+jv9Ckb+za/P1ZRg/sulP5Ni1v49daAVERr0H3CuscE=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/allegro/bigcache v1.2.1-0.20190218064605-e24eb225f156/go.mod h1:Cb/ax3seSYIx7SuZdm2G2xzfwmv3TPSk2ucNfQESPXM=
github.com/allegro/bigcache v1.2.1/go.mod h1:Cb/ax3seSYIx7SuZdm2G2xzfwmv3TPSk2ucNfQESPXM=
github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/cespare/xxhash v1.1.0 h1:a6HrQnmkObjyL+Gs60czilIUGqrzKutQD6XZog3p+ko=
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
github.com/cespare/xxhash/v2 v2.0.1-0.20190104013014-3767db7a7e18/go.mod h1:HD5P3vAIAh+Y2GAxg0PrPN1P8WkepXGpjbUPDHJqqKM=
github.com/coocood/freecache v1.1.0/go.mod h1:ePwxCDzOYvARfHdr1pByNct1at3CoKnsipOHwKlNbzI=
github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
github.com/coreos/go-etcd v2.0.0+incompatible/go.mod h1:Jez6KQU2B/sWsbdaef3ED8NzMklzPG4d5KIOhIy30Tk=
github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
github.com/cpuguy83/go-md2man v1.0.10/go.mod h1:SmD6nW6nTyfqj6ABTjUi3V3JVMnlJmwcJI5acqYI6dE=
github.com/davecgh/go-spew v0.0.0-20151105211317-5215b55f46b2/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dgraph-io/badger v0.0.0-20190809121831-9d7b751e85c9 h1:d1VXA/fah5BGgbquHuUlmH9tBCacAYMnFoDKShUQh74=
github.com/dgraph-io/badger v0.0.0-20190809121831-9d7b751e85c9/go.mod h1:zmuWBVEXFtixdGaO4wf/9iZh3AntlbA4pPmfpUHad1I=
github.com/dgraph-io/ristretto v0.0.0-20190801024210-18ba08fdea80 h1:ZVYvevH/zd9ygtRNosrnlGdvI6CEuUPwZ3EV0lfdGuM=
github.com/dgraph-io/ristretto v0.0.0-20190801024210-18ba08fdea80/go.mod h1:UvZmzj8odp3S1nli6yEb1vLME8iJFBrRcw8rAJEiu9Q=
github.com/dgrijalva/jwt-go v0.0.0-20160705203006-01aeca54ebda/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/dgryski/go-farm v0.0.0-20190423205320-6a90982ecee2 h1:tdlZCpZ/P9DhczCTSixgIKmwPv6+wP5DGjqLYw5SUiA=
github.com/dgryski/go-farm v0.0.0-20190423205320-6a90982ecee2/go.mod h1:SqUrOPUnsFjfmXRMNPybcSiG0BgUW2AuFH8PAnS2iTw=
github.com/docker/spdystream v0.0.0-20160310174837-449fdfce4d96/go.mod h1:Qh8CwZgvJUkLughtfhJv5dyTYa91l1fOUCrgjqmcifM=
github.com/docopt/docopt-go v0.0.0-20180111231733-ee0de3bc6815/go.mod h1:WwZ+bS3ebgob9U8Nd0kOddGdZWjyMGR8Wziv+TBNwSE=
github.com/dustin/go-humanize v1.0.0 h1:VSnTsYCnlFHaM2/igO1h6X3HA71jcobQuxemgkq4zYo=
github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
github.com/elazarl/goproxy v0.0.0-20170405201442-c4fc26588b6e/go.mod h1:/Zj4wYkgs4iZTTu3o/KG3Itv/qCCa8VVMlb3i9OVuzc=
github.com/emicklei/go-restful v0.0.0-20170410110728-ff4f55a20633/go.mod h1:otzb+WCGbkyDHkqmQmT5YD2WR4BBwUdeQoFo8l/7tVs=
github.com/evanphx/json-patch v0.0.0-20190203023257-5858425f7550/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/evanphx/json-patch v4.2.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/ghodss/yaml v0.0.0-20150909031657-73d445a93680/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/ghodss/yaml v1.0.0 h1:wQHKEahhL6wmXdzwWG11gIVCkOv05bNOh+Rxn0yngAk=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
github.com/go-logr/logr v0.1.0/go.mod h1:ixOQHD9gLJUVQQ2ZOR7zLEifBX6tGkNJF4QyIY7sIas=
github.com/go-openapi/jsonpointer v0.0.0-20160704185906-46af16f9f7b1/go.mod h1:+35s3my2LFTysnkMfxsJBAMHj/DoqoB9knIWoYG/Vk0=
github.com/go-openapi/jsonreference v0.0.0-20160704190145-13c6e3589ad9/go.mod h1:W3Z9FmVs9qj+KR4zFKmDPGiLdk1D9Rlm7cyMvf57TTg=
github.com/go-openapi/spec v0.0.0-20160808142527-6aced65f8501/go.mod h1:J8+jY1nAiCcj+friV/PDoE1/3eeccG9LYBs0tYvLOWc=
github.com/go-openapi/swag v0.0.0-20160704191624-1d0bd113de87/go.mod h1:DXUve3Dpr1UfpPtxFw+EFuQ41HhCWZfha5jSVRG7C7I=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/goburrow/cache v0.1.0/go.mod h1:8oxkfud4hvjO4tNjEKZfEd+LrpDVDlBIauGYsWGEzio=
github.com/gogo/protobuf v0.0.0-20171007142547-342cbe0a0415 h1:WSBJMqJbLxsn+bTCPyPYZfqHdJmc8MK4wrBjMft6BAM=
github.com/gogo/protobuf v0.0.0-20171007142547-342cbe0a0415/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.1.1 h1:72R+M5VuhED/KujmZVcIquuo8mBgX4oVda//DQb3PXo=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.2.2-0.20190723190241-65acae22fc9d/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
github.com/gogo/protobuf v1.3.0 h1:G8O7TerXerS4F6sx9OV7/nRfJdnXgHZu/S/7F2SN+UE=
github.com/gogo/protobuf v1.3.0/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b h1:VKtxabqXZkF25pY9ekfRL6a582T4P37/31XEstQ5p58=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/groupcache v0.0.0-20160516000752-02826c3e7903 h1:LbsanbbD6LieFkXbj9YNNBupiGHJgFeLpO0j0Fza1h8=
github.com/golang/groupcache v0.0.0-20160516000752-02826c3e7903/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6 h1:ZgQEtGgCBiWRM39fZuwSd1LwSqqSW0hOdXCYYDX0R3I=
github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/protobuf v0.0.0-20161109072736-4bd1920723d7/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.2.0 h1:P3YflyNX/ehuJFLhxviNdFxQPkGK5cDcApsge1SqnvM=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2 h1:6nsPYzhq5kReh6QImI3k5qWzO4PEbvbIW2cwSfR/6xs=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/snappy v0.0.1/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/google/btree v0.0.0-20160524151835-7d79101e329e/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/go-cmp v0.3.0 h1:crn/baboCvb5fXaQ0IJ1SGTsTVrWpDsCWC8EGETZijY=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.3.1 h1:Xye71clBPdm5HgqGwUkwhbynsUJZhDbS20FvLhQ2izg=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/gofuzz v0.0.0-20161122191042-44d81051d367/go.mod h1:HP5RmnzzSNb993RKQDq4+1A4ia9nllfqcQFTQJedwGI=
github.com/google/gofuzz v0.0.0-20170612174753-24818f796faf h1:+RRA9JqSOZFfKrOeqr2z77+8R2RKyh8PG66dcu1V0ck=
github.com/google/gofuzz v0.0.0-20170612174753-24818f796faf/go.mod h1:HP5RmnzzSNb993RKQDq4+1A4ia9nllfqcQFTQJedwGI=
github.com/google/gofuzz v1.0.0 h1:A8PeW59pxE9IoFRqBp37U+mSNaQoZ46F1f0f863XSXw=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/uuid v1.0.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/googleapis/gnostic v0.0.0-20170729233727-0c5108395e2d h1:7XGaL1e6bYS1yIonGp9761ExpPPV1ui0SAC59Yube9k=
github.com/googleapis/gnostic v0.0.0-20170729233727-0c5108395e2d/go.mod h1:sJBsCZ4ayReDTBIg8b9dl28c5xFWyhBTVRp3pOg5EKY=
github.com/googleapis/gnostic v0.3.1 h1:WeAefnSUHlBb0iJKwxFDZdbfGwkd7xRNuV+IpXMJhYk=
github.com/googleapis/gnostic v0.3.1/go.mod h1:on+2t9HRStVgn95RSsFWFz+6Q0Snyqv1awfrALZdbtU=
github.com/gophercloud/gophercloud v0.0.0-20190126172459-c818fa66e4c8/go.mod h1:3WdhXV3rUYy9p6AUW8d94kr+HS62Y4VL9mBnFxsD8q4=
github.com/gregjones/httpcache v0.0.0-20170728041850-787624de3eb7/go.mod h1:FecbI9+v66THATjSRHfNgh1IVFe/9kFxbXtjV0ctIMA=
github.com/hashicorp/golang-lru v0.5.0 h1:CL2msUPvZTLb5O648aiLNJw3hnBxN2+1Jq8rCOH9wdo=
github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/golang-lru v0.5.3 h1:YPkqC67at8FYaadspW/6uE0COsBxS2656RLEr8Bppgk=
github.com/hashicorp/golang-lru v0.5.3/go.mod h1:iADmTwqILo4mZ8BN3D2Q6+9jd8WM5uGBxy+E8yxSoD4=
github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/imdario/mergo v0.3.5 h1:JboBksRwiiAJWvIYJVo46AfV+IAIKZpfrSzVKj42R4Q=
github.com/imdario/mergo v0.3.5/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
github.com/imdario/mergo v0.3.7 h1:Y+UAYTZ7gDEuOfhxKWy+dvb5dRQ6rJjFSdX2HZY1/gI=
github.com/imdario/mergo v0.3.7/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
github.com/json-iterator/go v0.0.0-20180612202835-f2b4162afba3/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
github.com/json-iterator/go v0.0.0-20180701071628-ab8a2e0c74be h1:AHimNtVIpiBjPUhEF5KNCkrUyqTSA5zWUl8sQ2bfGBE=
github.com/json-iterator/go v0.0.0-20180701071628-ab8a2e0c74be/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
github.com/json-iterator/go v1.1.7 h1:KfgG9LzI+pYjr4xvmz/5H4FXjokeP+rlHLhv3iH62Fo=
github.com/json-iterator/go v1.1.7/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQLJ+jE2L00=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
github.com/mailru/easyjson v0.0.0-20160728113105-d5b7844b561a/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/matttproud/golang_protobuf_extensions v1.0.1 h1:4hp9jkHxhMHkqkrB3Ix0jegS5sx/RkqARlsWZ6pIwiU=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v0.0.0-20180320133207-05fbef0ca5da/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/modern-go/reflect2 v1.0.1 h1:9f412s+6RmYXLWZSEzVVgPGK7C2PphHj5RJrvfx9AWI=
github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/munnerz/goautoneg v0.0.0-20120707110453-a547fc61f48d/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f/go.mod h1:ZdcZmHo+o7JKHSa8/e818NopupXU1YMK5fe1lsApnBw=
github.com/nsf/jsondiff v0.0.0-20190712045011-8443391ee9b6 h1:qsqscDgSJy+HqgMTR+3NwjYJBbp1+honwDsszLoS+pA=
github.com/nsf/jsondiff v0.0.0-20190712045011-8443391ee9b6/go.mod h1:uFMI8w+ref4v2r9jz+c9i1IfIttS/OkmLfrk1jne5hs=
github.com/onsi/ginkgo v0.0.0-20170829012221-11459a886d9c/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.8.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/gomega v0.0.0-20170829124025-dcabb60a477c/go.mod h1:C1qb7wdrVGGVU+Z6iS04AVkA3Q65CEZX59MT0QO5uiA=
github.com/onsi/gomega v0.0.0-20190113212917-5533ce8a0da3/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/onsi/gomega v1.5.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
github.com/peterbourgon/diskv v2.0.1+incompatible/go.mod h1:uqqh8zWWbv1HBMNONnaR/tNboyR3/BZd58JJSHlUSCU=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.8.1 h1:iURUrRGxPUNPdy5/HRSm+Yj6okJ6UtLINN0Q9M4+h3I=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v0.0.0-20151028094244-d8ed2627bdf0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo=
github.com/prometheus/client_golang v1.1.0 h1:BQ53HtBmfOitExawJ6LokA4x8ov/z0SYYb0+HxJfRI8=
github.com/prometheus/client_golang v1.1.0/go.mod h1:I1FGZT9+L76gKKOs5djB6ezCbFQP1xR9D75/vuwEF3g=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90 h1:S/YWwWx/RA8rT8tKFRuGUZhuA90OyIBpPCXkcbwU8DE=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4 h1:gQz4mCbXsO+nc9n1hCxHcGA3Zx3Eo+UHZoInFGUIXNM=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.6.0 h1:kRhiuYSXR3+uv2IbVbZhUxK5zVD/2pp3Gd2PpvPkpEo=
github.com/prometheus/common v0.6.0/go.mod h1:eBmuwkDJBwy6iBfxCBob6t6dR6ENT/y+J+Zk0j9GMYc=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.0.3 h1:CTwfnzjQ+8dS6MhHHu4YswVAD99sL2wjPqP+VkURmKE=
github.com/prometheus/procfs v0.0.3/go.mod h1:4A/X28fw3Fc593LaREMrKMqOKvUAntwMDaekg4FpcdQ=
github.com/prometheus/procfs v0.0.4 h1:w8DjqFMJDjuVwdZBQoOozr4MVWOnwF7RcL/7uxBjY78=
github.com/prometheus/procfs v0.0.4/go.mod h1:4A/X28fw3Fc593LaREMrKMqOKvUAntwMDaekg4FpcdQ=
github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
github.com/spaolacci/murmur3 v1.0.1-0.20190317074736-539464a789e9 h1:5Cp3cVwpQP4aCQ6jx6dNLP3IarbYiuStmIzYu+BjQwY=
github.com/spaolacci/murmur3 v1.0.1-0.20190317074736-539464a789e9/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ=
github.com/spf13/afero v1.2.2 h1:5jhuqJyZCZf2JRofRvN/nIFgIWNzPa3/Vz8mYylgbWc=
github.com/spf13/afero v1.2.2/go.mod h1:9ZxEEn6pIJ8Rxe320qSDBk6AsU0r9pR7Q4OcevTdifk=
github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/spf13/cobra v0.0.5 h1:f0B+LkLX6DtmRH1isoNA9VTtNUK9K8xYd28JNNfOv/s=
github.com/spf13/cobra v0.0.5/go.mod h1:3K3wKZymM7VvHMDS9+Akkh4K60UwM26emMESw8tLCHU=
github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=
github.com/spf13/pflag v0.0.0-20170130214245-9ff6c6923cff/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.1 h1:aCvUg6QPl3ibpQUxyLkrEkCHtPqYJL4x9AuhqVqFis4=
github.com/spf13/pflag v1.0.1/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.3 h1:zPAT6CGy6wXeQ7NtTnaTerfKOsV6V6F8agHXFiazDkg=
github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/viper v1.3.2/go.mod h1:ZiWeW+zYFKm7srdB9IoDzzZXaJaI5eL9QjNiN/DMA2s=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v0.0.0-20151208002404-e3a8ff8ce365/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.2.2 h1:bSDNvY7ZPG5RlJ8otE/7V6gMiyenm9RtJ7IUVIAoJ1w=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0 h1:TivCn/peBQ7UY8ooIcPgZFpTNSz0Q2U6UrFlUfqbe0Q=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0 h1:2E4SXV/wtOkTonXsotYi4li6zVWxYlZuYNCXe9XRJyk=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljTbGfSG7qAOspJ7OScBnGdDN/yBr0sguwnwf0=
github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20181025213731-e84da0312774 h1:a4tQYYYuK9QdeO/+kEvNYyuR21S+7ve5EANok6hABhI=
golang.org/x/crypto v0.0.0-20181025213731-e84da0312774/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20181203042331-505ab145d0a9/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2 h1:VklqNMn3ovrHsnt90PveolxSbWFaJdECFbxSq0Mqo2M=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190911031432-227b76d455e7 h1:0hQKqeLdqlt5iIwVOBErRisrHJAN57yOiPRQItI20fU=
golang.org/x/crypto v0.0.0-20190911031432-227b76d455e7/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/net v0.0.0-20170114055629-f2499483f923/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190206173232-65e2d4e15006 h1:bfLnR+k0tq5Lqt6dflRLcZiz6UaXCMt3vhYJ1l4FQ80=
golang.org/x/net v0.0.0-20190206173232-65e2d4e15006/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859 h1:R/3boaszxrf1GEUWTVDzSKVwLmSJpwZ1yqXm8j0v2QI=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190812203447-cdfb69ac37fc/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190909003024-a7b16738d86b h1:XfVGCX+0T4WOStkaOsJRllbsiImhB2jgVBGc9L0lPGc=
golang.org/x/net v0.0.0-20190909003024-a7b16738d86b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/oauth2 v0.0.0-20190402181905-9f3314589c9a h1:tImsplftrFpALCYumobsd0K86vlAs/eXGFms2txfJfA=
golang.org/x/oauth2 v0.0.0-20190402181905-9f3314589c9a/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45 h1:SVwTIAaPC2U/AvvLNZ2a7OVsmBpC8L5BlwK1whH3hm0=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20170830134202-bb24a47a89ea/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190312061237-fead79001313 h1:pczuHS43Cp2ktBEEmLwScxgjWsBSzdaQiKzUyf3DTTc=
golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190606165138-5da285871e9c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190616124812-15dcb6c0061f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190626221950-04f50cda93cb/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190801041406-cbf593c0f2f3 h1:4y9KwBHBgBNwDbtu44R5o1fdOCQUEXhbk/P4A9WmJq0=
golang.org/x/sys v0.0.0-20190801041406-cbf593c0f2f3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190910064555-bbd175535a8b h1:3S2h5FadpNr0zUUCVZjlKIEYF+KaX/OBplTGo89CYHI=
golang.org/x/sys v0.0.0-20190910064555-bbd175535a8b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/text v0.0.0-20160726164857-2910a502d2bf/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20181227161524-e6919f6577db h1:6/JqlYfC1CCaLnGceQTI+sDGhC9UBSPAsBqI0Gun6kU=
golang.org/x/text v0.3.1-0.20181227161524-e6919f6577db/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/time v0.0.0-20161028155119-f51c12702a4d h1:TnM+PKb3ylGmZvyPXmo9m/wktg7Jn/a/fNmr33HSj8g=
golang.org/x/time v0.0.0-20161028155119-f51c12702a4d/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4 h1:SvFZT6jyqRaOeXpc5h/JSfZenJ2O330aBsf7JfSUXmQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20181011042414-1f849cf54d09/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20181030221726-6c7e314b6563/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190606124116-d0a3d012864b h1:mSUCVIwDx4hfXJfWsOPfdzEHxzb2Xjl6BQ8YgPnazQA=
golang.org/x/tools v0.0.0-20190606124116-d0a3d012864b/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.5.0 h1:KxkO13IPW4Lslp2bz+KHP2E3gtFlrIGNThxkZQ3g+4c=
google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.6.2 h1:j8RI1yW0SkI+paT6uGwMlrMI/6zwYA6/CFil8rxOzGI=
google.golang.org/appengine v1.6.2/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww/cMBSeb0=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
gopkg.in/inf.v0 v0.9.0 h1:3zYtXIO92bvsdS3ggAdA8Gb4Azj0YU+TVY1uGYNFA8o=
gopkg.in/inf.v0 v0.9.0/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc=
gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
gopkg.in/yaml.v2 v2.2.1 h1:mUhvW9EsL+naU5Q3cakzfE91YhliOondGd6ZrsDBHQE=
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
k8s.io/api v0.0.0-20190620084959-7cf5895f2711 h1:BblVYz/wE5WtBsD/Gvu54KyBUTJMflolzc5I2DTvh50=
k8s.io/api v0.0.0-20190620084959-7cf5895f2711/go.mod h1:TBhBqb1AWbBQbW3XRusr7n7E4v2+5ZY8r8sAMnyFC5A=
k8s.io/api v0.0.0-20190905160310-fb749d2f1064 h1:eH+1zuwJLhhgexaVwnhYzLg884nka2DIc2SPT87dsHI=
k8s.io/api v0.0.0-20190905160310-fb749d2f1064/go.mod h1:u09ZxrpPFcoUNEQM2GsqT/KpglKAtXdEcK+tSMilQ3Q=
k8s.io/apimachinery v0.0.0-20190612205821-1799e75a0719 h1:uV4S5IB5g4Nvi+TBVNf3e9L4wrirlwYJ6w88jUQxTUw=
k8s.io/apimachinery v0.0.0-20190612205821-1799e75a0719/go.mod h1:I4A+glKBHiTgiEjQiCCQfCAIcIMFGt291SmsvcrFzJA=
k8s.io/apimachinery v0.0.0-20190831074630-461753078381 h1:gySvpxrHatsZtG3qOkyPIHjWY7D5ogkrrWnD7+5/RGs=
k8s.io/apimachinery v0.0.0-20190831074630-461753078381/go.mod h1:nL6pwRT8NgfF8TT68DBI8uEePRt89cSvoXUVqbkWHq4=
k8s.io/client-go v0.0.0-20190620085101-78d2af792bab h1:E8Fecph0qbNsAbijJJQryKu4Oi9QTp5cVpjTE+nqg6g=
k8s.io/client-go v0.0.0-20190620085101-78d2af792bab/go.mod h1:E95RaSlHr79aHaX0aGSwcPNfygDiPKOVXdmivCIZT0k=
k8s.io/gengo v0.0.0-20190128074634-0689ccc1d7d6/go.mod h1:ezvh/TsK7cY6rbqRK0oQQ8IAqLxYwwyPxAX1Pzy0ii0=
k8s.io/klog v0.0.0-20181102134211-b9b56d5dfc92/go.mod h1:Gq+BEi5rUBO/HRz0bTSXDUcqjScdoY3a9IHpCEIOOfk=
k8s.io/klog v0.3.0/go.mod h1:Gq+BEi5rUBO/HRz0bTSXDUcqjScdoY3a9IHpCEIOOfk=
k8s.io/klog v0.3.1 h1:RVgyDHY/kFKtLqh67NvEWIgkMneNoIrdkN0CxDSQc68=
k8s.io/klog v0.3.1/go.mod h1:Gq+BEi5rUBO/HRz0bTSXDUcqjScdoY3a9IHpCEIOOfk=
k8s.io/klog v0.4.0 h1:lCJCxf/LIowc2IGS9TPjWDyXY4nOmdGdfcwwDQCOURQ=
k8s.io/klog v0.4.0/go.mod h1:4Bi6QPql/J/LkTDqv7R/cd3hPo4k2DG6Ptcz060Ez5I=
k8s.io/kube-openapi v0.0.0-20190228160746-b3a7cee44a30/go.mod h1:BXM9ceUBTj2QnfH2MK1odQs778ajze1RxcmP6S8RVVc=
k8s.io/kube-openapi v0.0.0-20190816220812-743ec37842bf/go.mod h1:1TqjTSzOxsLGIKfj0lK8EeCP7K1iUG65v09OM0/WG5E=
k8s.io/utils v0.0.0-20190221042446-c2654d5206da h1:ElyM7RPonbKnQqOcw7dG2IK5uvQQn3b/WPHqD5mBvP4=
k8s.io/utils v0.0.0-20190221042446-c2654d5206da/go.mod h1:8k8uAuAQ0rXslZKaEWd0c3oVhZz7sSzSiPnVZayjIX0=
k8s.io/utils v0.0.0-20190907131718-3d4f5b7dea0b h1:eMM0sTvh3KBVGwJfuNcU86P38TJhlVMAICbFPDG3t0M=
k8s.io/utils v0.0.0-20190907131718-3d4f5b7dea0b/go.mod h1:sZAwmy6armz5eXlNoLmJcl4F1QuKu7sr+mFQ0byX7Ew=
sigs.k8s.io/structured-merge-diff v0.0.0-20190525122527-15d366b2352e/go.mod h1:wWxsB5ozmmv/SG7nM11ayaAW51xMvak/t1r0CSlcokI=
sigs.k8s.io/yaml v1.1.0 h1:4A07+ZFc2wgJwo8YNlQpr1rVlgUDlxXHhPJciaPY5gs=
sigs.k8s.io/yaml v1.1.0/go.mod h1:UJmg0vDUVViEyp3mgSv9WPwZCDxu4rQW1olrI1uml+o=

Binary data
other/architecture.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 54 KiB

Binary data
other/screenshot1.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 367 KiB

View file

@@ -0,0 +1,36 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package ingress
import (
"github.com/ghodss/yaml"
"github.com/golang/glog"
"github.com/salesforce/sloop/pkg/sloop/store/typed"
"io/ioutil"
)
func PlayFile(outChan chan typed.KubeWatchResult, filename string) error {
b, err := ioutil.ReadFile(filename)
if err != nil {
return err
}
var playbackFile KubePlaybackFile
err = yaml.Unmarshal(b, &playbackFile)
if err != nil {
return err
}
glog.Infof("Loaded %v resources from file source %v", len(playbackFile.Data), filename)
for _, watchRecord := range playbackFile.Data {
outChan <- watchRecord
}
glog.Infof("Done writing kubeWatch events to channel")
return nil
}

View file

@@ -0,0 +1,57 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package ingress
import (
"github.com/ghodss/yaml"
"github.com/golang/glog"
"github.com/salesforce/sloop/pkg/sloop/store/typed"
"io/ioutil"
"sync"
)
type FileRecorder struct {
inChan chan typed.KubeWatchResult
data []typed.KubeWatchResult
filename string
wg sync.WaitGroup // Ensure we don't call close at the same time we are taking in events
}
func NewFileRecorder(filename string, inChan chan typed.KubeWatchResult) *FileRecorder {
fr := &FileRecorder{filename: filename, inChan: inChan}
return fr
}
func (fr *FileRecorder) Start() {
fr.wg.Add(1)
go fr.listen(fr.inChan)
}
func (fr *FileRecorder) listen(inChan chan typed.KubeWatchResult) {
for {
newRecord, more := <-inChan
if !more {
fr.wg.Done()
return
}
fr.data = append(fr.data, newRecord)
}
}
func (fr *FileRecorder) Close() error {
fr.wg.Wait()
f := KubePlaybackFile{Data: fr.data}
byteData, err := yaml.Marshal(f)
if err != nil {
return err
}
err = ioutil.WriteFile(fr.filename, byteData, 0644)
glog.Infof("Wrote %v records to %v. err %v", len(fr.data), fr.filename, err)
return err
}
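The recorder above uses a WaitGroup so that `Close` only runs once the drain goroutine has observed the channel close, guaranteeing no buffered events are lost. A minimal, dependency-free sketch of that same drain pattern (the `recorder` name and string payloads are illustrative, not sloop's API):

```go
package main

import (
	"fmt"
	"sync"
)

// recorder drains a channel into memory until the channel is closed;
// Close waits for the drain goroutine so no events are lost.
type recorder struct {
	in   chan string
	data []string
	wg   sync.WaitGroup
}

func newRecorder(in chan string) *recorder {
	r := &recorder{in: in}
	r.wg.Add(1)
	go func() {
		defer r.wg.Done()
		for rec := range r.in { // loop exits when the channel is closed
			r.data = append(r.data, rec)
		}
	}()
	return r
}

// Close blocks until every buffered event has been drained.
func (r *recorder) Close() []string {
	r.wg.Wait()
	return r.data
}

func main() {
	ch := make(chan string, 4)
	rec := newRecorder(ch)
	ch <- "add"
	ch <- "update"
	close(ch) // signals the drain goroutine to finish
	fmt.Println(rec.Close())
}
```

Closing the input channel is the shutdown signal; `wg.Wait` then gives `Close` a happens-before edge so reading `r.data` is race-free.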

View file

@@ -0,0 +1,55 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package ingress
import (
"github.com/golang/glog"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/clientcmd"
"k8s.io/client-go/tools/clientcmd/api"
)
// Returns kubeClient, currentContext, error
func MakeKubernetesClient(masterURL string, kubeContext string) (kubernetes.Interface, string, error) {
glog.Infof("Creating k8sclient with user-defined config masterURL=%v, kubeContext=%v.", masterURL, kubeContext)
clientConfig := getConfig(masterURL, kubeContext)
// This tells us the currentContext defined in the kubeConfig, which is used when we don't have an override
rawConfig, err := clientConfig.RawConfig()
if err != nil {
return nil, "", err
}
contextInUse := rawConfig.CurrentContext
if kubeContext != "" {
contextInUse = kubeContext
}
config, err := clientConfig.ClientConfig()
if err != nil {
return nil, "", err
}
clientset, err := kubernetes.NewForConfig(config)
if err != nil {
glog.Errorf("Cannot Initialize Kubernetes Client API: %v", err)
return nil, "", err
}
glog.Infof("Created k8sclient with context=%v, masterURL=%v, configFile=%v.", contextInUse, config.Host, clientConfig.ConfigAccess().GetLoadingPrecedence())
return clientset, contextInUse, nil
}
func getConfig(masterURL string, kubeContext string) clientcmd.ClientConfig {
loadingRules := clientcmd.NewDefaultClientConfigLoadingRules()
return clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
loadingRules,
&clientcmd.ConfigOverrides{CurrentContext: kubeContext, ClusterInfo: api.Cluster{Server: masterURL}})
}

View file

@@ -0,0 +1,177 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package ingress
import (
"encoding/json"
"fmt"
"github.com/golang/glog"
"k8s.io/client-go/informers"
"k8s.io/client-go/kubernetes"
"time"
"github.com/golang/protobuf/ptypes"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/salesforce/sloop/pkg/sloop/kubeextractor"
"github.com/salesforce/sloop/pkg/sloop/store/typed"
"k8s.io/client-go/tools/cache"
"sync"
)
/*
kubeWatcherImpl watches for changes to many kinds of Kubernetes resources and writes them to a supplied channel.
*/
type KubeWatcher interface {
Stop()
}
type kubeWatcherImpl struct {
informerFactory informers.SharedInformerFactory
stopChan chan struct{}
outchan chan typed.KubeWatchResult
resync time.Duration
outchanlock *sync.Mutex
stopped bool
currentContext string
}
var (
metricIngressKubewatchcount = promauto.NewCounterVec(prometheus.CounterOpts{Name: "sloop_ingress_kubewatchcount"}, []string{"kind", "watchtype", "namespace"})
metricIngressKubewatchbytes = promauto.NewCounterVec(prometheus.CounterOpts{Name: "sloop_ingress_kubewatchbytes"}, []string{"kind", "watchtype", "namespace"})
)
// Todo: Add additional parameters for filtering
func NewKubeWatcherSource(kubeClient kubernetes.Interface, outChan chan typed.KubeWatchResult, resync time.Duration) (KubeWatcher, error) {
kw := &kubeWatcherImpl{resync: resync, outchanlock: &sync.Mutex{}}
kw.stopChan = make(chan struct{})
kw.outchan = outChan
kw.startInformer(kubeClient, true)
return kw, nil
}
func (i *kubeWatcherImpl) startInformer(kubeclient kubernetes.Interface, includeEvents bool) {
i.informerFactory = informers.NewSharedInformerFactory(kubeclient, i.resync)
i.informerFactory.Apps().V1beta1().Deployments().Informer().AddEventHandler(i.getEventHandlerForResource("Deployment"))
i.informerFactory.Apps().V1beta1().StatefulSets().Informer().AddEventHandler(i.getEventHandlerForResource("StatefulSet"))
i.informerFactory.Core().V1().ConfigMaps().Informer().AddEventHandler(i.getEventHandlerForResource("ConfigMap"))
i.informerFactory.Core().V1().Endpoints().Informer().AddEventHandler(i.getEventHandlerForResource("Endpoint"))
i.informerFactory.Core().V1().Namespaces().Informer().AddEventHandler(i.getEventHandlerForResource("Namespace"))
i.informerFactory.Core().V1().Nodes().Informer().AddEventHandler(i.getEventHandlerForResource("Node"))
i.informerFactory.Core().V1().PersistentVolumeClaims().Informer().AddEventHandler(i.getEventHandlerForResource("PersistentVolumeClaim"))
i.informerFactory.Core().V1().PersistentVolumes().Informer().AddEventHandler(i.getEventHandlerForResource("PersistentVolume"))
i.informerFactory.Core().V1().Pods().Informer().AddEventHandler(i.getEventHandlerForResource("Pod"))
i.informerFactory.Core().V1().Services().Informer().AddEventHandler(i.getEventHandlerForResource("Service"))
i.informerFactory.Core().V1().ReplicationControllers().Informer().AddEventHandler(i.getEventHandlerForResource("ReplicationController"))
i.informerFactory.Extensions().V1beta1().DaemonSets().Informer().AddEventHandler(i.getEventHandlerForResource("DaemonSet"))
i.informerFactory.Extensions().V1beta1().ReplicaSets().Informer().AddEventHandler(i.getEventHandlerForResource("ReplicaSet"))
i.informerFactory.Storage().V1().StorageClasses().Informer().AddEventHandler(i.getEventHandlerForResource("StorageClass"))
i.informerFactory.Core().V1().Events().Informer().AddEventHandler(i.getEventHandlerForResource("Event"))
i.informerFactory.Start(i.stopChan)
}
func (i *kubeWatcherImpl) getEventHandlerForResource(resourceKind string) cache.ResourceEventHandler {
return cache.ResourceEventHandlerFuncs{
AddFunc: i.reportAdd(resourceKind),
DeleteFunc: i.reportDelete(resourceKind),
UpdateFunc: i.reportUpdate(resourceKind),
}
}
func (i *kubeWatcherImpl) reportAdd(kind string) func(interface{}) {
return func(obj interface{}) {
watchResultShell := &typed.KubeWatchResult{
Timestamp: ptypes.TimestampNow(),
Kind: kind,
WatchType: typed.KubeWatchResult_ADD,
Payload: "",
}
i.processUpdate(kind, obj, watchResultShell)
}
}
func (i *kubeWatcherImpl) reportDelete(kind string) func(interface{}) {
return func(obj interface{}) {
watchResultShell := &typed.KubeWatchResult{
Timestamp: ptypes.TimestampNow(),
Kind: kind,
WatchType: typed.KubeWatchResult_DELETE,
Payload: "",
}
i.processUpdate(kind, obj, watchResultShell)
}
}
func (i *kubeWatcherImpl) reportUpdate(kind string) func(interface{}, interface{}) {
return func(_ interface{}, newObj interface{}) {
watchResultShell := &typed.KubeWatchResult{
Timestamp: ptypes.TimestampNow(),
Kind: kind,
WatchType: typed.KubeWatchResult_UPDATE,
Payload: "",
}
i.processUpdate(kind, newObj, watchResultShell)
}
}
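`reportAdd`, `reportDelete`, and `reportUpdate` are closure factories: each returns a callback that captures the resource `kind` it was registered for, so one generic handler body can serve every informer. A self-contained sketch of the pattern, with `watchEvent` standing in for `typed.KubeWatchResult` (both names here are hypothetical):

```go
package main

import "fmt"

// watchEvent is a stand-in for the real watch-result type.
type watchEvent struct {
	Kind, WatchType string
}

// makeAddHandler returns a closure that stamps every callback with the
// resource kind it was registered for.
func makeAddHandler(kind string, out chan<- watchEvent) func(obj interface{}) {
	return func(obj interface{}) {
		out <- watchEvent{Kind: kind, WatchType: "ADD"}
	}
}

func main() {
	out := make(chan watchEvent, 1)
	podHandler := makeAddHandler("Pod", out)
	podHandler(nil) // the informer would invoke this with the new object
	fmt.Println(<-out)
}
```

The captured `kind` is fixed at registration time, which is why a single factory can be reused for Pods, Services, Nodes, and so on without any per-kind handler types.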
func (i *kubeWatcherImpl) processUpdate(kind string, obj interface{}, watchResult *typed.KubeWatchResult) {
resourceJson, err := i.getResourceAsJsonString(kind, obj)
if err != nil {
glog.Error(err)
return
}
kubeMetadata, err := kubeextractor.ExtractMetadata(resourceJson)
if err != nil {
// We are only grabbing namespace here for a prometheus metric, so if metadata extract fails we just log and continue
glog.V(2).Infof("No namespace for resource: %v", err)
}
metricIngressKubewatchcount.WithLabelValues(kind, watchResult.WatchType.String(), kubeMetadata.Namespace).Inc()
metricIngressKubewatchbytes.WithLabelValues(kind, watchResult.WatchType.String(), kubeMetadata.Namespace).Add(float64(len(resourceJson)))
watchResult.Payload = resourceJson
i.writeToOutChan(watchResult)
}
func (i *kubeWatcherImpl) writeToOutChan(watchResult *typed.KubeWatchResult) {
// We need to ensure that no messages are written to outChan after stop is called
// Kube watch library has a way to tell it to stop, but no way to know it is complete
// Use a lock around output channel for this purpose
i.outchanlock.Lock()
defer i.outchanlock.Unlock()
if i.stopped {
return
}
i.outchan <- *watchResult
}
func (i *kubeWatcherImpl) getResourceAsJsonString(kind string, obj interface{}) (string, error) {
bytes, err := json.Marshal(obj)
if err != nil {
return "", fmt.Errorf("resource cannot be marshalled %v", err)
}
return string(bytes), nil
}
func (i *kubeWatcherImpl) Stop() {
glog.Infof("Stopping kubeWatcher")
i.outchanlock.Lock()
defer i.outchanlock.Unlock() // defer keeps the early-return path from holding the lock forever
if i.stopped {
return
}
i.stopped = true
close(i.stopChan)
}
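`Stop` and `writeToOutChan` coordinate through one mutex: the same lock protects both the `stopped` flag and the channel send, so no message can be written after shutdown. A minimal sketch of that guard under hypothetical names (`guardedChan` is not sloop's type), with a `defer` so the early-return path still releases the lock:

```go
package main

import (
	"fmt"
	"sync"
)

// guardedChan ensures no send happens after Stop: one mutex protects
// both the stopped flag and the send itself.
type guardedChan struct {
	mu      sync.Mutex
	stopped bool
	out     chan int
}

func (g *guardedChan) Send(v int) bool {
	g.mu.Lock()
	defer g.mu.Unlock()
	if g.stopped {
		return false // dropped: the consumer has gone away
	}
	g.out <- v
	return true
}

func (g *guardedChan) Stop() {
	g.mu.Lock()
	defer g.mu.Unlock() // defer keeps the early-return path from deadlocking
	if g.stopped {
		return
	}
	g.stopped = true
	close(g.out)
}

func main() {
	g := &guardedChan{out: make(chan int, 1)}
	fmt.Println(g.Send(1)) // succeeds before Stop
	g.Stop()
	fmt.Println(g.Send(2)) // dropped after Stop
}
```

Because `Send` checks `stopped` before touching the channel, a send after `Stop` is silently dropped rather than panicking on the closed channel.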

View file

@@ -0,0 +1,22 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package ingress
import (
"github.com/salesforce/sloop/pkg/sloop/store/typed"
)
type KubeResourceSource interface {
Init() (chan typed.KubeWatchResult, error)
Stop()
}
type KubePlaybackFile struct {
Data []typed.KubeWatchResult `json:"Data"`
}

View file

@@ -0,0 +1,127 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package kubeextractor
import (
"encoding/json"
"fmt"
"github.com/golang/glog"
"strings"
"time"
)
// Example Event
//
// Name: na1-mist61app-prd-676c5b7dd4-h7x6r.15bf81c8df2bce2c
// Kind: Event
// Namespace: somens
// Payload:
/*
{
"metadata": {
"name": "na1-mist61app-prd-676c5b7dd4-h7x6r.15bf81c8df2bce2c",
"namespace": "somens",
"selfLink": "/api/v1/namespaces/somens/events/na1-mist61app-prd-676c5b7dd4-h7x6r.15bf81c8df2bce2c",
"uid": "d73fbbd4-caa3-11e9-a836-5e785cdb595d",
"resourceVersion": "2623487073",
"creationTimestamp": "2019-08-29T21:27:45Z"
},
"involvedObject": {
"kind": "Pod",
"namespace": "somens",
"name": "na1-mist61app-prd-676c5b7dd4-h7x6r",
"uid": "2358ba5b-caa3-11e9-a863-14187760f413",
"apiVersion": "v1",
"resourceVersion": "2621648750",
"fieldPath": "spec.containers{coreapp}"
},
"reason": "Unhealthy",
"message": "Readiness probe failed for some reason",
"source": {
"component": "kubelet",
"host": "somehostname"
},
"firstTimestamp": "2019-08-29T21:24:55Z",
"lastTimestamp": "2019-08-30T16:47:45Z",
"count": 13954,
"type": "Warning"
}
*/
// Extracts involved object from kube watch event payload.
func ExtractInvolvedObject(payload string) (KubeInvolvedObject, error) {
resource := struct {
InvolvedObject KubeInvolvedObject
}{}
err := json.Unmarshal([]byte(payload), &resource)
if err != nil {
return KubeInvolvedObject{}, err
}
return resource.InvolvedObject, nil
}
type EventInfo struct {
Reason string `json:"reason"`
Type string `json:"type"`
FirstTimestamp time.Time `json:"firstTimestamp"`
LastTimestamp time.Time `json:"lastTimestamp"`
Count int `json:"count"`
}
// Extracts event info (reason, type, timestamps, count) from a kube watch event payload
func ExtractEventInfo(payload string) (*EventInfo, error) {
internalResource := struct {
Reason string `json:"reason"`
FirstTimestamp string `json:"firstTimestamp"`
LastTimestamp string `json:"lastTimestamp"`
Count int `json:"count"`
Type string `json:"type"`
}{}
err := json.Unmarshal([]byte(payload), &internalResource)
if err != nil {
return nil, err
}
// Convert timestamps
fs, err := time.Parse(time.RFC3339, internalResource.FirstTimestamp)
if err != nil {
glog.Errorf("Could not parse first timestamp %v\n", internalResource.FirstTimestamp)
fs = time.Time{}
}
ls, err := time.Parse(time.RFC3339, internalResource.LastTimestamp)
if err != nil {
glog.Errorf("Could not parse last timestamp %v\n", internalResource.LastTimestamp)
ls = time.Time{}
}
return &EventInfo{
Reason: internalResource.Reason,
FirstTimestamp: fs,
LastTimestamp: ls,
Count: internalResource.Count,
Type: internalResource.Type,
}, nil
}
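The timestamp handling above degrades gracefully: a malformed timestamp is logged and replaced with the zero `time.Time` rather than failing the whole extraction. A small standalone sketch of that fallback (the `parseOrZero` helper name is illustrative):

```go
package main

import (
	"fmt"
	"time"
)

// parseOrZero parses an RFC 3339 timestamp, degrading to the zero
// time.Time on failure instead of returning an error.
func parseOrZero(s string) time.Time {
	t, err := time.Parse(time.RFC3339, s)
	if err != nil {
		return time.Time{} // caller can detect this via IsZero()
	}
	return t
}

func main() {
	fmt.Println(parseOrZero("2019-08-29T21:24:55Z").IsZero()) // valid input
	fmt.Println(parseOrZero("not-a-timestamp").IsZero())      // falls back to zero
}
```

Callers can then test `IsZero()` to distinguish "missing or unparseable" from a real event time.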
// Events in Kubernetes share the involved object's namespace, have Kind=Event, and
// their Name is the involved object's name + "." + a unique suffix.
//
// Deployment name: some-deployment-name
// Event name: some-deployment-name.15c37e2c4b7ff38e
//
// Pod name: some-deployment-name-d72v-5fd4f779f7-h4t6r
// Event name: some-deployment-name-d72v-5fd4f779f7-h4t6r.15c37e4fcf9f159f
func GetInvolvedObjectNameFromEventName(eventName string) (string, error) {
dotIdx := strings.LastIndex(eventName, ".")
if dotIdx < 0 {
return "", fmt.Errorf("unexpected format for a k8s event name: %v", eventName)
}
return eventName[0:dotIdx], nil
}
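Splitting on the last dot rather than the first is what makes this safe for involved objects whose names themselves contain dots, such as node hostnames. A quick standalone demonstration of that behavior (`involvedName` is a hypothetical stand-in for the function above):

```go
package main

import (
	"fmt"
	"strings"
)

// involvedName strips the unique suffix after the final dot; names with
// embedded dots survive because only the last dot is split on.
func involvedName(eventName string) (string, bool) {
	i := strings.LastIndex(eventName, ".")
	if i < 0 {
		return "", false // no dot: not a well-formed event name
	}
	return eventName[:i], true
}

func main() {
	n, _ := involvedName("somehost.somedomain.com.15c37e4fcf9f159f")
	fmt.Println(n) // the full hostname is preserved
}
```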

View file

@@ -0,0 +1,90 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package kubeextractor
import (
"github.com/stretchr/testify/assert"
"testing"
"time"
)
func Test_ExtractInvolvedObject_OutputCorrect(t *testing.T) {
payload := `{"involvedObject":{"kind":"ReplicaSet","namespace":"namespace1","name":"name1","uid":"uid1"}}`
expectedResult := KubeInvolvedObject{
Kind: "ReplicaSet",
Name: "name1",
Namespace: "namespace1",
Uid: "uid1",
}
result, err := ExtractInvolvedObject(payload)
assert.Nil(t, err)
assert.Equal(t, expectedResult, result)
}
func Test_ExtractInvolvedObject_InvalidPayload_ReturnsError(t *testing.T) {
payload := `{"involvedObject":{"name":"name1","namespace":"namespace1","selfLink":"link1"}`
result, err := ExtractInvolvedObject(payload)
assert.NotNil(t, err)
assert.Equal(t, KubeInvolvedObject{}, result)
}
func Test_ExtractInvolvedObject_PayloadHasAdditionalFields_OutputCorrect(t *testing.T) {
payload := `{"metadata":{"name":"name2","namespace":"namespace2","uid":"uid2"},"involvedObject":{"kind":"Pod","name":"name1","namespace":"namespace1","uid":"uid1"}}`
expectedResult := KubeInvolvedObject{
Kind: "Pod",
Name: "name1",
Namespace: "namespace1",
Uid: "uid1",
}
result, err := ExtractInvolvedObject(payload)
assert.Nil(t, err)
assert.Equal(t, expectedResult, result)
}
var someFirstSeenTime = time.Date(2019, 8, 29, 21, 24, 55, 0, time.UTC)
var someLastSeenTime = time.Date(2019, 8, 30, 16, 47, 45, 0, time.UTC)
func Test_ExtractEventInfo_OutputCorrect(t *testing.T) {
payload := `{"reason":"failed","firstTimestamp": "2019-08-29T21:24:55Z","lastTimestamp": "2019-08-30T16:47:45Z","count": 13954}`
result, err := ExtractEventInfo(payload)
assert.Nil(t, err)
assert.Equal(t, "failed", result.Reason)
assert.Equal(t, someFirstSeenTime, result.FirstTimestamp)
assert.Equal(t, someLastSeenTime, result.LastTimestamp)
assert.Equal(t, 13954, result.Count)
}
func Test_ExtractEventInfo_MissingFieldsAreIgnored(t *testing.T) {
payload := `{"metadata":{"name":"name1","uid":"uid1","resourceVersion":"123","creationTimestamp":"2019-07-12T20:12:12Z"}}`
expectedResult := ""
result, err := ExtractEventInfo(payload)
assert.Nil(t, err)
assert.Equal(t, expectedResult, result.Reason)
}
func Test_GetInvolvedObjectNameFromEventName_invalid(t *testing.T) {
eventName := "xxx"
key, err := GetInvolvedObjectNameFromEventName(eventName)
assert.NotNil(t, err)
assert.Equal(t, key, "")
}
func Test_GetInvolvedObjectNameFromEventName_valid(t *testing.T) {
eventName := "xxx.abc"
key, err := GetInvolvedObjectNameFromEventName(eventName)
assert.Nil(t, err)
assert.Equal(t, "xxx", key)
}
func Test_GetInvolvedObjectNameFromEventName_HostName(t *testing.T) {
eventName := "somehost.somedomain.com.abc"
key, err := GetInvolvedObjectNameFromEventName(eventName)
assert.Nil(t, err)
assert.Equal(t, "somehost.somedomain.com", key)
}

View file

@@ -0,0 +1,16 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package kubeextractor
const (
NodeKind = "Node"
NamespaceKind = "Namespace"
PodKind = "Pod"
EventKind = "Event"
)

View file

@@ -0,0 +1,48 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package kubeextractor
import (
"encoding/json"
)
type KubeMetadata struct {
Name string
Namespace string
Uid string
SelfLink string
ResourceVersion string
CreationTimestamp string
OwnerReferences []KubeMetadataOwnerReference
}
type KubeInvolvedObject struct {
Kind string
Name string
Namespace string
Uid string
}
type KubeMetadataOwnerReference struct {
Kind string
Name string
Uid string
}
// ExtractMetadata extracts metadata from a kube watch event payload.
func ExtractMetadata(payload string) (KubeMetadata, error) {
resource := struct {
Metadata KubeMetadata
}{}
err := json.Unmarshal([]byte(payload), &resource)
if err != nil {
return KubeMetadata{}, err
}
return resource.Metadata, nil
}
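The anonymous-struct wrapper in `ExtractMetadata` is a general trick for pulling one nested field out of a large JSON document without modeling the whole resource. A minimal stdlib-only sketch of the same pattern (the payload and field names here are illustrative, not from the repo):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Meta mirrors only the fields we care about; json.Unmarshal silently
// ignores every other field in the payload.
type Meta struct {
	Name string
	Uid  string
}

func extractMeta(payload string) (Meta, error) {
	// Wrap the target type in an anonymous struct so we can reach the
	// nested "metadata" object directly. Go's JSON decoder matches field
	// names case-insensitively, so "name" maps to Name.
	wrapper := struct {
		Metadata Meta
	}{}
	if err := json.Unmarshal([]byte(payload), &wrapper); err != nil {
		return Meta{}, err
	}
	return wrapper.Metadata, nil
}

func main() {
	m, err := extractMeta(`{"metadata":{"name":"pod1","uid":"u1"},"spec":{"x":1}}`)
	fmt.Println(m, err)
}
```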


@ -0,0 +1,82 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package kubeextractor
import (
"github.com/stretchr/testify/assert"
"testing"
)
func Test_ExtractMetadata_OutputCorrect(t *testing.T) {
payload := `{"metadata":
{
"name":"name1",
"namespace":"namespace1",
"selfLink":"link1",
"uid":"uid1",
"resourceVersion":"123",
"creationTimestamp":"2019-07-12T20:12:12Z",
"ownerReferences": [
{
"kind": "Deployment",
"name": "deployment1",
"uid": "uid0"
}]
}
}`
expectedResult := KubeMetadata{
Name: "name1",
Namespace: "namespace1",
Uid: "uid1",
SelfLink: "link1",
ResourceVersion: "123",
CreationTimestamp: "2019-07-12T20:12:12Z",
OwnerReferences: []KubeMetadataOwnerReference{{Kind: "Deployment", Name: "deployment1", Uid: "uid0"}},
}
result, err := ExtractMetadata(payload)
assert.Nil(t, err)
assert.Equal(t, expectedResult, result)
}
func Test_ExtractMetadata_MissingFieldsAreIgnored(t *testing.T) {
payload := `{"metadata":{"name":"name1","uid":"uid1","resourceVersion":"123","creationTimestamp":"2019-07-12T20:12:12Z"}}`
expectedResult := KubeMetadata{
Name: "name1",
Namespace: "",
Uid: "uid1",
SelfLink: "",
ResourceVersion: "123",
CreationTimestamp: "2019-07-12T20:12:12Z",
}
result, err := ExtractMetadata(payload)
assert.Nil(t, err)
assert.Equal(t, expectedResult, result)
}
func Test_ExtractMetadata_InvalidPayload_ReturnsError(t *testing.T) {
payload := `{"metadata":{"name":"name1","namespace":"namespace1","selfLink":"link1"}`
result, err := ExtractMetadata(payload)
assert.NotNil(t, err)
assert.Equal(t, KubeMetadata{}, result)
}
func Test_ExtractMetadata_PayloadHasAdditionalFields_OutputCorrect(t *testing.T) {
payload := `{"metadata":{"name":"name1","namespace":"namespace1","selfLink":"link1","uid":"uid1","resourceVersion":"123","creationTimestamp":"2019-07-12T20:12:12Z"},"meta2":{"kind":"Pod","namespace":"namespace2"}}`
expectedResult := KubeMetadata{
Name: "name1",
Namespace: "namespace1",
Uid: "uid1",
SelfLink: "link1",
ResourceVersion: "123",
CreationTimestamp: "2019-07-12T20:12:12Z",
}
result, err := ExtractMetadata(payload)
assert.Nil(t, err)
assert.Equal(t, expectedResult, result)
}


@ -0,0 +1,52 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package kubeextractor
import (
"fmt"
"github.com/Jeffail/gabs/v2"
"github.com/pkg/errors"
)
// In Kubernetes, node objects update frequently with new heartbeat timestamps and a new resourceVersion.
// Because nodes are large resources, it can be desirable to drop updates without an important state change.
func NodeHasMajorUpdate(node1 string, node2 string) (bool, error) {
cleanNode1, err := removeResVerAndTimestamp(node1)
if err != nil {
return false, err
}
cleanNode2, err := removeResVerAndTimestamp(node2)
if err != nil {
return false, err
}
return !(cleanNode1 == cleanNode2), nil
}
func removeResVerAndTimestamp(nodeJson string) (string, error) {
jsonParsed, err := gabs.ParseJSON([]byte(nodeJson))
if err != nil {
return "", errors.Wrap(err, "Failed to parse json for node resource")
}
_, err = jsonParsed.Set("removed", "metadata", "resourceVersion")
if err != nil {
return "", errors.Wrap(err, "Could not replace metadata.resourceVersion in node resource")
}
numConditions := len(jsonParsed.S("status", "conditions").Children())
for idx := 0; idx < numConditions; idx += 1 {
_, err = jsonParsed.Set("removed", "status", "conditions", fmt.Sprint(idx), "lastHeartbeatTime")
if err != nil {
return "", errors.Wrap(err, "Could not set node condition")
}
}
return jsonParsed.String(), nil
}
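The core idea of `NodeHasMajorUpdate` is to overwrite the volatile fields with a fixed sentinel and then compare the re-serialized documents. The repo does this with gabs; a simplified stdlib-only sketch of the same technique (scrubbing only `metadata.resourceVersion`, not the heartbeat timestamps) looks like this:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// scrub decodes a JSON document into a generic map and overwrites
// metadata.resourceVersion with a sentinel, mirroring what
// removeResVerAndTimestamp does. Re-encoding with encoding/json sorts
// map keys, so the output is a stable string suitable for equality
// comparison.
func scrub(doc string) (string, error) {
	var m map[string]interface{}
	if err := json.Unmarshal([]byte(doc), &m); err != nil {
		return "", err
	}
	if meta, ok := m["metadata"].(map[string]interface{}); ok {
		meta["resourceVersion"] = "removed"
	}
	out, err := json.Marshal(m)
	return string(out), err
}

func main() {
	a, _ := scrub(`{"metadata":{"name":"n1","resourceVersion":"1"}}`)
	b, _ := scrub(`{"metadata":{"name":"n1","resourceVersion":"2"}}`)
	fmt.Println(a == b) // the two documents differ only in scrubbed fields
}
```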


@ -0,0 +1,100 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package kubeextractor
import (
"bytes"
"fmt"
"github.com/stretchr/testify/assert"
"testing"
"text/template"
)
const nodeTemplate = `{
"metadata": {
"name": "somehostname",
"uid": "1f9c4fdc-df86-11e6-8ec4-141877585f71",
"resourceVersion": "{{.ResourceVersion}}"
},
"status": {
"conditions": [
{
"type": "OutOfDisk",
"status": "{{.OutOfDisk}}",
"lastHeartbeatTime": "{{.LastHeartbeatTime}}",
"lastTransitionTime": "2019-07-19T15:35:56Z",
"reason": "KubeletHasSufficientDisk"
},
{
"type": "MemoryPressure",
"status": "False",
"lastHeartbeatTime": "{{.LastHeartbeatTime}}",
"lastTransitionTime": "2019-07-19T15:35:56Z",
"reason": "KubeletHasSufficientMemory"
}
]
}
}
`
const someResourceVersion1 = "873691308"
const someResourceVersion2 = "873691358"
const someHeartbeatTime1 = "2019-07-23T17:18:10Z"
const someHeartbeatTime2 = "2019-07-23T17:18:20Z"
type nodeData struct {
ResourceVersion string
LastHeartbeatTime string
OutOfDisk string
}
func helper_makeNodeResource(t *testing.T, resVer string, heartbeat string, outOfDisk string) string {
data := nodeData{ResourceVersion: resVer, LastHeartbeatTime: heartbeat, OutOfDisk: outOfDisk}
tmp, err := template.New("test").Parse(nodeTemplate)
assert.Nil(t, err)
var tpl bytes.Buffer
err = tmp.Execute(&tpl, data)
assert.Nil(t, err)
return tpl.String()
}
const expectedCleanNode = `{"metadata":{"name":"somehostname","resourceVersion":"removed","uid":"1f9c4fdc-df86-11e6-8ec4-141877585f71"},"status":{"conditions":[` +
`{"lastHeartbeatTime":"removed","lastTransitionTime":"2019-07-19T15:35:56Z","reason":"KubeletHasSufficientDisk","status":"False","type":"OutOfDisk"},` +
`{"lastHeartbeatTime":"removed","lastTransitionTime":"2019-07-19T15:35:56Z","reason":"KubeletHasSufficientMemory","status":"False","type":"MemoryPressure"}]}}`
func Test_removeResVerAndTimestamp(t *testing.T) {
nodeJson := helper_makeNodeResource(t, someResourceVersion1, someHeartbeatTime1, "False")
cleanNode, err := removeResVerAndTimestamp(nodeJson)
assert.Nil(t, err)
fmt.Printf("%v\n", cleanNode)
assert.Equal(t, expectedCleanNode, cleanNode)
}
func Test_nodesMeaningfullyDifferent_sameNode(t *testing.T) {
nodeJson := helper_makeNodeResource(t, someResourceVersion1, someHeartbeatTime1, "False")
diff, err := NodeHasMajorUpdate(nodeJson, nodeJson)
assert.Nil(t, err)
assert.False(t, diff)
}
func Test_nodesMeaningfullyDifferent_onlyDiffTimeAndRes(t *testing.T) {
nodeJson1 := helper_makeNodeResource(t, someResourceVersion1, someHeartbeatTime1, "False")
nodeJson2 := helper_makeNodeResource(t, someResourceVersion2, someHeartbeatTime2, "False")
diff, err := NodeHasMajorUpdate(nodeJson1, nodeJson2)
assert.Nil(t, err)
assert.False(t, diff)
}
func Test_nodesMeaningfullyDifferent_diffOutOfDisk(t *testing.T) {
nodeJson1 := helper_makeNodeResource(t, someResourceVersion1, someHeartbeatTime1, "False")
nodeJson2 := helper_makeNodeResource(t, someResourceVersion1, someHeartbeatTime1, "True")
diff, err := NodeHasMajorUpdate(nodeJson1, nodeJson2)
assert.Nil(t, err)
assert.True(t, diff)
}
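The `helper_makeNodeResource` approach above, stamping out JSON fixtures from a `text/template`, is a handy general pattern for parameterized test payloads. A minimal standalone version (template and field names here are illustrative):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

const fixtureTmpl = `{"metadata":{"resourceVersion":"{{.ResVer}}"}}`

// render fills the fixture template with the given data, the same
// pattern the test helper uses to vary resourceVersion and
// lastHeartbeatTime across node payloads.
func render(resVer string) (string, error) {
	t, err := template.New("fixture").Parse(fixtureTmpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, struct{ ResVer string }{resVer}); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	s, _ := render("123")
	fmt.Println(s)
}
```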

pkg/sloop/main.go (new file, 38 lines)

@ -0,0 +1,38 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package main
import (
"flag"
"github.com/golang/glog"
"github.com/salesforce/sloop/pkg/sloop/server"
"os"
"runtime/pprof"
)
var cpuprofile = flag.String("cpuprofile", "", "write profile to file")
func main() {
	// Parse flags before reading cpuprofile; otherwise the flag value is always empty.
	flag.Parse()
	if *cpuprofile != "" {
		f, err := os.Create(*cpuprofile)
		if err != nil {
			glog.Fatal(err)
		}
		if err := pprof.StartCPUProfile(f); err != nil {
			glog.Fatal(err)
		}
		defer pprof.StopCPUProfile()
	}
err := server.RealMain()
if err != nil {
glog.Errorf("Main exited with error: %v\n", err)
os.Exit(1)
} else {
glog.Infof("Shutting down gracefully")
os.Exit(0)
}
}


@ -0,0 +1,244 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package processing
import (
"github.com/golang/glog"
"github.com/golang/protobuf/ptypes"
"github.com/golang/protobuf/ptypes/timestamp"
"github.com/pkg/errors"
"github.com/salesforce/sloop/pkg/sloop/kubeextractor"
"github.com/salesforce/sloop/pkg/sloop/store/typed"
"github.com/salesforce/sloop/pkg/sloop/store/untyped"
"github.com/salesforce/sloop/pkg/sloop/store/untyped/badgerwrap"
"math"
"time"
)
// TODO: We are only looking for the previous event in the current partition, but we need to look back in cases where we cross the boundary
func updateEventCountTable(
tables typed.Tables,
txn badgerwrap.Txn,
watchRec *typed.KubeWatchResult,
metadata *kubeextractor.KubeMetadata,
involvedObject *kubeextractor.KubeInvolvedObject,
maxLookback time.Duration) error {
if watchRec.Kind != kubeextractor.EventKind {
glog.V(7).Infof("Skipping event processing for %v", watchRec.Kind)
return nil
}
prevEventInfo, err := getPreviousEventInfo(tables, txn, watchRec.Timestamp, watchRec.Kind, metadata.Namespace, metadata.Name)
if err != nil {
return errors.Wrap(err, "Could not get event info for previous event instance")
}
newEventInfo, err := kubeextractor.ExtractEventInfo(watchRec.Payload)
if err != nil {
return errors.Wrap(err, "Could not extract reason")
}
computedFirstTs, computedLastTs, computedCount := computeEventsDiff(prevEventInfo, newEventInfo)
if computedCount == 0 {
return nil
}
// Truncate long-lived events to (watch.Timestamp - maxLookback). This avoids filling in data that will immediately
// be garbage collected, and avoids creating transactions that are too large and fail
watchTs, err := ptypes.Timestamp(watchRec.Timestamp)
if err != nil {
return err
}
truncateTs := watchTs.Add(-1 * maxLookback)
computedFirstTs, computedLastTs, computedCount = adjustForMaxLookback(computedFirstTs, computedLastTs, computedCount, truncateTs)
eventCountByMinute := spreadOutEvents(computedFirstTs, computedLastTs, computedCount)
err = storeMinutes(tables, txn, eventCountByMinute, involvedObject.Kind, involvedObject.Namespace, involvedObject.Name, involvedObject.Uid, newEventInfo.Reason, newEventInfo.Type)
if err != nil {
return err
}
return nil
}
func storeMinutes(tables typed.Tables, txn badgerwrap.Txn, minToCount map[int64]int, kind string, namespace string, name string, uid string, reason string, severity string) error {
// We have event counts over different timestamps, which can be in different partitions. But we want to do all
// the work for the same partition in one round trip.
mapPartToTimeToCount := map[string]map[int64]int{}
for unixTime, count := range minToCount {
thisTs := time.Unix(unixTime, 0)
partitionId := untyped.GetPartitionId(thisTs)
_, ok := mapPartToTimeToCount[partitionId]
if !ok {
mapPartToTimeToCount[partitionId] = map[int64]int{}
}
mapPartToTimeToCount[partitionId][unixTime] = count
}
for _, thisPartMap := range mapPartToTimeToCount {
for unixTime, count := range thisPartMap {
key := typed.NewEventCountKey(time.Unix(unixTime, 0).UTC(), kind, namespace, name, uid)
eventRecord, err := tables.EventCountTable().GetOrDefault(txn, key.String())
if err != nil {
return errors.Wrap(err, "Could not get event record")
}
if _, ok := eventRecord.MapMinToEvents[unixTime]; !ok {
eventRecord.MapMinToEvents[unixTime] = &typed.EventCounts{MapReasonToCount: make(map[string]int32)}
}
eventRecord.MapMinToEvents[unixTime].MapReasonToCount[reason+":"+severity] += int32(count)
err = tables.EventCountTable().Set(txn, key.String(), eventRecord)
if err != nil {
return errors.Wrap(err, "Failed to put")
}
}
}
return nil
}
func distributeValue(value int, buckets int) []int {
if buckets == 0 {
return []int{}
}
ret := []int{}
for pos := 0; pos < buckets; pos += 1 {
thisVal := value / buckets
if value%buckets > pos {
thisVal += 1
}
ret = append(ret, thisVal)
}
return ret
}
// TODO: Do this the right way so the totals always match. This is a placeholder solution
// TODO: Figure out proper way to round this
func spreadOutEvents(firstTs time.Time, lastTs time.Time, count int) map[int64]int {
ret := map[int64]int{}
firstRound := firstTs.Round(time.Minute)
lastRound := lastTs.Round(time.Minute)
// It all happened in the same minute
if firstRound == lastRound {
ret[firstRound.Unix()] = count
return ret
}
numMinutes := int(math.Ceil(lastRound.Sub(firstRound).Minutes()))
if numMinutes < 1 {
numMinutes = 1
}
counts := distributeValue(count, numMinutes)
thisMinute := firstRound
for idx := 0; idx < numMinutes; idx += 1 {
if counts[idx] > 0 {
ret[thisMinute.Unix()] = counts[idx]
}
thisMinute = thisMinute.Add(time.Minute)
}
return ret
}
func getPreviousEventInfo(tables typed.Tables, txn badgerwrap.Txn, ts *timestamp.Timestamp, kind string, namespace string, name string) (*kubeextractor.EventInfo, error) {
// Find the most recent copy of this event in the store so we can figure out what is new
prevWatch, err := getLastKubeWatchResult(tables, txn, ts, kind, namespace, name)
if err != nil {
return nil, err
}
if prevWatch == nil {
return nil, nil
}
return kubeextractor.ExtractEventInfo(prevWatch.Payload)
}
// Subtract old events from new events
func computeEventsDiff(prevEventInfo *kubeextractor.EventInfo, newEventInfo *kubeextractor.EventInfo) (time.Time, time.Time, int) {
// First time we are seeing this event, so just return it:
//
// Old: nil
// New: |----- Count: 50 ---|
if prevEventInfo == nil {
return newEventInfo.FirstTimestamp, newEventInfo.LastTimestamp, newEventInfo.Count
}
// Old event does not overlap, so return the new event:
//
// Old: |--- Count: 2 --|
// New: |-- Count: 1 --|
if prevEventInfo.LastTimestamp.Before(newEventInfo.FirstTimestamp) {
return newEventInfo.FirstTimestamp, newEventInfo.LastTimestamp, newEventInfo.Count
}
// This is a duplicate or old event, so just return count=0 (no new events)
//
// Old: |----- Count: 50 ---|
// New: |----- Count: 50 ---|
//
// or possibly this strange one:
//
// Old: |----- Count: 55 --------|
// New: |----- Count: 50 ---|
if prevEventInfo.LastTimestamp.Equal(newEventInfo.LastTimestamp) || prevEventInfo.LastTimestamp.After(newEventInfo.LastTimestamp) {
return time.Time{}, time.Time{}, 0
}
// New and old events start at the same time, but the new one ends later. This is the common
// case; we subtract the old count from the new one.
//
// Old: |----- Count: 50 ---|
// New: |------------- Count: 62 -----|
// So we return:
// |-- 12 ---|
if prevEventInfo.FirstTimestamp.Equal(newEventInfo.FirstTimestamp) {
if newEventInfo.Count < prevEventInfo.Count {
// This should not happen!
glog.Errorf("New event has a lower count than previous event with same start time! Old %v New %v", prevEventInfo, newEventInfo)
return time.Time{}, time.Time{}, 0
}
return prevEventInfo.LastTimestamp, newEventInfo.LastTimestamp, newEventInfo.Count - prevEventInfo.Count
}
// If we reach here, we have partially overlapping event ranges like this which should NOT happen.
// Figure out the percent overlap, and reduce the old count by that amount. This is the best approximation we can do.
// Old: |---- count: 123 -----|
// New: |----- count: 4235 ----|
glog.Errorf("Encountered partially overlapping events. Attempting to guess new count")
oldSeconds := prevEventInfo.LastTimestamp.Sub(prevEventInfo.FirstTimestamp).Seconds()
overlapSeconds := prevEventInfo.LastTimestamp.Sub(newEventInfo.FirstTimestamp).Seconds()
if oldSeconds <= 0 {
// Should not happen, but we don't want a divide by zero
return time.Time{}, time.Time{}, 0
}
pctOverlap := overlapSeconds / oldSeconds
newCount := newEventInfo.Count - int(float64(prevEventInfo.Count)*pctOverlap)
if newCount < 0 {
newCount = 0
}
return prevEventInfo.LastTimestamp, newEventInfo.LastTimestamp, newCount
}
// When you first bring up Sloop it can read in events that have been occurring for an extremely long time (many months).
// We don't want to spread them out beyond the maxLookback because they can create huge transactions that fail and will
// immediately kick in GC.
// Returns a new firstTs, lastTs and count.
func adjustForMaxLookback(firstTs time.Time, lastTs time.Time, count int, truncateTs time.Time) (time.Time, time.Time, int) {
if firstTs.After(truncateTs) {
return firstTs, lastTs, count
}
totalSeconds := lastTs.Sub(firstTs).Seconds()
beforeSeconds := truncateTs.Sub(firstTs).Seconds()
pctEventsToKeep := (totalSeconds - beforeSeconds) / totalSeconds
return truncateTs, lastTs, int(float64(count) * pctEventsToKeep)
}
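The remainder-spreading arithmetic in `distributeValue` (earlier buckets absorb the remainder one unit each) is easy to verify in isolation. This standalone copy is for illustration only; the real function lives in this package:

```go
package main

import "fmt"

// distributeValue splits value across buckets as evenly as possible.
// Each bucket gets value/buckets, and the first value%buckets buckets
// get one extra unit, so totals always add back up to value.
func distributeValue(value int, buckets int) []int {
	if buckets == 0 {
		return []int{}
	}
	ret := make([]int, 0, buckets)
	for pos := 0; pos < buckets; pos++ {
		thisVal := value / buckets
		if value%buckets > pos {
			thisVal++
		}
		ret = append(ret, thisVal)
	}
	return ret
}

func main() {
	// 8 over 3 buckets: base 2 each, remainder 2 goes to the first two.
	fmt.Println(distributeValue(8, 3)) // [3 3 2]
}
```

This is why the event-count test payload (count 10 over 3 rounded minutes) lands as 4 events in the first minute and 3 in each of the next two.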


@ -0,0 +1,332 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package processing
import (
"fmt"
"github.com/dgraph-io/badger"
"github.com/golang/protobuf/ptypes"
"github.com/salesforce/sloop/pkg/sloop/kubeextractor"
"github.com/salesforce/sloop/pkg/sloop/store/typed"
"github.com/salesforce/sloop/pkg/sloop/store/untyped"
"github.com/salesforce/sloop/pkg/sloop/store/untyped/badgerwrap"
"github.com/stretchr/testify/assert"
"testing"
"time"
)
var (
someEventWatchTs = time.Date(2019, 8, 29, 21, 24, 55, 6, time.UTC)
someEventWatchPTime, _ = ptypes.TimestampProto(someEventWatchTs)
someMaxLookback = time.Duration(time.Hour * 24 * 14)
someEventPayload = `{
"metadata": {
"name": "someEventName",
"namespace": "someNamespace",
"uid": "someEventUid"
},
"involvedObject": {
"kind": "Pod",
"namespace": "someNamespace",
"name": "somePodName",
"uid": "somePodUid"
},
"reason":"failed",
"firstTimestamp": "2019-08-29T21:24:55Z",
"lastTimestamp": "2019-08-29T21:27:55Z",
"count": 10,
"type": "Warning"
}`
)
const (
expectedEventKey = "/eventcount/001567036800/Pod/someNamespace/somePodName/somePodUid"
expectedEventMinKey = 1567113900
expectedEventReason = "failed:Warning"
)
func Test_EventCountTable_NonEvent(t *testing.T) {
untyped.TestHookSetPartitionDuration(time.Hour * 24)
db, err := (&badgerwrap.MockFactory{}).Open(badger.DefaultOptions(""))
assert.Nil(t, err)
tables := typed.NewTableList(db)
watchRec := typed.KubeWatchResult{
Kind: "Pod",
Timestamp: someEventWatchPTime,
}
err = tables.Db().Update(func(txn badgerwrap.Txn) error {
return updateEventCountTable(tables, txn, &watchRec, nil, nil, someMaxLookback)
})
assert.Nil(t, err)
foundKeys, err := findEventKeys(tables)
assert.Nil(t, err)
assert.Equal(t, 0, len(foundKeys))
}
func Test_EventCountTable_Event(t *testing.T) {
untyped.TestHookSetPartitionDuration(time.Hour * 24)
db, err := (&badgerwrap.MockFactory{}).Open(badger.DefaultOptions(""))
assert.Nil(t, err)
tables := typed.NewTableList(db)
watchRec := typed.KubeWatchResult{
Kind: kubeextractor.EventKind,
Timestamp: someEventWatchPTime,
Payload: someEventPayload,
}
resourceMetadata, err := kubeextractor.ExtractMetadata(watchRec.Payload)
assert.Nil(t, err)
involvedObject, err := kubeextractor.ExtractInvolvedObject(watchRec.Payload)
assert.Nil(t, err)
err = tables.Db().Update(func(txn badgerwrap.Txn) error {
return updateEventCountTable(tables, txn, &watchRec, &resourceMetadata, &involvedObject, someMaxLookback)
})
assert.Nil(t, err)
helper_dumpKeys(t, tables.Db(), "After adding event")
foundKeys, err := findEventKeys(tables)
assert.Nil(t, err)
assert.Equal(t, []string{expectedEventKey}, foundKeys)
counts, err := getEventKey(db, tables.EventCountTable(), expectedEventKey)
assert.Nil(t, err)
assert.Equal(t, 3, len(counts.MapMinToEvents))
// Count 10 spread over 3 rounded minutes becomes [4, 3, 3]:
// mapMinToEvents:<key:1567113900 value:<mapReasonToCount:<key:"failed:Warning" value:4 > > >
// mapMinToEvents:<key:1567113960 value:<mapReasonToCount:<key:"failed:Warning" value:3 > > >
// mapMinToEvents:<key:1567114020 value:<mapReasonToCount:<key:"failed:Warning" value:3 > > >
reasonCounts := counts.MapMinToEvents[expectedEventMinKey].MapReasonToCount
assert.Equal(t, 1, len(reasonCounts))
assert.Equal(t, int32(4), reasonCounts[expectedEventReason])
}
func Test_EventCountTable_DupeEventSameResults(t *testing.T) {
untyped.TestHookSetPartitionDuration(time.Hour * 24)
db, err := (&badgerwrap.MockFactory{}).Open(badger.DefaultOptions(""))
assert.Nil(t, err)
tables := typed.NewTableList(db)
watchRec := typed.KubeWatchResult{
Kind: kubeextractor.EventKind,
Timestamp: someEventWatchPTime,
Payload: someEventPayload,
}
resourceMetadata, err := kubeextractor.ExtractMetadata(watchRec.Payload)
assert.Nil(t, err)
involvedObject, err := kubeextractor.ExtractInvolvedObject(watchRec.Payload)
assert.Nil(t, err)
err = tables.Db().Update(func(txn badgerwrap.Txn) error {
// For dedupe to work we need a record written to the watch table
err2 := updateEventCountTable(tables, txn, &watchRec, &resourceMetadata, &involvedObject, someMaxLookback)
if err2 != nil {
return err2
}
kubeMetadata, err := kubeextractor.ExtractMetadata(watchRec.Payload)
assert.Nil(t, err)
err2 = updateKubeWatchTable(tables, txn, &watchRec, &kubeMetadata, false)
return err2
})
assert.Nil(t, err)
helper_dumpKeys(t, tables.Db(), "After first time processing event")
err = tables.Db().Update(func(txn badgerwrap.Txn) error {
return updateEventCountTable(tables, txn, &watchRec, &resourceMetadata, &involvedObject, someMaxLookback)
})
assert.Nil(t, err)
foundKeys, err := findEventKeys(tables)
assert.Nil(t, err)
fmt.Printf("Keys: %v\n", foundKeys)
assert.Equal(t, []string{expectedEventKey}, foundKeys)
counts, err := getEventKey(db, tables.EventCountTable(), expectedEventKey)
assert.Nil(t, err)
assert.Equal(t, 3, len(counts.MapMinToEvents))
reasonCounts := counts.MapMinToEvents[expectedEventMinKey].MapReasonToCount
assert.Equal(t, 1, len(reasonCounts))
assert.Equal(t, int32(4), reasonCounts[expectedEventReason])
}
func findEventKeys(tables typed.Tables) ([]string, error) {
var foundKeys []string
err := tables.Db().View(func(txn badgerwrap.Txn) error {
ret, _, err2 := tables.EventCountTable().RangeRead(txn, func(s string) bool { return true }, nil, someEventWatchTs.Add(-1*time.Hour), someEventWatchTs.Add(time.Hour))
if err2 != nil {
return err2
}
for k := range ret {
foundKeys = append(foundKeys, k.String())
}
return nil
})
return foundKeys, err
}
func getEventKey(db badgerwrap.DB, table *typed.ResourceEventCountsTable, key string) (*typed.ResourceEventCounts, error) {
var val *typed.ResourceEventCounts
err := db.View(func(txn badgerwrap.Txn) error {
v, err := table.Get(txn, key)
if err != nil {
return err
}
val = v
return nil
})
return val, err
}
func helper_dumpKeys(t *testing.T, db badgerwrap.DB, message string) {
fmt.Printf("%v\n", message)
err := db.View(func(txn badgerwrap.Txn) error {
itr := txn.NewIterator(badger.DefaultIteratorOptions)
for itr.Rewind(); itr.Valid(); itr.Next() {
fmt.Printf("KEY %v\n", string(itr.Item().Key()))
}
return nil
})
assert.Nil(t, err)
}
func Test_distributeValue(t *testing.T) {
assert.Equal(t, []int{}, distributeValue(8, 0))
assert.Equal(t, []int{8}, distributeValue(8, 1))
assert.Equal(t, []int{2, 2, 2}, distributeValue(6, 3))
assert.Equal(t, []int{3, 2, 2}, distributeValue(7, 3))
assert.Equal(t, []int{3, 3, 2}, distributeValue(8, 3))
assert.Equal(t, []int{3, 3, 3}, distributeValue(9, 3))
}
var someEventTs1 = time.Date(2019, 8, 29, 21, 24, 55, 6, time.UTC)
var someEventTs2 = someEventTs1.Add(time.Minute)
var someEventTs3 = someEventTs1.Add(2 * time.Minute)
var someEventTs4 = someEventTs1.Add(3 * time.Minute)
func Test_computeEventsDiff_NoOldEvent(t *testing.T) {
newEventInfo := &kubeextractor.EventInfo{
FirstTimestamp: someEventTs1,
LastTimestamp: someEventTs1,
Count: 123,
}
t1, t2, count := computeEventsDiff(nil, newEventInfo)
assert.Equal(t, 123, count)
assert.Equal(t, someEventTs1, t1)
assert.Equal(t, someEventTs1, t2)
}
func Test_computeEventsDiff_DupeEvent(t *testing.T) {
newEventInfo := &kubeextractor.EventInfo{
FirstTimestamp: someEventTs1,
LastTimestamp: someEventTs1,
Count: 123,
}
t1, t2, count := computeEventsDiff(newEventInfo, newEventInfo)
assert.Equal(t, 0, count)
assert.Equal(t, time.Time{}, t1)
assert.Equal(t, time.Time{}, t2)
}
func Test_computeEventsDiff_DupeEventWithDiffCount(t *testing.T) {
prevEventInfo := &kubeextractor.EventInfo{
FirstTimestamp: someEventTs1,
LastTimestamp: someEventTs1,
Count: 122,
}
newEventInfo := &kubeextractor.EventInfo{
FirstTimestamp: someEventTs1,
LastTimestamp: someEventTs1,
Count: 123,
}
t1, t2, count := computeEventsDiff(prevEventInfo, newEventInfo)
assert.Equal(t, 0, count)
assert.Equal(t, time.Time{}, t1)
assert.Equal(t, time.Time{}, t2)
}
func Test_computeEventsDiff_GotAnOldEvent(t *testing.T) {
oldEventInfo := &kubeextractor.EventInfo{
FirstTimestamp: someEventTs1,
LastTimestamp: someEventTs3,
Count: 10,
}
newEventInfo := &kubeextractor.EventInfo{
FirstTimestamp: someEventTs1,
LastTimestamp: someEventTs2,
Count: 13,
}
t1, t2, count := computeEventsDiff(oldEventInfo, newEventInfo)
assert.Equal(t, 0, count)
assert.Equal(t, time.Time{}, t1)
assert.Equal(t, time.Time{}, t2)
}
func Test_computeEventsDiff_NewEventsWithMoreCount(t *testing.T) {
oldEventInfo := &kubeextractor.EventInfo{
FirstTimestamp: someEventTs1,
LastTimestamp: someEventTs2,
Count: 10,
}
newEventInfo := &kubeextractor.EventInfo{
FirstTimestamp: someEventTs1,
LastTimestamp: someEventTs3,
Count: 13,
}
t1, t2, count := computeEventsDiff(oldEventInfo, newEventInfo)
assert.Equal(t, 3, count)
assert.Equal(t, someEventTs2, t1)
assert.Equal(t, someEventTs3, t2)
}
func Test_computeEventsDiff_PartiallyOverlapping(t *testing.T) {
oldEventInfo := &kubeextractor.EventInfo{
FirstTimestamp: someEventTs1,
LastTimestamp: someEventTs3,
Count: 10,
}
newEventInfo := &kubeextractor.EventInfo{
FirstTimestamp: someEventTs2,
LastTimestamp: someEventTs4,
Count: 6,
}
t1, t2, count := computeEventsDiff(oldEventInfo, newEventInfo)
assert.Equal(t, 1, count)
assert.Equal(t, someEventTs3, t1)
assert.Equal(t, someEventTs4, t2)
}
func Test_adjustForMaxLookback_ShortEventNoChange(t *testing.T) {
first, last, count := adjustForMaxLookback(someEventTs3, someEventTs4, 100, someEventTs1)
assert.Equal(t, someEventTs3, first)
assert.Equal(t, someEventTs4, last)
assert.Equal(t, 100, count)
}
func Test_adjustForMaxLookback_LongEventGetsTruncated(t *testing.T) {
first, last, count := adjustForMaxLookback(someEventTs1, someEventTs4, 1000, someEventTs3)
assert.Equal(t, someEventTs3, first)
assert.Equal(t, someEventTs4, last)
assert.Equal(t, 333, count)
}


@ -0,0 +1,97 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package processing
import (
"github.com/golang/glog"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/salesforce/sloop/pkg/sloop/kubeextractor"
"github.com/salesforce/sloop/pkg/sloop/store/typed"
"github.com/salesforce/sloop/pkg/sloop/store/untyped/badgerwrap"
"sync"
"time"
)
type Runner struct {
kubeWatchChan chan typed.KubeWatchResult
tables typed.Tables
inputWg *sync.WaitGroup
keepMinorNodeUpdates bool
maxLookback time.Duration
}
var (
metricProcessingWatchtableUpdatecount = promauto.NewCounter(prometheus.CounterOpts{Name: "sloop_processing_watchtable_updatecount"})
)
func NewProcessing(kubeWatchChan chan typed.KubeWatchResult, tables typed.Tables, keepMinorNodeUpdates bool, maxLookback time.Duration) *Runner {
return &Runner{kubeWatchChan: kubeWatchChan, tables: tables, inputWg: &sync.WaitGroup{}, keepMinorNodeUpdates: keepMinorNodeUpdates, maxLookback: maxLookback}
}
func (r *Runner) processingFailed(name string, err error) {
glog.Errorf("Processing for %v failed with error %v", name, err)
}
func (r *Runner) Start() {
r.inputWg.Add(1)
go func() {
for {
watchRec, more := <-r.kubeWatchChan
if !more {
r.inputWg.Done()
return
}
resourceMetadata, err := kubeextractor.ExtractMetadata(watchRec.Payload)
if err != nil {
r.processingFailed("cannot extract resource metadata", err)
}
involvedObject, err := kubeextractor.ExtractInvolvedObject(watchRec.Payload)
if err != nil {
r.processingFailed("cannot extract involved object", err)
}
// Process the event count first so it can easily find the previous copy of the event.
// If we updated the watch table first, this lookup would see the new event and think it is a dupe.
err = r.tables.Db().Update(func(txn badgerwrap.Txn) error {
return updateEventCountTable(r.tables, txn, &watchRec, &resourceMetadata, &involvedObject, r.maxLookback)
})
if err != nil {
r.processingFailed("updateEventCountTable", err)
}
err = r.tables.Db().Update(func(txn badgerwrap.Txn) error {
return updateWatchActivityTable(r.tables, txn, &watchRec, &resourceMetadata)
})
if err != nil {
r.processingFailed("updateWatchActivityTable", err)
}
err = r.tables.Db().Update(func(txn badgerwrap.Txn) error {
return updateKubeWatchTable(r.tables, txn, &watchRec, &resourceMetadata, r.keepMinorNodeUpdates)
})
if err != nil {
r.processingFailed("updateKubeWatchTable", err)
}
err = r.tables.Db().Update(func(txn badgerwrap.Txn) error {
return updateResourceSummaryTable(r.tables, txn, &watchRec, &resourceMetadata)
})
if err != nil {
r.processingFailed("updateResourceSummaryTable", err)
}
}
}()
}
func (r *Runner) Wait() {
glog.Infof("Waiting for outstanding processing to finish")
r.inputWg.Wait()
}


@ -0,0 +1,82 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package processing
import (
"github.com/dgraph-io/badger"
"github.com/golang/glog"
"github.com/golang/protobuf/ptypes"
"github.com/pkg/errors"
"github.com/salesforce/sloop/pkg/sloop/kubeextractor"
"github.com/salesforce/sloop/pkg/sloop/store/typed"
"github.com/salesforce/sloop/pkg/sloop/store/untyped/badgerwrap"
"time"
)
// TODO: Split this up and add unit tests
func updateResourceSummaryTable(tables typed.Tables, txn badgerwrap.Txn, watchRec *typed.KubeWatchResult, metadata *kubeextractor.KubeMetadata) error {
if watchRec.Kind == kubeextractor.EventKind {
glog.V(2).Infof("Skipping resource summary table update as kubewatch result is an event (selfLink: %v)", metadata.SelfLink)
return nil
}
ts, err := ptypes.Timestamp(watchRec.Timestamp)
if err != nil {
return errors.Wrap(err, "could not convert timestamp")
}
key := typed.NewResourceSummaryKey(ts, watchRec.Kind, metadata.Namespace, metadata.Name, metadata.Uid).String()
value, err := getResourceSummaryValue(tables, txn, key, metadata, watchRec)
if err != nil {
return errors.Wrapf(err, "could not get record for key %v", key)
}
value.Relationships = getRelationships(ts, metadata)
err = tables.ResourceSummaryTable().Set(txn, key, value)
if err != nil {
return errors.Wrapf(err, "put for the key %v failed", key)
}
return nil
}
func getResourceSummaryValue(tables typed.Tables, txn badgerwrap.Txn, key string, metadata *kubeextractor.KubeMetadata, watchRec *typed.KubeWatchResult) (*typed.ResourceSummary, error) {
value, err := tables.ResourceSummaryTable().Get(txn, key)
if err != nil {
if err != badger.ErrKeyNotFound {
return nil, errors.Wrap(err, "could not get record")
}
createTimeProto, err := typed.StringToProtobufTimestamp(metadata.CreationTimestamp)
if err != nil {
return nil, errors.Wrap(err, "could not convert string to timestamp")
}
value = &typed.ResourceSummary{
FirstSeen: watchRec.Timestamp,
CreateTime: createTimeProto,
DeletedAtEnd: false}
} else if watchRec.WatchType == typed.KubeWatchResult_ADD {
value.FirstSeen = watchRec.Timestamp
}
value.LastSeen = watchRec.Timestamp
if watchRec.WatchType == typed.KubeWatchResult_DELETE {
value.DeletedAtEnd = true
}
return value, nil
}
func getRelationships(timestamp time.Time, metadata *kubeextractor.KubeMetadata) []string {
relationships := []string{}
for _, value := range metadata.OwnerReferences {
refKey := typed.NewResourceSummaryKey(timestamp, value.Kind, metadata.Namespace, value.Name, value.Uid).String()
relationships = append(relationships, refKey)
}
return relationships
}
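getRelationships above turns each ownerReference into a resource-summary key, reusing the child's namespace for the owner. A minimal stdlib-only sketch of that mapping (the `/ressum/...` key layout and all names here are illustrative assumptions, not sloop's actual `ResourceSummaryKey` format):

```go
package main

import "fmt"

type ownerRef struct{ Kind, Name, Uid string }

// relationshipKeys mirrors getRelationships: each owner reference becomes a
// resource-summary key, assuming owners share the child's namespace.
func relationshipKeys(partition, namespace string, owners []ownerRef) []string {
	keys := []string{}
	for _, o := range owners {
		keys = append(keys, fmt.Sprintf("/ressum/%s/%s/%s/%s/%s", partition, o.Kind, namespace, o.Name, o.Uid))
	}
	return keys
}

func main() {
	owners := []ownerRef{{Kind: "ReplicaSet", Name: "someRS", Uid: "uid-1"}}
	fmt.Println(relationshipKeys("001551668400", "someNamespace", owners))
}
```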

Просмотреть файл

@ -0,0 +1,127 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package processing
import (
"github.com/dgraph-io/badger"
"github.com/golang/glog"
"github.com/golang/protobuf/ptypes"
"github.com/golang/protobuf/ptypes/timestamp"
"github.com/pkg/errors"
"github.com/salesforce/sloop/pkg/sloop/kubeextractor"
"github.com/salesforce/sloop/pkg/sloop/store/typed"
"github.com/salesforce/sloop/pkg/sloop/store/untyped"
"github.com/salesforce/sloop/pkg/sloop/store/untyped/badgerwrap"
"time"
)
func getLastKubeWatchResult(tables typed.Tables, txn badgerwrap.Txn, ts *timestamp.Timestamp, kind string, namespace string, name string) (*typed.KubeWatchResult, error) {
keyPrefixWithoutTs, err := toWatchTableKeyPrefix(ts, kind, namespace, name)
if err != nil {
return nil, err
}
prevFound, prevKey, err := getLastWatchKey(txn, keyPrefixWithoutTs)
if err != nil {
return nil, errors.Wrapf(err, "Failure getting previous watch result for %v", keyPrefixWithoutTs.String())
}
if !prevFound {
return nil, nil
}
prevWatch, err := tables.WatchTable().Get(txn, prevKey)
if err != nil {
return nil, err
}
return prevWatch, nil
}
// TODO: This code was labeled 'Previous' but really only returns 'Last'; it may be helpful to have an actual 'Previous' implementation
// TODO: Move this to code-gen per table
func getLastWatchKey(txn badgerwrap.Txn, keyPrefix *typed.WatchTableKey) (bool, string, error) {
// Retrieve the previous copy of this node and see if differences are important
// Badger reverse seek is awkward: the seek key needs a high sentinel appended to the prefix, but the iterator prefix must not include it
keyPrefixStr := keyPrefix.String()
keyPrefixEndBytes := []byte(keyPrefixStr + string(rune(255)))
keyPrefixBytes := []byte(keyPrefixStr)
iterOpt := badger.DefaultIteratorOptions
iterOpt.Prefix = keyPrefixBytes
iterOpt.Reverse = true
itr := txn.NewIterator(iterOpt)
defer itr.Close()
itr.Seek(keyPrefixEndBytes)
if itr.ValidForPrefix(keyPrefixBytes) {
item := itr.Item()
return true, string(item.Key()), nil
}
return false, "", nil
}
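The reverse-seek trick in getLastWatchKey relies purely on key ordering: seeking to the prefix plus a high sentinel positions a reverse iterator on the last key sharing that prefix. A stdlib-only sketch of the same idea over a sorted key slice (helper name and sample keys are hypothetical, not sloop code):

```go
package main

import (
	"fmt"
	"sort"
)

// lastWithPrefix simulates Badger's reverse seek: find the first key >= prefix
// plus a high sentinel, then look at the key just before it.
func lastWithPrefix(sortedKeys []string, prefix string) (string, bool) {
	seek := prefix + string(rune(255))
	// Every key starting with prefix sorts before seek, so the candidate
	// for "last key with this prefix" sits immediately before this index.
	i := sort.SearchStrings(sortedKeys, seek)
	if i == 0 {
		return "", false
	}
	k := sortedKeys[i-1]
	if len(k) >= len(prefix) && k[:len(prefix)] == prefix {
		return k, true
	}
	return "", false
}

func main() {
	keys := []string{
		"/watch/p1/Node/ns/a/100",
		"/watch/p1/Node/ns/a/200",
		"/watch/p1/Node/ns/b/100",
	}
	k, ok := lastWithPrefix(keys, "/watch/p1/Node/ns/a/")
	fmt.Println(k, ok) // latest key for the prefix
}
```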
func doesNodeHaveMajorUpdates(tables typed.Tables, txn badgerwrap.Txn, watchRec *typed.KubeWatchResult, metadata *kubeextractor.KubeMetadata) (bool, error) {
prevValue, err := getLastKubeWatchResult(tables, txn, watchRec.Timestamp, watchRec.Kind, metadata.Namespace, metadata.Name)
if err != nil {
return false, err
}
if prevValue == nil {
return true, nil
}
diff, err := kubeextractor.NodeHasMajorUpdate(prevValue.Payload, watchRec.Payload)
if err != nil {
keyPrefix, _ := toWatchTableKeyPrefix(watchRec.Timestamp, watchRec.Kind, metadata.Namespace, metadata.Name)
return false, errors.Wrapf(err, "Failed to check if nodes have meaningful differences for %v", keyPrefix.String())
}
return diff, nil
}
func updateKubeWatchTable(tables typed.Tables, txn badgerwrap.Txn, watchRec *typed.KubeWatchResult, metadata *kubeextractor.KubeMetadata, keepMinorNodeUpdates bool) error {
metricProcessingWatchtableUpdatecount.Inc()
key, err := toWatchTableKey(watchRec.Timestamp, watchRec.Kind, metadata.Namespace, metadata.Name)
if err != nil {
return err
}
if watchRec.Kind == kubeextractor.NodeKind && !keepMinorNodeUpdates {
hasUpdates, err := doesNodeHaveMajorUpdates(tables, txn, watchRec, metadata)
if err != nil {
return err
}
if !hasUpdates {
glog.V(2).Infof("Not inserting node %v because it has no major updates", key.String())
return nil
}
}
err = tables.WatchTable().Set(txn, key.String(), watchRec)
if err != nil {
return errors.Wrap(err, "Put failed")
}
return nil
}
func toWatchTableKey(ts *timestamp.Timestamp, kind string, namespace string, name string) (*typed.WatchTableKey, error) {
timestamp, err := ptypes.Timestamp(ts)
if err != nil {
return &typed.WatchTableKey{}, errors.Wrapf(err, "Could not convert timestamp %v", ts.String())
}
return typed.NewWatchTableKey(untyped.GetPartitionId(timestamp), kind, namespace, name, timestamp), nil
}
func toWatchTableKeyPrefix(ts *timestamp.Timestamp, kind string, namespace string, name string) (*typed.WatchTableKey, error) {
timestamp, err := ptypes.Timestamp(ts)
if err != nil {
return &typed.WatchTableKey{}, errors.Wrapf(err, "Could not convert timestamp %v for key prefix", ts.String())
}
return typed.NewWatchTableKey(untyped.GetPartitionId(timestamp), kind, namespace, name, time.Time{}), nil
}


@ -0,0 +1,234 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package processing
import (
"github.com/dgraph-io/badger"
"github.com/golang/protobuf/ptypes"
"github.com/salesforce/sloop/pkg/sloop/kubeextractor"
"github.com/salesforce/sloop/pkg/sloop/store/typed"
"github.com/salesforce/sloop/pkg/sloop/store/untyped"
"github.com/salesforce/sloop/pkg/sloop/store/untyped/badgerwrap"
"github.com/salesforce/sloop/pkg/sloop/test/assertex"
"github.com/stretchr/testify/assert"
"testing"
"time"
)
var someKind = "Pod"
var someWatchTime = time.Date(2019, 3, 4, 3, 4, 5, 6, time.UTC)
const somePodPayload = `{
"metadata": {
"name": "someName",
"namespace": "someNamespace",
"uid": "6c2a9795-a282-11e9-ba2f-14187761de09",
"creationTimestamp": "2019-07-09T19:47:45Z"
}
}`
const expectedKey = "/watch/001551668400/Pod/someNamespace/someName/1551668645000000006"
const someNode = `{
"metadata": {
"name": "somehostname",
"resourceVersion": "123"
},
"status": {
"conditions": [
{
"type": "OutOfDisk",
"status": "False",
"lastHeartbeatTime": "2019-07-19T15:35:56Z",
"lastTransitionTime": "2019-07-19T15:35:56Z",
"reason": "KubeletHasSufficientDisk"
}
]
}
}`
const someNodeDiffTsAndRV = `{
"metadata": {
"name": "somehostname",
"resourceVersion": "456"
},
"status": {
"conditions": [
{
"type": "OutOfDisk",
"status": "False",
"lastHeartbeatTime": "2012-01-01T15:35:56Z",
"lastTransitionTime": "2019-07-19T15:35:56Z",
"reason": "KubeletHasSufficientDisk"
}
]
}
}`
const someNodeDiffStatus = `{
"metadata": {
"name": "somehostname",
"resourceVersion": "123"
},
"status": {
"conditions": [
{
"type": "OutOfDisk",
"status": "True",
"lastHeartbeatTime": "2019-07-19T15:35:56Z",
"lastTransitionTime": "2019-07-19T15:35:56Z",
"reason": "KubeletHasSufficientDisk"
}
]
}
}`
type wtKeyValPair struct {
Key string
Value *typed.KubeWatchResult
}
func helper_runWatchTableProcessingOnInputs(t *testing.T, inRecs []*typed.KubeWatchResult, keepMinorNodeUpdates bool) []wtKeyValPair {
untyped.TestHookSetPartitionDuration(time.Hour)
db, err := (&badgerwrap.MockFactory{}).Open(badger.DefaultOptions(""))
assert.Nil(t, err)
tables := typed.NewTableList(db)
for _, watchRec := range inRecs {
err = tables.Db().Update(func(txn badgerwrap.Txn) error {
kubeMetadata, err := kubeextractor.ExtractMetadata(watchRec.Payload)
assert.Nil(t, err)
return updateKubeWatchTable(tables, txn, watchRec, &kubeMetadata, keepMinorNodeUpdates)
})
assert.Nil(t, err)
}
var foundRows []wtKeyValPair
err = tables.Db().View(func(txn badgerwrap.Txn) error {
itr := txn.NewIterator(badger.DefaultIteratorOptions)
defer itr.Close()
for itr.Rewind(); itr.Valid(); itr.Next() {
thisKey := string(itr.Item().Key())
thisVal, err := tables.WatchTable().Get(txn, thisKey)
assert.Nil(t, err)
newRows := wtKeyValPair{Key: string(itr.Item().Key()), Value: thisVal}
foundRows = append(foundRows, newRows)
}
return nil
})
assert.Nil(t, err)
return foundRows
}
func Test_WatchTable_BasicAddWorks(t *testing.T) {
ts, err := ptypes.TimestampProto(someWatchTime)
assert.Nil(t, err)
watchRec := &typed.KubeWatchResult{Kind: someKind, WatchType: typed.KubeWatchResult_ADD, Timestamp: ts, Payload: somePodPayload}
results := helper_runWatchTableProcessingOnInputs(t, []*typed.KubeWatchResult{watchRec}, false)
assert.Equal(t, 1, len(results))
actualkey := results[0].Key
actualVal := results[0].Value
assert.Equal(t, expectedKey, actualkey)
assert.Equal(t, someKind, actualVal.Kind)
assertex.ProtoEqual(t, ts, actualVal.Timestamp)
assert.Equal(t, somePodPayload, actualVal.Payload)
assert.Equal(t, typed.KubeWatchResult_ADD, actualVal.WatchType)
}
func Test_WatchTable_AddSameWatchResultTwiceSameTS_OneOutputRow(t *testing.T) {
ts, err := ptypes.TimestampProto(someWatchTime)
assert.Nil(t, err)
watchRec := &typed.KubeWatchResult{Kind: someKind, WatchType: typed.KubeWatchResult_ADD, Timestamp: ts, Payload: somePodPayload}
results := helper_runWatchTableProcessingOnInputs(t, []*typed.KubeWatchResult{watchRec, watchRec}, false)
assert.Equal(t, 1, len(results))
}
func Test_WatchTable_AddSameWatchResultTwiceDiffTS_TwoOutputRows(t *testing.T) {
ts1, err := ptypes.TimestampProto(someWatchTime)
assert.Nil(t, err)
ts2, err := ptypes.TimestampProto(someWatchTime.Add(time.Second))
assert.Nil(t, err)
watchRec1 := &typed.KubeWatchResult{Kind: someKind, WatchType: typed.KubeWatchResult_ADD, Timestamp: ts1, Payload: somePodPayload}
watchRec2 := &typed.KubeWatchResult{Kind: someKind, WatchType: typed.KubeWatchResult_ADD, Timestamp: ts2, Payload: somePodPayload}
results := helper_runWatchTableProcessingOnInputs(t, []*typed.KubeWatchResult{watchRec1, watchRec2}, false)
assert.Equal(t, 2, len(results))
}
func Test_WatchTable_DontKeepMinorNodeUpdates_AddNodeTwiceOnlyDiffIsTS_OneOutputRow(t *testing.T) {
ts1, err := ptypes.TimestampProto(someWatchTime)
assert.Nil(t, err)
ts2, err := ptypes.TimestampProto(someWatchTime.Add(time.Second))
assert.Nil(t, err)
watchRec1 := &typed.KubeWatchResult{Kind: kubeextractor.NodeKind, WatchType: typed.KubeWatchResult_UPDATE, Timestamp: ts1, Payload: someNode}
watchRec2 := &typed.KubeWatchResult{Kind: kubeextractor.NodeKind, WatchType: typed.KubeWatchResult_UPDATE, Timestamp: ts2, Payload: someNodeDiffTsAndRV}
results := helper_runWatchTableProcessingOnInputs(t, []*typed.KubeWatchResult{watchRec1, watchRec2}, false)
assert.Equal(t, 1, len(results))
}
func Test_WatchTable_DontKeepMinorNodeUpdates_AddNodeTwiceWithStatusDiff_TwoOutputRows(t *testing.T) {
ts1, err := ptypes.TimestampProto(someWatchTime)
assert.Nil(t, err)
ts2, err := ptypes.TimestampProto(someWatchTime.Add(time.Second))
assert.Nil(t, err)
watchRec1 := &typed.KubeWatchResult{Kind: kubeextractor.NodeKind, WatchType: typed.KubeWatchResult_UPDATE, Timestamp: ts1, Payload: someNode}
watchRec2 := &typed.KubeWatchResult{Kind: kubeextractor.NodeKind, WatchType: typed.KubeWatchResult_UPDATE, Timestamp: ts2, Payload: someNodeDiffStatus}
results := helper_runWatchTableProcessingOnInputs(t, []*typed.KubeWatchResult{watchRec1, watchRec2}, false)
assert.Equal(t, 2, len(results))
}
func Test_WatchTable_KeepMinorNodeUpdates_AddNodeTwiceOnlyDiffIsTS_TwoOutputRow(t *testing.T) {
ts1, err := ptypes.TimestampProto(someWatchTime)
assert.Nil(t, err)
ts2, err := ptypes.TimestampProto(someWatchTime.Add(time.Second))
assert.Nil(t, err)
watchRec1 := &typed.KubeWatchResult{Kind: kubeextractor.NodeKind, WatchType: typed.KubeWatchResult_UPDATE, Timestamp: ts1, Payload: someNode}
watchRec2 := &typed.KubeWatchResult{Kind: kubeextractor.NodeKind, WatchType: typed.KubeWatchResult_UPDATE, Timestamp: ts2, Payload: someNodeDiffTsAndRV}
results := helper_runWatchTableProcessingOnInputs(t, []*typed.KubeWatchResult{watchRec1, watchRec2}, true)
assert.Equal(t, 2, len(results))
}
func Test_getLastKubeWatchResult(t *testing.T) {
untyped.TestHookSetPartitionDuration(time.Hour)
db, err := (&badgerwrap.MockFactory{}).Open(badger.DefaultOptions(""))
assert.Nil(t, err)
tables := typed.NewTableList(db)
ts, err := ptypes.TimestampProto(someWatchTime)
assert.Nil(t, err)
watchRec := typed.KubeWatchResult{Kind: kubeextractor.NodeKind, WatchType: typed.KubeWatchResult_UPDATE, Timestamp: ts, Payload: somePodPayload}
metadata := &kubeextractor.KubeMetadata{Name: "someName", Namespace: "someNamespace"}
err = tables.Db().Update(func(txn badgerwrap.Txn) error {
return updateKubeWatchTable(tables, txn, &watchRec, metadata, true)
})
assert.Nil(t, err)
err = tables.Db().View(func(txn badgerwrap.Txn) error {
prevWatch, err := getLastKubeWatchResult(tables, txn, ts, kubeextractor.NodeKind, metadata.Namespace, "differentName")
assert.Nil(t, err)
assert.Nil(t, prevWatch)
prevWatch, err = getLastKubeWatchResult(tables, txn, ts, kubeextractor.NodeKind, metadata.Namespace, metadata.Name)
assert.Nil(t, err)
assert.NotNil(t, prevWatch)
return nil
})
assert.Nil(t, err)
}


@ -0,0 +1,89 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package processing
import (
"github.com/golang/protobuf/ptypes"
"github.com/pkg/errors"
"github.com/salesforce/sloop/pkg/sloop/kubeextractor"
"github.com/salesforce/sloop/pkg/sloop/store/typed"
"github.com/salesforce/sloop/pkg/sloop/store/untyped"
"github.com/salesforce/sloop/pkg/sloop/store/untyped/badgerwrap"
"time"
)
func updateWatchActivityTable(tables typed.Tables, txn badgerwrap.Txn, watchRec *typed.KubeWatchResult, metadata *kubeextractor.KubeMetadata) error {
if watchRec.Kind == kubeextractor.EventKind {
return nil
}
resourceChanged, err := didKubeWatchResultChange(tables, txn, watchRec, metadata)
if err != nil {
return err
}
timestamp, err := ptypes.Timestamp(watchRec.Timestamp)
if err != nil {
return errors.Wrapf(err, "Could not convert timestamp %v", watchRec.Timestamp)
}
activityRecord, key, err := getWatchActivity(tables, txn, timestamp, watchRec, metadata)
if err != nil {
return err
}
if resourceChanged {
activityRecord.ChangedAt = append(activityRecord.ChangedAt, timestamp.Unix())
} else {
activityRecord.NoChangeAt = append(activityRecord.NoChangeAt, timestamp.Unix())
}
return putWatchActivity(tables, txn, activityRecord, key)
}
func didKubeWatchResultChange(tables typed.Tables, txn badgerwrap.Txn, watchRec *typed.KubeWatchResult, metadata *kubeextractor.KubeMetadata) (bool, error) {
resourceChanged := false
prevWatch, err := getLastKubeWatchResult(tables, txn, watchRec.Timestamp, watchRec.Kind, metadata.Namespace, metadata.Name)
if err != nil {
return false, errors.Wrap(err, "Could not get event info for previous event instance")
}
if prevWatch != nil {
prevMetadata, err := kubeextractor.ExtractMetadata(prevWatch.Payload)
if err != nil {
return false, errors.Wrap(err, "Cannot extract resource metadata")
}
resourceChanged = metadata.ResourceVersion != prevMetadata.ResourceVersion
}
return resourceChanged, nil
}
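The change check above reduces to comparing metadata.resourceVersion across the previous and current payloads. A self-contained sketch of that comparison using only encoding/json (the struct and helper names are assumptions, not sloop's kubeextractor API):

```go
package main

import (
	"encoding/json"
	"fmt"
)

type metaOnly struct {
	Metadata struct {
		ResourceVersion string `json:"resourceVersion"`
	} `json:"metadata"`
}

// resourceChanged reports whether two watch payloads carry different
// resourceVersions, mirroring didKubeWatchResultChange's comparison.
func resourceChanged(prevPayload, currPayload string) (bool, error) {
	var prev, curr metaOnly
	if err := json.Unmarshal([]byte(prevPayload), &prev); err != nil {
		return false, err
	}
	if err := json.Unmarshal([]byte(currPayload), &curr); err != nil {
		return false, err
	}
	return prev.Metadata.ResourceVersion != curr.Metadata.ResourceVersion, nil
}

func main() {
	a := `{"metadata":{"resourceVersion":"123"}}`
	b := `{"metadata":{"resourceVersion":"456"}}`
	changed, _ := resourceChanged(a, b)
	fmt.Println(changed) // true: the versions differ
}
```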
func getWatchActivity(tables typed.Tables, txn badgerwrap.Txn, timestamp time.Time, watchRec *typed.KubeWatchResult, metadata *kubeextractor.KubeMetadata) (*typed.WatchActivity, *typed.WatchActivityKey, error) {
partitionId := untyped.GetPartitionId(timestamp)
key := typed.NewWatchActivityKey(partitionId, watchRec.Kind, metadata.Namespace, metadata.Name, metadata.Uid)
activityRecord, err := tables.WatchActivityTable().GetOrDefault(txn, key.String())
if err != nil {
return nil, nil, errors.Wrap(err, "Could not get watch activity record")
}
return activityRecord, key, nil
}
func putWatchActivity(tables typed.Tables, txn badgerwrap.Txn, activityRecord *typed.WatchActivity, key *typed.WatchActivityKey) error {
err := tables.WatchActivityTable().Set(txn, key.String(), activityRecord)
if err != nil {
return errors.Wrap(err, "Failed to put watch activity record")
}
return nil
}


@ -0,0 +1,134 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package processing
import (
"testing"
"time"
"github.com/dgraph-io/badger"
"github.com/golang/protobuf/ptypes"
"github.com/salesforce/sloop/pkg/sloop/kubeextractor"
"github.com/salesforce/sloop/pkg/sloop/store/typed"
"github.com/salesforce/sloop/pkg/sloop/store/untyped"
"github.com/salesforce/sloop/pkg/sloop/store/untyped/badgerwrap"
"github.com/stretchr/testify/assert"
)
const someNodePayload1 = `{
"metadata": {
"name": "someName",
"namespace": "someNamespace",
"resourceVersion": "456"
},
"status": {
"conditions": [
{
"type": "OutOfDisk",
"status": "False",
"lastHeartbeatTime": "2012-01-01T15:35:56Z",
"lastTransitionTime": "2019-07-19T15:35:56Z",
"reason": "KubeletHasSufficientDisk"
}
]
}
}`
const someNodePayload2 = `{
"metadata": {
"name": "someName",
"namespace": "someNamespace",
"resourceVersion": "457"
},
"status": {
"conditions": [
{
"type": "OutOfDisk",
"status": "False",
"lastHeartbeatTime": "2012-01-01T15:35:56Z",
"lastTransitionTime": "2019-07-19T15:35:56Z",
"reason": "KubeletHasSufficientDisk"
}
]
}
}`
func Test_updateWatchActivityTable(t *testing.T) {
untyped.TestHookSetPartitionDuration(time.Hour)
db, err := (&badgerwrap.MockFactory{}).Open(badger.DefaultOptions(""))
assert.Nil(t, err)
tables := typed.NewTableList(db)
ts, err := ptypes.TimestampProto(someWatchTime)
assert.Nil(t, err)
watchRec := &typed.KubeWatchResult{Kind: kubeextractor.NodeKind, WatchType: typed.KubeWatchResult_UPDATE, Timestamp: ts, Payload: someNodePayload1}
metadata, err := kubeextractor.ExtractMetadata(watchRec.Payload)
assert.Nil(t, err)
// add a WatchActivity (no matching KubeWatchResult) => no change
err = tables.Db().Update(func(txn badgerwrap.Txn) error {
err = updateWatchActivityTable(tables, txn, watchRec, &metadata)
assert.Nil(t, err)
activityRecord, _, err := getWatchActivity(tables, txn, someWatchTime, watchRec, &metadata)
assert.Nil(t, err)
assert.NotNil(t, activityRecord)
assert.Equal(t, 0, len(activityRecord.ChangedAt))
assert.Equal(t, 1, len(activityRecord.NoChangeAt))
assert.Equal(t, someWatchTime.Unix(), activityRecord.NoChangeAt[0])
return nil
})
assert.Nil(t, err)
// add a KubeWatchResult
err = tables.Db().Update(func(txn badgerwrap.Txn) error {
return updateKubeWatchTable(tables, txn, watchRec, &metadata, true)
})
assert.Nil(t, err)
// add a WatchActivity => no change at timestamp
timestamp2 := someWatchTime.Add(time.Minute)
ts2, err := ptypes.TimestampProto(timestamp2)
assert.Nil(t, err)
watchRec.Timestamp = ts2
err = tables.Db().Update(func(txn badgerwrap.Txn) error {
err = updateWatchActivityTable(tables, txn, watchRec, &metadata)
assert.Nil(t, err)
activityRecord, _, err := getWatchActivity(tables, txn, timestamp2, watchRec, &metadata)
assert.Nil(t, err)
assert.NotNil(t, activityRecord)
assert.Equal(t, 0, len(activityRecord.ChangedAt))
assert.Equal(t, 2, len(activityRecord.NoChangeAt))
assert.Equal(t, timestamp2.Unix(), activityRecord.NoChangeAt[1])
return nil
})
assert.Nil(t, err)
// add a changed WatchActivity => changed at timestamp
watchRec.Payload = someNodePayload2
metadata, err = kubeextractor.ExtractMetadata(watchRec.Payload)
assert.Nil(t, err)
err = tables.Db().Update(func(txn badgerwrap.Txn) error {
err = updateWatchActivityTable(tables, txn, watchRec, &metadata)
assert.Nil(t, err)
activityRecord, _, err := getWatchActivity(tables, txn, timestamp2, watchRec, &metadata)
assert.Nil(t, err)
assert.NotNil(t, activityRecord)
assert.Equal(t, 1, len(activityRecord.ChangedAt))
assert.Equal(t, 2, len(activityRecord.NoChangeAt))
assert.Equal(t, timestamp2.Unix(), activityRecord.ChangedAt[0])
return nil
})
assert.Nil(t, err)
}


@ -0,0 +1,77 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package queries
import (
"encoding/json"
"fmt"
"github.com/salesforce/sloop/pkg/sloop/store/typed"
"github.com/salesforce/sloop/pkg/sloop/store/untyped/badgerwrap"
"net/url"
"time"
)
type EventsData struct {
EventsList []EventOutput `json:"eventsList"`
}
type EventOutput struct {
PartitionId string `json:"partitionId"`
Namespace string `json:"namespace"`
Name string `json:"name"`
WatchTimestamp time.Time `json:"watchTimestamp,omitempty"`
Kind string `json:"kind,omitempty"`
WatchType typed.KubeWatchResult_WatchType `json:"watchType,omitempty"`
Payload string `json:"payload,omitempty"`
EventKey string `json:"eventKey"`
}
func GetEventData(params url.Values, t typed.Tables, startTime time.Time, endTime time.Time, requestId string) ([]byte, error) {
var watchEvents map[typed.WatchTableKey]*typed.KubeWatchResult
err := t.Db().View(func(txn badgerwrap.Txn) error {
var err2 error
var stats typed.RangeReadStats
// TODO: In addition to isEventValInTimeRange we need to also crack open the payload and check the involvedObject kind (+namespace, name, uuid)
watchEvents, stats, err2 = t.WatchTable().RangeRead(txn, paramEventDataFn(params), isEventValInTimeRange(startTime, endTime), startTime, endTime)
if err2 != nil {
return err2
}
stats.Log(requestId)
return nil
})
if err != nil {
return []byte{}, err
}
var res EventsData
eventsList := []EventOutput{}
for key, val := range watchEvents {
output := EventOutput{
PartitionId: key.PartitionId,
Namespace: key.Namespace,
Name: key.Name,
WatchTimestamp: key.Timestamp,
Kind: key.Kind,
WatchType: val.WatchType,
Payload: val.Payload,
EventKey: key.String(),
}
eventsList = append(eventsList, output)
}
if len(eventsList) == 0 {
return []byte{}, nil
}
res.EventsList = eventsList
bytes, err := json.MarshalIndent(res.EventsList, "", " ")
if err != nil {
return nil, fmt.Errorf("failed to marshal json: %v", err)
}
return bytes, nil
}
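GetEventData serializes through json.MarshalIndent over fields tagged omitempty. One subtlety worth noting: omitempty drops empty strings like Kind, but a zero time.Time is a struct and is still emitted. A stdlib sketch with a trimmed, illustrative struct:

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

type eventOutput struct {
	Name           string    `json:"name"`
	Kind           string    `json:"kind,omitempty"`
	WatchTimestamp time.Time `json:"watchTimestamp,omitempty"`
}

// marshalEvents mirrors the MarshalIndent call in GetEventData.
func marshalEvents(list []eventOutput) string {
	b, err := json.MarshalIndent(list, "", "  ")
	if err != nil {
		panic(err)
	}
	return string(b)
}

func main() {
	// Kind is omitted (empty string), but the zero WatchTimestamp is not:
	// omitempty has no effect on struct-typed fields like time.Time.
	fmt.Println(marshalEvents([]eventOutput{{Name: "someName"}}))
}
```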


@ -0,0 +1,121 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package queries
import (
"github.com/dgraph-io/badger"
"github.com/salesforce/sloop/pkg/sloop/store/typed"
"github.com/salesforce/sloop/pkg/sloop/store/untyped"
"github.com/salesforce/sloop/pkg/sloop/store/untyped/badgerwrap"
"github.com/salesforce/sloop/pkg/sloop/test/assertex"
"github.com/stretchr/testify/assert"
"testing"
"time"
)
const someRequestId = "someReqId"
func helper_get_k8Watchtable(keys []string, t *testing.T, somePayload string) typed.Tables {
if len(somePayload) == 0 {
somePayload = `{
"reason":"failed",
"firstTimestamp": "2019-08-29T21:24:55Z",
"lastTimestamp": "2019-08-29T21:27:55Z",
"count": 10}`
}
val := &typed.KubeWatchResult{Kind: "someKind", Payload: somePayload}
db, err := (&badgerwrap.MockFactory{}).Open(badger.DefaultOptions(""))
assert.Nil(t, err)
wt := typed.OpenKubeWatchResultTable()
err = db.Update(func(txn badgerwrap.Txn) error {
for _, key := range keys {
txerr := wt.Set(txn, key, val)
if txerr != nil {
return txerr
}
}
return nil
})
assert.Nil(t, err)
tables := typed.NewTableList(db)
return tables
}
func Test_GetEventData_False(t *testing.T) {
untyped.TestHookSetPartitionDuration(time.Hour)
partitionId := untyped.GetPartitionId(someTs)
values := helper_get_params()
values[KindParam] = []string{"someKind"}
values[NamespaceParam] = []string{"someNamespace"}
values[NameParam] = []string{"someName"}
var keys []string
for i := 'a'; i < 'd'; i++ {
keys = append(keys, typed.NewWatchTableKey(partitionId, "someKind"+string(i), "someNamespace", "someName.xx", someTs).String())
}
startTime := someTs.Add(-60 * time.Minute)
endTime := someTs.Add(60 * time.Minute)
tables := helper_get_k8Watchtable(keys, t, "")
res, err := GetEventData(values, tables, startTime, endTime, someRequestId)
assert.Equal(t, "", string(res))
assert.Nil(t, err)
}
func Test_GetEventData_NotInTimeRange(t *testing.T) {
untyped.TestHookSetPartitionDuration(time.Hour)
partitionId := untyped.GetPartitionId(someTs)
values := helper_get_params()
values[KindParam] = []string{"someKinda"}
values[NamespaceParam] = []string{"someNamespace"}
values[NameParam] = []string{"someName"}
var keys []string
keys = append(keys, typed.NewWatchTableKey(partitionId, "someKinda", "someNamespace", "someName.xx", someTs).String())
keys = append(keys, typed.NewWatchTableKey(partitionId, "someKinda", "someNamespace", "someName.xx", someTs.Add(-10*time.Minute)).String())
for i := 'b'; i < 'd'; i++ {
keys = append(keys, typed.NewWatchTableKey(partitionId, "someKind"+string(i), "someNamespace", "someName.xx", someTs).String())
}
tables := helper_get_k8Watchtable(keys, t, "")
res, err := GetEventData(values, tables, someTs.Add(-60*time.Minute), someTs.Add(60*time.Minute), someRequestId)
assert.Nil(t, err)
assert.Equal(t, "", string(res))
}
func Test_GetEventData_True(t *testing.T) {
untyped.TestHookSetPartitionDuration(time.Hour)
partitionId := untyped.GetPartitionId(someTs)
values := helper_get_params()
values[KindParam] = []string{"someKinda"}
values[NamespaceParam] = []string{"someNamespace"}
values[NameParam] = []string{"someName"}
var keys []string
keys = append(keys, typed.NewWatchTableKey(partitionId, "Event", "someNamespace", "someName.xx", someTs).String())
keys = append(keys, typed.NewWatchTableKey(partitionId, "Event", "someNamespaceb", "someName.xx", someTs).String())
someEventPayload := `{
"reason":"someReason",
"firstTimestamp": "2019-01-01T21:24:55Z",
"lastTimestamp": "2019-01-02T21:27:55Z",
"count": 10
}`
tables := helper_get_k8Watchtable(keys, t, someEventPayload)
res, err := GetEventData(values, tables, someTs.Add(-1*time.Hour), someTs.Add(6*time.Hour), someRequestId)
assert.Nil(t, err)
expectedRes := `[
{
"partitionId": "001546398000",
"namespace": "someNamespace",
"name": "someName.xx",
"watchTimestamp": "2019-01-02T03:04:05.000000006Z",
"kind": "Event",
"payload": "{\n \"reason\":\"someReason\",\n \"firstTimestamp\": \"2019-01-01T21:24:55Z\",\n \"lastTimestamp\": \"2019-01-02T21:27:55Z\",\n \"count\": 10\n }",
"eventKey": "/watch/001546398000/Event/someNamespace/someName.xx/1546398245000000006"
}
]`
assertex.JsonEqual(t, expectedRes, string(res))
}


@ -0,0 +1,18 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package queries
import (
"flag"
"fmt"
)
func init() {
flag.Set("alsologtostderr", fmt.Sprintf("%t", true))
}


@ -0,0 +1,31 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package queries
// Parameters are shared between webserver and here
// Keep this in sync with pkg/sloop/webfiles/filter.js
const (
LookbackParam = "lookback"
NamespaceParam = "namespace"
KindParam = "kind"
NameParam = "name"
NameMatchParam = "namematch" // substring match on name
UuidParam = "uuid"
StartDateParam = "start_date"
EndDateParam = "end_date"
ClickTimeParam = "click_time"
QueryParam = "query"
SortParam = "sort"
)
const (
AllKinds = "_all"
AllNamespaces = "_all"
DefaultNamespace = "default"
)


@ -0,0 +1,51 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package queries
import (
"fmt"
"github.com/golang/glog"
"github.com/salesforce/sloop/pkg/sloop/store/typed"
"net/url"
"time"
)
// Takes in arguments from the web page, runs the query, and returns json
type ganttJsonQuery = func(params url.Values, tables typed.Tables, startTime time.Time, endTime time.Time, requestId string) ([]byte, error)
var funcMap = map[string]ganttJsonQuery{
"EventHeatMap": EventHeatMap3Query,
"GetEventData": GetEventData,
"GetResPayload": GetResPayload,
"Namespaces": NamespaceQuery,
"Kinds": KindQuery,
"Queries": QueryAvailableQueries,
"GetResSummaryData": GetResSummaryData,
}
func Default() string {
return "EventHeatMap"
}
func GetNamesOfQueries() []string {
return []string{"EventHeatMap"}
}
func RunQuery(queryName string, params url.Values, tables typed.Tables, maxLookBack time.Duration, requestId string) ([]byte, error) {
startTime, endTime := computeTimeRange(params, tables, maxLookBack)
fn, ok := funcMap[queryName]
if !ok {
return []byte{}, fmt.Errorf("query not found: %v", queryName)
}
ret, err := fn(params, tables, startTime, endTime, requestId)
if err != nil {
glog.Errorf("Query %v failed with error: %v", queryName, err)
}
return ret, err
}
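RunQuery resolves the handler through funcMap before doing any work, so unknown names fail fast. A stripped-down sketch of the same dispatch pattern with a simplified, hypothetical signature:

```go
package main

import (
	"fmt"
)

type queryFn = func(params map[string]string) (string, error)

// A name-to-handler map, analogous to funcMap.
var queries = map[string]queryFn{
	"Echo": func(p map[string]string) (string, error) { return p["msg"], nil },
}

// run mirrors RunQuery's dispatch: look the handler up by name and
// return an error for unknown query names before invoking anything.
func run(name string, params map[string]string) (string, error) {
	fn, ok := queries[name]
	if !ok {
		return "", fmt.Errorf("query not found: %s", name)
	}
	return fn(params)
}

func main() {
	out, _ := run("Echo", map[string]string{"msg": "hello"})
	fmt.Println(out)
	_, err := run("Nope", nil)
	fmt.Println(err)
}
```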


@ -0,0 +1,456 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package queries
import (
"encoding/json"
"fmt"
"github.com/golang/glog"
"github.com/golang/protobuf/ptypes"
"github.com/salesforce/sloop/pkg/sloop/kubeextractor"
"github.com/salesforce/sloop/pkg/sloop/store/typed"
"github.com/salesforce/sloop/pkg/sloop/store/untyped"
"github.com/salesforce/sloop/pkg/sloop/store/untyped/badgerwrap"
"net/url"
"sort"
"time"
)
const EmptyPartition = ""
type rawData struct {
Events map[typed.EventCountKey]*typed.ResourceEventCounts
Resources map[typed.ResourceSummaryKey]*typed.ResourceSummary
WatchActivity map[typed.WatchActivityKey]*typed.WatchActivity
}
func EventHeatMap3Query(params url.Values, t typed.Tables, queryStartTime time.Time, queryEndTime time.Time, requestId string) ([]byte, error) {
// Simple query of store for all rows in matching partitions (will include extra rows)
rawRows, err := getRawDataFromStore(params, t, queryStartTime, queryEndTime, requestId)
if err != nil {
return nil, err
}
glog.Infof("reqId: %v EventHeatMap3Query read %v events, %v resources, and %v watch activity", requestId, len(rawRows.Events), len(rawRows.Resources), len(rawRows.WatchActivity))
// Remove rows that don't fit in time range, and clip rows that go outside time range
err = timeFilterResSumMap(rawRows.Resources, queryStartTime, queryEndTime)
if err != nil {
return []byte{}, err
}
err = timeFilterEventsMap(rawRows.Events, queryStartTime, queryEndTime)
if err != nil {
return []byte{}, err
}
timeFilterWatchActivityMap(rawRows.WatchActivity, queryStartTime, queryEndTime)
glog.Infof("reqId: %v EventHeatMap3Query after filter %v events, %v resources, and %v watch activity", requestId, len(rawRows.Events), len(rawRows.Resources), len(rawRows.WatchActivity))
// TODO: Get this 30 minutes from resync time from config
err = adjustLastSeenTimeMap(rawRows.Resources, queryEndTime, 30*time.Minute)
if err != nil {
return []byte{}, err
}
// Simple one-to-one conversion of store resSum record to a d3 row
mapResSumKeyToD3Gantt, err := resSumRowsToD3GanttMap(rawRows.Resources)
if err != nil {
return []byte{}, err
}
// add the event counts in as overlay
mapResSumKeyToOverlay, err := eventCountsToOverlayMap(rawRows.Events)
if err != nil {
return []byte{}, err
}
err = mergeHeatmapWithResources(mapResSumKeyToD3Gantt, mapResSumKeyToOverlay)
if err != nil {
return []byte{}, err
}
// add the watch activity timestamps
mapResSumKeyToWatchActivity, err := watchActivityToMap(rawRows.WatchActivity)
if err != nil {
return []byte{}, err
}
err = mergeHeatmapWithWatchActivity(mapResSumKeyToD3Gantt, mapResSumKeyToWatchActivity)
if err != nil {
return []byte{}, err
}
// Because overlays are grouped by minute, that minute might start before the resource was created or end after it finished
// This moves the overlay start/end values so they are contained properly in the resource timeline
outputRows := convertHeatmapToSlice(mapResSumKeyToD3Gantt)
adjustOverlays(outputRows)
outputRowValidation(outputRows, requestId)
sortParam := params.Get(SortParam)
outputRoot := TimelineRoot{
Rows: outputRows,
ViewOpt: ViewOptions{Sort: sortParam},
}
bytes, err := json.MarshalIndent(outputRoot, "", " ")
if err != nil {
return nil, fmt.Errorf("failed to marshal json: %v", err)
}
return bytes, nil
}
// Grab data from the store. This will return rows from all partitions that intersect with startTime-endTime,
// which will often include more rows than we need.
func getRawDataFromStore(params url.Values, t typed.Tables, startTime time.Time, endTime time.Time, requestId string) (rawData, error) {
ret := rawData{}
ret.Events = map[typed.EventCountKey]*typed.ResourceEventCounts{}
ret.Resources = map[typed.ResourceSummaryKey]*typed.ResourceSummary{}
ret.WatchActivity = map[typed.WatchActivityKey]*typed.WatchActivity{}
err := t.Db().View(func(txn badgerwrap.Txn) error {
var err2 error
var stats typed.RangeReadStats
ret.Events, stats, err2 = t.EventCountTable().RangeRead(txn, paramEventCountSumFn(params), nil, startTime, endTime)
if err2 != nil {
return err2
}
stats.Log(requestId)
ret.Resources, stats, err2 = t.ResourceSummaryTable().RangeRead(txn, paramFilterResSumFn(params), nil, startTime, endTime)
if err2 != nil {
return err2
}
stats.Log(requestId)
ret.WatchActivity, stats, err2 = t.WatchActivityTable().RangeRead(txn, paramFilterWatchActivityFn(params), nil, startTime, endTime)
if err2 != nil {
return err2
}
stats.Log(requestId)
return nil
})
if err != nil {
return rawData{}, err
}
return ret, nil
}
// LastSeen is either the last time a resource changed or the time of our last resync.
// If the last-seen time is within one resync interval of the query end, extend it to the
// query end; otherwise the resource appears to have stopped and its events land outside the resource bar.
func adjustLastSeenTime(resSum *typed.ResourceSummary, queryEndTime time.Time, resync time.Duration) error {
// If the resource was deleted we should not change anything
if resSum.DeletedAtEnd {
return nil
}
lastTs, err := ptypes.Timestamp(resSum.LastSeen)
if err != nil {
return err
}
if !lastTs.Add(resync).Before(queryEndTime) {
resSum.LastSeen, err = ptypes.TimestampProto(queryEndTime)
if err != nil {
return err
}
}
return nil
}
func adjustLastSeenTimeMap(resSumMap map[typed.ResourceSummaryKey]*typed.ResourceSummary, queryEndTime time.Time, resync time.Duration) error {
for _, value := range resSumMap {
err := adjustLastSeenTime(value, queryEndTime, resync)
if err != nil {
return err
}
}
return nil
}
// Simple conversion of a resourceSummary row from the store to a d3Gantt row
func resSumRowToD3Gantt(key typed.ResourceSummaryKey, value *typed.ResourceSummary) (*TimelineRow, error) {
startTs, err := ptypes.Timestamp(value.CreateTime)
if err != nil {
return nil, err
}
lastTs, err := ptypes.Timestamp(value.LastSeen)
if err != nil {
return nil, err
}
newRow := TimelineRow{
Text: key.Name,
Kind: key.Kind,
StartDate: startTs.Unix(),
EndDate: lastTs.Unix(),
Duration: lastTs.Unix() - startTs.Unix(),
Overlays: []Overlay{},
Namespace: key.Namespace,
}
return &newRow, nil
}
func takeNewest(left *TimelineRow, right *TimelineRow) *TimelineRow {
if left == nil {
return right
}
if right == nil {
return left
}
if left.EndDate > right.EndDate {
return left
}
return right
}
// Takes in all ResSum rows and returns a map of key to D3Gantt row
func resSumRowsToD3GanttMap(summaries map[typed.ResourceSummaryKey]*typed.ResourceSummary) (map[typed.ResourceSummaryKey]*TimelineRow, error) {
result := map[typed.ResourceSummaryKey]*TimelineRow{}
for key, value := range summaries {
// This needs to be set to something constant so all resources come together
key.PartitionId = EmptyPartition
newRow, err := resSumRowToD3Gantt(key, value)
if err != nil {
return result, err
}
_, ok := result[key]
if !ok {
result[key] = newRow
} else {
// We have more than one entry for this resource summary row. Take the newest
result[key] = takeNewest(newRow, result[key])
}
}
return result, nil
}
// Take a row from EventCountTable and extract the matching ResSum key and a slice of D3 Overlays
func eventCountRowToD3GanttOverlay(key typed.EventCountKey, value *typed.ResourceEventCounts) (typed.ResourceSummaryKey, []Overlay, error) {
partitionStartTimestamp, _, err := untyped.GetTimeRangeForPartition(key.PartitionId)
if err != nil {
return typed.ResourceSummaryKey{}, []Overlay{}, err
}
refResSumKey := typed.NewResourceSummaryKey(partitionStartTimestamp, key.Kind, key.Namespace, key.Name, key.Uid)
// This is a little ugly, but we want to group all the same reasons for the same bucket and sum them
mapBucketMinToReasonToCount := map[int64]map[string]int32{}
for unixMinute, eventCountMap := range value.MapMinToEvents {
for reason, count := range eventCountMap.MapReasonToCount {
_, ok := mapBucketMinToReasonToCount[unixMinute]
if !ok {
mapBucketMinToReasonToCount[unixMinute] = map[string]int32{}
}
mapBucketMinToReasonToCount[unixMinute][reason] += count
}
}
mapBucketMinToText := map[int64]string{}
for unixMinute, mapReasonToCount := range mapBucketMinToReasonToCount {
// We need to get all reasons and sort them so we have deterministic output for easy unit tests
sortedReasons := []string{}
for reason := range mapReasonToCount {
sortedReasons = append(sortedReasons, reason)
}
sort.Strings(sortedReasons)
text := ""
for idx, reason := range sortedReasons {
if idx != 0 {
text += " "
}
text += fmt.Sprintf("%v:%v", reason, mapReasonToCount[reason])
}
mapBucketMinToText[unixMinute] = text
}
overlays := []Overlay{}
for bucketMin, text := range mapBucketMinToText {
newOverlay := Overlay{
Text: text,
StartDate: bucketMin,
Duration: 60, // EventCounts are per minute
EndDate: time.Unix(bucketMin, 0).UTC().Add(time.Minute).Unix(),
}
overlays = append(overlays, newOverlay)
}
sort.Slice(overlays, func(i, j int) bool {
return overlays[i].StartDate < overlays[j].StartDate
})
return *refResSumKey, overlays, nil
}
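The deterministic bucket-text construction above (sort the reasons, then join them as `Reason:Count` pairs) can be exercised in isolation; this sketch uses a plain map rather than the store types:

```go
package main

import (
	"fmt"
	"sort"
)

// reasonText renders a reason->count map as space-separated "Reason:Count"
// pairs, sorted by reason so the output is deterministic for unit tests.
func reasonText(reasonToCount map[string]int32) string {
	reasons := make([]string, 0, len(reasonToCount))
	for reason := range reasonToCount {
		reasons = append(reasons, reason)
	}
	sort.Strings(reasons)
	text := ""
	for i, reason := range reasons {
		if i != 0 {
			text += " "
		}
		text += fmt.Sprintf("%v:%v", reason, reasonToCount[reason])
	}
	return text
}

func main() {
	// Map iteration order is random in Go, but the sort makes output stable.
	fmt.Println(reasonText(map[string]int32{"ImagePullError": 1, "FailedScheduling": 2}))
	// FailedScheduling:2 ImagePullError:1
}
```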
func eventCountsToOverlayMap(events map[typed.EventCountKey]*typed.ResourceEventCounts) (map[typed.ResourceSummaryKey][]Overlay, error) {
retMap := map[typed.ResourceSummaryKey][]Overlay{}
for key, value := range events {
resSumRefKey, overlays, err := eventCountRowToD3GanttOverlay(key, value)
if err != nil {
return retMap, err
}
// In order for keys to join properly we need an empty partition ID
resSumRefKey.PartitionId = EmptyPartition
retMap[resSumRefKey] = append(retMap[resSumRefKey], overlays...)
}
return retMap, nil
}
func mergeHeatmapWithResources(resKeyToD3Map map[typed.ResourceSummaryKey]*TimelineRow, resKeyToOverlayMap map[typed.ResourceSummaryKey][]Overlay) error {
// For some reason kubernetes Node objects have normal UUIDs, but events for a node have the node name filled in for involvedObject.UUID
// This does not appear to be the case for other objects. So we need a hack to make them match up properly
for resKey, d3row := range resKeyToD3Map {
if resKey.Kind == kubeextractor.NodeKind {
resKey.Uid = resKey.Name
}
d3row.Overlays = resKeyToOverlayMap[resKey]
if d3row.Overlays == nil {
d3row.Overlays = []Overlay{}
}
}
return nil
}
func watchActivityToMap(watchActivity map[typed.WatchActivityKey]*typed.WatchActivity) (map[typed.ResourceSummaryKey]typed.WatchActivity, error) {
retMap := map[typed.ResourceSummaryKey]typed.WatchActivity{}
for key, value := range watchActivity {
partitionStartTimestamp, _, err := untyped.GetTimeRangeForPartition(key.PartitionId)
if err != nil {
return nil, err
}
resSumRefKey := *typed.NewResourceSummaryKey(partitionStartTimestamp, key.Kind, key.Namespace, key.Name, key.Uid)
resSumRefKey.PartitionId = EmptyPartition // In order for keys to join properly we need an empty partition ID
combined := typed.WatchActivity{}
if existing, found := retMap[resSumRefKey]; found {
combined = existing
}
combined.ChangedAt = append(combined.ChangedAt, value.ChangedAt...)
combined.NoChangeAt = append(combined.NoChangeAt, value.NoChangeAt...)
retMap[resSumRefKey] = combined
}
return retMap, nil
}
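Merging activity across partitions is a straightforward list concatenation per resource key; a stand-in sketch (plain struct, not `typed.WatchActivity`) of that merge step:

```go
package main

import "fmt"

// activity is a stand-in for the per-resource watch activity record:
// timestamps where the resource changed, and where a watch event arrived
// with no change.
type activity struct{ ChangedAt, NoChangeAt []int64 }

// merge combines two partitions' activity for the same resource by
// concatenating their timestamp lists, as the map-building loop above does.
func merge(existing, next activity) activity {
	existing.ChangedAt = append(existing.ChangedAt, next.ChangedAt...)
	existing.NoChangeAt = append(existing.NoChangeAt, next.NoChangeAt...)
	return existing
}

func main() {
	a := activity{ChangedAt: []int64{100}, NoChangeAt: []int64{160}}
	b := activity{ChangedAt: []int64{220}}
	fmt.Println(merge(a, b).ChangedAt) // [100 220]
}
```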
func mergeHeatmapWithWatchActivity(resKeyToD3Map map[typed.ResourceSummaryKey]*TimelineRow, watchActivity map[typed.ResourceSummaryKey]typed.WatchActivity) error {
for resKey, d3row := range resKeyToD3Map {
if activity, found := watchActivity[resKey]; found {
d3row.ChangedAt = activity.ChangedAt
d3row.NoChangeAt = activity.NoChangeAt
} else {
glog.V(2).Infof("no watch activity found for %v", resKey)
}
}
return nil
}
func convertHeatmapToSlice(resKeyToD3Map map[typed.ResourceSummaryKey]*TimelineRow) []TimelineRow {
var ret []TimelineRow
for _, d3row := range resKeyToD3Map {
ret = append(ret, *d3row)
}
return ret
}
// TODO: Add unit tests
// This adjusts overlays so they are always contained in the time range of the d3row. This is needed because we bucket
// events by minute, but resources start and end somewhere inside the minute.
//
// Per-Minute Overlay: |-- 1 --|-- 0 --|-- 4 --|...
// d3row: |-----------------|
// After adjustment: |- 1 |-- 0 --|- 4 |
func adjustOverlays(rows []TimelineRow) {
for _, d3row := range rows {
// Fix all the start times for the overlay
for idx, ol := range d3row.Overlays {
// Dates are unix seconds; an overlay bucket can start at most one minute early
tooEarlySec := d3row.StartDate - ol.StartDate
if tooEarlySec > 0 && tooEarlySec < 60 {
d3row.Overlays[idx].StartDate += tooEarlySec
d3row.Overlays[idx].Duration -= tooEarlySec
if d3row.Overlays[idx].Duration <= 0 {
// We need to collapse this into a single time
d3row.Overlays[idx].Duration = 0
d3row.Overlays[idx].StartDate = d3row.Overlays[idx].EndDate
}
}
}
}
for _, d3row := range rows {
// Do a new loop for the end time
for idx, ol := range d3row.Overlays {
// Dates are unix seconds; a bucket can run over the row end by at most one minute
overSec := ol.EndDate - d3row.EndDate
if overSec > 0 && overSec < 60 {
d3row.Overlays[idx].EndDate -= overSec
d3row.Overlays[idx].Duration -= overSec
if d3row.Overlays[idx].Duration <= 0 {
d3row.Overlays[idx].Duration = 0
d3row.Overlays[idx].EndDate = d3row.Overlays[idx].StartDate
}
}
}
}
}
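The start-clipping half of the adjustment can be sketched with stand-in types, assuming second-resolution unix timestamps and one-minute buckets (not the real `TimelineRow`/`Overlay` types):

```go
package main

import "fmt"

// Minimal stand-ins: just enough fields to exercise the clipping rule above.
type overlay struct{ Start, End, Duration int64 }
type row struct {
	Start, End int64
	Overlays   []overlay
}

// clipStarts pulls an overlay's start up to the row start (shrinking its
// duration by the same amount) when the overlay begins less than one
// bucket (60s) before the row does.
func clipStarts(rows []row) {
	for _, r := range rows { // r copies the struct, but Overlays shares its backing array
		for i, ol := range r.Overlays {
			tooEarly := r.Start - ol.Start
			if tooEarly > 0 && tooEarly < 60 {
				r.Overlays[i].Start += tooEarly
				r.Overlays[i].Duration -= tooEarly
				if r.Overlays[i].Duration <= 0 {
					// Collapse into a single point in time
					r.Overlays[i].Duration = 0
					r.Overlays[i].Start = r.Overlays[i].End
				}
			}
		}
	}
}

func main() {
	rows := []row{{
		Start:    100,
		End:      200,
		Overlays: []overlay{{Start: 90, End: 150, Duration: 60}},
	}}
	clipStarts(rows)
	fmt.Println(rows[0].Overlays[0]) // {100 150 50}
}
```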
// TODO: Add unit tests
func outputRowValidation(rows []TimelineRow, requestId string) {
for _, d3row := range rows {
if d3row.StartDate > d3row.EndDate {
glog.Errorf("reqId: %v d3 row has start %v > end %v", requestId, d3row.StartDate, d3row.EndDate)
}
if d3row.StartDate+d3row.Duration != d3row.EndDate {
glog.Errorf("reqId: %v d3 row times are inconsistent. start %v + duration %v != end %v. Off by %v",
requestId, d3row.StartDate, d3row.Duration, d3row.EndDate, d3row.StartDate+d3row.Duration-d3row.EndDate)
}
if d3row.Duration < 0 {
glog.Errorf("reqId: %v d3row has negative duration %v", requestId, d3row.Duration)
}
for _, ol := range d3row.Overlays {
if ol.StartDate > ol.EndDate {
glog.Errorf("reqId: %v overlay has start %v > end %v", requestId, ol.StartDate, ol.EndDate)
}
if ol.StartDate+ol.Duration != ol.EndDate {
glog.Errorf("reqId: %v overlay times are inconsistent. start %v + duration %v != end %v. Off by %v",
requestId, ol.StartDate, ol.Duration, ol.EndDate, ol.StartDate+ol.Duration-ol.EndDate)
}
if ol.Duration < 0 {
glog.Errorf("reqId: %v overlay has negative duration [%v] %v", requestId, ol.Text, ol.Duration)
}
if ol.StartDate < d3row.StartDate {
tooEarlySec := d3row.StartDate - ol.StartDate
glog.Errorf("reqId: %v overlay is outside the bounds of d3 row. OL Start %v < D3 Start %v. Too early by %v seconds",
requestId, ol.StartDate, d3row.StartDate, tooEarlySec)
}
if ol.EndDate > d3row.EndDate {
tooLateSec := ol.EndDate - d3row.EndDate
glog.Errorf("reqId: %v overlay is outside the bounds of d3 row. OL End %v > D3 End %v. Runs over by %v seconds",
requestId, ol.EndDate, d3row.EndDate, tooLateSec)
}
}
}
}
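The invariants this validation checks (start at or before end, and start + duration == end) can be stated in a tiny standalone sketch with a stand-in struct:

```go
package main

import "fmt"

// span is a stand-in for both the row and overlay time ranges checked above.
type span struct{ Start, Duration, End int64 }

// consistent reports whether a span satisfies the validated invariants:
// it does not end before it starts, and duration accounts exactly for
// the gap between start and end.
func consistent(s span) bool {
	return s.Start <= s.End && s.Start+s.Duration == s.End
}

func main() {
	fmt.Println(consistent(span{Start: 100, Duration: 50, End: 150})) // true
	fmt.Println(consistent(span{Start: 100, Duration: 60, End: 150})) // false
}
```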

/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package queries
import (
"github.com/dgraph-io/badger"
"github.com/golang/protobuf/ptypes"
"github.com/salesforce/sloop/pkg/sloop/store/typed"
"github.com/salesforce/sloop/pkg/sloop/store/untyped"
"github.com/salesforce/sloop/pkg/sloop/store/untyped/badgerwrap"
"github.com/salesforce/sloop/pkg/sloop/test/assertex"
"github.com/stretchr/testify/assert"
"net/url"
"testing"
"time"
)
const (
kindPod = "Pod"
someNamespace = "somens"
someName = "somename"
someUid = "someuid"
)
var someHeatMapQueryStart = time.Date(2019, 3, 1, 0, 0, 0, 0, time.UTC)
var heatMapPartStart = someHeatMapQueryStart
var someResSumTs = someHeatMapQueryStart.Add(time.Minute)
var firstSeenTs = someHeatMapQueryStart.Add(2 * time.Minute)
var events1Ts = someHeatMapQueryStart.Add(7 * time.Minute)
var events2Ts = someHeatMapQueryStart.Add(28 * time.Minute)
var lastSeenTs = someHeatMapQueryStart.Add(50 * time.Minute)
var someHeatMapQueryEnd = someHeatMapQueryStart.Add(60 * time.Minute)
func helper_AddResSum(t *testing.T, tables typed.Tables) {
someResSumKey := typed.NewResourceSummaryKey(someResSumTs, kindPod, someNamespace, someName, someUid)
CreatePts, _ := ptypes.TimestampProto(firstSeenTs)
LastSeenPts, _ := ptypes.TimestampProto(lastSeenTs)
someResSumVal := typed.ResourceSummary{
CreateTime: CreatePts,
LastSeen: LastSeenPts,
}
err := tables.Db().Update(func(txn badgerwrap.Txn) error {
return tables.ResourceSummaryTable().Set(txn, someResSumKey.String(), &someResSumVal)
})
assert.Nil(t, err)
}
func helper_AddEventSum(t *testing.T, tables typed.Tables) {
someEventKey := typed.NewEventCountKey(someResSumTs, kindPod, someNamespace, someName, someUid)
firstEventsMin := events1Ts.Unix()
assert.Equal(t, int64(1551398820), firstEventsMin)
secondEventsMin := events2Ts.Unix()
assert.Equal(t, int64(1551400080), secondEventsMin)
someEventValue := typed.ResourceEventCounts{
MapMinToEvents: map[int64]*typed.EventCounts{
firstEventsMin: {
MapReasonToCount: map[string]int32{
"ImagePullError": 1,
"LivenessProveFailed": 2,
},
},
secondEventsMin: {
MapReasonToCount: map[string]int32{
"ContainerCreated": 3,
},
},
},
}
err := tables.Db().Update(func(txn badgerwrap.Txn) error {
return tables.EventCountTable().Set(txn, someEventKey.String(), &someEventValue)
})
assert.Nil(t, err)
}
func helper_AddWatchActivity(t *testing.T, tables typed.Tables) {
someWatchActivityKey := typed.NewWatchActivityKey(untyped.GetPartitionId(someResSumTs), kindPod, someNamespace, someName, someUid)
firstActivity := events1Ts.Unix()
assert.Equal(t, int64(1551398820), firstActivity)
secondActivity := events2Ts.Unix()
assert.Equal(t, int64(1551400080), secondActivity)
someWatchActivity := &typed.WatchActivity{
NoChangeAt: []int64{firstActivity},
ChangedAt: []int64{secondActivity},
}
err := tables.Db().Update(func(txn badgerwrap.Txn) error {
return tables.WatchActivityTable().Set(txn, someWatchActivityKey.String(), someWatchActivity)
})
assert.Nil(t, err)
}
func helper_UrlValues() url.Values {
return url.Values{
NamespaceParam: {AllNamespaces},
KindParam: {AllKinds},
}
}
func Test_EventHeatMap3_SimpleTestWithOneDeployment(t *testing.T) {
untyped.TestHookSetPartitionDuration(time.Hour * 24)
db, err := (&badgerwrap.MockFactory{}).Open(badger.DefaultOptions(""))
assert.Nil(t, err)
tables := typed.NewTableList(db)
helper_AddResSum(t, tables)
resultJsonBytes, err := EventHeatMap3Query(helper_UrlValues(), tables, someHeatMapQueryStart, someHeatMapQueryEnd, someRequestId)
assert.Nil(t, err)
expectedJson := `{
"view_options": {
"sort": ""
},
"rows": [
{
"text": "somename",
"duration": 3480,
"kind": "Pod",
"namespace": "somens",
"overlays": [],
"changedat": null,
"nochangeat": null,
"start_date": 1551398520,
"end_date": 1551402000
}
]
}`
assertex.JsonEqual(t, expectedJson, string(resultJsonBytes))
}
func Test_EventHeatMap3_OneDeploymentAnd3Events(t *testing.T) {
untyped.TestHookSetPartitionDuration(time.Hour * 24)
db, err := (&badgerwrap.MockFactory{}).Open(badger.DefaultOptions(""))
assert.Nil(t, err)
tables := typed.NewTableList(db)
helper_AddResSum(t, tables)
helper_AddEventSum(t, tables)
helper_AddWatchActivity(t, tables)
resultJsonBytes, err := EventHeatMap3Query(helper_UrlValues(), tables, someHeatMapQueryStart, someHeatMapQueryEnd, someRequestId)
assert.Nil(t, err)
expectedJson := `{
"view_options": {
"sort": ""
},
"rows": [
{
"text": "somename",
"duration": 3480,
"kind": "Pod",
"namespace": "somens",
"overlays": [
{
"text": "ImagePullError:1 LivenessProveFailed:2",
"start_date": 1551398820,
"duration": 60,
"end_date": 1551398880
},
{
"text": "ContainerCreated:3",
"start_date": 1551400080,
"duration": 60,
"end_date": 1551400140
}
],
"changedat": [
1551400080
],
"nochangeat": [
1551398820
],
"start_date": 1551398520,
"end_date": 1551402000
}
]
}`
assertex.JsonEqual(t, expectedJson, string(resultJsonBytes))
}
var someAdjQueryEndTime = time.Date(2019, 3, 1, 0, 0, 0, 0, time.UTC)
var someAdjLastSeenOld = someAdjQueryEndTime.Add(-5 * time.Hour)
var someAdjLastSeenRecent = someAdjQueryEndTime.Add(-5 * time.Minute)
var someResyncDuration = time.Duration(30) * time.Minute
func Test_adjustLastSeenTime_TooOldNoChange(t *testing.T) {
lastOld, _ := ptypes.TimestampProto(someAdjLastSeenOld)
resSum := &typed.ResourceSummary{
LastSeen: lastOld,
}
err := adjustLastSeenTime(resSum, someAdjQueryEndTime, someResyncDuration)
assert.Nil(t, err)
assert.Equal(t, lastOld, resSum.LastSeen)
}
func Test_adjustLastSeenTime_RecentChanged(t *testing.T) {
lastOld, _ := ptypes.TimestampProto(someAdjLastSeenRecent)
queryEnd, _ := ptypes.TimestampProto(someAdjQueryEndTime)
resSum := &typed.ResourceSummary{
LastSeen: lastOld,
}
err := adjustLastSeenTime(resSum, someAdjQueryEndTime, someResyncDuration)
assert.Nil(t, err)
assert.Equal(t, queryEnd, resSum.LastSeen)
}
func Test_adjustLastSeenTime_RecentButDeletedNotChanged(t *testing.T) {
lastOld, _ := ptypes.TimestampProto(someAdjLastSeenRecent)
resSum := &typed.ResourceSummary{
LastSeen: lastOld,
DeletedAtEnd: true,
}
err := adjustLastSeenTime(resSum, someAdjQueryEndTime, someResyncDuration)
assert.Nil(t, err)
assert.Equal(t, lastOld, resSum.LastSeen)
}

/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package queries
import (
"encoding/json"
"fmt"
"github.com/salesforce/sloop/pkg/sloop/kubeextractor"
"github.com/salesforce/sloop/pkg/sloop/store/typed"
"github.com/salesforce/sloop/pkg/sloop/store/untyped/badgerwrap"
"net/url"
"sort"
"time"
)
// Consider: Make use of resources to limit what namespaces we return.
// For example, if kind == ConfigMap, only return namespaces that contain a ConfigMap
func NamespaceQuery(params url.Values, tables typed.Tables, startTime time.Time, endTime time.Time, requestId string) ([]byte, error) {
var resourcesNs map[typed.ResourceSummaryKey]*typed.ResourceSummary
err := tables.Db().View(func(txn badgerwrap.Txn) error {
var err2 error
var stats typed.RangeReadStats
resourcesNs, stats, err2 = tables.ResourceSummaryTable().RangeRead(txn, isNamespace, nil, startTime, endTime)
if err2 != nil {
return err2
}
stats.Log(requestId)
return nil
})
if err != nil {
return []byte{}, err
}
namespaces := resSumRowsToNamespaceStrings(resourcesNs)
namespaces = append(namespaces, AllNamespaces)
bytes, err := json.MarshalIndent(namespaces, "", " ")
if err != nil {
return nil, fmt.Errorf("failed to marshal json: %v", err)
}
return bytes, nil
}
// TODO: Only return kinds for the specified namespace
func KindQuery(params url.Values, tables typed.Tables, startTime time.Time, endTime time.Time, requestId string) ([]byte, error) {
kindExists := make(map[string]bool)
err := tables.Db().View(func(txn badgerwrap.Txn) error {
_, stats, err2 := tables.ResourceSummaryTable().RangeRead(txn, isKind(kindExists), nil, startTime, endTime)
if err2 != nil {
return err2
}
stats.Log(requestId)
return nil
})
if err != nil {
return []byte{}, err
}
kinds := []string{AllKinds}
for k := range kindExists {
kinds = append(kinds, k)
}
sort.Strings(kinds)
bytes, err := json.MarshalIndent(kinds, "", " ")
if err != nil {
return nil, fmt.Errorf("failed to marshal json: %v", err)
}
return bytes, nil
}
func QueryAvailableQueries(params url.Values, tables typed.Tables, startTime time.Time, endTime time.Time, requestId string) ([]byte, error) {
queries := GetNamesOfQueries()
bytes, err := json.MarshalIndent(queries, "", " ")
if err != nil {
return nil, fmt.Errorf("failed to marshal json: %v", err)
}
return bytes, nil
}
func resSumRowsToNamespaceStrings(resources map[typed.ResourceSummaryKey]*typed.ResourceSummary) []string {
namespaceList := []string{}
namespaceExists := make(map[string]bool)
for key := range resources {
if _, ok := namespaceExists[key.Name]; !ok {
namespaceList = append(namespaceList, key.Name)
namespaceExists[key.Name] = true
}
}
}
sort.Strings(namespaceList)
return namespaceList
}
func isNamespace(k string) bool {
key := &typed.ResourceSummaryKey{}
err := key.Parse(k)
if err != nil {
return false
}
return key.Kind == kubeextractor.NamespaceKind
}
func isKind(kindExists map[string]bool) func(string) bool {
return func(key string) bool {
return keepResourceSummaryKind(key, kindExists)
}
}
func resSumRowsToKindStrings(resources map[typed.ResourceSummaryKey]*typed.ResourceSummary) []string {
kindList := []string{""}
kindExists := make(map[string]bool)
for key := range resources {
if _, ok := kindExists[key.Kind]; !ok {
kindList = append(kindList, key.Kind)
kindExists[key.Kind] = true
}
}
sort.Strings(kindList)
return kindList
}
func keepResourceSummaryKind(key string, kindExists map[string]bool) bool {
// parse the key and get its kind
k := &typed.ResourceSummaryKey{}
err := k.Parse(key)
if err != nil {
return false
}
kind := k.Kind
// Return true only the first time this kind is seen
if _, ok := kindExists[kind]; !ok {
kindExists[kind] = true
return true
}
return false
}

/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package queries
import (
"github.com/dgraph-io/badger"
"github.com/golang/protobuf/ptypes"
"github.com/salesforce/sloop/pkg/sloop/store/typed"
"github.com/salesforce/sloop/pkg/sloop/store/untyped"
"github.com/salesforce/sloop/pkg/sloop/store/untyped/badgerwrap"
"github.com/salesforce/sloop/pkg/sloop/test/assertex"
"github.com/stretchr/testify/assert"
"net/url"
"testing"
"time"
)
var (
someTs = time.Date(2019, 1, 2, 3, 4, 5, 6, time.UTC)
somePTime, _ = ptypes.TimestampProto(someTs)
someFirstSeenTime = time.Date(2019, 3, 4, 3, 4, 5, 6, time.UTC)
mostRecentTime = time.Date(2019, 3, 6, 3, 4, 0, 0, time.UTC)
someLastSeenTime = mostRecentTime.Add(-1 * time.Hour)
someCreateTime = mostRecentTime.Add(-3 * time.Hour)
)
func helper_get_resSumtable(keys []*typed.ResourceSummaryKey, t *testing.T) typed.Tables {
firstSeen, err := ptypes.TimestampProto(someFirstSeenTime)
assert.Nil(t, err)
lastSeen, err := ptypes.TimestampProto(someLastSeenTime)
assert.Nil(t, err)
val := &typed.ResourceSummary{FirstSeen: firstSeen, LastSeen: lastSeen}
db, err := (&badgerwrap.MockFactory{}).Open(badger.DefaultOptions(""))
assert.Nil(t, err)
wt := typed.OpenResourceSummaryTable()
err = db.Update(func(txn badgerwrap.Txn) error {
for _, key := range keys {
txerr := wt.Set(txn, key.String(), val)
if txerr != nil {
return txerr
}
}
return nil
})
assert.Nil(t, err)
tables := typed.NewTableList(db)
return tables
}
func helper_get_params() url.Values {
return url.Values{
QueryParam: {"EventHeatMap"},
NamespaceParam: {"some-namespace"},
KindParam: {AllNamespaces},
LookbackParam: {"24h"},
"dhxr1567107277290": {"1"},
}
}
func Test_GetNamespaces_Success(t *testing.T) {
untyped.TestHookSetPartitionDuration(time.Hour)
keys := make([]*typed.ResourceSummaryKey, 2)
keys[0] = typed.NewResourceSummaryKey(someTs, "Namespace", "", "mynamespace", "68510937-4ffc-11e9-8e26-1418775557c8")
keys[1] = typed.NewResourceSummaryKey(someTs, "Deployment", "namespace-b", "somename-b", "45510937-d4fc-11e9-8e26-14187754567")
tables := helper_get_resSumtable(keys, t)
filterData, err := NamespaceQuery(url.Values{}, tables, someTs, someTs, someRequestId)
assert.Nil(t, err)
expectedNamespaces := `[
"mynamespace",
"_all"
]`
assertex.JsonEqual(t, expectedNamespaces, string(filterData))
}
func Test_GetNamespaces_EmptyNamespace(t *testing.T) {
untyped.TestHookSetPartitionDuration(time.Hour)
keys := make([]*typed.ResourceSummaryKey, 2)
keys[0] = typed.NewResourceSummaryKey(someTs, "SomeKind", "namespace-a", "mynamespace", "68510937-4ffc-11e9-8e26-1418775557c8")
keys[1] = typed.NewResourceSummaryKey(someTs, "SomeKind", "namespace-b", "somename-b", "45510937-d4fc-11e9-8e26-14187754567")
tables := helper_get_resSumtable(keys, t)
filterData, err := NamespaceQuery(url.Values{}, tables, someTs, someTs, someRequestId)
assert.Nil(t, err)
expectedNamespaces := `[
"_all"
]`
assertex.JsonEqual(t, expectedNamespaces, string(filterData))
}
func Test_GetKinds_SimpleCase(t *testing.T) {
untyped.TestHookSetPartitionDuration(time.Hour)
keys := make([]*typed.ResourceSummaryKey, 2)
keys[0] = typed.NewResourceSummaryKey(someTs, "Namespace", "", "mynamespace", "68510937-4ffc-11e9-8e26-1418775557c8")
keys[1] = typed.NewResourceSummaryKey(someTs, "Deployment", "namespace-b", "somename-b", "45510937-d4fc-11e9-8e26-14187754567")
tables := helper_get_resSumtable(keys, t)
filterData, err := KindQuery(url.Values{}, tables, someTs, someTs, someRequestId)
assert.Nil(t, err)
expectedKinds := `[
"Deployment",
"Namespace",
"_all"
]`
assertex.JsonEqual(t, expectedKinds, string(filterData))
}
func Test_resSumRowsToNamespaceStrings(t *testing.T) {
resources := make(map[typed.ResourceSummaryKey]*typed.ResourceSummary)
firstTimeProto, err := ptypes.TimestampProto(someFirstSeenTime)
assert.Nil(t, err)
lastTimeProto, err := ptypes.TimestampProto(someLastSeenTime)
assert.Nil(t, err)
createTimeProto, err := ptypes.TimestampProto(someCreateTime)
assert.Nil(t, err)
resources[typed.ResourceSummaryKey{
PartitionId: "0",
Kind: "Namespace",
Namespace: "",
Name: "name1",
Uid: "uid1",
}] = &typed.ResourceSummary{
FirstSeen: firstTimeProto,
LastSeen: lastTimeProto,
CreateTime: createTimeProto,
DeletedAtEnd: false,
}
resources[typed.ResourceSummaryKey{
PartitionId: "1",
Kind: "Namespace",
Namespace: "",
Name: "name2",
Uid: "uid2",
}] = &typed.ResourceSummary{
FirstSeen: firstTimeProto,
LastSeen: lastTimeProto,
CreateTime: createTimeProto,
DeletedAtEnd: false,
}
// add a duplicate namespace
resources[typed.ResourceSummaryKey{
PartitionId: "2",
Kind: "Namespace",
Namespace: "",
Name: "name2",
Uid: "uid23",
}] = &typed.ResourceSummary{
FirstSeen: firstTimeProto,
LastSeen: lastTimeProto,
CreateTime: createTimeProto,
DeletedAtEnd: true,
}
expectedData := []string{"name1", "name2"}
data := resSumRowsToNamespaceStrings(resources)
assert.Equal(t, expectedData, data)
}
func Test_isNamespace_Namespace(t *testing.T) {
// test when kind is namespace
key1 := "/ressum/001567094400/Namespace//some-othernamespace/96b0e282-9744-11e8-9d31-1418775557c8"
flag := isNamespace(key1)
assert.True(t, flag)
}
func Test_isNamespace_KindWithNamespace(t *testing.T) {
// test when kind is not namespace
key2 := "/ressum/001562961600/Deployment/some-namespace/some-name/f8f372a3-f731-11e8-b3bd-e24c7f08fac6"
flag := isNamespace(key2)
assert.False(t, flag)
}
func Test_isNamespace_KindWithoutNamespace(t *testing.T) {
// test when there is no namespace field
key3 := "/eventcount/001567022400/Node//somehost/somehost"
flag := isNamespace(key3)
assert.False(t, flag)
}
func Test_isKind_Empty(t *testing.T) {
kindExists := make(map[string]bool)
key := "/ressum/001567105200/StatefulSet/some-namespace/some-name/52071bcf-64cf-11e9-b4c3-1418774b3e9d"
flag := isKind(kindExists)(key)
assert.True(t, flag)
}
func Test_isKind_KindExists(t *testing.T) {
kindExists := make(map[string]bool)
kindExists["Deployment"] = true
key2 := "/ressum/001562961600/Deployment/some-namespace/some-name/f8f372a3-f731-11e8-b3bd-e24c7f08fac6"
flag := isKind(kindExists)(key2)
assert.False(t, flag)
}
func Test_resSumRowsToKindStrings(t *testing.T) {
resources := make(map[typed.ResourceSummaryKey]*typed.ResourceSummary)
firstTimeProto, err := ptypes.TimestampProto(someFirstSeenTime)
assert.Nil(t, err)
lastTimeProto, err := ptypes.TimestampProto(someLastSeenTime)
assert.Nil(t, err)
createTimeProto, err := ptypes.TimestampProto(someCreateTime)
assert.Nil(t, err)
resources[typed.ResourceSummaryKey{
PartitionId: "0",
Kind: "Pod",
Namespace: "",
Name: "name1",
Uid: "uid1",
}] = &typed.ResourceSummary{
FirstSeen: firstTimeProto,
LastSeen: lastTimeProto,
CreateTime: createTimeProto,
DeletedAtEnd: false,
}
resources[typed.ResourceSummaryKey{
PartitionId: "1",
Kind: "Deployment",
Namespace: "",
Name: "name2",
Uid: "uid2",
}] = &typed.ResourceSummary{
FirstSeen: firstTimeProto,
LastSeen: lastTimeProto,
CreateTime: createTimeProto,
DeletedAtEnd: false,
}
// add a duplicate kind
resources[typed.ResourceSummaryKey{
PartitionId: "2",
Kind: "Deployment",
Namespace: "",
Name: "name2",
Uid: "uid23",
}] = &typed.ResourceSummary{
FirstSeen: firstTimeProto,
LastSeen: lastTimeProto,
CreateTime: createTimeProto,
DeletedAtEnd: true,
}
expectedData := []string{"", "Deployment", "Pod"}
data := resSumRowsToKindStrings(resources)
assert.Equal(t, expectedData, data)
}

/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package queries
import (
"github.com/golang/glog"
"github.com/golang/protobuf/ptypes"
"github.com/salesforce/sloop/pkg/sloop/kubeextractor"
"github.com/salesforce/sloop/pkg/sloop/store/typed"
"net/url"
"strings"
"time"
)
func paramFilterResSumFn(params url.Values) func(string) bool {
selectedNamespace := params.Get(NamespaceParam)
selectedKind := params.Get(KindParam)
selectedNameSubstring := params.Get(NameMatchParam)
selectedNameExactMatch := params.Get(NameParam)
selectedUuid := params.Get(UuidParam)
return func(key string) bool {
k := &typed.ResourceSummaryKey{}
err := k.Parse(key)
if err != nil {
return false
}
kind := k.Kind
namespace := k.Namespace
name := k.Name
uuid := k.Uid
return keepRowHelper(name, kind, namespace, selectedKind, selectedNamespace, selectedNameSubstring, selectedNameExactMatch, selectedUuid, uuid)
}
}
func paramEventCountSumFn(params url.Values) func(string) bool {
selectedNamespace := params.Get(NamespaceParam)
selectedKind := params.Get(KindParam)
selectedNameMatchSubstring := params.Get(NameMatchParam)
return func(key string) bool {
k := &typed.EventCountKey{}
err := k.Parse(key)
if err != nil {
return false
}
kind := k.Kind
namespace := k.Namespace
name := k.Name
return keepRowHelper(name, kind, namespace, selectedKind, selectedNamespace, selectedNameMatchSubstring, "", "", "")
}
}
func paramFilterWatchActivityFn(params url.Values) func(string) bool {
selectedNamespace := params.Get(NamespaceParam)
selectedKind := params.Get(KindParam)
selectedNameSubstring := params.Get(NameMatchParam)
selectedNameExactMatch := params.Get(NameParam)
selectedUuid := params.Get(UuidParam)
return func(key string) bool {
k := &typed.WatchActivityKey{}
err := k.Parse(key)
if err != nil {
return false
}
kind := k.Kind
namespace := k.Namespace
name := k.Name
uuid := k.Uid
return keepRowHelper(name, kind, namespace, selectedKind, selectedNamespace, selectedNameSubstring, selectedNameExactMatch, selectedUuid, uuid)
}
}
// This filter is only used by GetEventData. Unlike paramEventCountSumFn, which parses
// keys as EventCountKey, this one parses WatchTableKey, so the two cannot be combined.
func paramEventDataFn(params url.Values) func(string) bool {
selectedNamespace := params.Get(NamespaceParam)
selectedName := params.Get(NameParam)
selectedKind := params.Get(KindParam)
// Nodes in the watch table are stored under the default namespace
// TODO: Figure out if this is correct from k8s or coming from some upstream logic in sloop
if selectedKind == kubeextractor.NodeKind {
selectedNamespace = DefaultNamespace
}
return func(key string) bool {
k := &typed.WatchTableKey{}
err := k.Parse(key)
if err != nil {
glog.Errorf("Failed to parse key: %v", key)
return false
}
if k.Kind != kubeextractor.EventKind {
return false
}
if selectedNamespace != AllNamespaces && k.Namespace != selectedNamespace {
return false
}
involvedObjectName, err := kubeextractor.GetInvolvedObjectNameFromEventName(k.Name)
if err != nil {
glog.Errorf("Could not get involved object name from event name: %v", key)
return false
}
if involvedObjectName != selectedName {
return false
}
return true
}
}
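The filter above keys off the name embedded in the watch key; test fixtures elsewhere in this commit use event names like `someName.xx` that resolve to the involved object `someName`. A plausible sketch of that extraction, assuming the event suffix follows the first dot (the real `kubeextractor.GetInvolvedObjectNameFromEventName` may differ):

```go
package main

import (
	"fmt"
	"strings"
)

// involvedObjectName guesses the owning object's name from an event name by
// dropping everything after the first dot. Hypothetical sketch only.
func involvedObjectName(eventName string) string {
	if i := strings.Index(eventName, "."); i >= 0 {
		return eventName[:i]
	}
	return eventName
}

func main() {
	fmt.Println(involvedObjectName("someName.xx")) // someName
	fmt.Println(involvedObjectName("plain"))       // plain
}
```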
func paramResPayloadFn(params url.Values) func(string) bool {
selectedNamespace := params.Get(NamespaceParam)
selectedName := params.Get(NameParam)
selectedKind := params.Get(KindParam)
if selectedKind == kubeextractor.NodeKind {
selectedNamespace = DefaultNamespace
}
return func(key string) bool {
k := &typed.WatchTableKey{}
err := k.Parse(key)
if err != nil {
glog.Errorf("Failed to parse key: %v", key)
return false
}
if k.Kind != selectedKind {
return false
}
if selectedNamespace != AllNamespaces && k.Namespace != selectedNamespace {
return false
}
if k.Name != selectedName {
return false
}
return true
}
}
// TODO: Try and remove some of this special logic. Maybe have a generic approach for resources that don't have namespaces
func keepRowHelper(name string, kind string, namespace string, selectedKind string, selectedNamespace string, selectedNameMatchSubstring string, selectedNameExactMatch string, selectedUuid string, uuid string) bool {
// Edge cases:
// 1) Node does not have a namespace
// 2) Namespace does not have a namespace
if selectedKind != AllKinds {
if selectedKind != kind {
return false
}
} else {
// When showing all kinds and a namespace is set, don't show nodes
if selectedNamespace != AllNamespaces && kind == kubeextractor.NodeKind {
return false
}
}
// Nodes do not have a namespace. If the user set kind=Node then there is no need to filter on namespace,
// which would just confuse the user when they don't see the nodes
if selectedNamespace != AllNamespaces && selectedKind != kubeextractor.NodeKind {
if kind == kubeextractor.NamespaceKind {
// A namespace itself does not have a namespace, so instead match on name
if selectedNamespace != name {
return false
}
} else {
if selectedNamespace != namespace {
return false
}
}
}
if selectedNameMatchSubstring != "" {
if !strings.Contains(name, selectedNameMatchSubstring) {
return false
}
}
if selectedNameExactMatch != "" {
if !strings.EqualFold(name, selectedNameExactMatch) {
return false
}
}
if selectedUuid != "" {
if selectedUuid != uuid {
return false
}
}
return true
}
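The Node/Namespace edge cases above can be condensed into a standalone predicate. This is a simplified, hypothetical reimplementation for illustration (sentinel values and parameter names are assumptions, not the sloop API), covering only the kind and namespace rules:

```go
package main

import "fmt"

// keep reports whether a row survives the kind/namespace filters described
// above: Nodes have no namespace, and a Namespace resource matches on its
// own name. Sketch only; substring/exact-name/uid matching is omitted.
func keep(kind, namespace, name, selKind, selNs string) bool {
	const allKinds, allNs, nodeKind, nsKind = "_all", "_all", "Node", "Namespace"
	if selKind != allKinds && selKind != kind {
		return false
	}
	if selKind == allKinds && selNs != allNs && kind == nodeKind {
		return false // showing all kinds within one namespace hides nodes
	}
	if selNs != allNs && selKind != nodeKind {
		if kind == nsKind {
			return selNs == name // a Namespace is matched by its own name
		}
		return selNs == namespace
	}
	return true
}

func main() {
	fmt.Println(keep("Node", "", "host1", "Node", "foo"))         // true: kind=Node ignores namespace
	fmt.Println(keep("Node", "", "host1", "_all", "foo"))         // false: all kinds + namespace hides nodes
	fmt.Println(keep("Namespace", "", "foo", "Namespace", "foo")) // true: matched on name
}
```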
func isResSummaryValInTimeRange(startTime time.Time, endTime time.Time) func(*typed.ResourceSummary) bool {
return func(retVal *typed.ResourceSummary) bool {
firstSeen, err := ptypes.Timestamp(retVal.FirstSeen)
if err != nil {
return false
}
lastSeen, err := ptypes.Timestamp(retVal.LastSeen)
if err != nil {
return false
}
if firstSeen.After(endTime) || lastSeen.Before(startTime) {
return false
}
return true
}
}
func isEventValInTimeRange(startTime time.Time, endTime time.Time) func(*typed.KubeWatchResult) bool {
return func(retVal *typed.KubeWatchResult) bool {
eventInfo, err := kubeextractor.ExtractEventInfo(retVal.Payload)
if err != nil {
return false
}
firstTime := eventInfo.FirstTimestamp
lastTime := eventInfo.LastTimestamp
if firstTime.After(endTime) || lastTime.Before(startTime) {
return false
}
return true
}
}
func isResPayloadInTimeRange(startTime time.Time, endTime time.Time) func(*typed.KubeWatchResult) bool {
return func(retVal *typed.KubeWatchResult) bool {
resTime, err := ptypes.Timestamp(retVal.Timestamp)
if err != nil {
return false
}
if resTime.After(endTime) || resTime.Before(startTime) {
return false
}
return true
}
}


@ -0,0 +1,171 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package queries
import (
"github.com/salesforce/sloop/pkg/sloop/kubeextractor"
"github.com/salesforce/sloop/pkg/sloop/store/typed"
"github.com/stretchr/testify/assert"
"testing"
"time"
)
func Test_isFiltered_NotSelectedNamespace(t *testing.T) {
values := helper_get_params()
// test when namespace is not selected
key := "/eventcount/001567105200/StatefulSet/some-user/vrb-mgmt-pd/52071bcf-64cf-11e9-b4c3-1418774b3e9d"
flag := paramEventCountSumFn(values)(key)
assert.False(t, flag)
}
func Test_isFiltered_SelectedNamespace(t *testing.T) {
values := helper_get_params()
// test when namespace is selected
key := "/eventcount/001567105200/StatefulSet/some-namespace/vrb-mgmt-pd/52071bcf-64cf-11e9-b4c3-1418774b3e9d"
flag := paramEventCountSumFn(values)(key)
assert.True(t, flag)
}
func Test_isFiltered_WhenKindIsNodeIgnoreNamespace(t *testing.T) {
values := helper_get_params()
values["kind"] = []string{kubeextractor.NodeKind}
values["namespace"] = []string{"someNamespace"}
// test node
key := "/eventcount/001567022400/Node//somehost/somehost"
flag := paramEventCountSumFn(values)(key)
assert.True(t, flag)
}
func Test_isFiltered_NodeReturnedForAllNamespaces(t *testing.T) {
values := helper_get_params()
values["kind"] = []string{kubeextractor.NodeKind}
values["namespace"] = []string{AllNamespaces}
// test node
key := "/eventcount/001567022400/Node//somehost/somehost"
flag := paramEventCountSumFn(values)(key)
assert.True(t, flag)
}
func Test_isFiltered_NodeNotReturnedForAllKindsInSomeNamespace(t *testing.T) {
values := make(map[string][]string)
values["kind"] = []string{AllKinds}
values["namespace"] = []string{"foo"}
// test node
key := "/eventcount/001567022400/Node//somehost/somehost"
flag := paramEventCountSumFn(values)(key)
assert.False(t, flag)
}
func Test_isFiltered_NodeReturnedForAllKindsAllNamespace(t *testing.T) {
values := make(map[string][]string)
values["kind"] = []string{AllKinds}
values["namespace"] = []string{AllNamespaces}
// test node
key := "/eventcount/001567022400/Node//somehost/somehost"
flag := paramEventCountSumFn(values)(key)
assert.True(t, flag)
}
func Test_isFiltered_KindIsNamespaceMatchNameNotNamespace(t *testing.T) {
values := make(map[string][]string)
values[NamespaceParam] = []string{"some-namespace"}
values[KindParam] = []string{kubeextractor.NamespaceKind}
// test when namespace is not selected
key := "/ressum/001567094400/Namespace//some-othernamespace/96b0e282-9744-11e8-9d31-1418775557c8"
flag := paramFilterResSumFn(values)(key)
assert.False(t, flag)
// test when namespace is selected
key = "/ressum/001567094400/Namespace//some-namespace/96b0e282-9744-11e8-9d31-1418775557c8"
flag = paramFilterResSumFn(values)(key)
assert.True(t, flag)
}
func Test_isResSummaryValInTimeRange_False(t *testing.T) {
val := helper_getResSum(t)
flag := isResSummaryValInTimeRange(someTs.Add(-60*time.Minute), someTs.Add(60*time.Minute))(val)
assert.False(t, flag)
}
func Test_isResSummaryValInTimeRange_True(t *testing.T) {
val := helper_getResSum(t)
flag := isResSummaryValInTimeRange(someFirstSeenTime.Add(-24*time.Hour), someLastSeenTime.Add(24*time.Hour))(val)
assert.True(t, flag)
}
func Test_paramEventSumFn_False(t *testing.T) {
values := helper_get_params()
key := "/watch/001562961600/Event/someNS/someName.xx/1562963507608345756"
flag := paramEventDataFn(values)(key)
assert.False(t, flag)
}
func Test_paramEventSumFn_True(t *testing.T) {
values := helper_get_params()
values[KindParam] = []string{kubeextractor.EventKind}
values[NamespaceParam] = []string{"someNS"}
values[NameParam] = []string{"someName"}
key := "/watch/001562961600/Event/someNS/someName.xx/1562963507608345756"
flag := paramEventDataFn(values)(key)
assert.True(t, flag)
}
func Test_isEventValInTimeRange_False(t *testing.T) {
someEventPayload := `{
"reason":"someReason",
"firstTimestamp": "2016-01-01T21:24:55Z",
"lastTimestamp": "2016-01-02T21:27:55Z",
"count": 10
}`
val := &typed.KubeWatchResult{Kind: "someKind", Payload: someEventPayload}
flag := isEventValInTimeRange(someTs.Add(-60*time.Minute), someTs.Add(60*time.Minute))(val)
assert.False(t, flag)
}
func Test_isEventValInTimeRange_True(t *testing.T) {
someEventPayload := `{
"reason":"someReason",
"firstTimestamp": "2019-01-01T21:24:55Z",
"lastTimestamp": "2019-01-02T21:27:55Z",
"count": 10
}`
val := &typed.KubeWatchResult{Kind: "someKind", Payload: someEventPayload}
flag := isEventValInTimeRange(someTs.Add(-60*time.Minute), someTs.Add(60*time.Minute))(val)
assert.True(t, flag)
}
func Test_paramEventDataFn_False(t *testing.T) {
values := helper_get_params()
key := "/watch/001562961600/Pod/someNS/someName.xx/1562963507608345756"
flag := paramResPayloadFn(values)(key)
assert.False(t, flag)
}
func Test_paramEventDataFn_True(t *testing.T) {
values := helper_get_params()
values[KindParam] = []string{kubeextractor.PodKind}
values[NamespaceParam] = []string{"someNS"}
values[NameParam] = []string{"someName"}
key := "/watch/001562961600/Pod/someNS/someName/1562963507608345756"
flag := paramResPayloadFn(values)(key)
assert.True(t, flag)
}
func Test_isResPayloadInTimeRange_True(t *testing.T) {
val := &typed.KubeWatchResult{Kind: "someKind", Timestamp: somePTime}
flag := isResPayloadInTimeRange(someTs.Add(-60*time.Minute), someTs.Add(60*time.Minute))(val)
assert.True(t, flag)
}
func Test_isResPayloadInTimeRange_False(t *testing.T) {
val := &typed.KubeWatchResult{Kind: "someKind", Timestamp: somePTime}
flag := isResPayloadInTimeRange(someTs.Add(60*time.Minute), someTs.Add(65*time.Minute))(val)
assert.False(t, flag)
}


@ -0,0 +1,55 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package queries
import (
"encoding/json"
"fmt"
"github.com/salesforce/sloop/pkg/sloop/store/typed"
"github.com/salesforce/sloop/pkg/sloop/store/untyped/badgerwrap"
"net/url"
"time"
)
type ResPayLoadData struct {
PayLoadMap map[int64]string `json:"payloadMap"`
}
func GetResPayload(params url.Values, t typed.Tables, startTime time.Time, endTime time.Time, requestId string) ([]byte, error) {
var watchRes map[typed.WatchTableKey]*typed.KubeWatchResult
err := t.Db().View(func(txn badgerwrap.Txn) error {
var err2 error
var stats typed.RangeReadStats
watchRes, stats, err2 = t.WatchTable().RangeRead(txn, paramResPayloadFn(params), isResPayloadInTimeRange(startTime, endTime), startTime, endTime)
if err2 != nil {
return err2
}
stats.Log(requestId)
return nil
})
if err != nil {
return []byte{}, err
}
var res ResPayLoadData
resPayloadMap := make(map[int64]string)
for key, val := range watchRes {
resPayloadMap[key.Timestamp.Unix()] = val.Payload
}
// TODO: in the future we might need to consider returning a marshalled empty JSON object; for now we just return []byte{}
if len(resPayloadMap) == 0 {
return []byte{}, nil
}
res.PayLoadMap = resPayloadMap
bytes, err := json.MarshalIndent(res.PayLoadMap, "", " ")
if err != nil {
return nil, fmt.Errorf("failed to marshal json %v", err)
}
return bytes, nil
}


@ -0,0 +1,107 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package queries
import (
"github.com/dgraph-io/badger"
"github.com/golang/protobuf/ptypes/timestamp"
"github.com/salesforce/sloop/pkg/sloop/store/typed"
"github.com/salesforce/sloop/pkg/sloop/store/untyped"
"github.com/salesforce/sloop/pkg/sloop/store/untyped/badgerwrap"
"github.com/salesforce/sloop/pkg/sloop/test/assertex"
"github.com/stretchr/testify/assert"
"testing"
"time"
)
const somePodPayload = `{
"metadata": {
"name": "someName",
"namespace": "someNamespace",
"uid": "6c2a9795-a282-11e9-ba2f-14187761de09",
"creationTimestamp": "2019-07-09T19:47:45Z"
}
}`
func helper_get_resPayload(keys []string, t *testing.T, somePTime *timestamp.Timestamp) typed.Tables {
val := &typed.KubeWatchResult{Kind: "someKind", Timestamp: somePTime, Payload: somePodPayload}
db, err := (&badgerwrap.MockFactory{}).Open(badger.DefaultOptions(""))
assert.Nil(t, err)
wt := typed.OpenKubeWatchResultTable()
err = db.Update(func(txn badgerwrap.Txn) error {
for _, key := range keys {
txerr := wt.Set(txn, key, val)
if txerr != nil {
return txerr
}
}
return nil
})
assert.Nil(t, err)
tables := typed.NewTableList(db)
return tables
}
func Test_GetResPayload_False(t *testing.T) {
untyped.TestHookSetPartitionDuration(time.Hour)
partitionId := untyped.GetPartitionId(someTs)
values := helper_get_params()
values[KindParam] = []string{"someKind-Test"}
values[NamespaceParam] = []string{"someNamespace"}
values[NameParam] = []string{"someName"}
var keys []string
for i := 'a'; i < 'd'; i++ {
keys = append(keys, typed.NewWatchTableKey(partitionId, "someKind"+string(i), "someNamespace", "someName", someTs).String())
}
startTime := someTs.Add(-60 * time.Minute)
endTime := someTs.Add(60 * time.Minute)
tables := helper_get_resPayload(keys, t, somePTime)
res, err := GetResPayload(values, tables, startTime, endTime, someRequestId)
assert.Equal(t, "", string(res))
assert.Nil(t, err)
}
func Test_GetResPayload_NotInTimeRange(t *testing.T) {
untyped.TestHookSetPartitionDuration(time.Hour)
partitionId := untyped.GetPartitionId(someTs)
values := helper_get_params()
values[KindParam] = []string{"someKind"}
values[NamespaceParam] = []string{"someNamespace"}
values[NameParam] = []string{"someName"}
var keys []string
keys = append(keys, typed.NewWatchTableKey(partitionId, "someKind", "someNamespace", "someName", someTs).String())
keys = append(keys, typed.NewWatchTableKey(partitionId, "someKind", "someNamespace", "someName", someTs.Add(-10*time.Minute)).String())
for i := 'b'; i < 'd'; i++ {
keys = append(keys, typed.NewWatchTableKey(partitionId, "someKind"+string(i), "someNamespace", "someName.xx", someTs).String())
}
tables := helper_get_resPayload(keys, t, somePTime)
res, err := GetResPayload(values, tables, someTs.Add(60*time.Minute), someTs.Add(65*time.Minute), someRequestId)
assert.Nil(t, err)
assert.Equal(t, "", string(res))
}
func Test_GetResPayload_True(t *testing.T) {
untyped.TestHookSetPartitionDuration(time.Hour)
partitionId := untyped.GetPartitionId(someTs)
values := helper_get_params()
values[KindParam] = []string{"someKind"}
values[NamespaceParam] = []string{"someNamespace"}
values[NameParam] = []string{"someName"}
var keys []string
keys = append(keys, typed.NewWatchTableKey(partitionId, "someKind", "someNamespace", "someName", someTs).String())
keys = append(keys, typed.NewWatchTableKey(partitionId, "someKind", "someNamespaceb", "someName", someTs).String())
tables := helper_get_resPayload(keys, t, somePTime)
res, err := GetResPayload(values, tables, someTs.Add(-1*time.Hour), someTs.Add(6*time.Hour), someRequestId)
assert.Nil(t, err)
expectedRes := `{
"1546398245": "{\n \"metadata\": {\n \"name\": \"someName\",\n \"namespace\": \"someNamespace\",\n \"uid\": \"6c2a9795-a282-11e9-ba2f-14187761de09\",\n \"creationTimestamp\": \"2019-07-09T19:47:45Z\"\n }\n}"
}`
assertex.JsonEqual(t, expectedRes, string(res))
}


@ -0,0 +1,72 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package queries
import (
"encoding/json"
"fmt"
"github.com/salesforce/sloop/pkg/sloop/store/typed"
"github.com/salesforce/sloop/pkg/sloop/store/untyped/badgerwrap"
"net/url"
"reflect"
"time"
)
type ResSummaryOutput struct {
typed.ResourceSummaryKey
typed.ResourceSummary
}
func (r ResSummaryOutput) IsEmpty() bool {
return reflect.DeepEqual(ResSummaryOutput{}, r)
}
func GetResSummaryData(params url.Values, t typed.Tables, startTime time.Time, endTime time.Time, requestId string) ([]byte, error) {
var resSummaries map[typed.ResourceSummaryKey]*typed.ResourceSummary
err := t.Db().View(func(txn badgerwrap.Txn) error {
var err2 error
var stats typed.RangeReadStats
resSummaries, stats, err2 = t.ResourceSummaryTable().RangeRead(txn, paramFilterResSumFn(params), isResSummaryValInTimeRange(startTime, endTime), startTime, endTime)
if err2 != nil {
return err2
}
stats.Log(requestId)
return nil
})
if err != nil {
return []byte{}, err
}
output := ResSummaryOutput{}
for key, val := range resSummaries {
output.PartitionId = key.PartitionId
output.Name = key.Name
output.Namespace = key.Namespace
output.Uid = key.Uid
output.Kind = key.Kind
output.FirstSeen = val.FirstSeen
output.LastSeen = val.LastSeen
output.CreateTime = val.CreateTime
output.DeletedAtEnd = val.DeletedAtEnd
output.Relationships = val.Relationships
// we only need to get one resSummary
break
}
if output.IsEmpty() {
return []byte{}, nil
}
bytes, err := json.MarshalIndent(output, "", " ")
if err != nil {
return nil, fmt.Errorf("failed to marshal json %v", err)
}
return bytes, nil
}


@ -0,0 +1,95 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package queries
import (
"github.com/golang/protobuf/ptypes"
"github.com/salesforce/sloop/pkg/sloop/store/typed"
"github.com/salesforce/sloop/pkg/sloop/store/untyped"
"github.com/salesforce/sloop/pkg/sloop/test/assertex"
"github.com/stretchr/testify/assert"
"testing"
"time"
)
func helper_getResSum(t *testing.T) *typed.ResourceSummary {
firstTimeProto, err := ptypes.TimestampProto(someFirstSeenTime)
assert.Nil(t, err)
lastTimeProto, err := ptypes.TimestampProto(someLastSeenTime)
assert.Nil(t, err)
createTimeProto, err := ptypes.TimestampProto(someCreateTime)
assert.Nil(t, err)
val := &typed.ResourceSummary{
FirstSeen: firstTimeProto,
LastSeen: lastTimeProto,
CreateTime: createTimeProto,
DeletedAtEnd: false,
}
return val
}
func Test_GetResSummaryData_False(t *testing.T) {
untyped.TestHookSetPartitionDuration(time.Hour)
values := helper_get_params()
values[KindParam] = []string{"someKind"}
values[NamespaceParam] = []string{"someNamespace"}
values[NameParam] = []string{"someName"}
values[UuidParam] = []string{"someuid"}
keys := make([]*typed.ResourceSummaryKey, 2)
keys[0] = typed.NewResourceSummaryKey(someTs, "someKind", "someNs", "mynamespace", "68510937-4ffc-11e9-8e26-1418775557c8")
keys[1] = typed.NewResourceSummaryKey(someTs, "SomeKind", "namespace-b", "somename-b", "45510937-d4fc-11e9-8e26-14187754567")
tables := helper_get_resSumtable(keys, t)
res, err := GetResSummaryData(values, tables, someTs.Add(-60*time.Minute), someTs.Add(60*time.Minute), someRequestId)
assert.Equal(t, "", string(res))
assert.Nil(t, err)
}
func Test_GetResSummaryData_NotInTimeRange(t *testing.T) {
untyped.TestHookSetPartitionDuration(time.Hour)
values := helper_get_params()
values[KindParam] = []string{"someKind"}
values[NamespaceParam] = []string{"someNamespace"}
values[NameParam] = []string{"someName"}
values[UuidParam] = []string{"someuid"}
keys := make([]*typed.ResourceSummaryKey, 1)
keys[0] = typed.NewResourceSummaryKey(someTs, "someKind", "someNamespace", "someName", "someuid")
tables := helper_get_resSumtable(keys, t)
res, err := GetResSummaryData(values, tables, someTs.Add(60*time.Minute), someTs.Add(160*time.Minute), someRequestId)
assert.Nil(t, err)
assert.Equal(t, "", string(res))
}
func Test_GetResSummaryData_True(t *testing.T) {
untyped.TestHookSetPartitionDuration(time.Hour)
values := helper_get_params()
values[KindParam] = []string{"someKind"}
values[NamespaceParam] = []string{"someNamespace"}
values[NameParam] = []string{"someName"}
values[UuidParam] = []string{"someuid"}
keys := make([]*typed.ResourceSummaryKey, 1)
keys[0] = typed.NewResourceSummaryKey(someFirstSeenTime, "someKind", "someNamespace", "someName", "someuid")
tables := helper_get_resSumtable(keys, t)
res, err := GetResSummaryData(values, tables, someFirstSeenTime.Add(-1*time.Hour), someLastSeenTime.Add(6*time.Hour), someRequestId)
assert.Nil(t, err)
expectedRes := `{
"PartitionId": "001551668400",
"Kind": "someKind",
"Namespace": "someNamespace",
"Name": "someName",
"Uid": "someuid",
"firstSeen": {
"seconds": 1551668645,
"nanos": 6
},
"lastSeen": {
"seconds": 1551837840
}
}`
assertex.JsonEqual(t, expectedRes, string(res))
}


@ -0,0 +1,161 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package queries
import (
"github.com/golang/glog"
"github.com/golang/protobuf/ptypes"
"github.com/salesforce/sloop/pkg/sloop/store/typed"
"github.com/salesforce/sloop/pkg/sloop/store/untyped"
"net/url"
"time"
)
// This computes a start and end time for a given query. If the user specifies a time duration we use it; otherwise
// we use maxLookBack from config. For the end time, if not specified, we want to use the newest data in the store.
// That way we can look at old data sets without clipping. We know the min and max time of the newest partition,
// but we don't really know the newest record within that range. For now just use the end of the newest partition;
// we will improve that later.
// TODO: Add unit tests
// TODO: If wall clock is in the middle of the newest partition min-max time we can use it
func computeTimeRange(params url.Values, tables typed.Tables, maxLookBack time.Duration) (time.Time, time.Time) {
now := time.Now()
// If web request specifies a valid lookback use that, else use the config for the store
queryDuration := maxLookBack
queryLookBack := params.Get(LookbackParam)
if queryLookBack != "" {
var err error
queryDuration, err = time.ParseDuration(queryLookBack)
if err != nil {
glog.Errorf("Invalid lookback param: %v. err: %v", queryLookBack, err)
}
}
if queryDuration < 10*time.Minute || queryDuration > maxLookBack {
queryDuration = maxLookBack
}
// Find the end of the newest store partition and use that as endTime
ok, _, maxPartition, err := tables.GetMinAndMaxPartition()
if err != nil || !ok {
if err != nil {
glog.Errorf("Error getting MinAndMaxPartition: %v", err)
}
// Store is broken or has no data. Best we can do is now - queryDuration
return now.Add(-1 * queryDuration), now
}
_, endTimeOfNewestPartition, err := untyped.GetTimeRangeForPartition(maxPartition)
if err != nil {
glog.Errorf("Error getting time range for partition %v: %v", maxPartition, err)
return now.Add(-1 * queryDuration), now
}
// The newest partition ends in the future, so use now instead
if endTimeOfNewestPartition.After(now) {
return now.Add(-1 * queryDuration), now
}
return endTimeOfNewestPartition.Add(-1 * queryDuration), endTimeOfNewestPartition
}
// This extracts time info from the ResourceSummary value and checks if it overlaps with the query time range.
// If outside the range, it returns false.
// If fully inside the range, it returns true and nothing is modified.
// If partially in the range, it clips off the parts that are outside the range and returns true.
func timeFilterResSumValue(value *typed.ResourceSummary, queryStartTime time.Time, queryEndTime time.Time) (bool, error) {
startTs, err := ptypes.Timestamp(value.CreateTime)
if err != nil {
return false, err
}
lastTs, err := ptypes.Timestamp(value.LastSeen)
if err != nil {
return false, err
}
if startTs.After(queryEndTime) || lastTs.Before(queryStartTime) {
// This will not show up anyway, so filter it out
return false, nil
}
if startTs.Before(queryStartTime) {
value.CreateTime, err = ptypes.TimestampProto(queryStartTime)
if err != nil {
return false, err
}
}
if lastTs.After(queryEndTime) {
value.LastSeen, err = ptypes.TimestampProto(queryEndTime)
if err != nil {
return false, err
}
}
return true, nil
}
func timeFilterResSumMap(resSumMap map[typed.ResourceSummaryKey]*typed.ResourceSummary, queryStartTime time.Time, queryEndTime time.Time) error {
for key, value := range resSumMap {
keep, err := timeFilterResSumValue(value, queryStartTime, queryEndTime)
if err != nil {
return err
}
if !keep {
delete(resSumMap, key)
}
}
return nil
}
func timeFilterEventValue(value *typed.ResourceEventCounts, queryStartTime time.Time, queryEndTime time.Time) (bool, error) {
// TODO: Event values have a map of minute within a partition to a count of events
// We need to compute the time for each minute and rewrite the value accordingly
return true, nil
}
func timeFilterEventsMap(events map[typed.EventCountKey]*typed.ResourceEventCounts, queryStartTime time.Time, queryEndTime time.Time) error {
for key, value := range events {
keep, err := timeFilterEventValue(value, queryStartTime, queryEndTime)
if err != nil {
return err
}
if !keep {
delete(events, key)
}
}
return nil
}
func timeFilterWatchActivityOccurrences(occurrences []int64, queryStartTime time.Time, queryEndTime time.Time) []int64 {
start := queryStartTime.Unix()
end := queryEndTime.Unix()
filtered := make([]int64, 0, len(occurrences))
for _, when := range occurrences {
if when >= start && when <= end {
filtered = append(filtered, when)
}
}
return filtered
}
func timeFilterWatchActivity(activity *typed.WatchActivity, queryStartTime time.Time, queryEndTime time.Time) *typed.WatchActivity {
activity.ChangedAt = timeFilterWatchActivityOccurrences(activity.ChangedAt, queryStartTime, queryEndTime)
activity.NoChangeAt = timeFilterWatchActivityOccurrences(activity.NoChangeAt, queryStartTime, queryEndTime)
return activity
}
func timeFilterWatchActivityMap(activityMap map[typed.WatchActivityKey]*typed.WatchActivity, queryStartTime time.Time, queryEndTime time.Time) {
for key, value := range activityMap {
filtered := timeFilterWatchActivity(value, queryStartTime, queryEndTime)
if len(filtered.NoChangeAt) == 0 && len(filtered.ChangedAt) == 0 {
delete(activityMap, key)
} else {
activityMap[key] = filtered
}
}
}


@ -0,0 +1,116 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package queries
import (
"github.com/golang/protobuf/ptypes"
"github.com/salesforce/sloop/pkg/sloop/store/typed"
"github.com/stretchr/testify/assert"
"testing"
"time"
)
var someQueryStartTs = time.Date(2019, 3, 1, 3, 4, 0, 0, time.UTC)
var someQueryEndTs = someQueryStartTs.Add(time.Hour)
var rightBeforeStartTs = someQueryStartTs.Add(-1 * time.Minute)
var rightAfterStartTs = someQueryStartTs.Add(time.Minute)
var rigthAfterEndTs = someQueryEndTs.Add(time.Minute)
func Test_timeFilterResSumValue_OutsideRangeDrop(t *testing.T) {
resVal := typed.ResourceSummary{}
resVal.CreateTime, _ = ptypes.TimestampProto(rightBeforeStartTs)
resVal.LastSeen, _ = ptypes.TimestampProto(rightBeforeStartTs)
keep, err := timeFilterResSumValue(&resVal, someQueryStartTs, someQueryEndTs)
assert.Nil(t, err)
assert.False(t, keep)
}
func Test_timeFilterResSumValue_InsideRangeKeepAndNoChange(t *testing.T) {
resVal := typed.ResourceSummary{}
resVal.CreateTime, _ = ptypes.TimestampProto(rightAfterStartTs)
resVal.LastSeen, _ = ptypes.TimestampProto(rightAfterStartTs)
keep, err := timeFilterResSumValue(&resVal, someQueryStartTs, someQueryEndTs)
assert.Nil(t, err)
assert.True(t, keep)
afterCreateTime, _ := ptypes.Timestamp(resVal.CreateTime)
afterLastSeenTime, _ := ptypes.Timestamp(resVal.LastSeen)
assert.Equal(t, rightAfterStartTs, afterCreateTime)
assert.Equal(t, rightAfterStartTs, afterLastSeenTime)
}
func Test_timeFilterResSumValue_StartsBeforeChangeIsClipped(t *testing.T) {
resVal := typed.ResourceSummary{}
resVal.CreateTime, _ = ptypes.TimestampProto(rightBeforeStartTs)
resVal.LastSeen, _ = ptypes.TimestampProto(rightAfterStartTs)
keep, err := timeFilterResSumValue(&resVal, someQueryStartTs, someQueryEndTs)
assert.Nil(t, err)
assert.True(t, keep)
// CreateTime should match query start time
afterCreateTime, _ := ptypes.Timestamp(resVal.CreateTime)
afterLastSeenTime, _ := ptypes.Timestamp(resVal.LastSeen)
assert.Equal(t, someQueryStartTs, afterCreateTime)
assert.Equal(t, rightAfterStartTs, afterLastSeenTime)
}
func Test_timeFilterResSumValue_EndsAfterChangeIsClipped(t *testing.T) {
resVal := typed.ResourceSummary{}
resVal.CreateTime, _ = ptypes.TimestampProto(rightAfterStartTs)
resVal.LastSeen, _ = ptypes.TimestampProto(rigthAfterEndTs)
keep, err := timeFilterResSumValue(&resVal, someQueryStartTs, someQueryEndTs)
assert.Nil(t, err)
assert.True(t, keep)
// LastSeen should be clipped to query end time
afterCreateTime, _ := ptypes.Timestamp(resVal.CreateTime)
afterLastSeenTime, _ := ptypes.Timestamp(resVal.LastSeen)
assert.Equal(t, rightAfterStartTs, afterCreateTime)
assert.Equal(t, someQueryEndTs, afterLastSeenTime)
}
func Test_timeFilterResSumValue_ExtendsOnBothSidesIsClipped(t *testing.T) {
resVal := typed.ResourceSummary{}
resVal.CreateTime, _ = ptypes.TimestampProto(rightBeforeStartTs)
resVal.LastSeen, _ = ptypes.TimestampProto(rigthAfterEndTs)
keep, err := timeFilterResSumValue(&resVal, someQueryStartTs, someQueryEndTs)
assert.Nil(t, err)
assert.True(t, keep)
// CreateTime and LastSeen should be clipped to the query range
afterCreateTime, _ := ptypes.Timestamp(resVal.CreateTime)
afterLastSeenTime, _ := ptypes.Timestamp(resVal.LastSeen)
assert.Equal(t, someQueryStartTs, afterCreateTime)
assert.Equal(t, someQueryEndTs, afterLastSeenTime)
}
func Test_timeFilterWatchActivityOccurrences(t *testing.T) {
occurrences := []int64{rightBeforeStartTs.Unix(), someQueryStartTs.Unix(), rightAfterStartTs.Unix(), someQueryEndTs.Unix(), rigthAfterEndTs.Unix()}
filtered := timeFilterWatchActivityOccurrences(occurrences, someQueryStartTs, someQueryEndTs)
assert.Len(t, filtered, 3)
for _, item := range filtered {
assert.True(t, item == someQueryStartTs.Unix() || item == rightAfterStartTs.Unix() || item == someQueryEndTs.Unix())
}
}
func Test_timeFilterWatchActivityMap(t *testing.T) {
activityMap := make(map[typed.WatchActivityKey]*typed.WatchActivity)
activityMap[typed.WatchActivityKey{Name: "before1"}] = &typed.WatchActivity{ChangedAt: []int64{rightBeforeStartTs.Unix()}}
activityMap[typed.WatchActivityKey{Name: "before2"}] = &typed.WatchActivity{NoChangeAt: []int64{rightBeforeStartTs.Unix()}}
activityMap[typed.WatchActivityKey{Name: "during1"}] = &typed.WatchActivity{ChangedAt: []int64{rightAfterStartTs.Unix()}}
activityMap[typed.WatchActivityKey{Name: "during2"}] = &typed.WatchActivity{NoChangeAt: []int64{rightAfterStartTs.Unix()}}
activityMap[typed.WatchActivityKey{Name: "after1"}] = &typed.WatchActivity{ChangedAt: []int64{rigthAfterEndTs.Unix()}}
activityMap[typed.WatchActivityKey{Name: "after2"}] = &typed.WatchActivity{NoChangeAt: []int64{rigthAfterEndTs.Unix()}}
timeFilterWatchActivityMap(activityMap, someQueryStartTs, someQueryEndTs)
assert.Len(t, activityMap, 2)
assert.Contains(t, activityMap, typed.WatchActivityKey{Name: "during1"})
assert.Contains(t, activityMap, typed.WatchActivityKey{Name: "during2"})
}


@ -0,0 +1,37 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package queries
type TimelineRoot struct {
ViewOpt ViewOptions `json:"view_options"`
Rows []TimelineRow `json:"rows"`
}
type TimelineRow struct {
Text string `json:"text"`
Duration int64 `json:"duration"`
Kind string `json:"kind"`
Namespace string `json:"namespace"`
Overlays []Overlay `json:"overlays"`
ChangedAt []int64 `json:"changedat"`
NoChangeAt []int64 `json:"nochangeat"`
StartDate int64 `json:"start_date"`
EndDate int64 `json:"end_date"`
}
type ViewOptions struct {
Sort string `json:"sort"`
}
type Overlay struct {
Text string `json:"text"`
StartDate int64 `json:"start_date"`
Duration int64 `json:"duration"`
EndDate int64 `json:"end_date"`
}


@ -0,0 +1,47 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package e2e
import (
"github.com/salesforce/sloop/pkg/sloop/server"
"github.com/salesforce/sloop/pkg/sloop/server/internal/config"
"github.com/salesforce/sloop/pkg/sloop/test/assertex"
"github.com/stretchr/testify/assert"
"io/ioutil"
"testing"
"time"
)
func helper_runE2E(playbackData []byte, expectedOutput []byte, queryName string, t *testing.T) {
// Badger Data DB
dataDir, err := ioutil.TempDir("", "data")
assert.Nil(t, err)
// Playback File
playbackFile, err := ioutil.TempFile("", "playback")
assert.Nil(t, err)
_, err = playbackFile.Write(playbackData)
assert.Nil(t, err)
assert.Nil(t, playbackFile.Close())
// Test config
testConfig := &config.SloopConfig{}
testConfig.DisableKubeWatcher = true
testConfig.DebugDisableWebServer = true
testConfig.StoreRoot = dataDir
testConfig.DebugPlaybackFile = playbackFile.Name()
testConfig.DebugRunQuery = queryName
testConfig.UseMockBadger = true
testConfig.DisableStoreManager = true
testConfig.MaxLookback = 14 * 24 * time.Hour
outData, err := server.RunWithConfig(testConfig)
assert.Nil(t, err)
assertex.JsonEqualBytes(t, expectedOutput, outData)
}


@ -0,0 +1,18 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package e2e
import (
"flag"
"fmt"
)
func init() {
flag.Set("alsologtostderr", fmt.Sprintf("%t", true))
}


@ -0,0 +1,62 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package e2e
import (
"fmt"
"github.com/salesforce/sloop/pkg/sloop/store/untyped"
"strings"
"testing"
"time"
)
var Payload = `{
"metadata": {
"name": "some-name",
"namespace": "some-namespace",
"uid": "f8f372a3-f731-11e8-b3bd-e24c7f08fac6",
"creationTimestamp": "2018-12-03T19:31:03Z"
}
}
`
var SimpleQueryPlayback = fmt.Sprintf(`Data:
- kind: Deployment
payload: '%v'
timestamp:
nanos: 557590245
seconds: 1562963506`, strings.ReplaceAll(Payload, "\n", ""))
const SimpleQueryExpected = `{
"view_options": {
"sort": ""
},
"rows": [
{
"text": "some-name",
"duration": 1209600,
"kind": "Deployment",
"namespace": "some-namespace",
"overlays": [],
"changedat": null,
"nochangeat": [
1562963506
],
"start_date": 1561755600,
"end_date": 1562965200
}
]
}`
// This test exercises main() with a sample input and compares the resulting query output
// These should be used sparingly; most query tests belong in the query package
func Test_SimpleQueryWithOneResource(t *testing.T) {
untyped.TestHookSetPartitionDuration(time.Hour)
helper_runE2E([]byte(SimpleQueryPlayback), []byte(SimpleQueryExpected), "EventHeatMap", t)
}


@ -0,0 +1,152 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package config
import (
"flag"
"fmt"
"github.com/ghodss/yaml"
"github.com/salesforce/sloop/pkg/sloop/webserver"
"io/ioutil"
"os"
"time"
)
const sloopConfigEnvVar = "SLOOP_CONFIG"
type SloopConfig struct {
// These fields can only come from command line
ConfigFile string
// These fields can only come from file because they use complex types
LeftBarLinks []webserver.LinkTemplate `json:"leftBarLinks"`
ResourceLinks []webserver.ResourceLinkTemplate `json:"resourceLinks"`
// Normal fields that can come from file or cmd line
DisableKubeWatcher bool `json:"disableKubeWatch"`
KubeWatchResyncInterval time.Duration `json:"kubeWatchResyncInterval"`
WebFilesPath string `json:"webfilesPath"`
Port int `json:"port"`
StoreRoot string `json:"storeRoot"`
MaxLookback time.Duration `json:"maxLookBack"`
MaxDiskMb int `json:"maxDiskMb"`
DebugDisableWebServer bool `json:"disableWebServer"`
DebugPlaybackFile string `json:"debugPlaybackFile"`
DebugRecordFile string `json:"debugRecordFile"`
DebugRunQuery string `json:"runQuery"`
UseMockBadger bool `json:"mockBadger"`
DisableStoreManager bool `json:"disableStoreManager"`
CleanupFrequency time.Duration `json:"cleanupFrequency" validate:"min=1h,max=120h"`
KeepMinorNodeUpdates bool `json:"keepMinorNodeUpdates"`
DefaultNamespace string `json:"defaultNamespace"`
DefaultKind string `json:"defaultKind"`
DefaultLookback string `json:"defaultLookback"`
UseKubeContext string `json:"context"`
DisplayContext string `json:"displayContext"`
ApiServerHost string `json:"apiServerHost"`
}
func registerFlags(fs *flag.FlagSet, config *SloopConfig) {
fs.StringVar(&config.ConfigFile, "config", "", "Path to a yaml or json config file")
fs.BoolVar(&config.DisableKubeWatcher, "disable-kube-watch", false, "Turn off kubernetes watch")
fs.DurationVar(&config.KubeWatchResyncInterval, "kube-watch-resync-interval", 30*time.Minute,
"OPTIONAL: Kubernetes watch resync interval")
fs.StringVar(&config.WebFilesPath, "web-files-path", "./pkg/sloop/webfiles", "Path to web files")
fs.IntVar(&config.Port, "port", 8080, "Web server port")
fs.StringVar(&config.StoreRoot, "store-root", "./data", "Path to store history data")
fs.DurationVar(&config.MaxLookback, "max-look-back", time.Duration(14*24)*time.Hour, "Max history data to keep")
fs.IntVar(&config.MaxDiskMb, "max-disk-mb", 32*1024, "Max disk storage in MB")
fs.BoolVar(&config.DebugDisableWebServer, "disable-web-server", false, "Disable web server")
fs.StringVar(&config.DebugPlaybackFile, "playback-file", "", "Read watch data from a playback file")
fs.StringVar(&config.DebugRecordFile, "record-file", "", "Record watch data to a playback file")
fs.StringVar(&config.DebugRunQuery, "run-query", "", "Load store, run this one query, and exit")
fs.BoolVar(&config.UseMockBadger, "use-mock-badger", false, "Use a fake in-memory mock of badger")
fs.BoolVar(&config.DisableStoreManager, "disable-store-manager", false, "Turn off the store manager, which cleans up the database")
fs.DurationVar(&config.CleanupFrequency, "cleanup-frequency", time.Minute,
"OPTIONAL: Frequency between subsequent runs for the database cleanup")
fs.BoolVar(&config.KeepMinorNodeUpdates, "keep-minor-node-updates", false, "Keep all node updates even if change is only condition timestamps")
fs.StringVar(&config.UseKubeContext, "context", "", "Use a specific kubernetes context")
fs.StringVar(&config.DisplayContext, "display-context", "", "Override the display context. When running in k8s the context is an empty string; this lets you override that (mainly useful if you are running many copies of sloop on different clusters)")
fs.StringVar(&config.ApiServerHost, "apiserver-host", "", "Kubernetes API server endpoint")
}
// This will first check if a config file is specified on cmd line using a temporary flagSet
// If not there, check the environment variable
// If we have a config path, load initial values from it
// Next parse flags again and override any fields from command line
//
// We do this to support settings that can come from either cmd line or config file
func Init() *SloopConfig {
newConfig := &SloopConfig{}
configFilename := preParseConfigFlag()
if configFilename == "" {
configFilename = os.Getenv(sloopConfigEnvVar)
}
if configFilename != "" {
newConfig = loadFromFile(configFilename)
}
registerFlags(flag.CommandLine, newConfig)
flag.Parse()
// Set this to the correct value in case we got it from envVar
newConfig.ConfigFile = configFilename
return newConfig
}
func (c *SloopConfig) ToYaml() string {
b, err := yaml.Marshal(c)
if err != nil {
panic(err)
}
return string(b)
}
func (c *SloopConfig) Validate() error {
if c.MaxLookback <= 0 {
return fmt.Errorf("SloopConfig value MaxLookback can not be <= 0")
}
return nil
}
func loadFromFile(filename string) *SloopConfig {
yamlFile, err := ioutil.ReadFile(filename)
if err != nil {
panic(fmt.Sprintf("failed to read %v. %v", filename, err))
}
var config SloopConfig
err = yaml.Unmarshal(yamlFile, &config)
if err != nil {
panic(fmt.Sprintf("failed to unmarshal %v. %v", filename, err))
}
return &config
}
// Pre-parse flags and return config filename without side-effects
func preParseConfigFlag() string {
tempCfg := &SloopConfig{}
fs := flag.NewFlagSet("configFileOnly", flag.ContinueOnError)
registerFlags(fs, tempCfg)
registerDummyGlogFlags(fs)
err := fs.Parse(os.Args[1:])
if err != nil {
fmt.Printf("Failed to pre-parse flags looking for config file: %v\n", err)
}
return tempCfg.ConfigFile
}
// The glog library registers its flags in init() (github.com/golang/glog) but only on the global flag set.
// We need to register them on our temporary flag set too so we don't get a "flag provided but not
// defined" error. We don't care what the values are.
func registerDummyGlogFlags(fs *flag.FlagSet) {
fs.Bool("logtostderr", false, "log to standard error instead of files")
fs.Bool("alsologtostderr", false, "log to standard error as well as files")
fs.Int("v", 0, "log level for V logs")
fs.Int("stderrthreshold", 0, "logs at or above this threshold go to stderr")
fs.String("vmodule", "", "comma-separated list of pattern=N settings for file-filtered logging")
fs.String("log_backtrace_at", "", "when logging hits line file:N, emit a stack trace")
}

pkg/sloop/server/server.go

@ -0,0 +1,183 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package server
import (
"flag"
"github.com/salesforce/sloop/pkg/sloop/ingress"
"github.com/salesforce/sloop/pkg/sloop/server/internal/config"
"github.com/salesforce/sloop/pkg/sloop/store/typed"
"github.com/salesforce/sloop/pkg/sloop/store/untyped"
"os"
"path"
"strings"
"github.com/golang/glog"
"fmt"
"github.com/salesforce/sloop/pkg/sloop/processing"
"github.com/salesforce/sloop/pkg/sloop/queries"
"github.com/salesforce/sloop/pkg/sloop/store/untyped/badgerwrap"
"github.com/salesforce/sloop/pkg/sloop/storemanager"
"github.com/salesforce/sloop/pkg/sloop/webserver"
"github.com/spf13/afero"
"net/url"
"time"
)
const alsologtostderr = "alsologtostderr"
// For easier use in e2e tests
// This is a little ugly and we may want a better solution: if the config says to run
// a single query, this returns the query output; when running the webserver the output is nil
func RunWithConfig(conf *config.SloopConfig) ([]byte, error) {
err := conf.Validate()
if err != nil {
return []byte{}, err
}
kubeClient, kubeContext, err := ingress.MakeKubernetesClient(conf.ApiServerHost, conf.UseKubeContext)
if err != nil {
return []byte{}, err
}
// Channel used for updates from ingress to store
// The channel is owned by this function, and no external code should close this!
kubeWatchChan := make(chan typed.KubeWatchResult, 1000)
var factory badgerwrap.Factory
// Setup badger
if conf.UseMockBadger {
factory = &badgerwrap.MockFactory{}
} else {
factory = &badgerwrap.BadgerFactory{}
}
storeRootWithKubeContext := path.Join(conf.StoreRoot, kubeContext)
db, err := untyped.OpenStore(factory, storeRootWithKubeContext, time.Duration(1)*time.Hour)
if err != nil {
return []byte{}, fmt.Errorf("failed to init untyped store: %v", err)
}
defer untyped.CloseStore(db)
tables := typed.NewTableList(db)
processor := processing.NewProcessing(kubeWatchChan, tables, conf.KeepMinorNodeUpdates, conf.MaxLookback)
processor.Start()
// Real kubernetes watcher
var kubeWatcherSource ingress.KubeWatcher
if !conf.DisableKubeWatcher {
kubeWatcherSource, err = ingress.NewKubeWatcherSource(kubeClient, kubeWatchChan, conf.KubeWatchResyncInterval)
if err != nil {
return []byte{}, fmt.Errorf("failed to initialize kubeWatcher: %v", err)
}
}
// File playback
if conf.DebugPlaybackFile != "" {
err = ingress.PlayFile(kubeWatchChan, conf.DebugPlaybackFile)
if err != nil {
return []byte{}, fmt.Errorf("failed to play back file: %v", err)
}
}
var recorder *ingress.FileRecorder
if conf.DebugRecordFile != "" {
recorder = ingress.NewFileRecorder(conf.DebugRecordFile, kubeWatchChan)
recorder.Start()
}
var storemgr *storemanager.StoreManager
if !conf.DisableStoreManager {
fs := &afero.Afero{Fs: afero.NewOsFs()}
storemgr = storemanager.NewStoreManager(tables, conf.StoreRoot, conf.CleanupFrequency, conf.MaxLookback, conf.MaxDiskMb, fs)
storemgr.Start()
}
displayContext := kubeContext
if conf.DisplayContext != "" {
displayContext = conf.DisplayContext
}
if !conf.DebugDisableWebServer {
webConfig := webserver.WebConfig{
Port: conf.Port,
WebFilesPath: conf.WebFilesPath,
ConfigYaml: conf.ToYaml(),
MaxLookback: conf.MaxLookback,
DefaultNamespace: conf.DefaultNamespace,
DefaultLookback: conf.DefaultLookback,
DefaultResources: conf.DefaultKind,
ResourceLinks: conf.ResourceLinks,
LeftBarLinks: conf.LeftBarLinks,
CurrentContext: displayContext,
}
err = webserver.Run(webConfig, tables)
if err != nil {
return []byte{}, fmt.Errorf("failed to run webserver: %v", err)
}
}
// Initiate shutdown with the following order:
// 1. Shut down ingress so that it stops emitting events
// 2. Close the input channel which signals processing to finish work
// 3. Wait on processor to tell us all work is complete. Store will not change after that
if kubeWatcherSource != nil {
kubeWatcherSource.Stop()
}
close(kubeWatchChan)
processor.Wait()
if conf.DebugRunQuery != "" {
params := url.Values(map[string][]string{
queries.NamespaceParam: {queries.AllNamespaces},
queries.KindParam: {queries.AllKinds},
})
queryData, err := queries.RunQuery(conf.DebugRunQuery, params, tables, conf.MaxLookback, "server")
if err != nil {
return []byte{}, fmt.Errorf("run debug query failed with: %v", err)
}
return queryData, nil
}
if recorder != nil {
recorder.Close()
}
if storemgr != nil {
storemgr.Shutdown()
}
glog.Infof("RunWithConfig finished")
return []byte{}, nil
}
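The shutdown ordering commented above (stop the source, close the input channel, wait for the processor to drain) is the standard close-then-wait pattern for Go pipelines. A minimal sketch, not the actual sloop processor:

```go
package main

import "fmt"

func main() {
	events := make(chan int, 100)
	done := make(chan struct{})

	// Processor: drains the channel until it is closed, then signals done.
	count := 0
	go func() {
		for range events {
			count++
		}
		close(done)
	}()

	events <- 1
	events <- 2

	// Step 2: closing the input channel signals "no more work".
	close(events)
	// Step 3: wait until the processor has drained everything;
	// after this the store would no longer change.
	<-done
	fmt.Println("all work complete:", count)
}
```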
// By default glog will not print anything to console, which can confuse users
// This will turn it on unless user sets it explicitly (with --alsologtostderr=false)
func setupStdErrLogging() {
for _, arg := range os.Args[1:] {
if strings.Contains(arg, alsologtostderr) {
return
}
}
err := flag.Set("alsologtostderr", "true")
if err != nil {
panic(err)
}
}
func RealMain() error {
defer glog.Flush()
setupStdErrLogging()
config := config.Init() // internally this calls flag.Parse()
glog.Infof("SloopConfig: %v", config.ToYaml())
_, err := RunWithConfig(config)
return err
}


@ -0,0 +1,79 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package typed
import (
"fmt"
"github.com/dgraph-io/badger"
"github.com/salesforce/sloop/pkg/sloop/store/untyped"
"github.com/salesforce/sloop/pkg/sloop/store/untyped/badgerwrap"
"strings"
"time"
)
type EventCountKey struct {
PartitionId string
Kind string
Namespace string
Name string
Uid string
}
func NewEventCountKey(timestamp time.Time, kind string, namespace string, name string, uid string) *EventCountKey {
partitionId := untyped.GetPartitionId(timestamp)
return &EventCountKey{PartitionId: partitionId, Kind: kind, Namespace: namespace, Name: name, Uid: uid}
}
func (_ *EventCountKey) TableName() string {
return "eventcount"
}
func (k *EventCountKey) Parse(key string) error {
parts := strings.Split(key, "/")
if len(parts) != 7 {
return fmt.Errorf("Key should have 6 parts after the leading /: %v", key)
}
if parts[0] != "" {
return fmt.Errorf("Key should start with /: %v", key)
}
if parts[1] != k.TableName() {
return fmt.Errorf("Second part of key (%v) should be %v", key, k.TableName())
}
k.PartitionId = parts[2]
k.Kind = parts[3]
k.Namespace = parts[4]
k.Name = parts[5]
k.Uid = parts[6]
return nil
}
func (k *EventCountKey) String() string {
return fmt.Sprintf("/%v/%v/%v/%v/%v/%v", k.TableName(), k.PartitionId, k.Kind, k.Namespace, k.Name, k.Uid)
}
func (_ *EventCountKey) ValidateKey(key string) error {
newKey := EventCountKey{}
return newKey.Parse(key)
}
func (t *ResourceEventCountsTable) GetOrDefault(txn badgerwrap.Txn, key string) (*ResourceEventCounts, error) {
rec, err := t.Get(txn, key)
if err != nil {
if err != badger.ErrKeyNotFound {
return nil, err
} else {
return &ResourceEventCounts{MapMinToEvents: make(map[int64]*EventCounts)}, nil
}
}
return rec, nil
}
func (k *EventCountKey) SetPartitionId(newPartitionId string) {
k.PartitionId = newPartitionId
}


@ -0,0 +1,141 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package typed
import (
"github.com/dgraph-io/badger"
"github.com/salesforce/sloop/pkg/sloop/store/untyped"
"github.com/salesforce/sloop/pkg/sloop/store/untyped/badgerwrap"
"github.com/salesforce/sloop/pkg/sloop/test/assertex"
"github.com/stretchr/testify/assert"
"testing"
"time"
)
const someMinute = 13
const someReason = "someReason"
const someCount = 23
func Test_EventCountTableKey_OutputCorrect(t *testing.T) {
untyped.TestHookSetPartitionDuration(time.Hour)
k := NewEventCountKey(someTs, someKind, someNamespace, someName, someUid)
assert.Equal(t, "/eventcount/001546398000/somekind/somenamespace/somename/68510937-4ffc-11e9-8e26-1418775557c8", k.String())
}
func Test_EventCountTableKey_ParseCorrect(t *testing.T) {
untyped.TestHookSetPartitionDuration(time.Hour)
k := &EventCountKey{}
err := k.Parse("/eventcount/001546398000/somekind/somenamespace/somename/68510937-4ffc-11e9-8e26-1418775557c8")
assert.Nil(t, err)
assert.Equal(t, "001546398000", k.PartitionId)
assert.Equal(t, someNamespace, k.Namespace)
assert.Equal(t, someName, k.Name)
assert.Equal(t, someUid, k.Uid)
}
func helper_update_eventcount_table(t *testing.T) (badgerwrap.DB, *ResourceEventCountsTable) {
untyped.TestHookSetPartitionDuration(time.Hour)
var keys []string
for i := 'a'; i < 'd'; i++ {
// add keys in ascending order
keys = append(keys, NewEventCountKey(someTs, someKind, someNamespace, someName+string(i), someUid).String())
}
expectedResult := &ResourceEventCounts{MapMinToEvents: make(map[int64]*EventCounts)}
expectedResult.MapMinToEvents[someMinute] = &EventCounts{MapReasonToCount: make(map[string]int32)}
expectedResult.MapMinToEvents[someMinute].MapReasonToCount[someReason] = someCount
b, err := (&badgerwrap.MockFactory{}).Open(badger.DefaultOptions(""))
assert.Nil(t, err)
et := OpenResourceEventCountsTable()
err = b.Update(func(txn badgerwrap.Txn) error {
var txerr error
for _, key := range keys {
txerr = et.Set(txn, key, expectedResult)
if txerr != nil {
return txerr
}
}
return nil
})
assert.Nil(t, err)
return b, et
}
func Test_EventCount_PutThenGet_SameData(t *testing.T) {
untyped.TestHookSetPartitionDuration(time.Hour)
key := NewEventCountKey(someTs, someKind, someNamespace, someName, someUid).String()
val := &ResourceEventCounts{MapMinToEvents: make(map[int64]*EventCounts)}
val.MapMinToEvents[someMinute] = &EventCounts{MapReasonToCount: make(map[string]int32)}
val.MapMinToEvents[someMinute].MapReasonToCount[someReason] = someCount
b, err := (&badgerwrap.MockFactory{}).Open(badger.DefaultOptions(""))
assert.Nil(t, err)
wt := OpenResourceEventCountsTable()
err = b.Update(func(txn badgerwrap.Txn) error {
txerr := wt.Set(txn, key, val)
if txerr != nil {
return txerr
}
return nil
})
assert.Nil(t, err)
var retval *ResourceEventCounts
err = b.View(func(txn badgerwrap.Txn) error {
var txerr error
retval, txerr = wt.Get(txn, key)
if txerr != nil {
return txerr
}
return nil
})
assert.Nil(t, err)
assertex.ProtoEqual(t, val, retval)
}
func Test_EventCount_TestMinAndMaxKeys(t *testing.T) {
db, rt := helper_update_eventcount_table(t)
var minKey string
var maxKey string
err := db.View(func(txn badgerwrap.Txn) error {
_, minKey = rt.GetMinKey(txn)
_, maxKey = rt.GetMaxKey(txn)
return nil
})
assert.Nil(t, err)
assert.Equal(t, "/eventcount/001546398000/somekind/somenamespace/somenamea/68510937-4ffc-11e9-8e26-1418775557c8", minKey)
assert.Equal(t, "/eventcount/001546398000/somekind/somenamespace/somenamec/68510937-4ffc-11e9-8e26-1418775557c8", maxKey)
}
func Test_EventCount_TestGetMinMaxPartitions(t *testing.T) {
db, rt := helper_update_eventcount_table(t)
var minPartition string
var maxPartition string
var found bool
err := db.View(func(txn badgerwrap.Txn) error {
found, minPartition, maxPartition = rt.GetMinMaxPartitions(txn)
return nil
})
assert.Nil(t, err)
assert.True(t, found)
assert.Equal(t, untyped.GetPartitionId(someTs), minPartition)
assert.Equal(t, untyped.GetPartitionId(someTs), maxPartition)
}
func (_ *EventCountKey) GetTestKey() string {
k := NewEventCountKey(someTs, "someKind", "someNamespace", "someName", "someUuid")
return k.String()
}
func (_ *EventCountKey) GetTestValue() *ResourceEventCounts {
return &ResourceEventCounts{}
}


@ -0,0 +1,271 @@
// This file was automatically generated by genny.
// Any changes will be lost if this file is regenerated.
// see https://github.com/cheekybits/genny
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package typed
import (
"fmt"
"strconv"
"time"
"github.com/dgraph-io/badger"
"github.com/golang/protobuf/proto"
"github.com/pkg/errors"
"github.com/salesforce/sloop/pkg/sloop/store/untyped"
"github.com/salesforce/sloop/pkg/sloop/store/untyped/badgerwrap"
)
type ResourceEventCountsTable struct {
tableName string
}
func OpenResourceEventCountsTable() *ResourceEventCountsTable {
keyInst := &EventCountKey{}
return &ResourceEventCountsTable{tableName: keyInst.TableName()}
}
func (t *ResourceEventCountsTable) Set(txn badgerwrap.Txn, key string, value *ResourceEventCounts) error {
err := (&EventCountKey{}).ValidateKey(key)
if err != nil {
return errors.Wrapf(err, "invalid key for table %v: %v", t.tableName, key)
}
outb, err := proto.Marshal(value)
if err != nil {
return errors.Wrapf(err, "protobuf marshal for table %v failed", t.tableName)
}
err = txn.Set([]byte(key), outb)
if err != nil {
return errors.Wrapf(err, "set for table %v failed", t.tableName)
}
return nil
}
func (t *ResourceEventCountsTable) Get(txn badgerwrap.Txn, key string) (*ResourceEventCounts, error) {
err := (&EventCountKey{}).ValidateKey(key)
if err != nil {
return nil, errors.Wrapf(err, "invalid key for table %v: %v", t.tableName, key)
}
item, err := txn.Get([]byte(key))
if err == badger.ErrKeyNotFound {
// Don't wrap; callers need to detect badger.ErrKeyNotFound by type
return nil, err
} else if err != nil {
return nil, errors.Wrapf(err, "get failed for table %v", t.tableName)
}
valueBytes, err := item.ValueCopy([]byte{})
if err != nil {
return nil, errors.Wrapf(err, "value copy failed for table %v", t.tableName)
}
retValue := &ResourceEventCounts{}
err = proto.Unmarshal(valueBytes, retValue)
if err != nil {
return nil, errors.Wrapf(err, "protobuf unmarshal failed for table %v on value length %v", t.tableName, len(valueBytes))
}
return retValue, nil
}
func (t *ResourceEventCountsTable) GetMinKey(txn badgerwrap.Txn) (bool, string) {
keyPrefix := "/" + t.tableName + "/"
iterOpt := badger.DefaultIteratorOptions
iterOpt.Prefix = []byte(keyPrefix)
iterator := txn.NewIterator(iterOpt)
defer iterator.Close()
iterator.Seek([]byte(keyPrefix))
if !iterator.ValidForPrefix([]byte(keyPrefix)) {
return false, ""
}
return true, string(iterator.Item().Key())
}
func (t *ResourceEventCountsTable) GetMaxKey(txn badgerwrap.Txn) (bool, string) {
keyPrefix := "/" + t.tableName + "/"
iterOpt := badger.DefaultIteratorOptions
iterOpt.Prefix = []byte(keyPrefix)
iterOpt.Reverse = true
iterator := txn.NewIterator(iterOpt)
defer iterator.Close()
// Seek past the end of the range by appending rune 255, which sorts after any ASCII key byte
iterator.Seek([]byte(keyPrefix + string(rune(255))))
if !iterator.Valid() {
return false, ""
}
return true, string(iterator.Item().Key())
}
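The reverse seek above works because appending rune 255 (encoded as two UTF-8 bytes, both greater than any ASCII byte) produces a string that sorts lexicographically after every ASCII key sharing the prefix. A sketch, assuming keys are ASCII-only as the table keys here are:

```go
package main

import "fmt"

func main() {
	prefix := "/eventcount/"
	// Sorts after all ASCII keys with this prefix, so a reverse seek
	// starting here lands on the last real key in the range.
	seek := prefix + string(rune(255))
	keys := []string{prefix + "000000000001/a", prefix + "999999999999/z"}
	for _, k := range keys {
		fmt.Println(k < seek) // true for both
	}
}
```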
func (t *ResourceEventCountsTable) GetMinMaxPartitions(txn badgerwrap.Txn) (bool, string, string) {
ok, minKeyStr := t.GetMinKey(txn)
if !ok {
return false, "", ""
}
ok, maxKeyStr := t.GetMaxKey(txn)
if !ok {
// This should be impossible
return false, "", ""
}
minKey := &EventCountKey{}
maxKey := &EventCountKey{}
err := minKey.Parse(minKeyStr)
if err != nil {
panic(fmt.Sprintf("invalid key in table: %v key: %q error: %v", t.tableName, minKeyStr, err))
}
err = maxKey.Parse(maxKeyStr)
if err != nil {
panic(fmt.Sprintf("invalid key in table: %v key: %q error: %v", t.tableName, maxKeyStr, err))
}
return true, minKey.PartitionId, maxKey.PartitionId
}
func (t *ResourceEventCountsTable) RangeRead(
txn badgerwrap.Txn,
keyPredicateFn func(string) bool,
valPredicateFn func(*ResourceEventCounts) bool,
startTime time.Time,
endTime time.Time) (map[EventCountKey]*ResourceEventCounts, RangeReadStats, error) {
resources := map[EventCountKey]*ResourceEventCounts{}
keyPrefix := "/" + t.tableName + "/"
iterOpt := badger.DefaultIteratorOptions
iterOpt.Prefix = []byte(keyPrefix)
itr := txn.NewIterator(iterOpt)
defer itr.Close()
startPartition := untyped.GetPartitionId(startTime)
endPartition := untyped.GetPartitionId(endTime)
startPartitionPrefix := keyPrefix + startPartition + "/"
stats := RangeReadStats{}
before := time.Now()
lastPartition := ""
for itr.Seek([]byte(startPartitionPrefix)); itr.ValidForPrefix([]byte(keyPrefix)); itr.Next() {
stats.RowsVisitedCount += 1
if !keyPredicateFn(string(itr.Item().Key())) {
continue
}
stats.RowsPassedKeyPredicateCount += 1
key := EventCountKey{}
err := key.Parse(string(itr.Item().Key()))
if err != nil {
return nil, stats, err
}
if key.PartitionId != lastPartition {
stats.PartitionCount += 1
lastPartition = key.PartitionId
}
// partitions are zero padded to 12 digits so we can compare them lexicographically
if key.PartitionId > endPartition {
// end of range
break
}
valueBytes, err := itr.Item().ValueCopy([]byte{})
if err != nil {
return nil, stats, err
}
retValue := &ResourceEventCounts{}
err = proto.Unmarshal(valueBytes, retValue)
if err != nil {
return nil, stats, err
}
if valPredicateFn != nil && !valPredicateFn(retValue) {
continue
}
stats.RowsPassedValuePredicateCount += 1
resources[key] = retValue
}
stats.Elapsed = time.Since(before)
stats.TableName = (&EventCountKey{}).TableName()
return resources, stats, nil
}
// TODO: add unit test
func (t *ResourceEventCountsTable) GetUniquePartitionList(txn badgerwrap.Txn) ([]string, error) {
resources := []string{}
ok, minPar, maxPar := t.GetMinMaxPartitions(txn)
if ok {
parDuration := untyped.GetPartitionDuration()
for curPar := minPar; curPar < maxPar; {
resources = append(resources, curPar)
// update curPar
partInt, err := strconv.ParseInt(curPar, 10, 64)
if err != nil {
return resources, errors.Wrapf(err, "failed to get partition:%v", curPar)
}
parTime := time.Unix(partInt, 0).UTC().Add(parDuration)
curPar = untyped.GetPartitionId(parTime)
}
}
return resources, nil
}
// TODO: add unit test
func (t *ResourceEventCountsTable) GetPreviousKey(txn badgerwrap.Txn, key EventCountKey, keyPrefix EventCountKey) (EventCountKey, error) {
partitionList, err := t.GetUniquePartitionList(txn)
if err != nil {
return EventCountKey{}, errors.Wrapf(err, "failed to get partition list from table:%v", t.tableName)
}
currentPartition := key.PartitionId
for i := len(partitionList) - 1; i >= 0; i-- {
prePart := partitionList[i]
if prePart > currentPartition {
continue
} else {
prevFound, prevKey, err := t.getLastMatchingKeyInPartition(txn, prePart, key, keyPrefix)
if err != nil {
return EventCountKey{}, errors.Wrapf(err, "Failure getting previous key for %v, for partition id:%v", key.String(), prePart)
}
if prevFound {
return prevKey, nil
}
}
}
return EventCountKey{}, fmt.Errorf("failed to get any previous key in table:%v, for key:%v, keyPrefix:%v", t.tableName, key.String(), keyPrefix)
}
// TODO: add unit test
func (t *ResourceEventCountsTable) getLastMatchingKeyInPartition(txn badgerwrap.Txn, curPartition string, key EventCountKey, keyPrefix EventCountKey) (bool, EventCountKey, error) {
iterOpt := badger.DefaultIteratorOptions
iterOpt.Reverse = true
itr := txn.NewIterator(iterOpt)
defer itr.Close()
// update partition with current value
key.SetPartitionId(curPartition)
keySeekStr := key.String()
itr.Seek([]byte(keySeekStr))
if !itr.Valid() {
return false, EventCountKey{}, nil
}
// if the result is the same as the seek key, we want to check the one before it
keyRes := string(itr.Item().Key())
if keyRes == key.String() {
itr.Next()
}
if itr.ValidForPrefix([]byte(keyPrefix.String())) {
key := EventCountKey{}
err := key.Parse(string(itr.Item().Key()))
if err != nil {
return false, EventCountKey{}, err
}
return true, key, nil
}
return false, EventCountKey{}, nil
}


@ -0,0 +1,53 @@
// This file was automatically generated by genny.
// Any changes will be lost if this file is regenerated.
// see https://github.com/cheekybits/genny
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package typed
import (
"fmt"
"reflect"
"testing"
"time"
"github.com/dgraph-io/badger"
"github.com/salesforce/sloop/pkg/sloop/store/untyped"
"github.com/salesforce/sloop/pkg/sloop/store/untyped/badgerwrap"
"github.com/stretchr/testify/assert"
)
func helper_ResourceEventCounts_ShouldSkip() bool {
// Tests will not work on the fake types in the template, but we want to run tests on real objects
if "typed.Value"+"Type" == fmt.Sprint(reflect.TypeOf(ResourceEventCounts{})) {
fmt.Println("Skipping unit test")
return true
}
return false
}
func Test_ResourceEventCountsTable_SetWorks(t *testing.T) {
if helper_ResourceEventCounts_ShouldSkip() {
return
}
untyped.TestHookSetPartitionDuration(time.Hour * 24)
db, err := (&badgerwrap.MockFactory{}).Open(badger.DefaultOptions(""))
assert.Nil(t, err)
err = db.Update(func(txn badgerwrap.Txn) error {
k := (&EventCountKey{}).GetTestKey()
vt := OpenResourceEventCountsTable()
err2 := vt.Set(txn, k, (&EventCountKey{}).GetTestValue())
assert.Nil(t, err2)
return nil
})
assert.Nil(t, err)
}


@ -0,0 +1,30 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package typed
import (
"github.com/golang/protobuf/ptypes"
"github.com/golang/protobuf/ptypes/timestamp"
"github.com/pkg/errors"
"time"
)
func StringToProtobufTimestamp(ts string) (*timestamp.Timestamp, error) {
t, err := time.Parse(time.RFC3339, ts)
if err != nil {
return nil, errors.Wrap(err, "could not parse timestamp")
}
tspb, err := ptypes.TimestampProto(t)
if err != nil {
return nil, errors.Wrap(err, "could not transform to proto timestamp")
}
return tspb, nil
}


@ -0,0 +1,39 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package typed
import (
"github.com/golang/protobuf/ptypes/timestamp"
"github.com/stretchr/testify/assert"
"testing"
)
func Test_StringToProtobufTimestamp_Success(t *testing.T) {
expectedResult := &timestamp.Timestamp{
Seconds: 1562962332,
Nanos: 0,
}
ts, err := StringToProtobufTimestamp("2019-07-12T20:12:12Z")
assert.Nil(t, err)
assert.Equal(t, expectedResult, ts)
}
func Test_StringToProtobufTimestamp_FailureCannotParse(t *testing.T) {
ts, err := StringToProtobufTimestamp("2019-070:12:12Z")
assert.NotNil(t, err)
assert.Contains(t, err.Error(), "could not parse timestamp")
assert.Nil(t, ts)
}
func Test_StringToProtobufTimestamp_FailureCannotTransformToPB(t *testing.T) {
ts, err := StringToProtobufTimestamp("0000-07-12T20:12:12Z")
assert.NotNil(t, err)
assert.Contains(t, err.Error(), "could not transform to proto timestamp")
assert.Nil(t, ts)
}


@@ -0,0 +1,73 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package typed
import (
"fmt"
"github.com/salesforce/sloop/pkg/sloop/store/untyped"
"strings"
"time"
)
// Key is /<partition>/<kind>/<namespace>/<name>/<uid>
//
// Partition is UnixSeconds rounded down to partition duration
// Kind is kubernetes kind, starts with upper case
// Namespace is kubernetes namespace, all lower
// Name is kubernetes name, all lower
// Uid is kubernetes $.metadata.uid
type ResourceSummaryKey struct {
PartitionId string
Kind string
Namespace string
Name string
Uid string
}
func NewResourceSummaryKey(timestamp time.Time, kind string, namespace string, name string, uid string) *ResourceSummaryKey {
partitionId := untyped.GetPartitionId(timestamp)
return &ResourceSummaryKey{PartitionId: partitionId, Kind: kind, Namespace: namespace, Name: name, Uid: uid}
}
func (_ *ResourceSummaryKey) TableName() string {
return "ressum"
}
func (k *ResourceSummaryKey) Parse(key string) error {
parts := strings.Split(key, "/")
if len(parts) != 7 {
return fmt.Errorf("Key should have 6 parts: %v", key)
}
if parts[0] != "" {
return fmt.Errorf("Key should start with /: %v", key)
}
if parts[1] != k.TableName() {
return fmt.Errorf("Second part of key (%v) should be %v", key, k.TableName())
}
k.PartitionId = parts[2]
k.Kind = parts[3]
k.Namespace = parts[4]
k.Name = parts[5]
k.Uid = parts[6]
return nil
}
func (k *ResourceSummaryKey) String() string {
return fmt.Sprintf("/%v/%v/%v/%v/%v/%v", k.TableName(), k.PartitionId, k.Kind, k.Namespace, k.Name, k.Uid)
}
func (k *ResourceSummaryKey) SetPartitionId(newPartitionId string) {
k.PartitionId = newPartitionId
}
func (_ *ResourceSummaryKey) ValidateKey(key string) error {
newKey := ResourceSummaryKey{}
return newKey.Parse(key)
}
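The key layout documented above can be exercised with a String/Parse round trip. A minimal standalone sketch (the `key` struct and `parse` function here are illustrative stand-ins, not the real ResourceSummaryKey API):

```go
package main

import (
	"fmt"
	"strings"
)

// key is an illustrative stand-in for ResourceSummaryKey:
// /ressum/<partition>/<kind>/<namespace>/<name>/<uid>
type key struct{ partition, kind, namespace, name, uid string }

func (k key) String() string {
	return fmt.Sprintf("/ressum/%s/%s/%s/%s/%s", k.partition, k.kind, k.namespace, k.name, k.uid)
}

func parse(s string) (key, error) {
	parts := strings.Split(s, "/")
	// A well-formed key splits into 7 parts: the leading empty string,
	// the table name, and the 5 key segments.
	if len(parts) != 7 || parts[0] != "" || parts[1] != "ressum" {
		return key{}, fmt.Errorf("malformed key: %q", s)
	}
	return key{parts[2], parts[3], parts[4], parts[5], parts[6]}, nil
}

func main() {
	k := key{"001546398000", "Pod", "default", "mypod", "uid-1"}
	back, err := parse(k.String())
	fmt.Println(k.String(), err == nil, back == k)
}
```

The leading empty element from the split is why Parse checks for 7 parts even though the key has 6 named segments.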


@@ -0,0 +1,284 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package typed
import (
"github.com/dgraph-io/badger"
"github.com/golang/protobuf/ptypes"
"github.com/salesforce/sloop/pkg/sloop/store/untyped"
"github.com/salesforce/sloop/pkg/sloop/store/untyped/badgerwrap"
"github.com/salesforce/sloop/pkg/sloop/test/assertex"
"github.com/stretchr/testify/assert"
"testing"
"time"
)
const someUid = "68510937-4ffc-11e9-8e26-1418775557c8"
var someFirstSeenTime = time.Date(2019, 3, 4, 3, 4, 5, 6, time.UTC)
func Test_ResourceSummaryTableKey_OutputCorrect(t *testing.T) {
untyped.TestHookSetPartitionDuration(time.Hour)
k := NewResourceSummaryKey(someTs, someKind, someNamespace, someName, someUid)
assert.Equal(t, "/ressum/001546398000/somekind/somenamespace/somename/68510937-4ffc-11e9-8e26-1418775557c8", k.String())
}
func Test_ResourceSummaryTableKey_ParseCorrect(t *testing.T) {
untyped.TestHookSetPartitionDuration(time.Hour)
k := &ResourceSummaryKey{}
err := k.Parse("/ressum/001546398000/somekind/somenamespace/somename/68510937-4ffc-11e9-8e26-1418775557c8")
assert.Nil(t, err)
assert.Equal(t, "001546398000", k.PartitionId)
assert.Equal(t, someNamespace, k.Namespace)
assert.Equal(t, someName, k.Name)
assert.Equal(t, someUid, k.Uid)
}
func Test_ResourceSummary_PutThenGet_SameData(t *testing.T) {
untyped.TestHookSetPartitionDuration(time.Hour)
createTimeProto, err := ptypes.TimestampProto(someFirstSeenTime)
assert.Nil(t, err)
key := NewResourceSummaryKey(someTs, someKind, someNamespace, someName, someUid).String()
val := &ResourceSummary{FirstSeen: createTimeProto}
b, err := (&badgerwrap.MockFactory{}).Open(badger.DefaultOptions(""))
assert.Nil(t, err)
wt := OpenResourceSummaryTable()
err = b.Update(func(txn badgerwrap.Txn) error {
txerr := wt.Set(txn, key, val)
if txerr != nil {
return txerr
}
return nil
})
assert.Nil(t, err)
var retval *ResourceSummary
err = b.View(func(txn badgerwrap.Txn) error {
var txerr error
retval, txerr = wt.Get(txn, key)
if txerr != nil {
return txerr
}
return nil
})
assert.Nil(t, err)
assertex.ProtoEqual(t, val.FirstSeen, retval.FirstSeen)
}
func Test_ResourceSummary_RangeRead(t *testing.T) {
untyped.TestHookSetPartitionDuration(time.Hour)
createTimeProto, err := ptypes.TimestampProto(someFirstSeenTime)
assert.Nil(t, err)
key1 := NewResourceSummaryKey(someTs, someKind, someNamespace, someName+"a", someUid)
key2 := NewResourceSummaryKey(someTs, someKind, someNamespace, someName+"b", someUid)
val := &ResourceSummary{FirstSeen: createTimeProto}
b, err := (&badgerwrap.MockFactory{}).Open(badger.DefaultOptions(""))
assert.Nil(t, err)
wt := OpenResourceSummaryTable()
err = b.Update(func(txn badgerwrap.Txn) error {
txerr := wt.Set(txn, key1.String(), val)
if txerr != nil {
return txerr
}
txerr = wt.Set(txn, key2.String(), val)
if txerr != nil {
return txerr
}
return nil
})
assert.Nil(t, err)
var retval map[ResourceSummaryKey]*ResourceSummary
err = b.View(func(txn badgerwrap.Txn) error {
var txerr error
retval, _, txerr = wt.RangeRead(txn, func(k string) bool { return true }, func(r *ResourceSummary) bool { return true }, someTs, someTs)
if txerr != nil {
return txerr
}
return nil
})
assert.Nil(t, err)
assert.Contains(t, retval, *key1)
assert.Contains(t, retval, *key2)
assertex.ProtoEqual(t, val, retval[*key1])
assertex.ProtoEqual(t, val, retval[*key2])
}
func Test_ResourceSummary_RangeReadWithKeyPredicate(t *testing.T) {
untyped.TestHookSetPartitionDuration(time.Hour)
createTimeProto, err := ptypes.TimestampProto(someFirstSeenTime)
assert.Nil(t, err)
key1 := NewResourceSummaryKey(someTs, someKind, someNamespace, someName+"a", someUid)
key2 := NewResourceSummaryKey(someTs, someKind, someNamespace+"b", someName+"b", someUid)
val := &ResourceSummary{FirstSeen: createTimeProto}
b, err := (&badgerwrap.MockFactory{}).Open(badger.DefaultOptions(""))
assert.Nil(t, err)
wt := OpenResourceSummaryTable()
err = b.Update(func(txn badgerwrap.Txn) error {
txerr := wt.Set(txn, key1.String(), val)
if txerr != nil {
return txerr
}
txerr = wt.Set(txn, key2.String(), val)
if txerr != nil {
return txerr
}
return nil
})
assert.Nil(t, err)
var retval map[ResourceSummaryKey]*ResourceSummary
err = b.View(func(txn badgerwrap.Txn) error {
var txerr error
retval, _, txerr = wt.RangeRead(txn, func(k string) bool {
key := &ResourceSummaryKey{}
err2 := key.Parse(k)
assert.Nil(t, err2)
return key.Namespace == someNamespace+"b"
}, func(r *ResourceSummary) bool { return true }, someTs, someTs)
if txerr != nil {
return txerr
}
return nil
})
assert.Nil(t, err)
assert.Len(t, retval, 1)
assert.Contains(t, retval, *key2)
assert.NotContains(t, retval, *key1)
assertex.ProtoEqual(t, val, retval[*key2])
}
func helper_update_resourcesummary_table(t *testing.T, keysFn func() []string) (badgerwrap.DB, *ResourceSummaryTable) {
untyped.TestHookSetPartitionDuration(time.Hour)
createTimeProto, err := ptypes.TimestampProto(someFirstSeenTime)
assert.Nil(t, err)
keys := keysFn()
val := &ResourceSummary{FirstSeen: createTimeProto}
b, err := (&badgerwrap.MockFactory{}).Open(badger.DefaultOptions(""))
assert.Nil(t, err)
rt := OpenResourceSummaryTable()
err = b.Update(func(txn badgerwrap.Txn) error {
var txerr error
for _, key := range keys {
txerr = rt.Set(txn, key, val)
if txerr != nil {
return txerr
}
}
return nil
})
assert.Nil(t, err)
return b, rt
}
func Test_ResourceSummary_TestMinAndMaxKeys(t *testing.T) {
keysFn := func() []string {
var keys []string
for i := 'a'; i < 'd'; i++ {
// add keys in ascending order
keys = append(keys, NewResourceSummaryKey(someTs, someKind, someNamespace, someName+string(i), someUid).String())
}
return keys
}
db, rt := helper_update_resourcesummary_table(t, keysFn)
var minKey string
var maxKey string
err := db.View(func(txn badgerwrap.Txn) error {
_, minKey = rt.GetMinKey(txn)
_, maxKey = rt.GetMaxKey(txn)
return nil
})
assert.Nil(t, err)
assert.Equal(t, "/ressum/001546398000/somekind/somenamespace/somenamea/68510937-4ffc-11e9-8e26-1418775557c8", minKey)
assert.Equal(t, "/ressum/001546398000/somekind/somenamespace/somenamec/68510937-4ffc-11e9-8e26-1418775557c8", maxKey)
}
func Test_ResourceSummary_TestGetMinMaxPartitions(t *testing.T) {
keysFn := func() []string {
var keys []string
for i := 'a'; i < 'd'; i++ {
// add keys in ascending order
keys = append(keys, NewResourceSummaryKey(someTs, someKind, someNamespace, someName+string(i), someUid).String())
}
return keys
}
db, rt := helper_update_resourcesummary_table(t, keysFn)
var minPartition string
var maxPartition string
var found bool
err := db.View(func(txn badgerwrap.Txn) error {
found, minPartition, maxPartition = rt.GetMinMaxPartitions(txn)
return nil
})
assert.Nil(t, err)
assert.True(t, found)
assert.Equal(t, untyped.GetPartitionId(someTs), minPartition)
assert.Equal(t, untyped.GetPartitionId(someTs), maxPartition)
}
func Test_ResourceSummary_RangeReadWithTimeRange(t *testing.T) {
var someTs = time.Date(2019, 1, 2, 3, 4, 5, 6, time.UTC)
keysFn := func() []string {
var keys []string
for i := 'a'; i < 'c'; i++ {
// add keys in ascending order
keys = append(keys, NewResourceSummaryKey(someTs, someKind, someNamespace, someName+string(i), someUid).String())
}
for i := 'c'; i < 'e'; i++ {
// add keys in ascending order
keys = append(keys, NewResourceSummaryKey(someTs.Add(1*time.Hour), someKind, someNamespace, someName+string(i), someUid).String())
}
for i := 'e'; i < 'g'; i++ {
// add keys in ascending order
keys = append(keys, NewResourceSummaryKey(someTs.Add(2*time.Hour), someKind, someNamespace, someName+string(i), someUid).String())
}
return keys
}
db, rst := helper_update_resourcesummary_table(t, keysFn)
var retval map[ResourceSummaryKey]*ResourceSummary
err := db.View(func(txn badgerwrap.Txn) error {
var txerr error
// someTs has minute 4; ending 5 minutes early keeps the (someTs + 2h) partition out of range
retval, _, txerr = rst.RangeRead(txn, func(k string) bool { return true }, func(r *ResourceSummary) bool { return true }, someTs.Add(1*time.Hour), someTs.Add(2*time.Hour-5*time.Minute))
if txerr != nil {
return txerr
}
return nil
})
assert.Nil(t, err)
assert.Len(t, retval, 2)
expectedKey := &ResourceSummaryKey{}
err = expectedKey.Parse("/ressum/001546401600/somekind/somenamespace/somenamec/68510937-4ffc-11e9-8e26-1418775557c8")
assert.Nil(t, err)
assert.Contains(t, retval, *expectedKey)
err = expectedKey.Parse("/ressum/001546401600/somekind/somenamespace/somenamed/68510937-4ffc-11e9-8e26-1418775557c8")
assert.Nil(t, err)
assert.Contains(t, retval, *expectedKey)
}
func (_ *ResourceSummaryKey) GetTestKey() string {
k := NewResourceSummaryKey(someTs, "someKind", "someNamespace", "someName", "someUuid")
return k.String()
}
func (_ *ResourceSummaryKey) GetTestValue() *ResourceSummary {
return &ResourceSummary{}
}


@@ -0,0 +1,271 @@
// This file was automatically generated by genny.
// Any changes will be lost if this file is regenerated.
// see https://github.com/cheekybits/genny
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package typed
import (
"fmt"
"strconv"
"time"
"github.com/dgraph-io/badger"
"github.com/golang/protobuf/proto"
"github.com/pkg/errors"
"github.com/salesforce/sloop/pkg/sloop/store/untyped"
"github.com/salesforce/sloop/pkg/sloop/store/untyped/badgerwrap"
)
type ResourceSummaryTable struct {
tableName string
}
func OpenResourceSummaryTable() *ResourceSummaryTable {
keyInst := &ResourceSummaryKey{}
return &ResourceSummaryTable{tableName: keyInst.TableName()}
}
func (t *ResourceSummaryTable) Set(txn badgerwrap.Txn, key string, value *ResourceSummary) error {
err := (&ResourceSummaryKey{}).ValidateKey(key)
if err != nil {
return errors.Wrapf(err, "invalid key for table %v: %v", t.tableName, key)
}
outb, err := proto.Marshal(value)
if err != nil {
return errors.Wrapf(err, "protobuf marshal for table %v failed", t.tableName)
}
err = txn.Set([]byte(key), outb)
if err != nil {
return errors.Wrapf(err, "set for table %v failed", t.tableName)
}
return nil
}
func (t *ResourceSummaryTable) Get(txn badgerwrap.Txn, key string) (*ResourceSummary, error) {
err := (&ResourceSummaryKey{}).ValidateKey(key)
if err != nil {
return nil, errors.Wrapf(err, "invalid key for table %v: %v", t.tableName, key)
}
item, err := txn.Get([]byte(key))
if err == badger.ErrKeyNotFound {
// Don't wrap: callers need to compare against badger.ErrKeyNotFound
return nil, err
} else if err != nil {
return nil, errors.Wrapf(err, "get failed for table %v", t.tableName)
}
valueBytes, err := item.ValueCopy([]byte{})
if err != nil {
return nil, errors.Wrapf(err, "value copy failed for table %v", t.tableName)
}
retValue := &ResourceSummary{}
err = proto.Unmarshal(valueBytes, retValue)
if err != nil {
return nil, errors.Wrapf(err, "protobuf unmarshal failed for table %v on value length %v", t.tableName, len(valueBytes))
}
return retValue, nil
}
func (t *ResourceSummaryTable) GetMinKey(txn badgerwrap.Txn) (bool, string) {
keyPrefix := "/" + t.tableName + "/"
iterOpt := badger.DefaultIteratorOptions
iterOpt.Prefix = []byte(keyPrefix)
iterator := txn.NewIterator(iterOpt)
defer iterator.Close()
iterator.Seek([]byte(keyPrefix))
if !iterator.ValidForPrefix([]byte(keyPrefix)) {
return false, ""
}
return true, string(iterator.Item().Key())
}
func (t *ResourceSummaryTable) GetMaxKey(txn badgerwrap.Txn) (bool, string) {
keyPrefix := "/" + t.tableName + "/"
iterOpt := badger.DefaultIteratorOptions
iterOpt.Prefix = []byte(keyPrefix)
iterOpt.Reverse = true
iterator := txn.NewIterator(iterOpt)
defer iterator.Close()
// Seek past the end of the prefix range by appending a byte that sorts after any key suffix
iterator.Seek([]byte(keyPrefix + string(rune(255))))
if !iterator.Valid() {
return false, ""
}
return true, string(iterator.Item().Key())
}
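The reverse seek in GetMaxKey works because the appended high byte sorts after every real key in the prefix range. A minimal sketch of the same idea over a sorted string slice (`maxWithPrefix` is an illustrative helper, not part of sloop):

```go
package main

import (
	"fmt"
	"sort"
)

// maxWithPrefix returns the lexicographically largest key with the given
// prefix, mimicking GetMaxKey's reverse seek to prefix + high byte.
func maxWithPrefix(keys []string, prefix string) string {
	sorted := append([]string(nil), keys...)
	sort.Strings(sorted)
	// "\xff" is a single 0xFF byte, which sorts after any printable key byte,
	// so the entry just before the insertion point is the max key in the range.
	target := prefix + "\xff"
	i := sort.SearchStrings(sorted, target)
	if i == 0 {
		return ""
	}
	return sorted[i-1]
}

func main() {
	keys := []string{"/ressum/001/a", "/ressum/002/a", "/ressum/001/b"}
	fmt.Println(maxWithPrefix(keys, "/ressum/"))
}
```

One subtlety worth noting: in the table code, `string(rune(255))` encodes as the two UTF-8 bytes 0xC3 0xBF rather than a raw 0xFF, but both sort after any ASCII key byte, so the seek still lands past the end of the range.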
func (t *ResourceSummaryTable) GetMinMaxPartitions(txn badgerwrap.Txn) (bool, string, string) {
ok, minKeyStr := t.GetMinKey(txn)
if !ok {
return false, "", ""
}
ok, maxKeyStr := t.GetMaxKey(txn)
if !ok {
// This should be impossible
return false, "", ""
}
minKey := &ResourceSummaryKey{}
maxKey := &ResourceSummaryKey{}
err := minKey.Parse(minKeyStr)
if err != nil {
panic(fmt.Sprintf("invalid key in table: %v key: %q error: %v", t.tableName, minKeyStr, err))
}
err = maxKey.Parse(maxKeyStr)
if err != nil {
panic(fmt.Sprintf("invalid key in table: %v key: %q error: %v", t.tableName, maxKeyStr, err))
}
return true, minKey.PartitionId, maxKey.PartitionId
}
func (t *ResourceSummaryTable) RangeRead(
txn badgerwrap.Txn,
keyPredicateFn func(string) bool,
valPredicateFn func(*ResourceSummary) bool,
startTime time.Time,
endTime time.Time) (map[ResourceSummaryKey]*ResourceSummary, RangeReadStats, error) {
resources := map[ResourceSummaryKey]*ResourceSummary{}
keyPrefix := "/" + t.tableName + "/"
iterOpt := badger.DefaultIteratorOptions
iterOpt.Prefix = []byte(keyPrefix)
itr := txn.NewIterator(iterOpt)
defer itr.Close()
startPartition := untyped.GetPartitionId(startTime)
endPartition := untyped.GetPartitionId(endTime)
startPartitionPrefix := keyPrefix + startPartition + "/"
stats := RangeReadStats{}
before := time.Now()
lastPartition := ""
for itr.Seek([]byte(startPartitionPrefix)); itr.ValidForPrefix([]byte(keyPrefix)); itr.Next() {
stats.RowsVisitedCount += 1
if !keyPredicateFn(string(itr.Item().Key())) {
continue
}
stats.RowsPassedKeyPredicateCount += 1
key := ResourceSummaryKey{}
err := key.Parse(string(itr.Item().Key()))
if err != nil {
return nil, stats, err
}
if key.PartitionId != lastPartition {
stats.PartitionCount += 1
lastPartition = key.PartitionId
}
// partitions are zero padded to 12 digits so we can compare them lexicographically
if key.PartitionId > endPartition {
// end of range
break
}
valueBytes, err := itr.Item().ValueCopy([]byte{})
if err != nil {
return nil, stats, err
}
retValue := &ResourceSummary{}
err = proto.Unmarshal(valueBytes, retValue)
if err != nil {
return nil, stats, err
}
if valPredicateFn != nil && !valPredicateFn(retValue) {
continue
}
stats.RowsPassedValuePredicateCount += 1
resources[key] = retValue
}
stats.Elapsed = time.Since(before)
stats.TableName = (&ResourceSummaryKey{}).TableName()
return resources, stats, nil
}
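As the comment in RangeRead notes, partition ids compare lexicographically only because they are zero padded. A tiny standalone check (`partitionID` is an illustrative helper, assuming ids are Unix seconds padded to 12 digits, matching the `001546398000` ids in the tests above):

```go
package main

import "fmt"

// partitionID zero-pads Unix seconds to 12 digits so that string
// comparison matches numeric comparison.
func partitionID(unixSeconds int64) string {
	return fmt.Sprintf("%012d", unixSeconds)
}

func main() {
	a, b := partitionID(1546398000), partitionID(1546401600)
	fmt.Println(a, b, a < b)
}
```

Without the padding, ids like "999" and "1000" would compare the wrong way as strings, and the `key.PartitionId > endPartition` check in RangeRead would break.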
// TODO: add unit test
func (t *ResourceSummaryTable) GetUniquePartitionList(txn badgerwrap.Txn) ([]string, error) {
resources := []string{}
ok, minPar, maxPar := t.GetMinMaxPartitions(txn)
if ok {
parDuration := untyped.GetPartitionDuration()
for curPar := minPar; curPar <= maxPar; {
resources = append(resources, curPar)
// update curPar
partInt, err := strconv.ParseInt(curPar, 10, 64)
if err != nil {
return resources, errors.Wrapf(err, "failed to get partition:%v", curPar)
}
parTime := time.Unix(partInt, 0).UTC().Add(parDuration)
curPar = untyped.GetPartitionId(parTime)
}
}
return resources, nil
}
// TODO: add unit test
func (t *ResourceSummaryTable) GetPreviousKey(txn badgerwrap.Txn, key ResourceSummaryKey, keyPrefix ResourceSummaryKey) (ResourceSummaryKey, error) {
partitionList, err := t.GetUniquePartitionList(txn)
if err != nil {
return ResourceSummaryKey{}, errors.Wrapf(err, "failed to get partition list from table:%v", t.tableName)
}
currentPartition := key.PartitionId
for i := len(partitionList) - 1; i >= 0; i-- {
prePart := partitionList[i]
if prePart > currentPartition {
continue
} else {
prevFound, prevKey, err := t.getLastMatchingKeyInPartition(txn, prePart, key, keyPrefix)
if err != nil {
return ResourceSummaryKey{}, errors.Wrapf(err, "Failure getting previous key for %v, for partition id:%v", key.String(), prePart)
}
if prevFound {
return prevKey, nil
}
}
}
return ResourceSummaryKey{}, fmt.Errorf("failed to get any previous key in table:%v, for key:%v, keyPrefix:%v", t.tableName, key.String(), keyPrefix)
}
// TODO: add unit test
func (t *ResourceSummaryTable) getLastMatchingKeyInPartition(txn badgerwrap.Txn, curPartition string, key ResourceSummaryKey, keyPrefix ResourceSummaryKey) (bool, ResourceSummaryKey, error) {
iterOpt := badger.DefaultIteratorOptions
iterOpt.Reverse = true
itr := txn.NewIterator(iterOpt)
defer itr.Close()
// update partition with current value
key.SetPartitionId(curPartition)
keySeekStr := key.String()
itr.Seek([]byte(keySeekStr))
// if the iterator landed exactly on the seek key we want its predecessor, so step once more
if itr.Valid() && string(itr.Item().Key()) == keySeekStr {
itr.Next()
}
if itr.ValidForPrefix([]byte(keyPrefix.String())) {
prevKey := ResourceSummaryKey{}
err := prevKey.Parse(string(itr.Item().Key()))
if err != nil {
return false, ResourceSummaryKey{}, err
}
return true, prevKey, nil
}
return false, ResourceSummaryKey{}, nil
}


@@ -0,0 +1,53 @@
// This file was automatically generated by genny.
// Any changes will be lost if this file is regenerated.
// see https://github.com/cheekybits/genny
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package typed
import (
"fmt"
"reflect"
"testing"
"time"
"github.com/dgraph-io/badger"
"github.com/salesforce/sloop/pkg/sloop/store/untyped"
"github.com/salesforce/sloop/pkg/sloop/store/untyped/badgerwrap"
"github.com/stretchr/testify/assert"
)
func helper_ResourceSummary_ShouldSkip() bool {
// Tests will not work on the fake types in the template, but we want to run tests on real objects
if "typed.Value"+"Type" == fmt.Sprint(reflect.TypeOf(ResourceSummary{})) {
fmt.Println("Skipping unit test")
return true
}
return false
}
func Test_ResourceSummaryTable_SetWorks(t *testing.T) {
if helper_ResourceSummary_ShouldSkip() {
return
}
untyped.TestHookSetPartitionDuration(time.Hour * 24)
db, err := (&badgerwrap.MockFactory{}).Open(badger.DefaultOptions(""))
assert.Nil(t, err)
err = db.Update(func(txn badgerwrap.Txn) error {
k := (&ResourceSummaryKey{}).GetTestKey()
vt := OpenResourceSummaryTable()
err2 := vt.Set(txn, k, (&ResourceSummaryKey{}).GetTestValue())
assert.Nil(t, err2)
return nil
})
assert.Nil(t, err)
}


@@ -0,0 +1,369 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// source: schema.proto
package typed
import (
fmt "fmt"
proto "github.com/golang/protobuf/proto"
timestamp "github.com/golang/protobuf/ptypes/timestamp"
math "math"
)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.ProtoPackageIsVersion3 // please upgrade the proto package
type KubeWatchResult_WatchType int32
const (
KubeWatchResult_ADD KubeWatchResult_WatchType = 0
KubeWatchResult_UPDATE KubeWatchResult_WatchType = 1
KubeWatchResult_DELETE KubeWatchResult_WatchType = 2
)
var KubeWatchResult_WatchType_name = map[int32]string{
0: "ADD",
1: "UPDATE",
2: "DELETE",
}
var KubeWatchResult_WatchType_value = map[string]int32{
"ADD": 0,
"UPDATE": 1,
"DELETE": 2,
}
func (x KubeWatchResult_WatchType) String() string {
return proto.EnumName(KubeWatchResult_WatchType_name, int32(x))
}
func (KubeWatchResult_WatchType) EnumDescriptor() ([]byte, []int) {
return fileDescriptor_1c5fb4d8cc22d66a, []int{0, 0}
}
type KubeWatchResult struct {
Timestamp *timestamp.Timestamp `protobuf:"bytes,1,opt,name=timestamp,proto3" json:"timestamp,omitempty"`
Kind string `protobuf:"bytes,2,opt,name=kind,proto3" json:"kind,omitempty"`
WatchType KubeWatchResult_WatchType `protobuf:"varint,3,opt,name=watchType,proto3,enum=typed.KubeWatchResult_WatchType" json:"watchType,omitempty"`
Payload string `protobuf:"bytes,4,opt,name=payload,proto3" json:"payload,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *KubeWatchResult) Reset() { *m = KubeWatchResult{} }
func (m *KubeWatchResult) String() string { return proto.CompactTextString(m) }
func (*KubeWatchResult) ProtoMessage() {}
func (*KubeWatchResult) Descriptor() ([]byte, []int) {
return fileDescriptor_1c5fb4d8cc22d66a, []int{0}
}
func (m *KubeWatchResult) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_KubeWatchResult.Unmarshal(m, b)
}
func (m *KubeWatchResult) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_KubeWatchResult.Marshal(b, m, deterministic)
}
func (m *KubeWatchResult) XXX_Merge(src proto.Message) {
xxx_messageInfo_KubeWatchResult.Merge(m, src)
}
func (m *KubeWatchResult) XXX_Size() int {
return xxx_messageInfo_KubeWatchResult.Size(m)
}
func (m *KubeWatchResult) XXX_DiscardUnknown() {
xxx_messageInfo_KubeWatchResult.DiscardUnknown(m)
}
var xxx_messageInfo_KubeWatchResult proto.InternalMessageInfo
func (m *KubeWatchResult) GetTimestamp() *timestamp.Timestamp {
if m != nil {
return m.Timestamp
}
return nil
}
func (m *KubeWatchResult) GetKind() string {
if m != nil {
return m.Kind
}
return ""
}
func (m *KubeWatchResult) GetWatchType() KubeWatchResult_WatchType {
if m != nil {
return m.WatchType
}
return KubeWatchResult_ADD
}
func (m *KubeWatchResult) GetPayload() string {
if m != nil {
return m.Payload
}
return ""
}
// Enough information to draw a timeline and hierarchy
// Key: /<kind>/<namespace>/<name>/<uid>
type ResourceSummary struct {
FirstSeen *timestamp.Timestamp `protobuf:"bytes,1,opt,name=firstSeen,proto3" json:"firstSeen,omitempty"`
LastSeen *timestamp.Timestamp `protobuf:"bytes,2,opt,name=lastSeen,proto3" json:"lastSeen,omitempty"`
CreateTime *timestamp.Timestamp `protobuf:"bytes,3,opt,name=createTime,proto3" json:"createTime,omitempty"`
DeletedAtEnd bool `protobuf:"varint,4,opt,name=deletedAtEnd,proto3" json:"deletedAtEnd,omitempty"`
// List of relationships. Direction does not matter. Examples:
// A Pod has a relationship to its namespace, ReplicaSet or StatefulSet, node
// A ReplicaSet has a relationship to deployment and namespace
// A node might have a relationship to a rack (maybe later, as this is virtual)
// We don't need relationships in both directions. We can union them at query time
// Uses same key format here as this overall table
Relationships []string `protobuf:"bytes,5,rep,name=relationships,proto3" json:"relationships,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *ResourceSummary) Reset() { *m = ResourceSummary{} }
func (m *ResourceSummary) String() string { return proto.CompactTextString(m) }
func (*ResourceSummary) ProtoMessage() {}
func (*ResourceSummary) Descriptor() ([]byte, []int) {
return fileDescriptor_1c5fb4d8cc22d66a, []int{1}
}
func (m *ResourceSummary) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_ResourceSummary.Unmarshal(m, b)
}
func (m *ResourceSummary) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_ResourceSummary.Marshal(b, m, deterministic)
}
func (m *ResourceSummary) XXX_Merge(src proto.Message) {
xxx_messageInfo_ResourceSummary.Merge(m, src)
}
func (m *ResourceSummary) XXX_Size() int {
return xxx_messageInfo_ResourceSummary.Size(m)
}
func (m *ResourceSummary) XXX_DiscardUnknown() {
xxx_messageInfo_ResourceSummary.DiscardUnknown(m)
}
var xxx_messageInfo_ResourceSummary proto.InternalMessageInfo
func (m *ResourceSummary) GetFirstSeen() *timestamp.Timestamp {
if m != nil {
return m.FirstSeen
}
return nil
}
func (m *ResourceSummary) GetLastSeen() *timestamp.Timestamp {
if m != nil {
return m.LastSeen
}
return nil
}
func (m *ResourceSummary) GetCreateTime() *timestamp.Timestamp {
if m != nil {
return m.CreateTime
}
return nil
}
func (m *ResourceSummary) GetDeletedAtEnd() bool {
if m != nil {
return m.DeletedAtEnd
}
return false
}
func (m *ResourceSummary) GetRelationships() []string {
if m != nil {
return m.Relationships
}
return nil
}
type EventCounts struct {
MapReasonToCount map[string]int32 `protobuf:"bytes,1,rep,name=mapReasonToCount,proto3" json:"mapReasonToCount,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"varint,2,opt,name=value,proto3"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *EventCounts) Reset() { *m = EventCounts{} }
func (m *EventCounts) String() string { return proto.CompactTextString(m) }
func (*EventCounts) ProtoMessage() {}
func (*EventCounts) Descriptor() ([]byte, []int) {
return fileDescriptor_1c5fb4d8cc22d66a, []int{2}
}
func (m *EventCounts) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_EventCounts.Unmarshal(m, b)
}
func (m *EventCounts) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_EventCounts.Marshal(b, m, deterministic)
}
func (m *EventCounts) XXX_Merge(src proto.Message) {
xxx_messageInfo_EventCounts.Merge(m, src)
}
func (m *EventCounts) XXX_Size() int {
return xxx_messageInfo_EventCounts.Size(m)
}
func (m *EventCounts) XXX_DiscardUnknown() {
xxx_messageInfo_EventCounts.DiscardUnknown(m)
}
var xxx_messageInfo_EventCounts proto.InternalMessageInfo
func (m *EventCounts) GetMapReasonToCount() map[string]int32 {
if m != nil {
return m.MapReasonToCount
}
return nil
}
type ResourceEventCounts struct {
MapMinToEvents map[int64]*EventCounts `protobuf:"bytes,1,rep,name=mapMinToEvents,proto3" json:"mapMinToEvents,omitempty" protobuf_key:"varint,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *ResourceEventCounts) Reset() { *m = ResourceEventCounts{} }
func (m *ResourceEventCounts) String() string { return proto.CompactTextString(m) }
func (*ResourceEventCounts) ProtoMessage() {}
func (*ResourceEventCounts) Descriptor() ([]byte, []int) {
return fileDescriptor_1c5fb4d8cc22d66a, []int{3}
}
func (m *ResourceEventCounts) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_ResourceEventCounts.Unmarshal(m, b)
}
func (m *ResourceEventCounts) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_ResourceEventCounts.Marshal(b, m, deterministic)
}
func (m *ResourceEventCounts) XXX_Merge(src proto.Message) {
xxx_messageInfo_ResourceEventCounts.Merge(m, src)
}
func (m *ResourceEventCounts) XXX_Size() int {
return xxx_messageInfo_ResourceEventCounts.Size(m)
}
func (m *ResourceEventCounts) XXX_DiscardUnknown() {
xxx_messageInfo_ResourceEventCounts.DiscardUnknown(m)
}
var xxx_messageInfo_ResourceEventCounts proto.InternalMessageInfo
func (m *ResourceEventCounts) GetMapMinToEvents() map[int64]*EventCounts {
if m != nil {
return m.MapMinToEvents
}
return nil
}
// Track when 'watch' occurred for a resource within partition
type WatchActivity struct {
// List of timestamps where `watch` event did not contain changes from previous event
NoChangeAt []int64 `protobuf:"varint,1,rep,packed,name=NoChangeAt,proto3" json:"NoChangeAt,omitempty"`
// List of timestamps where 'watch' event contained a change from previous event
ChangedAt []int64 `protobuf:"varint,2,rep,packed,name=ChangedAt,proto3" json:"ChangedAt,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *WatchActivity) Reset() { *m = WatchActivity{} }
func (m *WatchActivity) String() string { return proto.CompactTextString(m) }
func (*WatchActivity) ProtoMessage() {}
func (*WatchActivity) Descriptor() ([]byte, []int) {
return fileDescriptor_1c5fb4d8cc22d66a, []int{4}
}
func (m *WatchActivity) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_WatchActivity.Unmarshal(m, b)
}
func (m *WatchActivity) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_WatchActivity.Marshal(b, m, deterministic)
}
func (m *WatchActivity) XXX_Merge(src proto.Message) {
xxx_messageInfo_WatchActivity.Merge(m, src)
}
func (m *WatchActivity) XXX_Size() int {
return xxx_messageInfo_WatchActivity.Size(m)
}
func (m *WatchActivity) XXX_DiscardUnknown() {
xxx_messageInfo_WatchActivity.DiscardUnknown(m)
}
var xxx_messageInfo_WatchActivity proto.InternalMessageInfo
func (m *WatchActivity) GetNoChangeAt() []int64 {
if m != nil {
return m.NoChangeAt
}
return nil
}
func (m *WatchActivity) GetChangedAt() []int64 {
if m != nil {
return m.ChangedAt
}
return nil
}
func init() {
proto.RegisterEnum("typed.KubeWatchResult_WatchType", KubeWatchResult_WatchType_name, KubeWatchResult_WatchType_value)
proto.RegisterType((*KubeWatchResult)(nil), "typed.KubeWatchResult")
proto.RegisterType((*ResourceSummary)(nil), "typed.ResourceSummary")
proto.RegisterType((*EventCounts)(nil), "typed.EventCounts")
proto.RegisterMapType((map[string]int32)(nil), "typed.EventCounts.MapReasonToCountEntry")
proto.RegisterType((*ResourceEventCounts)(nil), "typed.ResourceEventCounts")
proto.RegisterMapType((map[int64]*EventCounts)(nil), "typed.ResourceEventCounts.MapMinToEventsEntry")
proto.RegisterType((*WatchActivity)(nil), "typed.WatchActivity")
}
func init() { proto.RegisterFile("schema.proto", fileDescriptor_1c5fb4d8cc22d66a) }
var fileDescriptor_1c5fb4d8cc22d66a = []byte{
// 505 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x8c, 0x92, 0xcf, 0x6e, 0x9b, 0x40,
0x10, 0xc6, 0x0b, 0xc4, 0x49, 0x18, 0xe7, 0x8f, 0xb5, 0x69, 0x25, 0x64, 0x55, 0x2d, 0x42, 0x3d,
0x70, 0xa8, 0x88, 0xe4, 0x4a, 0x55, 0x94, 0x43, 0x25, 0x64, 0x73, 0x6a, 0x5d, 0x55, 0x1b, 0xd2,
0x9c, 0xd7, 0x66, 0x62, 0xa3, 0x00, 0x8b, 0xd8, 0xc5, 0x15, 0x8f, 0xd0, 0x37, 0xe9, 0x83, 0xf4,
0x5d, 0xfa, 0x1a, 0x15, 0x8b, 0xb1, 0x89, 0x63, 0xb5, 0xb9, 0x0d, 0xdf, 0xfe, 0xbe, 0xe1, 0x9b,
0xd9, 0x85, 0x13, 0x31, 0x5f, 0x62, 0xca, 0xbc, 0xbc, 0xe0, 0x92, 0x93, 0x9e, 0xac, 0x72, 0x8c,
0x86, 0x6f, 0x17, 0x9c, 0x2f, 0x12, 0xbc, 0x54, 0xe2, 0xac, 0xbc, 0xbf, 0x94, 0x71, 0x8a, 0x42,
0xb2, 0x34, 0x6f, 0x38, 0xe7, 0x8f, 0x06, 0xe7, 0x9f, 0xcb, 0x19, 0xde, 0x31, 0x39, 0x5f, 0x52,
0x14, 0x65, 0x22, 0xc9, 0x15, 0x98, 0x1b, 0xcc, 0xd2, 0x6c, 0xcd, 0xed, 0x8f, 0x86, 0x5e, 0xd3,
0xc8, 0x6b, 0x1b, 0x79, 0x61, 0x4b, 0xd0, 0x2d, 0x4c, 0x08, 0x1c, 0x3c, 0xc4, 0x59, 0x64, 0xe9,
0xb6, 0xe6, 0x9a, 0x54, 0xd5, 0xe4, 0x13, 0x98, 0x3f, 0xea, 0xe6, 0x61, 0x95, 0xa3, 0x65, 0xd8,
0x9a, 0x7b, 0x36, 0xb2, 0x3d, 0x95, 0xce, 0xdb, 0xf9, 0xb1, 0x77, 0xd7, 0x72, 0x74, 0x6b, 0x21,
0x16, 0x1c, 0xe5, 0xac, 0x4a, 0x38, 0x8b, 0xac, 0x03, 0xd5, 0xb6, 0xfd, 0x74, 0xde, 0x83, 0xb9,
0x71, 0x90, 0x23, 0x30, 0xfc, 0xc9, 0x64, 0xf0, 0x82, 0x00, 0x1c, 0xde, 0x7e, 0x9b, 0xf8, 0x61,
0x30, 0xd0, 0xea, 0x7a, 0x12, 0x7c, 0x09, 0xc2, 0x60, 0xa0, 0x3b, 0x3f, 0x75, 0x38, 0xa7, 0x28,
0x78, 0x59, 0xcc, 0xf1, 0xa6, 0x4c, 0x53, 0x56, 0x54, 0xf5, 0xa4, 0xf7, 0x71, 0x21, 0xe4, 0x0d,
0x62, 0xf6, 0x9c, 0x49, 0x37, 0x30, 0xf9, 0x08, 0xc7, 0x09, 0x5b, 0x1b, 0xf5, 0xff, 0x1a, 0x37,
0x2c, 0xb9, 0x06, 0x98, 0x17, 0xc8, 0x24, 0xd6, 0x87, 0x6a, 0x1d, 0xff, 0x76, 0x76, 0x68, 0xe2,
0xc0, 0x49, 0x84, 0x09, 0x4a, 0x8c, 0x7c, 0x19, 0x64, 0xcd, 0x3a, 0x8e, 0xe9, 0x23, 0x8d, 0xbc,
0x83, 0xd3, 0x02, 0x13, 0x26, 0x63, 0x9e, 0x89, 0x65, 0x9c, 0x0b, 0xab, 0x67, 0x1b, 0xae, 0x49,
0x1f, 0x8b, 0xce, 0x2f, 0x0d, 0xfa, 0xc1, 0x0a, 0x33, 0x39, 0xe6, 0x65, 0x26, 0x05, 0x09, 0x61,
0x90, 0xb2, 0x9c, 0x22, 0x13, 0x3c, 0x0b, 0xb9, 0x12, 0x2d, 0xcd, 0x36, 0xdc, 0xfe, 0xc8, 0x5d,
0x5f, 0x55, 0x87, 0xf6, 0xa6, 0x3b, 0x68, 0x90, 0xc9, 0xa2, 0xa2, 0x4f, 0x3a, 0x0c, 0xc7, 0xf0,
0x6a, 0x2f, 0x4a, 0x06, 0x60, 0x3c, 0x60, 0xa5, 0x16, 0x6e, 0xd2, 0xba, 0x24, 0x2f, 0xa1, 0xb7,
0x62, 0x49, 0x89, 0x6a, 0x97, 0x3d, 0xda, 0x7c, 0x5c, 0xeb, 0x57, 0x9a, 0xf3, 0x5b, 0x83, 0x8b,
0xf6, 0xda, 0xba, 0x91, 0xbf, 0xc3, 0x59, 0xca, 0xf2, 0x69, 0x9c, 0x85, 0x5c, 0xc9, 0x62, 0x1d,
0xd8, 0x5b, 0x07, 0xde, 0xe3, 0xa9, 0x83, 0x77, 0x0c, 0x4d, 0xec, 0x9d, 0x2e, 0xc3, 0x5b, 0xb8,
0xd8, 0x83, 0x75, 0x23, 0x1b, 0x4d, 0x64, 0xb7, 0x1b, 0xb9, 0x3f, 0x22, 0x4f, 0x17, 0xd5, 0x1d,
0x63, 0x0a, 0xa7, 0xea, 0xad, 0xfa, 0x73, 0x19, 0xaf, 0x62, 0x59, 0x91, 0x37, 0x00, 0x5f, 0xf9,
0x78, 0xc9, 0xb2, 0x05, 0xfa, 0xcd, 0xb2, 0x0d, 0xda, 0x51, 0xc8, 0x6b, 0x30, 0x9b, 0x3a, 0xf2,
0xa5, 0xa5, 0xab, 0xe3, 0xad, 0x30, 0x3b, 0x54, 0x4f, 0xe5, 0xc3, 0xdf, 0x00, 0x00, 0x00, 0xff,
0xff, 0x7e, 0x49, 0x70, 0xa9, 0xf5, 0x03, 0x00, 0x00,
}


@ -0,0 +1,58 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
syntax = 'proto3';
package typed;
import "google/protobuf/timestamp.proto";
message KubeWatchResult {
enum WatchType {
ADD = 0;
UPDATE = 1;
DELETE = 2;
}
google.protobuf.Timestamp timestamp = 1; // This is a bit of a pain to convert, but uint64 is just as bad
string kind = 2;
WatchType watchType = 3;
string payload = 4;
}
// Enough information to draw a timeline and hierarchy
// Key: /<kind>/<namespace>/<name>/<uid>
message ResourceSummary {
google.protobuf.Timestamp firstSeen = 1; // Scoped to this partition
google.protobuf.Timestamp lastSeen = 2; // Also scoped to this partition
google.protobuf.Timestamp createTime = 3; // Taken from the resource itself
bool deletedAtEnd = 4; // Tells us that the lastSeen time is also when it was deleted
// List of relationships. Direction does not matter. Examples:
// A Pod has a relationship to its namespace, ReplicaSet or StatefulSet, node
// A ReplicaSet has a relationship to deployment and namespace
// A node might have a relationship to a rack (maybe later, as this is virtual)
// We don't need relationships in both directions; we can union them at query time
// Uses the same key format as this table overall
repeated string relationships = 5;
}
message EventCounts {
map<string, int32> mapReasonToCount = 1;
}
message ResourceEventCounts {
map<int64, EventCounts> mapMinToEvents = 1;
}
// Track when 'watch' occurred for a resource within partition
message WatchActivity {
// List of timestamps where the 'watch' event did not contain changes from the previous event
repeated int64 NoChangeAt = 1;
// List of timestamps where the 'watch' event contained a change from the previous event
repeated int64 ChangedAt = 2;
}


@ -0,0 +1,106 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package typed
import (
"github.com/golang/glog"
"github.com/salesforce/sloop/pkg/sloop/store/untyped/badgerwrap"
"sort"
)
type Tables interface {
ResourceSummaryTable() *ResourceSummaryTable
EventCountTable() *ResourceEventCountsTable
WatchTable() *KubeWatchResultTable
WatchActivityTable() *WatchActivityTable
Db() badgerwrap.DB
GetMinAndMaxPartition() (bool, string, string, error)
GetTableNames() []string
GetTables() []interface{}
}
type MinMaxPartitionsGetter interface {
GetMinMaxPartitions(badgerwrap.Txn) (bool, string, string)
}
type tablesImpl struct {
resourceSummaryTable *ResourceSummaryTable
eventCountTable *ResourceEventCountsTable
watchTable *KubeWatchResultTable
watchActivityTable *WatchActivityTable
db badgerwrap.DB
}
func NewTableList(db badgerwrap.DB) Tables {
t := &tablesImpl{}
t.resourceSummaryTable = OpenResourceSummaryTable()
t.eventCountTable = OpenResourceEventCountsTable()
t.watchTable = OpenKubeWatchResultTable()
t.watchActivityTable = OpenWatchActivityTable()
t.db = db
return t
}
func (t *tablesImpl) ResourceSummaryTable() *ResourceSummaryTable {
return t.resourceSummaryTable
}
func (t *tablesImpl) EventCountTable() *ResourceEventCountsTable {
return t.eventCountTable
}
func (t *tablesImpl) WatchTable() *KubeWatchResultTable {
return t.watchTable
}
func (t *tablesImpl) WatchActivityTable() *WatchActivityTable {
return t.watchActivityTable
}
func (t *tablesImpl) Db() badgerwrap.DB {
return t.db
}
func (t *tablesImpl) GetMinAndMaxPartition() (bool, string, string, error) {
allPartitions := []string{}
err := t.db.View(func(txn badgerwrap.Txn) error {
for _, table := range t.GetTables() {
coerced, canCoerce := table.(MinMaxPartitionsGetter)
if !canCoerce {
glog.Errorf("Expected type to implement GetMinMaxPartitions but failed")
continue
}
ok, minPar, maxPar := coerced.GetMinMaxPartitions(txn)
if ok {
allPartitions = append(allPartitions, minPar, maxPar)
}
}
return nil
})
if err != nil {
return false, "", "", err
}
if len(allPartitions) == 0 {
return false, "", "", nil
}
sort.Strings(allPartitions)
return true, allPartitions[0], allPartitions[len(allPartitions)-1], nil
}
func (t *tablesImpl) GetTableNames() []string {
return []string{t.watchTable.tableName, t.resourceSummaryTable.tableName, t.eventCountTable.tableName, t.watchActivityTable.tableName}
}
func (t *tablesImpl) GetTables() []interface{} {
return []interface{}{t.eventCountTable, t.resourceSummaryTable, t.watchTable, t.watchActivityTable}
}
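The aggregation in GetMinAndMaxPartition reduces to collecting each table's min and max partition ids and taking both ends of the sorted list. A minimal sketch, where `minMaxPartition` is a hypothetical helper standing in for the transaction loop:

```go
package main

import (
	"fmt"
	"sort"
)

// minMaxPartition returns the smallest and largest partition id from the
// min/max pairs collected across tables, as GetMinAndMaxPartition does.
func minMaxPartition(allPartitions []string) (bool, string, string) {
	if len(allPartitions) == 0 {
		return false, "", ""
	}
	sort.Strings(allPartitions)
	return true, allPartitions[0], allPartitions[len(allPartitions)-1]
}

func main() {
	// min/max pairs from three hypothetical tables
	ok, min, max := minMaxPartition([]string{
		"001546398000", "001546405200",
		"001546401600", "001546405200",
		"001546398000", "001546401600",
	})
	fmt.Println(ok, min, max) // true 001546398000 001546405200
}
```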


@ -0,0 +1,271 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package typed
import (
"fmt"
"github.com/dgraph-io/badger"
"github.com/golang/protobuf/proto"
"github.com/pkg/errors"
"github.com/salesforce/sloop/pkg/sloop/store/untyped"
"github.com/salesforce/sloop/pkg/sloop/store/untyped/badgerwrap"
"strconv"
"time"
)
//go:generate genny -in=$GOFILE -out=watchtablegen.go gen "ValueType=KubeWatchResult KeyType=WatchTableKey"
//go:generate genny -in=$GOFILE -out=resourcesummarytablegen.go gen "ValueType=ResourceSummary KeyType=ResourceSummaryKey"
//go:generate genny -in=$GOFILE -out=eventcounttablegen.go gen "ValueType=ResourceEventCounts KeyType=EventCountKey"
//go:generate genny -in=$GOFILE -out=watchactivitytablegen.go gen "ValueType=WatchActivity KeyType=WatchActivityKey"
type ValueTypeTable struct {
tableName string
}
func OpenValueTypeTable() *ValueTypeTable {
keyInst := &KeyType{}
return &ValueTypeTable{tableName: keyInst.TableName()}
}
func (t *ValueTypeTable) Set(txn badgerwrap.Txn, key string, value *ValueType) error {
err := (&KeyType{}).ValidateKey(key)
if err != nil {
return errors.Wrapf(err, "invalid key for table %v: %v", t.tableName, key)
}
outb, err := proto.Marshal(value)
if err != nil {
return errors.Wrapf(err, "protobuf marshal for table %v failed", t.tableName)
}
err = txn.Set([]byte(key), outb)
if err != nil {
return errors.Wrapf(err, "set for table %v failed", t.tableName)
}
return nil
}
func (t *ValueTypeTable) Get(txn badgerwrap.Txn, key string) (*ValueType, error) {
err := (&KeyType{}).ValidateKey(key)
if err != nil {
return nil, errors.Wrapf(err, "invalid key for table %v: %v", t.tableName, key)
}
item, err := txn.Get([]byte(key))
if err == badger.ErrKeyNotFound {
// Don't wrap; we need to preserve the error type
return nil, err
} else if err != nil {
return nil, errors.Wrapf(err, "get failed for table %v", t.tableName)
}
valueBytes, err := item.ValueCopy([]byte{})
if err != nil {
return nil, errors.Wrapf(err, "value copy failed for table %v", t.tableName)
}
retValue := &ValueType{}
err = proto.Unmarshal(valueBytes, retValue)
if err != nil {
return nil, errors.Wrapf(err, "protobuf unmarshal failed for table %v on value length %v", t.tableName, len(valueBytes))
}
return retValue, nil
}
func (t *ValueTypeTable) GetMinKey(txn badgerwrap.Txn) (bool, string) {
keyPrefix := "/" + t.tableName + "/"
iterOpt := badger.DefaultIteratorOptions
iterOpt.Prefix = []byte(keyPrefix)
iterator := txn.NewIterator(iterOpt)
defer iterator.Close()
iterator.Seek([]byte(keyPrefix))
if !iterator.ValidForPrefix([]byte(keyPrefix)) {
return false, ""
}
return true, string(iterator.Item().Key())
}
func (t *ValueTypeTable) GetMaxKey(txn badgerwrap.Txn) (bool, string) {
keyPrefix := "/" + t.tableName + "/"
iterOpt := badger.DefaultIteratorOptions
iterOpt.Prefix = []byte(keyPrefix)
iterOpt.Reverse = true
iterator := txn.NewIterator(iterOpt)
defer iterator.Close()
// Seek past the end of the range by appending a rune greater than any character used in keys
iterator.Seek([]byte(keyPrefix + string(rune(255))))
if !iterator.Valid() {
return false, ""
}
return true, string(iterator.Item().Key())
}
func (t *ValueTypeTable) GetMinMaxPartitions(txn badgerwrap.Txn) (bool, string, string) {
ok, minKeyStr := t.GetMinKey(txn)
if !ok {
return false, "", ""
}
ok, maxKeyStr := t.GetMaxKey(txn)
if !ok {
// This should be impossible
return false, "", ""
}
minKey := &KeyType{}
maxKey := &KeyType{}
err := minKey.Parse(minKeyStr)
if err != nil {
panic(fmt.Sprintf("invalid key in table: %v key: %q error: %v", t.tableName, minKeyStr, err))
}
err = maxKey.Parse(maxKeyStr)
if err != nil {
panic(fmt.Sprintf("invalid key in table: %v key: %q error: %v", t.tableName, maxKeyStr, err))
}
return true, minKey.PartitionId, maxKey.PartitionId
}
func (t *ValueTypeTable) RangeRead(
txn badgerwrap.Txn,
keyPredicateFn func(string) bool,
valPredicateFn func(*ValueType) bool,
startTime time.Time,
endTime time.Time) (map[KeyType]*ValueType, RangeReadStats, error) {
resources := map[KeyType]*ValueType{}
keyPrefix := "/" + t.tableName + "/"
iterOpt := badger.DefaultIteratorOptions
iterOpt.Prefix = []byte(keyPrefix)
itr := txn.NewIterator(iterOpt)
defer itr.Close()
startPartition := untyped.GetPartitionId(startTime)
endPartition := untyped.GetPartitionId(endTime)
startPartitionPrefix := keyPrefix + startPartition + "/"
stats := RangeReadStats{}
before := time.Now()
lastPartition := ""
for itr.Seek([]byte(startPartitionPrefix)); itr.ValidForPrefix([]byte(keyPrefix)); itr.Next() {
stats.RowsVisitedCount += 1
if !keyPredicateFn(string(itr.Item().Key())) {
continue
}
stats.RowsPassedKeyPredicateCount += 1
key := KeyType{}
err := key.Parse(string(itr.Item().Key()))
if err != nil {
return nil, stats, err
}
if key.PartitionId != lastPartition {
stats.PartitionCount += 1
lastPartition = key.PartitionId
}
// partitions are zero padded to 12 digits so we can compare them lexicographically
if key.PartitionId > endPartition {
// end of range
break
}
valueBytes, err := itr.Item().ValueCopy([]byte{})
if err != nil {
return nil, stats, err
}
retValue := &ValueType{}
err = proto.Unmarshal(valueBytes, retValue)
if err != nil {
return nil, stats, err
}
if valPredicateFn != nil && !valPredicateFn(retValue) {
continue
}
stats.RowsPassedValuePredicateCount += 1
resources[key] = retValue
}
stats.Elapsed = time.Since(before)
stats.TableName = (&KeyType{}).TableName()
return resources, stats, nil
}
// TODO: add unit test
func (t *ValueTypeTable) GetUniquePartitionList(txn badgerwrap.Txn) ([]string, error) {
resources := []string{}
ok, minPar, maxPar := t.GetMinMaxPartitions(txn)
if ok {
parDuration := untyped.GetPartitionDuration()
for curPar := minPar; curPar < maxPar; {
resources = append(resources, curPar)
// update curPar
partInt, err := strconv.ParseInt(curPar, 10, 64)
if err != nil {
return resources, errors.Wrapf(err, "failed to get partition:%v", curPar)
}
parTime := time.Unix(partInt, 0).UTC().Add(parDuration)
curPar = untyped.GetPartitionId(parTime)
}
}
return resources, nil
}
// TODO: add unit test
func (t *ValueTypeTable) GetPreviousKey(txn badgerwrap.Txn, key KeyType, keyPrefix KeyType) (KeyType, error) {
partitionList, err := t.GetUniquePartitionList(txn)
if err != nil {
return KeyType{}, errors.Wrapf(err, "failed to get partition list from table:%v", t.tableName)
}
currentPartition := key.PartitionId
for i := len(partitionList) - 1; i >= 0; i-- {
prePart := partitionList[i]
if prePart > currentPartition {
continue
} else {
prevFound, prevKey, err := t.getLastMatchingKeyInPartition(txn, prePart, key, keyPrefix)
if err != nil {
return KeyType{}, errors.Wrapf(err, "Failure getting previous key for %v, for partition id:%v", key.String(), prePart)
}
if prevFound && err == nil {
return prevKey, nil
}
}
}
return KeyType{}, fmt.Errorf("failed to get any previous key in table:%v, for key:%v, keyPrefix:%v", t.tableName, key.String(), keyPrefix)
}
// TODO: add unit test
func (t *ValueTypeTable) getLastMatchingKeyInPartition(txn badgerwrap.Txn, curPartition string, key KeyType, keyPrefix KeyType) (bool, KeyType, error) {
iterOpt := badger.DefaultIteratorOptions
iterOpt.Reverse = true
itr := txn.NewIterator(iterOpt)
defer itr.Close()
// update partition with current value
key.SetPartitionId(curPartition)
keySeekStr := key.String()
itr.Seek([]byte(keySeekStr))
if !itr.Valid() {
// nothing at or before the seek position
return false, KeyType{}, nil
}
// if the result is the same as the key, step past it to its predecessor
keyRes := string(itr.Item().Key())
if keyRes == key.String() {
itr.Next()
}
if itr.ValidForPrefix([]byte(keyPrefix.String())) {
key := KeyType{}
err := key.Parse(string(itr.Item().Key()))
if err != nil {
return true, KeyType{}, err
}
return true, key, nil
}
return false, KeyType{}, nil
}
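The reverse-seek trick in GetMaxKey (seek to the prefix plus a rune larger than any key character, then look at the entry before that position) can be modeled on a sorted slice of keys; `maxKeyWithPrefix` below is a hypothetical stand-in for the Badger reverse iterator:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// maxKeyWithPrefix finds the largest key with the given prefix in a sorted
// slice, the same way GetMaxKey seeks to prefix + rune(255) in reverse.
func maxKeyWithPrefix(sortedKeys []string, prefix string) (bool, string) {
	// insertion point just past every key that starts with prefix
	i := sort.SearchStrings(sortedKeys, prefix+string(rune(255)))
	if i == 0 || !strings.HasPrefix(sortedKeys[i-1], prefix) {
		return false, ""
	}
	return true, sortedKeys[i-1]
}

func main() {
	keys := []string{"/a/123/", "/watchactivity/001/x", "/watchactivity/002/y", "/zzz/123/"}
	ok, max := maxKeyWithPrefix(keys, "/watchactivity/")
	fmt.Println(ok, max) // true /watchactivity/002/y
}
```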


@ -0,0 +1,53 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package typed
import (
"fmt"
"github.com/dgraph-io/badger"
"github.com/salesforce/sloop/pkg/sloop/store/untyped"
"github.com/salesforce/sloop/pkg/sloop/store/untyped/badgerwrap"
"github.com/stretchr/testify/assert"
"reflect"
"testing"
"time"
)
//go:generate genny -in=$GOFILE -out=watchtablegen_test.go gen "ValueType=KubeWatchResult KeyType=WatchTableKey"
//go:generate genny -in=$GOFILE -out=resourcesummarytablegen_test.go gen "ValueType=ResourceSummary KeyType=ResourceSummaryKey"
//go:generate genny -in=$GOFILE -out=eventcounttablegen_test.go gen "ValueType=ResourceEventCounts KeyType=EventCountKey"
//go:generate genny -in=$GOFILE -out=watchactivitytablegen_test.go gen "ValueType=WatchActivity KeyType=WatchActivityKey"
func helper_ValueType_ShouldSkip() bool {
// Tests will not work on the fake types in the template, but we want to run tests on real objects
if "typed.Value"+"Type" == fmt.Sprint(reflect.TypeOf(ValueType{})) {
fmt.Println("Skipping unit test")
return true
}
return false
}
func Test_ValueTypeTable_SetWorks(t *testing.T) {
if helper_ValueType_ShouldSkip() {
return
}
untyped.TestHookSetPartitionDuration(time.Hour * 24)
db, err := (&badgerwrap.MockFactory{}).Open(badger.DefaultOptions(""))
assert.Nil(t, err)
err = db.Update(func(txn badgerwrap.Txn) error {
k := (&KeyType{}).GetTestKey()
vt := OpenValueTypeTable()
err2 := vt.Set(txn, k, (&KeyType{}).GetTestValue())
assert.Nil(t, err2)
return nil
})
assert.Nil(t, err)
}


@ -0,0 +1,76 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package typed
import (
"github.com/golang/glog"
"time"
)
// The code in this file exists only so that tabletemplate.go compiles. We don't want these
// definitions in the generated output, where they would conflict with the real value and key types.
type ValueType struct {
}
func (p *ValueType) Reset() {
}
func (p *ValueType) String() string {
return ""
}
func (p *ValueType) ProtoMessage() {
}
type KeyType struct {
PartitionId string
}
func (_ *KeyType) ValidateKey(key string) error {
panic("Placeholder key type should never be used")
}
func (_ *KeyType) TableName() string {
panic("Placeholder key should not be used")
}
func (_ *KeyType) Parse(key string) error {
panic("Placeholder key should not be used")
}
func (_ *KeyType) GetTestKey() string {
panic("Placeholder key should not be used")
}
func (_ *KeyType) String() string {
panic("Placeholder key should not be used")
}
func (_ *KeyType) GetTestValue() *ValueType {
panic("Placeholder key should not be used")
}
func (_ *KeyType) SetPartitionId(newPartitionId string) {
panic("Placeholder key should not be used")
}
type RangeReadStats struct {
TableName string
PartitionCount int
RowsVisitedCount int
RowsPassedKeyPredicateCount int
RowsPassedValuePredicateCount int
Elapsed time.Duration
}
func (stats RangeReadStats) Log(requestId string) {
glog.Infof("reqId: %v range read on table %v took %v. Partitions scanned %v. Rows scanned %v, past key predicate %v, past value predicate %v",
requestId, stats.TableName, stats.Elapsed, stats.PartitionCount, stats.RowsVisitedCount, stats.RowsPassedKeyPredicateCount, stats.RowsPassedValuePredicateCount)
}


@ -0,0 +1,83 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package typed
import (
"fmt"
"github.com/dgraph-io/badger"
"github.com/salesforce/sloop/pkg/sloop/store/untyped/badgerwrap"
"strings"
)
// Key is /<tableName>/<partitionId>/<kind>/<namespace>/<name>/<uid>
//
// PartitionId is UnixSeconds rounded down to the partition duration, zero padded to 12 digits
// Kind is the Kubernetes kind, starting with an upper-case letter
// Namespace is the Kubernetes namespace, all lower case
// Name is the Kubernetes name, all lower case
// Uid is the Kubernetes resource UID
type WatchActivityKey struct {
PartitionId string
Kind string
Namespace string
Name string
Uid string
}
func NewWatchActivityKey(partitionId string, kind string, namespace string, name string, uid string) *WatchActivityKey {
return &WatchActivityKey{PartitionId: partitionId, Kind: kind, Namespace: namespace, Name: name, Uid: uid}
}
func (_ *WatchActivityKey) TableName() string {
return "watchactivity"
}
func (k *WatchActivityKey) Parse(key string) error {
parts := strings.Split(key, "/")
if len(parts) != 7 {
return fmt.Errorf("Key should have 7 parts when split on '/': %v", key)
}
if parts[0] != "" {
return fmt.Errorf("Key should start with /: %v", key)
}
if parts[1] != k.TableName() {
return fmt.Errorf("Second part of key (%v) should be %v", key, k.TableName())
}
k.PartitionId = parts[2]
k.Kind = parts[3]
k.Namespace = parts[4]
k.Name = parts[5]
k.Uid = parts[6]
return nil
}
func (k *WatchActivityKey) String() string {
return fmt.Sprintf("/%v/%v/%v/%v/%v/%v", k.TableName(), k.PartitionId, k.Kind, k.Namespace, k.Name, k.Uid)
}
func (_ *WatchActivityKey) ValidateKey(key string) error {
newKey := WatchActivityKey{}
return newKey.Parse(key)
}
func (k *WatchActivityKey) SetPartitionId(newPartitionId string) {
k.PartitionId = newPartitionId
}
func (t *WatchActivityTable) GetOrDefault(txn badgerwrap.Txn, key string) (*WatchActivity, error) {
rec, err := t.Get(txn, key)
if err != nil {
if err != badger.ErrKeyNotFound {
return nil, err
} else {
return &WatchActivity{}, nil
}
}
return rec, nil
}
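The key layout handled by Parse and String above round-trips cleanly. A self-contained sketch using a plain struct in place of WatchActivityKey (the `key` type and `parse` function are illustrative only):

```go
package main

import (
	"fmt"
	"strings"
)

// key mirrors WatchActivityKey for illustration only.
type key struct{ Partition, Kind, Namespace, Name, Uid string }

func (k key) String() string {
	return fmt.Sprintf("/watchactivity/%v/%v/%v/%v/%v", k.Partition, k.Kind, k.Namespace, k.Name, k.Uid)
}

// parse splits "/watchactivity/<partition>/<kind>/<namespace>/<name>/<uid>";
// the leading slash yields an empty first element, so there are 7 parts.
func parse(s string) (key, error) {
	parts := strings.Split(s, "/")
	if len(parts) != 7 || parts[0] != "" || parts[1] != "watchactivity" {
		return key{}, fmt.Errorf("malformed key: %v", s)
	}
	return key{parts[2], parts[3], parts[4], parts[5], parts[6]}, nil
}

func main() {
	k := key{"001546398000", "Pod", "somenamespace", "somename", "some-uid"}
	parsed, err := parse(k.String())
	fmt.Println(err == nil, parsed == k) // true true
}
```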


@ -0,0 +1,133 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package typed
import (
"github.com/dgraph-io/badger"
"github.com/salesforce/sloop/pkg/sloop/store/untyped"
"github.com/salesforce/sloop/pkg/sloop/store/untyped/badgerwrap"
"github.com/stretchr/testify/assert"
"testing"
"time"
)
const (
someWatchActivityKey = "/watchactivity/001546398000/somekind/somenamespace/somename/68510937-4ffc-11e9-8e26-1418775557c8"
)
func Test_WatchActivityKey_OutputCorrect(t *testing.T) {
untyped.TestHookSetPartitionDuration(time.Hour)
partitionId := untyped.GetPartitionId(someTs)
k := NewWatchActivityKey(partitionId, someKind, someNamespace, someName, someUid)
assert.Equal(t, someWatchActivityKey, k.String())
}
func Test_WatchActivityKey_ParseCorrect(t *testing.T) {
untyped.TestHookSetPartitionDuration(time.Hour)
k := &WatchActivityKey{}
err := k.Parse(someWatchActivityKey)
assert.Nil(t, err)
assert.Equal(t, somePartition, k.PartitionId)
assert.Equal(t, someNamespace, k.Namespace)
assert.Equal(t, someName, k.Name)
}
func Test_WatchActivityKey_ValidateWorks(t *testing.T) {
assert.Nil(t, (&WatchActivityKey{}).ValidateKey(someWatchActivityKey))
}
func helper_update_watchactivity_table(t *testing.T) (badgerwrap.DB, *WatchActivityTable) {
untyped.TestHookSetPartitionDuration(time.Hour)
partitionId := untyped.GetPartitionId(someTs)
var keys []string
for i := 'a'; i < 'd'; i++ {
// add keys in ascending order
keys = append(keys, NewWatchActivityKey(partitionId, someKind+string(i), someNamespace, someName, someUid).String())
}
val := &WatchActivity{}
b, err := (&badgerwrap.MockFactory{}).Open(badger.DefaultOptions(""))
assert.Nil(t, err)
wat := OpenWatchActivityTable()
err = b.Update(func(txn badgerwrap.Txn) error {
var txerr error
for _, key := range keys {
txerr = wat.Set(txn, key, val)
if txerr != nil {
return txerr
}
}
// Add some keys outside the range
txerr = txn.Set([]byte("/a/123/"), []byte{})
if txerr != nil {
return txerr
}
txerr = txn.Set([]byte("/zzz/123/"), []byte{})
if txerr != nil {
return txerr
}
return nil
})
assert.Nil(t, err)
return b, wat
}
func Test_WatchActivity_PutThenGet_SameData(t *testing.T) {
db, wat := helper_update_watchactivity_table(t)
var retval *WatchActivity
err := db.View(func(txn badgerwrap.Txn) error {
var txerr error
retval, txerr = wat.Get(txn, "/watchactivity/001546398000/somekinda/somenamespace/somename/68510937-4ffc-11e9-8e26-1418775557c8")
if txerr != nil {
return txerr
}
return nil
})
assert.Nil(t, err)
assert.Nil(t, retval.ChangedAt)
}
func Test_WatchActivity_TestMinAndMaxKeys(t *testing.T) {
db, wt := helper_update_watchactivity_table(t)
var minKey string
var maxKey string
err := db.View(func(txn badgerwrap.Txn) error {
_, minKey = wt.GetMinKey(txn)
_, maxKey = wt.GetMaxKey(txn)
return nil
})
assert.Nil(t, err)
assert.Equal(t, "/watchactivity/001546398000/somekinda/somenamespace/somename/68510937-4ffc-11e9-8e26-1418775557c8", minKey)
assert.Equal(t, "/watchactivity/001546398000/somekindc/somenamespace/somename/68510937-4ffc-11e9-8e26-1418775557c8", maxKey)
}
func Test_WatchActivity_TestGetMinMaxPartitions(t *testing.T) {
db, wt := helper_update_watchactivity_table(t)
var minPartition string
var maxPartition string
var found bool
err := db.View(func(txn badgerwrap.Txn) error {
found, minPartition, maxPartition = wt.GetMinMaxPartitions(txn)
return nil
})
assert.Nil(t, err)
assert.True(t, found)
assert.Equal(t, somePartition, minPartition)
assert.Equal(t, somePartition, maxPartition)
}
func (_ *WatchActivityKey) GetTestKey() string {
k := NewWatchActivityKey("001546398000", someKind, someNamespace, someName, someUid)
return k.String()
}
func (_ *WatchActivityKey) GetTestValue() *WatchActivity {
return &WatchActivity{}
}


@ -0,0 +1,271 @@
// This file was automatically generated by genny.
// Any changes will be lost if this file is regenerated.
// see https://github.com/cheekybits/genny
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package typed
import (
"fmt"
"strconv"
"time"
"github.com/dgraph-io/badger"
"github.com/golang/protobuf/proto"
"github.com/pkg/errors"
"github.com/salesforce/sloop/pkg/sloop/store/untyped"
"github.com/salesforce/sloop/pkg/sloop/store/untyped/badgerwrap"
)
type WatchActivityTable struct {
tableName string
}
func OpenWatchActivityTable() *WatchActivityTable {
keyInst := &WatchActivityKey{}
return &WatchActivityTable{tableName: keyInst.TableName()}
}
func (t *WatchActivityTable) Set(txn badgerwrap.Txn, key string, value *WatchActivity) error {
err := (&WatchActivityKey{}).ValidateKey(key)
if err != nil {
return errors.Wrapf(err, "invalid key for table %v: %v", t.tableName, key)
}
outb, err := proto.Marshal(value)
if err != nil {
return errors.Wrapf(err, "protobuf marshal for table %v failed", t.tableName)
}
err = txn.Set([]byte(key), outb)
if err != nil {
return errors.Wrapf(err, "set for table %v failed", t.tableName)
}
return nil
}
func (t *WatchActivityTable) Get(txn badgerwrap.Txn, key string) (*WatchActivity, error) {
err := (&WatchActivityKey{}).ValidateKey(key)
if err != nil {
return nil, errors.Wrapf(err, "invalid key for table %v: %v", t.tableName, key)
}
item, err := txn.Get([]byte(key))
if err == badger.ErrKeyNotFound {
// Don't wrap; we need to preserve the error type
return nil, err
} else if err != nil {
return nil, errors.Wrapf(err, "get failed for table %v", t.tableName)
}
valueBytes, err := item.ValueCopy([]byte{})
if err != nil {
return nil, errors.Wrapf(err, "value copy failed for table %v", t.tableName)
}
retValue := &WatchActivity{}
err = proto.Unmarshal(valueBytes, retValue)
if err != nil {
return nil, errors.Wrapf(err, "protobuf unmarshal failed for table %v on value length %v", t.tableName, len(valueBytes))
}
return retValue, nil
}
func (t *WatchActivityTable) GetMinKey(txn badgerwrap.Txn) (bool, string) {
keyPrefix := "/" + t.tableName + "/"
iterOpt := badger.DefaultIteratorOptions
iterOpt.Prefix = []byte(keyPrefix)
iterator := txn.NewIterator(iterOpt)
defer iterator.Close()
iterator.Seek([]byte(keyPrefix))
if !iterator.ValidForPrefix([]byte(keyPrefix)) {
return false, ""
}
return true, string(iterator.Item().Key())
}
func (t *WatchActivityTable) GetMaxKey(txn badgerwrap.Txn) (bool, string) {
keyPrefix := "/" + t.tableName + "/"
iterOpt := badger.DefaultIteratorOptions
iterOpt.Prefix = []byte(keyPrefix)
iterOpt.Reverse = true
iterator := txn.NewIterator(iterOpt)
defer iterator.Close()
// Seek past the end of the range by appending a rune greater than any character used in keys
iterator.Seek([]byte(keyPrefix + string(rune(255))))
if !iterator.Valid() {
return false, ""
}
return true, string(iterator.Item().Key())
}
func (t *WatchActivityTable) GetMinMaxPartitions(txn badgerwrap.Txn) (bool, string, string) {
ok, minKeyStr := t.GetMinKey(txn)
if !ok {
return false, "", ""
}
ok, maxKeyStr := t.GetMaxKey(txn)
if !ok {
// This should be impossible
return false, "", ""
}
minKey := &WatchActivityKey{}
maxKey := &WatchActivityKey{}
err := minKey.Parse(minKeyStr)
if err != nil {
panic(fmt.Sprintf("invalid key in table: %v key: %q error: %v", t.tableName, minKeyStr, err))
}
err = maxKey.Parse(maxKeyStr)
if err != nil {
panic(fmt.Sprintf("invalid key in table: %v key: %q error: %v", t.tableName, maxKeyStr, err))
}
return true, minKey.PartitionId, maxKey.PartitionId
}
func (t *WatchActivityTable) RangeRead(
txn badgerwrap.Txn,
keyPredicateFn func(string) bool,
valPredicateFn func(*WatchActivity) bool,
startTime time.Time,
endTime time.Time) (map[WatchActivityKey]*WatchActivity, RangeReadStats, error) {
resources := map[WatchActivityKey]*WatchActivity{}
keyPrefix := "/" + t.tableName + "/"
iterOpt := badger.DefaultIteratorOptions
iterOpt.Prefix = []byte(keyPrefix)
itr := txn.NewIterator(iterOpt)
defer itr.Close()
startPartition := untyped.GetPartitionId(startTime)
endPartition := untyped.GetPartitionId(endTime)
startPartitionPrefix := keyPrefix + startPartition + "/"
stats := RangeReadStats{}
before := time.Now()
lastPartition := ""
for itr.Seek([]byte(startPartitionPrefix)); itr.ValidForPrefix([]byte(keyPrefix)); itr.Next() {
stats.RowsVisitedCount += 1
if !keyPredicateFn(string(itr.Item().Key())) {
continue
}
stats.RowsPassedKeyPredicateCount += 1
key := WatchActivityKey{}
err := key.Parse(string(itr.Item().Key()))
if err != nil {
return nil, stats, err
}
if key.PartitionId != lastPartition {
stats.PartitionCount += 1
lastPartition = key.PartitionId
}
// partitions are zero padded to 12 digits so we can compare them lexicographically
if key.PartitionId > endPartition {
// end of range
break
}
valueBytes, err := itr.Item().ValueCopy([]byte{})
if err != nil {
return nil, stats, err
}
retValue := &WatchActivity{}
err = proto.Unmarshal(valueBytes, retValue)
if err != nil {
return nil, stats, err
}
if valPredicateFn != nil && !valPredicateFn(retValue) {
continue
}
stats.RowsPassedValuePredicateCount += 1
resources[key] = retValue
}
stats.Elapsed = time.Since(before)
stats.TableName = (&WatchActivityKey{}).TableName()
return resources, stats, nil
}
// TODO: add unit test
func (t *WatchActivityTable) GetUniquePartitionList(txn badgerwrap.Txn) ([]string, error) {
resources := []string{}
ok, minPar, maxPar := t.GetMinMaxPartitions(txn)
if ok {
parDuration := untyped.GetPartitionDuration()
for curPar := minPar; curPar < maxPar; {
resources = append(resources, curPar)
// update curPar
partInt, err := strconv.ParseInt(curPar, 10, 64)
if err != nil {
return resources, errors.Wrapf(err, "failed to get partition:%v", curPar)
}
parTime := time.Unix(partInt, 0).UTC().Add(parDuration)
curPar = untyped.GetPartitionId(parTime)
}
}
return resources, nil
}
// TODO: add unit test
func (t *WatchActivityTable) GetPreviousKey(txn badgerwrap.Txn, key WatchActivityKey, keyPrefix WatchActivityKey) (WatchActivityKey, error) {
partitionList, err := t.GetUniquePartitionList(txn)
if err != nil {
return WatchActivityKey{}, errors.Wrapf(err, "failed to get partition list from table:%v", t.tableName)
}
currentPartition := key.PartitionId
for i := len(partitionList) - 1; i >= 0; i-- {
prePart := partitionList[i]
if prePart > currentPartition {
continue
} else {
prevFound, prevKey, err := t.getLastMatchingKeyInPartition(txn, prePart, key, keyPrefix)
if err != nil {
return WatchActivityKey{}, errors.Wrapf(err, "Failure getting previous key for %v, for partition id:%v", key.String(), prePart)
}
if prevFound && err == nil {
return prevKey, nil
}
}
}
return WatchActivityKey{}, fmt.Errorf("failed to get any previous key in table:%v, for key:%v, keyPrefix:%v", t.tableName, key.String(), keyPrefix)
}
//todo: need to add unit test
func (t *WatchActivityTable) getLastMatchingKeyInPartition(txn badgerwrap.Txn, curPartition string, key WatchActivityKey, keyPrefix WatchActivityKey) (bool, WatchActivityKey, error) {
iterOpt := badger.DefaultIteratorOptions
iterOpt.Reverse = true
itr := txn.NewIterator(iterOpt)
defer itr.Close()
// update partition with current value
key.SetPartitionId(curPartition)
keySeekStr := key.String()
itr.Seek([]byte(keySeekStr))
// if the iterator landed exactly on the key, step past it to its predecessor
if itr.Valid() && string(itr.Item().Key()) == key.String() {
itr.Next()
}
if itr.ValidForPrefix([]byte(keyPrefix.String())) {
key := WatchActivityKey{}
err := key.Parse(string(itr.Item().Key()))
if err != nil {
return true, WatchActivityKey{}, err
}
return true, key, nil
}
return false, WatchActivityKey{}, nil
}

// This file was automatically generated by genny.
// Any changes will be lost if this file is regenerated.
// see https://github.com/cheekybits/genny
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package typed
import (
"fmt"
"reflect"
"testing"
"time"
"github.com/dgraph-io/badger"
"github.com/salesforce/sloop/pkg/sloop/store/untyped"
"github.com/salesforce/sloop/pkg/sloop/store/untyped/badgerwrap"
"github.com/stretchr/testify/assert"
)
func helper_WatchActivity_ShouldSkip() bool {
// Tests will not work on the fake types in the template, but we want to run tests on real objects
if "typed.Value"+"Type" == fmt.Sprint(reflect.TypeOf(WatchActivity{})) {
fmt.Printf("Skipping unit test")
return true
}
return false
}
func Test_WatchActivityTable_SetWorks(t *testing.T) {
if helper_WatchActivity_ShouldSkip() {
return
}
untyped.TestHookSetPartitionDuration(time.Hour * 24)
db, err := (&badgerwrap.MockFactory{}).Open(badger.DefaultOptions(""))
assert.Nil(t, err)
err = db.Update(func(txn badgerwrap.Txn) error {
k := (&WatchActivityKey{}).GetTestKey()
vt := OpenWatchActivityTable()
err2 := vt.Set(txn, k, (&WatchActivityKey{}).GetTestValue())
assert.Nil(t, err2)
return nil
})
assert.Nil(t, err)
}

/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package typed
import (
"fmt"
"github.com/pkg/errors"
"strconv"
"strings"
"time"
)
// Key is /<partition>/<kind>/<namespace>/<name>/<timestamp>
//
// Partition is UnixSeconds rounded down to partition duration
// Kind is kubernetes kind, starts with upper case
// Namespace is kubernetes namespace, all lower
// Name is kubernetes name, all lower
// Timestamp is UnixNano in UTC
type WatchTableKey struct {
PartitionId string
Kind string
Namespace string
Name string
Timestamp time.Time
}
func NewWatchTableKey(partitionId string, kind string, namespace string, name string, timestamp time.Time) *WatchTableKey {
return &WatchTableKey{PartitionId: partitionId, Kind: kind, Namespace: namespace, Name: name, Timestamp: timestamp}
}
func (_ *WatchTableKey) TableName() string {
return "watch"
}
func (k *WatchTableKey) Parse(key string) error {
parts := strings.Split(key, "/")
if len(parts) != 7 {
return fmt.Errorf("Key should have 6 '/'-separated segments (7 split parts): %v", key)
}
if parts[0] != "" {
return fmt.Errorf("Key should start with /: %v", key)
}
if parts[1] != k.TableName() {
return fmt.Errorf("Second part of key (%v) should be %v: %v", parts[1], k.TableName(), key)
}
k.PartitionId = parts[2]
k.Kind = parts[3]
k.Namespace = parts[4]
k.Name = parts[5]
tsint, err := strconv.ParseInt(parts[6], 10, 64)
if err != nil {
return errors.Wrapf(err, "Failed to parse timestamp from key: %v", key)
}
k.Timestamp = time.Unix(0, tsint).UTC()
return nil
}
func (k *WatchTableKey) SetPartitionId(newPartitionId string) {
k.PartitionId = newPartitionId
}
func (k *WatchTableKey) String() string {
if k.Timestamp.IsZero() {
return fmt.Sprintf("/%v/%v/%v/%v/%v/", k.TableName(), k.PartitionId, k.Kind, k.Namespace, k.Name)
} else {
return fmt.Sprintf("/%v/%v/%v/%v/%v/%v", k.TableName(), k.PartitionId, k.Kind, k.Namespace, k.Name, k.Timestamp.UnixNano())
}
}
func (_ *WatchTableKey) ValidateKey(key string) error {
newKey := WatchTableKey{}
return newKey.Parse(key)
}

/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package typed
import (
"github.com/dgraph-io/badger"
"github.com/salesforce/sloop/pkg/sloop/store/untyped"
"github.com/salesforce/sloop/pkg/sloop/store/untyped/badgerwrap"
"github.com/stretchr/testify/assert"
"testing"
"time"
)
var someTs = time.Date(2019, 1, 2, 3, 4, 5, 6, time.UTC)
const someKind = "somekind"
const someNamespace = "somenamespace"
const someName = "somename"
const somePartition = "001546398000"
func Test_WatchTableKey_OutputCorrect(t *testing.T) {
untyped.TestHookSetPartitionDuration(time.Hour)
partitionId := untyped.GetPartitionId(someTs)
k := NewWatchTableKey(partitionId, someKind, someNamespace, someName, someTs)
assert.Equal(t, "/watch/001546398000/somekind/somenamespace/somename/1546398245000000006", k.String())
}
func Test_WatchTableKey_ParseCorrect(t *testing.T) {
untyped.TestHookSetPartitionDuration(time.Hour)
k := &WatchTableKey{}
err := k.Parse("/watch/001546398000/somekind/somenamespace/somename/1546398245000000006")
assert.Nil(t, err)
assert.Equal(t, somePartition, k.PartitionId)
assert.Equal(t, someNamespace, k.Namespace)
assert.Equal(t, someName, k.Name)
assert.Equal(t, someTs, k.Timestamp)
}
func Test_WatchTableKey_ValidateWorks(t *testing.T) {
testKey := "/watch/001562961600/ReplicaSet/mesh-control-plane/istio-pilot-56f7d9848/1562963507608345756"
assert.Nil(t, (&WatchTableKey{}).ValidateKey(testKey))
}
func helper_update_watch_table(t *testing.T) (badgerwrap.DB, *KubeWatchResultTable) {
untyped.TestHookSetPartitionDuration(time.Hour)
partitionId := untyped.GetPartitionId(someTs)
var keys []string
for i := 'a'; i < 'd'; i++ {
// add keys in ascending order
keys = append(keys, NewWatchTableKey(partitionId, someKind+string(i), someNamespace, someName, someTs).String())
}
val := &KubeWatchResult{Kind: someKind}
b, err := (&badgerwrap.MockFactory{}).Open(badger.DefaultOptions(""))
assert.Nil(t, err)
wt := OpenKubeWatchResultTable()
err = b.Update(func(txn badgerwrap.Txn) error {
var txerr error
for _, key := range keys {
txerr = wt.Set(txn, key, val)
if txerr != nil {
return txerr
}
}
// Add some keys outside the range
txerr = txn.Set([]byte("/a/123/"), []byte{})
if txerr != nil {
return txerr
}
txerr = txn.Set([]byte("/zzz/123/"), []byte{})
if txerr != nil {
return txerr
}
return nil
})
assert.Nil(t, err)
return b, wt
}
func Test_WatchTable_PutThenGet_SameData(t *testing.T) {
db, wt := helper_update_watch_table(t)
var retval *KubeWatchResult
err := db.View(func(txn badgerwrap.Txn) error {
var txerr error
retval, txerr = wt.Get(txn, "/watch/001546398000/somekinda/somenamespace/somename/1546398245000000006")
if txerr != nil {
return txerr
}
return nil
})
assert.Nil(t, err)
assert.Equal(t, someKind, retval.Kind)
}
func Test_WatchTable_TestMinAndMaxKeys(t *testing.T) {
db, wt := helper_update_watch_table(t)
var minKey string
var maxKey string
err := db.View(func(txn badgerwrap.Txn) error {
_, minKey = wt.GetMinKey(txn)
_, maxKey = wt.GetMaxKey(txn)
return nil
})
assert.Nil(t, err)
assert.Equal(t, "/watch/001546398000/somekinda/somenamespace/somename/1546398245000000006", minKey)
assert.Equal(t, "/watch/001546398000/somekindc/somenamespace/somename/1546398245000000006", maxKey)
}
func Test_WatchTable_TestGetMinMaxPartitions(t *testing.T) {
db, wt := helper_update_watch_table(t)
var minPartition string
var maxPartition string
var found bool
err := db.View(func(txn badgerwrap.Txn) error {
found, minPartition, maxPartition = wt.GetMinMaxPartitions(txn)
return nil
})
assert.Nil(t, err)
assert.True(t, found)
assert.Equal(t, somePartition, minPartition)
assert.Equal(t, somePartition, maxPartition)
}
func (_ *WatchTableKey) GetTestKey() string {
k := NewWatchTableKey("001546398000", "someKind", "someNamespace", "someName", someTs)
return k.String()
}
func (_ *WatchTableKey) GetTestValue() *KubeWatchResult {
return &KubeWatchResult{}
}

// This file was automatically generated by genny.
// Any changes will be lost if this file is regenerated.
// see https://github.com/cheekybits/genny
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package typed
import (
"fmt"
"strconv"
"time"
"github.com/dgraph-io/badger"
"github.com/golang/protobuf/proto"
"github.com/pkg/errors"
"github.com/salesforce/sloop/pkg/sloop/store/untyped"
"github.com/salesforce/sloop/pkg/sloop/store/untyped/badgerwrap"
)
type KubeWatchResultTable struct {
tableName string
}
func OpenKubeWatchResultTable() *KubeWatchResultTable {
keyInst := &WatchTableKey{}
return &KubeWatchResultTable{tableName: keyInst.TableName()}
}
func (t *KubeWatchResultTable) Set(txn badgerwrap.Txn, key string, value *KubeWatchResult) error {
err := (&WatchTableKey{}).ValidateKey(key)
if err != nil {
return errors.Wrapf(err, "invalid key for table %v: %v", t.tableName, key)
}
outb, err := proto.Marshal(value)
if err != nil {
return errors.Wrapf(err, "protobuf marshal for table %v failed", t.tableName)
}
err = txn.Set([]byte(key), outb)
if err != nil {
return errors.Wrapf(err, "set for table %v failed", t.tableName)
}
return nil
}
func (t *KubeWatchResultTable) Get(txn badgerwrap.Txn, key string) (*KubeWatchResult, error) {
err := (&WatchTableKey{}).ValidateKey(key)
if err != nil {
return nil, errors.Wrapf(err, "invalid key for table %v: %v", t.tableName, key)
}
item, err := txn.Get([]byte(key))
if err == badger.ErrKeyNotFound {
// Dont wrap. Need to preserve error type
return nil, err
} else if err != nil {
return nil, errors.Wrapf(err, "get failed for table %v", t.tableName)
}
valueBytes, err := item.ValueCopy([]byte{})
if err != nil {
return nil, errors.Wrapf(err, "value copy failed for table %v", t.tableName)
}
retValue := &KubeWatchResult{}
err = proto.Unmarshal(valueBytes, retValue)
if err != nil {
return nil, errors.Wrapf(err, "protobuf unmarshal failed for table %v on value length %v", t.tableName, len(valueBytes))
}
return retValue, nil
}
func (t *KubeWatchResultTable) GetMinKey(txn badgerwrap.Txn) (bool, string) {
keyPrefix := "/" + t.tableName + "/"
iterOpt := badger.DefaultIteratorOptions
iterOpt.Prefix = []byte(keyPrefix)
iterator := txn.NewIterator(iterOpt)
defer iterator.Close()
iterator.Seek([]byte(keyPrefix))
if !iterator.ValidForPrefix([]byte(keyPrefix)) {
return false, ""
}
return true, string(iterator.Item().Key())
}
func (t *KubeWatchResultTable) GetMaxKey(txn badgerwrap.Txn) (bool, string) {
keyPrefix := "/" + t.tableName + "/"
iterOpt := badger.DefaultIteratorOptions
iterOpt.Prefix = []byte(keyPrefix)
iterOpt.Reverse = true
iterator := txn.NewIterator(iterOpt)
defer iterator.Close()
// Seek past the end of the range: string(rune(255)) encodes U+00FF, whose UTF-8 bytes sort after any ASCII key byte
iterator.Seek([]byte(keyPrefix + string(rune(255))))
if !iterator.Valid() {
return false, ""
}
return true, string(iterator.Item().Key())
}
func (t *KubeWatchResultTable) GetMinMaxPartitions(txn badgerwrap.Txn) (bool, string, string) {
ok, minKeyStr := t.GetMinKey(txn)
if !ok {
return false, "", ""
}
ok, maxKeyStr := t.GetMaxKey(txn)
if !ok {
// This should be impossible
return false, "", ""
}
minKey := &WatchTableKey{}
maxKey := &WatchTableKey{}
err := minKey.Parse(minKeyStr)
if err != nil {
panic(fmt.Sprintf("invalid key in table: %v key: %q error: %v", t.tableName, minKeyStr, err))
}
err = maxKey.Parse(maxKeyStr)
if err != nil {
panic(fmt.Sprintf("invalid key in table: %v key: %q error: %v", t.tableName, maxKeyStr, err))
}
return true, minKey.PartitionId, maxKey.PartitionId
}
func (t *KubeWatchResultTable) RangeRead(
txn badgerwrap.Txn,
keyPredicateFn func(string) bool,
valPredicateFn func(*KubeWatchResult) bool,
startTime time.Time,
endTime time.Time) (map[WatchTableKey]*KubeWatchResult, RangeReadStats, error) {
resources := map[WatchTableKey]*KubeWatchResult{}
keyPrefix := "/" + t.tableName + "/"
iterOpt := badger.DefaultIteratorOptions
iterOpt.Prefix = []byte(keyPrefix)
itr := txn.NewIterator(iterOpt)
defer itr.Close()
startPartition := untyped.GetPartitionId(startTime)
endPartition := untyped.GetPartitionId(endTime)
startPartitionPrefix := keyPrefix + startPartition + "/"
stats := RangeReadStats{}
before := time.Now()
lastPartition := ""
for itr.Seek([]byte(startPartitionPrefix)); itr.ValidForPrefix([]byte(keyPrefix)); itr.Next() {
stats.RowsVisitedCount += 1
if !keyPredicateFn(string(itr.Item().Key())) {
continue
}
stats.RowsPassedKeyPredicateCount += 1
key := WatchTableKey{}
err := key.Parse(string(itr.Item().Key()))
if err != nil {
return nil, stats, err
}
if key.PartitionId != lastPartition {
stats.PartitionCount += 1
lastPartition = key.PartitionId
}
// partitions are zero padded to 12 digits so we can compare them lexicographically
if key.PartitionId > endPartition {
// end of range
break
}
valueBytes, err := itr.Item().ValueCopy([]byte{})
if err != nil {
return nil, stats, err
}
retValue := &KubeWatchResult{}
err = proto.Unmarshal(valueBytes, retValue)
if err != nil {
return nil, stats, err
}
if valPredicateFn != nil && !valPredicateFn(retValue) {
continue
}
stats.RowsPassedValuePredicateCount += 1
resources[key] = retValue
}
stats.Elapsed = time.Since(before)
stats.TableName = (&WatchTableKey{}).TableName()
return resources, stats, nil
}
//todo: need to add unit test
func (t *KubeWatchResultTable) GetUniquePartitionList(txn badgerwrap.Txn) ([]string, error) {
resources := []string{}
ok, minPar, maxPar := t.GetMinMaxPartitions(txn)
if ok {
parDuration := untyped.GetPartitionDuration()
for curPar := minPar; curPar < maxPar; {
resources = append(resources, curPar)
// update curPar
partInt, err := strconv.ParseInt(curPar, 10, 64)
if err != nil {
return resources, errors.Wrapf(err, "failed to get partition:%v", curPar)
}
parTime := time.Unix(partInt, 0).UTC().Add(parDuration)
curPar = untyped.GetPartitionId(parTime)
}
}
return resources, nil
}
//todo: need to add unit test
func (t *KubeWatchResultTable) GetPreviousKey(txn badgerwrap.Txn, key WatchTableKey, keyPrefix WatchTableKey) (WatchTableKey, error) {
partitionList, err := t.GetUniquePartitionList(txn)
if err != nil {
return WatchTableKey{}, errors.Wrapf(err, "failed to get partition list from table:%v", t.tableName)
}
currentPartition := key.PartitionId
for i := len(partitionList) - 1; i >= 0; i-- {
prePart := partitionList[i]
if prePart > currentPartition {
continue
}
prevFound, prevKey, err := t.getLastMatchingKeyInPartition(txn, prePart, key, keyPrefix)
if err != nil {
return WatchTableKey{}, errors.Wrapf(err, "Failure getting previous key for %v, for partition id:%v", key.String(), prePart)
}
if prevFound {
return prevKey, nil
}
}
return WatchTableKey{}, fmt.Errorf("failed to get any previous key in table:%v, for key:%v, keyPrefix:%v", t.tableName, key.String(), keyPrefix)
}
//todo: need to add unit test
func (t *KubeWatchResultTable) getLastMatchingKeyInPartition(txn badgerwrap.Txn, curPartition string, key WatchTableKey, keyPrefix WatchTableKey) (bool, WatchTableKey, error) {
iterOpt := badger.DefaultIteratorOptions
iterOpt.Reverse = true
itr := txn.NewIterator(iterOpt)
defer itr.Close()
// update partition with current value
key.SetPartitionId(curPartition)
keySeekStr := key.String()
itr.Seek([]byte(keySeekStr))
// if the iterator landed exactly on the key, step past it to its predecessor
if itr.Valid() && string(itr.Item().Key()) == key.String() {
itr.Next()
}
if itr.ValidForPrefix([]byte(keyPrefix.String())) {
key := WatchTableKey{}
err := key.Parse(string(itr.Item().Key()))
if err != nil {
return true, WatchTableKey{}, err
}
return true, key, nil
}
return false, WatchTableKey{}, nil
}

// This file was automatically generated by genny.
// Any changes will be lost if this file is regenerated.
// see https://github.com/cheekybits/genny
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package typed
import (
"fmt"
"reflect"
"testing"
"time"
"github.com/dgraph-io/badger"
"github.com/salesforce/sloop/pkg/sloop/store/untyped"
"github.com/salesforce/sloop/pkg/sloop/store/untyped/badgerwrap"
"github.com/stretchr/testify/assert"
)
func helper_KubeWatchResult_ShouldSkip() bool {
// Tests will not work on the fake types in the template, but we want to run tests on real objects
if "typed.Value"+"Type" == fmt.Sprint(reflect.TypeOf(KubeWatchResult{})) {
fmt.Printf("Skipping unit test")
return true
}
return false
}
func Test_KubeWatchResultTable_SetWorks(t *testing.T) {
if helper_KubeWatchResult_ShouldSkip() {
return
}
untyped.TestHookSetPartitionDuration(time.Hour * 24)
db, err := (&badgerwrap.MockFactory{}).Open(badger.DefaultOptions(""))
assert.Nil(t, err)
err = db.Update(func(txn badgerwrap.Txn) error {
k := (&WatchTableKey{}).GetTestKey()
vt := OpenKubeWatchResultTable()
err2 := vt.Set(txn, k, (&WatchTableKey{}).GetTestValue())
assert.Nil(t, err2)
return nil
})
assert.Nil(t, err)
}

/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package badgerwrap
import (
"github.com/dgraph-io/badger"
)
// Need a factory we can pass into untyped store so it can open and close databases
// with the proper impl
type Factory interface {
Open(opt badger.Options) (DB, error)
}
type DB interface {
Close() error
Sync() error
Update(fn func(txn Txn) error) error
View(fn func(txn Txn) error) error
DropPrefix(prefix []byte) error
Size() (lsm, vlog int64)
Tables(withKeysCount bool) []badger.TableInfo
// Backup(w io.Writer, since uint64) (uint64, error)
// DropAll() error
// Flatten(workers int) error
// GetMergeOperator(key []byte, f MergeFunc, dur time.Duration) *MergeOperator
// GetSequence(key []byte, bandwidth uint64) (*Sequence, error)
// KeySplits(prefix []byte) []string
// Load(r io.Reader, maxPendingWrites int) error
// MaxBatchCount() int64
// MaxBatchSize() int64
// NewKVLoader(maxPendingWrites int) *KVLoader
// NewStream() *Stream
// NewStreamAt(readTs uint64) *Stream
// NewStreamWriter() *StreamWriter
// NewTransaction(update bool) *Txn
// NewTransactionAt(readTs uint64, update bool) *Txn
// NewWriteBatch() *WriteBatch
// PrintHistogram(keyPrefix []byte)
// RunValueLogGC(discardRatio float64) error
// SetDiscardTs(ts uint64)
// Subscribe(ctx context.Context, cb func(kv *KVList), prefixes ...[]byte) error
// VerifyChecksum() error
}
type Txn interface {
Get(key []byte) (Item, error)
Set(key, val []byte) error
Delete(key []byte) error
NewIterator(opt badger.IteratorOptions) Iterator
// NewKeyIterator(key []byte, opt badger.IteratorOptions) *badger.Iterator
// ReadTs() uint64
// SetEntry(e *badger.Entry) error
// Discard()
// Commit() error
// CommitAt(commitTs uint64, callback func(error)) error
// CommitWith(cb func(error))
}
type Item interface {
Key() []byte
Value(fn func(val []byte) error) error
ValueCopy(dst []byte) ([]byte, error)
// DiscardEarlierVersions() bool
// EstimatedSize() int64
// ExpiresAt() uint64
// IsDeletedOrExpired() bool
// KeyCopy(dst []byte) []byte
// KeySize() int64
// String() string
// UserMeta() byte
// ValueSize() int64
// Version() uint64
}
type Iterator interface {
Close()
Item() Item
Next()
Seek(key []byte)
Valid() bool
ValidForPrefix(prefix []byte) bool
Rewind()
}

/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package badgerwrap
import (
"github.com/dgraph-io/badger"
"github.com/pkg/errors"
)
type BadgerFactory struct {
}
type BadgerDb struct {
db *badger.DB
}
type BadgerTxn struct {
txn *badger.Txn
}
type BadgerItem struct {
item *badger.Item
}
type BadgerIterator struct {
itr *badger.Iterator
}
func (f *BadgerFactory) Open(opt badger.Options) (DB, error) {
db, err := badger.Open(opt)
if err != nil {
return nil, errors.Wrap(err, "Failed to open badger")
}
return &BadgerDb{db: db}, nil
}
// Database
func (b *BadgerDb) Close() error {
return b.db.Close()
}
func (b *BadgerDb) Sync() error {
return b.db.Sync()
}
func (b *BadgerDb) Update(fn func(txn Txn) error) error {
return b.db.Update(func(txn *badger.Txn) error {
return fn(&BadgerTxn{txn: txn})
})
}
func (b *BadgerDb) View(fn func(txn Txn) error) error {
return b.db.View(func(txn *badger.Txn) error {
return fn(&BadgerTxn{txn: txn})
})
}
func (b *BadgerDb) DropPrefix(prefix []byte) error {
err := b.db.DropPrefix(prefix)
return err
}
func (b *BadgerDb) Size() (lsm, vlog int64) {
return b.db.Size()
}
func (b *BadgerDb) Tables(withKeysCount bool) []badger.TableInfo {
return b.db.Tables(withKeysCount)
}
// Transaction
func (t *BadgerTxn) Get(key []byte) (Item, error) {
item, err := t.txn.Get(key)
if err != nil {
return nil, err
}
return &BadgerItem{item: item}, nil
}
func (t *BadgerTxn) Set(key, val []byte) error {
return t.txn.Set(key, val)
}
func (t *BadgerTxn) Delete(key []byte) error {
return t.txn.Delete(key)
}
func (t *BadgerTxn) NewIterator(opt badger.IteratorOptions) Iterator {
return &BadgerIterator{itr: t.txn.NewIterator(opt)}
}
// Item
func (i *BadgerItem) Key() []byte {
return i.item.Key()
}
func (i *BadgerItem) Value(fn func(val []byte) error) error {
return i.item.Value(fn)
}
func (i *BadgerItem) ValueCopy(dst []byte) ([]byte, error) {
return i.item.ValueCopy(dst)
}
// Iterator
func (i *BadgerIterator) Close() {
i.itr.Close()
}
func (i *BadgerIterator) Item() Item {
return i.itr.Item()
}
func (i *BadgerIterator) Next() {
i.itr.Next()
}
func (i *BadgerIterator) Seek(key []byte) {
i.itr.Seek(key)
}
func (i *BadgerIterator) Valid() bool {
return i.itr.Valid()
}
func (i *BadgerIterator) ValidForPrefix(prefix []byte) bool {
return i.itr.ValidForPrefix(prefix)
}
func (i *BadgerIterator) Rewind() {
i.itr.Rewind()
}

/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package badgerwrap
import (
"fmt"
"github.com/dgraph-io/badger"
"sort"
"strings"
"sync"
)
// This mock simulates badger using an in-memory store
// Useful for fast unit tests that don't want to touch the disk
// Currently this uses a crude lock to simulate transactions
type MockFactory struct {
}
type MockDb struct {
lock *sync.RWMutex
data map[string][]byte
}
type MockTxn struct {
readOnly bool
db *MockDb
}
type MockItem struct {
key []byte
value []byte
}
type MockIterator struct {
opt badger.IteratorOptions
currentIdx int
db *MockDb
// A snapshot of keys in sorted order
keys []string
}
func (f *MockFactory) Open(opt badger.Options) (DB, error) {
return &MockDb{lock: &sync.RWMutex{}, data: make(map[string][]byte)}, nil
}
// Database
func (b *MockDb) Close() error {
return nil
}
func (b *MockDb) Sync() error {
return nil
}
func (b *MockDb) Update(fn func(txn Txn) error) error {
b.lock.Lock()
defer b.lock.Unlock()
txn := &MockTxn{readOnly: false, db: b}
return fn(txn)
}
func (b *MockDb) View(fn func(txn Txn) error) error {
b.lock.RLock()
defer b.lock.RUnlock()
txn := &MockTxn{readOnly: true, db: b}
return fn(txn)
}
func (b *MockDb) DropPrefix(prefix []byte) error {
b.lock.Lock()
defer b.lock.Unlock()
if len(b.data) == 0 {
return fmt.Errorf("unable to delete prefix: %s from empty table", string(prefix))
}
for key := range b.data {
if strings.HasPrefix(key, string(prefix)) {
delete(b.data, key)
}
}
return nil
}
func (b *MockDb) Size() (lsm, vlog int64) {
size := 0
for k, v := range b.data {
size += len(k) + len(v)
}
return int64(size), 0
}
func (b *MockDb) Tables(withKeysCount bool) []badger.TableInfo {
keyCount := 0
if withKeysCount {
keyCount = len(b.data)
}
return []badger.TableInfo{
{KeyCount: uint64(keyCount)},
}
}
// Transaction
func (t *MockTxn) Get(key []byte) (Item, error) {
data, ok := t.db.data[string(key)]
if !ok {
return nil, badger.ErrKeyNotFound
}
item := &MockItem{key: key, value: data}
return item, nil
}
func (t *MockTxn) Set(key, val []byte) error {
if t.readOnly {
return badger.ErrReadOnlyTxn
}
t.db.data[string(key)] = val
return nil
}
func (t *MockTxn) Delete(key []byte) error {
if t.readOnly {
return badger.ErrReadOnlyTxn
}
delete(t.db.data, string(key))
return nil
}
func (t *MockTxn) NewIterator(opt badger.IteratorOptions) Iterator {
keys := []string{}
for k := range t.db.data {
keys = append(keys, k)
}
if opt.Reverse {
sort.Sort(sort.Reverse(sort.StringSlice(keys)))
} else {
sort.Strings(keys)
}
return &MockIterator{db: t.db, currentIdx: 0, opt: opt, keys: keys}
}
// Item
func (i *MockItem) Key() []byte {
return i.key
}
func (i *MockItem) Value(fn func(val []byte) error) error {
return fn(i.value)
}
func (i *MockItem) ValueCopy(dst []byte) ([]byte, error) {
// fill the caller's slice (when large enough) to mimic badger writing into dst,
// but always return a fresh copy so callers never alias the store's internal value
copy(dst, i.value)
newcopy := make([]byte, len(i.value))
copy(newcopy, i.value)
return newcopy, nil
}
// Iterator
func (i *MockIterator) Close() {
}
// Item returns pointer to the current key-value pair. This item is only valid until
// it.Next() gets called.
func (i *MockIterator) Item() Item {
if i.currentIdx < len(i.keys) {
thisKey := i.keys[i.currentIdx]
thisValue := i.db.data[thisKey]
return &MockItem{key: []byte(thisKey), value: thisValue}
}
return nil
}
// Next would advance the iterator by one. Always check it.Valid() after a Next() to
// ensure you have access to a valid it.Item().
func (i *MockIterator) Next() {
i.currentIdx += 1
}
// Seek would seek to the provided key if present. If absent, it would seek to the next
// smallest key greater than the provided key if iterating in the forward direction. Behavior
// would be reversed if iterating backwards.
func (i *MockIterator) Seek(key []byte) {
if !i.opt.Reverse {
i.currentIdx = sort.SearchStrings(i.keys, string(key))
} else {
// Badger has a silly behavior where everything in the iterator works properly in reverse except Seek
// I would expect seek in reverse to find the end of the key range based on the prefix but it does not
// Also, golang search requires ascending order
sort.Strings(i.keys)
i.currentIdx = len(i.keys) - sort.SearchStrings(i.keys, string(key))
sort.Sort(sort.Reverse(sort.StringSlice(i.keys)))
}
}
// Valid returns false when iteration is done.
func (i *MockIterator) Valid() bool {
if i.currentIdx < 0 || i.currentIdx >= len(i.keys) {
return false
}
return true
}
// ValidForPrefix returns false when iteration is done or when the current key is not prefixed
// by the specified prefix.
func (i *MockIterator) ValidForPrefix(prefix []byte) bool {
if !i.Valid() {
return false
}
return strings.HasPrefix(i.keys[i.currentIdx], string(prefix))
}
// Rewind would rewind the iterator cursor all the way to zero-th position, which would be
// the smallest key if iterating forward, and largest if iterating backward. It does not keep
// track of whether the cursor started with a Seek().
func (i *MockIterator) Rewind() {
i.currentIdx = 0
}

/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package badgerwrap
import (
"fmt"
"github.com/dgraph-io/badger"
"github.com/stretchr/testify/assert"
"io/ioutil"
"testing"
)
// Turn this on when writing new tests, but leave it off when you check in
var useRealBadger = false
func helper_OpenDb(t *testing.T) DB {
if useRealBadger {
// Badger Data DB
dataDir, err := ioutil.TempDir("", "data")
assert.Nil(t, err)
options := badger.DefaultOptions(dataDir)
db, err := (&BadgerFactory{}).Open(options)
assert.Nil(t, err)
return db
} else {
options := badger.DefaultOptions("")
db, err := (&MockFactory{}).Open(options)
assert.Nil(t, err)
return db
}
}
func helper_Set(t *testing.T, db DB, key []byte, value []byte) {
err := db.Update(
func(t Txn) error {
return t.Set(key, value)
})
assert.Nil(t, err)
}
// There are 3 ways to get the value of an item. We are ensuring
// they all match. If this slows down tests we can make it optional
func helper_Get(t *testing.T, db DB, key []byte) ([]byte, error) {
var actualValueFn []byte
err := db.View(
func(tx Txn) error {
item, err2 := tx.Get(key)
if err2 != nil {
return err2
}
assert.Equal(t, key, item.Key())
// Grab value first with a function
err2 = item.Value(
func(val []byte) error {
actualValueFn = val
return nil
})
assert.Nil(t, err2)
// Get it a second time as a copy and make sure they match
var actualValueCopy []byte
actualValueCopy, err2 = item.ValueCopy([]byte{})
assert.Nil(t, err2)
assert.Equal(t, actualValueFn, actualValueCopy)
// Get it a third time by writing into an existing slice
var actualValueCopyExistingSlice = make([]byte, len(actualValueFn))
_, err2 = item.ValueCopy(actualValueCopyExistingSlice)
assert.Nil(t, err2)
assert.Equal(t, actualValueFn, actualValueCopyExistingSlice)
return nil
})
return actualValueFn, err
}
func helper_GetNoError(t *testing.T, db DB, key []byte) []byte {
data, err := helper_Get(t, db, key)
assert.Nil(t, err)
return data
}
func helper_iterateKeys(db DB, opt badger.IteratorOptions) []string {
actual := []string{}
db.View(func(txn Txn) error {
i := txn.NewIterator(opt)
defer i.Close()
for i.Rewind(); i.Valid(); i.Next() {
actual = append(actual, string(i.Item().Key()))
}
return nil
})
return actual
}
func helper_iterateKeysPrefix(db DB, opt badger.IteratorOptions, seek string, prefix string) []string {
actual := []string{}
db.View(func(txn Txn) error {
i := txn.NewIterator(opt)
defer i.Close()
// Split out for easier debugging
// for i.Seek([]byte(prefix)); i.ValidForPrefix([]byte(prefix)); i.Next() {
// }
i.Seek([]byte(seek))
for {
if !i.ValidForPrefix([]byte(prefix)) {
break
}
actual = append(actual, string(i.Item().Key()))
i.Next()
}
return nil
})
return actual
}
var testKey = []byte("/somekey")
var testValue1 = []byte("somevalue1")
var testValue2 = []byte("somevalue2")
func Test_MockBadger_GetMissingKey_ReturnsCorrectError(t *testing.T) {
db := helper_OpenDb(t)
defer db.Close()
_, err := helper_Get(t, db, testKey)
assert.Equal(t, "Key not found", fmt.Sprintf("%v", err.Error()))
}
func Test_MockBadger_PutAndGet_ValuesMatch(t *testing.T) {
db := helper_OpenDb(t)
defer db.Close()
helper_Set(t, db, testKey, testValue1)
actualValue := helper_GetNoError(t, db, testKey)
assert.Equal(t, testValue1, actualValue)
}
func Test_MockBadger_WriteTwoValuesToSameKey_LatestIsReturned(t *testing.T) {
db := helper_OpenDb(t)
defer db.Close()
helper_Set(t, db, testKey, testValue1)
helper_Set(t, db, testKey, testValue2)
actualValue := helper_GetNoError(t, db, testKey)
assert.Equal(t, testValue2, actualValue)
}
func Test_MockBadger_AddThenDelete_NotFound(t *testing.T) {
db := helper_OpenDb(t)
defer db.Close()
helper_Set(t, db, testKey, testValue1)
err := db.Update(
func(txn Txn) error {
txn.Delete(testKey)
return nil
})
assert.Nil(t, err)
_, err = helper_Get(t, db, testKey)
assert.Equal(t, "Key not found", err.Error())
}
func Test_MockBadger_DeleteAMissingKey_NoError(t *testing.T) {
db := helper_OpenDb(t)
defer db.Close()
err := db.Update(
func(txn Txn) error {
err2 := txn.Delete(testKey)
assert.Nil(t, err2)
return nil
})
assert.Nil(t, err)
}
func Test_MockBadger_SetAnEmptyKey_NoError(t *testing.T) {
db := helper_OpenDb(t)
defer db.Close()
helper_Set(t, db, []byte{}, testValue1)
}
func Test_MockBadger_IterateAllKeys(t *testing.T) {
db := helper_OpenDb(t)
defer db.Close()
expected := []string{"/a1", "/a2", "/a3", "/a4"}
// Include some dupes
for _, key := range []string{"/a4", "/a1", "/a3", "/a2", "/a4"} {
helper_Set(t, db, []byte(key), []byte{})
}
actual := helper_iterateKeys(db, badger.DefaultIteratorOptions)
assert.Equal(t, expected, actual)
}
func Test_MockBadger_IterateAllKeysBackwards(t *testing.T) {
db := helper_OpenDb(t)
defer db.Close()
expected := []string{"/a4", "/a3", "/a2", "/a1"}
// Include some dupes
for _, key := range []string{"/a4", "/a1", "/a3", "/a2", "/a4"} {
helper_Set(t, db, []byte(key), []byte{})
}
opt := badger.DefaultIteratorOptions
opt.Reverse = true
actual := helper_iterateKeys(db, opt)
assert.Equal(t, expected, actual)
}
func Test_MockBadger_IterateAllKeysWithPrefix(t *testing.T) {
db := helper_OpenDb(t)
defer db.Close()
expected := []string{"/b/1", "/b/4"}
// Include some dupes
for _, key := range []string{"/a/1", "/a/2", "/b/1", "/b/4", "/c/1", "/c/2"} {
helper_Set(t, db, []byte(key), []byte{})
}
actual := helper_iterateKeysPrefix(db, badger.DefaultIteratorOptions, "/b/", "/b/")
assert.Equal(t, expected, actual)
}
// This feels like a bug in badger. We need to seek to the key after our prefix
// Sounds like its by design: https://github.com/dgraph-io/badger/issues/436
// "/b0" is one past "/b/"
func Test_MockBadger_IterateAllKeysWithPrefixBackwards(t *testing.T) {
db := helper_OpenDb(t)
defer db.Close()
expected := []string{"/b/4", "/b/1"}
// Include some dupes
for _, key := range []string{"/a/1", "/a/2", "/b/1", "/b/4", "/c/1", "/c/2"} {
helper_Set(t, db, []byte(key), []byte{})
}
opt := badger.DefaultIteratorOptions
opt.Reverse = true
actual := helper_iterateKeysPrefix(db, opt, "/b0", "/b/")
assert.Equal(t, expected, actual)
}
func Test_MockBadger_DropPrefix_OK(t *testing.T) {
db := helper_OpenDb(t)
defer db.Close()
expected := []string{"/b/1", "/b/4"}
for _, key := range []string{"/a/1", "/a/2", "/b/1", "/b/4", "/c/1", "/c/2"} {
helper_Set(t, db, []byte(key), []byte{})
}
actual := helper_iterateKeysPrefix(db, badger.DefaultIteratorOptions, "/b/", "/b/")
assert.Equal(t, expected, actual)
// start drop prefix with /b
db.DropPrefix([]byte("/b"))
actual = helper_iterateKeysPrefix(db, badger.DefaultIteratorOptions, "/b/", "/b/")
assert.Len(t, actual, 0)
}
func Test_MockBadger_DropPrefix_Fail(t *testing.T) {
db := helper_OpenDb(t)
defer db.Close()
for _, key := range []string{"/a/1", "/a/2", "/b/1", "/b/4", "/c/1", "/c/2"} {
helper_Set(t, db, []byte(key), testValue1)
}
db.DropPrefix([]byte("/x"))
for _, key := range []string{"/a/1", "/a/2", "/b/1", "/b/4", "/c/1", "/c/2"} {
data, err := helper_Get(t, db, []byte(key))
assert.Nil(t, err)
assert.Equal(t, len(testValue1), len(data))
}
}


@ -0,0 +1,61 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package untyped
import (
"fmt"
"strconv"
"time"
)
// For now we want the ability to try different durations, and this cannot change at runtime.
// Keys need access to GetPartitionId(), which needs this value, and we don't want to pass config
// around everywhere that deals with keys.
// TODO: Later, when we figure out an ideal partition duration, remove it from config so users don't
// change it and end up with data that does not match the business logic
var partitionDuration time.Duration
// Partitions need to be in lexicographical sorted order, so zero pad to 12 digits
func GetPartitionId(timestamp time.Time) string {
if partitionDuration == time.Hour {
rounded := time.Date(timestamp.Year(), timestamp.Month(), timestamp.Day(), timestamp.Hour(), 0, 0, 0, timestamp.Location())
return fmt.Sprintf("%012d", uint64(rounded.Unix()))
} else if partitionDuration == 24*time.Hour {
rounded := time.Date(timestamp.Year(), timestamp.Month(), timestamp.Day(), 0, 0, 0, 0, timestamp.Location())
return fmt.Sprintf("%012d", uint64(rounded.Unix()))
} else {
panic("Invalid partition duration")
}
}
func GetTimeRangeForPartition(partitionId string) (time.Time, time.Time, error) {
partInt, err := strconv.ParseInt(partitionId, 10, 64)
if err != nil {
return time.Time{}, time.Time{}, err
}
oldestTime := time.Unix(partInt, 0).UTC()
var newestTime time.Time
if partitionDuration == time.Hour {
newestTime = oldestTime.Add(time.Hour)
} else if partitionDuration == 24*time.Hour {
newestTime = oldestTime.Add(24 * time.Hour)
} else {
panic("Invalid partition duration")
}
return oldestTime, newestTime, nil
}
func TestHookSetPartitionDuration(partDuration time.Duration) {
partitionDuration = partDuration
}
func GetPartitionDuration() time.Duration {
return partitionDuration
}


@ -0,0 +1,37 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package untyped
import (
"github.com/stretchr/testify/assert"
"testing"
"time"
)
var someTs = time.Date(2019, 1, 2, 3, 4, 5, 6, time.UTC)
var someTsRoundedHour = time.Date(2019, 1, 2, 3, 0, 0, 0, time.UTC)
var someTsRoundedDay = time.Date(2019, 1, 2, 0, 0, 0, 0, time.UTC)
func Test_PartitionsRoundTrip_Hour(t *testing.T) {
TestHookSetPartitionDuration(time.Hour)
partStr := GetPartitionId(someTs)
minTs, maxTs, err := GetTimeRangeForPartition(partStr)
assert.Nil(t, err)
assert.Equal(t, someTsRoundedHour, minTs)
assert.Equal(t, someTsRoundedHour.Add(time.Hour), maxTs)
}
func Test_PartitionsRoundTrip_Day(t *testing.T) {
TestHookSetPartitionDuration(24 * time.Hour)
partStr := GetPartitionId(someTs)
minTs, maxTs, err := GetTimeRangeForPartition(partStr)
assert.Nil(t, err)
assert.Equal(t, someTsRoundedDay, minTs)
assert.Equal(t, someTsRoundedDay.Add(24*time.Hour), maxTs)
}


@ -0,0 +1,46 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package untyped
import (
"fmt"
"github.com/dgraph-io/badger"
"github.com/golang/glog"
"github.com/salesforce/sloop/pkg/sloop/store/untyped/badgerwrap"
"os"
"time"
)
func OpenStore(factory badgerwrap.Factory, rootPath string, configPartitionDuration time.Duration) (badgerwrap.DB, error) {
err := os.MkdirAll(rootPath, 0755)
if err != nil {
glog.Infof("mkdir failed with %v", err)
}
// For now using a temp name because this all needs to be replaced when we add real table/partition support
opts := badger.DefaultOptions(rootPath)
db, err := factory.Open(opts)
if err != nil {
return nil, fmt.Errorf("badger.OpenStore failed with: %v", err)
}
if configPartitionDuration != time.Hour && configPartitionDuration != 24*time.Hour {
return nil, fmt.Errorf("only hour and day partition durations are supported")
}
partitionDuration = configPartitionDuration
return db, nil
}
func CloseStore(db badgerwrap.DB) error {
glog.Infof("Closing store")
err := db.Close()
glog.Infof("Finished closing store")
return err
}


@ -0,0 +1,38 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package storemanager
import (
"time"
)
// This provides a way to sleep with the ability to be woken up by a cancel.
// Once Cancel is called, all future sleeps return immediately.
type SleepWithCancel struct {
cancel chan bool
}
func NewSleepWithCancel() *SleepWithCancel {
return &SleepWithCancel{cancel: make(chan bool, 10)}
}
func (s *SleepWithCancel) Sleep(after time.Duration) {
select {
case <-s.cancel:
break
case <-time.After(after):
break
}
}
func (s *SleepWithCancel) Cancel() {
s.cancel <- true
close(s.cancel)
}


@ -0,0 +1,25 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package storemanager
import (
"github.com/stretchr/testify/assert"
"testing"
"time"
)
func Test_SleepWithCancel_TestThatSleepsAfterCancelDontCrash(t *testing.T) {
before := time.Now()
s := NewSleepWithCancel()
s.Cancel()
s.Sleep(time.Minute)
s.Sleep(time.Minute)
s.Sleep(time.Hour)
assert.True(t, time.Since(before).Seconds() < 100)
}


@ -0,0 +1,215 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package storemanager
import (
"fmt"
"github.com/golang/glog"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/salesforce/sloop/pkg/sloop/store/typed"
"github.com/salesforce/sloop/pkg/sloop/store/untyped"
"github.com/salesforce/sloop/pkg/sloop/store/untyped/badgerwrap"
"github.com/spf13/afero"
"os"
"sync"
"time"
)
var (
metricGcRunCount = promauto.NewCounter(prometheus.CounterOpts{Name: "sloop_gc_run_count"})
metricGcCleanupPerformedCount = promauto.NewCounter(prometheus.CounterOpts{Name: "sloop_gc_cleanup_performed_count"})
metricGcFailedCount = promauto.NewCounter(prometheus.CounterOpts{Name: "sloop_failed_gc_count"})
metricStoreSizeondiskmb = promauto.NewGauge(prometheus.GaugeOpts{Name: "sloop_store_sizeondiskmb"})
metricBadgerKeys = promauto.NewGauge(prometheus.GaugeOpts{Name: "sloop_badger_keys"})
metricBadgerTables = promauto.NewGauge(prometheus.GaugeOpts{Name: "sloop_badger_tables"})
metricBadgerLsmsizemb = promauto.NewGauge(prometheus.GaugeOpts{Name: "sloop_badger_lsmsizemb"})
metricBadgerVlogsizemb = promauto.NewGauge(prometheus.GaugeOpts{Name: "sloop_badger_vlogsizemb"})
)
type StoreManager struct {
tables typed.Tables
storeRoot string
freq time.Duration
timeLimit time.Duration
sizeLimitMb int
fs *afero.Afero
testMode bool
sleeper *SleepWithCancel
wg *sync.WaitGroup
done bool
donelock *sync.Mutex
}
func NewStoreManager(tables typed.Tables, storeRoot string, freq time.Duration, timeLimit time.Duration, sizeLimitMb int, fs *afero.Afero) *StoreManager {
return &StoreManager{
tables: tables,
storeRoot: storeRoot,
freq: freq,
timeLimit: timeLimit,
sizeLimitMb: sizeLimitMb,
fs: fs,
sleeper: NewSleepWithCancel(),
wg: &sync.WaitGroup{},
done: false,
donelock: &sync.Mutex{},
}
}
func (sm *StoreManager) isDone() bool {
sm.donelock.Lock()
defer sm.donelock.Unlock()
return sm.done
}
func (sm *StoreManager) Start() {
// Add to the WaitGroup before launching the goroutine so a concurrent
// Shutdown() cannot pass wg.Wait() before the goroutine registers itself
sm.wg.Add(1)
go func() {
defer sm.wg.Done()
for {
if sm.isDone() {
glog.Infof("Store manager main loop exiting")
return
}
temporaryEmitMetrics(sm.storeRoot, sm.tables.Db(), sm.fs)
metricGcRunCount.Inc()
cleanupPerformed, err := doCleanup(sm.tables, sm.storeRoot, sm.timeLimit, sm.sizeLimitMb*1024*1024, sm.fs)
if err != nil {
glog.Errorf("GC failed with err:%v, will sleep: %v and retry later ...", err, sm.freq)
sm.sleeper.Sleep(sm.freq)
} else if !cleanupPerformed {
glog.V(2).Infof("GC did not need to clean anything, will sleep: %v", sm.freq)
sm.sleeper.Sleep(sm.freq)
} else {
// We did some cleanup and there were no errors. Because we may be in a low-space situation,
// skip the sleep and repeat the loop
glog.Infof("GC cleanup performed")
metricGcCleanupPerformedCount.Inc()
}
}
}()
}
func (sm *StoreManager) Shutdown() {
glog.Infof("Starting store manager shutdown")
sm.donelock.Lock()
sm.done = true
sm.donelock.Unlock()
sm.sleeper.Cancel()
sm.wg.Wait()
}
func doCleanup(tables typed.Tables, storeRoot string, timeLimit time.Duration, sizeLimitBytes int, fs *afero.Afero) (bool, error) {
ok, minPartition, maxPartition, err := tables.GetMinAndMaxPartition()
if err != nil {
return false, fmt.Errorf("failed to get min partition: %s, max partition: %s, err: %v", minPartition, maxPartition, err)
}
if !ok {
return false, nil
}
anyCleanupPerformed := false
if cleanUpTimeCondition(minPartition, maxPartition, timeLimit) || cleanUpFileSizeCondition(storeRoot, sizeLimitBytes, fs) {
var errMsgs []string
for _, tableName := range tables.GetTableNames() {
prefix := fmt.Sprintf("/%s/%s", tableName, minPartition)
start := time.Now()
err = tables.Db().DropPrefix([]byte(prefix))
elapsed := time.Since(start)
if err != nil {
errMsgs = append(errMsgs, fmt.Sprintf("failed to cleanup with min key: %s, elapsed: %v,err: %v,", prefix, elapsed, err))
}
anyCleanupPerformed = true
}
if len(errMsgs) != 0 {
var errMsg string
for _, er := range errMsgs {
errMsg += er + ","
}
return false, fmt.Errorf("%s", errMsg)
}
}
return anyCleanupPerformed, nil
}
func cleanUpTimeCondition(minPartition string, maxPartition string, timeLimit time.Duration) bool {
oldestTime, _, err := untyped.GetTimeRangeForPartition(minPartition)
if err != nil {
glog.Error(err)
return false
}
_, latestTime, err := untyped.GetTimeRangeForPartition(maxPartition)
if err != nil {
glog.Error(err)
return false
}
timeDiff := latestTime.Sub(oldestTime)
if timeDiff > timeLimit {
glog.Infof("Start cleaning up because current time diff: %v exceeds time limit: %v", timeDiff, timeLimit)
return true
}
glog.V(2).Infof("Not cleaning up yet; time gap: %v has not exceeded time limit: %v", timeDiff, timeLimit)
return false
}
func cleanUpFileSizeCondition(storeRoot string, sizeLimitBytes int, fs *afero.Afero) bool {
size, err := getDirSizeRecursive(storeRoot, fs)
if err != nil {
return false
}
if size > uint64(sizeLimitBytes) {
glog.Infof("Start cleaning up because current file size: %v exceeds size limit: %v", size, sizeLimitBytes)
return true
}
glog.V(2).Infof("Not cleaning up yet; disk size: %v has not exceeded size limit: %v", size, uint64(sizeLimitBytes))
return false
}
func getDirSizeRecursive(root string, fs *afero.Afero) (uint64, error) {
var totalSize uint64
err := fs.Walk(root, func(path string, info os.FileInfo, err error) error {
if err != nil {
return err
}
if !info.IsDir() {
totalSize += uint64(info.Size())
}
return nil
})
if err != nil {
return 0, err
}
return totalSize, nil
}
// TODO: Properly integrate this with the next refactor
func temporaryEmitMetrics(storeRoot string, db badgerwrap.DB, fs *afero.Afero) {
totalSizeBytes, err := getDirSizeRecursive(storeRoot, fs)
if err != nil {
glog.Errorf("Failed to check storage size on disk: %v", err)
} else {
metricStoreSizeondiskmb.Set(float64(totalSizeBytes) / 1024.0 / 1024.0)
}
lsmSize, vlogSize := db.Size()
metricBadgerLsmsizemb.Set(float64(lsmSize) / 1024.0 / 1024.0)
metricBadgerVlogsizemb.Set(float64(vlogSize) / 1024.0 / 1024.0)
var totalKeys uint64
tables := db.Tables(true)
for _, table := range tables {
totalKeys += table.KeyCount
}
metricBadgerKeys.Set(float64(totalKeys))
metricBadgerTables.Set(float64(len(tables)))
}


@ -0,0 +1,149 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package storemanager
import (
"github.com/stretchr/testify/assert"
"testing"
"time"
"github.com/dgraph-io/badger"
"github.com/salesforce/sloop/pkg/sloop/store/typed"
"github.com/salesforce/sloop/pkg/sloop/store/untyped"
"github.com/salesforce/sloop/pkg/sloop/store/untyped/badgerwrap"
"github.com/spf13/afero"
)
var (
useRealBadger = false
testKey = []byte("/somekey")
testValue1 = []byte("somevalue1")
testValue2 = []byte("somevalue2")
someTs = time.Date(2019, 1, 2, 3, 4, 5, 6, time.UTC)
someDir = "/foo"
somePath = "/foo/something"
someKind = "somekind"
someNamespace = "somenamespace"
someName = "somename"
someUid = "123232"
)
func Test_GetDirSizeRecursive(t *testing.T) {
fs := afero.Afero{Fs: afero.NewMemMapFs()}
fs.MkdirAll(someDir, 0700)
fs.WriteFile(somePath, []byte("abcdfdfdfd"), 0700)
fileSize, err := getDirSizeRecursive(someDir, &fs)
assert.Nil(t, err)
assert.NotZero(t, fileSize)
}
func Test_cleanUpFileSizeCondition_True(t *testing.T) {
fs := afero.Afero{Fs: afero.NewMemMapFs()}
fs.MkdirAll(someDir, 0700)
fs.WriteFile(somePath, []byte("abcdfdfdfd"), 0700)
flag := cleanUpFileSizeCondition(someDir, 3, &fs)
assert.True(t, flag)
}
func Test_cleanUpFileSizeCondition_False(t *testing.T) {
fs := afero.Afero{Fs: afero.NewMemMapFs()}
fs.MkdirAll(someDir, 0700)
fs.WriteFile(somePath, []byte("abcdfdfdfd"), 0700)
flag := cleanUpFileSizeCondition(someDir, 100, &fs)
assert.False(t, flag)
}
func Test_cleanUpTimeCondition(t *testing.T) {
untyped.TestHookSetPartitionDuration(time.Hour)
// partition gap is smaller than time limit
flag := cleanUpTimeCondition("001564074000", "001564077600", 3*time.Hour)
assert.False(t, flag)
// minPartition is illegal input
flag = cleanUpTimeCondition("dfdfdere001564074000", "001564077600", time.Hour)
assert.False(t, flag)
// maxPartition is illegal input
flag = cleanUpTimeCondition("001564074000", "dfdfdere001564077600", time.Hour)
assert.False(t, flag)
// partition gap is greater than time limit
flag = cleanUpTimeCondition("001564074000", "001564077600", 20*time.Minute)
assert.True(t, flag)
}
func help_get_db(t *testing.T) badgerwrap.DB {
untyped.TestHookSetPartitionDuration(time.Hour)
partitionId := untyped.GetPartitionId(someTs)
key1 := typed.NewWatchTableKey(partitionId, someKind+"a", someNamespace, someName, someTs).String()
key2 := typed.NewResourceSummaryKey(someTs, someKind+"b", someNamespace, someName, someUid).String()
key3 := typed.NewEventCountKey(someTs, someKind+"c", someNamespace, someName, someUid).String()
wtval := &typed.KubeWatchResult{Kind: someKind}
rtval := &typed.ResourceSummary{DeletedAtEnd: false}
ecVal := &typed.ResourceEventCounts{XXX_sizecache: int32(0)}
db, err := (&badgerwrap.MockFactory{}).Open(badger.DefaultOptions(""))
assert.Nil(t, err)
// Note: no defer db.Close() here; the caller uses the returned db
wt := typed.OpenKubeWatchResultTable()
rt := typed.OpenResourceSummaryTable()
ec := typed.OpenResourceEventCountsTable()
err = db.Update(func(txn badgerwrap.Txn) error {
txerr := wt.Set(txn, key1, wtval)
if txerr != nil {
return txerr
}
txerr = rt.Set(txn, key2, rtval)
if txerr != nil {
return txerr
}
txerr = ec.Set(txn, key3, ecVal)
if txerr != nil {
return txerr
}
txerr = ec.Set(txn, "something", nil)
if txerr != nil {
return txerr
}
return nil
})
return db
}
func Test_doCleanup_true(t *testing.T) {
db := help_get_db(t)
tables := typed.NewTableList(db)
fs := afero.Afero{Fs: afero.NewMemMapFs()}
fs.MkdirAll(someDir, 0700)
fs.WriteFile(somePath, []byte("abcdfdfdfd"), 0700)
flag, err := doCleanup(tables, someDir, time.Hour, 2, &fs)
assert.True(t, flag)
assert.Nil(t, err)
}
func Test_doCleanup_false(t *testing.T) {
db := help_get_db(t)
tables := typed.NewTableList(db)
fs := afero.Afero{Fs: afero.NewMemMapFs()}
fs.MkdirAll(someDir, 0700)
fs.WriteFile(somePath, []byte("abcdfdfdfd"), 0700)
flag, err := doCleanup(tables, someDir, time.Hour, 1000, &fs)
assert.False(t, flag)
assert.Nil(t, err)
}


@ -0,0 +1,89 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package assertex
import (
"fmt"
"github.com/nsf/jsondiff"
"testing"
)
// This is similar functionality to assert.JSONEq() but is more useful for a couple of reasons:
// 1) It prints the full actual string without transformed line breaks, so it's easy to copy the output back into
// source code if desired
// 2) JSONEq prints a diff of the JSON strings, whereas this shows a combined diff
//
// Expected Payload:
// {
// "foo": { "bar": 5 },
// "abc": [2,3]
// }
//
// Actual Payload
// {
// "foo": { "bar": 1 },
// "abc": [2,3]
// }
//
// assert.JSONEq() will give you:
/*
Error Trace:
Error: Not equal: map[string]interface {}{"foo":map[string]interface {}{"bar":5}, "abc":[]interface {}{2, 3}} (expected)
!= map[string]interface {}{"foo":map[string]interface {}{"bar":1}, "abc":[]interface {}{2, 3}} (actual)
Diff:
--- Expected
+++ Actual
@@ -6,3 +6,3 @@
(string) (len=3) "foo": (map[string]interface {}) (len=1) {
- (string) (len=3) "bar": (float64) 5
+ (string) (len=3) "bar": (float64) 1
}
*/
// This helper will give you:
/*
Diff:NoMatch
## EXPECTED:
{
"foo": { "bar": 5 },
"abc": [2,3]
}
## ACTUAL:
{
"foo": { "bar": 1 },
"abc": [2,3]
}
## DIFF:
{
"abc": [
2,
3
],
"foo": {
"bar": 5 => 1
}
*/
func JsonEqualBytes(t *testing.T, expectedByte []byte, actualByte []byte) {
t.Helper()
diff, diffString := jsondiff.Compare(expectedByte, actualByte, &jsondiff.Options{})
if diff != jsondiff.FullMatch {
fmt.Printf("Diff:%v\n", diff.String())
fmt.Printf("## EXPECTED:\n%v\n", string(expectedByte))
fmt.Printf("## ACTUAL:\n%v\n", string(actualByte))
fmt.Printf("## DIFF:\n%v", diffString)
t.Fail()
}
}
func JsonEqual(t *testing.T, expectedStr string, actualStr string) {
JsonEqualBytes(t, []byte(expectedStr), []byte(actualStr))
}


@ -0,0 +1,34 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
package assertex
import (
"fmt"
"github.com/golang/protobuf/proto"
"testing"
)
func areProtoEqual(expected interface{}, actual interface{}) bool {
expectedProto, ok := expected.(proto.Message)
if ok {
actualProto, ok := actual.(proto.Message)
if ok {
return proto.Equal(expectedProto, actualProto)
}
}
return false
}
func ProtoEqual(t *testing.T, expected interface{}, actual interface{}) {
t.Helper()
if !areProtoEqual(expected, actual) {
fmt.Printf("## EXPECTED:\n%v\n", expected)
fmt.Printf("## ACTUAL:\n%v\n", actual)
t.Fail()
}
}


@ -0,0 +1,20 @@
<!--
Copyright (c) 2019, salesforce.com, inc.
All rights reserved.
Licensed under the BSD 3-Clause license.
For full license text, see LICENSE.txt file in the repo root or
https://opensource.org/licenses/BSD-3-Clause
-->
<html>
<head>
<title>Sloop Debug</title>
<link rel='shortcut icon' type='image/x-icon' href='/webfiles/favicon.ico' />
</head>
<body>
[ <a href="/">Home</a> ][ <a href="/debug/">List Keys</a> ][ <a href="/config/">Config</a> ]<br><br>
<b>Current Config</b>:<br>
<table border="1"><tr><td>
<pre>{{.}}</pre>
</td></tr></table>
</body>
</html>


@ -0,0 +1,51 @@
<!--
Copyright (c) 2019, salesforce.com, inc.
All rights reserved.
Licensed under the BSD 3-Clause license.
For full license text, see LICENSE.txt file in the repo root or
https://opensource.org/licenses/BSD-3-Clause
-->
<html>
<head>
<title>Sloop Debug</title>
<link rel='shortcut icon' type='image/x-icon' href='/webfiles/favicon.ico' />
</head>
<body>
[ <a href="/">Home</a> ][ <a href="/debug/">List Keys</a> ][ <a href="/debug/config/">Config</a> ]<br/><br/>
<table bgcolor="silver" width="500px"><tr><td style="padding: 20px">
<form action="/debug/" method="get">
<label for="table">Table:</label><br/>
<select name="table" id="table">
<option value="watch">watch</option>
<option value="ressum">ressum</option>
<option value="eventcount">eventcount</option>
<option value="watchactivity">watchactivity</option>
</select><br><br>
<label for="keymatch">Key RegEx Filter:</label><br>
<input type="text" name="keymatch" id="keymatch"><br><br>
<label for="maxrows">Max Rows:</label><br>
<input type="text" name="maxrows" id="maxrows"><br><br>
<input type="submit">
</form>
</td></tr></table>
<br/>
<b>Key List</b>:<br/>
<ol>
{{range .}}
<li><a href='/debug/view?k={{.}}'>{{.}}</a>
{{end}}
</ol>
</body>
<script src="/webfiles/filter.js"></script>
<script>
setText("keymatch", "keymatch", ".*")
setText("maxrows", "maxrows", "1000")
setDropdown("table", "table", "watch")
</script>
</html>


@ -0,0 +1,24 @@
<!--
Copyright (c) 2019, salesforce.com, inc.
All rights reserved.
Licensed under the BSD 3-Clause license.
For full license text, see LICENSE.txt file in the repo root or
https://opensource.org/licenses/BSD-3-Clause
-->
<html>
<head>
<title>Sloop Debug</title>
<link rel='shortcut icon' type='image/x-icon' href='/webfiles/favicon.ico' />
</head>
<body>
[ <a href="/">Home</a> ][ <a href="/debug/">Back</a>]<br>
<b>Record View</b><br>
<table border="1">
<tr><td>Key</td><td>{{.Key}}</td></tr>
<tr><td>Payload</td><td><pre>{{.Payload}}</pre></td></tr>
{{if not (eq "" .ExtraName) }}
<tr><td>{{.ExtraName}}</td><td><pre>{{.ExtraValue}}</pre></td></tr>
{{end}}
</table>
</body></html>

Binary data
pkg/sloop/webfiles/favicon.ico Normal file

Binary file not shown (favicon, 15 KiB).


@ -0,0 +1,100 @@
/**
* Copyright (c) 2019, salesforce.com, inc.
* All rights reserved.
* Licensed under the BSD 3-Clause license.
* For full license text, see LICENSE.txt file in the repo root or
* https://opensource.org/licenses/BSD-3-Clause
*/
function getUrlVars() {
var vars = {};
var parts = window.location.href.replace(/[?&]+([^=&]+)=([^&]*)/gi, function(m,key,value) {
vars[key] = value;
});
return vars;
}
function getUrlParam(parameter, defaultvalue){
var urlparameter = defaultvalue;
if(window.location.href.indexOf(parameter) > -1){
urlparameter = getUrlVars()[parameter];
}
return urlparameter;
}
// Look up the url parameter "param" using "defaultValue" if not found
// then set the form option with id "elementId" to that value
function setDropdown(param, elementId, defaultValue, insertValueIfMissing) {
var value = getUrlParam(param, defaultValue);
var select = document.getElementById(elementId);
var found = false
for(var i = 0;i < select.options.length; i++){
if(select.options[i].value == value ) {
select.options[i].selected = true;
found = true
}
}
if (!found && insertValueIfMissing) {
select.append( new Option(value, value, false, true))
}
return value
}
// Look up the url parameter "param" using "defaultValue" if not found
// then set the form text input with id "elementId" to that value
function setText(param, elementId, defaultValue) {
var value = getUrlParam(param, defaultValue);
var inpt = document.getElementById(elementId);
inpt.value = value
return value
}
// Get a list of values from queryUrl (in the form of a json array)
// Insert those into drop down with id equal "elementId"
// And when you find a value matching url param "param" set it to selected
function populateDropdownFromQuery(param, elementId, defaultValue, queryUrl) {
var value = getUrlParam(param, defaultValue);
var element = document.getElementById(elementId);
// Start off with just an option for the value from the URL
element.append( new Option(value, value, false, true))
var namespaces = d3.json(queryUrl);
namespaces.then(function (result) {
element.remove(0)
var found = false
result.forEach(
function(row) {
var isSelected = (value == row)
element.append( new Option(row, row, false, isSelected) );
if (isSelected) {
found = true
}
});
if (!found) {
element.append( new Option(value, value, false, true))
}
})
return value
}
function setFiltersAndReturnQueryUrl(defaultLookback, defaultKind, defaultNamespace) {
// Keep this in sync with pkg/sloop/queries/params.go
// Some of these need to hit the backend which takes a little time
// Do the fast ones first
// TODO: Query the back-end async
// TODO: Also, we may consider initially populating the drop-down with the value from url params as a placeholder
// until we get the full list back
var lookback = setDropdown("lookback", "filterlookback", defaultLookback, true)
var sort = setDropdown("sort", "filtersort", "start_time", false)
var namematch = setText("namematch", "filternamematch", "")
var query = populateDropdownFromQuery("query", "filterquery", "EventHeatMap", "/data?query=Queries");
var ns = populateDropdownFromQuery("namespace", "filternamespace", defaultNamespace, "/data?query=Namespaces");
var kind = populateDropdownFromQuery("kind", "filterkind", defaultKind, "/data?query=Kinds");
var dataQuery = "/data?query="+query+"&namespace="+ns+"&lookback="+lookback+"&kind="+kind+"&sort="+sort+"&namematch="+namematch
return dataQuery
}


@ -0,0 +1,115 @@
<!--
Copyright (c) 2019, salesforce.com, inc.
All rights reserved.
Licensed under the BSD 3-Clause license.
For full license text, see LICENSE.txt file in the repo root or
https://opensource.org/licenses/BSD-3-Clause
-->
<!DOCTYPE html>
<!--suppress HtmlUnknownTarget, JSUnresolvedVariable -->
<style type="text/css">
body {
font-family: sans-serif;
font-size: 10px;
}
.svg-container {
border:2px solid #000;
height: 100%;
overflow: scroll;
background-color: whitesmoke;
}
svg {
z-index: -1;
}
</style>
<head>
<meta http-equiv="Content-type" content="text/html; charset=utf-8">
<title>Sloop{{if (ne .CurrentContext "")}} - {{.CurrentContext}}{{end}}</title>
<link rel="stylesheet" type="text/css" href="/webfiles/sloop.css">
<link rel='shortcut icon' type='image/x-icon' href='/webfiles/favicon.ico'/>
</head>
<body>
<div style="display: flex; width:100%; height:100%">
<div id="sloopleftnav" class="sloopleftnav" style="height: 100%; width: 250px; padding-right: 20px">
<span style="font-size:20px">Sloop v0.2</span><br>
<span style="font-size:12px">Kubernetes History Visualization</span><br><br>
{{if (ne .CurrentContext "")}}
<label for="currentContext">Kubernetes Context:</label><br/>
<input type="text" id="currentContext" value="{{.CurrentContext}}" disabled="true" style="width:100%"><br><br>
{{end}}
<form action="/" method="get">
<!-- Not showing the query drop down as we only have one query at the moment -->
<!-- <label for="filterquery">Query:</label><br /> -->
<select name="query" id="filterquery" hidden="true"> </select>
<!-- <br><br> -->
<label for="filterlookback">Time Range:</label><br/>
<select name="lookback" id="filterlookback" style="width:100%">
<option value="1h">1 Hour</option>
<option value="3h">3 Hours</option>
<option value="6h">6 Hours</option>
<option value="12h">12 Hours</option>
<option value="24h">1 Day</option>
<option value="168h">1 Week</option>
<option value="336h">2 Weeks</option>
</select><br><br>
<label for="filternamespace">Filter Namespace:</label><br/>
<select name="namespace" id="filternamespace" style="width:100%">
</select>
<br><br>
<label for="filterkind">Filter Kind: </label><br/>
<select id="filterkind" name="kind" style="width:100%">
</select><br><br>
<label for="filtersort">Sort:</label><br>
<select name="sort" id="filtersort" style="width:100%">
<option value="starttime">Start Time</option>
<option value="mostevents">Most Events</option>
<option value="name">Name</option>
</select><br><br>
<label for="filternamematch">Name Filter:</label><br>
<input type="text" name="namematch" id="filternamematch" style="width:100%"><br><br>
<input type="submit">
</form>
<br><br>
<h2>Links</h2>
<a href="/debug/">List Keys</a><br/>
<a href="/debug/config">View Config</a><br/>
<a href="" id="datafilelink">Query Data File</a><br/>
<a href="https://github.com/salesforce/sloop" target="_blank">Source Code on GitHub</a><br/>
{{range .LeftBarLinks}}
<a href="{{.Url}}" target="_blank">{{.Text}}</a><br/>
{{end}}
</div><div id="d3_here" class="svg-container" style='width: 100%; height:100%;'>
</div>
<script src="https://d3js.org/d3.v5.js"></script>
<script
src="https://code.jquery.com/jquery-3.4.1.min.js"
integrity="sha256-CSXorXvZcTkaix6Yvo6HppcZGetbYMGWSFlBw8HfCJo="
crossorigin="anonymous"></script>
<script
src="https://code.jquery.com/ui/1.12.1/jquery-ui.min.js"
integrity="sha256-VazP97ZCwtekAsvgPBSUwPFKdrwD3unUfSGVYrahUqU="
crossorigin="anonymous"></script>
<script src="https://unpkg.com/axios/dist/axios.min.js" async="" type="text/javascript"></script>
<script src="https://cdn.jsdelivr.net/npm/vue/dist/vue.js" async="" type="text/javascript"></script>
<script src="https://unpkg.com/nprogress@0.2.0/nprogress.js" async="" type="text/javascript"></script>
<script src="/webfiles/filter.js"></script>
<script>
// When the page reloads, this sets the correct drop-down selections from the passed-in query parameters
dataQueryUrl = setFiltersAndReturnQueryUrl("{{.DefaultLookback}}", "{{.DefaultKind}}", "{{.DefaultNamespace}}");
document.getElementById("datafilelink").href = dataQueryUrl;
</script>
<script src="/webfiles/sloop_ui.js"></script>
</div>
</body>
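The filter form above submits its values as query parameters, and `setFiltersAndReturnQueryUrl` (loaded from `/webfiles/filter.js`) reuses those parameters to build the link for the "Query Data File" anchor. A minimal sketch of the URL-building half of that helper, assuming a `/data` endpoint and parameter names taken from the form fields above (the real helper also updates the drop-down selections in the DOM):

```javascript
// Hypothetical sketch: merge the page's current query string with
// server-provided defaults into a data-query URL. The "/data" path is an
// assumption; the parameter names mirror the filter form above.
function buildDataQueryUrl(search, defaults) {
    const params = new URLSearchParams(search);
    const merged = new URLSearchParams();
    for (const key of ["query", "lookback", "namespace", "kind", "sort", "namematch"]) {
        // Prefer the value from the URL, fall back to the default, skip empties.
        const value = params.get(key) || defaults[key] || "";
        if (value !== "") merged.set(key, value);
    }
    return "/data?" + merged.toString();
}

// Example: lookback comes from the URL, the rest fall back to defaults.
const url = buildDataQueryUrl("?lookback=6h", { query: "EventHeatMap", kind: "Pod" });
// url === "/data?query=EventHeatMap&lookback=6h&kind=Pod"
```

Keeping this step pure (string in, string out) makes it easy to reuse for both the form state and the `datafilelink` href without touching the DOM twice.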


@ -0,0 +1,25 @@
html, body {
overflow: auto;
}
#resource_event_table {
font-family: Helvetica, sans-serif;
border-collapse: collapse;
width: 100%;
}
#resource_event_table td, #resource_event_table th {
border: 1px solid lightgrey;
padding: 4px;
}
#resource_event_table tr:nth-child(even){background-color: whitesmoke;}
#resource_event_table tr:hover {background-color: lightgrey;}
#resource_event_table th {
text-align: left;
background-color: midnightblue;
color: white;
cursor:pointer;
}


@ -0,0 +1,135 @@
<!--
Copyright (c) 2019, salesforce.com, inc.
All rights reserved.
Licensed under the BSD 3-Clause license.
For full license text, see LICENSE.txt file in the repo root or
https://opensource.org/licenses/BSD-3-Clause
-->
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<link rel='shortcut icon' type='image/x-icon' href='/webfiles/favicon.ico'/>
<link rel="stylesheet" type="text/css" href="/webfiles/sloop.css">
<link rel="stylesheet" type="text/css" href="/webfiles/resource.css">
<link href="https://unpkg.com/nprogress@0.2.0/nprogress.css" rel="stylesheet" />
<title>Resource {{.Kind}}/{{.Namespace}}/{{.Name}}</title>
<script src="https://unpkg.com/axios/dist/axios.min.js" type="text/javascript"></script>
<script src="https://cdn.jsdelivr.net/npm/vue/dist/vue.js" type="text/javascript"></script>
<script src="https://unpkg.com/nprogress@0.2.0/nprogress.js" type="text/javascript"></script>
</head>
<body>
<h2>Details</h2>
<b>Name</b>: {{.Name}}<br>
<b>Namespace</b>: {{.Namespace}}<br>
<b>Kind</b>: {{.Kind}}<br><br>
<a href="{{.SelfUrl}}" target="_blank">Open In New Tab</a><br>
{{if ne (len .Links) 0 }}
<h2>Links</h2>
{{range .Links}}<a href="{{.Url}}" target="_blank">{{.Text}}</a><br>{{end}}
{{end}}
<div id="resource_events">
<h2>Events</h2>
<table id="resource_event_table">
<tr>
<th>Payload Link</th>
<th @click="sort('message')">Message &#x21C5;</th>
<th @click="sort('reason')">Reason &#x21C5;</th>
<th @click="sort('source')">Source &#x21C5;</th>
<th @click="sort('count')">Count &#x21C5;</th>
<th @click="sort('firstSeen')">First seen &#x21C5;</th>
<th @click="sort('lastSeen')">Last seen &#x21C5;</th>
</tr>
<tr v-for="resEvent in sortedResEvents">
<td><a :href="resEvent.origValue | get_payload_url" target="_blank">Details</a></td>
<td>${ resEvent.message }</td>
<td>${ resEvent.reason }</td>
<td>${ resEvent.source }</td>
<td>${ resEvent.count }</td>
<td>${ resEvent.firstSeen | get_formatted_date }</td>
<td>${ resEvent.lastSeen | get_formatted_date }</td>
</tr>
</table>
</div>
<script>
new Vue({
el: '#resource_events',
delimiters: ['${', '}'],
data: {
resEvents: [],
currentSortBy: 'firstSeen',
currentSortDirection: 'asc'
},
filters: {
get_formatted_date(value) {
return value.split('T').join(' ');
},
get_payload_url(value) {
return "/debug/view/?k=" + value.eventKey;
}
},
mounted() {
NProgress.configure({
easing: 'ease',
minimum: 0.3,
parent: '#resource_events'
});
axios.interceptors.request.use(config => {
NProgress.start();
return config;
});
axios.interceptors.response.use(response => {
NProgress.done();
return response;
}, error => {
// Stop the progress bar on failed requests as well.
NProgress.done();
return Promise.reject(error);
});
axios
.get('{{.EventsUrl}}')
.then(response => {
if (response.data) {
this.resEvents = response.data.map(function (val) {
let parsedVal = JSON.parse(val.payload);
return {
message: parsedVal.message,
// Kubernetes events may omit source; guard before reading host.
source: parsedVal.source ? parsedVal.source.host : "",
reason: parsedVal.reason,
count: parsedVal.count,
firstSeen: parsedVal.firstTimestamp,
lastSeen: parsedVal.lastTimestamp,
origValue: val
};
});
} else {
console.log("No events found for period");
}
})
.catch(error => {
console.log("Failed to load events: " + error);
});
},
methods: {
sort: function (sortBy) {
if (sortBy === this.currentSortBy) {
this.currentSortDirection = (this.currentSortDirection === 'asc') ? 'desc' : 'asc';
}
this.currentSortBy = sortBy;
}
},
computed: {
sortedResEvents: function () {
// Copy with slice() first: sort() mutates in place, and a computed
// property should not modify component data.
return this.resEvents.slice().sort((a, b) => {
let direction = (this.currentSortDirection === 'asc') ? 1 : -1;
if (a[this.currentSortBy] < b[this.currentSortBy]) return -1 * direction;
if (a[this.currentSortBy] > b[this.currentSortBy]) return direction;
return 0;
});
}
}
});
</script>
</body>
</html>
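The `sort` method and `sortedResEvents` computed above implement a click-to-toggle column sort for the events table. The same comparator, extracted as a plain function for clarity (the name `sortEvents` is illustrative, not part of the template):

```javascript
// Standalone version of the table-sort logic: returns a new sorted array
// instead of mutating the input, which is what a Vue computed property needs.
function sortEvents(events, sortBy, direction) {
    const dir = direction === "asc" ? 1 : -1;
    return events.slice().sort((a, b) => {
        if (a[sortBy] < b[sortBy]) return -1 * dir;
        if (a[sortBy] > b[sortBy]) return dir;
        return 0;
    });
}

const events = [{ count: 2 }, { count: 5 }, { count: 1 }];
const byCountDesc = sortEvents(events, "count", "desc");
// byCountDesc holds counts 5, 2, 1; the original array order is unchanged.
```

Because ISO-8601 timestamps like the `firstSeen`/`lastSeen` strings compare correctly as plain strings, the same lexicographic comparator works for both the numeric and the timestamp columns.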

Some files were not shown because too many files have changed in this diff.