Update README with basic usage

This commit is contained in:
Jordan Olshevski 2022-07-26 11:38:38 -05:00
Parent 4e66aad044
Commit 07cfeaf691
2 changed files: 32 additions and 16 deletions

View file

@@ -1,7 +1,5 @@
# Meta Etcd
> Project status: proof of concept - do not use unless you want an adventure!
A proxy that enables sharding Kubernetes apiserver's keyspace across multiple etcd clusters.
## Why?
@@ -22,10 +20,6 @@ Beyond scaling bottlenecks, partitioning is still useful - particularly for syst
- Multi-key range queries fan out to all clusters
- Leases are only partially supported
## Repartitioning
Currently the proxy does not support repartitioning, although it is implemented such that it is possible in the future. The long term goal is to support dynamically adding/removing member clusters at runtime with little to no impact.
## Architecture
### Clocking
@@ -38,6 +32,28 @@ Since at least one member cluster always has the latest timestamp, the coordinat
The proxy watches the entire keyspace of every member cluster, buffers n messages, and replays them to clients. It's possible that messages will be received out of order, since network latency may vary between member clusters. In this case, it will buffer the out of order message until a timeout window is exceeded or the previous message has been received.
### Repartitioning
Currently the proxy does not support repartitioning, although it is implemented such that it is possible in the future. The long term goal is to support dynamically adding/removing member clusters at runtime with little to no impact.
## Basic Usage
Required flags:
- `--ca-cert` certificate used to verify the identity of etcd clusters (and proxy clients)
- `--client-cert` certificate presented to etcd clusters
- `--client-cert-key` key of `--client-cert`
- `--coordinator` URL of the coordinator cluster
- `--members` comma-separated list of member cluster URLs
By default, the meta cluster's proxy is served on localhost:2379; the listen address and server certificate can be configured with flags.
Important metrics:
- `metaetcd_request_count`: incremented for each request (by method)
- `metaetcd_time_buffer_timeouts_count`: incremented when a watch event is considered to be lost
- `metaetcd_clock_reconstitution`: incremented when the coordinator's state is lost and the clock is reconstituted from member clusters
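Putting the required flags together, an invocation might look like the following. The binary name, certificate paths, and cluster URLs are placeholders; only the flag names come from the proxy itself.

```shell
# Hypothetical invocation - adjust paths and URLs for your environment.
metaetcd \
  --ca-cert /etc/metaetcd/ca.crt \
  --client-cert /etc/metaetcd/client.crt \
  --client-cert-key /etc/metaetcd/client.key \
  --coordinator https://coordinator:2379 \
  --members https://member-1:2379,https://member-2:2379
```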
## Contributing

main.go (20 changes)
View file

@@ -44,16 +44,16 @@ func main() {
grpcSvrKeepaliveTimeout time.Duration
grpcContext membership.GrpcContext
)
flag.StringVar(&listenAddr, "listen-addr", ":2379", "")
flag.StringVar(&coordinator, "coordinator", "", "")
flag.StringVar(&membersStr, "members", "", "")
flag.StringVar(&clientCertPath, "client-cert", "", "")
flag.StringVar(&clientCertKeyPath, "client-cert-key", "", "")
flag.StringVar(&serverCertPath, "server-cert", "", "")
flag.StringVar(&serverCertKeyPath, "server-cert-key", "", "")
flag.StringVar(&caPath, "ca-cert", "", "")
flag.DurationVar(&watchTimeout, "watch-timeout", time.Second*10, "")
flag.IntVar(&watchBufferLen, "watch-buffer-len", 3000, "")
flag.StringVar(&listenAddr, "listen-addr", "127.0.0.1:2379", "address to serve the etcd proxy server on")
flag.StringVar(&coordinator, "coordinator", "", "URL of the coordinator cluster")
flag.StringVar(&membersStr, "members", "", "comma-separated list of member clusters")
flag.StringVar(&clientCertPath, "client-cert", "", "cert used when connecting to the coordinator and member clusters")
flag.StringVar(&clientCertKeyPath, "client-cert-key", "", "key of --client-cert")
flag.StringVar(&serverCertPath, "server-cert", "", "cert presented to etcd proxy clients (optional)")
flag.StringVar(&serverCertKeyPath, "server-cert-key", "", "key of --server-cert (optional)")
flag.StringVar(&caPath, "ca-cert", "", "cert used to verify incoming and outgoing identities")
flag.DurationVar(&watchTimeout, "watch-timeout", time.Second*10, "how long to wait before a watch message is considered missing")
flag.IntVar(&watchBufferLen, "watch-buffer-len", 1000, "how many watch events to buffer")
logLevel := zap.LevelFlag("v", zap.WarnLevel, "log level")
flag.IntVar(&pprofPort, "pprof-port", 0, "port to serve pprof on. disabled if 0")
flag.IntVar(&metricsPort, "metrics-port", 9090, "port to serve Prometheus metrics on. disabled if 0")