doc(readme): add Travis badge, fix latency measurements being too high (should have half RTT)

This commit is contained in:
Connor Peet 2017-10-01 13:00:56 -07:00
Parent 967a80a0ab
Commit e42a51798b
2 changed files with 9 additions and 9 deletions

View file

@@ -92,7 +92,7 @@ func BenchmarkLatency() {
atomic.StoreInt64(&stopped, 1)
wg.Wait()
-	final := float64(latency/int64(time.Microsecond)) / float64(samples)
+	final := float64(latency/int64(time.Microsecond)) / float64(samples) / 2
fmt.Printf("clients=%d, payload=%db, latency=%.1fus\n", clientsCount, len(payload), final)
}
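The diff above halves the averaged figure because each benchmark sample spans a full round trip (client to server and back), so the one-way latency is half the mean RTT. A minimal sketch of the corrected arithmetic, with a helper name of our own (not part of wsplice's code):

```go
package main

import "fmt"

// meanOneWayLatencyUs is a hypothetical helper illustrating the fix:
// the benchmark accumulates round-trip times in microseconds, so the
// mean one-way latency is the average RTT divided by two.
func meanOneWayLatencyUs(totalRTTMicros, samples int64) float64 {
	// Average the accumulated round-trip time across samples, then
	// halve it, since each sample covers client -> server -> client.
	return float64(totalRTTMicros) / float64(samples) / 2
}

func main() {
	// e.g. 3 echo samples totalling 60us of accumulated round-trip time
	fmt.Printf("latency=%.1fus\n", meanOneWayLatencyUs(60, 3)) // latency=10.0us
}
```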

View file

@@ -1,4 +1,4 @@
-# wsplice
+# wsplice [![Build Status](https://travis-ci.org/mixer/wsplice.svg?branch=master)](https://travis-ci.org/mixer/wsplice)
`wsplice` is a websocket multiplexer, allowing you to connect to multiple remote hosts though a single websocket.
@@ -13,7 +13,7 @@ Download a binary from the [Releases page](https://github.com/mixer/wsplice/rele
--tls-ca=my-ca.pem \
--listen=0.0.0.0:3000
-# Omit the CA cert to run it over TLS, and allow it to connect
+# Omit the CA cert to run it over TLS, and allow it to connect
# to example.com and ws.example.com
./wsplice --tls-cert=my-cert.pem \
--tls-key=my-key.pem \
@@ -22,7 +22,7 @@ Download a binary from the [Releases page](https://github.com/mixer/wsplice/rele
### Protocol
-Websocket frames are prefixed with two bytes, as a big endian uint16, to describe who that message goes to. The magic control index is `[0xff, 0xff]`, which is a simple JSON RPC tool. To connect to another server, you might do something like this in Node.js:
+Websocket frames are prefixed with two bytes, as a big endian uint16, to describe who that message goes to. The magic control index is `[0xff, 0xff]`, which is a simple JSON RPC protocol. To connect to another server, you might do something like this in Node.js:
```js
const payload = Buffer.concat([
@@ -80,9 +80,9 @@ Once the client disconnects, the wsplice will call `onSocketDisconnect`. For exa
| clients=32 | 231 mbps | 235 mbps | 2940 mbps | 11200 mbps |
| clients=128 | 327 mbps | 333 mbps | 2520 mbps | 14600 mbps |
| clients=512 | 46.3 mbps | 533 mbps | 3570 mbps | 11200 mbps |
-| Latency | | | | |
-| clients=32 | 11.2μs | 9.2μs | 10.7μs | 14.8μs |
-| clients=128 | 9.3μs | 10.4μs | 11.4μs | 11.3μs |
-| clients=512 | 7.3μs | 7.9μs | 10.3μs | 14.2μs |
+| **Latency** | | | | |
+| clients=32 | 5.6μs | 4.6μs | 5.4μs | 7.4μs |
+| clients=128 | 4.7μs | 5.2μs | 5.7μs | 5.7μs |
+| clients=512 | 3.7μs | 4.0μs | 5.2μs | 7.1μs |
-These measurements were taken on an B8 Azure VM using the binary in `./cmd/bench`.
+These measurements were taken on a B8 Azure VM running Ubuntu 16.04, using the binary in `./cmd/bench`.
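The framing described in the protocol section (a two-byte big-endian uint16 connection index, with `[0xff, 0xff]` reserved for the control channel) can also be sketched in Go. The `frame` helper and the JSON RPC body below are illustrative assumptions, not wsplice's actual API:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// controlIndex is the magic [0xff, 0xff] control index from the protocol.
const controlIndex uint16 = 0xffff

// frame prefixes a payload with its target connection index as a
// big-endian uint16. The helper name is ours, not part of wsplice.
func frame(index uint16, payload []byte) []byte {
	out := make([]byte, 2+len(payload))
	binary.BigEndian.PutUint16(out, index)
	copy(out[2:], payload)
	return out
}

func main() {
	// A hypothetical control-channel message; the exact RPC body is
	// an assumption for illustration.
	msg := frame(controlIndex, []byte(`{"id":0,"method":"connect"}`))
	fmt.Printf("prefix: % x\n", msg[:2]) // prefix: ff ff
}
```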