diff --git a/.golangci.yml b/.golangci.yml
index f39b461..6a9c2ca 100644
--- a/.golangci.yml
+++ b/.golangci.yml
@@ -36,6 +36,7 @@ linters:
- forbidigo
- interfacer
- maligned
+ - errorlint
issues:
exclude-use-default: false
diff --git a/README.md b/README.md
index 54e1334..16aae2c 100644
--- a/README.md
+++ b/README.md
@@ -14,13 +14,15 @@ Scale curl requests, cousin of `ab`, `siege`, [`hey`](https://github.com/rakyll/
## Install
```
-go get -u github.com/vearutop/plt
+go install github.com/vearutop/plt@latest
+$(go env GOPATH)/bin/plt --help
```
-Or
+Or (with `go1.15` or older)
```
-GO111MODULE=on go get github.com/vearutop/plt@latest
+go get -u github.com/vearutop/plt
+$(go env GOPATH)/bin/plt --help
```
Or download binary from [releases](https://github.com/vearutop/plt/releases).
@@ -61,122 +63,103 @@ In `--live-ui` mode you can control concurrency and rate limits with arrow keys.
## Example
```bash
-plt --live-ui --duration=1m --rate-limit=60 curl -X GET "https://acme-dummy-service.staging-k8s.acme.io/" -H "accept: application/json"
+plt --live-ui --duration=20s --rate-limit=60 curl -X GET "https://demo.phperf.ml/profile"
```
```
-Requests per second: 60.79
-Total requests: 3650
-Time spent: 1m0.041s
+plt --live-ui --duration=20s --rate-limit=60 curl -X GET "https://demo.phperf.ml/profile"
+Host resolved: 130.61.99.204
+
+Requests per second: 59.96
+Total requests: 1201
+Time spent: 20.029s
Request latency percentiles:
-99%: 101.00ms
-95%: 45.89ms
-90%: 42.58ms
-50%: 41.44ms
+99%: 101.73ms
+95%: 64.88ms
+90%: 44.57ms
+50%: 28.09ms
Request latency distribution in ms:
-[ min max] cnt total% (3650 events)
-[ 39.62 39.62] 1 0.03%
-[ 39.67 39.67] 1 0.03%
-[ 39.76 39.91] 12 0.33%
-[ 39.92 40.46] 217 5.95% .....
-[ 40.46 41.71] 2206 60.44% ............................................................
-[ 41.71 47.29] 1058 28.99% ............................
-[ 47.36 55.04] 74 2.03% ..
-[ 55.28 74.25] 18 0.49%
-[ 74.40 161.25] 62 1.70% .
-[187.05 187.05] 1 0.03%
+[ min max] cnt total% (1201 events)
+[ 19.03 19.03] 1 0.08%
+[ 22.52 23.76] 25 2.08% ..
+[ 23.78 25.26] 111 9.24% .........
+[ 25.26 26.58] 200 16.65% ................
+[ 26.58 29.61] 449 37.39% .....................................
+[ 29.61 39.94] 287 23.90% .......................
+[ 40.54 40.54] 1 0.08%
+[ 41.00 62.45] 60 5.00% ....
+[ 62.77 110.91] 57 4.75% ....
+[126.64 260.53] 10 0.83%
Requests with latency more than 1s: 0
-Envoy upstream latency percentiles:
-99%: 12ms
-95%: 5ms
-90%: 3ms
-50%: 2ms
-
-Envoy upstream latency distribution in ms:
-[ min max] cnt total% (3650 events)
-[ 1.00 1.00] 474 12.99% ............
-[ 2.00 2.00] 2157 59.10% ...........................................................
-[ 3.00 4.00] 809 22.16% ......................
-[ 5.00 6.00] 69 1.89% .
-[ 7.00 10.00] 86 2.36% ..
-[ 11.00 14.00] 35 0.96%
-[ 15.00 23.00] 9 0.25%
-[ 28.00 40.00] 5 0.14%
-[ 49.00 81.00] 3 0.08%
-[ 98.00 148.00] 3 0.08%
+Bytes read 2.3MB / 23.5MB/s
+Bytes written 105.4KB / 57.6KB/s
DNS latency distribution in ms:
-[ min max] cnt total% (50 events)
-[ 9.88 9.88] 1 2.00% ..
-[10.13 10.15] 3 6.00% ......
-[10.18 10.22] 4 8.00% ........
-[10.25 10.31] 5 10.00% ..........
-[10.32 10.36] 4 8.00% ........
-[10.37 10.41] 12 24.00% ........................
-[10.42 10.48] 7 14.00% ..............
-[10.49 10.56] 6 12.00% ............
-[10.59 10.71] 7 14.00% ..............
-[10.76 10.76] 1 2.00% ..
+[ min max] cnt total% (16 events)
+[0.44 0.44] 1 6.25% ......
+[0.49 0.49] 1 6.25% ......
+[0.50 0.50] 1 6.25% ......
+[0.56 0.56] 1 6.25% ......
+[0.61 0.61] 2 12.50% ............
+[0.65 0.70] 3 18.75% ..................
+[0.77 0.77] 1 6.25% ......
+[0.88 0.88] 1 6.25% ......
+[1.00 1.03] 3 18.75% ..................
+[1.28 1.42] 2 12.50% ............
+
+TLS handshake latency distribution in ms:
+[ min max] cnt total% (16 events)
+[ 64.95 64.95] 1 6.25% ......
+[ 65.14 65.14] 1 6.25% ......
+[ 66.98 66.98] 1 6.25% ......
+[ 70.94 71.81] 3 18.75% ..................
+[ 74.10 76.13] 2 12.50% ............
+[ 80.20 82.91] 2 12.50% ............
+[ 90.86 90.86] 1 6.25% ......
+[127.14 127.72] 2 12.50% ............
+[157.99 161.84] 2 12.50% ............
+[179.98 179.98] 1 6.25% ......
+
+Time to first resp byte (TTFB) distribution in ms:
+[ min max] cnt total% (1201 events)
+[ 18.97 18.97] 1 0.08%
+[ 22.49 23.77] 27 2.25% ..
+[ 23.78 24.96] 81 6.74% ......
+[ 24.96 26.56] 236 19.65% ...................
+[ 26.57 29.52] 441 36.72% ....................................
+[ 29.54 29.54] 1 0.08%
+[ 29.56 40.90] 288 23.98% .......................
+[ 41.16 62.68] 60 5.00% ....
+[ 63.02 110.87] 56 4.66% ....
+[126.59 260.27] 10 0.83%
Connection latency distribution in ms:
-[ min max] cnt total% (50 events)
-[36.36 36.36] 1 2.00% ..
-[36.38 36.39] 6 12.00% ............
-[36.39 36.41] 8 16.00% ................
-[36.42 36.43] 6 12.00% ............
-[36.44 36.48] 5 10.00% ..........
-[36.50 36.54] 2 4.00% ....
-[36.58 36.66] 2 4.00% ....
-[36.70 36.86] 12 24.00% ........................
-[36.86 37.09] 4 8.00% ........
-[37.11 37.48] 4 8.00% ........
+[ min max] cnt total% (16 events)
+[28.04 28.04] 1 6.25% ......
+[28.13 28.13] 1 6.25% ......
+[29.68 29.68] 1 6.25% ......
+[30.01 30.41] 2 12.50% ............
+[31.04 31.12] 2 12.50% ............
+[32.22 32.44] 2 12.50% ............
+[33.37 34.07] 3 18.75% ..................
+[36.04 36.04] 1 6.25% ......
+[40.09 40.09] 1 6.25% ......
+[43.50 45.20] 2 12.50% ............
Responses by status code
-[200] 3650
+[200] 1201
-Bytes read 1667316
-Bytes written 643177
+Response samples (first by status code):
[200]
-Welcome to acme-dummy-service. Please read API documentation.
-```
-
-### Fast mode
-
-```
-plt --concurrency 100 --number 200000 curl --fast -X GET "http://localhost:8011/v0/tasks/1" -H "accept: application/json"
-Host resolved: 127.0.0.1
-Requests per second: 145232.05
-Total requests: 200000
-Request latency distribution in ms:
-[ min max] cnt total% (200000 events)
-[ 0.06 0.06] 1 0.00%
-[ 0.06 0.06] 7 0.00%
-[ 0.06 0.08] 716 0.36%
-[ 0.08 0.10] 5050 2.52% ..
-[ 0.10 0.14] 19661 9.83% .........
-[ 0.14 0.48] 109190 54.59% ......................................................
-[ 0.48 1.07] 46129 23.06% .......................
-[ 1.07 2.91] 9715 4.86% ....
-[ 2.91 16.01] 9516 4.76% ....
-[16.11 23.96] 15 0.01%
+Connection: keep-alive
+Content-Length: 1733
+Content-Type: application/json; charset=utf-8
+Date: Thu, 08 Apr 2021 15:25:54 GMT
+Server: nginx/1.14.0 (Ubuntu)
-Request latency percentiles:
-99%: 7.212116ms
-95%: 2.914069ms
-90%: 1.063058ms
-50%: 0.336996ms
-
-Requests with latency more than 1s: 0
-
-Responses by status code
-[200] 200000
-
-Bytes read 83600000
-Bytes written 18600000
-[200]
-{"id":1,"goal":"enjoy!","deadline":"2020-05-24T21:00:54.998Z","createdAt":"2020-05-24T23:00:56.656017059+02:00"}
+{"recent":[{"addr":{"id":"606f1f90dcdc3"},"edges":3715,"wt":"190.55ms","cpu":"179.03ms","io":"11.52ms","peakMem":"3.52M"},{"addr":{"id":"606f1faf1c5c3"},"edges":3715,"wt":"270.14ms","cpu":"185.62ms","io":"84.52ms","peakMem":"3.52M"},{"addr":{"id":"606f1fc261e9d"},"edges":575,"wt":"3.4s","cpu":"3.39s","io":"9.71ms","peakMem":"28.03M"},{"addr":{"id":"606f1fcd6f694"},"edges":3715,"wt":"153.58ms","cpu":"143.68ms","io":"9.9ms","peakMem":"3.52M"},{"addr":{"id":"606f1feba911d"},"edges":3715,"wt":"202.18ms","cpu":"191.82ms","io":"10.36ms","peakMem":"3.52M"},{"addr":{"id":"606f20052f7c1"},"edges":471,"wt":"679.08ms","cpu":"669.01ms","io":"10.07ms","peakMem":"6.18M"},{"addr":{"id":"606f2009de4cd"},"edges":3715,"wt":"175.34ms","cpu":"163.14ms","io":"12.2ms","peakMem":"3.52M"},{"addr":{"id":"606f2028185dd"},"edges":3715,"wt":"677.03ms","cpu":"320.84ms","io":"356.19ms","peakMem":"3.52M"},{"addr":{"id":"606f2046c16a2"},"edges":3715,"wt":"313.28ms","cpu":"292.38ms","io":"20.9ms","peakMem":"3.52M"},{"...
```
diff --git a/fasthttp/job.go b/fasthttp/job.go
index 12f89a6..76ee9b5 100644
--- a/fasthttp/job.go
+++ b/fasthttp/job.go
@@ -14,6 +14,7 @@ import (
"github.com/valyala/fasthttp"
"github.com/vearutop/plt/nethttp"
+ "github.com/vearutop/plt/report"
)
// JobProducer sends HTTP requests.
@@ -148,8 +149,8 @@ func (j *JobProducer) Print() {
j.mu.Unlock()
fmt.Println(codes)
- fmt.Println("Bytes read", atomic.LoadInt64(&j.bytesRead))
- fmt.Println("Bytes written", atomic.LoadInt64(&j.bytesWritten))
+ fmt.Println("Bytes read", report.ByteSize(atomic.LoadInt64(&j.bytesRead)))
+ fmt.Println("Bytes written", report.ByteSize(atomic.LoadInt64(&j.bytesWritten)))
fmt.Println(resps)
}
@@ -188,13 +189,10 @@ func (j *JobProducer) Job(_ int) (time.Duration, error) {
if j.respCode[resp.StatusCode()] == 1 {
body := resp.Body()
- switch {
- case len(resp.Header.Peek("Content-Encoding")) > 0:
+ if len(resp.Header.Peek("Content-Encoding")) > 0 {
j.respBody[resp.StatusCode()] = []byte("<" + string(resp.Header.Peek("Content-Encoding")) + "-encoded-content>")
- case len(body) > 1000:
- j.respBody[resp.StatusCode()] = append(body[0:1000], '.', '.', '.')
- default:
- j.respBody[resp.StatusCode()] = body
+ } else {
+ j.respBody[resp.StatusCode()] = report.PeekBody(body, 1000)
}
}
j.mu.Unlock()
diff --git a/nethttp/job.go b/nethttp/job.go
index 2edd73a..c3ac88f 100644
--- a/nethttp/job.go
+++ b/nethttp/job.go
@@ -21,6 +21,7 @@ import (
"github.com/vearutop/dynhist-go"
"github.com/vearutop/plt/loadgen"
+ "github.com/vearutop/plt/report"
)
// JobProducer sends HTTP requests.
@@ -35,16 +36,19 @@ type JobProducer struct {
upstreamHist *dynhist.Collector
upstreamHistPrecise *dynhist.Collector
- mu sync.Mutex
- respCode map[int]int
- respBody map[int][]byte
-
+ respCode [600]int64
bytesWritten int64
+ writeTime int64
bytesRead int64
+ readTime int64
f Flags
tr *http.Transport
+
+ mu sync.Mutex
+ respBody map[int][]byte
+ respHeader map[int]http.Header
}
// RequestCounts returns distribution by status code.
@@ -52,9 +56,9 @@ func (j *JobProducer) RequestCounts() map[string]int {
j.mu.Lock()
defer j.mu.Unlock()
- res := make(map[string]int, len(j.respCode))
- for code, cnt := range j.respCode {
- res[strconv.Itoa(code)] = cnt
+ res := make(map[string]int, len(j.respBody))
+ for code := range j.respBody {
+ res[strconv.Itoa(code)] = int(atomic.LoadInt64(&j.respCode[code]))
}
return res
@@ -162,8 +166,8 @@ func NewJobProducer(f Flags, lf loadgen.Flags) *JobProducer {
j.ttfbHist = &dynhist.Collector{BucketsLimit: 10, WeightFunc: dynhist.LatencyWidth}
j.upstreamHist = &dynhist.Collector{BucketsLimit: 10, WeightFunc: dynhist.LatencyWidth}
j.upstreamHistPrecise = &dynhist.Collector{BucketsLimit: 100, WeightFunc: dynhist.LatencyWidth}
- j.respCode = make(map[int]int, 5)
j.respBody = make(map[int][]byte, 5)
+ j.respHeader = make(map[int]http.Header, 5)
j.f = f
if _, ok := f.HeaderMap["User-Agent"]; !ok {
@@ -177,6 +181,18 @@ func NewJobProducer(f Flags, lf loadgen.Flags) *JobProducer {
func (j *JobProducer) Print() {
fmt.Println()
+ bytesRead := atomic.LoadInt64(&j.bytesRead)
+ readTime := atomic.LoadInt64(&j.readTime)
+ dlSpeed := float64(bytesRead) / time.Duration(readTime).Seconds()
+
+ bytesWritten := atomic.LoadInt64(&j.bytesWritten)
+ writeTime := atomic.LoadInt64(&j.writeTime)
+ ulSpeed := float64(bytesWritten) / time.Duration(writeTime).Seconds()
+
+ fmt.Println("Bytes read", report.ByteSize(bytesRead), "/", report.ByteSize(int64(dlSpeed))+"/s")
+ fmt.Println("Bytes written", report.ByteSize(bytesWritten), "/", report.ByteSize(int64(ulSpeed))+"/s")
+ fmt.Println()
+
if j.upstreamHist.Count > 0 {
fmt.Println("Envoy upstream latency percentiles:")
fmt.Printf("99%%: %.0fms\n", j.upstreamHistPrecise.Percentile(99))
@@ -211,24 +227,30 @@ func (j *JobProducer) Print() {
codes := ""
resps := ""
- for code, cnt := range j.respCode {
- codes += fmt.Sprintf("[%d] %d\n", code, cnt)
- resps += fmt.Sprintf("[%d]\n%s\n", code, string(j.respBody[code]))
+ for code := range j.respBody {
+ codes += fmt.Sprintf("[%d] %d\n", code, atomic.LoadInt64(&j.respCode[code]))
+ h := bytes.NewBuffer(nil)
+
+ err := j.respHeader[code].Write(h)
+ if err != nil {
+ fmt.Println("Failed to render headers:", err)
+ }
+
+ resps += fmt.Sprintf("[%d]\n%s\n%s\n", code, h.String(), string(j.respBody[code]))
}
j.mu.Unlock()
fmt.Println(codes)
- fmt.Println("Bytes read", atomic.LoadInt64(&j.bytesRead))
- fmt.Println("Bytes written", atomic.LoadInt64(&j.bytesWritten))
-
+ fmt.Println("Response samples (first by status code):")
fmt.Println(resps)
}
+// SampleSize is the maximum number of bytes to sample from a response.
+const SampleSize = 1000
+
// Job runs single item of load.
func (j *JobProducer) Job(_ int) (time.Duration, error) {
- start := time.Now()
-
- var dnsStart, connStart, tlsStart time.Time
+ var start, dnsStart, connStart, tlsStart, dlStart time.Time
var body io.Reader
if j.f.Body != "" {
@@ -265,7 +287,13 @@ func (j *JobProducer) Job(_ int) (time.Duration, error) {
TLSHandshakeDone: func(tls.ConnectionState, error) {
j.tlsHist.Add(1000 * time.Since(tlsStart).Seconds())
},
+
+ WroteRequest: func(_ httptrace.WroteRequestInfo) {
+ atomic.AddInt64(&j.writeTime, int64(time.Since(start)))
+ },
+
GotFirstResponseByte: func() {
+ dlStart = time.Now()
j.ttfbHist.Add(1000 * time.Since(start).Seconds())
},
}
@@ -277,16 +305,13 @@ func (j *JobProducer) Job(_ int) (time.Duration, error) {
tr = j.makeTransport()
}
+ start = time.Now()
+
resp, err := tr.RoundTrip(req)
if err != nil {
return 0, err
}
- si := time.Since(start)
-
- j.mu.Lock()
- j.respCode[resp.StatusCode]++
-
if envoyUpstreamMS := resp.Header.Get("X-Envoy-Upstream-Service-Time"); envoyUpstreamMS != "" {
ms, err := strconv.Atoi(envoyUpstreamMS)
if err == nil {
@@ -295,34 +320,47 @@ func (j *JobProducer) Job(_ int) (time.Duration, error) {
}
}
- if j.respCode[resp.StatusCode] == 1 {
- body, err := ioutil.ReadAll(resp.Body)
- if err != nil {
- return si, err
+ cnt := atomic.AddInt64(&j.respCode[resp.StatusCode], 1)
+
+ if cnt == 1 {
+ j.mu.Lock()
+
+ // Read a few bytes of the response to save as a sample.
+ body := make([]byte, SampleSize+1)
+
+ n, err := io.ReadAtLeast(resp.Body, body, SampleSize+1)
+ if err != nil && err != io.EOF && err != io.ErrUnexpectedEOF {
+ return 0, err
}
- switch {
- case resp.Header.Get("Content-Encoding") != "":
+ body = body[0:n]
+
+ if resp.Header.Get("Content-Encoding") != "" {
j.respBody[resp.StatusCode] = []byte("<" + resp.Header.Get("Content-Encoding") + "-encoded-content>")
- case len(body) > 1000:
- j.respBody[resp.StatusCode] = append(body[0:1000], '.', '.', '.')
- default:
- j.respBody[resp.StatusCode] = body
+ } else {
+ j.respBody[resp.StatusCode] = report.PeekBody(body, SampleSize)
}
- }
- j.mu.Unlock()
+ j.respHeader[resp.StatusCode] = resp.Header
+
+ j.mu.Unlock()
+ }
_, err = io.Copy(ioutil.Discard, resp.Body)
if err != nil {
- return si, err
+ return 0, err
}
err = resp.Body.Close()
if err != nil {
- return si, err
+ return 0, err
}
+ done := time.Now()
+
+ atomic.AddInt64(&j.readTime, int64(done.Sub(dlStart)))
+ si := done.Sub(start)
+
return si, nil
}
diff --git a/report/print.go b/report/print.go
new file mode 100644
index 0000000..c852ec4
--- /dev/null
+++ b/report/print.go
@@ -0,0 +1,83 @@
+// Package report provides reporting utilities.
+package report
+
+import (
+ "strconv"
+ "strings"
+ "unicode"
+)
+
+// PeekBody returns the head of data, truncated to l bytes, for printing.
+func PeekBody(body []byte, l int) []byte {
+ tooLong := false
+ if len(body) > l {
+ tooLong = true
+ body = body[0:l]
+ }
+
+ if !IsASCIIPrintable(string(body)) {
+ return []byte("")
+ }
+
+ if tooLong {
+ return append(body, '.', '.', '.')
+ }
+
+ return body
+}
+
+// IsASCIIPrintable checks if s contains only ASCII characters.
+func IsASCIIPrintable(s string) bool {
+ for _, r := range s {
+ if r > unicode.MaxASCII {
+ return false
+ }
+ }
+
+ return true
+}
+
+// Byte size multipliers.
+const (
+ BYTE = 1 << (10 * iota)
+ KILOBYTE
+ MEGABYTE
+ GIGABYTE
+ TERABYTE
+ PETABYTE
+ EXABYTE
+)
+
+// ByteSize returns a human-readable byte string of the form 10MB, 12.5KB, and so forth.
+func ByteSize(bytes int64) string {
+ unit := ""
+ value := float64(bytes)
+
+ switch {
+ case bytes >= EXABYTE:
+ unit = "EB"
+ value /= EXABYTE
+ case bytes >= PETABYTE:
+ unit = "PB"
+ value /= PETABYTE
+ case bytes >= TERABYTE:
+ unit = "TB"
+ value /= TERABYTE
+ case bytes >= GIGABYTE:
+ unit = "GB"
+ value /= GIGABYTE
+ case bytes >= MEGABYTE:
+ unit = "MB"
+ value /= MEGABYTE
+ case bytes >= KILOBYTE:
+ unit = "KB"
+ value /= KILOBYTE
+ default:
+ unit = "B"
+ }
+
+ result := strconv.FormatFloat(value, 'f', 1, 64)
+ result = strings.TrimSuffix(result, ".0")
+
+ return result + unit
+}