Turning latency/bandwidth into simpler (run+analyze) commands #21

Merged · 3 commits · Nov 13, 2024

Changes from 1 commit
44 changes: 35 additions & 9 deletions README.md
@@ -33,8 +33,7 @@ WARNING: do not expose `--address` to the internet
<b>NOTE: if the `--address` is not the same as the external IP address used for communication between servers, you need to set `--real-ip`; otherwise the server will report internal IPs in the stats and run the test against itself, causing invalid results.</b>
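
For example, a server listening on an internal address but reachable on a different external IP could be started like this (a sketch only; the addresses and storage path are placeholders mirroring the server examples later in this README):

```bash
$ ./hperf server --address 10.10.2.10:5000 --real-ip 150.150.20.2 --storage-path /tmp/hperf/
```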

### The listen command
Hperf can run tests without a specific `client` needing to be constantly connected. Once the `client` has started a test, the `client` can easily exit without interrupting the test.

Any `client` can hook into a running test at runtime using the `--id` of the test. There can even be multiple `clients`
listening to the same test.
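
For instance, a second `client` could attach to a test that was started with `--id http-test-1` along these lines (a sketch only; the exact flags accepted by `listen` are assumed from the flags used elsewhere in this README):

```bash
$ ./hperf listen --hosts file:./hosts --port [PORT] --id http-test-1
```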
@@ -129,11 +128,19 @@ The analysis will show:
- Memory (Mem high/low/used):
- CPU (CPU high/low/used):

## Example: 20 second HTTP payload transfer test using multiple sockets
This test will use 12 concurrent workers to send HTTP requests with a payload and no delay between requests.
It is much like a bandwidth test, but it also exercises server behaviour when multiple sockets are being created and closed:
## Live feed statistics
Live stats are calculated as high/low for all data points seen up until the current time.

## Example: Basic latency testing
This will run a 20 second latency test, then analyze and print the results when done:
```
$ ./hperf latency --hosts file:./hosts --port [PORT] --duration 20 --print-all
```

## Example: Basic bandwidth testing
This will run a 20 second bandwidth test and print the results when done:
```
$ ./hperf requests --hosts file:./hosts --id http-test-1 --duration 20 --concurrency 12
$ ./hperf bandwidth --hosts file:./hosts --port [PORT] --duration 20 --concurrency 10 --print-all
```

## Example: 20 second HTTP payload transfer test using a stream
@@ -149,14 +156,33 @@ $ ./hperf latency --hosts file:./hosts --id http-test-2 --duration 360 --concurr
--bufferSize 1000 --payloadSize 1000
```

# Full test scenario with analysis and csv export
## On the server
# Full test scenario (requests, download, analysis, and CSV export)
## On the servers
```bash
$ ./hperf server --address 10.10.2.10:5000 --real-ip 150.150.20.2 --storage-path /tmp/hperf/
$ ./hperf server --address 0.0.0.0:6000 --real-ip 10.10.10.2 --storage-path /tmp/hperf/
```

## The client

### Run test
```bash
./hperf requests --hosts 10.10.10.{2...3} --port 6000 --duration 10 --request-delay 100 --concurrency 1 --buffer-size 1000 --payload-size 1000 --restart-on-error --id latency-test-1
```

### Download test
```bash
./hperf download --hosts 10.10.10.{2...3} --port 6000 --id latency-test-1 --file latency-test-1
```

### Analyze test
```bash
./hperf analyze --file latency-test-1 --print-stats --print-errors
```

### Export CSV
```bash
./hperf csv --file latency-test-1
```
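
Taken together, the full scenario can be scripted as one sequence. The sketch below simply chains the commands shown above; the host pattern, port, and test id are the same placeholders used in this section:

```bash
#!/bin/sh
# Run the request test, download the results from the servers,
# analyze them locally, and export a CSV -- all for the same test id.
ID=latency-test-1

./hperf requests --hosts '10.10.10.{2...3}' --port 6000 --duration 10 \
  --request-delay 100 --concurrency 1 --buffer-size 1000 --payload-size 1000 \
  --restart-on-error --id "$ID"

./hperf download --hosts '10.10.10.{2...3}' --port 6000 --id "$ID" --file "$ID"

./hperf analyze --file "$ID" --print-stats --print-errors

./hperf csv --file "$ID"
```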



68 changes: 64 additions & 4 deletions client/client.go
@@ -406,7 +406,7 @@ func RunTest(ctx context.Context, c shared.Config) (err error) {
}

for i := range responseERR {
fmt.Println(responseERR[i])
PrintErrorString(responseERR[i].Error)
}

if printCount%10 == 1 {
@@ -553,6 +553,66 @@ func DownloadTest(ctx context.Context, c shared.Config) (err error) {
return nil
}

func AnalyzeBandwidthTest(ctx context.Context, c shared.Config) (err error) {
_, cancel := context.WithCancel(ctx)
defer cancel()

if c.PrintAll {
shared.INFO(" Printing all data points ..")
fmt.Println("")

printSliceOfDataPoints(responseDPS, c)

if len(responseERR) > 0 {
fmt.Println(" ____ ERRORS ____")
}
for i := range responseERR {
PrintTError(responseERR[i])
}
if len(responseERR) > 0 {
fmt.Println("")
}
}

if len(responseDPS) == 0 {
fmt.Println("No datapoints found")
return
}

return nil
}

func AnalyzeLatencyTest(ctx context.Context, c shared.Config) (err error) {
_, cancel := context.WithCancel(ctx)
defer cancel()

if c.PrintAll {
shared.INFO(" Printing all data points ..")

printSliceOfDataPoints(responseDPS, c)

if len(responseERR) > 0 {
fmt.Println(" ____ ERRORS ____")
}
for i := range responseERR {
PrintTError(responseERR[i])
}
if len(responseERR) > 0 {
fmt.Println("")
}
}
if len(responseDPS) == 0 {
fmt.Println("No datapoints found")
return
}

shared.INFO(" Analyzing data ..")
fmt.Println("")
analyzeLatencyTest(responseDPS, c)

return nil
}

func AnalyzeTest(ctx context.Context, c shared.Config) (err error) {
_, cancel := context.WithCancel(ctx)
defer cancel()
@@ -592,7 +652,7 @@ func AnalyzeTest(ctx context.Context, c shared.Config) (err error) {
dps = shared.HostFilter(c.HostFilter, dps)
}

if c.PrintFull {
if c.PrintStats {
printSliceOfDataPoints(dps, c)
}

@@ -614,9 +674,9 @@ func AnalyzeTest(ctx context.Context, c shared.Config) (err error) {
}

switch dps[0].Type {
case shared.LatencyTest:
case shared.RequestTest:
analyzeLatencyTest(dps, c)
case shared.BandwidthTest:
case shared.StreamTest:
fmt.Println("")
fmt.Println("Detailed analysis for bandwidth testing is in development")
}
16 changes: 8 additions & 8 deletions client/table.go
@@ -191,9 +191,9 @@ func PrintColumns(style lipgloss.Style, columns ...column) {

func printDataPointHeaders(t shared.TestType) {
switch t {
case shared.BandwidthTest:
case shared.StreamTest:
printHeader(BandwidthHeaders)
case shared.LatencyTest:
case shared.RequestTest:
printHeader(LatencyHeaders)
default:
printHeader(FullDataPointHeaders)
@@ -202,17 +202,17 @@ func printDataPointHeaders(t shared.TestType) {

func printRealTimeHeaders(t shared.TestType) {
switch t {
case shared.BandwidthTest:
case shared.StreamTest:
printHeader(RealTimeBandwidthHeaders)
case shared.LatencyTest:
case shared.RequestTest:
printHeader(RealTimeLatencyHeaders)
default:
}
}

func printRealTimeRow(style lipgloss.Style, entry *shared.TestOutput, t shared.TestType) {
switch t {
case shared.BandwidthTest:
case shared.StreamTest:
PrintColumns(
style,
column{formatInt(int64(entry.ErrCount)), headerSlice[ErrCount].width},
@@ -227,7 +227,7 @@ func printRealTimeRow(style lipgloss.Style, entry *shared.TestOutput, t shared.T
column{formatInt(int64(entry.CL)), headerSlice[CPULow].width},
)
return
case shared.LatencyTest:
case shared.RequestTest:
PrintColumns(
style,
column{formatInt(int64(entry.ErrCount)), headerSlice[ErrCount].width},
@@ -252,7 +252,7 @@ func printRealTimeRow(style lipgloss.Style, entry *shared.TestOutput, t shared.T

func printTableRow(style lipgloss.Style, entry *shared.DP, t shared.TestType) {
switch t {
case shared.BandwidthTest:
case shared.StreamTest:
PrintColumns(
style,
column{entry.Created.Format("15:04:05"), headerSlice[Created].width},
@@ -265,7 +265,7 @@ func printTableRow(style lipgloss.Style, entry *shared.DP, t shared.TestType) {
column{formatInt(int64(entry.CPUUsedPercent)), headerSlice[CPUUsage].width},
)
return
case shared.LatencyTest:
case shared.RequestTest:
PrintColumns(
style,
column{entry.Created.Format("15:04:05"), headerSlice[Created].width},
51 changes: 34 additions & 17 deletions cmd/hperf/bandwidth.go
@@ -18,50 +18,49 @@
package main

import (
"fmt"

"github.com/minio/cli"
"github.com/minio/hperf/client"
"github.com/minio/hperf/shared"
)

var bandwidthCMD = cli.Command{
Name: "bandwidth",
Usage: "start a test to measure bandwidth, open --concurrency number of sockets, write data upto --duration",
Usage: "Start a test to measure bandwidth",
Action: runBandwidth,
Flags: []cli.Flag{
hostsFlag,
portFlag,
concurrencyFlag,
durationFlag,
saveTestFlag,
testIDFlag,
bufferSizeFlag,
payloadSizeFlag,
restartOnErrorFlag,
concurrencyFlag,
dnsServerFlag,
saveTestFlag,
microSecondsFlag,
printAllFlag,
},
CustomHelpTemplate: `NAME:
{{.HelpName}} - {{.Usage}}

USAGE:
{{.HelpName}} [FLAGS]

NOTE:
Matching concurrency with your thread count can often lead to
improved performance, it is even better to run concurrency at
50% of the GOMAXPROCS.
NOTES:
When testing for bandwidth it is recommended to start with concurrency 10 and increase the count as needed. Normally 10 is enough to saturate a 100Gb NIC.

FLAGS:
{{range .VisibleFlags}}{{.}}
{{end}}
EXAMPLES:
1. Run a basic test:
{{.Prompt}} {{.HelpName}} --hosts 10.10.10.1,10.10.10.2
1. Run a basic test which prints all data points when finished:
{{.Prompt}} {{.HelpName}} --hosts 10.10.10.1,10.10.10.2 --print-all

2. Run a test with custom concurrency:
{{.Prompt}} {{.HelpName}} --hosts 10.10.10.1,10.10.10.2 --concurrency 24
{{.Prompt}} {{.HelpName}} --hosts 10.10.10.1,10.10.10.2 --concurrency 10

3. Run a test with custom buffer sizes, for MTU specific testing:
{{.Prompt}} {{.HelpName}} --hosts 10.10.10.1,10.10.10.2 --bufferSize 9000 --payloadSize 9000
3. Run a 30 second bandwidth test:
{{.Prompt}} {{.HelpName}} --hosts 10.10.10.1,10.10.10.2 --duration 30 --id bandwidth-30
`,
}

@@ -70,6 +69,24 @@ func runBandwidth(ctx *cli.Context) error {
if err != nil {
return err
}
config.TestType = shared.BandwidthTest
return client.RunTest(GlobalContext, *config)

config.TestType = shared.StreamTest
config.BufferSize = 32000
config.PayloadSize = 32000
config.RequestDelay = 0
config.RestartOnError = true

fmt.Println("")
shared.INFO(" Test ID:", config.TestID)
fmt.Println("")

err = client.RunTest(GlobalContext, *config)
if err != nil {
return err
}

fmt.Println("")
shared.INFO(" Testing finished..")

return client.AnalyzeBandwidthTest(GlobalContext, *config)
}
41 changes: 28 additions & 13 deletions cmd/hperf/latency.go
@@ -18,28 +18,26 @@
package main

import (
"fmt"

"github.com/minio/cli"
"github.com/minio/hperf/client"
"github.com/minio/hperf/shared"
)

var latencyCMD = cli.Command{
var latency = cli.Command{
Name: "latency",
Usage: "start a test to measure latency at the application level",
Usage: "Start a latency test and analyze the results",
Action: runLatency,
Flags: []cli.Flag{
hostsFlag,
portFlag,
concurrencyFlag,
delayFlag,
durationFlag,
bufferSizeFlag,
payloadSizeFlag,
restartOnErrorFlag,
testIDFlag,
saveTestFlag,
dnsServerFlag,
microSecondsFlag,
printAllFlag,
},
CustomHelpTemplate: `NAME:
{{.HelpName}} - {{.Usage}}
@@ -51,11 +49,11 @@ FLAGS:
{{range .VisibleFlags}}{{.}}
{{end}}
EXAMPLES:
1. Run a basic test:
{{.Prompt}} {{.HelpName}} --hosts 10.10.10.1,10.10.10.2
1. Run a 30 second latency test and show all data points after the test finishes:
{{.Prompt}} {{.HelpName}} --duration 30 --hosts 10.10.10.1,10.10.10.2 --print-all

2. Run a slow moving test to probe latency:
{{.Prompt}} {{.HelpName}} --hosts 10.10.10.1,10.10.10.2 --request-delay 100 --concurrency 1
2. Run a 60 second latency test with a custom id:
{{.Prompt}} {{.HelpName}} --duration 60 --hosts 10.10.10.1,10.10.10.2 --id latency-60
`,
}

@@ -64,6 +62,23 @@ func runLatency(ctx *cli.Context) error {
if err != nil {
return err
}
config.TestType = shared.LatencyTest
return client.RunTest(GlobalContext, *config)
config.TestType = shared.RequestTest
config.BufferSize = 1000
config.PayloadSize = 1000
config.Concurrency = 1
config.RequestDelay = 200
config.RestartOnError = true

fmt.Println("")
shared.INFO(" Test ID:", config.TestID)
fmt.Println("")

err = client.RunTest(GlobalContext, *config)
if err != nil {
return err
}
fmt.Println("")
shared.INFO(" Testing finished ..")

return client.AnalyzeLatencyTest(GlobalContext, *config)
}