# Results

## Test environment

NGINX Plus: false

GKE Cluster:

- Node count: 3
- k8s version: v1.28.9-gke.1000000
- vCPUs per node: 2
- RAM per node: 4019180Ki
- Max pods per node: 110
- Zone: us-central1-c
- Instance Type: e2-medium
- NGF pod name: ngf-longevity-nginx-gateway-fabric-59576c5749-dgrwg

## Traffic

HTTP:

```text
Running 5760m test @ http://cafe.example.com/coffee
  2 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   186.26ms  145.81ms   2.00s    78.32%
    Req/Sec   299.23    199.98     1.80k    66.54%
  202451765 requests in 5760.00m, 70.37GB read
  Socket errors: connect 0, read 338005, write 0, timeout 4600
Requests/sec:    585.80
Transfer/sec:    213.51KB
```

HTTPS:

```text
Running 5760m test @ https://cafe.example.com/tea
  2 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   177.05ms  122.73ms   1.98s    67.91%
    Req/Sec   298.12    199.66     1.83k    66.52%
  201665338 requests in 5760.00m, 69.02GB read
  Socket errors: connect 0, read 332742, write 0, timeout 40
Requests/sec:    583.52
Transfer/sec:    209.42KB
```

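As a quick sanity check, the summary figures above are internally consistent: dividing each run's total request count by the test duration (5760 minutes) reproduces the reported Requests/sec, and it puts the socket read errors in perspective. A small sketch, using only the values copied from the output above:

```python
# Cross-check the wrk summaries: requests/sec derived from the totals
# should match the reported figures.

DURATION_S = 5760 * 60  # 5760-minute (96-hour) run, in seconds

http_total = 202_451_765   # HTTP run, total requests
https_total = 201_665_338  # HTTPS run, total requests

http_rps = http_total / DURATION_S
https_rps = https_total / DURATION_S
print(f"HTTP:  {http_rps:.2f} req/s")   # 585.80, as reported
print(f"HTTPS: {https_rps:.2f} req/s")  # 583.52, as reported

# Socket read errors are a tiny fraction of total traffic.
http_read_err_pct = 338_005 / http_total * 100
https_read_err_pct = 332_742 / https_total * 100
print(f"read errors: {http_read_err_pct:.3f}% (HTTP), "
      f"{https_read_err_pct:.3f}% (HTTPS)")
```

Both read-error rates work out to under 0.2% of requests over the four-day run.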
### Logs

No error logs in nginx-gateway.

No error logs in nginx.

### Key Metrics

#### Containers memory

Drop in NGINX memory usage corresponds to the end of traffic generation.

#### NGF Container Memory

#### Containers CPU

Drop in NGINX CPU usage corresponds to the end of traffic generation.

#### NGINX metrics

Drop in requests corresponds to the end of traffic generation.

#### Reloads

Rate of reloads - successful and errors:

Reload spikes correspond to 1-hour periods of backend re-rollouts.

No reloads finished with an error.

Reload time distribution - counts:

Reload-related metrics at the end:

All successful reloads took less than 5 seconds, with most under 1 second.

## Comparison with previous runs

Graphs look similar to the 1.2.0 results.
Since https://github.com/nginxinc/nginx-gateway-fabric/issues/1112 was fixed, we no longer see the corresponding reload spikes.
Memory usage is flat, but ~1 MB higher than in 1.2.0.