## Verifying Traffic Redistribution When a Threshold Is Exceeded

You can configure NS1 to redistribute traffic away from a given NGINX Plus instance when a load metric for the instance exceeds one or more thresholds that you set. The thresholds are set in an NS1 shed filter, so called because NS1 describes the shifting of traffic to a different IP address as "shedding load" from the current IP address. Here we verify that NS1 redistributes traffic correctly when the number of active connections on an instance exceeds the threshold we set.

### Creating the Shed Filter

First we perform these steps to create the shed filter:

1. Navigate to the details page of the A record for nginxgslb.cf under the **ZONES** tab, if it is not already open.

2. Click the **Edit Filter Chain** button.

   *Screenshot of NS1 GUI: clicking Edit Filter Chain button*

3. In the Add Filters window that opens, click the plus sign (+) on the box labeled **Shed Load** in the **HEALTHCHECKS** section.

   *Screenshot of NS1 GUI: clicking Shed Load button on Add Filters page*

4. The Shed Load filter is added as the fourth (lowest) box in the Active Filters section. Move it to third position by clicking and dragging it above the **Select First N** box. Then click the **Save Filter Chain** button.

5. Back on the A record's details page, in the Filter Chain column click the **Shed Load** box, which expands to display an explanation of how the filter works. Click the label on the white box at the bottom of the explanation and select **Active connections** from the drop-down menu.

   *Screenshot of NS1 GUI: selecting Active connections for shed filter*

6. In the Ungrouped Answers section, click the stacked-dots icon at the right end of the field for the US-based NGINX Plus instance (10.10.10.1) and select **Edit Answer Metadata**.

   *Screenshot of NS1 GUI: clicking Edit Answer Metadata button for shed filter*

7. In the Answer Metadata window that opens, set values for the following metadata.
   In each case, click the icon in the **FEED** column of the metadata's row, then select or enter the indicated value in the **AVAILABLE** column. (For testing purposes, we're setting very small values for the watermarks so that the threshold is exceeded very quickly.)

   - Active connections – us-nginxgslb-datafeed
   - High watermark – 5
   - Low watermark – 2

8. After setting all three, click the **Ok** button. (The screenshot shows the window just before this action.)

   *Screenshot of NS1 GUI: Answer Metadata page for shed filter*

### Testing the Threshold

With the shed filter in place, we're ready to verify that NS1 shifts traffic to the next-nearest NGINX Plus instance when the number of active connections on the nearest instance exceeds the high watermark (upper threshold) of 5. As noted in Step 7 just above, we've set a very small value so we can quickly see the effect when it's exceeded. With the low watermark set to 2, NS1 starts shifting traffic probabilistically when there are three active connections, and shifts it unconditionally when there are five or more.

We have written a script that continuously simulates more than four simultaneous connections. We have also configured the backend app to perform a sleep, so that the connections stay open long enough for the agent to report the number of active connections to NS1 before they close.

We run the following commands on a host located in the US.

1. Query the NGINX Plus API for the number of active connections:

   ```shell
   $ curl -X GET "127.0.0.1:8000/api/<version>/connections" -H "accept: application/json" | python -m json.tool | grep active
       "active": 1,
   ```

2. Query the NS1 API to learn the number of active connections the NS1 agent has reported to NS1. (For details about this API call, see the NS1 documentation. If the page doesn't scroll automatically to the relevant section, search for "Get data feed details".)
   On the command line:

   - `<API key>` and `<source ID>` are the same values we included in the YAML file in Step 4 of *Installing the NS1 Agent* and used in Step 2 of *Verifying Traffic Redistribution When an Upstream Group Is Down*.
   - `<feed ID>` is the ID assigned by NS1 to the us-nginxgslb-datafeed data feed. It was reported in the id field of the output in Step 2 of *Verifying Traffic Redistribution When an Upstream Group Is Down*. (It also appears in that field in the following output.)

   The relevant field in the output is connections in the data section; in this example it indicates there is one active connection.

   ```shell
   $ curl -X GET -H 'X-NSONE-Key: <API key>' https://api.nsone.net/v1/data/feeds/<source ID>/<feed ID> | python -m json.tool
   {
       "config": {
           "label": "us-nginxgslb-datafeed"
       },
       "data": {
           "connections": 1,
           "up": true
       },
       "destinations": [
           {
               "destid": "<destination ID>",
               "desttype": "answer",
               "record": "<record ID>"
           }
       ],
       "id": "<feed ID>",
       "name": "us-nginxgslb-datafeed",
       "networks": [
           0
       ]
   }
   ```

3. Determine which site NS1 is returning for hosts in the US. Appropriately, it's 10.10.10.1, the IP address of the US-based NGINX Plus instance.

   ```shell
   $ nslookup nginxgslb.cf
   Server:         10.10.100.102
   Address:        10.10.100.102#53

   Non-authoritative answer:
   Name:    nginxgslb.cf
   Address: 10.10.10.1
   ```

4. Create five or more connections to the NGINX Plus instance. We do this by running the script mentioned in the introduction to this section.

5. Repeat Step 1. The NGINX Plus API now reports five active connections.

   ```shell
   $ curl -X GET "127.0.0.1:8000/api/<version>/connections" -H "accept: application/json" | python -m json.tool | grep active
       "active": 5,
   ```

6. Repeat Step 2. The NS1 API also reports five active connections.

   ```shell
   $ curl -X GET -H 'X-NSONE-Key: <API key>' https://api.nsone.net/v1/data/feeds/<source ID>/<feed ID> | python -m json.tool
   {
       "config": {
           "label": "us-nginxgslb-datafeed"
       },
       "data": {
           "connections": 5,
           "up": true
       },
       ...
   }
   ```

7. Wait an hour – because we didn't change the default time-to-live (TTL) of 3600 seconds on the A record for nginxgslb.cf – and repeat Step 3.
   NS1 now returns 10.10.10.2, the IP address of the NGINX Plus instance in Germany, which is the nearest instance now that the instance in the US has too many active connections.

   ```shell
   $ nslookup nginxgslb.cf
   Server:         10.10.100.102
   Address:        10.10.100.102#53

   Non-authoritative answer:
   Name:    nginxgslb.cf
   Address: 10.10.10.2
   ```
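The watermark behavior we just verified can be summarized as a simple decision rule: at or below the low watermark (2) an answer is never shed, at or above the high watermark (5) it is always shed, and in between traffic is shifted probabilistically. NS1's exact interpolation between the watermarks is not documented in this post, so the linear ramp in this short Python sketch is an illustrative assumption, not NS1's actual formula:

```python
# Sketch of the shed-filter decision described above. At or below the
# low watermark the answer is never shed; at or above the high
# watermark it is always shed; in between, the chance of shedding
# rises with the load. The linear ramp is an assumption for
# illustration -- NS1 does not publish its exact interpolation here.

def shed_probability(active_connections, low=2, high=5):
    """Return the probability (0.0-1.0) that this answer is shed."""
    if active_connections <= low:
        return 0.0          # under light load, always serve this answer
    if active_connections >= high:
        return 1.0          # over the high watermark, always shed
    # between the watermarks, shed with linearly increasing probability
    return (active_connections - low) / (high - low)

if __name__ == "__main__":
    for conns in range(7):
        print(conns, round(shed_probability(conns), 2))
```

With our settings, two connections never trigger shedding, three connections shed roughly a third of queries, and five or more shed all of them, which matches the nslookup results above.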