From 9398fa2433de21c542058df4935f22a5a35a1a80 Mon Sep 17 00:00:00 2001 From: Mike Jang <3287976+mjang@users.noreply.github.com> Date: Thu, 29 May 2025 10:55:56 -0700 Subject: [PATCH 01/24] Fix inaccurate statement (#626) --- .../how-to/staged-configs/import-export-staged-config.md | 2 -- 1 file changed, 2 deletions(-) diff --git a/content/nginx-one/how-to/staged-configs/import-export-staged-config.md b/content/nginx-one/how-to/staged-configs/import-export-staged-config.md index 192fad4c4..cbb77d2d5 100644 --- a/content/nginx-one/how-to/staged-configs/import-export-staged-config.md +++ b/content/nginx-one/how-to/staged-configs/import-export-staged-config.md @@ -36,8 +36,6 @@ When you work with such archives, consider the following: - Do _not_ unpack archives directly to your NGINX configuration directories. You do not want to accidentally overwrite existing configuration files. - The files are set to a default file permission mode of 0644. - Do not include files with secrets or personally identifying information. -- We ignore hidden files. - - If you import or export such files in archives, NGINX One Console does not include those files. - The size of the archive is limited to 5 MB. The size of all uncompressed files in the archive is limited to 10 MB. {{< tip >}} From 0be97114d93be0f44acff8711f31bf0b6448dd94 Mon Sep 17 00:00:00 2001 From: Travis Martin <33876974+travisamartin@users.noreply.github.com> Date: Thu, 29 May 2025 16:31:59 -0700 Subject: [PATCH 02/24] applied nb shortcode for non-breaking spaces and hyphens (#627) --- ...stalling-nginx-plus-amazon-web-services.md | 4 +- .../monitoring/new-relic-plugin.md | 4 +- .../high-availability-keepalived.md | 18 +- ...high-availability-network-load-balancer.md | 4 +- ...-controller-elastic-kubernetes-services.md | 4 +- .../route-53-global-server-load-balancing.md | 82 ++-- .../ns1-global-server-load-balancing.md | 20 +- .../high-availability-all-active.md | 376 +++++++++--------- .../load-balance-third-party/apache-tomcat.md | 16 +- .../microsoft-exchange.md | 20 +- .../load-balance-third-party/node-js.md | 14 +- .../oracle-e-business-suite.md | 8 +- .../oracle-weblogic-server.md | 16 +- .../load-balance-third-party/wildfly.md | 8 +- ...igh-availability-standard-load-balancer.md | 54 +-- .../virtual-machines-for-nginx.md | 44 +- .../citrix-adc-configuration.md | 2 +- .../f5-big-ip-configuration.md | 2 +- .../nginx-plus-high-availability-chef.md | 24 +- .../setting-up-nginx-demo-environment.md | 2 +- .../active-directory-federation-services.md | 12 +- .../single-sign-on/oidc-njs/cognito.md | 34 +- .../single-sign-on/oidc-njs/keycloak.md | 14 +- .../single-sign-on/oidc-njs/onelogin.md | 10 +- .../single-sign-on/oidc-njs/ping-identity.md | 26 +- go.mod | 2 +- go.sum | 2 + 27 files changed, 412 insertions(+), 410 deletions(-) diff --git a/content/nginx/admin-guide/installing-nginx/installing-nginx-plus-amazon-web-services.md b/content/nginx/admin-guide/installing-nginx/installing-nginx-plus-amazon-web-services.md index 88cd20853..66a814d97 100644 --- a/content/nginx/admin-guide/installing-nginx/installing-nginx-plus-amazon-web-services.md +++ b/content/nginx/admin-guide/installing-nginx/installing-nginx-plus-amazon-web-services.md @@ -30,8 +30,8 @@ To quickly set up an NGINX Plus environment on AWS: Click the **Continue to Subscribe** button to proceed to the **Launch on EC2** page. -3. Select the type of launch by clicking the appropriate tab (**1‑Click Launch***, **Manual Launch**, or **Service Catalog**). 
Choose the desired options for billing, instance size, and so on, and click the **Accept Software Terms…** button. -4. When configuring the firewall rules, add a rule to accept web traffic on TCP ports 80 and 443 (this happens automatically if you launch from the **1‑Click Launch** tab). +3. Select the type of launch by clicking the appropriate tab ({{}}**1-Click Launch**{{}}, {{}}**Manual Launch**{{}}, or {{}}**Service Catalog**{{}}). Choose the desired options for billing, instance size, and so on, and click the {{}}**Accept Software Terms…**{{}} button. +4. When configuring the firewall rules, add a rule to accept web traffic on TCP ports 80 and 443 (this happens automatically if you launch from the {{}}**1-Click Launch**{{}} tab). 5. As soon as the new EC2 instance launches, NGINX Plus starts automatically and serves a default **index.html** page. To view the page, use a web browser to access the public DNS name of the new instance. You can also check the status of the NGINX Plus server by logging into the EC2 instance and running this command: ```nginx diff --git a/content/nginx/admin-guide/monitoring/new-relic-plugin.md b/content/nginx/admin-guide/monitoring/new-relic-plugin.md index b56831edb..e22e19f44 100644 --- a/content/nginx/admin-guide/monitoring/new-relic-plugin.md +++ b/content/nginx/admin-guide/monitoring/new-relic-plugin.md @@ -33,7 +33,7 @@ Download the [plug‑in and installation instructions](https://docs.newrelic.com ## Configuring the Plug‑In -The configuration file for the NGINX plug‑in is **/etc/nginx‑nr‑agent/nginx‑nr‑agent.ini**. The minimal configuration includes: +The configuration file for the NGINX plug‑in is {{}}**/etc/nginx-nr-agent/nginx-nr-agent.ini**{{}}. The minimal configuration includes: - Your New Relic license key in the `newrelic_license_key` statement in the `global` section. @@ -44,7 +44,7 @@ The configuration file for the NGINX plug‑in is **/etc/nginx‑nr‑ag You can include the optional `http_user` and `http_pass` statements to set HTTP basic authentication credentials in cases where the corresponding location is protected by the NGINX [auth_basic](https://nginx.org/en/docs/http/ngx_http_auth_basic_module.html#auth_basic) directive. -The default log file is **/var/log/nginx‑nr‑agent.log**. +The default log file is {{}}**/var/log/nginx-nr-agent.log**{{}}. ## Running the Plug‑In diff --git a/content/nginx/deployment-guides/amazon-web-services/high-availability-keepalived.md b/content/nginx/deployment-guides/amazon-web-services/high-availability-keepalived.md index a3023a49b..8df51d9ae 100644 --- a/content/nginx/deployment-guides/amazon-web-services/high-availability-keepalived.md +++ b/content/nginx/deployment-guides/amazon-web-services/high-availability-keepalived.md @@ -96,8 +96,8 @@ Allocate an Elastic IP address and remember its ID. For detailed instructions, s The NGINX Plus HA solution uses two scripts, which are invoked by `keepalived`: -- **nginx‑ha‑check** – Determines the health of NGINX Plus. -- **nginx‑ha‑notify** – Moves the Elastic IP address when a state transition happens, for example when the backup instance becomes the primary. +- {{}}**nginx-ha-check**{{}} – Determines the health of NGINX Plus. +- {{}}**nginx-ha-notify**{{}} – Moves the Elastic IP address when a state transition happens, for example when the backup instance becomes the primary. 1. Create a directory for the scripts, if it doesn’t already exist. 
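For the New Relic plug-in configuration covered earlier in this patch, the status location that the agent polls is commonly protected with HTTP basic authentication, which is when the optional `http_user` and `http_pass` statements come into play. The snippet below is a minimal, hypothetical sketch of such a location; the port, path, and credentials file are assumptions and must match whatever status URL and credentials you set in {{}}**nginx-nr-agent.ini**{{}}.

```nginx
# Hypothetical status endpoint for the monitoring agent; names and paths are assumptions.
server {
    listen 127.0.0.1:8080;

    location = /nginx_status {
        stub_status;                              # basic NGINX status counters
        auth_basic           "nginx status";      # pair with http_user/http_pass in nginx-nr-agent.ini
        auth_basic_user_file /etc/nginx/.htpasswd;
    }
}
```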
@@ -121,7 +121,7 @@ The NGINX Plus HA solution uses two scripts, which are invoked by `keepalived`: There are two configuration files for the HA solution: - **keepalived.conf** – The main configuration file for `keepalived`, slightly different for each NGINX Plus instance. -- **nginx‑ha‑notify** – The script you downloaded in [Step 4](#ha-aws_ha-scripts), with several user‑defined variables. +- {{}}**nginx-ha-notify**{{}} – The script you downloaded in [Step 4](#ha-aws_ha-scripts), with several user‑defined variables. ### Creating keepalived.conf @@ -158,8 +158,8 @@ You must change values for the following configuration keywords. As you do so, a - `script` in the `chk_nginx_service` block – The script that sends health checks to NGINX Plus. - - On Ubuntu systems, **/usr/lib/keepalived/nginx‑ha‑check** - - On CentOS systems, **/usr/libexec/keepalived/nginx‑ha‑check** + - On Ubuntu systems, {{}}**/usr/lib/keepalived/nginx-ha-check**{{}} + - On CentOS systems, {{}}**/usr/libexec/keepalived/nginx-ha-check**{{}} - `priority` – The value that controls which instance becomes primary, with a higher value meaning a higher priority. Use `101` for the primary instance and `100` for the backup. @@ -171,13 +171,13 @@ You must change values for the following configuration keywords. As you do so, a - `notify` – The script that is invoked during a state transition. - - On Ubuntu systems, **/usr/lib/keepalived/nginx‑ha‑notify** - - On CentOS systems, **/usr/libexec/keepalived/nginx‑ha‑notify** + - On Ubuntu systems, {{}}**/usr/lib/keepalived/nginx-ha-notify**{{}} + - On CentOS systems, {{}}**/usr/libexec/keepalived/nginx-ha-notify**{{}} ### Creating nginx-ha-notify -Modify the user‑defined variables section of the **nginx‑ha‑notify** script, replacing each `` placeholder with the value specified in the list below: +Modify the user‑defined variables section of the {{}}**nginx-ha-notify**{{}} script, replacing each `` placeholder with the value specified in the list below: ```none export AWS_ACCESS_KEY_ID= @@ -223,7 +223,7 @@ Check the state on the backup instance, confirming that it has transitioned to ` ## Troubleshooting -If the solution doesn’t work as expected, check the `keepalived` logs, which are written to **/var/log/syslog**. Also, you can manually run the commands that invoke the `awscli` utility in the **nginx‑ha‑notify** script to check that the utility is working properly. +If the solution doesn’t work as expected, check the `keepalived` logs, which are written to **/var/log/syslog**. Also, you can manually run the commands that invoke the `awscli` utility in the {{}}**nginx-ha-notify**{{}} script to check that the utility is working properly. ## Caveats diff --git a/content/nginx/deployment-guides/amazon-web-services/high-availability-network-load-balancer.md b/content/nginx/deployment-guides/amazon-web-services/high-availability-network-load-balancer.md index 6c7a451a9..c37434906 100644 --- a/content/nginx/deployment-guides/amazon-web-services/high-availability-network-load-balancer.md +++ b/content/nginx/deployment-guides/amazon-web-services/high-availability-network-load-balancer.md @@ -291,7 +291,7 @@ Configure NGINX Plus instances as load balancers. These distribute requests to Use the *Step‑by‑step* instructions in our deployment guide, [Setting Up an NGINX Demo Environment]({{< ref "/nginx/deployment-guides/setting-up-nginx-demo-environment.md" >}}). -Repeat the instructions on both **ngx‑plus‑1** and **ngx‑plus‑2**. 
+Repeat the instructions on both {{}}**ngx-plus-1**{{}} and {{}}**ngx-plus-2**{{}}. ### Automate instance setup with Packer and Terraform @@ -317,7 +317,7 @@ To run the scripts, follow these instructions: 3. Set your AWS credentials in the Packer and Terraform scripts: - - For Packer, set your credentials in the `variables` block in both **packer/ngx‑oss/packer.json** and **packer/ngx‑plus/packer.json**: + - For Packer, set your credentials in the `variables` block in both {{}}**packer/ngx-oss/packer.json**{{}} and {{}}**packer/ngx-plus/packer.json**{{}}: ```none "variables": { diff --git a/content/nginx/deployment-guides/amazon-web-services/ingress-controller-elastic-kubernetes-services.md b/content/nginx/deployment-guides/amazon-web-services/ingress-controller-elastic-kubernetes-services.md index e1c9811b3..09f4df0a0 100644 --- a/content/nginx/deployment-guides/amazon-web-services/ingress-controller-elastic-kubernetes-services.md +++ b/content/nginx/deployment-guides/amazon-web-services/ingress-controller-elastic-kubernetes-services.md @@ -43,14 +43,14 @@ This guide covers the `eksctl` command as it is the simplest option. 1. Follow the instructions in the [eksctl.io documentation](https://eksctl.io/installation/) to install or update the `eksctl` command. -2. Create an Amazon EKS cluster by following the instructions in the [AWS documentation](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html). Select the **Managed nodes – Linux** option for each step. Note that the `eksctl create cluster` command in the first step can take ten minutes or more. +2. Create an Amazon EKS cluster by following the instructions in the [AWS documentation](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html). Select the {{}}**Managed nodes – Linux**{{}} option for each step. Note that the `eksctl create cluster` command in the first step can take ten minutes or more. ## Push the NGINX Plus Ingress Controller Image to AWS ECR This step is only required if you do not plan to use the prebuilt NGINX Open Source image. -1. Use the [AWS documentation](https://docs.aws.amazon.com/AmazonECR/latest/userguide/repository-create.html) to create a repository in the Amazon Elastic Container Registry (ECR). In Step 4 of the AWS instructions, name the repository **nginx‑plus‑ic** as that is what we use in this guide. +1. Use the [AWS documentation](https://docs.aws.amazon.com/AmazonECR/latest/userguide/repository-create.html) to create a repository in the Amazon Elastic Container Registry (ECR). In Step 4 of the AWS instructions, name the repository {{}}**nginx-plus-ic**{{}} as that is what we use in this guide. 2. Run the following AWS CLI command. It generates an auth token for your AWS ECR registry, then pipes it into the `docker login` command. This lets AWS ECR authenticate and authorize the upcoming Docker requests. For details about the command, see the [AWS documentation](https://docs.aws.amazon.com/AmazonECR/latest/userguide/registry_auth.html). 
diff --git a/content/nginx/deployment-guides/amazon-web-services/route-53-global-server-load-balancing.md b/content/nginx/deployment-guides/amazon-web-services/route-53-global-server-load-balancing.md index 32038fedb..3cf5c6530 100644 --- a/content/nginx/deployment-guides/amazon-web-services/route-53-global-server-load-balancing.md +++ b/content/nginx/deployment-guides/amazon-web-services/route-53-global-server-load-balancing.md @@ -40,7 +40,7 @@ The setup for global server load balancing (GSLB) in this guide combines Amazon Diagram showing a topology for global server load balancing (GSLB). Eight backend servers, four in each of two regions, host the content for a domain. Two NGINX Plus load balancers in each region route traffic to the backend servers. For each client requesting DNS information for the domain, Amazon Route 53 provides the DNS record for the region closest to the client. -Route 53 is a Domain Name System (DNS) service that performs global server load balancing by routing each request to the AWS region closest to the requester's location. This guide uses two regions: **US West (Oregon)** and **US East (N. Virginia)**. +Route 53 is a Domain Name System (DNS) service that performs global server load balancing by routing each request to the AWS region closest to the requester's location. This guide uses two regions: {{}}**US West (Oregon)**{{}} and {{}}**US East (N. Virginia)**{{}}. In each region, two or more NGINX Plus load balancers are deployed in a high‑availability (HA) configuration. In this guide, there are two NGINX Plus load balancer instances per region. You can also use NGINX Open Source for this purpose, but it lacks the [application health checks](https://docs.nginx.com/nginx/admin-guide/load-balancer/http-health-check/) that make for more precise error detection. For simplicity, we'll refer to NGINX Plus load balancers throughout this guide, noting when features specific to NGINX Plus are used. @@ -79,7 +79,7 @@ Create a _hosted zone_, which basically involves designating a domain name to be 1. Log in to the [AWS Management Console](https://console.aws.amazon.com/) (**console.aws.amazon.com/**). -2. Access the Route 53 dashboard page by clicking **Services** in the top AWS navigation bar, mousing over **Networking** in the **All AWS Services** column and then clicking **Route 53**. +2. Access the Route 53 dashboard page by clicking **Services** in the top AWS navigation bar, mousing over **Networking** in the {{}}**All AWS Services**{{}} column and then clicking **Route 53**. Screenshot showing how to access the Amazon Route 53 dashboard to configure global load balancing (GLB) with NGINX Plus @@ -87,7 +87,7 @@ Create a _hosted zone_, which basically involves designating a domain name to be Screenshot showing the Route 53 Registered domains tab during configuration of NGINX GSLB (global server load balancing) - If you see the Route 53 home page instead, access the **Registered domains** tab by clicking the  Get started now  button under **Domain registration**. + If you see the Route 53 home page instead, access the **Registered domains** tab by clicking the  Get started now  button under {{}}**Domain registration**{{}}. Screenshot showing the Amazon Route 53 homepage for a first-time Route 53 user during configuration of AWS GSLB (global server load balancing) with NGINX Plus @@ -124,19 +124,19 @@ Create records sets for your domain: 4. 
Fill in the fields in the **Create Record Set** column: - **Name** – You can leave this field blank, but for this guide we are setting the name to **www.nginxroute53.com**. - - **Type** – **A – IPv4 address**. + - **Type** – **A {{}}– IPv4 address**{{}}. - **Alias** – **No**. - **TTL (Seconds)** – **60**. **Note**: Reducing TTL from the default of **300** in this way can decrease the time that it takes for Route 53 to fail over when both NGINX Plus load balancers in the region are down, but there is always a delay of about two minutes regardless of the TTL setting. This is a built‑in limitation of Route 53. - - **Value** – [Elastic IP addresses](#elastic-ip) of the NGINX Plus load balancers in the first region [in this guide, **US West (Oregon)**]. + - **Value** – [Elastic IP addresses](#elastic-ip) of the NGINX Plus load balancers in the first region [in this guide, {{}}**US West (Oregon)**]{{}}. - **Routing Policy** – **Latency**. 5. A new area opens when you select **Latency**. Fill in the fields as indicated (see the figure below): - - **Region** – Region to which the load balancers belong (in this guide, **us‑west‑2**). - - **Set ID** – Identifier for this group of load balancers (in this guide, **US West LBs**). + - **Region** – Region to which the load balancers belong (in this guide, {{}}**us-west-2**{{}}). + - **Set ID** – Identifier for this group of load balancers (in this guide, {{}}**US West LBs**{{}}). - **Associate with Health Check** – **No**. When you complete all fields, the tab looks like this: @@ -145,7 +145,7 @@ Create records sets for your domain: 6. Click the  Create  button. -7. Repeat Steps 3 through 6 for the load balancers in the other region [in this guide, **US East (N. Virginia)**]. +7. Repeat Steps 3 through 6 for the load balancers in the other region [in this guide, {{}}**US East (N. Virginia)**]{{}}. You can now test your website. Insert your domain name into a browser and see that your request is being load balanced between servers based on your location. @@ -172,21 +172,21 @@ We create health checks both for each NGINX Plus load balancer individually and Screenshot of Amazon Route 53 welcome screen seen by first-time user of Route 53 during configuration of global server load balancing (GSLB) with NGINX Plus -2. Click the  Create health check  button. In the **Configure health check** form that opens, specify the following values, then click the  Next  button. +2. Click the  Create health check  button. In the {{}}**Configure health check**{{}} form that opens, specify the following values, then click the  Next  button. - - **Name** – Identifier for an NGINX Plus load balancer instance, for example **US West LB 1**. + - **Name** – Identifier for an NGINX Plus load balancer instance, for example {{}}**US West LB 1**{{}}. - **What to monitor** – **Endpoint**. - - **Specify endpoint by** – **IP address**. + - **Specify endpoint by** – {{}}**IP address**{{}}. - **IP address** – The [elastic IP address](#elastic-ip) of the NGINX Plus load balancer. - **Port** – The port advertised to clients for your domain or web service (the default is **80**). Screenshot of Amazon Route 53 interface for configuring health checks, during configuration of AWS global load balancing (GLB) with NGINX Plus -3. On the **Get notified when health check fails** screen that opens, set the **Create alarm** radio button to **Yes** or **No** as appropriate, then click the  Create health check  button. +3. 
On the {{}}**Get notified when health check fails**{{}} screen that opens, set the **Create alarm** radio button to **Yes** or **No** as appropriate, then click the  Create health check  button. Screenshot of Route 53 configuration screen for enabling notifications of failed health checks, during configuration of Route 53 global load balancing (GLB) with NGINX Plus -4. Repeat Steps 2 and 3 for your other NGINX Plus load balancers (in this guide, **US West LB 2**, **US East LB 1**, and **US East LB 2**). +4. Repeat Steps 2 and 3 for your other NGINX Plus load balancers (in this guide, {{}}**US West LB 2**{{}}, {{}}**US East LB 1**{{}}, and {{}}**US East LB 2**{{}}). 5. Proceed to the next section to configure health checks for the load balancer pairs. @@ -195,18 +195,18 @@ We create health checks both for each NGINX Plus load balancer individually and 1. Click the  Create health check  button. -2. In the **Configure health check** form that opens, specify the following values, then click the  Next  button. +2. In the {{}}**Configure health check**{{}} form that opens, specify the following values, then click the  Next  button. - - **Name** – Identifier for the pair of NGINX Plus load balancers in the first region, for example **US West LBs**. - - **What to monitor** – **Status of other health checks **. + - **Name** – Identifier for the pair of NGINX Plus load balancers in the first region, for example {{}}**US West LBs**{{}}. + - **What to monitor** – {{}}**Status of other health checks{{}} **. - **Health checks to monitor** – The health checks of the two US West load balancers (add them one after the other by clicking in the box and choosing them from the drop‑down menu as shown). - **Report healthy when** – **at least 1 of 2 selected health checks are healthy** (the choices in this field are obscured in the screenshot by the drop‑down menu). Screenshot of Amazon Route 53 interface for configuring a health check of combined other health checks, during configuration of global server load balancing (GSLB) with NGINX Plus -3. On the **Get notified when health check fails** screen that opens, set the **Create alarm** radio button as appropriate (see Step 5 in the previous section), then click the  Create health check  button. +3. On the {{}}**Get notified when health check fails**{{}} screen that opens, set the **Create alarm** radio button as appropriate (see Step 5 in the previous section), then click the  Create health check  button. -4. Repeat Steps 1 through 3 for the paired load balancers in the other region [in this guide, **US East (N. Virginia)**]. +4. Repeat Steps 1 through 3 for the paired load balancers in the other region [in this guide, {{}}**US East (N. Virginia)**]{{}}. When you have finished configuring all six health checks, the **Health checks** tab looks like this: @@ -223,13 +223,13 @@ When you have finished configuring all six health checks, the **Health checks** The tab changes to display the record sets for the domain. -3. In the list of record sets that opens, click the row for the record set belonging to your first region [in this guide, **US West (Oregon)**]. The Edit Record Set column opens on the right side of the tab. +3. In the list of record sets that opens, click the row for the record set belonging to your first region [in this guide, {{}}**US West (Oregon)**]{{}}. The Edit Record Set column opens on the right side of the tab. 
Screenshot of interface for editing Route 53 record sets during configuration of global server load balancing (GSLB) with NGINX Plus 4. Change the **Associate with Health Check** radio button to **Yes**. -5. In the **Health Check to Associate** field, select the paired health check for your first region (in this guide, **US West LBs**). +5. In the **Health Check to Associate** field, select the paired health check for your first region (in this guide, {{}}**US West LBs**{{}}). 6. Click the  Save Record Set  button. @@ -242,7 +242,7 @@ These instructions assume that you have configured NGINX Plus on two EC2 instan **Note:** Some commands require `root` privilege. If appropriate for your environment, prefix commands with the `sudo` command. -1. Connect to the **US West LB 1** instance. For instructions, see Connecting to an EC2 Instance. +1. Connect to the {{}}**US West LB 1**{{}} instance. For instructions, see Connecting to an EC2 Instance. 2. Change directory to **/etc/nginx/conf.d**. @@ -250,7 +250,7 @@ These instructions assume that you have configured NGINX Plus on two EC2 instan cd /etc/nginx/conf.d ``` -3. Edit the **west‑lb1.conf** file and add the **@healthcheck** location to set up health checks. +3. Edit the {{}}**west-lb1.conf**{{}} file and add the **@healthcheck** location to set up health checks. ```nginx upstream backend-servers { @@ -282,9 +282,9 @@ These instructions assume that you have configured NGINX Plus on two EC2 instan nginx -s reload ``` -5. Repeat Steps 1 through 4 for the other three load balancers (**US West LB 2**, **US East LB 1**, and **US East LB2**). +5. Repeat Steps 1 through 4 for the other three load balancers ({{}}**US West LB 2**{{}}, {{}}**US East LB 1**{{}}, and {{}}**US East LB2**{{}}). - In Step 3, change the filename as appropriate (**west‑lb2.conf**, **east‑lb1.conf**, and **east‑lb2.conf**). In the **east‑lb1.conf** and **east‑lb2.conf** files, the `server` directives specify the public DNS names of Backup 3 and Backup 4. + In Step 3, change the filename as appropriate ({{}}**west-lb2.conf**{{}}, {{}}**east-lb1.conf**{{}}, and {{}}**east-lb2.conf**{{}}). In the {{}}**east-lb1.conf**{{}} and {{}}**east-lb2.conf**{{}} files, the `server` directives specify the public DNS names of Backup 3 and Backup 4. ## Appendix @@ -307,31 +307,31 @@ Step‑by‑step instructions for creating EC2 instances and installing NGINX so Assign the following names to the instances, and then install the indicated NGINX software. -- In the first region, which is **US West (Oregon)** in this guide: +- In the first region, which is {{}}**US West (Oregon)**{{}} in this guide: - Two load balancer instances running NGINX Plus: - - **US West LB 1** - - **US West LB 2** + - {{}}**US West LB 1**{{}} + - {{}}**US West LB 2**{{}} - Two backend instances running NGINX Open Source: - * **Backend 1** - - **Backend 2** + * {{}}**Backend 1**{{}} + - {{}}**Backend 2**{{}} -- In the second region, which is **US East (N. Virginia)** in this guide: +- In the second region, which is {{}}**US East (N. Virginia)**{{}} in this guide: - Two load balancer instances running NGINX Plus: - - **US East LB 1** - - **US East LB 2** + - {{}}**US East LB 1**{{}} + - {{}}**US East LB 2**{{}} - Two backend instances running NGINX Open Source: - * **Backend 3** - - **Backend 4** + * {{}}**Backend 3**{{}} + - {{}}**Backend 4**{{}} -Here's the **Instances** tab after we create the four instances in the **N. Virginia** region. +Here's the **Instances** tab after we create the four instances in the {{}}**N. 
Virginia**{{}} region. Screenshot showing newly created EC2 instances in one of two regions, which is a prerequisite to configuring AWS GSLB (global server load balancing) with NGINX Plus @@ -366,7 +366,7 @@ After you complete the instructions on all instances, the list for a region (her ### Configuring NGINX Open Source on the Backend Servers -Perform these steps on all four backend servers: **Backend 1**, **Backend 2**, **Backend 3**, and **Backend 4**. In Step 3, substitute the appropriate name for `Backend X` in the **index.html** file. +Perform these steps on all four backend servers: {{}}**Backend 1**{{}}, {{}}**Backend 2**{{}}, {{}}**Backend 3**{{}}, and {{}}**Backend 4**{{}}. In Step 3, substitute the appropriate name for `Backend X` in the **index.html** file. **Note:** Some commands require `root` privilege. If appropriate for your environment, prefix commands with the `sudo` command. @@ -421,7 +421,7 @@ Perform these steps on all four backend servers: **Backend 1**, **Backend&n ### Configuring NGINX Plus on the Load Balancers -Perform these steps on all four backend servers: **US West LB 1**, **US West LB 2**, **US East LB 1**, and **US West LB 2**. +Perform these steps on all four backend servers: {{}}**US West LB 1**{{}}, {{}}**US West LB 2**{{}}, {{}}**US East LB 1**{{}}, and {{}}**US West LB 2**{{}}. **Note:** Some commands require `root` privilege. If appropriate for your environment, prefix commands with the `sudo` command. @@ -439,10 +439,10 @@ Perform these steps on all four backend servers: **US West LB 1** 4. Create a new file containing the following text, which configures load balancing of the two backend servers in the relevant region. The filename on each instance is: - - For **US West LB 1** – **west‑lb1.conf** - - For **US West LB 2** – **west‑lb2.conf** - - For **US East LB 1** – **east‑lb1.conf** - - For **US West LB 2** – **east‑lb2.conf** + - For {{}}**US West LB 1**{{}} – {{}}**west-lb1.conf**{{}} + - For {{}}**US West LB 2**{{}} – {{}}**west-lb2.conf**{{}} + - For {{}}**US East LB 1**{{}} – {{}}**east-lb1.conf**{{}} + - For {{}}**US West LB 2**{{}} – {{}}**east-lb2.conf**{{}} In the `server` directives in the `upstream` block, substitute the public DNS names of the backend instances in the region; to learn them, see the **Instances** tab in the EC2 Dashboard. 
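To illustrate that substitution, the `upstream` block in {{}}**west-lb1.conf**{{}} would list the public DNS names of Backend 1 and Backend 2; the hostnames below are placeholders under that assumption, not real instances.

```nginx
# Placeholder public DNS names; replace them with the values shown on the Instances tab.
upstream backend-servers {
    server ec2-192-0-2-10.us-west-2.compute.amazonaws.com;   # Backend 1
    server ec2-192-0-2-20.us-west-2.compute.amazonaws.com;   # Backend 2
}
```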
diff --git a/content/nginx/deployment-guides/global-server-load-balancing/ns1-global-server-load-balancing.md b/content/nginx/deployment-guides/global-server-load-balancing/ns1-global-server-load-balancing.md index 8e425506e..8e14e22e7 100644 --- a/content/nginx/deployment-guides/global-server-load-balancing/ns1-global-server-load-balancing.md +++ b/content/nginx/deployment-guides/global-server-load-balancing/ns1-global-server-load-balancing.md @@ -93,7 +93,7 @@ The solution functions alongside other NS1 capabilities, such as geo‑proximal - **Canadian province(s)** – Two‑letter codes for Canadian provinces - **Country/countries** – Two‑letter codes for nations and territories - - **Geographic region(s)** – Identifiers like **US‑WEST** and **ASIAPAC** + - **Geographic region(s)** – Identifiers like {{}}**US-WEST**{{}} and **ASIAPAC** - **ISO region code** – Identification codes for nations and territories as defined in [ISO 3166](https://www.iso.org/iso-3166-country-codes.html) - **Latitude** – Degrees, minutes, and seconds of latitude (northern or southern hemisphere) - **Longitude** – Degrees, minutes, and seconds of longitude (eastern or western hemisphere) @@ -114,8 +114,8 @@ The solution functions alongside other NS1 capabilities, such as geo‑proximal 12. In the **Add Filters** window that pops up, click the plus sign (+) on the button for each filter you want to apply. In this guide, we're configuring the filters in this order: - **Up** in the ** HEALTHCHECKS ** section - - **Geotarget Country** in the ** GEOGRAPHIC ** section - - **Select First N** in the ** TRAFFIC MANAGEMENT ** section + - {{}}**Geotarget Country**{{}} in the ** GEOGRAPHIC ** section + - {{}}**Select First N**{{}} in the ** {{}}TRAFFIC MANAGEMENT{{}} ** section Click the  Save Filter Chain  button. @@ -128,17 +128,17 @@ In this section we install and configure the NS1 agent on the same hosts as our 1. Follow the instructions in the [NS1 documentation](https://help.ns1.com/hc/en-us/articles/360020474154) to set up and connect a separate data feed for each of the three NGINX Plus instances, which NS1 calls _answers_. - On the first page (**Configure a new data source from NSONE Data Feed API v1**) specify a name for the _data source_, which is the administrative container for the data feeds you will be creating. Use the same name each of the three times you go through the instructions. We're naming the data source **NGINX‑GSLB**. + On the first page (**Configure a new data source from NSONE Data Feed API v1**) specify a name for the _data source_, which is the administrative container for the data feeds you will be creating. Use the same name each of the three times you go through the instructions. We're naming the data source {{}}**NGINX-GSLB**{{}}. On the next page (**Create Feed from NSONE Data Feed API v1**), create a data feed for the instance. Because the **Name** field is just for internal use, any value is fine. The value in the **Label** field is used in the YAML configuration file for the instance (see Step 4 below). 
We're specifying labels that indicate the country (using the ISO 3166 codes) in which the instance is running: - - **us‑nginxgslb‑datafeed** for instance 1 in the US - - **de‑nginxgslb‑datafeed** for instance 2 in Germany - - **sg‑nginxgslb‑datafeed** for instance 3 in Singapore + - {{}}**us-nginxgslb-datafeed**{{}} for instance 1 in the US + - {{}}**de-nginxgslb-datafeed**{{}} for instance 2 in Germany + - {{}}**sg-nginxgslb-datafeed**{{}} for instance 3 in Singapore After creating the three feeds, note the value in the **Feeds URL** field on the  INTEGRATIONS  tab. The final element of the URL is the ```` you will specify in the YAML configuration file in Step 4. In the third screenshot in the [NS1 documentation](https://help.ns1.com/hc/en-us/articles/360020474154), for example, it is **e566332c5d22c6b66aeaa8837eae90ac**. -2. Follow the instructions in the [NS1 documentation](https://help.ns1.com/hc/en-us/articles/360017341694-Creating-managing-API-keys) to create an NS1 API key for the agent, if you have not already. (To access **Account Settings** in Step 1, click your username in the upper right corner of the NS1 title bar.) We're naming the app **NGINX‑GSLB**. Make note of the key value – you'll specify it as ```` in the YAML configuration file in Step 4. To see the actual hexadecimal value, click on the circled letter **i** in the **API Key** field. +2. Follow the instructions in the [NS1 documentation](https://help.ns1.com/hc/en-us/articles/360017341694-Creating-managing-API-keys) to create an NS1 API key for the agent, if you have not already. (To access **Account Settings** in Step 1, click your username in the upper right corner of the NS1 title bar.) We're naming the app {{}}**NGINX-GSLB**{{}}. Make note of the key value – you'll specify it as ```` in the YAML configuration file in Step 4. To see the actual hexadecimal value, click on the circled letter **i** in the **API Key** field. 3. On each NGINX Plus host, clone the [GitHub repo](https://github.com/nginxinc/nginx-ns1-gslb) for the NS1 agent. @@ -349,7 +349,7 @@ First we perform these steps to create the shed filter: Screenshot of NS1 GUI: clicking Shed Load button on Add Filters page -3. The **Shed Load** filter is added as the fourth (lowest) box in the **Active Filters** section. Move it to be third by clicking and dragging it above the **Select First N** box. +3. The **Shed Load** filter is added as the fourth (lowest) box in the **Active Filters** section. Move it to be third by clicking and dragging it above the {{}}**Select First N**{{}} box. 4. Click the  Save Filter Chain  button. @@ -363,7 +363,7 @@ First we perform these steps to create the shed filter: 7. In the **Answer Metadata** window that opens, set values for the following metadata. In each case, click the icon in the  FEED  column of the metadata's row, then select or enter the indicated value in the  AVAILABLE  column. (For testing purposes, we're setting very small values for the watermarks so that the threshold is exceeded very quickly.) 
- - **Active connections** – **us‑nginxgslb‑datafeed** + - **Active connections** – {{}}**us-nginxgslb-datafeed**{{}} - **High watermark** – **5** - **Low watermark** – **2** diff --git a/content/nginx/deployment-guides/google-cloud-platform/high-availability-all-active.md b/content/nginx/deployment-guides/google-cloud-platform/high-availability-all-active.md index cf9e705b0..f44b95e75 100644 --- a/content/nginx/deployment-guides/google-cloud-platform/high-availability-all-active.md +++ b/content/nginx/deployment-guides/google-cloud-platform/high-availability-all-active.md @@ -15,7 +15,7 @@ This guide explains how to deploy F5 NGINX Plus in a high-availability configura **Notes:** - The GCE environment changes constantly. This could include names and arrangements of GUI elements. This guide was accurate when published. But, some GCE GUI elements might have changed over time. Use this guide as a reference and adapt to the current GCE working environment. -- The configuration described in this guide allows anyone from a public IP address to access the NGINX Plus instances. While this works in common scenarios in a test environment, we do not recommend it in production. Block external HTTP/HTTPS access to **app‑1** and **app‑2** instances to external IP address before production deployment. Alternatively, remove the external IP addresses for all application instances, so they're accessible only on the internal GCE network. +- The configuration described in this guide allows anyone from a public IP address to access the NGINX Plus instances. While this works in common scenarios in a test environment, we do not recommend it in production. Block external HTTP/HTTPS access to {{}}**app-1**{{}} and {{}}**app-2**{{}} instances to external IP address before production deployment. Alternatively, remove the external IP addresses for all application instances, so they're accessible only on the internal GCE network. @@ -38,7 +38,7 @@ The GCE network LB assigns each new client to a specific NGINX Plus LB. This ass NGINX Plus LB uses the round-robin algorithm to forward requests to specific app instances. It also adds a session cookie. It keeps future requests from the same client on the same app instance as long as it's running. -This deployment guide uses two groups of app instances: – **app‑1** and **app‑2**. It demonstrates [load balancing](https://www.nginx.com/products/nginx/load-balancing/) between different app types. But both groups have the same app configurations. +This deployment guide uses two groups of app instances: – {{}}**app-1**{{}} and {{}}**app-2**{{}}. It demonstrates [load balancing](https://www.nginx.com/products/nginx/load-balancing/) between different app types. But both groups have the same app configurations. You can adapt the deployment to distribute unique connections to different groups of app instances. This can be done by creating discrete upstream blocks and routing content based on the URI. @@ -69,17 +69,17 @@ Create a new GCE project to host the all‑active NGINX Plus deployment. 1. Log into the [GCP Console](http://console.cloud.google.com) at **console.cloud.google.com**. -2. The GCP **Home > Dashboard** tab opens. Its contents depend on whether you have any existing projects. +2. The GCP {{}}**Home > Dashboard**{{}} tab opens. Its contents depend on whether you have any existing projects. - If there are no existing projects, click the  Create a project  button. 
Screenshot of the Google Cloud Platform dashboard that appears when there are no existing projects (creating a project is the first step in configuring NGINX Plus as the Google Cloud load balancer) - - If there are existing projects, the name of one of them appears in the upper left of the blue header bar (in the screenshot, it's  My Test Project ). Click the project name and select **Create project** from the menu that opens. + - If there are existing projects, the name of one of them appears in the upper left of the blue header bar (in the screenshot, it's  My Test Project ). Click the project name and select {{}}**Create project**{{}} from the menu that opens. Screenshot of the Google Cloud Platform page that appears when other projects already exist (creating a project is the first step in configuring NGINX Plus as the Google Cloud load balancer) -3. Type your project name in the **New Project** window that pops up, then click CREATE. We're naming the project **NGINX Plus All‑Active‑LB**. +3. Type your project name in the {{}}**New Project**{{}} window that pops up, then click CREATE. We're naming the project **NGINX {{}}Plus All-Active-LB**{{}}. Screenshot of the New Project pop-up window for naming a new project on the Google Cloud Platform, which is the first step in configuring NGINX Plus as the Google load balancer @@ -87,24 +87,24 @@ Create a new GCE project to host the all‑active NGINX Plus deployment. Create firewall rules that allow access to the HTTP and HTTPS ports on your GCE instances. You'll attach the rules to all the instances you create for the deployment. -1. Navigate to the **Networking > Firewall rules** tab and click  +  CREATE FIREWALL RULE. (The screenshot shows the default rules provided by GCE.) +1. Navigate to the {{}}**Networking > Firewall rules**{{}} tab and click  +  CREATE FIREWALL RULE. (The screenshot shows the default rules provided by GCE.) Screenshot of the Google Cloud Platform page for defining new firewall rules; when configuring NGINX Plus as the Google Cloud load balancer, we open ports 80, 443, and 8080 for it. -2. Fill in the fields on the **Create a firewall rule** screen that opens: +2. Fill in the fields on the {{}}**Create a firewall rule**{{}} screen that opens: - - **Name** – **nginx‑plus‑http‑fw‑rule** + - **Name** – {{}}**nginx-plus-http-fw-rule**{{}} - **Description** – **Allow access to ports 80, 8080, and 443 on all NGINX Plus instances** - - **Source filter** – On the drop-down menu, select either **Allow from any source (0.0.0.0/0)**, or **IP range** if you want to restrict access to users on your private network. In the second case, fill in the **Source IP ranges** field that opens. In the screenshot, we are allowing unrestricted access. - - **Allowed protocols and ports** – **tcp:80; tcp:8080; tcp:443** + - {{}}**Source filter**{{}} – On the drop-down menu, select either **Allow from any source (0.0.0.0/0)**, or {{}}**IP range**{{}} if you want to restrict access to users on your private network. In the second case, fill in the {{}}**Source IP ranges**{{}} field that opens. In the screenshot, we are allowing unrestricted access. + - {{}}**Allowed protocols and ports**{{}} – {{}}**tcp:80; tcp:8080; tcp:443**{{}} **Note:** As noted in the introduction, allowing access from any public IP address is appropriate only in a test environment. Before deploying the architecture in production, create a firewall rule. Use this rule to block access to the external IP address for your application instances. 
Alternatively, you can disable external IP addresses for the instances. This limits access only to the internal GCE network. - - **Target tags** – **nginx‑plus‑http‑fw‑rule** + - {{}}**Target tags**{{}} – {{}}**nginx-plus-http-fw-rule**{{}} Screenshot of the interface for creating a Google Compute Engine (GCE) firewall rule, used during deployment of NGINX Plus as the Google load balancer. -3. Click the  Create  button. The new rule is added to the table on the **Firewall rules** tab. +3. Click the  Create  button. The new rule is added to the table on the {{}}**Firewall rules**{{}} tab. ## Task 2: Creating Source Instances @@ -123,29 +123,29 @@ The methods to create a source instance are different. Once you've created the s Create three source VM instances based on a GCE VM image. We're basing our instances on the Ubuntu 16.04 LTS image. -1. Verify that the **NGINX Plus All‑Active‑LB** project is still selected in the Google Cloud Platform header bar. +1. Verify that the **NGINX {{}}Plus All-Active-LB**{{}} project is still selected in the Google Cloud Platform header bar. -2. Navigate to the **Compute Engine > VM instances** tab. +2. Navigate to the {{}}**Compute Engine > VM instances**{{}} tab. -3. Click the  Create instance  button. The **Create an instance** page opens. +3. Click the  Create instance  button. The {{}}**Create an instance**{{}} page opens. #### Creating the First Application Instance from a VM Image -1. On the **Create an instance** page, modify or verify the fields and checkboxes as indicated (a screenshot of the completed page appears in the next step): +1. On the {{}}**Create an instance**{{}} page, modify or verify the fields and checkboxes as indicated (a screenshot of the completed page appears in the next step): - - **Name** – **nginx‑plus‑app‑1** - - **Zone** – The GCP zone that makes sense for your location. We're using **us‑west1‑a**. - - **Machine type** – The appropriate size for the level of traffic you anticipate. We're selecting **micro**, which is ideal for testing purposes. - - **Boot disk** – Click **Change**. The **Boot disk** page opens to the OS images subtab. Perform the following steps: + - **Name** – {{}}**nginx-plus-app-1**{{}} + - **Zone** – The GCP zone that makes sense for your location. We're using {{}}**us-west1-a**{{}}. + - {{}}**Machine type**{{}} – The appropriate size for the level of traffic you anticipate. We're selecting **micro**, which is ideal for testing purposes. + - {{}}**Boot disk**{{}} – Click **Change**. The {{}}**Boot disk**{{}} page opens to the OS images subtab. Perform the following steps: - - Click the radio button for the Unix or Linux image of your choice (here, **Ubuntu 16.04 LTS**). - - Accept the default values in the **Boot disk type** and **Size (GB)** fields (**Standard persistent disk** and **10** respectively). + - Click the radio button for the Unix or Linux image of your choice (here, {{}}**Ubuntu 16.04 LTS**{{}}). + - Accept the default values in the {{}}**Boot disk type**{{}} and {{}}**Size (GB)**{{}} fields ({{}}**Standard persistent disk**{{}} and **10** respectively). - Click the  Select  button. Screenshot of the 'Boot disk' page in Google Cloud Platform for selecting the OS on which a VM runs. In the deployment of NGINX Plus as the Google load balancer, we select Ubuntu 16.04 LTS. - - **Identity and API access** – Keep the defaults for the **Service account ** field and **Access scopes** radio button. Unless you need more granular control. 
+ - {{}}**Identity and API access**{{}} – Keep the defaults for the {{}}**Service account **{{}} field and {{}}**Access scopes**{{}} radio button. Unless you need more granular control. - **Firewall** – Verify that neither check box is checked (the default). The firewall rule invoked in the **Tags** field on the **Management** subtab (see Step 3 below) controls this type of access. 2. Click Management, disk, networking, SSH keys to open that set of subtabs. (The screenshot shows the values entered in the previous step.) @@ -154,11 +154,11 @@ Create three source VM instances based on a GCE VM image. We're basing our insta 3. On the **Management** subtab, modify or verify the fields as indicated: - - **Description** – **NGINX Plus app‑1 Image** - - **Tags** – **nginx‑plus‑http‑fw‑rule** - - **Preemptibility** – **Off (recommended)** (the default) - - **Automatic restart** – **On (recommended)** (the default) - - **On host maintenance** – **Migrate VM instance (recommended)** (the default) + - **Description** – **NGINX {{}}Plus app-1 Image**{{}} + - **Tags** – {{}}**nginx-plus-http-fw-rule**{{}} + - **Preemptibility** – {{}}**Off (recommended)**{{}} (the default) + - {{}}**Automatic restart**{{}} – {{}}**On (recommended)**{{}} (the default) + - {{}}**On host maintenance**{{}} – {{}}**Migrate VM instance (recommended)**{{}} (the default) Screenshot of the Management subtab used during creation of a new VM instance, part of deploying NGINX Plus as the Google load balancer. @@ -166,38 +166,38 @@ Create three source VM instances based on a GCE VM image. We're basing our insta Screenshot of the Disks subtab used during creation of a new VM instance, part of deploying NGINX Plus as the Google Cloud load balancer. -5. On the **Networking** subtab, verify the default settings, in particular **Ephemeral** for **External IP** and **Off** for **IP Forwarding**. +5. On the **Networking** subtab, verify the default settings, in particular **Ephemeral** for {{}}**External IP**{{}} and **Off** for {{}}**IP Forwarding**{{}}. Screenshot of the Networking subtab used during creation of a new VM instance, part of deploying NGINX Plus as the Google Cloud load balancer. -6. If you're using your own SSH public key instead of your default GCE keys, paste the hexadecimal key string on the **SSH Keys** subtab. Right into the box that reads **Enter entire key data**. +6. If you're using your own SSH public key instead of your default GCE keys, paste the hexadecimal key string on the {{}}**SSH Keys**{{}} subtab. Right into the box that reads {{}}**Enter entire key data**{{}}. Screenshot of the SSH Keys subtab used during creation of a new VM instance, part of deploying NGINX Plus as the Google Cloud Platform load balancer. -7. Click the  Create  button at the bottom of the **Create an instance** page. +7. Click the  Create  button at the bottom of the {{}}**Create an instance**{{}} page. - The **VM instances** summary page opens. It can take several minutes for the instance to be created. Wait to continue until the green check mark appears. + The {{}}**VM instances**{{}} summary page opens. It can take several minutes for the instance to be created. Wait to continue until the green check mark appears. Screenshot of the summary page that verifies the creation of a new VM instance, part of deploying NGINX Plus as the load balancer for Google Cloud. #### Creating the Second Application Instance from a VM Image -1. On the **VM instances** summary page, click CREATE INSTANCE. +1. 
On the {{}}**VM instances**{{}} summary page, click CREATE INSTANCE. 2. Repeat the steps in Creating the First Application Instance to create the second application instance. Specify the same values as for the first application instance, except: - - In Step 1, **Name** – **nginx‑plus‑app‑2** - - In Step 3, **Description** – **NGINX Plus app‑2 Image** + - In Step 1, **Name** – {{}}**nginx-plus-app-2**{{}} + - In Step 3, **Description** – **NGINX {{}}Plus app-2 Image**{{}} #### Creating the Load-Balancing Instance from a VM Image -1. On the **VM instances** summary page, click CREATE INSTANCE. +1. On the {{}}**VM instances**{{}} summary page, click CREATE INSTANCE. 2. Repeat the steps in Creating the First Application Instance to create the load‑balancing instance. Specify the same values as for the first application instance, except: - - In Step 1, **Name** – **nginx‑plus‑lb** + - In Step 1, **Name** – {{}}**nginx-plus-lb**{{}} - In Step 3, **Description** – **NGINX Plus Load Balancing Image** @@ -205,14 +205,14 @@ Create three source VM instances based on a GCE VM image. We're basing our insta Install and configure PHP and FastCGI on the instances. -Repeat these instructions for all three source instances (**nginx‑plus‑app‑1**, **nginx‑plus‑app‑2**, and **nginx‑plus‑lb**). +Repeat these instructions for all three source instances ({{}}**nginx-plus-app-1**{{}}, {{}}**nginx-plus-app-2**{{}}, and {{}}**nginx-plus-lb**{{}}). **Note:** Some commands require `root` privilege. If appropriate for your environment, prefix commands with the `sudo` command. 1. Connect to the instance over SSH using the method of your choice. GCE provides a built-in mechanism: - - Navigate to the **Compute Engine > VM instances** tab. - - In the instance's row in the table, click the triangle icon in the **Connect** column at the far right and select a method (for example, **Open in browser window**). + - Navigate to the {{}}**Compute Engine > VM instances**{{}} tab. + - In the instance's row in the table, click the triangle icon in the **Connect** column at the far right and select a method (for example, {{}}**Open in browser window**{{}}). Screenshot showing how to connect via SSH to a VM instance, part of deploying NGINX Plus as the Google load balancer. @@ -253,7 +253,7 @@ Now install NGINX Plus and download files that are specific to the all‑active Both the configuration and content files are available at the [NGINX GitHub repository](https://github.com/nginxinc/NGINX-Demos/tree/master/gce-nginx-plus-deployment-guide-files). -Repeat these instructions for all three source instances (**nginx‑plus‑app‑1**, **nginx‑plus‑app‑2**, and **nginx‑plus‑lb**). +Repeat these instructions for all three source instances ({{}}**nginx-plus-app-1**{{}}, {{}}**nginx-plus-app-2**{{}}, and {{}}**nginx-plus-lb**{{}}). **Note:** Some commands require `root` privilege. If appropriate for your environment, prefix commands with the `sudo` command. @@ -265,7 +265,7 @@ Both the configuration and content files are available at the [NGINX GitHub repo 4. Copy the right configuration file from the **etc\_nginx\_conf.d** subdirectory of the cloned repository to **/etc/nginx/conf.d**: - - On both **nginx‑plus‑app‑1** and **nginx‑plus‑app‑2**, copy **gce‑all‑active‑app.conf**. + - On both {{}}**nginx-plus-app-1**{{}} and {{}}**nginx-plus-app-2**{{}}, copy {{}}**gce-all-active-app.conf**{{}}. 
You can also run the following commands to download the configuration file directly from the GitHub repository: @@ -281,7 +281,7 @@ Both the configuration and content files are available at the [NGINX GitHub repo wget https://github.com/nginxinc/NGINX-Demos/blob/master/gce-nginx-plus-deployment-guide-files/etc_nginx_conf.d/gce-all-active-app.conf ``` - - On **nginx‑plus‑lb**, copy **gce‑all‑active‑lb.conf**. + - On {{}}**nginx-plus-lb**{{}}, copy {{}}**gce-all-active-lb.conf**{{}}. You can also run the following commands to download the configuration file directly from the GitHub repository: @@ -297,9 +297,9 @@ Both the configuration and content files are available at the [NGINX GitHub repo wget https://github.com/nginxinc/NGINX-Demos/blob/master/gce-nginx-plus-deployment-guide-files/etc_nginx_conf.d/gce-all-active-lb.conf ``` -5. On the LB instance (**nginx‑plus‑lb**), use a text editor to open **gce‑all‑active‑lb.conf**. Change the `server` directives in the `upstream` block to reference the internal IP addresses of the **nginx‑plus‑app‑1** and **nginx‑plus‑app‑2** instances (substitute the address for the expression in angle brackets). You do not need to modify the two application instances. +5. On the LB instance ({{}}**nginx-plus-lb**{{}}), use a text editor to open {{}}**gce-all-active-lb.conf**{{}}. Change the `server` directives in the `upstream` block to reference the internal IP addresses of the {{}}**nginx-plus-app-1**{{}} and {{}}**nginx-plus-app-2**{{}} instances (substitute the address for the expression in angle brackets). You do not need to modify the two application instances. - You can look up internal IP addresses in the **Internal IP** column of the table on the **Compute Engine > VM instances** summary page. + You can look up internal IP addresses in the {{}}**Internal IP**{{}} column of the table on the {{}}**Compute Engine > VM instances**{{}} summary page. ```nginx upstream upstream_app_pool { @@ -341,7 +341,7 @@ Both the configuration and content files are available at the [NGINX GitHub repo nginx -s reload ``` -9. Verify the instance is working by accessing it at its external IP address. (As previously noted, we recommend blocking access to the external IP addresses of the application instances in a production environment.) The external IP address for the instance appears on the **Compute Engine > VM instances** summary page, in the **External IP** column of the table. +9. Verify the instance is working by accessing it at its external IP address. (As previously noted, we recommend blocking access to the external IP addresses of the application instances in a production environment.) The external IP address for the instance appears on the {{}}**Compute Engine > VM instances**{{}} summary page, in the {{}}**External IP**{{}} column of the table. - Access the **index.html** page either in a browser or by running this `curl` command. @@ -351,7 +351,7 @@ Both the configuration and content files are available at the [NGINX GitHub repo - Access its NGINX Plus live activity monitoring dashboard in a browser, at: - **https://_external‑IP‑address_:8080/status.html** + {{}}**https://_external-IP-address_:8080/status.html**{{}} 10. Proceed to [Task 3: Creating "Gold" Images](#gold). @@ -363,7 +363,7 @@ Create three source instances based on a prebuilt NGINX Plus image running on < #### Creating the First Application Instance from a Prebuilt Image -1. Verify that the **NGINX Plus All‑Active‑LB** project is still selected in the Google Cloud Platform header bar. +1. 
Verify that the **NGINX {{}}Plus All-Active-LB**{{}} project is still selected in the Google Cloud Platform header bar. 2. Navigate to the GCP Marketplace and search for **nginx plus**. @@ -373,16 +373,16 @@ Create three source instances based on a prebuilt NGINX Plus image running on < 4. On the **NGINX Plus** page that opens, click the  Launch on Compute Engine  button. -5. Fill in the fields on the **New NGINX Plus deployment** page as indicated. +5. Fill in the fields on the {{}}**New NGINX{{}} {{}}Plus deployment**{{}} page as indicated. - - **Deployment name** – **nginx‑plus‑app‑1** - - **Zone** – The GCP zone that makes sense for your location. We're using **us‑west1‑a**. - - **Machine type** – The appropriate size for the level of traffic you anticipate. We're selecting **micro**, which is ideal for testing purposes. - - **Disk type** – **Standard Persistent Disk** (the default) - - **Disk size in GB** – **10** (the default and minimum allowed) - - **Network name** – **default** - - **Subnetwork name** – **default** - - **Firewall** – Verify that the **Allow HTTP traffic** checkbox is checked. + - {{}}**Deployment name**{{}} – {{}}**nginx-plus-app-1**{{}} + - **Zone** – The GCP zone that makes sense for your location. We're using {{}}**us-west1-a**{{}}. + - {{}}**Machine type**{{}} – The appropriate size for the level of traffic you anticipate. We're selecting **micro**, which is ideal for testing purposes. + - {{}}**Disk type**{{}} – {{}}**Standard Persistent Disk**{{}} (the default) + - {{}}**Disk size in GB**{{}} – **10** (the default and minimum allowed) + - {{}}**Network name**{{}} – **default** + - {{}}**Subnetwork name**{{}} – **default** + - **Firewall** – Verify that the {{}}**Allow HTTP traffic**{{}} checkbox is checked. Screenshot of the page for creating a prebuilt NGINX Plus VM instance when deploying NGINX Plus as the Google Cloud Platform load balancer. @@ -392,25 +392,25 @@ Create three source instances based on a prebuilt NGINX Plus image running on < Screenshot of the page that confirms the creation of a prebuilt NGINX Plus VM instance when deploying NGINX Plus as the Google load balancer. -7. Navigate to the **Compute Engine > VM instances** tab and click **nginx‑plus‑app‑1‑vm** in the Name column in the table. (The **‑vm** suffix is added automatically to the name of the newly created instance.) +7. Navigate to the {{}}**Compute Engine > VM instances**{{}} tab and click {{}}**nginx-plus-app-1-vm**{{}} in the Name column in the table. (The {{}}**-vm**{{}} suffix is added automatically to the name of the newly created instance.) Screenshot showing how to access the page where configuration details for a VM instance can be modified during deployment of NGINX Plus as the Google Cloud load balancer. -8. On the **VM instances** page that opens, click EDIT at the top of the page. In fields that can be edited, the value changes from static text to text boxes, drop‑down menus, and checkboxes. +8. On the {{}}**VM instances**{{}} page that opens, click EDIT at the top of the page. In fields that can be edited, the value changes from static text to text boxes, drop‑down menus, and checkboxes. 9. Modify or verify the indicated editable fields (non‑editable fields are not listed): - - **Tags** – If a default tag appears (for example, **nginx‑plus‑app‑1‑tcp‑80**), click the **X** after its name to remove it. Then, type in **nginx‑plus‑http‑fw‑rule**. 
- - **External IP** – **Ephemeral** (the default) - - **Boot disk and local disks** – Uncheck the checkbox labeled **Delete boot disk when instance is deleted**. - - **Additional disks** – No changes - - **Network** – If you must change the defaults, for example, when configuring a production environment, select default Then, select EDIT on the opened **Network details** page. After making your changes select the  Save  button. + - **Tags** – If a default tag appears (for example, {{}}**nginx-plus-app-1-tcp-80**{{}}), click the **X** after its name to remove it. Then, type in {{}}**nginx-plus-http-fw-rule**{{}}. + - {{}}**External IP**{{}} – **Ephemeral** (the default) + - {{}}**Boot disk and local disks**{{}} – Uncheck the checkbox labeled **Delete boot disk when instance is deleted**. + - {{}}**Additional disks**{{}} – No changes + - **Network** – If you must change the defaults, for example, when configuring a production environment, select default Then, select EDIT on the opened {{}}**Network details**{{}} page. After making your changes select the  Save  button. - **Firewall** – Verify that neither check box is checked (the default). The firewall rule named in the **Tags** field that's above on the current page (see the first bullet in this list) controls this type of access. - - **Automatic restart** – **On (recommended)** (the default) - - **On host maintenance** – **Migrate VM instance (recommended)** (the default) - - **Custom metadata** – No changes - - **SSH Keys** – If you're using your own SSH public key instead of your default GCE keys, paste the hexadecimal key string into the box labeled **Enter entire key data**. - - **Serial port** – Verify that the check box labeled **Enable connecting to serial ports** is not checked (the default). + - {{}}**Automatic restart**{{}} – {{}}**On (recommended)**{{}} (the default) + - {{}}**On host maintenance**{{}} – {{}}**Migrate VM instance (recommended)**{{}} (the default) + - {{}}**Custom metadata**{{}} – No changes + - {{}}**SSH Keys**{{}} – If you're using your own SSH public key instead of your default GCE keys, paste the hexadecimal key string into the box labeled {{}}**Enter entire key data**{{}}. + - {{}}**Serial port**{{}} – Verify that the check box labeled {{}}**Enable connecting to serial ports**{{}} is not checked (the default). The screenshot shows the results of your changes. It omits some fields that can't be edited or for which we recommend keeping the defaults. @@ -423,29 +423,29 @@ Create three source instances based on a prebuilt NGINX Plus image running on < Create the second application instance by cloning the first one. -1. Navigate back to the summary page on the **Compute Engine > VM instances** tab (click the arrow that is circled in the following figure). +1. Navigate back to the summary page on the {{}}**Compute Engine > VM instances**{{}} tab (click the arrow that is circled in the following figure). Screenshot showing how to return to the VM instance summary page during deployment of NGINX Plus as the Google Cloud Platform load balancer. -2. Click **nginx‑plus‑app‑1‑vm** in the Name column of the table (shown in the screenshot in Step 7 of Creating the First Application Instance). +2. Click {{}}**nginx-plus-app-1-vm**{{}} in the Name column of the table (shown in the screenshot in Step 7 of Creating the First Application Instance). -3. On the **VM instances** page that opens, click CLONE at the top of the page. +3. On the {{}}**VM instances**{{}} page that opens, click CLONE at the top of the page. -4. 
On the **Create an instance** page that opens, modify or verify the fields and checkboxes as indicated: +4. On the {{}}**Create an instance**{{}} page that opens, modify or verify the fields and checkboxes as indicated: - - **Name** – **nginx‑plus‑app‑2‑vm**. Here we're adding the **‑vm** suffix to make the name consistent with the first instance; GCE does not add it automatically when you clone an instance. - - **Zone** – The GCP zone that makes sense for your location. We're using **us‑west1‑a**. - - **Machine type** – The appropriate size for the level of traffic you anticipate. We're selecting **f1‑micro**, which is ideal for testing purposes. - - **Boot disk type** – **New 10 GB standard persistent disk** (the value inherited from **nginx‑plus‑app‑1‑vm**) - - **Identity and API access** – Set the **Access scopes** radio button to **Allow default access** and accept the default values in all other fields. If you want more granular control over access than is provided by these settings, modify the fields in this section as appropriate. + - **Name** – {{}}**nginx-plus-app-2-vm**{{}}. Here we're adding the {{}}**-vm**{{}} suffix to make the name consistent with the first instance; GCE does not add it automatically when you clone an instance. + - **Zone** – The GCP zone that makes sense for your location. We're using {{}}**us-west1-a**{{}}. + - {{}}**Machine type**{{}} – The appropriate size for the level of traffic you anticipate. We're selecting {{}}**f1-micro**{{}}, which is ideal for testing purposes. + - {{}}**Boot disk type**{{}} – {{}}**New 10 GB standard persistent disk**{{}} (the value inherited from {{}}**nginx-plus-app-1-vm**{{}}) + - {{}}**Identity and API access**{{}} – Set the {{}}**Access scopes**{{}} radio button to {{}}**Allow default access**{{}} and accept the default values in all other fields. If you want more granular control over access than is provided by these settings, modify the fields in this section as appropriate. - **Firewall** – Verify that neither check box is checked (the default). 5. Click Management, disk, networking, SSH keys to open that set of subtabs. 6. Verify the following settings on the subtabs, modifying them as necessary: - - **Management** – In the **Tags** field: **nginx‑plus‑http‑fw‑rule** - - **Disks** – The **Deletion rule** checkbox (labeled **Delete boot disk when instance is deleted**) is not checked + - **Management** – In the **Tags** field: {{}}**nginx-plus-http-fw-rule**{{}} + - **Disks** – The {{}}**Deletion rule**{{}} checkbox (labeled **Delete boot disk when instance is deleted**) is not checked 7. Select the  Create  button. @@ -454,21 +454,21 @@ Create the second application instance by cloning the first one. Create the source load‑balancing instance by cloning the first instance again. -Repeat Steps 2 through 7 of Creating the Second Application Instance. In Step 4, specify **nginx‑plus‑lb‑vm** as the name. +Repeat Steps 2 through 7 of Creating the Second Application Instance. In Step 4, specify {{}}**nginx-plus-lb-vm**{{}} as the name. #### Configuring PHP and FastCGI on the Prebuilt-Based Instances Install and configure PHP and FastCGI on the instances. -Repeat these instructions for all three source instances (**nginx‑plus‑app‑1‑vm**, **nginx‑plus‑app‑2‑vm**, and **nginx‑plus‑lb‑vm**). +Repeat these instructions for all three source instances ({{}}**nginx-plus-app-1-vm**{{}}, {{}}**nginx-plus-app-2-vm**{{}}, and {{}}**nginx-plus-lb-vm**{{}}). **Note:** Some commands require `root` privilege. 
If appropriate for your environment, prefix commands with the `sudo` command. 1. Connect to the instance over SSH using the method of your choice. GCE provides a built‑in mechanism: - - Navigate to the **Compute Engine > VM instances** tab. - - In the table, find the row for the instance. Select the triangle icon in the **Connect** column at the far right. Then, select a method (for example, **Open in browser window**). + - Navigate to the {{}}**Compute Engine > VM instances**{{}} tab. + - In the table, find the row for the instance. Select the triangle icon in the **Connect** column at the far right. Then, select a method (for example, {{}}**Open in browser window**{{}}). The screenshot shows instances based on the prebuilt NGINX Plus images. @@ -511,7 +511,7 @@ Now download files that are specific to the all‑active deployment: Both the configuration and content files are available at the [NGINX GitHub repository](https://github.com/nginxinc/NGINX-Demos/tree/master/gce-nginx-plus-deployment-guide-files). -Repeat these instructions for all three source instances (**nginx‑plus‑app‑1‑vm**, **nginx‑plus‑app‑2‑vm**, and **nginx‑plus‑lb‑vm**). +Repeat these instructions for all three source instances ({{}}**nginx-plus-app-1-vm**{{}}, {{}}**nginx-plus-app-2-vm**{{}}, and {{}}**nginx-plus-lb-vm**{{}}). **Note:** Some commands require `root` privilege. If appropriate for your environment, prefix commands with the `sudo` command. @@ -522,7 +522,7 @@ Both the configuration and content files are available at the [NGINX GitHub repo 3. Copy the right configuration file from the **etc\_nginx\_conf.d** subdirectory of the cloned repository to **/etc/nginx/conf.d**: - - On both **nginx‑plus‑app‑1‑vm** and **nginx‑plus‑app‑2‑vm**, copy **gce‑all‑active‑app.conf**. + - On both {{}}**nginx-plus-app-1-vm**{{}} and {{}}**nginx-plus-app-2-vm**{{}}, copy {{}}**gce-all-active-app.conf**{{}}. You can also run these commands to download the configuration file from GitHub: @@ -538,7 +538,7 @@ Both the configuration and content files are available at the [NGINX GitHub repo wget https://github.com/nginxinc/NGINX-Demos/blob/master/gce-nginx-plus-deployment-guide-files/etc_nginx_conf.d/gce-all-active-app.conf ``` - - On **nginx‑plus‑lb‑vm**, copy **gce‑all‑active‑lb.conf**. + - On {{}}**nginx-plus-lb-vm**{{}}, copy {{}}**gce-all-active-lb.conf**{{}}. You can also run the following commands to download the configuration file directly from the GitHub repository: @@ -554,9 +554,9 @@ Both the configuration and content files are available at the [NGINX GitHub repo wget https://github.com/nginxinc/NGINX-Demos/blob/master/gce-nginx-plus-deployment-guide-files/etc_nginx_conf.d/gce-all-active-lb.conf ``` -4. On the LB instance (**nginx‑plus‑lb‑vm**), use a text editor to open **gce‑all‑active‑lb.conf**. Change the `server` directives in the `upstream` block to reference the internal IP addresses of the **nginx‑plus‑app‑1‑vm** and **nginx‑plus‑app‑2‑vm** instances. (No action is required on the two application instances themselves.) +4. On the LB instance ({{}}**nginx-plus-lb-vm**{{}}), use a text editor to open {{}}**gce-all-active-lb.conf**{{}}. Change the `server` directives in the `upstream` block to reference the internal IP addresses of the {{}}**nginx-plus-app-1-vm**{{}} and {{}}**nginx-plus-app-2-vm**{{}} instances. (No action is required on the two application instances themselves.) - You can look up internal IP addresses in the **Internal IP** column of the table on the **Compute Engine > VM instances** summary page. 
+ You can look up internal IP addresses in the {{}}**Internal IP**{{}} column of the table on the {{}}**Compute Engine > VM instances**{{}} summary page. ```nginx upstream upstream_app_pool { @@ -598,7 +598,7 @@ Both the configuration and content files are available at the [NGINX GitHub repo nginx -s reload ``` -8. Verify the instance is working by accessing it at its external IP address. (As noted, we recommend blocking access, in production, to the external IPs of the app.) The external IP address for the instance appears on the **Compute Engine > VM instances** summary page, in the **External IP** column of the table. +8. Verify the instance is working by accessing it at its external IP address. (As noted, we recommend blocking access, in production, to the external IPs of the app.) The external IP address for the instance appears on the {{}}**Compute Engine > VM instances**{{}} summary page, in the {{}}**External IP**{{}} column of the table. - Access the **index.html** page either in a browser or by running this `curl` command. @@ -608,7 +608,7 @@ Both the configuration and content files are available at the [NGINX GitHub repo - Access the NGINX Plus live activity monitoring dashboard in a browser, at: - **https://_external‑IP‑address‑of‑NGINX‑Plus‑server_:8080/dashboard.html** + {{}}**https://_external-IP-address-of-NGINX-Plus-server_:8080/dashboard.html**{{}} 9. Proceed to [Task 3: Creating "Gold" Images](#gold). @@ -617,14 +617,14 @@ Both the configuration and content files are available at the [NGINX GitHub repo Create _gold images_, which are base images that GCE clones automatically when it needs to scale up the number of instances. They are derived from the instances you created in [Creating Source Instances](#source). Before creating the images, delete the source instances. This breaks the attachment between them and the disk. (you can't create an image from a disk attached to a VM instance). -1. Verify that the **NGINX Plus All‑Active‑LB** project is still selected in the Google Cloud Platform header bar. +1. Verify that the **NGINX {{}}Plus All-Active-LB**{{}} project is still selected in the Google Cloud Platform header bar. -2. Navigate to the **Compute Engine > VM instances** tab. +2. Navigate to the {{}}**Compute Engine > VM instances**{{}} tab. 3. In the table, select all three instances: - - If you created source instances from [VM (Ubuntu) images](#source-vm): **nginx‑plus‑app‑1**, **nginx‑plus‑app‑2**, and **nginx‑plus‑lb** - - If you created source instances from [prebuilt NGINX Plus images](#source-prebuilt): **nginx‑plus‑app‑1‑vm**, **nginx‑plus‑app‑2‑vm**, and **nginx‑plus‑lb‑vm** + - If you created source instances from [VM (Ubuntu) images](#source-vm): {{}}**nginx-plus-app-1**{{}}, {{}}**nginx-plus-app-2**{{}}, and {{}}**nginx-plus-lb**{{}} + - If you created source instances from [prebuilt NGINX Plus images](#source-prebuilt): {{}}**nginx-plus-app-1-vm**{{}}, {{}}**nginx-plus-app-2-vm**{{}}, and {{}}**nginx-plus-lb-vm**{{}} 4. Click STOP in the top toolbar to stop the instances. @@ -634,43 +634,43 @@ Create _gold images_, which are base images that GCE clones automatically when i **Note:** If the pop-up warns that it will delete the boot disk for any instance, cancel the deletion. Then, perform the steps below for each affected instance: - - Navigate to the **Compute Engine > VM instances** tab and click the instance in the Name column in the table. (The screenshot shows **nginx‑plus‑app‑1‑vm**.) 
+ - Navigate to the {{}}**Compute Engine > VM instances**{{}} tab and click the instance in the Name column in the table. (The screenshot shows {{}}**nginx-plus-app-1-vm**.){{}} Screenshot showing how to access the page where configuration details for a VM instance can be modified during deployment of NGINX Plus as the Google Cloud load balancer. - - On the **VM instances** page that opens, click EDIT at the top of the page. In fields that can be edited, the value changes from static text to text boxes, drop‑down menus, and checkboxes. - - In the **Boot disk and local disks** field, uncheck the checkbox labeled **Delete boot disk when instance is deleted**. + - On the {{}}**VM instances**{{}} page that opens, click EDIT at the top of the page. In fields that can be edited, the value changes from static text to text boxes, drop‑down menus, and checkboxes. + - In the {{}}**Boot disk and local disks**{{}} field, uncheck the checkbox labeled **Delete boot disk when instance is deleted**. - Click the  Save  button. - - On the **VM instances** summary page, select the instance in the table and click DELETE in the top toolbar to delete it. + - On the {{}}**VM instances**{{}} summary page, select the instance in the table and click DELETE in the top toolbar to delete it. -6. Navigate to the **Compute Engine > Images** tab. +6. Navigate to the {{}}**Compute Engine > Images**{{}} tab. 7. Click [+] CREATE IMAGE. -8. On the **Create an image** page that opens, modify or verify the fields as indicated: +8. On the {{}}**Create an image**{{}} page that opens, modify or verify the fields as indicated: - - **Name** – **nginx‑plus‑app‑1‑image** + - **Name** – {{}}**nginx-plus-app-1-image**{{}} - **Family** – Leave the field empty - **Description** – **NGINX Plus Application 1 Gold Image** - - **Encryption** – **Automatic (recommended)** (the default) + - **Encryption** – {{}}**Automatic (recommended)**{{}} (the default) - **Source** – **Disk** (the default) - - **Source disk** – **nginx‑plus‑app‑1** or **nginx‑plus‑app‑1‑vm**, depending on the method you used to create source instances (select the source instance from the drop‑down menu) + - {{}}**Source disk**{{}} – {{}}**nginx-plus-app-1**{{}} or {{}}**nginx-plus-app-1-vm**{{}}, depending on the method you used to create source instances (select the source instance from the drop‑down menu) 9. Click the  Create  button. 10. Repeat Steps 7 through 9 to create a second image with the following values (retain the default values in all other fields): - - **Name** – **nginx‑plus‑app‑2‑image** - - **Description** – **NGINX Plus Application 2 Gold Image** - - **Source disk** – **nginx‑plus‑app‑2** or **nginx‑plus‑app‑2‑vm**, depending on the method you used to create source instances (select the source instance from the drop‑down menu) + - **Name** – {{}}**nginx-plus-app-2-image**{{}} + - **Description** – **NGINX {{}}Plus Application 2 Gold Image**{{}} + - {{}}**Source disk**{{}} – {{}}**nginx-plus-app-2**{{}} or {{}}**nginx-plus-app-2-vm**{{}}, depending on the method you used to create source instances (select the source instance from the drop‑down menu) 11. 
Repeat Steps 7 through 9 to create a third image with the following values (retain the default values in all other fields): - - **Name** – **nginx‑plus‑lb‑image** + - **Name** – {{}}**nginx-plus-lb-image**{{}} - **Description** – **NGINX Plus LB Gold Image** - - **Source disk** – **nginx‑plus‑lb** or **nginx‑plus‑lb‑vm**, depending on the method you used to create source instances (select the source instance from the drop‑down menu) + - {{}}**Source disk**{{}} – {{}}**nginx-plus-lb**{{}} or {{}}**nginx-plus-lb-vm**{{}}, depending on the method you used to create source instances (select the source instance from the drop‑down menu) -12. Verify that the three images appear at the top of the table on the **Compute Engine > Images** tab. +12. Verify that the three images appear at the top of the table on the {{}}**Compute Engine > Images**{{}} tab. ## Task 4: Creating Instance Templates @@ -681,31 +681,31 @@ Create _instance templates_. They are the compute workloads in instance groups. ### Creating the First Application Instance Template -1. Verify that the **NGINX Plus All‑Active‑LB** project is still selected in the Google Cloud Platform header bar. +1. Verify that the **NGINX {{}}Plus All-Active-LB**{{}} project is still selected in the Google Cloud Platform header bar. -2. Navigate to the **Compute Engine > Instance templates** tab. +2. Navigate to the {{}}**Compute Engine > Instance templates**{{}} tab. 3. Click the  Create instance template  button. -4. On the **Create an instance template** page that opens, modify or verify the fields as indicated: +4. On the {{}}**Create an instance template**{{}} page that opens, modify or verify the fields as indicated: - - **Name** – **nginx‑plus‑app‑1‑instance‑template** - - **Machine type** – The appropriate size for the level of traffic you anticipate. We're selecting **micro**, which is ideal for testing purposes. - - **Boot disk** – Click **Change**. The **Boot disk** page opens. Perform the following steps: + - **Name** – {{}}**nginx-plus-app-1-instance-template**{{}} + - {{}}**Machine type**{{}} – The appropriate size for the level of traffic you anticipate. We're selecting **micro**, which is ideal for testing purposes. + - {{}}**Boot disk**{{}} – Click **Change**. The {{}}**Boot disk**{{}} page opens. Perform the following steps: - - Open the **Custom Images** subtab. + - Open the {{}}**Custom Images**{{}} subtab. Screenshot of the 'Boot disk' page in Google Cloud Platform for selecting the source instance of a new instance template, part of deploying NGINX Plus as the Google load balancer. - - Select **NGINX Plus All‑Active‑LB** from the drop-down menu labeled **Show images from**. + - Select **NGINX {{}}Plus All-Active-LB**{{}} from the drop-down menu labeled {{}}**Show images from**{{}}. - - Click the **nginx‑plus‑app‑1‑image** radio button. + - Click the {{}}**nginx-plus-app-1-image**{{}} radio button. - - Accept the default values in the **Boot disk type** and **Size (GB)** fields (**Standard persistent disk** and **10** respectively). + - Accept the default values in the {{}}**Boot disk type**{{}} and {{}}**Size (GB)**{{}} fields ({{}}**Standard persistent disk**{{}} and **10** respectively). - Click the  Select  button. - - **Identity and API access** – Unless you want more granular control over access, keep the defaults in the **Service account** field (**Compute Engine default service account**) and **Access scopes** field (**Allow default access**). 
+ - {{}}**Identity and API access**{{}} – Unless you want more granular control over access, keep the defaults in the {{}}**Service account**{{}} field (**Compute Engine default service account**) and {{}}**Access scopes**{{}} field ({{}}**Allow default access**{{}}). - **Firewall** – Verify that neither check box is checked (the default). The firewall rule invoked in the **Tags** field on the **Management** subtab (see Step 6 below) controls this type of access. 5. Select Management, disk, networking, SSH keys (indicated with a red arrow in the following screenshot) to open that set of subtabs. @@ -714,11 +714,11 @@ Create _instance templates_. They are the compute workloads in instance groups. 6. On the **Management** subtab, modify or verify the fields as indicated: - - **Description** – **NGINX Plus app‑1 Instance Template** - - **Tags** – **nginx‑plus‑http‑fw‑rule** - - **Preemptibility** – **Off (recommended)** (the default) - - **Automatic restart** – **On (recommended)** (the default) - - **On host maintenance** – **Migrate VM instance (recommended)** (the default) + - **Description** – **NGINX {{}}Plus app-1 Instance Template**{{}} + - **Tags** – {{}}**nginx-plus-http-fw-rule**{{}} + - **Preemptibility** – {{}}**Off (recommended)**{{}} (the default) + - {{}}**Automatic restart**{{}} – {{}}**On (recommended)**{{}} (the default) + - {{}}**On host maintenance**{{}} – {{}}**Migrate VM instance (recommended)**{{}} (the default) Screenshot of the Management subtab used during creation of a new VM instance template, part of deploying NGINX Plus as the Google load balancer. @@ -728,11 +728,11 @@ Create _instance templates_. They are the compute workloads in instance groups. Screenshot of the Disks subtab used during creation of a new VM instance template, part of deploying NGINX Plus as the Google Cloud load balancer. -8. On the **Networking** subtab, verify the default settings of **Ephemeral** for **External IP** and **Off** for **IP Forwarding**. +8. On the **Networking** subtab, verify the default settings of **Ephemeral** for {{}}**External IP**{{}} and **Off** for {{}}**IP Forwarding**{{}}. Screenshot of the Networking subtab used during creation of a new VM instance template, part of deploying NGINX Plus as the Google load balancer. -9. If you're using your own SSH public key instead of your default keys, paste the hexadecimal key string on the **SSH Keys** subtab. Right into the box that reads **Enter entire key data**. +9. If you're using your own SSH public key instead of your default keys, paste the hexadecimal key string on the {{}}**SSH Keys**{{}} subtab. Right into the box that reads {{}}**Enter entire key data**{{}}. Screenshot of the SSH Keys subtab used during creation of a new VM instance, part of deploying NGINX Plus as the Google Cloud Platform load balancer. @@ -741,25 +741,25 @@ Create _instance templates_. They are the compute workloads in instance groups. ### Creating the Second Application Instance Template -1. On the **Instance templates** summary page, click CREATE INSTANCE TEMPLATE. +1. On the {{}}**Instance templates**{{}} summary page, click CREATE INSTANCE TEMPLATE. 2. Repeat Steps 4 through 10 of Creating the First Application Instance Template to create a second application instance template. 
Use the same values as for the first instance template, except as noted: - In Step 4: - - **Name** – **nginx‑plus‑app‑2‑instance‑template** - - **Boot disk** – Click the **nginx‑plus‑app‑2‑image** radio button - - In Step 6, **Description** – **NGINX Plus app‑2 Instance Template** + - **Name** – {{}}**nginx-plus-app-2-instance-template**{{}} + - {{}}**Boot disk**{{}} – Click the {{}}**nginx-plus-app-2-image**{{}} radio button + - In Step 6, **Description** – **NGINX {{}}Plus app-2 Instance Template**{{}} ### Creating the Load-Balancing Instance Template -1. On the **Instance templates** summary page, click CREATE INSTANCE TEMPLATE. +1. On the {{}}**Instance templates**{{}} summary page, click CREATE INSTANCE TEMPLATE. 2. Repeat Steps 4 through 10 of Creating the First Application Instance Template to create the load‑balancing instance template. Use the same values as for the first instance template, except as noted: - In Step 4: - - **Name** – **nginx‑plus‑lb‑instance‑template**. - - **Boot disk** – Click the **nginx‑plus‑lb‑image** radio button + - **Name** – {{}}**nginx-plus-lb-instance-template**{{}}. + - {{}}**Boot disk**{{}} – Click the {{}}**nginx-plus-lb-image**{{}} radio button - In Step 6, **Description** – **NGINX Plus Load‑Balancing Instance Template** @@ -768,28 +768,28 @@ Create _instance templates_. They are the compute workloads in instance groups. Define the simple HTTP health check that GCE uses. This verifies that each NGINX Plus LB image is running (and to re-create any LB instance that isn't running). -1. Verify that the **NGINX Plus All‑Active‑LB** project is still selected in the Google Cloud Platform header bar. +1. Verify that the **NGINX {{}}Plus All-Active-LB**{{}} project is still selected in the Google Cloud Platform header bar. -2. Navigate to the **Compute Engine > Health checks** tab. +2. Navigate to the {{}}**Compute Engine > Health checks**{{}} tab. 3. Click the  Create a health check  button. -4. On the **Create a health check** page that opens, modify or verify the fields as indicated: +4. On the {{}}**Create a health check**{{}} page that opens, modify or verify the fields as indicated: - - **Name** – **nginx‑plus‑http‑health‑check** + - **Name** – {{}}**nginx-plus-http-health-check**{{}} - **Description** – **Basic HTTP health check to monitor NGINX Plus instances** - **Protocol** – **HTTP** (the default) - **Port** – **80** (the default) - - **Request path** – **/status‑old.html** + - {{}}**Request path**{{}} – {{}}**/status-old.html**{{}} -5. If the **Health criteria** section is not already open, click More. +5. If the {{}}**Health criteria**{{}} section is not already open, click More. 6. Modify or verify the fields as indicated: - - **Check interval** – **10 seconds** - - **Timeout** – **10 seconds** - - **Healthy threshold** – **2 consecutive successes** (the default) - - **Unhealthy threshold** – **10 consecutive failures** + - {{}}**Check interval**{{}} – {{}}**10 seconds**{{}} + - **Timeout** – {{}}**10 seconds**{{}} + - {{}}**Healthy threshold**{{}} – {{}}**2 consecutive successes**{{}} (the default) + - {{}}**Unhealthy threshold**{{}} – {{}}**10 consecutive failures**{{}} 7. Click the  Create  button. @@ -800,28 +800,28 @@ Define the simple HTTP health check that GCE uses. This verifies that each NGINX Create three independent instance groups, one for each type of function-specific instance. -1. Verify that the **NGINX Plus All‑Active‑LB** project is still selected in the Google Cloud Platform header bar. +1. 
Verify that the **NGINX {{}}Plus All-Active-LB**{{}} project is still selected in the Google Cloud Platform header bar. -2. Navigate to the **Compute Engine > Instance groups** tab. +2. Navigate to the {{}}**Compute Engine > Instance groups**{{}} tab. 3. Click the  Create instance group  button. ### Creating the First Application Instance Group -1. On the **Create a new instance group** page that opens, modify or verify the fields as indicated. Ignore fields that are not mentioned: +1. On the {{}}**Create a new instance group**{{}} page that opens, modify or verify the fields as indicated. Ignore fields that are not mentioned: - - **Name** – **nginx‑plus‑app‑1‑instance‑group** + - **Name** – {{}}**nginx-plus-app-1-instance-group**{{}} - **Description** – **Instance group to host NGINX Plus app-1 instances** - **Location** – - - Click the **Single‑zone** radio button (the default). - - **Zone** – The GCP zone you specified when you created source instances (Step 1 of [Creating the First Application Instance from a VM Image](#source-vm-app-1) or Step 5 of [Creating the First Application Instance from a Prebuilt Image](#source-prebuilt)). We're using **us‑west1‑a**. - - **Creation method** – **Use instance template** radio button (the default) - - **Instance template** – **nginx‑plus‑app‑1‑instance‑template** (select from the drop-down menu) + - Click the {{}}**Single-zone**{{}} radio button (the default). + - **Zone** – The GCP zone you specified when you created source instances (Step 1 of [Creating the First Application Instance from a VM Image](#source-vm-app-1) or Step 5 of [Creating the First Application Instance from a Prebuilt Image](#source-prebuilt)). We're using {{}}**us-west1-a**{{}}. + - {{}}**Creation method**{{}} – {{}}**Use instance template**{{}} radio button (the default) + - {{}}**Instance template**{{}} – {{}}**nginx-plus-app-1-instance-template**{{}} (select from the drop-down menu) - **Autoscaling** – **Off** (the default) - - **Number of instances** – **2** - - **Health check** – **nginx‑plus‑http‑health‑check** (select from the drop-down menu) - - **Initial delay** – **300 seconds** (the default) + - {{}}**Number of instances**{{}} – **2** + - {{}}**Health check**{{}} – {{}}**nginx-plus-http-health-check**{{}} (select from the drop-down menu) + - {{}}**Initial delay**{{}} – {{}}**300 seconds**{{}} (the default) 3. Click the  Create  button. @@ -834,25 +834,25 @@ Create three independent instance groups, one for each type of function-specific 2. Repeat the steps in [Creating the First Application Instance Group](#groups-app-1) to create a second application instance group. Specify the same values as for the first instance template, except for these fields: - - **Name** – **nginx‑plus‑app‑2‑instance‑group** + - **Name** – {{}}**nginx-plus-app-2-instance-group**{{}} - **Description** – **Instance group to host NGINX Plus app-2 instances** - - **Instance template** – **nginx‑plus‑app‑2‑instance‑template** (select from the drop-down menu) + - {{}}**Instance template**{{}} – {{}}**nginx-plus-app-2-instance-template**{{}} (select from the drop-down menu) ### Creating the Load-Balancing Instance Group -1. On the **Instance groups** summary page, click CREATE INSTANCE GROUP. +1. On the {{}}**Instance groups**{{}} summary page, click CREATE INSTANCE GROUP. 2. Repeat the steps in [Creating the First Application Instance Group](#groups-app-1) to create the load‑balancing instance group. 
Specify the same values as for the first instance template, except for these fields: - - **Name** – **nginx‑plus‑lb‑instance‑group** + - **Name** – {{}}**nginx-plus-lb-instance-group**{{}} - **Description** – **Instance group to host NGINX Plus load balancing instances** - - **Instance template** – **nginx‑plus‑lb‑instance‑template** (select from the drop-down menu) + - {{}}**Instance template**{{}} – {{}}**nginx-plus-lb-instance-template**{{}} (select from the drop-down menu) ### Updating and Testing the NGINX Plus Configuration -Update the NGINX Plus configuration on the two LB instances (**nginx‑plus‑lb‑instance‑group‑[a...z]**). It should list the internal IP addresses of the four application servers (two instances each of **nginx‑plus‑app‑1‑instance‑group‑[a...z]** and **nginx‑plus‑app‑2‑instance‑group‑[a...z]**). +Update the NGINX Plus configuration on the two LB instances ({{}}**nginx-plus-lb-instance-group-[a...z]**{{}}). It should list the internal IP addresses of the four application servers (two instances each of {{}}**nginx-plus-app-1-instance-group-[a...z]**{{}} and {{}}**nginx-plus-app-2-instance-group-[a...z]**{{}}). Repeat these instructions for both LB instances. @@ -860,10 +860,10 @@ Update the NGINX Plus configuration on the two LB instances (**nginx‑plus 1. Connect to the LB instance over SSH using the method of your choice. GCE provides a built-in mechanism: - - Navigate to the **Compute Engine > VM instances** tab. - - In the table, find the row for the instance. Click the triangle icon in the **Connect** column at the far right. Then, select a method (for example, **Open in browser window**). + - Navigate to the {{}}**Compute Engine > VM instances**{{}} tab. + - In the table, find the row for the instance. Click the triangle icon in the **Connect** column at the far right. Then, select a method (for example, {{}}**Open in browser window**{{}}). -2. In the SSH terminal, use your preferred text editor to edit **gce‑all‑active‑lb.conf**. Change the `server` directives in the `upstream` block to reference the internal IPs of the two **nginx‑plus‑app‑1‑instance‑group‑[a...z]** instances and the two **nginx‑plus‑app‑2‑instance‑group‑[a...z]** instances. You can check the addresses in the **Internal IP** column of the table on the **Compute Engine > VM instances** summary page. For example: +2. In the SSH terminal, use your preferred text editor to edit {{}}**gce-all-active-lb.conf**{{}}. Change the `server` directives in the `upstream` block to reference the internal IPs of the two {{}}**nginx-plus-app-1-instance-group-[a...z]**{{}} instances and the two {{}}**nginx-plus-app-2-instance-group-[a...z]**{{}} instances. You can check the addresses in the {{}}**Internal IP**{{}} column of the table on the {{}}**Compute Engine > VM instances**{{}} summary page. For example: ```nginx upstream upstream_app_pool { @@ -887,9 +887,9 @@ Update the NGINX Plus configuration on the two LB instances (**nginx‑plus nginx -s reload ``` -4. Verify that the four application instances are receiving traffic and responding. To do this, access the NGINX Plus live activity monitoring dashboard on the load-balancing instance (**nginx‑plus‑lb‑instance‑group‑[a...z]**). You can see the instance's external IP address on the **Compute Engine > VM instances** summary page in the **External IP** column of the table. +4. Verify that the four application instances are receiving traffic and responding. 
To do this, access the NGINX Plus live activity monitoring dashboard on the load-balancing instance ({{}}**nginx-plus-lb-instance-group-[a...z]**{{}}). You can see the instance's external IP address on the {{}}**Compute Engine > VM instances**{{}} summary page in the {{}}**External IP**{{}} column of the table. - **https://_LB‑external‑IP‑address_:8080/status.html** + {{}}**https://_LB-external-IP-address_:8080/status.html**{{}} 5. Verify that NGINX Plus is load balancing traffic among the four application instance groups. Do this by running this command on a separate client machine: @@ -904,50 +904,50 @@ Update the NGINX Plus configuration on the two LB instances (**nginx‑plus Set up a GCE network load balancer. It will distribute incoming client traffic to the NGINX Plus LB instances. First, reserve the static IP address the GCE network load balancer advertises to clients. -1. Verify that the **NGINX Plus All‑Active‑LB** project is still selected in the Google Cloud Platform header bar. +1. Verify that the **NGINX {{}}Plus All-Active-LB**{{}} project is still selected in the Google Cloud Platform header bar. -2. Navigate to the **Networking > External IP addresses** tab. +2. Navigate to the {{}}**Networking > External IP addresses**{{}} tab. 3. Click the  Reserve static address  button. -4. On the **Reserve a static address** page that opens, modify or verify the fields as indicated: +4. On the {{}}**Reserve a static address**{{}} page that opens, modify or verify the fields as indicated: - - **Name** – **nginx‑plus‑network‑lb‑static‑ip** + - **Name** – {{}}**nginx-plus-network-lb-static-ip**{{}} - **Description** – **Static IP address for Network LB frontend to NGINX Plus LB instances** - **Type** – Click the **Regional** radio button (the default) - - **Region** – The GCP zone you specified when you created source instances (Step 1 of [Creating the First Application Instance from a VM Image](#source-vm-app-1) or Step 5 of [Creating the First Application Instance from a Prebuilt Image](#source-prebuilt)). We're using **us‑west1**. - - **Attached to** – **None** (the default) + - **Region** – The GCP zone you specified when you created source instances (Step 1 of [Creating the First Application Instance from a VM Image](#source-vm-app-1) or Step 5 of [Creating the First Application Instance from a Prebuilt Image](#source-prebuilt)). We're using {{}}**us-west1**{{}}. + - {{}}**Attached to**{{}} – **None** (the default) 5. Click the  Reserve  button. Screenshot of the interface for reserving a static IP address for Google Compute Engine network load balancer. -6. Navigate to the **Networking > Load balancing** tab. +6. Navigate to the {{}}**Networking > Load balancing**{{}} tab. 7. Click the  Create load balancer  button. -8. On the **Load balancing** page that opens, click **Start configuration** in the **TCP Load Balancing** box. +8. On the {{}}**Load balancing**{{}} page that opens, click {{}}**Start configuration**{{}} in the {{}}**TCP Load Balancing**{{}} box. -9. On the page that opens, click the **From Internet to my VMs** and **No (TCP)** radio buttons (the defaults). +9. On the page that opens, click the {{}}**From Internet to my VMs**{{}} and {{}}**No (TCP)**{{}} radio buttons (the defaults). -10. Click the  Continue  button. The **New TCP load balancer** page opens. +10. Click the  Continue  button. The {{}}**New TCP load balancer**{{}} page opens. -11. In the **Name** field, type **nginx‑plus‑network‑lb‑frontend**. +11. 
In the **Name** field, type {{}}**nginx-plus-network-lb-frontend**{{}}. -12. Click **Backend configuration** in the left column to open the **Backend configuration** interface in the right column. Fill in the fields as indicated: +12. Click {{}}**Backend configuration**{{}} in the left column to open the {{}}**Backend configuration**{{}} interface in the right column. Fill in the fields as indicated: - - **Region** – The GCP region you specified in Step 4. We're using **us‑west1**. - - **Backends** – With **Select existing instance groups** selected, select **nginx‑plus‑lb‑instance‑group** from the drop-down menu - - **Backup pool** – **None** (the default) - - **Failover ratio** – **10** (the default) - - **Health check** – **nginx‑plus‑http‑health‑check** - - **Session affinity** – **Client IP** + - **Region** – The GCP region you specified in Step 4. We're using {{}}**us-west1**{{}}. + - **Backends** – With {{}}**Select existing instance groups**{{}} selected, select {{}}**nginx-plus-lb-instance-group**{{}} from the drop-down menu + - {{}}**Backup pool**{{}} – **None** (the default) + - {{}}**Failover ratio**{{}} – **10** (the default) + - {{}}**Health check**{{}} – {{}}**nginx-plus-http-health-check**{{}} + - {{}}**Session affinity**{{}} – {{}}**Client IP**{{}} Screenshot of the interface for backend configuration of GCE network load balancer, used during deployment of NGINX Plus as the Google Cloud Platform load balancer. -13. Select **Frontend configuration** in the left column. This opens up the **Frontend configuration** interface on the right column. +13. Select {{}}**Frontend configuration**{{}} in the left column. This opens up the {{}}**Frontend configuration**{{}} interface on the right column. -14. Create three **Protocol‑IP‑Port** tuples, each with: +14. Create three {{}}**Protocol-IP-Port**{{}} tuples, each with: - **Protocol** – **TCP** - **IP** – The address you reserved in Step 5, selected from the drop-down menu (if there is more than one address, select the one labeled in parentheses with the name you specified in Step 5) @@ -978,7 +978,7 @@ If load balancing is working properly, the unique **Server** field from the inde To verify that high availability is working: -1. Connect to one of the instances in the **nginx‑plus‑lb‑instance‑group** over SSH and run this command to force it offline: +1. Connect to one of the instances in the {{}}**nginx-plus-lb-instance-group**{{}} over SSH and run this command to force it offline: ```shell iptables -A INPUT -p tcp --destination-port 80 -j DROP diff --git a/content/nginx/deployment-guides/load-balance-third-party/apache-tomcat.md b/content/nginx/deployment-guides/load-balance-third-party/apache-tomcat.md index df0ff8cb6..0cc3af9b2 100644 --- a/content/nginx/deployment-guides/load-balance-third-party/apache-tomcat.md +++ b/content/nginx/deployment-guides/load-balance-third-party/apache-tomcat.md @@ -171,7 +171,7 @@ http { } ``` -You can also use wildcard notation to reference all files that pertain to a certain function or traffic type in the appropriate context block. For example, if you name all HTTP configuration files **_function_‑http.conf**, this is an appropriate `include` directive: +You can also use wildcard notation to reference all files that pertain to a certain function or traffic type in the appropriate context block. 
For example, if you name all HTTP configuration files {{}}**_function_-http.conf**{{}}, this is an appropriate `include` directive: ```nginx http { @@ -294,7 +294,7 @@ To configure load balancing, first create a named _upstream group_, which lists 2. In the `server` block for HTTPS traffic that we created in [Configuring Virtual Servers for HTTP and HTTPS Traffic](#virtual-servers), include two `location` blocks: - - The first one matches HTTPS requests in which the path starts with **/tomcat‑app/**, and proxies them to the **tomcat** upstream group we created in the previous step. + - The first one matches HTTPS requests in which the path starts with {{}}**/tomcat-app/**{{}}, and proxies them to the **tomcat** upstream group we created in the previous step. - The second one funnels all traffic to the first `location` block, by doing a temporary redirect of all requests for **"http://example.com/"**. @@ -409,7 +409,7 @@ To enable basic caching in NGINX Open Source< Directive documentation: [proxy_cache_path](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_path) -2. In the `location` block that matches HTTPS requests in which the path starts with **/tomcat‑app/**, include the `proxy_cache` directive to reference the cache created in the previous step. +2. In the `location` block that matches HTTPS requests in which the path starts with {{}}**/tomcat-app/**{{}}, include the `proxy_cache` directive to reference the cache created in the previous step. ```nginx # In the 'server' block for HTTPS traffic @@ -440,11 +440,11 @@ HTTP/2 is fully supported in both NGINX Open - In NGINX Plus R8 and later, NGINX Plus supports HTTP/2 by default. (Support for SPDY is deprecated as of that release). Specifically: - In NGINX Plus R11 and later, the **nginx‑plus** package continues to support HTTP/2 by default, but the **nginx‑plus‑extras** package available in previous releases is deprecated by [dynamic modules](https://www.nginx.com/products/nginx/dynamic-modules/). + In NGINX Plus R11 and later, the {{}}**nginx-plus**{{}} package continues to support HTTP/2 by default, but the {{}}**nginx-plus-extras**{{}} package available in previous releases is deprecated by [dynamic modules](https://www.nginx.com/products/nginx/dynamic-modules/). - For NGINX Plus R8 through R10, the **nginx‑plus** and **nginx‑plus‑extras** packages support HTTP/2 by default. + For NGINX Plus R8 through R10, the {{}}**nginx-plus**{{}} and {{}}**nginx-plus-extras**{{}} packages support HTTP/2 by default. - If using NGINX Plus R7, you must install the **nginx‑plus‑http2** package instead of the **nginx‑plus** or **nginx‑plus‑extras** package. + If using NGINX Plus R7, you must install the {{}}**nginx-plus-http2**{{}} package instead of the {{}}**nginx-plus**{{}} or {{}}**nginx-plus-extras**{{}} package. To enable HTTP/2 support, add the `http2` directive in the `server` block for HTTPS traffic that we created in [Configuring Virtual Servers for HTTP and HTTPS Traffic](#virtual-servers), so that it looks like this: @@ -636,7 +636,7 @@ Health checks are out-of-band HTTP req Because the `health_check` directive is placed in the `location` block, we can enable different health checks for each application. -1. In the `location` block that matches HTTPS requests in which the path starts with **/tomcat‑app/** (created in [Configuring Basic Load Balancing](#load-balancing-basic)), add the `health_check` directive. +1. 
In the `location` block that matches HTTPS requests in which the path starts with {{}}**/tomcat-app/**{{}} (created in [Configuring Basic Load Balancing](#load-balancing-basic)), add the `health_check` directive. Here we configure NGINX Plus to send an out-of-band request for the top‑level URI **/** (slash) to each of the servers in the **tomcat** upstream group every 2 seconds, which is more aggressive than the default 5‑second interval. If a server does not respond correctly, it is marked down and NGINX Plus stops sending requests to it until it passes five subsequent health checks in a row. We include the `match` parameter to define a nondefault set of health‑check tests. @@ -719,7 +719,7 @@ The quickest way to configure the module and the built‑in dashboard is to down Directive documentation: [include](https://nginx.org/en/docs/ngx_core_module.html#include) - If you are using the conventional configuration scheme and your existing `include` directives use the wildcard notation discussed in [Creating and Modifying Configuration Files](#config-files), you can either add a separate `include` directive for **status.conf** as shown above, or change the name of **status.conf** so it is captured by the wildcard in an existing `include` directive in the `http` block. For example, changing it to **status‑http.conf** means it is captured by the `include` directive for `*-http.conf`. + If you are using the conventional configuration scheme and your existing `include` directives use the wildcard notation discussed in [Creating and Modifying Configuration Files](#config-files), you can either add a separate `include` directive for **status.conf** as shown above, or change the name of **status.conf** so it is captured by the wildcard in an existing `include` directive in the `http` block. For example, changing it to {{}}**status-http.conf**{{}} means it is captured by the `include` directive for `*-http.conf`. 3. Comments in **status.conf** explain which directives you must customize for your deployment. In particular, the default settings in the sample configuration file allow anyone on any network to access the dashboard. We strongly recommend that you restrict access to the dashboard with one or more of the following methods: diff --git a/content/nginx/deployment-guides/load-balance-third-party/microsoft-exchange.md b/content/nginx/deployment-guides/load-balance-third-party/microsoft-exchange.md index 912302769..bfaabc179 100644 --- a/content/nginx/deployment-guides/load-balance-third-party/microsoft-exchange.md +++ b/content/nginx/deployment-guides/load-balance-third-party/microsoft-exchange.md @@ -371,7 +371,7 @@ To set up the conventional configuration scheme, perform these steps: Directive documentation: [include](https://nginx.org/en/docs/ngx_core_module.html#include) - You can also use wildcard notation to read all function‑specific files for either HTTP or TCP traffic into the appropriate context block. For example, if you name all HTTP configuration files **_function_‑http.conf** and all TCP configuration files **_function_‑stream.conf** (the filenames we specify in this section conform to this pattern), the wildcarded `include` directives are: + You can also use wildcard notation to read all function‑specific files for either HTTP or TCP traffic into the appropriate context block. 
For example, if you name all HTTP configuration files {{}}**_function_-http.conf**{{}} and all TCP configuration files {{}}**_function_-stream.conf**{{}} (the filenames we specify in this section conform to this pattern), the wildcarded `include` directives are: ```nginx http { @@ -383,9 +383,9 @@ To set up the conventional configuration scheme, perform these steps: } ``` -2. In the **/etc/nginx/conf.d** directory, create a new file called **exchange‑http.conf** for directives that pertain to Exchange HTTP and HTTPS traffic (or substitute the name you chose in Step 1). Copy in the directives from the `http` configuration block in the downloaded configuration file. Remember not to copy the first line (`http` `{`) or the closing curly brace (`}`) for the block, because the `http` block you created in Step 1 already has them. +2. In the **/etc/nginx/conf.d** directory, create a new file called {{}}**exchange-http.conf**{{}} for directives that pertain to Exchange HTTP and HTTPS traffic (or substitute the name you chose in Step 1). Copy in the directives from the `http` configuration block in the downloaded configuration file. Remember not to copy the first line (`http` `{`) or the closing curly brace (`}`) for the block, because the `http` block you created in Step 1 already has them. -3. Also in the **/etc/nginx/conf.d** directory, create a new file called **exchange‑stream.conf** for directives that pertain to Exchange TCP traffic (or substitute the name you chose in Step 1). Copy in the directives from the `stream` configuration block in the dowloaded configuration file. Again, do not copy the first line (`stream` `{`) or the closing curly brace (`}`). +3. Also in the **/etc/nginx/conf.d** directory, create a new file called {{}}**exchange-stream.conf**{{}} for directives that pertain to Exchange TCP traffic (or substitute the name you chose in Step 1). Copy in the directives from the `stream` configuration block in the dowloaded configuration file. Again, do not copy the first line (`stream` `{`) or the closing curly brace (`}`). For reference purposes, the text of the full configuration files is included in this document: @@ -468,7 +468,7 @@ The directives in the top‑level `stream` configuration block configure TCP loa } ``` -3. This `server` block defines the virtual server that proxies traffic on port 993 to the **exchange‑imaps** upstream group configured in Step 1. +3. This `server` block defines the virtual server that proxies traffic on port 993 to the {{}}**exchange-imaps**{{}} upstream group configured in Step 1. ```nginx # In the 'stream' block @@ -481,7 +481,7 @@ The directives in the top‑level `stream` configuration block configure TCP loa Directive documentation: [listen](https://nginx.org/en/docs/stream/ngx_stream_core_module.html#listen), [proxy_pass](https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_pass), [server](https://nginx.org/en/docs/stream/ngx_stream_core_module.html#server), [status_zone](https://nginx.org/en/docs/http/ngx_http_status_module.html#status_zone) -4. This `server` block defines the virtual server that proxies traffic on port 25 to the **exchange‑smtp** upstream group configured in Step 2. If you wish to change the port number from 25 (for example, to 587), change the `listen` directive. +4. This `server` block defines the virtual server that proxies traffic on port 25 to the {{}}**exchange-smtp**{{}} upstream group configured in Step 2. 
If you wish to change the port number from 25 (for example, to 587), change the `listen` directive. ```nginx # In the 'stream' block @@ -615,11 +615,11 @@ HTTP/2 is fully supported in NGINX Plus R7NGINX Plus R8 and later, NGINX Plus supports HTTP/2 by default, and does not support SPDY: -- In NGINX Plus R11 and later, the **nginx‑plus** package continues to support HTTP/2 by default, but the **nginx‑plus‑extras** package available in previous releases is deprecated by [dynamic modules](https://www.nginx.com/products/nginx/dynamic-modules/). +- In NGINX Plus R11 and later, the {{}}**nginx-plus**{{}} package continues to support HTTP/2 by default, but the {{}}**nginx-plus-extras**{{}} package available in previous releases is deprecated by [dynamic modules](https://www.nginx.com/products/nginx/dynamic-modules/). -- For NGINX Plus R8 through R10, the **nginx‑plus** and **nginx‑plus‑extras** packages support HTTP/2 by default. +- For NGINX Plus R8 through R10, the {{}}**nginx-plus**{{}} and {{}}**nginx-plus-extras**{{}} packages support HTTP/2 by default. -If using NGINX Plus R7, you must install the **nginx‑plus‑http2** package instead of the **nginx‑plus** or **nginx‑plus‑extras** package. +If using NGINX Plus R7, you must install the {{}}**nginx-plus-http2**{{}} package instead of the {{}}**nginx-plus**{{}} or {{}}**nginx-plus-extras**{{}} package. To enable HTTP/2 support, add the `http2` directive in the `server` block for HTTPS traffic that we created in [Configuring Virtual Servers for HTTP and HTTPS Traffic](#virtual-servers), so that it looks like this: @@ -926,7 +926,7 @@ Exchange CASs interact with various applications used by clients on different ty } ``` - - Mobile clients like iPhone and Android access the ActiveSync location (**/Microsoft‑Server‑ActiveSync**). + - Mobile clients like iPhone and Android access the ActiveSync location ({{}}**/Microsoft-Server-ActiveSync**{{}}). ```nginx # In the 'server' block for HTTPS traffic @@ -1092,7 +1092,7 @@ The quickest way to configure the module and the built‑in dashboard is to down include conf.d/status.conf; ``` - If you are using the conventional configuration scheme and your existing `include` directives use the wildcard notation discussed in [Creating and Modifying Configuration Files](#config-files), you can either add a separate `include` directive for **status.conf** as shown above, or change the name of **status.conf** so it is captured by the wildcard in an existing `include` directive in the `http` block. For example, changing it to **status‑http.conf** means it is captured by the `include` directive for `*-http.conf`. + If you are using the conventional configuration scheme and your existing `include` directives use the wildcard notation discussed in [Creating and Modifying Configuration Files](#config-files), you can either add a separate `include` directive for **status.conf** as shown above, or change the name of **status.conf** so it is captured by the wildcard in an existing `include` directive in the `http` block. For example, changing it to {{}}**status-http.conf**{{}} means it is captured by the `include` directive for `*-http.conf`. 
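
   As a minimal sketch of the renaming option (assuming the conventional wildcard `include` pattern described in [Creating and Modifying Configuration Files](#config-files); the filename is the one suggested above), no separate `include` line is then needed for the dashboard configuration:

   ```nginx
   # In the main nginx.conf, inside the 'http' block:
   # after renaming status.conf to status-http.conf, the existing wildcard
   # include for HTTP-related files picks it up automatically
   include conf.d/*-http.conf;
   ```
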
Directive documentation: [include](https://nginx.org/en/docs/ngx_core_module.html#include) diff --git a/content/nginx/deployment-guides/load-balance-third-party/node-js.md b/content/nginx/deployment-guides/load-balance-third-party/node-js.md index c37183f64..93c8b64da 100644 --- a/content/nginx/deployment-guides/load-balance-third-party/node-js.md +++ b/content/nginx/deployment-guides/load-balance-third-party/node-js.md @@ -175,7 +175,7 @@ http { Directive documentation: [include](https://nginx.org/en/docs/ngx_core_module.html#include) -You can also use wildcard notation to reference all files that pertain to a certain function or traffic type in the appropriate context block. For example, if you name all HTTP configuration files **_function_‑http.conf**, this is an appropriate `include` directive: +You can also use wildcard notation to reference all files that pertain to a certain function or traffic type in the appropriate context block. For example, if you name all HTTP configuration files {{}}**_function_-http.conf**{{}}, this is an appropriate `include` directive: ```nginx http { @@ -433,13 +433,13 @@ HTTP/2 is fully supported in both NGINX 1.9.5 and later, and NGINX Plus R7 and - If using NGINX Open Source, note that in version 1.9.5 and later the SPDY module is completely removed from the codebase and replaced with the [HTTP/2](https://nginx.org/en/docs/http/ngx_http_v2_module.html) module. After upgrading to version 1.9.5 or later, you can no longer configure NGINX Open Source to use SPDY. If you want to keep using SPDY, you need to compile NGINX Open Source from the sources in the [NGINX 1.8.x branch](https://nginx.org/en/download.html). -- If using NGINX Plus, in R11 and later the **nginx‑plus** package supports HTTP/2 by default, and the **nginx‑plus‑extras** package available in previous releases is deprecated by separate [dynamic modules](https://www.nginx.com/products/nginx/modules/) authored by NGINX. +- If using NGINX Plus, in R11 and later the {{}}**nginx-plus**{{}} package supports HTTP/2 by default, and the {{}}**nginx-plus-extras**{{}} package available in previous releases is deprecated by separate [dynamic modules](https://www.nginx.com/products/nginx/modules/) authored by NGINX. - In NGINX Plus R8 through R10, the **nginx‑plus** and **nginx‑plus‑extras** packages support HTTP/2 by default. + In NGINX Plus R8 through R10, the {{}}**nginx-plus**{{}} and {{}}**nginx-plus-extras**{{}} packages support HTTP/2 by default. In NGINX Plus R8 and later, NGINX Plus supports HTTP/2 by default, and does not support SPDY. - If using NGINX Plus R7, you must install the **nginx‑plus‑http2** package instead of the **nginx‑plus** or **nginx‑plus‑extras** package. + If using NGINX Plus R7, you must install the {{}}**nginx-plus-http2**{{}} package instead of the {{}}**nginx-plus**{{}} or {{}}**nginx-plus-extras**{{}} package. To enable HTTP/2 support, add the [http2](https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2) directive in the `server` block for HTTPS traffic that we created in [Configuring Virtual Servers for HTTP and HTTPS Traffic](#virtual-servers), so that it looks like this: @@ -459,7 +459,7 @@ To verify that HTTP/2 translation is working, you can use the "HTTP/2 and SPDY i The full configuration for basic load balancing appears here for your convenience. It goes in the `http` context. The complete file is available for [download](https://www.nginx.com/resource/conf/nodejs-basic.conf) from the NGINX website. 
-We recommend that you do not copy text directly from this document, but instead use the method described in [Creating and Modifying Configuration Files](#config-files) to include these directives in your configuration – add an `include` directive to the `http` context of the main **nginx.conf** file to read in the contents of **/etc/nginx/conf.d/nodejs‑basic.conf**. +We recommend that you do not copy text directly from this document, but instead use the method described in [Creating and Modifying Configuration Files](#config-files) to include these directives in your configuration – add an `include` directive to the `http` context of the main **nginx.conf** file to read in the contents of {{}}**/etc/nginx/conf.d/nodejs-basic.conf**{{}}. ```nginx proxy_cache_path /tmp/NGINX_cache/ keys_zone=backcache:10m; @@ -785,9 +785,9 @@ Parameter documentation: [service](https://nginx.org/en/docs/http/ngx_http_upstr The full configuration for enhanced load balancing appears here for your convenience. It goes in the `http` context. The complete file is available for [download](https://www.nginx.com/resource/conf/nodejs-enhanced.conf) from the NGINX website. -We recommend that you do not copy text directly from this document, but instead use the method described in [Creating and Modifying Configuration Files](#config-files) to include these directives in your configuration – namely, add an `include` directive to the `http` context of the main **nginx.conf** file to read in the contents of **/etc/nginx/conf.d/nodejs‑enhanced.conf**. +We recommend that you do not copy text directly from this document, but instead use the method described in [Creating and Modifying Configuration Files](#config-files) to include these directives in your configuration – namely, add an `include` directive to the `http` context of the main **nginx.conf** file to read in the contents of {{}}**/etc/nginx/conf.d/nodejs-enhanced.conf**{{}}. -**Note:** The `api` block in this configuration summary and the [downloadable](https://www.nginx.com/resource/conf/nodejs-enhanced.conf) **nodejs‑enhanced.conf** file is for the [API method](#reconfiguration-api) of dynamic reconfiguration. If you want to use the [DNS method](#reconfiguration-dns) instead, make the appropriate changes to the block. (You can also remove or comment out the directives for the NGINX Plus API in that case, but they do not conflict with using the DNS method and enable features other than dynamic reconfiguration.) +**Note:** The `api` block in this configuration summary and the [downloadable](https://www.nginx.com/resource/conf/nodejs-enhanced.conf) {{}}**nodejs-enhanced.conf**{{}} file is for the [API method](#reconfiguration-api) of dynamic reconfiguration. If you want to use the [DNS method](#reconfiguration-dns) instead, make the appropriate changes to the block. (You can also remove or comment out the directives for the NGINX Plus API in that case, but they do not conflict with using the DNS method and enable features other than dynamic reconfiguration.) 
```nginx proxy_cache_path /tmp/NGINX_cache/ keys_zone=backcache:10m; diff --git a/content/nginx/deployment-guides/load-balance-third-party/oracle-e-business-suite.md b/content/nginx/deployment-guides/load-balance-third-party/oracle-e-business-suite.md index 6e456bacd..47466094e 100644 --- a/content/nginx/deployment-guides/load-balance-third-party/oracle-e-business-suite.md +++ b/content/nginx/deployment-guides/load-balance-third-party/oracle-e-business-suite.md @@ -322,7 +322,7 @@ http { Directive documentation: [include](https://nginx.org/en/docs/ngx_core_module.html#include) -You can also use wildcard notation to reference all files that pertain to a certain function or traffic type in the appropriate context block. For example, if you name all HTTP configuration files **_function_‑http.conf**, this is an appropriate include directive: +You can also use wildcard notation to reference all files that pertain to a certain function or traffic type in the appropriate context block. For example, if you name all HTTP configuration files {{}}**_function_-http.conf**{{}}, this is an appropriate include directive: ```nginx http { @@ -505,13 +505,13 @@ HTTP/2 is fully supported in both NGINX 1.9.5 and later, and NGINX Plus R7 and - If using open source NGINX, note that in version 1.9.5 and later the SPDY module is completely removed from the NGINX codebase and replaced with the [HTTP/2](https://nginx.org/en/docs/http/ngx_http_v2_module.html) module. After upgrading to version 1.9.5 or later, you can no longer configure NGINX to use SPDY. If you want to keep using SPDY, you need to compile NGINX from the sources in the [NGINX 1.8 branch](https://nginx.org/en/download.html). -- If using NGINX Plus, in R11 and later the **nginx‑plus** package supports HTTP/2 by default, and the **nginx‑plus‑extras** package available in previous releases is deprecated by separate [dynamic modules](https://www.nginx.com/products/nginx/modules/) authored by NGINX. +- If using NGINX Plus, in R11 and later the {{}}**nginx-plus**{{}} package supports HTTP/2 by default, and the {{}}**nginx-plus-extras**{{}} package available in previous releases is deprecated by separate [dynamic modules](https://www.nginx.com/products/nginx/modules/) authored by NGINX. - In NGINX Plus R8 through R10, the **nginx‑plus** and **nginx‑plus‑extras** packages support HTTP/2 by default. + In NGINX Plus R8 through R10, the {{}}**nginx-plus**{{}} and {{}}**nginx-plus-extras**{{}} packages support HTTP/2 by default. In NGINX Plus R8 and later, NGINX Plus supports HTTP/2 by default, and does not support SPDY. - If using NGINX Plus R7, you must install the **nginx‑plus‑http2** package instead of the **nginx‑plus** or **nginx‑plus‑extras** package. + If using NGINX Plus R7, you must install the {{}}**nginx-plus-http2**{{}} package instead of the {{}}**nginx-plus**{{}} or {{}}**nginx-plus-extras**{{}} package. 
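For orientation, enabling HTTP/2 in these releases comes down to adding the `http2` parameter to the HTTPS `listen` directive (newer NGINX and NGINX Plus releases also provide a standalone `http2` directive). A minimal sketch, with placeholder server name and certificate paths:

```nginx
# In the 'http' block; the server name and certificate paths below are placeholders
server {
    listen 443 ssl http2;    # the 'http2' parameter enables HTTP/2 on this listener
    server_name www.example.com;

    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    # ... other directives from the guide's HTTPS virtual server ...
}
```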
To enable HTTP/2 support, add the `http2` directive in the `server` block for HTTPS traffic that we created in [Configuring Virtual Servers for HTTP and HTTPS Traffic](#virtual-servers), so that it looks like this: diff --git a/content/nginx/deployment-guides/load-balance-third-party/oracle-weblogic-server.md b/content/nginx/deployment-guides/load-balance-third-party/oracle-weblogic-server.md index de3b44837..872e44ebf 100644 --- a/content/nginx/deployment-guides/load-balance-third-party/oracle-weblogic-server.md +++ b/content/nginx/deployment-guides/load-balance-third-party/oracle-weblogic-server.md @@ -173,7 +173,7 @@ http { } ``` -You can also use wildcard notation to reference all files that pertain to a certain function or traffic type in the appropriate context block. For example, if you name all HTTP configuration files **_function_‑http.conf**, this is an appropriate include directive: +You can also use wildcard notation to reference all files that pertain to a certain function or traffic type in the appropriate context block. For example, if you name all HTTP configuration files {{}}**_function_-http.conf**{{}}, this is an appropriate include directive: ```nginx http { @@ -299,7 +299,7 @@ By putting NGINX Open Source or NGINX Plus in front of WebLogic Server servers 2. In the `server` block for HTTPS traffic that we created in [Configuring Virtual Servers for HTTP and HTTPS Traffic](#virtual-servers), include two `location` blocks: - - The first one matches HTTPS requests in which the path starts with **/weblogic‑app/**, and proxies them to the **weblogic** upstream group we created in the previous step. + - The first one matches HTTPS requests in which the path starts with {{}}**/weblogic-app/**{{}}, and proxies them to the **weblogic** upstream group we created in the previous step. - The second one funnels all traffic to the first `location` block, by doing a temporary redirect of all requests for **"http://example.com/"**. @@ -414,7 +414,7 @@ To create a very simple caching configuration: Directive documentation: [proxy_cache_path](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_path) -2. In the `location` block that matches HTTPS requests in which the path starts with **/weblogic‑app/**, include the `proxy_cache` directive to reference the cache created in the previous step. +2. In the `location` block that matches HTTPS requests in which the path starts with {{}}**/weblogic-app/**{{}}, include the `proxy_cache` directive to reference the cache created in the previous step. ```nginx # In the 'server' block for HTTPS traffic @@ -443,13 +443,13 @@ HTTP/2 is fully supported in both NGINX 1.9.5 and later, and NGINX Plus R7 and - If using NGINX Open Source, note that in version 1.9.5 and later the SPDY module is completely removed from the codebase and replaced with the [HTTP/2](https://nginx.org/en/docs/http/ngx_http_v2_module.html) module. After upgrading to version 1.9.5 or later, you can no longer configure NGINX Open Source to use SPDY. If you want to keep using SPDY, you need to compile NGINX Open Source from the sources in the [NGINX 1.8.x branch](https://nginx.org/en/download.html). -- If using NGINX Plus, in R11 and later the **nginx‑plus** package supports HTTP/2 by default, and the **nginx‑plus‑extras** package available in previous releases is deprecated by separate [dynamic modules](https://www.nginx.com/products/nginx/modules/) authored by NGINX. 
+- If using NGINX Plus, in R11 and later the {{}}**nginx-plus**{{}} package supports HTTP/2 by default, and the {{}}**nginx-plus-extras**{{}} package available in previous releases is deprecated by separate [dynamic modules](https://www.nginx.com/products/nginx/modules/) authored by NGINX. - In NGINX Plus R8 through R10, the **nginx‑plus** and **nginx‑plus‑extras** packages support HTTP/2 by default. + In NGINX Plus R8 through R10, the {{}}**nginx-plus**{{}} and {{}}**nginx-plus-extras**{{}} packages support HTTP/2 by default. In NGINX Plus R8 and later, NGINX Plus supports HTTP/2 by default, and does not support SPDY. - If using NGINX Plus R7, you must install the **nginx‑plus‑http2** package instead of the **nginx‑plus** or **nginx‑plus‑extras** package. + If using NGINX Plus R7, you must install the {{}}**nginx-plus-http2**{{}} package instead of the {{}}**nginx-plus**{{}} or {{}}**nginx-plus-extras**{{}} package. To enable HTTP/2 support, add the `http2` directive in the `server` block for HTTPS traffic that we created in [Configuring Virtual Servers for HTTP and HTTPS Traffic](#virtual-servers), so that it looks like this: @@ -601,7 +601,7 @@ Health checks are out‑of‑band HTTP requests sent to a server at fixed interv Because the `health_check` directive is placed in the `location` block, we can enable different health checks for each application. -1. In the `location` block that matches HTTPS requests in which the path starts with **/weblogic‑app/** (created in [Configuring Basic Load Balancing](#load-balancing-basic)), add the `health_check` directive. +1. In the `location` block that matches HTTPS requests in which the path starts with {{}}**/weblogic-app/**{{}} (created in [Configuring Basic Load Balancing](#load-balancing-basic)), add the `health_check` directive. Here we configure NGINX Plus to send an out‑of‑band request for the URI **/benefits** to each of the servers in the **weblogic** upstream group every 5 seconds (the default frequency). If a server does not respond correctly, it is marked down and NGINX Plus stops sending requests to it until it passes a subsequent health check. We include the `match` parameter to the `health_check` directive to define a nondefault set of health‑check tests. @@ -814,7 +814,7 @@ To enable dynamic reconfiguration of your upstream group of WebLogic Server app The full configuration for enhanced load balancing appears here for your convenience. It goes in the `http` context. The complete file is available for [download](https://www.nginx.com/resource/conf/weblogic-enhanced.conf) from the NGINX website. -We recommend that you do not copy text directly from this document, but instead use the method described in [Creating and Modifying Configuration Files](#config-files) to include these directives in your configuration – add an `include` directive to the `http` context of the main **nginx.conf** file to read in the contents of **/etc/nginx/conf.d/weblogic‑enhanced.conf**. +We recommend that you do not copy text directly from this document, but instead use the method described in [Creating and Modifying Configuration Files](#config-files) to include these directives in your configuration – add an `include` directive to the `http` context of the main **nginx.conf** file to read in the contents of {{}}**/etc/nginx/conf.d/weblogic-enhanced.conf**{{}}. 
```nginx proxy_cache_path /tmp/NGINX_cache/ keys_zone=backcache:10m; diff --git a/content/nginx/deployment-guides/load-balance-third-party/wildfly.md b/content/nginx/deployment-guides/load-balance-third-party/wildfly.md index 2e92a9243..92c0d72b4 100644 --- a/content/nginx/deployment-guides/load-balance-third-party/wildfly.md +++ b/content/nginx/deployment-guides/load-balance-third-party/wildfly.md @@ -169,7 +169,7 @@ http { Directive documentation: [include](https://nginx.org/en/docs/ngx_core_module.html#include) -You can also use wildcard notation to reference all files that pertain to a certain function or traffic type in the appropriate context block. For example, if you name all HTTP configuration files **_function_‑http.conf**, this is an appropriate include directive: +You can also use wildcard notation to reference all files that pertain to a certain function or traffic type in the appropriate context block. For example, if you name all HTTP configuration files {{}}**_function_-http.conf**{{}}, this is an appropriate include directive: ```nginx http { @@ -429,9 +429,9 @@ HTTP/2 is fully supported in both NGINX Open In [NGINX Plus R11]({{< ref "/nginx/releases.md#r11" >}}) and later, the **nginx-plus** package continues to support HTTP/2 by default, but the **nginx-plus-extras** package available in previous releases is deprecated and replaced by [dynamic modules]({{< ref "/nginx/admin-guide/dynamic-modules/dynamic-modules.md" >}}). - For NGINX Plus R8 through R10, the **nginx‑plus** and **nginx‑plus‑extras** packages support HTTP/2 by default. + For NGINX Plus R8 through R10, the {{}}**nginx-plus**{{}} and {{}}**nginx-plus-extras**{{}} packages support HTTP/2 by default. - If using NGINX Plus R7, you must install the **nginx‑plus‑http2** package instead of the **nginx‑plus** or **nginx‑plus‑extras** package. + If using NGINX Plus R7, you must install the {{}}**nginx-plus-http2**{{}} package instead of the {{}}**nginx-plus**{{}} or {{}}**nginx-plus-extras**{{}} package. To enable HTTP/2 support, add the `http2` directive in the `server` block for HTTPS traffic that we created in [Configuring Virtual Servers for HTTP and HTTPS Traffic](#virtual-servers), so that it looks like this: @@ -793,7 +793,7 @@ The full configuration for enhanced load balancing appears here for your conveni We recommend that you do not copy text directly from this document, but instead use the method described in [Creating and Modifying Configuration Files](#config-files) to include these directives in your configuration – add an `include` directive to the `http` context of the main **nginx.conf** file to read in the contents of /etc/nginx/conf.d/jboss-enhanced.conf. -**Note:** The `api` block in this configuration summary and the [downloadable](https://www.nginx.com/resource/conf/jboss-enhanced.conf) **jboss‑enhanced.conf** file is for the [API method](#reconfiguration-api) of dynamic reconfiguration. If you want to use the [DNS method](#reconfiguration-dns) instead, make the appropriate changes to the block. (You can also remove or comment out the directives for the NGINX Plus API in that case, but they do not conflict with using the DNS method and enable features other than dynamic reconfiguration.) +**Note:** The `api` block in this configuration summary and the [downloadable](https://www.nginx.com/resource/conf/jboss-enhanced.conf) {{}}**jboss-enhanced.conf**{{}} file is for the [API method](#reconfiguration-api) of dynamic reconfiguration. 
If you want to use the [DNS method](#reconfiguration-dns) instead, make the appropriate changes to the block. (You can also remove or comment out the directives for the NGINX Plus API in that case, but they do not conflict with using the DNS method and enable features other than dynamic reconfiguration.) ```nginx proxy_cache_path /tmp/NGINX_cache/ keys_zone=backcache:10m; diff --git a/content/nginx/deployment-guides/microsoft-azure/high-availability-standard-load-balancer.md b/content/nginx/deployment-guides/microsoft-azure/high-availability-standard-load-balancer.md index 792d14bbb..e804a2ca2 100644 --- a/content/nginx/deployment-guides/microsoft-azure/high-availability-standard-load-balancer.md +++ b/content/nginx/deployment-guides/microsoft-azure/high-availability-standard-load-balancer.md @@ -71,7 +71,7 @@ These instructions assume you have the following: - An Azure [account](https://azure.microsoft.com/en-us/free/). - An Azure [subscription](https://docs.microsoft.com/en-us/azure/azure-glossary-cloud-terminology?toc=/azure/virtual-network/toc.json#subscription). -- An Azure [resource group](https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/overview#resource-groups), preferably dedicated to the HA solution. In this guide, it is called **NGINX‑Plus‑HA**. +- An Azure [resource group](https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/overview#resource-groups), preferably dedicated to the HA solution. In this guide, it is called {{}}**NGINX-Plus-HA**{{}}. - An Azure [virtual network](https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-overview). - Six Azure VMs, four running NGINX Open Source and two running NGINX Plus (in each region where you deploy the solution). You need a paid or trial subscription for each NGINX Plus instance. @@ -100,10 +100,10 @@ With NGINX Open Source and NGINX Plus installed and configured on the Azure VMs 4. On the **Create load balancer** page that opens (to the **Basics** tab), enter the following values: - - **Subscription** – Name of your subscription (**NGINX‑Plus‑HA‑subscription** in this guide) - - **Resource group** – Name of your resource group (**NGINX‑Plus‑HA** in this guide) + - **Subscription** – Name of your subscription ({{}}**NGINX-Plus-HA-subscription**{{}} in this guide) + - **Resource group** – Name of your resource group ({{}}**NGINX-Plus-HA**{{}} in this guide) - **Name** – Name of your Standard Load Balancer (**lb** in this guide) - - **Region** – Name selected from the drop‑down menu (**(US) West US 2** in this guide) + - **Region** – Name selected from the drop‑down menu ({{}}**(US) West US 2**{{}} in this guide) - **Type** – **Public** - **SKU** – **Standard** - **Public IP address** – **Create new** @@ -139,21 +139,21 @@ With NGINX Open Source and NGINX Plus installed and configured on the Azure VMs Screenshot of selecting 'Backend pools' on details page for an Azure Standard Load Balancer -4. On the **lb | Backend Pools** page that opens, click **+ Add** in the upper left corner of the main pane. +4. On the {{}}**lb | Backend Pools**{{}} page that opens, click **+ Add** in the upper left corner of the main pane. -5. On the **Add backend pool** page that opens, enter the following values, then click the  Add  button: +5. 
On the {{}}**Add backend pool**{{}} page that opens, enter the following values, then click the  Add  button: - **Name** – Name of the new backend pool (**lb\_backend_pool** in this guide) - **IP version** – **IPv4** - - **Virtual machines** – **ngx‑plus‑1** and **ngx‑plus‑2** + - **Virtual machines** – {{}}**ngx-plus-1**{{}} and {{}}**ngx-plus-2**{{}} Screenshot of Azure 'Add backend pool' page for Standard Load Balancer After a few moments the virtual machines appear in the new backend pool. -6. Click **Health probes** in the left navigation column, and then **+ Add** in the upper left corner of the main pane on the **lb | Health probes** page that opens. +6. Click **Health probes** in the left navigation column, and then **+ Add** in the upper left corner of the main pane on the {{}}**lb | Health probes**{{}} page that opens. -7. On the **Add health probe** page that opens, enter the following values, then click the  OK  button. +7. On the {{}}**Add health probe**{{}} page that opens, enter the following values, then click the  OK  button. - **Name** – Name of the new backend pool (**lb\_probe** in this guide) - **Protocol** – **HTTP** or **HTTPS** @@ -164,20 +164,20 @@ With NGINX Open Source and NGINX Plus installed and configured on the Azure VMs Screenshot of Azure 'Add health probe' page for Standard Load Balancer - After a few moments the new probe appears in the table on the **lb | Health probes** page. This probe queries the NGINX Plus landing page every five seconds to check whether NGINX Plus is running. + After a few moments the new probe appears in the table on the {{}}**lb | Health probes**{{}} page. This probe queries the NGINX Plus landing page every five seconds to check whether NGINX Plus is running. -8. Click **Load balancing rules** in the left navigation column, and then **+ Add** in the upper left corner of the main pane on the **lb | Load balancing rules** page that opens. +8. Click {{}}**Load balancing rules**{{}} in the left navigation column, and then **+ Add** in the upper left corner of the main pane on the {{}}**lb | Load balancing rules**{{}} page that opens. -9. On the **Add load balancing rule** page that opens, enter or select the following values, then click the  OK  button. +9. On the {{}}**Add load balancing rule**{{}} page that opens, enter or select the following values, then click the  OK  button. 
- **Name** – Name of the rule (**lb\_rule** in this guide) - **IP version** – **IPv4** - - **Frontend IP address** – The Standard Load Balancer's public IP address, as reported in the **Public IP address** field on the **Overview** tag of the Standard Load Balancer's page (for an example, see [Step 3](#slb-configure-lb-overview) above); in this guide it is **51.143.107.x (LoadBalancerFrontEnd)** + - **Frontend IP address** – The Standard Load Balancer's public IP address, as reported in the {{}}**Public IP address**{{}} field on the **Overview** tag of the Standard Load Balancer's page (for an example, see [Step 3](#slb-configure-lb-overview) above); in this guide it is {{}}**51.143.107.x (LoadBalancerFrontEnd)**{{}} - **Protocol** – **TCP** - **Port** – **80** - **Backend port** – **80** - **Backend pool** – **lb_backend** - - **Health probe** – **lb_probe (HTTP:80)** + - **Health probe** – {{}}**lb_probe (HTTP:80)**{{}} - **Session persistence** – **None** - **Idle timeout (minutes)** – **4** - **TCP reset** – **Disabled** @@ -186,14 +186,14 @@ With NGINX Open Source and NGINX Plus installed and configured on the Azure VMs Screenshot of Azure 'Add load balancing rule' page for Standard Load Balancer - After a few moments the new rule appears in the table on the **lb | Load balancing rules** page. + After a few moments the new rule appears in the table on the {{}}**lb | Load balancing rules**{{}} page. ### Verifying Correct Operation -1. To verify that Standard Load Balancer is working correctly, open a new browser window and navigate to the IP address for the Standard Load Balancer front end, which appears in the **Public IP address** field on the **Overview** tab of the load balancer's page on the dashboard (for an example, see [Step 3](#slb-configure-lb-overview) of _Configuring the Standard Load Balancer_). +1. To verify that Standard Load Balancer is working correctly, open a new browser window and navigate to the IP address for the Standard Load Balancer front end, which appears in the {{}}**Public IP address**{{}} field on the **Overview** tab of the load balancer's page on the dashboard (for an example, see [Step 3](#slb-configure-lb-overview) of _Configuring the Standard Load Balancer_). -2. The default **Welcome to nginx!** page indicates that the Standard Load Balancer has successfully forwarded a request to one of the two NGINX Plus instances. +2. The default {{}}**Welcome to nginx!**{{}} page indicates that the Standard Load Balancer has successfully forwarded a request to one of the two NGINX Plus instances. Screenshot of 'Welcome to nginx!' page that verifies correct configuration of an Azure Standard Load Balancer @@ -210,7 +210,7 @@ Once you’ve tested that the Standard Load Balancer has been correctly deployed In this case, you need to set up Azure Traffic Manager for DNS‑based global server load balancing (GSLB) among the regions. The involves creating a DNS name for the Standard Load Balancer and registering it as an endpoint in Traffic Manager. -1. Navigate to the **Public IP addresses** page. (One way is to enter **Public IP addresses** in the search field of the Azure title bar and select that value in the **Services** section of the resulting drop‑down menu.) +1. Navigate to the {{}}**Public IP addresses**{{}} page. (One way is to enter {{}}**Public IP addresses**{{}} in the search field of the Azure title bar and select that value in the **Services** section of the resulting drop‑down menu.) 2. 
Click the name of the Standard Load Balancer's public IP address in the **Name** column of the table (here it is **public\_ip_lb**). @@ -218,33 +218,33 @@ In this case, you need to set up Azure Traffic Manager for DNS‑based global se 3. On the **public\_ip_lb** page that opens, click **Configuration** in the left navigation column. -4. Enter the DNS name for the Standard Load Balancer in the **DNS name label** field. In this guide, we're accepting the default, **public‑ip‑dns**. +4. Enter the DNS name for the Standard Load Balancer in the {{}}**DNS name label**{{}} field. In this guide, we're accepting the default, {{}}**public-ip-dns**{{}}. Screenshot of Azure page for public IP address of a Standard Load Balancer -5. Navigate to the **Traffic Manager profiles** tab. (One way is to enter **Traffic Manager profiles** in the search field of the Azure title bar and select that value in the **Services** section of the resulting drop‑down menu.) +5. Navigate to the {{}}**Traffic Manager profiles**{{}} tab. (One way is to enter {{}}**Traffic Manager profiles**{{}} in the search field of the Azure title bar and select that value in the **Services** section of the resulting drop‑down menu.) 6. Click **+ Add** in the upper left corner of the page. -7. On the **Create Traffic Manager profile** page that opens, enter or select the following values and click the  Create  button. +7. On the {{}}**Create Traffic Manager profile**{{}} page that opens, enter or select the following values and click the  Create  button. - **Name** – Name of the profile (**ngx** in this guide) - **Routing method** – **Performance** - - **Subscription** – **NGINX‑Plus‑HA‑subscription** in this guide - - **Resource group** – **NGINX‑Plus‑HA** in this guide + - **Subscription** – {{}}**NGINX-Plus-HA-subscription**{{}} in this guide + - **Resource group** – {{}}**NGINX-Plus-HA**{{}} in this guide _Azure-create-lb-create-Traffic-Manager-profile_ Screenshot of Azure 'Create Traffic Manager profile' page -8. It takes a few moments to create the profile. When it appears in the table on the **Traffic Manager profiles** page, click its name in the **Name** column. +8. It takes a few moments to create the profile. When it appears in the table on the {{}}**Traffic Manager profiles**{{}} page, click its name in the **Name** column. 9. On the **ngx** page that opens, click **Endpoints** in the left navigation column, then **+ Add** in the main part of the page. 10. On the **Add endpoint** window that opens, enter or select the following values and click the  Add  button. 
- - **Type** – **Azure endpoint** - - **Name** – Endpoint name (**ep‑lb‑west‑us** in this guide) - - **Target resource type** – **Public IP address** + - **Type** – {{}}**Azure endpoint**{{}} + - **Name** – Endpoint name ({{}}**ep-lb-west-us**{{}} in this guide) + - **Target resource type** – {{}}**Public IP address**{{}} - **Public IP address** – Name of the Standard Load Balancer's public IP address (**public\_ip_lb (51.143.107.x)** in this guide) - **Custom Header settings** – None in this guide diff --git a/content/nginx/deployment-guides/microsoft-azure/virtual-machines-for-nginx.md b/content/nginx/deployment-guides/microsoft-azure/virtual-machines-for-nginx.md index 487983211..ebbc17c53 100644 --- a/content/nginx/deployment-guides/microsoft-azure/virtual-machines-for-nginx.md +++ b/content/nginx/deployment-guides/microsoft-azure/virtual-machines-for-nginx.md @@ -23,7 +23,7 @@ These instructions assume you have: - An Azure [account](https://azure.microsoft.com/en-us/free/). - An Azure [subscription](https://docs.microsoft.com/en-us/azure/azure-glossary-cloud-terminology?toc=/azure/virtual-network/toc.json#subscription). -- An Azure [resource group](https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-overview#resource-groups). In this guide, it is called **NGINX‑Plus‑HA**. +- An Azure [resource group](https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-overview#resource-groups). In this guide, it is called {{}}**NGINX-Plus-HA**{{}}. - An Azure [virtual network](https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-overview). - If using the instructions in [Automating Installation with Ansible](#automate-ansible), basic Linux system administration skills, including installation of Linux software from vendor‑supplied packages, and file creation and editing. @@ -48,25 +48,25 @@ In addition, to install NGINX software by following the linked instructions, you 4. In the **Create a virtual machine** window that opens, enter the requested information on the **Basics** tab. In this guide, we're using the following values: - - **Subscription** – **NGINX‑Plus‑HA‑subscription** - - **Resource group** – **NGINX‑Plus‑HA** - - **Virtual machine name** – **ngx‑plus‑1** + - **Subscription** – {{}}**NGINX-Plus-HA-subscription**{{}} + - **Resource group** – {{}}**NGINX-Plus-HA**{{}} + - **Virtual machine name** – {{}}**ngx-plus-1**{{}} - The value **ngx‑plus‑1** is one of the six used for VMs in [Active-Active HA for NGINX Plus on Microsoft Azure Using the Azure Standard Load Balancer]({{< ref "high-availability-standard-load-balancer.md" >}}). See Step 7 below for the other instance names. + The value {{}}**ngx-plus-1**{{}} is one of the six used for VMs in [Active-Active HA for NGINX Plus on Microsoft Azure Using the Azure Standard Load Balancer]({{< ref "high-availability-standard-load-balancer.md" >}}). See Step 7 below for the other instance names. - - **Region** – **(US) West US 2** - - **Availability options** – **No infrastructure redundancy required** + - **Region** – {{}}**(US) West US 2**{{}} + - **Availability options** – {{}}**No infrastructure redundancy required**{{}} This option is sufficient for a demo like the one in this guide. For production deployments, you might want to select a more robust option; we recommend deploying a copy of each VM in a different Availability Zone. For more information, see the [Azure documentation](https://docs.microsoft.com/en-us/azure/availability-zones/az-overview). 
- - **Image** – **Ubuntu Server 18.04 LTS** + - **Image** – {{}}**Ubuntu Server 18.04 LTS**{{}} - **Azure Spot instance** – **No** - - **Size** – **B1s** (click Select size to access the **Select a VM size** window, click the **B1s** row, and click the  Select  button to return to the **Basics** tab) - - **Authentication type** – **SSH public key** + - **Size** – **B1s** (click Select size to access the {{}}**Select a VM size**{{}} window, click the **B1s** row, and click the  Select  button to return to the **Basics** tab) + - **Authentication type** – {{}}**SSH public key**{{}} - **Username** – **nginx_azure** - - **SSH public key source** – **Generate new key pair** (the other choices on the drop‑down menu are to use an existing key stored in Azure or an existing public key) + - **SSH public key source** – {{}}**Generate new key pair**{{}} (the other choices on the drop‑down menu are to use an existing key stored in Azure or an existing public key) - **Key pair name** – **nginx_key** - - **Public inbound ports** – **Allow selected ports** - - **Select inbound ports** – Select from the drop-down menu: **SSH (22)** and **HTTP (80)**, plus **HTTPS (443)** if you plan to configure NGINX and NGINX Plus for SSL/TLS + - **Public inbound ports** – {{}}**Allow selected ports**{{}} + - **Select inbound ports** – Select from the drop-down menu: {{}}**SSH (22)**{{}} and {{}}**HTTP (80)**{{}}, plus {{}}**HTTPS (443)**{{}} if you plan to configure NGINX and NGINX Plus for SSL/TLS screenshot of 'Basics' tab on Azure 'Create a virtual machine' page @@ -75,7 +75,7 @@ In addition, to install NGINX software by following the linked instructions, you For simplicity, we recommend allocating **Standard** public IP addresses for all six VMs used in the deployment. At the time of initial publication of this guide, the hourly cost for six such VMs was only $0.008 more than for six VMs with Basic addresses; for current pricing, see the [Microsoft documentation](https://azure.microsoft.com/en-us/pricing/details/ip-addresses/). - To allocate a **Standard** public IP address, open the **Networking** tab on the **Create a virtual machine** window. Click Create new below the **Public IP** field. In the **Create public IP address** column that opens at right, click the **Standard** radio button under **SKU**. You can change the value in the **Name** field; here we are accepting the default created by Azure, **ngx‑plus‑1‑ip**. Click the ** OK ** button. + To allocate a **Standard** public IP address, open the **Networking** tab on the **Create a virtual machine** window. Click Create new below the **Public IP** field. In the {{}}**Create public IP address**{{}} column that opens at right, click the **Standard** radio button under **SKU**. You can change the value in the **Name** field; here we are accepting the default created by Azure, {{}}**ngx-plus-1-ip**{{}}. Click the ** OK ** button. screenshot of 'Networking' tab on Azure 'Create a virtual machine' page @@ -87,7 +87,7 @@ In addition, to install NGINX software by following the linked instructions, you To change any settings, open the appropriate tab. If the settings are correct, click the  Create  button. - If you chose in [Step 4](#create-vm_Basics) to generate a new key pair, a **Generate new key pair** window pops up. Click the  Download key and create private resource  button. + If you chose in [Step 4](#create-vm_Basics) to generate a new key pair, a {{}}**Generate new key pair**{{}} window pops up. Click the  Download key and create private resource  button. 
screenshot of validation message on Azure 'Create a virtual machine' page @@ -107,7 +107,7 @@ In addition, to install NGINX software by following the linked instructions, you For **ngx-plus-2**, it is probably simplest to repeat Steps 2 through 6 above (or purchase a second prebuilt VM in the [Microsoft Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps?search=NGINX%20Plus)). - For the NGINX Open Source VMs, you can create them individually using Steps 2 through 6. Alternatively, create them based on an Azure image. To do so, follow Steps 2 through 6 above to create a source VM (naming it **nginx‑oss**), [install the NGINX Open Source software](#install-nginx) on it, and then follow the instructions in [Optional: Creating an NGINX Open Source Image](#create-nginx-oss-image). + For the NGINX Open Source VMs, you can create them individually using Steps 2 through 6. Alternatively, create them based on an Azure image. To do so, follow Steps 2 through 6 above to create a source VM (naming it {{}}**nginx-oss**{{}}), [install the NGINX Open Source software](#install-nginx) on it, and then follow the instructions in [Optional: Creating an NGINX Open Source Image](#create-nginx-oss-image). ## Connecting to a Virtual Machine @@ -118,7 +118,7 @@ To install and configure NGINX Open Source or NGINX Plus on a VM, you need to o screenshot of Azure 'Virtual machines' page with list of VMs -2. On the page that opens (**ngx‑plus‑1** in this guide), note the VM's public IP address (in the **Public IP address** field in the right column). +2. On the page that opens ({{}}**ngx-plus-1**{{}} in this guide), note the VM's public IP address (in the {{}}**Public IP address**{{}} field in the right column). screenshot of details page for 'ngx-plus-1' VM in Azure @@ -130,7 +130,7 @@ To install and configure NGINX Open Source or NGINX Plus on a VM, you need to o where - - `` is the name of the file containing the private key paired with the public key you entered in the **SSH public key** field in Step 4 of _Creating a Microsoft Azure Virtual Machine_. + - `` is the name of the file containing the private key paired with the public key you entered in the {{}}**SSH public key**{{}} field in Step 4 of _Creating a Microsoft Azure Virtual Machine_. - `` is the name you entered in the **Username** field in Step 4 of _Creating a Microsoft Azure Virtual Machine_ (in this guide it is **nginx_azure**). - `` is the address you looked up in the previous step. @@ -169,7 +169,7 @@ NGINX publishes a unified Ansible role for NGINX Open Source and NGINX Plus on ansible-galaxy install nginxinc.nginx ``` -4. (NGINX Plus only) Copy the **nginx‑repo.key** and **nginx‑repo.crt** files provided by NGINX to **~/.ssh/ngx‑certs/**. +4. (NGINX Plus only) Copy the {{}}**nginx-repo.key**{{}} and {{}}**nginx-repo.crt**{{}} files provided by NGINX to {{}}**~/.ssh/ngx-certs/**{{}}. 5. Create a file called **playbook.yml** with the following contents: @@ -196,7 +196,7 @@ To streamline the process of installing NGINX Open Source on multiple VMs, you c 2. Navigate to the **Virtual machines** page, if you are not already there. -2. In the list of VMs, click the name of the one to use as a source image (in this guide, we have called it **ngx‑oss**). Remember that NGINX Open Source needs to be installed on it already. +2. In the list of VMs, click the name of the one to use as a source image (in this guide, we have called it {{}}**ngx-oss**{{}}). Remember that NGINX Open Source needs to be installed on it already. 3. 
On the page than opens, click the **Capture** icon in the top navigation bar. @@ -207,10 +207,10 @@ To streamline the process of installing NGINX Open Source on multiple VMs, you c Then select the following values: - **Name** – Keep the current value. - - **Resource group** – Select the appropriate resource group from the drop‑down menu. Here it is **NGINX‑Plus‑HA**. + - **Resource group** – Select the appropriate resource group from the drop‑down menu. Here it is {{}}**NGINX-Plus-HA**{{}}. - **Automatically delete this virtual machine after creating the image** – We recommend checking the box, since you can't do anything more with the image anyway. - **Zone resiliency** – **On**. - - **Type the virtual machine name** – Name of the source VM (**ngx‑oss** in this guide). + - **Type the virtual machine name** – Name of the source VM ({{}}**ngx-oss**{{}} in this guide). Click the  Create  button. diff --git a/content/nginx/deployment-guides/migrate-hardware-adc/citrix-adc-configuration.md b/content/nginx/deployment-guides/migrate-hardware-adc/citrix-adc-configuration.md index 404b54b18..de397e3b0 100644 --- a/content/nginx/deployment-guides/migrate-hardware-adc/citrix-adc-configuration.md +++ b/content/nginx/deployment-guides/migrate-hardware-adc/citrix-adc-configuration.md @@ -327,7 +327,7 @@ NGINX Plus and Citrix ADC handle high availability (HA) in similar but slightly Citrix ADC handles the monitoring and failover of the VIP in a proprietary way. - For [on‑premises deployments]({{< ref "nginx/admin-guide/high-availability/ha-keepalived.md" >}}), NGINX Plus uses a separate software package called ****nginx‑ha‑keepalived**** to handle the VIP and the failover process for an active‑passive pair of NGINX Plus servers. The package implements the VRRP protocol to handle the VIP. Limited [active‑active]({{< ref "nginx/admin-guide/high-availability/ha-keepalived-nodes.md" >}}) scenarios are also possible with the **nginx‑ha‑keepalived** package. + For [on‑premises deployments]({{< ref "nginx/admin-guide/high-availability/ha-keepalived.md" >}}), NGINX Plus uses a separate software package called {{}}****nginx-ha-keepalived****{{}} to handle the VIP and the failover process for an active‑passive pair of NGINX Plus servers. The package implements the VRRP protocol to handle the VIP. Limited [active‑active]({{< ref "nginx/admin-guide/high-availability/ha-keepalived-nodes.md" >}}) scenarios are also possible with the {{}}**nginx-ha-keepalived**{{}} package. Solutions for high availability of NGINX Plus in cloud environments are also available, including these: diff --git a/content/nginx/deployment-guides/migrate-hardware-adc/f5-big-ip-configuration.md b/content/nginx/deployment-guides/migrate-hardware-adc/f5-big-ip-configuration.md index d7b82fc44..1cc9c9455 100644 --- a/content/nginx/deployment-guides/migrate-hardware-adc/f5-big-ip-configuration.md +++ b/content/nginx/deployment-guides/migrate-hardware-adc/f5-big-ip-configuration.md @@ -99,7 +99,7 @@ In addition to these networking concepts, there are two other important technolo BIG-IP LTM uses a built‑in HA mechanism to handle the failover. - For [on‑premises deployments]({{< ref "nginx/admin-guide/high-availability/ha-keepalived.md" >}}), NGINX Plus uses a separate software package called ****nginx‑ha‑keepalived**** to handle the VIP and the failover process for an active‑passive pair of NGINX Plus servers. The package implements the VRRP protocol to handle the VIP. 
Limited [active‑active]({{< ref "nginx/admin-guide/high-availability/ha-keepalived-nodes.md" >}}) scenarios are also possible with the **nginx‑ha‑keepalived** package. + For [on‑premises deployments]({{< ref "nginx/admin-guide/high-availability/ha-keepalived.md" >}}), NGINX Plus uses a separate software package called {{}}****nginx-ha-keepalived****{{}} to handle the VIP and the failover process for an active‑passive pair of NGINX Plus servers. The package implements the VRRP protocol to handle the VIP. Limited [active‑active]({{< ref "nginx/admin-guide/high-availability/ha-keepalived-nodes.md" >}}) scenarios are also possible with the {{}}**nginx-ha-keepalived**{{}} package. Solutions for high availability of NGINX Plus in cloud environments are also available, including these: diff --git a/content/nginx/deployment-guides/nginx-plus-high-availability-chef.md b/content/nginx/deployment-guides/nginx-plus-high-availability-chef.md index 5dce8601b..3e709cab9 100644 --- a/content/nginx/deployment-guides/nginx-plus-high-availability-chef.md +++ b/content/nginx/deployment-guides/nginx-plus-high-availability-chef.md @@ -27,9 +27,9 @@ To set up the highly available active/passive cluster, we’re using the [HA sol ## Modifying the NGINX Cookbook -First we set up the Chef files for installing of the NGINX Plus HA package (**nginx‑ha‑keepalived**) and creating the `keepalived` configuration file, **keepalive.conf**. +First we set up the Chef files for installing of the NGINX Plus HA package ({{}}**nginx-ha-keepalived**{{}}) and creating the `keepalived` configuration file, **keepalive.conf**. -1. Modify the existing **plus_package** recipe to include package and configuration templates for the HA solution, by adding the following code to the bottom of the **plus_package.rb** file (per the instructions in the previous post, the file is in the **~/chef‑zero/playground/cookbooks/nginx/recipes** directory). +1. Modify the existing **plus_package** recipe to include package and configuration templates for the HA solution, by adding the following code to the bottom of the **plus_package.rb** file (per the instructions in the previous post, the file is in the {{}}**~/chef-zero/playground/cookbooks/nginx/recipes**{{}} directory). We are using the **eth1** interface on each NGINX host, which makes the code a bit more complicated than if we used **eth0**. In case you are using **eth0**, the relevant code appears near the top of the file, commented out. @@ -37,7 +37,7 @@ First we set up the Chef files for installing of the NGINX Plus HA package (**n - It looks up the IP address of the **eth1** interface on the node where NGINX Plus is being installed, and assigns the value to the `origip` variable so it can be passed to the template. - It finds the other node in the HA pair by using Chef’s `search` function to iterate through all Chef nodes, then looks up the IP address for that node’s **eth1** interface and assigns the address to the `ha_pair_ips` variable. - - It installs the **nginx‑ha‑keepalived** package, registers the `keepalived` service with Chef, and generates the **keepalived.conf** configuration file as a template, passing in the values of the `origip` and `ha_pair_ips` variables. + - It installs the {{}}**nginx-ha-keepalived**{{}} package, registers the `keepalived` service with Chef, and generates the **keepalived.conf** configuration file as a template, passing in the values of the `origip` and `ha_pair_ips` variables. 
```nginx if node['nginx']['enable_ha_mode'] == 'true' @@ -102,7 +102,7 @@ First we set up the Chef files for installing of the NGINX Plus HA package (**n You can download the [full recipe file](https://www.nginx.com/resource/conf/plus_package.rb-chef-recipe) from the NGINX, Inc. website. -2. Create the Chef template for creating **keepalived.conf**, by copying the following content to a new template file, **nginx_plus_keepalived.conf.erb**, in the **~/chef‑zero/playground/cookbooks/nginx/templates/default** directory. +2. Create the Chef template for creating **keepalived.conf**, by copying the following content to a new template file, **nginx_plus_keepalived.conf.erb**, in the {{}}**~/chef-zero/playground/cookbooks/nginx/templates/default**{{}} directory. We’re using a combination of variables and attributes to pass the necessary information to **keepalived.conf**. We’ll set the attributes in the next step. Here we set the two variables in the template file to the host IP addresses that were set with the `variables` directive in the **plus_package.rb** recipe (modified in the previous step): @@ -186,7 +186,7 @@ First we set up the Chef files for installing of the NGINX Plus HA package (**n ``` -3. Create a role that sets attributes used in the recipe and template files created in the previous steps, by copying the following contents to a new role file, **nginx_plus_ha.rb** in the **~/chef‑zero/playground/roles** directory. +3. Create a role that sets attributes used in the recipe and template files created in the previous steps, by copying the following contents to a new role file, **nginx_plus_ha.rb** in the {{}}**~/chef-zero/playground/roles**{{}} directory. Four attributes need to be set, and in the role we set the following three: @@ -290,13 +290,13 @@ Now we bootstrap the nodes and get them ready for the installation. Note that th ` -2. Create a local copy of the node definition file, which we’ll edit as appropriate for the node we bootstrapped in the previous step, **chef‑test‑1**: +2. Create a local copy of the node definition file, which we’ll edit as appropriate for the node we bootstrapped in the previous step, {{}}**chef-test-1**{{}}: ```nginx root@chef-server:~/chef-zero/playground# knife node show chef-test-1 --format json > nodes/chef-test-1.json ``` -3. Edit **chef‑test‑1.json** to have the following contents. In particular, we’re updating the run list and setting the `ha_primary` attribute, as required for the HA deployment. +3. Edit {{}}**chef-test-1.json**{{}} to have the following contents. In particular, we’re updating the run list and setting the `ha_primary` attribute, as required for the HA deployment. ```json { @@ -323,7 +323,7 @@ Now we bootstrap the nodes and get them ready for the installation. Note that th Updated Node chef-test-1! ``` -5. Log in on the **chef‑test‑1** node and run the `chef-client` command to get everything configured: +5. Log in on the {{}}**chef-test-1**{{}} node and run the `chef-client` command to get everything configured: ```text username@chef-test-1:~$ sudo chef-client @@ -616,7 +616,7 @@ Enter your password: 10.100.10.102 Chef Client finished, 18/50 resources updated in 10 seconds` -If we look at **keepalived.conf** at this point, we see that there is a peer set in the `unicast_peer` section. But the following command shows that **chef‑test‑2**, which we intend to be the secondary node, is also assigned the VIP (10.100.10.50). 
This is because we haven’t yet updated the Chef configuration on **chef‑test‑1** to make its `keepalived` aware of the secondary node. +If we look at **keepalived.conf** at this point, we see that there is a peer set in the `unicast_peer` section. But the following command shows that {{}}**chef-test-2**{{}}, which we intend to be the secondary node, is also assigned the VIP (10.100.10.50). This is because we haven’t yet updated the Chef configuration on {{}}**chef-test-1**{{}} to make its `keepalived` aware of the secondary node. username@chef-test-2:~$ ip addr show eth1 3: eth1: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 @@ -630,7 +630,7 @@ If we look at **keepalived.conf** at this point, we see that there is a peer set ### Synchronizing the Nodes -To make `keepalived` on **chef‑test‑1** aware of **chef‑test‑2** and its IP address, we rerun the `chef-client` command on **chef‑test‑1**: +To make `keepalived` on {{}}**chef-test-1**{{}} aware of {{}}**chef-test-2**{{}} and its IP address, we rerun the `chef-client` command on {{}}**chef-test-1**{{}}: ```text username@chef-test-1:~$ sudo chef-client @@ -741,7 +741,7 @@ Chef Client finished, 2/47 resources updated in 05 seconds ``` -We see that **chef‑test‑1** is still assigned the VIP: +We see that {{}}**chef-test-1**{{}} is still assigned the VIP: ```nginx username@chef-test-1:~$ ip addr show eth1 @@ -755,7 +755,7 @@ We see that **chef‑test‑1** is still assigned the VIP: valid_lft forever preferred_lft forever ``` -And **chef‑test‑2**, as the secondary node, is now assigned only its physical IP address: +And {{}}**chef-test-2**{{}}, as the secondary node, is now assigned only its physical IP address: ```nginx username@chef-test-2:~$ ip addr show eth1 diff --git a/content/nginx/deployment-guides/setting-up-nginx-demo-environment.md b/content/nginx/deployment-guides/setting-up-nginx-demo-environment.md index dc85877b0..a14367e24 100644 --- a/content/nginx/deployment-guides/setting-up-nginx-demo-environment.md +++ b/content/nginx/deployment-guides/setting-up-nginx-demo-environment.md @@ -21,7 +21,7 @@ Some commands require `root` privilege. If appropriate for your environment, pre ## Configuring NGINX Open Source for web serving -The steps in this section configure an NGINX Open Source instance as a web server to return a page like the following, which specifies the server name, address, and other information. The page is defined in the **demo‑index.html** configuration file you create in Step 4 below. +The steps in this section configure an NGINX Open Source instance as a web server to return a page like the following, which specifies the server name, address, and other information. The page is defined in the {{}}**demo-index.html**{{}} configuration file you create in Step 4 below. diff --git a/content/nginx/deployment-guides/single-sign-on/oidc-njs/active-directory-federation-services.md b/content/nginx/deployment-guides/single-sign-on/oidc-njs/active-directory-federation-services.md index 870198ac6..6d0173780 100644 --- a/content/nginx/deployment-guides/single-sign-on/oidc-njs/active-directory-federation-services.md +++ b/content/nginx/deployment-guides/single-sign-on/oidc-njs/active-directory-federation-services.md @@ -50,9 +50,9 @@ The instructions assume you have the following: Create an AD FS application for NGINX Plus: -1. Open the AD FS Management window. In the navigation column on the left, right‑click on the **Application Groups** folder and select **Add Application Group** from the drop‑down menu. +1. 
Open the AD FS Management window. In the navigation column on the left, right‑click on the **Application Groups** folder and select {{}}**Add Application Group**{{}} from the drop‑down menu. - The **Add Application Group Wizard** window opens. The left navigation column shows the steps you will complete to add an application group. + The {{}}**Add Application Group Wizard**{{}} window opens. The left navigation column shows the steps you will complete to add an application group. 2. In the **Welcome** step, type the application group name in the **Name** field. Here we are using **ADFSSSO**. In the **Template** field, select **Server application** under **Standalone applications**. Click the  Next >  button. @@ -63,7 +63,7 @@ Create an AD FS application for NGINX Plus: 1. Make a note of the value in the **Client Identifier** field. You will add it to the NGINX Plus configuration in [Step 4 of _Configuring NGINX Plus_](#nginx-plus-variables).
- 2. In the **Redirect URI** field, type the URI of the NGINX Plus instance including the port number, and ending in **/\_codexch**. Here we’re using **https://my‑nginx.example.com:443/\_codexch**. Click the  Add  button. + 2. In the **Redirect URI** field, type the URI of the NGINX Plus instance including the port number, and ending in **/\_codexch**. Here we’re using {{}}**https://my-nginx.example.com:443/\_codexch**{{}}. Click the  Add  button. **Notes:** @@ -75,7 +75,7 @@ Create an AD FS application for NGINX Plus: -4. In the **Configure Application Credentials** step, click the **Generate a shared secret** checkbox. Make a note of the secret that AD FS generates (perhaps by clicking the **Copy to clipboard** button and pasting the clipboard content into a file). You will add the secret to the NGINX Plus configuration in [Step 4 of _Configuring NGINX Plus_](#nginx-plus-variables). Click the  Next >  button. +4. In the {{}}**Configure Application Credentials**{{}} step, click the {{}}**Generate a shared secret**{{}} checkbox. Make a note of the secret that AD FS generates (perhaps by clicking the {{}}**Copy to clipboard**{{}} button and pasting the clipboard content into a file). You will add the secret to the NGINX Plus configuration in [Step 4 of _Configuring NGINX Plus_](#nginx-plus-variables). Click the  Next >  button. @@ -87,7 +87,7 @@ Create an AD FS application for NGINX Plus: Configure NGINX Plus as the OpenID Connect relying party: -1. Create a clone of the [**nginx‑openid‑connect**](https://github.com/nginxinc/nginx-openid-connect) GitHub repository. +1. Create a clone of the {{}}[**nginx-openid-connect**](https://github.com/nginxinc/nginx-openid-connect){{}} GitHub repository. ```shell git clone https://github.com/nginxinc/nginx-openid-connect @@ -150,7 +150,7 @@ In a browser, enter the address of your NGINX Plus instance and try to log in u ## Troubleshooting -See the [**Troubleshooting**](https://github.com/nginxinc/nginx-openid-connect#troubleshooting) section at the **nginx‑openid‑connect** repository on GitHub. +See the [**Troubleshooting**](https://github.com/nginxinc/nginx-openid-connect#troubleshooting) section at the {{}}**nginx-openid-connect**{{}} repository on GitHub. ### Revision History diff --git a/content/nginx/deployment-guides/single-sign-on/oidc-njs/cognito.md b/content/nginx/deployment-guides/single-sign-on/oidc-njs/cognito.md index 6fa141df9..94e62f0f8 100644 --- a/content/nginx/deployment-guides/single-sign-on/oidc-njs/cognito.md +++ b/content/nginx/deployment-guides/single-sign-on/oidc-njs/cognito.md @@ -59,7 +59,7 @@ Create a new application for NGINX Plus in the Cognito GUI: -3. In the **Create a user pool** window that opens, type a value in the **Pool name** field (in this guide, it's **nginx‑plus‑pool**), then click the Review defaults button. +3. In the **Create a user pool** window that opens, type a value in the **Pool name** field (in this guide, it's {{}}**nginx-plus-pool**{{}}), then click the Review defaults button. @@ -70,11 +70,11 @@ Create a new application for NGINX Plus in the Cognito GUI: 5. On the **App clients** tab which opens, click Add an app client. -6. On the **Which app clients will have access to this user pool?** window which opens, enter a value (in this guide, **nginx‑plus‑app**) in the **App client name** field. Make sure the **Generate client secret** box is checked, then click the  Create app client  button. +6. 
On the **Which app clients will have access to this user pool?** window which opens, enter a value (in this guide, {{}}**nginx-plus-app**{{}}) in the {{}}**App client name**{{}} field. Make sure the {{}}**Generate client secret**{{}} box is checked, then click the  Create app client  button. -7. On the confirmation page which opens, click **Return to pool details** to return to the **Review** tab. On that tab click the  Create pool  button at the bottom. (The screenshot in [Step 4](#cognito-review-tab) shows the button.) +7. On the confirmation page which opens, click {{}}**Return to pool details**{{}} to return to the **Review** tab. On that tab click the  Create pool  button at the bottom. (The screenshot in [Step 4](#cognito-review-tab) shows the button.) 8. On the details page which opens to confirm the new user pool was successfully created, make note of the value in the **Pool Id** field; you will add it to the NGINX Plus configuration in [Step 3 of _Configuring NGINX Plus_](#nginx-plus-variables). @@ -82,36 +82,36 @@ Create a new application for NGINX Plus in the Cognito GUI: 'General settings' tab in Amazon Cognito GUI -9. Click **Users and groups** in the left navigation column. In the interface that opens, designate the users (or group of users, on the **Groups** tab) who will be able to use SSO for the app being proxied by NGINX Plus. For instructions, see the Cognito documentation about [creating users](https://docs.aws.amazon.com/cognito/latest/developerguide/how-to-create-user-accounts.html), [importing users](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools-using-import-tool.html), or [adding a group](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools-user-groups.html). +9. Click {{}}**Users and groups**{{}} in the left navigation column. In the interface that opens, designate the users (or group of users, on the **Groups** tab) who will be able to use SSO for the app being proxied by NGINX Plus. For instructions, see the Cognito documentation about [creating users](https://docs.aws.amazon.com/cognito/latest/developerguide/how-to-create-user-accounts.html), [importing users](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools-using-import-tool.html), or [adding a group](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools-user-groups.html). 'Users and groups' tab in Amazon Cognito GUI -10. Click **App clients** in the left navigation bar. On the tab that opens, click the Show Details button in the box labeled with the app client name (in this guide, **nginx‑plus‑app**). +10. Click **App clients** in the left navigation bar. On the tab that opens, click the Show Details button in the box labeled with the app client name (in this guide, {{}}**nginx-plus-app**{{}}). 'App clients' tab in Amazon Cognito GUI -11. On the details page that opens, make note of the values in the **App client id** and **App client secret** fields. You will add them to the NGINX Plus configuration in [Step 3 of _Configuring NGINX Plus_](#nginx-plus-variables). +11. On the details page that opens, make note of the values in the {{}}**App client id**{{}} and {{}}**App client secret**{{}} fields. You will add them to the NGINX Plus configuration in [Step 3 of _Configuring NGINX Plus_](#nginx-plus-variables). -12. Click **App client settings** in the left navigation column. In the tab that opens, perform the following steps: +12. Click {{}}**App client settings**{{}} in the left navigation column. 
In the tab that opens, perform the following steps: - 1. In the **Enabled Identity Providers** section, click the **Cognito User Pool** checkbox (the **Select all** box gets checked automatically). - 2. In the **Callback URL(s)** field of the **Sign in and sign out URLs** section, type the URI of the NGINX Plus instance including the port number, and ending in **/\_codexch**. Here we’re using **https://my‑nginx‑plus.example.com:443/_codexch**. + 1. In the {{}}**Enabled Identity Providers**{{}} section, click the {{}}**Cognito User Pool**{{}} checkbox (the **Select all** box gets checked automatically). + 2. In the **Callback URL(s)** field of the {{}}**Sign in and sign out URLs**{{}} section, type the URI of the NGINX Plus instance including the port number, and ending in **/\_codexch**. Here we’re using {{}}**https://my-nginx-plus.example.com:443/_codexch**{{}}. **Notes:** - For production, we strongly recommend that you use SSL/TLS (port 443). - The port number is mandatory even when you're using the default port for HTTP (80) or HTTPS (443). - 3. In the **OAuth 2.0** section, click the **Authorization code grant** checkbox under **Allowed OAuth Flows** and the **email**, **openid**, and **profile** checkboxes under **Allowed OAuth Scopes**. + 3. In the **OAuth 2.0** section, click the {{}}**Authorization code grant**{{}} checkbox under {{}}**Allowed OAuth Flows**{{}} and the **email**, **openid**, and **profile** checkboxes under {{}}**Allowed OAuth Scopes**{{}}. 4. Click the  Save changes  button. -13. Click **Domain name** in the left navigation column. In the tab that opens, type a domain prefix in the **Domain prefix** field under **Amazon Cognito domain** (in this guide, **my‑nginx‑plus**). Click the  Save changes  button. +13. Click **Domain name** in the left navigation column. In the tab that opens, type a domain prefix in the **Domain prefix** field under {{}}**Amazon Cognito domain**{{}} (in this guide, {{}}**my-nginx-plus**{{}}). Click the  Save changes  button. @@ -120,7 +120,7 @@ Create a new application for NGINX Plus in the Cognito GUI: Configure NGINX Plus as the OpenID Connect relying party: -1. Create a clone of the [**nginx‑openid‑connect**](https://github.com/nginxinc/nginx-openid-connect) GitHub repository. +1. Create a clone of the {{}}[**nginx-openid-connect**](https://github.com/nginxinc/nginx-openid-connect){{}} GitHub repository. ```shell git clone https://github.com/nginxinc/nginx-openid-connect @@ -135,12 +135,12 @@ Configure NGINX Plus as the OpenID Connect relying party: 3. In your preferred text editor, open **/etc/nginx/conf.d/frontend.conf**. Change the second parameter of each of the following [set](http://nginx.org/en/docs/http/ngx_http_rewrite_module.html#set) directives to the specified value. - The `` variable is the full value in the **Domain prefix** field in [Step 13 of _Configuring Amazon Cognito_](#cognito-domain-name). In this guide it is **https://my‑nginx‑plus.auth.us‑east‑2.amazoncognito.com**. + The `` variable is the full value in the **Domain prefix** field in [Step 13 of _Configuring Amazon Cognito_](#cognito-domain-name). In this guide it is {{}}**https://my-nginx-plus.auth.us-east-2.amazoncognito.com**{{}}. 
- `set $oidc_authz_endpoint` – `/oauth2/authorize` - `set $oidc_token_endpoint` – `/oauth2/token` - - `set $oidc_client` – Value in the **App client id** field from [Step 11 of _Configuring Amazon Cognito_](#cognito-app-client-id-secret) (in this guide, `2or4cs8bjo1lkbq6143tqp6ist`) - - `set $oidc_client_secret` – Value in the **App client secret** field from [Step 11 of _Configuring Amazon Cognito_](#cognito-app-client-id-secret) (in this guide, `1k63m3nrcnu...`) + - `set $oidc_client` – Value in the {{}}**App client id**{{}} field from [Step 11 of _Configuring Amazon Cognito_](#cognito-app-client-id-secret) (in this guide, `2or4cs8bjo1lkbq6143tqp6ist`) + - `set $oidc_client_secret` – Value in the {{}}**App client secret**{{}} field from [Step 11 of _Configuring Amazon Cognito_](#cognito-app-client-id-secret) (in this guide, `1k63m3nrcnu...`) - `set $oidc_hmac_key` – A unique, long, and secure phrase 4. Configure the JWK file. The file's URL is @@ -154,7 +154,7 @@ Configure NGINX Plus as the OpenID Connect relying party: In this guide, the URL is - **https://cognito‑idp.us‑east‑2.amazonaws.com/us‑east‑2_mLoGHJpOs/.well‑known/jwks.json**. + {{}}**https://cognito-idp.us-east-2.amazonaws.com/us-east-2_mLoGHJpOs/.well-known/jwks.json**{{}}. The method for configuring the JWK file depends on which version of NGINX Plus you are using: @@ -187,7 +187,7 @@ In a browser, enter the address of your NGINX Plus instance and try to log in u ## Troubleshooting -See the [**Troubleshooting**](https://github.com/nginxinc/nginx-openid-connect#troubleshooting) section at the **nginx‑openid‑connect** repository on GitHub. +See the [**Troubleshooting**](https://github.com/nginxinc/nginx-openid-connect#troubleshooting) section at the {{}}**nginx-openid-connect**{{}} repository on GitHub. ### Revision History diff --git a/content/nginx/deployment-guides/single-sign-on/oidc-njs/keycloak.md b/content/nginx/deployment-guides/single-sign-on/oidc-njs/keycloak.md index f904a1811..25c8af487 100644 --- a/content/nginx/deployment-guides/single-sign-on/oidc-njs/keycloak.md +++ b/content/nginx/deployment-guides/single-sign-on/oidc-njs/keycloak.md @@ -60,15 +60,15 @@ Create a Keycloak client for NGINX Plus in the Keycloak GUI: 3. On the **Add Client** page that opens, enter or select these values, then click the  Save  button. - - **Client ID** – The name of the application for which you're enabling SSO (Keycloak refers to it as the “client”). Here we're using **NGINX‑Plus**. - - **Client Protocol** – **openid‑connect**. + - **Client ID** – The name of the application for which you're enabling SSO (Keycloak refers to it as the “client”). Here we're using {{}}**NGINX-Plus**{{}}. + - **Client Protocol** – {{}}**openid-connect**{{}}. 4. On the **NGINX Plus** page that opens, enter or select these values on the Settings tab: - **Access Type** – **confidential** - - **Valid Redirect URIs** – The URI of the NGINX Plus instance, including the port number, and ending in **/\_codexch** (in this guide it is **https://my‑nginx.example.com:443/_codexch**) + - **Valid Redirect URIs** – The URI of the NGINX Plus instance, including the port number, and ending in **/\_codexch** (in this guide it is {{}}**https://my-nginx.example.com:443/_codexch**{{}}) **Notes:** @@ -84,14 +84,14 @@ Create a Keycloak client for NGINX Plus in the Keycloak GUI: 6. Click the Roles tab, then click the **Add Role** button in the upper right corner of the page that opens. -7. 
On the **Add Role** page that opens, type a value in the **Role Name** field (here it is **nginx‑keycloak‑role**) and click the  Save  button. +7. On the **Add Role** page that opens, type a value in the **Role Name** field (here it is {{}}**nginx-keycloak-role**{{}}) and click the  Save  button. 8. In the left navigation column, click **Users**. On the **Users** page that opens, either click the name of an existing user, or click the **Add user** button in the upper right corner to create a new user. For complete instructions, see the [Keycloak documentation](https://www.keycloak.org/docs/latest/server_admin/index.html#user-management). -9. On the management page for the user (here, **user01**), click the Role Mappings tab. On the page that opens, select **NGINX‑Plus** on the **Client Roles** drop‑down menu. Click **nginx‑keycloak‑role** in the **Available Roles** box, then click the **Add selected** button below the box. The role then appears in the **Assigned Roles** and **Effective Roles** boxes, as shown in the screenshot. +9. On the management page for the user (here, **user01**), click the Role Mappings tab. On the page that opens, select {{}}**NGINX-Plus**{{}} on the **Client Roles** drop‑down menu. Click {{}}**nginx-keycloak-role**{{}} in the **Available Roles** box, then click the **Add selected** button below the box. The role then appears in the **Assigned Roles** and **Effective Roles** boxes, as shown in the screenshot. @@ -101,7 +101,7 @@ Create a Keycloak client for NGINX Plus in the Keycloak GUI: Configure NGINX Plus as the OpenID Connect relying party: -1. Create a clone of the [**nginx‑openid‑connect**](https://github.com/nginxinc/nginx-openid-connect) GitHub repository. +1. Create a clone of the {{}}[**nginx-openid-connect**](https://github.com/nginxinc/nginx-openid-connect){{}} GitHub repository. ```shell git clone https://github.com/nginxinc/nginx-openid-connect @@ -165,7 +165,7 @@ In a browser, enter the address of your NGINX Plus instance and try to log in u ## Troubleshooting -See the [**Troubleshooting**](https://github.com/nginxinc/nginx-openid-connect#troubleshooting) section at the **nginx‑openid‑connect** repository on GitHub. +See the [**Troubleshooting**](https://github.com/nginxinc/nginx-openid-connect#troubleshooting) section at the {{}}**nginx-openid-connect**{{}} repository on GitHub. ### Revision History diff --git a/content/nginx/deployment-guides/single-sign-on/oidc-njs/onelogin.md b/content/nginx/deployment-guides/single-sign-on/oidc-njs/onelogin.md index 9d2c00fdb..b6c8fde9a 100644 --- a/content/nginx/deployment-guides/single-sign-on/oidc-njs/onelogin.md +++ b/content/nginx/deployment-guides/single-sign-on/oidc-njs/onelogin.md @@ -48,15 +48,15 @@ Create a new application for NGINX Plus in the OneLogin GUI: -3. On the **Find Applications** page that opens, type **OpenID Connect** in the search box. Click on the **OpenID Connect (OIDC)** row that appears. +3. On the **Find Applications** page that opens, type {{}}**OpenID Connect**{{}} in the search box. Click on the **OpenID Connect (OIDC)** row that appears. -4. On the **Add OpenId Connect (OIDC)** page that opens, change the value in the **Display Name** field to **NGINX Plus** and click the  Save  button. +4. On the **Add OpenId Connect (OIDC)** page that opens, change the value in the **Display Name** field to {{}}**NGINX Plus**{{}} and click the  Save  button. -5. When the save completes, a new set of choices appears in the left navigation bar. Click **Configuration**. 
In the **Redirect URI's** field, type the URI of the NGINX Plus instance including the port number, and ending in **/\_codexch** (in this guide it is **https://my‑nginx.example.com:443/_codexch**). Then click the  Save  button. +5. When the save completes, a new set of choices appears in the left navigation bar. Click **Configuration**. In the **Redirect URI's** field, type the URI of the NGINX Plus instance including the port number, and ending in **/\_codexch** (in this guide it is {{}}**https://my-nginx.example.com:443/_codexch**{{}}). Then click the  Save  button. **Notes:** @@ -66,12 +66,12 @@ Create a new application for NGINX Plus in the OneLogin GUI: -6. When the save completes, click **SSO** in the left navigation bar. Click **Show client secret** below the **Client Secret** field. Record the values in the **Client ID** and **Client Secret** fields. You will add them to the NGINX Plus configuration in [Step 4 of _Configuring NGINX Plus_](#nginx-plus-variables). +6. When the save completes, click **SSO** in the left navigation bar. Click {{}}**Show client secret**{{}} below the **Client Secret** field. Record the values in the **Client ID** and **Client Secret** fields. You will add them to the NGINX Plus configuration in [Step 4 of _Configuring NGINX Plus_](#nginx-plus-variables). -7. Assign users to the application (in this guide, **NGINX Plus**) to enable them to access it for SSO. OneLogin recommends using [roles](https://onelogin.service-now.com/kb_view_customer.do?sysparm_article=KB0010606) for this purpose. You can access the **Roles** page under  Users  in the title bar. +7. Assign users to the application (in this guide, {{}}**NGINX Plus**{{}}) to enable them to access it for SSO. OneLogin recommends using [roles](https://onelogin.service-now.com/kb_view_customer.do?sysparm_article=KB0010606) for this purpose. You can access the **Roles** page under  Users  in the title bar. diff --git a/content/nginx/deployment-guides/single-sign-on/oidc-njs/ping-identity.md b/content/nginx/deployment-guides/single-sign-on/oidc-njs/ping-identity.md index d4901c65a..495b2ebad 100644 --- a/content/nginx/deployment-guides/single-sign-on/oidc-njs/ping-identity.md +++ b/content/nginx/deployment-guides/single-sign-on/oidc-njs/ping-identity.md @@ -56,30 +56,30 @@ Create a new application for NGINX Plus: 1. Log in to your Ping Identity account. The administrative dashboard opens automatically. In this guide, we show the PingOne for Enterprise dashboard, and for brevity refer simply to ”PingOne”. -2. Click  APPLICATIONS  in the title bar, and on the **My Applications** page that opens, click **OIDC** and then the **+ Add Application** button. +2. Click  APPLICATIONS  in the title bar, and on the **My Applications** page that opens, click **OIDC** and then the {{}}**+ Add Application**{{}} button. -3. The **Add OIDC Application** window pops up. Click the ADVANCED CONFIGURATION box, and then the  Next  button. +3. The {{}}**Add OIDC Application**{{}} window pops up. Click the ADVANCED CONFIGURATION box, and then the  Next  button. -4. In section 1 (PROVIDE DETAILS ABOUT YOUR APPLICATION), type a name in the **APPLICATION NAME** field and a short description in the **SHORT DESCRIPTION** field. Here, we're using **nginx‑plus‑application** and **NGINX Plus**. Choose a value from the **CATEGORY** drop‑down menu; here we’re using **Information Technology**. You can also add an icon if you wish. Click the  Next  button. +4. 
In section 1 (PROVIDE DETAILS ABOUT YOUR APPLICATION), type a name in the **APPLICATION NAME** field and a short description in the **SHORT DESCRIPTION** field. Here, we're using {{}}**nginx-plus-application**{{}} and {{}}**NGINX Plus**{{}}. Choose a value from the **CATEGORY** drop‑down menu; here we’re using {{}}**Information Technology**{{}}. You can also add an icon if you wish. Click the  Next  button. 5. In section 2 (AUTHORIZATION SETTINGS), perform these steps: - 1. Under **GRANTS**, click both **Authorization Code** and **Implicit**.
- 2. Under **CREDENTIALS**, click the **+ Add Secret** button. PingOne creates a client secret and opens the **CLIENT SECRETS** field to display it, as shown in the screenshot. To see the actual value of the secret, click the eye icon.
+ 1. Under **GRANTS**, click both {{}}**Authorization Code**{{}} and **Implicit**.
+ 2. Under **CREDENTIALS**, click the {{}}**+ Add Secret**{{}} button. PingOne creates a client secret and opens the **CLIENT SECRETS** field to display it, as shown in the screenshot. To see the actual value of the secret, click the eye icon.
3. Click the  Next  button. 6. In section 3 (SSO FLOW AND AUTHENTICATION SETTINGS): - 1. In the **START SSO URL** field, type the URL where users access your application. Here we’re using **https://example.com**. - 2. In the **REDIRECT URIS** field, type the URI of the NGINX Plus instance including the port number, and ending in **/\_codexch**. Here we’re using **https://my‑nginx‑plus.example.com:443/\_codexch** (the full value is not visible in the screenshot). + 1. In the {{}}**START SSO URL**{{}} field, type the URL where users access your application. Here we’re using **https://example.com**. + 2. In the **REDIRECT URIS** field, type the URI of the NGINX Plus instance including the port number, and ending in **/\_codexch**. Here we’re using {{}}**https://my-nginx-plus.example.com:443/\_codexch**{{}} (the full value is not visible in the screenshot). **Notes:** @@ -88,11 +88,11 @@ Create a new application for NGINX Plus: -7. In section 4 (DEFAULT USER PROFILE ATTRIBUTE CONTRACT), optionally add attributes to the required **sub** and **idpid** attributes, by clicking the **+ Add Attribute** button. We’re not adding any in this example. When finished, click the  Next  button. +7. In section 4 (DEFAULT USER PROFILE ATTRIBUTE CONTRACT), optionally add attributes to the required **sub** and **idpid** attributes, by clicking the {{}}**+ Add Attribute**{{}} button. We’re not adding any in this example. When finished, click the  Next  button. -8. In section 5 (CONNECT SCOPES), click the circled plus-sign on the **OpenID Profile (profile)** and **OpenID Profile Email (email)** scopes in the **LIST OF SCOPES** column. They are moved to the **CONNECTED SCOPES** column, as shown in the screenshot. Click the  Next  button. +8. In section 5 (CONNECT SCOPES), click the circled plus-sign on the {{}}**OpenID Profile (profile)**{{}} and {{}}**OpenID Profile Email (email)**{{}} scopes in the {{}}**LIST OF SCOPES**{{}} column. They are moved to the **CONNECTED SCOPES** column, as shown in the screenshot. Click the  Next  button. @@ -107,14 +107,14 @@ Create a new application for NGINX Plus: -11. You are returned to the **My Applications** window, which now includes a row for **nginx‑plus‑application**. Click the toggle switch at the right end of the row to the “on” position, as shown in the screenshot. Then click the “expand” icon at the end of the row, to display the application’s details. +11. You are returned to the **My Applications** window, which now includes a row for {{}}**nginx-plus-application**{{}}. Click the toggle switch at the right end of the row to the “on” position, as shown in the screenshot. Then click the “expand” icon at the end of the row, to display the application’s details. 12. On the page that opens, make note of the values in the following fields on the **Details** tab. You will add them to the NGINX Plus configuration in [Step 4 of _Configuring NGINX Plus_](#nginx-plus-variables). - - **CLIENT ID** (in the screenshot, **28823604‑83c5‑4608‑88da‑c73fff9c607a**) + - **CLIENT ID** (in the screenshot, {{}}**28823604-83c5-4608-88da-c73fff9c607a**{{}}) - **CLIENT SECRETS** (in the screenshot, **7GMKILBofxb...**); click on the eye icon to view the actual value @@ -124,7 +124,7 @@ Create a new application for NGINX Plus: Configure NGINX Plus as the OpenID Connect relying party: -1. Create a clone of the [**nginx‑openid‑connect**](https://github.com/nginxinc/nginx-openid-connect) GitHub repository. +1. 
Create a clone of the {{}}[**nginx-openid-connect**](https://github.com/nginxinc/nginx-openid-connect){{}} GitHub repository. ```shell git clone https://github.com/nginxinc/nginx-openid-connect @@ -190,7 +190,7 @@ In a browser, enter the address of your NGINX Plus instance and try to log in u ## Troubleshooting -See the [**Troubleshooting**](https://github.com/nginxinc/nginx-openid-connect#troubleshooting) section at the **nginx‑openid‑connect** repository on GitHub. +See the [**Troubleshooting**](https://github.com/nginxinc/nginx-openid-connect#troubleshooting) section at the {{}}**nginx-openid-connect**{{}} repository on GitHub. ### Revision History diff --git a/go.mod b/go.mod index a7dc51830..fd5114fe6 100644 --- a/go.mod +++ b/go.mod @@ -2,4 +2,4 @@ module github.com/nginxinc/docs go 1.19 -require github.com/nginxinc/nginx-hugo-theme v0.42.28 // indirect +require github.com/nginxinc/nginx-hugo-theme v0.42.36 // indirect diff --git a/go.sum b/go.sum index 8a25e80eb..e54ae2fb6 100644 --- a/go.sum +++ b/go.sum @@ -4,3 +4,5 @@ github.com/nginxinc/nginx-hugo-theme v0.42.27 h1:D80Sf/o9lR4P0NDFfP/hCQllohz6C5q github.com/nginxinc/nginx-hugo-theme v0.42.27/go.mod h1:DPNgSS5QYxkjH/BfH4uPDiTfODqWJ50NKZdorguom8M= github.com/nginxinc/nginx-hugo-theme v0.42.28 h1:1SGzBADcXnSqP4rOKEhlfEUloopH6UvMg+XTyVVQyjU= github.com/nginxinc/nginx-hugo-theme v0.42.28/go.mod h1:DPNgSS5QYxkjH/BfH4uPDiTfODqWJ50NKZdorguom8M= +github.com/nginxinc/nginx-hugo-theme v0.42.36 h1:vFBavxB+tw2fs0rLTpA3kYPMdBK15LtZkfkX21kzrDo= +github.com/nginxinc/nginx-hugo-theme v0.42.36/go.mod h1:DPNgSS5QYxkjH/BfH4uPDiTfODqWJ50NKZdorguom8M= From 212cf5b18529098d6044b4f69dc8ab1cf24c184f Mon Sep 17 00:00:00 2001 From: Mike Jang <3287976+mjang@users.noreply.github.com> Date: Fri, 30 May 2025 05:57:57 -0700 Subject: [PATCH 03/24] feat: Customer-friendly NGINX One Console home page (#628) * feat: Customer-friendly NGINX One Console home page --------- Co-authored-by: Alan Dooley --- .../nginx-one/add-file/existing-ssl-bundle.md | 2 +- .../includes/nginx-one/how-to/add-instance.md | 26 ++ .../monitoring/n1c-dashboard-overview.md | 39 +++ content/nginx-one/about.md | 2 +- content/nginx-one/api/_index.md | 6 +- content/nginx-one/changelog.md | 10 +- content/nginx-one/connect-instances/_index.md | 6 + .../add-instance.md | 32 +-- ...ginx-plus-container-images-to-nginx-one.md | 4 +- .../create-manage-data-plane-keys.md | 2 +- .../set-up-nginx-proxy-for-nginx-one.md | 12 +- .../settings/_index.md | 2 +- content/nginx-one/getting-started.md | 49 ++-- content/nginx-one/glossary.md | 4 +- content/nginx-one/how-to/_index.md | 6 - .../nginx-one/how-to/certificates/_index.md | 6 - .../how-to/config-sync-groups/_index.md | 6 - content/nginx-one/how-to/containers/_index.md | 6 - .../how-to/data-plane-keys/_index.md | 6 - .../nginx-one/how-to/nginx-configs/_index.md | 6 - .../manage-config-sync-groups.md | 239 ------------------ .../nginx-one/how-to/proxy-setup/_index.md | 6 - .../nginx-one/how-to/staged-configs/_index.md | 6 - .../staged-configs/api-staged-config.md | 29 --- content/nginx-one/metrics/_index.md | 6 + content/nginx-one/metrics/enable-metrics.md | 23 ++ content/nginx-one/metrics/review-metrics.md | 23 ++ content/nginx-one/nginx-configs/_index.md | 6 + .../{how-to => }/nginx-configs/add-file.md | 10 +- .../nginx-configs/certificates/_index.md | 6 + .../certificates/manage-certificates.md | 13 +- .../clean-up-unavailable-instances.md | 2 +- .../config-sync-groups/_index.md | 6 + .../config-sync-groups/add-file-csg.md | 8 +- 
.../manage-config-sync-groups.md | 14 +- .../view-edit-nginx-configurations.md | 20 +- content/nginx-one/rbac/_index.md | 4 +- content/nginx-one/rbac/overview.md | 2 +- content/nginx-one/rbac/rbac-api.md | 2 +- content/nginx-one/rbac/roles.md | 2 +- content/nginx-one/staged-configs/_index.md | 6 + .../staged-configs/add-staged-config.md | 6 +- .../staged-configs/api-staged-config.md | 20 ++ .../staged-configs/edit-staged-config.md | 0 .../import-export-staged-config.md | 4 +- layouts/partials/list-main.html | 119 +++++++++ 46 files changed, 367 insertions(+), 447 deletions(-) create mode 100644 content/includes/nginx-one/how-to/add-instance.md create mode 100644 content/includes/use-cases/monitoring/n1c-dashboard-overview.md create mode 100644 content/nginx-one/connect-instances/_index.md rename content/nginx-one/{how-to/nginx-configs => connect-instances}/add-instance.md (60%) rename content/nginx-one/{how-to/containers => connect-instances}/connect-nginx-plus-container-images-to-nginx-one.md (94%) rename content/nginx-one/{how-to/data-plane-keys => connect-instances}/create-manage-data-plane-keys.md (98%) rename content/nginx-one/{how-to/proxy-setup => connect-instances}/set-up-nginx-proxy-for-nginx-one.md (87%) rename content/nginx-one/{how-to => connect-instances}/settings/_index.md (57%) delete mode 100644 content/nginx-one/how-to/_index.md delete mode 100644 content/nginx-one/how-to/certificates/_index.md delete mode 100644 content/nginx-one/how-to/config-sync-groups/_index.md delete mode 100644 content/nginx-one/how-to/containers/_index.md delete mode 100644 content/nginx-one/how-to/data-plane-keys/_index.md delete mode 100644 content/nginx-one/how-to/nginx-configs/_index.md delete mode 100644 content/nginx-one/how-to/nginx-configs/manage-config-sync-groups.md delete mode 100644 content/nginx-one/how-to/proxy-setup/_index.md delete mode 100644 content/nginx-one/how-to/staged-configs/_index.md delete mode 100644 content/nginx-one/how-to/staged-configs/api-staged-config.md create mode 100644 content/nginx-one/metrics/_index.md create mode 100644 content/nginx-one/metrics/enable-metrics.md create mode 100644 content/nginx-one/metrics/review-metrics.md create mode 100644 content/nginx-one/nginx-configs/_index.md rename content/nginx-one/{how-to => }/nginx-configs/add-file.md (78%) create mode 100644 content/nginx-one/nginx-configs/certificates/_index.md rename content/nginx-one/{how-to => nginx-configs}/certificates/manage-certificates.md (94%) rename content/nginx-one/{how-to => }/nginx-configs/clean-up-unavailable-instances.md (99%) create mode 100644 content/nginx-one/nginx-configs/config-sync-groups/_index.md rename content/nginx-one/{how-to => nginx-configs}/config-sync-groups/add-file-csg.md (76%) rename content/nginx-one/{how-to => nginx-configs}/config-sync-groups/manage-config-sync-groups.md (93%) rename content/nginx-one/{how-to => }/nginx-configs/view-edit-nginx-configurations.md (66%) create mode 100644 content/nginx-one/staged-configs/_index.md rename content/nginx-one/{how-to => }/staged-configs/add-staged-config.md (96%) create mode 100644 content/nginx-one/staged-configs/api-staged-config.md rename content/nginx-one/{how-to => }/staged-configs/edit-staged-config.md (100%) rename content/nginx-one/{how-to => }/staged-configs/import-export-staged-config.md (94%) create mode 100644 layouts/partials/list-main.html diff --git a/content/includes/nginx-one/add-file/existing-ssl-bundle.md b/content/includes/nginx-one/add-file/existing-ssl-bundle.md index e6a8c59a4..9afcf9137 
100644 --- a/content/includes/nginx-one/add-file/existing-ssl-bundle.md +++ b/content/includes/nginx-one/add-file/existing-ssl-bundle.md @@ -2,7 +2,7 @@ docs: --- -With this option, You can incorporate [Managed certificates]({{< ref "/nginx-one/how-to/certificates/manage-certificates.md#managed-and-unmanaged-certificates" >}}). +With this option, you can incorporate [Managed certificates]({{< ref "/nginx-one/nginx-configs/certificates/manage-certificates.md#managed-and-unmanaged-certificates" >}}). In the **Choose Certificate** drop-down, select the managed certificate of your choice, and select **Add**. You can then: 1. Review details of the certificate. The next steps depend on whether the certificate is a CA bundle or a certificate / key pair. diff --git a/content/includes/nginx-one/how-to/add-instance.md b/content/includes/nginx-one/how-to/add-instance.md new file mode 100644 index 000000000..94f0b628b --- /dev/null +++ b/content/includes/nginx-one/how-to/add-instance.md @@ -0,0 +1,26 @@ +--- +docs: +files: + - content/nginx-one/connect-instances/add-instance.md + - content/nginx-one/getting-started.md +--- + +You can add an instance to NGINX One Console in the following ways: + +- Directly, under **Instances** +- Indirectly, by selecting a Config Sync Group, and selecting **Add Instance to Config Sync Group** + +In either case, NGINX One Console gives you a choice for data plane keys: + +- Create a new key +- Use an existing key + +NGINX One Console takes the option you use, and adds the data plane key to a command that you'd use to register your target instance. You should see the command in the **Add Instance** screen in the console. + +Connect to the host where your NGINX instance is running. Run the provided command to [install NGINX Agent]({{< ref "/nginx-one/getting-started#install-nginx-agent" >}}) dependencies and packages on that host. + +```bash +curl https://agent.connect.nginx.com/nginx-agent/install | DATA_PLANE_KEY="" sh -s -- -y +``` + +Once the process is complete, you can configure that instance in your NGINX One Console. \ No newline at end of file diff --git a/content/includes/use-cases/monitoring/n1c-dashboard-overview.md b/content/includes/use-cases/monitoring/n1c-dashboard-overview.md new file mode 100644 index 000000000..3018b83d8 --- /dev/null +++ b/content/includes/use-cases/monitoring/n1c-dashboard-overview.md @@ -0,0 +1,39 @@ +--- +docs: +files: + - content/nginx-one/metrics/enable-metrics.md + - content/nginx-one/getting-started.md +--- + +Navigating the dashboard: + +- **Drill down into specifics**: For in-depth information on a specific metric, like expiring certificates, click on the relevant link in the metric's card to go to a detailed overview page. +- **Refine metric timeframe**: Metrics show the last hour's data by default. To view data from a different period, select the time interval you want from the drop-down menu. + + +{{< img src="nginx-one/images/nginx-one-dashboard.png">}} + + +{{}} +**NGINX One dashboard metrics** +| Metric | Description | Details | +|---|---|---| +| **Instance availability** | Understand the operational status of your NGINX instances. | - **Online**: The NGINX instance is actively connected and functioning properly.
- **Offline**: NGINX Agent is connected but the NGINX instance isn't running, isn't installed, or can't communicate with NGINX Agent.
- **Unavailable**: The connection between NGINX Agent and NGINX One has been lost or the instance has been decommissioned.
- **Unknown**: The current state can't be determined at the moment. | +| **NGINX versions by instance** | See which NGINX versions are in use across your instances. | | +| **Operating systems** | Find out which operating systems your instances are running on. | | +| **Certificates** | Monitor the status of your SSL certificates to know which are expiring soon and which are still valid. | | +| **Config recommendations** | Get configuration recommendations to optimize your instances' settings. | | +| **CVEs (Common Vulnerabilities and Exposures)** | Evaluate the severity and number of potential security threats in your instances. | - **Major**: Indicates a high-severity threat that needs immediate attention.
- **Medium**: Implies a moderate threat level.
- **Minor** and **Low**: Represent less critical issues that still require monitoring.
- **Other**: Encompasses any threats that don't fit the standard categories. | +| **CPU utilization** | Track CPU usage trends and pinpoint instances with high CPU demand. | | +| **Memory utilization** | Watch memory usage patterns to identify instances using significant memory. | | +| **Disk space utilization** | Monitor how much disk space your instances are using and identify those nearing capacity. | | +| **Unsuccessful response codes** | Look for instances with a high number of HTTP server errors and investigate their error codes. | | +| **Top network usage** | Review the network usage and bandwidth consumption of your instances. | | + +{{
}} + + + + + + diff --git a/content/nginx-one/about.md b/content/nginx-one/about.md index f20c40bea..6d0dd4594 100644 --- a/content/nginx-one/about.md +++ b/content/nginx-one/about.md @@ -1,7 +1,7 @@ --- description: '' docs: DOCS-1392 -title: About +title: Manage your NGINX fleet toc: true weight: 10 type: diff --git a/content/nginx-one/api/_index.md b/content/nginx-one/api/_index.md index e1f50db88..5b3284d5e 100644 --- a/content/nginx-one/api/_index.md +++ b/content/nginx-one/api/_index.md @@ -1,6 +1,6 @@ --- -title: API +title: Automate with the NGINX One API description: -weight: 1000 +weight: 700 url: /nginx-one/api ---- \ No newline at end of file +--- diff --git a/content/nginx-one/changelog.md b/content/nginx-one/changelog.md index e41c765f8..ab541049a 100644 --- a/content/nginx-one/changelog.md +++ b/content/nginx-one/changelog.md @@ -83,8 +83,8 @@ You can: - Remove a deployed certificate from a Config Sync Group For more information, including warnings about risks, see our documentation on how you can: -- [Add a file]({{< ref "/nginx-one/how-to/nginx-configs/add-file.md" >}}) -- [Manage certificates]({{< ref "/nginx-one/how-to/certificates/manage-certificates.md" >}}) +- [Add a file]({{< ref "/nginx-one/nginx-configs/add-file.md" >}}) +- [Manage certificates]({{< ref "/nginx-one/nginx-configs/certificates/manage-certificates.md" >}}) ### Revert a configuration @@ -108,7 +108,7 @@ From the NGINX One Console you can now: - Ensure that your certificates are current and correct. - Manage your certificates from a central location. This can help you simplify operations and remotely update, rotate, and deploy those certificates. -For more information, see the full documentation on how you can [Manage Certificates]({{< ref "/nginx-one/how-to/certificates/manage-certificates.md" >}}). +For more information, see the full documentation on how you can [Manage Certificates]({{< ref "/nginx-one/nginx-configs/certificates/manage-certificates.md" >}}). ## August 22, 2024 @@ -116,7 +116,7 @@ For more information, see the full documentation on how you can [Manage Certific Config Sync Groups are now available in the F5 NGINX One Console. This feature allows you to manage and synchronize NGINX configurations across multiple instances as a single entity, ensuring consistency and simplifying the management of your NGINX environment. -For more information, see the full documentation on [Managing Config Sync Groups]({{< ref "/nginx-one/how-to/config-sync-groups/manage-config-sync-groups.md" >}}). +For more information, see the full documentation on [Managing Config Sync Groups]({{< ref "/nginx-one/nginx-configs/config-sync-groups/manage-config-sync-groups.md" >}}). ## August 8, 2024 @@ -136,7 +136,7 @@ Select the link for each CVE to see the details, including the CVE's publish dat ### Edit NGINX configurations -You can now make configuration changes to your NGINX instances. For more details, see [View and edit NGINX configurations]({{< ref "/nginx-one/how-to/nginx-configs/view-edit-nginx-configurations.md" >}}). +You can now make configuration changes to your NGINX instances. For more details, see [View and edit NGINX configurations]({{< ref "/nginx-one/nginx-configs/view-edit-nginx-configurations.md" >}}). 
## May 28, 2024 diff --git a/content/nginx-one/connect-instances/_index.md b/content/nginx-one/connect-instances/_index.md new file mode 100644 index 000000000..ea3ed0292 --- /dev/null +++ b/content/nginx-one/connect-instances/_index.md @@ -0,0 +1,6 @@ +--- +description: +title: Connect your instances +weight: 200 +url: /nginx-one/connect-instances/ +--- diff --git a/content/nginx-one/how-to/nginx-configs/add-instance.md b/content/nginx-one/connect-instances/add-instance.md similarity index 60% rename from content/nginx-one/how-to/nginx-configs/add-instance.md rename to content/nginx-one/connect-instances/add-instance.md index 8bae0ef02..328d93724 100644 --- a/content/nginx-one/how-to/nginx-configs/add-instance.md +++ b/content/nginx-one/connect-instances/add-instance.md @@ -16,34 +16,16 @@ to set up a data plane key to connect your instances to NGINX One. Before you add an instance to NGINX One Console, ensure: -- You have administrator access to NGINX One Console. -- You have configured instances of NGINX that you want to manage through NGINX One Console. -- You have or are ready to configure a data plane key. -- You have or are ready to set up managed certificates. +- You have [administrator access]({{< ref "/nginx-one/rbac/roles.md" >}}) to NGINX One Console. +- You have [configured instances of NGINX]({{< ref "/nginx-one/getting-started.md#add-your-nginx-instances-to-nginx-one" >}}) that you want to manage through NGINX One Console. +- You have or are ready to configure a [data plane key]({{< ref "/nginx-one/getting-started.md#generate-data-plane-key" >}}). +- You have or are ready to set up [managed certificates]({{< ref "/nginx-one/nginx-configs/certificates/manage-certificates.md" >}}). -{{< note >}}If this is the first time an instance is being added to a Config Sync Group, and you have not yet defined the configuration for that Config Sync Group, that instance provides the template for that group. For more information, see [Configuration management]({{< ref "nginx-one/how-to/config-sync-groups/manage-config-sync-groups#configuration-management" >}}).{{< /note >}} +{{< note >}}If this is the first time an instance is being added to a Config Sync Group, and you have not yet defined the configuration for that Config Sync Group, that instance provides the template for that group. For more information, see [Configuration management]({{< ref "nginx-one/nginx-configs/config-sync-groups/manage-config-sync-groups#configuration-management" >}}).{{< /note >}} ## Add an instance -You can add an instance to NGINX One Console in the following ways: - -- Directly, under **Instances** -- Indirectly, by selecting a Config Sync Group, and selecting **Add Instance to Config Sync Group** - -In either case, NGINX One Console gives you a choice for data plane keys: - -- Create a new key -- Use an existing key - -NGINX One Console takes the option you use, and adds the data plane key to a command that you'd use to register your target instance. You should see the command in the **Add Instance** screen in the console. - -Connect to the host where your NGINX instance is running. Run the provided command to [install NGINX Agent]({{< ref "/nginx-one/getting-started#install-nginx-agent" >}}) dependencies and packages on that host. - -```bash -curl https://agent.connect.nginx.com/nginx-agent/install | DATA_PLANE_KEY="" sh -s -- -y -``` - -Once the process is complete, you can configure that instance in your NGINX One Console. 
+{{< include "/nginx-one/how-to/add-instance.md" >}} ## Managed and Unmanaged Certificates @@ -71,5 +53,5 @@ Once you've completed the process, NGINX One reassigns this as a managed certifi ## Add an instance to a Config Sync Group -When you [Manage Config Sync Group membership]({{< ref "nginx-one/how-to/config-sync-groups/manage-config-sync-groups#manage-config-sync-group-membership" >}}), you can add an existing or new instance to the group of your choice. +When you [Manage Config Sync Group membership]({{< ref "nginx-one/nginx-configs/config-sync-groups/manage-config-sync-groups#manage-config-sync-group-membership" >}}), you can add an existing or new instance to the group of your choice. That instance inherits the setup of that Config Sync Group. diff --git a/content/nginx-one/how-to/containers/connect-nginx-plus-container-images-to-nginx-one.md b/content/nginx-one/connect-instances/connect-nginx-plus-container-images-to-nginx-one.md similarity index 94% rename from content/nginx-one/how-to/containers/connect-nginx-plus-container-images-to-nginx-one.md rename to content/nginx-one/connect-instances/connect-nginx-plus-container-images-to-nginx-one.md index daae3af88..968ecc836 100644 --- a/content/nginx-one/how-to/containers/connect-nginx-plus-container-images-to-nginx-one.md +++ b/content/nginx-one/connect-instances/connect-nginx-plus-container-images-to-nginx-one.md @@ -1,7 +1,7 @@ --- description: '' docs: null -title: Connect NGINX Plus container images to NGINX One +title: Connect NGINX Plus container images toc: true weight: 400 type: @@ -19,7 +19,7 @@ This guide explains how to set up an F5 NGINX Plus Docker container with NGINX A Before you start, make sure you have: - A valid JSON Web Token (JWT) for your NGINX subscription. -- [A data plane key from NGINX One]({{< ref "/nginx-one/how-to/data-plane-keys/create-manage-data-plane-keys.md" >}}). +- [A data plane key from NGINX One]({{< ref "/nginx-one/connect-instances/create-manage-data-plane-keys.md" >}}). - Docker installed and running on your system. 
#### Download your JWT license from MyF5 diff --git a/content/nginx-one/how-to/data-plane-keys/create-manage-data-plane-keys.md b/content/nginx-one/connect-instances/create-manage-data-plane-keys.md similarity index 98% rename from content/nginx-one/how-to/data-plane-keys/create-manage-data-plane-keys.md rename to content/nginx-one/connect-instances/create-manage-data-plane-keys.md index 9ac000860..224b3ff51 100644 --- a/content/nginx-one/how-to/data-plane-keys/create-manage-data-plane-keys.md +++ b/content/nginx-one/connect-instances/create-manage-data-plane-keys.md @@ -1,7 +1,7 @@ --- description: '' docs: DOCS-1395 -title: Create and manage data plane keys +title: Prepare - Create and manage data plane keys toc: true weight: 100 type: diff --git a/content/nginx-one/how-to/proxy-setup/set-up-nginx-proxy-for-nginx-one.md b/content/nginx-one/connect-instances/set-up-nginx-proxy-for-nginx-one.md similarity index 87% rename from content/nginx-one/how-to/proxy-setup/set-up-nginx-proxy-for-nginx-one.md rename to content/nginx-one/connect-instances/set-up-nginx-proxy-for-nginx-one.md index 974f7851c..6c97ffccc 100644 --- a/content/nginx-one/how-to/proxy-setup/set-up-nginx-proxy-for-nginx-one.md +++ b/content/nginx-one/connect-instances/set-up-nginx-proxy-for-nginx-one.md @@ -1,7 +1,7 @@ --- description: '' docs: null -title: Set up NGINX as a proxy for NGINX One +title: Minimize connections - Set up NGINX as a proxy toc: true weight: 300 type: @@ -17,7 +17,7 @@ This guide explains how to set up NGINX as a proxy for other NGINX instances to ## Before you start - [Install NGINX Open Source or NGINX Plus]({{< ref "/nginx/admin-guide/installing-nginx/" >}}). -- [Get a Data Plane Key from NGINX One]({{< ref "/nginx-one/how-to/data-plane-keys/create-manage-data-plane-keys.md" >}}). +- [Get a Data Plane Key from NGINX One]({{< ref "/nginx-one/connect-instances/create-manage-data-plane-keys.md" >}}). --- @@ -61,7 +61,7 @@ In this step, we'll configure an NGINX instance to act as a proxy server for NGI --- -## Configure NGINX Agent to use the proxy instance +## Configure NGINX Agent to use the proxy To set up your other NGINX instances to use the proxy instance to connect to NGINX One, update the NGINX Agent configuration on those instances to use the proxy NGINX instance's IP address. See the example NGINX Agent configuration below. 
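As a rough sketch, the relevant part of that file might look like the following. This assumes the NGINX Agent v2 layout of `/etc/nginx-agent/nginx-agent.conf`; the key names (`server`, `host`, `grpcPort`, `tls`) and the placeholder address `192.0.2.10` are illustrative, so verify them against the configuration shipped with your Agent version:

```yaml
# /etc/nginx-agent/nginx-agent.conf (sketch -- field names may vary by Agent version)
server:
  # Send Agent traffic to the proxy NGINX instance instead of agent.connect.nginx.com
  host: 192.0.2.10   # placeholder: IP address of your proxy instance
  grpcPort: 443
tls:
  enable: true
  skip_verify: false  # keep verification on unless your proxy presents a self-signed certificate
```

After updating the file, restart NGINX Agent (`sudo systemctl restart nginx-agent`) so the instance reconnects through the proxy.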
@@ -95,7 +95,7 @@ To set up your other NGINX instances to use the proxy instance to connect to NGI For more information, refer to the following resources: -- [Installing NGINX and NGINX Plus]({{< ref "/nginx/admin-guide/installing-nginx/" >}}) -- [Create and manage data plane keys]({{< ref "/nginx-one/how-to/data-plane-keys/create-manage-data-plane-keys.md" >}}) +- [Install NGINX and NGINX Plus]({{< ref "/nginx/admin-guide/installing-nginx/" >}}) +- [Create and manage data plane keys]({{< ref "/nginx-one/connect-instances/create-manage-data-plane-keys.md" >}}) - [NGINX Agent Installation and upgrade](https://docs.nginx.com/nginx-agent/installation-upgrade/) -- [NGINX Agent Configuration](https://docs.nginx.com/nginx-agent/configuration/) \ No newline at end of file +- [NGINX Agent Configuration](https://docs.nginx.com/nginx-agent/configuration/) diff --git a/content/nginx-one/how-to/settings/_index.md b/content/nginx-one/connect-instances/settings/_index.md similarity index 57% rename from content/nginx-one/how-to/settings/_index.md rename to content/nginx-one/connect-instances/settings/_index.md index cdbbc1636..3bdb5095b 100644 --- a/content/nginx-one/how-to/settings/_index.md +++ b/content/nginx-one/connect-instances/settings/_index.md @@ -2,6 +2,6 @@ description: title: Settings weight: 500 -url: /nginx-one/how-to/settings +url: /nginx-one/connect-instances/settings draft: true --- diff --git a/content/nginx-one/getting-started.md b/content/nginx-one/getting-started.md index 9e3a0e8e5..6a761fa2f 100644 --- a/content/nginx-one/getting-started.md +++ b/content/nginx-one/getting-started.md @@ -24,12 +24,10 @@ To get started using NGINX One, enable the service on F5 Distributed Cloud. Next, add your NGINX instances to NGINX One. You'll need to create a data plane key and then install NGINX Agent on each instance you want to monitor. -### Add an instance - -Depending on whether this is your first time using NGINX One Console or you've used it before, follow the appropriate steps to add an instance: +The following instructions include minimal information, sufficient to "get started." See the following links for detailed instructions: -- **For first-time users:** On the welcome screen, select **Add Instance**. -- **For returning users:** If you've added instances previously and want to add more, select **Instances** on the left menu, then select **Add Instance**. +- [Create and manage data plane keys]({{< ref "/nginx-one/connect-instances/create-manage-data-plane-keys.md" >}}) +- [Add an NGINX instance]({{< ref "/nginx-one/connect-instances/add-instance.md" >}}) ### Generate a data plane key {#generate-data-plane-key} @@ -43,11 +41,18 @@ To generate a data plane key: {{}} Data plane keys are displayed only once and cannot be retrieved later. Be sure to copy and store this key securely. -Data plane keys expire after one year. You can change this expiration date later by [editing the key]({{< ref "nginx-one/how-to/data-plane-keys/create-manage-data-plane-keys.md#change-expiration-date" >}}). +Data plane keys expire after one year. You can change this expiration date later by [editing the key]({{< ref "nginx-one/connect-instances/create-manage-data-plane-keys.md#change-expiration-date" >}}). -[Revoking a data plane key]({{< ref "nginx-one/how-to/data-plane-keys/create-manage-data-plane-keys.md#revoke-data-plane-key" >}}) disconnects all instances that were registered with that key. 
+[Revoking a data plane key]({{< ref "nginx-one/connect-instances/create-manage-data-plane-keys.md#revoke-data-plane-key" >}}) disconnects all instances that were registered with that key. {{}} +### Add an instance + +Depending on whether this is your first time using NGINX One Console or you've used it before, follow the appropriate steps to add an instance: + +- **For first-time users:** On the welcome screen, select **Add Instance**. +- **For returning users:** If you've added instances previously and want to add more, select **Instances** on the left menu, then select **Add Instance**. + ### Install NGINX Agent @@ -134,37 +139,11 @@ If you followed the [Installation and upgrade](https://docs.nginx.com/nginx-agen --- -## Enable NGINX metrics reporting - The NGINX One Console dashboard relies on APIs for NGINX Plus and NGINX Open Source Stub Status to report traffic and system metrics. The following sections show you how to enable those metrics. ### Enable NGINX Plus API - -To collect metrics for NGINX Plus, add the following to your NGINX Plus configuration file: - -```nginx -# Enable the /api/ location with appropriate access control -# to use the NGINX Plus API. -# -location /api/ { - api write=on; - allow 127.0.0.1; - deny all; -} -``` - -This configuration: - -- Enables the NGINX Plus API. -- Allows requests only from `127.0.0.1` (localhost). -- Blocks all other requests for security. - -After saving the changes, reload NGINX to apply the new configuration: - -```shell -nginx -s reload -``` +{{< include "/use-cases/monitoring/enable-nginx-plus-api.md" >}} ### Enable NGINX Open Source Stub Status API @@ -183,6 +162,8 @@ After connecting your NGINX instances to NGINX One, you can monitor their perfor ### Overview of the NGINX One dashboard +{{< include "/use-cases/monitoring/n1c-dashboard-overview.md" >}} + Navigating the dashboard: - **Drill down into specifics**: For in-depth information on a specific metric, like expiring certificates, click on the relevant link in the metric's card to go to a detailed overview page. diff --git a/content/nginx-one/glossary.md b/content/nginx-one/glossary.md index c315d35ef..33b817f55 100644 --- a/content/nginx-one/glossary.md +++ b/content/nginx-one/glossary.md @@ -3,7 +3,7 @@ description: '' docs: DOCS-1396 title: Glossary toc: true -weight: 1000 +weight: 800 type: - reference --- @@ -14,7 +14,7 @@ This glossary defines terms used in the F5 NGINX One Console and F5 Distributed {{}} | Term | Definition | |-------------|-------------| -| **Config Sync Group** | A group of NGINX systems (or instances) with identical configurations. They may also share the same certificates. However, the instances in a Config Sync Group could belong to different systems and even different clusters. For more information, see this explanation of [Important considerations]({{< ref "/nginx-one/how-to/config-sync-groups/manage-config-sync-groups.md#important-considerations" >}}) | +| **Config Sync Group** | A group of NGINX systems (or instances) with identical configurations. They may also share the same certificates. However, the instances in a Config Sync Group could belong to different systems and even different clusters. For more information, see this explanation of [Important considerations]({{< ref "/nginx-one/nginx-configs/config-sync-groups/manage-config-sync-groups.md#important-considerations" >}}) | | **Data Plane** | The data plane is the part of a network architecture that carries user traffic. 
It handles tasks like forwarding data packets between devices and managing network communication. In the context of NGINX, the data plane is responsible for tasks such as load balancing, caching, and serving web content. | | **Instance** | An instance is an individual system with NGINX installed. You can group the instances of your choice in a Config Sync Group. When you add an instance to NGINX One, you need to use a data plane key. | | **Namespace** | In F5 Distributed Cloud, a namespace groups a tenant’s configuration objects, similar to administrative domains. Every object in a namespace must have a unique name, and each namespace must be unique to its tenant. This setup ensures isolation, preventing cross-referencing of objects between namespaces. You'll see the namespace in the NGINX One Console URL as `/namespaces//` | diff --git a/content/nginx-one/how-to/_index.md b/content/nginx-one/how-to/_index.md deleted file mode 100644 index 3e88ec7ae..000000000 --- a/content/nginx-one/how-to/_index.md +++ /dev/null @@ -1,6 +0,0 @@ ---- -description: -title: How-to guides -weight: 200 -url: /nginx-one/how-to/ ---- diff --git a/content/nginx-one/how-to/certificates/_index.md b/content/nginx-one/how-to/certificates/_index.md deleted file mode 100644 index 39e16a174..000000000 --- a/content/nginx-one/how-to/certificates/_index.md +++ /dev/null @@ -1,6 +0,0 @@ ---- -description: -title: Certificates -weight: 400 -url: /nginx-one/how-to/certificates ---- diff --git a/content/nginx-one/how-to/config-sync-groups/_index.md b/content/nginx-one/how-to/config-sync-groups/_index.md deleted file mode 100644 index 31f258b69..000000000 --- a/content/nginx-one/how-to/config-sync-groups/_index.md +++ /dev/null @@ -1,6 +0,0 @@ ---- -description: -title: Config Sync Groups -weight: 250 -url: /nginx-one/how-to/config-sync-groups ---- diff --git a/content/nginx-one/how-to/containers/_index.md b/content/nginx-one/how-to/containers/_index.md deleted file mode 100644 index c3617fd7d..000000000 --- a/content/nginx-one/how-to/containers/_index.md +++ /dev/null @@ -1,6 +0,0 @@ ---- -description: -title: Containers -weight: 300 -url: /nginx-one/how-to/containers ---- diff --git a/content/nginx-one/how-to/data-plane-keys/_index.md b/content/nginx-one/how-to/data-plane-keys/_index.md deleted file mode 100644 index 0aa1ba7bf..000000000 --- a/content/nginx-one/how-to/data-plane-keys/_index.md +++ /dev/null @@ -1,6 +0,0 @@ ---- -description: -title: Data plane keys -weight: 100 -url: /nginx-one/how-to/data-plane-keys ---- diff --git a/content/nginx-one/how-to/nginx-configs/_index.md b/content/nginx-one/how-to/nginx-configs/_index.md deleted file mode 100644 index b7fa815da..000000000 --- a/content/nginx-one/how-to/nginx-configs/_index.md +++ /dev/null @@ -1,6 +0,0 @@ ---- -description: -title: Instances and Configurations -weight: 200 -url: /nginx-one/how-to/nginx ---- diff --git a/content/nginx-one/how-to/nginx-configs/manage-config-sync-groups.md b/content/nginx-one/how-to/nginx-configs/manage-config-sync-groups.md deleted file mode 100644 index 8bc10cce6..000000000 --- a/content/nginx-one/how-to/nginx-configs/manage-config-sync-groups.md +++ /dev/null @@ -1,239 +0,0 @@ ---- -docs: null -title: Manage config sync groups -toc: true -weight: 300 -type: -- how-to ---- - -## Overview - -This guide explains how to create and manage config sync groups in the F5 NGINX One Console. Config sync groups synchronize NGINX configurations across multiple NGINX instances, ensuring consistency and ease of management. 
- -If you’ve used [instance groups in NGINX Instance Manager]({{< ref "/nim/nginx-instances/manage-instance-groups.md" >}}), you’ll find config sync groups in NGINX One similar, though the steps and terminology differ slightly. - -## Before you start - -Before you create and manage config sync groups, ensure: - -- You have access to the NGINX One Console. -- You have the necessary permissions to create and manage config sync groups. -- NGINX instances are properly registered with NGINX One if you plan to add existing instances to a config sync group. - -## Important considerations - -- **NGINX Agent configuration file location**: When you run the NGINX Agent installation script to register an instance with NGINX One, the script creates the `agent-dynamic.conf` file, which contains settings for the NGINX Agent, including the specified config sync group. This file is typically located in `/var/lib/nginx-agent/` on most systems; however, on FreeBSD, it's located at `/var/db/nginx-agent/`. - -- **Mixing NGINX Open Source and NGINX Plus instances**: You can add both NGINX Open Source and NGINX Plus instances to the same config sync group, but there are limitations. If your configuration includes features exclusive to NGINX Plus, synchronization will fail on NGINX Open Source instances because they don't support these features. NGINX One allows you to mix NGINX instance types for flexibility, but it’s important to ensure that the configurations you're applying are compatible with all instances in the group. - -- **Single config sync group membership**: An instance can join only one config sync group at a time. - -- **Configuration inheritance**: If the config sync group already has a configuration defined, that configuration will be pushed to instances when they join. - -- **Using an instance's configuration for the group configuration**: If an instance is the first to join a config sync group and the group's configuration hasn't been defined, the instance’s configuration will become the group’s configuration. Any instances added later will automatically inherit this configuration. - - {{< note >}} If you add multiple instances to a single config sync group, simultaneously (with automation), follow these steps. Your instances will inherit your desired configuration: - - 1. Create a config sync group. - 1. Add a configuration to the config sync group, so all instances inherit it. - 1. Add the instances in a separate operation. - - Your instances should synchronize with your desired configuration within 30 seconds. {{< /note >}} - -- **Persistence of a config sync group's configuration**: The configuration for a config sync group persists until you delete the group. Even if you remove all instances, the group's configuration stays intact. Any new instances that join later will automatically inherit this configuration. - -- **Config sync groups vs. cluster syncing**: Config sync groups are not the same as cluster syncing. Config sync groups let you to manage and synchronize configurations across multiple NGINX instances as a single entity. This is particularly useful when your NGINX instances are load-balanced by an external load balancer, as it ensures consistency across all instances. In contrast, cluster syncing, like [zone syncing]({{< ref "nginx/admin-guide/high-availability/zone_sync_details.md" >}}), ensures data consistency and high availability across NGINX instances in a cluster. 
While config sync groups focus on configuration management, cluster syncing supports failover and data consistency. - -## Create a config sync group - -Creating a config sync group allows you to manage the configurations of multiple NGINX instances as a single entity. - -1. On the left menu, select **Config Sync Groups**. -2. Select **Add Config Sync Group**. -3. In the **Name** field, type a name for your config sync group. -4. Select **Create** to add the config sync group. - -## Manage config sync group membership - -### Add an existing instance to a config sync group {#add-an-existing-instance-to-a-config-sync-group} - -You can add existing NGINX instances that are already registered with NGINX One to a config sync group. - -1. Open a command-line terminal on the NGINX instance. -2. Open the `/var/lib/nginx-agent/agent-dynamic.conf` file in a text editor. -3. At the end of the file, add a new line beginning with `instance_group:`, followed by the config sync group name. - - ``` text - instance_group: - ``` - -4. Restart NGINX Agent: - - ``` shell - sudo systemctl restart nginx-agent - ``` - -### Add a new instance to a config sync group {#add-a-new-instance-to-a-config-sync-group} - -When adding a new NGINX instance that is not yet registered with NGINX One, you need a data plane key to securely connect the instance. You can generate a new data plane key during the process or use an existing one if you already have it. - -1. On the left menu, select **Config Sync Groups**. -2. Select the config sync group in the list. -3. In the **Instances** pane, select **Add Instance to Config Sync Group**. -4. In the **Add Instance to Config Sync Group** dialog, select **Register a new instance with NGINX One then add to config sync group**. -5. Select **Next**. -6. **Generate a new data plane key** (choose this option if you don't have an existing key): - - - Select **Generate new key** to create a new data plane key for the instance. - - Select **Generate Data Plane Key**. - - Copy and securely store the generated key, as it is displayed only once. - -7. **Use an existing data plane key** (choose this option if you already have a key): - - - Select **Use existing key**. - - In the **Data Plane Key** field, enter the existing data plane key. - -{{}} - -{{%tab name="Virtual Machine or Bare Metal"%}} - -8. Run the provided command, which includes the data plane key, in your NGINX instance terminal to register the instance with NGINX One. -9. Select **Done** to complete the process. - -{{%/tab%}} - -{{%tab name="Docker Container"%}} - -8. **Log in to the NGINX private registry**: - - - Replace `YOUR_JWT_HERE` with your JSON Web Token (JWT) from [MyF5](https://my.f5.com/manage/s/). - - ```shell - sudo docker login private-registry.nginx.com --username=YOUR_JWT_HERE --password=none - ``` - -9. **Pull the Docker image**: - - - From the **OS Type** list, choose the appropriate operating system for your Docker image. - - After selecting the OS, run the provided command to pull the Docker image. - - **Note**: Subject to availability, you can modify the `agent: ` to match the specific NGINX Plus version, OS type, and OS version you need. For example, you might use `agent: r32-ubi-9`. For more details on version tags and how to pull an image, see [Deploying NGINX and NGINX Plus on Docker]({{< ref "nginx/admin-guide/installing-nginx/installing-nginx-docker.md#pulling-the-image" >}}). - -10. 
Run the provided command, which includes the data plane key, in your NGINX instance terminal to start the Docker container. - -11. Select **Done** to complete the process. - -{{%/tab%}} - -{{}} - -{{}} - -Data plane keys are required for registering NGINX instances with the NGINX One Console. These keys serve as secure tokens, ensuring that only authorized instances can connect and communicate with NGINX One. - -For more details on creating and managing data plane keys, see [Create and manage data plane keys]({{}}). - -{{}} - -### Change the config sync group for an instance - -If you need to move an NGINX instance to a different config sync group, follow these steps: - -1. Open a command-line terminal on the NGINX instance. -2. Open the `/var/lib/nginx-agent/agent-dynamic.conf` file in a text editor. -3. Locate the line that begins with `instance_group:` and change it to the name of the new config sync group. - - ``` text - instance_group: - ``` - -4. Restart NGINX Agent by running the following command: - - ```shell - sudo systemctl restart nginx-agent - ``` - -**Important:** If the instance is the first to join the new config sync group and a group configuration hasn’t been added manually beforehand, the instance’s configuration will automatically become the group’s configuration. Any instances added to this group later will inherit this configuration. - -### Remove an instance from a config sync group - -If you need to remove an NGINX instance from a config sync group without adding it to another group, follow these steps: - -1. Open a command-line terminal on the NGINX instance. -2. Open the `/var/lib/nginx-agent/agent-dynamic.conf` file in a text editor. -3. Locate the line that begins with `instance_group:` and either remove it or comment it out by adding a `#` at the beginning of the line. - - ```text - # instance_group: - ``` - -4. Restart NGINX Agent: - - ```shell - sudo systemctl restart nginx-agent - ``` - -By removing or commenting out this line, the instance will no longer be associated with any config sync group. - -## Add the config sync group configuration - -You can set the configuration for a config sync group in two ways: - -### Define the group configuration manually - -You can manually define the group's configuration before adding any instances. When you add instances to the group later, they automatically inherit this configuration. - -To manually set the group configuration: - -1. Follow steps 1–4 in the [**Create a config sync group**](#create-a-config-sync-group) section to create your config sync group. -2. After creating the group, select the **Configuration** tab. -3. Since no instances have been added, the **Configuration** tab will show an empty configuration with a message indicating that no config files exist yet. -4. To add a configuration, select **Edit Configuration**. -5. In the editor, define your NGINX configuration as needed. This might include adding or modifying `nginx.conf` or other related files. -6. After making your changes, select **Next** to view a split screen showing your changes. -7. If you're satisfied with the configuration, select **Save and Publish**. - -### Use an instance's configuration - -If you don't manually define a group config, the NGINX configuration of the first instance added to a config sync group becomes the group's configuration. Any additional instances added afterward inherit this group configuration. - -To set the group configuration by adding an instance: - -1. 
Follow the steps in the [**Add an existing instance to a config sync group**](#add-an-existing-instance-to-a-config-sync-group) or [**Add a new instance to a config sync group**](#add-a-new-instance-to-a-config-sync-group) sections to add your first instance to the group. -2. The NGINX configuration from this instance will automatically become the group's configuration. -3. You can further edit and publish this configuration by following the steps in the [**Publish the config sync group configuration**](#publish-the-config-sync-group-configuration) section. - -## Publish the config sync group configuration {#publish-the-config-sync-group-configuration} - -After the config sync group is created, you can modify and publish the group's configuration as needed. Any changes made to the group configuration will be applied to all instances within the group. - -1. On the left menu, select **Config Sync Groups**. -2. Select the config sync group in the list. -3. Select the **Configuration** tab to view the group's NGINX configuration. -4. To modify the group's configuration, select **Edit Configuration**. -5. Make the necessary changes to the configuration. -6. When you're finished, select **Next**. A split view displays the changes. -7. If you're satisfied with the changes, select **Save and Publish**. - -Publishing the group configuration ensures that all instances within the config sync group are synchronized with the latest group configuration. This helps maintain consistency across all instances in the group, preventing configuration drift. - -## Understanding config sync statuses - -The **Config Sync Status** column on the **Config Sync Groups** page provides insight into the synchronization state of your NGINX instances within each group. - -{{}} -| **Status** | **Description** | -|-----------------------|------------------------------------------------------------------------------------------------------------------------------------------------------| -| **In Sync** | All instances within the config sync group have configurations that match the group configuration. No action is required. | -| **Out of Sync** | At least one instance in the group has a configuration that differs from the group's configuration. You may need to review and resolve discrepancies to ensure consistency. | -| **Sync in Progress** | An instance is currently being synchronized with the group's configuration. This status appears when an instance is moved to a new group or when a configuration is being applied. | -| **Unknown** | The synchronization status of the instances in this group cannot be determined. This could be due to connectivity issues, instances being offline, or other factors. Investigating the cause of this status is recommended. | -{{}} - -Monitoring the **Config Sync Status** helps ensure that your configurations are consistently applied across all instances in a group, reducing the risk of configuration drift. 
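A quick way to troubleshoot an **Out of Sync** or **Unknown** status is to check the group assignment and NGINX Agent state directly on the instance. A minimal sketch, assuming the default Linux path (FreeBSD uses `/var/db/nginx-agent/`) and a systemd-based host:

```shell
# Show which config sync group this instance reports, if any
grep instance_group /var/lib/nginx-agent/agent-dynamic.conf

# Confirm NGINX Agent is running so the console can read the instance status
sudo systemctl status nginx-agent
```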
- -## See also - -- [Create and manage data plane keys]({{< ref "/nginx-one/how-to/data-plane-keys/create-manage-data-plane-keys.md" >}}) -- [View and edit NGINX configurations]({{< ref "/nginx-one/how-to/nginx-configs/view-edit-nginx-configurations.md" >}}) diff --git a/content/nginx-one/how-to/proxy-setup/_index.md b/content/nginx-one/how-to/proxy-setup/_index.md deleted file mode 100644 index 16f858cc2..000000000 --- a/content/nginx-one/how-to/proxy-setup/_index.md +++ /dev/null @@ -1,6 +0,0 @@ ---- -description: -title: Proxy setup -weight: 600 -url: /nginx-one/how-to/settings/nginx-as-proxy ---- diff --git a/content/nginx-one/how-to/staged-configs/_index.md b/content/nginx-one/how-to/staged-configs/_index.md deleted file mode 100644 index 51e07d1aa..000000000 --- a/content/nginx-one/how-to/staged-configs/_index.md +++ /dev/null @@ -1,6 +0,0 @@ ---- -description: -title: Staged Configurations -weight: 800 -url: /nginx-one/how-to/staged-configs ---- diff --git a/content/nginx-one/how-to/staged-configs/api-staged-config.md b/content/nginx-one/how-to/staged-configs/api-staged-config.md deleted file mode 100644 index ff559d014..000000000 --- a/content/nginx-one/how-to/staged-configs/api-staged-config.md +++ /dev/null @@ -1,29 +0,0 @@ ---- -# We use sentence case and present imperative tone -title: Use the API to manage your Staged Configurations -# Weights are assigned in increments of 100: determines sorting order -weight: 500 -# Creates a table of contents and sidebar, useful for large documents -toc: true -# Types have a 1:1 relationship with Hugo archetypes, so you shouldn't need to change this -type: how-to -# Intended for internal catalogue and search, case sensitive: -product: NGINX One ---- - -You can use F5 NGINX One Console API to manage your Staged Configurations. With our API, you can: - -- [Create an NGINX Staged Configuration]({{< ref "/nginx-one/api/api-reference-guide/#operation/createStagedConfig" >}}) - - Use details to add existing configuration files. -- [Get a list of existing Staged Configurations]({{< ref "/nginx-one/api/api-reference-guide/#operation/listStagedConfigs" >}}) - - Record the `object_id` of your target Staged Configuration for your analysis report. -- [Get an analysis report for an existing Staged Configuration]({{< ref "/nginx-one/api/api-reference-guide/#operation/getStagedConfigReport" >}}) - - Review the same recommendations found in the UI. -- [Export a Staged Configuration]({{< ref "/nginx-one/api/api-reference-guide/#operation/exportStagedConfig" >}}) - - Exports an existing Staged Configuration from the console. It sends you an archive of that configuration in `tar.gz` format. -- [Import a Staged Configuration]({{< ref "/nginx-one/api/api-reference-guide/#operation/importStagedConfig" >}}) - - Imports an existing Staged Configuration from your system and sends it to the console. This REST call assumes that your configuration is archived in `tar.gz` format. -- [Bulk manage multiple Staged Configurations]({{< ref "/nginx-one/api/api-reference-guide/#operation/bulkStagedConfigs" >}}) - - Allows you to delete multiple Staged Configurations. Requires each `object_id`. - - For several API endpoints, we ask for a `conf_path`. Make sure to set an absolute file path. If you make a REST call without an absolute file path, you'll see a 400 error message. 
diff --git a/content/nginx-one/metrics/_index.md b/content/nginx-one/metrics/_index.md new file mode 100644 index 000000000..9602b6a8b --- /dev/null +++ b/content/nginx-one/metrics/_index.md @@ -0,0 +1,6 @@ +--- +description: +title: Set up metrics +weight: 500 +url: /nginx-one/metrics/ +--- diff --git a/content/nginx-one/metrics/enable-metrics.md b/content/nginx-one/metrics/enable-metrics.md new file mode 100644 index 000000000..0e677a78c --- /dev/null +++ b/content/nginx-one/metrics/enable-metrics.md @@ -0,0 +1,23 @@ +--- +# We use sentence case and present imperative tone +title: "Enable metrics" +# Weights are assigned in increments of 100: determines sorting order +weight: i00 +# Creates a table of contents and sidebar, useful for large documents +toc: true +# Types have a 1:1 relationship with Hugo archetypes, so you shouldn't need to change this +nd-content-type: tutorial +# Intended for internal catalogue and search, case sensitive: +# Agent, N4Azure, NIC, NIM, NGF, NAP-DOS, NAP-WAF, NGINX One, NGINX+, Solutions, Unit +nd-product: NGINX-One +--- + +The NGINX One Console dashboard relies on APIs for NGINX Plus and NGINX Open Source Stub Status to report traffic and system metrics. The following sections show you how to enable those metrics. + +### Enable NGINX Plus API + +{{< include "/use-cases/monitoring/enable-nginx-plus-api.md" >}} + +### Enable NGINX Open Source Stub Status API + +{{< include "/use-cases/monitoring/enable-nginx-oss-stub-status.md" >}} diff --git a/content/nginx-one/metrics/review-metrics.md b/content/nginx-one/metrics/review-metrics.md new file mode 100644 index 000000000..2920ca63e --- /dev/null +++ b/content/nginx-one/metrics/review-metrics.md @@ -0,0 +1,23 @@ +--- +# We use sentence case and present imperative tone +title: "Review metrics on the NGINX One dashboard" +# Weights are assigned in increments of 100: determines sorting order +weight: i00 +# Creates a table of contents and sidebar, useful for large documents +toc: true +# Types have a 1:1 relationship with Hugo archetypes, so you shouldn't need to change this +nd-content-type: how-to +# Intended for internal catalogue and search, case sensitive: +# Agent, N4Azure, NIC, NIM, NGF, NAP-DOS, NAP-WAF, NGINX One, NGINX+, Solutions, Unit +nd-product: NGINX-One +--- + +After connecting your NGINX instances to NGINX One, you can monitor their performance and health. The NGINX One dashboard is designed for this purpose, offering an easy-to-use interface. + +### Log in to NGINX One + +1. Log in to [F5 Distributed Console](https://www.f5.com/cloud/products/distributed-cloud-console). +1. Select **NGINX One > Visit Service**. 
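The dashboard and metrics described here depend on the APIs enabled in the previous section. As a rough sketch of the underlying NGINX configuration, assuming port 8080 and local-only access (adjust the listen address and allow list to your environment):

```nginx
# NGINX Plus instances: enable the NGINX Plus API
server {
    listen 127.0.0.1:8080;
    location /api {
        api write=off;
        allow 127.0.0.1;
        deny all;
    }
}

# NGINX Open Source instances: enable the stub_status endpoint instead
server {
    listen 127.0.0.1:8080;
    location /stub_status {
        stub_status;
        allow 127.0.0.1;
        deny all;
    }
}
```

The included files remain the canonical steps; this sketch only illustrates the directives involved.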
+ +{{< include "/use-cases/monitoring/n1c-dashboard-overview.md" >}} + diff --git a/content/nginx-one/nginx-configs/_index.md b/content/nginx-one/nginx-configs/_index.md new file mode 100644 index 000000000..d4a987147 --- /dev/null +++ b/content/nginx-one/nginx-configs/_index.md @@ -0,0 +1,6 @@ +--- +description: +title: Manage your NGINX instances +weight: 300 +url: /nginx-one/nginx-configs +--- diff --git a/content/nginx-one/how-to/nginx-configs/add-file.md b/content/nginx-one/nginx-configs/add-file.md similarity index 78% rename from content/nginx-one/how-to/nginx-configs/add-file.md rename to content/nginx-one/nginx-configs/add-file.md index 7b654d86e..574ba30e4 100644 --- a/content/nginx-one/how-to/nginx-configs/add-file.md +++ b/content/nginx-one/nginx-configs/add-file.md @@ -2,7 +2,7 @@ docs: null title: Add a file to an instance toc: true -weight: 400 +weight: 300 type: - how-to --- @@ -21,7 +21,7 @@ Before you add files in your configuration, ensure: ## Important considerations If your instance is a member of a Config Sync Group, changes that you make may be synchronized to other instances in that group. -For more information, see how you can [Manage Config Sync Groups]({{< ref "/nginx-one/how-to/config-sync-groups/manage-config-sync-groups.md" >}}). +For more information, see how you can [Manage Config Sync Groups]({{< ref "/nginx-one/nginx-configs/config-sync-groups/manage-config-sync-groups.md" >}}). ## Add a file @@ -62,6 +62,6 @@ Enter the name of the desired configuration file, such as `abc.conf` and select ## See also -- [Create and manage data plane keys]({{< ref "/nginx-one/how-to/data-plane-keys/create-manage-data-plane-keys.md" >}}) -- [View and edit NGINX configurations]({{< ref "/nginx-one/how-to/nginx-configs/view-edit-nginx-configurations.md" >}}) -- [Manage certificates]({{< ref "/nginx-one/how-to/certificates/manage-certificates.md" >}}) +- [Create and manage data plane keys]({{< ref "/nginx-one/connect-instances/create-manage-data-plane-keys.md" >}}) +- [Add an NGINX instance]({{< ref "/nginx-one/connect-instances/add-instance.md" >}}) +- [Manage certificates]({{< ref "/nginx-one/nginx-configs/certificates/manage-certificates.md" >}}) diff --git a/content/nginx-one/nginx-configs/certificates/_index.md b/content/nginx-one/nginx-configs/certificates/_index.md new file mode 100644 index 000000000..b97d42034 --- /dev/null +++ b/content/nginx-one/nginx-configs/certificates/_index.md @@ -0,0 +1,6 @@ +--- +description: +title: Monitor your certificates +weight: 500 +url: /nginx-one/nginx-configs/certificates +--- diff --git a/content/nginx-one/how-to/certificates/manage-certificates.md b/content/nginx-one/nginx-configs/certificates/manage-certificates.md similarity index 94% rename from content/nginx-one/how-to/certificates/manage-certificates.md rename to content/nginx-one/nginx-configs/certificates/manage-certificates.md index 0d53b6947..b80d52c3c 100644 --- a/content/nginx-one/how-to/certificates/manage-certificates.md +++ b/content/nginx-one/nginx-configs/certificates/manage-certificates.md @@ -3,6 +3,7 @@ docs: null title: Manage certificates toc: true weight: 100 +aliases: /nginx-one/how-to/certificates/manage-certificates/ type: - how-to --- @@ -33,8 +34,8 @@ From the NGINX One Console you can: You can manage the certificates for: -- [Unique instances]({{< ref "/nginx-one/how-to/nginx-configs/add-file.md#new-ssl-certificate-or-ca-bundle" >}}) -- For all instances that are members of a [Config Sync Group]({{< ref 
"/nginx-one/how-to/config-sync-groups/manage-config-sync-groups/#configuration-management" >}}) +- [Unique instances]({{< ref "/nginx-one/nginx-configs/add-file.md#new-ssl-certificate-or-ca-bundle" >}}) +- For all instances that are members of a [Config Sync Group]({{< ref "/nginx-one/nginx-configs/config-sync-groups/manage-config-sync-groups/#configuration-management" >}}) {{< tip >}} @@ -178,7 +179,7 @@ If you register an instance to NGINX One Console, as described in [Add your NGIN - Are used in their NGINX configuration - Do _not_ match an existing managed SSL certificate/CA bundle -These certificates appear in the list of unmanaged certificates. NGINX One Console does not store unmanaged certs or keys, only metadata associated with certs for monitoring. +These certificates appear in the list of unmanaged certificates. We recommend that you convert your unmanaged certificates. Converting to a managed certificate allows you to centrally manage, update, and deploy a certificate to your data plane from the NGINX One Console. @@ -192,6 +193,6 @@ To convert these cerificates to managed, start with the Certificates menu, and s ## See also -- [Create and manage data plane keys]({{< ref "/nginx-one/how-to/data-plane-keys/create-manage-data-plane-keys.md" >}}) -- [View and edit NGINX configurations]({{< ref "/nginx-one/how-to/nginx-configs/view-edit-nginx-configurations.md" >}}) -- [Add a file in a configuration]({{< ref "/nginx-one/how-to/nginx-configs/add-file.md" >}}) +- [Create and manage data plane keys]({{< ref "/nginx-one/connect-instances/create-manage-data-plane-keys.md" >}}) +- [Add an instance]({{< ref "/nginx-one/connect-instances/add-instance.md" >}}) +- [Add a file in a configuration]({{< ref "/nginx-one/nginx-configs/add-file.md" >}}) diff --git a/content/nginx-one/how-to/nginx-configs/clean-up-unavailable-instances.md b/content/nginx-one/nginx-configs/clean-up-unavailable-instances.md similarity index 99% rename from content/nginx-one/how-to/nginx-configs/clean-up-unavailable-instances.md rename to content/nginx-one/nginx-configs/clean-up-unavailable-instances.md index 6a119617d..9e7979e34 100644 --- a/content/nginx-one/how-to/nginx-configs/clean-up-unavailable-instances.md +++ b/content/nginx-one/nginx-configs/clean-up-unavailable-instances.md @@ -3,7 +3,7 @@ description: '' docs: null title: Clean up unavailable NGINX instances toc: true -weight: 200 +weight: 1000 type: - how-to --- diff --git a/content/nginx-one/nginx-configs/config-sync-groups/_index.md b/content/nginx-one/nginx-configs/config-sync-groups/_index.md new file mode 100644 index 000000000..eaefeaea3 --- /dev/null +++ b/content/nginx-one/nginx-configs/config-sync-groups/_index.md @@ -0,0 +1,6 @@ +--- +description: +title: Change multiple instances with one push +weight: 400 +url: /nginx-one/config-sync-groups +--- diff --git a/content/nginx-one/how-to/config-sync-groups/add-file-csg.md b/content/nginx-one/nginx-configs/config-sync-groups/add-file-csg.md similarity index 76% rename from content/nginx-one/how-to/config-sync-groups/add-file-csg.md rename to content/nginx-one/nginx-configs/config-sync-groups/add-file-csg.md index ad8d31ca0..147b83950 100644 --- a/content/nginx-one/how-to/config-sync-groups/add-file-csg.md +++ b/content/nginx-one/nginx-configs/config-sync-groups/add-file-csg.md @@ -58,10 +58,10 @@ Enter the name of the desired configuration file, such as `abc.conf` and select ### Existing SSL Certificate or CA Bundle {{< include "nginx-one/add-file/existing-ssl-bundle.md" >}} -With this option, You 
can incorporate [Managed certificates]({{< ref "/nginx-one/how-to/certificates/manage-certificates.md#managed-and-unmanaged-certificates" >}}). +With this option, you can incorporate [Managed certificates]({{< ref "/nginx-one/nginx-configs/certificates/manage-certificates.md#managed-and-unmanaged-certificates" >}}). ## See also -- [Create and manage data plane keys]({{< ref "/nginx-one/how-to/data-plane-keys/create-manage-data-plane-keys.md" >}}) -- [View and edit NGINX configurations]({{< ref "/nginx-one/how-to/nginx-configs/view-edit-nginx-configurations.md" >}}) -- [Manage certificates]({{< ref "/nginx-one/how-to/certificates/manage-certificates.md" >}}) +- [Create and manage data plane keys]({{< ref "/nginx-one/connect-instances/create-manage-data-plane-keys.md" >}}) +- [Add an NGINX instance]({{< ref "/nginx-one/connect-instances/add-instance.md" >}}) +- [Manage certificates]({{< ref "/nginx-one/nginx-configs/certificates/manage-certificates.md" >}}) diff --git a/content/nginx-one/how-to/config-sync-groups/manage-config-sync-groups.md b/content/nginx-one/nginx-configs/config-sync-groups/manage-config-sync-groups.md similarity index 93% rename from content/nginx-one/how-to/config-sync-groups/manage-config-sync-groups.md rename to content/nginx-one/nginx-configs/config-sync-groups/manage-config-sync-groups.md index d686e713e..d5811fe1a 100644 --- a/content/nginx-one/how-to/config-sync-groups/manage-config-sync-groups.md +++ b/content/nginx-one/nginx-configs/config-sync-groups/manage-config-sync-groups.md @@ -37,7 +37,7 @@ Config Sync Groups support configuration inheritance and persistance. If you've On the other hand, if you remove all instances from a Config Sync Group, the original configuration persists. In other words, the group retains the configuration from that first instance (or the original configuration). Any new instance that you add later still inherits that configuration. -{{< tip >}}You can use _unmanaged_ certificates. NGINX One Console does not store unmanaged certs or keys, only metadata associated with the certs or keys for monitoring. Your actions can affect the [Config Sync Group status](#config-sync-group-status). For future instances on the data plane, if it: +{{< tip >}}You can use _unmanaged_ certificates. Your actions can affect the [Config Sync Group status](#config-sync-group-status). For future instances on the data plane, if it: - Has unmanaged certificates in the same file paths as defined by the NGINX configuration as the Config Sync Group, that instance will be [**In Sync**](#config-sync-group-status). - Will be [**Out of Sync**](#config-sync-group-status) if the instance: @@ -100,12 +100,6 @@ Now that you created a Config Sync Group, you can add instances to that group. A Any instance that joins the group afterwards inherits that configuration. -{{< note >}} If you see the following [Config Sync Group Status](#config-sync-group-status) message: **Out of Sync**: - - - Review the instance details in NGINX One Console to identify any publication problems. - - After you change the configuration of the Config Sync Group, [Publish it](#publish-the-config-sync-group-configuration]. -In that case, review and resolve discrepancies between the Instance and the rest of the Config Sync Group. {{< /note >}} - ### Add an existing instance to a Config Sync Group {#add-an-existing-instance-to-a-config-sync-group} You can add existing NGINX instances that are already registered with NGINX One to a Config Sync Group. 
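The numbered steps follow later in this file; as a compressed sketch, joining an existing, registered instance to a group comes down to setting `instance_group` in the NGINX Agent dynamic configuration and restarting the agent. The group name below is hypothetical, and the path is the default Linux location (FreeBSD uses `/var/db/nginx-agent/`):

```shell
# Point NGINX Agent at the target Config Sync Group
echo 'instance_group: my-config-sync-group' | sudo tee -a /var/lib/nginx-agent/agent-dynamic.conf

# Restart the agent so the instance joins the group
sudo systemctl restart nginx-agent
```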
@@ -188,7 +182,7 @@ When adding a new NGINX instance that is not yet registered with NGINX One, you Data plane keys are required for registering NGINX instances with the NGINX One Console. These keys serve as secure tokens, ensuring that only authorized instances can connect and communicate with NGINX One. -For more details on creating and managing data plane keys, see [Create and manage data plane keys]({{}}). +For more details on creating and managing data plane keys, see [Create and manage data plane keys]({{}}). {{}} @@ -263,5 +257,5 @@ Monitor the **Config Sync Status** column. It can help you ensure that your conf ## See also -- [Create and manage data plane keys]({{< ref "/nginx-one/how-to/data-plane-keys/create-manage-data-plane-keys.md" >}}) -- [View and edit NGINX configurations]({{< ref "/nginx-one/how-to/nginx-configs/view-edit-nginx-configurations.md" >}}) +- [Create and manage data plane keys]({{< ref "/nginx-one/connect-instances/create-manage-data-plane-keys.md" >}}) +- [Add an NGINX instance]({{< ref "/nginx-one/connect-instances/add-instance.md" >}}) diff --git a/content/nginx-one/how-to/nginx-configs/view-edit-nginx-configurations.md b/content/nginx-one/nginx-configs/view-edit-nginx-configurations.md similarity index 66% rename from content/nginx-one/how-to/nginx-configs/view-edit-nginx-configurations.md rename to content/nginx-one/nginx-configs/view-edit-nginx-configurations.md index 37d4fb6f5..f775520f8 100644 --- a/content/nginx-one/how-to/nginx-configs/view-edit-nginx-configurations.md +++ b/content/nginx-one/nginx-configs/view-edit-nginx-configurations.md @@ -1,8 +1,8 @@ --- # We use sentence case and present imperative tone -title: View and edit NGINX configurations +title: View and edit an NGINX instance # Weights are assigned in increments of 100: determines sorting order -weight: 300 +weight: 200 # Creates a table of contents and sidebar, useful for large documents toc: true # Types have a 1:1 relationship with Hugo archetypes, so you shouldn't need to change this @@ -12,17 +12,7 @@ product: NGINX One --- -## Overview - -This guide explains how to add a **Instances** to your NGINX One Console. - -## Before you start - -Before you add **Instances** to NGINX One Console, ensure: - -- You have an NGINX One Console account with staged configuration permissions.``` - -Once you've registered your NGINX Instances with the F5 NGINX One Console, you can view and edit their NGINX configurations on the **Instances** details page. +This guide explains how to edit the configuration of an existing **Instance** in your NGINX One Console. To view and edit an NGINX configuration, follow these steps: @@ -34,8 +24,8 @@ To view and edit an NGINX configuration, follow these steps: 6. When you are satisfied with the changes, select **Next**. 7. Compare and verify your changes before selecting **Save and Publish** to publish the edited configuration. -Alternatively, you can select **Save Changes As**. In the window that appears, you can set up this instance as a [**Staged Configuration**]({{< ref "/nginx-one/how-to/staged-configs/_index.md" >}}). +Alternatively, you can select **Save Changes As**. In the window that appears, you can set up this instance as a [**Staged Configuration**]({{< ref "/nginx-one/staged-configs/_index.md" >}}). 
## See also -- [Manage Config Sync Groups]({{< ref "/nginx-one/how-to/config-sync-groups/manage-config-sync-groups.md" >}}) +- [Manage Config Sync Groups]({{< ref "/nginx-one/nginx-configs/config-sync-groups/manage-config-sync-groups.md" >}}) diff --git a/content/nginx-one/rbac/_index.md b/content/nginx-one/rbac/_index.md index a1f7050ff..447611e56 100644 --- a/content/nginx-one/rbac/_index.md +++ b/content/nginx-one/rbac/_index.md @@ -1,6 +1,6 @@ --- -title: Role-based access control +title: Organize users with RBAC description: -weight: 300 +weight: 600 url: /nginx-one/rbac --- diff --git a/content/nginx-one/rbac/overview.md b/content/nginx-one/rbac/overview.md index ccab68d4b..2bcdfc17b 100644 --- a/content/nginx-one/rbac/overview.md +++ b/content/nginx-one/rbac/overview.md @@ -1,5 +1,5 @@ --- -title: "Role-based access control overview" +title: "Learn about Role-based access control" weight: 400 toc: true type: reference diff --git a/content/nginx-one/rbac/rbac-api.md b/content/nginx-one/rbac/rbac-api.md index 82953365a..11e90cfc3 100644 --- a/content/nginx-one/rbac/rbac-api.md +++ b/content/nginx-one/rbac/rbac-api.md @@ -1,5 +1,5 @@ --- -title: "Custom roles and API groups" +title: "Set up custom roles with API groups" weight: 500 toc: true type: reference diff --git a/content/nginx-one/rbac/roles.md b/content/nginx-one/rbac/roles.md index 646f0d5cb..e2d33a15b 100644 --- a/content/nginx-one/rbac/roles.md +++ b/content/nginx-one/rbac/roles.md @@ -1,5 +1,5 @@ --- -title: "Default roles" +title: "Review default roles" weight: 500 toc: true type: reference diff --git a/content/nginx-one/staged-configs/_index.md b/content/nginx-one/staged-configs/_index.md new file mode 100644 index 000000000..1305546f1 --- /dev/null +++ b/content/nginx-one/staged-configs/_index.md @@ -0,0 +1,6 @@ +--- +description: +title: Draft new configurations +weight: 400 +url: /nginx-one/staged-configs +--- diff --git a/content/nginx-one/how-to/staged-configs/add-staged-config.md b/content/nginx-one/staged-configs/add-staged-config.md similarity index 96% rename from content/nginx-one/how-to/staged-configs/add-staged-config.md rename to content/nginx-one/staged-configs/add-staged-config.md index e69c0da78..042779b8b 100644 --- a/content/nginx-one/how-to/staged-configs/add-staged-config.md +++ b/content/nginx-one/staged-configs/add-staged-config.md @@ -6,10 +6,10 @@ weight: 100 # Creates a table of contents and sidebar, useful for large documents toc: true # Types have a 1:1 relationship with Hugo archetypes, so you shouldn't need to change this -nd-content-type: how-to +type: tutorial # Intended for internal catalogue and search, case sensitive: # Agent, N4Azure, NIC, NIM, NGF, NAP-DOS, NAP-WAF, NGINX One, NGINX+, Solutions, Unit -nd-product: NGINX One +product: NGINX-One --- ## Overview @@ -33,7 +33,7 @@ You can add a Staged Configuration from: - An existing Config Sync Group - An existing Staged Configuration -To start the process from NGINX One Console, select **Manage > Staged Configurations**. Select **Add Staged Configuration**. +To start the process from NGINX One Console, select **Manage > Staged Configruations**. Select **Add Staged Configuration**. The following sections start from the **Add Staged Configuration** window that appears. 
diff --git a/content/nginx-one/staged-configs/api-staged-config.md b/content/nginx-one/staged-configs/api-staged-config.md new file mode 100644 index 000000000..18c276ae3 --- /dev/null +++ b/content/nginx-one/staged-configs/api-staged-config.md @@ -0,0 +1,20 @@ +--- +# We use sentence case and present imperative tone +title: Use the API to manage your Staged Configurations +# Weights are assigned in increments of 100: determines sorting order +weight: 300 +# Creates a table of contents and sidebar, useful for large documents +toc: true +# Types have a 1:1 relationship with Hugo archetypes, so you shouldn't need to change this +type: tutorial +# Intended for internal catalogue and search, case sensitive: +product: NGINX-One +--- + +You can use F5 NGINX One Console API to manage your Staged Configurations. With our API, you can: + +- [Create an NGINX Staged Configuration]({{< ref "/nginx-one/api/api-reference-guide/#operation/createStagedConfig" >}}) + - The details allow you to add existing configuration files. +- [Get a list of existing Staged Configurations]({{< ref "/nginx-one/api/api-reference-guide/#operation/listStagedConfigs" >}}) + - Be sure to record the `object_id` of your target Staged Configuration for your analysis report. +- [Get an analysis report for an existing Staged Configuration]({{< ref "/nginx-one/api/api-reference-guide/#operation/getStagedConfigReport" >}}) diff --git a/content/nginx-one/how-to/staged-configs/edit-staged-config.md b/content/nginx-one/staged-configs/edit-staged-config.md similarity index 100% rename from content/nginx-one/how-to/staged-configs/edit-staged-config.md rename to content/nginx-one/staged-configs/edit-staged-config.md diff --git a/content/nginx-one/how-to/staged-configs/import-export-staged-config.md b/content/nginx-one/staged-configs/import-export-staged-config.md similarity index 94% rename from content/nginx-one/how-to/staged-configs/import-export-staged-config.md rename to content/nginx-one/staged-configs/import-export-staged-config.md index cbb77d2d5..68993ee99 100644 --- a/content/nginx-one/how-to/staged-configs/import-export-staged-config.md +++ b/content/nginx-one/staged-configs/import-export-staged-config.md @@ -26,7 +26,7 @@ Before you import or export a Staged Configuration to NGINX One Console, ensure: - You have an NGINX One Console account with staged configuration permissions. -You can also import, export, and manage multiple Staged Configurations through [the API]({{< ref "/nginx-one/how-to/staged-configs/api-staged-config.md" >}}). +You can also import, export, and manage multiple Staged Configurations through [the API]({{< ref "/nginx-one/staged-configs/api-staged-config.md" >}}). ## Considerations @@ -36,6 +36,8 @@ When you work with such archives, consider the following: - Do _not_ unpack archives directly to your NGINX configuration directories. You do not want to accidentally overwrite existing configuration files. - The files are set to a default file permission mode of 0644. - Do not include files with secrets or personally identifying information. +- We ignore hidden files. + - If you import or export such files in archives, NGINX One Console does not include those files. - The size of the archive is limited to 5 MB. The size of all uncompressed files in the archive is limited to 10 MB. 
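For example, to respect these limits and avoid touching live configuration directories, you might build and inspect archives in a scratch location; the paths below are illustrative:

```shell
# Package a staged configuration for import (the archive must stay under 5 MB)
tar -czf staged-config.tar.gz -C ~/staged-config .

# List the contents of an exported archive before unpacking anything
tar -tzf exported-config.tar.gz

# Unpack into a scratch directory, never directly into your NGINX configuration directory
mkdir -p ~/exported-review
tar -xzf exported-config.tar.gz -C ~/exported-review
```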
{{< tip >}} diff --git a/layouts/partials/list-main.html b/layouts/partials/list-main.html new file mode 100644 index 000000000..be455babf --- /dev/null +++ b/layouts/partials/list-main.html @@ -0,0 +1,119 @@ +{{/* TODO: Delete this page, and use the one from nginx-hugo-them */}} +
+ {{ $PageTitle := .Title }} +
+ +
+ + {{ if or (lt .WordCount 1) (eq $PageTitle "F5 NGINX One Console") (eq $PageTitle "F5 NGINX App Protect DoS") (eq $PageTitle "F5 NGINX Plus") }} +
+
+
+ {{ range .Pages.GroupBy "Section" }} + {{ range .Pages.ByWeight }} +
+
+

+ + {{ .Title }} +

+ {{ if and (eq $PageTitle "F5 NGINX One Console") (eq .Title "Manage your NGINX fleet")}} +

Simplify, scale, secure, and collaborate with your NGINX fleet

+ {{ end }} + {{ if and (eq $PageTitle "F5 NGINX One Console") (eq .Title "Get started")}} +

See benefits from the NGINX One Console

+ {{ end }} + {{ if and (eq $PageTitle "F5 NGINX One Console") (eq .Title "Draft new configurations")}} +

Work with Staged Configurations

+ {{ end }} + + {{ if and (eq $PageTitle "F5 NGINX One Console") (eq .Title "Manage your NGINX instances")}} +

Monitor and maintain your deployments

+ {{ end }} + {{ if and (eq $PageTitle "F5 NGINX One Console") (eq .Title "Organize users with RBAC")}} +

Assign responsibilities with role-based access control

+ {{ end }} + {{ if and (eq $PageTitle "F5 NGINX One Console") (eq .Title "Automate with the NGINX One API")}} +

Manage your NGINX fleet over REST

+ {{ end }} + {{ if and (eq $PageTitle "F5 NGINX One Console") (eq .Title "Glossary")}} +

Learn terms unique to NGINX One Console

+ {{ end }} + {{ if and (eq $PageTitle "F5 NGINX One Console") (eq .Title "Connect your instances") }} +

Work with data plane keys, containers, and proxy servers

+ {{ end }} + {{ if and (eq $PageTitle "F5 NGINX One Console") (eq .Title "Set up metrics") }} +

Review your deployments in a dashboard

+ {{ end }} + {{ if and (eq $PageTitle "F5 NGINX One Console") (eq .Title "API")}} +

These are API docs

+ + {{ end }} + {{ if and (eq $PageTitle "F5 NGINX One Console") (eq .Title "Changelog") }} + {{ partial "changelog-date.html" . }} + {{ end }} +
+
+ {{ end }} + {{ end }} +
+ {{ if eq $PageTitle "F5 NGINX One Console" }} +

Other Products

+ {{ $nginxProducts := slice + (dict "title" "NGINX Instance Manager" "url" "/nginx-instance-manager" "imgSrc" "NGINX-Instance-Manager-product-icon" "type" "local-console-option" "description" "Track and control NGINX Open Source and NGINX Plus instances.") + (dict "title" "NGINX Ingress Controller" "url" "/nginx-ingress-controller" "imgSrc" "NGINX-Ingress-Controller-product-icon" "type" "kubernetes-solutions" "description" "Kubernetes traffic management with API gateway, identity, and observability features.") + (dict "title" "NGINX Gateway Fabric" "url" "/nginx-gateway-fabric" "imgSrc" "NGINX-product-icon" "type" "kubernetes-solutions" "description" "Next generation Kubernetes connectivity using the Gateway API.") + (dict "title" "NGINX App Protect WAF" "url" "/nginx-app-protect-waf" "imgSrc" "NGINX-App-Protect-WAF-product-icon" "type" "security" "description" "Lightweight, high-performance, advanced protection against Layer 7 attacks on your apps and APIs.") + (dict "title" "NGINX App Protect DoS" "url" "/nginx-app-protect-dos" "imgSrc" "NGINX-App-Protect-DoS-product-icon" "type" "security" "description" "Defend, adapt, and mitigate against Layer 7 denial-of-service attacks on your apps and APIs.") + (dict "title" "NGINX Plus" "url" "/nginx" "imgSrc" "NGINX-Plus-product-icon-RGB" "type" "modern-app-delivery" "description" "The all-in-one load balancer, reverse proxy, web server, content cache, and API gateway.") + (dict "title" "NGINX Open Source" "url" "https://nginx.org/en/docs/" "imgSrc" "NGINX-product-icon" "type" "modern-app-delivery" "description" "The open source all-in-one load balancer, content cache, and web server") + }} + {{ $groupedProducts := dict + "local-console-option" (where $nginxProducts "type" "local-console-option") + "kubernetes-solutions" (where $nginxProducts "type" "kubernetes-solutions") + "security" (where $nginxProducts "type" "security") + "modern-app-delivery" (where $nginxProducts "type" "modern-app-delivery") + }} + {{ range $type, $products := $groupedProducts }} +
+

{{ $type | humanize | title }}

+ {{ range $products }} +
+
+

+ + {{ .title }} +

+

+ {{ if .description }}{{ .description | markdownify }}{{ end }} +

+
+
+ {{ end }} +
+ {{ end }} + {{ end }} +
+
+ {{end}} +
From aa066e037e158284e12b2683b225c56d7904bcb5 Mon Sep 17 00:00:00 2001 From: Alan Dooley Date: Fri, 30 May 2025 15:13:15 +0100 Subject: [PATCH 04/24] feat: Update r33 notice banner (#631) This commit updates the r33 notice banner that was converted into the new banner system with the latest theme release. The original link was a series of nested relative links: it replaces it with a Hugo reference. --- _banners/upgrade-r33.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/_banners/upgrade-r33.md b/_banners/upgrade-r33.md index 97cb82155..bed7946f9 100644 --- a/_banners/upgrade-r33.md +++ b/_banners/upgrade-r33.md @@ -1,5 +1,5 @@ {{< banner "caution" "NGINX Plus R33 requires NGINX Instance Manager 2.18 or later" >}} If your NGINX data plane instances are running NGINX Plus R33 or later, you must upgrade to NGINX Instance Manager 2.18 or later to support usage reporting. NGINX Plus R33 instances must report usage data to the F5 licensing endpoint or NGINX Instance Manager. Otherwise, they will stop processing traffic.

- For more details about usage reporting and enforcement, see [About solution licenses](../../../../solutions/about-subscription-licenses) + For more details about usage reporting and enforcement, see [About solution licenses]({{< ref "/solutions/about-subscription-licenses.md" >}}) {{}} \ No newline at end of file From c2a7f674dfedba9e72250cc289c34c47bfbcedce Mon Sep 17 00:00:00 2001 From: yar Date: Fri, 30 May 2025 15:52:48 +0100 Subject: [PATCH 05/24] Added HTTP/3 to Tech Specs. (#569) --- content/nginx/technical-specs.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/content/nginx/technical-specs.md b/content/nginx/technical-specs.md index c65fcb06d..5405859c6 100644 --- a/content/nginx/technical-specs.md +++ b/content/nginx/technical-specs.md @@ -163,9 +163,10 @@ See [Sizing Guide for Deploying NGINX Plus on Bare Metal Servers](https://www.ng - [Limit Requests](https://nginx.org/en/docs/http/ngx_http_limit_req_module.html) – Limit rate of request processing for a client IP address or other keyed value - [Limit Responses](https://nginx.org/en/docs/http/ngx_http_core_module.html#limit_rate) – Limit rate of responses per client connection -### HTTP/2 and SSL/TLS +### HTTP/2, HTTP/3 and SSL/TLS - [HTTP/2](https://nginx.org/en/docs/http/ngx_http_v2_module.html) – Process HTTP/2 traffic +- [HTTP/3](https://nginx.org/en/docs/http/ngx_http_v3_module.html) – Process HTTP/3 traffic - [SSL/TLS](https://nginx.org/en/docs/http/ngx_http_ssl_module.html) – Process HTTPS traffic ### Mail From 85275b634b8ddd78d09b04716666c401566de211 Mon Sep 17 00:00:00 2001 From: yar Date: Fri, 30 May 2025 16:00:04 +0100 Subject: [PATCH 06/24] Applied formatting to some examples. (#570) --- .../installing-nginx/installing-nginx-plus.md | 19 ++++++++++++++----- 1 file changed, 14 insertions(+), 5 deletions(-) diff --git a/content/nginx/admin-guide/installing-nginx/installing-nginx-plus.md b/content/nginx/admin-guide/installing-nginx/installing-nginx-plus.md index 3ca6da52a..aed4c97fe 100644 --- a/content/nginx/admin-guide/installing-nginx/installing-nginx-plus.md +++ b/content/nginx/admin-guide/installing-nginx/installing-nginx-plus.md @@ -171,17 +171,26 @@ NGINX Plus can be installed on the following versions of Debian or Ubuntu: - **For Debian**: ```shell - sudo apt update - sudo apt install apt-transport-https lsb-release ca-certificates wget gnupg2 debian-archive-keyring + sudo apt update && \ + sudo apt install apt-transport-https \ + lsb-release \ + ca-certificates \ + wget \ + gnupg2 \ + debian-archive-keyring ``` - **For Ubuntu**: ```shell - sudo apt update - sudo apt install apt-transport-https lsb-release ca-certificates wget gnupg2 ubuntu-keyring + sudo apt update && \ + sudo apt install apt-transport-https \ + lsb-release \ + ca-certificates \ + wget \ + gnupg2 \ + ubuntu-keyring ``` - 1. 
Download and add NGINX signing key: ```shell From fd036dae36c8c0e00682a4ebaaabbb5d1c58f2bd Mon Sep 17 00:00:00 2001 From: Mike Jang <3287976+mjang@users.noreply.github.com> Date: Fri, 30 May 2025 11:02:32 -0700 Subject: [PATCH 07/24] enhancement: Move legal notice to an appropriate location (#633) * enhancement: Move legal notice to an appropriate location * Update content/nginx-one/glossary.md Co-authored-by: yar --------- Co-authored-by: yar --- content/nginx-one/about.md | 4 ---- content/nginx-one/glossary.md | 4 ++++ 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/content/nginx-one/about.md b/content/nginx-one/about.md index 6d0dd4594..72e06d6bb 100644 --- a/content/nginx-one/about.md +++ b/content/nginx-one/about.md @@ -19,7 +19,3 @@ NGINX One offers the following key benefits: - **Performance optimization**: Track your NGINX versions and receive recommendations for tuning your configurations for better performance. - **Graphical Metrics Display**: Access a dashboard that shows key metrics for your NGINX instances, including instance availability, version distribution, system health, and utilization trends. - **Real-time alerts**: Receive alerts about critical issues. - -## Legal notice: Licensing agreements for NGINX products - -Using NGINX One is subject to our End User Service Agreement (EUSA). For [NGINX Plus]({{< ref "/nginx" >}}), usage is governed by the End User License Agreement (EULA). Open source projects, including [NGINX Agent](https://github.com/nginx/agent) and [NGINX OSS](https://github.com/nginx/nginx), are covered under their respective licenses. For more details on these licenses, follow the provided links. diff --git a/content/nginx-one/glossary.md b/content/nginx-one/glossary.md index 33b817f55..04951c14c 100644 --- a/content/nginx-one/glossary.md +++ b/content/nginx-one/glossary.md @@ -22,6 +22,10 @@ This glossary defines terms used in the F5 NGINX One Console and F5 Distributed | **Tenant** | A tenant in F5 Distributed Cloud is an entity that owns a specific set of configuration and infrastructure. It is fundamental for isolation, meaning a tenant cannot access objects or infrastructure of other tenants. Tenants can be either individual or enterprise, with the latter allowing multiple users with role-based access control (RBAC). | {{
}} +## Legal notice: Licensing agreements for NGINX products + +Using NGINX One is subject to our End User Service Agreement (EUSA). For [NGINX Plus]({{< ref "/nginx" >}}), usage is governed by the End User License Agreement (EULA). Open source projects, including [NGINX Agent](https://github.com/nginx/agent) and [NGINX Open Source](https://github.com/nginx/nginx), are covered under their respective licenses. For more details on these licenses, follow the provided links. + --- ## References From 3d35ecf20c51f4245d77f913ba25d5f7a318e3fd Mon Sep 17 00:00:00 2001 From: Mike Jang <3287976+mjang@users.noreply.github.com> Date: Fri, 25 Apr 2025 10:28:57 -0700 Subject: [PATCH 08/24] Draft: new N1C doc homepage --- .../agent/installation/install-agent-api.md | 83 +- .../includes/nap-waf/build-nginx-image-cmd.md | 2 +- .../nginx-one/add-file/existing-ssl-bundle.md | 4 + .../learn-about-deployment.md | 6 +- content/nap-waf/v4/admin-guide/install.md | 4 +- content/nap-waf/v5/admin-guide/compiler.md | 2 +- content/ngf/overview/custom-policies.md | 9 +- content/ngf/overview/product-telemetry.md | 3 +- content/nginx-one/api/_index.md | 4 + content/nginx-one/certificates/_index.md | 6 + .../certificates/manage-certificates.md | 197 +++++ content/nginx-one/changelog.md | 12 + .../nginx-one/config-sync-groups/_index.md | 6 + .../config-sync-groups/add-file-csg.md | 67 ++ .../manage-config-sync-groups.md | 261 ++++++ content/nginx-one/glossary.md | 8 + content/nginx-one/how-to/_index.md | 6 + content/nginx-one/nginx-configs/_index.md | 4 + content/nginx-one/nginx-configs/add-file.md | 14 + .../nginx-one/nginx-configs/add-instance.md | 75 ++ .../clean-up-unavailable-instances.md | 4 + .../manage-config-sync-groups.md | 239 ++++++ .../view-edit-nginx-configurations.md | 24 + content/nginx-one/rbac/_index.md | 6 + content/nginx-one/staged-configs/_index.md | 6 + .../staged-configs/add-staged-config.md | 4 + .../staged-configs/api-staged-config.md | 4 + .../load-balancer/tcp-udp-load-balancer.md | 40 +- .../load-balancer/udp-health-check.md | 160 +--- .../manage-waf-security-policies.md | 763 ++++++++++++------ .../overview-nap-waf-config-management.md | 72 +- .../waf-config-management.md | 30 + 32 files changed, 1606 insertions(+), 519 deletions(-) create mode 100644 content/nginx-one/certificates/_index.md create mode 100644 content/nginx-one/certificates/manage-certificates.md create mode 100644 content/nginx-one/config-sync-groups/_index.md create mode 100644 content/nginx-one/config-sync-groups/add-file-csg.md create mode 100644 content/nginx-one/config-sync-groups/manage-config-sync-groups.md create mode 100644 content/nginx-one/how-to/_index.md create mode 100644 content/nginx-one/nginx-configs/add-instance.md create mode 100644 content/nginx-one/nginx-configs/manage-config-sync-groups.md create mode 100644 content/nim/nginx-app-protect/waf-config-management.md diff --git a/content/includes/agent/installation/install-agent-api.md b/content/includes/agent/installation/install-agent-api.md index 15009d21f..95a9650aa 100644 --- a/content/includes/agent/installation/install-agent-api.md +++ b/content/includes/agent/installation/install-agent-api.md @@ -1,75 +1,74 @@ ---- -docs: DOCS-1031 -files: - - content/nim/nginx-app-protect/setup-waf-config-management.md ---- - -{{}}Make sure `gpg` is installed on your system before continuing. 
You can install NGINX Agent using command-line tools like `curl` or `wget`.{{}} - -If your NGINX Instance Manager host doesn't use valid TLS certificates, you can use the insecure flags to bypass verification. Here are some example commands: +**Note**: To complete this step, make sure that `gpg` is installed on your system. You can install NGINX Agent using various command-line tools like `curl` or `wget`. If your NGINX Instance Manager host is not set up with valid TLS certificates, you can use the insecure flags provided by those tools. See the following examples: {{}} {{%tab name="curl"%}} -- **Secure:** +- Secure: ```bash - curl https:///install/nginx-agent | sudo sh + curl https:///install/nginx-agent | sudo sh ``` -- **Insecure:** +- Insecure: ```bash - curl --insecure https:///install/nginx-agent | sudo sh + curl --insecure https:///install/nginx-agent | sudo sh ``` -To add the instance to a specific instance group during installation, use the `--instance-group` (or `-g`) flag: + You can add your NGINX instance to an existing instance group or create one using `--instance-group` or `-g` flag when installing NGINX Agent. + + The following example shows how to download and run the script with the optional `--instance-group` flag adding the NGINX instance to the instance group **my-instance-group**: + + ```bash + curl https:///install/nginx-agent > install.sh; chmod u+x install.sh + sudo ./install.sh --instance-group my-instance-group + ``` -```shell -curl https:///install/nginx-agent -o install.sh -chmod u+x install.sh -sudo ./install.sh --instance-group -``` + By default, the install script attempts to use a secure connection when downloading packages. If, however, the script cannot create a secure connection, it uses an insecure connection instead and logs the following warning message: -By default, the install script uses a secure connection to download packages. If it can’t establish one, it falls back to an insecure connection and logs this message: + ``` text + Warning: An insecure connection will be used during this nginx-agent installation + ``` -```text -Warning: An insecure connection will be used during this nginx-agent installation -``` + To require a secure connection, you can set the optional flag `skip-verify` to `false`. -To enforce a secure connection, set the `--skip-verify` flag to false: + The following example shows how to download and run the script with an enforced secure connection: -```shell -curl https:///install/nginx-agent -o install.sh -chmod u+x install.sh -sudo ./install.sh --skip-verify false -``` + ```bash + curl https:///install/nginx-agent > install.sh chmod u+x install.sh; chmod u+x install.sh + sudo sh ./install.sh --skip-verify false + ``` {{%/tab%}} {{%tab name="wget"%}} -- **Secure:** - ```shell - wget https:///install/nginx-agent -O - | sudo sh -s --skip-verify false +- Secure: + + ```bash + wget https:///install/nginx-agent -O - | sudo sh -s --skip-verify false ``` -- **Insecure:** +- Insecure: - ```shell - wget --no-check-certificate https:///install/nginx-agent -O - | sudo sh + ```bash + wget --no-check-certificate https:///install/nginx-agent -O - | sudo sh ``` -To add your instance to a group during installation, use the `--instance-group` (or `-g`) flag: + When you install the NGINX Agent, you can use the `--instance-group` or `-g` flag to add your NGINX instance to an existing instance group or to a new group that you specify. 
-```shell -wget https:///install/nginx-agent -O install.sh -chmod u+x install.sh -sudo ./install.sh --instance-group -``` + The following example downloads and runs the NGINX Agent install script with the optional `--instance-group` flag, adding the NGINX instance to the instance group **my-instance-group**: + + ```bash + wget https://gnms1.npi.f5net.com/install/nginx-agent -O install.sh ; chmod u+x install.sh + sudo ./install.sh --instance-group my-instance-group + ``` -{{%/tab%}} +{{%/tab%}} {{}} + + + diff --git a/content/includes/nap-waf/build-nginx-image-cmd.md b/content/includes/nap-waf/build-nginx-image-cmd.md index fcb89d363..41bf90d03 100644 --- a/content/includes/nap-waf/build-nginx-image-cmd.md +++ b/content/includes/nap-waf/build-nginx-image-cmd.md @@ -10,7 +10,7 @@ To build the image, execute the following command in the directory containing th ```shell -sudo docker build --no-cache --platform linux/amd64 \ +sudo docker build --no-cache \ --secret id=nginx-crt,src=nginx-repo.crt \ --secret id=nginx-key,src=nginx-repo.key \ -t nginx-app-protect-5 . diff --git a/content/includes/nginx-one/add-file/existing-ssl-bundle.md b/content/includes/nginx-one/add-file/existing-ssl-bundle.md index 9afcf9137..151d06103 100644 --- a/content/includes/nginx-one/add-file/existing-ssl-bundle.md +++ b/content/includes/nginx-one/add-file/existing-ssl-bundle.md @@ -2,7 +2,11 @@ docs: --- +<<<<<<< HEAD With this option, you can incorporate [Managed certificates]({{< ref "/nginx-one/nginx-configs/certificates/manage-certificates.md#managed-and-unmanaged-certificates" >}}). +======= +With this option, You can incorporate [Managed certificates]({{< ref "/nginx-one/certificates/manage-certificates.md#managed-and-unmanaged-certificates" >}}). +>>>>>>> c7ce27ce (Draft: new N1C doc homepage) In the **Choose Certificate** drop-down, select the managed certificate of your choice, and select **Add**. You can then: 1. Review details of the certificate. The next steps depend on whether the certificate is a CA bundle or a certificate / key pair. diff --git a/content/nap-dos/deployment-guide/learn-about-deployment.md b/content/nap-dos/deployment-guide/learn-about-deployment.md index 430fd9e2e..df137bd2e 100644 --- a/content/nap-dos/deployment-guide/learn-about-deployment.md +++ b/content/nap-dos/deployment-guide/learn-about-deployment.md @@ -1405,7 +1405,7 @@ You need root permissions to execute the following steps. 6. Create a Docker image: ```shell - docker build --no-cache --platform linux/amd64 -t app-protect-dos . + docker build --no-cache -t app-protect-dos . ``` The `--no-cache` option tells Docker to build the image from scratch and ensures the installation of the latest version of NGINX Plus and NGINX App Protect DoS. If the Dockerfile was previously used to build an image without the `--no-cache` option, the new image uses versions from the previously built image from the Docker cache. @@ -1966,13 +1966,13 @@ Make sure to replace upstream and proxy pass directives in this example with rel For CentOS: ```shell - docker build --no-cache --platform linux/amd64 -t app-protect-dos . + docker build --no-cache -t app-protect-dos . ``` For RHEL: ```shell - docker build --platform linux/amd64 --build-arg RHEL_ORGANIZATION=${RHEL_ORGANIZATION} --build-arg RHEL_ACTIVATION_KEY=${RHEL_ACTIVATION_KEY} --no-cache -t app-protect-dos . + docker build --build-arg RHEL_ORGANIZATION=${RHEL_ORGANIZATION} --build-arg RHEL_ACTIVATION_KEY=${RHEL_ACTIVATION_KEY} --no-cache -t app-protect-dos . 
``` The `--no-cache` option tells Docker to build the image from scratch and ensures the installation of the latest version of NGINX Plus and NGINX App Protect DoS. If the Dockerfile was previously used to build an image without the `--no-cache` option, the new image uses versions from the previously built image from the Docker cache. diff --git a/content/nap-waf/v4/admin-guide/install.md b/content/nap-waf/v4/admin-guide/install.md index c3e0575dc..3158ac9d3 100644 --- a/content/nap-waf/v4/admin-guide/install.md +++ b/content/nap-waf/v4/admin-guide/install.md @@ -939,7 +939,7 @@ If a user other than **nginx** is to be used, note the following: - For Oracle Linux/Debian/Ubuntu/Alpine/Amazon Linux: ```shell - DOCKER_BUILDKIT=1 docker build --no-cache --platform linux/amd64 --secret id=nginx-crt,src=nginx-repo.crt --secret id=nginx-key,src=nginx-repo.key -t app-protect . + DOCKER_BUILDKIT=1 docker build --no-cache --secret id=nginx-crt,src=nginx-repo.crt --secret id=nginx-key,src=nginx-repo.key -t app-protect . ``` The `DOCKER_BUILDKIT=1` enables `docker build` to recognize the `--secret` flag which allows the user to pass secret information to be used in the Dockerfile for building docker images in a safe way that will not end up stored in the final image. This is a recommended practice for the handling of the certificate and private key for NGINX repository access (`nginx-repo.crt` and `nginx-repo.key` files). More information [here](https://docs.docker.com/engine/reference/commandline/buildx_build/#secret). @@ -1289,7 +1289,7 @@ You need root permissions to execute the following steps. - For Oracle Linux/Debian/Ubuntu/Alpine/Amazon Linux: ```shell - DOCKER_BUILDKIT=1 docker build --no-cache --platform linux/amd64 --secret id=nginx-crt,src=nginx-repo.crt --secret id=nginx-key,src=nginx-repo.key -t app-protect-converter . + DOCKER_BUILDKIT=1 docker build --no-cache --secret id=nginx-crt,src=nginx-repo.crt --secret id=nginx-key,src=nginx-repo.key -t app-protect-converter . ``` The `DOCKER_BUILDKIT=1` enables `docker build` to recognize the `--secret` flag which allows the user to pass secret information to be used in the Dockerfile for building docker images in a safe way that will not end up stored in the final image. This is a recommended practice for the handling of the certificate and private key for NGINX repository access (`nginx-repo.crt` and `nginx-repo.key` files). More information [here](https://docs.docker.com/engine/reference/commandline/buildx_build/#secret). diff --git a/content/nap-waf/v5/admin-guide/compiler.md b/content/nap-waf/v5/admin-guide/compiler.md index ea0f28500..dd0e828e4 100644 --- a/content/nap-waf/v5/admin-guide/compiler.md +++ b/content/nap-waf/v5/admin-guide/compiler.md @@ -98,7 +98,7 @@ curl -s https://private-registry.nginx.com/v2/nap/waf-compiler/tags/list --key < Run the command below to build your image, where `waf-compiler-:custom` is an example of the image tag: ```shell - sudo docker build --no-cache --platform linux/amd64 \ + sudo docker build --no-cache \ --secret id=nginx-crt,src=nginx-repo.crt \ --secret id=nginx-key,src=nginx-repo.key \ -t waf-compiler-:custom . 
diff --git a/content/ngf/overview/custom-policies.md b/content/ngf/overview/custom-policies.md index 5aeb99fce..c7e5a785d 100644 --- a/content/ngf/overview/custom-policies.md +++ b/content/ngf/overview/custom-policies.md @@ -17,11 +17,10 @@ The following table summarizes NGINX Gateway Fabric custom policies: {{< bootstrap-table "table table-striped table-bordered" >}} -| Policy | Description | Attachment Type | Supported Target Object(s) | Supports Multiple Target Refs | Mergeable | API Version | -|---------------------------------------------------------------------------------------------|---------------------------------------------------------|-----------------|-------------------------------|-------------------------------|-----------|-------------| -| [ClientSettingsPolicy]({{< ref "/ngf/how-to/traffic-management/client-settings.md" >}}) | Configure connection behavior between client and NGINX | Inherited | Gateway, HTTPRoute, GRPCRoute | No | Yes | v1alpha1 | -| [ObservabilityPolicy]({{< ref "/ngf/how-to/monitoring/tracing.md" >}}) | Define settings related to tracing, metrics, or logging | Direct | HTTPRoute, GRPCRoute | Yes | No | v1alpha2 | -| [UpstreamSettingsPolicy]({{< ref "/ngf/how-to/traffic-management/upstream-settings.md" >}}) | Configure connection behavior between NGINX and backend | Direct | Service | Yes | Yes | v1alpha1 | +| Policy | Description | Attachment Type | Supported Target Object(s) | Supports Multiple Target Refs | Mergeable | API Version | +|---------------------------------------------------------------------------------------|---------------------------------------------------------|-----------------|-------------------------------|-------------------------------|-----------|-------------| +| [ClientSettingsPolicy]({{< ref "/ngf/how-to/traffic-management/client-settings.md" >}}) | Configure connection behavior between client and NGINX | Inherited | Gateway, HTTPRoute, GRPCRoute | No | Yes | v1alpha1 | +| [ObservabilityPolicy]({{< ref "/ngf/how-to/monitoring/tracing.md" >}}) | Define settings related to tracing, metrics, or logging | Direct | HTTPRoute, GRPCRoute | Yes | No | v1alpha1 | {{< /bootstrap-table >}} diff --git a/content/ngf/overview/product-telemetry.md b/content/ngf/overview/product-telemetry.md index 3c73a4cb5..cd9f7a20f 100644 --- a/content/ngf/overview/product-telemetry.md +++ b/content/ngf/overview/product-telemetry.md @@ -32,8 +32,7 @@ Telemetry data is collected once every 24 hours and sent to a service managed by - **Image Build Source:** whether the image was built by GitHub or locally (values are `gha`, `local`, or `unknown`). The source repository of the images is **not** collected. - **Deployment Flags:** a list of NGINX Gateway Fabric Deployment flags that are specified by a user. The actual values of non-boolean flags are **not** collected; we only record that they are either `true` or `false` for boolean flags and `default` or `user-defined` for the rest. - **Count of Resources:** the total count of resources related to NGINX Gateway Fabric. This includes `GatewayClasses`, `Gateways`, `HTTPRoutes`,`GRPCRoutes`, `TLSRoutes`, `Secrets`, `Services`, `BackendTLSPolicies`, `ClientSettingsPolicies`, `NginxProxies`, `ObservabilityPolicies`, `UpstreamSettingsPolicies`, `SnippetsFilters`, and `Endpoints`. The data within these resources is **not** collected. -- **SnippetsFilters Info:** a list of directive-context strings from applied SnippetFilters and a total count per strings. 
The actual value of any NGINX directive is **not** collected. - +- **SnippetsFilters Info**a list of directive-context strings from applied SnippetFilters and a total count per strings. The actual value of any NGINX directive is **not** collected. This data is used to identify the following information: - The flavors of Kubernetes environments that are most popular among our users. diff --git a/content/nginx-one/api/_index.md b/content/nginx-one/api/_index.md index 5b3284d5e..1735740f8 100644 --- a/content/nginx-one/api/_index.md +++ b/content/nginx-one/api/_index.md @@ -1,5 +1,9 @@ --- +<<<<<<< HEAD title: Automate with the NGINX One API +======= +title: NGINX One API +>>>>>>> c7ce27ce (Draft: new N1C doc homepage) description: weight: 700 url: /nginx-one/api diff --git a/content/nginx-one/certificates/_index.md b/content/nginx-one/certificates/_index.md new file mode 100644 index 000000000..0ea28d683 --- /dev/null +++ b/content/nginx-one/certificates/_index.md @@ -0,0 +1,6 @@ +--- +description: +title: Monitor your certificates +weight: 400 +url: /nginx-one/certificates +--- diff --git a/content/nginx-one/certificates/manage-certificates.md b/content/nginx-one/certificates/manage-certificates.md new file mode 100644 index 000000000..13c532e38 --- /dev/null +++ b/content/nginx-one/certificates/manage-certificates.md @@ -0,0 +1,197 @@ +--- +docs: null +title: Manage certificates +toc: true +weight: 100 +type: +- how-to +--- + +## Overview + +This guide explains how you can manage SSL/TLS certificates with the F5 NGINX One Console. Valid certificates support encrypted connections between NGINX and your users. + +You may have separate sets of SSL/TLS certificates, as described in the following table: + +{{}} +| Functionality | Typical file names | Notes | +|-------------------|--------------------------------------------------------------------|----------------------------------------------------------------------------------------| +| Website traffic | /etc/nginx/ssl/example.com.crt
/etc/nginx/ssl/example.com.key | Typically purchased from a Certificate Authority (CA) | +| Repository access | /etc/ssl/nginx/nginx-repo.crt
/etc/ssl/nginx/nginx-repo.key | Supports access to repositories to download and install NGINX packages | +| NGINX Licensing | /etc/ssl/nginx/server.crt
/etc/ssl/nginx/server.key | Supports access to repositories. Based on licenses downloaded from https://my.f5.com/ | +{{
}}
+
+Allowed directories depend on the [NGINX Agent]({{< ref "/nginx-one/getting-started/#install-nginx-agent" >}}). Look for the `/etc/nginx-agent/nginx-agent.conf` file.
+Find the `config_dirs` parameter in that file, as described in the NGINX Agent [Basic configuration](https://docs.nginx.com/nginx-agent/configuration/configuration-overview/#cli-flags--environment-variables).
+You may need to add a directory like `/etc/ssl` to that parameter.
+
+From the NGINX One Console you can:
+
+- Monitor all certificates configured for use by your connected NGINX Instances.
+- Ensure that your certificates are current and correct.
+- Manage your certificates from a central location. This can help you simplify operations and remotely update, rotate, and deploy those certificates.
+
+You can manage the certificates for:
+
+- [Unique instances]({{< ref "/nginx-one/nginx-configs/add-file.md#new-ssl-certificate-or-ca-bundle" >}})
+- All instances that are members of a [Config Sync Group]({{< ref "/nginx-one/config-sync-groups/manage-config-sync-groups/#configuration-management" >}})
+
+{{< tip >}}
+
+If you are managing the certificate from NGINX One Console, we recommend that you avoid directly manipulating the files on the data plane.
+
+{{< /tip >}}
+
+## Before you start
+
+Before you add and manage certificates with the NGINX One Console, make sure:
+
+- You have access to the NGINX One Console
+- You have access through the F5 Distributed Cloud role, as described in the [Authentication]({{< ref "/nginx-one/api/authentication.md" >}}) guide, to manage SSL/TLS certificates
+  - You have the `f5xc-nginx-one-user` role for your account
+- Your SSL/TLS certificates and keys match
+
+### SSL/TLS certificates and more
+
+NGINX One Console supports certificates for access to repositories. You may need a copy of these files from your Certificate Authority (CA) to upload them to NGINX One Console:
+
+- SSL certificate
+  - Example file extensions: .crt, .pem
+- Private key
+  - Example file extensions: .key, .pem
+
+The NGINX One Console allows you to upload these certificates as text and as files. You can also upload your own certificate files (with file extensions such as .crt and .key).
+
+Make sure your certificates, keys, and PEM files use one of the following key algorithms:
+
+- RSA
+- ECC/ECDSA
+
+In other words, any private key of this type should be supported, regardless of the curve types or hashing algorithm.
+
+For example, if you use ECDSA private keys in PEM format, the PEM headers should contain:
+
+```
+-----BEGIN EC PRIVATE KEY-----
+
+-----END EC PRIVATE KEY-----
+
+```
+
+If you use one of these keys, the US National Institute of Standards and Technology, in [Publication 800-57 Part 3 (PDF)](https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-57Pt3r1.pdf), recommends a key size of at least
+2048 bits. It also has recommendations for ECDSA.
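+
+One way to confirm that a certificate and key match before you upload them is to compare their public keys with OpenSSL. This is a minimal check you can run on the host that holds the files; the file names below follow the table above and are only placeholders:
+
+```shell
+# Print a digest of the public key embedded in the certificate
+openssl x509 -noout -pubkey -in /etc/nginx/ssl/example.com.crt | openssl sha256
+
+# Print a digest of the public key derived from the private key
+# (openssl pkey works for both RSA and ECC/ECDSA keys)
+openssl pkey -pubout -in /etc/nginx/ssl/example.com.key | openssl sha256
+```
+
+If the two digests are identical, the certificate and key are a consistent pair.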
+
+### Include certificates in NGINX configuration
+
+For NGINX configuration, these files are typically associated with the following NGINX directives:
+
+- [`ssl_certificate`](https://nginx.org/en/docs/stream/ngx_stream_ssl_module.html#ssl_certificate)
+- [`ssl_certificate_key`](https://nginx.org/en/docs/stream/ngx_stream_ssl_module.html#ssl_certificate_key)
+- [`ssl_trusted_certificate`](https://nginx.org/en/docs/stream/ngx_stream_ssl_module.html#ssl_trusted_certificate)
+- [`ssl_client_certificate`](https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_client_certificate)
+- [`proxy_ssl_certificate`](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_ssl_certificate)
+- [`proxy_ssl_certificate_key`](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_ssl_certificate_key)
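+
+As a minimal sketch, a server block that serves the website-traffic certificate from the table above might reference the deployed files like this. The paths and server name are placeholders; adjust them to match the files you manage in NGINX One Console:
+
+```nginx
+server {
+    listen 443 ssl;
+    server_name example.com;
+
+    # Certificate and key deployed to this instance or Config Sync Group
+    ssl_certificate     /etc/nginx/ssl/example.com.crt;
+    ssl_certificate_key /etc/nginx/ssl/example.com.key;
+
+    root /usr/share/nginx/html;
+}
+```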
+
+## Important considerations
+
+Most websites include valid information from public keys and certificates or CA bundles. However, the NGINX One Console accepts, but provides warnings for these use cases:
+
+- When the public certificate is expired
+- When the leaf certificate part of a certificate chain is expired
+- When any of the components of a CA bundle are expired
+- When the public key does not match the private certificate
+
+In such cases, you may get websites that present "Your connection is not private" warning messages in client web browsers.
+
+## Review existing certificates
+
+Follow these steps to review existing certificates for your instances.
+
+On the left-hand pane, select **Certificates**. In the window that appears, you see:
+
+{{}}
+| Term | Definition |
+|-------------|-------------|
+| **Total** | Total number of certificates available to NGINX One Console |
+| **Valid (31+ days)** | Number of certificates that expire in 31 or more days |
+| **Expires Soon (<31 days)** | Number of certificates that expire in less than 31 days |
+| **Expired** | Number of expired certificates |
+| **Not Ready** | Certificates with a start date in the future |
+| **Managed** | Managed by and stored in the NGINX One Console |
+| **Unmanaged** | Detected by, and not managed by NGINX One Console. To convert to managed, you may need to upload the certificate and key during the process. |
+{{}}
+
+You can **Add Filter** to filter certificates by:
+
+- Name
+- Status
+- Subject Name
+- Type
+
+The Export option supports exports of basic certification file information to a CSV file. It does _not_ include the content of the public certificate or the private key.
+
+## Add a new certificate or bundle
+
+To add a new certificate, select **Add Certificate**.
+
+In the screen that appears, you can add a certificate name. If you don't add a name, NGINX One will add a name for you, based on the expiration date for the certificate.
+
+You can add certificates in the following formats:
+
+- **SSL Certificate and Key**
+- **CA Certificate Bundle**
+
+In each case, you can upload files directly, or enter the content of the certificates in a text box. Once you upload these certificates, you'll see:
+
+- **Certificate Details**, with the Subject Name, start and end dates.
+- **Key Details**, with the encryption key size and algorithm, such as RSA
+
+## Edit an existing certificate or bundle
+
+You can modify existing certificates from the **Certificates** screen. Select the certificate of your choice. Depending on the type of certificate, you'll then see either an **Edit Certificate** or **Edit CA Bundle** option. The NGINX One Console then presents a window with the same options as shown when you [Add a new certificate](#add-a-new-certificate-or-bundle).
+
+If that certificate is already managed as part of a Config Sync Group, the changes you make affect all instances in that group.
+
+## Remove a deployed certificate
+
+You can remove a deployed certificate from an independent instance or from a Config Sync Group. This will remove the certificate's association with the instance or group, but it does not delete the certificate files from the instance(s).
+
+Every instance with a deployed certificate includes paths to certificates in its configuration files. If you remove the deployed file path to one certificate, that change is limited to that one instance.
+
+Every Config Sync Group also includes paths to certificates in its configuration files. If you remove the deployed path to one certificate, that change affects all instances which belong to that Config Sync Group.
+
+## Delete a deployed certificate
+
+To delete a certificate, find the name in the **Certificates** screen. Find the **Actions** column associated with the certificate. Select the ellipsis (`...`) and then select **Delete**. Before deleting that certificate, you should see a warning.
+
+If that certificate is managed and is part of a Config Sync Group, that change affects all instances in that group.
+
+{{< warning >}} Be cautious if you want to delete certificates that are being used by an instance or a Config Sync Group. Deleting such certificates leads to failure in affected NGINX deployments. {{< /warning >}}
+
+## Managed and unmanaged certificates
+
+If you register an instance to NGINX One Console, as described in [Add your NGINX instances to NGINX One]({{< ref "/nginx-one/getting-started.md#add-your-nginx-instances-to-nginx-one" >}}), and the associated SSL/TLS certificates:
+
+- Are used in their NGINX configuration
+- Do _not_ match an existing managed SSL certificate/CA bundle
+
+These certificates appear in the list of unmanaged certificates.
+
+We recommend that you convert your unmanaged certificates. Converting to a managed certificate allows you to centrally manage, update, and deploy a certificate to your data plane from the NGINX One Console.
+
+To convert these certificates to managed, start with the Certificates menu, and select **Unmanaged**. You should see a list of **Unmanaged Certificates or CA Bundles**.
Then: + +- Select a certificate +- Select **Convert to Managed** +- In the window that appears, you can now include the same information as shown in the [Add a new certificate](#add-a-new-certificate) section + + + +## See also + +- [Create and manage data plane keys]({{< ref "/nginx-one/how-to/data-plane-keys/create-manage-data-plane-keys.md" >}}) +- [View and edit NGINX configurations]({{< ref "/nginx-one/nginx-configs/view-edit-nginx-configurations.md" >}}) +- [Add a file in a configuration]({{< ref "/nginx-one/nginx-configs/add-file.md" >}}) diff --git a/content/nginx-one/changelog.md b/content/nginx-one/changelog.md index ab541049a..4fefc47b1 100644 --- a/content/nginx-one/changelog.md +++ b/content/nginx-one/changelog.md @@ -84,7 +84,11 @@ You can: For more information, including warnings about risks, see our documentation on how you can: - [Add a file]({{< ref "/nginx-one/nginx-configs/add-file.md" >}}) +<<<<<<< HEAD - [Manage certificates]({{< ref "/nginx-one/nginx-configs/certificates/manage-certificates.md" >}}) +======= +- [Manage certificates]({{< ref "/nginx-one/certificates/manage-certificates.md" >}}) +>>>>>>> c7ce27ce (Draft: new N1C doc homepage) ### Revert a configuration @@ -108,7 +112,11 @@ From the NGINX One Console you can now: - Ensure that your certificates are current and correct. - Manage your certificates from a central location. This can help you simplify operations and remotely update, rotate, and deploy those certificates. +<<<<<<< HEAD For more information, see the full documentation on how you can [Manage Certificates]({{< ref "/nginx-one/nginx-configs/certificates/manage-certificates.md" >}}). +======= +For more information, see the full documentation on how you can [Manage Certificates]({{< ref "/nginx-one/certificates/manage-certificates.md" >}}). +>>>>>>> c7ce27ce (Draft: new N1C doc homepage) ## August 22, 2024 @@ -116,7 +124,11 @@ For more information, see the full documentation on how you can [Manage Certific Config Sync Groups are now available in the F5 NGINX One Console. This feature allows you to manage and synchronize NGINX configurations across multiple instances as a single entity, ensuring consistency and simplifying the management of your NGINX environment. +<<<<<<< HEAD For more information, see the full documentation on [Managing Config Sync Groups]({{< ref "/nginx-one/nginx-configs/config-sync-groups/manage-config-sync-groups.md" >}}). +======= +For more information, see the full documentation on [Managing Config Sync Groups]({{< ref "/nginx-one/config-sync-groups/manage-config-sync-groups.md" >}}). 
+>>>>>>> c7ce27ce (Draft: new N1C doc homepage) ## August 8, 2024 diff --git a/content/nginx-one/config-sync-groups/_index.md b/content/nginx-one/config-sync-groups/_index.md new file mode 100644 index 000000000..db1ee5560 --- /dev/null +++ b/content/nginx-one/config-sync-groups/_index.md @@ -0,0 +1,6 @@ +--- +description: +title: Organize in groups +weight: 400 +url: /nginx-one/config-sync-groups +--- diff --git a/content/nginx-one/config-sync-groups/add-file-csg.md b/content/nginx-one/config-sync-groups/add-file-csg.md new file mode 100644 index 000000000..9b6905aea --- /dev/null +++ b/content/nginx-one/config-sync-groups/add-file-csg.md @@ -0,0 +1,67 @@ +--- +docs: null +title: Add a file to a Config Sync Group +toc: true +weight: 400 +type: +- how-to +--- + +## Overview + +{{< include "nginx-one/add-file/overview.md" >}} + +## Before you start + +Before you add files in your configuration, ensure: + +- You have access to the NGINX One Console. +- Config Sync Groups are properly registered with NGINX One Console + +## Important considerations + +This page applies when you want to add a file to a Config Sync Group. Any changes you make here apply to all [Instances]({{< ref "/nginx-one/glossary.md" >}}) of that Config Sync Group. + +## Add a file + +You can use the NGINX One Console to add a file to a specific Config Sync Group. To do so: + +1. Select the Config Sync Group to manage. +1. Select the **Configuration** tab. + + {{< tip >}} + + {{< include "nginx-one/add-file/edit-config-tip.md" >}} + + {{< /tip >}} + +1. Select **Edit Configuration**. +1. In the **Edit Configuration** window that appears, select **Add File**. + +You now have multiple options, described in the sections which follow. + +### New Configuration File + +Enter the name of the desired configuration file, such as `abc.conf` and select **Add**. The configuration file appears in the **Edit Configuration** window. + +### New SSL Certificate or CA Bundle + +{{< include "nginx-one/add-file/new-ssl-bundle.md" >}} + + {{< tip >}} + + Make sure to specify the path to your certificate in your NGINX configuration, + with the `ssl_certificate` and `ssl_certificate_key` directives. + + {{< /tip >}} + +### Existing SSL Certificate or CA Bundle + +{{< include "nginx-one/add-file/existing-ssl-bundle.md" >}} +With this option, You can incorporate [Managed certificates]({{< ref "/nginx-one/certificates/manage-certificates.md#managed-and-unmanaged-certificates" >}}). + +## See also + +- [Create and manage data plane keys]({{< ref "/nginx-one/how-to/data-plane-keys/create-manage-data-plane-keys.md" >}}) +- [View and edit NGINX configurations]({{< ref "/nginx-one/nginx-configs/view-edit-nginx-configurations.md" >}}) +- [Manage certificates]({{< ref "/nginx-one/certificates/manage-certificates.md" >}}) diff --git a/content/nginx-one/config-sync-groups/manage-config-sync-groups.md b/content/nginx-one/config-sync-groups/manage-config-sync-groups.md new file mode 100644 index 000000000..8b24001cd --- /dev/null +++ b/content/nginx-one/config-sync-groups/manage-config-sync-groups.md @@ -0,0 +1,261 @@ +--- +docs: null +title: Manage Config Sync Groups +toc: true +weight: 300 +type: +- how-to +--- + +## Overview + +If you work with several instances of NGINX, it can help to organize these instances in Config Sync Groups. Each instance in a Config Sync Group has the same configuration. + +This guide explains how to create and manage Config Sync Groups in the F5 NGINX One Console. 
Config Sync Groups synchronize NGINX configurations across multiple NGINX instances, ensuring consistency and ease of management.
+
+If you’ve used [instance groups in NGINX Instance Manager]({{< ref "/nim/nginx-instances/manage-instance-groups.md" >}}), you’ll find Config Sync Groups in NGINX One similar, though the steps and terminology differ slightly.
+
+Config Sync Groups are functionally different from syncing instances in a cluster. They let you manage and synchronize configurations across multiple NGINX instances, all at once.
+
+This is particularly useful when your NGINX instances are load-balanced by an external load balancer, as it ensures consistency across all instances. In contrast, cluster syncing, like [zone syncing]({{< ref "nginx/admin-guide/high-availability/zone_sync_details.md" >}}), ensures data consistency and high availability across NGINX instances in a cluster. While Config Sync Groups focus on configuration management, cluster syncing supports failover and data consistency.
+
+## Before you start
+
+Before you create and manage Config Sync Groups, ensure:
+
+- You have access to the NGINX One Console.
+- You have the necessary permissions to create and manage Config Sync Groups.
+- If you plan to add existing instances to a Config Sync Group, make sure those NGINX instances are properly registered with NGINX One.
+
+## Configuration management
+
+Config Sync Groups support configuration inheritance and persistence. If you've just created a Config Sync Group, you can define the configuration for that group in the following ways:
+
+- Before adding an instance to a group, you can [Define the Config Sync Group configuration manually](#define-the-config-sync-group-configuration-manually).
+- When you add the first instance to a group, that instance defines the configuration for that Config Sync Group.
+- Afterwards, you can modify the configuration of the Config Sync Group. That modifies the configuration of all member instances. Future members of that group inherit that modified configuration.
+
+On the other hand, if you remove all instances from a Config Sync Group, the original configuration persists. In other words, the group retains the configuration from that first instance (or the original configuration). Any new instance that you add later still inherits that configuration.
+
+{{< tip >}}You can use _unmanaged_ certificates, but they affect the [Config Sync Group status](#config-sync-group-status). A future instance on the data plane is:
+
+- [**In Sync**](#config-sync-group-status) if it has unmanaged certificates in the same file paths that the Config Sync Group's NGINX configuration defines.
+- [**Out of Sync**](#config-sync-group-status) if it does not have unmanaged certificates in those file paths, or has them in a different directory from the Config Sync Group.
+{{< /tip >}}
+
+### Risk when adding multiple instances to a Config Sync Group
+
+If you add multiple instances to a single Config Sync Group simultaneously (with automation), there's a risk that an instance selects a random configuration. To prevent this problem, you should:
+
+1. Create a Config Sync Group.
+1. Add a configuration to the Config Sync Group, so all instances inherit it.
+1. Add the instances in a separate operation.
+
+Your instances should synchronize with your desired configuration within 30 seconds.
+
+### Use an instance to define the Config Sync Group configuration
+
+1. 
Follow the steps in the [**Add an existing instance to a Config Sync Group**](#add-an-existing-instance-to-a-config-sync-group) or [**Add a new instance to a Config Sync Group**](#add-a-new-instance-to-a-config-sync-group) sections to add your first instance to the group. +2. The NGINX configuration from this instance will automatically become the group's configuration. +3. You can further edit and publish this configuration by following the steps in the [**Publish the Config Sync Group configuration**](#publish-the-config-sync-group-configuration) section. + +### Define the Config Sync Group configuration manually + +You can manually define the group's configuration before adding any instances. When you add instances to the group later, they automatically inherit this configuration. + +To manually set the group configuration: + +1. Follow steps 1–4 in the [**Create a Config Sync Group**](#create-a-config-sync-group) section to create your Config Sync Group. +2. After creating the group, select the **Configuration** tab. +3. Since no instances have been added, the **Configuration** tab will show an empty configuration with a message indicating that no config files exist yet. +4. To add a configuration, select **Edit Configuration**. +5. In the editor, define your NGINX configuration as needed. This might include adding or modifying `nginx.conf` or other related files. +6. After making your changes, select **Next** to view a split screen showing your changes. +7. If you're satisfied with the configuration, select **Save and Publish**. + +## Important considerations + +When you plan Config Sync Groups, consider the following factors: + +- **Single Config Sync Group membership**: You can add an instance to only one Config Sync Group. + +- **NGINX Agent configuration file location**: When you run the NGINX Agent installation script to register an instance with NGINX One, the script creates the `agent-dynamic.conf` file, which contains settings for the NGINX Agent, including the specified Config Sync Group. This file is typically located in `/var/lib/nginx-agent/` on most systems; however, on FreeBSD, it's located at `/var/db/nginx-agent/`. + +- **Mixing NGINX Open Source and NGINX Plus instances**: You can add both NGINX Open Source and NGINX Plus instances to the same Config Sync Group, but there are limitations. If your configuration includes features exclusive to NGINX Plus, synchronization will fail on NGINX Open Source instances because they don't support these features. NGINX One allows you to mix NGINX instance types for flexibility, but it’s important to ensure that the configurations you're applying are compatible with all instances in the group. + +## Create a Config Sync Group + +When you create a Config Sync Group, you can manage the configurations of multiple NGINX instances as a single entity. + +1. On the left menu, select **Config Sync Groups**. +2. Select **Add Config Sync Group**. +3. In the **Name** field, type a name for your Config Sync Group. +4. Select **Create** to add the Config Sync Group. + +## Manage Config Sync Group membership + +Now that you created a Config Sync Group, you can add instances to that group. As described in [Configuration management](#configuration-management), the first instance you add to a group, when you add it, defines the initial configuration for the group. You can update the configuration for the entire Config Sync Group. + +Any instance that joins the group afterwards inherits that configuration. 
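+
+If your group mixes NGINX Open Source and NGINX Plus instances, keep the compatibility note from the considerations above in mind before you publish a configuration. The following sketch shows the kind of NGINX Plus-only directive that causes synchronization to fail on NGINX Open Source members of the same group; the upstream name and address are placeholders:
+
+```nginx
+upstream backend {
+    zone backend 64k;
+    server 10.0.0.10:8080;
+}
+
+server {
+    listen 80;
+
+    location / {
+        proxy_pass http://backend;
+        health_check;   # Active health checks are an NGINX Plus-only feature
+    }
+}
+```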
+
+### Add an existing instance to a Config Sync Group {#add-an-existing-instance-to-a-config-sync-group}
+
+You can add existing NGINX instances that are already registered with NGINX One to a Config Sync Group.
+
+1. Open a command-line terminal on the NGINX instance.
+2. Open the `/var/lib/nginx-agent/agent-dynamic.conf` file in a text editor.
+3. At the end of the file, add a new line beginning with `instance_group:`, followed by the Config Sync Group name.
+
+   ``` text
+   instance_group: 
+   ```
+
+4. Restart NGINX Agent:
+
+   ``` shell
+   sudo systemctl restart nginx-agent
+   ```
+
+### Add a new instance to a Config Sync Group {#add-a-new-instance-to-a-config-sync-group}
+
+When adding a new NGINX instance that is not yet registered with NGINX One, you need a data plane key to securely connect the instance. You can generate a new data plane key during the process or use an existing one if you already have it.
+
+1. On the left menu, select **Config Sync Groups**.
+2. Select the Config Sync Group in the list.
+3. In the **Instances** pane, select **Add Instance to Config Sync Group**.
+4. In the **Add Instance to Config Sync Group** dialog, select **Register a new instance with NGINX One then add to Config Sync Group**.
+5. Select **Next**.
+6. **Generate a new data plane key** (choose this option if you don't have an existing key):
+
+   - Select **Generate new key** to create a new data plane key for the instance.
+   - Select **Generate Data Plane Key**.
+   - Copy and securely store the generated key, as it is displayed only once.
+
+7. **Use an existing data plane key** (choose this option if you already have a key):
+
+   - Select **Use existing key**.
+   - In the **Data Plane Key** field, enter the existing data plane key.
+
+{{}}
+
+{{%tab name="Virtual Machine or Bare Metal"%}}
+
+8. Run the provided command, which includes the data plane key, in your NGINX instance terminal to register the instance with NGINX One.
+9. Select **Done** to complete the process.
+
+{{%/tab%}}
+
+{{%tab name="Docker Container"%}}
+
+8. **Log in to the NGINX private registry**:
+
+   - Replace `YOUR_JWT_HERE` with your JSON Web Token (JWT) license from [MyF5](https://my.f5.com/manage/s/).
+
+   ```shell
+   sudo docker login private-registry.nginx.com --username=YOUR_JWT_HERE --password=none
+   ```
+
+9. **Pull the Docker image**:
+
+   - From the **OS Type** list, choose the appropriate operating system for your Docker image.
+   - After selecting the OS, run the provided command to pull the Docker image.
+
+     **Note**: Subject to availability, you can modify the `agent: ` to match the specific NGINX Plus version, OS type, and OS version you need. For example, you might use `agent: r32-ubi-9`. For more details on version tags and how to pull an image, see [Deploying NGINX and NGINX Plus on Docker]({{< ref "nginx/admin-guide/installing-nginx/installing-nginx-docker.md#pulling-the-image" >}}).
+
+10. 
Run the provided command, which includes the data plane key, in your NGINX instance terminal to start the Docker container. + +11. Select **Done** to complete the process. + +{{%/tab%}} + +{{}} + +{{}} + +Data plane keys are required for registering NGINX instances with the NGINX One Console. These keys serve as secure tokens, ensuring that only authorized instances can connect and communicate with NGINX One. + +For more details on creating and managing data plane keys, see [Create and manage data plane keys]({{}}). + +{{}} + +### Move an instance to a different Config Sync Group + +If you need to move an NGINX instance to a different Config Sync Group, follow these steps: + +1. Open a command-line terminal on the NGINX instance. +2. Open the `/var/lib/nginx-agent/agent-dynamic.conf` file in a text editor. +3. Locate the line that begins with `instance_group:` and change it to the name of the new Config Sync Group. + + ``` text + instance_group: + ``` + +4. Restart NGINX Agent by running the following command: + + ```shell + sudo systemctl restart nginx-agent + ``` + +If you move an instance with certificates from one Config Sync Group to another, NGINX One adds or removes those certificates from the data plane, to synchronize with the deployed certificates of the group. + +### Remove an instance from a Config Sync Group + +If you need to remove an NGINX instance from a Config Sync Group without adding it to another group, follow these steps: + +1. Open a command-line terminal on the NGINX instance. +2. Open the `/var/lib/nginx-agent/agent-dynamic.conf` file in a text editor. +3. Locate the line that begins with `instance_group:` and either remove it or comment it out by adding a `#` at the beginning of the line. + + ```text + # instance_group: + ``` + +4. Restart NGINX Agent: + + ```shell + sudo systemctl restart nginx-agent + ``` + +By removing or commenting out this line, the instance will no longer be associated with any Config Sync Group. + +## Publish the Config Sync Group configuration {#publish-the-config-sync-group-configuration} + +After the Config Sync Group is created, you can modify and publish the group's configuration as needed. Any changes made to the group configuration will be applied to all instances within the group. + +1. On the left menu, select **Config Sync Groups**. +2. Select the Config Sync Group in the list. +3. Select the **Configuration** tab to view the group's NGINX configuration. +4. To modify the group's configuration, select **Edit Configuration**. +5. Make the necessary changes to the configuration. +6. When you're finished, select **Next**. A split view displays the changes. +7. If you're satisfied with the changes, select **Save and Publish**. + +Publishing the group configuration ensures that all instances within the Config Sync Group are synchronized with the latest group configuration. This helps maintain consistency across all instances in the group, preventing configuration drift. + +## Config Sync Group status + +The **Config Sync Status** column on the **Config Sync Groups** page provides insight into the synchronization state of your NGINX instances within each group. + +{{}} +| **Status** | **Description** | +|-----------------------|------------------------------------------------------------------------------------------------------------------------------------------------------| +| **In Sync** | All instances within the Config Sync Group have configurations that match the group configuration. No action is required. 
| +| **Out of Sync** | At least one instance in the group has a configuration that differs from the group's configuration. You may need to review and resolve discrepancies to ensure consistency. | +| **Sync in Progress** | An instance is currently being synchronized with the group's configuration. This status appears when an instance is moved to a new group or when a configuration is being applied. | +| **Unknown** | The synchronization status of the instances in this group cannot be determined. This could be due to connectivity issues, instances being offline, or other factors. Investigating the cause of this status is recommended. | +{{}} + +Monitor the **Config Sync Status** column. It can help you ensure that your configurations are consistently applied across all instances in a group. + +## See also + +- [Create and manage data plane keys]({{< ref "/nginx-one/how-to/data-plane-keys/create-manage-data-plane-keys.md" >}}) +- [View and edit NGINX configurations]({{< ref "/nginx-one/nginx-configs/view-edit-nginx-configurations.md" >}}) diff --git a/content/nginx-one/glossary.md b/content/nginx-one/glossary.md index 04951c14c..3104b27ea 100644 --- a/content/nginx-one/glossary.md +++ b/content/nginx-one/glossary.md @@ -3,7 +3,11 @@ description: '' docs: DOCS-1396 title: Glossary toc: true +<<<<<<< HEAD weight: 800 +======= +weight: 2000 +>>>>>>> c7ce27ce (Draft: new N1C doc homepage) type: - reference --- @@ -14,7 +18,11 @@ This glossary defines terms used in the F5 NGINX One Console and F5 Distributed {{}} | Term | Definition | |-------------|-------------| +<<<<<<< HEAD | **Config Sync Group** | A group of NGINX systems (or instances) with identical configurations. They may also share the same certificates. However, the instances in a Config Sync Group could belong to different systems and even different clusters. For more information, see this explanation of [Important considerations]({{< ref "/nginx-one/nginx-configs/config-sync-groups/manage-config-sync-groups.md#important-considerations" >}}) | +======= +| **Config Sync Group** | A group of NGINX systems (or instances) with identical configurations. They may also share the same certificates. However, the instances in a Config Sync Group could belong to different systems and even different clusters. For more information, see this explanation of [Important considerations]({{< ref "/nginx-one/config-sync-groups/manage-config-sync-groups.md#important-considerations" >}}) | +>>>>>>> c7ce27ce (Draft: new N1C doc homepage) | **Data Plane** | The data plane is the part of a network architecture that carries user traffic. It handles tasks like forwarding data packets between devices and managing network communication. In the context of NGINX, the data plane is responsible for tasks such as load balancing, caching, and serving web content. | | **Instance** | An instance is an individual system with NGINX installed. You can group the instances of your choice in a Config Sync Group. When you add an instance to NGINX One, you need to use a data plane key. | | **Namespace** | In F5 Distributed Cloud, a namespace groups a tenant’s configuration objects, similar to administrative domains. Every object in a namespace must have a unique name, and each namespace must be unique to its tenant. This setup ensures isolation, preventing cross-referencing of objects between namespaces. 
You'll see the namespace in the NGINX One Console URL as `/namespaces//` | diff --git a/content/nginx-one/how-to/_index.md b/content/nginx-one/how-to/_index.md new file mode 100644 index 000000000..e7b505736 --- /dev/null +++ b/content/nginx-one/how-to/_index.md @@ -0,0 +1,6 @@ +--- +description: +title: How-to guides +weight: 700 +url: /nginx-one/how-to/ +--- diff --git a/content/nginx-one/nginx-configs/_index.md b/content/nginx-one/nginx-configs/_index.md index d4a987147..2e6a37c0e 100644 --- a/content/nginx-one/nginx-configs/_index.md +++ b/content/nginx-one/nginx-configs/_index.md @@ -1,6 +1,10 @@ --- description: +<<<<<<< HEAD title: Manage your NGINX instances +======= +title: Organize your NGINX instances +>>>>>>> c7ce27ce (Draft: new N1C doc homepage) weight: 300 url: /nginx-one/nginx-configs --- diff --git a/content/nginx-one/nginx-configs/add-file.md b/content/nginx-one/nginx-configs/add-file.md index 574ba30e4..15e301e57 100644 --- a/content/nginx-one/nginx-configs/add-file.md +++ b/content/nginx-one/nginx-configs/add-file.md @@ -2,7 +2,11 @@ docs: null title: Add a file to an instance toc: true +<<<<<<< HEAD weight: 300 +======= +weight: 400 +>>>>>>> c7ce27ce (Draft: new N1C doc homepage) type: - how-to --- @@ -21,7 +25,11 @@ Before you add files in your configuration, ensure: ## Important considerations If your instance is a member of a Config Sync Group, changes that you make may be synchronized to other instances in that group. +<<<<<<< HEAD For more information, see how you can [Manage Config Sync Groups]({{< ref "/nginx-one/nginx-configs/config-sync-groups/manage-config-sync-groups.md" >}}). +======= +For more information, see how you can [Manage Config Sync Groups]({{< ref "/nginx-one/config-sync-groups/manage-config-sync-groups.md" >}}). +>>>>>>> c7ce27ce (Draft: new N1C doc homepage) ## Add a file @@ -62,6 +70,12 @@ Enter the name of the desired configuration file, such as `abc.conf` and select ## See also +<<<<<<< HEAD - [Create and manage data plane keys]({{< ref "/nginx-one/connect-instances/create-manage-data-plane-keys.md" >}}) - [Add an NGINX instance]({{< ref "/nginx-one/connect-instances/add-instance.md" >}}) - [Manage certificates]({{< ref "/nginx-one/nginx-configs/certificates/manage-certificates.md" >}}) +======= +- [Create and manage data plane keys]({{< ref "/nginx-one/how-to/data-plane-keys/create-manage-data-plane-keys.md" >}}) +- [View and edit NGINX configurations]({{< ref "/nginx-one/nginx-configs/view-edit-nginx-configurations.md" >}}) +- [Manage certificates]({{< ref "/nginx-one/certificates/manage-certificates.md" >}}) +>>>>>>> c7ce27ce (Draft: new N1C doc homepage) diff --git a/content/nginx-one/nginx-configs/add-instance.md b/content/nginx-one/nginx-configs/add-instance.md new file mode 100644 index 000000000..ddf5eb095 --- /dev/null +++ b/content/nginx-one/nginx-configs/add-instance.md @@ -0,0 +1,75 @@ +--- +description: '' +title: Add an NGINX instance +toc: true +weight: 100 +type: +- how-to +--- + +## Overview + +This guide explains how to add an F5 NGINX instance in F5 NGINX One Console. You can add an instance from the NGINX One Console individually, or as part of a [Config Sync Group]({{< ref "/nginx-one/glossary.md" >}}). In either case, you need +to set up a data plane key to connect your instances to NGINX One. + +## Before you start + +Before you add an instance to NGINX One Console, ensure: + +- You have administrator access to NGINX One Console. 
+- You have configured instances of NGINX that you want to manage through NGINX One Console. +- You have or are ready to configure a data plane key. +- You have or are ready to set up managed certificates. + +{{< note >}}If this is the first time an instance is being added to a Config Sync Group, and you have not yet defined the configuration for that Config Sync Group, that instance provides the template for that group. For more information, see [Configuration management]({{< ref "nginx-one/config-sync-groups/manage-config-sync-groups#configuration-management" >}}).{{< /note >}} + +## Add an instance + +You can add an instance to NGINX One Console in the following ways: + +- Directly, under **Instances** +- Indirectly, by selecting a Config Sync Group, and selecting **Add Instance to Config Sync Group** + +In either case, NGINX One Console gives you a choice for data plane keys: + +- Create a new key +- Use an existing key + +NGINX One Console takes the option you use, and adds the data plane key to a command that you'd use to register your target instance. You should see the command in the **Add Instance** screen in the console. + +Connect to the host where your NGINX instance is running. Run the provided command to [install NGINX Agent]({{< ref "/nginx-one/getting-started#install-nginx-agent" >}}) dependencies and packages on that host. + +```bash +curl https://agent.connect.nginx.com/nginx-agent/install | DATA_PLANE_KEY="" sh -s -- -y +``` + +Once the process is complete, you can configure that instance in your NGINX One Console. + +## Managed and Unmanaged Certificates + +If you add an instance with SSL/TLS certificates, those certificates can match an existing managed SSL certificate/CA bundle. + +### If the certificate is already managed + +If you add an instance with a managed certificate, as described in [Add your NGINX instances to NGINX One], these certificates are added to your list of **Managed Certificates**. + +NGINX One Console can manage your instances along with those certificates. + +### If the certificate is not managed + +These certificates appear in the list of **Unmanaged Certificates**. + +To take full advantage of NGINX One, you can convert these to **Managed Certificates**. You can then manage, update, and deploy a certificate to all of your NGINX instances in a Config Sync Group. + +To convert these cerificates, start with the Certificates menu, and select **Unmanaged**. You should see a list of **Unmanaged Certificates or CA Bundles**. Then: + +- Select a certificate +- Select **Convert to Managed** +- In the window that appears, you can now include the same information as shown in the [Add a new certificate](#add-a-new-certificate) section + +Once you've completed the process, NGINX One reassigns this as a managed certificate, and assigns it to the associated instance or Config Sync Group. + +## Add an instance to a Config Sync Group + +When you [Manage Config Sync Group membership]({{< ref "nginx-one/config-sync-groups/manage-config-sync-groups#manage-config-sync-group-membership" >}}), you can add an existing or new instance to the group of your choice. +That instance inherits the setup of that Config Sync Group. 
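+
+However you register the instance, you can confirm from the host that NGINX Agent is installed and running before you look for the instance in the NGINX One Console. A minimal check, assuming a systemd-based host:
+
+```shell
+# Confirm that the NGINX Agent service is active
+sudo systemctl status nginx-agent
+
+# Optionally, follow the agent logs while the instance registers
+sudo journalctl -u nginx-agent -f
+```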
diff --git a/content/nginx-one/nginx-configs/clean-up-unavailable-instances.md b/content/nginx-one/nginx-configs/clean-up-unavailable-instances.md index 9e7979e34..77d406013 100644 --- a/content/nginx-one/nginx-configs/clean-up-unavailable-instances.md +++ b/content/nginx-one/nginx-configs/clean-up-unavailable-instances.md @@ -3,7 +3,11 @@ description: '' docs: null title: Clean up unavailable NGINX instances toc: true +<<<<<<< HEAD weight: 1000 +======= +weight: 200 +>>>>>>> c7ce27ce (Draft: new N1C doc homepage) type: - how-to --- diff --git a/content/nginx-one/nginx-configs/manage-config-sync-groups.md b/content/nginx-one/nginx-configs/manage-config-sync-groups.md new file mode 100644 index 000000000..41589dadd --- /dev/null +++ b/content/nginx-one/nginx-configs/manage-config-sync-groups.md @@ -0,0 +1,239 @@ +--- +docs: null +title: Manage config sync groups +toc: true +weight: 300 +type: +- how-to +--- + +## Overview + +This guide explains how to create and manage config sync groups in the F5 NGINX One Console. Config sync groups synchronize NGINX configurations across multiple NGINX instances, ensuring consistency and ease of management. + +If you’ve used [instance groups in NGINX Instance Manager]({{< ref "/nim/nginx-instances/manage-instance-groups.md" >}}), you’ll find config sync groups in NGINX One similar, though the steps and terminology differ slightly. + +## Before you start + +Before you create and manage config sync groups, ensure: + +- You have access to the NGINX One Console. +- You have the necessary permissions to create and manage config sync groups. +- NGINX instances are properly registered with NGINX One if you plan to add existing instances to a config sync group. + +## Important considerations + +- **NGINX Agent configuration file location**: When you run the NGINX Agent installation script to register an instance with NGINX One, the script creates the `agent-dynamic.conf` file, which contains settings for the NGINX Agent, including the specified config sync group. This file is typically located in `/var/lib/nginx-agent/` on most systems; however, on FreeBSD, it's located at `/var/db/nginx-agent/`. + +- **Mixing NGINX Open Source and NGINX Plus instances**: You can add both NGINX Open Source and NGINX Plus instances to the same config sync group, but there are limitations. If your configuration includes features exclusive to NGINX Plus, synchronization will fail on NGINX Open Source instances because they don't support these features. NGINX One allows you to mix NGINX instance types for flexibility, but it’s important to ensure that the configurations you're applying are compatible with all instances in the group. + +- **Single config sync group membership**: An instance can join only one config sync group at a time. + +- **Configuration inheritance**: If the config sync group already has a configuration defined, that configuration will be pushed to instances when they join. + +- **Using an instance's configuration for the group configuration**: If an instance is the first to join a config sync group and the group's configuration hasn't been defined, the instance’s configuration will become the group’s configuration. Any instances added later will automatically inherit this configuration. + + {{< note >}} If you add multiple instances to a single config sync group, simultaneously (with automation), follow these steps. Your instances will inherit your desired configuration: + + 1. Create a config sync group. + 1. 
Add a configuration to the config sync group, so all instances inherit it. + 1. Add the instances in a separate operation. + + Your instances should synchronize with your desired configuration within 30 seconds. {{< /note >}} + +- **Persistence of a config sync group's configuration**: The configuration for a config sync group persists until you delete the group. Even if you remove all instances, the group's configuration stays intact. Any new instances that join later will automatically inherit this configuration. + +- **Config sync groups vs. cluster syncing**: Config sync groups are not the same as cluster syncing. Config sync groups let you to manage and synchronize configurations across multiple NGINX instances as a single entity. This is particularly useful when your NGINX instances are load-balanced by an external load balancer, as it ensures consistency across all instances. In contrast, cluster syncing, like [zone syncing]({{< ref "nginx/admin-guide/high-availability/zone_sync_details.md" >}}), ensures data consistency and high availability across NGINX instances in a cluster. While config sync groups focus on configuration management, cluster syncing supports failover and data consistency. + +## Create a config sync group + +Creating a config sync group allows you to manage the configurations of multiple NGINX instances as a single entity. + +1. On the left menu, select **Config Sync Groups**. +2. Select **Add Config Sync Group**. +3. In the **Name** field, type a name for your config sync group. +4. Select **Create** to add the config sync group. + +## Manage config sync group membership + +### Add an existing instance to a config sync group {#add-an-existing-instance-to-a-config-sync-group} + +You can add existing NGINX instances that are already registered with NGINX One to a config sync group. + +1. Open a command-line terminal on the NGINX instance. +2. Open the `/var/lib/nginx-agent/agent-dynamic.conf` file in a text editor. +3. At the end of the file, add a new line beginning with `instance_group:`, followed by the config sync group name. + + ``` text + instance_group: + ``` + +4. Restart NGINX Agent: + + ``` shell + sudo systemctl restart nginx-agent + ``` + +### Add a new instance to a config sync group {#add-a-new-instance-to-a-config-sync-group} + +When adding a new NGINX instance that is not yet registered with NGINX One, you need a data plane key to securely connect the instance. You can generate a new data plane key during the process or use an existing one if you already have it. + +1. On the left menu, select **Config Sync Groups**. +2. Select the config sync group in the list. +3. In the **Instances** pane, select **Add Instance to Config Sync Group**. +4. In the **Add Instance to Config Sync Group** dialog, select **Register a new instance with NGINX One then add to config sync group**. +5. Select **Next**. +6. **Generate a new data plane key** (choose this option if you don't have an existing key): + + - Select **Generate new key** to create a new data plane key for the instance. + - Select **Generate Data Plane Key**. + - Copy and securely store the generated key, as it is displayed only once. + +7. **Use an existing data plane key** (choose this option if you already have a key): + + - Select **Use existing key**. + - In the **Data Plane Key** field, enter the existing data plane key. + +{{}} + +{{%tab name="Virtual Machine or Bare Metal"%}} + +8. 
Run the provided command, which includes the data plane key, in your NGINX instance terminal to register the instance with NGINX One. +9. Select **Done** to complete the process. + +{{%/tab%}} + +{{%tab name="Docker Container"%}} + +8. **Log in to the NGINX private registry**: + + - Replace `YOUR_JWT_HERE` with your JSON Web Token (JWT) from [MyF5](https://my.f5.com/manage/s/). + + ```shell + sudo docker login private-registry.nginx.com --username=YOUR_JWT_HERE --password=none + ``` + +9. **Pull the Docker image**: + + - From the **OS Type** list, choose the appropriate operating system for your Docker image. + - After selecting the OS, run the provided command to pull the Docker image. + + **Note**: Subject to availability, you can modify the `agent: ` to match the specific NGINX Plus version, OS type, and OS version you need. For example, you might use `agent: r32-ubi-9`. For more details on version tags and how to pull an image, see [Deploying NGINX and NGINX Plus on Docker]({{< ref "nginx/admin-guide/installing-nginx/installing-nginx-docker.md#pulling-the-image" >}}). + +10. Run the provided command, which includes the data plane key, in your NGINX instance terminal to start the Docker container. + +11. Select **Done** to complete the process. + +{{%/tab%}} + +{{}} + +{{}} + +Data plane keys are required for registering NGINX instances with the NGINX One Console. These keys serve as secure tokens, ensuring that only authorized instances can connect and communicate with NGINX One. + +For more details on creating and managing data plane keys, see [Create and manage data plane keys]({{}}). + +{{}} + +### Change the config sync group for an instance + +If you need to move an NGINX instance to a different config sync group, follow these steps: + +1. Open a command-line terminal on the NGINX instance. +2. Open the `/var/lib/nginx-agent/agent-dynamic.conf` file in a text editor. +3. Locate the line that begins with `instance_group:` and change it to the name of the new config sync group. + + ``` text + instance_group: + ``` + +4. Restart NGINX Agent by running the following command: + + ```shell + sudo systemctl restart nginx-agent + ``` + +**Important:** If the instance is the first to join the new config sync group and a group configuration hasn’t been added manually beforehand, the instance’s configuration will automatically become the group’s configuration. Any instances added to this group later will inherit this configuration. + +### Remove an instance from a config sync group + +If you need to remove an NGINX instance from a config sync group without adding it to another group, follow these steps: + +1. Open a command-line terminal on the NGINX instance. +2. Open the `/var/lib/nginx-agent/agent-dynamic.conf` file in a text editor. +3. Locate the line that begins with `instance_group:` and either remove it or comment it out by adding a `#` at the beginning of the line. + + ```text + # instance_group: + ``` + +4. Restart NGINX Agent: + + ```shell + sudo systemctl restart nginx-agent + ``` + +By removing or commenting out this line, the instance will no longer be associated with any config sync group. + +## Add the config sync group configuration + +You can set the configuration for a config sync group in two ways: + +### Define the group configuration manually + +You can manually define the group's configuration before adding any instances. When you add instances to the group later, they automatically inherit this configuration. + +To manually set the group configuration: + +1. 
Follow steps 1–4 in the [**Create a config sync group**](#create-a-config-sync-group) section to create your config sync group. +2. After creating the group, select the **Configuration** tab. +3. Since no instances have been added, the **Configuration** tab will show an empty configuration with a message indicating that no config files exist yet. +4. To add a configuration, select **Edit Configuration**. +5. In the editor, define your NGINX configuration as needed. This might include adding or modifying `nginx.conf` or other related files. +6. After making your changes, select **Next** to view a split screen showing your changes. +7. If you're satisfied with the configuration, select **Save and Publish**. + +### Use an instance's configuration + +If you don't manually define a group config, the NGINX configuration of the first instance added to a config sync group becomes the group's configuration. Any additional instances added afterward inherit this group configuration. + +To set the group configuration by adding an instance: + +1. Follow the steps in the [**Add an existing instance to a config sync group**](#add-an-existing-instance-to-a-config-sync-group) or [**Add a new instance to a config sync group**](#add-a-new-instance-to-a-config-sync-group) sections to add your first instance to the group. +2. The NGINX configuration from this instance will automatically become the group's configuration. +3. You can further edit and publish this configuration by following the steps in the [**Publish the config sync group configuration**](#publish-the-config-sync-group-configuration) section. + +## Publish the config sync group configuration {#publish-the-config-sync-group-configuration} + +After the config sync group is created, you can modify and publish the group's configuration as needed. Any changes made to the group configuration will be applied to all instances within the group. + +1. On the left menu, select **Config Sync Groups**. +2. Select the config sync group in the list. +3. Select the **Configuration** tab to view the group's NGINX configuration. +4. To modify the group's configuration, select **Edit Configuration**. +5. Make the necessary changes to the configuration. +6. When you're finished, select **Next**. A split view displays the changes. +7. If you're satisfied with the changes, select **Save and Publish**. + +Publishing the group configuration ensures that all instances within the config sync group are synchronized with the latest group configuration. This helps maintain consistency across all instances in the group, preventing configuration drift. + +## Understanding config sync statuses + +The **Config Sync Status** column on the **Config Sync Groups** page provides insight into the synchronization state of your NGINX instances within each group. + +{{}} +| **Status** | **Description** | +|-----------------------|------------------------------------------------------------------------------------------------------------------------------------------------------| +| **In Sync** | All instances within the config sync group have configurations that match the group configuration. No action is required. | +| **Out of Sync** | At least one instance in the group has a configuration that differs from the group's configuration. You may need to review and resolve discrepancies to ensure consistency. | +| **Sync in Progress** | An instance is currently being synchronized with the group's configuration. 
This status appears when an instance is moved to a new group or when a configuration is being applied. | +| **Unknown** | The synchronization status of the instances in this group cannot be determined. This could be due to connectivity issues, instances being offline, or other factors. Investigating the cause of this status is recommended. | +{{}} + +Monitoring the **Config Sync Status** helps ensure that your configurations are consistently applied across all instances in a group, reducing the risk of configuration drift. + +## See also + +- [Create and manage data plane keys]({{< ref "/nginx-one/how-to/data-plane-keys/create-manage-data-plane-keys.md" >}}) +- [View and edit NGINX configurations]({{< ref "/nginx-one/nginx-configs/view-edit-nginx-configurations.md" >}}) diff --git a/content/nginx-one/nginx-configs/view-edit-nginx-configurations.md b/content/nginx-one/nginx-configs/view-edit-nginx-configurations.md index f775520f8..91b37552d 100644 --- a/content/nginx-one/nginx-configs/view-edit-nginx-configurations.md +++ b/content/nginx-one/nginx-configs/view-edit-nginx-configurations.md @@ -1,8 +1,14 @@ --- # We use sentence case and present imperative tone +<<<<<<< HEAD title: View and edit an NGINX instance # Weights are assigned in increments of 100: determines sorting order weight: 200 +======= +title: View and edit NGINX configurations +# Weights are assigned in increments of 100: determines sorting order +weight: 300 +>>>>>>> c7ce27ce (Draft: new N1C doc homepage) # Creates a table of contents and sidebar, useful for large documents toc: true # Types have a 1:1 relationship with Hugo archetypes, so you shouldn't need to change this @@ -12,7 +18,21 @@ product: NGINX One --- +<<<<<<< HEAD This guide explains how to edit the configuration of an existing **Instance** in your NGINX One Console. +======= +## Overview + +This guide explains how to add a **Instances** to your NGINX One Console. + +## Before you start + +Before you add **Instances** to NGINX One Console, ensure: + +- You have an NGINX One Console account with staged configuration permissions.``` + +Once you've registered your NGINX Instances with the F5 NGINX One Console, you can view and edit their NGINX configurations on the **Instances** details page. +>>>>>>> c7ce27ce (Draft: new N1C doc homepage) To view and edit an NGINX configuration, follow these steps: @@ -28,4 +48,8 @@ Alternatively, you can select **Save Changes As**. 
In the window that appears, y ## See also +<<<<<<< HEAD - [Manage Config Sync Groups]({{< ref "/nginx-one/nginx-configs/config-sync-groups/manage-config-sync-groups.md" >}}) +======= +- [Manage Config Sync Groups]({{< ref "/nginx-one/config-sync-groups/manage-config-sync-groups.md" >}}) +>>>>>>> c7ce27ce (Draft: new N1C doc homepage) diff --git a/content/nginx-one/rbac/_index.md b/content/nginx-one/rbac/_index.md index 447611e56..4e50aa4ca 100644 --- a/content/nginx-one/rbac/_index.md +++ b/content/nginx-one/rbac/_index.md @@ -1,6 +1,12 @@ --- +<<<<<<< HEAD title: Organize users with RBAC description: weight: 600 +======= +title: Organize your administrators with RBAC +description: +weight: 500 +>>>>>>> c7ce27ce (Draft: new N1C doc homepage) url: /nginx-one/rbac --- diff --git a/content/nginx-one/staged-configs/_index.md b/content/nginx-one/staged-configs/_index.md index 1305546f1..66cf13915 100644 --- a/content/nginx-one/staged-configs/_index.md +++ b/content/nginx-one/staged-configs/_index.md @@ -1,6 +1,12 @@ --- description: +<<<<<<< HEAD title: Draft new configurations weight: 400 url: /nginx-one/staged-configs +======= +title: Set up new instances +weight: 200 +url: /nginx-one/how-to/staged-configs +>>>>>>> c7ce27ce (Draft: new N1C doc homepage) --- diff --git a/content/nginx-one/staged-configs/add-staged-config.md b/content/nginx-one/staged-configs/add-staged-config.md index 042779b8b..c46b120e5 100644 --- a/content/nginx-one/staged-configs/add-staged-config.md +++ b/content/nginx-one/staged-configs/add-staged-config.md @@ -9,7 +9,11 @@ toc: true type: tutorial # Intended for internal catalogue and search, case sensitive: # Agent, N4Azure, NIC, NIM, NGF, NAP-DOS, NAP-WAF, NGINX One, NGINX+, Solutions, Unit +<<<<<<< HEAD product: NGINX-One +======= +product: +>>>>>>> c7ce27ce (Draft: new N1C doc homepage) --- ## Overview diff --git a/content/nginx-one/staged-configs/api-staged-config.md b/content/nginx-one/staged-configs/api-staged-config.md index 18c276ae3..d95524b83 100644 --- a/content/nginx-one/staged-configs/api-staged-config.md +++ b/content/nginx-one/staged-configs/api-staged-config.md @@ -8,7 +8,11 @@ toc: true # Types have a 1:1 relationship with Hugo archetypes, so you shouldn't need to change this type: tutorial # Intended for internal catalogue and search, case sensitive: +<<<<<<< HEAD product: NGINX-One +======= +product: NGINX One +>>>>>>> c7ce27ce (Draft: new N1C doc homepage) --- You can use F5 NGINX One Console API to manage your Staged Configurations. With our API, you can: diff --git a/content/nginx/admin-guide/load-balancer/tcp-udp-load-balancer.md b/content/nginx/admin-guide/load-balancer/tcp-udp-load-balancer.md index bf40c20be..a7a6a7f61 100644 --- a/content/nginx/admin-guide/load-balancer/tcp-udp-load-balancer.md +++ b/content/nginx/admin-guide/load-balancer/tcp-udp-load-balancer.md @@ -9,9 +9,10 @@ type: - how-to --- -## Introduction {#intro} + +## Introduction -[Load balancing](https://www.f5.com/glossary/load-balancer) refers to efficiently distributing network traffic across multiple backend servers. +[Load balancing](https://www.nginx.com/solutions/load-balancing/) refers to efficiently distributing network traffic across multiple backend servers. In F5 NGINX Plus [R5]({{< ref "nginx/releases.md#r5" >}}) and later, NGINX Plus can proxy and load balance Transmission Control Protocol) (TCP) traffic. TCP is the protocol for many popular applications and services, such as LDAP, MySQL, and RTMP. 
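As a quick sanity check before you start, you can confirm that your NGINX build includes the `stream` module, which provides TCP and UDP proxying. This is a minimal sketch; NGINX Plus packages ship with the module enabled, so the check mainly matters for NGINX Open Source builds:

```shell
# The stream module must be compiled in (look for "--with-stream").
nginx -V 2>&1 | grep -o -- --with-stream

# Validate the configuration and reload after any stream{} changes.
sudo nginx -t && sudo nginx -s reload
```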
@@ -19,13 +20,15 @@ In NGINX Plus [R9]({{< ref "nginx/releases.md#r9" >}}) and later, NGINX Plus can To load balance HTTP traffic, refer to the [HTTP Load Balancing]({{< ref "http-load-balancer.md" >}}) article. + ## Prerequisites - Latest NGINX Plus (no extra build steps required) or latest [NGINX Open Source](https://nginx.org/en/download.html) built with the `--with-stream` configuration flag - An application, database, or service that communicates over TCP or UDP - Upstream servers, each running the same instance of the application, database, or service -## Configuring reverse proxy {#proxy_pass} + +## Configuring Reverse Proxy First, you will need to configure _reverse proxy_ so that NGINX Plus or NGINX Open Source can forward TCP connections or UDP datagrams from clients to an upstream group or a proxied server. @@ -115,7 +118,8 @@ Open the NGINX configuration file and perform the following steps: } ``` -## Configuring TCP or UDP load balancing {#upstream} + +## Configuring TCP or UDP Load Balancing To configure load balancing: @@ -246,7 +250,8 @@ stream { } ``` -## Configuring health checks {#health} + +## Configuring Health Checks NGINX can continually test your TCP or UDP upstream servers, avoid the servers that have failed, and gracefully add the recovered servers into the load‑balanced group. @@ -254,7 +259,8 @@ See [TCP Health Checks]({{< ref "nginx/admin-guide/load-balancer/tcp-health-chec See [UDP Health Checks]({{< ref "nginx/admin-guide/load-balancer/udp-health-check.md" >}}) for instructions how to configure health checks for UDP. -## On-the-fly configuration + +## On-the-Fly Configuration Upstream server groups can be easily reconfigured on-the-fly using NGINX Plus REST API. Using this interface, you can view all servers in an upstream group or a particular server, modify server parameters, and add or remove upstream servers. @@ -349,7 +355,8 @@ To enable on-the-fly configuration: } ``` -### On-the-fly configuration example + +### On-the-Fly Configuration Example ```nginx stream { @@ -396,22 +403,23 @@ For example, to add a new server to the server group, send a `POST` request: curl -X POST -d '{ \ "server": "appserv3.example.com:12345", \ "weight": 4 \ - }' -s 'http://127.0.0.1/api/9/stream/upstreams/appservers/servers' + }' -s 'http://127.0.0.1/api/6/stream/upstreams/appservers/servers' ``` To remove a server from the server group, send a `DELETE` request: ```shell -curl -X DELETE -s 'http://127.0.0.1/api/9/stream/upstreams/appservers/servers/0' +curl -X DELETE -s 'http://127.0.0.1/api/6/stream/upstreams/appservers/servers/0' ``` To modify a parameter for a specific server, send a `PATCH` request: ```shell -curl -X PATCH -d '{ "down": true }' -s 'http://127.0.0.1/api/9/http/upstreams/appservers/servers/0' +curl -X PATCH -d '{ "down": true }' -s 'http://127.0.0.1/api/6/http/upstreams/appservers/servers/0' ``` -## Example of TCP and UDP load-balancing configuration {#example} + +## Example of TCP and UDP Load-Balancing Configuration This is a configuration example of TCP and UDP load balancing with NGINX: @@ -463,13 +471,3 @@ The three [`server`](https://nginx.org/en/docs/stream/ngx_stream_upstream_module - The second server listens on port 53 and proxies all UDP datagrams (the `udp` parameter to the `listen` directive) to an upstream group called **dns_servers**. If the `udp` parameter is not specified, the socket listens for TCP connections. 
- The third virtual server listens on port 12346 and proxies TCP connections to **backend4.example.com**, which can resolve to several IP addresses that are load balanced with the Round Robin method. - -## See also - -- [TCP Health Checks]({{< relref "tcp-health-check.md" >}}) - -- [UDP Health Checks]({{< relref "udp-health-check.md" >}}) - -- [Load Balancing DNS Traffic with NGINX and NGINX Plus](https://www.f5.com/company/blog/nginx/load-balancing-dns-traffic-nginx-plus) - -- [TCP/UDP Load Balancing with NGINX: Overview, Tips, and Tricks](https://blog.nginx.org/blog/tcp-load-balancing-udp-load-balancing-nginx-tips-tricks) diff --git a/content/nginx/admin-guide/load-balancer/udp-health-check.md b/content/nginx/admin-guide/load-balancer/udp-health-check.md index 7885d032a..bb4818fd4 100644 --- a/content/nginx/admin-guide/load-balancer/udp-health-check.md +++ b/content/nginx/admin-guide/load-balancer/udp-health-check.md @@ -9,11 +9,10 @@ type: - how-to --- -NGINX Plus can continually test your upstream servers that handle UDP network traffic (DNS, RADIUS, syslog), avoid the servers that have failed, and gracefully add the recovered servers into the load‑balanced group. + +## Prerequisites -## Prerequisites {#prereq} - -- You have [configured an upstream group of servers]({{< ref "nginx/admin-guide/load-balancer/tcp-udp-load-balancer.md" >}}) that handles UDP network traffic in the [`stream {}`](https://nginx.org/en/docs/stream/ngx_stream_core_module.html#stream) context, for example: +- You have configured an upstream group of servers that handles UDP network traffic (DNS, RADIUS, syslog) in the [`stream {}`](https://nginx.org/en/docs/stream/ngx_stream_core_module.html#stream) context, for example: ```nginx stream { @@ -45,7 +44,8 @@ NGINX Plus can continually test your upstream servers that handle UDP network tr See [TCP and UDP Load Balancing]({{< ref "nginx/admin-guide/load-balancer/tcp-udp-load-balancer.md" >}}) for details. -## Passive UDP health checks {#hc_passive} + +## Passive UDP Health Checks NGINX Open Source or F5 NGINX Plus can mark the server as unavailable and stop sending UDP datagrams to it for some time if the server replies with an error or times out. @@ -62,7 +62,8 @@ upstream dns_upstream { } ``` -## Active UDP health checks {#hc_active} + +## Active UDP Health Checks Active Health Checks allow testing a wider range of failure types and are available only for NGINX Plus. For example, instead of waiting for an actual TCP request from a DNS client to fail before marking the DNS server as down (as in passive health checks), NGINX Plus will send special health check requests to each upstream server and check for a response that satisfies certain conditions. If a connection to the server cannot be established, the health check fails, and the server is considered unhealthy. NGINX Plus does not proxy client connections to unhealthy servers. If more than one health check is defined, the failure of any check is enough to consider the corresponding upstream server unhealthy. @@ -99,7 +100,8 @@ To enable active health checks: A basic UDP health check assumes that NGINX Plus sends the “nginx health check” string to an upstream server and expects the absence of ICMP “Destination Unreachable” message in response. You can configure your own health check tests in the `match {}` block. See [The “match {}” Configuration Block](#hc_active_match) for details. 
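To see how these checks are doing at runtime, you can query the NGINX Plus API for the upstream group. The following is a minimal sketch: it assumes the API is enabled as shown in the TCP/UDP load-balancing guide, that the group is the `dns_upstream` zone from the example above, and that `jq` is installed; adjust the API version segment to match your instance:

```shell
# List each server in the group with its state and health-check counters.
curl -s 'http://127.0.0.1/api/6/stream/upstreams/dns_upstream/servers' \
  | jq '.[] | {server, state, health_checks}'
```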
-### Fine-Tuning UDP Health Checks {#hc_active_finetune} + +### Fine-Tuning UDP Health Checks You can fine‑tune the health check by specifying the following parameters to the [`health_check`](https://nginx.org/en/docs/stream/ngx_stream_upstream_hc_module.html#health_check) directive: @@ -117,9 +119,10 @@ server { In the example, the time between UDP health checks is increased to 20 seconds, the server is considered unhealthy after 2 consecutive failed health checks, and the server needs to pass 2 consecutive checks to be considered healthy again. -### The “match {}” configuration block {#hc_active_match} + +### The “match {}” Configuration Block -A basic UDP health check assumes that NGINX Plus sends the “nginx health check” string to an upstream server and expects the absence of ICMP “Destination Unreachable” message in response. You can configure your own health check tests that will verify server responses. These tests are defined within the [`match {}`](https://nginx.org/en/docs/stream/ngx_stream_upstream_hc_module.html#match) configuration block. +You can verify server responses to health checks by configuring a number of tests. These tests are defined within the [`match {}`](https://nginx.org/en/docs/stream/ngx_stream_upstream_hc_module.html#match) configuration block. 1. In the top‑level `stream {}` context, specify the [`match {}`](https://nginx.org/en/docs/stream/ngx_stream_upstream_hc_module.html#match) block and set its name, for example, `udp_test`: @@ -152,9 +155,8 @@ A basic UDP health check assumes that NGINX Plus sends the “nginx health check These parameters can be used in different combinations, but no more than one `send` and one `expect` parameter can be specified at a time. -## Usage scenarios - -### NTP health checks {#example_ntp} + +#### Example Test for NTP To fine‑tune health checks for NTP, you should specify both `send` and `expect` parameters with the following text strings: @@ -165,138 +167,14 @@ match ntp { } ``` -#### Complete NTP health check configuration example - -```nginx - -stream { - upstream ntp_upstream { - zone ntp_zone 64k; - server 192.168.136.130:53; - server 192.168.136.131:53; - server 192.168.136.132:53; -} - server { - listen 53 udp; - proxy_pass ntp_upstream; - health_check match=ntp udp; - proxy_timeout 1s; - proxy_responses 1; - error_log logs/ntp.log; - } - - match ntp { - send \xe3\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00; - expect ~* \x24; - } -} -``` - -### DNS health checks {#example_dns} - -[DNS health checks](#hc_active) can be enhanced to perform real DNS lookup queries. You can craft a valid DNS query packet, send it to the upstream server, and inspect the response to determine health. - -The process includes three steps: -- [Creating a CNAME test record](#create-a-cname-record) on your DNS server. -- [Crafting a raw DNS query packet](#construct-a-raw-dns-query-packet) to be sent by NGINX Plus. -- [Validating the expected response](#configure-the-match-block-for-dns-lookup) using the `match` block, where the `send` parameter represents a raw DNS query packet, and `expect` represents the value of the CNAME record. - -#### Create a CNAME record - -First, create a CNAME record on your DNS server for a health check that points to the target website. 
- -For example, if you are using BIND self-hosted DNS solution on a Linux server: - -- Open the zone file in a text editor: - -```shell -sudo nano /etc/bind/zones/db.example.com -``` -- Add a CNAME record, making `healthcheck.example.com` resolve to `healthy.svcs.example.com`: - -```none -healthcheck IN CNAME healthy.svcs.example.com. -``` - -- Save the file and reload the DNS service: - -```shell -sudo systemctl reload bind9 -``` - -Once the CNAME record is live and resolvable, you can craft a DNS query packet that represents a DNS lookup and can be used in the `send` directive. - -#### Construct a raw DNS query packet - -The `send` parameter of the `match` block allows you to send raw UDP packets for health checks. To query your CNAME record, you need to construct a valid DNS query packet according to the [DNS protocol message format](https://datatracker.ietf.org/doc/html/rfc1035#section-4.1), including a header and question section. + +#### Example Test for DNS -The DNS Query packet can be created using DNS packet builders, such as Python Scapy or dnslib, or packet analyzers, such as tcpdump or Wireshark. If using a packet analyzer, extract only the DNS layer, removing Ethernet, IP, and UDP-related headers. - -This is the raw DNS query of `healthcheck.example.com`, represented as one line in Hex with `\x` prefixes: - -```none -\x00\x01\x01\x00\x00\x01\x00\x00\x00\x00\x00\x00\x0b\x68\x65\x61\x6c\x74\x68\x63\x68\x65\x63\x6b\x07\x65\x78\x61\x6d\x70\x6c\x65\x03\x63\x6f\x6d\x00\x00\x01\x00\x01 -``` -where: - -{{}} -| HEX | Description | -|------------------|------------------------| -| \x00\x01 | Transaction ID: 0x0001 | -| \x01\x00 | Flags: Standard query, recursion desired | -| \x00\x01 | Questions: 1 | -| \x00\x00 | Answer RRs: 0 | -| \x00\x00 | Authority RRs: 0 | -| \x00\x00 | Additional RRs: 0 | -| \x0b\x68\x65\x61\x6c\x74\x68\x63\x68\x65\x63\x6b | "healthcheck" | -| \x07\x65\x78\x61\x6d\x70\x6c\x65 | "example" | -| \x03\x63\x6f\x6d | "com" | -| \x00 | end of name | -| \x00\x01 | Type: A | -| \x00\x01 | Class: IN | -{{}} - -#### Configure the match block for DNS lookup - -Finally, specify the `match` block in the NGINX configuration file to pair the raw query with an expected response. 
The `send` directive should contain the DNS query packet, while `expect` directive should contain a matching DNS record in the DNS server's response: +To fine‑tune health checks for DNS, you should also specify both `send` and `expect` parameters with the following text strings: ```nginx match dns { - send \x00\x01\x01\x00\x00\x01\x00\x00\x00\x00\x00\x00\x0b\x68\x65\x61\x6c\x74\x68\x63\x68\x65\x63\x6b\x07\x65\x78\x61\x6d\x70\x6c\x65\x03\x63\x6f\x6d\x00\x00\x01\x00\x01; - expect ~* "healthy.svcs.example.com"; -} -``` - -#### Complete DNS health check configuration example - -```nginx - -stream { - upstream dns_upstream { - zone dns_zone 64k; - server 192.168.136.130:53; - server 192.168.136.131:53; - server 192.168.136.132:53; -} - server { - listen 53 udp; - proxy_pass dns_upstream; - health_check match=dns udp; - proxy_timeout 1s; - proxy_responses 1; - error_log logs/dns.log; - } - - match dns { - # make sure appropriate CNAME record exists - send \x00\x01\x01\x00\x00\x01\x00\x00\x00\x00\x00\x00\x0b\x68\x65\x61\x6c\x74\x68\x63\x68\x65\x63\x6b\x07\x65\x78\x61\x6d\x70\x6c\x65\x03\x63\x6f\x6d\x00\x00\x01\x00\x01; - expect ~* "healthy.svcs.example.com"; - } + send \x00\x2a\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x03\x73\x74\x6c\x04\x75\x6d\x73\x6c\x03\x65\x64\x75\x00\x00\x01\x00\x01; + expect ~* "health.is.good"; } ``` - -## See also - -- [Load Balancing DNS Traffic with NGINX and NGINX Plus](https://www.f5.com/company/blog/nginx/load-balancing-dns-traffic-nginx-plus) - -- [TCP/UDP Load Balancing with NGINX: Overview, Tips, and Tricks](https://blog.nginx.org/blog/tcp-load-balancing-udp-load-balancing-nginx-tips-tricks#activeHealthCheck) diff --git a/content/nim/nginx-app-protect/manage-waf-security-policies.md b/content/nim/nginx-app-protect/manage-waf-security-policies.md index 5c0e5ebf3..71684b133 100644 --- a/content/nim/nginx-app-protect/manage-waf-security-policies.md +++ b/content/nim/nginx-app-protect/manage-waf-security-policies.md @@ -1,7 +1,8 @@ --- -title: Manage and deploy WAF policies and log profiles -description: Learn how to use F5 NGINX Instance Manager to manage NGINX App Protect WAF security policies and security log profiles. -weight: 300 +title: Manage WAF Security Policies and Security Log Profiles +description: Learn how to use F5 NGINX Management Suite Instance Manager to manage NGINX + App Protect WAF security policies and security log profiles. +weight: 200 toc: true type: how-to product: NIM @@ -10,76 +11,83 @@ docs: DOCS-1105 ## Overview -F5 NGINX Instance Manager lets you manage NGINX App Protect WAF configurations using either the web interface or REST API. You can edit, update, and deploy security policies, log profiles, attack signatures, and threat campaigns to individual instances or instance groups. +F5 NGINX Management Suite Instance Manager provides the ability to manage the configuration of NGINX App Protect WAF instances either by the user interface or the REST API. This includes editing, updating, and deploying security policies, log profiles, attack signatures, and threat campaigns to individual instances and/or instance groups. -You can compile a security policy, attack signatures, and threat campaigns into a security policy bundle. The bundle includes all necessary components for a specific NGINX App Protect WAF version. Precompiling the bundle improves performance by avoiding separate compilation of each component during deployment. 
+In Instance Manager v2.14.0 and later, you can compile a security policy, attack signatures, and threat campaigns into a security policy bundle. A security policy bundle consists of the security policy, the attack signatures, and threat campaigns for a particular version of NGINX App Protect WAF, and additional supporting files that make it possible for NGINX App Protect WAF to use the bundle. Because the security policy bundle is pre-compiled, the configuration gets applied faster than when you individually reference the security policy, attack signature, and threat campaign files. {{}} -The following capabilities are available only through the Instance Manager REST API: +The following capabilities are only available via the Instance Manager REST API: - Update security policies - Create, read, and update security policy bundles -- Create, read, update, and delete security log profiles -- Publish security policies, log profiles, attack signatures, and threat campaigns to instances and instance groups +- Create, read, update, and delete Security Log Profiles +- Publish security policies, security log profiles, attack signatures, and/or threat campaigns to instances and instance groups {{}} --- -## Before you begin +## Before You Begin -Before continuing, complete the following steps: +Complete the following prerequisites before proceeding with this guide: -- [Set up App Protect WAF configuration management]({{< ref "setup-waf-config-management" >}}) -- Make sure your user account has the [required permissions]({{< ref "/nim/admin-guide/rbac/overview-rbac.md" >}}) to access the REST API: +- [Set Up App Protect WAF Configuration Management]({{< ref "setup-waf-config-management" >}}) +- Verify that your user account has the [necessary permissions]({{< ref "/nim/admin-guide/rbac/overview-rbac.md" >}}) to access the Instance Manager REST API: - - **Module**: Instance Manager - - **Feature**: Instance Management → `READ` - - **Feature**: Security Policies → `READ`, `CREATE`, `UPDATE`, `DELETE` + - **Module**: Instance Manager + - **Feature**: Instance Management + - **Access**: `READ` + - **Feature**: Security Policies + - **Access**: `READ`, `CREATE`, `UPDATE`, `DELETE` -To use policy bundles, you also need to: +The following are required to use support policy bundles: -- Have `UPDATE` permissions for each referenced security policy -- [Install the correct `nms-nap-compiler` package]({{< ref "/nim/nginx-app-protect/setup-waf-config-management.md#install-the-waf-compiler" >}}) for your App Protect WAF version -- [Install the required attack signatures and threat campaigns]({{< ref "/nim/nginx-app-protect/setup-waf-config-management.md#set-up-attack-signatures-and-threat-campaigns" >}}) +- You must have `UPDATE` permissions for the security policies specified in the request. +- The correct `nms-nap-compiler` packages for the NGINX App Protect WAF version you're using are [installed on Instance Manager]({{< ref "/nim/nginx-app-protect/setup-waf-config-management.md#install-the-waf-compiler" >}}). +- The attack signatures and threat campaigns that you want to use are [installed on Instance Manager]({{< ref "/nim/nginx-app-protect/setup-waf-config-management.md#set-up-attack-signatures-and-threat-campaigns" >}}). -### Access the web interface +### How to Access the Web Interface -To access the web interface, open a browser and go to the fully qualified domain name (FQDN) of your NGINX Instance Manager. Log in, then select **Instance Manager** from the Launchpad. 
+To access the web interface, go to the FQDN for your NGINX Instance Manager host in a web browser and log in. Once you're logged in, select "Instance Manager" from the Launchpad menu. -### Access the REST API +### How to Access the REST API {{< include "nim/how-to-access-nim-api.md" >}} --- -## Create a security policy {#create-security-policy} +## Create a Security Policy {#create-security-policy} {{}} {{%tab name="web interface"%}} -To create a security policy using the NGINX Instance Manager web interface: +
+ +To create a security policy using the Instance Manager web interface: + +1. In a web browser, go to the FQDN for your NGINX Management Suite host and log in. Then, from the Launchpad menu, select **Instance Manager**. +2. On the left menu, select **App Protect**. +3. On the *Security Policies* page, select **Create**. +4. On the *Create Policy* page, fill out the necessary fields: -1. In your browser, go to the FQDN for your NGINX Instance Manager host and log in. -2. From the Launchpad menu, select **Instance Manager**. -3. In the left menu, select **App Protect**. -4. On the *Security Policies* page, select **Create**. -5. On the *Create Policy* page, enter the required information: - - **Name**: Enter a name for the policy. - - **Description**: (Optional) Add a brief description. - - **Enter Policy**: Paste or type the JSON-formatted policy into the editor. The interface automatically validates the JSON. + - **Name**: Provide a name for the policy. + - **Description**: (Optional) Add a short description for the policy. + - **Enter Policy**: Type or paste the policy in JSON format into the form provided. The editor will validate the JSON for accuracy. - For help writing custom policies, see the [NGINX App Protect WAF Declarative Policy guide](https://docs.nginx.com/nginx-app-protect/declarative-policy/policy/) and the [Policy Authoring and Tuning section](https://docs.nginx.com/nginx-app-protect/configuration-guide/configuration/#policy-authoring-and-tuning) in the configuration guide. + For more information about creating custom policies, refer to the [NGINX App Protect WAF Declarative Policy](https://docs.nginx.com/nginx-app-protect/declarative-policy/policy/) guide and the [Policy Authoring and Tuning](https://docs.nginx.com/nginx-app-protect/configuration-guide/configuration/#policy-authoring-and-tuning) section of the config guide. -6. Select **Save**. +5. Select **Save**. {{%/tab%}} {{%tab name="API"%}} -To upload a new security policy using the REST API, send a `POST` request to the Security Policies API endpoint. +To upload a new security policy, send an HTTP `POST` request to the Security Policies API endpoint. + +{{}}Before sending a security policy to Instance Manager, you need to encode it using `base64`. Submitting a policy in its original JSON format will result in an error.{{}} + +
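For example, one way to prepare the request body is to encode the policy file and wrap it in the JSON payload that the curl examples below send with `-d`. This is a sketch only: the policy file name is a placeholder, and it assumes the encoded policy travels in a `content` field next to its `metadata`, mirroring the `content` fields shown in the API responses later in this guide:

```shell
# Base64-encode the declarative policy without line wrapping (GNU coreutils).
ENCODED_POLICY=$(base64 -w0 my-custom-policy.json)

# Assemble the payload referenced below as ignore-xss-example.json.
cat > ignore-xss-example.json <<EOF
{
  "metadata": {
    "name": "ignore-cross-site-scripting",
    "displayName": "Ignore cross-site scripting",
    "description": "Example policy payload"
  },
  "content": "${ENCODED_POLICY}"
}
EOF
```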
-You must encode the JSON policy using `base64`. If you send the policy in plain JSON, the request will fail. {{}} @@ -93,7 +101,7 @@ You must encode the JSON policy using `base64`. If you send the policy in plain For example: ```shell -curl -X POST https://{{NIM_FQDN}}/api/platform/v1/security/policies \ +curl -X POST https://{{NMS_FQDN}}/api/platform/v1/security/policies \ -H "Authorization: Bearer " \ -d @ignore-xss-example.json ``` @@ -126,7 +134,7 @@ curl -X POST https://{{NIM_FQDN}}/api/platform/v1/security/policies \ "modified": "2022-04-12T23:19:58.502Z", "name": "ignore-cross-site-scripting", "revisionTimestamp": "2022-04-12T23:19:58.502Z", - "uid": "" + "uid": "21daa130-4ba4-442b-bc4e-ab294af123e5" }, "selfLink": { "rel": "/api/platform/v1/services/environments/prod" @@ -140,13 +148,11 @@ curl -X POST https://{{NIM_FQDN}}/api/platform/v1/security/policies \ --- -## Update a security policy - +## Update a Security Policy -To update a security policy, send a `POST` or `PUT` request to the Security Policies API. +To update a security policy, send an HTTP `POST` request to the Security Policies API endpoint, `/api/platform/v1/security/policies`. -- Use `POST` with the `isNewRevision=true` parameter to add a new version of an existing policy. -- Use `PUT` with the policy UID to overwrite the existing version. +You can use the optional `isNewRevision` parameter to indicate whether the updated policy is a new version of an existing policy. {{}} @@ -159,35 +165,33 @@ To update a security policy, send a `POST` or `PUT` request to the Security Poli {{}} -To use `POST`, include the policy metadata and content in your request: +For example: ```shell -curl -X POST https://{{NIM_FQDN}}/api/platform/v1/security/policies?isNewRevision=true \ +curl -X POST https://{{NMS_FQDN}}/api/platform/v1/security/policies?isNewRevision=true \ -H "Authorization: Bearer " \ -d @update-xss-policy.json ``` -To use PUT, first retrieve the policy’s unique identifier (UID). You can do this by sending a GET request to the policies endpoint: +You can update a specific policy by sending an HTTP `PUT` request to the Security Policies API endpoint that includes the policy's unique identifier (UID). -```shell -curl -X GET https://{{NIM_FQDN}}/api/platform/v1/security/policies \ - -H "Authorization: Bearer " -``` +To find the UID, send an HTTP `GET` request to the Security Policies API endpoint. This returns a list of all Security Policies that contains the unique identifier for each policy. + +Include the UID for the security policy in your `PUT` request to update the policy. Once the policy update is accepted, the WAF compiler will create a new, updated bundle. -Then include the UID in your PUT request: +For example: ```shell -curl -X PUT https://{{NIM_FQDN}}/api/platform/v1/security/policies/ \ +curl -X PUT https://{{NMS_FQDN}}/api/platform/v1/security/policies/23139e0a-4ac8-49f9-b7a0-0577b42c70c7 \ -H "Authorization: Bearer " \ - --Content-Type application/json \ - -d @update-xss-policy.json + --Content-Type application/json -d @update-xss-policy.json ``` -After updating the policy, you can [publish it](#publish-policy) to selected instances or instance groups. +After you have pushed an updated security policy, you can [publish it](#publish-policy) to selected instances or instance groups. --- -## Delete a security policy +## Delete a Security Policy {{}} @@ -195,29 +199,17 @@ After updating the policy, you can [publish it](#publish-policy) to selected ins
-To delete a security policy using the NGINX Instance Manager web interface: - -1. In your browser, go to the FQDN for your NGINX Instance Manager host and log in. -2. From the Launchpad menu, select **Instance Manager**. -3. In the left menu, select **App Protect**. -4. On the *Security Policies* page, find the policy you want to delete. -5. Select the **Actions** menu (**...**) and choose **Delete**. +To delete a security policy using the Instance Manager web interface: +1. In a web browser, go to the FQDN for your NGINX Management Suite host and log in. Then, from the Launchpad menu, select **Instance Manager**. +2. On the left menu, select **App Protect**. +3. On the *Security Policies* page, select the **Actions** menu (represented by an ellipsis, **...**) for the policy you want to delete. Select **Delete** to remove the policy. {{%/tab%}} {{%tab name="API"%}} -To delete a security policy using the REST API: - -1. Retrieve the UID for the policy by sending a `GET` request to the policies endpoint: - - ```shell - curl -X GET https://{{NIM_FQDN}}/api/platform/v1/security/policies \ - -H "Authorization: Bearer " - ``` - -2. Send a `DELETE` request using the policy UID: +To delete a security policy, send an HTTP `DELETE` request to the Security Policies API endpoint that includes the unique identifier for the policy that you want to delete. {{}} @@ -229,10 +221,10 @@ To delete a security policy using the REST API: {{}} -Example: +For example: ```shell -curl -X DELETE https://{{NIM_FQDN}}/api/platform/v1/security/policies/ \ +curl -X DELETE https://{{NMS_FQDN}}/api/platform/v1/security/policies/23139e0a-4ac8-49f9-b7a0-0577b42c70c7 \ -H "Authorization: Bearer " ``` @@ -240,42 +232,36 @@ curl -X DELETE https://{{NIM_FQDN}}/api/platform/v1/security/policies/}} ---- +{{%comment%}}TO DO: Add sections for managing attack signatures and threat campaigns{{%/comment%}} -## Create security policy bundles {#create-security-policy-bundles} - - -To create a security policy bundle, send a `POST` request to the Security Policy Bundles API. The policies you want to include in the bundle must already exist in NGINX Instance Manager. +--- -Each bundle includes: +## Create Security Policy Bundles {#create-security-policy-bundles} -- A security policy -- Attack signatures -- Threat campaigns -- A version of NGINX App Protect WAF -- Supporting files required to compile and deploy the bundle +To create security policy bundles, send an HTTP `POST` request to the Security Policies Bundles API endpoint. The specified security policies you'd like to compile into security policy bundles must already exist in Instance Manager. -### Required fields +### Required Fields -- `appProtectWAFVersion`: The version of NGINX App Protect WAF to target. -- `policyName`: The name of the policy to include. Must reference an existing policy. -- `policyUID`: Optional. If omitted, the latest revision of the specified policy is used. This field does **not** accept the keyword `latest`. +- `appProtectWAFVersion`: The version of NGINX App Protect WAF being used. +- `policyName`: The name of security policy to include in the bundle. This must reference an existing security policy; refer to the [Create a Security Policy](#create-security-policy) section above for instructions. -If you don’t include `attackSignatureVersionDateTime` or `threatCampaignVersionDateTime`, the latest versions are used by default. You can also set them explicitly by using `"latest"` as the value. 
+### Notes +- If you do not specify a value for the `attackSignatureVersionDateTime` and/or `threatCampaignVersionDateTime` fields, the latest version of each will be used by default. You can also explicitly state that you want to use the most recent version by specifying the keyword `latest` as the value. +- If the `policyUID` field is not defined, the latest version of the specified security policy will be used. This field **does not allow** use of the keyword `latest`. {{}} -| Method | Endpoint | -|--------|----------------------------------------------| +| Method | Endpoint | +|--------|--------------------------------------| | POST | `/api/platform/v1/security/policies/bundles` | {{}} -Example: +For example: ```shell -curl -X POST https://{{NIM_FQDN}}/api/platform/v1/security/policies/bundles \ +curl -X POST https://{{NMS_FQDN}}/api/platform/v1/security/policies/bundles \ -H "Authorization: Bearer " \ -d @security-policy-bundles.json ``` @@ -288,7 +274,7 @@ curl -X POST https://{{NIM_FQDN}}/api/platform/v1/security/policies/bundles \ "bundles": [{ "appProtectWAFVersion": "4.457.0", "policyName": "default-enforcement", - "policyUID": "", + "policyUID": "29d86fe8-612a-5c69-895a-04fc5b9849a6", "attackSignatureVersionDateTime": "2023.06.20", "threatCampaignVersionDateTime": "2023.07.18" }, @@ -319,10 +305,10 @@ curl -X POST https://{{NIM_FQDN}}/api/platform/v1/security/policies/bundles \ "modified": "2023-10-04T23:19:58.502Z", "appProtectWAFVersion": "4.457.0", "policyName": "default-enforcement", - "policyUID": "", + "policyUID": "29d86fe8-612a-5c69-895a-04fc5b9849a6", "attackSignatureVersionDateTime": "2023.06.20", "threatCampaignVersionDateTime": "2023.07.18", - "uid": "" + "uid": "dceb8254-9a90-4e77-87ac-73070f821412" }, "content": "", "compilationStatus": { @@ -335,11 +321,11 @@ curl -X POST https://{{NIM_FQDN}}/api/platform/v1/security/policies/bundles \ "created": "2023-10-04T23:19:58.502Z", "modified": "2023-10-04T23:19:58.502Z", "appProtectWAFVersion": "4.279.0", - "policyName": "default-enforcement", - "policyUID": "", + "policyName": "defautl-enforcement", + "policyUID": "04fc5b9849a6-612a-5c69-895a-29d86fe8", "attackSignatureVersionDateTime": "2023.08.10", "threatCampaignVersionDateTime": "2023.08.09", - "uid": "" + "uid": "trs35lv2-9a90-4e77-87ac-ythn4967" }, "content": "", "compilationStatus": { @@ -353,10 +339,10 @@ curl -X POST https://{{NIM_FQDN}}/api/platform/v1/security/policies/bundles \ "modified": "2023-10-04T23:19:58.502Z", "appProtectWAFVersion": "4.457.0", "policyName": "ignore-xss", - "policyUID": "", + "policyUID": "849a604fc5b9-612a-5c69-895a-86f29de8", "attackSignatureVersionDateTime": "2023.08.10", "threatCampaignVersionDateTime": "2023.08.09", - "uid": "" + "uid": "nbu844lz-9a90-4e77-87ac-zze8861d" }, "content": "", "compilationStatus": { @@ -368,38 +354,39 @@ curl -X POST https://{{NIM_FQDN}}/api/platform/v1/security/policies/bundles \ } ``` + --- -## List security policy bundles {#list-security-policy-bundles} +## List Security Policy Bundles {#list-security-policy-bundles} -To list all security policy bundles, send a `GET` request to the Security Policy Bundles API. +To list security policy bundles, send an HTTP `GET` request to the Security Policies Bundles API endpoint. -You’ll only see bundles you have `"READ"` permissions for. 
+{{}}The list will only contain the security policy bundles that you have "READ" permissions for in Instance Manager.{{}} -You can use the following query parameters to filter results: +You can filter the results by using the following query parameters: -- `includeBundleContent`: Whether to include base64-encoded content in the response. Defaults to `false`. -- `policyName`: Return only bundles that match this policy name. -- `policyUID`: Return only bundles that match this policy UID. -- `startTime`: Return only bundles modified at or after this time. -- `endTime`: Return only bundles modified before this time. +- `includeBundleContent`: Boolean indicating whether to include the security policy bundle content for each bundle when getting a list of bundles or not. If not provided, defaults to `false`. Please note that the content returned is `base64 encoded`. +- `policyName`: String used to filter the list of security policy bundles; only security policy bundles that have the specified security policy name will be returned. If not provided, it will not filter based on `policyName`. +- `policyUID`: String used to filter the list of security policy bundles; only security policy bundles that have the specified security policy UID will be returned. If not provided, it will not filter based on `policyUID`. +- `startTime`: The security policy bundle's "modified time" has to be equal to or greater than this time value. If no value is supplied, it defaults to 24 hours from the current time. `startTime` has to be less than `endTime`. +- `endTime`: Indicates the time that the security policy bundles modified time has to be less than. If no value is supplied, it defaults to current time. `endTime` has to be greater than `startTime`. -If no time range is provided, the API defaults to showing bundles modified in the past 24 hours. +
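For instance, to list only the bundles compiled for one policy, and to skip their base64-encoded contents, you can combine the query parameters above. This is a sketch; the host and policy name are placeholders taken from the examples in this guide:

```shell
# Filter the bundle list by policy name and omit the bundle contents.
curl -s -H "Authorization: Bearer <access token>" \
  "https://{{NMS_FQDN}}/api/platform/v1/security/policies/bundles?policyName=default-enforcement&includeBundleContent=false"
```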
{{}} -| Method | Endpoint | -|--------|----------------------------------------------| +| Method | Endpoint | +|--------|--------------------------------------| | GET | `/api/platform/v1/security/policies/bundles` | {{}} -Example: +For example: ```shell -curl -X GET https://{{NIM_FQDN}}/api/platform/v1/security/policies/bundles \ +curl -X GET https://{{NMS_FQDN}}/api/platform/v1/security/policies/bundles \ -H "Authorization: Bearer " ``` @@ -414,10 +401,10 @@ curl -X GET https://{{NIM_FQDN}}/api/platform/v1/security/policies/bundles \ "modified": "2023-10-04T23:19:58.502Z", "appProtectWAFVersion": "4.457.0", "policyName": "default-enforcement", - "policyUID": "", + "policyUID": "29d86fe8-612a-5c69-895a-04fc5b9849a6", "attackSignatureVersionDateTime": "2023.06.20", "threatCampaignVersionDateTime": "2023.07.18", - "uid": "" + "uid": "dceb8254-9a90-4e77-87ac-73070f821412" }, "content": "", "compilationStatus": { @@ -431,10 +418,10 @@ curl -X GET https://{{NIM_FQDN}}/api/platform/v1/security/policies/bundles \ "modified": "2023-10-04T23:19:58.502Z", "appProtectWAFVersion": "4.279.0", "policyName": "defautl-enforcement", - "policyUID": "", + "policyUID": "04fc5b9849a6-612a-5c69-895a-29d86fe8", "attackSignatureVersionDateTime": "2023.08.10", "threatCampaignVersionDateTime": "2023.08.09", - "uid": "" + "uid": "trs35lv2-9a90-4e77-87ac-ythn4967" }, "content": "", "compilationStatus": { @@ -448,10 +435,10 @@ curl -X GET https://{{NIM_FQDN}}/api/platform/v1/security/policies/bundles \ "modified": "2023-10-04T23:19:58.502Z", "appProtectWAFVersion": "4.457.0", "policyName": "ignore-xss", - "policyUID": "", + "policyUID": "849a604fc5b9-612a-5c69-895a-86f29de8", "attackSignatureVersionDateTime": "2023.08.10", "threatCampaignVersionDateTime": "2023.08.09", - "uid": "" + "uid": "nbu844lz-9a90-4e77-87ac-zze8861d" }, "content": "", "compilationStatus": { @@ -465,35 +452,35 @@ curl -X GET https://{{NIM_FQDN}}/api/platform/v1/security/policies/bundles \ --- -## Get a security policy bundle {#get-security-policy-bundle} +## Get a Security Policy Bundle {#get-security-policy-bundle} -To retrieve a specific security policy bundle, send a `GET` request to the Security Policy Bundles API using the policy UID and bundle UID in the URL path. +To get a specific security policy bundle, send an HTTP `GET` request to the Security Policies Bundles API endpoint that contains the security policy UID and security policy bundle UID in the path. -You must have `"READ"` permission for the bundle to retrieve it. +{{}}You must have "READ" permission for the security policy bundle to be able to retrieve information about a bundle by using the REST API.{{}} + +
{{}} -| Method | Endpoint | -|--------|-------------------------------------------------------------------------------------------------| +| Method | Endpoint | +|--------|--------------------------------------| | GET | `/api/platform/v1/security/policies/{security-policy-uid}/bundles/{security-policy-bundle-uid}` | {{}} -Example: + +For example: ```shell -curl -X GET https://{{NIM_FQDN}}/api/platform/v1/security/policies//bundles/ \ +curl -X GET https://{{NMS_FQDN}}/api/platform/v1/security/policies/29d86fe8-612a-5c69-895a-04fc5b9849a6/bundles/trs35lv2-9a90-4e77-87ac-ythn4967 \ -H "Authorization: Bearer " ``` -The response includes a content field that contains the bundle in base64 format. To use it, you’ll need to decode the content and save it as a `.tgz` file. - -Example: +The JSON response, shown in the example below, includes a `content` field that is base64 encoded. After you retrieve the information from the API, you will need to base64 decode the content field. You can include this in your API call, as shown in the following example cURL request: ```bash -curl -X GET "https://{{NIM_FQDN}}/api/platform/v1/security/policies//bundles/" \ - -H "Authorization: Bearer " | jq -r '.content' | base64 -d > security-policy-bundle.tgz +curl -X GET "https://{NMS_FQDN}/api/platform/v1/security/policies/{security-policy-uid}/bundles/{security-policy-bundle-uid}" -H "Authorization: Bearer xxxxx.yyyyy.zzzzz" | jq -r '.content' | base64 -d > security-policy-bundle.tgz ```
@@ -505,10 +492,10 @@ curl -X GET "https://{{NIM_FQDN}}/api/platform/v1/security/policies/ "created": "2023-10-04T23:19:58.502Z", "modified": "2023-10-04T23:19:58.502Z", "appProtectWAFVersion": "4.457.0", - "policyUID": "", + "policyUID": "29d86fe8-612a-5c69-895a-04fc5b9849a6", "attackSignatureVersionDateTime": "2023.08.10", "threatCampaignVersionDateTime": "2023.08.09", - "uid": "" + "uid": "trs35lv2-9a90-4e77-87ac-ythn4967" }, "content": "ZXZlbnRzIHt9Cmh0dHAgeyAgCiAgICBzZXJ2ZXIgeyAgCiAgICAgICAgbGlzdGVuIDgwOyAgCiAgICAgICAgc2VydmVyX25hbWUgXzsKCiAgICAgICAgcmV0dXJuIDIwMCAiSGVsbG8iOyAgCiAgICB9ICAKfQ==", "compilationStatus": { @@ -520,25 +507,28 @@ curl -X GET "https://{{NIM_FQDN}}/api/platform/v1/security/policies/ --- -## Create a security log profile {#create-security-log-profile} +## Create a Security Log Profile {#create-security-log-profile} + +Send an HTTP `POST` request to the Security Log Profiles API endpoint to upload a new security log profile. -To upload a new security log profile, send a `POST` request to the Security Log Profiles API endpoint. +{{}}Before sending a security log profile to Instance Manager, you need to encode it using `base64`. Submitting a log profile in its original JSON format will result in an error.{{}} + +
-You must encode the log profile in `base64` before sending it. If you send plain JSON, the request will fail. {{}} -| Method | Endpoint | -|--------|-----------------------------------------| +| Method | Endpoint | +|--------|--------------------------------------| | POST | `/api/platform/v1/security/logprofiles` | {{}} -Example: +For example: ```shell -curl -X POST https://{{NIM_FQDN}}/api/platform/v1/security/logprofiles \ +curl -X POST https://{{NMS_FQDN}}/api/platform/v1/security/logprofiles \ -H "Authorization: Bearer " \ -d @default-log-example.json ``` @@ -568,100 +558,87 @@ curl -X POST https://{{NIM_FQDN}}/api/platform/v1/security/logprofiles \ "modified": "2023-07-05T22:09:19.634358096Z", "name": "default-log-example", "revisionTimestamp": "2023-07-05T22:09:19.634358096Z", - "uid": "" + "uid": "54c35ad7-e082-4dc5-bb5d-2640a17d5620" }, "selfLink": { - "rel": "/api/platform/v1/security/logprofiles/" + "rel": "/api/platform/v1/security/logprofiles/54c35ad7-e082-4dc5-bb5d-2640a17d5620" } } ``` --- -## Update a security log profile {#update-security-log-profile} +## Update a Security Log Profile + +To update a security log profile, send an HTTP `POST` request to the Security Log Profiles API endpoint, `/api/platform/v1/security/logprofiles`. -To update a security log profile, you can either: +You can use the optional `isNewRevision` parameter to indicate whether the updated log profile is a new version of an existing log profile. -- Use `POST` with the `isNewRevision=true` parameter to add a new version. -- Use `PUT` with the log profile UID to overwrite the existing version. {{}} -| Method | Endpoint | -|--------|--------------------------------------------------------------------| -| POST | `/api/platform/v1/security/logprofiles?isNewRevision=true` | +| Method | Endpoint | +|--------|---------------------------------------------------------| +| POST | `/api/platform/v1/security/logprofiles?isNewRevision=true` | | PUT | `/api/platform/v1/security/logprofiles/{security-log-profile-uid}` | {{}} -To create a new revision: +For example: ```shell -curl -X POST https://{{NIM_FQDN}}/api/platform/v1/security/logprofiles?isNewRevision=true \ +curl -X POST https://{{NMS_FQDN}}/api/platform/v1/security/logprofiles?isNewRevision=true \ -H "Authorization: Bearer " \ -d @update-default-log.json ``` -To overwrite an existing security log profile: +You can update a specific log profile by sending an HTTP `PUT` request to the Security Log Profiles API endpoint that includes the log profile's unique identifier (UID). + +To find the UID, send an HTTP `GET` request to the Security Log Profiles API endpoint. This returns a list of all Security Log Profiles that contains the unique identifier for each log profile. -1. Retrieve the profile’s UID: +Include the UID for the security log profile in your `PUT` request to update the log profile. - ```shell - curl -X PUT https://{{NIM_FQDN}}/api/platform/v1/security/logprofiles/ \ - -H "Authorization: Bearer " \ - --Content-Type application/json \ - -d @update-log-profile.json - ``` +For example: -2. 
Use the UID in your PUT request: - - ```shell - curl -X PUT https://{{NIM_FQDN}}/api/platform/v1/security/logprofiles/ \ - -H "Authorization: Bearer " \ - --Content-Type application/json \ - -d @update-log-profile.json - ``` +```shell +curl -X PUT https://{{NMS_FQDN}}/api/platform/v1/security/logprofiles/23139e0a-4ac8-49f9-b7a0-0577b42c70c7 \ + -H "Authorization: Bearer " \ + --Content-Type application/json -d @update-default-log.json +``` -After updating the security log profile, you can [publish it](#publish-policy) to specific instances or instance groups. +After you have pushed an updated security log profile, you can [publish it](#publish-policy) to selected instances or instance groups. --- -## Delete a security log profile {#delete-security-log-profile} +## Delete a Security Log Profile -To delete a security log profile, send a `DELETE` request to the Security Log Profiles API using the profile’s UID. +To delete a security log profile, send an HTTP `DELETE` request to the Security Log Profiles API endpoint that includes the unique identifier for the log profile that you want to delete. {{}} -| Method | Endpoint | -|--------|--------------------------------------------------------------------| +| Method | Endpoint | +|--------|------------------------------------------------------------| | DELETE | `/api/platform/v1/security/logprofiles/{security-log-profile-uid}` | {{}} -1. Retrieve the UID: - - ```shell - curl -X GET https://{{NIM_FQDN}}/api/platform/v1/security/logprofiles \ - -H "Authorization: Bearer " - ``` - -2. Send the delete request: +For example: - ```shell - curl -X DELETE https://{{NIM_FQDN}}/api/platform/v1/security/logprofiles/ \ - -H "Authorization: Bearer " - ``` +```shell +curl -X DELETE https://{{NMS_FQDN}}/api/platform/v1/security/logprofiles/23139e0a-4ac8-49f9-b7a0-0577b42c70c7 \ + -H "Authorization: Bearer " +``` --- -## Publish updates to instances {#publish-policy} +## Publish Updates to Instances {#publish-policy} -Use the Publish API to push security policies, log profiles, attack signatures, and threat campaigns to NGINX instances or instance groups. +The Publish API lets you distribute security policies, security log profiles, attack signatures, and/or threat campaigns to instances and instance groups. -Call this endpoint *after* you've created or updated the resources you want to deploy. +{{}}Use this endpoint *after* you've added or updated security policies, security log profiles, attack signatures, and/or threat campaigns.{{}} {{}} @@ -673,20 +650,18 @@ Call this endpoint *after* you've created or updated the resources you want to d {{}} -Include the following information in your request, depending on what you're publishing: +When making a request to the Publish API, make sure to include all the necessary information for your specific use case: -- Instance and instance group UIDs -- Policy UID and name -- Log profile UID and name -- Attack signature library UID and version -- Threat campaign UID and version +- Instance and/or Instance Group UID(s) to push the bundle to +- Threat Campaign version and UID +- Attack Signature version and UID +- Security Policy UID(s) +- Security Log Profile UID(s) -Example: +For example: ```shell -curl -X POST https://{{NIM_FQDN}}/api/platform/v1/security/publish \ - -H "Authorization: Bearer " \ - -d @publish-request.json +curl -X PUT https://{{NMS_FQDN}}/api/platform/v1/security/publish -H "Authorization: Bearer " ```
@@ -696,27 +671,27 @@ curl -X POST https://{{NIM_FQDN}}/api/platform/v1/security/publish \ { "publications": [ { - "instances": [ - "" - ], + "attackSignatureLibrary": { + "uid": "3fa85f64-5717-4562-b3fc-2c963f66afa6", + "versionDateTime": "2022.10.02" + }, "instanceGroups": [ - "" + "3fa85f64-5717-4562-b3fc-2c963f66afa6" + ], + "instances": [ + "3fa85f64-5717-4562-b3fc-2c963f66afa6" ], - "policyContent": { - "name": "example-policy", - "uid": "" - }, "logProfileContent": { - "name": "example-log-profile", - "uid": "" + "name": "default-log", + "uid": "ffdbda39-88be-420a-b673-19d4183b7e4c" }, - "attackSignatureLibrary": { - "uid": "", - "versionDateTime": "2023.10.02" + "policyContent": { + "name": "default-enforcement", + "uid": "3fa85f64-5717-4562-b3fc-2c963f66afa6" }, "threatCampaign": { - "uid": "", - "versionDateTime": "2023.10.01" + "uid": "3fa85f64-5717-4562-b3fc-2c963f66afa6", + "versionDateTime": "2022.10.01" } } ] @@ -746,91 +721,357 @@ curl -X POST https://{{NIM_FQDN}}/api/platform/v1/security/publish \ --- -## Check security policy and security log profile publication status {#check-publication-status} - -After publishing updates, you can check deployment status using the NGINX Instance Manager REST API. +## Check Security Policy and Security Log Profile Publication Status +When publishing an NGINX configuration that references a security policy and secuity log profile, the Instance Manager REST APIs can provide further details about the status of the configuration publications. To access this information, use the Instance Manager API endpoints and method as indicated. -Use the following endpoints to verify whether the configuration updates were successfully deployed to instances or instance groups. +To retrieve the details for the different configuration publication statuses for a particular security policy, send an HTTP `GET` request to the Security Deployments Associations API endpoint, providing the name of the security policy. -### Check publication status for a security policy +| Method | Endpoint | +|--------|-----------------------------------------------------------------------------| +| GET | `/api/platform/v1/security/deployments/associations/{security-policy-name}` | -To view deployment status for a specific policy, send a `GET` request to the Security Deployments Associations API using the policy name. +You can locate the configuration publication status in the response within the field `lastDeploymentDetails` for instances and instance groups: -{{}} +- `lastDeploymentDetails` (for an instance): associations -> instance -> lastDeploymentDetails +- `lastDeploymentDetails` (for an instance in an instance group): associations -> instanceGroup -> instances -> lastDeploymentDetails -| Method | Endpoint | -|--------|--------------------------------------------------------------------| -| GET | `/api/platform/v1/security/deployments/associations/{policy-name}` | - -{{}} - -Example: +The example below shows a call to the `security deployments associations` endpoint and the corresponding JSON response containing successful deployments. ```shell -curl -X GET "https://{{NIM_FQDN}}/api/platform/v1/security/deployments/associations/ignore-xss" \ - -H "Authorization: Bearer " +curl -X GET "https://{NGINX-INSTANCE-MANAGER-FQDN}/api/platform/v1/security/deployments/associations/ignore-xss" -H "Authorization: Bearer " ``` -In the response, look for the `lastDeploymentDetails` field under instance or `instanceGroup.instances`. +
+JSON Response +```json +{ + "associations": [ + { + "attackSignatureLibrary": { + "uid": "c69460cc-6b59-4813-8d9c-76e4a6c56b4b", + "versionDateTime": "2023.02.16" + }, + "instance": { + "hostName": "ip-172-16-0-99", + "lastDeploymentDetails": { + "createTime": "2023-04-11T21:36:11.519174534Z", + "details": { + "failure": [], + "pending": [], + "success": [ + { + "name": "ip-172-16-0-99" + } + ] + }, + "id": "19cf5ed4-29d6-4139-b5f5-308c0d0ebb13", + "message": "Instance config successfully published to", + "status": "successful", + "updateTime": "2023-04-11T21:36:14.008108979Z" + }, + "systemUid": "0435a5de-41c1-3754-b2e8-9d9fe946bafe", + "uid": "29d86fe8-612a-5c69-895a-04fc5b9849a6" + }, + "instanceGroup": { + "displayName": "inst_group_1", + "instances": [ + { + "hostName": "hostname1", + "systemUid": "49d143c2-f556-4cd7-8658-76fff54fb861", + "uid": "c8e15dcf-c504-4b7f-b52d-def7b8fd2f64", + "lastDeploymentDetails": { + "createTime": "2023-04-11T21:36:11.519174534Z", + "details": { + "failure": [], + "pending": [], + "success": [ + { + "name": "ip-172-16-0-99" + } + ] + }, + "id": "19cf5ed4-29d6-4139-b5f5-308c0d0ebb13", + "message": "Instance config successfully published to", + "status": "successful", + "updateTime": "2023-04-11T21:36:14.008108979Z" + }, + }, + { + "hostName": "hostname2", + "systemUid": "88a99ab0-15bb-4719-9107-daf5007c33f7", + "uid": "ed7e9173-794f-41af-80d9-4ed37d593247", + "lastDeploymentDetails": { + "createTime": "2023-04-11T21:36:11.519174534Z", + "details": { + "failure": [], + "pending": [], + "success": [ + { + "name": "ip-172-16-0-99" + } + ] + }, + "id": "19cf5ed4-29d6-4139-b5f5-308c0d0ebb13", + "message": "Instance config successfully published to", + "status": "successful", + "updateTime": "2023-04-11T21:36:14.008108979Z" + }, + } + ], + "uid": "51f8addc-c0e9-438b-b0b6-3e4f1aa8202d" + }, + "policyUid": "9991f237-d9c7-47b7-98aa-faa836838f38", + "policyVersionDateTime": "2023-04-11T21:18:19.183Z", + "threatCampaign": { + "uid": "eab683fe-c2f1-4910-a88c-8bfbc6363164", + "versionDateTime": "2023.02.15" + } + } + ] +} +``` -### Check publication status for a security log profile +
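If you script these checks, the nested `lastDeploymentDetails` fields can be extracted directly from the response. The following is only a sketch, assuming `jq` is installed; the paths mirror the example response above (per-instance associations, and instances that belong to an instance group):

```shell
# Sketch: pull the publication status out of the associations response.
# Assumes jq is installed; field paths follow the example response above.
curl -s "https://{NGINX-INSTANCE-MANAGER-FQDN}/api/platform/v1/security/deployments/associations/ignore-xss" \
  -H "Authorization: Bearer <access token>" \
  | jq '{instanceStatus:      [.associations[].instance.lastDeploymentDetails.status],
         instanceGroupStatus: [.associations[].instanceGroup.instances[]?.lastDeploymentDetails.status]}'
```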
-{{}} +To retrieve the details for the different configuration publication statuses for a particular security log profile, send an HTTP `GET` request to the Security Deployments Associations API endpoint, providing the name of the security log profile. -| Method | Endpoint | -|--------|-------------------------------------------------------------------------------------| -| GET | `/api/platform/v1/security/deployments/logprofiles/associations/{log-profile-name}` | +| Method | Endpoint | +|--------|-----------------------------------------------------------------------------| +| GET | `/api/platform/v1/security/deployments/logprofiles/associations/{security-log-profile-name}` | -{{}} +You can locate the configuration publication status in the response within the field `lastDeploymentDetails` for instances and instance groups: -Example: +- `lastDeploymentDetails` (for an instance): associations -> instance -> lastDeploymentDetails +- `lastDeploymentDetails` (for an instance in an instance group): associations -> instanceGroup -> instances -> lastDeploymentDetails + +The example below shows a call to the `security deployments associations` endpoint and the corresponding JSON response containing successful deployments. ```shell -curl -X GET "https://{{NIM_FQDN}}/api/platform/v1/security/deployments/logprofiles/associations/default-log" \ - -H "Authorization: Bearer " +curl -X GET "https://{NGINX-INSTANCE-MANAGER-FQDN}/api/platform/v1/security/deployments/logprofiles/associations/default-log" -H "Authorization: Bearer " ``` -The response also contains `lastDeploymentDetails` for each instance or group. +
+JSON Response -### Check status for a specific instance +```json +{ + "associations": [ + { + "instance": { + "hostName": "", + "systemUid": "", + "uid": "" + }, + "instanceGroup": { + "displayName": "ig1", + "instances": [ + { + "hostName": "ip-172-16-0-142", + "systemUid": "1d1f03ff-02de-32c5-8dfd-902658aada4c", + "uid": "18d074e6-3868-51ba-9999-b7466a936815" + } + ], + "lastDeploymentDetails": { + "createTime": "2023-07-05T23:01:06.679136973Z", + "details": { + "failure": [], + "pending": [], + "success": [ + { + "name": "ip-172-16-0-142" + } + ] + }, + "id": "9bfc9db7-877d-4e8e-a43d-9660a6cd11cc", + "message": "Instance Group config successfully published to ig1", + "status": "successful", + "updateTime": "2023-07-05T23:01:06.790802157Z" + }, + "uid": "0df0386e-82f7-4efc-863e-5d7cfbc3f7df" + }, + "logProfileUid": "b680f7c3-6fc0-4c6b-889a-3025580c7fcb", + "logProfileVersionDateTime": "2023-07-05T22:08:47.371Z" + }, + { + "instance": { + "hostName": "ip-172-16-0-5", + "lastDeploymentDetails": { + "createTime": "2023-07-05T21:45:08.698646791Z", + "details": { + "failure": [], + "pending": [], + "success": [ + { + "name": "ip-172-16-0-5" + } + ] + }, + "id": "73cf670a-738a-4a74-b3fb-ac9771e89814", + "message": "Instance config successfully published to", + "status": "successful", + "updateTime": "2023-07-05T21:45:08.698646791Z" + }, + "systemUid": "0afe5ac2-43aa-36c8-bcdc-7f88cdd35ab2", + "uid": "9bb4e2ef-3746-5d79-b526-e545fad27e90" + }, + "instanceGroup": { + "displayName": "", + "instances": [], + "uid": "" + }, + "logProfileUid": "bb3badb2-f8f5-4b95-9428-877fc208e2f1", + "logProfileVersionDateTime": "2023-07-03T21:46:17.006Z" + } + ] +} +``` -You can also view the deployment status for a specific instance by providing the system UID and instance UID. +
-{{}} +To retrieve the configuration publication status details for a particular instance, send an HTTP `GET` request to the Instances API endpoint, providing the unique system and instance identifiers. -| Method | Endpoint | -|--------|------------------------------------------------------------------| -| GET | `/api/platform/v1/systems/{system-uid}/instances/{instance-uid}` | +| Method | Endpoint | +|--------|-----------------------------------------------------------------| +| GET | `/api/platform/v1/systems/{system-uid}/instances/{instance-id}` | -{{}} +You can locate the configuration publication status in the the response within the `lastDeploymentDetails` field, which contains additional fields that provide more context around the status. -Example: +The example below shows a call to the `instances` endpoint and the corresponding JSON response containing a compiler related error message. ```shell -curl -X GET "https://{{NIM_FQDN}}/api/platform/v1/systems//instances/" \ - -H "Authorization: Bearer " +curl -X GET "https://{NGINX-INSTANCE-MANAGER-FQDN}/api/platform/v1/systems/b9df6377-2c4f-3266-a64a-e064b0371c73/instances/5663cf4e-a0c7-50c8-b93c-16fd11a0f00b" -H "Authorization: Bearer " ``` -In the response, look for the `lastDeploymentDetails` field, which shows the deployment status and any related errors. +
+JSON Response -### Check deployment result by deployment ID +```json +{ + "build": { + "nginxPlus": true, + "release": "nginx-plus-r28", + "version": "1.23.2" + }, + "configPath": "/etc/nginx/nginx.conf", + "configVersion": { + "instanceGroup": { + "createTime": "0001-01-01T00:00:00Z", + "uid": "", + "versionHash": "" + }, + "versions": [ + { + "createTime": "2023-01-14T10:48:46.319Z", + "uid": "5663cf4e-a0c7-50c8-b93c-16fd11a0f00b", + "versionHash": "922e9d40fa6d4dd3a4b721295b8ecd95f73402644cb8d234f9f4f862b8a56bfc" + } + ] + }, + "displayName": "ip-192-0-2-27", + "links": [ + { + "rel": "/api/platform/v1/systems/b9df6377-2c4f-3266-a64a-e064b0371c73", + "name": "system" + }, + { + "rel": "/api/platform/v1/systems/b9df6377-2c4f-3266-a64a-e064b0371c73/instances/5663cf4e-a0c7-50c8-b93c-16fd11a0f00b", + "name": "self" + }, + { + "rel": "/api/platform/v1/systems/instances/deployments/b31c6ab1-4a46-4c81-a065-204575145e8e", + "name": "deployment" + } + ], + "processPath": "/usr/sbin/nginx", + "registrationTime": "2023-01-14T10:12:31.000Z", + "startTime": "2023-01-14T10:09:43Z", + "status": { + "lastStatusReport": "2023-01-14T11:11:49.323495017Z", + "state": "online" + }, + "uid": "5663cf4e-a0c7-50c8-b93c-16fd11a0f00b", + "version": "1.23.2", + "appProtect": { + "attackSignatureVersion": "Available after publishing Attack Signatures from Instance Manager", + "status": "active", + "threatCampaignVersion": "Available after publishing Threat Campaigns from Instance Manager", + "version": "4.2.0" + }, + "configureArgs": [ + ... + ], + "lastDeploymentDetails": { + "createTime": "2023-01-14T11:10:25.096812852Z", + "details": { + "error": "{\"instance:b9df6377-2c4f-3266-a64a-e064b0371c73\":\"failed building config payload: policy compilation failed for deployment b31c6ab1-4a46-4c81-a065-204575145e8e due to integrations service error: the specified compiler (4.2.0) is missing, please install it and try again.\"}", + "failure": [ + { + "failMessage": "failed building config payload: policy compilation failed for deployment b31c6ab1-4a46-4c81-a065-204575145e8e due to integrations service error: the specified compiler (4.2.0) is missing, please install it and try again.", + "name": "ip-192-0-2-27" + } + ], + "pending": [], + "success": [] + }, + "id": "b31c6ab1-4a46-4c81-a065-204575145e8e", + "message": "Instance config failed to publish to", + "status": "failed", + "updateTime": "2023-01-14T11:10:25.175145693Z" + }, + "loadableModules": [ + ... + ], + "packages": [ + ... + ], + "processId": "10345", + "ssl": { + "built": null, + "runtime": null + } +} +``` -When you use the Publish API to [publish security content](#publish-policy), NGINX Instance Manager creates a deployment ID for the request. You can use this ID to check the result of the publication. +
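In the example above, the failure message reports that the WAF compiler version referenced by the instance (4.2.0) is not installed on Instance Manager. As a rough remediation sketch, assuming your system already has the Instance Manager repository configured and the compiler packages follow the `nms-nap-compiler-v<version>` naming used in the WAF compiler setup guide:

```shell
# Sketch: install the compiler version reported as missing (4.2.0 in the example
# above) on the Instance Manager host, then publish the configuration again.
# The package name is an assumption based on the nms-nap-compiler-v<version>
# convention; adjust the version and package manager to your platform.
sudo apt-get update && sudo apt-get install -y nms-nap-compiler-v4.2.0   # Debian/Ubuntu
# sudo yum install -y nms-nap-compiler-v4.2.0                            # RHEL/CentOS
```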
-{{}} +When you use the Publish API (`/security/publish`) to [publish a security policy and security log profile](#publish-policy), Instance Manager creates a deployment ID for the request. To view the status of the update, or to check for any errors, use the endpoint and method shown below and reference the deployment ID. | Method | Endpoint | |--------|------------------------------------------------------------------| | GET | `/api/platform/v1/systems/instances/deployments/{deployment-id}` | -{{}} +You can locate the configuration publication status in the the response within the `details` field, which contains additional fields that provide more context around the status. -Example: +The example below shows a call to the `deployments` endpoint and the corresponding JSON response containing a compiler error message. ```shell -curl -X GET "https://{{NIM_FQDN}}/api/platform/v1/systems/instances/deployments/" \ +curl -X GET --url "https://{NGINX-INSTANCE-MANAGER-FQDN}/api/platform/v1/systems/instances/deployments/d38a8e5d-2312-4046-a60f-a30a4aea1fbb" \ -H "Authorization: Bearer " ``` -The response includes the full deployment status, success or failure details, and any compiler error messages. +
+JSON Response + +```json +{ + "createTime": "2023-01-14T04:35:47.566082799Z", + "details": { + "error": "{\"instance:8a2092aa-5612-370d-bff0-5d7521e206d6\":\"failed building config payload: policy bundle compilation failed for d38a8e5d-2312-4046-a60f-a30a4aea1fbb, integrations service returned the following error: missing the specified compiler (4.2.0) please install it and try again\"}", + "failure": [ + { + "failMessage": "failed building config payload: policy bundle compilation failed for d38a8e5d-2312-4046-a60f-a30a4aea1fbb, integrations service returned the following error: missing the specified compiler (4.2.0) please install it and try again", + "name": "ip-192-0-2-243" + } + ], + "pending": [], + "success": [] + }, + "id": "d38a8e5d-2312-4046-a60f-a30a4aea1fbb", + "message": "Instance config failed to publish to", + "status": "failed", + "updateTime": "2023-01-14T04:35:47.566082799Z" +} +``` + +
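Because publication is asynchronous, it can be useful to poll the deployments endpoint until the deployment reaches a final state. The loop below is only a sketch; it assumes `jq` is installed and that the `DEPLOYMENT_ID` variable already holds the deployment ID returned by the Publish API, and it relies only on the `successful` and `failed` status values shown in the example responses in this guide:

```shell
# Sketch: poll a deployment until it reaches a final state, then list any failures.
# Assumes jq is installed and DEPLOYMENT_ID holds the ID returned by the Publish API.
DEPLOYMENT_URL="https://{NGINX-INSTANCE-MANAGER-FQDN}/api/platform/v1/systems/instances/deployments/${DEPLOYMENT_ID}"
while :; do
  STATUS=$(curl -s "${DEPLOYMENT_URL}" -H "Authorization: Bearer <access token>" | jq -r '.status')
  echo "deployment ${DEPLOYMENT_ID}: ${STATUS}"
  case "${STATUS}" in successful|failed) break ;; esac
  sleep 5
done
curl -s "${DEPLOYMENT_URL}" -H "Authorization: Bearer <access token>" | jq '.details.failure'
```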
diff --git a/content/nim/nginx-app-protect/overview-nap-waf-config-management.md b/content/nim/nginx-app-protect/overview-nap-waf-config-management.md index b0d31319f..92287079d 100644 --- a/content/nim/nginx-app-protect/overview-nap-waf-config-management.md +++ b/content/nim/nginx-app-protect/overview-nap-waf-config-management.md @@ -1,76 +1,68 @@ --- -description: Learn how to use F5 NGINX Instance Manager to set up and manage NGINX App Protect WAF security policies. +description: Learn how you can use F5 NGINX Management Suite Instance Manager to configure + NGINX App Protect WAF security policies. docs: DOCS-992 -title: "How WAF policy management works" +title: NGINX App Protect WAF configuration management toc: true -weight: 100 +weight: 500 type: - reference --- ## Overview -F5 NGINX Instance Manager helps you manage [NGINX App Protect WAF](https://www.nginx.com/products/nginx-app-protect/web-application-firewall/) security configurations. +F5 NGINX Management Suite Instance Manager provides configuration management for [NGINX App Protect WAF](https://www.nginx.com/products/nginx-app-protect/web-application-firewall/). -Use NGINX Instance Manager with NGINX App Protect WAF to inspect incoming traffic, detect threats, and block malicious requests. You can define policies in one place and push them to some or all of your NGINX App Protect WAF instances. +You can use NGINX App Protect WAF with Instance Manager to inspect incoming traffic, identify potential threats, and block malicious traffic. With Configuration Management for App Protect WAF, you can configure WAF security policies in a single location and push your configurations out to one, some, or all of your NGINX App Protect WAF instances. -### Key features +### Features -- Manage WAF policies using the NGINX Instance Manager web interface or REST API -- Update attack signature and threat campaign packages -- Compile WAF configurations into a binary bundle for deployment +- Manage NGINX App Protect WAF security configurations by using the NGINX Management Suite user interface or REST API +- Update Attack Signatures and Threat Campaign packages +- Compile security configurations into a binary bundle for consumption by NGINX App Protect WAF instances ## Architecture -NGINX Instance Manager lets you define and manage security policies, upload signature packages, and push configurations to your NGINX App Protect WAF instances. It can also compile your security configuration into a bundle before publishing it to the data plane. +As demonstrated in Figure 1, Instance Manager lets you manage security configurations for NGINX App Protect WAF. You can define security policies, upload attack signatures and threat campaign packages, and publish common configurations out to your NGINX App Protect WAF instances. Instance Manager can compile the security configuration into a bundle before pushing the configuration to the NGINX App Protect WAF data plane instances. The NGINX Management Suite Security Monitoring module provides data visualization for NGINX App Protect, so you can monitor, analyze, and refine your policies. -The **Security Monitoring** module shows real-time data from NGINX App Protect WAF so you can track traffic, spot anomalies, and fine-tune policies. +{{< img src="nim/app-sec-overview.png" caption="Figure 1. 
NGINX Management Suite with NGINX App Protect Architecture Overview" alt="A diagram showing the architecture of the NGINX Management Suite with NGINX App Protect solution" width="75%">}} -{{< img src="nim/app-sec-overview.png" caption="Figure 1. NGINX Instance Manager with NGINX App Protect architecture overview" alt="Architecture diagram showing NGINX Instance Manager and Security Monitoring in the control plane pushing security bundles to NGINX App Protect WAF instances in the data plane" >}} +### Security Bundle Compilation {#security-bundle} -### Security bundle compilation {#security-bundle} +Instance Manager provides a compiler that can be configured to bundle the complete security configuration -- including JSON security policies, attack signatures, threat campaigns, and log profiles -- into a single binary in `.tgz` format. This bundle is then pushed out to each selected NGINX App Protect WAF instance. -NGINX Instance Manager includes a compiler that packages your complete WAF configuration — security policies, attack signatures, threat campaigns, and log profiles — into a single `.tgz` file. It then pushes this bundle to the selected NGINX App Protect WAF instances. +Performing the security bundle compilation on Instance Manager (precompiled publication) instead of on the NGINX App Protect WAF instances provides the following benefits: -**Why precompile with NGINX Instance Manager?** +- Eliminates the need to provision system resources on NGINX App Protect WAF instances to perform compilation. +- The bundles produced by Instance Manager can be reused by multiple NGINX App Protect WAF instances, instead of each instance having to perform the compilation separately. -- Saves system resources on WAF instances -- Lets you reuse the same bundle across multiple instances +However, if you prefer to maintain policy compilation on the NGINX App Protect WAF instance, that is supported with the following limitation: -If you choose to compile policies on the WAF instance instead, that works too—but with this limitation: +- Instance Manager does not publish JSON policies to the NGINX App Protect WAF instance. JSON policies referenced in an NGINX configuration must already exist on the NGINX App Protect WAF instance. -- NGINX Instance Manager won’t publish `.json` policies to the WAF instance. These policies must already exist on the instance and be referenced in the NGINX config. +The example [`location`](https://nginx.org/en/docs/http/ngx_http_core_module.html#location) context below enables NGINX App Protect WAF and tells NGINX where to find the compiled security bundle: -Example [`location`](https://nginx.org/en/docs/http/ngx_http_core_module.html#location) block to enable WAF and point to the bundle: +## Log Profile Compilation -```nginx -location / { - app_protect_enable on; - app_protect_policy_file /etc/app_protect/policies/policy_bundle.tgz; -} -``` - -## Log profile compilation - -You can also configure NGINX Instance Manager to compile log profiles when you install a new version of the compiler. When publishing NGINX configs that include the [`app_protect_security_log`](https://docs.nginx.com/nginx-app-protect/logging-overview/security-log/#app_protect_security_log) directive, NGINX Instance Manager pushes the compiled log profile to your WAF instances (when precompiled publication is turned on). +Instance Manager can also be configured to compile log profiles when you install a new version of the WAF compiler. 
When you publish an NGINX configuration with the NGINX App Protect [`app_protect_security_log`](https://docs.nginx.com/nginx-app-protect/logging-overview/security-log/#app_protect_security_log) directive, Instance Manager publishes the compiled log profiles to the NGINX App Protect WAF instances when precompiled publication is enabled. {{}} -NGINX Instance Manager and Security Monitoring both use log profiles, but their configurations are different. If you're using configuration management in NGINX Instance Manager, you must reference the log profile with the `.tgz` file extension, not `.json`. +Instance Manager and Security Monitoring both use NGINX App Protect log profiles. The configuration requirements for each are different. When using Instance Manager configuration management, you must reference the log profile in your NGINX configuration using the `.tgz` file extension instead of `.json`. {{}} -## Security management APIs +## Security Management APIs -Use the NGINX Instance Manager REST API to automate updates across your NGINX App Protect WAF instances. You can use the API to manage: +By using the Instance Manager REST API, you can automate configuration updates to be pushed out to all of your NGINX App Protect WAF instances. You can use the Instance Manager API to manage and deploy the following security configurations: -- Security policies -- Log profiles -- Attack signatures -- Threat campaigns +- security policies, +- log profiles, +- attack signatures, and +- threat campaigns. -Just like with the web interface, the compiler creates a binary bundle with your updates that you can push to your WAF instances. +Just as with changes made via the user interface, the Instance Manager compiler bundles all of the config updates into a single binary package that you can push out to your instances. Figure 2 shows an overview of the API endpoints available to support security policy configuration and publishing. -{{< img src="nim/app-sec-api-overview.png" caption="Figure 2. NGINX Instance Manager with NGINX App Protect WAF architecture overview" alt="Diagram showing how the NGINX Instance Manager REST API is used to create policies, upload signatures and campaigns, and publish compiled security bundles to NGINX App Protect WAF instances">}} +{{< img src="nim/app-sec-api-overview.png" caption="Figure 2. NGINX Management Suite with NGINX App Protect WAF Architecture Overview" alt="A diagram showing the architecture of the NGINX Management Suite with NGINX App Protect solution">}} -For full details, see the API documentation: +More information is available in the Instance Manager API documentation. {{< include "nim/how-to-access-api-docs.md" >}} diff --git a/content/nim/nginx-app-protect/waf-config-management.md b/content/nim/nginx-app-protect/waf-config-management.md new file mode 100644 index 000000000..5e76684d4 --- /dev/null +++ b/content/nim/nginx-app-protect/waf-config-management.md @@ -0,0 +1,30 @@ +--- +description: Learn how to use NGINX Management Suite Instance Manager to publish NGINX + App Protect WAF configurations. +docs: DOCS-1114 +title: WAF Configuration Management +toc: true +weight: 100 +--- + +## Overview + +You can use NGINX Management Suite Instance Manager to publish configurations to your NGINX App Protect WAF data plane instances. + +## Publish WAF Configurations + +1. 
Set up your NGINX Management Suite Instance Manager instance: + + - [Install the WAF Compiler]({{< ref "/nim/nginx-app-protect/setup-waf-config-management#install-the-waf-compiler" >}}) + + - [Set up the Attack Signatures and Threat Campaigns]({{< ref "/nim/nginx-app-protect/setup-waf-config-management#set-up-attack-signatures-and-threat-campaigns" >}}) + +2. In Instance Manager, [onboard the App Protect Instances]({{< ref "/nim/nginx-app-protect/setup-waf-config-management#onboard-nginx-app-protect-waf-instances" >}}) you want to publish policies and log profiles to. + +3. [Create the security policies]({{< ref "/nim/nginx-app-protect/manage-waf-security-policies#create-security-policy" >}}). + +4. [Create the security log profiles]({{< ref "/nim/nginx-app-protect/manage-waf-security-policies#create-security-log-profile" >}}). + +5. [Add or edit a WAF Configuration]({{< ref "/nim/nginx-app-protect/setup-waf-config-management#add-waf-config" >}}) to your NGINX Instances, and publish using Instance Manager. + + {{}}Map the App Protect directives on NGINX configuration to `.tgz` file extensions (not `.json`).{{< /note >}} From 80892b23274d9427cdc27f63c1145af44423d396 Mon Sep 17 00:00:00 2001 From: Mike Jang <3287976+mjang@users.noreply.github.com> Date: Fri, 25 Apr 2025 10:31:10 -0700 Subject: [PATCH 09/24] Less --- .../waf-config-management.md | 30 ------------------- 1 file changed, 30 deletions(-) delete mode 100644 content/nim/nginx-app-protect/waf-config-management.md diff --git a/content/nim/nginx-app-protect/waf-config-management.md b/content/nim/nginx-app-protect/waf-config-management.md deleted file mode 100644 index 5e76684d4..000000000 --- a/content/nim/nginx-app-protect/waf-config-management.md +++ /dev/null @@ -1,30 +0,0 @@ ---- -description: Learn how to use NGINX Management Suite Instance Manager to publish NGINX - App Protect WAF configurations. -docs: DOCS-1114 -title: WAF Configuration Management -toc: true -weight: 100 ---- - -## Overview - -You can use NGINX Management Suite Instance Manager to publish configurations to your NGINX App Protect WAF data plane instances. - -## Publish WAF Configurations - -1. Set up your NGINX Management Suite Instance Manager instance: - - - [Install the WAF Compiler]({{< ref "/nim/nginx-app-protect/setup-waf-config-management#install-the-waf-compiler" >}}) - - - [Set up the Attack Signatures and Threat Campaigns]({{< ref "/nim/nginx-app-protect/setup-waf-config-management#set-up-attack-signatures-and-threat-campaigns" >}}) - -2. In Instance Manager, [onboard the App Protect Instances]({{< ref "/nim/nginx-app-protect/setup-waf-config-management#onboard-nginx-app-protect-waf-instances" >}}) you want to publish policies and log profiles to. - -3. [Create the security policies]({{< ref "/nim/nginx-app-protect/manage-waf-security-policies#create-security-policy" >}}). - -4. [Create the security log profiles]({{< ref "/nim/nginx-app-protect/manage-waf-security-policies#create-security-log-profile" >}}). - -5. [Add or edit a WAF Configuration]({{< ref "/nim/nginx-app-protect/setup-waf-config-management#add-waf-config" >}}) to your NGINX Instances, and publish using Instance Manager. 
- - {{}}Map the App Protect directives on NGINX configuration to `.tgz` file extensions (not `.json`).{{< /note >}} From 56e573a41bf23d7055b0267d8333011229076a5c Mon Sep 17 00:00:00 2001 From: Mike Jang <3287976+mjang@users.noreply.github.com> Date: Fri, 25 Apr 2025 10:45:08 -0700 Subject: [PATCH 10/24] More --- content/nginx-one/config-sync-groups/_index.md | 2 +- content/nginx-one/nginx-configs/_index.md | 4 ++++ layouts/partials/list-main.html | 16 ++++++++++++++++ 3 files changed, 21 insertions(+), 1 deletion(-) diff --git a/content/nginx-one/config-sync-groups/_index.md b/content/nginx-one/config-sync-groups/_index.md index db1ee5560..eaefeaea3 100644 --- a/content/nginx-one/config-sync-groups/_index.md +++ b/content/nginx-one/config-sync-groups/_index.md @@ -1,6 +1,6 @@ --- description: -title: Organize in groups +title: Change multiple instances with one push weight: 400 url: /nginx-one/config-sync-groups --- diff --git a/content/nginx-one/nginx-configs/_index.md b/content/nginx-one/nginx-configs/_index.md index 2e6a37c0e..18ece7e11 100644 --- a/content/nginx-one/nginx-configs/_index.md +++ b/content/nginx-one/nginx-configs/_index.md @@ -1,10 +1,14 @@ --- description: <<<<<<< HEAD +<<<<<<< HEAD title: Manage your NGINX instances ======= title: Organize your NGINX instances >>>>>>> c7ce27ce (Draft: new N1C doc homepage) +======= +title: Watch your NGINX instances +>>>>>>> 09d8a53f (More) weight: 300 url: /nginx-one/nginx-configs --- diff --git a/layouts/partials/list-main.html b/layouts/partials/list-main.html index be455babf..1416ecf7a 100644 --- a/layouts/partials/list-main.html +++ b/layouts/partials/list-main.html @@ -40,17 +40,33 @@

{{ if and (eq $PageTitle "F5 NGINX One Console") (eq .Title "Draft new configurations")}}

Work with Staged Configurations

{{ end }} +<<<<<<< HEAD {{ if and (eq $PageTitle "F5 NGINX One Console") (eq .Title "Manage your NGINX instances")}}

Monitor and maintain your deployments

+======= + {{ if and (eq $PageTitle "F5 NGINX One Console") (eq .Title "Watch your NGINX instances")}} +

Keep an inventory of your deployments

+>>>>>>> 09d8a53f (More) {{ end }} {{ if and (eq $PageTitle "F5 NGINX One Console") (eq .Title "Organize users with RBAC")}}

Assign responsibilities with role-based access control

{{ end }} +<<<<<<< HEAD {{ if and (eq $PageTitle "F5 NGINX One Console") (eq .Title "Automate with the NGINX One API")}}

Manage your NGINX fleet over REST

+======= + {{ if and (eq $PageTitle "F5 NGINX One Console") (eq .Title "Organize your administrators with RBAC")}} +

Secure your systems with role-based access control

+ {{ end }} + {{ if and (eq $PageTitle "F5 NGINX One Console") (eq .Title "Change multiple instances with one push")}} +

Configure and synchronize groups of NGINX instances simultaneously

+ {{ end }} + {{ if and (eq $PageTitle "F5 NGINX One Console") (eq .Title "NGINX One API")}} +

Automate NGINX fleet management

+>>>>>>> 09d8a53f (More) {{ end }} {{ if and (eq $PageTitle "F5 NGINX One Console") (eq .Title "Glossary")}}

Learn terms unique to NGINX One Console

From 39acb8d04c46a6bb453cb4b05c9b722f347b86c5 Mon Sep 17 00:00:00 2001 From: Mike Jang <3287976+mjang@users.noreply.github.com> Date: Fri, 25 Apr 2025 11:11:59 -0700 Subject: [PATCH 11/24] update --- .../agent/installation/install-agent-api.md | 83 +- .../includes/nap-waf/build-nginx-image-cmd.md | 2 +- .../learn-about-deployment.md | 6 +- content/nap-waf/v4/admin-guide/install.md | 4 +- content/nap-waf/v5/admin-guide/compiler.md | 2 +- content/ngf/overview/custom-policies.md | 9 +- content/ngf/overview/product-telemetry.md | 3 +- .../load-balancer/tcp-udp-load-balancer.md | 40 +- .../load-balancer/udp-health-check.md | 160 +++- .../manage-waf-security-policies.md | 763 ++++++------------ .../overview-nap-waf-config-management.md | 72 +- 11 files changed, 519 insertions(+), 625 deletions(-) diff --git a/content/includes/agent/installation/install-agent-api.md b/content/includes/agent/installation/install-agent-api.md index 95a9650aa..15009d21f 100644 --- a/content/includes/agent/installation/install-agent-api.md +++ b/content/includes/agent/installation/install-agent-api.md @@ -1,74 +1,75 @@ -**Note**: To complete this step, make sure that `gpg` is installed on your system. You can install NGINX Agent using various command-line tools like `curl` or `wget`. If your NGINX Instance Manager host is not set up with valid TLS certificates, you can use the insecure flags provided by those tools. See the following examples: +--- +docs: DOCS-1031 +files: + - content/nim/nginx-app-protect/setup-waf-config-management.md +--- + +{{}}Make sure `gpg` is installed on your system before continuing. You can install NGINX Agent using command-line tools like `curl` or `wget`.{{}} + +If your NGINX Instance Manager host doesn't use valid TLS certificates, you can use the insecure flags to bypass verification. Here are some example commands: {{}} {{%tab name="curl"%}} -- Secure: +- **Secure:** ```bash - curl https:///install/nginx-agent | sudo sh + curl https:///install/nginx-agent | sudo sh ``` -- Insecure: +- **Insecure:** ```bash - curl --insecure https:///install/nginx-agent | sudo sh + curl --insecure https:///install/nginx-agent | sudo sh ``` - You can add your NGINX instance to an existing instance group or create one using `--instance-group` or `-g` flag when installing NGINX Agent. - - The following example shows how to download and run the script with the optional `--instance-group` flag adding the NGINX instance to the instance group **my-instance-group**: - - ```bash - curl https:///install/nginx-agent > install.sh; chmod u+x install.sh - sudo ./install.sh --instance-group my-instance-group - ``` +To add the instance to a specific instance group during installation, use the `--instance-group` (or `-g`) flag: - By default, the install script attempts to use a secure connection when downloading packages. If, however, the script cannot create a secure connection, it uses an insecure connection instead and logs the following warning message: +```shell +curl https:///install/nginx-agent -o install.sh +chmod u+x install.sh +sudo ./install.sh --instance-group +``` - ``` text - Warning: An insecure connection will be used during this nginx-agent installation - ``` +By default, the install script uses a secure connection to download packages. If it can’t establish one, it falls back to an insecure connection and logs this message: - To require a secure connection, you can set the optional flag `skip-verify` to `false`. 
+```text +Warning: An insecure connection will be used during this nginx-agent installation +``` - The following example shows how to download and run the script with an enforced secure connection: +To enforce a secure connection, set the `--skip-verify` flag to false: - ```bash - curl https:///install/nginx-agent > install.sh chmod u+x install.sh; chmod u+x install.sh - sudo sh ./install.sh --skip-verify false - ``` +```shell +curl https:///install/nginx-agent -o install.sh +chmod u+x install.sh +sudo ./install.sh --skip-verify false +``` {{%/tab%}} {{%tab name="wget"%}} +- **Secure:** -- Secure: - - ```bash - wget https:///install/nginx-agent -O - | sudo sh -s --skip-verify false + ```shell + wget https:///install/nginx-agent -O - | sudo sh -s --skip-verify false ``` -- Insecure: +- **Insecure:** - ```bash - wget --no-check-certificate https:///install/nginx-agent -O - | sudo sh + ```shell + wget --no-check-certificate https:///install/nginx-agent -O - | sudo sh ``` - When you install the NGINX Agent, you can use the `--instance-group` or `-g` flag to add your NGINX instance to an existing instance group or to a new group that you specify. - - The following example downloads and runs the NGINX Agent install script with the optional `--instance-group` flag, adding the NGINX instance to the instance group **my-instance-group**: - - ```bash - wget https://gnms1.npi.f5net.com/install/nginx-agent -O install.sh ; chmod u+x install.sh - sudo ./install.sh --instance-group my-instance-group - ``` +To add your instance to a group during installation, use the `--instance-group` (or `-g`) flag: +```shell +wget https:///install/nginx-agent -O install.sh +chmod u+x install.sh +sudo ./install.sh --instance-group +``` {{%/tab%}} -{{}} - - +{{}} diff --git a/content/includes/nap-waf/build-nginx-image-cmd.md b/content/includes/nap-waf/build-nginx-image-cmd.md index 41bf90d03..fcb89d363 100644 --- a/content/includes/nap-waf/build-nginx-image-cmd.md +++ b/content/includes/nap-waf/build-nginx-image-cmd.md @@ -10,7 +10,7 @@ To build the image, execute the following command in the directory containing th ```shell -sudo docker build --no-cache \ +sudo docker build --no-cache --platform linux/amd64 \ --secret id=nginx-crt,src=nginx-repo.crt \ --secret id=nginx-key,src=nginx-repo.key \ -t nginx-app-protect-5 . diff --git a/content/nap-dos/deployment-guide/learn-about-deployment.md b/content/nap-dos/deployment-guide/learn-about-deployment.md index df137bd2e..430fd9e2e 100644 --- a/content/nap-dos/deployment-guide/learn-about-deployment.md +++ b/content/nap-dos/deployment-guide/learn-about-deployment.md @@ -1405,7 +1405,7 @@ You need root permissions to execute the following steps. 6. Create a Docker image: ```shell - docker build --no-cache -t app-protect-dos . + docker build --no-cache --platform linux/amd64 -t app-protect-dos . ``` The `--no-cache` option tells Docker to build the image from scratch and ensures the installation of the latest version of NGINX Plus and NGINX App Protect DoS. If the Dockerfile was previously used to build an image without the `--no-cache` option, the new image uses versions from the previously built image from the Docker cache. @@ -1966,13 +1966,13 @@ Make sure to replace upstream and proxy pass directives in this example with rel For CentOS: ```shell - docker build --no-cache -t app-protect-dos . + docker build --no-cache --platform linux/amd64 -t app-protect-dos . 
``` For RHEL: ```shell - docker build --build-arg RHEL_ORGANIZATION=${RHEL_ORGANIZATION} --build-arg RHEL_ACTIVATION_KEY=${RHEL_ACTIVATION_KEY} --no-cache -t app-protect-dos . + docker build --platform linux/amd64 --build-arg RHEL_ORGANIZATION=${RHEL_ORGANIZATION} --build-arg RHEL_ACTIVATION_KEY=${RHEL_ACTIVATION_KEY} --no-cache -t app-protect-dos . ``` The `--no-cache` option tells Docker to build the image from scratch and ensures the installation of the latest version of NGINX Plus and NGINX App Protect DoS. If the Dockerfile was previously used to build an image without the `--no-cache` option, the new image uses versions from the previously built image from the Docker cache. diff --git a/content/nap-waf/v4/admin-guide/install.md b/content/nap-waf/v4/admin-guide/install.md index 3158ac9d3..c3e0575dc 100644 --- a/content/nap-waf/v4/admin-guide/install.md +++ b/content/nap-waf/v4/admin-guide/install.md @@ -939,7 +939,7 @@ If a user other than **nginx** is to be used, note the following: - For Oracle Linux/Debian/Ubuntu/Alpine/Amazon Linux: ```shell - DOCKER_BUILDKIT=1 docker build --no-cache --secret id=nginx-crt,src=nginx-repo.crt --secret id=nginx-key,src=nginx-repo.key -t app-protect . + DOCKER_BUILDKIT=1 docker build --no-cache --platform linux/amd64 --secret id=nginx-crt,src=nginx-repo.crt --secret id=nginx-key,src=nginx-repo.key -t app-protect . ``` The `DOCKER_BUILDKIT=1` enables `docker build` to recognize the `--secret` flag which allows the user to pass secret information to be used in the Dockerfile for building docker images in a safe way that will not end up stored in the final image. This is a recommended practice for the handling of the certificate and private key for NGINX repository access (`nginx-repo.crt` and `nginx-repo.key` files). More information [here](https://docs.docker.com/engine/reference/commandline/buildx_build/#secret). @@ -1289,7 +1289,7 @@ You need root permissions to execute the following steps. - For Oracle Linux/Debian/Ubuntu/Alpine/Amazon Linux: ```shell - DOCKER_BUILDKIT=1 docker build --no-cache --secret id=nginx-crt,src=nginx-repo.crt --secret id=nginx-key,src=nginx-repo.key -t app-protect-converter . + DOCKER_BUILDKIT=1 docker build --no-cache --platform linux/amd64 --secret id=nginx-crt,src=nginx-repo.crt --secret id=nginx-key,src=nginx-repo.key -t app-protect-converter . ``` The `DOCKER_BUILDKIT=1` enables `docker build` to recognize the `--secret` flag which allows the user to pass secret information to be used in the Dockerfile for building docker images in a safe way that will not end up stored in the final image. This is a recommended practice for the handling of the certificate and private key for NGINX repository access (`nginx-repo.crt` and `nginx-repo.key` files). More information [here](https://docs.docker.com/engine/reference/commandline/buildx_build/#secret). diff --git a/content/nap-waf/v5/admin-guide/compiler.md b/content/nap-waf/v5/admin-guide/compiler.md index dd0e828e4..ea0f28500 100644 --- a/content/nap-waf/v5/admin-guide/compiler.md +++ b/content/nap-waf/v5/admin-guide/compiler.md @@ -98,7 +98,7 @@ curl -s https://private-registry.nginx.com/v2/nap/waf-compiler/tags/list --key < Run the command below to build your image, where `waf-compiler-:custom` is an example of the image tag: ```shell - sudo docker build --no-cache \ + sudo docker build --no-cache --platform linux/amd64 \ --secret id=nginx-crt,src=nginx-repo.crt \ --secret id=nginx-key,src=nginx-repo.key \ -t waf-compiler-:custom . 
diff --git a/content/ngf/overview/custom-policies.md b/content/ngf/overview/custom-policies.md index c7e5a785d..5aeb99fce 100644 --- a/content/ngf/overview/custom-policies.md +++ b/content/ngf/overview/custom-policies.md @@ -17,10 +17,11 @@ The following table summarizes NGINX Gateway Fabric custom policies: {{< bootstrap-table "table table-striped table-bordered" >}} -| Policy | Description | Attachment Type | Supported Target Object(s) | Supports Multiple Target Refs | Mergeable | API Version | -|---------------------------------------------------------------------------------------|---------------------------------------------------------|-----------------|-------------------------------|-------------------------------|-----------|-------------| -| [ClientSettingsPolicy]({{< ref "/ngf/how-to/traffic-management/client-settings.md" >}}) | Configure connection behavior between client and NGINX | Inherited | Gateway, HTTPRoute, GRPCRoute | No | Yes | v1alpha1 | -| [ObservabilityPolicy]({{< ref "/ngf/how-to/monitoring/tracing.md" >}}) | Define settings related to tracing, metrics, or logging | Direct | HTTPRoute, GRPCRoute | Yes | No | v1alpha1 | +| Policy | Description | Attachment Type | Supported Target Object(s) | Supports Multiple Target Refs | Mergeable | API Version | +|---------------------------------------------------------------------------------------------|---------------------------------------------------------|-----------------|-------------------------------|-------------------------------|-----------|-------------| +| [ClientSettingsPolicy]({{< ref "/ngf/how-to/traffic-management/client-settings.md" >}}) | Configure connection behavior between client and NGINX | Inherited | Gateway, HTTPRoute, GRPCRoute | No | Yes | v1alpha1 | +| [ObservabilityPolicy]({{< ref "/ngf/how-to/monitoring/tracing.md" >}}) | Define settings related to tracing, metrics, or logging | Direct | HTTPRoute, GRPCRoute | Yes | No | v1alpha2 | +| [UpstreamSettingsPolicy]({{< ref "/ngf/how-to/traffic-management/upstream-settings.md" >}}) | Configure connection behavior between NGINX and backend | Direct | Service | Yes | Yes | v1alpha1 | {{< /bootstrap-table >}} diff --git a/content/ngf/overview/product-telemetry.md b/content/ngf/overview/product-telemetry.md index cd9f7a20f..3c73a4cb5 100644 --- a/content/ngf/overview/product-telemetry.md +++ b/content/ngf/overview/product-telemetry.md @@ -32,7 +32,8 @@ Telemetry data is collected once every 24 hours and sent to a service managed by - **Image Build Source:** whether the image was built by GitHub or locally (values are `gha`, `local`, or `unknown`). The source repository of the images is **not** collected. - **Deployment Flags:** a list of NGINX Gateway Fabric Deployment flags that are specified by a user. The actual values of non-boolean flags are **not** collected; we only record that they are either `true` or `false` for boolean flags and `default` or `user-defined` for the rest. - **Count of Resources:** the total count of resources related to NGINX Gateway Fabric. This includes `GatewayClasses`, `Gateways`, `HTTPRoutes`,`GRPCRoutes`, `TLSRoutes`, `Secrets`, `Services`, `BackendTLSPolicies`, `ClientSettingsPolicies`, `NginxProxies`, `ObservabilityPolicies`, `UpstreamSettingsPolicies`, `SnippetsFilters`, and `Endpoints`. The data within these resources is **not** collected. -- **SnippetsFilters Info**a list of directive-context strings from applied SnippetFilters and a total count per strings. The actual value of any NGINX directive is **not** collected. 
+- **SnippetsFilters Info:** a list of directive-context strings from applied SnippetFilters and a total count per strings. The actual value of any NGINX directive is **not** collected. + This data is used to identify the following information: - The flavors of Kubernetes environments that are most popular among our users. diff --git a/content/nginx/admin-guide/load-balancer/tcp-udp-load-balancer.md b/content/nginx/admin-guide/load-balancer/tcp-udp-load-balancer.md index a7a6a7f61..bf40c20be 100644 --- a/content/nginx/admin-guide/load-balancer/tcp-udp-load-balancer.md +++ b/content/nginx/admin-guide/load-balancer/tcp-udp-load-balancer.md @@ -9,10 +9,9 @@ type: - how-to --- - -## Introduction +## Introduction {#intro} -[Load balancing](https://www.nginx.com/solutions/load-balancing/) refers to efficiently distributing network traffic across multiple backend servers. +[Load balancing](https://www.f5.com/glossary/load-balancer) refers to efficiently distributing network traffic across multiple backend servers. In F5 NGINX Plus [R5]({{< ref "nginx/releases.md#r5" >}}) and later, NGINX Plus can proxy and load balance Transmission Control Protocol) (TCP) traffic. TCP is the protocol for many popular applications and services, such as LDAP, MySQL, and RTMP. @@ -20,15 +19,13 @@ In NGINX Plus [R9]({{< ref "nginx/releases.md#r9" >}}) and later, NGINX Plus can To load balance HTTP traffic, refer to the [HTTP Load Balancing]({{< ref "http-load-balancer.md" >}}) article. - ## Prerequisites - Latest NGINX Plus (no extra build steps required) or latest [NGINX Open Source](https://nginx.org/en/download.html) built with the `--with-stream` configuration flag - An application, database, or service that communicates over TCP or UDP - Upstream servers, each running the same instance of the application, database, or service - -## Configuring Reverse Proxy +## Configuring reverse proxy {#proxy_pass} First, you will need to configure _reverse proxy_ so that NGINX Plus or NGINX Open Source can forward TCP connections or UDP datagrams from clients to an upstream group or a proxied server. @@ -118,8 +115,7 @@ Open the NGINX configuration file and perform the following steps: } ``` - -## Configuring TCP or UDP Load Balancing +## Configuring TCP or UDP load balancing {#upstream} To configure load balancing: @@ -250,8 +246,7 @@ stream { } ``` - -## Configuring Health Checks +## Configuring health checks {#health} NGINX can continually test your TCP or UDP upstream servers, avoid the servers that have failed, and gracefully add the recovered servers into the load‑balanced group. @@ -259,8 +254,7 @@ See [TCP Health Checks]({{< ref "nginx/admin-guide/load-balancer/tcp-health-chec See [UDP Health Checks]({{< ref "nginx/admin-guide/load-balancer/udp-health-check.md" >}}) for instructions how to configure health checks for UDP. - -## On-the-Fly Configuration +## On-the-fly configuration Upstream server groups can be easily reconfigured on-the-fly using NGINX Plus REST API. Using this interface, you can view all servers in an upstream group or a particular server, modify server parameters, and add or remove upstream servers. 
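For example, once the API is enabled as described in the steps that follow, listing the current members of an upstream group is a single read-only request. The command below is a sketch that assumes the API is enabled locally at `/api` and uses the `appservers` group from this guide:

```shell
# Sketch: read-only check of the current members of the "appservers" upstream group.
# Assumes the NGINX Plus API is enabled at /api on the local instance, as shown
# in the configuration steps and example that follow.
curl -s 'http://127.0.0.1/api/9/stream/upstreams/appservers/servers'
```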
@@ -355,8 +349,7 @@ To enable on-the-fly configuration: } ``` - -### On-the-Fly Configuration Example +### On-the-fly configuration example ```nginx stream { @@ -403,23 +396,22 @@ For example, to add a new server to the server group, send a `POST` request: curl -X POST -d '{ \ "server": "appserv3.example.com:12345", \ "weight": 4 \ - }' -s 'http://127.0.0.1/api/6/stream/upstreams/appservers/servers' + }' -s 'http://127.0.0.1/api/9/stream/upstreams/appservers/servers' ``` To remove a server from the server group, send a `DELETE` request: ```shell -curl -X DELETE -s 'http://127.0.0.1/api/6/stream/upstreams/appservers/servers/0' +curl -X DELETE -s 'http://127.0.0.1/api/9/stream/upstreams/appservers/servers/0' ``` To modify a parameter for a specific server, send a `PATCH` request: ```shell -curl -X PATCH -d '{ "down": true }' -s 'http://127.0.0.1/api/6/http/upstreams/appservers/servers/0' +curl -X PATCH -d '{ "down": true }' -s 'http://127.0.0.1/api/9/http/upstreams/appservers/servers/0' ``` - -## Example of TCP and UDP Load-Balancing Configuration +## Example of TCP and UDP load-balancing configuration {#example} This is a configuration example of TCP and UDP load balancing with NGINX: @@ -471,3 +463,13 @@ The three [`server`](https://nginx.org/en/docs/stream/ngx_stream_upstream_module - The second server listens on port 53 and proxies all UDP datagrams (the `udp` parameter to the `listen` directive) to an upstream group called **dns_servers**. If the `udp` parameter is not specified, the socket listens for TCP connections. - The third virtual server listens on port 12346 and proxies TCP connections to **backend4.example.com**, which can resolve to several IP addresses that are load balanced with the Round Robin method. + +## See also + +- [TCP Health Checks]({{< relref "tcp-health-check.md" >}}) + +- [UDP Health Checks]({{< relref "udp-health-check.md" >}}) + +- [Load Balancing DNS Traffic with NGINX and NGINX Plus](https://www.f5.com/company/blog/nginx/load-balancing-dns-traffic-nginx-plus) + +- [TCP/UDP Load Balancing with NGINX: Overview, Tips, and Tricks](https://blog.nginx.org/blog/tcp-load-balancing-udp-load-balancing-nginx-tips-tricks) diff --git a/content/nginx/admin-guide/load-balancer/udp-health-check.md b/content/nginx/admin-guide/load-balancer/udp-health-check.md index bb4818fd4..7885d032a 100644 --- a/content/nginx/admin-guide/load-balancer/udp-health-check.md +++ b/content/nginx/admin-guide/load-balancer/udp-health-check.md @@ -9,10 +9,11 @@ type: - how-to --- - -## Prerequisites +NGINX Plus can continually test your upstream servers that handle UDP network traffic (DNS, RADIUS, syslog), avoid the servers that have failed, and gracefully add the recovered servers into the load‑balanced group. -- You have configured an upstream group of servers that handles UDP network traffic (DNS, RADIUS, syslog) in the [`stream {}`](https://nginx.org/en/docs/stream/ngx_stream_core_module.html#stream) context, for example: +## Prerequisites {#prereq} + +- You have [configured an upstream group of servers]({{< ref "nginx/admin-guide/load-balancer/tcp-udp-load-balancer.md" >}}) that handles UDP network traffic in the [`stream {}`](https://nginx.org/en/docs/stream/ngx_stream_core_module.html#stream) context, for example: ```nginx stream { @@ -44,8 +45,7 @@ type: See [TCP and UDP Load Balancing]({{< ref "nginx/admin-guide/load-balancer/tcp-udp-load-balancer.md" >}}) for details. 
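After editing the upstream and server configuration, you can optionally validate and reload NGINX before moving on to health checks, for example:

```shell
# Validate the edited configuration, then reload NGINX to apply it.
sudo nginx -t && sudo nginx -s reload
```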
- -## Passive UDP Health Checks +## Passive UDP health checks {#hc_passive} NGINX Open Source or F5 NGINX Plus can mark the server as unavailable and stop sending UDP datagrams to it for some time if the server replies with an error or times out. @@ -62,8 +62,7 @@ upstream dns_upstream { } ``` - -## Active UDP Health Checks +## Active UDP health checks {#hc_active} Active Health Checks allow testing a wider range of failure types and are available only for NGINX Plus. For example, instead of waiting for an actual TCP request from a DNS client to fail before marking the DNS server as down (as in passive health checks), NGINX Plus will send special health check requests to each upstream server and check for a response that satisfies certain conditions. If a connection to the server cannot be established, the health check fails, and the server is considered unhealthy. NGINX Plus does not proxy client connections to unhealthy servers. If more than one health check is defined, the failure of any check is enough to consider the corresponding upstream server unhealthy. @@ -100,8 +99,7 @@ To enable active health checks: A basic UDP health check assumes that NGINX Plus sends the “nginx health check” string to an upstream server and expects the absence of ICMP “Destination Unreachable” message in response. You can configure your own health check tests in the `match {}` block. See [The “match {}” Configuration Block](#hc_active_match) for details. - -### Fine-Tuning UDP Health Checks +### Fine-Tuning UDP Health Checks {#hc_active_finetune} You can fine‑tune the health check by specifying the following parameters to the [`health_check`](https://nginx.org/en/docs/stream/ngx_stream_upstream_hc_module.html#health_check) directive: @@ -119,10 +117,9 @@ server { In the example, the time between UDP health checks is increased to 20 seconds, the server is considered unhealthy after 2 consecutive failed health checks, and the server needs to pass 2 consecutive checks to be considered healthy again. - -### The “match {}” Configuration Block +### The “match {}” configuration block {#hc_active_match} -You can verify server responses to health checks by configuring a number of tests. These tests are defined within the [`match {}`](https://nginx.org/en/docs/stream/ngx_stream_upstream_hc_module.html#match) configuration block. +A basic UDP health check assumes that NGINX Plus sends the “nginx health check” string to an upstream server and expects the absence of ICMP “Destination Unreachable” message in response. You can configure your own health check tests that will verify server responses. These tests are defined within the [`match {}`](https://nginx.org/en/docs/stream/ngx_stream_upstream_hc_module.html#match) configuration block. 1. In the top‑level `stream {}` context, specify the [`match {}`](https://nginx.org/en/docs/stream/ngx_stream_upstream_hc_module.html#match) block and set its name, for example, `udp_test`: @@ -155,8 +152,9 @@ You can verify server responses to health checks by configuring a number of test These parameters can be used in different combinations, but no more than one `send` and one `expect` parameter can be specified at a time. 
- -#### Example Test for NTP +## Usage scenarios + +### NTP health checks {#example_ntp} To fine‑tune health checks for NTP, you should specify both `send` and `expect` parameters with the following text strings: @@ -167,14 +165,138 @@ match ntp { } ``` - -#### Example Test for DNS +#### Complete NTP health check configuration example + +```nginx + +stream { + upstream ntp_upstream { + zone ntp_zone 64k; + server 192.168.136.130:53; + server 192.168.136.131:53; + server 192.168.136.132:53; +} + server { + listen 53 udp; + proxy_pass ntp_upstream; + health_check match=ntp udp; + proxy_timeout 1s; + proxy_responses 1; + error_log logs/ntp.log; + } + + match ntp { + send \xe3\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00; + expect ~* \x24; + } +} +``` + +### DNS health checks {#example_dns} + +[DNS health checks](#hc_active) can be enhanced to perform real DNS lookup queries. You can craft a valid DNS query packet, send it to the upstream server, and inspect the response to determine health. + +The process includes three steps: +- [Creating a CNAME test record](#create-a-cname-record) on your DNS server. +- [Crafting a raw DNS query packet](#construct-a-raw-dns-query-packet) to be sent by NGINX Plus. +- [Validating the expected response](#configure-the-match-block-for-dns-lookup) using the `match` block, where the `send` parameter represents a raw DNS query packet, and `expect` represents the value of the CNAME record. + +#### Create a CNAME record + +First, create a CNAME record on your DNS server for a health check that points to the target website. + +For example, if you are using BIND self-hosted DNS solution on a Linux server: + +- Open the zone file in a text editor: + +```shell +sudo nano /etc/bind/zones/db.example.com +``` +- Add a CNAME record, making `healthcheck.example.com` resolve to `healthy.svcs.example.com`: + +```none +healthcheck IN CNAME healthy.svcs.example.com. +``` + +- Save the file and reload the DNS service: + +```shell +sudo systemctl reload bind9 +``` + +Once the CNAME record is live and resolvable, you can craft a DNS query packet that represents a DNS lookup and can be used in the `send` directive. + +#### Construct a raw DNS query packet + +The `send` parameter of the `match` block allows you to send raw UDP packets for health checks. To query your CNAME record, you need to construct a valid DNS query packet according to the [DNS protocol message format](https://datatracker.ietf.org/doc/html/rfc1035#section-4.1), including a header and question section. -To fine‑tune health checks for DNS, you should also specify both `send` and `expect` parameters with the following text strings: +The DNS Query packet can be created using DNS packet builders, such as Python Scapy or dnslib, or packet analyzers, such as tcpdump or Wireshark. If using a packet analyzer, extract only the DNS layer, removing Ethernet, IP, and UDP-related headers. 
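Once you have an escaped query string (the full string used in this guide appears next), you can optionally sanity-check it from the command line before adding it to the `match` block. This is only a sketch; it assumes a shell whose `printf` supports `\x` escapes (such as bash), a netcat variant that supports UDP, and `xxd` for the hex dump:

```shell
# Sketch: send the raw DNS query (the escaped string shown below) to one of the
# upstream DNS servers and inspect the reply. Netcat flags vary between variants;
# -u sends UDP and -w1 waits one second for a response.
printf '\x00\x01\x01\x00\x00\x01\x00\x00\x00\x00\x00\x00\x0b\x68\x65\x61\x6c\x74\x68\x63\x68\x65\x63\x6b\x07\x65\x78\x61\x6d\x70\x6c\x65\x03\x63\x6f\x6d\x00\x00\x01\x00\x01' \
  | nc -u -w1 192.168.136.130 53 | xxd
```

A reply that contains the CNAME target labels (for example `healthy` and `svcs`) indicates the query is well formed and the record is resolvable.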
+ +This is the raw DNS query of `healthcheck.example.com`, represented as one line in Hex with `\x` prefixes: + +```none +\x00\x01\x01\x00\x00\x01\x00\x00\x00\x00\x00\x00\x0b\x68\x65\x61\x6c\x74\x68\x63\x68\x65\x63\x6b\x07\x65\x78\x61\x6d\x70\x6c\x65\x03\x63\x6f\x6d\x00\x00\x01\x00\x01 +``` +where: + +{{}} +| HEX | Description | +|------------------|------------------------| +| \x00\x01 | Transaction ID: 0x0001 | +| \x01\x00 | Flags: Standard query, recursion desired | +| \x00\x01 | Questions: 1 | +| \x00\x00 | Answer RRs: 0 | +| \x00\x00 | Authority RRs: 0 | +| \x00\x00 | Additional RRs: 0 | +| \x0b\x68\x65\x61\x6c\x74\x68\x63\x68\x65\x63\x6b | "healthcheck" | +| \x07\x65\x78\x61\x6d\x70\x6c\x65 | "example" | +| \x03\x63\x6f\x6d | "com" | +| \x00 | end of name | +| \x00\x01 | Type: A | +| \x00\x01 | Class: IN | +{{}} + +#### Configure the match block for DNS lookup + +Finally, specify the `match` block in the NGINX configuration file to pair the raw query with an expected response. The `send` directive should contain the DNS query packet, while `expect` directive should contain a matching DNS record in the DNS server's response: ```nginx match dns { - send \x00\x2a\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x03\x73\x74\x6c\x04\x75\x6d\x73\x6c\x03\x65\x64\x75\x00\x00\x01\x00\x01; - expect ~* "health.is.good"; + send \x00\x01\x01\x00\x00\x01\x00\x00\x00\x00\x00\x00\x0b\x68\x65\x61\x6c\x74\x68\x63\x68\x65\x63\x6b\x07\x65\x78\x61\x6d\x70\x6c\x65\x03\x63\x6f\x6d\x00\x00\x01\x00\x01; + expect ~* "healthy.svcs.example.com"; +} +``` + +#### Complete DNS health check configuration example + +```nginx + +stream { + upstream dns_upstream { + zone dns_zone 64k; + server 192.168.136.130:53; + server 192.168.136.131:53; + server 192.168.136.132:53; +} + server { + listen 53 udp; + proxy_pass dns_upstream; + health_check match=dns udp; + proxy_timeout 1s; + proxy_responses 1; + error_log logs/dns.log; + } + + match dns { + # make sure appropriate CNAME record exists + send \x00\x01\x01\x00\x00\x01\x00\x00\x00\x00\x00\x00\x0b\x68\x65\x61\x6c\x74\x68\x63\x68\x65\x63\x6b\x07\x65\x78\x61\x6d\x70\x6c\x65\x03\x63\x6f\x6d\x00\x00\x01\x00\x01; + expect ~* "healthy.svcs.example.com"; + } } ``` + +## See also + +- [Load Balancing DNS Traffic with NGINX and NGINX Plus](https://www.f5.com/company/blog/nginx/load-balancing-dns-traffic-nginx-plus) + +- [TCP/UDP Load Balancing with NGINX: Overview, Tips, and Tricks](https://blog.nginx.org/blog/tcp-load-balancing-udp-load-balancing-nginx-tips-tricks#activeHealthCheck) diff --git a/content/nim/nginx-app-protect/manage-waf-security-policies.md b/content/nim/nginx-app-protect/manage-waf-security-policies.md index 71684b133..5c0e5ebf3 100644 --- a/content/nim/nginx-app-protect/manage-waf-security-policies.md +++ b/content/nim/nginx-app-protect/manage-waf-security-policies.md @@ -1,8 +1,7 @@ --- -title: Manage WAF Security Policies and Security Log Profiles -description: Learn how to use F5 NGINX Management Suite Instance Manager to manage NGINX - App Protect WAF security policies and security log profiles. -weight: 200 +title: Manage and deploy WAF policies and log profiles +description: Learn how to use F5 NGINX Instance Manager to manage NGINX App Protect WAF security policies and security log profiles. +weight: 300 toc: true type: how-to product: NIM @@ -11,83 +10,76 @@ docs: DOCS-1105 ## Overview -F5 NGINX Management Suite Instance Manager provides the ability to manage the configuration of NGINX App Protect WAF instances either by the user interface or the REST API. 
This includes editing, updating, and deploying security policies, log profiles, attack signatures, and threat campaigns to individual instances and/or instance groups. +F5 NGINX Instance Manager lets you manage NGINX App Protect WAF configurations using either the web interface or REST API. You can edit, update, and deploy security policies, log profiles, attack signatures, and threat campaigns to individual instances or instance groups. -In Instance Manager v2.14.0 and later, you can compile a security policy, attack signatures, and threat campaigns into a security policy bundle. A security policy bundle consists of the security policy, the attack signatures, and threat campaigns for a particular version of NGINX App Protect WAF, and additional supporting files that make it possible for NGINX App Protect WAF to use the bundle. Because the security policy bundle is pre-compiled, the configuration gets applied faster than when you individually reference the security policy, attack signature, and threat campaign files. +You can compile a security policy, attack signatures, and threat campaigns into a security policy bundle. The bundle includes all necessary components for a specific NGINX App Protect WAF version. Precompiling the bundle improves performance by avoiding separate compilation of each component during deployment. {{}} -The following capabilities are only available via the Instance Manager REST API: +The following capabilities are available only through the Instance Manager REST API: - Update security policies - Create, read, and update security policy bundles -- Create, read, update, and delete Security Log Profiles -- Publish security policies, security log profiles, attack signatures, and/or threat campaigns to instances and instance groups +- Create, read, update, and delete security log profiles +- Publish security policies, log profiles, attack signatures, and threat campaigns to instances and instance groups {{}} --- -## Before You Begin +## Before you begin -Complete the following prerequisites before proceeding with this guide: +Before continuing, complete the following steps: -- [Set Up App Protect WAF Configuration Management]({{< ref "setup-waf-config-management" >}}) -- Verify that your user account has the [necessary permissions]({{< ref "/nim/admin-guide/rbac/overview-rbac.md" >}}) to access the Instance Manager REST API: +- [Set up App Protect WAF configuration management]({{< ref "setup-waf-config-management" >}}) +- Make sure your user account has the [required permissions]({{< ref "/nim/admin-guide/rbac/overview-rbac.md" >}}) to access the REST API: - - **Module**: Instance Manager - - **Feature**: Instance Management - - **Access**: `READ` - - **Feature**: Security Policies - - **Access**: `READ`, `CREATE`, `UPDATE`, `DELETE` + - **Module**: Instance Manager + - **Feature**: Instance Management → `READ` + - **Feature**: Security Policies → `READ`, `CREATE`, `UPDATE`, `DELETE` -The following are required to use support policy bundles: +To use policy bundles, you also need to: -- You must have `UPDATE` permissions for the security policies specified in the request. -- The correct `nms-nap-compiler` packages for the NGINX App Protect WAF version you're using are [installed on Instance Manager]({{< ref "/nim/nginx-app-protect/setup-waf-config-management.md#install-the-waf-compiler" >}}). 
-- The attack signatures and threat campaigns that you want to use are [installed on Instance Manager]({{< ref "/nim/nginx-app-protect/setup-waf-config-management.md#set-up-attack-signatures-and-threat-campaigns" >}}). +- Have `UPDATE` permissions for each referenced security policy +- [Install the correct `nms-nap-compiler` package]({{< ref "/nim/nginx-app-protect/setup-waf-config-management.md#install-the-waf-compiler" >}}) for your App Protect WAF version +- [Install the required attack signatures and threat campaigns]({{< ref "/nim/nginx-app-protect/setup-waf-config-management.md#set-up-attack-signatures-and-threat-campaigns" >}}) -### How to Access the Web Interface +### Access the web interface -To access the web interface, go to the FQDN for your NGINX Instance Manager host in a web browser and log in. Once you're logged in, select "Instance Manager" from the Launchpad menu. +To access the web interface, open a browser and go to the fully qualified domain name (FQDN) of your NGINX Instance Manager. Log in, then select **Instance Manager** from the Launchpad. -### How to Access the REST API +### Access the REST API {{< include "nim/how-to-access-nim-api.md" >}} --- -## Create a Security Policy {#create-security-policy} +## Create a security policy {#create-security-policy} {{}} {{%tab name="web interface"%}} -
- -To create a security policy using the Instance Manager web interface: - -1. In a web browser, go to the FQDN for your NGINX Management Suite host and log in. Then, from the Launchpad menu, select **Instance Manager**. -2. On the left menu, select **App Protect**. -3. On the *Security Policies* page, select **Create**. -4. On the *Create Policy* page, fill out the necessary fields: +To create a security policy using the NGINX Instance Manager web interface: - - **Name**: Provide a name for the policy. - - **Description**: (Optional) Add a short description for the policy. - - **Enter Policy**: Type or paste the policy in JSON format into the form provided. The editor will validate the JSON for accuracy. +1. In your browser, go to the FQDN for your NGINX Instance Manager host and log in. +2. From the Launchpad menu, select **Instance Manager**. +3. In the left menu, select **App Protect**. +4. On the *Security Policies* page, select **Create**. +5. On the *Create Policy* page, enter the required information: + - **Name**: Enter a name for the policy. + - **Description**: (Optional) Add a brief description. + - **Enter Policy**: Paste or type the JSON-formatted policy into the editor. The interface automatically validates the JSON. - For more information about creating custom policies, refer to the [NGINX App Protect WAF Declarative Policy](https://docs.nginx.com/nginx-app-protect/declarative-policy/policy/) guide and the [Policy Authoring and Tuning](https://docs.nginx.com/nginx-app-protect/configuration-guide/configuration/#policy-authoring-and-tuning) section of the config guide. + For help writing custom policies, see the [NGINX App Protect WAF Declarative Policy guide](https://docs.nginx.com/nginx-app-protect/declarative-policy/policy/) and the [Policy Authoring and Tuning section](https://docs.nginx.com/nginx-app-protect/configuration-guide/configuration/#policy-authoring-and-tuning) in the configuration guide. -5. Select **Save**. +6. Select **Save**. {{%/tab%}} {{%tab name="API"%}} -To upload a new security policy, send an HTTP `POST` request to the Security Policies API endpoint. - -{{}}Before sending a security policy to Instance Manager, you need to encode it using `base64`. Submitting a policy in its original JSON format will result in an error.{{}} - -
+To upload a new security policy using the REST API, send a `POST` request to the Security Policies API endpoint. +You must encode the JSON policy using `base64`. If you send the policy in plain JSON, the request will fail. {{}} @@ -101,7 +93,7 @@ To upload a new security policy, send an HTTP `POST` request to the Security Pol For example: ```shell -curl -X POST https://{{NMS_FQDN}}/api/platform/v1/security/policies \ +curl -X POST https://{{NIM_FQDN}}/api/platform/v1/security/policies \ -H "Authorization: Bearer " \ -d @ignore-xss-example.json ``` @@ -134,7 +126,7 @@ curl -X POST https://{{NMS_FQDN}}/api/platform/v1/security/policies \ "modified": "2022-04-12T23:19:58.502Z", "name": "ignore-cross-site-scripting", "revisionTimestamp": "2022-04-12T23:19:58.502Z", - "uid": "21daa130-4ba4-442b-bc4e-ab294af123e5" + "uid": "" }, "selfLink": { "rel": "/api/platform/v1/services/environments/prod" @@ -148,11 +140,13 @@ curl -X POST https://{{NMS_FQDN}}/api/platform/v1/security/policies \ --- -## Update a Security Policy +## Update a security policy + -To update a security policy, send an HTTP `POST` request to the Security Policies API endpoint, `/api/platform/v1/security/policies`. +To update a security policy, send a `POST` or `PUT` request to the Security Policies API. -You can use the optional `isNewRevision` parameter to indicate whether the updated policy is a new version of an existing policy. +- Use `POST` with the `isNewRevision=true` parameter to add a new version of an existing policy. +- Use `PUT` with the policy UID to overwrite the existing version. {{}} @@ -165,33 +159,35 @@ You can use the optional `isNewRevision` parameter to indicate whether the updat {{}} -For example: +To use `POST`, include the policy metadata and content in your request: ```shell -curl -X POST https://{{NMS_FQDN}}/api/platform/v1/security/policies?isNewRevision=true \ +curl -X POST https://{{NIM_FQDN}}/api/platform/v1/security/policies?isNewRevision=true \ -H "Authorization: Bearer " \ -d @update-xss-policy.json ``` -You can update a specific policy by sending an HTTP `PUT` request to the Security Policies API endpoint that includes the policy's unique identifier (UID). +To use PUT, first retrieve the policy’s unique identifier (UID). You can do this by sending a GET request to the policies endpoint: -To find the UID, send an HTTP `GET` request to the Security Policies API endpoint. This returns a list of all Security Policies that contains the unique identifier for each policy. - -Include the UID for the security policy in your `PUT` request to update the policy. Once the policy update is accepted, the WAF compiler will create a new, updated bundle. +```shell +curl -X GET https://{{NIM_FQDN}}/api/platform/v1/security/policies \ + -H "Authorization: Bearer " +``` -For example: +Then include the UID in your PUT request: ```shell -curl -X PUT https://{{NMS_FQDN}}/api/platform/v1/security/policies/23139e0a-4ac8-49f9-b7a0-0577b42c70c7 \ +curl -X PUT https://{{NIM_FQDN}}/api/platform/v1/security/policies/ \ -H "Authorization: Bearer " \ - --Content-Type application/json -d @update-xss-policy.json + --Content-Type application/json \ + -d @update-xss-policy.json ``` -After you have pushed an updated security policy, you can [publish it](#publish-policy) to selected instances or instance groups. +After updating the policy, you can [publish it](#publish-policy) to selected instances or instance groups. 
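
The create and update calls above both expect the policy content to be base64-encoded inside the request body. A minimal sketch for building such a payload follows; the `metadata`/`content` wrapper shape is an assumption inferred from the responses in this guide, so check the API reference for your version before relying on it:

```shell
# Encode the raw declarative policy (ignore-xss.json is a hypothetical local file).
# base64 -w0 is the GNU coreutils form; on macOS, use `base64 < ignore-xss.json`.
POLICY_B64=$(base64 -w0 ignore-xss.json)

# Wrap the encoded policy in the request body used by the curl examples above.
# The field names here are assumed, not confirmed by the API reference.
jq -n --arg content "$POLICY_B64" \
  '{metadata: {name: "ignore-cross-site-scripting"}, content: $content}' \
  > update-xss-policy.json
```
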
--- -## Delete a Security Policy +## Delete a security policy {{}} @@ -199,17 +195,29 @@ After you have pushed an updated security policy, you can [publish it](#publish-
-To delete a security policy using the Instance Manager web interface: +To delete a security policy using the NGINX Instance Manager web interface: + +1. In your browser, go to the FQDN for your NGINX Instance Manager host and log in. +2. From the Launchpad menu, select **Instance Manager**. +3. In the left menu, select **App Protect**. +4. On the *Security Policies* page, find the policy you want to delete. +5. Select the **Actions** menu (**...**) and choose **Delete**. -1. In a web browser, go to the FQDN for your NGINX Management Suite host and log in. Then, from the Launchpad menu, select **Instance Manager**. -2. On the left menu, select **App Protect**. -3. On the *Security Policies* page, select the **Actions** menu (represented by an ellipsis, **...**) for the policy you want to delete. Select **Delete** to remove the policy. {{%/tab%}} {{%tab name="API"%}} -To delete a security policy, send an HTTP `DELETE` request to the Security Policies API endpoint that includes the unique identifier for the policy that you want to delete. +To delete a security policy using the REST API: + +1. Retrieve the UID for the policy by sending a `GET` request to the policies endpoint: + + ```shell + curl -X GET https://{{NIM_FQDN}}/api/platform/v1/security/policies \ + -H "Authorization: Bearer " + ``` + +2. Send a `DELETE` request using the policy UID: {{}} @@ -221,10 +229,10 @@ To delete a security policy, send an HTTP `DELETE` request to the Security Polic {{}} -For example: +Example: ```shell -curl -X DELETE https://{{NMS_FQDN}}/api/platform/v1/security/policies/23139e0a-4ac8-49f9-b7a0-0577b42c70c7 \ +curl -X DELETE https://{{NIM_FQDN}}/api/platform/v1/security/policies/ \ -H "Authorization: Bearer " ``` @@ -232,36 +240,42 @@ curl -X DELETE https://{{NMS_FQDN}}/api/platform/v1/security/policies/23139e0a-4 {{
}} -{{%comment%}}TO DO: Add sections for managing attack signatures and threat campaigns{{%/comment%}} - --- -## Create Security Policy Bundles {#create-security-policy-bundles} +## Create security policy bundles {#create-security-policy-bundles} -To create security policy bundles, send an HTTP `POST` request to the Security Policies Bundles API endpoint. The specified security policies you'd like to compile into security policy bundles must already exist in Instance Manager. -### Required Fields +To create a security policy bundle, send a `POST` request to the Security Policy Bundles API. The policies you want to include in the bundle must already exist in NGINX Instance Manager. -- `appProtectWAFVersion`: The version of NGINX App Protect WAF being used. -- `policyName`: The name of security policy to include in the bundle. This must reference an existing security policy; refer to the [Create a Security Policy](#create-security-policy) section above for instructions. +Each bundle includes: -### Notes +- A security policy +- Attack signatures +- Threat campaigns +- A version of NGINX App Protect WAF +- Supporting files required to compile and deploy the bundle + +### Required fields + +- `appProtectWAFVersion`: The version of NGINX App Protect WAF to target. +- `policyName`: The name of the policy to include. Must reference an existing policy. +- `policyUID`: Optional. If omitted, the latest revision of the specified policy is used. This field does **not** accept the keyword `latest`. + +If you don’t include `attackSignatureVersionDateTime` or `threatCampaignVersionDateTime`, the latest versions are used by default. You can also set them explicitly by using `"latest"` as the value. -- If you do not specify a value for the `attackSignatureVersionDateTime` and/or `threatCampaignVersionDateTime` fields, the latest version of each will be used by default. You can also explicitly state that you want to use the most recent version by specifying the keyword `latest` as the value. -- If the `policyUID` field is not defined, the latest version of the specified security policy will be used. This field **does not allow** use of the keyword `latest`. 
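
As a quick illustration of those defaults, a minimal bundle definition might omit `policyUID` (to take the latest policy revision) and pin the signature and campaign versions to `latest` explicitly. This is only a sketch — the field names come from the example request below, and the values are placeholders:

```shell
# Write a minimal bundle definition for the POST request shown below.
cat > security-policy-bundles.json <<'EOF'
{
  "bundles": [
    {
      "appProtectWAFVersion": "4.457.0",
      "policyName": "default-enforcement",
      "attackSignatureVersionDateTime": "latest",
      "threatCampaignVersionDateTime": "latest"
    }
  ]
}
EOF
```
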
{{}} -| Method | Endpoint | -|--------|--------------------------------------| +| Method | Endpoint | +|--------|----------------------------------------------| | POST | `/api/platform/v1/security/policies/bundles` | {{}} -For example: +Example: ```shell -curl -X POST https://{{NMS_FQDN}}/api/platform/v1/security/policies/bundles \ +curl -X POST https://{{NIM_FQDN}}/api/platform/v1/security/policies/bundles \ -H "Authorization: Bearer " \ -d @security-policy-bundles.json ``` @@ -274,7 +288,7 @@ curl -X POST https://{{NMS_FQDN}}/api/platform/v1/security/policies/bundles \ "bundles": [{ "appProtectWAFVersion": "4.457.0", "policyName": "default-enforcement", - "policyUID": "29d86fe8-612a-5c69-895a-04fc5b9849a6", + "policyUID": "", "attackSignatureVersionDateTime": "2023.06.20", "threatCampaignVersionDateTime": "2023.07.18" }, @@ -305,10 +319,10 @@ curl -X POST https://{{NMS_FQDN}}/api/platform/v1/security/policies/bundles \ "modified": "2023-10-04T23:19:58.502Z", "appProtectWAFVersion": "4.457.0", "policyName": "default-enforcement", - "policyUID": "29d86fe8-612a-5c69-895a-04fc5b9849a6", + "policyUID": "", "attackSignatureVersionDateTime": "2023.06.20", "threatCampaignVersionDateTime": "2023.07.18", - "uid": "dceb8254-9a90-4e77-87ac-73070f821412" + "uid": "" }, "content": "", "compilationStatus": { @@ -321,11 +335,11 @@ curl -X POST https://{{NMS_FQDN}}/api/platform/v1/security/policies/bundles \ "created": "2023-10-04T23:19:58.502Z", "modified": "2023-10-04T23:19:58.502Z", "appProtectWAFVersion": "4.279.0", - "policyName": "defautl-enforcement", - "policyUID": "04fc5b9849a6-612a-5c69-895a-29d86fe8", + "policyName": "default-enforcement", + "policyUID": "", "attackSignatureVersionDateTime": "2023.08.10", "threatCampaignVersionDateTime": "2023.08.09", - "uid": "trs35lv2-9a90-4e77-87ac-ythn4967" + "uid": "" }, "content": "", "compilationStatus": { @@ -339,10 +353,10 @@ curl -X POST https://{{NMS_FQDN}}/api/platform/v1/security/policies/bundles \ "modified": "2023-10-04T23:19:58.502Z", "appProtectWAFVersion": "4.457.0", "policyName": "ignore-xss", - "policyUID": "849a604fc5b9-612a-5c69-895a-86f29de8", + "policyUID": "", "attackSignatureVersionDateTime": "2023.08.10", "threatCampaignVersionDateTime": "2023.08.09", - "uid": "nbu844lz-9a90-4e77-87ac-zze8861d" + "uid": "" }, "content": "", "compilationStatus": { @@ -354,39 +368,38 @@ curl -X POST https://{{NMS_FQDN}}/api/platform/v1/security/policies/bundles \ } ``` - --- -## List Security Policy Bundles {#list-security-policy-bundles} +## List security policy bundles {#list-security-policy-bundles} -To list security policy bundles, send an HTTP `GET` request to the Security Policies Bundles API endpoint. +To list all security policy bundles, send a `GET` request to the Security Policy Bundles API. -{{}}The list will only contain the security policy bundles that you have "READ" permissions for in Instance Manager.{{}} +You’ll only see bundles you have `"READ"` permissions for. -You can filter the results by using the following query parameters: +You can use the following query parameters to filter results: -- `includeBundleContent`: Boolean indicating whether to include the security policy bundle content for each bundle when getting a list of bundles or not. If not provided, defaults to `false`. Please note that the content returned is `base64 encoded`. -- `policyName`: String used to filter the list of security policy bundles; only security policy bundles that have the specified security policy name will be returned. 
If not provided, it will not filter based on `policyName`. -- `policyUID`: String used to filter the list of security policy bundles; only security policy bundles that have the specified security policy UID will be returned. If not provided, it will not filter based on `policyUID`. -- `startTime`: The security policy bundle's "modified time" has to be equal to or greater than this time value. If no value is supplied, it defaults to 24 hours from the current time. `startTime` has to be less than `endTime`. -- `endTime`: Indicates the time that the security policy bundles modified time has to be less than. If no value is supplied, it defaults to current time. `endTime` has to be greater than `startTime`. +- `includeBundleContent`: Whether to include base64-encoded content in the response. Defaults to `false`. +- `policyName`: Return only bundles that match this policy name. +- `policyUID`: Return only bundles that match this policy UID. +- `startTime`: Return only bundles modified at or after this time. +- `endTime`: Return only bundles modified before this time. -
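
For instance, a filtered request that returns only bundles for one policy, modified inside a given window, and includes their content might look like this (a sketch — the policy name is illustrative, and the timestamp format is assumed to be RFC 3339, matching the timestamps shown in the responses):

```shell
curl -X GET "https://{{NIM_FQDN}}/api/platform/v1/security/policies/bundles?policyName=default-enforcement&includeBundleContent=true&startTime=2023-10-01T00:00:00Z&endTime=2023-10-05T00:00:00Z" \
  -H "Authorization: Bearer <access token>"
```
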
+If no time range is provided, the API defaults to showing bundles modified in the past 24 hours. {{}} -| Method | Endpoint | -|--------|--------------------------------------| +| Method | Endpoint | +|--------|----------------------------------------------| | GET | `/api/platform/v1/security/policies/bundles` | {{}} -For example: +Example: ```shell -curl -X GET https://{{NMS_FQDN}}/api/platform/v1/security/policies/bundles \ +curl -X GET https://{{NIM_FQDN}}/api/platform/v1/security/policies/bundles \ -H "Authorization: Bearer " ``` @@ -401,10 +414,10 @@ curl -X GET https://{{NMS_FQDN}}/api/platform/v1/security/policies/bundles \ "modified": "2023-10-04T23:19:58.502Z", "appProtectWAFVersion": "4.457.0", "policyName": "default-enforcement", - "policyUID": "29d86fe8-612a-5c69-895a-04fc5b9849a6", + "policyUID": "", "attackSignatureVersionDateTime": "2023.06.20", "threatCampaignVersionDateTime": "2023.07.18", - "uid": "dceb8254-9a90-4e77-87ac-73070f821412" + "uid": "" }, "content": "", "compilationStatus": { @@ -418,10 +431,10 @@ curl -X GET https://{{NMS_FQDN}}/api/platform/v1/security/policies/bundles \ "modified": "2023-10-04T23:19:58.502Z", "appProtectWAFVersion": "4.279.0", "policyName": "defautl-enforcement", - "policyUID": "04fc5b9849a6-612a-5c69-895a-29d86fe8", + "policyUID": "", "attackSignatureVersionDateTime": "2023.08.10", "threatCampaignVersionDateTime": "2023.08.09", - "uid": "trs35lv2-9a90-4e77-87ac-ythn4967" + "uid": "" }, "content": "", "compilationStatus": { @@ -435,10 +448,10 @@ curl -X GET https://{{NMS_FQDN}}/api/platform/v1/security/policies/bundles \ "modified": "2023-10-04T23:19:58.502Z", "appProtectWAFVersion": "4.457.0", "policyName": "ignore-xss", - "policyUID": "849a604fc5b9-612a-5c69-895a-86f29de8", + "policyUID": "", "attackSignatureVersionDateTime": "2023.08.10", "threatCampaignVersionDateTime": "2023.08.09", - "uid": "nbu844lz-9a90-4e77-87ac-zze8861d" + "uid": "" }, "content": "", "compilationStatus": { @@ -452,35 +465,35 @@ curl -X GET https://{{NMS_FQDN}}/api/platform/v1/security/policies/bundles \ --- -## Get a Security Policy Bundle {#get-security-policy-bundle} +## Get a security policy bundle {#get-security-policy-bundle} -To get a specific security policy bundle, send an HTTP `GET` request to the Security Policies Bundles API endpoint that contains the security policy UID and security policy bundle UID in the path. +To retrieve a specific security policy bundle, send a `GET` request to the Security Policy Bundles API using the policy UID and bundle UID in the URL path. -{{}}You must have "READ" permission for the security policy bundle to be able to retrieve information about a bundle by using the REST API.{{}} - -
+You must have `"READ"` permission for the bundle to retrieve it. {{}} -| Method | Endpoint | -|--------|--------------------------------------| +| Method | Endpoint | +|--------|-------------------------------------------------------------------------------------------------| | GET | `/api/platform/v1/security/policies/{security-policy-uid}/bundles/{security-policy-bundle-uid}` | {{}} - -For example: +Example: ```shell -curl -X GET https://{{NMS_FQDN}}/api/platform/v1/security/policies/29d86fe8-612a-5c69-895a-04fc5b9849a6/bundles/trs35lv2-9a90-4e77-87ac-ythn4967 \ +curl -X GET https://{{NIM_FQDN}}/api/platform/v1/security/policies//bundles/ \ -H "Authorization: Bearer " ``` -The JSON response, shown in the example below, includes a `content` field that is base64 encoded. After you retrieve the information from the API, you will need to base64 decode the content field. You can include this in your API call, as shown in the following example cURL request: +The response includes a content field that contains the bundle in base64 format. To use it, you’ll need to decode the content and save it as a `.tgz` file. + +Example: ```bash -curl -X GET "https://{NMS_FQDN}/api/platform/v1/security/policies/{security-policy-uid}/bundles/{security-policy-bundle-uid}" -H "Authorization: Bearer xxxxx.yyyyy.zzzzz" | jq -r '.content' | base64 -d > security-policy-bundle.tgz +curl -X GET "https://{{NIM_FQDN}}/api/platform/v1/security/policies//bundles/" \ + -H "Authorization: Bearer " | jq -r '.content' | base64 -d > security-policy-bundle.tgz ```
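
The decoded file is an ordinary gzipped tar archive, so a quick way to confirm the download worked is to list its contents (read-only; nothing here modifies the bundle):

```shell
tar -tzf security-policy-bundle.tgz
```
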
@@ -492,10 +505,10 @@ curl -X GET "https://{NMS_FQDN}/api/platform/v1/security/policies/{security-poli "created": "2023-10-04T23:19:58.502Z", "modified": "2023-10-04T23:19:58.502Z", "appProtectWAFVersion": "4.457.0", - "policyUID": "29d86fe8-612a-5c69-895a-04fc5b9849a6", + "policyUID": "", "attackSignatureVersionDateTime": "2023.08.10", "threatCampaignVersionDateTime": "2023.08.09", - "uid": "trs35lv2-9a90-4e77-87ac-ythn4967" + "uid": "" }, "content": "ZXZlbnRzIHt9Cmh0dHAgeyAgCiAgICBzZXJ2ZXIgeyAgCiAgICAgICAgbGlzdGVuIDgwOyAgCiAgICAgICAgc2VydmVyX25hbWUgXzsKCiAgICAgICAgcmV0dXJuIDIwMCAiSGVsbG8iOyAgCiAgICB9ICAKfQ==", "compilationStatus": { @@ -507,28 +520,25 @@ curl -X GET "https://{NMS_FQDN}/api/platform/v1/security/policies/{security-poli --- -## Create a Security Log Profile {#create-security-log-profile} - -Send an HTTP `POST` request to the Security Log Profiles API endpoint to upload a new security log profile. +## Create a security log profile {#create-security-log-profile} -{{}}Before sending a security log profile to Instance Manager, you need to encode it using `base64`. Submitting a log profile in its original JSON format will result in an error.{{}} - -
+To upload a new security log profile, send a `POST` request to the Security Log Profiles API endpoint. +You must encode the log profile in `base64` before sending it. If you send plain JSON, the request will fail. {{}} -| Method | Endpoint | -|--------|--------------------------------------| +| Method | Endpoint | +|--------|-----------------------------------------| | POST | `/api/platform/v1/security/logprofiles` | {{}} -For example: +Example: ```shell -curl -X POST https://{{NMS_FQDN}}/api/platform/v1/security/logprofiles \ +curl -X POST https://{{NIM_FQDN}}/api/platform/v1/security/logprofiles \ -H "Authorization: Bearer " \ -d @default-log-example.json ``` @@ -558,87 +568,100 @@ curl -X POST https://{{NMS_FQDN}}/api/platform/v1/security/logprofiles \ "modified": "2023-07-05T22:09:19.634358096Z", "name": "default-log-example", "revisionTimestamp": "2023-07-05T22:09:19.634358096Z", - "uid": "54c35ad7-e082-4dc5-bb5d-2640a17d5620" + "uid": "" }, "selfLink": { - "rel": "/api/platform/v1/security/logprofiles/54c35ad7-e082-4dc5-bb5d-2640a17d5620" + "rel": "/api/platform/v1/security/logprofiles/" } } ``` --- -## Update a Security Log Profile - -To update a security log profile, send an HTTP `POST` request to the Security Log Profiles API endpoint, `/api/platform/v1/security/logprofiles`. +## Update a security log profile {#update-security-log-profile} -You can use the optional `isNewRevision` parameter to indicate whether the updated log profile is a new version of an existing log profile. +To update a security log profile, you can either: +- Use `POST` with the `isNewRevision=true` parameter to add a new version. +- Use `PUT` with the log profile UID to overwrite the existing version. {{}} -| Method | Endpoint | -|--------|---------------------------------------------------------| -| POST | `/api/platform/v1/security/logprofiles?isNewRevision=true` | +| Method | Endpoint | +|--------|--------------------------------------------------------------------| +| POST | `/api/platform/v1/security/logprofiles?isNewRevision=true` | | PUT | `/api/platform/v1/security/logprofiles/{security-log-profile-uid}` | {{}} -For example: +To create a new revision: ```shell -curl -X POST https://{{NMS_FQDN}}/api/platform/v1/security/logprofiles?isNewRevision=true \ +curl -X POST https://{{NIM_FQDN}}/api/platform/v1/security/logprofiles?isNewRevision=true \ -H "Authorization: Bearer " \ -d @update-default-log.json ``` -You can update a specific log profile by sending an HTTP `PUT` request to the Security Log Profiles API endpoint that includes the log profile's unique identifier (UID). - -To find the UID, send an HTTP `GET` request to the Security Log Profiles API endpoint. This returns a list of all Security Log Profiles that contains the unique identifier for each log profile. +To overwrite an existing security log profile: -Include the UID for the security log profile in your `PUT` request to update the log profile. +1. Retrieve the profile’s UID: -For example: + ```shell + curl -X PUT https://{{NIM_FQDN}}/api/platform/v1/security/logprofiles/ \ + -H "Authorization: Bearer " \ + --Content-Type application/json \ + -d @update-log-profile.json + ``` -```shell -curl -X PUT https://{{NMS_FQDN}}/api/platform/v1/security/logprofiles/23139e0a-4ac8-49f9-b7a0-0577b42c70c7 \ - -H "Authorization: Bearer " \ - --Content-Type application/json -d @update-default-log.json -``` +2. 
Use the UID in your PUT request: + + ```shell + curl -X PUT https://{{NIM_FQDN}}/api/platform/v1/security/logprofiles/ \ + -H "Authorization: Bearer " \ + --Content-Type application/json \ + -d @update-log-profile.json + ``` -After you have pushed an updated security log profile, you can [publish it](#publish-policy) to selected instances or instance groups. +After updating the security log profile, you can [publish it](#publish-policy) to specific instances or instance groups. --- -## Delete a Security Log Profile +## Delete a security log profile {#delete-security-log-profile} -To delete a security log profile, send an HTTP `DELETE` request to the Security Log Profiles API endpoint that includes the unique identifier for the log profile that you want to delete. +To delete a security log profile, send a `DELETE` request to the Security Log Profiles API using the profile’s UID. {{}} -| Method | Endpoint | -|--------|------------------------------------------------------------| +| Method | Endpoint | +|--------|--------------------------------------------------------------------| | DELETE | `/api/platform/v1/security/logprofiles/{security-log-profile-uid}` | {{}} -For example: +1. Retrieve the UID: -```shell -curl -X DELETE https://{{NMS_FQDN}}/api/platform/v1/security/logprofiles/23139e0a-4ac8-49f9-b7a0-0577b42c70c7 \ - -H "Authorization: Bearer " -``` + ```shell + curl -X GET https://{{NIM_FQDN}}/api/platform/v1/security/logprofiles \ + -H "Authorization: Bearer " + ``` + +2. Send the delete request: + + ```shell + curl -X DELETE https://{{NIM_FQDN}}/api/platform/v1/security/logprofiles/ \ + -H "Authorization: Bearer " + ``` --- -## Publish Updates to Instances {#publish-policy} +## Publish updates to instances {#publish-policy} -The Publish API lets you distribute security policies, security log profiles, attack signatures, and/or threat campaigns to instances and instance groups. +Use the Publish API to push security policies, log profiles, attack signatures, and threat campaigns to NGINX instances or instance groups. -{{}}Use this endpoint *after* you've added or updated security policies, security log profiles, attack signatures, and/or threat campaigns.{{}} +Call this endpoint *after* you've created or updated the resources you want to deploy. {{}} @@ -650,18 +673,20 @@ The Publish API lets you distribute security policies, security log profiles, at {{}} -When making a request to the Publish API, make sure to include all the necessary information for your specific use case: +Include the following information in your request, depending on what you're publishing: -- Instance and/or Instance Group UID(s) to push the bundle to -- Threat Campaign version and UID -- Attack Signature version and UID -- Security Policy UID(s) -- Security Log Profile UID(s) +- Instance and instance group UIDs +- Policy UID and name +- Log profile UID and name +- Attack signature library UID and version +- Threat campaign UID and version -For example: +Example: ```shell -curl -X PUT https://{{NMS_FQDN}}/api/platform/v1/security/publish -H "Authorization: Bearer " +curl -X POST https://{{NIM_FQDN}}/api/platform/v1/security/publish \ + -H "Authorization: Bearer " \ + -d @publish-request.json ```
@@ -671,27 +696,27 @@ curl -X PUT https://{{NMS_FQDN}}/api/platform/v1/security/publish -H "Authorizat { "publications": [ { - "attackSignatureLibrary": { - "uid": "3fa85f64-5717-4562-b3fc-2c963f66afa6", - "versionDateTime": "2022.10.02" - }, - "instanceGroups": [ - "3fa85f64-5717-4562-b3fc-2c963f66afa6" - ], "instances": [ - "3fa85f64-5717-4562-b3fc-2c963f66afa6" + "" ], + "instanceGroups": [ + "" + ], + "policyContent": { + "name": "example-policy", + "uid": "" + }, "logProfileContent": { - "name": "default-log", - "uid": "ffdbda39-88be-420a-b673-19d4183b7e4c" + "name": "example-log-profile", + "uid": "" }, - "policyContent": { - "name": "default-enforcement", - "uid": "3fa85f64-5717-4562-b3fc-2c963f66afa6" + "attackSignatureLibrary": { + "uid": "", + "versionDateTime": "2023.10.02" }, "threatCampaign": { - "uid": "3fa85f64-5717-4562-b3fc-2c963f66afa6", - "versionDateTime": "2022.10.01" + "uid": "", + "versionDateTime": "2023.10.01" } } ] @@ -721,357 +746,91 @@ curl -X PUT https://{{NMS_FQDN}}/api/platform/v1/security/publish -H "Authorizat --- -## Check Security Policy and Security Log Profile Publication Status -When publishing an NGINX configuration that references a security policy and secuity log profile, the Instance Manager REST APIs can provide further details about the status of the configuration publications. To access this information, use the Instance Manager API endpoints and method as indicated. +## Check security policy and security log profile publication status {#check-publication-status} -To retrieve the details for the different configuration publication statuses for a particular security policy, send an HTTP `GET` request to the Security Deployments Associations API endpoint, providing the name of the security policy. +After publishing updates, you can check deployment status using the NGINX Instance Manager REST API. -| Method | Endpoint | -|--------|-----------------------------------------------------------------------------| -| GET | `/api/platform/v1/security/deployments/associations/{security-policy-name}` | +Use the following endpoints to verify whether the configuration updates were successfully deployed to instances or instance groups. -You can locate the configuration publication status in the response within the field `lastDeploymentDetails` for instances and instance groups: +### Check publication status for a security policy -- `lastDeploymentDetails` (for an instance): associations -> instance -> lastDeploymentDetails -- `lastDeploymentDetails` (for an instance in an instance group): associations -> instanceGroup -> instances -> lastDeploymentDetails +To view deployment status for a specific policy, send a `GET` request to the Security Deployments Associations API using the policy name. -The example below shows a call to the `security deployments associations` endpoint and the corresponding JSON response containing successful deployments. +{{}} -```shell -curl -X GET "https://{NGINX-INSTANCE-MANAGER-FQDN}/api/platform/v1/security/deployments/associations/ignore-xss" -H "Authorization: Bearer " -``` +| Method | Endpoint | +|--------|--------------------------------------------------------------------| +| GET | `/api/platform/v1/security/deployments/associations/{policy-name}` | -
-JSON Response +{{}} -```json -{ - "associations": [ - { - "attackSignatureLibrary": { - "uid": "c69460cc-6b59-4813-8d9c-76e4a6c56b4b", - "versionDateTime": "2023.02.16" - }, - "instance": { - "hostName": "ip-172-16-0-99", - "lastDeploymentDetails": { - "createTime": "2023-04-11T21:36:11.519174534Z", - "details": { - "failure": [], - "pending": [], - "success": [ - { - "name": "ip-172-16-0-99" - } - ] - }, - "id": "19cf5ed4-29d6-4139-b5f5-308c0d0ebb13", - "message": "Instance config successfully published to", - "status": "successful", - "updateTime": "2023-04-11T21:36:14.008108979Z" - }, - "systemUid": "0435a5de-41c1-3754-b2e8-9d9fe946bafe", - "uid": "29d86fe8-612a-5c69-895a-04fc5b9849a6" - }, - "instanceGroup": { - "displayName": "inst_group_1", - "instances": [ - { - "hostName": "hostname1", - "systemUid": "49d143c2-f556-4cd7-8658-76fff54fb861", - "uid": "c8e15dcf-c504-4b7f-b52d-def7b8fd2f64", - "lastDeploymentDetails": { - "createTime": "2023-04-11T21:36:11.519174534Z", - "details": { - "failure": [], - "pending": [], - "success": [ - { - "name": "ip-172-16-0-99" - } - ] - }, - "id": "19cf5ed4-29d6-4139-b5f5-308c0d0ebb13", - "message": "Instance config successfully published to", - "status": "successful", - "updateTime": "2023-04-11T21:36:14.008108979Z" - }, - }, - { - "hostName": "hostname2", - "systemUid": "88a99ab0-15bb-4719-9107-daf5007c33f7", - "uid": "ed7e9173-794f-41af-80d9-4ed37d593247", - "lastDeploymentDetails": { - "createTime": "2023-04-11T21:36:11.519174534Z", - "details": { - "failure": [], - "pending": [], - "success": [ - { - "name": "ip-172-16-0-99" - } - ] - }, - "id": "19cf5ed4-29d6-4139-b5f5-308c0d0ebb13", - "message": "Instance config successfully published to", - "status": "successful", - "updateTime": "2023-04-11T21:36:14.008108979Z" - }, - } - ], - "uid": "51f8addc-c0e9-438b-b0b6-3e4f1aa8202d" - }, - "policyUid": "9991f237-d9c7-47b7-98aa-faa836838f38", - "policyVersionDateTime": "2023-04-11T21:18:19.183Z", - "threatCampaign": { - "uid": "eab683fe-c2f1-4910-a88c-8bfbc6363164", - "versionDateTime": "2023.02.15" - } - } - ] -} +Example: + +```shell +curl -X GET "https://{{NIM_FQDN}}/api/platform/v1/security/deployments/associations/ignore-xss" \ + -H "Authorization: Bearer " ``` -
+In the response, look for the `lastDeploymentDetails` field under instance or `instanceGroup.instances`. -To retrieve the details for the different configuration publication statuses for a particular security log profile, send an HTTP `GET` request to the Security Deployments Associations API endpoint, providing the name of the security log profile. -| Method | Endpoint | -|--------|-----------------------------------------------------------------------------| -| GET | `/api/platform/v1/security/deployments/logprofiles/associations/{security-log-profile-name}` | +### Check publication status for a security log profile -You can locate the configuration publication status in the response within the field `lastDeploymentDetails` for instances and instance groups: +{{}} -- `lastDeploymentDetails` (for an instance): associations -> instance -> lastDeploymentDetails -- `lastDeploymentDetails` (for an instance in an instance group): associations -> instanceGroup -> instances -> lastDeploymentDetails +| Method | Endpoint | +|--------|-------------------------------------------------------------------------------------| +| GET | `/api/platform/v1/security/deployments/logprofiles/associations/{log-profile-name}` | -The example below shows a call to the `security deployments associations` endpoint and the corresponding JSON response containing successful deployments. +{{}} + +Example: ```shell -curl -X GET "https://{NGINX-INSTANCE-MANAGER-FQDN}/api/platform/v1/security/deployments/logprofiles/associations/default-log" -H "Authorization: Bearer " +curl -X GET "https://{{NIM_FQDN}}/api/platform/v1/security/deployments/logprofiles/associations/default-log" \ + -H "Authorization: Bearer " ``` -
-JSON Response +The response also contains `lastDeploymentDetails` for each instance or group. -```json -{ - "associations": [ - { - "instance": { - "hostName": "", - "systemUid": "", - "uid": "" - }, - "instanceGroup": { - "displayName": "ig1", - "instances": [ - { - "hostName": "ip-172-16-0-142", - "systemUid": "1d1f03ff-02de-32c5-8dfd-902658aada4c", - "uid": "18d074e6-3868-51ba-9999-b7466a936815" - } - ], - "lastDeploymentDetails": { - "createTime": "2023-07-05T23:01:06.679136973Z", - "details": { - "failure": [], - "pending": [], - "success": [ - { - "name": "ip-172-16-0-142" - } - ] - }, - "id": "9bfc9db7-877d-4e8e-a43d-9660a6cd11cc", - "message": "Instance Group config successfully published to ig1", - "status": "successful", - "updateTime": "2023-07-05T23:01:06.790802157Z" - }, - "uid": "0df0386e-82f7-4efc-863e-5d7cfbc3f7df" - }, - "logProfileUid": "b680f7c3-6fc0-4c6b-889a-3025580c7fcb", - "logProfileVersionDateTime": "2023-07-05T22:08:47.371Z" - }, - { - "instance": { - "hostName": "ip-172-16-0-5", - "lastDeploymentDetails": { - "createTime": "2023-07-05T21:45:08.698646791Z", - "details": { - "failure": [], - "pending": [], - "success": [ - { - "name": "ip-172-16-0-5" - } - ] - }, - "id": "73cf670a-738a-4a74-b3fb-ac9771e89814", - "message": "Instance config successfully published to", - "status": "successful", - "updateTime": "2023-07-05T21:45:08.698646791Z" - }, - "systemUid": "0afe5ac2-43aa-36c8-bcdc-7f88cdd35ab2", - "uid": "9bb4e2ef-3746-5d79-b526-e545fad27e90" - }, - "instanceGroup": { - "displayName": "", - "instances": [], - "uid": "" - }, - "logProfileUid": "bb3badb2-f8f5-4b95-9428-877fc208e2f1", - "logProfileVersionDateTime": "2023-07-03T21:46:17.006Z" - } - ] -} -``` +### Check status for a specific instance -
+You can also view the deployment status for a specific instance by providing the system UID and instance UID. -To retrieve the configuration publication status details for a particular instance, send an HTTP `GET` request to the Instances API endpoint, providing the unique system and instance identifiers. +{{}} -| Method | Endpoint | -|--------|-----------------------------------------------------------------| -| GET | `/api/platform/v1/systems/{system-uid}/instances/{instance-id}` | +| Method | Endpoint | +|--------|------------------------------------------------------------------| +| GET | `/api/platform/v1/systems/{system-uid}/instances/{instance-uid}` | -You can locate the configuration publication status in the the response within the `lastDeploymentDetails` field, which contains additional fields that provide more context around the status. +{{}} -The example below shows a call to the `instances` endpoint and the corresponding JSON response containing a compiler related error message. +Example: ```shell -curl -X GET "https://{NGINX-INSTANCE-MANAGER-FQDN}/api/platform/v1/systems/b9df6377-2c4f-3266-a64a-e064b0371c73/instances/5663cf4e-a0c7-50c8-b93c-16fd11a0f00b" -H "Authorization: Bearer " +curl -X GET "https://{{NIM_FQDN}}/api/platform/v1/systems//instances/" \ + -H "Authorization: Bearer " ``` -
-JSON Response +In the response, look for the `lastDeploymentDetails` field, which shows the deployment status and any related errors. -```json -{ - "build": { - "nginxPlus": true, - "release": "nginx-plus-r28", - "version": "1.23.2" - }, - "configPath": "/etc/nginx/nginx.conf", - "configVersion": { - "instanceGroup": { - "createTime": "0001-01-01T00:00:00Z", - "uid": "", - "versionHash": "" - }, - "versions": [ - { - "createTime": "2023-01-14T10:48:46.319Z", - "uid": "5663cf4e-a0c7-50c8-b93c-16fd11a0f00b", - "versionHash": "922e9d40fa6d4dd3a4b721295b8ecd95f73402644cb8d234f9f4f862b8a56bfc" - } - ] - }, - "displayName": "ip-192-0-2-27", - "links": [ - { - "rel": "/api/platform/v1/systems/b9df6377-2c4f-3266-a64a-e064b0371c73", - "name": "system" - }, - { - "rel": "/api/platform/v1/systems/b9df6377-2c4f-3266-a64a-e064b0371c73/instances/5663cf4e-a0c7-50c8-b93c-16fd11a0f00b", - "name": "self" - }, - { - "rel": "/api/platform/v1/systems/instances/deployments/b31c6ab1-4a46-4c81-a065-204575145e8e", - "name": "deployment" - } - ], - "processPath": "/usr/sbin/nginx", - "registrationTime": "2023-01-14T10:12:31.000Z", - "startTime": "2023-01-14T10:09:43Z", - "status": { - "lastStatusReport": "2023-01-14T11:11:49.323495017Z", - "state": "online" - }, - "uid": "5663cf4e-a0c7-50c8-b93c-16fd11a0f00b", - "version": "1.23.2", - "appProtect": { - "attackSignatureVersion": "Available after publishing Attack Signatures from Instance Manager", - "status": "active", - "threatCampaignVersion": "Available after publishing Threat Campaigns from Instance Manager", - "version": "4.2.0" - }, - "configureArgs": [ - ... - ], - "lastDeploymentDetails": { - "createTime": "2023-01-14T11:10:25.096812852Z", - "details": { - "error": "{\"instance:b9df6377-2c4f-3266-a64a-e064b0371c73\":\"failed building config payload: policy compilation failed for deployment b31c6ab1-4a46-4c81-a065-204575145e8e due to integrations service error: the specified compiler (4.2.0) is missing, please install it and try again.\"}", - "failure": [ - { - "failMessage": "failed building config payload: policy compilation failed for deployment b31c6ab1-4a46-4c81-a065-204575145e8e due to integrations service error: the specified compiler (4.2.0) is missing, please install it and try again.", - "name": "ip-192-0-2-27" - } - ], - "pending": [], - "success": [] - }, - "id": "b31c6ab1-4a46-4c81-a065-204575145e8e", - "message": "Instance config failed to publish to", - "status": "failed", - "updateTime": "2023-01-14T11:10:25.175145693Z" - }, - "loadableModules": [ - ... - ], - "packages": [ - ... - ], - "processId": "10345", - "ssl": { - "built": null, - "runtime": null - } -} -``` +### Check deployment result by deployment ID -
+When you use the Publish API to [publish security content](#publish-policy), NGINX Instance Manager creates a deployment ID for the request. You can use this ID to check the result of the publication. -When you use the Publish API (`/security/publish`) to [publish a security policy and security log profile](#publish-policy), Instance Manager creates a deployment ID for the request. To view the status of the update, or to check for any errors, use the endpoint and method shown below and reference the deployment ID. +{{}} | Method | Endpoint | |--------|------------------------------------------------------------------| | GET | `/api/platform/v1/systems/instances/deployments/{deployment-id}` | -You can locate the configuration publication status in the the response within the `details` field, which contains additional fields that provide more context around the status. +{{}} -The example below shows a call to the `deployments` endpoint and the corresponding JSON response containing a compiler error message. +Example: ```shell -curl -X GET --url "https://{NGINX-INSTANCE-MANAGER-FQDN}/api/platform/v1/systems/instances/deployments/d38a8e5d-2312-4046-a60f-a30a4aea1fbb" \ +curl -X GET "https://{{NIM_FQDN}}/api/platform/v1/systems/instances/deployments/" \ -H "Authorization: Bearer " ``` -
-JSON Response - -```json -{ - "createTime": "2023-01-14T04:35:47.566082799Z", - "details": { - "error": "{\"instance:8a2092aa-5612-370d-bff0-5d7521e206d6\":\"failed building config payload: policy bundle compilation failed for d38a8e5d-2312-4046-a60f-a30a4aea1fbb, integrations service returned the following error: missing the specified compiler (4.2.0) please install it and try again\"}", - "failure": [ - { - "failMessage": "failed building config payload: policy bundle compilation failed for d38a8e5d-2312-4046-a60f-a30a4aea1fbb, integrations service returned the following error: missing the specified compiler (4.2.0) please install it and try again", - "name": "ip-192-0-2-243" - } - ], - "pending": [], - "success": [] - }, - "id": "d38a8e5d-2312-4046-a60f-a30a4aea1fbb", - "message": "Instance config failed to publish to", - "status": "failed", - "updateTime": "2023-01-14T04:35:47.566082799Z" -} -``` - -
+The response includes the full deployment status, success or failure details, and any compiler error messages. diff --git a/content/nim/nginx-app-protect/overview-nap-waf-config-management.md b/content/nim/nginx-app-protect/overview-nap-waf-config-management.md index 92287079d..b0d31319f 100644 --- a/content/nim/nginx-app-protect/overview-nap-waf-config-management.md +++ b/content/nim/nginx-app-protect/overview-nap-waf-config-management.md @@ -1,68 +1,76 @@ --- -description: Learn how you can use F5 NGINX Management Suite Instance Manager to configure - NGINX App Protect WAF security policies. +description: Learn how to use F5 NGINX Instance Manager to set up and manage NGINX App Protect WAF security policies. docs: DOCS-992 -title: NGINX App Protect WAF configuration management +title: "How WAF policy management works" toc: true -weight: 500 +weight: 100 type: - reference --- ## Overview -F5 NGINX Management Suite Instance Manager provides configuration management for [NGINX App Protect WAF](https://www.nginx.com/products/nginx-app-protect/web-application-firewall/). +F5 NGINX Instance Manager helps you manage [NGINX App Protect WAF](https://www.nginx.com/products/nginx-app-protect/web-application-firewall/) security configurations. -You can use NGINX App Protect WAF with Instance Manager to inspect incoming traffic, identify potential threats, and block malicious traffic. With Configuration Management for App Protect WAF, you can configure WAF security policies in a single location and push your configurations out to one, some, or all of your NGINX App Protect WAF instances. +Use NGINX Instance Manager with NGINX App Protect WAF to inspect incoming traffic, detect threats, and block malicious requests. You can define policies in one place and push them to some or all of your NGINX App Protect WAF instances. -### Features +### Key features -- Manage NGINX App Protect WAF security configurations by using the NGINX Management Suite user interface or REST API -- Update Attack Signatures and Threat Campaign packages -- Compile security configurations into a binary bundle for consumption by NGINX App Protect WAF instances +- Manage WAF policies using the NGINX Instance Manager web interface or REST API +- Update attack signature and threat campaign packages +- Compile WAF configurations into a binary bundle for deployment ## Architecture -As demonstrated in Figure 1, Instance Manager lets you manage security configurations for NGINX App Protect WAF. You can define security policies, upload attack signatures and threat campaign packages, and publish common configurations out to your NGINX App Protect WAF instances. Instance Manager can compile the security configuration into a bundle before pushing the configuration to the NGINX App Protect WAF data plane instances. The NGINX Management Suite Security Monitoring module provides data visualization for NGINX App Protect, so you can monitor, analyze, and refine your policies. +NGINX Instance Manager lets you define and manage security policies, upload signature packages, and push configurations to your NGINX App Protect WAF instances. It can also compile your security configuration into a bundle before publishing it to the data plane. -{{< img src="nim/app-sec-overview.png" caption="Figure 1. 
NGINX Management Suite with NGINX App Protect Architecture Overview" alt="A diagram showing the architecture of the NGINX Management Suite with NGINX App Protect solution" width="75%">}} +The **Security Monitoring** module shows real-time data from NGINX App Protect WAF so you can track traffic, spot anomalies, and fine-tune policies. -### Security Bundle Compilation {#security-bundle} +{{< img src="nim/app-sec-overview.png" caption="Figure 1. NGINX Instance Manager with NGINX App Protect architecture overview" alt="Architecture diagram showing NGINX Instance Manager and Security Monitoring in the control plane pushing security bundles to NGINX App Protect WAF instances in the data plane" >}} -Instance Manager provides a compiler that can be configured to bundle the complete security configuration -- including JSON security policies, attack signatures, threat campaigns, and log profiles -- into a single binary in `.tgz` format. This bundle is then pushed out to each selected NGINX App Protect WAF instance. +### Security bundle compilation {#security-bundle} -Performing the security bundle compilation on Instance Manager (precompiled publication) instead of on the NGINX App Protect WAF instances provides the following benefits: +NGINX Instance Manager includes a compiler that packages your complete WAF configuration — security policies, attack signatures, threat campaigns, and log profiles — into a single `.tgz` file. It then pushes this bundle to the selected NGINX App Protect WAF instances. -- Eliminates the need to provision system resources on NGINX App Protect WAF instances to perform compilation. -- The bundles produced by Instance Manager can be reused by multiple NGINX App Protect WAF instances, instead of each instance having to perform the compilation separately. +**Why precompile with NGINX Instance Manager?** -However, if you prefer to maintain policy compilation on the NGINX App Protect WAF instance, that is supported with the following limitation: +- Saves system resources on WAF instances +- Lets you reuse the same bundle across multiple instances -- Instance Manager does not publish JSON policies to the NGINX App Protect WAF instance. JSON policies referenced in an NGINX configuration must already exist on the NGINX App Protect WAF instance. +If you choose to compile policies on the WAF instance instead, that works too—but with this limitation: -The example [`location`](https://nginx.org/en/docs/http/ngx_http_core_module.html#location) context below enables NGINX App Protect WAF and tells NGINX where to find the compiled security bundle: +- NGINX Instance Manager won’t publish `.json` policies to the WAF instance. These policies must already exist on the instance and be referenced in the NGINX config. -## Log Profile Compilation +Example [`location`](https://nginx.org/en/docs/http/ngx_http_core_module.html#location) block to enable WAF and point to the bundle: -Instance Manager can also be configured to compile log profiles when you install a new version of the WAF compiler. When you publish an NGINX configuration with the NGINX App Protect [`app_protect_security_log`](https://docs.nginx.com/nginx-app-protect/logging-overview/security-log/#app_protect_security_log) directive, Instance Manager publishes the compiled log profiles to the NGINX App Protect WAF instances when precompiled publication is enabled. 
+```nginx +location / { + app_protect_enable on; + app_protect_policy_file /etc/app_protect/policies/policy_bundle.tgz; +} +``` + +## Log profile compilation + +You can also configure NGINX Instance Manager to compile log profiles when you install a new version of the compiler. When publishing NGINX configs that include the [`app_protect_security_log`](https://docs.nginx.com/nginx-app-protect/logging-overview/security-log/#app_protect_security_log) directive, NGINX Instance Manager pushes the compiled log profile to your WAF instances (when precompiled publication is turned on). {{}} -Instance Manager and Security Monitoring both use NGINX App Protect log profiles. The configuration requirements for each are different. When using Instance Manager configuration management, you must reference the log profile in your NGINX configuration using the `.tgz` file extension instead of `.json`. +NGINX Instance Manager and Security Monitoring both use log profiles, but their configurations are different. If you're using configuration management in NGINX Instance Manager, you must reference the log profile with the `.tgz` file extension, not `.json`. {{}} -## Security Management APIs +## Security management APIs -By using the Instance Manager REST API, you can automate configuration updates to be pushed out to all of your NGINX App Protect WAF instances. You can use the Instance Manager API to manage and deploy the following security configurations: +Use the NGINX Instance Manager REST API to automate updates across your NGINX App Protect WAF instances. You can use the API to manage: -- security policies, -- log profiles, -- attack signatures, and -- threat campaigns. +- Security policies +- Log profiles +- Attack signatures +- Threat campaigns -Just as with changes made via the user interface, the Instance Manager compiler bundles all of the config updates into a single binary package that you can push out to your instances. Figure 2 shows an overview of the API endpoints available to support security policy configuration and publishing. +Just like with the web interface, the compiler creates a binary bundle with your updates that you can push to your WAF instances. -{{< img src="nim/app-sec-api-overview.png" caption="Figure 2. NGINX Management Suite with NGINX App Protect WAF Architecture Overview" alt="A diagram showing the architecture of the NGINX Management Suite with NGINX App Protect solution">}} +{{< img src="nim/app-sec-api-overview.png" caption="Figure 2. NGINX Instance Manager with NGINX App Protect WAF architecture overview" alt="Diagram showing how the NGINX Instance Manager REST API is used to create policies, upload signatures and campaigns, and publish compiled security bundles to NGINX App Protect WAF instances">}} -More information is available in the Instance Manager API documentation. +For full details, see the API documentation: {{< include "nim/how-to-access-api-docs.md" >}} From ab38b0123507b651a1d6c80a3845f254f0071755 Mon Sep 17 00:00:00 2001 From: Mike Jang <3287976+mjang@users.noreply.github.com> Date: Fri, 25 Apr 2025 11:25:38 -0700 Subject: [PATCH 12/24] new messages --- layouts/partials/list-main.html | 11 ++++++++++- 1 file changed, 10 insertions(+), 1 deletion(-) diff --git a/layouts/partials/list-main.html b/layouts/partials/list-main.html index 1416ecf7a..3dd81da99 100644 --- a/layouts/partials/list-main.html +++ b/layouts/partials/list-main.html @@ -51,8 +51,13 @@

Keep an inventory of your deployments

>>>>>>> 09d8a53f (More) {{ end }} +<<<<<<< HEAD {{ if and (eq $PageTitle "F5 NGINX One Console") (eq .Title "Organize users with RBAC")}}

Assign responsibilities with role-based access control

+======= + {{ if and (eq $PageTitle "F5 NGINX One Console") (eq .Title "Monitor your certificates")}} +

Detect and resolve expired SSL certs in minutes

+>>>>>>> 5e6d32b4 (new messages) {{ end }} <<<<<<< HEAD {{ if and (eq $PageTitle "F5 NGINX One Console") (eq .Title "Automate with the NGINX One API")}} @@ -62,11 +67,15 @@

Secure your systems with role-based access control

{{ end }} {{ if and (eq $PageTitle "F5 NGINX One Console") (eq .Title "Change multiple instances with one push")}} -

Configure and synchronize groups of NGINX instances simultaneously

+

Synchronize changes across cloud environments

{{ end }} {{ if and (eq $PageTitle "F5 NGINX One Console") (eq .Title "NGINX One API")}} +<<<<<<< HEAD

Automate NGINX fleet management

>>>>>>> 09d8a53f (More) +======= +

Automate NGINX fleet management from the CLI

+>>>>>>> 5e6d32b4 (new messages) {{ end }} {{ if and (eq $PageTitle "F5 NGINX One Console") (eq .Title "Glossary")}}

Learn terms unique to NGINX One Console

From 43ef45c8127b203b5a286fd05b1a23264355c056 Mon Sep 17 00:00:00 2001 From: Mike Jang <3287976+mjang@users.noreply.github.com> Date: Thu, 15 May 2025 09:48:37 -0700 Subject: [PATCH 13/24] Sync with presentation --- content/nginx-one/secure-your-fleet/_index.md | 6 ++++++ layouts/partials/list-main.html | 16 ++++++++++++++-- 2 files changed, 20 insertions(+), 2 deletions(-) create mode 100644 content/nginx-one/secure-your-fleet/_index.md diff --git a/content/nginx-one/secure-your-fleet/_index.md b/content/nginx-one/secure-your-fleet/_index.md new file mode 100644 index 000000000..3d693a250 --- /dev/null +++ b/content/nginx-one/secure-your-fleet/_index.md @@ -0,0 +1,6 @@ +--- +title: Secure your fleet +description: +weight: 250 +url: /nginx-one/secure-your-fleet +--- diff --git a/layouts/partials/list-main.html b/layouts/partials/list-main.html index 3dd81da99..381678a1c 100644 --- a/layouts/partials/list-main.html +++ b/layouts/partials/list-main.html @@ -19,7 +19,7 @@

- {{ if or (lt .WordCount 1) (eq $PageTitle "F5 NGINX One Console") (eq $PageTitle "F5 NGINX App Protect DoS") (eq $PageTitle "F5 NGINX Plus") }} + {{ if or (lt .WordCount 1) (eq $PageTitle "fF5 NGINX One Console") (eq $PageTitle "F5 NGINX App Protect DoS") (eq $PageTitle "F5 NGINX Plus") }}
@@ -37,8 +37,16 @@

{{ if and (eq $PageTitle "F5 NGINX One Console") (eq .Title "Get started")}}

See benefits from the NGINX One Console

{{ end }} +<<<<<<< HEAD {{ if and (eq $PageTitle "F5 NGINX One Console") (eq .Title "Draft new configurations")}}

Work with Staged Configurations

+======= + {{ if and (eq $PageTitle "F5 NGINX One Console") (eq .Title "Set up new instances")}} +

Collaborate with Staged Configurations

+ {{ end }} + {{ if and (eq $PageTitle "F5 NGINX One Console") (eq .Title "Secure your fleet")}} +

Configure alerts that match your security policies

+>>>>>>> 3dbfb304 (Sync with presentation) {{ end }} <<<<<<< HEAD +>>>>>>> 42d749df (Make more prod ready) {{ if and (eq $PageTitle "F5 NGINX One Console") (eq .Title "Watch your NGINX instances")}}

Keep an inventory of your deployments

>>>>>>> 09d8a53f (More) From 7254b72c81a96fd5657e9aa29d1c97de72a2c4cd Mon Sep 17 00:00:00 2001 From: Mike Jang <3287976+mjang@users.noreply.github.com> Date: Thu, 22 May 2025 11:15:19 -0700 Subject: [PATCH 16/24] More --- .../includes/nginx-one/how-to/add-instance.md | 3 ++ .../nginx-one/nginx-configs/add-instance.md | 30 +++---------- layouts/partials/list-main.html | 43 +------------------ 3 files changed, 10 insertions(+), 66 deletions(-) diff --git a/content/includes/nginx-one/how-to/add-instance.md b/content/includes/nginx-one/how-to/add-instance.md index 94f0b628b..b9c5c86b8 100644 --- a/content/includes/nginx-one/how-to/add-instance.md +++ b/content/includes/nginx-one/how-to/add-instance.md @@ -1,8 +1,11 @@ --- docs: +<<<<<<< HEAD files: - content/nginx-one/connect-instances/add-instance.md - content/nginx-one/getting-started.md +======= +>>>>>>> 4067237f (More) --- You can add an instance to NGINX One Console in the following ways: diff --git a/content/nginx-one/nginx-configs/add-instance.md b/content/nginx-one/nginx-configs/add-instance.md index ddf5eb095..fd3b70fd5 100644 --- a/content/nginx-one/nginx-configs/add-instance.md +++ b/content/nginx-one/nginx-configs/add-instance.md @@ -16,34 +16,16 @@ to set up a data plane key to connect your instances to NGINX One. Before you add an instance to NGINX One Console, ensure: -- You have administrator access to NGINX One Console. -- You have configured instances of NGINX that you want to manage through NGINX One Console. -- You have or are ready to configure a data plane key. -- You have or are ready to set up managed certificates. +- You have [administrator access]({{< ref "/nginx-one/rbac/roles.md" >}}) to NGINX One Console. +- You have [configured instances of NGINX]({{< ref "/nginx-one/getting-started.md#add-your-nginx-instances-to-nginx-one" >}}) that you want to manage through NGINX One Console. +- You have or are ready to configure a [data plane key]({{< ref "/nginx-one/getting-started.md#generate-data-plane-key" >}}). +- You have or are ready to set up [managed certificates]({{< ref "/nginx-one/certificates/manage-certificates.md" >}}). {{< note >}}If this is the first time an instance is being added to a Config Sync Group, and you have not yet defined the configuration for that Config Sync Group, that instance provides the template for that group. For more information, see [Configuration management]({{< ref "nginx-one/config-sync-groups/manage-config-sync-groups#configuration-management" >}}).{{< /note >}} ## Add an instance -You can add an instance to NGINX One Console in the following ways: - -- Directly, under **Instances** -- Indirectly, by selecting a Config Sync Group, and selecting **Add Instance to Config Sync Group** - -In either case, NGINX One Console gives you a choice for data plane keys: - -- Create a new key -- Use an existing key - -NGINX One Console takes the option you use, and adds the data plane key to a command that you'd use to register your target instance. You should see the command in the **Add Instance** screen in the console. - -Connect to the host where your NGINX instance is running. Run the provided command to [install NGINX Agent]({{< ref "/nginx-one/getting-started#install-nginx-agent" >}}) dependencies and packages on that host. - -```bash -curl https://agent.connect.nginx.com/nginx-agent/install | DATA_PLANE_KEY="" sh -s -- -y -``` - -Once the process is complete, you can configure that instance in your NGINX One Console. 
+{{< include "/nginx-one/how-to/add-instance.md" >}} ## Managed and Unmanaged Certificates @@ -51,7 +33,7 @@ If you add an instance with SSL/TLS certificates, those certificates can match a ### If the certificate is already managed -If you add an instance with a managed certificate, as described in [Add your NGINX instances to NGINX One], these certificates are added to your list of **Managed Certificates**. +If you add an instance with a managed certificate, as described in [Add your NGINX instances to NGINX One]({{< ref "/nginx-one/getting-started.md#add-your-nginx-instances-to-nginx-one" >}}), these certificates are added to your list of **Managed Certificates**. NGINX One Console can manage your instances along with those certificates. diff --git a/layouts/partials/list-main.html b/layouts/partials/list-main.html index 6cb957c27..be455babf 100644 --- a/layouts/partials/list-main.html +++ b/layouts/partials/list-main.html @@ -19,7 +19,7 @@

- {{ if or (lt .WordCount 1) (eq $PageTitle "fF5 NGINX One Console") (eq $PageTitle "F5 NGINX App Protect DoS") (eq $PageTitle "F5 NGINX Plus") }} + {{ if or (lt .WordCount 1) (eq $PageTitle "F5 NGINX One Console") (eq $PageTitle "F5 NGINX App Protect DoS") (eq $PageTitle "F5 NGINX Plus") }}
@@ -37,61 +37,20 @@

{{ if and (eq $PageTitle "F5 NGINX One Console") (eq .Title "Get started")}}

See benefits from the NGINX One Console

{{ end }} -<<<<<<< HEAD {{ if and (eq $PageTitle "F5 NGINX One Console") (eq .Title "Draft new configurations")}}

Work with Staged Configurations

-======= - {{ if and (eq $PageTitle "F5 NGINX One Console") (eq .Title "Set up new instances")}} -

Collaborate with Staged Configurations

{{ end }} - {{ if and (eq $PageTitle "F5 NGINX One Console") (eq .Title "Manage your NGINX instances")}}

Monitor and maintain your deployments

-======= -======= - {{ end }} --> ->>>>>>> 42d749df (Make more prod ready) - {{ if and (eq $PageTitle "F5 NGINX One Console") (eq .Title "Watch your NGINX instances")}} -

Keep an inventory of your deployments

->>>>>>> 09d8a53f (More) {{ end }} -<<<<<<< HEAD {{ if and (eq $PageTitle "F5 NGINX One Console") (eq .Title "Organize users with RBAC")}}

Assign responsibilities with role-based access control

-======= - {{ if and (eq $PageTitle "F5 NGINX One Console") (eq .Title "Monitor your certificates")}} -<<<<<<< HEAD -

Detect and resolve expired SSL certs in minutes

->>>>>>> 5e6d32b4 (new messages) -======= -

Update your SSL certs before they expire

->>>>>>> 3dbfb304 (Sync with presentation) {{ end }} -<<<<<<< HEAD {{ if and (eq $PageTitle "F5 NGINX One Console") (eq .Title "Automate with the NGINX One API")}}

Manage your NGINX fleet over REST

-======= - {{ if and (eq $PageTitle "F5 NGINX One Console") (eq .Title "Organize your administrators with RBAC")}} -

Secure your systems with role-based access control

- {{ end }} - {{ if and (eq $PageTitle "F5 NGINX One Console") (eq .Title "Change multiple instances with one push")}} -

Simplify changes with Config Sync Groups

- {{ end }} - {{ if and (eq $PageTitle "F5 NGINX One Console") (eq .Title "NGINX One API")}} -<<<<<<< HEAD -

Automate NGINX fleet management

->>>>>>> 09d8a53f (More) -======= -

Automate NGINX fleet management from the CLI

->>>>>>> 5e6d32b4 (new messages) {{ end }} {{ if and (eq $PageTitle "F5 NGINX One Console") (eq .Title "Glossary")}}

Learn terms unique to NGINX One Console

From 245683258ebeeb4f14be9808f0c6045c00277da0 Mon Sep 17 00:00:00 2001 From: Mike Jang <3287976+mjang@users.noreply.github.com> Date: Thu, 22 May 2025 11:48:39 -0700 Subject: [PATCH 17/24] more --- content/nginx-one/api/_index.md | 4 + content/nginx-one/nginx-configs/_index.md | 4 + .../manage-config-sync-groups.md | 239 ------------------ content/nginx-one/staged-configs/_index.md | 4 + 4 files changed, 12 insertions(+), 239 deletions(-) delete mode 100644 content/nginx-one/nginx-configs/manage-config-sync-groups.md diff --git a/content/nginx-one/api/_index.md b/content/nginx-one/api/_index.md index 1735740f8..b74c67e09 100644 --- a/content/nginx-one/api/_index.md +++ b/content/nginx-one/api/_index.md @@ -1,9 +1,13 @@ --- <<<<<<< HEAD +<<<<<<< HEAD title: Automate with the NGINX One API ======= title: NGINX One API >>>>>>> c7ce27ce (Draft: new N1C doc homepage) +======= +title: Automation with the NGINX One API +>>>>>>> 614bafed (more) description: weight: 700 url: /nginx-one/api diff --git a/content/nginx-one/nginx-configs/_index.md b/content/nginx-one/nginx-configs/_index.md index 18ece7e11..4b732ef6e 100644 --- a/content/nginx-one/nginx-configs/_index.md +++ b/content/nginx-one/nginx-configs/_index.md @@ -2,6 +2,7 @@ description: <<<<<<< HEAD <<<<<<< HEAD +<<<<<<< HEAD title: Manage your NGINX instances ======= title: Organize your NGINX instances @@ -9,6 +10,9 @@ title: Organize your NGINX instances ======= title: Watch your NGINX instances >>>>>>> 09d8a53f (More) +======= +title: Access and connect to your NGINX instances +>>>>>>> 614bafed (more) weight: 300 url: /nginx-one/nginx-configs --- diff --git a/content/nginx-one/nginx-configs/manage-config-sync-groups.md b/content/nginx-one/nginx-configs/manage-config-sync-groups.md deleted file mode 100644 index 41589dadd..000000000 --- a/content/nginx-one/nginx-configs/manage-config-sync-groups.md +++ /dev/null @@ -1,239 +0,0 @@ ---- -docs: null -title: Manage config sync groups -toc: true -weight: 300 -type: -- how-to ---- - -## Overview - -This guide explains how to create and manage config sync groups in the F5 NGINX One Console. Config sync groups synchronize NGINX configurations across multiple NGINX instances, ensuring consistency and ease of management. - -If you’ve used [instance groups in NGINX Instance Manager]({{< ref "/nim/nginx-instances/manage-instance-groups.md" >}}), you’ll find config sync groups in NGINX One similar, though the steps and terminology differ slightly. - -## Before you start - -Before you create and manage config sync groups, ensure: - -- You have access to the NGINX One Console. -- You have the necessary permissions to create and manage config sync groups. -- NGINX instances are properly registered with NGINX One if you plan to add existing instances to a config sync group. - -## Important considerations - -- **NGINX Agent configuration file location**: When you run the NGINX Agent installation script to register an instance with NGINX One, the script creates the `agent-dynamic.conf` file, which contains settings for the NGINX Agent, including the specified config sync group. This file is typically located in `/var/lib/nginx-agent/` on most systems; however, on FreeBSD, it's located at `/var/db/nginx-agent/`. - -- **Mixing NGINX Open Source and NGINX Plus instances**: You can add both NGINX Open Source and NGINX Plus instances to the same config sync group, but there are limitations. 
If your configuration includes features exclusive to NGINX Plus, synchronization will fail on NGINX Open Source instances because they don't support these features. NGINX One allows you to mix NGINX instance types for flexibility, but it’s important to ensure that the configurations you're applying are compatible with all instances in the group. - -- **Single config sync group membership**: An instance can join only one config sync group at a time. - -- **Configuration inheritance**: If the config sync group already has a configuration defined, that configuration will be pushed to instances when they join. - -- **Using an instance's configuration for the group configuration**: If an instance is the first to join a config sync group and the group's configuration hasn't been defined, the instance’s configuration will become the group’s configuration. Any instances added later will automatically inherit this configuration. - - {{< note >}} If you add multiple instances to a single config sync group, simultaneously (with automation), follow these steps. Your instances will inherit your desired configuration: - - 1. Create a config sync group. - 1. Add a configuration to the config sync group, so all instances inherit it. - 1. Add the instances in a separate operation. - - Your instances should synchronize with your desired configuration within 30 seconds. {{< /note >}} - -- **Persistence of a config sync group's configuration**: The configuration for a config sync group persists until you delete the group. Even if you remove all instances, the group's configuration stays intact. Any new instances that join later will automatically inherit this configuration. - -- **Config sync groups vs. cluster syncing**: Config sync groups are not the same as cluster syncing. Config sync groups let you to manage and synchronize configurations across multiple NGINX instances as a single entity. This is particularly useful when your NGINX instances are load-balanced by an external load balancer, as it ensures consistency across all instances. In contrast, cluster syncing, like [zone syncing]({{< ref "nginx/admin-guide/high-availability/zone_sync_details.md" >}}), ensures data consistency and high availability across NGINX instances in a cluster. While config sync groups focus on configuration management, cluster syncing supports failover and data consistency. - -## Create a config sync group - -Creating a config sync group allows you to manage the configurations of multiple NGINX instances as a single entity. - -1. On the left menu, select **Config Sync Groups**. -2. Select **Add Config Sync Group**. -3. In the **Name** field, type a name for your config sync group. -4. Select **Create** to add the config sync group. - -## Manage config sync group membership - -### Add an existing instance to a config sync group {#add-an-existing-instance-to-a-config-sync-group} - -You can add existing NGINX instances that are already registered with NGINX One to a config sync group. - -1. Open a command-line terminal on the NGINX instance. -2. Open the `/var/lib/nginx-agent/agent-dynamic.conf` file in a text editor. -3. At the end of the file, add a new line beginning with `instance_group:`, followed by the config sync group name. - - ``` text - instance_group: - ``` - -4. 
Restart NGINX Agent: - - ``` shell - sudo systemctl restart nginx-agent - ``` - -### Add a new instance to a config sync group {#add-a-new-instance-to-a-config-sync-group} - -When adding a new NGINX instance that is not yet registered with NGINX One, you need a data plane key to securely connect the instance. You can generate a new data plane key during the process or use an existing one if you already have it. - -1. On the left menu, select **Config Sync Groups**. -2. Select the config sync group in the list. -3. In the **Instances** pane, select **Add Instance to Config Sync Group**. -4. In the **Add Instance to Config Sync Group** dialog, select **Register a new instance with NGINX One then add to config sync group**. -5. Select **Next**. -6. **Generate a new data plane key** (choose this option if you don't have an existing key): - - - Select **Generate new key** to create a new data plane key for the instance. - - Select **Generate Data Plane Key**. - - Copy and securely store the generated key, as it is displayed only once. - -7. **Use an existing data plane key** (choose this option if you already have a key): - - - Select **Use existing key**. - - In the **Data Plane Key** field, enter the existing data plane key. - -{{}} - -{{%tab name="Virtual Machine or Bare Metal"%}} - -8. Run the provided command, which includes the data plane key, in your NGINX instance terminal to register the instance with NGINX One. -9. Select **Done** to complete the process. - -{{%/tab%}} - -{{%tab name="Docker Container"%}} - -8. **Log in to the NGINX private registry**: - - - Replace `YOUR_JWT_HERE` with your JSON Web Token (JWT) from [MyF5](https://my.f5.com/manage/s/). - - ```shell - sudo docker login private-registry.nginx.com --username=YOUR_JWT_HERE --password=none - ``` - -9. **Pull the Docker image**: - - - From the **OS Type** list, choose the appropriate operating system for your Docker image. - - After selecting the OS, run the provided command to pull the Docker image. - - **Note**: Subject to availability, you can modify the `agent: ` to match the specific NGINX Plus version, OS type, and OS version you need. For example, you might use `agent: r32-ubi-9`. For more details on version tags and how to pull an image, see [Deploying NGINX and NGINX Plus on Docker]({{< ref "nginx/admin-guide/installing-nginx/installing-nginx-docker.md#pulling-the-image" >}}). - -10. Run the provided command, which includes the data plane key, in your NGINX instance terminal to start the Docker container. - -11. Select **Done** to complete the process. - -{{%/tab%}} - -{{}} - -{{}} - -Data plane keys are required for registering NGINX instances with the NGINX One Console. These keys serve as secure tokens, ensuring that only authorized instances can connect and communicate with NGINX One. - -For more details on creating and managing data plane keys, see [Create and manage data plane keys]({{}}). - -{{}} - -### Change the config sync group for an instance - -If you need to move an NGINX instance to a different config sync group, follow these steps: - -1. Open a command-line terminal on the NGINX instance. -2. Open the `/var/lib/nginx-agent/agent-dynamic.conf` file in a text editor. -3. Locate the line that begins with `instance_group:` and change it to the name of the new config sync group. - - ``` text - instance_group: - ``` - -4. 
Restart NGINX Agent by running the following command: - - ```shell - sudo systemctl restart nginx-agent - ``` - -**Important:** If the instance is the first to join the new config sync group and a group configuration hasn’t been added manually beforehand, the instance’s configuration will automatically become the group’s configuration. Any instances added to this group later will inherit this configuration. - -### Remove an instance from a config sync group - -If you need to remove an NGINX instance from a config sync group without adding it to another group, follow these steps: - -1. Open a command-line terminal on the NGINX instance. -2. Open the `/var/lib/nginx-agent/agent-dynamic.conf` file in a text editor. -3. Locate the line that begins with `instance_group:` and either remove it or comment it out by adding a `#` at the beginning of the line. - - ```text - # instance_group: - ``` - -4. Restart NGINX Agent: - - ```shell - sudo systemctl restart nginx-agent - ``` - -By removing or commenting out this line, the instance will no longer be associated with any config sync group. - -## Add the config sync group configuration - -You can set the configuration for a config sync group in two ways: - -### Define the group configuration manually - -You can manually define the group's configuration before adding any instances. When you add instances to the group later, they automatically inherit this configuration. - -To manually set the group configuration: - -1. Follow steps 1–4 in the [**Create a config sync group**](#create-a-config-sync-group) section to create your config sync group. -2. After creating the group, select the **Configuration** tab. -3. Since no instances have been added, the **Configuration** tab will show an empty configuration with a message indicating that no config files exist yet. -4. To add a configuration, select **Edit Configuration**. -5. In the editor, define your NGINX configuration as needed. This might include adding or modifying `nginx.conf` or other related files. -6. After making your changes, select **Next** to view a split screen showing your changes. -7. If you're satisfied with the configuration, select **Save and Publish**. - -### Use an instance's configuration - -If you don't manually define a group config, the NGINX configuration of the first instance added to a config sync group becomes the group's configuration. Any additional instances added afterward inherit this group configuration. - -To set the group configuration by adding an instance: - -1. Follow the steps in the [**Add an existing instance to a config sync group**](#add-an-existing-instance-to-a-config-sync-group) or [**Add a new instance to a config sync group**](#add-a-new-instance-to-a-config-sync-group) sections to add your first instance to the group. -2. The NGINX configuration from this instance will automatically become the group's configuration. -3. You can further edit and publish this configuration by following the steps in the [**Publish the config sync group configuration**](#publish-the-config-sync-group-configuration) section. - -## Publish the config sync group configuration {#publish-the-config-sync-group-configuration} - -After the config sync group is created, you can modify and publish the group's configuration as needed. Any changes made to the group configuration will be applied to all instances within the group. - -1. On the left menu, select **Config Sync Groups**. -2. Select the config sync group in the list. -3. 
Select the **Configuration** tab to view the group's NGINX configuration. -4. To modify the group's configuration, select **Edit Configuration**. -5. Make the necessary changes to the configuration. -6. When you're finished, select **Next**. A split view displays the changes. -7. If you're satisfied with the changes, select **Save and Publish**. - -Publishing the group configuration ensures that all instances within the config sync group are synchronized with the latest group configuration. This helps maintain consistency across all instances in the group, preventing configuration drift. - -## Understanding config sync statuses - -The **Config Sync Status** column on the **Config Sync Groups** page provides insight into the synchronization state of your NGINX instances within each group. - -{{}} -| **Status** | **Description** | -|-----------------------|------------------------------------------------------------------------------------------------------------------------------------------------------| -| **In Sync** | All instances within the config sync group have configurations that match the group configuration. No action is required. | -| **Out of Sync** | At least one instance in the group has a configuration that differs from the group's configuration. You may need to review and resolve discrepancies to ensure consistency. | -| **Sync in Progress** | An instance is currently being synchronized with the group's configuration. This status appears when an instance is moved to a new group or when a configuration is being applied. | -| **Unknown** | The synchronization status of the instances in this group cannot be determined. This could be due to connectivity issues, instances being offline, or other factors. Investigating the cause of this status is recommended. | -{{}} - -Monitoring the **Config Sync Status** helps ensure that your configurations are consistently applied across all instances in a group, reducing the risk of configuration drift. 
- -## See also - -- [Create and manage data plane keys]({{< ref "/nginx-one/how-to/data-plane-keys/create-manage-data-plane-keys.md" >}}) -- [View and edit NGINX configurations]({{< ref "/nginx-one/nginx-configs/view-edit-nginx-configurations.md" >}}) diff --git a/content/nginx-one/staged-configs/_index.md b/content/nginx-one/staged-configs/_index.md index 66cf13915..b6cb02bae 100644 --- a/content/nginx-one/staged-configs/_index.md +++ b/content/nginx-one/staged-configs/_index.md @@ -1,11 +1,15 @@ --- description: <<<<<<< HEAD +<<<<<<< HEAD title: Draft new configurations weight: 400 url: /nginx-one/staged-configs ======= title: Set up new instances +======= +title: Draft new instances +>>>>>>> 614bafed (more) weight: 200 url: /nginx-one/how-to/staged-configs >>>>>>> c7ce27ce (Draft: new N1C doc homepage) From 662a65ec690fc62bdb3ce7b225f5f65bcda1fa33 Mon Sep 17 00:00:00 2001 From: Mike Jang <3287976+mjang@users.noreply.github.com> Date: Thu, 22 May 2025 12:34:50 -0700 Subject: [PATCH 18/24] based on Jason's feedback --- content/nginx-one/api/_index.md | 4 ++++ content/nginx-one/certificates/manage-certificates.md | 2 +- content/nginx-one/config-sync-groups/add-file-csg.md | 2 +- .../config-sync-groups/manage-config-sync-groups.md | 2 +- content/nginx-one/nginx-configs/_index.md | 4 ++++ content/nginx-one/nginx-configs/add-file.md | 2 +- .../nginx-configs/view-edit-nginx-configurations.md | 8 ++++++++ content/nginx-one/rbac/_index.md | 4 ++++ content/nginx-one/staged-configs/_index.md | 4 ++++ 9 files changed, 28 insertions(+), 4 deletions(-) diff --git a/content/nginx-one/api/_index.md b/content/nginx-one/api/_index.md index b74c67e09..5372d8f12 100644 --- a/content/nginx-one/api/_index.md +++ b/content/nginx-one/api/_index.md @@ -1,6 +1,7 @@ --- <<<<<<< HEAD <<<<<<< HEAD +<<<<<<< HEAD title: Automate with the NGINX One API ======= title: NGINX One API @@ -8,6 +9,9 @@ title: NGINX One API ======= title: Automation with the NGINX One API >>>>>>> 614bafed (more) +======= +title: Automate with the NGINX One API +>>>>>>> 4da8aa7e (based on Jason's feedback) description: weight: 700 url: /nginx-one/api diff --git a/content/nginx-one/certificates/manage-certificates.md b/content/nginx-one/certificates/manage-certificates.md index 13c532e38..136a4e299 100644 --- a/content/nginx-one/certificates/manage-certificates.md +++ b/content/nginx-one/certificates/manage-certificates.md @@ -193,5 +193,5 @@ To convert these cerificates to managed, start with the Certificates menu, and s ## See also - [Create and manage data plane keys]({{< ref "/nginx-one/how-to/data-plane-keys/create-manage-data-plane-keys.md" >}}) -- [View and edit NGINX configurations]({{< ref "/nginx-one/nginx-configs/view-edit-nginx-configurations.md" >}}) +- [Add an instance]({{< ref "/nginx-one/nginx-configs/add-instance.md" >}}) - [Add a file in a configuration]({{< ref "/nginx-one/nginx-configs/add-file.md" >}}) diff --git a/content/nginx-one/config-sync-groups/add-file-csg.md b/content/nginx-one/config-sync-groups/add-file-csg.md index 9b6905aea..c416848a8 100644 --- a/content/nginx-one/config-sync-groups/add-file-csg.md +++ b/content/nginx-one/config-sync-groups/add-file-csg.md @@ -63,5 +63,5 @@ With this option, You can incorporate [Managed certificates]({{< ref "/nginx-one ## See also - [Create and manage data plane keys]({{< ref "/nginx-one/how-to/data-plane-keys/create-manage-data-plane-keys.md" >}}) -- [View and edit NGINX configurations]({{< ref "/nginx-one/nginx-configs/view-edit-nginx-configurations.md" >}}) +- [Add an 
NGINX instance]({{< ref "/nginx-one/nginx-configs/add-instance.md" >}}) - [Manage certificates]({{< ref "/nginx-one/certificates/manage-certificates.md" >}}) diff --git a/content/nginx-one/config-sync-groups/manage-config-sync-groups.md b/content/nginx-one/config-sync-groups/manage-config-sync-groups.md index 8b24001cd..056414a67 100644 --- a/content/nginx-one/config-sync-groups/manage-config-sync-groups.md +++ b/content/nginx-one/config-sync-groups/manage-config-sync-groups.md @@ -258,4 +258,4 @@ Monitor the **Config Sync Status** column. It can help you ensure that your conf ## See also - [Create and manage data plane keys]({{< ref "/nginx-one/how-to/data-plane-keys/create-manage-data-plane-keys.md" >}}) -- [View and edit NGINX configurations]({{< ref "/nginx-one/nginx-configs/view-edit-nginx-configurations.md" >}}) +- [Add an NGINX instance]({{< ref "/nginx-one/nginx-configs/add-instance.md" >}}) diff --git a/content/nginx-one/nginx-configs/_index.md b/content/nginx-one/nginx-configs/_index.md index 4b732ef6e..3dcf3860b 100644 --- a/content/nginx-one/nginx-configs/_index.md +++ b/content/nginx-one/nginx-configs/_index.md @@ -3,6 +3,7 @@ description: <<<<<<< HEAD <<<<<<< HEAD <<<<<<< HEAD +<<<<<<< HEAD title: Manage your NGINX instances ======= title: Organize your NGINX instances @@ -13,6 +14,9 @@ title: Watch your NGINX instances ======= title: Access and connect to your NGINX instances >>>>>>> 614bafed (more) +======= +title: Manage your NGINX instances +>>>>>>> 4da8aa7e (based on Jason's feedback) weight: 300 url: /nginx-one/nginx-configs --- diff --git a/content/nginx-one/nginx-configs/add-file.md b/content/nginx-one/nginx-configs/add-file.md index 15e301e57..0f7570ba6 100644 --- a/content/nginx-one/nginx-configs/add-file.md +++ b/content/nginx-one/nginx-configs/add-file.md @@ -76,6 +76,6 @@ Enter the name of the desired configuration file, such as `abc.conf` and select - [Manage certificates]({{< ref "/nginx-one/nginx-configs/certificates/manage-certificates.md" >}}) ======= - [Create and manage data plane keys]({{< ref "/nginx-one/how-to/data-plane-keys/create-manage-data-plane-keys.md" >}}) -- [View and edit NGINX configurations]({{< ref "/nginx-one/nginx-configs/view-edit-nginx-configurations.md" >}}) +- [Add an NGINX instance]({{< ref "/nginx-one/nginx-configs/add-instance.md" >}}) - [Manage certificates]({{< ref "/nginx-one/certificates/manage-certificates.md" >}}) >>>>>>> c7ce27ce (Draft: new N1C doc homepage) diff --git a/content/nginx-one/nginx-configs/view-edit-nginx-configurations.md b/content/nginx-one/nginx-configs/view-edit-nginx-configurations.md index 91b37552d..074dbdbaa 100644 --- a/content/nginx-one/nginx-configs/view-edit-nginx-configurations.md +++ b/content/nginx-one/nginx-configs/view-edit-nginx-configurations.md @@ -1,11 +1,15 @@ --- # We use sentence case and present imperative tone <<<<<<< HEAD +<<<<<<< HEAD title: View and edit an NGINX instance # Weights are assigned in increments of 100: determines sorting order weight: 200 ======= title: View and edit NGINX configurations +======= +title: View and edit an NGINX instance +>>>>>>> 4da8aa7e (based on Jason's feedback) # Weights are assigned in increments of 100: determines sorting order weight: 300 >>>>>>> c7ce27ce (Draft: new N1C doc homepage) @@ -18,6 +22,7 @@ product: NGINX One --- +<<<<<<< HEAD <<<<<<< HEAD This guide explains how to edit the configuration of an existing **Instance** in your NGINX One Console. 
======= @@ -33,6 +38,9 @@ Before you add **Instances** to NGINX One Console, ensure: Once you've registered your NGINX Instances with the F5 NGINX One Console, you can view and edit their NGINX configurations on the **Instances** details page. >>>>>>> c7ce27ce (Draft: new N1C doc homepage) +======= +This guide explains how to edit the configuration of an existing **Instance** in your NGINX One Console. +>>>>>>> 4da8aa7e (based on Jason's feedback) To view and edit an NGINX configuration, follow these steps: diff --git a/content/nginx-one/rbac/_index.md b/content/nginx-one/rbac/_index.md index 4e50aa4ca..cac8d28a1 100644 --- a/content/nginx-one/rbac/_index.md +++ b/content/nginx-one/rbac/_index.md @@ -1,10 +1,14 @@ --- <<<<<<< HEAD +<<<<<<< HEAD title: Organize users with RBAC description: weight: 600 ======= title: Organize your administrators with RBAC +======= +title: Organize administrators with RBAC +>>>>>>> 4da8aa7e (based on Jason's feedback) description: weight: 500 >>>>>>> c7ce27ce (Draft: new N1C doc homepage) diff --git a/content/nginx-one/staged-configs/_index.md b/content/nginx-one/staged-configs/_index.md index b6cb02bae..848874c60 100644 --- a/content/nginx-one/staged-configs/_index.md +++ b/content/nginx-one/staged-configs/_index.md @@ -2,6 +2,7 @@ description: <<<<<<< HEAD <<<<<<< HEAD +<<<<<<< HEAD title: Draft new configurations weight: 400 url: /nginx-one/staged-configs @@ -10,6 +11,9 @@ title: Set up new instances ======= title: Draft new instances >>>>>>> 614bafed (more) +======= +title: Draft new instances (Staged Configuration) +>>>>>>> 4da8aa7e (based on Jason's feedback) weight: 200 url: /nginx-one/how-to/staged-configs >>>>>>> c7ce27ce (Draft: new N1C doc homepage) From ae8e34abab6488506c4af9b808a4b8ac07a4cb02 Mon Sep 17 00:00:00 2001 From: Mike Jang <3287976+mjang@users.noreply.github.com> Date: Fri, 23 May 2025 12:21:45 -0700 Subject: [PATCH 19/24] More --- content/nginx-one/nginx-configs/_index.md | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/content/nginx-one/nginx-configs/_index.md b/content/nginx-one/nginx-configs/_index.md index 3dcf3860b..5cf29aa32 100644 --- a/content/nginx-one/nginx-configs/_index.md +++ b/content/nginx-one/nginx-configs/_index.md @@ -4,6 +4,7 @@ description: <<<<<<< HEAD <<<<<<< HEAD <<<<<<< HEAD +<<<<<<< HEAD title: Manage your NGINX instances ======= title: Organize your NGINX instances @@ -17,6 +18,9 @@ title: Access and connect to your NGINX instances ======= title: Manage your NGINX instances >>>>>>> 4da8aa7e (based on Jason's feedback) +======= +title: Add and manage your NGINX instances +>>>>>>> f6a622d3 (More) weight: 300 url: /nginx-one/nginx-configs --- From c5a61ea577f63bb0281c24d21aac33230a984a16 Mon Sep 17 00:00:00 2001 From: Mike Jang <3287976+mjang@users.noreply.github.com> Date: Fri, 23 May 2025 12:30:00 -0700 Subject: [PATCH 20/24] More --- content/nginx-one/nginx-configs/_index.md | 4 ++++ content/nginx-one/staged-configs/_index.md | 4 ++++ 2 files changed, 8 insertions(+) diff --git a/content/nginx-one/nginx-configs/_index.md b/content/nginx-one/nginx-configs/_index.md index 5cf29aa32..1e0f420e0 100644 --- a/content/nginx-one/nginx-configs/_index.md +++ b/content/nginx-one/nginx-configs/_index.md @@ -5,6 +5,7 @@ description: <<<<<<< HEAD <<<<<<< HEAD <<<<<<< HEAD +<<<<<<< HEAD title: Manage your NGINX instances ======= title: Organize your NGINX instances @@ -21,6 +22,9 @@ title: Manage your NGINX instances ======= title: Add and manage your NGINX instances >>>>>>> f6a622d3 (More) +======= +title: Add and 
manage NGINX instances +>>>>>>> 2fec8a44 (More) weight: 300 url: /nginx-one/nginx-configs --- diff --git a/content/nginx-one/staged-configs/_index.md b/content/nginx-one/staged-configs/_index.md index 848874c60..3da38e3e6 100644 --- a/content/nginx-one/staged-configs/_index.md +++ b/content/nginx-one/staged-configs/_index.md @@ -3,6 +3,7 @@ description: <<<<<<< HEAD <<<<<<< HEAD <<<<<<< HEAD +<<<<<<< HEAD title: Draft new configurations weight: 400 url: /nginx-one/staged-configs @@ -14,6 +15,9 @@ title: Draft new instances ======= title: Draft new instances (Staged Configuration) >>>>>>> 4da8aa7e (based on Jason's feedback) +======= +title: Draft new instances (Staged Configs) +>>>>>>> 2fec8a44 (More) weight: 200 url: /nginx-one/how-to/staged-configs >>>>>>> c7ce27ce (Draft: new N1C doc homepage) From f2e708f443bf03bba03aa56d20b914fc76752d15 Mon Sep 17 00:00:00 2001 From: Mike Jang <3287976+mjang@users.noreply.github.com> Date: Tue, 20 May 2025 11:46:34 -0700 Subject: [PATCH 21/24] Feature: secure your fleet --- content/nginx-one/getting-started.md | 11 +++++++++++ 1 file changed, 11 insertions(+) diff --git a/content/nginx-one/getting-started.md b/content/nginx-one/getting-started.md index 6a761fa2f..162dd363f 100644 --- a/content/nginx-one/getting-started.md +++ b/content/nginx-one/getting-started.md @@ -9,6 +9,17 @@ docs: DOCS-1393 This guide provides step-by-step instructions on how to activate and start using F5 NGINX One Console. NGINX One is a management console for monitoring and managing NGINX data plane instances. +## Confirm access to the F5 Distributed Cloud + +You can access NGINX One Console through the F5 Distributed Cloud. + +1. Log in to [MyF5](https://my.f5.com/manage/s/). +1. Go to **My Products & Plans > Subscriptions** to see your active subscriptions. +1. Within one of your subscriptions, you should see either an NGINX and/or a Distributed Cloud subscription + - If the above does not appear in any of your subscriptions, please reach out to either your F5 Account Team or Customer Success Manager. + +Now identify your tenant. You or someone in your organization should have received an email from no-reply@cloud.f5.com asking you to update your password. The account name referenced in the E-Mail is the tenant name. Navigate to https://YOUR_TENANT_NAME.console.ves.volterra.io to access the F5 Distributed Cloud. + ## Enable the NGINX One service {#enable-nginx-one} To get started using NGINX One, enable the service on F5 Distributed Cloud. 
From a8a00c06925681bd16aae55d99e8856c7bdf24a1 Mon Sep 17 00:00:00 2001 From: Mike Jang <3287976+mjang@users.noreply.github.com> Date: Thu, 22 May 2025 10:45:53 -0700 Subject: [PATCH 22/24] More --- content/nginx-one/secure-your-fleet/_index.md | 6 ++++++ 1 file changed, 6 insertions(+) create mode 100644 content/nginx-one/secure-your-fleet/_index.md diff --git a/content/nginx-one/secure-your-fleet/_index.md b/content/nginx-one/secure-your-fleet/_index.md new file mode 100644 index 000000000..c35f4beab --- /dev/null +++ b/content/nginx-one/secure-your-fleet/_index.md @@ -0,0 +1,6 @@ +--- +title: Secure your fleet +description: +weight: 350 +url: /nginx-one/secure-your-fleet +--- From 62df7d8114f9353cbcbe16d1c306ae71aeb5665c Mon Sep 17 00:00:00 2001 From: Mike Jang <3287976+mjang@users.noreply.github.com> Date: Fri, 23 May 2025 13:37:45 -0700 Subject: [PATCH 23/24] More --- content/nginx-one/secure-your-fleet/_index.md | 2 +- content/nginx-one/secure-your-fleet/secure.md | 53 +++++++++++++++++++ 2 files changed, 54 insertions(+), 1 deletion(-) create mode 100644 content/nginx-one/secure-your-fleet/secure.md diff --git a/content/nginx-one/secure-your-fleet/_index.md b/content/nginx-one/secure-your-fleet/_index.md index c35f4beab..d9fea82ff 100644 --- a/content/nginx-one/secure-your-fleet/_index.md +++ b/content/nginx-one/secure-your-fleet/_index.md @@ -1,6 +1,6 @@ --- title: Secure your fleet description: -weight: 350 +weight: 450 url: /nginx-one/secure-your-fleet --- diff --git a/content/nginx-one/secure-your-fleet/secure.md b/content/nginx-one/secure-your-fleet/secure.md new file mode 100644 index 000000000..cca84bf13 --- /dev/null +++ b/content/nginx-one/secure-your-fleet/secure.md @@ -0,0 +1,53 @@ +--- +title: "Set up security alerts" +weight: 500 +toc: true +type: how-to +product: NGINX One +docs: DOCS-000 +--- + +The F5 Distributed Cloud generates alerts from all its services including NGINX One. You can configure rules to send those alerts to a receiver of your choice. These instructions walk you through how to configure an email notification when we see new CVEs or detect security issues with your NGiNX instances. + +This page describes basic steps to set up an email alert. For authoritative documentation, see +[Alerts - Email & SMS](https://flatrender.tora.reviews/docs-v2/shared-configuration/how-tos/alerting/alerts-email-sms). + +## Configure alerts to be sent to your email + +To configure security-related alerts, follow these steps: + +1. Navigate to the F5 Distributed Cloud Console at https://INSERT_YOUR_TENANT_NAME.console.ves.volterra.io. +1. Find **Audit Logs & Alerts** > **Alerts Management** +1. Select **Add Alert Receiver** +1. Configure the **Alert Receivers** + 1. Enter the name of your choice + 1. (Optional) Specify a label and description +1. Under **Receiver**, select Email and enter your email address. +1. Select **Save and Exit**. +1. Your Email receiver should now appear on the list of Alert Receivers. +1. Under the Actions column, select Verify Email. +1. Select **Send email** to confirm +1. You should receive a verification code in the email provided. Copy that code. +1. Under the Actions column, select **Enter verification code**. +1. Paste the code and select **Verify receiver**. + +## Configure Alert Policy + +Next, configure the policy that identifies when you'll get an alert. + +1. Navigate to **Alerts Management > Alert Policiei**. +1. Select Add Alert Policy. + 1. Enter the name of your choice + 1. (Optional) Specify a label and description +1. 
Under **Alert Receiver Configuration > Alert Receivers**, select the Alert Receiver you just created.