Try link fixes 20241025 (#276)
* Broken link fixes from FAQ changes

Signed-off-by: Justine Geffen <[email protected]>

* Broken link fixes

Signed-off-by: Justine Geffen <[email protected]>

* Update azure_troubleshooting.mdx

Signed-off-by: Justine Geffen <[email protected]>

* Update api_and_cli.mdx

Signed-off-by: Justine Geffen <[email protected]>

* Update troubleshooting.mdx

Signed-off-by: Justine Geffen <[email protected]>

* Fixing broken links

* Link fixing

* link fixing for troubleshooting

* Additional link fixes for FAQs

* rm MultiQC

---------

Signed-off-by: Justine Geffen <[email protected]>
Co-authored-by: Justine Geffen <[email protected]>
jason-seqera and justinegeffen authored Oct 25, 2024
1 parent 95f2642 commit aebdf99
Showing 8 changed files with 26 additions and 26 deletions.
2 changes: 1 addition & 1 deletion multiqc_docs/multiqc_repo
Submodule multiqc_repo updated 41 files
+1 −1 .github/workflows/integration_test.yml
+1 −1 .github/workflows/unit_tests.yml
+0 −32 CHANGELOG.md
+1 −1 docs/markdown/custom_content/index.md
+1 −1 docs/markdown/modules/bcftools.md
+1 −1 docs/markdown/modules/bclconvert.md
+1 −1 docs/markdown/modules/biscuit.md
+1 −1 docs/markdown/modules/cutadapt.md
+1 −1 docs/markdown/modules/deeptools.md
+2 −2 docs/markdown/modules/fastqc.md
+1 −1 docs/markdown/modules/lima.md
+2 −2 docs/markdown/modules/preseq.md
+1 −1 docs/markdown/modules/qualimap.md
+1 −1 docs/markdown/modules/quast.md
+1 −1 docs/markdown/modules/supernova.md
+1 −1 docs/markdown/modules/vcftools.md
+2 −2 docs/markdown/usage/scripts.md
+8 −15 multiqc/base_module.py
+0 −8 multiqc/config_defaults.yaml
+2 −9 multiqc/core/exec_modules.py
+1 −2 multiqc/interactive.py
+4 −4 multiqc/modules/custom_content/custom_content.py
+4 −4 multiqc/modules/ngsbits/mappingqc.py
+2 −8 multiqc/modules/ngsbits/ngsbits.py
+12 −18 multiqc/modules/ngsbits/readqc.py
+0 −111 multiqc/modules/ngsbits/samplegender.py
+0 −3 multiqc/modules/ngsbits/tests/__init__.py
+0 −19 multiqc/modules/ngsbits/tests/test_samplegender.py
+1 −1 multiqc/modules/ngsbits/utils.py
+3 −2 multiqc/modules/rsem/rsem.py
+0 −8 multiqc/modules/samtools/stats.py
+9 −20 multiqc/multiqc.py
+70 −33 multiqc/plots/table_object.py
+1 −0 multiqc/report.py
+0 −2 multiqc/search_patterns.yaml
+5 −9 multiqc/templates/default/assets/js/plots/violin.js
+39 −89 multiqc/validation.py
+1 −1 scripts/print_changelog.py
+1 −2 tests/conftest.py
+25 −33 tests/test_custom_content.py
+12 −12 tests/test_plots.py
@@ -102,7 +102,7 @@ After you have created a resource group and Storage account, create a [Batch acc
- **Active jobs and schedules**: Each Nextflow process will require an active Azure Batch job per pipeline while running, so increase this number to a high level. See [here][az-learn-jobs] to learn more about jobs in Azure Batch.
- **Pools**: Each platform compute environment requires one Azure Batch pool. Each pool is composed of multiple machines of one virtual machine size.
:::note
-To use separate pools for head and compute nodes, see [this FAQ entry](../faqs.mdx#azure).
+To use separate pools for head and compute nodes, see [this FAQ entry](../troubleshooting_and_faqs/azure_troubleshooting.mdx).
:::
- **Batch accounts per region per subscription**: Set this to the number of Azure Batch accounts per region per subscription. Only one is required.
- **Spot/low-priority vCPUs**: Platform does not support spot or low-priority machines when using Forge, so when using Forge this number can be zero. When manually setting up a pool, select an appropriate number of concurrent vCPUs here.
@@ -295,7 +295,7 @@ Create a manual Seqera Azure Batch compute environment:
1. Set the **Config mode** to **Manual**.
1. Enter the **Compute Pool name**. This is the name of the Azure Batch pool you created previously in the Azure Batch account.
:::note
-The default Azure Batch implementation uses a single pool for head and compute nodes. To use separate pools for head and compute nodes (for example, to use low-priority VMs for compute jobs), see [this FAQ entry](../faqs.mdx#azure).
+The default Azure Batch implementation uses a single pool for head and compute nodes. To use separate pools for head and compute nodes (for example, to use low-priority VMs for compute jobs), see [this FAQ entry](../troubleshooting_and_faqs/azure_troubleshooting.mdx).
:::
1. Enter a user-assigned **Managed identity client ID**, if one is attached to your Azure Batch pool. See [Managed Identity](#managed-identity) below.
1. Apply [**Resource labels**](../resource-labels/overview.mdx). This will populate the **Metadata** fields of the Azure Batch pool.
2 changes: 1 addition & 1 deletion platform_versioned_docs/version-24.1/launch/advanced.mdx
@@ -62,7 +62,7 @@ Replace Nextflow process commands with command [stubs](https://www.nextflow.io/d
Nextflow will attempt to run the script named `main.nf` in the project repository by default. You can configure a custom script filename in `manifest.mainScript` or you can provide the script filename in this field.
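
For illustration, a minimal sketch of how a custom entry script could be declared in `nextflow.config` — the path `workflows/my_pipeline.nf` is a hypothetical example, not taken from this page:

```nextflow
// Sketch only — point Nextflow at a custom entry script instead of main.nf.
// 'workflows/my_pipeline.nf' is a hypothetical path; use your repository's own script.
manifest {
    mainScript = 'workflows/my_pipeline.nf'
}
```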

:::note
-If you specify a custom script filename, the root of the default branch in your pipeline repository must still contain blank `main.nf` and `nextflow.config` files. See [Nextflow configuration](../faqs.mdx#nextflow-configuration) for more information on this known Nextflow behavior.
+If you specify a custom script filename, the root of the default branch in your pipeline repository must still contain blank `main.nf` and `nextflow.config` files. See [Nextflow configuration](../troubleshooting_and_faqs/nextflow.mdx) for more information on this known Nextflow behavior.
:::

### Workflow entry name
@@ -28,7 +28,7 @@ curl -X GET "https://$TOWER_SERVER_URL/workflow/$WORKFLOW_ID/tasks? workspaceId=

**Connection errors when creating or viewing AWS Batch compute environments with `tw compute-envs` commands**

-Versions of tw CLI earlier than v0.8 do not support the `SPOT_PRICE_CAPACITY_OPTIMIZED` [allocation strategy](./compute-envs/aws-batch.mdx#advanced-options) in AWS Batch. Creating or viewing AWS Batch compute environments with this allocation strategy will lead to errors. This issue was [addressed in CLI v0.9](https://github.com/seqeralabs/tower-cli/issues/332).
+Versions of tw CLI earlier than v0.8 do not support the `SPOT_PRICE_CAPACITY_OPTIMIZED` [allocation strategy](../compute-envs/aws-batch.mdx#advanced-options) in AWS Batch. Creating or viewing AWS Batch compute environments with this allocation strategy will lead to errors. This issue was [addressed in CLI v0.9](https://github.com/seqeralabs/tower-cli/issues/332).

**Segfault errors**

@@ -50,7 +50,7 @@ HTTP must not be used in production environments.

**Resume/relaunch runs with tw CLI**

-Runs can be [relaunched](./launch/launchpad.mdx#relaunch-pipeline-run) with the `tw runs relaunch` command.
+Runs can be [relaunched](../launch/launchpad.mdx#relaunch-pipeline-run) with the `tw runs relaunch` command.

```
tw runs relaunch -i 3adMwRdD75ah6P -w 161372824019700
@@ -67,4 +67,4 @@ tw runs list -w 161372824019700
5fUvqUMB89zr2W | SUBMITTED | nf/hello | magical_darwin | seqera-user | Tue, 10 Sep 2022 14:40:52 GMT
3adMwRdD75ah6P | SUCCEEDED | nf/hello | high_hodgkin | seqera-user | Tue, 10 Sep 2022 13:10:50 GMT
-```
+```
@@ -19,7 +19,7 @@ The default Azure Batch implementation in Seqera Platform uses a single pool for
Both pools must meet the requirements of a pre-existing pool as detailed in the [Nextflow documentation](https://www.nextflow.io/docs/latest/azure.html#requirements-on-pre-existing-named-pools).
:::

-2. Create a manual [Azure Batch](./compute-envs/azure-batch.mdx#manual) compute environment in Seqera Platform.
+2. Create a manual [Azure Batch](../compute-envs/azure-batch.mdx#manual) compute environment in Seqera Platform.
3. In **Compute pool name**, specify your dedicated Batch pool.
4. Specify the Low priority pool using the `process.queue` [directive](https://www.nextflow.io/docs/latest/process.html#queue) in your `nextflow.config` file either via the launch form, or your pipeline repository's `nextflow.config` file.
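
As a hedged example of step 4, the Low priority pool can be selected for compute tasks with the `queue` directive in `nextflow.config` — the pool name `lowpriority-pool` is hypothetical, not taken from this page:

```nextflow
// Sketch only — route compute tasks to a Low priority pool while the head job
// runs in the dedicated pool named in the compute environment.
// 'lowpriority-pool' is a hypothetical name; use your own Azure Batch pool.
process.queue = 'lowpriority-pool'
```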

@@ -29,7 +29,7 @@ Both pools must meet the requirements of a pre-existing pool as detailed in the

This error can occur if your Nextflow pod uses an Azure Files-type (SMB) persistent volume as its storage medium. By default, the `jgit` library used by Nextflow attempts a filesystem link operation which [is not supported](https://docs.microsoft.com/en-us/azure/storage/files/files-smb-protocol?tabs=azure-portal#limitations) by Azure Files (SMB).

-To avoid this problem, add the following code snippet in your pipeline's [**Pre-run script**](./launch/advanced.mdx#pre--post-run-scripts) field:
+To avoid this problem, add the following code snippet in your pipeline's [**Pre-run script**](../launch/advanced.mdx#pre--post-run-scripts) field:

```bash
cat <<EOT > ~/.gitconfig
@@ -42,10 +42,10 @@ EOT

**Problem with the SSL CA cert**

-This can occur if a tool/library in your task container requires SSL certificates to validate the identity of an external data source. Mount SSL certificates into the container to resolve this issue. See [SSL/TLS](./enterprise/configuration/ssl_tls.mdx#configure-tower-to-trust-your-private-certificate) for more information.
+This can occur if a tool/library in your task container requires SSL certificates to validate the identity of an external data source. Mount SSL certificates into the container to resolve this issue. See [SSL/TLS](../enterprise/configuration/ssl_tls.mdx#configure-tower-to-trust-your-private-certificate) for more information.

**Azure SQL database error: _Connections using insecure transport are prohibited while --require_secure_transport=ON_**

This error is due to Azure's default MySQL behavior which enforces the SSL connections between your server and client application, as detailed in [SSL/TLS connectivity in Azure Database for MySQL](https://learn.microsoft.com/en-us/azure/mysql/single-server/concepts-ssl-connection-security). To fix this, append `useSSL=true&enabledSslProtocolSuites=TLSv1.2&trustServerCertificate=true` to your `TOWER_DB_URL` connection string. For example:

-`TOWER_DB_URL: jdbc:mysql://mysql:3306/tower?permitMysqlScheme=true/azuredatabase.com/tower?serverTimezone=UTC&useSSL=true&enabledSslProtocolSuites=TLSv1.2&trustServerCertificate=true`
+`TOWER_DB_URL: jdbc:mysql://mysql:3306/tower?permitMysqlScheme=true/azuredatabase.com/tower?serverTimezone=UTC&useSSL=true&enabledSslProtocolSuites=TLSv1.2&trustServerCertificate=true`
@@ -19,7 +19,7 @@ To minimize disruption on existing pipelines, version 22.1.x and later are confi

**Invoke Nextflow CLI run arguments during Seqera launch**

-From [Nextflow v22.09.1-edge](https://github.com/nextflow-io/nextflow/releases/tag/v22.09.1-edge), you can specify [Nextflow CLI run arguments](https://www.nextflow.io/docs/latest/cli.html?highlight=dump#run) when invoking a pipeline from Seqera. Set the `NXF_CLI_OPTS` environment variable using a [pre-run script](./launch/advanced.mdx#pre--post-run-scripts):
+From [Nextflow v22.09.1-edge](https://github.com/nextflow-io/nextflow/releases/tag/v22.09.1-edge), you can specify [Nextflow CLI run arguments](https://www.nextflow.io/docs/latest/cli.html?highlight=dump#run) when invoking a pipeline from Seqera. Set the `NXF_CLI_OPTS` environment variable using a [pre-run script](../launch/advanced.mdx#pre--post-run-scripts):

```
export NXF_CLI_OPTS='-dump-hashes'
@@ -29,7 +29,7 @@ export NXF_CLI_OPTS='-dump-hashes'

Nextflow resolves relative paths against the current working directory. In a classic grid HPC, this normally corresponds to a subdirectory of the `$HOME` directory. In a cloud execution environment, however, the path will be resolved relative to the _container file system_, meaning files will be lost when the container is terminated. See [here](https://github.com/nextflow-io/nextflow/issues/2661#issuecomment-1047259845) for more details.

-Specify the absolute path to your persistent storage using the `NXF_FILE_ROOT` environment variable in your [`nextflow.config`](./launch/advanced.mdx#nextflow-config-file) file. This resolves the relative paths defined in your Nextflow script so that output files are written to your stateful storage, rather than ephemeral container storage.
+Specify the absolute path to your persistent storage using the `NXF_FILE_ROOT` environment variable in your [`nextflow.config`](../launch/advanced.mdx#nextflow-config-file) file. This resolves the relative paths defined in your Nextflow script so that output files are written to your stateful storage, rather than ephemeral container storage.

**Nextflow: Ignore Singularity cache**

@@ -69,7 +69,7 @@ The following configuration is suggested to overcome AWS limitations:

- Head Job CPUs: 16
- Head Job Memory: 60000
-- [Pre-run script](./launch/advanced.mdx#pre--post-run-scripts): `export NXF_OPTS="-Xms20G -Xmx40G"`
+- [Pre-run script](../launch/advanced.mdx#pre--post-run-scripts): `export NXF_OPTS="-Xms20G -Xmx40G"`
- Increase chunk size and slow down the number of transfers using `nextflow.config`:

```
@@ -130,7 +130,7 @@ aws {

This change occurs because Seqera superimposes its 10 MB default value rather than the value specified in your `nextflow.config`.

-To force the Seqera-invoked job to use your `nextflow.config` value, add the configuration setting in the workspace Launch screen's [**Nextflow config file** field](./launch/launchpad.mdx). For the example above, you would add `aws.client.uploadChunkSize = 209715200 // 200 MB`.
+To force the Seqera-invoked job to use your `nextflow.config` value, add the configuration setting in the workspace Launch screen's [**Nextflow config file** field](../launch/launchpad.mdx). For the example above, you would add `aws.client.uploadChunkSize = 209715200 // 200 MB`.
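
For example, the override from the scenario above would be entered in the **Nextflow config file** field like this — a sketch reusing the value already given on this page:

```nextflow
// Matches the example above — forces the 200 MB chunk size from your own config
// instead of the superimposed 10 MB default.
aws.client.uploadChunkSize = 209715200 // 200 MB
```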

Nextflow configuration values affected by this behaviour include:

@@ -139,7 +139,7 @@ Nextflow configuration values affected by this behaviour include:

**Fusion v1 execution: _Missing output file(s) [X] expected by process [Y]_ error**

-Fusion v1 has a limitation which causes tasks that run for less than 60 seconds to fail as the output file generated by the task is not yet detected by Nextflow. This is a limitation inherited from a Goofys driver used by the Fusion v1 implementation. [Fusion v2](./supported_software/fusion/fusion.mdx) resolves this issue.
+Fusion v1 has a limitation which causes tasks that run for less than 60 seconds to fail as the output file generated by the task is not yet detected by Nextflow. This is a limitation inherited from a Goofys driver used by the Fusion v1 implementation. [Fusion v2](../supported_software/fusion/fusion.mdx) resolves this issue.

If you can't update to Fusion v2, this issue can be addressed by instructing Nextflow to wait for 60 seconds after the task completes.
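
One way to add that delay — a sketch only, not necessarily the exact snippet this guide recommends — is an `afterScript` directive in your Nextflow configuration:

```nextflow
// Sketch only — keep each task alive for 60 seconds after it completes so
// Fusion v1 (Goofys) has time to surface the output files to Nextflow.
process.afterScript = 'sleep 60'
```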

@@ -203,11 +203,11 @@ The Groovy shell used by Nextflow to execute your workflow has a hard limit on s

Your Seqera installation knows the nf-launcher image version it needs and specifies this value automatically when launching a pipeline.

-If you're restricted from using public container registries, see Seqera Enterprise release [instructions](./enterprise/release_notes/23.2.mdx#nextflow-launcher-image) for the specific image to set as the default when invoking pipelines.
+If you're restricted from using public container registries, see Seqera Enterprise release [instructions](../enterprise/release_notes/23.2.mdx#nextflow-launcher-image) for the specific image to set as the default when invoking pipelines.

**Specify Nextflow version**

Each Seqera Platform release uses a specific nf-launcher image by default. This image is loaded with a specific Nextflow version that any workflow run in the container uses by default. Force your jobs to use a newer/older version of Nextflow with one of the following strategies:

-- Use a [pre-run script](./launch/advanced.mdx#pre--post-run-scripts) to set the desired Nextflow version. For example: `export NXF_VER=22.08.0-edge`
-- For jobs executing in an AWS Batch compute environment, create a [custom job definition](./enterprise/advanced-topics/custom-launch-container.mdx) which references a different nf-launcher image.
+- Use a [pre-run script](../launch/advanced.mdx#pre--post-run-scripts) to set the desired Nextflow version. For example: `export NXF_VER=22.08.0-edge`
+- For jobs executing in an AWS Batch compute environment, create a [custom job definition](../enterprise/advanced-topics/custom-launch-container.mdx) which references a different nf-launcher image.
@@ -21,7 +21,7 @@ Try the following:

**_Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect)_ error**

-This error can occur if incorrect configuration values are assigned to the `backend` and `cron` containers' [`MICRONAUT_ENVIRONMENTS`](./enterprise/configuration/overview.mdx#compute-environments) environment variable. You may see other unexpected system behavior, like two exact copies of the same Nextflow job submitted to the executor for scheduling.
+This error can occur if incorrect configuration values are assigned to the `backend` and `cron` containers' [`MICRONAUT_ENVIRONMENTS`](../enterprise/configuration/overview.mdx#compute-environments) environment variable. You may see other unexpected system behavior, like two exact copies of the same Nextflow job submitted to the executor for scheduling.

Verify the following:

@@ -70,7 +70,7 @@ Most containers use the root user by default. However, some users prefer to defi
touch: cannot touch '/fsx/work/ab/27d78d2b9b17ee895b88fcee794226/.command.begin': Permission denied
```

-This should not occur when using AWS Batch from Seqera version 22.1.0. In other situations, you can avoid this issue by forcing all task containers to run as root. Add one of the following snippets to your [Nextflow configuration](./launch/advanced.mdx#nextflow-config-file):
+This should not occur when using AWS Batch from Seqera version 22.1.0. In other situations, you can avoid this issue by forcing all task containers to run as root. Add one of the following snippets to your [Nextflow configuration](../launch/advanced.mdx#nextflow-config-file):

```
// cloud executors
@@ -115,7 +115,7 @@ To fix the problem, try the following:
export JAVA_OPTIONS="-Dmail.smtp.ssl.protocols=TLSv1.2"
```

-2. Add this parameter to your [nextflow.config file](./launch/advanced.mdx#nextflow-config-file):
+2. Add this parameter to your [nextflow.config file](../launch/advanced.mdx#nextflow-config-file):

```
mail {
@@ -173,7 +173,7 @@ Users with email addresses other than the `trustedEmails` list will undergo an a

:::note

-1. You must rebuild your containers (`docker compose down`) to force Seqera to implement this change. Ensure your database is persistent before issuing the teardown command. See [here](./enterprise/docker-compose.mdx) for more information.
+1. You must rebuild your containers (`docker compose down`) to force Seqera to implement this change. Ensure your database is persistent before issuing the teardown command. See [here](../enterprise/docker-compose.mdx) for more information.
2. All login attempts are visible to the root user at **Profile > Admin panel > Users**.
3. Any user logged in prior to the restriction will not be subject to the new restriction. An admin of the organization should remove users that have previously logged in via (untrusted) email from the Admin panel users list. This will restart the approval process before they can log in via email.

@@ -211,7 +211,7 @@ Mount the APM solution's JAR file in Seqera's `backend` container and set the ag

Although it's not possible to directly download the trace logs via Seqera, you can configure your workflow to export the file to persistent storage:

-1. Set this block in your [`nextflow.config`](./launch/advanced.mdx#nextflow-config-file):
+1. Set this block in your [`nextflow.config`](../launch/advanced.mdx#nextflow-config-file):

```nextflow
trace {
@@ -306,7 +306,7 @@ This error can occur if the Nextflow head job fails to retrieve the necessary re

**_Missing AWS execution role arn_ error during Seqera launch**

-The [ECS Agent must have access](https://docs.aws.amazon.com/batch/latest/userguide/execution-IAM-role.html) to retrieve secrets from the AWS Secrets Manager. Secrets-using pipelines launched from your instance in an AWS Batch compute environment will encounter this error if an IAM Execution Role is not provided. See [Secrets](./secrets/overview.mdx) for more information.
+The [ECS Agent must have access](https://docs.aws.amazon.com/batch/latest/userguide/execution-IAM-role.html) to retrieve secrets from the AWS Secrets Manager. Secrets-using pipelines launched from your instance in an AWS Batch compute environment will encounter this error if an IAM Execution Role is not provided. See [Secrets](../secrets/overview.mdx) for more information.

**AWS Batch task failures with secrets**

2 changes: 1 addition & 1 deletion wave_docs/wave_repo
Submodule wave_repo updated 101 files
