Workflow params (part 1) #5929

Open · bentsherman wants to merge 1 commit into master
Conversation

@bentsherman (Member) commented Mar 31, 2025

Related to #4669

This PR is the first step towards defining workflow params entirely in the pipeline script. It allows you to define a params block like so:

params {
  input
  save_intermeds = false
}

workflow {
  // ...
}

rather than assigning each param individually. The advantage is that, once a params block is defined, Nextflow can validate params natively, because the block guarantees that all params are declared in one place.

There is still some work required to make the validation work, but the high-level flow is:

  1. User specifies params on the command line / params file
  2. Config files can override script params or define "config params" which are only used by the config
  3. When the params block is defined in the script, config params are ignored and only overrides from the command line / config are applied. If a script param was not specified and has no default value, an error is reported. If a CLI param was not already defined in the config or script, an error is reported
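The resolution order in the steps above can be sketched as follows. This is an illustrative Python sketch with hypothetical names, not the actual Nextflow implementation; using `None` to mark "no default" is a simplification:

```python
def resolve_params(declared, cli_params, config_params):
    """Resolve final param values: CLI overrides config, config overrides
    script defaults. `declared` maps param name to default value, with
    None marking a required param (a simplification)."""
    # A CLI param must be declared in the script or the config
    for name in cli_params:
        if name not in declared and name not in config_params:
            raise ValueError(f"Parameter `{name}` is not declared in the script or config")
    resolved = {}
    for name, default in declared.items():
        if name in cli_params:              # command line wins
            resolved[name] = cli_params[name]
        elif name in config_params:         # then config overrides
            resolved[name] = config_params[name]
        elif default is not None:           # then the script default
            resolved[name] = default
        else:
            raise ValueError(f"Parameter `{name}` is required but was not specified")
    return resolved
```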

TODO:

  • Separate config params from CLI params to identify invalid params (i.e. params that weren't declared in the script or config)

netlify bot commented Mar 31, 2025

Deploy Preview for nextflow-docs-staging ready!

🔨 Latest commit: db9d727
🔍 Latest deploy log: https://app.netlify.com/sites/nextflow-docs-staging/deploys/681eb59b16e59e0008e1b68e
😎 Deploy Preview: https://deploy-preview-5929--nextflow-docs-staging.netlify.app

@bentsherman (Member Author)

Regarding the schema generation, there are two approaches we could take:

  1. Declare only the param name / type / default value in the script, put everything else in an auxiliary JSON/YAML file
  2. Put everything in the script that is used for the schema -- description, icon, form validation rules, section headers, etc

I was initially leaning towards (2), because it would simplify the schema generation and we could validate the available schema properties. But this would also make the params definition really long and verbose in the script, whereas the pipeline code only cares about the name / type / default value.

So now I'm starting to lean towards (1). In that case we could have a really concise definition:

params {
  input: Path
  save_intermeds: boolean = false
}

workflow {
  println "input = ${params.input}"
  println "save_intermeds = ${params.save_intermeds}"
}

Or even directly in the entry workflow:

workflow {
  params:
  input: Path
  save_intermeds: boolean = false

  main:
  println "input = ${input}"
  println "save_intermeds = ${save_intermeds}"
}

This concise syntax will work only if we're certain we don't need anything else in the script. I thought maybe the help text would be useful for CLI help, but that could be provided through a Javadoc comment

In this case, the schema generation would look something like this:

  1. Run nextflow schema to initialize a bare-bones JSON schema from the params definition
  2. Populate the schema with extra information (help text, icons)
  3. Run nextflow schema periodically to update the JSON schema from the params definition, overwriting fields like name / type / schema (perhaps with an appropriate warning)
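Step 1 of that flow might produce something like the following. This is an illustrative Python sketch; the declaration format and the Nextflow-to-JSON type mapping are assumptions, not the real `nextflow schema` output:

```python
import json

# Assumed mapping from Nextflow-style types to JSON schema types
TYPE_MAP = {"Path": "string", "boolean": "boolean", "Integer": "integer", "String": "string"}

def skeleton_schema(declarations):
    """Build a bare-bones JSON schema from (name, type, default) tuples;
    a default of None marks a required param."""
    properties, required = {}, []
    for name, nf_type, default in declarations:
        prop = {"type": TYPE_MAP.get(nf_type, "string")}
        if default is None:
            required.append(name)
        else:
            prop["default"] = default
        properties[name] = prop
    schema = {"type": "object", "properties": properties}
    if required:
        schema["required"] = required
    return schema

schema = skeleton_schema([("input", "Path", None), ("save_intermeds", "boolean", False)])
print(json.dumps(schema, indent=2))
```

The user would then annotate the generated schema by hand (help text, icons), while the name / type / default fields stay owned by the script.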

@ewels (Member) commented Apr 1, 2025

This concise syntax will work only if we're certain we don't need anything else in the script.

I'm not entirely sure that this is the case, the schema is used for validation of more than just type. I know that some of these things can be handled with Records (eg. enum choices), but what about things like pattern, min/max and uniqueItems etc?

I thought maybe the help text would be useful for CLI help, but that could be provided through a Javadoc comment

I'd be curious to see how this might look - ideally for both description and helptext in one.

@bentsherman (Member Author) commented Apr 1, 2025

I know that some of these things can be handled with Records (eg. enum choices), but what about things like pattern, min/max and uniqueItems etc?

Any kind of validation should be possible through custom types and constructor functions (functions that create the custom type and implement the validation logic). But not all of those cases can be automatically translated to the schema.

For example, I can automatically generate a pattern from a type definition, but not things like min and max. Unless we did something crazy like Min<0, Max<Integer, 10>> 😅

I could go either way at this point. I like the concise syntax of declaring params in the entry workflow, but CLI libraries like argparse are also pretty standard, so maybe the concise syntax is just too restrictive

I'd be curious to see how this might look - ideally for both description and helptext in one.

Copying your example from our slack convo:

workflow {
  params:
  /**
   * Path to comma-separated file containing information about the samples in the experiment.
   *
   * You will need to create a design file with information about the samples in your experiment
   * before running the pipeline. Use this parameter to specify its location.
   * It has to be a comma-separated file with 4 columns, and a header row.
   * See [usage docs](https://nf-co.re/rnaseq/usage#samplesheet-input).
   */
  input: Path

  /**
   * If generated by the pipeline save the STAR index in the results directory.
   *
   * If an alignment index is generated by the pipeline use this parameter
   * to save it to your results folder.
   * These can then be used for future pipeline runs, reducing processing times.
   */
  save_reference: boolean = false

  main:
  // ..
}

@bentsherman (Member Author)

Using the Javadoc comment is "better" in the sense that you only need to parse the script to produce the CLI help; you don't have to execute it, which I think would be excessive.
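That parse-only approach could look roughly like this. A Python sketch: the regex is a rough approximation of the Javadoc-comment-plus-declaration syntax discussed above, not Nextflow's actual grammar:

```python
import re

def extract_param_help(source):
    """Pair each Javadoc-style comment with the param declared right after it,
    without executing the script. Purely illustrative."""
    pattern = re.compile(r"/\*\*(.*?)\*/\s*(\w+)\s*:", re.DOTALL)
    help_text = {}
    for body, name in pattern.findall(source):
        # Strip the leading '*' decoration from each comment line
        lines = [re.sub(r"^\s*\*\s?", "", ln).strip() for ln in body.splitlines()]
        help_text[name] = " ".join(ln for ln in lines if ln)
    return help_text

script = """
workflow {
  params:
  /**
   * Path to comma-separated file containing sample information.
   */
  input: Path
}
"""
print(extract_param_help(script))
```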

@bentsherman (Member Author) commented Apr 3, 2025

See also Pydantic: https://docs.pydantic.dev/latest/concepts/fields/#validate-default-values

Simple param:

myparam: String = "default-value"

Full param:

myparam: String = Field(default: "default-value", pattern: "/some.*regex/")
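To make the comparison concrete, here is a stdlib-only Python sketch of that two-form idea. This `Field` is a hypothetical stand-in that mimics the Pydantic pattern, not the real Pydantic class:

```python
import re

class Field:
    """Minimal stand-in for a Pydantic-style Field: bundles a default value
    with extra validation metadata (here, just a regex pattern)."""
    def __init__(self, default=None, pattern=None):
        self.default = default
        self.pattern = pattern

    def validate(self, value=None):
        """Fall back to the default, then check the pattern if one was given."""
        value = self.default if value is None else value
        if self.pattern is not None and not re.fullmatch(self.pattern, str(value)):
            raise ValueError(f"value {value!r} does not match pattern {self.pattern!r}")
        return value

# Simple param: just a default value
myparam = "default-value"

# Full param: default plus a validation rule
myfield = Field(default="default-value", pattern=r"default-.*")
```

The short form stays trivial, while the long form carries extra schema metadata only when a param actually needs it.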

@bentsherman (Member Author)

Declaring params in the entry workflow means that you don't need the params. prefix anymore:

workflow {
  params:
  input: Path
  save_intermeds: boolean = false

  main:
  println "input = ${input}"
  println "save_intermeds = ${save_intermeds}"
}

On the one hand I like that it makes params more like workflow takes. On the other hand, you still need the params. prefix to use params in the config, so I fear that the end result would just be more confusing?

That would suggest that the params block is needed just for consistency with the config. Maybe we could allow the short and long forms like Pydantic:

params {
  input: Path {
    description '...'
    pattern '*.csv'
  }
  save_intermeds: boolean = false
}

workflow {
  println "input = ${params.input}"
  println "save_intermeds = ${params.save_intermeds}"
}

Though I always hesitate to add shortcuts if it makes the code less consistent

@ewels (Member) commented Apr 3, 2025

+1 for the separate params block, I feel like for consistency that is easier to read and understand, also avoids confusion with take, which is doing quite a similar thing.

Not sure about the squiggly bracket syntax. I like the thinking, but it means that we now have three different types of syntax for them. Nothing for config, : for types and = for variables / others. I can see that being really annoying.

That said, the Field() syntax can be confusing in its own ways, see the Pydantic docs:

Using the f: <type> = Field(...) form can be confusing and might trick users into thinking f has a default value, while in reality it is still required.

@bentsherman (Member Author)

Yeah, confusing the Field() with a default value is a serious drawback

The block syntax appeals to me because it is consistent with workflow outputs:

// fetchngs...
outputs {
  samples: Channel<Sample> {
    path '...'
    // ...
  }
}

// rnaseq...
params {
  input: List<Sample> {
    // ...
  }
}

Since we want to be able to match outputs to inputs for pipeline chaining, it makes sense to me that the syntax for inputs and outputs mirror each other.

The config is another issue. Let me think through that and write a separate comment...

@bentsherman (Member Author)

We could add the same params block syntax to config files if consistency is an issue. But I fear this might feel too "weird" in a configuration context.

The nice thing about config params is that they basically have to be simple values (numbers, strings, booleans, etc). So being able to declare the type isn't so important because it can be inferred from the default value.

Meanwhile, if we take the hybrid approach of generating a skeleton schema that the user can annotate manually as needed, we don't need to add new syntax to the config file to support things like validation and help text, because those can just be defined in the JSON schema

@kenibrewer (Member) commented Apr 3, 2025

I really like this syntax proposal a lot:

params {
  input: Path {
    description '...'
    pattern '*.csv'
  }
  save_intermeds: boolean = false
}

This is likely in part because it mirrors Python/Pydantic, but I think that's a good thing for us to emulate. Imitating the syntax of the most popular typed Python extension will make it easier for folks to learn, and to feel like they can read and understand it.

@ewels (Member) commented Apr 3, 2025

If we're leaning more towards that syntax, we could arguably have all JSON schema parameters covered. I think we need to make sure we're crystal clear on what we want to support.

For example, maybe no description as that's "decorative" and too verbose, so the example above becomes simply:

params {
  input: Path { pattern '*.csv' }
  save_intermeds: boolean = false
}

@ewels (Member) commented Apr 8, 2025

@bentsherman - any ideas how the schema builder might be able to reach into Nextflow code to update these definitions based on changes made in the GUI / JSON?

@bentsherman (Member Author)

I don't think that will be possible. I think the flow will have to be:

  1. Write params definition in main script
  2. Generate skeleton JSON schema
  3. Extend the schema by hand or via CLI wizard or upload to schema builder
  4. Don't edit things that are sourced from the main script

Or if you use the schema builder from scratch, you have to update the params definition by hand (or not use it at all)

@mashehu commented Apr 8, 2025

I am just thinking of the following user story: I set the type of a parameter to boolean in the parameter definition. While writing the param definition in the schema builder, I see that the tool actually accepts 0, 1, and 2 as values. In the current version of the builder, I switch the parameter type in the GUI to integer and set a min and a max value. In the currently proposed setup, I would need to go back to the Nextflow code, export a new schema, and open the new schema in the builder. Not optimal, IMO.

@bentsherman (Member Author)

The kind of discovery work you describe (i.e. figuring out the appropriate type of a param) is exactly the kind of thing that needs to happen in the pipeline code, so that the Nextflow compiler can verify param names and types as they are used in the entry workflow. You can't get that kind of validation in the schema builder.

Instead, the schema builder should be used to annotate a fixed set of params with things like help text, icons, form validation rules, etc. It should be primarily concerned with how params are accessed by external systems.

@ewels (Member) commented Apr 9, 2025

I agree that the logic / definition should be in the pipeline code: at least, that should be the source of truth. I was mostly wondering if we could have some way to update the Nextflow code from the JSON schema builder, to have the best of both worlds. The schema builder has some nice beginner-friendly functionality in it, for example a GUI with a built-in regex tester for writing patterns, and a bunch of built-in help text.

Maybe this is something that we could do with @RefTrace? E.g. launch the schema builder GUI from the Python CLI, then go back and edit the Nextflow code in place. Not sure if that's possible.

Or if we launch the schema GUI editor from Nextflow itself (with a local server etc) could there be a callback which is able to edit the Nextflow code? 🤔 We would know the param name and attributes..

@bentsherman (Member Author)

The furthest I would go is to generate a code block in the schema builder that the user can copy into their Nextflow pipeline if they want. Automatically updating code from an external source is an anti-pattern in my view.

@bentsherman bentsherman marked this pull request as ready for review April 18, 2025 14:07
@bentsherman bentsherman requested review from a team as code owners April 18, 2025 14:07
@bentsherman bentsherman requested a review from pditommaso April 18, 2025 14:07
@ewels (Member) commented Apr 24, 2025

Suggestion from call: Would be nice to support single-line comments (//) in addition to Javadoc multi-line comments. This drops the number of lines per-param from 4 to 2, which makes quite a bit of difference if there are a lot of parameters.

@christopher-hakkaart (Collaborator) left a comment

I've added some suggestions to the docs. Looking good.

@christopher-hakkaart (Collaborator) left a comment

Docs looking good 👍

@bentsherman bentsherman linked an issue Apr 28, 2025 that may be closed by this pull request
@bentsherman bentsherman added this to the 25.04 milestone May 1, 2025
@pditommaso (Member) left a comment
I like the general idea, but there are some points that need to be improved.

I'm a bit concerned about this:

Config files can override script params or define "config params" which are only used by the config

Maybe I'm misunderstanding, but I don't think config params should be managed differently from script params.

@pditommaso (Member) left a comment
I believe this needs some work (as per my previous comment).

@bentsherman (Member Author)
I'm a bit concerned about this:

Config files can override script params or define "config params" which are only used by the config

Maybe I'm misunderstanding, but I don't think config params should be managed differently from script params.

Config params are still added to the script as before:

// configure script params
this.cliParams = cliParams
this.configParams = configParams
binding.setParams( (Map)config.params )

However, if you use the new params block, in the script you will only be allowed to use params that you declared in the params block:

for( final name : cliParams.keySet() ) {
    if( !declarations.containsKey(name) && !configParams.containsKey(name) )
        throw new ScriptRuntimeException("Parameter `$name` was specified on the command line or params file but is not declared in the script or config")
}

final params = new HashMap<String,?>()
for( final name : declarations.keySet() ) {
    final defaultValue = declarations[name]
    if( cliParams.containsKey(name) )
        params[name] = cliParams[name]
    else if( configParams.containsKey(name) )
        params[name] = configParams[name]
    else if( defaultValue.isPresent() )
        params[name] = defaultValue.get()
    else
        throw new ScriptRuntimeException("Parameter `$name` is required but was not specified on the command line, params file, or config")
}

So you can still override script params from the config, but you can't declare a new param from the config and use it in the script, without also declaring it in the params block. Again, this only applies if you use the new params block

@bentsherman bentsherman requested a review from pditommaso May 2, 2025 17:52
@pditommaso (Member)

I believe it would be better to continue iterating on this before releasing it as a preview.

@bentsherman bentsherman modified the milestones: 25.04, 25.10 May 5, 2025
Signed-off-by: Ben Sherman <[email protected]>
@bentsherman bentsherman changed the title Workflow params definition (first preview) Workflow params (part 1) May 10, 2025
@bentsherman (Member Author)

From our recent discussions, we will introduce workflow params in 25.10 as a complete feature (this iteration + type annotations + JSON schema integration)

I have updated the PR accordingly:

  • remove the feature flag (will be introduced in 25.10 as a complete feature)
  • remove support for v1 parser (incentivize adoption of strict syntax)

I would like to go ahead and merge this part so that I can use it in other efforts around static types. The type annotations and JSON schema integrations will need to be separate PRs.

Successfully merging this pull request may close these issues.

Print warning for unused CLI parameters
6 participants