Commit
Update to 1.0.20180622214234
michael-kotliar committed Aug 13, 2018
1 parent 77330fa commit c7467ff
Showing 39 changed files with 241 additions and 2,477 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -1,2 +1,3 @@
.DS_Store
._.DS_Store
.idea
36 changes: 24 additions & 12 deletions README.md
@@ -1,20 +1,32 @@
# ChIP-Seq CWL pipeline from BioWardrobe
# ChIP-Seq-SE CWL pipeline from BioWardrobe

This workflow is a CWL version of a Python pipeline from BioWardrobe (Kartashov and Barski, 2015). It starts by using BowTie (Langmead et al., 2009) to perform alignment to a reference genome, resulting in an unsorted SAM file. The SAM file is then sorted and indexed with samtools (Li et al., 2009) to obtain a BAM file and a BAI index. Next MACS2 (Zhang et al., 2008) is used to call peaks and to estimate fragment size. In the last few steps, the coverage by estimated fragments is calculated from the BAM file and is reported in bigWig format. The pipeline also reports statistics, such as read quality, peak number and base frequency, and other troubleshooting information using tools such as [fastx-toolkit](http://hannonlab.cshl.edu/fastx_toolkit/) and [bamtools](https://github.com/pezmaster31/bamtools).
This workflow is a CWL version of a Python pipeline from BioWardrobe (Kartashov and Barski, 2015).
It starts by extracting the input FASTQ file (if it was compressed). The next step runs
BowTie (Langmead et al., 2009) to perform alignment to a reference genome,
resulting in an unsorted SAM file. The SAM file is then sorted and indexed with Samtools
(Li et al., 2009) to obtain a BAM file and a BAI index. Next, MACS2 (Zhang et al., 2008)
is used to call peaks and to estimate fragment size. In the last few steps, the coverage
by estimated fragments is calculated from the BAM file and is reported in bigWig format.
The pipeline also reports statistics, such as read quality, peak number and base frequency,
and other troubleshooting information using tools such as
[fastx-toolkit](http://hannonlab.cshl.edu/fastx_toolkit/) and
[bamtools](https://github.com/pezmaster31/bamtools).
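
For orientation, the steps above correspond roughly to the command-line sketch below
(placeholder file names and parameter values; the exact options are defined in the
CWL tool descriptions and the job file):
```
# Rough shell equivalent of the pipeline steps (placeholder names)
bowtie -S -p 2 indices/genome reads.fastq aligned.sam           # alignment to SAM
samtools sort -o aligned.sorted.bam aligned.sam                 # coordinate sort
samtools index aligned.sorted.bam                               # BAI index
macs2 callpeak -t aligned.sorted.bam -n sample -g 1.2e8         # peak calling
bedtools genomecov -ibam aligned.sorted.bam -bg > coverage.bg   # fragment coverage
sort -k1,1 -k2,2n coverage.bg > coverage.sorted.bg
bedGraphToBigWig coverage.sorted.bg chrom.sizes coverage.bigWig # bigWig output
```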

This workflow (v0.0.2) is being used for GA4GH/DREAM challenge (phase 2).
This workflow (v0.0.2) is being used for the
[GA4GH-DREAM Workflow Execution Challenge](https://www.synapse.org/#!Synapse:syn8507133/wiki/415976).

The pipeline scheme is available in the Wiki.

To download workflow with testing input data use
**To download the workflow with the testing input data, run the following command:**
```
git clone --recursive https://github.com/Barski-lab/ga4gh_challenge.git
```
It will clone submodule [ga4gh_challenge_data](https://github.com/michael-kotliar/ga4gh_challenge_data)
into `data`folder.
Uncompress input FASTQ file using
```
cd ga4gh_challenge/data
./prepare_inputs.sh
```
> tested with cwltool==1.0.20180116213856
It will also clone the submodule [ga4gh_challenge_data](https://github.com/michael-kotliar/ga4gh_challenge_data)
into the *data* folder and uncompress the input FASTQ file.


![Workflow scheme](docs/workflow_sheme.png)
Pipeline scheme generated by [Rabix Composer](http://rabix.io/).


*tested with cwltool==1.0.20180622214234*
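
A typical invocation, assuming the job file in the repository root supplies all required
inputs, might look like:
```
cd ga4gh_challenge
cwltool biowardrobe_chipseq_se.cwl biowardrobe_chipseq_se.yaml
```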
63 changes: 21 additions & 42 deletions biowardrobe_chipseq_se.cwl
@@ -4,8 +4,10 @@ class: Workflow

requirements:
- class: SubworkflowFeatureRequirement
- class: ScatterFeatureRequirement
- class: StepInputExpressionRequirement
- class: InlineJavascriptRequirement
- class: MultipleInputFeatureRequirement


inputs:
@@ -86,7 +88,6 @@ inputs:
doc: "Number of threads for those steps that support multithreading"
label: "Number of threads"


outputs:

bigwig:
@@ -215,30 +216,24 @@ outputs:
doc: "fragment, calculated fragment, islands count from MACS2 results"
outputSource: macs2_callpeak/macs2_stat_file

fastq_compressed:
type: File
label: "Compressed FASTQ"
doc: "bz2 compressed FASTQ file"
outputSource: bzip/output_file

steps:

extract_fastq:
run: ./tools/extract-fastq.cwl
in:
compressed_file: fastq_file
out: [fastq_file]

fastx_quality_stats:
run: ./tools/fastx-quality-stats.cwl
in:
input_file: fastq_file
input_file: extract_fastq/fastq_file
out: [statistics_file]

bzip:
run: ./tools/bzip2-compress.cwl
in:
input_file: fastq_file
out: [output_file]

bowtie_aligner:
run: ./tools/bowtie-alignreads.cwl
in:
upstream_filelist: fastq_file
upstream_filelist: extract_fastq/fastq_file
indices_folder: indices_folder
clip_3p_end: clip_3p_end
clip_5p_end: clip_5p_end
@@ -323,7 +318,7 @@ steps:
- macs2_fragments_calculated

bam_to_bigwig:
run: ./workflows/bam-bedgraph-bigwig.cwl
run: ./subworkflows/bam-bedgraph-bigwig.cwl
in:
bam_file: samtools_sort_index_after_rmdup/bam_bai_pair
chrom_length_file: chrom_length
@@ -411,33 +406,17 @@ s:creator:
- id: http://orcid.org/0000-0002-6486-3898

doc: |
Current workflow is used to run CHIP-Seq basic analysis with single-end input FASTQ file.
The workflow is used to run a basic ChIP-Seq analysis with a single-end input FASTQ file.
In outputs it returns coordinate sorted BAM file alongside with index BAI file, quality
statistics of the input FASTQ file, reads coverage in a form of BigWig file, peaks calling
statistics of the input FASTQ file, reads coverage in a form of bigWig file, peaks calling
data in a form of narrowPeak or broadPeak files.

s:about: |
Current workflow is used to run CHIP-Seq basic analysis with single-end input FASTQ file.
In outputs it returns coordinate sorted BAM file alongside with index BAI file, quality
statistics of the input FASTQ file, reads coverage in a form of BigWig file, peaks calling
data in a form of narrowPeak or broadPeak files.
Workflow starts with running fastx_quality_stats (Step fastx_quality_stats) from FASTX-Toolkit
to calculate quality statistics for input FASTQ file. At the same time (Step bowtie_aligner)
Bowtie is used to align reads from input FASTQ file to reference genome. The output of this step
is unsorted SAM file which is being sorted and indexed by samtools sort and samtools index
(Step samtools_sort_index). Depending on workflow’s input parameters indexed the sorted BAM file
could be processed by samtools rmdup (Step samtools_rmdup) to remove all possible read duplicates.
In a case when removing duplicates is not necessary the step returns original input BAM and BAI
files without any processing. If the duplicates were removed the following step
(Step samtools_sort_index_after_rmdup) reruns samtools sort and samtools index with BAM and BAI files,
if not - the step returns original unchanged input files. Right after that (Step macs2_callpeak)
macs2 callpeak performs peak calling. This step also calculates the number of islands and estimated
fragment size. If the last one is less that 80 (hardcoded in a workflow) macs2 callpeak is rerun again
with forced fixed fragment size value. If at the very beginning it was set in workflow
input parameters to run peak calling with fixed fragment size, macs2 peak calling is run only once.
The following step (Step bam_to_bigwig) is used to calculate read coverage on the base of input BAM file
and save it in bigWig format. For that purpose the mapped reads number is used as a scaling factor
by bedtools genomecov when it performs coverage calculation and saves it in BED format. The last one
is then being sorted and converted to bigWig format by bedGraphToBigWig tool from UCSC utilities.
The following two steps (Step island_intersect and average_tag_density) define the closest genes to
the islands received from macs2 peak calling and calculate average tag density.
The workflow is a CWL version of a Python pipeline from BioWardrobe (Kartashov and Barski, 2015).
It starts by extracting the input FASTQ file (in case it was compressed). The next step runs BowTie
(Langmead et al., 2009) to perform alignment to a reference genome, resulting in an unsorted SAM file.
The SAM file is then sorted and indexed with samtools (Li et al., 2009) to obtain a BAM file and a BAI index.
Next MACS2 (Zhang et al., 2008) is used to call peaks and to estimate fragment size. In the last few steps,
the coverage by estimated fragments is calculated from the BAM file and is reported in bigWig format. The pipeline
also reports statistics, such as read quality, peak number and base frequency, and other troubleshooting information
using tools such as fastx-toolkit and bamtools.
2 changes: 1 addition & 1 deletion biowardrobe_chipseq_se.yaml
@@ -19,5 +19,5 @@ remove_duplicates: false
exp_fragment_size: 150
force_fragment_size: false
broad_peak: false
threads: 8
threads: 2
genome_size: "1.2e8"
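
The job file above shows only the scalar parameters touched by this commit; a complete run
also needs the file and folder inputs referenced by the workflow (fastq_file, indices_folder,
chrom_length, etc.). A hypothetical sketch with placeholder paths (not the actual layout of
the test data) might look like:
```
fastq_file:
  class: File
  path: data/input.fastq.bz2        # placeholder path
indices_folder:
  class: Directory
  path: data/bowtie_indices         # placeholder path
chrom_length:
  class: File
  path: data/chrom.length           # placeholder path
clip_3p_end: 0
clip_5p_end: 0
exp_fragment_size: 150
force_fragment_size: false
remove_duplicates: false
broad_peak: false
genome_size: "1.2e8"
threads: 2
```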
Binary file added docs/workflow_sheme.png
Binary file removed resources/images/workflow.png
Binary file not shown.
80 changes: 2 additions & 78 deletions workflows/bam-bedgraph-bigwig.cwl → subworkflows/bam-bedgraph-bigwig.cwl
100755 → 100644
@@ -1,5 +1,3 @@
#!/usr/bin/env cwl-runner

cwlVersion: v1.0
class: Workflow

@@ -88,14 +86,7 @@ steps:
default: "-bg"
split:
source: split
valueFrom: |
${
if (self == null){
return true;
} else {
return self;
}
}
default: true
output_filename: bedgraph_filename
pairchip: pairchip
fragment_size: fragment_size
@@ -119,71 +110,4 @@ steps:
bedgraph_file: sort_bedgraph/sorted_file
chrom_length_file: chrom_length_file
output_filename: bigwig_filename
out: [bigwig_file]

$namespaces:
s: http://schema.org/

$schemas:
- http://schema.org/docs/schema_org_rdfa.html

s:name: "bam-bedgraph-bigwig"
s:downloadUrl: hhttps://raw.githubusercontent.com/Barski-lab/ga4gh_challenge/master/workflows/bam-bedgraph-bigwig.cwl
s:codeRepository: https://github.com/Barski-lab/ga4gh_challenge
s:license: http://www.apache.org/licenses/LICENSE-2.0

s:isPartOf:
class: s:CreativeWork
s:name: Common Workflow Language
s:url: http://commonwl.org/

s:creator:
- class: s:Organization
s:legalName: "Cincinnati Children's Hospital Medical Center"
s:location:
- class: s:PostalAddress
s:addressCountry: "USA"
s:addressLocality: "Cincinnati"
s:addressRegion: "OH"
s:postalCode: "45229"
s:streetAddress: "3333 Burnet Ave"
s:telephone: "+1(513)636-4200"
s:logo: "https://www.cincinnatichildrens.org/-/media/cincinnati%20childrens/global%20shared/childrens-logo-new.png"
s:department:
- class: s:Organization
s:legalName: "Allergy and Immunology"
s:department:
- class: s:Organization
s:legalName: "Barski Research Lab"
s:member:
- class: s:Person
s:name: Michael Kotliar
s:email: mailto:[email protected]
s:sameAs:
- id: http://orcid.org/0000-0002-6486-3898
- class: s:Person
s:name: Andrey Kartashov
s:email: mailto:[email protected]
s:sameAs:
- id: http://orcid.org/0000-0001-9102-5681

doc: |
Workflow converts input BAM file into bigWig and bedGraph files

s:about: |
Workflow converts input BAM file into bigWig and bedGraph files.

Input BAM file should be sorted by coordinates (required by `bam_to_bedgraph` step).

If `split` input is not provided use true by default. Default logic is implemented in `valueFrom` field of `split`
input inside `bam_to_bedgraph` step to avoid possible bug in cwltool with setting default values for workflow inputs.

`scale` has higher priority over the `mapped_reads_number`. The last one is used to calculate `-scale` parameter for
`bedtools genomecov` (step `bam_to_bedgraph`) only in a case when input `scale` is not provided. All logic is
implemented inside `bedtools-genomecov.cwl`.

`bigwig_filename` defines the output name only for generated bigWig file. `bedgraph_filename` defines the output name
for generated bedGraph file and can influence on generated bigWig filename in case when `bigwig_filename` is not provided.

All workflow inputs and outputs don't have `format` field to avoid format incompatibility errors when workflow is used
as subworkflow.
out: [bigwig_file]
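
For context, the subworkflow can be run on its own as well; a minimal job file sketch,
assuming bam_file and chrom_length_file are its only required inputs (placeholder paths),
could be:
```
bam_file:
  class: File
  path: sample.sorted.bam           # coordinate-sorted BAM, placeholder path
chrom_length_file:
  class: File
  path: chrom.length                # chromosome length file, placeholder path
```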
