galaxy-commits
November 2014
galaxy-commits@lists.galaxyproject.org
2 participants
184 discussions
commit/galaxy-central: jmchilton: Merged in ericenns/galaxy-central (pull request #546)
by commits-noreply@bitbucket.org
02 Nov '14
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/51f4c1c7fce7/
Changeset: 51f4c1c7fce7
User: jmchilton
Date: 2014-11-02 21:16:01+00:00
Summary: Merged in ericenns/galaxy-central (pull request #546)

Update admin manage jobs time calculation

Affected #: 1 file

diff -r ce6691088fd59ff13d2566e224551ccccd7ad569 -r 51f4c1c7fce753eb28ec907715d324a2bca2c84a lib/galaxy/web/base/controllers/admin.py
--- a/lib/galaxy/web/base/controllers/admin.py
+++ b/lib/galaxy/web/base/controllers/admin.py
@@ -1124,7 +1124,9 @@
         last_updated = {}
         for job in jobs:
             delta = datetime.utcnow() - job.update_time
-            if delta > timedelta( minutes=60 ):
+            if delta.days > 0:
+                last_updated[job.id] = '%s hours' % ( delta.days * 24 + int( delta.seconds / 60 / 60 ) )
+            elif delta > timedelta( minutes=59 ):
                 last_updated[job.id] = '%s hours' % int( delta.seconds / 60 / 60 )
             else:
                 last_updated[job.id] = '%s minutes' % int( delta.seconds / 60 )

Repository URL:
https://bitbucket.org/galaxy/galaxy-central/
-- This is a commit notification from bitbucket.org. You are receiving this because you have the service enabled, addressing the recipient of this email.
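For readers skimming the patch above: datetime.timedelta keeps whole days in a separate attribute, and its seconds value never exceeds 86399, so the old int( delta.seconds / 60 / 60 ) silently wrapped back to a small number once a job had been idle for more than 24 hours. A minimal standalone sketch of the patched logic follows (the helper name age_label is ours, not Galaxy's; the real code lives in lib/galaxy/web/base/controllers/admin.py):

from datetime import datetime, timedelta

def age_label(update_time, now=None):
    # Illustrative sketch of the patched formatting logic, not Galaxy code.
    now = now or datetime.utcnow()
    delta = now - update_time
    if delta.days > 0:
        # timedelta.seconds is capped at 86399, so whole days must be
        # folded into the hour count explicitly or it rolls over at 24 h.
        return '%s hours' % (delta.days * 24 + int(delta.seconds / 60 / 60))
    elif delta > timedelta(minutes=59):
        return '%s hours' % int(delta.seconds / 60 / 60)
    else:
        return '%s minutes' % int(delta.seconds / 60)

# A job last updated 26 hours ago: the old code reported "2 hours",
# the patched logic reports "26 hours".
print(age_label(datetime.utcnow() - timedelta(hours=26)))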
commit/galaxy-central: 2 new changesets
by commits-noreply@bitbucket.org
02 Nov '14
2 new commits in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/aaa62e323fb0/
Changeset: aaa62e323fb0
User: ericenns
Date: 2014-10-31 14:30:41+00:00
Summary: Update job time output in admin manage jobs to prevent rollover when time surpasses 24 hours

Affected #: 1 file

diff -r dcd77603b24da934ba6f0fae63da8ddfea55d10b -r aaa62e323fb02c58dc383867b3a9c6d786328fe4 lib/galaxy/web/base/controllers/admin.py
--- a/lib/galaxy/web/base/controllers/admin.py
+++ b/lib/galaxy/web/base/controllers/admin.py
@@ -1124,7 +1124,9 @@
         last_updated = {}
         for job in jobs:
             delta = datetime.utcnow() - job.update_time
-            if delta > timedelta( minutes=60 ):
+            if delta.days > 0:
+                last_updated[job.id] = '%s hours' % ( delta.days * 24 + int( delta.seconds / 60 / 60 ) )
+            elif delta > timedelta( minutes=59 ):
                 last_updated[job.id] = '%s hours' % int( delta.seconds / 60 / 60 )
             else:
                 last_updated[job.id] = '%s minutes' % int( delta.seconds / 60 )
https://bitbucket.org/galaxy/galaxy-central/commits/51f4c1c7fce7/
Changeset: 51f4c1c7fce7
User: jmchilton
Date: 2014-11-02 21:16:01+00:00
Summary: Merged in ericenns/galaxy-central (pull request #546)

Update admin manage jobs time calculation

Affected #: 1 file

diff -r ce6691088fd59ff13d2566e224551ccccd7ad569 -r 51f4c1c7fce753eb28ec907715d324a2bca2c84a lib/galaxy/web/base/controllers/admin.py
--- a/lib/galaxy/web/base/controllers/admin.py
+++ b/lib/galaxy/web/base/controllers/admin.py
@@ -1124,7 +1124,9 @@
         last_updated = {}
         for job in jobs:
             delta = datetime.utcnow() - job.update_time
-            if delta > timedelta( minutes=60 ):
+            if delta.days > 0:
+                last_updated[job.id] = '%s hours' % ( delta.days * 24 + int( delta.seconds / 60 / 60 ) )
+            elif delta > timedelta( minutes=59 ):
                 last_updated[job.id] = '%s hours' % int( delta.seconds / 60 / 60 )
             else:
                 last_updated[job.id] = '%s minutes' % int( delta.seconds / 60 )

Repository URL:
https://bitbucket.org/galaxy/galaxy-central/
-- This is a commit notification from bitbucket.org. You are receiving this because you have the service enabled, addressing the recipient of this email.
commit/galaxy-central: jmchilton: Merged in nsoranzo/galaxy-central (pull request #543)
by commits-noreply@bitbucket.org
02 Nov '14
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/ce6691088fd5/
Changeset: ce6691088fd5 User: jmchilton Date: 2014-11-02 21:00:05+00:00 Summary: Merged in nsoranzo/galaxy-central (pull request #543) Enhancements to "Convert delimiters" and "velvetg" tools. Affected #: 4 files diff -r ddf23ee6275837e3ca25ca5c51c43ec68a7432d8 -r ce6691088fd59ff13d2566e224551ccccd7ad569 tools/filters/convert_characters.py --- a/tools/filters/convert_characters.py +++ b/tools/filters/convert_characters.py @@ -1,30 +1,18 @@ #!/usr/bin/env python #By, Guruprasad Ananda. +import optparse import re -import sys - -def stop_err(msg): - sys.stderr.write(msg) - sys.exit() - - -def main(): - if len(sys.argv) != 4: - stop_err("usage: convert_characters infile from_char outfile") - - try: - fin = open(sys.argv[1], 'r') - except: - stop_err("Input file cannot be opened for reading.") - - from_char = sys.argv[2] - - try: - fout = open(sys.argv[3], 'w') - except: - stop_err("Output file cannot be opened for writing.") +def __main__(): + parser = optparse.OptionParser() + parser.add_option('--strip', action='store_true', + help='strip leading and trailing whitespaces') + parser.add_option('--condense', action='store_true', + help='condense consecutive delimiters') + (options, args) = parser.parse_args() + if len(args) != 3: + parser.error("usage: convert_characters.py infile from_char outfile") char_dict = { 'T': '\t', @@ -38,20 +26,26 @@ 'Sc': ';' } # regexp to match 1 or more occurences. - from_ch = char_dict[from_char] + '+' + from_char = args[1] + from_ch = char_dict[from_char] + if options.condense: + from_ch += '+' + skipped = 0 + with open(args[0], 'rU') as fin: + with open(args[2], 'w') as fout: + for line in fin: + if options.strip: + line = line.strip() + else: + line = line.rstrip('\n') + try: + fout.write("%s\n" % (re.sub(from_ch, '\t', line))) + except: + skipped += 1 - for line in fin: - line = line.strip() - try: - fout.write("%s\n" % (re.sub(from_ch, '\t', line))) - except: - skipped += 1 - - fin.close() - fout.close() if skipped: print "Skipped %d lines as invalid." % skipped if __name__ == "__main__": - main() + __main__() diff -r ddf23ee6275837e3ca25ca5c51c43ec68a7432d8 -r ce6691088fd59ff13d2566e224551ccccd7ad569 tools/filters/convert_characters.xml --- a/tools/filters/convert_characters.xml +++ b/tools/filters/convert_characters.xml @@ -1,59 +1,77 @@ -<tool id="Convert characters1" name="Convert"> - <description>delimiters to TAB</description> - <command interpreter="python">convert_characters.py $input $convert_from $out_file1</command> - <inputs> - <param name="convert_from" type="select" label="Convert all"> - <option value="s">Whitespaces</option> - <option value="T">Tabs</option> - <!--<option value="Sp">Spaces</option>--> - <option value="Dt">Dots</option> - <option value="C">Commas</option> - <option value="D">Dashes</option> - <option value="U">Underscores</option> - <option value="P">Pipes</option> - <option value="Co">Colons</option> - <option value="Sc">Semicolons</option> - </param> - <param format="txt" name="input" type="data" label="in Dataset"/> - </inputs> - <outputs> - <data format="tabular" name="out_file1" /> - </outputs> - <tests> - <test> - <param name="convert_from" value="s"/> - <param name="input" value="1.bed"/> - <output name="out_file1" file="eq-convert.dat"/> - </test> - <test> - <param name="convert_from" value="s"/> - <param name="input" value="a.txt"/> - <output name="out_file1" file="a.tab"/> - </test> - </tests> - <help> - -**What it does** - -Converts all delimiters of a specified type into TABs. Consecutive characters are condensed. 
For example, if columns are separated by 5 spaces they will converted into 1 tab. - ------ - -**Example** - -- Input file:: - - chrX||151283558|151283724|NM_000808_exon_8_0_chrX_151283559_r|0|- - chrX|151370273|151370486|NM_000808_exon_9_0_chrX_151370274_r|0|- - chrX|151559494|151559583|NM_018558_exon_1_0_chrX_151559495_f|0|+ - chrX|151564643|151564711|NM_018558_exon_2_0_chrX_151564644_f||||0|+ - -- Converting all pipe delimiters of the above file to TABs will get:: - - chrX 151283558 151283724 NM_000808_exon_8_0_chrX_151283559_r 0 - - chrX 151370273 151370486 NM_000808_exon_9_0_chrX_151370274_r 0 - - chrX 151559494 151559583 NM_018558_exon_1_0_chrX_151559495_f 0 + - chrX 151564643 151564711 NM_018558_exon_2_0_chrX_151564644_f 0 + - -</help> -</tool> +<tool id="Convert characters1" name="Convert"> + <description>delimiters to TAB</description> + <command interpreter="python"> +convert_characters.py +#if $strip + --strip +#end if +#if $condense + --condense +#end if +$input $convert_from $out_file1 + </command> + <inputs> + <param name="convert_from" type="select" label="Convert all"> + <option value="s">Whitespaces</option> + <option value="T">Tabs</option> + <!--<option value="Sp">Spaces</option>--> + <option value="Dt">Dots</option> + <option value="C">Commas</option> + <option value="D">Dashes</option> + <option value="U">Underscores</option> + <option value="P">Pipes</option> + <option value="Co">Colons</option> + <option value="Sc">Semicolons</option> + </param> + <param format="txt" name="input" type="data" label="in Dataset"/> + <param name="strip" type="boolean" checked="true" label="Strip leading and trailing whitespaces" /> + <param name="condense" type="boolean" checked="true" label="Condense consecutive delimiters in one TAB" /> + </inputs> + <outputs> + <data format="tabular" name="out_file1" /> + </outputs> + <stdio> + <exit_code range="1:" level="fatal" /> + </stdio> + <tests> + <test> + <param name="convert_from" value="s"/> + <param name="input" value="1.bed"/> + <param name="strip" value="true" /> + <param name="condense" value="true" /> + <output name="out_file1" file="eq-convert.dat"/> + </test> + <test> + <param name="convert_from" value="s"/> + <param name="input" value="a.txt"/> + <param name="strip" value="true" /> + <param name="condense" value="true" /> + <output name="out_file1" file="a.tab"/> + </test> + </tests> + <help> + +**What it does** + +Converts all delimiters of a specified type into TABs. Consecutive delimiters can be condensed in a single TAB. 
+ +----- + +**Example** + +- Input file:: + + chrX||151283558|151283724|NM_000808_exon_8_0_chrX_151283559_r|0|- + chrX|151370273|151370486|NM_000808_exon_9_0_chrX_151370274_r|0|- + chrX|151559494|151559583|NM_018558_exon_1_0_chrX_151559495_f|0|+ + chrX|151564643|151564711|NM_018558_exon_2_0_chrX_151564644_f||||0|+ + +- Converting all pipe delimiters of the above file to TABs and condensing delimiters will get:: + + chrX 151283558 151283724 NM_000808_exon_8_0_chrX_151283559_r 0 - + chrX 151370273 151370486 NM_000808_exon_9_0_chrX_151370274_r 0 - + chrX 151559494 151559583 NM_018558_exon_1_0_chrX_151559495_f 0 + + chrX 151564643 151564711 NM_018558_exon_2_0_chrX_151564644_f 0 + + + </help> +</tool> diff -r ddf23ee6275837e3ca25ca5c51c43ec68a7432d8 -r ce6691088fd59ff13d2566e224551ccccd7ad569 tools/sr_assembly/velvetg.xml --- a/tools/sr_assembly/velvetg.xml +++ b/tools/sr_assembly/velvetg.xml @@ -4,7 +4,6 @@ <command interpreter="python"> velvetg_wrapper.py '$input.extra_files_path' - '$contigs' '$stats' '$LastGraph' '$velvet_asm' '$unused_reads_fasta' #if $generate_amos.afg == "yes": -amos_file $generate_amos.afg #end if @@ -96,7 +95,7 @@ <when value="yes"/></conditional> - <param name="read_trkg" type="boolean" checked="false" truevalue="-read_trkg yes" falsevalue="-read_trkg no" label="track short read positions" help=" tracking of short read positions in assembly"/> + <param name="read_trkg" type="boolean" checked="false" truevalue="-read_trkg yes" falsevalue="-read_trkg no" label="Tracking of short read positions in assembly" help="Generates Graph2 dataset" /><conditional name="coverage"><param name="cutoff" type="select" label="Coverage cutoff" help=""> @@ -169,20 +168,20 @@ </conditional></inputs><outputs> - <!-- - <data format="velvet_graph" name="LastGraph" /> - --> - <data format="txt" name="LastGraph" label="${tool.name} on ${on_string}: LastGraph"> + <data format="txt" name="Graph2" label="${tool.name} on ${on_string}: Graph2" from_work_dir="Graph2"> + <filter>read_trkg is True</filter> + </data> + <data format="txt" name="LastGraph" label="${tool.name} on ${on_string}: LastGraph" from_work_dir="LastGraph"><filter>last_graph['generate_graph'] == "yes"</filter></data> - <data format="afg" name="velvet_asm" label="${tool.name} on ${on_string}: AMOS.afg"> + <data format="afg" name="velvet_asm" label="${tool.name} on ${on_string}: AMOS.afg" from_work_dir="velvet_asm.afg"><filter>generate_amos['afg'] == "yes"</filter></data> - <data format="fasta" name="unused_reads_fasta" label="${tool.name} on ${on_string}: Unused Reads"> + <data format="fasta" name="unused_reads_fasta" label="${tool.name} on ${on_string}: Unused Reads" from_work_dir="UnusedReads.fa"><filter>unused_reads['generate_unused'] == "yes"</filter></data> - <data format="tabular" name="stats" label="${tool.name} on ${on_string}: Stats"/> - <data format="fasta" name="contigs" label="${tool.name} on ${on_string}: Contigs"/> + <data format="tabular" name="stats" label="${tool.name} on ${on_string}: Stats" from_work_dir="stats.txt" /> + <data format="fasta" name="contigs" label="${tool.name} on ${on_string}: Contigs" from_work_dir="contigs.fa" /></outputs><requirements><requirement type="package">velvet</requirement> diff -r ddf23ee6275837e3ca25ca5c51c43ec68a7432d8 -r ce6691088fd59ff13d2566e224551ccccd7ad569 tools/sr_assembly/velvetg_wrapper.py --- a/tools/sr_assembly/velvetg_wrapper.py +++ b/tools/sr_assembly/velvetg_wrapper.py @@ -4,14 +4,9 @@ Classes encapsulating decypher tool. 
James E Johnson - University of Minnesota """ -import pkg_resources; -import logging, os, string, sys, tempfile, glob, shutil, types, urllib -import shlex, subprocess -from optparse import OptionParser, OptionGroup -from stat import * - - -log = logging.getLogger( __name__ ) +import os +import sys +import subprocess assert sys.version_info[:2] >= ( 2, 4 ) @@ -19,20 +14,15 @@ sys.stderr.write( "%s\n" % msg ) sys.exit() + def __main__(): #Parse Command Line - s = 'velvetg_wrapper.py: argv = %s\n' % (sys.argv) - # print >> sys.stderr, s # so will appear as blurb for file - argcnt = len(sys.argv) working_dir = sys.argv[1] - contigs = sys.argv[2] - stats = sys.argv[3] - LastGraph = sys.argv[4] - afgFile = sys.argv[5] - unusedReadsFile = sys.argv[6] - inputs = string.join(sys.argv[7:],' ') - cmdline = 'velvetg %s %s > /dev/null' % (working_dir, inputs) - # print >> sys.stderr, cmdline # so will appear as blurb for file + inputs = ' '.join(sys.argv[2:]) + for _ in ('Roadmaps', 'Sequences'): + os.symlink(os.path.join(working_dir, _), _) + cmdline = 'velvetg . %s' % (inputs) + print "Command to be executed: %s" % cmdline try: proc = subprocess.Popen( args=cmdline, shell=True, stderr=subprocess.PIPE ) returncode = proc.wait() @@ -50,41 +40,7 @@ raise Exception, stderr except Exception, e: stop_err( 'Error running velvetg ' + str( e ) ) - out = open(contigs,'w') - contigs_path = os.path.join(working_dir,'contigs.fa') - for line in open( contigs_path ): - out.write( "%s" % (line) ) - out.close() - out = open(stats,'w') - stats_path = os.path.join(working_dir,'stats.txt') - for line in open( stats_path ): - out.write( "%s" % (line) ) - out.close() - if LastGraph != 'None': - out = open(LastGraph,'w') - LastGraph_path = os.path.join(working_dir,'LastGraph') - for line in open( LastGraph_path ): - out.write( "%s" % (line) ) - out.close() - if afgFile != 'None': - out = open(afgFile,'w') - afgFile_path = os.path.join(working_dir,'velvet_asm.afg') - try: - for line in open( afgFile_path ): - out.write( "%s" % (line) ) - except: - logging.warn( 'error reading %s' %(afgFile_path)) - pass - out.close() - if unusedReadsFile != 'None': - out = open(unusedReadsFile,'w') - unusedReadsFile_path = os.path.join(working_dir,'UnusedReads.fa') - try: - for line in open( unusedReadsFile_path ): - out.write( "%s" % (line) ) - except: - logging.info( 'error reading %s' %(unusedReadsFile_path)) - pass - out.close() -if __name__ == "__main__": __main__() + +if __name__ == "__main__": + __main__() Repository URL:
https://bitbucket.org/galaxy/galaxy-central/
-- This is a commit notification from bitbucket.org. You are receiving this because you have the service enabled, addressing the recipient of this email.
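For context on the "Convert delimiters" part of this merge: the refactored convert_characters.py replaces the positional-only interface with optparse flags, so stripping whitespace and condensing runs of delimiters become opt-in behaviours, and each line is rewritten with a single re.sub. The sketch below is self-contained and illustrative only; the delimiter table is reconstructed from the tool's option codes (Whitespaces, Dots, Commas, Dashes, Underscores, Pipes, Colons, Semicolons) and may not match the shipped file byte for byte.

#!/usr/bin/env python
# Illustrative sketch of the refactored converter, not the shipped tool.
import optparse
import re

# Delimiter codes as used by the tool's select options (reconstructed).
CHAR_DICT = {
    'T': '\t', 's': r'\s', 'Dt': r'\.', 'C': ',',
    'D': '-', 'U': '_', 'P': r'\|', 'Co': ':', 'Sc': ';',
}

def main():
    parser = optparse.OptionParser(usage='%prog [--strip] [--condense] infile from_char outfile')
    parser.add_option('--strip', action='store_true',
                      help='strip leading and trailing whitespaces')
    parser.add_option('--condense', action='store_true',
                      help='condense consecutive delimiters')
    options, args = parser.parse_args()
    if len(args) != 3:
        parser.error('usage: convert_characters.py infile from_char outfile')
    pattern = CHAR_DICT[args[1]]
    if options.condense:
        pattern += '+'  # one or more occurrences collapse into a single TAB
    with open(args[0]) as fin, open(args[2], 'w') as fout:
        for line in fin:
            line = line.strip() if options.strip else line.rstrip('\n')
            fout.write('%s\n' % re.sub(pattern, '\t', line))

if __name__ == '__main__':
    main()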
commit/galaxy-central: 5 new changesets
by commits-noreply@bitbucket.org
02 Nov '14
5 new commits in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/3d7d4cf4aa12/
Changeset: 3d7d4cf4aa12 User: nsoranzo Date: 2014-10-28 11:21:00+00:00 Summary: dos2unix Affected #: 1 file diff -r 32929a1f9bf4b2150f99655c266223cd05d9a2bd -r 3d7d4cf4aa121fbc84447d8905ce1fefc91ac46f tools/filters/convert_characters.xml --- a/tools/filters/convert_characters.xml +++ b/tools/filters/convert_characters.xml @@ -1,59 +1,59 @@ -<tool id="Convert characters1" name="Convert"> - <description>delimiters to TAB</description> - <command interpreter="python">convert_characters.py $input $convert_from $out_file1</command> - <inputs> - <param name="convert_from" type="select" label="Convert all"> - <option value="s">Whitespaces</option> - <option value="T">Tabs</option> - <!--<option value="Sp">Spaces</option>--> - <option value="Dt">Dots</option> - <option value="C">Commas</option> - <option value="D">Dashes</option> - <option value="U">Underscores</option> - <option value="P">Pipes</option> - <option value="Co">Colons</option> - <option value="Sc">Semicolons</option> - </param> - <param format="txt" name="input" type="data" label="in Dataset"/> - </inputs> - <outputs> - <data format="tabular" name="out_file1" /> - </outputs> - <tests> - <test> - <param name="convert_from" value="s"/> - <param name="input" value="1.bed"/> - <output name="out_file1" file="eq-convert.dat"/> - </test> - <test> - <param name="convert_from" value="s"/> - <param name="input" value="a.txt"/> - <output name="out_file1" file="a.tab"/> - </test> - </tests> - <help> - -**What it does** - -Converts all delimiters of a specified type into TABs. Consecutive characters are condensed. For example, if columns are separated by 5 spaces they will converted into 1 tab. - ------ - -**Example** - -- Input file:: - - chrX||151283558|151283724|NM_000808_exon_8_0_chrX_151283559_r|0|- - chrX|151370273|151370486|NM_000808_exon_9_0_chrX_151370274_r|0|- - chrX|151559494|151559583|NM_018558_exon_1_0_chrX_151559495_f|0|+ - chrX|151564643|151564711|NM_018558_exon_2_0_chrX_151564644_f||||0|+ - -- Converting all pipe delimiters of the above file to TABs will get:: - - chrX 151283558 151283724 NM_000808_exon_8_0_chrX_151283559_r 0 - - chrX 151370273 151370486 NM_000808_exon_9_0_chrX_151370274_r 0 - - chrX 151559494 151559583 NM_018558_exon_1_0_chrX_151559495_f 0 + - chrX 151564643 151564711 NM_018558_exon_2_0_chrX_151564644_f 0 + - -</help> -</tool> +<tool id="Convert characters1" name="Convert"> + <description>delimiters to TAB</description> + <command interpreter="python">convert_characters.py $input $convert_from $out_file1</command> + <inputs> + <param name="convert_from" type="select" label="Convert all"> + <option value="s">Whitespaces</option> + <option value="T">Tabs</option> + <!--<option value="Sp">Spaces</option>--> + <option value="Dt">Dots</option> + <option value="C">Commas</option> + <option value="D">Dashes</option> + <option value="U">Underscores</option> + <option value="P">Pipes</option> + <option value="Co">Colons</option> + <option value="Sc">Semicolons</option> + </param> + <param format="txt" name="input" type="data" label="in Dataset"/> + </inputs> + <outputs> + <data format="tabular" name="out_file1" /> + </outputs> + <tests> + <test> + <param name="convert_from" value="s"/> + <param name="input" value="1.bed"/> + <output name="out_file1" file="eq-convert.dat"/> + </test> + <test> + <param name="convert_from" value="s"/> + <param name="input" value="a.txt"/> + <output name="out_file1" file="a.tab"/> + </test> + </tests> + <help> + +**What it does** + +Converts all delimiters of a specified type into TABs. 
Consecutive characters are condensed. For example, if columns are separated by 5 spaces they will converted into 1 tab. + +----- + +**Example** + +- Input file:: + + chrX||151283558|151283724|NM_000808_exon_8_0_chrX_151283559_r|0|- + chrX|151370273|151370486|NM_000808_exon_9_0_chrX_151370274_r|0|- + chrX|151559494|151559583|NM_018558_exon_1_0_chrX_151559495_f|0|+ + chrX|151564643|151564711|NM_018558_exon_2_0_chrX_151564644_f||||0|+ + +- Converting all pipe delimiters of the above file to TABs will get:: + + chrX 151283558 151283724 NM_000808_exon_8_0_chrX_151283559_r 0 - + chrX 151370273 151370486 NM_000808_exon_9_0_chrX_151370274_r 0 - + chrX 151559494 151559583 NM_018558_exon_1_0_chrX_151559495_f 0 + + chrX 151564643 151564711 NM_018558_exon_2_0_chrX_151564644_f 0 + + +</help> +</tool>
https://bitbucket.org/galaxy/galaxy-central/commits/227a36a8998b/
Changeset: 227a36a8998b User: nsoranzo Date: 2014-10-28 15:57:49+00:00 Summary: Make stripping and condensing optional. Affected #: 2 files diff -r 3d7d4cf4aa121fbc84447d8905ce1fefc91ac46f -r 227a36a8998b314b04823e06734eb933f9289987 tools/filters/convert_characters.py --- a/tools/filters/convert_characters.py +++ b/tools/filters/convert_characters.py @@ -1,30 +1,18 @@ #!/usr/bin/env python #By, Guruprasad Ananda. +import optparse import re -import sys - -def stop_err(msg): - sys.stderr.write(msg) - sys.exit() - - -def main(): - if len(sys.argv) != 4: - stop_err("usage: convert_characters infile from_char outfile") - - try: - fin = open(sys.argv[1], 'r') - except: - stop_err("Input file cannot be opened for reading.") - - from_char = sys.argv[2] - - try: - fout = open(sys.argv[3], 'w') - except: - stop_err("Output file cannot be opened for writing.") +def __main__(): + parser = optparse.OptionParser() + parser.add_option('--strip', action='store_true', + help='strip leading and trailing whitespaces') + parser.add_option('--condense', action='store_true', + help='condense consecutive delimiters') + (options, args) = parser.parse_args() + if len(args) != 3: + parser.error("usage: convert_characters.py infile from_char outfile") char_dict = { 'T': '\t', @@ -38,20 +26,26 @@ 'Sc': ';' } # regexp to match 1 or more occurences. - from_ch = char_dict[from_char] + '+' + from_char = args[1] + from_ch = char_dict[from_char] + if options.condense: + from_ch += '+' + skipped = 0 + with open(args[0], 'rU') as fin: + with open(args[2], 'w') as fout: + for line in fin: + if options.strip: + line = line.strip() + else: + line = line.rstrip('\n') + try: + fout.write("%s\n" % (re.sub(from_ch, '\t', line))) + except: + skipped += 1 - for line in fin: - line = line.strip() - try: - fout.write("%s\n" % (re.sub(from_ch, '\t', line))) - except: - skipped += 1 - - fin.close() - fout.close() if skipped: print "Skipped %d lines as invalid." 
% skipped if __name__ == "__main__": - main() + __main__() diff -r 3d7d4cf4aa121fbc84447d8905ce1fefc91ac46f -r 227a36a8998b314b04823e06734eb933f9289987 tools/filters/convert_characters.xml --- a/tools/filters/convert_characters.xml +++ b/tools/filters/convert_characters.xml @@ -1,6 +1,15 @@ <tool id="Convert characters1" name="Convert"><description>delimiters to TAB</description> - <command interpreter="python">convert_characters.py $input $convert_from $out_file1</command> + <command interpreter="python"> +convert_characters.py +#if $strip + --strip +#end if +#if $condense + --condense +#end if +$input $convert_from $out_file1 + </command><inputs><param name="convert_from" type="select" label="Convert all"><option value="s">Whitespaces</option> @@ -15,19 +24,28 @@ <option value="Sc">Semicolons</option></param><param format="txt" name="input" type="data" label="in Dataset"/> + <param name="strip" type="boolean" checked="true" label="Strip leading and trailing whitespaces" /> + <param name="condense" type="boolean" checked="true" label="Condense consecutive delimiters in one TAB" /></inputs><outputs><data format="tabular" name="out_file1" /></outputs> + <stdio> + <exit_code range="1:" level="fatal" /> + </stdio><tests><test><param name="convert_from" value="s"/><param name="input" value="1.bed"/> + <param name="strip" value="true" /> + <param name="condense" value="true" /><output name="out_file1" file="eq-convert.dat"/></test><test><param name="convert_from" value="s"/><param name="input" value="a.txt"/> + <param name="strip" value="true" /> + <param name="condense" value="true" /><output name="out_file1" file="a.tab"/></test></tests> @@ -35,7 +53,7 @@ **What it does** -Converts all delimiters of a specified type into TABs. Consecutive characters are condensed. For example, if columns are separated by 5 spaces they will converted into 1 tab. +Converts all delimiters of a specified type into TABs. Consecutive delimiters can be condensed in a single TAB. ----- @@ -48,12 +66,12 @@ chrX|151559494|151559583|NM_018558_exon_1_0_chrX_151559495_f|0|+ chrX|151564643|151564711|NM_018558_exon_2_0_chrX_151564644_f||||0|+ -- Converting all pipe delimiters of the above file to TABs will get:: +- Converting all pipe delimiters of the above file to TABs and condensing delimiters will get:: chrX 151283558 151283724 NM_000808_exon_8_0_chrX_151283559_r 0 - chrX 151370273 151370486 NM_000808_exon_9_0_chrX_151370274_r 0 - chrX 151559494 151559583 NM_018558_exon_1_0_chrX_151559495_f 0 + chrX 151564643 151564711 NM_018558_exon_2_0_chrX_151564644_f 0 + -</help> + </help></tool>
https://bitbucket.org/galaxy/galaxy-central/commits/6fee2583c95d/
Changeset: 6fee2583c95d User: nsoranzo Date: 2014-10-28 17:45:17+00:00 Summary: Use from_work_dir instead of copying output files line by line. Write velvetg output to stdout. Affected #: 2 files diff -r 227a36a8998b314b04823e06734eb933f9289987 -r 6fee2583c95d5a662afef5ac1e99e2e7be810065 tools/sr_assembly/velvetg.xml --- a/tools/sr_assembly/velvetg.xml +++ b/tools/sr_assembly/velvetg.xml @@ -4,7 +4,6 @@ <command interpreter="python"> velvetg_wrapper.py '$input.extra_files_path' - '$contigs' '$stats' '$LastGraph' '$velvet_asm' '$unused_reads_fasta' #if $generate_amos.afg == "yes": -amos_file $generate_amos.afg #end if @@ -169,20 +168,17 @@ </conditional></inputs><outputs> - <!-- - <data format="velvet_graph" name="LastGraph" /> - --> - <data format="txt" name="LastGraph" label="${tool.name} on ${on_string}: LastGraph"> + <data format="txt" name="LastGraph" label="${tool.name} on ${on_string}: LastGraph" from_work_dir="LastGraph"><filter>last_graph['generate_graph'] == "yes"</filter></data> - <data format="afg" name="velvet_asm" label="${tool.name} on ${on_string}: AMOS.afg"> + <data format="afg" name="velvet_asm" label="${tool.name} on ${on_string}: AMOS.afg" from_work_dir="velvet_asm.afg"><filter>generate_amos['afg'] == "yes"</filter></data> - <data format="fasta" name="unused_reads_fasta" label="${tool.name} on ${on_string}: Unused Reads"> + <data format="fasta" name="unused_reads_fasta" label="${tool.name} on ${on_string}: Unused Reads" from_work_dir="UnusedReads.fa"><filter>unused_reads['generate_unused'] == "yes"</filter></data> - <data format="tabular" name="stats" label="${tool.name} on ${on_string}: Stats"/> - <data format="fasta" name="contigs" label="${tool.name} on ${on_string}: Contigs"/> + <data format="tabular" name="stats" label="${tool.name} on ${on_string}: Stats" from_work_dir="stats.txt" /> + <data format="fasta" name="contigs" label="${tool.name} on ${on_string}: Contigs" from_work_dir="contigs.fa" /></outputs><requirements><requirement type="package">velvet</requirement> diff -r 227a36a8998b314b04823e06734eb933f9289987 -r 6fee2583c95d5a662afef5ac1e99e2e7be810065 tools/sr_assembly/velvetg_wrapper.py --- a/tools/sr_assembly/velvetg_wrapper.py +++ b/tools/sr_assembly/velvetg_wrapper.py @@ -4,14 +4,9 @@ Classes encapsulating decypher tool. James E Johnson - University of Minnesota """ -import pkg_resources; -import logging, os, string, sys, tempfile, glob, shutil, types, urllib -import shlex, subprocess -from optparse import OptionParser, OptionGroup -from stat import * - - -log = logging.getLogger( __name__ ) +import os +import sys +import subprocess assert sys.version_info[:2] >= ( 2, 4 ) @@ -19,20 +14,15 @@ sys.stderr.write( "%s\n" % msg ) sys.exit() + def __main__(): #Parse Command Line - s = 'velvetg_wrapper.py: argv = %s\n' % (sys.argv) - # print >> sys.stderr, s # so will appear as blurb for file - argcnt = len(sys.argv) working_dir = sys.argv[1] - contigs = sys.argv[2] - stats = sys.argv[3] - LastGraph = sys.argv[4] - afgFile = sys.argv[5] - unusedReadsFile = sys.argv[6] - inputs = string.join(sys.argv[7:],' ') - cmdline = 'velvetg %s %s > /dev/null' % (working_dir, inputs) - # print >> sys.stderr, cmdline # so will appear as blurb for file + inputs = ' '.join(sys.argv[2:]) + for _ in ('Roadmaps', 'Sequences'): + os.symlink(os.path.join(working_dir, _), _) + cmdline = 'velvetg . 
%s' % (inputs) + print "Command to be executed: %s" % cmdline try: proc = subprocess.Popen( args=cmdline, shell=True, stderr=subprocess.PIPE ) returncode = proc.wait() @@ -50,41 +40,7 @@ raise Exception, stderr except Exception, e: stop_err( 'Error running velvetg ' + str( e ) ) - out = open(contigs,'w') - contigs_path = os.path.join(working_dir,'contigs.fa') - for line in open( contigs_path ): - out.write( "%s" % (line) ) - out.close() - out = open(stats,'w') - stats_path = os.path.join(working_dir,'stats.txt') - for line in open( stats_path ): - out.write( "%s" % (line) ) - out.close() - if LastGraph != 'None': - out = open(LastGraph,'w') - LastGraph_path = os.path.join(working_dir,'LastGraph') - for line in open( LastGraph_path ): - out.write( "%s" % (line) ) - out.close() - if afgFile != 'None': - out = open(afgFile,'w') - afgFile_path = os.path.join(working_dir,'velvet_asm.afg') - try: - for line in open( afgFile_path ): - out.write( "%s" % (line) ) - except: - logging.warn( 'error reading %s' %(afgFile_path)) - pass - out.close() - if unusedReadsFile != 'None': - out = open(unusedReadsFile,'w') - unusedReadsFile_path = os.path.join(working_dir,'UnusedReads.fa') - try: - for line in open( unusedReadsFile_path ): - out.write( "%s" % (line) ) - except: - logging.info( 'error reading %s' %(unusedReadsFile_path)) - pass - out.close() -if __name__ == "__main__": __main__() + +if __name__ == "__main__": + __main__()
https://bitbucket.org/galaxy/galaxy-central/commits/a07e0e8dc45d/
Changeset: a07e0e8dc45d User: nsoranzo Date: 2014-10-28 16:15:39+00:00 Summary: Add Graph2 output dataset when "-read_trkg yes". Affected #: 1 file diff -r 6fee2583c95d5a662afef5ac1e99e2e7be810065 -r a07e0e8dc45de641b3c1718d19591953b0c7bb90 tools/sr_assembly/velvetg.xml --- a/tools/sr_assembly/velvetg.xml +++ b/tools/sr_assembly/velvetg.xml @@ -95,7 +95,7 @@ <when value="yes"/></conditional> - <param name="read_trkg" type="boolean" checked="false" truevalue="-read_trkg yes" falsevalue="-read_trkg no" label="track short read positions" help=" tracking of short read positions in assembly"/> + <param name="read_trkg" type="boolean" checked="false" truevalue="-read_trkg yes" falsevalue="-read_trkg no" label="Tracking of short read positions in assembly" help="Generates Graph2 dataset" /><conditional name="coverage"><param name="cutoff" type="select" label="Coverage cutoff" help=""> @@ -168,6 +168,9 @@ </conditional></inputs><outputs> + <data format="txt" name="Graph2" label="${tool.name} on ${on_string}: Graph2" from_work_dir="Graph2"> + <filter>read_trkg is True</filter> + </data><data format="txt" name="LastGraph" label="${tool.name} on ${on_string}: LastGraph" from_work_dir="LastGraph"><filter>last_graph['generate_graph'] == "yes"</filter></data>
https://bitbucket.org/galaxy/galaxy-central/commits/ce6691088fd5/
Changeset: ce6691088fd5 User: jmchilton Date: 2014-11-02 21:00:05+00:00 Summary: Merged in nsoranzo/galaxy-central (pull request #543) Enhancements to "Convert delimiters" and "velvetg" tools. Affected #: 4 files diff -r ddf23ee6275837e3ca25ca5c51c43ec68a7432d8 -r ce6691088fd59ff13d2566e224551ccccd7ad569 tools/filters/convert_characters.py --- a/tools/filters/convert_characters.py +++ b/tools/filters/convert_characters.py @@ -1,30 +1,18 @@ #!/usr/bin/env python #By, Guruprasad Ananda. +import optparse import re -import sys - -def stop_err(msg): - sys.stderr.write(msg) - sys.exit() - - -def main(): - if len(sys.argv) != 4: - stop_err("usage: convert_characters infile from_char outfile") - - try: - fin = open(sys.argv[1], 'r') - except: - stop_err("Input file cannot be opened for reading.") - - from_char = sys.argv[2] - - try: - fout = open(sys.argv[3], 'w') - except: - stop_err("Output file cannot be opened for writing.") +def __main__(): + parser = optparse.OptionParser() + parser.add_option('--strip', action='store_true', + help='strip leading and trailing whitespaces') + parser.add_option('--condense', action='store_true', + help='condense consecutive delimiters') + (options, args) = parser.parse_args() + if len(args) != 3: + parser.error("usage: convert_characters.py infile from_char outfile") char_dict = { 'T': '\t', @@ -38,20 +26,26 @@ 'Sc': ';' } # regexp to match 1 or more occurences. - from_ch = char_dict[from_char] + '+' + from_char = args[1] + from_ch = char_dict[from_char] + if options.condense: + from_ch += '+' + skipped = 0 + with open(args[0], 'rU') as fin: + with open(args[2], 'w') as fout: + for line in fin: + if options.strip: + line = line.strip() + else: + line = line.rstrip('\n') + try: + fout.write("%s\n" % (re.sub(from_ch, '\t', line))) + except: + skipped += 1 - for line in fin: - line = line.strip() - try: - fout.write("%s\n" % (re.sub(from_ch, '\t', line))) - except: - skipped += 1 - - fin.close() - fout.close() if skipped: print "Skipped %d lines as invalid." % skipped if __name__ == "__main__": - main() + __main__() diff -r ddf23ee6275837e3ca25ca5c51c43ec68a7432d8 -r ce6691088fd59ff13d2566e224551ccccd7ad569 tools/filters/convert_characters.xml --- a/tools/filters/convert_characters.xml +++ b/tools/filters/convert_characters.xml @@ -1,59 +1,77 @@ -<tool id="Convert characters1" name="Convert"> - <description>delimiters to TAB</description> - <command interpreter="python">convert_characters.py $input $convert_from $out_file1</command> - <inputs> - <param name="convert_from" type="select" label="Convert all"> - <option value="s">Whitespaces</option> - <option value="T">Tabs</option> - <!--<option value="Sp">Spaces</option>--> - <option value="Dt">Dots</option> - <option value="C">Commas</option> - <option value="D">Dashes</option> - <option value="U">Underscores</option> - <option value="P">Pipes</option> - <option value="Co">Colons</option> - <option value="Sc">Semicolons</option> - </param> - <param format="txt" name="input" type="data" label="in Dataset"/> - </inputs> - <outputs> - <data format="tabular" name="out_file1" /> - </outputs> - <tests> - <test> - <param name="convert_from" value="s"/> - <param name="input" value="1.bed"/> - <output name="out_file1" file="eq-convert.dat"/> - </test> - <test> - <param name="convert_from" value="s"/> - <param name="input" value="a.txt"/> - <output name="out_file1" file="a.tab"/> - </test> - </tests> - <help> - -**What it does** - -Converts all delimiters of a specified type into TABs. Consecutive characters are condensed. 
For example, if columns are separated by 5 spaces they will converted into 1 tab. - ------ - -**Example** - -- Input file:: - - chrX||151283558|151283724|NM_000808_exon_8_0_chrX_151283559_r|0|- - chrX|151370273|151370486|NM_000808_exon_9_0_chrX_151370274_r|0|- - chrX|151559494|151559583|NM_018558_exon_1_0_chrX_151559495_f|0|+ - chrX|151564643|151564711|NM_018558_exon_2_0_chrX_151564644_f||||0|+ - -- Converting all pipe delimiters of the above file to TABs will get:: - - chrX 151283558 151283724 NM_000808_exon_8_0_chrX_151283559_r 0 - - chrX 151370273 151370486 NM_000808_exon_9_0_chrX_151370274_r 0 - - chrX 151559494 151559583 NM_018558_exon_1_0_chrX_151559495_f 0 + - chrX 151564643 151564711 NM_018558_exon_2_0_chrX_151564644_f 0 + - -</help> -</tool> +<tool id="Convert characters1" name="Convert"> + <description>delimiters to TAB</description> + <command interpreter="python"> +convert_characters.py +#if $strip + --strip +#end if +#if $condense + --condense +#end if +$input $convert_from $out_file1 + </command> + <inputs> + <param name="convert_from" type="select" label="Convert all"> + <option value="s">Whitespaces</option> + <option value="T">Tabs</option> + <!--<option value="Sp">Spaces</option>--> + <option value="Dt">Dots</option> + <option value="C">Commas</option> + <option value="D">Dashes</option> + <option value="U">Underscores</option> + <option value="P">Pipes</option> + <option value="Co">Colons</option> + <option value="Sc">Semicolons</option> + </param> + <param format="txt" name="input" type="data" label="in Dataset"/> + <param name="strip" type="boolean" checked="true" label="Strip leading and trailing whitespaces" /> + <param name="condense" type="boolean" checked="true" label="Condense consecutive delimiters in one TAB" /> + </inputs> + <outputs> + <data format="tabular" name="out_file1" /> + </outputs> + <stdio> + <exit_code range="1:" level="fatal" /> + </stdio> + <tests> + <test> + <param name="convert_from" value="s"/> + <param name="input" value="1.bed"/> + <param name="strip" value="true" /> + <param name="condense" value="true" /> + <output name="out_file1" file="eq-convert.dat"/> + </test> + <test> + <param name="convert_from" value="s"/> + <param name="input" value="a.txt"/> + <param name="strip" value="true" /> + <param name="condense" value="true" /> + <output name="out_file1" file="a.tab"/> + </test> + </tests> + <help> + +**What it does** + +Converts all delimiters of a specified type into TABs. Consecutive delimiters can be condensed in a single TAB. 
+ +----- + +**Example** + +- Input file:: + + chrX||151283558|151283724|NM_000808_exon_8_0_chrX_151283559_r|0|- + chrX|151370273|151370486|NM_000808_exon_9_0_chrX_151370274_r|0|- + chrX|151559494|151559583|NM_018558_exon_1_0_chrX_151559495_f|0|+ + chrX|151564643|151564711|NM_018558_exon_2_0_chrX_151564644_f||||0|+ + +- Converting all pipe delimiters of the above file to TABs and condensing delimiters will get:: + + chrX 151283558 151283724 NM_000808_exon_8_0_chrX_151283559_r 0 - + chrX 151370273 151370486 NM_000808_exon_9_0_chrX_151370274_r 0 - + chrX 151559494 151559583 NM_018558_exon_1_0_chrX_151559495_f 0 + + chrX 151564643 151564711 NM_018558_exon_2_0_chrX_151564644_f 0 + + + </help> +</tool> diff -r ddf23ee6275837e3ca25ca5c51c43ec68a7432d8 -r ce6691088fd59ff13d2566e224551ccccd7ad569 tools/sr_assembly/velvetg.xml --- a/tools/sr_assembly/velvetg.xml +++ b/tools/sr_assembly/velvetg.xml @@ -4,7 +4,6 @@ <command interpreter="python"> velvetg_wrapper.py '$input.extra_files_path' - '$contigs' '$stats' '$LastGraph' '$velvet_asm' '$unused_reads_fasta' #if $generate_amos.afg == "yes": -amos_file $generate_amos.afg #end if @@ -96,7 +95,7 @@ <when value="yes"/></conditional> - <param name="read_trkg" type="boolean" checked="false" truevalue="-read_trkg yes" falsevalue="-read_trkg no" label="track short read positions" help=" tracking of short read positions in assembly"/> + <param name="read_trkg" type="boolean" checked="false" truevalue="-read_trkg yes" falsevalue="-read_trkg no" label="Tracking of short read positions in assembly" help="Generates Graph2 dataset" /><conditional name="coverage"><param name="cutoff" type="select" label="Coverage cutoff" help=""> @@ -169,20 +168,20 @@ </conditional></inputs><outputs> - <!-- - <data format="velvet_graph" name="LastGraph" /> - --> - <data format="txt" name="LastGraph" label="${tool.name} on ${on_string}: LastGraph"> + <data format="txt" name="Graph2" label="${tool.name} on ${on_string}: Graph2" from_work_dir="Graph2"> + <filter>read_trkg is True</filter> + </data> + <data format="txt" name="LastGraph" label="${tool.name} on ${on_string}: LastGraph" from_work_dir="LastGraph"><filter>last_graph['generate_graph'] == "yes"</filter></data> - <data format="afg" name="velvet_asm" label="${tool.name} on ${on_string}: AMOS.afg"> + <data format="afg" name="velvet_asm" label="${tool.name} on ${on_string}: AMOS.afg" from_work_dir="velvet_asm.afg"><filter>generate_amos['afg'] == "yes"</filter></data> - <data format="fasta" name="unused_reads_fasta" label="${tool.name} on ${on_string}: Unused Reads"> + <data format="fasta" name="unused_reads_fasta" label="${tool.name} on ${on_string}: Unused Reads" from_work_dir="UnusedReads.fa"><filter>unused_reads['generate_unused'] == "yes"</filter></data> - <data format="tabular" name="stats" label="${tool.name} on ${on_string}: Stats"/> - <data format="fasta" name="contigs" label="${tool.name} on ${on_string}: Contigs"/> + <data format="tabular" name="stats" label="${tool.name} on ${on_string}: Stats" from_work_dir="stats.txt" /> + <data format="fasta" name="contigs" label="${tool.name} on ${on_string}: Contigs" from_work_dir="contigs.fa" /></outputs><requirements><requirement type="package">velvet</requirement> diff -r ddf23ee6275837e3ca25ca5c51c43ec68a7432d8 -r ce6691088fd59ff13d2566e224551ccccd7ad569 tools/sr_assembly/velvetg_wrapper.py --- a/tools/sr_assembly/velvetg_wrapper.py +++ b/tools/sr_assembly/velvetg_wrapper.py @@ -4,14 +4,9 @@ Classes encapsulating decypher tool. 
James E Johnson - University of Minnesota """ -import pkg_resources; -import logging, os, string, sys, tempfile, glob, shutil, types, urllib -import shlex, subprocess -from optparse import OptionParser, OptionGroup -from stat import * - - -log = logging.getLogger( __name__ ) +import os +import sys +import subprocess assert sys.version_info[:2] >= ( 2, 4 ) @@ -19,20 +14,15 @@ sys.stderr.write( "%s\n" % msg ) sys.exit() + def __main__(): #Parse Command Line - s = 'velvetg_wrapper.py: argv = %s\n' % (sys.argv) - # print >> sys.stderr, s # so will appear as blurb for file - argcnt = len(sys.argv) working_dir = sys.argv[1] - contigs = sys.argv[2] - stats = sys.argv[3] - LastGraph = sys.argv[4] - afgFile = sys.argv[5] - unusedReadsFile = sys.argv[6] - inputs = string.join(sys.argv[7:],' ') - cmdline = 'velvetg %s %s > /dev/null' % (working_dir, inputs) - # print >> sys.stderr, cmdline # so will appear as blurb for file + inputs = ' '.join(sys.argv[2:]) + for _ in ('Roadmaps', 'Sequences'): + os.symlink(os.path.join(working_dir, _), _) + cmdline = 'velvetg . %s' % (inputs) + print "Command to be executed: %s" % cmdline try: proc = subprocess.Popen( args=cmdline, shell=True, stderr=subprocess.PIPE ) returncode = proc.wait() @@ -50,41 +40,7 @@ raise Exception, stderr except Exception, e: stop_err( 'Error running velvetg ' + str( e ) ) - out = open(contigs,'w') - contigs_path = os.path.join(working_dir,'contigs.fa') - for line in open( contigs_path ): - out.write( "%s" % (line) ) - out.close() - out = open(stats,'w') - stats_path = os.path.join(working_dir,'stats.txt') - for line in open( stats_path ): - out.write( "%s" % (line) ) - out.close() - if LastGraph != 'None': - out = open(LastGraph,'w') - LastGraph_path = os.path.join(working_dir,'LastGraph') - for line in open( LastGraph_path ): - out.write( "%s" % (line) ) - out.close() - if afgFile != 'None': - out = open(afgFile,'w') - afgFile_path = os.path.join(working_dir,'velvet_asm.afg') - try: - for line in open( afgFile_path ): - out.write( "%s" % (line) ) - except: - logging.warn( 'error reading %s' %(afgFile_path)) - pass - out.close() - if unusedReadsFile != 'None': - out = open(unusedReadsFile,'w') - unusedReadsFile_path = os.path.join(working_dir,'UnusedReads.fa') - try: - for line in open( unusedReadsFile_path ): - out.write( "%s" % (line) ) - except: - logging.info( 'error reading %s' %(unusedReadsFile_path)) - pass - out.close() -if __name__ == "__main__": __main__() + +if __name__ == "__main__": + __main__() Repository URL:
https://bitbucket.org/galaxy/galaxy-central/
-- This is a commit notification from bitbucket.org. You are receiving this because you have the service enabled, addressing the recipient of this email.
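For the velvetg side of this series: instead of receiving one output path per dataset and copying files line by line, the wrapper now links the velveth results it needs into the job working directory, runs velvetg there, and lets the from_work_dir attributes in velvetg.xml collect contigs.fa, stats.txt, LastGraph, velvet_asm.afg, UnusedReads.fa and Graph2. A rough sketch of that approach (simplified error handling, and subprocess.call in place of the Popen/stderr plumbing in the changeset):

#!/usr/bin/env python
# Illustrative sketch of the streamlined velvetg wrapper, not the shipped script.
import os
import subprocess
import sys

def main():
    velveth_dir = sys.argv[1]  # extra_files_path of the velveth dataset
    extra_args = sys.argv[2:]  # remaining velvetg options built by the tool XML
    # velvetg expects Roadmaps and Sequences inside its working directory,
    # so link them from the velveth results instead of copying them.
    for name in ('Roadmaps', 'Sequences'):
        os.symlink(os.path.join(velveth_dir, name), name)
    cmd = ['velvetg', '.'] + extra_args
    print('Command to be executed: %s' % ' '.join(cmd))
    returncode = subprocess.call(cmd)
    if returncode != 0:
        sys.exit('Error running velvetg (exit code %d)' % returncode)
    # Output files (contigs.fa, stats.txt, ...) stay in the working
    # directory; Galaxy picks them up via from_work_dir.

if __name__ == '__main__':
    main()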