galaxy-commits
November 2014
- 2 participants
- 184 discussions
commit/galaxy-central: jmchilton: Merged in ericenns/galaxy-central (pull request #546)
by commits-noreply@bitbucket.org 02 Nov '14
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/51f4c1c7fce7/
Changeset: 51f4c1c7fce7
User: jmchilton
Date: 2014-11-02 21:16:01+00:00
Summary: Merged in ericenns/galaxy-central (pull request #546)
Update admin manage jobs time calculation
Affected #: 1 file
diff -r ce6691088fd59ff13d2566e224551ccccd7ad569 -r 51f4c1c7fce753eb28ec907715d324a2bca2c84a lib/galaxy/web/base/controllers/admin.py
--- a/lib/galaxy/web/base/controllers/admin.py
+++ b/lib/galaxy/web/base/controllers/admin.py
@@ -1124,7 +1124,9 @@
last_updated = {}
for job in jobs:
delta = datetime.utcnow() - job.update_time
- if delta > timedelta( minutes=60 ):
+ if delta.days > 0:
+ last_updated[job.id] = '%s hours' % ( delta.days * 24 + int( delta.seconds / 60 / 60 ) )
+ elif delta > timedelta( minutes=59 ):
last_updated[job.id] = '%s hours' % int( delta.seconds / 60 / 60 )
else:
last_updated[job.id] = '%s minutes' % int( delta.seconds / 60 )
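The fix above works because Python's timedelta.seconds only holds the sub-day remainder (0-86399), so a job idle for more than 24 hours previously rolled over and was reported with far too few hours. A minimal standalone sketch of the corrected logic (illustrative only; the helper name below is not Galaxy's):
# Sketch of the corrected elapsed-time formatting, assuming the same
# '%s hours' / '%s minutes' output strings as the controller above.
from datetime import datetime, timedelta
def format_idle_time(update_time, now=None):
    now = now or datetime.utcnow()
    delta = now - update_time
    if delta.days > 0:
        # fold whole days into the hour count to avoid the 24h rollover
        return '%s hours' % (delta.days * 24 + int(delta.seconds / 60 / 60))
    elif delta > timedelta(minutes=59):
        return '%s hours' % int(delta.seconds / 60 / 60)
    else:
        return '%s minutes' % int(delta.seconds / 60)
if __name__ == '__main__':
    now = datetime.utcnow()
    print(format_idle_time(now - timedelta(hours=25), now))    # '25 hours'
    print(format_idle_time(now - timedelta(minutes=90), now))  # '1 hours'
    print(format_idle_time(now - timedelta(minutes=30), now))  # '30 minutes'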
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
--
This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.
2 new commits in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/aaa62e323fb0/
Changeset: aaa62e323fb0
User: ericenns
Date: 2014-10-31 14:30:41+00:00
Summary: Update job time output in admin manage jobs to prevent rollover when time surpasses 24 hours
Affected #: 1 file
diff -r dcd77603b24da934ba6f0fae63da8ddfea55d10b -r aaa62e323fb02c58dc383867b3a9c6d786328fe4 lib/galaxy/web/base/controllers/admin.py
--- a/lib/galaxy/web/base/controllers/admin.py
+++ b/lib/galaxy/web/base/controllers/admin.py
@@ -1124,7 +1124,9 @@
last_updated = {}
for job in jobs:
delta = datetime.utcnow() - job.update_time
- if delta > timedelta( minutes=60 ):
+ if delta.days > 0:
+ last_updated[job.id] = '%s hours' % ( delta.days * 24 + int( delta.seconds / 60 / 60 ) )
+ elif delta > timedelta( minutes=59 ):
last_updated[job.id] = '%s hours' % int( delta.seconds / 60 / 60 )
else:
last_updated[job.id] = '%s minutes' % int( delta.seconds / 60 )
https://bitbucket.org/galaxy/galaxy-central/commits/51f4c1c7fce7/
Changeset: 51f4c1c7fce7
User: jmchilton
Date: 2014-11-02 21:16:01+00:00
Summary: Merged in ericenns/galaxy-central (pull request #546)
Update admin manage jobs time calculation
Affected #: 1 file
diff -r ce6691088fd59ff13d2566e224551ccccd7ad569 -r 51f4c1c7fce753eb28ec907715d324a2bca2c84a lib/galaxy/web/base/controllers/admin.py
--- a/lib/galaxy/web/base/controllers/admin.py
+++ b/lib/galaxy/web/base/controllers/admin.py
@@ -1124,7 +1124,9 @@
last_updated = {}
for job in jobs:
delta = datetime.utcnow() - job.update_time
- if delta > timedelta( minutes=60 ):
+ if delta.days > 0:
+ last_updated[job.id] = '%s hours' % ( delta.days * 24 + int( delta.seconds / 60 / 60 ) )
+ elif delta > timedelta( minutes=59 ):
last_updated[job.id] = '%s hours' % int( delta.seconds / 60 / 60 )
else:
last_updated[job.id] = '%s minutes' % int( delta.seconds / 60 )
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
--
This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.
commit/galaxy-central: jmchilton: Merged in nsoranzo/galaxy-central (pull request #543)
by commits-noreply@bitbucket.org 02 Nov '14
1 new commit in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/ce6691088fd5/
Changeset: ce6691088fd5
User: jmchilton
Date: 2014-11-02 21:00:05+00:00
Summary: Merged in nsoranzo/galaxy-central (pull request #543)
Enhancements to "Convert delimiters" and "velvetg" tools.
Affected #: 4 files
diff -r ddf23ee6275837e3ca25ca5c51c43ec68a7432d8 -r ce6691088fd59ff13d2566e224551ccccd7ad569 tools/filters/convert_characters.py
--- a/tools/filters/convert_characters.py
+++ b/tools/filters/convert_characters.py
@@ -1,30 +1,18 @@
#!/usr/bin/env python
#By, Guruprasad Ananda.
+import optparse
import re
-import sys
-
-def stop_err(msg):
- sys.stderr.write(msg)
- sys.exit()
-
-
-def main():
- if len(sys.argv) != 4:
- stop_err("usage: convert_characters infile from_char outfile")
-
- try:
- fin = open(sys.argv[1], 'r')
- except:
- stop_err("Input file cannot be opened for reading.")
-
- from_char = sys.argv[2]
-
- try:
- fout = open(sys.argv[3], 'w')
- except:
- stop_err("Output file cannot be opened for writing.")
+def __main__():
+ parser = optparse.OptionParser()
+ parser.add_option('--strip', action='store_true',
+ help='strip leading and trailing whitespaces')
+ parser.add_option('--condense', action='store_true',
+ help='condense consecutive delimiters')
+ (options, args) = parser.parse_args()
+ if len(args) != 3:
+ parser.error("usage: convert_characters.py infile from_char outfile")
char_dict = {
'T': '\t',
@@ -38,20 +26,26 @@
'Sc': ';'
}
# regexp to match 1 or more occurences.
- from_ch = char_dict[from_char] + '+'
+ from_char = args[1]
+ from_ch = char_dict[from_char]
+ if options.condense:
+ from_ch += '+'
+
skipped = 0
+ with open(args[0], 'rU') as fin:
+ with open(args[2], 'w') as fout:
+ for line in fin:
+ if options.strip:
+ line = line.strip()
+ else:
+ line = line.rstrip('\n')
+ try:
+ fout.write("%s\n" % (re.sub(from_ch, '\t', line)))
+ except:
+ skipped += 1
- for line in fin:
- line = line.strip()
- try:
- fout.write("%s\n" % (re.sub(from_ch, '\t', line)))
- except:
- skipped += 1
-
- fin.close()
- fout.close()
if skipped:
print "Skipped %d lines as invalid." % skipped
if __name__ == "__main__":
- main()
+ __main__()
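The rewritten script above replaces positional sys.argv handling with optparse and makes stripping and condensing optional; condensing simply appends '+' to the delimiter pattern so any run of consecutive delimiters collapses into a single TAB. A rough sketch of that core substitution (not the tool's exact code; file and option handling are omitted, only a few char_dict entries are shown, and the pipe pattern is an assumption):
# Sketch of the delimiter-to-TAB conversion, with an assumed partial char_dict.
import re
char_dict = {'T': '\t', 'Sc': ';', 'P': r'\|'}  # assumed subset of the tool's mapping
def convert_line(line, from_char, strip=True, condense=True):
    pattern = char_dict[from_char]
    if condense:
        pattern += '+'  # one or more consecutive delimiters -> a single TAB
    line = line.strip() if strip else line.rstrip('\n')
    return re.sub(pattern, '\t', line)
print(convert_line('chrX||151283558|151283724|NM_000808_exon_8_0_chrX_151283559_r|0|-', 'P'))
Per the usage string and the tool XML, the script would then be invoked as, e.g., convert_characters.py --strip --condense infile from_char outfile.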
diff -r ddf23ee6275837e3ca25ca5c51c43ec68a7432d8 -r ce6691088fd59ff13d2566e224551ccccd7ad569 tools/filters/convert_characters.xml
--- a/tools/filters/convert_characters.xml
+++ b/tools/filters/convert_characters.xml
@@ -1,59 +1,77 @@
-<tool id="Convert characters1" name="Convert">
- <description>delimiters to TAB</description>
- <command interpreter="python">convert_characters.py $input $convert_from $out_file1</command>
- <inputs>
- <param name="convert_from" type="select" label="Convert all">
- <option value="s">Whitespaces</option>
- <option value="T">Tabs</option>
- <!--<option value="Sp">Spaces</option>-->
- <option value="Dt">Dots</option>
- <option value="C">Commas</option>
- <option value="D">Dashes</option>
- <option value="U">Underscores</option>
- <option value="P">Pipes</option>
- <option value="Co">Colons</option>
- <option value="Sc">Semicolons</option>
- </param>
- <param format="txt" name="input" type="data" label="in Dataset"/>
- </inputs>
- <outputs>
- <data format="tabular" name="out_file1" />
- </outputs>
- <tests>
- <test>
- <param name="convert_from" value="s"/>
- <param name="input" value="1.bed"/>
- <output name="out_file1" file="eq-convert.dat"/>
- </test>
- <test>
- <param name="convert_from" value="s"/>
- <param name="input" value="a.txt"/>
- <output name="out_file1" file="a.tab"/>
- </test>
- </tests>
- <help>
-
-**What it does**
-
-Converts all delimiters of a specified type into TABs. Consecutive characters are condensed. For example, if columns are separated by 5 spaces they will converted into 1 tab.
-
------
-
-**Example**
-
-- Input file::
-
- chrX||151283558|151283724|NM_000808_exon_8_0_chrX_151283559_r|0|-
- chrX|151370273|151370486|NM_000808_exon_9_0_chrX_151370274_r|0|-
- chrX|151559494|151559583|NM_018558_exon_1_0_chrX_151559495_f|0|+
- chrX|151564643|151564711|NM_018558_exon_2_0_chrX_151564644_f||||0|+
-
-- Converting all pipe delimiters of the above file to TABs will get::
-
- chrX 151283558 151283724 NM_000808_exon_8_0_chrX_151283559_r 0 -
- chrX 151370273 151370486 NM_000808_exon_9_0_chrX_151370274_r 0 -
- chrX 151559494 151559583 NM_018558_exon_1_0_chrX_151559495_f 0 +
- chrX 151564643 151564711 NM_018558_exon_2_0_chrX_151564644_f 0 +
-
-</help>
-</tool>
+<tool id="Convert characters1" name="Convert">
+ <description>delimiters to TAB</description>
+ <command interpreter="python">
+convert_characters.py
+#if $strip
+ --strip
+#end if
+#if $condense
+ --condense
+#end if
+$input $convert_from $out_file1
+ </command>
+ <inputs>
+ <param name="convert_from" type="select" label="Convert all">
+ <option value="s">Whitespaces</option>
+ <option value="T">Tabs</option>
+ <!--<option value="Sp">Spaces</option>-->
+ <option value="Dt">Dots</option>
+ <option value="C">Commas</option>
+ <option value="D">Dashes</option>
+ <option value="U">Underscores</option>
+ <option value="P">Pipes</option>
+ <option value="Co">Colons</option>
+ <option value="Sc">Semicolons</option>
+ </param>
+ <param format="txt" name="input" type="data" label="in Dataset"/>
+ <param name="strip" type="boolean" checked="true" label="Strip leading and trailing whitespaces" />
+ <param name="condense" type="boolean" checked="true" label="Condense consecutive delimiters in one TAB" />
+ </inputs>
+ <outputs>
+ <data format="tabular" name="out_file1" />
+ </outputs>
+ <stdio>
+ <exit_code range="1:" level="fatal" />
+ </stdio>
+ <tests>
+ <test>
+ <param name="convert_from" value="s"/>
+ <param name="input" value="1.bed"/>
+ <param name="strip" value="true" />
+ <param name="condense" value="true" />
+ <output name="out_file1" file="eq-convert.dat"/>
+ </test>
+ <test>
+ <param name="convert_from" value="s"/>
+ <param name="input" value="a.txt"/>
+ <param name="strip" value="true" />
+ <param name="condense" value="true" />
+ <output name="out_file1" file="a.tab"/>
+ </test>
+ </tests>
+ <help>
+
+**What it does**
+
+Converts all delimiters of a specified type into TABs. Consecutive delimiters can be condensed in a single TAB.
+
+-----
+
+**Example**
+
+- Input file::
+
+ chrX||151283558|151283724|NM_000808_exon_8_0_chrX_151283559_r|0|-
+ chrX|151370273|151370486|NM_000808_exon_9_0_chrX_151370274_r|0|-
+ chrX|151559494|151559583|NM_018558_exon_1_0_chrX_151559495_f|0|+
+ chrX|151564643|151564711|NM_018558_exon_2_0_chrX_151564644_f||||0|+
+
+- Converting all pipe delimiters of the above file to TABs and condensing delimiters will get::
+
+ chrX 151283558 151283724 NM_000808_exon_8_0_chrX_151283559_r 0 -
+ chrX 151370273 151370486 NM_000808_exon_9_0_chrX_151370274_r 0 -
+ chrX 151559494 151559583 NM_018558_exon_1_0_chrX_151559495_f 0 +
+ chrX 151564643 151564711 NM_018558_exon_2_0_chrX_151564644_f 0 +
+
+ </help>
+</tool>
diff -r ddf23ee6275837e3ca25ca5c51c43ec68a7432d8 -r ce6691088fd59ff13d2566e224551ccccd7ad569 tools/sr_assembly/velvetg.xml
--- a/tools/sr_assembly/velvetg.xml
+++ b/tools/sr_assembly/velvetg.xml
@@ -4,7 +4,6 @@
<command interpreter="python">
velvetg_wrapper.py
'$input.extra_files_path'
- '$contigs' '$stats' '$LastGraph' '$velvet_asm' '$unused_reads_fasta'
#if $generate_amos.afg == "yes":
-amos_file $generate_amos.afg
#end if
@@ -96,7 +95,7 @@
<when value="yes"/></conditional>
- <param name="read_trkg" type="boolean" checked="false" truevalue="-read_trkg yes" falsevalue="-read_trkg no" label="track short read positions" help=" tracking of short read positions in assembly"/>
+ <param name="read_trkg" type="boolean" checked="false" truevalue="-read_trkg yes" falsevalue="-read_trkg no" label="Tracking of short read positions in assembly" help="Generates Graph2 dataset" /><conditional name="coverage"><param name="cutoff" type="select" label="Coverage cutoff" help="">
@@ -169,20 +168,20 @@
</conditional></inputs><outputs>
- <!--
- <data format="velvet_graph" name="LastGraph" />
- -->
- <data format="txt" name="LastGraph" label="${tool.name} on ${on_string}: LastGraph">
+ <data format="txt" name="Graph2" label="${tool.name} on ${on_string}: Graph2" from_work_dir="Graph2">
+ <filter>read_trkg is True</filter>
+ </data>
+ <data format="txt" name="LastGraph" label="${tool.name} on ${on_string}: LastGraph" from_work_dir="LastGraph"><filter>last_graph['generate_graph'] == "yes"</filter></data>
- <data format="afg" name="velvet_asm" label="${tool.name} on ${on_string}: AMOS.afg">
+ <data format="afg" name="velvet_asm" label="${tool.name} on ${on_string}: AMOS.afg" from_work_dir="velvet_asm.afg"><filter>generate_amos['afg'] == "yes"</filter></data>
- <data format="fasta" name="unused_reads_fasta" label="${tool.name} on ${on_string}: Unused Reads">
+ <data format="fasta" name="unused_reads_fasta" label="${tool.name} on ${on_string}: Unused Reads" from_work_dir="UnusedReads.fa"><filter>unused_reads['generate_unused'] == "yes"</filter></data>
- <data format="tabular" name="stats" label="${tool.name} on ${on_string}: Stats"/>
- <data format="fasta" name="contigs" label="${tool.name} on ${on_string}: Contigs"/>
+ <data format="tabular" name="stats" label="${tool.name} on ${on_string}: Stats" from_work_dir="stats.txt" />
+ <data format="fasta" name="contigs" label="${tool.name} on ${on_string}: Contigs" from_work_dir="contigs.fa" /></outputs><requirements><requirement type="package">velvet</requirement>
diff -r ddf23ee6275837e3ca25ca5c51c43ec68a7432d8 -r ce6691088fd59ff13d2566e224551ccccd7ad569 tools/sr_assembly/velvetg_wrapper.py
--- a/tools/sr_assembly/velvetg_wrapper.py
+++ b/tools/sr_assembly/velvetg_wrapper.py
@@ -4,14 +4,9 @@
Classes encapsulating decypher tool.
James E Johnson - University of Minnesota
"""
-import pkg_resources;
-import logging, os, string, sys, tempfile, glob, shutil, types, urllib
-import shlex, subprocess
-from optparse import OptionParser, OptionGroup
-from stat import *
-
-
-log = logging.getLogger( __name__ )
+import os
+import sys
+import subprocess
assert sys.version_info[:2] >= ( 2, 4 )
@@ -19,20 +14,15 @@
sys.stderr.write( "%s\n" % msg )
sys.exit()
+
def __main__():
#Parse Command Line
- s = 'velvetg_wrapper.py: argv = %s\n' % (sys.argv)
- # print >> sys.stderr, s # so will appear as blurb for file
- argcnt = len(sys.argv)
working_dir = sys.argv[1]
- contigs = sys.argv[2]
- stats = sys.argv[3]
- LastGraph = sys.argv[4]
- afgFile = sys.argv[5]
- unusedReadsFile = sys.argv[6]
- inputs = string.join(sys.argv[7:],' ')
- cmdline = 'velvetg %s %s > /dev/null' % (working_dir, inputs)
- # print >> sys.stderr, cmdline # so will appear as blurb for file
+ inputs = ' '.join(sys.argv[2:])
+ for _ in ('Roadmaps', 'Sequences'):
+ os.symlink(os.path.join(working_dir, _), _)
+ cmdline = 'velvetg . %s' % (inputs)
+ print "Command to be executed: %s" % cmdline
try:
proc = subprocess.Popen( args=cmdline, shell=True, stderr=subprocess.PIPE )
returncode = proc.wait()
@@ -50,41 +40,7 @@
raise Exception, stderr
except Exception, e:
stop_err( 'Error running velvetg ' + str( e ) )
- out = open(contigs,'w')
- contigs_path = os.path.join(working_dir,'contigs.fa')
- for line in open( contigs_path ):
- out.write( "%s" % (line) )
- out.close()
- out = open(stats,'w')
- stats_path = os.path.join(working_dir,'stats.txt')
- for line in open( stats_path ):
- out.write( "%s" % (line) )
- out.close()
- if LastGraph != 'None':
- out = open(LastGraph,'w')
- LastGraph_path = os.path.join(working_dir,'LastGraph')
- for line in open( LastGraph_path ):
- out.write( "%s" % (line) )
- out.close()
- if afgFile != 'None':
- out = open(afgFile,'w')
- afgFile_path = os.path.join(working_dir,'velvet_asm.afg')
- try:
- for line in open( afgFile_path ):
- out.write( "%s" % (line) )
- except:
- logging.warn( 'error reading %s' %(afgFile_path))
- pass
- out.close()
- if unusedReadsFile != 'None':
- out = open(unusedReadsFile,'w')
- unusedReadsFile_path = os.path.join(working_dir,'UnusedReads.fa')
- try:
- for line in open( unusedReadsFile_path ):
- out.write( "%s" % (line) )
- except:
- logging.info( 'error reading %s' %(unusedReadsFile_path))
- pass
- out.close()
-if __name__ == "__main__": __main__()
+
+if __name__ == "__main__":
+ __main__()
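The rewritten wrapper above no longer copies each velvetg output file line by line into Galaxy-supplied paths; it symlinks the velveth outputs (Roadmaps, Sequences) into the job working directory, runs velvetg there, and lets the from_work_dir attributes in velvetg.xml pick up contigs.fa, stats.txt, LastGraph, Graph2, velvet_asm.afg and UnusedReads.fa. A simplified sketch of that pattern (error handling and stderr capture omitted; not the wrapper's exact code):
# Sketch of the symlink-and-run approach; assumes velvetg is on PATH.
import os
import subprocess
import sys
def run_velvetg(extra_files_path, velvetg_args):
    # velvetg expects the Roadmaps/Sequences files produced by velveth in its
    # working directory; symlink them in instead of copying.
    for name in ('Roadmaps', 'Sequences'):
        os.symlink(os.path.join(extra_files_path, name), name)
    cmdline = ['velvetg', '.'] + list(velvetg_args)
    print('Command to be executed: %s' % ' '.join(cmdline))
    return subprocess.call(cmdline)
if __name__ == '__main__':
    sys.exit(run_velvetg(sys.argv[1], sys.argv[2:]))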
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
--
This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.
5 new commits in galaxy-central:
https://bitbucket.org/galaxy/galaxy-central/commits/3d7d4cf4aa12/
Changeset: 3d7d4cf4aa12
User: nsoranzo
Date: 2014-10-28 11:21:00+00:00
Summary: dos2unix
Affected #: 1 file
diff -r 32929a1f9bf4b2150f99655c266223cd05d9a2bd -r 3d7d4cf4aa121fbc84447d8905ce1fefc91ac46f tools/filters/convert_characters.xml
--- a/tools/filters/convert_characters.xml
+++ b/tools/filters/convert_characters.xml
@@ -1,59 +1,59 @@
-<tool id="Convert characters1" name="Convert">
- <description>delimiters to TAB</description>
- <command interpreter="python">convert_characters.py $input $convert_from $out_file1</command>
- <inputs>
- <param name="convert_from" type="select" label="Convert all">
- <option value="s">Whitespaces</option>
- <option value="T">Tabs</option>
- <!--<option value="Sp">Spaces</option>-->
- <option value="Dt">Dots</option>
- <option value="C">Commas</option>
- <option value="D">Dashes</option>
- <option value="U">Underscores</option>
- <option value="P">Pipes</option>
- <option value="Co">Colons</option>
- <option value="Sc">Semicolons</option>
- </param>
- <param format="txt" name="input" type="data" label="in Dataset"/>
- </inputs>
- <outputs>
- <data format="tabular" name="out_file1" />
- </outputs>
- <tests>
- <test>
- <param name="convert_from" value="s"/>
- <param name="input" value="1.bed"/>
- <output name="out_file1" file="eq-convert.dat"/>
- </test>
- <test>
- <param name="convert_from" value="s"/>
- <param name="input" value="a.txt"/>
- <output name="out_file1" file="a.tab"/>
- </test>
- </tests>
- <help>
-
-**What it does**
-
-Converts all delimiters of a specified type into TABs. Consecutive characters are condensed. For example, if columns are separated by 5 spaces they will converted into 1 tab.
-
------
-
-**Example**
-
-- Input file::
-
- chrX||151283558|151283724|NM_000808_exon_8_0_chrX_151283559_r|0|-
- chrX|151370273|151370486|NM_000808_exon_9_0_chrX_151370274_r|0|-
- chrX|151559494|151559583|NM_018558_exon_1_0_chrX_151559495_f|0|+
- chrX|151564643|151564711|NM_018558_exon_2_0_chrX_151564644_f||||0|+
-
-- Converting all pipe delimiters of the above file to TABs will get::
-
- chrX 151283558 151283724 NM_000808_exon_8_0_chrX_151283559_r 0 -
- chrX 151370273 151370486 NM_000808_exon_9_0_chrX_151370274_r 0 -
- chrX 151559494 151559583 NM_018558_exon_1_0_chrX_151559495_f 0 +
- chrX 151564643 151564711 NM_018558_exon_2_0_chrX_151564644_f 0 +
-
-</help>
-</tool>
+<tool id="Convert characters1" name="Convert">
+ <description>delimiters to TAB</description>
+ <command interpreter="python">convert_characters.py $input $convert_from $out_file1</command>
+ <inputs>
+ <param name="convert_from" type="select" label="Convert all">
+ <option value="s">Whitespaces</option>
+ <option value="T">Tabs</option>
+ <!--<option value="Sp">Spaces</option>-->
+ <option value="Dt">Dots</option>
+ <option value="C">Commas</option>
+ <option value="D">Dashes</option>
+ <option value="U">Underscores</option>
+ <option value="P">Pipes</option>
+ <option value="Co">Colons</option>
+ <option value="Sc">Semicolons</option>
+ </param>
+ <param format="txt" name="input" type="data" label="in Dataset"/>
+ </inputs>
+ <outputs>
+ <data format="tabular" name="out_file1" />
+ </outputs>
+ <tests>
+ <test>
+ <param name="convert_from" value="s"/>
+ <param name="input" value="1.bed"/>
+ <output name="out_file1" file="eq-convert.dat"/>
+ </test>
+ <test>
+ <param name="convert_from" value="s"/>
+ <param name="input" value="a.txt"/>
+ <output name="out_file1" file="a.tab"/>
+ </test>
+ </tests>
+ <help>
+
+**What it does**
+
+Converts all delimiters of a specified type into TABs. Consecutive characters are condensed. For example, if columns are separated by 5 spaces they will converted into 1 tab.
+
+-----
+
+**Example**
+
+- Input file::
+
+ chrX||151283558|151283724|NM_000808_exon_8_0_chrX_151283559_r|0|-
+ chrX|151370273|151370486|NM_000808_exon_9_0_chrX_151370274_r|0|-
+ chrX|151559494|151559583|NM_018558_exon_1_0_chrX_151559495_f|0|+
+ chrX|151564643|151564711|NM_018558_exon_2_0_chrX_151564644_f||||0|+
+
+- Converting all pipe delimiters of the above file to TABs will get::
+
+ chrX 151283558 151283724 NM_000808_exon_8_0_chrX_151283559_r 0 -
+ chrX 151370273 151370486 NM_000808_exon_9_0_chrX_151370274_r 0 -
+ chrX 151559494 151559583 NM_018558_exon_1_0_chrX_151559495_f 0 +
+ chrX 151564643 151564711 NM_018558_exon_2_0_chrX_151564644_f 0 +
+
+</help>
+</tool>
https://bitbucket.org/galaxy/galaxy-central/commits/227a36a8998b/
Changeset: 227a36a8998b
User: nsoranzo
Date: 2014-10-28 15:57:49+00:00
Summary: Make stripping and condensing optional.
Affected #: 2 files
diff -r 3d7d4cf4aa121fbc84447d8905ce1fefc91ac46f -r 227a36a8998b314b04823e06734eb933f9289987 tools/filters/convert_characters.py
--- a/tools/filters/convert_characters.py
+++ b/tools/filters/convert_characters.py
@@ -1,30 +1,18 @@
#!/usr/bin/env python
#By, Guruprasad Ananda.
+import optparse
import re
-import sys
-
-def stop_err(msg):
- sys.stderr.write(msg)
- sys.exit()
-
-
-def main():
- if len(sys.argv) != 4:
- stop_err("usage: convert_characters infile from_char outfile")
-
- try:
- fin = open(sys.argv[1], 'r')
- except:
- stop_err("Input file cannot be opened for reading.")
-
- from_char = sys.argv[2]
-
- try:
- fout = open(sys.argv[3], 'w')
- except:
- stop_err("Output file cannot be opened for writing.")
+def __main__():
+ parser = optparse.OptionParser()
+ parser.add_option('--strip', action='store_true',
+ help='strip leading and trailing whitespaces')
+ parser.add_option('--condense', action='store_true',
+ help='condense consecutive delimiters')
+ (options, args) = parser.parse_args()
+ if len(args) != 3:
+ parser.error("usage: convert_characters.py infile from_char outfile")
char_dict = {
'T': '\t',
@@ -38,20 +26,26 @@
'Sc': ';'
}
# regexp to match 1 or more occurences.
- from_ch = char_dict[from_char] + '+'
+ from_char = args[1]
+ from_ch = char_dict[from_char]
+ if options.condense:
+ from_ch += '+'
+
skipped = 0
+ with open(args[0], 'rU') as fin:
+ with open(args[2], 'w') as fout:
+ for line in fin:
+ if options.strip:
+ line = line.strip()
+ else:
+ line = line.rstrip('\n')
+ try:
+ fout.write("%s\n" % (re.sub(from_ch, '\t', line)))
+ except:
+ skipped += 1
- for line in fin:
- line = line.strip()
- try:
- fout.write("%s\n" % (re.sub(from_ch, '\t', line)))
- except:
- skipped += 1
-
- fin.close()
- fout.close()
if skipped:
print "Skipped %d lines as invalid." % skipped
if __name__ == "__main__":
- main()
+ __main__()
diff -r 3d7d4cf4aa121fbc84447d8905ce1fefc91ac46f -r 227a36a8998b314b04823e06734eb933f9289987 tools/filters/convert_characters.xml
--- a/tools/filters/convert_characters.xml
+++ b/tools/filters/convert_characters.xml
@@ -1,6 +1,15 @@
<tool id="Convert characters1" name="Convert"><description>delimiters to TAB</description>
- <command interpreter="python">convert_characters.py $input $convert_from $out_file1</command>
+ <command interpreter="python">
+convert_characters.py
+#if $strip
+ --strip
+#end if
+#if $condense
+ --condense
+#end if
+$input $convert_from $out_file1
+ </command>
<inputs>
<param name="convert_from" type="select" label="Convert all">
<option value="s">Whitespaces</option>
@@ -15,19 +24,28 @@
<option value="Sc">Semicolons</option></param><param format="txt" name="input" type="data" label="in Dataset"/>
+ <param name="strip" type="boolean" checked="true" label="Strip leading and trailing whitespaces" />
+ <param name="condense" type="boolean" checked="true" label="Condense consecutive delimiters in one TAB" /></inputs><outputs><data format="tabular" name="out_file1" /></outputs>
+ <stdio>
+ <exit_code range="1:" level="fatal" />
+ </stdio>
<tests>
<test>
<param name="convert_from" value="s"/>
<param name="input" value="1.bed"/>
+ <param name="strip" value="true" />
+ <param name="condense" value="true" />
<output name="out_file1" file="eq-convert.dat"/>
</test>
<test>
<param name="convert_from" value="s"/>
<param name="input" value="a.txt"/>
+ <param name="strip" value="true" />
+ <param name="condense" value="true" />
<output name="out_file1" file="a.tab"/>
</test>
</tests>
@@ -35,7 +53,7 @@
**What it does**
-Converts all delimiters of a specified type into TABs. Consecutive characters are condensed. For example, if columns are separated by 5 spaces they will converted into 1 tab.
+Converts all delimiters of a specified type into TABs. Consecutive delimiters can be condensed in a single TAB.
-----
@@ -48,12 +66,12 @@
chrX|151559494|151559583|NM_018558_exon_1_0_chrX_151559495_f|0|+
chrX|151564643|151564711|NM_018558_exon_2_0_chrX_151564644_f||||0|+
-- Converting all pipe delimiters of the above file to TABs will get::
+- Converting all pipe delimiters of the above file to TABs and condensing delimiters will get::
chrX 151283558 151283724 NM_000808_exon_8_0_chrX_151283559_r 0 -
chrX 151370273 151370486 NM_000808_exon_9_0_chrX_151370274_r 0 -
chrX 151559494 151559583 NM_018558_exon_1_0_chrX_151559495_f 0 +
chrX 151564643 151564711 NM_018558_exon_2_0_chrX_151564644_f 0 +
-</help>
+ </help>
</tool>
https://bitbucket.org/galaxy/galaxy-central/commits/6fee2583c95d/
Changeset: 6fee2583c95d
User: nsoranzo
Date: 2014-10-28 17:45:17+00:00
Summary: Use from_work_dir instead of copying output files line by line. Write velvetg output to stdout.
Affected #: 2 files
diff -r 227a36a8998b314b04823e06734eb933f9289987 -r 6fee2583c95d5a662afef5ac1e99e2e7be810065 tools/sr_assembly/velvetg.xml
--- a/tools/sr_assembly/velvetg.xml
+++ b/tools/sr_assembly/velvetg.xml
@@ -4,7 +4,6 @@
<command interpreter="python">
velvetg_wrapper.py
'$input.extra_files_path'
- '$contigs' '$stats' '$LastGraph' '$velvet_asm' '$unused_reads_fasta'
#if $generate_amos.afg == "yes":
-amos_file $generate_amos.afg
#end if
@@ -169,20 +168,17 @@
</conditional>
</inputs>
<outputs>
- <!--
- <data format="velvet_graph" name="LastGraph" />
- -->
- <data format="txt" name="LastGraph" label="${tool.name} on ${on_string}: LastGraph">
+ <data format="txt" name="LastGraph" label="${tool.name} on ${on_string}: LastGraph" from_work_dir="LastGraph">
<filter>last_graph['generate_graph'] == "yes"</filter>
</data>
- <data format="afg" name="velvet_asm" label="${tool.name} on ${on_string}: AMOS.afg">
+ <data format="afg" name="velvet_asm" label="${tool.name} on ${on_string}: AMOS.afg" from_work_dir="velvet_asm.afg">
<filter>generate_amos['afg'] == "yes"</filter>
</data>
- <data format="fasta" name="unused_reads_fasta" label="${tool.name} on ${on_string}: Unused Reads">
+ <data format="fasta" name="unused_reads_fasta" label="${tool.name} on ${on_string}: Unused Reads" from_work_dir="UnusedReads.fa">
<filter>unused_reads['generate_unused'] == "yes"</filter>
</data>
- <data format="tabular" name="stats" label="${tool.name} on ${on_string}: Stats"/>
- <data format="fasta" name="contigs" label="${tool.name} on ${on_string}: Contigs"/>
+ <data format="tabular" name="stats" label="${tool.name} on ${on_string}: Stats" from_work_dir="stats.txt" />
+ <data format="fasta" name="contigs" label="${tool.name} on ${on_string}: Contigs" from_work_dir="contigs.fa" />
</outputs>
<requirements>
<requirement type="package">velvet</requirement>
diff -r 227a36a8998b314b04823e06734eb933f9289987 -r 6fee2583c95d5a662afef5ac1e99e2e7be810065 tools/sr_assembly/velvetg_wrapper.py
--- a/tools/sr_assembly/velvetg_wrapper.py
+++ b/tools/sr_assembly/velvetg_wrapper.py
@@ -4,14 +4,9 @@
Classes encapsulating decypher tool.
James E Johnson - University of Minnesota
"""
-import pkg_resources;
-import logging, os, string, sys, tempfile, glob, shutil, types, urllib
-import shlex, subprocess
-from optparse import OptionParser, OptionGroup
-from stat import *
-
-
-log = logging.getLogger( __name__ )
+import os
+import sys
+import subprocess
assert sys.version_info[:2] >= ( 2, 4 )
@@ -19,20 +14,15 @@
sys.stderr.write( "%s\n" % msg )
sys.exit()
+
def __main__():
#Parse Command Line
- s = 'velvetg_wrapper.py: argv = %s\n' % (sys.argv)
- # print >> sys.stderr, s # so will appear as blurb for file
- argcnt = len(sys.argv)
working_dir = sys.argv[1]
- contigs = sys.argv[2]
- stats = sys.argv[3]
- LastGraph = sys.argv[4]
- afgFile = sys.argv[5]
- unusedReadsFile = sys.argv[6]
- inputs = string.join(sys.argv[7:],' ')
- cmdline = 'velvetg %s %s > /dev/null' % (working_dir, inputs)
- # print >> sys.stderr, cmdline # so will appear as blurb for file
+ inputs = ' '.join(sys.argv[2:])
+ for _ in ('Roadmaps', 'Sequences'):
+ os.symlink(os.path.join(working_dir, _), _)
+ cmdline = 'velvetg . %s' % (inputs)
+ print "Command to be executed: %s" % cmdline
try:
proc = subprocess.Popen( args=cmdline, shell=True, stderr=subprocess.PIPE )
returncode = proc.wait()
@@ -50,41 +40,7 @@
raise Exception, stderr
except Exception, e:
stop_err( 'Error running velvetg ' + str( e ) )
- out = open(contigs,'w')
- contigs_path = os.path.join(working_dir,'contigs.fa')
- for line in open( contigs_path ):
- out.write( "%s" % (line) )
- out.close()
- out = open(stats,'w')
- stats_path = os.path.join(working_dir,'stats.txt')
- for line in open( stats_path ):
- out.write( "%s" % (line) )
- out.close()
- if LastGraph != 'None':
- out = open(LastGraph,'w')
- LastGraph_path = os.path.join(working_dir,'LastGraph')
- for line in open( LastGraph_path ):
- out.write( "%s" % (line) )
- out.close()
- if afgFile != 'None':
- out = open(afgFile,'w')
- afgFile_path = os.path.join(working_dir,'velvet_asm.afg')
- try:
- for line in open( afgFile_path ):
- out.write( "%s" % (line) )
- except:
- logging.warn( 'error reading %s' %(afgFile_path))
- pass
- out.close()
- if unusedReadsFile != 'None':
- out = open(unusedReadsFile,'w')
- unusedReadsFile_path = os.path.join(working_dir,'UnusedReads.fa')
- try:
- for line in open( unusedReadsFile_path ):
- out.write( "%s" % (line) )
- except:
- logging.info( 'error reading %s' %(unusedReadsFile_path))
- pass
- out.close()
-if __name__ == "__main__": __main__()
+
+if __name__ == "__main__":
+ __main__()
https://bitbucket.org/galaxy/galaxy-central/commits/a07e0e8dc45d/
Changeset: a07e0e8dc45d
User: nsoranzo
Date: 2014-10-28 16:15:39+00:00
Summary: Add Graph2 output dataset when "-read_trkg yes".
Affected #: 1 file
diff -r 6fee2583c95d5a662afef5ac1e99e2e7be810065 -r a07e0e8dc45de641b3c1718d19591953b0c7bb90 tools/sr_assembly/velvetg.xml
--- a/tools/sr_assembly/velvetg.xml
+++ b/tools/sr_assembly/velvetg.xml
@@ -95,7 +95,7 @@
<when value="yes"/></conditional>
- <param name="read_trkg" type="boolean" checked="false" truevalue="-read_trkg yes" falsevalue="-read_trkg no" label="track short read positions" help=" tracking of short read positions in assembly"/>
+ <param name="read_trkg" type="boolean" checked="false" truevalue="-read_trkg yes" falsevalue="-read_trkg no" label="Tracking of short read positions in assembly" help="Generates Graph2 dataset" /><conditional name="coverage"><param name="cutoff" type="select" label="Coverage cutoff" help="">
@@ -168,6 +168,9 @@
</conditional></inputs><outputs>
+ <data format="txt" name="Graph2" label="${tool.name} on ${on_string}: Graph2" from_work_dir="Graph2">
+ <filter>read_trkg is True</filter>
+ </data><data format="txt" name="LastGraph" label="${tool.name} on ${on_string}: LastGraph" from_work_dir="LastGraph"><filter>last_graph['generate_graph'] == "yes"</filter></data>
https://bitbucket.org/galaxy/galaxy-central/commits/ce6691088fd5/
Changeset: ce6691088fd5
User: jmchilton
Date: 2014-11-02 21:00:05+00:00
Summary: Merged in nsoranzo/galaxy-central (pull request #543)
Enhancements to "Convert delimiters" and "velvetg" tools.
Affected #: 4 files
diff -r ddf23ee6275837e3ca25ca5c51c43ec68a7432d8 -r ce6691088fd59ff13d2566e224551ccccd7ad569 tools/filters/convert_characters.py
--- a/tools/filters/convert_characters.py
+++ b/tools/filters/convert_characters.py
@@ -1,30 +1,18 @@
#!/usr/bin/env python
#By, Guruprasad Ananda.
+import optparse
import re
-import sys
-
-def stop_err(msg):
- sys.stderr.write(msg)
- sys.exit()
-
-
-def main():
- if len(sys.argv) != 4:
- stop_err("usage: convert_characters infile from_char outfile")
-
- try:
- fin = open(sys.argv[1], 'r')
- except:
- stop_err("Input file cannot be opened for reading.")
-
- from_char = sys.argv[2]
-
- try:
- fout = open(sys.argv[3], 'w')
- except:
- stop_err("Output file cannot be opened for writing.")
+def __main__():
+ parser = optparse.OptionParser()
+ parser.add_option('--strip', action='store_true',
+ help='strip leading and trailing whitespaces')
+ parser.add_option('--condense', action='store_true',
+ help='condense consecutive delimiters')
+ (options, args) = parser.parse_args()
+ if len(args) != 3:
+ parser.error("usage: convert_characters.py infile from_char outfile")
char_dict = {
'T': '\t',
@@ -38,20 +26,26 @@
'Sc': ';'
}
# regexp to match 1 or more occurences.
- from_ch = char_dict[from_char] + '+'
+ from_char = args[1]
+ from_ch = char_dict[from_char]
+ if options.condense:
+ from_ch += '+'
+
skipped = 0
+ with open(args[0], 'rU') as fin:
+ with open(args[2], 'w') as fout:
+ for line in fin:
+ if options.strip:
+ line = line.strip()
+ else:
+ line = line.rstrip('\n')
+ try:
+ fout.write("%s\n" % (re.sub(from_ch, '\t', line)))
+ except:
+ skipped += 1
- for line in fin:
- line = line.strip()
- try:
- fout.write("%s\n" % (re.sub(from_ch, '\t', line)))
- except:
- skipped += 1
-
- fin.close()
- fout.close()
if skipped:
print "Skipped %d lines as invalid." % skipped
if __name__ == "__main__":
- main()
+ __main__()
diff -r ddf23ee6275837e3ca25ca5c51c43ec68a7432d8 -r ce6691088fd59ff13d2566e224551ccccd7ad569 tools/filters/convert_characters.xml
--- a/tools/filters/convert_characters.xml
+++ b/tools/filters/convert_characters.xml
@@ -1,59 +1,77 @@
-<tool id="Convert characters1" name="Convert">
- <description>delimiters to TAB</description>
- <command interpreter="python">convert_characters.py $input $convert_from $out_file1</command>
- <inputs>
- <param name="convert_from" type="select" label="Convert all">
- <option value="s">Whitespaces</option>
- <option value="T">Tabs</option>
- <!--<option value="Sp">Spaces</option>-->
- <option value="Dt">Dots</option>
- <option value="C">Commas</option>
- <option value="D">Dashes</option>
- <option value="U">Underscores</option>
- <option value="P">Pipes</option>
- <option value="Co">Colons</option>
- <option value="Sc">Semicolons</option>
- </param>
- <param format="txt" name="input" type="data" label="in Dataset"/>
- </inputs>
- <outputs>
- <data format="tabular" name="out_file1" />
- </outputs>
- <tests>
- <test>
- <param name="convert_from" value="s"/>
- <param name="input" value="1.bed"/>
- <output name="out_file1" file="eq-convert.dat"/>
- </test>
- <test>
- <param name="convert_from" value="s"/>
- <param name="input" value="a.txt"/>
- <output name="out_file1" file="a.tab"/>
- </test>
- </tests>
- <help>
-
-**What it does**
-
-Converts all delimiters of a specified type into TABs. Consecutive characters are condensed. For example, if columns are separated by 5 spaces they will converted into 1 tab.
-
------
-
-**Example**
-
-- Input file::
-
- chrX||151283558|151283724|NM_000808_exon_8_0_chrX_151283559_r|0|-
- chrX|151370273|151370486|NM_000808_exon_9_0_chrX_151370274_r|0|-
- chrX|151559494|151559583|NM_018558_exon_1_0_chrX_151559495_f|0|+
- chrX|151564643|151564711|NM_018558_exon_2_0_chrX_151564644_f||||0|+
-
-- Converting all pipe delimiters of the above file to TABs will get::
-
- chrX 151283558 151283724 NM_000808_exon_8_0_chrX_151283559_r 0 -
- chrX 151370273 151370486 NM_000808_exon_9_0_chrX_151370274_r 0 -
- chrX 151559494 151559583 NM_018558_exon_1_0_chrX_151559495_f 0 +
- chrX 151564643 151564711 NM_018558_exon_2_0_chrX_151564644_f 0 +
-
-</help>
-</tool>
+<tool id="Convert characters1" name="Convert">
+ <description>delimiters to TAB</description>
+ <command interpreter="python">
+convert_characters.py
+#if $strip
+ --strip
+#end if
+#if $condense
+ --condense
+#end if
+$input $convert_from $out_file1
+ </command>
+ <inputs>
+ <param name="convert_from" type="select" label="Convert all">
+ <option value="s">Whitespaces</option>
+ <option value="T">Tabs</option>
+ <!--<option value="Sp">Spaces</option>-->
+ <option value="Dt">Dots</option>
+ <option value="C">Commas</option>
+ <option value="D">Dashes</option>
+ <option value="U">Underscores</option>
+ <option value="P">Pipes</option>
+ <option value="Co">Colons</option>
+ <option value="Sc">Semicolons</option>
+ </param>
+ <param format="txt" name="input" type="data" label="in Dataset"/>
+ <param name="strip" type="boolean" checked="true" label="Strip leading and trailing whitespaces" />
+ <param name="condense" type="boolean" checked="true" label="Condense consecutive delimiters in one TAB" />
+ </inputs>
+ <outputs>
+ <data format="tabular" name="out_file1" />
+ </outputs>
+ <stdio>
+ <exit_code range="1:" level="fatal" />
+ </stdio>
+ <tests>
+ <test>
+ <param name="convert_from" value="s"/>
+ <param name="input" value="1.bed"/>
+ <param name="strip" value="true" />
+ <param name="condense" value="true" />
+ <output name="out_file1" file="eq-convert.dat"/>
+ </test>
+ <test>
+ <param name="convert_from" value="s"/>
+ <param name="input" value="a.txt"/>
+ <param name="strip" value="true" />
+ <param name="condense" value="true" />
+ <output name="out_file1" file="a.tab"/>
+ </test>
+ </tests>
+ <help>
+
+**What it does**
+
+Converts all delimiters of a specified type into TABs. Consecutive delimiters can be condensed in a single TAB.
+
+-----
+
+**Example**
+
+- Input file::
+
+ chrX||151283558|151283724|NM_000808_exon_8_0_chrX_151283559_r|0|-
+ chrX|151370273|151370486|NM_000808_exon_9_0_chrX_151370274_r|0|-
+ chrX|151559494|151559583|NM_018558_exon_1_0_chrX_151559495_f|0|+
+ chrX|151564643|151564711|NM_018558_exon_2_0_chrX_151564644_f||||0|+
+
+- Converting all pipe delimiters of the above file to TABs and condensing delimiters will get::
+
+ chrX 151283558 151283724 NM_000808_exon_8_0_chrX_151283559_r 0 -
+ chrX 151370273 151370486 NM_000808_exon_9_0_chrX_151370274_r 0 -
+ chrX 151559494 151559583 NM_018558_exon_1_0_chrX_151559495_f 0 +
+ chrX 151564643 151564711 NM_018558_exon_2_0_chrX_151564644_f 0 +
+
+ </help>
+</tool>
diff -r ddf23ee6275837e3ca25ca5c51c43ec68a7432d8 -r ce6691088fd59ff13d2566e224551ccccd7ad569 tools/sr_assembly/velvetg.xml
--- a/tools/sr_assembly/velvetg.xml
+++ b/tools/sr_assembly/velvetg.xml
@@ -4,7 +4,6 @@
<command interpreter="python">
velvetg_wrapper.py
'$input.extra_files_path'
- '$contigs' '$stats' '$LastGraph' '$velvet_asm' '$unused_reads_fasta'
#if $generate_amos.afg == "yes":
-amos_file $generate_amos.afg
#end if
@@ -96,7 +95,7 @@
<when value="yes"/></conditional>
- <param name="read_trkg" type="boolean" checked="false" truevalue="-read_trkg yes" falsevalue="-read_trkg no" label="track short read positions" help=" tracking of short read positions in assembly"/>
+ <param name="read_trkg" type="boolean" checked="false" truevalue="-read_trkg yes" falsevalue="-read_trkg no" label="Tracking of short read positions in assembly" help="Generates Graph2 dataset" /><conditional name="coverage"><param name="cutoff" type="select" label="Coverage cutoff" help="">
@@ -169,20 +168,20 @@
</conditional></inputs><outputs>
- <!--
- <data format="velvet_graph" name="LastGraph" />
- -->
- <data format="txt" name="LastGraph" label="${tool.name} on ${on_string}: LastGraph">
+ <data format="txt" name="Graph2" label="${tool.name} on ${on_string}: Graph2" from_work_dir="Graph2">
+ <filter>read_trkg is True</filter>
+ </data>
+ <data format="txt" name="LastGraph" label="${tool.name} on ${on_string}: LastGraph" from_work_dir="LastGraph"><filter>last_graph['generate_graph'] == "yes"</filter></data>
- <data format="afg" name="velvet_asm" label="${tool.name} on ${on_string}: AMOS.afg">
+ <data format="afg" name="velvet_asm" label="${tool.name} on ${on_string}: AMOS.afg" from_work_dir="velvet_asm.afg"><filter>generate_amos['afg'] == "yes"</filter></data>
- <data format="fasta" name="unused_reads_fasta" label="${tool.name} on ${on_string}: Unused Reads">
+ <data format="fasta" name="unused_reads_fasta" label="${tool.name} on ${on_string}: Unused Reads" from_work_dir="UnusedReads.fa"><filter>unused_reads['generate_unused'] == "yes"</filter></data>
- <data format="tabular" name="stats" label="${tool.name} on ${on_string}: Stats"/>
- <data format="fasta" name="contigs" label="${tool.name} on ${on_string}: Contigs"/>
+ <data format="tabular" name="stats" label="${tool.name} on ${on_string}: Stats" from_work_dir="stats.txt" />
+ <data format="fasta" name="contigs" label="${tool.name} on ${on_string}: Contigs" from_work_dir="contigs.fa" /></outputs><requirements><requirement type="package">velvet</requirement>
diff -r ddf23ee6275837e3ca25ca5c51c43ec68a7432d8 -r ce6691088fd59ff13d2566e224551ccccd7ad569 tools/sr_assembly/velvetg_wrapper.py
--- a/tools/sr_assembly/velvetg_wrapper.py
+++ b/tools/sr_assembly/velvetg_wrapper.py
@@ -4,14 +4,9 @@
Classes encapsulating decypher tool.
James E Johnson - University of Minnesota
"""
-import pkg_resources;
-import logging, os, string, sys, tempfile, glob, shutil, types, urllib
-import shlex, subprocess
-from optparse import OptionParser, OptionGroup
-from stat import *
-
-
-log = logging.getLogger( __name__ )
+import os
+import sys
+import subprocess
assert sys.version_info[:2] >= ( 2, 4 )
@@ -19,20 +14,15 @@
sys.stderr.write( "%s\n" % msg )
sys.exit()
+
def __main__():
#Parse Command Line
- s = 'velvetg_wrapper.py: argv = %s\n' % (sys.argv)
- # print >> sys.stderr, s # so will appear as blurb for file
- argcnt = len(sys.argv)
working_dir = sys.argv[1]
- contigs = sys.argv[2]
- stats = sys.argv[3]
- LastGraph = sys.argv[4]
- afgFile = sys.argv[5]
- unusedReadsFile = sys.argv[6]
- inputs = string.join(sys.argv[7:],' ')
- cmdline = 'velvetg %s %s > /dev/null' % (working_dir, inputs)
- # print >> sys.stderr, cmdline # so will appear as blurb for file
+ inputs = ' '.join(sys.argv[2:])
+ for _ in ('Roadmaps', 'Sequences'):
+ os.symlink(os.path.join(working_dir, _), _)
+ cmdline = 'velvetg . %s' % (inputs)
+ print "Command to be executed: %s" % cmdline
try:
proc = subprocess.Popen( args=cmdline, shell=True, stderr=subprocess.PIPE )
returncode = proc.wait()
@@ -50,41 +40,7 @@
raise Exception, stderr
except Exception, e:
stop_err( 'Error running velvetg ' + str( e ) )
- out = open(contigs,'w')
- contigs_path = os.path.join(working_dir,'contigs.fa')
- for line in open( contigs_path ):
- out.write( "%s" % (line) )
- out.close()
- out = open(stats,'w')
- stats_path = os.path.join(working_dir,'stats.txt')
- for line in open( stats_path ):
- out.write( "%s" % (line) )
- out.close()
- if LastGraph != 'None':
- out = open(LastGraph,'w')
- LastGraph_path = os.path.join(working_dir,'LastGraph')
- for line in open( LastGraph_path ):
- out.write( "%s" % (line) )
- out.close()
- if afgFile != 'None':
- out = open(afgFile,'w')
- afgFile_path = os.path.join(working_dir,'velvet_asm.afg')
- try:
- for line in open( afgFile_path ):
- out.write( "%s" % (line) )
- except:
- logging.warn( 'error reading %s' %(afgFile_path))
- pass
- out.close()
- if unusedReadsFile != 'None':
- out = open(unusedReadsFile,'w')
- unusedReadsFile_path = os.path.join(working_dir,'UnusedReads.fa')
- try:
- for line in open( unusedReadsFile_path ):
- out.write( "%s" % (line) )
- except:
- logging.info( 'error reading %s' %(unusedReadsFile_path))
- pass
- out.close()
-if __name__ == "__main__": __main__()
+
+if __name__ == "__main__":
+ __main__()
Repository URL: https://bitbucket.org/galaxy/galaxy-central/
--
This is a commit notification from bitbucket.org. You are receiving
this because you have the service enabled, addressing the recipient of
this email.