Hi,

We are trying to configure Galaxy to work with our tool. So far I have been able to configure it and link it with the cluster so that all jobs are processed there, as described in the documentation. Now we would like the validation of the input files to run locally before the actual processing starts, with the job processing itself done on the cluster. Is there any way we can configure this? We are using SGE on the cluster.

Thanks a lot.

Regards,
Amit Modi

-----------------------------------------
Confidentiality Notice: The following mail message, including any attachments, is for the sole use of the intended recipient(s) and may contain confidential and privileged information. The recipient is responsible to maintain the confidentiality of this information and to use the information only for authorized purposes. If you are not the intended recipient (or authorized to receive information for the intended recipient), you are hereby notified that any review, use, disclosure, distribution, copying, printing, or action taken in reliance on the contents of this e-mail is strictly prohibited. If you have received this communication in error, please notify us immediately by reply e-mail and destroy all copies of the original message. Thank you.
Modi, Amit wrote:
Hi Amit,

I'm not sure what you mean by input file validation? You can have metadata set on the cluster by setting 'set_metadata_externally = True' in the config file. Additionally, the data source tools like Upload and UCSC can run on the cluster, providing the following are true:

1. The default directory used by Python's tempfile module is cluster-accessible (usually /tmp, so you can change it by setting $TEMP in the environment).

2. The cluster nodes have access to the Internet (NAT is fine).

If this is not what you're looking for, please let us know.

--nate
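For reference, the pieces mentioned here might look roughly like this in the Galaxy configuration. The 'set_metadata_externally' option is quoted from the reply above; the file name (universe_wsgi.ini), the section names, and the per-tool runner mapping are assumptions about 2010-era Galaxy installs and should be checked against your own config:

```ini
# universe_wsgi.ini -- sketch only; verify option and section names
# against your own Galaxy install.

[app:main]
# Set dataset metadata on the cluster instead of in the Galaxy server process:
set_metadata_externally = True

[galaxy:tool_runners]
# Assumed per-tool mapping: run a specific tool locally while the default
# runner sends everything else to SGE.
upload1 = local:///
```

For the tempfile requirement, exporting something like TEMP=/cluster/shared/tmp in the environment Galaxy and its job scripts start under would point Python's tempfile module at a cluster-visible directory.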
_______________________________________________ galaxy-dev mailing list galaxy-dev@lists.bx.psu.edu http://lists.bx.psu.edu/listinfo/galaxy-dev
Hi,

I am almost done with integrating our tool with Galaxy. It would be really great if you could help solve this issue, as I was unable to find anything related to it on the wiki.

Our tool produces output messages during its execution. For some reason Galaxy splits them into two categories and displays them as additional output and error messages, though both are standard output messages and should be displayed as additional output.

Here is a snapshot of the message that is displayed:

Dataset 52: PPL File
Tool execution generated the following error message:
10/04/21 12:14:26 @0s, Initializing
10/04/21 12:14:26 @0s, Performing analysis
10/04/21 12:14:30 @4s, Analysis complete
10/04/21 12:14:30 @4s, Finished run

The tool produced the following additional output:
10/04/21 12:14:26 Use segment ID 4620291 for diagnostic purposes
10/04/21 12:14:26 kelvin V0.38.0 edit 2805 built Mar 25 2010 12:13:28
10/04/21 12:14:26 Compiler 4.1.1 20070105 (Red Hat 4.1.1-52)
10/04/21 12:14:26 Using GNU Scientific Library (GSL) statistical functions instead of internal ones
10/04/21 12:14:26 OpenMP-enabled w/maximum of 4 thread(s).
10/04/21 12:14:26 GCC optimization level 2 enabled
10/04/21 12:14:26 To check status (at some risk), type CTRL-\ or type "kill -3 28022"
10/04/21 12:14:26 Using configuration file /export/home/galaxy/galaxy-dist/database/files/000/dataset_151.dat
10/04/21 12:14:26 Computation is done in polynomial mode
10/04/21 12:14:26 polynomialScale is 1 (1-10, 1 is default)
10/04/21 12:14:26 Integration is done numerically (dkelvin)
10/04/21 12:14:26 Further progress will be displayed at 120 second intervals
10/04/21 12:14:26 9 pair(s) over a dynamic space (1LC) for 11 pedigree(s)
10/04/21 12:14:26 Trait-to-marker Two-Point, Dichotomous Trait, Equilibrium.
10/04/21 12:14:30 stopwatch overall(1) e:4s u:8s s:1s, vx:205780, ivx:15961, sf:14225, hf:0

Also, the tool executes just fine and produces the correct output files. I can verify these files by looking in the galaxy/galaxy-dist/database/files/000/ directory. But for some reason they are not displayed on the Galaxy web interface, and this error message is shown along with a failed status.

Thanks a lot.

Regards,
Amit Modi
Modi, Amit wrote:
Dataset 52: PPL File
Tool execution generated the following error message:
10/04/21 12:14:26 @0s, Initializing
10/04/21 12:14:26 @0s, Performing analysis
10/04/21 12:14:30 @4s, Analysis complete
10/04/21 12:14:30 @4s, Finished run
Hi Amit,

Are you sure that these messages are being printed to stdout? Galaxy seems to be reading them from stderr.

--nate
Modi, Amit wrote:
Hi,

Thanks a lot for the prompt reply.

In the normal course of execution our tool produces messages on both stdout and stderr, but it exits with a success status when it completes successfully.

So does that mean Galaxy will always interpret this as a job failure?
In this case, yes.
Is there any workaround for this?
The simplest solution is a small wrapper that will suppress the messages from stderr. We've found that applications may not set a non-zero exit code upon failure, so looking for things written to stderr is the most reliable method for detecting failure.

Since this is your own app and you know it sets proper return codes, the best thing to do would be to suppress stderr if the program returns 0; otherwise, print them as usual.

--nate
On Wed, Apr 21, 2010 at 01:46:33PM -0400, Nate Coraor wrote:
The simplest solution is a small wrapper that will suppress the messages from stderr. We've found that applications may not set a non-zero exit code upon failure, so looking for things written to stderr is the most reliable method for detecting failure.
If you're employing a shell script wrapper around your tool in the tool runner, as many seem to do, you can just add "2>&1" to the end of your command. For example, in the /galaxy-dist/tools/plotting/r_wrapper.sh file, the line that looks like:

R --vanilla --slave $* < $infile

would change to:

R --vanilla --slave $* < $infile 2>&1

--
Ry4an Brase 612-626-6575
University of Minnesota Supercomputing Institute for Advanced Computational Research
http://www.msi.umn.edu
Ry4an Brase wrote:
If you're employing a shell script wrapper around your tool in the tool runner, as many seem to do, you can just add "2>&1" to the end of your command. For example in the /galaxy-dist/tools/plotting/r_wrapper.sh file the line that looks like:
R --vanilla --slave $* < $infile
would change to
R --vanilla --slave $* < $infile 2>&1
Hi Ry4an,

While this would indeed fix the problem of the job failing, it'd prevent any failures from ever being detected. What we've done for tools that don't set a return code is attempt to parse the output for errors.

--nate
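A minimal sketch of that parse-the-output approach: a wrapper captures stderr and greps it for likely error text, deciding the job's fate regardless of the tool's exit code. The keyword list and the "my_tool" stand-in are invented here for illustration; a real wrapper would tune the patterns to the tool's actual messages.

```shell
#!/bin/sh
# Sketch only (not code from this thread): detect failure by scanning the
# captured stderr of a tool, for programs that exit 0 even when they fail.

# Succeeds (exit 0) when the captured stderr file contains likely error text.
stderr_indicates_failure() {
    grep -q -i -E 'error|exception|fatal|segmentation fault' "$1"
}

TMP_STDERR=$(mktemp) || exit 1

# "my_tool" stands in for the real executable; it prints a harmless
# progress line to stderr, like the kelvin output in this thread.
my_tool() { echo "10/04/21 12:14:26 @0s, Initializing" >&2; }
my_tool 2> "$TMP_STDERR"

if stderr_indicates_failure "$TMP_STDERR"; then
    STATUS=1    # a real wrapper would: cat "$TMP_STDERR" >&2 and exit 1
else
    STATUS=0
fi
rm -f "$TMP_STDERR"
echo "detected status: $STATUS"
```

Since only recognized error text reaches Galaxy's stderr, progress chatter no longer marks the job as failed, while genuine errors still do.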
On Wed, Apr 21, 2010 at 01:58:40PM -0400, Nate Coraor wrote:
Ry4an Brase wrote:
If you're employing a shell script wrapper around your tool in the tool runner, as many seem to do, you can just add "2>&1" to the end of your command.
Hi Ry4an,
While this would indeed fix the problem of the job failing, it'd prevent any failures from ever being detected. What we've done for tools that don't set a return code is attempt to parse the output for errors.
Absolutely. I was suggesting squelching stderr only for his tool specifically, because he says he does return a valid exit code on error. It's certainly not a good general solution, though.

Having a tag in the <tool>...</tool> config files which indicates what counts as an error, (a) nothing, (b) non-zero exit, (c) text on stderr, or (d) either, with (d) being the default, might be a nice addition in the future.

--
Ry4an Brase 612-626-6575
University of Minnesota Supercomputing Institute for Advanced Computational Research
http://www.msi.umn.edu
Ry4an Brase wrote:
Absolutely. I was suggesting squelching stderr only for his tool specifically, because he says he does return a valid exit code on error. It's certainly not a good general solution, though.
Having a tag in the <tool>...</tool> config files which indicates what counts as an error, (a) nothing, (b) non-zero exit, (c) text on stderr, or (d) either, with (d) being the default, might be a nice addition in the future.
This has been suggested in the past as well; we pretty much settled on it, but it has yet to be implemented. Here's a ticket I created for it:

http://bitbucket.org/galaxy/galaxy-central/issue/325

--nate
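Purely as a hypothetical illustration of Ry4an's idea (the attribute name and its values are invented here and were not Galaxy syntax at the time; the actual design discussion is in the ticket), such a tag might look like:

```xml
<!-- Hypothetical sketch only: detect_errors and its values are invented
     to illustrate the proposal, not real Galaxy tool-config syntax. -->
<tool id="my_tool" name="My Tool">
  <!-- what counts as failure: "none", "exit_code", "stderr", or "both" -->
  <command detect_errors="both">my_tool '$input' '$output'</command>
</tool>
```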
On Wed, Apr 21, 2010 at 02:15:37PM -0400, Nate Coraor wrote:
Ry4an Brase wrote:
Having a tag in the <tool>...</tool> config files which indicates what counts as an error (a) nothing, (b) non-zero exit, (c) text on stdout, or (d) either, with D being the default might be a nice addition in the future.
This has been suggested in the past as well, we pretty much settled on it but it has yet to be implemented.
Here's a ticket I created for it:
Cool, I'll pick that up as a background task, as it seems generically useful and within my reach.

--
Ry4an Brase 612-626-6575
University of Minnesota Supercomputing Institute for Advanced Computational Research
http://www.msi.umn.edu
Nate Coraor wrote, On 04/21/2010 01:58 PM:
Ry4an Brase wrote:
would change to
R --vanilla --slave $* < $infile 2>&1
While this would indeed fix the problem of the job failing, it'd prevent any failures from ever being detected. What we've done for tools that don't set a return code is attempt to parse the output for errors.
We use shell code similar to the following to temporarily store STDERR, and report it back to Galaxy based on the program's exit code:

=======
#!/bin/sh

# Temporary storage for STDERR
TMP_STDERR=$(mktemp) || exit 1

# Run the program, send STDERR to temporary file
MY_PROGRAM 2> "$TMP_STDERR"

# Check the program's exit code
if [ $? -ne 0 ]; then
    # Program failed, send STDERR to real STDERR
    cat "$TMP_STDERR" >&2
    rm "$TMP_STDERR"
    exit 1
fi

# Program succeeded, delete STDERR file
rm "$TMP_STDERR"
exit 0
=======

-gordon
Another simple way to respect exit codes, if you don't care that all stderr produced by a program gets redirected to stdout, is to do something like:

<command>executable '$output_file' '$input_file' 2>&1 || echo "Error running executable." >&2</command>
Hi, So a small question would galaxy recognize the job failure in case of no std error message and unsuccessful exit code Or it relies only on the std error for checking the successful / unsuccessful job completion. Regards, Amit Modi On 4/21/10 1:46 PM, "Nate Coraor" <nate@bx.psu.edu> wrote: Modi, Amit wrote:
Hi ,
Thanks a lot for the prompt reply.
Our tool in the normal course of execution produce messages on both stdout and stderr.But exits with a success status when successfully executed
So, does that means galaxy will always interpret this as job failure .
In this case, yes.
Is there any work around for this ?
The simplest solution is a small wrapper that will suppress the messages from stderr. We've found that applications may not set a non-zero exit code upon failure, so looking for things written to stderr is the most reliable method for detecting failure. Since this is your own app and you know it sets proper return codes, the best thing to do would be to suppress stderr if the program returns 0, otherwise, print them as usual. --nate
Regards, Amit Modi
On 4/21/10 1:13 PM, "Nate Coraor" <nate@bx.psu.edu> wrote:
Modi, Amit wrote: > Hi, > > I am almost done with integrating our tool with galaxy. > > It would be really great if you can help solve this issue as I was > unable find anything related to this on the wiki. > > Our tool produces some output messages during its execution due to some > reason galaxy splits them into 2 categories and displays as additional > output and error messages though both of them are standard output > messages ad should be displayed as additional output . > > > Here is the snapshot of the message that is displayed. > > *Dataset 52: PPL File > * > Tool execution generated the following error message: > 10/04/21 12:14:26 @0s, Initializing > 10/04/21 12:14:26 @0s, Performing analysis > 10/04/21 12:14:30 @4s, Analysis complete > 10/04/21 12:14:30 @4s, Finished run
Hi Amit,
Are you sure that these messages are being printed to stdout? Galaxy seems to be reading them from stderr.
--nate
Modi, Amit wrote:
Hi,
So, a small question: would Galaxy recognize a job failure when there is no stderr message but an unsuccessful exit code?
Or does it rely only on stderr to determine whether a job completed successfully?
This is the case. At present, only stderr matters; return codes are always ignored. See the followup message by Assaf Gordon for a wrapper that will do exactly what you need.
On 4/21/10 1:46 PM, "Nate Coraor" <nate@bx.psu.edu> wrote:
Modi, Amit wrote:
Hi,
Thanks a lot for the prompt reply.
Our tool, in the normal course of execution, produces messages on both stdout and stderr, but exits with a success status when it executes successfully.
So, does that mean Galaxy will always interpret this as a job failure?
In this case, yes.
Is there any workaround for this?
The simplest solution is a small wrapper that will suppress the messages from stderr. We've found that applications may not set a non-zero exit code upon failure, so looking for things written to stderr is the most reliable method for detecting failure.
Since this is your own app and you know it sets proper return codes, the best thing to do would be to suppress stderr if the program returns 0, otherwise, print them as usual.
--nate
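The approach described above can be sketched as a small shell wrapper. This is only an illustration; `run_wrapped` and the `sh -c` commands below are stand-ins for the real tool:

```shell
#!/bin/sh
# Sketch of the stderr-suppressing wrapper described above. Galaxy treats
# anything written to stderr as a job failure, so the wrapper captures
# stderr and only forwards it when the tool's exit code is non-zero.
run_wrapped() {
    errfile=$(mktemp)
    "$@" 2> "$errfile"
    rc=$?
    if [ "$rc" -ne 0 ]; then
        cat "$errfile" >&2   # real failure: let Galaxy see the messages
    fi
    rm -f "$errfile"
    return $rc
}

# Demonstration with stand-ins for the real tool; capture what the
# wrapper lets through on stderr in each case:
quiet=$( { run_wrapped sh -c 'echo "routine chatter" >&2; exit 0'; } 2>&1 )
noisy=$( { run_wrapped sh -c 'echo "broken" >&2; exit 1'; } 2>&1 )
echo "success case stderr: [$quiet]"   # empty: chatter was suppressed
echo "failure case stderr: [$noisy]"   # failure message was passed through
```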
Modi, Amit wrote:
Hi,
Thanks for your help.
I am able to run the jobs using the wrapper script, but the problem is that instead of getting the stdout message in the main screen panel,
I am getting the message in the green panel (on the right) that displays the output files.
Hi Amit,
This is standard behavior. Galaxy always displays the stdout in the history item itself. If the wrapped tool outputs data to stdout rather than messages, you'll need to use something like '1>$output_file' to write it to the output file defined in the tool's XML configuration.
--nate
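A tiny illustration of the suggested redirect; `results.dat` is a placeholder for the `$output_file` path that Galaxy substitutes into the tool's command line:

```shell
#!/bin/sh
# Sketch: send the tool's stdout into the output dataset file instead of
# letting it land in the history item. The echo command stands in for the
# real tool, and results.dat stands in for Galaxy's $output_file.
sh -c 'echo "computed results"' 1> results.dat
cat results.dat   # the data now lives in the file, not on the tool's stdout
```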
Here is the script that I wrote:
/usr/local/bin/kelvin-0.38.0 $1 --PedigreeFile $2 --MapFile $3 --FrequencyFile $4 --LocusFile $5 --BayesRatioFile $6 --PPLFile $7 --MODFile $8 2>&1
if [ "$?" = "0" ]; then
    exit 0
    #echo "Script executed successfully"
else
    exit 1
    #echo "Script failed"
fi
Thanks.
Regards, Amit Modi
Hi,
I am trying to create a datatype for taking input for our tool. It should take one primary file plus four composite files. I have read the documentation and created a datatype called "conf":

class Configuration( Data ):
    file_ext = "conf"
    composite_type = "basic"
    MetadataElement( name="data_lines", default=4, desc="Number of data lines", readonly=True, optional=True, visible=True, no_value=0 )
    def __init(self, **kwd):
        Data.__init(self,**kwd)
        self.add_composite_file('mapfile.txt')
        self.add_composite_file('frequenctfile.txt')
        self.add_composite_file('datafile.txt')
        self.add_composite_file('pedfile.txt')
    def init_meta(self, dataset, copy_from=None):
        Data.init_meta(self, dataset, copy_from=copy_from)
    def generate_primary_file(self, dataset=None):
        rval = ['<html><head><title>Files for Composite Dataset (%s)</title></head><p/>This composite dataset is composed of the following files:<p/><ul>' % ( self.file_ext )]
        for composite_name, composite_file in self.get_composite_files( dataset=dataset ).iteritems():
            opt_text = ''
            if composite_file.optional:
                opt_text = ' (optional)'
            rval.append( '<li><a href="%s">%s</a>%s' % ( composite_name, composite_name, opt_text ) )
        rval.append( '</ul></html>' )
        return "\n".join( rval )

But after doing all this, when I go to the text file formats in Upload File and click on the "conf" file format, only one file-selection box is created, though I need five such boxes: one for the primary file and four for the sub-files.
Thanks a lot.
Regards, Amit Modi
Modi, Amit wrote:
Hi,
I am almost done with integrating our tool with galaxy.
It would be really great if you could help solve this issue, as I was unable to find anything related to it on the wiki.
Our tool produces some output messages during its execution. For some reason, Galaxy splits them into two categories and displays them as additional output and error messages, though both are standard output messages and should be displayed as additional output.
Here is the snapshot of the message that is displayed.
Dataset 52: PPL File
Tool execution generated the following error message:
10/04/21 12:14:26 @0s, Initializing
10/04/21 12:14:26 @0s, Performing analysis
10/04/21 12:14:30 @4s, Analysis complete
10/04/21 12:14:30 @4s, Finished run
Hi Amit,
Are you sure that these messages are being printed to stdout? Galaxy seems to be reading them from stderr.
--nate
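One quick way to check which stream the messages are on is to discard stderr and see what remains; the `sh -c` command below is a stand-in for the real tool:

```shell
#!/bin/sh
# Discard stderr: anything still visible came from stdout.
kept=$(sh -c 'echo "on stdout"; echo "on stderr" >&2' 2>/dev/null)
echo "$kept"   # only the stdout line survives
```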
The tool produced the following additional output:
10/04/21 12:14:26 Use segment ID 4620291 for diagnostic purposes
10/04/21 12:14:26 kelvin V0.38.0 edit 2805 built Mar 25 2010 12:13:28
10/04/21 12:14:26 Compiler 4.1.1 20070105 (Red Hat 4.1.1-52)
10/04/21 12:14:26 Using GNU Scientific Library (GSL) statistical functions instead of internal ones
10/04/21 12:14:26 OpenMP-enabled w/maximum of 4 thread(s).
10/04/21 12:14:26 GCC optimization level 2 enabled
10/04/21 12:14:26 To check status (at some risk), type CTRL-\ or type "kill -3 28022"
10/04/21 12:14:26 Using configuration file /export/home/galaxy/galaxy-dist/database/files/000/dataset_151.dat
10/04/21 12:14:26 Computation is done in polynomial mode
10/04/21 12:14:26 polynomialScale is 1 (1-10, 1 is default)
10/04/21 12:14:26 Integration is done numerically (dkelvin)
10/04/21 12:14:26 Further progress will be displayed at 120 second intervals
10/04/21 12:14:26 9 pair(s) over a dynamic space (1LC) for 11 pedigree(s)
10/04/21 12:14:26 Trait-to-marker Two-Point, Dichotomous Trait, Equilibrium.
10/04/21 12:14:30 stopwatch overall(1) e:4s u:8s s:1s, vx:205780, ivx:15961, sf:14225, hf:0
Also, the tool executes just fine and also produces the correct output files.
I can verify these files by looking into the galaxy/galaxy-dist/database/files/000/ directory.
But for some reason these files are not displayed in the web interface of Galaxy, and this error message is displayed along with a failed status.
Thanks a lot.
Regards, Amit Modi
On 4/19/10 11:34 AM, "Nate Coraor" <nate@bx.psu.edu> wrote:
Modi, Amit wrote:
Hi,
We are trying to configure Galaxy to work with our tool.
So far I have been able to configure it and link it with the cluster to process all the jobs as per the documentation.
Now what we want is that, before the actual processing starts, the validation of the input files is done locally, while the job processing is done on the cluster.
Is there any way we can configure this?
We are using SGE on the cluster.
Hi Amit,
I'm not sure what you mean by input file validation? You can have metadata set on the cluster by setting 'set_metadata_externally = True' in the config file. Additionally, the data source tools like Upload and UCSC can run on the cluster, providing the following are true:
1. The default directory used by Python's tempfile module is cluster-accessible (usually /tmp, so you can change it by setting $TEMP in the environment).
2. The cluster nodes have access to the Internet (NAT is fine).
If this is not what you're looking for, please let us know.
--nate
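For point 1, Python's tempfile module consults $TMPDIR, then $TEMP, then $TMP when picking its default directory, so a cluster-visible path can be selected in the environment Galaxy starts from. A sketch, using a throwaway local directory as a stand-in for a real shared path:

```shell
#!/bin/sh
# Sketch: point Python's tempfile module at a cluster-shared directory.
# SHARED_TMP here is a freshly made local directory standing in for a
# path that all cluster nodes can see (e.g. an NFS mount).
SHARED_TMP=$(mktemp -d)
unset TMPDIR                 # TMPDIR, if set, takes precedence over TEMP
export TEMP="$SHARED_TMP"
picked=$(python3 -c 'import tempfile; print(tempfile.gettempdir())')
echo "$picked"               # the chosen default temp directory
```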
Thanks a lot.
Regards, Amit Modi
participants (5)
- Assaf Gordon
- Daniel Blankenberg
- Modi, Amit
- Nate Coraor
- Ry4an Brase