It is through a server directory upload.
Not sure why it should fail on its own (unless you mean there's a
Sure, I will start up the Galaxy instance again to see if it's still
running the slow upload, and give feedback here again.
Are you using PBS, SGE, or another cluster job runner? Otherwise, these
jobs should automatically be set to the "error" state upon server startup.
Are you sure that they actually run (at the command line), or are they
just stuck in the "running" state in the library interface?
There was a bug fixed a while back that could be preventing these jobs
from being set to error upon server startup. If you're running an older
revision of Galaxy, I would suggest updating.
If you send us which database you're using (just the database type as in
SQLite, Postgres, or MySQL, not the database itself), I can send you
appropriate SQL to fix the job state. Unfortunately there is no
interface in Galaxy to correct these.
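For anyone hitting the same problem, the fix amounts to a one-line UPDATE against the jobs table. The sketch below demonstrates it with Python's sqlite3 against an in-memory stand-in; the `job` table name and `state` column are assumptions about Galaxy's schema and may differ between revisions, so stop the server and back up the database before running anything like this for real.

```python
import sqlite3

# Minimal sketch, assuming Galaxy stores jobs in a `job` table with a
# `state` column (values like 'running', 'error'). This schema is an
# assumption; check your own database before applying the UPDATE.
conn = sqlite3.connect(":memory:")  # stand-in for the real Galaxy database
conn.execute("CREATE TABLE job (id INTEGER PRIMARY KEY, state TEXT)")
conn.execute("INSERT INTO job (state) VALUES ('running')")  # the stuck upload

# Mark every job stuck in 'running' as 'error' so Galaxy stops recovering it
conn.execute("UPDATE job SET state = 'error' WHERE state = 'running'")
conn.commit()

print(conn.execute("SELECT state FROM job WHERE id = 1").fetchone()[0])
```

For Postgres or MySQL the same UPDATE statement applies through psql or the mysql client; the only part that changes is how you connect.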
On 13-Oct-2010, at 1:23 AM, Jennifer Jackson <jen(a)bx.psu.edu> wrote:
>Sorry for misunderstanding your question the first time ... you
>are asking about a library upload (not a dataset).
>In this case, you will have to manually change the job state in
>the database to 'error' to get Galaxy to stop recovering it at
>startup.
>Our developers are curious whether you are using a cluster job
>runner and a URL or server directory upload, though, since
>otherwise the job should have failed on its own. If there is a
>case through Galaxy's main tools where it doesn't fail, we'd like
>to track that down if you have time to help.
>On 10/4/10 6:34 PM, Kevin Lam wrote:
>>I made the mistake of uploading a large file via the file browser
>>in the data library function. How do I cancel it? It resumes the
>>upload whenever I start the local instance.
>>galaxy-user mailing list