I set it to run on the cluster:

[galaxy@bic galaxy-dist]$ grep upload1 universe_wsgi.runner.ini
#upload1 = local:///

On Thu, Jan 5, 2012 at 11:33 AM, Nate Coraor <nate@bx.psu.edu> wrote:
On Jan 5, 2012, at 11:29 AM, Ryan Golhar wrote:
On Jan 4, 2012, at 6:44 PM, Ryan Golhar wrote:
On Wed, Jan 4, 2012 at 5:17 PM, Ryan Golhar <ngsbioinformatics@gmail.com> wrote:

I'm adding Data Libraries to my local Galaxy instance. I'm doing this by importing directories that contain bam and bai files. I see the bam/bai files get added on the admin page, and the Message is "This job is running". qstat shows the job run and complete. I checked my runner0.log and it registers the PBS job completing successfully. But the web page never updates. I tried to refresh the page by navigating away from it and then back, but it still reads "This job is running". How do I fix this?
Some more information... I checked my head node and I see samtools is running there. It's running 'samtools index'. So two problems:
1) samtools is not using the cluster. I assume this is a configuration setting somewhere.
See set_metadata_externally in universe_wsgi.ini. This should be set to True to run on the cluster.
If you haven't seen the rest of the production server documentation, see http://usegalaxy.org/production
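[For reference, a minimal sketch of the setting Nate refers to, as it would appear in universe_wsgi.ini; the comment wording is paraphrased, not copied from the poster's actual file:

    # Set metadata on datasets externally (as part of the job) rather than in the
    # Galaxy server process, so steps like 'samtools index' run on the cluster.
    set_metadata_externally = True
]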
This is already set. I set it in universe_wsgi.ini (and in universe_wsgi.webapp.ini and universe_wsgi.runner.ini, since I'm using a proxy server and load balancer with Apache). This was one of the first things I set up.
Does the upload tool run on the cluster? See upload1 under [galaxy:tool_runners] in universe_wsgi.runner.ini.
--nate
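[For context, a minimal sketch of how upload1 can be mapped in the [galaxy:tool_runners] section of universe_wsgi.runner.ini; the pbs:/// URL is an assumption based on the PBS setup described in this thread, not taken from the poster's file:

    [galaxy:tool_runners]
    # Keep the upload tool on the local (head node) runner:
    #upload1 = local:///
    # ...or, presumably, send it to the PBS cluster instead:
    upload1 = pbs:///

With the line commented out, as in the grep output at the top of the thread, the upload tool presumably falls through to whatever default job runner is configured rather than being pinned to local:///.]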