Hi Dave,
I replied earlier to the email you sent me directly; I just saw this one
now, and it contains a bit more info. The data volume being close to full
may well be the reason for the error: the data is first saved to disk by a
web proxy and then moved into Galaxy's data directory, so that step could
be hitting the limit. Did the volume resizing complete? Did you try running
jobs after that?
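If it helps, here is a minimal sketch of how you could confirm the data
volume has headroom once the resize finishes; the mount point below is an
assumption (it differs between setups), so substitute the path your data
volume is actually mounted on.

import os

DATA_VOLUME = "/mnt/galaxyData"  # assumed mount point -- adjust for your instance

# statvfs reports block counts; convert them to bytes and a used percentage.
st = os.statvfs(DATA_VOLUME)
total = st.f_blocks * st.f_frsize    # volume size in bytes
free = st.f_bavail * st.f_frsize     # space available to non-root users
used_pct = 100.0 * (total - free) / total
print("%s: %.1f%% used, %.1f GB free of %.1f GB"
      % (DATA_VOLUME, used_pct, free / 1e9, total / 1e9))

If it reports the volume still nearly full after staging a large SOLiD
dataset, that would be consistent with the errors you saw.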
As I mentioned in the earlier email, Galaxy's log may have more info
about what went wrong.
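If you want to skim the log quickly, something like the sketch below would
pull out the relevant lines; the log path is only an assumption (it depends
on how the instance was set up), so point it at wherever your Galaxy log
actually lives.

# Assumed log location -- replace with the actual path on your instance.
LOG_PATH = "/mnt/galaxyTools/galaxy-central/paster.log"

KEYWORDS = ("ERROR", "Traceback", "Cluster could not complete job")

def error_lines(path):
    # Yield log lines mentioning errors, tracebacks, or the cluster message.
    with open(path) as fh:
        for line in fh:
            if any(keyword in line for keyword in KEYWORDS):
                yield line.rstrip()

if __name__ == "__main__":
    for line in error_lines(LOG_PATH):
        print(line)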
Enis
On Tue, Jan 24, 2012 at 10:35 PM, Dave Lin <dave@verdematics.com> wrote:
Dear Galaxy Support,
I'm getting the following error message when trying to process larger
SOLiD files.
ERROR MESSAGE: "Cluster could not complete job"
- Compute Quality Statistics: got the error message the first time; the job
ran OK when re-run.
- A subsequent job converting qual/csfasta -> fastq failed with the same
error message.
- The error doesn't seem to happen with small SOLiD files.
Potentially relevant information:
1. Cloud instance on Amazon (Large instance type).
2. Only one master node on the cluster.
3. The instance was updated, using the update feature, to a version from
late last week.
4. Only one user on the system right now, so there shouldn't be any
competing load.
5. I downloaded a number of data files, so the volume was at 94%. I'm
currently in the process of expanding the volume.
Question: Is this expected behavior, or have I misconfigured something
(e.g., some timeout value)? Any suggestions?
Thanks in advance,
Dave
P.S. I'm new to Galaxy and impressed so far. Keep up the great work.