I'm currently adding a large number of files to my Galaxy instance's dataset library. During the import, some of the files (a small percentage) failed with:

/inside/depot4/galaxy/set_metadata.sh: line 4: 14790 Segmentation fault (core dumped) python ./scripts/set_metadata.py $@

I think it's probably standard cluster shenanigans, and the jobs may work just fine if run again. But there doesn't seem to be a way to retry. Is there a way to deal with this that is easier than manually deleting and re-uploading the offending files?

Kyle
Hi Kyle,

Unfortunately, there's not going to be a way to do this entirely in the UI. Your best shot is to change the state of the datasets in the database from 'error' to 'ok' and then try using the metadata auto-detect button in the UI.

--nate
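A minimal sketch of the kind of database edit Nate describes, assuming a PostgreSQL backend and direct access to Galaxy's database: the dataset table and state column come from Galaxy's model and should be verified against your schema version, and the connection URL and dataset ids are placeholders. Back up the database before running anything like this.

    # Hypothetical sketch only: flip datasets stuck in 'error' back to 'ok' so
    # the metadata auto-detect button becomes usable again. Verify table and
    # column names against your Galaxy schema; connection URL and ids are
    # placeholders.
    from sqlalchemy import create_engine, text

    engine = create_engine("postgresql://galaxy@localhost/galaxy")  # placeholder
    failed_ids = [1234, 1235]  # ids of the affected datasets (placeholders)

    with engine.begin() as conn:
        for dataset_id in failed_ids:
            conn.execute(
                text("UPDATE dataset SET state = 'ok' "
                     "WHERE state = 'error' AND id = :id"),
                {"id": dataset_id},
            )

After the update, the metadata auto-detect button in the library UI should be usable again for those datasets.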
Editing the DB and manually fixing all the errors in the UI doesn't seem like a long-term solution. Is there a way to do this through the API? If not, where should I start working to add it in?

Kyle
Hi Kyle,

I'm not sure that we would want to allow unprivileged users to modify the dataset state, although this may be a useful feature for administrators. However, in this specific instance, there is probably a better solution. If the set_metadata script were to redirect all output to stdout, this would not have resulted in a failure. As long as the metadata output files indicate failure in this case, the dataset would show up as "ok but metadata failed" and allow the user to rerun set_meta with the auto-detect button.

Had the process been interrupted at an earlier stage (in the upload.py tool itself), it probably would not have been recoverable without manually checking, for each dataset, whether the file had been moved from its import location to its final destination in file_path.

--nate
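The manual check described in the last paragraph could be scripted along these lines. This is a sketch under assumptions: the files/000/dataset_<id>.dat layout matches a default file_path configuration, and the directories, ids, and filenames shown are placeholders, not taken from any particular instance.

    # Hypothetical helper for the check described above: for each failed
    # upload, see whether the file already reached Galaxy's file_path or is
    # still sitting in the library import location. Paths, layout, and ids
    # are assumptions.
    import os

    FILE_PATH = "/inside/depot4/galaxy/database/files"   # Galaxy's file_path (placeholder)
    IMPORT_DIR = "/inside/depot4/galaxy/library_import"  # import location (placeholder)

    def locate(dataset_id, original_name):
        final = os.path.join(FILE_PATH, "%03d" % (dataset_id // 1000),
                             "dataset_%d.dat" % dataset_id)
        staged = os.path.join(IMPORT_DIR, original_name)
        if os.path.exists(final):
            return "moved to file_path: %s" % final
        if os.path.exists(staged):
            return "still in import location: %s" % staged
        return "missing from both locations"

    for dataset_id, name in [(1234, "sample_1.bam")]:  # placeholders
        print(dataset_id, locate(dataset_id, name))

Datasets reported as already moved to file_path are candidates for the state-flip plus auto-detect approach above; those still in the import location would need to be re-imported.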
participants (2)
- Kyle Ellrott
- Nate Coraor