I'm currently adding a large number of files to my Galaxy instance's dataset library. During the import, a small percentage of the files failed with:
/inside/depot4/galaxy/set_metadata.sh: line 4: 14790 Segmentation fault (core dumped) python ./scripts/set_metadata.py $@
I suspect it's standard cluster shenanigans and that the jobs would likely succeed if run again, but there doesn't seem to be a way to retry them. Is there a way to deal with this that is easier than manually deleting and re-uploading the offending files?