On Tue, May 13, 2014 at 11:11 AM, Guillaume Penderia <g.penderia@gmail.com> wrote:
> Hi,
>
> I am currently working on a workflow with some custom tools, and one of these tools has to create very large temporary files (around 45 GB each). Since this workflow will be run on many files at the same time, I need to keep the tool from running more than once or twice concurrently (the other executions would wait in a queue). If I don't, I'm afraid that running out of memory or some similar resource problem could cause all the executions to fail.
>
> The problem is that I can't find out whether this is possible, and if it is, how to do it.
>
> Does anyone have an idea, please?
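One generic, scheduler-independent way to enforce this at the tool level (a sketch, not something suggested in the thread; the lock path is a placeholder, and this relies on POSIX `fcntl`, so it will not work on Windows) is to serialize the big-temp-file phase across processes with an advisory file lock:

```python
# Sketch: serialize the big-temp-file phase across concurrent runs
# with an advisory file lock. Extra runs simply block on the lock
# and proceed one at a time, rather than failing.
import contextlib
import fcntl


@contextlib.contextmanager
def exclusive_slot(lock_path="/tmp/big_tool.lock"):
    """Block until this process holds the lock, so only one run
    executes its big-temp-file step at a time; others wait in line."""
    with open(lock_path, "w") as fh:
        fcntl.flock(fh, fcntl.LOCK_EX)  # blocks until the lock is free
        try:
            yield
        finally:
            fcntl.flock(fh, fcntl.LOCK_UN)


if __name__ == "__main__":
    with exclusive_slot():
        pass  # create and consume the ~45 GB temporary files here
```

This allows exactly one concurrent run; allowing "once or twice" would need a counting scheme (e.g. trying each of two lock files non-blockingly before falling back to a blocking wait).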
If you are using a cluster, one idea would be to set up a dedicated queue for these big jobs, configured to ensure only one runs at a time (or at least, only one per cluster node).

Peter
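On a Grid Engine cluster, for instance, this could look roughly like the following (a sketch only: the queue name `bigjobs.q`, host group, and script name are placeholders, and a real queue definition requires the full set of attributes, not just these three):

```shell
# Sketch: define a queue with a single slot so at most one of these
# jobs runs cluster-wide at any time; others wait in the queue.
cat > bigjobs.conf <<'EOF'
qname     bigjobs.q
hostlist  @allhosts
slots     1
EOF
qconf -Aq bigjobs.conf   # register the queue (requires admin rights)

# Submit the big-temp-file jobs to that queue only:
qsub -q bigjobs.q run_big_tool.sh
```

With `slots 1` the scheduler itself enforces the "only one at a time" constraint, so the tool needs no changes; raising `slots` to 2 would allow the "once or twice" mentioned above.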