My more detailed need is to parallel an existing analysis pipeline in Galaxy, using the same underlying code. Reusing the same code from within Galaxy has the great advantage of ensuring that exactly the same process is run, allowing the "official" analysis to be repeated by outsiders. This pipeline works on a number of inputs (paired-end reads, multiple replicates, etc.) and goes through a number of intervening steps in which massive temporary files may be reused by several steps, then ultimately deleted once the final results are obtained. Outside of Galaxy, the existing pipeline creates these shared files in a temporary directory exclusive to the "experiment" being run. Within Galaxy, however, it becomes a great deal easier to ensure that an "experiment" created by one user will not collide with one created by another user, accidentally overwriting existing files, if the temporary experiment directories have history_id-based names.
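A minimal sketch of the directory-naming idea, assuming the wrapper can pass some history identifier in; the function name, the base directory, and the `history_id` parameter are all hypothetical, not part of any Galaxy API:

```python
import os
import tempfile

def experiment_dir(base_dir, history_id):
    # Build a per-history temporary directory so two users' "experiments"
    # can never share (and therefore never clobber) intermediate files.
    # history_id is assumed to be supplied by the tool wrapper.
    path = os.path.join(base_dir, "experiment_%s" % history_id)
    # exist_ok=True: a later step in the same experiment may reuse the
    # directory created by an earlier step without error.
    os.makedirs(path, exist_ok=True)
    return path

# Example: two histories get two distinct shared scratch areas.
a = experiment_dir(tempfile.gettempdir(), "101")
b = experiment_dir(tempfile.gettempdir(), "202")
```

The shared intermediates for one experiment would then live under its own directory and could be removed in one `shutil.rmtree` call when the final results are in.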