Hi Derrick,
If you set outputs_to_working_directory = True and put the job working directory on the SSD pool, writes during job execution will use the fast space. Once a job completes, its outputs will be moved to the larger, slower pool configured as file_path.
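
For example, the relevant settings in universe_wsgi.ini might look something like this (a minimal sketch; the /mnt/ssd and /mnt/bulk mount points are just hypothetical placeholders, adjust for your own mounts):

    # a sketch of the relevant universe_wsgi.ini settings;
    # /mnt/ssd and /mnt/bulk are hypothetical mount points
    [app:main]
    # write outputs into the job working directory during execution,
    # then move them to file_path when the job completes
    outputs_to_working_directory = True
    # put job working directories on the fast SSD-backed volume
    job_working_directory = /mnt/ssd/galaxy/job_working_directory
    # final dataset storage stays on the larger, slower pool
    file_path = /mnt/bulk/galaxy/database/files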
--nate
On Jun 19, 2012, at 8:08 AM, Derrick Lin wrote:
> Another relevant question: my institute has configured an NFS volume based on an SSD disk pool on the file server.
>
> I want to use it to improve Galaxy's job execution on big datasets. However, the SSD volume is only 2.5TB (they are very expensive...), so migrating the entire database folder there is impossible.
>
> Any recommendations for how Galaxy can make good use of the SSD?
>
> Thanks
> Derrick
>
> On Tue, Jun 19, 2012 at 9:55 PM, Derrick Lin <klin938@gmail.com> wrote:
> I think my question has been answered:
>
> http://gmod.827538.n3.nabble.com/NFS-Cluster-and-working-directories-questions-td3784165.html
>
> Hopefully we'll see the enhancements in the near future.
>
> Cheers.
> Derrick
>
>
> On Tue, Jun 19, 2012 at 5:01 PM, Derrick Lin <klin938@gmail.com> wrote:
> Hi guys,
>
> I have deployed Galaxy on a cluster (I installed it on an NFS share that is accessible by all cluster compute nodes).
>
> Everything is running fine. Now I am looking for a way to have every job dispatched to a compute node use that node's local /tmp as its working directory.
>
> I know the Galaxy config provides job_working_directory for a similar purpose.
>
> My question really is: while all my compute nodes can access the NFS share where Galaxy is installed, the Galaxy host cannot access each compute node's /tmp.
>
> Is there a way for Galaxy to collect job results back into the data directory?
>
> Regards,
> Derrick