Here are three interrelated issues.
I am trying to use Galaxy with some large cancer genomic datasets here at UCSC to do some systems biology. I have petabyte-scale data libraries that will constantly be in flux at the edges. For large datasets, I would prefer to have Galaxy read the metadata directly from the file system rather than going through the database. Is there a convenient API boundary where I could write an adapter to the dataset object interface?
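To make the question concrete, here is a minimal sketch of the kind of adapter I have in mind: a read-only object that answers metadata queries straight from the filesystem instead of a database row. All names here are hypothetical placeholders, not actual Galaxy interfaces.

```python
import os
from dataclasses import dataclass


@dataclass
class FileMeta:
    # Hypothetical container for the metadata Galaxy would normally
    # keep in its database; the field names are illustrative only.
    path: str
    size: int
    mtime: float


class FilesystemMetadataAdapter:
    """Sketch of a read-only adapter that serves dataset metadata
    directly from the filesystem (hypothetical interface)."""

    def __init__(self, root: str):
        self.root = root

    def get_meta(self, relpath: str) -> FileMeta:
        # Stat the file on demand instead of consulting a database.
        full = os.path.join(self.root, relpath)
        st = os.stat(full)
        return FileMeta(path=full, size=st.st_size, mtime=st.st_mtime)

    def list_datasets(self):
        # Walk the tree lazily so a library with thousands of files
        # is never materialized in memory all at once.
        for dirpath, _dirnames, filenames in os.walk(self.root):
            for name in filenames:
                yield os.path.relpath(os.path.join(dirpath, name), self.root)
```

If something like this could be plugged in behind the dataset object interface, datasets in flux at the edges would always reflect the current filesystem state without database synchronization.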
In the meantime, I am going to try to just import data using the link. It's great that this feature is already in. However, when I import a couple of modest megabyte-size datasets using the "Link to files without copying to Galaxy" option, the status never changes from "queued". Is this a bug? Is there a known workaround? I have many large datasets.
Also, it takes a long time to expand the dataset name link. (My import experiment is a data tree of about a thousand files.) Is this a known bug?