Unfortunately, you will need to reset metadata for any problematic datasets
once you have updated to the latest version of 15.05 and set a cutoff
value. You can find the affected datasets with the following SQL query against your Galaxy database:
select hda.id, u.email, h.name, hda.hid, hda.name, length(hda.metadata)
from history_dataset_association hda
join history h on hda.history_id=h.id
join galaxy_user u on h.user_id=u.id
where length(hda.metadata) > 1048576 order by length(hda.metadata) desc;
And you can reset metadata by clicking on the dataset's pencil icon in your
history and clicking "auto-detect".
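To illustrate the shape of that query without touching a production database, here is a stand-in sketch against an in-memory SQLite database; the simplified table and the 1024-byte cutoff are assumptions for the demo, not the real Galaxy schema or the 1 MB limit used above:

```python
# Stand-in sketch (NOT the real Galaxy schema): finds rows whose metadata
# column exceeds a size cutoff, mirroring the query above.
import sqlite3

LIMIT = 1024  # demo cutoff; the real query uses 1048576 (1 MiB)

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE history_dataset_association (
        id INTEGER PRIMARY KEY, name TEXT, metadata TEXT);
""")
conn.executemany(
    "INSERT INTO history_dataset_association (name, metadata) VALUES (?, ?)",
    [("small.bam", "x" * 10), ("huge.bam", "x" * 2048)],
)
rows = conn.execute(
    """SELECT id, name, length(metadata)
       FROM history_dataset_association
       WHERE length(metadata) > ?
       ORDER BY length(metadata) DESC""",
    (LIMIT,),
).fetchall()
# Only huge.bam exceeds the cutoff.
```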
On Wed, Jul 15, 2015 at 3:16 PM, Poole, Richard <r.poole(a)ucl.ac.uk> wrote:
I am indeed using 15.05 or later… so I will try this fix the next time I can restart the server and let you know.
On 15 Jul 2015, at 19:32, Nate Coraor <nate(a)bx.psu.edu> wrote:
By any chance, are you running Galaxy 15.05 or later? 15.05 includes new
metadata for bam files that can cause UI performance problems with certain
types of bam files. This can be limited with the new
`max_metadata_value_size` option in galaxy.ini (on usegalaxy.org
we've set it to
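For anyone following along, the option goes in the `[app:main]` section of galaxy.ini; the 1 MB value below is an illustrative example, not necessarily the value used on usegalaxy.org:

```ini
[app:main]
# Maximum size (in bytes) of a single metadata value; larger values are
# dropped when metadata is set. 1048576 (1 MB) is an example cutoff.
max_metadata_value_size = 1048576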
I've also created a pull request to make this limiting the default:
However, if you are using an older version of Galaxy, this issue is not
related to the problem you're experiencing.
On Wed, Jul 15, 2015 at 2:16 PM, Poole, Richard <r.poole(a)ucl.ac.uk> wrote:
> Hi Christian and Carl,
> Thanks both for the replies.
> To answer your questions in reverse order. I have about XX histories in
> my account each with an average of about XX datasets. Total data in my
> account is about 1TB.
> It is indeed an admin account, and other users with close to 1TB of data
> do not see a similar slowdown, although their data is spread over far
> fewer histories. Is there a way to prevent the file_name attribute from
> being requested for admin accounts, so I can see if this speeds things
> back up again?
> Although the Galaxy server is running on my iMac, the data is stored
> externally on a large directly attached NAS. I think I first noticed this
> slow down after deleting and purging a bunch of older histories to free
> space on the NAS. I have tried running some of the cleanup_datasets scripts
> but they are actually returning errors and not running right now (can give
> you the error messages if necessary).
> The slowdown is actually getting worse now, and it is even slow to
> display tool pages. I also often get this error when it is really slow:
> Proxy Error
> The proxy server received an invalid response from an upstream server.
> The proxy server could not handle the request GET /history/list
> Reason: Error reading from remote server
> I am running through an apache proxy - perhaps the apache settings need
> tweaking too? (I forget right now where I set these up!).
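If Apache is giving up before Galaxy finishes building a large response, raising the proxy timeout is one knob to try; a sketch using standard mod_proxy/core directives, with an arbitrary 600-second value:

```apache
# In the Apache virtual host that proxies to Galaxy (mod_proxy):
# allow slow upstream responses up to 10 minutes before returning
# "Error reading from remote server".
ProxyTimeout 600
Timeout 600
```

The right value depends on how long the slowest legitimate page takes to render; a timeout only masks the symptom, so the metadata fix above is still the real cure.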
> As for the database itself, I am running PostgreSQL 9.3 and I tweaked
> the settings in my universe_wsgi.ini as per the instructions on
> So my settings are:
> # -- Database
> # By default, Galaxy uses a SQLite database at 'database/universe.sqlite'.
> # You may use a SQLAlchemy connection string to specify an external
> # database instead. This string takes many options which are explained
> # in detail in the config file documentation.
> database_connection =
> # If the server logs errors about not having enough database pool connections,
> # you will want to increase these values, or consider running more Galaxy
> # processes.
> database_engine_option_pool_size = 10
> database_engine_option_max_overflow = 20
> # If using MySQL and the server logs the error "MySQL server has gone
> # away", you will want to set this to some positive value (7200 should work).
> #database_engine_option_pool_recycle = -1
> # If large database query results are causing memory or response time
> # issues in the Galaxy process, leave the result on the server instead.
> # This option is only available for PostgreSQL and is highly recommended.
> database_engine_option_server_side_cursors = True
> # Create only one connection to the database per thread, to reduce the
> # connection overhead. Recommended when not using SQLite:
> database_engine_option_strategy = threadlocal
> # Log all database transactions, can be useful for debugging and
> # profiling. Logging is done via Python's 'logging' module under the
> # 'galaxy.model.orm.logging_connection_proxy'
> database_query_profiling_proxy = False
> # -- Files and directories
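For comparison, a filled-in PostgreSQL `database_connection` would look like the following; the user, password, host, and database name are placeholders, not values from this thread:

```ini
# SQLAlchemy URL form: postgresql://user:password@host:port/dbname
database_connection = postgresql://galaxy:secret@localhost:5432/galaxy
```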
> Let me know if you think these settings are appropriate or need further tweaking.
> Thanks again for your responses so far,
> On 13 Jul 2015, at 16:31, Carl Eberhard <carlfeberhard(a)gmail.com> wrote:
> Hi, Richard
> How many histories are on your account? How many datasets (roughly)?
> Are you using an Admin account to view the histories, and does the
> slowdown still occur for regular users with large amounts of data?
> One of the exposed attributes of datasets (for admins - not other users
> generally) is the file_name. I've noticed that retrieving this attribute
> from the file system can be slow.
> Christian also provides good advice.
> On Thu, Jul 9, 2015 at 4:12 AM, Christian Brenninkmeijer <
> christian.brenninkmeijer(a)manchester.ac.uk> wrote:
>> Hi Richard,
>> I am relatively new to galaxy so if you get a different response from
>> one of the core team ignore this.
>> One thing I would check is the underlying database.
>> What do you have set for "database_connection" in your galaxy.ini file?
>> Especially if you are using the default SQLite, this could be the issue,
>> as that is stored in a single file on disk.
>> Whichever database you use, make sure it has enough resources to handle
>> what will by now be quite a large amount of data.
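As a quick way to check the point Christian raises, here is a hypothetical helper (not part of Galaxy) that reports which database backend a config would use; the `app:main` section name matches galaxy.ini of that era:

```python
# Hypothetical helper: report the database backend a Galaxy config names,
# to double-check you are not running on the SQLite default.
from configparser import ConfigParser


def database_backend(ini_text):
    """Return the SQLAlchemy dialect from database_connection,
    or 'sqlite' when the option is unset (Galaxy's default)."""
    cp = ConfigParser()
    cp.read_string(ini_text)
    conn = cp.get("app:main", "database_connection", fallback="").strip()
    if not conn or conn.startswith("sqlite"):
        return "sqlite"
    return conn.split(":", 1)[0]
```

Feed it the contents of your galaxy.ini; any answer other than "sqlite" means an external database is configured.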
>> From: galaxy-dev [galaxy-dev-bounces(a)lists.galaxyproject.org] on
>> behalf of Poole, Richard [r.poole(a)ucl.ac.uk]
>> Sent: Wednesday, July 08, 2015 9:04 PM
>> To: galaxy-dev(a)lists.galaxyproject.org
>> Subject: [galaxy-dev] Slow responses viewing histories
>> Hi all,
>> I am having trouble right now with my own personal account on my
>> production server. Grid refreshes are taking a huge amount of time (e.g.
>> when viewing ‘saved histories’ or even generating the dataset list for a
>> single history). My account holds a lot of data (1TB); could this be the cause?
>> There are no obvious messages in the logs, though, so I am a bit stumped
>> as to why. I do not have the same trouble when impersonating other users
>> with fairly full accounts. Perhaps it is a database issue (I do not know
>> how to ‘clean up’ the database or indeed Galaxy user accounts). Any thoughts?
>> Richard J Poole PhD
>> Wellcome Trust Fellow
>> Department of Cell and Developmental Biology
>> University College London
>> 21 University Street, London WC1E 6DE
>> Office (518 Rockefeller): +44 20 7679 6577 (int. 46577)
>> Lab (529 Rockefeller): +44 20 7679 6133 (int. 46133)
>> Please keep all replies on the list by using "reply all"
>> in your mail client. To manage your subscriptions to this
>> and other Galaxy lists, please use the interface at:
>> To search Galaxy mailing lists use the unified search at: