Hey,
I wanted to know whether anyone else has run into database deadlocks when using dataset collections and pushing a large number of samples through a workflow.
Traceback (most recent call last):
  File "/Warehouse/Applications/irida/galaxy/galaxy-dist/lib/galaxy/jobs/runners/__init__.py", line 565, in finish_job
    job_state.job_wrapper.finish( stdout, stderr, exit_code )
  File "/Warehouse/Applications/irida/galaxy/galaxy-dist/lib/galaxy/jobs/__init__.py", line 1250, in finish
    self.sa_session.flush()
  File "/Warehouse/Applications/irida/galaxy/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.6-linux-x86_64-ucs4.egg/sqlalchemy/orm/scoping.py", line 114, in do
    return getattr(self.registry(), name)(*args, **kwargs)
  File "/Warehouse/Applications/irida/galaxy/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.6-linux-x86_64-ucs4.egg/sqlalchemy/orm/session.py", line 1718, in flush
    self._flush(objects)
  File "/Warehouse/Applications/irida/galaxy/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.6-linux-x86_64-ucs4.egg/sqlalchemy/orm/session.py", line 1789, in _flush
    flush_context.execute()
  File "/Warehouse/Applications/irida/galaxy/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.6-linux-x86_64-ucs4.egg/sqlalchemy/orm/unitofwork.py", line 331, in execute
    rec.execute(self)
  File "/Warehouse/Applications/irida/galaxy/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.6-linux-x86_64-ucs4.egg/sqlalchemy/orm/unitofwork.py", line 475, in execute
    uow
  File "/Warehouse/Applications/irida/galaxy/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.6-linux-x86_64-ucs4.egg/sqlalchemy/orm/persistence.py", line 59, in save_obj
    mapper, table, update)
  File "/Warehouse/Applications/irida/galaxy/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.6-linux-x86_64-ucs4.egg/sqlalchemy/orm/persistence.py", line 485, in _emit_update_statements
    execute(statement, params)
  File "/Warehouse/Applications/irida/galaxy/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.6-linux-x86_64-ucs4.egg/sqlalchemy/engine/base.py", line 1449, in execute
    params)
  File "/Warehouse/Applications/irida/galaxy/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.6-linux-x86_64-ucs4.egg/sqlalchemy/engine/base.py", line 1584, in _execute_clauseelement
    compiled_sql, distilled_params
  File "/Warehouse/Applications/irida/galaxy/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.6-linux-x86_64-ucs4.egg/sqlalchemy/engine/base.py", line 1698, in _execute_context
    context)
  File "/Warehouse/Applications/irida/galaxy/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.6-linux-x86_64-ucs4.egg/sqlalchemy/engine/base.py", line 1691, in _execute_context
    context)
  File "/Warehouse/Applications/irida/galaxy/galaxy-dist/eggs/SQLAlchemy-0.7.9-py2.6-linux-x86_64-ucs4.egg/sqlalchemy/engine/default.py", line 331, in do_execute
    cursor.execute(statement, parameters)
DBAPIError: (TransactionRollbackError) deadlock detected
DETAIL: Process 25859 waits for ShareLock on transaction 144373; blocked by process 25858.
Process 25858 waits for ShareLock on transaction 144372; blocked by process 25859.
HINT: See server log for query details.
'UPDATE workflow_invocation SET update_time=%(update_time)s WHERE workflow_invocation.id = %(workflow_invocation_id)s' {'update_time': datetime.datetime(2015, 2, 27, 3, 51, 57, 81403), 'workflow_invocation_id': 48}
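From the DETAIL lines this looks like the classic two-transaction lock cycle: each transaction holds a row lock the other one needs. As a sanity check that this is garden-variety Postgres behaviour rather than anything exotic, here is a minimal standalone sketch that reproduces the same error with psycopg2 (the table t and its rows are made up for illustration; nothing Galaxy-specific):

    import threading
    import psycopg2

    # Assumes a throwaway database with:
    #   CREATE TABLE t (id int PRIMARY KEY, n int);
    #   INSERT INTO t VALUES (1, 0), (2, 0);
    DSN = "dbname=test"

    def worker(first_id, second_id, barrier):
        conn = psycopg2.connect(DSN)
        cur = conn.cursor()
        # Take a row lock on the first row inside an open transaction.
        cur.execute("UPDATE t SET n = n + 1 WHERE id = %s", (first_id,))
        barrier.wait()  # both transactions now hold one row lock each
        # Each side now waits for the row the other holds; Postgres detects
        # the cycle and aborts one transaction with "deadlock detected".
        cur.execute("UPDATE t SET n = n + 1 WHERE id = %s", (second_id,))
        conn.commit()

    barrier = threading.Barrier(2)
    t1 = threading.Thread(target=worker, args=(1, 2, barrier))
    t2 = threading.Thread(target=worker, args=(2, 1, barrier))
    t1.start(); t2.start()
    t1.join(); t2.join()

One of the two threads dies with a TransactionRollbackError, exactly like the traceback above; the other commits fine.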
I saw this post http://dev.list.galaxyproject.org/data-collections-workflow-bug-td4666496.html describing a similar issue; the fix there was to avoid using a SQLite database, but I'm using a PostgreSQL database and still hit the deadlock. This happened after running a very large number of samples (~200) through a workflow using dataset collections. Is anyone else running into this?
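In the meantime I'm tempted to just retry the flush when Postgres kills one side of the deadlock. Here is the rough, untested sketch I have in mind (assuming SQLAlchemy 0.7 / psycopg2 as in the traceback; sa_session is the scoped session, and the retry counts and delay are made-up numbers):

    import time
    from sqlalchemy.exc import DBAPIError

    def flush_with_retry(sa_session, max_retries=3, delay=0.5):
        """Retry sa_session.flush() when Postgres reports a deadlock."""
        for attempt in range(max_retries):
            try:
                sa_session.flush()
                return
            except DBAPIError as e:
                # psycopg2 surfaces deadlocks as TransactionRollbackError
                # ("deadlock detected"); anything else should propagate.
                if 'deadlock detected' not in str(e.orig):
                    raise
                sa_session.rollback()
                time.sleep(delay * (attempt + 1))
        # Last attempt; let the deadlock propagate if it still recurs.
        sa_session.flush()

Since Postgres aborts only one of the two transactions, the rolled-back side should normally succeed on the retry, but I'd rather understand why the lock cycle happens in the first place.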
Thanks,
Aaron