Re: [galaxy-dev] PATH set in tool env.sh leaking through to set_metadata.py
by John Chilton
On Wed, Jan 21, 2015 at 6:54 PM, John Chilton <jmchilton(a)gmail.com> wrote:
> On Wed, Jan 21, 2015 at 10:35 AM, Wolfgang Maier
> <wolfgang.maier(a)biologie.uni-freiburg.de> wrote:
>>
>>
>> On 21.01.2015 16:11, John Chilton wrote:
>>>
>>> So the tool dependency stuff didn't change - but the local job runner
>>> now behaves like the other job runners (DRMAA, PBS, etc...) so you
>>> have uncovered a bug for all of them I think. Is this a tool shed tool
>>> or a locally created dependency?
>>>
>>
>> This is a tool shed tool (package_mimodd_0_1_5), but one I uploaded myself
>> (I discovered the bug when I did a test install after updating our local
>> Galaxy).
>>
>>> I need to think about how to solve this more generally but if you want
>>> a quick workaround - I think it would work to just replace "python"
>>> in $GALAXY_ROOT/set_metadata.sh with a hard-coded path to Galaxy's
>>> python or perhaps just "python2".
>>
>>
>> Since I can change the tool shed tool, a more "global" solution could be to
>> simply remove the "python" symlink from the python3 virtualenv after it's
>> created and have the tool use "python3" explicitly at runtime. I'll try
>> whether that works, but I think it should.
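>> (Concretely, the install recipe could do something along the lines of
>>     rm $INSTALL_DIR/<python3-venv>/bin/python
>> right after the virtualenv is created - the venv path here is just a
>> placeholder - so that only the versioned python3 entry point remains on
>> PATH.)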
>
> This sounds like a better short-term solution to me - this way older
> versions of Galaxy will still be able to use the recipe. I created a
> Trello issue to track the longer term fix of isolating metadata
> generation from the tool environment - which is needed but could be a
> little disruptive so I would rather push it off until after the next
> release.
Oops - I forgot the link and forgot to cc the mailing list.
https://trello.com/c/aMCwYZsN
-John
>
> Thanks!
>
> -John
>
>>
>>>
>>> Sorry for the inconvenience and thanks for bringing this to our attention.
>>>
>>
>> No problem. I'm glad if I could help with something instead of just asking
>> questions all the time.
>>
>> Wolfgang
>>
>>
>>
>>>
>>> On Tue, Jan 20, 2015 at 12:52 PM, John Chilton <jmchilton(a)gmail.com>
>>> wrote:
>>>>
>>>> Are you using the local job runner (this will be the case if you
>>>> haven't explicitly configured something like pbs or DRMAA in your
>>>> job_conf.xml file)?
>>>>
>>>> -John
>>>>
>>>> On Tue, Jan 20, 2015 at 12:34 PM, Wolfgang Maier
>>>> <wolfgang.maier(a)biologie.uni-freiburg.de> wrote:
>>>>>
>>>>> On 01/20/2015 06:20 PM, Wolfgang Maier wrote:
>>>>>>
>>>>>>
>>>>>>
>>>>>> I have not seen this error before (I believe not with
>>>>>> latest_2014.10.06)
>>>>>
>>>>>
>>>>>
>>>>> update: confirmed this now. It's enough to hg update to
>>>>> latest_2014.10.06
>>>>> and things are working again.
>>>>>
>>>>> The difference is that when building the dependency shell command the
>>>>> latest
>>>>> release seems to put the call to set_metadata.sh into that command,
>>>>> while
>>>>> before it seems it was run separately.
>>>>>
>>>>>
>>>>> Wolfgang
>>>>>
>>
PATH set in tool env.sh leaking through to set_metadata.py
by Wolfgang Maier
Hi,
I am not sure whether I am diagnosing this problem correctly, but it seems
that with the latest release of Galaxy (stable branch, latest_2015.01.13)
environment variables set in a
tool-dependency's env.sh file and sourced before a job run are now
leaking through to
./scripts/set_metadata.py called after the tool run finishes.
In my specific case, the sourced env.sh adds a python 3 virtualenv
directory to PATH, which then causes set_metadata.py to fail with:
File "./scripts/set_metadata.py", line 130
except Exception, e:
^
SyntaxError: invalid syntax
because that (amazingly enough :)) seems to be the first line not
compatible with Python 3.
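(The relevant part of that env.sh is essentially just a PATH prepend along
these lines - the virtualenv path here is a placeholder, not the real one:

    export PATH=/path/to/package_mimodd/python3_venv/bin:$PATH

so every "python" started afterwards in the job environment resolves to the
Python 3 interpreter from the virtualenv.)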
I have not seen this error before (I believe not with latest_2014.10.06)
so I guess my question is: has the way that set_metadata.py is called
been changed recently, and would you consider the current behavior a bug,
or is this expected and ok?
Best,
Wolfgang
Oct '14 and Jan '15 releases, trouble installing tools from the toolshed, resetting metadata for installed repositories
by Yves Gagnon
Hello fellow Galaxy users:
I noticed that when trying to install some tools from the main toolshed,
sometimes it works and sometimes it does not. In a specific case, trying
to install clustalw and its dependency, I get this error message:
URL: (redactedURL)/admin_toolshed/prepare_for_install
File '(redacted path)/galaxy_dist_dev/eggs/WebError-0.8a-py2.6.egg/weberror/evalexception/middleware.py', line 364 in respond
  app_iter = self.application(environ, detect_start_response)
File '(redacted path)/galaxy_dist_dev/eggs/Paste-1.7.5.1-py2.6.egg/paste/recursive.py', line 84 in __call__
  return self.application(environ, start_response)
File '(redacted path)/galaxy_dist_dev/eggs/Paste-1.7.5.1-py2.6.egg/paste/httpexceptions.py', line 633 in __call__
  return self.application(environ, start_response)
File '(redacted path)/galaxy_dist_dev/lib/galaxy/web/framework/base.py', line 132 in __call__
  return self.handle_request( environ, start_response )
File '(redacted path)/galaxy_dist_dev/lib/galaxy/web/framework/base.py', line 190 in handle_request
  body = method( trans, **kwargs )
File '(redacted path)/galaxy_dist_dev/lib/galaxy/web/framework/decorators.py', line 87 in decorator
  return func( self, trans, *args, **kwargs )
File '(redacted path)/galaxy_dist_dev/lib/galaxy/webapps/galaxy/controllers/admin_toolshed.py', line 1095 in prepare_for_install
  install_repository_manager.handle_tool_shed_repositories( installation_dict, using_api=False )
File '(redacted path)/galaxy_dist_dev/lib/tool_shed/galaxy_install/install_manager.py', line 636 in handle_tool_shed_repositories
  new_tool_panel_section_label=new_tool_panel_section_label )
File '(redacted path)/galaxy_dist_dev/lib/tool_shed/galaxy_install/repository_dependencies/repository_dependency_manager.py', line 200 in create_repository_dependency_objects
  self.app.installed_repository_manager.activate_repository( repository_db_record )
File '(redacted path)/galaxy_dist_dev/lib/tool_shed/galaxy_install/installed_repository_manager.py', line 90 in activate_repository
  repository_tools_tups = irmm.get_repository_tools_tups()
File '(redacted path)/galaxy_dist_dev/lib/tool_shed/galaxy_install/metadata/installed_repository_metadata_manager.py', line 69 in get_repository_tools_tups
  index, shed_conf_dict = self.tpm.get_shed_tool_conf_dict( self.metadata_dict.get( 'shed_config_filename' ) )
TypeError: 'NoneType' object is not iterable
The same goes for resetting all metadata on installed repositories after
the update: 23 out of ~200 tools work, but for the rest I get this error in
paster.log:
tool_shed.galaxy_install.metadata.installed_repository_metadata_manager ERROR 2015-01-20 15:18:35,357 Error attempting to reset metadata on repository tabular_to_fasta
Traceback (most recent call last):
  File "(redacted path)/galaxy_dist_dev/lib/tool_shed/galaxy_install/metadata/installed_repository_metadata_manager.py", line 114, in reset_metadata_on_selected_repositories
    self.set_repository( repository )
  File "(redacted path)/galaxy_dist_dev/lib/tool_shed/galaxy_install/metadata/installed_repository_metadata_manager.py", line 142, in set_repository
    super( InstalledRepositoryMetadataManager, self ).set_repository( repository )
  File "(redacted path)/galaxy_dist_dev/lib/tool_shed/metadata/metadata_generator.py", line 1066, in set_repository
    self.metadata_dict = { 'shed_config_filename' : self.shed_config_dict.get( 'config_filename', None ) }
AttributeError: 'NoneType' object has no attribute 'get'
My Galaxy configuration contains this line for the tool_conf files, and I
moved all the config files from the distribution root to the config folder:
tool_config_file = config/tool_conf.xml,config/shed_tool_conf.xml
Did I do something wrong while updating to the latest releases?
Thanks for your help!
Yves Gagnon
Unable to get data from UCSC
by Anna Terry
Hi,
I am having lots of problems with the Get Data tools installed on our local
Galaxy server; I'll start with just one.
With the UCSC Main one, I am unable to send data back to Galaxy: when I
click "Send query to Galaxy" nothing happens, the button is unresponsive.
This is in contrast to when I try the same query from the main Galaxy
server, in the same browser, where the content is correctly transferred
back to Galaxy.
I have no idea how to start debugging this; there is nothing in paster.log.
Does anyone have any ideas?
TIA,
Anna
Partial automation for generating those twisty R dependency tool shed installation sequences
by Ross
This may be helpful for anyone else struggling to get complex nested R
package dependency installation from the tool shed sorted out. That whole
can of worms. While we have setup_r_packages, the developer still has to
figure out the right magical incantation and make sure the tarballs are
available.
https://wiki.galaxyproject.org/SetUpREnvironment has some notes I've
started - contributions welcome.
It has a more or less reusable R script to generate tool_dependencies.xml
boilerplate, assuming you set the constant libdir to your local git
repository path where those tarballs will be downloaded from.
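Stripped down to a sketch - written in Python here just for brevity, the
real thing is an R script on the wiki page, and the directory and URL below
are placeholders - the generator simply walks libdir and prints one download
URL, i.e. one boilerplate entry, per tarball:

    import os

    libdir = "/path/to/local/git/clone/r_tarballs"                        # placeholder
    base_url = "https://example.org/you/yourrepo/raw/master/r_tarballs"   # placeholder

    for tarball in sorted(os.listdir(libdir)):
        if tarball.endswith(".tar.gz"):
            # each URL becomes one package entry in the generated tool_dependencies.xml
            print("%s/%s" % (base_url, tarball))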
I hope this helps someone!
Should I make a tool to do this, if enough developers want access to it
without the pain of managing yet another R script?
problem with launching Galaxy cloud
by Gurova, Katerina
Hi, I tried to start using Galaxy today and launched a new instance on AWS, but I keep getting this error message:
[screenshot of the error message attached as image001.png]
Please advise what to do.
Thank you,
Katerina
Katerina Gurova, MD, PhD
Assistant Professor
Department of Cell Stress Biology
Roswell Park Cancer Institute
Elm and Carlton Streets
Buffalo, NY, 14263
Tel: 716-845-4760
Email: Katerina.gurova(a)roswellpark.org
Problem with Galaxy installation
by FRITSCH Emilie - L. Int
Dear all,
I installed Galaxy last week on our server following the Admin/GetGalaxy tutorial on the wiki. I had no problem and it was running fine. However, as one of my colleagues required localhost:8080, I had to change the port number. I followed the wiki tutorial "Running Galaxy in a production environment": I copied galaxy.ini.sample, modified the port and also the additional parameters as suggested. Since then, I get an error message stating "socket.error: [Errno 98] Address already in use". I checked and the port is free.
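For reference, the lines I touched in config/galaxy.ini amount to roughly
this (the new port number here is just an example):

    [server:main]
    use = egg:Paste#http
    port = 8081
    host = 0.0.0.0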
I have copied at the end of this message part of the standard and error output, hoping it could help figure out what the problem is. Additionally, I noticed another warning, right at the beginning, when running the run.sh script:
/data/database/galaxy-dist/eggs/pysam-0.4.2_kanwei_b10f6e722e9a-py2.6-linux-x86_64-ucs4.egg/pysam/__init__.py:1: RuntimeWarning: __builtin__.file size changed, may indicate binary incompatibility
I have checked previous messages to see whether someone had the same problem, but usually it seemed that simply changing the port in config/galaxy.ini was enough.
Thanks a lot in advance; I am a newbie and a bit puzzled by this one,
Emilie
Starting server in PID 27030.
Traceback (most recent call last):
File "./scripts/paster.py", line 33, in <module>
serve.run()
File "/data/database/galaxy-dist/lib/galaxy/util/pastescript/serve.py", line 1049, in run
invoke(command, command_name, options, args[1:])
File "/data/database/galaxy-dist/lib/galaxy/util/pastescript/serve.py", line 1055, in invoke
exit_code = runner.run(args)
File "/data/database/galaxy-dist/lib/galaxy/util/pastescript/serve.py", line 220, in run
result = self.command()
File "/data/database/galaxy-dist/lib/galaxy/util/pastescript/serve.py", line 670, in command
serve()
File "/data/database/galaxy-dist/lib/galaxy/util/pastescript/serve.py", line 654, in serve
server(app)
File "/data/database/galaxy-dist/lib/galaxy/util/pastescript/loadwsgi.py", line 292, in server_wrapper
**context.local_conf)
File "/data/database/galaxy-dist/lib/galaxy/util/pastescript/loadwsgi.py", line 97, in fix_call
val = callable(*args, **kw)
File "/data/database/galaxy-dist/eggs/Paste-1.7.5.1-py2.6.egg/paste/httpserver.py", line 1342, in server_runner
serve(wsgi_app, **kwargs)
File "/data/database/galaxy-dist/eggs/Paste-1.7.5.1-py2.6.egg/paste/httpserver.py", line 1291, in serve
request_queue_size=request_queue_size)
File "/data/database/galaxy-dist/eggs/Paste-1.7.5.1-py2.6.egg/paste/httpserver.py", line 1134, in __init__
request_queue_size=request_queue_size)
File "/data/database/galaxy-dist/eggs/Paste-1.7.5.1-py2.6.egg/paste/httpserver.py", line 1113, in __init__
request_queue_size=request_queue_size)
File "/data/database/galaxy-dist/eggs/Paste-1.7.5.1-py2.6.egg/paste/httpserver.py", line 360, in __init__
HTTPServer.__init__(self, server_address, RequestHandlerClass)
File "/usr/lib64/python2.6/SocketServer.py", line 412, in __init__
self.server_bind()
File "/usr/lib64/python2.6/BaseHTTPServer.py", line 108, in server_bind
SocketServer.TCPServer.server_bind(self)
File "/usr/lib64/python2.6/SocketServer.py", line 423, in server_bind
self.socket.bind(self.server_address)
File "<string>", line 1, in bind
socket.error: [Errno 98] Address already in use
Web/Job scaling question
by Josephine Palencia
Hello,
Current setup:
: have 1-2 VMs w/ 4-core each, 4GB mem mounting remote lustre shared fs
: use remote job submission: cli/ssh
: proxied, use remote auth login/logout
I'm prepping for a scenario where there will be 60-75 users/students
who will simultaneously submit several complete genomics workflows
to the single VM (I actually have 3 identical VMs). Full runs can go from
30min to an hour and a half each.
I have 1 of the VMs set up with the standalone Paste-based configuration,
with 5 job handler and 5 web server processes. Would this be enough for
this scenario? (Or 10/10, 20/20, more?)
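For context, the web and handler processes on that VM are declared with
extra server sections in galaxy.ini, roughly like this (the ports are
illustrative and the pattern repeats up through web4 and handler4):

    [server:web0]
    use = egg:Paste#http
    port = 8080
    use_threadpool = True

    [server:handler0]
    use = egg:Paste#http
    port = 8090
    use_threadpool = True

with matching <handler id="handler0"/> ... <handler id="handler4"/> entries
in the <handlers> section of job_conf.xml.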
I have the 2nd VM being set up using the uWSGI method.
Would it be worth it to proceed with the standalone Paste-based setup, or
just skip it and concentrate exclusively on the uWSGI setup (or keep both)?
I'd like to automate everything (remote job submissions for full workflows)
and just let it (60 test users or so) run for 1 month to check reliability
and anticipate problems.
I'd appreciate feedback, thoughts from those who have already gone through
a similar experience.
TIA
Fasta Datatype Adjustments
by Marco Albuquerque
Hello Galaxy,
I have a small dilemma regarding the fasta datatype.
Currently, to my knowledge, the fasta datatype does not specify any
metadata. I am curious how I should go about changing the fasta datatype if
I wanted to include the .fai and .dict files as metadata. Basically, I want
to avoid unnecessarily recreating the same files, which MuTect does
automatically. It seems very inefficient to have to produce these files
every time a user calls MuTect in Galaxy (i.e. MuTect can't find them, so
it makes them itself).
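For concreteness, what I have in mind is roughly the following subclass,
modelled on the way existing datatypes such as BAM declare their index file
as metadata; the module paths and keyword arguments are from memory, so
please treat this purely as a sketch:

    from galaxy.datatypes.metadata import MetadataElement, FileParameter
    from galaxy.datatypes.sequence import Fasta

    class MuTectFasta( Fasta ):
        """Fasta that also carries a samtools .fai index and a Picard .dict as metadata."""
        file_ext = "mtfasta"

        MetadataElement( name="fasta_index", desc="samtools FASTA index",
                         param=FileParameter, file_ext="fai",
                         readonly=True, visible=False, optional=True, no_value=None )
        MetadataElement( name="sequence_dict", desc="Picard sequence dictionary",
                         param=FileParameter, file_ext="dict",
                         readonly=True, visible=False, optional=True, no_value=None )

        def set_meta( self, dataset, **kwd ):
            # Build the .fai and .dict once here (e.g. via samtools faidx and
            # Picard CreateSequenceDictionary) and attach the resulting files
            # to the metadata elements declared above, so MuTect never has to
            # regenerate them at job time.
            Fasta.set_meta( self, dataset, **kwd )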
I guess my question is, what are the consequences of adjusting the current
implementation of the fasta datatype? Will users be able to pull this tool
easily? (I.e. when someone uploads a tool and that tool uses a different
fasta definition, how does Galaxy handle this?) Could this new fasta
declaration potentially be adopted by Galaxy? Should I just define a new
mtfasta datatype specifically for MuTect purposes?
Let me know if you have any advice,
Marco