Running jobs as real user and extra_file_path
by Louise-Amélie Schmitt
Hi everyone,
I just wanted to ask how the extra_file_path is handled in the case of jobs
running as the real user, since the file_path is only writable by the
galaxy user. Any clue?
Thanks,
L-A
3 years, 11 months
Re: [galaxy-dev] how to create users through Galaxy API?
by Martin Čech
Hi,
you specify username, password and email in the body (payload) of the POST
as Key:Value pairs.
Code from the API method: (
https://galaxy-central.readthedocs.org/en/latest/_modules/galaxy/webapps/...)
username = payload[ 'username' ]
email = payload[ 'email' ]
password = payload[ 'password' ]
There are also other conditions that need to be fulfilled (e.g. user
creation has to be turned on in the configuration) - you will find these
when you look at the source code of the method (because the documentation
is not perfect yet, sorry).
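For illustration only (not taken from the thread), a minimal sketch of such a request
using the Python requests library; the URL, API key and user details are placeholders,
and it assumes user creation is enabled in the configuration:

import requests

payload = {
    "username": "newuser",              # placeholder values
    "email": "newuser@example.org",
    "password": "a-secure-password",
}
# POST to /api/users with an (admin) API key, mirroring the GET example below.
response = requests.post(
    "http://localhost:8080/api/users",
    params={"key": "my_key"},
    json=payload,
)
print(response.status_code, response.json())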
M.
On Sun, Dec 29, 2013 at 11:22 PM, xlwang <xlwang_0903(a)163.com> wrote:
> hi,
> I still have some confusion.
> When I POST to /api/users using Postman, it looks like this:
>
>
> No other parameters? How do I specify the new user's information, such as
> the user name and password?
> Can you give me an example, please? Thank you!
>
> At 2013-12-30 06:04:15,"Martin Čech" <marten(a)bx.psu.edu> wrote:
>
> Hello,
>
> to create a user through API you send a POST request to /api/users
>
> More information about User API here:
>
>
> https://galaxy-central.readthedocs.org/en/latest/lib/galaxy.webapps.galax...
>
> Martin
>
>
> On Fri, Dec 27, 2013 at 5:11 AM, xlwang <xlwang_0903(a)163.com> wrote:
>
>> hello,
>> I want to create users through the Galaxy API. For example, I can get user
>> information through a URL like this (using Postman):
>> http://localhost:8080/api/users?key=my_key
>> But how do I create a user?
>>
>>
>>
>>
>
>
>
>
4 years, 7 months
error loading files into galaxy
by Hakeem Almabrazi
Hi,
I started getting the following error whenever I try to load a file into my local Galaxy instance.
Traceback (most recent call last):
File "/usr/local/galaxy/galaxy-dist/tools/data_source/upload.py", line 8, in <module>
from galaxy import eggs
ImportError: cannot import name eggs
I would appreciate it if someone could tell me what might have caused this issue and how to resolve it.
Regards,
5 years, 1 month
Upload issue in local installation
by Batsal Devkota
I installed Galaxy locally on a Linux server. However, I cannot upload files (no matter how small; I have tried FASTA files of only a few KB). When I try to upload, the link to the file shows up in the history and gets a new number (purple box). When I click on the link I get 'Dataset is uploading' forever.
In the terminal window where I start galaxy, I get the following error report:
92.17.41.13 - - [02/Aug/2012:15:33:32 -0400] "GET / HTTP/1.1" 200 - "-" "Mozilla/5.0 (Windows NT 5.2; rv:13.0) Gecko/20100101 Firefox/13.0.1"
92.17.41.13 - - [02/Aug/2012:15:33:32 -0400] "GET /root/tool_menu HTTP/1.1" 200 - "http://redhat:8080/" "Mozilla/5.0 (Windows NT 5.2; rv:13.0) Gecko/20100101 Firefox/13.0.1"
92.17.41.13 - - [02/Aug/2012:15:33:32 -0400] "GET /history HTTP/1.1" 200 - "http://redhat:8080/" "Mozilla/5.0 (Windows NT 5.2; rv:13.0) Gecko/20100101 Firefox/13.0.1"
92.17.41.13 - - [02/Aug/2012:15:33:33 -0400] "POST /root/user_get_usage HTTP/1.1" 200 - "http://redhat:8080/history" "Mozilla/5.0 (Windows NT 5.2; rv:13.0) Gecko/20100101 Firefox/13.0.1"
92.17.41.13 - - [02/Aug/2012:15:33:40 -0400] "GET /tool_runner?tool_id=upload1 HTTP/1.1" 200 - "http://redhat:8080/root/tool_menu" "Mozilla/5.0 (Windows NT 5.2; rv:13.0) Gecko/20100101 Firefox/13.0.1"
92.17.41.13 - - [02/Aug/2012:15:33:59 -0400] "POST /tool_runner/upload_async_create HTTP/1.1" 200 - "http://redhat:8080/" "Mozilla/5.0 (Windows NT 5.2; rv:13.0) Gecko/20100101 Firefox/13.0.1"
92.17.41.13 - - [02/Aug/2012:15:33:59 -0400] "GET /tool_runner/upload_async_message HTTP/1.1" 200 - "http://redhat:8080/" "Mozilla/5.0 (Windows NT 5.2; rv:13.0) Gecko/20100101 Firefox/13.0.1"
92.17.41.13 - - [02/Aug/2012:15:33:59 -0400] "GET /history HTTP/1.1" 200 - "http://redhat:8080/tool_runner/upload_async_message" "Mozilla/5.0 (Windows NT 5.2; rv:13.0) Gecko/20100101 Firefox/13.0.1"
92.17.41.13 - - [02/Aug/2012:15:33:59 -0400] "POST /root/user_get_usage HTTP/1.1" 200 - "http://redhat:8080/history" "Mozilla/5.0 (Windows NT 5.2; rv:13.0) Gecko/20100101 Firefox/13.0.1"
92.17.41.13 - - [02/Aug/2012:15:34:04 -0400] "POST /root/history_item_updates HTTP/1.1" 200 - "http://redhat:8080/history" "Mozilla/5.0 (Windows NT 5.2; rv:13.0) Gecko/20100101 Firefox/13.0.1"
92.17.41.13 - - [02/Aug/2012:15:34:08 -0400] "POST /root/history_item_updates HTTP/1.1" 200 - "http://redhat:8080/history" "Mozilla/5.0 (Windows NT 5.2; rv:13.0) Gecko/20100101 Firefox/13.0.1"
The last line keeps repeating forever, writing a new line every 4 seconds.
I am stuck and don't know where to look. Please help.
Batsal.
5 years, 2 months
Rename output from a repeat
by Ulf Schaefer
Hi all
We frequently use the syntax below to rename the outputs of workflows that
we run in batch. It is convenient to have sample names from FASTQs
carried over to SAMs, BAMs, VCFs, etc.
#{input1 | basename}.bam
This does not seem to be working for inputs that are in repeats, e.g.
the VelvetOptimiser. Does anybody know if there is a syntax to make this
work, maybe
#{repeatname[0].input1 | basename}.bam ?
Thanks a lot for your help
Ulf
5 years, 7 months
Object-Store, setting filetypes crashes Galaxy
by bjoern.gruening@googlemail.com
Hi,
I have configured Galaxy to use the hierarchical object store, but as soon as I try
to reset the filetype of a dataset, Galaxy crashes with:
galaxy.objectstore DEBUG 2014-06-12 14:39:21,180 Using preferred backend 'files3' for creation of MetadataFile 5963
132.230.153.57 - - [12/Jun/2014:14:39:20 +0200] "POST /datasets/966f24627ef70c12/edit HTTP/1.1" 500 - "http://galaxy.uni-freiburg.de/datasets/966f24627ef70c12/edit" "Mozilla/5.0 (X11; Linux x86_64; rv:29.0) Gecko/20100101 Firefox/29.0"
Error - <type 'exceptions.OSError'>: [Errno 2] No such file or directory: 'database/tmp/metadata_temp_file_1xnGcE'
URL: http://galaxy.uni-freiburg.de/datasets/966f24627ef70c12/edit
File '/usr/local/galaxy/galaxy-dist/lib/galaxy/web/framework/middleware/error.py', line 149 in __call__
  app_iter = self.application(environ, sr_checker)
File '/usr/local/galaxy/galaxy-dist/eggs/Paste-1.7.5.1-py2.7.egg/paste/recursive.py', line 84 in __call__
  return self.application(environ, start_response)
File '/usr/local/galaxy/galaxy-dist/eggs/Paste-1.7.5.1-py2.7.egg/paste/httpexceptions.py', line 633 in __call__
  return self.application(environ, start_response)
File '/usr/local/galaxy/galaxy-dist/lib/galaxy/web/framework/base.py', line 132 in __call__
  return self.handle_request( environ, start_response )
File '/usr/local/galaxy/galaxy-dist/lib/galaxy/web/framework/base.py', line 190 in handle_request
  body = method( trans, **kwargs )
File '/usr/local/galaxy/galaxy-dist/lib/galaxy/webapps/galaxy/controllers/dataset.py', line 295 in edit
  trans.app.datatypes_registry.set_external_metadata_tool.tool_action.execute( trans.app.datatypes_registry.set_external_metadata_tool, trans, incoming = { 'input1':data }, overwrite = False ) #overwrite is False as per existing behavior
File '/usr/local/galaxy/galaxy-dist/lib/galaxy/tools/actions/metadata.py', line 18 in execute
  overwrite, history, job_params )
File '/usr/local/galaxy/galaxy-dist/lib/galaxy/tools/actions/metadata.py', line 79 in execute_via_app
  kwds = { 'overwrite' : overwrite } )
File '/usr/local/galaxy/galaxy-dist/lib/galaxy/datatypes/metadata.py', line 717 in setup_external_metadata
  shutil.copy( dataset.metadata.get( meta_key, None ).file_name, metadata_temp.file_name )
File '/usr/local/galaxy/galaxy-dist/lib/galaxy/datatypes/metadata.py', line 575 in file_name
  self._filename = abspath( tempfile.NamedTemporaryFile( dir = self.tmp_dir, prefix = "metadata_temp_file_" ).name )
File '/usr/local/python/2.7/lib/python2.7/tempfile.py', line 454 in NamedTemporaryFile
  (fd, name) = _mkstemp_inner(dir, prefix, suffix, flags)
File '/usr/local/python/2.7/lib/python2.7/tempfile.py', line 235 in _mkstemp_inner
  fd = _os.open(file, flags, 0600)
OSError: [Errno 2] No such file or directory: 'database/tmp/metadata_temp_file_1xnGcE'
I have attached my object_store_conf.xml file.
Thanks,
Bjoern
5 years, 9 months
Parallelism using metadata
by Marco Albuquerque
Hello Galaxy Dev,
I have a question regarding parallelism on a BAM file.
I have currently implemented 3 split options for the BAM datatype:
1) by_rname -> splits the bam into files based on the chromosome
2) by_interval -> splits the bam into files based on a defined bp length,
and does so across the entire genome present in the BAM file
3) by_read -> splits the bam into files based on the number of reads
encountered (if multiple files, all other files match the interval as the
first)
Now, as you can imagine, reading and writing large BAM files is a pain, and
I personally think this is not the best solution for Galaxy.
What I was hoping to implement (but don't know how) is a new metadata option on
bam (bam.metadata.bam_interval) which would define the interval without creating
a new file: essentially, I would create a symbolic link to the old large file and
then update metadata.bam_interval. This would contain a string of the form
chrom:start-end, which could then be used in a variety of tools that accept an
interval as an option (for example, samtools view).
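For what it's worth, here is a rough, Galaxy-agnostic sketch of that idea (it does not
use Galaxy's real split API; the function name, chromosome lengths and output layout are
made up): each "split" is just a symlink to the original BAM plus an interval string that
a tool such as samtools view could consume.

import os

def split_by_interval(bam_path, chrom_lengths, interval_size, out_dir):
    # Yield (symlink_path, "chrom:start-end") pairs that cover the genome.
    # No BAM data is copied: every "split" is a symlink to the same file,
    # and the interval string is what the downstream tool would consume,
    # e.g. `samtools view part_0.bam chr1:1-50000000`.
    if not os.path.isdir(out_dir):
        os.makedirs(out_dir)
    part = 0
    for chrom, length in chrom_lengths.items():
        start = 1
        while start <= length:
            end = min(start + interval_size - 1, length)
            link = os.path.join(out_dir, "part_%d.bam" % part)
            os.symlink(os.path.abspath(bam_path), link)
            yield link, "%s:%d-%d" % (chrom, start, end)
            part += 1
            start = end + 1

# Example with made-up chromosome lengths and a 50 Mbp split size:
for link, interval in split_by_interval(
        "input.bam", {"chr1": 248956422, "chr2": 242193529}, 50000000, "splits"):
    print(link, interval)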
This would be far more efficient than my first implementation, but the thing
I don't know how to do is specify this kind of metadata at the split level.
I was hoping maybe you could direct me to an example that does this?
I have added the following to my metadata.py file:
class IntervalParameter( MetadataParameter ):
    def __init__( self, spec ):
        MetadataParameter.__init__( self, spec )
        self.rname = self.spec.get( "rname" )
        self.start = self.spec.get( "start" )
        self.end = self.spec.get( "end" )

    def to_string( self ):
        if self.rname == 'all':
            return ''
        return ''.join( [ self.rname, ':', self.start, '-', self.end ] )
And the following to my binary.py file:
### UNDER THE BAM CLASS
MetadataElement( name="bam_interval", desc="BAM Interval", param=metadata.IntervalParameter,
                 rname="all", start="", end="", visible=False, optional=True )
I somehow want rname="all" to be the default, but upon parallelism, I want
to be able to adjust this parameter in the split functions.
So,
<parallelism method="multi" split_inputs="normal,tumour"
split_mode="by_interval" split_size="50000000" merge_outputs="output"/>
Would actually change the metadata of each file, and not create sub-bams.
PLEASE HELP!!!
Marco
5 years, 9 months
Strange repository dependency behavior with toolshed-installed naive_variant_caller
by Brian Claywell
First, I'm at a bit of a loss to even describe what's going on, so I
apologize in advance for a rambling bug report. Please bear with me!
The naive_variant_caller tool was failing on our instance with "Failed to
resolve dependency" errors on numpy, pyBamParser, and pyBamTools, despite
the tool dependencies and their respective dependency packages being
installed.
I attempted to use the "repair repository" action on naive_variant_caller
with no success. I then uninstalled naive_variant_caller,
package_numpy_1_7, package_pybamparser_0_0_1, and package_pybamtools_0_0_1
and attempted to reinstall naive_variant_caller from scratch, again
allowing Galaxy to manage the dependencies.
This time, the dependencies resolved (at least as far as the "Building
dependency shell command" messages were concerned), but
naive_variant_caller still failed, this time with an import error:
ImportError: No module named pyBamTools.genotyping.naive
Digging around a little I found that the env.sh that was sourced for
pyBamTools was looking for an env.sh in another pyBamTools revision
directory, but the revision it was looking for didn't exist.
Then I looked in the admin interface and saw that package_pybamtools_0_0_1
and package_pybamparser_0_0_1 were each installed twice; each had both the
most recent revision and an older revision with the "updates are available"
icon.
At this point I uninstalled naive_variant_caller and all of its dependency
packages again. I then installed naive_variant_caller from the toolshed
*without* dependency management, and manually installed the required
revisions of numpy, pybamparser, and pybamtools. Finally
naive_variant_caller started working.
However, naive_variant_caller is now shown in the admin interface as
"Installed, missing repository dependencies", and the installed repository
page shows that the dependencies are in a very strange state (see attached
screenshot). That is, the same revision of numpy is listed as installed
twice, and the pybamparser and pybamtools dependencies are each listed as
simultaneously installed and uninstalled.
Can anyone shed some light on what could be happening and what to do about
it? Is there some external way to verify the consistency of a tool and its
dependencies?
Thanks!
Cheers,
Brian
--
Brian Claywell | programmer/analyst
Matsen Group | http://matsen.fredhutch.org
Fred Hutchinson Cancer Research Center
5 years, 10 months
Job submission failed
by Vipin TS
Hi dev-team,
I am experiencing an issue with my production instance in dispatching the
jobs. When I click on execute button, each tool page shows the popup error
message as here:
[image: Inline image 1]
I am not sure what is happening; I am not getting a single line of error
message in the log file.
Here is the version I am using:
changeset: 17051:28113eeee212
branch: stable
date: Mon Mar 16 22:38:27 2015 -0400
summary: Update tag latest_15.03 for changeset 9f40ff3e3fe1
any suggestions, Thank you!
Vipin | Rätsch Lab
5 years, 11 months
use of copied history while original user account deleted (and purged)
by Olivia Doppelt-Azeroual
Dear Developers,
We manage a Galaxy instance at Institut Pasteur with more than a hundred users. Among them are postdocs who eventually
leave the institute.
Our Galaxy instance is configured with LDAP authentication. The LDAP entry is removed shortly after the end of their contracts.
We are facing the following problem:
A user left a few months ago:
1/ Before leaving, she shared her interesting histories with another colleague in the lab.
2/ Before leaving, the colleague created a copy of every shared history in her own Galaxy account.
3/ The colleague ran a few tests to check that the data were really transferred, by displaying or downloading them, and everything was OK.
4/ Time passed and the LDAP account of the user who left was removed. We then deleted and purged her Galaxy account.
5/ The colleague has since tried to relaunch some analyses using the copied histories. The data are there, as she can still download or display the files, but the jobs are never launched; they remain grey in the copied history.
The problem is that there are no logs at all on the galaxy side. No command line is generated either.
On the reporting side of Galaxy, we see that the jobs have been created, but their status remains 'NEW'.
We know that this will be a recurrent problem if we can't resolve it.
Has anyone already reported something like this?
Best regards,
--
Olivia Doppelt-Azeroual, PhD
Fabien Mareuil, PhD
Bioinformatics Engineer
Galaxy Team - CIB/C3BI
Institut Pasteur, Paris
5 years, 11 months