Problems with DataImport
by Ted Goldstein
Hi there,
Here are three interrelated issues.
I am trying to use Galaxy with some large cancer genomic datasets here at UCSC and do some systems biology. I have petabyte-scale data libraries which will constantly be in flux at the edges. For large datasets I would prefer to have Galaxy read the metadata straight from the file system, without going through the database. Is there a convenient API boundary where I could write an adapter to the dataset object interface?
In the meantime, I am going to try to just import data using the link option. It's great that this feature is already in. However, when I import even a modest megabyte-size dataset using the "Link to files without copying to Galaxy" option, the status never changes from "queued". Is this a bug? Is there a known workaround? I have many large datasets.
Also, it takes a long time to expand the dataset name link. (My import experiment is a data tree of about a thousand files.) Is this a known bug?
Thanks!
Ted
Re: [galaxy-dev] Error viewing BAM files in IGV
by Jim Johnson
Hi,
I'm seeing the same behavior. Galaxy is returning a web page rather than the requested .bai index file for the BAM file.
In class WebApplication (lib/galaxy/web/framework/base.py), in __call__( self, environ, start_response ), at line 133:
# Setup the transaction
trans = self.transaction_factory( environ )
the request gets routed to the root controller, which returns the Galaxy server's main page HTML instead of the index file.
JJ
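JJ's diagnosis can be checked from the command line: fetch the .bai URL that IGV requests and inspect the first bytes. A real BAM index begins with the magic string "BAI\1"; an HTML page does not. A minimal sketch (the saved responses below are simulated stand-ins; in practice you would `curl -s` the display_application URL from the access log into a file):

```shell
# Simulated responses; live equivalent: curl -s "<galaxy .bai url>" -o /tmp/resp.bin
printf '<!DOCTYPE html><html><head>...' > /tmp/resp_html.bin
printf 'BAI\001rest-of-index-data'      > /tmp/resp_bai.bin

classify() {
  # A real BAM index starts with the 4-byte magic "BAI\1".
  case "$(head -c 3 "$1")" in
    BAI) echo "looks like a BAM index" ;;
    *)   echo "not a .bai -- the server probably returned an HTML page" ;;
  esac
}
classify /tmp/resp_html.bin   # not a .bai -- the server probably returned an HTML page
classify /tmp/resp_bai.bin    # looks like a BAM index
```

If the response classifies as HTML, that confirms the request is being routed to the root controller rather than served via XSendFile.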
> On Dec 14, 2011, at 4:22 PM, Alexander Graf wrote:
>
>> Hi Nate,
>> I have tried it with several BAM files, resulting in the same error.
>> If I download the BAM and bai files from Galaxy and load them into IGV manually, everything works like a charm.
>> Up to now I could not figure out why it is not working.
>> Could I have better success switching to the nginx-server?
> Hi Alex,
>
> It should work with Apache as well. I don't have an environment set up here to test, but could you take a look at the Apache access and error logs to determine whether the file is being found and read properly? It's possible that the request is returning something other than a 200 code and the file data.
>
> --nate
>
>> Alex
>>
>> Am 12.12.2011 um 16:11 schrieb Nate Coraor:
>>
>>> On Dec 9, 2011, at 6:45 AM, Alexander Graf wrote:
>>>
>>>> Hello,
>>>> I have recently updated our Galaxy dist and I'm running into problems viewing BAM files in IGV (v2.0.22), saying: Invalid GZIP header.
>>> Hi Alex,
>>>
>>> Your config below looks okay at first glance. Can you verify that the file in question is a valid BAM? Or is this happening with all BAMs?
>>>
>>> --nate
>>>
>>>> I have configured the Apache as explained in the Wiki using this http.conf:
>>>>
>>>>
>>>> ------------------------------------------------------------------------------------------------------------------------------------
>>>> <VirtualHost *:80>
>>>> ServerName 127.0.0.1
>>>> RewriteEngine on
>>>>
>>>> RewriteRule ^/galaxy$ /galaxy/ [R]
>>>> RewriteRule ^/galaxy/static/style/(.*) /opt/galaxy/static/june_2007_style/blue/$1 [L]
>>>> RewriteRule ^/galaxy/static/scripts/(.*) /opt/galaxy/static/scripts/packed/$1 [L]
>>>> RewriteRule ^/galaxy/static/(.*) /opt/galaxy/static/$1 [L]
>>>> RewriteRule ^/galaxy/favicon.ico /opt/galaxy/static/favicon.ico [L]
>>>> RewriteRule ^/galaxy/robots.txt /opt/galaxy/static/robots.txt [L]
>>>> RewriteRule ^/galaxy(.*) http://localhost:8081$1 [P]
>>>>
>>>> <Proxy http://localhost:8081>
>>>> Order deny,allow
>>>> Allow from all
>>>> </Proxy>
>>>>
>>>> <Location "/galaxy">
>>>> # Define the authentication method
>>>> XSendFile on
>>>> XSendFilePath /
>>>> # Compress all uncompressed content.
>>>> SetOutputFilter DEFLATE
>>>> SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|png)$ no-gzip dont-vary
>>>> SetEnvIfNoCase Request_URI \.(?:t?gz|zip|bz2)$ no-gzip dont-vary
>>>> </Location>
>>>> <Directory "/galaxy/static">
>>>> ExpiresActive On
>>>> ExpiresDefault "access plus 6 hours"
>>>> </Directory>
>>>> </VirtualHost>
>>>> ----------------------------------------------------------------------------------------------------------------------------------------
>>>>
>>>>
>>>> By having this parts changed in universe_wsgi.ini:
>>>> ----------------------------------------------------------------------------------------------------------------------------------------
>>>> [server:main]
>>>> use = egg:Paste#http
>>>> port = 8081
>>>> host = 0.0.0.0
>>>> use_threadpool = True
>>>>
>>>> [filter:gzip]
>>>> use = egg:Paste#gzip
>>>>
>>>> [filter:proxy-prefix]
>>>> use = egg:PasteDeploy#prefix
>>>> prefix = /galaxy
>>>>
>>>> [app:main]
>>>> paste.app_factory = galaxy.web.buildapp:app_factory
>>>> filter-with = proxy-prefix
>>>> cookie_path = /galaxy
>>>> apache_xsendfile = True
>>>> upstream_gzip = False
>>>> ----------------------------------------------------------------------------------------------------------------------------------------
>>>>
>>>> The resulting Galaxy error log is:
>>>> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>>>> galaxy.web.framework DEBUG 2011-12-09 12:32:08,825 Error: this request returned None from get_history():http://10.153.182.203/galaxy/root
>>>> 10.163.241.110 - - [09/Dec/2011:12:32:08 +0200] "GET /galaxy/root?app_action=data&user_id=c9a3f3a19e75965d&app_name=igv_bam&link_name=local_default&action_param=galaxy_9b0f702d0207cd78.bam.bai&dataset_id=9b0f702d0207cd78 HTTP/1.1" 200 - "-" "IGV Version 2.0.22 (1360)11/29/2011 02:24 PM Java/1.6.0_22"
>>>> 10.163.241.110 - - [09/Dec/2011:12:32:08 +0200] "HEAD /galaxy/display_application/9b0f702d0207cd78/igv_bam/local_default/c9a3f3a19e75965d/data/galaxy_9b0f702d0207cd78.bam HTTP/1.1" 302 - "-" "IGV Version 2.0.22 (1360)11/29/2011 02:24 PM Java/1.6.0_22"
>>>> galaxy.web.framework DEBUG 2011-12-09 12:32:08,915 Error: this request returned None from get_history():http://10.153.182.203/galaxy/root
>>>> 10.163.241.110 - - [09/Dec/2011:12:32:08 +0200] "HEAD /galaxy/root?app_action=data&user_id=c9a3f3a19e75965d&app_name=igv_bam&link_name=local_default&action_param=galaxy_9b0f702d0207cd78.bam&dataset_id=9b0f702d0207cd78 HTTP/1.1" 200 - "-" "IGV Version 2.0.22 (1360)11/29/2011 02:24 PM Java/1.6.0_22"
>>>> ----------------------------------------
>>>> Exception happened during processing of request from ('127.0.0.1', 52683)
>>>> Traceback (most recent call last):
>>>> File "/opt/galaxy/eggs/Paste-1.6-py2.6.egg/paste/httpserver.py", line 1053, in process_request_in_thread
>>>> self.finish_request(request, client_address)
>>>> File "/usr/lib/python2.6/SocketServer.py", line 322, in finish_request
>>>> self.RequestHandlerClass(request, client_address, self)
>>>> File "/usr/lib/python2.6/SocketServer.py", line 618, in __init__
>>>> self.finish()
>>>> File "/usr/lib/python2.6/SocketServer.py", line 661, in finish
>>>> self.wfile.flush()
>>>> File "/usr/lib/python2.6/socket.py", line 297, in flush
>>>> self._sock.sendall(buffer(data, write_offset, buffer_size))
>>>> error: [Errno 32] Broken pipe
>>>> ----------------------------------------
>>>>
>>>> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>>>>
>>>>
>>>> Thanks in advance for your help
>>>>
>>>> Alex
>>>>
>>>> ___________________________________________________________
>>>> Please keep all replies on the list by using "reply all"
>>>> in your mail client. To manage your subscriptions to this
>>>> and other Galaxy lists, please use the interface at:
>>>>
>>>> http://lists.bx.psu.edu/
generic sounding error when trying to upload files to data library
by dhivya arasappan
Hi,
We have our own galaxy instance and I'm trying to add datasets to a
data library. It has always worked before, but today, when I choose
any of the upload options, it gives the following error:
This link may not be followed from within Galaxy.
It seems like a very generic error. Any ideas of what could be going
on? Thanks for the help.
Dhivya
download error - duplicate headers received from server
by Jeremy Coate
I'm trying to download a fastq sanger file from my Galaxy (Main) account
and getting the error message below as of 2:15pm, Wed, 1/11/12. I have a
concatenated fastq file and I get this message when clicking the download
(floppy disk) icon. Any help would be appreciated. Thanks,
Jeremy
Duplicate headers received from server
The response from the server contained duplicate headers. This problem is
generally the result of a misconfigured website or proxy. Only the website
or proxy administrator can fix this issue.
Error 349 (net::ERR_RESPONSE_HEADERS_MULTIPLE_CONTENT_DISPOSITION):
Multiple Content-Disposition headers received. This is disallowed to
protect against HTTP response splitting attacks
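This browser error means the response carried the Content-Disposition header twice, usually because a proxy (or a second layer of the stack) added one on top of the header Galaxy already set. One way to confirm is to capture the response headers and count the occurrences. The dump below is simulated so the sketch is self-contained; against a live server you would capture headers with `curl -sI` on the dataset's download URL (the URL shown is a placeholder):

```shell
# Simulated header capture; live equivalent (placeholder URL):
#   curl -sI 'https://your-galaxy/datasets/<id>/display?to_ext=fastqsanger' > /tmp/headers.txt
cat > /tmp/headers.txt <<'EOF'
HTTP/1.1 200 OK
Content-Disposition: attachment; filename="reads.fastqsanger"
Content-Disposition: attachment; filename="reads.fastqsanger"
Content-Type: application/octet-stream
EOF
# A count greater than 1 reproduces the browser's complaint:
grep -ci '^content-disposition:' /tmp/headers.txt
```

A count of 2 (as here) is exactly the condition the browser refuses; finding which layer adds the second copy then narrows the fix to the proxy or application config.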
Tophat Error?
by Minh Bui
Hi everyone,
I have a *.qual file and I ran Tophat on the public galaxy server but I got
the following error. I am wondering if anyone has a solution for this.
Thank you
Input: QUAL Groomer file
Database: hg19 Full
Error:
Error in tophat:
[Tue Jan 17 10:55:08 2012] Beginning TopHat run (v1.4.0)
-----------------------------------------------
[Tue Jan 17 10:55:08 2012] Preparing output location ./tophat_out/
[Tue Jan 17 10:55:08 2012] Checking for Bowtie index files
[Tue Jan 17 10:55:08 2012] Checking for reference FASTA file
[Tue Jan 17 10:55:08 2012] Checking for Bowtie
Bowtie version: 0.12.7.0
[Tue Jan 17 10:55:08 2012] Checking for Samtools
Samtools Version: 0.1.12
[Tue Jan 17 10:55:08 2012] Generating SAM header for
/galaxy/data/hg19/hg19full/bowtie_index/hg19full
format: fastq
quality scale: phred33 (default)
[Tue Jan 17 10:55:34 2012] Preparing reads
left reads: min. length=50, count=5247
[Tue Jan 17 10:55:34 2012] Mapping left_kept_reads against hg19full with Bowtie
[Tue Jan 17 10:55:36 2012] Processing bowtie hits
[Tue Jan 17 10:57:03 2012] Mapping left_kept_reads_seg1 against
hg19full with Bowtie (1/21)
[Tue Jan 17 10:57:04 2012] Mapping left_kept_reads_seg2 against
hg19full with Bowtie (2/21)
[Tue Jan 17 10:57:05 2012] Mapping left_kept_reads_seg3 against
hg19full with Bowtie (3/21)
[Tue Jan 17 10:57:07 2012] Mapping left_kept_reads_seg4 against
hg19full with Bowtie (4/21)
[Tue Jan 17 10:57:08 2012] Mapping left_kept_reads_seg5 against
hg19full with Bowtie (5/21)
[Tue Jan 17 10:57:10 2012] Mapping left_kept_reads_seg6 against
hg19full with Bowtie (6/21)
[Tue Jan 17 10:57:11 2012] Mapping left_kept_reads_seg7 against
hg19full with Bowtie (7/21)
[Tue Jan 17 10:57:12 2012] Mapping left_kept_reads_seg8 against
hg19full with Bowtie (8/21)
[Tue Jan 17 10:57:13 2012] Mapping left_kept_reads_seg9 against
hg19full with Bowtie (9/21)
[Tue Jan 17 10:57:15 2012] Mapping left_kept_reads_seg10 against
hg19full with Bowtie (10/21)
[Tue Jan 17 10:57:16 2012] Mapping left_kept_reads_seg11 against
hg19full with Bowtie (11/21)
[Tue Jan 17 10:57:17 2012] Mapping left_kept_reads_seg12 against
hg19full with Bowtie (12/21)
[Tue Jan 17 10:57:18 2012] Mapping left_kept_reads_seg13 against
hg19full with Bowtie (13/21)
[Tue Jan 17 10:57:19 2012] Mapping left_kept_reads_seg14 against
hg19full with Bowtie (14/21)
[Tue Jan 17 10:57:20 2012] Mapping left_kept_reads_seg15 against
hg19full with Bowtie (15/21)
[Tue Jan 17 10:57:21 2012] Mapping left_kept_reads_seg16 against
hg19full with Bowtie (16/21)
[Tue Jan 17 10:57:22 2012] Mapping left_kept_reads_seg17 against
hg19full with Bowtie (17/21)
[Tue Jan 17 10:57:23 2012] Mapping left_kept_reads_seg18 against
hg19full with Bowtie (18/21)
[Tue Jan 17 10:57:24 2012] Mapping left_kept_reads_seg19 against
hg19full with Bowtie (19/21)
[Tue Jan 17 10:57:25 2012] Mapping left_kept_reads_seg20 against
hg19full with Bowtie (20/21)
[Tue Jan 17 10:57:26 2012] Mapping left_kept_reads_seg21 against
hg19full with Bowtie (21/21)
[Tue Jan 17 10:57:27 2012] Searching for junctions via segment mapping
[Tue Jan 17 10:58:42 2012] Retrieving sequences for splices
[Tue Jan 17 11:02:24 2012] Indexing splices
[Tue Jan 17 11:02:25 2012] Mapping left_kept_reads_seg1 against
segment_juncs with Bowtie (1/21)
[Tue Jan 17 11:02:25 2012] Mapping left_kept_reads_seg2 against
segment_juncs with Bowtie (2/21)
[Tue Jan 17 11:02:25 2012] Mapping left_kept_reads_seg3 against
segment_juncs with Bowtie (3/21)
[Tue Jan 17 11:02:25 2012] Mapping left_kept_reads_seg4 against
segment_juncs with Bowtie (4/21)
[Tue Jan 17 11:02:25 2012] Mapping left_kept_reads_seg5 against
segment_juncs with Bowtie (5/21)
[Tue Jan 17 11:02:25 2012] Mapping left_kept_reads_seg6 against
segment_juncs with Bowtie (6/21)
[Tue Jan 17 11:02:25 2012] Mapping left_kept_reads_seg7 against
segment_juncs with Bowtie (7/21)
[Tue Jan 17 11:02:25 2012] Mapping left_kept_reads_seg8 against
segment_juncs with Bowtie (8/21)
[Tue Jan 17 11:02:25 2012] Mapping left_kept_reads_seg9 against
segment_juncs with Bowtie (9/21)
[Tue Jan 17 11:02:25 2012] Mapping left_kept_reads_seg10 against
segment_juncs with Bowtie (10/21)
[Tue Jan 17 11:02:25 2012] Mapping left_kept_reads_seg11 against
segment_juncs with Bowtie (11/21)
[Tue Jan 17 11:02:25 2012] Mapping left_kept_reads_seg12 against
segment_juncs with Bowtie (12/21)
[Tue Jan 17 11:02:25 2012] Mapping left_kept_reads_seg13 against
segment_juncs with Bowtie (13/21)
[Tue Jan 17 11:02:26 2012] Mapping left_kept_reads_seg14 against
segment_juncs with Bowtie (14/21)
[Tue Jan 17 11:02:26 2012] Mapping left_kept_reads_seg15 against
segment_juncs with Bowtie (15/21)
[Tue Jan 17 11:02:26 2012] Mapping left_kept_reads_seg16 against
segment_juncs with Bowtie (16/21)
[Tue Jan 17 11:02:26 2012] Mapping left_kept_reads_seg17 against
segment_juncs with Bowtie (17/21)
[Tue Jan 17 11:02:26 2012] Mapping left_kept_reads_seg18 against
segment_juncs with Bowtie (18/21)
[Tue Jan 17 11:02:26 2012] Mapping left_kept_reads_seg19 against
segment_juncs with Bowtie (19/21)
[Tue Jan 17 11:02:26 2012] Mapping left_kept_reads_seg20 against
segment_juncs with Bowtie (20/21)
[Tue Jan 17 11:02:26 2012] Mapping left_kept_reads_seg21 against
segment_juncs with Bowtie (21/21)
[Tue Jan 17 11:02:26 2012] Joining segment hits
[FAILED]
Error executing:
/galaxy/mounts/software/linux2.6-x86_64/pkg/tophat-1.4.0/bin/bam_merge
./tophat_out/tmp/left_kept_reads.candidates_and_unspl.bam
./tophat_out/tmp/left_kept_reads.candidates.bam
./tophat_out/tmp/left_kept_reads.unspl.bam
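No reply is recorded in this thread, but a bam_merge failure at this stage can happen when one of the intermediate BAMs is missing or empty. On the public server these files cannot be inspected, but in a local run a first check is simple. A hedged sketch (the directory layout below is recreated for illustration only):

```shell
# Recreate the layout from the error message purely for illustration.
OUT=./tophat_out/tmp
mkdir -p "$OUT"
printf 'fake-bam-bytes' > "$OUT/left_kept_reads.candidates.bam"
: > "$OUT/left_kept_reads.unspl.bam"   # simulate an empty intermediate file

for f in "$OUT/left_kept_reads.candidates.bam" "$OUT/left_kept_reads.unspl.bam"; do
  if [ -s "$f" ]; then
    echo "ok: $f"
  else
    echo "EMPTY OR MISSING: $f"
  fi
done
```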
Customizing the history panel
by Lukasse, Pieter
Hi,
I'm looking for ways to customize the history item that gets added to the history panel after a workflow step has finished.
Below is the standard set of buttons/icons that we normally get:
[inline image: the standard set of dataset buttons/icons; not included in this archive]
What I would like to do is to either add some extra buttons to it OR extend the functionality of such a button. How can this be done?
I only found the page below mentioning there are some extra optional buttons, but it does not tell me how to enable/disable the buttons:
http://wiki.g2.bx.psu.edu/Learn/Managing%20Datasets
Thanks and regards,
Pieter Lukasse.
Re: [galaxy-dev] Problem running purge_datasets.sh cleanup scripts
by Kip Bodi
Hi Nate,
For those of us who might be having some storage issues and need to purge
datasets, is there a field(s) we can manually update in the database to get
past this issue for the time being? I'll definitely make some backups
first...
Cheers,
Kip
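No reply with specifics appears before Nate's fix, but for context: Galaxy's dataset table carries `deleted` and `purged` boolean columns, and the purge script both flips `purged` and removes the file from disk. Flipping the flag by hand only stops the script from revisiting the row; the file itself would still have to be deleted manually. The toy sketch below (SQLite standing in for the real database, table trimmed to two columns) is illustration only; verify column names against your actual schema and back up first:

```shell
DB=/tmp/toy_galaxy.db
rm -f "$DB"
sqlite3 "$DB" <<'EOF'
-- Trimmed-down stand-in for Galaxy's dataset table.
CREATE TABLE dataset (id INTEGER PRIMARY KEY, deleted BOOLEAN, purged BOOLEAN);
INSERT INTO dataset VALUES (58, 1, 0);
-- The manual workaround: mark the already-deleted dataset as purged.
UPDATE dataset SET purged = 1 WHERE id = 58 AND deleted = 1;
SELECT id, deleted, purged FROM dataset;
EOF
```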
On Mon, Jan 30, 2012 at 12:50 PM, Nate Coraor <nate(a)bx.psu.edu> wrote:
> On Jan 25, 2012, at 10:37 AM, Leandro Hermida wrote:
>
> > Hello,
> >
> > In the latest galaxy-dist version when running the system cleanup
> commands in the documented order I get an error when running the
> purge_datasets.sh script:
> >
> > sh delete_userless_histories.sh
> > sh purge_histories.sh
> > sh purge_libraries.sh
> > sh purge_folders.sh
> > sh delete_datasets.sh
> > sh purge_datasets.sh
> > Traceback (most recent call last):
> > File "./scripts/cleanup_datasets/cleanup_datasets.py", line 518, in
> <module>
> > if __name__ == "__main__": main()
> > File "./scripts/cleanup_datasets/cleanup_datasets.py", line 116, in main
> > purge_datasets( app, cutoff_time, options.remove_from_disk, info_only
> = options.info_only, force_retry = options.force_retry )
> > File "./scripts/cleanup_datasets/cleanup_datasets.py", line 353, in
> purge_datasets
> > _purge_dataset( app, dataset, remove_from_disk, info_only = info_only
> )
> > File "./scripts/cleanup_datasets/cleanup_datasets.py", line 478, in
> _purge_dataset
> > print "Error attempting to purge data file: ", dataset.file_name, "
> error: ", str( exc )
> > File
> "/gpfshpc/software/pmrdcdb/galaxy/galaxy_dist_dev/lib/galaxy/model/__init__.py",
> line 651, in get_file_name
> > assert self.object_store is not None, "Object Store has not been
> initialized for dataset %s" % self.id
> > AssertionError: Object Store has not been initialized for dataset 58
> >
> > What could be going wrong?
>
> Hi Leandro,
>
> This is a bug that cropped up when we added the object store backend to
> Galaxy. I'm working on a fix which should be available in the next release
> of Galaxy. Sorry for the inconvenience.
>
> Thanks,
> --nate
>
> >
> > regards,
> > Leandro
--
Kip Lord Bodi
Tufts University School of Medicine
136 Harrison Avenue, South Cove 803
Boston, MA 02118
617-636-3763
kip.bodi(a)tufts.edu
www.tucf.org | genomics.med.tufts.edu
Re: [galaxy-dev] samtools in local instance of galaxy
by Nate Coraor
On Jan 31, 2012, at 5:29 AM, Praveen Baskaran wrote:
> Nate,
>
> Thanks,
> I have already changed that in universe_wsgi.ini, but the problem
> continues.
>
> When I checked the galaxy.log file after executing samtools, got the
> following error:
>
>
>
> galaxy.jobs.runners.drmaa DEBUG 2012-01-28 16:14:18,582 (258/2349769)
> state change: job is running
>
> galaxy.jobs.runners.drmaa DEBUG 2012-01-28 16:14:20,174 (258/2349769)
> state change: job finished normally
>
> galaxy.datatypes.metadata DEBUG 2012-01-28 16:14:20,405 loading metadata
> from file for: HistoryDatasetAssociation 412
>
> 10.12.152.44 - - [28/Jan/2012:16:14:20 +0100] "POST
> /root/history_item_updates HTTP/1.1" 200 - "http://localhost:8181/history"
> "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) AppleWebKit/535.7 (KHTML,
> like Gecko) Chrome/16.0.912.77 Safari/535.7"
>
> galaxy.jobs.runners.drmaa ERROR 2012-01-28 16:14:20,755 Job wrapper finish
> method failed
>
> Traceback (most recent call last):
>
> File
> "/apps1/applications/galaxy/galaxy-dist/1/default/lib/galaxy/jobs/runners/d
> rmaa.py", line 282, in finish_job
>
> drm_job_state.job_wrapper.finish( stdout, stderr )
>
> File
> "/apps1/applications/galaxy/galaxy-dist/1/default/lib/galaxy/jobs/__init__.
> py", line 569, in finish
>
> dataset.metadata.from_JSON_dict(
> self.external_output_metadata.get_output_filenames_by_dataset( dataset,
> self.sa_session ).filename_out )
>
> File
> "/apps1/applications/galaxy/galaxy-dist/1/default/lib/galaxy/datatypes/meta
> data.py", line 109, in from_JSON_dict
>
> dataset._metadata[ name ] = spec.param.from_external_value(
> JSONified_dict[ name ], dataset )
>
> File
> "/apps1/applications/galaxy/galaxy-dist/1/default/lib/galaxy/datatypes/meta
> data.py", line 440, in from_external_value
>
> shutil.move( value.file_name, mf.file_name )
>
> File
> "/apps1/applications/galaxy/galaxy-dist/1/default/lib/galaxy/model/__init__
> .py", line 1601, in file_name
>
> os.makedirs( path )
>
> File "/apps/compilers/python/2.7.2/2/default/lib/python2.7/os.py", line
> 157, in makedirs
>
> mkdir(name, mode)
>
> OSError: [Errno 13] Permission denied:
> '/nobackup/galaxy/database/files/_metadata_files/000'.
Hi Praveen,
It looks like the file_path directory (in this case the default <galaxy root>/database/files/) is not writable by the user running Galaxy.
--nate
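Nate's diagnosis can be verified directly: the traceback dies in os.makedirs under file_path, so the question is simply whether the user running Galaxy can create the _metadata_files subtree there. A sketch (the demo uses a temp directory so it is runnable anywhere; substitute the real file_path from the traceback, /nobackup/galaxy/database/files, and run it as the Galaxy user):

```shell
# Substitute your real file_path from universe_wsgi.ini; a temp dir is used
# here so the sketch is self-contained.
FILES_DIR=$(mktemp -d)
if mkdir -p "$FILES_DIR/_metadata_files/000" 2>/dev/null; then
  echo "writable: Galaxy can create its metadata directories"
else
  echo "permission denied: chown the tree to the user running Galaxy"
fi
```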
>
>
>
>
>
> Thanks
>
> Praveen Baskaran
>
>
>
>
>
>
> On 30/01/2012 17:43, "Nate Coraor" <nate(a)bx.psu.edu> wrote:
>
>> On Jan 25, 2012, at 5:28 AM, Praveen Baskaran wrote:
>>
>>> i have installed local instance of galaxy and have a small problem with
>>> samtools. when i try to convert sam file to bam, the result pane shows
>>> that the job is running (for several hours). but when check the galaxy
>>> log file, it shows that the job is finished normally. i can also find
>>> the result (bam) file in galaxy/database/files/000/ folder. can anyone
>>> help to resolve this issue.
>>
>> Hi Praveen,
>>
>> Please change the following in your universe_wsgi.ini:
>>
>> set_metadata_externally = True
>>
>> And let us know if this makes a difference. Most likely, the tool output
>> is very large and the Galaxy process is setting metadata on it. With
>> metadata set externally, you'll be able to see this step as a separate
>> process.
>>
>> --nate
>>
>>>
>>> Thanks
>>> Praveen
>>>
>
>
How to delete user (and unshared files) in local instance by galaxy-admin (panel)
by Bossers, Alex
Hi All,
We are finally up and running again with the latest dist release. The previous version was OK but already quite old.
For our local public server at WUR we are now encountering the anticipated disk space issues, which had to come some day but arrived sooner than expected. So we have been cleaning up, using the cleanup scripts for files marked as deleted and such.
But two issues remain for galaxy-admin users:
1) How do we really delete a user and their unshared (or no-longer-shared) files from the admin panel?
2) Is there a way to get a user's disk space usage in the admin panel (or by some other method)? That way we can contact the user and push them to clean up files.
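For question 2, the numbers can be pulled from the database when the admin panel does not show them. The query below runs against a toy SQLite schema whose table and column names approximate Galaxy's (galaxy_user, history, history_dataset_association, dataset.file_size); these names are assumptions here, so verify them against your actual schema before running anything like this for real:

```shell
DB=/tmp/toy_usage.db
rm -f "$DB"
sqlite3 "$DB" <<'EOF'
-- Toy schema approximating Galaxy's tables; names are assumptions.
CREATE TABLE galaxy_user (id INTEGER PRIMARY KEY, email TEXT);
CREATE TABLE history (id INTEGER PRIMARY KEY, user_id INTEGER);
CREATE TABLE history_dataset_association (id INTEGER PRIMARY KEY, history_id INTEGER, dataset_id INTEGER);
CREATE TABLE dataset (id INTEGER PRIMARY KEY, file_size INTEGER, purged BOOLEAN);
INSERT INTO galaxy_user VALUES (1,'alice@example.org'),(2,'bob@example.org');
INSERT INTO history VALUES (10,1),(11,2);
INSERT INTO history_dataset_association VALUES (100,10,1000),(101,11,1001);
INSERT INTO dataset VALUES (1000, 5000000, 0),(1001, 250, 0);
-- Per-user disk usage, heaviest users first:
SELECT u.email, SUM(d.file_size) AS bytes
  FROM galaxy_user u
  JOIN history h ON h.user_id = u.id
  JOIN history_dataset_association hda ON hda.history_id = h.id
  JOIN dataset d ON d.id = hda.dataset_id
 WHERE d.purged = 0
 GROUP BY u.email
 ORDER BY bytes DESC;
EOF
```

With the toy data this lists alice first; against a real instance the same shape of query identifies whom to contact about cleaning up.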
Thanks
Alex