galaxy-dev
November 2009: 26 participants, 233 discussions
Dear Galaxy Team,
I wanted to know if there is a schema available for Galaxy's DB.
Thanks in advance,
--
Sondos Seif El-Din
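For readers of the archive: Galaxy's model layer is built on SQLAlchemy, so one way to inspect a schema at runtime is SQLAlchemy's reflection API. A minimal, self-contained sketch against a throwaway in-memory SQLite table (the `history` table here is illustrative only, not Galaxy's actual model):

```python
# Sketch: reflect and print a database schema with SQLAlchemy.
# The "history" table is a stand-in, NOT Galaxy's real schema.
from sqlalchemy import create_engine, inspect, text

engine = create_engine("sqlite:///:memory:")
with engine.begin() as conn:
    conn.execute(text(
        "CREATE TABLE history (id INTEGER PRIMARY KEY, name TEXT, user_id INTEGER)"
    ))

# The Inspector walks whatever database the engine points at.
insp = inspect(engine)
for table in insp.get_table_names():
    cols = [c["name"] for c in insp.get_columns(table)]
    print(table, cols)
```

Pointing the engine URL at a real Galaxy database would list its actual tables and columns the same way.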
Hello,
A new version of FASTX-Toolkit is available for download at
http://cancan.cshl.edu/labmembers/gordon/fastx_toolkit/index.html
Improvements include:
* fastx_collapser can now re-collapse an already-collapsed FASTA file.
You can collapse + trim + collapse without losing counts or getting error messages.
* new tool: fastx_uncollapser
uncollapses a FASTA file (reversing the effect of fastx_collapser).
This tool can also uncollapse a tabular file with collapsed sequence identifiers.
In Galaxy these are split into two separate tools:
http://cancan.cshl.edu/labmembers/gordon/fastx_toolkit/galaxy.html#fasta_un…
http://cancan.cshl.edu/labmembers/gordon/fastx_toolkit/galaxy.html#seqid_un…
On the command line, it's the same program.
* new tool: fastq quality trimmer
trims a FASTQ file based on quality score cut-off.
http://cancan.cshl.edu/labmembers/gordon/fastx_toolkit/galaxy.html#fastq_qu…
(tool suggested by Matthew Baranski).
* fastx_trimmer can now trim N bases from the end of the sequence
(instead of having to specify absolute length).
In Galaxy, this is a separate tool:
http://cancan.cshl.edu/labmembers/gordon/fastx_toolkit/galaxy.html#fastx_tr…
On the command line, it's the same program.
(option suggested by Matthew Baranski).
* fastx_quality_stats supports a new format with per-nucleotide statistics
(so you can find the avg/min/max/median quality of each nucleotide in each cycle).
* Fixed several compilation errors and warnings on different platforms.
Comments are welcome,
-gordon.
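For readers unfamiliar with "collapsing": identical reads are merged into a single FASTA record whose identifier encodes a rank and a copy count (collapsed FASTA in the fastx style uses ids like `1-5` for five copies). A toy Python illustration of the concept and its round trip, not the fastx_collapser implementation:

```python
# Toy collapse/uncollapse: merge duplicate sequences, keep counts in the id.
from collections import Counter

def collapse(seqs):
    """Merge identical sequences; id is "<rank>-<count>", most frequent first."""
    counts = Counter(seqs)
    return [(f"{i}-{n}", seq)
            for i, (seq, n) in enumerate(counts.most_common(), start=1)]

def uncollapse(records):
    """Reverse collapse(): expand each record back to `count` copies."""
    out = []
    for ident, seq in records:
        count = int(ident.rsplit("-", 1)[1])
        out.extend([seq] * count)
    return out

reads = ["ACGT", "ACGT", "TTTT", "ACGT"]
collapsed = collapse(reads)
assert sorted(uncollapse(collapsed)) == sorted(reads)  # counts survive the round trip
```

Because the count lives in the identifier, a trim step in between only has to preserve ids for the second collapse to keep totals correct, which is the point of the fix above.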

23 Nov '09
details: http://www.bx.psu.edu/hg/galaxy/rev/82b63d47a0d8
changeset: 3117:82b63d47a0d8
user: Greg Von Kuster <greg(a)bx.psu.edu>
date: Fri Nov 20 15:35:30 2009 -0500
description:
Rename test files for taxonomy tools and set ftype for test's input datasets.
diffstat:
test-data/find_diag_hits.dat | 2 -
test-data/find_diag_hits.tabular | 2 +
test-data/poisson2test1.out | 5 --
test-data/poisson2test1.tabular | 5 ++
test-data/poisson2test1.txt | 5 --
test-data/poisson2test1_out.tabular | 5 ++
test-data/poisson2test2.out | 52 --------------------------
test-data/poisson2test2.tabular | 52 ++++++++++++++++++++++++++
test-data/poisson2test2.txt | 52 --------------------------
test-data/poisson2test2_out.tabular | 52 ++++++++++++++++++++++++++
test-data/t2t_report.dat | 15 -------
test-data/t2t_report.tabular | 15 +++++++
test-data/taxonomy2gi-input.tabular | 4 ++
test-data/taxonomy2gi-input.txt | 4 --
test-data/taxonomy2gi-output.tabular | 4 ++
test-data/taxonomy2gi-output.txt | 4 --
test-data/taxonomyGI.dat | 5 --
test-data/taxonomyGI.tabular | 5 ++
test-data/taxonomyGI.taxonomy | 5 ++
test-data/taxonomyGI.txt | 5 --
tools/taxonomy/example_dataset_missing_nodes.txt | 6 ---
tools/taxonomy/find_diag_hits.xml | 4 +-
tools/taxonomy/gi2taxonomy.xml | 4 +-
tools/taxonomy/poisson2test.xml | 8 ++--
tools/taxonomy/t2t_report.xml | 4 +-
25 files changed, 159 insertions(+), 165 deletions(-)
diffs (467 lines):
diff -r 436de91e08b0 -r 82b63d47a0d8 test-data/find_diag_hits.dat
--- a/test-data/find_diag_hits.dat Fri Nov 20 15:25:13 2009 -0500
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,2 +0,0 @@
-Primates 4 order
-Homo 2 genus
diff -r 436de91e08b0 -r 82b63d47a0d8 test-data/find_diag_hits.tabular
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/test-data/find_diag_hits.tabular Fri Nov 20 15:35:30 2009 -0500
@@ -0,0 +1,2 @@
+Primates 4 order
+Homo 2 genus
diff -r 436de91e08b0 -r 82b63d47a0d8 test-data/poisson2test1.out
--- a/test-data/poisson2test1.out Fri Nov 20 15:25:13 2009 -0500
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,5 +0,0 @@
-Acinetobacter 37 7 2.109 2.315 0.017468 0.010302 0.087342 0.051510
-Acyrthosiphon 70 21 1.549 1.612 0.060722 0.053481 0.303609 0.267406
-aedes 61 13 2.425 2.625 0.007645 0.004329 0.038223 0.021643
-Aeromonas 169 0 8.623 14.372 0.000000 0.000000 0.000000 0.000000
-anopheles 145 97 -3.217 -3.102 0.000647 0.000960 0.003234 0.004801
diff -r 436de91e08b0 -r 82b63d47a0d8 test-data/poisson2test1.tabular
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/test-data/poisson2test1.tabular Fri Nov 20 15:35:30 2009 -0500
@@ -0,0 +1,5 @@
+Acinetobacter 37 7
+Acyrthosiphon 70 21
+aedes 61 13
+Aeromonas 169 0
+anopheles 145 97
diff -r 436de91e08b0 -r 82b63d47a0d8 test-data/poisson2test1.txt
--- a/test-data/poisson2test1.txt Fri Nov 20 15:25:13 2009 -0500
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,5 +0,0 @@
-Acinetobacter 37 7
-Acyrthosiphon 70 21
-aedes 61 13
-Aeromonas 169 0
-anopheles 145 97
diff -r 436de91e08b0 -r 82b63d47a0d8 test-data/poisson2test1_out.tabular
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/test-data/poisson2test1_out.tabular Fri Nov 20 15:35:30 2009 -0500
@@ -0,0 +1,5 @@
+Acinetobacter 37 7 2.109 2.315 0.017468 0.010302 0.087342 0.051510
+Acyrthosiphon 70 21 1.549 1.612 0.060722 0.053481 0.303609 0.267406
+aedes 61 13 2.425 2.625 0.007645 0.004329 0.038223 0.021643
+Aeromonas 169 0 8.623 14.372 0.000000 0.000000 0.000000 0.000000
+anopheles 145 97 -3.217 -3.102 0.000647 0.000960 0.003234 0.004801
diff -r 436de91e08b0 -r 82b63d47a0d8 test-data/poisson2test2.out
--- a/test-data/poisson2test2.out Fri Nov 20 15:25:13 2009 -0500
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,52 +0,0 @@
-Acyrthosiphon pisum 55 54 -4.303 -4.049 0.000008 0.000026 0.000438 0.001340
-Aedes aegypti 246 72 3.064 3.198 0.001093 0.000693 0.056838 0.036029
-Anopheles gambiae 337 168 -1.323 -1.307 0.092930 0.095535 1.000000 1.000000
-Apis mellifera 33 11 0.800 0.823 0.211855 0.205213 1.000000 1.000000
-Aspergillus niger 292 29 8.371 9.916 0.000000 0.000000 0.000000 0.000000
-Batrachochytrium dendrobatidis 17 14 -1.765 -1.678 0.038749 0.046690 1.000000 1.000000
-Bombyx mori 2 16 5.373 5.103 0.000000 0.000000 0.000002 0.000009
-Bos taurus 16 71 10.338 9.621 0.000000 0.000000 0.000000 0.000000
-Branchiostoma floridae 44 1 4.126 5.667 0.000018 0.000000 0.000959 0.000000
-Caenorhabditis briggsae 269 121 -0.202 -0.201 0.420141 0.420309 1.000000 1.000000
-Caenorhabditis remanei 35 29 -2.563 -2.435 0.005191 0.007450 0.269927 0.387398
-Chlamydomonas reinhardtii 85 13 3.716 4.183 0.000101 0.000014 0.005267 0.000747
-Citrus sinensis 14 2 1.568 1.780 0.058457 0.037576 1.000000 1.000000
-Culex pipiens 408 81 6.717 7.331 0.000000 0.000000 0.000000 0.000000
-Daphnia pulex 213 75 1.663 1.701 0.048160 0.044463 1.000000 1.000000
-Drosophila ananassae 3 20 5.872 5.539 0.000000 0.000000 0.000000 0.000001
-Drosophila grimshawi 14 10 -1.182 -1.134 0.118667 0.128417 1.000000 1.000000
-Drosophila pseudoobscura 28 22 -2.064 -1.967 0.019519 0.024570 1.000000 1.000000
-Drosophila willistoni 4 18 5.220 4.860 0.000000 0.000001 0.000005 0.000031
-Emiliania huxleyi 56 13 2.113 2.264 0.017321 0.011791 0.900675 0.613145
-Glycine max 4019 1831 -1.235 -1.231 0.108478 0.109251 1.000000 1.000000
-Helobdella robusta 33 1 3.496 4.684 0.000237 0.000001 0.012302 0.000073
-Homo sapiens 59 6 3.732 4.409 0.000095 0.000005 0.004933 0.000270
-Hyaloperonospora parasitica 48 10 2.201 2.389 0.013860 0.008448 0.720722 0.439307
-Hydra magnipapillata 9 65 10.697 10.120 0.000000 0.000000 0.000000 0.000000
-Medicago truncatula 62 42 -2.176 -2.096 0.014777 0.018033 0.768379 0.937696
-Mimulus guttatus 12 9 -1.224 -1.170 0.110516 0.120942 1.000000 1.000000
-Mus musculus 18 5 0.918 0.964 0.179337 0.167614 1.000000 1.000000
-Mycosphaerella fijiensis 42 7 2.472 2.755 0.006711 0.002933 0.348951 0.152533
-Nasonia vitripennis 10 12 2.443 2.277 0.007288 0.011379 0.378990 0.591708
-Nectria haematococca 67 2 4.987 6.692 0.000000 0.000000 0.000016 0.000000
-Oryza sativa 6068 2561 1.768 1.775 0.038558 0.037957 1.000000 1.000000
-Paramecium tetraurelia 749 296 1.565 1.582 0.058782 0.056837 1.000000 1.000000
-Pediculus humanus 49 40 -2.947 -2.802 0.001606 0.002538 0.083501 0.131991
-Phakopsora pachyrhizi 66 53 -3.311 -3.152 0.000464 0.000811 0.024152 0.042153
-Physcomitrella patens 304 36 7.993 9.276 0.000000 0.000000 0.000000 0.000000
-Phytophthora ramorum 174 56 2.044 2.111 0.020488 0.017390 1.000000 0.904295
-Phytophthora sojae 10 0 2.098 3.496 0.017969 0.000236 0.934412 0.012278
-Pinus taeda 0 26 7.687 8.498 0.000000 0.000000 0.000000 0.000000
-Populus balsamifera 43 4 3.281 3.916 0.000517 0.000045 0.026903 0.002339
-Pristionchus pacificus 4 14 4.349 4.025 0.000007 0.000028 0.000355 0.001481
-Rattus norvegicus 11 3 0.741 0.780 0.229238 0.217720 1.000000 1.000000
-Rhodnius prolixus 24 17 -1.516 -1.456 0.064729 0.072722 1.000000 1.000000
-Ricinus communis 75 14 3.036 3.338 0.001198 0.000422 0.062288 0.021926
-Schistosoma mansoni 80 15 3.124 3.433 0.000891 0.000298 0.046328 0.015504
-Schmidtea mediterranea 307 277 -8.853 -8.368 0.000000 0.000000 0.000000 0.000000
-Selaginella moellendorffii 27 10 0.466 0.474 0.320629 0.317714 1.000000 1.000000
-Sorghum bicolor 306 72 4.857 5.197 0.000001 0.000000 0.000031 0.000005
-Strongylocentrotus purpuratus 182 34 4.727 5.196 0.000001 0.000000 0.000059 0.000005
-Trypanosoma cruzi 23 0 3.181 5.302 0.000733 0.000000 0.038134 0.000003
-Volvox carteri 64 2 4.854 6.487 0.000001 0.000000 0.000031 0.000000
-Zea mays 583 263 -0.336 -0.335 0.368487 0.368792 1.000000 1.000000
diff -r 436de91e08b0 -r 82b63d47a0d8 test-data/poisson2test2.tabular
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/test-data/poisson2test2.tabular Fri Nov 20 15:35:30 2009 -0500
@@ -0,0 +1,52 @@
+Acyrthosiphon pisum 55 54
+Aedes aegypti 246 72
+Anopheles gambiae 337 168
+Apis mellifera 33 11
+Aspergillus niger 292 29
+Batrachochytrium dendrobatidis 17 14
+Bombyx mori 2 16
+Bos taurus 16 71
+Branchiostoma floridae 44 1
+Caenorhabditis briggsae 269 121
+Caenorhabditis remanei 35 29
+Chlamydomonas reinhardtii 85 13
+Citrus sinensis 14 2
+Culex pipiens 408 81
+Daphnia pulex 213 75
+Drosophila ananassae 3 20
+Drosophila grimshawi 14 10
+Drosophila pseudoobscura 28 22
+Drosophila willistoni 4 18
+Emiliania huxleyi 56 13
+Glycine max 4019 1831
+Helobdella robusta 33 1
+Homo sapiens 59 6
+Hyaloperonospora parasitica 48 10
+Hydra magnipapillata 9 65
+Medicago truncatula 62 42
+Mimulus guttatus 12 9
+Mus musculus 18 5
+Mycosphaerella fijiensis 42 7
+Nasonia vitripennis 10 12
+Nectria haematococca 67 2
+Oryza sativa 6068 2561
+Paramecium tetraurelia 749 296
+Pediculus humanus 49 40
+Phakopsora pachyrhizi 66 53
+Physcomitrella patens 304 36
+Phytophthora ramorum 174 56
+Phytophthora sojae 10 0
+Pinus taeda 0 26
+Populus balsamifera 43 4
+Pristionchus pacificus 4 14
+Rattus norvegicus 11 3
+Rhodnius prolixus 24 17
+Ricinus communis 75 14
+Schistosoma mansoni 80 15
+Schmidtea mediterranea 307 277
+Selaginella moellendorffii 27 10
+Sorghum bicolor 306 72
+Strongylocentrotus purpuratus 182 34
+Trypanosoma cruzi 23 0
+Volvox carteri 64 2
+Zea mays 583 263
diff -r 436de91e08b0 -r 82b63d47a0d8 test-data/poisson2test2.txt
--- a/test-data/poisson2test2.txt Fri Nov 20 15:25:13 2009 -0500
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,52 +0,0 @@
-Acyrthosiphon pisum 55 54
-Aedes aegypti 246 72
-Anopheles gambiae 337 168
-Apis mellifera 33 11
-Aspergillus niger 292 29
-Batrachochytrium dendrobatidis 17 14
-Bombyx mori 2 16
-Bos taurus 16 71
-Branchiostoma floridae 44 1
-Caenorhabditis briggsae 269 121
-Caenorhabditis remanei 35 29
-Chlamydomonas reinhardtii 85 13
-Citrus sinensis 14 2
-Culex pipiens 408 81
-Daphnia pulex 213 75
-Drosophila ananassae 3 20
-Drosophila grimshawi 14 10
-Drosophila pseudoobscura 28 22
-Drosophila willistoni 4 18
-Emiliania huxleyi 56 13
-Glycine max 4019 1831
-Helobdella robusta 33 1
-Homo sapiens 59 6
-Hyaloperonospora parasitica 48 10
-Hydra magnipapillata 9 65
-Medicago truncatula 62 42
-Mimulus guttatus 12 9
-Mus musculus 18 5
-Mycosphaerella fijiensis 42 7
-Nasonia vitripennis 10 12
-Nectria haematococca 67 2
-Oryza sativa 6068 2561
-Paramecium tetraurelia 749 296
-Pediculus humanus 49 40
-Phakopsora pachyrhizi 66 53
-Physcomitrella patens 304 36
-Phytophthora ramorum 174 56
-Phytophthora sojae 10 0
-Pinus taeda 0 26
-Populus balsamifera 43 4
-Pristionchus pacificus 4 14
-Rattus norvegicus 11 3
-Rhodnius prolixus 24 17
-Ricinus communis 75 14
-Schistosoma mansoni 80 15
-Schmidtea mediterranea 307 277
-Selaginella moellendorffii 27 10
-Sorghum bicolor 306 72
-Strongylocentrotus purpuratus 182 34
-Trypanosoma cruzi 23 0
-Volvox carteri 64 2
-Zea mays 583 263
diff -r 436de91e08b0 -r 82b63d47a0d8 test-data/poisson2test2_out.tabular
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/test-data/poisson2test2_out.tabular Fri Nov 20 15:35:30 2009 -0500
@@ -0,0 +1,52 @@
+Acyrthosiphon pisum 55 54 -4.303 -4.049 0.000008 0.000026 0.000438 0.001340
+Aedes aegypti 246 72 3.064 3.198 0.001093 0.000693 0.056838 0.036029
+Anopheles gambiae 337 168 -1.323 -1.307 0.092930 0.095535 1.000000 1.000000
+Apis mellifera 33 11 0.800 0.823 0.211855 0.205213 1.000000 1.000000
+Aspergillus niger 292 29 8.371 9.916 0.000000 0.000000 0.000000 0.000000
+Batrachochytrium dendrobatidis 17 14 -1.765 -1.678 0.038749 0.046690 1.000000 1.000000
+Bombyx mori 2 16 5.373 5.103 0.000000 0.000000 0.000002 0.000009
+Bos taurus 16 71 10.338 9.621 0.000000 0.000000 0.000000 0.000000
+Branchiostoma floridae 44 1 4.126 5.667 0.000018 0.000000 0.000959 0.000000
+Caenorhabditis briggsae 269 121 -0.202 -0.201 0.420141 0.420309 1.000000 1.000000
+Caenorhabditis remanei 35 29 -2.563 -2.435 0.005191 0.007450 0.269927 0.387398
+Chlamydomonas reinhardtii 85 13 3.716 4.183 0.000101 0.000014 0.005267 0.000747
+Citrus sinensis 14 2 1.568 1.780 0.058457 0.037576 1.000000 1.000000
+Culex pipiens 408 81 6.717 7.331 0.000000 0.000000 0.000000 0.000000
+Daphnia pulex 213 75 1.663 1.701 0.048160 0.044463 1.000000 1.000000
+Drosophila ananassae 3 20 5.872 5.539 0.000000 0.000000 0.000000 0.000001
+Drosophila grimshawi 14 10 -1.182 -1.134 0.118667 0.128417 1.000000 1.000000
+Drosophila pseudoobscura 28 22 -2.064 -1.967 0.019519 0.024570 1.000000 1.000000
+Drosophila willistoni 4 18 5.220 4.860 0.000000 0.000001 0.000005 0.000031
+Emiliania huxleyi 56 13 2.113 2.264 0.017321 0.011791 0.900675 0.613145
+Glycine max 4019 1831 -1.235 -1.231 0.108478 0.109251 1.000000 1.000000
+Helobdella robusta 33 1 3.496 4.684 0.000237 0.000001 0.012302 0.000073
+Homo sapiens 59 6 3.732 4.409 0.000095 0.000005 0.004933 0.000270
+Hyaloperonospora parasitica 48 10 2.201 2.389 0.013860 0.008448 0.720722 0.439307
+Hydra magnipapillata 9 65 10.697 10.120 0.000000 0.000000 0.000000 0.000000
+Medicago truncatula 62 42 -2.176 -2.096 0.014777 0.018033 0.768379 0.937696
+Mimulus guttatus 12 9 -1.224 -1.170 0.110516 0.120942 1.000000 1.000000
+Mus musculus 18 5 0.918 0.964 0.179337 0.167614 1.000000 1.000000
+Mycosphaerella fijiensis 42 7 2.472 2.755 0.006711 0.002933 0.348951 0.152533
+Nasonia vitripennis 10 12 2.443 2.277 0.007288 0.011379 0.378990 0.591708
+Nectria haematococca 67 2 4.987 6.692 0.000000 0.000000 0.000016 0.000000
+Oryza sativa 6068 2561 1.768 1.775 0.038558 0.037957 1.000000 1.000000
+Paramecium tetraurelia 749 296 1.565 1.582 0.058782 0.056837 1.000000 1.000000
+Pediculus humanus 49 40 -2.947 -2.802 0.001606 0.002538 0.083501 0.131991
+Phakopsora pachyrhizi 66 53 -3.311 -3.152 0.000464 0.000811 0.024152 0.042153
+Physcomitrella patens 304 36 7.993 9.276 0.000000 0.000000 0.000000 0.000000
+Phytophthora ramorum 174 56 2.044 2.111 0.020488 0.017390 1.000000 0.904295
+Phytophthora sojae 10 0 2.098 3.496 0.017969 0.000236 0.934412 0.012278
+Pinus taeda 0 26 7.687 8.498 0.000000 0.000000 0.000000 0.000000
+Populus balsamifera 43 4 3.281 3.916 0.000517 0.000045 0.026903 0.002339
+Pristionchus pacificus 4 14 4.349 4.025 0.000007 0.000028 0.000355 0.001481
+Rattus norvegicus 11 3 0.741 0.780 0.229238 0.217720 1.000000 1.000000
+Rhodnius prolixus 24 17 -1.516 -1.456 0.064729 0.072722 1.000000 1.000000
+Ricinus communis 75 14 3.036 3.338 0.001198 0.000422 0.062288 0.021926
+Schistosoma mansoni 80 15 3.124 3.433 0.000891 0.000298 0.046328 0.015504
+Schmidtea mediterranea 307 277 -8.853 -8.368 0.000000 0.000000 0.000000 0.000000
+Selaginella moellendorffii 27 10 0.466 0.474 0.320629 0.317714 1.000000 1.000000
+Sorghum bicolor 306 72 4.857 5.197 0.000001 0.000000 0.000031 0.000005
+Strongylocentrotus purpuratus 182 34 4.727 5.196 0.000001 0.000000 0.000059 0.000005
+Trypanosoma cruzi 23 0 3.181 5.302 0.000733 0.000000 0.038134 0.000003
+Volvox carteri 64 2 4.854 6.487 0.000001 0.000000 0.000031 0.000000
+Zea mays 583 263 -0.336 -0.335 0.368487 0.368792 1.000000 1.000000
diff -r 436de91e08b0 -r 82b63d47a0d8 test-data/t2t_report.dat
--- a/test-data/t2t_report.dat Fri Nov 20 15:25:13 2009 -0500
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,15 +0,0 @@
-root root 5
-superkingdom Eukaryota 5
-kingdom Metazoa 5
-phylum Chordata 5
-subphylum Craniata 5
-superclass Gnathostomata 5
-class Mammalia 5
-superorder Euarchontoglires 4
-order Primates 4
-suborder Haplorrhini 3
-superfamily Hominoidea 3
-family Hominidae 3
-genus Homo 2
-species Homo sapiens 2
-subspecies Homo sapiens neanderthalensis 1
diff -r 436de91e08b0 -r 82b63d47a0d8 test-data/t2t_report.tabular
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/test-data/t2t_report.tabular Fri Nov 20 15:35:30 2009 -0500
@@ -0,0 +1,15 @@
+root root 5
+superkingdom Eukaryota 5
+kingdom Metazoa 5
+phylum Chordata 5
+subphylum Craniata 5
+superclass Gnathostomata 5
+class Mammalia 5
+superorder Euarchontoglires 4
+order Primates 4
+suborder Haplorrhini 3
+superfamily Hominoidea 3
+family Hominidae 3
+genus Homo 2
+species Homo sapiens 2
+subspecies Homo sapiens neanderthalensis 1
diff -r 436de91e08b0 -r 82b63d47a0d8 test-data/taxonomy2gi-input.tabular
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/test-data/taxonomy2gi-input.tabular Fri Nov 20 15:35:30 2009 -0500
@@ -0,0 +1,4 @@
+2 9913
+4 9646
+15 9915
+16 9771
diff -r 436de91e08b0 -r 82b63d47a0d8 test-data/taxonomy2gi-input.txt
--- a/test-data/taxonomy2gi-input.txt Fri Nov 20 15:25:13 2009 -0500
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,4 +0,0 @@
-2 9913
-4 9646
-15 9915
-16 9771
diff -r 436de91e08b0 -r 82b63d47a0d8 test-data/taxonomy2gi-output.tabular
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/test-data/taxonomy2gi-output.tabular Fri Nov 20 15:35:30 2009 -0500
@@ -0,0 +1,4 @@
+9913 9913 root Eukaryota Metazoa n n Chordata Craniata Gnathostomata Mammalia n Laurasiatheria n Ruminantia n Bovidae Bovinae n n Bos n Bos taurus n 2
+9646 9646 root Eukaryota Metazoa n n Chordata Craniata Gnathostomata Mammalia n Laurasiatheria Carnivora Caniformia n Ursidae n n n Ailuropoda n Ailuropoda melanoleuca n 4
+9915 9915 root Eukaryota Metazoa n n Chordata Craniata Gnathostomata Mammalia n Laurasiatheria n Ruminantia n Bovidae Bovinae n n Bos n Bos indicus n 15
+9771 9771 root Eukaryota Metazoa n n Chordata Craniata Gnathostomata Mammalia n Laurasiatheria Cetacea Mysticeti n Balaenopteridae n n n Balaenoptera n Balaenoptera musculus n 16
\ No newline at end of file
diff -r 436de91e08b0 -r 82b63d47a0d8 test-data/taxonomy2gi-output.txt
--- a/test-data/taxonomy2gi-output.txt Fri Nov 20 15:25:13 2009 -0500
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,4 +0,0 @@
-9913 9913 root Eukaryota Metazoa n n Chordata Craniata Gnathostomata Mammalia n Laurasiatheria n Ruminantia n Bovidae Bovinae n n Bos n Bos taurus n 2
-9646 9646 root Eukaryota Metazoa n n Chordata Craniata Gnathostomata Mammalia n Laurasiatheria Carnivora Caniformia n Ursidae n n n Ailuropoda n Ailuropoda melanoleuca n 4
-9915 9915 root Eukaryota Metazoa n n Chordata Craniata Gnathostomata Mammalia n Laurasiatheria n Ruminantia n Bovidae Bovinae n n Bos n Bos indicus n 15
-9771 9771 root Eukaryota Metazoa n n Chordata Craniata Gnathostomata Mammalia n Laurasiatheria Cetacea Mysticeti n Balaenopteridae n n n Balaenoptera n Balaenoptera musculus n 16
\ No newline at end of file
diff -r 436de91e08b0 -r 82b63d47a0d8 test-data/taxonomyGI.dat
--- a/test-data/taxonomyGI.dat Fri Nov 20 15:25:13 2009 -0500
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,5 +0,0 @@
-9606 9606 root Eukaryota Metazoa n n Chordata Craniata Gnathostomata Mammalia n Euarchontoglires Primates Haplorrhini Hominoidea Hominidae n n n Homo n Homo sapiens n 12583
-40674 40674 root Eukaryota Metazoa n n Chordata Craniata Gnathostomata Mammalia n n n n n n n n n n n n n 410771
-63221 63221 root Eukaryota Metazoa n n Chordata Craniata Gnathostomata Mammalia n Euarchontoglires Primates Haplorrhini Hominoidea Hominidae n n n Homo n Homo sapiens Homo sapiens neanderthalensis 2286205
-9604 9604 root Eukaryota Metazoa n n Chordata Craniata Gnathostomata Mammalia n Euarchontoglires Primates Haplorrhini Hominoidea Hominidae n n n n n n n 23236241
-9443 9443 root Eukaryota Metazoa n n Chordata Craniata Gnathostomata Mammalia n Euarchontoglires Primates n n n n n n n n n n 33001686
\ No newline at end of file
diff -r 436de91e08b0 -r 82b63d47a0d8 test-data/taxonomyGI.tabular
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/test-data/taxonomyGI.tabular Fri Nov 20 15:35:30 2009 -0500
@@ -0,0 +1,5 @@
+33001686 9443
+23236241 9604
+12583 9606
+410771 40674
+2286205 63221
diff -r 436de91e08b0 -r 82b63d47a0d8 test-data/taxonomyGI.taxonomy
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/test-data/taxonomyGI.taxonomy Fri Nov 20 15:35:30 2009 -0500
@@ -0,0 +1,5 @@
+9606 9606 root Eukaryota Metazoa n n Chordata Craniata Gnathostomata Mammalia n Euarchontoglires Primates Haplorrhini Hominoidea Hominidae n n n Homo n Homo sapiens n 12583
+40674 40674 root Eukaryota Metazoa n n Chordata Craniata Gnathostomata Mammalia n n n n n n n n n n n n n 410771
+63221 63221 root Eukaryota Metazoa n n Chordata Craniata Gnathostomata Mammalia n Euarchontoglires Primates Haplorrhini Hominoidea Hominidae n n n Homo n Homo sapiens Homo sapiens neanderthalensis 2286205
+9604 9604 root Eukaryota Metazoa n n Chordata Craniata Gnathostomata Mammalia n Euarchontoglires Primates Haplorrhini Hominoidea Hominidae n n n n n n n 23236241
+9443 9443 root Eukaryota Metazoa n n Chordata Craniata Gnathostomata Mammalia n Euarchontoglires Primates n n n n n n n n n n 33001686
\ No newline at end of file
diff -r 436de91e08b0 -r 82b63d47a0d8 test-data/taxonomyGI.txt
--- a/test-data/taxonomyGI.txt Fri Nov 20 15:25:13 2009 -0500
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,5 +0,0 @@
-33001686 9443
-23236241 9604
-12583 9606
-410771 40674
-2286205 63221
diff -r 436de91e08b0 -r 82b63d47a0d8 tools/taxonomy/example_dataset_missing_nodes.txt
--- a/tools/taxonomy/example_dataset_missing_nodes.txt Fri Nov 20 15:25:13 2009 -0500
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,6 +0,0 @@
-Species_1 1 root superkingdom1 kingdom1 subkingdom1 superphylum1 phylum1 subphylum1 superclass1 class1 subclass1 superorder1 order1 suborder1 superfamily1 family1 subfamily1 tribe1 subtribe1 genus1 subgenus1 species1 subspecies1
-Species_2 2 root superkingdom1 kingdom1 subkingdom1 superphylum1 phylum1 subphylum1 superclass1 class1 subclass1 superorder1 order1 suborder1 superfamily1 family1 subfamily1 tribe1 subtribe1 genus2 n species2 subspecies2
-Species_3 3 root superkingdom1 kingdom1 subkingdom1 superphylum1 n subphylum3 superclass3 class3 subclass3 superorder3 order3 suborder3 superfamily3 family3 subfamily3 tribe3 subtribe3 genus3 subgenus3 species3 subspecies3
-Species_4 4 root superkingdom1 kingdom1 subkingdom1 superphylum1 phylum4 subphylum4 superclass4 class4 subclass4 superorder4 order4 suborder4 superfamily4 family4 subfamily4 tribe4 subtribe4 genus4 subgenus4 species4 subspecies4
-Species_5 5 root superkingdom1 kingdom1 subkingdom1 superphylum1 n subphylum3 superclass3 class3 subclass3 superorder3 order3 suborder3 superfamily3 family3 subfamily3 tribe3 subtribe3 genus3 subgenus3 species5 subspecies5
-
diff -r 436de91e08b0 -r 82b63d47a0d8 tools/taxonomy/find_diag_hits.xml
--- a/tools/taxonomy/find_diag_hits.xml Fri Nov 20 15:25:13 2009 -0500
+++ b/tools/taxonomy/find_diag_hits.xml Fri Nov 20 15:35:30 2009 -0500
@@ -37,11 +37,11 @@
</outputs>
<tests>
<test>
- <param name="input1" value="taxonomyGI.dat" ftype="taxonomy"/>
+ <param name="input1" value="taxonomyGI.taxonomy" ftype="taxonomy"/>
<param name="id_col" value="1" />
<param name="rank_list" value="order,genus" />
<param name="out_format" value="counts" />
- <output name="out_file1" file="find_diag_hits.dat" ftype="tabular"/>
+ <output name="out_file1" file="find_diag_hits.tabular" />
</test>
</tests>
diff -r 436de91e08b0 -r 82b63d47a0d8 tools/taxonomy/gi2taxonomy.xml
--- a/tools/taxonomy/gi2taxonomy.xml Fri Nov 20 15:25:13 2009 -0500
+++ b/tools/taxonomy/gi2taxonomy.xml Fri Nov 20 15:35:30 2009 -0500
@@ -14,10 +14,10 @@
</requirements>
<tests>
<test>
- <param name="input" value="taxonomy2gi-input.txt"/>
+ <param name="input" ftype="tabular" value="taxonomy2gi-input.tabular"/>
<param name="giField" value="1"/>
<param name="idField" value="2"/>
- <output name="out_file1" file="taxonomy2gi-output.txt"/>
+ <output name="out_file1" file="taxonomy2gi-output.tabular"/>
</test>
</tests>
diff -r 436de91e08b0 -r 82b63d47a0d8 tools/taxonomy/poisson2test.xml
--- a/tools/taxonomy/poisson2test.xml Fri Nov 20 15:25:13 2009 -0500
+++ b/tools/taxonomy/poisson2test.xml Fri Nov 20 15:35:30 2009 -0500
@@ -16,20 +16,20 @@
</outputs>
<tests>
<test>
- <param name="input1" value="poisson2test1.txt"/>
+ <param name="input1" value="poisson2test1.tabular" ftype="tabular"/>
<param name="input2" value="2" />
<param name="input3" value="3" />
<param name="input4" value="0.44" />
<param name="input5" value="0" />
- <output name="output1" file="poisson2test1.out" />
+ <output name="output1" file="poisson2test1_out.tabular" />
</test>
<test>
- <param name="input1" value="poisson2test2.txt"/>
+ <param name="input1" value="poisson2test2.tabular" ftype="tabular"/>
<param name="input2" value="2" />
<param name="input3" value="3" />
<param name="input4" value="0.44" />
<param name="input5" value="0" />
- <output name="output1" file="poisson2test2.out" />
+ <output name="output1" file="poisson2test2_out.tabular" />
</test>
</tests>
<help>
diff -r 436de91e08b0 -r 82b63d47a0d8 tools/taxonomy/t2t_report.xml
--- a/tools/taxonomy/t2t_report.xml Fri Nov 20 15:25:13 2009 -0500
+++ b/tools/taxonomy/t2t_report.xml Fri Nov 20 15:35:30 2009 -0500
@@ -12,8 +12,8 @@
</requirements>
<tests>
<test>
- <param name="input" value="taxonomyGI.dat" ftype="taxonomy"/>
- <output name="out_file1" file="t2t_report.dat"/>
+ <param name="input" value="taxonomyGI.taxonomy" ftype="taxonomy"/>
+ <output name="out_file1" file="t2t_report.tabular"/>
</test>
</tests>

23 Nov '09
details: http://www.bx.psu.edu/hg/galaxy/rev/436de91e08b0
changeset: 3116:436de91e08b0
user: Kanwei Li <kanwei(a)gmail.com>
date: Fri Nov 20 15:25:13 2009 -0500
description:
Data from UCSC table browser now uses hgta_table for table name. Fixes #67
diffstat:
tools/data_source/ucsc_tablebrowser.xml | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diffs (12 lines):
diff -r e24c2590f8af -r 436de91e08b0 tools/data_source/ucsc_tablebrowser.xml
--- a/tools/data_source/ucsc_tablebrowser.xml Fri Nov 20 12:07:06 2009 -0500
+++ b/tools/data_source/ucsc_tablebrowser.xml Fri Nov 20 15:25:13 2009 -0500
@@ -19,7 +19,7 @@
<request_param galaxy_name="URL" remote_name="URL" missing="" />
<request_param galaxy_name="dbkey" remote_name="db" missing="?" />
<request_param galaxy_name="organism" remote_name="org" missing="unknown species" />
- <request_param galaxy_name="table" remote_name="hgta_track" missing="unknown table" />
+ <request_param galaxy_name="table" remote_name="hgta_table" missing="unknown table" />
<request_param galaxy_name="description" remote_name="hgta_regionType" missing="no description" />
<request_param galaxy_name="data_type" remote_name="hgta_outputType" missing="tabular" >
<data_type_translation>
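The fix swaps which remote parameter name the data source reads. The translation the `<request_param>` entries describe amounts to a rename-with-default; a sketch of that mapping using the names from the XML above (the function itself is illustrative, not Galaxy's data_source code):

```python
# Illustrative remote -> galaxy parameter translation, mirroring the
# <request_param> entries above. (galaxy_name, remote_name, missing_default)
REQUEST_PARAMS = [
    ("URL", "URL", ""),
    ("dbkey", "db", "?"),
    ("organism", "org", "unknown species"),
    ("table", "hgta_table", "unknown table"),  # was hgta_track before this fix
    ("description", "hgta_regionType", "no description"),
    ("data_type", "hgta_outputType", "tabular"),
]

def translate(remote):
    """Map remote parameter names to galaxy names, filling in defaults."""
    return {g: remote.get(r, missing) for g, r, missing in REQUEST_PARAMS}

# A UCSC response that sets hgta_table (not hgta_track) now maps correctly:
params = translate({"db": "hg18", "hgta_table": "knownGene"})
assert params["table"] == "knownGene"
assert params["organism"] == "unknown species"
```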

23 Nov '09
details: http://www.bx.psu.edu/hg/galaxy/rev/5c46679d0755
changeset: 3113:5c46679d0755
user: Greg Von Kuster <greg(a)bx.psu.edu>
date: Fri Nov 20 08:46:49 2009 -0500
description:
Eliminate the ability to delete/undelete/purge users until the grid helper can access config settings.
diffstat:
lib/galaxy/web/controllers/admin.py | 10 +++++-----
1 files changed, 5 insertions(+), 5 deletions(-)
diffs (23 lines):
diff -r b9b9fe0612b9 -r 5c46679d0755 lib/galaxy/web/controllers/admin.py
--- a/lib/galaxy/web/controllers/admin.py Thu Nov 19 18:55:35 2009 -0500
+++ b/lib/galaxy/web/controllers/admin.py Fri Nov 20 08:46:49 2009 -0500
@@ -91,14 +91,14 @@
grids.GridAction( "Create new user", dict( controller='admin', action='users', operation='create' ) )
]
operations = [
- grids.GridOperation( "Manage Roles & Groups", condition=( lambda item: not item.deleted ), allow_multiple=False )
+ grids.GridOperation( "Manage Roles & Groups", condition=( lambda item: not item.deleted ), allow_multiple=False ),
+ grids.GridOperation( "Reset Password", condition=( lambda item: not item.deleted ), allow_multiple=True, allow_popup=False )
]
#TODO: enhance to account for trans.app.config.allow_user_deletion here so that we can eliminate these operations if
# the setting is False
- operations.append( grids.GridOperation( "Reset Password", condition=( lambda item: not item.deleted ), allow_multiple=True, allow_popup=False ) )
- operations.append( grids.GridOperation( "Delete", condition=( lambda item: not item.deleted ), allow_multiple=True ) )
- operations.append( grids.GridOperation( "Undelete", condition=( lambda item: item.deleted and not item.purged ), allow_multiple=True ) )
- operations.append( grids.GridOperation( "Purge", condition=( lambda item: item.deleted and not item.purged ), allow_multiple=True ) )
+ #operations.append( grids.GridOperation( "Delete", condition=( lambda item: not item.deleted ), allow_multiple=True ) )
+ #operations.append( grids.GridOperation( "Undelete", condition=( lambda item: item.deleted and not item.purged ), allow_multiple=True ) )
+ #operations.append( grids.GridOperation( "Purge", condition=( lambda item: item.deleted and not item.purged ), allow_multiple=True ) )
standard_filters = [
grids.GridColumnFilter( "Active", args=dict( deleted=False ) ),
grids.GridColumnFilter( "Deleted", args=dict( deleted=True, purged=False ) ),
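The TODO in the diff above describes the eventual fix: append the Delete/Undelete/Purge operations only when the site config allows user deletion. A minimal standalone sketch of that approach (the `Config` and `GridOperation` stubs below are placeholders mirroring the names in the diff, not Galaxy's real classes):

```python
# Sketch: build the grid operations list conditionally on a config flag,
# as the TODO above suggests. Stub classes stand in for Galaxy's grids API.

class GridOperation:
    def __init__(self, label, condition=None, allow_multiple=True, allow_popup=True):
        self.label = label
        self.condition = condition
        self.allow_multiple = allow_multiple
        self.allow_popup = allow_popup

class Config:
    # stand-in for trans.app.config.allow_user_deletion
    allow_user_deletion = False

def build_operations(config):
    operations = [
        GridOperation("Manage Roles & Groups",
                      condition=lambda item: not item.deleted,
                      allow_multiple=False),
        GridOperation("Reset Password",
                      condition=lambda item: not item.deleted,
                      allow_multiple=True, allow_popup=False),
    ]
    # Only expose destructive operations when the deployment opts in.
    if config.allow_user_deletion:
        operations.append(GridOperation(
            "Delete", condition=lambda item: not item.deleted))
        operations.append(GridOperation(
            "Undelete", condition=lambda item: item.deleted and not item.purged))
        operations.append(GridOperation(
            "Purge", condition=lambda item: item.deleted and not item.purged))
    return operations
```

With the flag off, only the two safe operations remain, which is exactly the state this changeset leaves the admin grid in.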
23 Nov '09
details: http://www.bx.psu.edu/hg/galaxy/rev/b9b9fe0612b9
changeset: 3112:b9b9fe0612b9
user: Kanwei Li <kanwei(a)gmail.com>
date: Thu Nov 19 18:55:35 2009 -0500
description:
trackster: load block data incrementally, huge performance gains from drawing smaller tiles
diffstat:
lib/galaxy/visualization/tracks/data/interval_index.py | 4 +-
static/scripts/packed/galaxy.base.js | 2 +-
static/scripts/packed/trackster.js | 2 +-
static/scripts/trackster.js | 81 +++++++++----------
4 files changed, 42 insertions(+), 47 deletions(-)
diffs (233 lines):
diff -r b70e23d80a75 -r b9b9fe0612b9 lib/galaxy/visualization/tracks/data/interval_index.py
--- a/lib/galaxy/visualization/tracks/data/interval_index.py Thu Nov 19 12:56:24 2009 -0500
+++ b/lib/galaxy/visualization/tracks/data/interval_index.py Thu Nov 19 18:55:35 2009 -0500
@@ -18,11 +18,10 @@
index = Indexes( self.converted_dataset.file_name )
results = []
- uid = 0
for start, end, offset in index.find(chrom, start, end):
source.seek(offset)
feature = source.readline().split()
- payload = { 'uid': uid, 'start': start, 'end': end, 'name': feature[3] }
+ payload = { 'uid': offset, 'start': start, 'end': end, 'name': feature[3] }
try:
payload['strand'] = feature[5]
except IndexError:
@@ -44,6 +43,5 @@
pass
results.append(payload)
- uid += 1
return results
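The interval_index change above replaces a per-query counter with the index file offset as each feature's `uid`. The point is stability: when the client fetches adjacent or overlapping regions, the same feature must come back with the same uid, or the incremental slot assignment on the client breaks. A toy illustration (an in-memory dict stands in for Galaxy's `Indexes`; names are illustrative only):

```python
# Toy model of why a file offset beats a running counter as a uid:
# the offset identifies the feature itself, not its position in one
# particular result set, so overlapping queries agree on uids.

features = [(100, 200, "geneA"), (150, 300, "geneB"), (400, 500, "geneC")]
index = {offset: feat for offset, feat in enumerate(features)}

def query(low, high):
    # uid is the stable "offset", not a counter reset per query
    return [{"uid": off, "start": s, "end": e, "name": n}
            for off, (s, e, n) in index.items() if s < high and e > low]

first = {f["name"]: f["uid"] for f in query(0, 250)}
second = {f["name"]: f["uid"] for f in query(150, 600)}
# geneB appears in both result sets and keeps the same uid
```

With the old counter scheme, geneB would have been uid 1 in the first query and uid 0 in the second, confusing any client-side cache keyed by uid.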
diff -r b70e23d80a75 -r b9b9fe0612b9 static/scripts/packed/galaxy.base.js
--- a/static/scripts/packed/galaxy.base.js Thu Nov 19 12:56:24 2009 -0500
+++ b/static/scripts/packed/galaxy.base.js Thu Nov 19 18:55:35 2009 -0500
@@ -1,1 +1,1 @@
-$.fn.makeAbsolute=function(a){return this.each(function(){var b=$(this);var c=b.position();b.css({position:"absolute",marginLeft:0,marginTop:0,top:c.top,left:c.left,right:$(window).width()-(c.left+b.width())});if(a){b.remove().appendTo("body")}})};jQuery(document).ready(function(){jQuery("a[confirm]").click(function(){return confirm(jQuery(this).attr("confirm"))});jQuery("div[popupmenu]").each(function(){var c={};$(this).find("a").each(function(){var b=$(this).attr("confirm"),d=$(this).attr("href"),e=$(this).attr("target");c[$(this).text()]=function(){if(!b||confirm(b)){var g=window;if(e=="_parent"){g=window.parent}g.location=d}}});var a=$("#"+$(this).attr("popupmenu"));make_popupmenu(a,c);$(this).remove();a.show()})});function ensure_popup_helper(){if($("#popup-helper").length==0){$("<div id='popup-helper'/>").css({background:"white",opacity:0,zIndex:15000,position:"absolute",top:0,left:0,width:"100%",height:"100%"}).appendTo("body").hide()}}function make_popupmenu(d,c){ens
ure_popup_helper();var a=$(d);var b=$("<ul id='"+d.attr("id")+"-menu'></ul>");$.each(c,function(g,f){if(f){$("<li/>").html(g).click(f).appendTo(b)}else{$("<li class='head'/>").html(g).appendTo(b)}});var e=$("<div class='popmenu-wrapper'>");e.append(b).append("<div class='overlay-border'>").css("position","absolute").appendTo("body").hide();attach_popupmenu(d,e)}function attach_popupmenu(b,d){var a=function(){d.unbind().hide();$("#popup-helper").unbind("click.popupmenu").hide()};var c=function(g){var h=$(b).offset();$("#popup-helper").bind("click.popupmenu",a).show();d.click(a).css({left:0,top:-1000}).show();var f=g.pageX-d.width()/2;f=Math.min(f,$(document).scrollLeft()+$(window).width()-$(d).width()-20);f=Math.max(f,$(document).scrollLeft()+20);d.css({top:g.pageY-5,left:f});return false};$(b).click(c)};
\ No newline at end of file
+$.fn.makeAbsolute=function(a){return this.each(function(){var b=$(this);var c=b.position();b.css({position:"absolute",marginLeft:0,marginTop:0,top:c.top,left:c.left,right:$(window).width()-(c.left+b.width())});if(a){b.remove().appendTo("body")}})};jQuery(document).ready(function(){jQuery("a[confirm]").click(function(){return confirm(jQuery(this).attr("confirm"))});make_popup_menus()});function make_popup_menus(){jQuery("div[popupmenu]").each(function(){var c={};$(this).find("a").each(function(){var b=$(this).attr("confirm"),d=$(this).attr("href"),e=$(this).attr("target");c[$(this).text()]=function(){if(!b||confirm(b)){var g=window;if(e=="_parent"){g=window.parent}g.location=d}}});var a=$("#"+$(this).attr("popupmenu"));make_popupmenu(a,c);$(this).remove();a.show()})}function ensure_popup_helper(){if($("#popup-helper").length==0){$("<div id='popup-helper'/>").css({background:"white",opacity:0,zIndex:15000,position:"absolute",top:0,left:0,width:"100%",height:"100%"}).appendTo("
body").hide()}}function make_popupmenu(d,c){ensure_popup_helper();var a=$(d);var b=$("<ul id='"+d.attr("id")+"-menu'></ul>");$.each(c,function(g,f){if(f){$("<li/>").html(g).click(f).appendTo(b)}else{$("<li class='head'/>").html(g).appendTo(b)}});var e=$("<div class='popmenu-wrapper'>");e.append(b).append("<div class='overlay-border'>").css("position","absolute").appendTo("body").hide();attach_popupmenu(d,e)}function attach_popupmenu(b,d){var a=function(){d.unbind().hide();$("#popup-helper").unbind("click.popupmenu").hide()};var c=function(g){var h=$(b).offset();$("#popup-helper").bind("click.popupmenu",a).show();d.click(a).css({left:0,top:-1000}).show();var f=g.pageX-d.width()/2;f=Math.min(f,$(document).scrollLeft()+$(window).width()-$(d).width()-20);f=Math.max(f,$(document).scrollLeft()+20);d.css({top:g.pageY-5,left:f});return false};$(b).click(c)}var array_length=function(a){if(a.length){return a.length}var b=0;for(element in a){b++}return b};
\ No newline at end of file
diff -r b70e23d80a75 -r b9b9fe0612b9 static/scripts/packed/trackster.js
--- a/static/scripts/packed/trackster.js Thu Nov 19 12:56:24 2009 -0500
+++ b/static/scripts/packed/trackster.js Thu Nov 19 18:55:35 2009 -0500
@@ -1,1 +1,1 @@
-var DENSITY=1000,DATA_ERROR="There was an error in indexing this dataset.",DATA_NONE="No data for this chrom/contig.",DATA_PENDING="Currently indexing... please wait",DATA_LOADING="Loading data...",CACHED_TILES=10,CACHED_DATA=20,CONTEXT=$("<canvas></canvas>").get(0).getContext("2d"),RIGHT_STRAND,LEFT_STRAND;var right_img=new Image();right_img.src="../images/visualization/strand_right.png";right_img.onload=function(){RIGHT_STRAND=CONTEXT.createPattern(right_img,"repeat")};var left_img=new Image();left_img.src="../images/visualization/strand_left.png";left_img.onload=function(){LEFT_STRAND=CONTEXT.createPattern(left_img,"repeat")};var right_img_inv=new Image();right_img_inv.src="../images/visualization/strand_right_inv.png";right_img_inv.onload=function(){RIGHT_STRAND_INV=CONTEXT.createPattern(right_img_inv,"repeat")};var left_img_inv=new Image();left_img_inv.src="../images/visualization/strand_left_inv.png";left_img_inv.onload=function(){LEFT_STRAND_INV=CONTEXT.createPattern(
left_img_inv,"repeat")};function commatize(b){b+="";var a=/(\d+)(\d{3})/;while(a.test(b)){b=b.replace(a,"$1,$2")}return b}var Cache=function(a){this.num_elements=a;this.obj_cache={};this.key_ary=[]};$.extend(Cache.prototype,{get:function(b){var a=this.key_ary.indexOf(b);if(a!=-1){this.key_ary.splice(a,1);this.key_ary.push(b)}return this.obj_cache[b]},set:function(b,c){if(!this.obj_cache[b]){if(this.key_ary.length>=this.num_elements){var a=this.key_ary.shift();delete this.obj_cache[a]}this.key_ary.push(b)}this.obj_cache[b]=c;return c}});var View=function(b,a){this.chrom=b;this.tracks=[];this.max_low=0;this.max_high=a;this.center=(this.max_high-this.max_low)/2;this.span=this.max_high-this.max_low;this.zoom_factor=2;this.zoom_level=0};$.extend(View.prototype,{add_track:function(a){a.view=this;this.tracks.push(a);if(a.init){a.init()}},redraw:function(){var d=this.span/Math.pow(this.zoom_factor,this.zoom_level),b=this.center-(d/2),e=b+d;if(b<0){b=0;e=b+d}else{if(e>this.max_high){
e=this.max_high;b=e-d}}this.low=Math.floor(b);this.high=Math.ceil(e);this.center=Math.round(this.low+(this.high-this.low)/2);$("#overview-box").css({left:(this.low/this.span)*$("#overview-viewport").width(),width:Math.max(12,((this.high-this.low)/this.span)*$("#overview-viewport").width())}).show();$("#low").val(commatize(this.low));$("#high").val(commatize(this.high));for(var c=0,a=this.tracks.length;c<a;c++){this.tracks[c].draw()}$("#bottom-spacer").remove();$("#viewport").append('<div id="bottom-spacer" style="height: 200px;"></div>')},zoom_in:function(a){if(this.max_high===0||this.high-this.low<30){return}if(a){this.center=a/$(document).width()*(this.high-this.low)+this.low}this.zoom_level+=1;this.redraw()},zoom_out:function(){if(this.max_high===0){return}if(this.zoom_level<=0){this.zoom_level=0;return}this.zoom_level-=1;this.redraw()}});var Track=function(a,b){this.name=a;this.parent_element=b;this.make_container()};$.extend(Track.prototype,{make_container:function(){th
is.header_div=$("<div class='track-header'>").text(this.name);this.content_div=$("<div class='track-content'>");this.container_div=$("<div class='track'></div>").append(this.header_div).append(this.content_div);this.parent_element.append(this.container_div)}});var TiledTrack=function(){this.tile_cache=new Cache(CACHED_TILES)};$.extend(TiledTrack.prototype,Track.prototype,{draw:function(){var h=this.view.low,d=this.view.high,e=d-h;var c=Math.pow(10,Math.ceil(Math.log(e/DENSITY)/Math.log(10)));c=Math.max(c,0.1);c=Math.min(c,1000000);var j=$("<div style='position: relative;'></div>");this.content_div.children(":first").remove();this.content_div.append(j);var k=this.content_div.width()/e;var g;var a=Math.floor(h/c/DENSITY);while((a*DENSITY*c)<d){var i=this.content_div.width()+"_"+this.view.zoom_level+"_"+a;var b=this.tile_cache.get(i);if(b){var f=a*DENSITY*c;b.css({left:(f-this.view.low)*k});j.append(b)}else{g=this.draw_tile(c,a,j,k);if(g){this.tile_cache.set(i,g)}}a+=1}}});var
LabelTrack=function(a){Track.call(this,null,a);this.container_div.addClass("label-track")};$.extend(LabelTrack.prototype,Track.prototype,{draw:function(){var c=this.view,d=c.high-c.low,g=Math.floor(Math.pow(10,Math.floor(Math.log(d)/Math.log(10)))),a=Math.floor(c.low/g)*g,e=this.content_div.width(),b=$("<div style='position: relative; height: 1.3em;'></div>");while(a<c.high){var f=(a-c.low)/d*e;b.append($("<div class='label'>"+commatize(a)+"</div>").css({position:"absolute",left:f-1}));a+=g}this.content_div.children(":first").remove();this.content_div.append(b)}});var LineTrack=function(c,b,a){Track.call(this,c,$("#viewport"));TiledTrack.call(this);this.track_type="line";this.height_px=(a?a:100);this.container_div.addClass("line-track");this.dataset_id=b;this.cache=new Cache(CACHED_DATA)};$.extend(LineTrack.prototype,TiledTrack.prototype,{init:function(){var a=this;a.content_div.text(DATA_LOADING);$.getJSON(data_url,{stats:true,track_type:a.track_type,chrom:a.view.chrom,low:
null,high:null,dataset_id:a.dataset_id},function(c){if(!c||c=="error"){a.container_div.addClass("error");a.content_div.text(DATA_ERROR)}else{if(c=="no data"){a.container_div.addClass("nodata");a.content_div.text(DATA_NONE)}else{if(c=="pending"){a.container_div.addClass("pending");a.content_div.text(DATA_PENDING);setTimeout(function(){a.init()},5000)}else{a.content_div.text("");a.content_div.css("height",a.height_px+"px");a.min_value=c.min;a.max_value=c.max;a.vertical_range=a.max_value-a.min_value;var d=$("<div class='yaxislabel'>"+a.min_value+"</div>");var b=$("<div class='yaxislabel'>"+a.max_value+"</div>");b.css({position:"relative",top:"35px"});b.prependTo(a.container_div);d.css({position:"relative",top:a.height_px+32+"px",});d.prependTo(a.container_div);a.draw()}}}})},get_data:function(d,b){var c=this,a=b*DENSITY*d,f=(b+1)*DENSITY*d,e=d+"_"+b;$.getJSON(data_url,{track_type:this.track_type,chrom:this.view.chrom,low:a,high:f,dataset_id:this.dataset_id},function(g){c.cache[
e]=g;$(document).trigger("redraw")})},draw_tile:function(d,a,m,o){if(!this.vertical_range){return}var h=a*DENSITY*d,b=DENSITY*d,c=$("<canvas class='tile'></canvas>"),l=d+"_"+a;if(!this.cache[l]){this.get_data(d,a);return}var g=this.cache[l];c.css({position:"absolute",top:0,left:(h-this.view.low)*o});c.get(0).width=Math.ceil(b*o);c.get(0).height=this.height_px;var n=c.get(0).getContext("2d");var e=false;n.beginPath();for(var f=0;f<g.length-1;f++){var k=g[f][0]-h;var j=g[f][1];if(isNaN(j)){e=false}else{k=k*o;j=(j-this.min_value)/this.vertical_range*this.height_px;if(e){n.lineTo(k,j)}else{n.moveTo(k,j);e=true}}}n.stroke();m.append(c);return c}});var FeatureTrack=function(c,b,a){Track.call(this,c,$("#viewport"));TiledTrack.call(this);this.track_type="feature";this.height_px=(a?a:100);this.container_div.addClass("feature-track");this.dataset_id=b;this.zo_slots={};this.show_labels_scale=0.001;this.showing_labels=false;this.vertical_gap=10;this.base_color="#2C3143"};$.extend(Featur
eTrack.prototype,TiledTrack.prototype,{init:function(){var a=this;a.content_div.text(DATA_LOADING);$.getJSON(data_url,{track_type:a.track_type,low:a.view.max_low,high:a.view.max_high,dataset_id:a.dataset_id,chrom:a.view.chrom},function(b){if(b=="error"){a.container_div.addClass("error");a.content_div.text(DATA_ERROR)}else{if(b.length===0||b=="no data"){a.container_div.addClass("nodata");a.content_div.text(DATA_NONE)}else{if(b=="pending"){a.container_div.addClass("pending");a.content_div.text(DATA_PENDING);setTimeout(function(){a.init()},5000)}else{a.content_div.text("");a.content_div.css("height",a.height_px+"px");a.values=b;a.calc_slots();a.slots=a.zo_slots;a.draw()}}}})},calc_slots:function(o){var c=[],b=this.content_div.width()/(this.view.high-this.view.low),g=this.show_labels_scale,a=this.view.max_high,e=this.view.max_low;if(o){this.zi_slots={}}var m=$("<canvas></canvas>").get(0).getContext("2d");for(var f=0,h=this.values.length;f<h;f++){var k,l,n=this.values[f];if(o){k=
Math.floor((n.start-e)*g);k-=m.measureText(n.name).width;l=Math.ceil((n.end-e)*g)}else{k=Math.floor((n.start-e)*b);l=Math.ceil((n.end-e)*b)}var d=0;while(true){if(c[d]===undefined||c[d]<k){c[d]=l;if(o){this.zi_slots[n.name]=d}else{this.zo_slots[n.name]=d}break}d++}}this.height_px=c.length*this.vertical_gap+15;this.content_div.css("height",this.height_px+"px")},draw_tile:function(w,B,g,n){if(!this.values){return null}if(n>this.show_labels_scale&&!this.showing_labels){this.showing_labels=true;if(!this.zi_slots){this.calc_slots(true)}this.slots=this.zi_slots}else{if(n<=this.show_labels_scale&&this.showing_labels){this.showing_labels=false;this.slots=this.zo_slots}}var C=B*DENSITY*w,c=(B+1)*DENSITY*w,q=DENSITY*w;var u=Math.ceil(q*n),t=this.height_px,s=$("<canvas class='tile'></canvas>");s.css({position:"absolute",top:0,left:(C-this.view.low)*n});s.get(0).width=u;s.get(0).height=t;var v=s.get(0).getContext("2d");v.fillStyle=this.base_color;v.font="10px monospace";v.textAlign="rig
ht";var y=0;for(var z=0,A=this.values.length;z<A;z++){var f=this.values[z];if(f.start<=c&&f.end>=C){var e=Math.floor(Math.max(0,(f.start-C)*n)),h=Math.ceil(Math.min(u,(f.end-C)*n)),d=this.slots[f.name]*this.vertical_gap;var a,G,b=null,o=null;if(f.thick_start&&f.thick_end){b=Math.floor(Math.max(0,(f.thick_start-C)*n));o=Math.ceil(Math.min(u,(f.thick_end-C)*n))}if(!this.showing_labels){v.fillRect(e,d+5,h-e,1)}else{if(v.fillText){v.fillText(f.name,e-1,d+8)}var E=f.blocks;if(E){if(f.strand){if(f.strand=="+"){v.fillStyle=RIGHT_STRAND}else{if(f.strand=="-"){v.fillStyle=LEFT_STRAND}}v.fillRect(e,d,h-e,10);v.fillStyle=this.base_color}for(var x=0,F=E.length;x<F;x++){var m=E[x],l=Math.floor(Math.max(0,(m[0]-C)*n)),D=Math.ceil(Math.min(u,(m[1]-C)*n));a=5;G=3;v.fillRect(l,d+G,D-l,a);if(b&&(l<o||D>b)){a=9;G=1;var r=Math.max(l,b),p=Math.min(D,o);v.fillRect(r,d+G,p-r,a)}}}else{a=9;G=1;v.fillRect(e,d+G,h-e,a);if(f.strand){if(f.strand=="+"){v.fillStyle=RIGHT_STRAND_INV}else{if(f.strand=="-")
{v.fillStyle=LEFT_STRAND_INV}}v.fillRect(e,d,h-e,10);v.fillStyle=this.base_color}}}y++}}g.append(s);return s}});
\ No newline at end of file
+var DEBUG=false;var DENSITY=1000,FEATURE_LEVELS=100,DATA_ERROR="There was an error in indexing this dataset.",DATA_NONE="No data for this chrom/contig.",DATA_PENDING="Currently indexing... please wait",DATA_LOADING="Loading data...",CACHED_TILES_FEATURE=10,CACHED_TILES_LINE=30,CACHED_DATA=20,CONTEXT=$("<canvas></canvas>").get(0).getContext("2d"),RIGHT_STRAND,LEFT_STRAND;var right_img=new Image();right_img.src="../images/visualization/strand_right.png";right_img.onload=function(){RIGHT_STRAND=CONTEXT.createPattern(right_img,"repeat")};var left_img=new Image();left_img.src="../images/visualization/strand_left.png";left_img.onload=function(){LEFT_STRAND=CONTEXT.createPattern(left_img,"repeat")};var right_img_inv=new Image();right_img_inv.src="../images/visualization/strand_right_inv.png";right_img_inv.onload=function(){RIGHT_STRAND_INV=CONTEXT.createPattern(right_img_inv,"repeat")};var left_img_inv=new Image();left_img_inv.src="../images/visualization/strand_left_inv.png";left_
img_inv.onload=function(){LEFT_STRAND_INV=CONTEXT.createPattern(left_img_inv,"repeat")};function commatize(b){b+="";var a=/(\d+)(\d{3})/;while(a.test(b)){b=b.replace(a,"$1,$2")}return b}var Cache=function(a){this.num_elements=a;this.obj_cache={};this.key_ary=[]};$.extend(Cache.prototype,{get:function(b){var a=this.key_ary.indexOf(b);if(a!=-1){this.key_ary.splice(a,1);this.key_ary.push(b)}return this.obj_cache[b]},set:function(b,c){if(!this.obj_cache[b]){if(this.key_ary.length>=this.num_elements){var a=this.key_ary.shift();delete this.obj_cache[a]}this.key_ary.push(b)}this.obj_cache[b]=c;return c}});var View=function(b,a){this.chrom=b;this.tracks=[];this.max_low=0;this.max_high=a;this.center=(this.max_high-this.max_low)/2;this.span=this.max_high-this.max_low;this.zoom_factor=2;this.zoom_level=0};$.extend(View.prototype,{add_track:function(a){a.view=this;this.tracks.push(a);if(a.init){a.init()}},redraw:function(){var d=this.span/Math.pow(this.zoom_factor,this.zoom_level),b=thi
s.center-(d/2),e=b+d;if(b<0){b=0;e=b+d}else{if(e>this.max_high){e=this.max_high;b=e-d}}this.low=Math.floor(b);this.high=Math.ceil(e);this.center=Math.round(this.low+(this.high-this.low)/2);this.resolution=Math.pow(10,Math.ceil(Math.log((this.high-this.low)/DENSITY)/Math.LN10));this.zoom_res=Math.max(1,Math.ceil(Math.log(this.resolution,FEATURE_LEVELS)/Math.log(FEATURE_LEVELS)));$("#overview-box").css({left:(this.low/this.span)*$("#overview-viewport").width(),width:Math.max(12,((this.high-this.low)/this.span)*$("#overview-viewport").width())}).show();$("#low").val(commatize(this.low));$("#high").val(commatize(this.high));for(var c=0,a=this.tracks.length;c<a;c++){this.tracks[c].draw()}$("#bottom-spacer").remove();$("#viewport").append('<div id="bottom-spacer" style="height: 200px;"></div>')},zoom_in:function(a){if(this.max_high===0||this.high-this.low<30){return}if(a){this.center=a/$(document).width()*(this.high-this.low)+this.low}this.zoom_level+=1;this.redraw()},zoom_out:fun
ction(){if(this.max_high===0){return}if(this.zoom_level<=0){this.zoom_level=0;return}this.zoom_level-=1;this.redraw()}});var Track=function(a,b){this.name=a;this.parent_element=b;this.make_container()};$.extend(Track.prototype,{make_container:function(){this.header_div=$("<div class='track-header'>").text(this.name);this.content_div=$("<div class='track-content'>");this.container_div=$("<div class='track'></div>").append(this.header_div).append(this.content_div);this.parent_element.append(this.container_div)}});var TiledTrack=function(){};$.extend(TiledTrack.prototype,Track.prototype,{draw:function(){var i=this.view.low,e=this.view.high,f=e-i,d=this.view.resolution;if(DEBUG){$("#debug").text(d+" "+this.view.zoom_res)}var k=$("<div style='position: relative;'></div>");this.content_div.children(":first").remove();this.content_div.append(k);var l=this.content_div.width()/f;var h;var a=Math.floor(i/d/DENSITY);while((a*DENSITY*d)<e){var j=this.content_div.width()+"_"+this.view.zo
om_level+"_"+a;var c=this.tile_cache.get(j);if(c){var g=a*DENSITY*d;var b=(g-i)*l;if(this.left_offset){b-=this.left_offset}c.css({left:b});k.append(c)}else{h=this.draw_tile(d,a,k,l);if(h){this.tile_cache.set(j,h)}}a+=1}}});var LabelTrack=function(a){Track.call(this,null,a);this.container_div.addClass("label-track")};$.extend(LabelTrack.prototype,Track.prototype,{draw:function(){var c=this.view,d=c.high-c.low,g=Math.floor(Math.pow(10,Math.floor(Math.log(d)/Math.log(10)))),a=Math.floor(c.low/g)*g,e=this.content_div.width(),b=$("<div style='position: relative; height: 1.3em;'></div>");while(a<c.high){var f=(a-c.low)/d*e;b.append($("<div class='label'>"+commatize(a)+"</div>").css({position:"absolute",left:f-1}));a+=g}this.content_div.children(":first").remove();this.content_div.append(b)}});var LineTrack=function(c,b,a){this.tile_cache=new Cache(CACHED_TILES_LINE);Track.call(this,c,$("#viewport"));TiledTrack.call(this);this.track_type="line";this.height_px=(a?a:100);this.contain
er_div.addClass("line-track");this.dataset_id=b;this.data_queue={};this.cache=new Cache(CACHED_DATA)};$.extend(LineTrack.prototype,TiledTrack.prototype,{init:function(){var a=this;a.content_div.text(DATA_LOADING);$.getJSON(data_url,{stats:true,track_type:a.track_type,chrom:a.view.chrom,low:null,high:null,dataset_id:a.dataset_id},function(c){if(!c||c=="error"){a.container_div.addClass("error");a.content_div.text(DATA_ERROR)}else{if(c=="no data"){a.container_div.addClass("nodata");a.content_div.text(DATA_NONE)}else{if(c=="pending"){a.container_div.addClass("pending");a.content_div.text(DATA_PENDING);setTimeout(function(){a.init()},5000)}else{a.content_div.text("");a.content_div.css("height",a.height_px+"px");a.min_value=c.min;a.max_value=c.max;a.vertical_range=a.max_value-a.min_value;var d=$("<div class='yaxislabel'>"+a.min_value+"</div>");var b=$("<div class='yaxislabel'>"+a.max_value+"</div>");b.css({position:"relative",top:"35px"});b.prependTo(a.container_div);d.css({positi
on:"relative",top:a.height_px+32+"px"});d.prependTo(a.container_div);a.draw()}}}})},get_data:function(d,b){var c=this,a=b*DENSITY*d,f=(b+1)*DENSITY*d,e=d+"_"+b;if(!c.data_queue[e]){c.data_queue[e]=true;$.getJSON(data_url,{track_type:this.track_type,chrom:this.view.chrom,low:a,high:f,dataset_id:this.dataset_id,resolution:this.view.resolution},function(g){c.cache.set(e,g);delete c.data_queue[e];c.draw()})}},draw_tile:function(d,a,m,o){if(!this.vertical_range){return}var h=a*DENSITY*d,b=DENSITY*d,c=$("<canvas class='tile'></canvas>"),l=d+"_"+a;if(!this.cache.get(l)){this.get_data(d,a);return}var g=this.cache.get(l);c.css({position:"absolute",top:0,left:(h-this.view.low)*o});c.get(0).width=Math.ceil(b*o);c.get(0).height=this.height_px;var n=c.get(0).getContext("2d");var e=false;n.beginPath();for(var f=0;f<g.length-1;f++){var k=g[f][0]-h;var j=g[f][1];if(isNaN(j)){e=false}else{k=k*o;j=(j-this.min_value)/this.vertical_range*this.height_px;if(e){n.lineTo(k,j)}else{n.moveTo(k,j);e=t
rue}}}n.stroke();m.append(c);return c}});var FeatureTrack=function(c,b,a){this.tile_cache=new Cache(CACHED_TILES_FEATURE);Track.call(this,c,$("#viewport"));TiledTrack.call(this);this.track_type="feature";this.height_px=(a?a:100);this.container_div.addClass("feature-track");this.dataset_id=b;this.zo_slots={};this.show_labels_scale=0.001;this.showing_labels=false;this.vertical_gap=10;this.base_color="#2C3143";this.default_font="9px Monaco, Lucida Console, monospace";this.left_offset=200;this.inc_slots={};this.data_queue={};this.data_cache=new Cache(20)};$.extend(FeatureTrack.prototype,TiledTrack.prototype,{init:function(){var a=this;a.content_div.text(DATA_LOADING);$.getJSON(data_url,{track_type:a.track_type,low:a.view.max_low,high:a.view.max_high,dataset_id:a.dataset_id,chrom:a.view.chrom},function(b){if(b=="error"){a.container_div.addClass("error");a.content_div.text(DATA_ERROR)}else{if(b.length===0||b=="no data"){a.container_div.addClass("nodata");a.content_div.text(DATA_NO
NE)}else{if(b=="pending"){a.container_div.addClass("pending");a.content_div.text(DATA_PENDING);setTimeout(function(){a.init()},5000)}else{a.content_div.text("");a.content_div.css("height",a.height_px+"px");a.values=b;a.calc_slots();a.slots=a.zo_slots;a.draw()}}}})},get_data:function(a,d){var b=this,c=a+"_"+d;if(!b.data_queue[c]){b.data_queue[c]=true;$.getJSON(data_url,{track_type:b.track_type,chrom:b.view.chrom,low:a,high:d,dataset_id:b.dataset_id,include_blocks:true},function(e){b.data_cache.set(c,e);delete b.data_queue[c];b.draw()})}},calc_slots:function(){var c=[],b=this.content_div.width()/(this.view.high-this.view.low),a=this.view.max_high,e=this.view.max_low;for(var f=0,g=this.values.length;f<g;f++){var h,k,l=this.values[f];h=Math.floor((l.start-e)*b);k=Math.ceil((l.end-e)*b);var d=0;while(true){if(c[d]===undefined||c[d]<h){c[d]=k;this.zo_slots[l.uid]=d;break}d++}}this.height_px=c.length*this.vertical_gap+15;this.content_div.css("height",this.height_px+"px")},increment
al_slots:function(a,b){if(!this.inc_slots[a]){this.inc_slots[a]={};this.inc_slots[a].w_scale=DENSITY/Math.pow(FEATURE_LEVELS,a+1)}var k=this.inc_slots[a];var d=[],l=[],c=0,m=$("<canvas></canvas>").get(0).getContext("2d"),f=this.view.max_low;for(var g=0,h=b.length;g<h;g++){var n=b[g];if(k[n.uid]){c=Math.max(c,k[n.uid]);d[k[n.uid]]=Math.ceil((n.end-f)*k.w_scale)}else{l.push(n)}}for(var g=0,h=l.length;g<h;g++){var n=l[g];f_start=Math.floor((n.start-f)*k.w_scale);f_start-=m.measureText(n.name).width;f_end=Math.ceil((n.end-f)*k.w_scale);var e=0;while(true){if(d[e]===undefined||d[e]<f_start){d[e]=f_end;k[n.uid]=e;c=Math.max(c,e);break}e++}}return c},draw_tile:function(A,F,n,r){if(!this.values){return}var G=F*DENSITY*A,c=(F+1)*DENSITY*A,v=DENSITY*A;var K,u,h;if(r>this.show_labels_scale){if(!this.showing_labels){this.showing_labels=true}for(var B in this.data_cache.obj_cache){var p=B.split("_"),e=p[0],d=p[1];if(e<=G&&d>=c){K=this.data_cache.get(B);break}}if(!K){this.data_queue[[G,c]
]=true;this.get_data(G,c);return}h=this.incremental_slots(this.view.zoom_res,K)*this.vertical_gap+15;u=this.inc_slots[this.view.zoom_res]}else{if(this.showing_labels){this.showing_labels=false}h=this.height_px;u=this.zo_slots;K=this.values}var y=Math.ceil(v*r),x=$("<canvas class='tile'></canvas>");x.css({position:"absolute",top:0,left:(G-this.view.low)*r-this.left_offset});x.get(0).width=y+this.left_offset;x.get(0).height=h;var z=x.get(0).getContext("2d");z.fillStyle=this.base_color;z.font=this.default_font;z.textAlign="right";var C=0;for(var D=0,E=K.length;D<E;D++){var m=K[D];if(m.start<=c&&m.end>=G){var g=Math.floor(Math.max(0,(m.start-G)*r)),l=Math.ceil(Math.min(y,(m.end-G)*r)),f=u[m.uid]*this.vertical_gap;var a,L,b=null,s=null;if(m.thick_start&&m.thick_end){b=Math.floor(Math.max(0,(m.thick_start-G)*r));s=Math.ceil(Math.min(y,(m.thick_end-G)*r))}if(!this.showing_labels){z.fillRect(g+this.left_offset,f+5,l-g,1)}else{if(z.fillText&&m.start>G){z.fillText(m.name,g-1+this.left
_offset,f+8)}var I=m.blocks;if(I){if(m.strand){if(m.strand=="+"){z.fillStyle=RIGHT_STRAND}else{if(m.strand=="-"){z.fillStyle=LEFT_STRAND}}z.fillRect(g+this.left_offset,f,l-g,10);z.fillStyle=this.base_color}for(var B=0,J=I.length;B<J;B++){var q=I[B],o=Math.floor(Math.max(0,(q[0]-G)*r)),H=Math.ceil(Math.min(y,(q[1]-G)*r));if(o>H){continue}a=5;L=3;z.fillRect(o+this.left_offset,f+L,H-o,a);if(b&&(o<s||H>b)){a=9;L=1;var w=Math.max(o,b),t=Math.min(H,s);z.fillRect(w+this.left_offset,f+L,t-w,a)}}}else{a=9;L=1;z.fillRect(g+this.left_offset,f+L,l-g,a);if(m.strand){if(m.strand=="+"){z.fillStyle=RIGHT_STRAND_INV}else{if(m.strand=="-"){z.fillStyle=LEFT_STRAND_INV}}z.fillRect(g+this.left_offset,f,l-g,10);z.fillStyle=this.base_color}}}C++}}n.append(x);return x}});
\ No newline at end of file
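The packed trackster.js above carries a small `Cache` object (`obj_cache` plus a `key_ary` recency list): fixed capacity, `get` refreshes recency, `set` evicts the oldest key when full. A compact Python rendering of the same LRU idea, using `OrderedDict` (class name `TileCache` is mine, not trackster's):

```python
from collections import OrderedDict

# LRU cache equivalent to trackster's Cache(num_elements):
# get() marks a key most-recently-used; set() evicts the
# least-recently-used entry once capacity is reached.

class TileCache:
    def __init__(self, num_elements):
        self.num_elements = num_elements
        self.cache = OrderedDict()

    def get(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)  # refresh recency
            return self.cache[key]
        return None

    def set(self, key, value):
        if key not in self.cache and len(self.cache) >= self.num_elements:
            self.cache.popitem(last=False)  # drop least recently used
        self.cache[key] = value
        return value
```

This is why the changeset can cache rendered tiles and data blocks separately (`CACHED_TILES_FEATURE`, `CACHED_TILES_LINE`, `CACHED_DATA`) without unbounded memory growth while panning.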
diff -r b70e23d80a75 -r b9b9fe0612b9 static/scripts/trackster.js
--- a/static/scripts/trackster.js Thu Nov 19 12:56:24 2009 -0500
+++ b/static/scripts/trackster.js Thu Nov 19 18:55:35 2009 -0500
@@ -1,7 +1,7 @@
/* Trackster
2009, James Taylor, Kanwei Li
*/
-var DEBUG = true;
+var DEBUG = false;
var DENSITY = 1000,
FEATURE_LEVELS = 100,
@@ -109,7 +109,7 @@
// 10^log10(range / DENSITY) Close approximation for browser window, assuming DENSITY = window width
this.resolution = Math.pow( 10, Math.ceil( Math.log( (this.high - this.low) / DENSITY ) / Math.LN10 ) );
- this.zoom_res = Math.max(1,Math.ceil( Math.log( (this.high - this.low) / FEATURE_LEVELS ) / Math.log(FEATURE_LEVELS) ));
+ this.zoom_res = Math.max(1,Math.ceil( Math.log( this.resolution, FEATURE_LEVELS ) / Math.log(FEATURE_LEVELS) ));
// Overview
$("#overview-box").css( {
@@ -404,7 +404,7 @@
});
},
get_data: function( low, high ) {
- console.log("getting: ", low, high);
+ // console.log("getting: ", low, high);
var track = this,
key = low + '_' + high;
@@ -420,39 +420,24 @@
});
}
},
- calc_slots: function( include_labels ) {
+ calc_slots: function() {
// console.log("num vals: " + this.values.length);
var end_ary = [],
scale = this.content_div.width() / (this.view.high - this.view.low),
- labels_scale = this.show_labels_scale,
max_high = this.view.max_high,
max_low = this.view.max_low;
// console.log(scale, this.view.high, this.view.low);
- if (include_labels) {
- this.zi_slots = {};
- }
- var dummy_canvas = $("<canvas></canvas>").get(0).getContext("2d");
for (var i = 0, len = this.values.length; i < len; i++) {
var f_start, f_end, feature = this.values[i];
- if (include_labels) {
- f_start = Math.floor( (feature.start - max_low) * labels_scale );
- f_start -= dummy_canvas.measureText(feature.name).width;
- f_end = Math.ceil( (feature.end - max_low) * labels_scale );
- } else {
- f_start = Math.floor( (feature.start - max_low) * scale );
- f_end = Math.ceil( (feature.end - max_low) * scale );
- }
+ f_start = Math.floor( (feature.start - max_low) * scale );
+ f_end = Math.ceil( (feature.end - max_low) * scale );
+
// if (include_labels) { console.log(f_start, f_end); }
-
var j = 0;
while (true) {
if (end_ary[j] === undefined || end_ary[j] < f_start) {
end_ary[j] = f_end;
- if (include_labels) {
- this.zi_slots[feature.uid] = j;
- } else {
- this.zo_slots[feature.uid] = j;
- }
+ this.zo_slots[feature.uid] = j;
break;
}
j++;
@@ -464,20 +449,23 @@
incremental_slots: function( level, features ) {
if (!this.inc_slots[level]) {
this.inc_slots[level] = {};
- this.inc_slots[level].w_scale = 1000 / Math.pow(FEATURE_LEVELS, level);
+ this.inc_slots[level].w_scale = DENSITY / Math.pow(FEATURE_LEVELS, level+1);
+ // this.inc_slots[level].w_scale = 1000 / (this.view.high - this.view.low);
+
}
var slots = this.inc_slots[level];
- if (slots[uid]) {
- return slots[uid];
- }
+ // console.log(level, slots.w_scale, slots);
var end_ary = [],
undone = [],
- max_high = this.view.max_high,
+ highest_slot = 0, // To measure how big to draw canvas
+ dummy_canvas = $("<canvas></canvas>").get(0).getContext("2d"),
max_low = this.view.max_low;
for (var i = 0, len = features.length; i < len; i++) {
var feature = features[i];
+ // console.log(feature.name, feature.uid, slots[feature.uid]);
if (slots[feature.uid]) {
+ highest_slot = Math.max(highest_slot, slots[feature.uid]);
end_ary[ slots[feature.uid] ] = Math.ceil( (feature.end - max_low) * slots.w_scale );
} else {
undone.push(feature);
@@ -488,6 +476,19 @@
f_start = Math.floor( (feature.start - max_low) * slots.w_scale );
f_start -= dummy_canvas.measureText(feature.name).width;
f_end = Math.ceil( (feature.end - max_low) * slots.w_scale );
+ // console.log(f_start, f_end, feature.name);
+ var j = 0;
+ while (true) {
+ if (end_ary[j] === undefined || end_ary[j] < f_start) {
+ end_ary[j] = f_end;
+ slots[feature.uid] = j;
+ highest_slot = Math.max(highest_slot, j);
+ break;
+ }
+ j++;
+ }
+ }
+ return highest_slot;
},
draw_tile: function( resolution, tile_index, parent_element, w_scale ) {
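The `incremental_slots` hunk above does greedy row ("slot") packing: each feature lands in the lowest row whose rightmost occupied pixel is left of the feature's label-padded start, and already-placed features keep their rows across calls. A plain-Python sketch of the same algorithm (a fixed per-character width stands in for the canvas `measureText` call, which is an assumption for illustration):

```python
# Greedy incremental row packing, as in trackster's incremental_slots:
# features already in `slots` keep their rows; new ones take the first
# row that is free to the left of their (label-padded) start.

LABEL_CHAR_PX = 6  # crude stand-in for dummy_canvas.measureText()

def pack_slots(features, w_scale, slots=None):
    """features: iterable of (uid, start, end, name). Returns (slots, highest_slot)."""
    slots = {} if slots is None else slots
    end_ary, undone, highest = [], [], 0
    for uid, start, end, name in features:
        if uid in slots:  # keep previously assigned row
            row = slots[uid]
            highest = max(highest, row)
            if row >= len(end_ary):
                end_ary.extend([None] * (row + 1 - len(end_ary)))
            end_ary[row] = end * w_scale
        else:
            undone.append((uid, start, end, name))
    for uid, start, end, name in undone:
        f_start = start * w_scale - len(name) * LABEL_CHAR_PX  # room for label
        f_end = end * w_scale
        row = 0
        while True:
            if row >= len(end_ary) or end_ary[row] is None or end_ary[row] < f_start:
                if row >= len(end_ary):
                    end_ary.extend([None] * (row + 1 - len(end_ary)))
                end_ary[row] = f_end
                slots[uid] = row
                highest = max(highest, row)
                break
            row += 1
    return slots, highest
```

Returning the highest occupied row is what lets `draw_tile` size each canvas to `required_height` instead of a fixed `height_px`, which is part of the performance win this changeset describes.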
@@ -499,14 +500,10 @@
tile_span = DENSITY * resolution;
// console.log("drawing " + tile_index);
// Once we zoom in enough, show name labels
- var data;
+ var data, slots, required_height;
if (w_scale > this.show_labels_scale) {
if (!this.showing_labels) {
this.showing_labels = true;
- if (!this.zi_slots) {
- this.calc_slots(true);
- }
- this.slots = this.zi_slots;
}
for (var k in this.data_cache.obj_cache) {
var k_split = k.split("_"), k_low = k_split[0], k_high = k_split[1];
@@ -520,20 +517,20 @@
this.get_data(tile_low, tile_high);
return;
}
-
+ required_height = this.incremental_slots( this.view.zoom_res, data ) * this.vertical_gap + 15;
+ // console.log(required_height);
+ slots = this.inc_slots[this.view.zoom_res];
} else {
if (this.showing_labels) {
this.showing_labels = false;
- this.slots = this.zo_slots;
}
+ required_height = this.height_px;
+ slots = this.zo_slots;
data = this.values;
}
- // console.log(this.slots);
-
-
+
// console.log(tile_low, tile_high, tile_length, w_scale);
var width = Math.ceil( tile_span * w_scale ),
- height = this.height_px,
new_canvas = $("<canvas class='tile'></canvas>");
new_canvas.css({
@@ -542,7 +539,7 @@
left: ( tile_low - this.view.low ) * w_scale - this.left_offset
});
new_canvas.get(0).width = width + this.left_offset;
- new_canvas.get(0).height = height;
+ new_canvas.get(0).height = required_height;
// console.log(( tile_low - this.view.low ) * w_scale, tile_index, w_scale);
var ctx = new_canvas.get(0).getContext("2d");
ctx.fillStyle = this.base_color;
@@ -555,7 +552,7 @@
if (feature.start <= tile_high && feature.end >= tile_low) {
var f_start = Math.floor( Math.max(0, (feature.start - tile_low) * w_scale) ),
f_end = Math.ceil( Math.min(width, (feature.end - tile_low) * w_scale) ),
- y_center = this.slots[feature.uid] * this.vertical_gap;
+ y_center = slots[feature.uid] * this.vertical_gap;
var thickness, y_start, thick_start = null, thick_end = null;
if (feature.thick_start && feature.thick_end) {
@@ -569,7 +566,7 @@
// Showing labels, blocks, details
if (ctx.fillText && feature.start > tile_low) {
ctx.fillText(feature.name, f_start - 1 + this.left_offset, y_center + 8);
- // ctx.fillText(commatize(feature.start), f_start - 1, y_center + 8);
+ // ctx.fillText(commatize(feature.start), f_start - 1 + this.left_offset, y_center + 8);
}
var blocks = feature.blocks;
if (blocks) {
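The slot-assignment loop in `incremental_slots` above is a greedy row packing: each feature goes into the first slot (row) whose rightmost occupied pixel ends before the feature's label-adjusted start, and the highest slot index used tells the caller how tall to make the canvas. A minimal Python sketch of the same idea (function and variable names are hypothetical, not from the Galaxy source):

```python
def assign_slots(features, w_scale, label_width=0):
    """Greedy row packing as in incremental_slots: place each feature in the
    first slot whose last occupied x-coordinate ends before the feature starts.
    features is a list of (uid, start, end) tuples in genomic coordinates."""
    end_ary = []       # rightmost pixel occupied in each slot
    slots = {}         # feature uid -> assigned slot index
    highest_slot = 0   # tallest slot used, for sizing the canvas
    for uid, start, end in features:
        # Reserve room to the left of the feature for its text label.
        f_start = start * w_scale - label_width
        f_end = end * w_scale
        j = 0
        # Advance past slots that are still occupied at f_start.
        while j < len(end_ary) and end_ary[j] >= f_start:
            j += 1
        if j == len(end_ary):
            end_ary.append(f_end)
        else:
            end_ary[j] = f_end
        slots[uid] = j
        highest_slot = max(highest_slot, j)
    return slots, highest_slot
```

In the real code the label width comes from a dummy canvas's `measureText`, and the computed slots are cached per zoom level in `inc_slots`.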
23 Nov '09
details: http://www.bx.psu.edu/hg/galaxy/rev/f4654abcec1e
changeset: 3114:f4654abcec1e
user: Greg Von Kuster <greg(a)bx.psu.edu>
date: Fri Nov 20 11:03:41 2009 -0500
description:
Move line-count processing from set_peek() methods to set_meta() methods for data types, since set_meta() can run externally and in some cases does line-by-line processing. Added new MetadataElements to the Data, Text, Sequence and Maf classes to handle setting dataset.blurb from stored metadata. Cleaned up a lot of code in the process.
diffstat:
lib/galaxy/datatypes/binary.py | 14 +-
lib/galaxy/datatypes/data.py | 55 ++--
lib/galaxy/datatypes/genetics.py | 4 +-
lib/galaxy/datatypes/images.py | 8 +-
lib/galaxy/datatypes/interval.py | 15 +-
lib/galaxy/datatypes/metadata.py | 5 +-
lib/galaxy/datatypes/qualityscore.py | 54 +-----
lib/galaxy/datatypes/sequence.py | 148 ++++++++++----
lib/galaxy/datatypes/tabular.py | 93 ++++----
lib/galaxy/datatypes/xml.py | 4 +-
lib/galaxy/jobs/__init__.py | 4 +-
lib/galaxy/jobs/runners/local.py | 5 +-
lib/galaxy/model/__init__.py | 8 +-
lib/galaxy/model/migrate/versions/0005_cleanup_datasets_fix.py | 8 +-
test/functional/test_get_data.py | 6 +-
test/functional/test_history_functions.py | 2 +-
16 files changed, 220 insertions(+), 213 deletions(-)
diffs (965 lines):
diff -r 5c46679d0755 -r f4654abcec1e lib/galaxy/datatypes/binary.py
--- a/lib/galaxy/datatypes/binary.py Fri Nov 20 08:46:49 2009 -0500
+++ b/lib/galaxy/datatypes/binary.py Fri Nov 20 11:03:41 2009 -0500
@@ -18,7 +18,7 @@
class Binary( data.Data ):
"""Binary data"""
- def set_peek( self, dataset ):
+ def set_peek( self, dataset, is_multi_byte=False ):
"""Set the peek and blurb text"""
if not dataset.dataset.purged:
dataset.peek = 'binary data'
@@ -30,7 +30,7 @@
class Ab1( Binary ):
"""Class describing an ab1 binary sequence file"""
file_ext = "ab1"
- def set_peek( self, dataset ):
+ def set_peek( self, dataset, is_multi_byte=False ):
if not dataset.dataset.purged:
export_url = "/history_add_to?" + urlencode( {'history_id':dataset.history_id,'ext':'ab1','name':'ab1 sequence','info':'Sequence file','dbkey':dataset.dbkey} )
dataset.peek = "Binary ab1 sequence file"
@@ -71,7 +71,7 @@
if os.path.exists( tmpf1bai ):
os.remove( tmpf1bai )
dataset.metadata.bam_index = index_file
- def set_peek( self, dataset ):
+ def set_peek( self, dataset, is_multi_byte=False ):
if not dataset.dataset.purged:
export_url = "/history_add_to?" + urlencode( {'history_id':dataset.history_id,'ext':'bam','name':'bam alignments','info':'Alignments file','dbkey':dataset.dbkey} )
dataset.peek = "Binary bam alignments file"
@@ -91,7 +91,7 @@
class Binseq( Binary ):
"""Class describing a zip archive of binary sequence files"""
file_ext = "binseq.zip"
- def set_peek( self, dataset ):
+ def set_peek( self, dataset, is_multi_byte=False ):
if not dataset.dataset.purged:
zip_file = zipfile.ZipFile( dataset.file_name, "r" )
num_files = len( zip_file.namelist() )
@@ -112,7 +112,7 @@
class Scf( Binary ):
"""Class describing an scf binary sequence file"""
file_ext = "scf"
- def set_peek( self, dataset ):
+ def set_peek( self, dataset, is_multi_byte=False ):
if not dataset.dataset.purged:
export_url = "/history_add_to?" + urlencode({'history_id':dataset.history_id,'ext':'scf','name':'scf sequence','info':'Sequence file','dbkey':dataset.dbkey})
dataset.peek = "Binary scf sequence file"
@@ -139,9 +139,9 @@
if binascii.b2a_hex( header ) == binascii.hexlify( '.sff' ):
return True
return False
- except Exception, e:
+ except:
return False
- def set_peek( self, dataset ):
+ def set_peek( self, dataset, is_multi_byte=False ):
if not dataset.dataset.purged:
export_url = "/history_add_to?" + urlencode( {'history_id':dataset.history_id,'ext':'sff','name':'sff file','info':'sff file','dbkey':dataset.dbkey} )
dataset.peek = "Binary sff file"
diff -r 5c46679d0755 -r f4654abcec1e lib/galaxy/datatypes/data.py
--- a/lib/galaxy/datatypes/data.py Fri Nov 20 08:46:49 2009 -0500
+++ b/lib/galaxy/datatypes/data.py Fri Nov 20 11:03:41 2009 -0500
@@ -95,9 +95,6 @@
def set_meta( self, dataset, overwrite = True, **kwd ):
"""Unimplemented method, allows guessing of metadata from contents of file"""
return True
- def set_readonly_meta( self, dataset ):
- """Unimplemented method, resets the readonly metadata values"""
- return True
def missing_meta( self, dataset, check = [], skip = [] ):
"""
Checks for empty metadata values, Returns True if non-optional metadata is missing
@@ -114,7 +111,7 @@
if not value:
return True
return False
- def set_peek( self, dataset ):
+ def set_peek( self, dataset, is_multi_byte=False ):
"""Set the peek and blurb text"""
if not dataset.dataset.purged:
dataset.peek = ''
@@ -312,6 +309,11 @@
return False
class Text( Data ):
+ file_ext = 'txt'
+
+ """Add metadata elements"""
+ MetadataElement( name="data_lines", default=0, desc="Number of data lines", readonly=True, optional=True, visible=False, no_value=0 )
+
def write_from_stream(self, dataset, stream):
"""Writes data from a stream"""
# write it twice for now
@@ -322,7 +324,6 @@
break
os.write(fd, chunk)
os.close(fd)
-
# rewrite the file with unix newlines
fp = open(dataset.file_name, 'wt')
for line in file(temp_name, "U"):
@@ -344,23 +345,29 @@
def get_mime(self):
"""Returns the mime type of the datatype"""
return 'text/plain'
- def set_peek( self, dataset, line_count=None ):
+ def set_meta( self, dataset, **kwd ):
+ """
+ Set the number of lines of data in dataset,
+ skipping all blank lines and comments.
+ """
+ data_lines = 0
+ for line in file( dataset.file_name ):
+ line = line.strip()
+ if line and not line.startswith( '#' ):
+ data_lines += 1
+ dataset.metadata.data_lines = data_lines
+ def set_peek( self, dataset, line_count=None, is_multi_byte=False ):
if not dataset.dataset.purged:
# The file must exist on disk for the get_file_peek() method
- dataset.peek = get_file_peek( dataset.file_name )
+ dataset.peek = get_file_peek( dataset.file_name, is_multi_byte=is_multi_byte )
if line_count is None:
- dataset.blurb = "%s lines" % util.commaify( str( get_line_count( dataset.file_name ) ) )
- else:
- dataset.blurb = "%s lines" % util.commaify( str( line_count ) )
- else:
- dataset.peek = 'file does not exist'
- dataset.blurb = 'file purged from disk'
- def set_multi_byte_peek( self, dataset, line_count=None ):
- if not dataset.dataset.purged:
- # The file must exist on disk for the get_file_peek() method
- dataset.peek = get_file_peek( dataset.file_name, is_multi_byte=True )
- if line_count is None:
- dataset.blurb = "%s lines" % util.commaify( str( get_line_count( dataset.file_name ) ) )
+ # See if line_count is stored in the metadata
+ if dataset.metadata.data_lines:
+ dataset.blurb = "%s lines" % util.commaify( str( dataset.metadata.data_lines ) )
+ else:
+ # Number of lines is not known ( this should not happen ), and auto-detect is
+ # needed to set metadata
+ dataset.blurb = "? lines"
else:
dataset.blurb = "%s lines" % util.commaify( str( line_count ) )
else:
@@ -370,7 +377,7 @@
class Txtseq( Data ):
"""Class describing a zip archive of text sequence files"""
file_ext = "txtseq.zip"
- def set_peek( self, dataset ):
+ def set_peek( self, dataset, is_multi_byte=False ):
if not dataset.dataset.purged:
zip_file = zipfile.ZipFile( dataset.file_name, "r" )
num_files = len( zip_file.namelist() )
@@ -459,11 +466,3 @@
else:
text = unicode( '\n'.join( lines ), 'utf-8' )
return text
-def get_line_count(file_name):
- """Returns the number of lines in a file that are neither null nor comments"""
- count = 0
- for line in file(file_name):
- line = line.strip()
- if line and line[0] != '#':
- count += 1
- return count
diff -r 5c46679d0755 -r f4654abcec1e lib/galaxy/datatypes/genetics.py
--- a/lib/galaxy/datatypes/genetics.py Fri Nov 20 08:46:49 2009 -0500
+++ b/lib/galaxy/datatypes/genetics.py Fri Nov 20 11:03:41 2009 -0500
@@ -358,7 +358,7 @@
"""
file_ext="snpmatrix"
- def set_peek( self, dataset ):
+ def set_peek( self, dataset, is_multi_byte=False ):
if not dataset.dataset.purged:
dataset.peek = "Binary RGenetics file"
dataset.blurb = data.nice_size( dataset.get_size() )
@@ -597,7 +597,7 @@
else:
p = []
return '\n'.join(p)
- def set_peek( self, dataset ):
+ def set_peek( self, dataset, is_multi_byte=False ):
"""
expects a .pheno file in the extra_files_dir - ugh
note that R is wierd and does not include the row.name in
diff -r 5c46679d0755 -r f4654abcec1e lib/galaxy/datatypes/images.py
--- a/lib/galaxy/datatypes/images.py Fri Nov 20 08:46:49 2009 -0500
+++ b/lib/galaxy/datatypes/images.py Fri Nov 20 11:03:41 2009 -0500
@@ -15,7 +15,7 @@
class Image( data.Data ):
"""Class describing an image"""
- def set_peek( self, dataset ):
+ def set_peek( self, dataset, is_multi_byte=False ):
if not dataset.dataset.purged:
dataset.peek = 'Image in %s format' % dataset.extension
dataset.blurb = data.nice_size( dataset.get_size() )
@@ -51,7 +51,7 @@
"""Class describing a GMAJ Applet"""
file_ext = "gmaj.zip"
copy_safe_peek = False
- def set_peek( self, dataset ):
+ def set_peek( self, dataset, is_multi_byte=False ):
if not dataset.dataset.purged:
if hasattr( dataset, 'history_id' ):
params = {
@@ -102,7 +102,7 @@
class Html( data.Text ):
"""Class describing an html file"""
file_ext = "html"
- def set_peek( self, dataset ):
+ def set_peek( self, dataset, is_multi_byte=False ):
if not dataset.dataset.purged:
dataset.peek = "HTML file"
dataset.blurb = data.nice_size( dataset.get_size() )
@@ -136,7 +136,7 @@
"""Class describing a LAJ Applet"""
file_ext = "laj"
copy_safe_peek = False
- def set_peek( self, dataset ):
+ def set_peek( self, dataset, is_multi_byte=False ):
if not dataset.dataset.purged:
if hasattr( dataset, 'history_id' ):
params = {
diff -r 5c46679d0755 -r f4654abcec1e lib/galaxy/datatypes/interval.py
--- a/lib/galaxy/datatypes/interval.py Fri Nov 20 08:46:49 2009 -0500
+++ b/lib/galaxy/datatypes/interval.py Fri Nov 20 11:03:41 2009 -0500
@@ -58,12 +58,18 @@
def init_meta( self, dataset, copy_from=None ):
Tabular.init_meta( self, dataset, copy_from=copy_from )
- def set_peek( self, dataset, line_count=None ):
+ def set_peek( self, dataset, line_count=None, is_multi_byte=False ):
"""Set the peek and blurb text"""
if not dataset.dataset.purged:
- dataset.peek = data.get_file_peek( dataset.file_name )
+ dataset.peek = data.get_file_peek( dataset.file_name, is_multi_byte=is_multi_byte )
if line_count is None:
- dataset.blurb = "%s regions" % util.commaify( str( data.get_line_count( dataset.file_name ) ) )
+ # See if line_count is stored in the metadata
+ if dataset.metadata.data_lines:
+ dataset.blurb = "%s regions" % util.commaify( str( dataset.metadata.data_lines ) )
+ else:
+ # Number of lines is not known ( this should not happen ), and auto-detect is
+ # needed to set metadata
+ dataset.blurb = "? regions"
else:
dataset.blurb = "%s regions" % util.commaify( str( line_count ) )
else:
@@ -884,9 +890,6 @@
"""Initialize interval datatype, by adding UCSC display app"""
Tabular.__init__(self, **kwd)
self.add_display_app ( 'ucsc', 'display at UCSC', 'as_ucsc_display_file', 'ucsc_links' )
- def set_readonly_meta( self, dataset, skip=1, **kwd ):
- """Resets the values of readonly metadata elements."""
- Tabular.set_readonly_meta( self, dataset, skip = skip, **kwd )
def set_meta( self, dataset, overwrite = True, **kwd ):
Tabular.set_meta( self, dataset, overwrite = overwrite, skip = 1 )
def display_peek( self, dataset ):
diff -r 5c46679d0755 -r f4654abcec1e lib/galaxy/datatypes/metadata.py
--- a/lib/galaxy/datatypes/metadata.py Fri Nov 20 08:46:49 2009 -0500
+++ b/lib/galaxy/datatypes/metadata.py Fri Nov 20 11:03:41 2009 -0500
@@ -89,7 +89,10 @@
return bool( self.parent._metadata.get( name, False ) )
def get_html_by_name( self, name, **kwd ):
if name in self.spec:
- return self.spec[name].param.get_html( value=getattr( self, name ), context=self, **kwd )
+ rval = self.spec[name].param.get_html( value=getattr( self, name ), context=self, **kwd )
+ if rval is None:
+ return self.spec[name].no_value
+ return rval
def make_dict_copy( self, to_copy ):
"""Makes a deep copy of input iterable to_copy according to self.spec"""
rval = {}
diff -r 5c46679d0755 -r f4654abcec1e lib/galaxy/datatypes/qualityscore.py
--- a/lib/galaxy/datatypes/qualityscore.py Fri Nov 20 08:46:49 2009 -0500
+++ b/lib/galaxy/datatypes/qualityscore.py Fri Nov 20 11:03:41 2009 -0500
@@ -14,23 +14,6 @@
until we know more about quality score formats
"""
file_ext = "qualsolid"
-
- def set_peek( self, dataset, line_count=None ):
- if not dataset.dataset.purged:
- dataset.peek = data.get_file_peek( dataset.file_name )
- if line_count is None:
- dataset.blurb = data.nice_size( dataset.get_size() )
- else:
- dataset.blurb = "%s lines, SOLiD Quality score file" % util.commaify( str( line_count ) )
- else:
- dataset.peek = 'file does not exist'
- dataset.blurb = 'file purged from disk'
-
- def display_peek(self, dataset):
- try:
- return dataset.peek
- except:
- return "SOLiD Quality score file (%s)" % ( data.nice_size( dataset.get_size() ) )
def sniff( self, filename ):
"""
@@ -70,6 +53,7 @@
return True
else:
break #we found a non-empty line, but it's not a header
+ fh.close()
except:
pass
return False
@@ -79,23 +63,6 @@
until we know more about quality score formats
"""
file_ext = "qual454"
-
- def set_peek( self, dataset, line_count=None ):
- if not dataset.dataset.purged:
- dataset.peek = data.get_file_peek( dataset.file_name )
- if line_count is None:
- dataset.blurb = data.nice_size( dataset.get_size() )
- else:
- dataset.blurb = "%s lines, 454 Quality score file" % util.commaify( str( line_count ) )
- else:
- dataset.peek = 'file does not exist'
- dataset.blurb = 'file purged from disk'
-
- def display_peek(self, dataset):
- try:
- return dataset.peek
- except:
- return "454 Quality score file (%s)" % ( data.nice_size( dataset.get_size() ) )
def sniff( self, filename ):
"""
@@ -125,6 +92,7 @@
return True
else:
break #we found a non-empty line, but it's not a header
+ fh.close()
except:
pass
return False
@@ -134,22 +102,4 @@
until we know more about quality score formats
"""
file_ext = "qualsolexa"
-
- def set_peek( self, dataset, line_count=None ):
- if not dataset.dataset.purged:
- dataset.peek = data.get_file_peek( dataset.file_name )
- if line_count is None:
- dataset.blurb = data.nice_size( dataset.get_size() )
- else:
- dataset.blurb = "%s lines, Solexa Quality score file" % util.commaify( str( line_count ) )
- else:
- dataset.peek = 'file does not exist'
- dataset.blurb = 'file purged from disk'
-
- def display_peek(self, dataset):
- try:
- return dataset.peek
- except:
- return "Solexa Quality score file (%s)" % ( data.nice_size( dataset.get_size() ) )
-
\ No newline at end of file
diff -r 5c46679d0755 -r f4654abcec1e lib/galaxy/datatypes/sequence.py
--- a/lib/galaxy/datatypes/sequence.py Fri Nov 20 08:46:49 2009 -0500
+++ b/lib/galaxy/datatypes/sequence.py Fri Nov 20 11:03:41 2009 -0500
@@ -17,11 +17,40 @@
class Sequence( data.Text ):
"""Class describing a sequence"""
- def set_readonly_meta( self, dataset ):
- """Resets the values of readonly metadata elements."""
- pass
-class Alignment( Sequence ):
+ """Add metadata elements"""
+ MetadataElement( name="sequences", default=0, desc="Number of sequences", readonly=True, visible=False, optional=True, no_value=0 )
+
+ def set_meta( self, dataset, **kwd ):
+ """
+ Set the number of sequences and the number of data lines in dataset.
+ """
+ data_lines = 0
+ sequences = 0
+ for line in file( dataset.file_name ):
+ line = line.strip()
+ if line and line.startswith( '#' ):
+ # We don't count comment lines for sequence data types
+ continue
+ if line and line.startswith( '>' ):
+ sequences += 1
+ data_lines +=1
+ else:
+ data_lines += 1
+ dataset.metadata.data_lines = data_lines
+ dataset.metadata.sequences = sequences
+ def set_peek( self, dataset, is_multi_byte=False ):
+ if not dataset.dataset.purged:
+ dataset.peek = data.get_file_peek( dataset.file_name, is_multi_byte=is_multi_byte )
+ if dataset.metadata.sequences:
+ dataset.blurb = "%s sequences" % util.commaify( str( dataset.metadata.sequences ) )
+ else:
+ dataset.blurb = data.nice_size( dataset.get_size() )
+ else:
+ dataset.peek = 'file does not exist'
+ dataset.blurb = 'file purged from disk'
+
+class Alignment( data.Text ):
"""Class describing an alignment"""
"""Add metadata elements"""
@@ -29,16 +58,9 @@
class Fasta( Sequence ):
"""Class representing a FASTA sequence"""
+
file_ext = "fasta"
- def set_peek( self, dataset ):
- if not dataset.dataset.purged:
- dataset.peek = data.get_file_peek( dataset.file_name )
- dataset.blurb = data.nice_size( dataset.get_size() )
- else:
- dataset.peek = 'file does not exist'
- dataset.blurb = 'file purged from disk'
-
def sniff( self, filename ):
"""
Determines whether the file is in fasta format
@@ -82,6 +104,7 @@
return True
else:
break #we found a non-empty line, but its not a fasta header
+ fh.close()
except:
pass
return False
@@ -89,14 +112,6 @@
class csFasta( Sequence ):
""" Class representing the SOLID Color-Space sequence ( csfasta ) """
file_ext = "csfasta"
-
- def set_peek( self, dataset ):
- if not dataset.dataset.purged:
- dataset.peek = data.get_file_peek( dataset.file_name )
- dataset.blurb = data.nice_size( dataset.get_size() )
- else:
- dataset.peek = 'file does not exist'
- dataset.blurb = 'file purged from disk'
def sniff( self, filename ):
"""
@@ -130,6 +145,7 @@
return True
else:
break #we found a non-empty line, but it's not a header
+ fh.close()
except:
pass
return False
@@ -137,15 +153,26 @@
class Fastq ( Sequence ):
"""Class representing a generic FASTQ sequence"""
file_ext = "fastq"
-
- def set_peek( self, dataset ):
- if not dataset.dataset.purged:
- dataset.peek = data.get_file_peek( dataset.file_name )
- dataset.blurb = data.nice_size( dataset.get_size() )
- else:
- dataset.peek = 'file does not exist'
- dataset.blurb = 'file purged from disk'
-
+
+ def set_meta( self, dataset, **kwd ):
+ """
+ Set the number of sequences and the number of data lines
+ in dataset.
+ """
+ data_lines = 0
+ sequences = 0
+ for line in file( dataset.file_name ):
+ line = line.strip()
+ if line and line.startswith( '#' ):
+ # We don't count comment lines for sequence data types
+ continue
+ if line and line.startswith( '@' ):
+ sequences += 1
+ data_lines +=1
+ else:
+ data_lines += 1
+ dataset.metadata.data_lines = data_lines
+ dataset.metadata.sequences = sequences
def sniff ( self, filename ):
"""
Determines whether the file is in generic fastq format
@@ -178,13 +205,13 @@
"""Class representing a FASTQ sequence ( the Sanger variant )"""
file_ext = "fastqsanger"
-
try:
from galaxy import eggs
import pkg_resources; pkg_resources.require( "bx-python" )
import bx.align.maf
except:
pass
+
#trying to import maf_utilities here throws an ImportError due to a circular import between jobs and tools:
#from galaxy.tools.util.maf_utilities import build_maf_index_species_chromosomes
#Traceback (most recent call last):
@@ -223,12 +250,15 @@
species = []
species_chromosomes = {}
indexes = bx.interval_index_file.Indexes()
+ blocks = 0
try:
maf_reader = bx.align.maf.Reader( open( filename ) )
while True:
pos = maf_reader.file.tell()
block = maf_reader.next()
- if block is None: break
+ if block is None:
+ break
+ blocks += 1
for c in block.components:
spec = c.src
chrom = None
@@ -255,29 +285,30 @@
except Exception, e:
#most likely a bad MAF
log.debug( 'Building MAF index on %s failed: %s' % ( filename, e ) )
- return ( None, [], {} )
- return ( indexes, species, species_chromosomes )
+ return ( None, [], {}, 0 )
+ return ( indexes, species, species_chromosomes, blocks )
class Maf( Alignment ):
"""Class describing a Maf alignment"""
file_ext = "maf"
#Readonly and optional, users can't unset it, but if it is not set, we are generally ok; if required use a metadata validator in the tool definition
+ MetadataElement( name="blocks", default=0, desc="Number of blocks", readonly=True, optional=True, visible=False, no_value=0 )
MetadataElement( name="species_chromosomes", desc="Species Chromosomes", param=metadata.FileParameter, readonly=True, no_value=None, visible=False, optional=True )
MetadataElement( name="maf_index", desc="MAF Index File", param=metadata.FileParameter, readonly=True, no_value=None, visible=False, optional=True )
def init_meta( self, dataset, copy_from=None ):
Alignment.init_meta( self, dataset, copy_from=copy_from )
-
def set_meta( self, dataset, overwrite = True, **kwd ):
"""
Parses and sets species, chromosomes, index from MAF file.
"""
#these metadata values are not accessable by users, always overwrite
- indexes, species, species_chromosomes = COPIED_build_maf_index_species_chromosomes( dataset.file_name )
- if indexes is None: return #this is not a MAF file
-
+ indexes, species, species_chromosomes, blocks = COPIED_build_maf_index_species_chromosomes( dataset.file_name )
+ if indexes is None:
+ return #this is not a MAF file
dataset.metadata.species = species
+ dataset.metadata.blocks = blocks
#only overwrite the contents if our newly determined chromosomes don't match stored
chrom_file = dataset.metadata.species_chromosomes
compare_chroms = {}
@@ -303,17 +334,27 @@
open( chrom_file.file_name, 'wb' ).write( tmp_file.read() )
dataset.metadata.species_chromosomes = chrom_file
tmp_file.close()
-
index_file = dataset.metadata.maf_index
if not index_file:
index_file = dataset.metadata.spec['maf_index'].param.new_file( dataset = dataset )
indexes.write( open( index_file.file_name, 'w' ) )
dataset.metadata.maf_index = index_file
-
+ def set_peek( self, dataset, is_multi_byte=False ):
+ if not dataset.dataset.purged:
+ # The file must exist on disk for the get_file_peek() method
+ dataset.peek = data.get_file_peek( dataset.file_name, is_multi_byte=is_multi_byte )
+ if dataset.metadata.blocks:
+ dataset.blurb = "%s blocks" % util.commaify( str( dataset.metadata.blocks ) )
+ else:
+ # Number of blocks is not known ( this should not happen ), and auto-detect is
+ # needed to set metadata
+ dataset.blurb = "? blocks"
+ else:
+ dataset.peek = 'file does not exist'
+ dataset.blurb = 'file purged from disk'
def display_peek( self, dataset ):
"""Returns formated html of peek"""
return self.make_html_table( dataset )
-
def make_html_table( self, dataset, skipchars=[] ):
"""Create HTML table, used for displaying peek"""
out = ['<table cellspacing="0" cellpadding="3">']
@@ -336,7 +377,6 @@
except Exception, exc:
out = "Can't create peek %s" % exc
return out
-
def sniff( self, filename ):
"""
Determines wether the file is in maf format
@@ -368,8 +408,13 @@
except:
return False
-class Axt( Sequence ):
+class Axt( data.Text ):
"""Class describing an axt alignment"""
+
+ # gvk- 11/19/09 - This is really an alignment, but we no longer have tools that use this data type, and it is
+ # here simply for backward compatibility ( although it is still in the datatypes registry ). Subclassing
+ # from data.Text eliminates managing metadata elements inherited from the Alignment class.
+
file_ext = "axt"
def sniff( self, filename ):
@@ -377,10 +422,16 @@
Determines whether the file is in axt format
axt alignment files are produced from Blastz, an alignment tool available from Webb Miller's lab
- at Penn State University. Each alignment block in an axt file contains three lines: a summary
- line and 2 sequence lines. Blocks are separated from one another by blank lines.
+ at Penn State University.
- The summary line contains chromosomal position and size information about the alignment. It consists of 9 required fields:
+ Each alignment block in an axt file contains three lines: a summary line and 2 sequence lines.
+ Blocks are separated from one another by blank lines.
+
+ The summary line contains chromosomal position and size information about the alignment. It
+ consists of 9 required fields.
+
+ The sequence lines contain the sequence of the primary assembly (line 2) and aligning assembly
+ (line 3) with inserts. Repeats are indicated by lower-case letters.
For complete details see http://genome.ucsc.edu/goldenPath/help/axt.html
@@ -409,10 +460,15 @@
else:
return True
-class Lav( Sequence ):
+class Lav( data.Text ):
"""Class describing a LAV alignment"""
+
file_ext = "lav"
+ # gvk- 11/19/09 - This is really an alignment, but we no longer have tools that use this data type, and it is
+ # here simply for backward compatibility ( although it is still in the datatypes registry ). Subclassing
+ # from data.Text eliminates managing metadata elements inherited from the Alignment class.
+
def sniff( self, filename ):
"""
Determines whether the file is in lav format
diff -r 5c46679d0755 -r f4654abcec1e lib/galaxy/datatypes/tabular.py
--- a/lib/galaxy/datatypes/tabular.py Fri Nov 20 08:46:49 2009 -0500
+++ b/lib/galaxy/datatypes/tabular.py Fri Nov 20 11:03:41 2009 -0500
@@ -19,14 +19,12 @@
"""Tab delimited data"""
"""Add metadata elements"""
+ MetadataElement( name="comment_lines", default=0, desc="Number of comment lines", readonly=False, optional=True, no_value=0 )
MetadataElement( name="columns", default=0, desc="Number of columns", readonly=True, visible=False, no_value=0 )
MetadataElement( name="column_types", default=[], desc="Column types", param=metadata.ColumnTypesParameter, readonly=True, visible=False, no_value=[] )
def init_meta( self, dataset, copy_from=None ):
data.Text.init_meta( self, dataset, copy_from=copy_from )
- def set_readonly_meta( self, dataset, skip=None, **kwd ):
- """Resets the values of readonly metadata elements."""
- Tabular.set_meta( self, dataset, overwrite = True, skip = skip )
def set_meta( self, dataset, overwrite = True, skip = None, **kwd ):
"""
Tries to determine the number of columns as well as those columns
@@ -35,15 +33,19 @@
their data type classes are responsible to determine how many invalid
comment lines should be skipped. Using None for skip will cause skip
to be zero, but the first line will be processed as a header.
+
+ Items of interest:
+ 1. We treat 'overwrite' as always True (we always want to set tabular metadata when called).
+ 2. If a tabular file has no data, it will have one column of type 'str'.
+ 3. We used to check only the first 100 lines when setting metadata and this class's
+ set_peek() method read the entire file to determine the number of lines in the file.
+ Since metadata can now be processed on cluster nodes, we've merged the line count portion
+ of the set_peek() processing here, and we now check the entire contents of the file.
"""
- #we treat 'overwrite' as always True (we always want to set tabular metadata when called)
- #if a tabular file has no data, it will have one column of type str
-
- num_check_lines = 100 #we will only check up to this many lines into the file
- requested_skip = skip #store original skip value to check with later
+ # Store original skip value to check with later
+ requested_skip = skip
if skip is None:
skip = 0
-
column_type_set_order = [ 'int', 'float', 'list', 'str' ] #Order to set column types in
default_column_type = column_type_set_order[-1] # Default column type is lowest in list
column_type_compare_order = list( column_type_set_order ) #Order to compare column types
@@ -89,49 +91,47 @@
if is_column_type[column_type]( column_text ):
return column_type
return None
-
+ data_lines = 0
+ comment_lines = 0
column_types = []
first_line_column_types = [default_column_type] # default value is one column of type str
if dataset.has_data():
#NOTE: if skip > num_check_lines, we won't detect any metadata, and will use default
for i, line in enumerate( file ( dataset.file_name ) ):
- line = line.rstrip('\r\n')
+ line = line.rstrip( '\r\n' )
if i < skip or not line or line.startswith( '#' ):
- continue
-
- fields = line.split( '\t' )
- for field_count, field in enumerate( fields ):
- if field_count >= len( column_types ): #found a previously unknown column, we append None
- column_types.append( None )
- column_type = guess_column_type( field )
- if type_overrules_type( column_type, column_types[field_count] ):
- column_types[field_count] = column_type
-
- if i == 0 and requested_skip is None:
- #this is our first line, people seem to like to upload files that have a header line, but do not start with '#' (i.e. all column types would then most likely be detected as str)
- #we will assume that the first line is always a header (this was previous behavior - it was always skipped) when the requested skip is None
- #we only use the data from the first line if we have no other data for a column
- #this is far from perfect, as:
- #1,2,3 1.1 2.2 qwerty
- #0 0 1,2,3
- #will detect as
- #"column_types": ["int", "int", "float", "list"]
- #instead of:
- #"column_types": ["list", "float", "float", "str"] *** would seem to be the 'Truth' by manual observation that the first line should be included as data
- #old method would have detected as:
- #"column_types": ["int", "int", "str", "list"]
- first_line_column_types = column_types
- column_types = [ None for col in first_line_column_types ]
- elif i > num_check_lines:
- # We exceeded our max check lines
- break
-
+ # We'll call blank lines comments
+ comment_lines += 1
+ else:
+ data_lines += 1
+ fields = line.split( '\t' )
+ for field_count, field in enumerate( fields ):
+ if field_count >= len( column_types ): #found a previously unknown column, we append None
+ column_types.append( None )
+ column_type = guess_column_type( field )
+ if type_overrules_type( column_type, column_types[field_count] ):
+ column_types[field_count] = column_type
+ if i == 0 and requested_skip is None:
+ # This is our first line, people seem to like to upload files that have a header line, but do not
+ # start with '#' (i.e. all column types would then most likely be detected as str). We will assume
+ # that the first line is always a header (this was previous behavior - it was always skipped). When
+ # the requested skip is None, we only use the data from the first line if we have no other data for
+ # a column. This is far from perfect, as
+ # 1,2,3 1.1 2.2 qwerty
+ # 0 0 1,2,3
+ # will be detected as
+ # "column_types": ["int", "int", "float", "list"]
+ # instead of
+ # "column_types": ["list", "float", "float", "str"] *** would seem to be the 'Truth' by manual
+ # observation that the first line should be included as data. The old method would have detected as
+ # "column_types": ["int", "int", "str", "list"]
+ first_line_column_types = column_types
+ column_types = [ None for col in first_line_column_types ]
#we error on the larger number of columns
#first we pad our column_types by using data from first line
if len( first_line_column_types ) > len( column_types ):
for column_type in first_line_column_types[len( column_types ):]:
column_types.append( column_type )
-
#Now we fill any unknown (None) column_types with data from first line
for i in range( len( column_types ) ):
if column_types[i] is None:
@@ -139,10 +139,11 @@
column_types[i] = default_column_type
else:
column_types[i] = first_line_column_types[i]
-
+ # Set the discovered metadata values for the dataset
+ dataset.metadata.data_lines = data_lines
+ dataset.metadata.comment_lines = comment_lines
dataset.metadata.column_types = column_types
dataset.metadata.columns = len( column_types )
-
def make_html_table( self, dataset, skipchars=[] ):
"""Create HTML table, used for displaying peek"""
out = ['<table cellspacing="0" cellpadding="3">']
@@ -202,6 +203,10 @@
out.append( '<tr><td>' )
out.append( '%s</td></tr>' % escape( comments.pop(0) ) )
return "".join( out )
+ def set_peek( self, dataset, line_count=None, is_multi_byte=False ):
+ data.Text.set_peek( self, dataset, line_count=line_count, is_multi_byte=is_multi_byte )
+ if dataset.metadata.comment_lines:
+ dataset.blurb = "%s, %s comments" % ( dataset.blurb, util.commaify( str( dataset.metadata.comment_lines ) ) )
def display_peek( self, dataset ):
"""Returns formatted html of peek"""
return self.make_html_table( dataset )
@@ -219,7 +224,6 @@
'Superorder', 'Order', 'Suborder', 'Superfamily', 'Family', 'Subfamily',
'Tribe', 'Subtribe', 'Genus', 'Subgenus', 'Species', 'Subspecies'
]
-
def make_html_table( self, dataset, skipchars=[] ):
"""Create HTML table, used for displaying peek"""
out = ['<table cellspacing="0" cellpadding="3">']
@@ -317,6 +321,7 @@
count += 1
if count == 5:
return True
+ fh.close()
if count < 5 and count > 0:
return True
except:
diff -r 5c46679d0755 -r f4654abcec1e lib/galaxy/datatypes/xml.py
--- a/lib/galaxy/datatypes/xml.py Fri Nov 20 08:46:49 2009 -0500
+++ b/lib/galaxy/datatypes/xml.py Fri Nov 20 11:03:41 2009 -0500
@@ -10,10 +10,10 @@
class BlastXml( data.Text ):
"""NCBI Blast XML Output data"""
file_ext = "blastxml"
- def set_peek( self, dataset ):
+ def set_peek( self, dataset, is_multi_byte=False ):
"""Set the peek and blurb text"""
if not dataset.dataset.purged:
- dataset.peek = data.get_file_peek( dataset.file_name )
+ dataset.peek = data.get_file_peek( dataset.file_name, is_multi_byte=is_multi_byte )
dataset.blurb = 'NCBI Blast XML data'
else:
dataset.peek = 'file does not exist'
diff -r 5c46679d0755 -r f4654abcec1e lib/galaxy/jobs/__init__.py
--- a/lib/galaxy/jobs/__init__.py Fri Nov 20 08:46:49 2009 -0500
+++ b/lib/galaxy/jobs/__init__.py Fri Nov 20 11:03:41 2009 -0500
@@ -549,12 +549,12 @@
try:
assert context.get( 'line_count', None ) is not None
if ( not dataset.datatype.composite_type and dataset.dataset.is_multi_byte() ) or self.tool.is_multi_byte:
- dataset.set_multi_byte_peek( line_count=context['line_count'] )
+ dataset.set_peek( line_count=context['line_count'], is_multi_byte=True )
else:
dataset.set_peek( line_count=context['line_count'] )
except:
if ( not dataset.datatype.composite_type and dataset.dataset.is_multi_byte() ) or self.tool.is_multi_byte:
- dataset.set_multi_byte_peek()
+ dataset.set_peek( is_multi_byte=True )
else:
dataset.set_peek()
try:
diff -r 5c46679d0755 -r f4654abcec1e lib/galaxy/jobs/runners/local.py
--- a/lib/galaxy/jobs/runners/local.py Fri Nov 20 08:46:49 2009 -0500
+++ b/lib/galaxy/jobs/runners/local.py Fri Nov 20 11:03:41 2009 -0500
@@ -101,10 +101,9 @@
job_wrapper.fail( "failure running job", exception=True )
log.exception("failure running job %d" % job_wrapper.job_id)
return
-
#run the metadata setting script here
- #this is terminatable when output dataset/job is deleted
- #so that long running set_meta()s can be cancelled without having to reboot the server
+ #this is terminate-able when output dataset/job is deleted
+ #so that long running set_meta()s can be canceled without having to reboot the server
if job_wrapper.get_state() not in [ model.Job.states.ERROR, model.Job.states.DELETED ] and self.app.config.set_metadata_externally and job_wrapper.output_paths:
external_metadata_script = job_wrapper.setup_external_metadata( output_fnames = job_wrapper.get_output_fnames(),
set_extension = True,
diff -r 5c46679d0755 -r f4654abcec1e lib/galaxy/model/__init__.py
--- a/lib/galaxy/model/__init__.py Fri Nov 20 08:46:49 2009 -0500
+++ b/lib/galaxy/model/__init__.py Fri Nov 20 11:03:41 2009 -0500
@@ -537,17 +537,13 @@
def is_multi_byte( self ):
"""Data consists of multi-byte characters"""
return self.dataset.is_multi_byte()
- def set_peek( self ):
- return self.datatype.set_peek( self )
- def set_multi_byte_peek( self ):
- return self.datatype.set_multi_byte_peek( self )
+ def set_peek( self, is_multi_byte=False ):
+ return self.datatype.set_peek( self, is_multi_byte=is_multi_byte )
def init_meta( self, copy_from=None ):
return self.datatype.init_meta( self, copy_from=copy_from )
def set_meta( self, **kwd ):
self.clear_associated_files( metadata_safe = True )
return self.datatype.set_meta( self, **kwd )
- def set_readonly_meta( self, **kwd ):
- return self.datatype.set_readonly_meta( self, **kwd )
def missing_meta( self, **kwd ):
return self.datatype.missing_meta( self, **kwd )
def as_display_type( self, type, **kwd ):
diff -r 5c46679d0755 -r f4654abcec1e lib/galaxy/model/migrate/versions/0005_cleanup_datasets_fix.py
--- a/lib/galaxy/model/migrate/versions/0005_cleanup_datasets_fix.py Fri Nov 20 08:46:49 2009 -0500
+++ b/lib/galaxy/model/migrate/versions/0005_cleanup_datasets_fix.py Fri Nov 20 11:03:41 2009 -0500
@@ -227,17 +227,13 @@
def get_mime( self ):
"""Returns the mime type of the data"""
return datatypes_registry.get_mimetype_by_extension( self.extension.lower() )
- def set_peek( self ):
- return self.datatype.set_peek( self )
- def set_multi_byte_peek( self ):
- return self.datatype.set_multi_byte_peek( self )
+ def set_peek( self, is_multi_byte=False ):
+ return self.datatype.set_peek( self, is_multi_byte=is_multi_byte )
def init_meta( self, copy_from=None ):
return self.datatype.init_meta( self, copy_from=copy_from )
def set_meta( self, **kwd ):
self.clear_associated_files( metadata_safe = True )
return self.datatype.set_meta( self, **kwd )
- def set_readonly_meta( self, **kwd ):
- return self.datatype.set_readonly_meta( self, **kwd )
def missing_meta( self, **kwd ):
return self.datatype.missing_meta( self, **kwd )
def as_display_type( self, type, **kwd ):
diff -r 5c46679d0755 -r f4654abcec1e test/functional/test_get_data.py
--- a/test/functional/test_get_data.py Fri Nov 20 08:46:49 2009 -0500
+++ b/test/functional/test_get_data.py Fri Nov 20 11:03:41 2009 -0500
@@ -294,7 +294,7 @@
.first()
assert hda is not None, "Problem retrieving hda from database"
self.verify_dataset_correctness( 'qualscores.qualsolid', hid=str( hda.hid ) )
- self.check_history_for_string( '2.5 Kb, format: <span class="qualsolid">qualsolid</span>, database: \? Info: uploaded file' )
+ self.check_history_for_string( '48 lines, format: <span class="qualsolid">qualsolid</span>, database: \? Info: uploaded file' )
self.check_metadata_for_string( 'Change data type value="qualsolid" selected="yes">qualsolid' )
self.delete_history( id=self.security.encode_id( history.id ) )
def test_0090_upload_file( self ):
@@ -312,7 +312,7 @@
.first()
assert hda is not None, "Problem retrieving hda from database"
self.verify_dataset_correctness( 'qualscores.qual454', hid=str( hda.hid ) )
- self.check_history_for_string( '5.6 Kb, format: <span class="qual454">qual454</span>, database: \?' )
+ self.check_history_for_string( '49 lines, format: <span class="qual454">qual454</span>, database: \?' )
self.check_metadata_for_string( 'Change data type value="qual454" selected="yes">qual454' )
self.delete_history( id=self.security.encode_id( history.id ) )
def test_0095_upload_file( self ):
@@ -483,7 +483,7 @@
.first()
assert hda is not None, "Problem retrieving hda from database"
self.verify_dataset_correctness( 'shrimp_cs_test1.csfasta', hid=str( hda.hid ) )
- self.check_history_for_string( '162.6 Kb, format: <span class="csfasta">csfasta</span>, <td>>2_14_26_F3,-1282216.0</td>' )
+ self.check_history_for_string( '2,500 sequences, format: <span class="csfasta">csfasta</span>, <td>>2_14_26_F3,-1282216.0</td>' )
self.check_metadata_for_string( 'value="shrimp_cs_test1.csfasta" value="\?" Change data type value="csfasta" selected="yes"' )
self.delete_history( id=self.security.encode_id( history.id ) )
def test_0140_upload_file( self ):
diff -r 5c46679d0755 -r f4654abcec1e test/functional/test_history_functions.py
--- a/test/functional/test_history_functions.py Fri Nov 20 08:46:49 2009 -0500
+++ b/test/functional/test_history_functions.py Fri Nov 20 11:03:41 2009 -0500
@@ -737,7 +737,7 @@
deleted_history_ids=deleted_history_ids )
sa_session.refresh( history6 )
if len( history6.datasets ) != 2:
- raise AssertionError, "Copying hda1 to the current history failed"
+ raise AssertionError, "Copying hda1 to the current history failed, history 6 has %d datasets, but should have 2" % len( history6.datasets )
# Test copying 1 hda to another history
self.new_history( name=urllib.quote( 'copy history items - 2' ) )
history7 = sa_session.query( galaxy.model.History ) \
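The tabular `set_meta` hunk above guesses a type for every tab-separated field and lets a more general type overrule a more specific one via `type_overrules_type`. A minimal, self-contained sketch of that precedence logic (the helper names mirror the patch, but the exact type ordering and guessing rules here are assumptions, not Galaxy's actual implementation):

```python
# Sketch of the column-type guessing described in the patch above:
# each field is guessed independently, and a more general type
# overrules a more specific one (int -> float -> list -> str).

TYPE_ORDER = ["int", "float", "list", "str"]  # most to least specific

def guess_column_type(field):
    """Guess the type of a single tab-separated field."""
    try:
        int(field)
        return "int"
    except ValueError:
        pass
    try:
        float(field)
        return "float"
    except ValueError:
        pass
    if "," in field:
        return "list"
    return "str"

def type_overrules_type(new_type, old_type):
    """A new guess overrules the current one if it is more general."""
    if old_type is None:
        return True
    return TYPE_ORDER.index(new_type) > TYPE_ORDER.index(old_type)

def guess_column_types(lines):
    """Accumulate per-column type guesses over a list of data lines."""
    column_types = []
    for line in lines:
        for i, field in enumerate(line.split("\t")):
            if i >= len(column_types):  # previously unknown column
                column_types.append(None)
            t = guess_column_type(field)
            if type_overrules_type(t, column_types[i]):
                column_types[i] = t
    return column_types

print(guess_column_types(["1,2,3\t1.1\t2.2\tqwerty"]))
# -> ['list', 'float', 'float', 'str']
```

As the patch comment notes, whether the first line is treated as data or as a header changes which guesses survive, which is why the real code stashes the first line's types separately and only falls back to them for otherwise-unknown columns.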
details: http://www.bx.psu.edu/hg/galaxy/rev/e24c2590f8af
changeset: 3115:e24c2590f8af
user: Kanwei Li <kanwei(a)gmail.com>
date: Fri Nov 20 12:07:06 2009 -0500
description:
update bx_python egg
diffstat:
eggs.ini | 4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diffs (20 lines):
diff -r f4654abcec1e -r e24c2590f8af eggs.ini
--- a/eggs.ini Fri Nov 20 11:03:41 2009 -0500
+++ b/eggs.ini Fri Nov 20 12:07:06 2009 -0500
@@ -60,14 +60,14 @@
pysqlite = _static
MySQL_python = _5.0.67_static
python_lzo = _static
-bx_python = _dev_r4bf1f32e6b76
+bx_python = _dev_3b9d30e47619
GeneTrack = _dev_e380f21c704218622155b9d230a44b3c9c452524
SQLAlchemy = _dev_r6498
; nose = .dev_r7156749efc58
; source location, necessary for scrambling
[source]
-bx_python = http://bitbucket.org/james_taylor/bx-python/get/4bf1f32e6b76.bz2
+bx_python = http://bitbucket.org/james_taylor/bx-python/get/3b9d30e47619.bz2
Cheetah = http://pypi.python.org/packages/source/C/Cheetah/Cheetah-2.2.2.tar.gz
DRMAA_python = http://gridengine.sunsource.net/files/documents/7/36/DRMAA-python-0.2.tar.gz
MySQL_python = http://superb-west.dl.sourceforge.net/sourceforge/mysql-python/MySQL-python… http://downloads.mysql.com/archives/mysql-5.0/mysql-5.0.67.tar.gz
23 Nov '09
details: http://www.bx.psu.edu/hg/galaxy/rev/c507bad7e373
changeset: 3109:c507bad7e373
user: Kanwei Li <kanwei(a)gmail.com>
date: Wed Nov 18 13:45:53 2009 -0500
description:
revert to sqlite default in universe.sample
diffstat:
lib/galaxy/datatypes/converters/wiggle_to_array_tree_converter.py | 6 +-
lib/galaxy/visualization/tracks/data/array_tree.py | 30 +-
lib/galaxy/visualization/tracks/data/interval_index.py | 26 +-
static/scripts/trackster.js | 116 +++++++++--
universe_wsgi.ini.sample | 12 +-
5 files changed, 131 insertions(+), 59 deletions(-)
diffs (352 lines):
diff -r bcb3c0eeb72f -r c507bad7e373 lib/galaxy/datatypes/converters/wiggle_to_array_tree_converter.py
--- a/lib/galaxy/datatypes/converters/wiggle_to_array_tree_converter.py Wed Nov 18 13:37:41 2009 -0500
+++ b/lib/galaxy/datatypes/converters/wiggle_to_array_tree_converter.py Wed Nov 18 13:45:53 2009 -0500
@@ -8,7 +8,7 @@
from bx.arrays.array_tree import *
from bx.arrays.wiggle import IntervalReader
-BLOCK_SIZE = 1000
+BLOCK_SIZE = 100
def main():
@@ -20,8 +20,8 @@
# Fill array from wiggle
d = array_tree_dict_from_wiggle_reader( reader, {}, block_size = BLOCK_SIZE )
- for value in d.itervalues():
- value.root.build_summary()
+ for array_tree in d.itervalues():
+ array_tree.root.build_summary()
f = open( out_fname, "w" )
FileArrayTreeDict.dict_to_file( d, f )
diff -r bcb3c0eeb72f -r c507bad7e373 lib/galaxy/visualization/tracks/data/array_tree.py
--- a/lib/galaxy/visualization/tracks/data/array_tree.py Wed Nov 18 13:37:41 2009 -0500
+++ b/lib/galaxy/visualization/tracks/data/array_tree.py Wed Nov 18 13:45:53 2009 -0500
@@ -12,9 +12,6 @@
# Maybe this should be included in the datatype itself, so users can add their
# own types to the browser as long as they return the right format of data?
-# FIXME: Assuming block size is always 1000 for the moment
-BLOCK_SIZE = 1000
-
class ArrayTreeDataProvider( object ):
def __init__( self, dataset, original_dataset ):
self.dataset = dataset
@@ -32,32 +29,33 @@
return { 'max': float( max(root_summary.maxs) ), 'min': float( min(root_summary.mins) ) }
def get_data( self, chrom, start, end, **kwargs ):
- start = int( start )
- end = int( end )
- resolution = max(1, ceil(float(kwargs['resolution'])))
-
- level = int( floor( log( resolution, BLOCK_SIZE ) ) )
- level = max( level, 0 )
- stepsize = BLOCK_SIZE ** level
- step1 = stepsize * BLOCK_SIZE
-
- # Open the file
f = open( self.dataset.file_name )
d = FileArrayTreeDict( f )
+
# Get the right chromosome
try:
chrom_array_tree = d[chrom]
except KeyError:
return None
+
+ block_size = chrom_array_tree.block_size
+ start = int( start )
+ end = int( end )
+ resolution = max(1, ceil(float(kwargs['resolution'])))
+
+ level = int( floor( log( resolution, block_size ) ) )
+ level = max( level, 0 )
+ stepsize = block_size ** level
+ step1 = stepsize * block_size
+
# Is the requested level valid?
assert 0 <= level <= chrom_array_tree.levels
- # Calculate the actual start/range/step of the block we're getting
results = []
- for block_start in range( start, end, stepsize * BLOCK_SIZE ):
+ for block_start in range( start, end, stepsize * block_size ):
# print block_start
# Return either data point or a summary depending on the level
- indexes = range( block_start, block_start + stepsize * BLOCK_SIZE, stepsize )
+ indexes = range( block_start, block_start + stepsize * block_size, stepsize )
if level > 0:
s = chrom_array_tree.get_summary( block_start, level )
if s is not None:
diff -r bcb3c0eeb72f -r c507bad7e373 lib/galaxy/visualization/tracks/data/interval_index.py
--- a/lib/galaxy/visualization/tracks/data/interval_index.py Wed Nov 18 13:37:41 2009 -0500
+++ b/lib/galaxy/visualization/tracks/data/interval_index.py Wed Nov 18 13:45:53 2009 -0500
@@ -27,19 +27,21 @@
payload['strand'] = feature[5]
except IndexError:
pass
- try:
- block_sizes = [ int(n) for n in feature[10].split(',') if n != '']
- block_starts = [ int(n) for n in feature[11].split(',') if n != '' ]
- blocks = zip(block_sizes, block_starts)
- payload['blocks'] = [ (start + block[1], start + block[1] + block[0]) for block in blocks]
- except IndexError:
- pass
+
+ if 'include_blocks' in kwargs:
+ try:
+ block_sizes = [ int(n) for n in feature[10].split(',') if n != '']
+ block_starts = [ int(n) for n in feature[11].split(',') if n != '' ]
+ blocks = zip(block_sizes, block_starts)
+ payload['blocks'] = [ (start + block[1], start + block[1] + block[0]) for block in blocks]
+ except IndexError:
+ pass
- try:
- payload['thick_start'] = int(feature[6])
- payload['thick_end'] = int(feature[7])
- except IndexError:
- pass
+ try:
+ payload['thick_start'] = int(feature[6])
+ payload['thick_end'] = int(feature[7])
+ except IndexError:
+ pass
results.append(payload)
uid += 1
diff -r bcb3c0eeb72f -r c507bad7e373 static/scripts/trackster.js
--- a/static/scripts/trackster.js Wed Nov 18 13:37:41 2009 -0500
+++ b/static/scripts/trackster.js Wed Nov 18 13:45:53 2009 -0500
@@ -1,9 +1,10 @@
/* Trackster
2009, James Taylor, Kanwei Li
*/
-var DEBUG = false;
+var DEBUG = true;
var DENSITY = 1000,
+ FEATURE_LEVELS = 100,
DATA_ERROR = "There was an error in indexing this dataset.",
DATA_NONE = "No data for this chrom/contig.",
DATA_PENDING = "Currently indexing... please wait",
@@ -108,6 +109,7 @@
// 10^log10(range / DENSITY) Close approximation for browser window, assuming DENSITY = window width
this.resolution = Math.pow( 10, Math.ceil( Math.log( (this.high - this.low) / DENSITY ) / Math.LN10 ) );
+ this.zoom_res = Math.max(1,Math.ceil( Math.log( (this.high - this.low) / FEATURE_LEVELS ) / Math.log(FEATURE_LEVELS) ));
// Overview
$("#overview-box").css( {
@@ -171,7 +173,7 @@
resolution = this.view.resolution;
- if (DEBUG) { $("#debug").text(resolution); }
+ if (DEBUG) { $("#debug").text(resolution + " " + this.view.zoom_res); }
var parent_element = $("<div style='position: relative;'></div>");
this.content_div.children( ":first" ).remove();
@@ -279,7 +281,7 @@
max_label.css({ position: "relative", top: "35px" });
max_label.prependTo(track.container_div)
- min_label.css({ position: "relative", top: track.height_px + 32 + "px", });
+ min_label.css({ position: "relative", top: track.height_px + 32 + "px" });
min_label.prependTo(track.container_div);
track.draw();
@@ -297,7 +299,7 @@
$.getJSON( data_url, { track_type: this.track_type, chrom: this.view.chrom,
low: low, high: high, dataset_id: this.dataset_id,
resolution: this.view.resolution }, function ( data ) {
- track.cache[key] = data;
+ track.cache.set(key, data);
delete track.data_queue[key];
track.draw();
});
@@ -313,12 +315,12 @@
canvas = $("<canvas class='tile'></canvas>"),
key = resolution + "_" + tile_index;
- if (!this.cache[key]) {
+ if (!this.cache.get(key)) {
this.get_data( resolution, tile_index );
return;
}
- var data = this.cache[key];
+ var data = this.cache.get(key);
canvas.css( {
position: "absolute",
top: 0,
@@ -368,7 +370,11 @@
this.showing_labels = false;
this.vertical_gap = 10;
this.base_color = "#2C3143";
+ this.default_font = "9px Monaco, Lucida Console, monospace";
this.left_offset = 200;
+ this.inc_slots = {};
+ this.data_queue = {};
+ this.data_cache = new Cache(20);
};
$.extend( FeatureTrack.prototype, TiledTrack.prototype, {
init: function() {
@@ -397,6 +403,23 @@
}
});
},
+ get_data: function( low, high ) {
+ console.log("getting: ", low, high);
+ var track = this,
+ key = low + '_' + high;
+
+ if (!track.data_queue[key]) {
+ track.data_queue[key] = true;
+ $.getJSON( data_url, { track_type: track.track_type, chrom: track.view.chrom,
+ low: low, high: high, dataset_id: track.dataset_id,
+ include_blocks: true }, function ( data ) {
+ track.data_cache.set(key, data);
+ // console.log("datacache", track.data_cache.get(key));
+ delete track.data_queue[key];
+ track.draw();
+ });
+ }
+ },
calc_slots: function( include_labels ) {
// console.log("num vals: " + this.values.length);
var end_ary = [],
@@ -438,27 +461,76 @@
this.height_px = end_ary.length * this.vertical_gap + 15;
this.content_div.css( "height", this.height_px + "px" );
},
+ incremental_slots: function( level, features ) {
+ if (!this.inc_slots[level]) {
+ this.inc_slots[level] = {};
+ this.inc_slots[level].w_scale = 1000 / Math.pow(FEATURE_LEVELS, level);
+ }
+ var slots = this.inc_slots[level];
+ if (slots[uid]) {
+ return slots[uid];
+ }
+ var end_ary = [],
+ undone = [],
+ max_high = this.view.max_high,
+ max_low = this.view.max_low;
+
+ for (var i = 0, len = features.length; i < len; i++) {
+ var feature = features[i];
+ if (slots[feature.uid]) {
+ end_ary[ slots[feature.uid] ] = Math.ceil( (feature.end - max_low) * slots.w_scale );
+ } else {
+ undone.push(feature);
+ }
+ }
+ for (var i = 0, len = undone.length; i < len; i++) {
+ var feature = undone[i];
+ f_start = Math.floor( (feature.start - max_low) * slots.w_scale );
+ f_start -= dummy_canvas.measureText(feature.name).width;
+ f_end = Math.ceil( (feature.end - max_low) * slots.w_scale );
+
+ },
draw_tile: function( resolution, tile_index, parent_element, w_scale ) {
- if (!this.values) { // Still loading
- return null;
+ if (!this.values) {
+ return;
}
+ var tile_low = tile_index * DENSITY * resolution,
+ tile_high = ( tile_index + 1 ) * DENSITY * resolution,
+ tile_span = DENSITY * resolution;
// console.log("drawing " + tile_index);
// Once we zoom in enough, show name labels
- if (w_scale > this.show_labels_scale && !this.showing_labels) {
- this.showing_labels = true;
- if (!this.zi_slots) {
- this.calc_slots(true);
+ var data;
+ if (w_scale > this.show_labels_scale) {
+ if (!this.showing_labels) {
+ this.showing_labels = true;
+ if (!this.zi_slots) {
+ this.calc_slots(true);
+ }
+ this.slots = this.zi_slots;
}
- this.slots = this.zi_slots;
- } else if (w_scale <= this.show_labels_scale && this.showing_labels) {
- this.showing_labels = false;
- this.slots = this.zo_slots;
+ for (var k in this.data_cache.obj_cache) {
+ var k_split = k.split("_"), k_low = k_split[0], k_high = k_split[1];
+ if (k_low <= tile_low && k_high >= tile_high) {
+ data = this.data_cache.get(k);
+ break;
+ }
+ }
+ if (!data) {
+ this.data_queue[ [tile_low, tile_high] ] = true;
+ this.get_data(tile_low, tile_high);
+ return;
+ }
+
+ } else {
+ if (this.showing_labels) {
+ this.showing_labels = false;
+ this.slots = this.zo_slots;
+ }
+ data = this.values;
}
// console.log(this.slots);
- var tile_low = tile_index * DENSITY * resolution,
- tile_high = ( tile_index + 1 ) * DENSITY * resolution,
- tile_span = DENSITY * resolution;
+
// console.log(tile_low, tile_high, tile_length, w_scale);
var width = Math.ceil( tile_span * w_scale ),
height = this.height_px,
@@ -474,12 +546,12 @@
// console.log(( tile_low - this.view.low ) * w_scale, tile_index, w_scale);
var ctx = new_canvas.get(0).getContext("2d");
ctx.fillStyle = this.base_color;
- ctx.font = "10px monospace";
+ ctx.font = this.default_font;
ctx.textAlign = "right";
var j = 0;
- for (var i = 0, len = this.values.length; i < len; i++) {
- var feature = this.values[i];
+ for (var i = 0, len = data.length; i < len; i++) {
+ var feature = data[i];
if (feature.start <= tile_high && feature.end >= tile_low) {
var f_start = Math.floor( Math.max(0, (feature.start - tile_low) * w_scale) ),
f_end = Math.ceil( Math.min(width, (feature.end - tile_low) * w_scale) ),
diff -r bcb3c0eeb72f -r c507bad7e373 universe_wsgi.ini.sample
--- a/universe_wsgi.ini.sample Wed Nov 18 13:37:41 2009 -0500
+++ b/universe_wsgi.ini.sample Wed Nov 18 13:45:53 2009 -0500
@@ -31,15 +31,15 @@
paste.app_factory = galaxy.web.buildapp:app_factory
# By default, Galaxy uses a SQLite database found here
-#database_file = database/universe.sqlite
+database_file = database/universe.sqlite
# You may use a SQLAlchemy connection string to specify an external database
# instead. PostgreSQL and MySQL are supported.
-database_connection = postgres:///galaxy
-database_engine_option_echo = true
-database_engine_option_echo_pool = true
-database_engine_option_pool_size = 10
-database_engine_option_max_overflow = 20
+#database_connection = postgres:///galaxy
+#database_engine_option_echo = true
+#database_engine_option_echo_pool = true
+#database_engine_option_pool_size = 10
+#database_engine_option_max_overflow = 20
# If using MySQL, see:
# http://rapd.wordpress.com/2008/03/02/sqlalchemy-sqlerror-operationalerror-2…
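The array_tree.py change above stops hard-coding `BLOCK_SIZE` and instead reads `block_size` from the chromosome's array tree before deriving the summary level. A sketch of that derivation, with the formulas copied from the hunk (the function name and `max_levels` parameter are illustrative, not from the patch):

```python
from math import ceil, floor, log

def summary_level(resolution, block_size, max_levels):
    """Pick the array-tree summary level for a requested resolution.

    Mirrors the computation in the ArrayTreeDataProvider hunk above:
    level = floor(log_{block_size}(resolution)), clamped to >= 0,
    with stepsize = block_size ** level.
    """
    resolution = max(1, ceil(float(resolution)))
    level = max(int(floor(log(resolution, block_size))), 0)
    # Is the requested level valid for this tree?
    assert 0 <= level <= max_levels
    stepsize = block_size ** level
    return level, stepsize
```

For example, with the converter's new block size of 100, a requested resolution of 10000 bases per pixel selects summary level 2 (stepsize 10000), while any resolution below 100 falls through to level 0, i.e. raw data points.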
23 Nov '09
details: http://www.bx.psu.edu/hg/galaxy/rev/b70e23d80a75
changeset: 3111:b70e23d80a75
user: Enis Afgan <afgane(a)gmail.com>
date: Thu Nov 19 12:56:24 2009 -0500
description:
Fixed a bug in cloud store update. Also, added method comments to EC2 cloud provider implementation class.
diffstat:
lib/galaxy/cloud/providers/ec2.py | 90 ++++++++++++++++++++---------
lib/galaxy/cloud/providers/eucalyptus.py | 32 +++++-----
templates/cloud/list_images.mako | 2 +
3 files changed, 81 insertions(+), 43 deletions(-)
diffs (316 lines):
diff -r 53de6aea6445 -r b70e23d80a75 lib/galaxy/cloud/providers/ec2.py
--- a/lib/galaxy/cloud/providers/ec2.py Wed Nov 18 16:18:09 2009 -0500
+++ b/lib/galaxy/cloud/providers/ec2.py Thu Nov 19 12:56:24 2009 -0500
@@ -94,13 +94,16 @@
log.info( "EC2 cloud manager stopped" )
def put( self, uci_wrapper ):
- # Get rid of UCI from state description
+ """
+ Add uci_wrapper object to the end of the request queue to be handled by
+ this cloud provider.
+ """
state = uci_wrapper.get_uci_state()
uci_wrapper.change_state( state.split('U')[0] ) # remove 'UCI' from end of state description (i.e., mark as accepted and ready for processing)
self.queue.put( uci_wrapper )
def run_next( self ):
- """Run the next job, waiting until one is available if necessary"""
+ """Process next request, waiting until one is available if necessary."""
cnt = 0
while 1:
@@ -125,7 +128,7 @@
def get_connection( self, uci_wrapper ):
"""
- Establishes EC2 cloud connection using user's credentials associated with given UCI
+ Establishes cloud connection using user's credentials associated with given UCI
"""
log.debug( 'Establishing %s cloud connection.' % self.type )
provider = uci_wrapper.get_provider()
@@ -152,7 +155,10 @@
def check_key_pair( self, uci_wrapper, conn ):
"""
- Generate key pair using user's credentials
+ Check if a key pair associated with this UCI exists on cloud provider.
+ If yes, return key pair name; otherwise, generate a key pair with the cloud
+ provider and, again, return key pair name.
+ Key pair name for given UCI is generated from UCI's name and suffix '_kp'
"""
kp = None
kp_name = uci_wrapper.get_name().replace(' ','_') + "_kp"
@@ -200,6 +206,7 @@
return None
def create_key_pair( self, conn, kp_name ):
+ """ Initiate creation of key pair under kp_name by current cloud provider. """
try:
return conn.create_key_pair( kp_name )
except boto.exception.EC2ResponseError, e:
@@ -226,8 +233,8 @@
def create_uci( self, uci_wrapper ):
"""
- Creates User Configured Instance (UCI). Essentially, creates storage volume on cloud provider
- and registers relevant information in Galaxy database.
+ Create User Configured Instance (UCI) - i.e., create storage volume on cloud provider
+ and register relevant information in local Galaxy database.
"""
conn = self.get_connection( uci_wrapper )
if uci_wrapper.get_uci_availability_zone()=='':
@@ -279,8 +286,11 @@
def delete_uci( self, uci_wrapper ):
"""
- Deletes UCI. NOTE that this implies deletion of any and all data associated
+ Delete UCI - i.e., delete all storage volumes associated with this UCI.
+ NOTE that this implies deletion of any and all data associated
with this UCI from the cloud. All data will be deleted.
+ Information in local Galaxy database is marked as deleted but not actually removed
+ from the database.
"""
conn = self.get_connection( uci_wrapper )
vl = [] # volume list
@@ -318,7 +328,8 @@
def snapshot_uci( self, uci_wrapper ):
"""
- Creates snapshot of all storage volumes associated with this UCI.
+ Initiate creation of a snapshot by cloud provider for all storage volumes
+ associated with this UCI.
"""
if uci_wrapper.get_uci_state() != uci_states.ERROR:
conn = self.get_connection( uci_wrapper )
@@ -362,7 +373,7 @@
def start_uci( self, uci_wrapper ):
"""
- Starts instance(s) of given UCI on the cloud.
+ Start instance(s) of given UCI on the cloud.
"""
if uci_wrapper.get_uci_state() != uci_states.ERROR:
conn = self.get_connection( uci_wrapper )
@@ -461,7 +472,7 @@
def stop_uci( self, uci_wrapper):
"""
- Stops all of cloud instances associated with given UCI.
+ Stop all of cloud instances associated with given UCI.
"""
conn = self.get_connection( uci_wrapper )
@@ -543,11 +554,15 @@
def update( self ):
"""
- Runs a global status update on all instances that are in 'running', 'pending', or 'shutting-down' state.
- Also, runs update on all storage volumes that are in 'in-use', 'creating', or 'None' state.
+ Run status update on all instances that are in 'running', 'pending', or 'shutting-down' state.
+ Run status update on all storage volumes whose status is 'in-use', 'creating', or 'None'.
+ Run status update on all snapshots whose status is 'pending' or 'delete'
+ Run status update on any zombie UCIs, i.e., UCI's that is in 'submitted' state for an
+ extended period of time.
+
Reason behind this method is to sync state of local DB and real-world resources
"""
- log.debug( "Running general status update for EC2 UCIs..." )
+ log.debug( "Running general status update for %s UCIs..." % self.type )
# Update instances
instances = self.sa_session.query( model.CloudInstance ) \
.filter( or_( model.CloudInstance.table.c.state==instance_states.RUNNING,
@@ -610,7 +625,11 @@
self.process_zombie( z_inst )
def update_instance( self, inst ):
-
+ """
+ Update information in local database for given instance as it is obtained from cloud provider.
+ Along with updating information about given instance, information about the UCI controlling
+ this instance is also updated.
+ """
# Get credentials associated wit this instance
uci_id = inst.uci_id
uci = self.sa_session.query( model.UCI ).get( uci_id )
@@ -683,6 +702,11 @@
return None
def update_store( self, store ):
+ """
+ Update information in local database for given storage volume as it is obtained from cloud provider.
+ Along with updating information about given storage volume, information about the UCI controlling
+ this storage volume is also updated.
+ """
# Get credentials associated wit this store
uci_id = store.uci_id
uci = self.sa_session.query( model.UCI ).get( uci_id )
@@ -705,6 +729,7 @@
# Update store status in local DB with info from cloud provider
if len(vl) > 0:
try:
+ log.debug( "Storage volume '%s' current status: '%s'" % (store.volume_id, vl[0].status ) )
if store.status != vl[0].status:
# In case something failed during creation of UCI but actual storage volume was created and yet
# UCI state remained as 'new', try to remedy this by updating UCI state here
@@ -721,18 +746,19 @@
store.status = vl[0].status
self.sa_session.add( store )
self.sa_session.flush()
- if store.inst.instance_id != vl[0].instance_id:
- store.inst.instance_id = vl[0].instance_id
- self.sa_session.add( store )
- self.sa_session.flush()
- if store.attach_time != vl[0].attach_time:
- store.attach_time = vl[0].attach_time
- self.sa_session.add( store )
- self.sa_session.flush()
- if store.device != vl[0].device:
- store.device = vl[0].device
- self.sa_session.add( store )
- self.sa_session.flush()
+ if store.inst != None:
+ if store.inst.instance_id != vl[0].instance_id:
+ store.inst.instance_id = vl[0].instance_id
+ self.sa_session.add( store )
+ self.sa_session.flush()
+ if store.attach_time != vl[0].attach_time:
+ store.attach_time = vl[0].attach_time
+ self.sa_session.add( store )
+ self.sa_session.flush()
+ if store.device != vl[0].device:
+ store.device = vl[0].device
+ self.sa_session.add( store )
+ self.sa_session.flush()
except boto.exception.EC2ResponseError, e:
err = "Updating status of volume(s) from cloud failed for UCI '"+ uci.name + "' during general status update: " + str( e )
log.error( err )
@@ -753,6 +779,11 @@
self.sa_session.flush()
def update_snapshot( self, snapshot ):
+ """
+ Update information in local database for given snapshot as it is obtained from cloud provider.
+ Along with updating information about given snapshot, information about the UCI controlling
+ this snapshot is also updated.
+ """
# Get credentials associated wit this store
uci_id = snapshot.uci_id
uci = self.sa_session.query( model.UCI ).get( uci_id )
@@ -799,6 +830,9 @@
self.sa_session.flush()
def delete_snapshot( self, snapshot ):
+ """
+ Initiate deletion of given snapshot from cloud provider.
+ """
if snapshot.status == snapshot_status.DELETE:
# Get credentials associated wit this store
uci_id = snapshot.uci_id
@@ -845,7 +879,7 @@
def process_zombie( self, inst ):
"""
- Attempt at discovering if starting an instance was successful but local database was not updated
+ Attempt at discovering if starting a cloud instance was successful but local database was not updated
accordingly or if something else failed and instance was never started. Currently, no automatic
repairs are being attempted; instead, appropriate error messages are set.
"""
@@ -921,7 +955,7 @@
def get_connection_from_uci( self, uci ):
"""
- Establishes and returns connection to cloud provider. Information needed to do so is obtained
+ Establish and return connection to cloud provider. Information needed to do so is obtained
directly from uci database object.
"""
log.debug( 'Establishing %s cloud connection' % self.type )
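The main fix in the hunks above is guarding `store.inst` against `None` before syncing `instance_id` (a detached volume has no instance link), and only flushing attributes that actually changed. That pattern can be sketched in isolation; the `Record` stand-in and `sync_volume` helper below are illustrative, not part of Galaxy's actual model layer:

```python
class Record(object):
    """Minimal stand-in for an ORM object; illustrative only."""
    def __init__(self, **kw):
        self.__dict__.update(kw)

def sync_volume(store, cloud_vol, session):
    """Copy changed attributes from the cloud-reported volume onto the
    local DB record, guarding against a missing (None) instance link.
    Returns True if anything was written."""
    dirty = False
    # Only touch the instance link if the volume is actually attached
    if store.inst is not None and store.inst.instance_id != cloud_vol.instance_id:
        store.inst.instance_id = cloud_vol.instance_id
        dirty = True
    for attr in ("attach_time", "device", "status"):
        if getattr(store, attr) != getattr(cloud_vol, attr):
            setattr(store, attr, getattr(cloud_vol, attr))
            dirty = True
    if dirty:
        session.add(store)
        session.flush()
    return dirty
```

Collapsing the repeated add/flush pairs into one dirty-check also avoids the per-attribute flushes the original code performs.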
diff -r 53de6aea6445 -r b70e23d80a75 lib/galaxy/cloud/providers/eucalyptus.py
--- a/lib/galaxy/cloud/providers/eucalyptus.py Wed Nov 18 16:18:09 2009 -0500
+++ b/lib/galaxy/cloud/providers/eucalyptus.py Thu Nov 19 12:56:24 2009 -0500
@@ -94,7 +94,7 @@
def put( self, uci_wrapper ):
"""
- Adds uci_wrapper object to the end of the request queue to be handled by
+ Add uci_wrapper object to the end of the request queue to be handled by
this cloud provider.
"""
state = uci_wrapper.get_uci_state()
@@ -536,7 +536,7 @@
Reason behind this method is to sync state of local DB and real-world resources
"""
- log.debug( "Running general status update for EPC UCIs..." )
+ log.debug( "Running general status update for %s UCIs..." % self.type )
# Update instances
instances = self.sa_session.query( model.CloudInstance ) \
.filter( or_( model.CloudInstance.table.c.state==instance_states.RUNNING,
@@ -695,7 +695,8 @@
# Update store status in local DB with info from cloud provider
if len(vl) > 0:
try:
- if store.status != vl[0].status and store.availability_zone != 'epc':
+ log.debug( "Storage volume '%s' current status: '%s'" % (store.volume_id, vl[0].status ) )
+ if store.status != vl[0].status:
# In case something failed during creation of UCI but actual storage volume was created and yet
# UCI state remained as 'new', try to remedy this by updating UCI state here
if ( store.status == None ) and ( store.volume_id != None ):
@@ -718,10 +719,11 @@
store.status = vl[0].status
self.sa_session.add( store )
self.sa_session.flush()
- if store.inst.instance_id != vl[0].instance_id:
- store.inst.instance_id = vl[0].instance_id
- self.sa_session.add( store )
- self.sa_session.flush()
+ if store.inst != None:
+ if store.inst.instance_id != vl[0].instance_id:
+ store.inst.instance_id = vl[0].instance_id
+ self.sa_session.add( store )
+ self.sa_session.flush()
if store.attach_time != vl[0].attach_time:
store.attach_time = vl[0].attach_time
self.sa_session.add( store )
@@ -937,7 +939,7 @@
def get_connection_from_uci( self, uci ):
"""
- Establishes and returns connection to cloud provider. Information needed to do so is obtained
+ Establish and return connection to cloud provider. Information needed to do so is obtained
directly from uci database object.
"""
log.debug( 'Establishing %s cloud connection' % self.type )
@@ -946,13 +948,13 @@
# Get connection
try:
region = RegionInfo( None, uci.credentials.provider.region_name, uci.credentials.provider.region_endpoint )
- log.debug( "[%s] Using following command to connect to cloud provider: "
- "conn = EC2Connection( aws_access_key_id=%s, "
- "aws_secret_access_key=%s, "
- "port=%s, "
- "is_secure=%s, "
- "region=region, "
- "path=%s )" % ( self.type, a_key, s_key, uci.credentials.provider.is_secure, uci.credentials.provider.port, uci.credentials.provider.path ) )
+# log.debug( "[%s] Using following command to connect to cloud provider: "
+# "conn = EC2Connection( aws_access_key_id=%s, "
+# "aws_secret_access_key=%s, "
+# "port=%s, "
+# "is_secure=%s, "
+# "region=region, "
+# "path=%s )" % ( self.type, a_key, s_key, uci.credentials.provider.is_secure, uci.credentials.provider.port, uci.credentials.provider.path ) )
conn = EC2Connection( aws_access_key_id=a_key,
aws_secret_access_key=s_key,
is_secure=uci.credentials.provider.is_secure,
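The hunk above comments out a debug line that logged `a_key` and `s_key`, i.e. the AWS access and secret keys, in plain text. An alternative to dropping the log entirely is to redact credentials before logging; the `redact` helper below is a hypothetical sketch, not something present in the patch:

```python
def redact(secret, keep=4):
    """Mask all but the last `keep` characters of a credential so the
    connection parameters can be logged safely (illustrative helper)."""
    if secret is None:
        return "<none>"
    if len(secret) <= keep:
        return "*" * len(secret)
    return "*" * (len(secret) - keep) + secret[-keep:]
```

With this, the log line could keep its diagnostic value (which key pair was used) without exposing the full secret.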
diff -r 53de6aea6445 -r b70e23d80a75 templates/cloud/list_images.mako
--- a/templates/cloud/list_images.mako Wed Nov 18 16:18:09 2009 -0500
+++ b/templates/cloud/list_images.mako Thu Nov 19 12:56:24 2009 -0500
@@ -78,6 +78,8 @@
</tr>
%endfor
</table>
+
+ <p><a href="${h.url_for( controller='cloud', action='add_new_image' )}" target="galaxy_main">Add another machine image</a></p>
%else:
<h3>There are no registered machine images.</h3><br />
<a href="${h.url_for( controller='cloud', action='add_new_image' )}" target="galaxy_main">Add machine image now?</a>