Seems on the up 'n up. Can you check the `handler` column of the job table in the database for the jobs in question?
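For example, something along these lines against the Galaxy database (a sketch only; it assumes the standard `job` and `galaxy_user` tables, and the email address is a placeholder for the affected account):

-- Handler and state of the affected user's recent jobs.
-- 'user@example.org' is a placeholder for the affected account.
SELECT j.id, j.create_time, j.tool_id, j.state, j.handler, j.destination_id
  FROM job j
  JOIN galaxy_user u ON j.user_id = u.id
 WHERE u.email = 'user@example.org'
 ORDER BY j.create_time DESC
 LIMIT 20;

-- Per-handler breakdown, to see whether both handlers are picking up work.
SELECT handler, state, count(*)
  FROM job
 GROUP BY handler, state
 ORDER BY handler, state;

If `handler` is NULL, or names a handler that isn't defined in job_conf.xml, nothing will ever pick those jobs up.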

On Wed, Nov 30, 2016 at 4:14 PM, Langhorst, Brad <Langhorst@neb.com> wrote:
Nope.
I limit local jobs to 8, but no jobs go to that destination.

Two handlers, and a few destinations that reserve more cores.

Here’s my job_conf.xml:

<?xml version="1.0"?>
<job_conf>
    <plugins workers="4">
        <!-- "workers" is the number of threads for the runner's work queue.
             The default from <plugins> is used if not defined for a <plugin>. -->
        <plugin id="local" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner" workers="4"/>
        <plugin id="gridengine" type="runner" load="galaxy.jobs.runners.drmaa:DRMAAJobRunner">
            <!-- Override the $DRMAA_LIBRARY_PATH environment variable -->
            <param id="drmaa_library_path">/opt/gridengine/lib/linux-x64/libdrmaa.so</param>
        </plugin>
    </plugins>
    <handlers default="handlers">
        <handler id="handler0" tags="handlers">
            <plugin id="gridengine"/>
        </handler>
        <handler id="handler1" tags="handlers">
            <plugin id="gridengine"/>
        </handler>
    </handlers>
    <destinations default="gridengine">
        <destination id="local" runner="local"/>
        <destination id="gridengine" runner="gridengine" tags="rockscluster">
            <param id="embed_metadata_in_job">True</param>
            <env exec="source /mnt/galaxy/data/galaxy/cluster_env/bin/activate"/>
        </destination>
        <destination id="gridengine_8" runner="gridengine" tags="rockscluster,8cores">
            <param id="embed_metadata_in_job">True</param>
            <param id="nativeSpecification">-pe smp 8</param>
            <env exec="source /mnt/galaxy/data/galaxy/cluster_env/bin/activate"/>
        </destination>
        <destination id="gridengine_16" runner="gridengine" tags="rockscluster,16cores">
            <param id="embed_metadata_in_job">True</param>
            <param id="nativeSpecification">-pe smp 16</param>
            <env exec="source /mnt/galaxy/data/galaxy/cluster_env/bin/activate"/>
        </destination>
        <destination id="gridengine_16_long" runner="gridengine" tags="rockscluster,16cores,longrun">
            <param id="embed_metadata_in_job">True</param>
            <param id="nativeSpecification">-pe smp 16 -P longrun</param>
            <env exec="source /mnt/galaxy/data/galaxy/cluster_env/bin/activate"/>
        </destination>
        <destination id="gridengine_32_long" runner="gridengine" tags="rockscluster,32cores,longrun">
            <param id="embed_metadata_in_job">True</param>
            <param id="nativeSpecification">-pe smp 32 -P longrun</param>
            <env exec="source /mnt/galaxy/data/galaxy/cluster_env/bin/activate"/>
        </destination>
    </destinations>
    <tools>
        <tool id="bowtie2" destination="8cores"/>
        <tool id="bismark_bowtie2" destination="8cores"/>
        <tool id="freebayes" destination="8cores"/>
        <tool id="tophat2" destination="8cores"/>
        <tool id="hisat" destination="8cores"/>
        <tool id="hisat2" destination="8cores"/>
        <tool id="trinityrnaseq" destination="longrun"/>
        <tool id="cufflinks" destination="8cores"/>
        <tool id="stringtie" destination="16cores"/>
        <tool id="mira_4_0_de_novo" destination="8cores"/>
    </tools>
    <limits>
        <limit type="destination_total_concurrent_jobs" id="local">8</limit>
    </limits>
</job_conf>


Brad

Brad Langhorst, Ph.D.
Development Scientist
New England Biolabs




On Nov 30, 2016, at 4:05 PM, Nate Coraor <nate@bx.psu.edu> wrote:

Hi Brad,

Do you have any kind of strange job mapping going on, or job limits configured?

--nate

On Wed, Nov 30, 2016 at 1:01 AM, Langhorst, Brad <Langhorst@neb.com> wrote:
Hi:

I have one user whose jobs won’t execute on my local instance.

The user was mysteriously listed as inactive.
I reactivated the account and restarted the handlers, but jobs are still not executing.
The user has plenty of quota (82% used).
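
For what it's worth, the account flags can be double-checked at the database level with something like this (a sketch; assumes the standard `galaxy_user` table, placeholder email again):

SELECT id, email, active, deleted, purged
  FROM galaxy_user
 WHERE email = 'user@example.org';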

Anybody have an idea about what might be wrong with this user?

I have impersonated the user; jobs can be added:

galaxy.util.object_wrapper WARNING 2016-11-30 00:40:08,118 Unable to create dynamic subclass for <type 'instance'>, None: metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases
galaxy.tools.actions INFO 2016-11-30 00:40:08,271 Handled output (153.865 ms)
galaxy.tools.actions INFO 2016-11-30 00:40:08,330 Verified access to datasets (5.648 ms)
galaxy.tools.execute DEBUG 2016-11-30 00:40:08,490 Tool [toolshed.g2.bx.psu.edu/repos/jjohnson/fastqc/FastQC/1.0.0] created job [537201] (384.040 ms)
galaxy.tools.execute DEBUG 2016-11-30 00:40:08,498 Executed all jobs for tool request: (413.473 ms)
10.254.254.95 - - [30/Nov/2016:00:40:07 -0400] "POST /api/tools HTTP/1.0" 200 - "http://galaxy.neb.com/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36 OPR/41.0.2353.69"

but they never get scheduled or run.

Nothing else in the logs after submitting the job.

They just remain in this state:
"This is a new dataset and not all of its data are available yet"
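
That message is what the interface shows while the underlying job is still waiting to be dispatched. The job created above can be checked directly (a sketch; 537201 is the job id from the log, columns per the standard `job` table):

SELECT id, state, handler, destination_id, job_runner_name
  FROM job
 WHERE id = 537201;

A NULL or unrecognized value in `handler` would explain why the job is never scheduled.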

Any ideas?
Brad
