BaseDaskJobQueueBackend

class openff.evaluator.backends.dask.BaseDaskJobQueueBackend(minimum_number_of_workers=1, maximum_number_of_workers=1, resources_per_worker=<openff.evaluator.backends.backends.QueueWorkerResources object>, queue_name='default', setup_script_commands=None, extra_script_options=None, adaptive_interval='10000ms', disable_nanny_process=False, cluster_type=None, adaptive_class=None)[source]

An openff-evaluator backend which uses a dask_jobqueue.JobQueueCluster object to run calculations within an existing HPC queuing system.

See also

dask_jobqueue.JobQueueCluster

__init__(minimum_number_of_workers=1, maximum_number_of_workers=1, resources_per_worker=<openff.evaluator.backends.backends.QueueWorkerResources object>, queue_name='default', setup_script_commands=None, extra_script_options=None, adaptive_interval='10000ms', disable_nanny_process=False, cluster_type=None, adaptive_class=None)[source]

Constructs a new BaseDaskJobQueueBackend object.

Parameters
  • minimum_number_of_workers (int) – The minimum number of workers to request from the queue system.

  • maximum_number_of_workers (int) – The maximum number of workers to request from the queue system.

  • resources_per_worker (QueueWorkerResources) – The resources to request per worker.

  • queue_name (str) – The name of the queue from which the workers will be requested.

  • setup_script_commands (list of str) –

    A list of bash script commands to call within the queue submission script before the call to launch the dask worker.

    This may include, for example, activating a Python environment or loading an environment module.

  • extra_script_options (list of str) –

    A list of extra job-specific options to include in the queue submission script. These will be added to the script header in the form

    #BSUB <extra_script_options[x]>

  • adaptive_interval (str) – The interval between attempts to scale the cluster up or down, of the form ‘XXXms’.

  • disable_nanny_process (bool) –

    If true, dask workers will be started in --no-nanny mode. This is required if using multiprocessing code within submitted tasks.

    This has not been fully tested yet and may lead to stability issues with the workers.

  • adaptive_class (class of type distributed.deploy.AdaptiveCore, optional) – An optional class to pass to dask to use for its adaptive scaling handling. This is mainly exposed to make it easy to work around certain dask bugs / quirks.
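
To illustrate how several of these arguments relate to the generated queue submission script, the sketch below assembles a minimal LSF-style script by hand. This is not the library's actual implementation (the real script is produced by dask-jobqueue, and `build_submission_script` is a hypothetical helper invented here); it only shows the roles that `queue_name`, `extra_script_options`, and `setup_script_commands` play.

```python
# Hypothetical sketch: how the submission script is conceptually assembled
# from the constructor arguments. The real script comes from dask-jobqueue.
def build_submission_script(queue_name, extra_script_options, setup_script_commands):
    """Assemble a minimal LSF-style queue submission script."""
    header = ["#!/bin/bash", f"#BSUB -q {queue_name}"]
    # Each extra option is added verbatim to the script header.
    header += [f"#BSUB {option}" for option in extra_script_options]
    # Setup commands (e.g. activating an environment) run before the worker.
    body = list(setup_script_commands)
    body.append("dask-worker ...")  # placeholder for the actual worker launch
    return "\n".join(header + body)


script = build_submission_script(
    queue_name="default",
    extra_script_options=["-W 02:00"],
    setup_script_commands=["conda activate evaluator"],
)
print(script)
```

Compare this sketch with the output of `job_script()` on a started backend to see the options your queue system actually receives.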

Methods

__init__([minimum_number_of_workers, …])

Constructs a new BaseDaskJobQueueBackend object.

job_script()

Returns the job script that dask will use to submit workers.

start()

Start the calculation backend.

stop()

Stop the calculation backend.

submit_task(function, *args, **kwargs)

Submit a task to the compute resources managed by this backend.

Attributes

started

Returns whether this backend has been started yet.

job_script()[source]

Returns the job script that dask will use to submit workers. The backend must be started before calling this function.

Returns

The job script.

Return type

str

start()[source]

Start the calculation backend.

submit_task(function, *args, **kwargs)[source]

Submit a task to the compute resources managed by this backend.

Parameters

function (function) – The function to run.

Returns

Returns a future object which will eventually point to the results of the submitted task.

Return type

Future
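
The future returned by submit_task follows the standard submit-and-wait pattern. As a minimal sketch of that pattern (using the standard library's ThreadPoolExecutor in place of a started backend, since no HPC queue is assumed here):

```python
from concurrent.futures import ThreadPoolExecutor


def evaluate(x):
    # Stand-in for an expensive estimation task submitted to the backend.
    return x * x


with ThreadPoolExecutor(max_workers=1) as executor:
    # Analogous to backend.submit_task(evaluate, 4) on a started backend.
    future = executor.submit(evaluate, 4)
    result = future.result()  # blocks until the task completes

print(result)
```

With the real backend, the same submit/result flow applies, except the work runs on workers launched through the HPC queuing system.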

property started

Returns whether this backend has been started yet.

Type

bool

stop()

Stop the calculation backend.
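
Taken together, start(), stop(), and the started property follow a simple lifecycle. The sketch below uses a hypothetical simplified class (not the library's implementation) to show the expected state transitions:

```python
# Hypothetical minimal backend illustrating the start/stop lifecycle
# and the `started` property; not the openff-evaluator implementation.
class SketchBackend:
    def __init__(self):
        self._started = False

    @property
    def started(self):
        """Returns whether this backend has been started yet."""
        return self._started

    def start(self):
        # A real backend would spin up the dask cluster here.
        self._started = True

    def stop(self):
        # A real backend would tear down workers and the scheduler here.
        self._started = False


backend = SketchBackend()
assert not backend.started
backend.start()
assert backend.started
backend.stop()
assert not backend.started
```

Note that methods such as job_script() require the backend to have been started first, so calls should be bracketed between start() and stop().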