openaleph_procrastinate.manage
This is temporary and should use the procrastinate Django models at some point in the future.
`Db`

Get a db manager object for the current procrastinate database URI.
Source code in `openaleph_procrastinate/manage/db.py`
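The reference does not show the constructor signature; as an illustration only, here is a minimal sketch of obtaining the manager, assuming `Db` can be imported from `openaleph_procrastinate.manage.db` and instantiated without arguments (both assumptions taken from this page, not verified API):

```python
# Hypothetical sketch: the import path and zero-argument constructor
# are assumptions based on this page layout, not verified API.
from openaleph_procrastinate.manage.db import Db

# Per the docstring, the manager binds to the current procrastinate
# database URI, so no connection string is passed explicitly here.
db = Db()
```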
`cancel_jobs(dataset=None, batch=None, queue=None, task=None)`

Cancel jobs by the given criteria.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `dataset` | `str \| None` | The dataset to filter for | `None` |
| `batch` | `str \| None` | The job batch to filter for | `None` |
| `queue` | `str \| None` | The queue name to filter for | `None` |
| `task` | `str \| None` | The task name to filter for | `None` |
Source code in `openaleph_procrastinate/manage/db.py`
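For illustration, a hedged usage sketch, assuming `cancel_jobs` is a method on the `Db` manager above (the page layout suggests this, but it is an assumption) and that the filter values are placeholders:

```python
from openaleph_procrastinate.manage.db import Db  # import path assumed

db = Db()
# Cancel all jobs for one dataset on one queue; "my_dataset" and
# "index" are made-up filter values, and filters left as None
# (batch, task) match everything.
db.cancel_jobs(dataset="my_dataset", queue="index")
```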
`configure()`

Create the procrastinate tables and schema (if they do not exist) and add our index optimizations (if they do not exist).
Source code in `openaleph_procrastinate/manage/db.py`
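A short sketch of the intended one-time setup, again assuming `configure` is called on a `Db` instance:

```python
from openaleph_procrastinate.manage.db import Db  # import path assumed

db = Db()
# Both steps are documented as idempotent ("if not exists"), so
# running this on every startup or deployment should be safe.
db.configure()
```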
`iterate_jobs(dataset=None, batch=None, queue=None, task=None, status=None, min_ts=None, max_ts=None, flatten_entities=False)`

Iterate over job objects from the database by the given criteria.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `dataset` | `str \| None` | The dataset to filter for | `None` |
| `batch` | `str \| None` | The job batch to filter for | `None` |
| `queue` | `str \| None` | The queue name to filter for | `None` |
| `task` | `str \| None` | The task name to filter for | `None` |
| `status` | `Status \| None` | The status to filter for | `None` |
| `min_ts` | `datetime \| None` | Start timestamp (earliest event found in …) | `None` |
| `max_ts` | `datetime \| None` | End timestamp (latest event found in …) | `None` |
| `flatten_entities` | `bool \| None` | If true, yield a job for each entity found in the source job | `False` |
Yields:

| Type | Description |
|---|---|
| `Jobs` | Iterator of `Job` |
Source code in `openaleph_procrastinate/manage/db.py`
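A hedged sketch of filtering jobs by dataset and time window; the method-on-`Db` call and the string status value are assumptions (the signature expects a `Status`, whose concrete values are not shown on this page):

```python
from datetime import datetime, timedelta, timezone

from openaleph_procrastinate.manage.db import Db  # import path assumed

db = Db()
# Jobs for one dataset whose earliest event falls within the last
# day; "failed" stands in for a real Status value, which may be an
# enum rather than a plain string.
since = datetime.now(timezone.utc) - timedelta(days=1)
for job in db.iterate_jobs(dataset="my_dataset", status="failed", min_ts=since):
    print(job)
```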
`iterate_status(dataset=None, batch=None, queue=None, task=None, status=None, active_only=True)`

Iterate through an aggregated job status summary. Each row is an aggregation over `dataset`, `batch`, `queue_name`, `task_name`, `status` and includes the job count and the timestamps of the first and last events.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `dataset` | `str \| None` | The dataset to filter for | `None` |
| `batch` | `str \| None` | The job batch to filter for | `None` |
| `queue` | `str \| None` | The queue name to filter for | `None` |
| `task` | `str \| None` | The task name to filter for | `None` |
| `status` | `Status \| None` | The status to filter for | `None` |
| `active_only` | `bool \| None` | Only include "active" datasets (at least 1 job in 'todo' or 'doing') | `True` |
Yields:

| Type | Description |
|---|---|
| `Rows` | Tuples with the fields in this order: dataset, batch, queue_name, task_name, status, job count, timestamp of the first event, timestamp of the last event |
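To illustrate the documented row order, a sketch that unpacks each yielded tuple (the method-on-`Db` call and import path are assumptions, as above):

```python
from openaleph_procrastinate.manage.db import Db  # import path assumed

db = Db()
# active_only=True (the default) restricts the output to datasets
# with at least one job in 'todo' or 'doing'; the unpacking below
# follows the documented field order.
for dataset, batch, queue, task, status, count, first_ts, last_ts in db.iterate_status():
    print(f"{dataset} {queue}/{task} [{status}]: {count} jobs, {first_ts} to {last_ts}")
```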