run_task

ndmapper.iraf_task.run_task(taskname, inputs, outputs=None, prefix=None, suffix=None, comb_in=False, MEF_ext=True, path_param=None, reprocess=None, logfile=None, **params)

Wrapper to run an IRAF task on one or more DataFile objects and collect the results.

Parameters:

taskname : str
    Name of the IRAF task to run, optionally (if the package is already loaded) including the package hierarchy (eg. gemini.gmos.gfcube).
inputs : dict
    Dictionary mapping task parameter names to one or more input DataFile instances, passed to the task one at a time (when comb_in is False) or all together (when comb_in is True). All the named files must already exist.
outputs : dict or None, optional
    Specify output parameter name(s) and their filename value(s), if any, as dictionary keys & values. The files named must not already exist. The equivalent dictionary is returned as output, after applying any automatic modifications. The dict values may have any type that can be converted to a string (eg. FileName), either individually or in a sequence.
    If the prefix and/or suffix parameter is set, the value(s) may name a parameter from the inputs dictionary, prefixed with '@' (eg. '@infiles'), to create the output names based on the corresponding input names, or prefixed with '!' to create a single output name based on the first input name (see the sketch following this parameter list).

prefix : str or None, optional
    A default prefix to add to existing input filename(s) to form the output filename(s), if the output parameter value(s) specify this behaviour.
suffix : str or None, optional
    A suffix to add to existing input filename(s) to form the output filename(s), if the output parameter value(s) specify this behaviour.
comb_in : bool
    Pass all the inputs to the task at once, in a single call (eg. for stacking), instead of the default behaviour of calling the task on each file in turn (per input parameter)? This parameter is named obscurely to avoid conflicts with IRAF tasks (eg. imcombine.combine).
MEF_ext : bool
    Specify and iterate over FITS image extensions, for tasks expecting simple FITS as input (eg. core IRAF tasks; default True)? This should be set to False for tasks that already handle multi-extension FITS files (eg. from Gemini IRAF) or when the input files are already simple FITS. The extension names to be iterated over are defined when the input DataFile instances are created, defaulting to values kept in the package config dictionary. The number of extensions named config['labels']['data'] (eg. 'SCI') must be the same for every input file, or equal to one (in which case that single extension will be re-used for each iteration over the extensions of the other input files).

path_param : str or None, optional
    Name of a task parameter (eg. rawpath) used to specify the location of the input files, instead of including the full path in each input filename as usual. The DataFile paths are automatically stripped from the inputs and supplied to the task via this parameter instead. Output files are still assumed to reside in the current working directory unless otherwise specified. To use this option, all inputs containing a directory path (other than '') must reside in the same directory; if this is not the case and the IRAF task does not understand paths in filenames, the user will need to copy the input files to the current directory before running it. The user must not supply filenames in another directory to input parameters for which the IRAF task does not apply the path_param prefix (usually input calibrations), or the task will fail to find some or all of the inputs.
reprocess : bool or None, optional
    Overwrite or re-use existing output files? The default of None redirects to the value of the package configuration variable ndmapper.config['reprocess'], which in turn defaults to None, causing an error to be raised where output files already exist before processing. If the value is set to True, any existing files will be deleted before the IRAF task is run, while False is a no-op for existing files, causing them to be re-used as output without repeating the processing. If the IRAF task produces any intermediate files that are not included in outputs (ie. that are unknown to run_task), it is the caller's responsibility to delete them before repeating any processing. The task is always (re-)run where there are no outputs.
logfile : str or dict or None, optional
    Optional filename for logging output, which includes any IRAF log contents (delimited by run_task status lines) or Python exceptions. The default of None causes the value of the package configuration variable ndmapper.config['logfile'] to be used, which itself defaults to None (in which case no log is written).
    Where only a filename string is provided and the IRAF task has a parameter named logfile, the corresponding IRAF log will be captured automatically; otherwise only status information and Python exceptions will get recorded. Where a single-item dictionary is provided, the key specifies an alternative IRAF "log file" parameter name to use and the value again specifies the output filename [not implemented]. A special key string of STDOUT will cause the standard output to be captured in the log (instead of any IRAF log file contents) [unimplemented].
    The IRAF log contents relevant to each file are also appended to the corresponding output DataFile's log attribute (whether or not a log file is specified here and written to disk).
params : dict
    Named IRAF task parameters. These may include ancillary input or output filenames that don't need to be tracked by the main inputs & outputs dictionaries.
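The following minimal sketch illustrates a typical per-file call. The task (imcopy), the example filenames and the exact DataFile construction are illustrative assumptions, not something specified on this page:

    from ndmapper.data import DataFile       # assumed import path
    from ndmapper.iraf_task import run_task

    # Hypothetical inputs: two existing files, processed one at a time (comb_in=False).
    raw = [DataFile('S20130115S0101.fits'), DataFile('S20130115S0102.fits')]

    # '@input' names each output after the corresponding input, with the prefix 'r'
    # prepended (eg. 'rS20130115S0101.fits'); those files must not already exist.
    result = run_task('imcopy',              # core IRAF task, so MEF_ext applies
                      inputs={'input': raw},
                      outputs={'output': '@input'},
                      prefix='r',
                      MEF_ext=True,          # iterate over the named data extensions
                      logfile='example.log') # or fall back to config['logfile'] if None

Since the (possibly modified) outputs dictionary is what gets returned, the results of this example would be retrieved as result['output'].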
Returns:

    The DataFile objects named by the parameter outputs, containing the results from IRAF.
Notes

There is no support for mixing MEF- and simple FITS files in a single call.

In principle, prefix & suffix could conflict with any like-named IRAF parameters that have a different meaning from the Gemini convention (ie. a string that is added to the start/end of the input filename to provide an output name), but there appears to be only one such case in Ureka (sqiid.getcoo); likewise for MEF_ext (which has no known uses elsewhere), path_param and reprocess. It is assumed that the widely-used logfile will only ever have the usual meaning.
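A second sketch, with the same caveats about illustrative names, shows the comb_in behaviour for a stacking task, an explicit output filename and an ordinary IRAF parameter passed through **params:

    from ndmapper.data import DataFile       # assumed import path
    from ndmapper.iraf_task import run_task

    # Hypothetical already-processed inputs, all passed to a single imcombine call.
    stack_in = [DataFile('rS20130115S0101.fits'), DataFile('rS20130115S0102.fits')]

    result = run_task('imcombine',
                      inputs={'input': stack_in},
                      outputs={'output': 'stack.fits'},  # single, explicit output name
                      comb_in=True,                      # one call with all the inputs
                      reprocess=True,                    # delete any existing stack.fits first
                      combine='average')                 # extra IRAF parameter via **params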