AutoMLRun Class
Represents an automated ML experiment run in Azure Machine Learning.
The AutoMLRun class can be used to manage a run, check run status, and retrieve run details once an AutoML run is submitted. For more information on working with experiment runs, see the Run class.
Initialize an AutoML run.
Constructor
AutoMLRun(experiment, run_id, **kwargs)
Parameters
Name | Description |
---|---|
experiment (Required) | The experiment associated with the run. |
run_id (Required) | The ID of the run. |
Remarks
An AutoMLRun object is returned when you use the submit method of an experiment.
To retrieve a run that has already started, use the following code:

from azureml.core.workspace import Workspace
from azureml.train.automl.run import AutoMLRun

ws = Workspace.from_config()
experiment = ws.experiments['my-experiment-name']
automl_run = AutoMLRun(experiment, run_id='AutoML_9fe201fe-89fd-41cc-905f-2f41a5a98883')
Methods
Method | Description |
---|---|
cancel | Cancel an AutoML run. Return True if the AutoML run was canceled successfully. |
cancel_iteration | Cancel a particular child run. |
complete | Complete an AutoML Run. |
continue_experiment | Continue an existing AutoML experiment. |
fail | Fail an AutoML Run. Optionally set the Error property of the run with a message or exception passed to error_details. |
get_best_child | Return the child run with the best score for this AutoML Run. |
get_guardrails | Print and return detailed results from running Guardrail verification. |
get_output | Return the run with the corresponding best pipeline that has already been tested. If no input parameters are provided, get_output returns the best pipeline according to the primary metric. |
get_run_sdk_dependencies | Get the SDK run dependencies for a given run. |
pause | Return True if the AutoML run was paused successfully. This method is not implemented. |
register_model | Register the model with the AzureML ACI service. |
resume | Return True if the AutoML run was resumed successfully. This method is not implemented. |
retry | Return True if the AutoML run was retried successfully. This method is not implemented. |
summary | Get a table containing a summary of algorithms attempted and their scores. |
wait_for_completion | Wait for the completion of this run. Returns the status object after the wait. |
cancel
Cancel an AutoML run.
Return True if the AutoML run was canceled successfully.
cancel()
Returns
Type | Description |
---|---|
None |
cancel_iteration
Cancel a particular child run.
cancel_iteration(iteration)
Parameters
Name | Description |
---|---|
iteration (Required) | The iteration to cancel. |
Returns
Type | Description |
---|---|
None |
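For example, the following sketch cancels either a single child run or the whole run; it assumes automl_run is the AutoMLRun object retrieved as shown in the Remarks section above, and the iteration number 3 is only illustrative:

# Cancel one child iteration of the parent run (the iteration number is illustrative).
automl_run.cancel_iteration(3)

# Or cancel the parent run and all remaining iterations.
automl_run.cancel()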
complete
Complete an AutoML Run.
complete(**kwargs)
Returns
Type | Description |
---|---|
None |
continue_experiment
Continue an existing AutoML experiment.
continue_experiment(X=None, y=None, sample_weight=None, X_valid=None, y_valid=None, sample_weight_valid=None, data=None, label=None, columns=None, cv_splits_indices=None, spark_context=None, experiment_timeout_hours=None, experiment_exit_score=None, iterations=None, show_output=False, training_data=None, validation_data=None, **kwargs)
Parameters
Name | Description |
---|---|
X | Training features. Default value: None |
y | Training labels. Default value: None |
sample_weight | Sample weights for training data. Default value: None |
X_valid | Validation features. Default value: None |
y_valid | Validation labels. Default value: None |
sample_weight_valid | Validation set sample weights. Default value: None |
data | Training features and label. Default value: None |
label | Label column in data. Default value: None |
columns | A list of allowed columns in the data to use as features. Default value: None |
cv_splits_indices | Indices where to split training data for cross-validation. Each row is a separate cross fold, and within each cross fold provide two arrays: the first with the indices of samples to use for training and the second with the indices to use for validation, that is, [[t1, v1], [t2, v2], ...], where t1 is the training indices for the first cross fold and v1 is the validation indices for the first cross fold. Default value: None |
spark_context (<xref:SparkContext>) | Spark context; only applicable when used inside an Azure Databricks/Spark environment. Default value: None |
experiment_timeout_hours | How many additional hours to run this experiment for. Default value: None |
experiment_exit_score | If specified, indicates that the experiment is terminated when this value is reached. Default value: None |
iterations | How many additional iterations to run for this experiment. Default value: None |
show_output | Flag indicating whether to print output to the console. Default value: False |
training_data (<xref:azureml.dataprep.Dataflow> or DataFrame) | Input training data. Default value: None |
validation_data (<xref:azureml.dataprep.Dataflow> or DataFrame) | Validation data. Default value: None |
Returns
Type | Description |
---|---|
The AutoML parent run. |
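As a minimal sketch, assuming automl_run is an existing AutoMLRun and train_df is a hypothetical pandas DataFrame holding both the features and a label column named 'target', the experiment could be extended like this:

# Run up to 10 more iterations for at most one additional hour, streaming progress to the console.
parent_run = automl_run.continue_experiment(
    data=train_df,               # hypothetical DataFrame with features and label
    label='target',              # hypothetical label column name
    iterations=10,
    experiment_timeout_hours=1,
    show_output=True)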
fail
Fail an AutoML Run.
Optionally set the Error property of the run with a message or exception passed to error_details.
fail(error_details=None, error_code=None, _set_status=True, **kwargs)
Parameters
Name | Description |
---|---|
error_details (str or BaseException) | Optional details of the error. Default value: None |
error_code | Optional error code of the error for the error classification. Default value: None |
_set_status | Indicates whether to send the status event for tracking. Default value: True |
get_best_child
Return the child run with the best score for this AutoML Run.
get_best_child(metric: str | None = None, onnx_compatible: bool = False, **kwargs: Any) -> Run
Parameters
Name | Description |
---|---|
metric | The metric to use when selecting the best run to return. Defaults to the primary metric. Default value: None |
onnx_compatible | Whether to only return runs that generated ONNX models. Default value: False |
kwargs (Required) | |
Returns
Type | Description |
---|---|
AutoML Child Run. |
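As a minimal sketch, assuming automl_run is a completed AutoMLRun (the metric name 'accuracy' is only illustrative), the best child run can be retrieved and inspected like this:

# Best child run according to the primary metric of the experiment.
best_child = automl_run.get_best_child()
print(best_child.id)
print(best_child.get_metrics())      # metrics logged for that child run

# Best child run according to an explicitly named metric instead.
best_by_accuracy = automl_run.get_best_child(metric='accuracy')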
get_guardrails
Print and return detailed results from running Guardrail verification.
get_guardrails(to_console: bool = True) -> Dict[str, Any]
Parameters
Name | Description |
---|---|
to_console | Indicates whether to write the verification results to the console. Default value: True |
Returns
Type | Description |
---|---|
A dictionary of verifier results. |
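As a minimal sketch, assuming automl_run is a completed AutoMLRun, the guardrail results can be printed to the console and then inspected programmatically:

# Print the guardrail verification results and keep them as a dictionary.
verifier_results = automl_run.get_guardrails(to_console=True)
for name, result in verifier_results.items():
    print(name, result)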
get_output
Return the run with the corresponding best pipeline that has already been tested.
If no input parameters are provided, get_output returns the best pipeline according to the primary metric. Alternatively, you can use either the iteration or metric parameter to retrieve a particular iteration or the best run per provided metric, respectively.
get_output(iteration: int | None = None, metric: str | None = None, return_onnx_model: bool = False, return_split_onnx_model: SplitOnnxModelName | None = None, **kwargs: Any) -> Tuple[Run, Any]
Parameters
Name | Description |
---|---|
iteration | The iteration number of the corresponding run and fitted model to return. Default value: None |
metric | The metric to use when selecting the best run and fitted model to return. Default value: None |
return_onnx_model | Whether to return the converted ONNX model instead; applicable only if the run produced ONNX-compatible models. Default value: False |
return_split_onnx_model | The type of the split ONNX model to return. Default value: None |
Returns
Type | Description |
---|---|
Run, <xref:Model> | The run, and the corresponding fitted model. |
Remarks
If you'd like to inspect the preprocessor(s) and algorithm (estimator) used, you can do so through Model.steps, similar to sklearn.pipeline.Pipeline.steps.
For instance, the code below shows how to retrieve the estimator.
best_run, model = parent_run.get_output()
estimator = model.steps[-1]
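The other retrieval modes follow the same pattern. Here is a minimal sketch, assuming parent_run is a completed AutoMLRun; the iteration number and the metric name 'r2_score' are only illustrative, and the ONNX call assumes ONNX-compatible models were enabled for the run:

# Model fitted in a specific iteration.
third_run, third_model = parent_run.get_output(iteration=3)

# Best run and model for an explicitly named metric rather than the primary metric.
best_r2_run, best_r2_model = parent_run.get_output(metric='r2_score')

# ONNX model instead of the fitted Python model.
onnx_run, onnx_model = parent_run.get_output(return_onnx_model=True)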
get_run_sdk_dependencies
Get the SDK run dependencies for a given run.
get_run_sdk_dependencies(iteration=None, check_versions=True, **kwargs)
Parameters
Name | Description |
---|---|
iteration | The iteration number of the fitted run to be retrieved. If None, retrieve the parent environment. Default value: None |
check_versions | If True, check the versions against the current environment. If False, pass. Default value: True |
Returns
Type | Description |
---|---|
The dictionary of dependencies retrieved from RunHistory. |
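As a minimal sketch, assuming automl_run is an AutoMLRun with at least one finished iteration, the recorded dependencies can be listed and compared with the current environment:

# Dependencies recorded in RunHistory for the first iteration; versions are also
# checked against the currently installed packages because check_versions=True.
dependencies = automl_run.get_run_sdk_dependencies(iteration=0, check_versions=True)
for package, version in dependencies.items():
    print(package, version)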
pause
Return True if the AutoML run was paused successfully.
This method is not implemented.
pause()
register_model
Register the model with the AzureML ACI service.
register_model(model_name=None, description=None, tags=None, iteration=None, metric=None)
Parameters
Name | Description |
---|---|
model_name | The name of the model being deployed. Default value: None |
description | The description for the model being deployed. Default value: None |
tags | Tags for the model being deployed. Default value: None |
iteration | Override for which model to deploy. Deploys the model for a given iteration. Default value: None |
metric | Override for which model to deploy. Deploys the best model for a different metric. Default value: None |
Returns
Type | Description |
---|---|
<xref:Model> | The registered model object. |
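As a minimal sketch, assuming automl_run completed successfully (the model name, description, and tags are illustrative), the best model can be registered like this:

# Register the best model found by the run under an illustrative name.
registered_model = automl_run.register_model(
    model_name='my-automl-model',
    description='Best model from the AutoML run',
    tags={'area': 'demo'})
print(registered_model.name, registered_model.version)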
resume
Return True if the AutoML run was resumed successfully.
This method is not implemented.
resume()
Exceptions
Type | Description |
---|---|
NotImplementedError | |
retry
Return True if the AutoML run was retried successfully.
This method is not implemented.
retry()
summary
Get a table containing a summary of algorithms attempted and their scores.
summary()
Returns
Type | Description |
---|---|
Pandas DataFrame containing AutoML model statistics. |
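As a minimal sketch, assuming automl_run has at least one finished iteration:

# DataFrame with one row per algorithm attempted and its score.
summary_df = automl_run.summary()
print(summary_df)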
wait_for_completion
Wait for the completion of this run.
Returns the status object after the wait.
wait_for_completion(show_output=False, wait_post_processing=False)
Parameters
Name | Description |
---|---|
show_output | Indicates whether to show the run output on sys.stdout. Default value: False |
wait_post_processing | Indicates whether to wait for the post processing to complete after the run completes. Default value: False |
Returns
Type | Description |
---|---|
The status object. |
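As a minimal sketch, assuming automl_run is an in-progress AutoMLRun:

# Block until the run finishes, streaming iteration output to sys.stdout.
status = automl_run.wait_for_completion(show_output=True)
print(status)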