Autologging is a feature in the Vertex AI SDK that automatically logs parameters and metrics from model-training runs to Vertex AI Experiments. This can save time and effort by eliminating the need to manually log this data. Autologging only supports parameter and metric logging.
Autolog data
There are two options for autologging data to Vertex AI Experiments.
- Let the Vertex AI SDK automatically create ExperimentRun resources for you.
- Specify the ExperimentRun resource that you'd like autologged parameters and metrics to be written to.
Auto-created
The Vertex AI SDK for Python handles creating ExperimentRun resources for you. Automatically created ExperimentRun resources have a run name in the following format: {ml-framework-name}-{timestamp}-{uid}, for example: "tensorflow-2023-01-04-16-09-20-86a88".
The following sample uses the init method from the aiplatform package.
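A minimal sketch of what this looks like, assuming the google-cloud-aiplatform package; the experiment name, TensorBoard instance, project ID, and location below are placeholders to replace with your own values:

```python
from google.cloud import aiplatform

aiplatform.init(
    experiment="my-autolog-experiment",                # experiment_name (placeholder)
    experiment_tensorboard="my-tensorboard-instance",  # optional TensorBoard instance (placeholder)
    project="my-project-id",                           # project (placeholder)
    location="us-central1",                            # location
)

# Enable autologging; each subsequent training run gets an auto-created ExperimentRun.
aiplatform.autolog()

# ... your model-training code goes here ...

# Turn autologging off when you're done.
aiplatform.autolog(disable=True)
```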
- experiment_name: Provide a name for your experiment. You can find your list of experiments in the Google Cloud console by selecting Experiments in the section nav.
- experiment_tensorboard: (Optional) Provide a name for your Vertex AI TensorBoard instance.
- project: Your project ID. You can find your project IDs in the Google Cloud console welcome page.
- location: See the list of available locations.
User-specified
Provide your own ExperimentRun names to have metrics and parameters from multiple model-training runs logged to the same ExperimentRun. Metrics and parameters from model training are logged to the current run, which you set by calling aiplatform.start_run("your-run-name"), until aiplatform.end_run() is called.
The following sample uses the init method from the aiplatform package.
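A minimal sketch of autologging to a run you name yourself; the experiment, run, project, location, and TensorBoard values below are placeholders:

```python
from google.cloud import aiplatform

aiplatform.init(
    experiment="my-autolog-experiment",                # experiment_name (placeholder)
    project="my-project-id",                           # project (placeholder)
    location="us-central1",                            # location
    experiment_tensorboard="my-tensorboard-instance",  # optional (placeholder)
)

# Enable autologging and open the run that the data should be written to.
aiplatform.autolog()
aiplatform.start_run("my-run-name")                    # run_name (placeholder)

# ... your model-training code goes here ...

# Close the run and turn autologging off.
aiplatform.end_run()
aiplatform.autolog(disable=True)
```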
- experiment_name: Provide the name of your experiment. You can find your list of experiments in the Google Cloud console by selecting Experiments in the section nav.
- run_name: Provide a name for your experiment run.
- project: Your project ID. You can find your project IDs in the Google Cloud console welcome page.
- location: See the list of available locations.
- experiment_tensorboard: (Optional) Provide a name for your Vertex AI TensorBoard instance.
Vertex AI SDK autologging uses MLflow's autologging in its implementation. Evaluation metrics and parameters from the following frameworks are logged to your ExperimentRun when autologging is enabled (see the sketch after this list).
- Fastai
- Gluon
- Keras
- LightGBM
- PyTorch Lightning
- Scikit-learn
- Spark
- Statsmodels
- XGBoost
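As an illustration, a minimal sketch assuming scikit-learn (one of the supported frameworks) and that autologging has already been enabled as shown in the samples above; fitting the model is all that's needed for its parameters and training metrics to be captured:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Assumes aiplatform.autolog() has already been called, as in the samples above.
X, y = load_iris(return_X_y=True)

# Fitting a model from a supported framework is enough: parameters such as
# max_iter and training metrics are written to the ExperimentRun via MLflow's
# autologging integration.
model = LogisticRegression(max_iter=200)
model.fit(X, y)
```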
View autologged parameters and metrics
Use the Vertex AI SDK for Python to compare runs and get run data. The Google Cloud console also provides an easy way to compare these runs.
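As a sketch, assuming the placeholder experiment name used above, aiplatform.get_experiment_df returns an experiment's runs, with their logged parameters and metrics, as a pandas DataFrame:

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project-id", location="us-central1")  # placeholders

# One row per run in the experiment, including autologged parameters and metrics.
runs_df = aiplatform.get_experiment_df(experiment="my-autolog-experiment")
print(runs_df)
```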