describe_data_deletion_job

Personalize.Client.describe_data_deletion_job(**kwargs)

Describes the data deletion job created by CreateDataDeletionJob, including the job status.

See also: AWS API Documentation

Request Syntax

response = client.describe_data_deletion_job(
    dataDeletionJobArn='string'
)
Parameters:

dataDeletionJobArn (string) –

[REQUIRED]

The Amazon Resource Name (ARN) of the data deletion job.

Return type:

dict

Returns:

Response Syntax

{
    'dataDeletionJob': {
        'jobName': 'string',
        'dataDeletionJobArn': 'string',
        'datasetGroupArn': 'string',
        'dataSource': {
            'dataLocation': 'string'
        },
        'roleArn': 'string',
        'status': 'string',
        'numDeleted': 123,
        'creationDateTime': datetime(2015, 1, 1),
        'lastUpdatedDateTime': datetime(2015, 1, 1),
        'failureReason': 'string'
    }
}

Response Structure

  • (dict) –

    • dataDeletionJob (dict) –

      Information about the data deletion job, including the status.

      The status is one of the following values:

      • PENDING

      • IN_PROGRESS

      • COMPLETED

      • FAILED

      • jobName (string) –

        The name of the data deletion job.

      • dataDeletionJobArn (string) –

        The Amazon Resource Name (ARN) of the data deletion job.

      • datasetGroupArn (string) –

        The Amazon Resource Name (ARN) of the dataset group the job deletes records from.

      • dataSource (dict) –

        Describes the data source that contains the data to upload to a dataset, or the list of records to delete from Amazon Personalize.

        • dataLocation (string) –

          For dataset import jobs, the path to the Amazon S3 bucket where the data that you want to upload to your dataset is stored. For data deletion jobs, the path to the Amazon S3 bucket that stores the list of records to delete.

          For example:

          s3://bucket-name/folder-name/fileName.csv

          If your CSV files are in a folder in your Amazon S3 bucket and you want your import job or data deletion job to consider multiple files, you can specify the path to the folder. With a data deletion job, Amazon Personalize uses all files in the folder and any subfolders. Use the following syntax with a / after the folder name:

          s3://bucket-name/folder-name/

      • roleArn (string) –

        The Amazon Resource Name (ARN) of the IAM role that has permissions to read from the Amazon S3 data source.

      • status (string) –

        The status of the data deletion job.

        A data deletion job progresses through the following statuses:

        • PENDING > IN_PROGRESS > COMPLETED -or- FAILED

      • numDeleted (integer) –

        The number of records deleted by a COMPLETED job.

      • creationDateTime (datetime) –

        The creation date and time (in Unix time) of the data deletion job.

      • lastUpdatedDateTime (datetime) –

        The date and time (in Unix time) the data deletion job was last updated.

      • failureReason (string) –

        If a data deletion job fails, provides the reason why.
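Because the status advances from PENDING through IN_PROGRESS to COMPLETED or FAILED, callers typically poll this operation until the job reaches a terminal status. A minimal polling sketch (the `wait_for_data_deletion_job` helper and its default interval are illustrative, not part of the API):

```python
import time


def wait_for_data_deletion_job(personalize, job_arn, poll_seconds=60):
    """Poll describe_data_deletion_job until the job reaches a terminal status.

    Returns the dataDeletionJob dict on COMPLETED; raises on FAILED.
    """
    while True:
        response = personalize.describe_data_deletion_job(
            dataDeletionJobArn=job_arn
        )
        job = response['dataDeletionJob']
        status = job['status']
        if status == 'COMPLETED':
            # job['numDeleted'] holds the number of records removed
            return job
        if status == 'FAILED':
            raise RuntimeError(
                f"Data deletion job failed: {job.get('failureReason')}"
            )
        time.sleep(poll_seconds)
```

In practice, `personalize` would be `boto3.client('personalize')`; a generous poll interval is appropriate since deletion jobs can take a while to complete.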

Exceptions

  • Personalize.Client.exceptions.InvalidInputException

  • Personalize.Client.exceptions.ResourceNotFoundException
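Both exceptions are exposed as modeled exception classes on the client's `exceptions` attribute, so they can be caught directly. A small defensive-lookup sketch (the `get_deletion_job_status` helper name is hypothetical):

```python
def get_deletion_job_status(personalize, job_arn):
    """Return the data deletion job's status string, or None if the ARN is unknown."""
    try:
        response = personalize.describe_data_deletion_job(
            dataDeletionJobArn=job_arn
        )
        return response['dataDeletionJob']['status']
    except personalize.exceptions.ResourceNotFoundException:
        # No data deletion job exists with the given ARN.
        return None
```

An `InvalidInputException` (for example, a malformed ARN) is left to propagate here, since it usually indicates a caller bug rather than a recoverable condition.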