update_guardrail

Bedrock.Client.update_guardrail(**kwargs)

Updates a guardrail with the values you specify. A minimal example call is sketched after the following list.

  • Specify a name and optional description.

  • Specify messages for when the guardrail successfully blocks a prompt or a model response in the blockedInputMessaging and blockedOutputsMessaging fields.

  • Specify topics for the guardrail to deny in the topicPolicyConfig object. Each GuardrailTopicConfig object in the topicsConfig list pertains to one topic.

    • Give a name and description so that the guardrail can properly identify the topic.

    • Specify DENY in the type field.

    • (Optional) Provide up to five prompts that you would categorize as belonging to the topic in the examples list.

  • Specify filter strengths for the harmful categories defined in Amazon Bedrock in the contentPolicyConfig object. Each GuardrailContentFilterConfig object in the filtersConfig list pertains to a harmful category. For more information, see Content filters. For more information about the fields in a content filter, see GuardrailContentFilterConfig.

    • Specify the category in the type field.

    • Specify the strength of the filter for prompts in the inputStrength field and for model responses in the outputStrength field of the GuardrailContentFilterConfig.

  • (Optional) For security, include the ARN of a KMS key in the kmsKeyId field.

  • (Optional) Attach any tags to the guardrail in the tags object. For more information, see Tag resources.
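
The following sketch pulls the steps above together into a single call. It is a minimal illustration rather than part of the API reference: the region, guardrail identifier, and all field values are placeholders.

import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# Placeholder identifier and values, shown only to illustrate the request shape.
response = bedrock.update_guardrail(
    guardrailIdentifier="gr-1234567890ab",  # ID or ARN of an existing guardrail
    name="support-assistant-guardrail",
    description="Blocks investment advice and filters harmful content.",
    topicPolicyConfig={
        "topicsConfig": [
            {
                "name": "Investment advice",
                "definition": "Guidance or recommendations about investing money.",
                "examples": ["Which stocks should I buy this year?"],
                "type": "DENY",
            },
        ]
    },
    contentPolicyConfig={
        "filtersConfig": [
            {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "VIOLENCE", "inputStrength": "MEDIUM", "outputStrength": "MEDIUM"},
        ]
    },
    blockedInputMessaging="Sorry, I can't help with that request.",
    blockedOutputsMessaging="Sorry, I can't provide that response.",
)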

See also: AWS API Documentation

Request Syntax

response = client.update_guardrail(
    guardrailIdentifier='string',
    name='string',
    description='string',
    topicPolicyConfig={
        'topicsConfig': [
            {
                'name': 'string',
                'definition': 'string',
                'examples': [
                    'string',
                ],
                'type': 'DENY'
            },
        ]
    },
    contentPolicyConfig={
        'filtersConfig': [
            {
                'type': 'SEXUAL'|'VIOLENCE'|'HATE'|'INSULTS'|'MISCONDUCT'|'PROMPT_ATTACK',
                'inputStrength': 'NONE'|'LOW'|'MEDIUM'|'HIGH',
                'outputStrength': 'NONE'|'LOW'|'MEDIUM'|'HIGH'
            },
        ]
    },
    wordPolicyConfig={
        'wordsConfig': [
            {
                'text': 'string'
            },
        ],
        'managedWordListsConfig': [
            {
                'type': 'PROFANITY'
            },
        ]
    },
    sensitiveInformationPolicyConfig={
        'piiEntitiesConfig': [
            {
                'type': 'ADDRESS'|'AGE'|'AWS_ACCESS_KEY'|'AWS_SECRET_KEY'|'CA_HEALTH_NUMBER'|'CA_SOCIAL_INSURANCE_NUMBER'|'CREDIT_DEBIT_CARD_CVV'|'CREDIT_DEBIT_CARD_EXPIRY'|'CREDIT_DEBIT_CARD_NUMBER'|'DRIVER_ID'|'EMAIL'|'INTERNATIONAL_BANK_ACCOUNT_NUMBER'|'IP_ADDRESS'|'LICENSE_PLATE'|'MAC_ADDRESS'|'NAME'|'PASSWORD'|'PHONE'|'PIN'|'SWIFT_CODE'|'UK_NATIONAL_HEALTH_SERVICE_NUMBER'|'UK_NATIONAL_INSURANCE_NUMBER'|'UK_UNIQUE_TAXPAYER_REFERENCE_NUMBER'|'URL'|'USERNAME'|'US_BANK_ACCOUNT_NUMBER'|'US_BANK_ROUTING_NUMBER'|'US_INDIVIDUAL_TAX_IDENTIFICATION_NUMBER'|'US_PASSPORT_NUMBER'|'US_SOCIAL_SECURITY_NUMBER'|'VEHICLE_IDENTIFICATION_NUMBER',
                'action': 'BLOCK'|'ANONYMIZE'
            },
        ],
        'regexesConfig': [
            {
                'name': 'string',
                'description': 'string',
                'pattern': 'string',
                'action': 'BLOCK'|'ANONYMIZE'
            },
        ]
    },
    blockedInputMessaging='string',
    blockedOutputsMessaging='string',
    kmsKeyId='string'
)
Parameters:
  • guardrailIdentifier (string) –

    [REQUIRED]

    The unique identifier of the guardrail.

  • name (string) –

    [REQUIRED]

    A name for the guardrail.

  • description (string) – A description of the guardrail.

  • topicPolicyConfig (dict) –

    The topic policy to configure for the guardrail.

    • topicsConfig (list) – [REQUIRED]

      A list of policies related to topics that the guardrail should deny.

      • (dict) –

        Details about topics for the guardrail to identify and deny.

        • name (string) – [REQUIRED]

          The name of the topic to deny.

        • definition (string) – [REQUIRED]

          A definition of the topic to deny.

        • examples (list) –

          A list of prompts, each of which is an example of a prompt that can be categorized as belonging to the topic.

          • (string) –

        • type (string) – [REQUIRED]

          Specifies that the topic should be denied.

  • contentPolicyConfig (dict) –

    The content policy to configure for the guardrail.

    • filtersConfig (list) – [REQUIRED]

      Contains the type of the content filter and how strongly it should apply to prompts and model responses.

      • (dict) –

        Contains filter strengths for harmful content. Guardrails support the following content filters to detect and filter harmful user inputs and FM-generated outputs.

        • Hate – Describes language or a statement that discriminates, criticizes, insults, denounces, or dehumanizes a person or group on the basis of an identity (such as race, ethnicity, gender, religion, sexual orientation, ability, and national origin).

        • Insults – Describes language or a statement that includes demeaning, humiliating, mocking, insulting, or belittling language. This type of language is also labeled as bullying.

        • Sexual – Describes language or a statement that indicates sexual interest, activity, or arousal using direct or indirect references to body parts, physical traits, or sex.

        • Violence – Describes language or a statement that includes glorification of or threats to inflict physical pain, hurt, or injury toward a person, group or thing.

        Content filtering depends on the confidence classification of user inputs and FM responses across each of the four harmful categories. All input and output statements are classified into one of four confidence levels (NONE, LOW, MEDIUM, HIGH) for each harmful category. For example, if a statement is classified as Hate with HIGH confidence, the likelihood of the statement representing hateful content is high. A single statement can be classified across multiple categories with varying confidence levels. For example, a single statement can be classified as Hate with HIGH confidence, Insults with LOW confidence, Sexual with NONE confidence, and Violence with MEDIUM confidence.

        For more information, see Guardrails content filters.

        • type (string) – [REQUIRED]

          The harmful category that the content filter is applied to.

        • inputStrength (string) – [REQUIRED]

          The strength of the content filter to apply to prompts. As you increase the filter strength, the likelihood of filtering harmful content increases and the probability of seeing harmful content in your application reduces.

        • outputStrength (string) – [REQUIRED]

          The strength of the content filter to apply to model responses. As you increase the filter strength, the likelihood of filtering harmful content increases and the probability of seeing harmful content in your application reduces.

  • wordPolicyConfig (dict) –

    The word policy to configure for the guardrail.

    • wordsConfig (list) –

      A list of words to configure for the guardrail.

      • (dict) –

        A word to configure for the guardrail.

        • text (string) – [REQUIRED]

          Text of the word configured for the guardrail to block.

    • managedWordListsConfig (list) –

      A list of managed words to configure for the guardrail.

      • (dict) –

        The managed word list to configure for the guardrail.

        • type (string) – [REQUIRED]

          The managed word type to configure for the guardrail.

  • sensitiveInformationPolicyConfig (dict) –

    The sensitive information policy to configure for the guardrail.

    • piiEntitiesConfig (list) –

      A list of PII entities to configure for the guardrail.

      • (dict) –

        The PII entity to configure for the guardrail.

        • type (string) – [REQUIRED]

          The type of PII entity for the guardrail to detect.

        • action (string) – [REQUIRED]

          The action for the guardrail to take when the PII entity is detected.

    • regexesConfig (list) –

      A list of regular expressions to configure for the guardrail.

      • (dict) –

        The regular expression to configure for the guardrail.

        • name (string) – [REQUIRED]

          The name of the regular expression to configure for the guardrail.

        • description (string) –

          The description of the regular expression to configure for the guardrail.

        • pattern (string) – [REQUIRED]

          The regular expression pattern to configure for the guardrail.

        • action (string) – [REQUIRED]

          The action for the guardrail to take when a matching regular expression is detected.

  • blockedInputMessaging (string) –

    [REQUIRED]

    The message to return when the guardrail blocks a prompt.

  • blockedOutputsMessaging (string) –

    [REQUIRED]

    The message to return when the guardrail blocks a model response.

  • kmsKeyId (string) – The ARN of the KMS key with which to encrypt the guardrail.
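
As an illustration of the word policy and sensitive information policy shapes described above, the fragments below build the two optional dicts. The blocked word, PII choices, and regular expression are hypothetical values, not taken from this reference.

# Hypothetical word policy: one custom blocked word plus the managed profanity list.
word_policy = {
    "wordsConfig": [
        {"text": "project-obsidian"},  # placeholder internal code name to block
    ],
    "managedWordListsConfig": [
        {"type": "PROFANITY"},
    ],
}

# Hypothetical sensitive information policy: mask emails, block SSNs, and block a custom pattern.
sensitive_information_policy = {
    "piiEntitiesConfig": [
        {"type": "EMAIL", "action": "ANONYMIZE"},
        {"type": "US_SOCIAL_SECURITY_NUMBER", "action": "BLOCK"},
    ],
    "regexesConfig": [
        {
            "name": "employee-id",
            "description": "Internal employee IDs of the form EMP-000000",
            "pattern": r"EMP-\d{6}",
            "action": "BLOCK",
        },
    ],
}

# These dicts can be passed as wordPolicyConfig and sensitiveInformationPolicyConfig
# in the update_guardrail call shown earlier.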

Return type:

dict

Returns:

Response Syntax

{
    'guardrailId': 'string',
    'guardrailArn': 'string',
    'version': 'string',
    'updatedAt': datetime(2015, 1, 1)
}

Response Structure

  • (dict) –

    • guardrailId (string) –

      The unique identifier of the guardrail.

    • guardrailArn (string) –

      The ARN of the guardrail that was updated.

    • version (string) –

      The version of the guardrail.

    • updatedAt (datetime) –

      The date and time at which the guardrail was updated.
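
A short sketch of reading these fields, assuming response holds the dict returned by the call shown earlier:

guardrail_id = response["guardrailId"]
guardrail_arn = response["guardrailArn"]

# updatedAt is returned as a datetime object.
print(
    f"Updated guardrail {guardrail_id} "
    f"(version {response['version']}) at {response['updatedAt'].isoformat()}"
)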

Exceptions

  • Bedrock.Client.exceptions.ResourceNotFoundException

  • Bedrock.Client.exceptions.AccessDeniedException

  • Bedrock.Client.exceptions.ValidationException

  • Bedrock.Client.exceptions.ConflictException

  • Bedrock.Client.exceptions.InternalServerException

  • Bedrock.Client.exceptions.ServiceQuotaExceededException

  • Bedrock.Client.exceptions.ThrottlingException
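
These modeled exceptions are available on the client object, and botocore's ClientError catches anything not handled explicitly. A minimal sketch of error handling (the identifier and messages are placeholders):

from botocore.exceptions import ClientError

try:
    response = bedrock.update_guardrail(
        guardrailIdentifier="gr-1234567890ab",  # placeholder
        name="support-assistant-guardrail",
        blockedInputMessaging="Sorry, I can't help with that request.",
        blockedOutputsMessaging="Sorry, I can't provide that response.",
    )
except bedrock.exceptions.ResourceNotFoundException:
    # The identifier did not match an existing guardrail.
    print("Guardrail not found")
except bedrock.exceptions.ValidationException as err:
    # The request failed validation (for example, an unsupported filter strength).
    print(f"Invalid request: {err}")
except ClientError:
    # Any other service error, such as throttling or access denied.
    raise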