create_guardrail

Bedrock.Client.create_guardrail(**kwargs)

Creates a guardrail to block topics and to filter out harmful content.

  • Specify a name and optional description.

  • Specify messages for when the guardrail successfully blocks a prompt or a model response in the blockedInputMessaging and blockedOutputsMessaging fields.

  • Specify topics for the guardrail to deny in the topicPolicyConfig object. Each GuardrailTopicConfig object in the topicsConfig list pertains to one topic.

    • Give a name and definition so that the guardrail can properly identify the topic.

    • Specify DENY in the type field.

    • (Optional) Provide up to five prompts that you would categorize as belonging to the topic in the examples list.

  • Specify filter strengths for the harmful categories defined in Amazon Bedrock in the contentPolicyConfig object. Each GuardrailContentFilterConfig object in the filtersConfig list pertains to a harmful category. For more information, see Content filters. For more information about the fields in a content filter, see GuardrailContentFilterConfig.

    • Specify the category in the type field.

    • Specify the strength of the filter for prompts in the inputStrength field and for model responses in the outputStrength field of the GuardrailContentFilterConfig.

  • (Optional) For security, include the ARN of a KMS key in the kmsKeyId field.

  • (Optional) Attach any tags to the guardrail in the tags object. For more information, see Tag resources.
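
As an illustrative sketch of how these pieces fit together, the following boto3 call creates a guardrail with one denied topic and two content filters. The region, guardrail name, topic, messages, and tag values are placeholder assumptions rather than values from this page; the field names follow the request syntax shown below.

import boto3

# Assumed region and placeholder values; substitute your own.
bedrock = boto3.client('bedrock', region_name='us-east-1')

response = bedrock.create_guardrail(
    name='example-guardrail',
    description='Blocks investment advice and filters harmful content.',
    topicPolicyConfig={
        'topicsConfig': [
            {
                'name': 'InvestmentAdvice',
                'definition': 'Recommendations about specific securities, portfolios, or investment strategies.',
                'examples': [
                    'Which stocks should I buy this quarter?',
                ],
                'type': 'DENY'
            },
        ]
    },
    contentPolicyConfig={
        'filtersConfig': [
            {'type': 'HATE', 'inputStrength': 'HIGH', 'outputStrength': 'HIGH'},
            {'type': 'VIOLENCE', 'inputStrength': 'MEDIUM', 'outputStrength': 'MEDIUM'},
        ]
    },
    blockedInputMessaging='Sorry, I cannot help with that request.',
    blockedOutputsMessaging='Sorry, I cannot provide that response.',
    # Optional: add kmsKeyId with a KMS key ARN to encrypt the guardrail.
    tags=[
        {'key': 'project', 'value': 'guardrail-demo'},
    ],
)

print(response['guardrailId'], response['version'])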

See also: AWS API Documentation

Request Syntax

response = client.create_guardrail(
    name='string',
    description='string',
    topicPolicyConfig={
        'topicsConfig': [
            {
                'name': 'string',
                'definition': 'string',
                'examples': [
                    'string',
                ],
                'type': 'DENY'
            },
        ]
    },
    contentPolicyConfig={
        'filtersConfig': [
            {
                'type': 'SEXUAL'|'VIOLENCE'|'HATE'|'INSULTS'|'MISCONDUCT'|'PROMPT_ATTACK',
                'inputStrength': 'NONE'|'LOW'|'MEDIUM'|'HIGH',
                'outputStrength': 'NONE'|'LOW'|'MEDIUM'|'HIGH'
            },
        ]
    },
    wordPolicyConfig={
        'wordsConfig': [
            {
                'text': 'string'
            },
        ],
        'managedWordListsConfig': [
            {
                'type': 'PROFANITY'
            },
        ]
    },
    sensitiveInformationPolicyConfig={
        'piiEntitiesConfig': [
            {
                'type': 'ADDRESS'|'AGE'|'AWS_ACCESS_KEY'|'AWS_SECRET_KEY'|'CA_HEALTH_NUMBER'|'CA_SOCIAL_INSURANCE_NUMBER'|'CREDIT_DEBIT_CARD_CVV'|'CREDIT_DEBIT_CARD_EXPIRY'|'CREDIT_DEBIT_CARD_NUMBER'|'DRIVER_ID'|'EMAIL'|'INTERNATIONAL_BANK_ACCOUNT_NUMBER'|'IP_ADDRESS'|'LICENSE_PLATE'|'MAC_ADDRESS'|'NAME'|'PASSWORD'|'PHONE'|'PIN'|'SWIFT_CODE'|'UK_NATIONAL_HEALTH_SERVICE_NUMBER'|'UK_NATIONAL_INSURANCE_NUMBER'|'UK_UNIQUE_TAXPAYER_REFERENCE_NUMBER'|'URL'|'USERNAME'|'US_BANK_ACCOUNT_NUMBER'|'US_BANK_ROUTING_NUMBER'|'US_INDIVIDUAL_TAX_IDENTIFICATION_NUMBER'|'US_PASSPORT_NUMBER'|'US_SOCIAL_SECURITY_NUMBER'|'VEHICLE_IDENTIFICATION_NUMBER',
                'action': 'BLOCK'|'ANONYMIZE'
            },
        ],
        'regexesConfig': [
            {
                'name': 'string',
                'description': 'string',
                'pattern': 'string',
                'action': 'BLOCK'|'ANONYMIZE'
            },
        ]
    },
    blockedInputMessaging='string',
    blockedOutputsMessaging='string',
    kmsKeyId='string',
    tags=[
        {
            'key': 'string',
            'value': 'string'
        },
    ],
    clientRequestToken='string'
)
Parameters:
  • name (string) –

    [REQUIRED]

    The name to give the guardrail.

  • description (string) – A description of the guardrail.

  • topicPolicyConfig (dict) –

    The topic policies to configure for the guardrail.

    • topicsConfig (list) – [REQUIRED]

      A list of policies related to topics that the guardrail should deny.

      • (dict) –

        Details about topics for the guardrail to identify and deny.

        • name (string) – [REQUIRED]

          The name of the topic to deny.

        • definition (string) – [REQUIRED]

          A definition of the topic to deny.

        • examples (list) –

          A list of prompts, each of which is an example of a prompt that can be categorized as belonging to the topic.

          • (string) –

        • type (string) – [REQUIRED]

          Specifies that the topic should be denied. The only supported value is DENY.

  • contentPolicyConfig (dict) –

    The content filter policies to configure for the guardrail.

    • filtersConfig (list) – [REQUIRED]

      Contains the type of the content filter and how strongly it should apply to prompts and model responses.

      • (dict) –

        Contains filter strengths for harmful content. Guardrails support the following content filters to detect and filter harmful user inputs and FM-generated outputs.

        • Hate – Describes language or a statement that discriminates, criticizes, insults, denounces, or dehumanizes a person or group on the basis of an identity (such as race, ethnicity, gender, religion, sexual orientation, ability, and national origin).

        • Insults – Describes language or a statement that includes demeaning, humiliating, mocking, insulting, or belittling language. This type of language is also labeled as bullying.

        • Sexual – Describes language or a statement that indicates sexual interest, activity, or arousal using direct or indirect references to body parts, physical traits, or sex.

        • Violence – Describes language or a statement that includes glorification of or threats to inflict physical pain, hurt, or injury toward a person, group or thing.

        Content filtering depends on the confidence classification of user inputs and FM responses across each of the four harmful categories. All input and output statements are classified into one of four confidence levels (NONE, LOW, MEDIUM, HIGH) for each harmful category. For example, if a statement is classified as Hate with HIGH confidence, the likelihood of the statement representing hateful content is high. A single statement can be classified across multiple categories with varying confidence levels. For example, a single statement can be classified as Hate with HIGH confidence, Insults with LOW confidence, Sexual with NONE confidence, and Violence with MEDIUM confidence.

        For more information, see Guardrails content filters.

        • type (string) – [REQUIRED]

          The harmful category that the content filter is applied to.

        • inputStrength (string) – [REQUIRED]

          The strength of the content filter to apply to prompts. As you increase the filter strength, the likelihood of filtering harmful content increases and the probability of seeing harmful content in your application decreases.

        • outputStrength (string) – [REQUIRED]

          The strength of the content filter to apply to model responses. As you increase the filter strength, the likelihood of filtering harmful content increases and the probability of seeing harmful content in your application decreases.

  • wordPolicyConfig (dict) –

    The word policy to configure for the guardrail. An illustrative call that sets this policy together with the sensitive information policy appears after this parameter list.

    • wordsConfig (list) –

      A list of words to configure for the guardrail.

      • (dict) –

        A word to configure for the guardrail.

        • text (string) – [REQUIRED]

          Text of the word configured for the guardrail to block.

    • managedWordListsConfig (list) –

      A list of managed words to configure for the guardrail.

      • (dict) –

        The managed word list to configure for the guardrail.

        • type (string) – [REQUIRED]

          The managed word type to configure for the guardrail.

  • sensitiveInformationPolicyConfig (dict) –

    The sensitive information policy to configure for the guardrail.

    • piiEntitiesConfig (list) –

      A list of PII entities to configure for the guardrail.

      • (dict) –

        The PII entity to configure for the guardrail.

        • type (string) – [REQUIRED]

          The type of PII entity for the guardrail to detect.

        • action (string) – [REQUIRED]

          The action for the guardrail to take when this type of PII entity is detected.

    • regexesConfig (list) –

      A list of regular expressions to configure for the guardrail.

      • (dict) –

        The regular expression to configure for the guardrail.

        • name (string) – [REQUIRED]

          The name of the regular expression to configure for the guardrail.

        • description (string) –

          The description of the regular expression to configure for the guardrail.

        • pattern (string) – [REQUIRED]

          The regular expression pattern to configure for the guardrail.

        • action (string) – [REQUIRED]

          The action for the guardrail to take when a match to the regular expression is detected.

  • blockedInputMessaging (string) –

    [REQUIRED]

    The message to return when the guardrail blocks a prompt.

  • blockedOutputsMessaging (string) –

    [REQUIRED]

    The message to return when the guardrail blocks a model response.

  • kmsKeyId (string) – The ARN of the KMS key that you use to encrypt the guardrail.

  • tags (list) –

    The tags that you want to attach to the guardrail.

    • (dict) –

      Definition of the key/value pair for a tag.

      • key (string) – [REQUIRED]

        Key for the tag.

      • value (string) – [REQUIRED]

        Value for the tag.

  • clientRequestToken (string) –

    A unique, case-sensitive identifier to ensure that the API request completes no more than once. If this token matches a previous request, Amazon Bedrock ignores the request, but does not return an error. For more information, see Ensuring idempotency in the Amazon S3 User Guide.

    This field is autopopulated if not provided.
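
The word policy, sensitive information policy, and client request token described above can be combined in a single call. The sketch below is illustrative only: the guardrail name, blocked word, regular expression, and PII choices are assumptions, not values from this page.

import uuid

import boto3

bedrock = boto3.client('bedrock', region_name='us-east-1')  # assumed region

response = bedrock.create_guardrail(
    name='pii-and-words-guardrail',  # placeholder name
    blockedInputMessaging='Sorry, I cannot help with that request.',
    blockedOutputsMessaging='Sorry, I cannot provide that response.',
    wordPolicyConfig={
        'wordsConfig': [
            {'text': 'example-banned-phrase'},
        ],
        'managedWordListsConfig': [
            {'type': 'PROFANITY'},
        ]
    },
    sensitiveInformationPolicyConfig={
        'piiEntitiesConfig': [
            {'type': 'EMAIL', 'action': 'ANONYMIZE'},
            {'type': 'US_SOCIAL_SECURITY_NUMBER', 'action': 'BLOCK'},
        ],
        'regexesConfig': [
            {
                'name': 'employee-id',
                'description': 'Internal employee IDs such as EMP-12345.',
                'pattern': r'EMP-\d{5}',
                'action': 'ANONYMIZE'
            },
        ]
    },
    # Optional: supply your own token so retries of this exact request stay idempotent.
    clientRequestToken=str(uuid.uuid4()),
)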

Return type:

dict

Returns:

Response Syntax

{
    'guardrailId': 'string',
    'guardrailArn': 'string',
    'version': 'string',
    'createdAt': datetime(2015, 1, 1)
}

Response Structure

  • (dict) –

    • guardrailId (string) –

      The unique identifier of the guardrail that was created.

    • guardrailArn (string) –

      The ARN of the guardrail that was created.

    • version (string) –

      The version of the guardrail that was created. This value should be 1.

    • createdAt (datetime) –

      The time at which the guardrail was created.
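
A minimal sketch of reading these fields, assuming response holds the dict returned by create_guardrail:

guardrail_id = response['guardrailId']
guardrail_arn = response['guardrailArn']
version = response['version']
created_at = response['createdAt']  # a datetime object

print(f'Created guardrail {guardrail_id} (version {version}) at {created_at}')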

Exceptions

  • Bedrock.Client.exceptions.ResourceNotFoundException

  • Bedrock.Client.exceptions.AccessDeniedException

  • Bedrock.Client.exceptions.ValidationException

  • Bedrock.Client.exceptions.ConflictException

  • Bedrock.Client.exceptions.InternalServerException

  • Bedrock.Client.exceptions.TooManyTagsException

  • Bedrock.Client.exceptions.ServiceQuotaExceededException

  • Bedrock.Client.exceptions.ThrottlingException
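
These modeled exceptions are available on the client's exceptions attribute. The sketch below handles the validation and conflict cases explicitly and lets other errors fall through to the generic botocore ClientError; all argument values are placeholders.

import boto3
from botocore.exceptions import ClientError

bedrock = boto3.client('bedrock', region_name='us-east-1')  # assumed region

try:
    response = bedrock.create_guardrail(
        name='example-guardrail',  # placeholder name
        blockedInputMessaging='Sorry, I cannot help with that request.',
        blockedOutputsMessaging='Sorry, I cannot provide that response.',
        contentPolicyConfig={
            'filtersConfig': [
                {'type': 'HATE', 'inputStrength': 'HIGH', 'outputStrength': 'HIGH'},
            ]
        },
    )
except bedrock.exceptions.ValidationException as err:
    print(f'Request failed validation: {err}')
except bedrock.exceptions.ConflictException as err:
    print(f'Request conflicted with existing state: {err}')
except ClientError as err:
    # Covers the remaining service errors (access denied, throttling, quotas, and so on).
    print(f'create_guardrail failed: {err}')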