Amazon Bedrock, a fully managed service that offers a choice of high-performing foundation models (FMs) for building generative AI applications, has introduced a powerful feature called Guardrails. This feature enables customers to implement safeguards tailored to their application requirements and responsible artificial intelligence (AI) policies. Guardrails can help prevent undesirable content, block prompt attacks, and redact sensitive information to protect privacy.
Detecting hallucinations with contextual grounding checks
One of the key enhancements to Guardrails for Amazon Bedrock is the introduction of contextual grounding checks. This new policy type detects hallucinations in model responses by evaluating them against a reference source, such as your enterprise data, supplied with the request. Responses that are not grounded in the provided information, or that are irrelevant to the user’s query, are filtered out before they reach the user.
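As a minimal sketch, the snippet below creates a guardrail with a contextual grounding policy using Boto3. The guardrail name, blocked-message text, and the 0.75 thresholds are illustrative assumptions, not recommended values; tune the thresholds to your own tolerance for ungrounded or irrelevant responses.

```python
import boto3

# Control-plane client for creating and managing guardrails
bedrock = boto3.client("bedrock", region_name="us-east-1")

# Name, description, messaging, and thresholds below are illustrative only
response = bedrock.create_guardrail(
    name="contextual-grounding-guardrail",
    description="Filters responses that are ungrounded or irrelevant",
    contextualGroundingPolicyConfig={
        "filtersConfig": [
            # GROUNDING: the response must be supported by the reference source
            {"type": "GROUNDING", "threshold": 0.75},
            # RELEVANCE: the response must be relevant to the user's query
            {"type": "RELEVANCE", "threshold": 0.75},
        ]
    },
    blockedInputMessaging="Sorry, I can't answer that.",
    blockedOutputsMessaging="Sorry, the model response was blocked.",
)

print("Created guardrail:", response["guardrailId"])
```

Responses that score below the GROUNDING threshold against the reference source, or below the RELEVANCE threshold against the query, are blocked and replaced with the configured message.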
Safeguarding applications with custom and third-party foundation models
Guardrails for Amazon Bedrock now supports the ApplyGuardrail API, which lets you evaluate input prompts and model responses against your configured safeguards for any foundation model (FM), including models available outside of Amazon Bedrock. This capability enables centralized governance across all your generative AI applications, regardless of the underlying infrastructure or the source of the FM.
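The sketch below shows a standalone ApplyGuardrail call that evaluates a model response against the contextual grounding check; the guardrail identifier and the example texts are placeholders. The qualifiers mark which content item serves as the grounding source, which is the user's query, and which is the response to evaluate.

```python
import boto3

# Runtime client; ApplyGuardrail works independently of model invocation
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Guardrail ID is a placeholder; the response text being checked could come
# from any FM, including one hosted outside Amazon Bedrock
response = bedrock_runtime.apply_guardrail(
    guardrailIdentifier="your-guardrail-id",
    guardrailVersion="DRAFT",  # or a published version number such as "1"
    source="OUTPUT",  # evaluate a model response; use "INPUT" for prompts
    content=[
        # Reference document used by the contextual grounding check
        {"text": {"text": "Our return policy allows returns within 30 days.",
                  "qualifiers": ["grounding_source"]}},
        # The user's original question
        {"text": {"text": "How long do I have to return a product?",
                  "qualifiers": ["query"]}},
        # The candidate model response to evaluate
        {"text": {"text": "You can return products within 30 days of purchase.",
                  "qualifiers": ["guard_content"]}},
    ],
)

# "action" is "GUARDRAIL_INTERVENED" when any configured policy fires,
# and "NONE" when the content passes all checks
print(response["action"])
print(response.get("assessments", []))
```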
Implementing guardrails for Amazon Bedrock
This post walks through step-by-step examples of configuring and using the contextual grounding check policy, and of using the ApplyGuardrail API to safeguard applications built with custom or third-party FMs. The examples integrate Guardrails for Amazon Bedrock using the AWS SDK for Python (Boto3).
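To illustrate the overall pattern, the sketch below wraps a hypothetical call_external_model function (a stand-in for any FM hosted outside Amazon Bedrock, such as a self-hosted model or another provider's API) with ApplyGuardrail checks on both the incoming prompt and the outgoing response.

```python
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

def call_external_model(prompt: str) -> str:
    # Hypothetical placeholder: replace with your own model invocation,
    # e.g. a self-hosted model or a third-party provider's API
    raise NotImplementedError

def guarded_generate(prompt: str, guardrail_id: str, version: str) -> str:
    # 1. Screen the incoming prompt before it reaches the model
    check = bedrock_runtime.apply_guardrail(
        guardrailIdentifier=guardrail_id,
        guardrailVersion=version,
        source="INPUT",
        content=[{"text": {"text": prompt}}],
    )
    if check["action"] == "GUARDRAIL_INTERVENED":
        # Return the configured blocked-input message instead
        return check["outputs"][0]["text"]

    # 2. Call the third-party model only if the prompt passed
    answer = call_external_model(prompt)

    # 3. Screen the model response before returning it to the user
    check = bedrock_runtime.apply_guardrail(
        guardrailIdentifier=guardrail_id,
        guardrailVersion=version,
        source="OUTPUT",
        content=[{"text": {"text": answer}}],
    )
    if check["action"] == "GUARDRAIL_INTERVENED":
        return check["outputs"][0]["text"]
    return answer
```

Because the guardrail evaluation is decoupled from model invocation, the same guardrail can sit in front of every model in your stack, which is what makes the centralized governance described above possible.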
To conclude
Guardrails for Amazon Bedrock is a powerful tool that helps ensure the safety, privacy, and truthfulness of generative AI applications. With the introduction of contextual grounding checks and the ApplyGuardrail API, customers can now implement consistent safeguards across their entire generative AI ecosystem, including applications built with custom or third-party FMs. These capabilities help businesses enhance the reliability and trustworthiness of their AI-powered solutions, paving the way for responsible and innovative development practices.