Screen content for threats
The guard
API endpoint is the integration point for GenAI applications using Lakera Guard. It allows you to call on all of Lakera Guard’s defenses with a single API call.
Using guard
, you can submit the text content of an LLM interaction to Lakera Guard. The configured detectors will screen the interaction, and a flagging response will indicate whether any threats were detected, in line with your policy.
Your application can then be programmed to take mitigating action based on the flagging response, such as blocking the interaction, warning the end user, or generating an internal security alert.
Headers
Bearer authentication of the form Bearer <token>, where token is your auth token.
Request
List of messages comprising the interaction history with the LLM in OpenAI API Chat Completions format. Can be multiple messages of any role: user, assistant, or system.
Metadata tags can be attached to screening requests as an object that can contain any arbitrary key-value pairs. Common use cases include specifying the user or session ID.
Response
Contains detected PII, profanity, or custom regex matches with their locations. Only returned if payload=true in request.
List of detectors run and their results. Only returned if breakdown=true in request.
Build information. Only returned if dev_info=true in request.