Get detailed detection results
The results
endpoint screens submitted content according to the policy assigned to the specified project. It then returns the confidence level results of the detectors. It doesn’t make a flagging decision. It can be used to analyze data and calibrate detector threshold levels for policies.
You can use the results to analyze historic LLM prompt and response data without worrying about triggering alerts or affecting monitoring, as they are not logged as screening requests by Lakera Guard.
If no project ID is passed in the request, then the default Lakera Guard policy is used, which runs all Guard defenses and detectors.
Headers
Bearer authentication of the form Bearer <token>, where token is your auth token.
Request
List of messages comprising the interaction history with the LLM in OpenAI API Chat Completions format. Can be multiple messages of any role: user, assistant, or system.
Metadata tags can be attached to screening requests as an object that can contain any arbitrary key-value pairs. Common use cases include specifying the user or session ID.