Getting Started with Lakera Guard
Lakera Guard screens the content going into and coming out from LLMs and flags any threats, providing real time protection for your GenAI application and users.
Follow the steps below to detect your first prompt attack with Lakera Guard.
Create a Lakera Account
- Navigate to the Lakera platform
- Click on the Create free account button
- Enter your email address and set a secure password, or use one of the single sign on options
Create an API Key
- Navigate to the API Access page
- Click on the + Create new API key button
- Name the key
Guard Quickstart Key
- Click the Create API key button
- Copy and save the API key securely. Please note that once generated, it cannot be retrieved from your Lakera AI account for security reasons
- Open a terminal session and export your key as an environment variable (replacing
<your-api-key>
with your API key):
Detect a Prompt Injection Attack
The example code below should trigger Lakera Guard’s prompt attack and unknown links detection.
Copy and paste it into a file on your local machine and execute it from the same terminal session where you exported your API key.
Python
JavaScript
cURL
HTTPie
Other
Learn More
Integrating with the Lakera Guard API is as simple as making a POST request to the guard
endpoint for each input to and output from an LLM. Lakera Guard will screen the input or output contents and flag if any of the following threats are detected:
- Prompt attacks - detect prompt injections, jailbreaks or manipulation in user prompts or reference materials to stop LLM behavior being overridden
- PII - prevent leakage of Personally Identifiable Information (PII) in user prompts or LLM outputs
- Moderated content - detect offensive, hateful, sexual, violent and vulgar content in user prompts or LLM outputs
- Unknown links - detect links that are not from an allowed list of domains to prevent phishing and malicious links being shown to users
You can control and customize the defenses applied to your application by setting policies.
Guides
To help you learn more about the security risks Lakera Guard protects against, we’ve created some guides:
- The ELI5 Guide to Prompt Injection: Techniques, Prevention Methods & Tools
- Jailbreaking Large Language Models: Techniques, Examples, Prevention Methods
Other Resources
If you’re still looking for more:
- Test your prompt injection skills against Gandalf
- Download the LLM Security Playbook