Quickstart Guide
Go from zero to structured web data in under 5 minutes.
Prerequisites: An API key (get one free at mantisapi.com) and any HTTP client (curl, Python, Node.js, etc.)
Your First API Call
1. Scrape a web page
The simplest call — fetch the HTML content of any URL:
```bash
curl -X POST https://api.mantisapi.com/v1/scrape \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://news.ycombinator.com",
    "render_js": false
  }'
```
Response
```json
{
  "status": 200,
  "url": "https://news.ycombinator.com",
  "html": "<!DOCTYPE html><html>...",
  "text": "Hacker News\n1. Show HN: ...",
  "metadata": {
    "title": "Hacker News",
    "description": "",
    "response_time_ms": 342
  }
}
```
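Once the call returns, the JSON body can be consumed directly. A minimal Python sketch using the sample response above (field names taken verbatim from it):

```python
# The /scrape response body from the example above, as a Python dict.
scrape_response = {
    "status": 200,
    "url": "https://news.ycombinator.com",
    "html": "<!DOCTYPE html><html>...",
    "text": "Hacker News\n1. Show HN: ...",
    "metadata": {
        "title": "Hacker News",
        "description": "",
        "response_time_ms": 342,
    },
}

# Most pipelines only need the plain text and the page metadata;
# "html" is there when you want to run your own parser.
title = scrape_response["metadata"]["title"]
text = scrape_response["text"]
print(f"{title}: fetched in {scrape_response['metadata']['response_time_ms']} ms")
```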
2. Take a screenshot
Capture a full-page screenshot as PNG or JPEG:
```bash
curl -X POST https://api.mantisapi.com/v1/screenshot \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://example.com",
    "format": "png",
    "full_page": true,
    "viewport": { "width": 1280, "height": 720 }
  }'
```
Response
```json
{
  "status": 200,
  "url": "https://example.com",
  "screenshot_url": "https://cdn.mantisapi.com/screenshots/abc123.png",
  "format": "png",
  "viewport": { "width": 1280, "height": 720 }
}
```
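The request body is plain JSON, so it is easy to build programmatically. A small helper sketch in Python — the parameter names mirror the example call above; restricting `format` to png/jpeg follows the "PNG or JPEG" note and is otherwise an assumption, not documented validation:

```python
def screenshot_payload(url, fmt="png", full_page=True, width=1280, height=720):
    """Build the JSON body for POST /v1/screenshot, using the fields from the example call."""
    if fmt not in ("png", "jpeg"):
        raise ValueError(f"unsupported format: {fmt!r}, expected 'png' or 'jpeg'")
    return {
        "url": url,
        "format": fmt,
        "full_page": full_page,
        "viewport": {"width": width, "height": height},
    }

body = screenshot_payload("https://example.com")
```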
3. Extract structured data with AI
This is the magic. Define a schema, and AI extracts exactly what you need — no selectors, no regex, no maintenance:
```bash
curl -X POST https://api.mantisapi.com/v1/extract \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://news.ycombinator.com",
    "schema": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "title": { "type": "string" },
          "url": { "type": "string" },
          "points": { "type": "integer" },
          "comments": { "type": "integer" }
        }
      }
    },
    "prompt": "Extract the top 10 stories with their titles, URLs, points, and comment counts"
  }'
```
Response
```json
{
  "status": 200,
  "url": "https://news.ycombinator.com",
  "data": [
    {
      "title": "Show HN: I built an AI agent that manages my servers",
      "url": "https://example.com/show-hn-ai-server",
      "points": 342,
      "comments": 127
    },
    {
      "title": "The future of web scraping is AI-powered",
      "url": "https://example.com/ai-web-scraping",
      "points": 256,
      "comments": 89
    }
    // ... 8 more items
  ],
  "tokens_used": 1847
}
```
That's it! No CSS selectors to break. No regex to maintain. Just describe what you want, and the API extracts it. When the website redesigns, your code keeps working.
Python Example
Full working example using the requests library:
```python
import requests

API_KEY = "your_api_key_here"
BASE_URL = "https://api.mantisapi.com/v1"

# Extract product data from any e-commerce page
response = requests.post(
    f"{BASE_URL}/extract",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "url": "https://example-store.com/products",
        "schema": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "name": {"type": "string"},
                    "price": {"type": "number"},
                    "in_stock": {"type": "boolean"},
                    "rating": {"type": "number"}
                }
            }
        },
        "prompt": "Extract all products with name, price, availability, and rating"
    }
)

products = response.json()["data"]
for p in products:
    print(f"{p['name']} — ${p['price']} — {'✅' if p['in_stock'] else '❌'}")
```
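The example above assumes a successful call before indexing into `["data"]`. In practice it's worth guarding the response first; a minimal sketch — the success shape matches the sample responses in this guide, while the error path itself is an illustrative assumption, not documented API behavior:

```python
def extracted_items(resp_json):
    """Return the "data" array from an /extract response, raising on anything unexpected."""
    if resp_json.get("status") != 200 or "data" not in resp_json:
        raise RuntimeError(f"extraction failed: {resp_json}")
    return resp_json["data"]

# Usage with a response body shaped like the examples earlier in this guide:
items = extracted_items({
    "status": 200,
    "url": "https://example-store.com/products",
    "data": [{"name": "Widget", "price": 9.99}],
})
```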
Node.js Example
```javascript
const API_KEY = 'your_api_key_here';

const response = await fetch('https://api.mantisapi.com/v1/extract', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    url: 'https://news.ycombinator.com',
    schema: {
      type: 'array',
      items: {
        type: 'object',
        properties: {
          title: { type: 'string' },
          url: { type: 'string' },
          points: { type: 'integer' },
        },
      },
    },
    prompt: 'Extract the top stories',
  }),
});

const { data } = await response.json();
data.forEach(story => console.log(`${story.title} (${story.points} pts)`));
```
What's Next?
- POST /scrape — Full endpoint reference with all parameters
- POST /screenshot — Viewport options, formats, and full-page capture
- POST /extract — AI extraction schema reference and advanced prompts
- AI Extraction Guide — Best practices for schema design and prompt engineering
- LangChain Integration — Use WebPerception as a LangChain tool