🔧 Anthropic Claude Integration

Give Claude Real-Time Web Access with Tool Use

Define WebPerception as a tool for Claude. Scrape pages, take screenshots, and extract structured data, all through Claude's native tool use API.

Quick Setup

1 Install dependencies
pip install anthropic requests
2 Define the web scraping tool
import anthropic
import requests

MANTIS_API_KEY = "YOUR_MANTIS_API_KEY"

# Define the tool for Claude
tools = [
    {
        "name": "scrape_webpage",
        "description": "Scrape a webpage and return its content. Can optionally extract structured data using AI by providing a JSON schema.",
        "input_schema": {
            "type": "object",
            "properties": {
                "url": {
                    "type": "string",
                    "description": "The URL to scrape"
                },
                "extract_schema": {
                    "type": "object",
                    "description": "Optional JSON schema for AI-powered structured extraction"
                }
            },
            "required": ["url"]
        }
    }
]
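For reference, when Claude decides to call this tool, the API response contains a `tool_use` content block shaped like the dict below. The `id` and `input` values here are made-up examples; the Python SDK exposes these as attributes on a block object (`block.name`, `block.input`) rather than a raw dict:

```python
# Illustrative shape of the tool_use content block Claude emits when it
# decides to call the tool (id and input values are made up)
tool_use_block = {
    "type": "tool_use",
    "id": "toolu_example123",
    "name": "scrape_webpage",
    "input": {"url": "https://news.ycombinator.com"},
}

# Your handler receives the name and input, runs the scrape, and sends the
# result back in a tool_result block referencing the same id.
```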
3 Handle tool calls in the message loop
client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def execute_tool(name, inputs):
    if name == "scrape_webpage":
        headers = {"x-api-key": MANTIS_API_KEY}
        if inputs.get("extract_schema"):
            # AI-powered structured extraction
            resp = requests.post("https://api.mantisapi.com/v1/extract",
                headers=headers,
                json={"url": inputs["url"], "schema": inputs["extract_schema"]},
                timeout=60
            )
        else:
            # Plain scrape with JavaScript rendering
            resp = requests.post("https://api.mantisapi.com/v1/scrape",
                headers=headers,
                json={"url": inputs["url"], "render_js": True},
                timeout=60
            )
        resp.raise_for_status()
        return resp.json()
    return {"error": f"Unknown tool: {name}"}

# Send message with tools
messages = [{"role": "user", "content": "Scrape the Hacker News front page and summarize the top stories"}]

response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=4096,
    tools=tools,
    messages=messages
)

# Process tool calls until Claude produces a final answer
while response.stop_reason == "tool_use":
    messages.append({"role": "assistant", "content": response.content})

    # A single turn may contain more than one tool call
    tool_results = []
    for block in response.content:
        if block.type == "tool_use":
            result = execute_tool(block.name, block.input)
            tool_results.append({
                "type": "tool_result",
                "tool_use_id": block.id,
                "content": str(result)
            })
    messages.append({"role": "user", "content": tool_results})

    response = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=4096,
        tools=tools,
        messages=messages
    )

print("".join(b.text for b in response.content if b.type == "text"))

Why WebPerception + Claude?

🧠

Claude's Best-in-Class Reasoning

Claude excels at understanding context and nuance. Combined with WebPerception's structured extraction, you get the most accurate web data analysis possible.

🔧

Native Tool Use

WebPerception maps directly to Claude's tool use API. No adapters, no wrappers: just define the tool schema and Claude handles the rest.

๐Ÿ‘๏ธ

Vision + Screenshots

Take screenshots with WebPerception and pass them to Claude's vision capabilities. Claude can analyze page layouts, read charts, and understand visual content.
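As a sketch of that hand-off, the helper below builds a Claude vision message from a base64-encoded PNG. The function name and the idea of sourcing the image from a WebPerception screenshot call are assumptions for illustration; the `image` content-block shape is Claude's documented base64 format:

```python
import base64

def build_vision_message(screenshot_b64, question):
    """Pair a base64 PNG screenshot with a question in Claude's message format."""
    return {
        "role": "user",
        "content": [
            {
                "type": "image",
                "source": {
                    "type": "base64",
                    "media_type": "image/png",
                    "data": screenshot_b64,
                },
            },
            {"type": "text", "text": question},
        ],
    }

# A few PNG header bytes stand in for a real screenshot response here
fake_png = base64.b64encode(b"\x89PNG\r\n\x1a\n").decode()
msg = build_vision_message(fake_png, "What pricing tiers are shown on this page?")
```

Pass `msg` in the `messages` list of a normal `client.messages.create()` call and Claude analyzes the screenshot alongside the text.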

📊

Structured Output

WebPerception's AI extraction returns clean JSON. Claude processes it without parsing headaches: prices, names, dates in the exact format you need.
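For example, a hypothetical schema for a product page might look like the one below. The field names are illustrative, and the exact schema dialect the extract endpoint accepts should be confirmed against the API reference:

```python
# Hypothetical extraction schema for a product page (illustrative fields)
product_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "price": {"type": "number"},
        "in_stock": {"type": "boolean"},
    },
    "required": ["name", "price"],
}

# Claude forwards the schema through the tool's extract_schema input
tool_input = {
    "url": "https://example.com/product/123",  # placeholder URL
    "extract_schema": product_schema,
}
```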

🔒

200K Context Window

Claude's massive context window means it can process entire scraped pages without truncation. Analyze full articles, documentation, and long product pages.

⚡

No Infrastructure

One API call to WebPerception handles JavaScript rendering, proxies, and anti-bot bypasses. Focus on your Claude logic, not browser management.

Claude + WebPerception Use Cases

📰 Intelligent Content Summarizer

Claude scrapes articles, understands context, and produces nuanced summaries. Unlike basic scrapers, Claude grasps subtext, identifies bias, and extracts the real story.

# Claude will use the scrape tool automatically
messages = [{
    "role": "user",
    "content": "Read these 3 articles about the new EU AI Act and write a balanced summary highlighting key implications for startups: [url1] [url2] [url3]"
}]

๐Ÿข Company Research Agent

Build an agent that researches companies by scraping their website, LinkedIn, Crunchbase, and news articles. Claude synthesizes everything into a comprehensive brief.

# Multi-source research with structured extraction
tools_with_extract = [
    {
        "name": "scrape_webpage",
        "description": "Scrape a webpage. Use extract_schema for structured data.",
        "input_schema": { ... same as above ... }
    },
    {
        "name": "take_screenshot",
        "description": "Take a screenshot of a webpage for visual analysis.",
        "input_schema": {
            "type": "object",
            "properties": {
                "url": {"type": "string"}
            },
            "required": ["url"]
        }
    }
]
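With two tools defined, `execute_tool` needs to dispatch on the tool name. The sketch below assumes a `/v1/screenshot` endpoint path for the second tool; check your WebPerception dashboard for the real path before relying on it:

```python
import requests

MANTIS_API_KEY = "YOUR_MANTIS_API_KEY"

def execute_tool(name, inputs):
    headers = {"x-api-key": MANTIS_API_KEY}
    if name == "scrape_webpage":
        # Route to /extract when a schema is supplied, /scrape otherwise
        if inputs.get("extract_schema"):
            endpoint, payload = "extract", {"url": inputs["url"],
                                            "schema": inputs["extract_schema"]}
        else:
            endpoint, payload = "scrape", {"url": inputs["url"], "render_js": True}
        resp = requests.post(f"https://api.mantisapi.com/v1/{endpoint}",
                             headers=headers, json=payload, timeout=60)
        resp.raise_for_status()
        return resp.json()
    if name == "take_screenshot":
        # Assumed endpoint path, shown for illustration only
        resp = requests.post("https://api.mantisapi.com/v1/screenshot",
                             headers=headers, json={"url": inputs["url"]}, timeout=60)
        resp.raise_for_status()
        return resp.json()
    # Returning an error dict lets Claude recover instead of crashing the loop
    return {"error": f"Unknown tool: {name}"}
```

Returning an error payload for unknown tool names keeps the agent loop alive: Claude sees the error in the tool result and can adjust rather than the script raising.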

📋 Automated Due Diligence

Feed Claude a list of companies to evaluate. It scrapes their sites, extracts team size, funding, product details, and produces a standardized report, perfect for VCs and M&A teams.

# Extract company data into a standard schema
company_schema = {
    "company_name": "string",
    "founded_year": "number",
    "team_size": "string",
    "funding_total": "string",
    "product_description": "string",
    "pricing_model": "string",
    "key_customers": ["string"]
}
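One way to drive the evaluation is to embed the schema in the prompt so Claude passes it through the tool's `extract_schema` input for every site. The URLs below are placeholders:

```python
import json

# Informal field list from above; Claude forwards it via extract_schema
company_schema = {
    "company_name": "string",
    "founded_year": "number",
    "team_size": "string",
    "funding_total": "string",
    "product_description": "string",
    "pricing_model": "string",
    "key_customers": ["string"],
}

targets = ["https://example-startup.com", "https://example-rival.io"]  # placeholders

messages = [{
    "role": "user",
    "content": (
        "Evaluate each company below. Call scrape_webpage with this "
        "extract_schema for each site, then produce a standardized "
        "due-diligence report:\n"
        + json.dumps(company_schema, indent=2)
        + "\n" + "\n".join(targets)
    ),
}]
```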

Give Claude Eyes on the Web

100 free API calls/month. No credit card required.

Get Your API Key →