Web Scraping for Social Media & Brand Intelligence: How AI Agents Track Mentions, Sentiment & Competitors in 2026

Published March 12, 2026 · 14 min read · By Mantis Team

Brand teams spend $12,000–$120,000 per year on social listening tools (Brandwatch, Meltwater, Sprout Social) and still miss conversations happening on forums, niche communities, and review sites. Meanwhile, a single viral negative thread on Reddit can tank your brand perception overnight.

What if an AI agent could monitor every mention of your brand across Twitter/X, Reddit, Hacker News, Product Hunt, G2, Trustpilot, and YouTube, then classify sentiment, detect crises in real time, track competitor moves, and surface influencer opportunities? That's exactly what you'll build in this guide.

Using Mantis WebPerception API for scraping and GPT-4o for intelligence, you'll create a brand monitoring system that rivals enterprise tools at a fraction of the cost.

Why Traditional Social Listening Tools Fall Short

Brandwatch starts at $1,000/month. Meltwater wants $4,000–$20,000/month for full coverage. Even Sprout Social's listening add-on runs $500–$2,000/month. And here's what you get: a dashboard locked to their data sources, limited historical data, and zero customization.

The real problem? These tools cover social platforms but miss the long tail: niche developer forums, industry-specific communities, review aggregators, blog comments, and emerging platforms. For B2B and developer-focused brands, the most important conversations happen in places traditional tools don't index.

| Capability | Traditional Tools | AI Agent Approach |
|---|---|---|
| Platform coverage | Major social only | Any public website |
| Niche communities | Limited/none | Reddit, HN, forums, Discord (public) |
| Review sites | Some integrations | G2, Trustpilot, Capterra, any site |
| Sentiment analysis | Basic keyword matching | LLM-powered contextual understanding |
| Crisis detection | Volume spike alerts | Semantic crisis classification |
| Competitor analysis | Side-by-side dashboards | Deep competitive intelligence |
| Customization | Fixed dashboards | Fully programmable |
| Cost | $1K–$20K/mo | ~$29–$299/mo |

Architecture: The Brand Intelligence Pipeline

Your AI brand monitoring system follows a six-step pipeline:

  1. Source Discovery: Identify where your brand, competitors, and industry are discussed
  2. Mention Scraping: Use Mantis API to extract mentions from each source
  3. Sentiment Analysis: Classify each mention with GPT-4o (positive, negative, neutral, mixed)
  4. Intelligence Layer: Detect crises, track competitors, identify influencers
  5. Storage & Trending: SQLite for time-series analysis and historical comparisons
  6. Alerts & Reports: Real-time Slack alerts for crises, daily/weekly brand reports
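
Before diving into each step, here's a minimal orchestration sketch showing how the six stages chain together. The `Pipeline` class and the stub lambdas are illustrative placeholders; the real step functions are built in the sections below.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Pipeline:
    """Chains the pipeline stages; each step receives the output of the
    previous one, so steps stay independently testable and swappable."""
    steps: list[tuple[str, Callable]] = field(default_factory=list)

    def add(self, name: str, fn: Callable) -> "Pipeline":
        self.steps.append((name, fn))
        return self

    def run(self, seed):
        data = seed
        for name, fn in self.steps:
            data = fn(data)
        return data

# Wire the pipeline with stub steps; swap in the real scrapers and
# analyzers from Steps 2-6
pipeline = (
    Pipeline()
    .add("discover", lambda kw: [f"https://example.com/search?q={kw}"])
    .add("scrape", lambda urls: [{"content": "great tool", "url": u} for u in urls])
    .add("analyze", lambda ms: [{**m, "sentiment": "positive"} for m in ms])
)

result = pipeline.run("mantis api")
```

Keeping each stage as a plain callable makes it easy to add sources later: a new scraper is just another function that returns the same mention-dict shape.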

Step 1: Define Your Data Models

Start with structured schemas for every type of intelligence you want to extract:

from pydantic import BaseModel
from typing import Optional
from datetime import datetime
from enum import Enum

class SentimentLabel(str, Enum):
    POSITIVE = "positive"
    NEGATIVE = "negative"
    NEUTRAL = "neutral"
    MIXED = "mixed"

class MentionSource(str, Enum):
    TWITTER = "twitter"
    REDDIT = "reddit"
    HACKERNEWS = "hackernews"
    PRODUCTHUNT = "producthunt"
    G2 = "g2"
    TRUSTPILOT = "trustpilot"
    YOUTUBE = "youtube"
    BLOG = "blog"
    FORUM = "forum"
    NEWS = "news"

class SocialMention(BaseModel):
    """A single mention of a brand or keyword across any platform."""
    source: MentionSource
    platform_url: str
    author: Optional[str] = None
    author_followers: Optional[int] = None
    content: str
    timestamp: datetime
    engagement: Optional[dict] = None  # likes, shares, comments, upvotes
    reach_estimate: Optional[int] = None
    thread_url: Optional[str] = None
    parent_content: Optional[str] = None  # context if it's a reply

class SentimentAnalysis(BaseModel):
    """AI-powered sentiment classification of a mention."""
    mention_id: str
    sentiment: SentimentLabel
    confidence: float  # 0.0 to 1.0
    key_phrases: list[str]  # what drove the sentiment
    topics: list[str]  # product, pricing, support, competitor comparison
    purchase_intent: bool  # does the author show buying intent?
    crisis_flag: bool  # potential PR crisis?
    crisis_reason: Optional[str] = None
    suggested_response: Optional[str] = None

class CompetitorPost(BaseModel):
    """Tracked competitor activity across platforms."""
    competitor: str
    source: MentionSource
    content: str
    url: str
    timestamp: datetime
    post_type: str  # launch, feature, pricing, hiring, partnership
    engagement: Optional[dict] = None
    our_opportunity: Optional[str] = None  # how we can respond or capitalize

class InfluencerProfile(BaseModel):
    """Potential influencer or advocate identified from mentions."""
    name: str
    platform: MentionSource
    profile_url: str
    followers: Optional[int] = None
    mention_count: int  # how many times they've mentioned us
    avg_sentiment: float  # -1.0 to 1.0
    topics: list[str]
    engagement_rate: Optional[float] = None
    influencer_tier: str  # nano, micro, mid, macro, mega
    brand_fit_score: float  # 0.0 to 1.0
    outreach_recommended: bool

Step 2: Scrape Brand Mentions Across Platforms

Use Mantis to monitor mentions across the open web. Each platform has different structures, but Mantis handles the rendering and extraction uniformly:

import requests
import sqlite3
from datetime import datetime, timedelta

MANTIS_API_KEY = "your_mantis_api_key"
MANTIS_URL = "https://api.mantisapi.com/v1"

# Brands and keywords to monitor
BRAND_KEYWORDS = ["mantis api", "mantisapi", "mantis scraping"]
COMPETITOR_KEYWORDS = ["scrapingbee", "browserless", "apify", "firecrawl"]
INDUSTRY_KEYWORDS = ["web scraping api", "ai data extraction"]

def scrape_reddit_mentions(keyword: str, subreddits: Optional[list[str]] = None) -> list[dict]:
    """Scrape Reddit for brand mentions using Mantis AI extraction."""
    
    # Search Reddit for the keyword (URL-encode it, and scope the search
    # to specific subreddits if provided)
    query = requests.utils.quote(keyword)
    if subreddits:
        search_url = f"https://www.reddit.com/r/{'+'.join(subreddits)}/search.json?q={query}&restrict_sr=on&sort=new&t=week"
    else:
        search_url = f"https://www.reddit.com/search.json?q={query}&sort=new&t=week"
    
    response = requests.post(f"{MANTIS_URL}/scrape", json={
        "url": search_url,
        "render_js": False,
        "extract": {
            "schema": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "title": {"type": "string"},
                        "author": {"type": "string"},
                        "subreddit": {"type": "string"},
                        "score": {"type": "integer"},
                        "num_comments": {"type": "integer"},
                        "url": {"type": "string"},
                        "selftext": {"type": "string"},
                        "created_utc": {"type": "number"}
                    }
                }
            },
            "prompt": f"Extract all Reddit posts mentioning '{keyword}'. Include the full post text, author, subreddit, score, comment count, and permalink."
        }
    }, headers={"Authorization": f"Bearer {MANTIS_API_KEY}"})
    
    return response.json().get("extracted_data", [])


def scrape_hackernews_mentions(keyword: str) -> list[dict]:
    """Monitor Hacker News for brand/industry mentions."""
    
    # Query the HN Algolia search API via Mantis
    search_url = f"https://hn.algolia.com/api/v1/search_by_date?query={requests.utils.quote(keyword)}&tags=story"
    
    response = requests.post(f"{MANTIS_URL}/scrape", json={
        "url": search_url,
        "render_js": False,
        "extract": {
            "schema": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "title": {"type": "string"},
                        "author": {"type": "string"},
                        "points": {"type": "integer"},
                        "num_comments": {"type": "integer"},
                        "url": {"type": "string"},
                        "objectID": {"type": "string"},
                        "created_at": {"type": "string"}
                    }
                }
            },
            "prompt": f"Extract all Hacker News stories mentioning '{keyword}'. Include title, author, points, comment count, and URL."
        }
    }, headers={"Authorization": f"Bearer {MANTIS_API_KEY}"})
    
    return response.json().get("extracted_data", [])


def scrape_g2_reviews(product_slug: str) -> list[dict]:
    """Scrape G2 reviews for your product or competitors."""
    
    url = f"https://www.g2.com/products/{product_slug}/reviews"
    
    response = requests.post(f"{MANTIS_URL}/scrape", json={
        "url": url,
        "render_js": True,
        "wait_for": ".review-content",
        "extract": {
            "schema": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "reviewer_name": {"type": "string"},
                        "reviewer_title": {"type": "string"},
                        "company_size": {"type": "string"},
                        "rating": {"type": "number"},
                        "title": {"type": "string"},
                        "likes": {"type": "string"},
                        "dislikes": {"type": "string"},
                        "review_date": {"type": "string"},
                        "use_case": {"type": "string"}
                    }
                }
            },
            "prompt": "Extract all visible reviews. For each review, get the reviewer name, job title, company size, star rating, review title, what they like, what they dislike, date, and primary use case."
        }
    }, headers={"Authorization": f"Bearer {MANTIS_API_KEY}"})
    
    return response.json().get("extracted_data", [])


def scrape_trustpilot_reviews(company: str) -> list[dict]:
    """Monitor Trustpilot reviews for reputation tracking."""
    
    url = f"https://www.trustpilot.com/review/{company}"
    
    response = requests.post(f"{MANTIS_URL}/scrape", json={
        "url": url,
        "render_js": True,
        "extract": {
            "schema": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "reviewer": {"type": "string"},
                        "rating": {"type": "integer"},
                        "title": {"type": "string"},
                        "body": {"type": "string"},
                        "date": {"type": "string"},
                        "verified": {"type": "boolean"},
                        "reply_from_company": {"type": "boolean"}
                    }
                }
            },
            "prompt": "Extract all reviews on this page. Include reviewer name, star rating (1-5), review title, full body text, date posted, whether verified, and whether the company replied."
        }
    }, headers={"Authorization": f"Bearer {MANTIS_API_KEY}"})
    
    return response.json().get("extracted_data", [])
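
One practical note before moving to analysis: a monitoring loop that polls every 30 minutes will re-fetch most of the same posts, so deduplicate mentions before spending LLM calls on them. A minimal sketch (the `dedupe_mentions` helper and its URL-based keying are our own convention, not part of any API):

```python
def dedupe_mentions(mentions: list[dict], seen_keys: set[str]) -> list[dict]:
    """Drop mentions already processed in earlier runs.

    Keys on the mention URL when present, falling back to a content
    prefix for sources that don't return stable permalinks. Mutates
    seen_keys so it can be reused across polling cycles.
    """
    fresh = []
    for m in mentions:
        key = m.get("url") or m.get("content", "")[:80]
        if key and key not in seen_keys:
            seen_keys.add(key)
            fresh.append(m)
    return fresh
```

In a long-running agent you'd persist `seen_keys` (e.g. as a table in the same SQLite database) so restarts don't re-analyze old mentions.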

Step 3: AI-Powered Sentiment Analysis

Basic keyword-based sentiment ("good" = positive, "bad" = negative) fails on nuanced content like sarcasm, comparative reviews, and technical criticism. Use GPT-4o for contextual understanding:

from openai import OpenAI
import json

openai_client = OpenAI()

def analyze_sentiment(mention: dict, brand: str) -> SentimentAnalysis:
    """Use GPT-4o to perform deep sentiment analysis on a mention."""
    
    response = openai_client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "system",
            "content": """You are a brand intelligence analyst. Analyze the given social media mention 
            and provide detailed sentiment analysis. Be nuanced: detect sarcasm, backhanded compliments, 
            and comparative sentiment (e.g., "X is good but Y is better" is negative for X).
            
            Flag as crisis if: widespread negative sentiment, viral potential, legal/ethical issues, 
            data breach mentions, or public figure complaints."""
        }, {
            "role": "user",
            "content": f"""Analyze this mention of "{brand}":
            
            Platform: {mention.get('source', 'unknown')}
            Author: {mention.get('author', 'unknown')}
            Author Followers: {mention.get('author_followers', 'unknown')}
            Content: {mention.get('content', '')}
            Engagement: {mention.get('engagement', {})}
            Context/Parent: {mention.get('parent_content', 'N/A')}
            
            Return JSON with:
            - sentiment: positive/negative/neutral/mixed
            - confidence: 0.0-1.0
            - key_phrases: list of phrases that drove your assessment
            - topics: what aspects are discussed (product, pricing, support, reliability, comparison)
            - purchase_intent: boolean
            - crisis_flag: boolean
            - crisis_reason: string or null
            - suggested_response: brief response recommendation or null"""
        }],
        response_format={"type": "json_object"},
        temperature=0.1
    )
    
    analysis = json.loads(response.choices[0].message.content)
    return SentimentAnalysis(mention_id=mention.get('id', ''), **analysis)


def batch_analyze(mentions: list[dict], brand: str) -> list[SentimentAnalysis]:
    """Analyze a batch of mentions and flag any crises immediately."""
    
    results = []
    crises = []
    
    for mention in mentions:
        analysis = analyze_sentiment(mention, brand)
        results.append(analysis)
        
        if analysis.crisis_flag:
            crises.append((mention, analysis))
    
    # Immediately alert on crises
    if crises:
        send_crisis_alert(crises)
    
    return results
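
The loop above makes one GPT-4o call per mention, which will eventually hit rate limits on a large batch. A small backoff helper keeps the batch resilient; the name `with_retries` is our own, not part of the OpenAI SDK.

```python
import random
import time
from typing import Callable, TypeVar

T = TypeVar("T")

def with_retries(fn: Callable[[], T], max_attempts: int = 4,
                 base_delay: float = 1.0) -> T:
    """Run fn(), retrying on any exception with exponential backoff.

    Delays grow as base_delay * 2**attempt, plus up to 0.5s of jitter
    so parallel workers don't retry in lockstep. Re-raises the last
    exception once attempts are exhausted.
    """
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.5))
    raise RuntimeError("unreachable")

# Usage inside batch_analyze:
#   analysis = with_retries(lambda: analyze_sentiment(mention, brand))
```

For very large batches, also consider the OpenAI Batch API or grouping several mentions into one prompt to cut per-call overhead.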

Step 4: Competitor Intelligence Tracking

Track what competitors are doing across product launches, pricing changes, content strategy, hiring patterns, and community sentiment:

def track_competitor_launches(competitor: str) -> list[CompetitorPost]:
    """Monitor competitor for product launches, features, and announcements."""
    
    sources = [
        f"https://twitter.com/search?q={requests.utils.quote(competitor + ' launch OR announce OR feature')}&f=live",
        f"https://www.producthunt.com/search?q={competitor}",
        f"https://news.ycombinator.com/from?site={competitor}.com",
    ]
    
    all_posts = []
    
    for source_url in sources:
        response = requests.post(f"{MANTIS_URL}/scrape", json={
            "url": source_url,
            "render_js": True,
            "extract": {
                "schema": {
                    "type": "array",
                    "items": {
                        "type": "object",
                        "properties": {
                            "content": {"type": "string"},
                            "url": {"type": "string"},
                            "author": {"type": "string"},
                            "date": {"type": "string"},
                            "engagement": {"type": "object"},
                            "post_type": {
                                "type": "string",
                                "enum": ["launch", "feature", "pricing", "hiring", "partnership", "content", "other"]
                            }
                        }
                    }
                },
                "prompt": f"Extract posts about {competitor}. Classify each as: launch (new product/major release), feature (new capability), pricing (price changes), hiring (job posts/team growth), partnership (integrations/collabs), content (blog/tutorial), or other."
            }
        }, headers={"Authorization": f"Bearer {MANTIS_API_KEY}"})
        
        posts = response.json().get("extracted_data", [])
        all_posts.extend(posts)
    
    # Map the extracted fields onto the CompetitorPost model; extracted
    # "date" strings vary by source, so parse them if you need real
    # per-post timestamps
    return [
        CompetitorPost(
            competitor=competitor,
            source=MentionSource.NEWS,  # placeholder; refine per source URL
            content=p.get("content", ""),
            url=p.get("url", ""),
            timestamp=datetime.now(),
            post_type=p.get("post_type", "other"),
            engagement=p.get("engagement"),
            our_opportunity=None,
        )
        for p in all_posts
    ]


def competitive_share_of_voice(brands: list[str], period_days: int = 7) -> dict:
    """Calculate share of voice across platforms for multiple brands."""
    
    conn = sqlite3.connect("brand_intel.db")
    
    results = {}
    total_mentions = 0
    
    for brand in brands:
        cursor = conn.execute("""
            SELECT COUNT(*) as mentions,
                   AVG(CASE sentiment 
                       WHEN 'positive' THEN 1.0 
                       WHEN 'neutral' THEN 0.0 
                       WHEN 'negative' THEN -1.0 
                       ELSE 0.0 END) as avg_sentiment,
                   SUM(reach_estimate) as total_reach
            FROM mentions
            WHERE brand = ? 
            AND timestamp > datetime('now', ?)
        """, (brand, f"-{period_days} days"))
        
        row = cursor.fetchone()
        results[brand] = {
            "mentions": row[0],
            "avg_sentiment": round(row[1] or 0, 2),
            "total_reach": row[2] or 0
        }
        total_mentions += row[0]
    
    # Calculate share of voice percentage
    for brand in results:
        results[brand]["share_of_voice"] = round(
            (results[brand]["mentions"] / total_mentions * 100) if total_mentions > 0 else 0, 1
        )
    
    conn.close()
    return results

Step 5: Crisis Detection & Response

The most valuable feature of any brand monitoring system is early crisis detection. A viral complaint thread can go from 10 upvotes to the front page in hours. Your agent should detect and alert within minutes:

def detect_crisis(mentions: list[dict]) -> list[dict]:
    """Detect potential brand crises based on mention patterns."""
    
    conn = sqlite3.connect("brand_intel.db")
    conn.row_factory = sqlite3.Row  # lets us access columns by name below
    
    # Check 1: Sudden volume spike (3x normal)
    cursor = conn.execute("""
        SELECT COUNT(*) FROM mentions
        WHERE brand = ? AND timestamp > datetime('now', '-1 hour')
    """, (BRAND_KEYWORDS[0],))
    recent_count = cursor.fetchone()[0]
    
    cursor = conn.execute("""
        SELECT AVG(hourly_count) FROM (
            SELECT COUNT(*) as hourly_count
            FROM mentions
            WHERE brand = ? AND timestamp > datetime('now', '-7 days')
            GROUP BY strftime('%Y-%m-%d %H', timestamp)
        )
    """, (BRAND_KEYWORDS[0],))
    avg_hourly = cursor.fetchone()[0] or 1
    
    volume_spike = recent_count > (avg_hourly * 3)
    
    # Check 2: Negative sentiment surge
    cursor = conn.execute("""
        SELECT 
            COUNT(CASE WHEN sentiment = 'negative' THEN 1 END) * 100.0 / COUNT(*) as neg_pct
        FROM mentions
        WHERE brand = ? AND timestamp > datetime('now', '-2 hours')
    """, (BRAND_KEYWORDS[0],))
    neg_pct = cursor.fetchone()[0] or 0
    
    sentiment_surge = neg_pct > 60  # Over 60% negative is unusual
    
    # Check 3: High-reach negative mentions
    cursor = conn.execute("""
        SELECT * FROM mentions
        WHERE brand = ? 
        AND sentiment = 'negative'
        AND reach_estimate > 10000
        AND timestamp > datetime('now', '-4 hours')
    """, (BRAND_KEYWORDS[0],))
    high_reach_negatives = cursor.fetchall()
    
    conn.close()
    
    crises = []
    if volume_spike and sentiment_surge:
        crises.append({
            "type": "volume_sentiment_spike",
            "severity": "high",
            "detail": f"{recent_count} mentions in last hour ({avg_hourly:.0f} avg), {neg_pct:.0f}% negative"
        })
    
    for mention in high_reach_negatives:
        crises.append({
            "type": "high_reach_negative",
            "severity": "medium",
            "detail": f"Negative mention with {mention['reach_estimate']:,} reach",
            "url": mention["url"]
        })
    
    return crises


def send_crisis_alert(crises: list[tuple]) -> None:
    """Send immediate Slack alert for detected crises."""
    
    blocks = [{
        "type": "header",
        "text": {"type": "plain_text", "text": "🚨 BRAND CRISIS DETECTED"}
    }]
    
    for mention, analysis in crises:
        blocks.append({
            "type": "section",
            "text": {
                "type": "mrkdwn",
                "text": f"*Platform:* {mention.get('source', 'unknown')}\n"
                        f"*Author:* {mention.get('author', 'unknown')} "
                        f"({mention.get('author_followers', '?')} followers)\n"
                        f"*Content:* {mention.get('content', '')[:200]}...\n"
                        f"*Crisis Reason:* {analysis.crisis_reason}\n"
                        f"*Suggested Response:* {analysis.suggested_response}"
            }
        })
    
    requests.post("https://hooks.slack.com/services/YOUR/WEBHOOK/URL", json={
        "text": "🚨 Brand crisis detected: immediate attention required",
        "blocks": blocks
    })

Step 6: Influencer Discovery & Scoring

Identify people who already talk about your brand or industry and score them for partnership potential:

def discover_influencers(brand: str, min_mentions: int = 2) -> list[InfluencerProfile]:
    """Find potential brand advocates and influencers from mention data."""
    
    conn = sqlite3.connect("brand_intel.db")
    
    cursor = conn.execute("""
        SELECT 
            author,
            source,
            author_profile_url,
            MAX(author_followers) as followers,
            COUNT(*) as mention_count,
            AVG(CASE sentiment 
                WHEN 'positive' THEN 1.0 
                WHEN 'neutral' THEN 0.0 
                WHEN 'negative' THEN -1.0 
                ELSE 0.0 END) as avg_sentiment,
            GROUP_CONCAT(DISTINCT topic) as topics
        FROM mentions
        WHERE brand = ?
        GROUP BY author, source
        HAVING COUNT(*) >= ?
        ORDER BY mention_count DESC, followers DESC
    """, (brand, min_mentions))
    
    influencers = []
    for row in cursor.fetchall():
        followers = row[3] or 0
        
        # Classify tier
        if followers >= 1_000_000: tier = "mega"
        elif followers >= 100_000: tier = "macro"
        elif followers >= 10_000: tier = "mid"
        elif followers >= 1_000: tier = "micro"
        else: tier = "nano"
        
        # Calculate brand fit score
        fit_score = min(1.0, (
            (0.3 if row[5] > 0.5 else 0.1) +  # positive sentiment weight
            (0.3 * min(row[4] / 10, 1.0)) +     # mention frequency weight
            (0.2 if tier in ["micro", "mid"] else 0.1) +  # mid-tier preferred
            (0.2 if "product" in (row[6] or "") else 0.0)  # product mentions
        ))
        
        influencers.append(InfluencerProfile(
            name=row[0],
            platform=row[1],
            profile_url=row[2] or "",
            followers=followers,
            mention_count=row[4],
            avg_sentiment=round(row[5], 2),
            topics=(row[6] or "").split(","),
            engagement_rate=None,
            influencer_tier=tier,
            brand_fit_score=round(fit_score, 2),
            outreach_recommended=fit_score > 0.6 and row[5] > 0.3
        ))
    
    conn.close()
    return influencers

Step 7: Automated Reporting & Alerts

Generate daily and weekly brand intelligence reports automatically:

def generate_daily_report(brand: str) -> str:
    """Generate a daily brand intelligence summary using GPT-4o."""
    
    conn = sqlite3.connect("brand_intel.db")
    
    # Get today's stats
    cursor = conn.execute("""
        SELECT 
            COUNT(*) as total_mentions,
            COUNT(CASE WHEN sentiment = 'positive' THEN 1 END) as positive,
            COUNT(CASE WHEN sentiment = 'negative' THEN 1 END) as negative,
            COUNT(CASE WHEN sentiment = 'neutral' THEN 1 END) as neutral,
            SUM(reach_estimate) as total_reach,
            COUNT(CASE WHEN purchase_intent = 1 THEN 1 END) as purchase_intents
        FROM mentions
        WHERE brand = ? AND timestamp > datetime('now', '-24 hours')
    """, (brand,))
    
    stats = cursor.fetchone()
    
    # Get top mentions by engagement
    cursor = conn.execute("""
        SELECT source, author, content, engagement, sentiment, url
        FROM mentions
        WHERE brand = ? AND timestamp > datetime('now', '-24 hours')
        ORDER BY reach_estimate DESC
        LIMIT 10
    """, (brand,))
    top_mentions = cursor.fetchall()
    
    # Get competitor comparison
    sov = competitive_share_of_voice([brand] + COMPETITOR_KEYWORDS[:3])
    
    conn.close()
    
    # Generate narrative summary with GPT-4o
    response = openai_client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "system",
            "content": "You are a brand intelligence analyst. Write concise, actionable daily reports."
        }, {
            "role": "user",
            "content": f"""Generate a daily brand intelligence report for "{brand}":
            
            TODAY'S METRICS:
            - Total mentions: {stats[0]}
            - Positive: {stats[1]} | Negative: {stats[2]} | Neutral: {stats[3]}
            - Total reach: {stats[4] or 0:,}
            - Purchase intents detected: {stats[5]}
            
            TOP MENTIONS: {json.dumps([dict(zip(['source','author','content','engagement','sentiment','url'], m)) for m in top_mentions], indent=2)}
            
            SHARE OF VOICE: {json.dumps(sov, indent=2)}
            
            Format as a Slack message with sections: Summary, Key Mentions, Competitor Activity, Recommended Actions."""
        }],
        temperature=0.3
    )
    
    return response.choices[0].message.content


# Schedule with APScheduler or cron
def run_daily_monitoring():
    """Main monitoring loop โ€” run every 30 minutes."""
    
    # 1. Scrape all sources
    all_mentions = []
    for keyword in BRAND_KEYWORDS:
        all_mentions.extend(scrape_reddit_mentions(keyword))
        all_mentions.extend(scrape_hackernews_mentions(keyword))
    
    # 2. Analyze sentiment
    analyses = batch_analyze(all_mentions, BRAND_KEYWORDS[0])
    
    # 3. Store in SQLite
    store_mentions(all_mentions, analyses)
    
    # 4. Check for aggregate crisis patterns (volume/sentiment spikes);
    # detect_crisis returns pattern dicts rather than (mention, analysis)
    # pairs, so post them to Slack directly
    for crisis in detect_crisis(all_mentions):
        requests.post("https://hooks.slack.com/services/YOUR/WEBHOOK/URL", json={
            "text": f"🚨 {crisis['type']} ({crisis['severity']}): {crisis['detail']}"
        })
    
    # 5. Generate the daily report once, during the 9 AM run
    # (the loop fires every 30 minutes, so guard against a 9:30 repeat)
    now = datetime.now()
    if now.hour == 9 and now.minute < 30:
        report = generate_daily_report(BRAND_KEYWORDS[0])
        post_to_slack(report, channel="#brand-intelligence")
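
The run loop calls `store_mentions` and `post_to_slack`, which haven't been defined yet. Here is a minimal sketch of both; the `mentions` table columns are inferred from the SQL queries used throughout this guide, and the function is shown with plain dicts (if you pass the Pydantic models, call `.model_dump()` on them first).

```python
import json
import sqlite3

SCHEMA = """
CREATE TABLE IF NOT EXISTS mentions (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    brand TEXT,
    source TEXT,
    author TEXT,
    author_profile_url TEXT,
    author_followers INTEGER,
    content TEXT,
    url TEXT,
    engagement TEXT,          -- JSON blob: likes, shares, upvotes
    reach_estimate INTEGER,
    sentiment TEXT,           -- positive / negative / neutral / mixed
    topic TEXT,
    purchase_intent INTEGER,  -- 0 or 1
    timestamp TEXT DEFAULT CURRENT_TIMESTAMP
)
"""

def store_mentions(mentions, analyses, brand="mantis api",
                   db_path="brand_intel.db") -> int:
    """Persist scraped mentions with their sentiment labels.
    Returns the number of rows inserted."""
    conn = sqlite3.connect(db_path)
    conn.execute(SCHEMA)
    inserted = 0
    for m, a in zip(mentions, analyses):
        conn.execute(
            """INSERT INTO mentions
               (brand, source, author, author_followers, content, url,
                engagement, reach_estimate, sentiment, purchase_intent)
               VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)""",
            (brand, m.get("source"), m.get("author"), m.get("author_followers"),
             m.get("content"), m.get("url"), json.dumps(m.get("engagement") or {}),
             m.get("reach_estimate"), a.get("sentiment"),
             int(bool(a.get("purchase_intent")))),
        )
        inserted += 1
    conn.commit()
    conn.close()
    return inserted

def post_to_slack(text: str, channel: str = "#brand-intelligence") -> None:
    """Post a plain-text message via a Slack incoming webhook."""
    import requests  # imported lazily so the storage code has no network dependency
    requests.post("https://hooks.slack.com/services/YOUR/WEBHOOK/URL",
                  json={"channel": channel, "text": text})
```

One caveat: `:memory:` databases vanish when the connection closes, so use a real file path in production and share a single connection if you query right after storing.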

💡 Pro Tip: Start with Reddit and Hacker News; they're the most valuable for B2B and developer brands because conversations are threaded, public, and searchable. Add review sites (G2, Trustpilot) next, then expand to Twitter/X and niche forums. Quality of insights matters more than volume of sources.

Advanced: Share of Voice Dashboard

Track your brand's share of voice against competitors over time. This is the metric that brand teams and CMOs care about most, and the one enterprise tools charge thousands for:

def weekly_share_of_voice_trend(brands: list[str], weeks: int = 12) -> dict:
    """Calculate weekly share of voice trends for competitive analysis."""
    
    conn = sqlite3.connect("brand_intel.db")
    trends = {brand: [] for brand in brands}
    
    for week_offset in range(weeks, 0, -1):
        week_total = 0
        week_data = {}
        
        for brand in brands:
            cursor = conn.execute("""
                SELECT COUNT(*) as mentions,
                       AVG(CASE sentiment 
                           WHEN 'positive' THEN 1.0 
                           WHEN 'neutral' THEN 0.0 
                           WHEN 'negative' THEN -1.0 
                           ELSE 0.0 END) as avg_sentiment
                FROM mentions
                WHERE brand = ?
                AND timestamp BETWEEN 
                    datetime('now', ?) AND datetime('now', ?)
            """, (brand, f"-{week_offset * 7} days", f"-{(week_offset - 1) * 7} days"))
            
            row = cursor.fetchone()
            week_data[brand] = {"mentions": row[0], "sentiment": round(row[1] or 0, 2)}
            week_total += row[0]
        
        for brand in brands:
            sov = round(week_data[brand]["mentions"] / week_total * 100, 1) if week_total else 0
            trends[brand].append({
                "week": week_offset,
                "mentions": week_data[brand]["mentions"],
                "share_of_voice": sov,
                "sentiment": week_data[brand]["sentiment"]
            })
    
    conn.close()
    return trends

Cost Comparison: Enterprise vs. AI Agent

| Platform | Monthly Cost | Coverage | Customization |
|---|---|---|---|
| Brandwatch | $1,000–$10,000 | Major social + news | Dashboard templates |
| Meltwater | $4,000–$20,000 | Social + media + print | Moderate |
| Sprout Social | $500–$2,000 | Major social only | Limited |
| Talkwalker | $3,000–$15,000 | Social + blogs + forums | Dashboard-based |
| Mention | $300–$1,200 | Social + web | Alert rules |
| AI Agent + Mantis | $29–$299 | Any public website | Fully programmable |

A mid-market brand team typically spends $3,000–$8,000/month on social listening. An AI agent with Mantis can deliver comparable coverage, plus custom sources, LLM-powered sentiment, and crisis detection, for under $300/month.
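
To sanity-check that claim for your own volume, a back-of-envelope estimator helps. All rates below are illustrative placeholders, not quoted prices; substitute your actual Mantis plan and current OpenAI pricing.

```python
def estimate_monthly_cost(
    mentions_per_day: int,
    mantis_plan: float = 99.0,        # placeholder mid-tier plan price
    tokens_per_mention: int = 900,    # prompt + completion, rough guess
    usd_per_1k_tokens: float = 0.01,  # blended placeholder LLM rate
) -> float:
    """Back-of-envelope monthly cost for the agent approach.

    LLM cost = mentions/day * 30 days * tokens per mention, priced
    per 1K tokens; the scraping plan is a flat monthly fee.
    """
    llm_cost = mentions_per_day * 30 * tokens_per_mention / 1000 * usd_per_1k_tokens
    return round(mantis_plan + llm_cost, 2)
```

At these placeholder rates, 500 mentions a day comes to roughly $234/month, which is where the "under $300" figure in the text comes from for mid-volume brands.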

⚡ Honest Assessment: Enterprise social listening tools have advantages in historical data archives (years of backfill), direct API integrations with social platforms, and compliance certifications. An AI agent approach excels at custom source coverage, nuanced LLM analysis, real-time crisis detection, and cost. For most brands, the agent approach covers 90% of needs at 5% of the cost. Use this as your primary intelligence layer and supplement with a basic tool if you need certified historical data.

Use Cases by Team

1. Brand & Marketing Teams

Monitor brand perception, track campaign performance across social channels, measure share of voice against competitors. Get daily reports with actionable insights instead of static dashboards.

2. PR & Communications

Real-time crisis detection is the killer feature. Detect negative viral content within minutes, not hours. AI-generated response recommendations let you act fast with the right tone.

3. Competitive Intelligence

Track competitor product launches, pricing changes, hiring patterns, and community sentiment. Automated weekly competitive reports that would take an analyst days to compile.

4. Influencer Marketing

Discover organic advocates who already mention your brand positively. Score influencer-brand fit algorithmically. Identify micro-influencers that enterprise tools miss because they're on niche platforms.

Start Monitoring Your Brand in Minutes

Mantis WebPerception API handles the scraping โ€” you build the intelligence layer. Extract structured brand data from any website with AI-powered extraction.

Get Your API Key →

What You've Built

Your AI-powered brand intelligence system now includes:

  - Multi-platform mention scraping (Reddit, Hacker News, G2, Trustpilot, and more)
  - GPT-4o sentiment analysis that catches sarcasm, purchase intent, and crisis signals
  - Real-time crisis detection with Slack alerts
  - Competitor launch tracking and share-of-voice analysis
  - Influencer discovery with tier classification and brand-fit scoring
  - Automated daily and weekly intelligence reports

All for the cost of a Mantis API subscription plus your OpenAI usage: typically under $300/month for comprehensive monitoring that would cost $5,000+ with enterprise tools.

Related Guides