Web Scraping for Logistics & Freight: How AI Agents Track Shipments, Rates, Routes & Supply Chain Data in 2026

Published: March 13, 2026 · 18 min read · By the Mantis Team

The global logistics industry moves $12 trillion worth of goods annually, with freight spending in the US alone exceeding $1 trillion in 2025. Yet most logistics operations still rely on manual rate checks, phone calls to carriers, and spreadsheets to track shipments across fragmented systems. Meanwhile, freight rates swing 30–50% within weeks based on fuel surcharges, capacity crunches, and port congestion events.

Enterprise visibility platforms like FourKites, project44, and Descartes charge $5,000–$30,000+ per month for real-time tracking and analytics. But the underlying data – carrier rates, port schedules, vessel positions, customs clearance times – is publicly available across hundreds of carrier websites, freight marketplaces, and government portals. AI agents powered by web scraping APIs can build equivalent intelligence at a fraction of the cost.

In this guide, you'll build a complete logistics intelligence system using Python, the Mantis WebPerception API, and GPT-4o, covering freight rate benchmarking, shipment tracking, port congestion monitoring, and carrier performance analytics.

Why Logistics Teams Need Web Scraping

Freight doesn't wait. Rates change hourly. Ships reroute around congested ports. A single customs delay can cascade through your entire supply chain. The companies that win are the ones with real-time intelligence.

Build Logistics Intelligence Agents with Mantis

Scrape freight rates, shipment tracking, port data, and carrier performance from any logistics platform with one API call. AI-powered extraction handles every carrier format.

Get Free API Key →

Architecture: The 6-Step Logistics Intelligence Pipeline

  1. Freight rate scraping – Monitor spot and contract rates across truckload, LTL, ocean, and air freight markets
  2. Shipment tracking aggregation – Unified tracking across ocean carriers, trucking companies, and parcel providers
  3. Port & terminal monitoring – Vessel queues, berth utilization, gate hours, and chassis availability
  4. Carrier performance analytics – On-time rates, transit times, claims ratios from public filings and reviews
  5. GPT-4o supply chain analysis – Predict disruptions, optimize routing, and generate cost-saving recommendations
  6. Alert delivery – Route rate spikes, delays, and congestion events to logistics teams via Slack or TMS integration
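Before diving into each step, here is a minimal sketch of how the six stages chain together. The step functions are placeholder stubs (the real implementations follow in Steps 2–5); the point is that the four scraping stages are independent and can run concurrently before analysis and alerting consume the combined snapshot.

```python
import asyncio

# Placeholder stubs standing in for the real scrapers built in Steps 2-5;
# each returns a minimal payload so the data flow between stages is visible.
async def scrape_rates():      return [{"lane": "LAX-ORD", "rate_total": 2800.0}]
async def track_shipments():   return [{"tracking": "MSKU123", "status": "in_transit"}]
async def monitor_ports():     return [{"port_code": "USLAX", "congestion_level": "moderate"}]
async def carrier_analytics(): return [{"carrier": "acme", "safety_rating": "satisfactory"}]

async def run_pipeline() -> dict:
    # Steps 1-4 are independent scrapes, so run them concurrently
    rates, shipments, ports, carriers = await asyncio.gather(
        scrape_rates(), track_shipments(), monitor_ports(), carrier_analytics()
    )
    # Steps 5-6 (LLM analysis, alert delivery) consume this combined snapshot
    return {"rates": rates, "shipments": shipments, "ports": ports, "carriers": carriers}

snapshot = asyncio.run(run_pipeline())
print(sorted(snapshot))  # ['carriers', 'ports', 'rates', 'shipments']
```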

Step 1: Define Your Logistics Data Models

from pydantic import BaseModel
from typing import Optional, List
from datetime import datetime
from enum import Enum

class FreightMode(str, Enum):
    TRUCKLOAD = "truckload"
    LTL = "ltl"
    OCEAN_FCL = "ocean_fcl"
    OCEAN_LCL = "ocean_lcl"
    AIR = "air"
    INTERMODAL = "intermodal"
    RAIL = "rail"

class FreightRate(BaseModel):
    """Freight rate quote from a carrier or marketplace."""
    lane: str  # e.g., "LAX-ORD", "Shanghai-LA"
    origin_city: str
    origin_state_country: str
    destination_city: str
    destination_state_country: str
    mode: FreightMode
    # Optional fields default to None: pydantic v2 treats Optional without
    # a default as required, and scrapers rarely populate every field
    carrier: Optional[str] = None
    rate_per_mile: Optional[float] = None  # trucking
    rate_total: float
    fuel_surcharge: Optional[float] = None
    accessorial_charges: Optional[float] = None
    currency: str = "USD"
    equipment_type: Optional[str] = None  # "dry van", "reefer", "flatbed", "20ft", "40ft HC"
    transit_days: Optional[int] = None
    valid_from: Optional[datetime] = None
    valid_to: Optional[datetime] = None
    source: str  # "dat", "truckstop", "freightos", "maersk", etc.
    scraped_at: datetime

class ShipmentStatus(BaseModel):
    """Unified shipment tracking across carriers."""
    tracking_number: str
    carrier: str
    mode: FreightMode
    status: str  # "in_transit", "delivered", "customs_hold", "delayed", "at_port"
    origin: str
    destination: str
    current_location: Optional[str] = None
    vessel_name: Optional[str] = None  # ocean
    container_number: Optional[str] = None
    estimated_arrival: Optional[datetime] = None
    actual_arrival: Optional[datetime] = None
    delay_hours: Optional[float] = None
    last_event: str
    last_event_time: datetime
    events: List[dict]  # full event history
    scraped_at: datetime
    source_url: str

class PortCongestion(BaseModel):
    """Port congestion and vessel queue data."""
    port_name: str
    port_code: str  # UN/LOCODE
    country: str
    vessels_at_anchor: int
    vessels_at_berth: int
    avg_wait_days: float
    avg_berth_days: float
    container_dwell_days: float  # avg time container sits at terminal
    gate_turn_time_minutes: Optional[int] = None
    chassis_availability_pct: Optional[float] = None
    import_volume_teu: Optional[int] = None
    export_volume_teu: Optional[int] = None
    congestion_level: str  # "low", "moderate", "high", "critical"
    scraped_at: datetime
    source: str

class CarrierPerformance(BaseModel):
    """Carrier reliability and performance metrics."""
    carrier_name: str
    carrier_mc_number: Optional[str] = None  # FMCSA MC number
    carrier_scac: Optional[str] = None
    mode: FreightMode
    on_time_pct: float
    avg_transit_days: float
    transit_variability_days: float  # std deviation
    claims_ratio_pct: Optional[float] = None
    safety_rating: Optional[str] = None  # FMCSA: satisfactory/conditional/unsatisfactory
    insurance_coverage: Optional[float] = None
    fleet_size: Optional[int] = None
    authority_status: Optional[str] = None
    review_score: Optional[float] = None  # from broker review sites
    scraped_at: datetime
    source: str
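One reason to use pydantic models rather than plain dicts: scraped values arrive as strings, and pydantic coerces them into the typed fields at construction time. A trimmed demo (FreightRateDemo is a cut-down stand-in for the FreightRate model above, not part of the pipeline):

```python
from datetime import datetime
from typing import Optional

from pydantic import BaseModel

# Cut-down stand-in for FreightRate, just to show type coercion
class FreightRateDemo(BaseModel):
    lane: str
    rate_total: float
    transit_days: Optional[int] = None
    scraped_at: datetime

r = FreightRateDemo(
    lane="LAX-ORD",
    rate_total="2850.50",              # scraped string, coerced to float
    scraped_at="2026-03-13T08:00:00",  # ISO string, coerced to datetime
)
print(r.rate_total, r.scraped_at.year)  # 2850.5 2026
```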

Step 2: Scrape Freight Rates Across Markets

from mantis import MantisClient
import asyncio

mantis = MantisClient(api_key="your-mantis-api-key")

async def scrape_truckload_rates(
    lanes: List[dict],  # [{"origin": "Los Angeles, CA", "dest": "Chicago, IL", "equipment": "dry_van"}]
    sources: List[str] = ["dat", "truckstop"]
) -> List[FreightRate]:
    """
    Scrape current spot rates from freight marketplaces.
    DAT and Truckstop publish daily rate indices by lane and equipment type.
    """
    rates = []
    
    source_urls = {
        "dat": "https://www.dat.com/industry-trends/trendlines",
        "truckstop": "https://truckstop.com/market-data/",
    }
    
    for source in sources:
        result = await mantis.scrape(
            url=source_urls[source],
            extract={
                "rate_data": [{
                    "lane": "string (origin-destination)",
                    "rate_per_mile": "number",
                    "fuel_surcharge_per_mile": "number or null",
                    "total_rate": "number",
                    "equipment": "string",
                    "volume_index": "number or null",
                    "week_over_week_change_pct": "number or null"
                }],
                "national_avg_rate_per_mile": "number",
                "diesel_price_per_gallon": "number",
                "data_date": "string"
            }
        )
        
        for lane_data in result.get("rate_data", []):
            for lane in lanes:
                lane_str = f"{lane['origin']}-{lane['dest']}"
                
                # Match scraped rows to requested lanes; without this check,
                # every scraped rate would be attributed to every lane
                scraped_lane = lane_data.get("lane", "").lower()
                if (lane["origin"].split(",")[0].lower() not in scraped_lane
                        or lane["dest"].split(",")[0].lower() not in scraped_lane):
                    continue
                
                rate = FreightRate(
                    lane=lane_str,
                    origin_city=lane["origin"].split(",")[0],
                    origin_state_country=lane["origin"].split(",")[-1].strip(),
                    destination_city=lane["dest"].split(",")[0],
                    destination_state_country=lane["dest"].split(",")[-1].strip(),
                    mode=FreightMode.TRUCKLOAD,
                    rate_per_mile=lane_data.get("rate_per_mile"),
                    rate_total=lane_data.get("total_rate", 0),
                    fuel_surcharge=lane_data.get("fuel_surcharge_per_mile"),
                    equipment_type=lane.get("equipment", "dry_van"),
                    source=source,
                    scraped_at=datetime.now()
                )
                rates.append(rate)
    
    return rates

async def scrape_ocean_rates(
    trade_lanes: List[dict]  # [{"origin_port": "Shanghai", "dest_port": "Los Angeles", "container": "40HC"}]
) -> List[FreightRate]:
    """
    Scrape ocean container rates from carrier websites and indices.
    Freightos Baltic Index (FBX) provides benchmark container rates.
    Major carriers: Maersk, MSC, CMA CGM, COSCO, Hapag-Lloyd, ONE, Evergreen.
    """
    rates = []
    
    # Freightos Baltic Index - public benchmark rates
    fbx_result = await mantis.scrape(
        url="https://fbx.freightos.com/",
        extract={
            "routes": [{
                "route_name": "string",
                "rate_usd": "number",
                "change_week": "number",
                "change_month": "number"
            }],
            "global_index": "number",
            "last_updated": "string"
        }
    )
    
    for route in fbx_result.get("routes", []):
        name = route.get("route_name", "")
        origin, _, dest = name.partition("-")
        rate = FreightRate(
            lane=name,
            origin_city=origin.strip() if dest else "",
            origin_state_country="",
            destination_city=dest.strip(),
            destination_state_country="",
            mode=FreightMode.OCEAN_FCL,
            rate_total=route.get("rate_usd", 0),
            equipment_type="40HC",
            source="freightos_fbx",
            scraped_at=datetime.now()
        )
        rates.append(rate)
    
    # Scrape individual carrier rate pages
    carrier_urls = {
        "maersk": "https://www.maersk.com/transportation-services/container-shipping",
        "msc": "https://www.msc.com/en/search-a-schedule",
        "cma_cgm": "https://www.cma-cgm.com/ebusiness/schedules"
    }
    
    for carrier, url in carrier_urls.items():
        for lane in trade_lanes:
            result = await mantis.scrape(
                url=url,
                extract={
                    "rate_usd": "number",
                    "transit_days": "number",
                    "vessel": "string or null",
                    "departure_date": "string",
                    "arrival_date": "string",
                    "surcharges": [{
                        "name": "string",
                        "amount": "number"
                    }]
                }
            )
            
            total_surcharges = sum(s.get("amount", 0) for s in result.get("surcharges", []))
            
            rate = FreightRate(
                lane=f"{lane['origin_port']}-{lane['dest_port']}",
                origin_city=lane["origin_port"],
                origin_state_country="",
                destination_city=lane["dest_port"],
                destination_state_country="",
                mode=FreightMode.OCEAN_FCL,
                carrier=carrier,
                rate_total=result.get("rate_usd", 0),
                fuel_surcharge=total_surcharges,
                equipment_type=lane.get("container", "40HC"),
                transit_days=result.get("transit_days"),
                source=carrier,
                scraped_at=datetime.now()
            )
            rates.append(rate)
    
    return rates

Detecting Rate Spikes and Market Shifts

async def detect_rate_anomalies(
    current_rates: List[FreightRate],
    historical_db: str = "logistics_intelligence.db"
) -> dict:
    """
    Compare current freight rates against historical data to detect
    significant market shifts, capacity crunches, and arbitrage opportunities.
    """
    import sqlite3
    
    conn = sqlite3.connect(historical_db)
    alerts = {"rate_spikes": [], "rate_drops": [], "capacity_signals": [], "arbitrage": []}
    
    for rate in current_rates:
        # Get 30-day average for this lane + mode
        avg = conn.execute("""
            SELECT AVG(rate_total), MIN(rate_total), MAX(rate_total),
                   AVG(rate_per_mile), COUNT(*)
            FROM rate_history
            WHERE lane = ? AND mode = ? AND equipment_type = ?
            AND scraped_at > datetime('now', '-30 days')
        """, (rate.lane, rate.mode, rate.equipment_type)).fetchone()
        
        if avg and avg[0]:
            avg_rate, min_rate, max_rate, avg_rpm, sample_count = avg
            
            # Rate spike: >15% above 30-day average (freight is volatile)
            if rate.rate_total > avg_rate * 1.15:
                alerts["rate_spikes"].append({
                    "lane": rate.lane,
                    "mode": rate.mode,
                    "current": rate.rate_total,
                    "avg_30d": round(avg_rate, 2),
                    "spike_pct": round(((rate.rate_total - avg_rate) / avg_rate) * 100, 1),
                    "carrier": rate.carrier,
                    "source": rate.source
                })
            
            # Rate drop: >15% below average (opportunity to lock in)
            if rate.rate_total < avg_rate * 0.85:
                alerts["rate_drops"].append({
                    "lane": rate.lane,
                    "mode": rate.mode,
                    "current": rate.rate_total,
                    "avg_30d": round(avg_rate, 2),
                    "drop_pct": round(((avg_rate - rate.rate_total) / avg_rate) * 100, 1),
                    "recommendation": "Consider locking contract rate at current level"
                })
        
        # Store current rate
        conn.execute(
            """INSERT INTO rate_history 
            (lane, mode, carrier, rate_total, rate_per_mile, equipment_type, source, scraped_at)
            VALUES (?, ?, ?, ?, ?, ?, ?, ?)""",
            (rate.lane, rate.mode, rate.carrier, rate.rate_total,
             rate.rate_per_mile, rate.equipment_type, rate.source,
             rate.scraped_at.isoformat())
        )
    
    conn.commit()
    conn.close()
    return alerts
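detect_rate_anomalies() assumes a rate_history table already exists. A minimal schema matching the columns it reads and writes (the exact DDL is an assumption; the article never shows it):

```python
import sqlite3

def init_rate_history(db_path: str = "logistics_intelligence.db") -> None:
    """Create the rate_history table queried by detect_rate_anomalies()."""
    conn = sqlite3.connect(db_path)
    conn.execute("""
        CREATE TABLE IF NOT EXISTS rate_history (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            lane TEXT NOT NULL,
            mode TEXT NOT NULL,
            carrier TEXT,
            rate_total REAL NOT NULL,
            rate_per_mile REAL,
            equipment_type TEXT,
            source TEXT,
            scraped_at TEXT NOT NULL
        )
    """)
    # Covering index for the 30-day average lookup by lane/mode/equipment
    conn.execute(
        "CREATE INDEX IF NOT EXISTS idx_rate_lane "
        "ON rate_history (lane, mode, equipment_type, scraped_at)"
    )
    conn.commit()
    conn.close()

init_rate_history()  # run once before the first scrape cycle
```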

Step 3: Unified Shipment Tracking

async def track_shipments(
    shipments: List[dict]  # [{"tracking": "MSKU123", "carrier": "maersk", "mode": "ocean_fcl"}]
) -> List[ShipmentStatus]:
    """
    Scrape shipment status from carrier tracking portals.
    Provides unified view across ocean, truck, and air shipments.
    """
    tracking_urls = {
        "maersk": "https://www.maersk.com/tracking/{}",
        "msc": "https://www.msc.com/track-a-shipment?query={}",
        "cma_cgm": "https://www.cma-cgm.com/ebusiness/tracking/search?SearchBy=BL&Reference={}",
        "hapag_lloyd": "https://www.hapag-lloyd.com/en/online-business/track/track-by-booking-solution.html?blno={}",
        "ups": "https://www.ups.com/track?tracknum={}",
        "fedex": "https://www.fedex.com/fedextrack/?trknbr={}",
    }
    
    statuses = []
    
    for shipment in shipments:
        carrier = shipment["carrier"]
        tracking = shipment["tracking"]
        url_template = tracking_urls.get(carrier)
        
        if not url_template:
            continue
        
        url = url_template.format(tracking)
        
        result = await mantis.scrape(
            url=url,
            extract={
                "status": "string (in_transit, delivered, customs_hold, delayed, at_port)",
                "current_location": "string",
                "vessel_name": "string or null",
                "container_number": "string or null",
                "estimated_arrival": "string (ISO date)",
                "actual_arrival": "string (ISO date) or null",
                "last_event": "string",
                "last_event_time": "string (ISO datetime)",
                "origin": "string",
                "destination": "string",
                "events": [{"description": "string", "location": "string", "timestamp": "string"}]
            }
        )
        
        # Calculate delay
        delay_hours = None
        if result.get("estimated_arrival") and result.get("actual_arrival"):
            from dateutil import parser
            eta = parser.parse(result["estimated_arrival"])
            ata = parser.parse(result["actual_arrival"])
            delay_hours = (ata - eta).total_seconds() / 3600
        
        status = ShipmentStatus(
            tracking_number=tracking,
            carrier=carrier,
            mode=shipment.get("mode", "ocean_fcl"),
            status=result.get("status", "unknown"),
            origin=result.get("origin", ""),
            destination=result.get("destination", ""),
            current_location=result.get("current_location"),
            vessel_name=result.get("vessel_name"),
            container_number=result.get("container_number"),
            estimated_arrival=result.get("estimated_arrival"),
            actual_arrival=result.get("actual_arrival"),
            delay_hours=delay_hours,
            last_event=result.get("last_event", ""),
            last_event_time=result.get("last_event_time", datetime.now().isoformat()),
            events=result.get("events", []),
            scraped_at=datetime.now(),
            source_url=url
        )
        statuses.append(status)
    
    return statuses

def identify_at_risk_shipments(statuses: List[ShipmentStatus]) -> List[dict]:
    """Flag shipments likely to miss delivery windows."""
    at_risk = []
    
    for s in statuses:
        risk_factors = []
        risk_score = 0
        
        if s.status == "customs_hold":
            risk_factors.append("Held at customs")
            risk_score += 40
        
        if s.status == "delayed":
            risk_factors.append("Carrier reports delay")
            risk_score += 30
        
        if s.delay_hours and s.delay_hours > 24:
            risk_factors.append(f"Already {s.delay_hours:.0f}h behind ETA")
            risk_score += min(int(s.delay_hours / 24) * 10, 50)
        
        if s.status == "at_port" and s.mode in ["ocean_fcl", "ocean_lcl"]:
            risk_factors.append("Sitting at port - possible congestion")
            risk_score += 20
        
        if risk_score >= 30:
            at_risk.append({
                "tracking": s.tracking_number,
                "carrier": s.carrier,
                "status": s.status,
                "risk_score": min(risk_score, 100),
                "risk_factors": risk_factors,
                "current_location": s.current_location,
                "eta": s.estimated_arrival
            })
    
    return sorted(at_risk, key=lambda x: x["risk_score"], reverse=True)

Step 4: Port Congestion & Terminal Monitoring

async def monitor_port_congestion(
    ports: List[dict]  # [{"name": "Los Angeles", "code": "USLAX"}, ...]
) -> List[PortCongestion]:
    """
    Monitor port congestion levels by scraping port authority websites,
    marine traffic data, and terminal operator portals.
    """
    port_sources = {
        "USLAX": [
            "https://www.portoflosangeles.org/business/statistics",
            "https://signal.ai/port-of-los-angeles"
        ],
        "USLGB": [
            "https://polb.com/business/port-statistics/"
        ],
        "NLRTM": [
            "https://www.portofrotterdam.com/en/logistics/cargo-figures"
        ],
        "SGSIN": [
            "https://www.mpa.gov.sg/port-marine-ops/port-statistics"
        ],
        "CNSHA": [
            "https://www.portshanghai.com.cn/en/"
        ]
    }
    
    congestion_data = []
    
    for port in ports:
        code = port["code"]
        urls = port_sources.get(code, [])
        
        for url in urls:
            result = await mantis.scrape(
                url=url,
                extract={
                    "vessels_at_anchor": "number or null",
                    "vessels_at_berth": "number or null",
                    "avg_wait_days": "number or null",
                    "avg_berth_days": "number or null",
                    "container_dwell_days": "number or null",
                    "gate_turn_time_minutes": "number or null",
                    "chassis_availability_pct": "number or null",
                    "import_teu": "number or null",
                    "export_teu": "number or null"
                }
            )
            
            # Classify congestion level
            wait_days = result.get("avg_wait_days", 0) or 0
            if wait_days > 7:
                level = "critical"
            elif wait_days > 3:
                level = "high"
            elif wait_days > 1:
                level = "moderate"
            else:
                level = "low"
            
            congestion = PortCongestion(
                port_name=port["name"],
                port_code=code,
                country=code[:2],
                vessels_at_anchor=result.get("vessels_at_anchor", 0) or 0,
                vessels_at_berth=result.get("vessels_at_berth", 0) or 0,
                avg_wait_days=result.get("avg_wait_days", 0) or 0,
                avg_berth_days=result.get("avg_berth_days", 0) or 0,
                container_dwell_days=result.get("container_dwell_days", 0) or 0,
                gate_turn_time_minutes=result.get("gate_turn_time_minutes"),
                chassis_availability_pct=result.get("chassis_availability_pct"),
                import_volume_teu=result.get("import_teu"),
                export_volume_teu=result.get("export_teu"),
                congestion_level=level,
                scraped_at=datetime.now(),
                source=url
            )
            congestion_data.append(congestion)
    
    return congestion_data

Step 5: Carrier Performance & Safety Analytics

async def scrape_carrier_safety(
    carriers: List[dict]  # [{"name": "Swift Transport", "mc": "MC-123456"}]
) -> List[CarrierPerformance]:
    """
    Scrape carrier safety data from FMCSA (Federal Motor Carrier Safety Administration).
    FMCSA SAFER system is public data โ€” safety ratings, inspections, crash data.
    """
    performances = []
    
    for carrier in carriers:
        # FMCSA SAFER system - public carrier safety data
        fmcsa_url = f"https://safer.fmcsa.dot.gov/query.asp?query_type=queryCarrierSnapshot&query_param=MC_MX&query_string={carrier['mc']}"
        
        result = await mantis.scrape(
            url=fmcsa_url,
            extract={
                "carrier_name": "string",
                "mc_number": "string",
                "dot_number": "string",
                "safety_rating": "string (satisfactory, conditional, unsatisfactory, not rated)",
                "authority_status": "string (active, inactive, revoked)",
                "insurance_on_file": "number (dollars)",
                "fleet_size_trucks": "number",
                "fleet_size_drivers": "number",
                "inspection_count_24mo": "number",
                "oos_rate_vehicle_pct": "number",
                "oos_rate_driver_pct": "number",
                "crash_count_24mo": "number",
                "fatal_crashes": "number"
            }
        )
        
        perf = CarrierPerformance(
            carrier_name=result.get("carrier_name", carrier["name"]),
            carrier_mc_number=carrier["mc"],
            mode=FreightMode.TRUCKLOAD,
            on_time_pct=0,  # not available from FMCSA, need broker data
            avg_transit_days=0,
            transit_variability_days=0,
            safety_rating=result.get("safety_rating"),
            insurance_coverage=result.get("insurance_on_file"),
            fleet_size=result.get("fleet_size_trucks"),
            authority_status=result.get("authority_status"),
            scraped_at=datetime.now(),
            source="fmcsa_safer"
        )
        performances.append(perf)
    
    return performances

async def scrape_fuel_surcharges() -> dict:
    """
    Track DOE diesel prices and carrier fuel surcharge schedules.
    Fuel surcharges are typically 20-35% of linehaul rates.
    """
    # DOE weekly diesel price - the benchmark for all FSC calculations
    doe_result = await mantis.scrape(
        url="https://www.eia.gov/petroleum/gasdiesel/",
        extract={
            "national_avg_diesel": "number ($/gallon)",
            "week_change": "number",
            "region_prices": [{
                "region": "string",
                "price": "number"
            }],
            "as_of_date": "string"
        }
    )
    
    return {
        "doe_diesel_national": doe_result.get("national_avg_diesel"),
        "week_change": doe_result.get("week_change"),
        "regions": doe_result.get("region_prices", []),
        "as_of": doe_result.get("as_of_date"),
        "scraped_at": datetime.now().isoformat()
    }
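For reference, most carrier fuel-surcharge schedules convert the DOE diesel price into a per-mile surcharge with a simple peg formula. The peg price and MPG below are typical illustrative values, not taken from any specific carrier's schedule:

```python
def fuel_surcharge_per_mile(doe_diesel: float, peg: float = 1.20, mpg: float = 6.0) -> float:
    """Classic FSC schedule: surcharge covers fuel cost above the peg price.

    (DOE price - peg) / truck MPG, floored at zero.
    """
    return max(0.0, round((doe_diesel - peg) / mpg, 3))

# DOE diesel at $3.90/gal -> (3.90 - 1.20) / 6 = $0.45 per mile
print(fuel_surcharge_per_mile(3.90))  # 0.45
```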

Step 6: AI-Powered Supply Chain Analysis & Alerts

import json

from openai import OpenAI

openai_client = OpenAI()

async def generate_logistics_intelligence(
    rates: List[FreightRate],
    shipments: List[ShipmentStatus],
    congestion: List[PortCongestion],
    rate_alerts: dict,
    at_risk: List[dict]
) -> dict:
    """
    Generate comprehensive logistics intelligence briefing
    combining rates, shipment status, congestion, and carrier data.
    """
    response = openai_client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "system",
            "content": """You are a logistics analytics expert. Analyze the following data
            and produce an actionable supply chain intelligence briefing:
            
            1. RATE INTELLIGENCE
               - Market direction: rates trending up or down?
               - Lanes with significant movement (>10%)
               - Spot vs. contract spread analysis
               - Recommendations: lock rates, go spot, or wait?
            
            2. SHIPMENT STATUS
               - At-risk shipments requiring intervention
               - Delay patterns by carrier or lane
               - Customs issues and recommended actions
            
            3. PORT & CONGESTION
               - Ports with worsening congestion
               - Routing alternatives for congested ports
               - Impact on transit times and costs
            
            4. CAPACITY OUTLOOK
               - Tight vs. loose markets by region
               - Seasonal factors approaching
               - Equipment availability concerns
            
            5. COST OPTIMIZATION
               - Mode-shift opportunities (ocean vs air, TL vs LTL)
               - Lane consolidation possibilities
               - Carrier negotiation leverage points
            
            6. TOP 3 ACTIONS THIS WEEK
               - Prioritized, specific, with expected savings
            
            Be quantitative. Include dollar amounts and percentages."""
        }, {
            "role": "user",
            "content": f"""Rate data: {len(rates)} rates tracked
            {json.dumps([r.model_dump() for r in rates[:20]], default=str)}
            
            Rate alerts: {json.dumps(rate_alerts, default=str)}
            
            Shipments: {len(shipments)} active, {len(at_risk)} at risk
            At-risk: {json.dumps(at_risk[:10], default=str)}
            
            Port congestion: {json.dumps([c.model_dump() for c in congestion], default=str)}"""
        }],
        temperature=0.2
    )
    
    return {
        "briefing": response.choices[0].message.content,
        "generated_at": datetime.now().isoformat(),
        "data_summary": {
            "rates_tracked": len(rates),
            "active_shipments": len(shipments),
            "at_risk_shipments": len(at_risk),
            "ports_monitored": len(congestion),
            "rate_spikes": len(rate_alerts.get("rate_spikes", [])),
            "rate_drops": len(rate_alerts.get("rate_drops", []))
        }
    }

async def deliver_logistics_alerts(alerts: dict, at_risk: List[dict], slack_webhook: str):
    """Route logistics alerts by urgency to ops teams."""
    import httpx
    
    msg_parts = []
    
    # Critical: at-risk shipments
    if at_risk:
        msg_parts.append("🚨 *At-Risk Shipments*\n")
        for s in at_risk[:5]:
            msg_parts.append(
                f"• *{s['tracking']}* ({s['carrier']}) - Risk: {s['risk_score']}/100\n"
                f"  {', '.join(s['risk_factors'])}\n"
                f"  📍 {s.get('current_location', 'Unknown')} | ETA: {s.get('eta', 'N/A')}\n"
            )
    
    # Rate spikes
    if alerts.get("rate_spikes"):
        msg_parts.append("\n📈 *Rate Spikes Detected*\n")
        for spike in alerts["rate_spikes"][:5]:
            msg_parts.append(
                f"• *{spike['lane']}* ({spike['mode']}): ${spike['current']:,.0f} "
                f"(↑{spike['spike_pct']}% vs 30d avg ${spike['avg_30d']:,.0f})\n"
            )
    
    # Rate opportunities
    if alerts.get("rate_drops"):
        msg_parts.append("\n📉 *Rate Drop Opportunities*\n")
        for drop in alerts["rate_drops"][:5]:
            msg_parts.append(
                f"• *{drop['lane']}*: ${drop['current']:,.0f} "
                f"(↓{drop['drop_pct']}% - {drop.get('recommendation', '')})\n"
            )
    
    if msg_parts:
        # Use a context manager so the HTTP connection is closed cleanly
        async with httpx.AsyncClient() as client:
            await client.post(slack_webhook, json={
                "text": "".join(msg_parts),
                "unfurl_links": False
            })

Advanced: Multi-Modal Route Optimization

Build a routing engine that compares total landed cost across ocean, air, truck, and intermodal options for the same shipment:

async def optimize_routing(
    shipment: dict,  # {"origin": "Shanghai", "dest": "Chicago", "weight_kg": 15000, "value_usd": 250000, "urgency": "standard"}
    rates: List[FreightRate],
    congestion: List[PortCongestion]
) -> dict:
    """
    Compare total landed cost across routing options:
    - Ocean (Shanghai → LA/Long Beach → truck/rail to Chicago)
    - Ocean (Shanghai → Savannah/Houston → truck/rail to Chicago)
    - Air (Shanghai → O'Hare direct)
    - Ocean + air split (bulk by ocean, urgent by air)
    """
    options = []
    
    # Option 1: Ocean via West Coast
    la_congestion = next((c for c in congestion if c.port_code == "USLAX"), None)
    ocean_la = next((r for r in rates if "Shanghai" in r.lane and "LA" in r.lane and r.mode == FreightMode.OCEAN_FCL), None)
    truck_la_chi = next((r for r in rates if "LA" in r.lane and "Chicago" in r.lane and r.mode == FreightMode.TRUCKLOAD), None)
    
    if ocean_la and truck_la_chi:
        port_delay = la_congestion.avg_wait_days if la_congestion else 0
        options.append({
            "route": "Shanghai → LA (ocean) → Chicago (truck)",
            "ocean_cost": ocean_la.rate_total,
            "drayage_cost": 800,  # LA port to warehouse
            "inland_cost": truck_la_chi.rate_total,
            "total_cost": ocean_la.rate_total + 800 + truck_la_chi.rate_total,
            "transit_days": (ocean_la.transit_days or 14) + port_delay + (truck_la_chi.transit_days or 4),
            "port_congestion": la_congestion.congestion_level if la_congestion else "unknown",
            "risk": "high" if port_delay > 3 else "moderate" if port_delay > 1 else "low"
        })
    
    # Option 2: Air freight (expensive but fast)
    air_rate = next((r for r in rates if "Shanghai" in r.lane and "Chicago" in r.lane and r.mode == FreightMode.AIR), None)
    if air_rate:
        options.append({
            "route": "Shanghai → Chicago (air direct)",
            "total_cost": air_rate.rate_total,
            "transit_days": air_rate.transit_days or 3,
            "port_congestion": "N/A",
            "risk": "low"
        })
    
    # Calculate cost per day of transit for comparison
    for opt in options:
        opt["cost_per_day"] = round(opt["total_cost"] / opt["transit_days"], 2) if opt["transit_days"] else 0
        # Factor in inventory carrying cost
        daily_carrying = shipment["value_usd"] * 0.15 / 365  # 15% annual carrying cost
        opt["total_landed_cost"] = round(opt["total_cost"] + (daily_carrying * opt["transit_days"]), 2)
    
    return {
        "shipment": shipment,
        "options": sorted(options, key=lambda x: x.get("total_landed_cost", float("inf"))),
        "recommendation": min(options, key=lambda x: x.get("total_landed_cost", float("inf"))) if options else None
    }
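The inventory carrying adjustment in optimize_routing() is what often flips the ocean-vs-air decision on high-value freight. A standalone sketch of the same arithmetic with illustrative numbers:

```python
def total_landed_cost(freight_cost: float, transit_days: float,
                      shipment_value: float, annual_carrying_rate: float = 0.15) -> float:
    """Freight cost plus inventory carrying cost accrued during transit."""
    daily_carrying = shipment_value * annual_carrying_rate / 365
    return round(freight_cost + daily_carrying * transit_days, 2)

# Illustrative $250k shipment: cheap-but-slow ocean vs expensive-but-fast air
ocean = total_landed_cost(6500, 22, 250_000)
air = total_landed_cost(12000, 3, 250_000)
print(ocean, air)  # 8760.27 12308.22
```

Even with 22 days of carrying cost added, ocean still wins here; the gap narrows as shipment value rises.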

Cost Comparison: AI Agents vs. Logistics Platforms

Platform | Monthly Cost | Best For
FourKites | $5,000–$25,000 | Real-time visibility, predictive ETAs, carrier collaboration
project44 | $5,000–$30,000 | Multi-modal visibility, ocean tracking, data quality
Descartes | $3,000–$15,000 | Customs compliance, routing, fleet management
FreightWaves SONAR | $1,000–$10,000 | Market data, tender volumes, rate forecasting
Flexport | Variable | Digital freight forwarding, integrated visibility
Turvo | $2,000–$8,000 | Collaboration platform, TMS for 3PLs
AI Agent + Mantis | $29–$299 | Rate benchmarking, tracking, port data, carrier safety – fully custom

Honest caveat: Enterprise platforms like FourKites and project44 have deep EDI/API integrations with carriers, providing real-time GPS-level tracking that web scraping cannot replicate. Their predictive ETA models are trained on billions of shipment data points. The AI agent approach excels at rate benchmarking, port congestion monitoring, FMCSA safety data, and market intelligence – the external data layer that complements carrier-integrated visibility platforms. For shippers spending under $10M/year on freight, an AI agent covers 80–90% of intelligence needs at 5–10% of the cost.

Use Cases by Logistics Segment

1. Third-Party Logistics Providers (3PLs)

Rate benchmarking across carriers to ensure competitive pricing for customers. Track shipments across multiple carriers in a single dashboard without expensive per-carrier EDI setups. Monitor carrier safety ratings to manage risk. Analyze lane profitability with real-time market rate comparisons.

2. Shippers & Manufacturers

Monitor inbound shipments from suppliers across ocean, truck, and air. Detect delays before they impact production lines. Benchmark freight spending against market rates to identify overcharges. Track port congestion at key import gateways and pre-plan routing alternatives.
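The overcharge check mentioned above can be a one-liner once you have scraped market averages. A hypothetical helper (the name and the 5% tolerance are illustrative, not from the pipeline):

```python
def benchmark_invoice(paid: float, market_avg: float, tolerance_pct: float = 5.0) -> dict:
    """Flag invoiced rates that sit well above the scraped market average."""
    delta_pct = round((paid - market_avg) / market_avg * 100, 1)
    return {
        "paid": paid,
        "market_avg": market_avg,
        "delta_pct": delta_pct,
        "overcharged": delta_pct > tolerance_pct,
    }

print(benchmark_invoice(3200.0, 2800.0))  # delta_pct 14.3 -> overcharged
```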

3. Freight Brokers

Real-time rate intelligence for instant quoting. Track capacity signals (tender rejections, equipment availability) to anticipate rate movements. FMCSA carrier vetting at scale: automatically screen carriers before booking. Build proprietary market intelligence that differentiates your brokerage.

4. E-commerce & Retail Supply Chain

Monitor last-mile carrier performance (UPS, FedEx, USPS, regional carriers) across delivery zones. Track inbound ocean containers from Asian suppliers – the lead time visibility that determines inventory planning. Fuel surcharge monitoring to forecast shipping cost changes for P&L planning.

Compliance & Best Practices

Scrape only publicly accessible pages, respect each site's terms of service and robots.txt, and throttle requests so you never degrade a carrier's portal. Government sources like FMCSA SAFER and EIA fuel data are public records; carrier tracking pages are generally intended for shipments you are a party to. Cache results and re-scrape only when the data can plausibly have changed.

Getting Started

  1. Map your freight network – identify your top 10–20 lanes by spend and volume
  2. Set up Mantis API access – sign up for a free API key (100 calls/month free)
  3. Start with rate benchmarking – compare what you're paying against market spot rates; this alone typically identifies 5–15% savings
  4. Add shipment tracking – unified tracking across your top 3–5 carriers eliminates the need for manual portal checks
  5. Monitor key ports – if you import from Asia, track LA/Long Beach and your East Coast gateways weekly
  6. Automate alerts – route rate spikes >15%, shipment delays >24h, and port congestion changes to your logistics team via Slack
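The thresholds in step 6 translate directly into a small alert policy. A sketch (the policy-key names are illustrative; the values come from the list above):

```python
ALERT_POLICY = {
    "rate_spike_pct": 15.0,  # alert on rate spikes above 15% vs 30-day average
    "delay_hours": 24.0,     # alert on shipment delays beyond 24 hours
}

def should_alert(kind: str, value: float) -> bool:
    """True when a measured value crosses its configured alert threshold."""
    threshold = ALERT_POLICY.get(kind)
    return threshold is not None and value > threshold

print(should_alert("rate_spike_pct", 18.0), should_alert("delay_hours", 12.0))  # True False
```

Keeping thresholds in one dict means the logistics team can tune sensitivity without touching the scraping code.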