AI Framework Integrations

Email data, ready for your AI stack

Connect InboxParse to LangChain, LlamaIndex, CrewAI, AutoGen, n8n, Zapier, Dify, Flowise — or any tool that can make HTTP requests. Get clean JSON or Markdown from any inbox in one API call.

1. Fetch email thread as Markdown for your LLM
# LangChain — email thread → LLM response
from langchain_google_genai import ChatGoogleGenerativeAI
import requests

# Fetch thread as Markdown (optimized for token efficiency)
thread = requests.get(
    "https://inboxparse.com/api/v1/threads/th_123?format=markdown",
    headers={"Authorization": "Bearer ip_your_key"}
).json()

llm = ChatGoogleGenerativeAI(model="gemini-2.0-flash")
response = llm.invoke(
    f"Summarize this email thread and extract action items:\n\n{thread['markdown']}"
)

print(response.content)
# → "Customer is asking about Enterprise pricing.
#    Action: schedule demo call by Friday."

LangChain & LlamaIndex

Fetch threads as Markdown or JSON and pass directly to your chain or index. No preprocessing needed — InboxParse handles HTML stripping, thread stitching, and normalization.

LlamaIndex RAG Pipelines

Index email corpora with LlamaIndex in minutes. Each email becomes a structured document with sender context, subject, and clean body text — ready for vector embedding.
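
The mapping above can be sketched in plain Python. The field names (`from`, `subject`, `markdown`) mirror the fetch example earlier on this page but are illustrative assumptions about the response shape, not a schema reference.

```python
# Sketch: shape InboxParse email JSON into kwargs for LlamaIndex Documents.
# Field names are assumptions based on the fetch example above.
from typing import Any

def to_document_inputs(emails: list[dict[str, Any]]) -> list[dict[str, Any]]:
    """Map raw email objects to kwargs for llama_index Document(...)."""
    docs = []
    for email in emails:
        docs.append({
            "text": email["markdown"],      # clean body text for embedding
            "metadata": {
                "sender": email["from"],    # sender context
                "subject": email["subject"],
            },
        })
    return docs

# Assumed response shape for a fetched thread's messages:
sample = [{"from": "alice@example.com", "subject": "Pricing", "markdown": "Hi..."}]
print(to_document_inputs(sample)[0]["metadata"]["subject"])  # → Pricing
```

From there, pass each dict to `llama_index.core.Document(**kwargs)` and hand the list to `VectorStoreIndex.from_documents(...)`.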

CrewAI & AutoGen Agents

Give your agents a live email inbox. Agents can list threads, read full conversations, and reply — all via simple REST calls from any Python or JS environment.
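
A minimal sketch of such a tool, stdlib-only so it drops into any Python agent framework (wrap it with CrewAI's or AutoGen's tool registration). The endpoint path and key format follow the fetch example above and are assumptions, not a full API reference.

```python
# Sketch of an agent "read thread" tool built on the plain REST API.
# API_BASE and the ip_ key format follow the examples on this page (assumed).
import json
import urllib.request

API_BASE = "https://inboxparse.com/api/v1"

def build_request(path: str, api_key: str) -> urllib.request.Request:
    """Construct an authenticated GET request (not yet sent)."""
    return urllib.request.Request(
        f"{API_BASE}{path}",
        headers={"Authorization": f"Bearer {api_key}"},
    )

def read_thread(thread_id: str, api_key: str) -> str:
    """Tool body: fetch a full thread as Markdown for the agent's context."""
    req = build_request(f"/threads/{thread_id}?format=markdown", api_key)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["markdown"]

req = build_request("/threads/th_123?format=markdown", "ip_your_key")
print(req.full_url)  # → https://inboxparse.com/api/v1/threads/th_123?format=markdown
```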

n8n & Zapier Workflows

Trigger automation flows on new emails without polling. InboxParse webhooks fire on arrival, delivering a structured JSON payload directly to your n8n or Zapier node.

Dify & Flowise

Use the HTTP Request node in Dify or Flowise to pull email data into visual AI pipelines. The clean JSON schema maps directly to node inputs with zero transformation.

Any HTTP Client

No SDK required. If your framework can make HTTP requests, it can consume InboxParse. Works with curl, fetch, axios, httpx, requests — or any language runtime.
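
To illustrate, here is a dependency-free sketch using Python's built-in http.client; the path and auth scheme follow the examples on this page and are assumptions, not a verified reference.

```python
# The API is plain HTTPS, so even http.client from the standard library
# works with no third-party dependencies. Path and auth format assumed
# from the examples above.
import http.client

def thread_path(thread_id: str, fmt: str = "json") -> str:
    """Build the request path for a thread in the given format."""
    return f"/api/v1/threads/{thread_id}?format={fmt}"

def fetch_thread(thread_id: str, api_key: str) -> bytes:
    """GET a thread using only the stdlib (returns the raw JSON bytes)."""
    conn = http.client.HTTPSConnection("inboxparse.com")
    try:
        conn.request(
            "GET",
            thread_path(thread_id),
            headers={"Authorization": f"Bearer {api_key}"},
        )
        return conn.getresponse().read()
    finally:
        conn.close()

print(thread_path("th_123", "markdown"))  # → /api/v1/threads/th_123?format=markdown
```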

Compact Mode for Token Efficiency

Use ?format=compact to get stripped-down email content optimized for LLM context windows. Cut token usage by up to 60% without losing signal.
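
A small helper, sketched here, can rewrite any thread URL to compact mode; only the `?format=` parameter itself comes from this page.

```python
# Helper: switch any thread URL to compact mode for tighter context windows.
# The format query parameter is documented above; the rest is a sketch.
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

def with_format(url: str, fmt: str = "compact") -> str:
    """Return the same URL with ?format= replaced (or added)."""
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query["format"] = fmt
    return urlunparse(parts._replace(query=urlencode(query)))

print(with_format("https://inboxparse.com/api/v1/threads/th_123?format=markdown"))
# → https://inboxparse.com/api/v1/threads/th_123?format=compact
```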

Webhook-Driven Pipelines

Push emails into your pipeline the moment they arrive. Configure webhook endpoints per mailbox and receive structured JSON events with HMAC signatures for security.
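
Signature verification can be sketched as follows. Hex-encoded HMAC-SHA256 over the raw request body is an assumption about the signing scheme, as is the `whsec_` secret format; check your dashboard's webhook settings for the exact header name and encoding.

```python
# Sketch: verify a webhook delivery's HMAC signature before trusting it.
# Assumes hex-encoded HMAC-SHA256 over the raw body; confirm the actual
# scheme in the InboxParse dashboard before relying on this.
import hashlib
import hmac

def verify_signature(secret: str, raw_body: bytes, signature: str) -> bool:
    """Constant-time comparison of expected vs. received digest."""
    expected = hmac.new(secret.encode(), raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

body = b'{"data": {"subject": "Hi"}}'
sig = hmac.new(b"whsec_example", body, hashlib.sha256).hexdigest()
print(verify_signature("whsec_example", body, sig))  # → True
```

Always compare digests with `hmac.compare_digest` rather than `==` to avoid timing side channels.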

Copy-and-paste ready. No boilerplate.

2. Trigger your pipeline via webhook
// Next.js webhook handler — fires on every new email
// Configure in InboxParse dashboard: Webhooks → New Endpoint
export async function POST(req: Request) {
  const payload = await req.json();

  // Verify HMAC signature (InboxParse signs every delivery)
  // const sig = req.headers.get("x-inboxparse-signature");

  // labels may be absent on some deliveries; default to an empty list
  const { subject, from, markdown, labels = [] } = payload.data;

  // Route to your AI pipeline based on labels
  if (labels.includes("support")) {
    await fetch("https://your-n8n-instance.com/webhook/support-agent", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ subject, from, body: markdown }),
    });
  }

  return new Response("OK");
}

Frequently asked questions

What output formats does InboxParse return?

InboxParse supports multiple output formats via the ?format= query parameter: json (full structured object with extracted data), markdown (LLM-optimized plain text), compact (stripped-down version for token efficiency), and raw (original email source). Most AI frameworks work best with markdown or json.

How does API authentication work?

All API calls use a Bearer token in the Authorization header: Authorization: Bearer ip_your_key. Generate API keys from the InboxParse dashboard under API Keys. Keys are scoped per workspace and support read/write permissions.

What is the typical API latency?

Median API response time is under 200ms for cached threads and under 500ms for fresh email fetches that require processing. Webhook delivery to your endpoint happens within seconds of email arrival. All endpoints are served from Vercel Edge infrastructure with a 99.9% uptime SLA.

Wire up your AI stack.

Free tier — 500 emails/month. No credit card.