AI Helpdesk Starter

Turn support emails into an AI-powered helpdesk.

Don't build your own email ingestion layer. Receive support tickets as clean JSON with AI-extracted intent over webhooks, then pass them to your LLM to draft replies instantly.

1. Receive a structured ticket via webhook
// POST /api/webhooks/inboxparse
export async function POST(req: Request) {
  // InboxParse sends you clean, structured data
  const { event, data } = await req.json();

  if (event !== 'message.created') return new Response('Ignored');

  const {
    id,
    subject,
    markdown,        // LLM-ready body
    thread_history,  // Full conversation context
    intent,          // e.g. "BUG_REPORT", "BILLING_ISSUE"
    actions          // e.g. ["REPLY", "ESCALATE"]
  } = data;

  // Process based on AI-detected intent
  if (intent === 'BUG_REPORT') {
    await createJiraTicket({ title: subject, description: markdown });
  }

  if (actions.includes('REPLY')) {
    await draftAiResponse(data);
  }

  return new Response('OK');
}

Agent-Ready Data

Stop feeding raw HTML and quoted replies to your LLM. InboxParse cleans messages into Markdown and reconstructs the full thread context.
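For reference, a payload from the step 1 handler might look like the object below. The field names mirror the handler above, but the IDs and label values are illustrative only — check the InboxParse docs for the authoritative schema.

```typescript
// Illustrative payload only — field names are taken from the step 1
// handler; IDs and label values are made up for this example.
const examplePayload = {
  event: 'message.created',
  data: {
    id: 'msg_123',                    // hypothetical message ID
    subject: 'App crashes on login',
    // Cleaned, LLM-ready Markdown body (no raw HTML, no quoted replies)
    markdown: 'Hi team, the app crashes every time I log in.',
    // Reconstructed conversation context for the whole thread
    thread_history: 'Customer: It crashed again.\nAgent: Which version?',
    intent: 'BUG_REPORT',
    sentiment: 'FRUSTRATED',          // hypothetical sentiment label
    actions: ['REPLY', 'ESCALATE']
  }
};
```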

Built-in Intent Extraction

InboxParse analyzes every incoming email asynchronously. Webhooks arrive with pre-computed intent, sentiment, and suggested actions.
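Because intent arrives pre-computed, routing is a plain switch. The queue names below are hypothetical — map each label to your own tooling (the step 1 handler files a Jira ticket for `BUG_REPORT`, for instance):

```typescript
// Sketch: route a ticket on its pre-computed intent label.
// Labels come from the webhook payload; the returned queue names
// are hypothetical and only for illustration.
function routeByIntent(intent: string): string {
  switch (intent) {
    case 'BUG_REPORT':      return 'engineering'; // e.g. file a Jira ticket
    case 'BILLING_ISSUE':   return 'billing';     // e.g. loop in finance
    case 'FEATURE_REQUEST': return 'product';     // e.g. log to the roadmap
    default:                return 'triage';      // unknown label → human review
  }
}
```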

Zero Infrastructure

No IMAP polling loops, no OAuth token-refresh logic. Connect your shared support@ mailbox once, then listen for clean webhook events.

Instant Delivery

Configure a webhook endpoint and receive JSON payloads the second a customer emails you. Reply within seconds.

Focus on AI, Not Email

Spend your time engineering the perfect prompt and agent logic, not fighting MIME parsing edge cases.

Secure & Private

Data is encrypted at rest and in transit. Scoped access ensures your AI only reads what it needs.

2. Draft and send a reply with AI

Copy-and-paste ready. No boilerplate.

import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

// Partial shape of the webhook `data` object — only the fields used below
interface TicketData {
  mailbox_id: string;
  thread_id: string;
  from: string;
  sentiment: string;
  thread_history: string;
  markdown: string;
}

async function draftAiResponse(ticketData: TicketData) {
  // 1. Generate reply using Vercel AI SDK
  const { text: aiReply } = await generateText({
    model: openai('gpt-4o'),
    system: `You are a helpful customer support agent.
    Customer sentiment is: ${ticketData.sentiment}.
    Previous thread context: ${ticketData.thread_history}`,
    prompt: `Draft a polite reply to this message: ${ticketData.markdown}`
  });

  // 2. Send the reply via the InboxParse API
  const res = await fetch('https://inboxparse.com/api/v1/messages/send', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.INBOXPARSE_API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      mailbox_id: ticketData.mailbox_id,
      thread_id: ticketData.thread_id,
      to: ticketData.from,
      body: aiReply // InboxParse handles HTML formatting
    })
  });

  if (!res.ok) throw new Error(`InboxParse send failed: ${res.status}`);
}

Frequently asked questions

How does InboxParse detect email intent?

InboxParse analyzes every incoming email with AI and classifies it by intent (e.g. BUG_REPORT, BILLING_ISSUE, FEATURE_REQUEST), sentiment, and suggested actions. This data arrives pre-computed in your webhook payload.

Can I use InboxParse to auto-reply to support emails?

Yes. Use the webhook payload (which includes clean Markdown and full thread context) as input to your LLM, generate a reply, and send it back through the InboxParse send API. The entire loop can be fully automated.
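If you'd rather not send every AI draft unattended, a small gate over the payload's suggested actions and sentiment keeps risky threads in front of a human. The `'NEGATIVE'` label below is an assumption — substitute whatever sentiment labels your payloads actually carry:

```typescript
// Sketch: only auto-send when InboxParse suggests a reply, and hold
// upset customers for human review. 'NEGATIVE' is an assumed label.
function shouldAutoSend(actions: string[], sentiment: string): boolean {
  return actions.includes('REPLY') && sentiment !== 'NEGATIVE';
}
```

Wire this in front of `draftAiResponse` in the step 1 handler; anything that fails the gate can land in a review queue instead of going straight back to the customer.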

Do I need to build my own email ingestion layer?

No. InboxParse handles IMAP polling, OAuth token refresh, MIME parsing, and HTML-to-Markdown conversion. You receive clean, structured JSON over webhooks and focus on your AI logic.

Build your AI helpdesk today.

Start with our Developer plan and replace your legacy parser with an AI-native ingestion layer.