A customizable React chat widget with SSE streaming, research mode, and markdown rendering.
This is a standalone extraction from my production portfolio site. See it in action at danmonteiro.com.
You're adding a chat interface but:
- UI from scratch is tedious: styling, animations, responsive design
- Streaming is complex: SSE handling, partial updates, error recovery
- Research mode needs context: article-aware conversations require state management
- Features pile up: markdown, sources, feedback, download...
Chatbot Widget provides:
- Drop-in React component: floating button + chat window
- SSE streaming built in: real-time responses with status updates
- Research Mode: article-focused conversations with "Go Deeper" analysis
- Full-featured: markdown, sources, feedback, download, expand/collapse
```tsx
import { ChatbotProvider, ChatbotWidget } from '@danmonteiro/chatbot-widget';

function App() {
  return (
    <ChatbotProvider
      config={{
        apiBaseUrl: 'https://api.example.com',
        auth: { isAuthenticated: true, token: 'jwt...' },
        enableResearchMode: true,
      }}
    >
      <ChatbotWidget />
    </ChatbotProvider>
  );
}
```

| Feature | Description |
|---|---|
| SSE Streaming | Real-time response streaming with status indicators |
| Research Mode | Article-aware conversations with deeper analysis |
| Go Deeper | Staged responses - get more detail on demand |
| Markdown | Full GFM support via react-markdown |
| Source Citations | Display sources with links and excerpts |
| Feedback | 👍/👎 response feedback collection |
| Download | Export conversation as markdown file |
| Expand/Collapse | Full-width or compact view |
| Customizable | Theming, placeholders, suggested questions |
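As a rough illustration of the Download feature above, exporting a conversation as markdown can be sketched like this. The `Message` shape and `toMarkdown` helper are assumptions for illustration, not the package's actual export code:

```typescript
// Hypothetical message shape; the widget's internal type may differ.
interface Message {
  role: 'user' | 'assistant';
  content: string;
}

// Render a conversation as a markdown transcript, one section per turn,
// separated by horizontal rules.
function toMarkdown(messages: Message[]): string {
  return messages
    .map((m) => `**${m.role === 'user' ? 'You' : 'Assistant'}:**\n\n${m.content}`)
    .join('\n\n---\n\n');
}
```

A file download then only needs to wrap the resulting string in a `Blob` and trigger a click on a temporary anchor element.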
```bash
npm install @danmonteiro/chatbot-widget
```

```tsx
import { ChatbotProvider, ChatbotWidget } from '@danmonteiro/chatbot-widget';

function App() {
  return (
    <ChatbotProvider
      config={{
        apiBaseUrl: 'https://your-api.com',
        auth: {
          isAuthenticated: true,
          token: 'your-jwt-token',
        },
        onAuthRequired: () => {
          // Show login modal
        },
      }}
    >
      <YourApp />
      <ChatbotWidget />
    </ChatbotProvider>
  );
}
```

The widget expects these endpoints (customizable):
```
// Standard chat
POST /api/rag/ask
Body: { question, topK, threshold, temperature, maxTokens }
Response: { success, data: { answer, sources, confidence, provider } }

// SSE streaming (Research Mode)
POST /api/rag/query-stream
Body: { question, articleContext?, sessionId? }
Events: connected, outline, status, answer, done, error

// Go Deeper
POST /api/rag/query-stream/deeper
Body: { sessionId }
Events: analysis, done, error
```

```ts
interface ChatbotConfig {
  // Required
  apiBaseUrl: string;

  // Authentication
  auth?: {
    isAuthenticated: boolean;
    token?: string;
    user?: { id: number; email: string };
    tier?: 'free' | 'premium' | 'research';
  };

  // Endpoints (relative to apiBaseUrl)
  endpoints?: {
    chat?: string;      // Default: '/api/rag/ask'
    stream?: string;    // Default: '/api/rag/query-stream'
    feedback?: string;  // Default: '/api/chatbot/response-feedback'
    deeper?: string;    // Default: '/api/rag/query-stream/deeper'
  };

  // Features
  enableResearchMode?: boolean;
  showSources?: boolean;
  showFeedback?: boolean;
  showModelInfo?: boolean;
  enableDownload?: boolean;

  // Research Mode context
  articleContext?: {
    slug: string;
    title: string;
    content: string;
  };

  // Customization
  placeholders?: {
    regular?: string;
    research?: string;
    unauthenticated?: string;
  };
  suggestedQuestions?: {
    regular?: string[];
    research?: string[];
  };

  // Callbacks
  onAuthRequired?: () => void;
  onError?: (error: Error) => void;

  // Layout
  position?: 'bottom-right' | 'bottom-left';
  maxMessageLength?: number;
}
```

Research Mode enables article-focused conversations with deeper analysis capabilities.
```tsx
<ChatbotProvider
  config={{
    apiBaseUrl: 'https://api.example.com',
    enableResearchMode: true,
    articleContext: {
      slug: 'understanding-rag',
      title: 'Understanding RAG Architecture',
      content: 'Full article content here...',
    },
  }}
>
  <ChatbotWidget />
</ChatbotProvider>
```

1. User sends question
2. Server sends `event: connected`
3. Server sends `event: outline` (quick key points)
4. Server sends `event: status` (progress updates)
5. Server sends `event: answer` (full response)
6. Server sends `event: done` (with sessionId, canGoDeeper)
When the `done` event carries `canGoDeeper: true`, users see a "Go Deeper" button. Clicking it:
- Calls the deeper endpoint with sessionId
- Streams additional analysis
- Appends to the original message
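The event flow above arrives as standard SSE frames: blank-line-separated blocks with `event:` and `data:` fields. As a minimal sketch (not the widget's internal implementation), a buffered chunk of the stream can be split into `{ event, data }` pairs like this:

```typescript
interface SseEvent {
  event: string;
  data: string;
}

// Split a buffered SSE payload into events. Frames are separated by a
// blank line; each frame carries `event:` and one or more `data:` lines.
function parseSse(chunk: string): SseEvent[] {
  return chunk
    .split('\n\n')
    .filter((frame) => frame.trim().length > 0)
    .map((frame) => {
      let event = 'message'; // SSE default event name
      const data: string[] = [];
      for (const line of frame.split('\n')) {
        if (line.startsWith('event:')) event = line.slice(6).trim();
        else if (line.startsWith('data:')) data.push(line.slice(5).trim());
      }
      return { event, data: data.join('\n') };
    });
}
```

For example, a `done` frame parses to one event whose `data` JSON carries the `sessionId` and `canGoDeeper` flag the widget uses to show the button.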
Access the chatbot context anywhere:
```tsx
import { useChatbot } from '@danmonteiro/chatbot-widget';

function MyComponent() {
  const {
    isOpen,
    setIsOpen,
    isResearchMode,
    messages,
    clearMessages,
  } = useChatbot();

  return (
    <button onClick={() => setIsOpen(true)}>
      Open Chat ({messages.length} messages)
    </button>
  );
}
```

Direct API access:
```tsx
import { useChatApi } from '@danmonteiro/chatbot-widget';

function MyComponent() {
  const { sendMessage, requestDeeperAnalysis } = useChatApi();

  return (
    <button onClick={() => sendMessage('Hello!')}>
      Send Message
    </button>
  );
}
```

The widget uses Tailwind-style utility classes. For custom styling:
- Override CSS variables (coming soon)
- Wrap with custom styles
- Fork and modify the component
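When wrapping the widget with custom styles, the `position` option from the config maps naturally to fixed positioning. A hypothetical helper for a wrapper element (the offsets and z-index are assumptions, not the widget's actual values):

```typescript
type Position = 'bottom-right' | 'bottom-left';

// Compute inline styles for a fixed-position wrapper around the widget,
// pinned to the chosen bottom corner.
function positionStyles(position: Position): Record<string, string> {
  const base = { position: 'fixed', bottom: '1rem', zIndex: '50' };
  return position === 'bottom-left'
    ? { ...base, left: '1rem' }
    : { ...base, right: '1rem' };
}
```

Spreading the result into a wrapper `div`'s `style` prop keeps the corner choice in one place alongside the rest of the config.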
```
chatbot-widget/
├── src/
│   ├── index.ts             # Exports
│   ├── types.ts             # Type definitions
│   ├── ChatbotContext.tsx   # Provider and context
│   ├── ChatbotWidget.tsx    # Main component
│   └── hooks/
│       └── useChatApi.ts    # API communication hook
├── package.json
└── README.md
```
This repo is the user-facing layer in a broader approach to context continuity: giving AI systems the right context at the right time.
| Layer | Role | This Repo |
|---|---|---|
| Intra-session | Short-term memory (4-hr cache) | chatbot-widget |
| Document-scoped | Injected article context | chatbot-widget |
| Progressive | Go Deeper staged responses | chatbot-widget |
| Exportable | Conversation download | chatbot-widget |
| Retrieved | Long-term semantic memory | (see rag-pipeline) |
The widget handles ephemeral memory (session cache), scoped context (Research Mode), progressive disclosure (Go Deeper), and user-controlled persistence (download). Combined with RAG for long-term retrieval, it creates seamless context continuity.
Related repos:
- rag-pipeline: The RAG backend for semantic retrieval
- mcp-rag-server: RAG as MCP tools
- ai-orchestrator: Complexity-based model routing
Contributions welcome! Please:
- Fork the repository
- Create a feature branch (`git checkout -b feat/new-feature`)
- Make changes with semantic commits
- Open a PR with a clear description
MIT License - see LICENSE for details.
Built with Claude Code.
Co-Authored-By: Claude <noreply@anthropic.com>