A meeting tool with live transcription and AI-powered summaries.
- 🎤 Live audio capture from system audio (Windows WASAPI, Linux PipeWire)
- 📝 Real-time transcription using faster-whisper
- 🔄 Dual-pass transcription — fast model for low latency, better model for accuracy
- 🤖 AI-powered Q&A during meetings (local via Ollama, or cloud via OpenAI/Claude/Gemini)
- 📋 Automatic summary generation with key themes and action items
- 📄 Markdown export with collapsible full transcript
- ⚙️ Options menu — change settings without editing config files
- 🎨 Theming — multiple built-in themes via command palette
- 💾 Persistent config — settings saved automatically
- 💻 Terminal UI using Textual
```
┌─ Squelch ────────────────────────────── Recording 🔴 ─┐
│ Transcript                              │ Event Log   │
│                                         │             │
│ [00:01] To build a Textual app, you     │ 20:11:02    │
│ need to define a class that inherits... │ FAST 6.1s   │
│                                         │             │
│ [00:07] ✓ The Widgets module is where   │ 20:11:08    │
│ you find a rich set of widgets...       │ SLOW 60.0s  │
│                                         │             │
├─────────────────────────────────────────┤             │
│ 🤖 Response                             │             │
│ Q: What are the main topics?            │             │
│ A: The discussion covers building       │             │
│ Textual apps and widget modules...      │             │
├─────────────────────────────────────────┴─────────────┤
│ 💬 Ask about the transcript...                         │
├───────────────────────────────────────────────────────┤
│ f5 Start/Stop  f10 End & Generate  f2 Options  q Quit │
└───────────────────────────────────────────────────────┘
```
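The FAST and SLOW entries in the event log come from the two transcription passes. Here is a minimal sketch of the dual-pass idea using faster-whisper; the model names, options, and chunk file are illustrative, not Squelch's internal code:

```python
# Sketch of dual-pass transcription with faster-whisper (illustrative only).
from faster_whisper import WhisperModel

fast_model = WhisperModel("tiny", device="cpu", compute_type="int8")    # low latency
slow_model = WhisperModel("medium", device="cpu", compute_type="int8")  # higher accuracy

def transcribe(model: WhisperModel, audio_path: str) -> str:
    """Run one pass and return the joined segment text."""
    segments, _info = model.transcribe(audio_path, language="en")
    return " ".join(seg.text.strip() for seg in segments)

# First pass: show a quick draft in the UI immediately.
print("FAST:", transcribe(fast_model, "chunk.wav"))

# Second pass: replace the draft once the better model finishes.
print("SLOW:", transcribe(slow_model, "chunk.wav"))
```

The fast pass keeps perceived latency low, while the slower pass overwrites each draft segment in the background; the ✓ in the transcript above appears to mark segments the accurate pass has confirmed.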
| Platform | Status | Notes |
|---|---|---|
| Windows | ✅ Supported | WASAPI loopback |
| Linux | ✅ Supported | Requires PipeWire |
| macOS | ❌ Not yet | Contributions welcome! |
Python 3.11+ is required.
Windows — No additional setup needed.
Linux — Install PipeWire:
```bash
# Debian/Ubuntu
sudo apt install pipewire pipewire-pulse

# Fedora
sudo dnf install pipewire pipewire-pulseaudio
```

Then install Squelch:

```bash
# Clone the repository
git clone https://github.com/vlouf/squelch.git
cd squelch

# Create a virtual environment
python -m venv venv
source venv/bin/activate  # Linux
venv\Scripts\activate     # Windows

# Install
pip install -e .

# Optional: install cloud LLM support
pip install -e ".[cloud]"
```

For AI-powered Q&A and summaries, you need an LLM provider:
Option A: Ollama (Local, Free)

```bash
# Install from https://ollama.ai, then:
ollama pull llama3.1:8b
ollama serve
```
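Under the hood, Q&A against a local model amounts to a prompt sent to Ollama's REST API. A rough sketch of such a call (assumes the requests package; this is not Squelch's actual code):

```python
# Ask a question about a transcript via Ollama's local REST API.
import requests

transcript = "[00:01] To build a Textual app, you need to define a class..."
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.1:8b",
        "prompt": f"Transcript:\n{transcript}\n\nQuestion: What are the main topics?",
        "stream": False,  # return one JSON object instead of a token stream
    },
    timeout=120,
)
print(resp.json()["response"])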
Option B: Cloud Providers

Install cloud support and set your API key:

```bash
pip install -e ".[cloud]"

# Then set ONE of these:
export OPENAI_API_KEY=sk-...
export ANTHROPIC_API_KEY=sk-ant-...
export GEMINI_API_KEY=...
```

Launch Squelch with:

```bash
squelch
```

| Key | Action |
|---|---|
| F5 | Start/Stop recording |
| F10 | End meeting & generate summary |
| F3 | Toggle response panel |
| F2 | Options menu |
| F1 | Help |
| Ctrl+P | Command palette |
| Q | Quit |
- F5 — Start recording (captures system audio)
- Watch the live transcript appear
- Type questions in the input box for AI responses
- F10 — End meeting, generate summary, export to Markdown
- Review the exported file (opens automatically)
Configure without editing files:
- Theme — Dark/light mode
- Audio device — Select loopback source
- Whisper models — Choose speed vs accuracy tradeoff
- Language — Set transcription language
- LLM provider — Ollama (local) or Cloud
- Output directory — Where to save meeting notes
Settings persist between sessions.
Quick access to themes and commands. Type to search:
- `theme` — Browse all themes (nord, gruvbox, dracula, etc.)
- `toggle` — Recording, response panel, dark mode
- `options` — Open settings
Meeting notes are saved as Markdown:
`~/Documents/Squelch/2025-12-22_1430_meeting.md`
Each file includes:
- Duration and word count
- AI-generated summary
- Key themes and action items
- Full transcript (collapsible)
Settings are stored in:
- Windows: `%APPDATA%\Squelch\config.toml`
- Linux: `~/.config/squelch/config.toml`
You can edit this file directly or use the Options menu (F2).
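Since Python 3.11+ is required, the standard-library tomllib can read the config if you want to inspect it from a script. A small sketch using the paths listed above:

```python
# Locate and read Squelch's config with only the standard library.
import os
import sys
import tomllib
from pathlib import Path

if sys.platform == "win32":
    config_path = Path(os.environ["APPDATA"]) / "Squelch" / "config.toml"
else:
    config_path = Path.home() / ".config" / "squelch" / "config.toml"

with config_path.open("rb") as f:  # tomllib requires binary mode
    config = tomllib.load(f)
print(config)
```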
For faster transcription, install CUDA:
- Install CUDA Toolkit
- Install cuDNN
- Squelch will automatically use the GPU when available
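For reference, moving faster-whisper to the GPU is a change on the model constructor; a sketch with a CPU fallback (the model name is illustrative):

```python
# Prefer GPU inference; fall back to CPU if CUDA/cuDNN are unavailable.
from faster_whisper import WhisperModel

try:
    model = WhisperModel("small", device="cuda", compute_type="float16")
except Exception:
    # No usable GPU found: run quantized on the CPU instead.
    model = WhisperModel("small", device="cpu", compute_type="int8")
```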
No audio being captured?
- Check Options (F2) → Audio Device
- Make sure audio is playing through the selected device
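To check which loopback sources your system exposes independently of Squelch, the soundcard package (which may differ from what Squelch uses internally) can list them:

```python
# List capture devices, including loopback sources for system audio.
import soundcard as sc

for mic in sc.all_microphones(include_loopback=True):
    print(mic.name)
```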
Ollama not detected?
- Run `ollama serve` in a terminal
- Check that a model is pulled: `ollama list`
Transcription is slow?
- Use smaller Whisper models in Options
- Enable GPU acceleration (see above)
Cloud LLM not working?
- Verify the API key is set: `echo $OPENAI_API_KEY`
- Check that the model name is correct
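A quick way to confirm an OpenAI key actually authenticates, assuming the openai package is installed (e.g. via the cloud extra; not Squelch's internal code):

```python
# Sanity-check the key by listing models; this raises if the key is invalid.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
print([m.id for m in client.models.list().data[:3]])
```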
See CONTRIBUTING.md for development setup and guidelines.
Built with faster-whisper, Textual, and Ollama.