A powerful web-based flashcard application with smart AI assistance, designed for instant deployment and built to work in any browser!
- 💡 AI Hints: Get intelligent, contextual hints for any question - completely free!
- 🧠 AI Deck Generation: Create personalized study decks based on your learning patterns
- 🤖 Multiple AI Sources: Ollama (local), WebLLM (browser), and free API tiers
- 🔒 Privacy-First: Run AI locally with Ollama or WebLLM - your data never leaves your device
- 🎯 Custom Decks: Create decks for any subject with custom names and categories
- 📝 Easy Card Creation: Add questions and answers with a simple, intuitive interface
- 🎲 Shuffled Study Mode: Cards are randomly shuffled for better learning
- ✅ Smart Learning: Correct answers are removed; incorrect answers cycle to the back
- 📊 Progress Tracking: Real-time score and progress indicators
- 💾 Local Storage: All data persists in your browser - no accounts needed
- 📱 Responsive Design: Works beautifully on desktop, tablet, and mobile
- 🎨 Modern UI: Clean, animated interface with smooth transitions
- Visit ollama.ai/download
- Download and install for your OS
- Open terminal/command prompt
- Run: `ollama pull mistral:7b` (downloads an ~4GB AI model)
- Start the service: `ollama serve`
- Refresh FlashCards; Ollama will be detected automatically (see the detection sketch below)
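If you're curious how the automatic detection can work: Ollama serves a local HTTP API on port 11434, and its `/api/tags` endpoint lists installed models. The sketch below is illustrative (the function name is not from the app's code); note that when the app is served from a non-local origin such as GitHub Pages, Ollama's allowed origins (`OLLAMA_ORIGINS`) may need to include that origin.

```javascript
// Minimal sketch: detect a local Ollama server from the browser.
// Assumes Ollama's default port (11434) and its /api/tags endpoint.
async function detectOllama() {
  try {
    const res = await fetch('http://localhost:11434/api/tags');
    if (!res.ok) return null;
    const data = await res.json();
    // data.models lists locally installed models, e.g. "mistral:7b"
    return data.models.map(m => m.name);
  } catch (err) {
    return null; // Ollama not running, or the request was blocked by CORS
  }
}
```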
- Click "🤖 AI Setup" in FlashCards
- Try the WebLLM setup (requires a modern browser with WebGPU; see the capability check below)
- The model (~4GB) downloads automatically
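WebLLM runs the model entirely in the browser and needs WebGPU. A quick capability check, sketched under the assumption that the app simply probes the standard WebGPU API (the function name is illustrative):

```javascript
// Minimal sketch: check whether this browser can run WebLLM (WebGPU required).
async function hasWebGPU() {
  if (!('gpu' in navigator)) return false;        // WebGPU API not exposed
  const adapter = await navigator.gpu.requestAdapter();
  return adapter !== null;                        // null means no usable GPU
}
```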
- Get free API keys from Cohere, Together AI, or Hugging Face
- Click "🤖 AI Setup" → "Configure APIs"
- Enter your keys for limited free usage (an example request is sketched below)
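As an illustration of what "limited free usage" looks like in practice, here is a hedged sketch of a hint request against the Hugging Face Inference API. The endpoint and response shape follow Hugging Face's public text-generation API, but the model name, prompt, and function are examples rather than the app's actual configuration:

```javascript
// Illustrative only: request a hint via the Hugging Face Inference API.
// Model, prompt, and function name are examples; free tiers are rate-limited.
async function apiHint(question, hfToken) {
  const res = await fetch(
    'https://api-inference.huggingface.co/models/mistralai/Mistral-7B-Instruct-v0.2',
    {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${hfToken}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        inputs: `Give a short hint (not the answer) for: ${question}`,
      }),
    }
  );
  if (!res.ok) throw new Error(`API request failed: ${res.status}`);
  const data = await res.json();
  // Text-generation models return an array of { generated_text } objects
  return Array.isArray(data) ? data[0].generated_text : null;
}
```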
- Click "Create Deck" or the "+" button
- Enter a deck name and subject
- Add cards by filling in questions and answers
- Click "Add Card" for more cards
- Save your deck - it is stored in your browser's localStorage (see the sketch below)
- NEW: Use AI generation for instant, personalized decks!
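Under the hood, a deck can be stored as a plain object in localStorage (the app's use of localStorage is described above; the storage key and field names below are assumptions for illustration):

```javascript
// Illustrative deck shape and persistence; key and field names are assumptions.
const deck = {
  id: Date.now().toString(),
  name: 'Spanish Verbs',
  subject: 'Languages',
  cards: [
    { question: 'to eat', answer: 'comer' },
    { question: 'to speak', answer: 'hablar' },
  ],
};

// Append the new deck to whatever is already saved in the browser.
const decks = JSON.parse(localStorage.getItem('flashcards-decks') || '[]');
decks.push(deck);
localStorage.setItem('flashcards-decks', JSON.stringify(decks));
```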
- Click the "Study" button on any deck
- Read the question and type your answer
- Press Enter or click "Check" to verify
- NEW: Click the hint button (💡) for AI-powered hints!
- Correct answers advance you forward; incorrect answers cycle to the back (see the sketch below)
- Complete all cards to finish the deck
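The shuffle-and-cycle behavior described above boils down to a simple queue. A minimal sketch, with names that are illustrative rather than taken from the app's code:

```javascript
// Fisher-Yates shuffle to randomize the deck before studying.
function shuffle(cards) {
  const queue = [...cards];
  for (let i = queue.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [queue[i], queue[j]] = [queue[j], queue[i]];
  }
  return queue;
}

// One study step: correct cards leave the queue, incorrect cards cycle to the back.
// Returns true when every card has been answered correctly.
function studyStep(queue, isCorrect) {
  const card = queue.shift();
  if (!isCorrect) queue.push(card);
  return queue.length === 0;
}
```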
- View all your decks on the home screen
- Delete decks using the trash icon
- Track your progress with card counts and scores
- NEW: AI analyzes your performance and suggests improvements
- Frontend: HTML5, CSS3, JavaScript (ES6+)
- Storage: Browser localStorage (no accounts, fully local)
- Responsive: Mobile-first design with CSS Grid and Flexbox
- Animations: CSS transitions and transforms
- Local AI: Ollama integration for unlimited local processing
- Browser AI: WebLLM for client-side AI (WebGPU required)
- Cloud APIs: Cohere, Together AI, Hugging Face integration
- Fallback System: Smart rule-based hints when AI is unavailable (see the sketch below)
- Privacy: Local AI options keep all data on your device
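The rule-based fallback mentioned above can be as simple as deriving a hint from the answer text itself. The heuristic below is only a sketch; the app's actual rules may differ:

```javascript
// Illustrative fallback: mask the answer so only word count and initials show.
function fallbackHint(answer) {
  if (!answer || !answer.trim()) return 'No hint available.';
  const words = answer.trim().split(/\s+/);
  const masked = words
    .map(w => w[0] + '*'.repeat(w.length - 1))
    .join(' ');
  return `The answer has ${words.length} word(s): ${masked}`;
}
```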
- Ollama: Mistral 7B, Llama 2 7B/13B, CodeLlama, Phi, and more
- WebLLM: Llama 2, TinyLlama, Mistral (browser-based)
- APIs: Various cloud models with free tiers
- RAM: 8GB minimum (16GB+ for larger models)
- Storage: 4-8GB per AI model
- OS: Windows, macOS, or Linux
- CPU: Any modern processor (GPU acceleration automatic)
- Browser: Chrome/Edge 113+, Firefox with WebGPU enabled
- RAM: 8GB minimum
- GPU: Modern GPU with WebGPU support
- Internet: Stable connection required
- Limits: Free tier limitations apply
This application is designed to be deployed to GitHub Pages:
- Push all files to your GitHub repository
- Go to repository Settings → Pages
- Select "Deploy from a branch"
- Choose "main" branch and "/ (root)" folder
- Your app will be available at https://username.github.io/repository-name
- Chrome 60+
- Firefox 60+
- Safari 12+
- Edge 79+
Simply open index.html in your web browser or serve it using any local web server:
# Using Python
python -m http.server 8000
# Using Node.js (http-server)
npx http-server
# Using PHP
php -S localhost:8000

MIT License - feel free to use this project for personal or educational purposes.