laminai Documentation

Everything you need to set up, configure, and get the most out of laminai. Start with the Quick Start below, or jump to the section you need using the sidebar.

Quick Start

Get laminai running in under 2 minutes:

  1. Start the main app: run python3 app.py in the project directory. The app will be available at http://localhost:8080
  2. (Optional) Start the landing page: run python3 landing_app.py to serve the landing page at http://localhost:8000
  3. Open the app: navigate to http://localhost:8080 in your browser
  4. Paste a YouTube URL: enter any YouTube video URL and click "Analyze". The app will download, transcribe, and analyze it automatically

Installation

laminai requires Python 3.8+ and the following packages:

Terminal
pip3 install flask groq yt-dlp pydub pdfplumber python-dotenv

You also need FFmpeg installed for audio conversion:

macOS
brew install ffmpeg
Ubuntu/Debian
sudo apt install ffmpeg

Configuration

Create a .env file in the project root with your API keys:

.env
GROQ_API_KEY=gsk_your_key_here

Get your free API key at console.groq.com. The free tier provides generous limits for personal use.

⚠️ Never commit your .env file. Add it to .gitignore to keep your API key secure.
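The app loads this file via python-dotenv, but the format is simple enough to sketch: each non-comment line is a KEY=VALUE pair. A minimal stdlib parser, for illustration only (load_env is a hypothetical helper, not the app's actual loader):

```python
def load_env(path=".env"):
    """Parse a KEY=VALUE .env file into a dict, ignoring blanks and '#' comments."""
    env = {}
    try:
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith("#") and "=" in line:
                    key, _, value = line.partition("=")
                    env[key.strip()] = value.strip()
    except FileNotFoundError:
        pass  # no .env file; fall back to empty config
    return env

api_key = load_env().get("GROQ_API_KEY")
```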

YouTube Analysis

laminai processes YouTube videos through a 3-stage pipeline:

  1. Download: yt-dlp extracts the best available audio stream and converts it to MP3 at 192 kbps via FFmpeg.
  2. Transcribe: AI transcribes the audio. Videos over 10 minutes are split into chunks.
  3. Analyze: AI extracts topics, takeaways, key details, and practical applications as structured JSON.

Progress is streamed in real-time via Server-Sent Events — you see download %, transcription progress, and analysis status live.
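A client can consume this stream by reading the lines prefixed with data: and decoding each JSON payload. A minimal sketch (parse_sse_events is a hypothetical helper; the event fields mirror the /api/process examples):

```python
import json

def parse_sse_events(raw):
    """Extract JSON payloads from the 'data: {...}' lines of an SSE stream."""
    events = []
    for line in raw.splitlines():
        if line.startswith("data: "):
            events.append(json.loads(line[len("data: "):]))
    return events

stream = (
    'data: {"status":"downloading","message":"Downloading audio... 45%","pct":45}\n'
    '\n'
    'data: {"status":"done","message":"Done!","pct":100}\n'
)
for event in parse_sse_events(stream):
    print(event["status"], event["pct"])  # downloading 45, then done 100
```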

PDF Analysis

Upload any PDF document for instant AI-powered analysis. Supported formats include research papers, textbooks, reports, and presentations.

  1. Click "Upload Document" in the sidebar or on the home screen
  2. Select a PDF file from your computer (max recommended: 50MB)
  3. Text is extracted using pdfplumber, preserving paragraph structure
  4. Click "Analyze" to generate summaries, or go straight to AI Chat
Note: Scanned PDFs (image-only) are not supported — the PDF must contain selectable text.
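The "selectable text" requirement follows from how extraction works: pdfplumber's page.extract_text() returns None for image-only pages. A sketch of per-page extraction under that assumption (extract_selectable_text is a hypothetical helper, not the app's actual code):

```python
def extract_selectable_text(pages):
    """Join per-page text, skipping image-only pages where
    extract_text() returns None (mirroring pdfplumber's behavior)."""
    chunks = []
    for page in pages:
        text = page.extract_text()
        if text:
            chunks.append(text)
    return "\n\n".join(chunks)
```

With a real file, pages would come from pdfplumber.open("paper.pdf").pages; an entirely scanned PDF yields an empty string here, which is why such files are rejected.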

AI Chat

laminai has two chat modes:

Content Chat (after processing)

After processing a video or document, switch to the Chat view to ask questions grounded in the content. The AI uses the first 8,000 characters of the transcript as context.
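Grounding the chat in a fixed-size prefix keeps the prompt within model limits. A minimal sketch of how such a message list could be built (build_content_chat_messages and the system wording are assumptions, not the app's exact prompt):

```python
MAX_CONTEXT_CHARS = 8000  # prefix of the transcript used as grounding context

def build_content_chat_messages(transcript, question):
    """Build a chat message list grounded in the first 8,000 transcript characters."""
    context = transcript[:MAX_CONTEXT_CHARS]
    return [
        {"role": "system",
         "content": "Answer questions using only this content:\n" + context},
        {"role": "user", "content": question},
    ]
```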

Standalone AI Assistant

Access the AI Assistant from the sidebar — no video or document required. This is a general-purpose chat interface that maintains conversation history for the session.

MCQ Generator

After analyzing a video or document, click "Generate Quiz" to create multiple-choice questions. Configure the number of questions (5–20) from the settings panel.

Questions are distributed by difficulty: 30% Easy, 50% Medium, 20% Hard. Each question includes four answer options, the correct answer, and a detailed explanation.
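One way to realize the 30/50/20 split for a whole number of questions is to round the easy and medium buckets and let the hard bucket absorb the remainder (difficulty_split is a hypothetical helper, sketching the idea rather than the app's exact code):

```python
def difficulty_split(n):
    """Split n questions roughly 30% easy / 50% medium / 20% hard;
    the hard bucket absorbs rounding so the counts always sum to n."""
    easy = round(n * 0.3)
    medium = round(n * 0.5)
    hard = n - easy - medium
    return easy, medium, hard

print(difficulty_split(10))  # (3, 5, 2)
```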

History

laminai automatically saves your activity to browser localStorage. The History view shows:

  • YouTube videos — with video ID, timestamp, and quick-replay button
  • Documents — PDF filename and upload date
  • AI Chat sessions — first message as title, session timestamp

Up to 30 entries are stored. Use the filter tabs (All / YouTube / Documents / AI Chats) to browse by type. Click "Clear All" to reset.

API Endpoints

laminai exposes a REST API on http://localhost:8080. All endpoints return JSON except /api/process, which streams Server-Sent Events (SSE).

GET /api/process

Process a YouTube video. Streams progress events via Server-Sent Events.

Parameter   Type     Description
url         string   YouTube video URL (query param)
Example
GET /api/process?url=https://youtube.com/watch?v=dQw4w9WgXcQ

// SSE stream events:
data: {"status":"downloading","message":"Downloading audio... 45%","pct":45}
data: {"status":"transcribing","message":"Transcribing audio...","pct":95}
data: {"status":"done","message":"Done!","pct":100,"transcript":"..."}

POST /api/process-document

Upload and extract text from a PDF file.

Body   Type                  Description
file   multipart/form-data   PDF file to process
Response
{"status": "done", "filename": "paper.pdf", "total_pages": 24}
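Posting the file field requires a multipart/form-data body. As a stdlib-only sketch of the encoding (build_multipart is a hypothetical helper; in practice a client library such as requests handles this for you):

```python
import uuid

def build_multipart(field, filename, data, content_type="application/pdf"):
    """Encode one file as a multipart/form-data body.
    Returns (content_type_header_value, body_bytes)."""
    boundary = uuid.uuid4().hex
    body = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field}"; filename="{filename}"\r\n'
        f"Content-Type: {content_type}\r\n\r\n"
    ).encode() + data + f"\r\n--{boundary}--\r\n".encode()
    return f"multipart/form-data; boundary={boundary}", body
```

The returned header value goes in the request's Content-Type header, and the body is POSTed to /api/process-document.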

POST /api/analyze

Analyze the current transcript (must call /api/process or /api/process-document first).

Response
{"summary": "...", "key_topics": [...], "main_takeaways": [...],
 "important_details": [...], "practical_applications": [...]}

POST /api/generate-mcq

Generate MCQs from the current analysis.

Body (JSON)     Type      Default
num_questions   integer   10

POST /api/chat

Chat about the currently loaded video/document content.

Body (JSON)   Type     Description
message       string   User's question

POST /api/ai-chat

General-purpose AI chat (no content context required).

Body (JSON)   Type     Description
message       string   User's message
history       array    Prior messages [{role, content}]
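A request body for this endpoint can be built and sent with the stdlib alone. A sketch (build_ai_chat_body and post_ai_chat are hypothetical helpers; the URL assumes the default host and port):

```python
import json
import urllib.request

API_URL = "http://localhost:8080/api/ai-chat"  # assumes the default host/port

def build_ai_chat_body(message, history=None):
    """JSON body for POST /api/ai-chat; history holds prior [{role, content}] turns."""
    return json.dumps({"message": message, "history": history or []}).encode()

def post_ai_chat(message, history=None):
    """Send one chat turn and return the decoded JSON reply (needs the app running)."""
    req = urllib.request.Request(
        API_URL,
        data=build_ai_chat_body(message, history),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```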

Common Issues

Port 8080 already in use

Fix
# Find and kill the process using port 8080
lsof -ti:8080 | xargs kill -9
python3 app.py

ModuleNotFoundError: No module named 'flask'

You're likely using python instead of python3. Always use:

python3 app.py

GROQ_API_KEY not set

Ensure your .env file is in the project root (same directory as app.py) and contains a valid key:

GROQ_API_KEY=gsk_xxxxxxxxxxxxxxxx

yt-dlp download fails

YouTube regularly updates its API. If downloads fail, update yt-dlp: pip3 install -U yt-dlp

FFmpeg not found

FFmpeg must be installed and in your system PATH. Verify with ffmpeg -version. If missing, install via Homebrew (macOS) or apt (Linux).

Support

If you're stuck on something not covered here, reach out: