Embeddings on-prem in Europe
bge-m3 (1024 dims) on Hetzner DE. Your transcripts never reach OpenAI. Vector search via pgvector on the same Postgres.
Semantic search across your entire AudioMap library with on-prem embeddings (bge-m3 in Europe). Ask "what did client X say about pricing?" and get answers with exact note + timestamp citations.
Ask "which clients mentioned a price increase?" across all your notes. AudioMap finds them with citations.
Every answer cites note + exact second. Click and AudioMap jumps to that segment. No "trust the AI" — verify what was said.
Or scope your question to a single recording. Useful for advisory firms reviewing a specific client meeting.
Connect your AudioMap library to Claude, ChatGPT, Cursor or Gemini. Use them as your memory.
pgvector + cosine similarity scales to thousands of meetings without an external vector DB. Typical query latency is under 200 ms.
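For intuition, here is what pgvector's cosine-distance operator (`<=>`) computes, sketched in plain Python: 1 minus the cosine of the angle between two vectors, so 0 means identical direction and the nearest segments are the smallest distances.

```python
import math

def cosine_distance(a, b):
    """Cosine distance, as pgvector's <=> operator computes it: 1 - cos(a, b)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (norm_a * norm_b)

# Same direction -> 0.0 (best match); orthogonal -> 1.0.
print(cosine_distance([1.0, 0.0], [2.0, 0.0]))  # 0.0
print(cosine_distance([1.0, 0.0], [0.0, 1.0]))  # 1.0
```

Ordering results by this distance is exactly what `ORDER BY embedding <=> query_vector` does in Postgres.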
bge-m3 (BAAI) is multilingual (100+ languages including all Spanish variants), open-source, and runs entirely on our Hetzner DE server. Quality is comparable to text-embedding-3-large for Spanish. We avoid OpenAI specifically to keep all data in Europe.
pgvector handles millions of segments efficiently with an HNSW index. The practical limit is your disk, not the engine. We've tested with 500K+ segments at single-digit-second query latency.
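The setup looks roughly like the SQL below (table and column names here are illustrative, not AudioMap's actual schema): a 1024-dimension column matching bge-m3's output, an HNSW index using `vector_cosine_ops`, and a top-k query ordered by the `<=>` cosine-distance operator.

```python
# Hypothetical schema sketch for pgvector with an HNSW index.
# "segments", "note_id", "start_sec" are assumed names for illustration only.
DDL = """
CREATE EXTENSION IF NOT EXISTS vector;
CREATE TABLE segments (
    id        bigserial PRIMARY KEY,
    note_id   bigint NOT NULL,
    start_sec real   NOT NULL,
    text      text   NOT NULL,
    embedding vector(1024)          -- bge-m3 output size
);
-- HNSW index over cosine distance: queries stay fast as the table grows.
CREATE INDEX ON segments USING hnsw (embedding vector_cosine_ops);
"""

# <=> is pgvector's cosine-distance operator; smallest distance ranks first.
QUERY = """
SELECT note_id, start_sec, text
FROM segments
ORDER BY embedding <=> %(query_embedding)s::vector
LIMIT 10;
"""

print(DDL)
print(QUERY)
```

The returned `note_id` and `start_sec` are what lets each answer cite a note and an exact second.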
Yes. bge-m3 is cross-lingual: the embedding for "what did Sara say about budget?" in English matches Spanish content discussing "presupuesto". Cross-language retrieval works in both directions.
Generate an API key at /dashboard/api-keys, point Claude, ChatGPT, or Cursor at our MCP server URL, and they can query your AudioMap library directly. Tools include search_notes, get_transcript_segments, ask_note, and more. The Free tier includes 50 credits/month of MCP usage.
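Client setup varies by tool. As one hedged sketch: Claude Desktop reads an `mcpServers` entry from its config file, and a remote MCP server can be bridged with the `mcp-remote` package. The server name and URL below are placeholders, not AudioMap's actual values.

```json
{
  "mcpServers": {
    "audiomap": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://<your-audiomap-mcp-url>"]
    }
  }
}
```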
You may also like
Semantic search is included from the Free tier. 120 free minutes monthly. Data stays in Europe.
Try for free