Not just what's similar—what actually helped. Say it worked. Say it didn't. The AI remembers. 100% private and local.
Works with popular AI tools like Claude Code and Cursor. Your memory stays on your computer.
Pre-built: Ready to run, no setup • Build yourself: Free, some setup required
Most AI memory finds things that sound similar to your question.
But sounding similar doesn't mean it helped you before.
You ask your AI: "What's a good restaurant for a date night?"
Your AI remembers two places you've tried before:
Memory A:
"That new romantic sushi place downtown"
You went there - food was disappointing
Memory B:
"The Italian place on Main Street"
You loved it - amazing experience
Regular AI picks Memory A because "romantic" and "downtown" sound like good date words.
Roampal picks Memory B because it remembers you actually had a great time there.
How does it know what worked?
Just talk naturally. The AI reads your response, determines if you were satisfied, and scores the memory in Roampal. No buttons to click, no ratings to give - it learns from the conversation itself.
Questions designed to trick the AI into giving the wrong answer.
| Approach | Got It Right | Improvement |
|---|---|---|
| Standard AI search | 10% | - |
| + Smarter ranking | 20% | +10 pts |
| + Learning from outcomes | 50% | +40 pts |
Smarter ranking helps a little. Outcome tracking helps 4× more.
We measured how well it finds the right answer over time.
From a cold start (0 uses), accuracy improves as you use it, ending up better than ranking alone.
The more you use it, the better it gets at finding what actually helps you.
Better answers with less noise. Roampal only retrieves what's proven helpful, not everything that sounds related. That means lower API costs, faster responses, and more accurate answers.
Coding (30): p=0.001, d=7.49 • Finance (100): p<0.001, χ²=98 • Learning: p=0.005, d=13.4
Full methodology & reproducible benchmarks
Traditional AI memory gets fuller. Roampal gets smarter.
Good advice rises to the top. Bad advice sinks. Automatically.
Short-term, long-term, permanent - your memories organized by how often you need them
Learns which tools work, which topics connect, and what advice helps you specifically
Searches your memories in milliseconds. All on your machine.
Everything stays on your machine. No cloud. No tracking. Works offline.
All of this is tested and verified. Want to see the details?
View Benchmark Source Code
Cloud AI remembers what you say, but not what actually worked or failed for you.
Re-explain your stack, your goals, your context. Every. Single. Day.
Memory features behind paywalls. Your conversations stored in the cloud. Forever.
Tell it something once. It remembers forever. Your preferences, your goals, the things that matter to you.
Upload your files - notes, PDFs, spreadsheets, whatever. Ask questions and get answers based on YOUR stuff.
When advice helps you, it remembers. When it doesn't, it learns from that too. Over time, it gets better at helping YOU.
What's a good restaurant for a date night?
How about that new sushi place downtown?
Went there, wasn't great...
Any other suggestions?
There's that Italian place on Main Street - quiet atmosphere, great reviews.
Perfect! Had an amazing time there.
Need a restaurant recommendation for tonight
Based on what you enjoyed last time, I'd suggest the Italian place on Main Street again, or something similar in that area.
Keeps track of which solutions actually help with YOUR problems. The more you use it, the better it gets at helping you.
See the AI search memory, create notes, and stream responses in real-time. Full transparency into what it's doing.
Remembers your preferences and learned patterns across ALL conversations. No need to repeat yourself.
Roampal uses an LLM to detect outcomes from natural language:

```python
# The LLM reads the conversation, classifies the outcome, and the
# retrieved memory's score is adjusted accordingly.
def apply_outcome(score: float, outcome: str) -> float:
    if outcome == "worked":
        score += 0.2   # positive outcome
    elif outcome == "failed":
        score -= 0.3   # negative outcome
    elif outcome == "partial":
        score += 0.05  # partial success
    return score
```
Roampal builds 3 interconnected Knowledge Graphs that learn different aspects of your memory:
Learns which memory collections answer which types of questions.
You ask: "What books did I read about investing?"
System learned: Questions about "books" → books collection (85% success)
Result: Searches books first, skips less relevant collections → faster answers
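The routing step above can be sketched as an outcome-weighted lookup. This is an illustrative sketch, not Roampal's actual internals; the stats and collection names are made up:

```python
# Illustrative outcome-based routing: each (topic, collection) pair
# accumulates a success rate, and search order follows those rates.
routing_stats = {
    ("books", "books"): {"wins": 17, "tries": 20},  # 85% success
    ("books", "notes"): {"wins": 3, "tries": 20},   # 15% success
}

def rank_collections(topic: str) -> list[str]:
    rates = {
        coll: s["wins"] / s["tries"]
        for (t, coll), s in routing_stats.items()
        if t == topic and s["tries"] > 0
    }
    # Highest success rate first: the best-performing collection is searched first
    return sorted(rates, key=rates.get, reverse=True)

rank_collections("books")  # ["books", "notes"]
```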
Extracts people, places, tools, and concepts from your memories and tracks relationships.
Your memories mention: "Sarah Chen works at TechCorp as an engineer"
System extracts: Sarah Chen → TechCorp → engineer (with quality scores)
Result: "Who works at TechCorp?" → finds Sarah even without exact keyword match
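The extraction example can be pictured as simple subject-relation-object triples. A hypothetical sketch, not the product's storage format:

```python
# Hypothetical triple store: relations let a query about TechCorp reach
# "Sarah Chen" even when the query shares no keywords with the memory text.
triples = [
    ("Sarah Chen", "works_at", "TechCorp"),
    ("Sarah Chen", "has_role", "engineer"),
]

def who_works_at(company: str) -> list[str]:
    return [subj for (subj, rel, obj) in triples
            if rel == "works_at" and obj == company]

who_works_at("TechCorp")  # ["Sarah Chen"]
```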
Tracks which tools and actions succeed or fail in different contexts.
During a quiz: AI uses create_memory() to answer questions
System tracks: create_memory in "recall_test" → 5% success (hallucinating!)
System tracks: search_memory in "recall_test" → 85% success
Result: After 3+ failures, warns AI: "create_memory has 5% success here - use search_memory instead"
Together: Routing KG finds the right place, Content KG finds connected information, Action-Effectiveness KG prevents mistakes. All three learn from real outcomes.
All data stored locally:
Zero network calls. Zero telemetry.
Regular AI remembers what you say. Roampal learns what actually helps.
| Feature | Cloud AI Memory | Roampal |
|---|---|---|
| Memory Type | Stores what you say | Learns what works |
| Outcome Tracking | No feedback loop | Scores every result |
| Bad Advice Handling | Stays in memory | Auto-deleted |
| Pattern Recognition | None | Tracks success rates per approach |
| Failure Avoidance | No tracking | Warns about past failures |
| Privacy | Cloud servers | Your machine only |
| Works Offline | No | Yes |
Different tools for different needs. Use cloud AI for broad knowledge, Roampal for personal learning that compounds over time.
"Remembers my entire stack. Never suggests Python when I use Rust."
"My personal tutor that remembers what I struggle with."
"Remembers my story world, characters, and tone."
"My business advisor that knows my entire strategy."
Give any MCP-compatible AI tool access to Roampal's persistent memory. Works with Claude Code, Cursor, Cline, Continue.dev, and any tool supporting the Model Context Protocol.
Click the settings icon in Roampal's sidebar to access configuration options.
Go to the Integrations tab. Roampal automatically detects MCP-compatible tools installed on your system.
Works with: Any tool with an MCP config file (Claude Code, Cursor, Cline, Continue.dev, etc.)
Click Connect next to any detected tool. Roampal automatically configures the MCP integration.
Click Add Custom MCP Client to manually specify a config file path.
Memory Tools Available:
search_memory
add_to_memory_bank
update_memory
archive_memory
get_context_insights
record_response
Close and reopen your connected tool. Memory tools will be available immediately.
No manual JSON editing required
Bundled paraphrase-multilingual-mpnet-base-v2, no Ollama needed
Memories shared across all connected tools
Fully local after initial setup
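Under the hood, each memory tool appears to a connected client as a standard MCP tool, invoked with a `tools/call` request. A sketch of what such a request could look like; the `"query"` argument name is an assumption, not Roampal's documented schema:

```python
import json

# A generic MCP tools/call request; the argument schema is hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_memory",
        "arguments": {"query": "preferred test framework"},
    },
}
print(json.dumps(request, indent=2))
```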
Connection not working? Try disconnecting and reconnecting in Settings → Integrations, or reach out to the community for support.
Join Discord Community
Released December 2025
MCP security hardening with parameter allowlisting, rate limiting, and audit logging. Full memory content returned everywhere (no more truncation). In-app update notifications. Exit button for clean shutdown.
Parameter allowlisting blocks signature cloaking attacks. Rate limiting prevents runaway tool loops. Audit logging tracks all MCP tool executions.
Full memory content returned everywhere. Removed all character limits from MCP search results, cold-start context, and internal guidance.
In-app banner when new versions available. Downloads through Gumroad. Critical updates can force attention.
Released December 2025
Monolithic 4,746-line file refactored into 9 focused, testable modules. Facade pattern preserves API while enabling maintainability. 260 tests passing.
ScoringService, SearchService, KnowledgeGraphService, RoutingService, PromotionService, OutcomeService, MemoryBankService, ContextService.
4,746 lines → ~2,880 lines through deduplication and cleaner abstractions. Each service is ~150-400 lines.
All existing integrations continue working unchanged. Facade maintains public API while delegating to services.
Released December 2025
Internal LLM now contributes to Action KG (unified learning across all interfaces). get_context_insights outputs actionable prompts. Contextual book embeddings improve retrieval by ~49%.
Internal LLM and MCP clients now both contribute to Action-Effectiveness KG. Tracks ANY tool—built-in and external MCP tools.
get_context_insights now outputs actionable prompts with workflow guidance. Model-agnostic descriptions work across all LLMs.
Book chunks now embedded with "Book: {title}, Section: {section}" prefix. ~49% improved retrieval for ambiguous queries.
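The prefixing described above is straightforward to sketch; the function name is hypothetical:

```python
def contextualize_chunk(chunk: str, title: str, section: str) -> str:
    # Prepend book and section context so the embedding carries it too
    return f"Book: {title}, Section: {section}\n{chunk}"

contextualize_chunk("Diversification lowers risk...",
                    "Intro to Investing", "Chapter 2")
# "Book: Intro to Investing, Section: Chapter 2\nDiversification lowers risk..."
```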
Released December 2025
Roampal can now connect to external MCP tool servers (Blender, filesystem, GitHub, databases). Wilson score ranking ensures proven memories outrank "lucky" new ones. Multilingual smart sorting supports 14 languages.
Connect to external tool servers like Blender, filesystem, GitHub, SQLite, Brave Search. Your local LLM can now use any MCP-compatible tool.
Statistical confidence intervals favor proven track records. A memory with 90/100 success now correctly outranks "lucky" 1/1 success.
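That ranking behavior can be reproduced with the standard Wilson score lower bound; a sketch using the usual 95% confidence formula (Roampal's exact parameters are not documented here):

```python
import math

def wilson_lower_bound(successes: int, trials: int, z: float = 1.96) -> float:
    """Lower bound of the Wilson score interval (z=1.96 ~ 95% confidence)."""
    if trials == 0:
        return 0.0
    p = successes / trials
    denom = 1 + z**2 / trials
    centre = p + z**2 / (2 * trials)
    margin = z * math.sqrt((p * (1 - p) + z**2 / (4 * trials)) / trials)
    return (centre - margin) / denom

# A proven 90/100 record outranks a "lucky" 1/1 record:
wilson_lower_bound(90, 100)  # ~0.826
wilson_lower_bound(1, 1)     # ~0.207
```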
14-language smart sorting for international users. English, Spanish, German, French, Chinese, Japanese, Korean, Arabic, and more.
Released December 2025
Upload PDFs, Word docs, Excel spreadsheets, and more. VRAM-aware model selection recommends optimal quality for your GPU. 100% accuracy on 100 adversarial finance scenarios (vs 0% for regular AI search).
PDF, DOCX, Excel, CSV, HTML, RTF + existing TXT/MD. Automatic metadata extraction. Semantic search over tabular data.
GPU auto-detection recommends optimal model quality for your VRAM. 6 levels from Q2_K to Q8_0. Prevents VRAM overflow.
100 adversarial personal finance scenarios. Roampal: 100% correct. Regular AI: 0%. 63% fewer tokens per query.
Enriched search results (score, uses, age). Auto-scores retrieved memories based on user outcome. Better LLM integration.
Released November 27, 2025
Major intelligence upgrade: 3rd Knowledge Graph learns which tools work in which contexts, state-of-the-art retrieval pipeline (contextual + hybrid + reranking), and comprehensive production validation (40/40 tests, 93% accuracy, sub-100ms).
Action-Effectiveness KG learns which tools work in which contexts. Auto-warns LLM after 3+ uses when patterns emerge.
Contextual retrieval + Hybrid search (BM25+Vector) + Smarter sorting. Based on state-of-the-art research techniques.
100% vs 0-3% accuracy on 130 adversarial scenarios (p<0.001). 63% fewer tokens per query. Learning improves 58%→93% over time. Sub-100ms latency.
Content KG ranks entities by quality (importance × confidence), not just mentions. Authoritative facts surface first.
Released November 5, 2025
Major release featuring intelligent knowledge graph routing that learns which memory collections answer which queries, plus enhanced MCP integration with semantic learning storage.
System learns which collections answer which queries. Measurable routing improvement through outcome-based feedback.
Semantic learning storage with outcome-based scoring. External LLMs (Claude, Cursor) get intelligent routing.
Routing KG (blue) learns query patterns, Content KG (green) extracts entities. Visualize memory relationships.
All users use paraphrase-multilingual-mpnet-base-v2. Works offline, no Ollama required for embeddings.
Released November 3, 2025
Critical fixes for collection-specific memory search and knowledge graph success rate accuracy.
AI can now properly target specific memory collections.
Knowledge graph now shows true success rates.
Clearer labels in knowledge graph concept details.
Released October 31, 2025
Major update adding support for both Ollama and LM Studio providers, enhanced model management, and critical performance fixes.
Use both Ollama and LM Studio. Auto-detects providers and seamlessly switches between them.
Browse and download models from both providers with unified management interface.
Settings show which providers are running and how many models each has installed.
Initial public release
First public alpha featuring 5-tier memory system, outcome learning, knowledge graph, and persistent memory bank.
Cloud AI stores what you say. Roampal learns what works.
Three key differences:
Works with many popular AI models:
Pick the one that works best on your computer. Install from Settings.
Yes! After downloading models, Roampal works completely offline. No internet required.
Minimum: 8GB RAM, 10GB free disk space.
Recommended: 16GB RAM and a decent graphics card for faster responses.
Easy way: Go to Settings → Export. Pick what you want to backup and download a zip file.
Manual: All your data is stored in regular files on your computer. Just copy the data folder to backup.
The code is free on GitHub. The $9.99 gets you:
Technical users can View on GitHub for free. The $9.99 is for convenience.
No. Roampal uses AI models that can generate incorrect information, hallucinate facts, or reflect training biases.
Always verify critical information from authoritative sources. Do not rely on AI-generated content for:
When you connect Roampal to Claude Code, Cursor, or other MCP-compatible tools, they get access to these memory tools:
Your AI uses these automatically during conversation. You don't need to do anything special.
Just talk naturally. The AI reads your response, determines if you were satisfied, and scores the memory in Roampal.
For example:
No buttons to click, no ratings to give. It learns from the conversation itself.
Roampal is built on a simple idea: software that respects you. No hidden fees, no data collection, no lock-in.
AI that grows with you. Learns from outcomes. Tracks what actually works. 100% local.
Pre-built version: Just download and run • Or build yourself for free from the source code