Distributed vector memory service for AI assistants powered by PluresDB.
PluresLM MCP provides persistent vector memory with P2P synchronization across devices. Built on PluresDB for distributed data and Model Context Protocol for AI tool integration.
- 🧠 Persistent vector memory - Semantic search across conversation history
- 🌐 P2P synchronization - Share memories across devices via Hyperswarm
- 🔧 MCP protocol - Standard interface for AI assistant integration
- 🚀 Multiple transports - stdio, SSE/HTTP for different deployment needs
- 📦 Zero-knowledge - No central servers, encrypted P2P mesh
- 🛠️ Project indexing - Ingest codebases for context-aware assistance
```bash
npm install
npm run build

# Set PluresDB topic (generate with: openssl rand -hex 32)
export PLURES_DB_TOPIC="your-64-char-hex-topic-key"

# Start stdio MCP server
npm start
```

```bash
# Configure for HTTP transport
export MCP_TRANSPORT=sse
export PORT=3001
export HOST=0.0.0.0
export PLURES_DB_TOPIC="your-topic-key"

# Start HTTP service
npm start
# → Serving on http://0.0.0.0:3001/sse
```

| Variable | Required | Description |
|---|---|---|
| `PLURES_DB_TOPIC` | ✅ | 64-char hex string (32 bytes) for the PluresDB mesh |
| `PLURES_DB_SECRET` | ❌ | Optional encryption secret for the mesh |
| `MCP_TRANSPORT` | ❌ | `stdio` (default) or `sse` for HTTP |
| `PORT` | ❌ | HTTP port when using the SSE transport (default: 3001) |
| `HOST` | ❌ | HTTP host (default: 0.0.0.0) |
| `OPENAI_API_KEY` | ❌ | OpenAI key for embeddings (falls back to local Transformers.js) |
| `OPENAI_EMBEDDING_MODEL` | ❌ | OpenAI model name (default: `text-embedding-3-small`) |
| `PLURES_LM_DEBUG` | ❌ | Enable debug logging (`true`/`false`) |
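If `openssl` is not available, an equivalent 32-byte hex topic can be generated with Node's built-in `crypto` module (a minimal sketch; the variable name is illustrative):

```typescript
import { randomBytes } from "node:crypto";

// 32 random bytes → 64-character hex string, suitable for PLURES_DB_TOPIC
const topic: string = randomBytes(32).toString("hex");
console.log(`export PLURES_DB_TOPIC="${topic}"`);
```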
```json
{
  "mcpServers": {
    "pluresLM": {
      "command": "node",
      "args": ["path/to/pluresLM-mcp/dist/index.js"],
      "env": {
        "PLURES_DB_TOPIC": "your-topic-key"
      }
    }
  }
}
```

```json
{
  "mcpServers": {
    "pluresLM": {
      "transport": {
        "type": "sse",
        "url": "http://memory-service:3001/sse"
      }
    }
  }
}
```

PluresLM v2.0+ uses pure PluresDB for storage and synchronization:
- No SQLite dependencies - Distributed-first design
- Hyperswarm P2P mesh - Direct device-to-device sync
- Embedded vector search - Cosine similarity in-memory
- Conflict-free replication - CRDTs for distributed consistency
- stdio (default) - Process pipes for local OpenClaw integration
- sse - Server-Sent Events over HTTP for remote/clustered deployments
All devices sharing the same `PLURES_DB_TOPIC` automatically sync memories:

```bash
# Device 1
export PLURES_DB_TOPIC="abc123..."
npm start  # Stores memories locally

# Device 2
export PLURES_DB_TOPIC="abc123..."  # Same topic
npm start  # Automatically receives Device 1's memories
```

PluresLM MCP exposes these tools for AI assistants:
- `pluresLM_store(content, tags?, category?, source?)` - Store new memory
- `pluresLM_search(query, limit?, minScore?)` - Semantic search
- `pluresLM_forget(id? | query?, threshold?)` - Delete memories
- `pluresLM_index(directory, maxFiles?, category?, tags?)` - Index codebase
- `pluresLM_status()` - Database stats + sync status
- `pluresLM_profile()` - User profile data
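As with any MCP server, these tools are invoked with JSON-RPC `tools/call` requests. A hypothetical store call might look like this (the argument values are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "pluresLM_store",
    "arguments": {
      "content": "User prefers TypeScript for examples",
      "tags": ["preference"],
      "category": "user-profile"
    }
  }
}
```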
For enterprise deployments across multiple OpenClaw instances:

```bash
# Memory service (dedicated server)
docker run -p 3001:3001 -e MCP_TRANSPORT=sse plures/pluresLM-mcp

# Worker instances (point to service)
export MCP_TRANSPORT=sse
export PLURES_LM_SERVICE_URL=http://memory-service:3001/sse
```

```bash
# Multiple services with shared PluresDB topic
docker-compose up -d  # Load balancer → N service instances
```

PluresLM v2.0 is a breaking change from SQLite-based v1.x:
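A minimal `docker-compose.yml` for the single-service case might look like the following sketch (the image name comes from the `docker run` example above; the topic value is a placeholder, and a load balancer for scaled-out instances is not shown):

```yaml
services:
  memory-service:
    image: plures/pluresLM-mcp
    environment:
      MCP_TRANSPORT: sse
      PORT: "3001"
      HOST: 0.0.0.0
      PLURES_DB_TOPIC: "your-64-char-hex-topic-key"
    ports:
      - "3001:3001"
```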
- ❌ Removed: All SQLite/better-sqlite3 dependencies
- ✅ Added: PluresDB distributed storage
- ✅ Added: P2P mesh synchronization
- ✅ Added: SSE/HTTP transport option
- 🔄 Changed: Tool names (`memory_*` → `pluresLM_*`)
- 🔄 Changed: Configuration (file paths → topic keys)

To migrate:

- Export v1.x data: `pluresLM_export_bundle`
- Deploy v2.0 with a PluresDB topic
- Import data: `pluresLM_import_bundle`
Note: Direct file migration not supported due to schema differences.
AGPL-3.0 - See LICENSE
- PluresDB - Distributed database engine
- Model Context Protocol - AI tool standard
- OpenClaw - AI assistant platform