11 changes: 10 additions & 1 deletion README.md
@@ -330,6 +330,15 @@ user_data_collection:

**Important**: Only MCP servers defined in the `lightspeed-stack.yaml` configuration are available to the agents. Tools configured in the llama-stack `run.yaml` are not accessible to lightspeed-core agents.

Besides configuring MCP servers in `lightspeed-stack.yaml`, you also need to enable the corresponding tool runtime provider in llama-stack's `run.yaml` under the `tool_runtime` section. Here's an example using the default `provider_id` that lightspeed-stack expects for MCP servers:

```yaml
tool_runtime:
- provider_id: model-context-protocol
provider_type: remote::model-context-protocol
config: {}
```

#### Configuring MCP Servers

MCP (Model Context Protocol) servers provide tools and capabilities to the AI agents. These are configured in the `mcp_servers` section of your `lightspeed-stack.yaml`.
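A minimal `mcp_servers` entry might look like the following sketch. The server name and URL are placeholders, not a real deployment; check your environment for the actual values, and note that the `provider_id` should match the `tool_runtime` provider enabled in `run.yaml`:

```yaml
# Hypothetical example: name and url are placeholders
mcp_servers:
  - name: filesystem-tools                  # label the agents see for this server
    provider_id: model-context-protocol     # matches the tool_runtime provider in run.yaml
    url: http://localhost:3000/sse          # endpoint of the running MCP server
```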
@@ -377,7 +386,7 @@ The secret files should contain only the header value (tokens are automatically

```bash
# /var/secrets/api-token
-Bearer sk-abc123def456...
+sk-abc123def456...

# /var/secrets/api-key
my-api-key-value
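The secret files above can be created like this. This is an illustrative sketch only: substitute your real secret values, and note that `printf '%s'` is used so the files contain only the header value with no trailing newline. The README's path is `/var/secrets`; the `SECRETS_DIR` variable here is an assumption to allow a non-root scratch location.

```shell
# Illustrative only: substitute your real secret values.
# SECRETS_DIR defaults to a scratch path; the README uses /var/secrets.
SECRETS_DIR="${SECRETS_DIR:-/tmp/secrets}"
mkdir -p "$SECRETS_DIR"
printf '%s' 'sk-abc123def456...' > "$SECRETS_DIR/api-token"   # value only, no newline
printf '%s' 'my-api-key-value'  > "$SECRETS_DIR/api-key"
chmod 600 "$SECRETS_DIR/api-token" "$SECRETS_DIR/api-key"     # owner-only access
```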
13 changes: 8 additions & 5 deletions examples/run.yaml
@@ -1,9 +1,9 @@
# Example llama-stack configuration for OpenAI inference + FAISS (RAG)
#
# Notes:
# - You will need an OpenAI API key
# - You can generate the vector index with the rag-content tool (https://github.com/lightspeed-core/rag-content)
#
version: 2

apis:
@@ -17,7 +17,7 @@ apis:
- scoring
- tool_runtime
- vector_io

benchmarks: []
datasets: []
image_name: starter
@@ -61,6 +61,9 @@ providers:
- config: {} # Enable the RAG tool
provider_id: rag-runtime
provider_type: inline::rag-runtime
- config: {} # Enable the MCP tool
provider_id: model-context-protocol
provider_type: remote::model-context-protocol
vector_io:
- config: # Define the storage backend for RAG
persistence:
@@ -144,7 +147,7 @@ registered_resources:
provider_model_id: sentence-transformers/all-mpnet-base-v2
metadata:
embedding_dimension: 768
vector_stores:
- embedding_dimension: 768
embedding_model: sentence-transformers/nomic-ai/nomic-embed-text-v1.5
provider_id: faiss
@@ -167,4 +170,4 @@ vector_stores:
safety:
default_shield_id: llama-guard
telemetry:
enabled: true