LCORE-1209: Custom shields not compatible with LCORE#1221

Merged
tisnik merged 1 commit into lightspeed-core:main from are-ces:fix-custom-shield-clean
Feb 25, 2026

Conversation

Contributor

@are-ces are-ces commented Feb 25, 2026

Description

We assumed that all shields would have a provider_resource_id; however, custom shields (e.g. those defined in lightspeed-providers) do not have this field, which resulted in a failure.
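The fix can be sketched as follows. This is a minimal sketch with a hypothetical Shield shape; the real logic lives in src/utils/shields.py and operates on llama-stack client objects:

```python
# Minimal sketch of the fix (hypothetical Shield dataclass; the actual code
# in src/utils/shields.py works against llama-stack client objects).
# Custom shields may not define provider_resource_id at all, so only
# llama-guard shields are validated against the registered models.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Shield:
    identifier: str
    provider_id: str
    provider_resource_id: Optional[str] = None  # absent on custom shields


def needs_model_check(shield: Shield) -> bool:
    """Only llama-guard shields must reference a registered model."""
    return shield.provider_id == "llama-guard"


def shield_is_valid(shield: Shield, available_models: set[str]) -> bool:
    if not needs_model_check(shield):
        return True  # custom shields pass without any model lookup
    return (
        shield.provider_resource_id is not None
        and shield.provider_resource_id in available_models
    )
```

In the service itself, an invalid llama-guard shield maps to an HTTP 404 error response rather than a boolean.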

Type of change

  • Refactor
  • New feature
  • Bug fix
  • CVE fix
  • Optimization
  • Documentation Update
  • Configuration Update
  • Bump-up service version
  • Bump-up dependent library
  • Bump-up library or tool used for development (does not change the final image)
  • CI configuration change
  • Konflux configuration change
  • Unit tests improvement
  • Integration tests improvement
  • End to end tests improvement
  • Benchmarks improvement

Tools used to create PR

Identify any AI code assistants used in this PR (for transparency and review context)

NA

Related Tickets & Documents

  • Related Issue #
  • Closes #

Checklist before requesting a review

  • I have performed a self-review of my code.
  • PR has passed all pre-merge test jobs.
  • If it is a core feature, I have added thorough tests.

Testing

Tested manually with a llama-guard shield (including one with an invalid shield name) and with a custom shield.

Summary by CodeRabbit

Release Notes

  • Bug Fixes

    • Enhanced validation for shield moderation to properly detect and report missing or invalid model configurations, with improved error responses for misconfigured shields.
  • Tests

    • Expanded test coverage to verify correct behavior across different shield provider types.

@are-ces are-ces requested review from asimurka and tisnik February 25, 2026 10:41
@are-ces are-ces marked this pull request as draft February 25, 2026 10:43
@are-ces are-ces force-pushed the fix-custom-shield-clean branch from e85c194 to 1de20a4 Compare February 25, 2026 10:47
@are-ces are-ces marked this pull request as ready for review February 25, 2026 10:48
Contributor

coderabbitai bot commented Feb 25, 2026

Walkthrough

Introduces provider-specific validation for shield moderation: only llama-guard shields now validate that their provider_resource_id exists in available_models; other providers skip this check and proceed directly to moderation. Includes corresponding test updates to verify the new behavior for both llama-guard and non-llama-guard shields.

Changes

Cohort / File(s) Summary
Shield Moderation Logic
src/utils/shields.py
Adds targeted model validation that triggers exclusively for llama-guard provider, verifying provider_resource_id exists in available_models; other providers bypass this check. Returns HTTP 404 if llama-guard model reference is missing or invalid. Non-llama-guard moderation flow remains unchanged.
Shield Moderation Tests
tests/unit/utils/test_shields.py
Adds new test for non-llama-guard shields to confirm model validation is skipped; modifies existing tests to explicitly configure llama-guard provider and adjust expectations for error cases involving missing or invalid model references.
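The test idea above can be sketched like this (hypothetical provider names and a stand-in validator; the real tests live in tests/unit/utils/test_shields.py):

```python
# Sketch of the two test cases described above, using a stand-in validator.
# Provider names other than "llama-guard" are hypothetical examples.
from types import SimpleNamespace


def validate(shield, available_models):
    """Stand-in for the moderation check: only llama-guard validates models."""
    if shield.provider_id == "llama-guard" and (
        not shield.provider_resource_id
        or shield.provider_resource_id not in available_models
    ):
        raise LookupError("shield model not found")  # service returns HTTP 404


def test_non_llama_guard_skips_model_validation():
    custom = SimpleNamespace(provider_id="custom-provider",
                             provider_resource_id=None)
    validate(custom, available_models=set())  # must not raise


def test_llama_guard_with_missing_model_fails():
    guard = SimpleNamespace(provider_id="llama-guard",
                            provider_resource_id="not-registered")
    try:
        validate(guard, available_models={"llama-guard-3"})
    except LookupError:
        return
    raise AssertionError("expected LookupError")
```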

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Suggested reviewers

  • tisnik
🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title references the issue (LCORE-1209) and indicates the PR addresses custom shields compatibility, which aligns with the main change: removing provider_resource_id validation for non-llama-guard shields to fix custom shield failures.
Docstring Coverage ✅ Passed No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.



@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (2)
src/utils/shields.py (2)

91-91: Extract "llama-guard" to a named constant.

♻️ Proposed refactor
+LLAMA_GUARD_PROVIDER_ID = "llama-guard"
+
 ...
-        if shield.provider_id == "llama-guard" and (
+        if shield.provider_id == LLAMA_GUARD_PROVIDER_ID and (

Also check constants.py to confirm this constant isn't already defined there before adding it here. As per coding guidelines: "Check constants.py for shared constants before defining new ones."


82-82: available_models is fetched on every call regardless of shield provider type.

client.models.list() is an async I/O call that is only consumed inside the provider_id == "llama-guard" branch (lines 91–94). When no llama-guard shields are configured (e.g., a pure custom-shield deployment), this round-trip is wasted on every moderation request.

♻️ Lazy-load available_models only when needed
-    available_models = {model.id for model in await client.models.list()}
-
     shields = await client.shields.list()
+    available_models: set[str] | None = None
     for shield in shields:
         if shield.provider_id == "llama-guard" and (
             not shield.provider_resource_id
-            or shield.provider_resource_id not in available_models
+            or shield.provider_resource_id
+            not in (
+                available_models := available_models
+                or {model.id for model in await client.models.list()}
+            )
         ):
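The lazy-load idea in the diff can also be written as a small caching helper, which avoids the walrus-in-condition construct while keeping the single-round-trip guarantee. This is a sketch only; `client` is assumed to expose the async `models.list()` used elsewhere in the PR:

```python
# Sketch of the reviewer's suggestion: fetch the model list at most once,
# and only when a llama-guard shield actually needs it. The client object
# and its async models.list() method are assumed from the surrounding code.
from typing import Optional, Set


class CachedModelIds:
    """Lazily fetch and cache the set of registered model ids."""

    def __init__(self, client) -> None:
        self._client = client
        self._ids: Optional[Set[str]] = None

    async def get(self) -> Set[str]:
        if self._ids is None:  # first caller pays the I/O cost
            self._ids = {m.id for m in await self._client.models.list()}
        return self._ids
```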


📥 Commits

Reviewing files that changed from the base of the PR and between 9af1c06 and 1de20a4.

📒 Files selected for processing (2)
  • src/utils/shields.py
  • tests/unit/utils/test_shields.py


@tisnik tisnik left a comment


LGTM

@tisnik tisnik merged commit 71b93ea into lightspeed-core:main Feb 25, 2026
22 checks passed