Refactors Newton XformPrimView: proper local poses, warp-native API, and shared contract tests #5179
ooctipus wants to merge 18 commits into isaac-sim:develop from
Conversation
🤖 Isaac Lab Review Bot
Summary
This PR refactors XformPrimView into a backend-polymorphic design: a BaseXformPrimView ABC, a USD-only base implementation, a PhysX/Fabric implementation in isaaclab_physx, a Newton/GPU-state implementation in isaaclab_newton, and a XformPrimViewFactory for runtime backend selection. The return type of all getters changes from torch.Tensor to wp.array. This is a breaking change that will crash all existing callers that haven't been updated.
Design Assessment
The approach of splitting XformPrimView into backend-specific implementations behind an ABC is architecturally sound and follows the framework's existing pattern (FactoryBase). However, the execution has critical problems:
- The return type change (`torch.Tensor` → `wp.array`) is not propagated to callers. The Camera sensor, ray caster, tiled camera, interactive scene extras, and `ray_cast_utils.obtain_world_pose_from_view()` all consume `XformPrimView.get_world_poses()` and expect `torch.Tensor`. They will crash or produce wrong results at runtime.
- The factory exists but nothing uses it. All existing callers import `XformPrimView` directly from `isaaclab.sim.views`, which now gives them the USD-only base class. The Camera sensor, which previously got automatic Fabric acceleration, now silently falls back to slow USD operations. The `sync_usd_on_fabric_write=True` parameter is swallowed by `**kwargs`.
- Alternative design consideration: instead of requiring all callers to switch to `XformPrimViewFactory` or `wp.to_torch()`, the ABC could return `torch.Tensor` (the framework's standard data type) and use `wp.array` only internally. This would be non-breaking. Alternatively, the base `XformPrimView` import could be made the factory itself (via `__init__.py` aliasing) so existing code automatically gets the right backend.
Verdict: Acceptable short-term, but the caller migration must happen in this PR or the return type change must be reverted to torch.Tensor for the public API.
Architecture Impact
High cross-module impact — multiple callers broken:
- `isaaclab.sensors.camera.camera.Camera` — calls `get_world_poses()`, assigns the result to `self._data.pos_w` (a `torch.Tensor`). Will crash.
- `isaaclab.sensors.camera.tiled_camera.TiledCamera` — inherits from Camera; same issue.
- `isaaclab.sensors.ray_caster.ray_cast_utils.obtain_world_pose_from_view()` — returns `pos_w, quat_w` typed as `torch.Tensor` and calls `.clone()` on them. `wp.array` has no `.clone()`. Will crash.
- `isaaclab.scene.interactive_scene.InteractiveScene.extras` — stores `XformPrimView` instances; downstream code may call `get_world_poses()` expecting `torch.Tensor`.
Implementation Verdict
Significant concerns — The return type change from torch.Tensor to wp.array breaks the Camera sensor, ray caster, and any downstream code that consumes poses from XformPrimView. These are critical runtime crashes, not theoretical edge cases.
Test Coverage
The PR includes comprehensive tests for both the Newton backend (432 lines) and PhysX Fabric backend (478 lines). The core test suite is updated to convert wp.array → torch.Tensor via wp.to_torch() before assertions. However:
- ❌ No tests for the `XformPrimViewFactory` — the factory is never instantiated in any test.
- ❌ No integration tests verifying Camera or ray caster still work with the new return types.
- ❌ The core tests remove the `backend="fabric"` parametrization entirely — Fabric is no longer tested in the core test suite.
CI Status
CI is still in progress (Docker builds pending, pre-commit ✅, linter checks pending). No failures detected yet.
Findings
🔴 Critical: source/isaaclab/isaaclab/sim/views/xform_prim_view.py — Return type changed from torch.Tensor to wp.array without updating callers. Every caller of get_world_poses(), get_local_poses(), get_scales() across the codebase expects torch.Tensor. At minimum: camera.py:695 does self._data.pos_w[env_ids] = poses where poses is now wp.array; ray_cast_utils.py:38 does pos_w.clone() on a wp.array. Both will crash at runtime.
🔴 Critical: source/isaaclab/isaaclab/sensors/camera/camera.py:452 — Camera loses Fabric acceleration silently. Camera creates XformPrimView(..., sync_usd_on_fabric_write=True). The refactored base class swallows sync_usd_on_fabric_write via **kwargs, and doesn't use Fabric at all. This is a significant performance regression for camera pose queries.
🟡 Warning: source/isaaclab/isaaclab/sim/views/xform_prim_view.py — Type hints say wp.array but implementation silently accepts torch.Tensor via _to_numpy(). The set_world_poses signature says positions: wp.array | None = None but the implementation calls self._to_numpy(positions) which handles torch.Tensor too. This is misleading.
🟡 Warning: source/isaaclab/isaaclab/sim/views/xform_prim_view_factory.py — Factory has no callers. All callers still import XformPrimView directly, getting the USD-only base class. Fabric acceleration is effectively disabled for the entire PhysX backend until callers are migrated.
🟡 Warning: source/isaaclab/isaaclab/sim/views/xform_prim_view.py — Input validation removed. The old code validated tensor shapes and raised ValueError with clear messages. The new code removes all shape validation.
🔵 Improvement: source/isaaclab_newton/isaaclab_newton/sim/views/xform_prim_view.py — _gather_scales is O(sites × shapes) per kernel thread. Pre-computing a body_to_first_shape index during __init__ would be O(1) per thread.
🔵 Improvement: source/isaaclab_newton/isaaclab_newton/sim/views/xform_prim_view.py:271-277 — shape_labels.index(pp) is O(n) per prim. Converting to a dict for O(1) lookup would improve init performance.
```python
    imageable.MakeVisible()
else:
    imageable.MakeInvisible()
```
🔴 Critical: Return type changed from torch.Tensor to wp.array breaks all downstream callers.
camera.py:695 does self._data.pos_w[env_ids] = poses where poses is now a wp.array — this will crash because self._data.pos_w is a torch.Tensor.
ray_cast_utils.py:38-42 calls pos_w.clone() on the result — wp.array has no .clone() method.
Either:
- Keep returning `torch.Tensor` from the public API (do `wp.to_torch()` internally), or
- Update ALL callers in this same PR: camera.py, tiled_camera.py, ray_cast_utils.py, interactive_scene.py.

Option 1 is safer and non-breaking.
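A minimal sketch of option 1 (dependency-free: numpy stands in for both the internal `wp.array` storage and the `torch.Tensor` return type, and `to_framework_tensor` is a hypothetical stand-in for `wp.to_torch()`):

```python
import numpy as np

def to_framework_tensor(arr):
    # Hypothetical helper: stands in for wp.to_torch() at the API boundary.
    return np.asarray(arr, dtype=np.float32)

class PoseViewSketch:
    """Illustrative view: backend-native storage, framework type at the boundary."""

    def __init__(self):
        # Stand-in for backend-native storage (a wp.array on the device).
        self._pos_native = [[0.0, 0.0, 1.0], [1.0, 0.0, 1.0]]

    def get_world_positions(self, indices=None):
        # Convert once, at the public boundary, so existing callers keep
        # receiving the framework's standard tensor type unchanged.
        pos = to_framework_tensor(self._pos_native)
        return pos if indices is None else pos[indices]
```

The conversion cost is paid once per getter call instead of being pushed onto every caller.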
```python
Args:
    prim_path: USD prim path pattern to match prims. Supports wildcards (``*``) and
        regex patterns (e.g., ``"/World/Env_.*/Robot"``). See
        :func:`isaaclab.sim.utils.find_matching_prims` for pattern syntax.
```
🔴 Critical: **kwargs silently swallows sync_usd_on_fabric_write=True from Camera.
The Camera sensor at camera.py:452 creates:
```python
self._view = XformPrimView(
    self.cfg.prim_path, device=self._device, stage=self.stage, sync_usd_on_fabric_write=True
)
```

With this refactor, `sync_usd_on_fabric_write` is silently consumed by `**kwargs` and ignored. The camera loses Fabric acceleration entirely — a significant performance regression.

The Camera needs to be updated to use `XformPrimViewFactory` or `isaaclab_physx.sim.views.XformPrimView` to get Fabric support back.
```python
def __new__(cls, *args, **kwargs) -> BaseXformPrimView:
    """Create a new XformPrimView for the active physics backend."""
    return super().__new__(cls, *args, **kwargs)
```
🟡 Warning: Factory has zero callers in the codebase.
All existing code imports XformPrimView directly from isaaclab.sim.views, which now gives the USD-only base class. No caller has been migrated to use the factory.
Consider either:
- Making `XformPrimView` in `__init__.py` point to the factory (so existing imports get backend dispatch automatically), or
- Migrating callers (camera, ray_caster, interactive_scene) in this PR.

Without one of these, Fabric acceleration is dead code for all existing users.
```python
if found == 0:
    out_scales[i] = wp.vec3f(1.0, 1.0, 1.0)
```
🔵 Improvement: _gather_scales / _scatter_scales have O(sites × shapes) complexity per kernel thread.
Each GPU thread iterates over ALL shapes to find the one matching its body index. For scenes with many shapes (e.g., complex articulated robots with hundreds of collision shapes), this linear scan per thread is wasteful.
Pre-compute a body_to_first_shape mapping during __init__ and do an O(1) indexed lookup.
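A dependency-free sketch of the suggested precomputation (plain Python standing in for the Warp kernel; function names are illustrative):

```python
def build_body_to_first_shape(shape_body, num_bodies):
    """Map each body index to its first shape index (-1 if the body has none).

    Built once at init time; a single O(num_shapes) pass.
    """
    mapping = [-1] * num_bodies
    for shape_idx, body_idx in enumerate(shape_body):
        if mapping[body_idx] == -1:
            mapping[body_idx] = shape_idx
    return mapping

def gather_scales(site_body, body_to_first_shape, shape_scale):
    """O(1) per site: look up the precomputed first shape for the site's body."""
    out = []
    for bid in site_body:
        si = body_to_first_shape[bid]
        out.append(shape_scale[si] if si >= 0 else (1.0, 1.0, 1.0))
    return out
```

In the Warp version the mapping would live in a `wp.array(dtype=wp.int32)` and the kernel body reduces to a single indexed read.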
```python
identity_xform = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0]

for prim in self._prims:
    pp = prim.GetPath().pathString
```
🔵 Improvement: Use dict lookup instead of list.index() for shape resolution.
shape_labels.index(pp) is O(n) per prim. For scenes with many shapes, this adds up during init.
Suggested change:

```python
shape_label_to_idx = {label: i for i, label in enumerate(shape_labels)}
body_label_to_idx = {label: i for i, label in enumerate(body_labels)}
for prim in self._prims:
    pp = prim.GetPath().pathString
    if pp in shape_label_to_idx:
        si = shape_label_to_idx[pp]
```
kellyguo11 left a comment
🤖 Isaac Lab Review Bot
Summary
This PR refactors XformPrimView from a monolithic class into a backend-polymorphic hierarchy: BaseXformPrimView (ABC) → XformPrimView (USD-only, in isaaclab), XformPrimView (PhysX/Fabric, in isaaclab_physx), XformPrimView (Newton/GPU, in isaaclab_newton), connected by XformPrimViewFactory. The API changes from torch.Tensor to wp.array for all getters/setters.
Architecture Impact
High. The return type change from torch.Tensor → wp.array on all getters is a breaking change that affects every downstream consumer of XformPrimView. Multiple files in the core isaaclab package (camera, ray_caster, benchmarks) directly consume the return values of get_world_poses() as torch.Tensor and were NOT updated in this PR. This will cause runtime crashes.
Implementation Verdict
The refactoring direction (backend-polymorphic views, warp-native data) is sound architecture. However, the PR is incomplete as submitted: it changes the return types of a widely-consumed API without updating all callers. The Newton and PhysX implementations are well-structured, but there are several issues ranging from critical (broken callers) to moderate (performance, inconsistent dtypes).
Test Coverage
- Core USD tests: ✅ Updated to convert via `wp.to_torch()`
- PhysX Fabric tests: ✅ New dedicated test file
- Newton tests: ✅ New comprehensive test file
- Missing: No tests for `XformPrimViewFactory` (the whole point of the abstraction)
- Missing: Camera / ray_caster integration tests would have caught the broken callers
CI Status
All checks pending at time of review.
Findings
| # | Severity | File | Finding |
|---|---|---|---|
| 1 | 🔴 | ray_cast_utils.py (not in diff) | `obtain_world_pose_from_view()` returns `wp.array` to callers expecting `torch.Tensor` — crashes ray_caster |
| 2 | 🔴 | camera.py (not in diff) | `_update_poses()` assigns `wp.array` into a `torch.Tensor` slice — crashes camera |
| 3 | 🔴 | benchmark_view_comparison.py (not in diff) | Uses `get_world_poses()` return as torch tensors for arithmetic |
| 4 | 🟡 | newton/xform_prim_view.py | `_gather_scales` kernel is O(sites × total_shapes) — quadratic blowup |
| 5 | 🟡 | newton/xform_prim_view.py | `get_local_poses = get_world_poses` — semantically wrong for hierarchical prims |
| 6 | 🟡 | newton/xform_prim_view.py | Inconsistent `wp.array` dtype: Newton uses `wp.vec3f`/`wp.vec4f`, USD/PhysX use `wp.float32` |
| 7 | 🟡 | xform_prim_view_factory.py | No `XformPrimViewFactory` tests |
| 8 | 🔵 | newton/xform_prim_view.py | Init uses O(N) `list.index()` instead of O(1) dict lookup |
| 9 | 🔵 | xform_prim_view.py | Type hints say `wp.array` but tests and callers pass `torch.Tensor` to setters |
See inline comments for details on findings 4–9.
```python
positions: World-space positions of shape ``(M, 3)``.
orientations: World-space quaternions ``(w, x, y, z)`` of shape ``(M, 4)``.
indices: Indices of prims to set poses for. Defaults to None (all prims).
"""
```
🔴 Breaking change: callers not updated
This method now returns wp.array instead of torch.Tensor. However, multiple core callers were not updated:
- `camera.py:695-696`: `poses, quat = self._view.get_world_poses(env_ids)` followed by `self._data.pos_w[env_ids] = poses` — assigning `wp.array` into a `torch.Tensor` slice will crash.
- `ray_cast_utils.py:38`: `pos_w, quat_w = physx_view.get_world_poses(env_ids)` — returned to callers that do `pos_w += self.drift[env_ids]` (torch arithmetic on `wp.array` will fail).
- `benchmark_view_comparison.py`: uses return values as torch tensors for timing comparisons.

These files must either:
- Be updated to wrap returns with `wp.to_torch()`, or
- Have the base class return `torch.Tensor` and let Newton/PhysX return `wp.array` only when the caller opts in.
This is a runtime crash, not a silent error — it will be caught immediately, but it blocks any existing workflow that uses cameras or ray casters.
```python
@wp.kernel
def _gather_scales(
```
🟡 Performance: O(sites × total_shapes) kernel
This kernel performs a linear scan over all shapes in the model for every site:

```python
for s in range(num_shapes):
    if shape_body[s] == bid and found == 0:
```

For a typical scene with 4096 envs × ~10 shapes/body ≈ 40K shapes, this is:
- 4096 sites × 40K shapes ≈ 163 million comparisons per `get_scales()` call

This is a GPU kernel so each thread does O(num_shapes) work, but it's still a significant serial loop inside each warp thread. For larger scenes, this will dominate.
Suggestion: Pre-compute a site_to_shape index mapping at init time (a simple wp.array mapping each site index to its first shape index). This converts the kernel from O(num_shapes) per thread to O(1).
```python
# At init time:
site_shape = wp.array([...], dtype=wp.int32, device=device)  # -1 if no shape

# In kernel:
si = site_shape[i]  # O(1) lookup
if si >= 0:
    out_scales[i] = shape_scale[si]
```

```python
# Local poses -- delegate to world (Newton bodies live in world space)
# ------------------------------------------------------------------

def get_local_poses(self, indices: Sequence[int] | None = None) -> tuple[wp.array, wp.array]:
```
🟡 Semantic concern: get_local_poses returns world poses
For Newton, get_local_poses() delegates directly to get_world_poses(). While documented ("Newton bodies live in world space"), this is semantically wrong for prims with non-root parents.
Consider a camera mount at (0.1, 0, 0.05) relative to a rigid body. The USD XformPrimView would return the local offset (0.1, 0, 0.05) from get_local_poses(), but the Newton view returns the world position. Code that relies on local-pose semantics (e.g., computing relative transforms) will silently produce wrong results.
If true local poses aren't available from Newton's state, consider either:
- Computing local poses as `inverse(parent_world_pose) * world_pose` using the site data you already have,
- Raising `NotImplementedError` so callers know the limitation, or
- At minimum, adding a prominent warning in the docstring that this method does NOT return local poses for Newton.
May I ask why are we doing this in the first place? This seems like something that can lead to many many errors.
I agree with @kellyguo11
```python
pp = prim.GetPath().pathString

if pp in shape_labels:
    si = shape_labels.index(pp)
```
🔵 Performance: O(N) list.index() in init loop
For each matched prim, shape_labels.index(pp) does a linear scan of the entire shape_labels list. With 4096 envs × ~10 shapes = 40K labels, and 4096 prims, this is ~164M string comparisons.
Same issue on lines 295 and 334 with body_labels.index().
Suggested change:

```python
si = shape_label_to_idx[pp]
```

Where `shape_label_to_idx = {label: i for i, label in enumerate(shape_labels)}` is precomputed once. Similarly for body labels. You already have `body_label_set` as a set for O(1) membership tests — just make it a dict mapping to index.
```python
from .base_xform_prim_view import BaseXformPrimView


class XformPrimViewFactory(FactoryBase, BaseXformPrimView):
```
🟡 No tests for the factory
The XformPrimViewFactory is the core architectural addition that ties the backends together, but there are zero tests for it. Missing test cases:
- Factory returns `XformPrimView` (USD) when no SimulationContext exists
- Factory returns the PhysX `XformPrimView` when the PhysX backend is active
- Factory returns the Newton `XformPrimView` when the Newton backend is active
- Factory `isinstance` check: `isinstance(factory_result, BaseXformPrimView)` returns True
Without these, any regression in the factory dispatch logic (e.g., wrong module path, missing registration) will go undetected.
```python
...

@abc.abstractmethod
def get_world_poses(self, indices: Sequence[int] | None = None) -> tuple[wp.array, wp.array]:
```
🟡 Inconsistent wp.array dtypes between backends
The ABC declares returns as tuple[wp.array, wp.array] but doesn't specify the expected dtype or shape convention. The implementations diverge:
- USD/PhysX: returns `wp.array(dtype=wp.float32, shape=(M, 3))` and `wp.array(dtype=wp.float32, shape=(M, 4))`
- Newton: returns `wp.array(dtype=wp.vec3f, shape=(M,))` and `wp.array(dtype=wp.vec4f, shape=(M,))`
Both convert to the same torch.Tensor shape via wp.to_torch(), but any code inspecting the raw wp.array.shape or wp.array.dtype will get different results depending on the backend. This violates the Liskov Substitution Principle for the ABC.
Consider documenting the expected dtype/shape in the docstring (or better, standardizing on one convention). The wp.vec3f/wp.vec4f approach is more semantically correct and avoids ambiguity.
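For illustration, a dependency-free numpy analogy of the divergence (numpy's plain vs. record dtypes standing in for `wp.float32` vs. `wp.vec3f`):

```python
import numpy as np

M = 4
# USD/PhysX-style convention: plain float32 array of shape (M, 3).
flat = np.zeros((M, 3), dtype=np.float32)
# Newton-style analog: one "vec3" record per element, shape (M,).
vec3 = np.zeros(M, dtype=[("xyz", np.float32, 3)])

# Same payload (M x 3 floats), different observable shape/ndim -- code that
# inspects these attributes gets backend-dependent answers.
assert flat.shape == (M, 3) and flat.ndim == 2
assert vec3.shape == (M,) and vec3.ndim == 1
```

Any caller that branches on `.shape` or `.ndim` of the raw array would therefore behave differently per backend, which is exactly the substitutability hazard noted above.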
```python
orientations: wp.array | None = None,
indices: Sequence[int] | None = None,
):
    """Set world-space poses for prims in the view.
```
🔵 Type hints say wp.array but callers pass torch.Tensor
The setters are annotated as positions: wp.array | None but the _to_numpy helper (line 301) explicitly handles both wp.array and torch.Tensor. All existing tests also pass torch.Tensor to the setters.
The type hints should reflect what the code actually accepts:
| """Set world-space poses for prims in the view. | |
| positions: wp.array | torch.Tensor | None = None, | |
| orientations: wp.array | torch.Tensor | None = None, |
Or at minimum, document that torch.Tensor is accepted as an input type. Without this, type checkers will flag every existing caller as an error.
```python
indices_wp = self._resolve_indices_wp(indices)
count = indices_wp.shape[0]

positions_wp = positions if positions is not None else wp.zeros((0, 3), dtype=wp.float32, device=self._device)
```
🔵 Fabric path doesn't explicitly convert torch.Tensor → wp.array
The old code explicitly called `wp.from_torch(positions)` before passing to `wp.launch()`. The new code passes the input directly:

```python
positions_wp = positions if positions is not None else wp.zeros(...)
```

This works because warp's `pack_arg` auto-converts via `__cuda_array_interface__` (CUDA) or `__array__()` (CPU). However, it's fragile and undocumented — relying on warp internal behavior rather than explicit conversion.

The old pattern was more robust:

```python
positions_wp = wp.from_torch(positions) if isinstance(positions, torch.Tensor) else positions
```

Not a bug today, but a reliability concern if warp changes its auto-conversion behavior.
```python
logger = logging.getLogger(__name__)


class XformPrimView(UsdXformPrimView):
```
I think this is kind of inconsistent with what you have designed, and I would consider inheritance an anti-pattern in this situation.
Whatever someone does to UsdXformPrimView in the future is going to be exposed as public API. I would suggest deriving from the interface you designed, BaseXformPrimView, and using composition.
If you look at MR #4923, I designed the xform data backend through composition. This gives us additional benefits: we can decouple the xform interface, backend, and caching, and we can keep adding those layers. We can also test them separately.
I thought we could have Usd, Newton, Fabric, and UsdRt backends that are wrapped in a caching layer that can be inserted into XformPrimView.
Also, I would rename the UsdXformPrimView alias to something more descriptive: IsaacSimXformPrimView?
c35dffd to e82a0b7 (Compare)
Greptile Summary

This PR refactors

Confidence Score: 4/5

Safe to merge after fixing the torch.Tensor index handling bug in NewtonSiteXformPrimView. One P1 bug: source/isaaclab_newton/isaaclab_newton/sim/views/newton_site_xform_prim_view.py — all six

Important Files Changed
Class Diagram

%%{init: {'theme': 'neutral'}}%%
classDiagram
class BaseXformPrimView {
<<abstract>>
+count int
+get_world_poses(indices) tuple
+set_world_poses(positions, orientations, indices)
+get_local_poses(indices) tuple
+set_local_poses(translations, orientations, indices)
+get_scales(indices) wp.array
+set_scales(scales, indices)
}
class UsdXformPrimView {
-_prims list
-_device str
+get_world_poses() USD XformCache
+set_world_poses() USD attributes
+get_visibility()
+set_visibility()
}
class FabricXformPrimView {
-_usd_view UsdXformPrimView
-_use_fabric bool
-_fabric_world_matrices wp.fabricarray
+get_world_poses() Fabric GPU
+set_world_poses() Fabric GPU
+get_scales() Fabric GPU
}
class NewtonSiteXformPrimView {
-_site_body wp.array int32
-_site_local wp.array transformf
-_parent_site_body wp.array int32
-_parent_site_local wp.array transformf
+get_world_poses() Warp kernel
+set_world_poses() Warp kernel
+get_local_poses() Warp kernel
+set_local_poses() Warp kernel
+get_scales() Warp kernel
}
class XformPrimView {
<<factory>>
+__new__() BaseXformPrimView
-_get_backend() str
}
BaseXformPrimView <|-- UsdXformPrimView
BaseXformPrimView <|-- FabricXformPrimView
BaseXformPrimView <|-- NewtonSiteXformPrimView
XformPrimView ..> FabricXformPrimView : PhysX backend
XformPrimView ..> NewtonSiteXformPrimView : Newton backend
FabricXformPrimView o-- UsdXformPrimView : composition (fallback)
Reviews (1): Last reviewed commit: "fixes"
```python
if indices is not None:
    n = len(indices)
    idx_wp = wp.array(list(indices), dtype=wp.int32, device=self._device)
```
list(torch.Tensor) produces 0-d tensors, not Python ints
list(torch.tensor([1, 2])) returns [tensor(1), tensor(2)] — a list of 0-d tensors, not Python integers. wp.array(...) does not reliably convert 0-d tensors to wp.int32 scalars, so this will either silently produce garbage indices or raise a Warp dispatch error at runtime.
This pattern appears in 6 places throughout this file (get_world_poses, set_world_poses, get_local_poses, set_local_poses, get_scales, set_scales). All of them hit the same issue when indices is a torch.Tensor. UsdXformPrimView._resolve_indices correctly guards against this with indices.tolist().
Suggested change:

```python
idx_wp = wp.array(indices.tolist() if isinstance(indices, torch.Tensor) else list(indices), dtype=wp.int32, device=self._device)
```

Apply the same fix to all six `wp.array(list(indices), ...)` call sites.
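A dependency-free illustration of the bug class and the guard pattern. `FakeTensor` is a hypothetical stand-in for `torch.Tensor`, and `resolve_indices` mirrors the guard in `UsdXformPrimView._resolve_indices` using duck typing instead of an `isinstance` check:

```python
class FakeTensor:
    """Minimal stand-in: iterating yields 0-d wrappers, like torch.Tensor."""

    def __init__(self, values):
        self._values = list(values)

    def __iter__(self):
        # Like torch, iteration yields wrapped scalars, NOT Python ints.
        return (FakeTensor([v]) for v in self._values)

    def tolist(self):
        # The correct escape hatch: plain Python ints.
        return list(self._values)

def resolve_indices(indices):
    """Guard pattern: prefer .tolist() for tensor-likes, list() otherwise."""
    if hasattr(indices, "tolist"):
        return indices.tolist()
    return list(indices)

t = FakeTensor([1, 2])
# The buggy path: list(t) gives wrapper objects, not ints.
assert all(isinstance(i, FakeTensor) for i in list(t))
# The guarded path: plain Python ints, safe to hand to wp.array(...).
assert resolve_indices(t) == [1, 2]
```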
```python
def _ensure_wp_vec3f(data: wp.array, device: str) -> wp.array:
    """Pass-through for ``wp.array``; convert ``torch.Tensor`` via ``wp.from_torch``."""
    if isinstance(data, wp.array):
        return data
    import torch  # noqa: PLC0415

    if isinstance(data, torch.Tensor):
        return wp.from_torch(data.contiguous(), dtype=wp.vec3f)
    raise TypeError(f"Expected wp.array or torch.Tensor, got {type(data)}")
```
Unused `device` parameter in helper functions

Both `_ensure_wp_vec3f` and `_ensure_wp_vec4f` accept a `device: str` argument that is never referenced in the function body. For the `torch.Tensor` path, `wp.from_torch` places the result on the tensor's own device (which may differ from `self._device`). Either enforce the device in the conversion, or remove the dead parameter from the signature.
Suggested change:

```python
def _ensure_wp_vec3f(data: wp.array) -> wp.array:
    """Pass-through for ``wp.array``; convert ``torch.Tensor`` via ``wp.from_torch``."""
    if isinstance(data, wp.array):
        return data
    import torch  # noqa: PLC0415

    if isinstance(data, torch.Tensor):
        return wp.from_torch(data.contiguous(), dtype=wp.vec3f)
    raise TypeError(f"Expected wp.array or torch.Tensor, got {type(data)}")
```
source/isaaclab/docs/CHANGELOG.rst (Outdated)
```rst
logic to :class:`~isaaclab_physx.sim.views.XformPrimView` and
:class:`~isaaclab_newton.sim.views.XformPrimView`. The public API is unchanged;
use :class:`~isaaclab.sim.views.XformPrimView` for backend-aware instantiation.
```
Changelog cross-references point to non-existent class names
The Changed entry references `:class:` roles for `~isaaclab_physx.sim.views.XformPrimView` and `~isaaclab_newton.sim.views.XformPrimView`, but neither symbol is exported. The correct names are `FabricXformPrimView` (from isaaclab_physx) and `NewtonSiteXformPrimView` (from isaaclab_newton), as confirmed by each package's `__init__.pyi`. These Sphinx cross-references will resolve to broken links in the rendered docs.
```python
def _gather_scales(
    shape_scale: wp.array(dtype=wp.vec3f),
    shape_body: wp.array(dtype=wp.int32),
    site_body: wp.array(dtype=wp.int32),
    num_shapes: wp.int32,
    out_scales: wp.array(dtype=wp.vec3f),
):
    """For each site, find the first shape on the same body and copy its scale."""
    i = wp.tid()
    bid = site_body[i]
    found = int(0)
    for s in range(num_shapes):
        if shape_body[s] == bid and found == 0:
            out_scales[i] = shape_scale[s]
            found = 1
    if found == 0:
        out_scales[i] = wp.vec3f(1.0, 1.0, 1.0)
```
O(num_sites × num_shapes) scale gather/scatter kernels
The _gather_scales and _scatter_scales kernels perform a linear scan over all shapes for every site thread. With many shapes this becomes O(N×M) GPU work per call. Consider building a pre-computed site_to_first_shape mapping at init time to make this O(1) per site, or at least add a note in the docstring that performance degrades with large shape counts.
🤖 Isaac Lab Review Bot — Follow-up
Addressed since initial review ✅
Significant improvements since the first push:
- Callers updated — Camera, ray_cast_utils, physx_scene_data_provider, mimic all properly convert between `wp.array` and `torch.Tensor`. The critical "caller breakage" concern is resolved.
- Factory replaces import — `XformPrimView` IS the factory now (via `FactoryBase`), so `from isaaclab.sim.views import XformPrimView` automatically dispatches to the right backend. No separate factory class needed.
- `sync_usd_on_fabric_write` properly handled — `FabricXformPrimView.__init__` explicitly accepts and uses it.
- Composition over inheritance — `FabricXformPrimView` uses `UsdXformPrimView` via composition (per bareya's suggestion), not inheritance.
- Newton `get_local_poses` is correct — proper parent-relative computation via the `_compute_site_local_transforms` kernel. The `inv(parent_world) * prim_world` math is correct.
- Shared contract test suite — `xform_contract_tests.py` with 14+ parameterized tests verifying getter/setter roundtrips, indexed ops, partial updates, and parent-child invariants across all backends.

The architecture is now clean: BaseXformPrimView (ABC) → UsdXformPrimView (pure USD), FabricXformPrimView (GPU-accelerated), NewtonSiteXformPrimView (site-based).
Still open 🔍
🔴 P1: list(indices) on torch.Tensor silently produces 0-d tensors
In NewtonSiteXformPrimView (6 call sites) and FabricXformPrimView._resolve_indices_wp:
```python
idx_wp = wp.array(list(indices), dtype=wp.int32, device=self._device)
```

When `indices` is a `torch.Tensor`, `list(torch.tensor([1, 2]))` produces `[tensor(1), tensor(2)]` — 0-d tensors, not Python ints. `wp.array(...)` may silently produce garbage or crash.
UsdXformPrimView._resolve_indices already handles this correctly:
```python
if isinstance(indices, torch.Tensor):
    return indices.tolist()  # ← produces [1, 2] as Python ints
return list(indices)
```

The same pattern should be used in the other two backends.
🟡 P2: Inconsistent wp.array dtypes across backends
USD/Fabric return wp.array(shape=(M,3), dtype=wp.float32), Newton returns wp.array(shape=(M,), dtype=wp.vec3f). Both produce identical torch.Tensor(M,3) via wp.to_torch(), but any code operating directly on wp.array attributes (.ndim, .dtype, .shape) will see different values. The ABC should document the expected representation, or backends should be aligned.
🟡 P2: Changelog cross-references point to non-existent class names
All three changelogs reference ~isaaclab_{physx,newton}.sim.views.XformPrimView but the actual exported classes are FabricXformPrimView and NewtonSiteXformPrimView.
| """ | ||
| if isinstance(physx_view, XformPrimView): | ||
| pos_w, quat_w = physx_view.get_world_poses(env_ids) | ||
| if isinstance(physx_view, BaseXformPrimView): |
There was a problem hiding this comment.
Not that this needs to be addressed in this MR, but there is an elegant way of wrapping those multiple definitions into a single dispatch, for example:

```python
from functools import singledispatch

@singledispatch
def _world_poses(view, env_ids):
    raise NotImplementedError(type(view))

@_world_poses.register
def _(view: BaseXformPrimView, env_ids):
    ...

def obtain_world_pose_from_view(physx_view, env_ids, clone=False):
    return _world_poses(physx_view, env_ids)
```
```diff
  def obtain_world_pose_from_view(
-     physx_view: XformPrimView | physx.ArticulationView | physx.RigidBodyView,
+     physx_view: BaseXformPrimView | physx.ArticulationView | physx.RigidBodyView,
```
Question: it seems like all of those are transformables. Why do we do this kind of "ugly" dispatch rather than wrapping it in something called a transformable, an xform view, or similar? They all seem very similar, and listing those types here adds complexity.
Is the return type incorrect? I think this pattern of prim-view / articulation / rigid-body transforms repeats.
3526a7f to 6ad1d15 (Compare)
```python
@configclass
class RayCasterXformCfg(SpawnerCfg):
```
I’m wondering about the choice of the name RaycasterXformCfg: is there a specific reason for that? Since this prim doesn’t seem to carry any operations or schemas itself, would it be clearer to represent it as a DummyPrim (or something similar) to better reflect its role?
Also, it feels a bit unintuitive that this prim is created, but the sensor offset is then defined separately in the raycaster configuration. Would it make sense to consolidate that, or is there a design reason for keeping those concerns separate?
Yes, the offset should be consolidated into the frame view instead of being maintained separately, much like the current camera class.
```diff
 import isaaclab.sim as sim_utils
-from isaaclab.sim.views import XformPrimView
+from isaaclab.sim.views import BaseXformPrimView
```
We should probably stick to a consistent naming convention like `<Type>Base`, as we already do with `AssetBase` and `SensorBase`. Would it make sense to align this with that pattern?
```diff
 ray_caster = RayCasterCfg(
-    prim_path="{ENV_REGEX_NS}/Robot/base/lidar_cage",
+    prim_path="{ENV_REGEX_NS}/Robot/base/raycaster",
+    spawn=sim_utils.RayCasterXformCfg(),
```
Oof, this would also be a pretty big breaking change for existing user code that uses the ray caster sensor, right?
```diff
     pos_w, quat_w = wp.to_torch(physx_view.get_transforms())[env_ids].split([3, 4], dim=-1)
 else:
     raise NotImplementedError(f"Cannot get world poses for prim view of type '{type(physx_view)}'.")
+indices = wp.from_torch(env_ids.to(dtype=torch.int32), dtype=wp.int32) if env_ids is not None else None
```
Does this mean we don't allow attaching the ray caster sensor to a physics body, and assume it is always attached to a non-physics prim?
```diff
     path already exists as a prim with a physics API, the path is automatically extended
     with ``/raycaster`` and a deprecation warning is emitted.
     """
+    if cfg.spawn is None:
```
Was `spawn` needed in previous ray caster configurations as well, or will users need to add this to their existing setups?

It is added by default; users don't need to add it to their previous setups.
```diff
     if validate_xform_ops:
         for prim in self._prims:
+            sim_utils.standardize_xform_ops(prim)
```
How long does this call take for a typical RL environment workflow? Is there a benefit in defaulting `validate_xform_ops` to `False` instead?

Actually, on second thought, accuracy is probably more important than performance here, but it may be worth documenting the overhead somewhere if it is large.
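One way to answer the overhead question above is to time the validation pass at view-construction time. This is a generic harness sketch: `time_validation` is hypothetical, and the lambda is a stand-in for the real `sim_utils.standardize_xform_ops` over the view's prims.

```python
import time

def time_validation(prims, validate_op):
    """Time one standardization pass over all prims (generic harness)."""
    start = time.perf_counter()
    for prim in prims:
        validate_op(prim)
    return time.perf_counter() - start

# Stand-in op; in Isaac Lab this would call sim_utils.standardize_xform_ops.
elapsed = time_validation(range(1024), lambda prim: prim * prim)
print(f"validation pass over 1024 prims: {elapsed * 1e3:.3f} ms")
```

Since the pass runs once at construction rather than per step, even a few milliseconds is likely acceptable for RL workflows.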
```diff
@@ -0,0 +1,359 @@
+# Copyright (c) 2022-2026, The Isaac Lab Project Developers (https://github.com/isaac-sim/IsaacLab/blob/main/CONTRIBUTORS.md).
```
Is this meant to be unit tests? Why does the file name not start with `test_*`?
```diff
     position = pose[..., :3]
     orientation = pose[..., 3:]
-    xform_prim.set_world_poses(position, orientation, None)
+    xform_prim.set_world_poses(wp.from_torch(position.contiguous()), wp.from_torch(orientation.contiguous()), None)
```
@AntoineRichard - do you think the `.contiguous()` call could also be something we handle for users under the hood somehow? I'm not sure how easy it would be for users unfamiliar with Warp to know this is needed.
```diff
     si = indices[i]
     bid = site_body[si]
     found = int(0)
+    for s in range(num_shapes):
```
Is it worth making this loop a kernel launch dimension as well?
```diff
     # ------------------------------------------------------------------

     def set_scales(self, scales, indices=None):
+        if not self._use_fabric:
```
What does it mean to be in the `FabricXformPrimView` class without `use_fabric` enabled? Should this path just use the USD view class instead?
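Since the PR description says `FabricXformPrimView` holds a `UsdXformPrimView` by composition, the non-Fabric path could simply delegate rather than duplicating USD logic. A minimal sketch under that assumption; the class bodies here are stand-ins, not the real implementations:

```python
class UsdView:
    """Stand-in for UsdXformPrimView: the slow, always-correct USD path."""

    def set_scales(self, scales, indices=None):
        return ("usd", scales)

class FabricView:
    """Stand-in for FabricXformPrimView: composes a USD view and delegates
    whenever Fabric is disabled, instead of re-implementing the USD path."""

    def __init__(self, use_fabric: bool):
        self._use_fabric = use_fabric
        self._usd_view = UsdView()

    def set_scales(self, scales, indices=None):
        if not self._use_fabric:
            return self._usd_view.set_scales(scales, indices)
        return ("fabric", scales)

assert FabricView(use_fabric=False).set_scales([1.0])[0] == "usd"
assert FabricView(use_fabric=True).set_scales([1.0])[0] == "fabric"
```

With delegation, the only Fabric-specific code left in the class is the fast path, which directly answers the question of what the non-Fabric branch is for.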
- Add Google-style docstrings to all 12 Warp kernels in `NewtonSiteFrameView` with explicit Args sections describing shapes, units, and semantics.
- Add missing docstrings to all public methods (`__init__`, `count`, `get_world_poses`, `get_scales`, `set_scales`).
- Reorder kernel parameters: inputs before outputs.
- Update `wp.launch` calls to use explicit `inputs`/`outputs` kwargs.
- Treat in-place arrays (`site_local`, `shape_scale`) as inputs.
- Fix `camera.py` rebase conflict resolution: keep develop's Renderer abstraction while applying the XformPrimView→FrameView rename.
Force-pushed from cbd0031 to b461f40.
Description
Summary
Rewrites the Newton `XformPrimView` from scratch with correct local-pose semantics, a clean site-based architecture, and a shared test contract that enforces the same invariants across all backends (USD, Fabric, Newton).

Key changes:

- `get_local_poses`/`set_local_poses` now correctly compute parent-relative transforms on GPU (`inv(parent_world) * prim_world`) instead of incorrectly returning world poses
- `set_world_poses`/`set_local_poses` write to the `site_local` offset instead of writing `body_q` directly (which would move the parent body)
- Raises `ValueError` if the prim path resolves to a physics body or collision shape — `XformPrimView` is for non-physics child prims only (cameras, sensors, markers)
- All getters and setters are warp-native and return `wp.array` — no torch/list conversion overhead
- `from isaaclab.sim.views import XformPrimView` now auto-selects the correct backend (USD, Fabric, Newton) via `XformPrimViewFactory`
- `FabricXformPrimView` uses composition (`self._usd_view`) instead of inheriting from `UsdXformPrimView`
- Backends are named explicitly — `UsdXformPrimView`, `FabricXformPrimView`, `NewtonSiteXformPrimView` — no more ambiguous `XformPrimView` in every package
- Shared `xform_contract_tests.py` that any backend imports and runs by providing a `view_factory` fixture

Type of change
- Bug fix: Newton local poses previously returned world poses (`local == world`)
- Breaking change: `XformPrimView` renamed to `NewtonSiteXformPrimView`, PhysX to `FabricXformPrimView`; `indices` parameter changed from `Sequence[int]` to `wp.array`

Expected failures
- `test_set_world_updates_local[cuda:0]` in Fabric — pre-existing limitation: `set_world_poses` writes to `omni:fabric:worldMatrix` but `get_local_poses` reads from USD, so local poses are stale after a Fabric world write. This will be fixed by the Fabric backend PR (Test for Camera and Tiled Camera transforms on CPU and GPU #4923), which adds `omni:fabric:localMatrix` support.
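The backend auto-selection described under Key changes (`from isaaclab.sim.views import XformPrimView` picking the backend via `XformPrimViewFactory`) can be sketched with module-level aliasing. The stub classes and the string selection key below are assumptions for illustration, not the actual Isaac Lab implementation:

```python
class UsdXformPrimViewStub:
    backend = "usd"       # slow, always-available USD path

class FabricXformPrimViewStub:
    backend = "fabric"    # Fabric-accelerated path

class NewtonSiteXformPrimViewStub:
    backend = "newton"    # Newton GPU-state path

_BACKENDS = {
    "usd": UsdXformPrimViewStub,
    "fabric": FabricXformPrimViewStub,
    "newton": NewtonSiteXformPrimViewStub,
}

def xform_prim_view_factory(backend: str = "usd"):
    """Return the view class registered for the requested backend."""
    try:
        return _BACKENDS[backend]
    except KeyError:
        raise ValueError(f"unknown backend: {backend!r}") from None

# A package __init__ could alias the factory result so the old import
# (`from isaaclab.sim.views import XformPrimView`) keeps working unchanged.
XformPrimView = xform_prim_view_factory("newton")
assert XformPrimView.backend == "newton"
```

Aliasing at import time is what lets existing callers keep their `XformPrimView` import while silently receiving the backend-appropriate class.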
Benchmark (1024 prims, 50 iterations, RTX 5090)
Checklist
- I have run the formatter (`./isaaclab.sh --format`)
- I have updated the version in the `config/extension.toml` file
- I have added my name to `CONTRIBUTORS.md` or my name already exists there