[Request] Memory usage and performance with large concurrent find() operations #54
We're running into high memory usage and slow response times when multiple users query large collections concurrently. Wanted to check if there are any potential optimizations or recommendations on your side.
Our setup
- 5-10 concurrent users
- Frontend may fetch ~25 collections in parallel via find() through the realm-web SDK
- Some collections have 1000+ documents
- Same data model and queries as with MongoDB Realm
We know this is not an ideal data loading strategy, but it's what we inherited from the Realm migration and a frontend rewrite isn't feasible short-term.
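For context, the inherited loading pattern looks roughly like the sketch below. The collection names and the findAll stub are hypothetical stand-ins for the real realm-web find() calls, so the shape is self-contained:

```javascript
// ~25 collections fetched in parallel on login (names are illustrative).
const COLLECTION_NAMES = Array.from({ length: 25 }, (_, i) => `collection_${i}`);

// Stand-in for a realm-web find() call: each request resolves to a full,
// unpaginated result set (1000+ documents for some collections), which is
// what drives peak memory when several users log in at once.
async function findAll(name) {
  const docCount = 1000;
  return Array.from({ length: docCount }, (_, i) => ({ _id: `${name}:${i}` }));
}

// On login, the frontend fires all ~25 find() calls concurrently.
async function loadOnLogin() {
  const results = await Promise.all(COLLECTION_NAMES.map((n) => findAll(n)));
  return new Map(COLLECTION_NAMES.map((n, i) => [n, results[i]]));
}
```

With 5-10 users logging in around the same time, the backend ends up materializing on the order of 25 x users full result sets simultaneously, which matches the memory spikes described below.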
What we're seeing
- Memory spikes from ~200MB to 8GB+ when several users log in around the same time
- Process crashes with FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
- A single large collection response can be about 29MB and take 50+ seconds (on Realm these were noticeably faster, ~20 seconds)
- Responses get increasingly slower as more users request the data, while this didn't noticeably affect loading times with Realm
- We've increased --max-old-space-size to 16GB as a workaround for the OOM, but response times are still slow
We understand that this is an unusual use-case/approach. Just wondering if there are any optimizations you could think of that could help with this kind of workload, or if there's a recommended approach for handling larger datasets.