Conversation

@Renizmy commented Nov 21, 2025

Related to: #13372

@Renizmy (Author) commented Nov 21, 2025

Moved here @Megafredo

@Megafredo assigned Megafredo and unassigned Megafredo Nov 21, 2025
@Megafredo self-requested a review November 21, 2025 13:23
@Megafredo (Member) commented:

Hello @Renizmy, thank you for the switch!
There is just one issue with the linter. It seems to be a matter of indentation in the linter configuration.

@Renizmy (Author) commented Nov 21, 2025

Fixed, sorry

codecov bot commented Nov 21, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 30.85%. Comparing base (bc3f6d9) to head (4d039e4).
⚠️ Report is 834 commits behind head on master.

Additional details and impacted files
@@             Coverage Diff             @@
##           master   #13261       +/-   ##
===========================================
+ Coverage   16.26%   30.85%   +14.59%     
===========================================
  Files        2846     2911       +65     
  Lines      412135   192432   -219703     
  Branches    11512    39246    +27734     
===========================================
- Hits        67035    59378     -7657     
+ Misses     345100   133054   -212046     
Flag              Coverage   Δ
opencti           30.85% <ø> (+14.59%) ⬆️
opencti-front      2.46% <ø> (-1.44%)  ⬇️
opencti-graphql   68.25% <ø> (+1.01%)  ⬆️

Flags with carried-forward coverage won't be shown.

@Megafredo (Member) left a comment

Hello @Renizmy, thanks for your work!
This new batch-processing method for streams will make a lot of people happy!

@Gwendoline-FAVRE-FELIX added the community label (use to identify PR from community) Dec 5, 2025
@helene-nguyen (Member) commented:

@Renizmy FYI, we'd like to improve and refactor the code a bit before merging! :)

@xfournet (Member) commented:

Hi @Renizmy,

Thank you for your contribution. As @helene-nguyen mentioned, we'd like the code to be refactored before merging. The main concern is that the new class (ListenStreamBatch) and method (listen_stream_batch) duplicate existing code.

Instead of creating a new class and method, we suggest implementing a message_callback wrapper that can adapt the existing listen_stream function from a single callback per message to a batched callback. You should be able to use the code you've already introduced to create this adapter.

Then each batch-capable connector (with regard to the targeted API) could use this adapter to receive batches of messages instead of individual messages.

Usage (assuming the wrapper is named create_batch_callback and the connector's process_message becomes process_message_batch) would look something like this:

    self.helper.listen_stream(message_callback=self.process_message)

--->

    batch_callback = self.helper.create_batch_callback(
        self.process_message_batch,
        self.batch_size,
        self.batch_timeout,
        self.max_batches_per_minute,
    )
    self.helper.listen_stream(message_callback=batch_callback)

Would you be open to making this change?
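
For illustration, here is a minimal sketch of what such an adapter might look like. It is hypothetical: create_batch_callback is only the name proposed above, not an existing pycti API; the batch_size and batch_timeout defaults are arbitrary; and the max_batches_per_minute rate-limiting knob is omitted for brevity.

    import threading
    from typing import Any, Callable, List, Optional

    def create_batch_callback(
        batch_callback: Callable[[List[Any]], None],
        batch_size: int = 100,
        batch_timeout: float = 5.0,
    ) -> Callable[[Any], None]:
        """Adapt a batched callback to the per-message callback signature.

        Messages are buffered; the buffer is flushed when it reaches
        batch_size, or batch_timeout seconds after the first buffered
        message (via a background timer).
        """
        buffer: List[Any] = []
        lock = threading.Lock()
        # Mutable holder for the currently pending timeout timer, if any.
        timer: List[Optional[threading.Timer]] = [None]

        def flush() -> None:
            with lock:
                if timer[0] is not None:
                    timer[0].cancel()  # no-op if the timer already fired
                    timer[0] = None
                if not buffer:
                    return
                batch = list(buffer)
                buffer.clear()
            # Deliver outside the lock so a slow callback doesn't block intake.
            batch_callback(batch)

        def on_message(msg: Any) -> None:
            flush_now = False
            with lock:
                buffer.append(msg)
                if len(buffer) >= batch_size:
                    flush_now = True
                elif timer[0] is None:
                    # First message of a new batch: arm the timeout flush.
                    t = threading.Timer(batch_timeout, flush)
                    t.daemon = True
                    timer[0] = t
                    t.start()
            if flush_now:
                flush()

        return on_message

Note that in this sketch, size-based flushes run on the thread delivering messages while timeout flushes run on a timer thread, so a real implementation would need the batched callback to be thread-safe (or flushing to be serialized).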

@xfournet self-requested a review December 15, 2025 16:16

@xfournet (Member) left a comment

Thanks for the update! I made some comments; I will resume the review after this first round of feedback has been addressed.

@Renizmy (Author) commented Dec 29, 2025

Hi @xfournet ,

Thanks for the review! All points addressed:

Changes to the rate limiter have led to simplifications. I haven't implemented any rate-limiting code for basic stream consumption (out of scope?).

Comment on lines 2399 to 2401
    if not isinstance(max_per_minute, int) or max_per_minute <= 0:
        raise ValueError("max_per_minute must be a positive integer")
    rate_limiter = RateLimiter(helper=self, max_per_minute=max_per_minute)
A Member commented:

Should create_rate_limiter be used here?

The Author replied:

Indeed ... Sorry
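
For illustration, the suggested fix presumably swaps the direct construction for the existing factory. This is a hypothetical sketch; create_rate_limiter's exact signature is not shown in this thread.

    # Before (direct construction, from the diff above):
    rate_limiter = RateLimiter(helper=self, max_per_minute=max_per_minute)

    # After (assumed factory, which would presumably centralize the
    # validation of max_per_minute as well):
    rate_limiter = self.create_rate_limiter(max_per_minute=max_per_minute)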

Labels

community (use to identify PR from community)

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Add bulk consumption helper/method for stream processing in client python

5 participants