Improve thread pool error handling #273
Conversation
describe Temporal::Activity::Context do
  let(:client) { instance_double('Temporal::Client::GRPCClient') }
  let(:connection) { instance_double('Temporal::Connection::GRPC') }
This turned out to be the wrong (non-existent) type. It's not strictly necessary for this PR, but I fixed it while I was in here.
subject.heartbeat(iteration: 3)
expect(subject.last_heartbeat_throttled).to be(true)

# Shutdown to drain remaining threads
heartbeat_thread_pool.shutdown
This code was raising some sort of error that was previously being ignored. Now that unhandled errors crash the process, the pool needs to be properly shut down so that threads encountering bad test state don't fail.
This change has also been running within Stripe since around July with good results.

Thanks for this PR @jeffschoner. The example silent failure you gave in the description sounds nasty and tough to debug.
Summary
All errors and exceptions coming out of a thread pool job are now logged and sent to the error handler.
If any error does reach the top of the stack on a thread pool thread, it will now crash the process rather than silently kill the thread. Because StandardError is rescued in the TaskProcessor, this impacts only severe errors like NoMemoryError or SecurityError raised by activity or workflow code, or ordinary errors raised due to bugs in temporal-ruby itself. It's perhaps somewhat controversial to crash the worker process, but an Exception raised out of user code, or an unexpected error in the pollers, task processors, or thread pools, leaves the worker in an unknown state where it's unclear whether it can continue to safely process work.
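As a rough illustration of the intended behavior (a sketch only, not the actual temporal-ruby implementation; the logger and error_handler names here are stand-ins), a pooled job rescues Exception rather than only StandardError, logs and reports the error, and re-raises so a fatal error escapes the thread instead of disappearing with it:

```ruby
require 'logger'

logger = Logger.new($stdout)
# Stand-in for the configured error handler; the real hook in temporal-ruby differs.
error_handler = ->(error) { logger.error("reported to error handler: #{error.class}") }

# Wrap a single thread pool job.
run_job = lambda do |job|
  begin
    job.call
  rescue Exception => e # not just StandardError: NoMemoryError, SecurityError, etc.
    logger.error("thread pool job failed: #{e.class}: #{e.message}")
    error_handler.call(e) # surfaced to the error handler instead of vanishing
    raise                 # re-raised so a truly fatal error still escapes the thread
  end
end

worker_thread = Thread.new { run_job.call(-> { raise NoMemoryError, 'simulated' }) }
begin
  worker_thread.join # join re-raises the NoMemoryError here, mimicking the process crash
rescue Exception => e
  puts "worker process would crash with #{e.class}"
end
```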
Motivation
Before this change, when an activity or workflow task raises an error that is not a subclass of StandardError, it silently kills the thread pool thread it is running on. Additionally, any error in the ensure/rescue blocks of the pollers, task processors, or the thread pool causes the same behavior. Eventually, workers can run out of thread pool threads and become "zombie" workers that stop polling for tasks, all while logging no errors.
For example, I've seen this occur when an error raised by an activity contains a circular reference. When Oj tries to serialize it, it runs out of memory and raises NoMemoryError. This is a subclass of Exception, not StandardError, and is therefore not caught by an ordinary rescue => e clause. Eventually it reaches the top of the activity task thread pool thread and causes the thread to silently exit before it can increment available_threads and signal the availability condition variable to free up resources. Memory frees up at this point, so the worker continues to run, but other failures may have occurred on other threads during the period of memory exhaustion, leaving the worker in an unknown state. The activity is then retried after it times out, which in turn kills another thread on one of the workers. If this happens enough times between worker service restarts, the entire worker fleet will no longer be able to process any activities.
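To make that failure mode concrete, here is a small standalone snippet (simulated; none of this is temporal-ruby code, and the NoMemoryError is raised explicitly rather than by a real allocation failure) showing that a bare rescue => e never sees an Exception subclass like NoMemoryError, and that a thread it escapes from simply dies:

```ruby
def risky_serialization
  # Simulate Oj blowing up on a circular reference; in reality this would be a
  # genuine allocation failure, not an explicit raise.
  raise NoMemoryError, 'failed to allocate memory (simulated)'
end

begin
  risky_serialization
rescue => e # equivalent to `rescue StandardError => e`; this branch is never reached
  puts "caught by the ordinary rescue: #{e.class}"
rescue Exception => e
  puts "only an explicit `rescue Exception` catches it: #{e.class}"
end

# On a pool thread the effect is worse: the thread dies before any cleanup
# (incrementing available_threads, signalling the availability condition variable)
# can run, so the pool slot is leaked.
thread = Thread.new { risky_serialization }
begin
  thread.join
rescue Exception
  puts 'the pool thread died; its slot would never be released'
end
```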
Testing
There are new specs for these thread pool cases in both the regular and scheduled thread pools. Some other specs also had to be updated because they had errors thrown on a background thread that started surfacing previously hidden failures.