#802: Adding some log messages in hyperdrive trigger (#805)

filiphornak merged 16 commits into develop from
Conversation
src/main/scala/za/co/absa/hyperdrive/trigger/scheduler/utilities/logging/LazyToStr.scala (outdated)
.../scala/za/co/absa/hyperdrive/trigger/api/rest/health/DatabaseConnectionHealthIndicator.scala
```diff
  implicit override def executionContext: ExecutionContext = scala.concurrent.ExecutionContext.Implicits.global

- private trait HasUpdateJob {
+ trait HasUpdateJob {
```
I actually don't know, but when I ran it locally, it complained that the Mockito framework could not mock private static classes. I used the same command as in the GitHub Actions script.
So I reverted it and will test it in the pipeline.
```diff
  schedulerInstanceService
    .registerNewInstance()
-   .map { id =>
+   .map(wireTap { id =>
```
I see the idea, but I don't like it; I would leave it as it was. With the original, what is returned and what is a side effect is visible on the first scan.
Ok, I will remove it. But, IMO, this looks clearer when you can see that there is no change in value, just execution of a side effect.
Akka has something similar in its streams. Even native Scala contains `.tap` and `.tapEach`. And it could be easily implemented on any Functor type with cats:
```scala
import cats._

implicit class FunctorOps[F[_], A](val fa: F[A]) extends AnyVal {
  // Note: the type parameter list must not redeclare A, or it shadows the class's A
  def tap[U](f: A => U)(implicit func: Functor[F]): F[A] =
    func.map(fa) { elm =>
      f(elm)
      elm
    }
}
```

Because, to me, this looks much cleaner:
```scala
schedulerInstanceId match {
  case Some(id) => Future.successful(id)
  case _ =>
    schedulerInstanceService
      .registerNewInstance()
      .tap(id => schedulerInstanceId = Some(id))
      .onComplete {
        case Success(id) =>
          logger.info(s"Successfully assigned new (SchedulerId=${id}) to scheduler")
        case Failure(e) =>
          logger.error("Failed to get new SchedulerId", e)
      }
}
```
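As the comment above notes, since 2.13 native Scala ships a `tap` in `scala.util.chaining`, so the value-level case needs no cats dependency. A minimal runnable sketch of the same pattern, assuming hypothetical stand-ins for `registerNewInstance` and the `schedulerInstanceId` field (names borrowed from the snippet above, not the actual service):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global
import scala.util.chaining._

object TapSketch {
  // Hypothetical stand-in for schedulerInstanceService.registerNewInstance()
  def registerNewInstance(): Future[Long] = Future.successful(7L)

  @volatile var schedulerInstanceId: Option[Long] = None

  def run(): Future[Long] =
    registerNewInstance()
      // tap the value inside the Future: run the side effect, return the value unchanged
      .map(_.tap(id => schedulerInstanceId = Some(id)))

  def main(args: Array[String]): Unit =
    println(Await.result(run(), 1.second)) // prints 7
}
```

The difference from the cats version is that `scala.util.chaining.tap` works on the plain value, so on a `Future` it has to be applied inside a `map`, whereas a Functor-based `tap` lifts directly over `F[A]`.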
src/main/scala/za/co/absa/hyperdrive/trigger/scheduler/sensors/Sensors.scala (outdated)
```diff
  case Some(joinedDagDefinition) =>
    for {
-     hasInQueueDagInstance <- dagInstanceRepository.hasInQueueDagInstance(joinedDagDefinition.workflowId)
+     hasInQueueDagInstance <- dagInstanceRepository
```
You can rewrite this and have logging after each `flatMap`:

```scala
_ = logger.trace("bla")
```
Yeah, I know about it, and I will change it back. However, I don't like it, because the `=` is for a map; I wish Scala had something for specifying side effects in a for comprehension.
I would instead use something like https://typelevel.org/cats/typeclasses/arrow.html, or a better example with precisely the same thing: https://medium.com/virtuslab/arrows-monads-and-kleisli-part-ii-12ffd4da8bc9.
To me, writing it that way looks much cleaner. Also, using cats would add several additional structures, like IO, Functor (which in turn can be used to simplify all the `.map(_.map(..._.map(f)...))` operations), Validated, Writer, etc.
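For context, the `_ =` binding suggested above looks like this in a minimal runnable sketch; the repository methods are hypothetical stand-ins for the ones in the diff, and `println` stands in for the logger:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

object ForLoggingSketch {
  // Hypothetical stand-ins for the dagInstanceRepository calls discussed above
  def hasInQueueDagInstance(workflowId: Long): Future[Boolean] = Future.successful(false)
  def insertDagInstance(workflowId: Long): Future[Long]        = Future.successful(99L)

  def run(workflowId: Long): Future[Long] =
    for {
      inQueue <- hasInQueueDagInstance(workflowId)
      // `_ =` binds the result of a plain expression (desugars to a map),
      // so the side effect runs between the flatMap steps without altering the chain
      _ = println(s"hasInQueueDagInstance=$inQueue")
      id <- insertDagInstance(workflowId)
      _ = println(s"created dag instance id=$id")
    } yield id

  def main(args: Array[String]): Unit =
    println(Await.result(run(42L), 1.second)) // prints 99
}
```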
…ala-logging dependencies
```diff
  }
  fut.onComplete {
-   case Success(_) => logger.debug(s"Executing job. Job instance = $jobInstance")
+   case Success(_) => logger.debug(s"Executing job. (JobId={})", jobInstance)
```
`JobInstance` instead of `JobId`.
```diff
  private def updateJob(jobInstance: JobInstance): Future[Unit] = {
    logger.info(
-     s"Job updated. ID = ${jobInstance.id} STATUS = ${jobInstance.jobStatus} EXECUTOR_ID = ${jobInstance.executorJobId}"
+     "(JobId={}). Job updated. ID = {} STATUS = {} EXECUTOR_ID = {}",
```
Unnecessary: the job id appears twice.
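The switch above from `s"..."` interpolation to SLF4J-style `{}` placeholders matters because interpolation builds the message eagerly, while placeholder arguments are only rendered when the level is enabled. A minimal sketch of that effect, using a hypothetical by-name `debug` and `println` instead of a real SLF4J binding (this is also roughly what a lazy-toString wrapper like the PR's `LazyToStr` buys):

```scala
object LazyLoggingSketch {
  var debugEnabled = false
  var renders = 0

  // Hypothetical by-name logger: the message expression only runs when the
  // level is enabled, mirroring what SLF4J {} placeholders do for arguments
  def debug(msg: => String): Unit = if (debugEnabled) println(msg)

  // Stands in for an expensive jobInstance.toString
  def expensiveToString(): String = { renders += 1; "JobInstance(...)" }

  def main(args: Array[String]): Unit = {
    debug(s"Executing job. ${expensiveToString()}") // level off: never rendered
    println(renders) // prints 0
    debugEnabled = true
    debug(s"Executing job. ${expensiveToString()}") // level on: rendered once
    println(renders) // prints 1
  }
}
```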
Kudos, SonarCloud Quality Gate passed!

Quality Gate passed: 0 Bugs, 0 Vulnerabilities, 0 Security Hotspots, 0 Code Smells, No Coverage information, 0.0% Duplication.
I added some logging messages for the hyperdrive trigger. They will give us insight into what's happening when specific workflows are skipped.