(writing in progress)
For background see ChimeraTK/DeviceAccess#40. This ticket is the second step of the bullet-point list (but can be done in parallel to the first):
Write an RPC-over-shared-memory implementation. The following features are required:
- The server defines a number of functions which can be called by the client.
- A function has any number of parameters and any number of return values.
- Each parameter or return value can be any fundamental C++ type or a variable-length array of any fundamental C++ type.
- The list of parameters or return values can be empty (like a function returning void).
- The list of functions is declared in the initialisation phase and does not change during the runtime of the program.
- Applications should declare the list of functions in a separate C++ header file which is used by both server and client, so the declarations are guaranteed to be consistent. It must therefore be possible to use the same C++ declarations for the list of RPC functions on both the server and the client side.
- A function call should always block until the server has processed it.
- Multiple function calls can be processed in parallel even for the same function.
- There is no time limit on how long a function can execute.
- At runtime it should be checked whether both server and client have consistent declarations of the RPC functions.
- Variable-length array support is important, but it is acceptable if a maximum length for each array needs to be specified during the server setup phase. The maximum length shall not be part of the function declaration and shall not be required on the client side; it must be a runtime decision.
A possible implementation can look as follows:
- Both the server and the client have a "central dispatcher" which takes care of the actual shared memory communication.
- Each dispatcher runs in two independent threads.
- cppext::future_queue should be used for the communication between the dispatcher and the application.
- On client side, one cppext::future_queue per function exists which transports all function parameters (as a struct). A second cppext::future_queue (also per function) transports the return values (also as a struct).
- On the server side, the same pair of cppext::future_queues exists.
- A function call will then consist of the following steps:
- The application pushes the function parameters to the future_queue corresponding to the function to be called.
- The client dispatcher waits in its first thread on any function to be called using wait_any. Thus the dispatcher thread will wake up due to the function parameters pushed to the queue.
- The client dispatcher will copy the data and write the function ID to shared memory. It will then release a semaphore.
- The client dispatcher will put its first thread again to sleep by waiting on the next incoming function call with wait_any.
- Due to the unlocked semaphore, the server dispatcher's first thread will wake up.
- The server dispatcher will read the function ID and parameters from shared memory and push them into the corresponding future_queue.
- The server application thread dealing with this particular function will wake up and execute the function.
- Once the server application thread is done, it will push the result into the return value queue for the function.
- The second thread of the server dispatcher is waiting for any return value queue using wait_any. Thus it will wake up now.
- The server dispatcher copies the return values and writes the function ID to shared memory, and unlocks a second semaphore. It will then wait_any for the next return values to arrive.
- The second thread of the client dispatcher wakes up due to the second semaphore. It will read the function ID and return values from shared memory and push them into the corresponding future_queue.
- Note: both dispatchers must make sure that the counterpart dispatcher has finished copying the data out of shared memory before they write the next data to it. This requires an additional semaphore, which the receiving dispatcher thread uses to notify the sending dispatcher thread once it is done copying the data.
- The cppext::future_queues should be exposed to the applications, so they can use continuations. The future_queues which are read by the RPC implementation (rather than written) should optionally be provided by the application, so the application can hand the RPC implementation a continuation of an existing future_queue.