Currently, all data lookups from the bidding script to the KV state must be serialized/deserialized and go through the `getValues` hook.
For real-time bidding use cases the server must reply within milliseconds, so optimizations that might be negligible elsewhere actually matter here. Such optimizations include batching network calls, or eliminating network and IPC calls altogether.
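As a rough illustration of the batching idea, the sketch below resolves a whole set of keys with a single call instead of one call (and one round trip) per key. All names and signatures here are invented for illustration and are not the actual `getValues` API:

```cpp
#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical lookup client; names and signatures are illustrative only.
class KvClient {
 public:
  // One round trip per key: N keys => N network/IPC calls.
  std::string Get(const std::string& key) { return store_[key]; }

  // Batched variant: N keys => a single network/IPC call.
  std::unordered_map<std::string, std::string> BatchGet(
      const std::vector<std::string>& keys) {
    std::unordered_map<std::string, std::string> out;
    for (const auto& k : keys) out[k] = store_[k];  // one server-side pass
    return out;
  }

 private:
  // Stand-in for the remote KV state.
  std::unordered_map<std::string, std::string> store_{
      {"campaign_1", "cfg_a"}, {"campaign_2", "cfg_b"}};
};

int main() {
  KvClient client;
  // Collect all keys needed by the bidding logic up front, then resolve
  // them together so the network/IPC cost is paid once per batch.
  auto values = client.BatchGet({"campaign_1", "campaign_2"});
  for (const auto& [k, v] : values) std::cout << k << " -> " << v << "\n";
}
```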
As an example use case, consider efficient filtering techniques where an incoming query is compared against a list of configurations and we keep all configurations that apply. The same concern applies to ML inference, where in-process inference that directly uses a model stored in memory would be significantly faster than the proposed ML inference sidecar, which needs gRPC to operate.
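To make the filtering example concrete, here is a minimal sketch; the `Config` and `Query` fields and the matching predicate are pure assumptions for illustration:

```cpp
#include <algorithm>
#include <iostream>
#include <iterator>
#include <string>
#include <vector>

// Illustrative configuration record; the fields are assumptions.
struct Config {
  std::string name;
  std::string country;   // config applies only to this country
  int max_price_cents;   // config applies only at or below this price
};

struct Query {
  std::string country;
  int price_cents;
};

// Keep every configuration whose targeting predicate matches the query.
std::vector<Config> Filter(const std::vector<Config>& all, const Query& q) {
  std::vector<Config> kept;
  std::copy_if(all.begin(), all.end(), std::back_inserter(kept),
               [&](const Config& c) {
                 return c.country == q.country &&
                        q.price_cents <= c.max_price_cents;
               });
  return kept;
}

int main() {
  std::vector<Config> configs{{"a", "US", 100}, {"b", "FR", 50}, {"c", "US", 10}};
  for (const auto& c : Filter(configs, {"US", 40})) std::cout << c.name << "\n";
  // Prints "a": the only config matching country US at a price of 40 cents.
}
```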
As presented here, we measured the overhead of the `getValues` storage lookup hook at 20%. This metric should be taken with a grain of salt, as it was obtained on dummy data and awaits proper assessment with a commonly agreed benchmark. However, it shows that even there the overhead is visible, and that it may significantly increase infrastructure cost if several caches are used (we currently have over 30 in-memory caches in our production systems).
As a potential solution we explored the inlining techniques discussed here. They work to some extent, but will likely not scale to GB-sized JS files and production workloads.
A promising long-term solution might be shared-memory techniques where the state is accessed read-only in shared memory, e.g. via mmap with PROT_READ.
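A minimal sketch of that idea, assuming the KV state has been serialized to a snapshot file that each worker process maps read-only (the path and layout below are placeholders):

```cpp
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

#include <cstdio>

int main() {
  // Placeholder path: a snapshot of the KV state written by another process.
  const char* path = "/dev/shm/kv_snapshot";
  int fd = open(path, O_RDONLY);
  if (fd < 0) { perror("open"); return 1; }

  struct stat st;
  if (fstat(fd, &st) != 0) { perror("fstat"); return 1; }

  // Map the snapshot read-only. With MAP_SHARED, every worker process that
  // maps the same file reuses the same physical pages: no copy, no
  // serialization, no IPC per lookup.
  void* data = mmap(nullptr, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
  if (data == MAP_FAILED) { perror("mmap"); return 1; }
  close(fd);  // the mapping stays valid after the fd is closed

  // The bidding logic can now read the state in place (layout-dependent).
  std::printf("mapped %lld bytes read-only\n",
              static_cast<long long>(st.st_size));
  munmap(data, st.st_size);
  return 0;
}
```

Because the pages are mapped PROT_READ, any accidental write faults immediately, which keeps the shared state safe to expose to many concurrent readers.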
It should be noted that the long-running C#/Java process proposed here might naturally solve this problem.