perf: replace Arc<Mutex> with DashMap in ChannelDataIpcQueue for improved performance #14667
Conversation
Package Changes Through 9989d28
There are 4 changes which include tauri-utils with patch, tauri-build with patch, tauri-cli with patch, tauri with patch.
Planned Package Versions
The following package releases are planned based on the context of changes in this pull request.
Add another change file through the GitHub UI by following this link. Read about change files or the docs at github.com/jbolda/covector
Force-pushed from 92f1105 to f51fc36
Is there a more macroscopic example of how this benefits performance? I'm in favor, but for other reasons: it's already a transitive dependency, and it gets rid of unwraps for data structures that don't benefit from poisoning.
I am not sure how much this would benefit performance, to be honest. With the test code you provided, my results in a release build (size-optimized, not speed-optimized) are like this.
And the whole fetch process takes milliseconds, so I'm not sure it makes sense to bring in a new dependency just for this.
This is in the
In that case, I'm no longer in favor. On another note, removing the Arc here is almost certainly going to change behavior unless the derived
Yes, ChannelDataIpcQueue is never cloned anywhere.
In the clash-verge-rev project, the optimization reduced the average IPC channel insertion time from approximately 2.36µs to 1.15µs, a performance gain of over 50%.
Since dashmap is already in the lock file, bringing it in as a direct dependency of tauri does not require compiling any new units.
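For reference, a minimal sketch of the change under discussion, assuming the queue maps a channel id to a serialized payload; the types and method names are illustrative, not Tauri's actual definitions:

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

use dashmap::DashMap;

// Before: every insert/remove takes one global mutex over the whole map,
// and each access has to unwrap a potential poison error.
#[derive(Default, Clone)]
struct QueueMutex(Arc<Mutex<HashMap<u32, Vec<u8>>>>);

impl QueueMutex {
    fn insert(&self, id: u32, data: Vec<u8>) {
        self.0.lock().unwrap().insert(id, data);
    }
    fn take(&self, id: u32) -> Option<Vec<u8>> {
        self.0.lock().unwrap().remove(&id)
    }
}

// After: DashMap shards the map internally, so concurrent accesses on
// different keys rarely contend and there is no poisoning to unwrap.
// Note the reviewer's point: without the Arc, the derived Clone would
// deep-copy the map instead of sharing it, which only stays safe if the
// queue is never cloned.
#[derive(Default, Clone)]
struct QueueDash(DashMap<u32, Vec<u8>>);

impl QueueDash {
    fn insert(&self, id: u32, data: Vec<u8>) {
        self.0.insert(id, data);
    }
    fn take(&self, id: u32) -> Option<Vec<u8>> {
        self.0.remove(&id).map(|(_, v)| v)
    }
}

fn main() {
    let q = QueueDash::default();
    q.insert(1, b"payload".to_vec());
    assert_eq!(q.take(1), Some(b"payload".to_vec()));
}
```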
Could you share some benchmark results on the actual time used for a whole send operation for the channel? I think this insert call is most definitely not a bottleneck.
As I said, it is only in a dependency of
Compile Config:
Compile Command:
Baseline:
With Dashmap as dependency:
This PR focuses on reducing IPC communication latency when inserting data whose payload exceeds MAX_RAW_DIRECT_EXECUTE_THRESHOLD. I would like to share some actual runtime measurements with the same compile config. I measured from launch to the main dashboard, performed no extra user operations, then cancelled the run manually. The order may not be exactly the same since the calls are async.
Sorry to bother you; I'm used to developing user applications and may lack experience with the upstream maintainer's perspective.
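To make the targeted path concrete, here is a schematic sketch (not Tauri's actual code) of how a send might branch on the threshold; the constant value, function name, and payload types are assumptions for illustration only:

```rust
use dashmap::DashMap;

// Illustrative value; the real constant lives in Tauri's IPC code.
const MAX_RAW_DIRECT_EXECUTE_THRESHOLD: usize = 8 * 1024;

fn send_channel_message(queue: &DashMap<u32, Vec<u8>>, channel_id: u32, payload: Vec<u8>) {
    if payload.len() > MAX_RAW_DIRECT_EXECUTE_THRESHOLD {
        // Large payload: park it in the queue so the webview can fetch it
        // by channel id later. This insert is the call the PR measures.
        queue.insert(channel_id, payload);
    } else {
        // Small payload: would be serialized directly into the evaluated
        // script (the "direct execute" path).
        let _ = payload;
    }
}

fn main() {
    let queue = DashMap::new();
    send_channel_message(&queue, 1, vec![0u8; 16 * 1024]);
    assert!(queue.contains_key(&1));
}
```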
The data seems to suggest the two took very similar times, no? With
No worries, we all learned our way through 🙃
Introducing the dashmap dependency. Tested with the example by triggering the spawn button in the UI.
Some testing results:
Previous Arc<Mutex>:
Current DashMap:
Roughly a 53% improvement in time spent.
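A rough microbenchmark of the kind quoted above (a sketch, not the exact test code from this PR), comparing concurrent inserts into Mutex<HashMap> versus DashMap; thread and iteration counts are arbitrary, and absolute numbers vary with build profile and contention:

```rust
use std::collections::HashMap;
use std::sync::Mutex;
use std::time::Instant;

use dashmap::DashMap;

fn main() {
    const THREADS: u32 = 4;
    const PER_THREAD: u32 = 100_000;

    // Baseline: one mutex guarding the whole map, so every insert contends.
    let mutex_map = Mutex::new(HashMap::<u32, Vec<u8>>::new());
    let start = Instant::now();
    std::thread::scope(|s| {
        for t in 0..THREADS {
            let map = &mutex_map;
            s.spawn(move || {
                for i in 0..PER_THREAD {
                    map.lock().unwrap().insert(t * PER_THREAD + i, vec![0u8; 64]);
                }
            });
        }
    });
    println!("Mutex<HashMap>: {:?}", start.elapsed());

    // Candidate: DashMap shards the map internally, so threads writing
    // different keys rarely block each other.
    let dash_map = DashMap::<u32, Vec<u8>>::new();
    let start = Instant::now();
    std::thread::scope(|s| {
        for t in 0..THREADS {
            let map = &dash_map;
            s.spawn(move || {
                for i in 0..PER_THREAD {
                    map.insert(t * PER_THREAD + i, vec![0u8; 64]);
                }
            });
        }
    });
    println!("DashMap: {:?}", start.elapsed());
}
```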