Add write queue to localForage storage provider #121
Conversation
This might alleviate the writing problem (notice any difference?), but it still creates parallel writes.
I think the source of the problem, as long as we're using localForage, is the parallel writes.
`localForage` stores all our key/values under a single object store and creates a readwrite transaction for each `setItem` call: https://github.com/localForage/localForage/blob/711d5891dfc699705f086c2bec4c68d6c363c8ff/src/drivers/indexeddb.js#L648-L651
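Roughly, each `setItem` there boils down to something like this (a simplified paraphrase using the raw IndexedDB API, not the actual localForage source):

```js
// Simplified paraphrase: every setItem opens its own readwrite transaction
// on the single object store, so concurrent writes end up queued by the browser.
function setItem(dbInfo, key, value) {
    return new Promise((resolve, reject) => {
        const transaction = dbInfo.db.transaction(dbInfo.storeName, 'readwrite');
        const store = transaction.objectStore(dbInfo.storeName);
        const request = store.put(value, key);

        transaction.oncomplete = () => resolve(value);
        transaction.onabort = transaction.onerror = () =>
            reject(request.error || transaction.error);
    });
}
```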
When readonly transactions using the same object store overlap, it's not a problem. When readwrite transactions using the same object store overlap, they have to wait for each other: https://www.w3.org/TR/IndexedDB/#transaction-scheduling
> If multiple read/write transactions are attempting to access the same object store (i.e. if they have overlapping scopes), the transaction that was created first is the transaction which gets access to the object store first, and it is the only transaction which has access to the object store until the transaction is finished.
- If the browser is already queueing the transactions for us, having our own queue would probably not help much
- Since parallel writes are not possible, we can just queue writes to happen sequentially, one at a time (see the sketch below)
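For illustration, a minimal sketch of a sequential write queue, assuming we keep `localforage` as the backend (the `writeQueue`/`queuedSetItem` names are hypothetical, not this PR's actual code):

```js
import localforage from 'localforage';

// Hypothetical sketch: chain every write onto the previous one so only a single
// setItem (and therefore a single readwrite transaction) is in flight at a time.
let writeQueue = Promise.resolve();

function queuedSetItem(key, value) {
    writeQueue = writeQueue
        // Ignore a previous failure so one bad write doesn't stall the queue
        .catch(() => {})
        .then(() => localforage.setItem(key, value));
    return writeQueue;
}
```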
IMO if this PR is noticeably improving performance we can use it while we work on a new storage provider
Using `idb` we can take control of readwrite transactions and write multiple keys in a single transaction.
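As a rough sketch of that idea, assuming a single hypothetical `keyvaluepairs` store (the database/store names here are illustrative, not the eventual provider implementation):

```js
import {openDB} from 'idb';

// Hypothetical database/store names, for illustration only
const dbPromise = openDB('onyx-cache', 1, {
    upgrade(db) {
        db.createObjectStore('keyvaluepairs');
    },
});

// Persist a whole batch of [key, value] pairs inside one readwrite transaction
async function multiSet(pairs) {
    const db = await dbPromise;
    const tx = db.transaction('keyvaluepairs', 'readwrite');
    await Promise.all([
        ...pairs.map(([key, value]) => tx.store.put(value, key)),
        tx.done,
    ]);
}
```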
BTW we can try a mix of the two.
Ah that's a solid point then. So there's maybe no reason to "batch" anything since the writes can only happen 1 at a time anyway. The queueing must be mainly giving the JS a chance to breathe in between the blocking sequential writes then.
That sounds like a great plan.
👍 🏆 This will eventually be needed.
Changes are simplified now. This greatly improves the first-load performance for me on web/desktop, and everything seems to work normally otherwise.
cc @Julesssss @AndrewGable this could use another set of 👀 from some past contributors :D
Looks really good! I'd be interested in seeing the `idb` solution as well.
Submitted a proposal about it here: Expensify/App#7950 (comment)
Took me a while to catch up with localForage and Kidroca's points, but this looks good 👍
Also interested to see what an idb improvement looks like!
cc @kidroca curious if you see any issues with this approach
Details
IndexedDB writes are causing major page jank when refreshing a page or signing in for the first time.
Hoping to figure out a solution in this PR.
Just added a basic kind of queue that limits the total number of parallel writes to IDB so we aren't locking things up too badly (roughly the kind of thing sketched below).
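For reference, a hedged sketch of what that kind of cap could look like (the `MAX_PARALLEL_WRITES` constant and helper names are made up for illustration; the actual diff may differ):

```js
import localforage from 'localforage';

// Illustrative only: cap how many setItem calls are in flight at once
const MAX_PARALLEL_WRITES = 4;
let activeWrites = 0;
const pendingWrites = [];

function processQueue() {
    while (activeWrites < MAX_PARALLEL_WRITES && pendingWrites.length > 0) {
        const {key, value, resolve, reject} = pendingWrites.shift();
        activeWrites++;
        localforage.setItem(key, value)
            .then(resolve, reject)
            .finally(() => {
                activeWrites--;
                processQueue();
            });
    }
}

function setItem(key, value) {
    return new Promise((resolve, reject) => {
        pendingWrites.push({key, value, resolve, reject});
        processQueue();
    });
}
```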
Definitely got something wrong here, as there's an issue on first sign-in where no data seems to load until the page is refreshed. Still looking into why, but I think in theory if we just slow down writing to IndexedDB it should help performance. This has gone away and seems unrelated to the changes.
Related Issues
#119
Automated Tests
Modified existing test
Linked PRs
Expensify/App#8053