[HOLD for payment 2024-05-06] [Tracking] Metric - Search screen load time (Performance audit by Callstack) #34323
Comments
Metric: Search screen load time (ms)

Workflow

Markers

These 10 runs resulted in an average of …

Please see this commit for the open_search markers. We advise merging this marker placement into the main branch to start tracking this metric (open_search) over time with the Performance API.
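For illustration, here is a minimal sketch of how such an open_search measurement could be recorded with the standard Performance API. The actual marker names and helpers used in the linked commit may differ; every name below (startSearchOpenMeasurement, finishSearchOpenMeasurement) is an assumption, not the committed code.

```ts
// Hypothetical sketch: pair a start and an end mark around opening the Search
// screen, then read the duration back as an `open_search` measure.
function startSearchOpenMeasurement(): void {
    performance.mark('open_search_start'); // when the user taps the Search button
}

function finishSearchOpenMeasurement(): void {
    performance.mark('open_search_end'); // when the Search screen has rendered its list
    performance.measure('open_search', 'open_search_start', 'open_search_end');

    const [entry] = performance.getEntriesByName('open_search', 'measure');
    console.log(`Search screen load time: ${entry.duration.toFixed(1)} ms`);
}
```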
Metric: Search screen load time
Phase: Analysis
Commit hash: 320ff54

Analysis

The timeline indicates that …
The biggest problem of …

Possible solution

To render only 500 items, it still needs to build all the options. At the moment, the option list is created from scratch on every Search page opening. Since we already know that removing …

For testing purposes, I've created a simple PoC where I did the following things: …

With this approach, when testing on a release build on an Android emulator, I was able to reduce the …

The possible drawback of this solution is an increased TTI of the app. Since the heavy calculations are moved to app startup, it might extend the time needed for the app to start. For now the exact numbers are not known; however, we did a rough test on an Android device to compare the current state of the app with the state after moving options generation to app start: …

This indicates a ~10% increase in TTI for the Android app. However, since it's a rough calculation, and there are no improvements applied to …

We should also try to refactor some parts of …
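For illustration only, a minimal sketch of the caching idea behind the PoC described above: the option list is built once (here, shortly after app start) and served from a React context so the Search page no longer rebuilds it on every open. All names in it (OptionsListContext, OptionsListProvider, buildSearchOptions, useSearchOptions) are hypothetical and not the actual PoC code.

```tsx
import React, {createContext, useContext, useEffect, useState} from 'react';

type SearchOptions = {recentReports: unknown[]; personalDetails: unknown[]};

// Hypothetical stand-in for the expensive option-building step; in the real app
// this would read reports and personal details and run the option creation logic.
function buildSearchOptions(): SearchOptions {
    return {recentReports: [], personalDetails: []};
}

const OptionsListContext = createContext<SearchOptions | null>(null);

function OptionsListProvider({children}: {children: React.ReactNode}) {
    const [options, setOptions] = useState<SearchOptions | null>(null);

    useEffect(() => {
        // Build the full option list once, shortly after the app mounts,
        // instead of rebuilding it on every Search page open.
        setOptions(buildSearchOptions());
    }, []);

    return <OptionsListContext.Provider value={options}>{children}</OptionsListContext.Provider>;
}

// The Search page reads the cached options instead of generating them itself.
function useSearchOptions(): SearchOptions | null {
    return useContext(OptionsListContext);
}

export {OptionsListProvider, useSearchOptions};
```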
Both of these functions seem to work just fine when the amount of data is small, or in a single run, but the combined time when running them against 15k reports is significant. I believe there are accounts with even more data than our testing account. I'd suggest starting with refactoring the search and diving into …
If we analyze those results, they seem very promising. In this particular scenario (15k reports, web environment) there is a ~300ms improvement in …

```js
const searchReports = matchSorter(searchOptions.recentReports, searchValue, {keys: ['text', 'subtitle', 'alternateText', 'participantsList.0.displayName', 'participantsList.0.firstName', 'participantsList.0.lastName', 'participantsList.0.login']})
const searchPersonalDetails = matchSorter(searchOptions.personalDetails, searchValue, {keys: ['login', 'displayName']})
```

This solution should offload the … If the performance improvement is proven by numbers in this case, we can try to separate …
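To make the proposal concrete, here is a self-contained sketch that wraps the two matchSorter calls above in a single function. The matchSorter calls and their keys come from the snippet above; the option shape and the filterSearchOptions name are assumptions for illustration.

```ts
import {matchSorter} from 'match-sorter';

// Assumed minimal shape of an option; the real objects carry many more fields.
type Option = {
    text?: string;
    subtitle?: string;
    alternateText?: string;
    login?: string;
    displayName?: string;
    participantsList?: Array<{displayName?: string; firstName?: string; lastName?: string; login?: string}>;
};

type SearchOptions = {recentReports: Option[]; personalDetails: Option[]};

// Hypothetical wrapper: runs the two matchSorter calls from the proposal and
// returns both filtered lists so the Search page can render them directly.
function filterSearchOptions(searchOptions: SearchOptions, searchValue: string) {
    const searchReports = matchSorter(searchOptions.recentReports, searchValue, {
        keys: ['text', 'subtitle', 'alternateText', 'participantsList.0.displayName', 'participantsList.0.firstName', 'participantsList.0.lastName', 'participantsList.0.login'],
    });
    const searchPersonalDetails = matchSorter(searchOptions.personalDetails, searchValue, {keys: ['login', 'displayName']});
    return {searchReports, searchPersonalDetails};
}
```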
That's quite a while; at least it will be good to see any improvements 🚀
@adhorodyski @TMisiukiewicz can you please open a separate PR to start tracking these metrics?
@roryabraham here it is: #34489. I have updated the naming to match the app convention: … I've also changed the place where the … @adhorodyski, can we update those namings in both the Measurement and Analysis phases?
@roryabraham Can I have your opinion on the above analysis and proposed changes?
Hi @mountiny @roryabraham, would you guys be able to jump in here in the next couple of days to help Tomek with the review process? I think we can also push #34489 forward with the new markers setup.
Reviewed the PR
@mountiny do you feel like reviewing the analysis, or should we bring in other folks for this matter?
Sorry @adhorodyski, I missed this and thought it was already reviewed by Rory.

First of all, wow, what a great analysis ❤️

I think that report pagination will help us get the number of reports in the app down to more maintainable numbers, but it doesn't remove the inefficiencies you have found. The number of report actions, policies, and transactions can also influence this, as some of the pieces in the created options can still subscribe to those too. Also, your testing account does not have as many personal details as a real user might have, which I think could actually make things faster than they would be for a normal user.

I believe using context to cache the options is a great idea, but I do not like the idea of doing all the heavy lifting when opening the app. Could we keep the options creation for when the Search page is opened for the first time, leading to a slower opening the first time, but then the result is cached and we can load faster the next time?

In terms of the specific improvements in the createOption method, I would not block any of this. I think we have just been adding more and more stuff to this structure without thinking that much about its performance, so taking a fresh look at whether this can be done much better, now knowing where we are and what expectations we have from this page/component, would be handy. I think you can bring any of the specific smaller proposals to the newdot-performance room and we can take this piecemeal instead of one huge catch-all project/PR.

I think any bigger changes to how search is handled will come with report pagination, where we will not have all the reports available offline, and once we work on global search, but that is a different story.
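A minimal sketch of the lazy alternative suggested here, assuming a hypothetical getOrBuildSearchOptions helper: the options are built the first time the Search page needs them and cached for later opens, instead of paying the cost at app startup.

```ts
type SearchOptions = {recentReports: unknown[]; personalDetails: unknown[]};

// Hypothetical stand-in for the expensive option-building step.
function buildSearchOptions(): SearchOptions {
    return {recentReports: [], personalDetails: []};
}

// Module-level cache: the expensive build runs only the first time the Search
// page asks for options; every later open reuses the cached result.
let cachedOptions: SearchOptions | null = null;

function getOrBuildSearchOptions(): SearchOptions {
    if (cachedOptions === null) {
        cachedOptions = buildSearchOptions(); // slow, but only on the first open
    }
    return cachedOptions; // instant on subsequent opens
}

export {getOrBuildSearchOptions};
```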
The solution for this issue has been 🚀 deployed to production 🚀 in version 1.4.32-5 and is now subject to a 7-day regression period 📆.

Here is the list of pull requests that resolve this issue:

If no regressions arise, payment will be issued on 2024-02-05. 🎊

For reference, here are some details about the assignees on this issue:
Thanks, I think we can make this a weekly.
Review ongoing in #37909
This issue has not been updated in over 15 days. @TMisiukiewicz, @adhorodyski, @roryabraham eroding to Monthly issue. P.S. Is everyone reading this sure this is really a near-term priority? Be brave: if you disagree, go ahead and close it out. If someone disagrees, they'll reopen it, and if they don't: one less thing to do!
The solution for this issue has been 🚀 deployed to production 🚀 in version 1.4.67-7 and is now subject to a 7-day regression period 📆.

Here is the list of pull requests that resolve this issue:

If no regressions arise, payment will be issued on 2024-05-06. 🎊

For reference, here are some details about the assignees on this issue:
@muttmuure Am I eligible for compensation for PR review?
Contributor+: @parasharrajat due $500 via NewDot
As per this thread expensify.slack.com/archives/C02NK2DQWUX/p1715272276516799?thread_ts=1715190852.677769&cid=C02NK2DQWUX, the price should be $500 for this issue. @mallenexpensify Could you please update that on the summary?
Actually, it's not necessarily accurate that the 'creation' date is the source of truth. If a price was changed before you were assigned, it should apply from that date, right? In this case, it looks like @parasharrajat was assigned to review the PR on April 2nd, which is 2 days before we lowered the price to $250, so... yes, you're right @parasharrajat, payment is $500 for you; updating the above. :) I want to think more about how to best document this...
Payment requested as per #34323 (comment)
$500 approved for @parasharrajat
Tracking issue for the Search screen load time (ms) metric as part of the Performance audit conducted by Callstack.
Part of #33070