Created: 4 years, 3 months ago by dyaroshev
Modified: 4 years, 1 month ago
Reviewers: erikchen, nduca, jochen (gone - plz use gerrit), mpearson, Nico, Paweł Hajdan Jr., Peter Kasting, brettw, rohitrao (ping after 24h)
CC: Alexei Svitkine (slow), chromium-reviews, nednguyen
Target Ref: refs/pending/heads/master
Project: chromium
Visibility: Public.
Description: Performance tests for HistoryQuickProvider.
This CL adds a performance test for HQP: typing and erasing a popular url
(10000 similar entries). Based on HQP unittests.
BUG=643668
Committed: https://crrev.com/7d7c15ee9c7481355f4b42315d386b0eccc9fd75
Cr-Commit-Position: refs/heads/master@{#429835}
Patch Set 1 #
Total comments: 1
Patch Set 2 : Browser perftest -> unit perftest #
Patch Set 3 : Keeping generated profile in a repo #
Patch Set 4 : Getting perf test simpler and smaller #
Patch Set 5 : Creating a separate main for perf tests #
Total comments: 2
Patch Set 6 : Rebasing on prepared components perftests. #
Total comments: 71
Patch Set 7 : Review, round 1. #
Patch Set 8 : Review, round 2. #
Total comments: 14
Patch Set 9 : Review, round 3. #
Total comments: 12
Patch Set 10 : Rebase + small clean up. #
Total comments: 1
Patch Set 11 : Review, round 4. #
Messages
Total messages: 115 (36 generated)
Description was changed from:
==========
Adding performance test, deleting popular url from omnibox.
Added:
1 - Performance test that deletes a popular url. Generates 10000 similar urls (the number is based on a real profile).
2 - Utility class: ScopedHistogramResultPrinter, which prints collected data from histograms. Used to measure the time of different places in code during a performance test run.
3 - A new build target: performance_interactive_ui_tests, for performance tests that have to run the whole browser.
BUG=643668
==========
to an identical description.
dyaroshev@yandex-team.ru changed reviewers: + phajdan.jr@chromium.org, pkasting@google.com
On 2016/09/02 15:16:10, dyaroshev wrote:
> dyaroshev@yandex-team.ru changed reviewers:
> + phajdan.jr@chromium.org, pkasting@google.com

This CL is not complete; I have a few questions.
1) Whom do I list as the owner of the added histograms?
2) What do I have to do to add a performance test to the regular performance runs?
3) Most of this patch is copied from our fork. I've cleaned it up as best I could, but I don't know much about gn and adding test targets. Any suggestions?
pkasting@chromium.org changed reviewers: + pkasting@chromium.org
Seems like running the whole browser is going to make the perf pretty noisy. Can this be done in more unittest style, by instantiating the omnibox system directly? There are a variety of autocomplete/omnibox unittests currently, so I would assume such a thing is possible.

It's not clear to me why deleting the URL is the right test, since deleting URLs is a very rare action. I assume the main thing you're trying to measure is the HQP matching speed. Would timing "input character" -> "results finished" not be able to measure what you want?

For histograms, mpearson is a good person.
On 2016/09/02 20:00:07, Peter Kasting wrote:
> Seems like running the whole browser is going to make the perf pretty noisy.

I do not entirely understand what you mean by noisy.

If you mean that many activities affect the measurements: we use histograms to measure practically any piece of code we want. The only condition is that the histograms have good enough precision.

If you mean that results might vary much from run to run, here are the results of consecutive runs on my pretty fast machine (results are printed in the form {average, standard deviation}): https://docs.google.com/spreadsheets/d/1sZ5mIiJ8YoqGF019a3q6vn-vUESJ60-Iu0Dx_...

As you can see, variations from run to run are in the 1ms range, which I think is acceptable. We can achieve a smaller deviation if we separate the cases: deleting at the beginning of a url and deleting at the end. For the purpose of demonstrating the optimisation with flat containers this is unnecessary, but it can be done. Plus, with flat maps/sets in our fork the deviation numbers went down quite dramatically.

> Can this be done in more unittest style, by instantiating the omnibox system
> directly? There are a variety of autocomplete/omnibox unittests currently, so I
> would assume such a thing is possible.

Doing it through a whole browser gives a few advantages:
- I'm interested in the HistoryIDSetFromWords hot spot. It's not always a hot spot; it is one only under quite tight but realistic conditions. By using a browser test it's easier to prove that those conditions are realistic.
- Profiling and measuring the whole browser reveals all performance problems, not just the ones we know of.
- By using histograms we can measure many smaller pieces of code in a browser test, but we cannot go from a unit performance test to the browser.

We use similar tests from time to time; they have turned out to be quite useful. Adding new measurements to existing scenarios is really fast. Writing tests for new scenarios is reasonably fast. With a good test profile they would be even better.

> It's not clear to me why deleting the URL is the right test, since deleting URLs
> is a very rare action. I assume the main thing you're trying to measure is the
> HQP matching speed. Would timing "input character" -> "results finished" not be
> able to measure what you want?

I've chosen deleting a popular url for a few reasons:
- Deleting pieces of popular urls is a reasonable use case: many urls (like those from bug trackers) can be modified by hand. Plus, many people are not extremely good with computers, so they delete long urls by hand.
- I'm adding this test along with the flat sets/maps optimisation. This is an optimisation primarily for users with a big profile, so I have to generate a profile that affects measurements in a similar way to a real one. For deleting a long url I can just add many urls with different endings. For inputting text I would have to do something considerably more complicated.

To sum up: deleting a url is not the most important case, but it's a good first test and it shows the usefulness of flat sets/maps.
On 2016/09/05 14:33:00, dyaroshev wrote:
> I do not entirely understand what you mean by noisy.

I mean that existing performance tests, e.g. for renderer benchmarks, are very noisy precisely because they're doing a lot of things. A test that runs only the piece of code being measured is preferable.

And existing browser tests are flaky and unreliable because stuff happens, like losing focus in the middle of the test because another app on the machine popped up a dialog. So we have spent years moving away from full-browser testing, because the tests just end up getting disabled because they don't stay green, and then we might as well not have them.

> If you mean that results might vary much from run to run, here are the results
> of consecutive runs on my pretty fast machine.

Assume the test bots are slow and have heavy and wildly varying load.

> Doing it through a whole browser gives a few advantages:
> - I'm interested in the HistoryIDSetFromWords hot spot. It's not always a hot
> spot; it is one only under quite tight but realistic conditions. By using a
> browser test it's easier to prove that those conditions are realistic.

You're interested in measuring omnibox code. As long as you are measuring the full chain of that, the rest of the browser is noise. Yes, testing calls directly to HistoryIDSetFromWords would likely be too low-level, but testing the runtime of the code that handles particular keypresses is plenty. And my point is that that test can likely be done without the whole browser running.

> - Profiling and measuring the whole browser reveals all performance problems,
> not just the ones we know of.

That's precisely what I want to avoid. I don't want, say, a performance problem with reading the cookie store off disk to affect the results of this test. That kind of thing just means the test no longer measures something precise and actionable.

> - By using histograms we can measure many smaller pieces of code in a browser
> test, but we cannot go from a unit performance test to the browser.

The goal here was to demonstrate that a particular data structure would provide a meaningful real-world perf win. Demonstrating that, say, handling a particular keypress drops from 50 to 10 ms would be completely sufficient for that.

> We use similar tests from time to time; they have turned out to be quite
> useful. Adding new measurements to existing scenarios is really fast. Writing
> tests for new scenarios is reasonably fast. With a good test profile they
> would be even better.

I am on board with all of that, but again, if it's all doable in a more unittest-scale framework, then better. Browser tests seemed similarly appealing to us in the early days, until they turned out to be insufficiently reliable at scale.

> I've chosen deleting a popular url for a few reasons:
> - Deleting pieces of popular urls is a reasonable use case: many urls (like
> those from bug trackers) can be modified by hand. Plus, many people are not
> extremely good with computers, so they delete long urls by hand.

Oh, you're deleting the characters from an input string? OK, I interpreted "deleting a URL" as what is implemented by calling DeleteMatch(). "Backspacing during input" is a bit different, and a better choice of use cases.

> - I'm adding this test along with the flat sets/maps optimisation. This is an
> optimisation primarily for users with a big profile, so I have to generate a
> profile that affects measurements in a similar way to a real one. For deleting
> a long url I can just add many urls with different endings. For inputting text
> I would have to do something considerably more complicated.

Why do many URLs with different endings not show up in the inputting test case? For example, if I have 1000 URLs starting with "a", then typing an initial "a" should match them all. Showing a significant perf win on the first input character when that character matches a lot of stuff would be really important, as that's the single most common use case (it happens during almost every input interaction) and also the one where we have to process the most matches and are thus the most laggy.
On 2016/09/05 15:29:49, Peter Kasting wrote:
> Assume the test bots are slow and have heavy and wildly varying load.

Oh, I didn't realise you run perf tests on regular test bots as part of acceptance tests. I have to read more about perf testing in chromium.

> That's precisely what I want to avoid. I don't want, say, a performance problem
> with reading the cookie store off disk to affect the results of this test. That
> kind of thing just means the test no longer measures something precise and
> actionable.

I don't exactly agree, because I use histogram results and they measure very precise, small pieces of code. For example, a regression in HQP or in HistoryIDSetFromWords in this test is very actionable. However, I agree with the argument that if something else is happening while running a perf test, unit tests are preferable.

> I am on board with all of that, but again, if it's all doable in a more
> unittest-scale framework, then better. Browser tests seemed similarly appealing
> to us in the early days, until they turned out to be insufficiently reliable at
> scale.

Is running AutocompleteController in isolation small enough to be a good perf test?

> Why do many URLs with different endings not show up in the inputting test
> case? For example, if I have 1000 URLs starting with "a", then typing an
> initial "a" should match them all.

I would assume that urls starting with "b" that also have "a" somewhere in the middle are important too. However, if it's good for you, it's good for me.

> Showing a significant perf win on the first input character when that character
> matches a lot of stuff would be really important, as that's the single most
> common use case (it happens during almost every input interaction) and also the
> one where we have to process the most matches and are thus the most laggy.

I'll have to check; I'm not sure that in this case HistoryIDSetFromWords is the problem.
> Is running AutocompleteController in isolation small enough to be a good perf
> test?

Oh, it seems easier to run just HQP itself. I will do that then.
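[For reference, a minimal sketch of what "running just HQP itself" might look like. HQPPerfTest, client(), and BuildInput() are hypothetical stand-ins for the fixture pieces the existing HQP unittests provide; BuildInput() would fill AutocompleteInput's many constructor arguments with test defaults.]

    // Drive HistoryQuickProvider directly, with no browser running.
    TEST_F(HQPPerfTest, MatchPopularUrl) {
      scoped_refptr<HistoryQuickProvider> provider =
          new HistoryQuickProvider(client());
      AutocompleteInput input = BuildInput(base::ASCIIToUTF16("popularsite"));
      // HQP matching runs synchronously, so Start() returns once matching
      // is complete - exactly the span we want to time.
      provider->Start(input, /*minimal_changes=*/false);
      EXPECT_FALSE(provider->matches().empty());
    }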
https://codereview.chromium.org/2300323003/diff/1/chrome/test/base/ui_test_ut...
File chrome/test/base/ui_test_utils.cc (right):

https://codereview.chromium.org/2300323003/diff/1/chrome/test/base/ui_test_ut...
chrome/test/base/ui_test_utils.cc:323: content::WindowedNotificationObserver ready_observer(
Shouldn't the observer be instantiated before any actions which may trigger the notification?
On 2016/09/06 14:22:11, Paweł Hajdan Jr. wrote:
> chrome/test/base/ui_test_utils.cc:323: content::WindowedNotificationObserver ready_observer(
> Shouldn't the observer be instantiated before any actions which may trigger the
> notification?

I just factored that code out, so I might be wrong, but: AutocompleteController and the caller are both on the UI thread, so the AC cannot become ready between the check and the wait.
On 2016/09/06 14:22:11, Paweł Hajdan Jr. wrote:
> Shouldn't the observer be instantiated before any actions which may trigger the
> notification?

Anyway, I've discarded this change.
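[For reference, the ordering Paweł describes looks roughly like this; the notification type used here is only a plausible example, and controller/input stand in for whatever the caller has:]

    // Construct the observer *before* the action that can emit the
    // notification, so a synchronously-fired notification is not missed.
    content::WindowedNotificationObserver ready_observer(
        chrome::NOTIFICATION_AUTOCOMPLETE_CONTROLLER_RESULT_READY,
        content::NotificationService::AllSources());
    controller->Start(input);  // May notify synchronously or later.
    ready_observer.Wait();     // Returns immediately if already notified.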
dyaroshev@yandex-team.ru changed reviewers: + jochen@chromium.org, mpearson@google.com
On 2016/09/05 15:29:49, Peter Kasting wrote:

I've changed the test to a unit one. Even though I'm not done refactoring yet, it would be nice to know if the general direction is acceptable to you.

Btw: I'm adding a histogram in test code only - do I have to add it to histograms.xml?
In principle, this seems like a better design. Can we avoid building up the profile inside the performance test itself? Maybe use profile data checked into the tree and fake the current system time/date? (Pretty sure we have ways of doing the latter for history-related tests anyway.) Pawel and Mark might have more comments on histograms, adding them to .xml files, pretty-printing them, etc.
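[On the "fake the current system time/date" point, a minimal sketch of the usual base/ tool. Whether the history backend exposes a clock injection point at this layer is an assumption here; the commented hook is hypothetical.]

    #include "base/test/simple_test_clock.h"

    // Pin "now" so that checked-in history data keeps the same relative
    // recency across runs.
    base::SimpleTestClock clock;
    base::Time fixed_now;
    ASSERT_TRUE(base::Time::FromString("15 Nov 2016 12:00:00 GMT", &fixed_now));
    clock.SetNow(fixed_now);
    // history_service->set_clock_for_testing(&clock);  // hypothetical hook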
On 2016/09/06 19:44:16, Peter Kasting wrote: > Can we avoid building up the profile inside the performance test itself? Maybe > use profile data checked into the tree and fake the current system time/date? > (Pretty sure we have ways of doing the latter for history-related tests anyway.) It seems to me that telemetry already has generated profiles, but I can't find them in the repository. How do I get my hands on one of these?
On 2016/09/06 20:40:30, dyaroshev wrote:
> On 2016/09/06 19:44:16, Peter Kasting wrote:
> > Can we avoid building up the profile inside the performance test itself?
> > Maybe use profile data checked into the tree and fake the current system
> > time/date? (Pretty sure we have ways of doing the latter for history-related
> > tests anyway.)
>
> It seems to me that telemetry already has generated profiles, but I can't find
> them in the repository. How do I get my hands on one of these?

Dunno, never worked with anything telemetry.
On 2016/09/06 20:44:07, Peter Kasting wrote:
> Dunno, never worked with anything telemetry.

Ok, once again there is some serious cleaning up further ahead, but I'm checking if it's alright so far. I'm using a normally-disabled test to generate a "profile". The generated profile is kept with the other test data. Does this work for you?
On 2016/09/07 18:43:10, dyaroshev wrote:
> Ok, once again there is some serious cleaning up further ahead, but I'm
> checking if it's alright so far. I'm using a normally-disabled test to
> generate a "profile". The generated profile is kept with the other test data.
> Does this work for you?

I don't know. You may want to hold off a bit here until we can get more speed infra folks to comment on the right way to do all this?

Ultimately, if we agree on an approach, you'll want to split this into distinct reviews -- stuff to build fake profiles, stuff to print out histograms, and an actual performance test are all separate bits, and the combined review is very large.
Description was changed from:
==========
Adding performance test, deleting popular url from omnibox.
Added:
1 - Performance test that deletes a popular url. Generates 10000 similar urls (the number is based on a real profile).
2 - Utility class: ScopedHistogramResultPrinter, which prints collected data from histograms. Used to measure the time of different places in code during a performance test run.
3 - A new build target: performance_interactive_ui_tests, for performance tests that have to run the whole browser.
BUG=643668
==========
to:
==========
Adding performance tests for HQP that show the importance of optimising the HistoryItemsForTerms method.
Added:
1 - Performance tests for HQP: typing and deleting a popular url. Use a generated profile and histogram-based timings.
2 - Generation of a simple profile with a big history, containing 10000 variations of the same url.
3 - Utility class to make performance tests based on histograms.
BUG=643668
==========
On 2016/09/07 19:46:33, Peter Kasting wrote:
> I don't know. You may want to hold off a bit here until we can get more speed
> infra folks to comment on the right way to do all this?

Ok, do I have to do something to get their attention?
On 2016/09/07 21:23:41, dyaroshev wrote:
> Ok, do I have to do something to get their attention?

No, they are aware of this code review.
Description was changed from:
==========
Adding performance tests for HQP that show the importance of optimising the HistoryItemsForTerms method.
Added:
1 - Performance tests for HQP: typing and deleting a popular url. Use a generated profile and histogram-based timings.
2 - Generation of a simple profile with a big history, containing 10000 variations of the same url.
3 - Utility class to make performance tests based on histograms.
BUG=643668
==========
to:
==========
Added:
1 - Performance tests for HQP: typing and deleting a popular url. Use a generated profile and histogram-based timings.
2 - Generation of a simple profile with a big history, containing 10000 variations of the same url.
3 - Utility class to make performance tests based on histograms.
BUG=643668
==========
thakis@chromium.org changed reviewers: + erikchen@chromium.org, thakis@chromium.org
I think erikchen wrote a profile generator a while ago, maybe that could be used here? (I'm fairly certain that someone wrote a profile generator, I'm not certain it was Erik though)
On 2016/09/12 14:16:23, Nico wrote:
> I think erikchen wrote a profile generator a while ago, maybe that could be
> used here? (I'm fairly certain that someone wrote a profile generator, I'm not
> certain it was Erik though)

There is one in telemetry: https://cs.chromium.org/chromium/src/tools/perf/profile_creators/profile_gene...
I just couldn't get my hands on the generated one. Can you refer me to an example? Preferably one where somebody uses it in a binary perf test.
On 2016/09/12 14:30:19, dyaroshev wrote:
> There is one in telemetry:
> https://cs.chromium.org/chromium/src/tools/perf/profile_creators/profile_gene...
> I just couldn't get my hands on the generated one. Can you refer me to an
> example? Preferably one where somebody uses it in a binary perf test.

If your goal is to create a Chrome profile with populated history, take a look at: https://chromium.googlesource.com/chromium/src/+/dd937b22ac15f304729161969e43....
[That CL removed the history profile extender, since no one is using it]. Note that I haven't looked at this CL in depth, so I don't know if it's the right solution to whatever you're trying to solve.
> If your goal is to create a Chrome profile with populated history, take a look
> at:
> https://chromium.googlesource.com/chromium/src/+/dd937b22ac15f304729161969e43....
> [That CL removed the history profile extender, since no one is using it].

Where does the generated profile live in the repository? Or when and where is it generated?
On 2016/09/12 16:52:32, dyaroshev wrote:
> Where does the generated profile live in the repository? Or when and where is
> it generated?

profile_creator is a python script that generates a Chrome profile. We use cloud storage to store the result, which is automatically pulled down for telemetry tests. That being said, if you're not using telemetry, it may not be the right tool to solve your problem. Note that any profile you pregenerate and store will need to be migrated by Chrome to be usable.
On 2016/09/12 16:55:48, erikchen wrote:
> profile_creator is a python script that generates a Chrome profile. We use
> cloud storage to store the result, which is automatically pulled down for
> telemetry tests. That being said, if you're not using telemetry, it may not be
> the right tool to solve your problem. Note that any profile you pregenerate
> and store will need to be migrated by Chrome to be usable.

Thanks. As far as I understood you, because I'm not using telemetry, I can't use a pregenerated profile from it.

@pkasting: This is a reasonably simple perf test. Generating a profile with 10000 history entries is quite fast, and it doesn't have all the problems of a pregenerated profile: storing, migrating, etc. HistoryQuickProviderTest already does some generation similar to what I'm doing. Do we really want to generate and load a profile? Generating it every time seems like a reasonable option, especially because it's this simple.
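[A sketch of what generating the history in the test setup could look like. HQPPerfTest and AddFakeURL() are hypothetical stand-ins for however the fixture inserts rows into the in-memory index.]

    // Generate ~10000 variations of one popular url, mirroring the shape
    // of the real profile the numbers were checked against, instead of
    // loading a pregenerated profile from disk.
    void HQPPerfTest::SetUp() {
      for (int i = 0; i < 10000; ++i) {
        AddFakeURL(
            base::StringPrintf("http://popular.site/issues/detail?id=%d", i),
            /*visit_count=*/1 + i % 50,
            /*typed_count=*/i % 10);
      }
    }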
On 2016/09/07 19:46:33, Peter Kasting wrote:
> Ultimately, if we agree on an approach, you'll want to split this into distinct
> reviews -- stuff to build fake profiles, stuff to print out histograms, and an
> actual performance test are all separate bits, and the combined review is very
> large.

I think for the particular purpose of this perf test we can do without printing histograms. If we generate the profile in the test setup and just measure the whole run time of HQP, would it be small enough for one review? That is, if you are ok with generating the data in the setup.
On 2016/09/12 17:49:32, dyaroshev wrote:
> I think for the particular purpose of this perf test we can do without printing
> histograms. If we generate the profile in the test setup and just measure the
> whole run time of HQP, would it be small enough for one review?

I'm going to repeat what I said earlier about punting to the speed infra folks -- they'll hopefully have a better handle on what we want to do in all these regards.
On 2016/09/13 05:56:58, Peter Kasting wrote: > I'm going to repeat what I said earlier about punting to the speed infra folks > -- they'll hopefully have a better handle on what we want to do in all these > regards. @awesome_speed_infra_folks, please come! This perf test will help us do awesome things!
On 2016/09/13 09:34:25, dyaroshev wrote:
> @awesome_speed_infra_folks, please come! This perf test will help us do awesome
> things!

Does the performance of suggestions depend on network conditions? If so, how are you doing network shaping?
On 2016/09/14 17:10:42, nednguyen wrote:
> Does the performance of suggestions depend on network conditions? If so, how
> are you doing network shaping?

All network processing is done asynchronously, and we worry about the synchronous part of suggest on the UI thread here. There are two things that absolutely dominate suggest on the UI thread: HistoryQuickProvider (the biggest contributor) and HistoryURLProvider. Here we are dealing with HistoryQuickProvider.

To sum up: we care about the UI thread most; the internet connection doesn't affect it in terms of performance.
On Mon, Sep 12, 2016 at 1:09 PM, <dyaroshev@yandex-team.ru> wrote:
> Thanks. As far as I understood you, because I'm not using telemetry, I can't
> use a pregenerated profile from it.

Is there a reason for not using telemetry? I'm not an expert here either, but my understanding is that if you can build this as a telemetry perf test, then it can run on the Chromium perf waterfall and automatically get monitoring for regressions, etc.

I think framing this in a way that measures the same thing we do in the wild (i.e. Omnibox.CharTypedToRepaintLatency that you mentioned in your chromium-dev@ email) would be ideal - since then we'd have parity in the lab with what's measured in the wild.

On Wed, Sep 14, 2016 at 1:16 PM, <dyaroshev@yandex-team.ru> wrote:
> To sum up: we care about the UI thread most; the internet connection doesn't
> affect it in terms of performance.
On 2016/09/14 17:25:09, Alexei Svitkine (very slow) wrote:
> Is there a reason for not using telemetry?

When we first tried to write perf tests for the omnibox, telemetry didn't have an API to work with the omnibox, and it is not easy to add one. The other thing - discussed earlier in these comments - is that we want to move away from whole-browser testing, because it's too noisy. Doing it from C++ is easier.

> then it can run on the Chromium perf waterfall and automatically get
> monitoring for regressions, etc.

I hope that we can do it with a binary test. There are already components_perftests that are used somehow.

> I think framing this in a way that measures the same thing we do in the wild
> (i.e. Omnibox.CharTypedToRepaintLatency that you mentioned in your
> chromium-dev@ email) would be ideal - since then we'd have parity in the lab
> with what's measured in the wild.

This is what scoped_histogram_printer is for. It records all the necessary places in a similar way to the real browser. And I've tried to generate data that gives similar results to the real thing (I used a real profile to check). We can't get Omnibox.CharTypedToRepaintLatency because it requires the whole browser running, but I've added a histogram for a piece of code interesting to me - HistoryItemsForTerms. If there is a place you would like to be measured in HQP, it can be added.
What's the purpose of this CL?
- There is a component, HistoryQuickProvider (HQP), which is the slowest part of synchronous Autocomplete. For a certain user profile it can be really slow. There are some ways it can be considerably optimised, but to do the optimisations we need a benchmark. The best way to do a benchmark is to have a perf test, so we can monitor and prevent regressions. This is a CL with such a perf test.

What is required, and how is it done?

- Select a way to measure.
In this CL a utility class, ScopedHistogramPrinter, is used (a rough sketch follows below). This utility class can be used to print the average result and standard deviation of any histogram(s). An alternative approach would be to measure from the perf test itself, without histograms. This would give us only results for the whole of HQP and not any piece of it, but it is simpler. For the purpose of this test the second way is sufficient.

- Generate history.
The history has to be big but produce realistic results. I've checked against a big profile: there were ~8000 similar urls and many, many others. A generated set of 10000 similar urls produces similar results. The major questions are: 1) do we want to generate it in the test setup, or 2) do we want to store/load it from somewhere? Generating 10000 urls of history is not terribly slow - I would pick the first option. If we take the second one (as in the last version of the CL) we have to decide: a) how to generate it, b) where to store it, and c) how to get it to the test (if we have to download it from somewhere). My solution is generating from a disabled C++ test (very easy to do and to understand) and storing it in the repository (it's not very big); then c) is not a problem. The current generated profiles in telemetry looked not particularly useful to me - we would have to restore the generate-history part - and they are hard to use if your test is not in telemetry. Doing the test from telemetry is really hard - see the previous comment.

- Write the perf test.
Well, this is the easy part, but it really depends on how we do the others. The perf test has to be affected as little as possible by anything but HQP (see the discussion with pkasting earlier), so we don't want to run the whole browser for it.

- Enable the perf test in regular builds.
I just do not know how to do this.

I propose that we toss out the histogram printer and generate the history in the test setup. The created test would serve its purpose of measuring HQP and would help to optimise it.
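[For concreteness, a rough sketch of the histogram-printer idea: snapshot the histogram at scope entry and report the delta at scope exit. This is not the CL's actual code; only the mean is derived here, while the real utility also reported the spread.]

    #include <memory>
    #include <string>

    #include "base/logging.h"
    #include "base/metrics/histogram_base.h"
    #include "base/metrics/histogram_samples.h"
    #include "base/metrics/statistics_recorder.h"

    class ScopedHistogramPrinter {
     public:
      explicit ScopedHistogramPrinter(const std::string& name) : name_(name) {
        if (base::HistogramBase* h =
                base::StatisticsRecorder::FindHistogram(name_))
          baseline_ = h->SnapshotSamples();
      }
      ~ScopedHistogramPrinter() {
        base::HistogramBase* h =
            base::StatisticsRecorder::FindHistogram(name_);
        if (!h)
          return;
        std::unique_ptr<base::HistogramSamples> delta = h->SnapshotSamples();
        if (baseline_)
          delta->Subtract(*baseline_);  // Keep only samples from this scope.
        if (delta->TotalCount() > 0) {
          LOG(INFO) << name_ << ": mean = "
                    << delta->sum() / delta->TotalCount() << " over "
                    << delta->TotalCount() << " samples";
        }
      }

     private:
      const std::string name_;
      std::unique_ptr<base::HistogramSamples> baseline_;
    };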
nduca@chromium.org changed reviewers: + nduca@chromium.org
Doing a bit of a drive-by because I was asked to offer a broad speed perspective on the problem. I've skimmed the patch a bit, and talked a bit to Peter about the background of this patch. I don't have it all in my head, so feel free to correct me if I'm off in whacko land. But broadly, I think this looks like it's on the right track. A giant essay to prove it follows. :)

First, whether to use telemetry, a perf unittest, a browser unittest, etc. I think it's right to use perf unittests. I generally think that you should benchmark at the absolute finest level possible to achieve your aim. Here, I understand that we are considering a change in a core data structure from hash to flat map, and so we'll want to see the speed of some key (maybe large but mostly synchronous) part of the omnibox. For me, that's about reading a timestamp, calling a function, and then stopping the timestamp. Maybe the code is written in a way that you have to spin a message loop, which makes it tougher, but it's still very much about starting a thing, calling a function, stopping a timer, maybe repeating a few times, then stopping and printing. That doesn't seem to me to merit starting a browser.

Sometimes, perf unittests vs. "start a browser" perf tests get confused on the basis of needing test data to set up the test case being measured. I think, though, that this is a bit of a trap: the reason you pick between a perf unittest and a browser-level perf test is entirely about whether your goal can be measured at a unit level. Measuring a function, or a chain of simple message-loop tasks, is very much unit-perftestable. But, say, measuring the time from pressing enter in an omnibox to seeing the new-tab page's UI rendered for the first time really needs a large portion of the browser running. So here, even though we clearly need some plausible test data, if we choose a unit test as the strategy, then it's our burden (however unpleasant) to figure out how to create that data. I remember in cc, we ended up having to make changes to our classes and a bunch of helper functions so that it was easier to type out sufficiently interesting test scenarios. It was hard work, but the perf tests have WAY lower noise, and that pays off in the end. So, a unittest this should be, at least given my read of the situation.

And while we're on test scenarios and test data: don't load profiles from disk that were saved as part of telemetry or the like. Down that road is agony: they always go stale, and then nobody remembers how to freshen the data. Better to do the heavy lifting to code up your test data. That will force you down the usual unit-test rules: use minimal but sufficient data for your test, so you're not overtesting.

There was some discussion, I gather, about whether to use a code generator, land it, and run it on every go, or just hand-code the test setup. I think this should be handled just like a unit-testing question in the same space: which will be more maintainable in the long term? Someone someday will make a breaking change to a function signature or semantics that the unit tests depend on. At that point they have to hand-edit either the code generator or a zillion lines of code. Sometimes a generator is the right call, sometimes hand-written code is better, and sometimes helper functions that reduce the typing involved in common scenarios make the most sense. Pick what's right for your local code health.

Definitely keep using perf_test/'s printing APIs to output the results. This ensures that the perf unittest output can be parsed by chromeperf.appspot.com, for instance, which you could hook up with a patch to recipes in a followup. cc/ has a perf unittest harness that you might use for guidance if needed, and if you grep recipes for the cc perf unittests target, you'll see how it's hooked up to the perf monitoring infrastructure.

But, speaking of printing, definitely think hard about whether the UMA printer is the right design for you. There is perf_test::PrintResult, which is way simpler than the UMA printer. As I understand it, the UMA printer is meant for code that for other reasons has UMA probes in it, for things where UMA makes clear sense. In that case, you might also want perf unittest coverage, I suppose, and that's what the UMA printer does. But in your case, I kind of think you just want to call a function in a tight loop. For that case, you could just hand-code reading a timer, doing your loop, and then outputting the elapsed time or times, depending.

It's really important to decide whether histograms are right for you, too, even if you don't use UMA histograms. The general guidance I have here, by no means the final word but just my perspective, is that histograms work best when you have many iterations *and* you expect non-normal noise between runs, such that averaging would destroy important statistical information relevant for determining p-values or other measures of significance between runs. For instance, suppose your performance test will definitely produce bimodal data: for that, you really need a histogram, because the average of a bimodal distribution is absolutely not the right signal to use to detect changes. Here, my mental model is that you're timing a single tight function, and you're not doing IO or talking to some disk cache, or interacting with GPU vsync, so you've probably got noise, but it's normally distributed or at least log-normally distributed. That means that timestamping the elapsed time and then averaging will be sane with regard to the underlying signal. The point is to really get down into the depths here about your signal, reason about it, and then you can usually justify a choice. It might be that the geomean makes more sense, but I kind of doubt it. For most perf tests, scheduling noise is normally distributed, and it's only in the wild and in very long-running, heavily contended CPU situations that the geomean helps.

Btw, which timer to use is worth thinking through. There are two! There's TimeTicks from HighResNow, which is monotonic wall time, but there's the way, way, way cooler base::ThreadTicks. The cool thing is that ThreadTicks doesn't include time where the CPU was stolen by another process, which is one of the main sources of non-normal noise in benchmark results. Thus, it is typically a much more reliable measure of "how much CPU work did it take to do X," and is used in a lot of our best perf unittests. That is, use it if you can possibly do so, because you'll get a better performance result. \o/

Finally, there has been some discussion on and off this CL about whether this benchmark can be tweaked or leaned on by system health folks and other general speed folks, who would like to see better performance test coverage (unit or otherwise) of browser interaction speed/responsiveness. "We have a variety of interesting UMAs here, e.g. the type-to-first-render UMA, but we don't have anyone really well-owning this code right now, and maybe," the discussion goes, "we could use this patch journey to improve our coverage here." Having given this some thought, my current thinking is that the topic at hand in this CL is a totally separate topic from browser UI responsiveness, which is what the typing-to-render UMAs and other jank ideas are about. System health and speed metrics definitely need better coverage for browser UI responsiveness, but my general feeling is that the right solution for those is something else. I'll sign up to write a doc about what I think the right speed solution is for browser UI system health coverage, and I will commit to slowly sharing it around as it takes shape. The key thing to know there is that I can't move that topic extremely aggressively, so it may take some time. :)

Anyway, that's what's on my mind. In general, this patch looks good, and if folks want an LG from me at a very high level I'd be happy to give one. It's key to get the details of the perf unittest right, though, and to hook this up into the build rules so that it follows the normative practices of other x_perftest binaries in other parts of Chrome. So someone should definitely dig into that and really make sure that things are awesome there. I'll be around if there are general questions or disagreements about which perf strategy seems to make sense here, though!
Fascinating reading! I'm sorry, it's almost 4am here; I'd have to reread when my eyes are not closing) First - thanks a lot for coming here, I really want this patch accepted in some form. I've got a few questions, to check that I understood everything correctly so I can do stuff tomorrow)

On 2016/09/15 23:31:55, nduca wrote:
> ... a change in a core data structure from hash to flat map ...

This is absolutely irrelevant, but the discussion was about std::sets/maps, which are based on red-black trees and not hashing.

> ... For me, that's about reading a timestamp, calling a function, and then
> stopping the timestamp. ... That doesn't seem to me to merit starting a
> browser.

Ok, I can definitely do that.

> ... So, a unittest this should be, at least given my read of the situation.

Unit tests it is) The results vary much depending on the length of the input url. On my machine the average result is 25ms, but the standard deviation is +-11ms (depending on the input). The test consistently gives me 25+-11ms. Do we consider this unpleasant noise? Maybe split the measurements up a bit?

> And while we're on test scenarios and test data: don't load profiles from disk
> that were saved as part of telemetry or the like. ...

Great! So generating data in the setup is ok? It takes a few seconds.

> There was some discussion, I gather, about whether to use a code generator,
> land it, and run it on every go, or just hand-code the test setup. ...
> Pick what's right for your local code health.

In terms of maintainability I would definitely vote SetUp. Way less code, and no one has to remember to regenerate anything.

> But, speaking of printing, definitely think hard about whether the UMA printer
> is the right design for you. There is perf_test::PrintResult, which is way
> simpler than the UMA printer. ...

"UMA printer" is a so much better name than "HistogramPrinter". Yeah, it seems like overkill here. I like it - it is very useful if you use a perf test as a profiling tool. You can check specific places in code easily, and it doesn't require much learning. However, this is not the right cl for it.

> It's really important to decide whether histograms are right for you, too,
> even if you don't use UMA histograms. ...

I didn't quite get everything. Even though I'll get rid of the UMA printer, I would still like to keep a histogram in HistoryItemsForTerms. Most of the time it's fast, but sometimes it can be super slow, at least for now. Seems like a good place - right?

> Btw, which timer to use is worth thinking through. ... That is, use it if you
> can possibly do so, because you'll get a better performance result. \o/

It's all on the UI thread, but ThreadTicks still sounds awesome.

> Finally, there has been some discussion on and off this CL ... I'll sign up to
> write a doc about what I think the right speed solution is for browser UI
> system health coverage ...

Sounds great. Benchmarking is the first step to optimising; we should do more of that)
About Patch Set 4.
1) Saving/loading the profile (especially after @nduca's comment) seems to be unnecessary. I've timed it - both perf tests run in 7 seconds on my machine (one in about 3.7 seconds). Many regular tests run much longer.
2) The UMA printer is no more. It's a useful tool, but not for this CL.
3) The cases have been divided into groups by length. This will make the test more informative.
4) Some refactoring, mostly ripping the history quick provider unittest apart into separate files. Any suggestions about organising them are welcome.
5) I still do not know how to enable the test for regular runs with the other perf tests.
Looking forward to your suggestions.
Description was changed from:
==========
Added:
1 - Performance tests for HQP: typing and deleting a popular url. Use a generated profile and histogram-based timings.
2 - Generation of a simple profile with a big history, containing 10000 variations of the same url.
3 - Utility class to make performance tests based on histograms.
BUG=643668
==========
to:
==========
Added:
1 - Performance tests for HQP: typing and deleting a popular url (10000 similar entries).
BUG=643668
==========
Description was changed from:
==========
Added:
1 - Performance tests for HQP: typing and deleting a popular url (10000 similar entries).
BUG=643668
==========
to:
==========
Adding performance tests for HQP: typing and deleting a popular url (10000 similar entries).
BUG=643668
==========
On 2016/09/16 17:44:52, dyaroshev wrote: > 5) I still do not know how to enable the test for regular runs with other perf > tests. Take a look at the cc_perftests target. This gets automatically run on the bots and its output tracked and monitored for regressions. We probably want to make a similar components_perftests target. I don't know precisely what the pieces are but since "cc_perftests" doesn't appear too many places I'm hoping that it's mostly copy-and-paste. nduca can maybe handhold more here or direct you better. For an individual test inside this target, see how cc/animation/animation_host_perftest.cc is built on GTest and uses perf_test::PrintResult() to show its results.
On 2016/09/16 18:58:48, Peter Kasting wrote:
> Take a look at the cc_perftests target ...

Perf tests turned out to be quite messy.

1) There are two mechanisms for doing perf tests:
- using the testing/perf/* tools, which print results to the standard output. I used them. They are used in cc_perftests: https://cs.chromium.org/chromium/src/cc/animation/animation_host_perftest.cc?...
- using base/test/perf*, which prints results to a log file. This is used by an existing test in components_perftests: https://cs.chromium.org/chromium/src/components/visitedlink/test/visitedlink_...
testing/perf/* is used in more modern code; however, base/test/perf* is used all over chromium.

2) In order to use testing/perf/* it looks like I have to modify https://cs.chromium.org/chromium/src/tools/perf/generate_perf_json.py?q=gener... to add components_perftests as a target. This is a new script; no examples of just adding a target exist yet.

3) I have to create a separate main file for components_perftests (a sketch follows below) because:
a - unit tests run in parallel, while perf tests should run sequentially;
b - using base/test/perf* requires special initialisation which is not needed in unittests.
The third point requires refactoring https://cs.chromium.org/chromium/src/components/test/run_all_unittests.cc?q=c... which is not obvious because of the different macros and listener logic.

With this I have a few questions:
1 - Can I modify visitedlink_perftest.cc to use testing/perf/*? This would make the refactoring considerably easier and smaller, and it seems to be the right way to do perf tests.
2 - If I can't, maybe I can use base/test/perf* in my test?
3 - What about platforms in generate_perf_json.py? Which ones do I need? And what are shards in the context of perftests? Maybe I need to modify something else too?
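[A sketch of point 3, the separate main: reuse the factored-out suite but launch serially. ComponentsTestSuite here stands for the class this CL extracts from run_all_unittests.cc.]

    #include "base/bind.h"
    #include "base/test/launcher/unit_test_launcher.h"
    #include "base/test/test_suite.h"

    int main(int argc, char** argv) {
      // Same suite as the unit tests, but perf tests must not run in
      // parallel, so launch them serially.
      ComponentsTestSuite test_suite(argc, argv);
      return base::LaunchUnitTestsSerially(
          argc, argv,
          base::Bind(&base::TestSuite::Run, base::Unretained(&test_suite)));
    }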
On 2016/09/19 16:39:29, dyaroshev wrote:

I think I've managed to factor out a reasonable solution, so the only remaining thing that I see is generate_perf_json.
1) Is generate_perf_json.py the only place I have to modify to enable perf tests?
2) How do I test my changes?
3) What about platforms and shards?
4) Is everything else ok?
I am uncertain about when to use which perf unit testing framework. I think we should carefully figure out which is appropriate, instead of just picking the one that components/ currently uses. I don't think we should just blindly assume that the base/ one is the right one --- we won't learn that way, and so far, I think we're all learning a ton from you pushing here, so thanks!

This CL is giant and the comment chain so long that it's starting to become difficult to handle, so you probably want to start a fresh document or email chain to explore that space: "my goal is to do X, but as part of that, it was unclear which perf unittest harness to use, so I started this doc about the two options, what we know about each, and the pros and cons of each; I'd like your feedback on this."

I also have some lingering questions about the choice to use uma_scoped_timer as your core measuring technique, and more importantly, I have concerns about the noise amount that you reported earlier in this thread. I'd like to have a frank conversation about reducing noise by other techniques, for instance additional repetition, more thread timestamps, etc. UMA-based probes have nice toolability properties, but the key thing here is noise, and I'd like to be convinced that the noise level you're achieving is the best that can be done.

My biggest challenge here is that this CL is getting to have too many concepts in it at once for me, at least, to sanely review in a code review context. That's kinda normal... exploration patches often get giant and unwieldy. At that point, the right thing to do is usually to take a step back and say "aha, I need to split this patch, and maybe it's time for a design doc." I suspect we're at this juncture here. What I see here is at least:
- a fresh document talking about why to choose which of the two perf unittest harnesses, with the pros and cons of each, that we can circulate around to experts to figure out which is right and what to do about it
- some changes to either base/test/perf/ or to the harness, as preparation for your later patches
- some patches to omnibox code to improve the perf-testability of some of the functions
- a patch to create the components/ perf unittest skeleton without the actual perf test: create the glue into the bots and maybe a dummy test that prints a simple 1 result, but doesn't actually do the real testing
- the actual perf unit test, where we can discuss noise levels and the measuring technique

Because this is a lot of parts, you might make a master design doc that describes your overall goal, shows the current parts, puts down the rationale that got us to this point that's spread out in this long comment chain, etc. That'd be up to you.

Anyway, that's my thought! I think it's up to you and your actual code owners whether to take the slow-n-steady approach I suggest above, but I sure know that I'd be able to give you better continued feedback by taking this one puzzle piece at a time!

And by the way, I hadn't realized how messy this space was, and how many vaguenesses we have in our perf unittesting story. You've taught me a lot just by going on this journey, so I do want to really take the time to say a big "Thank you!" for continuing to work with us to find a great solution here!

Best,
- Nat

https://codereview.chromium.org/2300323003/diff/80001/components/test/compone...
File components/test/components_test_suit.h (right):

https://codereview.chromium.org/2300323003/diff/80001/components/test/compone...
components/test/components_test_suit.h:5: #ifndef COMPONENTS_TEST_COMPONENTS_TEST_SUIT_H_
nit: did you mean to call this and the other related files components_test_suite.h/cc?
On 2016/09/21 00:41:00, nduca wrote: > I am uncertain about when to use which perf unit testing framework. I think we > should carefully figure out which is appropriate, instead of just picking the > one that components/ currently uses. I don't think we should just blindly assume > that the base/ one is the right one --- we won't learn that way, and so far, I > think we're all learning a ton from you pushing here, so thanks! > > This CL is giant and the comment chain so long that it's starting to become > difficult to handle, so we probably want to start a fresh document or email chain > to explore that space: "my goal is to do X, but as part of that, it was unclear > which perf unittest harness to use, so I started this doc about the two options, > what we know about each, and the pros and cons of each, I'd like your feedback > on this." > I've asked something like this in https://groups.google.com/a/chromium.org/forum/#!topic/chromium-dev/mV-Wp0i5Iek > I also have some lingering questions about the choice to use uma_scoped_timer as > your core measuring technique, and more importantly, I have concerns about the > noise amount that you reported earlier in this thread. I'd like to have a frank > conversation about reducing noise by other techniques, for instance additional > repetition, more thread timestamps, etc. UMA-based probes have nice toolability > properties, but the key thing here is noise, and I'd like to be convinced that > the noise level you're achieving is the best that can be done. > > 1 - measurements are no longer done with UMA; I used ThreadTicks. 2 - Here are consecutive runs on my machine https://docs.google.com/spreadsheets/d/1zbDzCeRiavIP3f-hWUTlWKQMzQGvs62ujq5d9... The noise seems to be OK, and I've made the distribution considerably smaller by separating inputs into groups of a few symbols. (Results are printed with print_list; they should be processed by the test analysis tooling.) > > The biggest challenge I'm having here is that this CL is getting to have too many > concepts in it at once for me at least to sanely review in a codereview context. > That's kinda normal... exploration patches often get giant and unwieldy. At that > point, the right thing to do is usually to take a step back, and say "aha, I > need to split this patch, and maybe it's time for a design doc." I suspect we're > at this juncture here. What I see here is at least: > - a fresh document talking about why to choose which of the two perf unittest > harnesses, the pros and cons of each, that we can circulate around to experts to > figure out which is right, what to do about it > I would argue that the only thing it now has is the HistoryQuickProvider perf test. The majority of changes in this CL are refactoring for reuse: - extracting ComponentTestSuit from run_all_unittests.cc to build run_all_perftests.cc - extracting pieces from HistoryQuickProviderUnittest to get FakeClient and database test utilities - modifying base/perf_test, so one does not have to inherit PerfTestSuit to enable perf_logging. All of these refactorings are very small and simple. The only new thing here is the perf test itself, which is one small file. Other than the amount of comments, this seems like a pretty regular CL. I think a design doc would be overkill. 
> - some changes to either base/test/perf/ or to the harness, as preparation for > your later patches > - some patches to omnibox code to improve perf-testability of some of the > functions > - a patch to create the components/ perf unittest skeleton without the actual > perf test: creates the glue to wire it into bots and maybe a dummy test that prints a > simple 1 result, but doesn't actually do the testing > - the actual perf unit test where we can discuss noise levels and the measuring > technique > Ok, even though I don't think this is necessary, let's create some separate CLs. > > And by the way, I hadn't realized how messy this space was, and how many > vaguenesses we have in our perf unittesting story. You've taught me a lot just > by going on this journey, so I do want to really make the time to say a big > "Thank you!" for continuing to work with us to find a great solution here! It takes so much longer than I expected :( It's just a benchmark!
https://codereview.chromium.org/2300323003/diff/80001/components/test/compone... File components/test/components_test_suit.h (right): https://codereview.chromium.org/2300323003/diff/80001/components/test/compone... components/test/components_test_suit.h:5: #ifndef COMPONENTS_TEST_COMPONENTS_TEST_SUIT_H_ On 2016/09/21 00:41:00, nduca wrote: > nit: do you mean to call this and the other related files > components_test_suite.h/cc? I needed to refactor this out of run_all_unittests.cc. I was looking at https://cs.chromium.org/chromium/src/cc/test/cc_test_suite.h?q=cc/test/c&sq=p... for example.
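For readers wondering what "printed with print_list" looks like in practice: a perf unittest typically funnels its per-group timings through testing/perf's result printer, roughly like the sketch below. This is a hedged illustration, not the CL's code; the helper name, metric names, and comma-separated value formatting are all made up, and it assumes perf_test::PrintResultList() with its usual (measurement, modifier, trace, values, units, important) arguments.

#include <string>
#include <vector>

#include "base/strings/string_number_conversions.h"
#include "base/time/time.h"
#include "testing/perf/perf_test.h"

// Emits one result list per input group so the perf dashboard can track
// each group (e.g. "typing", "erasing") as its own trace.
void PrintMeasurements(const std::string& trace_name,
                       const std::vector<base::TimeDelta>& measurements) {
  std::string durations;
  for (const base::TimeDelta& measurement : measurements)
    durations += base::Int64ToString(measurement.InMilliseconds()) + ',';
  perf_test::PrintResultList("HQP", std::string(), trace_name, durations,
                             "ms", true);
}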
Description was changed from ========== Adding performance tests for HQP: typing and deleting a popular url (10000 similar entries). BUG=643668 ========== to ========== Adding performance tests for HQP: typing and deleting a popular url (10000 similar entries). UPD: first we have to decide everything with components_perftests here: https://codereview.chromium.org/2358063002 BUG=643668 ==========
On 2016/09/21 12:08:29, dyaroshev wrote: > On 2016/09/21 00:41:00, nduca wrote: > > - some changes to either base/test/perf/ or to the harness, as preparation for > > your later patches > > - some patches to omnibox code to improve perf-testability of some of the > > functions > > - a patch to create the components/ perf unittest skeleton without the actual > > perf test: creates the glue to wire it into bots and maybe a dummy test that prints a > > simple 1 result, but doesn't actually do the testing > > - the actual perf unit test where we can discuss noise levels and the > measuring > > technique > > > > Ok, even though I don't think this is necessary, let's create some separate CLs. Here it comes: https://codereview.chromium.org/2358063002/
On 2016/09/21 12:08:29, dyaroshev wrote: > On 2016/09/21 00:41:00, nduca wrote: > > And by the way, I hadn't realized how messy this space was, and how many > > vaguenesses we have in our perf unittesting story. You've taught me a lot just > > by going on this journey, so I do want to really make the time to say a big > > "Thank you!" for continuing to work with us to find a great solution here! > > It takes so much longer than I expected :( It's just a benchmark! Yeah, sorry about this. Apparently some of the previous efforts in this space _didn't_ go through the work this change is doing (figuring out what the right framework is and setting things up consistently), which is why we have several different ways to do things. Definitely want to get to the point where adding such a benchmark _is_ easy, and is common for any change touching performance, so taking the time to figure this all out is super helpful here.
> I've asked something like this in https://groups.google.com/a/chromium.org/forum/#!topic/chromium-dev/mV-Wp0i5Iek Great! When we know more from that thread, I think we'll know what to do here. > 1 - measurements are no longer done with UMA; I used ThreadTicks. > 2 - Here are consecutive runs on my machine https://docs.google.com/spreadsheets/d/1zbDzCeRiavIP3f-hWUTlWKQMzQGvs62ujq5d9... > The noise seems to be OK, and I've made the distribution considerably smaller by separating inputs into groups of a few symbols. (Results are printed with print_list; they should be processed by the test analysis tooling.) This noise looks great! Way better than the initial findings, thank you for clarifying this! Thanks again!
pkasting@chromium.org changed reviewers: - pkasting@google.com
asvitkine@chromium.org changed reviewers: + asvitkine@chromium.org
I'm just curious what the status of this is? It would be a shame to lose out on the perf wins due to sadness in our review process.
On 2016/10/21 18:43:55, Alexei Svitkine (slow) wrote: > I'm just curious what the status of this is? > > It would be a shame to lose out on the perf wins due to sadness in our review > process. The status is that https://codereview.chromium.org/2358063002 is in the works (hopefully landing soon) to create components_perftests to put this in. Once that's done this can be refactored to just add the one new test, which will hopefully not be hard.
Ah great - I missed the other CL - thanks for pointing me to it. :) On Fri, Oct 21, 2016 at 2:51 PM, <pkasting@chromium.org> wrote: > On 2016/10/21 18:43:55, Alexei Svitkine (slow) wrote: > > I'm just curious what the status of this is? > > > > It would be a shame to lose out on the perf wins due to sadness in our > review > > process. > > The status is that https://codereview.chromium.org/2358063002 is in the > works > (hopefully landing soon) to create components_perftests to put this in. > > Once that's done this can be refactored to just add the one new test, > which will > hopefully not be hard. > > https://codereview.chromium.org/2300323003/
On 2016/10/21 19:00:27, Alexei Svitkine (slow) wrote: > Ah great - I missed the other CL - thanks for pointing me to it. :) > > On Fri, Oct 21, 2016 at 2:51 PM, <mailto:pkasting@chromium.org> wrote: > > > On 2016/10/21 18:43:55, Alexei Svitkine (slow) wrote: > > > I'm just curious what the status of this is? > > > > > > It would be a shame to lose out on the perf wins due to sadness in our > > review > > > process. > > > > The status is that https://codereview.chromium.org/2358063002 is in the > > works > > (hopefully landing soon) to create components_perftests to put this in. > > > > Once that's done this can be refactored to just add the one new test, > > which will > > hopefully not be hard. > > > > https://codereview.chromium.org/2300323003/ > > And that CL was merged, so I updated this one. Now all this does is create a simple performance test for HQP.
Description was changed from ========== Adding performance tests for HQP: typing and deleting a popular url (10000 similar entries). UPD: first we have to decide everything with components_perftests here: https://codereview.chromium.org/2358063002 BUG=643668 ========== to ========== Performance tests for HQP. Creating performance test for HQP: typing and erasing a popular url (10000 similar entries). Based on HQP unittests. BUG=643668 ==========
Description was changed from ========== Performance tests for HQP. Creating performance test for HQP: typing and erasing a popular url (10000 similar entries). Based on HQP unittests. BUG=643668 ========== to ========== Performance tests for HQP. This CL adds performance test for HQP: typing and erasing a popular url (10000 similar entries). Based on HQP unittests. BUG=643668 ==========
Nits to be fixed with the next patch. https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... File components/omnibox/browser/BUILD.gn (right): https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/BUILD.gn:190: "//base/test:test_support", Remove the now-unnecessary dependencies from the "unit_tests" target. https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... File components/omnibox/browser/history_quick_provider_performance_unittest.cc (right): https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/history_quick_provider_performance_unittest.cc:103: // Customisation point to data generation, Customisation point for data generation.
https://codereview.chromium.org/2300323003/diff/100001/components/BUILD.gn File components/BUILD.gn (right): https://codereview.chromium.org/2300323003/diff/100001/components/BUILD.gn#ne... components/BUILD.gn:465: "omnibox/browser/history_quick_provider_performance_unittest.cc", Nit: For cleanliness, it may make sense to have this source file and its deps defined as a separate source_set (in a BUILD.gn file in omnibox/browser/?) and pulled in here as a dependency? Not sure if we have a best practice here, maybe ask brettw or dpranke. https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... File components/omnibox/browser/fake_autocomplete_provider_client.cc (right): https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/fake_autocomplete_provider_client.cc:16: : pool_owner_(3, "Background Pool") { Why 3 threads? Do we need a specific number? More than 1? https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/fake_autocomplete_provider_client.cc:29: FakeAutocompleteProviderClient::~FakeAutocompleteProviderClient() = default; Nit: I'd write this in the header (instead of here in the .cc) if you don't get warnings about inlined complex destructors. https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... File components/omnibox/browser/fake_autocomplete_provider_client.h (right): https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/fake_autocomplete_provider_client.h:29: Nit: No blank lines between the overrides in this group (leave one between the group and the destructor above and the non-override below) https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... File components/omnibox/browser/history_quick_provider_performance_unittest.cc (right): https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/history_quick_provider_performance_unittest.cc:24: using base::ASCIIToUTF16; You don't use this symbol. https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/history_quick_provider_performance_unittest.cc:26: using base::TimeDelta; Nit: Half the time this file explicitly qualifies these anyway, sometimes it doesn't. Do one or the other. My preference is to always qualify, and nuke these using directives. https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/history_quick_provider_performance_unittest.cc:32: constexpr std::size_t kSimilarUrlCount = 10000; Nit: For better or worse, we don't std::-qualify size_t in Chromium code. (several places) https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/history_quick_provider_performance_unittest.cc:56: std::string GenerateWeiredShortString(int seed) { Nit: Weird (But I don't love "weird" in a function name. What kind of a short string is a "weird" one? Add comments on what this means and use a different name.) https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... 
components/omnibox/browser/history_quick_provider_performance_unittest.cc:57: CR_DEFINE_STATIC_LOCAL(std::string, syms, (GenerateSymbolsToChooseFrom())); Why not just:

static constexpr char kSyms[] =
    "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789,/=+?#";
static constexpr size_t kSymsSize = arraysize(kSyms) - 1;

This avoids the CR_DEFINE_STATIC_LOCAL and the function above. https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/history_quick_provider_performance_unittest.cc:60: for (auto i = std::hash<int>()(seed); i; i /= syms.size()) { Do you really need to hash? Or do you just want random characters? It looks as if you're using "hash" here to get randomness, not to actually hash? But maybe I'm wrong. https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/history_quick_provider_performance_unittest.cc:67: constexpr char kPopularUrl[] = Nit: static https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/history_quick_provider_performance_unittest.cc:80: // This test is based on HistoryQuickProviderTest Nit: Based how? And why is this useful information for the reader? Does it explain some piece of strange code, or suggest something about how it should be maintained? If not, remove this comment. https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/history_quick_provider_performance_unittest.cc:89: void PrepareData(std::size_t data_count); Nit: Add comments on these functions about what they do (in particular, what the arguments mean). https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/history_quick_provider_performance_unittest.cc:112: }; Nit: DISALLOW_COPY_AND_ASSIGN https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/history_quick_provider_performance_unittest.cc:160: cur.typed_count, visit_time); Nit: Basically this whole loop body is also in history_quick_provider_unittest.cc. Maybe factor to a common base location. https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/history_quick_provider_performance_unittest.cc:209: return base::Time::Now() - regular_start; Nit: I think it would be clearer to do something like:

// In class declaration so it's inlined
void TestBody(const AutocompleteInput& input) {
  provider_->Start(input, false);
  EXPECT_TRUE(provider_->done());
}

// Here
base::TimeDelta HQPPerfTestBase::RunTest(const base::string16& text) {
  ...

  // Time using the more-accurate ThreadTicks method if supported.
  if (base::ThreadTicks::IsSupported()) {
    const base::ThreadTicks start = base::ThreadTicks::Now();
    TestBody();
    return base::ThreadTicks::Now() - start;
  }

  // Fall back to wall clock time.
  const base::Time start = base::Time::Now();
  TestBody();
  return base::Time::Now() - start;
}

https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/history_quick_provider_performance_unittest.cc:212: class HQPPerfTestOnePopularURL : public HQPPerfTestBase { Nit: Do we really buy much by splitting this out into its own subclass for now? Unless you already have more perf tests planned, how do you know that this is the right interface for them? 
Using the "design for what you need now" principle, I'd probably merge these two classes into a single HQPPerfTest class and refactor later if need be (again, unless you have more tests planned). https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/history_quick_provider_performance_unittest.cc:232: }; Nit: DISALLOW_COPY_AND_ASSIGN https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/history_quick_provider_performance_unittest.cc:249: measurements.push_back(RunTest(base::UTF8ToUTF16({test_url.begin(), j}))); Nit: AFAICT, this line is the only thing that differs from the case below. Consider some kind of helper function/refactoring to allow these to share as much code as possible? If you don't do this, then all the comments below apply up here too. https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/history_quick_provider_performance_unittest.cc:261: auto test_url = GeneratePopularUrlVariation(kSimilarUrlCount + 1); Nit: I'd prefer to use "auto" a little less aggressively in this function, because it makes it hard to tell what e.g. the type of j.base() is below ("j is an auto... of the same type as group_start, which is an auto... of the same type as i, which is an auto... of a reverse_iterator type for test_url, which is an auto... of the return type of GeneratePopularUrlVariation()... now I gotta look that up"). Feel free to use it for loop iterator types and the type of the lambda below, or in places where the type is already explicitly present once in the statement, but in a case like this, I would use an explicit typename. https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/history_quick_provider_performance_unittest.cc:263: auto url_position_to_string = [&test_url](std::string::reverse_iterator it) { Nit: I don't know if we have a rule on this, but I tend to prefer naming local lambdas like functions (CamelCaseName) rather than like variables (hacker_case_name). const& on |it|? https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/history_quick_provider_performance_unittest.cc:270: for (auto i = test_url.rbegin(); i != test_url.rend();) { Nit: Place space between ; and ), unless this is how clang-format did it https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/history_quick_provider_performance_unittest.cc:274: for (auto j = group_start; j != group_end; ++j) Nit: Why not just for (; i != group_end; ++i) { Then use |i| in the body and eliminate the "i = group_end;" line below. https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/history_quick_provider_performance_unittest.cc:279: url_position_to_string(group_end); Nit: I'd probably just inline this into the next statement, but whatever https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... File components/omnibox/browser/history_quick_provider_unittest.cc (right): https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/history_quick_provider_unittest.cc:299: &i](ui::PageTransition transtition) mutable { Nit: Can just use [&] https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... 
File components/omnibox/browser/history_test_util.cc (right): https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/history_test_util.cc:29: typed_count, last_visit_time.ToInternalValue()); Don't use StringPrintf() to put the values directly into your SQL statement; use sql::Statement::BindXXX() instead. See for example how https://cs.chromium.org/chromium/src/chrome/browser/extensions/activity_log/a... does this kind of thing. (2 places)
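For reference, the bound-parameter pattern being suggested looks roughly like the following. This is a minimal sketch against the 2016-era sql::Connection API; the table, columns, and helper name are invented for illustration, not taken from the CL.

#include <stdint.h>

#include <string>

#include "sql/connection.h"
#include "sql/statement.h"

bool AddRowSafely(sql::Connection* db,
                  const std::string& url,
                  int typed_count,
                  int64_t last_visit_time) {
  // Values are bound by placeholder index, never spliced into the SQL
  // text the way a StringPrintf()-built statement would splice them.
  sql::Statement s(db->GetUniqueStatement(
      "INSERT INTO urls (url, typed_count, last_visit_time) "
      "VALUES (?, ?, ?)"));
  s.BindString(0, url);
  s.BindInt(1, typed_count);
  s.BindInt64(2, last_visit_time);
  return s.Run();
}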
https://codereview.chromium.org/2300323003/diff/100001/components/BUILD.gn File components/BUILD.gn (right): https://codereview.chromium.org/2300323003/diff/100001/components/BUILD.gn#ne... components/BUILD.gn:465: "omnibox/browser/history_quick_provider_performance_unittest.cc", On 2016/10/24 22:28:28, Peter Kasting wrote: > Nit: For cleanliness, it may make sense to have this source file and its deps > defined as a separate source_set (in a BUILD.gn file in omnibox/browser/?) and > pulled in here as a dependency? Not sure if we have a best practice here, maybe > ask brettw or dpranke. I looked at cc_perftests (https://cs.chromium.org/chromium/src/cc/BUILD.gn?q=cc/BUILD.gn&sq=package:chr...) https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... File components/omnibox/browser/fake_autocomplete_provider_client.cc (right): https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/fake_autocomplete_provider_client.cc:16: : pool_owner_(3, "Background Pool") { On 2016/10/24 22:28:28, Peter Kasting wrote: > Why 3 threads? Do we need a specific number? More than 1? This is not my code and I couldn't figure it out fast. I've just factored it out. Here is the patch that introduced it. https://codereview.chromium.org/1646893003 https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/fake_autocomplete_provider_client.cc:29: FakeAutocompleteProviderClient::~FakeAutocompleteProviderClient() = default; On 2016/10/24 22:28:28, Peter Kasting wrote: > Nit: I'd write this in the header (instead of here in the .cc) if you don't get > warnings about inlined complex destructors. I do get the warnings. https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... File components/omnibox/browser/fake_autocomplete_provider_client.h (right): https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/fake_autocomplete_provider_client.h:29: On 2016/10/24 22:28:28, Peter Kasting wrote: > Nit: No blank lines between the overrides in this group (leave one between the > group and the destructor above and the non-override below) Done https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... File components/omnibox/browser/history_quick_provider_performance_unittest.cc (right): https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/history_quick_provider_performance_unittest.cc:24: using base::ASCIIToUTF16; On 2016/10/24 22:28:29, Peter Kasting wrote: > You don't use this symbol. Done. https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/history_quick_provider_performance_unittest.cc:26: using base::TimeDelta; On 2016/10/24 22:28:29, Peter Kasting wrote: > Nit: Half the time this file explicitly qualifies these anyway, sometimes it > doesn't. Do one or the other. My preference is to always qualify, and nuke > these using directives. Done. https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/history_quick_provider_performance_unittest.cc:32: constexpr std::size_t kSimilarUrlCount = 10000; On 2016/10/24 22:28:29, Peter Kasting wrote: > Nit: For better or worse, we don't std::-qualify size_t in Chromium code. > (several places) Done. https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... 
components/omnibox/browser/history_quick_provider_performance_unittest.cc:56: std::string GenerateWeiredShortString(int seed) { On 2016/10/24 22:28:28, Peter Kasting wrote: > Nit: Weird > > (But I don't love "weird" in a function name. What kind of a short string is a > "weird" one? Add comments on what this means and use a different name.) Done. https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/history_quick_provider_performance_unittest.cc:57: CR_DEFINE_STATIC_LOCAL(std::string, syms, (GenerateSymbolsToChooseFrom())); On 2016/10/24 22:28:29, Peter Kasting wrote: > Why not just: > > static constexpr char kSyms[] = > "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789,/=+?#"; > static constexpr size_t kSymsSize = arraysize(kSyms) - 1; > > This avoids the CR_DEFINE_STATIC_LOCAL and the function above. Done. https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/history_quick_provider_performance_unittest.cc:60: for (auto i = std::hash<int>()(seed); i; i /= syms.size()) { On 2016/10/24 22:28:29, Peter Kasting wrote: > Do you really need to hash? Or do you just want random characters? It looks as > if you're using "hash" here to get randomness, not to actually hash? But maybe > I'm wrong. I want to get somewhat uniformly distributed deterministic integers. std::hash seemed better than std::rand and uniform_distribution is not allowed https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/history_quick_provider_performance_unittest.cc:67: constexpr char kPopularUrl[] = On 2016/10/24 22:28:28, Peter Kasting wrote: > Nit: static Done. https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/history_quick_provider_performance_unittest.cc:80: // This test is based on HistoryQuickProviderTest On 2016/10/24 22:28:29, Peter Kasting wrote: > Nit: Based how? And why is this useful information for the reader? Does it > explain some piece of strange code, or suggest something about how it should be > maintained? If not, remove this comment. Done. https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/history_quick_provider_performance_unittest.cc:112: }; On 2016/10/24 22:28:28, Peter Kasting wrote: > Nit: DISALLOW_COPY_AND_ASSIGN Doesn't work for test classes. I get compilation errors. https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/history_quick_provider_performance_unittest.cc:160: cur.typed_count, visit_time); On 2016/10/24 22:28:28, Peter Kasting wrote: > Nit: Basically this whole loop body is also in > history_quick_provider_unittest.cc. Maybe factor to a common base location. Done. https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/history_quick_provider_performance_unittest.cc:209: return base::Time::Now() - regular_start; On 2016/10/24 22:28:29, Peter Kasting wrote: > Nit: I think it would be clearer to do something like: > > // In class declaration so it's inlined > void TestBody(const AutocompleteInput& input) { > provider_->Start(input, false); > EXPECT_TRUE(provider_->done()); > } > > // Here > base::TimeDelta HQPPerfTestBase::RunTest(const base::string16& text) { > ... > > // Time using the more-accurate ThreadTicks method if supported. 
> if (base::ThreadTicks::IsSupported()) { > const base::ThreadTicks start = base::ThreadTicks::Now(); > TestBody(); > return base::ThreadTicks::Now() - start; > } > > // Fall back to wall clock time. > const base::Time start = base::Time::Now(); > TestBody(); > return base::Time::Now() - start; > } Done. https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/history_quick_provider_performance_unittest.cc:212: class HQPPerfTestOnePopularURL : public HQPPerfTestBase { On 2016/10/24 22:28:29, Peter Kasting wrote: > Nit: Do we really buy much by splitting this out into its own subclass for now? > Unless you already have more perf tests planned, how do you know that this is > the right interface for them? > > Using the "design for what you need now" principle, I'd probably merge these two > classes into a single HQPPerfTest class and refactor later if need be (again, > unless you have more tests planned). Done. https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/history_quick_provider_performance_unittest.cc:249: measurements.push_back(RunTest(base::UTF8ToUTF16({test_url.begin(), j}))); On 2016/10/24 22:28:29, Peter Kasting wrote: > Nit: AFAICT, this line is the only thing that differs from the case below. > Consider some kind of helper function/refactoring to allow these to share as > much code as possible? > > If you don't do this, then all the comments below apply up here too. Haven't got around to it yet. I thought about it when first wrote it, and it's not obvious how to refactor it. https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/history_quick_provider_performance_unittest.cc:261: auto test_url = GeneratePopularUrlVariation(kSimilarUrlCount + 1); On 2016/10/24 22:28:29, Peter Kasting wrote: > Nit: I'd prefer to use "auto" a little less aggressively in this function, > because it makes it hard to tell what e.g. the type of j.base() is below ("j is > an auto... of the same type as group_start, which is an auto... of the same type > as i, which is an auto... of a reverse_iterator type for test_url, which is an > auto... of the return type of GeneratePopularUrlVariation()... now I gotta look > that up"). > > Feel free to use it for loop iterator types and the type of the lambda below, or > in places where the type is already explicitly present once in the statement, > but in a case like this, I would use an explicit typename. Done. https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/history_quick_provider_performance_unittest.cc:263: auto url_position_to_string = [&test_url](std::string::reverse_iterator it) { On 2016/10/24 22:28:29, Peter Kasting wrote: > Nit: I don't know if we have a rule on this, but I tend to prefer naming local > lambdas like functions (CamelCaseName) rather than like variables > (hacker_case_name). > > const& on |it|? Haven't got around to it yet. https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/history_quick_provider_performance_unittest.cc:270: for (auto i = test_url.rbegin(); i != test_url.rend();) { On 2016/10/24 22:28:29, Peter Kasting wrote: > Nit: Place space between ; and ), unless this is how clang-format did it clang-format https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... 
components/omnibox/browser/history_quick_provider_performance_unittest.cc:274: for (auto j = group_start; j != group_end; ++j) On 2016/10/24 22:28:29, Peter Kasting wrote: > Nit: Why not just > > for (; i != group_end; ++i) { > > Then use |i| in the body and eliminate the "i = group_end;" line below. Haven't got around to it yet. https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... File components/omnibox/browser/history_test_util.cc (right): https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/history_test_util.cc:29: typed_count, last_visit_time.ToInternalValue()); On 2016/10/24 22:28:29, Peter Kasting wrote: > Don't use StringPrintf() to put the values directly into your SQL statement; use > Sql::Statement::BindXXX() instead. See for example how > https://cs.chromium.org/chromium/src/chrome/browser/extensions/activity_log/a... > does this kind of thing. (2 places) Haven't got around to it yet.
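To make the hashing discussion above concrete: the loop in question amounts to the sketch below (function and constant names are illustrative). It reads the hash value as a base-N number and emits one character per "digit", so the same seed always yields the same pseudo-random-looking string.

#include <functional>
#include <string>

#include "base/macros.h"

std::string GenerateShortString(int seed) {
  static constexpr char kSyms[] = "abcdefghijklmnopqrstuvwxyz0123456789";
  static constexpr size_t kSymsSize = arraysize(kSyms) - 1;
  std::string result;
  // Peel off base-kSymsSize "digits" of the hash until it is exhausted.
  for (size_t i = std::hash<int>()(seed); i != 0; i /= kSymsSize)
    result += kSyms[i % kSymsSize];
  return result;
}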
https://codereview.chromium.org/2300323003/diff/100001/components/BUILD.gn File components/BUILD.gn (right): https://codereview.chromium.org/2300323003/diff/100001/components/BUILD.gn#ne... components/BUILD.gn:465: "omnibox/browser/history_quick_provider_performance_unittest.cc", On 2016/10/25 18:11:34, dyaroshev wrote: > On 2016/10/24 22:28:28, Peter Kasting wrote: > > Nit: For cleanliness, it may make sense to have this source file and its deps > > defined as a separate source_set (in a BUILD.gn file in omnibox/browser/?) and > > pulled in here as a dependency? Not sure if we have a best practice here, > maybe > > ask brettw or dpranke. > > I looked at cc_perftests > (https://cs.chromium.org/chromium/src/cc/BUILD.gn?q=cc/BUILD.gn&sq=package:chr...) Right, but cc/ is more of a true monolithic thing, while components/ is a giant catchall for dozens of unrelated pieces. If this gets heavily used over time, the list could get unwieldy. I don't know for sure that what's being done here is wrong, but I don't think cc_perftests is necessarily the right model for this. Hence my suggestion to check with a gn-knowledgeable person. https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... File components/omnibox/browser/fake_autocomplete_provider_client.cc (right): https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/fake_autocomplete_provider_client.cc:16: : pool_owner_(3, "Background Pool") { On 2016/10/25 18:11:34, dyaroshev wrote: > On 2016/10/24 22:28:28, Peter Kasting wrote: > > Why 3 threads? Do we need a specific number? More than 1? > > This is not my code and I couldn't figure it out fast. > I've just factored it out. Here is the patch that introduced it. > https://codereview.chromium.org/1646893003 This is OK for now, then, but can you loop in rohitrao (the author of that) and ask why we had this so we can ideally add a comment about it? https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... File components/omnibox/browser/history_quick_provider_performance_unittest.cc (right): https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/history_quick_provider_performance_unittest.cc:60: for (auto i = std::hash<int>()(seed); i; i /= syms.size()) { On 2016/10/25 18:11:35, dyaroshev wrote: > On 2016/10/24 22:28:29, Peter Kasting wrote: > > Do you really need to hash? Or do you just want random characters? It looks > as > > if you're using "hash" here to get randomness, not to actually hash? But > maybe > > I'm wrong. > > I want to get somewhat uniformly distributed deterministic integers. std::hash > seemed better than std::rand and uniform_distribution is not allowed Because <random> is still on the TBD list for Chromium C++11 support? I just sent an email requesting that we move that to the allowed list. I'll know shortly what the fate of that is, but for now I'd try recoding this using std::uniform_int_distribution or similar. That seems clearer than this code. https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/history_quick_provider_performance_unittest.cc:112: }; On 2016/10/25 18:11:35, dyaroshev wrote: > On 2016/10/24 22:28:28, Peter Kasting wrote: > > Nit: DISALLOW_COPY_AND_ASSIGN > > Doesn't work for test classes. I get compilation errors. Oh? 
https://cs.chromium.org/chromium/src/base/bind_unittest.cc?rcl=0&l=316 is an example of a testing::Test subclass that has this, and there are many more. What are the errors you're seeing? https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/history_quick_provider_performance_unittest.cc:249: measurements.push_back(RunTest(base::UTF8ToUTF16({test_url.begin(), j}))); On 2016/10/25 18:11:34, dyaroshev wrote: > On 2016/10/24 22:28:29, Peter Kasting wrote: > > Nit: AFAICT, this line is the only thing that differs from the case below. > > Consider some kind of helper function/refactoring to allow these to share as > > much code as possible? > > > > If you don't do this, then all the comments below apply up here too. > > Haven't got around to it yet. I thought about it when first wrote it, and it's > not obvious how to refactor it. Write this as a template, and pass begin()/end() to one version and rbegin()/rend() to the other? That doesn't solve the .base() call, but I can think of a couple (possibly ugly) ways to do that. Or, always do the iteration forward, but in the second test, reverse the URL string before passing it in?
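The template suggestion above could be shaped roughly like this sketch. RunAllTests and the callback signature are hypothetical; the only point is the iterator-direction trick: the same loop runs forward over URL prefixes to simulate typing and backward over them to simulate erasing.

#include <functional>
#include <vector>

#include "base/strings/string16.h"
#include "base/strings/string_piece.h"
#include "base/strings/utf_string_conversions.h"
#include "base/time/time.h"

template <typename PieceIt>
void RunAllTests(
    PieceIt first,
    PieceIt last,
    const std::function<base::TimeDelta(const base::string16&)>& run_test,
    std::vector<base::TimeDelta>* measurements) {
  // Forward iterators over [first, last) model typing the URL; reverse
  // iterators over the same container model erasing it.
  for (PieceIt i = first; i != last; ++i)
    measurements->push_back(run_test(base::UTF8ToUTF16(*i)));
}

A vector<base::StringPiece> of successively longer prefixes could then be passed once with begin()/end() and once with rbegin()/rend(), sharing everything but the direction.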
dyaroshev@yandex-team.ru changed reviewers: + brettw@chromium.org, rohitrao@chromium.org
https://codereview.chromium.org/2300323003/diff/100001/components/BUILD.gn File components/BUILD.gn (right): https://codereview.chromium.org/2300323003/diff/100001/components/BUILD.gn#ne... components/BUILD.gn:465: "omnibox/browser/history_quick_provider_performance_unittest.cc", On 2016/10/25 18:27:56, Peter Kasting wrote: > On 2016/10/25 18:11:34, dyaroshev wrote: > > On 2016/10/24 22:28:28, Peter Kasting wrote: > > > Nit: For cleanliness, it may make sense to have this source file and its > deps > > > defined as a separate source_set (in a BUILD.gn file in omnibox/browser/?) > and > > > pulled in here as a dependency? Not sure if we have a best practice here, > > maybe > > > ask brettw or dpranke. > > > > I looked at cc_perftests > > > (https://cs.chromium.org/chromium/src/cc/BUILD.gn?q=cc/BUILD.gn&sq=package:chr...) > > Right, but cc/ is more of a true monolithic thing, while components/ is a giant > catchall for dozens of unrelated pieces. If this gets heavily used over time, > the list could get unwieldy. > > I don't know for sure that what's being done here is wrong, but I don't think > cc_perftests is necessarily the right model for this. Hence my suggestion to > check with a gn-knowledgeable person. @brettw can you please take a look? https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... File components/omnibox/browser/fake_autocomplete_provider_client.cc (right): https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/fake_autocomplete_provider_client.cc:16: : pool_owner_(3, "Background Pool") { On 2016/10/25 18:27:56, Peter Kasting wrote: > On 2016/10/25 18:11:34, dyaroshev wrote: > > On 2016/10/24 22:28:28, Peter Kasting wrote: > > > Why 3 threads? Do we need a specific number? More than 1? > > > > This is not my code and I couldn't figure it out fast. > > I've just factored it out. Here is the patch that introduced it. > > https://codereview.chromium.org/1646893003 > > This is OK for now, then, but can you loop in rohitrao (the author of that) and > ask why we had this so we can ideally add a comment about it? @rohitrao can you please help out?
https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... File components/omnibox/browser/fake_autocomplete_provider_client.cc (right): https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/fake_autocomplete_provider_client.cc:16: : pool_owner_(3, "Background Pool") { On 2016/10/25 23:06:46, dyaroshev wrote: > On 2016/10/25 18:27:56, Peter Kasting wrote: > > On 2016/10/25 18:11:34, dyaroshev wrote: > > > On 2016/10/24 22:28:28, Peter Kasting wrote: > > > > Why 3 threads? Do we need a specific number? More than 1? > > > > > > This is not my code and I couldn't figure it out fast. > > > I've just factored it out. Here is the patch that introduced it. > > > https://codereview.chromium.org/1646893003 > > > > This is OK for now, then, but can you loop in rohitrao (the author of that) > and > > ask why we had this so we can ideally add a comment about it? > > @rohitrao can you please help out? 3 was an arbitrary choice. Grepping for pool_owner_, tests are evenly split between 2 and 3. More than one thread might be useful, so that tasks don't end up running sequentially, but the exact number of worker threads doesn't matter.
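Concretely, the constructor under discussion boils down to something like this sketch (FakeClient is a stand-in name for FakeAutocompleteProviderClient), assuming base::SequencedWorkerPoolOwner from base/test:

#include "base/test/sequenced_worker_pool_owner.h"

class FakeClient {
 public:
  // Per the reply above, 3 is arbitrary; any value > 1 keeps queued
  // tasks from being forced to run strictly one after another.
  FakeClient() : pool_owner_(3, "Background Pool") {}

 private:
  base::SequencedWorkerPoolOwner pool_owner_;
};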
To do with the next patch: https://codereview.chromium.org/2300323003/diff/140001/components/history/cor... File components/history/core/browser/history_database.h (right): https://codereview.chromium.org/2300323003/diff/140001/components/history/cor... components/history/core/browser/history_database.h:172: friend class HQPPerfTestOnePopularURL; Friendship is no longer required. https://codereview.chromium.org/2300323003/diff/140001/components/omnibox/bro... File components/omnibox/browser/BUILD.gn (right): https://codereview.chromium.org/2300323003/diff/140001/components/omnibox/bro... components/omnibox/browser/BUILD.gn:198: "//sql", Check if this is still necessary.
I think I've cleaned up the vast majority of issues (everything except gn, where I need some guidance). Anything new? https://codereview.chromium.org/2300323003/diff/100001/components/BUILD.gn File components/BUILD.gn (right): https://codereview.chromium.org/2300323003/diff/100001/components/BUILD.gn#ne... components/BUILD.gn:465: "omnibox/browser/history_quick_provider_performance_unittest.cc", On 2016/10/25 23:06:46, dyaroshev wrote: > On 2016/10/25 18:27:56, Peter Kasting wrote: > > On 2016/10/25 18:11:34, dyaroshev wrote: > > > On 2016/10/24 22:28:28, Peter Kasting wrote: > > > > Nit: For cleanliness, it may make sense to have this source file and its > > deps > > > > defined as a separate source_set (in a BUILD.gn file in omnibox/browser/?) > > and > > > > pulled in here as a dependency? Not sure if we have a best practice here, > > > maybe > > > > ask brettw or dpranke. > > > > > > I looked at cc_perftests > > > > > > (https://cs.chromium.org/chromium/src/cc/BUILD.gn?q=cc/BUILD.gn&sq=package:chr...) > > > > Right, but cc/ is more of a true monolithic thing, while components/ is a > giant > > catchall for dozens of unrelated pieces. If this gets heavily used over time, > > the list could get unwieldy. > > > > I don't know for sure that what's being done here is wrong, but I don't think > > cc_perftests is necessarily the right model for this. Hence my suggestion to > > check with a gn-knowledgeable person. > > @brettw can you please take a look? Seems like this is the last issue, not counting attention slips. https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... File components/omnibox/browser/fake_autocomplete_provider_client.cc (right): https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/fake_autocomplete_provider_client.cc:16: : pool_owner_(3, "Background Pool") { On 2016/10/25 23:12:24, rohitrao wrote: > On 2016/10/25 23:06:46, dyaroshev wrote: > > On 2016/10/25 18:27:56, Peter Kasting wrote: > > > On 2016/10/25 18:11:34, dyaroshev wrote: > > > > On 2016/10/24 22:28:28, Peter Kasting wrote: > > > > > Why 3 threads? Do we need a specific number? More than 1? > > > > > > > > This is not my code and I couldn't figure it out fast. > > > > I've just factored it out. Here is the patch that introduced it. > > > > https://codereview.chromium.org/1646893003 > > > > > > This is OK for now, then, but can you loop in rohitrao (the author of that) > > and > > > ask why we had this so we can ideally add a comment about it? > > > > @rohitrao can you please help out? > > 3 was an arbitrary choice. Grepping for pool_owner_, tests are evenly split > between 2 and 3. More than one thread might be useful, so that tasks don't end > up running sequentially, but the exact number of worker threads doesn't matter. Added some explanation. https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... File components/omnibox/browser/history_quick_provider_performance_unittest.cc (right): https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/history_quick_provider_performance_unittest.cc:60: for (auto i = std::hash<int>()(seed); i; i /= syms.size()) { On 2016/10/25 18:27:56, Peter Kasting wrote: > On 2016/10/25 18:11:35, dyaroshev wrote: > > On 2016/10/24 22:28:29, Peter Kasting wrote: > > > Do you really need to hash? Or do you just want random characters? It > looks > > as > > > if you're using "hash" here to get randomness, not to actually hash?
But > > maybe > > > I'm wrong. > > > > I want to get somewhat uniformly distributed deterministic integers. std::hash > > seemed better than std::rand and uniform_distribution is not allowed > > Because <random> is still on the TBD list for Chromium C++11 support? I just > sent an email requesting that we move that to the allowed list. I'll know > shortly what the fate of that is, but for now I'd try recoding this using > std::uniform_int_distribution or similar. That seems clearer than this code. Rewrote with <random>. Will this slow down this patch? https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/history_quick_provider_performance_unittest.cc:112: }; On 2016/10/25 18:27:56, Peter Kasting wrote: > On 2016/10/25 18:11:35, dyaroshev wrote: > > On 2016/10/24 22:28:28, Peter Kasting wrote: > > > Nit: DISALLOW_COPY_AND_ASSIGN > > > > Doesn't work for test classes. I get compilation errors. > > Oh? https://cs.chromium.org/chromium/src/base/bind_unittest.cc?rcl=0&l=316 is > an example of a testing::Test subclass that has this, and there are many more. > What are the errors you're seeing? I forgot the semicolon after DISALLOW_COPY_AND_ASSIGN. Understanding that from the error messages wasn't easy :) Done. https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/history_quick_provider_performance_unittest.cc:249: measurements.push_back(RunTest(base::UTF8ToUTF16({test_url.begin(), j}))); On 2016/10/25 18:27:56, Peter Kasting wrote: > On 2016/10/25 18:11:34, dyaroshev wrote: > > On 2016/10/24 22:28:29, Peter Kasting wrote: > > > Nit: AFAICT, this line is the only thing that differs from the case below. > > > Consider some kind of helper function/refactoring to allow these to share as > > > much code as possible? > > > > > > If you don't do this, then all the comments below apply up here too. > > > > Haven't got around to it yet. I thought about it when first wrote it, and it's > > not obvious how to refactor it. > > Write this as a template, and pass begin()/end() to one version and > rbegin()/rend() to the other? That doesn't solve the .base() call, but I can > think of a couple (possibly ugly) ways to do that. > > Or, always do the iteration forward, but in the second test, reverse the URL > string before passing it in? Rewrote with StringPiece. I think it looks nice. https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/history_quick_provider_performance_unittest.cc:263: auto url_position_to_string = [&test_url](std::string::reverse_iterator it) { On 2016/10/25 18:11:35, dyaroshev wrote: > On 2016/10/24 22:28:29, Peter Kasting wrote: > > Nit: I don't know if we have a rule on this, but I tend to prefer naming local > > lambdas like functions (CamelCaseName) rather than like variables > > (hacker_case_name). > > > > const& on |it|? > > Haven't got around to it yet. Done, if I haven't missed anything. https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/history_quick_provider_performance_unittest.cc:274: for (auto j = group_start; j != group_end; ++j) On 2016/10/25 18:11:34, dyaroshev wrote: > On 2016/10/24 22:28:29, Peter Kasting wrote: > > Nit: Why not just > > > > for (; i != group_end; ++i) { > > > > Then use |i| in the body and eliminate the "i = group_end;" line below. > > Haven't got around to it yet. I've done something cleaner than before. 
https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/history_quick_provider_performance_unittest.cc:279: url_position_to_string(group_end); On 2016/10/24 22:28:29, Peter Kasting wrote: > Nit: I'd probably just inline this into the next statement, but whatever Done. https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... File components/omnibox/browser/history_quick_provider_unittest.cc (right): https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/history_quick_provider_unittest.cc:299: &i](ui::PageTransition transtition) mutable { On 2016/10/24 22:28:29, Peter Kasting wrote: > Nit: Can just use [&] I thought it wasn't allowed. Done. https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... File components/omnibox/browser/history_test_util.cc (right): https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/history_test_util.cc:29: typed_count, last_visit_time.ToInternalValue()); On 2016/10/25 18:11:35, dyaroshev wrote: > On 2016/10/24 22:28:29, Peter Kasting wrote: > > Don't use StringPrintf() to put the values directly into your SQL statement; > use > > Sql::Statement::BindXXX() instead. See for example how > > > https://cs.chromium.org/chromium/src/chrome/browser/extensions/activity_log/a... > > does this kind of thing. (2 places) > > Haven't got around to it yet. This was originally not my code, I just refactored it out. But, since you asked, I reworked it with using HistoryDatabase instead of unwrapped SQL statements. Looks cleaner and less error prone.
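For anyone hitting the same wall as the DISALLOW_COPY_AND_ASSIGN exchange above: the macro works fine in testing::Test subclasses, but the trailing semicolon is mandatory, and the compile errors from omitting it are opaque. A minimal correct usage (the class name is made up):

#include "base/macros.h"
#include "testing/gtest/include/gtest/gtest.h"

class SomePerfTest : public testing::Test {
 public:
  SomePerfTest() {}

 private:
  DISALLOW_COPY_AND_ASSIGN(SomePerfTest);  // The trailing ';' is required.
};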
https://codereview.chromium.org/2300323003/diff/100001/components/BUILD.gn File components/BUILD.gn (right): https://codereview.chromium.org/2300323003/diff/100001/components/BUILD.gn#ne... components/BUILD.gn:465: "omnibox/browser/history_quick_provider_performance_unittest.cc", This could go either way. If we're going to add a bunch more files to this list, splitting it up does make more sense. If it stays just a couple of random files, this current structure makes more sense. My impression is that these perftest targets don't really grow that much over time, so I think it's fine to leave as-is.
I don't see anything major still outstanding. LGTM when existing issues are all resolved. https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... File components/omnibox/browser/history_quick_provider_performance_unittest.cc (right): https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro... components/omnibox/browser/history_quick_provider_performance_unittest.cc:89: void PrepareData(std::size_t data_count); On 2016/10/24 22:28:28, Peter Kasting wrote: > Nit: Add comments on these functions about what they do (in particular, what the > arguments mean). Still need this https://codereview.chromium.org/2300323003/diff/140001/components/omnibox/bro... File components/omnibox/browser/history_quick_provider_performance_unittest.cc (right): https://codereview.chromium.org/2300323003/diff/140001/components/omnibox/bro... components/omnibox/browser/history_quick_provider_performance_unittest.cc:28: constexpr size_t kFakeHashLength = 10; Nit: It's legal to do either way, but I would define these in the individual functions that use them, so their values are immediately apparent while reading the local code. https://codereview.chromium.org/2300323003/diff/140001/components/omnibox/bro... components/omnibox/browser/history_quick_provider_performance_unittest.cc:36: CR_DEFINE_STATIC_LOCAL(std::mt19937, engine, (0)); Why pass 0 explicitly? Just using the default seed seems good; this makes it look as if zero is an important value. https://codereview.chromium.org/2300323003/diff/140001/components/omnibox/bro... components/omnibox/browser/history_quick_provider_performance_unittest.cc:56: CHECK(row.url().is_valid()); Nit: Prefer ASSERT_TRUE to CHECK in test code (causes test to exit cleanly instead of crashing) https://codereview.chromium.org/2300323003/diff/140001/components/omnibox/bro... components/omnibox/browser/history_quick_provider_performance_unittest.cc:65: base::TimeDelta TimeAction(Action action) { Nit: This helper is only called once; is there really benefit to breaking it out separately? https://codereview.chromium.org/2300323003/diff/140001/components/omnibox/bro... components/omnibox/browser/history_quick_provider_performance_unittest.cc:143: // |FillHistoryDB()| must be called before |RebuildFromHistory()|. This will Nit: No || on function names https://codereview.chromium.org/2300323003/diff/140001/components/omnibox/bro... File components/omnibox/browser/history_quick_provider_unittest.cc (right): https://codereview.chromium.org/2300323003/diff/140001/components/omnibox/bro... components/omnibox/browser/history_quick_provider_unittest.cc:284: for (auto* it = test_data; it != test_data + data_count; ++it) { Nit: May be out of scope for this patch, but it might be a nice cleanup to change the GetTestData() methods in this file to return a vector (instead of using size/pointer outparams), which could then be iterated over here with range-based for. Returning a vector could be done fairly easily by moving the file-scope data constants into the functions akin to:

std::vector<TestURLInfo> HistoryQuickProviderTest::GetTestData() {
  return {
      {"http://www.google.com/", "Google", 3, 3, 0},
      {"http://slashdot.org/favorite_page.html", "Favorite page", 200, 100, 0},
      ...
  };
}

(Not sure offhand whether an explicit "std::vector<TestURLInfo>(" is needed here to make this compile.)
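On the default-seed comment above: a <random>-based generator would look roughly like this sketch (names are illustrative; in Chromium proper the static engine would typically go through CR_DEFINE_STATIC_LOCAL, as in the quoted code). Note that std::mt19937's default constructor uses a fixed seed, so the output is still deterministic across runs.

#include <random>
#include <string>

std::string GenerateShortString(size_t length) {
  static constexpr char kSyms[] = "abcdefghijklmnopqrstuvwxyz0123456789";
  // Default-constructed mt19937 always starts from the same fixed seed.
  static std::mt19937 engine;
  // Inclusive range over valid indices (sizeof includes the '\0').
  std::uniform_int_distribution<size_t> pick_index(0, sizeof(kSyms) - 2);
  std::string result;
  for (size_t i = 0; i < length; ++i)
    result += kSyms[pick_index(engine)];
  return result;
}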
(BTW, don't actually submit this until I get the go-ahead on using <random>... still arguing about that one.)
@pkasting - can you please point me to the discussion about <random>? Other than that, I think this CL is good to go.

https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro...
File components/omnibox/browser/history_quick_provider_performance_unittest.cc (right):

https://codereview.chromium.org/2300323003/diff/100001/components/omnibox/bro...
components/omnibox/browser/history_quick_provider_performance_unittest.cc:89: void PrepareData(std::size_t data_count);
On 2016/10/27 00:59:38, Peter Kasting wrote:
> On 2016/10/24 22:28:28, Peter Kasting wrote:
> > Nit: Add comments on these functions about what they do (in particular, what the arguments mean).
>
> Still need this

Done.

https://codereview.chromium.org/2300323003/diff/140001/components/omnibox/bro...
File components/omnibox/browser/history_quick_provider_performance_unittest.cc (right):

https://codereview.chromium.org/2300323003/diff/140001/components/omnibox/bro...
components/omnibox/browser/history_quick_provider_performance_unittest.cc:28: constexpr size_t kFakeHashLength = 10;
On 2016/10/27 00:59:38, Peter Kasting wrote:
> Nit: It's legal to do either way, but I would define these in the individual functions that use them, so their values are immediately apparent while reading the local code.

Done.

https://codereview.chromium.org/2300323003/diff/140001/components/omnibox/bro...
components/omnibox/browser/history_quick_provider_performance_unittest.cc:36: CR_DEFINE_STATIC_LOCAL(std::mt19937, engine, (0));
On 2016/10/27 00:59:38, Peter Kasting wrote:
> Why pass 0 explicitly? Just using the default seed seems good; this makes it look as if zero is an important value.

Done.

https://codereview.chromium.org/2300323003/diff/140001/components/omnibox/bro...
components/omnibox/browser/history_quick_provider_performance_unittest.cc:56: CHECK(row.url().is_valid());
On 2016/10/27 00:59:38, Peter Kasting wrote:
> Nit: Prefer ASSERT_TRUE to CHECK in test code (causes test to exit cleanly instead of crashing)

Done.

https://codereview.chromium.org/2300323003/diff/140001/components/omnibox/bro...
components/omnibox/browser/history_quick_provider_performance_unittest.cc:65: base::TimeDelta TimeAction(Action action) {
On 2016/10/27 00:59:38, Peter Kasting wrote:
> Nit: This helper is only called once; is there really benefit to breaking it out separately?

Done.

https://codereview.chromium.org/2300323003/diff/140001/components/omnibox/bro...
components/omnibox/browser/history_quick_provider_performance_unittest.cc:143: // |FillHistoryDB()| must be called before |RebuildFromHistory()|. This will
On 2016/10/27 00:59:38, Peter Kasting wrote:
> Nit: No || on function names

Done.

https://codereview.chromium.org/2300323003/diff/140001/components/omnibox/bro...
File components/omnibox/browser/history_quick_provider_unittest.cc (right):

https://codereview.chromium.org/2300323003/diff/140001/components/omnibox/bro...
components/omnibox/browser/history_quick_provider_unittest.cc:284: for (auto* it = test_data; it != test_data + data_count; ++it) {
On 2016/10/27 00:59:39, Peter Kasting wrote:
> Nit: May be out of scope for this patch, but it might be a nice cleanup to change the GetTestData() methods in this file to return a vector (instead of using size/pointer outparams), which could then be iterated over here with range-based for.
>
> Returning a vector could be done fairly easily by moving the file-scope data constants into the functions akin to:
>
> std::vector<TestURLInfo> HistoryQuickProviderTest::GetTestData() {
>   return {
>       {"http://www.google.com/", "Google", 3, 3, 0},
>       {"http://slashdot.org/favorite_page.html", "Favorite page", 200, 100, 0},
>       ...
>   };
> }
>
> (Not sure offhand whether an explicit "std::vector<TestURLInfo>(" is needed here to make this compile.)

Done.
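[On the default-seed nit above: a default-constructed std::mt19937 is seeded with std::mt19937::default_seed, so the generator stays deterministic across runs without the conspicuous literal 0. A standalone sketch of the idea; the helper name and charset are made up, not taken from the test:

  #include <cstddef>
  #include <random>
  #include <string>

  // Hypothetical helper: builds a fixed-length fake "hash" suffix for
  // generated test URLs.
  std::string FakeHash(std::size_t length) {
    static constexpr char kCharset[] = "abcdefghijklmnopqrstuvwxyz0123456789";
    // Default construction uses std::mt19937::default_seed: still
    // deterministic, but zero no longer looks like a meaningful value.
    static std::mt19937 engine;
    // sizeof(kCharset) - 2 skips the trailing '\0' and gives the last index.
    std::uniform_int_distribution<std::size_t> pick(0, sizeof(kCharset) - 2);
    std::string hash;
    for (std::size_t i = 0; i < length; ++i)
      hash += kCharset[pick(engine)];
    return hash;
  }
]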
The CQ bit was checked by dyaroshev@yandex-team.ru to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by dyaroshev@yandex-team.ru
The CQ bit was checked by dyaroshev@yandex-team.ru to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: Try jobs failed on following builders: cast_shell_android on master.tryserver.chromium.android (JOB_FAILED, https://build.chromium.org/p/tryserver.chromium.android/builders/cast_shell_a...) ios-device on master.tryserver.chromium.mac (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.mac/builders/ios-device/builds...)
can you please replace HQP with HistoryQuickProvider in the CL description? The CL overall looks good, but I want to have a final look once it's in a state that the bots are green
The <random> discussion is at https://groups.google.com/a/chromium.org/forum/#!topic/cxx/MLgK9vCE4BA . I'm just waiting to see what the style arbiters think; so far the discussion has been largely tangential.

I found more things that could be cleaned up in history_quick_provider_unittest.cc, but we're well outside the bounds of what needs to be done in this CL by this point, so feel free not to do them.

https://codereview.chromium.org/2300323003/diff/160001/components/omnibox/bro...
File components/omnibox/browser/history_quick_provider_performance_unittest.cc (right):

https://codereview.chromium.org/2300323003/diff/160001/components/omnibox/bro...
components/omnibox/browser/history_quick_provider_performance_unittest.cc:84: template <typename PieceIt>
Nit: Comments on these next two functions too would be nice

https://codereview.chromium.org/2300323003/diff/160001/components/omnibox/bro...
File components/omnibox/browser/history_quick_provider_unittest.cc (right):

https://codereview.chromium.org/2300323003/diff/160001/components/omnibox/bro...
components/omnibox/browser/history_quick_provider_unittest.cc:44: struct TestURLInfo {
Nit: Struct definition could be protected within HistoryQuickProviderTest

https://codereview.chromium.org/2300323003/diff/160001/components/omnibox/bro...
components/omnibox/browser/history_quick_provider_unittest.cc:184: base::MessageLoop message_loop_;
Nit: Data members should be private

https://codereview.chromium.org/2300323003/diff/160001/components/omnibox/bro...
components/omnibox/browser/history_quick_provider_unittest.cc:190: };
Nit: DISALLOW_COPY_AND_ASSIGN

https://codereview.chromium.org/2300323003/diff/160001/components/omnibox/bro...
components/omnibox/browser/history_quick_provider_unittest.cc:753: };
Nit: Private DISALLOW_COPY_AND_ASSIGN

https://codereview.chromium.org/2300323003/diff/160001/components/omnibox/bro...
components/omnibox/browser/history_quick_provider_unittest.cc:780: "tlpd - bw korean - player "
Nit: Strings like this don't need to be broken across two lines.

Other strings in here could be wrapped differently as well, but it's mostly cases like this and a couple below where we seem to wrap needlessly that would be nice to fix.
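[A condensed sketch of the fixture shape these nits point at; member names are illustrative rather than the real test's, and the include paths assume DISALLOW_COPY_AND_ASSIGN still lives in base/macros.h:

  #include <string>
  #include <vector>

  #include "base/macros.h"
  #include "base/message_loop/message_loop.h"
  #include "testing/gtest/include/gtest/gtest.h"

  class HistoryQuickProviderTest : public testing::Test {
   protected:
    // Only the fixture and its tests need this type, so it lives here
    // rather than at file scope.
    struct TestURLInfo {
      std::string url;
      std::string title;
    };

   private:
    // Data members stay private; tests reach them via protected accessors.
    base::MessageLoop message_loop_;
    std::vector<TestURLInfo> test_data_;

    DISALLOW_COPY_AND_ASSIGN(HistoryQuickProviderTest);
  };
]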
Description was changed from ========== Performance tests for HQP. This CL adds performance test for HQP: typing and erasing a popular url (10000 similar entries). Based on HQP unittests. BUG=643668 ========== to ========== Performance tests for History Quick Provier. This CL adds performance test for HQP: typing and erasing a popular url (10000 similar entries). Based on HQP unittests. BUG=643668 ==========
Description was changed from ========== Performance tests for History Quick Provier. This CL adds performance test for HQP: typing and erasing a popular url (10000 similar entries). Based on HQP unittests. BUG=643668 ========== to ========== Performance tests for History Quick Provider. This CL adds performance test for HQP: typing and erasing a popular url (10000 similar entries). Based on HQP unittests. BUG=643668 ==========
Description was changed from ========== Performance tests for History Quick Provider. This CL adds performance test for HQP: typing and erasing a popular url (10000 similar entries). Based on HQP unittests. BUG=643668 ========== to ========== Performance tests for HistoryQuickProvider. This CL adds performance test for HQP: typing and erasing a popular url (10000 similar entries). Based on HQP unittests. BUG=643668 ==========
@pkasting - I fixed what you've pointed out. There is more to be done, but this CL is bigger than it should be anyway.

https://codereview.chromium.org/2300323003/diff/160001/components/omnibox/bro...
File components/omnibox/browser/history_quick_provider_performance_unittest.cc (right):

https://codereview.chromium.org/2300323003/diff/160001/components/omnibox/bro...
components/omnibox/browser/history_quick_provider_performance_unittest.cc:84: template <typename PieceIt>
On 2016/10/27 09:00:56, Peter Kasting wrote:
> Nit: Comments on these next two functions too would be nice

Done.

https://codereview.chromium.org/2300323003/diff/160001/components/omnibox/bro...
File components/omnibox/browser/history_quick_provider_unittest.cc (right):

https://codereview.chromium.org/2300323003/diff/160001/components/omnibox/bro...
components/omnibox/browser/history_quick_provider_unittest.cc:44: struct TestURLInfo {
On 2016/10/27 09:00:56, Peter Kasting wrote:
> Nit: Struct definition could be protected within HistoryQuickProviderTest

Done.

https://codereview.chromium.org/2300323003/diff/160001/components/omnibox/bro...
components/omnibox/browser/history_quick_provider_unittest.cc:184: base::MessageLoop message_loop_;
On 2016/10/27 09:00:56, Peter Kasting wrote:
> Nit: Data members should be private

Done.

https://codereview.chromium.org/2300323003/diff/160001/components/omnibox/bro...
components/omnibox/browser/history_quick_provider_unittest.cc:190: };
On 2016/10/27 09:00:56, Peter Kasting wrote:
> Nit: DISALLOW_COPY_AND_ASSIGN

Done.

https://codereview.chromium.org/2300323003/diff/160001/components/omnibox/bro...
components/omnibox/browser/history_quick_provider_unittest.cc:753: };
On 2016/10/27 09:00:56, Peter Kasting wrote:
> Nit: Private DISALLOW_COPY_AND_ASSIGN

Done.

https://codereview.chromium.org/2300323003/diff/160001/components/omnibox/bro...
components/omnibox/browser/history_quick_provider_unittest.cc:780: "tlpd - bw korean - player "
On 2016/10/27 09:00:56, Peter Kasting wrote:
> Nit: Strings like this don't need to be broken across two lines.
>
> Other strings in here could be wrapped differently as well, but it's mostly cases like this and a couple below where we seem to wrap needlessly that would be nice to fix.

Done.
The CQ bit was checked by dyaroshev@yandex-team.ru to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: This issue passed the CQ dry run.
On 2016/10/27 07:44:30, jochen wrote: > can you please replace HQP with HistoryQuickProvider in the CL description? > > The CL overall looks good, but I want to have a final look once it's in a state > that the bots are green All bots are green.
lgtm
@nduca, @pkasting

We might have a problem. I redid the measurements of "flat_sets/flat_maps without optimising insertions in the tight spot" and they have horrible statistical outliers. Will the performance infrastructure handle it? On average we'll still see that they sometimes lose to std::set/std::map, but by a small margin.

Here are the results:
0-4 = [1,87,9,6,6] ms // avg 21.8
5-9 = [6,6,6,87,9] ms // avg 22.8
10-14 = [6,6,6,81,9] ms // avg 21.6
15-19 = [7,6,6,6,6] ms // avg 6.2
20-24 = [7,8,7,6,7] ms // avg 7
25-29 = [7,7,7,6,6] ms // avg 6.6
30-34 = [82,9,7,7,7] ms // avg 22.4
35-39 = [7,7,7,7,7] ms // avg 7
40-44 = [8,7,7,9,7] ms // avg 7.6
45-49 = [7,6,1,1,1] ms // avg 3.2
50-54 = [1,1,1,1,1] ms // avg 1
On 2016/10/28 08:25:18, dyaroshev wrote:
> @nduca, @pkasting
>
> We might have a problem. I redid the measurements of "flat_sets/flat_maps without optimising insertions in the tight spot" and they have horrible statistical outliers. Will the performance infrastructure handle it? On average we'll still see that they sometimes lose to std::set/std::map, but by a small margin.
>
> Here are the results:
> 0-4 = [1,87,9,6,6] ms // avg 21.8
> 5-9 = [6,6,6,87,9] ms // avg 22.8
> 10-14 = [6,6,6,81,9] ms // avg 21.6
> 15-19 = [7,6,6,6,6] ms // avg 6.2
> 20-24 = [7,8,7,6,7] ms // avg 7
> 25-29 = [7,7,7,6,6] ms // avg 6.6
> 30-34 = [82,9,7,7,7] ms // avg 22.4
> 35-39 = [7,7,7,7,7] ms // avg 7
> 40-44 = [8,7,7,9,7] ms // avg 7.6
> 45-49 = [7,6,1,1,1] ms // avg 3.2
> 50-54 = [1,1,1,1,1] ms // avg 1

Can you explain these results? Most of your runs are around 7 ms, but you have runs an order of magnitude above and below that. That seems worrisome indeed.
On 2016/10/28 08:51:26, Peter Kasting wrote:
> On 2016/10/28 08:25:18, dyaroshev wrote:
> > @nduca, @pkasting
> >
> > We might have a problem. I redid the measurements of "flat_sets/flat_maps without optimising insertions in the tight spot" and they have horrible statistical outliers. Will the performance infrastructure handle it? On average we'll still see that they sometimes lose to std::set/std::map, but by a small margin.
> >
> > Here are the results:
> > 0-4 = [1,87,9,6,6] ms // avg 21.8
> > 5-9 = [6,6,6,87,9] ms // avg 22.8
> > 10-14 = [6,6,6,81,9] ms // avg 21.6
> > 15-19 = [7,6,6,6,6] ms // avg 6.2
> > 20-24 = [7,8,7,6,7] ms // avg 7
> > 25-29 = [7,7,7,6,6] ms // avg 6.6
> > 30-34 = [82,9,7,7,7] ms // avg 22.4
> > 35-39 = [7,7,7,7,7] ms // avg 7
> > 40-44 = [8,7,7,9,7] ms // avg 7.6
> > 45-49 = [7,6,1,1,1] ms // avg 3.2
> > 50-54 = [1,1,1,1,1] ms // avg 1
>
> Can you explain these results? Most of your runs are around 7 ms, but you have runs an order of magnitude above and below that. That seems worrisome indeed.

Yes, I think so. HQP does caching: when we miss the cache we have to rebuild, and we end up here https://cs.chromium.org/chromium/src/components/omnibox/browser/url_index_pri... - lots of insertions, which is not good for flat_sets/maps. If we bulk-insert all the elements, everything evens out. I'll go over all of this in detail in the upcoming proposal to add flat containers.

The results are expected. My concern is whether the graph will look alright.
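[To illustrate the bulk-insertion point: with a sorted-vector ("flat") container, inserting one element at a time shifts the tail on every insert, whereas a bulk build sorts once at the end. A self-contained sketch, with a plain std::vector standing in for a flat container and HistoryID as a stand-in alias, not the real HQP type:

  #include <algorithm>
  #include <vector>

  using HistoryID = int;  // Stand-in; the real type is richer.

  // One at a time: each insert shifts the vector's tail, O(n) per element
  // and O(n^2) overall -- the pattern hit during a cache-miss rebuild.
  void InsertOneByOne(std::vector<HistoryID>& flat,
                      const std::vector<HistoryID>& items) {
    for (HistoryID id : items) {
      auto it = std::lower_bound(flat.begin(), flat.end(), id);
      if (it == flat.end() || *it != id)
        flat.insert(it, id);
    }
  }

  // Bulk: append everything, then sort and deduplicate once, O(n log n)
  // overall -- which is why bulk insertion evens the numbers out.
  void InsertBulk(std::vector<HistoryID>& flat,
                  const std::vector<HistoryID>& items) {
    flat.insert(flat.end(), items.begin(), items.end());
    std::sort(flat.begin(), flat.end());
    flat.erase(std::unique(flat.begin(), flat.end()), flat.end());
  }
]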
On 2016/10/28 09:50:42, dyaroshev wrote:
> On 2016/10/28 08:51:26, Peter Kasting wrote:
> > On 2016/10/28 08:25:18, dyaroshev wrote:
> > > @nduca, @pkasting
> > >
> > > We might have a problem. I redid the measurements of "flat_sets/flat_maps without optimising insertions in the tight spot" and they have horrible statistical outliers. Will the performance infrastructure handle it? On average we'll still see that they sometimes lose to std::set/std::map, but by a small margin.
> > >
> > > Here are the results:
> > > 0-4 = [1,87,9,6,6] ms // avg 21.8
> > > 5-9 = [6,6,6,87,9] ms // avg 22.8
> > > 10-14 = [6,6,6,81,9] ms // avg 21.6
> > > 15-19 = [7,6,6,6,6] ms // avg 6.2
> > > 20-24 = [7,8,7,6,7] ms // avg 7
> > > 25-29 = [7,7,7,6,6] ms // avg 6.6
> > > 30-34 = [82,9,7,7,7] ms // avg 22.4
> > > 35-39 = [7,7,7,7,7] ms // avg 7
> > > 40-44 = [8,7,7,9,7] ms // avg 7.6
> > > 45-49 = [7,6,1,1,1] ms // avg 3.2
> > > 50-54 = [1,1,1,1,1] ms // avg 1
> >
> > Can you explain these results? Most of your runs are around 7 ms, but you have runs an order of magnitude above and below that. That seems worrisome indeed.
>
> Yes, I think so. HQP does caching: when we miss the cache we have to rebuild, and we end up here https://cs.chromium.org/chromium/src/components/omnibox/browser/url_index_pri... - lots of insertions, which is not good for flat_sets/maps. If we bulk-insert all the elements, everything evens out. I'll go over all of this in detail in the upcoming proposal to add flat containers.
>
> The results are expected. My concern is whether the graph will look alright.

That would explain the spiky outliers. I am worried about the 1 ms cases as well; why are those so much faster? If this were just a smear across the 1 - 90 ms range I could understand that it's about the aggregate number of insertions, but the output looks very modal, and a cache miss that causes an insertion storm only explains one of the two outlier modes.

In any case, we should set up this test so that the effects of plausible outliers are captured in typical output data. If, for example, this test only does 5 runs at a time, then given your results, we'd sometimes have perf runs looking like there was a phantom tripling of the average runtime. Perhaps the arithmetic mean here isn't the mean we want; perhaps we need to show the standard deviation; perhaps we need to do more runs.
On 2016/10/28 10:01:10, Peter Kasting wrote:
> That would explain the spiky outliers. I am worried about the 1 ms cases as well; why are those so much faster? If this were just a smear across the 1 - 90 ms range I could understand that it's about the aggregate number of insertions, but the output looks very modal, and a cache miss that causes an insertion storm only explains one of the two outlier modes.
>
> In any case, we should set up this test so that the effects of plausible outliers are captured in typical output data. If, for example, this test only does 5 runs at a time, then given your results, we'd sometimes have perf runs looking like there was a phantom tripling of the average runtime. Perhaps the arithmetic mean here isn't the mean we want; perhaps we need to show the standard deviation; perhaps we need to do more runs.

This is pure guessing, but I think it has to do with the input itself. The last 10 characters are a fake hash, so we get not 10000 matches but far fewer.

I think this ought to be done in the perf-test infrastructure. I don't compute an average or anything; I just print the results in a list. However, the question remains: does the performance infrastructure handle this?
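[A sketch of the kind of summary being discussed: printing the standard deviation next to the mean makes outlier-heavy runs visible even when only aggregates are reported. All names here are illustrative, not part of the perf infrastructure:

  #include <cmath>
  #include <numeric>
  #include <vector>

  struct Stats {
    double mean;
    double stddev;
  };

  // Assumes at least two samples.
  Stats Summarize(const std::vector<double>& samples_ms) {
    double mean = std::accumulate(samples_ms.begin(), samples_ms.end(), 0.0) /
                  samples_ms.size();
    double sq_sum = 0.0;
    for (double s : samples_ms)
      sq_sum += (s - mean) * (s - mean);
    // Sample standard deviation (n - 1 in the denominator).
    return {mean, std::sqrt(sq_sum / (samples_ms.size() - 1))};
  }
]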
asvitkine@chromium.org changed reviewers: - asvitkine@chromium.org
I'm not sure what you want to be performance testing here but would it make sense to reach into the HQP data structure and simply clear the cache before every iteration? I.e., you'd test the searching and deleting code paths without the cache in place and be measuring performance on those. That seems like a core function that we should be optimizing. --mark
On 2016/10/28 16:57:56, Mark P (sick) wrote:
> I'm not sure what you want to be performance testing here but would it make sense to reach into the HQP data structure and simply clear the cache before every iteration? I.e., you'd test the searching and deleting code paths without the cache in place and be measuring performance on those. That seems like a core function that we should be optimizing.
>
> --mark

I don't exactly see why. The caches themselves are maps and are part of HistoryItemsForTerms, which is what's of interest here. The caches kick in very often in real runs, so clearing them won't make the tests more accurate.

I'm trying to find out whether the performance test as-is, with the current infrastructure, shows such outliers, or whether we ought to modify the test to see them. In other words: does the infrastructure show the distribution?
I'm going to see if I can get the <random> issue resolved today. In the meantime, ran into one more thing:

https://codereview.chromium.org/2300323003/diff/180001/components/omnibox/bro...
File components/omnibox/browser/history_quick_provider_unittest.cc (right):

https://codereview.chromium.org/2300323003/diff/180001/components/omnibox/bro...
components/omnibox/browser/history_quick_provider_unittest.cc:202: FillData();
Needs to be:

ASSERT_NO_FATAL_FAILURE(FillData());

...so that any assertions in FillData() are properly caught.
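[For context on why the wrapper is needed: a fatal ASSERT_* aborts only the function it appears in, not its caller. A minimal gtest sketch; FillData and its parameter are illustrative, not the real test's:

  #include "testing/gtest/include/gtest/gtest.h"

  // Illustrative helper: a fatal ASSERT_* returns from FillData() itself,
  // but does not by itself abort FillData()'s caller.
  void FillData(bool ok) {
    ASSERT_TRUE(ok);
    // Not reached when ok is false.
  }

  TEST(HistoryQuickProviderExample, FillsData) {
    // Without the wrapper, a fatal failure inside FillData() would be
    // recorded but execution here would continue with half-filled data;
    // with it, the test aborts immediately on failure.
    ASSERT_NO_FATAL_FAILURE(FillData(/*ok=*/true));
  }
]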
On 2016/10/31 17:26:00, Peter Kasting wrote: > I'm going to see if I can get the <random> issue resolved today. In the > meantime, ran into one more thing: > > https://codereview.chromium.org/2300323003/diff/180001/components/omnibox/bro... > File components/omnibox/browser/history_quick_provider_unittest.cc (right): > > https://codereview.chromium.org/2300323003/diff/180001/components/omnibox/bro... > components/omnibox/browser/history_quick_provider_unittest.cc:202: FillData(); > Needs to be: > > ASSERT_NO_FATAL_FAILURE(FillData()); > > ...so that any assertions in FillData() are properly caught. Ok. I've been thinking, maybe we want to print out max value of all inputs. This could be useful.
On 2016/10/31 21:57:23, dyaroshev wrote:
> On 2016/10/31 17:26:00, Peter Kasting wrote:
> > I'm going to see if I can get the <random> issue resolved today. In the meantime, ran into one more thing:
> >
> > https://codereview.chromium.org/2300323003/diff/180001/components/omnibox/bro...
> > File components/omnibox/browser/history_quick_provider_unittest.cc (right):
> >
> > https://codereview.chromium.org/2300323003/diff/180001/components/omnibox/bro...
> > components/omnibox/browser/history_quick_provider_unittest.cc:202: FillData();
> > Needs to be:
> >
> > ASSERT_NO_FATAL_FAILURE(FillData());
> >
> > ...so that any assertions in FillData() are properly caught.
>
> Ok.
>
> I've been thinking, maybe we want to print out max value of all inputs. This could be useful.

@pkasting: I see you've pushed <random> into the allowed features. Can I merge now, after fixing ASSERT_NO_FATAL_FAILURE?
On 2016/11/03 23:19:30, dyaroshev wrote:
> On 2016/10/31 21:57:23, dyaroshev wrote:
> > On 2016/10/31 17:26:00, Peter Kasting wrote:
> > > I'm going to see if I can get the <random> issue resolved today. In the meantime, ran into one more thing:
> > >
> > > https://codereview.chromium.org/2300323003/diff/180001/components/omnibox/bro...
> > > File components/omnibox/browser/history_quick_provider_unittest.cc (right):
> > >
> > > https://codereview.chromium.org/2300323003/diff/180001/components/omnibox/bro...
> > > components/omnibox/browser/history_quick_provider_unittest.cc:202: FillData();
> > > Needs to be:
> > >
> > > ASSERT_NO_FATAL_FAILURE(FillData());
> > >
> > > ...so that any assertions in FillData() are properly caught.
> >
> > Ok.
> >
> > I've been thinking, maybe we want to print out max value of all inputs. This could be useful.
>
> @pkasting: I see you've pushed <random> into the allowed features. Can I merge now, after fixing ASSERT_NO_FATAL_FAILURE?

I think so; there's no known styleguide blocker and you have my review signoff.
The CQ bit was checked by dyaroshev@yandex-team.ru
The patchset sent to the CQ was uploaded after l-g-t-m from pkasting@chromium.org, jochen@chromium.org Link to the patchset: https://codereview.chromium.org/2300323003/#ps200001 (title: "Review, round 4.")
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
Message was sent while issue was closed.
Description was changed from ========== Performance tests for HistoryQuickProvider. This CL adds performance test for HQP: typing and erasing a popular url (10000 similar entries). Based on HQP unittests. BUG=643668 ========== to ========== Performance tests for HistoryQuickProvider. This CL adds performance test for HQP: typing and erasing a popular url (10000 similar entries). Based on HQP unittests. BUG=643668 ==========
Message was sent while issue was closed.
Committed patchset #11 (id:200001)
Message was sent while issue was closed.
Description was changed from ========== Performance tests for HistoryQuickProvider. This CL adds performance test for HQP: typing and erasing a popular url (10000 similar entries). Based on HQP unittests. BUG=643668 ========== to ========== Performance tests for HistoryQuickProvider. This CL adds performance test for HQP: typing and erasing a popular url (10000 similar entries). Based on HQP unittests. BUG=643668 Committed: https://crrev.com/7d7c15ee9c7481355f4b42315d386b0eccc9fd75 Cr-Commit-Position: refs/heads/master@{#429835} ==========
Message was sent while issue was closed.
Patchset 11 (id:??) landed as https://crrev.com/7d7c15ee9c7481355f4b42315d386b0eccc9fd75 Cr-Commit-Position: refs/heads/master@{#429835} |