In part 1, we talked about why the performance of ad filtering is important to us and which performance metrics we decided to measure.
In part 2, let's dive into how to measure those metrics and what we can do with the results.
When we started running performance tests, we discovered that the results were wildly inconsistent. There were several reasons: differences between test devices, thermal throttling, websites changing their content between visits, fluctuating network conditions, randomized field trials in our builds and plain run-to-run variance.
We had to address all of these problems before we could trust the numbers.
We set up a “device farm” that contains several identical phones (we cross-checked their real-life performance) and designated them as the authoritative source of data: no more at-home testing with random devices. We made sure all phones are configured with a “full power, don’t conserve battery” profile to avoid throttling, which required flashing them with non-standard, development Android images.
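For a rough idea of what that configuration involves, here’s a hedged sketch of adb-based setup (the serial numbers are placeholders, and the exact settings our images need aren’t reproduced here; these are just common examples of switching off power-saving behaviour):

```python
# Hypothetical sketch: push a "full power" profile to each farm device over adb.
# The exact settings depend on the Android image; these are common examples only.
import subprocess

DEVICE_SERIALS = ["FARM-PHONE-01", "FARM-PHONE-02"]  # placeholder serial numbers

def adb(serial: str, *args: str) -> None:
    """Run an `adb shell` command on one device and fail loudly if it errors."""
    subprocess.run(["adb", "-s", serial, "shell", *args], check=True)

for serial in DEVICE_SERIALS:
    adb(serial, "settings", "put", "global", "low_power", "0")  # battery saver off
    adb(serial, "dumpsys", "deviceidle", "disable")             # no Doze idling
    adb(serial, "svc", "power", "stayon", "true")               # keep the device awake
```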
Then we experimented with Android emulators running on PCs, but they were even less consistent and poor at representing the performance of real, physical devices.
So we started using Web Page Replay. It’s an ingenious way of solving both the variability of websites’ content and the fluctuations of network load. You visit a live website once, record all requests the browser sent and all responses it received, store them in a file and then replay them during test runs, with predetermined latencies. You don’t even need a live internet connection anymore.
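To illustrate the record-once, replay-forever idea, here’s a toy sketch (this is not Web Page Replay’s actual implementation or archive format, just the principle):

```python
# Toy illustration of the record/replay principle (not Web Page Replay itself):
# fetch responses once, store them, then serve them back with a fixed latency.
import json
import time
import urllib.request

ARCHIVE = "archive.json"    # hypothetical archive file
REPLAY_LATENCY_S = 0.05     # replayed latency is predetermined, not live network

def record(urls):
    """Visit each URL once and store the response bodies in the archive."""
    archive = {}
    for url in urls:
        with urllib.request.urlopen(url) as response:
            archive[url] = response.read().decode("utf-8", errors="replace")
    with open(ARCHIVE, "w") as f:
        json.dump(archive, f)

def replay(url):
    """Serve the recorded response with a deterministic delay; no live internet."""
    with open(ARCHIVE) as f:
        archive = json.load(f)
    time.sleep(REPLAY_LATENCY_S)
    return archive[url]
```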
We also disabled field trials in our builds so they wouldn’t act as a random factor, and through trial and error we established a set of build flags that delivers the most consistent results.
Even then, there was some unavoidable variance between runs, so we analyzed this setup with our data team to determine how many runs we need to produce statistically significant results and how many outliers to discard.
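The thresholds we eventually settled on with the data team aren’t reproduced here, but the shape of the analysis looks roughly like this (sample values and tolerances are made up):

```python
# Hypothetical sketch of the per-metric analysis: discard outliers with an IQR
# fence, then check whether the surviving runs give a tight enough estimate.
import statistics

def trim_outliers(samples, k=1.5):
    """Drop samples outside the classic Tukey fence of k * IQR."""
    q1, _, q3 = statistics.quantiles(samples, n=4)
    iqr = q3 - q1
    low, high = q1 - k * iqr, q3 + k * iqr
    return [s for s in samples if low <= s <= high]

def is_precise_enough(samples, rel_tolerance=0.02):
    """True if the standard error of the mean is within rel_tolerance of the mean."""
    stderr = statistics.stdev(samples) / len(samples) ** 0.5
    return stderr / statistics.mean(samples) <= rel_tolerance

runs_ms = [512.1, 508.7, 515.3, 509.9, 611.0, 510.4]  # made-up page-load times
kept = trim_outliers(runs_ms)
print(len(runs_ms) - len(kept), "outlier(s) discarded;",
      "precise enough" if is_precise_enough(kept) else "need more runs")
```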
Trustworthy performance numbers are great, but they alone don’t make the software efficient.
The next step was to weave those measurements into our software development process, in a way that lets us track performance over time and find regressions.
The solution: run the full performance suite every night on the device farm and feed the results into a dashboard that tracks each metric over time.
Here are some screenshots of that dashboard, along with explanations of what they show:
Vertical axis is in megabytes. Red line: Chromium with ad filtering; blue line: stock Chromium.
The metric on the left indicates that ad filtering “costs” around 15 MB of memory, but the one on the right indicates there’s pretty much no difference. This shows the importance of choosing the right metric: our optimization efforts are guided by the graph on the left.
Around 10 May 2023, there was a bump in memory consumption. It was a regression in Chromium that wasn’t caused by our ad filtering code and has since been fixed. This shows that we need to be careful about how external factors influence performance.
Anomalies always happen in long-running systems, and it takes a human eye to explain them, case by case. Around 22 June 2023, there was such an anomaly: an infrastructure problem on our end, not a product bug.
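Spotting bumps like these doesn’t have to be entirely manual. A simple rolling-baseline check, sketched below with made-up numbers and thresholds, can flag the nights worth a closer look; deciding whether the cause is our code, upstream Chromium or the test infrastructure still takes that human eye.

```python
# Hypothetical sketch: flag nights whose result exceeds a rolling baseline of the
# previous `window` nights by more than `threshold` (a relative margin).
import statistics

def find_regressions(nightly, window=14, threshold=0.05):
    for i in range(window, len(nightly)):
        baseline = statistics.mean(nightly[i - window:i])
        if nightly[i] > baseline * (1 + threshold):
            yield i, nightly[i], baseline

memory_mb = [142.0] * 20 + [158.0] * 5   # made-up series with a visible bump
for night, value, baseline in find_regressions(memory_mb):
    print(f"night {night}: {value:.1f} MB vs ~{baseline:.1f} MB baseline -> investigate")
```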
Vertical axis is in seconds. Orange bar = “time to onload”, the metric that tracks how long it takes to finish loading the page. Pink bar = “time to interactive”, or how long it takes before the page can accept clicks and scrolls.
On the left is Chromium with ad filtering enabled, and on the right is stock Chromium with no ad filtering.
What we see here is that both metrics are lower (better) with ad filtering enabled, which goes to show that ad filtering does make pages load faster!
However, it wasn’t always like this. Our team has spent over two years optimizing our solution to reach this state; early versions of our code were slower than stock Chromium.
Ad filtering does not guarantee better performance unless it’s implemented really well, and that requires measuring, optimizing and repeating.
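As a side note on what “measuring” means for these particular numbers: “time to onload” maps closely onto the browser’s standard Navigation Timing API, while “time to interactive” has no single standard field and needs heavier, trace-based heuristics. Here is a minimal sketch of reading the former, assuming a WebDriver-controllable build rather than our actual harness:

```python
# Hypothetical sketch: read "time to onload" from the Navigation Timing API via
# Selenium. "Time to interactive" needs trace-based heuristics and isn't shown.
from selenium import webdriver

driver = webdriver.Chrome()          # assumes a chromedriver matching the build under test
driver.get("https://example.com")    # in our setup this would be served by the replay proxy
onload_ms = driver.execute_script(
    "return performance.getEntriesByType('navigation')[0].loadEventEnd;"
)
print(f"time to onload: {onload_ms / 1000.0:.2f} s")
driver.quit()
```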
This is a time series of nightly performance tests, showing the “time to onload” metric, in seconds. In the middle of this time period we integrated a performance improvement, which shows up in the results.
Synthetic, isolated benchmarks of the optimized code showed that this particular fix yielded a 100x improvement in ad classification speed! But since ad classification is just a tiny component of the overall website rendering time, this translated to around a 5-10% real-life improvement.
This shows you can’t base your claims about efficiency on purpose-written benchmarks alone (we have those too!); you need to know the big picture as well.
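The arithmetic behind that gap is just Amdahl’s law: if classification is only a small slice of total render time, even a 100x speed-up of that slice barely moves the total. A quick sanity check, with made-up fractions:

```python
# Amdahl's law: new_time = (1 - p) + p / s, where p is the fraction of total
# time spent in the optimized part and s is its isolated speedup.
def overall_improvement(p, s):
    new_time = (1 - p) + p / s
    return 1 - new_time  # fraction of the total time saved

for share in (0.05, 0.10):  # assume classification is 5-10% of page render time
    print(f"classification share {share:.0%}: "
          f"{overall_improvement(share, s=100):.1%} faster overall")
```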
Optimizing is a game of increments.
Once all those pieces are in place, the last part is embedding performance measurement into our development process.
We did it by running the test suite nightly, reviewing the dashboards for regressions and investigating every anomaly, case by case, until it’s explained.
Implementing this performance-driven approach to development changed eyeo Browser Ad-Filtering Solution profoundly.
We rewrote it from scratch, in C++ – it no longer embeds a JavaScript filtering engine and it’s multi-threaded. It can access filter lists via memory-mapped files to save RAM and stores parsed filters in a zero-overhead binary format that lets it start in milliseconds.
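To make the memory-mapping idea concrete, here’s a minimal sketch of the technique itself (in Python for brevity; the real engine does this in C++, and its zero-overhead binary format is its own and not shown here):

```python
# Minimal illustration of memory-mapping a pre-parsed filter file instead of
# reading it onto the heap: pages are loaded lazily by the OS and can be shared
# between processes, so "loading" is near-instant and costs little private RAM.
import mmap

def open_filter_blob(path):
    with open(path, "rb") as f:
        return mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)

blob = open_filter_blob("filters.bin")  # hypothetical file name
header = blob[:8]                       # e.g. a magic number and version field
```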
Overall, it consumes less memory when using full filter lists than the original solution did with minified lists. It took a lot of effort but it was worth it in the end.
Although this is a significant achievement for us, the work is not complete. Maintaining state-of-the-art performance is not a job you finish; it’s an ongoing process that requires sparks of innovation as well as laborious statistical analysis, updating tests, troubleshooting misbehaving servers and cultivating a team culture that values all of those things.
If you’d like to learn more about eyeo Browser Ad-Filtering Solution, the code is available on GitLab for you to take a look at, and it’s free (as in freedom) software.
Get in touch here if you’d like to use it in your browser and we can discuss the terms of a commercial license.