Does blocking ads make your browser faster or slower?
It depends. It takes time to classify whether a resource on a page is an ad. But once classified, intrusive ads can be filtered out, saving download time and rendering time.
At first, we didn’t know if the benefits outweighed the costs, as the performance of an ad blocker is not a single, simple number. It’s hard to define and even harder to measure and improve, but it’s not impossible. Here’s how we approached this problem and incorporated performance metrics into the development of the eyeo Browser Ad-Filtering Solution.
You might interpret “ad-filtering performance” as how good the software is at removing intrusive ads without removing legitimate content. The general thinking is: the fewer intrusive ads get through, the better.
In our view, this isn’t ad-filtering performance; it’s ad-filtering quality. We ensure quality by demanding (and verifying) that eyeo Browser Ad-Filtering Solution passes all test cases defined on https://abptestpages.org/. We treat failed tests as software bugs, not performance problems.
For the purpose of this article, performance means:
- how quickly pages load with ad filtering enabled
- how much memory ad filtering consumes
There are many other aspects of performance one could measure: how much data gets transferred over the network? Do we reduce network traffic by refraining from downloading ads, or increase it by downloading filter lists? How much extra disk space do we use for filter lists?
Let’s focus on page loading speed and memory consumption.
Modern computers are fast and memory is cheap, so does performance even matter anymore?
If you’re reading this on a modern laptop, you probably wouldn’t notice if your ad blocker became slower or faster, even if the change was large. But you would notice the difference in a mobile browser on a mid-range or low-end phone. Furthermore, inefficient algorithms leave the CPU running in a high-powered state for longer, which drains your battery.
Additionally, wasteful code emits more greenhouse gases. While the effect is tiny on an individual scale, when multiplied by the millions of users browsing every day, it adds up.
When eyeo needed to deliver an ad-filtering solution for Android to our partners who were building Chromium-based browsers, we discovered a problem: Chromium doesn’t support extensions in its mobile version. This means that our browser extension, Adblock Plus, wouldn’t have been a viable offering, at least not out of the box.
We navigated this by creating a fork of Chromium, modified to run the filtering engine of Adblock Plus in a dedicated instance of V8, the browser’s JavaScript engine. The eyeo Browser Ad-Filtering Solution started as a clever way to run the core of a desktop browser extension in a mobile browser.
This approach worked, but even without established benchmarks, we knew it had performance issues. On startup, the browser visibly froze for several seconds as the JavaScript code struggled to parse the large filter lists. We also suspected that this solution consumed too much memory, a resource far scarcer on Android than on desktop computers. It was possible to work around these problems by using minified filter lists, but at the cost of significantly worse ad-filtering quality.
We decided we needed to approach the problem methodically.
The first aspect of performance we knew we had to measure was memory consumption.
One way to measure the impact of the eyeo Browser Ad-Filtering Solution on memory consumption is to direct the ad-filtering browser to a website, measure its memory footprint as the page is being rendered, then compare that against the footprint of a stock Chromium browser with no ad filtering. Repeating this for multiple websites, with varying amounts of ads, should give us a good approximation of the overhead of our code.
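To make this concrete, here’s a minimal sketch of such a comparison on Android, not our actual harness. It assumes both browsers are installed on a device attached via adb; the package names are placeholders, and the “TOTAL PSS” figure it samples from dumpsys meminfo is only a rough stand-in for the metrics we discuss next (it’s closest to the proportional resident size).

```python
# A rough sketch, not eyeo's actual harness: compare the memory
# footprint of an ad-filtering browser against stock Chromium on an
# attached Android device. Package names below are placeholders.
import re
import subprocess
import time

BROWSERS = {
    "ad_filtering": "com.example.adfiltering_browser",  # hypothetical package
    "stock": "org.chromium.chrome.stable",              # hypothetical package
}

def open_url(package, url):
    # Fire a VIEW intent so the given browser loads the page.
    subprocess.run(
        ["adb", "shell", "am", "start",
         "-a", "android.intent.action.VIEW", "-d", url, package],
        check=True)

def total_pss_kb(package):
    # PSS (proportional set size) from dumpsys; the output format varies
    # slightly between Android versions, hence the loose fallback regex.
    out = subprocess.run(
        ["adb", "shell", "dumpsys", "meminfo", package],
        capture_output=True, text=True, check=True).stdout
    match = re.search(r"TOTAL PSS:\s*(\d+)", out) or re.search(r"TOTAL\s+(\d+)", out)
    return int(match.group(1)) if match else None

for label, package in BROWSERS.items():
    open_url(package, "https://example.com")
    time.sleep(15)  # crude: let the page settle before sampling
    print(f"{label}: {total_pss_kb(package)} kB PSS")
```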
But which memory metric should we use? The amount of memory a program allocates is different from the amount of memory the program uses. And the amount of memory the program requests from the operating system is different from the amount the operating system reserves for it. Some of the memory is shared between processes. There is no single number.
Some of the available memory metrics, each line representing a different way of counting.
Additionally, we needed to decide how and when we wanted to measure memory consumption.
After much research, we settled on two metrics:
memory:chrome:all_processes:reported_by_chrome:effective_size
which represents the amount of memory all Chromium processes requested from the operating system, as tracked by Chromium itself. This shows how careful we are with allocations in our code but may undercount the actual impact on the operating system’s available resources.
memory:chrome:all_processes:reported_by_os:system_memory:proportional_resident_size
which represents the operating system’s view of how many resources it holds reserved for the program. This metric likely overcounts what the program actually requested, but it’s always the highest number, so we treat it as the upper bound, or worst-case approximation.
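If you drive the browser with Chromium’s telemetry harness (tools/perf/run_benchmark), both metrics end up in its histograms.json output. The sketch below shows one way to pull them out; the HistogramSet field names (“name”, “sampleValues”) reflect the format as we understand it and may differ between Chromium versions.

```python
# A hedged sketch: extract the two memory metrics from a histograms.json
# file as written by Chromium's telemetry. Field names are assumptions
# based on the HistogramSet format and may vary by Chromium version.
import json
from statistics import mean

METRICS = (
    "memory:chrome:all_processes:reported_by_chrome:effective_size",
    "memory:chrome:all_processes:reported_by_os"
    ":system_memory:proportional_resident_size",
)

def metric_averages(path):
    with open(path) as f:
        histogram_set = json.load(f)  # a JSON array of histogram dicts
    averages = {}
    for entry in histogram_set:
        if entry.get("name") in METRICS and entry.get("sampleValues"):
            # Average the samples collected across repeated page loads.
            averages[entry["name"]] = mean(entry["sampleValues"])
    return averages

for name, value in metric_averages("histograms.json").items():
    print(f"{name}: {value / 2**20:.1f} MiB")  # values are in bytes
```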
The second aspect we wanted to measure was page loading speed. As before, the approach was to load many websites with ad filtering enabled and compare the results to stock Chromium, to establish the impact of our code.
Again, there were many metrics to choose from:
Some of the ways Chromium measures page loading time. Notice the large discrepancies between them.
We decided to track four metrics:
timeToInteractive – the time it takes for the website to become interactive and start accepting clicks and scrolls. This should show delays when loading waits for the classification of resources into ads or legitimate content.
timeToFirstContentfulPaint – the time it takes for the browser to render the first visible glimpses of the website.
timeToOnload – the time until the website becomes fully loaded (the browser’s onload event fires). This should show the benefits of not waiting for ads to load.
totalBlockingTime – the time spent in long tasks (50 ms or more) that block user input (clicks, keystrokes). This should show whether we add perceptible “jank” or interruptions.
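As an illustration, two of these four metrics can be approximated straight from the web Performance API. The sketch below drives Chromium through Selenium, which is an assumption on our part rather than the tooling described here; timeToInteractive and totalBlockingTime need a PerformanceObserver watching long tasks from the very start of the load, so they’re left out.

```python
# A minimal sketch, not the actual setup: approximate two of the four
# page-load metrics via the web Performance API, driving Chromium
# through Selenium.
from selenium import webdriver

JS_METRICS = """
const nav = performance.getEntriesByType('navigation')[0];
const fcp = performance.getEntriesByType('paint')
    .find(p => p.name === 'first-contentful-paint');
return {
  timeToOnload: nav ? nav.loadEventEnd : null,           // ms since navigation
  timeToFirstContentfulPaint: fcp ? fcp.startTime : null // ms since navigation
};
"""

def measure(url):
    options = webdriver.ChromeOptions()
    options.add_argument("--headless=new")
    driver = webdriver.Chrome(options=options)
    try:
        driver.get(url)  # blocks until the load event fires
        return driver.execute_script(JS_METRICS)
    finally:
        driver.quit()

print(measure("https://example.com"))
```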
We assembled a list of representative websites and found a semi-automated way of running the browser, directing it to each website, and taking measurements of all those metrics.
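Once the runs are in, the comparison itself is simple arithmetic. A sketch, with made-up numbers purely for illustration: take the median of repeated runs per site and browser to dampen noise, then report the ad-filtering build’s overhead relative to stock.

```python
# A sketch of the comparison step; the input numbers are made up.
from statistics import median

def overhead_percent(runs):
    """runs: {site: {"ad_filtering": [ms, ...], "stock": [ms, ...]}}"""
    report = {}
    for site, by_browser in runs.items():
        filtered = median(by_browser["ad_filtering"])
        stock = median(by_browser["stock"])
        # Positive means the ad-filtering build was slower on this site.
        report[site] = (filtered - stock) / stock * 100.0
    return report

example = {"example.com": {"ad_filtering": [812, 790, 845],
                           "stock": [760, 772, 751]}}
for site, pct in overhead_percent(example).items():
    print(f"{site}: {pct:+.1f}% timeToOnload overhead")
```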
The next step was to learn how to measure these metrics and what to do with the results.
The journey so far has been exciting, but trust me, we’re just scratching the surface. Hold onto your seats, because in the upcoming part two we’re diving even deeper. Our next post will cover visualizations of memory consumption and page load times, plus the final conclusions. Stay tuned!