Why Chromium cares about standards

I wrote a Google-internal version of this post on the train back from W3C TPAC in Seville, but thought this could be useful to the broader Chromium community. At the same time, this is my own personal opinion. I speak for no one other than myself.

Here goes!

TPAC was an amazing week, full of great folks from a large variety of companies, all working together to build the open web. But at least a few folks, mostly from organizations that haven't traditionally contributed to the web, seemed to not fully understand why standards are important.

Such folks working on Chromium, while going through the Blink process, run the risk of getting discouraged. Process without an understanding of its purpose sure seems a lot like pointless bureaucracy.

So this is my attempt to right that wrong and explain...

# Why?

Chromium is an implementation of the open web platform. That's a fundamental fact that we should keep in mind when working on browsers that rely on it.

Given that fundamental fact, the goal of APIs we’re shipping in Chromium is to move the entire web platform forward, not just Chromium browsers. We want to drive these features into the web’s baseline and need to bring other stakeholders with us in order to make that happen.

On top of that, we have to be careful about the APIs we expose to the open web, as there's a good chance that these APIs will be used by some pages, more or less forever.

Removals on the web can be extremely hard and costly, and tbh, they are not fun. At the same time, no one wants to maintain a feature that is known to be a bad idea. So we need to try to make sure that the APIs we ship on the web clear a reasonable quality bar.

We want these APIs to be more-or-less consistent with the rest of the platform. To be ergonomic. To be compatible with existing content, interoperable with other web platform implementations (i.e. other engines, such as WebKit, Gecko and Ladybird) as well as with deployed network components (e.g. CDNs and other intermediaries).

In short, there are a lot of considerations that go into shipping features on the web, in order to make sure that the cost of shipping them (on the platform and on web developers) is minimal and will be outweighed by their usefulness.

This is the reason we can't just design our feature in some internal design doc and treat the standards and Blink processes as checkboxes that need to be checked.

We can't just go through the motions, marking the intent template's fields (Explainer, Specification, TAG review, vendor positions, and so on) as "None" or "N/A".

We need to fill in those fields with meaningful links. But we need to go beyond that and make a genuine effort at achieving the goals that those fields represent.

So let’s talk about the reasons we are filling those fields in the first place!

# Eventual Interoperability

The web's superpower is in its reach. It's a ubiquitous platform and web pages can run on a huge variety of device form factors, operating systems and browsers.

When web developers write web sites, they do that for the broader web, not just for Chromium browsers, at least not when they're doing it right. And if web developers started writing Chromium-only web sites, they'd give up a lot of that reach. That'd be a shame for them, but more importantly, it would also erode user trust in the web, resulting in the entire platform losing prominence.

So, we want web developers to write interoperable sites. They can only do that if the changes we introduce to the platform are interoperable. There are cases where we're introducing capabilities that other implementations do not support, in which case these capabilities won't be immediately interoperable.

In cases where there's a reasonable fallback or a polyfill, we should ensure that:

  1. other implementations don't break as a result of us shipping the feature, and web developers use feature detection to make sure of that (sketched below), and
  2. when other implementations are interested in adopting that capability, they can easily do so in an interoperable way.

We ensure (1) happens through API design, documentation and code examples.
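
To make (1) concrete, here's a minimal sketch of what that feature detection could look like. The `shinyNewFeature` API name is entirely made up, standing in for whatever capability is being shipped:

```js
// A minimal sketch of feature detection with a graceful fallback.
// `navigator.shinyNewFeature` is a made-up API standing in for whatever
// capability is being shipped; `slowButInteroperablePath()` is the existing
// behavior that every engine already supports.
function slowButInteroperablePath(data) {
  // ...the existing, universally supported code path (or a polyfill)...
  return data;
}

async function doTheThing(data) {
  if ('shinyNewFeature' in navigator) {
    // The new capability is available: use it.
    return navigator.shinyNewFeature(data);
  }
  // Engines that haven't shipped the feature don't break; they simply
  // take the fallback path.
  return slowButInteroperablePath(data);
}
```

The important property is that engines which haven't shipped the feature keep working unchanged.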

(2) is the reason we run things through the standards process, vendor positions, developer signals, and web platform tests.
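
Web platform tests are worth calling out here: they're the shared, cross-engine test suite that lets another implementation verify it behaves the same way we do. A hypothetical testharness.js test for the made-up API sketched above (the assertions are just placeholders) might look something like this:

```js
// A hypothetical web-platform-tests (testharness.js) test for the made-up
// `navigator.shinyNewFeature` API sketched above. Shared tests like these
// are what let other engines implement the feature interoperably later on.
test(() => {
  assert_true('shinyNewFeature' in navigator,
              'shinyNewFeature should be exposed on navigator');
}, 'shinyNewFeature is present');

promise_test(async () => {
  const result = await navigator.shinyNewFeature('some input');
  assert_equals(typeof result, 'string', 'the result should be a string');
}, 'shinyNewFeature resolves with a string');
```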

More specifically, in cases where there's no reasonable fallback for developers, the feature we're shipping may not be as useful as it could be until it's supported ubiquitously (at least, not without giving up on reach).

In those cases, we should have an explicit adoption plan with other vendors, and the considerations above become even more important.

# Eventual?

It's entirely understandable that the "eventual" part is frustrating. When there isn't active support and engagement from other implementers, doing all that work now in exchange for nebulous future benefits can feel like a wasted investment. But not doing it would effectively make it impossible for other implementations to (eventually) catch up in an interoperable way, and would leave us with a forked platform. That, in turn, would mean diminished reach for developers, causing them to invest their efforts elsewhere.

Even if both the TAG and other vendors currently oppose a certain feature, taking in feedback about it helps remove future opposition, once they see value in the use case or get enough developer demand for the feature.

# Consistency

Beyond interop, the process aims to ensure that the APIs we're shipping are ergonomic, easy for web developers to use, and generally consistent with the rest of the platform.

This is where the TAG review comes in. The W3C TAG is composed of web platform experts who represent the broader industry: browser engineers, web developers, privacy advocates and more.

Their design reviews aim to give the APIs we ship the quality attributes we're after. Beyond that, the TAG is an influential stakeholder that we'd like to get on board with the feature, and reviews help with that as well.

But reviews only do what they're intended to do if we go into them actively seeking feedback. If the TAG review is a checkbox, and we go into it ready to justify our initial design choices or agree to disagree, that's just an unpleasant experience for all involved.

Hence it’s important to engage with the TAG early, respond to their feedback whenever it’s actionable and integrate it into our API design.

# Transparency

Another major goal of the process is transparency about what we're working on. We can't be transparent while using industry-specific jargon, or while hiding what the actual changes mean behind walls of discussion text and processing models.

This is where explainers come into play. When folks look at a passing intent, an explainer helps them better understand what the feature is all about and how web developers are supposed to use it.

That's true for web developers coming in from the Intent To Ship bot on social media, for folks from the web community who want to understand what we're shipping, as well as for API owners who review dozens of intents in a typical week.

In all of these cases, you don't want to force people to jump through hoops, like reading through complex GitHub discussions and/or algorithmic pseudo-code, just to understand what your feature is, what it does, and what they'd need to do in order to use it. As a feature owner, minimizing the time it takes to review your feature is probably a good investment on your part, since it also reduces the time the overall process takes. And you definitely don't want people to have to be subject-matter experts in order to understand what we're trying to ship.

While explainers don't replace either specifications or the final developer-facing documentation, they sit somewhere in between and play a crucial role in the process's transparency. They enable us to bring the broader community along for the ride.

# In closing

I know what you're thinking. The above sounds like a lot of work. And it is. Creating all the related artifacts takes time.

Taking feedback into account can delay shipping timelines, and requires re-opening an implementation you thought was already done. It can be tedious. I get it.

At the same time, it's also critically important and an essential part of working on the web platform. Investing in the Blink process early in a feature's lifetime can go a long way toward reducing its cost on the platform over the next 30 years or more. By ensuring the feature is properly reviewed, specified and documented, we're minimizing costs for the millions of web developers who will use it, and for the future engineers who will maintain and improve on it.

We're here to create an interoperable & capable web that web developers love developing for and that belongs to everyone.

And the process is crucial to achieving that.

So I'm hoping that keeping that end goal in mind can make the occasional frustration and extra work along the way more bearable, as well as help us ship better web platform features and capabilities.

Huge thanks to Rick Byers, Chris Harrelson and Chris Wilson for providing great feedback on an earlier version of this post.
