This past month has been a brutal succession of hidden hardware flaws being exposed. First, the revelation that iOS has indeed been throttling CPUs on older devices, which Apple claims improves battery life. Second, the Meltdown/Spectre attack vectors, whose fixes threaten to slow CPUs down by roughly 20% in order to patch a security hole caused by an eager performance optimization in speculative execution and caching.

Oof.

Adding another anecdote: this past month I noticed Jekyll compile times and NPM install times on my 2016 Surface Book laptop were extremely painful. I thought it was maybe a Bash on Windows issue, but upon further investigation I found a 2–3x speed difference between when my laptop was plugged in and when it was on battery. Microsoft folks eventually pointed me to the performance slider hiding under the taskbar’s battery icon, which you can use to boost your speed.

It isn’t far-fetched that a device would reduce power consumption when on battery; it makes the device last longer and makes users happier. This also explains iOS’ less-than-stellar performance in Low Power mode. I know that mode well because my last phone probably spent 80% of its life in Low Power mode.

Oof.

What does this mean for websites? Well, we can’t rely on device names and numbers as a proxy for performance. Even a device that shipped with a great CPU might be unknowingly crippled by the OS or some other battery-saving feature. This adds another lens to the already broad landscape of hostile browsers and devices.

And with the Battery Status API pared back over privacy concerns, augmenting sites for Low Power mode isn’t an option either. We’re sending code to inconsistent executors.
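For context, here’s a minimal sketch of what querying battery state via `navigator.getBattery()` looked like. The guard and return shape are my own assumptions; in environments where the API has been removed, this simply resolves to `null` — which is exactly the problem:

```javascript
// Sketch: querying battery state with the (now largely restricted) Battery Status API.
// Guarded so it no-ops outside browsers or where the API has been removed.
async function getBatteryInfo() {
  if (typeof navigator === "undefined" || typeof navigator.getBattery !== "function") {
    return null; // API unavailable — we can't detect battery state or Low Power mode
  }
  const battery = await navigator.getBattery();
  return { level: battery.level, charging: battery.charging };
}
```

When this returns `null`, the site has no way to tell whether it’s running on a throttled, battery-saving device.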

Addy Osmani wrote a great post last year on “The Cost of JavaScript”. Alex Russell then expanded on those thoughts in a post called “Can You Afford It?: Real-world Web Performance Budgets”. “Don’t use JavaScript” isn’t a very good solution, but we must understand that JavaScript is damn expensive.

Here’s Lin Clark channeling Steve Souders:

[T]he old bottleneck for web performance used to be the network. But the new bottleneck for web performance is the CPU, and particularly the main thread. –Lin Clark

JavaScript is CPU-intensive, and the CPU is the bottleneck for performance. The solution to this is a “push only what you need” modular code architecture. Polymer’s PRPL Pattern and Webpack’s Code-Splitting are examples of this. Single-File Components, like old-style Polymer elements and Vue’s .vue files, accomplish this as well without a build process or Server Push. But speaking plainly, with Polymer 3.0 switching to ES6 modules, we’re gravitating toward the React model of All-in-JS components, where HTML and CSS live inside your JavaScript module.
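As a rough sketch of the “push only what you need” idea, a dynamic `import()` defers loading a module until the user actually needs it, and bundlers like Webpack split each one into its own chunk. The feature module name is hypothetical; I’m using Node’s built-in `path` module as a stand-in so the sketch runs anywhere:

```javascript
// Sketch: lazy-load a module only when the user actually needs it.
// Bundlers like Webpack turn each dynamic import() into a separate chunk.
let featurePromise;

function loadFeature() {
  // First call kicks off the load; later calls reuse the same in-flight promise.
  if (!featurePromise) {
    // In a real app this might be import("./charts.js") (hypothetical module).
    featurePromise = import("node:path");
  }
  return featurePromise;
}
```

Nothing is downloaded or parsed until `loadFeature()` is called, which keeps the initial payload — and the main-thread cost of parsing it — small.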

I believe we need an alternative to All-in-JS on the client. I’m bought into the idea of Single-File Components, but I want them with more focus on HTML as the foundation than on JavaScript. With Mozilla effectively killing HTML Imports in favor of All-in-JS solutions back in 2015, though, we’re stuck with a kind of survivorship bias: All-in-JS won, so therefore it must be the best option.

There’s a long-running “HTML Modules” discussion thread in the W3C GitHub, but unless I’m misreading it (and I hope I am), it has devolved into “How do we import HTML in JavaScript?” I hope I’m wrong, but it seemed one of the advantages of HTML Imports was that browsers are good at handling documents, and ingesting giant chunks of HTML into the JavaScript thread seems… uh… bad.

Oof.

But it’s not all bad. Kudos to browsers, they have been working to pull rendering off the main thread. That’s good for websites. I’ve heard Googlers talk about splitting logic into asynchronous Workers. This is good for websites too. My UI and logic are so mixed up, I’m not sure where I can squeeze gains like these, but I’ll be looking for opportunities. Web Assembly stands to revolutionize applications as well. So that future is exciting and new.

But darn it all, I just want to build modular websites using HTML and a little bit of JavaScript.