Tim Perry

Senior software engineer at resin.io.
Creator of Build Focus, keen open-source contributor, and maintainer of Loglevel, Git Confirm and Server Components.

Because sometimes you want to know if they actually work.

Bash scripts are unloved and underappreciated. Many of us developers spend a lot of time on the command line, and a good shell script is an incredibly powerful thing to drop into & extend your existing workflow.

Shell scripting isn’t easy though. Many of the tools and techniques you might be used to aren’t nearly as effective or well-used on the command line. Testing is a good example: in most languages, there’s a clearly agreed basic approach to testing, and most projects have at least a few tests sprinkled around (though often not as many as they’d like).

In shell scripts though, testing is typically an afterthought. You’ll be hard pressed to find even the most popular shell scripts with any tests at all, and manual testing (or just not testing) is pretty standard. Scripts are often just quickly written, tested once for one use case, and considered ‘done’.

Sometimes quick scripting like that is fine, but if you’re building anything substantial, it’s really not. I’ve recently released git-confirm and notes, both intended to be useful & reliable tools for others, and getting them to that stage without automated tests would’ve been drastically tougher (and catching issues in PRs would’ve been harder still).

Setting up a test environment

To start with, you’ll want to have a git repo set up with the basic outline of your project (a LICENSE, a file for your script, etc).

Next, you need some testing tools. Install Bats, Bats-Support and Bats-Assert — I almost always install them as git submodules, so it’s easier for other contributors to get hold of them later.

Bats is the core testing library, Bats-Assert adds lots of extra asserts for more readable tests, and Bats-Support adds some better output formatting (and is a prerequisite for Bats-Assert).

To pull all these submodules into test/libs, run the below from the root of your git repo and commit the result:
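git submodule add https://github.com/bats-core/bats-core.git test/libs/bats
git submodule add https://github.com/bats-core/bats-support.git test/libs/bats-support
git submodule add https://github.com/bats-core/bats-assert.git test/libs/bats-assert

(Adjust the URLs to whichever forks you want to pin; the maintained versions nowadays live under the bats-core organisation, and test/libs is just my convention.)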

You’ll want a quick test file (addition-test.bats) to try this out:
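#!./test/libs/bats/bin/bats

load 'libs/bats-support/load'
load 'libs/bats-assert/load'

@test "Addition with bc" {
  result="$(echo 1+1 | bc)"
  assert_equal "$result" 2
}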

This test has a few interesting steps:

  • A shebang that loads the bats executable relatively, from the libs submodule we’ve included, so that if you chmod +x test/addition-test.bats and run the file directly it’ll run itself with bats.
  • load 'file' — this is a Bats command to let you easily load files, here the load.sh scripts that initialise both Bats-Support and Bats-Assert.
  • A @test call, which defines a test, with a name and a body. Bats will run this test for us, and it passes if every single line within it returns a zero status code.
  • A call to assert_equal, checking that the output of echo 1+1 | bc is 2.

You can run this with ./test/libs/bats/bin/bats test/*.bats.

That’s a bit of a mouthful though, so you’ll normally want a more convenient way to run these. I normally have two scripts, one (test.sh) in the root of the project, which runs the tests a single time for a quick check and for CI, and one (dev.sh) which watches my files and reruns the tests whenever they change, for quick development feedback.
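Both are tiny. Something like this works (entr here is just one option for watching files; any watcher you like will do):

test.sh:

#!/bin/bash
# Run the whole Bats suite once
./test/libs/bats/bin/bats test/*.bats

dev.sh:

#!/bin/bash
# Re-run the suite whenever a script or test changes
find . -name '*.sh' -o -name '*.bats' | entr -c ./test.sh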

chmod +x both of these, and you’re away. Contributors can now check out your project and run the tests with just:
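git submodule update --init --recursive
./test.sh

(The submodule update pulls Bats and friends down on a fresh clone; after that, ./test.sh on its own is enough.)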

If you want to run your tests in CI easily, I’d strongly recommend Travis CI, which will happily run these tests too, if you enable it for your repository and then just push a .travis.yml file containing:
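language: generic
script: ./test.sh

(Roughly, anyway; ‘generic’ skips any language-specific setup, Travis pulls your submodules in automatically before running the script, and you can point it at whatever your test entry point is.)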

Writing your own tests

Once you have this set up, writing tests for Bats is pretty easy: work out what you’d run on the command line to check your program works, write exactly that wrapped with a @test call and a name, and Bats will do the rest.

Checking the results of each step can be slightly tricky, since you can’t just eyeball command output, but the key is to remember that any failing line is a failing test. You can just write any bash conditional by itself on a line to define an assert, e.g. [ -f "$file" ] ($file exists) or [ -z "$result" ] ($result is empty).

You’ll also often want test setup and teardown steps to run before and after each of your tests, to manage their environment. Bats makes this incredibly easy: define functions called setup and teardown, and they’ll be automatically run before and after each test execution.
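For example, to give each test its own scratch directory to play in (the details will vary with what your script needs):

setup() {
  TEST_TEMP_DIR="$(mktemp -d)"
  cd "$TEST_TEMP_DIR"
}

teardown() {
  cd /
  rm -rf "$TEST_TEMP_DIR"
}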

In many cases, you’ll want to run a command and assert on its resulting status and output, rather than immediately failing if it returns non-zero. Bats provides a run function to make this easier: it wraps the command so that a non-zero exit code doesn’t immediately fail the test, and puts the command’s result into $status and $output variables. Bats-Assert then provides a nice selection of assertion functions to easily check these. Here’s an example from notes:
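Roughly (simplified a little; the command and messages here aren’t verbatim from notes):

@test "Should show usage info when run with no arguments" {
  run ./notes.sh

  assert_failure
  assert_output --partial "Usage"
}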

You can take a closer look at the real test in action here.

That’s all for now. There are lots of more specific techniques to look at here, from mocking to test helpers to test isolation practices, but this should be more than enough to get you started, so you can verify your shell scripts work, and quickly tell whether future changes and PRs break anything. If you want more examples, take a look at the tests included in notes and git-confirm, both of which cover some more interesting practices.

Get testing!

CSS-only tabs are a fun topic, and :target is a delightfully elegant declarative approach, except for the bit where it doesn’t work. The #hash links involved jump you around the page by default, and disabling that jumping breaks :target in your CSS, due to poorly defined behaviour in the spec, and corresponding bugs in every browser.

Or so we thought. Turns out there is a way to make this work almost perfectly, despite these bugs, and get perfect CSS-only accessible linkable history-tracking tabs for free.

What? Let’s back up.

What’s :target?

:target is a CSS pseudo-class that matches an element if its id is the hash part of the URL. If you’re on http://example.com#my-element for example, div:target would match <div id="my-element"> and not <div id="abc">.

This ties perfectly into <a href="#my-element">. All of a sudden you can add links that apply a style to one part of the page on demand, with just simple CSS.

There’s lots of fun uses for this, but an obvious one is tabs. Click a link, the page updates, and that part of the page is shown. Click another link, that part of the page disappears, and a different part is shown. All with working history and linkability, because the URL is just updating like normal.

Implementation is super easy, and works in 98% of browsers, right back to IE9. It looks like this:
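<style>
  .tab { display: none; }
  .tab:target { display: block; }
</style>

<a href="#tab-1">Tab One</a>
<a href="#tab-2">Tab Two</a>

<div id="tab-1" class="tab">Tab one content</div>
<div id="tab-2" class="tab">Tab two content</div>

(That’s the skeleton, anyway; a real version also wants a default tab shown before any link has been clicked, which the JS Bin below handles.)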

JS Bin

Hop, Skip, and Jump

If you try to do this in any page longer than the height of your browser though, you’ll quickly discover that this can jump around, which can be annoying. Try http://output.jsbin.com/melama — you’ll have to scroll back up after hitting each tab. It’s functionally fine, and in some cases this behaviour might be ok too, but for many it isn’t exactly what you want.

This is because #hash links not only update the URL, but also scroll down to the element with the corresponding id on the page. We can fight that behaviour though, with a little extra JavaScript.

JavaScript, in our beautiful CSS-only UI? Sadly yes, but it’s important to understand why this is still valuable. With a tiny bit of JS on top we still retain the benefits that a CSS-based approach gives us (page appearance independent of any logic, defined declaratively, completely accessible, and with minimal page weight) and then progressively enhance it to improve the UI polish only when we can.

In environments where JavaScript isn’t available the tabs still work fine; they swap, and the browser jumps down to the tab you’ve opened. When you do have JavaScript, we can enhance them further, to tune the exact behaviour and polish that as we like.

Our JavaScript enhancement looks something like this:
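// Catch clicks on #hash links, stop the default jump, and update the
// URL and history by hand instead. (A sketch: it assumes every #hash
// link on the page should behave this way.)
[].forEach.call(document.querySelectorAll("a[href^='#']"), function (link) {
  link.addEventListener("click", function (event) {
    event.preventDefault();
    history.pushState(null, "", link.getAttribute("href"));
  });
});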

This disables the normal behaviour on every #hash link, and instead manually updates the hash and the browser history. Unfortunately though it doesn’t work. The URL updates, but the :target selector matching doesn’t!

You end up with #tab-1 in the URL, but <div id="tab-1"> doesn’t match :target. The spec doesn’t actually cover this case and every single browser currently has this crazy behaviour (although both the spec authors and browser vendors are looking at fixing this). Game over.

Two steps forward, one step back

We can beat this. Right now we can disable jumping around, but it breaks :target. We need a way to disable jumping around, but still update the hash in a way that :target will listen to.

There’s a few ways we might be able to work around this. Ian Hansson has a clever trick where you position: fixed and hide the targeted element, to control the scroll, but depend on its :target status with sibling selectors. Meanwhile Chris Coyier has suggested capturing the scroll position and resetting it, and I think there might be a route through if you change the element’s id to something else and back at just the right time too. These are all very hacky though; it’d be nice to come up with a way of just fixing the JS we want to use above, so it actually works properly.

Fortunately a helpful comment from Han mentions some interesting current browser behaviour that might help us out, while examining compatibility issues with fixing this for real:

I’ve found a good reason to believe that virtually no webpages would be broken by the “breaking” change of updating :target selectors upon pushState: though they are currently not updated, if you however hit Back and then Fwd again, then :target rules do get applied

Moving in the browser history gives us the :target behaviour we (and all sane people) are expecting. If we can find a way to transparently go back and forward without breaking the user’s experience, we can fix :target.

From there it’s easy. We can build a simple workaround in JavaScript to do exactly this, and save the day:
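// As before, but push the new hash twice, then immediately step back once.
// (A sketch; the double pushState plus back() is the important part.)
[].forEach.call(document.querySelectorAll("a[href^='#']"), function (link) {
  link.addEventListener("click", function (event) {
    event.preventDefault();

    var hash = link.getAttribute("href");
    history.pushState(null, "", hash);
    history.pushState(null, "", hash);
    history.back();
  });
});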

Here we add the hash to the history twice, and immediately remove it once.

This isn’t perfect, and it would be nice if :target worked properly without these workarounds. As-is though, this gives perfect behaviour, with the only downside being that the Forward button in the user’s browser isn’t displayed as disabled, as they might expect. Actually going forward won’t do anything though (it’s the same URL they’re already on), and your users are not typically going to notice this.

This will keep working even if/when :target is fixed, and you’ll then be able to remove the extra workaround here, to lose that slightly messy extra behaviour. If this does break, or any users don’t have JavaScript, they’ll get working tabs with jumping-to-tab behaviour. Slightly annoying, but perfectly usable.

So what?

This lets you build amazingly simple & effective CSS-only tabs.

Minimal clean markup & CSS, perfect accessibility, working in IE9, with shareable tab URLs and a fully working history for free. Enjoy!

Full code:
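Roughly, putting the pieces above together (the JS Bin below is the canonical version, default-visible tab and all):

<style>
  .tab { display: none; }
  .tab:target { display: block; }
</style>

<a href="#tab-1">Tab One</a>
<a href="#tab-2">Tab Two</a>

<div id="tab-1" class="tab">Tab one content</div>
<div id="tab-2" class="tab">Tab two content</div>

<script>
  [].forEach.call(document.querySelectorAll("a[href^='#']"), function (link) {
    link.addEventListener("click", function (event) {
      event.preventDefault();

      var hash = link.getAttribute("href");
      history.pushState(null, "", hash);
      history.pushState(null, "", hash);
      history.back();
    });
  });
</script>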

Try it yourself on JSBin:

JS Bin

Useful? Hit the heart to recommend this to the world, and follow me for even more CSS magic (and more) in future. Questions? Twitter at me.

Building a Server-Rendered Map Component

Part 2: How to use client-side libraries like Leaflet, in Node.

As discussed in Part One: Why?, it’d be really useful to be able to take an interesting UI component like a map, and pre-render it on the server as a web component, using Server Components.

We don’t want to do the hard mapping ourselves though. Really, we’d like this to be just as easy as building a client-side UI component. We’d like to use a shiny mapping library, like Leaflet, to give us all the core functionality right out of the box. Unfortunately though, Leaflet doesn’t run server-side.

This article’s going to focus on fixing that so you can use Leaflet with Server Components, but you’ll hit the same problems (and need very similar fixes) if you’re doing any other Node-based server rendering, including with React. The JS ecosystem right now is not good at isomorphism, but with a few small tweaks you can transform any library you like, to run anywhere.

Let’s focus on Leaflet for now. It doesn’t run server-side, because there’s not that many people seriously looking at rendering nice UIs outside a browser, so JS libraries are pretty trigger-happy making big browser-based assumptions. Leaflet expects a few things that don’t fit neatly outside a browser:

  • Global window, document and navigator objects.
  • A live element in an HTML DOM to be inserted into.
  • A Leaflet <script> tag on the page, so it can find its own URL and autodetect the path to the Leaflet icons.
  • To export itself just by adding an ‘L’ property to the window object.

All of these things need tricky fixes. Just finding these issues is non-trivial: you need to try and use the library in Node, hit a bug, solve the bug, and repeat, until you get the output you’re expecting.

Leaflet is a relatively hard case though. Most libraries aren’t quite so involved in complex DOM interactions, and just need the basic globals they expect injected into them.

So, how do we fix this?

Managing Browser Globals

If you npm install leaflet and then require("leaflet"), you’ll immediately see our first issue:

> ReferenceError: window is not defined

Fix this one, and we’ll hit a few more at require() time, for document and navigator too. We need to run Leaflet with the context it’s expecting.

It would be nice to do that by having a DOM module somewhere that gives us a document and a window, and using those as our globals. Let’s assume we’ve got such a module for a moment. Given that, we could prefix the Leaflet module with something like:

var fakeDOM = require("my-fake-dom");
var window = fakeDOM.window;
var document = fakeDOM.document;
var navigator = window.navigator;
[...insert Leaflet code...]

(Instead we could just define browser globals as process-wide Node globals and leave the Leaflet source untouched, but this isn’t good behaviour, and it’ll come back to bite you very quickly if you’re not careful)

Doing something like this will get you much closer. With any reasonable DOM stub you should be able to get Leaflet successfully importing here. Unfortunately though, this fails because of a fundamental difference between browser and Node rendering. On the server, we have to support multiple DOM contexts in one process, so we need to be able to change the document and window.

We can still pull this off though, just taking this a step further with something like:

module.exports = function (window, document) {
  var navigator = window.navigator;
  [...insert Leaflet code...]
}

Now this is a Node module that exports not a single Leaflet, but a factory function to build Leaflet for a given window and document, provided by the code using the library. When called, this doesn’t actually return anything though, as you might reasonably expect; instead it creates window.L, as is common for browser JS libraries. In some cases that’s probably ok, but in my case I’d rather leave window alone, and grab the Leaflet instance directly, by adding the below to the end of the function, after the Leaflet code:

return window.L.noConflict();

This tells Leaflet to remove itself as a global, and just give you the library as a reference directly.

With this, require("leaflet") now returns a function, and passing that a window and document gives you a working, ready-to-use Leaflet.

Emulating the expected DOM

We’re not done though. If you want to use this Leaflet, you might define a Server Component like:

var LeafletFactory = require("leaflet");
var components = require("server-components");

var MapElement = components.newElement();
MapElement.createdCallback = function (document) {
  var L = LeafletFactory(new components.dom.Window(), document);
  var map = L.map(this).setView([41.3851, 2.1734], 12);
  L.tileLayer('http://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', {
    maxZoom: 19
  }).addTo(map);
};
components.registerElement("leaflet-map", {prototype: MapElement});

This should define a component that generates the HTML for a full working map when rendered. It doesn’t. The problem is that Leaflet here is given a DOM node to render into (‘this’, inside the component), and it tries to automatically render at the appropriate size. This isn’t a real browser though, we don’t have a screen size and we’re not doing layout (that’s why it’s cheap), and everything actually has zero height or width.

This isn’t as elegant a fix, but it’s an unavoidable one in any server-rendering approach I think: you need to pick a fixed size for your initial render, and nudge Leaflet to use that. Here that’s easy, you just make sure that before the map is created you add:

this.clientHeight = 500;
this.clientWidth = 500;

And with that, it works.

This fakes layout, as if the browser had decided that this was how big the element is. You can render like this at a fixed size for lots of applications, and potentially add client-side rendering on top to resize too if you want.

With that added, you can take this component, render it with a cheeky components.renderFragment("<leaflet-map></leaflet-map>") and be given working HTML for a lovely static map you can send straight to your users. Delightful.

There is still one last step required if you want to take this further. Leaflet by default includes a set of icons, and uses the ‘src’ of its script tag in the page to automatically work out the URL to these icons. This is a bit fragile in quite a few ways, including in this environment, and if you extend this example to use any icons (e.g. adding markers), you’ll find your icons don’t load.

This step’s very simple though, you just need to set L.Icon.Default.imagePath appropriately. If you want to do that in a nice portable Server Component, that means:

var componentsStatic = require("server-components-static");
var leafletContent = componentsStatic.forComponent("leaflet");
L.Icon.Default.imagePath = leafletContent.getUrl("images");

This calculates the client-facing URL you’ll need that maps to Leaflet’s images folder on disk (see Server-Components-Static for more details).

Making this (more) maintainable

There’s one more step though. This is a bit messy in a few ways, but particularly in that we have to manually fork and change the code of Leaflet, and maintain that ourselves in future. It would be great to automate this instead, to dynamically wrap normal Leaflet code, without duplicating it. With Sandboxed-Module we can do exactly that.

Sandboxed-Module lets you dynamically hook into Node’s require process, to transform module code however you like. There’s lots of somewhat crazy applications of this (on-require compilation of non-JS languages, for example), but also some very practical ones, like our changes here.

There’s potentially a very small performance hit on startup from this for the transformation, but for the rest of runtime it shouldn’t make any difference; it hooks into the initial file read to change the result, and then from that point on it’s just another Node module.

So, what does this look like?

var SandboxedModule = require('sandboxed-module');

module.exports = SandboxedModule.require('leaflet', {
  sourceTransformers: {
    wrapToInjectGlobals: function (source) {
      return `
        module.exports = function (window, document) {
          var navigator = window.navigator;
          ${source}
          return window.L.noConflict();
        }`;
    }
  }
});

That’s it! Your project can now depend on any version of Leaflet, and require this wrapped module to automatically get given a Node-compatible version, without having to maintain your own fork.

This same approach should work for almost any other library that you need to manage server side. It’s not perfect — if Leaflet starts depending on other browser global things may break — but it should be much easier to manage and maintain than copying Leaflet’s code into your project wholesale.

Hopefully in future more projects will improve their native support for running in other environments, and this will go away, but in the meantime there are some relatively simple changes you can make to add Node support to even relatively complex client-side libraries.

Let’s stop there for now. In the next post, we’ll take a proper look at a full working map component, complete with configurability, static content and marker support, and see what you can do to start putting this into action yourself. Can’t wait? Check out https://github.com/pimterry/leaflet-map-server-component for the map component codebase so far.

Like this? Recommend it below, and hit Follow to catch the next installment.

Get easy confidence on exactly what you’re committing.

Git Confirm is a git hook, which asks you to confirm when you commit a change that includes additions from a (configurable) list of risky matches. Think ‘TODO’, ‘FIXME’, ‘@Ignore’, ‘describe.skip/it.skip’ and ‘describe.only/it.only’. You can drop Git Confirm in, and effortlessly stop yourself ever committing anything like this by accident.

TODO is the easiest example. It’s really useful to sprinkle TODO comments in your code as you work, to mark things that will need fixing in future, and jot down notes as you work. It’s really terrible to end up with a codebase riddled with them though. You need to actively deal with them in the short term, either fixing them immediately as you wrap up your larger piece of work, or moving them into a proper task tracking system somewhere.

Unfortunately, it’s easy to fail to do that; to add todos as you go, skim your code later and assume you’ve to-done them, and end up slowly rotting your codebase over time.

With Git Confirm, you put ‘TODO’ on the watchlist, and any time you commit code that adds TODO, you’ll see it (in context) so you can confirm you definitely want to commit it. Demo time!

This could be you right now.

Simple, effective and ready to go. There are other options for this, but personally I find them either too simplistic (rejecting commits to any matching files outright) or more complex and overpowered than I want by default (automatically opening TODO issues on Github, or fully synchronized tracking of TODOs with external issue trackers). All of these have their place, but there’s a big gap in the middle for a simple tool that stays out of your way and catches you if you slip up, without being too opinionated.

There are also a lot of edge cases that these often don’t cover: changing other parts of files that already contain TODOs, removing lines with TODOs in, or including context with failures so you can quickly find the issue. Deep interactions with Git like this create surprisingly fiddly edge cases, but Git Confirm has your back all the way.

In addition, as far as I can tell there’s no tools for the general case at all, where I want to match arbitrary patterns. I’d really like to never accidentally commit an ignored test, and never ever ever ever accidentally ignore all my tests but one. With Git-Confirm, that never happens.

Want to try it? In the root of your repo, run:

curl https://cdn.rawgit.com/pimterry/git-confirm/v0.2.1/hook.sh > .git/hooks/pre-commit
chmod +x .git/hooks/pre-commit

Done! By default it’ll just match ‘TODO’, but you can override that and add any regular expressions you like with:

git config --add hooks.confirm.match "pattern-to-match"

Happy committing!

Want to learn more? Check out Git Confirm on Github to see the full details, and reply below or tweet me at @pimterry with your questions.

You’ve written an application deployed using Dokku, and you’ve got it all up and running nicely. You’ve heard a lot about why HTTPS is important nowadays, especially the SEO and performance benefits, and you’d like to add all that, with minimal cost and hassle. Let’s get right on that.

Let’s Encrypt is a new certificate authority (an organisation that issues the certificates you need to host an HTTPS site), which provides certificates to sites entirely for free, by totally automating the system, to help get 100% of the web onto HTTPS. Sounds great, right?

To set this up, you’re going to need to prove you own the domain you’re asking for a certificate for, you’ll need to get the certificates and start using them, and you’ll want some system in place to automatically renew your certificates, so you never have to think about this again (Let’s Encrypt certificates expire every 90 days, to encourage automation). That means we’ve got a few key steps:

  • Generate a key-pair to represent ourselves (you can think of this as our login details for Let’s Encrypt).
  • Complete Let’s Encrypt’s Simple HTTP challenge, by signing and hosting a given JSON token at /.well-known/acme-challenge/<token> on port 80 for the given domain name, with our public key. This validates the key pair used to sign this token as authorized to issue certificates for this domain.
  • Request a certificate for this domain, signing the request with our now-authorized key.
  • Set up our server to use this key.
  • Set up an automated job to re-request certificates and update the certificate we’re using at regular intervals.

(Interested in the full details of how the Let’s Encrypt process works? Check out their detailed intro: https://letsencrypt.org/how-it-works/)

Fortunately, you barely have to do any of this! In reality, with Dokku, this is pretty much a case of turning a plugin on, and walking away.

How do you actually do this with Dokku?

First up, we need to install dokku-letsencrypt, which will do most of the setup magically for us (see full instructions here). On your server, run the command below for your dokku version (run ‘dokku version’ to check):

# dokku 0.5+
dokku plugin:update letsencrypt

# dokku 0.4
dokku plugin:update letsencrypt dokku-0.4

To configure it, set an email for your app:

dokku config:set --no-restart myapp DOKKU_LETSENCRYPT_EMAIL=a@b.com

(This email will receive the renewal warnings for your certificate)

Turn it on:

dokku letsencrypt myapp

For Dokku 0.5+, set up auto-renewal:

dokku letsencrypt:cron-job --add

For Dokku 0.4, you’ll have to manually add a cronjob scheduled every 60 days to kick off the auto-renewal process:

dokku letsencrypt:auto-renew
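For example, a crontab entry along these lines (run as a user that can run dokku; the exact schedule is up to you):

# Renew certificates roughly every two months, at 1am on the 1st
0 1 1 */2 * dokku letsencrypt:auto-renew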

That’s it!

What just happened?

The magic is in the ‘dokku letsencrypt myapp’ command. When you run that, Dokku-LetsEncrypt:

  • Starts a new service, in a Docker container, and temporarily delegates the top-level .well-known path on port 80 to it in Nginx (the server which sits in front of all your services in Dokku and routes your requests).
  • Generates a key pair, and authorizes it for each of the domains you have configured for your app (see ‘dokku domains myapp’).
  • Uses that key pair to get certificates for each of those domains.
  • Configures Nginx to enable HTTPS with these certificates for each of those domains.
  • Configures Nginx to 301 redirect any vanilla HTTP requests on to HTTPS for each of those domains.

The auto-renewal steps just repeat this process later on, to keep these certificates up to date.

Easy as pie. Don’t forget to check it actually worked though! It’s quite likely you’ll find mixed content warnings when you first turn on HTTPS. Most major CDNs or other services you might be embedding will have HTTPS options available nowadays, so this should just be a matter of find/replacing http: to https: throughout your codebase. With that done though, that shiny green padlock should be all yours.

Like this? Have any trouble? Recommend it below so others can see it too, comment with extra questions, and follow me for more. Thanks!

Building a Server-Rendered Map Component

Part 1: Why do we need better maps?

Maps are a standard tool in our web UI toolkit; they’re an easy way to link our applications to the real world they (hopefully) involve. At the same time though, maps aren’t as easy to include in your web site as you might want, and certainly don’t feel like a native part of the web.

Critically, unlike many other UI components, you normally can’t just serve up a map with plain HTML. You can easily serve up HTML with videos embedded, or complex forms, or intricate SVG images and animations, or MathML, but maps don’t quite make the cut. (Google Maps does actually have an Embed API that’s closer to this, but it’s very limited in features, and it’s really just bumping the problem over their fence).

If you want to throw a map on a page, you’re going to need to pick out a library, write and serve up scripts to imperatively define and render your map, and bring in the extra complexity (and page weight and other limitations) of all the supporting code required to put that into action. That’s not really that much work, and we’ve got used to making fixes like these everywhere. It’s a far cry though from the declarative simplicity of:

<video src="cats.mp4"></video>

Maps are one small example of the way that the HTML elements we have don’t match what we’re using day to day, and the workarounds you have to use to get closer.

How can we show a map on a page with the same simplicity and power?

Web Components

What we really want is to add <map lat=1.2 long=3.4 zoom=5> to our pages, get a great looking map, and be done with it. That’s the goal.

In principle, web components handle this nicely. You define a custom element for your map, give it its own rendering logic and internally isolated DOM, and then just start using it. That’s really great, it’s a fantastic model, and in principle it’s perfect. Clicking together isolated encapsulated UI elements like this is the key thing that every shiny client-side framework has in common, and getting that as a native feature of the web is incredible.

Support is terrible though. Right now, it’s new Chrome only, so you’re cutting out 50% of your users right off the bat, and it’s going to be a very long time until you can safely do this everywhere. You can get close with lots of polyfills, and that’s often a good choice, but there is extra weight and complexity there, and you really just want a damn <map> — not to spend hours researching the tradeoffs of Polymer or the best way to polyfill registerElement outside it.

Even once this works, you still have the extra front-end complexity that results, and the associated maintenance. You have to serve up your initial content, and then re-render in your JavaScript before some of it is visible. You quickly end up with content that is invisible to almost all automated scripts, scrapers, devices, and search spiders. Depending on JavaScript doesn’t just cut off the small percentage of users who turn it off.

Mobile devices make this even more important. With their slow connections and slow rendering, users are going to have to wait longer to see your page when you render things client-side. JavaScript is not at all resilient to connection issues either: any device that drops a script required for basic rendering won’t show any of that rendered content, while simple HTML pages continue with whatever’s available. New Relic’s mobile statistics show 0.9% of HTTP requests from mobile devices failing on the network. That failure rate builds up quickly, when applied to every resource in your page! With just that 0.9% of failures and only 10 resources to load, 9% of your users aren’t going to see your whole page.

Web components are a great step in the right direction, and are exactly the right shape of solution. Web components that progressively enhance already meaningful HTML content are even better, and can mitigate many of the issues above. In many cases like maps though, your component is rendering core page content. Web components do give you a superb way to do that client-side, but rendering your content client-side comes with heavy costs. Worse, it’s frequently not a real necessity. A static map would give all the core functionality (and could then be enhanced with dynamic behaviour on top).

What if we could get these same benefits, without the costs?

Server Components

Server Components is an attempt to improve on our current approach to rendering the web, with the same approach and design as web components, but renderable on the server-side.

Without Server Components, if you wanted to quickly show a map of a location sprinkled with pins, or shaded areas, or whatever else, you couldn’t easily do so without serving up a mountain of JavaScript. Instead, I want to write <map lat=1.2 long=3.4 zoom=5> in my HTML and have my users instantly see a map, without extra client-side rendering or complexity or weight.

This sounds like a pipe dream, but with Server Components, we can do exactly that. We can define new HTML elements, write simple rendering code in normal vanilla JavaScript that builds and transforms the page DOM, and run all of that logic on the server to serve up the resulting ready-to-go HTML. All the client-side simplicity and power, without the costs.

Let’s stop there for now. In the next post, we’ll take a look at exactly how you’d implement this, and what you can do to start putting it into action yourself. Can’t wait? Check out https://github.com/pimterry/leaflet-map-server-component for the map component so far.

Like this? Recommend it below, and hit Follow to catch the next instalment.
