
More CSS load performance experiments (of a React app)

March 25, 2017
0 comments Web development, JavaScript, React

tl;dr; More experiments with different ways to lazy load CSS with and without web fonts. Keeping it simple is almost best but not good enough.

Background

Last week I blogged about Web Performance Optimization of a Single-Page-App and web fonts, in which I experimented with tricks to delay the loading of a massive CSS file. The app is one of those newfangled single-page apps where JavaScript (React in this case) does ALL the DOM rendering, which is practical but hurts the first-load experience.

Again, let's reiterate that single-page apps built with fully fledged JavaScript frameworks aren't the enemy. The hope is that your visitors will load more than just one page. Once users have loaded it, any further click within the app might involve some "blocking" AJAX, but generally the app feels great since a click doesn't require a new page load.

Again, let's reiterate what the ideal is. Ideally the page should load and be visually useful because the server sends almost all the DOM you need, and once loaded, some JavaScript can turn the app "alive" so that further clicks just update the DOM instead of reloading. But that's not trivial. Things like react-router (especially version 4) make it a lot easier, but you now need a Node.js server to render the pages server-side and you can't just take your "build" directory and put it on a CDN like Firebase or CloudFront.
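For concreteness, here's a minimal sketch of what that server-side rendering could look like, assuming Express and react-router v4's StaticRouter; the App import path and bundle filename are placeholders, not anything from this project:

const express = require('express');
const React = require('react');
const { renderToString } = require('react-dom/server');
const { StaticRouter } = require('react-router-dom');
const App = require('./src/App').default;  // hypothetical path to your root component

const app = express();

app.get('*', (req, res) => {
  // Render the same component tree the client uses, but to an HTML string
  const context = {};
  const html = renderToString(
    React.createElement(StaticRouter, { location: req.url, context: context },
      React.createElement(App))
  );
  res.send('<!doctype html><html><body>' +
    '<div id="root">' + html + '</div>' +
    '<script src="/static/js/main.js" defer></script>' +  // placeholder bundle name
    '</body></html>');
});

app.listen(3000);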

When I first started these experiments I was actually looking at what I could do on the JavaScript side to make first loads faster. It turns out that fixing your render-blocking CSS and assessing your use of web fonts gives a bigger "bang for the buck".

Go Straight to the Results

Visual comparison on WebPagetest.org
Visual comparison of 6 different solutions

Same visual but as a comparative video

Stare at that for a second or two. Note that some versions start rendering something earlier than others. Some lose their rendered content and go blank. Some change look significantly more than once before the final rendered result. As expected, they almost all take the same total time until the final thing is fully rendered with the DOM and all CSS loaded.

What Each Experimental Version Does

Original

A regular webapp straight out of the ./build directory built with Create React App. Blocking CSS, blocking JS and no attempts to show any minimal DOM stuff first.

Optimized Version 1

Inline CSS with basic stuff like background color and font color. CSS links are lazy loaded using tricks from loadCSS.
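To make the trick concrete, here's a stripped-down sketch of the idea behind loadCSS (not its actual source): the stylesheet is attached from JavaScript so it never shows up as render-blocking markup in the initial HTML. The filename is just an example.

function lazyLoadCSS(href) {
  // Create the <link> at runtime so the HTML parser never blocks on it
  var link = document.createElement('link');
  link.rel = 'stylesheet';
  link.href = href;
  document.head.appendChild(link);
}
lazyLoadCSS('/static/css/main.abc123.css');  // example filename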

Optimized Version 2

Same as Optimized Version 1, but both the JavaScript that lazy loads the CSS and the React JavaScript have a defer attribute on them to prevent render blocking.

Optimized Version 3

Not much of an optimization. More of a test. Takes the original version but moves the <link rel="stylesheet" ... tags to just above the </body> tag, below the <script src="..."> tags. Note how, around the 15 second mark, this renders the DOM (by the JavaScript code) before the CSS has loaded and it gets ugly. Note also how much longer this one takes, because it has to context switch: go back to loading the CSS and re-render after the DOM has been built by the JavaScript.

Optimized Version 4

All custom web font loading removed. Bye bye "Lato" font-family. Yes, you could argue that it looks worse, but I still believe performance ultimately makes users happier than prettiness (not true for all sites, e.g. a fashion web app).
Actually this version is copied from Optimized Version 1 but with the web fonts removed. The final winner in my opinion.

Optimized Version 5

Out of curiosity, let's remove the web fonts one more time, but do it based on the Original. So no web fonts, but the CSS is still render blocking.

So, Which Is Best?

I would say Optimized version 4. Remove the web font and do the loadCSS trick of loading the CSS after some really basic rendering, but do that before loading the JavaScript. This is what I did to Podcast Time, except I slightly improved the initial server HTML so that it says "Loading..." immediately, and I made the header nav look exactly like the one you get when everything is fully loaded.

Also, Chrome is quite a popular browser after all, and the first browser to support <link rel="preload" ...>. That means that when Chrome (and Opera 43 and the Android Browser) parses the initial HTML and sees <link rel="preload" href="foo.css"> it can immediately start downloading that file, and probably start parsing it too (in parallel). The visual comparison I did here on WebPagetest used Chrome (on 3G Emerging Markets), so that gives the optimized versions above a slight advantage, but only in Chrome.

What's Next?

To really make your fancy single-page-app fly like greased lightning you should probably do two things:

  1. Server-side render the HTML, but perhaps do it in such a way that if your app depends on a lot of database data, you skip that part. That way you at least send a decent amount of HTML, so when React/Angular/YouNameIt starts executing it doesn't have to do much DOM rendering and can go straight to a loading message whilst it fetches the actual AJAX data that needs to be displayed. Or if it's a news article or blog post, include that in the server-rendered HTML but let the client side load other things like comments or a sidebar with related links.

  2. Tools like critical that figure out the critical above-the-fold CSS don't work great with single-page-apps where the DOM is generated by loaded JavaScript, but you can probably hack around that by writing a script that generates the DOM as an HTML string by first loading the app in PhantomJS in various ways (see the rough sketch right after this list). That way you can inline all the CSS needed even after the JavaScript has built the DOM. And once that's loaded, you can lazy load the remaining CSS (e.g. bootstrap.min.css) to have all styles ready when the user starts clicking around on stuff.
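Here's a rough PhantomJS sketch of that second idea (treat it as a starting point, not the tooling behind these experiments): load the built app, give React a moment to render, then dump the generated DOM to disk so a critical-CSS tool has real markup to work with. The URL and timeout are assumptions.

var page = require('webpage').create();
var fs = require('fs');

page.open('http://localhost:3000/', function(status) {
  if (status !== 'success') {
    console.error('Failed to load the app');
    phantom.exit(1);
  }
  // Give the JavaScript a couple of seconds to build the DOM
  setTimeout(function() {
    fs.write('prerendered.html', page.content, 'w');
    phantom.exit();
  }, 2000);
});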

Most important is the site's functionality and value, but hopefully these experiments have shown that you can get pretty good mileage from a standard build of a single-page-app without having to do too much tooling.

Web Performance Optimization of a Single-Page-App and web fonts

March 16, 2017
0 comments Web development, JavaScript, React

tl;dr; Don't worry so much about your bloated JavaScript and how to async it. Worry about your CSS and your Web fonts. Here I demonstrate a web performance optimization story with Webpagetest.org results demonstrating the improvements.

The app playground

I have a little app called Podcast Time. It's a single-page-app that renders all the functionality in React with the help of MobX and I use Bootstrap 4 and Bootswatch to make it "pretty". And that Bootswatch theme I chose depends on the Lato font by Google Fonts.

I blogged about it last month. Check it out if you're curious what it actually does. As with almost all my side-projects, it ultimately becomes a playground to see how I can performance optimize it.

sourcemap of combined .js file
The first thing I did was add a Service Worker so that consecutive visits all draw from a local cache. But I fear most visitors (all 52 of them (I kid, I don't know)) come just once and don't benefit from the Service Worker. The compiled and minified JavaScript weighs 124 KB gzipped (440 KB uncompressed) and the CSS weighs 20 KB gzipped (151 KB uncompressed). Not only that, but the bootstrap.min.css contained an external import of Google Fonts's CSS, which loads the external font files. And lastly, the JavaScript is intense. It's almost 0.5 MB of heavy framework machinery. Of that JavaScript, about 45 KB is "my code". By the way, as ES6 code it's actually 66 KB that my fingers have typed. So that's an interesting factoid about the BabelJS conversion to ES5 code.

Anyway, to make the site ultra fast the bestest thing would be to rewrite all the JavaScript in vanilla JavaScript and implement, with browser quirks (read: polyfills), HTML5 pushState etc. myself. Also, I guess I should go through the bootstrap.css file and extract exactly the selectors I'm actually using. But I'm not going to do that. That would be counterproductive and I wouldn't have a chance to add the kind of features I have and can add. So that fat JavaScript and that fat CSS need to be downloaded.

So what can I do to make that slightly less painful? ...when Service Workers isn't an option.

1. The baseline

Here's the test results on Webpagetest.org

Click on the first Filmstrip View and notice that the rendering both starts and finishes between the 3.0 and 3.5 second mark. The test is done using Chrome on what they call "Mobile 3G Fast".

2. Have some placeholder content whilst it's downloading

I poked around at the CSS to extract some of the real basics out of it: the background color, the font color, the basic grid. I then copied that into index.html as an inline stylesheet. You can see it here.

The second thing I did was to replace all <link rel="stylesheet" href="..."> tags with this:


<link rel="preload" href="FOO.css" as="style" onload="this.rel='stylesheet'">
<noscript><link rel="stylesheet" href="FOO.css"></noscript>

Also, near the bottom of the index.html I took the loadCSS.js and cssrelpreload.js scripts from Filament Group's loadCSS solution.

And lastly I copied the header text and the sub-title from the DOM, once rendered, plus added a little "Loading..." paragraph and put that into the DOM element that React mounts into once fully loaded.

Now the cool effect is that the render blocking CSS is gone. CSS loads AFTER some DOM rendering has happened. Yes, there's going to be an added delay when the browser has to re-render everything all over again, but at least now it appears that the site is working whilst you're waiting for the site to fully load. Much more user-friendly if you ask me.

Here's the test results on Webpagetest.org

A lot of blankness
Click on the first Filmstrip View and notice that some rendering started at the 0.9 second mark (instead of 3 seconds in the previous version). Then the browser notices that it doesn't have the right font, so it blanks the DOM and waits for the web font to load. Around the time the web font is loaded, the JavaScript is loaded and ready too, so it starts rendering the final version at around the 2.8 second mark.

Note that the total time from start to finish is still roughly the same: 3.1 seconds vs. 3.5 seconds. It's just that in this second version the user gets some indication that the site is alive and loading. I think the pulsating "Loading..." message puts the user at ease, and although they're unhappy it still hasn't loaded, I think it buys me a smidge of patience.

3. Hang on! Are web fonts really that important?

That FOIT is still pretty annoying though. When the browser realizes that the font is wrong, it blanks the DOM until the right font has been loaded. My poor user has to stare at a blank screen for 1.3 seconds instead of seeing the "Loading..." message. Not only that, but instead of just saying "Loading..." this could be an opportunity to display something more "constructive" that the user can actually start consuming with their eyes and brains. Then, by the time the full monty has been downloaded and ready, the user will not have been waiting in vain.

So how important are those web fonts really? It's a nerdy app where search, content and aggregates matter more than the aesthetic impression. Sure, a sexy looking site (no matter the context/environment) does yield a better impression, but there are tradeoffs to be made. If I can load it faster, my hunch is that that'll be more impressive than a pretty font. Secs sells.

I know there are possible good hacks to alleviate these FOIT problems. For example, in "How We Load Web Fonts Progressively" they first set the CSS to point to a system font-family, then, with a JavaScript polyfill, they detect when the external web font has been downloaded and switch the CSS over. Cute and cool, but it means yet another piece of JavaScript that has to be loaded. It also means a "FOUT" (Flash Of Unstyled Text), which can disturb the view quite a lot if the changed font causes layout shifts.
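For what it's worth, one way to get that progressive behaviour without an extra polyfill is the native CSS Font Loading API; that's not what the linked article uses and it isn't supported everywhere, so treat this as a sketch: start on a system font and only opt into "Lato" once the browser reports it's ready.

// Guard for browsers that don't implement the CSS Font Loading API
if (document.fonts && document.fonts.load) {
  document.fonts.load('1em Lato').then(function() {
    document.documentElement.classList.add('lato-loaded');
  });
}
// ...and in the CSS:
//   body { font-family: Helvetica, sans-serif; }
//   .lato-loaded body { font-family: 'Lato', Helvetica, sans-serif; }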

Anyway, I decided I'll prefer fast loading over pretty font so I removed all font family hacks and references to the external font. Bye bye!

Here's the test results on Webpagetest.org

Click on the first Filmstrip View and notice that the rendering starts already at 1.2 seconds and stays with the "Loading..." message all the way until around 3.0 seconds, when both the CSS and the full JavaScript have been loaded.

Side note; Lato vs. Helvetica Neue vs. Helvetica

You decide, is the Lato font that much prettier?

Lato

Helvetica Neue

Helvetica

In Conclusion

PageSpeed Insights
I started this exploration trying to decide if there are better ways I can load my JavaScript files. I did some experiments with some server-side loading and async and defer. I scrutinized my ES6 code to see if there were things I could cut out. Perhaps I should switch to Preact instead of React, but then I'd have to own the whole ESLint, Babel and webpack config hell that create-react-app has eliminated, and I'd worry about not being able to benefit from awesome libraries like MobX and various routing libraries. However, when I was doing various measurements I noticed that all the tricks I tried for a better loading experience were dwarfed by the render-blocking CSS and the web font loading.

Service Workers are to web apps what native smartphone apps are to mobile web pages in terms of loading performance. But Service Workers only help if you have a lot of repeat users.

If you want a fast loading experience and still have the convenience of building your app in a powerful JavaScript framework; focus on your CSS and your web font loading.

Podcasttime.io - How Much Time Do Your Podcasts Take To Listen To?

February 13, 2017
3 comments Python, Web development, Django, JavaScript, React

tl;dr; It's a web app where you search for and find the podcasts you listen to. It then gives you a breakdown of how much time it takes to keep up: per day, per week and per month. Podcasttime.io

Podcasttime.io on Firefox iOS
First I wrote some scripts to scrape various sources of podcasts. Each one is basically an RSS feed URL from which you can fetch the name and an image. And with some cron jobs you can download and parse each podcast feed and build up an index of how many episodes they have and how long each episode is. Together with each episode's "publish date" you can easily figure out an average of how much content each podcast puts out over time.
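As a back-of-the-envelope illustration (the data shape here is made up, not the app's actual model), that "minutes per day" figure boils down to something like this:

// episodes: [{durationSeconds: Number, published: Date}, ...]
function minutesPerDay(episodes) {
  var sorted = episodes.slice().sort(function(a, b) { return a.published - b.published; });
  var totalSeconds = sorted.reduce(function(sum, ep) { return sum + ep.durationSeconds; }, 0);
  // Span, in days, between the oldest and the newest episode
  var spanDays = (sorted[sorted.length - 1].published - sorted[0].published) / (1000 * 60 * 60 * 24);
  return totalSeconds / 60 / Math.max(spanDays, 1);
}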

Suppose you listen to JavaScript Air, Talk Python To Me and Google Cloud Platform Podcast for example, that means you need to listen to podcasts for about 8 minutes per day to keep up.

The Back End

The technology is exciting. The back end is a Django 1.10 server. It manages a PostgreSQL database of all the podcasts, episodes, cron jobs etc. Through Django ORM signals it packages up each podcast with its metadata and stores it in an Elasticsearch index. All the communication between Django and Elasticsearch is done with Elasticsearch DSL.

Also, all the downloading and parsing of feeds is done as background tasks in Celery. This got really interesting/challenging because sooo many podcasts are poorly marked up, and many a time the only way to find out how long an episode is, is to use ffmpeg to probe it, and that takes time.

Another biggish challenge is the fact that often things simply don't work because networks are what they are: unreliable. So you have to re-attempt network calls without accidentally getting caught in infinite loops that put a bad/broken RSS feed back into the background queue again and again and again.

The Front End

Actually, the first prototype of this app was written with Django as the front end plus some jQuery to tie things together. On a plane ride, and as an excuse to learn it, I re-wrote the whole thing in React with Redux. To be honest, I never really enjoyed that; it felt like everything was hard and I had to do more jumping-around-files than actual coding. In particular, Redux is nice, but when you have a lot of AJAX both inside components and upon mounting it gets quite messy, in my humble opinion.

So, on another plane ride (to Hawaii, so I had more time) I re-wrote it from scratch but this time using three beautiful pieces of front end technology: create-react-app, Mobx and mobx-router. Suddenly it became fun again. Mobx (or Redux or something "fluxy") is necessary if you want fancy pushState URLs AND a central (aka global) state management.

To be perfectly honest, I never actually tried combining Mobx with something like react-router or if it's even possible. But with mobx-router it's quite neat. You write a "views route map" (see example) where you can kick off AJAX before entering (and leaving) routes. Then you use that to populate a global store and now all components can be almost entirely about simply rendering the store. There is some AJAX within the mounted components (e.g. the search and autocomplete).

Plotly graph
On the home page, there's a chart that rather unscientifically plots episode durations over time as a line chart. I'm trying a library called Plotly, which is actually an online app for building charts, but they offer a free JavaScript library for generating graphs too. Not entirely sure how I feel about it yet, but apart from looking a bit crowded on mobile it's working really well.

A Killer Feature

This is a pattern I've wanted to build but never managed to get right. The way to get data about a podcast (and its episodes) is to do an Elasticsearch search. From the homepage you basically call /find?q=Planet%20money when you search. That gives you almost all the information you need, so you store it in the global store. Then, if the user clicks on that particular podcast to go to its "perma page", you can simply load that podcast's individual route and you don't need to do something like /find?id=727, because you already have everything you need. If the user instead opens that page in a new tab, or reloads, you only have to fetch that one podcast, so you simply call /find?id=727. In other words, subsequent page loads load instantly! (Basically, it updates the store's podcast object upon clicking any of the podcasts iterated over in the listing. Code here)
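In code, the pattern looks roughly like this (hypothetical store and function names, not the actual podcasttime2 source):

function loadPodcast(store, id) {
  // Reuse whatever the search results already put in the global store
  var cached = store.podcasts.find(function(p) { return p.id === id; });
  if (cached) {
    store.activePodcast = cached;  // instant, no AJAX needed
    return Promise.resolve(cached);
  }
  // Cold load (new tab or reload): fetch just this one podcast
  return fetch('/find?id=' + id)
    .then(function(r) { return r.json(); })
    .then(function(result) {
      store.activePodcast = result.podcast;
      return result.podcast;
    });
}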

And to top that - and this is where a good router shines - if you make a search or something, click something and click back since you have a global store of state, you can simply reuse that without needing another AJAX query.

The State of the Future

First of all, this is a fun little side project and it's probably buggy. My goal is not to make money on it but to build up a graph. Every time someone uses the site and finds the podcasts they listen to, that slowly builds up connections. If you listen to "The Economist", "Planet Money" and "Freakonomics", that ties those together loosely. It's hard to programmatically know that those three podcasts are "related", but they are by "people's taste".

The ultimate goal of this is that I can then recommend other podcasts based on a given set. It's a little bit like how LastFM used to work. Using Audioscrobbler, LastFM was able to build up a graph based on what people preferred to listen to, and using that network of knowledge it could recommend things you haven't listened to but probably would appreciate.

At the moment, there's a simple Picks listing of "lists" (aka "picks") that people have chosen. With enough time and traffic I'll try to use Elasticsearch's X-Pack Graph capabilities to develop a search engine based on this.

At the time of writing, I've indexed 4,669 podcasts, spanning 611,025 episodes which equates to 549,722 hours of podcast content.

The Code

The front end code is available on github.com/peterbe/podcasttime2 and is relatively neat and tidy. The most interesting piece is probably the views/index.js which is the "controller" of things. That's where it decides which component to render, does the AJAX queries and manages the global store.

The back end code is a bit messier. It's done as an "app" within this very blog. The way the Elasticsearch indexing is configured is here and the hotchpotch code for scraping and parsing RSS feeds is here.

Please try it out and show me your selection. You can drop feedback here.

10 Reasons I Love create-react-app

January 4, 2017
4 comments Web development, JavaScript

I now have two finished side projects (1 & 2) done and deployed based on create-react-app which I've kinda fallen in love with. Here are my reasons why I love create-react-app:

1. It Just Works

That wonderful Applesque feeling of "Man! That wasn't hard at all!". It's been such a solid experience with so few hiccups that I'm really impressed. I think it works so well because it's a big project (174 contributors at the time of writing) and the scope is tight. Many other modern web framework boilerplatey projects feel like "This is how I do it. See if you like it. But you better think like I do."

2. No Hard Decisions To Be Made

Webpack still intimidates me. Babel still intimidates me. ESLint still intimidates me. I know how to use all three, but whenever I do I feel like I'm not doing it right and that I'm missing out on something that everyone else knows. create-react-app encompasses all of these tools with a set of hard defaults. I know the Babel it ships with just works, and although it isn't configured to support decorators, it works nicely and so far I've had no desire to extend it.

Some people will get claustrophobic not being able to configure things. But I'm "a making kinda person", so I like this focus on shipping rather than doing it the rightest way. In fact, I think it's a feature that you can't configure things in create-react-app. The option to yarn run eject isn't really an option, because once you've ejected, and now can configure, you're no longer using create-react-app.

3. I Feel Like I'm Using Up-to-date Tech/Tools

I've tried other React (and AngularJS) boilerplate projects before, and once started you always get nervous upgrading the underlying bits for fear that it's going to stop working. Once you start a create-react-app project there are only two (three, technically) things you need to keep updated in your package.json, and that's react-scripts and react (and react-dom, technically).

If you read about a new cool way of doing something with Webpack|Babel|ESlint most likely that will be baked into react-scripts soon for you. So just stay cool and stay up to date.

By the way, as a side note: if you've gotten used to keeping your package.json up to date with npm outdated, make sure you switch to yarn and run yarn outdated. With yarn outdated it only checks the packages you've listed in package.json against npmjs.com. With npm outdated it will also mention outdated dependencies within the listed packages, and those are really hard to evaluate.

4. The Server Proxy

Almost all my recent projects are single-page apps that load a behemoth of .js code and once started it starts getting data from a REST server built in Django or Go or something. So, realistically the app looks like this:


  componentDidMount() {
    fetch('/api/userdata/')
    .then(r => r.json())
    .then(response => {
      this.setState({userData: response.user_data})
    })
  }

With create-react-app you just add this line to your package.json:

"proxy": "http://localhost:8000"

Now, when you open http://localhost:3000 in the browser and the AJAX kicks off the browser thinks it's talking to http://localhost:3000/api/userdata/ but the Node server that create-react-app ships with just automatically passes that on to http://localhost:8000/api/userdata/. (This example just changes the port but note that it's a URL so the domain can be different too).

Now you don't have to worry about CORS or having to run everything via a local Nginx and use rewrite rules to route the /api/* queries.

5. One Page Documentation

This single page is all you need.

At the time of writing that documentation uses npm instead of yarn and I don't know why that is but, anyways, it's nice that it's all there in one single page. To find out how to do something I can just ⌘-f and find.

6. Has Nothing To Do With SCSS, Less or PostCSS

SCSS, Less and PostCSS are amazing but don't work with create-react-app. You have to use good old CSS. Remember that one? This is almost always fine by me, because the kinds of changes I make to the CSS are mainly nudging one margin here or floating one div there. For that it's fine to keep it simple.

Although I haven't used it yet I'm very intrigued to use styled components as a really neat trick to style things in a contained and decentralized way that looks neat.

7. Has Nothing To Do With Redux, react-router or MobX

I'm sure you've seen it too. You discover some juicy looking React boilerplate project that looks really powerful, but then you discover that it "forces" you to use Redux|react-router|MobX. Individually these are amazing libraries, but if the boilerplate forces a choice, that will immediately put some people off, and the boilerplate project will immediately suffer from a lack of users/contributors because it made a design decision too early.

It's really not hard to choose one of the fancier state management libraries or one of the fancy routing libraries, since your create-react-app project starts from scratch. They're all good, and since I get to start from scratch, I build with my chosen libraries and learn how they work because I have to wire them up myself.

8. You Still Get Decent Hot Module Reloading

I have never been comfortable with react-hot-loader. Even after I tried the version 3 branch there were lots of times where I had to pause to think if I have to refresh the page or not.

By default, when you start a create-react-app project, it has a websocket thing connected to the server that notices when you edit source files and when doing so it refreshes the page you're on. That might feel brutish compared to the magic of proper hot module reloading. But for a guy who prefers console.log over debugger breakpoints this feels more than good enough to me.

In fact, if you add this to the bottom of your src/index.js, then when you edit a file it won't refresh the whole page. It just reloads the app:


if (module.hot) {
  module.hot.accept()
}

It will reload the app's state, but the web console log doesn't disappear, and a full browser refresh is usually slower anyway.

9. Running Tests Included

I'll be honest and admit that I don't write tests. Not for side projects. Reason being that I'm almost always a single developer plus the app is most likely more a proof of concept than something supposed to stand the test of time.

But usually writing tests in JavaScript projects is scary. Not because it's hard, but because it's tricky to get started. Which libraries should I use? Usually when you have a full suite up and running you realize that you depend on so many different libraries, and once it works you cry a little when you read on Hacker News about some new fancy test runner with all the bells and whistles. create-react-app builds on Jest, and the documentation carefully takes you through the steps, from simple unit tests to rendering components, code coverage and continuous integration.

10. Dan Abramov

Dan is a powerhouse in the React community. Not just because he wrote Hot Reloading, redux and most of create-react-app, but because he's so incredibly nice.

I follow him on Twitter and his humble persona (aka #juniordevforlife) and positive attitude makes me feel close to him. And he oozes both-feet-on-the-ground-ism when it comes to the crazy world of JavaScript framework/library frenzy we're going through.

The fact that he built most of create-react-app, and that he encourages its uptake, makes me feel like I'm about to climb up on the shoulder of a giant.

Using Fanout.io in Django

December 13, 2016
1 comment Python, Web development, Django, Mozilla, JavaScript

Earlier this year we started using Fanout.io in Air Mozilla to enhance the experience for users awaiting content updates. Here I hope to flesh out its details a bit to inspire others to deploy a similar solution.

What It Is

First of all, Fanout.io is basically a service that handles your WebSockets. You put some of Fanout's JavaScript into your site to handle a persistent WebSocket connection between your site and Fanout.io. And to push messages to your users you basically send them to Fanout.io from the server and they "forward" them to the WebSocket.

The HTML page looks like this:


<html>
<body>

  <h1>Web Page</h1>

<!-- replace the FANOUT_REALM_ID with the ID you get in the Fanout.io admin page -->
<script 
  src="https://{{ FANOUT_REALM_ID }}.fanoutcdn.com/bayeux/static/faye-browser-1.1.2-fanout1-min.js"
></script>
<script src="fanout.js"></script>
</body>
</html>

And the fanout.js script looks like this:


window.onload = function() {
  // replace the FANOUT_REALM_ID with the ID you get in the Fanout.io admin page
  var client = new Faye.Client('https://{{ FANOUT_REALM_ID }}.fanoutcdn.com/bayeux')
  client.subscribe('/mycomments', function(data) {  
     console.log('Incoming updated data from the server:', data);
  })
};

And on the server it looks something like this:


from django import http
from django.conf import settings
import fanout

from myapp.models import Comment  # wherever your Comment model lives

fanout.realm = settings.FANOUT_REALM_ID
fanout.key = settings.FANOUT_REALM_KEY


def post_comment(request):
    """A django view function that saves the posted comment"""
    text = request.POST['comment']
    saved_comment = Comment.objects.create(text=text, user=request.user)
    fanout.publish('mycomments', {'new_comment': saved_comment.id})
    return http.JsonResponse({'comment_posted': True})

Note that, in the client-side code, there's no security since there's no authentication. Any client can connect to any channel. So it's important that you don't send anything sensitive. In fact, you should think of this pattern simply as a hint that something has changed. For example, here's a slightly more fleshed out example of how you'd use the subscription.


window.onload = function() {
  // replace the FANOUT_REALM_ID with the ID you get in the Fanout.io admin page
  var client = new Faye.Client('https://{{ FANOUT_REALM_ID }}.fanoutcdn.com/bayeux')
  client.subscribe('/mycomments', function(data) {
    if (data.new_comment) {
      // the server says a new comment has been posted
      $.getJSON('/comments', function(response) {
        $('#comments .comment').remove();
        $.each(response.comments, function(i, comment) {
          $('<div class="comment">')
          .append($('<p>').text(comment.text))
          .append($('<span>').text('By: ' + comment.user.name))
          .appendTo('#comments');
        });
      });
    }
  })
};

Yes, I know jQuery isn't hip but it demonstrates the pattern well. Also, in the real world you might not want to ask the server for all comments (and re-render) but instead do an AJAX query to get all new comments since some parameter or something.
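For example, a hedged sketch of that "only what's new" variant could look like this (the /comments/new endpoint and the created field are assumptions, not Air Mozilla's actual API):

var latestTimestamp = null;

function fetchNewComments() {
  var url = '/comments/new' + (latestTimestamp ? '?since=' + encodeURIComponent(latestTimestamp) : '');
  $.getJSON(url, function(response) {
    $.each(response.comments, function(i, comment) {
      $('<div class="comment">')
        .append($('<p>').text(comment.text))
        .append($('<span>').text('By: ' + comment.user.name))
        .appendTo('#comments');
      latestTimestamp = comment.created;  // remember the newest one we've rendered
    });
  });
}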

Why It's Awesome

It's awesome because you can have a simple page that updates near instantly when the server's database is updated. The alternative would be to do a setInterval loop that frequently does an AJAX query to see if there's new content to update. This is cumbersome because those AJAX queries are a lot heavier. You might want to make it secure, so you engage sessions that need to be looked up each time. Or, since you're going to request it often, you have to write a very optimized server-side endpoint that is cheap to query frequently.

And last but not least, if you rely on an AJAX loop interval, you have to pick a frequency that your server can cope with and it's likely to be in the range of several seconds or else it might overload the server. That means that updates are quite delayed.

But maybe most important, you don't need to worry about running a WebSocket server. It's not terribly hard to do one yourself on your laptop with a bit of Node Express or Tornado but now you have yet another server to maintain and it, internally, needs to be connected to a "pub-sub framework" like Redis or a full blown message queue.

Alternatives

Fanout.io is not the only service that offers this. The decision to use Fanout.io was taken about a year ago, and one of the attractive things it offers is a freemium option which is ideal for doing local testing. The honest truth is that I can't remember the other justifications used to choose Fanout.io over its competitors, but here are some alternatives that popped up on a quick search:

It seems they all (including Fanout.io) have freemium plans, support authentication, and have REST APIs (for sending and for querying connected clients' stats).

There are also some more advanced feature packed solutions like Meteor, Firebase and GunDB that act more like databases that are connected via WebSockets or alike. For example, you can have a database as a "conduit" for pushing data to a client. Meaning, instead of sending the data from the server directly you save it in a database which syncs to the connected clients.

Lastly, I've heard that Heroku has a really neat solution that does something similar, set up as an extension.

Let's Get Realistic

The solution sketched out above is very simplistic. There are a lot more fine-grained details that you'd probably want to zoom in to if you're going to do this properly.

Throttling

In Air Mozilla, we call fanout.publish(channel, message) from a post_save ORM signal. If you have a lot of saves for some reason, you might be sending too many messages to the client. A throttling solution, per channel, simply makes sure your "callback" gets called only once per channel per small time frame. Here's the solution we employed:


window.Fanout = (function() {
  // the same Faye client as before (replace FANOUT_REALM_ID as usual)
  var _client = new Faye.Client('https://{{ FANOUT_REALM_ID }}.fanoutcdn.com/bayeux');
  var _locks = {};
  return {
    subscribe: function subscribe(channel, callback) {
      _client.subscribe(channel, function(data) {
          if (_locks[channel]) {
              // throttled
              return;
          }
          _locks[channel] = true;
          callback(data);
          setTimeout(function() {
              _locks[channel] = false;
          }, 500);
      });
    }
  };
})();
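Used like this, the callback fires at most once per channel per 500 ms, no matter how many saves trigger publishes on the server:

Fanout.subscribe('/mycomments', function(data) {
  // At most one of these per half second per channel
  console.log('Something changed:', data);
});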

Subresource Integrity

Subresource integrity is an important web security technique where you know in advance a hash of the remote JavaScript you include. That means that if someone hacks the result of loading https://cdn.example.com/somelib.js the browser compares the hash of that with a hash mentioned in the <script> tag and refuses to load it if the hash doesn't match.

In the example of Fanout.io it actually looks like this:


<script 
  src="https://{{ FANOUT_REALM_ID }}.fanoutcdn.com/bayeux/static/faye-browser-1.1.2-fanout1-min.js"
  crossOrigin="anonymous"
  integrity="sha384-/9uLm3UDnP3tBHstjgZiqLa7fopVRjYmFinSBjz+FPS/ibb2C4aowhIttvYIGGt9"
></script>

You get the SHA from the Fanout.io documentation. It requires, and implies, that you use an exact version of the library. You can't use it like this: <script src="https://cdn.example/somelib.latest.min.js" ....
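If you'd rather compute the hash yourself than copy it from the documentation, a small Node sketch like this works (assuming you have the exact file on disk; the filename is just an example):

const crypto = require('crypto');
const fs = require('fs');

// Hash the exact bytes the CDN serves and base64 encode the digest
const body = fs.readFileSync('faye-browser-1.1.2-fanout1-min.js');
const hash = crypto.createHash('sha384').update(body).digest('base64');
console.log('sha384-' + hash);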

WebSockets vs. Long-polling

Fanout.io's JavaScript client follows a pattern that makes it compatible with clients that don't support WebSockets. The first technique it uses is called long-polling. With this, the server basically relies on standard HTTP techniques, but the responses are long-lasting instead. The request simply takes a very long time to respond, and when it does, that's when data can be passed.

This is not a problem for modern browsers. They almost all support WebSocket but you might have an application that isn't a modern browser.

Anyway, what Fanout.io does internally is that it first creates a long-polling connection and then shortly after tries to "upgrade" to WebSockets if they're supported. However, the projects I work on only need to support modern browsers, and there's a trick to tell Fanout to go straight to WebSockets:


var client = new Faye.Client('https://{{ FANOUT_REALM_ID }}.fanoutcdn.com/bayeux', {
    // What this means is that we're opting to have
    // Fanout start with fancy-pants WebSocket and
    // if that doesn't work it falls back on other
    // options, such as long-polling.
    // The default behaviour is that it starts with
    // long-polling and tries to "upgrade" itself
    // to WebSocket.
    transportMode: 'fallback'
});

Fallbacks

In the case of Air Mozilla, it already had a traditional solution whereby it does a setInterval loop that does an AJAX query frequently.

Because the networks can be flaky or because something might go wrong in the client, the way we use it is like this:


var RELOAD_INTERVAL = 5;  // seconds

if (typeof window.Fanout !== 'undefined') {
    Fanout.subscribe('/' + container.data('subscription-channel-comments'), function(data) {
        // Supposedly the comments have changed.
        // For security, let's not trust the data but just take it
        // as a hint that it's worth doing an AJAX query
        // now.
        Comments.load(container, data);
    });
    // If Fanout doesn't work for some reason even though it
    // was made available, still use the regular old
    // interval. Just not as frequently.
    RELOAD_INTERVAL = 60 * 5;
}
setInterval(function() {
    Comments.reload_loop(container);
}, RELOAD_INTERVAL * 1000);

Use Fanout Selectively/Progressively

In the case of Air Mozilla, there are lots of pages. Some don't ever need a WebSocket connection. For example, it might be a simple CRUD (Create, Read, Update, Delete) page. So, for those, I made the whole Fanout functionality "lazy" and it only gets set up if the page has some JavaScript that says it needs it.

This also has the benefit that the Fanout resource loading etc. is slightly delayed until more pressing things have loaded and the DOM is ready.

You can see the whole solution here. And the way you use it here.

Have Many Channels

You can have as many channels as you like. Don't create a channel called comments when you can have a channel called comments-123 where 123 is the ID of the page you're on for example.

In the case of Air Mozilla, there's a channel for every single page. If you're sitting on a page with a commenting widget, it doesn't get WebSocket messages about newly posted comments on other pages.

Conclusion

We've now used Fanout for almost a year in our little Django + jQuery app and it's been great. The management pages in Air Mozilla use AngularJS and the integration looks like this in the event manager page:


window.Fanout.subscribe('/events', function(data) {
    $scope.$apply(lookForModifiedEvents);
});

Fanout.io's been great to us. Really responsive support and very reliable. But if I were to start a fresh new project that needs a solution like this I'd try to spend a little time to investigate the competitors to see if there are some neat features I'd enjoy.

UPDATE

Fanout reached out to help explain more what's great about Fanout.io

"One of Fanout's biggest differentiators is that we use and promote open technologies/standards. For example, our service supports the open Bayeux protocol, and you can connect to it with any compatible client library, such as Faye. Nearly all competing services have proprietary protocols. This "open" aspect of Fanout aligns pretty well with Mozilla's values, and in fact you'd have a hard time finding any alternative that works the same way."

How to deploy a create-react-app

November 4, 2016
2 comments Web development, JavaScript, React

First of all, create-react-app is an amazing kit. It's a zero configuration bundle that gives you a react app boilerplate with a dev server, linting and a deployment tool. All are awesome but not perfect.

I could go on giving this project praise but if you're here reading this you might be convinced already.

Anyway, the way you deploy a create-react-app project is actually stunningly simple, but there is one major caveat to look out for. Basically, running yarn run build will first delete existing files in the ./build/ directory. Files that it intends to replace. For example your ./build/index.html or your ./build/static/js/main.94a86fe3.js.

So, what I suggest is that you deploy it like this:

#!/bin/bash

# Go into the project where the package.json exists
cd myproject
# Upgrade any libraries
yarn
# Build a fresh production bundle into ./build/
yarn run build
mv build build_final

Note! This tip is only applicable if you deploy "in place" as opposed to building a whole new container/image and swapping an old container/image for a new one.

Now, point your Nginx config to the ./build_final directory instead. For example:

# /etc/nginx/sites-enabled/mysite.conf
server {
    server_name mydomain.example.com;
    root /full/path/to/myproject/build_final;

    location / {
        try_files $uri /index.html;
        add_header   Cache-Control public;
        expires      1d;
    }
}

The whole point of this tip is that it's a good idea not to point Nginx to the ./build directory (but to a copy of it instead), because otherwise, during the seconds that yarn run build runs (1-5 seconds), a bunch of files will be missing and Nginx will send 404 errors to the clients unlucky enough to connect during the deployment.

CSS Bloat Comparison

June 3, 2016
0 comments Web development, JavaScript

tl;dr; How much of a negative web performance overhead does including a CSS stylesheet (that you don't use) add to the rendering time? I don't know. But WebPagetest gives us some clues.

To jump straight to the results, check out this video which is the slow-motion rendering of 1 + 5 pages. Each page has one more big fat CSS stylesheet linked than the other. I.e. the 5th one links to 5 different .css URLs. There's also a 0th one which only loads 1 .css file which has nothing in it.

WebPagetest results
The full results are here.

I love CSS frameworks and use them ALL the time. But I'm also interested in web performance and using techniques like real-time static analysis to figure out what CSS that doesn't need to be loaded. Some of those techniques surely lead to less stuff needing to be downloaded but how big is the gain? Not sure.

So I made 6 pages that load a CSS framework but don't actually use any of it. The browser will be instructed to download the file and parse it. That takes time, and the CPU work will surely have an effect on the total rendering time.

In every page, I lastly load a little piece of JavaScript just to make something appear on the page. That means the page will not fully render until AFTER it has loaded the .css files and one little .js file which prints something on the DOM. The reason there's a 5 second delay before it uses AJAX (fetch) to figure out the file sizes is that I don't want that work to affect the rendering on WebPagetest.
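Roughly, each test page's little script does something like this (a reconstruction for illustration, not the actual test files; the element id is an assumption):

// Render a tiny bit of DOM as soon as the JavaScript runs
document.getElementById('mount').textContent = 'Rendered by JavaScript';

// Wait 5 seconds before measuring the CSS sizes so the fetch() work
// can't interfere with the rendering that WebPagetest measures
setTimeout(function() {
  var links = Array.prototype.slice.call(document.querySelectorAll('link[rel=stylesheet]'));
  Promise.all(links.map(function(link) {
    return fetch(link.href).then(function(r) { return r.text(); });
  })).then(function(bodies) {
    var total = bodies.reduce(function(sum, text) { return sum + text.length; }, 0);
    console.log('Total CSS characters:', total);
  });
}, 5000);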

css Bytes

Noteworthy

Notice that there are two cases of "outliers". According to the measurements, bloat3.html and bloat5.html take a shorter time to render than their smaller siblings (bloat2.html and bloat4.html respectively). That seems to indicate - even though WebPagetest does 3 runs each - that "network I/O luck" plays a big role.

Also interesting is that the first file, bloat1.html, loads 118 KB more CSS than bloat0.html, but that clearly doesn't seem to have a big impact.

Conclusion

It does have an effect of reduced web performance. I.e. longer loading time. The page with just 1 .css file takes 0.5 seconds and the one with 5 .css files takes 0.8 seconds. However, the results also indicate that much of the total time is spent waiting for the download. Once the browser has downloaded the payload, it appears to be very fast at parsing it to get ready to render the page accordingly. In other words, don't worry so much about the "bloat" of the content of the CSS file. Worry more about the excessive HTTP requests needed in total.

It would be interesting to inline every CSS file into the .html page and re-run. That means only the .html needs to be downloaded from the network and although the last one will be bigger when all are gzipped the difference isn't huge.

In conclusion, I'm not sure it's a huge performance loss to add big bloated CSS frameworks to your site. Most likely your big wins lie with optimizing the images and JavaScript.

UPDATE

After publishing this, I decided to inline every .css as big <style> tags. For example, bloat4.html (View source).

Here's the result.

And here's the video.

What this proves is that the difference we saw earlier was almost entirely due to "network I/O luck". All pages are gzipped. The smallest one is only 0.19Kb and the largest one is 183Kb. But there's no noticeable difference in the total time it takes to render these two. Basically, the browser's ability to parse CSS is FAST! Don't worry so much about the size of the CSS payload itself. Go forth and make pretty web pages!

Don't that this or bind

April 12, 2016
2 comments JavaScript

Wrong

Having to create a variable outside the nested scope so that when you refer to this you refer to the parent's scope.


var Increaser = function(amount) {
  this.amount = amount;
};
Increaser.prototype.add = function(value) {
  if (Array.isArray(value)) {
    var that = this;  // NOTE!
    return value.map(function(item) {
      return item + that.amount;
    }); 
  } else {
    return value + this.amount;
  }
};

var inc = new Increaser(2);

console.log(inc.add(10)); // 12
console.log(inc.add([1, 2, 3])); // [3, 4, 5]

On CodePen

Why it's bad. Because it's a code smell. Meaning, it's a hack that goes against what's natural. Also, because it's not necessary. There is a better solution. Hold tight. Code smells have a tendency to get worse. In this case we only have 1 function with its own scope, so we can allow ourselves to call the variable just "that". If it was more complex, we'd have to call it "first_this" or "outer_that" or something ugly.

It's a cheap solution and it works but the risk is that the code becomes hard for humans to debug once it grows in scope.

Also Wrong But Better


var Increaser = function(amount) {
  this.amount = amount;
};
Increaser.prototype.add = function(value) {
  if (Array.isArray(value)) {
   return value.map(function(item) {
     return item + this.amount;
   }.bind(this));  // NOTE!
  } else {
    return value + this.amount;
  }
};

var inc = new Increaser(2);

console.log(inc.add(10)); // 12
console.log(inc.add([1, 2, 3])); // [3, 4, 5]

On CodePen

Why it's bad. Using .bind() creates a new function. It might not matter in this scenario but asking the JavaScript engine to create yet another function object in memory might matter in terms of optimization.

Why it's better. Because you "fix things" before it gets worse. This way, when deep inside the nested scope you don't need to juggle the name of what the this has temporarily been re-bound to.

Righter

The map function takes a second argument that is the context. This is true for forEach, filter and find too.


var Increaser = function(amount) {
  this.amount = amount;
};
Increaser.prototype.add = function(value) {
  if (Array.isArray(value)) {
   return value.map(function(item) {
     return item + this.amount;
   }, this);  // NOTE!
  } else {
    return value + this.amount;
  }
};

var inc = new Increaser(2);

console.log(inc.add(10)); // 12
console.log(inc.add([1, 2, 3])); // [3, 4, 5]

Why it's better.

No new assignment of a variable. No need to bind, which means it doesn't create a new function. And it's built-in.

Much Righter

Switch to ES6! Then you can use fat arrow functions.


class Increaser {
  constructor(amount) {
    this.amount = amount;
  }
  add(value) {
    if (Array.isArray(value)) {
      return value.map(item => {
        return item + this.amount;
      }); 
    } else {
      return value + this.amount;
    }
  }
}

let inc = new Increaser(2);

console.log(inc.add(10)); // 12
console.log(inc.add([1, 2, 3])); // [3, 4, 5]

On CodePen

Why it's better: Because then the whole problem goes away. Fat arrow functions are functions that have no scope of their own. Just like if statements. (EDIT: That's an oversimplification. They do have their own scope, just no own arguments or own this.)

4 different kinds of React component styles

April 7, 2016
4 comments JavaScript, React

I know I'm going to be laughed at for having misunderstood the latest React lingo and best practice. But guess what, I don't give a ...

I'm starting to like React more and more. There's a certain element of confidence about them since they only do what you ask them to do and even though there's state involved, if you do things right it feels like it's only one direction that state "flows". And events also only flow in one direction (backwards, sort of).

However, an ugly wart with React is that it's hard to learn. All powerful things are hard to learn, but it's certainly not made easier when there are multiple ways to do the same thing. What I'm referring to is how to write components.

Partly this is a way for me to learn and summarize what I've come to understand, and partly it's to jot it down so others can be helped by the same summary. Others who are in a similar situation as I am with learning React.

The default Component Class

This is what I grew up learning. This is code you most likely start with and then realize, there is no need for state here.


class Button extends React.Component {

  static propTypes = {
    day: PropTypes.string.isRequired,
    increment: PropTypes.func.isRequired,
  }

  render() {
    return (
      <div>
        <button onClick={this.props.increment}>Today is {this.props.day}</button>
      </div>
    )
  }
}

The old style createClass component

I believe this is what you used before you had ES6 so readily available. And I heard a rumor from Facebook that this is going to be deprecated. Strange rumor considering that createClass is still used in the main documentation.


const Button = React.createClass({
  propTypes: {
    day: PropTypes.string.isRequired,
    increment: PropTypes.func.isRequired,
  },

  render: function() {
    return (
      <div>
        <button onClick={this.props.increment}>Today is {this.props.day}</button>
      </div>
    )
  }
})

The Stateless Function component

Makes it possible to do some JavaScript right there before the return


const Button = ({
  day,
  increment
}) => {
  return (
    <div>
      <button onClick={increment}>Today is {day}</button>
    </div>
  )
}

Button.propTypes = {
  day: PropTypes.string.isRequired,
  increment: PropTypes.func.isRequired,
}

The Presentational Component

An ES6 shortcut trick whereby you express a one-liner arrow function with an implicit return instead of giving it a body of its own.


const Button = ({
  day,
  increment
}) => (
  <div>
    <button onClick={increment}>Today is {day}</button>
  </div>
)

Button.propTypes = {
  day: PropTypes.string.isRequired,
  increment: PropTypes.func.isRequired,
}

Some thoughts and reactions

  • The advantage with the class is that you can write a shouldComponentUpdate hook method. That's applicable when you have intimate knowledge of the props and state and you might know that deep inside the props or the state there are differences you don't need to consider different enough to warrant a re-render of the component. Arguably, if you're in that rabbit hole and need some optimization hack like that, perhaps it's time to break things up.

  • The Stateless Function pattern and the Presentational Component are both functions. Here's what they look like converted to ES5: One and Two. Basically no difference.

  • The Stateless Function and the Presentational Component both suffer from the ugliness of that propTypes guard hanging outside the code. That makes it the opposite of encapsulated/bundled. You have to remember to copy two things.

  • A lot of smarter-than-me people seem to indicate that classes in JavaScript are a bad thing, and I haven't personally understood that argument yet. What I do know is that I kinda like the bundling. You have the whole component in a little package under one name, and inside you can put little helper functions/methods that support the render function. Also, having state in one of those classes is optional. Just because a component doesn't need state doesn't mean you have to use a functional component. Also, the class is great for setting up side-effects in componentWillMount and cancelling them in componentWillUnmount.

  • Supposedly with React.createClass() you can use mixins, but I've never used that. Is mixins something that's rapidly going out of fashion? I think I need to go back and properly read Mixins Are Dead. Long Live Composition.

  • Boy I wish there was only one way to do things and only one single name. In Django you used to only have view functions. Then class-based views came along and the diversion caused a lot of strain, anger and confusion. "Why should I use which?! I hate change!" was a common noise. However, JavaScript is what it is and React is newfangled stuff.

  • I kinda like the "statement" you make when you write a stateless/presentational function component. Just by seeing its signature you can tell that it won't mess with state inside. But if your needs grow over time and you realize you need a bit of state solely for that component, you have to rewrite it entirely, right?

  • I love plainly writing down the props I need as argument and not have to write this.props.myPropthing. Makes it easy to debug what the code does.

  • Is there not a way to put that Button.propTypes thing in the first React.Component style into the class?? UPDATE There is! Thanks Emiliano for showing me how.

  • If you're prepared to remember more terminology; the class component style is called a Container Component. I like that name!

Please, please share your thoughts and reactions and I'll try to collect them and incorporate them into this blog post.

A quicksearch for Bugzilla using Autocompeter

January 27, 2016
0 comments Python, Web development, Mozilla, JavaScript

Here's the final demo.

What I did was, I used the Bugzilla REST API to download all bugs for a specific product. Then I bulk-uploaded them to Autocompeter.com and lastly built a simple web front-end.

When you "download all" bugs with the Bugzilla REST API, it might be capped but I don't know what the limit is. The trick is to not download ALL bugs for the product in one big fat query, but to find out what all components are for that product and then download for each. The Python code is here.

Everyone's Invited to Play

So first you need to sign in on https://autocompeter.com using your GitHub account. Then you can generate an Auth-Key by picking a domain. The domain can be anything really. I picked bugzilla.mozilla.org but you can use whatever you like.

Then, when you have an Auth-Key you need to know the name of the product (or products) and run the script like this:

python download.py 7U4eFYH5cqR15m3ekuxkzaUR Socorro

Once you've done that, fork my codepen and replace the domain and any other references to the product.

Caveats

To make this really useful, you'd have to run it more often. Perhaps you can hook it up to a cron job or something and make it so that you only download, from the REST API, things that have changed since the last time you did a big download. Then you can let the cron job run frequently.

If you want really hot results, you could hook up a server-side service that consumes the Bugzfeed websocket.

Last but not least: this will never list private/secure bugs. Only publicly available stuff.

The Future

If people enjoy it perhaps we can change the front-end demo so it's not hardcoded to one specific product ("Socorro" in my case). And it can be made pretty.

And the data would need to be downloaded and re-submitted more frequently. A quick Heroku app mayhaps?