Filtered by JavaScript


Advanced Closure Compiler vs UglifyJS2

January 20, 2016
12 comments JavaScript

A couple of years ago I wrote a blog post titled "Comparing Google Closure with UglifyJS". It concluded that Closure Compiler compressed files down to 45.6% of the original size, whereas UglifyJS only managed 51.5%. But UglifyJS was 1220% faster, so I concluded I was going to stick with UglifyJS.

But things have changed since 2011. UglifyJS2 came out and stealthily replaced the original implementation (npm install uglify-js) and it has a --mangle option. Also, in the original experimental blog post I didn't use -O advanced when using Closure Compiler.
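
For reference, the kind of invocations being compared look roughly like this. This is a sketch from memory, not the actual script from the post; the flags are approximate and the app.js file name is made up:

// Rough comparison sketch (Node.js), assuming uglify-js and the Closure
// Compiler jar are installed.
const { execSync } = require('child_process')
const fs = require('fs')

const source = 'app.js'  // hypothetical input file

execSync(`uglifyjs ${source} --mangle --compress -o app.uglify.js`)
execSync(
  `java -jar compiler.jar --compilation_level ADVANCED_OPTIMIZATIONS ` +
  `--js ${source} --js_output_file app.closure.js`
)

const outputs = ['app.uglify.js', 'app.closure.js']
outputs.forEach((out) => {
  const ratio = fs.statSync(out).size / fs.statSync(source).size
  console.log(out, (100 * ratio).toFixed(1) + '% of original size')
})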

So I whipped up a quick script to compare the two. Here's some of the output:

Truncated! Read the rest in the full post.

Headsupper.io

December 5, 2015
0 comments Python, Web development, Django, JavaScript, React

tl;dr

Headsupper.io is a free GitHub webhook service that emails people when commits contain the configurable keyword "headsup".

Introduction

Headsupper.io is great when you have a GitHub project with multiple people working on it and, when you make a commit, you want to notify other people by email.

Basically, you set up a GitHub Webhook, on pushes, to push to https://headsupper.io and then it'll parse the incoming push and its commits and look for certain things in the commit message. By default, it'll look for the word "headsup". For example, a git commit message might look like this:

fixes #123 - more juice in the Saab headsup! will require updating

Or you can use the multi-line approach, where the first line is short and sweet and, after the break, a bit more elaborate:

bug 1234567 - tea kettle upgrade 2.1

Headsup: Next time you git pull from master, remember to run 
peep install on the requirements.txt file since this commit 
introduces a bunch of crazy dependency changes.

Git commits that come through that don't have any match on this word will simply be ignored by Headsupper.

How you use it

Maybe paradoxically, you need to authenticate with your GitHub account but that's in read-only mode and does NOT set up the Webhook for you. The reason you have to authenticate to prepare a configuration on headsupper.io is to tie the configuration to a real user.

Once you've authenticated you get the option to create your first configuration. You have to enter at least these three pieces of information:

  1. The GitHub "full name". This is the org name, slash, repo name. E.g. peterbe/django-peterbecom or mozilla/socorro.
  2. Pick a secret. Remember what you typed, because you'll need to type in this same secret when you set up the Webhook on your GitHub project's Webhooks page. (This is used to checksum and verify the source of the Webhook push)
  3. Who to send to. A list of email addresses separated with a newline or a semi-colon.

Once you've set that up, you'll need to go to your GitHub project's Setting page and enter a new Webhook and the URL you need to type in is https://headsupper.io and for the "Secret" type in that secret you used earlier. That's it!

Rules and options

The word that triggers is configurable by you. The default is "headsup". And by default, it's case insensitive; you can change that so it's case sensitive. Also, the word has to be word-delimited on the left (e.g. a space or a newline character) and on the right it needs to be followed by a space, a : or a !. So this won't match: theheadsup: or headsupper.
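
Expressed as a regular expression, that default matching rule amounts to something like this. It's a sketch of the rule as described, not the actual Headsupper source:

// Word-delimited on the left (start of string, space or newline);
// followed on the right by a space, a ':' or a '!'. Case insensitive
// by default.
var trigger = /(^|\s)headsup([\s:!])/i;

console.log(trigger.test('more juice in the Saab headsup! will require updating'));  // true
console.log(trigger.test('theheadsup: something'));  // false
console.log(trigger.test('a headsupper commit'));    // false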

Other optional things you can configure are:

  • Which git branch to trigger on (by default it's master)
  • Which emails to CC when it sends
  • Which emails to BCC when it sends
  • Only send when you make a tag

That last option, Only send when a new tag is created, is interesting. I added that option because at work, we make production server releases by pushing a git tag. When a tag is pushed, all those commits are sent to the continuous deployment service which makes a server upgrade. This means you get a chance to enter a heads up message to be emailed to the people who care about new deployments going out.

How it was built

It's a mix of Django and ReactJS. The whole client-side app is built statically with Webpack in ES6. It's served as static files through Nginx, but Nginx makes an exception for all URLs that start with /api or /accounts. The /api/* URLs are used for loading and setting JSON. The /accounts/* URLs are used for the GitHub OAuth endpoints.

What's interesting about this architecture is that it's using HTTP cookies, not API tokens. Cookies are quite good in that they're well established and the browser does all the automated work of keeping them secure and making each request potentially authenticated.
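
As an illustration of what that buys you on the client side, a cookie-authenticated API call is as plain as the following. This is a sketch, not the actual Headsupper code, and the /api/configurations endpoint is made up for the example:

// The browser attaches the session cookie itself; no token handling needed.
fetch('/api/configurations', {credentials: 'same-origin'})
.then((response) => response.json())
.then((configurations) => {
  console.log('Loaded', configurations.length, 'configurations')
})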

Here's the relevant React code and here's the relevant Django code that processes the Webhook.

The whole project is available on: https://github.com/peterbe/headsupper.

Also, I made a demo at the November Mozilla Beer and Tell.

Chainable catches in a JavaScript promise

November 5, 2015
6 comments Web development, JavaScript

If you have a Promise that you're executing, you can chain multiple things quite nicely by simply returning the value as it "passes through".
For example:


new Promise((resolve) => {
  resolve('some value')
})
.then((value) => {
  console.log('1', value)
  return value
})
.then((value) => {
  console.log('2', value)
  return value
})

This will console log

1 some value
2 some value

And you can add more .then() to it. As many as you like. Just remember to "play ball" by passing the value. In fact, you can actually pass a different value. Like this for example:


new Promise((resolve) => {
  resolve('some value')
})
.then((value) => {
  console.log('1', value)
  return value
})
.then((value) => {
  console.log('2', value)
  return value.toUpperCase()
})
.then((value) => {
  console.log('3', value)
  return value
})

Demo here. This'll console log

1 some value
2 some value
3 SOME VALUE

But how do you do the same with multiple .catch()?

This is NOT how you do it:


new Promise((resolve, reject) => {
  reject('some reason')
})
.catch((reason) => {
  console.warn('1', reason)
  return reason
})
.catch((reason) => {
  console.warn('2', reason)
  return reason
})

Demo here. When you run that you just get:

1 some reason

To chain catches you have to re-raise (aka re-throw) it:


new Promise((resolve, reject) => {
  reject('some reason')
})
.catch((reason) => {
  console.warn('1', reason)
  throw reason
})
.catch((reason) => {
  console.warn('2', reason)
})

Demo here. The output if you run this is:

1 some reason
2 some reason

But you have to be a bit more careful here. Note that the second .catch() doesn't re-throw the reason one last time. If you did that, you'd get a general JavaScript error on that page, i.e. an unhandled error that makes it all the way out to the web console. Meaning, you have to be aware of errors and take care of them.
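
To make that concrete, this is the version to avoid. It's the same chain as above, sketched with one extra mistake: the trailing throw has no .catch() after it, so the rejection escapes as an unhandled error:

new Promise((resolve, reject) => {
  reject('some reason')
})
.catch((reason) => {
  console.warn('1', reason)
  throw reason
})
.catch((reason) => {
  console.warn('2', reason)
  // Re-throwing here is the mistake: nothing catches it further down,
  // so it surfaces as an unhandled error in the web console.
  throw reason
})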

Why does this matter?

It matters because you might want to have, for example, both a low-level and a high-level way of dealing with errors. For example, you might want to log all exceptions AND still pass them along so that higher-level code can be aware of them. For example, suppose you have a function that fetches data using the fetch API. You use it from multiple places and you don't want to have to log it everywhere. Instead, that wrapping function can be responsible for the logging, but you still have to deal with the error where the function is used.

For example, this is contrived but not totally unrealistic code:


let fetcher = (url) => {
  // this function might be more advanced
  // and do other fancy things
  return fetch(url)
}

// 1st
fetcher('http://example.com/crap')
.then((response) => {
  document.querySelector('#result').textContent = response
})
.catch((exception) => {
  console.error('oh noes!', exception)
  document.querySelector('#result-error').style['display'] = 'block'
})

// 2nd
fetcher('http://example.com/other')
.then((response) => {
  document.querySelector('#other').textContent = response
})
.catch((exception) => {
  console.error('oh noes!', exception)
  document.querySelector('#other-error').style['display'] = 'block'
})

Demo here

Notice how each .catch() handler does the same kind of logging but presents the error to the human differently.
Wouldn't it be nice if you could have a general and central .catch() for the logging, but continue dealing with the errors in a human way?

Here's one such example:


let fetcher = (url) => {
  // this function might be more advanced
  // and do other fancy things
  return fetch(url)
  .catch((exception) => {
    console.error('oh noes! on:', url, 'exception:', exception)
    throw exception
  })
}

// 1st
fetcher('http://example.com/crap')
.then((response) => {
  document.querySelector('#result').textContent = response
})
.catch(() => {
  document.querySelector('#result-error').style['display'] = 'block'
})

// 2nd
fetcher('http://example.com/other')
.then((response) => {
  document.querySelector('#other').textContent = response
})
.catch(() => {
  document.querySelector('#other-error').style['display'] = 'block'
})

Demo here

Here you get the best of both worlds. You have a central place where all exceptions are logged in a nice way, and the higher level code only has to deal with the human way of explaining that something went wrong.

It's pretty basic but it's probably useful to somebody else who gets confused about how to deal with exceptions in promises.

How to "onchange" in ReactJS

October 21, 2015
28 comments JavaScript, React

Normally, in vanilla JavaScript, the onchange event is triggered after you have typed something into a field and then "exited out of it", e.g. clicked outside the field so the cursor isn't blinking in it any more. Like this, for example:


document.querySelector('input').onchange = function(event) {
  document.querySelector('code').textContent = event.target.value;
}

First of all, let's talk about what this is useful for. One great example is a sign-up form where you have to pick a username or type in an email address or something. Before the user gets around to pressing the final submit button you might want to alert them early that their chosen username is available or already taken. Or you might want to alert early that the typed in email address is not a valid one. If you execute that kind of validation on every key stroke, it's unlikely to be a pleasant UI.

Problem is, you can't do that in ReactJS. It doesn't work like that. The explanation is quite non-trivial:

*"<input type="text" value="Untitled"> renders an input initialized with the value, Untitled. When the user updates the input, the node's value property will change. However, node.getAttribute('value') will still return the value used at initialization time, Untitled.

Unlike HTML, React components must represent the state of the view at any point in time and not only at initialization time."*

Basically, you can't easily rely on the input field because the state needs to come from the React app's state, not from the browser's idea of what the value should be.

You might try this


var Input = React.createClass({
  getInitialState: function() {
    return {typed: ''};
  },
  onChange: function(event) {
    this.setState({typed: event.target.value});
  },
  render: function() {
    return <div>
        <input type="text" onChange={this.onChange.bind(this)}/>
        You typed: <code>{this.state.typed}</code>
      </div>
  }
});
React.render(<Input/>, document.querySelector('div'));

But what you notice is that the onChange handler is fired on every key stroke, not just when the whole input field has changed.

So, what to do?

The trick is surprisingly simple. Use onBlur instead!

Same snippet but using onBlur instead


var Input = React.createClass({
  getInitialState: function() {
    return {typed: ''};
  },
  onBlur: function(event) {
    this.setState({typed: event.target.value});
  },
  render: function() {
    return <div>
        <input type="text" onBlur={this.onBlur.bind(this)}/>
        You typed: <code>{this.state.typed}</code>
      </div>
  }
});
React.render(<Input/>, document.querySelector('div'));

Now, your handler is triggered after the user has finished with the field.

localStorage is not async, but it's FAST!

October 6, 2015
7 comments Web development, AngularJS, JavaScript

A long time ago I wrote an AngularJS app that was pleasantly straightforward. It loads all records from the server in one big fat AJAX GET. The data is large, ~550Kb as a string of JSON, but that's OK because it's a fat-client app and it's extremely unlikely to grow by any multiples of this. Yes, it'll some day go up to 1Mb but even that is fine.

Once ALL records are loaded with AJAX from the server, you can filter the whole set, paginate, etc. It feels really nice and snappy. However, the app is slightly smarter than that. It has a few cool additional features...

  1. Every 10 seconds it does an AJAX query to ask "Have any records been modified since {{insert latest modify date of all known records}}?" and if there's stuff, it updates.

  2. All AJAX queries from the server are cached in the browser's local storage (note, I didn't write localStorage; "local storage" encompasses multiple techniques). The purpose of that is so that on the next full load of the app, we can at least display what we had last time whilst we wait for the server to return the latest and greatest via a slowish network request.

  3. Suppose we have a brand new browser with no local storage. Because the default sort order is always known, instead of doing a full AJAX GET of all records, it does a small one first: "Give me the top 20 records ordered by modify date" and once that's in, it does the big full AJAX request for all records. Thus bringing data to the eyes faster.

All of these optimization tricks are accompanied by a flash message at the top that says: <img src="spinner.gif"> Currently using cached data. Loading all remaining records from server....
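
Sketched in plain terms, the overall load sequence looks something like this. It's the idea, not the actual app's code; the endpoint URLs, the storage key and the render() callback are made up for the example:

// A sketch of the cached-data-first load sequence described above.
function fetchJSON(url) {
  return fetch(url).then(function(response) { return response.json() })
}

function loadRecords(render) {
  // 1. Show whatever we had last time, immediately.
  var cached = JSON.parse(localStorage.getItem('records') || '[]')
  if (cached.length) {
    render(cached, 'Currently using cached data. Loading all remaining records from server...')
  } else {
    // 2. Nothing cached: fetch a small first page so something shows up fast.
    fetchJSON('/api/records?order_by=-modified&limit=20').then(render)
  }
  // 3. Either way, fetch the full set, cache it and replace what's on screen.
  return fetchJSON('/api/records').then(function(records) {
    localStorage.setItem('records', JSON.stringify(records))
    render(records)
  })
}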

When I built this I decided to use localForage, which is a convenience wrapper over localStorage AND IndexedDB that does it all asynchronously and with proper promises. And to make it work in AngularJS I used angular-localForage so it would work with Angular's cycle updates without custom $scope.$apply() stuff. I thought the advantage of this was that being async means the main event loop can continue doing important rendering stuff whilst the browser saves things to "disk" in the background.

Also, I was once told that localStorage, which is inherently blocking, has the risk that calling it for the first time in a while might cause the browser to take a major break to boot data from actual disk into the browser's allocated memory. Turns out, that is extremely unlikely to be a problem (more about this in a future blog post). The warming up of fetching from disk and storing into the browser's memory happens when you start the browser the very first time. Chrome might be slightly different but I'm confident that this is how things work in Firefox, and it has been for many months.

What's very important to note is that, by default, localForage will use IndexedDB as the storage backend. It has the advantage that it's async to boot and it supports much larger data blobs.

So I timed: how long does it take for localForage to SET and GET the ~500Kb JSON data? I did that like this, for example:


var t0 = performance.now();
$localForage.getItem('eventmanager')
.then(function(data) {
    var t1 = performance.now();
    console.log('GET took', t1 - t0, 'ms');
    ...

The results are as follows:

Operation Iterations Average time
SET 4 341.0ms
GET 4 184.0ms

In all fairness, it doesn't actually matter how long it takes to save because my app doesn't depend on waiting for that promise to resolve. But it's an interesting number nevertheless.

So, here's what I did. I decided to drop all of that fancy localForage stuff and go back to basics. All I really need is these two operations:


// set stuff
localStorage.setItem('mykey', JSON.stringify(data))
// get stuff
var data = JSON.parse(localStorage.getItem('mykey') || '{}')
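
The measurements themselves can be taken with a trivial performance.now() wrapper along these lines. This is a sketch of the approach, not the exact instrumentation used; the stand-in payload is made up:

// Time the synchronous localStorage round trip of a big JSON blob.
var data = {records: new Array(10000).fill({title: 'some record'})}  // stand-in for the real ~500Kb payload

var t0 = performance.now()
localStorage.setItem('eventmanager', JSON.stringify(data))
var t1 = performance.now()
console.log('SET took', t1 - t0, 'ms')

var t2 = performance.now()
var restored = JSON.parse(localStorage.getItem('eventmanager') || '{}')
var t3 = performance.now()
console.log('GET took', t3 - t2, 'ms')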

So, after I've refactored my code and deleted (6.33Kb + 22.3Kb) of extra .js files and put some performance measurements in:

Operation Iterations Average time
SET 4 5.9ms
GET 4 3.3ms

Just WOW!
That is so much faster. Sure, the write operation is now blocking, but it's only taking 6 milliseconds. And the fact that IndexedDB takes a few hundred milliseconds probably also means it's doing a lot more hard work, sweating CPU, behind the scenes.

Sold? I am :)

Using Lovefield as an AJAX proxy maybe

September 30, 2015
1 comment Web development, JavaScript

Lovefield, by Arthur Hsu at Google, is a cool little JavaScript browser abstraction on top of IndexedDB. IndexedDB is this amazingly powerful asynchronous framework for storing data in the browser, tied to the domain you're visiting. Unlike its much "simpler" sibling localStorage, with IndexedDB you can store individual keys in a schema, use indexes for faster retrieval, query asynchronously and use much larger storage capacity than general DOM storage (e.g. localStorage and sessionStorage).

What Lovefield brings is best described by watching this video. But to save you time let me try...:

  • "Lovefield is a relational database for web apps"
  • Adds structural query capability to IndexedDB without having to work with any callbacks
  • It's not plain SQL strings that get executed but instead something similar to an ORM (e.g. SQLAlchemy)
  • Supports indexes for speedier lookups
  • Supports doing joins across different "tables"
  • Works in IE, Chrome, Firefox and Safari (although I don't know about iOS Safari)

Anyway, it sounds really cool and I'm looking forward to using it for something. But before I do, I thought I'd try using it as an "AJAX proxy".

So what's an AJAX proxy, you ask? Well, it can mean many things, but what I have in mind is a pattern where a web app's MVC is tied to local storage and the local storage is tied to AJAX. That means two immediate benefits:

  • The MVC part of the web app is divorced from understanding network APIs.
  • The web app becomes offline capable to boot. Being able to retrieve fresh data from the network (or send!) is a luxury that you automatically get if you have a network connection.

Another subtle benefit is a "corner case" of that offline capability: when you load up the app you can read from local storage much, much faster than from the network, meaning you can display user data on the screen sooner. (With the obvious caveat that it might be stale and that the data will change once the network read completes later.)
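
The read path of such a proxy can be sketched like this, with localStorage standing in for the storage layer for brevity. The /api/posts URL and the storage key are made up for the example; the point is only the shape of it:

// The MVC layer calls getPosts() and never talks to the network directly.
function getPosts() {
  var cached = JSON.parse(localStorage.getItem('posts') || 'null')
  var fresh = fetch('/api/posts')
    .then(function(response) { return response.json() })
    .then(function(posts) {
      localStorage.setItem('posts', JSON.stringify(posts))
      return posts
    })
  // The caller gets the cached copy synchronously (if there is one) and
  // the fresh copy whenever the network catches up.
  return {cached: cached, fresh: fresh}
}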

So, to take this idea for a spin (specifically, using local storage to remember the data loaded last time), I extended my AJAX or Not playground with a hybrid that uses React to render the data, but renders the data from Lovefield (and from localStorage too). Remember, it's an experiment so it's a bit clunky and perhaps contrived. The objective is to notice how soon after loading the page that data becomes available for your eyes to consume.

Here is the playground test page

You have to load it at least once to fill your IndexedDB with some data from an AJAX request. Then, reload the page and it'll display what it has locally (in IndexedDB extracted with the Lovefield API). Then, after it's loaded, try refreshing the browser repeatedly. With or without a Wifi connection.

Basically, it works. However, perhaps I've chosen the worst possible test bed for playing with Lovefield. Because it is super slow. If you open the web console, you'll see it reports how long it takes to extract the data out of Lovefield. The code looks like this:


...
getPostsFromDB: function() {
  return schemaBuilder.connect().then(function(db) {
    var table = db.getSchema().table('post');
    return db.select().from(table).exec();
  });
},
...
var t0 = performance.now();
this.getPostsFromDB()
.then(function(results) {
  var t1 = performance.now();
  console.log(results.length, 'records extracted');
  console.log((t1 - t0).toFixed(2) + 'ms to extract');
  ...

You can see the source here in full.

So out of curiosity, I forked this experiment. I kept almost all the React code but replaced the Lovefield stuff with good old JSON.parse(localStorage.getItem('posts') || '[]'). See code here.
This only takes 1-2 milliseconds. Lovefield repeatedly takes about 400-550 milliseconds on my Firefox version 43.

By the way, load up the localStorage fork and after a first load, try refreshing it over and over and notice how amazingly fast it is. Yay localStorage!

Some tips on learning React

August 4, 2015
0 comments JavaScript, React

In the last month I've been playing with React in my spare time. Said time is extremely limited so I'm unable to read the documentation or source code as a way to learn. Instead, I follow various tutorials and snippets on stackoverflow etc. to get going. I have something now that will soon be production ready and I'm excited about it even though it only scratches the surface of what React can do.

If you're learning React, starting from more or less scratch, here are some of my tips:

  1. Start with skimming the official tutorial or this little gem on scotch.io. Both start with a simple index.html in which you include JSXTransformer.js and you write your React/JSX code in a <script type="text/jsx"> block. Not a bad idea. That helps you appreciate what JSX is and how it relates to turning your code into production code.

  2. Don't fear jQuery! Those who've leapt from jQuery-based apps to AngularJS or Ember are often told to stay away from jQuery and learn to do it the "new way". But don't fear jQuery. Suppose you're working on a widget/component with React code; then you can continue to let your trusted jQuery code handle some other effect or widget on the page, like a Bootstrap modal window or something. Apparently, at Facebook, the first production use of React was just the little commenting widget underneath posts. They didn't rewrite the whole site in React when it all started. Also, even the official tutorial advocates using jQuery to do an AJAX fetch. (Personally I prefer the built-in fetch and this polyfill for doing AJAX fetches.)

  3. Avoid "super-normalization" of components. A lot of React apps is about one master component rendering one or more other components that renders itself with other components. That's fine but can easily get messy when you're starting out. Don't fear writing extra rendering functions in your class instead of always relying on writing yet another deep component. For example, this is perfectly fine.
    It's good to split up distinct functionality into sub-components but don't go overboard. Ideal things for sub-components are things that have their own and different context. Just calling other local functions is for when your render function simply gets too many lines long.

  4. Avoid ES6 (aka Babel) unless you're comfortable with tooling like Webpack and Gulp. I personally jumped into the deep end straight away writing my first big React app in ES6 which has been fun but it's been hard sometimes to find matching resources. In particular around testing frameworks. A lot of stackoverflow posts and blog posts don't use ES6 so some things just don't work. I chose ES6 because I was curious about React and this project is not on any deadline so I was OK with getting stuck here and there.

  5. Remember, React is just JavaScript. As someone who has done a lot of AngularJS, I sometimes stop and have to think when I'm in AngularJS, "How do I do this the AngularJS way?". For example, in AngularJS you can't just use var timer = setTimeout(function() { ... because you're leaving "the way AngularJS works". React has its own state-awareness pitfalls but it's not nearly as precarious. Just write your code. It's just JavaScript. Use React for what it's good at but don't be scared to just write code. Having said that, it might help to be aware of how binding this works. Here's a good example. (And here's a good counter-example to avoid too much function() {...}.bind(this) noise.)

  6. Learn to distinguish between state and props. It's confusing at first, especially in terms of which to use when. My attempt at explaining it is that you can think of the state as the database and the props as that data being extracted and passed around to be shown and changed. There's a small sketch of the distinction right after this list.

  7. Let render() just render. Every component has a render function. Its job is to render the current state and nothing else. It's tempting to do too much logic in there before it returns the JSX but you should avoid it. Suppose your render function renders a list of objects. You might be tempted to apply filtering or sorting of that list using other cues, like the state, before displaying. Try to avoid that; it means you can't change the state without causing that whole filtering/sorting logic to run again. Basically, keep the render function short and simple if you can.
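
Here's the small sketch promised in tip 6. It's a made-up example, but it shows the division of labour: the parent owns and changes the state (the "database"); the child only ever sees props (data extracted and passed down to be shown):

var PersonList = React.createClass({
  getInitialState: function() {
    // state: owned and mutated here, via setState
    return {people: ['Ashley', 'Bob']};
  },
  addPerson: function(name) {
    this.setState({people: this.state.people.concat([name])});
  },
  render: function() {
    return <ul>
      {this.state.people.map(function(name) {
        // props: read-only data handed down to be displayed
        return <Person key={name} name={name}/>;
      })}
    </ul>;
  }
});

var Person = React.createClass({
  render: function() {
    return <li>{this.props.name}</li>;
  }
});

React.render(<PersonList/>, document.querySelector('div'));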

Note, I'm a beginner too. My hope is that by sharing these tips more people will get a chance to enjoy React too without being too intimidated by all the things you think you need to learn and understand to build something.

Visual speed comparison of AngularJS and ReactJS

July 20, 2015
0 comments Web development, AngularJS, JavaScript

Last year I put together a little experiment called AJAX or Not? and blogged about it here. The basic idea was to display 1,000 rows in a table. There are several ways of doing it but I decided to compare the following three patterns:

  1. Rendering the whole table in Django server-side and return the whole HTML document.
  2. Rendering a skeleton page, then load the table content as a big chunk of HTML via AJAX.
  3. Rendering a skeleton page, and let AngularJS load all the content of the table from the server as JSON and let AngularJS render it into the DOM.

It was clear as day that the server-side rendering version was hands down the fastest. And the AngularJS rendering the slowest.

Note! AngularJS is amazing and super flexible and powerful because you don't really need to worry about how to re-render once the data changes. This is really useful when you do things like loading more data from a remote endpoint or doing some in-page filtering.

Enter ReactJS

The point of AJAX or Not was not to compare Javascript frameworks but I had some time and I thought I'd write an equivalent version of the AngularJS one with ReactJS (version 0.13.3).

Anyway, here's the code and it's using the GitHub fetch polyfill to do the AJAX query. The AngularJS code is here and here and as you can see it's using track by on the ng-repeat.

WebPageTest
To measure the difference I ran a comparison in WebPageTest, which I encourage you to open and study for a bit. You can watch the video and download it here.

Also, note that the Django rendered version loads jQuery. That's because the functionality dictates that clicking on a link should show a confirmation box before going to the link. I know, it's silly but it's very realistic that every page needs some Javascript functionality.

Executive summary...

  • Django server-side takes 0.8 seconds, ReactJS version takes 2.0 seconds and the AngularJS version takes 2.9 seconds.
  • The ReactJS version is the fastest to display something. It displays the header and the image first. Only by 0.2 seconds before the Django server-side version.
  • The AngularJS version causes a lot more CPU utilization. This might really matter when you're on a low-end smartphone.
  • The ReactJS version causes twice as much CPU utilization as the server-side version. The AngularJS version causes twice as much CPU utilization as the ReactJS version.
  • AngularJS is slightly larger than ReactJS + fetch but I don't think this has a huge effect on the total load time.

Some other thoughts...

  • The ReactJS code is all in one place more or less. That's neat! But it's pretty darn big in terms of number of lines. AngularJS code is split half in the Javascript code and half in the HTML.
  • It's clear, if you want a fast loading page, avoid Javascript as much as you possibly can.
  • This experiment is very optimized in how it gets the data to be displayed. In fact, the server-side rendering time is close to 0 seconds because the whole HTML blob is stored in memcached. A more realistic thing is that extracting the data would take a lot longer if the query isn't so easy to cache. That would be a huge disadvantage for the fully server-side rendered version since if the data query takes a long time you'll sit and stare at a white screen longer. Doing the AJAX approach would definitely be a nicer experience.
  • The difference isn't that big. Both fancy Javascript frameworks have amazing features that leave jQuery in the dust but if you want your page to load crazy-fast, do as much server-side as you possibly can.

Premailer.io

July 8, 2015
3 comments Python, Web development, AngularJS, JavaScript

Premailer is a Python library for turning HTML + CSS into HTML with all the CSS embedded as inline style attributes. This is sadly very necessary to ensure that your fancy HTML emails look spiffy across all email clients and email webapps.

So, last week I put together a little site to test the library via a browser: Premailer.io

It's just a simple webapp with a form where you can enter HTML in three different ways: by textarea, by URL and by file upload.

You can also override all the possible advanced options that premailer supports.

What's kinda cool is that you get a preview of how the HTML document will look in an iframe that is dynamically loaded with the result of the conversion.

The webapp is of course open source and available on github.com/peterbe/premailer.io. The front-end is an AngularJS app and the build system is Lineman.js. The server is a Falcon server running on uWSGI via Nginx.

There's very little that's fancy here. There are no limitations or protections. I just hope it becomes handy for people to test premailer out.

The inspiration came from MailChimp's CSS Inliner Tool which is cute but very basic and doesn't allow you the same kinds of input.

If anybody with some AngularJS or highlight.js chops has time, I'd love help fixing why the HTML is not syntax highlighted.

Using lazy loading images on Air Mozilla

April 23, 2015
0 comments Mozilla, JavaScript

Starting today, (almost) all the thumbnails below the fold on Air Mozilla are not loaded until you scroll down to them.

The way it works is that I use a library called Lazyr.js, which notices when you scroll down and, when certain pictures are about to come into view, changes the <img> tag's src.

So it basically looks like this:


<article>
  <h3>Event 1</h3>
  <img src="event1.png">
</article>

<article>
  <h3>Event 2</h3>
  <img src="event2.png">
</article>

<article>
  <h3>Event 3</h3>
  <img src="event3.png">
</article>

<article>
  <h3>Event 4</h3>
  <img src="placeholder.png" data-lazyr="event4.png">
</article>

<article>
  <h3>Event 5</h3>
  <img src="placeholder.png" data-lazyr="event5.png">
</article>

<article>
  <h3>Event 6</h3>
  <img src="placeholder.png" data-lazyr="event6.png">
</article>

That means that to load this page, it only needs to download:

event1.png
event2.png
event3.png
placeholder.png

Only 4 images instead of the otherwise 6 (in this example).

When you scroll down to see the rest of the list, it then also downloads:

event4.png
event5.png
event6.png
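
The swap itself is conceptually tiny. Without the library, the core of the technique boils down to something like this; a generic sketch of scroll-triggered lazy loading, not Lazyr.js's actual code:

// Swap in the real image source once an image scrolls (close to) into view.
function loadVisibleImages() {
  var images = document.querySelectorAll('img[data-lazyr]')
  for (var i = 0; i < images.length; i++) {
    var img = images[i]
    if (img.getBoundingClientRect().top < window.innerHeight) {
      img.src = img.getAttribute('data-lazyr')
      img.removeAttribute('data-lazyr')
    }
  }
}

window.addEventListener('scroll', loadVisibleImages)
loadVisibleImages()  // also run once on initial page load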

The actual numbers on Air Mozilla are that there are 10 events per page and I lazy load 6 of them.

You can see the results when comparing this WebPageTest with this one.

There is more work to do though. At the moment, the thumbnails in the sidebar (Trending and Upcoming events) are above the fold when you're browsing but below the fold when you're viewing an individual event. That's something I have yet to implement.