
React 16.6 with Suspense and lazy loading components with react-router-dom

October 26, 2018
7 comments Web development, JavaScript, React

If you're reading this, you might have thought one of two thoughts about this blog post title (or both): "Cool buzzwords!" or "Yuck! So many hyped buzzwords!"

Either way, React v16.6 came out a couple of days ago and it brings with it React.lazy: Code-Splitting with Suspense.

React.lazy is React's built-in way of lazy loading components. With Suspense you can make that lazy loading smart: it knows to render a fallback component (or JSX element) whilst waiting for the slowly loading chunk of the lazy component.
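
For reference, the basic shape from the announcement looks roughly like this (the component names here are just placeholders):


import React, { Suspense, lazy } from "react";

// The chunk for OtherComponent is only downloaded the first time it renders.
const OtherComponent = lazy(() => import("./OtherComponent"));

function MyComponent() {
  return (
    <Suspense fallback={<div>Loading...</div>}>
      <OtherComponent />
    </Suspense>
  );
}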

The sample code in the announcement was deliciously simple but I was curious; how does that work with react-router-dom??

Without further ado, here's a complete demo/example. The gist is an app that has two sub-components loaded with react-router-dom:


<Router>
  <div className="App">
    <Switch>
      <Route path="/" exact component={Home} />
      <Route path="/:id" component={Post} />
    </Switch>
  </div>
</Router>

The idea is that the Home component will list all the blog posts and the Post component will display the full details of that blog post. In my demo, the Post component never bothers to actually do the fetching of the full details to display. It just displays the passed in ID from the react-router-dom match prop. You get the idea.

That's standard React with react-router-dom stuff. Next up, lazy loading. Basically, instead of importing the Post component, you make it lazy:


-import Post from "./post";
+const Post = React.lazy(() => import("./post"));

And here comes the magic sauce. Instead of referencing component={Post} in the <Route/> you use this badboy:


function WaitingComponent(Component) {
  return props => (
    <Suspense fallback={<div>Loading...</div>}>
      <Component {...props} />
    </Suspense>
  );
}
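
Then, in the <Route/>, you reference the wrapped component instead (the same line appears in the complete prototype below):


<Route path="/:id" component={WaitingComponent(Post)} />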

Complete prototype

The final thing looks like this:


import React, { lazy, Suspense } from "react";
import ReactDOM from "react-dom";
import { MemoryRouter as Router, Route, Switch } from "react-router-dom";

import Home from "./home";
const Post = lazy(() => import("./post"));

function App() {
  return (
    <Router>
      <div className="App">
        <Switch>
          <Route path="/" exact component={Home} />
          <Route path="/:id" component={WaitingComponent(Post)} />
        </Switch>
      </div>
    </Router>
  );
}

function WaitingComponent(Component) {
  return props => (
    <Suspense fallback={<div>Loading...</div>}>
      <Component {...props} />
    </Suspense>
  );
}

const rootElement = document.getElementById("root");
ReactDOM.render(<App />, rootElement);


And it totally works! It's hard to show this with the demo but if you don't believe me, you can download the whole codesandbox as a .zip, run yarn && yarn run build && serve -s build and then you can see it doing its magic as if this was the complete foundation of a fully working client-side app.

1. Loading the "Home" page, then click one of the links

Loading the "Home" page

2. Lazy loading the Post component

Lazy loading the Post component

3. Post component lazily loaded once and for all

Post component lazily loaded once and for all

Bonus

One thing that can happen is that you might load the app when the Wifi is hunky dory, but by the time you eventually make a click that causes a lazy loading to actually go out on the Internet and download that .js file, it might fail. For example, because the file has been removed from the server or your network just fails for some reason. To deal with that, simply wrap the whole <Suspense> component in an error boundary component.

See this demo which is a fork of the main demo but with error boundaries added.
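
If you don't want to dig into that demo, a minimal sketch of the idea looks something like this (the class name and the fallback message are just examples):


class LazyLoadErrorBoundary extends React.Component {
  state = { hasError: false };

  static getDerivedStateFromError() {
    // Triggered when the lazy chunk (or anything below) throws while rendering.
    return { hasError: true };
  }

  render() {
    if (this.state.hasError) {
      return <div>Sorry, that part of the app failed to load.</div>;
    }
    return this.props.children;
  }
}

function WaitingComponent(Component) {
  return props => (
    <LazyLoadErrorBoundary>
      <Suspense fallback={<div>Loading...</div>}>
        <Component {...props} />
      </Suspense>
    </LazyLoadErrorBoundary>
  );
}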

In conclusion

No surprise that it works. React is pretty awesome. I just wasn't sure what it would look like with react-router-dom.

A word of warning, from the v16.6 announcement: "This feature is not yet available for server-side rendering. Suspense support will be added in a later release."

I think lazy loading isn't actually that big of a deal. It's nice that it works, but how likely is it really that you have a sub-tree of components so big and slow that you can't just pay for it up front as part of one big fat build? If you really care about great web performance for those people who reach your app rarely and sporadically, the true ticket to success is server-side rendering: ship a gzipped HTML document with all the React client-side code loaded in a non-blocking way, so that the user can download the HTML and start reading/consuming it immediately, and whilst the user is doing that, download the rest of the .js that is going to be needed once the user clicks around. Start there.

The ideal number of workers in Jest

October 8, 2018
0 comments Python, React

tl;dr; Use --runInBand when running jest in CI and use --maxWorkers=3 on your laptop.

Running out of memory on CircleCI

We have a test suite that covers 236 tests across 68 suites and runs mainly a bunch of enzyme renderings of React components, but also some plain old JavaScript function tests. We hit a problem where tests utterly failed in CircleCI due to running out of memory. Several individual tests, before the run gave up or failed, were reported to take up to 45 seconds.
Turns out, jest tried to use 36 workers because the Docker OS it was running on was reporting 36 CPUs.


circleci@9e4c489cf76b:~/repo$ node
> var os = require('os')
undefined
> os.cpus().length
36

After forcibly setting --maxWorkers=2 to the jest command, the tests passed and it took 20 seconds. Yay!

But that got me thinking, what is the ideal number of workers when I'm running the suite here on my laptop? To find out, I wrote a Python script that would wrap the call CI=true yarn run test --maxWorkers=%(WORKERS) repeatedly and report which number is ideal for my laptop.
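
My script was Python, but the idea is simple enough that a rough Node sketch of it could look like this (the worker range and the single run per value are arbitrary; in practice you'd repeat each measurement):


// time-jest-workers.js (sketch)
const { execSync } = require("child_process");

const results = [];
for (let workers = 1; workers <= 8; workers++) {
  const t0 = Date.now();
  // Run the test suite once per worker count, discarding the output.
  execSync(`yarn run test --maxWorkers=${workers}`, {
    stdio: "ignore",
    env: Object.assign({}, process.env, { CI: "true" })
  });
  results.push([workers, (Date.now() - t0) / 1000]);
}

console.log("SORTED BY BEST TIME:");
results
  .sort((a, b) => a[1] - b[1])
  .forEach(([workers, seconds]) => {
    console.log(`${workers} ${seconds.toFixed(2)}s`);
  });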

After leaving it running for a while it spits out this result:

SORTED BY BEST TIME:
3 8.47s
4 8.59s
6 9.12s
5 9.18s
2 9.51s
7 10.14s
8 10.59s
1 13.80s

The conclusion is vague. There is some benefit to using a small number greater than 1. If you attempt a bigger number it might backfire and take longer than necessary, and if you do that, your laptop is likely to crawl and cough.


Inline scripts in create-react-app 2.0 and CSP hashes

October 5, 2018
0 comments Web development, JavaScript, React

UPDATE (1)

My understanding of how to generate the CSP nonces was wrong. What I initially posted was a confusion between nonces and hashes. Sorry. The blog post has been updated to use hashing.

UPDATE (2)

Shortly after publishing this I changed my mind entirely. I decided I don't want any inline scripts no matter how small. Reasons are: 1) with HTTP2 it's cheap to send another file and thus that critical precious first HTML document becomes smaller and 2) when you load it as an external you have the power to load it async if it's applicable.

Check out this new script, it's hackish but works: uninline_scripts.js

UPDATE (Oct 18, 2018)

If you build with INLINE_RUNTIME_CHUNK=false yarn run build, no scripts, independent of size, are inlined. See this pull request for details.

END UPDATES

I have an app that is hosted on github-pages and because I can't control Content Security Policy HTTP headers I have to do it with a <meta http-equiv="Content-Security-Policy" content="${csp}"> tag in the HTML. That's working fine and the way I do it is that I have a script that looks like this:


#!/usr/bin/env node
const fs = require("fs");
const crypto = require("crypto");

const CSP_TEMPLATE = `
default-src 'none';
connect-src 'self' kinto.workon.app peterbecom.auth0.com;
frame-src peterbecom.auth0.com;
img-src 'self' avatars2.githubusercontent.com https://*.googleusercontent.com;
script-src 'self'%SCRIPT_HASHES%;
style-src 'self' 'unsafe-inline';
font-src 'self' data:;
manifest-src 'self'
`.trim();

const htmlFile = process.argv[2];
if (!htmlFile) throw new Error("missing file argument");
let html = fs.readFileSync(htmlFile, "utf8");

let hashes = "";
let csp = CSP_TEMPLATE;
const matches = html.match(/<script>.*<\/script>/g);
if (matches) {
  matches.forEach(scriptTag => {
    const hash = crypto.createHash("sha256");
    hash.update(scriptTag.replace(/<script>/, "").replace("</script>", ""));
    // Note: digest is a hex string here; calling .toString("base64") on a
    // string is a no-op, so just use it as-is.
    const digest = hash.digest("hex");
    hashes += ` 'sha256-${digest}'`;
  });
}
csp = csp.replace(/%SCRIPT_HASHES%/, hashes);

const metatag = `
  <meta http-equiv="Content-Security-Policy" content="${csp}">
`
  .replace(/\n/g, "")
  .trim();
// Plain substring check (String.prototype.search would treat this as a regex).
if (html.includes(metatag))
  throw new Error("already has CSP metatag in HTML");
const anchor = '<meta charset="utf-8">';
const newHtml = html.replace(anchor, `${anchor}${metatag}`);
fs.writeFileSync(htmlFile, newHtml, "utf8");

Laugh all you like at my hurried node scripting but it works. It finds any <script>ANYTHING</script> tags (which means it disregards any <script src="... tags), calculates a sha256 hash string out of each one and then puts those into the CSP block.

The output becomes something like this:


<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <meta 
      http-equiv="Content-Security-Policy" 
      content="default-src 'none';script-src 'self' 'sha256-bb84aa7f904e73495b9e99f08531053f3a86f3c1b2e232e3abbac252bf723f1f';">
  </head>
  <body>
    ...
    <script>....</script>
  </body>
</html>

I don't know if I've done it right but at least what didn't use to work now works; the page loads in my browsers now.
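
For what it's worth, the CSP spec describes hash sources as the base64 encoding of the digest (rather than hex), so if the hex variant ever stops being accepted, the change would be something like this (a sketch, not wired into the script above):


const crypto = require("crypto");

function cspHashSource(inlineScriptBody) {
  // Produces 'sha256-<base64 of the SHA-256 digest of the script contents>'
  const digest = crypto
    .createHash("sha256")
    .update(inlineScriptBody)
    .digest("base64");
  return `'sha256-${digest}'`;
}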

Webpack Bundle Analyzer for create-react-app

May 14, 2018
0 comments JavaScript, React

webpack-bundle-analyzer is an awesome little program for understanding why and which parts of your bundled .js files are so big. It's a lot more advanced (and pretty) than source-map-explorer.

Thanks to this tip by @trevorwhealy you can now use webpack-bundle-analyzer on a create-react-app bundle. Yay!

Check out the report I made for the client side code of Songsear.ch:

Webpack bundle analyzed for Songsear.ch

One thing I personally noticed from this is that the .pngs do take up quite a lot of kilobytes. And I'm quite surprised that the whatwg-fetch polyfill uses 12KB before gzip.

Real minimal example of going from setState to MobX

May 4, 2018
0 comments JavaScript, React

This is not meant as a tutorial on MobX but hopefully it can be inspirational for people who have grokked how React's setState works but now feel they need to move the state management in their React app out of the components.

Store.js
To jump right in, here is a changeset that demonstrates how to replace setState with a MobX store:
https://github.com/peterbe/workon/commit/c1846ce782ce7c9da16f44b10c48f0be1337ae41

It's a really simple Todo list application based on create-react-app. Not much to read into at this point.
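
To give a flavor of it without clicking through the changeset, a minimal sketch (not the actual Store.js from the repo) of a MobX Todo store and an observer component might look roughly like this:


// Store.js (sketch)
import { observable } from "mobx";

class TodoStore {
  // observable() on an array gives you MobX's "observable array"
  items = observable([]);

  addItem(text) {
    this.items.push({ id: Date.now(), text, done: false });
  }
}

const store = new TodoStore();
// Exposed for poking around in the web console (see point 3 below).
window.store = store;
export default store;


// TodoList.js (sketch)
import React from "react";
import { observer } from "mobx-react";
import store from "./Store";

const TodoList = observer(() => (
  <ul>
    {store.items.map(item => (
      <li key={item.id}>{item.text}</li>
    ))}
  </ul>
));

export default TodoList;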

Here are some caveats to be aware of if you look at the diff and wonder...

  • As part of this change, I moved the logic from the App component and created a new sub-component (that App renders) called TodoList. This was not necessary to add MobX.
  • There are a bunch of little unrelated edits in that diff, such as deleting some commented-out code.
  • store.items.sort((a, b) => b.id - a.id); doesn't actually work. You're supposed to do store.items.replace(store.items.sort((a, b) => b.id - a.id));.
  • Later I made the Item component also be an observer and not just the TodoList component.
  • The exported store is called store and, in this version, is an instance of the TodoStore class. The intention is to make store be an instance of combined different store classes, with TodoStore being just one of them.

Caveat last but not least... This diff doesn't do much other than add more library dependencies and fancy "observable arrays" that are hard to introspect with console.log debugging.
However, the intention is to...

  1. Add react-router to the mix so opening the Todo list is just one of many possible views.
  2. Now the Store.js file can be all about data. Data retrieval, storage, manipulation, mutation etc. The other components will be simpler since their only job is to render what's in the store and send events back to the store based on user actions.
  3. Note that the store is also put into window. That means I can open the web console, type store.items[2].text = "Test change" and, simply by hitting Enter, the app re-renders to reflect this change.

filterToQueryString - JavaScript function to turn current filter into a query string

March 15, 2018
1 comment Web development, JavaScript, React

tl;dr; this function:


export const filterToQueryString = (filterObj, overrides) => {
  const copy = Object.assign(overrides || {}, filterObj)
  const searchParams = new URLSearchParams()
  Object.entries(copy).forEach(([key, value]) => {
    if (Array.isArray(value) && value.length) {
      value.forEach(v => searchParams.append(key, v))
    } else if (value) {
      searchParams.set(key, value)
    }
  })
  searchParams.sort()
  return searchParams.toString()
}

I have a React project that used to use query-string to serialize and deserialize objects between React state and URL query strings. Yesterday version 6.0.0 came out and now I'm getting this error during yarn run build:

yarn run v1.5.1
$ react-scripts build
Creating an optimized production build...
Failed to compile.

Failed to minify the code from this file: 

    ./node_modules/query-string/index.js:8 

Read more here: http://bit.ly/2tRViJ9

error An unexpected error occurred: "Command failed.
Exit code: 1

Perhaps this is the wake up call to switch to URLSearchParams (documentation here). Yes it is. Let's do it.

My use case is that I store a dictionary of filters in React this.state. The filter object is updated by submitting a form that looks like this:

Filter form

Since the form inputs might be empty strings my filter dictionary in this.state might look like this:


{
  user: '@mozilla.com', 
  created_at: 'yesterday', 
  size: '>= 1m, <300G', 
  uploaded_at: ''
}

What I want that to become is: created_at=yesterday&size=>%3D+1m%2C+<300G&user=%40mozilla.com
So it's important to be able to skip falsy values (empty strings or possibly empty arrays).

Sometimes there are other key-values that need to be added that aren't part of what the user chose. So it needs to be easy to squeeze in additional key-values. Here's the function:


export const filterToQueryString = (filterObj, overrides) => {
  const copy = Object.assign(overrides || {}, filterObj)
  const searchParams = new URLSearchParams()
  Object.entries(copy).forEach(([key, value]) => {
    if (Array.isArray(value) && value.length) {
      value.forEach(v => searchParams.append(key, v))
    } else if (value) {
      searchParams.set(key, value)
    }
  })
  searchParams.sort()
  return searchParams.toString()
}

I use it like this:


_fetchUploadsNewCountLoop = () => {
  const qs = filterToQueryString(this.state.filter, {
    created_at: '>' + this.state.latestUpload
  })
  const url = '/api/uploads?' + qs
  ...
  fetch(...)
}

UPDATE - May 2018

In the original blog post (now edited and corrected) I copied the wrong code and didn't discover the subtle mistake until now.
What was wrong was the order of the arguments to Object.assign().

Wrong


const copy = Object.assign(filterObj, overrides || {})

Correct


const copy = Object.assign(overrides || {}, filterObj)

The old version was dangerous because it mutated the filterObj passed in. So if you did something like


const qs = filterToQueryString(this.state.filter, {
  created_at: '>' + this.state.latestUpload
})

it would potentially mutate this.state.filter which isn't desirable.
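
To make the danger concrete, here's a tiny illustration of how Object.assign mutates its first argument (the values are made up):


const filter = { user: '@mozilla.com' }

// Wrong order: the first argument gets mutated, so `filter` is polluted.
Object.assign(filter, { created_at: '>2018-01-01' })
console.log(filter)
// { user: '@mozilla.com', created_at: '>2018-01-01' }

// Correct order: copy into a fresh object and leave the filter alone.
const filter2 = { user: '@mozilla.com' }
const copy = Object.assign({ created_at: '>2018-01-01' }, filter2)
console.log(filter2)
// { user: '@mozilla.com' }
console.log(copy)
// { created_at: '>2018-01-01', user: '@mozilla.com' }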

How to throttle AND debounce an autocomplete input in React

March 1, 2018
18 comments Web development, JavaScript, React

Let's start with some best practices for a good autocomplete input:

'f' - most common search term on Google

  • You want to start suggesting something as soon as the user starts typing. Apparently the most common search term on Google is f because people type that and Google's autocomplete starts suggesting Facebook (www.facebook.com).

  • If your autocomplete depends on a list of suggestions that is huge, such that you can't have all possible options preloaded in memory in JavaScript, starting a XHR request for every single input is not feasible. You have to throttle the XHR network requests.

  • Since networks are unreliable and results come back asynchronously in a possible different order from when they started, you should only populate your autocomplete list based on the latest XHR request.

  • If people type in a lot and keep ignoring the autocomplete suggestions, you can calm down and make suggestions less eagerly.

  • Unless you're Google or Amazon.com it might not make sense to suggest new words to autocomplete if what's been typed previously is not going to yield any results anyway. I.e. if the user has typed "Xjhfgxx1m 8cxxvkaspty efx8cnxq45jn Pet", there's often little value in suggesting "Peter" for that last term.

  • Users don't necessarily type one character at a time. On mobile, you might have some sort of autocomplete functionality with the device's keyboard. Bear that in mind.

  • You should not have to make an XHR request for the same input as done before. I.e. user types "f" then types in "fa" then backspaces so it's back to "f" again. This should only be at most 2 lookups.

To demonstrate these best practices, I'm going to use React with a mocked-out network request and a mocked-out UI for the actual drop-down of options that usually appears underneath the input widget.

The Most Basic Version

In this version we have an event listener on every onChange and send the value of the input to the autocomplete function (called _fetch in this example):


class App extends React.Component {
  state = { q: "" };

  changeQuery = event => {
    this.setState({ q: event.target.value }, () => {
      this.autocompleteSearch();
    });
  };

  autocompleteSearch = () => {
    this._fetch(this.state.q);
  };

  _fetch = q => {
    const _searches = this.state._searches || [];
    _searches.push(q);
    this.setState({ _searches });
  };

  render() {
    const _searches = this.state._searches || [];
    return (
      <div>
        <input
          placeholder="Type something here"
          type="text"
          value={this.state.q}
          onChange={this.changeQuery}
        />
        <hr />
        <ol>
          {_searches.map((s, i) => {
            return <li key={s + i}>{s}</li>;
          })}
        </ol>
      </div>
    );
  }
}

You can try it here: No Throttle or Debounce

Note, when you use it, that an autocomplete lookup is done for every single change to the input (characters typed in or whole words pasted in). Typing in "Alask" at a normal speed would make an autocomplete lookup for "a", "al", "ala", "alas", and "alask".

Also worth pointing out, if you're on a CPU limited device, even if the autocomplete lookups can be done without network requests (e.g. you have a local "database" in-memory), there are still expensive DOM updates that need to be done for every single character/word typed in.

Throttled

What a throttle does is trigger predictably after a certain time. Every time. Basically, it prevents excessive or repeated calling of another function, but it doesn't get reset.

So if you type "t h r o t t l e" at a speed of 1 key press per 500ms the whole thing will take 8x500ms=4s and if you have a throttle on that, with a delay of 1s, it will fire 4 times.

I highly recommend using throttle-debounce to actually do the throttling (and, later, the debouncing). Let's rewrite our demo to use throttle:


import { throttle } from "throttle-debounce";

class App extends React.Component {
  constructor(props) {
    super(props);
    this.state = { q: "" };
    this.autocompleteSearchThrottled = throttle(500, this.autocompleteSearch);
  }

  changeQuery = event => {
    this.setState({ q: event.target.value }, () => {
      this.autocompleteSearchThrottled(this.state.q);
    });
  };

  autocompleteSearch = q => {
    this._fetch(q);
  };

  _fetch = q => {
    const _searches = this.state._searches || [];
    _searches.push(q);
    this.setState({ _searches });
  };

  render() {
    const _searches = this.state._searches || [];
    return (
      <div>
        <h2>Throttle</h2>
        <p>½ second Throttle triggering the autocomplete on every input.</p>
        <input
          placeholder="Type something here"
          type="text"
          value={this.state.q}
          onChange={this.changeQuery}
        />
        <hr />
        {_searches.length ? (
          <button
            type="button"
            onClick={event => this.setState({ _searches: [] })}
          >
            Reset
          </button>
        ) : null}
        <ol>
          {_searches.map((s, i) => {
            return <li key={s + i}>{s}</li>;
          })}
        </ol>
      </div>
    );
  }
}

One thing to notice on the React side is that the autocompleteSearch method can no longer use this.state.q because the function gets executed by the throttle function so the this is different. That's why, in this version we pass the search term as an argument instead.

You can try it here: Throttle

If you type something reasonably fast you'll notice it fires a couple of times. It's quite possible that if you type a bunch of stuff, with your eyes on the keyboard, by the time you're done you'll see it made a bunch of (mocked) autocomplete lookups whilst you weren't paying attention. You should also notice that it fired on the very first character you typed.

A cool feature about this is that if you can afford the network lookups, the interface will feel snappy. Hopefully, if your server is fast to respond to the autocomplete lookups there are quickly some suggestions there. At least it's a great indicator that the autocomplete UX is a thing the user can expect as she types more.

Debounce

An alternative approach is to use a debounce. From the documentation of throttle-debounce:

"Debouncing, unlike throttling, guarantees that a function is only executed a single time, either at the very beginning of a series of calls, or at the very end."

Basically, every time you "pile something on" it discards all the other delayed executions. Changing to this version is easy: just change import { throttle } from "throttle-debounce"; to import { debounce } from "throttle-debounce"; and change this.autocompleteSearchThrottled = throttle(500, this.autocompleteSearch); to this.autocompleteSearchDebounced = debounce(500, this.autocompleteSearch);

Here is the debounce version:


import { debounce } from "throttle-debounce";

class App extends React.Component {
  constructor(props) {
    super(props);
    this.state = { q: "" };
    this.autocompleteSearchDebounced = debounce(500, this.autocompleteSearch);
  }

  changeQuery = event => {
    this.setState({ q: event.target.value }, () => {
      this.autocompleteSearchDebounced(this.state.q);
    });
  };

  autocompleteSearch = q => {
    this._fetch(q);
  };

  _fetch = q => {
    const _searches = this.state._searches || [];
    _searches.push(q);
    this.setState({ _searches });
  };

  render() {
    const _searches = this.state._searches || [];
    return (
      <div>
        <h2>Debounce</h2>
        <p>
          ½ second Debounce triggering the autocomplete on every input.
        </p>
        <input
          placeholder="Type something here"
          type="text"
          value={this.state.q}
          onChange={this.changeQuery}
        />
        <hr />
        {_searches.length ? (
          <button
            type="button"
            onClick={event => this.setState({ _searches: [] })}
          >
            Reset
          </button>
        ) : null}
        <ol>
          {_searches.map((s, i) => {
            return <li key={s + i}>{s}</li>;
          })}
        </ol>
      </div>
    );
  }
}

You can try it here: Debounce

If you try it you'll notice that if you type at a steady pace (under 1 second for each input), it won't really trigger any autocomplete lookups at all. It basically triggers when you take your hands off the keyboard. But the silver lining with this approach is that if you typed "This is my long search input" it didn't bother looking things up for "this i", "this is my l", "this is my long s", "this is my long sear", "this is my long search in" since they are probably not very useful.

Best of Both World; Throttle and Debounce

The throttle works great in the beginning when you want the autocomplete widget to seem eager, but if the user starts typing in a lot, you'll want to be more patient. It's quite human. If a friend is trying to remember something, you're probably at first really quick to try to help with suggestions, but once your friend starts to remember and can start reciting, you patiently wait a bit more till they have said what they're going to say.

In this version we're going to use throttle (the eager one) in the beginning when the input is short and debounce (the patient one) when the user has ignored the first autocomplete suggestions and started typing something longer.

Here is the version that uses both:


import { throttle, debounce } from "throttle-debounce";

class App extends React.Component {
  constructor(props) {
    super(props);
    this.state = { q: ""};
    this.autocompleteSearchDebounced = debounce(500, this.autocompleteSearch);
    this.autocompleteSearchThrottled = throttle(500, this.autocompleteSearch);
  }

  changeQuery = event => {
    this.setState({ q: event.target.value }, () => {
      const q = this.state.q;
      if (q.length < 5) {
        this.autocompleteSearchThrottled(this.state.q);
      } else {
        this.autocompleteSearchDebounced(this.state.q);
      }
    });
  };

  autocompleteSearch = q => {
    this._fetch(q);
  };

  _fetch = q => {
    const _searches = this.state._searches || [];
    _searches.push(q);
    this.setState({ _searches });
  };

  render() {
    const _searches = this.state._searches || [];
    return (
      <div>
        <h2>Throttle and Debounce</h2>
        <p>
          ½ second Throttle when input is small and ½ second Debounce when
          the input is longer.
        </p>
        <input
          placeholder="Type something here"
          type="text"
          value={this.state.q}
          onChange={this.changeQuery}
        />
        <hr />
        {_searches.length ? (
          <button
            type="button"
            onClick={event => this.setState({ _searches: [] })}
          >
            Reset
          </button>
        ) : null}
        <ol>
          {_searches.map((s, i) => {
            return <li key={s + i}>{s}</li>;
          })}
        </ol>
      </div>
    );
  }
}

In this version I cheated a little bit. The delays are different. The throttle has a delay of 500ms and the debounce has a delay of 1000ms. That makes it feel a little bit more snappy there in the beginning when you start typing but once you've typed more than 5 characters, it switches to the more patient debounce version.

You can try it here: Throttle and Debounce

With this version, if you, at a steady pace, typed in "south carolina" you'd notice that it does autocomplete lookups for "s", "sout" and "south carolina".

Avoiding wrongly ordered async responses

Suppose the user slowly types in "p" then "pe" then "pet", it would trigger 3 XHR requests. I.e. something like this:


fetch('/autocomplete?q=p')

fetch('/autocomplete?q=pe')

fetch('/autocomplete?q=pet')

But because all of these are asynchronous and sometimes there are unpredictable slowdowns on the network, it's not guaranteed that they'll all come back in the exact same order. The solution to this is to use a "global variable" of the latest search term and then compare that to the locally scoped search term in each fetch callback promise. That might sound harder than it is. The solution basically looks like this:


class App extends React.Component {

  makeAutocompleteLookup = q => {
    // Store the latest input here scoped in the App instance.
    this.waitingFor = q;
    fetch('/autocompletelookup?q=' + q)
    .then(response => {
      if (response.status === 200) {
        // Only bother with this XHR response
        // if this query term matches what we're waiting for.
        if (q === this.waitingFor) {
          response.json()
          .then(results => {
              this.setState({results: results});
          })
        }
      }
    })
  }
}

Bonus feature; Caching

For caching the XHR requests, to avoid unnecessary network requests if the user uses backspace, the simplest solution is to maintain a dictionary of previous results as a component level instance. Let's assume you do the XHR autocomplete lookup like this initially:


class App extends React.Component {

  makeAutocompleteLookup = q => {
    const url = '/autocompletelookup?q=' + q;
    fetch(url)
    .then(response => {
      if (response.status === 200) {
        response.json()
        .then(results => {
            this.setState({ results });
        })
      }
    })
  }

}

To add caching (also a form of memoization) you can simply do this:


class App extends React.Component {

  _autocompleteCache = {};

  makeAutocompleteLookup = q => {
    const url = '/autocompletelookup?q=' + q;

    const cached = this._autocompleteCache[url];
    if (cached) {
      return Promise.resolve(cached).then(results => {
        this.setState({ results });
      });
    }

    fetch(url)
    .then(response => {
      if (response.status === 200) {
        response.json()
        .then(results => {
            // Remember the results so a repeated lookup for the same URL
            // (e.g. after backspacing) skips the network next time.
            this._autocompleteCache[url] = results;
            this.setState({ results });
        })
      }
    })
  }

}

In a more real app you might want to make that whole method always return a promise. And you might want to do something slightly smarter when response.status !== 200.
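
For example, a version of the cached lookup that always returns a promise (method and state names follow the examples above; the error handling is only a placeholder) could look like this:


makeAutocompleteLookup = q => {
  const url = '/autocompletelookup?q=' + q;

  const cached = this._autocompleteCache[url];
  if (cached) {
    return Promise.resolve(cached).then(results => {
      this.setState({ results });
    });
  }

  return fetch(url).then(response => {
    if (response.status !== 200) {
      // Placeholder; do something smarter in a real app.
      throw new Error(`Unexpected status: ${response.status}`);
    }
    return response.json().then(results => {
      this._autocompleteCache[url] = results;
      this.setState({ results });
    });
  });
};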

Bonus feature; Watch out for spaces

So the general gist of these above versions is that you debounce the XHR autocomplete lookups to only trigger sometimes. For short strings we trigger every, say, 300ms. When the input is longer, we only trigger when it appears the user has stopped typing. A more "advanced" approach is to trigger after a space. If I type "south carolina is a state" it's hard for a computer to know if "is", "a", or "state" is a complete word. Humans know and some English words can easily be recognized as stop words. However, what you can do is take advantage of the fact that a space almost always means the previous word was complete. It would be nice to trigger an autocomplete lookup after "south carolina" and "south carolina is" and "south carolina is a". These are also easier to deal with on the server side because, depending on your back-end, it's easier to search your database if you don't include "broken" words like "south carolina is a sta". To do that, here's one such implementation:


class App extends React.Component {

  // Just overriding the changeQuery method in this example.

  changeQuery = event => {
    const q = event.target.value
    this.setState({ q }, () => {

      // If the query term is short or ends with a
      // space, trigger the more impatient version.
      if (q.length < 5 || q.endsWith(' ')) {
        this.autocompleteSearchThrottled(q);
      } else {
        this.autocompleteSearchDebounced(q);
      }
    });
  };

  // Just overriding the changeQuery method in this example.

}

You can try it here: Throttle and Debounce with throttle on ending spaces.

Next level stuff

There is so much more that you can do for that ideal user experience. A lot depends on the context.

For example, when the input is small instead of doing a search on titles or names or whatever, you instead return a list of possible full search terms. So, if I have typed "sou" the back-end could return things like:

{
  "matches": [
     {"term": "South Carolina", "count": 123},
     {"term": "Southern", "count": 469},
     {"term": "South Dakota", "count": 98},
  ]
}

If the user selects one of these autocomplete suggestions, instead of triggering a full search you just append the selected match back into the search input widget. This is what Google does.

And if the input is longer you go ahead and actually search for the full documents. So if the input was "south caro" you return something like this:

{
  "matches": [
     {
       "title": "South Carolina Is A State", 
       "url": "/permapage/x19v093d"
     },
     {
       "title": "Best of South Carolina Parks", 
       "url": "/permapage/9vqif3z"
     },
     {
       "title": "I Live In South Carolina", 
       "url": "/permapage/abc300a1y"
     }
  ]
}

And when the XHR completes you render the matches and, depending on what the user clicks, do something like this:


  return (<ul className="autocomplete">
    {this.state.results.map(result => {
      return <li onClick={event => {
        if (result.url) {
          document.location.href = result.url;
        } else {
          this.setState({ q: result.term });
        }
      }}>
        {result.url ? (
          <p className="document">{result.title}</p>
        ) : (
            <p className="new-term">{result.term}</p>
          )}
      </li>
    })
    }
    </ul>
  )

This is an incomplete example and more pseudo-code than a real solution but the pattern is quite nice. You're either helping the user type the full search term or, if it's already a good match, you can skip the actual searching and go to the result directly.

This is how SongSearch works for example:

Suggestions for full search terms

Suggestions for actual documents

Items function in JavaScript for looping over dictionaries like Python

February 23, 2018
1 comment JavaScript, React

Too many times I've written code like this:


class MyComponent extends React.PureComponent {
  render() {
    return <ul>
      {Object.keys(this.props.someDictionary).map(key => {
        return <li key={key}><b>{key}:</b> {this.props.someDictionary[key]}</li> 
      })}
    </ul>
  }
}

The clunky thing about this is that you have to reference the dictionary twice. Makes it harder to refactor. In Python, you do this instead:


for key, value in some_dictionary.items():
    print(f'{key}: {value}')

To do the same in JavaScript make a function like this:


function items(dict, fn) {
  return Object.keys(dict).map((key, i) => {
    return fn(key, dict[key], i)
  })
}

Now you can use it "more like Python":


class MyComponent extends React.PureComponent {
  render() {
    return <ul>
      {items(this.props.someDictionary, (key, value) => {
        return <li key={key}><b>{key}:</b> {value}</li> 
      })}
    </ul>
  }
}

Example on CodeSandbox here

UPDATE

Thanks to @Osmose and @saltycrane for alerting me to Object.entries().


class MyComponent extends React.PureComponent {
  render() {
    return <ul>
      {Object.entries(this.props.someDictionary).map(([key, value]) => {
        return <li key={key}><b>{key}:</b> {value}</li> 
      })}
    </ul>
  }
}

Updated CodeSandbox here

Component, component function or plain function in React

February 6, 2018
1 comment React

tl;dr; Use React.PureComponent (or React.Component) if your component contains, or might contain, non-trivial logic that might affect its rendering or not rendering. For all other cases, use a function, especially if it's not React specific.

Your choices

When you have state, especially good old this.state and this.setState you have to use React.PureComponent (or React.Component if you must).

For stateless functions, where you're just getting some props in, perhaps massaging them and rendering some JSX, you have choices.
You can write a React component in these three different ways:

Component


class MyComponent extends React.PureComponent {
  render() {
    return <h1>Hello {this.props.name}</h1>
  }
}

Component function


const MyComponent = ({ name }) => {
  return <h1>Hello {name}</h1>
}

Plain function


const MyComponent = name => {
  return <h1>Hello {name}</h1>
}

The first two can be used like this:


return (
  <div>
    <MyComponent name="Peter"/>
  </div>
)

The last one can be called directly:


return (
  <div>
    {MyComponent("Peter")}/>
  </div>
)

To be exact, you can actually call the second, component function, like this too:


return (
  <div>
    {MyComponent({name: "Peter"})}
  </div>
)

Example CodeSandbox here.

Each one has its strength and weaknesses.

Pros & cons for class MyComponent extends React.PureComponent

  • You have access to life-cycle hooks such as componentDidMount.
  • It's easy to add class-level public field functions, e.g. onButtonClick = event => {...} if inlining isn't convenient.
  • It has a shouldComponentUpdate method which means it can avoid a potentially expensive render execution when the props (or state) haven't changed.
  • If you decide you want to add some state, you just need to add a constructor that sets up this.state = {...}
  • If you use prop-types you can, depending on your babel transforms, set it as a static class member inside the class without having to repeat the name of the component outside (e.g. not having to do MyComponent.propTypes = {...} outside)
  • Maybe slower. I heard this rumor. Let's see later if it checks out.
  • Doesn't feel as simple because it's not a regular old function.
  • Might make you feel uncertain that it might have side-effects that aren't obvious.

Pros & cons for const MyComponent = (...) =>

  • Clearly it's just doing one thing. Rendering. No buts, ifs, or maybes.
  • Doesn't say React all over and thus should be easy to reason about outside React, such as in a unit test.
  • If it doesn't have to do all the mounting life-cycle hooks, perhaps that's valuable time saved.
  • There's something hip about writing functions and feeling "functional" without all that verbosity and boilerplate like class and extends and render etc.
  • Maybe faster. We'll see.

Benchmarking the difference

I don't know with confidence if this is the right way to test this but I really wanted to avoid process.env.NODE_ENV==='development' and I wanted to run each variant a bunch of times, because it feels more realistic, so as to avoid the slowness of the initial mounting.

So I made an app that looks like this:


class Components extends React.Component {
  render() {
    return <Component100 count={this.props.count} />;
  }
}

export default Components;

class Component100 extends React.PureComponent {
  render() {
    return <Component99 count={this.props.count} />;
  }
}
class Component99 extends React.PureComponent {
  render() {
    return <Component98 count={this.props.count} />;
  }
}

//...
//...you can imagine...
//...

class Component1 extends React.PureComponent {
  render() {
    return <Component0 count={this.props.count} />;
  }
}

class Component0 extends React.PureComponent {
  render() {
    collect('Components', performance.now());
    return <h1>Component0: {this.props.count}</h1>;
  }
}

This long chain of components calling "sub-components" starts right after the prop at the top changes. In the App that parents all of the variants, when the state changes the props change and it trickles down to that final last component. By taking a timestamp right before changing the state and during that last render you get a rough timeline for how long the whole chain took to render.

See the variants here:

Perhaps it's best to skim the code of the App.js too. It's a bit messy and there's a bunch of whacky code that uses the global window to log all the timestamps, but the gist is that it measures the few milliseconds from when a re-render is triggered until the final component's render function gets called.

The app has a little hacky interval function that randomly switches between the different variants every 2 seconds and every 300 milliseconds it clicks a button, which changes the state which triggers a re-render.

Benchmark results

Results

Component style Median Comparison
Components 3.46ms 100%
ComponentFunctions 3.04ms 14% faster
Functions 2.02ms 71% faster

This was done using React 16.2.0 with process.env.NODE_ENV === 'production' in Firefox 60.

Sample app
You can try for yourself here: https://peterbe.github.io/function-or-component/

It might break when you click Reset. If it doesn't work very well in github.io, just download it and test locally.

Discussion

Here's my rule of thumb: the life-cycle hooks are awesome. I often write a component using ...extends React.PureComponent even though it could be a plain function. But over time, eventually you expand it and realize you need some life-cycle hook. Or you might find that writing inline functions is getting messy. Or, you realize that this component is sometimes unnecessarily called by a more complicated parent, with the same props as last time!

The performance penalty for using full React components is small. It exists, but it's probably dwarfed by other costs such as mounting, not to mention actual DOM updating. It's also very likely that your components could benefit more from avoiding render (which only shouldComponentUpdate really can do) than they'd lose from the cost of calling it. Meaning, if the slower component only has to render 500 times (each render marginally slower) while the function component renders 1,000 times, then the slower but sometimes-not-needing-to-render one will eventually win the performance battle.

There is still value in the functional stateless component. See the pros & cons above. But one rule of thumb I have is that if the component is really simple and contains no fancy logic that might affect its rendering or not rendering, then use components as functions. They're "sending a message" (to the code reader) by being brief and simple. For example, I have this little snippet in my Common.js module:


export const formatFileSize = (bytes, decimals = 0) => {
  if (!bytes) return '0 bytes'
  const k = 1024
  const dm = decimals + 1 || 3
  const sizes = ['bytes', 'KB', 'MB', 'GB', 'TB', 'PB', 'EB', 'ZB', 'YB']
  const i = Math.floor(Math.log(bytes) / Math.log(k))
  return parseFloat((bytes / Math.pow(k, i)).toFixed(dm)) + ' ' + sizes[i]
}

It's got nothing to do with React and that becomes extra obvious simply by looking at it. It's clearly got just one job. It's used a lot and often by more complicated components.

Last but not least; I'm very aware that the much more experienced React gurus of the world have already said something similar but with more accuracy. But I didn't want to just blurt out my opinion without adding some meat and some numbers to it. And I've always disliked the confusion that there's a choice at all so hopefully this blog post will help someone else who still suffers from having to wonder when to use which.

This tweet sums it up well:
Craig Kerstiens tweet

Display current React version

January 7, 2018
1 comment JavaScript, React

Usually you know what version of React your app is using by opening the package.json, or poking around in node_modules/react/index.js. But perhaps there are many packaging abstractions in between your command line and the server. Especially if you have a continuous integration server that builds your static assets and if that CI uses caching. It might get scary.

If you really want to print out what version of React is rendering your app here's one way to do that:


import React from 'react'

class Introspection extends React.Component {
  render() {
    return <div>
      Currently using React {React.version}
    </div>
  }
}

Suppose that you want this display to depend on the app being in dev or prod mode:


import React from 'react'

class Introspection extends React.Component {
  render() {
    return <div>
      {
        process.env.NODE_ENV === 'development' ?
        <p>Currently using React {React.version}</p> : null
      }
    </div>
  }
}

Note that there's no need to import process.

See this CodeSandbox snippet for a live example.