Filtered by React


My site's now NextJS - And I (almost) regret it already

December 17, 2021
8 comments React, Django, Node, JavaScript

My personal blog was a regular Django website with jQuery (later switched to Cash) for dynamic bits. In December 2021 I rewrote it in NextJS. It was a fun journey and NextJS is great but it's really not without some regrets.

Some flashpoints for note and comparison:

React SSR is awesome

The way infinitely nested comments are rendered is isomorphic now. Before, I had to code it once as a Jinja2 template thing and once as a Cash (a fork of jQuery) thing. That's the beauty and the promise of JavaScript, React, and server-side rendering: the same rendering code runs on the server and in the browser.

JS bloat

The total JS payload is now ~111KB in 16 files. It used to be ~36KB in 7 files. :(

Before (screenshot)

After (screenshot)

Data still comes from Django

Like any website, the web pages are made up of two parts: A) getting the raw data from a database, and B) rendering that data as HTML.
I didn't want to rewrite all the database queries in Node (inside getServerSideProps).

What I did was move all the data-gathering Django code and put it under an /api/v1/ prefix that publishes simple JSON blobs. This is then exposed on 127.0.0.1:3000, which the Node server fetches from. And I wired up that API endpoint so I can debug it via the web too. E.g. /api/v1/plog/sort-a-javascript-array-by-some-boolean-operation

Now, all I have to do is write some TypeScript interfaces that hopefully match the JSON that comes from Django. For example, here's the getServerSideProps code for getting the data to this page:


// This lives in a src/pages/**/*.tsx file. API_BASE and the ServerData
// interface are defined elsewhere in the project.
export async function getServerSideProps() {
  const url = `${API_BASE}/api/v1/plog/`;
  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`${response.status} on ${url}`);
  }
  const data: ServerData = await response.json();
  const { groups } = data;

  return {
    props: {
      groups,
    },
  };
}
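
That ServerData interface is just a hand-written guess at the shape of the JSON the Django endpoint returns. A hypothetical sketch (the field names here are made up for illustration):

interface Post {
  oid: string;
  title: string;
}

interface Group {
  date: string;
  posts: Post[];
}

interface ServerData {
  groups: Group[];
}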

I like this pattern! Yes, there's overhead, and Node could talk directly to PostgreSQL, but the upside is decoupling. And with good caching in front of it, the performance overhead rarely matters.

Server + CDN > static site generation

I considered full-blown static generation, but it's not an option. My little blog only has about 1,400 blog posts, but you can also filter by tags, by combinations of tags, and by pages of combinations of tags. E.g. /oc-JavaScript/oc-Python/p3. So the total number of pages is probably in the tens of thousands.

So, server-side rendering it is. To accomplish that I set up a very simple Express server. It proxies some stuff over to the Django server (e.g. /rss.xml) and then lets NextJS handle the rest.


import next from "next";
import express from "express";

const port = parseInt(process.env.PORT || "3000", 10);
const app = next({ dev: process.env.NODE_ENV !== "production" });
const handle = app.getRequestHandler();

app
  .prepare()
  .then(() => {
    const server = express();

    // Everything that isn't handled elsewhere goes to NextJS
    server.use(handle);

    server.listen(port, (err) => {
      if (err) throw err;
      console.log(`> Ready on http://localhost:${port}`);
    });
  });
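
The proxying part (e.g. /rss.xml going to the Django server) isn't shown in that snippet. One way to do it, as a sketch, assuming the http-proxy-middleware package and Django listening on port 8000:

import { createProxyMiddleware } from "http-proxy-middleware";

// Registered on `server` before the catch-all NextJS handler, so that
// /rss.xml is forwarded to Django instead of being handled by NextJS.
server.use(
  "/rss.xml",
  createProxyMiddleware({ target: "http://127.0.0.1:8000", changeOrigin: true })
);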

Now, my site is behind a CDN. And technically, it's behind Nginx too where I do some proxy_pass in-memory caching as a second line of defense.
Requests come in like this:

  1. from user to CDN
  2. from CDN to Nginx
  3. from Nginx to Express (proxy_pass)
  4. from Express to next().getRequestHandler()

And I set Cache-Control in res.setHeader("Cache-Control", "public,max-age=86400") from within the getServerSideProps functions in the src/pages/**/*.tsx files. And once that's set, the response will be cached both in Nginx and in the CDN.
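
A minimal sketch of what that looks like; the res object comes from the context argument that NextJS passes to getServerSideProps:

export async function getServerSideProps({ res }) {
  // Cached by Nginx and the CDN for 24 hours
  res.setHeader("Cache-Control", "public,max-age=86400");
  // ...fetch the data as shown earlier...
  return { props: {} };
}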

Any caching is tricky when you need to do revalidation. Especially when you roll out a new central feature in the core bundle. But I quite like this pattern of a slow-rolling upgrade as individual pages eventually expire throughout the day.

There is a nasty bug with this and I don't yet know how to solve it. Client-side navigation depends on a build hash. So loading this page, when done with client-side navigation, becomes /_next/data/2ps5rE-K6E39AoF4G6G-0/en/plog.json (no, I don't know how that hashed URL is determined). But if a new deployment happens, the new URL becomes /_next/data/UhK9ANa6t5p5oFg3LZ5dy/en/plog.json, so you end up with a 404 because you started on a page based on an old JavaScript bundle that is now invalid.

Thankfully, NextJS handles it quite gracefully by throwing an error on the 404 so it proceeds with a regular link redirect which takes you away from the old page.

Client-side navigation still sucks. Kinda.

Next has a built-in <Link> component that you use like this:


import Link from "next/link";

...

<Link href={"/plog/" + post.oid}>
  {post.title}
</Link>

Now, clicking any of those links will automatically enable client-side routing. Thankfully, it takes care of preloading the necessary JavaScript (and CSS) simply by hovering over the link, so that when you eventually click it just needs to do an XHR request to get the JSON necessary to be able to render the page within the loaded app (and then do the pushState stuff to change the URL accordingly).

It sounds good in theory but it kinda sucks because, unless you have a really good Internet connection (or you happen to hit a CDN-cold URL), nothing happens when you click. This isn't NextJS's fault, but I wonder if it's actually horrible for users.

Yes, it sucks that a user clicks something but nothing happens. (I think it would be better if it were a button-press and not a link, because buttons feel more like an app whereas links have deeply ingrained UX expectations.) But most of the time it's honestly very fast, and when it works it's a nice experience. It's a great piece of functionality for more app'y sites, but less good for websites where most of the traffic comes from direct links or Google searches.

NextJS has built-in critical CSS optimization

Critical inline CSS is critical (pun intended) for web performance. Especially on my poor site where I depend on a bloated (and now ancient) CSS framework called Semantic-UI. Without inlining the critical CSS, the browser has to download a minified CSS file of over 200KB before it can render anything.

In NextJS, to enable inline critical CSS loading you just need to add this to your next.config.js:


    experimental: { optimizeCss: true },

and you have to add critters to your package.json. I've found some bugs with it but nothing I can't work around.
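
For reference, a minimal sketch of what that next.config.js could look like (with critters also added to package.json, as mentioned):

// next.config.js
module.exports = {
  experimental: { optimizeCss: true },
};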

Conclusion and what's next

I'm very familiar and experienced with React, but NextJS is new to me. I've managed to miss it all these years. Until now. So there's still a lot to learn. With other frameworks, I've always been comfortable with the fact that I don't actually understand how Webpack and Babel work (internally), but at least I understood when and how I was calling/depending on them. Now, with NextJS, there's a lot of abstracted magic that I don't quite understand. It's hard to let go of that. It's hard to adopt powerful, complex tools built by large groups of people and still understand it all. If you're desperate to understand exactly how something works, you inevitably have to scale back the amount of stuff you're leveraging. (Note, it might be different if it's absolutely core to what you do for work and hack on for 8 hours a day.)

The JavaScript bundles in NextJS lazy-load quite decently, but it's definitely more bloat than it needs to be. It's partially up to me to fix, because much of the JS code on my site is for things that technically can wait, such as the interactive commenting form and the auto-complete search.

But here's the rub: my site is not an app. Most traffic comes from people doing a Google search, clicking on my page, and then buggering off. It's quite static that way, and who am I to assume that they'll stay, click around, and reuse all that loaded JavaScript code?

With that said; I'm going to start an experiment to rewrite the site again in Remix.

How to simulate slow lazy chunk-loading in React

March 25, 2021
0 comments React, JavaScript

Suppose you have one of those React apps that lazy-loads some chunk. Basically, it injects a .js static asset URL into the DOM and, once the browser has downloaded it, carries on the React rendering with the new code loaded. Well, what if the network is really slow? In local development, it can be hard to simulate this. You can mess with the browser's DevTools to try to slow down the network, but even that can be too fast sometimes.

What I often do is, I take this:


const SettingsApp = React.lazy(() => import("./app"));

...and change it to this:


const SettingsApp = React.lazy(() =>
  import("./app").then((module) => {
    return new Promise((resolve) => {
      setTimeout(() => {
        resolve(module as any);
      }, 10000);
    });
  })
);

Now, it won't load that JS chunk until 10 seconds later. Only temporarily, in local development.
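
If the lazy component is wrapped in a <React.Suspense> boundary, you get to stare at the fallback for those 10 artificial seconds. A minimal sketch, reusing the SettingsApp name from above:

import React from "react";

function Settings() {
  return (
    // While the artificially delayed chunk "downloads", the fallback is shown
    <React.Suspense fallback={<p>Loading settings...</p>}>
      <SettingsApp />
    </React.Suspense>
  );
}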

I know it's admittedly just a hack but it's nifty. Just don't forget to undo it when you're done simulating your snail-speed web app.

PS. That resolve(module as any); is for TypeScript. You can just change that to resolve(module); if it's regular JavaScript.

useSearchParams as a React global state manager

February 1, 2021
0 comments React, JavaScript

tl;dr; The useSearchParams hook from react-router is great as a hybrid state manager in React.

The wonderful react-router has a v6 release coming soon. At the time of writing, 6.0.0-beta.0 is the release to play with. It comes with a React hook called useSearchParams and it's fantastic. It's not a global state manager, but it can be used as one. It's not persistent, but it's semi-persistent in that state can be recovered/retained in browser refreshes.

Basically, instead of component state (e.g. React.useState()) you use:


import React from "react";
import { createSearchParams, useSearchParams } from "react-router-dom";
import "./styles.css";

export default function App() {
  const [searchParams, setSearchParams] = useSearchParams();

  const favoriteFruit = searchParams.get("fruit");
  return (
    <div className="App">
      <h1>Favorite fruit</h1>
      {favoriteFruit ? (
        <p>
          Your favorite fruit is <b>{favoriteFruit}</b>
        </p>
      ) : (
        <i>No favorite fruit selected yet.</i>
      )}

      {["🍒", "🍑", "🍎", "🍌"].map((fruit) => {
        return (
          <p key={fruit}>
            <label htmlFor={`id_${fruit}`}>{fruit}</label>
            <input
              type="radio"
              value={fruit}
              checked={favoriteFruit === fruit}
              onChange={(event) => {
                setSearchParams(
                  createSearchParams({ fruit: event.target.value })
                );
              }}
            />
          </p>
        );
      })}
    </div>
  );
}

See Codesandbox demo here

To get a feel for it, try the demo page in Codesandbox and note how it basically sets ?fruit=🍌 in the URL; if you refresh the page, it just continues as if the state had been persisted.

Basically, that's it. You never have a local component state but instead, you use the current URL as your store, and useSearchParams is your conduit for it. The advantages are:

  1. It's dead simple to use
  2. You get "shared state" across components without needing to manually inform them through prop drilling
  3. At any time, the current URL is a shareable snapshot of the state

The disadvantages are:

  1. The state needs to be realistically serializable through the URLSearchParams web API
  2. The keys used need to be globally reserved for each distinct component that uses it
  3. You might not want the URL to change

That's all you need to know to get started. But let's dig into some more advanced examples, with some abstractions, to "work around" the limitations.

To append or to reset

Suppose you have many different components; it's very likely that they don't really know or care about each other. Suppose the current URL is /page?food=🍔 and one component does setSearchParams(createSearchParams({fruit: "🍑"})): what will happen is that the URL will "start over" and become /page?fruit=🍑. In other words, the food=🍔 was lost. Well, this might be a desired effect, but let's assume it's not, so we'll have to make it "append" instead. Here's one such solution:


// Note: this assumes it lives inside the component, where `searchParams`
// (from useSearchParams()) and `createSearchParams` are in scope.
function appendSearchParams(obj) {
  const sp = createSearchParams(searchParams);
  Object.entries(obj).forEach(([key, value]) => {
    if (Array.isArray(value)) {
      sp.delete(key);
      value.forEach((v) => sp.append(key, v));
    } else if (value === undefined) {
      sp.delete(key);
    } else {
      sp.set(key, value);
    }
  });
  return sp;
}

Now, you can do things like this:


onChange={(event) => {
  setSearchParams(
-    createSearchParams({ fruit: event.target.value })
+    appendSearchParams({ fruit: event.target.value })
  );
}}

See Codesandbox demo here

Now, the two keys work independently of each other. It has a nice "just works" feeling.

Note that this appendSearchParams() implementation also handles arrays. You could now call it like this:


{/* Untested, but hopefully the point is demonstrated */}
<div>
  <ul>
    {(searchParams.getAll("languages") || []).map((language) => (
      <li key={language}>{language}</li>
    ))}
  </ul>
  <button
    type="button"
    onClick={() => {
      setSearchParams(
        appendSearchParams({ languages: ["en-US", "sv-SE"] })
      );
    }}
  >
    Select 'both'
  </button>
</div>

...and that will update the URL to become ?languages=en-US&languages=sv-SE.

Serialize it into links

The useSearchParams hook returns a callable setSearchParams() which is basically doing a redirect (uses the useNavigate() hook). But suppose you want to make a link that serializes a "future state". Here's a very basic example:


// Assumes 'import { Link } from "react-router-dom";'

<Link to={`?${appendSearchParams({fruit: "🍌"})}`}>Switch to 🍌</Link>

See Codesandbox demo here

Now, you get nice regular hyperlinks that users can right-click and "Open in a new tab", and it'll just work.

Type conversion and protection

The simple examples above use strings and arrays of strings. But suppose you need to do more advanced type conversions. For example: /tax-calculator?rate=3.14, where you might have something that needs to be deserialized and serialized as a floating-point number. Basically, you have to wrap the deserializing in a more careful way. E.g.


function TaxYourImagination() {
  const [searchParams, setSearchParams] = useSearchParams();

  // URLSearchParams.get() doesn't take a default value, so fall back manually.
  // DEFAULT_TAX_RATE is defined elsewhere.
  const taxRaw = searchParams.get("tax") || DEFAULT_TAX_RATE;
  let tax;
  let taxError;
  try {
    // castAndCheck() is your own parse-and-validate function
    tax = castAndCheck(taxRaw);
  } catch (err) {
    taxError = err;
  }

  if (taxError) {
    return (
      <div className="error-alert">
        The provided tax rate is invalid: <code>{taxError.toString()}</code>
      </div>
    );
  }
  return (
    <DisplayTax
      value={tax}
      onUpdate={(newValue) => {
        setSearchParams(createSearchParams({ tax: newValue.toFixed(2) }));
      }}
    />
  );
}
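
For completeness, castAndCheck() above is just a placeholder for your own parse-and-validate step. A hypothetical version could be as simple as:

// A hypothetical castAndCheck(): parse the string, reject nonsense values
function castAndCheck(raw) {
  const value = parseFloat(raw);
  if (isNaN(value) || value < 0 || value > 100) {
    throw new Error(`invalid tax rate: ${raw}`);
  }
  return value;
}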

A React vs. Preact case study for a widget

July 24, 2019
0 comments Web development, React, Web Performance, JavaScript

tl;dr; The previous (React) total JavaScript bundle size was: 36.2K Brotli compressed. The new (Preact) JavaScript bundle size was: 5.9K. I.e. 6 times smaller. Also, it appears to load faster in WebPageTest.

I have a Django server-side rendered page that has on it a form that looks something like this:


<div id="root">  
  <form action="https://songsear.ch/q/">  
    <input type="search" name="term" placeholder="Type your search here..." />
    <button>Search</button>
  </form>  
</div>

It's a simple search form. But, to make it a bit better for users, I wrote a React widget that renders, into this document.querySelector('#root'), a near-identical <form> but with autocomplete functionality that displays suggestions as you type.
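
A minimal sketch of how such a widget takes over that element (assuming the CRA-era ReactDOM.render API and that the widget lives in App.js):

import React from "react";
import ReactDOM from "react-dom";
import App from "./App";

// Replace the server-rendered <form> with the React-rendered one
ReactDOM.render(<App />, document.querySelector("#root"));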

Anyway, I built that React bundle using create-react-app. I use the yarn run build command that generates...

  • css/main.83463791.chunk.css - 1.4K
  • js/main.ec6364ab.chunk.js - 9.0K (gzip 2.8K, br 2.5K)
  • js/runtime~main.a8a9905a.js - 1.5K (gzip 754B, br 688B)
  • js/2.b944397d.chunk.js - 119K (gzip 36K, br 33K)

Then, in Python, a piece of post-processing code copies the files from the build/static/ directory and inserts them into the rendered HTML file. The CSS gets injected as an inline <style> tag.

It's a simple little widget. No need for any service workers or react-router or any global state stuff. (Actually, it only has a single runtime dependency outside the framework.) I thought, how about moving this to Preact?

In comes preact-cli

The app used a couple of React hooks but they were easy to transform into class components. Now I just needed to run:


npx preact create --yarn widget name-of-my-preact-project
cd name-of-my-preact-project
mkdir src
cp ../name-of-React-project/src/App.js src/
code src/App.js

Then, I moved over the src/App.js from the create-react-app project and, little by little, did the various small things that you need to do. For example, learning to build with preact build --no-prerender --no-service-worker and how to override the default template.
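
For the hooks-to-class conversion mentioned earlier, the transformation is roughly of this shape (a generic sketch, not the actual widget code):

import React, { Component } from "react";

// Before: a function component with a useState hook
function SearchWithHook() {
  const [term, setTerm] = React.useState("");
  return <input value={term} onChange={(e) => setTerm(e.target.value)} />;
}

// After: the same thing as a class component with this.state
class SearchWithState extends Component {
  state = { term: "" };
  render() {
    return (
      <input
        value={this.state.term}
        onChange={(e) => this.setState({ term: e.target.value })}
      />
    );
  }
}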

Long story short, the new built bundles look like this:

  • style.82edf.css - 1.4K
  • bundle.d91f9.js - 18K (gzip 6.4K, br 5.9K)
  • polyfills.9168d.js - 4.5K (gzip 1.8K, br 1.6K)

(The polyfills.9168d.js gets injected as a script tag if window.fetch is falsy)
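
That conditional injection is roughly this idea (a sketch, not the exact code; the real path depends on where the static assets are served from):

// Only pull in the polyfill bundle for browsers that lack window.fetch
if (!window.fetch) {
  var script = document.createElement("script");
  script.src = "/polyfills.9168d.js";
  document.head.appendChild(script);
}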

Unfortunately, when I did the move from React to Preact I also made some small fixes. Doing the "migration" I noticed a block of code that was never used, so that gives the Preact build bundle a slight advantage. But I think it's negligible.

In conclusion: The previous total JavaScript bundle size was 36.2K (Brotli compressed). The new JavaScript bundle size was 5.9K (Brotli compressed). I.e. 6 times smaller. But if you worry about the total amount of JavaScript to parse and execute, the size difference uncompressed was 129K vs. 18K. I.e. 7 times smaller. I can only speculate, but I do suspect you need less CPU/battery to process 18K instead of 129K, if CPU/battery matters more than (or nearly as much as) network I/O.

WebPageTest - Visual Comparison - Mobile Slow 3G

Rendering speed difference

Rendering speed is so darn hard to measure on the web because the app is so small. Plus, there's so much else going on that matters.

However, using WebPageTest I can do a visual comparison with the "Mobile - Slow 3G" preset. It'll be a somewhat decent measurement of the total time of downloading, parsing, and executing. Thing is, the server-side rendered HTML form has a button. But the React/Preact widget that takes over the DOM hides that submit button. So, using the screenshots that WebPageTest provides, I can deduce that the Preact widget completes 0.8 seconds faster than the React widget. (I.e. instead of 4.4s it became 3.9s)

Truth be told, I'm not sure how predictable or reproducible this is. I ran that WebPageTest visual comparison more than once and the results can vary significantly. I'm not even sure which run I'm referring to here (in the screenshot), but the React widget version was never faster.

Conclusion and thoughts

Unsurprisingly, Preact is smaller because you simply get less from that framework, e.g. synthetic events. I was lucky. My app uses onChange, which I could easily "migrate" to onInput, and I got it to work without much trouble. I'm glad the widget app was so small and that I don't depend on any React-specific third-party dependencies.

But! The WebPageTest Visual Comparison was on "Mobile - Slow 3G", which only represents a small portion of the traffic. Mobile is a huge portion of the traffic, but "Slow 3G" is not. When you do a Desktop comparison, the difference is roughly 0.1s.

Also, in total, that page is made up of 3 major elements:

  1. The server-side rendered HTML
  2. The progressive JavaScript widget (what this blog post is about)
  3. A JavaScript-initiated banner ad

That HTML controls the "First Meaningful Paint" which takes 3 seconds. And the whole shebang, including the banner ad, takes a total of about 9s. So, all this work of rewriting a React app to Preact saved me 0.8s out of the total of 9s.

Web performance is hard and complicated. Every little bit counts, but keep your eye on the big-ticket items, assuming there's something you can do about them.

At the time of writing, preact-cli uses Preact 8.2 and I'm eager to see how Preact X feels. Apparently, since April 2019, it's in beta. Looking forward to giving it a try!

Whatsdeployed rewritten in React

April 15, 2019
0 comments Web development, Python, React, JavaScript

A couple of months ago my colleague Michael @mythmon Cooper wanted to add a feature to the front-end code of Whatsdeployed and learned that the whole front-end is spaghetti jQuery code. So, instead, he re-wrote it in React. My only requirements were "Use create-react-app and no redux", i.e. keep it simple.

We also took the opportunity to rewrite some of the ways that URLs are handled. It used to be that a "short link" would redirect. For example, GET /s-5HY would return a 302 to Location: ?org=mozilla&repo=tecken&name[]=Dev&url[]=https://symbols.dev.mozaws.net/__version__&name[]=Stage... Basically, the short link was just an alias for a redirect, just like services such as bit.ly or g.co. Now, the short link is a permanent fixture. The short link is included in the XHR calls to the server for getting the relevant data.

All old URLs will continue to work, but now the canonical URL becomes /s/5HY/mozilla-services/tecken, for example. The :org/:repo isn't really necessary because the server knows exactly what 5HY (in this example) means, but it's nice for the URL bar's memory.

Another thing that changed was how it recognizes "bors commits". When you use bors, you put a bunch of commits into a GitHub Pull Request and then ask the bors bot to merge them into master. Using "bors mode" in Whatsdeployed is optional but we believe it looks a lot more user-friendly. Here is an example of mozilla/normandy with "bors mode" toggled on and off.

Without "bors mode"
Without "bors mode"

With "bors mode"
With "bors mode"

Thank you mythmon!

Lastly, hopefully this will make it a lot easier to contribute. Check out https://github.com/peterbe/whatsdeployed. All you need is Python 3, PostgreSQL, and almost any version of Node that can run create-react-app. Ping me if you find it hard to get up and running.

create-react-app, SCSS, and Bulmaswatch

February 12, 2019
2 comments Web development, React, JavaScript

1. Create a create-react-app first:

create-react-app myapp

2. Enter it and install node-sass and bulmaswatch

cd myapp
yarn add bulma bulmaswatch node-sass

3. Edit the src/index.js to import index.scss instead:


-import "./index.css";
+import "./index.scss";

4. "Rename" the index.css file:

git rm src/index.css 
touch src/index.scss
git add src/index.scss

5. Now edit the src/index.scss to look like this:


@import "node_modules/bulmaswatch/darkly/bulmaswatch";

This assumes your favorite theme was the darkly one. You can obviously change that later.

6. Run the app:

BROWSER=none yarn start

7. Open the browser at http://localhost:3000

CRA start

That's it! However, the create-react-app default look doesn't expose any of the cool stuff that Bulma can style. So let's rewrite our src/App.js by copying the minimal starter HTML from the Bulma documentation. Make the src/App.js component look something like this:


import React, { Component } from "react";

class App extends Component {
  render() {
    return (
      <section className="section">
        <div className="container">
          <h1 className="title">Hello World</h1>
          <p className="subtitle">
            My first website with <strong>Bulma</strong>!
          </p>
        </div>
      </section>
    );
  }
}

export default App;

Now it'll look like this:

Bulma starter template

Yes, it's not much but it's a great start. Over to you to take this to infinity and beyond!

Not So Secret Sauce

In the rushed instructions above the choice of theme was darkly. But what you need to do next is go to https://jenil.github.io/bulmaswatch/, click around and eventually pick the one you like. Suppose you like spacelab, then you just change that @import ... line to be:


@import "node_modules/bulmaswatch/spacelab/bulmaswatch";


Hooks tip! Avoid infinite recursion in React.useEffect()

February 6, 2019
1 comment React, JavaScript

React 16.8.0 with Hooks was released today. A big deal. Executive summary: components as functions are all the rage now.

What used to be this:


class MyComponent extends React.Component {
  ...

  componentDidMount() {
    ...
  }
  componentDidUpdate() {
    ...
  }

  render() { STUFF }
}

...is now this:


function MyComponent() {
  ...

  React.useEffect(() => {
    ...
  })

  return STUFF
}

Inside the useEffect "side-effect callback" you can actually update state. But if you do, and this is no different than the old React.Component.componentDidUpdate, it will re-run the side-effect callback. Here's a simple way to cause an infinite recursion:


// DON'T DO THIS

function MyComponent() {
  const [counter, setCounter] = React.useState(0);

  React.useEffect(() => {
    setCounter(counter + 1);
  })

  return <p>Forever!</p>
}

The trick is to pass a second argument to React.useEffect: a list of dependencies, so the effect only re-runs when one of those values changes.

Here's how to fix the example above:


function MyComponent() {
  const [counter, setCounter] = React.useState(0);
  const [times, setTimes] = React.useState(0);

  React.useEffect(
    () => {
      if (times % 3 === 0) {
        setCounter(counter + 1);
      }
    },
    [times]  // <--- THIS RIGHT HERE IS THE KEY!
  );

  return (
    <div>
      <p>
        Divisible by 3: {counter}
        <br />
        Times: {times}
      </p>
      <button type="button" onClick={e => setTimes(times + 1)}>
        +1
      </button>
    </div>
  );
}

You can see it in this demo.

Note, this isn't just about avoiding infinite recursion. It can also be used to fit your business logic and/or as an optimization to avoid executing the effect too often.

Displaying fetch() errors and unwanted responses in React

February 6, 2019
0 comments Web development, React, JavaScript

tl;dr; You can use error instanceof window.Response to distinguish between fetch exceptions and fetch responses.

When you do something like...


const response = await fetch(URL);

...two bad things can happen.

  1. The network request fails entirely. I.e. there's not even a response with an HTTP status code.
  2. The response "worked" but the HTTP status code was not to your liking.

Either way, your React app needs to deal with this. Ideally in a not-too-clunky way. So here is one take on this challenge/opportunity which I hope can inspire you to extend it the way you need it to go.

The trick is to "clump" exceptions with responses. Then you can do this:


function ShowServerError({ error }) {
  if (!error) {
    return null;
  }
  return (
    <div className="alert">
      <h3>Server Error</h3>
      {error instanceof window.Response ? (
        <p>
          <b>{error.status}</b> on <b>{error.url}</b>
          <br />
          <small>{error.statusText}</small>
        </p>
      ) : (
        <p>
          <code>{error.toString()}</code>
        </p>
      )}
    </div>
  );
}

The greatest trick the devil ever pulled was to use if (error instanceof window.Response) {. Then you know that error thing is the outcome of THIS = await fetch(URL) (or fetch(URL).then(THIS) if you prefer). Another good trick the devil pulled was to be aware that exceptions, when asked to render in React, do not naturally call their .toString(), so you have to do that yourself with {error.toString()}.

This codesandbox demonstrates it quite well. (Although, at the time of writing, codesandbox will spew warnings related to testing React components in the console log. Ignore that.)

If you can't open that codesandbox, here's the gist of it:


React.useEffect(() => {
  url &&
    (async () => {
      let response;
      try {
        response = await fetch(url);
      } catch (ex) {
        return setServerError(ex);
      }
      if (!response.ok) {
        return setServerError(response);
      }
      // do something here with `await response.json()`
    })();
}, [url]);

By the way, another important trick is to be subtle with how you put the try { and } catch(ex) {.


// DON'T DO THIS

try {
  const response = await fetch(url);
  if (!response.ok) {
    setServerError(response);
  }
  // do something here with `await response.json()`
} catch (ex) {
  setServerError(ex);
}

Instead...


// DO THIS

let response;
try {
  response = await fetch(url);
} catch (ex) {
  return setServerError(ex);
}
if (!response.ok) {
  return setServerError(response);
}
// do something here with `await response.json()`

If you don't do that, you risk catching other exceptions that aren't exclusively from the fetch() call. Also, notice the use of return inside the catch block, which exits the function early, leaving the rest of the code (de-dented 1 level) to deal with the happy-path response object.

Be aware that the test if (!response.ok) is simplistic. It's just a shorthand for checking that the status is in the range 200 to 299, inclusive. Realistically, getting a response.status === 400 isn't really an "error". It might just be a validation error hint from the server, and likely the await response.json() will work and contain useful information. No need to throw up a toast or a flash message that the communication with the server failed.
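
For example, you might treat a 400 as form feedback rather than a server failure. A sketch, where setValidationErrors is a hypothetical state setter and the server is assumed to respond with JSON details:

if (response.status === 400) {
  const validationErrors = await response.json();
  setValidationErrors(validationErrors);
} else if (!response.ok) {
  setServerError(response);
}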

Conclusion

The details matter. You might want to deal with exceptions entirely differently from successful responses with bad HTTP status codes. It's nevertheless important to appreciate two things:

  1. Handle complete fetch() failures and feed your UI or your retry mechanisms.

  2. You can, in one component, distinguish between a "successful" fetch() call and thrown JavaScript exceptions.

An example of using Immer to handle nested objects in React state

January 18, 2019
1 comment React, JavaScript

When Immer first came out I was confused. I kinda understood what I was reading but I couldn't really see what was so great about it. As always, nothing beats actual code you type yourself to experience how something works.

Here is, I believe, a great example: https://codesandbox.io/s/y2m399pw31

If you're reading this on your mobile it might be hard to see what it does. Basically, it's a very simple React app that displays a "todo list like" thing. The state (aka. this.state.tasks) is a pure JavaScript array. The React components that display the data (e.g. <List tasks={this.state.tasks}/> and <ShowItem item={item} />) are pure (i.e. extends React.PureComponent) meaning React natively protects from re-rendering a component when the props haven't changed. So no wasted render-cycles.

What Immer does is help you mutate an object in a smart way. I'm sure you've heard that you're never supposed to mutate state objects (arrays are a form of mutable objects too!) and instead do things like const stuff = Object.assign({}, this.state.stuff); or const things = this.state.things.slice(0);. However, those are shallow copies, meaning any mutable objects within (i.e. nested objects) don't get the clone treatment and can thus cause problems with not re-rendering when they should.

Here's the core gist:


import React from "react";
import produce from "immer";

class App extends React.Component {
  state = {
    tasks: [[false, { text: "Do something", date: new Date() }]]
  };
  onToggleDone = (i, done) => {
    // Immer
    // This is what the blog post is all about...
    const tasks = produce(this.state.tasks, draft => {
      draft[i][0] = done;
      draft[i][1].date = new Date();
    });

    // Pure JS
    // Don't do this!
    // const tasks = this.state.tasks.slice(0);
    // tasks[i][0] = done;
    // tasks[i][1].date = new Date();

    this.setState({ tasks });
  };
  render() {
    // abbreviated, but...
    return <List tasks={this.state.tasks}/>
  }
}

class List extends React.PureComponent {
   ...

It just works. Neat!

By the way, here's a code sandbox that accomplishes the same thing but with ImmutableJS, which I think is uglier. I think it's uglier because now the rendering components need to be aware that they're rendering immutable.Map objects instead.

Caveats

  1. The cost of what immer.produce does isn't free. It's some smart work that needs to be done. But the alternative is to deep clone the object, which is going to be much slower. Immer isn't the fastest kid on the block, but unlike MobX and ImmutableJS, once you've done this smart stuff you're back to plain JavaScript objects.

  2. Careful with doing something like console.log(draft) since it will raise a TypeError in your web console. Just be aware of that or use console.log(JSON.stringify(draft)) instead.

  3. If you know with confidence that your mutable object does not, and will not, have nested mutable objects you can use object spread, Object.assign(), or .slice(0) and save yourself the trouble of another dependency.
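
For that last caveat, a shallow-copy update (no Immer) can look like this sketch, assuming a flat item shape such as { text, done } rather than the nested one used above:

// Shallow-copy the array, then replace just the item you're changing
const tasks = this.state.tasks.slice(0);
tasks[i] = { ...tasks[i], done: true };
this.setState({ tasks });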

React.memo instead of React.PureComponent

November 2, 2018
0 comments JavaScript, React

React Hooks isn't here yet but when it comes I'll be all over that, replacing many of my classes with functions.

However, as of React 16.6 there's this awesome new React.memo() thing which is such a neat solution. Why didn't I think of that, myself, sooner?!

Anyway, one of the subtle benefits of it is that functions minify a lot better than classes when Babel'ifying your ES6 code.

To test that, I took one of my project's classes, which needed to be "PureComponent":


class ShowAutocompleteSuggestionSong extends React.PureComponent {
  render() {
    const { song } = this.props;
    return (
      <div className="media autocomplete-suggestion-song">
        <div className="media-left">
          <img
            className={
              song.image && song.image.preview
                ? 'img-rounded lazyload preview'
                : 'img-rounded lazyload'
            }
            src={
              song.image && song.image.preview
                ? song.image.preview
                : placeholderImage
            }
            data-src={
              song.image ? absolutifyUrl(song.image.url) : placeholderImage
            }
            alt={song.name}
          />
        </div>
        <div className="media-body">
          <h5 className="artist-name">
            <b>{song.name}</b>
            {' by '}
            <span>{song.artist.name}</span>
          </h5>
          {song.fragments.map((fragment, i) => {
            return <p key={i} dangerouslySetInnerHTML={{ __html: fragment }} />;
          })}
        </div>
      </div>
    );
  }
}

Minified it weighs 1,893 bytes and looks like this:

Minified PureComponent class

When re-written with React.memo it looks like this:


const ShowAutocompleteSuggestionSong = React.memo(({ song }) => {
  return (
    <div className="media autocomplete-suggestion-song">
      <div className="media-left">
        <img
          className={
            song.image && song.image.preview
              ? 'img-rounded lazyload preview'
              : 'img-rounded lazyload'
          }
          src={
            song.image && song.image.preview
              ? song.image.preview
              : placeholderImage
          }
          data-src={
            song.image ? absolutifyUrl(song.image.url) : placeholderImage
          }
          alt={song.name}
        />
      </div>
      <div className="media-body">
        <h5 className="artist-name">
          <b>{song.name}</b>
          {' by '}
          <span>{song.artist.name}</span>
        </h5>
        {song.fragments.map((fragment, i) => {
          return <p key={i} dangerouslySetInnerHTML={{ __html: fragment }} />;
        })}
      </div>
    </div>
  );
});

Minified it weighs 783 bytes and looks like this:

Minified React.memo function

Highly scientific measurement. Yeah, I know. (Joking)
Perhaps it's stating the obvious, but part of the ES5 code that Babel generates for classes can be reused by other classes.

Anyway, it's neat and worth considering if you want to squeeze some bytes out. And the bonus is that it gets you prepared for Hooks in React 16.7.