
Programmatically render a NextJS page without a server in Node

September 6, 2022
0 comments Web development, Node, JavaScript

If you use getServerSideProps() in Next you can render a page by visiting it. E.g. GET http://localhost:3000/mypages/page1
Or if you use getStaticProps() with getStaticPaths(), you can use npm run build to generate the HTML files (e.g. in the .next/server/pages directory).
But what if you don't want to start a server? What if you have a particular page/URL in mind that you want to generate, but without starting a server and sending an HTTP GET request to it? This blog post shows a way to do this with a plain Node script.

Here's a solution to programmatically render a page:


#!/usr/bin/env node

import http from "http";

import next from "next";

async function main(uris) {
  const nextApp = next({});
  const nextHandleRequest = nextApp.getRequestHandler();
  await nextApp.prepare();

  const htmls = Object.fromEntries(
    await Promise.all(
      uris.map((uri) => {
        try {
          // If it's a fully qualified URL, make it its pathname
          uri = new URL(uri).pathname;
        } catch {}
        return renderPage(nextHandleRequest, uri);
      })
    )
  );
  console.log(htmls);
}

async function renderPage(handler, url) {
  const req = new http.IncomingMessage(null);
  const res = new http.ServerResponse(req);
  req.method = "GET";
  req.url = url;
  req.path = url;
  req.cookies = {};
  req.headers = {};
  await handler(req, res);
  if (res.statusCode !== 200) {
    throw new Error(`${res.statusCode} on rendering ${req.url}`);
  }
  for (const { data } of res.outputData) {
    const [, body] = data.split("\r\n\r\n");
    if (body) return [url, body];
  }
  throw new Error("No output data has a body");
}

main(process.argv.slice(2)).catch((err) => {
  console.error(err);
  process.exit(1);
});

To demonstrate I created this sample repo: https://github.com/peterbe/programmatically-render-next-page

Note that you need to run npm run build first so Next has all the static assets ready.

In conclusion

The alternative, in automation, would be to run something like this:


▶ npm run build && npm run start &
▶ sleep 5  # give the server a chance to start
▶ xh http://localhost:3000/aboutus
HTTP/1.1 200 OK
Connection: keep-alive
Content-Encoding: gzip
Content-Type: text/html; charset=utf-8
Date: Tue, 06 Sep 2022 12:23:42 GMT
Etag: "m8ff9sdduo1hk"
Keep-Alive: timeout=5
Transfer-Encoding: chunked
Vary: Accept-Encoding
X-Powered-By: Next.js

<!DOCTYPE html><html><head><meta charSet="utf-8"/><meta name="viewport" content="width=device-width"/><title>About Us page</title><meta name="description" content="We do things. I hope."/><link rel="icon" href="/favicon.ico"/><meta name="next-head-count" content="5"/><link rel="preload" href="/_next/static/css/ab44ce7add5c3d11.css" as="style"/><link rel="stylesheet" href="/_next/static/css/ab44ce7add5c3d11.css" data-n-g=""/><link rel="preload" href="/_next/static/css/ae0e3e027412e072.css" as="style"/><link rel="stylesheet" href="/_next/static/css/ae0e3e027412e072.css" data-n-p=""/><noscript data-n-css=""></noscript><script defer="" nomodule="" src="/_next/static/chunks/polyfills-c67a75d1b6f99dc8.js"></script><script src="/_next/static/chunks/webpack-7ee66019f7f6d30f.js" defer=""></script><script src="/_next/static/chunks/framework-db825bd0b4ae01ef.js" defer=""></script><script src="/_next/static/chunks/main-3123a443c688934f.js" defer=""></script><script src="/_next/static/chunks/pages/_app-deb173bd80cbaa92.js" defer=""></script><script src="/_next/static/chunks/996-f1475101e84cf548.js" defer=""></script><script src="/_next/static/chunks/pages/aboutus-41b1f037d974ef60.js" defer=""></script><script src="/_next/static/REJUWXI26y-lp9JVmzJB5/_buildManifest.js" defer=""></script><script src="/_next/static/REJUWXI26y-lp9JVmzJB5/_ssgManifest.js" defer=""></script></head><body><div id="__next"><div class="Home_container__bCOhY"><main class="Home_main__nLjiQ"><h1 class="Home_title__T09hD">About Use page</h1><p class="Home_description__41Owk"><a href="/">Go to the <b>Home</b> page</a></p></main><footer class="Home_footer____T7K"><a href="/">Home page</a></footer></div></div><script id="__NEXT_DATA__" type="application/json">{"props":{"pageProps":{}},"page":"/aboutus","query":{},"buildId":"REJUWXI26y-lp9JVmzJB5","nextExport":true,"autoExport":true,"isFallback":false,"scriptLoader":[]}</script></body></html>

There are probably many great ideas that this can be used for. At work we use getServerSideProps() and we have too many pages to build them all statically. We need a solution like this to do custom analysis of the rendered HTML to check for broken links by analyzing every generated <a href> tag.
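
For example, a crude way to pull every href out of the HTML strings that the script above produces could look like this (a simplified sketch; a real implementation would probably use a proper HTML parser):


function extractLinks(html) {
  const links = [];
  const regex = /<a\s[^>]*href="([^"]*)"/g;
  let match;
  while ((match = regex.exec(html)) !== null) {
    links.push(match[1]);
  }
  return links;
}

// e.g. extractLinks(htmls["/aboutus"]) ==> ["/", "/", ...]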

Introducing docsQL

March 28, 2022
0 comments Web development, GitHub, JavaScript

tl;dr; docsQL is a web app for analyzing lots of Markdown content files with SQL queries.

Demo

Sample instance based on MDN's open source content.

Screenshots

Background/Introduction

When I worked on the code for MDN in 2019-2021 I often found that I needed to understand the content better to debug or test or just to find a sample page that uses some feature. I ended up writing a lot of one-off Python scripts that would traverse the repository files just to do some quick lookup that was too complex for grep. Eventually, I built a prototype called "Traits DB" which was powered by an in-browser SQL engine called alasql. Then in 2021, I joined GitHub to work on GitHub Docs and there, too, there are lots of Markdown files that trigger different features based on various front-matter keys.

docsQL does two things:

  1. It analyzes lots of .md files into a docs.json file which can be queried
  2. It provides a static single-page app for executing SQL against this database file

Plugins

The analyzing portion has a killer feature in that you can write your own plugins tailored specifically to your project. Your project might have quirks that are unique to it. In GitHub Docs, for example, we use something called "LiquidJS" which is a pre-Markdown processing step for things like versioning. So I can write a custom JavaScript plugin that extends the data you get from reading in the front-matter.

Here's an example plugin:


const regex = /💩/g;
export default function countCocoIceMentions({ data, content }) {
  const inTitle = (data.title.match(regex) || []).length;
  const inBody = (content.match(regex) || []).length;
  return {
    chocolateIcecreamMentions: inTitle + inBody,
  };
}

Now, if you add that to your project, you'll be able to run:


SELECT title, chocolateIcecreamMentions FROM ? 
WHERE chocolateIcecreamMentions > 0 
ORDER BY 2 DESC LIMIT 15

How you're supposed to use it

It's up to you. One important fact to keep in mind is that not everyone speaks SQL fluently. And even if you're somewhat confident with SQL, it might not be obvious how this particular engine works or what the fields are. (Mind you, there's a "Help" which shows you all fields and a collection of sample queries).
But it's really intuitive to extend an already written SQL query. So if someone shares their query, it's easy to just extend it. For example, your colleague might share a URL with an SQL query in the query string, but you want to change the sort order, so you just change DESC to ASC.

I would recommend that any team that has a project with a bunch of Markdown files, add docsql as a dependency somewhere, have it build with your directory of Markdown files, and then publish the docsql/out/ directory as a static web page which you can host on Netlify or GitHub Pages.
This way, your team gets a centralized place where team members can share URLs with each other that have queries in them. When someone shares one of these, it gets added to your "Saved queries" and you can extend it from there to add to your own list.

Behind the scenes

The project is here: github.com/peterbe/docsql and it's MIT licensed. The analyzing part is all Node. It's a CLI that is able to dynamically import other .mjs files based on scanning the directory at runtime.
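
For example, dynamically importing plugin files at runtime can be done with something like this (a simplified sketch, not the actual docsQL code; the plugin directory name is an assumption):


import { readdirSync } from "fs";
import { join } from "path";
import { pathToFileURL } from "url";

const pluginDir = "docsql-plugins"; // hypothetical directory name
const plugins = [];
for (const name of readdirSync(pluginDir)) {
  if (!name.endsWith(".mjs")) continue;
  // Dynamic import() wants a file:// URL for local paths
  const mod = await import(pathToFileURL(join(pluginDir, name)).href);
  plugins.push(mod.default);
}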

The front-end is a NextJS static build which uses Mantine for the React UI components.

You can run it with npx like this:


npx docsql /path/to/my/markdown/files

But if you want to control it a bit better you can simply add it to your own Node project with npm install docsql or yarn add docsql.

Let's make it better

First of all, it's a very new project. My initial goal was to get the basics working. A lot of edges have been left rough. Especially in the areas of installation, performance, and the SQL editor. Please come and help out if you see something. In particular, if you tried to set it up but found it hard, we can work together to either improve the documentation or fix some scripts that would help the next person.

For feature requests and bug reports use: https://github.com/peterbe/docsql/issues/new
Or just comment here on the blog post.

From photo of ingredients, to your shopping list

September 10, 2021
0 comments Web development, That's Groce!, Firebase

Today I launched a really cool new feature to That's Groce!: the ability to upload photos of ingredients and have the food words automatically suggested to be added to your shopping list.

It's best explained with this 26-second video.

The general idea

Food words found
The idea is that you know you're going to cook that Vegetarian Curry Lasagna on page 123 in Jamie's Summer Cookbook. Either you read the ingredients and type in each ingredient you're going to need to buy (because you know what's in your pantry and fridge), or you just take a photo of the whole ingredient listing. Now, when you're in the store and wonder "I remember I need red peppers, but how many was it again?!", you can just glance at the photo.
But not only that, once you've taken a photo of the list of ingredients, it can help you populate your shopping list.

How it works in That's Groce! is that your photo is turned into a block of text, and from that text certain "food words" are extracted and all else is ignored. The food words are based on a database of 3,600+ English food words that I've gathered and also manually curated. There are lots of caveats. The 3,600+ food words are not perfect and there are surprisingly many combinations and plural vs. singular that can make the list incomplete. But, that's where you can help! If you find that there's a word it correctly scanned but didn't suggest, you can type in your own suggestions for everyone to benefit from. If you type in something it didn't manage to spot, I'll review that and add it to the global database.
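
To give a rough idea of the matching step, here's a simplified sketch (the real database has 3,600+ curated words and handles plurals and combinations, which this does not):


// Hypothetical mini database; the real one is much bigger and curated
const FOOD_WORDS = ["red peppers", "chickpeas", "goat cheese", "garlic"];

function extractFoodWords(scannedText) {
  const lowered = scannedText.toLowerCase();
  return FOOD_WORDS.filter((word) => new RegExp(`\\b${word}\\b`).test(lowered));
}

// extractFoodWords("2 red peppers\n1 can chickpeas\n...") ==> ["red peppers", "chickpeas"]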

Food-word recognition

The word recognition is done using the Google Cloud Vision API, which is a powerful machine-learning-based service from Google Cloud that is stunningly accurate. Tidy camera photos of cookbooks with good light are nearly perfect, but it can also do an impressive job with photos of handwritten recipes, like this:

Handwritten recipe photo
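
The OCR step itself, using the official Node client for the Cloud Vision API, might look roughly like this (a minimal sketch, assuming @google-cloud/vision and default credentials; not necessarily how That's Groce calls it):


const vision = require("@google-cloud/vision");

async function photoToText(filePath) {
  const client = new vision.ImageAnnotatorClient();
  const [result] = await client.textDetection(filePath);
  // The first annotation contains the entire detected block of text
  const annotations = result.textAnnotations || [];
  return annotations.length ? annotations[0].description : "";
}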

One thing you will find is that it's often hard to only take a photo of the actual list of ingredients. Often, a chunk of the cooking instructions or the recipe "back story" gets into the photo frame. These words aren't actually ingredients and can lead to surprising food word suggestions that you definitely don't need to buy. It's not perfect but at least you'll have a visual memory of what you're cooking as you're standing there in the grocery store.

Options

Understandably, it's nearly impossible for the app to know what you have in your pantry/fridge/freezer/spice rack. But a lot of recipes spell out exactly everything you need, not for buying, but for cooking. E.g. 1 teaspoon salt. But salt (or pepper or sugar or butter) is the kind of foodstuff you probably always have at home. So then it doesn't make sense to suggest that you put "salt" on the shopping list. To solve that, you can set your own preferred options of special keywords. For example, I've added "salt", "pepper", "sugar", "table salt", "black pepper" as words that can always be ignored. You can also train the system to set up aliases. For example, a lot of recipes call for "lemon zest" but what you actually purchase is a lemon and then you grate it yourself on the grater. So you can add special keywords that act as aliases for other words. Here's one example:

Options
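
Conceptually, applying those options to the scanned food words might look something like this (a simplified sketch; the data structures are made up for illustration):


const ALWAYS_IGNORE = new Set(["salt", "pepper", "sugar", "table salt", "black pepper"]);
const ALIASES = new Map([["lemon zest", "lemon"]]);

function applyOptions(foodWords) {
  return foodWords
    .filter((word) => !ALWAYS_IGNORE.has(word))
    .map((word) => ALIASES.get(word) || word);
}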

Help out!

Foodwords database sample
Over the last couple of days, I've been snapping lots of photos from lots of different cookbooks, and every time I begin to think the database is getting good, I stumble on some new food word. For example, I recently tested a photo of a recipe that called for "Jackfruit". What even is that?! Anyway, it is inevitable that certain words are missing. But that's where we can help each other. If you test it out and you notice that it correctly scanned the text but the word wasn't suggested, click the "Suggest" button and type in your suggestions. Together we can, one food word at a time, eradicate misses.

There's still some work to be done to make the database even stronger by setting up clever aliases for everyone. For example, a lot of recipes call for "(some number) cloves garlic" but it can easily get confused for the spice "cloves" and the root vegetable "garlic". So, perhaps we can train it to recognize "cloves garlic" to actually just mean "garlic".

Also, the database is currently only in (primarily American) English. The platform would support other languages but I would definitely need a hand to seed it with more words in other languages.

Try it out

If you haven't set up an account yet (it's still free!), to test it, you can go to https://thatsgroce.web.app, click "Get started without signing in", go into your newly created shopping list, and press the "Photos" button. Try uploading some snapped photos from your cookbooks. Please please let me know what you think!

How I upload Firebase images optimized

September 2, 2021
0 comments JavaScript, Web development, Firebase

I have an app that allows you to upload images. The images are stored using Firebase Storage. Then, once uploaded, I have a Firebase Cloud Function that can turn that into a thumbnail. The problem with this is that it takes a long time, the first time, to wake up the cloud function and generate that thumbnail. Not to mention the download of the thumbnail payload for the client. It's not unrealistic that the whole thumbnail generation plus download can take multiple (single-digit) seconds. But you don't want to have the user sit and wait that long. My solution is to display the uploaded file in an <img> tag using URL.createObjectURL().

The following code is mostly pseudo-code but should look familiar if you're used to how Firebase and React/Preact work. Here's the FileUpload component:


interface Props {
  onUploaded: ({ file, filePath }: { file: File; filePath: string }) => void;
  onSaved?: () => void;
}

function FileUpload({ onSaved, onUploaded }: Props) {
  const [file, setFile] = useState<File | null>(null);

  // ...some other state stuff omitted for example.

  useEffect(() => {
    if (file) {
      const metadata = {
        contentType: file.type,
      };

      const filePath = getImageFullPath(prefix, item ? item.id : list.id, file);
      const storageRef = storage.ref();

      const uploadTask = storageRef.child(filePath).put(file, metadata);
      uploadTask.on(
        "state_changed",
        (snapshot) => {
          // ...set progress percentage
        },
        (error) => {
          setUploadError(error);
        },
        () => {
          onUploaded({ file, filePath }); // THE IMPORTANT BIT!

          db.collection("pictures")
            .add({ filePath })
            .then(() => {
              if (onSaved) onSaved();
            });
        }
      );
    }
  }, [file]);

  return (
      <input
        type="file"
        accept="image/jpeg, image/png"
        onInput={(event) => {
          if (event.target.files) {
            const file = event.target.files[0];
            validateFile(file);
            setFile(file);
          }
        }}
      />
  );
}

The important "trick" is that we call back after the storage is complete by sending the filePath and the file back to whatever component triggered this component. Now, you can know, in the parent component, that there's going to soon be an image reference with a file path (filePath) that refers to that File object.

Here's a rough version of how I use this <FileUpload> component:


function Images() {

  const [uploadedFiles, setUploadedFiles] = useState<Map<string, File>>(
    new Map()
  );

  return (<div>  
    <FileUpload
      onUploaded={({ file, filePath }: { file: File; filePath: string }) => {
        const newMap: Map<string, File> = new Map(uploadedFiles);
        newMap.set(filePath, file);
        setUploadedFiles(newMap);
      }}
      />

    <ListUploadedPictures uploadedFiles={uploadedFiles}/>
    </div>
  );
}

function ListUploadedPictures({
  uploadedFiles,
}: {
  uploadedFiles: Map<string, File>;
}) {
  // Imagine some Firebase Firestore subscriber here
  // that watches for uploaded pictures.
  return (
    <div>
      {pictures.map((picture) => (
        <Picture key={picture.filePath} picture={picture} uploadedFiles={uploadedFiles} />
      ))}
    </div>
  );
}

function Picture({ 
  uploadedFiles,
  picture,
}: {
  uploadedFiles: Map<string, File>;
  picture: {
    filePath: string;
  }
}) {
  const { filePath } = picture;
  const file = uploadedFiles.get(filePath);
  const thumbnailURL = getThumbnailURL(filePath, 500);
  const [loaded, setLoaded] = useState(false);

  useEffect(() => {
    let mounted = true;
    const preloadImg = new Image();
    preloadImg.src = thumbnailURL;

    const callback = () => {
      if (mounted) {
        setLoaded(true);
      }
    };
    if (preloadImg.decode) {
      preloadImg.decode().then(callback, callback);
    } else {
      preloadImg.onload = callback;
    }

    return () => {
      mounted = false;
    };
  }, [thumbnailURL]);

  return (
    <img
      style={{
        width: 500,
        height: 500,
        "object-fit": "cover",
      }}
      src={
        loaded
          ? thumbnailURL
          : file
          ? URL.createObjectURL(file)
          : PLACEHOLDER_IMAGE
      }
    />
  );
}

Phew! That was a lot of code. Sorry about that. But still, this is just a summary of the real application code.

The point is that I send the File object back to the parent component immediately after having uploaded it to Firebase Cloud Storage. Then, having access to that as a File object, I can use that as the thumbnail while I wait for the real thumbnail to come in. Now, it doesn't matter that it takes 1-2 seconds to wake up the cloud function, 1-2 seconds to perform the thumbnail creation, and then 0.1-2 seconds to download the thumbnail. All the while this is happening, you're looking at the File object that was uploaded. Visually, the user doesn't even notice the difference. If you refresh the page, that temporary in-memory uploadedFiles (Map instance) is empty so you're now relying on the loading of the thumbnail, which should hopefully, at this point, be stored in the browser's native HTTP cache.

The other important part of the trick is that we're using const preloadImg = new Image() for loading the thumbnail. And by relying on preloadImg.decode ? preloadImg.decode().then(...) : preloadImg.onload = ... we can be informed only when the thumbnail has been successfully created and successfully downloaded, and then make the swap.

10 years a Mozillian, always a Mozillian

August 30, 2021
2 comments Web development, Mozilla, MDN

As of September 2021, I am leaving Mozilla after 10 years. It hasn't been perfect but it's been a wonderful time with fond memories and an amazing career rocket ship.

In April 2011, I joined as a web developer to work on internal web applications that support the Firefox development engineering. In rough order, I worked on...

  • Elmo: The web application for managing the state of Firefox localization
  • Socorro: When Firefox crashes and asks to send a crash dump, this is the storage plus website for analyzing that
  • Peekaboo: When people come to visit a Mozilla office, they sign in on a tablet at the reception desk
  • Balrog: For managing what versions are available for Firefox products to query when it's time to self-upgrade
  • Air Mozilla: For watching live streams and video archive of all recordings within the company
  • MozTrap: For QA engineers to track what was tested, and the results, when QA testing Firefox products
  • Symbol Server: Where all C++ debug symbols are stored from the build pipeline to be used to source-map crash stack traces
  • Buildhub: To get a complete database of all and every individual build shipped of Firefox products
  • Remote Settings: Managing experiments and for Firefox to "phone home" for smaller updates/experiments between releases
  • MDN Web Docs: Where web developers go to look up all the latest and most detailed details about web APIs

This is an incomplete list because at Mozilla you get to help each other and I shipped a lot of smaller projects too, such as Contribute.json, Whatsdeployed, GitHub PR Triage, Bugzilla GitHub Bug Linker.

Reflecting back, the highlight of any project is when you get to meet or interact with the people you help. Few things are as rewarding as when someone you don't know, in person, finds out what you do and says: "Are you Peter?! The one who built XYZ? I love that stuff! We use it all the time now in my team. Thank you!" It's not a brag, because oftentimes what you build for fellow humans isn't engineering'ly brilliant in any way. It's just something that someone needed. Perhaps the lesson learned is the importance of not celebrating what you've built but simply putting yourself into the same room as the people who use what you built. And, in fact, if what you've built for someone else isn't particularly loved, meeting and fully interacting with the people who use "your stuff" gives you the best feedback, and who doesn't love constructive criticism? It empowers you to build better stuff.

Mozilla is a great company. There is no doubt in my mind. We ship high-quality products and we do it with pride. There have definitely been some rough patches over the years but that happens and you just have to carry on and try to focus on delivering value. Firefox Nightly will continue to be my default browser and I'll happily click any Google search ads to help every now and then. THANK YOU everyone I've ever worked with at Mozilla! You are a wonderful bunch of people!

How to get all of MDN Web Docs running locally

June 9, 2021
1 comment Web development, MDN

tl;dr; git clone https://github.com/mdn/content.git && cd content && yarn install && yarn start && open http://localhost:5000/ will get you all of MDN Web Docs running on your laptop.

The MDN Web Docs is built from a git repository: github.com/mdn/content. It contains all you need to get all the content running locally. Including search. Embedded inside that repository is a package.json which helps you start a Yari server, a.k.a. the preview server. It's a static build of the github.com/mdn/yari project which handles client-side rendering and search, and includes a just-in-time server-side rendering server.

Basics

All you need is the following:

▶ git clone https://github.com/mdn/content.git
▶ cd content
▶ yarn install
▶ yarn start

And now open http://localhost:5000 in your browser.

This will now run in "preview server" mode. It's meant for contributors (and core writers) to use when they're working on a git branch. Because of that, you'll see a "Writer's homepage" at the root URL. And when viewing each document, you get buttons about "flaws" and stuff. Looks like this:

Preview server

Alternative ways to download

If you don't want to use git clone you can download the ZIP file. For example:

▶ wget https://github.com/mdn/content/archive/refs/heads/main.zip
▶ unzip main.zip
▶ cd content-main
▶ yarn install
▶ yarn start

At the time of writing, the downloaded Zip file is 86MB and unzipped the directory is 278MB on disk.

When you use git clone, by default it will download all the git history. That can actually be useful. This way, when rendering each document, it can figure out from the git logs when each individual document was last modified. For example:

"Last modified"

If you don't care about the "Last modified" date, you can do a "shallow git clone" instead. Replace the above-mentioned first command with:

▶ git clone --depth 1 https://github.com/mdn/content.git

At the time of writing the shallow cloned content folder becomes 234MB instead of (the deep clone) 302MB.

Just the raw rendered data

Every MDN Web Docs page has an index.json equivalent. Take any MDN page and add /index.json to the URL. For example /en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/slice/index.json

Essentially, this is the intermediate state that's used for server-side rendering the page. A glorified way of sandwiching the content in a header, a footer, and a sidebar to the side. These URLs work on localhost:5000 too. Try http://localhost:5000/en-US/docs/Web/API/Fetch_API/Using_Fetch/index.json for example.
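
For example, consuming one of those index.json files programmatically could look like this (assuming Node 18+ so fetch is global, an ES module context, and the preview server running; the shape shown is simplified):


const response = await fetch(
  "http://localhost:5000/en-US/docs/Web/API/Fetch_API/Using_Fetch/index.json"
);
const { doc } = await response.json();
console.log(doc.title);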

The content for that index.json is built just in time. It also contains a bunch of extra metadata about "flaws": a system used to highlight things that should be fixed and that are somewhat easy to automate. So it doesn't cover things like spelling mistakes or code snippets that are actually invalid.

But suppose you want all that raw (rendered) data without any of the flaw detection. Then you can run this command:

▶ BUILD_FLAW_LEVELS="*:ignore" yarn build

It'll take a while (because it produces an index.html file too). But now you have all the index.json files for everything in the newly created ./build/ directory. It should have created a lot of files:

▶ find build -name index.json | wc -l
   11649

If you just want a subtree of files you could have run it like this instead:

▶ BUILD_FOLDERSEARCH=web/javascript BUILD_FLAW_LEVELS="*:ignore" yarn build

Programmatic API access

The programmatic APIs are all about finding the source files. But you can use the sources to turn that into the built files you might need. Or just to get a list of URLs. To get started, create a file called find-files.js in the root:


const { Document } = require("@mdn/yari/content");

console.log(Document.findAll().count);

Now, run it like this:

▶ export CONTENT_ROOT=files

▶ node find-files.js
11649

Other things you can do with that findAll function:


const { Document } = require("@mdn/yari/content");

const found = Document.findAll({
  folderSearch: "web/javascript/reference/statements/f",
});
for (const document of found.iter()) {
  console.log(document.url);
}

Or, suppose you want to actually build each of these that you find:


const { Document } = require("@mdn/yari/content");
const { buildDocument } = require("@mdn/yari/build");

const found = Document.findAll({
  folderSearch: "web/javascript/reference/statements/f",
});

Promise.all([...found.iter()].map((document) => buildDocument(document))).then(
  (built) => {
    for (const { doc } of built) {
      console.log(doc.title.padEnd(20), doc.popularity);
    }
  }
);

That'll output something like this:

▶ node find-files.js
for                  0.0143
for await...of       0.0129
for...in             0.0748
for...of             0.0531
function declaration 0.0088
function*            0.0122

All the HTML content in production-grade mode

In the most basic form, it will start the "preview server" which is tailored towards building just in time and has all those buttons at the top for writers/contributors. If you want the more "production-grade" version, you can't use the copy of @mdn/yari that is "included" in the mdn/content repo. To do this, you need to git clone mdn/yari and install that. Hang on, this is about to get a bit more advanced:

▶ git clone https://github.com/mdn/yari.git
▶ cd yari
▶ yarn install
▶ yarn build:client
▶ yarn build:ssr
▶ CONTENT_ROOT=../files REACT_APP_DISABLE_AUTH=true BUILD_FLAW_LEVELS="*:ignore" yarn build
▶ CONTENT_ROOT=../files node server/static.js

Now, if you go to something like http://localhost:5000/en-US/docs/Web/Guide/ you'll get the same thing as you get on https://developer.mozilla.org but all on your laptop. Should be pretty snappy.

Is it really entirely offline?

No, it leaks a little. For example, there are interactive examples that use an iframe that's hardcoded to https://interactive-examples.mdn.mozilla.net/.

There are also external images, for example. You might get a live sample that refers to sample images on https://mdn.mozillademos.org/files/..., so that'll fail if you're without WiFi in a spaceship.

Conclusion

Making all of MDN Web Docs available offline is, honestly, not a priority. The focus is on A) a secure production build, and B) a good environment for previewing content changes. But all the pieces are there. Search is a little bit tricky, as an example. When you're running it as a preview server you can't do a full-text search on all the content, but you get a useful autocomplete search widget for navigating between different titles. And the full-text search engine is a remote centralized server that you can't take with you offline.

But all the pieces are there. Somehow. It all depends on your use case and what you're willing to "compromise" on.

My contribution to 2021 Earth Day: optimizing some bad favicons on MDN Web Docs

April 23, 2021
0 comments Web development, MDN

tl;dr; The old /favicon.ico was 15KB and due to bad caching was downloaded 24M times in the last month totaling ~350GB of server-to-client traffic which can almost all be avoided.

How to save the planet? Well, do something you can do, they say. Ok, what I can do is to reduce the amount of electricity consumed to browse the web. Mozilla MDN Web Docs, which I work on, has a lot of traffic from all over the world. In the last 30 days, we have roughly 70M pageviews across roughly 15M unique users.
A lot of these people come back to MDN more than once per month so good assets and good asset-caching matter.

I found out that somehow we had failed to optimize the /favicon.ico asset! It was 15,086 bytes when, with Optimage, I was quickly able to turn it down to 1,153 bytes. That's a 13x improvement! Here's what that looks like when zoomed in 4x:

Old and new favicon.ico

The next challenge was the Cache-Control. Our CDN is AWS Cloudfront and it respects whatever Cache-Control headers we set on the assets. Because favicon.ico doesn't have a unique hash in its name, the Cache-Control falls back to the default of 24 hours (max-age=86400) which isn't much. Especially for an asset that almost never changes and besides, if we do decide to change the image (but not the name) we'd have to wait a minimum of 24 hours until it's fully rolled out.

Another thing I did as part of this was to stop assuming the default URL of /favicon.ico and instead control it with the <link rel="shortcut icon" href="/favicon.323ad90c.ico" type="image/x-icon"> HTML link tag. Now I can control the URL of the image that will be downloaded.

Our client-side code is based on create-react-app and it can't optimize the files in the client/public/ directory.
So I wrote a script that post-processes the files in client/build/. In particular, it looks through the index.html template and replaces...


<link rel="shortcut icon" href="/favicon.ico" type="image/x-icon">

...with...


<link rel="shortcut icon" href="/favicon.323ad90c.ico" type="image/x-icon">

Plus, it makes a copy of the file with this hash in it so that the old URL still resolves. But now we can cache it much more aggressively: 1 year in fact.
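
As a rough sketch, such a post-processing script could look something like this (file names, hash length, and location are assumptions, not the actual script):


const fs = require("fs");
const path = require("path");
const crypto = require("crypto");

const buildDir = "client/build"; // assumed output directory
const icoPath = path.join(buildDir, "favicon.ico");
const ico = fs.readFileSync(icoPath);
const hash = crypto.createHash("md5").update(ico).digest("hex").slice(0, 8);

// Copy the file under a hashed name so the old URL still resolves
fs.copyFileSync(icoPath, path.join(buildDir, `favicon.${hash}.ico`));

// Point the <link rel="shortcut icon"> at the hashed name
const indexPath = path.join(buildDir, "index.html");
const html = fs.readFileSync(indexPath, "utf-8");
fs.writeFileSync(indexPath, html.replace("/favicon.ico", `/favicon.${hash}.ico`));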

In summary

Combined, we used to have ~350GB worth of data sent from our CDN(s) to people's browsers every month.
Just changing the image itself would turn that number to ~25GB instead.
The new Cache-Control hopefully means that all those returning users can skip the download on a daily basis which will reduce the amount of network usage even more, but it's hard to predict in advance.
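
(Back-of-the-envelope: 24M downloads × ~15KB is roughly 360GB per month, while 24M × ~1.2KB is roughly 28GB, which lines up with those numbers.)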

How MDN's site-search works

February 26, 2021
3 comments Web development, Django, Python, MDN, Elasticsearch

tl;dr; Periodically, the whole of MDN is built, by our Node code, in a GitHub Action. A Python script bulk-publishes this to Elasticsearch. Our Django server queries the same Elasticsearch via /api/v1/search. The site-search page is a static single-page app that sends XHR requests to the /api/v1/search endpoint. Search results' sort-order is determined by match and "popularity".

Jamstack'ing

The challenge with "Jamstack" websites is with data that is too vast and dynamic that it doesn't make sense to build statically. Search is one of those. For the record, as of Feb 2021, MDN consists of 11,619 documents (aka. articles) in English. Roughly another 40,000 translated documents. In English alone, there are 5.3 million words. So to build a good search experience we need to, as a static site build side-effect, index all of this in a full-text search database. And Elasticsearch is one such database and it's good. In particular, Elasticsearch is something MDN is already quite familiar with because it's what was used from within the Django app when MDN was a wiki.

Note: MDN gets about 20k site-searches per day from within the site.

Build

Diagram

When we build the whole site, it's a script that basically loops over all the raw content, applies macros and fixes, dumps one index.html (via React server-side rendering) and one index.json. The index.json contains all the fully rendered text (as HTML!) in blocks of "prose". It looks something like this:


{
  "doc": {
    "title": "DOCUMENT TITLE",
    "summary": "DOCUMENT SUMMARY",
    "body": [
      {
        "type": "prose",
        "value": {
          "id": "introduction",
          "title": "INTRODUCTION",
          "content": "<p>FIRST BLOCK OF TEXTS</p>"
        }
      },
      ...
    ],
    "popularity": 0.12345,
    ...
  }
}

You can see one here: /en-US/docs/Web/index.json

Indexing

Next, after all the index.json files have been produced, a Python script takes over. It traverses all the index.json files and, based on that structure, figures out the title, summary, and the whole body (as HTML).

Next up, before sending this into the bulk-publisher in Elasticsearch it strips the HTML. It's a bit more than just turning <p>Some <em>cool</em> text.</p> to Some cool text. because it also cleans up things like <div class="hidden"> and certain <div class="notecard warning"> blocks.

One thing worth noting is that this whole thing runs roughly every 24 hours and then it builds everything. But what if, between two runs, a certain page has been removed (or moved), how do you remove what was previously added to Elasticsearch? The solution is simple: it deletes and re-creates the index from scratch every day. The whole bulk-publish takes a while so right after the index has been deleted, the searches won't be that great. Someone could be unlucky in that they're searching MDN a couple of seconds after the index was deleted and now waiting for it to build up again.
It's an unfortunate reality but it's a risk worth taking for the sake of simplicity. Also, most people are searching for things in English and specifically within the Web/ tree, so the bulk-publishing is done in such a way that the most popular content is published first and the rest is done after. Here's what the build output logs:

Found 50,461 (potential) documents to index
Deleting any possible existing index and creating a new one called mdn_docs
Took 3m 35s to index 50,362 documents. Approximately 234.1 docs/second
Counts per priority prefixes:
    en-us/docs/web                 9,056
    *rest*                         41,306

So, yes, for 3m 35s there's stuff missing from the index and some unlucky few will get fewer search results than they should. But we can optimize this in the future.

Searching

The way you connect to Elasticsearch is simply by a URL. It looks something like this:

https://USER:PASSWD@HASH.us-west-2.aws.found.io:9243

It's an Elasticsearch cluster managed by Elastic running inside AWS. Our job is to make sure that we put the exact same URL in our GitHub Action ("the writer") as we put into our Django server ("the reader").
In fact, we have 3 Elastic clusters: Prod, Stage, Dev.
And we have 2 Django servers: Prod, Stage.
So we just need to carefully make sure the secrets are set correctly to match the right environment.

Now, in the Django server, we just need to convert a request like GET /api/v1/search?q=foo&locale=fr (for example) to a query to send to Elasticsearch. We have a simple Django view function that validates the query string parameters, does some rate-limiting, creates a query (using elasticsearch-dsl) and packages the Elasticsearch results back to JSON.

How we make that query is important. In here lies the most important feature of the search: how it sorts results.

In one simple explanation, the sort order is a combination of popularity and "matchness". The assumption is that most people want the popular content. I.e. they search for foreach and mean to go to /en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/forEach not /en-US/docs/Web/API/NodeList/forEach, both of which contain forEach in the title. The "popularity" is based on Google Analytics pageviews which we download periodically and normalize into a floating-point number between 0 and 1. At the time of writing, the scoring function does something like this:

rank = doc.popularity * 10 + search.score

This seems to produce pretty reasonable results.

But there's more to the "matchness" too. Elasticsearch has its own API for defining boosting and the way we apply it is:

  • match phrase in the title: Boost = 10.0
  • match phrase in the body: Boost = 5.0
  • match in title: Boost = 2.0
  • match in body: Boost = 1.0

This is then applied on top of whatever else Elasticsearch does, such as "Term Frequency" and "Inverse Document Frequency" (tf and idf). This article is a helpful introduction.
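
For illustration, the boosted part of the query body sent to Elasticsearch might look roughly like this (the field names and example phrase are assumptions, not the exact query we generate):


{
  "query": {
    "bool": {
      "should": [
        { "match_phrase": { "title": { "query": "array foreach", "boost": 10.0 } } },
        { "match_phrase": { "body": { "query": "array foreach", "boost": 5.0 } } },
        { "match": { "title": { "query": "array foreach", "boost": 2.0 } } },
        { "match": { "body": { "query": "array foreach", "boost": 1.0 } } }
      ]
    }
  }
}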

We're most likely not done with this. There's probably a lot more we can do to tune this myriad of knobs and sliders to get the best possible ranking of documents that match.

Web UI

The last piece of the puzzle is how we display all of this to the user. The way it works is that developer.mozilla.org/$locale/search returns a static page that is blank. As soon as the page has loaded, it lazy-loads JavaScript that can actually issue the XHR request to get and display search results. The code looks something like this:


function SearchResults() {
  const [searchParams] = useSearchParams();
  const sp = createSearchParams(searchParams);
  // add defaults and stuff here
  const fetchURL = `/api/v1/search?${sp.toString()}`;

  const { data, error } = useSWR(fetchURL, async (url) => {
    const response = await fetch(url);
    // various checks on the response.statusCode here
    return await response.json();
  });

  // render 'data' or 'error' accordingly here
}

A lot of interesting details are omitted from this code snippet. You have to check it out for yourself to get a more up-to-date insight into how it actually works. But basically, the window.location (and pushState) query string drives the fetch() call and then all the component has to do is display the search results with some highlighting.

The /api/v1/search endpoint also runs a suggestion query as part of the main search query. This extracts out interesting alternative search queries. These are filtered and scored and we issue "sub-queries" just to get a count for each. Now we can do one of those "Did you mean...". For example: search for intersections.

In conclusion

There are a lot of interesting, important, and careful details that are glossed over here in this blog post. It's a constantly evolving system and we're constantly trying to improve and perfect the system in a way that it fits what users expect.

A lot of people reach MDN via a Google search (e.g. mdn array foreach) but despite that, nearly 5% of all traffic on MDN is the site-search functionality. The /$locale/search?... endpoint is the most frequently viewed page of all of MDN. And having a good, reliable search engine is nevertheless important. Owning and controlling the whole pipeline allows us to do specific things that are unique to MDN that other websites don't need. For example, we index a lot of raw HTML (e.g. <video>) and we have code snippets that need to be searchable.

Hopefully, the MDN site-search will elevate from being known as very limited to something that can genuinely help people get to the exact page better than Google can. Yes, it's worth aiming high!

downloadAndResize - Firebase Cloud Function to serve thumbnails

December 8, 2020
0 comments Web development, That's Groce!, Node, JavaScript

UPDATE 2020-12-30

With sharp after you've loaded the image (sharp(contents)) make sure to add .rotate() so it automatically rotates the image correctly based on EXIF data.

UPDATE 2020-12-13

I discovered that sharp is much better than jimp. It's an order of magnitude faster. And it's actually what the Firebase Resize Images extension uses. Code updated below.

I have a Firebase app that uses the Firebase Cloud Storage to upload images. But now I need thumbnails. So I wrote a cloud function that can generate thumbnails on-the-fly.

There's a Firebase Extension called Resize Images which is nicely done but I just don't like that strategy. At least not for my app. Firstly, I'm forced to pick the right size(s) for thumbnails and I can't really go back on that. If I pick 50x50, 1000x1000 as my sizes, and depend on that in the app, and then realize that I actually want it to be 150x150, 500x500 then I'm quite stuck.

Instead, I want to pick any thumbnail sizes dynamically. One option would be a third-party service like imgix, CloudImage, or Cloudinary but these are not free and besides, I'll need to figure out how to upload the images there. There are other Open Source options like picfit which you install yourself but that's not an attractive option with its implicit complexity for a side-project. I want to stay in the Google Cloud. Another option would be this AppEngine function by Albert Chen which looks nice but then I need to figure out the access control between that and my Firebase Cloud Storage. Also, added complexity.

As part of your app initialization in Firebase, it automatically has access to the appropriate storage bucket. If I do:


const storageRef = storage.ref();
uploadTask = storageRef.child('images/photo.jpg').put(file, metadata);
...

...in the Firebase app, it means I can do:


 admin
      .storage()
      .bucket()
      .file('images/photo.jpg')
      .download()
      .then((downloadData) => {
        const contents = downloadData[0];

...in my cloud function and it just works!

And to do the resizing I originally used Jimp, which is TypeScript aware and easy to use, but have since switched to sharp (see the updates above). Now, remember this isn't perfect or mature but it works. It solves my needs and perhaps it will solve your needs too. Or, at least it might be a good start for your application that you can build on. Here's the function (in functions/src/index.ts):


interface StorageErrorType extends Error {
  code: number;
}

const codeToErrorMap: Map<number, string> = new Map();
codeToErrorMap.set(404, "not found");
codeToErrorMap.set(403, "forbidden");
codeToErrorMap.set(401, "unauthenticated");

export const downloadAndResize = functions
  .runWith({ memory: "1GB" })
  .https.onRequest(async (req, res) => {
    const imagePath = req.query.image || "";
    if (!imagePath) {
      res.status(400).send("missing 'image'");
      return;
    }
    if (typeof imagePath !== "string") {
      res.status(400).send("can only be one 'image'");
      return;
    }
    const widthString = req.query.width || "";
    if (!widthString || typeof widthString !== "string") {
      res.status(400).send("missing 'width' or not a single string");
      return;
    }
    const extension = imagePath.toLowerCase().split(".").slice(-1)[0];
    if (!["jpg", "png", "jpeg"].includes(extension)) {
      res.status(400).send(`invalid extension (${extension})`);
      return;
    }
    let width = 0;
    try {
      width = parseInt(widthString);
      if (width < 0) {
        throw new Error("too small");
      }
      if (width > 1000) {
        throw new Error("too big");
      }
    } catch (error) {
      res.status(400).send(`width invalid (${error.toString()})`);
      return;
    }

    admin
      .storage()
      .bucket()
      .file(imagePath)
      .download()
      .then((downloadData) => {
        const contents = downloadData[0];
        console.log(
          `downloadAndResize (${JSON.stringify({
            width,
            imagePath,
          })}) downloadData.length=${humanFileSize(contents.length)}\n`
        );

        const contentType = extension === "png" ? "image/png" : "image/jpeg";
        sharp(contents)
          .rotate()
          .resize(width)
          .toBuffer()
          .then((buffer) => {
            res.setHeader("content-type", contentType);
            // TODO increase some day
            res.setHeader("cache-control", `public,max-age=${60 * 60 * 24}`);
            res.send(buffer);
          })
          .catch((error: Error) => {
            console.error(`Error reading in with sharp: ${error.toString()}`);
            res
              .status(500)
              .send(`Unable to read in image: ${error.toString()}`);
          });
      })
      .catch((error: StorageErrorType) => {
        if (error.code && codeToErrorMap.has(error.code)) {
          res.status(error.code).send(codeToErrorMap.get(error.code));
        } else {
          res.status(500).send(error.message);
        }
      });
  });

function humanFileSize(size: number): string {
  if (size < 1024) return `${size} B`;
  const i = Math.floor(Math.log(size) / Math.log(1024));
  const num = size / Math.pow(1024, i);
  const round = Math.round(num);
  const numStr: string | number =
    round < 10 ? num.toFixed(2) : round < 100 ? num.toFixed(1) : round;
  return `${numStr} ${"KMGTPEZY"[i - 1]}B`;
}

Here's what a sample URL looks like.

I hope it helps!

I think the next thing for me to consider is to extend this so it uploads the thumbnail back and uses the getDownloadURL() of the created thumbnail as a redirect instead. It would be transparent to the app but saves on repeated views. That'd be a good optimization.
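
A rough sketch of that idea might look something like this (paths, expiry, and option names are assumptions, not an actual implementation):


async function saveThumbnailAndRedirect(buffer, imagePath, width, contentType, res) {
  const thumbPath = `thumbnails/${width}/${imagePath}`; // hypothetical destination
  const thumbFile = admin.storage().bucket().file(thumbPath);
  await thumbFile.save(buffer, { metadata: { contentType }, resumable: false });
  // A signed URL stands in for the client-side getDownloadURL() here
  const [signedURL] = await thumbFile.getSignedUrl({
    action: "read",
    expires: Date.now() + 1000 * 60 * 60 * 24, // 24 hours
  });
  res.redirect(302, signedURL);
}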

Popularity contest for your grocery list

November 21, 2020
0 comments Web development, Mobile, That's Groce!

tl;dr; Up until recently, when you started to type a new entry in your That's Groce shopping list, the suggestions that would appear weren't sorted intelligently. Now they are. They're sorted by popularity.

The whole point with the suggestions that appear is to make it easier for you to not have to type the rest. The first factor that decides which should appear is simply based on what you've typed so far. If you started typing ch we can suggest:

  • Cherry tomatoes
  • Chocolate chips
  • Mini chocolate chips
  • Rainbow chard
  • Goat cheese
  • Chickpeas
  • etc.

They all contain ch in some form (starting of words only). But space is limited and you can't show every suggestion. So, if you're going to cap it to only show, say, 4 suggestions; which ones should you show first?
I think the solution is to do it by frequency. I.e. items you often put onto the list.

How to calculate the frequency

The way That's Groce now does it is that it knows the exact times a certain item was added to the list. It then takes that list and applies the following algorithm:

For each item...

  1. Discard the dates older than 3 months
  2. Discard any duplicates from clusters (e.g. you accidentally added it and removed it and added it again within minutes)
  3. Calculate the distance (in seconds) between each add
  4. From the last 4 times it was added, take the median value of the distances between them

So the frequency becomes a number of seconds. It should feel somewhat realistic. In my family, it actually checks out. We buy bananas every week but sometimes slightly more often than that and in our case, the number comes to ~6 days.
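
A sketch of that calculation (not the actual That's Groce code; the cluster de-duplication step is left out):


function frequencyInSeconds(addedDates) {
  const THREE_MONTHS = 60 * 60 * 24 * 90;
  const now = Date.now() / 1000;
  // Discard dates older than 3 months and sort oldest-to-newest
  const recent = addedDates
    .map((d) => d.getTime() / 1000)
    .filter((t) => now - t < THREE_MONTHS)
    .sort((a, b) => a - b);
  // Distances (in seconds) between the last 4 adds
  const distances = [];
  for (let i = Math.max(1, recent.length - 3); i < recent.length; i++) {
    distances.push(recent[i] - recent[i - 1]);
  }
  if (!distances.length) return null;
  // Median of those distances
  distances.sort((a, b) => a - b);
  return distances[Math.floor(distances.length / 2)];
}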

The results

Before sorting by popularity

After sorting by popularity

Great! The chances of appreciating and picking one of the suggestions is greater if it's more likely to be what you were looking for. And things that have been added frequently in the past are more likely to be added again.

How to debug this

There's now a new page called the "Popularity contest". You get to it from the "List options" button in the upper right-hand corner. On its own, it's fairly "useless" because it just lists the items. But it's nice to get a feeling for what your family most frequently adds to the list. A lot more can probably be done to this page but for now, it really helps to back up the understanding of how the suggestions are sorted when you're adding new entries.

Popularity contest

If you look carefully at my screenshot here you'll notice two little bugs. There are actually two different entries for "Lemon 🍋" and that was from the early days when that could happen.
Also, another bug is that there's one entry called "Bananas" and one called "Bananas 🍌" which is also something that's being fixed in the latest release. My own family's list predates those fixes.

Hope it helps!
