The best grep tool in the world; ripgrep

June 19, 2018
3 comments Linux, Web development, macOS

tl;dr; ripgrep (a.k.a. rg) is the best grep tool available today.

ripgrep is a tool for searching files. Its killer feature is that it's fast. Like, really really fast. Faster than sift, git grep, ack, regular grep etc.

If you don't believe me, either read this detailed blog post from its author or just jump straight to the conclusion:

  • For both searching single files and huge directories of files, no other tool obviously stands above ripgrep in either performance or correctness.

  • ripgrep is the only tool with proper Unicode support that doesn’t make you pay dearly for it.

  • Tools that search many files at once are generally slower if they use memory maps, not faster.

Benchmark

I used to use git grep whenever I was inside a git repo and sift for everything else. That alone was a huge step up from regular grep. Granted, almost all my git repos are small enough that git grep usually finishes faster than I can perceive anyway. But with ripgrep I can just add --no-ignore-vcs and it searches the files listed in .gitignore too. That's useful when you want to search your own source as well as the files in node_modules.

Installation is easy. I installed it with brew install ripgrep and the best way to learn how to use it is rg --help. It has a lot of cool features that are well worth learning. It's written in Rust and so far I haven't had a single crash, ever. The ability to search by file type takes some getting used to (tip! use rg --type-list) and remember that you can pipe rg output to another rg. For example, to search for all lines that contain both query and string you can use rg query | rg string.

How to unset aliases set by Oh My Zsh

June 14, 2018
5 comments Linux, macOS

I use Oh My Zsh and I highly recommend it. However, it sets some aliases that I don't want. In particular, there's a plugin called git.plugin.zsh (located in ~/.oh-my-zsh/plugins/git/git.plugin.zsh) that interferes with a global binary I have in $PATH. So when I start a shell, the executable gg becomes...:

▶ which gg
gg: aliased to git gui citool

That overrides /usr/local/bin/gg, which is the one I want to execute when I type gg. To remove that alias I can run...:

unalias gg

▶ which gg
/usr/local/bin/gg

To override it "permanently", I added this to the end of ~/.zshrc:


# This unsets the gg alias set by ~/.oh-my-zsh/plugins/git/git.plugin.zsh
# so that my /usr/local/bin/gg works instead
unalias gg

Now whenever I start a new terminal, it defaults to the gg in /usr/local/bin instead.

How to NOT start two servers on the same port

June 11, 2018
2 comments Linux, Web development

First of all, you can't start two servers on the same port. Ultimately it will fail. However, you might not get notified of this until it's too late. For example, if you do this:


# In one terminal
$ cd elasticsearch-6.1.0
$ ./bin/elasticsearch
...
$ curl localhost:9200
...
"version" : {
    "number" : "6.1.0",
...

# In *another* terminal
$ cd elasticsearch-6.2.4
$ ./bin/elasticsearch
...
$ curl localhost:9200
...
"version" : {
    "number" : "6.1.0",
...

In other words, what happened to elasticsearch-6.2.4/bin/elasticsearch? It actually started on port :9201 instead. That's rather scary, because as you jump between projects in different tabs, you might not notice that you already have an Elasticsearch running with docker-compose somewhere.

To remedy this I use this curl one-liner:


$ curl -s localhost:9200 > /dev/null && echo "Already running!" && exit || ./bin/elasticsearch

Now if you try to start a server on a used port it will exit early.

To wrap this up in a script, take this:


#!/bin/bash

set -eo pipefail

# First argument: the host:port to check. The rest: the command to run.
hostandport=$1
shift
# If something already responds on that host:port, bail out.
# Otherwise, replace this process with the given command.
curl -s "$hostandport" >/dev/null && \
  echo "Already running on $hostandport" && \
  exit 1 || exec "$@"

...make it executable, call it unlessalready.sh, and now you can do this:


$ unlessalready.sh localhost:9200 ./bin/elasticsearch

GeneratorExit - How to clean up after the last yield in Python

June 7, 2018
7 comments Python

tl;dr; Use except GeneratorExit if your Python generator needs to know the consumer broke out.

Suppose you have a generator that yields things out. After each yield you want to execute some code that does something like logging or cleaning up. Here's one such trivialized example:

The Problem


def pump():
    numbers = [1, 2, 3, 4]
    for number in numbers:
        yield number
        print("Have sent", number)
    print("Last number was sent")


for number in pump():
    print("Got", number)

print("All done")

The output is, as expected:

Got 1
Have sent 1
Got 2
Have sent 2
Got 3
Have sent 3
Got 4
Have sent 4
Last number was sent
All done

In this scenario, the consumer of the generator (the for number in pump() loop in this example) gets everything the generator generates, so after the last yield the generator is free to do any last-minute activities that might be important, such as closing a socket or updating a database.

Suppose the consumer is getting a bit "impatient" and breaks out as soon as it has what it needs.


def pump():
    numbers = [1, 2, 3, 4]
    for number in numbers:
        yield number
        print("Have sent", number)
    print("Last number was sent")


for number in pump():
    print("Got", number)
    # THESE TWO NEW LINES
    if number >= 2:
        break

print("All done")

What do you think the output is now? I'll tell you:

Got 1
Have sent 1
Got 2
All done

In other words, the potentially important lines print("Have sent", number) and print("Last number was sent") never get executed! The generator's author could tell the consumer (through documentation) "Don't break! If you don't want any more, raise a StopIteration instead". But that's not a feasible requirement.

The Solution

But! There is a better solution and that's to catch GeneratorExit exceptions.


def pump():
    numbers = [1, 2, 3, 4]
    try:
        for number in numbers:
            yield number
            print("Have sent", number)
    except GeneratorExit:
        print("Exception!")
    print("Last number was sent")


for number in pump():
    print("Got", number)
    if number == 2:
        break
print("All done")

Now you get what you might want:

Got 1
Have sent 1
Got 2
Exception!
Last number was sent
All done

Next Level Stuff

Note that in the last example's output it never prints Have sent 2, even though the generator really did send that number. If that's an important piece of information, you can reach it inside the except GeneratorExit block. For example:


def pump():
    numbers = [1, 2, 3, 4]
    try:
        for number in numbers:
            yield number
            print("Have sent", number)
    except GeneratorExit:
        print("Have sent*", number)
    print("Last number was sent")


for number in pump():
    print("Got", number)
    if number == 2:
        break
print("All done")

And the output is:

Got 1
Have sent 1
Got 2
Have sent* 2
Last number was sent
All done

The * is just there in case we want to distinguish whether a break happened or not. It depends on your application.
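
To connect this back to the earlier motivation (closing a socket or updating a database), here's a minimal sketch of the same pattern guarding a real resource. The SQLite table is made up purely for illustration:


import sqlite3


def rows():
    # Stream rows out of a database and make sure the connection
    # gets closed even if the consumer breaks out of the loop.
    conn = sqlite3.connect(':memory:')
    conn.execute('CREATE TABLE numbers (value INTEGER)')
    conn.executemany('INSERT INTO numbers VALUES (?)', [(1,), (2,), (3,), (4,)])
    try:
        for (value,) in conn.execute('SELECT value FROM numbers ORDER BY value'):
            yield value
    except GeneratorExit:
        print("Consumer stopped early")
    conn.close()
    print("Connection closed")


for number in rows():
    print("Got", number)
    if number >= 2:
        break

The output becomes:

Got 1
Got 2
Consumer stopped early
Connection closed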

Writing a custom Datadog reporter using the Python API

May 21, 2018
2 comments Python

Datadog is an awesome software-as-a-service where you can aggregate and visualize statsd metrics sent from an application. For visualizing timings you create a time series graph. It can look something like this:

Time series

This time series looks sane because the timings are recorded very, very frequently. But what if the event happens very rarely, like once a day? Then the graph doesn't look very useful. See this example:

"Rare time" series

Not only does it happen rarely, the number of milliseconds is quite hard to parse. I.e. what's 2.6 million milliseconds? (The answer is approximately 45 minutes.) So to solve that I used the Datadog API. Now I can fetch every single point of the metric, in milliseconds, and make a little data table with human-readable dates and times.

The end result looks something like this:

SCOPE: ENV:PROD
+-------------------------+------------------------+-----------------------+
|          WHEN           |        TIME AGO        |       TIME TOOK       |
+=========================+========================+=======================+
| Mon 2018-05-21T17:00:00 | 2 hours 43 minutes ago | 23 minutes 32 seconds |
+-------------------------+------------------------+-----------------------+
| Sun 2018-05-20T17:00:00 | 1 day 2 hours ago      | 20 seconds            |
+-------------------------+------------------------+-----------------------+
| Sat 2018-05-19T17:00:00 | 2 days 2 hours ago     | 20 seconds            |
+-------------------------+------------------------+-----------------------+
| Fri 2018-05-18T17:00:00 | 3 days 2 hours ago     | 2 minutes 24 seconds  |
+-------------------------+------------------------+-----------------------+
| Wed 2018-05-16T20:00:00 | 4 days 23 hours ago    | 38 minutes 38 seconds |
+-------------------------+------------------------+-----------------------+

It's not gorgeous and there are a lot of caveats but it's at least really easy to read. See the metrics.py code here.

I don't think you can run this code since you don't have the same (hardcoded) metrics but hopefully it can serve as an example to whet your appetite.
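
If you want to roll your own, here's a minimal sketch of the idea. It assumes the official datadog Python package and a completely made-up metric name and scope; the real code is in the metrics.py linked above:


import datetime
import time

from datadog import initialize, api

initialize(api_key='...', app_key='...')

# Hypothetical timing metric and scope; substitute your own.
QUERY = 'max:myapp.big_job.time{env:prod}'

now = int(time.time())
result = api.Metric.query(start=now - 7 * 86400, end=now, query=QUERY)

for series in result.get('series', []):
    for timestamp, value in series['pointlist']:
        if value is None:
            continue
        # The pointlist is [timestamp, value] pairs; timestamps are epoch
        # milliseconds, and the values here happen to be milliseconds too
        # since that's what this metric records.
        when = datetime.datetime.utcfromtimestamp(timestamp / 1000)
        took = datetime.timedelta(milliseconds=value)
        print(when.strftime('%a %Y-%m-%dT%H:%M:%S'), took)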

What I'm going to do next, if I have time, is to run this as a Flask app instead that outputs an HTML table on a Heroku app or something.

To CDN assets or just HTTP/2

May 17, 2018
3 comments Web development, JavaScript, Web Performance

tl;dr; I see little benefit in using a CDN at this point.

I took two random pages here on my blog. One and Another. It doesn't matter what they say, but it's important to note that they're extremely similar. No big pictures. Both have 1 banner ad each. Both are served with HTTP/2. Neither has any blocking linked assets, i.e. there is no blocking <link rel="stylesheet" href="styles.css"> and the script tags are either async or defer. Both pages reference one little .png that is not deliberately lazy loaded. That's the baseline.

The HTML document, in both URLs, is served over HTTP/2, but it references the lazy-loaded .css and (a bunch of) .js files via a CDN. In other words, it looks like this:


▶ curl -v https://www.peterbe.com/plog/hashin-0.7.0
...
> GET /plog/hashin-0.7.0 HTTP/2
...
< HTTP/2 200
...
<
...
<link rel="preload" href="/static/css/base.min.e8df96d84663.css" 
 as="style" onload="this.onload=null;this.rel='stylesheet'">
...
<script defer src="https://cdn-2916.kxcdn.com/static/js/blogitem-post.min.f6c0be691e73.js"></script>
...

So, cdn-2916.kxcdn.com is an awesome CDN, but for a first-time visitor it requires a DNS lookup and the creation of a new TCP connection that can then be kept alive. The alternative is to not put any of the .png, .css or .js assets on a CDN. Basically, instead of <script src="https://mycdn.example.com/foo.js">, just do <script src="/foo.js">.

CDNs are really important since latency is a killer for web performance, and remember that "Use a CDN" is rule number 2 in the, now dated, YSlow ruleset. However, we're entering an era where HTTP/2 is becoming more and more available in mainstream browsers (hint: nearly 100% of visitors to my site have HTTP/2 support). Buuuuuut, the latency (DNS, connection and SSL negotiation) doesn't matter that much if you have already paid those costs to get to the origin web server (https://www.peterbe.com in this example).

The Experiment

What I'm interested in is whether there is a way to gauge/measure when it's best to use a CDN and when it's best to use the origin web server to serve all assets. My friend @stereobooster suggested: "Webpagetest.org is all you need".

Ok. Let's measure that then with Webpagetest.org and see what we can learn.

Here's a visual comparison of the two URLs when they both use CDN for the static assets.

  • They load pretty equally.
  • The Waterfall View looks almost identical.
  • Confirmed, there are no render blocking resources as it starts to paint already at about 1.5s.

Here's a visual comparison of one page using a CDN for static assets and one that does not.

  • They load pretty equally (diff by 0.1s).
  • The Waterfall View looks very different.
  • The second one does not have a second "dns - connection - ssl - download" bar.
  • Almost all the .js are downloaded at about 1.8s when there's no CDN.
  • Almost all the .js are downloaded at about 3.0s when using a CDN.
  • Use the little "Waterfall opacity" widget to slide left and right to see the difference.

You can see their webpagetests individually here and here.

Assets over CDN
Two connection prices paid. Downloads individual assets faster but ultimately takes a longer time.

One HTTP/2 connection only
Only 1 connection price paid. ALL assets downloaded sooner, albeit individually slower.

Analysis

My site is served from a highly optimized Nginx server in New York, USA. The two Webpagetest visual comparisons above are both done from Virginia, USA. But the killer feature of a CDN is that latency can be so much better thanks to the CDN's edge locations. In particular, KeyCDN has an edge location in Stockholm, Sweden. So what happens when you run the URLs from a Webpagetest machine in Stockholm, Sweden?

They both start to render at the same time (expected, since the HTML document is still in New York, USA) but the (rough) total time to download all the .css and .js is about 2.6 seconds with a CDN and 1.9 seconds without. In other words, despite the CDN being geographically so much closer, the static assets are still available sooner without a CDN.

It's pretty clear at this point that it's not a good idea to use a CDN for static assets, even if they're not critical. The "First Meaningful Paint" and "Time To Interactive" are about the same, but since HTTP/2 can download all the .js files sooner, their JavaScript can start being useful sooner too.

What Else

For my site, it's easiest to host the whole thing on an Nginx server on a DigitalOcean machine. It's easy to invalidate its cache (just delete the file from disk and wait for Django to regenerate it). Another advantage of using plain Nginx is that I serve the HTML with Cache-Control headers and then do some post-processing of the .html file, and since Nginx is disk-based, I don't have to update a CDN.

An alternative would be to put the whole site behind a CDN. That way, the initial HTML document can be served from a CDN edge location, using HTTP/2, and the rest of the static assets can be sent on the same HTTP/2 connection. But this means that every single dynamic URL (e.g. HTTP POSTs or some per-user XHR requests) has to go via the CDN rather than going straight to the Nginx that is connected to the Django web server.

Last but not least, even though my Nginx server is on a decent machine and pretty well tuned, I very much doubt it's as fast and powerful as a KeyCDN or CloudFront or Akamai or Google Cloud CDN. Those servers are beasts! Mind you, the DNS + connection + SSL negotiation, when requesting from Stockholm, Sweden was about 0.75s to my Nginx in New York, USA. For the KeyCDN edge location the DNS + connection + SSL negotiation was about 0.52s. So not a huge difference actually.

Another important aspect is Service Workers. Perhaps I just don't know how to hack it, but it doesn't work when you use different domains for the service worker .js file and the URIs it references.

In conclusion; I see little benefit in using a CDN at this point. Perhaps for larger assets like videos, GIFs or high-res images. HTTP/2 changes one of the major web performance rules. End of an era(?)

Rust > Go > Python ...to parse millions of dates in CSV files

May 15, 2018
10 comments Python

It all started so innocently. The task at hand was to download an inventory of every single file ever uploaded to a public AWS S3 bucket. The way that works is that you download the root manifest.json. It references a boatload of .csv.gz files. So to go through every single file uploaded to the bucket, you read the manifest.json, then download each and every .csv.gz file. Now you can parse these and do something with each row. An example row in one of the CSV files looks like this:

"net-mozaws-prod-delivery-firefox","pub/firefox/nightly/latest-mozilla-central-l10n/linux-i686/xpi/firefox-57.0a1.gn.langpack.xpi","474348","2017-09-21T13:08:25.000Z","0af6ce0b494d1f380a8b1cd6f42e36cb"

In Mozilla Buildhub, what we do is periodically run this, in Python (with asyncio), to spot if there are any files in the S3 bucket that we have potentially missed recording in a different database. But of the 150 or so .csv.gz files, most are getting old, and in this particular application we can be certain they're unlikely to be relevant and can be ignored. To come to that conclusion you parse each .csv.gz file, parse each row of the CSV, and extract the last_modified value (e.g. 2017-09-21T13:08:25.000Z) into a datetime.datetime instance. Now you can quickly decide if it's too old or recent enough to go through the other various checks.

So, the task is to parse 150 .csv.gz files totalling about 2.5GB with roughly 75 million rows. Basically parsing the date strings into datetime.datetime objects 75 million times.

Python

What this script does is open, synchronously, each and every .csv.gz file, parse each record's date, and compare it to a constant ("Is this record older than 6 months or not?").

This is an extraction of a bigger system to just look at the performance of parsing all those .csv.gz files to figure out how many are old and how many are within 6 months. Code looks like this:


import datetime
import gzip
import csv
from glob import glob

cutoff = datetime.datetime.now() - datetime.timedelta(days=6 * 30)

def count(fn):
    count = total = 0
    with gzip.open(fn, 'rt') as f:
        reader = csv.reader(f)
        for line in reader:
            lastmodified = datetime.datetime.strptime(
                line[3],
                '%Y-%m-%dT%H:%M:%S.%fZ'
            )
            if lastmodified > cutoff:
                count += 1
            total += 1

    return total, count


def run():
    total = recent = 0
    for fn in glob('*.csv.gz'):
        if len(fn) == 39:  # filter out other junk files that don't fit the pattern
            print(fn)
            t, c = count(fn)
            total += t
            recent += c

    print(total)
    print(recent)
    print('{:.1f}%'.format(100 * recent / total))


run()

Code as a gist here.

Only problem. This is horribly slow.

To reproduce this, I took a sample of 38 of these .csv.gz files and ran the above code with CPython 3.6.5. It took 3m44s on my 2017 MacBook Pro.

Let's try a couple of low-hanging fruit ideas:

  • PyPy 5.10.1 (based on 3.5.1): 2m30s
  • Using asyncio on Python 3.6.5: 3m37s
  • Using a thread pool on Python 3.6.5: 7m05s
  • Using a process pool on Python 3.6.5: 1m5s

Hmm... Clearly this is CPU bound and using multiple processes is the ticket. But what's really holding us back is the date parsing. From the "Fastest Python datetime parser" benchmark the trick is to use ciso8601. Alright, let's try that. Diff:

6c6,10
< cutoff = datetime.datetime.now() - datetime.timedelta(days=6 * 30)
---
> import ciso8601
>
> cutoff = datetime.datetime.utcnow().replace(
>     tzinfo=datetime.timezone.utc
> ) - datetime.timedelta(days=6 * 30)
14,17c18
<             lastmodified = datetime.datetime.strptime(
<                 line[3],
<                 '%Y-%m-%dT%H:%M:%S.%fZ'
<             )
---
>             lastmodified = ciso8601.parse_datetime(line[3])

Version with ciso8601 here.

So what originally took 3 and a half minutes now takes 18 seconds (that's ciso8601 combined with the process pool). I suspect that's about as good as it gets with Python. But it's not too shabby: parsing 12,980,990 date strings in 18 seconds. Not bad.
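
For reference, the winning Python combination (ciso8601 plus a process pool) can be sketched roughly like this. It's not the exact script used, just the shape of it:


import concurrent.futures
import csv
import datetime
import gzip
from glob import glob

import ciso8601

cutoff = datetime.datetime.utcnow().replace(
    tzinfo=datetime.timezone.utc
) - datetime.timedelta(days=6 * 30)


def count(fn):
    recent = total = 0
    with gzip.open(fn, 'rt') as f:
        for line in csv.reader(f):
            # ciso8601 handles the '2017-09-21T13:08:25.000Z' format directly.
            if ciso8601.parse_datetime(line[3]) > cutoff:
                recent += 1
            total += 1
    return total, recent


def run():
    filenames = [fn for fn in glob('*.csv.gz') if len(fn) == 39]
    total = recent = 0
    # Each worker process takes whole files; the results are summed up here.
    with concurrent.futures.ProcessPoolExecutor() as executor:
        for t, c in executor.map(count, filenames):
            total += t
            recent += c
    print(total)
    print(recent)
    print('{:.1f}%'.format(100 * recent / total))


if __name__ == '__main__':
    run()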

Go

My Go is getting rusty but it's quite easy to write one of these so I couldn't resist the temptation:


package main

import (
    "compress/gzip"
    "encoding/csv"
    "fmt"
    "log"
    "os"
    "path/filepath"
    "strconv"
    "time"
)

func count(fn string, index int) (int, int) {
    fmt.Printf("%d %v\n", index, fn)
    f, err := os.Open(fn)
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()
    gr, err := gzip.NewReader(f)
    if err != nil {
        log.Fatal(err)
    }
    defer gr.Close()

    cr := csv.NewReader(gr)
    rec, err := cr.ReadAll()
    if err != nil {
        log.Fatal(err)
    }
    var count = 0
    var total = 0
    layout := "2006-01-02T15:04:05.000Z"

    minimum, err := time.Parse(layout, "2017-11-02T00:00:00.000Z")
    if err != nil {
        log.Fatal(err)
    }

    for _, v := range rec {
        last_modified := v[3]

        t, err := time.Parse(layout, last_modified)
        if err != nil {
            log.Fatal(err)
        }
        if t.After(minimum) {
            count += 1
        }
        total += 1
    }
    return total, count
}

func FloatToString(input_num float64) string {
    // to convert a float number to a string
    return strconv.FormatFloat(input_num, 'f', 2, 64)
}

func main() {
    var pattern = "*.csv.gz"

    files, err := filepath.Glob(pattern)
    if err != nil {
        panic(err)
    }
    total := int(0)
    recent := int(0)
    for i, fn := range files {
        if len(fn) == 39 {
            // fmt.Println(fn)
            t, c := count(fn, i)
            total += t
            recent += c
        }
    }
    fmt.Println(total)
    fmt.Println(recent)
    ratio := float64(recent) / float64(total)
    fmt.Println(FloatToString(100.0 * ratio))
}

Code as as gist here.

Using go1.10.1 I run go build main.go and then time ./main. This takes just 20s, which is about the time it took the Python version that uses a process pool and ciso8601.

I showed this to my colleague @mostlygeek, who saw my scripts and did the Go version more properly in its own repo. At first pass (go build filter.go and time ./filter) this one clocks in at 19s, just like my naive initial hack. However, if you run it as time GOPAR=2 ./filter it will use 8 workers (my MacBook Pro has 8 CPUs) and then it only takes 5.3s.

By the way, check out @mostlygeek's download.sh if you want to generate and download yourself a bunch of these .csv.gz files.

Rust

First, @mythmon stepped up and wrote two versions: one single-threaded and one using rayon, which will use all the CPUs you have.

The version using rayon looks like this (single-threaded version here):


extern crate csv;
extern crate flate2;
#[macro_use]
extern crate serde_derive;
extern crate chrono;
extern crate rayon;

use std::env;
use std::fs::File;
use std::io;
use std::iter::Sum;

use chrono::{DateTime, Utc, Duration};
use flate2::read::GzDecoder;
use rayon::prelude::*;

#[derive(Debug, Deserialize)]
struct Row {
    bucket: String,
    key: String,
    size: usize,
    last_modified_date: DateTime<Utc>,
    etag: String,
}

struct Stats {
    total: usize,
    recent: usize,
}

impl Sum for Stats {
    fn sum<I: Iterator<Item=Self>>(iter: I) -> Self {
        let mut acc = Stats { total: 0, recent: 0 };
        for stat in  iter {
            acc.total += stat.total;
            acc.recent += stat.recent;
        }
        acc
    }
}

fn main() {
    let cutoff = Utc::now() - Duration::days(180);
    let filenames: Vec<String> = env::args().skip(1).collect();

    let stats: Stats = filenames.par_iter()
        .map(|filename| count(&filename, cutoff).expect(&format!("Couldn't read {}", &filename)))
        .sum();

    let percent = 100.0 * stats.recent as f32 / stats.total as f32;
    println!("{} / {} = {:.2}%", stats.recent, stats.total, percent);
}

fn count(path: &str, cutoff: DateTime<Utc>) -> Result<Stats, io::Error> {
    let mut input_file = File::open(&path)?;
    let decoder = GzDecoder::new(&mut input_file)?;
    let mut reader = csv::ReaderBuilder::new()
        .has_headers(false)
        .from_reader(decoder);

    let mut total = 0;
    let recent = reader.deserialize::<Row>()
        .flat_map(|row| row)  // Unwrap Somes, and skip Nones
        .inspect(|_| total += 1)
        .filter(|row| row.last_modified_date > cutoff)
        .count();

    Ok(Stats { total, recent })
}

I built it like this (I have rustc 1.26 installed):

▶ cargo build --release --bin single_threaded
▶ time ./target/release/single_threaded ../*.csv.gz

That finishes in 22s.

Now let's try the one that uses all CPUs in parallel:

▶ cargo build --release --bin rayon
▶ time ./target/release/rayon ../*.csv.gz

That took 5.6s.

That's roughly 3 times faster than the best Python version.

When chatting with my teammates about this, I "nerd-sniped" another colleague, Ted Mielczarek, who forked Mike's Rust version.

Compiling and running these two, I get 17.4s for the single-threaded version and 2.5s for the rayon one.

In conclusion

  1. Simplest Python version: 3m44s
  2. Using PyPy (for Python 3.5): 2m30s
  3. Using asyncio: 3m37s
  4. Using concurrent.futures.ThreadPoolExecutor: 7m05s
  5. Using concurrent.futures.ProcessPoolExecutor: 1m5s
  6. Using ciso8601 to parse the dates: 1m08s
  7. Using ciso8601 and concurrent.futures.ProcessPoolExecutor: 18.4s
  8. Novice Go version: 20s
  9. Go version with parallel workers: 5.3s
  10. Single-threaded Rust version: 22s
  11. Parallel workers in Rust: 5.6s
  12. (Ted's) Single-threaded Rust version: 17.4s
  13. (Ted's) Parallel workers in Rust: 2.5s

Perhaps most interesting is that none of this is surprising. Of course it gets faster if you use more CPUs in parallel. And of course a C binary doing a critical piece for Python will speed things up. What I'm personally quite attracted to is how easy it was to replace the date parsing with ciso8601 in Python and get a more-than-double performance boost with very little work.

Yes, I'm perfectly aware that these are not scientific conditions and the list of disclaimers is long and boring. However, it was fun! It's fun to compare and contrast solutions like this. Don't you think?

Webpack Bundle Analyzer for create-react-app

May 14, 2018
0 comments JavaScript, React

webpack-bundle-analyzer is an awesome little program for understanding why and which parts of your bundled .js files are so big. It's a lot more advanced (and pretty) than source-map-explorer.

Thanks to this tip by @trevorwhealy you can now use webpack-bundle-analyzer on a create-react-app bundle. Yay!

Check out the report I made for the client side code of Songsear.ch:

Webpack bundle analyzed for Songsear.ch

One thing I personally noticed from this is that the .png files do take up quite a lot of kilobytes. And I'm quite surprised that the whatwg-fetch polyfill uses 12KB before gzip.

Always return namespaces in Django REST Framework

May 11, 2018
1 comment Python, Django

By default, when you hook up a model to Django REST Framework and run a query in JSON format, what you get is a list. E.g.

For GET localhost:8000/api/mymodel/


[
  {"id": 1, "name": "Foo"},
  {"id": 2, "name": "Bar"},
  {"id": 3, "name": "Baz"}
]

This isn't great because there's no good way to include other auxiliary data points that are relevant to this query. In Elasticsearch you get something like this:


{
  "took": 106,
  "timed_out": false,
  "_shards": {},
  "hits": {
    "total": 0,
    "hits": [],
    "max_score": 1
  }
}

Another key point is that perhaps today you can't think of any immediate reason to include some additional metadata about the query, but perhaps some day you will.

The way to solve this in Django REST Framework is to override the list function in your Viewset classes.

Before


# views.py
from rest_framework import viewsets

class BlogpostViewSet(viewsets.ModelViewSet):
    queryset = Blogpost.objects.all().order_by('date')
    serializer_class = serializers.BlogpostSerializer

After


# views.py
from rest_framework import viewsets

class BlogpostViewSet(viewsets.ModelViewSet):
    queryset = Blogpost.objects.all().order_by('date')
    serializer_class = serializers.BlogpostSerializer

    def list(self, request, *args, **kwargs):
        response = super().list(request, *args, **kwargs)
        # Where the magic happens!
        return response

Now, to re-wrap that, the response.data is an OrderedDict which you can change. Here's one way to do it:


# views.py
from rest_framework import viewsets

class BlogpostViewSet(viewsets.ModelViewSet):
    queryset = Blogpost.objects.all().order_by('date')
    serializer_class = serializers.BlogpostSerializer

    def list(self, request, *args, **kwargs):
        response = super().list(request, *args, **kwargs)
        response.data = {
            'hits': response.data,
        }
        return response
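
Once the results are namespaced, you also get a natural place for auxiliary data points. Here's a hypothetical sketch that adds a took timing next to the hits, similar to what Elasticsearch does:


# views.py
import time

from rest_framework import viewsets

class BlogpostViewSet(viewsets.ModelViewSet):
    queryset = Blogpost.objects.all().order_by('date')
    serializer_class = serializers.BlogpostSerializer

    def list(self, request, *args, **kwargs):
        t0 = time.time()
        response = super().list(request, *args, **kwargs)
        response.data = {
            'hits': response.data,
            # Auxiliary metadata now has a home without changing
            # the shape of the 'hits' list itself.
            'took': time.time() - t0,
        }
        return response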

And if you want to do the same for the "detail API", where you retrieve a single model instance, you can add an override to the retrieve method:


def retrieve(self, request, *args, **kwargs):
    response = super().retrieve(request, *args, **kwargs)
    response.data = {
        'hit': response.data,
    }
    return response

That's it. Perhaps it's personal preference, but if you, like me, prefer this style, this is how you do it. I like namespacing things instead of dealing with raw lists.

"Namespaces are one honking great idea -- let's do more of those!"

From import this

Note! This works equally well when you enable pagination. Enabling pagination immediately changes the main result from a list to a dictionary. I.e. instead of...


[
  {"id": 1, "name": "Foo"},
  {"id": 2, "name": "Bar"},
  {"id": 3, "name": "Baz"}
]

you now get...


{
  "count": 3,
  "next": null,
  "previous": null,
  "items": [
    {"id": 1, "name": "Foo"},
    {"id": 2, "name": "Bar"},
    {"id": 3, "name": "Baz"}
  ]
}

So if you apply the "trick" mentioned in this blog post you end up with...:


{
  "hits": {
    "count": 3,
    "next": null,
    "previous": null,
    "items": [
      {"id": 1, "name": "Foo"},
      {"id": 2, "name": "Bar"},
      {"id": 3, "name": "Baz"}
    ]
  }
}

How I found out where a bash alias was set up

May 9, 2018
0 comments Linux

I wanted to install a command line tool called gg. But for some reason, gg was already tied to an alias. No problem, I'll just delete that alias. I looked in ~/.bash_profile and I looked in ~/.zshrc and it wasn't there!

But here's how I managed to figure out where it came from:

▶ which gg
gg: aliased to git gui citool

Then I copied the git gui citool part of that output and ran:


▶ rg --hidden 'git gui citool'
.oh-my-zsh/plugins/git/git.plugin.zsh
104:alias gg='git gui citool'
105:alias gga='git gui citool --amend'

A ha! So it was .oh-my-zsh/plugins/git/git.plugin.zsh that was the culprit. I'd totally forgotten about that plugin. It's full of other useful aliases, so I just commented out the one(s) I knew I didn't need any more.

By the way, rg, a.k.a. ripgrep, is probably one of the best tools I have. I use it so often that it's attached to my belt rather than in my toolbox.