How to no-mincss links with django-pipeline

February 3, 2016
2 comments Python, Web development, Django

This might be the kind of problem only I have, but I thought I'd share in case others are in a similar pickle.

Warming Up

First of all, the way my personal site works is that every rendered page gets cached as rendered HTML. Midway, before the rendered page is stored in the cache, an optimization transformation happens. It basically takes HTML like this:


<html>
<link rel="stylesheet" href="vendor.css">
<link rel="stylesheet" href="stuff.css">
<body>...</body>
</html>

into this:


<html>
<style>
/* optimized contents of vendor.css and stuff.css minified */
</style>
<body>...</body>
</html>

Just right-click and "View Page Source" and you'll see.

When it does this it also filters out CSS selectors in those .css files that aren't actually used in the rendered HTML. This makes the inlined CSS much smaller. Especially since so much of the CSS comes from a CSS framework.

However, there are certain .css files that have references to selectors that aren't in the generated HTML but are needed later when some JavaScript changes the DOM based on AJAX or user actions. For example, the CSS used by the Autocompeter widget. The program that does this CSS optimization transformation is called mincss and it has a feature where you can tell it to NOT bother with certain CSS selectors (using a CSS comment) or certain <link> tags entirely. It looks like this:


<link rel="stylesheet" href="ajaxstuff.css" data-mincss="no">

Where Does django-pipeline Come In?

So, setting that data-mincss="no" isn't easy when you use django-pipeline because you don't write <link ... in your Django templates, you write {% stylesheet 'name-of-bundle' %}. So, how do you get it in?

Well, first let's define the bundle. In my case it looks like this:



PIPELINE_CSS = {
    ...
    # Bundle of CSS that strictly isn't needed at pure HTML render-time
    'base_dynamic': {
        'source_filenames': (
            'css/transition.css',
            'autocompeter/autocompeter.min.css',
        ),
        'extra_context': {
            'no_mincss': True,
        },
        'output_filename': 'css/base-dynamic.min.css',
    },
    ...
}

But that isn't enough. Next, I need to override how django-pipeline turns that block into a <link ...> tag. To do that, you need to create a directory and file called pipeline/css.html (or pipeline/css.jinja if you use Jinja rendering by default).

So take the default one from inside the pipeline package and copy it into one of your apps' templates directories in your project. For example, in my case, peterbecom/apps/base/templates/pipeline/css.jinja. Then, in that template, add at the very end something like this:

{% if no_mincss %} data-mincss="no"{% endif %} />

The Point?

The point is that if you're in a similar situation, where you want django-pipeline to output the <link> or <script> tag differently than it does by default, then this is a good example of how to do it.

Bestest and securest way to handle Python dependencies

February 1, 2016
1 comment Python

pip 8 is out and with it, the ability to only install dependencies you've vetted. Thank Erik Rose! Now you can be absolutely certain that the dependencies you downloaded and installed locally are absolutely identical to the dependencies you download and install on your production server.

First pipstrap.py

So your server needs pip to install those dependencies safely and securely. Initially you have to trust the pip/virtualenv that is installed globally on the system. If you can trust it but are unsure it's a good version of pip (version 8 and up), that's where pipstrap.py comes in. It makes sure you get a pip version installed that supports pip install with hashes.

Add pipstrap.py to your git/hg repo and use it to make sure you have a good pip. For example your deployment script might look like this now:

#!/bin/bash
git pull origin master
virtualenv venv
source venv/bin/activate
python ./tools/pipstrap.py
pip install --require-hashes -r requirements.txt

Then hashin

Thanks to pipstrap we now have a version of pip that really does check the hashes you've put in the requirements.txt file.

(By the way, the --require-hashes on pip install is optional. pip will imply it if the requirements.txt file appears to have hashes defined. But to avoid the risk of you accidentally fumbling a bad requirements.txt, it's good to specify --require-hashes to pip install.)

Now that you're up and running and you sleep well at night because you know your production server has exactly the same dependencies you had when you did the development and unit testing, how do you get the hashes in there?

The trick is to install hashin (pip install hashin). It helps you write those hashes.

Suppose you have a requirements.txt file that looks like this:

Django==1.9.1
bgg==0.22.1
html2text==2016.1.8

You can try to run pip install --require-hashes -r requirements.txt and learn from the errors. E.g.:

Hashes are required in --require-hashes mode, but they are missing from some requirements. 
Here is a list of those requirements along with the hashes their downloaded archives actually 
had. Add lines like these to your requirements files to prevent tampering. (If you did not 
enable --require-hashes manually, note that it turns on automatically when any package has a hash.)
    Django==1.9.1 --hash=sha256:9f7ca04c6dbcf08b794f2ea5283c60156a37ebf2b8316d1027f594f34ff61101
    bgg==0.22.1 --hash=sha256:e5172c3fda0e8a42d1797fd1ff75245c3953d7c8574089a41a219204dbaad83d
    html2text==2016.1.8 --hash=sha256:088046f9b126761ff7e3380064d4792279766abaa5722d0dd765d011cf0bb079

But those are just the hashes for your particular environment (and your particular support for Python wheels). Instead, take each requirement and run it through hashin:

$ hashin Django==1.9.1
$ hashin bgg==0.22.1
$ hashin html2text==2016.1.8

Now your requirements.txt will look like this:

Django==1.9.1 \
    --hash=sha256:9f7ca04c6dbcf08b794f2ea5283c60156a37ebf2b8316d1027f594f34ff61101 \
    --hash=sha256:a29aac46a686cade6da87ce7e7287d5d53cddabc41d777c6230a583c36244a18
bgg==0.22.1 \
    --hash=sha256:e5172c3fda0e8a42d1797fd1ff75245c3953d7c8574089a41a219204dbaad83d \
    --hash=sha256:aaa53aea1cecb8a6e1288d6bfe52a51408a264a97d5c865c38b34ae16c9bff88
html2text==2016.1.8 \
    --hash=sha256:088046f9b126761ff7e3380064d4792279766abaa5722d0dd765d011cf0bb079
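Each of those --hash values is simply the SHA-256 checksum of one released file on PyPI (one per sdist or wheel). Purely as an illustration (the filename below is hypothetical; hashin and pip do this for you), computing such a hash yourself looks like this:

import hashlib

def sha256_of_file(path):
    # Read the file in chunks and return its hex-encoded SHA-256 digest,
    # which is exactly what goes after "--hash=sha256:" in requirements.txt.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

print(sha256_of_file("Django-1.9.1-py2.py3-none-any.whl"))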

One Last Note

pip is smart enough to traverse the nested dependencies of packages that need to be installed. For example, suppose you do:

$ hashin premailer

It will only add...

premailer==2.9.7 \
    --hash=sha256:1516cbb972234446660bf7862b28521f0fc8b5e7f3087655f35ae5dd233013a3 \
    --hash=sha256:843e624bdac9d28725b217559904aa5a217c1a94707bc2ecef6c91a8d82f1a23

...to your requirements.txt. But this package has a bunch of dependencies of its own. To find out what those are, let pip "fail for you".

$ pip install --require-hashes -r requirements.txt
Collecting premailer==2.9.7 (from -r r.txt (line 1))
  Downloading premailer-2.9.7-py2.py3-none-any.whl
Collecting lxml (from premailer==2.9.7->-r r.txt (line 1))
Collecting cssutils (from premailer==2.9.7->-r r.txt (line 1))
Collecting cssselect (from premailer==2.9.7->-r r.txt (line 1))
In --require-hashes mode, all requirements must have their versions pinned with ==. These do not:
    lxml from https://pypi.python.org/packages/source/l/lxml/lxml-3.5.0.tar.gz#md5=9f0c5f1eb43ff44d5455dab4b4efbe73 (from premailer==2.9.7->-r r.txt (line 1))
    cssutils from https://pypi.python.org/packages/2.7/c/cssutils/cssutils-1.0.1-py2-none-any.whl#md5=b173f51f1b87bcdc5e5e20fd39530cdc (from premailer==2.9.7->-r r.txt (line 1))
    cssselect from https://pypi.python.org/packages/source/c/cssselect/cssselect-0.9.1.tar.gz#md5=c74f45966277dc7a0f768b9b0f3522ac (from premailer==2.9.7->-r r.txt (line 1))

So apparently you need to hashin those three dependencies:

$ hashin lxml
$ hashin cssutils
$ hashin cssselect

Now your requirements.txt file will look something like this:

premailer==2.9.7 \
    --hash=sha256:1516cbb972234446660bf7862b28521f0fc8b5e7f3087655f35ae5dd233013a3 \
    --hash=sha256:843e624bdac9d28725b217559904aa5a217c1a94707bc2ecef6c91a8d82f1a23
lxml==3.5.0 \
    --hash=sha256:349f93e3a4b09cc59418854ab8013d027d246757c51744bf20069bc89016f578 \
    --hash=sha256:8628cc82957c41be10abce889a1976ceb7b9e3f36ebffa4fcb1a80901bf77adc \
    --hash=sha256:1c9c26bb6c31c3d5b3c104e843211d9c105db60b4df6770ac42673263d55d494 \
    --hash=sha256:01e54511034333f18772c335ec0b33a76bba988135eaf727a075897866d19604 \
    --hash=sha256:2abf6cac9b7952047d8b7265384a9565e419a727dba675e83e4b7f5b7892b6bb \
    --hash=sha256:6dff909020d0c030fb26004626c8f87f9116e0381702fed415caf94f5a9b9493
cssutils==1.0.1 \
    --hash=sha256:78ac48006ac2336b9456e88a75ed35f6a31a030c65162503b7af01a60d78db5a \
    --hash=sha256:d8a18b2848ea1011750231f1dd64fe9053dbec1be0b37563c582561e7a529063
cssselect==0.9.1 \
    --hash=sha256:0535a7e27014874b27ae3a4d33e8749e345bdfa62766195208b7996bf1100682

Ah... Now you feel confident.

Actually, One More Last Note

Sorry for repeating the obvious but it's so important it's worth making it loud and clear:

Use the same pip install procedure and requirements.txt file everywhere

I.e. install the dependencies the same way on your laptop, your continuous integration server, your staging server and your production server. That really makes sure you're running the same process and the same dependencies everywhere.

A quicksearch for Bugzilla using Autocompeter

January 27, 2016
0 comments Python, Web development, Mozilla, JavaScript

Here's the final demo.

What I did was, I used the Bugzilla REST APIs to download all bugs for a specific product. Then I bulk-uploaded them to Autocompeter.com and lastly built a simple web front-end.

When you "download all" bugs with the Bugzilla REST API, it might be capped but I don't know what the limit is. The trick is to not download ALL bugs for the product in one big fat query, but to find out what all components are for that product and then download for each. The Python code is here.

Everyone's Invited to Play

So first you need to sign in on https://autocompeter.com using your GitHub account. Then you can generate an Auth-Key by picking a domain. The domain can be anything really. I picked bugzilla.mozilla.org but you can use whatever you like.

Then, when you have an Auth-Key you need to know the name of the product (or products) and run the script like this:

python download.py 7U4eFYH5cqR15m3ekuxkzaUR Socorro

Once you've done that, fork my codepen and replace the domain and any other references to the product.

Caveats

To make this really useful, you'd have to run it more often. Perhaps you can hook it up to a cron job or something and make it so that you only download, from the REST API, things that have changed since the last time you did a big download. Then you can let the cron job run frequently.
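If you go the cron job route, the Bugzilla REST API's last_change_time parameter is the natural fit for that incremental download. A hedged sketch (parameter and field names assumed from the Bugzilla REST docs):

import requests

def fetch_changed_since(product, since_iso):
    # Only fetch bugs changed since the given ISO timestamp, e.g. the time
    # of the previous run, so the cron job stays cheap.
    response = requests.get(
        "https://bugzilla.mozilla.org/rest/bug",
        params={
            "product": product,
            "last_change_time": since_iso,
            "include_fields": "id,summary,status",
        },
    )
    return response.json()["bugs"]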

If you want really hot results, you could hook up a server-side service that consumes the Bugzfeed websocket.

Last but not least: this will never list private/secure bugs. Only publicly available stuff.

The Future

If people enjoy it perhaps we can change the front-end demo so it's not hardcoded to one specific product ("Socorro" in my case). And it can be made pretty.

And the data would need to be downloaded and re-submitted more frequently. A quick Heroku app mayhaps?

hashin - a replacement for peepin

January 26, 2016
0 comments Python

tl;dr Stop using peepin. Start using hashin

Today I proudly release hashin (on PyPI). It's a replacement for peepin (on PyPI). Yes, I know that's confusing.

A couple of days ago my friend Erik Rose gloriously took his peep project and got it embedded in pip 8.0 proper so, as of that, the right thing to do is to upgrade to pip 8 and delete your peep.py.

With that change, it no longer makes sense to use peepin. It had a good run. Bye bye.

But much of the code lives on in hashin. It's basically a fork but with different logic for A) how it gets the hash and B) how it renders the automatic changes to your requirements file.
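As a hedged illustration of the "how it gets the hash" part: today's PyPI JSON API exposes per-file sha256 digests for a release, so the general idea looks something like this (hashin's real implementation, especially back then, differs in the details):

import requests

def get_release_hashes(package, version):
    # Ask PyPI's JSON API which files (sdist, wheels, ...) a release has
    # and collect one sha256 digest per file.
    url = "https://pypi.org/pypi/{}/{}/json".format(package, version)
    data = requests.get(url).json()
    return [f["digests"]["sha256"] for f in data["urls"]]

print(get_release_hashes("html2text", "2016.1.8"))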

First, if you haven't already done so:

$ pip install -U peep pip
$ pip --version  # version 8 right?
$ peep port requirements.txt
$ pip uninstall peep
$ pip install --require-hashes -r requirements.txt

Check out Erik's guide.

Now, you can deal with the companion.

$ pip uninstall peepin
$ pip install hashin
$ touch /tmp/test.txt
$ hashin --verbose html2text simplejson /tmp/test.txt

What's Next?

If Erik managed to get peep into pip, surely I can get hashin into pip. Hoping for some encouragement from @dstufft and @jezdez :)

How I installed letsencrypt for Nginx

January 26, 2016
0 comments Linux, Web development

I have no problems admitting that I'm always finding SSL and certs and stuff like that confusing. And Let's Encrypt is no exception. However, with Let's Encrypt, apparently, all you need to do is download their software and run a command to get a couple of certificate files. No websites or forms to fill in. No need to create a .csr file. How hard can it be? After skimming some documentation and other blog posts I dug in. Turns out, it was quite doable.

To install it, I ran:

# pwd
/root
# git clone https://github.com/letsencrypt/letsencrypt
# cd letsencrypt
# pip install cryptography
# ./letsencrypt-auto

The reason I had to manually pip install cryptography was because the installer in ./letsencrypt-auto failed the first time.

Now it should be installed. To create the cert you have to temporarily stop Nginx. I had to be quick because I didn't want it to be down for long:

# /etc/init.d/nginx stop
# ./letsencrypt-auto certonly --standalone -d autocompeter.com
# /etc/init.d/nginx start

The first time I ran this I got Error: urn:acme:error:badNonce :: The client sent an unacceptable anti-replay nonce :: JWS has invalid anti-replay nonce which, according to this discussion, is easy to bypass; simply try again. So I tried again, and the second time it worked.

Now I have 4 new files:

# ls -l /etc/letsencrypt/live/autocompeter.com/
total 0
lrwxrwxrwx 1 root root 32 Jan 25 08:04 cert.pem -> ../../archive/autocompeter.com/cert1.pem
lrwxrwxrwx 1 root root 33 Jan 25 08:04 chain.pem -> ../../archive/autocompeter.com/chain1.pem
lrwxrwxrwx 1 root root 37 Jan 25 08:04 fullchain.pem -> ../../archive/autocompeter.com/fullchain1.pem
lrwxrwxrwx 1 root root 35 Jan 25 08:04 privkey.pem -> ../../archive/autocompeter.com/privkey1.pem

Now add these lines to the Nginx config for that site:

listen 443;

ssl on;
ssl_certificate /etc/letsencrypt/live/autocompeter.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/autocompeter.com/privkey.pem;
ssl_session_timeout 5m;
ssl_session_cache shared:SSL:50m;

The new cert I just created expires in about 2 months. I created an entry in my calendar with an alert. I think I just need to run:

# /etc/init.d/nginx stop
# ./letsencrypt-auto certonly --standalone -d autocompeter.com
# /etc/init.d/nginx start

Best Atom packages of 2015

January 22, 2016
7 comments Web development, macOS

tl;dr last-cursor-position, advanced-open-file and highlight-line

Sorry for the sensationalist headline on this blog post. Almost all of Atom, including the core functionality, is based on packages. For example, the autocomplete thing that pops up whilst you're typing is a package with its own git repo and README. However, it's not a community package. Let's focus on those instead.

Number 1

last-cursor-position

If you're in the midst of typing and for some reason you need to scroll somewhere else in the code to type something or to select and copy to the clipboard, how do you get back? You can either memorize which line you were on. Or you can split the window so that when you're done, elsewhere, you just kill the newly created split-window. Or, you install last-cursor-position.

At any time you can press alt-- (that's alt and the minus character) and it'll go back to where the cursor was last.

It works across open tabs too. So if you switch tabs to edit index.html and want to go back to that app.py you were working on you can alt-- yourself back there. And suppose that you want to go back to index.html again, you hit shift-alt--.

Number 2

advanced-open-file

This was written by a friend of mine called Michael "Osmose" Kelly and it was the first package he wrote. It's apparently very popular and it's Michael's most popular Open Source project to date.

What it does is introduce a command-line looking prompt for opening files. By default, you start it with Ctrl-x Ctrl-f which is the Emacs command for opening files/buffers.

Don't get me wrong, I love using Cmd-t to fuzzy-find files and that's awesome too, but sometimes when you have eleventeen files called models.py and you want the one in the "current directory" it's much easier to just go directly to that file. I type Cmd-x Cmd-f m [TAB] [ENTER] and I'm there. Had I typed m on the fuzzy-finder it would certainly have yielded too many files.

Another really, really useful thing about this package is that I can easily go to any other file outside the current directory. Suppose my Atom window is rooted in ~/dev/PYTHON/premailer/ and I want to open /tmp/hack.js, I easily can, thanks to this package, without reaching for the mouse.

Number 3

highlight-line

The name well describes what it does. But why do I need it? The answer is simple; it's when I jump around. When I'm in the midst of typing a function or snippet or something I don't need to know which line I'm on because things are settled. No, it's when I go somewhere else, for example using the last-cursor-position package, then it's hard to see where the cursor is. Especially relevant when you have a big screen with high resolution.

Why isn't this a core package?!

In Summary

I bet I've forgotten some package that I love and use every day that isn't a core package. If so, it's probably something subtle or something that I almost take for granted. For example, who doesn't use react or atom-beautify?! Also, those packages are already so popular they don't need a blog post to raise their attention and fame :)

What are your favorites that you like so much that they just need to be highlighted? Leave a comment or discuss here.

Advanced Closure Compiler vs UglifyJS2

January 20, 2016
12 comments JavaScript

A couple of years ago I wrote a blog post titled "Comparing Google Closure with UglifyJS". It concluded that Closure Compiler compressed files down to 45.6% of the original size. And UglifyJS only 51.5%. But UglifyJS was 1220% faster so I concluded that I'm going to stick to UglifyJS.

But things have changed since 2011. UglifyJS2 came out and stealthily replaced the original implementation (npm install uglify-js) and it has a --mangle option. Also, in the original experimental blog post I didn't use -O advanced when using Closure Compiler.

So I whipped up a quick script to compare the two. Here's some of the output:

Truncated! Read the rest by clicking the link below.

Select all relations in PostgreSQL

December 10, 2015
1 comment PostgreSQL

tl;dr Start psql with -E or --echo-hidden

I wanted to find out EVERYTHING that's related to a specific topic. Tables, views, stored procedures etc.
One way of doing that is to go into psql and type \d and/or \df and look through that list. But that's impractical if the list gets large, and I might want to get it out on stdout instead so I can grep and grep -v.

There are lots of Stackoverflow questions about how to SQL select all tables but I want it all. The solution is to start psql with -E or --echo-hidden. When you do that, it prints out the SQL it used to generate the output for you. You can then copy that and do whatever you want with it. For example:

peterbecom=# \d
********* QUERY **********
SELECT n.nspname as "Schema",
  c.relname as "Name",
  CASE c.relkind WHEN 'r' THEN 'table' WHEN 'v' THEN 'view' WHEN 'm' THEN 'materialized view' WHEN 'i' THEN 'index' WHEN 'S' THEN 'sequence' WHEN 's' THEN 'special' WHEN 'f' THEN 'foreign table' END as "Type",
  pg_catalog.pg_get_userbyid(c.relowner) as "Owner"
FROM pg_catalog.pg_class c
     LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind IN ('r','v','m','S','f','')
      AND n.nspname <> 'pg_catalog'
      AND n.nspname <> 'information_schema'
      AND n.nspname !~ '^pg_toast'
  AND pg_catalog.pg_table_is_visible(c.oid)
ORDER BY 1,2;
**************************

With this I was able to come up with this SQL select to get all tables, views, sequences and functions.


SELECT
  c.relname as "Name",
  CASE c.relkind WHEN 'r' THEN 'table'
  WHEN 'v' THEN 'view'
  WHEN 'm' THEN 'materialized view'
  WHEN 'i' THEN 'index'
  WHEN 'S' THEN 'sequence'
  WHEN 's' THEN 'special'
  WHEN 'f' THEN 'foreign table' END as "Type"
FROM pg_catalog.pg_class c
     LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind IN ('r','v','m','S','f','')
      AND n.nspname <> 'pg_catalog'
      AND n.nspname <> 'information_schema'
      AND n.nspname !~ '^pg_toast'
  AND pg_catalog.pg_table_is_visible(c.oid);


SELECT
  p.proname as "Name",
  'function'
FROM pg_catalog.pg_proc p
     LEFT JOIN pg_catalog.pg_namespace n ON n.oid = p.pronamespace
WHERE pg_catalog.pg_function_is_visible(p.oid)
      AND n.nspname <> 'pg_catalog'
      AND n.nspname <> 'information_schema';

A use case of this is that I put those two SQL selects in a file and now I can grep:

$ psql mydatabase < everything.sql | grep -i crap

Headsupper.io

December 5, 2015
0 comments Python, Web development, Django, JavaScript, React

tl;dr

Headsupper.io is a free GitHub webhook service that emails people when commits have the configurable keyword "headsup" in them.

Introduction

Headsupper.io is great for when you have a GitHub project with multiple people working on it and when you make a commit you want to notify other people by email.

Basically, you set up a GitHub Webhook, on pushes, to push to https://headsupper.io and then it'll parse the incoming push and its commits and look for certain things in the commit message. By default, it'll look for the word "headsup". For example, a git commit message might look like this:

fixes #123 - more juice in the Saab headsup! will require updating

Or you can use the multi-line approach where the first line is short and sweet and, after the break, a bit more elaborate:

bug 1234567 - tea kettle upgrade 2.1

Headsup: Next time you git pull from master, remember to run 
peep install on the requirements.txt file since this commit 
introduces a bunch of crazy dependency changes.

Git commits that come through that don't have any match on this word will simply be ignored by Headsupper.

How you use it

Maybe paradoxically, you need to authenticate with your GitHub account but that's in read-only mode and does NOT set up the Webhook for you. The reason you have to authenticate to prepare a configuration on headsupper.io is to tie the configuration to a real user.

Once you've authenticated you get the option to create your first configuration, and then you have to enter at least these three pieces of information:

  1. The GitHub "full name". This is the org name, slash, repo name. E.g. peterbe/django-peterbecom or mozilla/socorro.
  2. Pick a secret. Remember what you typed, because you'll need to type in this same secret when you set up the Webhook on your GitHub project's Webhooks page. (This is used to checksum and verify the source of the Webhook push; see the sketch after this list.)
  3. Who to send to. A list of email addresses separated with a newline or a semi-colon.
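About that secret (item 2 above): GitHub signs each Webhook payload with an HMAC-SHA1 of the request body using your secret and sends it in the X-Hub-Signature header. Here's a sketch of how such a check is typically done; not necessarily Headsupper's exact code:

import hashlib
import hmac

def signature_ok(secret, payload_body, header_value):
    # Recompute the HMAC-SHA1 of the raw request body and compare it,
    # in constant time, against GitHub's X-Hub-Signature header value.
    expected = "sha1=" + hmac.new(
        secret.encode("utf-8"), payload_body, hashlib.sha1
    ).hexdigest()
    return hmac.compare_digest(expected, header_value)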

Once you've set that up, you'll need to go to your GitHub project's Settings page and enter a new Webhook. The URL you need to type in is https://headsupper.io and for the "Secret" type in that secret you used earlier. That's it!

Rules and options

The word that triggers is configurable by you. The default is headsup. And by default, it's case insensitive. You can change that so it's case sensitive. Also, the word has to be word delimited on the left (e.g. a space or a newline character) and on the right it needs to be a space, a : or a !. So this won't match: theheadsup: or headsupper.
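To make those delimiter rules concrete, here's an illustrative sketch of the matching logic in Python (my own sketch, not Headsupper's actual implementation):

import re

def has_trigger(message, trigger="headsup", case_sensitive=False):
    # Left of the word: start of message or whitespace.
    # Right of the word: a space, ":" or "!".
    flags = 0 if case_sensitive else re.IGNORECASE
    pattern = r"(?:^|\s)" + re.escape(trigger) + r"(?=[ :!])"
    return re.search(pattern, message, flags) is not None

assert has_trigger("fixes #123 - more juice in the Saab headsup! will require updating")
assert not has_trigger("theheadsup: or headsupper.")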

Other optional things you can configure are:

  • Which git branch to trigger on (by default it's master)
  • Which emails to CC when it sends
  • Which emails to BCC when it sends
  • Only send when you make a tag

That last option, Only send when a new tag is created, is interesting. I added that option because at work, we make production server releases by pushing a git tag. When a tag is pushed, all those commits are sent to the continuous deployment service which makes a server upgrade. This means you get a chance to enter a heads up message to be emailed to the people who care about new deployments going out.

How it was built

It's a mix between Django and ReactJS. The whole client-side app is built statically with Webpack in ES6. It's served as static files through Nginx. But Nginx makes an exception for all URLs that start with /api or /accounts. The /api/* is used for loading and setting JSON. The /accounts/* is used for the GitHub OAuth endpoints.

What's interesting about the architecture is that it's using HTTP cookies. Not API tokens. Cookies are quite good in that they're established and the browser does all the automated work of keeping them secure and making each request potentially authenticated.

Here's the relevant React code and here's the relevant Django code that processes the Webhook.

The whole project is available on: https://github.com/peterbe/headsupper.

Also, I made a demo at the November Mozilla Beer and Tell.

Django forms and making datetime inputs localized

December 4, 2015
2 comments Python, Django

tl;dr

To change from one timezone aware datetime to another, turn it into a naive datetime and then use pytz's localize() method to convert it back to the timezone you want it to be.

Introduction

Suppose you have a Django form where you allow people to enter a date, e.g. 2015-06-04 13:00. You have to save it timezone aware, because you have settings.USE_TZ on and it's just many times better to store things as timezone aware dates.

By default, if you have settings.USE_TZ and no timezone information is in the string that the django.form.fields.DateTimeField parses, it will use settings.TIME_ZONE and that timezone might be different from what it really should be. For example, in my case, I have an app where you can upload a CSV file full of information about events. These events belong to a venue which I have in the database. Every venue has a timezone, e.g. Europe/Berlin or US/Pacific. So if someone uploads a CSV file for the Berlin location, 2015-06-04 13:00 means 13:00 o'clock in Berlin. I don't care where the server is hosted and what its settings.TIME_ZONE is. I need to make that input timezone aware specifically for Europe/Berlin.

Examples

Suppose you have settings.TIME_ZONE == 'US/Pacific' and you let the django.form.fields.DateTimeField do its magic you get something you don't want:


>>> from django.conf import settings
>>> settings.TIME_ZONE
'US/Pacific'
>>> assert settings.USE_TZ
>>> from django.forms.fields import DateTimeField
>>> DateTimeField().clean('2015-06-04 13:00')
datetime.datetime(2015, 6, 4, 13, 0, tzinfo=<DstTzInfo 'US/Pacific' PDT-1 day, 17:00:00 DST>)

See! That's wrong. Sort of. Not Django's fault. What I need to do is to convert that datetime object into one that is timezone aware on the Europe/Berlin timezone.

In old versions of pytz, specifically <=2014.2 you could do this:


>>> import pytz
>>> pytz.VERSION
'2014.2'
>>> from django.forms.fields import DateTimeField
>>> date = DateTimeField().clean('2015-06-04 13:00')
>>> date
datetime.datetime(2015, 6, 4, 13, 0, tzinfo=<DstTzInfo 'US/Pacific' PDT-1 day, 17:00:00 DST>)
>>> tz = pytz.timezone('Europe/Berlin')
>>> date.replace(tzinfo=tz)
datetime.datetime(2015, 6, 4, 13, 0, tzinfo=<DstTzInfo 'Europe/Berlin' CET+1:00:00 STD>)

But in modern versions of pytz you can't do that, because if you don't use the pytz.timezone instance to localize, it will use the default version which might be one of those crazy "Local Mean Time" offsets they used 100 years ago. E.g.


>>> import pytz
>>> pytz.VERSION
'2015.7'
>>> from django.forms.fields import DateTimeField
>>> date = DateTimeField().clean('2015-06-04 13:00')
>>> tz = pytz.timezone('Europe/Berlin')
>>> date.replace(tzinfo=tz)
datetime.datetime(2015, 6, 4, 13, 0, tzinfo=<DstTzInfo 'Europe/Berlin' LMT+0:53:00 STD>)

See, it's that crazy LMT+0:53:00 that's oft talked of on Stackoverflow!

Here's the trick

The trick is to use pytz.timezone(MY TIME ZONE NAME).localize(MY NAIVE DATETIME OBJECT). When you use the .localize() method pytz can use the date to make sure it uses the right conversion for that named timezone.

And in the case of our overly smart django.form.fields.DateTimeField it means we need to convert it back into a naive datetime object and then localize it.


>>> import pytz
>>> pytz.VERSION
'2015.7'
>>> from django.forms.fields import DateTimeField
>>> date = DateTimeField().clean('2015-06-04 13:00')
>>> date = date.replace(tzinfo=None)
>>> date
datetime.datetime(2015, 6, 4, 13, 0)
>>> tz = pytz.timezone('Europe/Berlin')
>>> tz.localize(date)
datetime.datetime(2015, 6, 4, 13, 0, tzinfo=<DstTzInfo 'Europe/Berlin' CEST+2:00:00 DST>)
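Putting it together, a tiny helper along these lines (the function name is mine, not from any particular project) captures the whole recipe:

import pytz
from django.forms.fields import DateTimeField

def parse_in_timezone(value, timezone_name):
    # Parse the string, strip the (wrong) default timezone, then let pytz
    # localize the naive datetime in the timezone it really belongs to.
    naive = DateTimeField().clean(value).replace(tzinfo=None)
    return pytz.timezone(timezone_name).localize(naive)

# parse_in_timezone('2015-06-04 13:00', 'Europe/Berlin')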

That was much harder than it needed to be. Timezones are hard. Especially when you have the human element of people typing in things and just, rightfully, expect the system to figure it out and get it right.

I hope this helps the next schmuck who has/had to set aside an hour to figure this out.