From Postgres to JSON strings

November 12, 2013
14 comments Python

No, this is not about the new JSON type added in Postgres 9.2. This is about the nicest way I've found to get a record set from a Postgres database into a JSON string using Python.

Here's the traditional way:


>>> import json
>>> import psycopg2
>>>
>>> conn = psycopg2.connect('dbname=peterbecom')
>>> cur = conn.cursor()
>>> cur.execute("""
...   SELECT
...     id, oid, root, approved, name
...   FROM blogcomments
...   LIMIT 10
... """)
>>> columns = (
...     'id', 'oid', 'root', 'approved', 'name'
... )
>>> results = []
>>> for row in cur.fetchall():
...     results.append(dict(zip(columns, row)))
...
>>> print json.dumps(results, indent=2)
[
  {
    "oid": "comment-20030707-161847",
    "root": true,
    "id": 5662,
    "name": "Peter",
    "approved": true
  },
  {
    "oid": "comment-20040219-r4cf",
    "root": true,
    "id": 5663,
    "name": "silconscave",
    "approved": true
  },
  {
    "oid": "c091011r86x",
    "root": true,
    "id": 5664,
    "name": "Rachel Jay",
    "approved": true
  },
...

This is plain and nice but it's kinda annoying that you have to write down the columns you're selecting twice.
Also, it's annoying that you have to convert the results of fetchall() into a list of dicts in an extra loop.

So, there's a trick to the rescue! You can use the cursor_factory parameter. See below:


>>> import json
>>> import psycopg2
>>> from psycopg2.extras import RealDictCursor
>>>
>>> conn = psycopg2.connect('dbname=peterbecom')
>>> cur = conn.cursor(cursor_factory=RealDictCursor)
>>> cur.execute("""
...   SELECT
...     id, oid, root, approved, name
...   FROM blogcomments
...   LIMIT 10
... """)
>>>
>>> print json.dumps(cur.fetchall(), indent=2)
[
  {
    "oid": "comment-20030707-161847",
    "root": true,
    "id": 5662,
    "name": "Peter",
    "approved": true
  },
  {
    "oid": "comment-20040219-r4cf",
    "root": true,
    "id": 5663,
    "name": "silconscave",
    "approved": true
  },
  {
    "oid": "c091011r86x",
    "root": true,
    "id": 5664,
    "name": "Rachel Jay",
    "approved": true
  },
...

Isn't that much nicer? It's shorter and only lists the columns once.

But is it much faster? Sadly, no. I ran various benchmarks comparing different ways of doing this and basically concluded that there's no significant difference. The RealDictCursor version is around 5% faster, but I suspect most of the time in the benchmark is spent on things (the I/O) that don't differ between the versions.
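If you want to compare the two approaches yourself, a crude timing harness might look something like this. It's only a sketch; it reuses conn, json and RealDictCursor from the snippets above, and the numbers will obviously depend on your data and hardware:

import time

def timer(func, times=10):
    # crude wall-clock timing; good enough for a rough comparison
    t0 = time.time()
    for _ in range(times):
        func()
    return (time.time() - t0) / times

def with_zip():
    cur = conn.cursor()
    cur.execute("SELECT id, oid, root, approved, name FROM blogcomments LIMIT 1000")
    columns = ('id', 'oid', 'root', 'approved', 'name')
    return json.dumps([dict(zip(columns, row)) for row in cur.fetchall()])

def with_realdictcursor():
    cur = conn.cursor(cursor_factory=RealDictCursor)
    cur.execute("SELECT id, oid, root, approved, name FROM blogcomments LIMIT 1000")
    return json.dumps(cur.fetchall())

print timer(with_zip), timer(with_realdictcursor)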

Anyway. It's a keeper. I think it just looks nicer.

All my apps are now running on one EC2 server

November 3, 2013
0 comments Linux

With hugepic.io moved over to my new EC2 server, I now have all my working sites under one server.

If I list all sites in /etc/nginx/sites-enabled/ I count 14 sites, this blog being one of many. More are listed here.

All but one of these services are Python; the remaining one is a Node server. About half of the Python services are Django and the other half are Tornado. There are four databases (Postgres, Redis, Memcache, MongoDB) and two message queues (RabbitMQ and Python RQ).

I have this little script called ps_mem.py which does a decent job of summarizing how much memory all of these take. Its output currently looks like this:

 Private  +   Shared  =  RAM used   Program
 ...
  6.5 MiB +  27.3 MiB =  33.7 MiB   postgres (5)
 40.1 MiB +  58.0 KiB =  40.1 MiB   memcached
 54.7 MiB +  37.5 KiB =  54.7 MiB   redis-server
 72.2 MiB + 849.0 KiB =  73.1 MiB   mongod
 82.4 MiB +   1.5 MiB =  83.9 MiB   rqworker (10)
605.6 MiB + 350.9 MiB = 956.5 MiB   python (61)
  1.9 GiB +  51.2 MiB =   2.0 GiB   uwsgi-core (26)
---------------------------------
                          3.3 GiB                       

It's sorted by "RAM used" and I've only shown the bottom 7 entries here.
Anyway, 3.3 GiB to run 14 sites isn't bad. All through one Nginx (which only uses about 10 MB, by the way).

The server is a Debian 7 on a reserved Large instance. I'll try to post an update later about this server with more details. I have a lot of work to do to set up all monitoring and backups for all these things.

Lazy loading below the fold

October 26, 2013
2 comments Web development, JavaScript

I've started experimenting with my home page to make it load even faster.

Amazon famously does this too, which you can read more about in this Steve Souders post. They make sure everything that needs to be visible above the fold is loaded first; then they start loading all the other "stuff" below the fold. The assumption is that the user requests the page, watches it render, and sometime after it has rendered reaches for the mouse and starts scrolling down for more content. Or perhaps never bothers to scroll down at all. Either way, everything below the fold can wait. We have more time to load that in later.

What we want to avoid is a load graph like this:

big html document delays loading other stuff

The graph is deliberately zoomed out so that we don't get stuck on the details of that particular graph. But basically, you have a very heavy document which needs to be fully loaded (and partially rendered) before the browser can load all the other stuff that page entails. As you can see, the first load (the HTML document) is taking up a majority of the load time. Once that's downloaded, the browser can start parsing and rendering it. Simultaneously it can start downloading all the referenced resources such as images, JavaScript and CSS.

On WebPagetest they call this Speed Index; "The Speed Index is the average time at which visible parts of the page are displayed."
So basically, you want to display as much as you possibly can and then load in other things that are necessary but can wait in the background.

So, how did I accomplish this on my site?

Basically, the home page uses a piece of Django code that picks up the 10 most recent blog posts and includes them in the template. Instead, I made it pick up only the first 2, and then after window.onload a piece of AJAX code loads the HTML for the remaining 8 blog posts.
That means much less is required to load the home page. The page is smaller and references fewer images. The AJAX code is very crude and simple but works well enough:


onload = function() {
  // fetch the HTML for blog posts 2 through 10 and inject it below the fold
  microAjax("/rest/2/10/", function (res) {
    document.getElementById('rest').innerHTML = res;
  });
};
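On the server side, the Django end of this can be a small view that renders just the requested slice of posts as an HTML fragment. Here's a rough sketch; the model name, template path and URL pattern are made up, not my actual code:

# urls.py (hypothetical pattern for the /rest/2/10/ URL used above)
# url(r'^rest/(?P<start>\d+)/(?P<end>\d+)/$', rest_of_homepage),

from django.shortcuts import render

def rest_of_homepage(request, start, end):
    # fetch only the posts below the fold and return them as an HTML fragment
    posts = BlogPost.objects.order_by('-pub_date')[int(start):int(end)]
    return render(request, 'homepage/rest_of_posts.html', {'posts': posts})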

The user probably won't notice a huge difference if she avoids looking at the loading spinner of her browser. Only if she is really really fast at scrolling down will she notice that the rest of the page (about 80% of its vertical space) comes in a little bit later.

So, did it work?

I hope so! The theory is sound. However, my home page is, unlike an Amazon.com product page, very sparse. The page weighs a total of 77 KB (excluding external resources) but now only the first 25 KB is loaded up front and the rest later.

Here's a measurement before and one after. It's kinda hard to compare because "fluctuations" in network I/O make measurements like this quite unpredictable. Also, there are various odd requests, like New Relic and Google Analytics, which cloud the waterfall view. However, what really matters is in the "First View" of the after measurement. If you look closely you'll see that a bunch of images now aren't loaded until after the "Document Complete" event has fired. That, to me, is a big win.

Below the fold

If you're interested in how it was done, check out this changeset.

Fastest database for Tornado

October 9, 2013
9 comments Python, Tornado

When you use a web framework like Tornado, which is single-threaded with an event loop (like Node.js, if you're familiar with that), and you need persistence (i.e. a database), there is one important question you need to ask yourself:

Is the query fast enough that I don't need to do it asynchronously?

If it's going to be a really fast query (for example, selecting a small record set by an indexed key) it'll be quicker to just do it in a blocking fashion, since there's less CPU work spent jumping between events.

However, if the query is going to be potentially slow (like a complex and data-intensive report) it's better to execute it asynchronously, do something else, and continue once the database comes back with a result. If you don't, all other requests to your web server might time out.
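To make the distinction concrete, here's a rough sketch of the two styles in Tornado request handlers. I'm using pymongo for the blocking case and Motor for the non-blocking case, and assuming Tornado 3.1+ and a Motor version where queries return yieldable futures; the database and collection names are made up:

import pymongo
import motor
from tornado import gen, web

blocking_db = pymongo.MongoClient()['mydb']   # hypothetical database name
async_db = motor.MotorClient()['mydb']

class FastLookupHandler(web.RequestHandler):
    def get(self, comment_id):
        # cheap lookup on an indexed key; doing it blocking is fine
        doc = blocking_db.comments.find_one({'oid': comment_id})
        self.write({'found': bool(doc)})

class SlowReportHandler(web.RequestHandler):
    @gen.coroutine
    def get(self):
        # potentially slow query; yield so the IOLoop can serve other requests meanwhile
        docs = yield async_db.comments.find({'approved': True}).to_list(length=1000)
        self.write({'count': len(docs)})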

Another important question whenever you work with a database is:

Would it be a disaster if something you intended to store ends up not getting stored on disk?

This question is related to the D in ACID and doesn't have anything specific to do with Tornado. However, the reason you're using Tornado is probably that it's much more performant than more convenient alternatives like Django. So, if performance is so important, are durable writes important too?

Let's cut to the chase... I wanted to see how different databases perform when integrated with Tornado. But let's not just look at different databases; let's also evaluate different ways of using them: either blocking or non-blocking.

What the benchmark does is:

  • On one single Python process...
  • For each database engine...
  • Create X records of something containing a string, a datetime, a list and a floating point number...
  • Edit each of these records which will require a fetch and an update...
  • Delete each of these records...

I can vary the number of records ("X") and sum the total wall clock time it takes for each database engine to complete all of these tasks. That way you get an insert, a select, an update and a delete. Realistically, it's likely you'll get a lot more selects than any of the other operations.
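Conceptually, the per-engine loop looks something like this sketch. The create/get/update/delete interface here is hypothetical; the real code is in the repo linked further down:

import time
import datetime

def benchmark_engine(engine, how_many):
    # `engine` is a hypothetical wrapper exposing create/get/update/delete
    t0 = time.time()
    ids = []
    for i in range(how_many):
        ids.append(engine.create({
            'title': 'Talk #%d' % i,
            'when': datetime.datetime.now(),
            'tags': ['tornado', 'benchmark'],
            'duration': 0.5 + i,
        }))
    for _id in ids:
        doc = engine.get(_id)      # the fetch...
        doc['duration'] += 1.0
        engine.update(_id, doc)    # ...and the update
    for _id in ids:
        engine.delete(_id)
    return time.time() - t0       # total wall clock time for this engine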

And the winner is:

pymongo!! Using the blocking version without doing safe writes.

Fastest database for Tornado

Let me explain some of those engines:

  • pymongo is the blocking pure python engine
  • with redis, toredis and memcache, a document ID is generated with uuid4 and the document is converted to JSON and stored under that key (see the sketch after this list)
  • toredis is a redis wrapper for Tornado
  • when it says (safe) on the engine it means MongoDB is told not to respond until it has, with some confidence, written the data
  • motor is an asynchronous MongoDB driver specifically for Tornado
  • MySQL doesn't support arrays (unlike PostgreSQL) so instead the tags field is stored as text and transformed back and forth as JSON
  • None of these databases have been tuned for performance. They're all fresh out-of-the-box installs on OS X with Homebrew
  • None of these databases have indexes, apart from Elasticsearch where everything is indexed
  • momoko is an awesome wrapper for psycopg2 which works asynchronously specifically with Tornado
  • memcache is not persistent but I wanted to include it as a reference
  • All JSON encoding and decoding is done using ultrajson which should work to memcache, redis, toredis and mysql's advantage.
  • mongokit is a thin wrapper on pymongo that makes it feel more like an ORM
  • A lot of these can be optimized by doing bulk operations but I don't think that's fair
  • I don't yet have a way of measuring memory usage for each driver+engine but that's not really what this blog post is about
  • I'd love to do more work on running these benchmarks with concurrent hits to the server. However, with blocking drivers each request (other than the first one) would have to sit there and wait, so the user experience would be poor, but the total time wouldn't be any faster
  • I use the official elasticsearch driver but am curious to also add Tornado-es some day which will do asynchronous HTTP calls over to ES.
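For the key-value engines, the uuid4-plus-JSON approach mentioned in the list above boils down to something like this sketch (using redis-py and ujson; the actual benchmark code differs in the details):

import uuid
import redis
import ujson  # "ultrajson"

r = redis.StrictRedis()  # fresh out-of-the-box install on localhost

def create(doc):
    # generate a document ID, serialize the document to JSON and store it under that key
    _id = uuid.uuid4().hex
    r.set(_id, ujson.dumps(doc))
    return _id

def get(_id):
    raw = r.get(_id)
    return ujson.loads(raw) if raw is not None else None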

You can run the benchmark yourself

The code is here on github. The following steps should work:

$ virtualenv fastestdb
$ source fastestdb/bin/activate
$ git clone https://github.com/peterbe/fastestdb.git
$ cd fastestdb
$ pip install -r requirements.txt
$ python tornado_app.py

Then fire up http://localhost:8000/benchmark?how_many=10 and see if you can get it running.

Note: You might need to mess around with some of the hardcoded connection details in the file tornado_app.py.

Discussion

Before the lynch mob of Hacker News kills me for saying something positive about MongoDB: I'm perfectly aware of the discussions about large datasets and the complexities of managing them. Any flametroll comments about "web scale" will be deleted.

I think MongoDB does a really good job here. It's faster than Redis and Memcache, but unlike those key-value stores, with MongoDB you can, if you need to, do actual queries (e.g. select all talks where the duration is greater than 0.5). MongoDB does its serialization between Python and the database using a binary format called BSON but, mind you, the Redis and Memcache engines also get to use a fast JSON encoder/decoder (ultrajson).

The conclusion is: be aware of what you want to do with your data, and where performance versus durability matters.

What's next

Some of those drivers will work on PyPy, which I'm looking forward to testing. That should work with cffi-based drivers, like psycopg2cffi for PostgreSQL, for example.

Also, an asynchronous version of elasticsearch should be interesting.

UPDATE 1

Today I installed RethinkDB 2.0 and included it in the test.

With RethinkDB 2.0

It was added in this commit and improved in this one.

I've been talking to the core team at RethinkDB to try to fix this.

10 years of blogging

September 19, 2013
2 comments This site

I'm about two months late with this post, but in June 2003 I posted my first ever blog post.

My first website was launched in 1997 but that one is long lost. The next version, which actually used a database and a real web framework, was launched in 2001, and this is the oldest screenshot I could find.

A really old version of my blog
Back then the site was built in Zope which, at the time, was the coolest shit you could possibly use. Back in 2003 I was renting a room in an apartment in London while I was studying at City University. The broadband (Americans know this as DSL) we had came with a static IP address, so I could basically tie my domain name directly to my bedroom. If you were born in the nineties or later you wouldn't remember this, but for almost 20 years you could either buy a laptop (small but slow) or a stationary computer (clunky but fast), and the laptop I was running on was no exception. Not to mention it was an abandoned laptop too. I think it had about 8 MB of RAM. I ran a stripped-down version of Debian on it without any graphical interface. I managed the code by scp'ing files onto it from my Windows computer.

Anyway, running on a home DSL line on a rusty old laptop blinking away under my bed meant the site would be ultra-slow if I didn't pre-optimize it. And that I did. The site had a Squid cache in front of it, and the HTML, CSS and JavaScript were compressed by a script I wrote called slimmer.

Back in 2003 blogging was getting hotter than celebrity spotting, and I was very much interested in something that later became known as "SEO". The rumor at the time was that "blogs" got penalized by Google because blogs usually just re-posted stuff from real web pages. So I decided to prefix all my content with the word "plog". It was a mix of "p" for Peter and sufficiently different from the word "blog".

In the first couple of years of blogging I would blog about all sorts of stuff that caught my interest. Not just genuine thoughts or real technology notes but any fun link I came across. That later became a massive trend (and still is, I guess) with giants like Digg and Reddit, so I stopped doing it on my own blog. In the last 7 years (give or take) I have only blogged about things that are genuinely close to my heart or something I've actually worked on.

Some stats:

Total number of blog posts: 949
Total number of approved blog comments: 8,086
Number of email addresses collected: 4,292
Maximum number of comments on any one post: 2,749
Number of Cease or Desist letters received: 1

To me, blogging used to be a way of shouting out to the world what I found interesting, in the hope that you'd also find it interesting and thank me for finding it. Now it's a way for me to either document something I've learned recently or make some announcement related to the technical things I work on.
I wonder how this will change for me in the next 10 years.

In Python you sort with a tuple

June 14, 2013
9 comments Python

My colleague Axel Hecht showed me something I didn't know about sorting in Python.

In Python you can sort with a tuple. It's best illustrated with a simple example:


>>> items = [(1, 'B'), (1, 'A'), (2, 'A'), (0, 'B'), (0, 'a')]
>>> sorted(items)
[(0, 'B'), (0, 'a'), (1, 'A'), (1, 'B'), (2, 'A')]

By default, the sort() method and the sorted() built-in function notice that the items are tuples, so they sort on the first element first and on the second element second.

However, notice how you get (0, 'B') appearing before (0, 'a'). That's because upper-case characters come before lower-case characters. However, suppose you wanted to apply some "humanization" to that and sort case-insensitively. You might try:


>>> sorted(items, key=str.lower)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: descriptor 'lower' requires a 'str' object but received a 'tuple'

which is an error we deserve, because the key function is handed the whole tuple, and .lower() wouldn't work on the integer first part of each tuple anyway.

We could try to write a lambda function (e.g. sorted(items, key=lambda x: x.lower() if isinstance(x, str) else x)) but that's not going to work, because the key function receives each whole tuple, never the individual strings, so the isinstance check never matches.

Without further ado, here's how you do it. A lambda function that returns a tuple:


>>> sorted(items, key=lambda x: (x[0], x[1].lower()))
[(0, 'a'), (0, 'B'), (1, 'A'), (1, 'B'), (2, 'A')]

And there you have it! Thanks for sharing Axel!

As a bonus item for people still reading...
I'm sure you know that you can reverse a sort order simply by passing in sorted(items, reverse=True, ...) but what if you want different directions depending on the key you're sorting on?

Using the technique of a lambda function that returns a tuple, here's how we sort a slightly more advanced structure:


>>> peeps = [{'name': 'Bill', 'salary': 1000}, {'name': 'Bill', 'salary': 500}, {'name': 'Ted', 'salary': 500}]

And now, sort with a lambda function returning a tuple:


>>> sorted(peeps, key=lambda x: (x['name'], x['salary']))
[{'salary': 500, 'name': 'Bill'}, {'salary': 1000, 'name': 'Bill'}, {'salary': 500, 'name': 'Ted'}]

Makes sense, right? Bill comes before Ted and 500 comes before 1000. But how do you sort it like that on the name but reverse on the salary? Simple, negate it:


>>> sorted(peeps, key=lambda x: (x['name'], -x['salary']))
[{'salary': 1000, 'name': 'Bill'}, {'salary': 500, 'name': 'Bill'}, {'salary': 500, 'name': 'Ted'}]

UPDATE

Webucator has made a video explaining this blog post.

Thanks Nat!

premailer now excludes pseudo selectors by default

May 27, 2013
0 comments Python, Web development

Thanks to Igor, who emailed me and made me aware that you can't put pseudo-classes in style attributes in HTML. I.e. this does not work:


<a href="#" style="color:pink :hover{color:red}">Sample Link</a>

See for yourself: Sample Link

Note how it does not become red when you hover over the link above.
This is what premailer used to do. Until yesterday.

BEFORE:


>>> from premailer import transform
>>> print transform('''
... <html>
... <style>
... a { color: pink }
... a:hover { color: red }
... </style>
... <a href="#">Sample Link</a>
... </html>
... ''')
<html><head><a href="#" style="{color:pink} :hover{color:red}">Sample Link</a></head></html>

AFTER:


>>> from premailer import transform
>>> print transform('''
... <html>
... <style>
... a { color: pink }
... a:hover { color: red }
... </style>
... <a href="#">Sample Link</a>
... </html>
... ''')
<html><head>
<style>a:hover {color:red}</style>
<a href="#" style="color:pink">Sample Link</a>
</head></html>

That's because the new default is to exclude pseudo-classes.
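If you want the old behaviour back, you should be able to switch it off on the Premailer class directly (assuming the constructor keyword is exclude_pseudoclasses):

>>> from premailer import Premailer
>>> html = '<html><style>a {color:pink} a:hover {color:red}</style><a href="#">Sample Link</a></html>'
>>> print Premailer(html, exclude_pseudoclasses=False).transform()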

Thanks Igor for making me aware!

What stumped me about AngularJS

May 12, 2013
22 comments AngularJS, JavaScript

So I've now built my first real application using AngularJS. It's a fun side-project which my wife and I use to track what we spend money on. It's not a work project but it's also not another Todo list application. In fact, the application existed before as a typical jQuery app. So, I knew exactly what I needed to build but this time trying to avoid jQuery as much as I possibly could.

The first jQuery-based version is here and, although I'm hesitant to share this beginner creation, here's the AngularJS version.

The following is a list of stumbling blocks and other things that stumped me. Hopefully this list will help others who are also new to AngularJS, and perhaps the gods of AngularJS can see what confuses beginners like me.

1. AJAX doesn't work like jQuery

Similar to Backbone, I think, the default is to send GET and POST requests with the data as a JSON blob in the request body. jQuery, by default, sends it as application/x-www-form-urlencoded. I like that because that's how most of my back ends work (e.g. request.GET.get('variable') in Django). I ended up pasting in this (code below) to get back what I'm familiar with:


module.config(function ($httpProvider) {
  // serialize request data as form-encoded (uses jQuery's $.param) instead of JSON
  $httpProvider.defaults.transformRequest = function(data){
    if (data === undefined) {
      return data;
    }
    return $.param(data);
  };
  $httpProvider.defaults.headers.post['Content-Type'] = ''
    + 'application/x-www-form-urlencoded; charset=UTF-8';
});

2. App/Module configuration confuses me

The whole concept of needing to define the code as an app or a module confused me. I think it all starts to make sense now. Basically, you don't need to think about "modules" until you start to split distinct things into separate files. To get started, you don't need it. At least not for simple applications that just have one chunk of business logic code.

Also, it's confusing why the name of the app is important and why I even need a name.

3. How to do basic .show() and .hide() is handled by the data

In jQuery, you control the visibility of elements by working directly on the elements, based on the data. In AngularJS you control the visibility by tying it to the data and then manipulating the data. It's hard to put your finger on, but I'm so used to looking at the data and then deciding on the elements' visibility. This is not an uncommon pattern in a jQuery app:


<p class="bench-press-question">
  <label>How much can you bench press?</label>
  <input name="bench_press_max">
</p>

if (data.user_info.gender == 'male') {
  $('.bench-press-question input').val(data.user_info.bench_press_max);
  $('.bench-press-question').show();
}

In AngularJS that would instead look something like this:


<p ng-show="male">
  <label>How much can you bench press?</label>
  <input name="bench_press_max" ng-model="bench_press_max">
</p>

if (data.user_info.gender == 'male') {
  $scope.male = true;
  $scope.bench_press_max = data.user_info.bench_press_max;
}

I know this can probably be expressed in some smarter way but what made me uneasy is that I mix stuff into the data to do visual things.

4. How do I use controllers that "manage" the whole page?

I like the ng-controller="MyController" thing because it makes it obvious where your "working environment" is, as opposed to working with the whole document. But what do I do if I need to tie data to several different places in the document?

To remedy this for myself I created a controller that manages, basically, the whole body. If I don't, I can't manage scope data that is scattered across totally different sections of the page.

I know it's a weak excuse but the code I ended up with has one massive controller for everything on the page. That can't be right.

5. setTimeout() doesn't quite work as you'd expect

If you do this in AngularJS it won't update as you'd expect:


<p class="status-message" ng-show="message">{{ message }}</p>

$scope.message = 'Changes saved!';
setTimeout(function() {
  $scope.message = null;
}, 5 * 1000);

What you have to do, once you know it, is this:


function MyController($scope, $timeout) {
  ...
  $scope.message = 'Changes saved!'; 
  $timeout(function() {
    $scope.message = null;
  }, 5 * 1000);
}

It's not too bad but I couldn't see this until I had googled some Stack Overflow questions.

6. Autocompleted password fields don't update the scope

Due to this bug, when someone fills in a username and password form using autocomplete, the password field doesn't update its data.

Let me explain; you have a username and password form. The user types in her username, her browser now automatically fills in the password field too, and she's ready to submit. This simply does not work in AngularJS yet. So, if you have this code...:


<form>
<input name="username" ng-model="username" placeholder="Username">
<input type="password" name="password" ng-model="password" placeholder="Password">
<a class="button button-block" ng-click="signin_submit()">Submit</a>
</form>

$scope.signin_submit = function() {
  $http.post('/signin', {username: $scope.username, password: $scope.password})
    .success(function(data) {
      console.log('Signed in!');
    });
  return false;
};

It simply doesn't work! I'll leave it to the reader to explore what available jQuery-helped hacks you can use.

7. Events for selection in a <select> tag are weird

This is one of those cases where readers might laugh at me but I just couldn't see how else to do it.
First, let me show you how I'd do it in jQuery:


$('select[name="choice"]').change(function() {
  if ($(this).val() == 'other') {
    // the <option value="other">Other...</option> option was chosen 
  }
});

Here's how I solved it in AngularJS:


$scope.$watch('choice', function(value) {
  if (value == 'other') {
    // the <option value="other">Other...</option> option was chosen 
  }
});

What's also strange is that there's nothing in the API documentation about $watch.

8. Controllers "dependency" injection is, by default, dependent on the controller's arguments

To have access to services like $http and $timeout in a controller, you list them as arguments, like this:


function MyController($scope, $http, $timeout) { 
  ...

It means that it's going to work equally well if you do:


function MyController($scope, $timeout, $http) {  // order swapped
  ...

That's fine. Sort of. Except that this breaks minification so you have to do it this way:


var MyController = ['$scope', '$http', '$timeout', function($scope, $http, $timeout) {
  ...

Ugly! The first form depends on the interpreter inspecting the names of the arguments. The second form depends on listing the dependencies as strings.

The more correct way to do it is using $inject. Like this:


MyController.$inject = ['$scope', '$http', '$timeout'];
function MyController($scope, $http, $timeout) {
  ...

Still ugly, because it depends on them being strings. But why isn't this the one and only way to do it in the documentation? These days, no application is worth its salt if it isn't minifiable.

9. Is it "angular" or "angularjs"?

Googling and referring to it as "angularjs" seems to yield better results.

This isn't a technical thing but rather something that's still in my head as I'm learning my way around.

In conclusion

I'm eager to write another blog post about how fun it has been to play with AngularJS. It's a fresh new way of doing things.

AngularJS code reminds me of the olden days when HTML no longer looked like HTML but instead like some document containing half of the business logic spread all over the place. I think I haven't fully grasped this new way of doing things.

From hopping around example code and documentation I've seen some outrageously complicated HTML for things I'm used to doing in JavaScript instead. I appreciate that the HTML is, after all, part of the visual presentation and not the data handling, but it still stumps me every time I see that what used to be one piece of functionality is now spread across two places (the JavaScript controller and the HTML directives).

I'm not giving up on AngularJS but I'll need to get a lot more comfortable with it before I use it in more serious applications.

Registration and sign-in by email verification

April 29, 2013
10 comments Web development

I was going to title this blog post "I don't want your stinkin' password!" but realized that this isn't the first site to rely entirely on OpenID, OAuth and the like.

On Around The World you can now log in with either your Google account, your Twitter account or simply by entering your email. It looks like this:

Sign-in screen
Screenshot of email

What's neat about this is that it works regardless of whether you've signed in before (a.k.a. logging in) or are new (a.k.a. registering).

What's not so neat about it is that people might not recognize it. We're so used to both registration forms and log-in forms asking for passwords. Often, you can quickly tell it's a log-in form because you expect two input fields.

Another slight flaw with this is that my emails usually take several tens of seconds to arrive, because they're sent asynchronously by a cron job. So people who enter their email address might get disappointed if they don't get the email immediately.

Anyway, let's wait and see if people actually use it. At least it means you don't really need a third party service and you don't need to type in a password.

The Poincaré Conjecture

April 22, 2013
1 comment Mathematics, Books

The Poincaré Conjecture
I just finished a wonderful book, The Poincaré Conjecture: In Search of the Shape of the Universe by Donal O'Shea, and because I'm not very good at writing I'm just going to quote a good chunk:

Mathematics reminds us how much we depend on one another, both on the insight and imagination of those who have lived before us, and on those who comprise the social and cultural institutions, schools and universities, that give children an education that allows them to fully engage the ideas of their times. It is up to all of us to ensure that the legacy of our times is a society that stewards and develops our common mathematical inheritance. For mathematics is one of the quintessentially human activities that makes us more fully human and, in so doing, leads us to transcend ourselves.

Looking up at the night sky, at the distant stars and galaxies and clusters of galaxies, it is inconceivable to me that there are not other intelligences out there, some far different than us. Hundreds of years hence, if we ever develop technologies that enable us to meet and to communicate, we will discover that they will know, or want to know, that the only compact three-dimensional manifold in which every loop can be shrunk to a point is a three-sphere. Count on it.

There were lots of mathematical concepts in this book that I didn't understand, but these two paragraphs I surely understood.