Filtered by Linux


Upgrading to Ubuntu Lucid Lynx and downgrading to Python2.4 and Python2.5

May 11, 2010
5 comments Linux

So I upgraded to the latest Ubuntu, Lucid Lynx 10.04, the other day and to my horror it removed Python 2.4 and Python 2.5. I rely more on those programs than I do on some silly Facebook-connecting social widget crap. On my laptop I have lots of Zopes requiring Python 2.4 and about 10 active Django projects that rely on Python 2.5. This fuckup by Ubuntu caused me to write this complaint.

So my esteemed colleague and Linux wiz Jan Kokoska helped me set things straight by showing me how to downgrade these packages to the Karmic versions and how to pin them in the apt preferences. First of all, make your /etc/apt/sources.list look like this:


deb http://gb.archive.ubuntu.com/ubuntu/ karmic main restricted universe multiverse
deb-src http://gb.archive.ubuntu.com/ubuntu/ karmic main restricted universe multiverse

deb http://gb.archive.ubuntu.com/ubuntu/ karmic-updates main restricted universe multiverse
deb-src http://gb.archive.ubuntu.com/ubuntu/ karmic-updates main restricted universe multiverse

deb http://gb.archive.ubuntu.com/ubuntu/ karmic-backports main restricted universe multiverse
deb-src http://gb.archive.ubuntu.com/ubuntu/ karmic-backports main restricted universe multiverse

deb http://security.ubuntu.com/ubuntu karmic-security main restricted universe multiverse
deb-src http://security.ubuntu.com/ubuntu karmic-security main restricted universe multiverse

deb http://gb.archive.ubuntu.com/ubuntu/ lucid main restricted universe multiverse
deb-src http://gb.archive.ubuntu.com/ubuntu/ lucid main restricted universe multiverse

deb http://gb.archive.ubuntu.com/ubuntu/ lucid-updates main restricted universe multiverse
deb-src http://gb.archive.ubuntu.com/ubuntu/ lucid-updates main restricted universe multiverse

deb http://gb.archive.ubuntu.com/ubuntu/ lucid-backports main restricted universe multiverse
deb-src http://gb.archive.ubuntu.com/ubuntu/ lucid-backports main restricted universe multiverse

deb http://security.ubuntu.com/ubuntu lucid-security main restricted universe multiverse
deb-src http://security.ubuntu.com/ubuntu lucid-security main restricted universe multiverse

If you know what you're doing and have other additional sources in there, keep those as they are. The next thing to do is to update and upgrade:


# apt-get update
# apt-get dist-upgrade

You should now see that it intends to upgrade a bunch of juicy packages, python2.4-dev for example. To check that python2.4 is now coming from Karmic, run this:


$ apt-cache madison python2.4

Now for the trick that really makes the difference:


# apt-get install python2.4=2.4.6-1ubuntu3.2.9.10.1 python2.4-dbg=2.4.6-1ubuntu3.2.9.10.1 \
python2.4-dev=2.4.6-1ubuntu3.2.9.10.1 python2.4-doc=2.4.6-1ubuntu3.2.9.10.1 \
python2.4-minimal=2.4.6-1ubuntu3.2.9.10.1

The command is quite self-explanatory: you use the equals sign to say exactly which version you want to install. Suppose you now want to install something like python-profiler for your Python 2.4, since it isn't available as a PyPI package. First, find out which version you need to install:


$ apt-cache madison python-profiler | grep karmic

From that you'll get a list of versions. Choose the one from karmic-updates or karmic-security. Then install it:


# apt-get install python-profiler=2.6.4-0ubuntu1

Now, to avoid this causing a conflict and thus being removed the next time you do an upgrade, you need to pin it. Create a file called /etc/apt/preferences and put the following into it:


Package: python-profiler
Pin: version 2.6.4-0ubuntu1
Pin-Priority: 999
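
You'll presumably want to pin the downgraded python2.4 packages too, or the next dist-upgrade will try to remove them again. A sketch of what that could look like, one stanza per package, using the version wildcard Jan mentions below (double-check the version against apt-cache madison first):


Package: python2.4
Pin: version 2.4.6-1ubuntu3*
Pin-Priority: 999

Package: python2.4-minimal
Pin: version 2.4.6-1ubuntu3*
Pin-Priority: 999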

And that concludes it. A word of warning from Jan:

"he slight problem is that with this setup, suppose a big security flaw was found in python-imaging and got patched in karmic that is still supported... you wouldn't get the package update. That is because it's pinned and while asterisks can be used in the version number, we don't know in advance what the version will match and what the Lucid version that we don't want will match"

"so you basically lose security upgrades for affected packages"

"minor annoyance when you have one or two packages on a laptop, but a big deal if you have a dozen packages on 100 VMs on server"

Writing about this helps me remember it for the next time I need it. Hopefully it will also help other people who get bitten by this. And hopefully it will shame the Canonical guys into action so that next time they don't rush their deprecation process and actually think about who's using their products. I bet a majority of Ubuntu's users care more about programming than about the ability to buy music on Ubuntu One or whatever it's called.

fcgi vs. gunicorn vs. uWSGI

April 9, 2010
29 comments Python, Django, Linux

uWSGI is the latest and greatest WSGI server, promising to be the fastest possible way to run Nginx + Django. Proof here. But! Is it that simple? Especially if you're involving Django herself.

So I set out to benchmark good old threaded fcgi and gunicorn, and then, with a source-compiled Nginx with the uwsgi module baked in, I also benchmarked uWSGI. The first mistake I made was testing a Django view that was using sessions and other crap. I profiled the view to make sure it wouldn't be the bottleneck, as it appeared to take only 0.02 seconds each time. However, with fcgi, gunicorn and uwsgi alike I kept being stuck at about 50 requests per second. Why? Because 1/0.02 = 50! Clearly the slowness of the Django view was the bottleneck (for the curious: what took all of those 0.02 seconds was creating new session keys and putting them into the database).

So I wrote a really dumb Django view with no sessions middleware enabled. Now we're getting some interesting numbers:


fcgi (threaded)              640 r/s
fcgi (prefork 4 processes)   240 r/s (*)
gunicorn (2 workers)         1100 r/s
gunicorn (5 workers)         1300 r/s
gunicorn (10 workers)        1200 r/s (?!?)
uwsgi (2 workers)            1800 r/s
uwsgi (5 workers)            2100 r/s
uwsgi (10 workers)           2300 r/s

(* this made my computer exceptionally sluggish as CPU usage went through the roof)
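
For reference, the "really dumb" view was of this general shape (a minimal sketch, not the exact code benchmarked):


from django.http import HttpResponse

def dummy(request):
    # no sessions middleware, no database work, no template rendering
    return HttpResponse("hello")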

Truncated! Read the rest by clicking the link below.

Guake, not Yakuake or Yeahconsole

January 23, 2010
4 comments Linux

I've been a big fan of Yakuake for a long time. It's a terminal you have open all the time in Linux that is shown and hidden, over any other windows, by a simple hit of the F12 key.

But more recent versions of Yakuake have become really slow. It sometimes takes 2-3 seconds from the F12 press until you can type in the terminal. So I uninstalled it and tried Yeahconsole, but I uninstalled that one equally fast once I understood it was broken and didn't work at all, despite being in the Xubuntu apt repositories.

Last but not least I ended up with Guake, which not only works but works really, really fast. Screenshots here

What makes my website slow? DNS

October 23, 2009
14 comments This site, Linux

Pagetest, a web page performance test, is a great tool for doing what Firebug does, but not in your browser. Pagetest can run repeated tests to iron out any outliers. An alternative is Pingdom Tools, which has some nifty sorting functions but is generally the same thing.

So I ran the homepage of my website through it and the conclusion was: Wow! Half the time is spent on DNS lookup!

(Screenshots of the first, second and third test runs)

The server it sits on is located here in London, UK, and the Pagetest test was made from a server also here in the UK. Needless to say, I was disappointed. Is there anything I can do about that? I've spent so much time configuring Squid, Varnish and Nginx, and yet the biggest chunk is DNS lookup.
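
If you want to check the raw lookup cost yourself, dig reports it per query; it prints a line like ";; Query time: 28 msec" (assuming dig is installed; swap in your own domain):


$ dig example.com | grep "Query time"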

In a pseudo-optimistic fashion I'm hoping it's because I've made the site so fast that this is what's left when you've done all you can do. I'm hoping to learn some more about this "dilemma" without having to read any lengthy manuals. Pointers welcomed.

To sub-select or not sub-select in PostgreSQL

August 31, 2009
0 comments Linux

I have a query that looks like this (simplified for the sake of brevity):


SELECT
  gl.id,
  miles_between_lat_long(46.519582, 6.632121,
                         gl.latitude::numeric, gl.longitude::numeric
                        ) AS distance
FROM
  kungfuperson gl
WHERE
  miles_between_lat_long(46.519582, 6.632121,
                         gl.latitude::numeric, gl.longitude::numeric
                        ) < 5000

ORDER BY distance ASC;

It basically finds other entries in a table (which has columns for latitude and longitude) but only returns those that are within a certain distance of a known latitude/longitude point. Running this query on my small table takes about 7 milliseconds (measured with EXPLAIN ANALYZE).

So I thought, how about wrapping it in a sub-select so that the function miles_between_lat_long() is only called once per row instead of twice? Surely that would make it a lot faster. I accept that it wouldn't be twice as fast, because wrapping it in a sub-select also adds some extra computation. Here's the "improved" version:


SELECT * FROM (
SELECT
  gl.id,
  miles_between_lat_long(46.519582, 6.632121,
                         gl.latitude::numeric, gl.longitude::numeric
                        ) AS distance
FROM 
 kungfuperson gl
) AS ss
WHERE ss.distance < 5000
ORDER BY ss.distance ASC;

To test it I wrote a little script that runs these two versions in random order, many times (about 50 each), and then compares the averages.
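
That script is in the truncated part of the post, but the general shape of such a benchmark is roughly this (a hypothetical sketch, not the actual script; it assumes psycopg2 and the two queries saved to files):


import random, time
import psycopg2

# the two query variants from above, saved as plain files (hypothetical names)
queries = {
    'plain': open('plain.sql').read(),
    'subselect': open('subselect.sql').read(),
}
timings = dict((name, []) for name in queries)

conn = psycopg2.connect('dbname=test_db')
cursor = conn.cursor()

# run each variant about 50 times, randomly interleaved
order = queries.keys() * 50
random.shuffle(order)
for name in order:
    t0 = time.time()
    cursor.execute(queries[name])
    cursor.fetchall()
    timings[name].append(time.time() - t0)

for name, times in timings.items():
    print name, sum(times) / len(times)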

Truncated! Read the rest by clicking the link below.

gg - wrapping git-grep

August 11, 2009
0 comments Linux

I've grown quite addicted to this and find that it's saving me tonnes of milliseconds every day. First of all, I've made this little script and put it in my bin directory as ~/bin/gg:


#!/usr/bin/python
import sys, os
args = sys.argv[1:]
# a '-i' anywhere in the arguments means case-insensitive search
i = False
if '-i' in args:
    i = True
    args.remove('-i')
# the last argument is the search pattern; anything before it
# (e.g. paths) is passed straight through to git grep
pattern = args[-1]
extra_args = ''
if len(args) > 1:
    extra_args = ' '.join(args[:-1])
# -n makes git grep print line numbers
if i:
    param = "-in"
else:
    param = "-n"
cmd = "git grep %s %s '%s'" % (param, extra_args, pattern)
os.system(cmd)

Basically, it's just a lazy shorthand for git grep ("Look for specified patterns in the working tree files"). Now I can do this:


peterbe@trillian:~/MoneyVillage2 $ gg getDIYPackURL
Homesite.py:526:    def getDIYPackURL(self):
zpt/homepage/index_html.zpt:78:       tal:attributes="href here/getDIYPackURL">Get your free trial here</
zpt/moneyconcerns/index_html.zpt:36:       tal:attributes="href here/getDIYPackURL">Get your free trial h
zpt/moneyconcerns/index_html.zpt:50:          <p><a tal:attributes="href here/getDIYPackURL" class="makea
(END) 

It's not much faster than normal grep but it automatically filters out junk. Obviously it doesn't help you when searching in files you haven't added to the repository yet.

Sequences in PostgreSQL and rolling back transactions

May 12, 2009
0 comments Linux

This behavior bit me today and caused me some pain, so hopefully sharing it can help someone else avoid the same pitfall.

Basically, I use Zope to manage a PostgreSQL database, and since Zope is 100% transactional it rolls back queries when exceptions occur. That's great, but what I didn't know is that when it rolls back, it doesn't roll back the sequences. Makes sense in retrospect, I guess. Here's proof of that:


test_db=# create table "foo" (id serial primary key, name varchar(10));
CREATE TABLE
test_db=# insert into foo(name) values('Peter');
INSERT 0 1
test_db=# select * from foo;
 id | name  
----+-------
  1 | Peter
(1 row)

test_db=#  select nextval('foo_id_seq');
 nextval 
---------
       2
(1 row)

test_db=# begin;
BEGIN
test_db=# insert into foo(id, name) values(2, 'Sonic');
INSERT 0 1
test_db=# rollback;
ROLLBACK
test_db=#  select nextval('foo_id_seq');
 nextval 
---------
       3
(1 row)

In my application I often use the sequences to predict what the auto-generated new ID is going to be, for things like redirecting or updating some other tables. As I wasn't expecting this behavior, it caused a bug in my web app.
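
For what it's worth, one way to sidestep the prediction altogether (assuming PostgreSQL 8.2 or later) is to have the INSERT itself report the ID it actually used. Continuing the session above:


test_db=# insert into foo(name) values('Sonic') returning id;
 id 
----
  4
(1 row)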

Git + Twitter = Friedcode

April 22, 2009
10 comments Python, Linux

I've now written my first Git hook. For the people who don't know what Git is: you have either lived under a rock for the past few years or you're not into computer programming at all.

The hook is a post-commit hook and what it does is send the last commit message up to a Twitter account I called "friedcode". I guess it's not entirely useful, but for those of you who want to be loud about your work and the progress you make, I guess it can make sense. Or if you're a team and you want a brief overview of what your teammates are up to. For me, it was mostly an experiment to try Git hooks and pytwitter. Here's how I did it:
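
(The actual code is in the full post, but the general shape of such a hook is roughly this. A hypothetical sketch, not the code from the post; the tweeting part depends on your library of choice:)


#!/usr/bin/python
# .git/hooks/post-commit (must be executable)
import subprocess

# grab the subject line of the most recent commit
message = subprocess.Popen(
    ['git', 'log', '-1', '--pretty=format:%s'],
    stdout=subprocess.PIPE).communicate()[0]

# here you'd post `message` to your twitter account of choice
print "Tweeting: %s" % message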

Truncated! Read the rest by clicking the link below.

Nginx vs. Squid

March 17, 2009
3 comments Linux

We all know that Nginx is fast and very lightweight. We also know that Squid is very fast too. But which one is faster?

In an insanely unscientific way I added some rewrite rules to my current Nginx -> Squid -> Zope stack so that, for certain static content, Nginx could go straight to the filesystem (where the Zope product holds the static stuff) and bypass the proxy pass. Then I did a quick and simple benchmark with ab, comparing how fast each could serve a 700-byte GIF image:


squid: 2275.62 [#/sec] (mean)
nginx: 7059.45 [#/sec] (mean)
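
For reference, the kind of ab invocation meant here is something like this (hypothetical URL and request/concurrency numbers):


$ ab -n 1000 -c 10 http://localhost/images/logo.gif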

Truncated! Read the rest by clicking the link below.