January issue of Python Magazine now available for download

The January issue is live and ready for download from pythonmagazine.com.

This month’s cover article, from Alex Martelli, gives advice about how
and when to use regular expressions for parsing text, and when to use
other techniques instead.

Eugen Wintersberger has an excellent introduction to using ctypes.
I’ve been looking forward to reviewing his article for a while now, so
I’m glad to finally see it in print.

Dan Felts returns with an article about controlling the Nessus
security scanner using Python to talk to it over the network.

In his Welcome to Python column this month Mark Mruss covers
everything you need to know about iterators and generators.

And Steve Holden waxes philosophical about how, or whether, we should
be marketing Python the language and platform.

I have two pieces in the mag this month. First, an article about
treating your command line programs as objects by using CommandLineApp
to define their options and argument processing. And my column starts
the series I’ve mentioned on this blog previously by covering the
wealth of testing tools available in Python.

It’s another good issue, and we’re already well into the work of
producing the February edition as I write this. So go download the new
issue and send your feedback to us via the pythonmagazine.com web site
or in #pymag on irc.freenode.net.

Python development tools you can’t live without

I’m working on a series of columns for Python Magazine in which I
will be talking about development tools. The first “episode” appears in
the January 2008 issue and covers testing tools like frameworks and
runners. I have several more columns plotted out, but want to make sure
I cover the topics well and don’t miss out on mentioning a new or small
tool just because I haven’t heard about it myself.

So, I’m looking for feedback about the tools you just can’t (or don’t
want to) live without. Tell me all about your development environment,
editor, shell, debugger, IDE, version control system, etc. What
libraries (in addition to the standard library) do you use on a regular
basis? Is there any one thing that you would identify as being so
necessary for your Python work that if it was taken away you’d have to
give up and use another language? (And “the interpreter” is already on
the list, thanks.)

If you’re building a new development tool you think I should look at
for the series, post a link in a comment here or tag it with
pymagdifferent on del.icio.us and I’ll review it. I can’t promise
that everything will make it into the columns right away, but it will
eventually. Unusual or unique entries are more likely to be covered.

It seems like everyone and their brother is building an editor, so if
that’s your game at least tell me why you think yours is
better/different from all the others. There are some problems with text
editing that are not easily solved, so if you have a killer feature make
sure you point that out. That goes double for templating languages –
highlight your special strengths.

January PyATL Meetup

This past Thursday I made it back to another PyATL meeting to see
presentations on packaging and build tools for Python.

Brandon Rhodes started us out with a bit of background material on
how Python searches for importable modules and where they are usually installed.

Jeremy Jones talked about using setuptools and building eggs. I
usually distribute my packages as source, but the --tag-svn-revision
option looks pretty handy for building dev releases.

Noah Gift talked about virtualenv. The PDF of Noah’s
presentation is online. I have been using virtualenv for a while
now, and I don’t know how I ever lived without it. I use virtual
environments for every article I review for Python Magazine, since many
require packages I would not otherwise have installed on my system.
Creating separate environments lets me avoid cluttering up my main
installation with incompatible versions of tools.

Brandon wrapped things up with a presentation on the many benefits of
using buildout in development. The ability to reliably reproduce the
exact set of dependencies needed for a project really caught my
interest. I need to learn more about creating recipes to see how I can
use it effectively. We have a large Makefile-based build system at work,
since not everything we include in our packaging is Python. Using
buildout scripts might make (no pun intended) it easier to set up a new developer
on a clean system.

We recorded video of all of the presentations, and those should be
posted on the PyATL site soon. The presenters also recorded
screencasts as they were giving the talks, and those should be posted as well.

Testing Tools for Python

Test Driven Development and Test Automation are all the rage, and
Python developers have no shortage of tools for testing their own code.

This month I am starting a series of columns on tools for developing
in Python. I intend to cover areas such as version control, utility
modules, build, and packaging tools. Eventually I may even work up
the courage to address the issue of editors and IDEs while, I hope,
avoiding a religious war. But let’s start out by looking at tools for
testing your libraries and applications. This isn’t a comprehensive
survey, but it should give you some idea of the kinds of tools available.

Testing Frameworks

The standard library includes two separate testing frameworks for
writing automated tests: doctest and unittest. Each framework has
its own style that works better in certain circumstances. Each also
has proponents arguing in its favor. The discussion is not quite up
to the level of the vi vs. emacs argument, but it’s getting
there in some circles.

When run, doctest looks at the documentation strings for your modules,
classes, and methods and uses interpreter prompts (>>>) to
identify test cases. Almost anything you can do with the interactive
prompt from the interpreter can be saved as a test, making it easy to
create new tests as you experiment with and document your code. Since
doctest scans your documentation strings for tests, a nice benefit of
using it is your documentation is tested along with your code. This
is especially handy for library developers who need to provide
examples of using their library modules.
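As a minimal sketch (the function and its sample values are mine, not
from the article), a doctest is just an interpreter transcript embedded
in a docstring:

```python
def average(values):
    """Return the arithmetic mean of a sequence of numbers.

    The examples below double as tests:

    >>> average([1, 2, 3])
    2.0
    >>> average([10])
    10.0
    """
    return float(sum(values)) / len(values)

if __name__ == '__main__':
    # Scan this module's docstrings and run any >>> examples found.
    import doctest
    doctest.testmod()
```

Running the module directly produces no output when all of the examples
pass; pass -v on the command line to see each example as it is checked.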

For other situations, where you need more formalized tests or where
extra test fixtures are needed, the unittest module is a better
choice. The standard library version of unittest started out as part
of the PyUnit project.
The name PyUnit is an homage to its origin in the XUnit API, originally created by Kent Beck for use in
Smalltalk and now available for many other languages.

Tests are implemented using methods of classes derived from
unittest.TestCase. TestCase supports methods to configure
fixtures, run the tests, and then clean up after the test runs. The
separate methods for configuring and cleaning up the fixtures are
useful for extending your automated tests beyond the unit level, when
you might have several tests that depend on the same basic
configuration (such as having a few different objects which need to be
connected to one another, temporary files to create and clean up, or
even a database transaction to manage). Separating the fixture
management from the test clarifies the distinction between test code
and setup code and makes it easier to ensure that the fixtures are
cleaned up after a test failure.
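As an illustration (the class and its fixture are my own invention, not
from the article), a TestCase that shares one temporary-file fixture
between two tests might look like this:

```python
import os
import tempfile
import unittest


class TempFileTest(unittest.TestCase):
    """Two tests sharing the same temporary-file fixture."""

    def setUp(self):
        # Runs before *each* test method: build the fixture.
        fd, self.filename = tempfile.mkstemp()
        os.write(fd, b'hello')
        os.close(fd)

    def tearDown(self):
        # Runs after each test method, even when the test fails,
        # so the fixture is always cleaned up.
        os.unlink(self.filename)

    def test_contents(self):
        with open(self.filename, 'rb') as f:
            self.assertEqual(f.read(), b'hello')

    def test_size(self):
        self.assertEqual(os.path.getsize(self.filename), 5)


if __name__ == '__main__':
    # unittest's "main program" function runs every test in the module.
    unittest.main(exit=False)
```

Because setUp() and tearDown() wrap every test method, neither test has
to repeat the fixture code, and the file is removed even when an
assertion fails.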

Testing the Python Standard Library

Both test frameworks provide a function to serve as a main program to
make it easy to run the tests in a module. Running your tests one
module at a time does not scale up well when you start trying to run
all of the tests for a project as it grows to encompass several
packages. The Python standard library is a prime example of this.

All of the automated tests for the Python standard library are part of
the source distribution and can be found in the test package. Each standard
library module has at least one associated test file named after the
original module and prefixed with test_. The actual test code is
a mixture of doctest and unittest tests, depending on the module and
the nature of the tests. Some tests are easier to express using
doctest, but unittest is the preferred framework for new tests, and
most of the doctest tests are being converted. In fact, quite a few
are being converted as part of the Google Highly Open Participation
(GHOP) contest, which I discussed last month.

The test package also includes a tool for running the tests, called
regrtest. Since the standard library includes some
platform-specific packages and several other modules for which tests
might require special resources (audio hardware devices, for example),
the test.regrtest framework needed to take those requirements into
account. The solution was to extend the unittest framework to allow
tests with special requirements to be enabled or disabled depending on
the resources actually available at the time the tests are run. In
order to avoid false negatives, tests requiring special resources are
disabled by default, and must be enabled explicitly.

To run the standard library tests, just run regrtest.py from the
command line, inside the test directory. You can specify a single
module to test as an argument, or run all of the tests by omitting the module name:

$ pwd
$ python regrtest.py test_re.py
1 test OK.

The output reports that a single test was run, but that is a little
misleading. In fact it indicates that one module was run. Using
the -v option with regrtest.py will show the individual unit
tests that have been run, and their status.

To run tests with special resources enabled, use the -u option to
regrtest.py. For a complete list of the resource types, see the
help output (-h). This example illustrates running the curses
module tests, with curses disabled (the default):

$ python regrtest.py test_curses.py
test_curses skipped -- Use of the `curses'
resource not enabled
1 test skipped:
Those skips are all expected on darwin.

And now with curses enabled:

$ python regrtest.py -ucurses test_curses.py
[screen paints black and cursor moves around]
1 test OK.

Running Your Tests

The test.regrtest runner is not the only solution available for
managing large test suites. There are several alternative test
runners, including py.test, nose, and Proctor.

The first, py.test, works by scanning your source files for functions
or methods that start with test_ and running any that it finds.
It depends on your tests using assert, and so does not require a
special test class hierarchy or file layout. It has support for
fixtures at granularity ranging all the way from module to class, and
even the method level. It does not directly support the unittest
framework, and has only experimental support for doctest tests. One
unique feature is the ability for a test function to act as a
generator to produce additional tests. The example given on the
py.test web site illustrates this nicely:

def test_generative():
    for x in (42,17,49):
        yield check, x

def check(arg):
    assert arg % 7 == 0   # the second generated test fails!

Using a generator like this can save repetitive code in your tests.

Nose is billed as a “discovery-based unittest extension”. It scans
your test files to identify test cases to run automatically by
detecting classes derived from unittest.TestCase or matching a
name pattern. This lets you avoid writing explicit test suites by
hand. Configuration is handled through command line flags or a config
file, which makes it easy to set verbosity and other options
consistently for all of your projects. Nose can be extended through a
plugin API, and there are several nose-related packages available for
producing output in different formats, filtering or capturing output,
and otherwise adapting it to work the way you do.
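For instance, nose would pick up a file like the following by its name
alone; the file name, function names, and assertions here are
illustrative, not taken from the article:

```python
# test_strings.py : nose matches the "test" prefix on the file name,
# then collects any test_* functions or TestCase subclasses inside it,
# with no hand-written suite definition needed.

def test_upper():
    assert 'python'.upper() == 'PYTHON'

def test_split():
    assert 'a b c'.split() == ['a', 'b', 'c']
```

Running nosetests from the directory containing the file finds and runs
both functions automatically.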

Proctor is another tool that works by scanning your source files.
(Disclosure: I wrote Proctor.) It works closely with the unittest
framework, and has features for managing test suites spanning multiple
directories using tagging. I won’t go into Proctor too deeply here,
since I have an entire article dedicated to it
planned for the near future. I will say that it is less extensible
than nose because it lacks a plugin system, but it has been used in
production for several years now. It specializes in producing output
that is machine parsable, to make it easy to produce reports from test
logs with many tests (several thousand at a time). Although the
feature sets of nose and Proctor overlap, neither implements a clear
super-set of the other.

IDEs and GUIs

In addition to the command line test runners described here, any
number of editors and IDEs support running tests. There are too many
to name, but all have some form of quick access to the tests, usually
bound to a single keystroke. Feedback ranges from text output, to
HTML, to a progress bar style report.

I’ve recently started experimenting with TextMate as a development
editor, and I have to say I like the “PyMate” feature for running
scripts. The output is parsed and presented in an HTML pane, with
tracebacks linking back to the original source line in the editor.
This makes it easy to jump from tracebacks in the test output directly
to your source, making the edit-test-repeat cycle shorter. Other
editors with support for running test suites include Eclipse, WingIDE,
Eric, and Komodo – I’m sure that list could be a lot longer. Each
has its own particular take on finding, running, and presenting the
results of your tests. This subject deserves more in-depth discussion
than I can provide in a single month’s space, so look for more details
in a future column.

Domain-Specific Testing

The basic frameworks offer a strong foundation for testing, but at
times you will benefit from extensions for testing in a particular
problem domain. For example, both webunit and webtest add
web browser simulation features to the existing unittest framework.
Taking a different approach, twill
implements a domain-specific language for “navigating Web pages,
posting forms and asserting conditions”. It can be used from within a
Python module, so it can be embedded in your tests as well. Some of
the modern web application frameworks, such as Django and Zope,
include built-in support for testing of your code with an environment
configured by the framework.

Although web development is hot, not every application is necessarily
web-enabled. Testing GUI applications has traditionally required
special consideration. In the old days, support for driving GUI
toolkits programmatically was spotty and tests had to depend on image
captures to look for regression errors. Times are changing, though,
and modern toolkits with accessibility APIs are supported by tools like
dogtail and ldtp, which provide test frameworks for driving GUIs
without such limitations.

Code Coverage

Once you establish the habit of writing tests, the next thing you’ll
want to know about is whether you are testing all of your
application. To answer that question, you will need a code coverage
analysis library, and Python has two primary options: Ned Batchelder’s
excellent coverage.py and Titus
Brown’s newer figleaf. They take
different approaches to detecting potentially executable lines in your
source, resulting in different “lines of code” counts. Otherwise they
produce similar statistics about the execution of your program. You
can run the coverage tools yourself, or use them integrated with one
of the test runners described above. Both tools collect statistics as
your program (or test suite) runs and enable you to prepare reports
showing the percent coverage for modules, as well as a list of lines
never run. Both also support creating annotated versions of the
source, highlighting lines which are not run. With a report of source
lines never run, it is easy to identify dead branches in conditional
statements containing code that can be removed or areas that need more
unit tests.
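The line-counting idea behind these tools can be sketched with the
standard library’s trace module (this is not coverage.py or figleaf
themselves, just a small stand-in to show the mechanism):

```python
import trace

def classify(n):
    if n < 0:
        return 'negative'      # this branch is never exercised below
    return 'non-negative'

# Count how many times each line executes while classify() runs.
tracer = trace.Trace(count=True, trace=False)
tracer.runfunc(classify, 5)

# results.counts maps (filename, lineno) to an execution count.
# Executable lines absent from the map were never run, which is
# exactly the "lines never run" report the coverage tools build on.
results = tracer.results()
```

Here the 'negative' branch never appears in the counts, flagging it as
untested, dead, or both.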

Source Checkers

One class of testing tools not used all that often in Python
development is the source code checker, or linter. Many
of you will recognize lint as the anal-retentive C programmer’s
friend. Python has several similar offerings for examining your
source and automatically detecting error patterns such as creating
variables which are never used, overwriting built-in symbol names
(like using type as a variable), and other such common issues that
are easy to introduce and difficult to ferret out. Pylint is an actively maintained
extension to the original PyChecker. In addition to looking for
coding errors, Pylint will check your code against a style guide,
verify that your interfaces are actually implemented, and check for a
whole series of other common problems. Think of it as an automated
code review.

I only had space this month for a quick overview of the many testing
tools available for use with Python. I encourage you to check out
pycheesecake.org’s excellent taxonomy page for a more complete list
(see Related Links). Most of the tools described here are just an
easy_install away, so download a few and start working on your own
code quality today.

As always, if there is something you would like for me to cover in
this column, send a note with the details to doug dot hellmann at
pythonmagazine dot com and let me know, or add the link to your
del.icio.us account with the tag pymagdifferent. I’m particularly
interested in hearing from you about the tools you consider essential
for developing with Python. Anything from your favorite editor or IDE
to that obscure module you find yourself using over and over –
anything you just can’t seem to live without. I’ll be sharing links
and tips for development tools over the course of the next several
issues as I continue this series.

Originally published in Python Magazine Volume 2 Issue 1, January 2008

Python Magazine “wish list” updated

Brian has posted our current wish list over on his blog. I won’t
reproduce the entire thing here, but please go look over the list and
see if there is a topic you know something about.

We realize that not everyone who is interested in writing is
necessarily an established writer. We will provide help and advice if
you’re a new author and not familiar with writing for a magazine. I’ve
been programming professionally for 11-12 years now, but I’m still
fairly new to publishing, so I know that that first article can be the
hardest. But you won’t gain the experience if you don’t start
somewhere. Brian refutes other reasons people don’t like to write
over here. Let me add to his arguments that it might just turn out to
be a lot easier than you think.

As Brian points out in his wish-list post, there are quite a few big
projects out there our readers are interested in hearing from but that
haven’t contributed articles, yet. If you’re a project maintainer who
has no time to write, that’s OK! Consider encouraging someone else from
your user or developer community to write an article about the project
instead. If you find yourself using a particular package or tool
frequently, but don’t contribute directly to that project, that’s not a
problem. There is no requirement that articles come from the
original developer of a package. It’s always good to have contributions
from different perspectives.

The submission process is very easy and low pressure (ask any of our
published authors). Neither Brian nor I want to make things harder than
they have to be, so we don’t. And just in case it isn’t clear: we will
pay you for your effort
if you write an article. So check out the wish list
and consider picking up a little extra cash after the holidays by
writing for us.

[Updated with link to Brian’s “Why you should write” post.]

Python Magazine for December available for download now

The December issue of Python Magazine went live this morning. If
you’re a subscriber, you can download your personalized, DRM-free PDF via
your account page right away.


The cover story this month is Python Threads and the Global
Interpreter Lock, a detailed analysis of threading performance under
different types of load by Jesse Noller. Jesse’s article is
chock-full of benchmarks and background material that illustrates when
the GIL is, and isn’t, an issue.

Python On the Go: Using Python on Mobile Platforms, by Saša
Dimitrijević, includes references for all sorts of development tools for
getting started hacking your phone or PDA with Python.

In Python Powered Accessibility, Steve Lee explains how to use
Python and GNOME accessibility toolkits to make it easier for
people with disabilities to use your desktop applications.

John Berninger returns this month with Using Python to Manage RPMs,
an introduction to using RPM from within Python programs with a focus on
security and intrusion detection.

In his column this month, Mark Mruss creates some basic GUI apps with
PyQt. Steve Holden shows how to use RSS, del.icio.us, and MochiKit
to keep fresh content on your home page. Brian Jones examines the
increased adoption of Python over the past few years, and I talk about
the PSF’s involvement in the Google Highly Open Participation Contest.

Write for us!

We’ve just finished the work on this issue, which of course means it’s
time to start the next. If you have a topic you’d like to see covered,
post a comment here or head over to our web site and tell us all
about it. If you have an idea for an article, use the “write for us”
link. Don’t be shy! We’ll help you develop your idea into a full article
and make sure your prose looks as good as your code.

And, as usual, if there is something you think I should cover in my
column, shoot me an email, post a comment here, or tag the link with
pymagdifferent on del.icio.us.

Google Highly Open Participation Contest

The Google Highly Open Participation Contest for junior high
and high school students is well under way, and the response from
around the world has been phenomenal.

Expanding on the idea of the Google Summer of Code program, Google has
developed a new contest aimed at encouraging pre-university students
to participate in open source projects. The goal is to introduce these
students to open source in general, and help them become directly
involved in one of several specific, participating projects. While the
contest concludes on February 4, 2008, some of the contestants might
decide to stick around after the contest and continue to
contribute. Based on the contributions I have seen so far, I hope they will.

Contest Background

Ten different open source projects have been asked to participate in
the contest this year: The Apache Software Foundation, Drupal, GNOME,
Joomla!, MoinMoin, Mono, Moodle, Plone, the Python Software Foundation
(PSF), and SilverStripe. Each project is responsible for developing
tasks that the contestants can complete, and for providing mentors to
advise and monitor their progress.

The contestants are trying to complete as many separate tasks as
possible over the course of the contest. Therefore, the tasks are all
relatively short, designed to be completed in days, rather than
weeks. This also allows the mentors to give more timely feedback. The
tasks cover all of the areas of an open source project, including
coding, documentation, outreach, testing, research, training,
translation, and user interface research and design.

Each contestant selects one task at a time to work on. As they are
working on the task, they are free to ask questions or use any
resources available to them to do the work. One of the key roles of
the mentors is to answer questions if the contestant is stuck on a
problem. Once the task is done, the contestant submits their finished
work for review by a mentor. When it is approved, they are free to
move on and claim their next task.

For every three tasks completed, the contestant is awarded $100 US (up
to $500 US for completing 15 tasks). Ten grand prize winners will also
receive a trip to Google’s headquarters in Mountain View,
California. The selection of the grand prize winner is left up to each
project, but the intent is to reward a contestant who selects more
difficult tasks, does work beyond the scope of the original task, or
is outstanding in some other way.

PSF Involvement

The PSF was selected by Google to participate in the contest because
of its previous work with the Summer of Code (SoC) contest. Titus
Brown was an administrator for the SoC, and is leading the mentoring
team for GHOP. Titus invited me, as well as community members Georg
Brandl, Grig Gheorghiu, Christian Heimes, André Roberge (among others)
to be mentors. Tasks have also been contributed by Brett Cannon,
Collin Winter, Michal Kwiatkowski, Greg Wilson, Terry Peppers, Shannon
Behrens, Michael Carter, Phil Hassey, and Michael Mol, and several of
them have also taken on more mentoring work as a result of their
participation. The list of mentors and contributors is changing
rapidly as more people become involved, so please accept my apologies
if I have left your name out.

All of the work on PSF tasks is being organized through the Google
Code site and the #ghop-python IRC channel on irc.freenode.net.
Contestants claim tasks from the issue tracker, and are given a
deadline to complete the work. Questions and feedback are communicated
via a public Google Group mailing list as the contestants work through
their projects.

Response from Students

Overall participation numbers vary, but at last count there were
somewhere around 375 contestants for all 10 projects. Paweł Sołyga
has been collecting statistics for the contest, and prepared the
figures you see here. Figure 1 shows the task progress as of 13
December. Of the 812 tasks defined for all projects, 238 are
already completed, another 295 are in progress, and 279 remain to be
claimed. Figure 2 shows the distribution of those tasks across the
various projects.


Figure 1: Task Progress as of 13 December


Figure 2: The distribution of tasks across projects.

The response for the PSF’s tasks has been nothing short of amazing,
and we’re only a few weeks into the contest as I write this. While
Figure 3 shows that Joomla! has attracted more contestants, Figure 4
shows that the students participating in the PSF project are
completing more tasks. Is that another example of the power of Python?


Figure 3: Joomla! has attracted more contestants.


Figure 4: PSF contestants completed more tasks.

We started out with 65 tasks, and they were all claimed within the
first week. We added 100 more, and many of those have been snapped up
already, too. We hope to add a new batch of open tasks each Friday,
because as the holidays approach and school terms are ending, we are
seeing more students sign up to participate. There has been a steady
flow of messages on the mailing lists set up for advising contestants,
but most of the traffic has been requests for us to hurry up and
review their completed work so they can keep going. These kids are
talented and motivated.


The PSF tasks completed so far have benefited core Python as well as a
few smaller projects. As part of the outreach and advocacy tasks,
several contestants gave presentations about Python programming to
their school or local user groups. The presentations completed so far
have covered general Python programming, Genshi, pygame, and
Django. PDFs of the slides, and in many cases pictures from the event,
have been posted to the Python GHOP issue tracker. In addition to the
live presentations, a few screencasts for orbited and Windmill have
been completed and posted online as well.

Other outreach and documentation tasks have led to Python being the
top referenced programming language on http://www.RosettaCode.org/,
with updates to 17 different example pages in the RosettaCode wiki
covering topics ranging from basic control structure examples to more
complex subjects such as object serialization and working with XML.

Through André Roberge’s involvement in the project, several tasks have
been created around Crunchy. Translations into Italian,
Polish, Macedonian, Spanish, Estonian, Brazilian Portuguese,
Hungarian, and German have been submitted already and an Esperanto
translation is under way. That list of languages illustrates the
global nature of the contest, an important aspect that Google
reinforced by launching it at the Open Source Developers’ Conference
in Brisbane, Australia. In addition to translation, Crunchy’s test
suite has been expanded by several contestants and it has been
enhanced to operate with Python 3.0. There is also a “Turtle” graphics
package being developed to enable Crunchy presentations to use live
drawing commands to render graphics in the browser.

Georg Brandl has been overseeing contributions to core Python
documentation and unit tests. He reports 23 GHOP-related commits to
the Python svn repository from 16 separate contributors. Test coverage
for CGIHTTPServer, DocXMLRPCServer, SimpleHTTPServer, and
SimpleXMLRPCServer has been expanded so far. Code examples have been
added to the documentation for the re, datetime,
xml.etree.ElementTree, xmlrpclib, wsgiref, itertools, logging, mmap,
csv, pprint, traceback, and ConfigParser modules for Python 2.6. The C
API Reference material has been expanded in several areas, too, and
there has been quite a bit of work on the documentation in the Python
3.0 branch.

Will Guaraldi has mentored students working on several PyBlosxom
tasks. Most of the completed tasks are related to testing various
plugins and documentation. They have led to updates for the pygallery,
contact, filter, autoping, and latest plugins so far. There are still
a few open tasks for creating caching plugins and a reStructuredText
formatter. And I’m sure when those tasks are gone, the PyBlosxom team
will come up with some others.


Students interested in participating as a contestant should visit the
GHOP main page to
review the rules, and then dive in and find tasks of interest on the
various project lists. As I have described, there are tasks available
at all levels of difficulty and in many different areas of
interest. There are enough options that everyone should be able to
find some way to join as a contestant.

Open Tasks

I listed a lot of the work that has already been done, but what’s left
for contestants to work on? There are still several outreach tasks for
giving presentations to technical, as well as non-technical,
groups. Paul McGuire has contributed tasks for working on
PyParsing. There are tasks related to social networking
applications. Mark Dufour has a few open tasks related to Shed Skin. Several of the tasks ask the
students to test core Python modules or create examples for the
documentation. There are performance testing tasks, and complex
programming tasks such as implementing a ropes data structure or a
cyclomatic complexity analyzer. There have been requests to extend the
functionality of Crunchy and trac by creating new plugins as
well. Titus has even defined a few meta-tasks for the students to
come up with tasks for other contestants. If you want to participate
in GHOP, and can’t find a task you’re interested in from the PSF
list, it’s not because we aren’t trying.

Other Ways to Participate

Only pre-college students 13 and up can compete, but there are other
ways for the rest of us to participate. The contestants have been
burning through the tasks at a much faster pace than we
anticipated. Part of that was no doubt due to our underestimating
their abilities, and starting out with relatively simple tasks. As a
result, we need task suggestions from the community (this is an open
source project, after all). We want to keep a few “introductory” tasks
on the list, but we do want to have more challenging work for those
advanced participants. If you have ideas for Python-related tasks,
check out the guidelines on the ghop-python page to find instructions
for submitting them. We need your participation!

Another way to contribute is to sign on as a mentor. Even the most
talented contestants are often new to open source in general, and most
won’t have experience with the participating
projects. They’re going to need help from experienced, patient
advisers to solve the tasks set before them. If you have the time and
inclination, join the ghop-python Google group and watch for questions
you can answer. Also keep an eye out for them on comp.lang.python or
other forums where they may be going for assistance.


For my own part, I’ve found working on this project very
rewarding. I’ve gained a greater appreciation for the contributions
the other mentors have made to our community, and it has been exciting
to observe the students learning to join that community as they
complete their tasks. I have had several good conversations through
IRC with contestants as we worked out issues they were having and
refined their submissions. It has also been inspiring to see how
students from all over the world take up the challenge of spreading
knowledge of, and contributing to, open source. Their energy and
enthusiasm are refreshing, and I’m looking forward to watching how the
rest of the contest unfolds over the next couple of months. It sounds
a little trite, but these kids are the next generation of
developers. This is an excellent opportunity for us to introduce them
to open source and pass on the traditions of cooperation and teamwork
that form the basis of the thriving community we have today.

Originally published in Python Magazine Volume 1 Number 12, December 2007

Python Magazine for November

Somehow the release of our November issue snuck right past me.
There’s a good range of articles this month, covering decorators,
working with RSS feeds, IDLE, and Gtk. My column is about the use of
Python in scientific applications, and Mark’s discusses operator
overloading. Brian’s column addresses some of the feedback we’ve seen
from readers of the October issue.

If you’re a subscriber, you should have already received an email
notification that the PDF is available for download. You can login to
the site and download your copy right away, and print copies are on the
way. If you’re not a subscriber yet, it isn’t too late!


A couple of days ago, Chris posted about using virtualenv to
create sandboxes on Leopard instead of installing packages directly into
the Frameworks directory. I’d heard of virtualenv, but never tried it
before. After reading what Chris said, I downloaded it and gave it a
try, and have to say, “Wow!”

I had been worried about installing a ton of dependencies on my system
so I could test code associated with articles submitted to Python
Magazine. I planned on setting up a VM, so I could at least isolate
that code from my own development environment, but virtualenv is going
to be much easier to deal with. I can create a separate environment for
each article, and verify the dependencies as necessary.

When you run virtualenv, it sets up a fresh sandbox version of Python
by copying/linking files from your default installation to create new
bin and lib directories. It does not copy site-packages, so you have a
clean place to install any packages you want (with easy_install, or by
other means). It does include your default installation in the
PYTHONPATH for the new sandbox, so you can use modules already installed
there as well as anything you install into the sandbox.

The new sandbox also includes a simple script to activate/deactivate
the environment for your current shell. When you activate an
environment, your command prompt changes (at least under Mac OS X, and I
assume other Unix variants) to remind you that you’re using that environment,
and your PATH is automatically updated to use the new bin directory. You
can also run the interpreter from the environment directly, without
“activation”, and it knows to look for modules using the correct path.

I’ll probably still set up that VM, but I know I’ll worry a lot less
about conflicts between different modules as I test the code from our
authors.

Python Magazine is here to stay

Word came in this morning, via Brian, that Python Magazine is
“viable”. That’s great news! I’ve been having a good time reading the
articles (and code) you have submitted, and working with Brian, Arbi,
and everyone else at MTA to put it together.

So, if you’ve been holding off on submitting your proposal for an
article, or subscribing, you can stop waiting. Head over to
pythonmagazine.com and take care of both today.