sphinxcontrib.spelling 1.2

What is sphinxcontrib.spelling?

sphinxcontrib.spelling is a spelling checker for Sphinx. It uses
PyEnchant to produce a report showing misspelled words.

What’s New in 1.2?

This update checks the spelling of document titles and section headers
as well as the body of the document. It also fixes a packaging issue
that prevented the tests from working when run directly from the sdist
available on PyPI.

Creating a Spelling Checker for reStructuredText Documents

I write a lot using reStructuredText files as the source format,
largely because of the ease of automating the tools used to convert
reST to other formats. The number of files involved has grown to the
point that some of the post-writing tasks were becoming tedious, and I
was skipping steps like running the spelling checker. I finally
decided to do something about that by creating a spelling checker
plugin for Sphinx, released as *sphinxcontrib-spelling*.

I have written about *why I chose reST* before. All of the articles
on this site, including the Python Module of the Week series,
started out as .rst files. I also use Sphinx to produce
several developer manuals at my day job. I like reST and Sphinx
because they both can be extended to meet new needs easily. One area
that has been lacking, though, is support for a spelling checker.

Checking the spelling of the contents of an individual
reStructuredText file from within a text editor like Aquamacs is
straightforward, but I have on the order of 200 separate files making
up parts of this site alone, not to mention *my book*. Manually
checking each file, one at a time, is a tedious job, and not one I
perform very often. After finding a few typos recently, I decided I
needed to take care of the problem by using automation to eliminate
the drudgery and make it easier to run the spelling checker regularly.

The files are already configured to be processed by Sphinx when they
are converted to HTML and PDF format, so that seemed like a natural
way to handle the spelling checker, too. To add a step to the build to
check the spelling of every file, I would need two new tools: an
extension to Sphinx to drive the spelling checker and the spelling
checker itself. I did not find any existing Sphinx extensions that
checked spelling, so I decided to write my own. The first step was to
evaluate spelling checkers.

Choosing a Spelling Checker

I recently read Peter Norvig’s article How to Write a Spelling
Corrector, which shows how to create a spelling checker from scratch
in Python. As with most nontrivial applications, though, the algorithm
for testing the words is only part of the story when looking at a
spelling checker. An equally important aspect is the dictionary of
words known to be spelled correctly. Without a good dictionary, the
algorithm would flag too many correctly spelled words as unknown. Not wanting to build
my own dictionary, I decided to investigate existing spelling checkers
and concentrate on writing the interface layer to connect them to
Sphinx.

There are several open source spelling checkers with Python bindings.
I evaluated aspell-python and PyEnchant (bindings for
enchant, the spelling checker from the AbiWord project). Both tools
required some manual setup to get the engine working. The
aspell-python API was simple to use, but I decided to use PyEnchant
instead. It has an active development group and is more extensible
(with APIs to define alternate dictionaries, tokenizers, and filters).

Installing PyEnchant

I started out by trying to install enchant and PyEnchant from source
under OS X with Python 2.7, but eventually gave up after having to
download several dependencies just to get configure to run for
enchant. I stuck with PyEnchant as a solution because installing
aspell was not really any easier (the installation experience for both
tools could be improved). The simplest solution for OS X and Windows
is to use the platform-specific binary installers for PyEnchant (not
the .egg), since they include all of the dependencies. That means it
cannot be installed into a virtualenv, but I was willing to live with
that for the sake of having any solution at all.

Linux platforms can probably install enchant via RPM or other system
package, so it is less of a challenge to get PyEnchant working there,
and it may even work with pip.

Using PyEnchant

There are several good examples in the PyEnchant tutorial, and I
will not repeat them here. I will cover some of the concepts, though,
as part of explaining the implementation of the new extension.

The PyEnchant API is organized around a “dictionary,” which can be
loaded at runtime based on a language name. Enchant does some work to
try to determine the correct language automatically based on the
environment settings, but I found it more reliable to set the language
explicitly. After the dictionary is loaded, its check() method can
be used to test whether a word is correct or not. For incorrect words,
the suggest() method returns a list of possible alternatives,
sorted by the likelihood they are the intended word.
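
For example, the basic dictionary API can be exercised outside of
Sphinx with a few lines (a minimal sketch using the standard en_US
dictionary; the sample words are arbitrary):

import enchant

dictionary = enchant.Dict('en_US')
print(dictionary.check('spelling'))   # True
print(dictionary.check('speling'))    # False
print(dictionary.suggest('speling'))  # e.g. ['spelling', 'spieling', ...]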

The check() method works well for individual words, but cannot
process paragraphs. PyEnchant provides an API for checking larger
blocks of text, but I chose to use a lower level API instead. In
addition to the dictionary, PyEnchant includes a “tokenizer” API for
splitting text into candidate words to be checked. Using the tokenizer
API means that the new plugin can run some additional tests on words
not found in the dictionary. For example, I plan to provide an option
to ignore “misspelled” words that appear to be the name of an
importable Python module.
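
As a minimal sketch of the tokenizer API (the sample text and the
choice of EmailFilter here are only for illustration), the tokenizer
yields each candidate word along with its offset in the input:

from enchant.tokenize import get_tokenizer, EmailFilter

tokenizer = get_tokenizer('en_US', filters=[EmailFilter])
for word, pos in tokenizer('Send feedbck to doug@example.com'):
    print('%s at %d' % (word, pos))

The email address is skipped by the filter, so only the remaining
words are handed to the dictionary for checking.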

Integrating with Sphinx

The Sphinx Extension API includes several ways to add new features
to Sphinx, including markup roles, language domains, processing
events, and directives. I chose to create a new “builder” class,
because that would give me complete control over the way the document
is processed. The builder API works with a parsed document to create
output, usually in a format like HTML or PDF. In this case, the
SpellingBuilder does not generate any output files. It prints the
list of misspelled words to standard output, and includes the headings
showing where the words appear in the document.

The first step in creating the new extension is to define a
setup() function to be invoked when the module is loaded. The
function receives as argument an instance of the Sphinx
application, ready to be configured. In
sphinxcontrib.spelling.setup(), the new builder and several
configuration options are added to the application. Although the
Sphinx configuration file can contain any Python code, only the
explicitly registered configuration settings affect the way the
environment is saved.

def setup(app):
    app.info('Initializing Spelling Checker')
    app.add_builder(SpellingBuilder)
    # Report guesses about correct spelling
    app.add_config_value('spelling_show_suggestions', False, 'env')
    # Set the language for the text
    app.add_config_value('spelling_lang', 'en_US', 'env')
    # Set a user-provided list of words known to be spelled properly
    app.add_config_value('spelling_word_list_filename', 'spelling_wordlist.txt', 'env')
    # Assume anything that looks like a PyPI package name is spelled properly
    app.add_config_value('spelling_ignore_pypi_package_names', False, 'env')
    # Assume words that look like wiki page names are spelled properly
    app.add_config_value('spelling_ignore_wiki_words', True, 'env')
    # Assume words that are all caps, or all caps with trailing s, are spelled properly
    app.add_config_value('spelling_ignore_acronyms', True, 'env')
    # Assume words that are part of __builtins__ are spelled properly
    app.add_config_value('spelling_ignore_python_builtins', True, 'env')
    # Assume words that look like the names of importable modules are spelled properly
    app.add_config_value('spelling_ignore_importable_modules', True, 'env')
    # Add any user-defined filter classes
    app.add_config_value('spelling_filters', [], 'env')
    # Register the 'spelling' directive for setting parameters within a document
    rst.directives.register_directive('spelling', SpellingDirective)
    return

The builder class is derived from sphinx.builders.Builder. The
important method is write_doc(), which processes the parsed
documents and saves the messages with unknown words to the output
file.

def write_doc(self, docname, doctree):
    self.checker.push_filters(self.env.spelling_document_filters[docname])

    for node in doctree.traverse(docutils.nodes.Text):
        if node.tagname == '#text' and node.parent.tagname in TEXT_NODES:

            # Figure out the line number for this node by climbing the
            # tree until we find a node that has a line number.
            lineno = None
            parent = node
            seen = set()
            while lineno is None:
                #self.info('looking for line number on %r' % node)
                seen.add(parent)
                parent = parent.parent
                if parent is None or parent in seen:
                    break
                lineno = parent.line
            filename = self.env.doc2path(docname, base=None)

            # Check the text of the node.
            for word, suggestions in self.checker.check(node.astext()):
                msg_parts = []
                if lineno:
                    msg_parts.append(darkgreen('(line %3d)' % lineno))
                msg_parts.append(red(word))
                msg_parts.append(self.format_suggestions(suggestions))
                msg = ' '.join(msg_parts)
                self.info(msg)
                self.output.write(u"%s:%s: (%s) %s\n" % (
                        self.env.doc2path(docname, None),
                        lineno, word,
                        self.format_suggestions(suggestions),
                        ))

                # We found at least one bad spelling, so set the status
                # code for the app to a value that indicates an error.
                self.app.statuscode = 1

    self.checker.pop_filters()
    return

The builder traverses all of the text nodes, skipping over formatting
nodes and container nodes that contain no text. Each node is converted
to plain text using its astext() method, and the text is given to
the SpellingChecker to be parsed and checked.

class SpellingChecker(object):
    """Checks the spelling of blocks of text.

    Uses options defined in the sphinx configuration file to control
    the checking and filtering behavior.
    """

    def __init__(self, lang, suggest, word_list_filename, filters=[]):
        self.dictionary = enchant.DictWithPWL(lang, word_list_filename)
        self.tokenizer = get_tokenizer(lang, filters)
        self.original_tokenizer = self.tokenizer
        self.suggest = suggest

    def push_filters(self, new_filters):
        """Add a filter to the tokenizer chain.
        """
        t = self.tokenizer
        for f in new_filters:
            t = f(t)
        self.tokenizer = t

    def pop_filters(self):
        """Remove the filters pushed during the last call to push_filters().
        """
        self.tokenizer = self.original_tokenizer

    def check(self, text):
        """Generator function that yields bad words and suggested alternate spellings.
        """
        for word, pos in self.tokenizer(text):
            correct = self.dictionary.check(word)
            if correct:
                continue
            yield word, self.dictionary.suggest(word) if self.suggest else []
        return
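
Used on its own, the checker acts as a generator over the unknown
words. A short usage sketch (the word list filename is just the
default from the configuration options above, and the sample sentence
is arbitrary):

checker = SpellingChecker(
    lang='en_US',
    suggest=True,
    word_list_filename='spelling_wordlist.txt',
)
for word, suggestions in checker.check('This sentance has a mistake'):
    print('%s -> %s' % (word, ', '.join(suggestions)))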

Finding Words in the Input Text

The blocks of text from the nodes are parsed using a language-specific
tokenizer provided by PyEnchant. The text is split into words, and
then each word is passed through a series of filters. The API defined
by enchant.tokenize.Filter supports two behaviors. Based on the
return value from _skip(), the word might be ignored entirely and
never returned by the tokenizer. Alternatively, the _split()
method can return a modified version of the text.

In addition to the filters for email addresses and “wiki words”
provided by PyEnchant, sphinxcontrib-spelling includes several
others. The AcronymFilter tells the tokenizer to skip words that
use all uppercase letters.

class AcronymFilter(Filter):
    """If a word looks like an acronym (all upper case letters),
    ignore it.
    """
    def _skip(self, word):
        return (word == word.upper() # all caps
                or
                # pluralized acronym ("URLs")
                (word[-1].lower() == 's'
                 and
                 word[:-1] == word[:-1].upper()
                 )
                )

The ContractionFilter expands common English contractions
that might appear in less formal blog posts.

class list_tokenize(tokenize):
    def __init__(self, words):
        tokenize.__init__(self, '')
        self._words = words
    def next(self):
        if not self._words:
            raise StopIteration()
        word = self._words.pop(0)
        return (word, 0)

class ContractionFilter(Filter):
    """Strip common contractions from words.
    """
    splits = {
        "won't":['will', 'not'],
        "isn't":['is', 'not'],
        "can't":['can', 'not'],
        "i'm":['I', 'am'],
        }
    def _split(self, word):
        # Fixed responses
        if word.lower() in self.splits:
            return list_tokenize(self.splits[word.lower()])

        # Possessive
        if word.lower().endswith("'s"):
            return unit_tokenize(word[:-2])

        # * not
        if word.lower().endswith("n't"):
            return unit_tokenize(word[:-3])

        return unit_tokenize(word)

Because I write about Python a lot, I tend to use the names of
projects that appear on the Python Package Index
(PyPI). PyPIFilterFactory fetches a list of the packages from
the index and then sets up a filter to ignore all of them.

class IgnoreWordsFilter(Filter):
    """Given a set of words, ignore them all.
    """
    def __init__(self, tokenizer, word_set):
        self.word_set = set(word_set)
        Filter.__init__(self, tokenizer)
    def _skip(self, word):
        return word in self.word_set

class IgnoreWordsFilterFactory(object):
    def __init__(self, words):
        self.words = words
    def __call__(self, tokenizer):
        return IgnoreWordsFilter(tokenizer, self.words)

class PyPIFilterFactory(IgnoreWordsFilterFactory):
    """Build an IgnoreWordsFilter for all of the names of packages on PyPI.
    """
    def __init__(self):
        client = xmlrpclib.ServerProxy('http://pypi.python.org/pypi')
        IgnoreWordsFilterFactory.__init__(self, client.list_packages())

PythonBuiltinsFilter ignores functions built into the Python
interpreter.

class PythonBuiltinsFilter(Filter):
    """Ignore names of built-in Python symbols.
    """
    def _skip(self, word):
        return word in __builtins__

Finally, ImportableModuleFilter ignores words that match the
names of modules found on the import path. It uses imp to search
for the module
without actually importing it.

class ImportableModuleFilter(Filter):
    """Ignore names of modules that we could import.
    """
    def __init__(self, tokenizer):
        Filter.__init__(self, tokenizer)
        self.found_modules = set()
        self.sought_modules = set()
    def _skip(self, word):
        if word not in self.sought_modules:
            self.sought_modules.add(word)
            try:
                imp.find_module(word)
            except UnicodeEncodeError:
                return False
            except ImportError:
                return False
            else:
                self.found_modules.add(word)
                return True
        return word in self.found_modules

The SpellingBuilder creates the filter stack based on user
settings, so the filters can be turned on or off.

filters = [ ContractionFilter,
            EmailFilter,
            ]
if self.config.spelling_ignore_wiki_words:
    filters.append(WikiWordFilter)
if self.config.spelling_ignore_acronyms:
    filters.append(AcronymFilter)
if self.config.spelling_ignore_pypi_package_names:
    self.info('Adding package names from PyPI to local spelling dictionary...')
    filters.append(PyPIFilterFactory())
if self.config.spelling_ignore_python_builtins:
    filters.append(PythonBuiltinsFilter)
if self.config.spelling_ignore_importable_modules:
    filters.append(ImportableModuleFilter)
filters.extend(self.config.spelling_filters)
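
User-provided filter classes listed in the spelling_filters option are
appended to the same stack. As a sketch of how that could be used (the
class name and the terms in it are hypothetical), a project might add
its own jargon filter in conf.py:

from enchant.tokenize import Filter

class ProjectJargonFilter(Filter):
    """Skip internal project terms that are not in the dictionary."""
    _terms = set(['virtualenvwrapper', 'paverutils'])
    def _skip(self, word):
        return word.lower() in self._terms

spelling_filters = [ProjectJargonFilter]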

Using the Spelling Checker

PyEnchant and sphinxcontrib-spelling should be installed on the
import path for the same version of Python that Sphinx is using (refer
to the *project home page* for more details). Then the extension
needs to be explicitly enabled for a Sphinx project in order for the
builder to be recognized. To enable the extension, add it to the list
of extensions in conf.py.

extensions = [ 'sphinxcontrib.spelling' ]

The other options can be set in conf.py, as well. For example, to
turn on the filter to ignore the names of packages from PyPI, set
spelling_ignore_pypi_package_names to True.

spelling_ignore_pypi_package_names = True

Because the spelling checker is integrated with Sphinx using a new
builder class, it is not run when the HTML or LaTeX builders
run. Instead, it needs to run as a separate phase of the build by
passing the -b option to sphinx-build. The output shows each
document name as it is processed, and if there are any errors, the line
number and misspelled word are shown. When
spelling_show_suggestions is True, proposed corrections are
included in the output.

$ sphinx-build -b spelling -d build/doctrees source build/spelling
...
writing output... [ 31%] articles/how-tos/sphinxcontrib-spelling/index
(line 255) mispelling ["misspelling", "dispelling", "mi spelling",
"spelling", "compelling", "impelling", "rappelling"]
...

See Also

PyEnchant
Python interface to enchant.
*sphinxcontrib-spelling*
Project home page for the spelling checker.
sphinxcontrib
BitBucket repository for sphinxcontrib-spelling and several other
Sphinx extensions.
Sphinx Extension API
Describes methods for extending Sphinx.
*Defining Custom Roles in Sphinx*
Describes another way to extend Sphinx by modifying the
reStructuredText syntax.

sphinxcontrib.spelling 1.0

What is sphinxcontrib.spelling?

sphinxcontrib.spelling is a spelling checker for Sphinx. It uses
PyEnchant to produce a report showing misspelled words.

What’s New in 1.0?

This release is completely rewritten from the earlier 0.2 version. The
output includes more details about the location of unknown words in
the source files being processed, and the output is saved for
reference and review. It also includes more extensive
documentation.

holiday coding and virtualenvwrapper updates

In between other holiday-related activities I have had some time to
work on some personal projects a bit over the past week. The result
includes new releases of virtualenvwrapper, virtualenvwrapper.project,
and a new product called sphinxcontrib-spelling.

virtualenvwrapper updates

Version 2.6.1 of virtualenvwrapper includes a few fixes to make it
work better under Python 2.4 and correct some problems with Cygwin. It
also includes a major overhaul of the testing infrastructure to use
tox instead of the home-grown scripts I was using for testing with
multiple versions of Python. The tests now work reliably under Python
2.4 – 2.7 with bash, zsh, and ksh. And, more importantly, they run on
systems other than my laptop, so that other developers can run the
tests (and add new ones) before contributing patches.

The latest version of virtualenvwrapper.project (the extension that
adds project management to virtualenvwrapper) has had similar testing
updates made. I also added a cdproject command to return your shell to
the project home directory easily.

Spelling Checker for Sphinx

Finally, I started a new project to integrate PyEnchant with
Sphinx to make it easy to check the spelling in large
reStructuredText documents that span many files. The result,
sphinxcontrib-spelling, is in beta right now. I have some additional
features that I want to add, but it is usable in this form and ready for
feedback from users. I am working on a more extensive write-up of
creating a Sphinx extension like this, to be posted soon.

Defining Custom Roles in Sphinx

Creating custom processing instructions for Sphinx is easy and will
make documenting your project less trouble.

Apparently 42 is a magic number.

While working on issue 42 for virtualenvwrapper, I needed to create
a link from the history file to the newly resolved issue. I finally
decided that pasting the links in manually was getting old, and I
should do something to make it easier. Sphinx and docutils have
built-in markup for linking to RFCs and the Python developers use a
custom role for linking to their bug tracker issues. I decided to
create an extension so I could link to the issue trackers for my
BitBucket projects just as easily.

Extension Options

Sphinx is built on docutils, a set of tools for parsing and working
with reStructuredText markup. The rst parser in docutils is designed
to be extended in two main ways:

  1. Directives let you work with large blocks of text and intercept
     the parsing as well as formatting steps.

  2. Roles are intended for inline markup, within a paragraph.

Directives are used for handling things like inline code, including
source from external locations, or other large-scale processing.
Since each directive defines its own paragraphs, they operate at the
wrong scale for handling in-line markup. I needed to define a new
role.

Defining a Role Processor

The docutils parser works by converting the input text to an internal
tree representation made up of different types of nodes. The tree is
traversed by a writer to create output in the desired format. To add
a directive or role, you need to provide the hooks to be called to
handle the markup when it is encountered in the input file. A role
processor is defined with a function that takes arguments describing
the marked-up text and returns the nodes to be included in the parse
tree.

Roles all have a common syntax, based on the interpreted text
feature of reStructuredText. For example, the rfc role for
linking to an RFC document looks like:

:rfc:`1822`

and produces links like **RFC 1822**, complete with the upper
case RFC.

In my case, I wanted to define new roles for linking to tickets in the
issue tracker for a project (bbissue) and Mercurial changesets
(bbchangeset). The first step was to define the role processing
function.

def bbissue_role(name, rawtext, text, lineno, inliner, options={}, content=[]):
    """Link to a BitBucket issue.

    Returns 2 part tuple containing list of nodes to insert into the
    document and a list of system messages.  Both are allowed to be
    empty.

    :param name: The role name used in the document.
    :param rawtext: The entire markup snippet, with role.
    :param text: The text marked with the role.
    :param lineno: The line number where rawtext appears in the input.
    :param inliner: The inliner instance that called us.
    :param options: Directive options for customization.
    :param content: The directive content for customization.
    """
    try:
        issue_num = int(text)
        if issue_num <= 0:
            raise ValueError
    except ValueError:
        msg = inliner.reporter.error(
            'BitBucket issue number must be a number greater than or equal to 1; '
            '"%s" is invalid.' % text, line=lineno)
        prb = inliner.problematic(rawtext, rawtext, msg)
        return [prb], [msg]
    app = inliner.document.settings.env.app
    node = make_link_node(rawtext, app, 'issue', str(issue_num), options)
    return [node], []

The parser invokes the role processor when it sees interpreted text
using the role in the input. It passes both the raw, unparsed text and
the contents of the interpreted text (the part between the backquotes).
It also passes an “inliner,” the part of the parser that saw the markup
and invoked the processor. The inliner gives us a handle back to
docutils and Sphinx so we can access the runtime environment to get
configuration settings or save data for use later.

The return value from the processor is a tuple containing two lists.
The first list contains any new nodes to be added to the parse tree,
and the second list contains error or warning messages to show the
user. Processors are defined to return errors instead of raising
exceptions because the error messages can be inserted into the output
instead of halting all processing.

The bbissue role processor validates the input text by converting
it to an integer issue id. If that isn’t possible, it builds an error
message and returns a problematic node to be added to the output
file. It also returns the message text so the message is printed on
the console. If validation passes, a new node is constructed with
make_link_node(), and only that success node is included in the
return value.

To create the inline node with the hyperlink to a ticket,
make_link_node() looks in Sphinx’s configuration for a
bitbucket_project_url string. Then it builds a reference node
using the URL and other values derived from the values given by the
parser.

def make_link_node(rawtext, app, type, slug, options):
    """Create a link to a BitBucket resource.

    :param rawtext: Text being replaced with link node.
    :param app: Sphinx application context
    :param type: Link type (issue, changeset, etc.)
    :param slug: ID of the thing to link to
    :param options: Options dictionary passed to role func.
    """
    #
    try:
        base = app.config.bitbucket_project_url
        if not base:
            raise AttributeError
    except AttributeError, err:
        raise ValueError('bitbucket_project_url configuration value is not set (%s)' % str(err))
    #
    slash = '/' if base[-1] != '/' else ''
    ref = base + slash + type + '/' + slug + '/'
    set_classes(options)
    node = nodes.reference(rawtext, type + ' ' + utils.unescape(slug), refuri=ref,
                           **options)
    return node

Registering the Role Processor

With the role processor function defined, the next step is to tell
Sphinx to load the extension and to register the new role. Instead of
using setuptools entry points for defining plugins, Sphinx asks you to
list them explicitly in the configuration file. This makes it easy to
install several extensions to be used by several projects, and only
enable the ones you want for any given documentation set.

Extensions are listed in the conf.py configuration file for your
Sphinx project, in the extensions variable. I added my module to
the sphinxcontrib project namespace package, so the module has the
name sphinxcontrib.bitbucket.

# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.ifconfig',
              'sphinx.ext.autodoc',
              'sphinxcontrib.bitbucket',
              ]

Sphinx uses the name given to import the module or package containing
the extension, and then calls a function named setup() to
initialize the extension. During the initialization phase you can
register new roles and directives, as well as configuration values.

def setup(app):
    """Install the plugin.

    :param app: Sphinx application context.
    """
    app.add_role('bbissue', bbissue_role)
    app.add_role('bbchangeset', bbchangeset_role)
    app.add_config_value('bitbucket_project_url', None, 'env')
    return

For this extension I did not want to make any assumptions about the
BitBucket user or project name, so a bitbucket_project_url value
must be added to conf.py.

bitbucket_project_url = 'http://bitbucket.org/dhellmann/virtualenvwrapper/'

Accessing Sphinx Configuration from Your Role

Sphinx handles configuration a little differently from docutils, so I
had to dig for a while to find an explanation of how to access the
configuration value from within the role processor. The inliner
argument includes a reference to the current document being processed,
including the docutils settings. Those settings contain an
environment context object, which can be modified by the processors
(to track things like items to include in a table of contents or
index, for example). Sphinx adds its separate application context to
the environment, and the application context includes the
configuration settings. If your role function’s argument is
inliner, then the full path to access a config value called
my_setting is:

inliner.document.settings.env.app.config.my_setting

Results

The new bbissue role looks the same as the rfc role, with the
ticket id as the body of the interpreted text.

For example:

:bbissue:`42`

becomes: issue 42

See also

sphinxcontrib.bitbucket home
Home page for sphinxcontrib.bitbucket, with links to the issue
tracker and announcements of new releases.
sphinxcontrib.bitbucket source
The complete source code for the Sphinx extension described above,
including both bbissue and bbchangeset roles.
Tutorial: Writing a simple extension
Part of the Sphinx documentation set, this tutorial explains how
to create a basic directive processor.
Creating reStructuredText Interpreted Text Roles
David Goodger’s original documentation for creating new roles for
docutils.
Docutils Hacker’s Guide
An introduction to Docutils’ internals by Lea Wiemann.

new project: sphinxcontrib-paverutils

Kevin Dangoor’s Paver includes basic integration for Sphinx, the
excellent document production toolkit from Georg Brandl. As I have
written before, however, the default integration didn’t quite meet my
needs for producing different forms of output from the same inputs.

Georg has opened the sphinxcontrib repository on BitBucket for
developers who want to collaborate on providing unofficial extensions to
Sphinx, so I decided to go ahead and package up the alternate
integration I use and release it in case someone else finds it helpful.
The result is sphinxcontrib.paverutils.

Writing Technical Documentation with Sphinx, Paver, and Cog

I’ve been working on the Python Module of the Week series since March
of 2007. During the course of the project, my article style and tool
chain have
both evolved. I now have a fairly smooth production process in
place, so the mechanics of producing a new post don’t get in the
way of the actual research and writing. Most of the tools are open
source, so I thought I would describe the process I go through and
how the tools work together.

Editing Text: TextMate

I work on a MacBook Pro, and use TextMate
for editing the articles and source for PyMOTW. TextMate is the one
tool I use regularly that is not open source. When I’m doing heavy
editing of hundreds of files for my day job I use Aquamacs Emacs, but TextMate is better suited for prose
editing and is easier to extend with quick actions. I discovered
TextMate while looking for a native editor to use for Python Magazine, and after being able to write my
own “bundle” to manage magazine articles (including defining a mode
for the markup language we use) I was hooked.

Some of the features that I like about TextMate for prose editing are
as-you-type spell-checking (I know some people hate this feature, but
I find it useful), text statistics (word count, etc.), easy block
selection (I can highlight a paragraph or several sentences and move
them using cursor keys), a moderately good reStructuredText mode
(emacs’ is better, but TextMate’s is good enough), paren and quote
matching as you type, and very simple extensibility for repetitive
tasks. I also like TextMate’s project management features, since they
make it easy to open several related files at the same time.

Version Control: svn

I started out using a private svn repository for all of my projects,
including PyMOTW. I’m in the middle of evaluating hosted DVCS options
for PyMOTW, but still haven’t had enough time to give them all the
research I
think is necessary before making the move. The Python core developers
are considering a similar move (PEP 374) so it will be interesting
to monitor that discussion.
No doubt we have different requirements (for example, they are hosting
their own repository), but the experiences with the various DVCS tools
will be useful input to my own decision.

Markup Language: reStructuredText

When I began posting, I wrote each article by hand using HTML. One of
the first tasks that I automated was the step of passing the source
code through pygments to produce a syntax colorized version. This
worked well enough for me at the time, but restricted me to producing
only HTML output. Eventually John Benediktsson contacted me with a
version of many of the posts converted from HTML to reStructuredText.

When reStructuredText was first put forward in the ’90s, I was
heavily into Zope development. As such, I was using StructuredText for documenting my
code, and in the Zope-based wiki that we ran at ZapMedia. I even
wrote my own app to extract
comments and docstrings to generate library documentation for a couple
of libraries I had released as open source. I really liked
StructuredText and, at first, I didn’t like reStructuredText.
Frankly, it looked ugly compared to what I was used to. It quickly
gained acceptance in the general community though, and I knew it would
give me options for producing other output formats for the PyMOTW
posts, so when John sent me the markup files I took another look.

While re-acquainting myself with reST, I realized two things. First,
although there is a bit more punctuation involved in the markup than
with the original StructuredText, the markup language was designed
with consistency in mind so it isn’t as difficult to learn as my first
impressions had led me to believe. Second, it turned out the part I
thought was “ugly” was actually the part that made reST more
powerful
than StructuredText: It has a standard syntax for extension
directives that users can define for their own documents.

Markup to Output: Sphinx

Before I made a final decision on switching from hand-coded HTML to
reST, I needed a tool to convert to HTML (I still had to post the
results on the blog, after all, and Blogger doesn’t support reST). I
first tried David Goodger’s docutils package. The scripts it includes
felt a little too much like “pieces” of a tool rather than a complete
solution, though, and I didn’t really want to assemble my own wrappers
if I didn’t have to – I wanted to write text for this project, not
code my own tools. Around this time, Georg Brandl had made
significant progress on Sphinx, which
turned out to be a more complete turn-key system for converting a pile
of reST files to HTML or PDF. After a few hours of experimentation, I
had a sample project set up and was generating HTML from my documents
using the standard templates.

I decided that reStructuredText looked like the way to go.

HTML Templates: Jinja

My next step was to work out exactly how to produce all of the outputs
I needed from reST inputs. Each post for the PyMOTW series ends up
going to several different places:

  • the PyMOTW source distribution (HTML)
  • my Blogger blog (HTML)
  • the PyMOTW project site (HTML)
  • O’Reilly.com (HTML)
  • the PyMOTW “book” (PDF)

Each of the four HTML outputs uses slightly different formatting,
requiring separate templates (PDF is a whole different problem,
covered below). The source distribution and project site are both
full HTML versions of all of the documents, but use different
templates. I decided to use the default Sphinx templates for the
packaged version; I may change that later, but it works for the time
being, and it’s one less custom template to deal with. I wanted the
online version to match the appearance of the rest of my site, so I
needed to create a template for it. The two blogs use a third
template (O’Reilly’s site ignores a lot of the markup due to their
Movable Type configuration, but the articles come out looking good
enough so I can use the same template I use for my own blog without
worrying about a separate custom template).

Sphinx uses Jinja templates to produce
HTML output. The syntax for Jinja is very similar to Django’s
template language. As it happens, I use Django for the dynamic
portion of my web site that I host myself. I lucked out, and my
site’s base template was simple enough to use with Sphinx without
making any changes. Yay for compatibility!

Cleaning up HTML with BeautifulSoup

The blog posts need to be relatively clean HTML that I can upload to
Blogger and O’Reilly, so they could not include any html or
body tags or require any markup or styles not supported by either
blogging engine. The template I came up with is a stripped down
version that doesn’t include the CSS and markup for sidebars, header,
or footer. The result was almost exactly what I wanted, but had two
problems.

The easiest problem to handle was the permalinks generated by Sphinx.
After each heading on the page, Sphinx inserts an anchor tag with a ¶
character and applies CSS styles that hide/show the tag when the user
hovers over it. That’s a nice feature for the main site and packaged
content, but they didn’t work for the blogs. I have no control over
the CSS used at O’Reilly, so the tags were always visible. I didn’t
really care if they were included on the Blogger pages, so the
simplest thing to do was stick with one “blogging” template and remove
the permalinks.

The second, more annoying, problem, was that Blogger wanted to insert
extra whitespace into the post. There is a configuration option on
Blogger to treat line breaks in the post as “paragraph breaks” (I
think they actually insert br tags). This is very convenient for
normal posts with mostly straight text, since I can simply write each
paragraph on one long line, wrapped visually by my editor, and break
the paragraphs where I want them. The result is I can almost post
directly from plain text input. Unfortunately, the option is applied
to every post in the blog (even old posts), so changing it was not a
realistic option – I wasn’t about to go back and re-edit every single
post I had previously written.

Sphinx didn’t have an option to skip generating the permalinks, and
there was no way to express that intent in the template, so I fell
back to writing a little script to strip them out after the fact. I
used BeautifulSoup
to find the tags I wanted removed, delete them from the parse tree,
then assemble the HTML text as a string again. I added code to the
same script to handle the whitespace issue by removing all newlines
from the input unless they were inside pre tags, which Blogger
handled correctly. The result was a single blob of partial HTML
without newlines or permalinks that I could post directly to either
blog without editing it by hand. Score a point for automation.

def clean_blog_html(body):
    # Clean up the HTML
    import re
    import sys
    from BeautifulSoup import BeautifulSoup
    from cStringIO import StringIO

    # The post body is passed to stdin.
    soup = BeautifulSoup(body)

    # Remove the permalinks to each header since the blog does not have
    # the styles to hide them.
    links = soup.findAll('a', attrs={'class':"headerlink"})
    [l.extract() for l in links]

    # Get BeautifulSoup's version of the string
    s = soup.__str__(prettyPrint=False)

    # Remove extra newlines.  This depends on the fact that
    # code blocks are passed through pygments, which wraps each part of the line
    # in a span tag.
    pattern = re.compile(r'([^\s][^p][^a][^n]>)\n$', re.DOTALL|re.IGNORECASE)
    s = ''.join(pattern.sub(r'\1', l) for l in StringIO(s))

    return s

Code Syntax Highlighting: pygments

I wanted my posts to look as good as possible, and an important factor
in the appearance would be the presentation of the source code. I
adopted pygments in the early hand-coded
HTML days, because it was easy to integrate into TextMate with a
simple script.

pygmentize -f html -O cssclass=syntax $@

Binding the command to a key combination meant with a few quick
keypresses I had HTML ready to insert into the body of a post.

When I moved to Sphinx, using pygments became even easier because
Sphinx automatically passes included source code through pygments as
it generates its output. Syntax highlighting works for HTML and PDF,
so I didn’t need any custom processing.

Automation: Paver

Automation is important for my sense of well being. I hate dealing
with mundane repetitive tasks, so once an article was written I didn’t
want to have to touch it to prepare it for publication of any of the
final destinations. As I have written before,
I started out using make to run various shell commands. I have
since converted the entire process to Paver.

The stock Sphinx integration that comes with Paver
didn’t quite meet my needs, but by examining the source I was able to
create my own replacement tasks in an afternoon. The main problem was
the tight coupling between the code to run Sphinx and the code to find
the options to pass to it. For normal projects with a single
documentation output format (Paver assumes HTML with a single config
file), this isn’t a problem. PyMOTW’s requirements are different,
with the four output formats discussed above.

In order to produce different output with Sphinx, you need different
configuration files. Since the base name for the file must always be
conf.py, that means the files have to be stored in separate
directories. One of the options passed to Sphinx on the command line
tells it the directory to look in for its configuration file. Even
though Paver doesn’t fork() before calling Sphinx, it still uses
the command line options to pass instructions.

Creating separate Sphinx configuration files was easy. The problem
was defining options in Paver to tell Sphinx about each configuration
directory for the different output. Paver options are grouped into
bundles, which are essentially a namespace. When a Paver task looks
for an option, it scans through the bundles, possibly cascading to the
global namespace, until it finds the option by name. The search can
be limited to specific bundles, so that the same option name can be
used to configure different tasks.
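
A small sketch of the idea (the bundle names and values here are made
up, not taken from the PyMOTW pavement.py):

from paver.easy import Bunch, options

options(
    sphinx=Bunch(docroot='.', builder='html'),
    website=Bunch(builder='dirhtml'),
)

# Search the 'website' bundle first, then 'sphinx', then the globals.
options.order('website', 'sphinx', add_rest=True)
print(options.builder)   # found in the 'website' bundle: 'dirhtml'
print(options.docroot)   # falls back to the 'sphinx' bundle: '.'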

The html task from paver.doctools sets the options search order to
look for values first in the sphinx section, then globally. Once
it has retrieved the path values, via _get_paths(), it invokes
Sphinx.

def _get_paths():
    """look up the options that determine where all of the files are."""
    opts = options
    docroot = path(opts.get('docroot', 'docs'))
    if not docroot.exists():
        raise BuildFailure("Sphinx documentation root (%s) does not exist."
                           % docroot)
    builddir = docroot / opts.get("builddir", ".build")
    builddir.mkdir()
    srcdir = docroot / opts.get("sourcedir", "")
    if not srcdir.exists():
        raise BuildFailure("Sphinx source file dir (%s) does not exist"
                            % srcdir)
    htmldir = builddir / "html"
    htmldir.mkdir()
    doctrees = builddir / "doctrees"
    doctrees.mkdir()
    return Bunch(locals())

@task
def html():
    """Build HTML documentation using Sphinx. This uses the following
    options in a "sphinx" section of the options.

    docroot
      the root under which Sphinx will be working. Default: docs
    builddir
      directory under the docroot where the resulting files are put.
      default: build
    sourcedir
      directory under the docroot for the source files
      default: (empty string)
    """
    options.order('sphinx', add_rest=True)
    paths = _get_paths()
    sphinxopts = ['', '-b', 'html', '-d', paths.doctrees,
        paths.srcdir, paths.htmldir]
    dry("sphinx-build %s" % (" ".join(sphinxopts),), sphinx.main, sphinxopts)

This didn’t work for me because I needed to pass a separate
configuration directory (not handled by the default _get_paths())
and different build and output directories. The simplest solution
turned out to be re-implementing the Paver-Sphinx integration to make
it more flexible. I created my own _get_paths() and made it look
for the extra option values and use the directory structure I needed.

def _get_paths():
    """look up the options that determine where all of the files are."""
    opts = options

    docroot = path(opts.get('docroot', 'docs'))
    if not docroot.exists():
        raise BuildFailure("Sphinx documentation root (%s) does not exist."
                           % docroot)

    builddir = docroot / opts.get("builddir", ".build")
    builddir.mkdir()

    srcdir = docroot / opts.get("sourcedir", "")
    if not srcdir.exists():
        raise BuildFailure("Sphinx source file dir (%s) does not exist"
                            % srcdir)

    # Where is the sphinx conf.py file?
    confdir = path(opts.get('confdir', srcdir))

    # Where should output files be generated?
    outdir = opts.get('outdir', '')
    if outdir:
        outdir = path(outdir)
    else:
        outdir = builddir / opts.get('builder', 'html')
    outdir.mkdir()

    # Where are doctrees cached?
    doctrees = opts.get('doctrees', '')
    if not doctrees:
        doctrees = builddir / "doctrees"
    else:
        doctrees = path(doctrees)
    doctrees.mkdir()

    return Bunch(locals())

Then I defined a new function, run_sphinx(), to set up the options
search path, look for the option values, and invoke Sphinx. I set
add_rest to False to disable searching globally for an option to
avoid namespace pollution from option collisions, since I knew I was
going to have options with the same names but different values for
each output format. I also look for a “builder”, to support PDF
generation.

def run_sphinx(*option_sets):
    """Helper function to run sphinx with common options.

    Pass the names of namespaces to be used in the search path
    for options.
    """
    if 'sphinx' not in option_sets:
        option_sets += ('sphinx',)
    kwds = dict(add_rest=False)
    options.order(*option_sets, **kwds)
    paths = _get_paths()
    sphinxopts = ['',
                  '-b', options.get('builder', 'html'),
                  '-d', paths.doctrees,
                  '-c', paths.confdir,
                  paths.srcdir, paths.outdir]
    dry("sphinx-build %s" % (" ".join(sphinxopts),), sphinx.main, sphinxopts)
    return

With a working run_sphinx() function I could define several
Sphinx-based tasks, each taking options with the same names but from
different parts of the namespace. The tasks simply call
run_sphinx() with the desired namespace search path. For example,
to generate the HTML to include in the sdist package, the html
task looks in the html bunch:

@task
@needs(['cog'])
def html():
    """Build HTML documentation using Sphinx. This uses the following
    options in a "sphinx" section of the options.

    docroot
      the root under which Sphinx will be working.
      default: docs
    builddir
      directory under the docroot where the resulting files are put.
      default: build
    sourcedir
      directory under the docroot for the source files
      default: (empty string)
    doctrees
      the location of the cached doctrees
      default: $builddir/doctrees
    confdir
      the location of the sphinx conf.py
      default: $sourcedir
    outdir
      the location of the generated output files
      default: $builddir/$builder
    builder
      the name of the sphinx builder to use
      default: html
    """
    set_templates(options.html.templates)
    run_sphinx('html')
    return

while generating the HTML output for the website uses a different set
of options from the website bunch:

@task
@needs(['webtemplatebase', 'cog'])
def webhtml():
    """Generate HTML files for website.
    """
    set_templates(options.website.templates)
    run_sphinx('website')
    return

All of the option search paths also include the sphinx bunch, so
values that do not change (such as the source directory) do not need
to be repeated. The relevant portion of the options from the PyMOTW
pavement.py file looks like this:

options(
    # ...

    sphinx = Bunch(
        sourcedir=PROJECT,
        docroot = '.',
        builder = 'html',
        doctrees='sphinx/doctrees',
        confdir = 'sphinx',
    ),

    html = Bunch(
        builddir='docs',
        outdir='docs',
        templates='pkg',
    ),

    website=Bunch(
        templates = 'web',
        #outdir = 'web',
        builddir = 'web',
    ),

    pdf=Bunch(
        templates='pkg',
        #outdir='pdf_output',
        builddir='web',
        builder='latex',
    ),

    blog=Bunch(
        sourcedir=path(PROJECT)/MODULE,
        builddir='blog_posts',
        outdir='blog_posts',
        confdir='sphinx/blog',
        doctrees='blog_posts/doctrees',
    ),

    # ...
)

To find the sourcedir for the html task, _get_paths() first
looks in the html bunch, then the sphinx bunch.

Capturing Program Output: cog

As an editor at Python Magazine, and reviewer for several books, I’ve
discovered that one of the most frequent sources of errors with
technical writing occurs in the production process where the output of
running sample code is captured to be included in the final text.
This is usually done manually by running the program and copying and
pasting its output from the console. It’s not uncommon for a bug to
be found, or a library to change, requiring a change in the source
code provided with the article. That change, in turn, means the
output of commands may be different. Sometimes the change is minor,
but at other times the output is different in some significant way.
Since I’ve seen the problem come up so many times, I spent time
thinking about and looking for a solution to avoid it in my own work.

During my research, a few people suggested that I switch to using
doctests for my examples, but I felt there were several problems with
that approach. First, the doctest format isn’t very friendly for
users who want to copy and paste examples into their own scripts. The
reader has to select each line individually, and can’t simply grab the
entire block of code. Distributing the examples as separate scripts
makes this easier, since they can simply copy the entire file and
modify it as they want. Using individual .py files also makes it
possible for some of the more complicated examples to run clients and
servers at the same time from different scripts (as with
SimpleXMLRPCServer, for
example). But most importantly, using doctests does not solve the
fundamental problem. Doctests tell me when the output has changed,
but I still have to manually run the scripts to generate that output
and paste it into my document in the first place. What I really
wanted to be able to do was run the script and insert the output,
whatever it was, without manually copying and pasting text from the
console.

I finally found what I was looking for in cog, from Ned Batchelder. Ned
describes cog as a “code generation tool”, and most of the examples he
provides on his site are in that vein. But cog is a more general
purpose tool than that. It gives you a way to include arbitrary
Python instructions in your source document, have them executed, and
then have the source document change to reflect the output.
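
As a generic illustration (not taken from the PyMOTW sources), a file
processed by cog with the default markers might contain:

[[[cog
for name in ('lion', 'tiger', 'bear'):
    cog.outl('* %s' % name)
]]]
[[[end]]]

After running cog, the generated lines appear between the closing
marker and the end marker, and re-running cog replaces them in place:

[[[cog
for name in ('lion', 'tiger', 'bear'):
    cog.outl('* %s' % name)
]]]
* lion
* tiger
* bear
[[[end]]]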

For each code sample, I wanted to include the Python source followed
by the output it produces when run on the console. There is a reST
directive to include the source file, so that part is easy:

.. include:: anydbm_whichdb.py
    :literal:
    :start-after: #end_pymotw_header

The include directive tells Sphinx that the file
“anydbm_whichdb.py” should be treated as a literal text block (instead
of more reST) and to only include the parts following the last line of
the standard header I use in all my source code. Syntax highlighting
comes for free when the literal block is converted to the output
format.

Grabbing the command output was a little trickier. Normally with cog,
one would embed the actual source to be run in the document. In my
case, I had the text in an external file. Most of the source is
Python, and I could just import it, but I would have to go to special
lengths to capture any output and pass it to cog.out(), the cog
function for including text in the processed document. I didn’t want
my example code littered with calls to cog.out() instead of
print, so I needed to capture sys.stdout and sys.stderr. A bigger
question was whether I wanted to have all of the sample files imported
into the namespace of the build process. Considering both issues, it
made sense to run the script in a separate process and capture the
output.

There is a bit of setup work needed to run the scripts this way, so I
decided to put it all into a function instead of including the
boilerplate code in every cog block. The reST source for running
anydbm_whichdb.py looks like:

.. {{{cog
.. cog.out(run_script(cog.inFile, 'anydbm_whichdb.py'))
.. }}}
.. {{{end}}}

The .. at the start of each line causes the reStructuredText
parser to treat the line as a comment, so it is not included in the
output. After passing the reST file through cog, it is rewritten to
contain:

.. {{{cog
.. cog.out(run_script(cog.inFile, 'anydbm_whichdb.py'))
.. }}}

::

    $ python anydbm_whichdb.py
    dbhash

.. {{{end}}}

The run_script() function runs the python script it is given, adds
a prefix to make reST treat the following lines as literal text, then
indents the script output. The script is run via Paver’s sh()
function, which wraps the subprocess module and supports the dry-run
feature of Paver. Because the cog instructions are comments, the only
part that shows up in the output is the literal text block with the
command output.

def run_script(input_file, script_name,
                interpreter='python',
                include_prefix=True,
                ignore_error=False,
                trailing_newlines=True,
                ):
    """Run a script in the context of the input_file's directory,
    return the text output formatted to be included as an rst
    literal text block.

    Arguments:

     input_file
       The name of the file being processed by cog.  Usually passed as cog.inFile.

     script_name
       The name of the Python script living in the same directory as input_file to be run.
       If not using an interpreter, this can be a complete command line.  If using an
       alternate interpreter, it can be some other type of file.

     include_prefix=True
       Boolean controlling whether the :: prefix is included.

     ignore_error=False
       Boolean controlling whether errors are ignored.  If not ignored, the error
       is printed to stdout and then the command is run *again* with errors ignored
       so that the output ends up in the cogged file.

     trailing_newlines=True
       Boolean controlling whether the trailing newlines are added to the output.
       If False, the output is passed to rstrip() then one newline is added.  If
       True, newlines are added to the output until it ends in two newlines.
    """
    rundir = path(input_file).dirname()
    if interpreter:
        cmd = '%(interpreter)s %(script_name)s' % vars()
    else:
        cmd = script_name
    real_cmd = 'cd %(rundir)s; %(cmd)s 2>&1' % vars()
    try:
        output_text = sh(real_cmd, capture=True, ignore_error=ignore_error)
    except Exception, err:
        print '*' * 50
        print 'ERROR run_script(%s) => %s' % (real_cmd, err)
        print '*' * 50
        output_text = sh(real_cmd, capture=True, ignore_error=True)
    if include_prefix:
        response = '\n::\n\n'
    else:
        response = ''
    response += '\t$ %(cmd)s\n\t' % vars()
    response += '\n\t'.join(output_text.splitlines())
    if trailing_newlines:
        while not response.endswith('\n\n'):
            response += '\n'
    else:
        response = response.rstrip()
        response += '\n'
    return response

I defined run_script() in my pavement.py file, and added it to the
__builtins__ namespace to avoid having to import it each time I
wanted to use it from a source document.
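
Roughly, the injection looks like the following sketch; using the
__builtin__ module is one way to publish the name globally, and the
exact form of the assignment is an assumption:

# Sketch: make run_script() callable from any cogged document without
# an import.  Assigning to the __builtin__ module makes the name
# visible everywhere in the process running Paver and cog.
import __builtin__

__builtin__.run_script = run_script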

A somewhat more complicated example shows another powerful feature of
cog. Because it can run any arbitrary Python code, it is possible to
establish the preconditions for a script before running it. For
example, anydbm_new.py assumes that its output database does not
already exist. I can ensure that condition by removing it before
running the script.

.. {{{cog
.. workdir = path(cog.inFile).dirname()
.. sh("cd %s; rm -f /tmp/example.db" % workdir)
.. cog.out(run_script(cog.inFile, 'anydbm_new.py'))
.. }}}
.. {{{end}}}

Since cog is integrated into Paver, all I had to do to enable it was
define the options and import the module. I chose to change the begin
and end tags used by cog because the default patterns ([[[cog and
]]]) appeared in the output of some of the scripts (printing
nested lists, for example).

cog=Bunch(
    beginspec='{{{cog',
    endspec='}}}',
    endoutput='{{{end}}}',
),

To process all of the input files through cog before generating the
output, I added ‘cog’ to the @needs list for any task running
sphinx. Then it was simply a matter of running paver html or paver
webhtml to generate the output.
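
For illustration, a task wired up that way might look like the
following sketch; set_templates() and run_sphinx() are the same
pavement.py helpers used by the pdf task shown later, and
options.html.templates is an assumed option name:

# Sketch of an HTML task that depends on cog, so every source file
# is cogged before Sphinx runs.
@task
@needs(['cog'])
def html():
    """Generate the HTML version of the documentation.
    """
    set_templates(options.html.templates)
    run_sphinx('html')
    return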

Paver includes an uncog task to remove the cog output from your
source files before committing to a source code repository, but I
decided to include the cogged values in committed versions so I would
be alerted if the output ever changed.

Generating PDF: TeX Live

Generating HTML using Sphinx and Jinja templates is fairly
straightforward; PDF output wasn’t quite so easy to set up. Sphinx
actually produces LaTeX, another text-based format, as output, along
with a Makefile to run third-party LaTeX tools to create the PDF. I
started out experimenting on a Linux system (normally I use a Mac, but
this box claimed to have the required tools installed). Due to the
age of the system, however, the tools weren’t compatible with the
LaTeX produced by Sphinx. After some searching, and asking on the
sphinx-dev mailing list, I installed a copy of TeX Live, a newer TeX distro. A few tweaks to
my $PATH later and I was in business building PDFs right on my
Mac.

My pdf task runs Sphinx with the “latex” builder, then runs
make using the generated Makefile.

@task
@needs(['cog'])
def pdf():
    """Generate the PDF book.
    """
    set_templates(options.pdf.templates)
    run_sphinx('pdf')
    latex_dir = path(options.pdf.builddir) / 'latex'
    sh('cd %s; make' % latex_dir)
    return

I still need to experiment with some of the LaTeX options, including
templates for pages in different sizes, logos, and styles. For now
I’m happy with the default look.

Releasing

Once I had the “build” fully automated, it was time to address the
distribution process. For each version, I need to:

  • upload HTML, PDF, and tar.gz files to my server
  • update PyPI
  • post to my blog
  • post to the O’Reilly blog

The HTML and PDF files are copied to my server using rsync, invoked
from Paver. I use a web browser and the admin interface for
django-codehosting to upload the
tar.gz file containing the source distribution manually. That will be
automated, eventually. Once the tar.gz is available, PyPI can be
updated via the built-in paver register task. That just leaves the
two blog posts.
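
The rsync step itself is just another shell command run through
Paver’s sh(). A sketch of that piece, with hypothetical option names
and a hypothetical destination, might look like:

# Sketch: copy the generated HTML and PDF to the web server.
# The options.website.* names and the remote path are hypothetical.
@task
def installwebsite():
    """Copy the generated HTML and PDF to the web server.
    """
    sh('rsync -av --delete %s/ %s' % (options.website.builddir,
                                      options.website.server_path))
    return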

For my own blog, I use MarsEdit to post and edit entries. I
find the UI easy to use, and I like the ability to work on drafts of
posts offline. It is much nicer than the web interface for Blogger,
and has the benefit of being AppleScript-able. I have plans to
automate all of the steps right up to actually posting the new blog
entry, but for now I copy the generated blog entry into a new post
window by hand.

O’Reilly’s blogging policy does not allow desktop clients (too much of
a support issue for the tech staff), so I need to use their Movable
Type web UI to post. As with MarsEdit, I simply copy the output and
paste it into the field in the browser window, then add tags.

Tying it All Together

A quick overview of my current process is:

  1. Pick a module, research it, and write examples in reST and Python.
    Include the Python source and use cog directives to bring in the
    script output.
  2. Use the command “paver html” to produce HTML output to verify the
    results look good and I haven’t messed up any markup.
  3. Commit the changes to svn. When I’m done with the module, copy the
    “trunk” to a release branch for packaging.
  4. Use “paver sdist” to create the tar.gz file containing the Python
    source and HTML documentation.
  5. Upload the tar.gz file to my site.
  6. Run “paver installwebsite” to regenerate the hosted version of the
    HTML and the PDF, then copy both to my web server.
  7. Run “paver register” to update PyPI with the latest release
    information.
  8. Run “paver blog” to generate the HTML to be posted to the blogs.
    The task opens a new TextMate window containing the HTML so it is
    ready to be copied.
  9. Paste the blog post contents into MarsEdit, add tags, and send it
    to Blogger.
  10. Paste the blog post contents into the MT UI for O’Reilly, add
    tags, verify that it renders properly, then publish.

Try It Yourself

All of the source for PyMOTW (including the pavement.py file with
configuration options, task definitions, and Sphinx integration) is
available from the PyMOTW web site. Sphinx, Paver, cog, and
BeautifulSoup are all open source projects. I’ve only tested the
PyMOTW “build” on Mac OS X, but it should work on Linux without any
major alterations. If you’re on Windows, let me know if you get it
working.

Originally published on my blog, 2 February 2009

Python Module of the Week, meet reST and Sphinx

Release 1.65 of the Python Module of the Week includes HTML
documentation created with Sphinx from reStructuredText versions of all
of the posts so far. The documentation is included in the source
download, or you can browse it online from the PyMOTW home page.

I have been a long-time StructuredText user, using it with Zope and
some home-grown tools to produce DocBook output for dead-tree
documentation. When reST was being created, I dismissed it as
comparatively ugly and overly complicated. The quality of the toolset
surrounding it now makes it a very attractive alternative for generating
documentation, especially if you need to produce different versions or
multiple output formats. After seeing the power and features it has that
regular ST doesn’t, I’m a convert.

A big “Thank you!” goes out to John Benediktsson for doing the
original HTML-to-reST conversion. It would have taken me ages to do it
myself, so if it was left up to me alone it probably never would have
been done.

The new versions of the docs online on the PyMOTW home page are in
a form that should be easier to browse than my blog archives. There is
still some work to be done to make the content consistent, mostly due to
my writing style and approach evolving over time. I’ll be tackling those
updates bit by bit, and converting the 2-3 modules that haven’t been
converted at all yet. I am releasing what I have now because I wanted to
finish the major portion of the migration this weekend.

The Sphinx development team also deserves a big “Thank you!” from
me. I’ve been using Sphinx at my day job to produce some documentation,
and found that I liked it enough that it was the obvious choice for the
PyMOTW site conversion. Incredibly, the Django base template I use for
the rest of my site worked the first time with the Jinja template engine
used by Sphinx. I had to clean up some styles, but the template didn’t
bomb out and produced good HTML right away.

need help with sphinx and LaTeX

Dear Lazy Web,

I’ve started using sphinx to produce some documentation at work.
The HTML output looks good, and I have the templating system figured out
so I can change it to look the way we want. We also want to produce a
PDF, and that’s where I’m stuck. It looks like I need to go through
LaTeX, then convert that to PDF. I’m a complete neophyte when it comes
to TeX, though, so I’m not even sure where to start.

When I search for things like “converting TeX to PDF”, I find some old
posts (c. 2005) about how some tools use bitmapped fonts and “look
terrible” or vague instructions like “Convert your TeX file to dvi in
the usual way.” I don’t have a usual way yet, so that doesn’t
help me.

Can someone suggest a useful reference manual or starting point for
me for using LaTeX under Linux?

I don’t actually care about LaTeX, so if there’s some other way to get
nice looking PDF output from a Sphinx document tree, that information
would be helpful, too.

Thanks in advance!