git-os-job 1.1.1

The OpenStack project stores the logs for all of the test jobs related to a commit, organized by the commit hash. To review the logs after a job runs, most developers start with the message Jenkins leaves on Gerrit and click through to the log files. Not all Jenkins jobs are triggered by or related to a Gerrit review, though (e.g., release tags).

git-os-job makes it easy to find those logs by finding the hash of the commit and using it to build the right URL. It then either prints the URL or opens it directly in a web browser.
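
The heart of that workflow can be sketched in a few lines of Python. This is an illustrative sketch, not the tool's actual code: the base URL, the hash-prefixed path layout, and the helper names are all my own assumptions:

```python
import subprocess

def build_url(sha, base="http://logs.openstack.org"):
    # Hypothetical layout: group logs by the first two
    # characters of the commit hash, then the full hash.
    return "%s/%s/%s/" % (base, sha[:2], sha)

def commit_log_url(ref="HEAD"):
    # Resolve the ref to a full commit hash, decoding the
    # subprocess output from bytes exactly once.
    sha = subprocess.check_output(
        ["git", "rev-parse", ref]).decode("utf-8").strip()
    return build_url(sha)
```

From there, printing the URL or handing it to the standard webbrowser module covers both behaviors the tool offers.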

What’s new in 1.1.1?

  • don’t decode bytes to unicode twice

git-os-job 1.1.0

The OpenStack project stores the logs for all of the test jobs related to a commit, organized by the commit hash. To review the logs after a job runs, most developers start with the message Jenkins leaves on Gerrit and click through to the log files. Not all Jenkins jobs are triggered by or related to a Gerrit review, though (e.g., release tags).

git-os-job makes it easy to find those logs by finding the hash of the commit and using it to build the right URL. It then either prints the URL or opens it directly in a web browser.

What’s new in 1.1.0?

  • add --reverse option to go from a log URL to the review URL
  • decode command output for Python 3 support
  • add -u alias for the --url command

Stop Working So Hard: Scaling Open Source Community Practices

Lately, I have been revising some of the OpenStack community’s processes to make them more sustainable. As we grew over the last 7 years to have more than 2,000 individual contributors to the current release, some practices that worked when they were implemented have begun causing trouble for us now that our community is changing in different ways. My goal in reviewing those practices is to find ways to eliminate the challenges.

OpenStack is developed by a collection of project teams, most of which focus on a feature-related area, such as block storage or networking. The areas where we have most needed to change intersect with all of those teams, such as release management and documentation. Although the teams responsible for those tasks have tended to be small, their members have been active and dedicated. At times that dedication has masked the near-heroic level of effort they were making to keep up with the work load.

When someone is overloaded in a corporate environment, where tasks are assigned and the performance and workload of team members are reviewed regularly, the employee can appeal to management for help. The solution may be to hire or assign new contributors, change the project schedule, or make a short-term trade-off that incurs technical debt. However, open source projects are largely driven by volunteers, so assigning people to work on a task isn’t an option. Even in a sponsor-driven community such as OpenStack, where many contributors are paid to work on the project, sponsors typically give their contributors a relatively narrow mandate for how they can spend their time. Changing the project schedule is always an option, but if there are no volunteers for a task today, there is no guarantee volunteers will appear tomorrow, so it may not help.

We must use a different approach to eliminate the need for heroic effort.

Continue reading “Stop Working So Hard: Scaling Open Source Community Practices”

Lessons learned from working on large scale, cross-project initiatives in OpenStack

I have been involved with OpenStack development since just before the Folsom summit in 2012. Over that time, I have participated in innumerable discussions about three big features tied to OpenStack logging: translating log messages, adding request IDs to log messages, and adding unique message IDs to log messages. We have had varying degrees of success with the design, implementation, and ongoing maintenance of all three features, and the reasons for success or failure in each case provide helpful insight into how to approach changes with large community and product scope, insight worth considering before our next discussion at the summit/forum in Boston in 2017.
Continue reading “Lessons learned from working on large scale, cross-project initiatives in OpenStack”

Driving OpenStack via Ansible

Last week I spoke at the Atlanta OpenStack meetup about “Driving OpenStack via Ansible,” in which I introduced Ansible as a tool and talked about its ability to integrate with OpenStack. As part of the presentation I used two playbooks to launch VMs on a cloud and configure them with different applications. We walked through the playbooks and talked about what they were doing, the things that tripped me up while writing them, and then brainstormed ways to use Ansible in situations that have come up for members of the meetup.

One playbook uses my role to install ZNC, the popular IRC “bouncer,” for maintaining a persistent chat presence. The other demo was based on a playbook with the roles needed to configure a server for OpenStack development, ready to run devstack.

The slides are available, and you can download the playbooks from the github repository and try them yourself.

We used Dreamhost’s public cloud, DreamCompute, for the demo at the meetup. Thanks to the DreamHost crew for providing those resources!

Continue reading “Driving OpenStack via Ansible”

How OpenStack Makes Python Better, and Vice-Versa

I’ve been a Python developer since the late 1990s, and I came to the OpenStack project from that long background in the wider Python community. Thierry Carrez is on the staff of the OpenStack Foundation and the chair of the OpenStack Technical Committee. He came to Python through OpenStack’s adoption of the language.

At EuroPython 2016, we delivered a presentation titled “How OpenStack Makes Python Better, and Vice-Versa”, discussing the history of the decision to use Python, our perspectives on how the two communities benefit each other, and how we can improve the relationship. The recording of the presentation, and the slides, are below.

Continue reading “How OpenStack Makes Python Better, and Vice-Versa”

OpenStack contributions to other open source projects

As part of preparing for the talk I will be giving with Thierry Carrez at EuroPython 2016 next month, I wanted to put together a list of some of the projects members of the OpenStack community contribute to outside of things we think of as being part of OpenStack itself. I started by brainstorming myself, but I also asked the community to help me out. I limited my query to projects that somehow touched OpenStack, since what I am trying to establish is that OpenStack contributors identify needs we have, and do the work “upstream” in other projects where appropriate.

OpenStack has many facets, and as a result has pulled in contributors from many parts of the industry. A large number of them are also members of other open source communities, so it’s no surprise that even with only a few respondents to my question (most of them privately, off-list) we came up with a reasonably long list of other projects where we’ve made contributions. I did not make a distinction between the types of contributions, so this list includes everything from bug reports and triage to documentation to code patches for bug fixes or new features. In several cases, the projects came into existence entirely driven by OpenStack’s needs but have found wide adoption outside of our immediate community.

Python Packaging

  • packaging
  • pip
  • setuptools
  • wheel

Python Web Tools

  • Pecan
  • requests
  • WebOb
  • Werkzeug
  • wsgi-intercept
  • WSME

Python Database and Data tools

  • alembic
  • python-memcache
  • Ming
  • Pandas
  • redis-py
  • SQLAlchemy

Python Testing

  • fixtures
  • testtools
  • testrepository
  • tox

Other Python libs and tools

  • APScheduler
  • dogpile
  • eventlet
  • iso8601
  • jaraco.itertools
  • ldappool
  • Mako
  • pykerberos
  • pysaml2
  • retrying
  • sphinxcontrib-datatemplates
  • six

Python Interpreters

  • CPython
  • PyPy (in the past)

Messaging and Coordination

  • kazoo
  • kombu
  • pyngus
  • qpid
  • RabbitMQ

JavaScript

  • AngularJS
  • Registry-static
  • “other JS libraries”

Deployment, Automation, and Orchestration Tools

  • Ansible
  • Ansible modules for OpenStack
  • Puppet & Puppet Modules
  • Chef modules for OpenStack
  • saltstack

Operating System Tools

  • cloud-init
  • dpkg
  • libosinfo
  • Linux kernel
  • LUKS disk encryption
  • systemd

Virtualization

  • kvm
  • libguestfs
  • libvirt
  • qemu

Networking

  • Dibbler (DHCP)
  • OVS
  • OpenDaylight

Containers

  • Docker
  • Kubernetes
  • openvz

Testing and Developer Tools

  • gabbi
  • gerrit
  • Zuul
  • Jenkins Job Builder

Cloud Tools

  • fog
  • libcloud
  • nodepool
  • owncloud
  • phpopencloud
  • pkgcloud

Linux Distributions

  • Ubuntu
  • Red Hat
  • Debian
  • Fedora
  • Gentoo
  • SuSE

Other Tools

  • caimito (WebDAV front-end for object storage)
  • Corosync (cluster & HA synchronization)
  • Etherpad-lite
  • greenlet
  • jaraco-tools
  • MySQL
  • Zanata (translation tools)

Updated 23 June to add Kubernetes to the list of container projects.

Updated 24 June to add pysaml2 to the list of Python libraries.

OpenStack Mitaka Release Complete

I’m excited to announce the final releases for the components of OpenStack Mitaka, which conclude the 6-month Mitaka development cycle.

You will find a complete list of all components, their latest versions, and links to individual project release notes documents listed on the new release site.

Congratulations to all of the teams who have contributed to this release!

I also want to extend a big thank you to Ajaeger and jhesketh for their help with build issues today, and to ttx and dims for their help with release work all this cycle and especially over the last couple of weeks. 

Thank you!

Our next production cycle, Newton, has already started. We will meet in Austin April 25-29 at the Newton Design Summit to plan the work for the upcoming cycle. I hope to see you there!


OpenStack Release Management Changes for Mitaka Retrospective

The OpenStack Release Management team has been focusing a lot on automation during the Mitaka release cycle. At the start of the cycle we set out our goals, and we’re making good progress toward completing them. We’ve made quite a few supporting changes to the release process, with considerable help from the Infrastructure team.

Standard Release Models

We have standardized projects on one of three release “models” describing the frequency of releases. The models are applied to projects using tags in the governance documentation, to communicate that information to consumers of the projects.

  • The release:cycle-with-milestones model is the one most folks are used to, since it is the model most projects have had until fairly recently. Projects following it prepare pre-release deliverables at set times during the cycle, with a final full release at the end.
  • The release:cycle-with-intermediary model allows projects to make full releases at any point in the cycle, with a final release at the end forming the basis of a stable branch. Most of our libraries use this model, and a few server projects such as Ironic and Swift follow it, too.
  • The release:independent model is for projects not tied to the release cycle, including tools used while working on OpenStack that are not part of a production deployment and newer projects that have not yet synced their development with the release cycle. These projects typically do not have a stable branch for a release series (though they may have other stable branches), and they do not follow the deadlines.

Standard Versioning Scheme

We have also standardized all version numbers so that we use “semantic versioning.” Semantic versioning allows someone comparing two version numbers for the same project to tell the relative difference between the releases. All of our versions include three components: the major, minor, and patch levels. When a new release includes only bug fixes, the patch level is incremented. That indicates that the new package should be compatible with the previous release and can safely be dropped in as a replacement. When a release includes new features or changes in dependencies, the minor version number is incremented. These new packages should also be compatible, but may require updating other components. And when a new release is known to be incompatible, the major version number is incremented. Many of our server projects will do this each cycle to indicate that upgrade work is expected, and libraries do it when incompatible API changes are made.
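
Those rules can be sketched as a small helper for computing the next version number. The function name and change categories here are my own for illustration, not part of any OpenStack tooling:

```python
def next_version(current, change):
    """Return the next semantic version as a (major, minor, patch)
    tuple, given the most significant kind of change in the release."""
    major, minor, patch = current
    if change == "incompatible":      # breaking API change
        return (major + 1, 0, 0)
    if change == "feature":           # new features or dependency changes
        return (major, minor + 1, 0)
    return (major, minor, patch + 1)  # bug fixes only
```

For example, a bug-fix release after 2.4.1 becomes 2.4.2, while an incompatible change bumps it to 3.0.0.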

Communication Changes

We have made some important changes to the way we communicate information about releases. The release management team no longer maintains milestone series pages for each project on Launchpad. Instead, we have created a new releases site and link to the release artifacts there. This new site makes it easier for consumers of OpenStack projects to find all of the related releases for a given series, and it is easier to update with our new automation. Most project teams are still using Launchpad for their planning, but many are not, and it is no longer required. Bug reports for projects should still be filed through Launchpad.

Reviewable Release Requests

The releases site is built from the same data files that power the new automated release process. We’re not quite 100% automated, so we still have a couple of manual steps, but the entire process is set up to support reviews and to allow anyone on the release team to process a release request from any project. Release liaisons or PTLs for a project can submit a patch to the openstack/releases repository describing the new release they want. The release team then reviews that patch, helping to ensure that the semantic versioning rules are applied correctly, that the release includes all of the changes the team wants in it, and that the timing for the release is “good” (we try not to release new versions of libraries late in the week, for example). When the request is approved, a new release artifact is created and published to our release artifact site (and, for Python libraries, to PyPI).

The release process works the same for all branches, and is simple enough that we are now releasing stable branch updates more often than in the past. That should make it more appealing for downstream consumers to recommend fixes to be back-ported (and, we hope, submit the patches to do so), because they will see a new release including that fix more quickly than in previous cycles where we waited for pre-arranged milestones to create new releases.

Release Notes

The new releases web site also includes links to the release notes for projects. We are still updating the site to link to the new Mitaka release notes pages, but the links to the Liberty notes are in place. Most projects are using a new tool to manage the release notes in the source tree with the other parts of the project, instead of separately in the wiki. The new tool, reno, makes it easy for contributors to write notes as they fix bugs or add features, which means we should have more complete and detailed release notes by the end of the cycle. Because the notes are with the source code, they are also copied into branches when fixes are back-ported, so the release notes for stable releases will be updated automatically.

Dependency Management

Aside from the release automation, the team has also been working on managing the way we handle dependencies for projects, to make our CI systems more reliable. In the past we have had issues introduced when new versions of packages were released (sometimes even our own). A breaking change to an API, or even a minor bug in a library, would cause all sorts of test jobs to fail, blocking the work of other contributors until a fix was prepared. The new system being rolled out uses a set of constraints to indicate exactly which versions of packages should be used in the test jobs. Each project still declares compatibility with a range of versions for each of its dependencies, allowing flexibility for deployers and packagers. But in our upstream test jobs, the constraints reduce the impact of new releases on our test jobs and give us a way to verify that a new version works before adding it to the test system. Constraints are already in place for the integration tests, and the tools are there for projects to add them for unit tests as well. We have a few early adopters who have set that up, and we’ll be encouraging other teams to do so during the Newton cycle.
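
The idea can be illustrated with a small sketch: requirements files declare version ranges, while a constraints file pins each dependency to the single version used in test jobs. The `===` pin style matches what pip accepts; the parsing helper below is hypothetical, and real constraints files can carry markers this sketch ignores:

```python
def pinned_versions(constraints_text):
    """Parse a constraints file body into {package: exact version}.

    Assumes one "name===version" pin per line, skipping blank
    lines and comments.
    """
    pins = {}
    for line in constraints_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, version = line.partition("===")
        pins[name] = version
    return pins
```

A test job would then install with something like `pip install -c upper-constraints.txt -r requirements.txt`, so the range in requirements stays flexible while the gate always tests one known-good version.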

Looking Ahead to Newton

We didn’t finish everything we set out to do in Mitaka, so some of that work will carry over to our next cycle. We need to finish the release automation by working with the Infrastructure team so that approving a release request in gerrit triggers the release and there are no more manual steps. When that work is complete, we plan to expand the release automation beyond projects with the release:managed tag so that it is used by default for all official projects. We also plan to implement translations for release notes, an important feature we had for past releases but that did not make it into this one.