Driving OpenStack via Ansible

Last week I spoke at the Atlanta OpenStack meetup about “Driving OpenStack via Ansible,” in which I introduced Ansible as a tool and talked about its ability to integrate with OpenStack. As part of the presentation I used two playbooks to launch VMs on a cloud and configure them with different applications. We walked through the playbooks, talked about what they were doing and the things that tripped me up while writing them, and then brainstormed ways to use Ansible in situations that have come up for members of the meetup.

One playbook uses my role to install ZNC, the popular IRC “bouncer,” for maintaining a persistent chat presence. The other demo was based on a playbook with the roles needed to configure a server for OpenStack development, ready to run devstack.
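
The Ansible OpenStack modules those playbooks rely on are built on a Python client library (shade at the time; openstacksdk is its successor), so the same launch step can also be scripted directly. Here is a rough sketch of booting a server with openstacksdk; the cloud, image, flavor, and key names are placeholders, not values from the demo:

    import openstack  # pip install openstacksdk

    # Credentials come from a clouds.yaml entry; "demo" is a placeholder name.
    conn = openstack.connect(cloud="demo")

    # Boot a server and wait for it to become active, roughly what the
    # playbooks do before applying the ZNC or devstack roles.
    server = conn.create_server(
        name="znc-host",
        image="ubuntu-16.04",   # placeholder image name
        flavor="gp1.subsonic",  # placeholder flavor name
        key_name="my-key",      # placeholder keypair name
        wait=True,
        auto_ip=True,
    )
    print(server.name, server.status)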

The slides are available, and you can download the playbooks from the GitHub repository and try them yourself.

We used DreamHost’s public cloud, DreamCompute, for the demo at the meetup. Thanks to the DreamHost crew for providing those resources!

Continue reading Driving OpenStack via Ansible

How OpenStack Makes Python Better, and Vice-Versa

I’ve been a Python developer since the late 1990s, and I came to the OpenStack project from that long background in the wider Python community. Thierry Carrez is on the staff of the OpenStack Foundation and the chair of the OpenStack Technical Committee. He came to Python through OpenStack’s adoption of the language.

At EuroPython 2016, we delivered a presentation titled “How OpenStack Makes Python Better, and Vice-Versa”, discussing the history of the decision to use Python, our perspectives on how the two communities benefit each other, and how we can improve the relationship. The recording of the presentation, and the slides, are below.

Continue reading How OpenStack Makes Python Better, and Vice-Versa

OpenStack contributions to other open source projects

As part of preparing for the talk I will be giving with Thierry Carrez at EuroPython 2016 next month, I wanted to put together a list of some of the projects members of the OpenStack community contribute to outside of things we think of as being part of OpenStack itself. I started by brainstorming myself, but I also asked the community to help me out. I limited my query to projects that somehow touched OpenStack, since what I am trying to establish is that OpenStack contributors identify needs we have, and do the work “upstream” in other projects where appropriate.

OpenStack has many facets, and as a result has pulled in contributors from many parts of the industry. A large number of them are also members of other open source communities, so it’s no surprise that even with only a few respondents to my question (most of them privately, off-list) we came up with a reasonably long list of other projects where we’ve made contributions. I did not make a distinction between the types of contributions, so this list includes everything from bug reports and triage to documentation to code patches for bug fixes or new features. In several cases, the projects came into existence entirely driven by OpenStack’s needs but have found wide adoption outside of our immediate community.

Python Packaging

  • packaging
  • pip
  • setuptools
  • wheel

Python Web Tools

  • Pecan
  • requests
  • WebOb
  • Werkzeug
  • wsgi-intercept
  • WSME

Python Database and Data tools

  • alembic
  • python-memcache
  • Ming
  • Pandas
  • redis-py
  • SQLAlchemy

Python Testing

  • fixtures
  • testtools
  • testrepository
  • tox

Other Python libs and tools

  • APScheduler
  • dogpile
  • eventlet
  • iso8601
  • jaraco.itertools
  • ldappool
  • Mako
  • pykerberos
  • pysaml2
  • retrying
  • sphinxcontrib-datatemplates
  • six

Python Interpreters

  • CPython
  • PyPy (in the past)

Messaging

  • kazoo
  • kombu
  • pyngus
  • qpid
  • RabbitMQ

JavaScript

  • AngularJS
  • Registry-static
  • “other JS libraries”

Deployment, Automation, and Orchestration Tools

  • Ansible
  • Ansible modules for OpenStack
  • Puppet & Puppet Modules
  • Chef modules for OpenStack
  • saltstack

Linux

  • cloud-init
  • dpkg
  • libosinfo
  • Linux kernel
  • LUKS disk encryption
  • systemd

Virtualization

  • kvm
  • libguestfs
  • libvirt
  • qemu

Networking

  • Dibbler (DHCP)
  • OVS
  • OpenDaylight

Containers

  • Docker
  • Kubernetes
  • openvz

Testing and Developer Tools

  • gabbi
  • gerrit
  • Zuul
  • Jenkins Job Builder

Cloud Tools

  • fog
  • libcloud
  • nodepool
  • owncloud
  • phpopencloud
  • pkgcloud

Linux Distributions

  • Ubuntu
  • Red Hat
  • Debian
  • Fedora
  • Gentoo
  • SuSE

Other Tools

  • caimito (WebDAV front-end for object storage)
  • Corosync (cluster & HA synchronization)
  • Etherpad-lite
  • greenlet
  • jaraco-tools
  • MySQL
  • Zanata (translation tools)

Updated 23 June to add Kubernetes to the list of container projects.

Updated 24 June to add pysaml2 to the list of Python libraries.

OpenStack Mitaka Release Complete

I’m excited to announce the final releases for the components of OpenStack Mitaka, which conclude the 6-month Mitaka development cycle.

You will find a complete list of all components, their latest versions, and links to individual project release notes documents listed on the new release site.

Congratulations to all of the teams who have contributed to this release!

I also want to extend a big thank you to Ajaeger and jhesketh for their help with build issues today, and to ttx and dims for their help with release work all this cycle and especially over the last couple of weeks. 

Thank you!

Our next production cycle, Newton, has already started. We will meet in Austin April 25-29 at the Newton Design Summit to plan the work for the upcoming cycle. I hope to see you there!


OpenStack Release Management Changes for Mitaka Retrospective

The OpenStack Release Management team has been focusing a lot on automation during the Mitaka release cycle. At the start of the cycle we set out our goals, and we’re making good progress toward completing them. We’ve made quite a few supporting changes to the release process, with considerable help from the Infrastructure team.

Standard Release Models

We have standardized projects on one of three release “models,” describing the frequency of releases. The models are applied to projects using tags in the governance documentation, to communicate that information to consumers of the projects.

  • The release:cycle-with-milestones model is the one most folks are used to, since it is the model most projects followed until fairly recently. Projects prepare pre-release deliverables at set times during the cycle, with a final full release at the end.
  • The release:cycle-with-intermediary model allows projects to publish full releases at any point in the cycle, with the final release at the end forming the basis of a stable branch. Most of our libraries use this model, and a few server projects, such as Ironic and Swift, follow it too.
  • The release:independent model is for projects not tied to the release cycle. Independent projects may be tools used while working on OpenStack that are not part of a production deployment, or newer projects that have not yet synchronized their development with the release cycle. These projects typically don’t have a stable branch for a release series (though they may have other stable branches), and they don’t follow the cycle deadlines.

Standard Versioning Scheme

We have also standardized all version numbers so that we use “semantic versioning.” Semantic versioning allows someone comparing two version numbers for the same project to judge how significant the differences between the releases are. All of our versions include three components: the major, minor, and patch levels. When a new release includes only bug fixes, the patch level is incremented. That indicates that the new package should be compatible with the previous release, and can safely be dropped in as a replacement. When a release includes new features or changes in dependencies, the minor version number is incremented. These new packages should also be compatible, but may require updating other components. And when a new release is known to be incompatible, the major version number is incremented. Many of our server projects will do this each cycle to indicate that upgrade work is expected, and libraries do it when incompatible API changes are made.
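
As an illustration of those rules (not a tool the release team actually uses), here is a minimal Python sketch that applies them; the caller supplies the change category:

    from packaging.version import Version  # pip install packaging

    def next_version(current, bug_fixes_only=False, incompatible=False):
        """Return the next version string under the rules described above."""
        v = Version(current)
        if incompatible:
            # Breaking change: bump the major level, reset the rest.
            return "{}.0.0".format(v.major + 1)
        if bug_fixes_only:
            # Safe drop-in replacement: bump only the patch level.
            return "{}.{}.{}".format(v.major, v.minor, v.micro + 1)
        # New features or dependency changes: bump the minor level.
        return "{}.{}.0".format(v.major, v.minor + 1)

    print(next_version("12.0.0", bug_fixes_only=True))   # 12.0.1
    print(next_version("12.0.1"))                        # 12.1.0
    print(next_version("12.1.0", incompatible=True))     # 13.0.0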

Communication Changes

We have made some important changes to the way we communicate information about releases. The release management team no longer maintains milestone series pages for each project on Launchpad. Instead, we have created http://releases.openstack.org, and link to the release artifacts there. This new site makes it easier for consumers of OpenStack projects to find all of the related releases for a given series, and is easier to update with our new automation. Most project teams are still using Launchpad for their planning, though it is no longer required and some have moved away from it. Bug reports for projects should still be filed through Launchpad.

Reviewable Release Requests

The releases site is built from the same data files that power the new automated release process. We’re not quite 100% automated, so we still have a couple of manual steps, but the entire process is set up to support reviews and to allow anyone on the release team to process a release request from any project. Release liaisons or PTLs for a project can submit a patch to the openstack/releases repository describing the new release they want. The release team then reviews that patch, helping to ensure that the semantic versioning rules are being applied correctly, that all of the changes intended for the new release are included, and that the timing for the release is “good” (we try not to release new versions of libraries late in the week, for example). When the request is approved, a new release artifact is created and published to tarballs.openstack.org (and pypi.python.org for Python libraries).
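
For illustration only, a release request data file might look something like the following, shown here loaded with PyYAML; the field names are an assumption based on the description above, not a definitive schema:

    import yaml  # pip install PyYAML

    # Hypothetical release request for an imaginary library; the field
    # names are illustrative, not the official schema.
    request = yaml.safe_load("""
    launchpad: example-lib
    releases:
      - version: 1.2.0
        projects:
          - repo: openstack/example-lib
            hash: 0123456789abcdef0123456789abcdef01234567
    """)

    for release in request["releases"]:
        print(release["version"], [p["repo"] for p in release["projects"]])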

The release process works the same for all branches, and is simple enough that we are now releasing stable branch updates more often than in the past. That should make it more appealing for downstream consumers to recommend fixes to be back-ported (and, we hope, submit the patches to do so), because they will see a new release including that fix more quickly than in previous cycles where we waited for pre-arranged milestones to create new releases.

Release Notes

The new releases web site also includes links to the release notes for projects. We are still updating the site to link to the new Mitaka release notes pages, but the links to the Liberty notes are in place. Most projects are using a new tool to manage the release notes in the source tree with the other parts of the project, instead of separately in the wiki. The new tool, reno, makes it easy for contributors to write notes as they fix bugs or add features, which means we should have more complete and detailed release notes by the end of the cycle. Because the notes are with the source code, they are also copied into branches when fixes are back-ported, so the release notes for stable releases will be updated automatically.

Dependency Management

Aside from the release automation, the team has also been working to improve the way we manage dependencies for projects, to make our CI systems more reliable. In the past we have had issues introduced when new versions of packages were released (sometimes even our own). A breaking change to an API, or even a minor bug in a library, would cause all sorts of test jobs to fail, blocking the work of other contributors until a fix was prepared. The new system being rolled out uses a set of constraints to indicate exactly which versions of packages should be used in the test jobs. Each project still declares compatibility with a range for each of its dependencies, allowing flexibility for deployers and packagers. But in our upstream test jobs, the constraints reduce the impact of new releases, and give us a way to verify that a new version works before adding it to the test system. Constraints are already in place for the integration tests, and the tools are there for projects to add them for unit tests as well. We have a few early adopters who have set that up, and we’ll be encouraging other teams to do so during the Newton cycle.
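
To make the split concrete, here is a small sketch using the packaging library (the version range and pin are invented) showing how an exact pin from a constraints list stays within the wider range a project declares:

    from packaging.specifiers import SpecifierSet  # pip install packaging
    from packaging.version import Version

    declared = SpecifierSet(">=2.5.0,!=2.6.0")  # range from a project's requirements
    pinned = Version("2.8.1")                   # exact pin from the constraints list

    # CI jobs install only the pinned version; deployers and packagers may
    # choose any version inside the declared range.
    print(pinned in declared)  # True: the pin is one valid choice in the range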

Looking Ahead to Newton

We didn’t finish everything we set out to do in Mitaka, so some of that work will carry over to our next cycle. We need to finish the release automation, working with the Infrastructure team, so that approving a release request in gerrit triggers the release with no remaining manual steps. When that work is complete, we plan to expand the release automation beyond projects with the release:managed tag so that it is used by default for all official projects. We also plan to implement translations for release notes, an important feature we had for past releases that did not make it into this one.

Release Team Changes and Goals for OpenStack’s Mitaka Release Cycle

For the Mitaka cycle, we will be implementing changes designed to make it easier for project teams to manage their own projects, with less need for coordination and tight-coupling of the schedule.
Continue reading Release Team Changes and Goals for OpenStack’s Mitaka Release Cycle

Migrating back to WordPress

I’ve migrated my personal blog from Tinkerer back to WordPress, which may introduce repeated articles into the various RSS feeds, since the URLs have changed.

The primary reason I decided to change blogging tools is that with more than 500 posts, the site build time under Tinkerer was unacceptably long. Tinkerer works well for someone who is familiar with Sphinx and has a smaller amount of content than I do.

The reason I chose WordPress over another static blogging engine is that I want to be able to schedule posts to be published in the future without having to set up cron jobs or special tools. I am also going to experiment with posting from mobile devices using the WordPress app.

Keyword Bookmarks for OpenStack Developers

As an OpenStack developer, I spend a lot of time looking at web sites for code review, project status, bug reports, the wiki, and other online collaboration tools. As a productivity boost, I’ve set up “keyword bookmarks” for all of the most commonly accessed tools, turning my browser’s input field into a command-line-like short-cut to jump directly to the page I want, without hunting around in a long list of links.
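
As a hedged illustration of the mechanism (the real keywords and URLs are in the full post), a keyword bookmark is essentially a URL template with a %s placeholder that the browser fills in with whatever follows the keyword; this hypothetical Python sketch mimics the substitution:

    from urllib.parse import quote_plus

    # Hypothetical keyword-to-template map; a browser does this natively
    # when a bookmark has a keyword and a %s placeholder in its URL.
    BOOKMARKS = {
        "rev": "https://review.openstack.org/#/q/%s",              # Gerrit search
        "wiki": "https://wiki.openstack.org/w/index.php?search=%s",
    }

    def expand(command):
        keyword, _, query = command.partition(" ")
        return BOOKMARKS[keyword].replace("%s", quote_plus(query))

    print(expand("rev project:openstack/oslo.config"))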

Continue reading Keyword Bookmarks for OpenStack Developers

OpenStack Server Version Numbering

Last week the OpenStack community held our summit to discuss the work we will be doing during the “Liberty” release cycle over the next six months. On Friday, several of us met to discuss how we should specify versions for the server projects. Unlike the other sessions, where there are etherpads for notes, we used a whiteboard and took pictures. This post includes the picture, along with a transcription of the text.

Snapshot

http://doughellmann.com/blog/wp-content/uploads/2015/05/IMG_2207.png

Transcription

Current

2015.1.0 -> All “servers” but Swift
2.3.0 -> Swift (“marketing-loaded” semver)
1.3.0, 3.1.0, 2.5.2 -> libraries (semver)

Option 1

Switch everything to semver

Cons

  • Distros need to specify an epoch
  • Pain to translate series to version

Option 2

Evolve yearly format

2015.1.0 -> 2016.
3000?
no 2015.2

Cons

  • Not sure how this would actually work

Agreed

Option 1, using semver.

Start with the next version 12, since Kilo was release 11.

Distros should use the epoch 1 (or the next higher value, if they already have an epoch assigned) so these new version numbers sort after the current releases.
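
As a minimal sketch of why the epoch matters (real package managers implement the full Debian/RPM comparison rules; this simplified version only shows the epoch-first ordering):

    def split_epoch(version):
        """Split 'epoch:rest' into (epoch, rest); the epoch defaults to 0."""
        epoch, sep, rest = version.partition(":")
        return (int(epoch), rest) if sep else (0, version)

    def sorts_after(a, b):
        """True if a orders after b, comparing epochs first (simplified)."""
        epoch_a, rest_a = split_epoch(a)
        epoch_b, rest_b = split_epoch(b)
        if epoch_a != epoch_b:
            return epoch_a > epoch_b
        return [int(p) for p in rest_a.split(".")] > [int(p) for p in rest_b.split(".")]

    print(sorts_after("12.0.0", "2015.1.0"))    # False: 12 < 2015 without an epoch
    print(sorts_after("1:12.0.0", "2015.1.0"))  # True: epoch 1 beats implicit epoch 0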

OpenStack Requirements Handling, a.k.a. “Unbreak the World”

Last week the OpenStack community held our summit to discuss the work we will be doing during the “Liberty” release cycle over the next six months. On Friday, several of us met to discuss how we manage dependencies. Unlike the other sessions, where there are etherpads for notes, we used a whiteboard and took pictures. This post includes those pictures, along with transcriptions of the text.

Goals

We started by discussing the goals for requirements management, so we could evaluate the proposal against those needs.

/blog/wp-content/uploads/2015/05/IMG_2202.png

  1. “Some” expression to downstream packagers/users
  2. Configure test jobs (unit, integration, & functional) (devstack & non-devstack)
  3. Encourage convergence/maintain co-installability
  4. Not be fiction
  5. pip install python-novaclient works
  6. Stop breaking all the time (esp. stable)
  7. Don’t make library release management painful
  • Current solution breaks 4 and 6 (and sometimes 5)
  • Robert’s solution breaks ?

Proposal

Next, Robert Collins presented his proposed solution: use separate constraint lists when installing packages for test jobs, kept distinct from the dependency specifications listed in each project’s metadata.

/blog/wp-content/uploads/2015/05/IMG_2203.png

  • Global set(s) of complete consistent constraints
    • using == and exclusions
    • upper and lower
  • alternative testing
  • per project
    • excludes
    • minimums
    • semver upper
  • exclusions sync

Requirements specifications grow wider from left to right:

test sets (high/low) => coordinated => libraries/stackforge

! No guarantee that oslo releases won’t break project unit tests (affirmative testing only runs integration tests)

TODO List

We prepared a list of actions we need to take this cycle to implement the plan, and assigned each one to someone in the group.

/blog/wp-content/uploads/2015/05/IMG_2204.png

  • fix install requires in master+stable/kilo in each project (RC)
  • teach pip to honor constraints (RC)
  • change update.py to enforce wider range (until pip resolver exists) (RC)
  • initial global constraints for master + kilo (RC)
  • devstack patch to optionally use constraints (RC)
  • exp jobs to use constraints triggered by g-r (fungi)
  • Exp jobs triggered by local changes (fungi)
  • periodic job propose changes from pypi to master (RC)
  • unit test patch to use requirements
  • lower bounds test jobs

(I’m not sure what “exp jobs” are and may be mis-reading that)

Stable Branch Process

Then we reviewed the stable branch creation process to understand how the change in requirements management will affect it.

pass 1

/blog/wp-content/uploads/2015/05/IMG_2205.png

Turn on master=>stable cross-check job
Release Oslo libraries
Freeze global requirements
Branch projects one at a time
Branch global requirements
Disable cross-check job
Unfreeze global-requirements

pass 2, with timeline

http://doughellmann.com/blog/wp-content/uploads/2015/05/IMG_2206.png

master/stable check
release oslo (FF-1)
converge constraints
l3/ff (soft requirements freeze) (FF)
hard freeze
branch projects (RC1)
branch g-r
disable cross-check
unfreeze

Conclusions

We finished with a little time to enjoy the beautiful scenery in Vancouver.

/blog/wp-content/uploads/2015/05/IMG_2208.png