Community is the Key

Component Spotlight

Blog, March 26, 2016

Dr. Jeff Squyres is Cisco’s representative to the MPI Forum standards body and is Cisco’s core software developer in the open source Open MPI project. He has worked in the High Performance Computing (HPC) field since his early graduate-student days in the mid-1990s, and is a chapter author of the MPI-2 and MPI-3 standards.

Dr. Squyres is deeply committed to the community of Open MPI, and we interviewed him to discuss community-building, a topic that is very important to us here at OpenHPC.

The Message Passing Interface (MPI) is a key component in parallel computing. Open MPI is free and is widely used, so why would an organization building a cluster buy a proprietary version of MPI?
It comes down to customer requirements. There are free / open source MPI implementations, including Open MPI, and there are non-free / proprietary ones. Each has its own benefits and tradeoffs.

Support is typically an issue that customers consider. Traditionally, in the open source world, your support channels involve posting on public forums and asking the community for help. Many customers are comfortable with this concept; indeed, there are large, involved, HPC-specific communities who are eager and willing to help. However, other customers require a support contract and/or service level agreement with a vendor that entitles them to getting answers to questions, getting problems fixed, having access to consultation services, etc.

It should be noted that there are two major open source MPI implementations: Open MPI and MPICH. Both have various forms of vendor support backing their respective communities. Generally, a vendor takes the open source project and either adds its own value on top, or provides support (or both). Hence, even a hybrid model is possible: some customers use an open source MPI implementation, but then pay a vendor for support.

What comes after the current version MPI 3.1?
The MPI Forum is the standards body that controls the actual specification of MPI.

I recently came back from an MPI Forum meeting. Loosely speaking, the Forum is working on the next revision; we haven’t decided yet whether there’s going to be an MPI 3.2 or not. If we accumulate enough errata to MPI 3.1, maybe we’ll do an MPI 3.2. But the focus has been largely on some big new topics that may comprise what will become MPI 4.0. For example, fault tolerance is a big deal. So is a concept that we call “Endpoints,” which essentially makes MPI more friendly to threading.

Also at this meeting, a group of us presented a proposal on a new concept we’re calling “MPI Sessions,” which adds a local handle to the MPI library and allows multiple sessions within a single MPI process. I have posted details on my blog here.

Let me explain what the Forum does in terms of a comparison. Developing and releasing software requires a lot of care, preparation, testing, etc. There are entire philosophies about software development and release engineering. But developing and releasing standards requires even more than that, because you’re setting a direction for the entire industry. You need to put appropriate care and thought in the issues, and talk to enough people so that the standard can be representative of the entire HPC community. You really want to be slow and deliberate about it.

The Forum meets four times a year. It’s an interesting process because it’s both a community and a standards body. There’s a fascinating dynamic in the Forum: you don’t want to make too many changes too fast, because that just annoys all the vendors and MPI implementers who are trying to keep up. But you also don’t want to be too slow, because you need to add features that can take advantage of new hardware, new industry trends, etc. Meaning: you want to set an industry direction in a responsible way that’s not too fast for the implementers and also has had enough thought and deliberation put into it.

Do you see MPI merging with OpenMP?
OpenMP is a different thing. It’s a standards body like the MPI Forum, but they have a different parallel computing paradigm. OpenMP is more about multi-threading; you put compiler directives in your code to indicate loops that can be parallelized by spinning up a bunch of threads to split up and separately process the work of the loop. MPI is more about explicit message passing. For example, MPI can take a chunk of data in a process and send it over to peer #17. Peer #17 might be on the same server as you, or it might be on the other side of the network.

Meaning: MPI is about processes talking back and forth to each other, whereas OpenMP is more about creating and destroying threads to do parallel types of work inside of a single process.
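To make the contrast concrete, here is a minimal sketch of that explicit message-passing style (not from the interview, just an illustration): one MPI process sends a buffer to a peer process, which could be local or across the network.

```c
/* Minimal sketch of MPI's explicit message passing: rank 0 sends an
 * integer to rank 1, which may be on the same server or across the
 * network. Build with an MPI wrapper compiler (e.g., mpicc) and run
 * under a launcher, e.g.: mpirun -np 2 ./a.out */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, data = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        data = 42;
        /* Explicitly hand a chunk of data to a peer process */
        MPI_Send(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* The peer explicitly receives it */
        MPI_Recv(&data, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d from rank 0\n", data);
    }

    MPI_Finalize();
    return 0;
}
```

An equivalent OpenMP program would instead annotate a loop with a directive and let the compiler split the iterations across threads inside one process; there is no send or receive at all.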

One of the goals of MPI 4.0 is to play better with threads. Wouldn’t it be great if we could meaningfully combine MPI and OpenMP in a single program? (Some people have experimented with mixing MPI and OpenMP, but we need to make this combination more seamless.)
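The "hybrid" combination people have experimented with typically looks like the following sketch (an illustration, not an official example): OpenMP splits a loop across threads within each process, while MPI combines results across processes. Note the use of MPI_Init_thread to request a thread-tolerant MPI library.

```c
/* Sketch of hybrid MPI + OpenMP: each MPI process uses OpenMP threads
 * to parallelize its local loop, then MPI combines the per-process
 * partial sums. Build with, e.g.: mpicc -fopenmp hybrid.c
 * Run with, e.g.: mpirun -np 4 ./a.out */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank;

    /* Ask for an MPI library that tolerates threads (the calling
     * thread alone makes MPI calls: MPI_THREAD_FUNNELED) */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local = 0.0, total = 0.0;

    /* OpenMP: spin up threads to split this process's share of work */
    #pragma omp parallel for reduction(+:local)
    for (int i = rank * 1000; i < (rank + 1) * 1000; ++i) {
        local += (double)i;
    }

    /* MPI: combine the per-process partial sums on rank 0 */
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0,
               MPI_COMM_WORLD);
    if (rank == 0) {
        printf("total = %f\n", total);
    }

    MPI_Finalize();
    return 0;
}
```

Making this combination seamless, rather than merely possible, is part of what the Endpoints work mentioned above is aiming at.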

How did Open MPI become involved in OpenHPC?
As I understand it, the whole point of OpenHPC is to make it trivially easy to install and maintain HPC clusters. If OpenHPC can unite the industry and make a central point of distribution and updates for HPC operating systems, network stacks, resource managers, applications, …and all the other things you need for HPC, yet still allow (and encourage!) individual vendor value-add and community contributions, that would be a phenomenal achievement. Every member of the ecosystem would be able to both contribute and benefit.

While Open MPI is a key component, it is but one layer of the overall stack that you run on an HPC resource. As a community, we are excited about enabling HPC for everyone. If OpenHPC can really make it so easy to install HPC systems that more people join the ecosystem – as users, system administrators, resource managers, or developers – we all win.

At the commodity HPC level, it’s my sense that end users (and their entire support structure) just want to run their HPC jobs. They don’t want to spend a lot of time tuning and tweaking; they don’t necessarily care about the underlying technologies and gizmos. They just want to run their HPC jobs. Benchmarks and metrics are great, and can be useful tools for discussing requirements and bottlenecks. But, to be incredibly redundant: users just want to run their HPC jobs.

Anything that we, as an HPC community, can do to get the HPC technology out of the way and let users run their jobs is a Good Thing. If OpenHPC can do that – take the focus away from the underlying technologies, and let the end users focus on the problems that they’re trying to solve with their HPC applications – that would be fantastic.

From our experience (Open MPI as a project is over 10 years old!), a successful community project needs to exhibit multiple characteristics:

  1. Actually be open. Don’t just throw code over the wall every once in a while.
  2. Encourage the community – not just vendors – to participate and innovate. Even those who are not paid a salary to develop HPC stacks can have great ideas.
  3. Encourage vendors to participate and innovate. There must be possibilities for vendor value-add and differentiation.
  4. Don’t let any one organization – vendor or otherwise – drive the community. Working together as a community is hard. Sometimes it’s really hard. But I am a huge believer that community-driven projects, when properly nurtured and encouraged, can result in significantly better results than are possible by any individual organization.

Open MPI has thrived because we have adhered to those principles. It’s not about Cisco, it’s not about Mellanox, it’s not about Intel, it’s not about individual Universities or even the government research labs. Open MPI is about the collective group of us coming together and deciding as a community: where should this software go? Keeping this focus is actually really, really important to the health, stability, and longevity of the project. And I assume that the same will be true for OpenHPC.

If you roughly split the Open MPI community into two parts – researchers and academics on one side, and industry on the other – balancing the two is a continual struggle. How do we value the research and make sure those people get the publications and credit that they need? Can we take research prototypes and turn them into production-ready code for the user community to use? And how do we balance all that with the vendors, who just want to put out stable software that their customers can use, which means going through the software engineering and all the industry requirements, too? This struggle is good; it’s what keeps us – as a community – honest. We actively try to balance all these competing forces and generate what’s important: fantastic software that is usable by real users.

Last week I was at an Open MPI developers meeting down in Dallas at the IBM Innovation Center facility there. And we had people from universities, government research labs, and multiple vendors, all sitting together in a room, collaborating and discussing how to solve technical problems and advance both the state of the art and the state of Open MPI (the software). How do we bring in so-and-so’s research? How do we get to the next version? Which features do we want? Literally seeing the will of the community being personified by all these engineers and scientists in the room – it was pretty awesome.

Outside of your work on Open MPI, what do you do for fun?
I am actually a true geek. All my hobbies are kind of geek-related. For example, I’m the Technology Committee Chair on the school board at my kid’s school.

I guess I’m a true engineer because I love doing things that are useful to other people. “Cool” is fun, like making some kind of cool, flashy thing, but actually making something that is going to be used by people — whether they’re Cisco customers or the open source community or my local community — that’s what I enjoy doing around the clock.

I’m a total geek. And I love it.