OpenHPC 1.3.2   (7 September 2017)

Binary downloads are presently available in the form of RPMs. These RPMs are organized into repositories that can be accessed via standard package manager utilities (e.g. yum, zypper). OpenHPC provides builds that are compatible with and tested against CentOS 7.3 as well as SUSE Linux Enterprise Server 12 SP2. A typical deployment on a new system will begin with the installation of the base operating system on a chosen master host identified as the system management server (SMS), followed by enabling access to a compatible OpenHPC repository.

The OpenHPC repository is created and maintained using a dedicated instance of the Open Build Service (OBS) that is available here. In addition to serving as the build server, this OBS instance also provides an RPM repository; you can browse the available packages in its x86_64/ and noarch/ sub-directories for the 1.3.2 release at:

To get started, you can enable an OpenHPC repository locally by installing an ohpc-release RPM, which includes the GPG keys used for package signing and defines the URL locations for the [base] and [update] package repositories. A copy of the ohpc-release file is available for download here:

Alternatively, you can use the package manager to install the RPM directly from the network as in the following examples:

# yum install <ohpc-release RPM URL>


# zypper in <ohpc-release RPM URL>
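As a hedged sketch of the step above, the repo-enable command can be selected to match whichever package manager the chosen base OS provides (yum on CentOS 7.3, zypper on SLES 12 SP2). The release-RPM URL here is a placeholder for the download link referenced earlier, not a real path:

```shell
# Sketch only: build the repo-enable command for the detected package
# manager. OHPC_RELEASE_RPM is a placeholder for the ohpc-release
# download link and must be substituted before running the command.
OHPC_RELEASE_RPM="<ohpc-release-rpm-url>"

if command -v zypper >/dev/null 2>&1; then
    install_cmd="zypper -n in ${OHPC_RELEASE_RPM}"
else
    install_cmd="yum -y install ${OHPC_RELEASE_RPM}"
fi

# Print rather than execute: the placeholder must be filled in first,
# and the real install requires root privileges.
echo "${install_cmd}"
```

Either form leaves the [base] and [update] repository definitions and GPG keys in place for subsequent package installs.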

ARM aarch64 Tech Preview

In addition to the OpenHPC 1.3.2 release, we present a tech preview of support for the ARM aarch64 architecture. Additional information can be found here.

Install Recipe(s)

To aid in the installation of OpenHPC packaged components, a companion installation recipe is available. It can be obtained by installing the docs-ohpc RPM once the OpenHPC repository has been enabled locally. Alternatively, copies of the documentation are also provided below:

The intent of these guides is to present a simple cluster installation procedure using components from the OpenHPC software stack. The documentation is intended to be reasonably generic, but uses the underlying motivation of a small, stateless cluster installation to define a step-by-step process. Several optional customizations are included, and the collective instructions can be modified as needed for local site use cases. Please consult the install guide for more detail and discussion regarding a companion template install script.
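The docs-ohpc route mentioned above can be sketched as follows. The documentation directory shown is an assumption based on OpenHPC's /opt/ohpc/pub installation-prefix convention, not a path stated on this page:

```shell
# Sketch only: fetch the install recipes once the OpenHPC repository
# has been enabled. The doc directory is an assumed location following
# the /opt/ohpc/pub packaging convention.
pkg="docs-ohpc"
doc_dir="/opt/ohpc/pub/doc"

install_cmd="yum -y install ${pkg}"
echo "${install_cmd}"    # run as root with the repo enabled
echo "ls ${doc_dir}"     # recipe PDFs and template scripts land here
```

On SLES, substitute `zypper -n in` for `yum -y install`; the package name is the same in both repositories.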