How OpenVox builds work

OpenVox Packaging

Every bit used, except for the Vox Pupuli OpenVox private signing key, is available in public repos in the OpenVoxProject GitHub org.

Building off of the work Jeff Clark did to improve Docker support in Vanagon, we added a bunch more things (and more since!) to make building these packages with containers work better. The choice of container images for each platform can be found in the platform default files. We also tried to standardize the platform defaults a bit since they've gotten rather divergent over the years. Using containerized builds also allows us to build for different architectures without having to build on a system of that particular architecture.

Generally, you will be able to build all packages locally yourself using Rake tasks, or with a GitHub Action set up for this purpose.

Where generated files are uploaded

The S3 buckets where files get uploaded are generously run by the OSU Open Source Lab and live at https://s3.osuosl.org/<bucket-name>. The openvox-artifacts bucket is where intermediate build files are uploaded (e.g. puppet-runtime, unsigned openvox-agent packages, etc.). The openvox-yum and openvox-apt buckets contain the actual repos that package managers point to, as well as the rpm/deb files that set up those repos with the public key. CNAMEs {apt,yum,artifacts}.overlookinfratech.com are set up to point to these OSL URLs.

Originally, the repo files pointed to the overlookinfratech.com addresses. As of 2025-05-06, they point to {apt,yum}.voxpupuli.org instead. These are mirrors of the OSL buckets and are not updated immediately when something is uploaded to a bucket; the mirrors sync every hour. If you need a sync to happen immediately (e.g. for a new release), contact NickB to do this.

openvox-agent

In some cases, Perforce uses its own internal build tools (pl-build-tools) to build for older platforms whose stock build tools are too old. Rather than do this, we've moved to utilizing publicly available updated tools instead. These days, that mostly means el7.

There are three component repos that are required for building the agent.

  • puppet-runtime - vanagon repo containing components packaged in the All-In-One (AIO) agent package
  • pxp-agent-vanagon - pxp-agent is primarily used by Puppet Enterprise for orchestration, but some pieces are used by members of the community
  • openvox-agent - the vanagon repo that creates the rpm/deb packages

Within these repos, you'll find the following rake tasks (example invocations follow the list):

  • vox:tag['<tag>'] - This tags the repo and pushes the tag to origin.
  • vox:build['<project>','<platform>'] - This takes a project name (found in configs/project) and platform to build for (found in configs/platforms) and performs the build using vanagon's docker engine. The component will be built inside the container, and files will end up in the output directory of your repo clone.
  • vox:upload['<tag>','<platform>'] - This uploads the artifacts generated by the build to the OSL openvox-artifacts S3 bucket or potentially a different S3 bucket if desired. You won't be able to use this without the AWS CLI set up with appropriate secrets.
  • vox:promote['<component>','<tag>'] - This task is found in the openvox-agent and pxp-agent-vanagon repos and can be used for promoting any of the components.
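
As a concrete (but hedged) sketch, a local build-and-upload cycle in puppet-runtime might look like the following. The tag follows the YYYYMMDDR scheme described below, and the project and platform names are placeholders; check configs/project and configs/platforms in the repo you are building for the real values.

# Sketch only: tag, build, and upload one platform of puppet-runtime.
# The project name agent-runtime-main is a placeholder; see configs/project.
bundle install
bundle exec rake vox:tag['202502201']
bundle exec rake vox:build['agent-runtime-main','el-9-x86_64']
bundle exec rake vox:upload['202502201','el-9-x86_64']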

The puppet-runtime and pxp-agent-vanagon repos should be tagged like YYYYMMDDR, where R is the release number for that day. For example, if this is the second build I am tagging on February 20th, 2025, the tag would be 202502202.

First, puppet-runtime is built and uploaded to the puppet-runtime artifacts directory. Then pxp-agent, which utilizes puppet-runtime, is built and uploaded to the pxp-agent artifacts directory. Then openvox-agent, which utilizes both, is built and uploaded to the openvox-agent artifacts directory. This last directory holds the rpm and deb agent packages, but they are unsigned at this point.

Note that when creating a new release of the agent, you must update the version in the Puppet component and the gemspec, and then promote that version of the repo into the openvox-agent repo via the vox:promote rake task. Updating the version in these files can be done via the vox:tag rake task in the puppet repo.
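
A hedged sketch of that release flow follows; the version number is made up, and passing puppet as the component name is an assumption based on the vox:promote signature above.

# In the puppet repo: update the version files and tag (example version).
bundle exec rake vox:tag['8.19.0']
# In the openvox-agent repo: promote that tag of the puppet component.
bundle exec rake vox:promote['puppet','8.19.0']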

The process for building the agent now lives mostly in GitHub Actions. The three repos share a build_vanagon.yml workflow, which contains the full list of platforms that OpenVox currently supports for the agent. An example of how this shared workflow is used can be found in puppet-runtime. The shared workflow uploads the artifacts to the appropriate S3 bucket locations.

MacOS Builds

!!! Important !!! You should use a fresh VM each time; never build on a VM that has built any of the components previously. Because we are using --engine local and files are installed locally before being packaged up, vanagon seems to only package up the changes between this build and the last one. This will result in bad output.

In order to build the MacOS (osx-15-arm64) puppet-runtime and openvox-agent, you will need to set up a MacOS VM on a MacOS host and run the build process inside it. UTM is recommended. You should create a fresh copy every time you do a build. All build commands need to be run from a root shell (sudo su - root). Then you will need to install the Xcode Command Line Tools, build libyaml from source, and install Ruby (rbenv recommended); a sketch of the build invocation follows the setup commands.

# Install the Xcode Command Line Tools
xcode-select --install

# Download, build, and install libyaml from source
curl -o yaml-0.2.5.tar.gz https://pyyaml.org/download/libyaml/yaml-0.2.5.tar.gz
tar xf yaml-0.2.5.tar.gz
cd yaml-0.2.5
./configure
make
make install

# Install rbenv and ruby-build, then install Ruby
git clone https://github.com/rbenv/rbenv.git ~/.rbenv
~/.rbenv/bin/rbenv init
# Open a new terminal or run /bin/bash so the rbenv shims take effect
git clone https://github.com/rbenv/ruby-build.git "$(rbenv root)"/plugins/ruby-build
rbenv install 3.2.7
rbenv global 3.2.7   # assumption: make 3.2.7 the default Ruby for the build shell
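
The build invocation itself isn't spelled out here, but since the vox:build rake task targets vanagon's docker engine, building on the VM presumably means invoking vanagon's local engine directly. A minimal sketch, assuming vanagon's CLI and a placeholder project name (check configs/project in the repo):

# Sketch only: build puppet-runtime on the VM itself from a root shell.
cd puppet-runtime
bundle install
bundle exec vanagon build agent-runtime-main osx-15-arm64 --engine local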

If you are planning on uploading the artifacts after the build and have the appropriate AWS credentials in ~/.aws, install the AWS CLI.

# The AWS CLI installer requires Rosetta on Apple Silicon
softwareupdate --install-rosetta
curl -o AWSCLIV2.pkg https://awscli.amazonaws.com/AWSCLIV2.pkg
installer -pkg ./AWSCLIV2.pkg -target /

Signing

To set up a VM with the appropriate certs in place for the signing tasks, you will need the actual Apple Developer account Application signing identity (cert + private key, which can be exported together as a .p12 file), the Installer signing identity, and the Apple Developer intermediate cert. The Application signing identity will have a description like Developer ID Application: <Company name> (<10 character team ID>). The Installer identity will be similar but say Installer instead of Application.

Note the 10-character team ID. You will also need an application token generated from your personal Apple ID that is part of the organization in the Apple Developer account.

You will need to have the Xcode app installed (not just the Xcode CLI tools).

# Create a dedicated keychain for signing and make it the default
security create-keychain signing
security default-keychain -s signing
security unlock-keychain signing
# Import the Apple Developer intermediate cert into the System keychain
security import /path/to/DeveloperIDG2CA.cer -k /Library/Keychains/System.keychain
# Import the two identities, granting codesign/productsign access to their keys
security import /path/to/application_identity.p12 -k signing -P <password> -T /usr/bin/codesign
security import /path/to/installer_identity.p12 -k signing -P <password> -T /usr/bin/productsign
# Allow Apple tools to use the keys without prompting
security set-key-partition-list -S "apple-tool:,apple:" -D <description of application identity> signing
security set-key-partition-list -S "apple-tool:,apple:" -D <description of installer identity> signing
# Store notarization credentials (app token from your Apple ID) in the keychain
xcrun notarytool store-credentials "OpenVoxNotaryProfile" --apple-id "[email protected]" --password "<app token>" --team-id <10 character team ID>

In the MacOS agent package, all binary files, including dylib and bundle files, are signed with the application key. The pkg file inside the dmg is signed with the installer key. Lastly, the dmg itself is signed with the application key, and then notarized. All of this is required for Gatekeeper to not complain on MacOS 15+. For the automation to work correctly, you need to set the following environment variables (an example export block follows the list):

  • SIGNING_KEYCHAIN_PW - The password to unlock the keychain (yes, this isn't great, we'll make this more secure before we put it in GitHub Actions)
  • SIGNING_KEYCHAIN - The path to the keychain where the certs/keys are stored. This should be /Library/Keychains/System.keychain so that root can access it.
  • APPLICATION_SIGNING_CERT - The description of the application identity described above
  • INSTALLER_SIGNING_CERT - The description of the installer identity described above
  • NOTARY_PROFILE - The name of the notary profile in the keychain described above
  • VANAGON_FORCE_SIGNING - Unless you are doing a dev build and don't need signing, set this to true (or anything, I think). Otherwise, make sure it is unset.
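
For example (values are placeholders; the identity descriptions must match the ones imported into the keychain above):

# Placeholder values; adjust to your keychain and identities.
export SIGNING_KEYCHAIN_PW='<keychain password>'
export SIGNING_KEYCHAIN=/Library/Keychains/System.keychain
export APPLICATION_SIGNING_CERT='Developer ID Application: <Company name> (<team ID>)'
export INSTALLER_SIGNING_CERT='Developer ID Installer: <Company name> (<team ID>)'
export NOTARY_PROFILE=OpenVoxNotaryProfile
export VANAGON_FORCE_SIGNING=true   # leave unset for unsigned dev builds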

Windows Builds

To build for Windows, you should use a relatively modern OS (probably anything Server 2016+/10+ will work, but I've used 11 and Server 2022). You will need to install Cygwin. Put the setup_x86_64.exe installer file in the root of C:. At the base of the repos, there is a setup.bat which uses the Cygwin installer to install the base packages needed to do a successful bundle install (e.g. https://github.com/OpenVoxProject/puppet-runtime/blob/main/setup.bat). This is a bit of a chicken-and-egg problem, since you need Cygwin to install git in order to clone the repo, so just take a look at that file and run the command manually in PowerShell (a sketch follows).
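
As a hedged sketch of that bootstrap step (the flags are standard Cygwin setup options, but the authoritative package list lives in setup.bat, so treat the package and mirror shown here as illustrative only):

# From PowerShell: quiet-install Cygwin with git so the repo can be cloned.
# The mirror URL is just one example of a valid Cygwin mirror.
C:\setup_x86_64.exe --quiet-mode --root C:\cygwin64 --site https://mirrors.kernel.org/sourceware/cygwin/ --packages git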

From the Cygwin terminal, clone the repo and build as described above. The platform string we are using for Windows builds is windows-2019-x64, but this is really just a placeholder for "any Windows newer than Server 2016 or Windows 10".

In order to build openvox-agent, you will need to manually install https://github.com/wixtoolset/wix3/releases/download/wix3141rtm/wix314.exe first.

Builds are currently unsigned which will trigger SmartScreen to block the installer at first. We'll figure out signing soon.

openvox-server/openvoxdb

These are built using our slightly tweaked version of ezbake, which allows us to change the name of the packages. The openvox-server and openvoxdb repos contain similar rake tasks, but they are used slightly differently (example invocations follow the list):

  • vox:tag['<tag>'] - First, this changes the version found in project.clj to the tag and commits that change. Then it tags the repo. Then it creates a new commit after the tag that increments the Z part of the version and appends -SNAPSHOT, following the current convention for these repos. Finally, it pushes the branch and the tag to origin.
  • vox:build['<tag>'] - Because the vox:tag task ends up creating a commit after the tag, this checks out the tag you want to build first. Then, it creates a container to do the ezbake build and saves the artifacts to the output directory in your repo clone. Note that since these projects are fairly platform-agnostic, all of the packages can be built inside a single container. This container must be rpm-based, as rpmbuild is needed by fpm to create the rpms, but no special packages are needed to build the debs. The tasks have a default list of platforms to build for, but you can define DEB_PLATFORMS and RPM_PLATFORMS environment variables. These are a comma-separated list of platforms with the architecture excluded (e.g. ubuntu-18.04,debian-12 or el-9,amazon-2023). These are used by the GitHub build action.
  • vox:upload['<tag>','<optional platform>'] - This uploads the artifacts generated by the build to the OSL openvox-artifacts S3 bucket or, potentially, a different S3 bucket if desired. You won't be able to use this without the AWS CLI set up with appropriate secrets.
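
As an illustration (the tag and platform lists are examples; the real defaults live in the rake tasks and the shared workflow):

# Example only: tag, build, and upload an openvox-server release.
bundle exec rake vox:tag['8.8.0']
DEB_PLATFORMS=ubuntu-22.04,debian-12 RPM_PLATFORMS=el-9,amazon-2023 bundle exec rake vox:build['8.8.0']
bundle exec rake vox:upload['8.8.0']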

(Note: This is currently slightly broken. Nick will fix soon.) The process for building openvox-server and openvoxdb is now mostly in GitHub Actions. They share a build_ezbake.yml workflow. The defaults for the aforementioned environment variables listing the platforms to build for are defined here. An example of how this shared workflow is used can be found in openvox-server. The shared workflow is able to upload these artifacts to the appropriate S3 bucket locations. Before running these actions, you currently need to run the vox:tag task locally first.

Signing packages and creating the repos

To create the repository packages (i.e. the rpm/deb files at https://yum.voxpupuli.org/ and https://apt.voxpupuli.org/ that set up the repo on your machine), openvox-release is used. The packages this generates will place the public key in the right place and import it, and set up the appropriate apt/yum repo on your machine. There is a build action to build and upload these files automatically. Note that the voxpupuli.org mirror will not sync immediately (unless you contact Nick and ask him to do so).

Signing is performed using the sign_from_s3.rb script run on an Overlook InfraTech GCP instance. You won't be able to use this yourself without the private signing key, but you can see the code used. It downloads the unsigned packages from the OSL openvox-artifacts S3 bucket, signs them, then incorporates them into yum and apt repos, which are then later synced to the S3 buckets. The apt repo is currently maintained with the reprepro tool. At some point, we'll move this to a more automated and sustainable workflow.

Warning: This process is destructive for the Apt repo. While you can add packages to the Yum repository, the Apt repository is replaced with an updated version each time we publish. That means that you must start from an existing repo, such as the one stored on the GCP signer instance.

Until this is further automated, this can only be performed by @nburgan or @binford2k.

  1. Log into the signer GCP instance and switch to the signer user.
    • Make sure you use a login shell so that profile scripts run
    • sudo su --login signer
    • If you need to forward your SSH key in order to update the misc repo and already have it forwarded to your user, use the following bash function. If someone has a better way, please put it here, because this is not great.
      function signer(){
        # Hand the SSH agent socket directory to the signer user for the
        # session, then hand it back when the signer shell exits.
        dir=$(dirname "${SSH_AUTH_SOCK}")
        echo "Chowning ${dir} to signer"
        sudo chown -R signer:signer "${dir}"
        sudo --preserve-env=SSH_AUTH_SOCK -u signer -i
        echo "Chowning ${dir} to ${USER}"
        sudo chown -R "${USER}:${USER}" "${dir}"
      }
  2. Check that the environment contains ENDPOINT_URL, BUCKET_NAME, etc.
  3. Sign the new package(s):
    • ./misc/signing/sign_from_s3.rb <component> <version> <repo>
    • Example: ./misc/signing/sign_from_s3.rb openvox-agent 8.19.0 openvox8
  4. Sync the repos. First, run these commands with DRYRUN=1 and inspect the files they would update. Then run them without the env var to do the full sync.
    • DRYRUN=1 ./misc/signing/sync.rb apt (dry run), then ./misc/signing/sync.rb apt
    • DRYRUN=1 ./misc/signing/sync.rb yum (dry run), then ./misc/signing/sync.rb yum
  5. The misc/signing/backups directory will have the entire state of the repos BEFORE you did the changes. This is so you can revert things should this process go sideways. Don't delete this folder. It will do an aws s3 sync from the repo into this folder, so leaving it in place means you don't need to download the entire repo every time. However, you can delete timestamped backups in this directory. You can also delete timestamped directories in ~/backups.

Disclaimer

The build machinery is all very new code, written to get things up and running as fast as possible. While we are fairly confident the packages should work as well as the last Perforce-built open source Puppet packages, we do not yet have the testing infrastructure that Perforce does. We'll be working on this soon!
