◜PFC◞ Pain Free Containers

An agnostic container engine toolset — successor to DOCK.

Works with: Docker, Podman, FreeBSD jails, hypervisors & cloud APIs.


Time estimates and funding goals are listed in the table below. Features are grouped into SETS (also prioritized); each set has its target funding amount quoted in USD. Sets cannot be swapped, as each next set most of the time depends on one or more features from the previous sets.

However, each subsequent set (except for sets 0-1) delivers a perfectly functional product. In other words, not all sets need to be funded in order to have a working product. It may lack the functionality of the following sets, but be sure that whatever is described here will be delivered and be functional.

Once the funding target is reached for a set, its development is activated, as will be indicated on this page. The next round of funding, for the set following it, is then automatically activated as well. The two sets (the one in development and the one in the process of being funded) will be marked appropriately on this page.

Set 0
Bare metal servers, VPS/AWS instances & a few other essential 3rd party services
Feature/expense description | Estimates & Lifespan

This set's goal is to kick-start our development effort and make it possible for the remote team to co-operate effectively. Set 0 is mostly expenses, but, hopefully, it is justified and explained well.

Bare metal servers

To kick-start our development effort and make it possible for the remote team to co-operate effectively, we're going to need 2 bare metal machines: one for the staging & one for the production environment. We estimate the staging one would end up being costlier due to the number of use cases we'd need to be checking on it.

The production machine will be used to host official PFC images for VMs, containers (Docker, Podman, jails, etc.) and other things which don't have repositories, as well as be a home to our official GitHub repositories viewer, forum & issues sites, chat rooms, and a few other things. Specific software to be used for each purpose is to be determined, but we won't be using any cloud or 3rd-party services for them.

For the production machine, the most important services will be ready within 1 week, while others, such as a forum, an issue tracker or a chat room service, may be released later.

Cloud services for testing

Because we'd want PFC to also appease those preferring remote development environments, and also to be able to serve as a deployment vehicle, we're going to need to rent cloud services and VPSes of various kinds on a per-need basis: AWS, Azure, Linode, DigitalOcean (other suggestions are welcome). These will be used for testing purposes, yet they'd be crucial for the smooth delivery of everything that concerns remote or production use of PFC, including deployment.

A note on repositories & servers
Please don't be under the impression that the main project repository, various official images and other PFC-related things will be stored primarily on our production server. In fact, we'll make sure they can be easily fetched in multiple ways, including via BitTorrent, Docker, or the multitude of other services available that we deem a reasonable place to host them. The goal is to make this project persist and flourish even after our funding runs out completely. It's what we ourselves want: we do not want to be supporting it our whole lives, but rather deliver a production-ready core and then let time decide. If funding continues and people ask for some additional features, we'll surely consider it, especially if those requests come within the span of the next 6 months after the PFC 1.0 release. If they arrive 10 years after, perhaps there's a lesser chance we'll be able to accommodate them, but someone else might pick up the torch.
A note on the lifespans of the services funded

It's rather crucial for us to have various machines: virtual or bare metal, remote & rented or purchased locally. Without some funding available for these purposes, we won't be able to start. But that's also why the Set 1 funding deadline is actually set to a much earlier date: we don't need all of it at once, but certainly by the end of Set 2 and before we start work on Set 3. Otherwise, we cannot ensure the project can continue after our completion estimate of ~6 months — continue without disruption, that is. Despite our sincere effort to make PFC independent of any web-servers or web-services, some are needed for now: both for the convenience of our distributed team and for the convenience of users who aren't willing to spend too much time manually downloading code or .tar.gz releases of various PFC parts from third-party services, such as GitHub.

We're only going to be purchasing these services for the limited time necessary to conduct automatic & manual testing. Most allow per-hour purchases, so we'll be keeping these expenses to a minimum.

Current state: FUNDRAISING NOW



LIFESPAN: 1.5 years

Set 1
Preliminary work: migrate DOCK codebase, unit-test coverage, PGP & recipes.

Upon completion of Set 1, the PFC toolset will still remain compatible only with Docker, but this set would allow all architectures and all kinds of Linux distros to be used easily — which currently isn't the case with DOCK.

PGP-signatures checks
A Bash script that automatically downloads PGP keys from various public sources, compares them (to make sure no source served a malicious key), imports them, and then checks whether a given file has a valid signature provided by the signature file. It shall all work by invoking a single command, but also either provide options to invoke these steps separately or intelligently guess which steps to skip.
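As a rough illustration of the intended flow, the cross-checking logic could look like the sketch below. Everything in it — the function names, the two keyservers, the CLI shape — is our assumption, not the final design:

```shell
#!/bin/sh
# Hypothetical sketch of the PGP-check helper. Keyservers, function
# names and the CLI shape are illustrative only.

# fprs_agree FPR1 FPR2 ... -> succeeds only if all fingerprints match,
# i.e. every public source served the same key.
fprs_agree() {
    first=$1
    for f in "$@"; do
        [ "$f" = "$first" ] || return 1
    done
}

# fetch_fpr KEYSERVER KEYID -> prints the primary key fingerprint as
# served by one keyserver, using a throwaway keyring so that sources
# cannot contaminate each other.
fetch_fpr() {
    tmp=$(mktemp -d)
    GNUPGHOME=$tmp gpg -q --keyserver "$1" --recv-keys "$2" 2>/dev/null
    GNUPGHOME=$tmp gpg --with-colons --list-keys "$2" |
        awk -F: '/^fpr/ { print $10; exit }'
    rm -rf "$tmp"
}

# pfc_pgp_verify FILE SIGFILE KEYID -> the proposed single command:
# fetch from several sources, cross-check, import, verify.
pfc_pgp_verify() {
    a=$(fetch_fpr hkps://keys.openpgp.org "$3")
    b=$(fetch_fpr hkps://keyserver.ubuntu.com "$3")
    fprs_agree "$a" "$b" || { echo "key mismatch between sources" >&2; return 1; }
    gpg -q --keyserver hkps://keys.openpgp.org --recv-keys "$3"
    gpg --verify "$2" "$1"
}
```

The real script would add the promised options to run each step on its own; the cross-check is the part that protects against a single compromised keyserver.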
Migrate the existing codebase of DOCK
This step will involve taking the existing code of DOCK, which is written in Bash, and considering which parts require refactoring, rewrites or minor changes to accommodate what PFC plans to accomplish, which DOCK did not account for at all by design. Some parts may be rewritten in Perl (keeping complexity at bay, of course).
Cover all existing code with unit tests.

For parts written in Bash this shall be done using an excellent utest library that DOCK's author released a while ago. For Perl code (if it shall happen to be in the codebase by that point) it'll be done using whatever is the most standard, simple and convenient tool there is in Perl.

All of the features below this one assume unit tests (where appropriate). In our humble opinion, while the intention is to keep PFC relatively simple, unit-testing will be absolutely crucial to making fast progress without re-checking every single feature after a change is introduced.
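We can't reproduce the utest library here, so take the following as a minimal hand-rolled flavor of what a Bash unit test boils down to; both assert_eq and slugify are invented for illustration:

```shell
#!/bin/sh
# Hand-rolled flavor of a shell unit test; the real suite would use
# the utest library mentioned above. All names here are invented.

# assert_eq ACTUAL EXPECTED -> fail loudly when they differ.
assert_eq() {
    if [ "$1" != "$2" ]; then
        echo "FAIL: expected '$2', got '$1'" >&2
        return 1
    fi
}

# An imaginary function under test:
slugify() { echo "$1" | tr 'A-Z ' 'a-z-'; }

assert_eq "$(slugify "Pain Free Containers")" "pain-free-containers"
```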

RECIPES feature & making containers PFC-compatible

Consider this feature to be like a DB migration, except applied to a container (to be saved as an image, if needed). PFC will provide its bootstrap PFC-recipe which turns any kind of image into a PFC-compatible one; and a few more recipes later — to enable users to turn their images into the most popular kinds they usually download in full from Dockerhub. For example: a "Python & pip" image, a "PHP & its bells and whistles" image, a "Ruby & Rails" image, various database images, an "nginx/apache" image, etc.

The original author of DOCK wrote a pretty good spec of the proposed mechanism, so feel free to read it if you want more details. We'll be basing our implementation on it, with changes introduced on the go.

Obvious advantages:

  • You may apply different combinations of PFC-recipes
  • You can write and publish your own, in any language of your choice.
  • You won't need to bother with downloading a full image for the right architecture. You only run the recipe(s) you need and your container becomes what you want.
  • Recipes get updates, such that when there's a security update issued, PFC will notify you it has a new recipe to address the issue.

At some point, much like with DB migrations, you may want to stop writing/applying recipes with the assumption that a bunch of earlier recipes need to be applied first. And, much like with DB migrations, you'd just save (commit) your container as an image and then keep on writing recipes for that particular newly committed image.
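To make this more concrete, here is a purely hypothetical recipe. The actual format lives in the DOCK spec linked above; the names, the idempotence guard and the apt-get usage are our illustration for a Debian-based container:

```shell
#!/bin/sh
# Hypothetical "python-pip" PFC-recipe for a Debian-based container.
# All names are illustrative; the actual format is in DOCK's spec.
set -eu

# recipe_applied CMD -> succeeds if CMD already exists, i.e. this
# recipe's work was done before. Recipes, like DB migrations, should
# be safe to apply twice.
recipe_applied() { command -v "$1" >/dev/null 2>&1; }

apply_python_pip() {
    if recipe_applied pip3; then
        echo "python-pip recipe already applied, skipping"
        return 0
    fi
    apt-get update -qq
    apt-get install -y --no-install-recommends python3 python3-pip
}

# Inside the target container one would run: apply_python_pip
```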


This feature will not apply to Docker alone (and, in fact, no feature PFC delivers will be tied to a particular container engine, not where the external PFC interface is concerned anyway). It'll work with all container engines PFC supports, including FreeBSD jails.

Current state: FUNDRAISING NOW



LIFESPAN: zero-maintenance


Set 2
Integrate various HYPERVISORS & usage of heavier VMs on all major OSs.

Set 2 is one of the most crucial ones for PFC portability. This is when we integrate hypervisor VMs on various systems, such that we can run any type of OS on any architecture and then run containers inside those instances without blinking an eye.

Write a layout script for working with various hypervisors

This step involves writing a layout Bash or even sh script (to make it as portable as possible, but without any hypervisor-specific code yet) that would be in charge of launching a dedicated hypervisor VM on a particular system, and passing on all it needs to pass on for the container(s) to be launched as usual. It will then be integrated into the main pfc executable script and the corresponding .pfc config file. The end result should be a script that figures out what system it's installed on, which hypervisors are available (if any), and what the user preferences are (provided via a CLI argument or the .pfc config file). Based on that, it makes an informed decision: either launch a VM via one of the hypervisor-specific launch scripts — scripts yet non-existent and to be added in the next step — or go without a VM and launch the container directly via an available container engine (only Docker, at this point), which is what would happen anyway after this script is integrated into the main pfc executable script.
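The decision step described above can be sketched as follows; the backend names, probing commands and the preference argument are assumptions for illustration:

```shell
#!/bin/sh
# Sketch of the layout script's decision step. The real script would
# read its preference from a CLI argument or the .pfc config file.

have() { command -v "$1" >/dev/null 2>&1; }

# choose_backend OS PREFERENCE -> prints which launcher to use.
# PREFERENCE may be empty, "none" (skip the VM), or a hypervisor name.
choose_backend() {
    os=$1
    pref=${2:-}

    # An explicit user preference always wins.
    case $pref in
        none) echo direct; return ;;
        ?*)   echo "$pref"; return ;;
    esac

    # Otherwise prefer the host OS's native hypervisor...
    case $os in
        Linux)   if have qemu-system-x86_64; then echo kvm;   return; fi ;;
        Darwin)  if have xhyve;              then echo xhyve; return; fi ;;
        FreeBSD) if have bhyve;              then echo bhyve; return; fi ;;
    esac

    # ...then fall back to VirtualBox, then to no VM at all.
    if have VBoxManage; then echo virtualbox; else echo direct; fi
}
```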

Add support for major hypervisors

Write dedicated scripts to launch bhyve or xhyve VMs on FreeBSD and MacOSX respectively, KVM/QEMU instances on Linux, and VirtualBox instances on Linux, Windows, MacOSX and *BSD systems. VirtualBox, due to its availability on most platforms, will be the fallback option before the script gives up.

The go-to default OS image at this point will be a specially prepared PFC-compatible Debian or Ubuntu Linux — because only Docker is supported anyway, it doesn't make much sense to bother with others yet (but rest assured we will bother with them very soon).

However, this dedicated PFC-compatible VM Linux image will be "prepared" with the already implemented PFC-recipes feature, which means anyone would be able to start writing their own simple scripts to accommodate any OS image and make it PFC-compatible even before the PFC team releases an official recipe for it.

Current state: PENDING



LIFESPAN: zero-maintenance

Set 3
Implement support for most popular lightweight container engines

Set 3 will finally materialize PFC's most important promise: working with multiple container engines on all popular platforms and architectures (made possible, of course, by the important work that's to be performed in Sets 1 and 2).

Add FreeBSD jails support

Since we have Linux, MacOSX & Windows covered with Docker already, and those OSs, at that point, would all work with PFC, it's only logical to start adding alternative container engines to support platforms where Docker simply doesn't work. FreeBSD is undoubtedly the first choice: it had its own lightweight container engine, called FreeBSD jails, long before Docker even existed. Adding support for FreeBSD jails would finally close that gap, so that all major platforms would work with PFC through a unified CLI and GUI interface.

The importance of adding FreeBSD jails also lies in the fact that although it is in many ways very similar to Docker, it is also radically different from it. For one thing, it doesn't have the concept of images: a container's root is just a directory on a FreeBSD host. PFC would then add that necessary layer of images simply by creating a .tar.gz archive of the particular directory serving as the container root. In fact, DOCK is already capable of both exporting and importing images this way — from a .tar.gz archive. Combined with the, by then, already existing PFC-recipes functionality, jails would be handled almost identically to other containers. And from then on, this can be extrapolated to any container engine, because FreeBSD jails is (as far as we're concerned) the one most unlike other container engines.
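The image layer for jails really can be sketched in a couple of lines; the function names are ours, and DOCK's real import/export code is of course more involved:

```shell
#!/bin/sh
# Emulating images for FreeBSD jails: an "image" is just a tarball of
# the directory serving as the container root. Names are illustrative.

# jail_export ROOTDIR IMAGE_TGZ -> pack a container root into an image.
jail_export() {
    tar -C "$1" -czf "$2" .
}

# jail_import IMAGE_TGZ NEWROOT -> unpack an image into a fresh root.
jail_import() {
    mkdir -p "$2"
    tar -C "$2" -xzf "$1"
}
```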


The hardest part of putting things together here would actually be networking (as was the case with DOCK as well, as we learned). In Set 3 we won't be paying much attention to firewalls (although they will inevitably be involved, if only to route the traffic through the right bridge or virtual LAN connection). Securing the network comes in a later set.

Add Podman support

Our research showed that the next most popular container engine after Docker itself is Podman. The great news is that Podman, as it happens, is capable of mirroring Docker's CLI interface, so the integration wouldn't even be much of a hassle. Podman also has an associated tool called Buildah, which allows for the creation of images from containers — something Podman cannot do on its own. We haven't looked deeper into whether it'd be worth employing that tool or using the very same approach we'd use for FreeBSD jails to emulate images there. It may make sense to support both approaches — for interoperability and compatibility. We currently do not believe this will be an issue that'd require much time or maintenance.
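Because Podman mirrors Docker's CLI, the dispatch really can be this thin. A sketch, where PFC_ENGINE and the function names are our hypothetical illustration:

```shell
#!/bin/sh
# Thin engine dispatch made possible by Podman's Docker-compatible CLI.
# PFC_ENGINE and the function names are our illustration.

# engine -> prints which engine to use; an explicit PFC_ENGINE override
# wins, otherwise whichever of docker/podman is installed.
engine() {
    if [ -n "${PFC_ENGINE:-}" ]; then echo "$PFC_ENGINE"; return; fi
    if command -v docker >/dev/null 2>&1; then echo docker; return; fi
    if command -v podman >/dev/null 2>&1; then echo podman; return; fi
    return 1
}

# pfc_run IMAGE [ARGS...] -> identical invocation regardless of engine.
pfc_run() {
    e=$(engine) || { echo "no container engine found" >&2; return 1; }
    "$e" run --rm "$@"
}
```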

Note about estimates and domain-specific knowledge

This being the crucial step in PFC development, we wanted to give accurate estimates for Set 3 while not falling into the trap of unknown unknowns. Please do email us if you have knowledge or concerns about this particular step, so that we can correct our estimates.

Current state: PENDING



LIFESPAN: zero-maintenance

Set 4
Remote development environments

Feature set 4 addresses the needs of developers who prefer remote environments. It'd allow them to use the exact same CLI or GUI ◜PFC◞ provides for local machines, connect their text editors and other tools to their clouds, and even offer very fast backup & restore in case a cloud provider goes down and development is stalled.

Remote instances as containers

Most cloud providers, such as AWS, Azure, Linode, DigitalOcean etc., these days have APIs which allow one to instantly create and destroy instances on their servers. There are three pieces of functionality in this set which will help us achieve the goal to a degree that's not just acceptable, but indispensable for anybody using the cloud as their development environment.

Therefore, ◜PFC◞ may employ these APIs to allow those instances to act much the same way a local container managed by the ◜PFC◞ interface would.

Multiple containers on a remote instance

Sometimes, however, your remote instance may be quite heavy already. Or, perhaps, it's your own bare-metal server with an OS of your choice already installed. In this case, ◜PFC◞ will manage it not through the API, but through an SSH connection — in very much the same way and without any additional input from you, apart, perhaps, from requesting information on the correct local SSH key to be used.

Then the various container types can be seamlessly made to work on those instances, much like they do on local machines.
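The SSH path amounts to prefixing the very same engine commands with an ssh invocation. A sketch, where remote_cmd and the flag layout are our assumptions:

```shell
#!/bin/sh
# Sketch: the same container commands, prefixed with ssh when the
# target is a remote instance. remote_cmd is an illustrative name.

# remote_cmd HOST KEYFILE CMD... -> prints the full command PFC would
# execute, e.g. for "docker ps" on a remote host.
remote_cmd() {
    host=$1
    key=$2
    shift 2
    printf 'ssh -i %s %s -- %s\n' "$key" "$host" "$*"
}

# remote_cmd dev@example.com key.pem docker ps
# -> ssh -i key.pem dev@example.com -- docker ps
```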

Backup & Restore in minutes
In case your remote development environment is down, provide almost instant backup and restore functionality, such that the instance(s) currently down can be quickly brought back online some place else — either on another cloud provider's service or on a local machine. The issue is not imaginary: we've observed multiple instances of developer work being put on hold for hours simply because a service like AWS or GitHub Codespaces went down. ◜PFC◞ aims at solving this issue without any additional work on the side of the user.

Current state: PENDING



LIFESPAN: maintenance may be required due to 3rd-party API changes

Set 5
Orchestration, testing & deployment

This part isn't about writing PFC's own version of Docker Compose or, god forbid, Kubernetes. We discussed it internally and came to the conclusion that the most important goal this feature must accomplish is to rid the user of having to learn anything: even a DSL of some sort.


The feature, upon completion, will provide the following tools for orchestration:

  • A CLI command pfo (which stands for Pain Free Orchestration — a short, reasonable command name which isn't already taken; we'd be out of luck if we were scouting for domain names, though) will handle the creation of simple key=value configuration files, or something similarly plaintext-looking and super simple. The same command will also allow for starting, stopping and various other manipulations that might be required to be performed on containers and/or images. Thus, it is very possible you would never even have to edit a config file, with the additional benefit of the GUI part of the feature making use of the very same CLI interface behind the scenes.
  • An option to create and manage multiple different orchestration configurations for the same project.
  • An option to run custom scripts or compiled programs doing exactly what you want, both tactically (for some of the steps) and strategically (for multiple steps, environments, or even to completely replace what the config files describe with your own scripts and programs), all the while keeping the pfo command's CLI interface the same.
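For a taste of what "super simple" means here, a pfo configuration might look something like this — an entirely hypothetical example, since pinning down the format is exactly this set's job:

```
# hypothetical pfo config: plain key=value, nothing to learn
project        = myapp
containers     = web db
web.image      = pfc/nginx
web.ports      = 8080:80
db.image       = pfc/postgres
web.depends_on = db
```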

It goes without saying that this shall work on all supported systems and with all supported container engines. By now it should be pretty clear that PFC will provide identical CLI and GUI interfaces, no matter the platform or container engine, for any feature it implements. Exceptions may occur, but they are expected to be very rare indeed.

For more detailed information on this feature, please refer to the orchestration section of the features page.


Some of the inspiration for this feature was drawn from dork — an unimplemented spec of a similar tool published by DOCK's author. We decided to keep the best parts of it, but add more flexibility & simplicity to our implementation, and gradually see what works and what doesn't. Even that unimplemented spec is worth reading, though, for the curious minds out there.

Automated tests

All members of our team value unit tests. One person finds it indispensable to also have integration & functional tests for their projects, while others find them rather difficult to maintain and keep up to date. Regardless, automated tests have become almost universal throughout teams, companies, open-source projects and even individual developers. The major disagreements people have about testing usually concern two things: what to test and what kind of tests to write. Exactly where and how they run is not such an issue, with the abundance of various developer-oriented CI tools. But our team also sees a few alarming patterns: a) testing has somehow become non-detachable from the deployment process; b) what's even more alarming, it has become intertwined with a particular version control system, namely Git; and c) most alarming of all, you'd now almost certainly want to run your tests on GitHub or GitLab, or risk wasting time coding up your own solution. We all feel it shall not be this way.

After all, one mustn't commit changes until the tests pass; and if so, how can one possibly rely on Git, let alone any 3rd-party remote service, to do that job? It's simply impossible.

The PFC team had a conversation on how to approach this problem agnostically — in such a way as to not require developers to run daemons (like GitLab does) or perform multiple git commit --amend && git push rounds and wait until their tests pass.

We decided a great addition to the toolset would be the following sub-features:

  • For integration and functional tests, PFC would create a special environment consisting of the necessary running containers — using the Orchestration feature described above. That's because for development and unit tests you might not have needed all of the containers, but now you all of a sudden supposedly do.
  • For unit tests, there isn't any special tool necessary — unless, of course, you'd want to run them some place else: be it a remote container or a local, but separate, one (who knows, people have their reasons, we've seen that too). For that reason, PFC would offer the exact same mechanism as it would employ for integration and functional testing, only without orchestration involved.

For those who fancy remote development environments, it would not be any different in your case. Remote is the same as local for PFC, because the CLI and GUI interfaces would be exactly the same (only your main PFC settings, normally located in ~/.config/pfc.config, would be different from those of your pals foolishly running it all on their old dusty Pentiums in their basement). PFC is the new great equalizer, we'd like to think.

It is almost poetic now. You must have bought it by now. But if not, the last feature in our set is "Deployment", so hold on for several more tragically satirical paragraphs.


Have you ever noticed how, at least in the case of web frameworks, they all seem to have their own deployment tools, while they could do perfectly well with even a simple Bash script? Or else, you'd have to craft your own tool, until finally submitting to the nightmares of the Kubernetes monster, due to someone else's bad judgement and lack of experience, or by your own unfortunate and unwise choice. Forgive us for being opinionated. If you use your favorite complexity-enhancement tool (no need to name names, there are several), you may keep doing so and still use PFC containers. But at least you'll have an easily digestible alternative to try. We'll call this one pfd for now (with "d" standing for deployment, not "daemon").

How about not ever going down that road? We have something better, which will be usable not just for PFC-based projects, but for any project. Here's a list of sub-features we aim to implement in pfd, written either in Bash or Perl:

  • Some basic tasks any kind of deployment usually requires, such as git-archiving repositories with all their submodules, compiling or transpiling (inspiring, even) whatever needs to be, rsync-ing the release to the target machine(s) and, finally, restarting the machine itself, a container or a web-server. All these functions would be subject to your order-of-execution preferences.
  • If you're "one of those" 100% deterministic-build personalities, pfd wouldn't have a problem with you either. It'll just prepare and upload the images, create containers from them and off you go.
  • A/B failure-tolerant deployment. Regardless of the way you deploy your apps, you might want to make sure you won't need to roll everything back because error messages start raining like it's monsoon season in Bangladesh, but it started in December, somehow.

    Sorry, Bangladesh. We've never been, but it seems to be a rather peculiar place despite how densely populated it is. All of us agreed to make this joke and leave it here, but made a promise we'd all visit, if funding goals are reached and we complete the project.

    And so, the A/B failure-tolerant deployment sub-feature will be responsible for rolling your project out to production gradually. If anything goes wrong, the damage would then be almost infinitesimal. If you start seeing issues popping up, you can easily stop the deployment altogether without having to do much besides typing one command, such as pfd pause or something similar. If we have time left out of what's outlined for this set, we'd even write a tool to manage SQL-database migration rollbacks (not full-scale rollbacks, but limited ones; exact implementation details are to be determined).

    As part of the A/B failure-tolerant deployment, basic tools for callbacks will be introduced. The first task would be to watch for application exceptions and initiate a deployment pause or rollback. This you wouldn't have to write yourself.

    Finally, as should have become obvious by now, pfd will allow for complete customization and be ready to handle programs you write in any language of your choice, yet still preserve the same universal CLI interface.
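The pausing mechanic from the A/B sub-feature above boils down to a tiny piece of arithmetic. A sketch, with all names hypothetical: the new version's traffic share grows step by step, and pfd pause merely freezes it:

```shell
#!/bin/sh
# Sketch of one gradual A/B rollout step. next_share and the "paused"
# flag are our illustration of what `pfd pause` would toggle.

# next_share CURRENT STEP PAUSED -> prints the new version's traffic
# share (in percent) after one more deployment step.
next_share() {
    cur=$1
    step=$2
    paused=$3
    if [ "$paused" = yes ]; then
        echo "$cur"            # pfd pause: hold the current share
        return
    fi
    new=$((cur + step))
    if [ "$new" -gt 100 ]; then new=100; fi   # cap at full rollout
    echo "$new"
}
```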

Set 5 is a lot of work. Good work, hard work, but it's going to cost us all more than other sets.

Current state: PENDING



LIFESPAN: zero-maintenance