◜PFC◞ Pain Free Containers

An agnostic container engine toolset — successor to DOCK.

Works with: Docker, Podman, FreeBSD jails, hypervisors & cloud APIs.

The features described here are all part of one of the funding sets and, as such, some of the definitions may be repeated. The purpose of this page is to give a shorter overview and demonstrate usage examples.
This page also serves as a partial spec and documentation, so consider part of that work completed already. Documentation, especially well-written and well-formatted documentation, is crucial, and here we aim to provide part of it already.
No config files for images (e.g. Dockerfiles)
Just like our predecessor DOCK, ◜PFC◞ does not use configuration files for images. Instead, all the necessary configuration is stored either inside the container or image itself, or in a short and easy-to-read global ~/.config/.pfcrc or ~/.pfcrc; and, finally, the more advanced configuration is applied through special migration scripts called recipes.
Converting an image into a PFC-compatible one

In order to start using ◜PFC◞, you do not need to download our "specially" prepared images — after all, you can't trust anybody these days, especially new software. For that reason (but not only) ◜PFC◞ allows you to take any image from whichever container engine and convert it into a pfc-compatible default image, from which you can later derive other containers and images and write your own conversion recipes.

Suppose you want to use Docker as your container engine and convert its ubuntu:latest image into a PFC-compatible one. You can do this with the following command:

pfc default docker ubuntu:latest

or even shorter:

pfc default ubuntu:latest

Running the code above would result in applying the default set of recipes to make the image PFC-compatible. When you list it with the pfc img command, it'll show up as pfc:default.

We were able to omit the docker part in the second command because ◜PFC◞ is smart enough to figure out which container engine you're using. You could've specified a fully qualified image name up there, or use the shorter one we opted for. The commands for Podman, FreeBSD jails or the various cloud services would only differ in their image naming.
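Engine auto-detection isn't specified further on this page; as a purely hypothetical sketch, it could be as simple as probing for installed engine binaries (the candidate list and the fallback value here are our assumptions, not ◜PFC◞'s actual behavior):

```shell
# Hypothetical sketch of container-engine auto-detection: probe for
# known engine binaries on PATH and report the first one found.
detect_engine() {
  local eng
  for eng in docker podman; do
    if command -v "$eng" >/dev/null 2>&1; then
      echo "$eng"
      return 0
    fi
  done
  echo "none"  # no known engine installed
}
```

With such a helper, pfc default ubuntu:latest could expand itself to pfc default docker ubuntu:latest whenever Docker is the only engine present.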

Typical container usage: starting & connecting

You navigate into a particular directory and then run one simple command: pfc. It will figure out a lot of things for you.

Suppose your container does not exist yet. Let's look at these two commands — given our case, they are equivalent:

pfc

and

pfc pfc:default

Both lines would create a new container from the image tagged as pfc:default, mount the current directory into it (in RW mode) along with some ◜PFC◞-specific stuff (in read-only mode), start the container, generate and upload an ssh-keypair onto it (unless it's already there) and, finally, connect to it.

The pfc part in the example above is the NAMESPACE, while default is the TAG NAME corresponding to that namespace. Which, in reality, may well translate into the following command:

pfc pfc/ubuntu20.04/v1.0.24

This notation should look familiar to anyone coming from Docker, where the string before the first / is the Docker image's so-called "repository name" — or NAMESPACE in ◜PFC◞ terms; the string between the two / is the container NAME; and the final string is its version (not the tag — more on the usage of tags will be published later).

BUT WHAT IF THE CONTAINER ALREADY EXISTS?
Then either command would only connect to it (starting it, if needed). You just have to be in the right directory, because by default container names are derived from your current path.

You may, however, start any container from any directory by simply specifying its name: pfc myapp.php8.dev. ◜PFC◞ will not take this as a request to create a new image: on the contrary, it will assume you mean the container named myapp.php8.dev. Judging by the container's name, the path of the directory mounted into it is ~/dev/php8/myapp.
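The exact naming rule isn't spelled out here, but judging by the single example (myapp.php8.dev for ~/dev/php8/myapp), a plausible sketch of the derivation would reverse the path components under your home directory and join them with dots:

```shell
# Hypothetical sketch: derive a pfc container name from a directory
# path the way the example suggests. ASSUMPTION: take the path
# components after $HOME and join them in reverse order with dots.
path_to_name() {
  local rel="${1#"$HOME"/}" name='' part
  local IFS='/'
  for part in $rel; do
    if [ -z "$name" ]; then name="$part"; else name="$part.$name"; fi
  done
  echo "$name"
}
```

So path_to_name ~/dev/php8/myapp would print myapp.php8.dev under this assumed rule.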

Listing PFC-instances

You may view all or some of your pfc-compatible containers (pfc-instances) with one of the following commands:

pfc list

or even shorter:

pfc l

Or:

pfc l -r # only showing running pfc-instances
pfc l -s # showing only the stopped ones

And this might be the output of the first command, pfc list:

CONTAINER           IMAGE                        TAG
my-project.web.dev  pfc/default/v1.0.0           *
my-pfc.forks.dev    pfc/default/v1.0.0           :default
proxy1              local/proxy                  :proxy
proxy2              local/proxy                  :proxy
tor-proxy           local/proxy                  :tor:proxy
rb2.7               pfc/ruby2.7/rel_2022-03-21   *
py3                 pfc/python3                  -
php8                somebody/8.0.22              -

The asterisk (*) in place of the tag name tells us the container has been changed (excluding the mounted directories) and has deviated from the original image. Thus you may want to assign it a new tag. It's images, not containers, that can be assigned tags. So, once changes are introduced, the container automatically loses its tag(s) and is assigned the special (*) marker.

This fact does not necessitate any action on your part. In fact, most of your containers would probably be marked as such as more and more changes are introduced — until you save (commit, in Docker's terminology) them as images.

Sometimes images don't have any tags — it's someone's responsibility, you see, to assign tags to images. So it'd be either you, the image maintainer, or PFC's recipe script (more on that later). Until then, or until you introduce changes into the container, you'll be seeing the - in place of a tag.

Recipes: portable images & life simplified

Imagine three seemingly simple problems — and let us suppose they are all applicable to your current situation (actually, very probable) and you need to solve them fast.

Prerequisites: you're using Docker, you have an image/container of about 2 GB in size, and you have a co-worker who, unlike you, uses FreeBSD (or MacOSX / Windows / Raspberry Pi OS...) instead of Linux and isn't quite keen on setting up a Virtual Machine. These are your problems, then:

  1. How do you make it work for your co-worker without him getting angry at you?
  2. How do you update the image without re-sending it every time a small change is introduced?
  3. How do you keep your co-worker happy by allowing him his own .dotfiles and environment within the container, while also keeping the important parts rolling and working the same way?

Before you go on raging about "who cares about FreeBSD", let us present our case for recipes, which would be a much better solution even for two identical Linux hosts running Docker, let alone anything different.

A non-perverse solution is NOT to write a Dockerfile (especially since Dockerfiles are only supported by Docker and maybe Podman). It is to write a script or, perhaps, a compiled program that makes the necessary changes to the container. You can then keep piling new scripts on top, each one either depending on the previous ones or not. And those we'll call recipes.

What is a recipe?

A recipe is a single executable (or not) file, written in any language of your choice. Its job is to do things to the target container, such that the container then works in a particular way — for instance, so it can run TensorFlow or Ruby On Rails 7 or whatever. Maybe there's some more complex stuff that needs to be done to it; and maybe your container gets more complex over time. The complexity and structure of the recipe scripts are both up to you. ◜PFC◞, of course, will provide some basic ones that cover the most popular cases.

To demonstrate what a typical standalone recipe may look like, let's first write one. It'll do two things: install Vim and nginx. We'll write it in Bash for convenience, but you can actually write it in any language.

recipe_file_nginx_vim_debian.sh
#!/usr/bin/env bash

# Calling `pfc syspkg` instead of a system-specific package manager
# takes care of most of the systems, be it Linux, FreeBSD, MacOSX or
# Windows. There are a few more commands like that you might want to
# make use of in the future.
pfc syspkg install vim
pfc syspkg install nginx

# Oh, and maybe also let's have a default site config for nginx,
# so it serves us something from /var/www/html/
fn=/etc/nginx/sites-enabled/default.conf
echo -e "server {\n\tlisten 80;\n\troot /var/www/html;\n}" > "$fn"
echo "<h1>hello world</h1>" > /var/www/html/index.html

This script will run inside the container, and after its completion the container becomes what you want it to be. The advantage of using PFC-recipes instead of Dockerfiles is that you can do ANYTHING you'd do with any script in any language AND have the additional help of the pfc command itself, providing cross-platform functionality for the various OSes living inside the containers themselves.
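The internals of pfc syspkg aren't described here; as a hedged sketch, the kind of dispatch it might perform is simply mapping the running system to its native package manager (the mapping below is our assumption, not ◜PFC◞'s actual implementation):

```shell
# Hypothetical sketch of the dispatch `pfc syspkg` might perform:
# pick the native package-manager invocation for the current system.
syspkg_cmd() {
  case "$(uname -s)" in
    Linux)
      if command -v apt-get >/dev/null 2>&1; then echo "apt-get install -y"
      elif command -v dnf >/dev/null 2>&1;  then echo "dnf install -y"
      else echo "unknown"; fi ;;
    FreeBSD) echo "pkg install -y" ;;
    Darwin)  echo "brew install" ;;
    *)       echo "unknown" ;;
  esac
}
```

A recipe line like pfc syspkg install vim could then expand to, e.g., apt-get install -y vim on Debian or pkg install -y vim on FreeBSD.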

Applying recipes

It's easy to apply a recipe, which can be done from either the host or the guest (the latter — so long as you've made the recipe file available from within the guest file system). You just have to type pfc recipe [FILENAME] or, if you have a special dedicated directory which holds all the recipes for a particular type of image(s) and you know you want to apply them all, just type pfc recipe.

The trick there, of course, is the order in which they shall be applied and the dependencies they shall have. Luckily, ◜PFC◞ can help you with both.

  • The order is determined by the numbered prefix of the recipe filenames, much like it's very often done with database migrations. You can use consecutive integers or timestamps, it doesn't matter.
  • The dependencies are trickier, but only on the inside, not for the user: each container holds the information about all of the applied dependencies inside of itself. Thus, while writing your own recipe, you can make sure it doesn't run unless those dependencies are met, by putting the following code at the top of the file:
001_php8_recipe.sh
#!/usr/bin/env bash

# These two commands determine the name of your own recipe group
# and spell out the recipe dependencies, without which your file wouldn't run.
pfc recipe --name=php8
pfc recipe --dependencies=pfc-compat,nginx
...

This basically tells ◜PFC◞ two things: your current recipe belongs to a group of recipes called "php8" and that before it's applied it must make sure that all of the available recipes from the "pfc-compat" and the "nginx" groups were applied first.

In fact, you do not need to spell out the first line at all: if it is omitted, ◜PFC◞ will derive the group name from your filename. As for the second line, you may just as well pass the dependencies when calling the command itself, instead of putting them into the file (although that's not recommended).
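How the applied-recipe bookkeeping works internally isn't specified; a minimal sketch, assuming the container records applied recipe groups one per line in a manifest file (the path below is invented for illustration), could look like this:

```shell
# Hypothetical sketch of the dependency check. ASSUMPTION: the
# container keeps the names of applied recipe groups, one per line,
# in a manifest file; the default path here is our invention.
deps_met() {
  local manifest="${PFC_MANIFEST:-/var/lib/pfc/applied}" dep
  for dep in $(echo "$1" | tr ',' ' '); do
    if ! grep -qx "$dep" "$manifest" 2>/dev/null; then
      echo "missing recipe group: $dep"
      return 1
    fi
  done
  return 0
}
```

A recipe declaring --dependencies=pfc-compat,nginx would then refuse to run until both groups appear in the manifest.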

The question is, what happens if you don't have the dependencies? Which source shall be the authoritative one for getting them? We decided at this point not to get into this subject, as it smells of package-management reinvention, but be sure we'll get to it as the feature is about to release. Our goal isn't to reinvent package management in some obscure form under a different name, but to achieve our own goals: allow for easy instance updates, no matter which OS or architecture an instance is being run on.

Creating recipe files automatically

It can be rather time-consuming to do things twice: first to manually install the software that you need and edit config files, and then to also program a recipe for that. ◜PFC◞ can assist you here as well. By running pfc recipe --draft-changes it will scout the system's history and log files, check for inconsistencies and create a draft file for you, which you can later edit just a bit to make sure it didn't include the parts you wouldn't want it to include.
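What --draft-changes actually scans isn't documented yet; as one hedged illustration, drafting recipe lines from a shell history file alone might look like this (a real implementation would consult many more sources):

```shell
# Hypothetical sketch: scan a shell history file for package
# installations and turn them into draft recipe lines. The output
# format (`pfc syspkg install ...`) follows the recipe example above.
draft_from_history() {
  grep -Eo '(apt-get|dnf|pkg|brew) install [a-zA-Z0-9._+-]+' "$1" \
    | awk '{print "pfc syspkg install " $3}' \
    | sort -u
}
```

Given a history containing "sudo apt-get install vim" and "brew install nginx", this emits one draft line per package, deduplicated.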

Backend options: light containers, VMs, Cloud

PFC-backend refers to the part for which ◜PFC◞ itself serves as an interface. A backend can be as "simple" as Docker, or as complex as a remote server running FreeBSD, which, in turn, runs Linux with the help of FreeBSD's bhyve hypervisor, which, in turn, runs Docker. In the latter case our "target" is still Docker containers and images, but ◜PFC◞ needs to be able to penetrate all these levels seamlessly, with the very same interface you'd use in the former, simpler case. So how does it do it? Or, rather, what must you, as ◜PFC◞'s user, do in order to achieve that goal? Let's look at a few simple commands:

pfc --context=ssh:11.22.33.44,FreeBSD,bhyve:ubuntu22,Docker
pfc list            # list containers in that context
pfc mysite.www.var  # create/start/connect to a container in that context

The first line of this code switches ◜PFC◞ into a mode where all of its commands go through the server accessible at the IP address 11.22.33.44 — this mode will persist even after you reboot your local machine, until you type pfc -c local (where -c is a shorthand for --context, of course).

After you're in that remote mode, you can use ◜PFC◞ as if it were your local machine. More importantly, none of it has to be installed on your remote system, apart from FreeBSD itself. ◜PFC◞ will install bhyve, install Ubuntu 22.04 onto it, install Docker on it, install the default image and make it PFC-compatible. You could, theoretically, have made this command a bit more complicated, specifying more details, but, perhaps, what's shown here will suffice for now.

The second and the third commands are familiar to you already: they list containers and create/start/connect to a container called mysite.www.var. And all of these layers are penetrated for you each time, automatically.

There's more: you don't have to type this every time you want to switch context. Instead you can give your context an alias, either while you're in it or while you're switching into it:

pfc -c ssh:11.22.33.44,FreeBSD,bhyve:ubuntu22,Docker --ct-name=remote1

or

pfc -c ssh:11.22.33.44,FreeBSD,bhyve:ubuntu22,Docker
pfc --ct-name=remote1 # name the context

Then, you can switch between the contexts easily:

pfc -c local   # brings you back into the local context
pfc -c remote1 # switches you into the remote1 context again
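How the active context persists across reboots isn't stated; a trivial sketch, assuming the context name is kept in a small state file (the location is our guess), would be:

```shell
# Hypothetical sketch of persistent context switching: keep the
# active context name in a state file and read it back on every
# invocation. The file location is an assumption.
PFC_STATE="${PFC_STATE:-$HOME/.config/pfc/context}"

set_context() {
  mkdir -p "$(dirname "$PFC_STATE")"
  echo "$1" > "$PFC_STATE"
}

get_context() {
  cat "$PFC_STATE" 2>/dev/null || echo "local"
}
```

With this scheme, "local" is simply the default returned whenever no context has been stored yet.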

Shall we take a look at another example employing, say, the Linode API to create containers, instead of using a particular container engine?

pfc -c linode:LINODE_API_KEY --ct-name=linode1
pfc ubuntu22 my_new_website

The second command will create a new Linode instance with Ubuntu 22.04 and connect you to it.

Connecting software (text-editors, IDEs, etc.) to your instances

If you'd like to connect software to your instances, you'd need at minimum some information, such as an IP address and a port. Ports can be especially tricky if you have layers of various VMs stacked up on each other, so ◜PFC◞ simplifies it for you by providing all the necessary information if you type this one command:

pfc --info=my-php8-app

or if you're not in the context you need to be, type the following instead:

pfc --info=remote1:my-php8-app

You shall then be presented with the following output:

LAYER  INSTANCE      TYPE      IP-ADDRESS:port (on previous layer)
----------------------------------------------------------------------
a)     FreeBSD 13.1  [BARE]    11.22.33.44:22
b)     Ubuntu 22.04  [bhyve]   192.19.0.24:22
c)     my-php8-app   [Docker]  172.18.0.11:22

YOUR PASS-THROUGH to remote1 -> layer (c)
----------------------------------------------------------------------
ssh 11.22.33.44 -p 18011 -l pfc -i ~/.ssh/PFC_remote1

The last line is your way through: a direct connection cannot be established, so it shows you the information you need, such as the port number — 18011 in this case — and you'll have to use the correct ssh key & username to connect, which is what that line presents you with as well. It gets you through all the way to LAYER (c).
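If you'd rather not retype that pass-through line, its details fit into a plain OpenSSH client config entry; here's a sketch that writes one (the alias name and file path are our inventions; the host, port, user and key are taken from the output above):

```shell
# Sketch: persist the pass-through details in an OpenSSH config
# entry, so the tunnel can be reached by a short alias.
cfg="${PFC_SSH_CFG:-$HOME/.ssh/config_pfc_remote1}"
mkdir -p "$(dirname "$cfg")"
cat > "$cfg" <<'EOF'
Host pfc-remote1-php8
    HostName 11.22.33.44
    Port 18011
    User pfc
    IdentityFile ~/.ssh/PFC_remote1
EOF
# from now on: ssh pfc-remote1-php8
```

You could pull such a file into your main config with OpenSSH's Include directive, keeping one generated file per pass-through.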

Backups and Roll-outs

There are many cases that can be made for ◜PFC◞ performing its own backups. OF COURSE you should be doing your own backups, be it on your local machine or a remote server. OF COURSE PFC-backups aren't something you have to use. But since they'll be there anyway, let's try them out — especially now, equipped with the knowledge of how easily ◜PFC◞ can switch contexts.

First, let's establish something very important...

It doesn't literally copy containers

Suppose you wanted to perform backups every hour. You wouldn't want ◜PFC◞ to copy all these 2 GB-or-more containers over the network, let alone the disk rewrites, nope. ◜PFC◞ uses plain old rsync to only sync the changes made in containers since the last backup. Besides, it makes no sense to copy the container verbatim if the architecture on your local machine is different and you'd want to run it anyway should your remote environment go down. Thus, a clever combination of recipes and rsync does the job under the hood.

Here's an example of a command performing a manual backup of all PFC-instances from your remote server onto your local machine:

pfc --backup=remote1

which is the same as

pfc --backup=remote1:local

And here is an example of the opposite: we're backing up all of your local PFC-instances onto a remote machine:

pfc --backup=local:remote1

You can back-up from one remote to another and even include multiple backup targets — including your local machine:

pfc --backup=remote1:local:remote2

Quite clearly, you wouldn't want to do it all by hand, so you can use another command, which will do all the crontab mumbo-jumbo scheduling for you:

pfc --backup=remote1:local:remote2,every:day@11:15am
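How ◜PFC◞ translates that schedule suffix into a crontab entry isn't specified; here's a hedged sketch handling just the every:day@HH:MMam/pm form from the example (the suffix syntax is ◜PFC◞'s, the parsing is ours):

```shell
# Hypothetical sketch: turn "every:day@11:15am" into the cron time
# fields "15 11 * * *". Only the daily am/pm form is handled.
schedule_to_cron() {
  local spec="${1#every:day@}" h m ampm
  ampm="${spec#*[0-9]:[0-9][0-9]}"   # trailing "am" or "pm"
  spec="${spec%$ampm}"               # e.g. "11:15"
  h="${spec%%:*}"
  m="${spec#*:}"
  if [ "$ampm" = "pm" ] && [ "$h" -lt 12 ]; then h=$((h + 12)); fi
  if [ "$ampm" = "am" ] && [ "$h" -eq 12 ]; then h=0; fi
  echo "$m $h * * *"
}
```

The resulting five fields, prefixed to the backup command, are what would land in the user's crontab.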

And, of course, you can hand-pick containers to be backed up.
Perhaps, not all of them deserve to be backed up after all...

pfc mywebapp1,linux-kernel-fork-haha --backup=remote1

Having done that, you can simply "roll-out" and start your container on your backup-target — local or remote — machine the usual way, as if nothing happened:

pfc mywebapp1

Honestly, do you like it? Consider funding us by donating then.

Orchestration
INTRO

Orchestration is NOT supposed to be difficult or complex in most cases. It can be, to a degree, but only as your project grows. You are not supposed to learn a new tool and write complex scripts in some half-baked DSL that's confusing and changes every 6 months, so that everything breaks.

Orchestration is supposed to be, well, like an orchestra. A perfect and beautiful thing, not chaos and disarray. If you have ever seen the best bands out there, you know there's always a leader on stage, even if that leader may not appear to be one — but he is there, on stage with everyone else, performing — conducting this small orchestra. When there's a solo artist on stage, he too is a conductor, only now he has to watch himself (which is no easier a job).

We decided we'd approach this in much the same way. What if you could have a tool that's part of the same toolset? What if that tool could be used to orchestrate different parts of your project (band) in much the same manner: going from as simple as a one-man show all the way up to conducting the London Philharmonic? After all, the greatest conductors weren't born conductors — they played instruments first and foremost, and learned music.

With that said, let us introduce a tool called pfo, which we decided to make a separate command, rather than continue to bloat pfc (even though it is still available as pfc --orchestrate [ARGS]).

THE VERSE (our project) & THE BRIDGE (examples)

It's absolutely fine if you start with the examples, but we do recommend reading THE INTRO part at least after you take a glance at this section.

Let us suppose you have a web-service called, well, Chorus to keep things consistent. This web-service displays lyrics, but not for whole songs, rather just the choruses. And it lets people do a bunch of things, such as:

  1. Search chorus lyrics for a particular song
  2. Find song names and artist names that match the supposed chorus lyrics they entered (kind of the reverse search of the first feature).
  3. Play the instrumental part, such that people can sing along
  4. Auto-respond to music-industry copyright issue emails with random song lyrics from songs, combining them in sensible emails such as:

    Thank you for the music, the songs I'm singing.

    Called my lawyer, Mr. Good News. He got me: "it's so easy when you know the rules". A backdrop for the girls and boys: you had your time, you had the power. If I'm not back again this time tomorrow — carry on, carry on as if nothing really matters.

    Don't judge me, you could be me — in another life, in another set of circumstances.

  5. And, finally, the web interface, a webapp to tie it all together (it wouldn't matter what you use to build this webapp, that's not the point now).

That would be 5 services, which we'll keep in separate containers + we'll have one for the database and another one for an nginx server. So 7 containers in total. Kinda heavy already, ain't it? Well... The coding of all of these things, admittedly, would not be hard. Let us assume we'd done it already. And now...

Let's orchestrate it!

First, let's see which containers we've got on our machine, running or stopped, no matter to us. This we had already done before, if you remember. This time we'll add an -n flag to number them for us:

pfc -l -n or pfc -ln

Here's the output we might get:

#    CONTAINER              IMAGE                        TAG
1.   webapp.chorus.dev      chorus/webapp/v1.2.21        *
2.   find.chorus.dev        chorus/find/v1.1.0           *
3.   proxy1                 local/proxy                  :proxy
4.   proxy2                 local/proxy                  :proxy
5.   revrsfind.chorus.dev   chorus/revrsfind/v1.0.11     *
6.   play.chorus.dev        chorus/play/v0.5.2           :chorus-play
7.   copyleft.chorus.dev    chorus/copyleft/v111.2.3     :chorus-copyleft
8.   tor-proxy              local/proxy                  :tor:proxy
9.   rb2.7                  pfc/ruby2.7/rel_2022-03-21   *
10.  py3                    pfc/python3                  -
11.  php8                   somebody/8.0.22              -
12.  mongodb-chorus         pfc/mongodb/r6.0.1           :mongodb
13.  nginx-chorus           pfc/nginx/1.18.0             :nginx

Ah! What a happy number, 13 containers we have. Better not deploy this on a Friday, my friends. So, knowing the container numbers, how can we easily orchestrate them, so they all start in the correct order (where it's needed) and we wouldn't need to repeat the command again? Piece of cake:

pfo chorus (12)(2,5,6,7)(1,13)

Aaaaand we're done. Can you guess what happened here? All services should be running now, you can check with the same pfc -l command. Those that were already running would NOT be restarted unless you also provide the -f or --force flag to the pfo command.

Best part is, pfo "remembers" the name of your "orchestra" and the order in which the services must be started. In fact, it remembers it in a shareable, auto-generated, human-readable configuration file, so your colleagues won't have to orchestrate it on their own.

Obviously, those container ids are unique and temporary to your system only at this point in time; they were used for convenience only. (The last one to start was number 13! Told ya, do not deploy on Fridays — only run locally.)
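For the curious, the stage syntax itself is trivial to take apart; here's a sketch of parsing it into one start-stage per line (only the syntax comes from the example above, the parsing is ours):

```shell
# Hypothetical sketch: split "(12)(2,5,6,7)(1,13)" into stages, one
# per line, in start order. Each stage is a comma-separated id list.
parse_stages() {
  echo "$1" | grep -o '([0-9,]*)' | tr -d '()'
}
```

A driver loop could then start each stage's containers in parallel and wait for it before moving on to the next line.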

Now you can do a bunch of things with this "orchestra". The following commands are pretty self-explanatory:

pfo chorus stop
pfo chorus start
pfo chorus restart
pfo chorus status
pfo chorus info
pfo chorus delete
CODA
More complex cases will get documented over time, but we've already pierced a certain level of complexity with just a few simple commands — and that's pretty damn amazing for the 95% of software out there that might need orchestration. Honestly, if that hasn't convinced you... Okay, let's deploy it, folks. It ain't no Friday, not when these words are being written anyway.
Deployment

The PFC-toolset allows you to deploy both standalone as well as orchestrated projects. We've also decided to have a separate standalone command for it — pfd, which, again, is a shortcut for pfc --deploy.

The process

Any kind of deployment usually requires some basic tasks: git-archiving repositories with all their submodules; compiling, transpiling, inspiring even — whatever needs to be done; and rsync-ing the release to the target machine(s). Finally, restarting the machine itself, a container or a web server. All these functions would be subject to your order-of-execution preferences.

If you're "one of those" 100%-deterministic-build personalities, pfd wouldn't have a problem with you either. It'll just prepare and upload the images, create containers from them, and off you go. It is agnostic, after all, as advertised.

Examples

Deploying a standalone instance

To give it a try, we first deploy a standalone container — just to get a sense of it. We will then move on to our "orchestrated" app. But for now...

cd ~/dev/my-pet-project

then

pfd remote1

This is it. Deployed to remote1 — you may remember we defined this entity earlier on this page. To refresh your memory, this is what you might have done previously to create a context, which now also serves as a deployment target — now ain't that quite a coincidence:

pfc --context=ssh:11.22.33.44 --ct-name=remote1 # <= some time earlier

Deploying an orchestra

Now let us see how much more complicated it would be to deploy our "Chorus" app from the previous section on orchestration. Not much, as it turns out — in fact we skip the cd step, so it's, in actuality, simpler:

pfd chorus remote1

That's it. All of our orchestrated containers are deployed to the remote1 server and started in just the order they're supposed to be started (or restarted, for that matter). Ah, you say, but what if some of those apps were supposed to be deployed to another server? Well, we'll just have to orchestrate the deployment itself a bit, but not by much. Let's see what contexts we actually have:

pfc -c # with no arguments, this will list all contexts

...and we get our output:

CONTEXT NAME  CONNECTION INFO      HOSTNAME
remote1       pfc@11.22.33.44:22   mechorus.lol
remote2       pfc@11.22.33.45:22   find.mechorus.lol
remote3       pfc@11.22.33.46:22   noreply.mechorus.lol
remote4       pfc@11.22.33.47:22   db.mechorus.lol
local         127.0.0.1            localhost

We must say, someone did a pretty sloppy job at naming those contexts, but whatever. We'll rename them later. For now, we can just quickly cook something up, so that certain apps that comprise our larger "Chorus" app are deployed to the correct servers. How shall we do it? Turns out very easily:

pfd chorus remote1:13 remote2:2,5,6 remote3:7 remote4:12

The numbers here refer to the output of the pfc -l -n we ran earlier. You can, of course, use actual container names instead, but it's usually easier and faster to just type in the numbers once. We say "once", because...

...not only does this command deploy it all to the correct servers, but it'll also "remember" (again!) which ones they shall be deployed to, so that next time you'd only have to type pfd chorus.
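The context:ids mapping is equally easy to take apart; here's a sketch of parsing it into per-server assignments (again, only the syntax comes from the example, the parsing is ours):

```shell
# Hypothetical sketch: split each "context:ids" argument of the pfd
# mapping into a server name and its container-id list.
parse_targets() {
  local pair
  for pair in "$@"; do
    echo "${pair%%:*} -> ${pair#*:}"
  done
}
```

From the mapping above, remote1:13 would come out as "remote1 -> 13", ready to be fed to per-server deployment steps.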