Coop-Cloud: Overview of Technologies and Concepts

This page tries to give a brief, beginner-friendly overview of all the things that go into running a coop-cloud. It may be a bit daunting at first: don't worry, you won't need to know the ins and outs of everything to be successful. Do be aware that there is a lot more complexity “under the hood”. And hey – the coop-cloud people are super nice and always welcome help – ask them questions!

You can also skip this and jump straight into the abra operator tutorial, which walks you through setting up your first app (a personal Nextcloud!).

Terminology: Developers, Maintainers and Operators

In big (and small) tech firms, all of these roles are often filled by the same company. For example, Meta develops, maintains and operates the Instagram service. You probably have a rough idea of what these words mean, but here's a brief summary:

- Developers are people who create software. They make sure it does what it should in the scenarios for which it is designed, fix bugs that come up, deal with security issues, come up with new features and make a lot of choices about how the software is built and should be used. Depending on the type of software, it may be used directly or packaged by maintainers for various operating systems. For example, the developers of Mastodon build the software that can be used to run a social media server.

- Maintainers are people who take existing software and make packages or (in the case of coop-cloud) recipes out of it. Their job is to maintain a working version of the software, integrate it with other software, or make it usable in a specific environment. For example, a maintainer could package a Mastodon server so that it can be installed on a Debian system, which operators can then use to run a Mastodon service.

- Operators are people who run services. They might combine different products, hardware components and software to enable users to do things – for example, running a social media service with the Mastodon server software. Users only need to interact with the operator's servers; they don't need to know about all the maintainers and developers.

Containers and Docker

If you've ever tried to get software that someone else wrote running on your machine manually, you'll know that there are hurdles: every linux system works slightly differently, uses different conventions or ships different versions of the same libraries. This is one of the problems containers are used to solve. The idea is: put everything you need to run a specific piece of software into a self-contained box. If you can run boxes of this kind, you can run any software delivered in such a box. In short, Docker is the software used to run these boxes. Dockerfiles are like blueprints for such boxes, which can be built into a copyable image. If you take a copy of such an image and run it, you have a container running the software. *Note: this is all a bit of an oversimplification – for example, containers share a kernel with the host system and are thus not completely self-contained.*
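As a sketch, a Dockerfile (the blueprint for such a box) for a hypothetical Python application might look like this – the file and package names here are made up for illustration:

```dockerfile
# Blueprint for a "box": start from a known base system...
FROM debian:bookworm-slim

# ...install the dependencies the software needs, and nothing more...
RUN apt-get update && \
    apt-get install -y --no-install-recommends python3 && \
    rm -rf /var/lib/apt/lists/*

# ...copy the software itself into the box...
COPY app.py /srv/app.py

# ...and say what should run when a container is started from this image.
CMD ["python3", "/srv/app.py"]
```

Building this file produces the copyable image (`docker build -t myapp .`), and running the image (`docker run myapp`) gives you a container.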

Containers are a de-facto standard for making software widely available. A main advantage is that the developer(s) have a lot of control over variables that are normally specific to each linux distribution, which reduces the work required to make sure the software runs correctly (because there are fewer variables involved). They do this by writing a Dockerfile (and sometimes a compose file, but we'll get to that) and often by putting a ready-to-go image on a service like Docker Hub. This is good for you, because you don't need to deal with the complexities of maintenance, and if you want to run multiple services, you won't need to worry about conflicts due to different dependencies – you can just pull the image and run it.

Compose and Docker Swarm

At least, that's how it works for simple, self-contained software. However, a core principle of software development over the last five decades has been re-use: software builds on other software, so we don't have to keep re-inventing the wheel. For example, a developer may just want to use an existing relational database rather than write their own. This is typically where maintainers or operators come in again: they take the software and configure it to use the correct database, the correct mail server, the right backup system, and so on. That kind of defeats the purpose: the whole idea of containers was that they are self-contained (*again, note: this is a huge simplification and not the only reason we have containers*). This is where compose files come in – they describe how a set of different containers works together to offer a service. For example, a web server, a mail server and a database might each run in a separate container, with the compose file defining how they interact. Again: very useful for portability – and often geared towards making collaborative development easier.
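A compose file for a web-server-plus-database setup might look roughly like this – the image names and password are placeholders for illustration, not a real coop-cloud recipe:

```yaml
version: "3.8"
services:
  web:
    image: example/webapp:latest        # placeholder image name
    ports:
      - "8080:80"                       # make the web server reachable from outside
    environment:
      DB_HOST: db                       # containers reach each other by service name
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: changeme       # placeholder – use a real secret in practice
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:                              # named volume so the data survives restarts
```

Each service gets its own container, and the file defines how they are wired together.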

However, for real services, additional configuration is often necessary – adding domain names, setting passwords, choosing which features are enabled, etc. Docker Swarm lets people use compose files to design configurations and services. Configs, services and secrets are used in combination to build a complete service. Docker Compose can also do a lot of this, but Docker Swarm offers additional features, such as health checks and running replicated services across multiple machines (though it is not quite as advanced as Kubernetes, which lets you organize complete clusters of systems – that's beyond the scope of this text). Conveniently for us, Docker Swarm also makes it easier to manage configurations and secrets. However, using Docker Swarm from scratch can be quite difficult and requires a fair bit of Docker experience.
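To give a feel for what this looks like in practice, here is a rough sketch of deploying a stack on Docker Swarm – the stack name, secret name and file name are made up, and all of these commands need a running Docker daemon:

```shell
# Turn this machine into a (single-node) swarm
docker swarm init

# Store a password as a swarm secret instead of writing it into the compose file
printf 'changeme' | docker secret create db_password -

# Deploy the set of services described in a compose file as a "stack"
docker stack deploy -c compose.yml mysite

# Check that the services came up, and how many replicas are running
docker service ls
```

This is the kind of command juggling that abra (below) wraps up for you.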

Abra: Recipes, Operators and Maintainers

The nice folks at coop-cloud have thought of a solution to this problem: abra. Abra is a fairly small piece of software that gives you a friendly command line interface for managing your services without remembering lots of docker commands. It does this by introducing recipes: standardized, re-usable configurations. Whenever you run something on your cloud, you'll basically follow the same pattern: pick a recipe, create an app from it, configure it, and deploy it.
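The pattern looks roughly like this – treat these commands as a sketch and check the abra documentation for the exact invocations; `cloud.example.com` is a made-up domain:

```shell
abra recipe ls                      # 1. find a recipe for the software you want
abra app new nextcloud              # 2. create an app from that recipe
abra app config cloud.example.com   # 3. adjust the app's configuration
abra app deploy cloud.example.com   # 4. deploy it to your server
```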

If there is a recipe, it can really be this easy (and if you have a bit of computer experience, you’ll probably find the tutorial quite easy to follow). This blog, for example, was set up through these steps.