JamesAdir 9 days ago

Sorry for the noob question, but how can Docker help remediate the situation? I'm currently learning about DevOps.

danbreuer 8 days ago

It can't, at least not easily; Docker should not be naively treated as a security solution. It's very easy to misconfigure:

- The Docker daemon runs as root: any user in the docker group effectively also has sudo (--privileged)

- Ports exposed by Docker punch through the firewall

- In general, you can break the security boundary towards root (not your user!) by mounting the wrong things, setting the wrong flags etc.
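That first point is easy to demonstrate. If your user is in the docker group, something like this one-liner (untested sketch; needs a running Docker daemon, alpine is just an example image) gives you a root shell on the host's filesystem:

```shell
# Bind-mount the host's root filesystem into a container and chroot into it:
docker run --rm -it -v /:/host alpine chroot /host /bin/sh
# You are now effectively root on the host: free to read /etc/shadow,
# drop SSH keys, edit sudoers, etc.
```

Which is why docker-group membership should be treated as equivalent to passwordless sudo.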

What Docker primarily gives you is a stupid (good!) solution for having a reproducible, re-settable environment. But containers (read: magic isolated box) are not really a good tool to reason about security in Linux imo.

If you are a beginner, instead make sure you don't run services as the sudo-capable/root user as a first step. Then, I would recommend you look into Systemd services: you can configure all the Linux sandboxing features Docker uses and more. This composes well with Podman, which gives you a reproducible environment (drop-in replacement for Docker) but contained to an unprivileged user.
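To sketch the systemd route (the directive names are real systemd options; the service name, user handling and binary path are made up for illustration):

```ini
# /etc/systemd/system/myapp.service -- hypothetical example
[Unit]
Description=Example sandboxed service

[Service]
DynamicUser=yes           # systemd allocates a throwaway unprivileged user
NoNewPrivileges=yes       # block setuid/setcap privilege escalation
ProtectSystem=strict      # mount /usr, /boot, /etc read-only for the service
ProtectHome=yes           # hide home directories
PrivateTmp=yes            # private /tmp namespace
PrivateDevices=yes        # no access to physical device nodes
ExecStart=/usr/local/bin/myapp

[Install]
WantedBy=multi-user.target
```

`man systemd.exec` lists many more of these, and `systemd-analyze security myapp.service` scores how sandboxed a unit actually is.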

fugue88 8 days ago

I agree with what you wrote, and would add that your service's executables and scripts should also not be owned by the user they run as.

It's unfortunately very common to install, for example, a project as the "ubuntu" user and also run it as the "ubuntu" user. But this arrangement effectively turns any kind of file-overwrite vulnerability into a remote-execution vulnerability.

Owning executables as root:root, perms 0755, and running as a separate unprivileged user, is a standard approach.
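A sketch of that layout in shell (the paths under /tmp/demo are placeholders; in real use you'd run install via sudo so the files end up owned root:root):

```shell
set -eu
mkdir -p /tmp/demo/bin
printf '#!/bin/sh\necho ok\n' > /tmp/demo/myservice.sh
# install(1) copies the file and sets the mode in one step; run via sudo
# in real deployments so ownership defaults to root:root.
install -m 0755 /tmp/demo/myservice.sh /tmp/demo/bin/myservice
stat -c '%a' /tmp/demo/bin/myservice   # → 755
```

The service user can then execute the binary, but a file-overwrite bug in the service can no longer replace it.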

smnc 8 days ago

> - Ports exposed by Docker punch through the firewall

I've been using ufw-docker [1] to force ufw and Docker to cooperate. Without it, Docker ports really do get exposed to the Internet. As far as I can tell, it does its job correctly. Is there another problem I'm not aware of?

[1] https://github.com/chaifeng/ufw-docker
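(For what it's worth, the workaround I'd compare it against is publishing ports bound to loopback only, which needs no extra tooling because Docker's iptables rules then never open the port externally in the first place. Untested sketch; needs a daemon, nginx is just an example image:

```shell
# -p HOST_IP:HOST_PORT:CONTAINER_PORT -- binding to 127.0.0.1 keeps the
# published port off external interfaces entirely.
docker run -d --name web -p 127.0.0.1:8080:80 nginx
curl -s http://127.0.0.1:8080/ >/dev/null && echo "reachable locally only"
```

Then only your reverse proxy or an SSH tunnel listens publicly.)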

msgodel 8 days ago

Docker keeps well behaved programs well behaved. You can escape in one line of shell.

edoceo 8 days ago

How? Like, if I have a Debian-Slim container running, is it possible to "break out" onto the host?

msgodel 8 days ago

Yup, that's trivially easy if you have permissions to use mknod and mount. (And if the filesystem namespace looks like it normally does, all you need is mount.)
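Roughly like this, assuming the container kept CAP_MKNOD and CAP_SYS_ADMIN (e.g. it was started with --privileged; the device major/minor numbers below are illustrative, not universal):

```shell
# Inside the container:
mknod /dev/sda1 b 8 1     # recreate the host's disk as a device node
mount /dev/sda1 /mnt      # mount the host's root filesystem
chroot /mnt /bin/sh       # now you're operating directly on the host fs
```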

Docker is for organizing things for yourself, just like directories are. If you want actual isolation you have to take extra steps.

EDIT: and I feel like I should add that those extra steps are exactly what most server software does automatically when it chroots itself. Again, Docker is really just for organizing things.

trod1234 8 days ago

For those not intimately familiar with containers (docker/podman), can you link to a brief blog post that covers this in detail for further reading? Much appreciated.

dijksterhuis 8 days ago

> Docker is for organizing things for yourself, just like directories are.

Services have the following dependencies: static data files; configuration files; executable code/binaries; library dependencies.

In days of yore, you'd need to download/install all of that ^ on each machine where "service A" needs to run. Developers would run and test "service A" on ubuntu 18.04. But production servers had to run ubuntu 16.04, because "service X" that also runs on the same server needs a library that has not been ported to 18.04 yet.

But "service A" needs a library that was never available on 16.04. Welcome to dependency hell!

Containers bundle all of those dependencies into one object that can be downloaded directly onto the host server, ready for the "service A" process to execute. Now it doesn't matter if production servers are running 16.04. Everything "service A" needs is stored inside the container blob (including some minimal ubuntu 18.04 stuff).

the magic that lets this happen -- containers re-use the host server's OS kernel. Running a new ubuntu 18.04 container does not boot a second kernel. the process for your container is just 'firewalled' off from all other processes using kernel namespaces, with cgroups [0] limiting its resource usage. containers re-use the host's kernel and start an isolated process which runs your container's services and processes (the 18.04 'OS' services and your binary/code/executable).

short/simpler version: containers share the core of the underlying operating system on the host server.
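you can see the kernel sharing directly -- the host and any container report the same kernel version, because no second kernel ever boots (the docker line is commented out because it needs a daemon; the image name is just an example):

```shell
uname -r    # kernel version on the host
# docker run --rm debian:stable-slim uname -r   # prints the same string
```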

> If you want actual isolation you have to take extra steps.

unfortunately, that kernel sharing is exactly the catch.

containers not being isolated from the host server OS can present a security risk as you can escape from the container and "do bad things to host server". [1]

In cases where that is a problem you mostly have two choices:

* use VMs instead (a completely isolated OS instance is started for each service, cannot interact with the host OS at all -- this uses a lot more memory/cpu)

* use rootless containers [2] (the container runtime and its processes run inside an unprivileged user namespace rather than as root -- escaping the container only gets you that unprivileged user's access)

[0]: https://en.wikipedia.org/wiki/Cgroups

[1]: by default the docker daemon service and all the container processes it starts run as root, which means escaping out of a container in a default docker installation is as bad as giving someone root.

[2]: https://docs.docker.com/engine/security/rootless/
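a quick way to see the rootless model in action (assumes rootless podman is already set up; alpine is just an example image):

```shell
podman run --rm alpine id   # claims uid=0(root) *inside* the container
cat /etc/subuid             # the unprivileged host uid range "root" really maps to
```

escape the container and you land in that subordinate uid range, not in the host's real root.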

duskwuff 8 days ago

> Yup that's trivially easy if you have permissions to use mknod and mount.

Docker containers don't have mount permissions by default.
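Easy to check (untested sketch; needs a Docker daemon, alpine is just an example image) -- a default container drops CAP_SYS_ADMIN, so mount fails until you opt in with --cap-add SYS_ADMIN or --privileged:

```shell
# In a default container, mount is denied:
docker run --rm alpine sh -c 'mount -t tmpfs none /mnt' \
  || echo "mount blocked, as expected"
```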

whyever 8 days ago

Docker is not really a security boundary (unless you use something like gVisor), so it's a bit of a red herring here.

The idea is to make your app immutable and store all state in the DB. Then, with every deployment, you throw away the VM running the old version of your app and replace it with a new VM running the new version. If the VM running the old app somehow got compromised, the new VM will (hopefully) start out clean. In this regard, this approach is less vulnerable than just reusing the old VM.
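The loop looks roughly like this (every command here is a made-up placeholder for whatever your cloud or orchestrator actually provides; the point is the shape, not the tool):

```shell
# Build an image for the new release, boot a fresh VM from it,
# shift traffic over, then destroy the old VM instead of patching it.
new_vm=$(cloud vm-create --image "app-v2")   # hypothetical CLI
cloud lb-attach "$new_vm"
cloud lb-detach "$old_vm"
cloud vm-destroy "$old_vm"   # any compromise of the old VM dies with it
```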

cookiengineer 9 days ago

Containers allow separation of access rights, because pwning a single program/service running on the host system no longer gives an attacker access to the whole host.

Containers have essentially 3 advantages:

- Restart the containers after they got pwned, takes less than a second to get your business up and running again.

- Separation of concerns: database, reverse proxy, and web service run in separate containers to spread the risk, meaning that an attacker now has to successfully exploit X of the containers to have the same kind of capabilities.

- Updates in containers are much easier to deploy than on host systems (or VPSes).

imglorp 8 days ago

> Separation of concerns

Sorta: yes the container is immutable and can be restarted, but when it does, it has the same privs and creds to phone up the same DB again or mount the same filesystem again. I'd argue touching the data is always the problem you're concerned about. If you can get an exec in that container you can own its data.

neom 8 days ago

Why do you think ISOs never really took off? I feel like they solve so many issues but only ever see folks reach for containers.

diggan 8 days ago

Do you mean VMs? ISO is a file format, commonly used to ship installer/disk images for VMs and physical machines.

For VMs, they did take off and essentially the entire cloud ecosystem runs on mostly VMs behind the scenes for VPS and similar hosting.

It's true though that it seems more popular for developers to reach for containers when they need to think about deployments, particularly Docker containers. But VMs are still widely in use and deployed today.

neom 8 days ago

yyeaaah, i built a cloud. :) I love VMs. I'm a disciple of Alex Polvi. Let's call it an "Immutable Application VM" stack. Each application service (or a logical group of application services) is packaged directly into an immutable VM image, and the orchestration manages these VMs directly. No separate container runtime or container orchestration layer on top of the VM. So you have an Immutable, Bootable System Image, but you would use KVM plus .iso plus orchestration tech. Basically, why has nobody built a cloud on the cloud lol??

(I helped build digitalocean from zero to pre-IPO, so I'm verrry rusty; this all might be nonsense/wrong think, and happy to be told as much! :))

mjburgess 8 days ago

Just thinking about this from a proxmox pov -- applying this advice, do you see an issue with then saying: take a copy of all "final" VMs, delete the VM and clone the copy?

And, either way, do you have a thought on whether you'd still prefer a docker approach?

I have some on-prem "private cloud"-style servers with proxmox, and am just curious about thinking through this advice.

guappa 8 days ago

There's already unix permissions and regular namespaces. Docker is very hard to secure.

calgoo 9 days ago

Not OP, but I'm assuming it's because of the immutability of the containers, where you can redeploy from a prebuilt image very quickly. There is nothing that says you can't do the same with servers/VMs, however the deployment methodology for docker is a lot quicker (in most cases).

Edit: I'm aware it's not truly immutable (read-only), but you can reset your environment very easily, and patching also becomes easier.

ahoka 8 days ago

It can't. Also there's nothing inherently wrong with ssh password auth.

dmos62 8 days ago

You might want to back those statements up.

danbreuer 8 days ago

Not parent, but see my sibling comment re: Docker. The issue is imo that Docker is very easy to misconfigure and gives you the wrong mental model of how security on Linux works.

On SSH password auth: it's secure if you use a long, random password, not reused elsewhere, for every user. But it is also very easy to not do these things. SSH keys/certs are just more convenient imo.
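For reference, turning password auth off in OpenSSH is two sshd_config lines (these are real OpenSSH options; reload sshd afterwards):

```
# /etc/ssh/sshd_config
PasswordAuthentication no
KbdInteractiveAuthentication no
```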

blueflow 8 days ago

Using docker does not help in this specific case - if the attackers came via ssh, they will have root access as before, and if they come in through the application, they still control your application inside the container and can make it serve what they want.

For ssh, the problem does not lie within password auth itself, but with weak passwords. A good password is more secure than a keypair on a machine whose files you can't keep private.