commandersaki 3 days ago

Video about it here: https://developer.apple.com/videos/play/wwdc2025/346/

Looks like each container gets its own lightweight Linux VM.

Can take it for a spin by downloading the container tool from here: https://github.com/apple/container/releases (needs macOS 26)

OJFord 2 days ago

The submission is about https://github.com/apple/containerization, not https://github.com/apple/container.

The former is for apps to ship with container sidecars (and cooler news IMO); the latter is 'I am a developer and I want to `docker run ...`'.

(Oh, and container has a submission here: https://news.ycombinator.com/item?id=44229239)

badc0ffee 2 days ago

The former is the framework enabling Linux containers on lightweight VMs and the latter is a tool using that framework.

WhyNotHugo 2 days ago

> Looks like each container gets its own lightweight Linux VM.

That sounds pretty heavyweight. A project with 12 containers will run 12 kernels instead of 1?

Curious to see metrics on this approach.

haiku2077 2 days ago

This is the approach used by Kata Containers/Firecracker. It's not much heavier than the shared-kernel approach, but has significantly better security: a bug in the container runtime doesn't immediately break the separation between containers.

The performance overhead of the VM is minimal; the main tradeoff is container startup time.

Yeroc 2 days ago

I wonder why Apple cared so much about the security aspect as to take the isolated-VM approach over a shared VM. It seems unlikely that Apple hardware will be used to host containerized applications in production, where this would be more of a concern. On the other hand, it's more likely to be used for development purposes, where the memory overhead could be the bigger concern.

ghostly_s 2 days ago

> Seems unlikely that Apple hardware is going to be used to host containerized applications in production

I imagine this is certainly happening already inside Apple datacenters.

haiku2077 2 days ago

One of the use cases for this feature is for macOS desktop apps to run Linux sidecars, so this needed to be secure for end user devices.

surajrmal 2 days ago

RAM overhead can be nontrivial. Each kernel has its own page cache.

haiku2077 2 days ago

On a non-Linux OS that should be offset by being able to allocate RAM to each container separately, instead of the current approach in Docker Desktop, where a static slice of your system memory is always allocated to the Docker VM.

fpoling 2 days ago

This is a feature targeting developers, or perhaps apps running on end-user machines, where page cache sharing between applications or containers doesn't typically yield much RAM savings.

The Linux kernel's own overhead, while non-trivial, is still very manageable in those settings. The AWS Nitro stripped-down VM kernel is about 40 MB; I suppose Apple's solution will be similar.
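Taking the ~40 MB Nitro figure above as a rough stand-in, a back-of-the-envelope sketch of the per-kernel memory cost (the function and numbers are illustrative, not measurements of Apple's implementation):

```python
# Back-of-the-envelope memory overhead: one kernel per container
# versus a single shared kernel. The ~40 MB per-kernel figure is
# the AWS Nitro estimate quoted above; real numbers will vary.
KERNEL_MB = 40

def kernel_overhead_mb(containers: int, per_container_vm: bool) -> int:
    """Total kernel memory for a given number of containers."""
    kernels = containers if per_container_vm else 1
    return kernels * KERNEL_MB

# The 12-container project from upthread:
shared = kernel_overhead_mb(12, per_container_vm=False)   # 40 MB
isolated = kernel_overhead_mb(12, per_container_vm=True)  # 480 MB
print(f"shared kernel: {shared} MB, per-container VMs: {isolated} MB")
```

So even at a dozen containers, the extra kernels cost on the order of a few hundred MB — noticeable, but small next to a typically multi-GB static Docker Desktop VM allocation.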

arijun 2 days ago

Is that not the premise of docker?

rtkwe 2 days ago

No, it's the opposite: the entire premise of Docker over VMs is that you run one instance of all the shared OS pieces, so it takes fewer resources than a VM, and the portable images are smaller because they don't contain an OS image.

dwaite 2 days ago

The premise is containerization, not necessarily particular resource usage by the host running the containers.

For hosted services, you want to choose - is it worth running a single kernel with a lot of containers for the cost savings from shared resources, or isolate them by making them different VMs. There are certainly products for containers which lean towards the latter, at least by default.

For development it matters a lot less, as long as the sum resources of containers you are planning to run don't overload the system.

rtkwe 1 day ago

The VM option is relatively new; the original idea was to provide that isolation without the weight of a VM. Also, I'm not sure that Docker didn't coin the word containerization. I've always associated it specifically with the kind of packaging Docker provides, and don't remember it being mentioned around VMs.

pjmlp 2 days ago

With Windows containers you can choose whether the kernel is shared across containers or not; it is only in Linux containers mode that the kernel gets shared.

WhyNotHugo 2 days ago

Nope, docker uses the host's kernel, so there are zero additional kernels.

On non-Linux, you obviously need an additional kernel running (the Linux kernel). In this case, there are N additional kernels running.

quietbritishjim 2 days ago

> On non-Linux, you obviously need an additional kernel running (the Linux kernel).

That seems to be true in practice, but I don't think it's obviously true. As WSL1 shows, it's possible to make an emulation layer for Linux syscalls on top of quite a different operating system.
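In the spirit of that idea, here's a toy Python sketch (purely illustrative — nothing like WSL1's actual in-kernel implementation) of what a syscall translation layer looks like conceptually: a dispatch table mapping Linux syscall numbers to host-side handlers that reimplement their semantics.

```python
# Toy syscall-emulation sketch: Linux syscall numbers dispatched to
# handlers that reimplement the semantics on the host OS.
# The numbers are the real x86-64 Linux values; the handlers are
# trivial stand-ins for what a real layer (like WSL1) must do.
import os

def sys_getpid():
    # Trivial case: maps directly onto a host primitive.
    return os.getpid()

def sys_write(fd, buf):
    # A real layer would also translate fd semantics, flags, errno, etc.
    return os.write(fd, buf)

LINUX_SYSCALLS = {
    39: sys_getpid,  # getpid(2)
    1: sys_write,    # write(2)
}

def emulate(nr, *args):
    """Dispatch a 'Linux' syscall number to its host-side handler."""
    handler = LINUX_SYSCALLS.get(nr)
    if handler is None:
        raise NotImplementedError(f"syscall {nr} not emulated")
    return handler(*args)

print(emulate(39))  # same as os.getpid()
```

The hard part, as the replies below note, isn't the dispatch — it's faithfully reproducing the hundreds of syscalls (plus /proc, namespaces, cgroups, etc.) that real Linux software depends on.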

capitol_ 2 days ago

I would draw the opposite conclusion from the WSL1 attempt.

It was a strategy that failed in practice and needed to be replaced with a vm based approach.

The Linux kernel has a huge surface area with some subtle behavior in it. There was no economical way to replicate all of that and keep it up to date in a proprietary kernel, especially as VM tech is well established and reusable.

paulryanrogers 2 days ago

WSL1 wasn't really a VM though? IIRC it was implementing syscalls over the Windows kernel.

quietbritishjim 2 days ago

Indeed, WSL1 isn't a VM. As I said, it's just:

> an emulation layer for Linux syscalls on top of quite a different operating system.

My point was that, in principle, it could be possible to implement Linux containers on another OS without using VMs.

However, as you said (and so did I), in practice no one has. Probably because it's just not worth the effort compared to just using a VM. Especially since all your containers can share a single VM, so you end up only running 2 kernels (rather than e.g. 11 for 10 containers). That's exactly how Docker on WSL2 works.

derekdb 2 days ago

gVisor has basically re-implemented most of the Linux syscall API, but only when the host is also Linux.

ongy 2 days ago

I think that's the point. You don't have to run the full kernel to run some Linux tools.

Though I don't think it ever supported Docker. And it wasn't really expected to, since the entire namespaces+cgroups stuff runs way deeper than some surface-level syscall shims.

asveikau 2 days ago

And long before WSL, *BSD was doing this with the Linux syscall abi.

lloeki 2 days ago

> On non-Linux, you obviously need an additional kernel running (the Linux kernel)

Only "obvious" for running Linux processes using Linux container facilities (cgroups)

Windows has its own native facilities allowing Windows processes to be containerised. It just so happens that in addition to that, there's WSL2 at hand to run Linux processes (containerised or not).

There is nothing preventing Apple from implementing Darwin-native facilities so that Darwin processes could be containerised. It would actually be very nice to be able to distribute/spin up arbitrary macOS environments with some minimal CLI + CLT base† and run build/test stuff without having to spawn full-blown macOS VMs.

† "base" in the BSD sense.

karel-3d 2 days ago

eh docker desktop nowadays runs VMs even on Linux

speedgoose 2 days ago

Docker Desktop is non free proprietary software that isn’t very good anyway.

detaro 2 days ago

no.

AdamN 2 days ago

I could imagine one Linux kernel running in a VM (on top of macOS) and then containers inside that host OS. So: 1 base instance (macOS), 1 hypervisor guest (Linux L0), 12 containers (using that L0 kernel).

haiku2077 2 days ago

That's how Docker Desktop for Mac works. With Apple's approach you have 12 VMs with 12 Linux kernels.

paxys 2 days ago

Also works on macOS 15, but they mentioned that some networking features will be limited.

solarexplorer 2 days ago

I would assume that "lightweight" in this case means that they share a single Linux kernel. Or that there is an emulation layer that maps the Linux Kernel API to macOS. In any case, I don't think that they are running a Linux kernel per container.

ylk 2 days ago

You don’t have to assume, the docs in the repo tell you that it does run a Linux kernel in each VM. It’s one container per VM.

solarexplorer 2 days ago

Good call, thanks for clarifying!

commandersaki 2 days ago

"Lightweight" in the sense that the VM contains one static executable that runs the container, and not a full fledged Ubuntu VM (e.g. Colima).

selkin 2 days ago

It seems to work on macOS 15 as well, with some limitations[0].

[0] https://github.com/apple/container/blob/main/docs/technical-...

zmmmmm 2 days ago

Interesting choice - doesn't that mean container-to-container integration is going to be harder, with a lot of overhead per container? I would have thought a shared VM made more sense. I wonder what attracted them to this.

pxc 2 days ago

It seems great from a security perspective, and a little bit nice from a networking perspective.

selimnairb 2 days ago

The "one IP per container" approach (instead of shared IPs) is similar to how kubernetes pods work.

mickdarling 2 days ago

I can see the decision to do it this way being related to their private secure cloud infrastructure for AI tools.

JoBrad 2 days ago

I like the security aspect. Maybe DNS works, and you can use that for communication between containers?

honkycat 2 days ago

> Looks like each container gets its own lightweight Linux VM.

We're through the looking glass here, people

musicale 18 hours ago

"Containers" now apparently means "boot a docker image as an ephemeral VM."

Which isn't such a bad idea really.

philips 2 days ago

Shoutout to Michael Crosby, the person in this video, who was instrumental in getting Open Containers (https://opencontainers.org) to v1.0. He was a steady and calm force through a very rocky process.

discohead 2 days ago

"A new report from Protocol today details that Apple has gone on a cloud computing hiring spree over the last few months... Michael Crosby, one of a handful of ex-Docker engineers to join Apple this year. Michael is who we can thank for containers as they exist today. He was the powerhouse engineer behind all of it, said a former colleague who asked to remain anonymous."

https://9to5mac.com/2020/05/11/apple-cloud-computing/

musicale 18 hours ago

We can thank the linux kernel developers for implementing namespaces and overlayfs.

And we can thank predecessor systems like BSD jails and Solaris zones, as well as Virtuozzo/OpenVZ and LXC as earlier container systems on Linux.

Docker's main improvements over LXC, as I understand it, were adding a layered, immutable image format (vs. repurposing existing VM image formats) and a "free" public image repository.

But the userspace implementation isn't exactly rocket science, which is why we periodically see HN posts of tiny systems that can run docker images.
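A toy sketch of that layered-image idea (illustrative only — real OCI images also have whiteout files for deletions, manifests, and content-addressed digests): each layer is a tarball of filesystem changes, and the rootfs is the union of the layers applied in order, with later layers winning.

```python
# Minimal sketch of a layered, immutable image: each layer is a tar
# of filesystem changes; the rootfs is built by applying them in order.
import io
import tarfile

def make_layer(files: dict[str, bytes]) -> bytes:
    """Build an in-memory tar layer from a {path: contents} dict."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for path, data in files.items():
            info = tarfile.TarInfo(name=path)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()

def apply_layers(layers: list[bytes]) -> dict[str, bytes]:
    """Union the layers: entries in later layers override earlier ones."""
    rootfs: dict[str, bytes] = {}
    for blob in layers:
        with tarfile.open(fileobj=io.BytesIO(blob)) as tar:
            for member in tar.getmembers():
                rootfs[member.name] = tar.extractfile(member).read()
    return rootfs

base = make_layer({"etc/os-release": b"ID=base\n", "bin/sh": b"#!"})
app = make_layer({"app/run": b"print('hi')", "etc/os-release": b"ID=app\n"})
rootfs = apply_layers([base, app])
assert rootfs["etc/os-release"] == b"ID=app\n"  # upper layer wins
```

Since layers are immutable and applied mechanically like this, a small runtime only needs a tar extractor plus the kernel's namespace/cgroup syscalls — which is why those tiny docker-image runners keep showing up.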

zoobab 2 days ago

"Looks like each container gets its own lightweight Linux VM."

Not a container "as such" then.

How hard is it to emulate linux system calls?

teruakohatu 2 days ago

> How hard is it to emulate linux system calls?

It’s doable but a lot more effort. Microsoft did it with WSL1 and abandoned it with WSL2.

tsimionescu 2 days ago

Note that they didn't "do it" for WSL1. They started doing it, realized it was far too much work to cover everything, and abandoned the approach in favor of VMs. It's not like WSL1 was a fully functioning Linux emulator on top of Windows; it was still very far from that, even though it could handle many common tasks.

benwad 2 days ago

I've always wondered why only Linux can do 'true' containers without VMs. Is there a good blog post or something I can read about the various technical hurdles?

NexRebular 2 days ago

> I've always wondered why only Linux can do 'true' containers without VMs.

Solaris/illumos has been able to do actual "containers" since 2004[0] and FreeBSD has had jails even before that[1].

[0] https://www.usenix.org/legacy/event/lisa04/tech/full_papers/... [1] https://papers.freebsd.org/2000/phk-jails.files/sane2000-jai...

syhol 2 days ago

Many OSes have their own (sometimes multiple) container technologies, but the ecosystem and zeitgeist revolve around OCI Linux containers.

So it's more cultural than technical. I believe you can run OCI Windows containers on Windows with no VM, although I haven't tried this myself.

bayindirh 2 days ago

BSD has been able to do BSD containers with jails for more than a decade now?

Due to the innate nature of containers, a container must run the same OS as its host, since it has no kernel of its own. Otherwise you need to go the VM route.

dwaite 2 days ago

In this context (OCI containers) that seems very inaccurate. For instance, ocijail is a two-year-old project still considered experimental.

soupbowl 2 days ago

FreeBSD has beta podman (OCI) support right now, using FreeBSD base images, not Linux. It's missing some features but coming along.

notpushkin 2 days ago

Windows can do “true” containers, too. These containers won’t run Linux images, though.

dijit 2 days ago

Can it? As far as I understood, Windows containers required Hyper-V, and the images themselves seem to contain an NT kernel.

Not that it helps them run on any Windows version other than the one they were built on, it seems.

noisem4ker 2 days ago

Source?

The following piece of documentation disagrees:

https://learn.microsoft.com/en-us/virtualization/windowscont...

> Containers build on top of the host operating system's kernel (...), and contain only apps and some lightweight operating system APIs and services that run in user mode

> You can increase the security by using Hyper-V isolation mode to isolate each container in a lightweight VM

pjmlp 2 days ago

Yes, it is based on the Windows Job Objects API.

Additionally, you can decide whether the images contain the kernel or not.

There is nothing in OS containers that specifies a golden rule for how kernel sharing takes place.

Remember, containers predate Linux.

tsimionescu 2 days ago

I'm not sure about macOS, but otherwise all major OSes today can run containers natively. However, interest in non-Linux containers is generally very, very low. You can absolutely run Kubernetes as native Windows binaries [0] in native Windows containers, but why would you?

Note that containers, by definition, rely on the host OS kernel. So a Windows container can only run Windows binaries that interact with Windows syscalls. You can't run Linux binaries in a Windows container any more than you can run them on Windows directly. You can run Word in a Windows container, but not GCC.

[0] https://learn.microsoft.com/en-us/virtualization/windowscont...

kcoddington 2 days ago

I wouldn't think there are many use cases for Windows, but I imagine supporting legacy .NET Framework apps would be a major one.

tsimionescu 2 days ago

Is there any limitation on running older .NET Framework versions on current Windows? Back when I was using it, you could have multiple versions installed at the same time, I think.

pjmlp 2 days ago

You can, but there are also companies that want to deploy different kinds of Windows software into Kubernetes clusters and so on.

Some examples would be Sitecore XP/XM, SharePoint, Dynamics deployments.

ownagefool 2 days ago

Containers are essentially just a wrapper tool for a Linux kernel feature called cgroups, with some added things such as a layered fs and the distribution method.

You can also just use cgroups with systemd.

Now, you could implement something fairly similar in each OS, but you wouldn't be able to use the vast majority of containerized software, because it's ultimately Linux software.

xrisk 2 days ago

cgroups is for controlling resource allocation (CPU, RAM, etc). What you mean is probably namespaces.

ownagefool 2 days ago

It's technically both I guess, but fair correction.

dwaite 2 days ago

Every OS can theoretically do 'true' containers without VMs - for containers which match the host platform.

You can have Windows containers running on Windows, for instance.

Containers themselves are a packaging format, and do rather little to solve the problem of e.g. running Linux-compiled executables on macOS.

anthk 2 days ago

Containers don't virtualize, just separate environments.

NexRebular 2 days ago

> How hard is it to emulate linux system calls?

FreeBSD has the Linuxulator and illumos comes with lx-zones, which allow running some native Linux binaries inside a "container". No idea why Apple didn't go for a similar option.

citrin_ru 2 days ago

FreeBSD's Linux emulation has been in development for 20 (maybe even 30) years. While Apple could throw some $$$ at getting it implemented in a couple of years, using virtualisation requires much less development time (so it's cheaper).

rcleveng 2 days ago

Apple already has the Virtualization framework and hypervisor (https://developer.apple.com/documentation/virtualization), so adding the rest of the container ecosystem seems like a natural next step.

It puts them on par with Windows, which has container support with a free option. Plus, I imagine it's a good way to pressure-test Swift as a language, to make sure it really can be the systems programming language they are betting it can and will be.

OrbStack has a great UX, so I imagine this will eat into Docker Desktop on Mac more than into OrbStack.

masklinn 2 days ago

Because that's a huge investment for something they have no reason or desire to productize.

surajrmal 2 days ago

Syscalls are just a fraction of the surface area. There are many files in many different virtual filesystems you need to implement, plus things like SELinux, eBPF, io_uring, etc. It's also a constantly shifting target. The VM API is much simpler, relatively stable, and already implemented.

Emulating Linux only makes sense on devices with constrained resources.

throwaway1482 2 days ago

> How hard is it to emulate linux system calls?

Just replace the XNU kernel with Linux already.