4 comments

  • k_bx 12 hours ago
    I use Incus and Proxmox for this; they're more mature and have quite a bit built around them. What does Containarium bring to the table compared to them?
    • hsin003 12 hours ago
      Thanks for sharing! We’re definitely aware that Incus + Proxmox are very mature and full-featured.

      Containarium is more of a "purpose-built, single-VM, SSH-first dev environment" approach:

      - Lightweight: one VM can host 50–100+ LXC containers
      - Quick provisioning: seconds instead of minutes per environment
      - Focused on SSH workflows and dev sandboxing, not full datacenter management
      - Minimal infra overhead: no GUI, no HA cluster required

      Tradeoffs we’re aware of:

      - Shared kernel (not VM-level isolation)
      - Linux-only
      - Less built-in tooling compared to Proxmox

      We designed it to *optimize for cost efficiency and rapid dev onboarding*, rather than full-featured virtualization.

      Would love to hear if you see any pitfalls with this approach compared to using Proxmox/Incus in a single-host scenario!

      • k_bx 11 hours ago
        This reads like an AI-generated reply. It repeats points that already apply to Incus/Proxmox and doesn't directly address the question.
        • hsin003 11 hours ago
          Sorry, we want to understand your use case better. Did you provision *one VM via Proxmox* and then run *multiple users via Incus* inside it?

          We’re curious how you handled provisioning, isolation, and resource limits in your setup. More importantly, what’s the maximum scale you’ve been able to push?

          • k_bx 11 hours ago
            Why would I need a VM? I just install Proxmox on a computer/server and then create as many containers as I need. No VMs at all. VM is a waste.
            • fc417fc802 11 hours ago
              A VM is more robust as a security boundary than a container is. Still not as good as independent physical hardware but certainly worthwhile.
              • k_bx 10 hours ago
                We're not talking VM vs containers. We're talking VM vs no VM at all in the base system.
                • fc417fc802 4 hours ago
                  I understand that. I'm saying that wrapping all the dev containers up inside a single VM serves to further protect the host system from the dev containers.
        • rvz 11 hours ago
          That's because it is, just like this entire project.

          In fact, it is just using the same technologies as LXC and Incus. (It is exactly LXC and Incus.)

          So really nothing special at all. Perhaps people looked at the title and rushed to the repo.

          When I saw "IMPLEMENTATION-PLAN.md" and "SECURITY-CHECKLIST.md" filled with hundreds of emojis, I immediately closed the tab, and now I'm replying to you that it is total slop.

          2026 is the year of abundant "not invented here syndrome".

          • hsin003 11 hours ago
            Containarium does indeed build on LXC/Incus and isn’t trying to reinvent the wheel. If you’ve run multi-tenant sandboxes at scale, we’d love to hear what pitfalls or limitations you’ve seen.
  • BobbyTables2 11 hours ago
    How does one run docker inside an unprivileged LXC container?

    If a developer can run Docker inside this, what stops them from mounting volumes from the host or changing namespaces?

    Is this relying on user namespaces?

    • hsin003 10 hours ago
      Good questions — yes, Containarium relies heavily on *user namespaces*. Here’s how it works:

      - We enable `security.nesting=true` on unprivileged LXC containers, so Docker can run inside (rootless).

      - *User namespace isolation* ensures that even if a user is “root” inside the container, they are mapped to an unprivileged UID on the host (e.g., UID 100000), preventing access to host files or devices.

      This setup allows developers to run Docker and do almost anything inside their sandbox, while keeping the host safe.
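
      A minimal sketch of the flow, assuming Ubuntu images and the stock `lxc` CLI ("dev-alice" is just an example container name, not something Containarium prescribes):

          # Unprivileged system container with nesting enabled,
          # so rootless Docker can run inside it:
          lxc launch ubuntu:22.04 dev-alice -c security.nesting=true

          # Container root is shifted into an unprivileged host UID range,
          # typically taken from /etc/subuid (e.g. "root:100000:65536"),
          # so UID 0 inside dev-alice is UID 100000 on the host:
          grep '^root:' /etc/subuid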

  • Weryj 12 hours ago
    I did the exact same thing for my own sandboxing, through the Proxmox API.
    • hsin003 12 hours ago
      That’s awesome — thanks for sharing!

      If you don’t mind me asking:

      - Did you use LXC containers, or full VMs for each sandbox?
      - How did you handle SSH / network isolation?
      - Any tips on making provisioning faster or keeping resources efficient?

      We’re using unprivileged LXC + SSH jump hosts on a single VM for cost efficiency. I’d love to hear what tradeoffs you found using the Proxmox API.

      • Weryj 9 hours ago
        My setup is quite purpose-built. I use Orleans as the main fabric of our codebase. But since the Orleans cluster is a 'virtual computer' in a sense, you can't rely on anything outside the runtime: you don't know which machine your code is executing on.

        So a Grain calls Proxmox with a generated SSH Key / CloudInit, then persists that to state, then deploys an Orleans client which connects to the cluster for any client side C# execution. There's lots you could do for isolated networks with the LXC setup, but my uses didn't require it.

        Proxmox handles the horizontal scaling of the hardware. Orleans handles the horizontal scaling of the codebase.

  • hsin003 12 hours ago
    Hi HN,

    We’ve been experimenting with an alternative to the “one VM per developer” model for SSH-based development environments.

    The project is called Containarium: https://github.com/FootprintAI/Containarium

    The idea is simple:

    - One cloud VM
    - Many unprivileged LXC system containers
    - Each user gets their own isolated Linux environment via SSH (ProxyJump; a config sketch follows below)
    - Persistent storage survives VM restarts
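
    To make the ProxyJump piece concrete, here's a minimal client-side ~/.ssh/config sketch (host names and addresses are made up for illustration):

        # The VM is the only public entry point; each container
        # is reached through it via OpenSSH's ProxyJump:
        Host sandbox-vm
            HostName vm.example.com
            User jump

        Host dev-alice
            HostName 10.0.3.101    # container's internal address
            User alice
            ProxyJump sandbox-vm

    After that, `ssh dev-alice` lands you directly in the container.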

    This is NOT Kubernetes, Docker app containers, or a web IDE. Each container behaves like a lightweight VM (full OS, users, SSH access).

    Why we built it: We kept seeing teams pay for dozens of mostly-idle VMs just to give people a place to SSH into. Using LXC, we can host tens or hundreds of environments on a single VM and cut infra costs significantly.

    What we’re looking for:

    - Feedback from people who’ve run multi-tenant Linux systems at scale
    - Security concerns we might be underestimating
    - Where this approach breaks down in real-world usage
    - Alternatives we should be considering (LXD, Proxmox, something else?)

    Tradeoffs we’re aware of:

    - Shared kernel (not VM-level isolation)
    - Not suitable for untrusted workloads
    - Linux-only
    - Requires infra discipline: limits, monitoring, backups (a sketch follows below)
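
    On the infra-discipline point, per-container guardrails are plain LXC config keys; a rough sketch (the container name is hypothetical):

        # Cap CPU, memory, and process count for one sandbox:
        lxc config set dev-alice limits.cpu 2
        lxc config set dev-alice limits.memory 4GiB
        lxc config set dev-alice limits.processes 2000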

    This is early-stage and open source. APIs and workflows will evolve.

    We’re not trying to “replace Kubernetes” — just trying to do one thing well: cheap, fast, SSH-based dev environments.

    Would love blunt feedback from folks who’ve been down this road before.