
I am working on a Kubernetes-based solution to host dozens of PHP websites (with future compatibility for Python/Node) using PostgreSQL as the database. The goal is to provide an environment similar to traditional hosting solutions like Plesk or Webmin but leveraging Kubernetes for scalability.

Challenges:

  • Isolated file access: Each user must manage their own application, meaning every site needs separate file storage (FTP, an S3-like solution, or equivalent) and a dedicated PostgreSQL database.
  • Database access: Users should be able to access their databases via a GUI (e.g., phpPgAdmin).
  • Cost optimization: The infrastructure should share resources where possible, since billing is based on resource quotas (e.g., avoiding unnecessary pods per site).
  • Scalability: Most sites will be low-traffic, but some may experience high demand, requiring an efficient auto-scaling strategy.
  • Minimal admin workload: The system should allow for self-administration (uploading files and managing their own DB) by users with minimal intervention from the cluster admin.

Proposed Approaches:

  1. A single namespace with Deployments for each service: An Apache/Nginx server, PHP-FPM, PostgreSQL, FTP server, and DB GUI, shared between the hosted websites. How to ensure isolated file access per user?
  2. A shared PostgreSQL cluster with multiple databases: Each site would have its own database within the same PostgreSQL instance.
  3. Persistent Volumes for shared storage: Storage shared across the web server and FTP instances. What’s the best way to manage user-specific file access in Kubernetes (FTP, SFTP, or alternatives)?
  4. Scaling strategies for uneven traffic distribution: Some sites will be low-traffic, while others will require more resources.

Has anyone implemented a similar multi-tenant setup in Kubernetes? What are the best practices for managing file access and scaling in this scenario?

I appreciate any insights from those who have tackled similar challenges in production environments.

  • That isn’t something one can answer in a simple Q&A format. This requires a full-blown consultancy, with use cases, SLOs, and SLAs being defined, and potentially compliance standards to be met. Only after all of this is clear can a reasonable solution design and software preselection take place, for which first a fitness-for-purpose test and then a PoC have to be performed. Commented Jun 10 at 19:12

1 Answer


This is only a partial answer, but it is possible to address parts of this definitively.

One customer per namespace

Sharing customers in the same namespace is going to create a lot of work for you. The namespace is the default unit of isolation in k8s. If you're not taking advantage of that by putting every customer in their own namespace, then you're stuck recreating some sort of barriers yourself. Why build all of that if namespaces already do it for you?
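As a rough sketch, per-customer isolation could look like the manifests below (names like `customer-a` and the quota numbers are placeholders, not recommendations):

```yaml
# One namespace per customer, with a quota so tenants can't starve each other.
apiVersion: v1
kind: Namespace
metadata:
  name: customer-a
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: customer-a-quota
  namespace: customer-a
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
    persistentvolumeclaims: "2"
---
# Optionally restrict ingress to traffic from within the same namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-cross-namespace
  namespace: customer-a
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}
```

With this layout, RBAC roles, quotas, and network policies are all scoped per customer instead of hand-rolled inside one shared namespace.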

Shared PostgreSQL cluster

This is ok to start, but eventually you're going to want to migrate to one PostgreSQL instance per customer. Or maybe you'll have a shared cluster for small customers and single-tenant instances for larger customers.
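On a shared cluster, the usual pattern is one database and one role per customer. As a sketch, a one-shot Job could provision that (the `shared-postgres` service name, `pg-admin` Secret, and `customer_a` names are placeholders; note the multiple `-c` flags, since `CREATE DATABASE` can't run inside one multi-statement transaction):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: provision-customer-a-db
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: psql
          image: postgres:16   # official image ships the psql client
          env:
            - name: PGPASSWORD
              valueFrom:
                secretKeyRef:
                  name: pg-admin        # placeholder admin credentials Secret
                  key: password
          command:
            - psql
            - -h
            - shared-postgres           # placeholder Service name of the cluster
            - -U
            - postgres
            - -c
            - CREATE ROLE customer_a LOGIN PASSWORD 'change-me'
            - -c
            - CREATE DATABASE customer_a OWNER customer_a
            - -c
            - REVOKE CONNECT ON DATABASE customer_a FROM PUBLIC
```

The `REVOKE CONNECT ... FROM PUBLIC` is what keeps tenants from even connecting to each other's databases on the shared instance.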

If you're not in AWS, where you can use RDS to give each client their own instance without difficulty, consider LocalStack to emulate RDS locally.

Scaling

There are lots of knobs in k8s to manage scaling. Part of the point is that some workloads burst while the rest stay idle, so you more fully utilize your capacity while maintaining an acceptable service level.
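For example, a per-site HorizontalPodAutoscaler lets a busy site burst while idle sites stay at one replica (the deployment name, namespace, and thresholds here are illustrative placeholders):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: site-a-hpa
  namespace: customer-a
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: site-a-php        # placeholder: the site's PHP-FPM deployment
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 75
```

Combined with per-namespace ResourceQuotas, this caps how much any one tenant's burst can cost you.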

FTP in k8s

If I were you I'd discourage anyone from using FTP ever again. It is very insecure, since credentials are sent in plaintext across the network.

But if you can't convince your customers to move into modern times, there are GitHub repos showing how to deploy an FTP service in Kubernetes.
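If SFTP is acceptable instead, a minimal sketch using the widely used `atmoz/sftp` image could mount the same PersistentVolumeClaim the web server uses, so uploads land directly in the site's docroot (user name, password, and claim name are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sftp
  namespace: customer-a
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sftp
  template:
    metadata:
      labels:
        app: sftp
    spec:
      containers:
        - name: sftp
          image: atmoz/sftp
          args: ["customer-a:change-me:1001"]   # user:password:uid
          ports:
            - containerPort: 22
          volumeMounts:
            - name: site-files
              mountPath: /home/customer-a/upload
      volumes:
        - name: site-files
          persistentVolumeClaim:
            claimName: site-a-files   # placeholder: PVC shared with the web pod
```

Because each customer's SFTP pod only mounts that customer's claim, file access stays isolated per tenant without any shared FTP server.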
