
# Mini Cloud Architecture

A high-availability TODO application built with Vagrant, HAProxy, Node.js, and a MariaDB Galera Cluster.

## Architecture

```mermaid
graph TB
    subgraph "Host Machine"
        Browser["Browser<br/>:8080"]
        Stats["HAProxy Stats<br/>:8404"]
    end
    subgraph "Frontend Network - 192.168.50.0/24"
        LB["Load Balancer<br/>192.168.50.10<br/>HAProxy"]
        APP1["App Server 1<br/>192.168.50.11<br/>Node.js + garbd"]
        APP2["App Server 2<br/>192.168.50.12<br/>Node.js + garbd"]
    end
    subgraph "Backend Network - 192.168.60.0/24"
        VIP["Virtual IP<br/>192.168.60.30<br/>Keepalived"]
        DB1["Database 1<br/>192.168.60.21<br/>MariaDB Galera"]
        DB2["Database 2<br/>192.168.60.22<br/>MariaDB Galera"]
    end
    Browser --> LB
    Stats -.-> LB
    LB -->|Round Robin| APP1
    LB -->|Round Robin| APP2
    APP1 <-->|rsync<br/>File Sync| APP2
    APP1 --> VIP
    APP2 --> VIP
    VIP --> DB1
    VIP --> DB2
    DB1 <-->|Multi-Master<br/>Replication| DB2
```

**Key Features:**

- **Load Balancing:** HAProxy distributes traffic across the 2 app servers (a config sketch follows this list)
- **Database HA:** Galera multi-master cluster with VIP failover (Keepalived)
- **File Sync:** real-time bidirectional rsync + inotify between the app servers
- **Arbitrators:** 2 garbd instances on the app servers provide quorum (4-node voting)
- **Networks:** separated frontend (50.x), backend (60.x), and management (70.x)
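
As a rough illustration of the round-robin setup, a minimal HAProxy configuration might look like the sketch below. This is **not** the repo's actual `haproxy.cfg`: the frontend/backend names, the Node.js app port (`3000`), and the stats settings are assumptions.

```bash
# Hypothetical provisioning snippet (section names and app port 3000 are
# assumed, not taken from this repo): write a minimal haproxy.cfg and restart.
sudo tee /etc/haproxy/haproxy.cfg >/dev/null <<'EOF'
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend todo_front
    bind *:80
    default_backend todo_apps

backend todo_apps
    balance roundrobin
    server app1 192.168.50.11:3000 check
    server app2 192.168.50.12:3000 check

listen stats
    bind *:8404
    stats enable
    stats uri /
EOF
sudo systemctl restart haproxy
```

The `check` keyword enables HAProxy's health checks, which is what lets the app-failover test further below keep serving traffic from the surviving server.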

## VMs

| VM   | Role                                       | RAM     | Frontend IP   | Backend IP    |
|------|--------------------------------------------|---------|---------------|---------------|
| lb   | HAProxy + UFW                              | 512 MB  | 192.168.50.10 | -             |
| app1 | Node.js + garbd + inotify-tools + UFW      | 512 MB  | 192.168.50.11 | 192.168.60.11 |
| app2 | Node.js + garbd + inotify-tools + UFW      | 512 MB  | 192.168.50.12 | 192.168.60.12 |
| db1  | MariaDB Galera + Keepalived + rsync + UFW  | 1024 MB | -             | 192.168.60.21 |
| db2  | MariaDB Galera + Keepalived + rsync + UFW  | 1024 MB | -             | 192.168.60.22 |

## Quick Start

**Prerequisites:** VirtualBox 6.1+, Vagrant 2.2+, 4 GB RAM, 10 GB disk

Start the VMs in the following order to ensure the Galera cluster reaches quorum:

```bash
# Start the DB VMs
vagrant up db1 db2 --provision

# Start the app servers
vagrant up app1 app2 --provision

# Start the load balancer
vagrant up lb --provision

# NOTE: --provision is needed the first time.
```
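
If both database VMs are ever halted together, the cluster loses quorum and MariaDB will refuse a normal start. The sketch below is the generic Galera recovery procedure, not something specific to this repo's provisioning:

```bash
# Bootstrap the node whose state is most recent
# (its grastate.dat should show safe_to_bootstrap: 1)
vagrant ssh db1 -c "sudo cat /var/lib/mysql/grastate.dat"
vagrant ssh db1 -c "sudo galera_new_cluster"

# Start the second node normally; it will sync from db1
vagrant ssh db2 -c "sudo systemctl start mariadb"
```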

Check status:

```bash
vagrant status
```

Access services from the host:

- TODO app: http://localhost:8080
- HAProxy stats: http://localhost:8404

Common tasks:

```bash
# SSH into a VM
vagrant ssh app1

# Stop all VMs
vagrant halt

# Destroy all VMs
vagrant destroy -f
```

## Common Commands

### Monitoring

```bash
# Service status
vagrant ssh app1 -c "sudo systemctl status todo-app"
vagrant ssh db1 -c "sudo systemctl status mariadb"

# Application logs
vagrant ssh app1 -c "tail -f /home/vagrant/app.log"

# Cluster size
vagrant ssh db1 -c "mysql -u root -e 'SHOW STATUS LIKE \"wsrep_cluster_size\";'"

# VIP location
vagrant ssh db1 -c "ip addr show eth1 | grep 192.168.60.30"

# File sync status
vagrant ssh app1 -c "sudo systemctl status upload-sync"
vagrant ssh app1 -c "sudo tail -f /var/log/upload-sync.log"
```
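
For a one-shot overview, the commands above can be wrapped in a small host-side script. This is just a convenience sketch built from the same commands, not a script shipped with the repo:

```bash
#!/usr/bin/env bash
# Hypothetical helper: print a health summary for all VMs from the host.
set -u

for vm in app1 app2; do
  echo "== $vm: todo-app =="
  vagrant ssh "$vm" -c "systemctl is-active todo-app" 2>/dev/null
done

for vm in db1 db2; do
  echo "== $vm: mariadb + cluster size =="
  vagrant ssh "$vm" -c "systemctl is-active mariadb" 2>/dev/null
  vagrant ssh "$vm" -c "mysql -u root -e 'SHOW STATUS LIKE \"wsrep_cluster_size\";'" 2>/dev/null
done

echo "== VIP location (192.168.60.30) =="
for vm in db1 db2; do
  vagrant ssh "$vm" -c "ip addr show eth1 | grep -q 192.168.60.30 && echo $vm holds the VIP" 2>/dev/null
done
```

With both database nodes and both garbd arbitrators up, `wsrep_cluster_size` should report 4.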

### Database

```bash
# Access MySQL
vagrant ssh db1 -c "mysql -u root"

# Query todos
vagrant ssh db1 -c "mysql -u root -e 'SELECT * FROM todo_app.todos;'"
```

### Application Updates

The `./app` folder syncs automatically. After editing `server.js`, restart the service on both app servers:

```bash
vagrant ssh app1 -c "sudo systemctl restart todo-app"
vagrant ssh app2 -c "sudo systemctl restart todo-app"
```

## High Availability Testing

**App Failover:**

```bash
vagrant ssh app1 -c "sudo systemctl stop todo-app"
curl http://localhost:8080   # still works via app2
vagrant ssh app1 -c "sudo systemctl start todo-app"
```
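
To watch the failover as it happens, run a request loop in a second terminal before stopping the service. The loop below just polls the root URL, since the README doesn't document a dedicated health endpoint:

```bash
# Poll once per second; HTTP 200s should keep coming from app2
# while app1's todo-app service is down.
while true; do
  curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080
  sleep 1
done
```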

**Database Failover:**

```bash
vagrant ssh db1 -c "sudo systemctl stop mariadb"
curl http://localhost:8080   # still works via db2 (VIP fails over)
vagrant ssh db1 -c "sudo systemctl start mariadb"
```
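
To confirm the Keepalived failover, check where the VIP landed (same command as in Monitoring, pointed at db2):

```bash
# While db1's MariaDB is down, the VIP should appear on db2
vagrant ssh db2 -c "ip addr show eth1 | grep 192.168.60.30"
```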

**File Sync:**

```bash
# Upload an image via the web UI, then verify it exists on both servers:
vagrant ssh app1 -c "ls /home/vagrant/app/public/uploads/"
vagrant ssh app2 -c "ls /home/vagrant/app/public/uploads/"
```
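
The `upload-sync` service is presumably an inotify + rsync loop along the lines of the sketch below. This is a hedged reconstruction, not the repo's actual script: the paths match the README, but the peer address, SSH setup, and rsync flags are assumptions. Running one copy on each app server, each pointed at the other, is what makes the sync bidirectional:

```bash
#!/usr/bin/env bash
# Hypothetical sketch of the upload-sync loop as run on app1.
# A mirrored copy on app2 would set PEER to app1's address.
UPLOADS=/home/vagrant/app/public/uploads/
PEER=192.168.50.12   # app2 (frontend IP assumed; the backend IP could be used instead)

inotifywait -m -r -e create,modify,delete,move --format '%w%f' "$UPLOADS" |
while read -r changed; do
  echo "$(date -Is) change detected: $changed" >> /var/log/upload-sync.log
  # -u (update) skips files that are newer on the peer, so the two
  # directions don't overwrite each other's fresh uploads
  rsync -azu "$UPLOADS" "vagrant@${PEER}:${UPLOADS}"
done
```

Deletions aren't propagated in this sketch (no `--delete`), which is the safer default for a two-way sync.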
