
I am currently running a node.js app and am about to introduce socket.io to allow real-time updates (chat, in-app notifications, ...). At the moment, I am running the smallest available droplet from DigitalOcean (1 vCPU, 1 GB RAM) for my node.js server. I stress-tested the node.js app's socket.io connections using Artillery:

```yaml
config:
  target: "https://my.server.com"
  socketio:
    transports: ["websocket"] # optional, same results if I remove this
  phases:
    - duration: 600
      arrivalRate: 20
scenarios:
  - name: "A user that just connects"
    weight: 90
    engine: "socketio"
    flow:
      - get:
          url: "/"
      - think: 600
```

It can handle a couple hundred concurrent connections. After that, I start getting the following errors:

```
Errors:
  ECONNRESET: 1
  Error: xhr poll error: 12
```

When I resize my DigitalOcean droplet to 8 vCPUs and 32 GB RAM, I can get upwards of 1700 concurrent connections. No matter how much further I resize, it always sticks around that number.

My first question: is this normal behavior? Is there any way to increase this number per droplet, so I can have more concurrent connections on a single node instance? Here is my configuration:

sysctl -p

```
fs.file-max = 2097152
vm.swappiness = 10
vm.dirty_ratio = 60
vm.dirty_background_ratio = 2
net.ipv4.tcp_synack_retries = 2
net.ipv4.ip_local_port_range = 2000 65535
net.ipv4.tcp_rfc1337 = 1
net.ipv4.tcp_fin_timeout = 15
net.ipv4.tcp_keepalive_time = 300
net.ipv4.tcp_keepalive_probes = 5
net.ipv4.tcp_keepalive_intvl = 15
net.core.rmem_default = 31457280
net.core.rmem_max = 12582912
net.core.wmem_default = 31457280
net.core.wmem_max = 12582912
net.core.somaxconn = 4096
net.core.netdev_max_backlog = 65536
net.core.optmem_max = 25165824
net.ipv4.tcp_mem = 65536 131072 262144
net.ipv4.udp_mem = 65536 131072 262144
net.ipv4.tcp_rmem = 8192 87380 16777216
net.ipv4.udp_rmem_min = 16384
net.ipv4.tcp_wmem = 8192 65536 16777216
net.ipv4.udp_wmem_min = 16384
net.ipv4.tcp_max_tw_buckets = 1440000
net.ipv4.tcp_tw_reuse = 1
```

ulimit

```
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 3838
max locked memory       (kbytes, -l) 16384
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65535
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 10000000
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
```

nginx.conf

```nginx
user www-data;
worker_processes auto;
worker_rlimit_nofile 1000000;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    multi_accept on;
    use epoll;
    worker_connections 1000000;
}

http {
    ##
    # Basic Settings
    ##
    client_max_body_size 50M;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 120;
    keepalive_requests 10000;
    types_hash_max_size 2048;
    # server_tokens off;
    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##
    gzip on;
    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # Virtual Host Configs
    ##
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
```

Another question: I am thinking about scaling horizontally by spinning up more droplets, say 4, and proxying all connections to them. How would this be set up in practice? I would use Redis so that socket.io can emit to all connected clients across instances. Do I use 4 droplets with the same configuration? Do I run the same stuff on all 4 of them, i.e. should I deploy the same server.js app on all 4 droplets? Any advice is welcome.

1 Answer

I can't really answer your first question, but I can try my best on your second.

If you're setting up load balancing, yes: you deploy the same server.js app on every droplet and put a load balancer in front that distributes incoming connections across them. I don't know much about Redis, but found this: https://redis.io/topics/cluster-tutorial. I hope this helps.
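To make events emitted on one droplet reach clients connected to the others, the usual approach is socket.io's Redis adapter: each instance publishes its emits to a shared Redis, and the other instances relay them to their own connected clients. Here is a minimal sketch, assuming socket.io v4 with the `@socket.io/redis-adapter` and `redis` npm packages, and a Redis instance reachable by every droplet (the URL and port below are placeholders):

```javascript
// Sketch only: every droplet runs this same server.js.
// Assumes socket.io v4, @socket.io/redis-adapter, and node-redis v4.
const { createServer } = require("http");
const { Server } = require("socket.io");
const { createAdapter } = require("@socket.io/redis-adapter");
const { createClient } = require("redis");

async function main() {
  const httpServer = createServer();
  const io = new Server(httpServer);

  // Each instance needs two Redis connections: one to publish, one to subscribe.
  const pubClient = createClient({ url: "redis://localhost:6379" }); // placeholder URL
  const subClient = pubClient.duplicate();
  await Promise.all([pubClient.connect(), subClient.connect()]);

  io.adapter(createAdapter(pubClient, subClient));

  io.on("connection", (socket) => {
    // With the adapter in place, io.emit() reaches clients on every droplet,
    // not just the ones connected to this instance.
    socket.on("chat", (msg) => io.emit("chat", msg));
  });

  httpServer.listen(3000);
}

main();
```

One caveat: if clients are allowed to fall back to HTTP long-polling, the load balancer also needs sticky sessions (for example nginx's `ip_hash` directive in the `upstream` block) so that all requests of one polling session hit the same droplet. With `transports: ["websocket"]` only, as in your Artillery config, stickiness is not required.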
