
I have a very specific situation where I need one machine to serve a large number of MongoDB databases (10k+), and every user should be able to connect to it directly. Our machine is quite powerful, and it ran fine for a while, until a few days ago, when it started causing problems.

At some point users can't log in anymore, and I get this in the mongos logs:

2016-08-19T18:08:16.667+0000 I NETWORK [mongosMain] pthread_create failed: errno:11 Resource temporarily unavailable 

I've tried changing most of the relevant parameters, both on the MongoDB side and on the OS side, but no luck:

net.netfilter.nf_conntrack_max = 524288
fs.file-max = 128000
kernel.pid_max = 288000

/etc/security/limits.d/90-nproc.conf has:

* soft nproc 128000
* hard nproc 128000

/etc/init/mongos.conf has:

limit fsize unlimited unlimited
limit cpu unlimited unlimited
limit as unlimited unlimited
limit nofile 512000 512000
limit rss unlimited unlimited
limit nproc unlimited unlimited
limit memlock unlimited unlimited

but still no luck.
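
Since pthread_create failing with errno 11 (EAGAIN) usually points at a thread/process ceiling rather than an open-file one, a minimal sanity check here (assuming mongos is running and you can read /proc as root) is to look at the limits the running process actually inherited:

pgrep mongos                          # find the mongos PID
cat /proc/$(pgrep -o mongos)/limits   # limits the kernel actually applied to the process
sysctl kernel.threads-max             # system-wide ceiling on threads
sysctl kernel.pid_max                 # ceiling on PIDs, which threads also consume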

Is there any way for me to handle 100K+ connections?

Thank you in advance.

2 Answers


I ended up with these settings, which appear to solve the problem, though they create a new one: the machine now supports over 100K connections, but it eventually consumes so much RAM that it isn't worth it. We ended up adding more servers to solve the problem definitively.

Just in case anyone needs it:

net.netfilter.nf_conntrack_max = 524288
net.netfilter.nf_conntrack_tcp_timeout_established = 600
fs.file-max = 524288
kernel.pid_max = 524288
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 1
net.ipv4.tcp_tw_recycle = 0
vm.max_map_count = 524288
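
In case it is useful, a minimal sketch of how these values can be applied at runtime and persisted across reboots (the drop-in file name below is just an example):

sysctl -w net.netfilter.nf_conntrack_max=524288   # apply a single value immediately
cat >> /etc/sysctl.d/90-mongos-tuning.conf <<'EOF'
net.netfilter.nf_conntrack_max = 524288
net.netfilter.nf_conntrack_tcp_timeout_established = 600
fs.file-max = 524288
kernel.pid_max = 524288
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 1
net.ipv4.tcp_tw_recycle = 0
vm.max_map_count = 524288
EOF
sysctl --system                                   # reload settings from /etc/sysctl.d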

Hope it helps someone in the future.


Hi, it looks like your Linux server is running into resource limits.

There is a nice article about tuning Linux to accept a high number of connections here:

Connection Tracking

The next parameter we looked at was Connection Tracking. This is a side effect of using iptables. Since iptables needs to allow two-way communication between established HTTP and ssh connections, it needs to keep track of which connections are established, and it puts these into a connection tracking table. This table grows. And grows. And grows.

You can see the current size of this table using sysctl net.netfilter.nf_conntrack_count and its limit using sysctl net.nf_conntrack_max. If count crosses max, your linux system will stop accepting new TCP connections and you’ll never know about this. The only indication that this has happened is a single line hidden somewhere in /var/log/syslog saying that you’re out of connection tracking entries. One line, once, when it first happens.

A better indication is if count is always very close to max. You might think, “Hey, we’ve set max exactly right.”, but you’d be wrong.

What you need to do (or at least that’s what you first think) is to increase max.

Keep in mind though, that the larger this value, the more RAM the kernel will use to keep track of these entries. RAM that could be used by your application.

We started down this path, increasing net.nf_conntrack_max, but soon we were just pushing it up every day. Connections that were getting in there were never getting out.
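
A quick way to check whether you are hitting this particular wall is to compare the live count against the ceiling, roughly like this (sysctl key names as in the article; the syslog path depends on your distro):

sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max
watch -n 1 'cat /proc/sys/net/netfilter/nf_conntrack_count'   # watch the table grow in real time
grep -i conntrack /var/log/syslog                             # the one-line warning mentioned above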

