
In contrast to this question about running bitcoind in a low-memory environment, I have the opposite use case.

I want to run a bitcoind full node (no wallet) on a Linux server with lots of memory but not particularly fast disks. What are the maximum useful memory constraints (dbcache=, maxmempool= and total amount of page cache) that you can give to bitcoind to maximize its performance? That is, at which point does giving more memory to bitcoind start yielding diminished returns?

1 Answer


After a point, there's no reason to increase memory usage further.

  • dbcache is useful up to the point where the entire UTXO set fits in memory (4 GB+ in 2021); increasing it further does little other than lengthen the time needed to shut down the node safely (up to many minutes on a slow disk).

  • maxmempool will increase performance by keeping more transactions (already validated) ready for inclusion in a block, but only up to the point where you hold all of the transactions that miners are likely to include.

  • blockreconstructionextratxn allows the node to store more transactions which are not in the mempool but are still valid. This improves the efficiency of compact block reconstruction, since transactions evicted from your mempool may still be in other peers' mempools. Note that this setting is a number of transactions, not megabytes.

  • maxorphantx increases transaction relay efficiency by keeping copies of transactions which could not be connected to a parent (and therefore cannot yet be validated), but might become valid later when their parent arrives. This setting is also a number of transactions, not megabytes.

These are about as far as you can go just by tweaking the settings in bitcoin.conf.
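To make the tuning concrete, a high-memory bitcoin.conf might look like the sketch below. The specific numbers are illustrative assumptions, not recommendations from the answer; the UTXO set keeps growing, so revisit dbcache against its current size.

```ini
# bitcoin.conf — illustrative high-memory tuning (values are assumptions)

# Large enough to hold the full UTXO set in memory (~4 GB in 2021; check today's size).
# Going much beyond that mostly just slows down a clean shutdown.
dbcache=6000

# Mempool size in MB; past the set of transactions miners will plausibly mine,
# extra space yields little benefit.
maxmempool=1000

# Count of extra transactions (not MB) kept for compact block reconstruction.
blockreconstructionextratxn=10000

# Count of orphan transactions (not MB) kept while awaiting their parents.
maxorphantx=1000
```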


A "hot rod", "no cost considered" Bitcoin Core node would look a bit different.

The entire UTXO would be stored in a ramdisk (to prevent a cold dbcache ever occurring), with the obvious cost that unexpected shutdowns would result in a complete loss of synchronization state. As many cores as possible would be in use, the node would make connections to other nodes owned by the operator for maximum efficiency, and those nodes would be tuned to have as many high quality peers as possible.
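As a rough sketch of the ramdisk idea (paths and sizes here are assumptions, and this is deliberately fragile: any unclean shutdown loses the chainstate and forces a resync or restore):

```shell
# Stop bitcoind first, then create a tmpfs large enough for the chainstate
# (the UTXO database); the 8G size is an assumption, check your actual usage.
sudo mkdir -p /mnt/chainstate-ram
sudo mount -t tmpfs -o size=8G tmpfs /mnt/chainstate-ram

# Keep an on-disk copy, seed the ramdisk from it, and point bitcoind's
# chainstate directory at the ramdisk via a symlink.
mv ~/.bitcoin/chainstate ~/.bitcoin/chainstate.disk
cp -a ~/.bitcoin/chainstate.disk/. /mnt/chainstate-ram/
ln -s /mnt/chainstate-ram ~/.bitcoin/chainstate
```

On shutdown you would copy the ramdisk contents back to disk; without that step, the symlinked chainstate vanishes on reboot.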

You can go a long way towards an extremely performant node without unreasonable financial outlay if there's a reason to do so; it's a matter of optimizing the right parameters and removing bottlenecks that normally exist because the software is tuned for stability and correctness.

  • "After a point, there's no reason to increase memory usage further" That's precisely what my question is about. What exactly is that point? Commented Feb 20, 2021 at 5:54
  • As written in the answer, the limit for dbcache is the size of the UTXO database, which is around 4 GB today (depending on how it is stored). The others, as far as I'm aware, are not documented well enough to state sensible limits. Commented Feb 20, 2021 at 5:56
  • "blockreconstructionextratxn will allow the node to store more transactions which are not in the mempool, but are still valid" - why are they not in the mempool? Commented Sep 2, 2021 at 19:17
  • They conflict with another transaction in the mempool, but this does not make them invalid. Commented Sep 2, 2021 at 19:18
