
I have read this question and its answers (dynamically allocated memory after program termination), and I want to know whether it is okay to NOT delete dynamically allocated memory and let it get freed by the OS after programme termination. So, if I have allocated some memory for objects that I need throughout the programme, is it okay to skip deleting them at the end of the programme, in order to make the code run faster?

  • It is not good practice and may result in unexpected behavior when large chunks of memory are left undeleted. If you don't want to manage it yourself, use std::unique_ptr; the memory will be released automatically. Commented Jul 22, 2015 at 15:48
  • Sounds like you would be happier with a garbage-collected language, but you are probably using dynamic memory allocation when you don't really need to. Commented Jul 22, 2015 at 15:49
  • @crashmstr: Indeed. Commented Jul 22, 2015 at 15:52
  • Does it really take that much more time to free your dynamic allocations? Commented Jul 22, 2015 at 15:57
  • I am just curious, thanks for answering! So it is bad practice not to free dynamically allocated memory. I will read about shared_ptr and unique_ptr. Commented Jul 22, 2015 at 16:02

3 Answers


The short answer is yes, you can; the long answer is that you probably should not: if your code ever needs to be refactored and turned into a library, you are handing a considerable amount of technical debt to the person who will do that job, which could be you.

Furthermore, if you have a real, hard-to-find memory leak (not a memory leak caused by you intentionally not freeing long-lived objects), it is going to be quite time-consuming to debug it with valgrind, due to a considerable amount of noise and false positives.

Have a look at std::shared_ptr and std::unique_ptr. The latter has no overhead.


Most sane OSes release all memory and local resources owned by a process upon its termination (the OS may do it lazily, or merely decrement a share counter, but that does not matter much for this question). So it is safe to skip releasing those resources.

However, it is a very bad habit, and you gain almost nothing. If you find that releasing objects takes a long time (for example, walking a very long list of objects), you should refine your code and choose a better algorithm.

Also, although the OS will release all local resources, there are exceptions such as shared memory and system-wide (global) semaphores, which you are required to release explicitly.


First of all, the number one rule in C++ is:

Avoid dynamic allocation unless you really need it!

Don't use new lightly; not even if it's safely wrapped in std::make_unique or std::make_shared. The standard way to create an instance of a type in C++ is:

T t;

In C++, you need dynamic allocation only if an object should outlive the scope in which it was originally created.

If and only if you need to dynamically allocate an object, consider using std::shared_ptr or std::unique_ptr. Those will deallocate automatically when the object is no longer needed.

Second,

is it ok to skip deleting them at the end of the programme, in order to make the code run faster?

Absolutely not, because of the "in order to make the code run faster" part. This would be premature optimisation.


Those are the basic points.

However, you still have to consider what constitutes a "real", or bad memory leak.

Here is a bad memory leak:

    #include <iostream>

    int main()
    {
        int count;
        std::cin >> count;
        for (int i = 0; i < count; ++i)
        {
            int* memory = new int[100];  // leaked on every iteration
        }
    }

This is not bad because the memory is "lost forever"; any remotely modern operating system will clean everything up for you once the process has gone (see Kerrek SB's answer in your linked question).

It is bad because memory consumption is not constant when it could be; it will unnecessarily grow with user input.

Here is another bad memory leak:

    void OnButtonClicked()
    {
        std::string* s = new std::string("my"); // evil!
        label->SetText(*s + " label");
    }

This piece of (imaginary and slightly contrived) code will make memory consumption grow with every button click. The longer the program runs, the more memory it will take.

Now compare this with:

    int main()
    {
        int* memory = new int[100];  // one allocation, never freed
    }

In this case, memory consumption is constant; it does not depend on user input and will not become bigger the longer the program runs. While stupid for such a tiny test program, there are situations in C++ where deliberately not deallocating makes sense.

Singleton comes to mind. A very good way to implement Singleton in C++ is to create the instance dynamically and never delete it; this avoids all order-of-destruction issues (e.g. SettingsManager writing to Log in its destructor when Log was already destroyed). When the operating system clears the memory, no more code is executed and you are safe.

Chances are that you will never run into a situation where it's a good idea to avoid deallocation. But be wary of "always" and "never" rules in software engineering, especially in C++. Good memory management is much harder than matching every new with a delete.

Comments

What?! Avoid using dynamic memory? What on earth are you recommending here? The only reason to do that is not knowing how to use it. Java is completely based on dynamic memory. What are you talking about? I cannot but downvote you, sorry.
@LuisColorado: Java is a different language. You are arguing against a well-established, universally accepted C++ guideline here. Do you have authoritative sources for your claims?
The "unless you really need it" is indeed quite common. Leaving aside lifecycle and ownership, stack space is limited; if you have a large piece of data, you eventually need to resort to dynamic allocation, or have something else do it for you in the form of a collection.
@Calvin: Well, obviously when I argue against unnecessary dynamic allocation, I do not mean avoiding std::vector et al. I argue against the beginner's mistake of new-ing everything in C++ just because Java has garbage collection :) Smart pointers mitigate this problem somewhat, but do not solve it.
@Calvin: How often you need to allocate dynamically depends mostly on your application domain. In any case, it should be a conscious choice, not the default.