When you're working at the level of bits and bytes, of memory as a raw, homogeneous collection of data, as is often required to effectively implement the most efficient allocators and data structures, there is no safety to be had. Safety is predominantly a strong data type-related concept, and a memory allocator doesn't work with data types. It works with bits and bytes to pool out, with those same bits and bytes potentially representing one data type one moment and another later on.
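To make that concrete, here's a minimal sketch (the class and member names are mine, not anything from the text above) of a bump allocator pooling out raw bytes from a fixed buffer. Nothing in it knows or cares what types those bytes will end up representing:

```cpp
#include <cstddef>
#include <cstdint>

// Minimal bump allocator: hands out raw bytes from a fixed pool.
// It has no notion of the types those bytes will represent.
class BumpAllocator {
public:
    BumpAllocator(void* buffer, std::size_t capacity)
        : base_(static_cast<unsigned char*>(buffer)), capacity_(capacity), offset_(0) {}

    // alignment is assumed to be a nonzero power of two.
    void* allocate(std::size_t size, std::size_t alignment) {
        std::uintptr_t current = reinterpret_cast<std::uintptr_t>(base_ + offset_);
        std::size_t padding = (alignment - (current % alignment)) % alignment;
        if (offset_ + padding + size > capacity_) return nullptr;  // pool exhausted
        void* result = base_ + offset_ + padding;
        offset_ += padding + size;
        return result;  // the caller decides what type lives here
    }

    void reset() { offset_ = 0; }  // "frees" everything in the pool at once

private:
    unsigned char* base_;
    std::size_t capacity_;
    std::size_t offset_;
};
```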

It doesn't matter if you use C++ in that case. You'd still be sprinkling `static_cast`s all over the code to cast from `void*` pointers, still working with bits and bytes, and just dealing with more hassle respecting the type system in this context than in C, which has a much simpler type system where you're free to `memcpy` bits and bytes around without worrying about bulldozing over it.
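As a trivial, hedged illustration of that difference in ceremony (with `std::malloc` standing in for whatever pool is handing out the bytes):

```cpp
#include <cstdlib>
#include <cstring>

int main() {
    int source[64] = {1, 2, 3};

    // Raw bytes from some allocator; std::malloc stands in here.
    void* block = std::malloc(sizeof(int) * 64);
    if (!block) return 1;

    // C habit: a plain cast and a memcpy, no questions asked.
    int* ints_c = (int*)block;
    std::memcpy(ints_c, source, sizeof source);   // fine: int is trivially copyable

    // C++ habit: the same bytes, but the cast is spelled out explicitly.
    int* ints_cpp = static_cast<int*>(block);
    std::memcpy(ints_cpp, source, sizeof source);

    std::free(block);
}
```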

In fact it's often harder to work in C++, an overall safer language, in such low-level contexts of bits and bytes without writing even more dangerous code than you would in C, since you could be bulldozing over C++'s type system and doing things like overwriting vptrs or failing to invoke copy constructors and destructors at the appropriate times. If you do take the proper time to respect those types and use placement new and manually invoke dtors and so forth, you then get exposed to the world of exception handling in a context too low-level for RAII to be practical, and achieving exception safety in such a low-level context is very difficult (you have to assume just about any function can throw, catch every possibility, and roll back any side effects as an indivisible transaction as though nothing happened). The C code can often "safely" assume that any data type instantiated in C can be treated as just bits and bytes without violating the type system, invoking undefined behavior, or running into exceptions.
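To show what that "proper time" looks like, here's a minimal sketch (the `Widget` type and the use of `std::malloc` are just stand-ins) of constructing and destroying an object in raw allocator memory with placement new and a manual destructor call:

```cpp
#include <new>       // placement new
#include <cstdlib>
#include <cstdio>

struct Widget {
    int id;
    explicit Widget(int i) : id(i) { std::puts("constructed"); }
    ~Widget()                       { std::puts("destroyed"); }
};

int main() {
    // Raw bytes from some allocator; std::malloc stands in here.
    void* raw = std::malloc(sizeof(Widget));
    if (!raw) return 1;

    // Respect the type: construct the object in place with placement new.
    // Note: if this constructor threw, the raw bytes would still need to
    // be released by whoever catches the exception.
    Widget* w = new (raw) Widget(42);

    // ... use w ...

    // Manually invoke the destructor before handing the bytes back.
    w->~Widget();
    std::free(raw);
}
```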

And it would be impossible to implement such allocators in languages that don't allow you to get "dangerous" here; you'd have to lean on whatever allocators they provide (most likely implemented in C or C++) and hope they're good enough for your purposes. And there are almost always more efficient but less general allocators and data structures suited to your specific use case, though they're much more narrowly applicable precisely because they're tailored to it.
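As one hedged example of that trade-off, here's a sketch (the names are mine) of a fixed-size free-list allocator: O(1) allocation and deallocation for exactly one block size, typically faster than a general-purpose heap for that one job and useless for anything else:

```cpp
#include <cstddef>
#include <vector>

// Fixed-size free list: O(1) alloc/free for exactly one block size.
// Narrow by design; a general-purpose heap has to handle every size.
template <std::size_t BlockSize>
class FixedPool {
    union Node { Node* next; unsigned char bytes[BlockSize]; };
    std::vector<Node> storage_;
    Node* free_list_ = nullptr;

public:
    explicit FixedPool(std::size_t count) : storage_(count) {
        for (std::size_t i = 0; i < count; ++i) {
            storage_[i].next = free_list_;   // thread every block onto the free list
            free_list_ = &storage_[i];
        }
    }

    void* allocate() {
        if (!free_list_) return nullptr;     // pool exhausted; no fallback here
        Node* n = free_list_;
        free_list_ = n->next;
        return n;
    }

    void deallocate(void* p) {
        Node* n = static_cast<Node*>(p);
        n->next = free_list_;
        free_list_ = n;
    }
};
```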

Most people don't need the likes of C or C++ since they can just call code already implemented for them in C or C++, or possibly even assembly. Many might benefit from innovating at the high level, like stringing together an image program that just uses libraries of existing image processing functions already implemented in C, where they're not innovating so much at the lowest level of looping through individual pixels, but maybe offering a very friendly user interface and workflow never seen before. In that case, if the point of the software is just to make high-level calls into low-level libraries (*"process this entire image for me"*, not *"for each pixel, do something"*), then it might arguably be a premature optimization to even attempt to start writing such an application in C.

But if you're doing something new at the low level, where it helps to access data in a low-level way (say, a brand new image filter never seen before that's fast enough to work on HD video in real time), then you generally have to get a little bit dangerous.
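"Low-level" here can be as mundane as touching every pixel yourself instead of calling a canned routine. A sketch, assuming a raw 8-bit RGBA buffer (the layout and function are my assumptions, not anything stated above), of the kind of per-pixel loop you'd then vectorize, thread, or fuse with other passes to hit real-time HD rates:

```cpp
#include <cstdint>
#include <cstddef>
#include <algorithm>

// Naive per-pixel brightness filter over a raw 8-bit RGBA buffer.
// The interesting engineering starts after this: SIMD, threading,
// cache-friendly tiling, fusing passes to avoid extra memory traffic.
void brighten(std::uint8_t* pixels, std::size_t width, std::size_t height, int amount) {
    const std::size_t count = width * height * 4;   // 4 channels: R, G, B, A
    for (std::size_t i = 0; i < count; i += 4) {
        for (std::size_t c = 0; c < 3; ++c) {       // leave alpha untouched
            int v = pixels[i + c] + amount;
            pixels[i + c] = static_cast<std::uint8_t>(std::clamp(v, 0, 255));
        }
    }
}
```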

It's easy to take this stuff for granted. I remember a Facebook post where someone pointed out how it's feasible to create a 3D video game in Python, with the implication that low-level languages are becoming obsolete, and it was certainly a decent-looking game. But Python was making high-level calls into libraries implemented in C to do all the heavy lifting. You can't make Unreal Engine 4 by just making high-level calls into existing libraries. Unreal Engine 4 *is* the library. It did all kinds of things that never existed in other libraries and engines, from its lighting to its nodal Blueprint system and how it can compile and run code on the fly. If you want to innovate at that low engine/core/kernel level, then you have to get low-level. If all game devs switched to high-level safe languages, there would be no Unreal Engine 5, or 6, or 7. It would likely be people still using Unreal Engine 4 decades later, because you can't innovate at the level required to come out with a next-gen engine by just making high-level calls into the old one.