If malloc() performance is critical, we are likely working in a very lean environment (embedded). Such environments often come without threading support and do not need much concurrency. Therefore we can provide two APIs: one for non-concurrent processing without allocations, and one for concurrent processing with allocations:

opaque_sha_context * sha_context = sha_init(NULL, NULL); // allocation disabled, non-concurrent, returns statically preallocated area
if (!sha_context) {
    sha_context = sha_init(malloc, free); // concurrent implementation, uses malloc
}
sha_process(sha_context, file);
sha_free(sha_context); // either frees memory, or allows next use of a statically preallocated area

I've never built or seen such an API in practice.
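To make the contract concrete, here is a minimal sketch of what sha_init() and sha_free() could look like behind such an interface, assuming a single statically preallocated context guarded by a plain in-use flag. The struct fields and the static_context / static_context_in_use names are purely illustrative assumptions; only the function names come from the example above.

#include <stddef.h>
#include <string.h>

/* Hypothetical internal layout; a real SHA context would hold the digest state. */
typedef struct opaque_sha_context {
    unsigned char block[64];
    unsigned long long total_len;
    void (*free_fn)(void *); /* NULL for the statically preallocated context */
} opaque_sha_context;

/* One statically preallocated context for the allocation-free mode. */
static opaque_sha_context static_context;
static int static_context_in_use; /* plain flag: this mode is non-concurrent by contract */

opaque_sha_context *sha_init(void *(*alloc_fn)(size_t), void (*free_fn)(void *))
{
    opaque_sha_context *ctx;

    if (alloc_fn == NULL) {
        /* Allocation disabled: hand out the static area, but only to one user at a time. */
        if (static_context_in_use)
            return NULL; /* caller may fall back to the allocating mode */
        static_context_in_use = 1;
        ctx = &static_context;
    } else {
        /* Concurrent mode: every caller gets its own allocated context. */
        ctx = alloc_fn(sizeof *ctx);
        if (ctx == NULL)
            return NULL; /* out of memory */
    }

    memset(ctx, 0, sizeof *ctx);
    ctx->free_fn = free_fn;
    return ctx;
}

void sha_free(opaque_sha_context *ctx)
{
    if (ctx == NULL)
        return;
    if (ctx == &static_context)
        static_context_in_use = 0; /* allow the next non-concurrent use */
    else if (ctx->free_fn != NULL)
        ctx->free_fn(ctx); /* release the allocated context */
}

A plain flag suffices here only because the allocation-free mode is non-concurrent by contract; if both modes had to coexist across threads, claiming the static area would need an atomic test-and-set instead.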
