I'm writing a simple program in C that finds anagrams of words by looking them up in a hash table. What I have is an array of hash tables, where words are indexed by their length and hashed within each table by the sum of their letter values, e.g. a = 1, b = 2, etc.
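The letter-sum hash works roughly like this (a simplified sketch, assuming lowercase ASCII words; `letter_sum_hash` is a hypothetical name, not the exact function in my code):

```c
#include <stddef.h>

// Simplified sketch of the letter-sum hash: 'a' = 1, 'b' = 2, ...,
// reduced modulo the bucket count. Anagrams of the same word always
// collide into the same bucket, which is what makes the lookup work.
static size_t letter_sum_hash(const char *word, size_t bucket_count)
{
    size_t sum = 0;
    for (const char *p = word; *p != '\0'; p++)
        sum += (size_t)(*p - 'a' + 1);
    return sum % bucket_count;
}
```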
I'm still getting acquainted with dynamic memory management in C, so this question may be quite simple, but I have functions that allocate and deallocate memory for each hash table (and its internal data).
I originally wrote this program using a single hash table and after running the program through valgrind I found that the memory had been freed correctly and there were no leaks. However, when expanding the program to use an array of hash tables, valgrind began to find possible leaks (though it's reporting them as still reachable). I'm confused as to why the memory isn't being freed correctly in the array of hash tables, although each hash table in the array is being run through the deallocate function that was used originally.
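For context on the valgrind categories as I understand them: "still reachable" means a live pointer to the block still existed at exit, while "definitely lost" means the last pointer to the block was overwritten or discarded. A minimal sketch of the two cases (hypothetical names, not from my program):

```c
#include <stdlib.h>

static char *kept; // file-scope pointer that survives until exit

// "Still reachable": the block is never freed, but `kept` still
// points at it when the program exits, so valgrind can trace it.
void allocate_reachable(void)
{
    kept = malloc(64);
}

// "Definitely lost": the only pointer to the block is overwritten,
// so nothing can ever free it.
void allocate_lost(void)
{
    char *p = malloc(64);
    p = NULL; // last reference gone -> a true leak
    (void)p;
}
```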
Gist with the full code

Full Code
```c
typedef struct Bucket Bucket;
typedef struct HashTable HashTable;

struct Bucket {
    char *data;
    Bucket *next;
};

struct HashTable {
    int size;
    Bucket **buckets;
};

int main(int argc, char const *argv[])
{
    int i;

    // Allocate memory for array of hash tables
    HashTable **hash_array = (HashTable**) malloc(sizeof(HashTable*) * WORD_SIZE);
    for (i = 0; i < WORD_SIZE; i++) {
        hash_alloc(&hash_array[i], BUCKET_COUNT);
    }

    // Main logic here...

    // Free memory
    for (i = 0; i < WORD_SIZE; i++) {
        hash_dealloc(hash_array[i]);
    }
    free(hash_array);

    return 0;
}
```

Hash Table Allocation function
```c
void hash_alloc(HashTable **a, unsigned int size)
{
    *a = (HashTable*) malloc(sizeof(HashTable));
    (*a)->buckets = (Bucket**) malloc(sizeof(Bucket*) * size);
    (*a)->size = size;
}
```

Hash Table Deallocation function
```c
void hash_dealloc(HashTable *a)
{
    int i;
    Bucket *current, *temp;

    for (i = 0; i < a->size; i++) {
        current = a->buckets[i];
        while (current != NULL) {
            temp = current;
            free(temp->data);
            current = current->next;
            free(temp);
        }
        free(current);
    }
    free(a->buckets);
    free(a);
}
```

Add to hash table function
```c
void add_to_hash_array(HashTable **a, char *data)
{
    // Removed some other logic for readability...
        replace_str(data, "\n", "\0");
        newNode->data = strdup(data);
        newNode->next = currentTable->buckets[index];
        currentTable->buckets[index] = newNode;
    } else {
        return;
    }
}
```

Valgrind output
```
==39817== 261,120 bytes in 128 blocks are still reachable in loss record 5 of 7
==39817==    at 0x4C2B6CD: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==39817==    by 0x400A38: hash_alloc (main.c:73)
==39817==    by 0x4008B0: main (main.c:39)
==39817==
==39817== 286,936 bytes in 31,553 blocks are still reachable in loss record 6 of 7
==39817==    at 0x4C2B6CD: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==39817==    by 0x4EBAD71: strdup (strdup.c:43)
==39817==    by 0x400D4D: add_to_hash_array (main.c:141)
==39817==    by 0x400914: main (main.c:51)
==39817==
==39817== 504,848 bytes in 31,553 blocks are still reachable in loss record 7 of 7
==39817==    at 0x4C2B6CD: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==39817==    by 0x400D16: add_to_hash_array (main.c:136)
==39817==    by 0x400914: main (main.c:51)
```
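While writing this up I noticed that `hash_alloc` allocates the bucket array with `malloc`, which leaves the `Bucket*` entries uninitialized until something sets them. A `calloc`-based variant would guarantee every chain starts as NULL, which is what the NULL checks in `hash_dealloc` rely on (a sketch with a hypothetical name, not the code in the gist):

```c
#include <stdlib.h>

typedef struct Bucket Bucket;
struct Bucket { char *data; Bucket *next; };
typedef struct { int size; Bucket **buckets; } HashTable;

// Hypothetical calloc-based variant of hash_alloc: the bucket array
// is zero-filled, so every chain starts as NULL instead of garbage.
HashTable *hash_alloc_zeroed(unsigned int size)
{
    HashTable *a = malloc(sizeof *a);
    if (a == NULL)
        return NULL;
    a->buckets = calloc(size, sizeof *a->buckets);
    if (a->buckets == NULL) {
        free(a);
        return NULL;
    }
    a->size = (int)size;
    return a;
}
```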
Comments:

Can you post the `HashTable` and `Bucket` declarations?

`HashTable** hash_array = (HashTable**) malloc(sizeof(HashTable*) * WORD_SIZE);` — why not use the `hash_alloc()` function instead?
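As an aside on the `replace_str(data, "\n", "\0")` call in `add_to_hash_array`: stripping the trailing newline from a line read with `fgets` can be done with the standard `strcspn` instead (sketch; `chomp` is a hypothetical helper name):

```c
#include <string.h>

// Truncate a line at its first '\n' or '\r' -- the usual way to clean
// a word read with fgets before hashing it. strcspn returns the length
// of the initial span containing none of the listed characters.
static void chomp(char *line)
{
    line[strcspn(line, "\r\n")] = '\0';
}
```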