I wonder whether such a data structure can even be implemented.
I am afraid the answer is no.
Searching OK, Insertion NOT
When we look at data structures like the binary search tree, B-tree, red-black tree, and AVL tree, the average search complexity is O(log N), but the average insertion complexity is also O(log N). The reason is straightforward: a search navigates the very same path that the insertion followed when it placed the element.
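For illustration (not part of the original answer), here is a minimal Python sketch of an unbalanced binary search tree, showing that search retraces exactly the comparison path that insertion took, so both average O(log N):

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    """Walk left/right by comparison until an empty slot is found: O(log N) on average."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root

def search(root, key):
    """Follows exactly the same comparison path the insertion took."""
    while root is not None and root.key != key:
        root = root.left if key < root.key else root.right
    return root is not None
```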
Insertion OK, Searching NOT
Data structures like the singly linked list and doubly linked list have an average insertion complexity of O(1), but searching in them is a painful O(N), simply because they offer no index-based access to their elements.
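A minimal sketch of a singly linked list (assuming insertion at the head): the insert is a single pointer update, O(1), while search has to walk the nodes one by one, O(N):

```python
class ListNode:
    def __init__(self, value, nxt=None):
        self.value = value
        self.next = nxt

class SinglyLinkedList:
    def __init__(self):
        self.head = None

    def insert_front(self, value):
        """O(1): just re-point the head, no traversal needed."""
        self.head = ListNode(value, self.head)

    def search(self, value):
        """O(N): there is no index, so every node may have to be visited."""
        node = self.head
        while node is not None:
            if node.value == value:
                return True
            node = node.next
        return False
```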
The answer to your question lies in the skip list: even though it is built out of linked lists (which are expected to insert in O(1)), it still needs O(log N) on average for insertion.
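A rough skip-list sketch (illustrative only, with an assumed maximum level of 16 and promotion probability 1/2). The point it demonstrates: insertion must first perform the same multi-level descent that search uses before it can splice in the new node, which is why it is expected O(log N) rather than O(1):

```python
import random

class SkipNode:
    def __init__(self, key, level):
        self.key = key
        self.forward = [None] * (level + 1)  # one forward pointer per level

class SkipList:
    MAX_LEVEL = 16

    def __init__(self):
        self.head = SkipNode(None, self.MAX_LEVEL)
        self.level = 0

    def _random_level(self):
        # Coin flips: a node is promoted to the next level with probability 1/2.
        lvl = 0
        while random.random() < 0.5 and lvl < self.MAX_LEVEL:
            lvl += 1
        return lvl

    def search(self, key):
        """Expected O(log N): drop down a level whenever the next key overshoots."""
        node = self.head
        for i in range(self.level, -1, -1):
            while node.forward[i] is not None and node.forward[i].key < key:
                node = node.forward[i]
        node = node.forward[0]
        return node is not None and node.key == key

    def insert(self, key):
        """Also expected O(log N): the insert must locate its position with the
        same multi-level descent that search performs."""
        update = [self.head] * (self.MAX_LEVEL + 1)
        node = self.head
        for i in range(self.level, -1, -1):
            while node.forward[i] is not None and node.forward[i].key < key:
                node = node.forward[i]
            update[i] = node
        lvl = self._random_level()
        if lvl > self.level:
            self.level = lvl
        new_node = SkipNode(key, lvl)
        for i in range(lvl + 1):
            new_node.forward[i] = update[i].forward[i]
            update[i].forward[i] = new_node
```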
On a closing note, the hash map comes very close to meeting the fast-search and fast-insertion requirement at the cost of considerable extra space, but if it is implemented badly (for example, with a hash function that causes many collisions), both insertion and searching can degrade to O(N).
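To illustrate that degenerate case, here is a deliberately bad toy hash map (hypothetical, not any real library) whose hash function sends every key to the same bucket, so both put and get collapse to an O(N) scan of a single chain:

```python
class BadHashMap:
    """Toy hash map whose hash function maps every key to bucket 0,
    so every operation degrades to an O(N) list scan."""

    def __init__(self, num_buckets=16):
        self.buckets = [[] for _ in range(num_buckets)]

    def _bucket(self, key):
        return 0  # a horrible hash: all keys collide

    def put(self, key, value):
        bucket = self.buckets[self._bucket(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)  # overwrite existing key
                return
        bucket.append((key, value))  # O(N) once the single chain grows

    def get(self, key):
        for k, v in self.buckets[self._bucket(key)]:
            if k == key:
                return v
        raise KeyError(key)
```

With a decent hash function spreading keys across the buckets, the same structure gives the usual expected O(1) behavior for both operations.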
The link asking whether such a data structure can even be implemented is broken. You can get O(1) insertion and O(1) search if you are willing to consume an enormous amount of memory: create a giant array the size of your key space. To insert, copy the item into the array at the position indexed by the key. To look up, read from the array at the position indexed by the key. Each of these is an O(1) operation. However, the space requirements are huge if your key space is large. (If you want O(1) insertion, you need to find something that gives you O(1) search. That significantly limits your options.)
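A sketch of that direct-addressing idea, assuming a bounded non-negative integer key space; both operations are a single array index, paid for with one slot per possible key:

```python
class DirectAddressTable:
    """Direct addressing over a bounded integer key space:
    O(1) insert and O(1) lookup, at the cost of one slot per possible key."""

    def __init__(self, key_space_size):
        self.slots = [None] * key_space_size  # huge if the key space is large

    def insert(self, key, value):
        self.slots[key] = value   # single indexed write: O(1)

    def search(self, key):
        return self.slots[key]    # single indexed read: O(1)
```

For example, covering every 32-bit key would require 2^32 slots, which is exactly the space trade-off described above.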