That's not exactly true. The idea behind defragmenting memory is that most of the memory of other processes gets pushed to swap and the physical memory gets released. Then you can request a big chunk of memory that might otherwise be problematic due to swapping and the fact that memory is allocated in pages, not bytes.

Consider a system with 256 units of memory in total and a page size of 16 units. If some process requests one unit of memory, 16 units get assigned to it, and other processes can only request 240 units of memory without swapping. If you force the other process to be preempted and its memory moved to swap, you can allocate 256 units of physical memory instead of 240 units of physical memory plus 16 units of swap space. What you gain is that the swapping is moved earlier in time and you get a contiguous chunk of physical memory, with the prospect of faster access. At least in theory. In practice there are other processes running and interacting with yours, so it all becomes less predictable.
As for the original problem, find the border case where the allocation succeeds, and make sure the hash is destroyed before that point and that all values are correct (watch out for data type limitations).