Garbage Collection, the Large Object Heap, and my Results
I wrote a test app simulating the creation of the large byte arrays I discussed in my last post. It created 6 randomly sized arrays, ranging from 20K to 250MB, 100,000 times in a loop. I opened up Performance Monitor and watched the Gen 2 heap size, % Time in GC, and Large Object Heap Size counters for the app. To my surprise, there were no noticeable delays or pauses during the run. The unexpected result was that the code spent about 50% of the run time in the GC. That means half the CPU cycles used were just for cleaning up and moving memory around! That seems like a big waste of cycles.
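For reference, here’s a minimal sketch of what that first test looked like. The array count, size range, and iteration count come from the description above; everything else (the loop structure, touching a few bytes to commit the pages) is my own filler:

```csharp
using System;

class GcStressTest
{
    static void Main()
    {
        var rng = new Random();

        for (int i = 0; i < 100000; i++)
        {
            // 6 randomly sized arrays between 20K and 250MB. Anything over
            // 85,000 bytes is allocated on the Large Object Heap.
            var arrays = new byte[6][];
            for (int j = 0; j < arrays.Length; j++)
            {
                int size = rng.Next(20 * 1024, 250 * 1024 * 1024);
                arrays[j] = new byte[size];

                // Touch a few random bytes so the OS actually commits the pages.
                for (int k = 0; k < 10; k++)
                    arrays[j][rng.Next(size)] = 0xFF;
            }
            // The arrays go out of scope here, leaving the GC to reclaim
            // them all, which is where the 50% GC time comes from.
        }
    }
}
```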
From there I decided to re-write the code using the unsafe keyword. I allocated the memory using Marshal.AllocHGlobal with the same random sizing code. I mapped the space to an UnmanagedMemoryStream and wrote some bytes to it at random points to make sure the OS was really giving me the memory. The CPU utilization was much better. For this test, I simply watched CPU usage in Performance Monitor and used Task Manager to watch the rise and fall of available system memory.
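Here’s a sketch of the unmanaged version. Marshal.AllocHGlobal and UnmanagedMemoryStream are the pieces named above; the exact write pattern and cleanup are my assumptions. It has to be compiled with /unsafe:

```csharp
using System;
using System.IO;
using System.Runtime.InteropServices;

class UnmanagedTest
{
    static unsafe void Main()
    {
        var rng = new Random();

        for (int i = 0; i < 100000; i++)
        {
            int size = rng.Next(20 * 1024, 250 * 1024 * 1024);
            IntPtr buffer = Marshal.AllocHGlobal(size);
            try
            {
                // Wrap the raw allocation in a stream so the rest of the
                // code can read and write it like any other Stream.
                using (var stream = new UnmanagedMemoryStream(
                    (byte*)buffer.ToPointer(), size, size, FileAccess.ReadWrite))
                {
                    // Write bytes at random offsets so the OS really
                    // gives us the memory instead of just reserving it.
                    for (int k = 0; k < 10; k++)
                    {
                        stream.Seek(rng.Next(size), SeekOrigin.Begin);
                        stream.WriteByte(0xFF);
                    }
                }
            }
            finally
            {
                // No GC involvement: the memory is gone the moment we free it.
                Marshal.FreeHGlobal(buffer);
            }
        }
    }
}
```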
I was unsure about using the unsafe option in C# at first. Most people just like to talk about how dangerous it is, and how if you need to use it then you’re probably doing something wrong. I feel that this scenario is a good fit for unsafe C#. I need to read through large blocks of memory, then dump them. Performance of the read is somewhat important, but I don’t want the cost of memory allocation/de-allocation to be a noticeable factor like it was in the first test. The vast majority of my application will be fine using the GC and will stay out of the unsafe blocks, but this one part will probably benefit greatly from direct control of the memory.
Time to get coding!