Marissa's Guide to the .NET Garbage Collector

"What's wrong, Uncle John?" I hadn't realized how my facial expressions were betraying my inner feelings. I had been working on a new coding project, and as I worked I became more and more amazed by the automatic memory management provided by .NET. It seemed that almost by magic the runtime was able to figure out which objects were no longer needed and which should hang around, and - yet more amazing - it could even call special cleanup routines.

"Nothing is wrong, Marissa; I just wish I had time to dive into the automatic memory management features of .NET," I replied hastily. Honestly, what could a five-month-old baby possibly know about the intricacies of garbage collectors and memory management?

"Is that all?" she replied. "The .NET garbage collector is rather easy to understand; it is based on a generational collection algorithm. If you like, I can explain it to you - in fact I can do it in a way even you could understand," she stated rather confidently.

"What? Are you kidding? You're going to school me?" I wasn't about to walk away from this challenge. "Go for it, kid. Let's see what great mass of knowledge resides in the mind of a tiny, diaper-wearing, .NET internals architect."

Starting Off On the Right Foot
"Well, first you need to understand some of the assumptions made by the garbage collector's designers. I'll use the GC abbreviation going forward to refer to the garbage collector, like we did in the nursery before I came home." Marissa then went on to explain that there are three basic assumptions:

  • Recently created objects typically have a short lifetime: Consider the example of a database connection. You create the ADO.NET object, interact with the database, and then the object typically isn't needed once you're working with the data you retrieved. This is an example of a pattern in which a new object doesn't need to hang around and take up precious resources.
  • Older objects typically have a longer lifetime: A good example of this is Windows Forms elements. A tree control that has been populated with information may be around for the lifetime of the application. In this pattern, holding the memory is less costly than the work required to re-create and repopulate the tree control.
  • Smaller collections yield better performance: The GC is based on a managed heap, and it is faster to walk a small section of the heap than the entire heap. Any operation that involves only a portion of the heap is therefore faster than one that involves the whole heap.
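
Marissa's first assumption is easy to see in everyday code. Here is a minimal sketch (the method and names are mine, for illustration): a helper creates a temporary List&lt;int&gt;, uses it, and lets it die with the scope - exactly the kind of object the GC expects to find dead in its next collection.

```csharp
using System;
using System.Collections.Generic;

static class ScopeDemo
{
    // 'squares' exists only for the duration of this call. Once the
    // method returns, nothing references the list, so it is exactly
    // the kind of short-lived object the GC assumes it will be able
    // to reclaim quickly.
    public static int SumOfSquares(int n)
    {
        var squares = new List<int>();
        for (int i = 1; i <= n; i++)
            squares.Add(i * i);

        int sum = 0;
        foreach (int s in squares)
            sum += s;
        return sum;   // 'squares' becomes unreachable after this point
    }
}
```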

"Well, Marissa, those are some good assumptions, but they really don't tell me much about how it all works," I said. She just frowned like babies do when they are frustrated and want you to just pay attention.

"The next thing you need to understand is the basics of how objects are created and how memory is allocated. Then I can walk you through some of the deeper GC concepts."

"Okay Marissa, it's your show; however you want to do this is fine." It's always a good idea to placate a baby if you can.

She reviewed the object creation process with me, and she was right: understanding how objects are created and how memory is allocated started to give me some insight into the internals of the GC. Rather than outline the steps here, I've included her overview in Table 1.

The important thing to keep in mind is that creating a new object results in a newobj IL instruction, which allocates and initializes memory. The initialization - setting the object's initial state - is done by the constructor. But since the GC knows nothing about the state of your object, it has no way to intelligently clean it up. Marissa explained that the GC has some tricks for intelligent cleanup, but it is really up to the programmer to implement them. As babies often do, though, she wanted to focus on the basics first.
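
The newobj step maps directly onto everyday C#. In this sketch (the Account class is my own invention, purely for illustration), `new Account(...)` compiles to a newobj instruction: the runtime allocates memory on the managed heap and then runs the constructor to establish the object's initial state - state the GC itself knows nothing about.

```csharp
using System;

class Account
{
    public string Owner { get; }
    public decimal Balance { get; private set; }

    // The constructor performs the "initialization" half of newobj:
    // the runtime has already allocated the memory, and this code
    // sets the object's initial state.
    public Account(string owner, decimal openingBalance)
    {
        Owner = owner;
        Balance = openingBalance;
    }
}
```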

"When your applications create objects, Uncle John, do you find that they create a series of objects at once?" she asked.

"Well, if you mean that objects created within the same scope usually have some type of strong relationship to each other, the answer would be yes."

That was what she was asking. I learned that this natural pattern helps improve performance in the GC. As she explained earlier, "smaller collections yield better performance," so the fact that interrelated objects are near each other from a memory perspective means the GC can typically clean them all up at the same time. How this cleanup occurs, I learned, is based on something called generations.

Coming to Terms with Generations
"As you know, Uncle John, the GC is based on a generational approach." Marissa made her way to the whiteboard and started to draw some figures. "I've created a representation of an empty memory heap here (see Figure 1). When you create new objects, they are added to what is internally known as 'Generation 0'. When the CLR is initialized, it sets a size budget for Generation 0 - 250KB, for instance. So let's say some objects are created; since they are new, they are all added to Generation 0 - with the exception of objects larger than 85KB, which we'll talk about later. The memory heap would now look like this (see Figure 2)."

I scratched my head. "Uh, go on."

"Okay, suppose we create more objects. What do you think happens? Well, if the objects created cause a situation in which we exceed our 250KB size for Generation 0..."

Interrupting, I asked, "So all the objects in Generation 0 can take up a total of 250KB, or whatever the CLR initialized it to, right?"

"Yes, that's right," she replied, and went on without pausing, "so if we exceed the boundary, the GC will move any objects that are reachable from a root to Generation 1, which, like Generation 0, has a memory size limit set by the CLR - about four times that of Generation 0. Once the objects are moved, the GC tears down any remaining objects in Generation 0, leaving it empty. The memory is then compacted and the new objects are created."

"Okay, okay. I think I'm starting to understand." I was getting excited - she was making sense. "Let me see if I can guess what happens next," I said. "When another set of objects is created, the GC will see if Generation 0 has exceeded its size limit; if it has, it will move the reachable objects in Generation 0 to Generation 1, but also check whether Generation 1 has exceeded its size limit. If moving the objects from Generation 0 causes Generation 1 to exceed its limit, it will move Generation 1's reachable objects to Generation 2. Upon initialization the CLR also sets a size limit for Generation 2, probably about twice that of Generation 1. Am I right?"

Marissa nodded, and then explained that there is no Generation 3; the .NET garbage collector has only three generations, numbered starting from zero. She also explained that the GC is self-tuning: it can dynamically increase or decrease the size of each generation, which yields better performance. For instance, if the GC determines that you are using a high number of small, short-lived objects, it might reduce the Generation 0 working set and free up resources for other areas. She also expanded on the issue of large objects - those greater than 85KB. Large objects are automatically created in Generation 2 to avoid the performance cost of starting in Generation 0, immediately exceeding its size limit, and then repeating the process for Generation 1. Creating large objects in Generation 2 helps overall system performance. This was something I didn't know - what a smart little kid.
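
None of this is hidden from you: the GC class in the base library exposes it. A small sketch follows - note that exact promotion timing can vary between runtime versions and GC modes, so the promotion comments describe typical behavior rather than a guarantee:

```csharp
using System;

static class GenerationDemo
{
    public static void Run()
    {
        object o = new object();
        Console.WriteLine(GC.GetGeneration(o));   // new objects start in Generation 0

        GC.Collect();                             // a survivor is promoted
        Console.WriteLine(GC.GetGeneration(o));   // typically 1 now

        GC.Collect();
        Console.WriteLine(GC.GetGeneration(o));   // typically 2 now

        // There is no Generation 3:
        Console.WriteLine(GC.MaxGeneration);      // 2

        // Objects over the large-object threshold skip straight to Generation 2:
        byte[] big = new byte[100_000];
        Console.WriteLine(GC.GetGeneration(big)); // 2

        GC.KeepAlive(o);                          // keep our test objects rooted
        GC.KeepAlive(big);
    }
}
```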

Being Reachable
"So Marissa, when you said the GC checks whether an object is reachable, what did you mean?" She explained that the GC builds a graph of all reachable objects. A "reachable" object is one with a root - a storage location containing a memory pointer to the object - that is not null. The assumption is that if something is pointing to the object, the object is being used. When the GC finds that an object is still in use, it will not clean it up. Objects that are not reachable, on the other hand, can be destroyed by the GC to free up memory. When the GC moves objects between generations, it only moves objects that are still reachable. A move is more expensive than releasing memory, so the fewer objects the GC has to move, the better. After a collection, the GC ensures that Generation 0 is empty and that all surviving objects live in either Generation 1 or Generation 2.
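
A WeakReference lets you observe reachability without acting as a root yourself. A sketch, with one caveat: the NoInlining attribute matters, because in debug builds especially the JIT may otherwise extend a local's lifetime to the end of the calling method, keeping the object rooted longer than you'd expect.

```csharp
using System;
using System.Runtime.CompilerServices;

static class ReachabilityDemo
{
    // Allocate in a separate, non-inlined method so that no live root
    // remains on the caller's stack after this method returns.
    [MethodImpl(MethodImplOptions.NoInlining)]
    public static WeakReference MakeUnrootedObject()
    {
        return new WeakReference(new object());
    }

    public static bool SurvivesCollection()
    {
        WeakReference weak = MakeUnrootedObject();
        GC.Collect();                  // the object has no root, so it is not
        GC.WaitForPendingFinalizers(); // part of the GC's reachability graph
        return weak.IsAlive;           // expected to be false
    }
}
```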

Finalizing Objects
Marissa then highlighted some implications of the GC that are important to software developers. Since the GC has no true knowledge of your object, it is important to understand the Finalize and Dispose methods and how they affect the GC and your code.

First, realize that there are no deterministic destructors in .NET. This is important to understand, because otherwise you can make false assumptions about how your system behaves. The closest equivalents are Finalize and Dispose: the Finalize method is called by the GC, while the Dispose method is designed to be called programmatically.

When a new object is created, objects containing a Finalize method get an entry in what is known as the finalization list. This list contains pointers to all objects that have a Finalize method. By inspecting this list, the GC can determine which objects need Finalize called on them and which can simply be deleted. Calling Finalize is expensive; therefore, you should use it with care. During the first pass, the GC looks at the heap and determines which objects are not reachable; it then reviews the finalization list, and if it finds that one of the nonreachable objects has a Finalize method, it copies the pointer from the finalization list to what is known as the "freachable queue". The nonreachable object is then moved to the next generation - as stated earlier, Generation 0 always gets emptied. During the next pass, the GC finds that the object is still not reachable and executes its Finalize method from the freachable queue.

The key concept to take away is that you do not know when the GC will call the Finalize method. If you are using unmanaged or expensive resources, they could be around much longer than you expect. Also, creating an object with a Finalize method takes a little longer, since the object is not only placed on the heap but also requires a pointer to be established in the finalization list. You should also consider the situation in which an object with a Finalize method holds references to other objects: the GC will not clean up those other objects until after the object with the Finalize method is cleaned up, so it is important to consider downstream object designs when you implement systems that use finalizable objects. In fact, if you are seeing performance or resource issues and cannot understand why something is still referenced, make sure a parent object doesn't have a Finalize method that is keeping the resource alive. Here is a little pop quiz to see if you are getting Marissa's message.

Years ago I worked on the Argo project, which involved a system that could represent undersea long-distance phone networks. A network wrapping around Africa could involve thousands and thousands of network nodes. Each type of network node was represented as an object and stored in an array. Consider what would happen under .NET and the GC if, in a new version of Argo, we populated a structure of some sort with, say, 750 objects, each with a Finalize method. What do you think would happen when the GC kicked in? Imagine that all the objects were very small and that even with 750 objects we didn't exceed the Generation 0 size limit. In this situation the GC would need to carry out the finalization semantics for all 750 objects. If you guessed that it would allocate 750 pointers in the finalization list, copy 750 pointers to the freachable queue, move 750 objects to Generation 1, and then call 750 Finalize methods, you guessed right. Now consider what that would mean for performance...
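
Part of the quiz can actually be run. In this sketch (NetworkNode is a stand-in I wrote for illustration, not the real Argo class), 750 finalizable objects are created and abandoned; a collection then routes them through the freachable queue, and GC.WaitForPendingFinalizers blocks until the finalizer thread has drained it. The finalization timing is typical behavior, not a contractual guarantee.

```csharp
using System;
using System.Runtime.CompilerServices;
using System.Threading;

class NetworkNode
{
    public static int Finalized;

    // Having this finalizer puts every NetworkNode on the finalization
    // list at construction time - and later on the freachable queue.
    ~NetworkNode()
    {
        Interlocked.Increment(ref NetworkNode.Finalized);
    }
}

static class ArgoQuiz
{
    // Allocate in a non-inlined method so no stack root survives.
    [MethodImpl(MethodImplOptions.NoInlining)]
    static void AllocateNodes(int count)
    {
        for (int i = 0; i < count; i++)
            _ = new NetworkNode();     // immediately unreachable
    }

    public static int Run(int count)
    {
        AllocateNodes(count);
        GC.Collect();                  // pass 1: pointers copied to the freachable queue
        GC.WaitForPendingFinalizers(); // finalizer thread drains the queue
        return NetworkNode.Finalized;  // typically equal to 'count'
    }
}
```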

Nap Time
Marissa is starting to interject loud baby wails into her talk on the GC, so I think it is time for her to have a nap. She did promise to explain GC threading implications, strong and weak object references, the use of the Dispose method, object resurrection, programming the GC, and more on the Finalize method in Part 2 of this article.

About the Author
John Gomez, open source editor for .NET Developer's Journal, has over 25 years of software development and architectural experience, and is considered a leader in the design of highly distributed transaction systems. His interests include chaos- and fuzzy-based systems, self-healing and self-reliant systems, and offensive security technologies, as well as artificial intelligence. John started developing software at age 9 and is currently the CTO of Eclipsys Corporation, a worldwide leader in hospital and physician information systems.
