Marissa's Guide to the .NET Garbage Collector


"What's wrong, Uncle John?" I hadn't realized how my facial expressions were betraying my inner feelings. I had been working on a new coding project, and as I worked I became more and more amazed by the automatic memory management provided by .NET. It seemed that almost by magic the runtime could figure out which objects were no longer needed and which should hang around, and - yet more amazing - it could even call special cleanup routines.

"Nothing is wrong, Marissa; I just wish I had time to dive into the automatic memory management features of .NET," I replied hastily. Honestly, what could a five-month-old baby possibly know about the intricacies of garbage collectors and memory management?

"Is that all?" she replied. "The .NET garbage collector is rather easy to understand; it is based on a generational collection algorithm. If you like, I can explain it to you - in fact I can do it in a way even you could understand," she stated rather confidently.

"What? Are you kidding? You're going to school me?" I wasn't about to walk away from this challenge. "Go for it, kid. Let's see what great mass of knowledge resides in the mind of a tiny, diaper-wearing, .NET internals architect."

Starting Off On the Right Foot
"Well, first you need to understand some of the assumptions made by the garbage collector's designers. I'll use the GC abbreviation going forward to refer to the garbage collector, like we did in the nursery before I came home." Marissa then went on to explain that there are three basic assumptions:

  • Recently created objects typically have a short lifetime: Consider the example of a database connection. You create the ADO.NET object, interact with the database, and once you have the data the connection object typically isn't needed any longer. This is an example of a pattern in which a new object doesn't need to hang around and take up precious resources.
  • Objects that are older typically have a longer lifetime: A good example of this is Windows Forms elements. A tree control that has been populated with information may be around for the lifetime of the application. In this pattern, holding the memory is less costly than repeatedly re-creating and repopulating the tree control.
  • Smaller collections yield better performance: The GC is based on a managed heap, and it is faster to walk a small section of the heap than the entire heap. So any operation that involves only a portion of the heap is going to be faster than one that involves the whole thing.
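
These assumptions can be seen at work in a short sketch. This is a minimal illustration, not real application code: `GenerationalPatterns`, `Cache`, and `ProcessRequest` are hypothetical names, and the generation number reported depends on the runtime.

```csharp
using System;
using System.Collections.Generic;

static class GenerationalPatterns
{
    // Long-lived state, like a populated tree control: it survives
    // collections and is promoted into the older generations.
    static readonly List<string> Cache = new List<string> { "root", "child" };

    static void ProcessRequest()
    {
        // Short-lived state, like a database connection: it becomes
        // garbage as soon as this method returns.
        var buffer = new byte[1024];
        buffer[0] = 42;
    }

    public static int Run()
    {
        ProcessRequest();
        GC.Collect();   // the buffer is unreachable and can be reclaimed
        GC.Collect();   // the cache survives and is promoted each time
        return GC.GetGeneration(Cache);
    }
}
```

On the standard runtimes `GenerationalPatterns.Run()` typically returns 2, showing the long-lived cache settling into the oldest generation while the short-lived buffers disappear cheaply.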

"Well, Marissa, those are some good assumptions, but they really don't tell me much about how it all works," I said. She just frowned like babies do when they are frustrated and want you to just pay attention.

"The next thing you need to understand is the basics of how objects are created and how memory is allocated. Then I can walk you through some of the deeper GC concepts."

"Okay Marissa, it's your show; however you want to do this is fine." It's always a good idea to placate a baby if you can.

She reviewed the object creation process with me and she was right: understanding how objects are created and how memory is allocated started to give me some insight into the internals of the GC. Rather than outline them here I've included her overview in Table 1.

The important thing to keep in mind is that creating a new object results in a newobj IL instruction. This instruction allocates memory and then initializes it; the initialization - setting the object's initial state - is done by the constructor. But since the GC knows nothing about the state of your object, it has no way to clean it up intelligently. Marissa explained that the GC has some tricks to deal with intelligent cleanup, but it is really up to the programmer to implement them. As babies often do, though, she wanted to focus on the basics first.
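
The relationship between `new`, the newobj instruction, and the constructor can be shown with a trivial sketch. The `Connection` class here is made up for illustration, not a real ADO.NET type.

```csharp
using System;

// 'new Connection(...)' compiles to the IL instruction:
//   newobj instance void Connection::.ctor(string)
// which allocates heap memory and then runs the constructor.
var conn = new Connection("example.local");
Console.WriteLine(conn.IsOpen); // prints True

class Connection
{
    public string Host { get; }
    public bool IsOpen { get; private set; }

    // The constructor establishes the object's initial state. The GC
    // knows nothing about this state, which is why any cleanup logic
    // has to come from the programmer.
    public Connection(string host)
    {
        Host = host;
        IsOpen = true;
    }
}
```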

"When your applications create objects, Uncle John, do you find that they create a series of objects at once?" she asked.

"Well, if you mean that objects created within the same scope usually have some type of strong relationship to each other, the answer would be yes."

That was what she was asking. I learned that this natural pattern helps improve performance in the GC. As she explained earlier, "smaller collections yield better performance," so the fact that interrelated objects are near each other from a memory perspective means the GC can typically clean them all up at the same time. How this cleanup occurs, I learned, is based on something called generations.

Coming to Terms with Generations
"As you know, Uncle John, the GC is based on a generational approach." Marissa made her way to the whiteboard and started to draw some figures. "I've created a representation of an empty memory heap here (see Figure 1). When you create new objects, they are added to what is internally known as Generation 0. When the CLR is initialized it sets a size budget for Generation 0 - 250KB, for instance. All new objects are added to Generation 0, with the exception of objects larger than 85KB, which we will talk about later. So let's say some objects are created; the memory heap would then look like this (see Figure 2)."

I scratched my head. "Uh, go on."

"Okay, suppose we create more objects. What do you think happens? Well, if the objects created cause a situation in which we exceed our 250KB size for Generation 0..."

Interrupting, I asked, "So all the objects in Generation 0 can take up a total of 250KB, or whatever the CLR initialized it to, right?"

"Yes, that's right," she replied, and went on without pausing. "So if we exceed the boundary, the GC will move any objects that are reachable from a root to Generation 1, which, like Generation 0, has a memory size limit set by the CLR - about four times that of Generation 0. Once the reachable objects are moved, the GC tears down any remaining objects, leaving Generation 0 empty. The memory is then compacted and the new objects are created."
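
This promote-and-sweep cycle can be watched from code with GC.GetGeneration. A minimal sketch; promotion behavior is runtime-dependent, so treat the commented numbers as typical rather than guaranteed.

```csharp
using System;

var survivor = new object();
Console.WriteLine(GC.GetGeneration(survivor)); // 0: new objects start in Generation 0

GC.Collect(); // survivor is still reachable, so it is promoted rather than torn down
Console.WriteLine(GC.GetGeneration(survivor)); // typically 1

GC.Collect();
Console.WriteLine(GC.GetGeneration(survivor)); // typically 2, the oldest generation

GC.KeepAlive(survivor); // keep the object rooted through both collections
```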

    "Okay, okay. I think I'm starting to understand." I was getting excited - she was making sense. "Let me see if I can guess what happens next," I said. When another set of objects is created, the GC will see if Generation 0 has exceeded its size limit; if it has it will move the objects in Generation 0 to Generation 1, but also check to see if Generation 1 has exceeded its size limits. If moving the objects from Generation 0 to Generation 1 causes Generation 1 to exceed its limits, it will move reachable objects to Generation 2. Upon initialization the CLR also sets a size limit for Generation 2 , probably about twice that of Generation 1. Am I right?"

Marissa nodded and then explained that there is no Generation 3; the .NET garbage collector has exactly three generations, numbered from zero. She also explained that the GC is self-tuning: it can dynamically increase or decrease the size of each generation, which yields better performance. For instance, if the GC determines that you are using a high number of small, short-lived objects, it might reduce the Generation 0 budget and free up resources for other areas. She also expanded on the issue with large objects, those greater than 85KB. Large objects are allocated directly in Generation 2 (on what is known as the large object heap) to avoid the cost of starting in Generation 0, immediately blowing its size limit, and then repeating the process for Generation 1. Creating large objects in Generation 2 helps overall system performance. This is something I didn't know - what a smart little kid.
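
Both facts - exactly three generations, and large objects starting life in the oldest one - are visible through the GC class. A small sketch; on today's runtimes the large-object threshold is 85,000 bytes, and the large object heap is collected together with Generation 2.

```csharp
using System;

// The GC has exactly three generations, numbered 0, 1, and 2.
Console.WriteLine(GC.MaxGeneration); // prints 2

// Small objects begin in Generation 0...
var small = new byte[100];
Console.WriteLine(GC.GetGeneration(small)); // prints 0

// ...while large allocations go straight to the large object heap,
// which is reported as Generation 2 from the moment of creation.
var large = new byte[200_000];
Console.WriteLine(GC.GetGeneration(large)); // typically 2

GC.KeepAlive(small);
GC.KeepAlive(large);
```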

Being Reachable
"So Marissa, when you said the GC checks if the object is reachable, what did you mean?" She explained that the GC builds a graph of all reachable objects. A "reachable" object is one that can still be found from a root - a storage location, such as a static field or a local variable on a thread's stack, that holds a reference to an object on the heap. The assumption is that if something still refers to the object, it is being used, and the GC will not clean it up. On the other hand, objects that are not reachable can be destroyed by the GC to free up memory. When the GC is moving objects between generations, it only moves objects that are still reachable; a move is more expensive than releasing memory, so the fewer objects the GC has to move, the better. After a collection the GC ensures that Generation 0 is empty and all surviving objects live in either Generation 1 or 2.
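
Reachability can be observed with a WeakReference, which tracks an object without acting as a root. A sketch; whether the object is reclaimed immediately can vary by build configuration and JIT, so the commented result is typical rather than guaranteed.

```csharp
using System;
using System.Runtime.CompilerServices;

[MethodImpl(MethodImplOptions.NoInlining)]
static WeakReference AllocateAndDrop()
{
    var temp = new object();        // rooted only by the local 'temp'
    return new WeakReference(temp); // the root disappears when we return
}

var weak = AllocateAndDrop();
GC.Collect();

// With no root left, the object was unreachable and could be destroyed.
Console.WriteLine(weak.IsAlive); // typically False after the collection
```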

Finalizing Objects
Marissa highlighted some of the implications of the GC that are important to software developers. Since the GC has no true knowledge of your object it is important to understand the use of the Finalize and Dispose methods and how they affect the GC and your code.

First, let's realize that there are no deterministic destructors in .NET. It is important to understand this because otherwise you can make false assumptions about how your system behaves. The closest equivalents are Finalize and Dispose. The Finalize method is called by the GC and the Dispose method is designed to be called programmatically.
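
In practice the two are combined in the standard dispose pattern: Dispose for deterministic cleanup, Finalize as a safety net. A sketch with a hypothetical ResourceHolder; a real implementation would release actual handles in the marked spots.

```csharp
using System;

using (var holder = new ResourceHolder())
{
    // work with the resource; Dispose runs deterministically
    // when this block exits
}

class ResourceHolder : IDisposable
{
    private bool _disposed;

    // Deterministic cleanup, called by the programmer or a 'using' block.
    public void Dispose()
    {
        Dispose(disposing: true);
        // We've already cleaned up, so tell the GC to skip the
        // expensive finalization pass for this instance.
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (_disposed) return;
        if (disposing)
        {
            // release managed resources here
        }
        // release unmanaged resources here
        _disposed = true;
    }

    // Nondeterministic safety net, called by the GC at some unknown time.
    ~ResourceHolder() => Dispose(disposing: false);
}
```

Note the call to GC.SuppressFinalize: once Dispose has run, there is nothing left for the finalizer to do, so the object can be reclaimed without the extra finalization machinery described next.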

When a new object is created, objects containing a Finalize method get a special mention in what is known as the finalization list. This list contains pointers to all objects that have a Finalize method. By inspecting this list the GC can determine which objects to call Finalize on and which should simply be deleted. The process of calling Finalize is expensive; therefore, you should use Finalize with care. During the first pass, the GC looks at the heap and determines which objects are not reachable; it then reviews the finalization list, and if it finds that one of the nonreachable objects has a Finalize method, it copies the pointer from the finalization list to what is known as the "freachable queue". The nonreachable object is then moved to the next generation - as stated earlier, Generation 0 always gets emptied. During the next pass the GC finds that the object is still not reachable, and its Finalize method is then executed from the freachable queue.
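
This two-pass dance - survive one collection, get finalized, then be reclaimed on the next - can be forced from code. A sketch; GC.WaitForPendingFinalizers blocks until the finalizer thread has drained the queue, and the exact timing under a debugger or in a Debug build may differ.

```csharp
using System;

var obj = new Finalizable();
obj = null; // drop the only root; the object is now unreachable

GC.Collect();                  // pass 1: its pointer moves from the finalization
                               // list to the freachable queue, and the object
                               // itself survives into the next generation
GC.WaitForPendingFinalizers(); // the finalizer thread drains the queue
GC.Collect();                  // pass 2: the finalized object is reclaimed

Console.WriteLine(Finalizable.FinalizeCalls); // typically 1

class Finalizable
{
    public static int FinalizeCalls;
    ~Finalizable() => FinalizeCalls++;
}
```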

The key concept to take away from this is that you do not know when the GC will call the Finalize method. If you are using unmanaged or expensive resources, they could be around for much longer than you expect. Also, the creation of objects with Finalize methods takes a little longer, since they are not only placed on the heap but also require a pointer to be established in the finalization list. You should also consider the situation in which an object that has a Finalize method holds references to other objects. The GC will not clean up these other objects until after the object with the Finalize method is cleaned up, so it is important to consider downstream object designs when you are implementing systems that use objects with Finalize. In fact, if you are seeing performance or resource issues and cannot understand why something is still referenced, be sure that a parent object doesn't have a Finalize method that is keeping the resource alive. Here is a little pop quiz to see if you are getting Marissa's message.

Years ago I worked on the Argo project, which involved a system that could represent undersea long-distance phone networks. A network wrapping around Africa could involve thousands and thousands of network nodes. Each type of network node was represented as an object and stored in an array. Consider what would happen under .NET and the GC if we populated a structure of some sort with, say, 750 objects in a new version of Argo, and each object had a Finalize method. What do you think would happen when the GC kicked in? Imagine that all the objects were very small, so that even with 750 objects we didn't exceed the Generation 0 size limit. In this situation the GC would need to carry out the finalization semantics for all 750 objects. If you guessed that it would allocate 750 pointers in the finalization list, copy 750 pointers to the freachable queue, move 750 objects to Generation 1, and then call 750 Finalize methods, you guessed right. Now consider what that would mean to performance...
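
The quiz scenario can be sketched directly. Node here is a made-up stand-in for an Argo network-node class; the timing printed is illustrative, and a Debug build may hold some objects alive longer.

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

var nodes = new List<Node>();
for (int i = 0; i < 750; i++) nodes.Add(new Node()); // 750 finalization-list entries
nodes.Clear(); // drop every root

var sw = Stopwatch.StartNew();
GC.Collect();                  // 750 pointers are copied to the freachable queue
GC.WaitForPendingFinalizers(); // 750 Finalize methods run on the finalizer thread
GC.Collect();                  // only now is the memory actually reclaimed
sw.Stop();

Console.WriteLine($"Finalized {Node.Count} nodes in {sw.Elapsed.TotalMilliseconds:F2} ms");

class Node
{
    public static int Count;
    ~Node() => Count++;
}
```

Compare that to the same loop without a finalizer, where the first collection simply reclaims everything in one cheap pass.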

Nap Time
Marissa is starting to interject loud baby wails into her talk on the GC, so I think it is time for her to have a nap. She did promise to explain GC threading implications, strong and weak object references, the use of the Dispose method, object resurrection, programming the GC, and more on the Finalize method in Part 2 of this article.

More Stories By John Gomez

John Gomez, open source editor for .NET Developer's Journal, has over 25 years of software development and architectural experience, and is considered a leader in the design of highly distributed transaction systems. His interests include chaos- and fuzzy-based systems, self-healing and self-reliant systems, and offensive security technologies, as well as artificial intelligence. John started developing software at age 9 and is currently the CTO of Eclipsys Corporation, a worldwide leader in hospital and physician information systems.
