
Conversation

kunalspathak
Contributor

When inlineCaches are created, they are registered with the `threadContext` in an invalidation list, and their number is tracked via `registeredInlineCacheCount` in `threadContext`. When these caches are removed from the invalidation list, we record that number as well, using `unregisteredInlineCacheCount` in `threadContext`. If the ratio of `registeredInlineCacheCount` to `unregisteredInlineCacheCount` is less than 4, we compact the invalidation list by deleting the `unregisteredInlineCacheCount` nodes that no longer hold an inlineCache and returning their memory to the arena.
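A minimal sketch of this bookkeeping, assuming hypothetical names (`InvalidationListNode`, `ThreadContextCounters`, `ShouldCompact`) rather than ChakraCore's actual declarations:

```cpp
// Illustrative sketch only -- not ChakraCore's actual code.
#include <cstdint>

struct InlineCache;  // opaque for this sketch

// Node in the invalidation list; an unregistered node keeps its arena
// memory but no longer points at a live inlineCache.
struct InvalidationListNode
{
    InlineCache* cache;
    InvalidationListNode* next;
};

struct ThreadContextCounters
{
    uint32_t registeredInlineCacheCount = 0;    // bumped when a cache is registered
    uint32_t unregisteredInlineCacheCount = 0;  // bumped when a cache is removed

    // Compact once removed caches are a large enough share of registrations,
    // i.e. registeredInlineCacheCount / unregisteredInlineCacheCount < 4.
    bool ShouldCompact() const
    {
        return unregisteredInlineCacheCount != 0 &&
               registeredInlineCacheCount / unregisteredInlineCacheCount < 4;
    }
};
```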

* Ideally, `unregisteredInlineCacheCount` should always equal the number of inlineCaches removed from the invalidation list (call it `cachesRemoved`). In practice, however, `cachesRemoved > unregisteredInlineCacheCount` most of the time. Because of this, we were not returning `cachesRemoved - unregisteredInlineCacheCount` invalidation-list nodes to the arena, and the memory held by those nodes kept piling up, causing a memory leak. `cachesRemoved` exceeded `unregisteredInlineCacheCount` because a couple of places did not record the removal of inlineCaches in `unregisteredInlineCacheCount`, and we also did not update `unregisteredInlineCacheCount` when bulk-deleting an invalidation list. With these two places fixed, `cachesRemoved == unregisteredInlineCacheCount` holds true.
* `registeredInlineCacheCount` was decremented when we bulk-deleted entire invalidation lists, but it was never decremented when we compacted an invalidation list. Because of this, `registeredInlineCacheCount` kept growing, the compaction condition became harder and harder to meet, and nodes whose inline caches had been invalidated kept holding memory, causing a leak. Both fixes are sketched after this list.
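
Continuing the sketch above, the two fixes boil down to recording every removal in `unregisteredInlineCacheCount` and decrementing `registeredInlineCacheCount` when compaction frees nodes. The `Arena` stand-in and function names below are hypothetical:

```cpp
#include <cassert>
#include <cstdint>

// Stand-in for the real arena allocator.
struct Arena
{
    void Free(void* p) { (void)p; /* the real arena reclaims the node here */ }
};

// Fix 1: every site that removes a cache from the list must record it,
// including the bulk-delete path.
void UnregisterInlineCache(ThreadContextCounters& tc, InvalidationListNode* node)
{
    node->cache = nullptr;
    tc.unregisteredInlineCacheCount++;
}

// Fix 2: compaction must also shrink registeredInlineCacheCount, otherwise
// the ratio test above becomes ever harder to satisfy.
void CompactInvalidationList(ThreadContextCounters& tc,
                             InvalidationListNode*& head, Arena& arena)
{
    uint32_t cachesRemoved = 0;
    InvalidationListNode** link = &head;
    while (*link != nullptr)
    {
        InvalidationListNode* node = *link;
        if (node->cache == nullptr)   // node no longer holds an inlineCache
        {
            *link = node->next;       // unlink the empty node
            arena.Free(node);         // return its memory to the arena
            cachesRemoved++;
        }
        else
        {
            link = &node->next;
        }
    }
    // With all removal sites recorded, the two counts agree.
    assert(cachesRemoved == tc.unregisteredInlineCacheCount);
    tc.registeredInlineCacheCount -= cachesRemoved;
    tc.unregisteredInlineCacheCount = 0;
}
```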

Also did minor code cleanup and added asserts.

@kunalspathak
Contributor Author

@pleath, please review.

```diff
@@ -513,10 +513,6 @@ namespace Js
     template<bool CheckLocal, bool CheckProto, bool CheckAccessor>
     void CloneInlineCacheToEmptySlotInCollision(Type *const type, uint index);

-#ifdef CLONE_INLINECACHE_TO_EMPTYSLOT
```
Contributor

> `CLONE_INLINECACHE_TO_EMPTYSLOT`

Please don't delete this code. There has been an instance where we ended up using some code under this ifdef. It is still good to keep around for experimenting with.

Contributor Author


Alright. Thanks for letting me know.

chakrabot merged commit dc43338 into chakra-core:master on Sep 12, 2016
chakrabot pushed a commit that referenced this pull request Sep 12, 2016
…dationList

Merge pull request #1554 from kunalspathak:memoryleak
