Initial redeferral heuristic #1674
Closed
Conversation
…treat symbolic links Merge pull request chakra-core#1393 from antonmes:linux Synchronize changes made by @adityamandaleeka in dotnet/coreclr#4922 Fixes build on macOS Sierra: `ChakraCore/pal/src/cruntime/filecrt.cpp:355:12: error: 'syscall' is deprecated: first deprecated in macOS 10.12 - syscall(2) is unsupported; please switch to a supported interface. For SYS_kdebug_trace use kdebug_signpost(). [-Werror,-Wdeprecated-declarations]`
…ut bus error [pal] and format precision Merge pull request chakra-core#1423 from obastemur:unused issue: [slow running] The JSON/simple.withLog test was randomly failing on xplat [also crashing]. CC provides the string length internally; however, the customized PAL interface expects null termination. Apart from the fix, also removed unused PAL source files.
…tput for xplat Merge pull request chakra-core#1414 from obastemur:fix_biops_test
Merge pull request chakra-core#1430 from obastemur:m_merge
… build caching Merge pull request chakra-core#1431 from obastemur:fix_cmake_cache Cmake fails to update references
Merge pull request chakra-core#1443 from obastemur:linux_merge_master_3
…w just warnings) Merge pull request chakra-core#1442 from Fishrock123:fix-mac-compile-errors Fixes chakra-core#1436
Also added a simple sort test to trigger hybrid_sort implementation
…xpression syntax Merge pull request chakra-core#1468 from ianwjhalliday:fix1465 Call syntax was being disallowed after a call expression. This appears to be a mistake, perhaps a copy-and-paste error. Fixed by simply removing the line disallowing call syntax in class expression parsing. Fixes chakra-core#1465
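The exact repro from chakra-core#1465 isn't quoted above, but a plausible shape of the affected syntax is a class expression whose extends clause chains one call after another — legal per the grammar (the extends clause takes any LeftHandSideExpression), yet the kind of thing an over-eager parser check would reject:

```javascript
// Hypothetical repro shape (the actual case in chakra-core#1465 may differ):
// an extends clause containing a call expression followed by another call.
function makeFactory() {
  return function () {
    return class Base { greet() { return "hi"; } };
  };
}

// `makeFactory()()` is call syntax applied after a call expression,
// parsed inside a class expression.
const C = class extends makeFactory()() { };
console.log(new C().greet()); // "hi"
```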
…guments, caller and callee accessor functions Merge pull request chakra-core#1464 from agarwal-sandeep:accessorfunction Accessing the arguments, caller, and callee accessors returned different functions that differed only in their error message. The spec says they should be treated as the same for == and ===, so we had special handling in the interpreter. Handling the same case in the JIT is overkill, so I unified all three functions into one and made the error message the same.
…ble/disable some tests Merge pull request chakra-core#1455 from obastemur:ch_platform See comments on disabled test case Attempt to fix chakra-core#1397
…tion to fix the symbol capturing issues Merge pull request chakra-core#1456 from aneeshdk:AsyncFuncAsGenerator This change removes the function wrapper implementation of async functions. Async functions now behave similarly to generator functions. The actual function body is in the scriptFunction field. Symbols no longer need to be forced to slots. The asyncspawn opcode and parse node are no longer needed. In the backend, async functions are also added to the generator JIT flag. await is desugared into yield.
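The await-into-yield desugaring can be illustrated in plain JavaScript with the classic spawn-style driver — a sketch of the general technique, not ChakraCore's internal implementation: the async function body becomes a generator, each `await` becomes a `yield`, and a driver pumps the generator with resolved values.

```javascript
// Drives a generator the way an async function is driven:
// each yielded value is awaited and its result is fed back in.
function spawn(genFn) {
  return new Promise((resolve, reject) => {
    const gen = genFn();
    function step(method, arg) {
      let result;
      try {
        result = gen[method](arg);
      } catch (e) {
        return reject(e); // an uncaught throw rejects the promise
      }
      if (result.done) return resolve(result.value);
      Promise.resolve(result.value).then(
        v => step("next", v),    // resume with the awaited value
        e => step("throw", e));  // or throw the rejection back in
    }
    step("next", undefined);
  });
}

// `async function f() { const a = await g(); return a + 1; }` becomes:
const f = () => spawn(function* () {
  const a = yield Promise.resolve(1); // desugared `await`
  return a + 1;
});
f().then(v => console.log(v)); // prints 2
```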
… using the first parent
Merge pull request chakra-core#1477 from Cellule:merge_changes Change the git log command to list the files changed on a merge commit using the first parent. The list of changed files can be useful when consumed by tools trying to understand the nature of the change in that build.
…ed library) support [all tests pass] Merge pull request chakra-core#1473 from obastemur:osx_dylib
Merge pull request chakra-core#1550 from obastemur:tofixed_bug

```
print(1.25499999999999989342.toFixed(2));
print(1.255.toFixed(2));
```

The code above should print 1.25, but it was printing 1.26. Fixes chakra-core#1511
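For context on why 1.25 is the correct answer: the nearest IEEE-754 double to the literal `1.255` is 1.25499999999999989…, so `toFixed(2)` must round down. A quick check in any conforming engine:

```javascript
// Both literals map to the same double, which is just below 1.255,
// so toFixed(2) rounds down to "1.25" in a conforming engine.
console.log((1.255).toFixed(2));                  // "1.25"
console.log((1.25499999999999989342).toFixed(2)); // "1.25"
```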
Merge pull request chakra-core#1556 from obastemur:memmove_warn This fails on CI now and produces an error. See http://dotnet-ci.cloudapp.net/job/Microsoft_ChakraCore/job/master/job/ubuntu_linux_release_prtest/717/
… String.charAt Merge pull request chakra-core#1567 from pleath:8532848 We optimistically load a cached value into the result operand. But if the cached value is null, we'll have to call a helper to complete the operation, and if the result operand and string source have the same symbol, then we have overwritten the string we pass to the helper with null. Use a temp in such a case and copy it to the result operand only if it is non-null.
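A minimal JavaScript shape of the hazard described above (illustrative only; actually hitting the bug required specific JIT conditions): the destination of `charAt` aliases its string source, so an optimistically loaded null result must not clobber the source before the helper call runs.

```javascript
// The result operand and the string source share a symbol:
// `s` is both the charAt receiver and the assignment target.
function firstChar(s) {
  s = s.charAt(0); // result overwrites the source operand
  return s;
}
console.log(firstChar("abc")); // "a"
```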
…ache invalidationList Merge pull request chakra-core#1554 from kunalspathak:memoryleak When inlineCaches are created, they are registered in `threadContext` in an invalidation list and tracked via `registeredInlineCacheCount` in `threadContext`. When these caches are removed from the invalidation list, we record that number as well, using `unregisteredInlineCacheCount` in `threadContext`. If the ratio of `registeredInlineCacheCount` to `unregisteredInlineCacheCount` is less than 4, we compact the invalidation list by deleting `unregisteredInlineCacheCount` nodes that don't hold an inlineCache and returning their memory to the arena. * Ideally, `unregisteredInlineCacheCount` should always match the number of inlineCaches that were removed from the invalidation list (aka `cachesRemoved`). However, today `cachesRemoved > unregisteredInlineCacheCount` most of the time. Because of this, we were not returning `cachesRemoved - unregisteredInlineCacheCount` invalidation-list nodes to the arena, and the memory taken up by these nodes kept piling up, leading to a memory leak. The reason for `cachesRemoved > unregisteredInlineCacheCount` was that in a couple of places we were not recording the removal of inlineCaches in `unregisteredInlineCacheCount`. Also, we didn't update `unregisteredInlineCacheCount` when we bulk-deleted the invalidation list. After fixing these two places, `cachesRemoved == unregisteredInlineCacheCount` holds true. * `registeredInlineCacheCount` was reduced when we bulk-deleted entire invalidation lists, but it was never reduced when we compacted an invalidation list. Because of this, `registeredInlineCacheCount` kept growing and it became harder and harder to meet the compaction condition, leaving nodes with invalid inline caches holding memory and causing a leak. Also did minor code cleanup and added asserts.
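The bookkeeping can be sketched in a few lines. This is an illustrative model with invented names, not ChakraCore's actual C++: compaction fires when fewer than one in four registered caches is still live, and — as the fixes above require — both counters must be updated on every unregistration and again at compaction time.

```javascript
// Illustrative model of the invalidation-list counters (hypothetical names).
class InvalidationList {
  constructor() {
    this.nodes = [];           // each node: { cache: object | null }
    this.registeredCount = 0;
    this.unregisteredCount = 0;
  }
  register(cache) {
    this.nodes.push({ cache });
    this.registeredCount++;
  }
  unregister(node) {
    node.cache = null;         // node stays in the list until compaction
    this.unregisteredCount++;  // first fix: record *every* removal
  }
  maybeCompact() {
    if (this.unregisteredCount === 0) return;
    // Compact when fewer than 1 in 4 registered caches are still live.
    if (this.registeredCount / this.unregisteredCount < 4) {
      this.nodes = this.nodes.filter(n => n.cache !== null);
      // Second fix: reduce registeredCount on compaction too, so the
      // ratio condition can be met again later.
      this.registeredCount -= this.unregisteredCount;
      this.unregisteredCount = 0;
    }
  }
}
```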
…ferent context Merge pull request chakra-core#1565 from kunalspathak:cctxproxybug We set `target` only if the proxy is from the same context, but the assert was not taking cross-context into account, so we were hitting it.
…proxy ownkeys trap snapshot array Merge pull request chakra-core#1544 from leirocks:proxyenumbug
…oopBodyJobManager Merge pull request chakra-core#1522 from rajatd:freeLoopBodyJobBug Need to reset FreeLoopBodyJobManager::waitingForStackJob when stack job is processed.
…iew 4 Merge pull request chakra-core#1553 from tcare:vs15 We'll use the Visual Studio 2015 platform toolset for now, this will just allow us to use the IDE without it complaining.
…Print()` Merge pull request chakra-core#1570 from kunalspathak:removedtraces The trace is only printed if the host has `idleGC` implemented, so we hadn't noticed it so far in `ch.exe`, but it was showing up in node+chakracore after my idleGC changes. This will fix nodejs/node-chakracore#115
Merge pull request chakra-core#1533 from akroshg:sab This work implements a prototype version of `SharedArrayBuffer`. The spec is at stage 2. `SharedArrayBuffer` is behind the `ESSharedArrayBuffer` (or -sab) flag. Highlights:
* Introduce the `SharedArrayBuffer` type and its implementation.
* Refactor `ArrayBuffer` into a common base class (`ArrayBufferBase`) so that both `SharedArrayBuffer` and `ArrayBuffer` leverage the common functionality and machinery.
* Introduce the (spec'ed) `Atomics` object to provide atomic operations on the shared buffer. Currently it uses Win32-based APIs. All 12 methods of `Atomics` are implemented.
* All `TypedArray` views are changed to work with `SharedArrayBuffer` as well.
* The `Serialization/Deserialization` implementation is in a different repo.
* Added test cases to validate most of the functionality.
sharedarraybuffer - initial work
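For readers unfamiliar with the proposal, here is the basic shape of the API being implemented (standard proposal surface, not ChakraCore internals; in ChakraCore this code would require the -sab flag per the message above):

```javascript
// A SharedArrayBuffer is raw shared memory; TypedArray views read it,
// and Atomics provides race-free operations on those views.
const sab = new SharedArrayBuffer(16); // 16 bytes of shared memory
const view = new Int32Array(sab);      // a TypedArray view over it

Atomics.store(view, 0, 40);            // atomic write
Atomics.add(view, 0, 2);               // atomic read-modify-write
console.log(Atomics.load(view, 0));    // 42
```

In a real program the buffer would be shared with a worker, and the same `Atomics` calls keep concurrent access coherent.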
Merge pull request chakra-core#1568 from obastemur:icu_static Download ICU 57 [if the user accepts the license], build it, and link it to ChakraCore statically
…in ChakraCore Merge pull request chakra-core#1581 from digitalinfinity:min_leaks 1. BitVectors were not allocated as leaf, and this could cause the GC to interpret their contents as pointer references. Switched to leaf. 2. Temporary guest arenas that were being cached could still get scanned, even if they weren't actively being used. Added a mechanism to unregister such an arena when it wasn't in use.
…s generated by VS, and reorganize .gitignore file Merge pull request chakra-core#1580 from dilijev:ignore Added *.bak *.orig Added many more types of files that are sometimes generated by VS. Organized ignore filters into categories, and sorted lines within categories.
…dalone proxy from which FunctionProxy does not inherit, and FunctionProxy is the basis for all the representations of user functions (FunctionBody, etc.). FunctionInfo still points to the FunctionProxy that implements the function, and FunctionProxy points to FunctionInfo. Do this to facilitate re-deferral and to maximize the memory benefit.
eligible for deferred parsing (e.g., not arrow functions, not functions-in-block). This is experimental behavior, off by default. Define an 'on' mode in which all eligible functions are redeferred on GC, as well as a 'stress' mode in which all candidates are redeferred on each stack probe. This change is built on a previous PR that refactors the FunctionBody hierarchy to make it easier to toggle between deferred and fully-compiled states.
@pleath you might want to change your base branch to master
Out-of-date PR, incorporated into #1585.
Implement a basic heuristic for redeferral, based on experimental work that Rajat did a while back. The heuristic uses five constants: an initial delay, which is the number of GCs we perform before we start checking for redeferral candidates; the number of GCs to wait between redeferral checks while we're in the startup phase; the number of GCs that must have passed since a function was executed before we redefer it in the startup phase; and the same wait and inactivity constants for the main phase, to which we transition after the startup phase ends. Todo: emit jitted code to reset a function's inactivity count when the function is executed, and/or avoid redeferring a jitted function if we think it may be executed again.
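The shape of the heuristic can be sketched as follows. Constant names and values are invented here for illustration; the real ones live in the PR's source.

```javascript
// Hypothetical sketch of the five-constant redeferral heuristic.
const INITIAL_DELAY = 5;        // GCs before any redeferral checks begin
const STARTUP_WAIT = 2;         // GCs between checks during the startup phase
const STARTUP_INACTIVITY = 4;   // idle GCs before redeferring (startup phase)
const MAIN_WAIT = 10;           // GCs between checks after startup ends
const MAIN_INACTIVITY = 16;     // idle GCs before redeferring (main phase)

// Decide whether this GC should scan for redeferral candidates at all.
function shouldCheckForRedeferral(gcCount, lastCheckGc, inStartupPhase) {
  if (gcCount < INITIAL_DELAY) return false;
  const wait = inStartupPhase ? STARTUP_WAIT : MAIN_WAIT;
  return gcCount - lastCheckGc >= wait;
}

// Decide whether a single function has been inactive long enough to redefer.
// fn.lastExecutedGc is the GC count at the function's most recent execution.
function shouldRedefer(fn, gcCount, inStartupPhase) {
  const inactivity = inStartupPhase ? STARTUP_INACTIVITY : MAIN_INACTIVITY;
  return gcCount - fn.lastExecutedGc >= inactivity;
}
```

The todo items fit this model naturally: jitted code resetting the inactivity count corresponds to updating `lastExecutedGc` on execution, and skipping jitted functions would be one more early-out in `shouldRedefer`.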