|
|
Description: Use MemoryChunk-based allocation for deoptimization entry code
This is done by first committing the deoptimization entry code with a minimal
area size (OS::CommitPageSize) and later using CommitArea to adjust the size.
Committed: http://code.google.com/p/v8/source/detail?r=13494
Committed: https://code.google.com/p/v8/source/detail?r=13532
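The mechanism the description refers to can be sketched with raw POSIX calls (a minimal, Linux-oriented sketch; the page size stands in for OS::CommitPageSize(), and the actual CL goes through V8's VirtualMemory and MemoryChunk rather than mmap directly):

  #include <sys/mman.h>
  #include <unistd.h>
  #include <cassert>
  #include <cstddef>

  int main() {
    size_t page = static_cast<size_t>(sysconf(_SC_PAGESIZE));
    size_t reserved = 64 * page;  // room for the full deopt table

    // Reserve address space only; nothing is committed yet.
    void* base = mmap(nullptr, reserved, PROT_NONE,
                      MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    assert(base != MAP_FAILED);

    // Commit a minimal area first, as the CL does initially...
    assert(mprotect(base, page, PROT_READ | PROT_WRITE) == 0);

    // ...and later grow the committed area in place, as CommitArea does.
    assert(mprotect(base, 8 * page, PROT_READ | PROT_WRITE) == 0);

    munmap(base, reserved);
    return 0;
  }

Reserving with PROT_NONE keeps the full table's address range stable while only the committed prefix consumes memory, which is what lets the committed area grow in place later.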
Patch Set 1
Patch Set 2 (Total comments: 8)
Patch Set 3 (Total comments: 20)
Patch Set 4 (Total comments: 5)
Patch Set 5 (Total comments: 21)
Patch Set 6 (Total comments: 23)
Patch Set 7
Patch Set 8 (Total comments: 13)
Patch Set 9
Patch Set 10 (Total comments: 4)
Patch Set 11
Patch Set 12
Messages
Total messages: 32 (0 generated)
If the deopt table is in the code range for X64, we could use a direct jump or conditional branch in the optimized code when generating DeoptimizeIf or in GenerateJumpTable.
Fixed x64.debug.check. There are no failures for x64.debug.check or x64.release.check. The performance data is not affected:

Before: Richards: 17467, DeltaBlue: 21568, Crypto: 23028, RayTrace: 19906, EarleyBoyer: 36397, RegExp: 5466, Splay: 6868, NavierStokes: 24512 ---- Score (version 7): 16606

After: Richards: 17482, DeltaBlue: 21310, Crypto: 23088, RayTrace: 20067, EarleyBoyer: 36640, RegExp: 5491, Splay: 6878, NavierStokes: 24462 ---- Score (version 7): 16627
Apart from the comments below, I have trouble understanding what the point of the whole CL is: what problem exactly is it trying to solve?

https://codereview.chromium.org/11566011/diff/6001/src/deoptimizer.h
File src/deoptimizer.h (right):

https://codereview.chromium.org/11566011/diff/6001/src/deoptimizer.h#newcode105
src/deoptimizer.h:105: #if defined(V8_TARGET_ARCH_X64)
Having architecture-specific stuff here in the architecture-independent parts is the wrong approach, leading in the wrong direction. I would strongly object to any such change... Our goal should be making the deoptimizer code more architecture-independent; we already have *tons* of duplicated boilerplate code.

https://codereview.chromium.org/11566011/diff/6001/src/deoptimizer.h#newcode434
src/deoptimizer.h:434: #if V8_TARGET_ARCH_MIPS
Not from this CL, but the same comment holds. This should be moved/refactored.
Sven, thanks for your review. This CL wants to put the deoptimization entries into the X64 code range so that DeoptimizeIf in lithium-codegen-x64.cc can use a direct jump or conditional branch to the deoptimization entry. By doing this, we can avoid generating the jump table in lithium-codegen-x64.cc and save code size.
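The code-size argument rests on x64 branch encoding: a near jump (jmp rel32) carries a signed 32-bit displacement, so it can only reach a deopt entry inside the same 2GB code range. A small illustrative check (hypothetical helper, not V8 code):

  #include <cstdint>
  #include <cstdio>

  // A direct branch is only encodable when the displacement fits in 32 bits.
  bool FitsInNearJump(uint64_t from_pc, uint64_t entry) {
    int64_t disp = static_cast<int64_t>(entry) - static_cast<int64_t>(from_pc);
    return disp >= INT32_MIN && disp <= INT32_MAX;
  }

  int main() {
    // Entry in the same 2GB code range: direct jmp rel32 works.
    printf("%d\n", FitsInNearJump(0x100000000, 0x100040000));  // prints 1
    // Entry in an unrelated mapping ~16GB away: needs the jump-table fallback.
    printf("%d\n", FitsInNearJump(0x100000000, 0x500000000));  // prints 0
    return 0;
  }

When the displacement does not fit, the code generator must materialize the 64-bit target indirectly, which is exactly the jump-table code size this CL wants to avoid.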
This CL seems to work really hard to duplicate functionality that already exists. If you look at MemoryAllocator::AllocateChunk, it already tries to put all EXECUTABLE allocations in the CodeRange if it exists. Because of that, the functionality you want is reduced to an ordering problem. If you can ensure that the CodeRange exists and is initialized before you request the space for the deopt tables, then you can use MemoryAllocator::AllocateChunk to give you the memory you need. You will probably need to pass through a flag that prevents committing the executable pages and do that later, but that shouldn't be too hard. With this approach, a very large amount of the code in this CL is unnecessary.

I also feel very strongly that we should allocate the deopt entry tables in the same way on all platforms; there should be nothing specific about x64. Please make this CL cross-platform.

https://codereview.chromium.org/11566011/diff/6001/src/deoptimizer.h
File src/deoptimizer.h (right):

https://codereview.chromium.org/11566011/diff/6001/src/deoptimizer.h#newcode106
src/deoptimizer.h:106: Address eager_deoptimization_entry_start_;
Is there any reason why this shouldn't work identically on all platforms, taking the deopt tables from the CodeSpace? If not, please make this cross-platform.

https://codereview.chromium.org/11566011/diff/6001/src/spaces.cc
File src/spaces.cc (right):

https://codereview.chromium.org/11566011/diff/6001/src/spaces.cc#newcode212
src/spaces.cc:212: bool committed) {
nit: s/committed/commit

https://codereview.chromium.org/11566011/diff/6001/src/spaces.cc#newcode300
src/spaces.cc:300: // ASSERT(size_ == 0);
Why this change?

https://codereview.chromium.org/11566011/diff/6001/src/spaces.cc#newcode506
src/spaces.cc:506: MemoryChunk* MemoryAllocator::CommitChunkInCodeRange(Address start,
Yikes! This is copy-pasted from MemoryAllocator::AllocateChunk. At the very least, you should refactor to share functionality, but please see my overall comments about this approach.
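A hedged sketch of the flow proposed here, with stub types in place of the real V8 classes (the commit flag on AllocateChunk is the hypothetical addition under discussion, not an existing parameter):

  #include <cstddef>

  struct MemoryChunkStub {};  // stands in for v8::internal::MemoryChunk

  struct AllocatorStub {
    bool code_range_ready = false;
    void EnsureCodeRange() { code_range_ready = true; }
    MemoryChunkStub* AllocateChunk(size_t size, bool executable, bool commit) {
      // The real AllocateChunk already places EXECUTABLE requests in the
      // CodeRange; the hypothetical flag would merely defer committing pages.
      (void)size; (void)executable; (void)commit;
      return code_range_ready ? new MemoryChunkStub() : nullptr;
    }
  };

  MemoryChunkStub* AllocateDeoptTable(AllocatorStub* a, size_t table_size) {
    a->EnsureCodeRange();  // the ordering requirement: CodeRange comes first
    return a->AllocateChunk(table_size, /*executable=*/true, /*commit=*/false);
  }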
danno, thanks for your review. The main reason that I did not pass through a flag to prevent committing the executable pages is that MemoryChunk only works after the memory is committed. So for the reservation, I returned the reserved address and size, and when committing, I commit the reserved memory first and then return an initialized MemoryChunk, which will be used when disposing. We might need a new interface other than MemoryChunk to reserve memory first and commit it later. I am reading the source code to see how to create such an interface and how to manage it in the V8 memory system.
1. Updated the CL to refactor the existing code to decouple reserving and committing aligned memory and chunks.
2. Decoupled allocation and creation of deopt tables in the code range for X64 and in aligned memory for the other architectures.

https://codereview.chromium.org/11566011/diff/6001/src/spaces.cc
File src/spaces.cc (right):

https://codereview.chromium.org/11566011/diff/6001/src/spaces.cc#newcode300
src/spaces.cc:300: // ASSERT(size_ == 0);
On 2012/12/20 10:18:35, danno wrote:
> Why this change?

"make x64.debug.check" has some errors. When I debugged one case, I found out size_ is equal to size_executable_. Could you talk a little about the comment "TODO(gc) this will be true again when we fix FreeMemory" here and "// TODO(gc) make code_range part of memory allocator?" several lines below?
https://codereview.chromium.org/11566011/diff/6001/src/spaces.cc
File src/spaces.cc (right):

https://codereview.chromium.org/11566011/diff/6001/src/spaces.cc#newcode300
src/spaces.cc:300: // ASSERT(size_ == 0);
On 2012/12/24 14:46:46, haitao.feng wrote:
> On 2012/12/20 10:18:35, danno wrote:
> > Why this change?
>
> "make x64.debug.check" has some errors. When I debugged one case, I found out
> size_ is equal to size_executable_. Could you talk a little about the comment
> "TODO(gc) this will be true again when we fix FreeMemory" here and "// TODO(gc)
> make code_range part of memory allocator?" several lines below?

The point of this comment is that pages allocated directly from the CodeRange are managed by the code range, not by the MemoryAllocator. This suggests that any allocations, including code allocations, probably shouldn't adjust either size_ or executable_size_ on the MemoryAllocator, and neither should FreeMemory calls on them.

https://codereview.chromium.org/11566011/diff/11001/src/deoptimizer.cc
File src/deoptimizer.cc (right):

https://codereview.chromium.org/11566011/diff/11001/src/deoptimizer.cc#newcode50
src/deoptimizer.cc:50: eager_deoptimization_entry_start_ = allocator->ReserveChunk(deopt_table_size,
Please don't store into a member variable and change it after the fact. Set a local variable and store the member variable only once with the tweaked address, i.e.:

Address chunk = allocator->ReserveChunk(...);
eager_deoptimization_entry_start_ = chunk + MemoryAllocator::CodePageAreaStartOffset();

https://codereview.chromium.org/11566011/diff/11001/src/deoptimizer.cc#newcode77
src/deoptimizer.cc:77: delete eager_deoptimization_entry_code_;
These must be destructed _after_ the memory chunks, mustn't they, since they are initialized before?

https://codereview.chromium.org/11566011/diff/11001/src/deoptimizer.cc#newcod...
src/deoptimizer.cc:1607: MemoryChunk** chunk = type == EAGER
Can you please not use the conditional operator here to be tricky with the assignment address? Just store the result of CommitChunk into chunk, which is just a MemoryChunk*, and use an if statement afterwards to assign to either data->eager_deoptimization_chunk_ or data->lazy_deoptimization_chunk_.

https://codereview.chromium.org/11566011/diff/11001/src/deoptimizer.cc#newcod...
src/deoptimizer.cc:1610: if (*chunk == NULL) {
I don't think this is right. If the deopt table grows (which it does), this routine gets called multiple times. In that case, you need to make sure that the entire grown table is committed. Here you only commit once with the initial table size. Or am I missing something?

https://codereview.chromium.org/11566011/diff/11001/src/deoptimizer.h
File src/deoptimizer.h (right):

https://codereview.chromium.org/11566011/diff/11001/src/deoptimizer.h#newcode117
src/deoptimizer.h:117: VirtualMemory* lazy_deoptimization_entry_code_;
Rename this and the one above to XXX_memory_ rather than XXX_code_.

https://codereview.chromium.org/11566011/diff/11001/src/platform.h
File src/platform.h (right):

https://codereview.chromium.org/11566011/diff/11001/src/platform.h#newcode387
src/platform.h:387: void Set(void* address, size_t size) {
I don't think you need this, see the comment in spaces.cc.

https://codereview.chromium.org/11566011/diff/11001/src/spaces.cc
File src/spaces.cc (right):

https://codereview.chromium.org/11566011/diff/11001/src/spaces.cc#newcode290
src/spaces.cc:290: // ASSERT(size_ == 0);
This is _not_ safe to remove. Either you are leaking somewhere, or the logic to track size_/executable_size_ is wrong. If size_ is executable_size_ when you tear down, then you didn't release something with FreeMemory, or (more likely) FreeMemory/ReserveAlignedMemory needs to be adjusted to properly manage size_/executable_size_ uniformly in the case that the memory comes from the code range (either always account for the memory in the MemoryAllocator's statistics or never).

https://codereview.chromium.org/11566011/diff/11001/src/spaces.cc#newcode350
src/spaces.cc:350: VirtualMemory reservation;
Remove.

https://codereview.chromium.org/11566011/diff/11001/src/spaces.cc#newcode361
src/spaces.cc:361: reservation.Set(static_cast<void*>(base), reserved);
This isn't correct. The VirtualMemory from the code range is managed by the CodeRange itself. By adding the Set method, you've made it possible to double-manage a VirtualMemory block. I think you should just create an empty VirtualMemory, i.e.:

VirtualMemory empty;
controller->TakeControl(empty);

The calling code needs to be able to deal with the fact that it's not responsible for the memory management of a request.

https://codereview.chromium.org/11566011/diff/11001/src/spaces.cc#newcode369
src/spaces.cc:369: reservation.TakeControl(&temp);
Replace with: controller->TakeControl(temp);

https://codereview.chromium.org/11566011/diff/11001/src/spaces.cc#newcode370
src/spaces.cc:370: size_ += reservation.size();
temp.size()

https://codereview.chromium.org/11566011/diff/11001/src/spaces.cc#newcode376
src/spaces.cc:376: controller->TakeControl(&reservation);
Remove.
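The ownership hand-off danno sketches can be modeled with a toy stand-in for VirtualMemory (the real class lives in src/platform.h; this only illustrates TakeControl and the empty-reservation idea): CodeRange-backed memory stays owned by the CodeRange, so the caller receives a reservation that owns nothing and must not try to release it.

  #include <cstddef>

  class ToyVirtualMemory {
   public:
    ToyVirtualMemory() : address_(nullptr), size_(0) {}
    ~ToyVirtualMemory() { /* the real class would ReleaseRegion(address_) */ }
    bool IsReserved() const { return address_ != nullptr; }
    void TakeControl(ToyVirtualMemory* from) {  // steal ownership
      address_ = from->address_;
      size_ = from->size_;
      from->address_ = nullptr;
      from->size_ = 0;
    }
   private:
    void* address_;
    size_t size_;
  };

  void ReserveFromCodeRange(ToyVirtualMemory* controller) {
    ToyVirtualMemory empty;           // owns nothing
    controller->TakeControl(&empty);  // the caller ends up owning nothing too
  }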
danno, thanks for your review. In this CL, I have used VirtualMemory to store the reserved address and size from the code range to keep the existing interfaces. Before the memory chunk is initialized, I reset the VirtualMemory so the reserved address is managed by the code range only. I will read more on ASSERT(size_ == 0) following your comments and try to get ASSERT(size_executable_ == 0) back as well.

https://codereview.chromium.org/11566011/diff/11001/src/deoptimizer.cc
File src/deoptimizer.cc (right):

https://codereview.chromium.org/11566011/diff/11001/src/deoptimizer.cc#newcode50
src/deoptimizer.cc:50: eager_deoptimization_entry_start_ = allocator->ReserveChunk(deopt_table_size,
In this CL, I am using eager_deoptimization_entry_code_ and lazy_deoptimization_entry_code_ to store the reserved address and reserved size. The return value of allocator->ReserveChunk is the reserved address if the reservation is successful.

https://codereview.chromium.org/11566011/diff/11001/src/deoptimizer.cc#newcode77
src/deoptimizer.cc:77: delete eager_deoptimization_entry_code_;
When a trunk is returned, the trunk will take control of eager_deoptimization_entry_code_ and lazy_deoptimization_entry_code_. The address in eager_deoptimization_entry_code_ and lazy_deoptimization_entry_code_ will be NULL.

https://codereview.chromium.org/11566011/diff/11001/src/deoptimizer.cc#newcod...
src/deoptimizer.cc:1610: if (*chunk == NULL) {
If I read the code right, table_size is Deoptimizer::GetMaxDeoptTableSize. So we need to commit memory only once, but we do need to do a memcpy when the deopt table grows.

https://codereview.chromium.org/11566011/diff/11001/src/platform.h
File src/platform.h (right):

https://codereview.chromium.org/11566011/diff/11001/src/platform.h#newcode387
src/platform.h:387: void Set(void* address, size_t size) {
I am using this to get a virtual memory from the code range. Otherwise I need to create a struct to record the reserved address and size.

https://codereview.chromium.org/11566011/diff/11001/src/spaces.cc
File src/spaces.cc (right):

https://codereview.chromium.org/11566011/diff/11001/src/spaces.cc#newcode361
src/spaces.cc:361: reservation.Set(static_cast<void*>(base), reserved);
For the double-management, when the memory chunk is created, I reset the virtual memory if it is in the code range. The eager_deoptimization_entry_code_ and lazy_deoptimization_entry_code_ are reset there.
https://codereview.chromium.org/11566011/diff/11001/src/deoptimizer.cc
File src/deoptimizer.cc (right):

https://codereview.chromium.org/11566011/diff/11001/src/deoptimizer.cc#newcode77
src/deoptimizer.cc:77: delete eager_deoptimization_entry_code_;
I don't know what you mean by "trunk". The more I think about this, you shouldn't be using a VirtualMemory to store this at all. If you need to store the base and size, then store those explicitly rather than wrapping them in a VirtualMemory object. You don't actually manage the deopt table's VirtualMemory, so you shouldn't act like you do.

On 2012/12/28 15:04:54, haitao.feng wrote:
> When a trunk is returned, the trunk will take control of
> eager_deoptimization_entry_code_ and lazy_deoptimization_entry_code_. The
> address in eager_deoptimization_entry_code_ and lazy_deoptimization_entry_code_
> will be NULL.

https://codereview.chromium.org/11566011/diff/11001/src/platform.h
File src/platform.h (right):

https://codereview.chromium.org/11566011/diff/11001/src/platform.h#newcode387
src/platform.h:387: void Set(void* address, size_t size) {
You don't need a struct, just store the base and size. This tuple of information is _not_ a VirtualMemory object! VirtualMemory objects _always_ call ReleaseRegion on their address() on destruction if address() is not zero... and that will happen in your case. Please, please don't abuse VirtualMemory, which is meant to reserve and manage a range of virtual addresses, for something it is not.

On 2012/12/28 15:04:54, haitao.feng wrote:
> I am using this to get a virtual memory from code range. Otherwise I need to
> create a struct to record the reserved address and size.

https://codereview.chromium.org/11566011/diff/11001/src/spaces.cc
File src/spaces.cc (right):

https://codereview.chromium.org/11566011/diff/11001/src/spaces.cc#newcode361
src/spaces.cc:361: reservation.Set(static_cast<void*>(base), reserved);
See previous comments. Please, please don't abuse the VirtualMemory object as something it is not.

On 2012/12/28 15:04:54, haitao.feng wrote:
> For the double-management, when the memory chunk is created, I reset the virtual
> memory if it is in the code range. The eager_deoptimization_entry_code_ and
> lazy_deoptimization_entry_code_ are reset there.
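A minimal sketch of the fix being asked for, assuming hypothetical field names: keep plain base/size bookkeeping instead of a VirtualMemory whose destructor releases memory the deoptimizer does not own.

  #include <cstddef>

  typedef unsigned char* Address;  // stand-in for v8::internal::Address

  struct DeoptimizerDataSketch {
    // Bookkeeping only; the MemoryChunk/CodeRange owns the actual mapping,
    // so nothing here releases memory on destruction.
    Address eager_deoptimization_entry_start_ = nullptr;
    size_t eager_deoptimization_entry_size_ = 0;
  };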
danno, Happy New Year! Thanks for your quick review and sorry for the abuse of VirtualMemory. I have updated this CL to contain refactoring only and will submit another CL to use ReserveChunk and CommitChunk for deopt entries:

1) Enabled the ASSERT(size_executable_ == 0). I have verified that there are no failures for debug.check on IA32, X64 and ARM (simulator). This is true for the existing code (without any change). Sorry, I should have done this earlier instead of suspecting there is an issue in GC.

2) Decoupled AllocateChunk into ReserveChunk and CommitChunk, and AllocateAlignedMemory into ReserveAlignedMemory and CommitAlignedMemory.

I want to know your opinions before writing the code for deopt entries:

1) Do we want to use chunk-based deopt allocation for X64 only? From my reading, a chunk is always 1MB aligned and at least 1MB in size, so using chunk-based allocation might waste virtual memory. I read a little of heap allocation and the sweep-compactor, and I do not know whether the unused memory in the chunk could be used for other objects. I prefer still using VirtualMemory for the other platforms.

2) For X64, do we want to put the deopt entries into one chunk? The deopt entry size is 164K for eager and 164K for lazy; if we allocate them separately, they will occupy the second and third pages of the code range. I prefer allocating them together; if you agree, do we need a guard page between them? An alternative is that we could get a customized chunk size, but I am not sure how much effort I need to get that done, as I read a comment on PagedSpace::SizeOfFirstPage: "On x64 we allocate code pages in a special way (from the reserved 2Byte area). That part of the code is not yet upgraded to handle small pages."
I think the general idea of this patch is right, but it adds too many new unnecessary API points in spaces.h (see comment below).

>> 1) Do we want to use chunk-based deopt allocation for X64 only?

Yes. It's important that we use the same allocation path and the same code for maintainability. The possible 1MB wastage of virtual address space is a fair price to pay for having a common implementation.

>> 2) For X64, do we want to get the deopt entries into one chunk?

Please allocate them in two separate chunks for now; this keeps the code simpler, including for the guard pages. Allocating them together seems like a premature optimization without knowing it's a problem yet.

https://codereview.chromium.org/11566011/diff/20001/src/spaces.cc
File src/spaces.cc (right):

https://codereview.chromium.org/11566011/diff/20001/src/spaces.cc#newcode353
src/spaces.cc:353: if (executable == EXECUTABLE) {
It seems that you've removed the logic that rounds the base address up? That doesn't seem right.

https://codereview.chromium.org/11566011/diff/20001/src/spaces.cc#newcode538
src/spaces.cc:538: base = ReserveAlignedMemory(chunk_size,
Please move the Reserve call after the if {} else {}. Since chunk_size will be correct after the if/else, you can simply call ReserveAlignedMemory(chunk_size, MemoryChunk::kAlignment, executable, &reservation);

https://codereview.chromium.org/11566011/diff/20001/src/spaces.h
File src/spaces.h (right):

https://codereview.chromium.org/11566011/diff/20001/src/spaces.h#newcode892
src/spaces.h:892: bool commit = true);
Please add an enum for this new flag so that it's clear from the caller site what the flag does, e.g.

enum AllocateRawMemoryCommitMode {
  RESERVE_MEMORY,
  RESERVE_AND_COMMIT_MEMORY,
};

https://codereview.chromium.org/11566011/diff/20001/src/spaces.h#newcode1044
src/spaces.h:1044: Address ReserveChunk(size_t requested,
I have a fundamental concern with the increase in size of the API here, which doesn't seem necessary and seems to expose the same functionality in multiple ways. ReserveAlignedMemory, CommitAlignedMemory and AllocateAlignedMemory are fine. But is it necessary to add all the calls associated with MemoryChunk? Instead of all of these new "public" APIs, how about just adding a parameter to AllocateChunk called "initial_commit_size"? Then add a method on the MemoryChunk itself to change the committed amount on a pre-existing chunk.
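The suggested API shape, sketched with hypothetical names and trivial bodies (not the real spaces.h declarations):

  #include <cstddef>

  // Self-describing commit mode for AllocateRawMemory's boolean flag:
  enum AllocateRawMemoryCommitMode {
    RESERVE_MEMORY,             // reserve address space only
    RESERVE_AND_COMMIT_MEMORY   // reserve and commit in one step
  };

  class MemoryChunkSketch {
   public:
    // Change the committed amount of an already-reserved chunk.
    bool CommitArea(size_t requested) { committed_ = requested; return true; }
   private:
    size_t committed_ = 0;
  };

  class MemoryAllocatorSketch {
   public:
    // Reserve chunk_size, but commit only initial_commit_size up front.
    MemoryChunkSketch* AllocateChunk(size_t chunk_size,
                                     size_t initial_commit_size,
                                     bool executable) {
      (void)chunk_size; (void)initial_commit_size; (void)executable;
      return new MemoryChunkSketch();
    }
  };

The point of the enum over a bare bool is readability at call sites: AllocateRawMemory(size, RESERVE_MEMORY) says what it does, whereas AllocateRawMemory(size, false) does not.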
Danno, thanks for the review. I have updated this CL to use your recommendation of initial_commit_size.

https://codereview.chromium.org/11566011/diff/20001/src/spaces.cc
File src/spaces.cc (right):

https://codereview.chromium.org/11566011/diff/20001/src/spaces.cc#newcode353
src/spaces.cc:353: if (executable == EXECUTABLE) {
On 2013/01/15 09:41:18, danno wrote:
> It seems that you've removed the logic that rounds the base address up? That
> doesn't seem right.

This RoundUp is done in the reservation(requested, alignment) on each platform, so we do not need to do it here again.
Thanks for the updated patch. In general, I like this approach better than all the other variants I've seen, but I've still got some suggestions below.

https://codereview.chromium.org/11566011/diff/25001/src/spaces.cc
File src/spaces.cc (right):

https://codereview.chromium.org/11566011/diff/25001/src/spaces.cc#newcode246
src/spaces.cc:246: bool CodeRange::RecommitRawMemory(Address start, size_t size) {
If you make the changes in MemoryChunk to support committed_body_size_, then this method too can just be called CommitRawMemory, and be used to just commit the delta of uncommitted pages.

https://codereview.chromium.org/11566011/diff/25001/src/spaces.cc#newcode253
src/spaces.cc:253: if (!code_range_->Guard(start + size)) return false;
Guard pages are managed by the MemoryChunk (not here), and it should always be allocated in a fixed position at the end of the MemoryChunk. If there are uncommitted pages between the currently "active" committed region, that's OK.

https://codereview.chromium.org/11566011/diff/25001/src/spaces.cc#newcode380
src/spaces.cc:380: if (!CommitCodePage(&reservation, base, initial_commit_size)) {
CommitCodePage should take both size and initial_commit_size. It should create the guard pages using size, but commit using initial_commit_size.

https://codereview.chromium.org/11566011/diff/25001/src/spaces.cc#newcode488
src/spaces.cc:488: bool MemoryChunk::RecommitBody(size_t body_size, Executability executable) {
I think I'd prefer that the MemoryChunk remembered "committed_body_size_", and that this method just be called CommitBody, ASSERTing that body_size >= committed_body_size_ and only explicitly committing the pages between committed_body_size_ and body_size.

https://codereview.chromium.org/11566011/diff/25001/src/spaces.cc#newcode495
src/spaces.cc:495: if (!reservation_.Guard(area_start_ + body_size)) return false;
As mentioned above, the guard page should always be created at the end of the MemoryChunk already, so you shouldn't have to re-create it here every time. That also simplifies the logic, which is in both cases:

if (!reservation_.Commit(area_start_, body_size, true)) return false;

https://codereview.chromium.org/11566011/diff/25001/src/spaces.cc#newcode503
src/spaces.cc:503: if (!heap_->isolate()->code_range()->RecommitRawMemory(area_start_,
If you change this method to be called "CommitBody", then RecommitRawMemory can be CommitRawMemory:

CommitRawMemory(area_start_ + committed_body_size_, body_size - committed_body_size_);

https://codereview.chromium.org/11566011/diff/25001/src/spaces.cc#newcode509
src/spaces.cc:509: if (Heap::ShouldZapGarbage()) {
Just zap the delta of uncommitted pages.

https://codereview.chromium.org/11566011/diff/25001/src/spaces.cc#newcode543
src/spaces.cc:543: size_t chunk_size, initial_commit_size;
nit: declare initial_commit_size in the scope that it's used in.

https://codereview.chromium.org/11566011/diff/25001/src/spaces.cc#newcode562
src/spaces.cc:562: initial_commit_size = RoundUp(CodePageAreaStartOffset() + commit_size,
I don't think you need to round this up if you make sure the guard page is only allocated once and stays at the end.

https://codereview.chromium.org/11566011/diff/25001/src/spaces.cc#newcode562
src/spaces.cc:562: initial_commit_size = RoundUp(CodePageAreaStartOffset() + commit_size,
As noted above: size_t initial_commit_size = ...

https://codereview.chromium.org/11566011/diff/25001/src/spaces.cc#newcode597
src/spaces.cc:597: initial_commit_size = MemoryChunk::kObjectStartOffset + commit_size;
As noted above: size_t initial_commit_size = ...
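The delta-commit idea from this review round, as a toy model (committed_ plays the role of the suggested committed_body_size_; real code would commit actual OS pages rather than updating a counter):

  #include <cassert>
  #include <cstddef>

  class AreaSketch {
   public:
    explicit AreaSketch(size_t reserved) : reserved_(reserved), committed_(0) {}
    bool CommitBody(size_t body_size) {
      assert(body_size <= reserved_);            // never pass the reservation
      if (body_size <= committed_) return true;  // nothing new to commit
      // Real code would commit only the pages in [committed_, body_size);
      // the guard page stays fixed at the end of the reservation and is
      // never moved or re-created.
      committed_ = body_size;
      return true;
    }
   private:
    size_t reserved_;
    size_t committed_;
  };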
I did not introduce "committed_body_size"; instead, I am using area_end_ - area_start_ to calculate it. All the comments are addressed. Thanks for the suggestions!

https://codereview.chromium.org/11566011/diff/25001/src/spaces.cc
File src/spaces.cc (right):

https://codereview.chromium.org/11566011/diff/25001/src/spaces.cc#newcode246
src/spaces.cc:246: bool CodeRange::RecommitRawMemory(Address start, size_t size) {
On 2013/01/16 14:48:54, danno wrote:
> If you make the changes in MemoryChunk to support committed_body_size_, then
> this method too can just be called CommitRawMemory, and be used to just commit
> the delta of uncommitted pages.

Done.

https://codereview.chromium.org/11566011/diff/25001/src/spaces.cc#newcode253
src/spaces.cc:253: if (!code_range_->Guard(start + size)) return false;
On 2013/01/16 14:48:54, danno wrote:
> Guard pages are managed by the MemoryChunk (not here), and it should always be
> allocated in a fixed position at the end of the MemoryChunk. If there are
> uncommitted pages between the currently "active" committed region, that's OK.

Done.

https://codereview.chromium.org/11566011/diff/25001/src/spaces.cc#newcode380
src/spaces.cc:380: if (!CommitCodePage(&reservation, base, initial_commit_size)) {
On 2013/01/16 14:48:54, danno wrote:
> CommitCodePage should take both size and initial_commit_size. It should create
> the guard pages using size, but commit using initial_commit_size.

Done.

https://codereview.chromium.org/11566011/diff/25001/src/spaces.cc#newcode488
src/spaces.cc:488: bool MemoryChunk::RecommitBody(size_t body_size, Executability executable) {
I am using area_end_ - area_start_ to remember "committed_body_size". If body_size <= area_end_ - area_start_, the function just returns true and no uncommitment happens.

https://codereview.chromium.org/11566011/diff/25001/src/spaces.cc#newcode495
src/spaces.cc:495: if (!reservation_.Guard(area_start_ + body_size)) return false;
On 2013/01/16 14:48:54, danno wrote:
> As mentioned above, the guard page should always be created at the end of the
> MemoryChunk already, so you shouldn't have to re-create it here every time.
> That also simplifies the logic, which is in both cases:
>
> if (!reservation_.Commit(area_start_, body_size, true)) return false;

Done.

https://codereview.chromium.org/11566011/diff/25001/src/spaces.cc#newcode503
src/spaces.cc:503: if (!heap_->isolate()->code_range()->RecommitRawMemory(area_start_,
On 2013/01/16 14:48:54, danno wrote:
> If you change this method to be called "CommitBody", then RecommitRawMemory can
> be CommitRawMemory:
>
> CommitRawMemory(area_start_ + committed_body_size_, body_size -
> committed_body_size_);

Done.

https://codereview.chromium.org/11566011/diff/25001/src/spaces.cc#newcode509
src/spaces.cc:509: if (Heap::ShouldZapGarbage()) {
On 2013/01/16 14:48:54, danno wrote:
> Just zap the delta of uncommitted pages.

Done.

https://codereview.chromium.org/11566011/diff/25001/src/spaces.cc#newcode562
src/spaces.cc:562: initial_commit_size = RoundUp(CodePageAreaStartOffset() + commit_size,
On 2013/01/16 14:48:54, danno wrote:
> I don't think you need to round this up if you make sure the guard page is only
> allocated once and stays at the end.

Done.

https://codereview.chromium.org/11566011/diff/25001/src/spaces.cc#newcode562
src/spaces.cc:562: initial_commit_size = RoundUp(CodePageAreaStartOffset() + commit_size,
On 2013/01/16 14:48:54, danno wrote:
> I don't think you need to round this up if you make sure the guard page is only
> allocated once and stays at the end.

Done.

https://codereview.chromium.org/11566011/diff/25001/src/spaces.cc#newcode597
src/spaces.cc:597: initial_commit_size = MemoryChunk::kObjectStartOffset + commit_size;
On 2013/01/16 14:48:54, danno wrote:
> As noted above:
>
> size_t initial_commit_size = ...

Done.
Thanks for the updated patch set. Here are some further comments.

https://codereview.chromium.org/11566011/diff/27002/src/spaces.cc
File src/spaces.cc (right):

https://codereview.chromium.org/11566011/diff/27002/src/spaces.cc#newcode249
src/spaces.cc:249: if (!code_range_->Commit(start, length, true)) return false;
How about just:

return code_range_->Commit(start, length, true);

https://codereview.chromium.org/11566011/diff/27002/src/spaces.cc#newcode364
src/spaces.cc:364: Address MemoryAllocator::AllocateAlignedMemory(size_t size,
nit: "requested_size" is a better name.

https://codereview.chromium.org/11566011/diff/27002/src/spaces.cc#newcode366
src/spaces.cc:366: size_t commit_body_size,
nit: "commit_size" is a better name.
nit: could you make this the second parameter? i.e.:

AllocateAlignedMemory(reserved_size, commit_size, alignment, ...)

https://codereview.chromium.org/11566011/diff/27002/src/spaces.cc#newcode486
src/spaces.cc:486: if (body_size <= (area_end_ - area_start_)) return true;
The problem with using area_end_ is that you change it later. This is brittle, since a MemoryChunk's reservation never changes, just how much of it is committed. If you change area_end_, you can commit off the end of the reserved area, because there is no way to validate body_size (i.e. there actually should be an ASSERT(body_size <= area_end_ - area_start_) statement in this method). That's why I suggested adding committed_size_, and I still think that's a good idea. area_end_ should be immutable.

https://codereview.chromium.org/11566011/diff/27002/src/spaces.cc#newcode583
src/spaces.cc:583: area_end = area_start + commit_body_size;
area_end should always be the end of the reserved space, not the committed space.

https://codereview.chromium.org/11566011/diff/27002/src/spaces.cc#newcode604
src/spaces.cc:604: // memory trunk.
nit: I don't know what you mean by "allocated memory trunk". How about:

Use chunk_size for statistics and callbacks because we assume that they treat reserved but not-yet-committed memory regions of chunks as allocated.

https://codereview.chromium.org/11566011/diff/27002/src/spaces.cc#newcode787
src/spaces.cc:787: size_t size,
I think it's clearer if you swap the order of the parameters here and call them "reserved_size" and "committed_size". I also think that the name of the function is not quite right; I think it should be renamed to "CommitExecutableMemory".

https://codereview.chromium.org/11566011/diff/27002/src/spaces.h
File src/spaces.h (right):

https://codereview.chromium.org/11566011/diff/27002/src/spaces.h#newcode656
src/spaces.h:656: // Commit the delta memory region from area_end_ to area_start_ + body_size,
Please don't include implementation details (e.g. mentioning "area_end_ to area_start_") in the interface comment.

https://codereview.chromium.org/11566011/diff/27002/src/spaces.h#newcode658
src/spaces.h:658: // and no uncommitment happens.
s/area_end_ to area_start_/Pages are never uncommitted.

https://codereview.chromium.org/11566011/diff/27002/src/spaces.h#newcode897
src/spaces.h:897: size_t commit_body_size);
Can you make this the second parameter? Also, "body" doesn't seem right. I think the parameters should be called "reserved_size" and "committed_size".

https://codereview.chromium.org/11566011/diff/27002/src/spaces.h#newcode1057
src/spaces.h:1057: Address AllocateAlignedMemory(size_t requested,
"requested_size" should be the first parameter; the second parameter should be "committed_size".
Thanks for the review. It might make sense to use area_end_ as the end of the committed memory, since the reserved end can be calculated as address() + size_ and the memory region between area_start_ and area_end_ is accessible at any time.

https://codereview.chromium.org/11566011/diff/27002/src/spaces.cc
File src/spaces.cc (right):

https://codereview.chromium.org/11566011/diff/27002/src/spaces.cc#newcode249
src/spaces.cc:249: if (!code_range_->Commit(start, length, true)) return false;
On 2013/01/17 10:49:55, danno wrote:
> How about just:
>
> return code_range_->Commit(start, length, true);

Done.

https://codereview.chromium.org/11566011/diff/27002/src/spaces.cc#newcode364
src/spaces.cc:364: Address MemoryAllocator::AllocateAlignedMemory(size_t size,
On 2013/01/17 10:49:55, danno wrote:
> nit: "requested_size" is a better name.

Done.

https://codereview.chromium.org/11566011/diff/27002/src/spaces.cc#newcode366
src/spaces.cc:366: size_t commit_body_size,
On 2013/01/17 10:49:55, danno wrote:
> nit: "commit_size" is a better name.
> nit: could you make this the second parameter? i.e.:
>
> AllocateAlignedMemory(reserved_size, commit_size, alignment, ...)

Done.

https://codereview.chromium.org/11566011/diff/27002/src/spaces.cc#newcode366
src/spaces.cc:366: size_t commit_body_size,
On 2013/01/17 10:49:55, danno wrote:
> nit: "commit_size" is a better name.
> nit: could you make this the second parameter? i.e.:
>
> AllocateAlignedMemory(reserved_size, commit_size, alignment, ...)

Done.

https://codereview.chromium.org/11566011/diff/27002/src/spaces.cc#newcode486
src/spaces.cc:486: if (body_size <= (area_end_ - area_start_)) return true;
How about "ASSERT(body_size <= static_cast<size_t>(address() + size_ - area_start_))"? From my reading, the memory space between area_start_ and area_end_ has to be iterable; if the region up to area_end_ is not committed, we could not access it.

https://codereview.chromium.org/11566011/diff/27002/src/spaces.cc#newcode583
src/spaces.cc:583: area_end = area_start + commit_body_size;
The reserved end can be calculated as address() + size_.

https://codereview.chromium.org/11566011/diff/27002/src/spaces.cc#newcode604
src/spaces.cc:604: // memory trunk.
On 2013/01/17 10:49:55, danno wrote:
> nit: I don't know what you mean by "allocated memory trunk". How about:
>
> Use chunk_size for statistics and callbacks because we assume that they treat
> reserved but not-yet-committed memory regions of chunks as allocated.

Done.

https://codereview.chromium.org/11566011/diff/27002/src/spaces.cc#newcode787
src/spaces.cc:787: size_t size,
On 2013/01/17 10:49:55, danno wrote:
> I think it's clearer if you swap the order of the parameters here and call them
> "reserved_size" and "committed_size". I also think that the name of the function
> is not quite right; I think it should be renamed to "CommitExecutableMemory".

Done.

https://codereview.chromium.org/11566011/diff/27002/src/spaces.h
File src/spaces.h (right):

https://codereview.chromium.org/11566011/diff/27002/src/spaces.h#newcode656
src/spaces.h:656: // Commit the delta memory region from area_end_ to area_start_ + body_size,
On 2013/01/17 10:49:55, danno wrote:
> Please don't include implementation details (e.g. mentioning "area_end_ to
> area_start_") in the interface comment.

Done.

https://codereview.chromium.org/11566011/diff/27002/src/spaces.h#newcode658
src/spaces.h:658: // and no uncommitment happens.
On 2013/01/17 10:49:55, danno wrote:
> s/area_end_ to area_start_/Pages are never uncommitted.

Done.

https://codereview.chromium.org/11566011/diff/27002/src/spaces.h#newcode897
src/spaces.h:897: size_t commit_body_size);
On 2013/01/17 10:49:55, danno wrote:
> Can you make this the second parameter? Also, "body" doesn't seem right. I think
> the parameters should be called "reserved_size" and "committed_size".

Done.

https://codereview.chromium.org/11566011/diff/27002/src/spaces.h#newcode1057
src/spaces.h:1057: Address AllocateAlignedMemory(size_t requested,
On 2013/01/17 10:49:55, danno wrote:
> "requested_size" should be the first parameter; the second parameter should be
> "committed_size".

Done.
Corrected the size calculation to:

ASSERT(body_size <= size_ - (area_start_ - address()) -
       (executable == EXECUTABLE ? MemoryAllocator::CodePageGuardSize() : 0));
Thanks for the updated patch set. It's getting much closer; please see my feedback below, mostly small stuff.

https://codereview.chromium.org/11566011/diff/31003/src/spaces.cc
File src/spaces.cc (right):

https://codereview.chromium.org/11566011/diff/31003/src/spaces.cc#newcode363
src/spaces.cc:363: Address MemoryAllocator::AllocateAlignedMemory(size_t requested_size,
s/requested_size/reserve_size/

https://codereview.chromium.org/11566011/diff/31003/src/spaces.cc#newcode485
src/spaces.cc:485: bool MemoryChunk::CommitBody(size_t body_size, Executability executable) {
I don't think that CommitBody should take the executable flag, since that suggests that there can be different regions of the same chunk that are executable and non-executable. You can ask the chunk whether it is already executable by calling GetFlags on the chunk and checking for the IS_EXECUTABLE bit.

https://codereview.chromium.org/11566011/diff/31003/src/spaces.cc#newcode490
src/spaces.cc:490: if (body_size <= (area_end_ - area_start_)) return true;
I think you should actually call UnCommit for the region that is no longer committed if the body shrinks. Otherwise, the API doesn't do what it claims to, that is, "commit the body to a specified length".

https://codereview.chromium.org/11566011/diff/31003/src/spaces.h
File src/spaces.h (right):

https://codereview.chromium.org/11566011/diff/31003/src/spaces.h#newcode657
src/spaces.h:657: // committed size, the committed pages are never uncommitted.
Why not uncommit? It seems to me that uncommitting would be more consistent.

https://codereview.chromium.org/11566011/diff/31003/src/spaces.h#newcode1045
src/spaces.h:1045: // After calling this function, the memory region from commit_body_size to
How about just:

// Returns a MemoryChunk in which the memory region from ...

https://codereview.chromium.org/11566011/diff/31003/src/spaces.h#newcode1046
src/spaces.h:1046: // reserve_body_size of the body is reserved but not committed, they could be
nit: s/they could/it could/

https://codereview.chromium.org/11566011/diff/31003/src/spaces.h#newcode1056
src/spaces.h:1056: Address AllocateAlignedMemory(size_t requested_size,
s/requested_size/reserve_size/
Thanks for the review. All the comments are addressed.

1) Renamed CommitBody to CommitArea.
2) Added a test case for MemoryChunk.
3) Commit the deoptimization code entry size instead of Deoptimizer::GetMaxDeoptTableSize(). CommitArea should be able to adjust (grow or shrink) the area size.

https://codereview.chromium.org/11566011/diff/31003/src/spaces.cc
File src/spaces.cc (right):

https://codereview.chromium.org/11566011/diff/31003/src/spaces.cc#newcode363
src/spaces.cc:363: Address MemoryAllocator::AllocateAlignedMemory(size_t requested_size,
On 2013/01/17 16:00:55, danno wrote:
> s/requested_size/reserve_size/

Done.

https://codereview.chromium.org/11566011/diff/31003/src/spaces.cc#newcode490
src/spaces.cc:490: if (body_size <= (area_end_ - area_start_)) return true;
On 2013/01/17 16:00:55, danno wrote:
> I think you should actually call UnCommit for the region that is no longer
> committed if the body shrinks. Otherwise, the API doesn't do what it claims
> to, that is, "commit the body to a specified length".

Done.

https://codereview.chromium.org/11566011/diff/31003/src/spaces.h
File src/spaces.h (right):

https://codereview.chromium.org/11566011/diff/31003/src/spaces.h#newcode657
src/spaces.h:657: // committed size, the committed pages are never uncommitted.
On 2013/01/17 16:00:55, danno wrote:
> Why not uncommit? It seems to me that uncommitting would be more consistent.

Done.

https://codereview.chromium.org/11566011/diff/31003/src/spaces.h#newcode1045
src/spaces.h:1045: // After calling this function, the memory region from commit_body_size to
On 2013/01/17 16:00:55, danno wrote:
> How about just:
> // Returns a MemoryChunk in which the memory region from ...

Done.

https://codereview.chromium.org/11566011/diff/31003/src/spaces.h#newcode1046
src/spaces.h:1046: // reserve_body_size of the body is reserved but not committed, they could be
On 2013/01/17 16:00:55, danno wrote:
> nit: s/they could/it could/

Done.

https://codereview.chromium.org/11566011/diff/31003/src/spaces.h#newcode1056
src/spaces.h:1056: Address AllocateAlignedMemory(size_t requested_size,
On 2013/01/17 16:00:55, danno wrote:
> s/requested_size/reserve_size/

Done.
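What the new test presumably has to exercise, as a toy model rather than the actual cctest (a 4096-byte page and hypothetical helper names are assumed): CommitArea must both grow and shrink the committed area, and must reject sizes beyond the reservation.

  #include <cassert>
  #include <cstddef>

  struct ChunkModel {
    explicit ChunkModel(size_t r) : reserved(r) {}
    size_t reserved;
    size_t committed = 0;
    bool CommitArea(size_t requested) {
      if (requested > reserved) return false;
      committed = requested;  // real code commits/uncommits the page delta
      return true;
    }
  };

  int main() {
    ChunkModel chunk(16 * 4096);
    assert(chunk.CommitArea(4096));        // initial minimal commit
    assert(chunk.CommitArea(8 * 4096));    // grow: commit the delta
    assert(chunk.CommitArea(2 * 4096));    // shrink: uncommit the tail
    assert(!chunk.CommitArea(32 * 4096));  // cannot exceed the reservation
    return 0;
  }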
Updated to patch set 10.
1) Added comments in MemoryChunk::AllocateChunk to describe the MemoryChunk layout.
2) Refined the test case for MemoryChunk.
Thanks for the updated patch. I especially like the updated comments with the helpful diagram showing the format of a Chunk. This patch is very close; if you address the comments below, I will land the patch for you.

https://codereview.chromium.org/11566011/diff/49001/src/spaces.cc
File src/spaces.cc (right):

https://codereview.chromium.org/11566011/diff/49001/src/spaces.cc#newcode500
src/spaces.cc:500: }
You can remove this block if you tweak the if below.

https://codereview.chromium.org/11566011/diff/49001/src/spaces.cc#newcode521
src/spaces.cc:521: } else {
If you make this:

} else if (commit_size < committed_size) {

then you can avoid the special-case handling of equality above.
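The suggested control-flow tweak, sketched as a toy function (not the real CommitArea): growth and shrinkage go in if/else-if arms, so the equality case needs no special handling at all.

  #include <cstddef>

  bool CommitAreaSketch(size_t commit_size, size_t committed_size) {
    if (commit_size > committed_size) {
      // commit the pages in [committed_size, commit_size)
    } else if (commit_size < committed_size) {
      // uncommit the pages in [commit_size, committed_size)
    }
    // commit_size == committed_size: nothing to do, falls through naturally
    return true;
  }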
danno, thanks for your review. I am glad you like that comment :) Thanks for all your comments, suggestions and recommendations while I was working on this CL; I really learned a lot from you!

https://codereview.chromium.org/11566011/diff/49001/src/spaces.cc
File src/spaces.cc (right):

https://codereview.chromium.org/11566011/diff/49001/src/spaces.cc#newcode500
src/spaces.cc:500: }
On 2013/01/24 12:38:17, danno wrote:
> You can remove this block if you tweak the if below.

Done.

https://codereview.chromium.org/11566011/diff/49001/src/spaces.cc#newcode521
src/spaces.cc:521: } else {
On 2013/01/24 12:38:17, danno wrote:
> If you make this:
>
> } else if (commit_size < committed_size) {
>
> then you can avoid the special-case handling of equality above.

Done.
lgtm, I'll land this for you
Unfortunately, this patch causes crashes on Windows, and I had to roll back. Can you please investigate and resubmit when you have a fix?
Sorry about that. I seldom use Windows and only verified on Linux and Mac before submitting the CL. I will set up a Windows development environment and investigate the crash.
Fixed the Windows crash. I had mistakenly removed a RoundUp in ReserveAlignedMemory, which Danno had pointed out earlier. I should have looked at platform-win32.cc more carefully before removing it.
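The class of bug involved, sketched: when the OS hands back a reservation aligned only to its own granularity (64KB for VirtualAlloc on Windows), the base must still be rounded up to MemoryChunk::kAlignment before use. RoundUpTo here is a hypothetical helper equivalent to V8's RoundUp.

  #include <cstdint>

  uint64_t RoundUpTo(uint64_t value, uint64_t alignment) {
    // alignment must be a power of two, as MemoryChunk::kAlignment is
    return (value + alignment - 1) & ~(alignment - 1);
  }
  // e.g. RoundUpTo(0x10234000, 0x100000) == 0x10300000

Skipping this step happens to work on platforms whose mmap path already over-reserves and trims to alignment, which is why the bug only surfaced on Windows.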
lgtm with the new fix. I'll land this again for you.
So are you saying that the problem is not fixed? If it's not fixed, then I need to roll this out again. So, "keep in" or "roll back out"?
The problem is fixed. I verified that on Windows/Linux/Mac before submitting it. On Windows, test/mjsunit/tools/tickprocessor failed when using VS2010, but it is OK with VS2008. I just looked at the buildbot; it seems that there are still crashes on "V8 Linux - nosnap" and "V8 Win32 - debug - 2: compile failed". I will check on my Linux machine with snapshot=off.
The nosnap failures are not related to your change; we're working on fixing them right now.
Thanks for the info. I just checked out the mozilla test suite. I am running the test cases; currently there are 9 failures for both snapshot=on and snapshot=off. For the WebKit Mac test, I do not know why there are 83 new crashes. Is it related to the fact that we now reserve more memory for the deoptimization entry code than before, or is it caused by the same reason as the nosnap failures?