Refactoring of snapshots. This simplifies the deserializer and
improves the speed of deserializing code. The current startup
time improvement for V8 is around 6%, but code deserialization
is sped up disproportionately, and we will soon have more
code in the snapshot.
* Removed support for deserializing into large object space.
The regular pages are 1 MB now, and that is plenty. This
is a big simplification.
* Instead of reserving space for the snapshot we now actually
allocate it. This removes some special-casing from the memory
management and simplifies deserialization, since we just bump
a pointer rather than calling the normal allocation routines
during deserialization (see the allocation sketch after this
list).
* Record in the snapshot how much memory we need to boot up,
and allocate exactly that, instead of just assuming that
allocations in a new VM will always be linear.
* In the snapshot we always address an object as a negative
offset from the current allocation point (see the back-reference
sketch after this list). We used to sometimes address from the
start of the deserialized data, but that is less useful now that
we have good support for roots and repetitions in the
deserialization data.
* Code objects were previously deserialized (like other
objects) by alternating raw data (copied with memcpy)
and pointers (to external references, other objects, etc.).
Now we deserialize a code object with a single memcpy,
followed by a series of skips and pointers that partially
overwrite the code we copied out of the snapshot (see the
code-object sketch after this list). The skips are sometimes
merged into the following instruction in the deserialization
data to reduce dispatch time.
* Integers in the snapshot were stored in a variable-length
format that gives a compact representation for small positive
integers. This is still the case, but the new encoding can
be decoded without branches or conditional instructions,
which is faster on a modern CPU (see the integer-decoding
sketch after this list).
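
To illustrate the first two points, here is a minimal sketch of
pre-allocating the recorded amounts and then bump-allocating during
deserialization. The names (SnapshotHeader, BumpChunk) and the header
layout are made up for illustration and are not V8's actual API:

  #include <cstdint>
  #include <cstring>
  #include <vector>

  // Illustrative only: the snapshot records, per space, how much memory
  // deserialization will need, so it can all be allocated up front.
  struct SnapshotHeader {
    uint32_t new_space_size;
    uint32_t old_space_size;
    uint32_t code_space_size;
  };

  // A pre-allocated chunk that the deserializer fills by bumping a
  // pointer, instead of going through the normal allocation routines.
  class BumpChunk {
   public:
    explicit BumpChunk(size_t size) : buffer_(size), top_(0) {}

    // Allocation during deserialization is just a pointer bump; no
    // free-list lookups, no GC checks, no special cases.
    void* Allocate(size_t bytes) {
      void* result = buffer_.data() + top_;
      top_ += bytes;
      return result;
    }

   private:
    std::vector<uint8_t> buffer_;
    size_t top_;
  };

  // At startup, read the recorded sizes and allocate exactly that much,
  // rather than assuming allocation in a fresh VM happens to be linear.
  int main() {
    SnapshotHeader header = {4096, 65536, 32768};  // read from the snapshot
    BumpChunk new_space(header.new_space_size);
    BumpChunk old_space(header.old_space_size);
    BumpChunk code_space(header.code_space_size);

    void* obj = new_space.Allocate(32);  // deserializer bump-allocates
    std::memset(obj, 0, 32);
    return 0;
  }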
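
A sketch of the negative-offset back references: the deserializer only
needs the current allocation point to resolve them. Names are
illustrative, not V8's:

  #include <cassert>
  #include <cstdint>
  #include <vector>

  // Objects already deserialized are referenced by a negative offset
  // from the current allocation point, rather than by an offset from
  // the start of the deserialized data.
  class Space {
   public:
    uint8_t* current_allocation_point() { return data_.data() + top_; }

    uint8_t* Allocate(size_t bytes) {
      uint8_t* result = current_allocation_point();
      top_ += bytes;
      return result;
    }

    // Resolve a back reference: "offset" bytes back from where we are now.
    uint8_t* ResolveBackReference(size_t offset) {
      assert(offset <= top_);
      return current_allocation_point() - offset;
    }

   private:
    std::vector<uint8_t> data_ = std::vector<uint8_t>(1 << 20);  // 1 MB page
    size_t top_ = 0;
  };

  int main() {
    Space space;
    uint8_t* first = space.Allocate(64);  // some earlier object
    space.Allocate(32);                   // the object being deserialized
    // A pointer field in the current object refers back to "first":
    uint8_t* target = space.ResolveBackReference(64 + 32);
    assert(target == first);
    return 0;
  }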
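
A sketch of the code-object scheme: one memcpy for the whole body, then
a fixup pass whose records combine a skip with the pointer that follows
it, so there is a single dispatch per patched slot. The Fixup record is
invented for illustration and is not the snapshot's actual wire format:

  #include <cstdint>
  #include <cstring>
  #include <vector>

  // Invented fixup record: skip "skip" bytes of raw instructions that
  // were already copied, then overwrite a pointer-sized slot there.
  struct Fixup {
    size_t skip;      // raw bytes to leave as they were memcpy'd
    uintptr_t value;  // pointer (external reference, other object, ...)
  };

  // Deserialize a code object: one memcpy for the whole body, then a
  // series of skips and pointer writes that partially overwrite the
  // copied data. Merging the skip into the same record means one
  // dispatch per fixup instead of one for the skip and one for the
  // pointer.
  void DeserializeCode(const uint8_t* snapshot, size_t size,
                       const std::vector<Fixup>& fixups, uint8_t* dest) {
    std::memcpy(dest, snapshot, size);  // single copy of the code body
    size_t pos = 0;
    for (const Fixup& f : fixups) {
      pos += f.skip;                                       // skip raw code
      std::memcpy(dest + pos, &f.value, sizeof(f.value));  // patch pointer
      pos += sizeof(f.value);
    }
  }

  int main() {
    uint8_t snapshot[64] = {0};
    uint8_t code[64];
    std::vector<Fixup> fixups = {{8, 0x1234}, {16, 0x5678}};
    DeserializeCode(snapshot, sizeof(snapshot), fixups, code);
    return 0;
  }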
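
A sketch of a branch-free variable-length integer decode. It assumes an
encoding where the low two bits of the first byte give the length and
the stream is padded so a four-byte load is always safe; this
illustrates the idea rather than the exact encoding used in the
snapshot:

  #include <cassert>
  #include <cstdint>
  #include <cstring>

  // Illustrative encoding: the value is shifted left by two and the low
  // two bits store (length - 1), so small positive integers fit in a
  // single byte.  Handles values below 2^30.
  size_t EncodeInt(uint32_t value, uint8_t* out) {
    size_t length = 1;
    if (value >= (1u << 6)) length = 2;
    if (value >= (1u << 14)) length = 3;
    if (value >= (1u << 22)) length = 4;
    uint32_t encoded = (value << 2) | static_cast<uint32_t>(length - 1);
    for (size_t i = 0; i < length; i++) {
      out[i] = static_cast<uint8_t>(encoded >> (8 * i));
    }
    return length;
  }

  // Branch-free decode: always load four bytes (the stream is padded so
  // the load never runs off the end), mask away the bytes belonging to
  // the next record, and shift out the length bits.  No conditional
  // instructions.
  uint32_t DecodeInt(const uint8_t* in, size_t* length_out) {
    uint32_t word;
    std::memcpy(&word, in, 4);         // unaligned load; assumes little-endian
    uint32_t length = (word & 3) + 1;  // 1..4 bytes
    uint32_t shift = 32 - 8 * length;  // bits owned by the following records
    word = (word << shift) >> shift;   // zero them out
    *length_out = length;
    return word >> 2;
  }

  int main() {
    uint8_t buffer[16] = {0};  // padded so the four-byte load is always safe
    size_t encoded_length = EncodeInt(300, buffer);
    size_t decoded_length = 0;
    assert(DecodeInt(buffer, &decoded_length) == 300);
    assert(decoded_length == encoded_length);
    return 0;
  }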
Committed: https://code.google.com/p/v8/source/detail?r=12505