Fixes for bug 80981 (``Need extended jump bytecode to avoid "script too large"
errors, etc.''):

We now ReportStatementTooLarge only if
- a jump offset overflows 32 bits, signed;
- there are 2**32 or more span dependencies in a script;
- a backpatch chain link is more than (2**30 - 1) bytecodes long;
- a source note's distance from the last note, or from script main entry
  point, is > 0x7fffff bytes.
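Two of these limits can be sketched as predicates (a hedged sketch with hypothetical helper names; the real checks are scattered through jsemit.c):

```c
#include <assert.h>

/* Sketch only: hypothetical helpers illustrating two of the limits
 * above; the actual checks live in jsemit.c. */

/* An extended jump carries a signed 32-bit immediate offset. */
static int jump_offset_fits(long long off)
{
    return off >= -2147483647LL - 1 && off <= 2147483647LL;
}

/* A source note's distance from the previous note (or from the
 * script's main entry point) must not exceed 0x7fffff bytes. */
static int note_delta_fits(long long delta)
{
    return delta >= 0 && delta <= 0x7fffffLL;
}
```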

Narrative of the patch, by file:

- js.c
  The js_SrcNoteName array of const char * is now a js_SrcNoteSpec array of
  "specifiers", structs that include a const char *name member.  Also, due to
  span-dependent jumps at the ends of basic blocks where the decompiler knows
  the basic block length, but not the jump format, we need an offset operand
  for SRC_COND, SRC_IF_ELSE, and SRC_WHILE (to tell the distance from the
  branch bytecode after the condition expression to the span-dependent jump).

- jsarena.[ch]
  JS arenas are used mainly for last-in-first-out allocation with _en masse_
  release to the malloc pool (or, optionally, to a private freelist).  But
  the code generator needs to allocate and grow (by doubling, to avoid O(n^2)
  growth) allocations that hold bytecode, source notes, and span-dependency
  records.  This exception to LIFO allocation works by claiming an entire
  arena from the pool and realloc'ing it, as soon as the allocation size
  reaches the pool's default arena size.  Call such an allocation a "large
  single allocation".

  This patch adds a new arena API, JS_ArenaFreeAllocation, which can be used
  to free a large single allocation.  If called with an allocation that's not
  a large single allocation, it will nevertheless attempt to retract the arena
  containing that allocation, if the allocation is last within its arena.
  Thus JS_ArenaFreeAllocation adds a non-LIFO "free" special case to match the
  non-LIFO "grow" special case already implemented under JS_ARENA_GROW for
  large single allocations.

  The code generator still benefits via this extension to arenas, over purely
  manual malloc/realloc/free, by virtue of _en masse_ free (JS_ARENA_RELEASE
  after code generation has completed, successfully or not).

  To avoid searching for the previous arena, in order to update its next
  member upon reallocation of the arena containing a large single allocation,
  the oversized arena has a back-pointer to that next member stored (but not
  as allocable space within the arena) in a (JSArena **) footer at its end.
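The footer trick can be sketched as follows (simplified field set and hypothetical names; the authoritative layout is in JS_ArenaAllocate and JS_ArenaRealloc):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Sketch of an oversized arena whose footer stores a back-pointer to
 * the 'next' member that links to it, so a later realloc can update
 * the arena list without walking it.  Names are simplified. */
typedef struct Arena Arena;
struct Arena {
    Arena    *next;
    uintptr_t base, avail, limit;   /* [base, limit) is allocable space */
};

static Arena *new_oversized(Arena **linkp, size_t nb)
{
    /* header + payload + (Arena **) footer, as described above */
    Arena *a = malloc(sizeof *a + nb + sizeof(Arena **));
    if (!a)
        return NULL;
    a->next = NULL;
    a->base = (uintptr_t)(a + 1);
    a->avail = a->limit = a->base + nb;   /* one allocation fills it */
    *(Arena ***)a->limit = linkp;         /* footer: NOT in [base, limit) */
    *linkp = a;
    return a;
}
```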

- jscntxt.c
  I've observed for many scripts that the bytes of source notes and bytecode
  are of comparable lengths, but only now am I fixing the default arena size
  for cx->notePool to match the size for cx->codePool (1024 instead of 256).

- jsemit.c
  Span-dependent instructions in JS bytecode consist of the jump (JOF_JUMP)
  and switch (JOF_LOOKUPSWITCH, JOF_TABLESWITCH) format opcodes, subdivided
  into unconditional (gotos and gosubs), and conditional jumps or branches
  (which pop a value, test it, and jump depending on its value).  Most jumps
  have just one immediate operand, a signed offset from the jump opcode's pc
  to the target bytecode.  The lookup and table switch opcodes may contain
  many jump offsets.

  This patch adds "X" counterparts to the opcodes/formats (X is suffixed, btw,
  to prefer JSOP_ORX and thereby to avoid colliding on the JSOP_XOR name for
  the extended form of the JSOP_OR branch opcode).  The unextended or short
  formats have 16-bit signed immediate offset operands, the extended or long
  formats have 32-bit signed immediates.  The span-dependency problem consists
  of selecting as few long instructions as possible, or about as few -- since
  jumps can span other jumps, extending one jump may cause another to need to
  be extended.
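The two operand widths can be sketched as decoders (hypothetical helper functions; the authoritative decoders are the GET_JUMP_OFFSET and GET_JUMPX_OFFSET macros in jsopcode.h, which read big-endian immediates following the opcode byte):

```c
#include <assert.h>
#include <stdint.h>

/* Short format: 16-bit signed immediate, sign-extended to 32 bits. */
static int32_t get_jump_offset(const uint8_t *pc)
{
    return (int16_t)(((uint16_t)pc[1] << 8) | pc[2]);
}

/* Extended format: 32-bit signed immediate. */
static int32_t get_jumpx_offset(const uint8_t *pc)
{
    return (int32_t)(((uint32_t)pc[1] << 24) | ((uint32_t)pc[2] << 16) |
                     ((uint32_t)pc[3] << 8)  |  (uint32_t)pc[4]);
}
```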

  Most JS scripts are short, so need no extended jumps.  We optimize for this
  case by generating short jumps until we know a long jump is needed.  After
  that point, we keep generating short jumps, but each jump's 16-bit immediate
  offset operand is actually an unsigned index into cg->spanDeps, an array of
  JSSpanDep structs.  Each struct tells the top offset in the script of the
  opcode, the "before" offset of the jump (which will be the same as top for
  simplex jumps, but which will index further into the bytecode array for a
  non-initial jump offset in a lookup or table switch), the "after" offset
  adjusted during span-dependent instruction selection (initially the same
  value as the "before" offset), and the jump target (more below).

  Since we generate cg->spanDeps lazily, from within js_SetJumpOffset, we must
  ensure that all bytecode generated so far can be inspected to discover where
  the jump offset immediate operands lie within CG_CODE(cg).  But the bonus is
  that we generate span-dependency records sorted by their offsets, so we can
  binary-search when trying to find a JSSpanDep for a given bytecode offset,
  or the nearest JSSpanDep at or above a given pc.

  To avoid limiting scripts to 64K jumps, if the cg->spanDeps index overflows
  65534, we store SPANDEP_INDEX_HUGE in the jump's immediate operand.  This
  tells us that we need to binary-search for the cg->spanDeps entry by the
  jump opcode's bytecode offset (sd->before).
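That binary search over the offset-sorted span-dependency records can be sketched like so (struct trimmed to one field, function name hypothetical):

```c
#include <assert.h>
#include <stddef.h>

/* Sketch: lower-bound binary search over records sorted by the jump
 * opcode's original bytecode offset (sd->before in the real code). */
typedef struct SpanDepSketch {
    long before;   /* original bytecode offset of the jump opcode */
} SpanDepSketch;

static SpanDepSketch *
find_span_dep(SpanDepSketch *sds, size_t n, long before)
{
    size_t lo = 0, hi = n;
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (sds[mid].before < before)
            lo = mid + 1;
        else
            hi = mid;
    }
    return (lo < n && sds[lo].before == before) ? &sds[lo] : NULL;
}
```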

  Jump targets need to be maintained in a data structure that lets us look
  up an already-known target by its address (jumps may have a common target),
  and that also lets us update the addresses (script-relative, a.k.a. absolute
  offsets) of targets that come after a jump target (for when a jump below
  that target needs to be extended).  We use an AVL tree, implemented using
  recursion, but with some tricky optimizations to its height-balancing code
  (see http://www.enteract.com/~bradapp/ftp/src/libs/C++/AvlTrees.html).

  A final wrinkle: backpatch chains are linked by jump-to-jump offsets with
  positive sign, even though they link "backward" (i.e., toward lower bytecode
  address).  We don't want to waste space and search time in the AVL tree for
  such temporary backpatch deltas, so we use a single-bit wildcard scheme to
  tag true JSJumpTarget pointers and encode untagged, signed (positive) deltas
  in JSSpanDep.target pointers, depending on whether the JSSpanDep has a known
  target, or is still awaiting backpatching.
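The single-bit wildcard scheme can be sketched with stand-in macros mirroring the JT_TAG_BIT/JT_SET_TAG/BPDELTA family this patch adds to jsemit.h (intptr_t stands in for jsword here):

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in macros mirroring the JT_* / BPDELTA_* family in jsemit.h:
 * bit 0 set means "real jump-target pointer"; bit 0 clear means the
 * word encodes a positive backpatch delta shifted left by one. */
#define TAG_BIT          ((intptr_t)1)
#define UNTAG_SHIFT      1
#define SET_TAG(p)       ((void *)((intptr_t)(p) | TAG_BIT))
#define CLR_TAG(p)       ((void *)((intptr_t)(p) & ~TAG_BIT))
#define HAS_TAG(p)       (((intptr_t)(p) & TAG_BIT) != 0)
#define DELTA_TO_WORD(d) ((void *)((intptr_t)(d) << UNTAG_SHIFT))
#define WORD_TO_DELTA(p) ((intptr_t)(p) >> UNTAG_SHIFT)
```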

  Note that backpatch chains would present a problem for BuildSpanDepTable,
  which inspects bytecode to build cg->spanDeps on demand, when the first
  short jump offset overflows.  To solve this temporary problem, we emit a
  proxy bytecode (JSOP_BACKPATCH; JSOP_BACKPATCH_PUSH for jumps that push a
  result on the interpreter's stack, namely JSOP_GOSUB; or JSOP_BACKPATCH_POP
  for branch ops) whose nuses/ndefs counts help keep the stack balanced, but
  whose opcode format distinguishes its backpatch delta immediate operand from
  a normal jump offset.

  The cg->spanDeps array and JSJumpTarget structs are allocated from the
  cx->tempPool arena-pool.  This created a LIFO vs. non-LIFO conflict: there
  were two places under the TOK_SWITCH case in js_EmitTree that used tempPool
  to allocate and release a chunk of memory, during whose lifetime JSSpanDep
  and/or JSJumpTarget structs might also be allocated from tempPool -- the
  ensuing release would prove disastrous.  These bitmap and table temporaries
  are now allocated from the malloc heap.

- jsinterp.c
  Straightforward cloning and JUMP => JUMPX mutating of the jump and switch
  format bytecode cases.

- jsobj.c
  Silence warnings about %p used without (void *) casts.

- jsopcode.c
  Massive and scary decompiler whackage to cope with extended jumps, using
  source note offsets to help find jumps whose format (short or long) can't
  be discovered from properties of prior instructions in the script.

  One cute hack here: long || and && expressions are broken up to wrap before
  the 80th column, with the operator at the end of each non-terminal line.

- jsopcode.h, jsopcode.tbl
  The new extended jump opcodes, formats, and fundamental parameterization
  macros.  Also, more comments.

- jsparse.c
  Random and probably only aesthetic fix to avoid decorating a foo[i]++ or
  --foo[i] parse tree node with JSOP_SETCALL, wrongly (only foo(i)++ or
  --foo(i), or the other post- or prefix form operator, should have such an
  opcode decoration on its parse tree).

- jsscript.h
  Random macro naming sanity: use trailing _ rather than leading _ for macro
  local variables in order to avoid invading the standard C global namespace.
brendan%mozilla.org 2001-10-17 03:16:48 +00:00
Parent 45d8560e3a
Commit 43a911aeb6
12 changed files: 1727 additions and 335 deletions

@@ -882,7 +882,7 @@ SrcNotes(JSContext *cx, JSScript *script)
offset += delta;
fprintf(gOutFile, "%3u: %5u [%4u] %-8s",
PTRDIFF(sn, notes, jssrcnote), offset, delta,
js_SrcNoteName[SN_TYPE(sn)]);
js_SrcNoteSpec[SN_TYPE(sn)].name);
type = (JSSrcNoteType) SN_TYPE(sn);
switch (type) {
case SRC_SETLINE:
@@ -894,6 +894,9 @@ SrcNotes(JSContext *cx, JSScript *script)
(uintN) js_GetSrcNoteOffset(sn, 1),
(uintN) js_GetSrcNoteOffset(sn, 2));
break;
case SRC_COND:
case SRC_IF_ELSE:
case SRC_WHILE:
case SRC_PCBASE:
case SRC_PCDELTA:
fprintf(gOutFile, " offset %u", (uintN) js_GetSrcNoteOffset(sn, 0));


@@ -91,53 +91,70 @@ JS_InitArenaPool(JSArenaPool *pool, const char *name, JSUint32 size, JSUint32 al
JS_PUBLIC_API(void *)
JS_ArenaAllocate(JSArenaPool *pool, JSUint32 nb)
{
JSArena **ap, *a, *b;
JSUint32 sz;
JSArena **ap, **bp, *a, *b;
JSUint32 extra, gross, sz;
void *p;
/*
* An allocation that consumes more than pool->arenasize also has a footer
* pointing back to its previous arena's next member. This footer is not
* included in [a->base, a->limit), so its space can't be wrongly claimed.
*/
ap = NULL;
JS_ASSERT((nb & pool->mask) == 0);
extra = (nb > pool->arenasize) ? sizeof(JSArena **) : 0;
gross = nb + extra;
for (a = pool->current; a->avail + nb > a->limit; pool->current = a) {
if (!a->next) {
ap = &arena_freelist;
ap = &a->next;
if (!*ap) {
bp = &arena_freelist;
JS_ACQUIRE_LOCK(arena_freelist_lock);
while ((b = *ap) != NULL) { /* reclaim a free arena */
while ((b = *bp) != NULL) { /* reclaim a free arena */
/*
* Insist on exact arenasize match if nb is not greater than
* Insist on exact arenasize match if gross is not greater than
* arenasize. Otherwise take any arena big enough, but not by
* more than nb + arenasize.
* more than gross + arenasize.
*/
sz = (JSUint32)(b->limit - b->base);
if ((nb > pool->arenasize)
? sz >= nb && sz <= nb + pool->arenasize
if ((gross > pool->arenasize)
? sz >= gross && sz <= gross + pool->arenasize
: sz == pool->arenasize) {
*ap = b->next;
*bp = b->next;
JS_RELEASE_LOCK(arena_freelist_lock);
b->next = NULL;
a = a->next = b;
COUNT(pool, nreclaims);
goto claim;
}
ap = &b->next;
bp = &b->next;
}
JS_RELEASE_LOCK(arena_freelist_lock);
sz = JS_MAX(pool->arenasize, nb); /* allocate a new arena */
sz += sizeof *a + pool->mask; /* header and alignment slop */
b = (JSArena *) malloc(sz);
b = (JSArena *) malloc(sz + extra); /* footer if oversized load */
if (!b)
return 0;
a = a->next = b;
a->next = NULL;
a->limit = (jsuword)a + sz;
b->next = NULL;
b->limit = (jsuword)b + sz;
JS_COUNT_ARENA(pool,++);
COUNT(pool, nmallocs);
claim:
*ap = a = b;
a->base = a->avail = JS_ARENA_ALIGN(pool, a + 1);
continue;
}
a = a->next; /* move to next arena */
a = *ap; /* move to next arena */
}
p = (void *)a->avail;
a->avail += nb;
/*
* If oversized, store ap in the footer, which lies at a->avail, but which
* can't be overwritten by a further small allocation, because a->limit is
* at most pool->mask bytes after a->avail, and no allocation can be fewer
* than (pool->mask + 1) bytes.
*/
if (extra && ap)
*(JSArena ***)a->avail = ap;
return p;
}
@@ -145,31 +162,45 @@ JS_PUBLIC_API(void *)
JS_ArenaRealloc(JSArenaPool *pool, void *p, JSUint32 size, JSUint32 incr)
{
JSArena **ap, *a;
jsuword boff, aoff, newsize;
jsuword boff, aoff, netsize, gross;
ap = &pool->first.next;
while ((a = *ap) != pool->current)
ap = &a->next;
/*
* Use the oversized-single-allocation footer to avoid searching for ap.
* See JS_ArenaAllocate, the extra variable.
*/
if (size > pool->arenasize) {
ap = *(JSArena ***)((jsuword)p + JS_ARENA_ALIGN(pool, size));
a = *ap;
} else {
ap = &pool->first.next;
while ((a = *ap) != pool->current)
ap = &a->next;
}
JS_ASSERT(a->base == (jsuword)p);
boff = JS_UPTRDIFF(a->base, a);
aoff = newsize = size + incr;
JS_ASSERT(newsize > pool->arenasize);
newsize += sizeof *a + pool->mask; /* header and alignment slop */
a = (JSArena *) realloc(a, newsize);
aoff = netsize = size + incr;
JS_ASSERT(netsize > pool->arenasize);
netsize += sizeof *a + pool->mask; /* header and alignment slop */
gross = netsize + sizeof(JSArena **); /* oversized footer holds ap */
a = (JSArena *) realloc(a, gross);
if (!a)
return NULL;
if (pool->current == *ap)
pool->current = a;
*ap = a;
pool->current = a;
#ifdef JS_ARENAMETER
pool->stats.nreallocs++;
#endif
a->base = JS_ARENA_ALIGN(pool, a + 1);
a->limit = (jsuword)a + newsize;
a->limit = (jsuword)a + netsize;
a->avail = JS_ARENA_ALIGN(pool, a->base + aoff);
/* Check whether realloc aligned differently, and copy if necessary. */
if (boff != JS_UPTRDIFF(a->base, a))
memmove((void *)a->base, (char *)a + boff, size);
/* Store ap in the oversized load footer. */
*(JSArena ***)a->avail = ap;
return (void *)a->base;
}
@@ -178,6 +209,13 @@ JS_ArenaGrow(JSArenaPool *pool, void *p, JSUint32 size, JSUint32 incr)
{
void *newp;
/*
* If p points to an oversized allocation, it owns an entire arena, so we
* can simply realloc the arena.
*/
if (size > pool->arenasize)
return JS_ArenaRealloc(pool, p, size, incr);
JS_ARENA_ALLOCATE(newp, pool, size + incr);
if (newp)
memcpy(newp, p, size);
@@ -243,6 +281,66 @@ JS_ArenaRelease(JSArenaPool *pool, char *mark)
}
}
JS_PUBLIC_API(void)
JS_ArenaFreeAllocation(JSArenaPool *pool, void *p, JSUint32 size)
{
jsuword q;
JSArena **ap, *a, *b;
/*
* If the allocation is oversized, it consumes an entire arena, and there
* is a footer pointing back to its predecessor's next member. Otherwise,
* we have to search pool for a.
*/
q = (jsuword)p + size;
q = JS_ARENA_ALIGN(pool, q);
if (size > pool->arenasize) {
ap = *(JSArena ***)q;
a = *ap;
} else {
ap = &pool->first.next;
while ((a = *ap) != NULL) {
if (a->avail == q) {
/*
* If a is consumed by the allocation at p, we can free it to
* the malloc heap.
*/
if (a->base == (jsuword)p)
break;
/*
* We can't free a, but we can "retract" its avail cursor --
* whether there are others after it in pool.
*/
a->avail = (jsuword)p;
return;
}
ap = &a->next;
}
}
/*
* At this point, a is doomed, so ensure that pool->current doesn't point
* at it. What's more, force future allocations to scavenge all arenas on
* pool, in case some have free space.
*/
if (pool->current == a)
pool->current = &pool->first;
/*
* This is a non-LIFO deallocation, so take care to fix up a->next's back
* pointer in its footer, if a->next is oversized.
*/
*ap = b = a->next;
if (b && b->avail - b->base > pool->arenasize) {
JS_ASSERT(*(JSArena ***)b->avail == &a->next);
*(JSArena ***)b->avail = ap;
}
JS_CLEAR_ARENA(a);
JS_COUNT_ARENA(pool,--);
free(a);
}
JS_PUBLIC_API(void)
JS_FreeArenaPool(JSArenaPool *pool)
{


@@ -257,6 +257,13 @@ JS_ArenaGrow(JSArenaPool *pool, void *p, JSUint32 size, JSUint32 incr);
extern JS_PUBLIC_API(void)
JS_ArenaRelease(JSArenaPool *pool, char *mark);
/*
* Function to be used directly when an allocation has likely grown to consume
* an entire JSArena, in which case the arena is returned to the malloc heap.
*/
extern JS_PUBLIC_API(void)
JS_ArenaFreeAllocation(JSArenaPool *pool, void *p, JSUint32 size);
#ifdef JS_ARENAMETER
#include <stdio.h>


@@ -103,7 +103,7 @@ js_NewContext(JSRuntime *rt, size_t stackChunkSize)
cx->jsop_ne = JSOP_NE;
JS_InitArenaPool(&cx->stackPool, "stack", stackChunkSize, sizeof(jsval));
JS_InitArenaPool(&cx->codePool, "code", 1024, sizeof(jsbytecode));
JS_InitArenaPool(&cx->notePool, "note", 256, sizeof(jssrcnote));
JS_InitArenaPool(&cx->notePool, "note", 1024, sizeof(jssrcnote));
JS_InitArenaPool(&cx->tempPool, "temp", 1024, sizeof(jsdouble));
#if JS_HAS_REGEXPS

File diff suppressed because it is too large.


@@ -18,7 +18,7 @@
* Copyright (C) 1998 Netscape Communications Corporation. All
* Rights Reserved.
*
* Contributor(s):
* Contributor(s):
*
* Alternatively, the contents of this file may be used under the
* terms of the GNU Public License (the "GPL"), in which case the
@@ -79,8 +79,8 @@ struct JSStmtInfo {
JSStmtInfo *down; /* info for enclosing statement */
};
#define SET_STATEMENT_TOP(stmt, top) \
((stmt)->top = (stmt)->update = (top), (stmt)->breaks = \
#define SET_STATEMENT_TOP(stmt, top) \
((stmt)->top = (stmt)->update = (top), (stmt)->breaks = \
(stmt)->continues = (stmt)->catchJump = (stmt)->gosub = (-1))
struct JSTreeContext { /* tree context for semantic checks */
@@ -108,6 +108,62 @@ struct JSTreeContext { /* tree context for semantic checks */
#define TREE_CONTEXT_FINISH(tc) \
((void)0)
/*
* Span-dependent instructions are jumps whose span (from the jump bytecode to
* the jump target) may require 2 or 4 bytes of immediate operand.
*/
typedef struct JSSpanDep JSSpanDep;
typedef struct JSJumpTarget JSJumpTarget;
struct JSSpanDep {
ptrdiff_t top; /* offset of first bytecode in an opcode */
ptrdiff_t offset; /* offset - 1 within opcode of jump operand */
ptrdiff_t before; /* original offset - 1 of jump operand */
JSJumpTarget *target; /* tagged target pointer or backpatch delta */
};
/*
* Jump targets are stored in an AVL tree, for O(log(n)) lookup with targets
* sorted by offset from left to right, so that targets above a span-dependent
* instruction whose jump offset operand must be extended can be found quickly
* and adjusted upward (toward higher offsets).
*/
struct JSJumpTarget {
ptrdiff_t offset; /* offset of span-dependent jump target */
int balance; /* AVL tree balance number */
JSJumpTarget *kids[2]; /* left and right AVL tree child pointers */
};
#define JT_LEFT 0
#define JT_RIGHT 1
#define JT_OTHER_DIR(dir) (1 - (dir))
#define JT_IMBALANCE(dir) (((dir) << 1) - 1)
#define JT_DIR(imbalance) (((imbalance) + 1) >> 1)
/*
* Backpatch deltas are encoded in JSSpanDep.target if JT_TAG_BIT is clear,
* so we can maintain backpatch chains when using span dependency records to
* hold jump offsets that overflow 16 bits.
*/
#define JT_TAG_BIT ((jsword) 1)
#define JT_UNTAG_SHIFT 1
#define JT_SET_TAG(jt) ((JSJumpTarget *)((jsword)(jt) | JT_TAG_BIT))
#define JT_CLR_TAG(jt) ((JSJumpTarget *)((jsword)(jt) & ~JT_TAG_BIT))
#define JT_HAS_TAG(jt) ((jsword)(jt) & JT_TAG_BIT)
#define BITS_PER_PTRDIFF (sizeof(ptrdiff_t) * JS_BITS_PER_BYTE)
#define BITS_PER_BPDELTA (BITS_PER_PTRDIFF - 1 - JT_UNTAG_SHIFT)
#define BPDELTA_MAX ((ptrdiff_t)(JS_BIT(BITS_PER_BPDELTA) - 1))
#define BPDELTA_TO_TN(bp) ((JSJumpTarget *)((bp) << JT_UNTAG_SHIFT))
#define JT_TO_BPDELTA(jt) ((ptrdiff_t)((jsword)(jt) >> JT_UNTAG_SHIFT))
#define SD_SET_TARGET(sd,jt) ((sd)->target = JT_SET_TAG(jt))
#define SD_SET_BPDELTA(sd,bp) ((sd)->target = BPDELTA_TO_TN(bp))
#define SD_GET_BPDELTA(sd) (JS_ASSERT(!JT_HAS_TAG((sd)->target)), \
JT_TO_BPDELTA((sd)->target))
#define SD_TARGET_OFFSET(sd) (JS_ASSERT(JT_HAS_TAG((sd)->target)), \
JT_CLR_TAG((sd)->target)->offset)
struct JSCodeGenerator {
JSTreeContext treeContext; /* base state: statement info stack, etc. */
void *codeMark; /* low watermark in cx->codePool */
@@ -132,6 +188,12 @@ struct JSCodeGenerator {
JSTryNote *tryBase; /* first exception handling note */
JSTryNote *tryNext; /* next available note */
size_t tryNoteSpace; /* # of bytes allocated at tryBase */
JSSpanDep *spanDeps; /* span dependent instruction records */
JSJumpTarget *jumpTargets; /* AVL tree of jump target offsets */
JSJumpTarget *jtFreeList; /* JT_LEFT-linked list of free structs */
uintN numSpanDeps; /* number of span dependencies */
uintN numJumpTargets; /* number of jump targets */
uintN emitLevel; /* js_EmitTree recursion level */
};
#define CG_BASE(cg) ((cg)->current->base)
@@ -285,8 +347,8 @@ js_EmitFunctionBody(JSContext *cx, JSCodeGenerator *cg, JSParseNode *body,
* At most one "gettable" note (i.e., a note of type other than SRC_NEWLINE,
* SRC_SETLINE, and SRC_XDELTA) applies to a given bytecode.
*
* NB: the js_SrcNoteName and js_SrcNoteArity arrays in jsemit.c are indexed
* by this enum, so their initializers need to match the order here.
* NB: the js_SrcNoteSpec array in jsemit.c is indexed by this enum, so its
* initializers need to match the order here.
*/
typedef enum JSSrcNoteType {
SRC_NULL = 0, /* terminates a note vector */
@@ -358,11 +420,18 @@ typedef enum JSSrcNoteType {
#define SN_3BYTE_OFFSET_FLAG 0x80
#define SN_3BYTE_OFFSET_MASK 0x7f
extern JS_FRIEND_DATA(const char *) js_SrcNoteName[];
extern JS_FRIEND_DATA(uint8) js_SrcNoteArity[];
extern JS_FRIEND_API(uintN) js_SrcNoteLength(jssrcnote *sn);
typedef struct JSSrcNoteSpec {
const char *name; /* name for disassembly/debugging output */
uint8 arity; /* number of offset operands */
uint8 offsetBias; /* bias of offset(s) from annotated pc */
int8 isSpanDep; /* 1 or -1 if offsets could span extended ops,
0 otherwise; sign tells span direction */
} JSSrcNoteSpec;
#define SN_LENGTH(sn) ((js_SrcNoteArity[SN_TYPE(sn)] == 0) ? 1 \
extern JS_FRIEND_DATA(JSSrcNoteSpec) js_SrcNoteSpec[];
extern JS_FRIEND_API(uintN) js_SrcNoteLength(jssrcnote *sn);
#define SN_LENGTH(sn) ((js_SrcNoteSpec[SN_TYPE(sn)].arity == 0) ? 1 \
: js_SrcNoteLength(sn))
#define SN_NEXT(sn) ((sn) + SN_LENGTH(sn))
@@ -386,6 +455,13 @@ extern intN
js_NewSrcNote3(JSContext *cx, JSCodeGenerator *cg, JSSrcNoteType type,
ptrdiff_t offset1, ptrdiff_t offset2);
/*
* NB: this function can add at most one extra extended delta note.
*/
extern jssrcnote *
js_AddToSrcNoteDelta(JSContext *cx, JSCodeGenerator *cg, jssrcnote *sn,
ptrdiff_t delta);
/*
* Get and set the offset operand identified by which (0 for the first, etc.).
*/


@@ -1502,6 +1502,49 @@ js_Interpret(JSContext *cx, jsval *result)
}
break;
#if JS_HAS_SWITCH_STATEMENT
case JSOP_DEFAULTX:
(void) POP();
/* FALL THROUGH */
#endif
case JSOP_GOTOX:
len = GET_JUMPX_OFFSET(pc);
CHECK_BRANCH(len);
break;
case JSOP_IFEQX:
POP_BOOLEAN(cx, rval, cond);
if (cond == JS_FALSE) {
len = GET_JUMPX_OFFSET(pc);
CHECK_BRANCH(len);
}
break;
case JSOP_IFNEX:
POP_BOOLEAN(cx, rval, cond);
if (cond != JS_FALSE) {
len = GET_JUMPX_OFFSET(pc);
CHECK_BRANCH(len);
}
break;
case JSOP_ORX:
POP_BOOLEAN(cx, rval, cond);
if (cond == JS_TRUE) {
len = GET_JUMPX_OFFSET(pc);
PUSH_OPND(rval);
}
break;
case JSOP_ANDX:
POP_BOOLEAN(cx, rval, cond);
if (cond == JS_FALSE) {
len = GET_JUMPX_OFFSET(pc);
PUSH_OPND(rval);
}
break;
case JSOP_TOOBJECT:
SAVE_SP(fp);
ok = js_ValueToObject(cx, FETCH_OPND(-1), &obj);
@@ -2102,6 +2145,17 @@ js_Interpret(JSContext *cx, jsval *result)
PUSH(lval);
}
break;
case JSOP_CASEX:
NEW_EQUALITY_OP(==, JS_FALSE);
(void) POP();
if (cond) {
len = GET_JUMPX_OFFSET(pc);
CHECK_BRANCH(len);
} else {
PUSH(lval);
}
break;
#endif
#endif /* !JS_BUG_FALLIBLE_EQOPS */
@@ -2962,6 +3016,89 @@ js_Interpret(JSContext *cx, jsval *result)
#undef SEARCH_PAIRS
break;
case JSOP_TABLESWITCHX:
pc2 = pc;
len = GET_JUMPX_OFFSET(pc2);
/*
* ECMAv2 forbids conversion of discriminant, so we will skip to
* the default case if the discriminant isn't already an int jsval.
* (This opcode is emitted only for dense jsint-domain switches.)
*/
if (cx->version == JSVERSION_DEFAULT ||
cx->version >= JSVERSION_1_4) {
rval = POP_OPND();
if (!JSVAL_IS_INT(rval))
break;
i = JSVAL_TO_INT(rval);
} else {
FETCH_INT(cx, -1, i);
sp--;
}
pc2 += JUMPX_OFFSET_LEN;
low = GET_JUMP_OFFSET(pc2);
pc2 += JUMP_OFFSET_LEN;
high = GET_JUMP_OFFSET(pc2);
i -= low;
if ((jsuint)i < (jsuint)(high - low + 1)) {
pc2 += JUMP_OFFSET_LEN + JUMPX_OFFSET_LEN * i;
off = (jsint) GET_JUMPX_OFFSET(pc2);
if (off)
len = off;
}
break;
case JSOP_LOOKUPSWITCHX:
lval = POP_OPND();
pc2 = pc;
len = GET_JUMPX_OFFSET(pc2);
if (!JSVAL_IS_NUMBER(lval) &&
!JSVAL_IS_STRING(lval) &&
!JSVAL_IS_BOOLEAN(lval)) {
goto advance_pc;
}
pc2 += JUMPX_OFFSET_LEN;
npairs = (jsint) GET_ATOM_INDEX(pc2);
pc2 += ATOM_INDEX_LEN;
#define SEARCH_EXTENDED_PAIRS(MATCH_CODE) \
while (npairs) { \
atom = GET_ATOM(cx, script, pc2); \
rval = ATOM_KEY(atom); \
MATCH_CODE \
if (match) { \
pc2 += ATOM_INDEX_LEN; \
len = GET_JUMPX_OFFSET(pc2); \
goto advance_pc; \
} \
pc2 += ATOM_INDEX_LEN + JUMPX_OFFSET_LEN; \
npairs--; \
}
if (JSVAL_IS_STRING(lval)) {
str = JSVAL_TO_STRING(lval);
SEARCH_EXTENDED_PAIRS(
match = (JSVAL_IS_STRING(rval) &&
((str2 = JSVAL_TO_STRING(rval)) == str ||
!js_CompareStrings(str2, str)));
)
} else if (JSVAL_IS_DOUBLE(lval)) {
d = *JSVAL_TO_DOUBLE(lval);
SEARCH_EXTENDED_PAIRS(
match = (JSVAL_IS_DOUBLE(rval) &&
*JSVAL_TO_DOUBLE(rval) == d);
)
} else {
SEARCH_EXTENDED_PAIRS(
match = (lval == rval);
)
}
#undef SEARCH_EXTENDED_PAIRS
break;
case JSOP_CONDSWITCH:
break;
@@ -3613,6 +3750,12 @@ js_Interpret(JSContext *cx, jsval *result)
PUSH(INT_TO_JSVAL(i));
break;
case JSOP_GOSUBX:
i = PTRDIFF(pc, script->main, jsbytecode) + len;
len = GET_JUMPX_OFFSET(pc);
PUSH(INT_TO_JSVAL(i));
break;
case JSOP_RETSUB:
rval = POP();
JS_ASSERT(JSVAL_IS_INT(rval));


@@ -101,6 +101,17 @@ uintN js_NumCodeSpecs = sizeof (js_CodeSpec) / sizeof js_CodeSpec[0];
/************************************************************************/
static ptrdiff_t
GetJumpOffset(jsbytecode *pc, jsbytecode *pc2)
{
uint32 type;
type = (js_CodeSpec[*pc].format & JOF_TYPEMASK);
if (JOF_TYPE_IS_EXTENDED_JUMP(type))
return GET_JUMPX_OFFSET(pc2);
return GET_JUMP_OFFSET(pc2);
}
#ifdef DEBUG
JS_FRIEND_API(void)
@@ -160,7 +171,8 @@ js_Disassemble1(JSContext *cx, JSScript *script, jsbytecode *pc, uintN loc,
break;
case JOF_JUMP:
off = GET_JUMP_OFFSET(pc);
case JOF_JUMPX:
off = GetJumpOffset(pc, pc);
fprintf(fp, " %u (%d)", loc + off, off);
break;
@@ -183,20 +195,19 @@ js_Disassemble1(JSContext *cx, JSScript *script, jsbytecode *pc, uintN loc,
#if JS_HAS_SWITCH_STATEMENT
case JOF_TABLESWITCH:
{
jsbytecode *pc2, *end;
jsbytecode *pc2;
jsint i, low, high;
pc2 = pc;
off = GET_JUMP_OFFSET(pc2);
end = pc + off;
off = GetJumpOffset(pc, pc2);
pc2 += JUMP_OFFSET_LEN;
low = GET_JUMP_OFFSET(pc2);
low = GetJumpOffset(pc, pc2);
pc2 += JUMP_OFFSET_LEN;
high = GET_JUMP_OFFSET(pc2);
high = GetJumpOffset(pc, pc2);
pc2 += JUMP_OFFSET_LEN;
fprintf(fp, " defaultOffset %d low %d high %d", off, low, high);
for (i = low; i <= high; i++) {
off = GET_JUMP_OFFSET(pc2);
off = GetJumpOffset(pc, pc2);
fprintf(fp, "\n\t%d: %d", i, off);
pc2 += JUMP_OFFSET_LEN;
}
@@ -206,10 +217,11 @@ js_Disassemble1(JSContext *cx, JSScript *script, jsbytecode *pc, uintN loc,
case JOF_LOOKUPSWITCH:
{
jsbytecode *pc2 = pc;
jsbytecode *pc2;
jsint npairs;
off = GET_JUMP_OFFSET(pc2);
pc2 = pc;
off = GetJumpOffset(pc, pc2);
pc2 += JUMP_OFFSET_LEN;
npairs = (jsint) GET_ATOM_INDEX(pc2);
pc2 += ATOM_INDEX_LEN;
@@ -217,7 +229,7 @@ js_Disassemble1(JSContext *cx, JSScript *script, jsbytecode *pc, uintN loc,
while (npairs) {
atom = GET_ATOM(cx, script, pc2);
pc2 += ATOM_INDEX_LEN;
off = GET_JUMP_OFFSET(pc2);
off = GetJumpOffset(pc, pc2);
pc2 += JUMP_OFFSET_LEN;
str = js_ValueToSource(cx, ATOM_KEY(atom));
@@ -324,7 +336,7 @@ SprintPut(Sprinter *sp, const char *s, size_t len)
offset = sp->offset;
sp->offset += len;
bp = sp->base + offset;
memcpy(bp, s, len);
memmove(bp, s, len);
bp[len] = 0;
return offset;
}
@@ -694,8 +706,8 @@ DecompileSwitch(SprintStack *ss, TableEntry *table, uintN tableLength,
* The next case expression follows immediately, unless we are
* at the last case.
*/
nextCaseExprOff = (ptrdiff_t)
(JSVAL_TO_INT(key) + js_CodeSpec[JSOP_CASE].length);
nextCaseExprOff = (ptrdiff_t)JSVAL_TO_INT(key);
nextCaseExprOff += js_CodeSpec[pc[nextCaseExprOff]].length;
jp->indent += 2;
if (!Decompile(ss, pc + caseExprOff,
nextCaseExprOff - caseExprOff)) {
@@ -823,7 +835,7 @@ Decompile(SprintStack *ss, jsbytecode *pc, intN nb)
JSOp op, lastop, saveop;
JSCodeSpec *cs, *topcs;
jssrcnote *sn;
const char *lval, *rval = NULL, *xval;
const char *lval, *rval, *xval;
jsint i, argc;
char **argv;
JSAtom *atom;
@@ -843,8 +855,13 @@ Decompile(SprintStack *ss, jsbytecode *pc, intN nb)
cx = ss->sprinter.context;
jp = ss->printer;
endpc = pc + nb;
forelem_done = NULL;
todo = -2; /* NB: different from Sprint() error return. */
tail = -1;
op = JSOP_NOP;
sn = NULL;
rval = NULL;
while (pc < endpc) {
lastop = op;
op = saveop = (JSOp) *pc;
@@ -925,12 +942,12 @@ Decompile(SprintStack *ss, jsbytecode *pc, intN nb)
cond = js_GetSrcNoteOffset(sn, 0);
next = js_GetSrcNoteOffset(sn, 1);
tail = js_GetSrcNoteOffset(sn, 2);
LOCAL_ASSERT(tail + GET_JUMP_OFFSET(pc + tail) == 0);
LOCAL_ASSERT(tail + GetJumpOffset(pc+tail, pc+tail) == 0);
/* Print the keyword and the possibly empty init-part. */
js_printf(jp, "\tfor (%s;", rval);
if (pc[cond] == JSOP_IFEQ) {
if (pc[cond] == JSOP_IFEQ || pc[cond] == JSOP_IFEQX) {
/* Decompile the loop condition. */
DECOMPILE_CODE(pc, cond);
js_printf(jp, " %s", POP_STR());
@@ -939,7 +956,7 @@ Decompile(SprintStack *ss, jsbytecode *pc, intN nb)
/* Need a semicolon whether or not there was a cond. */
js_puts(jp, ";");
if (pc[next] != JSOP_GOTO) {
if (pc[next] != JSOP_GOTO && pc[next] != JSOP_GOTOX) {
/* Decompile the loop updater. */
DECOMPILE_CODE(pc + next, tail - next - 1);
js_printf(jp, " %s", POP_STR());
@@ -982,17 +999,32 @@ Decompile(SprintStack *ss, jsbytecode *pc, intN nb)
sn = js_GetSrcNote(jp->script, pc);
pc += oplen;
js_printf(jp, "\t} catch ("); /* balance) */
pc += 6; /* name Object, pushobj, exception */
LOCAL_ASSERT(*pc == JSOP_NAME);
pc += js_CodeSpec[JSOP_NAME].length;
LOCAL_ASSERT(*pc == JSOP_PUSHOBJ);
pc += js_CodeSpec[JSOP_PUSHOBJ].length;
LOCAL_ASSERT(*pc == JSOP_NEWINIT);
pc += js_CodeSpec[JSOP_NEWINIT].length;
LOCAL_ASSERT(*pc == JSOP_EXCEPTION);
pc += js_CodeSpec[JSOP_EXCEPTION].length;
LOCAL_ASSERT(*pc == JSOP_INITCATCHVAR);
js_printf(jp, "%s",
ATOM_BYTES(GET_ATOM(cx, jp->script, pc)));
pc += js_CodeSpec[JSOP_INITCATCHVAR].length;
LOCAL_ASSERT(*pc == JSOP_ENTERWITH);
pc += js_CodeSpec[JSOP_ENTERWITH].length;
len = js_GetSrcNoteOffset(sn, 0);
pc += 4; /* initcatchvar, enterwith */
if (len) {
js_printf(jp, " if ");
DECOMPILE_CODE(pc, len - 3); /* don't decompile ifeq */
DECOMPILE_CODE(pc, len);
js_printf(jp, "%s", POP_STR());
pc += len;
LOCAL_ASSERT(*pc == JSOP_IFEQ || *pc == JSOP_IFEQX);
pc += js_CodeSpec[*pc].length;
}
js_printf(jp, ") {\n"); /* balance} */
jp->indent += 4;
len = 0;
@@ -1054,6 +1086,7 @@ Decompile(SprintStack *ss, jsbytecode *pc, intN nb)
break;
case JSOP_GOSUB:
case JSOP_GOSUBX:
case JSOP_RETSUB:
case JSOP_SETSP:
todo = -2;
@@ -1160,6 +1193,7 @@ Decompile(SprintStack *ss, jsbytecode *pc, intN nb)
#endif /* JS_HAS_EXCEPTIONS */
case JSOP_GOTO:
case JSOP_GOTOX:
sn = js_GetSrcNote(jp->script, pc);
switch (sn ? SN_TYPE(sn) : SRC_NULL) {
case SRC_CONT2LABEL:
@@ -1185,7 +1219,8 @@ Decompile(SprintStack *ss, jsbytecode *pc, intN nb)
break;
case JSOP_IFEQ:
len = GET_JUMP_OFFSET(pc);
case JSOP_IFEQX:
len = GetJumpOffset(pc, pc);
sn = js_GetSrcNote(jp->script, pc);
switch (sn ? SN_TYPE(sn) : SRC_NULL) {
@@ -1197,13 +1232,13 @@ Decompile(SprintStack *ss, jsbytecode *pc, intN nb)
if (SN_TYPE(sn) == SRC_IF) {
DECOMPILE_CODE(pc + oplen, len - oplen);
} else {
DECOMPILE_CODE(pc + oplen,
len - (oplen + js_CodeSpec[JSOP_GOTO].length));
len = js_GetSrcNoteOffset(sn, 0);
DECOMPILE_CODE(pc + oplen, len - oplen);
jp->indent -= 4;
pc += len - oplen;
LOCAL_ASSERT(*pc == JSOP_GOTO);
oplen = js_CodeSpec[JSOP_GOTO].length;
len = GET_JUMP_OFFSET(pc);
pc += len;
LOCAL_ASSERT(*pc == JSOP_GOTO || *pc == JSOP_GOTOX);
oplen = js_CodeSpec[*pc].length;
len = GetJumpOffset(pc, pc);
js_printf(jp, "\t} else {\n");
jp->indent += 4;
DECOMPILE_CODE(pc + oplen, len - oplen);
@@ -1217,8 +1252,8 @@ Decompile(SprintStack *ss, jsbytecode *pc, intN nb)
rval = POP_STR();
js_printf(jp, "\twhile (%s) {\n", rval);
jp->indent += 4;
DECOMPILE_CODE(pc + oplen,
len - (oplen + js_CodeSpec[JSOP_GOTO].length));
tail = js_GetSrcNoteOffset(sn, 0);
DECOMPILE_CODE(pc + oplen, tail - oplen);
jp->indent -= 4;
js_printf(jp, "\t}\n");
todo = -2;
@@ -1228,17 +1263,17 @@ Decompile(SprintStack *ss, jsbytecode *pc, intN nb)
xval = JS_strdup(cx, POP_STR());
if (!xval)
return JS_FALSE;
DECOMPILE_CODE(pc + oplen,
len - (oplen + js_CodeSpec[JSOP_GOTO].length));
len = js_GetSrcNoteOffset(sn, 0);
DECOMPILE_CODE(pc + oplen, len - oplen);
lval = JS_strdup(cx, POP_STR());
if (!lval) {
JS_free(cx, (void *)xval);
return JS_FALSE;
}
pc += len - oplen;
LOCAL_ASSERT(*pc == JSOP_GOTO);
oplen = js_CodeSpec[JSOP_GOTO].length;
len = GET_JUMP_OFFSET(pc);
pc += len;
LOCAL_ASSERT(*pc == JSOP_GOTO || *pc == JSOP_GOTOX);
oplen = js_CodeSpec[*pc].length;
len = GetJumpOffset(pc, pc);
DECOMPILE_CODE(pc + oplen, len - oplen);
rval = POP_STR();
todo = Sprint(&ss->sprinter, "%s ? %s : %s",
@@ -1253,46 +1288,55 @@ Decompile(SprintStack *ss, jsbytecode *pc, intN nb)
break;
case JSOP_IFNE:
case JSOP_IFNEX:
#if JS_HAS_DO_WHILE_LOOP
/* Check for a do-while loop's upward branch. */
sn = js_GetSrcNote(jp->script, pc);
if (sn && SN_TYPE(sn) == SRC_WHILE) {
jp->indent -= 4;
/* {balance: */
js_printf(jp, "\t} while (%s);\n", POP_STR());
todo = -2;
break;
}
/* Currently, this must be a do-while loop's upward branch. */
jp->indent -= 4;
/* {balance: */
js_printf(jp, "\t} while (%s);\n", POP_STR());
todo = -2;
#else
JS_ASSERT(0);
#endif /* JS_HAS_DO_WHILE_LOOP */
break;
case JSOP_OR:
case JSOP_ORX:
xval = "||";
do_logical_connective:
/* Top of stack is the first clause in a disjunction (||). */
lval = JS_strdup(cx, POP_STR());
if (!lval)
return JS_FALSE;
done = pc + GET_JUMP_OFFSET(pc);
done = pc + GetJumpOffset(pc, pc);
pc += len;
len = PTRDIFF(done, pc, jsbytecode);
DECOMPILE_CODE(pc, len);
rval = POP_STR();
todo = Sprint(&ss->sprinter, "%s || %s", lval, rval);
if (jp->pretty &&
jp->indent + 4 + strlen(lval) + 4 + strlen(rval) > 75) {
rval = JS_strdup(cx, rval);
if (!rval) {
tail = -1;
} else {
todo = Sprint(&ss->sprinter, "%s %s\n", lval, xval);
tail = Sprint(&ss->sprinter, "%*s%s",
jp->indent + 4, "", rval);
JS_free(cx, (char *)rval);
}
if (tail < 0)
todo = -1;
} else {
todo = Sprint(&ss->sprinter, "%s %s %s", lval, xval, rval);
}
JS_free(cx, (char *)lval);
break;
case JSOP_AND:
/* Top of stack is the first clause in a conjunction (&&). */
lval = JS_strdup(cx, POP_STR());
if (!lval)
return JS_FALSE;
done = pc + GET_JUMP_OFFSET(pc);
pc += len;
len = PTRDIFF(done, pc, jsbytecode);
DECOMPILE_CODE(pc, len);
rval = POP_STR();
todo = Sprint(&ss->sprinter, "%s && %s", lval, rval);
JS_free(cx, (char *)lval);
break;
case JSOP_ANDX:
xval = "&&";
goto do_logical_connective;
case JSOP_FORARG:
atom = GetSlotAtom(jp, js_GetArgument, GET_ARGNO(pc));
@@ -1324,9 +1368,11 @@ Decompile(SprintStack *ss, jsbytecode *pc, intN nb)
do_forinloop:
pc += oplen;
LOCAL_ASSERT(*pc == JSOP_IFEQ);
oplen = js_CodeSpec[JSOP_IFEQ].length;
len = GET_JUMP_OFFSET(pc);
LOCAL_ASSERT(*pc == JSOP_IFEQ || *pc == JSOP_IFEQX);
oplen = js_CodeSpec[*pc].length;
len = GetJumpOffset(pc, pc);
sn = js_GetSrcNote(jp->script, pc);
tail = js_GetSrcNoteOffset(sn, 0);
do_forinbody:
js_printf(jp, "\tfor (%s%s", VarPrefix(sn), lval);
@@ -1337,8 +1383,7 @@ Decompile(SprintStack *ss, jsbytecode *pc, intN nb)
rval = OFF2STR(&ss->sprinter, ss->offsets[ss->top-1]);
js_printf(jp, " in %s) {\n", rval);
jp->indent += 4;
DECOMPILE_CODE(pc + oplen,
len - (oplen + js_CodeSpec[JSOP_GOTO].length));
DECOMPILE_CODE(pc + oplen, tail - oplen);
jp->indent -= 4;
js_printf(jp, "\t}\n");
todo = -2;
@@ -1346,8 +1391,16 @@ Decompile(SprintStack *ss, jsbytecode *pc, intN nb)
case JSOP_FORELEM:
pc++;
LOCAL_ASSERT(*pc == JSOP_IFEQ);
len = js_CodeSpec[JSOP_IFEQ].length;
LOCAL_ASSERT(*pc == JSOP_IFEQ || *pc == JSOP_IFEQX);
len = js_CodeSpec[*pc].length;
/*
* Set tail for use by do_forinbody: code that uses it to find
* the loop-closing jump (whatever its format, short or long),
* in order to bound the recursively decompiled loop body.
*/
sn = js_GetSrcNote(jp->script, pc);
tail = js_GetSrcNoteOffset(sn, 0) - oplen;
/*
* This gets a little wacky. Only the length of the for loop
@@ -1356,7 +1409,8 @@ Decompile(SprintStack *ss, jsbytecode *pc, intN nb)
* is immediately below, to decompile that helper bytecode via
* the 'forelem_done' local.
*/
forelem_done = pc + GET_JUMP_OFFSET(pc);
JS_ASSERT(!forelem_done);
forelem_done = pc + GetJumpOffset(pc, pc);
break;
case JSOP_ENUMELEM:
@@ -1370,7 +1424,9 @@ Decompile(SprintStack *ss, jsbytecode *pc, intN nb)
xval = POP_STR();
lval = POP_STR();
rval = OFF2STR(&ss->sprinter, ss->offsets[ss->top-1]);
JS_ASSERT(forelem_done > pc);
len = forelem_done - pc;
forelem_done = NULL;
goto do_forinbody;
case JSOP_DUP2:
@@ -1758,8 +1814,9 @@ Decompile(SprintStack *ss, jsbytecode *pc, intN nb)
#if JS_HAS_SWITCH_STATEMENT
case JSOP_TABLESWITCH:
case JSOP_TABLESWITCHX:
{
jsbytecode *pc2, *end;
jsbytecode *pc2;
ptrdiff_t off, off2;
jsint j, n, low, high;
TableEntry *table;
@@ -1768,12 +1825,11 @@ Decompile(SprintStack *ss, jsbytecode *pc, intN nb)
JS_ASSERT(sn && SN_TYPE(sn) == SRC_SWITCH);
len = js_GetSrcNoteOffset(sn, 0);
pc2 = pc;
off = GET_JUMP_OFFSET(pc2);
end = pc + off;
off = GetJumpOffset(pc, pc2);
pc2 += JUMP_OFFSET_LEN;
low = GET_JUMP_OFFSET(pc2);
low = GetJumpOffset(pc, pc2);
pc2 += JUMP_OFFSET_LEN;
high = GET_JUMP_OFFSET(pc2);
high = GetJumpOffset(pc, pc2);
n = high - low + 1;
if (n == 0) {
@@ -1786,7 +1842,7 @@ Decompile(SprintStack *ss, jsbytecode *pc, intN nb)
return JS_FALSE;
for (i = j = 0; i < n; i++) {
pc2 += JUMP_OFFSET_LEN;
off2 = GET_JUMP_OFFSET(pc2);
off2 = GetJumpOffset(pc, pc2);
if (off2) {
table[j].key = INT_TO_JSVAL(low + i);
table[j++].offset = off2;
@@ -1806,6 +1862,7 @@ Decompile(SprintStack *ss, jsbytecode *pc, intN nb)
}
case JSOP_LOOKUPSWITCH:
case JSOP_LOOKUPSWITCHX:
{
jsbytecode *pc2;
ptrdiff_t off, off2;
@@ -1816,7 +1873,7 @@ Decompile(SprintStack *ss, jsbytecode *pc, intN nb)
JS_ASSERT(sn && SN_TYPE(sn) == SRC_SWITCH);
len = js_GetSrcNoteOffset(sn, 0);
pc2 = pc;
off = GET_JUMP_OFFSET(pc2);
off = GetJumpOffset(pc, pc2);
pc2 += JUMP_OFFSET_LEN;
npairs = (jsint) GET_ATOM_INDEX(pc2);
pc2 += ATOM_INDEX_LEN;
@@ -1828,7 +1885,7 @@ Decompile(SprintStack *ss, jsbytecode *pc, intN nb)
for (i = 0; i < npairs; i++) {
atom = GET_ATOM(cx, jp->script, pc2);
pc2 += ATOM_INDEX_LEN;
off2 = GET_JUMP_OFFSET(pc2);
off2 = GetJumpOffset(pc, pc2);
pc2 += JUMP_OFFSET_LEN;
table[i].key = ATOM_KEY(atom);
table[i].offset = off2;
@@ -1863,8 +1920,9 @@ Decompile(SprintStack *ss, jsbytecode *pc, intN nb)
off2 = off;
for (ncases = 0; off2 != 0; ncases++) {
pc2 += off2;
JS_ASSERT(*pc2 == JSOP_CASE || *pc2 == JSOP_DEFAULT);
if (*pc2 == JSOP_DEFAULT) {
JS_ASSERT(*pc2 == JSOP_CASE || *pc2 == JSOP_DEFAULT ||
*pc2 == JSOP_CASEX || *pc2 == JSOP_DEFAULTX);
if (*pc2 == JSOP_DEFAULT || *pc2 == JSOP_DEFAULTX) {
/* End of cases, but count default as a case. */
off2 = 0;
} else {
@@ -1887,11 +1945,12 @@ Decompile(SprintStack *ss, jsbytecode *pc, intN nb)
off2 = off;
for (i = 0; i < ncases; i++) {
pc2 += off2;
JS_ASSERT(*pc2 == JSOP_CASE || *pc2 == JSOP_DEFAULT);
JS_ASSERT(*pc2 == JSOP_CASE || *pc2 == JSOP_DEFAULT ||
*pc2 == JSOP_CASEX || *pc2 == JSOP_DEFAULTX);
caseOff = pc2 - pc;
table[i].key = INT_TO_JSVAL((jsint) caseOff);
table[i].offset = caseOff + GET_JUMP_OFFSET(pc2);
if (*pc2 == JSOP_CASE) {
table[i].offset = caseOff + GetJumpOffset(pc2, pc2);
if (*pc2 == JSOP_CASE || *pc2 == JSOP_CASEX) {
sn = js_GetSrcNote(jp->script, pc2);
JS_ASSERT(sn && SN_TYPE(sn) == SRC_PCDELTA);
off2 = js_GetSrcNoteOffset(sn, 0);
@@ -1905,7 +1964,7 @@ Decompile(SprintStack *ss, jsbytecode *pc, intN nb)
*/
off = JSVAL_TO_INT(table[ncases-1].key);
pc2 = pc + off;
off += GET_JUMP_OFFSET(pc2);
off += GetJumpOffset(pc2, pc2);
ok = DecompileSwitch(ss, table, (uintN)ncases, pc, len, off,
JS_TRUE);
@@ -1917,6 +1976,7 @@ Decompile(SprintStack *ss, jsbytecode *pc, intN nb)
}
case JSOP_CASE:
case JSOP_CASEX:
{
lval = POP_STR();
if (!lval)


@@ -18,7 +18,7 @@
* Copyright (C) 1998 Netscape Communications Corporation. All
* Rights Reserved.
*
* Contributor(s):
* Contributor(s):
*
* Alternatively, the contents of this file may be used under the
* terms of the GNU Public License (the "GPL"), in which case the
@@ -66,6 +66,9 @@ typedef enum JSOp {
#define JOF_QARG 6 /* quickened get/set function argument ops */
#define JOF_QVAR 7 /* quickened get/set local variable ops */
#define JOF_DEFLOCALVAR 8 /* define local var with initial value */
#define JOF_JUMPX 9 /* signed 32-bit jump offset immediate */
#define JOF_TABLESWITCHX 10 /* extended (32-bit offset) table switch */
#define JOF_LOOKUPSWITCHX 11 /* extended (32-bit offset) lookup switch */
#define JOF_TYPEMASK 0x000f /* mask for above immediate types */
#define JOF_NAME 0x0010 /* name operation */
#define JOF_PROP 0x0020 /* obj.prop operation */
@@ -81,10 +84,16 @@ typedef enum JSOp {
#define JOF_FOR 0x1000 /* for-in property op */
#define JOF_ASSIGNING 0x2000 /* hint for JSClass.resolve, used for ops
that do simplex assignment */
#define JOF_BACKPATCH 0x4000 /* backpatch placeholder during codegen */
#define JOF_TYPE_IS_EXTENDED_JUMP(t) \
((unsigned)((t) - JOF_JUMPX) <= (unsigned)(JOF_LOOKUPSWITCHX - JOF_JUMPX))
/*
* Immediate operand getters, setters, and bounds.
*/
/* Short (2-byte signed offset) relative jump macros. */
#define JUMP_OFFSET_LEN 2
#define JUMP_OFFSET_HI(off) ((jsbytecode)((off) >> 8))
#define JUMP_OFFSET_LO(off) ((jsbytecode)(off))
@@ -94,6 +103,39 @@ typedef enum JSOp {
#define JUMP_OFFSET_MIN ((int16)0x8000)
#define JUMP_OFFSET_MAX ((int16)0x7fff)
/*
* When a short jump won't hold a relative offset, its 2-byte immediate offset
* operand is an unsigned index of a span-dependency record, maintained until
* code generation finishes -- after which some (but we hope not nearly all)
* span-dependent jumps must be extended (see OptimizeSpanDeps in jsemit.c).
*
* If the span-dependency record index overflows SPANDEP_INDEX_MAX, the jump
* offset will contain SPANDEP_INDEX_HUGE, indicating that the record must be
* found (via binary search) by its "before span-dependency optimization" pc
* offset (from script main entry point).
*/
#define GET_SPANDEP_INDEX(pc) ((uint16)(((pc)[1] << 8) | (pc)[2]))
#define SET_SPANDEP_INDEX(pc,i) ((pc)[1] = JUMP_OFFSET_HI(i), \
(pc)[2] = JUMP_OFFSET_LO(i))
#define SPANDEP_INDEX_MAX ((uint16)0xfffe)
#define SPANDEP_INDEX_HUGE ((uint16)0xffff)
/* Ultimately, if short jumps won't do, emit long (4-byte signed) offsets. */
#define JUMPX_OFFSET_LEN 4
#define JUMPX_OFFSET_B3(off) ((jsbytecode)((off) >> 24))
#define JUMPX_OFFSET_B2(off) ((jsbytecode)((off) >> 16))
#define JUMPX_OFFSET_B1(off) ((jsbytecode)((off) >> 8))
#define JUMPX_OFFSET_B0(off) ((jsbytecode)(off))
#define GET_JUMPX_OFFSET(pc) ((int32)(((pc)[1] << 24) | ((pc)[2] << 16) \
| ((pc)[3] << 8) | (pc)[4]))
#define SET_JUMPX_OFFSET(pc,off)((pc)[1] = JUMPX_OFFSET_B3(off), \
(pc)[2] = JUMPX_OFFSET_B2(off), \
(pc)[3] = JUMPX_OFFSET_B1(off), \
(pc)[4] = JUMPX_OFFSET_B0(off))
#define JUMPX_OFFSET_MIN ((int32)0x80000000)
#define JUMPX_OFFSET_MAX ((int32)0x7fffffff)
/* A literal is indexed by a per-script atom map. */
#define ATOM_INDEX_LEN 2
#define ATOM_INDEX_HI(index) ((jsbytecode)((index) >> 8))
#define ATOM_INDEX_LO(index) ((jsbytecode)(index))
@@ -105,6 +147,7 @@ typedef enum JSOp {
#define ATOM_INDEX_LIMIT_LOG2 16
#define ATOM_INDEX_LIMIT ((uint32)1 << ATOM_INDEX_LIMIT_LOG2)
/* Actual argument count operand format helpers. */
#define ARGC_HI(argc) ((jsbytecode)((argc) >> 8))
#define ARGC_LO(argc) ((jsbytecode)(argc))
#define GET_ARGC(pc) ((uintN)(((pc)[1] << 8) | (pc)[2]))


@@ -31,6 +31,7 @@
* the provisions above, a recipient may use your version of this
* file under either the NPL or the GPL.
*/
/*
* JavaScript operation bytecodes. If you need to allocate a bytecode, look
* for a name of the form JSOP_UNUSED* and claim it. Otherwise, always add at
@@ -304,3 +305,20 @@ OPDEF(JSOP_ARGCNT, 137,"argcnt", NULL, 1, 0, 1, 12, JOF_BYTE)
* The function object's atom index is the second immediate operand.
*/
OPDEF(JSOP_DEFLOCALFUN, 138,"deflocalfun",NULL, 5, 0, 0, 0, JOF_DEFLOCALVAR)
/* Extended jumps. */
OPDEF(JSOP_GOTOX, 139,"gotox", NULL, 5, 0, 0, 0, JOF_JUMPX)
OPDEF(JSOP_IFEQX, 140,"ifeqx", NULL, 5, 1, 0, 0, JOF_JUMPX)
OPDEF(JSOP_IFNEX, 141,"ifnex", NULL, 5, 1, 0, 0, JOF_JUMPX)
OPDEF(JSOP_ORX, 142,"orx", NULL, 5, 1, 0, 0, JOF_JUMPX)
OPDEF(JSOP_ANDX, 143,"andx", NULL, 5, 1, 0, 0, JOF_JUMPX)
OPDEF(JSOP_GOSUBX, 144,"gosubx", NULL, 5, 0, 1, 0, JOF_JUMPX)
OPDEF(JSOP_CASEX, 145,"casex", NULL, 5, 1, 0, 0, JOF_JUMPX)
OPDEF(JSOP_DEFAULTX, 146,"defaultx", NULL, 5, 1, 0, 0, JOF_JUMPX)
OPDEF(JSOP_TABLESWITCHX, 147,"tableswitchx",NULL, -1, 1, 0, 0, JOF_TABLESWITCHX)
OPDEF(JSOP_LOOKUPSWITCHX, 148,"lookupswitchx",NULL, -1, 1, 0, 0, JOF_LOOKUPSWITCHX)
/* Placeholders for a real jump opcode set during backpatch chain fixup. */
OPDEF(JSOP_BACKPATCH, 149,"backpatch",NULL, 3, 0, 0, 0, JOF_JUMP|JOF_BACKPATCH)
OPDEF(JSOP_BACKPATCH_POP, 150,"backpatch_pop",NULL, 3, 1, 0, 0, JOF_JUMP|JOF_BACKPATCH)
OPDEF(JSOP_BACKPATCH_PUSH,151,"backpatch_push",NULL, 3, 0, 1, 0, JOF_JUMP|JOF_BACKPATCH)


@@ -2420,11 +2420,11 @@ SetIncOpKid(JSContext *cx, JSTokenStream *ts, JSTreeContext *tc,
: (preorder ? JSOP_DECPROP : JSOP_PROPDEC);
break;
case TOK_LB:
#if JS_HAS_LVALUE_RETURN
case TOK_LP:
kid->pn_op = JSOP_SETCALL;
#endif
case TOK_LB:
op = (tt == TOK_INC)
? (preorder ? JSOP_INCELEM : JSOP_ELEMINC)
: (preorder ? JSOP_DECELEM : JSOP_ELEMDEC);


@@ -72,16 +72,16 @@ struct JSScript {
#define JSSCRIPT_FIND_CATCH_START(script, pc, catchpc) \
JS_BEGIN_MACRO \
JSTryNote *_tn = (script)->trynotes; \
jsbytecode *_catchpc = NULL; \
if (_tn) { \
ptrdiff_t _offset = PTRDIFF(pc, (script)->main, jsbytecode); \
while (JS_UPTRDIFF(_offset, _tn->start) >= (jsuword)_tn->length) \
_tn++; \
if (_tn->catchStart) \
_catchpc = (script)->main + _tn->catchStart; \
JSTryNote *tn_ = (script)->trynotes; \
jsbytecode *catchpc_ = NULL; \
if (tn_) { \
ptrdiff_t offset_ = PTRDIFF(pc, (script)->main, jsbytecode); \
while (JS_UPTRDIFF(offset_, tn_->start) >= (jsuword)tn_->length) \
tn_++; \
if (tn_->catchStart) \
catchpc_ = (script)->main + tn_->catchStart; \
} \
catchpc = _catchpc; \
catchpc = catchpc_; \
JS_END_MACRO
extern JS_FRIEND_DATA(JSClass) js_ScriptClass;