Source/JavaScriptCore/ChangeLog

2014-05-20  Filip Pizlo  <fpizlo@apple.com>

        [ftlopt] DFG bytecode parser should turn GetById with nothing but a Getter stub into stuff+handleCall, and handleCall should be allowed to inline if it wants to
        https://bugs.webkit.org/show_bug.cgi?id=133105

        Reviewed by NOBODY (OOPS!).

        - GetByIdStatus now knows about getters and can report intelligent things about them.
          As is usually the case with how we do these things, GetByIdStatus knows more about
          getters than the DFG can actually handle: it'll report details about polymorphic
          getter calls even though the DFG won't be able to handle those. This is fine; the DFG
          will see those statuses and bail to a generic slow path.

        - The DFG::ByteCodeParser now knows how to set up and do handleCall() for a getter call.
          This can, and usually does, result in inlining of getters!

        - CodeOrigin and OSR exit know about inlined getter calls. When you OSR out of an
          inlined getter, we set the return PC to a getter return thunk that fixes up the stack.
          We use the usual offset-true-return-PC trick, where OSR exit places the true return PC
          of the getter's caller as a phony argument that only the thunk knows how to find.

        - Removed a bunch of dead monomorphic chain support from StructureStubInfo.

        - A large chunk of this change is dragging GetGetterSetterByOffset, GetGetter, and
          GetSetter through the DFG and FTL. GetGetterSetterByOffset is like GetByOffset except
          that we know that we're returning a GetterSetter cell. GetGetter and GetSetter extract
          the getter, or setter, from the GetterSetter.

        Still testing the performance impact.
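        As a rough illustration (not the literal content of the stress tests below; all
        names here are hypothetical), the kind of code this change lets the DFG inline
        looks like this:

        ```javascript
        // A hot property access whose only baseline cache entry is a getter
        // stub; with this change the DFG can inline the getter body at the
        // get_by_id for o.value instead of bailing to a generic slow path.
        function Cons(field) {
            this._field = field;
        }
        Object.defineProperty(Cons.prototype, "value", {
            get: function() { return this._field * 2; }
        });

        function foo(o) {
            return o.value;
        }

        var result = 0;
        for (var i = 0; i < 10000; ++i)
            result += foo(new Cons(1));
        ```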

        * bytecode/CodeBlock.cpp:
        (JSC::CodeBlock::printGetByIdCacheStatus):
        (JSC::CodeBlock::findStubInfo):
        * bytecode/CodeBlock.h:
        * bytecode/CodeOrigin.cpp:
        (WTF::printInternal):
        * bytecode/CodeOrigin.h:
        (JSC::InlineCallFrame::specializationKindFor):
        * bytecode/GetByIdStatus.cpp:
        (JSC::GetByIdStatus::computeFor):
        (JSC::GetByIdStatus::computeForStubInfo):
        (JSC::GetByIdStatus::makesCalls):
        (JSC::GetByIdStatus::computeForChain): Deleted.
        * bytecode/GetByIdStatus.h:
        (JSC::GetByIdStatus::makesCalls): Deleted.
        * bytecode/GetByIdVariant.cpp:
        (JSC::GetByIdVariant::~GetByIdVariant):
        (JSC::GetByIdVariant::GetByIdVariant):
        (JSC::GetByIdVariant::operator=):
        (JSC::GetByIdVariant::dumpInContext):
        * bytecode/GetByIdVariant.h:
        (JSC::GetByIdVariant::GetByIdVariant):
        (JSC::GetByIdVariant::callLinkStatus):
        * bytecode/PolymorphicGetByIdList.cpp:
        (JSC::GetByIdAccess::fromStructureStubInfo):
        (JSC::PolymorphicGetByIdList::from):
        * bytecode/SpeculatedType.h:
        * bytecode/StructureStubInfo.cpp:
        (JSC::StructureStubInfo::deref):
        (JSC::StructureStubInfo::visitWeakReferences):
        * bytecode/StructureStubInfo.h:
        (JSC::isGetByIdAccess):
        (JSC::StructureStubInfo::initGetByIdChain): Deleted.
        * dfg/DFGAbstractHeap.h:
        * dfg/DFGAbstractInterpreterInlines.h:
        (JSC::DFG::AbstractInterpreter<AbstractStateType>::executeEffects):
        * dfg/DFGByteCodeParser.cpp:
        (JSC::DFG::ByteCodeParser::addCall):
        (JSC::DFG::ByteCodeParser::handleCall):
        (JSC::DFG::ByteCodeParser::handleInlining):
        (JSC::DFG::ByteCodeParser::handleGetByOffset):
        (JSC::DFG::ByteCodeParser::handleGetById):
        (JSC::DFG::ByteCodeParser::InlineStackEntry::InlineStackEntry):
        (JSC::DFG::ByteCodeParser::parse):
        * dfg/DFGCSEPhase.cpp:
        (JSC::DFG::CSEPhase::getGetterSetterByOffsetLoadElimination):
        (JSC::DFG::CSEPhase::getInternalFieldLoadElimination):
        (JSC::DFG::CSEPhase::performNodeCSE):
        (JSC::DFG::CSEPhase::getTypedArrayByteOffsetLoadElimination): Deleted.
        * dfg/DFGClobberize.h:
        (JSC::DFG::clobberize):
        * dfg/DFGFixupPhase.cpp:
        (JSC::DFG::FixupPhase::fixupNode):
        * dfg/DFGJITCompiler.cpp:
        (JSC::DFG::JITCompiler::linkFunction):
        * dfg/DFGNode.h:
        (JSC::DFG::Node::hasStorageAccessData):
        * dfg/DFGNodeType.h:
        * dfg/DFGOSRExitCompilerCommon.cpp:
        (JSC::DFG::reifyInlinedCallFrames):
        * dfg/DFGPredictionPropagationPhase.cpp:
        (JSC::DFG::PredictionPropagationPhase::propagate):
        * dfg/DFGSafeToExecute.h:
        (JSC::DFG::safeToExecute):
        * dfg/DFGSpeculativeJIT32_64.cpp:
        (JSC::DFG::SpeculativeJIT::compile):
        * dfg/DFGSpeculativeJIT64.cpp:
        (JSC::DFG::SpeculativeJIT::compile):
        * ftl/FTLAbstractHeapRepository.cpp:
        * ftl/FTLAbstractHeapRepository.h:
        * ftl/FTLCapabilities.cpp:
        (JSC::FTL::canCompile):
        * ftl/FTLLink.cpp:
        (JSC::FTL::link):
        * ftl/FTLLowerDFGToLLVM.cpp:
        (JSC::FTL::LowerDFGToLLVM::compileNode):
        (JSC::FTL::LowerDFGToLLVM::compileGetGetter):
        (JSC::FTL::LowerDFGToLLVM::compileGetSetter):
        * jit/AccessorCallJITStubRoutine.h:
        * jit/JIT.cpp:
        (JSC::JIT::assertStackPointerOffset):
        (JSC::JIT::privateCompile):
        * jit/JIT.h:
        * jit/JITPropertyAccess.cpp:
        (JSC::JIT::emit_op_get_by_id):
        * jit/ThunkGenerators.cpp:
        (JSC::arityFixupGenerator):
        (JSC::baselineGetterReturnThunkGenerator):
        (JSC::baselineSetterReturnThunkGenerator):
        (JSC::arityFixup): Deleted.
        * jit/ThunkGenerators.h:
        * runtime/CommonSlowPaths.cpp:
        (JSC::setupArityCheckData):
        * tests/stress/exit-from-getter.js: Added.
        * tests/stress/poly-chain-getter.js: Added.
        (Cons):
        (foo):
        (test):
        * tests/stress/poly-chain-then-getter.js: Added.
        (Cons1):
        (Cons2):
        (foo):
        (test):
        * tests/stress/poly-getter-combo.js: Added.
        (Cons1):
        (Cons2):
        (foo):
        (test):
        (.test):
        * tests/stress/poly-getter-then-chain.js: Added.
        (Cons1):
        (Cons2):
        (foo):
        (test):
        * tests/stress/poly-getter-then-self.js: Added.
        (foo):
        (test):
        (.test):
        * tests/stress/poly-self-getter.js: Added.
        (foo):
        (test):
        (getter):
        * tests/stress/poly-self-then-getter.js: Added.
        (foo):
        (test):
        * tests/stress/weird-getter-counter.js: Added.
        (foo):
        (test):

2014-05-17  Filip Pizlo  <fpizlo@apple.com>

        [ftlopt] Factor out how CallLinkStatus uses exit site data

Source/JavaScriptCore/bytecode/CodeBlock.cpp

@@ ... @@ void CodeBlock::printGetByIdCacheStatus(
         out.printf("self");
         baseStructure = stubInfo.u.getByIdSelf.baseObjectStructure.get();
         break;
-    case access_get_by_id_chain:
-        out.printf("chain");
-        baseStructure = stubInfo.u.getByIdChain.baseObjectStructure.get();
-        chain = stubInfo.u.getByIdChain.chain.get();
-        break;
     case access_get_by_id_list:
         out.printf("list");
         list = stubInfo.u.getByIdList.list;

@@ ... @@ StructureStubInfo* CodeBlock::addStubInf
     return m_stubInfos.add();
 }
 
+StructureStubInfo* CodeBlock::findStubInfo(CodeOrigin codeOrigin)
+{
+    for (StructureStubInfo* stubInfo : m_stubInfos) {
+        if (stubInfo->codeOrigin == codeOrigin)
+            return stubInfo;
+    }
+    return nullptr;
+}
+
 CallLinkInfo* CodeBlock::addCallLinkInfo()
 {
     ConcurrentJITLocker locker(m_lock);

Source/JavaScriptCore/bytecode/CodeBlock.h

@@ ... @@ public:
     StructureStubInfo* addStubInfo();
     Bag<StructureStubInfo>::iterator stubInfoBegin() { return m_stubInfos.begin(); }
     Bag<StructureStubInfo>::iterator stubInfoEnd() { return m_stubInfos.end(); }
+
+    // O(n) operation. Use getStubInfoMap() unless you really only intend to get one
+    // stub info.
+    StructureStubInfo* findStubInfo(CodeOrigin);
 
     void resetStub(StructureStubInfo&);
 

Source/JavaScriptCore/bytecode/CodeOrigin.cpp

@@ ... @@ void printInternal(PrintStream& out, JSC
     case JSC::InlineCallFrame::Construct:
         out.print("Construct");
         return;
+    case JSC::InlineCallFrame::GetterCall:
+        out.print("GetterCall");
+        return;
+    case JSC::InlineCallFrame::SetterCall:
+        out.print("SetterCall");
+        return;
     }
     RELEASE_ASSERT_NOT_REACHED();
 }

Source/JavaScriptCore/bytecode/CodeOrigin.h

@@ ... @@ struct InlineCallFrame {
     enum Kind {
         Call,
         Construct,
+
+        // For these, the stackOffset incorporates the argument count plus the true return PC
+        // slot.
+        GetterCall,
+        SetterCall
     };
 
     static Kind kindFor(CodeSpecializationKind kind)

@@ ... @@ struct InlineCallFrame {
     {
         switch (kind) {
         case Call:
+        case GetterCall:
+        case SetterCall:
             return CodeForCall;
         case Construct:
             return CodeForConstruct;

@@ ... @@ struct InlineCallFrame {
     CodeOrigin caller;
     BitVector capturedVars; // Indexed by the machine call frame's variable numbering.
     signed stackOffset : 30;
-    Kind kind : 1;
+    Kind kind : 2;
     bool isClosureCall : 1; // If false then we know that callee/scope are constants and the DFG won't treat them as variables, i.e. they have to be recovered manually.
     VirtualRegister argumentsRegister; // This is only set if the code uses arguments. The unmodified arguments register follows the unmodifiedArgumentsRegister() convention (see CodeBlock.h).
 

Source/JavaScriptCore/bytecode/GetByIdStatus.cpp

 #include "config.h"
 #include "GetByIdStatus.h"
 
+#include "AccessorCallJITStubRoutine.h"
 #include "CodeBlock.h"
 #include "JSCInlines.h"
 #include "JSScope.h"

@@ ... @@ GetByIdStatus GetByIdStatus::computeFrom
 #endif
 }
 
-bool GetByIdStatus::computeForChain(CodeBlock* profiledBlock, StringImpl* uid, PassRefPtr<IntendedStructureChain> passedChain)
-{
-#if ENABLE(JIT)
-    RefPtr<IntendedStructureChain> chain = passedChain;
-
-    // Validate the chain. If the chain is invalid, then currently the best thing
-    // we can do is to assume that TakesSlow is true. In the future, it might be
-    // worth exploring reifying the structure chain from the structure we've got
-    // instead of using the one from the cache, since that will do the right things
-    // if the structure chain has changed. But that may be harder, because we may
-    // then end up having a different type of access altogether. And it currently
-    // does not appear to be worth it to do so -- effectively, the heuristic we
-    // have now is that if the structure chain has changed between when it was
-    // cached on in the baseline JIT and when the DFG tried to inline the access,
-    // then we fall back on a polymorphic access.
-    if (!chain->isStillValid())
-        return false;
-
-    if (chain->head()->takesSlowPathInDFGForImpureProperty())
-        return false;
-    size_t chainSize = chain->size();
-    for (size_t i = 0; i < chainSize; i++) {
-        if (chain->at(i)->takesSlowPathInDFGForImpureProperty())
-            return false;
-    }
-
-    JSObject* currentObject = chain->terminalPrototype();
-    Structure* currentStructure = chain->last();
-
-    ASSERT_UNUSED(currentObject, currentObject);
-
-    unsigned attributesIgnored;
-    JSCell* specificValue;
-
-    PropertyOffset offset = currentStructure->getConcurrently(
-        *profiledBlock->vm(), uid, attributesIgnored, specificValue);
-    if (currentStructure->isDictionary())
-        specificValue = 0;
-    if (!isValidOffset(offset))
-        return false;
-
-    return appendVariant(GetByIdVariant(StructureSet(chain->head()), offset, specificValue, chain));
-#else // ENABLE(JIT)
-    UNUSED_PARAM(profiledBlock);
-    UNUSED_PARAM(uid);
-    UNUSED_PARAM(passedChain);
-    UNREACHABLE_FOR_PLATFORM();
-    return false;
-#endif // ENABLE(JIT)
-}
-
 GetByIdStatus GetByIdStatus::computeFor(CodeBlock* profiledBlock, StubInfoMap& map, unsigned bytecodeIndex, StringImpl* uid)
 {
     ConcurrentJITLocker locker(profiledBlock->m_lock);

@@ ... @@ GetByIdStatus GetByIdStatus::computeFor(
 
 #if ENABLE(DFG_JIT)
     result = computeForStubInfo(
-        locker, profiledBlock, map.get(CodeOrigin(bytecodeIndex)), uid);
+        locker, profiledBlock, map.get(CodeOrigin(bytecodeIndex)), uid,
+        CallLinkStatus::computeExitSiteData(locker, profiledBlock, bytecodeIndex));
 
     if (!result.takesSlowPath()
         && (hasExitSite(locker, profiledBlock, bytecodeIndex)
             || profiledBlock->likelyToTakeSlowCase(bytecodeIndex)))
-        return GetByIdStatus(TakesSlowPath, true);
+        return GetByIdStatus(result.makesCalls() ? MakesCalls : TakesSlowPath, true);
 #else
     UNUSED_PARAM(map);
 #endif

@@ ... @@ GetByIdStatus GetByIdStatus::computeFor(
 
 #if ENABLE(JIT)
 GetByIdStatus GetByIdStatus::computeForStubInfo(
-    const ConcurrentJITLocker&, CodeBlock* profiledBlock, StructureStubInfo* stubInfo,
-    StringImpl* uid)
+    const ConcurrentJITLocker& locker, CodeBlock* profiledBlock, StructureStubInfo* stubInfo,
+    StringImpl* uid, CallLinkStatus::ExitSiteData callExitSiteData)
 {
     if (!stubInfo || !stubInfo->seen)
         return GetByIdStatus(NoInformation);
 
-    if (stubInfo->resetByGC)
-        return GetByIdStatus(TakesSlowPath, true);
-
     PolymorphicGetByIdList* list = 0;
+    State takesSlowPath = TakesSlowPath;
     if (stubInfo->accessType == access_get_by_id_list) {
         list = stubInfo->u.getByIdList.list;
-        for (unsigned i = 0; i < list->size(); ++i) {
-            if (list->at(i).doesCalls())
-                return GetByIdStatus(MakesCalls, true);
-        }
+
+        for (unsigned i = 0; i < list->size(); ++i) {
+            if (list->at(i).doesCalls())
+                takesSlowPath = MakesCalls;
+        }
     }
 
+    if (stubInfo->resetByGC)
+        return GetByIdStatus(takesSlowPath, true);
+
     // Finally figure out if we can derive an access strategy.
     GetByIdStatus result;
     result.m_state = Simple;

@@ ... @@ GetByIdStatus GetByIdStatus::computeForS
 
     case access_get_by_id_list: {
         for (unsigned listIndex = 0; listIndex < list->size(); ++listIndex) {
-            ASSERT(!list->at(listIndex).doesCalls());
-
             Structure* structure = list->at(listIndex).structure();
 
             // FIXME: We should assert that we never see a structure that

@@ ... @@ GetByIdStatus GetByIdStatus::computeForS
             // https://bugs.webkit.org/show_bug.cgi?id=131810
 
             if (structure->takesSlowPathInDFGForImpureProperty())
-                return GetByIdStatus(TakesSlowPath, true);
+                return GetByIdStatus(takesSlowPath, true);
 
+            unsigned attributesIgnored;
+            JSCell* specificValue;
+            PropertyOffset myOffset;
+            RefPtr<IntendedStructureChain> chain;
+
             if (list->at(listIndex).chain()) {
-                RefPtr<IntendedStructureChain> chain = adoptRef(new IntendedStructureChain(
+                chain = adoptRef(new IntendedStructureChain(
                     profiledBlock, structure, list->at(listIndex).chain(),
                     list->at(listIndex).chainCount()));
-                if (!result.computeForChain(profiledBlock, uid, chain))
-                    return GetByIdStatus(TakesSlowPath, true);
-                continue;
+
+                if (!chain->isStillValid())
+                    return GetByIdStatus(takesSlowPath, true);
+
+                if (chain->head()->takesSlowPathInDFGForImpureProperty())
+                    return GetByIdStatus(takesSlowPath, true);
+
+                size_t chainSize = chain->size();
+                for (size_t i = 0; i < chainSize; i++) {
+                    if (chain->at(i)->takesSlowPathInDFGForImpureProperty())
+                        return GetByIdStatus(takesSlowPath, true);
+                }
+
+                JSObject* currentObject = chain->terminalPrototype();
+                Structure* currentStructure = chain->last();
+
+                ASSERT_UNUSED(currentObject, currentObject);
+
+                myOffset = currentStructure->getConcurrently(
+                    *profiledBlock->vm(), uid, attributesIgnored, specificValue);
+                if (currentStructure->isDictionary())
+                    specificValue = 0;
+            } else {
+                myOffset = structure->getConcurrently(
+                    *profiledBlock->vm(), uid, attributesIgnored, specificValue);
+                if (structure->isDictionary())
+                    specificValue = 0;
             }
 
-            unsigned attributesIgnored;
-            JSCell* specificValue;
-            PropertyOffset myOffset = structure->getConcurrently(
-                *profiledBlock->vm(), uid, attributesIgnored, specificValue);
-            if (structure->isDictionary())
-                specificValue = 0;
-
             if (!isValidOffset(myOffset))
-                return GetByIdStatus(TakesSlowPath, true);
+                return GetByIdStatus(takesSlowPath, true);
 
-            bool found = false;
-            for (unsigned variantIndex = 0; variantIndex < result.m_variants.size(); ++variantIndex) {
-                GetByIdVariant& variant = result.m_variants[variantIndex];
-                if (variant.m_chain)
-                    continue;
-
-                if (variant.m_offset != myOffset)
-                    continue;
-
-                found = true;
-                if (variant.m_structureSet.contains(structure))
-                    break;
-
-                if (variant.m_specificValue != JSValue(specificValue))
-                    variant.m_specificValue = JSValue();
-
-                variant.m_structureSet.add(structure);
-                break;
-            }
-
-            if (found)
-                continue;
-
-            if (!result.appendVariant(GetByIdVariant(StructureSet(structure), myOffset, specificValue)))
-                return GetByIdStatus(TakesSlowPath, true);
+            if (!chain && !list->at(listIndex).doesCalls()) {
+                // For non-chain, non-getter accesses, we try to do some coalescing.
+                bool found = false;
+                for (unsigned variantIndex = 0; variantIndex < result.m_variants.size(); ++variantIndex) {
+                    GetByIdVariant& variant = result.m_variants[variantIndex];
+                    if (variant.m_chain)
+                        continue;
+
+                    if (variant.m_offset != myOffset)
+                        continue;
+
+                    if (variant.callLinkStatus())
+                        continue;
+
+                    found = true;
+                    if (variant.m_structureSet.contains(structure))
+                        break;
+
+                    if (variant.m_specificValue != JSValue(specificValue))
+                        variant.m_specificValue = JSValue();
+
+                    variant.m_structureSet.add(structure);
+                    break;
+                }
+
+                if (found)
+                    continue;
+            }
+
+            std::unique_ptr<CallLinkStatus> callLinkStatus;
+            switch (list->at(listIndex).type()) {
+            case GetByIdAccess::SimpleInline:
+            case GetByIdAccess::SimpleStub: {
+                break;
+            }
+            case GetByIdAccess::Getter: {
+                AccessorCallJITStubRoutine* stub = static_cast<AccessorCallJITStubRoutine*>(
+                    list->at(listIndex).stubRoutine());
+                callLinkStatus = std::make_unique<CallLinkStatus>(
+                    CallLinkStatus::computeFor(locker, *stub->m_callLinkInfo, callExitSiteData));
+                break;
+            }
+            case GetByIdAccess::CustomGetter: {
+                // FIXME: It would be totally sweet to support this at some point in the future.
+                // https://bugs.webkit.org/show_bug.cgi?id=133052
+                return GetByIdStatus(takesSlowPath, true);
+            }
+            default:
+                RELEASE_ASSERT_NOT_REACHED();
+            }
+
+            GetByIdVariant variant(
+                StructureSet(structure), myOffset, specificValue, chain,
+                std::move(callLinkStatus));
+            if (!result.appendVariant(variant))
+                return GetByIdStatus(takesSlowPath, true);
         }
 
         return result;
     }
 
-    case access_get_by_id_chain: {
-        if (!stubInfo->u.getByIdChain.isDirect)
-            return GetByIdStatus(MakesCalls, true);
-        RefPtr<IntendedStructureChain> chain = adoptRef(new IntendedStructureChain(
-            profiledBlock,
-            stubInfo->u.getByIdChain.baseObjectStructure.get(),
-            stubInfo->u.getByIdChain.chain.get(),
-            stubInfo->u.getByIdChain.count));
-        if (result.computeForChain(profiledBlock, uid, chain))
-            return result;
-        return GetByIdStatus(TakesSlowPath, true);
-    }
-
     default:
         return GetByIdStatus(TakesSlowPath, true);
     }

@@ ... @@ GetByIdStatus GetByIdStatus::computeFor(
 {
 #if ENABLE(DFG_JIT)
     if (dfgBlock) {
+        CallLinkStatus::ExitSiteData exitSiteData;
+        {
+            ConcurrentJITLocker locker(profiledBlock->m_lock);
+            exitSiteData = CallLinkStatus::computeExitSiteData(
+                locker, profiledBlock, codeOrigin.bytecodeIndex, ExitFromFTL);
+        }
+
         GetByIdStatus result;
         {
             ConcurrentJITLocker locker(dfgBlock->m_lock);
-            result = computeForStubInfo(locker, dfgBlock, dfgMap.get(codeOrigin), uid);
+            result = computeForStubInfo(
+                locker, dfgBlock, dfgMap.get(codeOrigin), uid, exitSiteData);
         }
 
         if (result.takesSlowPath())

@@ ... @@ GetByIdStatus GetByIdStatus::computeFor(
         Simple, false, GetByIdVariant(StructureSet(structure), offset, specificValue));
 }
 
+bool GetByIdStatus::makesCalls() const
+{
+    switch (m_state) {
+    case NoInformation:
+    case TakesSlowPath:
+        return false;
+    case Simple:
+        for (unsigned i = m_variants.size(); i--;) {
+            if (m_variants[i].callLinkStatus())
+                return true;
+        }
+        return false;
+    case MakesCalls:
+        return true;
+    }
+    RELEASE_ASSERT_NOT_REACHED();
+}
+
 void GetByIdStatus::dump(PrintStream& out) const
 {
     out.print("(");
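[Editorial note: a hypothetical sketch, in the style of the stress tests this patch adds, of the polymorphic mix GetByIdStatus::computeForStubInfo now summarizes. The plain data access can be coalesced into a Simple variant, while the getter access carries a CallLinkStatus so the DFG knows a call may happen; all names below are illustrative, not taken from the patch.]

```javascript
// One constructor with a plain data property and one with a getter
// for the same name; a hot get_by_id that sees both produces a
// polymorphic stub list mixing SimpleStub and Getter accesses.
function PlainCons() {
    this.value = 42;
}
function GetterCons() {
}
Object.defineProperty(GetterCons.prototype, "value", {
    get: function() { return 42; }
});

function foo(o) {
    return o.value;
}

var sum = 0;
for (var i = 0; i < 1000; ++i)
    sum += foo(i & 1 ? new PlainCons() : new GetterCons());
```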

Source/JavaScriptCore/bytecode/GetByIdStatus.h

 #ifndef GetByIdStatus_h
 #define GetByIdStatus_h
 
+#include "CallLinkStatus.h"
 #include "CodeOrigin.h"
 #include "ConcurrentJITLock.h"
 #include "ExitingJITType.h"

@@ ... @@ public:
     const GetByIdVariant& operator[](size_t index) const { return at(index); }
 
     bool takesSlowPath() const { return m_state == TakesSlowPath || m_state == MakesCalls; }
-    bool makesCalls() const { return m_state == MakesCalls; }
+    bool makesCalls() const;
 
     bool wasSeenInJIT() const { return m_wasSeenInJIT; }
 

@@ ... @@ private:
     static bool hasExitSite(const ConcurrentJITLocker&, CodeBlock*, unsigned bytecodeIndex, ExitingJITType = ExitFromAnything);
 #endif
 #if ENABLE(JIT)
-    static GetByIdStatus computeForStubInfo(const ConcurrentJITLocker&, CodeBlock*, StructureStubInfo*, StringImpl* uid);
+    static GetByIdStatus computeForStubInfo(
+        const ConcurrentJITLocker&, CodeBlock* profiledBlock, StructureStubInfo*,
+        StringImpl* uid, CallLinkStatus::ExitSiteData);
 #endif
-    bool computeForChain(CodeBlock*, StringImpl* uid, PassRefPtr<IntendedStructureChain>);
     static GetByIdStatus computeFromLLInt(CodeBlock*, unsigned bytecodeIndex, StringImpl* uid);
 
     bool appendVariant(const GetByIdVariant&);

Source/JavaScriptCore/bytecode/GetByIdVariant.cpp

 #include "config.h"
 #include "GetByIdVariant.h"
 
+#include "CallLinkStatus.h"
 #include "JSCInlines.h"
 
 namespace JSC {
 
+GetByIdVariant::~GetByIdVariant() { }
+
+GetByIdVariant::GetByIdVariant(const GetByIdVariant& other)
+{
+    *this = other;
+}
+
+GetByIdVariant& GetByIdVariant::operator=(const GetByIdVariant& other)
+{
+    m_structureSet = other.m_structureSet;
+    m_chain = other.m_chain;
+    m_specificValue = other.m_specificValue;
+    m_offset = other.m_offset;
+    if (other.m_callLinkStatus)
+        m_callLinkStatus = std::make_unique<CallLinkStatus>(*other.m_callLinkStatus);
+    else
+        m_callLinkStatus = nullptr;
+    return *this;
+}
+
 void GetByIdVariant::dump(PrintStream& out) const
 {
     dumpInContext(out, 0);

@@ ... @@ void GetByIdVariant::dumpInContext(Print
     out.print(
         "<", inContext(structureSet(), context), ", ",
         pointerDumpInContext(chain(), context), ", ",
-        inContext(specificValue(), context), ", ", offset(), ">");
+        inContext(specificValue(), context), ", ", offset());
+    if (m_callLinkStatus)
+        out.print("call: ", *m_callLinkStatus);
+    out.print(">");
 }
 
 } // namespace JSC

Source/JavaScriptCore/bytecode/GetByIdVariant.h

 #ifndef GetByIdVariant_h
 #define GetByIdVariant_h
 
+#include "CallLinkStatus.h"
 #include "IntendedStructureChain.h"
 #include "JSCJSValue.h"
 #include "PropertyOffset.h"
 
 namespace JSC {
 
+class CallLinkStatus;
 class GetByIdStatus;
 struct DumpContext;
 

@@ ... @@ public:
     GetByIdVariant(
         const StructureSet& structureSet = StructureSet(),
         PropertyOffset offset = invalidOffset, JSValue specificValue = JSValue(),
-        PassRefPtr<IntendedStructureChain> chain = nullptr)
+        PassRefPtr<IntendedStructureChain> chain = nullptr,
+        std::unique_ptr<CallLinkStatus> callLinkStatus = nullptr)
         : m_structureSet(structureSet)
         , m_chain(chain)
         , m_specificValue(specificValue)
         , m_offset(offset)
+        , m_callLinkStatus(std::move(callLinkStatus))
     {
         if (!structureSet.size()) {
             ASSERT(offset == invalidOffset);

@@ ... @@ public:
         }
     }
 
+    ~GetByIdVariant();
+
+    GetByIdVariant(const GetByIdVariant&);
+    GetByIdVariant& operator=(const GetByIdVariant&);
+
     bool isSet() const { return !!m_structureSet.size(); }
     bool operator!() const { return !isSet(); }
     const StructureSet& structureSet() const { return m_structureSet; }
     IntendedStructureChain* chain() const { return const_cast<IntendedStructureChain*>(m_chain.get()); }
     JSValue specificValue() const { return m_specificValue; }
     PropertyOffset offset() const { return m_offset; }
+    CallLinkStatus* callLinkStatus() const { return m_callLinkStatus.get(); }
 
     void dump(PrintStream&) const;
     void dumpInContext(PrintStream&, DumpContext*) const;

@@ ... @@ private:
     RefPtr<IntendedStructureChain> m_chain;
     JSValue m_specificValue;
     PropertyOffset m_offset;
+    std::unique_ptr<CallLinkStatus> m_callLinkStatus;
 };
 
 } // namespace JSC

Source/JavaScriptCore/bytecode/PolymorphicGetByIdList.cpp

@@ ... @@ GetByIdAccess GetByIdAccess::fromStructu
 
     GetByIdAccess result;
 
-    switch (stubInfo.accessType) {
-    case access_get_by_id_self:
-        result.m_type = SimpleInline;
-        result.m_structure.copyFrom(stubInfo.u.getByIdSelf.baseObjectStructure);
-        result.m_stubRoutine = JITStubRoutine::createSelfManagedRoutine(initialSlowPath);
-        break;
-
-    case access_get_by_id_chain:
-        result.m_structure.copyFrom(stubInfo.u.getByIdChain.baseObjectStructure);
-        result.m_chain.copyFrom(stubInfo.u.getByIdChain.chain);
-        result.m_chainCount = stubInfo.u.getByIdChain.count;
-        result.m_stubRoutine = stubInfo.stubRoutine;
-        if (stubInfo.u.getByIdChain.isDirect)
-            result.m_type = SimpleStub;
-        else
-            result.m_type = Getter;
-        break;
-
-    default:
-        RELEASE_ASSERT_NOT_REACHED();
-    }
+    RELEASE_ASSERT(stubInfo.accessType == access_get_by_id_self);
+
+    result.m_type = SimpleInline;
+    result.m_structure.copyFrom(stubInfo.u.getByIdSelf.baseObjectStructure);
+    result.m_stubRoutine = JITStubRoutine::createSelfManagedRoutine(initialSlowPath);
 
     return result;
 }

@@ ... @@ PolymorphicGetByIdList* PolymorphicGetBy
 
     ASSERT(
         stubInfo.accessType == access_get_by_id_self
-        || stubInfo.accessType == access_get_by_id_chain
         || stubInfo.accessType == access_unset);
 
     PolymorphicGetByIdList* result = new PolymorphicGetByIdList(stubInfo);

Source/JavaScriptCore/bytecode/SpeculatedType.h

@@ ... @@ static const SpeculatedType SpecObject
 static const SpeculatedType SpecStringIdent = 0x00010000; // It's definitely a JSString, and it's an identifier.
 static const SpeculatedType SpecStringVar = 0x00020000; // It's definitely a JSString, and it's not an identifier.
 static const SpeculatedType SpecString = 0x00030000; // It's definitely a JSString.
-static const SpeculatedType SpecCellOther = 0x00040000; // It's definitely a JSCell but not a subclass of JSObject and definitely not a JSString.
+static const SpeculatedType SpecCellOther = 0x00040000; // It's definitely a JSCell but not a subclass of JSObject and definitely not a JSString. FIXME: This shouldn't be part of heap-top or bytecode-top. https://bugs.webkit.org/show_bug.cgi?id=133078
 static const SpeculatedType SpecCell = 0x0007ffff; // It's definitely a JSCell.
 static const SpeculatedType SpecInt32 = 0x00200000; // It's definitely an Int32.
 static const SpeculatedType SpecInt52 = 0x00400000; // It's definitely an Int52 and we intend it to unbox it.

Source/JavaScriptCore/bytecode/StructureStubInfo.cpp

@@ ... @@ void StructureStubInfo::deref()
         return;
     }
     case access_get_by_id_self:
-    case access_get_by_id_chain:
     case access_put_by_id_transition_normal:
     case access_put_by_id_transition_direct:
     case access_put_by_id_replace:

@@ ... @@ bool StructureStubInfo::visitWeakReferen
         if (!Heap::isMarked(u.getByIdSelf.baseObjectStructure.get()))
             return false;
         break;
-    case access_get_by_id_chain:
-        if (!Heap::isMarked(u.getByIdChain.baseObjectStructure.get())
-            || !Heap::isMarked(u.getByIdChain.chain.get()))
-            return false;
-        break;
     case access_get_by_id_list: {
         if (!u.getByIdList.list->visitWeak(repatchBuffer))
             return false;

Source/JavaScriptCore/bytecode/StructureStubInfo.h

@@ ... @@ class PolymorphicPutByIdList;
 
 enum AccessType {
     access_get_by_id_self,
-    access_get_by_id_chain,
     access_get_by_id_list,
     access_put_by_id_transition_normal,
     access_put_by_id_transition_direct,

@@ ... @@ inline bool isGetByIdAccess(AccessType a
 {
     switch (accessType) {
     case access_get_by_id_self:
-    case access_get_by_id_chain:
     case access_get_by_id_list:
         return true;
     default:

@@ ... @@ struct StructureStubInfo {
         u.getByIdSelf.baseObjectStructure.set(vm, owner, baseObjectStructure);
     }
 
-    void initGetByIdChain(VM& vm, JSCell* owner, Structure* baseObjectStructure, StructureChain* chain, unsigned count, bool isDirect)
-    {
-        accessType = access_get_by_id_chain;
-
-        u.getByIdChain.baseObjectStructure.set(vm, owner, baseObjectStructure);
-        u.getByIdChain.chain.set(vm, owner, chain);
-        u.getByIdChain.count = count;
-        u.getByIdChain.isDirect = isDirect;
-    }
-
     void initGetByIdList(PolymorphicGetByIdList* list)
     {
         accessType = access_get_by_id_list;

Source/JavaScriptCore/dfg/DFGAbstractHeap.h

@@ ... @@ namespace JSC { namespace DFG {
     macro(Butterfly_arrayBuffer) \
     macro(Butterfly_publicLength) \
     macro(Butterfly_vectorLength) \
+    macro(GetterSetter_getter) \
+    macro(GetterSetter_setter) \
     macro(JSArrayBufferView_length) \
     macro(JSArrayBufferView_mode) \
     macro(JSArrayBufferView_vector) \

Source/JavaScriptCore/dfg/DFGAbstractInterpreterInlines.h

@@ ... @@ bool AbstractInterpreter<AbstractStateTy
         break;
 
     case GetCallee:
+    case GetGetter:
+    case GetSetter:
         forNode(node).setType(SpecFunction);
         break;
 

@@ ... @@ bool AbstractInterpreter<AbstractStateTy
         break;
     }
 
+    case GetGetterSetterByOffset: {
+        forNode(node).set(m_graph, m_graph.m_vm.getterSetterStructure.get());
+        break;
+    }
+
     case MultiGetByOffset: {
         AbstractValue& value = forNode(node->child1());
         ASSERT(!(value.m_type & ~SpecCell)); // Edge filtering should have already ensured this.

Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp

@@ ... @@ private:
     bool handleMinMax(int resultOperand, NodeType op, int registerOffset, int argumentCountIncludingThis);
 
     // Handle calls. This resolves issues surrounding inlining and intrinsics.
+    void handleCall(
+        int result, NodeType op, InlineCallFrame::Kind, unsigned instructionSize,
+        Node* callTarget, int argCount, int registerOffset, CallLinkStatus);
     void handleCall(int result, NodeType op, CodeSpecializationKind, unsigned instructionSize, int callee, int argCount, int registerOffset);
     void handleCall(Instruction* pc, NodeType op, CodeSpecializationKind);
     void emitFunctionChecks(const CallLinkStatus&, Node* callTarget, int registerOffset, CodeSpecializationKind);
     void emitArgumentPhantoms(int registerOffset, int argumentCountIncludingThis, CodeSpecializationKind);
     // Handle inlining. Return true if it succeeded, false if we need to plant a call.
-    bool handleInlining(Node* callTargetNode, int resultOperand, const CallLinkStatus&, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, CodeSpecializationKind);
+    bool handleInlining(Node* callTargetNode, int resultOperand, const CallLinkStatus&, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, InlineCallFrame::Kind);
     // Handle intrinsic functions. Return true if it succeeded, false if we need to plant a call.
     bool handleIntrinsic(int resultOperand, Intrinsic, int registerOffset, int argumentCountIncludingThis, SpeculatedType prediction);
     bool handleTypedArrayConstructor(int resultOperand, InternalFunction*, int registerOffset, int argumentCountIncludingThis, TypedArrayType);
     bool handleConstantInternalFunction(int resultOperand, InternalFunction*, int registerOffset, int argumentCountIncludingThis, SpeculatedType prediction, CodeSpecializationKind);
     Node* handlePutByOffset(Node* base, unsigned identifier, PropertyOffset, Node* value);
-    Node* handleGetByOffset(SpeculatedType, Node* base, unsigned identifierNumber, PropertyOffset);
-    void handleGetByOffset(
-        int destinationOperand, SpeculatedType, Node* base, unsigned identifierNumber,
-        PropertyOffset);
+    Node* handleGetByOffset(SpeculatedType, Node* base, unsigned identifierNumber, PropertyOffset, NodeType op = GetByOffset);
     void handleGetById(
         int destinationOperand, SpeculatedType, Node* base, unsigned identifierNumber,
         const GetByIdStatus&);

@@ ... @@ private:
         m_numPassedVarArgs++;
     }
 
-    Node* addCall(int result, NodeType op, int callee, int argCount, int registerOffset)
+    Node* addCall(int result, NodeType op, Node* callee, int argCount, int registerOffset)
     {
         SpeculatedType prediction = getPrediction();
 
-        addVarArgChild(get(VirtualRegister(callee)));
+        addVarArgChild(callee);
         size_t parameterSlots = JSStack::CallFrameHeaderSize - JSStack::CallerFrameAndPCSize + argCount;
         if (parameterSlots > m_parameterSlots)
             m_parameterSlots = parameterSlots;

@@ ... @@ private:
             VirtualRegister returnValueVR,
             VirtualRegister inlineCallFrameStart,
             int argumentCountIncludingThis,
-            CodeSpecializationKind);
+            InlineCallFrame::Kind);
 
         ~InlineStackEntry()
         {

@@void ByteCodeParser::handleCall(
12031203 int result, NodeType op, CodeSpecializationKind kind, unsigned instructionSize,
12041204 int callee, int argumentCountIncludingThis, int registerOffset)
12051205{
1206  ASSERT(registerOffset <= 0);
1207 
12081206 Node* callTarget = get(VirtualRegister(callee));
12091207
1210  CallLinkStatus callLinkStatus;
1211 
 1208 CallLinkStatus callLinkStatus = CallLinkStatus::computeFor(
 1209 m_inlineStackTop->m_profiledBlock, currentCodeOrigin(),
 1210 m_inlineStackTop->m_callLinkInfos, m_callContextMap);
 1211
 1212 handleCall(
 1213 result, op, InlineCallFrame::kindFor(kind), instructionSize, callTarget,
 1214 argumentCountIncludingThis, registerOffset, callLinkStatus);
 1215}
 1216
 1217void ByteCodeParser::handleCall(
 1218 int result, NodeType op, InlineCallFrame::Kind kind, unsigned instructionSize,
 1219 Node* callTarget, int argumentCountIncludingThis, int registerOffset,
 1220 CallLinkStatus callLinkStatus)
 1221{
 1222 ASSERT(registerOffset <= 0);
 1223 CodeSpecializationKind specializationKind = InlineCallFrame::specializationKindFor(kind);
 1224
12121225 if (m_graph.isConstant(callTarget)) {
12131226 callLinkStatus = CallLinkStatus(
12141227 m_graph.valueOfJSConstant(callTarget)).setIsProved(true);
1215  } else {
1216  callLinkStatus = CallLinkStatus::computeFor(
1217  m_inlineStackTop->m_profiledBlock, currentCodeOrigin(),
1218  m_inlineStackTop->m_callLinkInfos, m_callContextMap);
12191228 }
12201229
12211230 if (!callLinkStatus.canOptimize()) {
12221231 // Oddly, this conflates calls that haven't executed with calls that behaved sufficiently polymorphically
12231232 // that we cannot optimize them.
12241233
1225  addCall(result, op, callee, argumentCountIncludingThis, registerOffset);
 1234 addCall(result, op, callTarget, argumentCountIncludingThis, registerOffset);
12261235 return;
12271236 }
12281237

@@void ByteCodeParser::handleCall(
12301239 SpeculatedType prediction = getPrediction();
12311240
12321241 if (InternalFunction* function = callLinkStatus.internalFunction()) {
1233  if (handleConstantInternalFunction(result, function, registerOffset, argumentCountIncludingThis, prediction, kind)) {
 1242 if (handleConstantInternalFunction(result, function, registerOffset, argumentCountIncludingThis, prediction, specializationKind)) {
12341243 // This phantoming has to be *after* the code for the intrinsic, to signify that
12351244 // the inputs must be kept alive whatever exits the intrinsic may do.
12361245 addToGraph(Phantom, callTarget);
1237  emitArgumentPhantoms(registerOffset, argumentCountIncludingThis, kind);
 1246 emitArgumentPhantoms(registerOffset, argumentCountIncludingThis, specializationKind);
12381247 return;
12391248 }
12401249
12411250 // Can only handle this using the generic call handler.
1242  addCall(result, op, callee, argumentCountIncludingThis, registerOffset);
 1251 addCall(result, op, callTarget, argumentCountIncludingThis, registerOffset);
12431252 return;
12441253 }
12451254
1246  Intrinsic intrinsic = callLinkStatus.intrinsicFor(kind);
 1255 Intrinsic intrinsic = callLinkStatus.intrinsicFor(specializationKind);
12471256 if (intrinsic != NoIntrinsic) {
1248  emitFunctionChecks(callLinkStatus, callTarget, registerOffset, kind);
 1257 emitFunctionChecks(callLinkStatus, callTarget, registerOffset, specializationKind);
12491258
12501259 if (handleIntrinsic(result, intrinsic, registerOffset, argumentCountIncludingThis, prediction)) {
12511260 // This phantoming has to be *after* the code for the intrinsic, to signify that
12521261 // the inputs must be kept alive whatever exits the intrinsic may do.
12531262 addToGraph(Phantom, callTarget);
1254  emitArgumentPhantoms(registerOffset, argumentCountIncludingThis, kind);
 1263 emitArgumentPhantoms(registerOffset, argumentCountIncludingThis, specializationKind);
12551264 if (m_graph.compilation())
12561265 m_graph.compilation()->noticeInlinedCall();
12571266 return;

@@void ByteCodeParser::handleCall(
12621271 return;
12631272 }
12641273
1265  addCall(result, op, callee, argumentCountIncludingThis, registerOffset);
 1274 addCall(result, op, callTarget, argumentCountIncludingThis, registerOffset);
12661275}
12671276
12681277void ByteCodeParser::emitFunctionChecks(const CallLinkStatus& callLinkStatus, Node* callTarget, int registerOffset, CodeSpecializationKind kind)

@@void ByteCodeParser::emitArgumentPhantom
12971306 addToGraph(Phantom, get(virtualRegisterForArgument(i, registerOffset)));
12981307}
12991308
1300 bool ByteCodeParser::handleInlining(Node* callTargetNode, int resultOperand, const CallLinkStatus& callLinkStatus, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, CodeSpecializationKind kind)
 1309bool ByteCodeParser::handleInlining(Node* callTargetNode, int resultOperand, const CallLinkStatus& callLinkStatus, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, InlineCallFrame::Kind kind)
13011310{
13021311 static const bool verbose = false;
13031312
 1313 CodeSpecializationKind specializationKind = InlineCallFrame::specializationKindFor(kind);
 1314
13041315 if (verbose)
13051316 dataLog("Considering inlining ", callLinkStatus, " into ", currentCodeOrigin(), "\n");
13061317

@@bool ByteCodeParser::handleInlining(Node
13331344 // if we had a static proof of what was being called; this might happen for example if you call a
13341345 // global function, where watchpointing gives us static information. Overall, it's a rare case
13351346 // because we expect that any hot callees would have already been compiled.
1336  CodeBlock* codeBlock = executable->baselineCodeBlockFor(kind);
 1347 CodeBlock* codeBlock = executable->baselineCodeBlockFor(specializationKind);
13371348 if (!codeBlock) {
13381349 if (verbose)
13391350 dataLog(" Failing because no code block available.\n");
13401351 return false;
13411352 }
13421353 CapabilityLevel capabilityLevel = inlineFunctionForCapabilityLevel(
1343  codeBlock, kind, callLinkStatus.isClosureCall());
 1354 codeBlock, specializationKind, callLinkStatus.isClosureCall());
13441355 if (!canInline(capabilityLevel)) {
13451356 if (verbose)
13461357 dataLog(" Failing because the function is not inlineable.\n");

@@bool ByteCodeParser::handleInlining(Node
13921403 // Now we know without a doubt that we are committed to inlining. So begin the process
13931404 // by checking the callee (if necessary) and making sure that arguments and the callee
13941405 // are flushed.
1395  emitFunctionChecks(callLinkStatus, callTargetNode, registerOffset, kind);
 1406 emitFunctionChecks(callLinkStatus, callTargetNode, registerOffset, specializationKind);
13961407
13971408 // FIXME: Don't flush constants!
13981409

@@bool ByteCodeParser::handleConstantInter
18501861 return false;
18511862}
18521863
1853 Node* ByteCodeParser::handleGetByOffset(SpeculatedType prediction, Node* base, unsigned identifierNumber, PropertyOffset offset)
 1864Node* ByteCodeParser::handleGetByOffset(SpeculatedType prediction, Node* base, unsigned identifierNumber, PropertyOffset offset, NodeType op)
18541865{
18551866 Node* propertyStorage;
18561867 if (isInlineOffset(offset))
18571868 propertyStorage = base;
18581869 else
18591870 propertyStorage = addToGraph(GetButterfly, base);
1860  Node* getByOffset = addToGraph(GetByOffset, OpInfo(m_graph.m_storageAccessData.size()), OpInfo(prediction), propertyStorage, base);
 1871 Node* getByOffset = addToGraph(op, OpInfo(m_graph.m_storageAccessData.size()), OpInfo(prediction), propertyStorage, base);
18611872
18621873 StorageAccessData storageAccessData;
18631874 storageAccessData.offset = offset;

@@Node* ByteCodeParser::handleGetByOffset(
18671878 return getByOffset;
18681879}
18691880
1870 void ByteCodeParser::handleGetByOffset(
1871  int destinationOperand, SpeculatedType prediction, Node* base, unsigned identifierNumber,
1872  PropertyOffset offset)
1873 {
1874  set(VirtualRegister(destinationOperand), handleGetByOffset(prediction, base, identifierNumber, offset));
1875 }
1876 
18771881Node* ByteCodeParser::handlePutByOffset(Node* base, unsigned identifier, PropertyOffset offset, Node* value)
18781882{
18791883 Node* propertyStorage;

@@void ByteCodeParser::handleGetById(
19101914 int destinationOperand, SpeculatedType prediction, Node* base, unsigned identifierNumber,
19111915 const GetByIdStatus& getByIdStatus)
19121916{
 1917 NodeType getById = getByIdStatus.makesCalls() ? GetByIdFlush : GetById;
 1918
19131919 if (!getByIdStatus.isSimple() || !Options::enableAccessInlining()) {
19141920 set(VirtualRegister(destinationOperand),
1915  addToGraph(
1916  getByIdStatus.makesCalls() ? GetByIdFlush : GetById,
1917  OpInfo(identifierNumber), OpInfo(prediction), base));
 1921 addToGraph(getById, OpInfo(identifierNumber), OpInfo(prediction), base));
19181922 return;
19191923 }
19201924
19211925 if (getByIdStatus.numVariants() > 1) {
1922  if (!isFTL(m_graph.m_plan.mode) || !Options::enablePolymorphicAccessInlining()) {
 1926 if (getByIdStatus.makesCalls() || !isFTL(m_graph.m_plan.mode)
 1927 || !Options::enablePolymorphicAccessInlining()) {
19231928 set(VirtualRegister(destinationOperand),
1924  addToGraph(GetById, OpInfo(identifierNumber), OpInfo(prediction), base));
 1929 addToGraph(getById, OpInfo(identifierNumber), OpInfo(prediction), base));
19251930 return;
19261931 }
19271932

@@void ByteCodeParser::handleGetById(
19541959 if (m_graph.compilation())
19551960 m_graph.compilation()->noticeInlinedGetById();
19561961
1957  Node* originalBaseForBaselineJIT = base;
 1962 Node* originalBase = base;
19581963
19591964 addToGraph(CheckStructure, OpInfo(m_graph.addStructureSet(variant.structureSet())), base);
19601965

@@void ByteCodeParser::handleGetById(
19691974 // on something other than the base following the CheckStructure on base, or if the
19701975 // access was compiled to a WeakJSConstant specific value, in which case we might not
19711976 // have any explicit use of the base at all.
1972  if (variant.specificValue() || originalBaseForBaselineJIT != base)
1973  addToGraph(Phantom, originalBaseForBaselineJIT);
 1977 if (variant.specificValue() || originalBase != base)
 1978 addToGraph(Phantom, originalBase);
19741979
1975  if (variant.specificValue()) {
1976  ASSERT(variant.specificValue().isCell());
1977 
1978  set(VirtualRegister(destinationOperand), cellConstant(variant.specificValue().asCell()));
 1980 Node* loadedValue;
 1981 if (variant.specificValue())
 1982 loadedValue = cellConstant(variant.specificValue().asCell());
 1983 else {
 1984 loadedValue = handleGetByOffset(
 1985 prediction, base, identifierNumber, variant.offset(),
 1986 variant.callLinkStatus() ? GetGetterSetterByOffset : GetByOffset);
 1987 }
 1988
 1989 if (!variant.callLinkStatus()) {
 1990 set(VirtualRegister(destinationOperand), loadedValue);
19791991 return;
19801992 }
19811993
1982  handleGetByOffset(
1983  destinationOperand, prediction, base, identifierNumber, variant.offset());
 1994 Node* getter = addToGraph(GetGetter, loadedValue);
 1995
 1996 // Make a call. We don't try to get fancy with using the smallest operand number because
 1997 // the stack layout phase should compress the stack anyway.
 1998
 1999 unsigned numberOfParameters = 0;
 2000 numberOfParameters++; // The 'this' argument.
 2001 numberOfParameters++; // True return PC.
 2002
 2003 // Start with a register offset that corresponds to the last in-use register.
 2004 int registerOffset = virtualRegisterForLocal(
 2005 m_inlineStackTop->m_profiledBlock->m_numCalleeRegisters - 1).offset();
 2006 registerOffset -= numberOfParameters;
 2007 registerOffset -= JSStack::CallFrameHeaderSize;
 2008
 2009 // Get the alignment right.
 2010 registerOffset = -WTF::roundUpToMultipleOf(
 2011 stackAlignmentRegisters(),
 2012 -registerOffset);
 2013
 2014 ensureLocals(
 2015 m_inlineStackTop->remapOperand(
 2016 VirtualRegister(registerOffset)).toLocal());
 2017
 2018 // Issue SetLocals. This has two effects:
 2019 // 1) That's how handleCall() sees the arguments.
 2020 // 2) If we inline then this ensures that the arguments are flushed so that if you use
 2021 // the dreaded arguments object on the getter, the right things happen. Well, sort of -
 2022 // since we only really care about 'this' in this case. But we're not going to take that
 2023 // shortcut.
 2024 int nextRegister = registerOffset + JSStack::CallFrameHeaderSize;
 2025 set(VirtualRegister(nextRegister++), originalBase, ImmediateNakedSet);
 2026
 2027 handleCall(
 2028 destinationOperand, Call, InlineCallFrame::GetterCall, OPCODE_LENGTH(op_get_by_id),
 2029 getter, numberOfParameters - 1, registerOffset, *variant.callLinkStatus());
19842030}
19852031
19862032void ByteCodeParser::emitPutById(
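A note on the frame-setup math in the getter-call hunk above: the register offset starts past the last in-use local, reserves the argument slots and call frame header, then gets rounded to the stack alignment. A minimal standalone sketch of that arithmetic follows; the constants and the local-numbering convention (`offset = -1 - local`) are illustrative stand-ins for `JSStack::CallFrameHeaderSize`, `stackAlignmentRegisters()`, and `virtualRegisterForLocal()`, not JSC's actual values.

```cpp
#include <cassert>

// Illustrative stand-ins for JSStack::CallFrameHeaderSize and
// stackAlignmentRegisters(); the real values live in JSC headers.
static const int kCallFrameHeaderSize = 5;
static const int kStackAlignmentRegisters = 2;

// Round x (x >= 0) up to a multiple of n, like WTF::roundUpToMultipleOf.
int roundUpToMultipleOf(int n, int x)
{
    return (x + n - 1) / n * n;
}

// Locals live at negative offsets (offset = -1 - local). Start at the last
// in-use local, reserve the parameter slots and the call frame header, then
// align the (negative) offset by rounding its magnitude up.
int getterCallRegisterOffset(int numCalleeRegisters, int numberOfParameters)
{
    int registerOffset = -1 - (numCalleeRegisters - 1);
    registerOffset -= numberOfParameters;
    registerOffset -= kCallFrameHeaderSize;
    return -roundUpToMultipleOf(kStackAlignmentRegisters, -registerOffset);
}
```

With 10 callee registers and 2 parameters this yields -18: the unaligned offset is -17, and negating, rounding 17 up to 18, and negating again produces the aligned frame start.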

@@ByteCodeParser::InlineStackEntry::Inline
33763422 VirtualRegister returnValueVR,
33773423 VirtualRegister inlineCallFrameStart,
33783424 int argumentCountIncludingThis,
3379  CodeSpecializationKind kind)
 3425 InlineCallFrame::Kind kind)
33803426 : m_byteCodeParser(byteCodeParser)
33813427 , m_codeBlock(codeBlock)
33823428 , m_profiledBlock(profiledBlock)

@@ByteCodeParser::InlineStackEntry::Inline
34353481 m_inlineCallFrame->isClosureCall = true;
34363482 m_inlineCallFrame->caller = byteCodeParser->currentCodeOrigin();
34373483 m_inlineCallFrame->arguments.resize(argumentCountIncludingThis); // Set the number of arguments including this, but don't configure the value recoveries, yet.
3438  m_inlineCallFrame->kind = InlineCallFrame::kindFor(kind);
 3484 m_inlineCallFrame->kind = kind;
34393485
34403486 if (m_inlineCallFrame->caller.inlineCallFrame)
34413487 m_inlineCallFrame->capturedVars = m_inlineCallFrame->caller.inlineCallFrame->capturedVars;

@@bool ByteCodeParser::parse()
36743720
36753721 InlineStackEntry inlineStackEntry(
36763722 this, m_codeBlock, m_profiledBlock, 0, 0, VirtualRegister(), VirtualRegister(),
3677  m_codeBlock->numParameters(), CodeForCall);
 3723 m_codeBlock->numParameters(), InlineCallFrame::Call);
36783724
36793725 parseCodeBlock();
36803726

Source/JavaScriptCore/dfg/DFGCSEPhase.cpp

@@private:
690690 return 0;
691691 }
692692
 693 Node* getGetterSetterByOffsetLoadElimination(unsigned identifierNumber, Node* base)
 694 {
 695 for (unsigned i = m_indexInBlock; i--;) {
 696 Node* node = m_currentBlock->at(i);
 697 if (node == base)
 698 break;
 699
 700 switch (node->op()) {
 701 case GetGetterSetterByOffset:
 702 if (node->child2() == base
 703 && m_graph.m_storageAccessData[node->storageAccessDataIndex()].identifierNumber == identifierNumber)
 704 return node;
 705 break;
 706
 707 case PutByValDirect:
 708 case PutByVal:
 709 case PutByValAlias:
 710 if (m_graph.byValIsPure(node)) {
 711 // If PutByVal speculates that it's accessing an array with an
 712 // integer index, then it's impossible for it to cause a structure
 713 // change.
 714 break;
 715 }
 716 return 0;
 717
 718 default:
 719 if (m_graph.clobbersWorld(node))
 720 return 0;
 721 break;
 722 }
 723 }
 724 return 0;
 725 }
 726
693727 Node* putByOffsetStoreElimination(unsigned identifierNumber, Node* child1)
694728 {
695729 for (unsigned i = m_indexInBlock; i--;) {

@@private:
845879 return 0;
846880 }
847881
848  Node* getTypedArrayByteOffsetLoadElimination(Node* child1)
 882 Node* getInternalFieldLoadElimination(NodeType op, Node* child1)
849883 {
850884 for (unsigned i = m_indexInBlock; i--;) {
851885 Node* node = m_currentBlock->at(i);
852886 if (node == child1)
853887 break;
854888
855  switch (node->op()) {
856  case GetTypedArrayByteOffset: {
857  if (node->child1() == child1)
858  return node;
859  break;
860  }
 889 if (node->op() == op && node->child1() == child1)
 890 return node;
861891
862  default:
863  if (m_graph.clobbersWorld(node))
864  return 0;
865  break;
866  }
 892 if (m_graph.clobbersWorld(node))
 893 return 0;
867894 }
868895 return 0;
869896 }
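The `getInternalFieldLoadElimination()` refactor above (and the new `getGetterSetterByOffsetLoadElimination()`) both follow the same backward-scan pattern: walk back from the current instruction, reuse a matching earlier load, and give up on anything that clobbers the world. A toy sketch of that pattern, with an invented `Node` representation standing in for DFG nodes:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Invented stand-in for a DFG node; the real class carries much more.
struct Node {
    int op;             // opcode identifier
    Node* child1;       // the base the load reads from
    bool clobbersWorld; // stand-in for Graph::clobbersWorld(node)
};

// Scan backwards from indexInBlock for an identical earlier load of
// (op, child1). Stop at the base's own definition; bail on any clobber.
Node* internalFieldLoadElimination(const std::vector<Node*>& block,
    size_t indexInBlock, int op, Node* child1)
{
    for (size_t i = indexInBlock; i--;) {
        Node* node = block[i];
        if (node == child1)
            break; // base was only just created; nothing earlier can match
        if (node->op == op && node->child1 == child1)
            return node; // reuse the earlier identical load
        if (node->clobbersWorld)
            return nullptr; // the field may have changed since that load
    }
    return nullptr;
}
```

The refactor in the hunk collapses the per-opcode `switch` into exactly this shape, which is why `GetTypedArrayByteOffset`, `GetGetter`, and `GetSetter` can now share one helper parameterized by `op`.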

@@private:
14011428 break;
14021429 }
14031430
1404  case GetTypedArrayByteOffset: {
 1431 case GetTypedArrayByteOffset:
 1432 case GetGetter:
 1433 case GetSetter: {
14051434 if (cseMode == StoreElimination)
14061435 break;
1407  setReplacement(getTypedArrayByteOffsetLoadElimination(node->child1().node()));
 1436 setReplacement(getInternalFieldLoadElimination(node->op(), node->child1().node()));
14081437 break;
14091438 }
14101439

@@private:
14201449 setReplacement(getByOffsetLoadElimination(m_graph.m_storageAccessData[node->storageAccessDataIndex()].identifierNumber, node->child2().node()));
14211450 break;
14221451
 1452 case GetGetterSetterByOffset:
 1453 if (cseMode == StoreElimination)
 1454 break;
 1455 setReplacement(getGetterSetterByOffsetLoadElimination(m_graph.m_storageAccessData[node->storageAccessDataIndex()].identifierNumber, node->child2().node()));
 1456 break;
 1457
14231458 case MultiGetByOffset:
14241459 if (cseMode == StoreElimination)
14251460 break;

Source/JavaScriptCore/dfg/DFGClobberize.h

@@void clobberize(Graph& graph, Node* node
213213 write(World);
214214 return;
215215
 216 case GetGetter:
 217 read(GetterSetter_getter);
 218 return;
 219
 220 case GetSetter:
 221 read(GetterSetter_setter);
 222 return;
 223
216224 case GetCallee:
217225 read(AbstractHeap(Variables, JSStack::Callee));
218226 return;

@@void clobberize(Graph& graph, Node* node
481489 return;
482490
483491 case GetByOffset:
 492 case GetGetterSetterByOffset:
484493 read(AbstractHeap(NamedProperties, graph.m_storageAccessData[node->storageAccessDataIndex()].identifierNumber));
485494 return;
486495

Source/JavaScriptCore/dfg/DFGFixupPhase.cpp

@@private:
885885 case GetClosureRegisters:
886886 case SkipTopScope:
887887 case SkipScope:
888  case GetScope: {
 888 case GetScope:
 889 case GetGetter:
 890 case GetSetter: {
889891 fixEdge<KnownCellUse>(node->child1());
890892 break;
891893 }

@@private:
945947 break;
946948 }
947949
948  case GetByOffset: {
 950 case GetByOffset:
 951 case GetGetterSetterByOffset: {
949952 if (!node->child1()->hasStorageResult())
950953 fixEdge<KnownCellUse>(node->child1());
951954 fixEdge<KnownCellUse>(node->child2());

@@private:
10341037 Node* globalObjectNode = m_insertionSet.insertNode(
10351038 m_indexInBlock, SpecNone, WeakJSConstant, node->origin,
10361039 OpInfo(m_graph.globalObjectFor(node->origin.semantic)));
 1040 // FIXME: This probably shouldn't have an unconditional barrier.
 1041 // https://bugs.webkit.org/show_bug.cgi?id=133104
10371042 Node* barrierNode = m_graph.addNode(
10381043 SpecNone, StoreBarrier, m_currentNode->origin,
10391044 Edge(globalObjectNode, KnownCellUse));

Source/JavaScriptCore/dfg/DFGJITCompiler.cpp

@@void JITCompiler::linkFunction()
417417 m_jitCode->shrinkToFit();
418418 codeBlock()->shrinkToFit(CodeBlock::LateShrink);
419419
420  linkBuffer->link(m_callArityFixup, FunctionPtr((m_vm->getCTIStub(arityFixup)).code().executableAddress()));
 420 linkBuffer->link(m_callArityFixup, FunctionPtr((m_vm->getCTIStub(arityFixupGenerator)).code().executableAddress()));
421421
422422 disassemble(*linkBuffer);
423423

Source/JavaScriptCore/dfg/DFGNode.h

@@struct Node {
11461146
11471147 bool hasStorageAccessData()
11481148 {
1149  return op() == GetByOffset || op() == PutByOffset;
 1149 return op() == GetByOffset || op() == GetGetterSetterByOffset || op() == PutByOffset;
11501150 }
11511151
11521152 unsigned storageAccessDataIndex()

Source/JavaScriptCore/dfg/DFGNodeType.h

@@namespace JSC { namespace DFG {
176176 macro(GetIndexedPropertyStorage, NodeResultStorage) \
177177 macro(ConstantStoragePointer, NodeResultStorage) \
178178 macro(TypedArrayWatchpoint, NodeMustGenerate) \
 179 macro(GetGetter, NodeResultJS) \
 180 macro(GetSetter, NodeResultJS) \
179181 macro(GetByOffset, NodeResultJS) \
 182 macro(GetGetterSetterByOffset, NodeResultJS) \
180183 macro(MultiGetByOffset, NodeResultJS) \
181184 macro(PutByOffset, NodeMustGenerate) \
182185 macro(MultiPutByOffset, NodeMustGenerate) \

Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp

@@void reifyInlinedCallFrames(CCallHelpers
108108 InlineCallFrame* inlineCallFrame = codeOrigin.inlineCallFrame;
109109 CodeBlock* baselineCodeBlock = jit.baselineCodeBlockFor(codeOrigin);
110110 CodeBlock* baselineCodeBlockForCaller = jit.baselineCodeBlockFor(inlineCallFrame->caller);
 111 void* jumpTarget;
 112 void* trueReturnPC = nullptr;
 113
111114 unsigned callBytecodeIndex = inlineCallFrame->caller.bytecodeIndex;
112  CallLinkInfo* callLinkInfo =
113  baselineCodeBlockForCaller->getCallLinkInfoForBytecodeIndex(callBytecodeIndex);
114  RELEASE_ASSERT(callLinkInfo);
115115
116  void* jumpTarget = callLinkInfo->callReturnLocation.executableAddress();
 116 switch (inlineCallFrame->kind) {
 117 case InlineCallFrame::Call:
 118 case InlineCallFrame::Construct: {
 119 CallLinkInfo* callLinkInfo =
 120 baselineCodeBlockForCaller->getCallLinkInfoForBytecodeIndex(callBytecodeIndex);
 121 RELEASE_ASSERT(callLinkInfo);
 122
 123 jumpTarget = callLinkInfo->callReturnLocation.executableAddress();
 124 break;
 125 }
 126
 127 case InlineCallFrame::GetterCall:
 128 case InlineCallFrame::SetterCall: {
 129 StructureStubInfo* stubInfo =
 130 baselineCodeBlockForCaller->findStubInfo(CodeOrigin(callBytecodeIndex));
 131 RELEASE_ASSERT(stubInfo);
 132
 133 switch (inlineCallFrame->kind) {
 134 case InlineCallFrame::GetterCall:
 135 jumpTarget = jit.vm()->getCTIStub(baselineGetterReturnThunkGenerator).code().executableAddress();
 136 break;
 137 case InlineCallFrame::SetterCall:
 138 jumpTarget = jit.vm()->getCTIStub(baselineSetterReturnThunkGenerator).code().executableAddress();
 139 break;
 140 default:
 141 RELEASE_ASSERT_NOT_REACHED();
 142 break;
 143 }
 144
 145 trueReturnPC = stubInfo->callReturnLocation.labelAtOffset(
 146 stubInfo->patch.deltaCallToDone).executableAddress();
 147 break;
 148 } }
117149
118150 GPRReg callerFrameGPR;
119151 if (inlineCallFrame->caller.inlineCallFrame) {

@@void reifyInlinedCallFrames(CCallHelpers
122154 } else
123155 callerFrameGPR = GPRInfo::callFrameRegister;
124156
 157 jit.storePtr(AssemblyHelpers::TrustedImmPtr(jumpTarget), AssemblyHelpers::addressForByteOffset(inlineCallFrame->returnPCOffset()));
 158 if (trueReturnPC)
 159 jit.storePtr(AssemblyHelpers::TrustedImmPtr(trueReturnPC), AssemblyHelpers::addressFor(inlineCallFrame->stackOffset + virtualRegisterForArgument(inlineCallFrame->arguments.size()).offset()));
 160
125161#if USE(JSVALUE64)
126162 jit.storePtr(AssemblyHelpers::TrustedImmPtr(baselineCodeBlock), AssemblyHelpers::addressFor((VirtualRegister)(inlineCallFrame->stackOffset + JSStack::CodeBlock)));
127163 if (!inlineCallFrame->isClosureCall)
128164 jit.store64(AssemblyHelpers::TrustedImm64(JSValue::encode(JSValue(inlineCallFrame->calleeConstant()->scope()))), AssemblyHelpers::addressFor((VirtualRegister)(inlineCallFrame->stackOffset + JSStack::ScopeChain)));
129165 jit.store64(callerFrameGPR, AssemblyHelpers::addressForByteOffset(inlineCallFrame->callerFrameOffset()));
130  jit.storePtr(AssemblyHelpers::TrustedImmPtr(jumpTarget), AssemblyHelpers::addressForByteOffset(inlineCallFrame->returnPCOffset()));
131166 uint32_t locationBits = CallFrame::Location::encodeAsBytecodeOffset(codeOrigin.bytecodeIndex);
132167 jit.store32(AssemblyHelpers::TrustedImm32(locationBits), AssemblyHelpers::tagFor((VirtualRegister)(inlineCallFrame->stackOffset + JSStack::ArgumentCount)));
133168 jit.store32(AssemblyHelpers::TrustedImm32(inlineCallFrame->arguments.size()), AssemblyHelpers::payloadFor((VirtualRegister)(inlineCallFrame->stackOffset + JSStack::ArgumentCount)));

@@void reifyInlinedCallFrames(CCallHelpers
143178 if (!inlineCallFrame->isClosureCall)
144179 jit.storePtr(AssemblyHelpers::TrustedImmPtr(inlineCallFrame->calleeConstant()->scope()), AssemblyHelpers::payloadFor((VirtualRegister)(inlineCallFrame->stackOffset + JSStack::ScopeChain)));
145180 jit.storePtr(callerFrameGPR, AssemblyHelpers::addressForByteOffset(inlineCallFrame->callerFrameOffset()));
146  jit.storePtr(AssemblyHelpers::TrustedImmPtr(jumpTarget), AssemblyHelpers::addressForByteOffset(inlineCallFrame->returnPCOffset()));
147181 Instruction* instruction = baselineCodeBlock->instructions().begin() + codeOrigin.bytecodeIndex;
148182 uint32_t locationBits = CallFrame::Location::encodeAsBytecodeInstruction(instruction);
149183 jit.store32(AssemblyHelpers::TrustedImm32(locationBits), AssemblyHelpers::tagFor((VirtualRegister)(inlineCallFrame->stackOffset + JSStack::ArgumentCount)));

Source/JavaScriptCore/dfg/DFGPredictionPropagationPhase.cpp

@@private:
188188 changed |= setPrediction(node->getHeapPrediction());
189189 break;
190190 }
 191
 192 case GetGetterSetterByOffset: {
 193 changed |= setPrediction(SpecCellOther);
 194 break;
 195 }
 196
 197 case GetGetter:
 198 case GetSetter:
 199 case GetCallee:
 200 case NewFunctionNoCheck:
 201 case NewFunctionExpression: {
 202 changed |= setPrediction(SpecFunction);
 203 break;
 204 }
191205
192206 case StringCharCodeAt: {
193207 changed |= setPrediction(SpecInt32);

@@private:
423437 break;
424438 }
425439
426  case GetCallee: {
427  changed |= setPrediction(SpecFunction);
428  break;
429  }
430 
431440 case CreateThis:
432441 case NewObject: {
433442 changed |= setPrediction(SpecFinalObject);

@@private:
490499 break;
491500 }
492501
493  case NewFunctionNoCheck:
494  case NewFunctionExpression: {
495  changed |= setPrediction(SpecFunction);
496  break;
497  }
498 
499502 case PutByValAlias:
500503 case GetArrayLength:
501504 case GetTypedArrayByteOffset:

Source/JavaScriptCore/dfg/DFGSafeToExecute.h

@@bool safeToExecute(AbstractStateType& st
253253 case ValueRep:
254254 case DoubleRep:
255255 case Int52Rep:
 256 case GetGetter:
 257 case GetSetter:
256258 return true;
257259
258260 case GetByVal:

@@bool safeToExecute(AbstractStateType& st
285287 StructureSet(node->structureTransitionData().previousStructure));
286288
287289 case GetByOffset:
 290 case GetGetterSetterByOffset:
288291 case PutByOffset:
289292 return state.forNode(node->child1()).m_currentKnownStructure.isValidOffset(
290293 graph.m_storageAccessData[node->storageAccessDataIndex()].offset);

Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp

3535#include "DFGOperations.h"
3636#include "DFGSlowPathGenerator.h"
3737#include "Debugger.h"
 38#include "GetterSetter.h"
3839#include "JSActivation.h"
3940#include "ObjectPrototype.h"
4041#include "JSCInlines.h"

@@void SpeculativeJIT::compile(Node* node)
37923793 break;
37933794 }
37943795
 3796 case GetGetterSetterByOffset: {
 3797 StorageOperand storage(this, node->child1());
 3798 GPRTemporary resultPayload(this);
 3799
 3800 GPRReg storageGPR = storage.gpr();
 3801 GPRReg resultPayloadGPR = resultPayload.gpr();
 3802
 3803 StorageAccessData& storageAccessData = m_jit.graph().m_storageAccessData[node->storageAccessDataIndex()];
 3804
 3805 m_jit.load32(JITCompiler::Address(storageGPR, offsetRelativeToBase(storageAccessData.offset) + OBJECT_OFFSETOF(EncodedValueDescriptor, asBits.payload)), resultPayloadGPR);
 3806
 3807 cellResult(resultPayloadGPR, node);
 3808 break;
 3809 }
 3810
 3811 case GetGetter: {
 3812 SpeculateCellOperand op1(this, node->child1());
 3813 GPRTemporary result(this, Reuse, op1);
 3814
 3815 GPRReg op1GPR = op1.gpr();
 3816 GPRReg resultGPR = result.gpr();
 3817
 3818 m_jit.loadPtr(JITCompiler::Address(op1GPR, GetterSetter::offsetOfGetter()), resultGPR);
 3819
 3820 cellResult(resultGPR, node);
 3821 break;
 3822 }
 3823
 3824 case GetSetter: {
 3825 SpeculateCellOperand op1(this, node->child1());
 3826 GPRTemporary result(this, Reuse, op1);
 3827
 3828 GPRReg op1GPR = op1.gpr();
 3829 GPRReg resultGPR = result.gpr();
 3830
 3831 m_jit.loadPtr(JITCompiler::Address(op1GPR, GetterSetter::offsetOfSetter()), resultGPR);
 3832
 3833 cellResult(resultGPR, node);
 3834 break;
 3835 }
 3836
37953837 case PutByOffset: {
37963838 StorageOperand storage(this, node->child1());
37973839 JSValueOperand value(this, node->child3());

Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp

3535#include "DFGOperations.h"
3636#include "DFGSlowPathGenerator.h"
3737#include "Debugger.h"
 38#include "GetterSetter.h"
3839#include "JSCInlines.h"
3940#include "ObjectPrototype.h"
4041#include "SpillRegistersMode.h"

@@void SpeculativeJIT::compile(Node* node)
38533854 break;
38543855 }
38553856
3856  case GetByOffset: {
 3857 case GetByOffset:
 3858 case GetGetterSetterByOffset: {
38573859 StorageOperand storage(this, node->child1());
38583860 GPRTemporary result(this, Reuse, storage);
38593861

@@void SpeculativeJIT::compile(Node* node)
38683870 break;
38693871 }
38703872
 3873 case GetGetter: {
 3874 SpeculateCellOperand op1(this, node->child1());
 3875 GPRTemporary result(this, Reuse, op1);
 3876
 3877 GPRReg op1GPR = op1.gpr();
 3878 GPRReg resultGPR = result.gpr();
 3879
 3880 m_jit.loadPtr(JITCompiler::Address(op1GPR, GetterSetter::offsetOfGetter()), resultGPR);
 3881
 3882 cellResult(resultGPR, node);
 3883 break;
 3884 }
 3885
 3886 case GetSetter: {
 3887 SpeculateCellOperand op1(this, node->child1());
 3888 GPRTemporary result(this, Reuse, op1);
 3889
 3890 GPRReg op1GPR = op1.gpr();
 3891 GPRReg resultGPR = result.gpr();
 3892
 3893 m_jit.loadPtr(JITCompiler::Address(op1GPR, GetterSetter::offsetOfSetter()), resultGPR);
 3894
 3895 cellResult(resultGPR, node);
 3896 break;
 3897 }
 3898
38713899 case PutByOffset: {
38723900 StorageOperand storage(this, node->child1());
38733901 JSValueOperand value(this, node->child3());
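Both SpeculativeJIT backends compile `GetGetter`/`GetSetter` to a single pointer load at a fixed offset within the GetterSetter cell. A standalone sketch of that code shape; `ToyGetterSetter` is a stand-in struct, since the real `GetterSetter` exposes its offsets via `offsetOfGetter()`/`offsetOfSetter()` in JSC:

```cpp
#include <cassert>
#include <cstddef>

// Stand-in for JSC's GetterSetter cell: two function pointers at fixed
// offsets, each exposed for the JIT to bake into a load instruction.
struct ToyGetterSetter {
    void* getter;
    void* setter;
    static ptrdiff_t offsetOfGetter() { return offsetof(ToyGetterSetter, getter); }
    static ptrdiff_t offsetOfSetter() { return offsetof(ToyGetterSetter, setter); }
};

// C++ equivalent of m_jit.loadPtr(Address(baseGPR, offset), resultGPR):
// read one pointer at a constant byte offset from the base cell.
void* loadAtOffset(void* base, ptrdiff_t offset)
{
    return *reinterpret_cast<void**>(reinterpret_cast<char*>(base) + offset);
}
```

This is why the nodes can be given `KnownCellUse` edges and CSE'd as pure internal-field loads elsewhere in the patch: each is just a constant-offset read from an immutable cell field.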

Source/JavaScriptCore/ftl/FTLAbstractHeapRepository.cpp

11/*
2  * Copyright (C) 2013 Apple Inc. All rights reserved.
 2 * Copyright (C) 2013, 2014 Apple Inc. All rights reserved.
33 *
44 * Redistribution and use in source and binary forms, with or without
55 * modification, are permitted provided that the following conditions

2828
2929#if ENABLE(FTL_JIT)
3030
 31#include "GetterSetter.h"
3132#include "JSScope.h"
3233#include "JSVariableObject.h"
3334#include "JSCInlines.h"

Source/JavaScriptCore/ftl/FTLAbstractHeapRepository.h

 /*
- * Copyright (C) 2013 Apple Inc. All rights reserved.
+ * Copyright (C) 2013, 2014 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions

@@ namespace JSC { namespace FTL {
     macro(Butterfly_publicLength, Butterfly::offsetOfPublicLength()) \
     macro(Butterfly_vectorLength, Butterfly::offsetOfVectorLength()) \
     macro(CallFrame_callerFrame, CallFrame::callerFrameOffset()) \
+    macro(GetterSetter_getter, GetterSetter::offsetOfGetter()) \
+    macro(GetterSetter_setter, GetterSetter::offsetOfSetter()) \
     macro(JSArrayBufferView_length, JSArrayBufferView::offsetOfLength()) \
     macro(JSArrayBufferView_mode, JSArrayBufferView::offsetOfMode()) \
     macro(JSArrayBufferView_vector, JSArrayBufferView::offsetOfVector()) \

Source/JavaScriptCore/ftl/FTLCapabilities.cpp

@@ inline CapabilityLevel canCompile(Node*
     case NewArray:
     case NewArrayBuffer:
     case GetByOffset:
+    case GetGetterSetterByOffset:
+    case GetGetter:
+    case GetSetter:
     case PutByOffset:
     case GetGlobalVar:
     case PutGlobalVar:

Source/JavaScriptCore/ftl/FTLLink.cpp

@@ void link(State& state)

     linkBuffer = adoptPtr(new LinkBuffer(vm, &jit, codeBlock, JITCompilationMustSucceed));
     linkBuffer->link(callArityCheck, codeBlock->m_isConstructor ? operationConstructArityCheck : operationCallArityCheck);
-    linkBuffer->link(callArityFixup, FunctionPtr((vm.getCTIStub(arityFixup)).code().executableAddress()));
+    linkBuffer->link(callArityFixup, FunctionPtr((vm.getCTIStub(arityFixupGenerator)).code().executableAddress()));
     linkBuffer->link(mainPathJumps, CodeLocationLabel(bitwise_cast<void*>(state.generatedFunction)));

     state.jitCode->initializeAddressForCall(MacroAssemblerCodePtr(bitwise_cast<void*>(state.generatedFunction)));

Source/JavaScriptCore/ftl/FTLLowerDFGToLLVM.cpp

@@ private:
         compileStringCharCodeAt();
         break;
     case GetByOffset:
+    case GetGetterSetterByOffset:
         compileGetByOffset();
         break;
+    case GetGetter:
+        compileGetGetter();
+        break;
+    case GetSetter:
+        compileGetSetter();
+        break;
     case MultiGetByOffset:
         compileMultiGetByOffset();
         break;

@@ private:
             lowStorage(m_node->child1()), data.identifierNumber, data.offset));
     }

+    void compileGetGetter()
+    {
+        setJSValue(m_out.loadPtr(lowCell(m_node->child1()), m_heaps.GetterSetter_getter));
+    }
+
+    void compileGetSetter()
+    {
+        setJSValue(m_out.loadPtr(lowCell(m_node->child1()), m_heaps.GetterSetter_setter));
+    }
+
     void compileMultiGetByOffset()
     {
         LValue base = lowCell(m_node->child1());

Source/JavaScriptCore/jit/AccessorCallJITStubRoutine.h

@@ public:

     virtual bool visitWeak(RepatchBuffer&) override;

-private:
     std::unique_ptr<CallLinkInfo> m_callLinkInfo;
 };


Source/JavaScriptCore/jit/JIT.cpp

@@ void JIT::emitEnterOptimizationCheck()
 }
 #endif

+void JIT::assertStackPointerOffset()
+{
+    if (ASSERT_DISABLED)
+        return;
+
+    addPtr(TrustedImm32(stackPointerOffsetFor(m_codeBlock) * sizeof(Register)), callFrameRegister, regT0);
+    Jump ok = branchPtr(Equal, regT0, stackPointerRegister);
+    breakpoint();
+    ok.link(this);
+}
+
 #define NEXT_OPCODE(name) \
     m_bytecodeOffset += OPCODE_LENGTH(name); \
     break;

@@ CompilationResult JIT::privateCompile(JI
 #endif
     move(TrustedImmPtr(m_vm->arityCheckFailReturnThunks->returnPCsFor(*m_vm, m_codeBlock->numParameters())), thunkReg);
     loadPtr(BaseIndex(thunkReg, regT0, timesPtr()), thunkReg);
-    emitNakedCall(m_vm->getCTIStub(arityFixup).code());
+    emitNakedCall(m_vm->getCTIStub(arityFixupGenerator).code());

 #if !ASSERT_DISABLED
     m_bytecodeOffset = (unsigned)-1; // Reset this, in order to guard its use with ASSERTs.

Source/JavaScriptCore/jit/JIT.h

@@ namespace JSC {

         static unsigned frameRegisterCountFor(CodeBlock*);
         static int stackPointerOffsetFor(CodeBlock*);
-        
+
     private:
         JIT(VM*, CodeBlock* = 0);

@@ namespace JSC {

         void emit_compareAndJump(OpcodeID, int op1, int op2, unsigned target, RelationalCondition);
         void emit_compareAndJumpSlow(int op1, int op2, unsigned target, DoubleCondition, size_t (JIT_OPERATION *operation)(ExecState*, EncodedJSValue, EncodedJSValue), bool invert, Vector<SlowCaseEntry>::iterator&);
+
+        void assertStackPointerOffset();

         void emit_op_touch_entry(Instruction*);
         void emit_op_add(Instruction*);

Source/JavaScriptCore/jit/JITPropertyAccess.cpp

@@ void JIT::emit_op_get_by_id(Instruction*

     emitValueProfilingSite();
     emitPutVirtualRegister(resultVReg);
+    assertStackPointerOffset();
 }

 void JIT::emitSlow_op_get_by_id(Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter)

Source/JavaScriptCore/jit/ThunkGenerators.cpp

@@ MacroAssemblerCodeRef nativeConstructGen
     return nativeForGenerator(vm, CodeForConstruct);
 }

-MacroAssemblerCodeRef arityFixup(VM* vm)
+MacroAssemblerCodeRef arityFixupGenerator(VM* vm)
 {
     JSInterfaceJIT jit(vm);

@@ MacroAssemblerCodeRef arityFixup(VM* vm)
     return FINALIZE_CODE(patchBuffer, ("fixup arity"));
 }

+MacroAssemblerCodeRef baselineGetterReturnThunkGenerator(VM* vm)
+{
+    JSInterfaceJIT jit(vm);
+
+#if USE(JSVALUE64)
+    jit.move(GPRInfo::returnValueGPR, GPRInfo::regT0);
+#else
+    jit.setupResults(GPRInfo::regT0, GPRInfo::regT1);
+#endif
+
+    unsigned numberOfParameters = 0;
+    numberOfParameters++; // The 'this' argument.
+    numberOfParameters++; // The true return PC.
+
+    unsigned numberOfRegsForCall =
+        JSStack::CallFrameHeaderSize + numberOfParameters;
+
+    unsigned numberOfBytesForCall =
+        numberOfRegsForCall * sizeof(Register) - sizeof(CallerFrameAndPC);
+
+    unsigned alignedNumberOfBytesForCall =
+        WTF::roundUpToMultipleOf(stackAlignmentBytes(), numberOfBytesForCall);
+
+    // The real return address is stored above the arguments. We passed one argument, which is
+    // 'this'. So argument at index 1 is the return address.
+    jit.loadPtr(
+        AssemblyHelpers::Address(
+            AssemblyHelpers::stackPointerRegister,
+            (virtualRegisterForArgument(1).offset() - JSStack::CallerFrameAndPCSize) * sizeof(Register)),
+        GPRInfo::regT2);
+
+    jit.addPtr(
+        AssemblyHelpers::TrustedImm32(alignedNumberOfBytesForCall),
+        AssemblyHelpers::stackPointerRegister);
+
+    jit.jump(GPRInfo::regT2);
+
+    LinkBuffer patchBuffer(*vm, &jit, GLOBAL_THUNK_ID);
+    return FINALIZE_CODE(patchBuffer, ("baseline getter return thunk"));
+}
+
+MacroAssemblerCodeRef baselineSetterReturnThunkGenerator(VM* vm)
+{
+    JSInterfaceJIT jit(vm);
+
+    unsigned numberOfParameters = 0;
+    numberOfParameters++; // The 'this' argument.
+    numberOfParameters++; // The value to set.
+    numberOfParameters++; // The true return PC.
+
+    unsigned numberOfRegsForCall =
+        JSStack::CallFrameHeaderSize + numberOfParameters;
+
+    unsigned numberOfBytesForCall =
+        numberOfRegsForCall * sizeof(Register) - sizeof(CallerFrameAndPC);
+
+    unsigned alignedNumberOfBytesForCall =
+        WTF::roundUpToMultipleOf(stackAlignmentBytes(), numberOfBytesForCall);
+
+    // The real return address is stored above the arguments. We passed two arguments, so
+    // the argument at index 2 is the return address.
+    jit.loadPtr(
+        AssemblyHelpers::Address(
+            AssemblyHelpers::stackPointerRegister,
+            (virtualRegisterForArgument(2).offset() - JSStack::CallerFrameAndPCSize) * sizeof(Register)),
+        GPRInfo::regT2);
+
+    jit.addPtr(
+        AssemblyHelpers::TrustedImm32(alignedNumberOfBytesForCall),
+        AssemblyHelpers::stackPointerRegister);
+
+    jit.jump(GPRInfo::regT2);
+
+    LinkBuffer patchBuffer(*vm, &jit, GLOBAL_THUNK_ID);
+    return FINALIZE_CODE(patchBuffer, ("baseline setter return thunk"));
+}
+
 static void stringCharLoad(SpecializedThunkJIT& jit, VM* vm)
 {
     // load string

Source/JavaScriptCore/jit/ThunkGenerators.h

@@ inline ThunkGenerator virtualThunkGenera
 MacroAssemblerCodeRef nativeCallGenerator(VM*);
 MacroAssemblerCodeRef nativeConstructGenerator(VM*);
 MacroAssemblerCodeRef nativeTailCallGenerator(VM*);
-MacroAssemblerCodeRef arityFixup(VM*);
+MacroAssemblerCodeRef arityFixupGenerator(VM*);
+
+MacroAssemblerCodeRef baselineGetterReturnThunkGenerator(VM* vm);
+MacroAssemblerCodeRef baselineSetterReturnThunkGenerator(VM* vm);

 MacroAssemblerCodeRef charCodeAtThunkGenerator(VM*);
 MacroAssemblerCodeRef charAtThunkGenerator(VM*);

Source/JavaScriptCore/runtime/CommonSlowPaths.cpp

@@ static CommonSlowPaths::ArityCheckData*
     result->paddedStackSpace = slotsToAdd;
 #if ENABLE(JIT)
     if (vm.canUseJIT()) {
-        result->thunkToCall = vm.getCTIStub(arityFixup).code().executableAddress();
+        result->thunkToCall = vm.getCTIStub(arityFixupGenerator).code().executableAddress();
         result->returnPC = vm.arityCheckFailReturnThunks->returnPCFor(vm, slotsToAdd * stackAlignmentRegisters()).executableAddress();
     } else
 #endif

Source/JavaScriptCore/tests/stress/exit-from-getter.js

(function() {
    var o = {_f:42};
    o.__defineGetter__("f", function() { return this._f * 100; });
    var result = 0;
    var n = 50000;
    function foo(o) {
        return o.f + 11;
    }
    noInline(foo);
    for (var i = 0; i < n; ++i) {
        result += foo(o);
    }
    if (result != n * (42 * 100 + 11))
        throw "Error: bad result: " + result;
    o._f = 1000000000;
    result = 0;
    for (var i = 0; i < n; ++i) {
        result += foo(o);
    }
    if (result != n * (1000000000 * 100 + 11))
        throw "Error: bad result (2): " + result;
})();

Source/JavaScriptCore/tests/stress/poly-chain-getter.js

function Cons() {
}
Cons.prototype.__defineGetter__("f", function() {
    counter++;
    return 84;
});

function foo(o) {
    return o.f;
}

noInline(foo);

var counter = 0;

function test(o, expected, expectedCount) {
    var result = foo(o);
    if (result != expected)
        throw new Error("Bad result: " + result);
    if (counter != expectedCount)
        throw new Error("Bad counter value: " + counter);
}

for (var i = 0; i < 100000; ++i) {
    test(new Cons(), 84, counter + 1);

    var o = new Cons();
    o.g = 54;
    test(o, 84, counter + 1);
}

Source/JavaScriptCore/tests/stress/poly-chain-then-getter.js

function Cons1() {
}
Cons1.prototype.f = 42;

function Cons2() {
}
Cons2.prototype.__defineGetter__("f", function() {
    counter++;
    return 84;
});

function foo(o) {
    return o.f;
}

noInline(foo);

var counter = 0;

function test(o, expected, expectedCount) {
    var result = foo(o);
    if (result != expected)
        throw new Error("Bad result: " + result);
    if (counter != expectedCount)
        throw new Error("Bad counter value: " + counter);
}

for (var i = 0; i < 100000; ++i) {
    test(new Cons1(), 42, counter);
    test(new Cons2(), 84, counter + 1);
}

Source/JavaScriptCore/tests/stress/poly-getter-combo.js

function Cons1() {
}
Cons1.prototype.f = 42;

function Cons2() {
}
Cons2.prototype.__defineGetter__("f", function() {
    counter++;
    return 84;
});

function foo(o) {
    return o.f;
}

noInline(foo);

var counter = 0;

function test(o, expected, expectedCount) {
    var result = foo(o);
    if (result != expected)
        throw new Error("Bad result: " + result);
    if (counter != expectedCount)
        throw new Error("Bad counter value: " + counter);
}

for (var i = 0; i < 100000; ++i) {
    test(new Cons1(), 42, counter);
    test(new Cons2(), 84, counter + 1);

    var o = {};
    o.__defineGetter__("f", function() {
        counter++;
        return 84;
    });
    test(o, 84, counter + 1);

    test({f: 42}, 42, counter);
}

Source/JavaScriptCore/tests/stress/poly-getter-then-chain.js

function Cons1() {
}
Cons1.prototype.f = 42;

function Cons2() {
}
Cons2.prototype.__defineGetter__("f", function() {
    counter++;
    return 84;
});

function foo(o) {
    return o.f;
}

noInline(foo);

var counter = 0;

function test(o, expected, expectedCount) {
    var result = foo(o);
    if (result != expected)
        throw new Error("Bad result: " + result);
    if (counter != expectedCount)
        throw new Error("Bad counter value: " + counter);
}

for (var i = 0; i < 100000; ++i) {
    test(new Cons2(), 84, counter + 1);
    test(new Cons1(), 42, counter);
}

Source/JavaScriptCore/tests/stress/poly-getter-then-self.js

function foo(o) {
    return o.f;
}

noInline(foo);

var counter = 0;

function test(o, expected, expectedCount) {
    var result = foo(o);
    if (result != expected)
        throw new Error("Bad result: " + result);
    if (counter != expectedCount)
        throw new Error("Bad counter value: " + counter);
}

for (var i = 0; i < 100000; ++i) {
    var o = {};
    o.__defineGetter__("f", function() {
        counter++;
        return 84;
    });
    test(o, 84, counter + 1);

    test({f: 42}, 42, counter);
}

Source/JavaScriptCore/tests/stress/poly-self-getter.js

function foo(o) {
    return o.f;
}

noInline(foo);

var counter = 0;

function test(o, expected, expectedCount) {
    var result = foo(o);
    if (result != expected)
        throw new Error("Bad result: " + result);
    if (counter != expectedCount)
        throw new Error("Bad counter value: " + counter);
}

function getter() {
    counter++;
    return 84;
}

for (var i = 0; i < 100000; ++i) {
    var o = {};
    o.__defineGetter__("f", getter);
    test(o, 84, counter + 1);

    var o = {};
    o.__defineGetter__("f", getter);
    o.g = 54;
    test(o, 84, counter + 1);
}

Source/JavaScriptCore/tests/stress/poly-self-then-getter.js

function foo(o) {
    return o.f;
}

noInline(foo);

var counter = 0;

function test(o, expected, expectedCount) {
    var result = foo(o);
    if (result != expected)
        throw new Error("Bad result: " + result);
    if (counter != expectedCount)
        throw new Error("Bad counter value: " + counter);
}

for (var i = 0; i < 100000; ++i) {
    test({f: 42}, 42, counter);

    var o = {};
    o.__defineGetter__("f", function() {
        counter++;
        return 84;
    });
    test(o, 84, counter + 1);
}

Source/JavaScriptCore/tests/stress/weird-getter-counter.js

function foo(o) {
    return o.f;
}

noInline(foo);

var counter = 0;

function test(o, expected, expectedCount) {
    var result = foo(o);
    if (result != expected)
        throw new Error("Bad result: " + result);
    if (counter != expectedCount)
        throw new Error("Bad counter value: " + counter);
}

for (var i = 0; i < 100000; ++i) {
    var o = {};
    o.__defineGetter__("f", function() {
        counter++;
        return 84;
    });
    test(o, 84, counter + 1);
}

Source/WTF/ChangeLog

+2014-05-19  Filip Pizlo  <fpizlo@apple.com>
+
+        [ftlopt] DFG bytecode parser should turn GetById with nothing but a Getter stub as stuff+handleCall, and handleCall should be allowed to inline if it wants to
+        https://bugs.webkit.org/show_bug.cgi?id=133105
+
+        Reviewed by NOBODY (OOPS!).
+
+        * wtf/Bag.h:
+        (WTF::Bag::iterator::operator!=):
+
 2014-05-07  Filip Pizlo  <fpizlo@apple.com>

         UNREACHABLE_FOR_PLATFORM() is meant to be a release crash.

Source/WTF/wtf/Bag.h

@@ public:
     {
         return m_node == other.m_node;
     }
+
+    bool operator!=(const iterator& other) const
+    {
+        return !(*this == other);
+    }
 private:
     template<typename U> friend class WTF::Bag;
     Node* m_node;

LayoutTests/ChangeLog

+2014-05-19  Filip Pizlo  <fpizlo@apple.com>
+
+        [ftlopt] DFG bytecode parser should turn GetById with nothing but a Getter stub as stuff+handleCall, and handleCall should be allowed to inline if it wants to
+        https://bugs.webkit.org/show_bug.cgi?id=133105
+
+        Reviewed by NOBODY (OOPS!).
+
+        * js/regress/getter-no-activation-expected.txt: Added.
+        * js/regress/getter-no-activation.html: Added.
+        * js/regress/script-tests/getter-no-activation.js: Added.
+
 2014-05-08  Filip Pizlo  <fpizlo@apple.com>

         jsSubstring() should be lazy

LayoutTests/js/regress/getter-no-activation-expected.txt

JSRegress/getter-no-activation

On success, you will see a series of "PASS" messages, followed by "TEST COMPLETE".


PASS no exception thrown
PASS successfullyParsed is true

TEST COMPLETE

LayoutTests/js/regress/getter-no-activation.html

<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML//EN">
<html>
<head>
<script src="../../resources/js-test-pre.js"></script>
</head>
<body>
<script src="../../resources/regress-pre.js"></script>
<script src="script-tests/getter-no-activation.js"></script>
<script src="../../resources/regress-post.js"></script>
<script src="../../resources/js-test-post.js"></script>
</body>
</html>

LayoutTests/js/regress/script-tests/getter-no-activation.js

(function() {
    var o = {_f:42};
    o.__defineGetter__("f", function() { return this._f; });
    (function() {
        var result = 0;
        var n = 2000000;
        for (var i = 0; i < n; ++i)
            result += o.f;
        if (result != n * 42)
            throw "Error: bad result: " + result;
    })();
})();