<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<title>RVM : Adaptive Optimization System</title>
<link rel="stylesheet" href="styles/site.css" type="text/css" />
<META http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
<table class="pagecontent" border="0" cellpadding="0" cellspacing="0" width="100%" bgcolor="#ffffff">
<tr>
<td valign="top" class="pagebody">
<div class="pageheader">
<span class="pagetitle">
RVM : Adaptive Optimization System
</span>
</div>
<div class="pagesubheading">
This page last changed on Jul 07, 2008 by <font color="#0050B2">dgrove</font>.
</div>
<p>A comprehensive discussion of the design and implementation of the original Jikes RVM adaptive optimization system is given in the <a href="http://docs.codehaus.org/display/RVM/Publications#Publications-aos">OOPSLA 2000 paper</a> by Arnold, Fink, Grove, Hind, and Sweeney. A number of aspects of the system have changed since 2000, so a better resource is the <a href="http://domino.research.ibm.com/comm/research_people.nsf/pages/dgrove.RC23429.html">Nov. 2004 technical report</a>, which describes the architecture and implementation in some detail. This section of the user guide is based on Section 5 of that report.</p>
<p>The implementation of the Jikes RVM adaptive optimization system uses a number of Java threads: several organizer threads in the runtime measurements component, the controller thread, and the compilation thread. The various threads are loosely coupled, communicating with each other through shared queues and other in-memory data structures. All queues in the system are blocking priority queues: if a consumer thread performs a dequeue operation when the queue is empty, it suspends until a producer thread performs an enqueue operation.</p>
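<p>As an illustration of this producer-consumer structure, the following is a minimal sketch of a blocking priority queue; the class and method names are illustrative, not the actual Jikes RVM classes. A consumer that dequeues from an empty queue waits until a producer enqueues an element.</p>
<pre>
import java.util.PriorityQueue;

// Minimal sketch of a blocking priority queue as described above;
// the class and method names are illustrative, not the Jikes RVM API.
class BlockingPriorityQueue&lt;T&gt; {
  private final PriorityQueue&lt;T&gt; queue = new PriorityQueue&lt;T&gt;();

  // Producer side: insert an element and wake any waiting consumer.
  public synchronized void enqueue(T item) {
    queue.add(item);
    notifyAll();
  }

  // Consumer side: suspend while the queue is empty, then remove the
  // element at the head of the priority order.
  public synchronized T dequeue() throws InterruptedException {
    while (queue.isEmpty()) {
      wait();
    }
    return queue.poll();
  }
}
</pre>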
<p>The adaptive optimization system performs two primary tasks: selective optimization and profile-directed inlining.</p>
<h3><a name="AdaptiveOptimizationSystem-SelectiveOptimization"></a>Selective Optimization</h3>
<p>The goal of selective optimization is to identify regions of code in which the application spends significant execution time (often called "hot spots"), determine whether overall application performance is likely to be improved by further optimizing one or more hot spots, and, if so, invoke the optimizing compiler and install the resulting optimized code in the virtual machine.</p>
<p>In Jikes RVM, the unit of optimization is a method. Thus, to perform selective optimization, the runtime measurements component must first identify candidate methods ("hot methods") for the controller to consider. To this end, it installs a listener that periodically samples the currently executing method at taken yieldpoints. When it is time to take a sample, the listener inspects the thread's call stack and records compiled method ids into a buffer. If the yieldpoint occurs in the prologue of a method, the listener records the compiled method id of the current activation's caller in addition to that of the current method. If the taken yieldpoint occurs on a loop backedge or in a method epilogue, the listener records only the compiled method id of the current method.</p>
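<p>The following schematic sketches the sampling logic just described; the enum values, field names, and buffer layout are illustrative only and do not match the Jikes RVM sources.</p>
<pre>
// Schematic of the method listener's sampling logic; names and buffer
// layout are illustrative only.
enum YieldpointType { PROLOGUE, BACKEDGE, EPILOGUE }

class MethodListenerSketch {
  private final int[] buffer;   // sampled compiled method ids
  private int nextIndex = 0;

  MethodListenerSketch(int bufferSize) {
    buffer = new int[bufferSize];
  }

  /** Take one sample; returns true when the sampling window is full. */
  boolean update(int currentCmid, int callerCmid, YieldpointType type) {
    record(currentCmid);
    if (type == YieldpointType.PROLOGUE) {
      // Prologue yieldpoint: also attribute a sample to the caller.
      record(callerCmid);
    }
    return nextIndex &gt;= buffer.length;   // window ends when the buffer is full
  }

  private void record(int cmid) {
    if (nextIndex &lt; buffer.length) {
      buffer[nextIndex++] = cmid;
    }
  }
}
</pre>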
<p>When the buffer of samples is full, the sampling window ends. The listener then unregisters itself (stops taking samples) and wakes the sleeping Hot Method Organizer. The Hot Method Organizer processes the buffer of compiled method ids by updating the Method Sample Data. This data structure maintains, for every compiled method, the total number of times that it has been sampled. Careful design of this data structure (MethodCountData.java) was critical to achieving low profiling overhead. In addition to supporting lookups and updates by compiled method id, it must also efficiently enumerate all methods that have been sampled more times than a (varying) threshold value. After updating the Method Sample Data, the Hot Method Organizer creates an event for each method that has been sampled in this window and adds it to the controller's priority queue, using the sample value as its priority. The event contains the compiled method and the <em>total</em> number of times it has been sampled since the beginning of execution. After enqueuing the last event, the Hot Method Organizer re-registers the method listener and then sleeps until the next buffer of samples is ready to be processed.</p>
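<p>The sketch below illustrates the operations this data structure must support, using a plain array indexed by compiled method id; the actual MethodCountData.java uses a more sophisticated organization so that the threshold query does not require scanning every entry.</p>
<pre>
import java.util.ArrayList;
import java.util.List;

// Simplified sketch of the per-method sample data; the real
// MethodCountData.java is organized so that methods above a threshold
// can be enumerated efficiently.
class MethodSampleDataSketch {
  private final double[] counts;   // indexed by compiled method id

  MethodSampleDataSketch(int maxCompiledMethods) {
    counts = new double[maxCompiledMethods];
  }

  // Called by the Hot Method Organizer for each sample in the buffer.
  void update(int cmid) {
    counts[cmid] += 1.0;
  }

  // Total number of times this compiled method has been sampled.
  double totalSamples(int cmid) {
    return counts[cmid];
  }

  // Enumerate all compiled method ids sampled more than 'threshold' times.
  List&lt;Integer&gt; methodsAbove(double threshold) {
    List&lt;Integer&gt; hot = new ArrayList&lt;Integer&gt;();
    for (int cmid = 0; cmid &lt; counts.length; cmid++) {
      if (counts[cmid] &gt; threshold) {
        hot.add(cmid);
      }
    }
    return hot;
  }
}
</pre>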
<p>When an event reaches the head of the controller's priority queue, the controller dequeues it and applies the model-driven recompilation policy to determine what action (if any) to take for the indicated method. If the controller decides to recompile the method, it creates a recompilation event that describes the method to be compiled and the optimization plan to use, and places it on the recompilation queue. The recompilation queue prioritizes events based on the cost-benefit computation.</p>
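<p>The cost-benefit computation can be sketched as follows: for each candidate optimization level, estimate the cost of recompiling the method plus the estimated future execution time of the method at that level, and pick the cheapest alternative. The speedup and compilation-cost figures below are placeholders, not the values used by Jikes RVM, and the caller supplies the estimated future execution time (the published model assumes it equals the time observed so far).</p>
<pre>
// Sketch of the model-driven recompilation decision; the speedup and
// compilation-cost constants are placeholders, not Jikes RVM's values.
class RecompilationModelSketch {
  // Estimated speedup of code at each optimization level, relative to baseline.
  static final double[] SPEEDUP = { 1.0, 4.0, 5.0, 5.5 };
  // Estimated compilation cost per bytecode at each level (arbitrary units).
  static final double[] COST_PER_BYTECODE = { 1.0, 10.0, 25.0, 60.0 };

  /**
   * Choose the level whose (compilation cost + estimated future running time)
   * is smallest; returns the current level if no recompilation pays off.
   * futureTime is the time the method is expected to still execute at its
   * current level.
   */
  static int chooseLevel(int curLevel, double futureTime, int bytecodeLength) {
    int best = curLevel;
    double bestTotal = futureTime;                 // cost of doing nothing
    for (int j = curLevel + 1; j &lt; SPEEDUP.length; j++) {
      double compileCost = COST_PER_BYTECODE[j] * bytecodeLength;
      double futureTimeAtJ = futureTime * SPEEDUP[curLevel] / SPEEDUP[j];
      double total = compileCost + futureTimeAtJ;
      if (total &lt; bestTotal) {
        bestTotal = total;
        best = j;
      }
    }
    return best;
  }
}
</pre>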
<p>When an event is available on the recompilation queue, the recompilation thread removes it and performs the compilation activity specified by the event. It invokes the optimizing compiler at the specified optimization level and installs the resulting compiled method into the VM.&nbsp;</p>
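<p>A sketch of a recompilation event and the compilation thread's main loop is shown below; the class names are illustrative and reuse the queue sketch given earlier.</p>
<pre>
// Sketch of a recompilation event and the compilation thread's main loop;
// names are illustrative and the queue class is the earlier sketch.
class RecompilationEvent implements Comparable&lt;RecompilationEvent&gt; {
  final int cmid;          // compiled method to recompile
  final int optLevel;      // optimization level chosen by the controller
  final double priority;   // expected benefit from the cost-benefit model

  RecompilationEvent(int cmid, int optLevel, double priority) {
    this.cmid = cmid;
    this.optLevel = optLevel;
    this.priority = priority;
  }

  // Order events so that the largest expected benefit is dequeued first.
  public int compareTo(RecompilationEvent other) {
    return Double.compare(other.priority, this.priority);
  }
}

class CompilationThreadSketch extends Thread {
  private final BlockingPriorityQueue&lt;RecompilationEvent&gt; queue;

  CompilationThreadSketch(BlockingPriorityQueue&lt;RecompilationEvent&gt; queue) {
    this.queue = queue;
  }

  public void run() {
    try {
      while (true) {
        RecompilationEvent event = queue.dequeue();   // blocks when empty
        // Here the real system invokes the optimizing compiler on the method
        // identified by event.cmid at level event.optLevel and installs the
        // resulting compiled code in the VM.
      }
    } catch (InterruptedException e) {
      // Exit the loop if the thread is interrupted.
    }
  }
}
</pre>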
<p>Although the overall structure of selective optimization in Jikes RVM is similar to that originally described in Arnold et al.'s OOPSLA 2000 paper, we have made several changes and improvements based on further experience with the system. The most significant change is that in the previous system, the method sample organizer attempted to filter the set of methods it presented to the controller. The organizer passed along to the controller only methods considered "hot". The organizer deemed a method "hot" if the percentage of samples attributed to the method exceeded a dynamically adjusted threshold value. Method samples were periodically decayed to give more weight to recent samples. The controller dynamically adjusted this threshold value and the size of the sampling window in an attempt to reduce the overhead of processing the samples.</p>
<p>Later, significant algorithmic improvements in key data structures and additional performance tuning of the listeners, organizers, and controller reduced AOS overhead by two orders of magnitude. These overhead reductions obviated the need to filter events passed to the controller, resulting in a more effective system with fewer parameters to tune and a sounder theoretical basis. In general, as we gained experience with the adaptive system implementation, we strove to reduce the number of tuning parameters. We believe that the closer the implementation matches the basic theoretical cost-benefit model, the more likely it is to perform well and make reasonable and understandable decisions.</p>
<h3><a name="AdaptiveOptimizationSystem-ProfileDirectedInlining"></a>Profile-Directed Inlining</h3>
<p>Profile-directed inlining attempts to identify frequently traversed call graph edges, which represent caller-callee relationships, and to determine whether it is beneficial to recompile the caller methods to allow inlining of the callee methods. In Jikes RVM, profile-directed inlining augments a number of static inlining heuristics. The role of profile-directed inlining is to identify high-cost, high-benefit inlining opportunities that evade the static heuristics and to predict the likely target(s) of invokevirtual and invokeinterface calls that could not be statically bound at compile time.</p>
<p>To accomplish this goal, the system takes a statistical sample of the method calls in the running application and maintains an approximation of the dynamic call graph based on this data. The system installs a listener that samples call edges whenever a yieldpoint is taken in the prologue or epilogue of a method. To sample a call edge, the listener records into a buffer the compiled method ids of the caller and callee methods and the offset of the call instruction in the caller's machine code. When the buffer of samples is full, the sampling window ends. The listener then unregisters itself (stops taking samples) and wakes an organizer to update the dynamic call graph with the new profile data. The optimizing compiler's Inline Oracle uses the dynamic call graph to guide its inlining decisions.</p>
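<p>A minimal sketch of such an approximate dynamic call graph appears below; the class and method names are illustrative and do not match the actual Jikes RVM data structure.</p>
<pre>
import java.util.HashMap;
import java.util.Map;

// Sketch of an approximate dynamic call graph built from sampled call
// edges; class and method names are illustrative, not the Jikes RVM API.
class DynamicCallGraphSketch {
  // Maps "callerCmid:callSiteOffset:calleeCmid" to a sample count.
  private final Map&lt;String, Double&gt; edgeWeights = new HashMap&lt;String, Double&gt;();

  private static String key(int callerCmid, int callSiteOffset, int calleeCmid) {
    return callerCmid + ":" + callSiteOffset + ":" + calleeCmid;
  }

  // Called by the organizer for each sampled edge in the listener's buffer.
  void incrementEdge(int callerCmid, int callSiteOffset, int calleeCmid) {
    String k = key(callerCmid, callSiteOffset, calleeCmid);
    Double weight = edgeWeights.get(k);
    edgeWeights.put(k, weight == null ? 1.0 : weight + 1.0);
  }

  // The inlining oracle can ask how frequently a particular edge was sampled.
  double edgeWeight(int callerCmid, int callSiteOffset, int calleeCmid) {
    Double weight = edgeWeights.get(key(callerCmid, callSiteOffset, calleeCmid));
    return weight == null ? 0.0 : weight;
  }
}
</pre>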
<p>The system currently used is based on Arnold &amp; Grove's CGO 2005 paper; more details of the sampling scheme and the inlining oracle can be found there or in the source code.</p>
</td>
</tr>
</table>
<table border="0" cellpadding="0" cellspacing="0" width="100%">
<tr>
<td height="12" background="http://docs.codehaus.org/images/border/border_bottom.gif"><img src="images/border/spacer.gif" width="1" height="1" border="0"/></td>
</tr>
<tr>
<td align="center"><font color="grey">Document generated by Confluence on Jul 04, 2010 19:57</font></td>
</tr>
</table>
</body>
</html>