<!-- This document was automatically generated with bibtex2html 1.96
(see http://www.lri.fr/~filliatr/bibtex2html/),
with the following command:
bibtex2html -dl -nodoc -nobibsource -nokeys -nokeywords -nofooter 2004.bib -->
<p><a name="csdl2-03-01"></a>
Philip M. Johnson and Joy M. Agustin.
Keeping the coverage green: Investigating the cost and quality of
testing in agile development.
In <em>Submitted to the 2004 Conference on Software Metrics</em>,
Chicago, Illinois, August 2004.
[ <a href="http://csdl.ics.hawaii.edu/techreports/2003/03-01/03-01.pdf">.pdf</a> ]
<blockquote><font size="-1">
An essential component of agile methods such as Extreme Programming is a
suite of test cases that is incrementally built and maintained throughout
development. This paper presents research exploring two questions
regarding testing in these agile contexts. First, is there a way to
validate the quality of test case suites in a manner compatible with
agile development methods? Second, is there a way to assess and monitor
the costs of agile test case development and maintenance? In this paper,
we present the results of our recent research on these issues. Our
results include a measure called XC (for Extreme Coverage) which is
implemented in a system called JBlanket. XC is designed to support
validation of the test-driven design methodology used in agile
development. We describe how XC and JBlanket differ from other coverage
measures and tools, assess their feasibility through a case study in a
classroom setting, assess its external validity on a set of open source
systems, and illustrate how to incorporate XC into a more global measure
of testing cost and quality called Unit Test Dynamics (UTD). We conclude
with suggested research directions building upon these findings to
improve agile methods and tools.
</font></blockquote>
<p>
</p>
<p><a name="csdl2-03-12"></a>
Philip M. Johnson, Hongbing Kou, Joy M. Agustin, Qin Zhang, Aaron Kagawa, and
Takuya Yamashita.
Practical automated process and product metric collection and
analysis in a classroom setting: Lessons learned from Hackystat-UH.
In <em>Proceedings of the 2004 International Symposium on Empirical
Software Engineering</em>, Los Angeles, California, August 2004.
[ <a href="http://csdl.ics.hawaii.edu/techreports/2003/03-12/03-12.pdf">.pdf</a> ]
<blockquote><font size="-1">
Measurement definition, collection, and analysis is an essential
component of high quality software engineering practice, and is thus an
essential component of the software engineering curriculum. However,
providing students with practical experience with measurement in a
classroom setting can be so time-consuming and intrusive that it is
counter-productive, teaching students that software measurement is
“impractical” for many software development contexts. In this
research, we designed and evaluated a very low-overhead approach to
measurement collection and analysis using the Hackystat system with
special features for classroom use. We deployed this system in two
software engineering classes at the University of Hawaii during Fall,
2003, and collected quantitative and qualitative data to evaluate the
effectiveness of the approach. Results indicate that the approach
represents substantial progress toward practical, automated metrics
collection and analysis, though issues relating to the complexity of
installation and privacy of user data remain.
</font></blockquote>
<p>
</p>
<p><a name="csdl2-04-02"></a>
Aaron Kagawa and Philip M. Johnson.
The Hackystat-JPL configuration: Round 2 results.
Technical Report CSDL-04-02, Department of Information and Computer
Sciences, University of Hawaii, Honolulu, Hawaii 96822, May 2004.
[ <a href="http://csdl.ics.hawaii.edu/techreports/2004/04-02/04-02.html">.html</a> ]
<blockquote><font size="-1">
This report presents selected round two results from Hackystat-based
descriptive analyses of Harvest workflow data gathered from the Mission
Data System software development project from January, 2003 to December,
2003. The information provided in this report describes improvements and
differences since the previous technical report (The Hackystat-JPL
Configuration: Overview and Initial Results).
</font></blockquote>
<p>
</p>
<p><a name="csdl2-04-03"></a>
Stuart Faulk, John Gustafson, Philip M. Johnson, Adam A. Porter, Walter Tichy,
and Larry Votta.
Toward accurate HPC productivity measurement.
In <em>Proceedings of the First International Workshop on Software
Engineering for High Performance Computing System Applications</em>, Edinburgh,
Scotland, May 2004.
[ <a href="http://csdl.ics.hawaii.edu/techreports/2004/04-03/04-03.pdf">.pdf</a> ]
<blockquote><font size="-1">
One key to improving high-performance computing
(HPC) productivity is finding better ways to measure it.
We define productivity in terms of mission goals, i.e.,
greater productivity means that more science is
accomplished with less cost and effort. Traditional
software productivity metrics and computing benchmarks
have proven inadequate for assessing or predicting such
end-to-end productivity. In this paper we describe a new
approach to measuring productivity in HPC applications
that addresses both development time and execution time.
Our goal is to develop a public repository of effective
productivity benchmarks that anyone in the HPC
community can apply to assess or predict productivity.
</font></blockquote>
<p>
</p>
<p><a name="csdl2-04-04"></a>
Stuart Faulk, Philip M. Johnson, John Gustafson, Adam A. Porter, Walter Tichy,
and Larry Votta.
Measuring HPC productivity.
<em>International Journal of High Performance Computing
Applications</em>, December 2004.
[ <a href="http://csdl.ics.hawaii.edu/techreports/2004/04-04/04-04.pdf">.pdf</a> ]
<blockquote><font size="-1">
One key to improving high-performance computing (HPC) productivity is
finding better ways to measure it. We define productivity in terms of
mission goals, i.e., greater productivity means that more science is
accomplished with less cost and effort. Traditional software productivity
metrics and computing benchmarks have proven inadequate for assessing or
predicting such end-to-end productivity. In this paper we introduce a new
approach to measuring productivity in HPC applications that addresses both
development time and execution time. Our goal is to develop a public
repository of effective productivity benchmarks that anyone in the HPC
community can apply to assess or predict productivity.
</font></blockquote>
<p>
</p>
<p><a name="csdl2-04-05"></a>
Philip M. Johnson.
Proceedings of the first Hackystat developer boot camp.
Technical report, University of Hawaii, May 2004.
[ <a href="http://csdl.ics.hawaii.edu/techreports/2004/04-05/04-05.pdf">.pdf</a> ]
</p>
<p><a name="csdl2-04-06"></a>
Aaron Kagawa.
Hackystat MDS supporting MSL MMR.
Technical Report CSDL-04-06, Department of Information and Computer
Sciences, University of Hawaii, Honolulu, Hawaii 96822, June 2004.
[ <a href="http://csdl.ics.hawaii.edu/techreports/2004/04-06/04-06.html">.html</a> ]
<blockquote><font size="-1">
This report presents selected results from Hackystat analyses of
Mission Data System's Release 9. The goal is to identify reports useful
for the Monthly Management Report for Mars Science Laboratory.
</font></blockquote>
<p>
</p>
<p><a name="csdl2-04-07"></a>
Aaron Kagawa.
Hackystat MDS supporting MSL MMR: Round 2 results.
Technical Report CSDL-04-07, Department of Information and Computer
Sciences, University of Hawaii, Honolulu, Hawaii 96822, July 2004.
[ <a href="http://csdl.ics.hawaii.edu/techreports/2004/04-07/04-07.html">.html</a> ]
<blockquote><font size="-1">
This report presents selected additional results from Hackystat analyses of
Mission Data System's Release 9. The goal is to identify reports useful
for the Monthly Management Report for Mars Science Laboratory.
</font></blockquote>
<p>
</p>
<p><a name="csdl2-04-09"></a>
Aaron Kagawa.
Hackystat-SQI: Modeling different development processes.
Technical Report CSDL-04-09, Department of Information and Computer
Sciences, University of Hawaii, Honolulu, Hawaii 96822, July 2004.
[ <a href="http://csdl.ics.hawaii.edu/techreports/2004/04-09/04-09.html">.html</a> ]
<blockquote><font size="-1">
This report presents the design of a Hackystat module called SQI, whose purpose
is to support quality analysis for multiple projects at Jet Propulsion Laboratory.
</font></blockquote>
<p>
</p>
<p><a name="csdl2-04-10"></a>
Aaron Kagawa.
Hackystat-SQI: First progress report.
Technical Report CSDL-04-10, Department of Information and Computer
Sciences, University of Hawaii, Honolulu, Hawaii 96822, July 2004.
[ <a href="http://csdl.ics.hawaii.edu/techreports/2004/04-09/04-09.html">.html</a> ]
<blockquote><font size="-1">
This report presents the initial analyses that are available for Hackystat-SQI and outlines future directions.
</font></blockquote>
<p>
</p>
<p><a name="csdl2-04-13"></a>
Michael G. Paulding.
Measuring the processes and products of HPCS development: Initial
results for the optimal truss purpose-based benchmark.
Technical Report CSDL-04-13, Department of Information and Computer
Sciences, University of Hawaii, Honolulu, Hawaii 96822, September 2004.
[ <a href="http://csdl.ics.hawaii.edu/techreports/2004/04-13/04-13.html">.html</a> ]
<blockquote><font size="-1">
This report presents initial results from the in-progress implementation of the
Optimal Truss Purpose-based benchmark. It shows process and product data collected both
automatically by Hackystat and manually through engineering logs and other tools. It
presents some interpretations of the data and proposes approaches to improving
support for understanding and improving HPCS development productivity.
</font></blockquote>
<p>
</p>