<!-- This document was automatically generated with bibtex2html 1.96
(see http://www.lri.fr/~filliatr/bibtex2html/),
with the following command:
bibtex2html -dl -nodoc -nobibsource -nokeys -nokeywords -nofooter 1998.bib -->
<p><a name="csdl-95-24"></a>
Philip M. Johnson.
Reengineering inspection: The future of formal technical review.
<em>Communications of the ACM</em>, 41(2):49-52, February 1998.
[ <a href="ftp://ftp.ics.hawaii.edu/pub/tr/ics-tr-95-24.pdf">.pdf</a> ]
<blockquote><font size="-1">
Formal technical review is acknowledged as a preeminent software
quality improvement method. The “inspection” review method, first
introduced by Michael Fagan twenty years ago, has led to dramatic
improvements in software quality. It has also led to a myopia within
the review community, which tends to view inspection-based methods as
not just effective, but as the optimal approach to formal
technical review. This article challenges this view by presenting a
taxonomy of software review that shows inspection to be just one among
many valid approaches. The article then builds upon this framework to propose
seven guidelines for the radical redesign and improvement of formal
technical review during the next twenty years.
</font></blockquote>
<p>
</p>
<p><a name="csdl-96-14"></a>
Philip M. Johnson and Danu Tjahjono.
Does every inspection really need a meeting?
<em>Journal of Empirical Software Engineering</em>, 4(1):9-35, January
1998.
[ <a href="ftp://ftp.ics.hawaii.edu/pub/tr/ics-tr-96-14.ps.Z">.ps.Z</a> ]
<blockquote><font size="-1">
Software review is a fundamental component of the software quality
assurance process, yet significant controversies surround the most
efficient and effective review method. A central question concerns the
use of meetings; traditional review practice views them as essential,
while more recent findings question their utility. To provide insight
into this question, we conducted a controlled experiment to
assess several measures of cost and effectiveness for a meeting and
non-meeting-based review method. The experiment used CSRS, a
computer-mediated collaborative software review environment, and 24 three-person
groups. We found that the meeting-based review method studied was
significantly more costly than the non-meeting-based method, but that
meeting-based review did not find significantly more defects than the
non-meeting-based method. However, the meeting-based review method was
significantly better at reducing the level of false positives, and
subjects subjectively preferred meeting-based review over
non-meeting-based review. This paper presents the motivation for this
experiment, its design and implementation, our empirical findings,
pointers to Internet repositories for replication or additional analysis
of this experiment, conclusions, and future directions.
</font></blockquote>
<p>
</p>
<p><a name="csdl-98-01"></a>
Jennifer M. Geis.
JavaWizard: Investigating defect detection and analysis.
M.S. thesis, University of Hawaii, May 1998.
[ <a href="ftp://ftp.ics.hawaii.edu/pub/tr/ics-tr-98-01.pdf">.pdf</a> ]
<blockquote><font size="-1">
This thesis presents a study designed to investigate the occurrence of
certain kinds of errors in Java programs using JavaWizard
(JWiz), a static analysis mechanism for Java source code. JWiz is an
extensible tool that supports detection of certain commonly occurring
semantic errors in Java programs. For this thesis, I used JWiz within a
research framework designed to reveal (1) knowledge about the kinds of
errors made by Java programmers, (2) differences among Java programmers
in the kinds of errors made, and (3) potential avenues for improvement in
the design and/or implementation of the Java language or environment.
I performed a four week case study, collecting data from 14 students over
three programming projects which produced approximately 12,800 lines of
code. The JWiz results were categorized into three types: functional
errors (must be fixed for the program to work properly), maintenance
errors (program will work, but considered to be bad style), and false
positives (intended by the developer). Out of 235 JWiz warnings, there
were 69 functional errors, 100 maintenance errors, and 66 false
positives. The fix times for the functional errors added up to five and
a half hours, or 7.3 percent of the total amount of time spent debugging
in test.
I found that all programmers inject a few of the same mistakes into their
code, but these are only minor, non-defect causing errors. I found that
the types of defects injected vary drastically with no correlation to
program size or developer experience. I also found that for those
developers who make some of the mistakes that JWiz is designed for, JWiz
can be a great help, saving significant amounts of time ordinarily spent
tracking down defects in test.
</font></blockquote>
<p>
</p>
<p><a name="csdl-98-04"></a>
Anne M. Disney and Philip M. Johnson.
Investigating data quality problems in the PSP.
In <em>Sixth International Symposium on the Foundations of Software
Engineering (SIGSOFT'98)</em>, Orlando, FL., November 1998.
[ <a href="http://csdl.ics.hawaii.edu/techreports/1998/98-04/98-04.pdf">.pdf</a> ]
<blockquote><font size="-1">
The Personal Software Process (PSP) is used by software engineers to
gather and analyze data about their work. Published studies typically
use data collected using the PSP to draw quantitative conclusions about
its impact upon programmer behavior and product quality. However,
our experience using PSP in both industrial and academic settings
revealed problems both in collection of data and its later analysis.
We hypothesized that these two kinds of data quality problems could make a
significant impact upon the value of PSP measures. To test this
hypothesis, we built a tool to automate the PSP and then examined 89
projects completed by ten subjects using the PSP manually in an
educational setting. We discovered 1539 primary errors and categorized
them by type, subtype, severity, and age. To examine the collection
problem we looked at the 90 errors that represented impossible
combinations of data and at other less concrete anomalies in Time
Recording Logs and Defect Recording Logs. To examine the analysis
problem we developed a rule set, corrected the errors as far as possible,
and compared the original and corrected data. This resulted in
significant differences for measures such as yield and the
cost-performance ratio, confirming our hypothesis. Our results raise
questions about the accuracy of manually collected and analyzed PSP data,
indicate that integrated tool support may be required for high quality
PSP data analysis, and suggest that external measures
should be used when attempting to evaluate the impact of the PSP upon
programmer behavior and product quality.
</font></blockquote>
<p>
</p>
<p><a name="csdl-98-05"></a>
Anne M. Disney, Jarrett Lee, Tuan Huynh, and Jennifer Saito.
Investigating the design and evaluation of research web sites.
Technical Report CSDL-98-05, Department of Information and Computer
Sciences, University of Hawaii, Honolulu, Hawaii 96822, May 1998.
[ <a href="http://csdl.ics.hawaii.edu/techreports/1998/98-05/98-05.html">.html</a> ]
<blockquote><font size="-1">
The Aziza design group (formerly the 691 Web Development
Team) was commissioned by CSDL to implement a new web site. The group was
assigned not only to update the entire site, but also to research and
investigate the process and life cycle of World Wide Web site development.
This research document records the process and products that occurred
while updating the CSDL web site.
It discusses issues such as the balance between providing
information and providing an image of the group,
and ways to share research information over the World Wide Web.
To support the research, evaluations by various users of the site
were conducted and are discussed here. This
document records our web site design processes,
what insights we had about those processes, our findings, and finally, our
conclusions.
</font></blockquote>
<p>
</p>
<p><a name="csdl-98-06"></a>
Anne M. Disney, Jarrett Lee, Tuan Huynh, and Jennifer Saito.
CSDL web site requirements specification document.
Technical Report CSDL-98-06, Department of Information and Computer
Sciences, University of Hawaii, Honolulu, Hawaii 96822, April 1998.
[ <a href="http://csdl.ics.hawaii.edu/techreports/1998/98-06/98-06.html">.html</a> ]
<blockquote><font size="-1">
The purpose of this document is to
summarize the results of our background research for the
CSDL web site, and describe the resulting requirements for
evaluation and review.
</font></blockquote>
<p>
</p>
<p><a name="csdl-98-07"></a>
Robert S. Brewer.
Improving mailing list archives through condensation.
M.S. Thesis Proposal CSDL-98-07, Department of Information and
Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, September
1998.
[ <a href="http://csdl.ics.hawaii.edu/techreports/1998/98-07/proposal.pdf">.pdf</a> ]
<blockquote><font size="-1">
Electronic mailing lists are popular Internet information
sources. Many mailing lists maintain an archive of all
messages sent to the list which is often searchable using
keywords. While useful, these archives include every message
sent to the list, which hampers users' ability to rapidly find
the information they want. To
solve the problems inherent in current mailing list archives,
I propose a process called condensation whereby one can strip
out all the extraneous, conversational aspects of the data
stream leaving only the pearls of interconnected wisdom.
To explore this idea of mailing list condensation and to test
whether or not a condensed archive of a mailing list is
actually better than traditional archives, I propose the
construction and evaluation of a new software system. I name
this system the Mailing list Condensation System or MCS. MCS
will have two main parts: one which is dedicated to taking
the raw material from the mailing list and condensing it, and
another which stores the condensed messages and allows users
to retrieve them.
The condensation process is performed by a human editor
(assisted by a tool), not an AI system. While this adds a
certain amount of overhead to the maintenance of the
MCS-generated archive when compared to a traditional archive,
it makes the system implementation feasible.
I believe that an MCS-generated mailing list archive
maintained by an external researcher will be adopted as an
information resource by the subscribers of that mailing list.
Furthermore, I believe that subscribers will prefer the
MCS-generated archive over existing traditional archives of
the mailing list. This thesis will be tested by a series of
quantitative and qualitative measures.
</font></blockquote>
<p>
</p>
<p><a name="csdl-98-08"></a>
Anne M. Disney.
Data quality problems in the personal software process.
M.S. thesis, University of Hawaii, August 1998.
[ <a href="http://csdl.ics.hawaii.edu/techreports/1998/98-08/98-08.pdf">.pdf</a> ]
<blockquote><font size="-1">
The Personal Software Process (PSP) is used by software engineers to
gather and analyze data about their work and to produce empirically
based evidence for the improvement of planning and quality in future
projects. Published studies have suggested that adopting the PSP results
in improved size and time estimation and in reduced numbers of defects
found in the compile and test phases of development. However, personal
experience using PSP in both industrial and academic settings caused me
to wonder about the quality of two areas of PSP practice: collection and
analysis. To investigate this I built a tool to automate the PSP and
then examined 89 projects completed by nine subjects using the PSP in an
educational setting. I discovered 1539 primary errors and analyzed them
by type, subtype, severity, and age. To examine the collection problem
I looked at the 90 errors that represented impossible combinations of
data and at other less concrete anomalies in Time Recording Logs and
Defect Recording Logs. To examine the analysis problem I developed a
rule set, corrected the errors as far as possible, and compared the
original and corrected data. This resulted in substantial
differences for numbers such as yield and the cost-performance ratio.
The results raise questions about the accuracy of published data on the
PSP and directions for future research.
</font></blockquote>
<p>
</p>
<p><a name="csdl-98-11"></a>
Philip M. Johnson and Anne M. Disney.
The personal software process: A cautionary case study.
<em>IEEE Software</em>, 15(6), November 1998.
<blockquote><font size="-1">
In 1995, Watts Humphrey introduced the Personal Software Process in
his book, A Discipline for Software Engineering.
Programmers who use the PSP gather measurements related to
their own work products and the process by which they were developed,
then use these measures to drive changes to their development
behavior.
After almost three years of
teaching and using the PSP, we have experienced its educational
benefits. As researchers, however, we have also uncovered
evidence of certain limitations, which we believe can help improve
appropriate adoption and evaluation of the method by industrial
and academic practitioners. This paper presents an overview of
a case study we performed that provides evidence
of potential data quality problems, along with
recommendations for those interested in adopting
PSP within industry or academia.
</font></blockquote>
<p>
</p>
<p><a name="csdl-98-15"></a>
Jennifer M. Geis.
JavaWizard user guide.
Technical Report CSDL-98-15, Department of Information and
Computer Sciences, University of Hawaii, Honolulu, Hawaii 96822, December
1998.
[ <a href="http://csdl.ics.hawaii.edu/techreports/1998/98-15/98-15.html">.html</a> ]
<blockquote><font size="-1">
This document describes the use of JavaWizard, an automated code
checker for the Java programming language. The user guide
includes directions for installation, command line
invocation, and graphical user interface invocation.
</font></blockquote>
<p>
</p>