<!-- This document was automatically generated with bibtex2html 1.96
(see http://www.lri.fr/~filliatr/bibtex2html/),
with the following command:
bibtex2html -dl -nodoc -nobibsource -nokeys -nokeywords -nofooter 2007.bib -->
<p><a name="csdl2-06-06"></a>
Philip M. Johnson.
Requirement and design trade-offs in Hackystat: An in-process
software engineering measurement and analysis system.
In <em>Proceedings of the 2007 International Symposium on Empirical
Software Engineering and Measurement</em>, Madrid, Spain, September 2007.
[ <a href="http://csdl.ics.hawaii.edu/techreports/2006/06-06/06-06.pdf">.pdf</a> ]
<blockquote><font size="-1">
For five years, the Hackystat Project has incrementally developed and
evaluated a generic framework for in-process software engineering
measurement and analysis (ISEMA). At least five other independent ISEMA
system development projects have been initiated during this time,
indicating growing interest and investment in this approach by the software
engineering community. This paper presents 12 important requirement and
design trade-offs made in the Hackystat system, some of their implications
for organizations wishing to introduce ISEMA, and six directions for future
research and development. The three goals of this paper are to: (1) help
potential users of ISEMA systems to better evaluate the relative strengths
and weaknesses of current and future systems, (2) help potential developers
of ISEMA systems to better understand some of the important requirement and
design trade-offs that they must make, and (3) help accelerate progress in
ISEMA by identifying promising directions for future research and
development.
</font></blockquote>
<p>
</p>
<p><a name="csdl2-06-07"></a>
Victor R. Basili, Marvin V. Zelkowitz, Dag Sjoberg, Philip M. Johnson, and Tony
Cowling.
Protocols in the use of empirical software engineering artifacts.
<em>Empirical Software Engineering</em>, 12, February 2007.
[ <a href="http://csdl.ics.hawaii.edu/techreports/2006/06-07/06-07.pdf">.pdf</a> ]
<blockquote><font size="-1">
If empirical software engineering is to grow as a valid scientific
endeavor, the ability to acquire, use, share, and compare data collected
from a variety of sources must be encouraged. This is necessary to validate
the formal models being developed within computer science. However, within
the empirical software engineering community this has not been easily
accomplished. This paper analyses experience from a number of projects, and
defines the issues, which include the following: (1) How should data,
testbeds, and artifacts be shared? (2) What limits should be placed on who
can use them and how? How does one limit potential misuse? (3) What is the
appropriate way to credit the organization and individual that spent the
effort collecting the data, developing the testbed, and building the
artifact? (4) Once shared, who owns the evolved asset? As a solution to
these issues, the paper proposes a framework for an empirical software
engineering artifact license, which is intended to address the needs for
both creator and user of such artifacts and should foster a market in
making available and using such artifacts. If this license framework for
sharing software engineering artifacts is commonly accepted, it should
encourage artifact owners to make their artifacts accessible to others
(gaining credit becomes more likely and misuse less likely), and it should
become easier for other researchers to request artifacts, since there will
be a well-defined protocol for handling such matters.
</font></blockquote>
<p>
</p>
<p><a name="csdl2-06-10"></a>
Philip M. Johnson.
Automated software process and product measurement with Hackystat.
<em>Dr. Dobb's Journal</em>, January 2007.
<blockquote><font size="-1">
This article presents an overview of Hackystat, a system for automated software
process and product measurement.
</font></blockquote>
<p>
</p>
<p><a name="csdl2-06-13"></a>
Philip M. Johnson and Hongbing Kou.
Automated recognition of test-driven development with Zorro.
In <em>Proceedings of Agile 2007</em>, August 2007.
[ <a href="http://csdl.ics.hawaii.edu/techreports/2006/06-13/06-13.pdf">.pdf</a> ]
<blockquote><font size="-1">
Zorro is a system designed to automatically determine whether a developer
is complying with an operational definition of Test-Driven Development
(TDD) practices. Automated recognition of TDD can benefit the software
development community in a variety of ways, from inquiry into the “true
nature” of TDD, to pedagogical aids to support the practice of test-driven
development, to support for more rigorous empirical studies on the effectiveness
of TDD in both laboratory and real world settings. This paper introduces
the Zorro system, its operational definition of TDD, the analyses made
possible by Zorro, and our ongoing efforts to validate the system.
</font></blockquote>
<p>
</p>
<p><a name="csdl2-07-03"></a>
Philip M. Johnson.
Ultra-automation and ultra-autonomy for software engineering
management of ultra-large-scale systems.
In <em>Proceedings of the 2007 Workshop on Ultra Large Scale
Systems</em>, Minneapolis, Minnesota, May 2007.
[ <a href="http://csdl.ics.hawaii.edu/techreports/2007/07-03/07-03.pdf">.pdf</a> ]
<blockquote><font size="-1">
“Ultra-Large-Scale Systems: The Software Challenge of the Future”
identifies “Engineering Management at Large Scales” as an important
focus of research. Engineering management for software typically
involves measurement and monitoring of products and processes in order to
maintain acceptable levels of important project characteristics including
cost, quality, usability, performance, reliability, and so forth. Our
research on software engineering measurement over the past ten years has
exhibited a trend towards increasing automation and autonomy in the
collection and analysis of process and product measures. In this
position paper, we extrapolate from our work so far to consider what new
forms of automation and autonomy might be required for software
engineering management of ULS systems.
</font></blockquote>
<p>
</p>
<p><a name="csdl2-07-04"></a>
Hongbing Kou.
<em>Automated Inference of Software Development Behaviors: Design,
Implementation and Validation of Zorro for Test-Driven Development</em>.
Ph.D. thesis, University of Hawaii, Department of Information and
Computer Sciences, December 2007.
[ <a href="http://csdl.ics.hawaii.edu/techreports/2007/07-04/07-04.pdf">.pdf</a> ]
<blockquote><font size="-1">
A recent focus of interest in software engineering research is on low-level
software processes, which define how software developers or development
teams should carry out development activities in short phases that last from
several minutes to a few hours. Anecdotal evidence exists for the positive
impact on quality and productivity of certain low-level software processes
such as test-driven development and continuous integration. However,
empirical research on low-level software processes often yields conflicting
results. A significant threat to the validity of the empirical studies on
low-level software processes is that they lack the ability to rigorously
assess process conformance; that is, the degree to which developers
follow these processes cannot be evaluated.
In order to improve the quality of empirical research on low-level software
processes, I developed a technique called Software Development Stream
Analysis (SDSA) that can infer development behaviors using automatically
collected in-process software metrics. The collection of development
activities is supported by Hackystat, a framework for automated software
process and product metrics collection and analysis. SDSA abstracts the
collected software metrics into a software development stream, a
time-series data structure containing time-stamped development events. It
then partitions the development stream into episodes and uses a
rule-based system to infer the low-level development behaviors exhibited in
each episode.
With the capabilities provided by Hackystat and SDSA, I developed the Zorro
software system to study a specific low-level software process called
Test-Driven Development (TDD). Experience reports have shown that TDD can
greatly improve software quality while increasing developer productivity, but
empirical research findings on TDD are often mixed. An inability to
rigorously assess process conformance is a possible explanation. Zorro can
rigorously assess process conformance to a specific operational definition
for TDD, and thus enable more controlled, comparable empirical studies.
My research has demonstrated that Zorro can recognize the low-level
software development behaviors that characterize TDD. Both the pilot and
classroom case studies support this conclusion. The industrial case study
shows that automated data collection and development behavior inference
have the potential to be useful for researchers.
</font></blockquote>
<p>
</p>
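<p>
The Software Development Stream Analysis (SDSA) pipeline described in the
abstract above (abstract raw events into a time-ordered stream, partition the
stream into episodes, classify each episode with rules) can be illustrated
with a minimal sketch. The following Python is a hypothetical illustration
only, not Zorro's actual implementation: the event kinds, the
episode-boundary heuristic, and the single classification rule are all
assumptions made for clarity.
</p>
<pre>
from dataclasses import dataclass

@dataclass
class Event:
    timestamp: float  # seconds since epoch
    kind: str         # e.g. "edit-test", "edit-production", "run-tests"

def partition_into_episodes(stream, boundary_kind="run-tests"):
    """Split a time-ordered event stream into episodes, closing the
    current episode at each boundary event (an assumed heuristic)."""
    episodes, current = [], []
    for event in sorted(stream, key=lambda e: e.timestamp):
        current.append(event)
        if event.kind == boundary_kind:
            episodes.append(current)
            current = []
    if current:
        episodes.append(current)
    return episodes

def classify_episode(episode):
    """Toy rule: an episode is 'tdd-like' when a test edit precedes
    any production edit."""
    kinds = [e.kind for e in episode]
    if "edit-test" in kinds and "edit-production" in kinds:
        if kinds.index("edit-test") &lt; kinds.index("edit-production"):
            return "tdd-like"
    return "other"

# Example: a single episode in which the test is written first.
stream = [Event(1.0, "edit-test"), Event(2.0, "edit-production"),
          Event(3.0, "run-tests")]
for episode in partition_into_episodes(stream):
    print(classify_episode(episode))  # -> tdd-like
</pre>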