Formal Technical Reviews (FTRs), where a software developer and a team of reviewers walk through a piece of code to assess its quality, have been used in the computing industry for many years. In this paper we look at the suitability of FTRs in an academic research setting.
We investigated the use of FTRs, combined with other approaches, in a set of case studies over a three-year period. MSc students acted as test subjects, with their dissertation projects serving as the research products. In all, about half the students volunteered to participate each year, though very few took part in FTRs initially. Changes to the experimental setup, through the introduction of additional artefacts to lower the barriers to adoption, slowly increased participation in FTRs.
Our initial intervention included the introduction of a documentation style for software coding; this was largely about the approach to commenting and the specific types of data that should be captured for a research project. We also introduced a software tool that extracts comments from source code and generates a set of hypertext documentation, complete with reverse-engineered design diagrams. The generated documentation aimed to facilitate both the FTRs and discussion between researchers, for example office colleagues, supervisors, or researchers at remote locations. We also found it necessary to create an installation and configuration guide to lower the barriers to adoption. A side effect of the generated documentation was the ability of the software to create a website for the participants with very little manual effort. This increased the benefits relative to the cost of learning the tool, and effectively lowered the barrier to FTR participation by making the tool more obviously cost-effective.
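To make the idea concrete, the following is a minimal sketch of how such a documentation style and extraction tool might work. The paper does not name a specific tool or comment format, so the structured docstring fields (`:experiment:`, `:data:`) and the extractor are illustrative assumptions only:

```python
import ast
import textwrap

# Hypothetical source file using a research-oriented documentation style:
# each function records its purpose, the experiment it belongs to, and the
# data it consumes, so a tool can pull the comments out into hypertext docs.
SOURCE = textwrap.dedent('''
    def smooth(series, window=5):
        """Smooth a measurement series with a moving average.

        :experiment: sensor-drift-study (illustrative name)
        :data: raw voltage samples, one value per reading
        :returns: list of smoothed floats
        """
        out = []
        for i in range(len(series)):
            lo = max(0, i - window + 1)
            out.append(sum(series[lo:i + 1]) / (i + 1 - lo))
        return out
''')

def extract_docs(source):
    """Collect (function name, docstring) pairs - the core step a
    documentation generator would perform before rendering hypertext."""
    tree = ast.parse(source)
    return {
        node.name: ast.get_docstring(node)
        for node in ast.walk(tree)
        if isinstance(node, ast.FunctionDef)
    }

docs = extract_docs(SOURCE)
print(docs["smooth"].splitlines()[0])
```

A real generator would go further, rendering the extracted pairs as linked HTML pages and design diagrams, but the extraction step above is the part that the documented commenting style makes possible.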
While computer science researchers in universities typically shy away from quality improvement processes and are highly resistant to the introduction of new tools and techniques, the combination of approaches we used is shown to reduce the effort required of both researchers and review participants to acceptable levels. This level of acceptance increased as the experiment progressed and more of the tools became available, and could be improved further with future work.
Past research shows a low level of adoption of similar software engineering approaches, and the current research repeatedly highlights the initial scepticism of both students and supervisors. However, our data suggest that these barriers can be overcome with the right combination of tools, and that this can lead to improved collaboration between researchers in computer science and related areas.