
Evaluating programming systems design

Jonathan Edwards, Stephen Kell, Tomas Petricek, Luke Church

In Proceedings of PPIG, 2019

Research on programming systems design needs to consider a wide range of aspects in their full complexity. These include user interaction, implementation and interoperability, but also the sustainability of a system's ecosystem and its wider societal impact. Established methods of evaluation, such as formal proofs or user studies, impose a reductionist view that makes it difficult to see programming systems in their full complexity and, consequently, forces researchers to adopt simplistic perspectives.

This paper asks whether we can create more amenable methods of evaluation derived from existing informal practices such as multimedia essays, demos, and interactive tutorials. These popular forms incorporate recorded or scaffolded interaction, often embedded in a text that guides the reader. Can we augment such forms with structure and guidelines to obtain methods of evaluation suitable for peer review? We do not answer this question, but merely seek to identify some of the problems and instigate a community discussion. In that spirit we propose to hold a panel session at the conference.

Paper and more information

BibTeX

If you want to cite the paper, you can use the following BibTeX entry.

@inproceedings{evaluating-systems,
  author    = {Jonathan Edwards and Stephen Kell and
               Tomas Petricek and Luke Church},
  title     = {Evaluating programming systems design},
  booktitle = {Proceedings of the 30th Annual Workshop of the
               Psychology of Programming Interest Group},
  series    = {PPIG 2019},
  location  = {Newcastle, UK},
  year      = {2019}
}

If you have any comments, suggestions or related ideas, I'll be happy to hear from you! Send me an email at tomas@tomasp.net or get in touch via Twitter at @tomaspetricek.

Published: Thursday, 29 August 2019, 12:00 AM
Author: Tomas Petricek
Typos: Send me a pull request!