SC1011
This document aims to cover all information concerning the course Software Construction 2010-2011.
(NB: be sure to refresh this page; it may be cached by your browser.)
Lectures and workshops will be from 9:00-11:00 on Mondays. Lectures will be given by Paul Klint, Jurgen Vinju and/or Tijs van der Storm.
Primary contact for this course is Tijs van der Storm.
- 03-1: Introduction
- 10-1: Lecture Parsing
- 17-1: Lecture PLT by Example + Code Complete Test
- 24-1: Lecture Code Quality or Debugging
- 31-1: Workshop + Grading of Part 1
- 07-2: Workshop + Code Complete Test
- 14-2: Workshop + Code reviewing workshop
- 21-2: Code Complete Test + Grading of Part 2
Lectures will be at Science Park, room C1.112. The lab rooms are A1.20 and A1.22.
Required skills:
- Create good low level designs
- Produce clean, readable code
- Reflect upon and argue for/against software construction techniques, patterns, guidelines etc.
- Assess the quality of code
- Apply state of the art software construction tools
Required knowledge:
- Understand basic principles of language implementation (parsing, AST, evaluation)
- Understand the basic aspects of code quality
- Understand encapsulation and modular design
Pre-conditions for getting a grade:
- You have to be present during all lecture and workshop sessions.
- You have to be present during the Code Review Workshop
- You have to pass all the Code Complete reading assignments
You will be graded on the following course assignments:
- P1: part 1 of the practical course
- P2: part 2 of the practical course
The grade is computed using the following formula: 0.6 * P1 + 0.4 * P2. For both parts a minimum grade of 5.5 is required to pass the course. The practical assignments will be graded on-site on the respective dates of the deadline (see schedule).
Date: 14th of February, 13:00 - 17:00.
Location: lab room.
The goal of the code review workshop is to encourage critical thinking with respect to code quality and low level design. Important questions are: what is code quality? What are relevant quality attributes? How about code smells? How can you improve the design of existing code? Etc.
The structure of the workshop will be as follows:
- 13:00 The first hour, we will collectively make a list of code attributes that you deem important for code quality.
- 14:00 Short break; formation of teams of two persons.
- 14:15 Each team member will review the other team member's Logo implementation, guided by the list of quality attributes. Make notes so that you will be able to provide constructive feedback later. Teacher(s) will be around for assistance and discussion.
- 15:15 Short break.
- 15:30 Provide feedback on the design and code quality to your team member. Be critical, but constructive: your team member should be able to use your comments to improve his code.
- 16:30 Closing.
NB Participation is mandatory!
During the course you are required to read McConnell's Code Complete. We will test that you have read the required chapters using small tests consisting of 3 or 4 open questions. The dates of the tests and the accompanying reading assignments are as follows:
- 17-1: Code Complete Test 6-13
- 07-2: Code Complete Test 14-24
- 21-2: Code Complete Test 29-32
The tests will be evaluated on site. There will be no grade, but you must show you have read the required chapters.
Each workshop, two topics will be presented by two teams each. The topics are centered around a thesis concerning the advantage or disadvantage of a certain technique or approach in the domain of software construction. The first team will argue for the thesis. Then, the second team acts as opposition and will give a presentation arguing against the thesis.
Each team consists of two members. Topics are to be selected from the list at the end of this document. The references listed there are required reading for EVERYONE (for the topics discussed in workshops, that is). It is, moreover, required to find at least two other papers related to the position you are defending.
Guidelines for giving the presentation:
- IMPORTANT: there is no need to introduce the subject since all participants will have read the required papers from below. So don't waste precious minutes on this aspect.
- Focus on the claim at hand and your arguments for or against it. Do not cover (technical) details from the papers. Use the essence of each paper in your argument or exposition of the subject.
- Indicate, possibly using slides, how your arguments are backed by the additional literature you have found.
- Your presentation should not exceed 15 minutes in duration.
- It is advised not to use more than 5 slides (excluding title and final slide).
- Interaction with the audience is much appreciated.
Presence during the workshops is required. The presentations will not be graded, but feedback will be provided by the teachers present.
Workshop slots are allocated by sending an email to Tijs van der Storm containing the preferred topic and date, and the two team members' names. For both topic and date it holds: first come, first served.
- Lennart Tange, Randy Fluit: "Literate Programming in the 21st Century"
- Jan de Mooij, Wietse Venema: "Literate Programming in the 21st Century"
- Israel Posner, Douwe Kasemier: "Design Patterns are Code Smells"
- Bart van Eijkelenburg, Pieter Brandwijk: "Design Patterns are Code Smells"
- Ben Kwint, Arie van der Veek: "Goto considered harmful"
- Anton Zhelyazkov, Paul den Boer: "Goto considered harmful"
- Job Jonkergauw, Arnoud Roo: "Design by Contract" (or: AOP, Fluent/LawOfDemeter, Max/Min/Reuse)
- Jursley Koots, Willem van den Esker: "Design by Contract"
- Hans van Bakel, Chiel Labee: "Internal vs external DSLs"
- Maarten Hoekstra: "Internal vs external DSLs"
The goal of the lab assignment is to implement an interpreter for the language Oberon-0. Oberon-0 is a subset of Niklaus Wirth's programming language Oberon, a successor to Modula-2 (which was a successor to Pascal). The ultimate reference for Oberon-0 is Niklaus Wirth's book on compiler construction, Oberon0. The choice of this language is inspired by a tool challenge currently run in the context of LDTA, the leading workshop on language implementation tools. The challenge is to produce modular, extensible, concise, and declarative implementations of languages.
For this lab assignment, you are required to use the Java programming language and the Eclipse IDE. Each participant will get access to a Google Code Subversion repository, setup for this course. The URL of the Google Code project is:
Please sign up for a Google account if you haven't done that already, and notify Tijs van der Storm to get a project entry.
IMPORTANT: You are required to complete the lab assignment individually. We will use clone detection tools to detect plagiarism.
The assignment consists of two parts. The first part, Part 1, consists of the following components:
- A parser for Oberon-0. For this you will use a parser generator. You can choose from the following Java parser generators: ANTLR, Rats!, JavaCup, JavaCC, JACC, SableCC, Beaver or Grammatica. Depending on your choice you may be required to implement a tokenization phase.
- The parsing phase of the language implementation produces an abstract syntax tree (AST). You are required to design a suitable class hierarchy modeling Oberon-0 ASTs.
- Finally, the interpreter will run Oberon-0 programs by processing Oberon-0 ASTs.
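As an illustration of what such a class hierarchy might look like, here is a minimal sketch covering a tiny expression subset. The node names and structure are our own choices for illustration, not a prescribed design:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of an AST class hierarchy for a tiny expression subset.
// Node names and structure are illustrative, not a prescribed design.
abstract class Expr {
    // Interpretation walks the tree against a variable environment.
    abstract int eval(Map<String, Integer> env);
}

class Num extends Expr {
    final int value;
    Num(int value) { this.value = value; }
    int eval(Map<String, Integer> env) { return value; }
}

class Var extends Expr {
    final String name;
    Var(String name) { this.name = name; }
    int eval(Map<String, Integer> env) { return env.get(name); }
}

class BinOp extends Expr {
    final String op;
    final Expr left, right;
    BinOp(String op, Expr left, Expr right) { this.op = op; this.left = left; this.right = right; }
    int eval(Map<String, Integer> env) {
        int l = left.eval(env), r = right.eval(env);
        switch (op) {
            case "+": return l + r;
            case "*": return l * r;
            default: throw new IllegalArgumentException("unknown operator: " + op);
        }
    }
}

public class AstDemo {
    public static void main(String[] args) {
        Map<String, Integer> env = new HashMap<>();
        env.put("x", 4);
        // AST for: x * (2 + 3)
        Expr e = new BinOp("*", new Var("x"), new BinOp("+", new Num(2), new Num(3)));
        System.out.println(e.eval(env)); // prints 20
    }
}
```

A real Oberon-0 hierarchy will of course also need nodes for statements, declarations, procedures and types; the point here is only the shape: one abstract base per syntactic category, one subclass per construct.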
Oberon-0 is a subset of full Oberon. For Part 1, you are required to include the aspects and features that are described by the grammar in the appendix of Oberon0.
Although Oberon-0 seems like a simple language, you should pay particular attention to the following aspects of the language:
- Which keywords are reserved, and which keywords are not?
- How to implement pass by reference (cf. the VAR keyword)?
- What are the priority and associativity of binary and unary operators?
- What are the scoping rules of Oberon-0?
- What is the semantics of nested procedures?
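To make the VAR question concrete: one common way to implement pass by reference in an interpreter is to store each variable in a mutable cell, so a VAR parameter can alias the caller's cell while a value parameter receives a copy. The class and method names below are our own illustrative sketch, not a prescribed design:

```java
// Sketch: modeling VAR (pass-by-reference) parameters with mutable cells.
// Names are illustrative, not part of any prescribed design.
class Cell {
    int value;
    Cell(int value) { this.value = value; }
}

public class VarParamDemo {
    // Simulates PROCEDURE Inc(VAR x: INTEGER): the callee shares the caller's cell.
    static void incByReference(Cell x) { x.value = x.value + 1; }

    // Simulates a value parameter: the callee works on a copy, caller is unaffected.
    static void incByValue(int x) { x = x + 1; }

    public static void main(String[] args) {
        Cell a = new Cell(10);
        incByReference(a);   // a.value is now 11
        incByValue(a.value); // a.value is still 11
        System.out.println(a.value); // prints 11
    }
}
```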
In Part 2 you will modify and/or extend your current implementation to implement two new requirements. The actual additional requirements will be announced halfway into the course. You are strongly encouraged to anticipate changes in the language or required tooling in the design you deliver for Part 1.
As an additional requirement you will have to collect a number of metrics that help to assess the quality of your implementation. You have to provide these metrics at both grading moments.
First, you will report metrics based on the SIG maintainability model:
- Number of files, classes, methods, and non-comment, non-blank lines of code (SLOC).
- The distribution of cyclomatic complexity across methods (i.e. a map from cyclomatic complexity x to the number of methods with that cyclomatic complexity).
- The distribution of volume over methods (i.e. a map from method size, measured in SLOC, to the number of methods with that size).
These metrics can be derived using the JavaNCSS tool, which you are required to use.
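The notion of a "distribution" used above can be made concrete in a few lines. Given the cyclomatic complexity of each method (here a hypothetical list of values), the required map from complexity to method count could be built like this:

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Sketch: turning per-method cyclomatic complexities into a distribution,
// i.e. a map from complexity value to the number of methods with that value.
public class Distribution {
    static Map<Integer, Integer> distribution(List<Integer> complexities) {
        Map<Integer, Integer> dist = new TreeMap<>();
        for (int c : complexities) {
            dist.merge(c, 1, Integer::sum); // increment the count for this complexity
        }
        return dist;
    }

    public static void main(String[] args) {
        // Hypothetical complexities for six methods.
        System.out.println(distribution(List.of(1, 1, 2, 5, 2, 1))); // prints {1=3, 2=2, 5=1}
    }
}
```

The same shape of computation works for the volume distribution (SLOC per method) and for the grammar-metric distributions described below.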
The parser implementation uses dedicated grammar syntax that serves as input to the parser generator. For this kind of source file, no metric tools are available, so you are required to count the following metrics manually:
- Number of non-terminals.
- Number of productions.
- The distribution of the number of productions (or alternatives) over non-terminals (i.e. a map from number of alternatives to number of non-terminals).
- The distribution of production length (i.e. number of symbols) over productions (i.e. a map from production length to the number of productions with that length).
For each metric you have to distinguish between lexical syntax/tokenization productions and context-free productions.
Second, you are required to collect metrics that may indicate problems in the modular structure of your implementation. For this, you will use the JDepend tool.
Some final guidelines:
- Exclude packages containing unit tests.
- Only include Java source files in the metrics computation by JavaNCSS and JDepend (so no grammar files).
- Both JavaNCSS and JDepend are Java programs, so it is possible (and advisable) to automate the computation of metrics. Be sure, however, to exclude such infrastructure from the metric computation itself.
- You are strongly advised to minimize Java action code occurring in grammar productions, as it will skew the metrics computation.
- You should use the XML output facilities of both tools. This will ease further processing and aggregation.
Great code is easy to change. The additional requirements announced for Part 2 are intended to show how well your initial design from Part 1 stands up to extensions, modifications and revisions. To gain insight into this question, at the end of Part 2 you will compute the difference between Part 1 and Part 2. In other words, we are interested in the number of changed classes, added classes, changed methods, statements, imports, etc. If you only had to add code, you could say your initial design is truly extensible. If you had to change some code, but only locally at specific locations, you might say your code was easy to change.
In order to quantify such observations, you will use the DiffJ tool to compute the difference between Part 1 and Part 2. DiffJ is similar to the common Unix utility diff, but has knowledge of the Java programming language. It is not line-based and ignores whitespace and comments. At the end of the course the results of all diffs will be aggregated and will be used for a qualitative discussion on the effect of design choices, if the results show interesting trends.
Again, DiffJ is a Java tool, so there are opportunities to automate much of the comparison.
NB It is of paramount importance that you use the Google Code Subversion repository from the very beginning of the course. If you cannot provide an accurate diff report, we cannot grade your solution.
First of all, we will provide sample Oberon-0 programs for smoke testing. If, upon grading, you fail to show a working run of the sample program, we will not grade your implementation.
We take the principles laid down in Code Complete as guidelines when grading your solutions. More specifically, the following aspects of quality code will be our focus:
- Functionality (e.g., are the requirements implemented)
- Tests (e.g., presence of meaningful unit tests)
- Simplicity (absence of code bloat and complexity; YAGNI)
- Modularity (e.g., encapsulation, class dependencies, package structure)
- Layout and style (indentation, comments, naming of variables etc.)
- Sensible use of design patterns (e.g., Visitor)
When grading, we will use a check list that will be made available in due course. The collected metrics and the diff between Part 1 and Part 2 (see above) will not be used for grading purposes.
NB: high performance, fancy GUIs and other forms of gold plating will be bad for your grade, so you are advised not to waste time on those aspects.
The two parts of the lab assignments will be graded on-site at the following dates:
- Part 1: 31st of January
- Part 2: 21st of February
- E.W. Dijkstra, Goto considered harmful, 1968, Dijkstra68.
- D. Knuth, Structured Programming with go to statements, 1974, Knuth74.
- Simon Peyton Jones, Beautiful Concurrency, 2007, PeytonJones07.
- Calin Cascaval et al., Software Transactional Memory: Why is it Only a Research Toy?, 2008, CascavalEtAl08.
- Bryan Cantrill and Jeff Bonwick, Real-world Concurrency, 2008, CantrillBonwick08.
- Marjan Mernik et al., When and How to Develop Domain Specific Languages, 2005, MernikEtAl05.
- Martin Fowler, Implementing an Internal DSL, 2007, Fowler07.
- Gregor Kiczales et al., Aspect-Oriented Programming, 1997, KiczalesEtAl97.
- Robert E. Filman, Daniel P. Friedman, Aspect-Oriented Programming is Quantification and Obliviousness, 2000, FilmanFriedman00.
- Bentley, Knuth and McIlroy, A Literate Program, 1986, BentleyEtAl86.
- Knuth, Literate Programming, 1984, Knuth84.
- James Noble, Brian Foote, Attack of the Clones, 2002, NobleFoote02.
- Henry Lieberman, Using Prototypical Objects to Implement Shared Behavior in Object Oriented Systems, 1986, Lieberman86.
- Bertrand Meyer, Applying "Design by Contract", 1992, Meyer92.
- Jean-Marc Jézéquel, Bertrand Meyer, Design by Contract: The Lessons of Ariane, 1997, JezequelMeyer97.
- Martin Fowler, FluentInterface, 2005, Fowler05.
- Karl J. Lieberherr, Ian M. Holland, Assuring Good Style for Object-Oriented Programs, 1989, LieberherrHolland89.
- T.J. Biggerstaff, The Library Scaling Problem and the Limits of Concrete Component Reuse, 1994, Biggerstaff94.
- James M. Neighbors, Draco: A Method for Engineering Reusable Software Systems, 1989, Neighbors89.
- Jan Hannemann, Gregor Kiczales, Design Pattern Implementation in Java and AspectJ, 2002, HannemannKiczales02.
- Peter Norvig, Design Patterns in Dynamic Languages, 1996, Norvig96.