@for Computer science students and other aspiring software engineers
@author Kai Ruhl
Software engineering is an extension of programming. You want something done? You sit down, think, program, debug, and you are done. No SE needed. But then you start programming with teammates. For a customer. And you have to maintain it. It becomes increasingly unclear what exactly it is that the customer wants done. There, all of a sudden, you need SE. In this article, I go backwards in time to explain what should have been done -- if you take the same idea forwards, you know what you should do now.
Note: The following explains the absolute basics of software engineering (not adapted to agile methods, etc.). In reality, it is a bit more complex. But you should get the idea.
Imagine you have written a product with a team of 7 people. The customer is happy and has been using the product for a year. You have a maintenance contract that says you put 20 hours a month into it. You update all system libraries and all minor versions of GUI libraries -- let's say Qt 4.x, but not Qt 5. You put minor features in. Many minor features. Indeed, it seems the customer has a never-ending stream of minor features to be implemented. You ponder when to say no. When?
What you should have done: Written a "maintenance file" (MF). In it, write down your expected man-hour (MH) efforts for (a) just keeping the thing running and (b) extra minor features. Allocate a MH budget for these minor features. Each month, update the list of minor features, together with originally expected MH and actually spent MH. This will help the customer see how much time goes into which features, and help choose which features to implement and which not. Common visibility is key to ensure that both sides feel respected.
You have been happily chugging along with 7 guys/gals for half a year. Your product is quite formidable, if buggy in rare cases and not totally polished. Your customer keeps saying (a) "when is the final release?" and (b) "you've got to fix <rare issue 1002> / polish <ui thing 1003> before shipping". And you wonder: when am I actually done?
What you should have done: Written a "test report" (TR). In it, there is a long list of test cases, each describing something a user can do with the product. At the end of each test case, there is a criterium on when the test is considered "good", usually something like "program still runs, shows X is successful." When all test cases report "good", you are done. So where do you get these test cases? That leads us to...
So you had a common design in all 7 heads, you went programming, and it runs. Partially, at least. There seem to be non-working things that used to work before -- dammit, how did that happen, we have to fix it, no, now another thing does not work, frag it, does nobody ever test their update with... ahm, with what?
What you should have done: Written a "test specification" (TS) that describes things that a human tester can do with the product, together with an acceptance criterium (see above, "test report"). Many of these things can be automated. Everything a user does (mouse clicking, pressing keys) can be recorded and replayed. Everything a user sees (text, a graph, a 3D skeleton) can be seen in variables. Your product can be steered by software, using an API. But where does that API come from? This brings us to...
All 7 of you know what to do. You have distributed tasks among small subteams and set about implementing them. Only... the other subteam got the API that you want to use completely wrong. And worse: they insist that YOU got it all wrong. After a lot of shouting, both subteams write adapters so their parts fit together.
What you should have done: Written an "architectural design" (AD). In it, describe the top-level components and the most important classes that other subteams (and automated test cases) will actually want to use (nobody is interested in your detailed "VectorMath" class). The methods/functions and their arguments and return types as described in this API are binding for everyone. Everything else is implementation detail and can be changed by the subteams however they please. But the AD is binding. It must be able to perform all required operations. Hm, which operations are that, you ask? Well, that takes us to...
You know what needs to be done. So do the other 6 team members. So does the customer. The only problem (you probably guessed it): you all have slightly different visions in your heads. You discuss endlessly how to implement the totally obvious functionality of your product, and rarely come to a good conclusion.
What you should have done: Written a "requirements document" (RD) that describes what a user should be able to do with the product. Not what the product does. And not how the product does it. The latter two will always change according to what technology (= libraries, frameworks, etc.) is available, and what it can do. But what the customer, and thus you, want to get done needs to be manifested in your heads as common understanding. In writing.
Here is one addition (optional but highly recommended): specify not only what a user should be able to do, but how the user (not the product) does it. Think of it as "high-level test cases". At this point in time, you cannot specify them in the same detail as in the test spec (see above, "test specification"). But it helps tremendously to gain a common understanding of what the customer will want to do once acceptance comes around (see above, "test report").
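As a sketch of the form (the requirement ID and all wording are invented), a requirement plus its high-level test case can be kept as simple structured data, ready to be refined into the test spec later:

```python
# Hypothetical requirements-document entry with a high-level test case.

requirements = [
    {
        "id": "RD-03",
        "user_can": "export the current analysis as a PDF report",
        "high_level_test": (
            "User opens an analysis, chooses Export > PDF, picks a "
            "filename; a PDF containing the analysis appears."
        ),
    },
]

for r in requirements:
    print(f'{r["id"]}: the user can {r["user_can"]}')
```

Note that the entry says nothing about what the product does internally or how -- exactly as the requirements document should.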
By now you have probably figured out what software engineering is good for: to keep the project from biting you in the hind side. Everything has a reason. None of it is "just documentation". If you encounter parts of a process that do not make sense to you, throw those parts out. Seriously. Do everything for a reason. Then you are a software engineer.

EOF (April 2012)