Sunday, November 3, 2013


In a typical, mature software company, Development Engineering and Sustaining/Support Engineering are kept separate: separate teams, separate processes and procedures (there is usually plenty of overlap, especially around tools) and separate steps of the development process. In most companies there is a close relationship between the two, but the minimal relationship is the handover agreement.

Without a set of handover criteria to govern the acceptance of new features and new applications into Support Engineering, there is bound to be an attitude in Development Engineering to just "toss it over the wall to Support", which essentially makes any issue that comes with the new development someone else's problem.

Handover criteria are the Support Engineering team's safeguard against exactly this kind of attitude, but even with an amicable Development team, it's a necessary process. Any development team handing over has to be aware that the Support person receiving the handover can stop it for a long list of reasons, and that this will mean more work for Development.

Coming up with handover criteria is pretty straightforward, and almost any list off the top of any support developer's head will do for starters. There are a few key concepts here: documentation, code, bugs and bug-count criteria.

As a suggestion, here is a simple list:
  • Has there been a transfer of knowledge (TOK)?
  • Is there requirements documentation?
  • Is there design documentation?
  • Does the code check out, compile and run without error?
  • Does the code match the above documentation?
  • Does the code meet your coding standard? Essentially, you want to make sure that the code isn't a rabbit's warren of jumps/gotos, "TBD"s and comments with swear words in them.
  • Does the new code match the bug-count criteria?
  • Are there suitable unit tests for the new features?
Some key points:

TOK (Transfer of Knowledge):

There has to be some kind of overview and question/answer period for the developers receiving the handover. Depending on the size of the list of new features, this can be anywhere from 5 minutes of discussion to multi-day sessions. It's important that it contain an overview and some overview documentation that developers can read before going into the sessions. Preferably, this is done by a developer or tester who worked on the project/new features.

The bug-count criteria:

No software is bug free. However, if you've got a bug tracking system in place (and you should put one in place very early in the game), each bug will come with a severity, usually critical, serious, major, minor, enhancement (1, 2, 3, 4, 5). Only the first four matter for the handover criteria, and you'll usually have an agreed-upon maximum number of open bugs at each severity level for handover. 0-0-5-10 is a good set of numbers.

Ideally, it's 0-0-0-0, but this is hard to achieve, and the project team handing over may well argue about whether a bug existed before the project or not, which is fair enough (i.e. if a bug shows up in project testing but also exists in earlier software, then the bug was handed to Support Engineering some time in the past and wasn't caused by the project team).
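The bug-count gate above can be sketched in a few lines of code. This is a minimal sketch: the severity names and the 0-0-5-10 thresholds come from the post; the function name and data shape are made up for illustration.

```python
# Maximum open bugs allowed per severity at handover: 0-0-5-10
# (critical, serious, major, minor). Enhancements don't block handover.
THRESHOLDS = {"critical": 0, "serious": 0, "major": 5, "minor": 10}

def handover_allowed(open_bugs):
    """open_bugs maps severity -> count of open bugs against the project.

    Returns (ok, failures), where failures lists the severities whose
    open-bug count exceeds the agreed threshold.
    """
    failures = [sev for sev, limit in THRESHOLDS.items()
                if open_bugs.get(sev, 0) > limit]
    return (not failures, failures)

# One open serious bug and twelve minors: both block the handover.
ok, failures = handover_allowed({"critical": 0, "serious": 1,
                                 "major": 3, "minor": 12})
```

The point isn't the code itself, but that the criteria are mechanical: either the counts are under the agreed limits or the handover stops.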

Coding standard:

I think that if you're going to run a team of developers, the group of them should have a coding standard. Similar to the handover criteria, you can start with anything at first and grow it over time; you can find starting points online. A coding standard makes your code output consistent from developer to developer: what kinds of names to give variables, how to format while/for loops, conventions for member functions and member variables inside objects/classes and so on. When following a coding standard, the developer has to think a little more about what they are doing before just spitting something out, which means that the result will probably be more maintainable. Ideally, it addresses the code to a degree where it makes obfuscation and messy code harder to do: "no magic numbers" (an unexplained int x = 5, for example), a few lines of comment per function/method describing the function's operation, and so on.
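To make the "no magic numbers" rule concrete, here is a before/after illustration. The constants and function names are invented for the example; the rule itself is the point.

```python
# Before: a magic number. The reader has to guess what 5 means here.
def is_overdue_bad(age_days):
    return age_days > 5

# After: the number gets a name, and the function gets a short comment,
# as the coding standard asks for.
MAX_BUG_AGE_DAYS = 5  # bugs older than this get escalated

def is_overdue(age_days):
    """Return True if a bug has been open longer than the agreed limit."""
    return age_days > MAX_BUG_AGE_DAYS
```

Both functions behave identically; the second one is the kind the next maintainer can change safely without archaeology.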

And there you have it - the importance and a short description of a handover agreement.

Wednesday, October 23, 2013

Types of Maintenance

I was reading up on how to use Agile methods with software maintenance and came across this article. The most interesting part (for me) was his list of types of software maintenance. He lists 4 types:

  1. Corrective maintenance: corrects discovered problems;
  2. Adaptive maintenance: adapts software to changing needs;
  3. Perfective maintenance: improves performance or maintainability;
  4. Preventive maintenance: corrects latent faults in the software before they become effective faults.

Unfortunately, the rest of his article is about type 2, adaptive maintenance, which I don't consider maintenance at all: it's new development. My last team did some of this, but it was primarily done by another group that focused on new development and new features, most often requested by customers, which makes them change requests, i.e. additions to the current requirements.

And, it seems like a completely silly question to ask whether agile methods can be used for new features. Clearly, it's possible. I don't understand why someone would ask this question.

The original reason I went looking for agile methods in software maintenance was for types 1 and 3. In my experience, type 4 rarely happens, simply because it will always be trumped by current, outstanding faults that the customers (internal or external) are aware of.

I do really appreciate the way the different types of maintenance are broken down, though. It clearly delineates different roles in an organization. In my experience, types 1 and 2 are almost always done by different groups: 1 is done by Sustaining Engineering and 2 by Development Engineering (in a start-up, these can even be the same people; as a company matures, however, they will almost always split). Type 3 depends on what is being perfected, and it can go to either group. If a piece of software fails because of a slow-performing component, and it is currently affecting the customer, then it falls to Sustaining to fix it ASAP. If a poorly performing piece of the software needs better performance to allow a new feature to work, or the performance problem is caught while delivering a new feature, then it will probably fall to Development Engineering. This is often a negotiation, because if the fault is pre-existing*, then it could easily be up to Sustaining to fix.

*a pre-existing problem is one that wasn't introduced by new features added by development, but is part of the currently supported software, presumably one that has gone through a handover of some kind, passed from Dev to Sustaining at some point in the past.

Wednesday, October 16, 2013

Software Maintenance - the stigma

I've been a support manager for the last six years.

There is a definite stigma surrounding support, and certainly around support developers, i.e. individuals who change underlying code in applications for maintenance/bug fixing.

I have seen this again, and again over my career, and in my recent job search.

In one of my recent interviews, I heard that Support Managers are more customer focused, have less imagination/creativity than Development Managers (I have no clue where this comes from), and, just a little annoyance to stick in my craw, make less money.

The simple fact is that a support manager and a dev manager share 90% of the same skill sets. They both usually come from a development background and they are both managing people. If one is developing a piece of software, the other has to accept and maintain it. There's a good chance that the two of them look at the same code base, discuss the same applications, use the same tools and manage a team of developers.

For developers, it's even worse. Sometimes, the developers themselves think that support developers are simply the cast-offs of the dev side of things. That dev is all creative and cutting edge and that support is simply drudgery.

The fact is, to do support well, you NEED good people. The skill sets of support engineers and development engineers are even more closely aligned than those of dev managers and support managers. And yet this culture is rampant in the software development world. I don't get what the problem is with a developer who enjoys solving the problems of a product. There is a real skill to working through the puzzles, to pulling a string and making it all unwind. Some dev personalities just enjoy maintenance more than new development, and rather than abuse this fact and frown at it, I say we embrace it and reward developers who are good at maintaining software.

Software Maintenance: tools

If there's one thing I've learned in my years as a Support Manager, it's that tools are critical.

We could just depend on the heroics of individuals, but why waste their time doing tedious, time-consuming tasks, when they could be developing new product features, or fixing bugs?

The biggest three that I've seen are the following:

Bug tracking/code management - I list these two together, since I believe it's ridiculous to separate them. When you go looking at a bug or an issue, or a new feature (it's just a number in the system), you can see the code committed against that bug/issue/feature. If you're not doing this, you're probably doing it wrong.

eServGlobal (a former employer of mine) originally used Bugzilla and CVS. They developed a bit of glue, written as Perl hooks into CVS, that automatically added links and information into Bugzilla. This was called "cvszilla"; it is still open source and works with Subversion/SVN as well. GitHub (where almost everyone seems to be holding their open source code these days) has something similar, where individual projects can have "issues" that can be closed by code commits.
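The heart of that kind of glue is small: parse a bug number out of the commit message, reject the commit if there isn't one, and tell the bug tracker about the change. This is a minimal sketch of the idea only; the "BUG-<n>" convention and function names are invented here, and cvszilla's actual format differs.

```python
import re
import sys

# Require every commit message to reference a bug/issue, e.g. "BUG-1234: fix crash".
BUG_ID = re.compile(r"\bBUG-(\d+)\b")

def extract_bug_ids(message):
    """Return the list of bug numbers referenced in a commit message."""
    return [int(n) for n in BUG_ID.findall(message)]

def check_commit(message):
    """Hook entry point: non-zero return rejects the commit."""
    ids = extract_bug_ids(message)
    if not ids:
        print("Rejected: commit message must reference a BUG-<n> id",
              file=sys.stderr)
        return 1
    # A real hook would now notify the bug tracker with the commit details,
    # so the bug's page links back to the code change.
    return 0
```

Wired into a pre-commit or commit-msg hook, this is what makes "it's just a number in the system" work: every change is findable from the bug, and vice versa.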

I would be keen to hear other people's experiences with this kind of system, which is, as far as I can tell, essential for both Software Development and Software Maintenance.

Customer Audits/Release Management - auditing customers or their release machines was one of the smartest things my team ever did. It saves ridiculous amounts of time. All that's required is a script that runs on a target machine, collects data (checksums of binaries, config files, hardware data, software installs and so on) and reports it back to you. Ideally, it's collected automatically and periodically after there have been any changes; still, it's OK if you can request it from the customer (the way Microsoft does with Word, for example). When we did this at eServGlobal, the audit results were checked into the code management system so they could be viewed just like the rest of the code, and you could also see changes to customer configuration that had happened. It was very useful, and it was key to several other time-saving developments later on.
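The collection side of such an audit script really is small. Here is a sketch of the core of it, under assumptions of mine: the report shape, function names and choice of SHA-256 are all illustrative, not what eServGlobal actually used.

```python
import hashlib
import os
import platform

def sha256_of(path, chunk=1 << 20):
    """Checksum a file incrementally, without loading it all into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def audit(paths):
    """Collect checksums plus basic host data into one report dict.

    A real script would also gather config files, hardware data and the
    list of installed software, then ship the report back to base.
    """
    return {
        "host": platform.node(),
        "os": platform.platform(),
        "files": {p: sha256_of(p) for p in paths if os.path.isfile(p)},
    }
```

Because the report is plain text/data, it can be checked into the code management system, which is what makes diffs of customer state over time possible.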

The reason release management is mentioned here is that when you release software/scripts/database changes and so on, it makes sense to track the details and match them up against the audit data. That way, you're a click away from finding the source code for the system you are maintaining or upgrading. We did this as a web page, and checksums were the heart of it: a binary checksum is essentially unique and must have come from a specific delivery, and we had the code tags for each of these on hand and easily searchable.

Typically, when a customer reports a bug, you want to know what you are looking at. Sometimes the customer details/audit data are enough, or your release tracking and application deployment are good enough that all you need is the customer name and you're good to go. That wasn't the case in any of my previous environments. You need details from the target machine, since people change things, or deploy binaries or scripts that either haven't been tracked, or are tracked somewhere else in your system but don't show up in the install logs/details.

You might think, "Our system is simple enough", or "We only have three customers", but believe me, putting in tools like this is well worth your time. It's not that much development time for a single, keen developer to put them into place.

Release Tracking - it's worth having a separate tool for tracking individual releases. If you are releasing emergency binaries/jars or scripts, then automate the release so that an entry is put into a tool/system that lets you see the release and its release notes from your release tracking screen. Ideally, if this is released for a specific customer, those details would appear as well.

From the release tracking screen, you'd like to see all the details from as many different perspectives as possible: how many releases have been installed by a customer (tie this in to the auditing system mentioned above), a summary of all releases at a customer, which emergency interventions have gone to that customer, what bugs they've reported in their environment and so on.

Ideally, tools give you the details you need when you need them at each step of the maintenance process.