Traditional software engineering, CMMI and its problems

This is Part 2 of the series of articles "Why go Agile?"

Well into the 1980s, the largest buyer of software development services in the world, the US Department of Defense, was having trouble getting projects done on time, on budget and to the right specifications. Despite working with some of the best and most renowned software development companies out there, it still had close to a 50/50 chance of getting a project right. Worried about this track record, it set out to fund, together with Carnegie Mellon University, the Software Engineering Institute. This institute was tasked with compiling a set of software development best practices that could provide both a benchmark against which to measure current vendors, and concrete guidelines those vendors could follow to improve their software engineering practices.

The initiative ultimately gave birth to the Capability Maturity Model Integration (CMMI) and its maturity levels 1 to 5. The model ranks companies from Level 1 (operating under processes that are unmanaged, or ad hoc) up to Level 5 (optimizing). It is not our purpose here to go into detail about what each level means, except to point out that to reach Level 5, companies must have complied with the tenets of more than 16 software and systems development process areas, including the ability to:

a) Manage their software development process, at Level 2 (configuration management, project monitoring and control, project planning, requirements management);

b) Define their approach to software development (decision analysis and resolution, organizational training, risk management, validation, verification);

c) Quantitatively manage their development process (measuring organizational process performance, productivity, error injection and other key variables);

d) Optimize their process, which involves constantly reviewing the causes of process problems and taking measures to improve and resolve them.

The nature of the DoD's needs influenced CMMI at its core: the DoD's software development projects tend to be large and often interact with hardware.

Furthermore, they are government funded, which requires measures of "financial transparency" that limit budget flexibility, often requiring projects to quote a fixed cost. It is thus not surprising that the CMMI model was slanted towards traditional RUP (Rational Unified Process) methodologies, especially with regard to:

a) Big Requirements Up Front (BRUF). BRUF is a prerequisite for the so-called formal estimation techniques (such as COCOMO I/II, Function Points, Feature Points, etc.) that give a software development team a shot at estimating, in advance, the effort required, and hence the cost, of a project.

b) High formality and precision in the way requirements are elicited and documented. Under strict RUP, or even under the more archaic "waterfall" approach, requirements engineering is a ceremonious process that documents, in writing, use cases (using UML diagrams, for example), functional and non-functional requirements, process flows, mock-up screens, etc. It is not surprising that under RUP (or a traditional waterfall approach) the inception and elaboration stages take 40 to 50% of the total time spent on a software development project.

c) Testing "after the fact", i.e. after the code has been written. This is a form of quality control, but it often detects errors late, when correcting them is most expensive.

d) Strict controls on requirement changes. Processes for changing requirements are often so heavy and cumbersome that they seem designed more to prevent change than to manage it within a software development project.
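To give a feel for the formal estimation techniques mentioned above, here is a minimal, illustrative sketch (not from the article) of Basic COCOMO I, which estimates effort in person-months as a · KLOC^b using the published constants for each project class:

```python
# Basic COCOMO I (Boehm, 1981), illustration only.
# Effort (person-months) = a * KLOC^b; Schedule (months) = c * Effort^d.
# Constants per project class, as published in the Basic model:
COCOMO_PARAMS = {
    "organic":       (2.4, 1.05, 2.5, 0.38),  # small teams, familiar domain
    "semi-detached": (3.0, 1.12, 2.5, 0.35),  # intermediate
    "embedded":      (3.6, 1.20, 2.5, 0.32),  # tight hardware/regulatory constraints
}

def basic_cocomo(kloc: float, mode: str = "organic"):
    """Return (effort in person-months, schedule in calendar months)."""
    a, b, c, d = COCOMO_PARAMS[mode]
    effort = a * kloc ** b
    schedule = c * effort ** d
    return effort, schedule

# A 50 KLOC embedded project, sized up front from the BRUF documents:
effort, months = basic_cocomo(50, "embedded")
print(f"{effort:.0f} person-months over {months:.0f} months")
```

Note that the model's only input is a size estimate in thousands of lines of code, which is exactly why BRUF was needed: without detailed requirements fixed in advance, there was no size figure to plug in.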

In essence, the software development methodologies proclaimed as "best practices" under CMMI and RUP did produce significant gains in code quality. However, this came at a significant price in terms of:

a) Time (schedules became more predictable, but the formality significantly lengthened the software development process vs. ad-hoc programming);

b) Functional relevance (a rigid process did not allow applications to adapt easily to changes in the business environment, so the resulting application was often "what the client asked for, but not really what the client needed");

c) Frustration with cost (the resulting application was not cheap to build, due to the administrative overhead of all the process formality, and its final cost still did not match the initial "fixed price" estimate; yes, estimates were off by a lesser amount than ad-hoc estimations, but they were still significantly off).

More on PSL: World-class software outsourcing company PSL has over 30 years' experience providing development services. Originally founded in Colombia, with additional offices in Mexico and the US, PSL is the leading Latin American software development company in terms of quality, agility and process maturity.
