Dennis E. Hamilton, Software System Architect, NuovoDoc
This is draft 0.10 of 2000-06-21
1. What's SE4E?
2. Accountability: What Happened?
3. Collegiality: Community Accomplishment
4. Predictability: Make It So, Keep It So
5. References and Resources
1. What's SE4E?

I dream of computer technology and programming being accessible to anyone: using the technology itself to support people in exploring and mastering as much as they choose; using the technology to make itself less inscrutable and more open to investigation and understanding by its users. I'm particularly taken by the idea of having support for customization and programming of our computers available for anyone to partake of whenever it's needed, for as much as it's needed, and for as long as it holds our interest.
Today, there are marvelous opportunities to employ these engaging tools as vehicles for exploration, discovery, and self-expression.
In learning about programming, playful discovery and experimentation are the key, especially for creating experiences of self-directed exploration [Hacker]. There are excellent, freely-available tools, such as Python, and a community of contributors willing to help newcomers begin personal programming activities. There are worked examples and web resources to help further.
As you take on personal programming, you'll soon encounter the same difficulties that software engineers confront in developing larger-scale software. Like it or not, it takes consistent discipline to deliver complex systems into an imperfect world, whether for commercial air travel, distributing electrical power, operating the postal service, visiting a comet, or giving depositors reliable access to their own accounts via automated teller machines anywhere in the world.
It's useful to know how software engineers win at their game, and there are elements that apply to any purposeful activity. I am thinking of three elements of engineering discipline that we can all appreciate and use as a form of Software Engineering for Everyone (SE4E): accountability, collegiality, and predictability.
2. Accountability: What Happened?

Accountability is about providing an accounting of what happened, how it happened, and even why it happened. It is such a common element of engineering and scientific disciplines that it is easy to overlook.
In 1958 I dropped out of college after two quarters. Freshman calculus plus high-school drafting classes got me a job as an engineering aide. I created graphs and charts from the output of computer models of aircraft behavior. Everything was checked, signed, bound, and filed. The engineers wrote everything down. Everything.
The lead engineer and I studied FORTRAN together. This was my first contact with a power user. He demanded program results in the exact form needed in the analyses and engineering reports we were producing, rather than suffering with difficult-to-digest output that the software group was willing to provide.
I resented the work's tediousness. I never feared riding in the Boeing 720 when it was later produced.
In Watts Humphrey's Introduction to the Personal Software Process, students record their time and keep a log book and journal very early [PSP]. This becomes a progressive historical account of actual effort, including the ideas that were tried, the problems that were discovered, and how the course of a project evolves over time. It also provides insight to the engineer, scientist, or student about where activity is spent and how long it takes to accomplish things in a genuine, not imagined, workday.
The critical instrument is a record of what you are doing while you are doing it. Keeping notebooks and journals, and sticking with them, is how to practice and strengthen the discipline of accountability.
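The habit can even be bootstrapped with the tools being learned. Here is a minimal Python sketch of an append-only work journal (the function and file names are invented for illustration, not taken from the PSP materials):

```python
from datetime import datetime, timezone

def log_entry(text, path="worklog.txt"):
    """Append a timestamped note to a plain-text journal file.

    Appending (mode "a") keeps the log a progressive historical
    record: earlier entries are never rewritten.
    """
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M")
    with open(path, "a", encoding="utf-8") as journal:
        journal.write(f"{stamp}  {text}\n")

log_entry("Started reworking the display interface.")
log_entry("Found a timing problem; worked around it with a retry.")
```

The point is not the code but the practice: entries are written at the moment of the work, and the file only ever grows.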
3. Collegiality: Community Accomplishment

By collegiality I mean being a willing, self-conscious participant in a community activity that engages participants distributed across space and over time. Teamwork is part of it. Scholarship is another. The key thing to recognize is that computer software is a community accomplishment and that everyone who contributes in any way brings something that wouldn't otherwise have been there.
There are two simple practices for developing collegiality:
- Eagerly acknowledge the contributions that you find in the work of others.
- Develop your work so that it is understandable and available to others without you.
In short: Borrow, add, and give it away.
I grew up on a steady diet of pre-Sputnik science fiction. I say that Robert Heinlein taught me to read. In all of that reading, I thought of being on the moon or going to Mars as something that would be a personal, individual act. The reality of space flight and the magnitude of the enterprise that it took just wasn't the way I dreamt it would be. Today I'm moved to tears by the magnificence of that undertaking and the contributions that so many people made to bring space flight to reality. Every detail and every contribution mattered.
Human activities of any scale are cooperative activities. As a young software developer, I had this conceit that I could do it all myself, relying on my innate creativity, and if those other jerks would get out of my way it would all be perfect. That's hogwash, and I'm still caught wasting time before consulting someone else for assistance or asking them to be my sounding board for ideas I am struggling with.
In my first dedicated programming job (as a "Clerk Typist A" since they hadn't invented student programmer positions at that university in 1959), the faculty member I worked for was creating a handbook of software. He had a grant for collecting programs that were available in his field and republishing them with the documentation needed for making them useful. We were also cleaning up the implementations so that the programs and routines could be used together as part of a coherent body of work. I loved it. I was constantly frustrated in my struggles to write intelligible documentation. I learned what I have heard repeatedly since: the way to learn to write is to start writing. And that includes sharing one's writings as imperfect work in progress.
Nowadays I incorporate documentation as an inseparable part of the design of programs. Assuring that a program is explainable is my primary test for conceptual economy of the software itself. Even when I am not building software for anyone else to use, I preserve the hard-won habit of documenting what I am doing as if it is intended for others to be able to use without having written it themselves. Truthfully, I don't ever think otherwise, because that someone else is often my forgetful self at a later time.
Work doesn't have to be perfect before it can be shared with others. And the constructive observations of others will provide focus on essentials that are easily overlooked by someone immersed in a project. Explaining a program design to someone else provides insights into what I am doing that I wouldn't have had, even when the person I am having a walkthrough with doesn't say a word. I don't know why that is. It works so often that I simply trust in it.
Progressive refinement toward mastery of programming as a problem-solving skill is not a solitary activity, no matter how much we go through it individually. For it to work, we must be willing to submit our work to the adaptation and refinement of others. Sooner rather than later. When I first met Donald Knuth, he spoke about some of the most beautifully-crafted programs he had ever read and that inspired work he would later be renowned for. Early, hand-crafted software became the inspirations for our generation of programmers, and it was possible because earlier programmers made their work available to study and adapt.
4. Predictability: Make It So, Keep It So

The first principle of predictability is divide-and-conquer: creating interface agreements and decomposing implementations into modules so that there can be parallel and independent effort. This happens in two ways: organizing and coordinating work, and structuring and coordinating the artifacts that the work delivers.
This is the most difficult aspect of engineering discipline that you will ever encounter: defining and organizing programs and the work to produce them without having already done it.
Practice predefining and sticking to your interfaces. See how it allows modules to be brought together -- and substituted -- without surprises. Preserving predictability across many cycles of alteration and refinement is almost always important. Study the interfaces used by different software packages and you'll notice that some interfaces work better than others, and begin to see why.
There is a wonderful paradox: freedom for design and innovation is carved out of the space created by agreeing to constraints that will never be violated. It sets up the rules of a game so there is room to play, and to play again, and then to win.
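In today's terms, such an agreement can be written down as an explicit interface that modules promise to honor. A minimal Python sketch (all names here are invented for illustration): two independently developed modules satisfy the same contract, so either can be substituted without the caller changing.

```python
from abc import ABC, abstractmethod

class Display(ABC):
    """The interface agreement: every display module must honor this."""

    @abstractmethod
    def show(self, text: str) -> str:
        """Render text; return what was actually presented."""

class ConsoleDisplay(Display):
    def show(self, text: str) -> str:
        return f"[console] {text}"

class TerminalDisplay(Display):
    def show(self, text: str) -> str:
        return f"[terminal] {text}"

def greet(display: Display) -> str:
    # The caller depends only on the agreement, not the module behind it.
    return display.show("hello")
```

Because `greet` is written against `Display` alone, the two implementations can be developed in parallel and swapped without surprises; that is the freedom the agreed constraint carves out.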
In 1975 I provided system architecture for a nationwide on-line processing system. The programming team was short-handed. I was given the opportunity to develop real-time display interface software -- a critical element -- for minicomputers to be installed in 100 business offices across the country. An initial release was being outgrown; having the new system was urgent. We rewrote all of the software, increasing performance and function all at once.
I immediately changed the application interfaces for interacting with display terminals. Instead of one interface with a complex set of parameters for accomplishing a combination of very different operations, I introduced one interface per operation, with each operation having a single well-defined function. There was no behavior that wasn't clearly defined, operation by operation. This was like hand-crafting what's called an object-oriented implementation now. We designed in an informal dialect of Algol, with the implementation in assembly language. Adherence to an object model came entirely out of disciplined use of the available lower-level tools.
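The shape of that change can be suggested in Python (the names and operations are hypothetical; the original work was designed in an Algol dialect and implemented in assembly language). The first style packs very different operations behind one call with mode parameters; the second gives each operation its own entry point with a single well-defined function:

```python
# Before: one entry point, behavior selected by a tangle of parameters.
def display_io(op, text=None, row=None, col=None, clear=False):
    if clear:
        return "cleared"
    if op == "write":
        return f"wrote {text!r} at ({row}, {col})"
    if op == "read":
        return f"read field at ({row}, {col})"
    raise ValueError(f"unknown op {op!r}")

# After: one interface per operation.  Each function does one thing,
# and there is no behavior that isn't clearly defined, operation
# by operation.
def clear_screen():
    return "cleared"

def write_field(text, row, col):
    return f"wrote {text!r} at ({row}, {col})"

def read_field(row, col):
    return f"read field at ({row}, {col})"
```

With separate, single-purpose operations, each can be specified, tested, and verified in isolation, which is what made the debugging payoff described below possible.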
Although I was nervous about introducing this change, the simplicity of debugging and verifying correct operation with cleaner, separate interfaces was too valuable to pass up. It also made the inspection and verification of my real-time implementation much easier.
Fortunately, the new interfaces could be built atop the old implementation using a simple shim layer. A working prototype was available immediately. Changed applications began testing with the prototype on a system that was already known to be reliable. Defects in the prototype were caught and repaired or worked around immediately.
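A shim of that kind is easy to picture in Python (again with invented names): each new, single-purpose entry point is implemented as a thin delegation onto the old omnibus interface, so applications can start testing against the new interface while running on the proven implementation.

```python
# Old, proven implementation: one omnibus call with mode parameters.
def old_display_call(op, text=None, row=None, col=None):
    if op == "clear":
        return "cleared"
    if op == "write":
        return f"wrote {text!r} at ({row}, {col})"
    raise ValueError(f"unknown op {op!r}")

# Shim layer: the new one-operation-per-call interface, delegating
# to the old implementation until the redesigned one is ready.
def clear_screen():
    return old_display_call("clear")

def write_field(text, row, col):
    return old_display_call("write", text=text, row=row, col=col)
```

When the redesigned implementation arrives, only the shim bodies change; the applications already integrated against `clear_screen` and `write_field` are untouched.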
It was exciting to offer a stable platform for the application developers, not having them wait for an early, fragile implementation from me. When I completely redesigned the implementation, I integrated with already-working applications. This became critical as my software became later and later, running far behind the original plan. At the end, integration and confirmation testing went off without a hitch. Only one bug was uncovered, in the new administrative interface that had no counterpart in the old system.
There was more. The new software was so fast that we uncovered timing problems in the firmware of the terminal hardware. Because the new interfaces completely hid the hardware, I was able to add special communication delays in places where the firmware needed more time to complete its operations. Application modules weren't touched.
Some problems never went away. The terminal hardware would lock up from time to time. When the firmware froze, up to 16 displays simply stopped talking, and my software quietly timed out, shutting down the user sessions that appeared to have simply gone away. There wasn't even anybody I could tell about it!
Hardware technicians pleaded for a way to diagnose the terminals from the minicomputer. I gave them a way to analyze the data my software had about all of the displays, but it was useless. The hardware interface didn't provide the kind of information that the technicians needed, and there was nothing I could do to make up for that. Well-known hardware interface techniques didn't have these limitations, but someone decided that was too expensive for these minicomputers. The users trying to run their business under these conditions might have wanted something to say about that.
Looking back, I see an unexpected lesson. When hardware developers start programming, it is easy to stop thinking like an engineer. We ended up dealing with a computer program hidden in firmware where none of us could fix it, dealing through hardware interfaces where we couldn't even tell what the problem was. No amount of add-on software could compensate for that unfortunate point of inscrutable failure. When I had been at a mainframe manufacturer a decade earlier, you couldn't build, let alone ship, a computer system if you couldn't show people how to troubleshoot and repair it. Somehow, that principle didn't transfer over so well when software started becoming part of everything.
The part of predictability that I don't want to talk about is having effective work practices. Predicting required effort is a critical skill. That includes being able to correct your predictions early in the development process, when there is still time to do something about it. I struggle here.
It's important to be able to fail. I may have to fail, repeatedly, before having predictability. It is not something you or I already know. Taking on a new area of technology or changing my tools puts that deficiency in my face. And when I'm afraid of being incompetent, procrastination sets in, compounding the problem. I am awestruck at how incompetent today's great software engineers must have been willing to be, so that they could have the experiences that gave them some mastery at predictability.
Today, open-source projects carried out over the Internet provide direct experience in all aspects of predictability. There is no safer or more satisfying place available to the willing newcomer. Find an Internet study group or volunteer for a simple piece of a group project. Most open-source projects need people to practice installing and using the tools, to provide information about problems, and to provide documentation. All contributions are appreciated. Use this as your public laboratory for experimenting and observing your own development of predictability.
5. References and Resources

- [Hacker] Raymond, Eric S. How to Become a Hacker. Published on the Web as a frequently-asked-questions file; updated periodically. http://www.tuxedo.org/~esr/faqs/hacker-howto.html
- Haynes, Marion E. Project Management: From Idea to Implementation, 2nd ed. A Fifty-Minute Book. Crisp Publications (Menlo Park, CA: 1989, 1998). ISBN 1-58052-418-9. As difficult as creating predictable projects seems, the basic principles are very simple and have nothing to do with computers or software. When I need to remind myself of that, I go back to this straightforward resource. I keep giving my copies away to others who ask me how to get started in project management.
- [PSP] Humphrey, Watts S. Introduction to the Personal Software Process. Addison-Wesley (Reading, MA: 1997). ISBN 0-201-54809-7 pbk.
$$Author: Orcmid $
$$Date: 02-10-13 15:28 $
$$Revision: 15 $
End of Document