ACM News Service: Silver Bullets for Little Monsters: Making Software More Trustworthy. David Larsen and Keith Miller identify available solutions to three common defects, suggesting that it is no longer necessary to tolerate them. Oh, OK: these aren’t infallible, but they can improve the situation dramatically.
A number of tools are mentioned, including Microsoft’s SLAM, a static analyzer for C programs that checks a program against rules stated in its Specification Language for Interface Checking. Now wouldn’t that be just SLIC.
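To give a flavor of the kind of interface-usage rule such a checker enforces (a lock, once acquired, must be released before being acquired again), here is my own runtime sketch in C++. SLAM verifies rules like this statically, and this illustration is mine, not SLIC notation:

```cpp
#include <cassert>
#include <stdexcept>

// A lock wrapper that enforces an interface-usage rule at run time:
// acquire() may not be called twice without an intervening release(),
// and release() may not be called on a lock that is not held.
// A static checker like SLAM aims to prove such rules hold for all
// executions without running the program at all.
class CheckedLock {
public:
    void acquire() {
        if (held_) throw std::logic_error("rule violated: acquire while held");
        held_ = true;
    }
    void release() {
        if (!held_) throw std::logic_error("rule violated: release while not held");
        held_ = false;
    }
    bool held() const { return held_; }
private:
    bool held_ = false;
};
```

The point of the sketch is only that the rule is small, local, and mechanically checkable, which is what makes static enforcement of it plausible.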
The link to the article in the 2005–04 issue of IEEE IT Professional goes directly to an Adobe Acrobat PDF. (I hate it when that happens. You’ll have to condition your browser and firewall security to get the kind of access you want. I only crashed my browser once before getting it to work.) You may find it easier to work from the free-article page or from the publication’s table of contents after the free-download promotion ends.
The abstract resonates for me:
Despite the legions of ideas about how to improve software quality, much commercial software remains untrustworthy. In this article, the authors make the case for at least taking small steps toward improved quality by using silver bullets (corrective actions or methods) to at least eliminate some common problems, the “little monsters” of the title.
I am left wondering why these particular problems aren’t greatly reduced by disciplined structural constraints on programs, constraints that are easy to confirm. Such constraints would make it a matter of relatively simple inspection and well-crafted dynamic testing to confirm that memory (and pointers) are released, that buffer-filling code is properly defended, and that resources are freed. Edge cases must still be addressed with regard to unexpected data and resource acquisition/retention problems, but I’d think it very worthwhile to combine structural constraints with confirmation techniques. I’m also wondering what hackers use to discover those defects as prospective avenues for security exploits.
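By a structural constraint I mean something like the following C++ sketch (my illustration, not anything from the article): all filling of a buffer is funneled through one small class whose append operation checks capacity, so defending against overflow becomes an inspection of one class rather than of every call site.

```cpp
#include <cstddef>
#include <vector>

// A bounded buffer: the only way to add data is through append(),
// which refuses to exceed the declared capacity. Confirming that
// "buffer-filling code is properly defended" then reduces to
// inspecting this one class and testing its boundary behavior.
class BoundedBuffer {
public:
    explicit BoundedBuffer(std::size_t capacity) : cap_(capacity) {
        data_.reserve(capacity);
    }
    bool append(char c) {
        if (data_.size() >= cap_) return false;  // refuse, rather than overflow
        data_.push_back(c);
        return true;
    }
    std::size_t size() const { return data_.size(); }
private:
    std::size_t cap_;
    std::vector<char> data_;
};
```

The same funneling idea applies to the other little monsters: RAII wrappers make release of memory and other resources a structural property of scope exit rather than something every code path must remember.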
With that inquiry in mind, I discovered additional gold in the article:
This paper is a keeper for students of this subject. From my perspective on TROSTing, I would say that declaring the absence of any sort of defect is problematic (as Knuth has famously remarked, and Edsger Dijkstra seemed to begrudge the point as well). It seems more appropriate to assert diligence in application of current art, specifying which measures were applied to mitigate the prospect of various understood defects, and certifying that. Declaring perfection strikes me as foolhardy, especially since the user who stumbles on a defect will not care which category it falls in, if any. Demonstrating diligence I can see as doable at the current level of art.