Hangout for experimental confirmation and demonstration of software, computing, and networking. The exercises don't always work out. The professor is a bumbler and the laboratory assistant is a skanky dufus.
2005-05-07
NSS2: All Things to All People through Perfect Software

ACM News Service: Summit Calls for ‘National Software Strategy’. The Second National Software Summit reported on the need for a national strategy that has something for everybody, so long as they’re on-shore:
Naturally, we’ll support the critical infrastructure, build software using known best practices, routinely develop trustworthy software products, establish a competitive U.S. software industry, and put a chicken in every pot. To oversee this broad strategy (dare I say grand challenge), a similarly-representative group (of industry, government, and academic representatives, of course) is to be constituted as the National Software Strategy Steering Group and meet every three years. But wait, save your matches, NSS2 sees light at the end of the tunnel. There’s a vision:
Oh, and a gap or two that need to be closed:
It’s wonderful to have so much scope and mission creep without leaving the starting gate, isn’t it? Which is to say, here we go, doing the same thing over and over again and expecting a different result. Except we now have to worry about the barbarian hordes doing it cheaper. I don’t know why I’m so pissy about this, except that it reads just like every high-minded committee output that includes everyone’s agenda and everyone’s pet unsuccessful solution, with a road map no different from any road map we’ve ever seen before, blindly trusted to get to the destination that we’ve failed to reach time after time.

So, where did this inspiring account arise? From Reston, Virginia, on the 2005-05-05 PR Newswire, under the title «‘Software 2015’ Program Addresses ‘Unacceptable Risks and Consequences of Software Failure’». The PR that this news blurb is about heralds the new report, “Software 2015: A National Software Strategy to Ensure U.S. Security and Competitiveness.” The original press release is a bit more coherent, though it is not clear that it is any more promising.

The NSS2 meeting was apparently convened under the Center for National Software Studies (CNSS), a not-for-profit with the mission of elevating software to the national agenda. I like it that their home-page logo and organization name have a noticeable blur to them. The NSS2 final report is available there as a compilation of several PDFs. The list of issue presenters is impressive, too. There are smart people in this act. Could it be a matter of breathing the air too close to the beltway?

I’ve taken a particular fancy to this bit from the press release: “The Software 2015 Report makes a compelling case for the urgent need to …” I’m beginning to understand what people mean when they speak of my breathless prose. Look, I’m a believer in software engineering practice, raising the level of trustworthiness in software, all of those things. I’m willing to devote my life to that. But hawking this fluff the same way we’ve been doing it for over 40 years, only louder (and more nonsensically), really frosts my cupcake, you hear?

I’ve downloaded all of the PDFs and trust my MSN Desktop Search to index them and let me know just what kind of goodies there are in the report. Some really bright and seasoned people may have added to this important conversation. I won’t take a chance on having neglected some new insight in this work.

I can see it like it was happening today. Just over 30 years ago, Jack Laschenski turns to me and says, “There is no software crisis. If there were, we’d have to do something about it.” We’re still not doing much about it. Maybe it’s not real? What would happen if we quit clamoring for more global, top-down intervention of national proportions and actually worked to deliver some trustworthy software? We’d then have some handle on a measurable difference that could be made. I am a little afraid of what the lesson might be, but we’ll never know until we do it, aye?

2005-05-03
Are You A Problem Witch or a Solution Witch?

ACM News Service: Security Stalls Mobile Multimedia. This blurb makes a nice contrast among the four-and-counting approaches being employed to accomplish security (including DRM) on mobile multimedia-capable devices. It is also clear that the tension and multiplicity of approaches is being fought out in the solution space. The dominant agendas there may have little to do with how we look at it from the problem space, where we operate as users of these devices.
Junko Yoshida’s 2005-04-25 EE Times article provides a lengthy and clear navigation of the many layers involved, with emphasis on how much this is in the hands of content providers, service operators, and device manufacturers (and software developers). We get to vote last and, unfortunately, with our feet, thumbs, and eyeballs. All we’ll know is what our experience with the systems turns out to be. Nothing new, I guess.

How Do You Know Your Discarded Disk Is Unreadable?

ACM News Service: Skeletons on Your Hard Drive. This blurb is a great reminder of how much faith we put in two things: what people tell us their service or disk-wiper software provides, and what we believe because we don’t know how to read the wiped data ourselves. We tend to forget that a culprit out for your information, or simply scanning opportunistically for anyone’s goodies and private files, is going to attempt things we aren’t equipped to verify ourselves.

This is distressing for me because I am a big advocate of arrangements that I term “confirmable experience.” Confirmable experiences, such as two end parties having the tools they need to figure out why an e-mail communication is failing, have a strong cooperative component: the willingness to arrange, use, and exchange confirmatory findings. You and I may successfully arrange to run the same tests at a distance, or tolerate failures in ways that let the defect be discovered. More than that, I am able to communicate what I am seeing to the other party so that the end-to-end picture can be pieced together. That’s completely different from the situation where someone is willfully seeking an exploit and has no interest in my awareness of it, let alone in confirming it with me. So here we are, having to think about the unseen and putting faith in the notion that its invisibility to us means it is truly inaccessible. It’s one of those moments that brings William Kingdon Clifford’s challenging “The Ethics of Belief” sharply into recall.

Matt Hines’ 2005-04-20 CNet News.com article provides more details and information about how to wipe disks properly if you’re going to rely on such techniques.

All of this is of little help against the ordinary theft of a laptop. There you have a pristine hard drive with exactly what the user wants on it, in all its glory. Not only does it appear quite easy for my laptop to be reset to an insecure default startup configuration, but with physical access to the machine a thief can simply remove the hard drive and examine it at leisure using a different machine. So I have to protect the data on the drive in a direct way. Mostly, I don’t want anyone else to be able to use the machine with my hard drive in it, and I’d rather the thief be discouraged from trying to use anything that is on that drive, especially the operating system itself. A determined perpetrator won’t be dissuaded, but I’d like to cut down on the hazards of ordinary theft of a mobile device.

If encryption is the answer, how do I rely on that, and whose product can I trust? How does it impact my day-to-day ability to operate? I have no idea. What I do know is that I have little way of telling whether the safeguards are really working as claimed, just as I have to rely on my antivirus software being benign, on my software firewall really protecting my system, and on my residential router actually being impenetrable from the net, especially with a DSL modem in front of it. I am left with Clifford’s challenge: What right do I have to believe that I am protected by these measures?
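On the wiping question, the only spot-check an ordinary user can really run is to overwrite the data and then read it back. Here is a minimal sketch of that check in Python, purely my own illustration and not anything the articles prescribe; the file name is a made-up placeholder, and a real wipe would target the raw device, require administrator rights, and destroy everything on the disk:

    import os

    # Placeholder target; a real wipe would open the raw device instead
    # (e.g., /dev/sdb on Linux), which erases the entire disk.
    TARGET = "discarded-data.bin"
    CHUNK = 1024 * 1024  # work a megabyte at a time

    def overwrite_and_verify(path):
        size = os.path.getsize(path)
        # Pass 1: overwrite every byte with zeros and force it to the media.
        with open(path, "r+b") as f:
            remaining = size
            while remaining:
                n = min(CHUNK, remaining)
                f.write(b"\x00" * n)
                remaining -= n
            f.flush()
            os.fsync(f.fileno())
        # Spot-check: read everything back; any nonzero byte means the
        # overwrite did not take at the logical level.
        with open(path, "rb") as f:
            while True:
                block = f.read(CHUNK)
                if not block:
                    break
                if block.strip(b"\x00"):
                    return False
        return True

    print("logical wipe verified:", overwrite_and_verify(TARGET))

Even when this reports success, it only shows what the drive’s logical interface is willing to reveal; remanence in reallocated or spare sectors stays invisible, which is exactly the self-verification limit the blurb is on about.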
Uh, lemme see, I’m gonna hack my router and expose my residential LAN to the Internet ... Not.

Joi Ito’s Web: Earthlink R&D shows that IPv6 can be easy. Joi Ito notices a great announcement on the migration of some routers to IPv6 in a way that preserves IPv4 and NAT and allows you to have serious IPv6 addressing of machines on your LAN. Typically, you’ll need a recent operating-system release, such as Windows XP, OS X, or Linux, that supports a dual stack and/or tunneling of one protocol through the other. The nice thing is being able to have a block of permanent IP addresses. I am not sure I know how to get my ISP (I’m on DSL) and border systems to route them to me, and I have a lot more questions before I’m willing to try it.

First, let me say this is great news. As recently as spring 2004, while I was in an M.Sc. in IT Computer Communications course, the disruption of our in-place systems and all of those small routers and nodes, just for an IPv6 migration, looked near-insurmountable. We have an interim scheme (IPv4 with NAT) that works so well there is not a strong business need to fix it in North America. It’s not so broke that we need to fix it, and the migration has no benefit until it is done. That’s like setting a time-bomb in Visual Basic 6.0. Nobody wants to spend the money just to stand still.

What about transitional security? My concern is two-fold. I don’t think I want to install any research center’s firmware upgrade on my Linksys residential router and firewall without giving it a good hard think. This is primary infrastructure, and I am not sure I see all of the pieces in place to put my trust in (1) not knowing what’s running in my first line of network protection at the border of my SOHO LAN, and (2) exposing my systems to IP addresses that are addressable by anyone who sniffs for them. I’m certain that I don’t have appropriate safeguards for what happens when that avenue of attack succeeds against some system inside the residential firewall.

How Is That Any Worse Than What We’ve Got?

Exactly my point. We’ve been prepped to expect that IPv6 will solve our infrastructure (that is, basic Internet and IP) security problems as well as provide fixed addresses for every wandering mote on the planet. (I’m assuming that not everyone gets a block of addresses as big as Mr. Blog reports, or we’ll run out of those faster than IPv4 addresses.) Now that it looks like we don’t have to have a planetary Sunday in Sweden to switch over, and some big bumps seem to have been smoothed out, let’s get the security and safety part down pat while we have the opportunity. That will take more than the common assertion that IPsec is the answer.

I really don’t know what’s running in my residential router now, do I? Do you (in mine or yours)? Perhaps it is time to raise the bar on how we establish trustworthiness for those fixtures we’ve been accustomed to accepting without question. And then there’s the trustworthiness of the way we integrate all of this beneath the useful applications that are what we’re really interested in, and what that does to the vulnerability picture. I’m going to take my time on this one before I go skipping naked through the jungle that the Internet has become. I think I’ll keep those torches stacked near the cave mouth and be careful not to let the fire go out at night, thank you very much [;<).
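Before flashing anybody’s firmware, you can at least probe where your own machines stand. Here is a minimal sketch in Python, my choice of tool and not anything Earthlink or Joi Ito suggests, that tries the same TCP connection over IPv4 and over IPv6; the host and port are arbitrary placeholders:

    import socket

    HOST, PORT = "www.example.com", 80  # placeholder destination

    def reachable(family):
        # Ask the resolver for addresses of just this family.
        try:
            infos = socket.getaddrinfo(HOST, PORT, family, socket.SOCK_STREAM)
        except socket.gaierror:
            return False  # the host has no address in this family
        for fam, socktype, proto, _canon, addr in infos:
            s = socket.socket(fam, socktype, proto)
            s.settimeout(5)
            try:
                s.connect(addr)  # success means this protocol path works
                return True
            except OSError:
                continue  # try the next address, if any
            finally:
                s.close()
        return False

    print("IPv4 reachable:", reachable(socket.AF_INET))
    print("IPv6 reachable:", reachable(socket.AF_INET6))

Behind a NAT-only residential router of the kind I have, the expected answer is IPv4 yes and IPv6 no, which is precisely the in-place situation the Earthlink work is trying to move past.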
2005-05-01
Flaws in Genuine Software Still Exploitable in Trusted Environment

ACM News Service: Does Trusted Computing Remedy Computer Security Problems? The use of trusted computer systems will make it likely that genuine software will be run under the protections of a trusted environment. This blurb reports an analysis asserting that there will still be vulnerabilities in those programs, and that a malicious intruder may be able to exploit them. Although it would seem that computers will be more secure, there are a number of ways that trust can fail, and these will tend to be the result of defects in the trusted program that a malicious entity can still exploit.

The Rolf Oppliger and Ruedi Rytz article in the April 2005 issue of IEEE Security & Privacy provides a nice run-down on the trusted-computing approach and its limitations. Basically, the trusted-computing platform is unable to detect malicious acts that happen at a level where the exploited behavior is indistinguishable from correct behavior, based on what the platform observes. Put simply, there can always be vulnerabilities at a higher level than what the platform protects. The authors question whether this improvement (and it is one) will be acceptable, given the presumed loss of flexibility in being able to install and run software of the user’s choosing. There is no generic answer to this question, it seems to me. Different circumstances will have different trade-off preferences, and we’ll need to understand those better.

A side benefit for me is a definition of technical trustworthiness, drawn from the Internet security glossary: trusted and trustworthy systems are not the same; according to RFC 2828 [big file], a system is trusted if it “operates as expected, according to design and policy.” If the trust can also be guaranteed in some convincing way, such as through formal analysis and code review, the system is called trustworthy. Hmm, interesting, aye Wingnut?
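To make the higher-level point concrete, consider load-time measurement reduced to a toy. The sketch below is my own illustration in Python, not anything from the Oppliger and Rytz article; the expected hash and the image name are arbitrary placeholders for values a platform would hold in protected storage:

    import hashlib

    # Placeholder for a reference measurement the platform would protect.
    EXPECTED = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

    def measure(path):
        # Hash the program image exactly as it sits on disk.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def load_if_trusted(path):
        # The platform can verify WHICH bytes are about to run ...
        if measure(path) != EXPECTED:
            raise RuntimeError("unmeasured or tampered image; refusing to load")
        # ... but from here on it only sees the program behaving as designed.
        # A buffer overflow inside this genuine image produces reads and
        # writes that look like any others to the layer below.
        print(path, "measurement matches; loading")

    load_if_trusted("genuine_app.bin")  # placeholder image name

The check can establish which bytes are about to run; it cannot establish whether those bytes harbor an exploitable defect, and an exploit of one looks, to everything below it, like the program “operating as expected.”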