Blunder Dome Sighting  
Hangout for experimental confirmation and demonstration of software, computing, and networking. The exercises don't always work out. The professor is a bumbler and the laboratory assistant is a skanky dufus.



  


2004-09-11

 

Zombie Planet: Spam and Phish Egg Harvesting

Along with recent adventures around candling phish and disrobing other spoofs, I have begun to accumulate resources to compile into some how-it-works, how-to-tell, how-to-escape articles.  As part of that activity, my inclination is to document and refine on web pages, with blog pages leading to available material.  This is the reverse case: items that I have noticed that may end up as useful links in future compilations.  Here I feature dealing with intrusions and securing systems through personal action.  Material on design for security and on institutional responses is not emphasized in this posting.

Cybersecurity Groups: Where Are the Grass Roots?

ACM News Service: Industry Group Voicing Cybersecurity Concerns in Washington.  The Cyber Security Industry Alliance (CSIA) is another organization formed around cybersecurity policy, the "cybersecurity industry," and commercial responses to security concerns that arise in the context of Sarbanes-Oxley and HIPAA.  In my recent stumblings into phish and other malware, I have not found many of the organizations that accept security information and attack samples to be particularly interested in grass-roots participation (with the singular exception of a rapid and effective series of exchanges with secure@microsoft.com).  I keep coming back to Bruce Schneier's observations about the agendas of participants in [cyber]security efforts (and the corresponding security theater) and wonder how much this is costing all of us.  I cast my lot with Dan Appleman, whose approach in Always Use Protection: A Teen's Guide to Safe Computing is based entirely on home users not being powerless and on there being an eager army of recruits to the white-hat forces: the neighborhood teenagers and their teachers.

March of the Zombie Horde

ACM News Service: Are Hackers Using Your PC to Spew Spam and Steal?  Perhaps the most remarkable aspect of this article is that it appeared in USA Today.  The level of hand-wringing is a little surprising, although it is clear that law enforcement in this space is difficult.  One wonders what we are missing that would get computer users disturbed enough about how this works to accept precautionary practices and buddy up somehow.  For my own interests in supporting safe computing practices along with gathering easier-to-use forensic tools, the interesting mentions are of Intelguardians, Ed Skoudis, Dave Dittrich, and the nearby University of Washington Center for Information Assurance and Cybersecurity.  I am leaning toward an information assurance topic in my M.Sc. in IT dissertation proposal, too.  The Byron Acohido and Jon Swartz 2004-09-08 USA Today Tech article has a sidebar of useful related stories and provides clear examples and scary consequences.  If phishing has managed to pluck $2.4 billion out of the banking system, what will it take to make the criminal risk greater than the prospect of big-bucks payoffs?

Proactive Security from the Experts

ACM News Service: A Proactive Approach to Security.  Robert Clyde, the Symantec CTO, says malware attacks are increasing in frequency and complexity, with the time for an exploit to be exercised now less than the time it takes to receive, validate, and install a patch.  The thrilling part: "Vulnerability scanners are useful for writing secure code, but they are by no means perfect, and Clyde believes that vulnerability will be a problem for the next 20 years or so."
Ian Thomson's 2004-08-18 vnunet.com interview with Clyde proceeds cleanly.  One concern at the end of the interview involves the lack of top-notch security education, along with the tendency of the paltry number of Ph.D.s in the field to stay in academia.  There's a useful link to IT-ISAC, the IT Information Sharing and Analysis Center.  vnunet.com also has a security page.  The RSS feeds cover more than security, and they appear valuable.
It didn't occur to me when I first noticed this blurb, but I now notice how much it is assumed that the proactive responses are going to be provided by the folks who depend on the creativity of adversaries for their living.  If your attention is on the battle, there might not be much room for identifying a way of making the war simply unappealing to the enemy.

Neowin Interview: Bruce Schneier

Neowin.net - Where unprofessional journalism looks better - Neowin Interview: Bruce Schneier.  I'm looking for more security links, and this timely (August 30) interview with Bruce Schneier is, as usual, highly informative.  Highlights for me:
"In terms of security, do you buy into the argument that an open source model is better than a closed source model, when related to the Internet, and Operating Systems? "It's more complicated than that.  Secure software is software that's been analyzed, again and again by lots of smart people.  That kind of analysis is possible in the closed source model--experts can be hired--and it's possible in the open source model.  For large pieces of very popular open source software, like Linux, many people have analyzed the code for security vulnerabilities.  The result is some very well-written code.  But there are lots of open source programs that are obscure, and that no one has ever looked at.  Making your code open source allows for it to be analyzed for security, but does not magically make it secure.  I've written more here."
Although this is not so directly about my phishing theme, it does have to do with the general problem of malware and the exploits by which it arrives.  If I am at all typical of users of open-source software, I notice that I have no way of knowing that the distributed form I install is authentic and that it was built in a verifiable way from the source code it is asserted to come from.  I also do not know the verification and review status of the source-code and library components on which it was built.  Although we seem willing to forgo such assurance for commercial software, and for software written by someone we know, this seems to be a gap that the open-source community can cover, and can cover in a highly economical way.  I want to look further into that.
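To make that concrete, here is a minimal sketch, in Python, of the weakest form of such a check: comparing a downloaded archive against a digest published by the project.  The file name and the expected digest below are hypothetical placeholders, and this only pushes trust to wherever the digest is published; it says nothing about how the binary was built from reviewed source.

    # Minimal sketch: verify that a downloaded archive matches a published digest.
    # The file name and expected digest are hypothetical placeholders.
    import hashlib

    EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

    def sha256_of(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    digest = sha256_of("downloaded-package.tar.gz")
    if digest == EXPECTED_SHA256:
        print("Digest matches the published value.")
    else:
        print("Digest mismatch: do not install this package.")

A verifiable build from reviewed source would be much stronger; this only tells you that the bits you got are the bits somebody intended to publish.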
"If you were to look at 3 areas - The Software Designer, The Systems Administrator, The User - who would you say should bear the burden of responsibility for security? Or do you perceive it to be a shared responsibility? "Right now, no one is responsible;  that's part of the problem.  In the abstract, everyone is responsible ... but that's not a fair answer.  In the end, we all pay.  The question really is: what's the most efficient way to assign responsibility?  Or: what allocation of responsibility results in the most cost-effective security solutions? "We can't survive with a solution that makes the user responsible, because users don't have the knowledge and expertise to be responsible.  The sysadmins have more knowledge and expertise, but they too are overwhelmed by the sheer amount of security nonsense they have to deal with.  The only way to solve the security problem is to get to the root of it, and the roots are in the software packages themselves.  Right now, software vendors bear no liability for the software vulnerabilities in their products.  Changing that would put enormous economic pressure on software vendors, and improve computer security faster and cheaper than anything else we can do.  I've written about this here."
Schneier clearly recognizes the larger picture of security, and comments on his progression from cryptography to general security.  I've ranted about how programming is not the key to security, and I always want to pay attention to what Schneier has to say.  My takeaway is that liability for software vulnerabilities would provide the greatest marginal benefit, and I can see ways that could work.  Assuming that won't happen soon enough to be of much good, I wonder what is next on his list.  I also don't want to see us end up in a situation around liability where only commercial firms can afford the insurance.
"Do you have any practical advice for our readers, in terms of staying secure, and safe? "Backup. Backup, backup, backup.  You're going to get whacked sooner or later, and the best thing you can do for yourself is to make regular backups. "Staying safe in the Internet is actually pretty simple.  If users bought a personal firewall and configured it never to accept incoming connections, and were smart about email attachments and websites, they'd be a lot safer.  Also, the fewer Microsoft products the better.  There's lots more here."
I think I finally get what Schneier is looking at when he recommends avoidance of Microsoft products.  One can look at Microsoft product delivery as fundamentally compromised in that security is not truly Job One.  Facilitating commerce is ahead of that, as is feeding the feature-explosion rat race and preserving the legacy.  I can see how Microsoft's ambitions to be a media and communications company could compromise matters further.  It will be harder to move the company to a place where attention to security always comes first, yet I don't doubt the organization is capable of it.  We'll see, won't we?  More to ponder, more to wonder about.  And I need a backup plan that works, meaning I will actually carry it out.  What is that with you, Dennis?
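For what it's worth, even a crude, dated copy beats having no plan at all.  Here is a minimal sketch in Python; the source and destination paths are hypothetical, and there is no rotation, verification, or error handling.

    # Minimal backup sketch: copy a working folder to a date-stamped folder
    # on another drive.  Paths are hypothetical; no rotation, no verification.
    import datetime
    import os
    import shutil

    SOURCE = r"C:\Users\dennis\Documents"   # hypothetical working folder
    DEST_ROOT = r"E:\Backups"               # hypothetical backup drive

    stamp = datetime.date.today().isoformat()
    target = os.path.join(DEST_ROOT, "documents-" + stamp)
    shutil.copytree(SOURCE, target)
    print("Copied", SOURCE, "to", target)

The hard part, as ever, is running it on a schedule and occasionally proving that a restore actually works.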

Active Defense in Depth

Scobleizer: The layers of security I use to keep criminals at bay.  Robert Scoble looks at what Windows XP SP2 does to improve the situation and then looks at how he wants defense in depth on his computer systems.  I just came off of an 8-week Information Security Engineering course, so I'm charged up to notice security considerations everywhere I look.  I completely support this viewpoint, especially wanting to protect entrance and exit points and to protect them with multiple layers end-to-end.  Scoble's approach also satisfies Bruce Schneier's injunctions about having defense in depth and having ductility in your security system (so that no single failure is catastrophic).
You could lay out Robert's structures against a threat model that works for practically everyone who operates a single computer or a SOHO operation with a residential firewall/router at the perimeter of the SOHO LAN.  It's useful, practical, and grounded.  For geeks it is also a good way to practice using Frank Swiderski's Threat Modeling Tool (and give up any .NET phobia in so doing).
Why active defense in depth?  When you take on the challenge of denying the use of your computer system to adversaries, you become an engaged warrior in the community's discouragement of criminally-achieved spam attacks, virus distributions, distributed denial of service, phishing for identity theft, and other bad behavior that one way or another depends on co-option of our computer resources.  Instead of standing around wailing about why somebody doesn't do something, you get to stand up, say this shall not be, and declare what you can be counted on for in achieving that community purpose.
Until now, I hadn't noticed the opportunity in all of the attention on XP SP2: it raises awareness that Microsoft cannot solve the end-to-end security problem for us.  The introduction of improved security (and security-friendly functionality) in their important pieces raises the level of awareness and involvement for all of us.  Scoble's response is a great example of that.
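As a toy illustration of what such a model records, here is a sketch in Python organized by the STRIDE categories associated with Microsoft's threat-modeling approach; this is not Swiderski's tool, and the entries are illustrative, not a complete model of anyone's SOHO setup.

    # Toy sketch of a STRIDE-style threat-model record for a SOHO setup.
    # Entries are illustrative only.
    threats = [
        {"entry_point": "residential firewall/router (WAN side)",
         "stride": "Denial of service",
         "mitigation": "drop unsolicited inbound connections at the perimeter"},
        {"entry_point": "e-mail client",
         "stride": "Spoofing / Elevation of privilege",
         "mitigation": "no executable attachments; software firewall watches outbound"},
        {"entry_point": "web browser",
         "stride": "Tampering / Information disclosure",
         "mitigation": "mobile code (ActiveX, JavaScript) off by default, enabled per site"},
    ]

    for t in threats:
        print("%-45s %-35s %s" % (t["entry_point"], t["stride"], t["mitigation"]))

Even a flat list like this makes the layers, and the gaps between them, easier to talk about.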

Web Intrusions I Have Known

I was following a link from Allen Searls' Blog in the interest of finding out more about meta-social-networking.  Andy Swarbrick's Knowledge Business Website kindly reminded me that I didn't have JavaScript enabled (which is the case for sites that I haven't said I trust to ship mobile code to my machine).  Then I was given a short description of JavaScript-enabled browsers, told about a bug in Opera around JavaScript, and informed that my site experience might not be very satisfactory.  What is interesting is that, rather than reducing fanciness and still providing me with content, the site gave me no access to content at all.
I enabled JavaScript (using my software firewall) and nothing else (no Java applets, no ActiveX, etc.).  What enabling JavaScript alone got me was two scripting errors because of unsatisfiable object references, both reported by Internet Explorer 6.0.  I can now see the page Allen referenced and also read the article.  The Blog This! scriptlet doesn't work on the page, though, which often happens with pages that depend on objects of some sort or do weird things with frames.
What's interesting to me is the supposition that of course I want to enable JavaScript.  There was no consideration that my browser wasn't advertising support for JavaScript not because it was old but because I had instructed it not to.  There was no mention of the objects that the JavaScript was then going to attempt to open.  I am singling this site out simply because there was no reason to put me through this.  At this point, I usually decide that I have experienced enough moments-of-truth and close my browser window.  This time I did want to see the article and also remember where I found it.  I am not going to give any further permissions to pages from this site, but I will read the article with interest.  I am left baffled about where people's attention is when it is considered all right to require so much intrusion on my computer for the sake of the wonderful content experience they want to offer me and that someone thinks I will love.
Since I have become much more selective about what I allow a web site to send to me, I have discovered that almost all sites work just fine if I decline to accept ActiveX, and the reduced intrusion of advertising is most welcome.  The main exceptions for me are Microsoft sites, such as MSDN on-line, where ActiveX is needed for the content to work.  I can understand that, but it does create a security shear in those periods when we're being told to lock up Internet Explorer until a repair for the latest vulnerability is available.  The other exceptions are sites that use Macromedia Flash or various audio plug-ins to operate.  I will consider those, but I am very choosy about where I am willing to go that far.  The default setting on my system is to have none of that.

Sophisticated Phishers

ACM News Service: Internet Snagged in the Hooks of 'Phishers'.  There are some daunting numbers: 57 million US adults have received phishing email, nearly 11 million have clicked a false link, and 1.8 million actually gave out information.  Every time there is a new outbreak, Earthlink receives 40,000 customer service calls.  The conclusion of a Gartner analyst is that consumers are increasingly avoiding the internet for online transactions.  The Leslie Walker 2004-07-29 Washington Post article has more details and this ominous observation: "Phisher attacks are skyrocketing.  They have the Internet and banking industries terribly worried -- though apparently not enough to fix the problem yet."  Meanwhile, there are panels and conferences scheduled to look at improved authentication tools.  Based on discussions in class, I am wondering why there is no traction for a very simple remedy: dropping datagrams that cross through any border service (such as an ISP DSL modem) onto the Internet and that carry incorrect origin IP addresses.  Just drop 'em.  Why is this too simple?  [dh:2004-09-12T04:45Z Well, because most phishing doesn't bother with fake origin IP addresses; it uses legitimate zombie computers to fly the hook to your in-box.  At the e-mail level there may be discontinuities to notice, and these could be defended at borders and, better yet, used to forward-track to the bait box and the recipient of any foolishly-entered form posts.  The biggest deal is that, according to the statistics at the beginning of this post, crime does pay.  That's the difficult part.]
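Even granting the later note that phishers mostly ride zombies rather than spoofing, the remedy I have in mind amounts to egress filtering at the border: forward a datagram only if its claimed source address could actually have originated inside.  A minimal sketch of that check in Python, where the address block is a hypothetical customer prefix:

    # Sketch of an egress-filter check: drop outbound datagrams whose claimed
    # source address is not inside the local prefix.  The prefix is hypothetical.
    from ipaddress import ip_address, ip_network

    LOCAL_PREFIX = ip_network("203.0.113.0/24")   # hypothetical customer block

    def should_forward(source_ip):
        """Forward only packets whose source address belongs to the local prefix."""
        return ip_address(source_ip) in LOCAL_PREFIX

    print(should_forward("203.0.113.42"))   # True: legitimate local source
    print(should_forward("198.51.100.7"))   # False: spoofed source, drop it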
 

Lost in Twisty Overlays All the Same: Peer Pressure

I have been clipping on a lot of topics, and now want to organize them where I can see the result and also keep down the number of individual posts.  This may be contrary to blog etiquette, although I find it disheartening to trace one-line feed articles to one-line blog entries that are nothing but another link.  I don't mind that with Scoble's link blog, because I know what that is.  Whatever.  I am consolidating clippings into posts as a way to give me an organization that I can handle.  More or less ...

Peering Network for Content Updates: FeedMesh

Sam Ruby: FeedMesh.  There is an immediate and practical problem in the blogosphere around the collapse of syndication.  As was discovered on MSDN, certain approaches to syndication do not scale, being vulnerable to a form of commons-saturation crumble.  The hosts become effectively DoS-ed by the incessant demand for pulls of the latest feed.  This is partly attributable to a mistake in the MSDN approach, but it also comes from the fact that syndication is a pull technology (its great advantage, in fact) combined with the compulsive appetite of feed readers for polling the site constantly for updates.  At this weekend's Friends-of-O'Reilly (Foo) camp, illuminati of the feed-syndication community have been putting their heads together about this problem and looking for a scalable remedy.  It looks pretty bottom-up at the moment, and it is awesome what happens when some magnet draws these folks into one place.  This article provides valuable links to the connected parties and to places where the initiative is being discussed.  And you get to see Sam Ruby tell Dare Obasanjo to stop trolling.  The FeedMesh, uh, topic is posted on the Foo camp Kwiki, so it might take off in all unruly directions at once.
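Short of a push mesh, the immediately available easing is for readers to poll politely with HTTP conditional GETs, so that an unchanged feed costs a 304 response and no body.  A minimal sketch in Python; the feed URL is illustrative, and a real reader would also remember and send ETag values.

    # Sketch of a polite feed poll using an HTTP conditional GET.
    # If the feed has not changed, the server answers 304 and sends no body.
    import urllib.error
    import urllib.request

    FEED_URL = "http://example.com/atom.xml"   # illustrative URL
    last_modified = None                        # remembered from the previous poll

    request = urllib.request.Request(FEED_URL)
    if last_modified:
        request.add_header("If-Modified-Since", last_modified)

    try:
        with urllib.request.urlopen(request) as response:
            body = response.read()
            last_modified = response.headers.get("Last-Modified")
            print("Feed changed:", len(body), "bytes fetched")
    except urllib.error.HTTPError as e:
        if e.code == 304:
            print("Feed unchanged; nothing fetched.")
        else:
            raise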

Efficiently Finding Resources the Peer-to-Peer Way

ACM News Service: Cooperative Search Technology.  One important aspect of peer-to-peer networks is the ability to search for resources.  This can be of great importance to grid computations and overlays where computing resources of various kinds are shared.  The discovery and location process, and ways for it to work among small specialized groupings while scaling with the overall population of sites, are critical.  The UCLA algorithm (I wonder when it will get a better name than that) appears to be extremely network-friendly while offering these advantages.  A software library is being built for incorporation of the search algorithm into applications "in about one or two years."  The work is sponsored by NSF and DARPA and follows on earlier work at Stanford and Hewlett Packard in 2001.  The Kimberly Patch 2004-09-08 Technology Research News article provides a sketch of the method.  Although it seems to involve extensive contact with nodes, the method assures that the number of nodes touched in a search grows more slowly than the network itself, and models with 100-million nodes are considered.  The breakthrough is in having the process implemented locally, through messages passed among neighbors only.  That makes me curious about how the network is itself discovered, and for that one apparently needs to look at BruNet, a P2P network that operates by local and simple actions.  That's exactly what I am looking for in distributing Miser and other distributed-object systems.  The key technical reference is apparently "Scalable Percolation Search in Power Law Networks" on arXiv.
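I have not digested the percolation-search paper, but the flavor of "messages passed among neighbors only" can be suggested with a toy bounded walk over an adjacency list.  This is an illustration of locality, not the UCLA algorithm; the graph, the resource, and the time-to-live are made up.

    # Toy sketch of a neighbor-only search: each node knows only its neighbors,
    # and a query hops from neighbor to neighbor with a bounded time-to-live.
    # Illustrates locality; it is not the percolation-search algorithm.
    import random

    graph = {   # tiny illustrative adjacency list
        "A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D", "E"],
        "D": ["B", "C", "E"], "E": ["C", "D"],
    }
    resources = {"E": "the-file-we-want"}

    def local_search(start, wanted, ttl=8):
        node = start
        for _ in range(ttl):
            if resources.get(node) == wanted:
                return node
            node = random.choice(graph[node])   # ask a neighbor, chosen locally
        return None

    print(local_search("A", "the-file-we-want"))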

Semi-Centralized P2P

ACM News Service: Building Peer-to-Peer Applications.  This is another EU project to develop open-source solutions for important technologies.  This P2P Architect project supports important commercial activities.  I am looking for fully decentralized operation, but the fact that this model supports full collaborative editing of the same objects has my attention.  The 2004-08-20 IST Results feature article provides more information.  There is a link to the P2P Architect project and more detail on the participating organizations.  [The linked page was broken on my first visit, and it remains so now.  There seems to be a notable absence of the usual project materials that accompany completed IST Results features, and I find that rather strange, especially given the avowed open-source approach. -- dh:2004-09-11]

NOMAD Middleware for Roaming

ACM News Service: New Middleware Platform for Roaming Mobile Users.  NOMAD is a project of the Information Societies Technologies (IST) program being conducted in the European Union.  There are many of these projects beginning to mature under IST, and NOMAD is one of them for mobile-user discovery and collaboration. The 2004-08-13 IST Results feature article provides more background and links to the project on Integrated Networks for Seamless and Transparent Service Discovery.  Although location-aware end-points are featured, I hold onto this link because this seems like yet-another view on discovery that may be as useful for fixed (or NAT-ed) endpoints as location-aware ones.

P2P Risk Exposures

ACM News Service: P2P Drag on Nets Getting Worse.  I find it amazing that the commercial sides of P2P businesses sell caching systems to ISPs so that the P2P glut on networks is relieved.  Besides creating significant traffic (30 to 70%), P2P traffic poses a significant risk to intranets because of malware transmission and because proprietary materials may expose organizations to lawsuits.  Finally, the ease with which P2P provides anonymity and secrecy creates an exposure to fraud with the cooperation of insiders.  The Carolyn Duffy Marsan 2004-08-02 Network World Fusion article provides useful links along with discussion of the prevalent P2P technologies as well as countermeasures employed in businesses.  A related interview (audio) and link page focuses on BitTorrent, a current favorite.

Sharing Lightens the Download

ACM News Service: Sharing Lightens the Download.  This article looks at the latest torrent/swarm-based P2P arrangements as powerful for legitimate multi-media and software distribution.  The BBC is experimenting with the mechanism for downloading TV broadcasts.  The Chord effort is found appealing for useful distributed storage and the prospect of preservation via distribution.  [The Kieren McCarthy 2004-06-26 New Scientist article is apparently not available on-line.] In researching this lead, I found that there are a variety of resources on this and related topics, including the proceedings of the International Peer To Peer Systems (IPTPS) workshops, IPTPS02, IPTPS03 and IPTPS04.

2004-09-07

 

To Engineer is to Tinker?

Visual Studio Magazine - The Software Architect - The Software Practitioner Triad.  (disclaimer: This article is a year old and I have been unsuccessful in finding much clarifying discussion.)  I love Alan Cooper's books.  I respect his sensibilities and his insights about interaction design.  And I very much appreciate this:
"Today, Web designers are called programmers, programmers are called engineers, engineers are called architects, and architects never get called.  Not only are our titles mixed up, but our community of software practitioners is also deeply confused about the roles we play.  The confusion is even worse in the minds of the businesspeople who hire us and set our budgets and schedules."
I also nod knowingly over an assertion that programming is not the same as engineering.  That is, until I am told what the difference is. I'm here because Chris Sells recently chose to align with Cooper's appraisal in this manner:
"[There are] three different folks needed to design and build software:
  • Architect: responsible for determining who the user is, what he or she is trying to accomplish, and what behavior the software must exhibit to satisfy these human goals.
  • Engineer: technical problem solving via experimentation, not fettered with the demands of producing release code.
  • Programmer: producing a shippable product, consisting mostly of protective code that rarely--if ever--executes, but is dedicated to supporting obscure edge cases, platform idiosyncrasies, and error conditions.
I've done all three, but am happiest with architect and engineer. Where are you in your current job? Where do you want to be?"
Then commenters on that Chris Sells 2004 August 16 piece continue to embrace this nomenclature by placing themselves on this triadic grid.  The strongest alignment is in the June 26 piece by Phil Weber, in which this view of the engineer is positively glorified.  Guys, being an engineer does not mean not knowing how to deliver to a schedule, OK?  Meanwhile, I'm thinking "Alan, what have you done?"  Let's review the bidding.  In "The Craft of Programming," Cooper's previous column, there is a nice description of programming as craftsmanship:
"Programmers are craftsmen and craftswomen.  They are commonly thought of as--and frequently titled--engineers, but few working programmers 'engineer' things.  Most build software.  They craft it into existence."
It is, of course, not everything that programmers and, especially, professional software developers do.  But it is arguably different than engineering:
"The engineer was the educated, trained expert who designed the manufacturing processes: the 'engines' of industrialization.  The engineer was clearly the most highly skilled person in the industrial-age hierarchy.  But engineers don't actually build things.  They solve complex and demanding technical problems necessary to building things, but they leave the actual creation to others.  Engineers don't build bridges; ironworkers do.  Engineers don't build software; programmers do."
This is fine as far as it goes, and Cooper has a great deal more to say in honor of craftsmanship, using the example set by his electrician father.  The entire article is valuable for an appreciation of craftsmanship.  Ignoring the nudge about "actual creation," I can be satisfied with engineers as highly skilled at solving the problems necessary to building things.  How can we avoid the leap from here to untrammeled experimentation (what I call tinkering in my choice of title) as engineering?  Cooper provides further useful characterization:
"Engineers primarily devise manufacturing solutions and solve technical problems. They're rarely responsible for the actual construction of things. For example, a structural engineer devises solutions that allow a building to stand firm, but he or she lives in a world of paper and mathematical models and won't lift a finger to weld steel or pour concrete. It behooves the engineer to try numerous solutions, exploring dusty corners of the problem to find opportunities for improvement. It behooves the craftsman to stay on well-known ground and to avoid costly experimentation. The engineer works on paper prior to construction, and the craftsman builds the end product. The engineer throws away paper; the craftsman throws away time and money. Both are prized for devising creative solutions to difficult technical problems, but craftsmen are most highly prized for constructing sound artifacts quickly, efficiently, and expertly, with a minimum of waste and a maximum of predictability."
Programmers and erstwhile software engineers should study this closely to locate themselves in here and determine whether their practice of skill or craft measures up. I think the muddle and the false contrasts that can be taken away from these simple statements have to do with confusion of problem solving, experimentation (and the implication of costliness), design, and differences in scale.  Architects, engineers, and programmers/craftsmen solve problems.  All look for soundness, constructability, and deliverability, at different scales and in different levels of abstraction.  For example, consider the following triad:
  • Architect: Why? - addresses the setting and the purposive requirements for an architected artifact, with attention on the problem space and how a solution must address it
  • Engineer: What? - addresses the nature of the artifact and the construction process that will secure and maintain it with attention to valid fit of solution with the architecture
  • Programmer: How? - a problem-oriented activity in which the components of the artifact are crafted, integrated and verified
I am not wedded to these distinctions.  It is just-another-triad (I won't say hierarchy here) with familiar labels.  Also, anyone who possesses mastery in one of these non-exclusive domains must also appreciate the contributions of the others.  These practitioners must be able to talk to each other, consult with each other, and honor the different kinds of expertise being brought to bear.
If Cooper is not observing these characteristics in the development of software products, then to that degree not only is architecture absent, engineering is lacking as well.  To the degree that practitioners mischaracterize engineering, engineering has been absent from their lives.  Results in each of these domains involve problem solving and choice among alternatives to optimize for the situation at hand.  Reach and depth may differ.  Relationship to the world of the user may certainly differ.
And speaking of false contrasts, here's one cheap shot that doesn't separate the engineers from the programmers at all: "Engineering--real engineering, not programming--is problem solving divorced from the needs of actual people."  It's further down the page from the editor/interviewer remark "Reminds me of a saying I heard decades ago: 'In the course of every project, it eventually becomes necessary to shoot the engineers--and begin production.'"  I'm disappointed and saddened.
When my father labored in a foundry, he'd bring home talk about the engineers.  When he worked on the sales floor of a furniture store, he brought home talk about management (and customers).  When he did carpentry, he talked about electricians.  I want to come back to the recognition of "different folks needed to design and build software."  And if we are going to recognize software engineers, let's at least talk about software engineering.  I guess I'll leave it at that for now.
I am baffled to see, in other articles that I read, so much conflation of programming and computer science, something that even sucks software engineering under the same tiny umbrella, sometimes in the same breath.  It hadn't dawned on me that, along with that peculiar reduction to "I code, therefore I am" and the idea that the code is the product, there is now a full rack of baseless contrasts by which engineer and architect are dispensed with.  Maybe, in all of the articles I am noticing, it is just an effort to identify and emphasize the importance of what we know the most about, our own rôle.  It looks like we lost something somewhere along the glorious stampede to the present state of immaturity.  I think it is going to cost us.
Reposted 2004-09-11T05:17Z  This page is in UTF-8, but sometimes the browser has to be told that.  I made it easier for the page to be presentable whether or not the browser figures it out on its own.  There's a lesson in system coherence here, but we'll talk about that another time.  I also missed one place where Unicode dash symbols occurred, so I used this occasion to nip those, wordsmith a little, and have the Atom feed regenerated correctly.

2004-09-06

 

A Feed Too Far

I just corrected a problem of my own making around the recurring disappearance of "Why Learn Assembly Language?"  Unfortunately, the cure to that problem caused my Atom site feed to be updated.  If you subscribe to the feed, you'll likely see that your feed reader presented several articles as updated, not merely the one I touched.  If you look closely, you'll see that the consistent difference is that the links to the actual posts are messed up in the "updated" feed entries.  This is the problem discussed under "Honey, Where'd You Put the Bloggo?"  Not every feed reader need do this, but don't be surprised if yours does.  When the problem is eventually corrected, you'll see apparent updates of several entries once again.  This might not catch them all, because the feed is only regenerated for a number of recent posts, something like the current population of the blog's default page.
I have the following prophylaxis to recommend.  If your feed reader shows you several entries for the same article, some marked newer than others, keep the ones whose links are specific URLs for the article and discard the ones that aren't.  The non-specific URLs look something like http://orcmid.com/BlunderDome/clueless/2004_08_29_clu-chive.asp instead of the specific link http://orcmid.com/BlunderDome/clueless/2004/09/security-is-programming-problem.asp.  Keep the feed entries with specific links (assuming you want to keep the entry in the first place).
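If your reader lets you at the entries, the rule can even be applied mechanically: keep the entries whose links are post-specific and drop the ones that point at an archive page.  A small sketch in Python using the two URL shapes above; the archive-page test is just the naming convention of this blog, not a general rule.

    # Sketch: keep feed entries whose links are post-specific permalinks and
    # drop the ones that point at an archive page (the "*-chive.asp" form here).
    entries = [
        {"title": "example entry (permalink)",
         "link": "http://orcmid.com/BlunderDome/clueless/2004/09/security-is-programming-problem.asp"},
        {"title": "example entry (archive page)",
         "link": "http://orcmid.com/BlunderDome/clueless/2004_08_29_clu-chive.asp"},
    ]

    def is_permalink(url):
        return not url.endswith("-chive.asp")

    keepers = [e for e in entries if is_permalink(e["link"])]
    for e in keepers:
        print(e["title"], "->", e["link"])

We now return you to regularly-scheduled programming.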
 