Blunder Dome Sighting  
 
 
 

Hangout for experimental confirmation and demonstration of software, computing, and networking. The exercises don't always work out. The professor is a bumbler and the laboratory assistant is a skanky dufus.






2005-08-02

 

Assessing Open Source for Corporate Usability

O'Reilly Radar > Open Source Business Readiness Ratings.  Tim O’Reilly’s Radar has homed in on the Business Readiness Rating, “A proposed Open Standard to Facilitate Assessment and Adoption of Open Source Software.”  I want to pay attention to this, and indeed, I have to, because the sponsors are proposing to provide “a trusted, unbiased source for determining whether the open source software they are considering is mature enough to adopt [my emphasis].”  There are white papers, some samples, and a way to get involved.

I am interested in assessments generally because they will touch on trustworthiness in products, especially in the overlap with usability, security, deployment, and support/maintainability.  Open-source development and distribution models require something different from producer’s risk, and I am keen to see what can provide a comparable basis for the confidence in suppliers that commercial adopters require.  This approach would also seem to give developers a way to build the trustworthiness of, and confidence in, their products.

I am interested in this approach specifically for how the authority of assessments is developed and aggregated for particular software products.  I want to test it for transparency, provenance of assessments, and verification/confirmation of assessments.  Most of all, I want to understand the approach to aggregation.  I don’t have an answer here: it is an area where I have questions, and I want to see what comes out of BRR.  I also don’t understand why we would deal any differently with closed-source offerings, and I will wonder about that as well.
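Since the BRR materials aren’t summarized here, the following is only a minimal sketch of what aggregation could mean, assuming a simple weighted average of per-category ratings; the category names, weights, and 1–5 scale are invented for illustration, and the actual BRR scheme may combine categories differently.

    # Illustrative only: a simple weighted average of per-category ratings
    # on a 1-5 scale. This is an assumed model, not the official BRR formula.
    def aggregate_rating(ratings, weights):
        """Combine per-category ratings into a single weighted score."""
        total_weight = sum(weights[c] for c in ratings)
        return sum(ratings[c] * weights[c] for c in ratings) / total_weight

    # Hypothetical categories and weights an adopter might choose.
    weights = {"usability": 0.30, "security": 0.30, "support": 0.25, "documentation": 0.15}
    ratings = {"usability": 4, "security": 3, "support": 4, "documentation": 2}

    print(round(aggregate_rating(ratings, weights), 2))  # 3.4

The interesting questions are not in the arithmetic but in the inputs: who submits the ratings, how their provenance is recorded, and how conflicting assessments of the same product are reconciled.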

I recommend downloading the white paper (PDF file) and the sample form (Microsoft Excel file [;<), registering with the site, and then joining the forum for discussion.  (The forum membership rules are something.)  I’ll be there.

 