2004-05-23

Better Humans?

ACM News Service: The Age of Purposeful Machines. I am not that thrilled by the existence of a publication titled Betterhumans that seems more interested in mechanical substitutes: "Truly conscious machines may be the stuff of fantasy, but researchers worldwide are making significant strides in the creation of machines that exhibit purposeful behavior thanks to breakthroughs such as teleo-reactive programs (TRPs) designed to set up behavioral rules in changing environments."

The blurb points to work by Nils Nilsson on goal-oriented procedures built from hierarchies of ends-means-driven techniques. My questions concern how rigidly goal conditions must be specified and how one can incorporate new information in a dynamic way. These explorations may help establish the capabilities and limitations of autonomic computing initiatives, though, and I find that interesting enough on its own.

Betterhumans > The Age of Purposeful Machines. This is Patrick Bailey's 2004-05-18 Betterhumans article: "It's evident that we can create machines that behave in purposeful ways. At labs around the world, researchers are taking big steps in this direction."

What disturbs me about the article is the kind of thing taken as evidence for intelligence and purposive behavior, but which is to me simply misplaced anthropomorphism: "Similarly, it's not always true that computer programs react in ways that were entirely intended by the programmers. We see this in some of the seemingly random behavior of programs we use on a daily basis at home or at work. These errors happen when programs seem to want to do something rather than nothing with information they don't know how to interpret."
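For readers unfamiliar with Nilsson's formulation, a teleo-reactive program is essentially an ordered list of condition-action rules that is continuously re-evaluated against the current state: the action of the first rule whose condition holds is the one that runs. Here is a minimal sketch of that idea; the toy number-line world and all the predicate and action names are my own illustrative assumptions, not anything from the article.

```python
def make_trp(rules):
    """rules: an ordered list of (condition, action) pairs.
    Returns a step function that, given a state, runs the action of the
    FIRST rule whose condition is true in that state -- the defining
    behavior of a teleo-reactive program."""
    def step(state):
        for condition, action in rules:
            if condition(state):
                return action(state)
        raise RuntimeError("no rule applies")  # TRPs normally end with a catch-all
    return step

# Toy world (assumed for illustration): move a position toward a goal
# on a number line, one unit at a time.
at_goal = lambda s: s["pos"] == s["goal"]
step_toward = lambda s: {**s, "pos": s["pos"] + (1 if s["pos"] < s["goal"] else -1)}
done = lambda s: s  # goal achieved: do nothing

trp = make_trp([
    (at_goal, done),                 # goal condition listed first
    (lambda s: True, step_toward),   # catch-all: reduce distance to goal
])

# The "continuous" sense-act loop, bounded here for the example.
state = {"pos": 0, "goal": 3}
for _ in range(10):
    state = trp(state)
print(state["pos"])  # reaches the goal: 3
```

Note how the goal condition sits at the top of the rule list, so once it is satisfied the program stops acting; this ordering is what gives TRPs their goal-directed character without any appeal to volition.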
My wife notices my tendency to speak of computer behavior as volitional and conscious (e.g., what a program "knows"), and I can see from the excesses here why it is important to find a more powerful and less misdirecting way of speaking about computational behavior and what it evokes for us. I will have to see whether there is some circumspect way to discuss agency and attention in the context of computational behavior without implying any commitment to computational consciousness.
Comments:
The article states nothing in support of conscious machines...in fact, quite the opposite. I think it's fair to say that we can have purposeful behavior without any guarantees about consciousness. We do this all the time with human beings. As A.J. Ayer states (rightly so, I think), when we ask whether or not something is conscious, we're simply asking whether or not it behaves in certain expected ways. The same is true of our measure of intelligence. The measure of intelligence is a correct action to any given request. There's nothing metaphysical or mysterious to explain above that. We have no access into the minds of other human beings, yet we attribute consciousness to them even in the absence of such access. I don't think purpose and consciousness are married quite so strongly as you would have them be.
Patrick Bailey
Thanks Patrick. In referring to my wife's objection to something she sees me do, I am putting in concerns about attribution of consciousness that are not in response to anything said directly in the article.
The article does talk about computer programs that "seem to want" and "knowing how." Attribution of volition and knowing concerns me, whether separable from consciousness or not. I think the Ayer analysis and related inquiry is quite valuable. I don't think that is the level of the article, though, and I think there is more to be responsible for when using casual language that will not be read with the rigorous care that might apply in a different context. Your comment has me thinking about the social context as well. When a computer program misbehaves, who is it that we hold to account? That is an important practical distinction that should not be lost in a blanket reduction of purpose/intent to observable behavior. I had a coach who would say to me, "If you want to know what your intentions are, look at your results." This is not a conversation one would have with a programmed computer. That seems rather important here. I shall continue eliminating anthropomorphism from my speaking about computation and computers.
I'm being too circumspect. When I said just above that I am putting in concerns about attribution of consciousness that are not in response to anything said directly in the article, I did not emphasize enough that my focus suggests something about Patrick's article that is not said there. I apologize.
Greetings, again! I've been meaning to respond to your comments, but haven't had the time. I completely agree with your assessment about the need for clarity and careful use of language when addressing computing issues and concepts. Unfortunately, I find that often ideas become "dumbed down" for numerous reasons (usually because of the preconceived notion of a "target audience") and we sometimes lose something in translation into "layman's terms."
I like the comment you included about your coach, but I wonder if he ever really took time to stop, think about, and appreciate all of the implications of his statement. Clearly, I think, we can have results that go beyond or are in opposition to our intentions. I think our results only meet our intentions when a specified set of conditions is met. At least you took the time to consider the issues in the article and comment on them, which I greatly appreciate. Discussing the issues is the best way for us to work through the rough spots in our concepts. :-) Patrick Bailey