[iDC] if AI were a reality, which, currently, it is not
David Golumbia
dg6n at unix.mail.virginia.edu
Tue Mar 18 13:37:42 UTC 2008
On Mon, 17 Mar 2008, Paul Prueitt wrote:
> I am thankful for this note, as it clarifies some things.
>
> The issue that we seem to disagree over is seen in the word "currently":
>
>> The only way that would be possible is if AI were a reality, which,
>> currently, it is not.
>
> The argument that Penrose, Robert Rosen, and others make is that,
> just as life cannot be created from abstractions alone, abstractions
> may be only a part of intelligence. One needs actual substance, and
> the silicon computing paradigm does not have that substance. In
> fact, Pribram, I, and others regard the necessary substance as
> something that deals with emergence, locality, and non-locality in
> the way seen in the emergence of function in biological systems.
>
> There is no future to AI. What there is, rather, is a recognition of
> the things you are saying about the nature of the problem in data
> interoperability and the aggregation of information into a
> synthesized presentation (untouched by any centralized control).
This is spot-on, and to the degree that we have seen RDF/Semantic Web
advocates here fall back on an "AI-to-come," we see deep, structural,
inherent problems in the ideas themselves. Not only do we not *need*
"machine-readable" semantics; the very notion is incoherent.
The one emendation I'd make here is that your point can be made in
simpler language, following the work of writers like Hilary Putnam,
Hubert Dreyfus, John Haugeland, and even Terry Winograd: the very idea
of "Strong AI" (or what Haugeland calls GOFAI) is incoherent because
it rests on a highly idealized notion of just what "intelligence" is
in the first place.
The notions of intelligence found almost uniformly in the AI literature
are much more ideological than their authors realize: they are about
denying the role of the body, the unconscious, and society as a whole in
the abstraction we call intelligence. To quote Putnam, "meanings ain't in
the head." Also see Ryle, Sellars, and Rorty on the mind/brain.
> The nature of the Church in which you may hold membership is
> illustrated by the statement:
>
>> We do, however, have the ability to program computers using
>> whatever algorithm we feel is most meaningful and most able to send
>> us the information we want. Perhaps you are mad because the other
>> people are not doing it for you in the way you most desire. In
>> which case I say learn to program.
>
> First, I am not mad, not upset, and not insane. I am insistent about
> a specific point of what I and others regard as science. I am also
> insistent that the programmer class should not be forced on all of
> us. Suppose we take your statement at face value and evolve a
> culture where the right to vote depends on one's programming
> ability?
>
> Why not let them eat cake?
Right on, a second time, including the rhetoric you invoke here.
Members of this Church argue (and apparently believe) that they are
interested in "advancing" society; in fact, they appear to us few
dissenters to be interested in *changing* society and people to meet
their own ideologically structured ideals. We need many more people
who understand what this Church's beliefs are and who nevertheless
dissent: a difficult group to locate, since it is precisely "learning
programming" that also seems to have the effect of teaching "do not
doubt, do not question."
--
David Golumbia
Assistant Professor
Media Studies, English, and Linguistics
University of Virginia