[iDC] Why Magic Stinks, NOT
Natalie Jeremijenko
njeremijenko at ucsd.edu
Tue Oct 10 09:18:35 EDT 2006
Bruce, and others.
Jef's essay promotes skepticism about technology, while promoting
magical thinking about the tests that can be used to resolve any
technological uncertainty. It is far from true that we lack tests
when it comes to computing technology and design--certainly if the
ACM digital library is any indication, most of the x million
scholarly articles in the field of HCI provide measures and tests to
evaluate various technologies.
I have to counter Jef's examples (audio cables and fishing) by way of
introducing myself to the list. It is the case that the materials in
audio cables perform differently in different circumstances and at
different lengths, and wear differently. The materials science of
characterizing temperature dependence and other material parameters
is simply not addressed in the experiment with which he challenged
the cable company. The cheapest cable might be just fine in one
circumstance, e.g. in an air-conditioned lab in Seattle where you are
testing college students with no vested interest, skill or motivation
to hear differences in music that they probably don't like or care
about, whereas the same test, with the same music, done in muggy hot
Brisbane with uni students would conceivably have different results.
Test results can, and often do, mean very little--such is the case
with the tests he proposes.
So while we live in a material world and many of the material
parameters lend themselves to quantification (even the simplest
bounded parameter pair-wise testing has problems, namely sequencing)
we also live in a social and biological one, that is, people have
different listening skills, frequency envelopes, motivation and
situations. Psychoacoustics is a different animal from Materials
Science, but it provides some language to describe the tremendous
variability in listening skills across people, circumstances, and
sounds. But it doesn't explain the particularity of individuals, and
some people do have expertise in this area, unlike pizza-bribed
college students participating in some "random experiment", or
participants in a web survey for a chance at a free toaster. The
artist Paul DeMarinis and the designer/engineer Mark Levinson are a
couple of these people, whose evaluation I trust blindly in these
matters. Neither of the two tests systematically (or rather, their
testing has spanned so many years, circumstances, and informal and
formal variations that it does not resemble the testing promoted in
Jef's post); it has become judgment. Levinson's expertise in
producing listening environments and specifying particular cables is
believable, no matter the number of college-student lab rats you had
flip a switch for a slice of pizza or $15. Even a statistically
significant sample of students does not stand in for a person or case
of demonstrable skill and expertise, nor does it have the power to
represent the sheer diversity of human perceptual ability. Jef's one-
parameter comparisons in the realm of this simple technology
question--which cable is better?--just do not generalize or hold
true across people and situations, despite his blunt persuasiveness.
This is also the case with other complex technological questions
involved in Human Computer Interaction.
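For concreteness, the kind of coin-flip, one-parameter comparison at
issue can be sketched in a few lines. This is an illustration only:
the cable names and the listener function are my assumptions, not
anyone's actual protocol. Note what such a sketch does and does not
establish: even a statistically significant score over a pooled
sample says nothing about a particular listener's skill, motivation,
or circumstance.

```python
import random
from math import comb

def binomial_p_value(correct, trials, p=0.5):
    """One-sided exact p-value: the chance of getting at least
    `correct` answers right out of `trials` by guessing alone."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(correct, trials + 1))

def run_blind_trials(listener, trials=20, seed=1):
    """Coin-flip which cable is connected on each trial and count
    how often the listener names it correctly."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        actual = rng.choice(["cheap", "expensive"])  # the coin flip
        if listener(actual) == actual:
            correct += 1
    return correct

# A listener who hears no difference is just guessing: roughly 50%
# correct, and the p-value will generally not reach significance.
guess_rng = random.Random(2)
guesser = lambda actual: guess_rng.choice(["cheap", "expensive"])

score = run_blind_trials(guesser)
print(score, round(binomial_p_value(score, 20), 3))
```

With 20 trials, 16 or more correct identifications would be
significant at the 0.05 level; a chance-level score (around 10) would
not be.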
Jef's other example, fishing, also dismisses the possibility of
expertise and the variability of both fishermen and fish (in other
words, the universalization of the user, and the "user-centered
design" trope). I am currently in the business of fish lure design,
and to me
his rock fishing example demonstrates only the "mathematical
certainty" of his committed ignorance of the sophisticated visual and
audio communication that fish deploy. His claim to me has no truth
value--it is magical thinking to believe that you can make all
complexity disappear.
And yes, fish talk, well, chatter. The first conference on fish
chatter was held only a couple of years ago, precisely because noise
reduction--in part due to the high-end cables involved--made it
possible to record the utterances of fish for the first time in
contemporary science. Jef's rule of thumb--to maximize the time his
hook is in the water--does not account for the variability between
fishermen let alone where fish spend their time, and there are ways
of understanding and representing this. His practice works for him
because he wants not to know anything about the differing sensing
talents and ecological patterns of fish, the marvels of interspecies
communication. Whereas it doesn’t work for others, myself included,
who are fascinated by what fish can communicate with a glint and a
glimmer, how they make sense of an environment, how they learn what
is edible or not through their fishlifetime of experimenting and
imitating, and whether and where and under what circumstances they
will respond to a lure (I make the edible kind of lure, with
chelating agents embedded instead of hooks). There is no
mathematical certainty to probabilistic phenomena, nor to
investigating fish behavior with catch counts. Jef's dismissal of all
others as "fiddling", and his certainty in his own method, appear to
me not at all reasonable, nor magical, but bordering on delusional. I
would be interested in a test that could evaluate
people who enjoy the sport of killing animals for their interest and
knowledge of the organism they will kill. Those who hunt and who have
learned about the organism and the ecosystem it inhabits are more
believable (and human) to me. By contrast, according to Jef's rule,
hunting should be a stochastic affair. When applied to the technology
of guns rather than hooks, shooting off as many bullets as possible
in a given window of time would comply with his rule. Needless to
say, I would
have more than a few hunters and environmentalists agreeing that this
is not advisable. Extending his fishing technique to using, designing
or understanding human computer interaction, might likewise be unwise.
In short, I just do not "believe" Jef's tests, nor his results, nor
the blind faith in double-blind studies he champions, which likewise
do not hold across circumstances and populations. (Andy Lakoff's
paper on the placebo response in double-blind FDA studies can make
this argument.) And were Jef around I would ask him to tone down his
bombastic absolutism by understanding the problem of the colonial
classificatory habit of seeing indigenous and "other" knowledge as
stinking magic and superstition. In other words, systems to
denigrate, ignore and devalue others' technological practices are
just that, and waste the many years of empirical testing, material
experimentation, functioning practices and knowledge that informs
them. It is just not the case that people or "magic" are stupid, nor
stinky.
Magical thinking acknowledges that we experience, and deal with, much
more complexity than we can test or coherently and explicitly
understand, that we always and already operate in a context of
partial information, with tacit information, that we rely on the
expertise and judgment of others (aka social resources, including
marketing information) as primary and use all of these resources to
make sense of digital and nondigital information.
The delusion that one can be all seeing, all knowing, and can
represent everything with certainty and universality with simple
tests and web-based customer feedback surveys, is pervasive and you
can find many other versions in the ACM digital library. Nagel
describes this as "the view from nowhere", which is, of course, blind
to, among other things, one's own situation. Magical thinking has the
advantage that it does not over-represent its "mathematical
certainty" in the way that Jef does and the field of Human Computer
Interaction is inclined to.
That said, the question I have for this iDC list and upcoming
conference is the flip side of the magic debate: what can we
represent quantitatively and with any generalizability? What kind of
truth claims, if any, can we make in the realm of complex
technosocial phenomena to promote the ways that situated technologies
support sense-making in space (with digital and other information)?
Because magic only exists in contrast to a belief in an absolute all-
knowing rationality, which is often approximated (ludicrously) as
quantitative or mathematical, my focus is often on the misplaced
belief and certainty one finds in quantitative representations of
human-computer interaction. Yet, for the conference I am going to
present some quantitative (and qualitative) analysis of 160 people in
68 small groups using distinct tangible situated interfaces, and I am
going to claim some generalizability.
I admit I have been very uncertain about doing this--apologies for
the delay in getting the abstract and intro to y'all--because I
understand that this audience (on list and at the upcoming
conference) is largely, and understandably, uninterested in, even
hostile to, quantitative descriptions of human computer interaction.
I recognize the lineage of Situated Technologies as coming from those
trained in symbolic interactionism: Lucy Suchman (Plans and Situated
Actions); Jean Lave (who thoroughly disputed the cognitivists,
demonstrating the diverse ways we think with things/stuff and not
just abstractions); and Joan Fujimura (whose more institutional
account shows how technologies and techniques provide grey boxes,
rather than black boxes, to translate between different specialist
languages). These (and other)
science studies/STS scholars helped me understand “situatedness” with
detailed ethnographic descriptions and multiyear case studies, and
how to dispute the simplistic claims of quantitative representation,
and how to provide believable accounts of the ways we collectively
use information resources to make sense, inform action and organize
institutions. These are tremendously important scholars--whom I would
suggest be added to the reading list. Some go so far as to say that
Suchman's book (after 20 years, the 2nd edition is due in 2007,
reissued as "Human-Machine Reconfigurations: Plans and Situated
Actions", 2nd expanded edition, New York and Cambridge UK: Cambridge
University Press) single-handedly brought down the so-called AI
project (artificial intelligence/computation as intelligence), its
domination of computational research, and its concomitant unsituated,
general claims about human intelligence and sensemaking.
Although some credit is also due to the incredible computer scientist
Jack Schwartz (http://www.cs.nyu.edu/cs/faculty/schwartz/), who did
much of the dirty work. He became renowned in the computer science
world as Jack the Slasher for the number of huge multiyear NSF grants
he ended or slashed--funding for projects promulgating inflated
claims of how neural nets, computational linguistics and other
strategies of the AI project represented human intelligence. This
reality check has not slowed the fantasy that human intelligence and
social competence can be and should be "downloaded" into a computer,
but it has demanded understanding computation in situ. It is an
ongoing issue, and situated approaches now provide the highest
standards of evidence in countering mainstream technology development
and dominant funding sources dedicated to explicit militarization and
a culture of fear and limited civil liberties.
So, knowing this lineage, why would I then want to try to find some
summative abstraction, some quantitative measure to represent diverse
people using diverse computational interfaces? The answer is grey, as
in Fujimura's sense. Having deployed a number of computational
devices that couple physical and social situations with digital
representations (from kiddie rides that perform economic data to
packs of dogs whose movements display data)--devices I would describe
as situated technology--I have experienced some real difficulty in
eliciting understanding and interest from my computer science
colleagues, or from the VCs and business interests that also shape
technology. These technological forces defer to quantitative
representations as a lingua franca. Live Wire (aka the Dangling
String), an ethernet traffic indicator, was a project built in and
for the computer science labs of Xerox PARC in an effort to elaborate
Mark Weiser's Ubiquitous Computing (UbiComp) vision (circa '94). The
project was called an "embarrassment" by the lab director who
succeeded Weiser, Craig Mudge, and he ordered it
removed from the lab. Some voice boxes built around the same time
exploring how to place digital sound files in particular spaces, were
evaluated and dismissed by the same person as "trivial" work. At
Stanford, the economist of technological innovation Nathan Rosenberg,
likewise suggested my concerns about the tropes of computation-based
innovation were misguided, and the alternatives were “silly”. Later,
at Yale, the Dean of Faculty of Engineering, described my work and
projects as "fuzzy, social science, tree-hugging sort of
engineering". This was meant as a negative, coming from Bromley (the
author of Star Wars and former science advisor to Bush senior).
Although I am rather proud of it, this description was intended to
suggest that this particular junior faculty member was not for the
halls of his departments. Moreover, I have had countless papers
rejected for "lack of verification" or "no quantitative evidence",
all of which has motivated my search for a quantitative measure that
was at least feasible, if defensive, that could perhaps lend some
authority and generalizability to the design strategies in which I
was/remain interested.
Like others who spend largish amounts of time building devices, I
have held a build-it-and-they-will-come belief, that is, by
developing concrete functioning alternatives (demos) of computational
interfaces one will be able to address the multiple expert
communities involved in shaping the technological future. Experience
has disabused me of this idea; augmentation is required.
I look forward to a brief presentation to y'all (or "youse",
pronounced "use" in Australian) on the Structures of Participation
(strop) approach I have been crafting, and an associated measure for
summative comparison on interaction with and around tangible situated
technologies. If it is absurd, I trust this community will tell me so.
I must say that I have greatly enjoyed the discussions on the list
since its inception--as a proud and dedicated lurker who has spared
you other over-long, over-situated responses.
nj
On Oct 8, 2006, at 8:36 PM, Bruce Sterling wrote:
> *Jef's dead now, but I'm sure his mana is smiling on the idc list
> and guiding
> this discussion from beyond the grave.
>
> Bruce Sterling
>
>
>
> http://www.acmqueue.com/modules.php?name=Content&pa=showpage&pid=98
>
> Silicon Superstitions
> ACM Queue vol. 1, no. 9 - December/January 2003-2004
> by Jef Raskin, Consultant
>
>
> When we don't understand a process, we fall into magical thinking
> about results.
>
> We live in a technological age. Even most individuals on this
> planet who do not have TV or cellular telephones know about such
> gadgets of technology. They are artifacts made by us and for us.
> You'd think, therefore, that it would be part of our common
> heritage to understand them. Their insides are open to inspection,
> their designers generally understand the principles behind them,
> and it is possible to communicate this knowledge—even though the
> "theory of operation" sections of manuals, once prevalent, seem no
> longer to be included. Perhaps that's not surprising considering
> that manuals themselves are disappearing, leaving behind glowing
> Help screens that too often are just reference material for the
> cognoscenti rather than guides for the perplexed.
>
> This loss of information is unfortunate, as any activity involving
> the exact same actions can have different results—that is, wherever
> there's "random reinforcement" (as the psychologists say) is fallow
> ground in which superstitions rapidly grow. Fishing is a good
> example. When out angling for rock fish, you generally use the same
> lure as everybody else. There is not much technique to it, so the
> number of fish you catch is proportional to the time your lure is
> in the water. Those who spend time fiddling with the equipment
> beforehand catch fewer fish. It's a mathematical certainty. I
> choose my equipment with one aim in mind: Eliminate hassle. So
> while my fishing companions use fancy reels and fight the
> occasional tangle, I use the closed-cap kind you give to youngsters
> because they seldom foul. On every trip I have fished to the limit
> as fast or faster than anybody else has on the boat. They don't
> laugh at my "primitive" equipment anymore, but they do ask me if
> there's some special stuff I rub onto my lures to get the fish to
> bite or if I have some other "secret." They don't believe the true
> explanation, which I am happy to share. It's too simple, and
> there's no "secret" stuff or device behind my success.
>
> In fact, people love mysteries and myths so much that they create
> them when an explanation seems too simple or straightforward. "Why
> is Windows so hard to use?" I am asked.
>
> "Because it was designed badly in the first place and has grown by
> repeatedly being patched and adjusted rather than being developed
> from the ground up."
>
> "But," say the inquisitive, "there must be more to it," thinking
> that some deep problems inherent to computers force the outward
> complexity. The only forces involved are what Microsoft mistakenly
> thinks the market wants—and inertia.
>
> In particular, superstitions grow rampant when testing is
> subjective, difficult, and (usually) not performed at all. There is
> a purely magical belief in the idea that you can hear the
> difference between different brands of audio cables, for example.
> You can buy a simple one-meter audio cable with gold-plated RCA
> connectors at both ends for a few bucks, or you can buy one with
> "time-correct windings" that the manufacturer claims will "provide
> accurate phase and amplitude signal response for full, natural
> music reproduction." Price? $40. Or, if you are especially
> insecure, purchase a one-meter cable that has "3-Way Bandwidth
> Balanced construction for smoother, more accurate sound" for a mere
> $100 from Monster Cable (http://www.monstercable.com).
>
> I've had the fun of testing if people could tell the difference—
> they couldn't. At audible frequencies small differences in
> capacitance, inductance, and resistance in a cable will make no
> audible difference, and there are no significant differences in the
> pertinent electrical parameters among popular brands. One ad, also
> from Monster Cable, says, "Choosing the right cables can be a
> daunting task" (especially if you read the ad copy) and it explains
> that "Underbuying is all too common." This last claim is true, as
> far as the marketing department is concerned.
>
> I e-mailed Monster Cable and challenged the company to conduct a
> simple test with its choice of equipment and listeners. My proposed
> setup was simple: a CD player, an audio cable to a power amplifier,
> and a set of speakers. All I would do is change the cables between
> the CD player and the power amplifier, flipping a coin to determine
> which cable I'd attach for the next test. All the listeners had to
> do was to identify which was the inexpensive Radio Shack cable and
> which was the Monster cable. I would videotape the experiment so
> that a viewer could see what cable I was using and hear what the
> listener(s) said.
>
> We had a friendly exchange of e-mails, but when I proposed this
> experiment, I got no further replies. It seems to me that if there
> were a real difference, the company had nothing to fear.
>
> All testimonials and most magazine reviews are based on situations
> in which the reviewer knew what audio equipment was being used.
> Owners and magazine reviewers have a vested interest; the former
> needs to justify the money spent, the latter needs to preserve ad
> revenue.
>
> One claim that is obviously false without requiring testing
> involves weighted rims that are sold for audio CDs. The makers
> claim that the added mass will help the CD spin at an unvarying
> rate. This is true. People who know a bit of physics are aware that
> a greater mass is accelerated less by a given force, so any
> disturbing force will have less effect on the rate of spin of a
> heavier disk. The makers also claim that this will make the CD
> sound better with less "wow" or "flutter," which on tape recordings
> or vinyl records was the result of uneven motion of the recording
> medium. The claim for better sound is false and relies on the
> ignorance of owners of CD players. Ignorance is superstition's guide.
>
> What the suckers who purchase these rims don't know is that the CD
> player reads ahead of where it is playing and stores the musical
> data in semiconductor memory, which acts as a buffer. The
> information in memory is clocked out by an unvarying crystal
> oscillator. Any unevenness in the speed of rotation of the CD (so
> long as it is sending data to the buffer faster than it's being
> played) is simply irrelevant to the sound. In fact, this was one of
> the points of genius in the design of the CD player, making the
> quality of sound independent of the mechanical quality of the
> rotation of the media. With the introduction of CDs, flutter and
> wow instantly vanished to inaudible levels. Weighted rims are
> simply irrelevant.
>
> When I was a graduate student I did the simplest possible
> experiment. I placed a pair of amplifiers on a table: one fancy and
> expensive, and the other plain and cheap. Both had wires that ran
> to a switch box. The switch was clearly labeled as to which amp
> corresponded to which position.
>
> Subjects were allowed as much time as they wanted; they operated
> the switch themselves, and all they had to do was to report in
> which position of the switch the system sounded better. All but a
> few reported that they could tell the difference, and almost all
> preferred the more expensive unit. One person said that as far as
> he was concerned, the switch "wasn't doing anything at all." That
> person was right: I was using only one amplifier and the switch was
> not connected to anything. The results were statistically
> significant, and showed that people can fool themselves with
> alarming ease.
>
> Computer systems exhibit all the behaviors best suited to create
> superstitious responses. You will try something, it won't work, so
> you try it again—the exact same way—and this time it works, or not.
> That's random reinforcement. The effectiveness of many programming
> and management practices thus is not measurable.
>
> Most of the principles of "extreme programming," for example, seem
> reasonable to me, and I was using many of them long before they had
> acquired their present absurd name. The people who promulgate the
> idea, however, are also those who created the paradigm. Most
> reported results aren't even single-blind, much less double-blind.
> We rarely understand, in any detail, the processes going on behind
> the tasks we do with computers. We're using megabytes of code
> written by others, code that is indifferently documented and
> inadequately tested, and which is being used in ways and in
> combinations unforeseen by its creators.
>
> No wonder we tend to act as if computers are run by magic. Many of
> us (including me) use the exact sequence of operations for a task
> because it worked once and we don't dare to vary it (even when
> somebody suggests a different method). The now obsolescent SCSI
> (small computer system interface) bus was that way, too: Some
> configurations worked, whereas others that seemed to obey the rules
> on cable length, termination, and device addresses did not. Once we
> had a setup working, we wouldn't change it; it was as if we had
> achieved some heavenly arrangement.
>
> I invite readers to share examples of superstitious behavior in the
> technological world with me. Meanwhile, be a skeptic: Ask yourself
> if what you're doing is based on fact, on observation, on a sound
> footing, or if there is something dodgy about it—if there's a touch
> of superstition in your interaction with technology.
>
>
>
> _______________________________________________
> iDC -- mailing list of the Institute for Distributed Creativity
> (distributedcreativity.org)
> iDC at bbs.thing.net
> http://mailman.thing.net/cgi-bin/mailman/listinfo/idc
>
> List Archive:
> http://mailman.thing.net/pipermail/idc/