[iDC] Can DIY education be crowdsourced?
jippolito at maine.edu
Tue Sep 13 20:01:59 UTC 2011
On Sep 6, 2011, at 4:20 PM, Anya Kamenetz wrote:
> When you're talking about learning & scholarship, as opposed to Amazon reviews, you're talking about a community that extends beyond any particular peer group on any particular platform. Academic disciplines are global in scale and of relevance to humanity writ large (if they're not, then they deserve to wither and die). Therefore there's a very strong existing organic reputation-based system for professional scholars: citation and peer review. It's not internal to any one organization, though it is internal to each discipline.
> Here's an example, via Stian Haklev on Google Reader, of a couple of different existing systems for representing the "score" of a particular academic based on their citations:
> So the question would be, to what extent is it feasible to represent a similar type of score, based on references to their previous statements, for amateur scholars? That would be an interesting example of an incentive that's both internal and external.
I'm glad to hear your experience of peer review has been more global and relevant to humanity at large than Amazon reviews. I've had pretty much the opposite experience.
In my field, artists and theorists often complain about wanting to break out of the "media art ghetto," but they insist on posting critical commentary to insider listservs and publishing articles in peer-reviewed subdisciplinary journals whose readership maxes out at two digits. A few years ago I decided instead to take it upon myself to write Amazon reviews of recent MIT Press titles as a sort of public service for the field. There's no question in my mind which audience is bigger or more interdisciplinary.
As for a "score" for amateur scholars, I believe this does exist in particular networks: Slashdot users have karma, ThoughtMesh authors have credibility, and so on. Unlike Google Scholar's Hirsch index, however, you *cannot* look up a particular scholar's Slashdot or ThoughtMesh rating--it is hidden in the MySQL tables that store data for these sites, and only revealed indirectly by the Perl and PHP scripts that decide which comments should be displayed most prominently.
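The mechanics might be sketched like this (a hypothetical Python analogue; the actual sites use Perl/PHP against MySQL, and every name, score, and threshold below is invented for illustration):

```python
# Hypothetical sketch: a credibility score that is never shown directly,
# only revealed indirectly through the ordering and visibility of comments.

comments = [
    {"author": "alice", "text": "First post", "hidden_score": 12},
    {"author": "bob", "text": "Thoughtful reply", "hidden_score": 87},
    {"author": "carol", "text": "Spam-ish aside", "hidden_score": 3},
]

VISIBILITY_THRESHOLD = 5  # low-scoring comments are collapsed, not deleted

def render_order(comments):
    """Return what a reader sees: prominent comments first, the score itself omitted."""
    visible = [c for c in comments if c["hidden_score"] >= VISIBILITY_THRESHOLD]
    ranked = sorted(visible, key=lambda c: c["hidden_score"], reverse=True)
    return [(c["author"], c["text"]) for c in ranked]  # score never exposed

print(render_order(comments))
```

A reader can infer that "bob" outranks "alice" only from the ordering; the number itself never leaves the database.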
While I'm a strong advocate of open-source software, I believe the absence of transparency in credibility scores is a necessary evil. It keeps authors from gaming the system, and, perhaps more importantly, helps fight rankism.
For any ranked list is a hierarchy, and as such is fundamentally at odds with a scholarly network. A list of artists or academics with numbers next to their names is a pitiful representation of their impact on the field. Ultimately, ranked lists are, like standardized tests and representative democracy, a convenient excuse for not thinking.
One way to defeat rankism is to abandon lists altogether in favor of clouds. Unlike ranked lists, clouds of influence can be contextual (relative to the subculture being measured), multiple (applicable to more than one subculture), variable (reflecting changes over shorter timescales than a global metric), and net-native. Del.icio.us or Connotea tag clusters and Touchgraph link diagrams might be repurposed to create distributed metrics. Still Water has experimented with a couple of ways to diagram relationships in The Pool, including graphing ancestor-descendant relationships and collaborator-work relationships.
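The contrast between a global ranked list and a contextual cloud can be sketched in a few lines (a minimal illustration, not any system's actual code; the tags, authors, and reference counts are invented):

```python
from collections import Counter, defaultdict

# Hypothetical sketch of "clouds of influence": instead of one global ranked
# list, weight each author's influence per subculture (tag), so the same
# author can loom large in one context and small in another.

references = [
    ("net.art", "alice"), ("net.art", "alice"), ("net.art", "bob"),
    ("bioart", "bob"), ("bioart", "carol"), ("bioart", "carol"),
]

clouds = defaultdict(Counter)
for tag, author in references:
    clouds[tag][author] += 1

def cloud(tag):
    """Relative weights within one subculture -- suitable for sizing a tag cloud."""
    counts = clouds[tag]
    total = sum(counts.values())
    return {author: n / total for author, n in counts.items()}

print(cloud("net.art"))  # alice outweighs bob in this subculture
print(cloud("bioart"))   # but bob is a minor figure in this one
```

Because each cloud is normalized within its own subculture, no single number ranks "alice" against "carol" globally, which is exactly the point.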
Of course, no academic is going to abandon peer-reviewed journals unless their university's promotion and tenure guidelines recognize more "organic" reputation metrics. To this end, Still Water worked with MIT's Roger Malina to publish suggested "New Criteria for New Media," a white paper and sample guidelines that became the most downloaded article in Leonardo and (to judge from private emails I've received) an influence on P&T policies in several universities.
As I tried to argue before, DIY undertakings require a network to flourish and vice versa.
Still Water--what networks need to thrive.