Clay Shirky on Mutable Rules for Games

by Alphaville Herald on 27/05/04 at 10:04 pm

Clay Shirky rocks, what more can I say. Oh, maybe this: following is a copy of an essay he’s been giving since the State of Play conf., on the plausibility and desirability of games that give players themselves the ability to change the rules as the game evolves. There are technical issues, of course, but then there is also this:

“Much of what makes a game fun is mastering the
rules — both winning as defined by the rules and gaming the system
are likely to be more fun than taking responsibility for what the
rules are. There is a danger that by dumping so much responsibility
into the citizens’ laps, we would end up re-creating all the fun of
city planning board meetings in an online environment (though given
players’ willingness to sit around all day making armor, maybe that’s
not a fatal flaw.)”

Would Coco, the scammers, and the money sellers actually like a game if there weren’t set rules to (respectively) either religiously follow, flout, or work around? Essay follows.

* Essay ==============================================================

Nomic World: By the players, for the players
http://www.shirky.com/writings/nomic.html

I’m sort of odd-man-out in a Games and Law conference, in that my
primary area of inquiry isn’t games but social software. Not only am I
not a lawyer, I don’t even spend most of my time thinking about game
problems. I spend my time thinking about software that supports group
interaction across a fairly wide range of social patterns.

So, instead of working from case law out, which has been a theme here
(and here’s where I insert the “I am not a lawyer” disclaimer), I’m
going to propose a thought experiment looking from the outside in.
And I want to pick up on something that Julian [Dibbell] said earlier
about game worlds: ‘users are the state.’ The thought experiment I
want to propose is to agree with that sentiment, and to ask “How far
can we go in that direction?”

Instead of looking for the places where game users are currently suing
or fighting one another, forcing the owners of various virtual worlds
to deal with these things one crisis at a time, I want to ask the
question “What would happen if we wanted to build a world where we
maximized the amount of user control? What would that look like?”

I’m going to make that argument in three pieces. First, I’m going to
do a little background on group structure and the tension between the
individual and the group. Then I want to contrast briefly governance
in real and virtual worlds. Finally I want to propose a thought
experiment on placing control of online spaces in the hands of the
users.

- Background
[This material is also covered in A Group Is Its Own Worst Enemy -- ed.]

The background first: The core fact about human groups is that they
are first-class entities. They exhibit behaviors that can’t be
predicted by looking at individual psychologies. When groups of people
get together they do surprising things, things you can’t predict from
watching the behavior of individuals. I want to illustrate this with a
story, and I want to illustrate it with a story from your life,
because even though I don’t know you, I know what I’m about to
describe has happened to you.

You’re at a party and you get bored — it’s not doing it for you
anymore. The people you wanted to talk to have already left, you’ve
been there a long time, you’d rather be home playing Ultima,
whatever. You’re ready to go. And then a really remarkable thing
happens – you don’t actually leave. You decide you don’t like this
party anymore, but you don’t walk out. That second thing, that thing
keeping you there is a kind of social stickiness. And so there’s this
tension between your intellectual self and your membership, however
tenuous, in the group.

Then, twenty minutes later, another really remarkable thing
happens. Somebody else gets their coat, ‘Oh, look at the time.’ What
happens? Suddenly everybody is leaving all at once. So you have this
group of people, each of whom is perfectly capable of making
individual decisions and yet they’re unconsciously synchronizing in
ways that you couldn’t have predicted from watching each of them.

We’re very used to talking about social structure in online game
spaces in terms of guilds or other formal organizations. But in fact,
human group structure kicks in incredibly early, at very low levels of
common association. Anything more focused than a group of people
standing together in an elevator is likely to exhibit some of these
group effects.

So what’s it like to be there at that party, once you’ve decided to
leave but are not leaving? It’s horrible. You really want to go and
you’re stuck. And that tension is between your emotional commitment to
the group fabric and your intellectual decision that this is not for
you. The tension between the individual and the group is inevitable,
it arises over and over again — we’ve seen that pattern for as long
as we’ve had any history of human groups we can look at in any detail.

Unfortunately the literature is pretty clear this isn’t a problem you
outgrow. The tension between the individual and the group is a
permanent fact of human life. And when groups get to the point where
this tension becomes a crisis, they have to say “Some individual
freedoms have to be curtailed in order for group cohesion to be
protected.”

This is an extremely painful moment, especially in communities that
privilege individual freedoms, and the first crisis, of course, is the
worst one. In the first crisis, not only do you have to change the
rules, you don’t even have those rules spelt out in the first place –
that’s the constitutional crisis. That’s the crisis where you say,
this group of people is going to be self-governing.

Group structure, even when it’s not explicitly political, is in part a
response to this tension. It’s a response to the idea that the
cohesion of the group sometimes requires limits on individual
rights. (As an aside, this is one of the reasons that libertarianism
in its extreme form doesn’t work, because it assumes that groups are
simply aggregates of individuals, and that those individuals will
create shared value without any sort of coercion. In fact, the logic
of collective action, to use Mancur Olson’s phrase, requires some
limits on individual freedom. Olson’s book on the subject, by the way,
is brilliant if a little dry.)

- Fork World

If you want to see why the tension between the individual and the
group is so significant, imagine a world, call it Fork World, where
the citizens were given the right to vote on how the world was run. In
Fork World, however, the guiding principle would be “no coercion.”
Players would vote on rule changes, but instead of announcing winners
and losers at the end of a vote, the world would simply be split in
two with every vote.

Imagine there was a vote in Fork World on whether players can kill one
another, say, which has been a common theme in political crises in
virtual worlds. After the vote, instead of imposing the results on
everyone, you would send everyone who voted Yes to a world where
player killing is allowed, and everyone who voted No to an alternate
world, identical in every respect except that player killing was not
allowed.

And of course, after 20 such votes, you would have subdivided the
population 20 times, leaving you with 2 to the 20th (roughly a
million) potential worlds — a world with player killing where your
possessions can be stolen when you die and where you re-spawn vs. a
world with player killing and possession stealing but permanent
death, and so on. Even if you started with a million players on Day
One, by your 20th vote you would have, by definition, an average of
one player per possible world. You would have created a new category
of MMO — the Minimally Multi-player Online game.
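
To make the arithmetic concrete, here is a minimal Python sketch of
how repeated binary votes partition a population. The player count,
the coin-flip voting, and the function names are all invented for
illustration; the point is simply that 20 splits allow 2 to the 20th
possible rule sets.

    import numpy as np

    # A rough sketch of Fork World's "no coercion" voting: every binary vote
    # splits each existing world into a Yes-world and a No-world, so 20 votes
    # allow up to 2**20 (about a million) distinct rule sets. The player
    # count and the coin-flip voting are invented for illustration.
    def fork_world(num_players=1_000_000, num_votes=20, seed=1):
        rng = np.random.default_rng(seed)
        # Map each world, identified by its vote history, to its population.
        worlds = {(): num_players}
        for _ in range(num_votes):
            split = {}
            for history, population in worlds.items():
                yes = int(rng.binomial(population, 0.5))  # players voting Yes
                no = population - yes
                if yes:
                    split[history + (True,)] = yes
                if no:
                    split[history + (False,)] = no
            worlds = split
        return worlds

    worlds = fork_world()
    print(f"possible rule sets after 20 votes: {2**20:,}")
    print(f"average players per possible world: {1_000_000 / 2**20:.2f}")
    print(f"worlds that actually have anyone in them: {len(worlds):,}")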

This would fulfill the libertarian fantasy of no coercion on behalf of
the group, because no one would ever be asked to assent to rules they
hadn’t voted for, but it would also be approximately no fun. To get
the pleasure of other people’s company, people have to abide by rules
they might not like when considered in isolation. Group cohesion has
significant value, value that makes accepting majority rule
worthwhile.

- Simple Governance Stack

Since tensions between group structure and individual behavior are
fundamental, we can look at ways that real world group structure and
virtual world group structure differ. To illustrate this, I’m going to
define the world’s tiniest stack, a three-layer stack of governance
functions in social spaces. This is of course a tremendous
over-simplification, you could draw the stack at all kinds of levels
of complexity. I’ve used three levels because it’s what fits on a
Powerpoint slide with great big text.

- Social Norms
- Interventions
- Mechanics

At the top level are social norms. We’ve heard this several times at
the conference today — social norms in game worlds have the effect of
governance. There are some societies where not wearing white shoes
after Labor Day has acquired the force of law. It’s nowhere spelt
out, no one can react to you in any kind of official way if you
violate that rule, and yet there’s a social structure that keeps that
in place.

Then at the bottom of the stack is mechanics, the stuff that just
happens. I’ve pulled Norman Mailer’s quote about Naked Lunch here –
“As implacable as a sales tax.” Sales tax just happens as a
side-effect of living our normal lives. We have all sorts of
mechanisms for making it work in this way, but the experience of the
average citizen is that the tax just happens.

And between these top and bottom layers in the stack, between
relatively light-weight social norms and things that are really
embedded into the mechanics of society, are lots and lots of
interventions, places where we give some segment of society heightened
power, and then allow them to make judgment calls, albeit with
oversight.

Arresting someone or suing someone are examples of such interventions,
where human judgment is required. I’ve listed interventions in the
middle of the stack because they are more than socially enforceable –
suing someone for libel is more than just social stigma — but they are
not mechanical — libel is a judgment call, so some human agency is
required to decide whether libel has happened, and if so, how it
should be dealt with.

And of course these layers interact with one another as well. One of
the characteristics of this interaction is that in many cases social
norms acquire the force of law. If the society can be shown to have
done things in a certain way consistently and for a long time, the
courts will, at least in common law societies, abide by that.

- The Stack in Social Worlds

Contrast the virtual world. Social norms – the players have all sorts
of ways of acting on and enacting social norms. There are individual
behaviors – trolling and flaming, which are in part indoctrination
rituals and in part population control, then there are guilds and more
organized social structures, so there’s a spectrum of formality in the
social controls in the game world.

Beneath that there’s intervention by wizardly fiat. Intervention comes
from the people who have a higher order of power, some sort of direct
access to the software that runs the system. Sometimes it is used to
solve social dilemmas, like the toading of Mr. Bungle in LambdaMOO
[http://www.juliandibbell.com/texts/bungle.html], or for dispute
resolution, where two players come to a deadlock, and it can’t be
worked out except by a third party who has more power. Sometimes it’s
used to fix places where system mechanics break down, as with the
story from Habitat [http://www.fudco.com/chip/lessons.html] about
accidentally allowing a player to get a too-powerful gun.

Intervention is a key lubricator, since it allows the ad hoc solution
of unforeseen problems, and the history of both political norms and
computer networks is the history of unforeseen problems.

And then there’s mechanics. The principal difference between real
world mechanics and virtual world mechanics is ownership. Someone owns
the server – there is a deed for a box sitting in a hosting company
somewhere, and that server could be turned off by the person who owns
it. The irony is that although we’re used to computers greatly
expanding the reach of the individual, as they do in many aspects of
our lives, in this domain they actually contract it. Players live in
an online analog to a shopping mall, which seems like public space,
but is actually privately owned. And of course the possibilities for
monitoring and control in virtual worlds are orders of magnitude
greater than in a shopping mall.

The players have no right to modification of the game world, or even
to oversight of the world’s owners. There are very few environments
where the players can actually vote to compel either the system
administrators or the owners to do things, in a way that acquires the
force of law. (Michael Froomkin has done some interesting work on
testing legal structures in game worlds
[http://intel.si.umich.edu/tprc/papers/2003/240/VirtualReal.pdf].)

In fact what often happens, both online and off, is that structures
are created which look like citizen input, but these structures are
actually designed to deflect participation while providing political
cover. Anyone in academia knows that faculty meetings exist so the
administration can say “Well you were consulted” whenever something
bad happens, even though the actual leverage the faculty has over the
ultimate decision is nil. The model here is customer service –
generate a feeling of satisfaction at the lowest possible
cost. Political representation, on the other hand, is a high-cost
exercise, not least because it requires group approval.

- Two Obstacles

So, what are the barriers to self-governance by the users? There are
two big ones — lots of little ones, but two big ones.

The first obstacle is code, the behavior of code. As Lessig says, code
is law. In online spaces, code defines the environment we operate
in. It’s difficult to share the powers of code among the users,
because our current hardware design center is the ‘Personal
Computer’; we don’t have a design that allows for social constraints
on individual use.

Wizards have a higher degree of power than other players, and simply
allowing everyone to be a wizard tends to very quickly devolve into
constant fighting about what to do with those wizardly powers. (We’ve
seen this with IRC [internet relay chat], where channel operators have
higher powers than mere users, leading to operator wars, where the
battle is over control of the space.)

The second big obstacle is economics — the box that runs the virtual
world is owned by someone, and it isn’t you. When you pay your $20 a
month to Sony or Microsoft, you’re subscribing to a service, but
you’re not actually paying for the server directly. The ownership
passes through a series of layers that dilutes your control over
it. The way our legal system works, it’s hard for groups to own things
without being legal entities. It’s easy for Sony to own
infrastructure, but for you and your 5 friends to own a server in
common, you’d have to create some kind of formal entity. IOUs and
social agreements to split the cost won’t get you very far.

So, if we want to maximize player control, if we want to create an
environment in which users have refined control, political control,
over this stack I’ve drawn, we have to deal with those two obstacles
– making code subject to political control, and making it possible
for the group to own their own environment.

- Nomic World

Now what would it be like if we set out to design a game environment
like that? Instead of just waiting for the players to argue for
property rights or democratic involvement, what would it be like to
design an environment where they owned their online environment
directly, where we took the “Code is Law” equation at face value, and
gave the users a constitution that included the ability to both own
and alter the environment?

There’s a curious tension here between political representation and
games. The essence of political representation is that the rules are
subject to oversight and alteration by the very people expected to
abide by them, while games are fun in part because the rule set is
fixed. Even in games with highly idiosyncratic adjustments to the
rules, as with Monopoly say, the particular rules are fixed in advance
of playing.

One possible approach to this problem is to make changing the rules
fun, to make it part of the game. This is exactly the design center of
a game called Nomic [http://www.earlham.edu/~peters/nomic.htm]. It was
invented in 1982 by the philosopher Peter Suber. He included it as an
appendix to a book called The Paradox of Self-Amendment
[http://www.earlham.edu/~peters/writing/psa/index.htm], which concerns
the philosophical ramifications of having a body of laws that includes
the instructions for modifying those laws.

Nomic is a game in which changing the rules is a legitimate move
within the game world, which makes it closer to the condition of a
real government than to, say, Everquest. The characteristics of the
Nomic rules are, I think, the kind of thing you would have to take on
if you wanted to build an environment in which players had real
control. Nomic rules are alterable, and they’re also explicit – one of
the really interesting things about designing a game in which changing
the rules is one of the rules, is you have to say much more carefully
what the rules actually are.

The first rule of Nomic, Rule 101, is “All players must abide by the
rules.” Now that’s an implicit rule in almost any game anyone ever
plays, but in Nomic it needs to be spelled out, and ironically, once
you spell it out it’s up for amendment – you can have a game of Nomic
in which you allow people to no longer play by the rules.

Suber’s other key intuition, I think, in addition to making mutability
a move in the game, is making Nomic contain both deep and shallow
rules. There are rules that are “immutable”, and rules that are
mutable. I put immutable in quotes because Rule 103 allows for “…the
transmutation of an immutable rule into a mutable rule or vice versa.”

Because the players can first vote to make an immutable rule mutable,
and then can vote to change the newly mutable rule, Nomic works a
little like the US Constitution: there are things that are easier to
change and harder to change, but nothing is beyond change. For
example, flag burning is currently protected speech under our First
Amendment, so laws restricting flag burning are invariably struck down
as unconstitutional. An amendment to the constitution making flag
burning illegal, however, would not, by definition, be
unconstitutional, but such an amendment is much much harder to pass
than an ordinary law. Same pattern.
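
To make the two tiers concrete, here is a minimal Python sketch. The
rule texts and the vote thresholds (a simple majority to amend,
unanimity to transmute) are invented for illustration; Suber’s actual
initial rule set spells out its own procedures.

    from dataclasses import dataclass, field

    # A rough sketch of Nomic's two tiers: immutable rules cannot be amended
    # directly, but a Rule 103-style "transmutation" can move a rule between
    # tiers, after which it can be changed like any mutable rule. The rule
    # texts and vote thresholds here are invented.
    @dataclass
    class Rule:
        number: int
        text: str
        mutable: bool

    @dataclass
    class Nomic:
        rules: dict = field(default_factory=dict)

        def transmute(self, number, votes_for, total_votes):
            """Flip a rule between immutable and mutable (assumed: unanimity)."""
            if votes_for == total_votes:
                self.rules[number].mutable = not self.rules[number].mutable
                return True
            return False

        def amend(self, number, new_text, votes_for, total_votes):
            """Amend a mutable rule (assumed: simple majority)."""
            rule = self.rules[number]
            if rule.mutable and votes_for * 2 > total_votes:
                rule.text = new_text
                return True
            return False

    game = Nomic()
    game.rules[101] = Rule(101, "All players must abide by the rules.", mutable=False)
    game.rules[301] = Rule(301, "Players may trade armor on Tuesdays.", mutable=True)

    # Amending the "immutable" rule fails until it is first transmuted.
    assert not game.amend(101, "Rules are optional.", votes_for=3, total_votes=5)
    assert game.transmute(101, votes_for=5, total_votes=5)
    assert game.amend(101, "Rules are optional.", votes_for=3, total_votes=5)

Nothing here is beyond change; the deep rules simply take a bigger
vote to reach, which is the same pattern as the constitutional
example above.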

The game Nomic has the advantage of being mental and interpretive,
unlike software-mediated environments, where the rules are blindly run
by the processor. We know (thank you, Herr Gödel) that we cannot prove
that any sufficiently large set of rules is also self-consistent, and
we know (from bitter experience) that even simple software contains
bugs.

The task of instantiating a set of rules in code and then trying to
work in the resulting environment, while modifying it, can seem
daunting. I think it’s worth trying, though, because an increasing
portion of our lives, personal, social, and political, is going to be
lived in these mediated spaces. The importance of online spaces as
public gatherings is so fundamental, in fact, that for the rest of
this talk, I’m going to use the words player and citizen
interchangeably.

- How to build it?

How to build a Nomic world? Start with economics. The current barrier
to self-ownership by users is simple: the hardware running the social
environment is owned by someone, and we have a model of contractual
obligation for ownership of hardware, rather than a model of political
membership.

One possible response to current economic barriers, call it the
Co-operative Model, is to use contracts to create political
rights. Here we would set up a world or game in which the people
running it are actually the employees of the citizens, not the
employees of the game company, and their relationship to the body of
citizens is effectively as work-for-hire. This would be different than
the current ‘monthly subscriber’ model for most game worlds. In the
co-operative model, when you’re paying for access to the game world
your dollars would buy you shares of stock in a joint stock
corporation — citizens would be both stakeholders and
shareholders. There would be a fiduciary duty on the part of the
people running the game on your behalf to act on the political will of
the people, however expressed, rather than the contractual
relationship we have now.

The downside of this model is that the contractual requirements to do
such a thing are complex. The Open Source world gives us a number of
interesting models for valuable communal property like the license to
a particular piece of software being held in trust. When such a
communal trust, though, wants to have employees, the contracts become
far more complex, and the citizens’ co-op becomes an employer. Not
un-do-able, but not a great target for quick experiments either.

A second way to allow a group to own their own social substrate is
with a Distributed Model, where you would attack the problem down at
the level of the infrastructure. If the issue is that any one server
has to be owned somewhere, distribute the server. Run the environment
on individual PCs and create a peer-to-peer network, so that the
entirety of the infrastructure is literally owned by the citizens from
the moment they join the game. That pushes some technological
constraints, like asynchronicity, into the environment, but it’s also
a way of attacking ownership, and one that doesn’t require a lot of
contractual specification with employees.

[Since I gave this talk, I've discovered BlogNomic
[http://blognomic.blogspot.com/], a version of Nomic run on weblogs,
which uses this “distributed platform” pattern.]
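
As a rough illustration of the ownership point of the Distributed
Model, and not of the real networking, replication, and
conflict-resolution problems a peer-to-peer world would face, here is
a minimal in-memory Python sketch; all names are invented.

    # Every citizen who joins contributes a node holding a replica of the
    # world state, so the infrastructure is collectively owned from the
    # moment of joining. Real peer-to-peer worlds would add networking,
    # partial replicas, and conflict resolution (the "asynchronicity"
    # constraints above); all names here are invented for illustration.
    class CitizenNode:
        def __init__(self, owner):
            self.owner = owner
            self.replica = {}

    class DistributedWorld:
        def __init__(self):
            self.nodes = []

        def join(self, citizen):
            node = CitizenNode(citizen)
            if self.nodes:
                # New nodes copy the current state from an existing peer.
                node.replica = dict(self.nodes[0].replica)
            self.nodes.append(node)
            return node

        def apply_change(self, key, value):
            # A change only exists insofar as citizens' own machines hold it.
            for node in self.nodes:
                node.replica[key] = value

    world = DistributedWorld()
    world.join("alice")
    world.join("bob")
    world.apply_change("player_killing", False)
    print([(n.owner, n.replica) for n in world.nodes])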

A third way could be called the Cheap Model: simply work at such low cost
that you can piggyback on low-cost or free infrastructure. If you
wanted to build a game, this would probably mean working in text-based
strategy mode, rather than the kind of immersive graphic worlds we’ve
been talking so much about today. There are a number of social tools
– wikis and mailing lists and so on — that are freely available and
cheap to host. In this case, a one-time donation of a few dollars per
citizen at the outset would cover hosting costs for some time.

Those are some moves that would potentially free a Nomic environment
from the economic constraints of ownership by someone other than the
citizens. The second constraint is dealing with the code, the actual
software running the world.

- Code

Code is a much harder thing to manipulate than economics. Current
barriers in code, as I said, are levels of permission and root
powers. The real world has nothing like root access — there is an
urban legend, a rural legend, I guess, about the State of Tennessee
declaring the value of pi to be 3, as irrational numbers were
decidedly inconvenient. However, the passage of such a law couldn’t
actually change the value of pi.

In an online world, on the other hand, it would be possible to
redefine pi, or indeed any other value, which would in some cases
cause the world itself to grind to a halt.

This situation, where a rule change ends the game, is possible even in
Nomic. In one game a few years ago, a set of players made a sub-game
of trying to pass game-ending rules. Now in theory Nomic shouldn’t
allow such a thing, since such rules could be repealed, so this group
of players specifically targeted unrepealability as the core virtue of
all their proposed changes. One such change, for example, would have
made the comment period for subsequent rule changes 54 years long.
Such a rule could eventually have been repealed, of course, but not in
a year with two zeros in the middle.

So unlike actual political systems, where the legislators are allowed
to create nonsensical but unenforceable laws, in virtual worlds, it’s
possible to make laws that are nonsensical and enforceable, even at
the expense of damaging the world itself. This means that any
citizen-owned and operated environment would have to include a third
set of controls, designed to safeguard the world itself against this
kind of damage.

One potential response is to create Platform World, with a third,
deeper level of immutability, enforced with the choice of platform.
You could announce a social experiment, using anything from mailing
list software to There.com, and that would set a bounded environment.
You can imagine that software slowly mutating away from the original
code base, as citizens made rule changes that required code changes,
but by picking a root platform, you would actually have a set of rules
embodied in code that was harder to change than classic Nomic rules.
The top two layers of the stack, the social and interventionist
changes, could happen in the environment, but re-coding the mechanics
of the environment itself would be harder.

A second possibility, as a move completely away from that, would be
Coder World. Here you would only allow users who are comfortable
coding to play or participate in this environment, so that a kind of
literacy becomes a requirement for participation. This would greatly
narrow the range of potential participants, but would flatten the
stack so that all citizens could participate directly in all layers.
This flattening would lead to problems of its own of course, and would
often devolve into tests of hacking prowess, and even attempts to crash
the world, as with Nomic, but that might be interesting in and of
itself.

Third, and this is the closest to the current game world model, would
be Macro World. Here you would create a bunch of macro-languages, to
create ways in which end-users who aren’t coders could nevertheless
alter the world they inhabit. And obviously object creation, the whole
history of creating virtual objects for virtual environments, works
this way now, but it’s not yet at the level of creating the
environment itself from scratch. You come into an environment in which
you create objects, rather than coming into a negative space and
letting the citizens build up the environment, including the rules.

A fourth and final possibility is a CVS World. CVS is a concurrent
versions system, the kind of tool programmers use to keep code safe,
so that when they make a change that breaks something, they can roll
back a version. Wikis, collaborative workspaces where users create the site
together and on the fly, have shown that the CVS pattern can have
important social uses as well.

In the Matrix tradition, because I guess that everybody’s referred to
the Matrix in every talk today, CVS World would be a world in which
you simply wouldn’t care if citizens made mistakes and screwed up the
world, because the world could always be rolled back to the last
working version.

In this environment, the ‘crash the world’ problem stops being a
problem not because there is a defense against crashing, but because
there is a response: if someone crashes the world, for whatever
reason, it rolls back to the last working version. That would
potentially be the most dynamic model in terms of experimentation,
but it would also probably be the most disruptive to game play.
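
Here is a minimal Python sketch of that rollback pattern, with the
world state and the ‘crash’ check standing in for whatever the real
mechanics would be; all names are invented.

    import copy

    # A rough sketch of CVS World: every accepted change is committed as a
    # new revision of the world state, and a "crash" is handled not by
    # preventing it but by rolling back to the last working revision. The
    # world state and the crash check are stand-ins for illustration.
    class VersionedWorld:
        def __init__(self, initial_state):
            self.revisions = [copy.deepcopy(initial_state)]

        @property
        def state(self):
            return self.revisions[-1]

        def commit(self, change):
            """Apply a citizen's change to a copy of the state and keep it."""
            candidate = copy.deepcopy(self.state)
            change(candidate)
            self.revisions.append(candidate)

        def rollback(self):
            """Discard the latest revision; return to the last working one."""
            if len(self.revisions) > 1:
                self.revisions.pop()

    world = VersionedWorld({"pi": 3.14159, "player_killing": False})
    world.commit(lambda s: s.update(player_killing=True))  # a survivable change
    world.commit(lambda s: s.update(pi=3))                 # "redefining pi"

    if world.state["pi"] != 3.14159:  # the world grinds to a halt...
        world.rollback()              # ...so restore the last working version

    print(world.state)  # {'pi': 3.14159, 'player_killing': True}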

- Why do it?

The looming question here, of course, is “Would it be fun?” Would it
be fun to be in a virtual environment where citizens have a
significant amount of control, and where the legal structures reflect
the legal structures we know in the real world? And the answer is,
maybe no.

One of my great former students, Elizabeth Goodman, said that the reason
academics like to talk about play but not about fun is that you can
force people to play. Much of what makes a game fun is mastering the
rules — both winning as defined by the rules and gaming the system
are likely to be more fun than taking responsibility for what the
rules are. There is a danger that by dumping so much responsibility
into the citizens’ laps, we would end up re-creating all the fun of
city planning board meetings in an online environment (though given
players’ willingness to sit around all day making armor, maybe that’s
not a fatal flaw.)

Despite all of this, though, I think it’s worth maximizing citizen
involvement through experiments in ownership and control of code, for
several reasons.

First, as Ted [Castronova]‘s work shows
[http://www.gamestudies.org/0302/castronova/], the economic
seriousness of game worlds has surpassed anything any of us would have
expected even a few years ago. Economics and politics are both about
distributed optimization under constraints, and given the surprises
we’ve seen in the economic sphere, with inflation in virtual worlds
and economy hacking by players, it would be interesting to see if
similar energy would be devoted to political engagement, on a level
more fundamental than making and managing Guild rules.

Next, it’s happening anyway, so why not formalize it? As Julian
[Dibbell]‘s work on everything from LambdaMOO to MMO Terms of Service
demonstrates, the demands for governance are universal features of
virtual worlds, and rather than simply re-invent solutions one crisis
at a time, we could start building a palette of viable political
systems.

Finally, and this is the most important point, we are moving an
increasing amount of our speech to owned environments. The economic
seriousness of these worlds undermines the ‘it’s only a game’
argument, and the case of tk being run out of the Sims for publishing
reports critical of the game shows how quickly freedom of speech issues
can arise. The real world is too difficult to control by fiat — pi
remains stubbornly irrational no matter who votes on it — but the
online world is not. Even in non-game and non-fee collecting social
environments like Yahoo Groups, the intrusiveness of advertising and
the right of the owners to unilaterally change the rules creates many
fewer freedoms than we enjoy offline.

We should experiment with game-world models that dump a large and
maybe even unpleasant amount of control into the hands of the players
because it’s the best lab we have for experiments with real governance
in the 21st century agora, the place where people gather when they
want to be out in public.

While real-world political culture has the unfortunate effect of
presenting either/or choices — unicameral or bicameral legislatures,
president
or prime minister, and so on — the online world offers us a degree of
flexibility that allows us to model rather than theorize. Wonder what
the difference is between forcing new citizens to have sponsors
vs. dumping newbies into the world alone? Try it both ways and see how
the results differ.

This is really the argument for Nomic World, for making an environment
as wholly owned and managed by and for the citizens as a real country
– if we’re going to preserve our political freedoms as we move to
virtual environments, we’re going to need novel political and economic
relations between the citizens and their environments. We need this,
we can’t get it from the real world. So we might as well start
experimenting now, because it’s going to take a long time to get good
at it, and if we can enlist the players’ efforts, we’ll learn more,
much more, than if we leave the political questions in the hands of
the owners and wizards.

And with that, I’ll sit down. Thanks very much.

* Two Interviews ========================================================

Curiously, I was interviewed twice in the space of a couple of weeks,
once for an NYC-focused site called Gothamist, and once by R.U. Sirius
for his Neofile series.

Both interviews include questions on my general views on technology
(though the bulk of the Gothamist interview is about life as a New
Yorker.)

– Gothamist – http://www.gothamist.com/interview/archives/2004/04/09/clay_shirky_internet_technologist.php

– Neofiles – http://www.life-enhancement.com/neofiles/default.asp?id=35

* End ====================================================================

This work is licensed under the Creative Commons Attribution License.
The licensor permits others to copy, distribute, display, and perform
the work. In return, licensees must give the original author credit.

To view a copy of this license, visit
http://creativecommons.org/licenses/by/1.0

or send a letter to
Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.

2004, Clay Shirky
