Gwyneth: A decentralized, open-source Second Life?
by Alphaville Herald on 18/04/05 at 10:13 am
When Gwyneth Llewelyn comes up with some food for thought, it’s usually not a light salad.
At her blog she has led a herd of cattle into our living rooms, slaughtered them and impaled their carcasses on rotisseries, ready to be consumed. Her post ‘More thoughts on expanding Second Life® to the metaverse…’ outlines a proposal for the future of Second Life – a Second Life that is capable of handling a predicted 1 million users by 2007.
Gwyneth presents a radical, decentralized, open-source model that would render Second Life more akin to the internet itself. It is an idea that addresses SL at the most fundamental level and would have an unprecedented impact on controversial topics like griefing, free speech and resident governments.
Document re-published here under the Creative Commons License and with Gwyneth’s permission. Original source:
http://secondlife.game-host.org/article38visual1layout1.html
More thoughts on expanding Second Life® to the metaverse…
2005-04-13 14:18:41
I feel in my bones that 2005 will be a turning point for Second Life®, at least for myself, since I’m pretty “new” in SL – after something like eight months, which feel much more like eight years, several things have changed for me.
First, it looks like, Google-wise, my SL pseudonym is more “famous” than my RL self, which is weird, since my RL email address has been on the spammers’ lists since 1995, I think. But weirdly enough, the tiny, tiny community of Second Life® seems to attract much more attention than anything else on the Internet. I wonder how that can be. It’s certainly an uncanny thought. Then again, I guess that you get many more hits by searching for “Marilyn Monroe” (2,820,000) than for her real name, “Norma Jeane Mortenson” (only 3,210)…
Weirdness apart, this only shows that slowly the cogs and wheels beneath Second Life are spinning. Perhaps for the first time in my professional life, I’m watching “common people” embracing a brave new technology even faster than mobile phones or the Internet. If the trend catches on, I must take off my virtual hat to Philip Linden, who brags about having over a million users around 2007 or so. So many people have laughed about this prediction. Well, SL currently grows steadily (about 4,000 new users every month) rather than exponentially, but, as Philip very well said, exponential growth will happen when more people have broadband and better computers, which may well happen in two years.
As an example, my country contributes about 0.2% of the whole Internet population, but only about 0.1% of the SL population. However, directly through my own efforts (or rather, the work I’m involved with in RL using SL as a collaborative platform), I can safely assume that we will have 25 times as many residents from my country in about one year. So, if the same happens all over SL – individual teams of residents bringing in massive numbers of new residents due to their own RL projects using SL – hmm, 25 times 30,000, that should be three quarters of a million users. Maybe Philip is not so wrong with his estimates after all!
Wow. One million users in SL. The big question is: how will Linden Lab™ evolve its technology to be able to handle all of them?
Several ideas and suggestions have been argued all over the forums. I’d say that the majority – a comfortable majority – believe that this will happen only through open sourcing the code, and LL’s Philip certainly doesn’t disagree. The only question seems to be “timing”.
Open source or not, let’s face it – the current technology of SL is simply not scalable enough. LL is betting only on two things: hardware with better performance, and rewriting of the renderer on the client side to make it a tiny bit more efficient.
That’s not enough. As anybody who has programmed knows, a more efficient algorithm is way better than a faster machine. The same applies to systems engineering as well: a better architecture is far faster than lots of hardware put together. For the Linux die-hard fanatics, it does not come as a surprise that one single Linux server can replace dozens (in some cases, hundreds) of Windows servers – and give even better performance than all of them put together. Yahoo and later Google prided themselves on being able to run all their infrastructure on just a few dozen cheap Unix boxes (FreeBSD for Yahoo, Linux for Google, according to Netcraft).
So, LL has to think about how to implement a million-user infrastructure for 2007. Hmm, not easy. Some forum posters – most notably Morgaine Dinova – advise them to redesign most of the infrastructure from scratch, and release the source code as soon as possible, to get help from a few thousand programmers who would debug and review the code for free, and add nifty features that improve performance. The major reasoning behind that is simple enough. 700+ Linux servers should be more than adequate for hosting a million users. However, due to the way the grid works, you cannot have more than 40 avatars in the same sim – and people tend to concentrate on “hot spots”, the places where an event is hosted – leaving most (over 90% at the very least) of the CPUs completely idle. Since events are hosted pretty much anywhere, it’s impossible to “predict” where the next “hot spot” is going to be, and thus the Lindens cannot quickly allocate more CPU power to the places that need it. Basically, the idea would be to create an infrastructure where CPU power would be shared and allocated dynamically to wherever it is needed – and fully used in that way.
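Just to make the idea concrete, here is a rough sketch – in Python, with entirely made-up names and numbers, nothing resembling LL’s actual grid code – of a scheduler that hands idle sim servers to whichever region has become a “hot spot”, and takes them back when the crowd leaves:

    # Rough sketch only -- hypothetical names, not LL's real grid software.
    # Instead of one fixed CPU per region, keep a pool of idle servers and
    # lend extra ones to whichever region is currently crowded.

    IDLE_POOL = ["sim101", "sim102", "sim103"]   # servers with nothing to do
    ASSIGNED = {}                                # region name -> list of servers

    def balance(region, avatar_count, avatars_per_server=40):
        """Assign enough servers to a region for its current crowd."""
        needed = max(1, -(-avatar_count // avatars_per_server))  # ceiling division
        current = ASSIGNED.setdefault(region, [])
        while len(current) < needed and IDLE_POOL:
            current.append(IDLE_POOL.pop())      # borrow an idle server
        while len(current) > needed:
            IDLE_POOL.append(current.pop())      # give it back when the crowd leaves

    balance("Luskwood", 120)   # a 120-avatar event grabs three servers
    balance("Luskwood", 25)    # afterwards, two go back to the idle pool
    print(ASSIGNED, IDLE_POOL)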
Unfortunately, this model does not work so well if you think of Linden Lab as a “VR hosting company”. They need to be able to offer customers “a whole sim in a package” – i.e., an independent physical machine with certain characteristics, with an allocation of prims (or a similar measure which is representative of throughput). This is the model employed by every other Internet Service Provider (or Application Service Provider), and a model which consumers understand. Also, it works better if you want to interconnect future grids – since sims are individual units, assuming you can get a copy of the sim server software, you should be able to run your own sims, independently of the “main LL grid”.
So there seems to be no way to get rid of “sim-based” CPUs, tied to a region of land, instead of having a mega-world with “virtual CPU power”, allocated on demand.
Now, the problem with the current model is that everything is too proprietary, and the sim computers are really not “independent units” at all. Rather, users have to log in to a common “login server”. From here, you get your inventory. Textures, sounds and anims are spread all over the grid (they are stored on the sim where they were first uploaded) and you need an asset server to track where they really are. If you’re Internet-savvy, you can think of the asset server as a sort of DNS system – it says where the assets are stored.
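To push the DNS analogy a bit further, the central asset server really only answers one question: “on which sim is this asset actually stored?” A toy sketch, with invented keys and hostnames:

    # Toy sketch of the "asset server as DNS" analogy -- invented data.
    ASSET_DIRECTORY = {
        "a1b2c3d4-0000-0000-0000-000000000001": "sim456.agni.lindenlab.com",
        "e5f6a7b8-0000-0000-0000-000000000002": "sim789.agni.lindenlab.com",
    }

    def locate_asset(uuid):
        """Like a DNS lookup: translate an asset key into the host that stores it."""
        return ASSET_DIRECTORY.get(uuid)   # None means "unknown asset"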
But it’s a centralised system. New residents won’t remember the troubles we had when there was only one asset server. When this server failed, everything failed in SL. Currently, thanks to some clever engineering, LL’s developers have managed to duplicate the asset server into a redundant array of boxes – a simple cluster solution, but one which has done wonders. Still, recently, we have been plagued with outages from the login server. LL is working hard on more fixes…
This means that if you ever get the source code for the server software, you have a problem. For residents to visit your sim, you have to be “tied in” to the central user and asset servers, or else you’d be an “isolated spot” – no way your login will work, and no way you can use textures/sounds/animations from the main grid (or, conversely, you could not upload textures to your own sim, and expect them to work on the main grid). This is a similar solution to the offerings of Virtual Universe, a Java-based virtual reality where you can get the server software for free – but it’s not “connected” to anything in the “rest of the world” (and it can’t even manage further servers – so it’s really one isolated spot).
Now let’s take a look at Moon Adamant’s ideas. For those who don’t know her, Moon Adamant refuses to post on any forum, and she definitely isn’t a computer expert, although she has used computers for half of her life. Her idea is pretty simple: get rid of the “central” user & asset servers. Sounds pretty clever, right? The question is: how?
Let’s imagine that each sim does its own avatar authentication and local storage. This means that when you create an avatar in the main grid, you get a sim assigned randomly to you. When you log in for the first time, your SL client gets a special key which “ties” your username and password to that particular sim (let’s imagine that you simply store its address, e.g. sim456.agni.lindenlab.com). Your inventory will be stored on that sim as well, and streamed to your client on demand, as version 1.6 does right now.
All textures/anims/sounds that you upload to a sim get this very same key as well. If you do it properly, you can have the UUID (currently generated by a MySQL statement… so the Lindens did not have much work in generating unique keys) reflect both the sim and a pointer to the database on that sim where the texture is stored. Notice that under the current model, a texture is not tied to a particular sim – it’s unique across the whole grid, but you don’t know where it’s stored, and that’s why you need a central asset server.
But under Moon’s model, you don’t need that at all. Avatars, their inventory, and all textures/sounds/anims stored in a sim will have special keys which uniquely identify them as belonging to the sim. This means that you authenticate on a single sim, and retrieve textures/objects from inventory by looking at the needed keys, and asking the appropriate sim for the asset. No need for central databases at all!
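A minimal sketch of what such sim-scoped keys could look like – I’m assuming a simple “host/local-id” format here purely for illustration, which is certainly not LL’s real UUID layout:

    # Sketch, assuming keys of the form "<sim hostname>/<local asset id>".
    # Nothing central: the key itself says which sim to ask.

    def make_key(sim_host, local_id):
        return f"{sim_host}/{local_id}"

    def resolve(key):
        """Split a sim-scoped key back into (host to ask, id to ask for)."""
        sim_host, local_id = key.split("/", 1)
        return sim_host, local_id

    key = make_key("sim456.agni.lindenlab.com", "texture-0042")
    print(resolve(key))   # ('sim456.agni.lindenlab.com', 'texture-0042')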
The rest of the system needs no further modification – so, whenever you rez an object which has a texture stored on another sim, you retrieve the texture from that sim, and cache it locally. When an avatar crosses borders, items are simply copied from one sim to another. A cleverly designed cache system will expire useless data after a while (I imagine that this very same system is already in place, anyway).
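Something along these lines, perhaps – a toy cache that throws away anything untouched for an hour (an arbitrary number), and asks the owning sim again only when needed; the fetch_from_owner_sim callback here is just a placeholder of my own invention:

    import time

    CACHE_TTL = 3600.0          # keep unused assets for an hour (arbitrary number)
    _cache = {}                 # key -> (asset bytes, time last used)

    def cache_get(key, fetch_from_owner_sim):
        now = time.time()
        # throw away anything nobody has touched for a while
        for k, (_, last_used) in list(_cache.items()):
            if now - last_used > CACHE_TTL:
                del _cache[k]
        if key in _cache:
            data, _ = _cache[key]
        else:
            data = fetch_from_owner_sim(key)   # ask the sim named in the key
        _cache[key] = (data, now)
        return data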
So this means that if you get a copy of the server software and install it, you don’t need to “tie in” with any of LL’s central servers. And if your server is down, the only thing that happens is that all the textures stored there will be replaced by a “missing image” texture, and all users created on that sim will not be able to log in. So, instead of a central login server, you’d have 700+ login servers (and, of course, asset servers…), spreading the load nicely among them. The current model seems to favour 500 users per sim computer, so this means that you can grow the grid as large as you want. Also, even if the world overall has gigabytes and gigabytes of storage, this isn’t too stressful for the poor local caches. After all, you have both a prim limit and an avatar limit per sim, and this translates into a maximum number of textures that need to be cached. It’s easy to plan! Individuals running their own hardware could provide sims with more (or less) prims and larger (or smaller) avatar limits, tweaking the numbers to allow for the best performance.
Why didn’t LL favour this model? Well, one good reason comes to mind: “fake” authentication. Under such a decentralized system, how could you ensure that login names are unique? Sure, it’s easy to assign unique UUIDs even on decentralized systems, but how can I guarantee that there is no other Gwyneth Llewelyn, who has been registered on another sim?
Let’s put that issue on hold for a bit. Under a wholly decentralized system, how would Linden Lab make any money? After all, if the server software were given away for free, you could register your own users locally, and never pay LL any fees…
We must take a look at how the Internet works to understand my proposal for a financially sound system. On the Internet, you can buy a machine, hook it up to the Internet, and serve Web pages, using open source software. You don’t need to pay anyone for the “privilege” of hosting Web sites. This is what so many residents want – “free” hosting of SL sims!
Think again. Yes, you pay for the privilege of being “hooked up” to the Internet. Remember, you need a domain name. And this domain name has to be registered with a “central authority” – the Domain Name System. For this registration, you have to pay a small fee.
I propose that Linden Lab use a similar system. Each time anyone registers their username at LL’s web site, they pay a small fee, and get an encrypted certificate in return, with LL acting as a certification authority. In the same manner, you can run your own sim server, but it also needs an encrypted certificate from LL to work. When “your” user logs in at your own sim, the UUID which is generated is encrypted with a key provided by LL. If you don’t use that key, well, you may register with an “isolated sim” and have your fun there. But you won’t be able to access any content on the main grid, nor export your own content there.
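To give a flavour of how a sim could prove its keys were minted under an LL-issued credential, here is a toy sketch. A real deployment would use proper public-key certificates; I’m standing in an HMAC for the signature just to keep the example tiny, and every name in it is invented:

    import hmac, hashlib

    # Toy stand-in for the certificate idea: LL hands each paid-up sim a secret,
    # and keys minted by that sim carry a tag the grid can verify.

    LL_ISSUED_SECRET = b"secret-handed-out-by-LL-at-registration"   # hypothetical

    def mint_grid_uuid(local_uuid):
        tag = hmac.new(LL_ISSUED_SECRET, local_uuid.encode(), hashlib.sha256).hexdigest()[:16]
        return f"{local_uuid}:{tag}"

    def accepted_on_main_grid(grid_uuid):
        local_uuid, tag = grid_uuid.rsplit(":", 1)
        expected = hmac.new(LL_ISSUED_SECRET, local_uuid.encode(), hashlib.sha256).hexdigest()[:16]
        return hmac.compare_digest(tag, expected)

    uid = mint_grid_uuid("sim456/gwyneth-llewelyn")
    print(accepted_on_main_grid(uid))                              # True
    print(accepted_on_main_grid("sim456/fake:0000000000000000"))   # False -- stays an isolated sim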
As you see, this is very similar to the whole concept of internet vs. intranets. Inside your intranet, you can allocate IP addresses at will, create names for your machines, and do pretty much whatever you wish – except, of course, roam wildly among the larger Internet. For that, you need to get a “valid” IP address, and to offer content, a “valid” domain name address (www.somethingorother.com). You pay for that “privilege”!
This model could – and should – be exploited by Linden Lab. This would mean they would still have that so desirable “control” over the virtual world, the metaverse built with SL tools. Also, they would be able to make sure that people don’t tweak the source code too much to a point where it becomes “incompatible” with the rest of the world. People could certainly contribute bug fixes and several improvements, or change things radically at their own servers – but if they wished to be “a part of Second Life”, they would need to make sure all their software is still 100% compatible with the main grid, or LL wouldn’t be able to give them a valid certificate.
Also, this model allows for “outsourcing” and “delegation” policies – just like the current DNS model, as Morgaine Dinova suggested in a comment on Philip Linden’s blog. LL could certify other certificate authorities, allowing them to issue their own certificates, and charge a fee for that privilege. Healthy competition would allow these certificate authorities to give out valid encryption keys under better pricing models or structures. And still LL would be able to keep control over everything, like they want.
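The delegation itself could look something like this – a deliberately simplified sketch where each “certificate” merely records who issued it, whereas real code would verify cryptographic signatures all the way up the chain:

    # A certificate is acceptable if its issuer chain ends at LL.
    CERTS = {
        "LL-root":         {"issuer": None},           # Linden Lab, the root authority
        "MetaCert-Inc":    {"issuer": "LL-root"},      # hypothetical delegated authority
        "sim.example.org": {"issuer": "MetaCert-Inc"},
    }

    def chains_to_ll(name, depth=10):
        """Walk up the issuer chain and see whether it ends at LL."""
        while name is not None and depth > 0:
            if name == "LL-root":
                return True
            name = CERTS.get(name, {}).get("issuer")
            depth -= 1
        return False

    print(chains_to_ll("sim.example.org"))   # True: certified via a delegated authority
    print(chains_to_ll("rogue-sim.net"))     # False: no valid chain, stays an isolated grid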
Imagine now that the future holds mega-content hosters, with thousands of servers of their own, able to register hundreds of thousands of users on grids separate from LL’s main grid. Now, what would be LL’s relationship with those mega-grids? Again, we can envision a system similar to the one adopted by telecom carriers (and later by Internet Service Providers): so-called “peering agreements”. Basically, the idea is that two networks of comparable size – say, a million users spread between them – would be more than willing to interconnect for free. If a smaller network wants to join, most of the traffic (in our case: exchange of objects, textures, sounds, animations, IMs, etc…) will be pretty much one-sided, i.e. the bigger network will sustain most of the traffic, and the smaller one will contribute significantly less. So, under this model – used by the big ISPs to exchange traffic between them – the smaller network pays the bigger one for the privilege of being connected. As the smaller network grows and grows, it comes to a point where traffic between the two is roughly equal. At that point, it doesn’t make sense to charge any more.
This model allows for “cartels” – i.e. the biggest corporations are the ones that exchange traffic for free among themselves, and charge the smaller ones, so that they don’t grow as easily. This sort of mentality also fits well into LL’s view of “world control” (if not actual “domination”). In SL terms, this would mean that LL competitors who charge much less in setup fees or tier (i.e. land usage fees) – thus becoming a “threat” to LL’s “monopoly” on the main grid – would need to back those prices up with enough money to pay for the peering agreement. So, to compete with LL’s prices, you need strong financial backing and an excellent business plan. But this also means that, in the long term, more and more financially sound companies would help the metaverse to grow.
After all, that’s how the Internet became more and more stable. The tiny ISPs were almost all bought by larger ISPs. “Tiny” sometimes means better prices and customer support, but also much more instability. As the ISPs grew in size, they were able to get more redundant connections, better deals on their peering agreements, and better service for their customers. If they managed to keep all this while offering lower prices, they would thrive and succeed.
So, this is another case where technology and a solid business plan go hand in hand! The good thing is, LL does not really need to “reinvent the wheel” when designing a system that allows the expansion of the metaverse, while letting them keep “control” of the technology (and open source the code at the same time) and even make a profit from it. All good reasons for Linden Lab to review their plans for the immediate future.
Architects of the Metaverse, rejoice!
A tiny note at the end. After browsing the forums recently, I actually found out that dozens of different people have reached the same conclusions as myself, at about the same time! So, please don’t quote me as being an original person – after all, it looks like several of us “deep thinkers” have come to the same conclusions, independently of each other! Just take a recent look at the forums, search for “open source”, and see what people already have written about it. It’s great to see that we seem to share the same ideas and thoughts. Truly, Linden Lab must share some of them as well. The coincidences are too many!
As I initially wrote, the cogs and wheels are definitely spinning…
Prokofy Neva
Apr 19th, 2005
Wow, very instructive Gwyn. And wow, all this time, I didn’t really think about it, but I thought Moon’s idea was what *already* was happening. I thought the forcibly-assigned last names in the game were attached to some shards or servers or whatever, that housed that av’s info, clothes, etc., then ported that stuff all around the world later, for better and worse. What *is* the reason for the forced last names, then? And I tend to agree that making SL open source could encourage faster innovation and development. But as much as I rail and rant about SL and all its obstacles, I’m actually for SL remaining as a kind of decompression chamber/adaptation world where people get their feet wet in metaverse meta-adversity. SL could still play that indoctrination and funnelling role — they will help people find the niche where others using the open source will go. It’s kinda like the Linden builds — they look so much better, and give such security to the world, and people value them so much more, though we all want freedom for player builds.
Last night, when my entire sim simply disappeared from the map and the teleport list, I wondered what was going on. After an hour, I looked at its FPS and saw it was at 486 again and cranked again at the server switcheroo pawned off on me some months ago after this relatively new sim’s birth. And I IM’d a friend to complain about this usual performance problem, and the sim seams problem, and other issues, and he said to me “because it’s all a HACK”. Like, they started on one server, then realized they needed another server, patched the two together, and…got it running but then…they had to use that hack/patch again…and again. I honestly don’t want to think too hard about how that hack and patch works LOL because I bet it is held together on a wing and a prayer. Now, like the Cat in the Hat, they have 600 of these spinning plates in motion. Can they do it? I don’t think so. And the only direction to go in is the open sourcing and explorations you suggest.
Randal
Apr 22nd, 2005
Interesting ideas! I would just like to add another possibility. As I understood in your proposal, assets would be hosted on one server, addressed by a URI and cached locally.
Let’s say a special event takes place and all of a sudden thousands of clients start requesting assets from that server (they don’t yet have the assets cached) resulting in a server meltdown.
A simple enhancement would be to implement some sort of P2P model for asset retrieval – something akin to BitTorrent. The source server would host a tracker and clients would automatically share their caches, thereby reducing the burden on the popular server. Of course, some sort of digital signature/hash would confirm the authenticity of the assets to prevent peers from tampering locally with the files.
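A rough sketch of that integrity check – the peer API here is invented, and the tracker/swarm machinery is left out entirely:

    import hashlib

    # Whatever peer hands us the bytes, only the hash published by the
    # source server decides whether we keep them.

    def fetch_via_peers(asset_id, trusted_hash, peers):
        for peer in peers:
            data = peer.get(asset_id)            # hypothetical peer API
            if data is not None and hashlib.sha256(data).hexdigest() == trusted_hash:
                return data                      # genuine copy, accept it
        return None                              # fall back to the source server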
Marisa Uritsky
Apr 23rd, 2005
Wonderful ideas Gwyn!! I wish such ideas could be truly implemented, hopefully one day they will. For now, we will all handle the simple luxury of our lives and take in what we can.