
Cyberspace Engineers

Engineering Cyberspace!

Technical Q&A

 

Introduction

The Gamasutra discussion boards have seen much discussion of the technology intended to form the design principles behind our vision of cyberspace, often prompted by Crosbie Fitch's Gamasutra articles (Cyberspace in the 21st Century). Presenting that material in Q&A form is an effective way of covering the main technical issues and answering the questions someone interested in the technology we're proposing is likely to ask.

The following first section covers a variety of issues raised by a diverse range of Gamasutra readers, typically involved in the technical side of games design and development. Some of these are addressed by other readers, some by Crosbie Fitch.

The second section covers a lengthy exchange of more searching questions initiated by Intel’s Charles Congdon (3D R&D Labs), and answers from Crosbie Fitch.

Security

A good link and security in a Dynamic Hierarchy

Brent Smith  Posted: 06:24pm Jan 2, 2001 ET

 

Thanks for the great article! Very thought provoking.

The Mariposa Project at Berkeley has some interesting ideas about using economic models to balance and migrate data and query processing over large numbers of nodes in a distributed database system. The algorithms seem applicable. Here's a blurb:

"Mariposa allows DBMSs which are far apart and under different administrative domains to work together to process queries, by means of an economic paradigm in which processing sites buy and sell data and query processing services. Mariposa has been designed with the following principles in mind:

Scalability to a large number of cooperating sites.
Local autonomy.
Data mobility.
No global synchronization.
Easily configurable policies."

My biggest concern for a system like this is managing security across distributed nodes. It would be too easy for a few bad apples to destroy the experience for everyone 'downstream' from them. This situation already crops up in today's shooter client/server and RTS peer-to-peer games where rogue server operators modify the game to gain an advantage. The problem would be magnified 1000-fold in a massively distributed system.

To remain distributed, you may have to accept the fact that some percentage of your nodes will always be compromised. To combat this, you could implement some form of Reputation Manager as part of your replication architecture that, over time, discourages nodes from interacting with rogue ones. This would still permit localized problems but hopefully, over time, the rogues would have less and less influence over the global state. Might lead to some interesting gaming situations. One day your avatar might walk through a section of the world resembling a wild-west town, and return the next day to find it temporarily morphed into a red-light district.
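A Reputation Manager of the kind described here could be sketched roughly as below (a toy illustration in Python; the class name, parameters, and decay policy are all hypothetical):

```python
class ReputationManager:
    """Tracks per-node reputation: detected misbehaviour cuts it sharply,
    clean interactions rebuild it only slowly."""

    def __init__(self, initial=1.0, penalty=0.5, recovery=0.01, floor=0.05):
        self.scores = {}          # node_id -> reputation in (0, 1]
        self.initial = initial
        self.penalty = penalty    # multiplicative hit per detected anomaly
        self.recovery = recovery  # small additive recovery per clean interaction
        self.floor = floor        # below this, stop interacting with the node

    def report_anomaly(self, node_id):
        s = self.scores.get(node_id, self.initial)
        self.scores[node_id] = s * self.penalty

    def report_clean(self, node_id):
        s = self.scores.get(node_id, self.initial)
        self.scores[node_id] = min(1.0, s + self.recovery)

    def should_interact(self, node_id):
        return self.scores.get(node_id, self.initial) >= self.floor


rep = ReputationManager()
for _ in range(5):
    rep.report_anomaly("rogue")
print(rep.should_interact("rogue"))  # False: five strikes drop it below the floor
```

The asymmetry (halving on each anomaly, creeping recovery) is what gives rogues "less and less influence over the global state" over time while still letting a wrongly flagged node eventually rehabilitate itself.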

I'm looking forward to hearing what you have to say on this issue.

Thanks,

Brent

Security and compromise

Jon Watte  Posted: 11:32pm Jan 3, 2001 ET

 

Another approach to security is to use "consensus" on important events; i.e. let the 5 most interested machines vote, and take the majority vote. Or let the 3 most interested machines vote, and only let the event happen if the vote is unanimous.

Yes, some unscrupulous person or gang of persons could add some number of nodes to the network, and "rig the vote" so to speak. However, that means they are the most interested in that particular part of the world, so if you enter that part, you're in their world now. After all, if their machines run some specific part of the world, why SHOULDN'T they be allowed to set the rules? Reputation should fix the worst abominations over time.

Another solution (or orthogonal, I haven't decided yet :-) is to let users "out-source" their serving to service providers. Your machine isn't on the net all the time, but your ISP's is, barring catastrophic failure. People might feel better doing business in a "trusted" area or with "trusted" escrow agents. In a sense, we'd import some of the same mechanisms we use in real life to have reasonably fair dealings in the normal case. Think "Title Insurance". 

Security using consensus

Crosbie Fitch  Posted: 08:00am Jan 5, 2001 ET

 

Well the way the distributed system works at its simplest is to be careful in choosing who gets to determine a particular object's state.

Now that's pretty straightforward. If you start needing to select several nodes to determine an object's state by a voting procedure, it adds considerable complexity to the system for not much advantage. Given only 3 or even 5 samples, it is difficult to tell whether one or more of the values is plain wrong or simply divergent.

It's better to have a background process on every node that monitors a selection of values and the way the incoming update disagrees with the locally computed value over time. One can start to get a good idea of normal behaviour (related to a latency measure) and this measure can be distributed with the value. As soon as a node exhibits abnormal behaviour, i.e. the fluctuations in the disagreement exhibit tampering, then the integrity metric is reduced for that node.
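A minimal sketch of such a background monitor (the window size, z-score threshold, and integrity decay are all hypothetical; a real implementation would fold the latency measure into the baseline as described above):

```python
import statistics
from collections import deque


class IntegrityMonitor:
    """Compares each incoming update against the locally computed value.
    Normal divergence (latency, floating-point drift) forms a baseline;
    a disagreement far outside it lowers the sender's integrity metric."""

    def __init__(self, window=50, threshold=4.0):
        self.history = deque(maxlen=window)  # recent disagreements
        self.integrity = 1.0
        self.threshold = threshold           # z-score considered abnormal

    def observe(self, incoming, local):
        d = abs(incoming - local)
        if len(self.history) >= 10:          # wait for a usable baseline
            mean = statistics.mean(self.history)
            sd = statistics.pstdev(self.history) or 1e-9
            if (d - mean) / sd > self.threshold:
                self.integrity *= 0.8        # looks like tampering: mark down
        self.history.append(d)
        return self.integrity


mon = IntegrityMonitor()
for step in range(30):                        # normal jitter forms the baseline
    mon.observe(incoming=(step % 3) * 0.1, local=0.0)
print(mon.observe(incoming=100.0, local=0.0))  # a blatant edit: drops to 0.8
```

Note the patience built in: a single outlier only dents the metric, and because we "don't need to detect corruption immediately" the threshold can be set conservatively.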

The thing is all interested nodes will be aware of how a value is computed (they all have the same script) and if instead of being computed, it is replaced with a human selected value, then this will stick out like a sore thumb in the historical record. Note that we don't need to detect corruption immediately, we can be patient.

Any values that are dependent upon user controls, well, their normal behaviour will accord with this. And it may even be possible to detect a sudden improvement in user dexterity, but I'm not even sure we should worry too much if robots start playing the game, that's just super-human behaviour, it's not necessarily tampering with the object store. But, we can still monitor it and publish the fact that BigZupit happens to exceed known human dexterity.

When you have millions of nodes, then it is very difficult for unscrupulous persons to try and compete with that. The most they could do would be to cheat themselves. But, then everyone's free to create their own game - if you want a game where some players have 100 strength and other players have 20 strength, well go ahead and create it - see how many players join up.

As for ISPs doing the modelling, or 'trusted' PCs only, we lose the major advantage of a P2P system, i.e. that the infrastructure is provided by the players. If you can't cope with entrusting modelling to the player's computer then you might as well use a client/server approach (replicated servers if necessary), there isn't that much economic advantage.

If you think about the explosive growth potential of a P2P game (cf Napster) growing to millions in the space of a few months, then you can't mess about organising a bunch of ISPs to join in. They can join in by all means, but they shouldn't be a requirement.

This is why you have a scalable system. It's not because you necessarily need 40 million players, it's because you can't be arsed with all the admin hassle and player whingeing when the system grinds to a halt at 5,000 players, 100,000 players, or whatever non-scalable ceiling you thought would be good enough.

One piece of software - any number of players - zero admin

What could be simpler?

Security in Distributed Games

Crosbie Fitch  Posted: 07:21am Jan 5, 2001 ET

 

Yup, reputation monitoring is one of the approaches I had in mind. Given that there is already going to be a need to monitor the performance of computers in order to optimise the load balancing, modelling responsibility, and communications relationships, there will also be monitoring of integrity. The integrity measure will be added to the heuristics used to do this.

The trick is in observing that whilst a hacker would have little difficulty mucking about with their local object store, it would be difficult for them to prevent an apparently random selection of neighbouring nodes (out of millions) detecting the statistically significant non-random variations in the discrepancies observed in locally computed values versus the hacker's values.

In effect one can dedicate a tiny fraction of the power in what is also a distributed processing system to the task of detecting wayward nodes. One will automatically start reducing their status as reliable computing nodes; however, one might also flag such nodes to anyone interested in them as suspicious. Indeed one could use peer pressure as a social police force, e.g. "Significant discrepancies have arisen in FruggleMan's node (142.35.12.5) that have a 0.01% statistical likelihood of arising through network noise, lost packets, SIMM errors, etc."
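That "0.01% statistical likelihood" could come from a simple binomial tail: how probable is it that benign noise alone produces the observed number of flagged discrepancies? A sketch, where the per-sample noise probability is an assumed parameter:

```python
from math import comb


def noise_only_likelihood(disagreements, samples, p_noise=0.01):
    """P(at least `disagreements` flagged samples out of `samples`) if each
    sample independently disagrees with probability p_noise (lost packets,
    memory errors, etc.). A vanishing result justifies a public flag like
    the FruggleMan message above."""
    return sum(comb(samples, k) * p_noise**k * (1 - p_noise)**(samples - k)
               for k in range(disagreements, samples + 1))


# 8 flagged samples out of 100, when noise should produce about 1:
print(f"{noise_only_likelihood(8, 100):.1e}")  # vanishingly small under noise alone
```

The point is not the exact model (real discrepancies are neither independent nor identically distributed) but that the accusation can carry a quantified likelihood rather than a bare suspicion.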

This system can be compared to the human immune system, i.e. corruption is not prevented, rather it is detected, and a force is raised to stamp it out (with luck).

I note that "Dig It Network Multimedia, Inc" reckon they have a solution (www.serverlessgames.com). I'm wondering if this is by using a public key encryption approach.

Incidentally, I am confident that like Napster, mankind's prurient interests will always subvert p2p applications. However, we just need to always ensure that users have facilities to act as their own police force, i.e. players rate their own content, and one relies upon the majority to provide a good enough appraisal that other players can select what they want to see, e.g. adult-only or family-safe.

In an open system (like the human body) it is extremely difficult to prevent corruption (viruses) or damage (knock on the head), but as long as the system has measures in place to monitor its own integrity and adapt quicker than the cause, then the system will remain viable, though it might get knocked down now and then (disease).

If anyone is able to create a software virus that infects the software running the distributed system in every player's computer (as opposed to the game data) then this would be a severe problem (like a virus that rapidly inserts itself into all DNA). However, we'll just have to use existing techniques to address this problem like any other software virus.

The server based paradigm replies...

Rolf Jansson  Posted: 07:32am Jan 5, 2001 ET

 

First, let me congratulate Mr Fitch on a great series of articles which really opened my eyes to some new issues. Then let me counter his articles with a few crude issues he must, and probably will, deal with in the upcoming article(s).

I am sworn to the server-based game philosophy. I believe security, as Mr Watte so kindly points out, will be the main issue here. The Gamasutra article on cheating in internet-based games, which I believe everyone participating in this discussion has read (or should have), pretty much describes what I am aiming at. Clients can never be responsible for data, states, events etc. _unmonitored_. I would not trust my money with anonymous people. I go to banks. I could, however, trust institutions/servers that have been certified by my bank.

There are some serious problems relating to ownership, as people point out, and I am afraid that what Crosbie will realize, or already has realized, is that some kind of trust chain has to be established, much like that for certificates issued by Verisign.

When I go offline, Mr Watte suggests I should transfer my responsibility up the chain to my ISP. This is the same thing as going bankrupt and returning what is left to the bank. So we are really talking server-based gaming here with a distributed model. How many ISPs do the Everquest owners Verant trust with their server code? Okay, that might be an unrelated economic question, but it is still all about trusting other clients, and in the end people, with valuable property, event and state handling.

Do we even see a police system arising to keep track of other servers/peers? Yes, perhaps. The police could in fact be the server side of these games, checking for cheaters and authorizing peers to speak with each other. We could imagine a hierarchy of servers acting as police servers, each in charge of the security of a number of clients. This would resemble today's DNS system, I guess. Perhaps there will be a central institution governing the IWS (Internet World System) of servers in the future :) These servers could also keep score of what jobs/event handling/ownership they have distributed to their clients, and be responsible for distributing the games themselves.

I still believe those police servers could always be compromised/fooled, and problems will arise if we make those servers part of the peer-to-peer system where computers pop in and out all the time. What I am saying is that either a) this discussion will eventually come up with an advanced client/server system (which is actually what I thought the end of the last article was pointing at), or b) there will be peer-to-peer communication policed and managed by a server-based hierarchy. If we take these worlds seriously, _and we do_, we will put resources into some kind of structure to make them reliable. It will be a dynamic hierarchy, but not quite in the sense I feel Crosbie describes it. It will be much more institutionalised and standardized, like any other phenomenon on the internet.

Perhaps you already thought this out anyways:)

All good thinking starts by assuming anarchy, then refining it. We then end up with an advanced system for most things. We are building worlds here, where each client represents a person. Not all can be trusted. We invent institutions to govern this. This is how civilization came to be, isn’t it? Do we live in a server based or peer to peer based society today? I would say a peer to peer (human interaction) with server (state) policing and management for security and efficiency.

This model should be represented in the online gaming/world model, I guess, unless we discard thousands of years of experience in society building history.

The future, oh the future! Regards, Rolf

Security: P2P vs. Client/Server

Crosbie Fitch  Posted: 08:52am Jan 5, 2001 ET

 

Rolf,

I know there are some people wedded to client/server, if only because you get to put the server behind lock and key.

It's a bit of a leap, but I am one of those evangelists trying to convince people that there is the potential for a very useful system despite the apparently ludicrous idea of letting players' computers participate.

Remember, it's one hacker on one computer among many millions. Even 100 hackers operating in concert would be hard put to convince the rest of these millions of computers that Robin Hood is wearing blue socks instead of red ones. They'd probably find it easier to gang up in the game, kidnap Robin, get him to swap his socks over, and then return him whence he came.

Entertainment is one of those things where you don't need to worry too much if something's corrupted. As long as 99.9% of players have fun, who cares if 0.1% of players notice a discrepancy now and then? "Hang on a mo, I could have sworn I'd chopped that tree down yesterday..."

It's not like it's a case of "Oh sorry sir, but that seat you booked yesterday now seems to be double booked", or "That cheque for $10,000 seems to have been double-drawn".

The thing about P2P systems is that the system has to provide its own security measures. A P2P system does not have to be immune from corruption, it just has to be able to detect the culprit and adapt or recover from the damage. There isn't enough manpower on the planet to go around policing millions of computers.

We have the client/server mentality which says "There's no security like six-foot thick concrete walls". And we have the P2P mentality which says "Use the player's computer to police the system". On the one hand the player coughs up a load of dosh for all the server infrastructure, security guards, maintenance admin, etc. And on the other hand the player's computer does all the work. The player still ends up paying for it, but it seems a tad more equitable and a more efficient use of resources.

I note that you've suggested that in a network of peers (people) that some of these peers are used to police the network. I'm saying let's have everyone part of a neighbourhood watch scheme, i.e. rather than having 1% of the population 100% dedicated to policing, instead have 100% of the population 1% dedicated to policing.
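The "100% of the population 1% dedicated to policing" idea has a nice property: independent spot checks compound. If each of n interested nodes re-verifies, say, a random 1% of the updates it receives, the chance a forged update escapes everyone falls geometrically (a back-of-envelope sketch; the 1% figure is illustrative):

```python
def escape_probability(watchers, check_fraction=0.01):
    """Chance that a single forged update is re-computed by nobody, when
    each of `watchers` interested nodes independently re-checks a random
    fraction of the updates it receives."""
    return (1 - check_fraction) ** watchers


for n in (10, 100, 1000):
    # 10 watchers: a forgery usually slips by; 1000: it virtually never does
    print(n, f"{escape_probability(n):.4f}")
```

So a neighbourhood-watch scheme that is useless in a small neighbourhood becomes formidable precisely because the system is massive.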

In a peer to peer network everyone is the same (just not necessarily in terms of resources: CPU, RAM, Bandwidth, etc.)

P2P systems are not a threat to client/server. I'm sure the latter will be around forever. It's just that we haven't really explored the potential of the P2P approach (apart from the Web and apps like e-mail, IRC, etc.). Those games companies that want to exploit the patent security and integrity benefits of locking servers in cupboards can continue to do so. I'm simply suggesting that it isn't all doom and gloom to countenance the use of Jimmy's PC in his bedroom, however many tools he develops to hack the system.

I expect someone might have said to god "Oh no, god, you can't leave DNA in cells! It's too unprotected." and god will reply "Sorry, but I just want to set this thing running. I don't want the hassle of supervising evolution for the next billion years. And anyway, I've put some self-repair mechanisms in DNA so it won't be too bad. It'll be quite a few years before any creature figures out DNA too, but then things will really start getting interesting..."

More on security

Sam Minnee  Posted: 03:56am Jan 11, 2001 ET  

 

Hackers, while they will do anything they can to break the system, are 1 in a million. This has been said before, and would be the way to ensure (to a reasonable degree) the integrity of the system.

However, this cannot (well, not nearly as well) work with a one-owner-per-object system, as described in the Foundations II article. Because, if the owner is a hacker, all other nodes will follow the changes they make.

One way of solving the problem would be to look at the individual nodes as hosting bubbles of world-space. Inter-node communications happen at the surfaces of the bubbles, and can be thought of as objects entering/exiting the bubble. A node cannot know when stuff is due to enter its bubble, as that would require modelling the area outside the bubble, effectively increasing the bubble. So, a node is only going to push stuff to the spaces outside its bubble, and in turn receive stuff pushed into it.

As there will be many people hosting a given area, there are going to be lots of things pushed in. It's then not hard to statistically determine the un-hacked version of world-reality.

Of course, this will increase the required bandwidth, which is bad.

Possibly neighbouring bubbles could be given trust ratings by the majority. So, there will still be lots of nodes hosting a given object, but as long as it's trustworthy, only one will need to be accessed.
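That trust-rating idea might look roughly like this: take a sufficiently trusted host's state directly (so only one node need be accessed), otherwise fall back to a trust-weighted vote over everything pushed into the bubble (names and threshold are purely illustrative):

```python
from collections import defaultdict


def trusted_state(pushes, trust, threshold=0.8):
    """pushes: {node_id: state}; trust: {node_id: rating in [0, 1]}.
    If any single host is trusted enough, use its state directly;
    otherwise fall back to a trust-weighted vote over all pushes."""
    for node, state in pushes.items():
        if trust.get(node, 0.0) >= threshold:
            return state                      # one access suffices
    weights = defaultdict(float)
    for node, state in pushes.items():
        weights[state] += trust.get(node, 0.0)
    return max(weights, key=weights.get)      # statistically likely reality


print(trusted_state({"a": "S1", "b": "S2", "c": "S2"},
                    {"a": 0.9, "b": 0.3, "c": 0.3}))  # S1: host "a" is trusted
```

This captures the bandwidth trade-off directly: the expensive weighted vote only runs when no neighbouring bubble has earned enough trust to be believed outright.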

Hmmm, that probably lacked coherence. Ah well...

Sam

Online + Persistent + Secure + Distributed = Failure

Eric Robertson  Posted: 02:21pm Jul 27, 2001 ET  

 

I hope I won't dissuade you, but it's simply inefficient (currently impossible, in my humble opinion) with current OSes/hardware/networks.

I do hope you all can work together to document all the problems you will be having so as to inform others what to expect.

Things I’ve noticed from previous research into a distributed persistent online game:

One thing really, Security.

Sure, you can make watchdog daemons (separate client processes), but in order to truly confirm and control what other client resources are doing (or not doing) you would have to pretty much connect to each one, which in effect brings you back to an inefficiently implemented client/server model: inefficient in design and bandwidth.

I'm sure you will say that there can be lots of watchdogs "distributed" throughout the network, but unless the organization developing this game controls ALL these watchdog processes, security will be compromised. That takes you back again to the client/server model.

How important is security? In the short term of a game release, little. Players, for the most part, won't know about it and will develop their avatars (or areas) accordingly. (If you don't allow development of something, I wouldn't consider your game persistent; it's just another Tribes and/or chat server.) Short term, players would buy into your idea and play the game, and your short-term sales will be fine. Long term, you're toast, sadly.

Within the first few weeks, maybe months (depending on how much money and resources you spend on resource watchdog programming), resource duplication/modification will appear. Players will easily, using simple tools, determine the memory locations of all key data (even after every frantic update of yours to stop this) and slowly but surely manipulate your data in ways you frankly never thought possible. If it's not memory locations, they will simply mimic your client's interface messaging and make their own controllable client process. They might even re-write their Winsock DLL (any programmer with a year's experience can do it).

Long term, players who care about their avatar/base will quickly find out about the hacks and exploits and realize it's just a waste of their time. Trust is everything in persistent games. If you don't trust the game, you don't play.

To sum up, I love the idea of a distributed persistent online game, but unless you have complete control of the data (which, if it's distributed over current OSes and networks, you don't) it will fail. If you believe me, then going forward with this understanding and promoting a game under the false pretence that it's secure "and" reliable is fraud. The only way around misleading the customers is to tell them "Your data is not reliable and is not secure". How fun would that be?

I hope you all prove me wrong, because when you do, it will bring forth a new generation of game development.

Good luck!

No reliability and no security, and no fun?

Crosbie Fitch  Posted: 06:20am Jul 30, 2001 ET  

 

If you have sufficient reliability and sufficient security then you can have sufficient fun. You don't need perfect reliability and perfect security.

We get by in the real world with this. We have imperfect immune systems, imperfect policing and justice systems, imperfect banking systems, etc.

The idea that computers MUST provide absolute reliability and security is an illusory expectation caused by the fact that some programs appear to be 100% reliable and secure FOR A LIMITED TIME, i.e. until they fail or security is compromised.

Even public key encryption is not 100% secure (in the broader sense). Why? Because humans are involved.

The industry has got itself in a bind striving for an ultimately flawed ideal. 100% reliability and security is a brittle strength. Strong until it's broken, and then it's a pile of rubble. You go for a more flexible and adaptable quality and you get a resilient strength that although may appear weaker, is a better quality in the long term.

For all their flaws and perhaps because of them, human beings are one of the most perfect life-forms on this planet. Intelligent and good enough to co-operate, but stupid and corrupt enough not to allow a 'perfect' totalitarian state to last very long.

From this perspective, a 'totalitarian', perfectly reliable and secure system will crash and burn. I'm not betting on that horse. No way!

If you get a million computers, a fraction of which are continually and successfully hacking the system, well, as long as it's sustainable, it's not too bad. We get a few diseases now and then, jam our thumbs in the drawer, and maybe get a chronic illness, topped off by cancer or a car crash. But our life expectancy is pretty good. Societies can tolerate a fair amount of crime, occasional insurrection, even an ongoing amount of terrorism, but eventually collapse/reform due to invasion, war, famine, a power-crazed politician, etc.

So I agree with your point: "promoting a game with the false pretence that it's secure "and" reliable is fraud". I also agree that you have to be honest with players and tell them "Don't think for one minute that just because this game is supported by a network of computers it's going to be perfectly reliable and secure - it's not, but see for yourself whether it's fun or not".

How fun could it be? How much fun is life? Not perfectly reliable or secure, but most of us continue to play it, and pay quite a bit for it... is life fun for you?

Nearly all of us would like life to be more secure and more reliable, but we still have disease, disaster, war and crime. Ah well, at least it's fun - enough of the time.

Locating the Player’s Avatar

And what happens to your avatar when you log out?

Jos Yule  Posted: 02:15pm Jan 11, 2001 ET

 

With the system you have proposed so far (which I find very interesting), who becomes the owner of, and responsible for, your avatar object? Is an avatar just another object, like all the others? I've been assuming so. So, who takes care of it once you've left? How do you 'find' it again once you log back in?

Thanks! jos

Finding your avatar

Crosbie Fitch  Posted: 01:29pm Jan 12, 2001 ET  

 

The player isn't necessarily able to 'own' their avatar even if they're modelling it. Sometimes, even for the avatar, the player will have to relay their influence over the modelled object to a node further up the hierarchy (who does own it). Ownership doesn't dictate who gets to ensure persistence (or presence) of an object, it only determines who's best to arbitrate over modelling the object. There may be several other nodes duplicating its modelling, and even more who have some record of its relatively recent state.

So really you have to think of the system as containing a virtual world that carries on regardless of whether anyone's playing. It doesn't matter when a computer joins or leaves, the virtual world is unaffected. All players do is get to influence some objects. Now it's up to the game to decide how much choice the player has in choosing which objects they get to influence, but whatever that is, the player's computer will express interest in whichever objects the player gets to influence, and they will appear (slowish the very first time a player plays, but almost instant next time) in the correct scene.

Imagine looking in a crystal ball and saying "Show me what John Malkovich is up to right now". Gradually the scene materialises (and if the game thinks it's ok) you get to overrule John's actions (without him worrying about it too much).

Now, obviously, this assumes that we have enough total storage in the system that, despite players leaving and joining all the time and network outages happening, there's always enough resources left to keep track of what everyone was up to.

Patently, you can't have a game the size of Ultima Online if you've only got ten computers playing with 10 megabytes each. But if you have 5 million computers playing with 1 gigabyte each, well, even if 1 million computers keep leaving and joining, there's probably enough resources to keep things running just fine.

Yup it all depends on how it all pans out, but if for every 100 computers of X resources there's a computer of 10X resources then you can have a relatively deep hierarchy. If all computers are the same then you'd probably get a very shallow hierarchy.
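That 1-in-100 ratio means the depth of the hierarchy grows only logarithmically with player count, which is why the numbers stay manageable (a back-of-envelope sketch; the fan-out figure is Crosbie's illustrative 100):

```python
from math import ceil


def hierarchy_depth(nodes, fanout=100):
    """Rough tier count of the dynamic hierarchy if each node at one tier
    can parent about `fanout` nodes of the tier below."""
    depth = 1
    while nodes > 1:
        nodes = ceil(nodes / fanout)  # one parent per `fanout` children
        depth += 1
    return depth


print(hierarchy_depth(5_000_000))  # 5: a handful of tiers covers millions
```

With identical machines the effective fan-out is smaller and the hierarchy correspondingly shallower and wider, as noted above.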

I think we need a little goal definition here

Jon Watte  Posted: 11:15pm Jan 13, 2001 ET  

 

That's fine if you're building "cyberspace" at large.

However, I'd still like to know how the system lets you find John Malkovich without using global broadcasts (which are clearly too expensive in this kind of system) or a global directory service.

Also, I think that many games are fun because they have strict rules and when you beat your opponent, you did it fair and square. A game of StarCraft won't be at all as nice in a self-correcting world as something which is mostly social in nature; the basic design requires (mostly) perfect integrity.

Global Directory

Crosbie Fitch  Posted: 07:04am Jan 14, 2001 ET  

 

Yup, the system is effectively its own global directory service.

Any node (given its relationships with other nodes) knows how to reliably and quickly obtain details of any object you can identify.

Any node (given its relationships with other nodes) knows how to reliably and quickly obtain details of as many objects as you want that fall within your interest criteria.

* *

There is indeed one tiny little paradigm shift you have to make, i.e. think in terms of a best-effort modelling system, and therefore think of a game design that embraces this.

You can have a small number of players and a synchronised system, or a large number of players and a system that does its best to get as close to synchrony as possible, but never worries about actually getting there.

If you're happy designing 16 player games, and don't need the headaches of 16 million (or even think there's any point to it) then that ground's pretty much covered. The new wild-west, the next revolution in online entertainment, is scalable. It's dangerous, inhospitable, and plain scary, but it's a whole new world out there!

Let no one think for one instant that this crazy new system is just like the PS2, a new platform with tons of industry backing behind it and a safe way to earn a living. No way! I'm talking about a couple of generations beyond Napster, Gnutella, MojoNation, FreeNet, etc.

Any intrepid game developers out there who don't mind that it may be a toss up as to whether you eat a mountain lion for breakfast, or it eats you, start kitting up and get ready to go.

But how can I be John Malkovich?

Jon Watte  Posted: 12:54am Jan 15, 2001 ET  

 

Supposing I want to start "being" John Malkovich, I need to first find out that there's no-one else doing the same job, because, sadly, there can be only one.

I understand your premise about relaxing synchronicity towards a "tendency for convergence," but if I want to add J M to my very-high-interest list, and take over the responsibility for that object, I need to know that no-one else has him/it already, and/or I need to find out who does it and negotiate for ownership.

Proving the negative requires ALL other active nodes to be involved, unless you think duplication is OK. The only other solution I see is one based on directories (which, admittedly, is a much more scalable and cheaper "server" solution than simulation servers, but still are not fully peer-to-peer). If you're purely peer-to-peer in a fully distributed system, the best you can do is start "being" or claiming ownership on J M, broadcast the fact to interested listeners, and then hope there's no negative coming in half an hour later as you just start to enjoy yourself.

Oh, and while we're on the movie clichés: I'll be back :-)

There can be only one

Crosbie Fitch  Posted: 12:16pm Jan 15, 2001 ET  

 

There's only one owner to any object or avatar, and there's a 'well known' way to track down the owner (ownership doesn't change particularly frequently).

However, ownership isn't used to determine which player gets to influence which avatar. Ownership is a node concept, used to determine which node gets to arbitrate over an object's state.
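One way the 'well known' owner lookup might work, given the dynamic hierarchy described in the articles (the `Node` structure here is purely illustrative, not the actual mechanism):

```python
class Node:
    """Hypothetical node in the dynamic hierarchy: a parent link plus a
    record of which node owns each object it arbitrates over."""

    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.owners = {}  # object_id -> owning node's name


def find_owner(node, object_id):
    """Walk up the hierarchy until an ancestor knows the owner. Since every
    object has exactly one owner and ownership changes rarely, the search
    terminates at the root in the worst case - no global broadcast needed."""
    while node is not None:
        if object_id in node.owners:
            return node.owners[object_id]
        node = node.parent
    return None


root = Node("root")
mid = Node("mid", parent=root)
leaf = Node("leaf", parent=mid)
root.owners["JM"] = "some_distant_node"
print(find_owner(leaf, "JM"))  # some_distant_node
```

This is what makes the system "its own global directory service": the answer to Jon's proving-the-negative worry is that the single-owner invariant is maintained structurally, so a lookup consults ancestors, not all active nodes.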

It's up to the game to determine the mechanism by which players choose what objects or avatars they are going to influence (variable degree of control).

The first time you play, the game might say "Hey, John Malkovich is right up your street, very little latency to him. No one else is running him at the mo, perhaps you'd like to try?" If the game chooses, it could allow any number of punters to attempt to influence a single avatar - it might get a little confusing sometimes, but the game might decide that multiple players were fine.

Once you know that you're interested in influencing JM, then your client will express interest in JM (and this will ensure JM is downloaded). Your node will then do its best to read and write your influence over JM with as little lag as possible. This may result in the JM object ending up owned by your node, but not if there are plenty of other players interested in what JM does from time to time, and who are interacting with him. In this case JM would be owned by a mutual parent (probably).

The game may decide to create a new avatar for each new player, and maintain a fairly permanent relationship.
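The ownership rule described above (an object that several nodes are interacting with ends up owned by a mutual parent) can be sketched as a search for the deepest common ancestor in the node hierarchy. This is a purely illustrative sketch; the `Node` class and `choose_owner` function are invented for the example, not taken from the articles.

```python
class Node:
    """A node in the distributed hierarchy (hypothetical illustration)."""
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent

    def ancestors(self):
        """Yield this node and every ancestor up to the root."""
        node = self
        while node is not None:
            yield node
            node = node.parent

def choose_owner(interested_nodes):
    """Pick an owner: the single interested node, or else the
    deepest mutual parent of all the interested nodes."""
    if len(interested_nodes) == 1:
        return interested_nodes[0]
    # Intersect ancestor chains, keeping the first node's depth order.
    common = list(interested_nodes[0].ancestors())
    for node in interested_nodes[1:]:
        theirs = set(node.ancestors())
        common = [n for n in common if n in theirs]
    return common[0]  # the deepest common ancestor
```

So a player interacting with JM alone may see JM owned by their own node, while two players under different parents push ownership up to the first node above both of them.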

The Need for Artificial Intelligence

Why do we need AI at all?

Duncan Kimpton  Posted: 02:11pm Jan 15, 2001 ET  

 

Why has everyone got so hooked up on the idea that large multi-player games need any AI at all?

Surely if the game is big enough then there would be no need for any significant AI, every animate object being played by a human. Possibly not many humans would want to play a rabbit, in which case some low-level AI may be needed, but for animals AI is simple.

Inanimate objects don't need any AI at all.

When a person logs in for the first time, simply create them a character that remains theirs for ever more, and also create them a safe room where they can leave that character between sessions. If the user should disconnect due to error, simply have their character put up a tent (or equivalent) and curl up to sleep. All their items would be there, and could optionally be open to stealing by other players, providing an incentive to get back online ASAP to rescue their character. (This could be done by the next most interested node, which could remain responsible for these items until further notice.)

anyone care to discuss?

Failsafe

Crosbie Fitch  Posted: 03:42pm Jan 15, 2001 ET  

 

It takes a tad of AI to get a character to curl up or pitch a tent (particularly if they happen to be descending by parachute in the presence of enemy fire).

The trouble with a large persistent online world is that if it is to appear as a separate world, then bits of it shouldn't keep stopping and starting just because some players have had to answer the door, go to sleep, or go to school.

This is why I tend to favour the idea of an independent world, in which occasional players, as short attention-span gods, meddle from time to time.

Think of an RTS, in which several players form groups in order to take shifts in running a particular country.

Or even the crew of a star-ship could each be controlled by a different player. Each crew member might go to their quarters whenever the player went offline. Not a very tight ship, but it might work. As long as there was always a crew, the ship would always be able to handle itself. If no players were playing then it would have to be left in some degree of safety, or it would need decent AI to cope in the event of trouble.

AI ? Depends what you mean by a LARGE multi-player game.

Jim Allen  Posted: 04:08pm Feb 15, 2001 ET  

 

I think "LARGE" is the primary concept here. If we consider size as player density (world size vs. number of players at any given moment) we can draw a picture of why AI is important.

If you have a constant high density, then yeah, who needs AI? But, if it's possible for people to be stranded alone with nothing to do, AI is essential. Maintaining a high density is either extremely difficult or very limiting in game design terms. The actual size of the game world has to contract and expand as people join and leave the game, otherwise you end up with crowding or the aforementioned stranding.

A more interesting notion, I think, is an AI that mimics the behaviours of the players. If you have a generalized system that can mimic and mutate player behaviours, then all you need is a starting AI strategy (maybe fleeing), and from there an AI is built up from the previous actions of players.

Jim...

Persistence

Peer-to-peer and MMORPG

Simon Larsen  Posted: 04:18am Feb 20, 2001 ET  

 

Hi,

First of all thanks to Crosbie Fitch for writing these great articles. What an eye opener.

I’m currently writing a small school thesis about peer-to-peer networks and MMORPGs. My thesis will be a comparison of the “traditional” client/server approach and the peer-to-peer way of “hosting” an MMORPG. There will be a summary of pros and cons, and a concluding recommendation for future MMORPG development.

All over the net I’ve seen discussions about this topic (most of the great ones here at Gamasutra). But most of the discussions don’t address whether or not it is really realistic. The problem, as I see it, is that an online universe created with the p2p model will be very difficult to manage and/or control. Because of the decentralized nature of the p2p model, things like collision control and player-created objects will be very hard to manage.

How, for example, can you ensure that if a player logs off after 10 hours of straight playing, having built a house in a far-off region, the house stays, even if the player remains offline for the next week? The problem that might occur is that some other player builds another building at the exact same spot, and when the first player logs on again you have TWO buildings on the same spot. How and where do you store player-made objects? If players build objects in (player-)crowded places, the object can automatically be transferred to some or all of the nearby players' game storage (hard disks), and in theory the object will now be stored forever, because at least one of the players' game storage will always be online. The problem described above might call for a central super-server that acts as backup for the online universe. But then it’s not a “pure p2p” model. And by pure I mean “Gnutella-like” pure, where there is no server at all.

I’m very interested in your opinions on the matter.

Simon

Re: Peer-to-peer and MMORPG

Niklas Smedberg  Posted: 07:35pm Feb 20, 2001 ET  

 

Yes, there must be a backup server that maintains the persistence of objects. Otherwise the whole world would disappear if no people were online for just a sec. The backup server would be just a normal client, so the p2p model isn't violated.

But I would like to know what would happen if a client suddenly crashes, without yielding its ownerships?

-Smedis

Persistence

Crosbie Fitch  Posted: 06:25am Mar 8, 2001 ET  

 

Remember we are distributing duplicate objects, so right there you can see that we are likely to have a fair bit of redundancy in terms of storage of these objects on various computers' hard disks.

What provides a persistence guarantee is a set of computers conspiring to ensure that, between them, they have an interest in every aspect of the virtual world. Note that they don't necessarily have to be particularly fast, just capacious.

As an avatar travels through the virtual world it will always be expressing interest in scenery in its vicinity (even beyond its view). So well before the avatar needs to see it, the interest that has percolated up the hierarchy will result in a stream of subsequent updates percolating back down - just in time.

There may be a particularly uninteresting piece of the virtual world, a desert say, but an avatar many years ago buried some treasure in the middle of the desert. The details of the relevant objects would have been passed up the hierarchy and would ultimately have ended up on a DVD-RAM jukebox somewhere, because perhaps there's a nice computer that flushes its hard disk of 'least interesting objects' to near-line storage. Years later, hardly any computer has this info concerning the centre of the desert. However, when one day an avatar finds the treasure map and rides into the desert, eventually, that nice computer will check its index and retrieve the treasure objects from its near-line store, and then pass them down to the player's computer (via intermediate nodes).
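The 'nice computer' imagined here (flushing its least interesting objects to near-line storage and retrieving them when interest returns) behaves like a cache with interest-based eviction. Below is a toy sketch, with in-memory dicts standing in for the hard disk and the DVD-RAM jukebox; all names are invented for illustration.

```python
class PersistenceNode:
    """Toy archive node: keeps hot objects on 'disk', flushes the
    least interesting to 'near-line' storage, and retrieves them
    when an interest arrives (all storage here is just dicts)."""
    def __init__(self, disk_capacity=2):
        self.disk = {}       # object id -> (interest level, data)
        self.nearline = {}   # stand-in for the DVD-RAM jukebox
        self.capacity = disk_capacity

    def store(self, obj_id, interest, data):
        self.disk[obj_id] = (interest, data)
        while len(self.disk) > self.capacity:
            # Flush the least interesting object to near-line storage.
            coldest = min(self.disk, key=lambda k: self.disk[k][0])
            self.nearline[coldest] = self.disk.pop(coldest)

    def fetch(self, obj_id):
        """Serve an interest: check disk, fall back to the jukebox."""
        if obj_id in self.disk:
            return self.disk[obj_id][1]
        if obj_id in self.nearline:
            entry = self.nearline.pop(obj_id)
            self.store(obj_id, *entry)  # promote back towards disk
            return entry[1]
        return None
```

The buried treasure is never lost: it just migrates to ever-cheaper storage until someone expresses interest in it again.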

If you doubt that such nice computers will exist, consider Deja News and Google. Someone will spot an 'opportunity' for ensuring the persistence of a colossal virtual universe.

As to what happens when a client suddenly crashes...

It doesn't matter. In the event of a child becoming disconnected from its parent (timeout), ownership implicitly reverts to the parent. There is no way the child can legitimately retain ownership. The child will also be aware of the timeout and is obliged to stop thinking it retains ownership. If the child has crashed, well, it will then be incommunicado full stop - it won't even be able to say whether it has ownership or not. Of course, nodes can get hacked and lie about what they own, but then we have to have other means to deal with that. As long as nodes are operating ok, then whatever the condition of their connections, ownership will be tracked reliably.
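The timeout rule above can be sketched as a lease that both sides evaluate locally: the child holds ownership only while it keeps renewing within the window, so parent and child agree on the owner without exchanging a message. The class and method names here are hypothetical illustration, not the actual protocol.

```python
import time

class OwnershipLease:
    """Ownership held by a child node, valid only while the child
    keeps renewing within the timeout window (hypothetical sketch)."""
    def __init__(self, parent, child, timeout=5.0):
        self.parent = parent
        self.child = child
        self.timeout = timeout
        self.last_renewed = time.monotonic()

    def renew(self):
        """Child calls this on every heartbeat to retain ownership."""
        self.last_renewed = time.monotonic()

    def owner(self):
        """Ownership reverts implicitly to the parent on timeout.
        Both sides can compute this locally, so a crashed child
        simply loses ownership without any hand-over traffic."""
        if time.monotonic() - self.last_renewed > self.timeout:
            return self.parent
        return self.child
```

This is why a sudden crash is harmless: the parent's copy of the lease expires and the parent is once again the arbiter of the object's state.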

Freenet-model persistence?

Hugh Pyle  Posted: 04:38am Sep 27, 2001 ET  

 

I see several different approaches to persistence. The "interest-driven" model sounds quite similar to Freenet, where the physical content tends to migrate towards those nodes which have expressed an interest in it.

Networking Issues (Peer-to-Peer idiom)

p2p and mmorpg

Paul Garceau  Posted: 08:46pm Mar 6, 2001 ET  

 

Simon writes:

How, for example, can you ensure that if a player logs off after 10 hours of straight playing, having built a house in a far-off region, the house stays, even if the player remains offline for the next week? The problem that might occur is that some other player builds another building at the exact same spot, and when the first player logs on again you have TWO buildings on the same spot. How and where do you store player-made objects?

First, we need to outline what defines a p2p network...that has been done, and you are referencing what you consider to be a "pure" p2p model. You indicate that there is no such thing except for "Gnutella like" p2p network models...can you tell us more about what you mean by "Gnutella like"?

Second, given the quote, if each player has their own computer, and each computer is part of a large p2p network, then the network itself tends to take on a certain amount of intelligence as does each peer...the network knows where it ends and other networks begin by simply looking up its table of peers...this is not a server process. My question then is, how do you think a scenario such as the one quoted here, might be implemented?

re: Peer-to-peer and MMORPG

Alex Choi  Posted: 02:40am Feb 21, 2001 ET  

 

Gnutella-like pure? =)

Honestly, I can't think of anything that will effectively use a Gnutella-like pure p2p model. The problem with not having a central server is that it leaves everything up for grabs. As in the house example that you listed, the tendency to be overwritten can be affected by almost anything. If you wanted to effectively build a house, since there is no 'central nervous system,' you would have to basically tell every single connected person, "Hey! Got a house here! So that means you can't build here.." and every computer would have to reply, "OK!"

And what happens when one computer goes, "No, I'm gonna build there anyways" (aka he's bending/breaking the rules)? Or what happens when one computer doesn't hear it in time, and is preparing to build a house himself on the same spot (bad transaction)?

Sure, there are benefits to the p2p model listed: flexibility, and anonymity. If you could use a multicast IP (is that what it's called? It's been years), you have the benefit of saying, "built a house here" only once, to be heard by everybody. You know, like a bullhorn. Is that more efficient than going to every single person, "Heygotahousehere, heygotahousehere, heygotahouse here.."? You betcha. The biggest problem with flexibility and anonymity: those are the basic recipes for cheating. And even worse, because of the flexibility and anonymity, cheating would be considered legitimate game-play. It's the basic case of: "This is what he said, so it must be right" syndrome. Every single 'peer' WILL be trusted, and that will never make for good honest gaming because some silly person will break that trust, sooner or later. There are just too many ways for someone to cheat.

Unless of course, every single 'peer' will be considered pure. So what do you need to have a 'pure' peer? For starters, you could begin with your own network. (one that's NOT connected to the internet, thank you =P) That will cut down, oh, I dunno, 99.9% of the people that are considering cheating. In fact, the only people that would be able to cheat, are the people from the inside: either people who hard-coded the cheat into the game, game admin, or many other things too frequent to list at the moment.

Benefits of running a 'pure' p2p model: efficiency and trust. The "this is what he said, so it must be right" syndrome in this case will always be legit. You could whip out some crazy multicasting that will only be limited by hardware. Now imagine this: you have an arcade machine. Let's call it... X-wing Fighter (couldn't think of anything). Anyway, this game is your basic flying-shooting game in a 3-D environment. Think Descent, except you are actually sitting in a cockpit. And now your friend is sitting in another X-wing arcade machine right next to you, playing in a multiplayer game with you. He just shot you, and you are dead. Instead of him telling everybody, "I just shot my friend who had this much health, with this gun that had this much ammo, when he was at location x,y,z, going at speed x mph in direction x,y,z", he could simply say, "He's dead! Replace flying ship with explosion" and everybody will go "OK!" OR, if you didn't die, he would say, "I lowered his health by 6 points!" and everybody will go "OK!" All of a sudden your anti-cheat code is useless, and that frees you up for other things: better network code, graphics, building a more robust game engine, etc. The last thing you would have to worry about is redundancy. The only messages you would need to send to everybody are: 1) My plane is pointed in this direction, going at this speed, from this spot. 2) I have damaged player-123; he now has this much health.

The only message that has to be heard fairly consistently would be message 2). If you were keen enough, you could write some real nifty flight prediction, depending on your plane's turning capacity, therefore only sending out message 1) at long intervals.

re: Peer-to-peer and MMORPG (part II)

Alex Choi  Posted: 02:42am Feb 21, 2001 ET  

 

The only message that has to be heard fairly consistently would be message 2). If you were keen enough, you could write some real nifty flight prediction, depending on your plane's turning capacity, therefore only sending out message 1) at long intervals. That frees up bandwidth. But by how much?

let's create two packets:

Let's create packet one with the following fields:

1) packet id: 1 bit
2) plane id: 32 bits
3) Pitch: 9 bits
4) Yaw: 9 bits
5) Roll: 9 bits
6) x-coord: 32 bits
7) y-coord: 32 bits
8) z-coord: 32 bits
9) velocity: 32 bits
10) other: 100 bits

I'll explain why I chose these so far..

Packet ID is the 'header' field used to determine whether it's packet type 1 or packet type 2

Plane ID is the unique ID of the plane

Pitch, Yaw and Roll need to cover at least 360 degrees each; with 9 bits they get 512 distinct values, which should be more than enough to define where they are facing. Even 256 values (8 bits) should be ok, but that only saves 1 bit each =). I'm generous like that =)

For location (x,y,z) I chose to use 32 bits, because I needed a large playing field. How big is the field? Well, let's say each number is one inch, which would give us roughly 67,000 miles CUBED to play with. For comparison sake, the circumference of the earth at the equator is 24,901.55 miles. It's a large playing field. =)

Velocity is the speed of the plane. Duh. You could probably measure how fast you are going in inches per second.

Other is for other stuff I haven't thought of yet, which people could probably find things to use that space for.

Now let's create packet 2 with the following fields:

1) packet id: 1 bit
2) plane ID: 32 bits
3) hp remaining: 16 bits

Again, packet ID is the header bit used to determine whether it's packet type 1 or packet type 2, plane ID is the unique ID of the plane, and hp remaining is how much health the plane has.

Now, packet 1 is 288 bits large (36 bytes), and packet 2 is 49 bits (7 bytes, rounded up to whole bytes). Let's use UDP to move these packets through the network. The UDP header (without payload) is 8 bytes, so we've got 2 UDP packets ready for transmission: one is 44 bytes long, the other is 15 bytes long.

Now assume we are using a multicast router with a transmission rate of 100 Mbps. And let's assume that we send out packet 1 five times every second (which would be somewhat redundant), and packet 2 twenty times every time someone is shot (just in case packets are lost), and let's assume someone is shot every second. That would mean one person would be sending and receiving about 1,040 bytes (8,320 bits) per second. Meaning that you could have a maximum theoretical capacity of roughly 12,000 users playing at once, flying around, and shooting someone every second. Now, that's assuming that there is absolutely nothing else on the network. =)

Of course, let's try to make it sound 'realistic' =P But we still have our dedicated private network. If this could handle roughly 12,000 users every second, do you think it would break a sweat when you have 500 X-wing arcade machines and 500 TIE fighter arcade machines hooked up together, fighting an all-out war in the same area? Let's find out: We have 1000 'peers', each sending out a type-1 packet 5 times per second: about 1.8 Mbps. Let's have every peer shooting 5 people/second (remember, we are sending out 20 type-2 packets per shot): about 12 Mbps.

Total: about 13.8 Mbps, which is roughly 14% of the router's bandwidth. I think it's very doable, with hardware cost now being the only limitation (I've never seen 1000 arcade machines in one place, let alone 1000 of the SAME arcade machine). In any case, it's just way too expensive and risky to invest in something like this. But what a sight it would be! =)
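As a sanity check on the packet sizes, here is a short sketch that just sums the field widths listed above and adds the 8-byte UDP header. The field list is from the post; the arithmetic is all the code does.

```python
import math

# Field widths in bits, as listed for the two packet types above.
PACKET1_FIELDS = {
    "packet id": 1, "plane id": 32, "pitch": 9, "yaw": 9, "roll": 9,
    "x": 32, "y": 32, "z": 32, "velocity": 32, "other": 100,
}
PACKET2_FIELDS = {"packet id": 1, "plane id": 32, "hp remaining": 16}

UDP_HEADER_BYTES = 8  # minimal UDP header, no payload

def wire_size(fields):
    """Return (payload bits, wire bytes): the payload rounded up
    to whole bytes, plus the UDP header."""
    bits = sum(fields.values())
    payload_bytes = math.ceil(bits / 8)
    return bits, payload_bytes + UDP_HEADER_BYTES

bits1, wire1 = wire_size(PACKET1_FIELDS)  # 288 bits -> 44 bytes on the wire
bits2, wire2 = wire_size(PACKET2_FIELDS)  # 49 bits -> 15 bytes on the wire
```

From there the per-user load is 5 × 44 + 20 × 15 = 520 bytes sent per second, and doubling that for receive gives the 8,320 bits/second figure used above.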

Whoa! I've really gone off on a tangent. Anyway, the counter-argument is that anybody could run any game on a fast network, which is true. But what I'm talking about here is efficiency, which ultimately got lost when I started babbling =P Anyway, only two packet types (in this case) needed to be sent; the 'peer' processed everything else. In an RPG, again, it could be doable, maybe as an arcade machine (sorta like Gauntlet Legends), but not with a home machine that could be tinkered with. A home console would be no good either. There are some real smart cheaters out there. As long as the CPU and the physical line are within their reach, prone to tampering, p2p will never work.

-sorry for the looong message.. Didn't realize how long it was..

Combination of p2p and traditional server/client set up

Simon Larsen  Posted: 04:40pm Feb 24, 2001 ET  

 

Ahh, by Gnutella-like pure, I of course mean: no server at all.

The model that might be useful in a MMORPG setting is the combination of the traditional server/client version and the distributed p2p model. Let me explain.

If every computer acts as both a client and as a resource, then we might get a system that is far cheaper than the usual ones. In a traditional set-up you usually put up an additional server for every 2000 users (players). Take the Napster p2p model as an example. The Napster server doesn’t host any music at all (well, some struggling start-up bands); it only makes the connection, so to speak. The connection is then made between the searching user and the user that has the music you were looking for. Fairly simple, and yet very effective.

With the Gnutella p2p model, it works more or less the same. The user types in his/her search criteria and the computer “asks” the 5 (or 10) nearest computers if they have the music that you are looking for. If not, each computer passes the search on to the next 5 or 10 computers in the network, until it finally finds (hopefully) what you were searching for and sends back the location of the computer with the material. No server, just clients acting as servers.
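The flooding search described here can be sketched as TTL-limited, breadth-first forwarding over a peer graph. This is an illustrative toy, not Gnutella's actual wire protocol; the `peers` dict and `has_item` callback are invented for the example.

```python
def gnutella_search(peers, start, has_item, ttl=4):
    """Flood a query outward from `start`, one hop per TTL tick,
    and return every peer that has the item. `peers` maps each
    peer to the list of its directly connected neighbours."""
    found = []
    visited = {start}
    frontier = [start]
    while frontier and ttl > 0:
        next_frontier = []
        for peer in frontier:
            for neighbour in peers.get(peer, []):
                if neighbour in visited:
                    continue
                visited.add(neighbour)
                if has_item(neighbour):
                    found.append(neighbour)  # location sent back to searcher
                next_frontier.append(neighbour)
        frontier = next_frontier
        ttl -= 1
    return found
```

The TTL is what keeps a query from swamping the whole network: anything further than `ttl` hops away is simply never asked.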

So if you combine the theory behind these networks with the traditional set-up then you end up with a very powerful system, which is cheap too. :-)

You set up the usual master-server system, with your online persistent world located on it. But instead of setting up a server for every 2000 or so new players that log on to your system (or your game if you like), you make the players' computers do some of the work. When some player performs an action that requires a lot of computing power, then instead of making the server do all of the work, the player's computer does the calculations. And if that's not enough, the computer can send out a signal saying, “I need help... I need help... I need help...” to the 5 or 10 nearby computers, and if they can help they do; otherwise they send the request on to the next computers until the requesting computer has its demand filled. Just like the Gnutella model, but with resources instead of music files. And when the calculations are done, the player's computer sends the result back to the master server that keeps track of the online world. You need the master server as backup for the online world, or else the entire world could disappear if, as Niklas Smedberg pointed out in his post, all the players logged off at the same moment.

So instead of having to set up a new server for every 2000 players or so, you might only have to do it for every 10000 or 15000.

Tell me if I’m totally off the mark or if you like what you read.

Locatability (Locating Other Players, etc.)

P2P gaming

Nikk M.  Posted: 01:17pm Feb 28, 2001 ET  

 

As I see it, the CPU resources are NOT the problem here. The bandwidth is, and that was the whole point of the article.

ICQ is a good example of a P2P system. Let's say we have an RPG game - instead of sending everything to the server and then getting back all of the info, we simply send out messages to the computers on our "active list". The active list is continually updated, since it contains only computers linked to avatars in the vicinity of the player. If some avatar is more than two hundred meters away, throw it out of the list. The addition of new computers to the list is a tricky thing - the person will need to contact the already "known" computers further in the necessary direction so that they share parts of their "active lists". In case the user is alone, contact the computers that are more "in-charge" of the area - server-like agents that have better bandwidth and keep larger "active-lists".
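The 'active list' idea (keep only the computers whose avatars are within a couple of hundred metres, drop the rest) can be sketched like this; the function and its arguments are hypothetical illustration.

```python
import math

def update_active_list(my_pos, known_avatars, radius=200.0):
    """Rebuild the active list: keep every known avatar within
    `radius` metres of us, drop everyone further away.
    `known_avatars` maps an avatar id to its (x, y) position."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return {aid for aid, pos in known_avatars.items()
            if dist(my_pos, pos) <= radius}
```

The hard part, as the post says, is not this pruning but populating `known_avatars` in the first place, which is where the shared lists and the 'in-charge' agents come in.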

The model is very tempting for aspiring developers who cannot afford a dedicated server. Furthermore, it could then be developed into a unified protocol shared between several games in order to allow them to have interconnecting worlds. The game graphics and object stats could be uploaded dynamically, therefore enabling the game to be constantly upgraded. Whee! Well, at least it's a tasty compSci problem.

As far as security goes, nothing matters in the end: for each man-made force there is a better force. Of course, nobody wants to totally abandon the issues of anti-cheating and privacy, but the advantages of P2P networking are too impressive for it to be disregarded completely. And, after all, it still comes down to the maturity of people.

Re: P2P gaming

Krystian Majewski  Posted: 03:23pm Apr 24, 2001 ET  

 

Adding players to your 'active list' can be simplified. Instead of playing in a seamless world you could divide it into 'sectors'. Let's say you just started the game and want to enter a certain sector. Of course, you have some players in your active list, otherwise it won't work. You send out a request that says, "Hey guys, who is in that sector, and can you give me a list of people in it?" The request goes through the network until a player is found who is actually in this sector. He replies, "Hi, I'm in this sector! There are the following people in here: bla bla." Next, you send a message to all the people in this sector: "Hi! I'm Krystian and I just entered your sector." And all the people in the sector add you to their contact lists and from now on send you what they do (what they say, where they move to, what they do).

Big problem: What if the request gets lost? What if you don't know enough people to send it through the WHOLE network? There ARE people in this sector, but none of the people in your contact list have any of them in their contact lists. You'll think, "Great! I'm alone!" and enter an empty sector. The next time a player from your part of the network enters your sector, you'll answer that you are in there, and now we have the problem that there are actually two versions of this particular sector. The phenomenon we have here is quite dangerous: pure P2P networks tend to 'split'. Since every little part of the network is enough to create a new network, a network can be divided into two different networks (and the users won't notice it until it's too late).

I think you NEED at least some kind of a 'user list server' that logs the active users and distributes their addresses.. like in Napster..

ceeu

Krystian

Good Discussions

Bryan Turner  Posted: 06:34pm Mar 2, 2001 ET  

 

It's good to find some intelligent discussions on MMORPG issues. I posted a thread on the old Gamasutra forum but it seems not to have survived the transition.

I've been thinking about these issues for awhile, and now I'm putting it to code. It'll be interesting to see how the first implementations of these ideas pan out.

My $0.02:

P2P vs. Client/Server: I agree with Fitch, in fact his hierarchical model matches a hand-drawn picture in my notebook to the dot! My jaw dropped when I saw this. :)

I think a system like this would work very well, but I've been unable to overcome the immediate technical challenges. And hacking is a serious threat, more so than I believe Mr. Fitch recognizes.

My thinking on this matter follows the voting strategy: every 'important' action (killing players, creating new objects, etc.) requires approval from a set of peers, and at least one higher-ranking node; otherwise the action fails. I believe this strategy will solve most cases of 'individual' hackers. It does not solve the problem of script-kiddies or groups of hackers.
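The voting strategy can be sketched as a simple quorum check: an 'important' action stands only with a majority of the polled peers plus at least one higher-ranking node approving. The function name and thresholds are invented for illustration.

```python
def action_approved(peer_votes, superior_votes):
    """peer_votes / superior_votes: lists of booleans collected from
    a set of peers and from higher-ranking nodes. The action stands
    only with a strict peer majority AND at least one superior's
    approval; with no votes at all, it fails by default."""
    if not peer_votes or not superior_votes:
        return False
    peer_majority = sum(peer_votes) * 2 > len(peer_votes)
    return peer_majority and any(superior_votes)
```

As the post notes, this defeats a lone hacker but not a colluding group large enough to supply its own majority, which is why a trust hierarchy is still needed on top of peer review.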

This is where Mr. Fitch's model does not show resilience in my view. If a hacker designs a script, markets it well to a fan site, and gets a large enough group of lemmings, then the hacker becomes God. The system has to have some more structure, like that of the banking system mentioned earlier. A trust hierarchy, not just peer review.

Also, a well-equipped hacker with a T1 and lots of disk space will, by definition, be promoted to a role with more responsibility. I believe this should not be the case; the trust hierarchy should also govern which nodes become freeholds for objects.

As for bandwidth, there's just no way you're going to get P2P of this scale over a 28.8 modem. Two or three KB/s is just not enough to spread over N peers at 10 updates/sec or whatever; the IP/UDP headers alone would eat that up.

Q: Is anyone else actually writing a system like this? I'm interested in sharing research with other engineers. --Bryan (bryan.turner@pobox.com)

 

The Charles Congdon & Crosbie Fitch Exchange

The Questions

Discovery in Consensus

Charles Congdon  Posted: 02:23pm Apr 13, 2001 ET  

 

I am interested in how such a large system handles discovery, which seems like it could bring this system to its knees if one isn't careful. Here are some examples of the discovery problem that may provide useful topics for discussion:

1.      A figure moves slowly through a topological simulation. This case will be very common, but also should be easy to handle. One expects that with carefully designed virtual environments, the interests of the observer will change less than the scenery they observe, i.e. the observer does not teleport and scenery does not teleport, rather there is nice, pleasant, continuous, gradual/predictable change. This doesn't preclude apparently sudden change (from the observer's perception), as long as it is predictably sudden change. What would be interesting to know is how the observer's interest shifts through the objects in the simulation? How, and how often does transferral of ownership or interest happen? Are there cases where the shifting interest might involve a search for ownership/interest that might involve a large portion of the database? Is there a way to make this search more efficient than O(N)?

2.      Let us consider a slightly more complex example. I direct my character into a crowded place, such as a sports arena or the central square of a city. Ignoring for a moment the problem of keeping updated on the events in this area once I arrive, I will at some moment need to learn about a large number of interesting items (everyone else in the arena, the arena itself, popcorn stands, etc.), and respond to similar requests made by anyone else who enters the area after me. I can imagine the possibility of an event that might be attended "in person" (as opposed to viewing a "broadcast") by a large fraction of cyberspace. How can the discovery system handle this without falling to its knees? Now what happens if I want to try to meet you in a corner of this crowded space - how do we find each other?

3.      What if I "beam into" a new location in the virtual world? Even if we limit the number of places where one can "beam down," all that does is reduce the size of "N" against which we need to do discovery. How do I efficiently discover everyone who is around me when I "beam into" an area? How do we make this more efficient than O(N) , for an N that is presumably smaller than the total number of people in the simulation?

4.      Let's say I'm a new user. I downloaded/got in the mail my "Cyberspace" CD. I install the software, connect to the internet. Just what happens now? How is a new user placed in the world? If they all start at the same location, we have sort of a pathological case of (3). If they start at random locations, we have a spread out case of (3). How do the new users even find the current root(s) of Cyberspace to start their discovery process? If you are talking about a billion user system you can imagine first-time users would be popping up in 6-digit numbers or worse every day. That's a discovery problem!!!

5.      I re-connect to cyberspace after being logged off for the day. My machine, in re-taking control of my "avatar," will need to learn from the rest of the world the current state of my avatar and its environment. If we have 10 million people connected to this system at any given time, I think it is reasonable to say that we will have at least 10 million major discovery events in the system a day. Ouch. This will be a big event, on the order of (3), every time someone re-connects to the system. As in (4), how do they even find the root(s) of Cyberspace to even track down their avatar? Should the client cache some portions of the ownership hierarchy to give them a starting place when they reconnect? Should the avatars be "docked" in persistent hosted areas until their owner returns (houses, neighbourhoods, hotels, etc.)?

Let the discussion begin!

Cheers,

Charles

Interest & Ownership

Let's just clarify the terms first:

Crosbie Fitch  Posted: 02:28pm Apr 18, 2001 ET

 

Interest is basically the 'turning on' of senses, e.g. consider what it would be like if you were an avatar in the real world and all your senses had been suspended. Despite being able to think, you are to most intents and purposes incommunicado with the universe. The rest of the universe may as well not exist for you. You might fall off a cliff or get run over, and though this would cause your thoughts to cease, it would be a difficult thing to notice. However, if we decide that we need to be able to see - that we need the sense of sight - then it is as though we declare that our mind is now interested in perceiving visual information concerning the universe. We therefore need the visual information conveyed to us, so that when we operate our 'sense of sight' or in other words, when we open our eyelids, our eyes can process this information and our mind can try and understand what the eyes see.

This process is similar for the avatar in the virtual world. The avatar is interested in objects that can be seen by its eyes, such as rocks, trees, other avatars, etc. It won't (except for other purposes) be interested in objects that can't be seen by its eyes, such as magnetic fields, or objects whose visibility is negligible (due to distance, occlusion or size). This interest has to be refined in terms of time and probability too, i.e. the avatar will not only be interested in objects that it can see within its field of view, it will also be interested in objects it could see if it turned its gaze. Furthermore, it will be interested in objects that it might see in a few moments' time (like fast-moving planes). You can imagine this as the interest not only having spatial dimensions, but a temporal dimension too, perhaps even a probability field to boot. Each object also has these spatial, temporal and probability dimensions. It is where the dimensions of the 'interest' and the object intersect that the object can be considered to be interesting (to some extent). This level of interest can be used to govern the priority at which the details of the object are to be ferried to the avatar, e.g. it may be unlikely that a fighter jet (currently occluded over the horizon) will in 5 seconds take a sharp turn and fly over the hills on the horizon toward the avatar, but it might just be at a sufficient probability that its details can be affordably obtained on the off chance that it does become visible (and the avatar happens to look that way, and hasn't popped inside a building).
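As a rough illustration of how these intersecting spatial, temporal and probability dimensions might combine into a single priority, here is a minimal sketch in Python. The field names (`pos`, `seconds_until_visible`, `visibility_probability`) and the particular scoring formula are assumptions for illustration, not part of any proposed API:

```python
import math

def interest_level(observer, obj, horizon_seconds=5.0):
    """Score how interesting `obj` is to `observer`.

    Combines three illustrative factors:
      - spatial: closer objects matter more
      - temporal: objects that could come into view within the horizon
        still matter, discounted by how far off that moment is
      - probability: how likely the object is to actually be seen
    """
    distance = math.dist(observer["pos"], obj["pos"])
    spatial = 1.0 / (1.0 + distance)

    # Time until the object could plausibly enter view (0 if already visible).
    eta = obj.get("seconds_until_visible", 0.0)
    temporal = max(0.0, 1.0 - eta / horizon_seconds)

    probability = obj.get("visibility_probability", 1.0)
    return spatial * temporal * probability

def prioritise(observer, objects):
    """Order objects so the most interesting details are ferried first."""
    return sorted(objects, key=lambda o: interest_level(observer, o), reverse=True)
```

On this scoring, the occluded fighter jet would score low (distant, some seconds from visibility, unlikely to be seen) but non-zero, so its details could still be fetched at low priority on the off chance.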

So 'interest' can be compared to an ongoing query to a web search engine. The search engine will keep on chugging away providing details of things it knows that match the interest (in order of closeness of match). Eventually it will have provided everything it knows, but it will know to provide details of when interesting objects change, become uninteresting, or new objects appear that are interesting. The 'search engine' will also recommend other 'search engines' that might know more about the area of interest.

The important thing to note is that an interest is not a demand; it does not have to be serviced or forwarded or responded to. Just like a search engine doesn't have to respond if it's too busy, it's just a request that is expected to be dealt with fairly. It certainly shouldn't ripple out into a global event requiring the attention of a large proportion of nodes. Just like people, every node understands a little about a lot, but a lot about only a little. A node will always be able to make a guess (usually a good one) about another node that will know more about a particular subject.

It's difficult to avoid being tautological, but 'Interest' only changes as the avatar changes the part of the universe that they're interested in, e.g. if the avatar moves, then any interest with a spatial basis will be adjusted accordingly. This change in interest will gradually be disseminated to all the nodes to which the interest is currently expressed and will thus adjust the set of details being provided via this interest. If the avatar gets close to a teleport tube (with a fixed set of destinations, please!) then the tube is likely to express an interest in details of objects at the remote destinations (simply on the off-chance that the avatar might use the teleport tube). This also applies if the avatar comes close to a radar screen, CCTV set, or any other thing that will augment the interest of the avatar. Independent of the sense of sight is the sense of hearing, so if there are any objects that make a sound, but are otherwise invisible, then these will be picked up by the auditory interest. It's not just senses either, anything that can affect the avatar (radiation say) will be subject to an appropriate interest. Interests can be adjusted according to the locally available resources. If you have a tiny hard disk then you can trim the interest such that only the most interesting and important objects are maintained, whereas otherwise, the interest can be expanded to cover a much broader range.

Ownership is something quite different from Interest. It simply tends to be that whichever node owns an object will also be a node with a good interest in that object (for itself or on behalf of child nodes). Ownership is the mechanism which determines which node's version of state for a particular object is the one that is considered the truth, i.e. the arbitrating node. Change in ownership can be relatively slow, as it is usually expected that a current owner is unlikely to suddenly become extremely unsuited to arbitrate over an object. As long as ownership tends toward optimal, then it is unlikely that a not-quite-optimal situation will have a significant impact on the modelling of the virtual world. For example, from a logistics/distribution company's perspective, as long as a warehouse is in the same country it's probably ok. It would be nice if the warehouse was in just the right position within a country to service the delivery network, but as long as its situation tends toward this ideal, then the distribution process will be feasible, and get more efficient as the warehouse is located more optimally. The thing is, it is expected that consumer preference, and thus the distribution required, will not rapidly change - or if it does, it will be recognised by all as a rare event and any delay in adjusting to the change will be tolerated.

Change in ownership can thus be considered a background process akin to a decision that distribution efficiency may be marginally improved if a particular product was sourced from an alternative warehouse than the one at present.

Re: Interest vs. Ownership

Charles Congdon  Posted: 11:43pm Apr 30, 2001 ET  

 

Thanks for clarifying "interest", in particular your comment "interest is not a demand, it does not have to be serviced or forwarded or responded to." That's an important point, especially in an environment where something that interests you may be served by a slow machine on the other side of the world.

Thanks, Charles

More on Interest

Crosbie Fitch  Posted: 02:17pm May 2, 2001 ET

 

Furthermore, remember that the heuristics that determine which node gets to own something include the following in their calculations (vs. the current owner and other bidders):

- The degree of interest the node has (which will be augmented by any children it has that have an interest)
- The connectivity of the node, i.e. latency, bandwidth, consistency, topological location
- The performance of the node, i.e. storage capacity, processing power, load (self + children + peers) on each
- The reliability of the node, i.e. uptime, continuity, integrity, security, trust, etc.

This tends to keep ownership fairly optimal.
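A minimal sketch of what such an arbitration heuristic could look like, assuming each factor has already been normalised to [0, 1]. The weights and the hysteresis margin are illustrative guesses, not values from the proposal:

```python
def bid_score(node, weights=None):
    """Score a node's bid for ownership of an object.

    Each factor is assumed normalised to [0, 1]; the weights below are
    illustrative, not prescribed.
    """
    w = weights or {"interest": 0.4, "connectivity": 0.3,
                    "performance": 0.2, "reliability": 0.1}
    # A node's interest is augmented by interest held on behalf of its children.
    interest = min(1.0, node["own_interest"] + sum(node["child_interest"]))
    return (w["interest"] * interest
            + w["connectivity"] * node["connectivity"]
            + w["performance"] * node["performance"]
            + w["reliability"] * node["reliability"])

def arbitrate(current_owner, bidders, hysteresis=0.1):
    """Keep the current owner unless some bidder is clearly better."""
    best = max(bidders, key=bid_score, default=None)
    if best is not None and bid_score(best) > bid_score(current_owner) + hysteresis:
        return best
    return current_owner
```

The hysteresis margin captures the point that change of ownership is deliberately slow: a marginally better bidder doesn't dislodge the current owner, so ownership only migrates when it is worthwhile.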

Let's look at a highly contrived example that doesn't exploit the high likelihood of the temporally stable, spatial coherence of a universe. We'll thus see what happens when we deal with non-spatial objects.

Now let's say that a god creates a new kind of object that meets no existing interest except that of a particular avatar, e.g. a 'magic power' with avatar Fred's name on it (and no spatial properties). Avatar Fred might happen to be the object most interested in everything with his name on it, and the node that owns Fred might get to own this 'magic power' object. Let's not worry about what it does, but assume it interacts in some way with avatar Fred.

At this point, avatars that are interested in Fred will by association be interested in whatever Fred's interested in. This happens as soon as the object representing Fred gets downloaded into a node. Why would anyone be interested in Fred? Well, they wouldn't necessarily be interested specifically in Fred, but they would be interested in what they could touch, see, hear, etc., and Fred's object might meet respective spatial criteria representing some avatars' interests. So, one of Fred's object's properties is an interest in things with his name on them (it might not be, but let's say he'd been scripted that way). As soon as Fred's object arrives, it gets a chance to do something, like register its interests with the node it's been copied to.

If avatar Bill happens to be near avatar Fred, then Bill's node will end up getting the Fred object and any of its interests. It will then be up to Bill's node to satisfy these interests (on a prioritised basis). It's highly likely that whichever node Bill's node got Fred from will also have stuff that Fred was interested in, so it shouldn't be too difficult to get Fred's 'magic power' object. In this way, interests are like magnetic forces that attract entourages of other objects.

Bill is likely to be duplicated on many nodes, and Fred is too. On some nodes, both Fred and Bill will exist together. However, rather than imagine that nodes are tight on capacity, you could imagine that Fred and Bill exist together on all nodes, but that on some nodes an optimisation process has flushed Fred, Bill or both from the store.

Interest is the mechanism that prioritises the need to keep stuff up to date on a node. A node can flush what's uninteresting if it needs to, but its storage is just like a web cache, the last known information. The critical use of interest is to prioritise what information is sent to a node from other nodes that are likely to have more recent updates on objects that meet that interest. In this way the limited communication channels are used efficiently.

Usually, interests are spatial, and stuff in the vicinity of Fred will be obtained anyway, because if Bill was interested in Fred (because Fred met Bill's spatial interest) then a lot of stuff near to Fred will be within that interest anyway. But, if Fred has much better hearing than Bill, then Bill's spatial interest will effectively be extended because at some priority Fred's interest will be added to Bill's on the same node.

The primary interest on a node will be expressed by a player. One node is likely to express a primary interest in Fred, and another in Bill. Both nodes will have an overlapping interest when Fred and Bill are near.

Let's rewind and take it that Fred and Bill are miles away from each other outside any spatial interest (with an effective priority).

Because Bill isn't interested in Fred, Bill's node won't be interested in what Fred's interested in.

Now, it might just happen that a god creates another object, a 'magic power' detector amulet with mini radar screen and puts it in reach of avatar Bill. This amulet might express an interest in all objects with magical powers. Well, because it's not spatially limited, that's a pretty big interest, especially if there are a lot of objects that meet this interest. However, the normal procedure is:

1) An object expresses an interest to the node it's on (an asynchronous kind of request for a dynamic set of objects).
2) The node registers the interest alongside all its others.
3) The node tries to satisfy the interest locally.
4) If the interest intersects with any current peer interests, adjust them appropriately (if necessary).
5) Add the interest to those expressed to the parent.
6) Take note of any recommendation a parent or peer might make for a better node to express the interest to.

This process can probably be improved, but it'd be something similar.
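The six-step procedure might be sketched like this, with `node`, `interest` and their fields as illustrative assumptions (step 6, taking note of referrals, is left as a comment since it depends on the referral protocol):

```python
def register_interest(node, interest):
    """Sketch of the six-step interest registration procedure.

    `node` is a dict with 'interests', 'store', 'peers' and 'parent'
    fields; `interest` carries a 'matches' predicate. All names are
    illustrative, not a proposed API.
    """
    # 1) An object has expressed the interest to the node it's on.
    # 2) Register it alongside the node's other interests.
    node["interests"].append(interest)

    # 3) Try to satisfy the interest locally, from the node's own store.
    matches = [obj for obj in node["store"] if interest["matches"](obj)]

    # 4) If it intersects a current peer's interests, widen that
    #    peer subscription rather than opening a new channel.
    for peer in node["peers"]:
        if any(interest["matches"](obj) for obj in peer["known_objects"]):
            peer["subscriptions"].append(interest)

    # 5) Express the interest to the parent as well.
    if node["parent"] is not None:
        node["parent"]["pending_interests"].append(interest)

    # 6) A parent or peer may recommend a better node to ask;
    #    taking note of such referrals is not modelled here.
    return matches
```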

So what happens now is, that eventually the interest will percolate high enough up the hierarchy that a mutual ancestor node of Bill's and Fred's will hold a record of Fred's 'magic power' (remember that all objects have an owner, and that ultimately this ownership was obtained via the root node, and that all owners (including delegators) have a duty to maintain the state of these objects in their store). Now it may be that it will be a while before the ancestral node knows about the 'magic power' object's creation, and it may be a while before its properties end up matching Bill's amulet's interest criteria, but hopefully within a few seconds, Fred's 'magic power' object will turn up in Bill's node's store. (An interest is basically a search request to 'send me objects like this'). Now, the amulet can interrogate the 'magic power' object and obtain details (depending on scripting) from it such as who it's been bestowed upon, i.e. Fred. The amulet can then represent Fred's position on its radar screen (the owned version of the amulet will re-render the texture used for its display using the local rendering service specifying the set of objects to include, i.e. an interest).

Interest is only gradually migrated up the hierarchy compared to the speed at which ownership tracing might progress. Also remember that nodes are expected to be of ever greater capacity/power as you go up the hierarchy, therefore the task of locating an object for which details have been provided via an extra-universal agency (a god) is still not going to involve querying every computer in the universe. There's an ongoing query between every node in the hierarchy, and an additional interest at a leaf node is not really going to make much of an impact, unless there's a lot of gods messing about, as in the contrived example. Usually objects only suddenly know about things due to interactions with other objects - not because some god has reached down and tweaked something, say reached into the 5th century and told King Arthur he needs to find the holy grail. So usually all objects carry around virtual clouds of connections with other objects that they might relate to. It is rare for an object to spontaneously have an interest in something that no other object or neighbouring node knows about.

Limiting the impact of interest queries on the system

Charles Congdon  Posted: 10:24am May 24, 2001 ET  

 

Crosbie:

I would like to ask for clarification on a few points

...At this point, avatars that are interested in Fred, will by association be interested in whatever Fred's interested in...

Where does this stop? If I am interested in Fred, then anyone interested in me will also be interested in Fred (through me), and anyone interested in them will be interested in Fred via two hops and... Or is there a limit to this "sideways inheritance" (for lack of a better term). What do we do to limit interest in things held by nodes far from us? I guess this may be application specific. As you say:

Fred's object might meet respective spatial criteria representing some avatars' interests.

Still, how far does this testing or probing for something that might match a node's interests go? Does a node ever stop sending out feelers, or passing on feelers from other nodes? If so, aren't we talking about an awful lot of network traffic here?

It also seems like an awful lot of interest will percolate up to parents, and possibly between peers...unless any given node doesn’t register interest often. But they have to, else the node will miss the movement of other nodes through their perception, interesting events, etc. And the higher you go in the tree, the more likely interest will percolate up, correct? Seems to me that this will result in exponential increases in network traffic the higher up you get in the hierarchy, and that puts this system at risk – timely flow of information can get bottlenecked.

Thanks, Charles

Unlimited interest, but limits on its impact

Crosbie Fitch  Posted: 10:59am Jun 2, 2001 ET  

 

This is something I talk about in my next article (6). There are rules or heuristics we can use to decide when it’s more trouble than it’s worth to ‘put out another feeler’ as it were. These heuristics help decide whether it’s better getting updates from a parent rather than a new peer.

As to network traffic, it’s a matter of sending as much as can be sent, conveyed and received. The sender prioritises their outgoing channels and spreads their outgoing bandwidth across them, the receiver prioritises their incoming channels and informs the sender as to their receiving bandwidth so that it doesn’t send more than can be received. There’s also an ongoing measurement of available network bandwidth (in case this proves to be the limiting factor).

In general, the idea is to transmit as much information between nodes as possible, and prioritise the most important (or interesting) information.
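The sender-side split described here might look like the following sketch, assuming each channel carries a priority and a limit its receiver has reported; the field names and the simple proportional policy are illustrative:

```python
def allocate_bandwidth(channels, budget_kbps):
    """Split an outgoing bandwidth budget across channels by priority.

    Each channel gets a share proportional to its priority, capped at
    the limit its receiver has reported, so the sender never sends more
    than can be received. (Budget freed up by a capped channel could be
    redistributed to the others; not modelled here.)
    """
    total_priority = sum(c["priority"] for c in channels)
    allocation = {}
    for c in channels:
        share = budget_kbps * c["priority"] / total_priority
        allocation[c["name"]] = min(share, c["receiver_limit_kbps"])
    return allocation
```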

Bottlenecks are expected to be the order of the day. The system’s primary objective is to prioritise what goes through the bottlenecks. One of the earliest standpoints I’ve expressed is that in the future, the network will always be the bottleneck. There’s never enough bandwidth for every node to know everything there is to know in a timely manner. There is no solution, except to accept an inferior system, i.e. one in which it is acceptable to get ‘best effort’ modelling. We do the best that can be done with as much as we have – we do not pack our bags and go home because we don’t have as much as we want.

It might seem as though by dint of a hierarchy that there will be a concentration of all traffic into the root. However, remember that there is no requirement that every update to a child must be seen by a parent the very same moment. There is a primary obligation to inform the parent concerning changes to owned objects (as soon as possible, but no sooner), and a secondary implicit interest of the parent in any updates the child gets from its peer contacts (or its children via theirs).

I’m expecting a fairly ‘exponential’ distribution of computer capabilities, i.e. 1 super computer, 10 computers half as good, 100 computers half as good as that, 1000 etc. Well, something along those lines anyway. Suffice it to say that there’s not much point in a hierarchy if all the computers are the same. We end up with a sort of egalitarian gossip distribution mechanism. The thing about a hierarchically adaptive system is that if all computers are the same then it will tend to behave as a gossip type, but if computers are diverse in capability then it can adapt and automatically balance the load to exploit computers to the best of their ability.

In case you still don't think I've answered your question: the answer is in your question, i.e. there cannot be any exponential growth in bandwidth requirement where there is no exponential growth in availability of that bandwidth. Just because the system would seem to desire stupid amounts of bandwidth doesn't mean it will fail if that bandwidth turns out to be unavailable.

More on ownership

Crosbie Fitch  Posted: 11:15am May 4, 2001 ET  

 

Perhaps I'd better say some more on the ownership issue. For example, you might wonder why I didn't have the owner responsible for keeping track of all its copies, because with the scheme I've described it seems that it's up to whoever's interested in an object to track down or chase the owner.

Chasing the owner. Yup, taken in isolation it could be a wild goose chase. However, remember that peers are already conspiring to ensure they're always talking to other peers with similar interests, and thus there is a very high likelihood that the most interesting stuff will be owned by one of these peers or one of their parents (and if not, then the workload in tracking down ownership will be shared). The hops to the owner won't be many either. And remember also that owners only have to be resolved if the ownership details in an object turn out to be out of date (this will be when you are handed an object from another peer's store that happened to be no longer interesting to that peer though it is to you). Object ownership is a property like any other, and if the object is interesting you will be subscribing for updates to it (perhaps to the owner, perhaps to someone else); an ownership change will be published to you as much as any other state change. So for as long as an object is interesting to you, the ownership details aren't particularly critical, except for a need to write to the object, a benefit in getting the most recent updates from the horse's mouth, and a bid to compete for ownership of the object.

So frankly, I think it's totally cart before horse to burden the owner with responsibility for tracking copies. 'Interest' is what drives the system. Interest is the pull of information from what's owned. It's enough of a burden simply being the focus of interest through owning an object. Interest is something that only the node knows about. Objects are the substance of memory. It may be better to imagine that every node has a complete duplicate of the entire universe. This is a sponge-like thing. Where it's wet is where the node's interest lies. Where it's dry is where there isn't much interest at all. What each node does is try to pipe water (recent state info) from other sponges that are damp in the same parts (ok, it's just a copy of info and won't dry out the source). An object store is just something that squeezes out the dry bits of the sponge, because usually there isn't enough space for a node to contain the entire universe (dry or wet). A node knows it can reconstitute missing parts of the sponge, and dampen them, by finding out about other nodes that have sponges that are damp in that area (even a dry sponge in that area will do in the short term, e.g. a CD). But there's an ebb and flow as interest changes, so the node's sponge will squidge and shift its compression and dampness.

We don't have to keep track of copies because every node is expected to have copies, and every node knows how up to date its copies are.

Hmmn, maybe I've not got very far in explaining things. How about this: I want to keep a node's burden of responsibility concerning tracking other nodes to a bare minimum. A node can maintain as much non-critical knowledge about other nodes as it wants, but its minimum duty is absolutely nothing UNLESS it owns objects, in which case it has a duty to keep in contact with its parent and keep it up to date with state changes to what it owns. OK, so you say, well, it wouldn't hurt to keep track of which nodes are currently subscribing to our owned objects. Well, sure, this has to happen anyway, as a publisher needs to know about its subscribers. However, this should not extend to keeping track of where the copies end up. Copies only need to be updated according to the node's interest, so it's up to the node to take pains to locate a good update source (publisher) for any objects it's interested in.

Incidentally, nodes do not express interest in specific objects; what they do is express interest in 'all objects that meet specific criteria'. It may happen that this interest will win it ownership of a subset of objects, perhaps just one object. However, most stuff like peer selection is based on interest rather than the objects that happened to meet that interest. There are some things that operate at the object level, such as write methods which may need to forward a method to the owner, and here there may be a choice as to whether the last recorded owner has a duty to forward method calls on, or can reject them (with new owner details or 'unknown') and leave the onus on the caller to track down the current owner. Perhaps the latter is the safest course of action. The current owner should be fairly quickly updated anyway, i.e. if you have an interesting object then you'll get updates to it. It should be fairly rare to get an object that meets your interest but, because it lies on the periphery or outside the interest of your current peer group (including your parent), is largely out of date and requires extra hops to locate its current owner. In the rare case that the last known owner has eliminated the object from its store (hence has forgotten to whom ownership transferred), an owner trace can be initiated by requesting it from the last known owner's parent. Ownership has to have migrated up the hierarchy from a node that used to own an object; 9 times out of 10 the parent will have current owner details (if not, then that parent's parent, etc.).
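The owner-trace fallback described here - follow possibly stale ownership records, and ask the last known owner's parent when the record has been flushed - might look like this sketch, where `records` and `parents` are illustrative stand-ins for what each node knows:

```python
def trace_owner(obj, records, parents, max_hops=10):
    """Follow possibly stale 'current owner' records to the arbitrating node.

    `records[n]` is node n's record of who owns the object (n itself if
    it is the owner); a missing entry means node n has flushed the
    object from its store, in which case we ask n's parent, since
    ownership will have migrated up through the hierarchy at some point.
    All structures here are illustrative.
    """
    node = obj["last_known_owner"]
    for _ in range(max_hops):
        believed_owner = records.get(node)
        if believed_owner is None:
            node = parents[node]      # record flushed: ask this node's parent
        elif believed_owner == node:
            return node               # this node still owns the object
        else:
            node = believed_owner     # follow the (possibly stale) record
    raise RuntimeError("owner trace exceeded hop limit")
```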

Er, regarding crowds: it is latency limited if the players involved are widely dispersed. It's also a bandwidth problem wherever they are. No system can magically overcome bandwidth and latency restrictions. If you want a Quake-like experience, and by this I mean one where laser and player locations are critical, then you need a LAN. If you could do 1,000 player Quake over 56K modems via the Internet, I think id Software would have done it already. Even if you have a thousand quad Pentium IVs with super 3D cards and 2/2Mbps DSL connections, if these players are dispersed across the planet they can't play Quake as they know and love it.

However, if a billion player game conspires to introduce players only to topologically local players, then we may yet lull most players into believing that all potential encounters are low latency, as well as the ones in their experience.

Case Study?

Has anyone tried this?

Charles Congdon  Posted: 10:27am May 24, 2001 ET  

 

Crosbie:

One final question along this line: can you think of any public systems which get close to implementing a system like you are proposing? There are a number of assumptions we are making that need to be tested, and simply knowing that someone has made progress testing them, or has discovered better alternatives, would be valuable.

Thanks, Charles

Not really (as far as I know).

Crosbie Fitch  Posted: 11:14am Jun 2, 2001 ET  

 

Well Sun’s JavaSpaces is the most advanced form of a system that might be able to be re-architected into something along the lines I propose.

The closest system that might appear to have similar long term aims in addressing large scale peer-to-peer games would be Proksim’s NetZ.

Usenet is something that could be compared I suppose: Globally collaborative development of a shared space (albeit text).

Perception

Re Q1. "A figure moves slowly through a topological simulation...".

Crosbie Fitch  Posted: 02:30pm Apr 18, 2001 ET  

 

A search for ownership is relatively short as an object always includes 'current owner' as one of its properties. If it turns out that the 'current owner' is out of date, then it is highly likely that the previous owner will know who does own the object (since it will have been the loser in a bid to obtain ownership, and thus is likely to retain sufficient interest in the object to record details of its new owner - if not then this detail can be sought from a parent). Also note that you only ever seek ownership details of objects that you are interested in, i.e. objects that you have obtained the details of.

Therefore the situation in which a large proportion of ownership details is sought will be one in which the player has left their computer offline for some time, and then reconnects. In this situation (if the game design took no steps to ameliorate it) the player would notice that the scene in view of their avatar would visibly update (in a spooky manner); indeed, the player may find that the avatar has moved and therefore that the scene blinks out of view to be replaced by the correct scene. Perhaps it would be a bit like waking in a hotel room, as one's expectations of features of a highly familiar bedroom are quickly (perhaps unpleasantly) replaced by new perspectives and new features. For this reason it may be best to have some kind of crystal ball metaphor where the player is gradually introduced to the avatar's perceptions (depending upon how quickly the most recent info can be obtained). Alternatively, one can contrive to persuade the avatar to always tend to return to a 'home' location for which change tends to be minimal.

In any case, the player's previous choice of parent node is probably a reasonably good parent for a reconnection (after even a few days); the change of parent to one currently more suited for the player's avatar can happen at the normal rate.

The fact that all nodes tend toward a hierarchy according to interest and capability means that the system will tend to maintain coherence. Now when we introduce the nocturnal habits of people across the planet, we may see a wholesale shift between daytime players and night-time players in each time zone as the world turns. This might introduce significant, systemic, 'tidal' phenomena if players provided the only resources and these were only online whilst each player was actively playing. However, I'm envisioning that in the future players' computers will tend to be online all the time, and that there will be a fair distribution of non-player computers donated to the resource pool that are highly available and highly capable. I'm smiling here at many P2P companies' expectation that they'll have no problem getting at this untapped spare resource - what's a player going to go for when they choose between $1 for giving up 12 hours of 'unused' CPU time, or a better play experience when they next hook up with their avatar? They'll go for the money of course... hmmmn... not so sure.

In general, like plane designers that don't expect passengers to run to the front and rear of the plane for laughs, for well crafted games I don't think we're talking O(N). More like O(logN) at worst.

Re2 Q1. "A figure moves slowly through a topological simulation...".

Charles Congdon  Posted: 11:45pm Apr 30, 2001 ET  

 

Clearly I was using "ownership" and "interest" interchangeably when I should not have done so.

In a topological simulation, is the ownership hierarchy likely to mimic topological "closeness," at least for most objects in the scene (moving ones being an exception)? How about the interest hierarchy? Is it a bunch of isolated islands, or will it also mimic topological connectivity? Does one walk one or both of these trees to discover what will next interest you?

I keep on coming back to the fundamental question - as you move through the simulation, you need to somehow determine what interests you next, and what no longer does. Just how does this discovery process work? What are the techniques an application could use to determine what will interest me next? I certainly can't determine this myself, since my computer only really knows about where I currently am and what I am currently interested in. How does this process of asking what will probably interest me next happen? To where do the queries go to answer this question? I still don't get how it works, especially in the case of the jet plane that is currently over the horizon, but may shortly be visible if I choose to turn at the right time. I'm feeling like we need to walk a considerable number of the nodes around us in the various hierarchies (topological, interest, and ownership) to find those nodes that might next be interesting. This feels expensive, and inaccurate (gaining interest in that jet plane particularly bothers me).

What happens if we were off-line for a while, and when we come back our cached parent is off-line? To whom do we go for a point of reference then?

Thanks,

Charles

Re3 Q1

Crosbie Fitch  Posted: 01:20pm May 3, 2001 ET  

 

This is a bit of a mind bender and I tend only to look at it in a simplistic way. In practice things will be a tad more complex and difficult to conceptualise. Ownership will tend to follow foci or centres of gravity of nodal interest in objects (where nodes are separated by network distance and weighted by performance).

I would strongly recommend that the game (rules of the universe, AI scripts, player/avatar assignment, etc.) will conspire to arrange the spatial/topological structure of the virtual world to be similar to the topological structure of the underlying nodal network, and furthermore that players will tend to be assigned avatars in topological vicinity to the implicit mapping of their node.

Note that this is just a "let's not give the system too hard a problem to start with" kind of proposal. There are plenty of other schemes that can be used. One of them may be to break the implicit identity between player and avatar and have a Sims type game where the player has a deteriorating quality of control over their minions the further away they are from the player's domain. But, all that's something to be explored. The important thing is that there exists a workable 'cyberspace' scenario that's scalable, however contrived it might need to be to fit the underlying system.

Getting back to what shape the ownership hierarchy will be when compared to the shape of the virtual world. I imagine it might be as simple as a governmental hierarchy, i.e. UN overseeing the planet, continental forum overseeing continent, national government overseeing country, regional government overseeing region, ..., head of family overseeing household, individual overseeing self (and fairly mobile). However, there'd be plenty of diffuse overlap and continuously shifting, vague borders (if any). It may be that at a certain level in the hierarchy connectivity is no longer significant in the heuristics and that there is a lot more blending going on (no discernible topological territories). Someone will have to produce some visualisation software...

There isn't really an interest hierarchy so to speak except that one of the duties of ownership is to be interested in what you own (or have delegated). So there will always be a flow of updates going towards the root (perhaps best thought of as a background process). The only hierarchy is that for delegating ownership/responsibility/arbitration, otherwise interest really is what drives the peer-to-peer relationships between nodes. That's probably a good aide memoire: a hierarchy of responsibility, and a (p2p) network of interest.

But, yes, how does interest work? It's up to the 'game developer'. They decide what an object is and what's interesting to it. To start with there are no objects (no classes either) and no interests. The game developer defines their classes (using inheritance as desired), and instantiates any objects as necessary. It could be possible to create and populate a universe with a single object (that is sophisticated enough to spawn instances of objects of other classes, etc.). Objects hold state, and can have methods (defined by their classes) called upon them. Some of the properties might be interests and some of the methods might 'register' these interests (with the ambient operating environment provided by the node). All an interest is, is just a special type of object. It observes the same class hierarchy as objects, but instead of being a container of state, it is a container of criteria (or methods) that compare state. An interest can be treated as a volatile set of objects. The objects can be enumerated (may change whilst enumerated, but there you go), operated upon, or any kind of thing that might be done with a set of objects. It is up to the node to determine what objects meet the criteria of the interest. There are no guarantees. The node may not get a chance to service the interest (due to insufficient priority, etc.), updates may not arrive in time, etc.
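To make the "interest is just a special type of object" idea concrete, here is a minimal, hypothetical sketch in Python. Nothing here is a specified API: the class names, the `priority` hint, and `service_interest` are all illustrative stand-ins for whatever a node's ambient operating environment would actually provide. The key points it demonstrates are that an interest shares the object class hierarchy but holds criteria instead of state, behaves like a volatile set, and is serviced best-effort.

```python
# Hypothetical sketch: an Interest shares the object class hierarchy but holds
# criteria instead of state. The node evaluates interests against its object
# store on a best-effort basis; nothing is guaranteed to run to completion.

class GameObject:
    """Base class for all objects in the universe."""
    def __init__(self, **state):
        self.state = state

class Interest(GameObject):
    """A container of criteria rather than state. Behaves like a volatile
    set of the objects that currently match."""
    def __init__(self, target_class, predicate=lambda obj: True, priority=1.0):
        super().__init__()
        self.target_class = target_class   # match this class and derivations
        self.predicate = predicate         # criteria comparing object state
        self.priority = priority           # hint for the node's scheduler

    def matches(self, obj):
        return isinstance(obj, self.target_class) and self.predicate(obj)

def service_interest(interest, object_store):
    """Best-effort enumeration: the set may change while being enumerated,
    and the node may not get a chance to service the interest at all."""
    return [obj for obj in object_store if interest.matches(obj)]

# Usage: a small class hierarchy, a populated store, one registered interest.
class Vehicle(GameObject): pass
class Aeroplane(Vehicle): pass

store = [Aeroplane(altitude=9000), Vehicle(), GameObject()]
low_flyers = Interest(Aeroplane, predicate=lambda o: o.state.get("altitude", 0) < 10000)
print(len(service_interest(low_flyers, store)))  # 1
```

Because `Interest` derives from `GameObject`, interests themselves can sit in the object store and be matched by other interests, which is consistent with the "observes the same class hierarchy" description above.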

One interest may be an interest in all objects that are derived from a fundamental base class, i.e. this could end up being 'all objects in the universe' if the class hierarchy had a single root. However, this kind of interest wouldn't be particularly sensible - in the absence of any other interests this'd just fill up the node's object store with whatever its parent decided to bung it (if the parent had the same object then it'd have similar behaviour, etc.). You can do silly things in this system just as you can in any other. The developer is expected to be frugal and circumspect in their use of interests, i.e. they will typically specify a spatial interest that attenuates with distance from the object expressing the interest. The aspect of the avatar interested in what they can see will specify an interest that prioritises scenery according to size and distance. Some interests will be expressed to select what might affect the avatar's decision making process (or the player's): birds, for example, might not be particularly visible, but depending upon the scenario might be critical. Such an interest would be specified for many other types of object, e.g. a tiger would be interested in smells from humans and other mammals. Bunnies would be interested in 'predators' from a threat point of view and other bunnies from a 'reproduction' point of view. A fire 'object' might be interested in nearby 'flammable' objects.
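The distance-attenuated spatial interest described above might score candidates something like this. The attenuation formula (apparent size, i.e. size over distance) and all the names are assumptions of mine for illustration, not anything specified by the system.

```python
# Hypothetical sketch of a distance-attenuated spatial interest: priority
# falls off with distance from the expressing object, and large objects stay
# interesting further away than small ones. The formula is illustrative only.
import math

def spatial_priority(observer_pos, object_pos, object_size, base_priority=1.0):
    """Priority proportional to apparent size: size over distance."""
    distance = math.dist(observer_pos, object_pos)
    if distance < 1.0:
        distance = 1.0                      # clamp to avoid a singularity
    return base_priority * object_size / distance

# A distant mountain can still outrank a nearby pebble:
mountain = spatial_priority((0, 0), (5000, 0), object_size=2000.0)
pebble   = spatial_priority((0, 0), (3, 0),    object_size=0.1)
print(mountain > pebble)  # True
```

A tiger's smell interest or a bunny's predator interest would swap in different scoring functions over different properties; the mechanism is the same.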

There's always going to be many interests operating at the same time and according to the various priorities involved, objects will be brought into the object store. But, bandwidth and storage is finite and expected to reach saturation. Sometimes an object might arrive only to be immediately flushed by the next object, more often not.

Modelling of the virtual world at the periphery of a node's interest is going to be very flaky, but then that's fine, because by definition it's uninteresting anyway. In the areas of greatest interest modelling will be the best, as the world in that area will be the most accurately updated and modelled.

The games developer is also going to be responsible for interest prediction, i.e. figuring out that the shape of future interest will be distorted by velocity or by the available directions/choices. They will also decide how interesting (priority-wise) the stuff an object might soon encounter (if it keeps going in a particular direction) should be.

Hmmn, maybe we'll call them 'game object developers'.... 'gods' hehe.

Some objects might be fairly dumb and have no interest in anything. Perhaps a mountain has no interest in anything? I dunno, it's up to the developer to decide. However, if a developer decides that an early warning beacon is interested in a plane within its range, a timid deer is interested in the horrible noises that EWBs make when they detect planes, and an avatar is interested in deer, then the deer will register interest in things that make horrible noises and EWBs will register interest in planes. An avatar might only see a spooked herd of deer running on the horizon. It might not hear what spooked 'em, it probably won't see the plane that triggered the EWB (though it might).

It doesn't matter whether the avatar, deer, EWB, or plane or any combo are simultaneously loaded on any node. They could be all loaded on several - it doesn't matter. Redundant modelling! If a node isn't modelling it, it's getting state updates for it from a node that is, and then again, it might not be interested.

Think of interest as a torch beam of life: where it shines within a node, modelling is going on in high fidelity, where it's dim, slow object update and partial modelling is going on, where it's dark, objects lie sluggish and may even evaporate. Some nodes have tiny, wandering maglite beams, some nodes have steady gigantic floodlights.

You could also think of it like a company of contractors. Each contractor has a set of skills which tend to be governed by their experience. Each job increases their experience and skill toward a particular area, some jobs reinforce what they already know, some jobs pull them into new directions. Sometimes a contractor might not know something and they will confer with a colleague who they know has better experience in this area. The colleague may even refer them to someone else. The contractors are nodes. Their jobs are the interests the nodes are serving. Their experience is the node's object store. Their relationships with their colleagues are the peer network connections.

As to your last point regarding what happens if a node goes offline and, when it reconnects, finds its previous parent offline - well, no worries. The node may have made some note of the addresses of its lineage to the root, i.e. if its previous parent is offline, it can try its grandparent, and so on until it gets to the last root it knew about. If it's successful in this then it has a relatively easy time getting back close to a node that matches its interest, and can go about reclaiming ownership (for stuff that was automatically relinquished upon disconnection), though ownership isn't as immediately important as getting updates. If it can't find a node of its lineage (it forgot the details, or it's really out of date) then there will be 'well known addresses', i.e. nodes that won't necessarily be root or anything else, but they specialise in being online 100% of the time and keep good track of senior nodes in the hierarchy.
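The reconnection strategy just described is simple enough to sketch. Everything here is hypothetical: `try_connect` stands in for real network code, and the hardwired fallback address is just the example given later in this thread.

```python
# Hypothetical sketch of reconnection: walk the cached lineage from old parent
# up to the last known root, then fall back to well-known, always-online
# addresses. try_connect() is a stand-in for real network code.

WELL_KNOWN_ADDRESSES = ["123.45.67.89"]  # hardwired last resort

def reconnect(cached_lineage, try_connect):
    """cached_lineage is ordered child-to-root: [parent, grandparent, ..., root].
    Returns the first reachable node, or None if even the well-known
    addresses fail."""
    for address in cached_lineage + WELL_KNOWN_ADDRESSES:
        node = try_connect(address)
        if node is not None:
            return node   # re-attach here, re-express interests, and reclaim
                          # any ownership relinquished on disconnection
    return None

# Usage with a fake connector: the parent is offline, the grandparent answers.
online = {"grandparent.example", "root.example"}
fake_connect = lambda addr: addr if addr in online else None
print(reconnect(["parent.example", "grandparent.example", "root.example"],
                fake_connect))  # grandparent.example
```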

Re4: Q1 - bandwidth and being frugal

Charles Congdon  Posted: 10:30am May 24, 2001 ET  

 

Crosbie:

You say The developer is expected to be frugal and circumspect in their use of interests, i.e. they will typically specify a spatial interest that attenuates with distance from the object expressing the interest. ...There's always going to be many interests operating at the same time and according to the various priorities involved, objects will be brought into the object store. But, bandwidth and storage is finite and expected to reach saturation. Sometimes an object might arrive only to be immediately flushed by the next object, more often not.

Didn't you contradict yourself here? How is saturating bandwidth being frugal? How can any meaningful traffic make it through the peripheral traffic? We have to find a mechanism here for streamlining the interaction between nodes so that we don’t just end up with a storm of packets fighting to get through small pipes.

Thanks, Charles

Re5: Q1 - Frugally specifying interests doesn't contradict saturating bandwidth

Crosbie Fitch  Posted: 12:05pm Jun 2, 2001 ET  

 

No contradiction as far as I can tell.

Frugal use of interests means not simply saying “I’m interested in everything”. That’s not the same as being frugal with the use of bandwidth. Right from the start I’ve always gone on about maximising the use of bandwidth, certainly not being frugal with it. The developer never worries about networking issues such as bandwidth, they only need to worry about making careful and considered (frugal and circumspect – whatever) judgements in ascribing priority to objects and interest in them.

As far as saturating bandwidth goes, this means monitoring how wide the pipe is at its narrowest point, and contriving to use just as much bandwidth as is available. I presume this addresses your observation of a need for streamlined interaction between nodes?
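One way to read "saturate the pipe without a packet storm" is a per-cycle budget: each cycle, send the highest-priority pending updates that fit within the measured bottleneck bandwidth, and let everything else wait or be superseded. This is my sketch of that reading, not a specified algorithm; the names and data shapes are invented for illustration.

```python
# Hypothetical sketch of "saturating the pipe": each cycle, greedily pack the
# highest-priority updates into the measured bandwidth budget. The developer
# only assigns priorities; the system decides what actually gets sent.
import heapq

def fill_pipe(pending_updates, budget_bytes):
    """pending_updates: list of (priority, size_bytes, payload).
    Returns the payloads sent this cycle; anything that doesn't fit waits
    for a later cycle (or is superseded by a fresher update)."""
    sent, remaining = [], budget_bytes
    # Highest priority first (heapq is a min-heap, so negate priorities).
    heap = [(-p, size, payload) for p, size, payload in pending_updates]
    heapq.heapify(heap)
    while heap:
        neg_p, size, payload = heapq.heappop(heap)
        if size <= remaining:
            sent.append(payload)
            remaining -= size
    return sent

updates = [(0.9, 400, "avatar pose"), (0.5, 900, "deer herd"), (0.1, 800, "distant jet")]
print(fill_pipe(updates, budget_bytes=1400))  # ['avatar pose', 'deer herd']
```

The low-priority "distant jet" update is the one squeezed out when the pipe is narrow, which matches the idea that modelling at the periphery of interest is allowed to be flaky.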

Re4: Q1 - keeping track of everything

Charles Congdon  Posted: 10:33am May 24, 2001 ET  

 

Crosbie:

More clarification needed...

It doesn't matter whether the avatar, deer, EWB, or plane or any combo are simultaneously loaded on any node. They could be all loaded on several - it doesn't matter. Redundant modelling! If a node isn't modelling it, it is getting state updates for it from a node that is, and then again, it might not be interested.

I forgot about all that modelling while I was worrying about all the traffic we have passing about. That might be a way to keep the updates down.

If it can't find a node of its lineage (it forgot the details, or it's really out of date) then there will be 'well known addresses', i.e. nodes that won't necessarily be root or anything else, but they specialize in being online 100% and keeping good track of senior nodes in the hierarchy.

Interesting! So in addition to the hierarchy and p2p, another structure could overlay the system to provide fast access in situations where it is impossible (you don't have a starting point to talk to) or too expensive to otherwise traverse the entire hierarchy (you and a bunch of your parents are all on tiny network links).

Thanks, Charles

Re5: Q1 - Modelling & Well Known Nodes

Crosbie Fitch  Posted: 12:30pm Jun 2, 2001 ET  

 

I forgot about all that modelling while I was worrying about all the traffic we have passing about. That might be a way to keep the updates down.

Er, yep, duplicated modelling is what keeps things going in between updates. Just like a new frame of video refresh keeps our brains happy while they model the perceived motion.

Interesting! So in addition to the hierarchy and p2p, another structure could overlay the system to provide fast access in situations where it is impossible (you don't have a starting point to talk to) or too expensive to otherwise traverse the entire hierarchy (you and a bunch of your parents are all on tiny network links).

Hmmn, well I don’t really think it’s another structure.

There may be plenty of directory services and such like that obviate the need for well known nodes, perhaps even a kind of broadcast/advertising system, but in the absence of alternatives (if we just had an empty Ethernet and not the Internet) then well known nodes are a crude solution, e.g. 123.45.67.89 might be hardwired into each client as a first point of call if no alternative solution exists. But, I don’t think we need to do any work on this problem (cos it is shared by many other systems).

Locatability

Re Q2 "...try to meet you in a corner of this crowded space..."

Crosbie Fitch  Posted: 02:31pm Apr 18, 2001 ET  

 

Well, we're beginning to talk about how to design a plane that can cope with poorly behaved passengers, but anyway... ;-)

If we don't allow ball game fans to teleport to the stadium, but instead have them make their way there by conventional transport, that provides some time for the inherent load balancing characteristics of this system to adjust the hierarchy appropriately and in a fairly timely manner. However, to get an idea of what goes on, consider what would happen if you had a rubber sheet with white dots representing the spatial separation of avatars (and their nodes) and bordered zones representing the parent nodes that serve them. If you stretched/compressed the sheet such that some of the nodes became very close together, it wouldn't necessarily change the nodal relationships; it just means that each parent node's spatial interest is adjusted (in terms of a notional 'interest distribution function'). Where spatial interest around the avatars may previously have been in terms of miles, it may now be in terms of yards. Otherwise, nothing much has changed. The fabric of reality adapts, as it were, to the distribution of entities within it - the avatars think in terms of being in the same stadium instead of merely the same county, their perception of scale may have changed, but load balancing is unaffected. In this way it doesn't matter whether avatars are spacecraft with a separation of light years or football fans with a separation of yards. What does matter, however, is how quickly the relationships change between these avatars, how many other avatars one avatar can affect, and how critical it is for this to be accurately modelled.

If two avatars have a prior ongoing relationship, then the interaction between them is likely to be modelled more accurately than interactions with avatars with which no prior relationship has occurred.

So we have a fundamental limit to the rate at which node/node relationships can change when large numbers are involved. This is governed by latency, i.e. given 64kbs: in a crowd of ten people it may be no problem, in a crowd of a hundred, fidelity may be reduced, but in a crowd of a thousand or more it may be too much to expect perfect modelling of interactions if each member of the crowd continuously and rapidly changes the other members of the crowd with which they interact. In other words, a conference hall with a thousand seated delegates talking to their neighbours would be no problem (unless you are conducting a secret experiment to check that certain delegates blow their noses or itch their scalps at exactly the right moments). However, a Roman arena with a thousand sub-machine gun armed Quake addicts playing a game of 'last man standing' would be chaos in modelling terms (it would be chaos anyway, so perhaps it doesn't matter).
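The crowd-size limit above can be checked on the back of an envelope. The numbers are assumptions for illustration (the 64 kbit/s link from the post, a 32-byte state update, and every crowd member wanting updates about every other), but the scaling behaviour is the point.

```python
# Back-of-envelope check of the crowd limit: on a fixed link, the update rate
# available per peer falls roughly as 1/N. Assumed figures: 64 kbit/s link,
# 32-byte updates, all-to-all interest within the crowd.

LINK_BPS = 64_000          # 64 kbit/s, as in the post above
UPDATE_BITS = 32 * 8       # assumed 32-byte state update

def updates_per_peer_per_second(crowd_size):
    peers = crowd_size - 1
    return LINK_BPS / (UPDATE_BITS * peers)

for n in (10, 100, 1000):
    print(n, round(updates_per_peer_per_second(n), 2))
# 10 peers: ~28 updates/peer/s (fine); 100: ~2.5 (reduced fidelity);
# 1000: ~0.25, i.e. one update per peer every four seconds (chaos)
```

Latency compounds this: even with infinite bandwidth, rapidly re-pairing a thousand interacting peers cannot be both accurate and timely, which is the conference-hall versus Roman-arena distinction above.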

So if two avatars want to meet each other in a corner of a stadium, they do it like two people in the real world: they describe a point of reference and go to it. Both their nodes express an interest in the same locality, and given that the other avatar is likely to be more interesting than any other avatar in the crowd, they should meet up and have relatively good mutual modelling fidelity. The rest of the avatars in the crowd can be modelled using a default 'avatar at ball game' behaviour (until such time as updates arrive).

But, really, let's not try for high-fidelity crowd modelling just yet ok? :-)

Re: Q2 "...try to meet you in a corner of this crowded space..."

Charles Congdon  Posted: 11:46pm Apr 30, 2001 ET  

 

Crosbie:

I take all your points. I guess maybe I should have asked the question differently. We enter the crowded stadium. Clearly we will see the people closer to us at a higher fidelity than those on the other side of the stadium, and our sphere of interest will intersect with these closer people almost immediately upon entering the stadium. But how do we know that the stadium has any more people in it than those in our immediate vicinity, which we can discover and take interest in via the traditional approach (whatever that is)? Is there any way for me to look across the stadium and tell that the seats are not as full over there as they are where I just entered? How do I tell if the players have entered the field yet, or where the virtual hot-dog stand is?

It occurs to me that this may be a good case for the application designer to have a special kind of object in the world: the stadium object. It keeps track of where everyone is in the stadium, where the players are, and where the hot-dog stands live. It then communicates and updates this information for all the people who enter the stadium, rather than forcing them to ask around. So maybe we have a good case for where service providers can make money: any place where many people can gather needs to be specially hosted. You pay to enter and have this system provide you with the necessary data to experience this area as highly populated. If you don't pay, you learn about other people in the area the traditional way, and miss the larger picture (the ongoing game, the cute avatars cheering the game just outside your interest sphere, etc.). High fidelity isn't even the point - simply knowing and seeing the place as crowded is the service which such a hosted area can provide (much like that teleport tube, which gathers possible interesting data for those who might enter the tube). At least as I understand it, the process by which we discover what is around us that might be interesting is not able to provide such a service, simply due to the scale of the search and the number of other people making it at the same time if nothing else.

I still don't get how gathering these data actually occurs... :-)

Thanks, Charles

Re3: Q2

Crosbie Fitch  Posted: 07:20am May 4, 2001 ET  

 

Interest is not a singular kind of thing. You can have any number of interests, big ones at a low priority, small ones at high priority, weird shaped ones, strangely defined ones, as many as an object needs, and the set of active interests an object has at any one time is expected to change fairly frequently. Remember, CPU & Storage is not a problem. If at the end of the day thousands of interests combine in priority to convey the optimal set of objects given the available bandwidth, then we get the best modelling available.
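One plausible reading of "thousands of interests combine in priority" is that each object's effective priority is the strongest claim any active interest makes on it, with objects conveyed in that order for as long as bandwidth lasts. This sketch is my interpretation; the representation of interests as (predicate, priority) pairs is an assumption, not a specified design.

```python
# Hypothetical sketch of combining many coexisting interests: an object's
# effective priority is the maximum priority across all interests matching it.

def combined_priorities(objects, interests):
    """interests: list of (matches_fn, priority). Returns objects sorted by
    the maximum priority any interest assigns them; unmatched objects drop out."""
    scored = []
    for obj in objects:
        priorities = [p for matches, p in interests if matches(obj)]
        if priorities:
            scored.append((max(priorities), obj))
    return [obj for _, obj in sorted(scored, reverse=True)]

interests = [
    (lambda o: o["kind"] == "avatar", 0.9),   # narrow, high priority
    (lambda o: o["dist"] < 100,       0.4),   # medium, nearby things
    (lambda o: True,                  0.05),  # broad low-priority backdrop
]
objs = [{"kind": "avatar", "dist": 500}, {"kind": "tree", "dist": 50},
        {"kind": "cloud", "dist": 9000}]
print([o["kind"] for o in combined_priorities(objs, interests)])
# ['avatar', 'tree', 'cloud']
```

The broad low-priority interest keeps distant scenery trickling in when nothing more urgent needs the bandwidth, which is the "best modelling available" behaviour described above.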

Some interests will provide simplistic details of the stadium, some will provide coarse-level details of the location of other avatars, and some narrower interests will provide higher-fidelity details of some things. Some interests may attenuate and take the size of an object into account in one fell swoop. Interest is just a mechanism available to the games developer to use in order to specify what they think an avatar (or any other object for that matter) needs to see, at whatever level of detail or modelling fidelity is necessary.

There's nothing to stop someone creating little snapshot daemons that lurk around taking JPEG photos from certain viewpoints every so often (on nodes whose performance has time for 'em) and allowing these to be distorted and used as backdrops by any avatar that happens to need a quick rendering of a distant vista. Of course, only some nodes might be able to afford these things and have an interest in them (or the interest will only tend to be served on nodes with good bandwidth). What I'm talking about is a fairly generic (but, scalable, don't forget 'scalable') 'best-effort' distributed modelling system. I'm not talking about a particular implementation of a virtual universe. There are many techniques yet to be developed as far as figuring out the best types of interest and combinations thereof.

If you want something such as a stadium object that has some intelligence regarding how to assist avatars in visualising the avatars and other objects that it contains, well, be my guest. That's a game developer issue. It may be useful, I don't know. However, I am pretty loath to recommend that the system allow people to lock virtual objects down in terms of commercial real estate. It's a bit like Open Source. There's no reason why someone can't sell a sophisticated stadium object for use in a public virtual world (obviously, perforce a one-off sale into the public domain). Alternatively, someone could reengineer this system introducing some kind of authentication (and pay-per-view) mechanism into it - selling live video feeds to the virtual stadium. The system can be Open Source, but that doesn't stop commercial providers selling access to a virtual 'American Football' universe that they provide on a secured implementation of the system. But, remember, it's likely that with security come performance, connectivity, and scalability issues.

Perhaps we can have some kind of ad hoc standard that defines how universes (that adopt the standard) can allow avatars to relatively seamlessly transition from universe to universe, with some gateways requiring subscriptions when going from a public universe to a commercial one.

There are plenty of commercial opportunities, but the first step is to get the system working in the public domain, i.e. without the encumbrance of mechanisms required by commerce. (cf http first, https second).

The interest "language" and handling large groups

Charles Congdon  Posted: 10:35am May 24, 2001 ET  

 

Crosbie:

Care to hazard a guess on the "language" used to express all these different "shapes" of interest? For a system of any size, it occurs to me that these "shapes" will need to be created and refined dynamically -- you couldn't hard-code them into the implementation and make them flexible enough. The thing that worries me about query "languages" is that people many times go overboard. Especially in the case where we have limited bandwidth and lots of latency, do you have any thoughts on how to keep the "query packets" associated with interest small enough while also making the interest system extensible and flexible enough?

Switching gears, I think I confused you with a particular tree (the stadium example) that is hiding the forest. The only reason I mentioned this as a commercial object in the game is that it seems to me a fair-sized server would be needed. This server would host an area that can handle a large number of people at once, sending them data that fulfils their interests without the need for the interests to trickle through everyone else participating in the group experience. My idea is that in such a "place" a node could express all their interests just to the stadium, which returns the data of interest to them directly (as well as, possibly, a pointer to the owner of the information).

Game developer issue or not, I think one major "selling point" (way of getting people to participate) of a massive simulation is to give users the sense that many other people are present in the simulation. Not just distantly ("there are now over 2 million connected to this system"), but in your immediate vicinity (I'm in an area with 20,000 other people listening to a concert). You need to be able to get a sense of the number of other people in a central park, stadium, or other place as well as their bulk behaviour. This is what differentiates a large simulation from a Quake level, where you don't see too many other people at any given time.

I am uncertain how this interest system alone could help the game developer create something as simple as a night club, where the point is to see and be seen. The interest mechanism could show you the close neighbours, but beyond that I don't know. Could one have a two-way interest "in the club object," which would communicate details on who else is there and where they are, as well as the expected data on what the club looks like, its physical extent, etc.? "Bouncers" need a mechanism for ejecting anyone in the club that is no longer welcome. Participants need a way of seeing the spotlighted couple as they dance, or the celebrity in the roped-off table in the corner. A central, stable system of objects could maintain this consistent state for all participants in the club.

Which raises an interesting question: does it make more sense from the scalability argument as well as others to implement the club, stadium, or park as a separate "world/universe" connected to the "base world/universe" of the simulation, or is it implemented simply as an area of the simulation with partitioned interest traffic?

Thanks, Charles

Re5q2: Scalable doesn't necessarily mean 'stadia'

Crosbie Fitch  Posted: 02:14pm Jun 2, 2001 ET  

 

Care to hazard a guess on the "language" used to express all these different "shapes" of interest? For a system of any size, it occurs to me that these "shapes" will need to be created and refined dynamically -- you couldn't hard-code them into the implementation and make them flexible enough. The thing that worries me about query "languages" is that people many times go overboard. Especially in the case where we have limited bandwidth and lots of latency, do you have any thoughts on how to keep the "query packets" associated with interest small enough while also making the interest system extensible and flexible enough?

An interest isn’t much different from an object, i.e. it has the same schema, and like the object has properties and methods. The properties can be compared against objects of the same class (or derivations). Expressing an interest in all aeroplanes would simply be to declare an interest of class aeroplane and leave the properties null or ‘match any’. Thus a radar object interested in aeroplanes would have as one of its ‘start-up’ methods something that registered this interest. If it was only interested in aeroplanes in range then it would either specify criteria values in appropriate position properties to meet this requirement, or it might have a method that performed a computation to decide if the plane was ‘in range’.

Note that interests are expected to be a fraction of bandwidth compared to update traffic.
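The radar example above might be sketched like this. It is hypothetical: the class and method names are mine, but it shows the two matching styles described, i.e. property criteria where `None` means "match any", and a method-style predicate for computed criteria such as "in range".

```python
# Hypothetical sketch of the radar interest: match on class (and derivations),
# treat None-valued criteria as "match any", and allow an optional predicate
# for computed criteria like a range check.
import math

class Aeroplane:
    def __init__(self, x, y):
        self.x, self.y = x, y

class ClassInterest:
    def __init__(self, target_class, criteria=None, predicate=None):
        self.target_class = target_class
        self.criteria = criteria or {}     # property -> required value (None = any)
        self.predicate = predicate         # computed criteria, e.g. a range check

    def matches(self, obj):
        if not isinstance(obj, self.target_class):
            return False
        for prop, required in self.criteria.items():
            if required is not None and getattr(obj, prop, None) != required:
                return False
        return self.predicate(obj) if self.predicate else True

# A radar at the origin, interested only in aeroplanes within 100 units:
in_range = lambda plane: math.hypot(plane.x, plane.y) <= 100
radar_interest = ClassInterest(Aeroplane, predicate=in_range)

print(radar_interest.matches(Aeroplane(30, 40)))    # True (distance 50)
print(radar_interest.matches(Aeroplane(300, 400)))  # False (distance 500)
```

Registering `radar_interest` on start-up would then be one of the radar object's methods, as described; the interest declaration itself is small, consistent with interests being a fraction of update traffic.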

Switching gears, I think I confused you with a particular tree (the stadium example) that is hiding the forest. The only reason I mentioned this as a commercial object in the game is that it seems to me a fair-sized server would be needed. This server would host an area that can handle a large number of people at once, sending them data that fulfils their interests without the need for the interests to trickle through everyone else participating in the group experience. My idea is that in such a "place" a node could express all their interests just to the stadium, which returns the data of interest to them directly (as well as, possibly, a pointer to the owner of the information).

When you have a scalable world (think colossal) it becomes impractical to rely upon human intervention to determine what hardware is required where, and the necessary resources that should be installed. The best we can hope for is that a range of diverse computers are introduced to the system (probably continuously).

If someone has designed a stadium that is likely to be of interest to a large number of players who'd all like to congregate there, then firstly I hope the designer appreciated the possibility that this could cause a bit of a stress point. Secondly, the system will locate the stadium object appropriately. The collection of objects forming the stadium will tend to migrate towards the computers that are best suited to model them and serve the current interests. If there are a large number of interests then the greatest interest will lie toward the root of the mutual parents of interested nodes, and some parts will drift down if of interest to only a proportion of nodes. So the pitch might be interesting to everyone, but the vending machine at gate 17 might only interest the avatars hanging around there. The stadium objects will thus sort themselves out.

There is simply too little time to sort this kind of thing out manually. I’m talking about a system that must support explosive growth, both in terms of numbers of nodes as well as content.
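The "migrate toward the mutual parents of interested nodes" heuristic amounts to finding the lowest common ancestor of the interested nodes in the hierarchy. This sketch and its toy tree are illustrative assumptions; a real implementation would weight by bandwidth and performance rather than pure tree position.

```python
# Hypothetical sketch: a hotly contested object drifts toward the lowest
# mutual parent of the nodes interested in it. The pitch, interesting to all,
# rises toward the root; the gate-17 vending machine stays near its few fans.

def lineage(node, parent_of):
    """Path from node up to the root, inclusive."""
    path = [node]
    while node in parent_of:
        node = parent_of[node]
        path.append(node)
    return path

def best_host(interested_nodes, parent_of):
    """Lowest node that is an ancestor of (or equal to) every interested node."""
    common = set(lineage(interested_nodes[0], parent_of))
    for node in interested_nodes[1:]:
        common &= set(lineage(node, parent_of))
    # The lowest common ancestor is the first common node on any lineage.
    for node in lineage(interested_nodes[0], parent_of):
        if node in common:
            return node

parent_of = {"fanA": "gate17", "fanB": "gate17", "fanC": "gate3",
             "gate17": "stadium", "gate3": "stadium", "stadium": "root"}
print(best_host(["fanA", "fanB"], parent_of))          # gate17
print(best_host(["fanA", "fanB", "fanC"], parent_of))  # stadium
```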

Game developer issue or not, I think one major "selling point" (way of getting people to participate) of a massive simulation is to give users the sense that many other people are present in the simulation. Not just distantly ("there are now over 2 million connected to this system"), but in your immediate vicinity (I'm in an area with 20,000 other people listening to a concert). You need to be able to get a sense of the number of other people in a central park, stadium, or other place as well as their bulk behaviour. This is what differentiates a large simulation from a Quake level, where you don't see too many other people at any given time.

Why do people participate in the Web? Do they need a visible sense that there are millions of people cruising it at the same time? Nope. Seeing might be believing, but that doesn’t mean belief is difficult without sight.

The kind of simulation I’m trying to design a system to support is not about being able to simulate stadium type events. Well, if that’s the type of simulation you’re after, then client/server may be your best bet. What I mean by scalability is being able to support an environment of unlimited size, and the ability to support an unlimited number of simultaneous players. Latency (rather than the system) tends to be the thing that impairs stadium events.

This is not me being awkward. It’s the laws of nature. You can solve scale, but you can’t solve the problem of exchanging accurate and timely data concerning a large number of mutually interacting players when faced with the speed of light. It’s either accurate and not timely or vice versa. The only solution is to segregate players. For this you need a system optimised to support it, and a game design that achieves it.

The selling point I’m going for is not ‘in your face - millions of players’, but a more subtle one, i.e. it’s the size of the potential pool of people you could encounter and have relationships with that is valuable. This also applies to the content, e.g. you don’t have to explore the planet in order to value its diversity and vastness. It is the ongoing titbits of evidence that you are participating in a huge world that matters, rather than having it thrust into your consciousness like Douglas Adams’ total perspective vortex.

I don’t doubt that I’ve only seen a tiny fraction of all the web pages that there are to see, and that I’ve only corresponded with a tiny fraction of the number of people that cruise the web. Nevertheless, I’d far rather cruise the web than restrict myself to a carefully prepared Teletext system, or Compuserve’s internal pages, or even MSN’s web pages for that matter.

We’re looking at the difference between prepared and consumed entertainment, versus shared creation and exploration. The Web is an example of what happens when you let people publish to each other. Think what would happen if people could create worlds and explore and interact within them together…

I am uncertain how this interest system alone could help the game developer create something as simple as a night club, where the point is to see and be seen. The interest mechanism could show you the close neighbours, but beyond that I don't know. Could one have a two-way interest "in the club object," which would communicate details on who else is there and where they are, as well as the expected data on what the club looks like, its physical extent, etc.? "Bouncers" need a mechanism for ejecting anyone in the club that is no longer welcome. Participants need a way of seeing the spotlighted couple as they dance, or the celebrity in the roped-off table in the corner. A central, stable system of objects could maintain this consistent state for all participants in the club.

The night club is scenery.

Objects with vision and behaviour (avatars, bouncers, etc.) need to express interest in their environment, i.e. most other objects in the vicinity. There can be multiple interests, e.g. in geometry & texture data appropriate to the player’s platform, behaviour data, state properties, etc.
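To make the idea of multiple parallel interests concrete, here is a minimal sketch in Python. The class and field names (Interest, Avatar, express_interest) are illustrative assumptions, not part of any actual implementation:

```python
class Interest:
    """One channel of interest an object registers with the system."""
    def __init__(self, kind, criteria, priority):
        self.kind = kind          # e.g. "geometry", "behaviour", "state"
        self.criteria = criteria  # what to match, e.g. a vicinity radius
        self.priority = priority  # designer-defined download priority

class Avatar:
    def __init__(self, position):
        self.position = position
        self.interests = []

    def express_interest(self, kind, criteria, priority):
        self.interests.append(Interest(kind, criteria, priority))

# A bouncer object registers several interests at different priorities:
bouncer = Avatar(position=(12.0, 0.0, 3.5))
bouncer.express_interest("geometry", {"radius": 50.0, "lod": "platform"}, priority=3)
bouncer.express_interest("behaviour", {"radius": 20.0}, priority=2)
bouncer.express_interest("state", {"radius": 20.0, "props": ["unruly"]}, priority=1)
```

The point is simply that each interest is a separate, prioritised subscription; the system satisfies the highest-priority ones first as bandwidth allows.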

Bouncers (when not played by humans) need AI scripts which detect unruly behaviour and describe how to interact with the guilty parties in order to forcibly eject them.

A ‘central, stable system’ is what you get from a client/server system, or even a system that enforces a correspondence between spatial zones and the underlying computer that is assigned to model them. This is an intermediate approach that I expect will become popular (it’s more reassuring) prior to the adoption of the distributed objects approach, i.e. object granularity as opposed to zone granularity.

Which raises an interesting question: does it make more sense from the scalability argument as well as others to implement the club, stadium, or park as a separate "world/universe" connected to the "base world/universe" of the simulation, or is it implemented simply as an area of the simulation with partitioned interest traffic?

It may well be easier to understand a system that parcels up the virtual world into zones and then continually optimises the allocation of the zones around the participating computers, but whilst this might be easier to understand, I think people will find that it introduces additional complexity and inflexibility.

The 3D spatial zone can be thought of as a special case of interest. My point is that it therefore doesn’t need to be hardwired into the system. It’s not as though it’ll make the system any more efficient; it’s just easier for people to understand. That’s the main reason why you’ll see it being adopted. Well, you’ll get some people suggesting that better consistency can be maintained within zones, but then the inconsistency is simply shunted over to the interfaces between zones. This scheme then develops into larger zones with portals in between, and before you know it, you’re back to multiple servers.

I’m trying to convince you to do an easy implementation first (albeit perhaps difficult to understand) and then to see if by making it more complicated (albeit easier to understand) you can obtain a better result.

NO PARTITIONS!

Relocation

Re Q3. "What if I "beam into" a new location in the virtual world? ..."

Crosbie Fitch  Posted: 02:32pm Apr 18, 2001 ET  

 

Well, if we had a totally unrestricted teleport device, then we're looking at O(log N) to locate the most appropriate node to obtain the desired info (probably an automatic candidate as a new parent). After that, it's a matter of the normal process of sending out peer tendrils to get the most up-to-date info concerning all objects in that location. There'll be a delay as the latest info concerning that region is downloaded (in order of interest and priority). I'd expect the teleporting avatar's node to have some knowledge of every part of the virtual world (on DVD-ROM, say), so this delay can be reduced. NB 'priority' is something determined by the designer of the game, whether hardwired or procedural.
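The two steps above can be sketched as follows. This is a toy model under stated assumptions: the node hierarchy is a simple tree where each node knows which regions its subtree can serve (the `Node` class, `regions` field, and region names are all hypothetical), and the priority-ordered download is just a max-heap of pending objects.

```python
import heapq

class Node:
    """Hypothetical node in the distribution hierarchy."""
    def __init__(self, name, regions, parent=None):
        self.name = name
        self.regions = set(regions)  # regions this node's subtree can serve
        self.parent = parent
        self.children = []

def locate_serving_node(start, region):
    """Climb the hierarchy until the region is covered, then descend into
    the most specific child. Both walks are bounded by tree depth, giving
    the O(log N) cost mentioned above."""
    node = start
    while node.parent and region not in node.regions:
        node = node.parent  # ascend toward the root
    while True:
        narrower = [c for c in node.children if region in c.regions]
        if not narrower:
            return node
        node = narrower[0]

def download_order(objects):
    """Stream objects for the new region, highest designer-set priority first."""
    heap = [(-priority, name) for name, priority in objects]
    heapq.heapify(heap)
    while heap:
        _, name = heapq.heappop(heap)
        yield name

# Toy hierarchy: the root covers both regions, each child covers one.
root = Node("root", {"A", "B"})
ca = Node("ca", {"A"}, parent=root)
cb = Node("cb", {"B"}, parent=root)
root.children = [ca, cb]
```

So a node currently parented under region A's subtree that teleports into region B would walk up to the root and back down to `cb`, then start draining the priority queue.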

The thing is though, I'd expect most games would contrive to keep a player's avatar in a similar topological position in the virtual world to the position of their node in the system. Free teleporting would probably break this. For that matter, teleporting, like crowd modelling, is something whose absence will not destroy the potential for entertainment.

Re2 Q3. "What if I "beam into" a new location in the virtual world? ..."

Charles Congdon  Posted: 11:47pm Apr 30, 2001 ET  

 

Crosbie:

Yes, unrestricted teleporting is something that might be good to avoid. In fact, due to poor latency handling, some online games already have "teleporting," although unintentionally. Maybe not having that problem is a plus for such a system. Restricted teleporting, though, is something I think would be good. "Grand Central Teleport Stations", or high-speed "bullet trains" that get you from one part of a spatial simulation to a very distant part could help with system adoption. After all, I didn't sign up to a virtual simulation of a world only to spend all my time driving across the country. While that may interest some, this system won't live without the interpersonal interaction part. And I think that everyone would be happy to do away with crowded airports and delayed flights in the virtual world...

Thought problem: So what if my avatar calls yours on a video phone? Or I tune into a live virtual news broadcast (we need a whole new lexicon here!) Doesn't this count as a "partial teleport," since you suddenly gain, through the video phone or TV, interest in things far away from you? Fortunately, the discovery process becomes easy, since the phone on each end can keep track of the state of the world at its end and quickly update a caller with what is currently in view (sort of a "lite" version of the stadium object).

Thanks, Charles

Re3 Q3 (teleporting)

Crosbie Fitch  Posted: 08:06am May 4, 2001 ET  

 

An object that facilitates encounters with distant objects should have an interest in those objects. Thus when another object (such as an avatar) uses such a facility they will already have an active interest in the distant objects (they will be to hand and fresh). This is ok for things such as TVs, fixed teleport pads, radars, etc. as their distant interest is fairly well defined. However, video phones, telescopes, and free teleporters are much more of a problem given that it can be up to the avatar's choice as to which distant location the object is directed to. Well, an astronomical telescope might not be much of a problem if it's only usable at night, and it's not expected to pick up other players whizzing around in their spacecraft, i.e. we can have it accompanied by a spherical texture map of the night sky (or an interest in a recent rendering thereof).

Totally free teleports would need an inbuilt delaying mechanism to allow the teleporter's interest in the avatar's eventual choice of destination to result in sufficient detail to be 'downloaded' to then transport the avatar into the new scenery (and have some hope of it being more than a recent snapshot of the scenery as held on a CD the node just happens to have in the drive). Of course, if the avatar had recently been at the destination then they may have a residual interest in where they've just been (if the game developer thinks that's a good idea), and so it may be that not much information need be transferred (though it might be decided that teleporters need a consistent delay). However, if the avatar is transported to a remote location on the network (which I don't recommend) then latency will start determining the player's fidelity of control and experience. Though even in this case the node will still be busily forming direct peer connections with suitable remote node(s) (at a reasonable nexus of whatever other nodes are currently interested in that location). If there was insignificant difference in latencies no matter where on the network a connection was made then there's not much point in making the virtual world's topology similar to the network's, but we'll see how things turn out...

Discovery

Re Q4 "Let's say I'm a new user...."

Crosbie Fitch  Posted: 02:33pm Apr 18, 2001 ET  

 

The golden rule applicable in this case is: 'the player is not their avatar'. There is a virtual world modelled independently of the real world. The player's interface is a means of connecting the player to a denizen of the virtual world. This denizen becomes an avatar of the player. So you see, nothing changes in the virtual world just because a player has decided to play the game. All that happens is that the player's monitor is like a crystal ball onto the perceptions of the avatar that the player has been paired up with. We really are talking 'Being John Malkovich' here. Of course, the player may have some preference as to the kind of avatar they want and the situation they want the avatar to be available for, and the system will have to persuade the player to accept an avatar in a topological location that is able to meet the player's expectations in terms of modelling fidelity.

In terms of numbers, the virtual world may increase its population of avatars to keep pace with the growth in player population (as enabled by the concomitant growth in resources). The virtual world may also increase in terms of content to provide lebensraum for all these avatars.

So I don't see much of a 'discovery problem' here.

Re2 Q4 "Let's say I'm a new user...."

Charles Congdon  Posted: 11:48pm Apr 30, 2001 ET  

 

I still assert that there is a discovery problem here, although not as much as I originally thought. People will want some control over the avatar they inhabit. You see this in all sorts of RPGs, and even games like The Sims, where people spend much time creating a character that they want to play. This extends to where the character lives, what the character does, and who their neighbours are. Now, like the real world, there *will* be limitations, namely that most of the interesting places to live already have people living there.

My point is that I don't feel most people will want to link up with a random character in a random part of the world. Somehow this "personalized" avatar will have to enter this system.

But say that you are limited to avatars that are already out there. How do I find a free dairy farmer avatar in virtual England to follow around? Somehow the software out of the box has to know where to ask for the "world map" server, to locate the "England" server, to locate the "Yorkshire server," and so on. Then it has to be able to ask around for a free farmer in that area, and either attach me to that avatar if one is free, create one if that is possible, or report back if the system doesn't allow ad-hoc addition of avatars to an area. I can't imagine that this will be a cheap set of queries, and every new user will need to do it. I still don't understand how we even get our first hook into such a large system.

Now, about those avatars that are hanging around waiting for me to start controlling them. How are they created? Who maintains them for the weeks before someone takes them over? How are new avatars added to the system? These are not small problems when the system starts to get large. Maybe you should allow users to create their own avatars, if anything just to minimize the authoring problem that populating a large world will entail. And it has to be reasonably populated from the start, just to hook people's interest.

Maybe this is the case for another commercial application. New users are directed to a commercial "avatar" server, which maintains a list of who is free, where they are, and what they are like. So all users do is hit up this server for the initial entry to the system. Once they are happily controlling their perfect John Malkovich, they are handed off to the world at large with enough information to hook them up with the proper owners right off the bat.

Cheers, Charles

Re3 Q4 (New players)

Crosbie Fitch  Posted: 08:58am May 4, 2001 ET  

 

Hey, players want ray-traced Quake at film res and 100fps, but that doesn't mean they get it (well today, anyway).

However, at the end of the day any suggestions I make for the way the virtual world's should work is really just what I reckon would work best. If another virtual world designer comes along and decides that players get 100% dedicated, exclusive and fully immersed control of their avatar, then that's up to them. And there's nothing to stop an avatar being created any time a player wants one, and in any place the player wants it, but again, what I've said so far is what I think the system would have least problem dealing with.

As to how a 'first-time' process might work, well, remember that interests are fine query mechanisms. The player effectively gets to express an interest in 'farmers, dairy, England, unassigned' or something like that, depending on how the developer has defined their classes. The moment the interest has been expressed, farmer objects meeting those criteria start getting downloaded. The user interface could immediately render the view from one of the farmers (raising the priority of that farmer's interest in what they can see). Alternatively, it could display a list of locations for the player to choose from. The player could select Yorkshire, and then a list of farmer names could be displayed and the player could select Giles. And then, the view of Giles could be displayed. Next time, if interest in Giles's environment has been maintained, the player has no delay in resuming their influence over the farmer's decision making (or if the system can provide such immersion, the player can resume their first-person perspective from Giles's eyes).
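An interest used as a query could look something like the sketch below. The attribute names and the sample data are invented for illustration; the mechanism is just "download every object whose attributes satisfy the interest":

```python
def matches(obj, criteria):
    """True if the object satisfies every attribute named in the interest."""
    return all(obj.get(k) == v for k, v in criteria.items())

# Hypothetical farmer objects already present in the system:
farmers = [
    {"class": "farmer", "type": "dairy", "region": "England", "assigned": False, "name": "Giles"},
    {"class": "farmer", "type": "arable", "region": "England", "assigned": False, "name": "Ned"},
    {"class": "farmer", "type": "dairy", "region": "England", "assigned": True, "name": "Seth"},
]

# The player's first-time interest: 'farmers, dairy, England, unassigned'.
interest = {"class": "farmer", "type": "dairy", "region": "England", "assigned": False}
candidates = [f["name"] for f in farmers if matches(f, interest)]
# candidates == ["Giles"]
```

In the real system the match would of course be evaluated by the nodes holding the objects, not by a linear scan on the client, but the query semantics are the same.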

As to whether players can have god-like powers, or who creates avatars or any other object, this is up to the designer. Hackers would love god-like powers, but that's another story. Avatars could be created on a pay-per-avatar basis if the system was secured. Alternatively they could be created by special players, or on demand, or as defined by a script that monitors the ratio of players to available avatars. There could even be a population density monitor that creates new land-masses (lebensraum) and populates it with culturally appropriate denizens.

And yup, a scalable virtual world is going to have to survive without a team of artists frantically beavering away as players arrive in exponential numbers. Either it will have to be created by the players themselves, or it'll have to be created programmatically. It's possible, though, that one could create a huge void and each player brings content to it in the form of a spacecraft that's created for each one. Asteroids can then start coagulating out of the debris of various wrecks from battle. Probably many other solutions too.

As to what happens when players go offline (also answered in an older thread), again it's up to the designer, but I like the idea of giving them enough AI to carry on a daily maintenance routine, e.g. feed the dog, empty the rubbish, etc. 'Sims' kind of behaviour. However, it would be good if the AI wasn't hardwired, i.e. the player could via some means define their routine and priorities, e.g. "Don't answer the door or telephone, and if you hear on the news that Notchit has been released from quarantine then high-tail it to Freddie's place and lie low".

New players and NPCs

Charles Congdon  Posted: 10:37am May 24, 2001 ET  

 

Crosbie:

Who do you first submit your interest query to when you first connect to the system? How does that query percolate through the world and return the first round of free farmers? I'm hoping that only very high-level nodes are participating at this point, since I don't want my query to have to pass through all 10,000,000 systems currently making up the system looking for an NPC that is of class "farmer."

NPCs are possibly behind my question of how you attach to someone you wish to control (BTW, I think we need a whole new term for something that can be both an NPC *and* a user-controlled avatar depending on the time of day...). Allowing some nodes in the system to create NPCs raises the whole hacker question.

Hard-coding all the NPCs into the system, and ensuring that they stay alive all the time for someone to control when they log in (for the first time or again) has two problems: (1) you limit the number of people who can use your system by keeping the number of NPCs static or (2) you leave some poor high-level node holding the ball for a whole parcel of NPCs in the early days when the system has few humans playing it (or over time zone shifts). I don't like the idea of an NPC dying simply because no other nodes had interest in it (although *I* will when I reconnect). Somewhere, someone has to commit to keep the NPC alive until the person with interest in the character returns.

Maybe to handle new users we need a few well-defined and secure servers they can connect to. These servers are the only ones authorized to create new NPCs in the system to the user's specs, and place them in the desired location (subject to other implementation-defined rules).

Thanks, Charles

Re: New players and NPCs

Crosbie Fitch  Posted: 08:05am Jun 4, 2001 ET  

 

Who do you first submit your interest query to when you first connect to the system?

Let’s assume your computer has downloaded the system software and made contact with a node in the system (via a directory service of some sort). The very first interest is by the system software which registers an interest to this first node of contact in details of available universes or games (probably all of the ones available).

There’s then a process of allowing the player to select a game. Once this is done, the player’s node can begin the process of determining a more suitable parent node. The step after this is to obtain the appropriate front-end software (possibly game dependent and possibly hardware dependent). This then presents the player with whatever selection process is required for the player to interact with the virtual world, e.g. obtaining one or more avatars to influence. This is where the game might contrive to locate an avatar in a topologically appropriate region of the virtual world. Note that the virtual world may have an ongoing process of creating new avatars (and new scenery) to ensure that there’s always a ready supply.

In order to provide the player with a selection, it’s likely that the front-end will have had to specify an interest in objects describing ‘regions of the virtual world’, and after that, perhaps available avatars within a particular region. Some of this may already be available on a CD (last month’s index to ‘Roman World’, say).

Once the player is sorted out with their avatars (or other means of influence, e.g. spacecraft, crystal ball, puppet strings, whatever) then we have the player’s primary interest. This is registered by the game front-end software to the back-end system component, which now takes responsibility for satisfying this interest (bringing in objects that match). This in turn entails optimising the choice of parent and peer nodes.

The avatar object will have been designed (by the game designer) to have an interest in objects in its view and vicinity. Many of these objects will have interests in turn. All these interests are prioritised by various means, and if the world and player didn’t change, and the player had enough space, then eventually, no more objects would get downloaded. It’s possible the designer may have specified a low level interest in absolutely everything, in which case, eventually the whole universe would get downloaded. However, as things change, and the player is unlikely to have oodles of bandwidth and terabytes to spare, the player will tend to have only the most interesting objects, i.e. the information most critical for presenting the player with an accurate view of the virtual world. It’s not a matter of whether it’s enough – it’ll have to be enough, because that’s all there’s time and space for.

NPCs are possibly behind my question of how you attach to someone you wish to control (BTW, I think we need a whole new term for something that can be both an NPC *and* a user-controlled avatar depending on the time of day...). Allowing some nodes in the system to create NPCs raises the whole hacker question.

Well, in the classical sense, ‘avatar’ doesn’t preclude an everyday human from becoming possessed (temporarily or permanently) by a god. Sure, perhaps a god can cause an avatar to materialise, but I don’t think the term avatar is necessarily reserved for entities that have been created by gods as opposed to extant ones that have been possessed.

As to hacking, well, yup, that’s addressed in article no.6.

Hard-coding all the NPCs into the system, and ensuring that they stay alive all the time for someone to control when they log in (for the first time or again) has two problems: (1) you limit the number of people who can use your system by keeping the number of NPCs static or (2) you leave some poor high-level node holding the ball for a whole parcel of NPCs in the early days when the system has few humans playing it (or over time zone shifts). I don't like the idea of an NPC dying simply because no other nodes had interest in it (although *I* will when I reconnect). Somewhere, someone has to commit to keep the NPC alive until the person with interest in the character returns.

No limit. If the world automatically expands to accommodate new players (auto-generated scenery and ‘NPC’s) then there’s no limit to the number of NPCs that can become possessed as avatars.

The ‘poor high level node’ is effectively selected for its ability to be left modelling the NPC when the player departs.

For persistence of scenery and avatars see another of my contemporary posts.

Maybe to handle new users we need a few well-defined and secure servers they can connect to. These servers are the only ones authorized to create new NPCs in the system to the user's specs, and place them in the desired location (subject to other implementation-defined rules).

This authorisation service may form the basis of a commercial opportunity, but I don’t think it needs to be hardwired into the open form of the system.

Re3 Q4 Part 2 (New players)

Crosbie Fitch  Posted: 08:59am May 4, 2001 ET  

 

There's no cast-iron rule that says one player per avatar, either. A player might feel capable of operating a small army of avatars. Then again, two players (perhaps remote from each other) may collaborate in directing a single avatar that they share (sometimes simultaneously, sometimes alternately). Another possibility is that a group of players collectively operate the crew of avatars of a virtual spacecraft. There may be clubs of players who collectively take shifts in managing a cohort of avatars, e.g. certain avatars might be preferred by some players, but generally they're shared around according to ability and how many of the club happen to be online at the moment (this can be used to get round the 24-hour usage pattern).

Persistence

Re Q5. "I re-connect to cyberspace after being logged off for the day..."

Crosbie Fitch  Posted: 02:33pm Apr 18, 2001 ET  

 

I'm looking forward to the time when being disconnected from the network is regarded as an abnormality. But, yes, if we have ten million punters turning their machines on only at the weekend and uninstalling the game each Monday, then that's a bit of a problem (though somehow I can't see it happening).

As to caching, well, yes, this is the order of the day. If a node is disconnected, then most of the locally stored state data along with ownership details and last parent, etc. will stick around on hard disk and be a pretty good initial working set upon reconnection (assuming the virtual world doesn't tend to undergo continuous and severe topological change).
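A sketch of what that on-disk working set might look like. The snapshot format (object state plus owner plus last parent) follows the description above, but the field names and file layout are assumptions for illustration only:

```python
import json
import os
import tempfile

def save_working_set(path, objects, last_parent):
    """Persist locally cached object state before (or during) disconnection."""
    with open(path, "w") as f:
        json.dump({"last_parent": last_parent, "objects": objects}, f)

def load_working_set(path):
    """On reconnection, the saved snapshot seeds the cache; freshness is then
    re-established by re-registering interests with the (last known) parent."""
    if not os.path.exists(path):
        return {"last_parent": None, "objects": {}}
    with open(path) as f:
        return json.load(f)

path = os.path.join(tempfile.gettempdir(), "working_set.json")
save_working_set(
    path,
    {"barn#42": {"owner": "giles", "colour": "red"}},
    "node.yorkshire.example",
)
cache = load_working_set(path)
```

The saved state is only a starting guess, of course; anything that changed while the node was offline gets overwritten as the re-registered interests are satisfied.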

As to what happens to avatars when the player is otherwise occupied, well, I don't think they should fall limp and lifeless to the ground, or even mope about in an apparently depressed and unintelligent state. Rather, the avatar should be like John Malkovich: able to carry on living as though nothing untoward had happened. Though it might be quite a bonus if an avatar ballet dancer became a tad concerned, in a moment's contemplation, as to whether sumo wrestling was one of their wisest recent career choices. (Assuming a transition from acting to puppetry is less jarring.)

At the end of the day a lot is in the hands of the game designer, and they can make the decision as to whether avatars disappear into a state of invisible, suspended animation, or whether they go about their business as usual. I'm assuming the latter is more likely, simply because it's less liable to abuse. The last thing we want to do is to encourage players to disconnect (or 'suspend') in order to help their avatar out of an awkward situation (though I guess in some games, this kind of tactic may be perfectly acceptable).

Re2 Q5. "I re-connect to cyberspace after being logged off for the day..."

Charles Congdon  Posted: 11:49pm Apr 30, 2001 ET  

 

This poses an interesting problem. Yes, I agree that my avatar should continue after I leave the computer to go to work, or if a rolling brownout removes my ability to control the avatar for a while. Where do the avatar's state and what it has "learned" reside (some sort of serious AI will be needed even if avatars cannot learn to handle latency hiding)? What prevents someone else from taking over "my" avatar and trashing all my hard work? An interesting security problem that, especially if the computer that was acting as that avatar's "controller" is off the network for a while (darn that spotty cell modem coverage!). I think of, for example, The Sims. Expand this to a persistent online game, and how do I ensure that my relationships, skills, etc. survive the times when I am not controlling my family?

I think this might be another case where businesses could make money in such a system. Creating the concept of a virtual "hotel" that an avatar could go and safely park until its "owner/god" returned might be something people might be willing to pay for. Likewise, a server-hosted virtual "house" or "neighbourhood" where avatars could safely go and live their lives until their "owner" returned might have value. It's also a way to make sure that there is a place for my avatar's mail to go in this system, and prevents a deranged neighbour from painting the interior of my virtual home pink. Possibly there might want to be the distinction of "free-use" avatars (avatars that no one owns), versus "owned" avatars, which you pay to have managed by a server somewhere when you go off-line.

As you say, this is certainly up to the application designer, but it also smells like a place for money-making ventures. Likewise a server that allows one to rent time running the "John Malkovich" avatar (sort of like checking out a library book, only it is a "person"), or Captain Kirk. I imagine that one might also be able to make money providing a service that lets you follow the activity of popular avatars, much like real-world tabloids follow the "stars."

Cheers, Charles

Re3 Q5 (persistence)

Crosbie Fitch  Posted: 10:58am May 4, 2001 ET  

 

As objects are the sole repository for state (interests represent the state of interests, but interests are really just a different kind of object and are stored in a similar way), an avatar's AI will also use objects to store any state data it needs.

It's up to the games designer to decide how to address whether any player can influence any avatar they like, i.e. if there is some kind of assignment mechanism.

Persistence of objects tends to happen automatically while at least one node is interested in those objects. All objects have owners, and all owners are interested in the state of their objects. However, people might want guarantees of persistence, because the system doesn't guarantee it; it just 'tends' to make objects persistent. It is possible, if an object becomes of no interest to anyone except a node near the root, for that node to flush its state from the cache (due to being at capacity and having to eliminate the state of its least interesting objects).
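That 'tends to persist' behaviour amounts to an interest-weighted cache: when a node is at capacity, the object with the lowest aggregate interest is the one flushed. A minimal sketch (the scores and object names are invented for illustration):

```python
class InterestCache:
    """Fixed-capacity store that evicts the least interesting object when full."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.objects = {}  # object id -> aggregate interest score

    def store(self, obj_id, interest):
        if obj_id not in self.objects and len(self.objects) >= self.capacity:
            # Flush the least interesting object to make room.
            coldest = min(self.objects, key=self.objects.get)
            del self.objects[coldest]
        self.objects[obj_id] = interest

cache = InterestCache(capacity=3)
cache.store("town_square", 90)
cache.store("parade", 75)
cache.store("my_house", 5)
cache.store("stadium", 60)  # evicts "my_house", the least interesting object
```

This is exactly why Charles's outlying house is at risk: its interest score is low whenever its owner is offline, so it is first in line for eviction unless some node commits to holding it.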

There is nothing to stop philanthropic or commercially oriented providers from adding some serious computing resources to the system and selling guarantees that their storage will never be flushed of objects that a player has paid good money for (the guaranteed storage of). However the only stuff that tends to get flushed is that of minimal interest to all objects in the system. We're probably talking about teaspoons of no significance drifting out into the void of space (in regions of it that no one's likely to visit).

Now a commercial provider of a virtual world might ensure they had a cluster of highly reliable nodes that continuously stored stuff in a near-line storage farm. These would simply have 'players' that instead of being interested in a particular avatar, were interested in 'everything' or divisions thereof. These nodes might end up as children of one or more near root nodes. They might not have any children of their own (if heuristics determine that they tend to hold uninteresting stuff, and don't have particularly good latency). However, they might end up owning everything that no-one else is interested in owning. And any time any node is interested in what they have, then a mutual parent will refer them appropriately.

So if a player creates an artefact of only sentimental value (an origami model of the planet Earth) and launched it into deep space one day (noting down its trajectory), they can forget about it for a year, and then seek out its current location knowing that a spatial query will ultimately, eventually, reach the near-line storage node who will pick the appropriate DVD-RW from the jukebox and provide the details of the origami object. Why will it reach this node? Because only this node (or cluster) is interested in 'all space, even bits other nodes aren't interested in'. Remember that a node's interest is always duplicated to its parent and thus all the way up the hierarchy. So if there are 5 nodes in a node's lineage its interests are consolidated in the interests of these 5 other nodes. And each of those might know a node who can service an interest if they can't. Again, mostly, nodal relationships are continuously adjusted to optimally serve the node's interests and thus the majority of its interests will be well served. It's only when a player decides in a spur of fickleness that they'll go and visit quadrant 3132.412.5.GUb once more that their relationships are likely to change as they journey there. Certainly, if they tried to teleport there, it might take a while for the cobwebs to get dusted off the DVD - we're talking 10 seconds perhaps...
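The consolidation described above (a node's interests duplicated to its parent, all the way up the lineage) is what lets a query for something obscure eventually reach the archive node. A toy sketch, where all class and interest names are assumptions:

```python
class HNode:
    """Hypothetical node in the hierarchy; interests consolidate upward."""
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.known = {}  # interest -> name of the descendant that registered it

    def register(self, interest):
        node = self
        while node:  # duplicate the interest up the whole lineage
            node.known.setdefault(interest, self.name)
            node = node.parent

def resolve(start, interest):
    """Climb toward the root until some ancestor can refer the query."""
    node = start
    while node:
        if interest in node.known:
            return node.known[interest]
        node = node.parent
    return None

root = HNode("root")
archive = HNode("archive", parent=root)   # interested in 'everything', e.g. deep space
player = HNode("player", parent=root)
archive.register("deep_space")
```

A player node querying for the long-forgotten origami model climbs to a mutual ancestor (here the root), which refers it to the archive; if truly nobody has registered an interest, the query comes back empty.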

For a truly mind-boggling commercial opportunity (for the likes of Google), that'd probably bring the system to its knees, I expect someone might think of recording the entire universe, i.e. not just its current state, but all the deltas too, thus allowing any moment in its history to be replayed. The replay probably wouldn't look that pleasant (the higher the resolution, the bigger the drain at the time of recording), but it might be interesting. Of course, some players (or computer controlled 'players') might take it upon themselves to be dedicated recorders/journalists and archive stuff they thought would be good for posterity (TV shows, ;-) ). Thus their recordings could be of a much higher fidelity (being restricted to a camera view - rather than the entire shebang).

Persistence concerns

Charles Congdon  Posted: 10:39am May 24, 2001 ET  

 

Crosbie:

Thanks for elaborating on the persistence problem, and possible solutions to it. I think, however, that I have ended up more concerned about the problem than I was before. In the research phase objects can come and go with no problems, and during such a phase fine-tuning of the persistence mechanism would happen.

But when you try to get large numbers of people using such a system, I would argue that they won't come (or stay) unless there is a "guaranteed" persistence mechanism (something which clearly violates the decentralized nature of this proposal and its Open Source feel). Say that I build a house on the far outskirts of a small virtual town. My house is not near any of the "highly-travelled" nodes into, out of, or around town. Now the residents hold a parade while I am logged off, and everyone goes to the town centre to see it. Who is still around to have any interest in where my home is, let alone the colour of the rug in my virtual bedroom? Seems to me that when I next log in I will be lucky to see 4 walls where I last left my house. Small towns (by nature of lower avatar traffic) are very much less likely to be interesting to other nodes in the system. Their residents are possibly more likely to all log in and out at the same times of day. In short, at least for periods of the day, they are only slightly more interesting than quadrant 3132.412.5.GUb in a system with 1 million users. So how do you prevent the town from getting mostly purged from the caches every night?

One possible solution is for the user's computer to store the data it owned, and information about other owners, when the user leaves the system. When you next log back in, a negotiation happens between the nodes that currently have interest in the area you last inhabited (or where the avatar you were controlling last was) and the virtual property you "own", and somehow a reasonably consistent view of the world returns. The problem with this is two-fold: (1) it is a hacker's heaven, and (2) anyone already connected may see things spring up around them as data that had been purged from all caches is re-introduced to the system.
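The negotiation described above could be sketched as a simple version-based merge: the returning node's saved copies of its owned objects are folded into whatever the currently interested peers still hold, with the higher version winning. This is an illustrative sketch only (the `Store` layout and `reconcile` function are invented for this example), and it deliberately exhibits the weakness Charles points out, since nothing stops a hacker forging versions or payloads:

```python
from typing import Any, Dict, Tuple

# Each object id maps to (version_counter, payload).
Store = Dict[str, Tuple[int, Any]]

def reconcile(saved: Store, live: Store) -> Store:
    """Merge a returning node's saved objects into the live view."""
    merged: Store = dict(live)
    for oid, (ver, payload) in saved.items():
        cur = merged.get(oid)
        if cur is None or ver > cur[0]:
            # Object was purged from all caches (or is stale there), so
            # the owner's copy is re-introduced.  Without signed records
            # this is exactly the "hacker's heaven" described above.
            merged[oid] = (ver, payload)
    return merged
```

Objects re-introduced this way are also what already-connected users would see "spring up around them".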

Which brings us back to the need for some data to be saved and maintained in the system whether or not anyone has interest in it. When the system is first started up there would have to be a base definition of the world, making it an interesting place for people to visit and explore (otherwise, why would people want to visit your system?). But if no one is on a continent yet, you don't want the content put on that continent to vanish. Big cities may need less persistence infrastructure than low-population areas of the system, since the odds are good that someone or something will have interest in most structures of the city all the time. Still, I don't want half a city block to vanish the next time there is a football game.

In short, I feel we are stuck with the need for some sort of "persistence servers" if a system based on this technology is rolled out for general use by millions of users, whose technical savviness may not extend much beyond the ability to log into AOL.

Thanks, Charles

Solutions?

Discussing alternatives

Charles Congdon  Posted: 10:40am May 24, 2001 ET  

 

Crosbie:

You are getting into the realm where you are answering many of my silly questions by saying "that is up to the game developer." I agree that some sort of boundary must be made between the general principles of the system and its implementation. If such a massive simulation is to become as large as you propose, in the end only one or two such systems will be able to exist. If we are lucky enough to find ourselves in such a situation, it will be valuable to know that alternative approaches have already been explored, at least as thought experiments. The worst thing that can happen to a massive simulation is that it grows to the point of being the system that the majority uses, only to discover that it has a fatal flaw that prevents it from expanding further. That's a sure way to leave a bad taste and sense of distrust in the people who would otherwise continue to support massive systems by their participation in them.

Thanks, Charles

Discussion and Exploration - yes please!

Crosbie Fitch  Posted: 05:48am Jun 4, 2001 ET  

 

I couldn't agree with you more Charles,

It is precisely because I don't want to disappoint anyone (including myself), nor do I wish to waste anyone's time, that I want to discuss this kind of system in the open. This is in the hope that, if it has any flaws, they will be spotted.

Hopefully, you and others will continue to post challenging and thought provoking arguments. And hopefully, I will address them satisfactorily, or acknowledge that some adjustments need to be made (if flaws are not too fundamental).

There are a variety of concerns and they are addressed in different areas.

There are some things that don't really impinge on the system design, but really are a matter for the person designing the game (the user interface, the content, the rules of the virtual world, how the player is meant to operate, etc.). Some games can be designed that would only work on a LAN, i.e. it is possible to design an FPS-type game that any system would have great difficulty pulling off over the Internet (FPS World War One perhaps?).

Some issues relate to market viability of a game based on the system, and what revenue models might work. These issues may be a driving force in games development companies, but they’re not necessarily the best issues to drive the design of a system. There are some things (like the Web, e-mail, and Linux perhaps?) that are so global in terms of their development and derived benefit that it simply doesn’t make sense for them to be proprietary. Linux has no quality guarantees, but that doesn’t stop it being good, or people using it.

Certainly, if I ever seem to wave my hands and say “That’s up to the game developer”, and I don’t back that up with enough explanation, then please pick me up on it.

Persistence can be a feature of a free system, but its guarantee costs money

Crosbie Fitch  Posted: 05:25am Jun 4, 2001 ET  

 

Please bear in mind I’m talking about scalable, i.e. able to scale. Try and read an emphasis on ‘able’ – there is no ‘and it will be 100% persistent, 100% glitch free’ guarantee. I do not make any guarantees regarding resources – because I’m simply not in control of the resources. I’m describing how to design a system that makes best use of the available resources and does not come crashing to a halt if there’s a shortage somewhere.

This means guarantees cost money. I can give you a hunch for free, i.e. “It’s my hunch that there’ll be so much storage that no-one needs to sell guarantees”. But, a guarantee means obtaining dedicated storage (though it might be cheaper simply to set up an insurance fund).

If you are saying you doubt this system would ever be used in the absence of persistence guarantees (as opposed to the absence of enough persistence that people are happy to ignore the lack of a guarantee), then well, I take your point, but it’s not something I’ve lost any sleep over.

The whole system is dedicated to a fair bit of state duplication, so although not guaranteed, I reckon persistence is pretty likely.

Think of it this way: if you have enough storage to store everything then there’s no problem. Conversely, if you don’t have enough storage, then it’s probably best to flush the least interesting parts of the world rather than the more interesting parts. If people also want to keep the least interesting parts then you need more storage. The system can’t magic storage out of thin air.

There’s also not much stopping the application giving the player the following option: “Disable flushing of modified objects from cache?”. As long as the player doesn’t mind buying a new hard disk every week, this would provide a persistence guarantee. But, then one of the primary design objectives was to obviate the need for every player to store the game world…
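The two ideas above (flush least-interesting first, optionally pin modified objects) can be sketched as a small eviction policy. This is a toy illustration, not the proposed system's actual cache; the `InterestCache` class and its interest scores are invented for this example, and where the interest metric comes from is left open:

```python
from typing import Any, Dict

class InterestCache:
    """Fixed-capacity object cache that evicts the least 'interesting'
    entry first.  pin_modified=True implements the "Disable flushing of
    modified objects" option: pinned objects are never flushed, so the
    cache may grow past capacity (hence the new-hard-disk-every-week)."""

    def __init__(self, capacity: int, pin_modified: bool = False):
        self.capacity = capacity
        self.pin_modified = pin_modified
        self.objects: Dict[str, Any] = {}
        self.interest: Dict[str, float] = {}
        self.modified: set = set()

    def put(self, oid: str, value: Any, interest: float,
            modified: bool = False) -> None:
        self.objects[oid] = value
        self.interest[oid] = interest
        if modified:
            self.modified.add(oid)
        self._evict()

    def _evict(self) -> None:
        while len(self.objects) > self.capacity:
            # Pinned (modified) objects are excluded from eviction.
            candidates = [
                (score, oid) for oid, score in self.interest.items()
                if not (self.pin_modified and oid in self.modified)
            ]
            if not candidates:
                return  # everything is pinned; capacity is exceeded
            _, victim = min(candidates)
            del self.objects[victim]
            del self.interest[victim]
```

With pinning off, the quiet town on the outskirts (low interest) is the first thing flushed; with pinning on, a player's own modifications survive at the cost of unbounded local storage.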

This system is free; it’s guaranteeing the service that costs money. And that’s the trick: produce a system that’s inherently free, and let third parties figure out what guarantees and services people will pay for, e.g. cyber-policing, guaranteed persistence, low latency, hack-free play, quality content, hand-holding, etc.