I’m not sure I get it. My benchmark is Minecraft, and if they applied this technology to Minecraft, it would be just a little different. Zombies would keep besieging villages when you’re not there to watch. Chickens would lay unlimited eggs, or I suppose they would need to be reworked a bit. But to me, these changes would be pretty hard to detect. Maybe less so if I played more multiplayer.
I think they’ll become obvious in games that provide the player with a level of abstraction about the world. For example, say you have factions in a game, who maintain a collective sense of disposition toward the player, and the player can find out about it.
Usually this is handled crudely: the faction Just Knows as soon as anything happens. You kill a guard, and suddenly the entire city guard is out to get you.
Or, perhaps, there’s a trick the game uses to make it look a bit more convincing–say, the city guard are not alerted so long as your target doesn’t escape whatever room you’re in. Either way, though, the model of the world is transparently artificial, and it all feels like a weird theme park.
This would basically fix that.
But maybe it would just open up a whole new set of problems?
using Improbable’s technology, objects and entities will be able to remain in the virtual world persistently, even when there are no human players around (currently, most virtual worlds essentially freeze when unoccupied)
Objects and entities have always been able to persist in game environments! Granted, there are always limits on how much stuff can be kept track of in there. Any game where you can create a server for multiple players tends to have this persistence. This has been the case at least with all of the games running on the Quake and Source engines. JC2MP can have thousands of players simultaneously without looking like cardboard. I suppose it all might be a revelation for those who expect such things to look like Minecraft.
This area could indeed use some improvements. Today these behaviors are usually hard-coded as complete “game modes” that define simple routines. It does seem to be tricky to make behaviors/routines both adaptable and responsive.
I thought Shadow of Mordor approached this a bit more gracefully than most, which made for greater immersion (or a less artificial feel, depending on how you hold the ruler).
I’d like to see that scaled up to a world of Elder Scrolls size, with a greater range of interaction options beyond shoot/stab.
Some games do a decent job of at least faking ‘realistic’ information propagation: NPCs get sight cones (usually appropriately modified by environmental obstructions) and auditory radii (which may or may not be); they remain oblivious to things that aren’t visible or audible to them, and usually have some sort of ‘call for backup’/‘sound the alarm’ behavior so that information spreads through the group when you make contact with it.
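To make that concrete, here’s a toy sketch of the sight-cone/hearing-radius model being described. Every name, number, and threshold here is made up for illustration; real engines do this with 3D raycasts rather than a flat occlusion flag.

```python
import math

def can_see(npc_pos, npc_facing_deg, target_pos,
            fov_deg=120, max_range=30, occluded=False):
    """Sight check: target must be inside the NPC's view cone, within
    range, and not behind an environmental obstruction."""
    if occluded:
        return False
    dx, dy = target_pos[0] - npc_pos[0], target_pos[1] - npc_pos[1]
    if math.hypot(dx, dy) > max_range:
        return False
    angle_to_target = math.degrees(math.atan2(dy, dx))
    # Smallest angular difference between facing and bearing to target.
    diff = abs((angle_to_target - npc_facing_deg + 180) % 360 - 180)
    return diff <= fov_deg / 2

def can_hear(npc_pos, sound_pos, loudness=10, wall_damping=1.0):
    """Hearing check: loudness defines a radius, which walls may shrink."""
    dist = math.hypot(sound_pos[0] - npc_pos[0], sound_pos[1] - npc_pos[1])
    return dist <= loudness * wall_damping

def sound_alarm(group):
    """'Call for backup': once one NPC makes contact, the group knows."""
    for npc in group:
        npc["alerted"] = True
```

So an NPC facing away from you (`can_see((0, 0), 180, (10, 0))`) stays oblivious, while a loud noise inside the radius flips the whole group via `sound_alarm`.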
What I’ve never seen, though, and which would likely be brutally difficult (both to build and to play without the feeling that you are just rolling the dice and re-loading saves a whole lot) is AI that gets even remotely close to the “Humans are actually pretty inattentive; but are also pretty good at putting two and two together if you arouse suspicion” state.
There are some very limited substitutes (like Fallout: NV’s ‘faction disguise’ mechanism, where being dressed like a member of a given faction will cause most other members of that faction to treat you as one of them); but those are really just modifications of the basic sight-line system (in the Fallout case, being faction-disguised just means that only a few special NPC types will detect you if you enter their sight, while normal types will ignore you). There is no need to ‘act natural’ or otherwise try to fit in.
Games with crime mechanics tend to have similar issues: an omniscient ‘stolen goods’ tag (like Skyrim’s) is easy; but completely implausible; an ‘unless you were in somebody’s sight while you stole it, nobody will know it was stolen’ mechanic is similarly simple; but leads to absurd situations where a character with a decent sneak score can steal every piece of silverware in the city, then walk right over to the market and sell 3,000 spoons to the nearest merchant and nobody will raise an eyebrow; or where a priceless one-of-a-kind artifact can suddenly end up in your hands and nobody will ask how it got there.
Violence frequently goes the same way: if nobody saw you make the kill, your hands are clean (though if it’s a context where the guards are ‘hostile’, seeing dead bodies will generally make them go into search mode and call for backup); but nobody even bothers to notice that a stranger with a quiver full of +3 exotic arrows of novelty walked into town, and by the time he left 4 prominent citizens had +3 exotic arrows of novelty through their throats.
I have no idea how you would plausibly solve this sort of problem; but modelling the process where people stitch together multiple independent facts into a conclusion is likely to be crazy difficult compared to modelling the imperfect and temporally limited flow of perfect information, which is already done(sometimes with bugs) fairly routinely.
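One naive way to approximate the “putting two and two together” behavior would be to let each NPC accumulate weighted, individually-innocent facts about a character until they cross a suspicion threshold. This is purely an illustrative sketch; the fact names and weights are invented, and no shipping game I know of works exactly this way:

```python
# Hypothetical evidence weights: no single fact is damning on its own,
# but independent facts add up to a conclusion.
EVIDENCE_WEIGHTS = {
    "stranger_in_town": 1,
    "carries_rare_arrows": 2,
    "seen_near_victim": 3,
    "selling_victims_goods": 4,
}

SUSPICION_THRESHOLD = 5  # arbitrary tuning knob

def suspicion(facts):
    """Sum the weights of the facts this NPC knows about a character."""
    return sum(EVIDENCE_WEIGHTS.get(f, 0) for f in facts)

def npc_conclusion(facts):
    """The NPC 'puts two and two together' only once the combined
    evidence crosses the threshold; any one fact alone is ignored."""
    return "suspect" if suspicion(facts) >= SUSPICION_THRESHOLD else "ignore"
```

A lone stranger gets ignored, but a stranger with exotic arrows who was seen near the body tips over the threshold. Of course, this tiny model dodges all the hard parts: which NPCs learn which facts, how facts decay, and how to keep the thresholds from feeling like dice rolls.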
I’m confused about what this actually is. It sounds like it’s just a system that helps share information about world-states between servers, which presumably helps multiple servers feel like they’re part of the same world.
The persistence and simulational stuff seems like another ball of wax that would potentially be helped by this, but wouldn’t be a direct consequence of it. Having worked on MMOs, I’ve spent a fair amount of time talking about the idea of increasing the level of simulation with my colleagues. The argument that was frequently made to me by those who had tried it was that simulational elements can easily get lost in an MMO - the individual player is unable to see cause and effect (because of the scale upon which it happens), so it could be random for all they know. Which means scripted events work just as well - better, even, as you can control when they’re triggered more easily.
In the article they claim, “Currently, in even the most elaborate virtual worlds, some characters and objects cannot interact because it would require more computational power than is available.” This isn’t really the case - usually there are characters and objects you can’t interact with because it was designed that way. Because to do so would break the experience for everyone else in the game. Most MMOs are trying to serve up a theme park ride - you have an experience “on rails” that’s the same as everyone else’s. A persistent world in which you can rearrange things means player 1 comes in and wrecks everything and players 2 through 10 million come into an environment that’s already wrecked. That’s fine for Minecraft, where multi-player experiences are very limited and the game is all about rearranging things, but doesn’t work at all for a game like World of Warcraft.
That’s a design issue, not a technical one, though. Let’s say you decide to do something more realistic than a toggle switch for being flagged as an enemy. Ok, so you decide that any other guard in line of sight that “sees” the killing gets the character flagged. Is it just them, do they “tell” other guards? How quickly? What’s the range? What if the guard is five feet away, but turned in a different direction, do they “see” the killing? Ok, so now you need a “hearing” radius to activate guards that don’t have line of sight. What if there’s a heavy stone wall between the two guards? What if it’s a wooden wall? What if it’s a stone wall with windows?

What if no one is within range when the killing happens, but a wandering guard comes along as you’re standing over the body? (What defines “standing over”?) What happens if another player killed the guard, but your character is near the body when another guard comes along? Does the system “leap to unwarranted conclusions”? (And thus fuck your character for being in the wrong time and wrong place?) What happens when multiple characters, only one of whom was involved, are there? Does the system consider special circumstances? (E.g. I kill the guard, turn invisible and stand there waiting for the wandering guard.) Etc.

You can easily create a massively complex set of systems (that are going to use processing power) that sometimes have weird glitches that make them inconsistent in effect, sometimes punish the wrong player, still sometimes (or often) seem unrealistic, etc. all to achieve the same thing as a simple flag that gets triggered when they kill a guard. Many designers just say, “fuck it, let’s save lots of development time and go for a simple, consistent, predictable, fair system, because this isn’t a significant element of the game.”
If memory serves, the Second Life devs spent a lot of time on trying to combat this problem while still allowing for a relatively user-created world. They did better than one might have expected; but self-replicating dildo swarms and the like were a menace from time to time. In single player, emergent behavior can be endearing; but multiplayer provides certain incentives to deliberately reduce the malleability of the world; because if it can be done somebody will probably do it, tastelessly.
I too was confused about the vague claims regarding Improbable’s technology. Following the link in the article to Improbable’s home page, they have a few more informative, older articles linked. The most informative was a Wired.co.uk article:
Basically, they’ve taken a cross-disciplinary approach to getting as many agents onto a single shared space as possible. When they say they want to increase the realism of games and let you interact with objects you normally can’t, it has nothing to do with things like information propagation between NPCs, AI, or creating more flexible interaction methods for any given object. They mean that they can effectively make mirroring unnecessary, so when all 10,000 players are in the Grand Central Station map, they are each actually one of those 10,000 people all in the same instance of the same room; they are not just one of 200 people in any of 50 parallel copies of that prototypical room. Your actions can reach all 10,000 people, not just the 200 on the same server as you.
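The difference between mirroring and what’s described here can be sketched as a spatial partition: one shared world carved up among workers by region, rather than the whole map duplicated into parallel copies. This is a toy illustration under my own assumptions about their architecture, which they haven’t published; the grid, cell size, and function names are all invented:

```python
# Toy spatial partition: one shared world split among workers by grid
# cell, instead of 50 parallel copies of the same map.
CELL_SIZE = 100  # world units per worker cell (illustrative)

def worker_for(pos):
    """Map a world position to the worker responsible for that region."""
    return (pos[0] // CELL_SIZE, pos[1] // CELL_SIZE)

def assign_players(players):
    """Group players by responsible worker. Every player is still in the
    same single world instance; they're just simulated by different
    machines depending on where they're standing."""
    shards = {}
    for name, pos in players.items():
        shards.setdefault(worker_for(pos), []).append(name)
    return shards
```

The hard engineering (which this sketch skips entirely) is what happens at cell boundaries: handing authority over a player from one worker to the next, and keeping interactions that straddle an edge consistent and low-latency.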
They are vague on the “how” of it, but they do reveal a few things. First, that they have a running prototype game that simulates an ongoing, massive battle between humans and jackals, where corpses don’t disappear until the jackals eat them, and the number and danger of the jackal population varies with food supply. Second, they mention the problem of latency, and how they pulled in “reformed bankers” to lend expertise in low-latency calculations. Third, that they reached into the telecom space for some server load-balancing techniques, drawing on the real world problem of balancing cell tower loads. Fourth, that they went to someone from Google’s video chat team for methods of distributing high-bandwidth streams (the example being a “30 person live video chat”).
It’s very focused on the back-end, and they are pursuing customers other than game makers – such as scientific modeling for weather, airlines, traffic, etc. – as well as plans to offer a cloud-based service where they handle all this tricky back-end work for you.
Yeah, and Second Life only works because it is pretty much an online space that’s about self-replicating flying dildos in the same way that Minecraft is about moving things around. That is, it’s primarily a multiplayer sandbox, rather than a game (whereas Minecraft is a [potentially limited multiplayer] sandbox with some game elements as well). There are games within Second Life, but they seem to rely on the players agreeing to behave according to the rules of that particular game in order to work - that is, to give up certain freedoms and abilities that Second Life allows.
Huh, yeah, ok, I figured the technology was something about increasing the number of players in a shared world (though I’m not sure how that’s different from other similar efforts that have popped up in the last few years). But their own website and the actual examples given in the article are all about the simulational stuff, which doesn’t make any sense. I guess what they’re saying is that because they can distribute the processing load for X number of players to a greater number of servers, you could do more computationally complex game worlds, which could include more simulation elements. Their examples are largely nonsense though, because we’re talking about things that aren’t done for design reasons, rather than technical ones. So people who have already been working on simulational spaces might see their simulations improve, but it’s not going to add simulatory elements to any other games.
Edit: Things like their jackal example are problematic - by increasing the potential number of people playing together, you increase the variability in how many players might be in one region at any given time, which exacerbates the problems of the simulational approach. Your simulation would be tuned for a given number of players, it’s an ecosystem, after all - the more you diverge from that number, the more things get thrown off. So you have your jackal-plagued area in a game with 100K players - you could have anywhere from one to 100,000 players dealing with the situation (or no players at all). One player is overwhelmed, 100K players face no challenge. How do you keep it interesting for players, in that case? Vary the number of jackals by the number of players? Now you’ve simply thrown out the simulation for the old, simple approach of triggered events.
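The trap described above can be seen in a one-line scaling rule. The moment you tune the jackal population to the local player count, the “ecosystem” is no longer driving the numbers; the player count is, which is the old triggered-event approach in disguise. All the constants here are made-up tuning knobs:

```python
def jackal_count(players_in_region, base=20, per_player=5, cap=500):
    """Scale the jackal threat to the local player count.
    Note this isn't a simulation anymore: the food-supply ecosystem has
    been replaced by a direct function of how many players showed up."""
    return min(cap, base + per_player * players_in_region)
```

One player faces a manageable 25 jackals and a hundred thousand players hit the cap, but in both cases the challenge curve was authored, not emergent.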
Their examples tend to be geared toward the more specific problem of an MMO shooter, which helps clarify what problems they’re trying to overcome. Basically, it sounds like they’re trying to widen the bottlenecks that cause realistic gameplay to break down.
For example, think about a game of Quake with a high degree of latency for one player. Player 1 and Player 2 have game instances communicating with the same server. In Player 1’s instance, he is going around a corner and down some stairs, fleeing Player 2. In Player 2’s instance, he shoots Player 1 in the head before he reaches the corner. Because Player 1’s latency is high (or packets are lost), positional information from Player 1’s instance arrives at the server after the headshot from Player 2’s instance. The server resolves the situation as a kill and sends synchronizing information back to the instances. Player 2 sees a clean kill. Player 1 is shot by a bullet that can go around corners. This is a demonstration of how latency undermines realism.

The same issue exists for server-to-server communications: you can’t distribute calculations between multiple servers with loose alignment unless you’re willing to be equally loose about other features. Or unless you avoid the issue altogether by not letting servers talk to each other, which means Player 1 and Player 2 may be at the same point of the same map at the same time, but not interacting. The problem of persistent objects – such as the piles of bodies in Jackal Story – is only a problem because it’s one more piece of data to sync up between servers, and if you have too many of those the servers can’t cleanly keep up all the time.

The reason they mention all the simulationist cause/effect stuff is because they’re skipping a few steps ahead. If you don’t sync up all the servers all the time (for a given map), you can get notable things that happen in one instance but not another. Imagine if you were talking about the fallout of a tanker truck exploding in a tunnel in Wherever, only to have another player say, “oh, that didn’t even happen in my instance of Wherever.”
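For what it’s worth, the around-the-corner bullet is usually tamed with server-side lag compensation (the Source engine documents this approach): the server keeps a short history of positions and rewinds the world to the shooter’s timestamp when validating a hit. Here is a heavily simplified sketch; the data structures and tolerance are invented for illustration:

```python
# Toy lag compensation: the server keeps a short position history per
# player and rewinds to the shooter's timestamp to validate a hit.
HISTORY = {}  # player -> list of (timestamp_ms, (x, y))

def record(player, t_ms, pos):
    """Server records each player's position every tick."""
    HISTORY.setdefault(player, []).append((t_ms, pos))

def position_at(player, t_ms):
    """Most recent recorded position at or before t_ms."""
    past = [(t, p) for t, p in HISTORY.get(player, []) if t <= t_ms]
    return max(past)[1] if past else None

def validate_hit(target, shot_t_ms, aimed_at, tolerance=1.0):
    """Rewind the target to where the shooter saw them. The tradeoff is
    exactly the one described above: the target can be killed 'around
    the corner' in their own present."""
    pos = position_at(target, shot_t_ms)
    if pos is None:
        return False
    return abs(pos[0] - aimed_at[0]) + abs(pos[1] - aimed_at[1]) <= tolerance
```

The design choice is which player’s timeline the server privileges: rewinding favors the shooter (clean kill on their screen), while refusing to rewind favors the target (shots whiff whenever latency is high).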
Sure, the kind of lag you’re talking about doesn’t happen right now because lag-sensitive games simply restrict the number of players per server to a small number, where they’re effectively walled off from other groups. (I worked on a game that hilariously labeled itself an “MMO” because the chat server included all players, even though no more than four or five people could actually share a world instance for gameplay. Five people isn’t exactly “massive.”) There was nothing stopping developers from having previously done “Jackal Story” - there were just going to be fewer players in the world. There have been a few other companies who have come up with server software for online shooters in the last few years that do allow massive numbers of players to share a world, so they must have come up with systems that do something quite similar to what’s being discussed here. What’s been throwing me off here is that all the examples in this case are things that no one would actually do with their server software*. So it ends up being really misleading as to what they’ve actually done, but I guess the alternative is saying, over and over again, “Hey look, you can have a lot more people playing together in online FPSes!”
*And many of their examples wouldn’t even be impacted by latency/data issues in practice, anyways. I mean, I don’t see how a “you bump into an NPC and he tells his friends and they all hate you” mechanic benefits from what they’ve done.
As @Shuck pointed out, this is a design problem, not a technical one. Early versions of Ultima Online had a deeply systemic approach to world design nigh on 20 years ago. The wilderness had rabbits, wolves that would hunt them, monsters that would attack each other, wolves that would attack the monsters, and so on and so on…
The big design challenge is: How do I know that wolves are attacking the village because players hunted too many rabbits, versus it being entirely random? And is it worth the trouble?
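That legibility problem shows up even in a toy predator-prey tick. In the sketch below (all rules and numbers invented, wolf population held constant for simplicity), a player who over-hunts rabbits causes village raids, but only indirectly, so from the player’s seat the raid is indistinguishable from a random event:

```python
def tick(rabbits, wolves, rabbits_hunted_by_players=0):
    """One step of a toy ecosystem: wolves eat rabbits, and when rabbits
    run short, the hungry wolves raid the village instead."""
    rabbits = max(0, rabbits - rabbits_hunted_by_players)
    fed_wolves = min(wolves, rabbits)   # each fed wolf eats one rabbit
    rabbits -= fed_wolves
    hungry_wolves = wolves - fed_wolves
    rabbits += rabbits // 2             # surviving rabbits breed
    village_attacks = hungry_wolves     # starving wolves look elsewhere
    return rabbits, wolves, village_attacks
```

With ten rabbits and four wolves, nobody raids the village; hunt eight of the rabbits first and two starving wolves attack. The causal chain is right there in the code, but the player only ever observes the last line.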
More prosaically, how do you design these systems so they’re not hugely exploitable? My favourite story from early UO was trapping people inside buildings by herding a flock of sheep outside the door. The only way out would be to kill one of the sheep to clear a space–but then you just killed someone else’s sheep, which counts as a crime, so the city watch would come and attack you…
There were so, so many ways of trolling players using unexpected dynamics in UO. It’s understood in MMO design that the more you allow players to alter the world, the more ways they find to screw over each other. I guess it’s not surprising that the designers I know who are most skeptical about simulational elements in online games are the ones that spent a bit of time working on UO (and canceled sequels) over the years.
I guess the time is ripe to try again. There aren’t nearly the same technical challenges now and our ability to gather and process vast amounts of data on what everyone and everything is doing ought to help.
Open world crafting games have been super popular ever since Minecraft so the audience is starting out more literate in systems to begin with. And of course everyone has Dwarf Fortress to crib ideas from nowadays…
People have also come to expect more consistent, polished MMO experiences, something that becomes impossible the more simulation elements are added. The simulation sandbox online game is pretty niche, and therefore low budget, which limits the depth of simulation that tends to go on, except in passion projects. Dwarf Fortress is so far ahead of the rest of the industry, so mathematically advanced and feature-rich, that projects that want to copy it aren’t really able to - I’ve certainly seen people try.