Gamasutra: The Art & Business of Making Games
Cyberspace in the 21st Century: Part Seven, Security is Relative


August 5, 2002

A system's security is about controlling the system's relationship with its environment. If you have a predictable environment, or you feel it is safe to assume the environment will be sufficiently predictable for the lifetime of the system, then you can get away with defining a rigid, rather closed security model based on your understanding of this environment. Compromises will most probably be complete as opposed to partial, but one hopes they will at least be a long time coming.

However, if you're developing a system for the very long term, there's not much chance of being able to predict the environment, so a rigid security model doesn't stand a chance. Security in this case needs to be adaptable, and thus, to some extent, intelligent. It needs to be able to understand unforeseen threats and attacks, recover from them, and ideally be better prepared to defend against them in the future.

Securing Cyberspace - For Fun, Not Money

Security in cyberspace is a different kettle of fish compared to many computer systems. We aren't a bank, defending a network of computers against unauthorized access, needing to audit and ensure the integrity of all transactions. We have no computers, no intellectual property, nothing of real financial value. All we have is an objective to produce a system that will enable cyberspace and prevent it from being vandalized. In simple terms, we're producing a free game that the whole world can play.

From a real-world, materialistic point of view then, no-one can lose. The source code is still there. The player's money is still in their bank account. What's been created and is inevitably vandalized is entirely virtual: virtual scenery, virtual buildings, and virtual characters.

There's only been one case in which this kind of thing has happened before, and it's known as Bowdlerization - the effective vandalization of fiction. You might say that re-writing the history books is comparable, but that is an attempt to change a description of reality, as opposed to a virtual one. Just as some people are interested in protecting the integrity of works of fiction, so we should be interested in protecting the integrity of the virtual world. Players will put a lot of effort into playing within it, and the persistent world that is created will become valuable to many of them. People won't necessarily require that a castle never be allowed to fall, but they will require that if it is sacked, it is sacked by legitimate means and not by hacking the system.

Sure, there is more than the content to vandalize: you can burn the book too. You can trash the computer just as you can trash the virtual world. However, we can rely on conventional security measures for the computer, so I don't need to talk about those here. Moreover, we can assume that 100 percent of users (even the hackers) don't want their own computers corrupted, and that 99 percent of our users are actively interested in maintaining the integrity of the part of the system that resides on their machines. What we're left with is the task of designing the system to secure the content against that 1 percent of hackers.
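That 1 percent figure invites a back-of-the-envelope calculation. Suppose, purely as a hypothetical replication scheme (the article doesn't specify one), that each piece of content is held by k randomly chosen nodes and can only be corrupted if a majority of its holders are hackers. The odds work out like this:

```python
from math import comb

def prob_majority_corrupt(k, p=0.01):
    # Probability that more than half of k independently chosen
    # replica holders are malicious, given a hacker fraction p.
    return sum(comb(k, i) * p**i * (1 - p)**(k - i)
               for i in range(k // 2 + 1, k + 1))
```

With k=1 the content is at the mercy of the 1 percent, but with just five random holders the chance of a corrupted majority drops to roughly one in a hundred thousand - which is why a mostly honest population can keep a mostly clean system.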

But, note that we don't need to be entirely successful in preventing these hackers from achieving their aims. Just as Bowdlerized books remain entertaining to a large audience, despite annoying many purists, so a hacked cyberspace can remain entertaining. Don't get me wrong though. I'm not saying let's give up. I'm saying the opposite: let's build, because even if it does get vandalized now and then, it's still worth repairing the damage and continuing, because the whole edifice doesn't come crumbling down, it's just a tad tainted.

The thing is, cyberspace only has to be useful to the majority of its users, i.e. entertaining. That means it can tolerate a small amount of vandalism or corruption and still remain useful. This is quite unlike a system required for commerce where confidence in its integrity (and thus viability) might completely collapse if it became compromised even in a very small part.

So while some may say that security in P2P systems is a major headache, this is largely from a commercial perspective. We still have a headache of course, but at least we only need to keep the system around 90 percent clean, rather than 99.99 percent. Of course we'll still strive to stamp out corruption, but now we know our system failure threshold is more achievable than one might at first assume.

Security in Society

This becomes more akin to maintaining a stable society than to securing a system. Each player's node, in the course of being an integrated member of a productive society, must necessarily be completely open to every other, simply in order to present the player with its understanding of the virtual world and to convey the player's actions to all other nodes.

So, I think we should start talking more about how members of a society evaluate each other in order to determine truth from falsehood, hypothesis from proof, and rumor from news, rather than how we go about keeping secrets.
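How members of a society might evaluate each other can be sketched mechanically. The following is a hypothetical illustration (the names `consensus` and `update_trust` are mine, not anything the article proposes): each node's report is weighted by its earned standing, and standing rises or falls with agreement against the weighted consensus.

```python
def consensus(reports, trust):
    # reports: node -> claimed observation; trust: node -> weight.
    # The claim backed by the most accumulated trust wins.
    tally = {}
    for node, claim in reports.items():
        tally[claim] = tally.get(claim, 0.0) + trust[node]
    return max(tally, key=tally.get)

def update_trust(reports, trust, outcome, rate=0.1):
    # Nodes whose claim matched the outcome gain standing; liars lose it,
    # so rumor-mongers gradually talk themselves out of influence.
    for node, claim in reports.items():
        if claim == outcome:
            trust[node] = min(1.0, trust[node] + rate)
        else:
            trust[node] = max(0.01, trust[node] - rate)
```

If two nodes report that the castle stands and one insists it was razed, the lone dissenter is outvoted and its future word counts for less - truth from falsehood by reputation rather than by secrecy.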

I think I'm on the side of the fence that believes an open/free society is a more secure one than a closed/repressed one. If anything, it is precisely because its security is so easily and continuously tested that it becomes reinforced. It has to adapt and learn from each new threat, each new attack. Attempting to utilize a closed security model in the process of government may appear to be a more robust approach because it prevents easy access to control or change, but the trouble is that when it breaks, it breaks big. A closed or rigidly secure society rots; it cannot confront any corruption, any weakness in its overall security, until ultimately the whole edifice comes crashing down under its own weight.

A rigid structure such as a building may need to be completely rebuilt after slight structural damage, but a more adaptable structure such as an ant hill can be repaired or rebuilt around even quite major damage.

A society in which each member is free and encouraged to express their views lets the society understand itself, understand any problems, and thus heal itself. A world in which a person is free to fly a plane into a building, is a world that can heal itself (better than nuclear annihilation anyway). Of course, it would have been better if this ill could have been recognized and fixed earlier, but it seems that we must deduce that in this case it was difficult for pleb to speak peace unto superpower. And this just as we thought world peace was imminent, given we've just got 'nation shall speak peace unto nation' sorted…

If anything we need to encourage conversation, exchange of ideas, good along with the bad, pleasant along with the unpleasant. This means facilitating crime along with sociability, theft along with trade, vandalism along with art, etc. This must happen at all levels of society, not just among plebs, but also at the corporate level. Read the Cluetrain Manifesto for how corporations need to change toward greater openness. You can't have two worlds, one of people and one of commerce. We're all in this world together.

And yup, this all applies to P2P systems. Full exchange of information. You can't ban bad information. You can't even ban secrets. There must be no apartheid between commercial content provider and punter - the player is not the enemy - we are all having fun together. We only have to cope with the minority of players that choose not to play by the rules.

Secrets? From Whom?

As far as secrets are concerned, unfortunately the Pandora's box of encryption has been opened and nothing any government can do can undo that. The best thing to do is to encourage free communication of dissent, bigotry, insurrection, crime, etc. Then people won't feel the need to encrypt it. We certainly can't start sending people to prison simply because they've forgotten their private key.

And on that point: suppose someone published a program that generated public keys without private keys, and everyone started sending each other, as a matter of course, blocks of ciphertext encrypted using one of these keys. Then everyone would have a defense against 'forgetting' their private key: any time someone sent a real message, no-one could tell whether a private key for it even existed. Perhaps this should be termed 'anti-steganography' - when ciphertext becomes widespread, there's no need to send it covertly.

Anyway, my point is that you can't ban the undesirable, because if you do, it'll just happen in secret. And the next step, banning anyone from doing anything in secret, is not just silly, it's mad that anyone finds it plausible (like banning suicide - you only catch the failures). The solution is to encourage open communication, and thus to make it a far more productive means of solving problems and resolving disputes. When you provide the means for man to speak freely unto mankind, secrets become redundant.

Troublesome Alternate Perspectives

Of course, all the above applies to the insane and the criminal as well as the sociable. It is better for an open society to allow people to show their hand sooner with a lesser act, than for a closed one to force them to commit a much greater act later. A bit like saying "I'd rather someone vented their anger in an e-mail today, than saved up for a gun to shoot me with later…"
But, what to do about the warped sects that follow a collective delusion that humankind must be destroyed? Well, the sooner their hand is shown the sooner something can be done…
However, ultimately there always exists an end to every system. A human can withstand a lot, but a bullet or cancer can finish it off. Humankind can withstand a lot, but a meteorite or nuclear war can finish it off. Life can withstand a lot, but not a supernova. Matter can take a lot, but not a black hole. No system is eternal.

Similar Systems

So there is no such thing as perfect security for any system, but we can take pains to make things as secure as possible without compromising the system in the process. This means finding an appropriate approach.

It seems to me that it is better to look to similar systems in nature for approaches to apply to our distributed-systems-based cyberspace, than it is to bang the square peg of rigid system security into our triangular hole.

For example, just as our bloodstream happily ferries pathogens along with our blood, it also uses the same network to ferry antibodies as a defense. Some systems, like this one, seem on some levels to be dangerously open to attack, but are open through necessity - and luckily it's from such systems that we can take examples of appropriate approaches to security.

Breaking Away from Obsessive Computer Security

Perhaps it's instinctive, perhaps it's brainwashing. Either way, a large chunk of computer literati seem to be overly concerned about security in computer systems. So, perhaps it might be a good idea to just step back a mo and review the situation.

For example, although a high level of conventional security measures may be appropriate for a single computer or a corporate network, it isn't necessarily the best approach for a network of a million or more, effectively anonymous PCs.

With the advent of the Internet it's important not to lose sight of what we're trying to secure, and risk ending up thinking security is sacred. Fragile systems that lose significant value when they're compromised by accident or deliberate act are indeed candidates to warrant considerable security. However, more flexible systems that are expected from the outset to be compromised (perhaps only in part) on a continuous or occasional basis can still maintain their value. Security for such systems is, and must be, an intrinsic property and not an added feature.
The thing is, there's a risk that by continually reinforcing a system's security, it simply becomes more and more complicated, burdensome to maintain, unwieldy, and, worst of all, ever more fragile. That's why I think it's useful to explore analogues to networked computers: it broadens one's perspective of what's important, and of how much security, or lack of it, other systems can tolerate.

Insecurity of Changing Systems

Consider human society. If it operates according to certain principles that regard property as a human right, then as long as the majority respect that right, the society can tolerate the few who do not (perhaps making an effort to discourage such disrespect). However, if the majority respect for property collapses then we might still arrive at a stable society - such as communism or anarchy. It will probably be a different society, but not necessarily less 'good'. Whatever the majority conspires to achieve will define the society, and it can still work, still be viable, whether we judge it civilized or not.

There may be some people who are strongly averse to changes in the system, but those changes aren't necessarily undesirable or unworkable. Protecting a system from change may thus sometimes seem to be a given requirement of a security system, but it isn't.
In the sense we're interested in, security is the ability to protect a system from being changed into a patently non-functioning system (and its constituents to non-functioning constituents, but only in so far as the viability of the system is threatened).

The typical solution to this is overkill, i.e. preventing all change to the system except by a select, privileged group.

As can be seen in open source software development, things don't necessarily descend into chaos simply because anyone can change the system. There just happen to be good mechanisms to weed out disadvantageous changes.

I've often heard that people don't like change, so perhaps it's not too surprising that people like security. Creatures of habit aren't we, eh?

But, notwithstanding our discomfort at change, we seek only to minimize the likelihood (and duration should it occur) that our system will cease to function (in whatever form it has evolved into) in a popular way.

Introducing The Social Approach

In the following discussion my primary focus is to consider security in terms of assuring the system's integrity and operational viability. While such things as access control, secrecy, privacy, rights management, input validation, etc. may figure prominently in typical commercial systems' security requirements, I hope I will end up persuading you that they may not be fundamentally necessary in an open system.

