
Security

The discussions began with the security issue. Tsutomu opened with some general comments about the philosophy of security. He suggested that we must first define the limits of the system, i.e. what we want to accomplish and what we want to disallow. The process then has, in effect, three levels: a) what you want to accomplish (the security policy), b) what you think the implemented system does (design and code), and c) what the system really does (testing and "assurance"). Level c is needed because it is sometimes very difficult to prove that a system really does what you think it should do.

Tsutomu was not familiar with the particular details of the Tierra system, but had a general understanding of it. He identified two security issues that are unique to Tierra, and many that are general to network software. The first unique issue derives from the fact that Tierra is a general-purpose computer, and that a large network implementation would be a very large general-purpose computer, perhaps the largest in the world.

This computational resource could attract the attention of people who wish to subvert it to do their own large-scale computations. That presents two security issues. One issue is that the purpose of the network Tierra would be subverted in this way. Given that the network Tierra is intended as an ecological reserve for digital organisms, in the worst case this would be equivalent to cutting down the rain forest to plant bananas: the ecology would be destroyed in the process. The other issue is that the subversive computation could be something like code cracking, which many people might object to providing CPU cycles for. This would be like cutting down the rain forest to plant cocaine.

Tsutomu's conception of how this might take place is that the hacker designs a digital organism which would be a superior competitor and at the same time perform the subversive computation. This organism would then be introduced into the reserve where it would proliferate, and even evolve to perform better on its computational task.

Tom objected that domesticated organisms do not make it in the wild. If you plant bananas in the rain forest, they just die. Because the introduced organism would carry the extra burden of performing the subversive computation, it would be at a selective disadvantage. Thus selection would either eliminate these organisms from the community, or would eliminate the computational function from the organisms. In this way the problem would resolve itself and the attempt at subverting the system would fail.

Not everyone accepted the view that Tom put forth. Some felt that a talented programmer could design an organism that performed some applied computation, competed successfully, and maintained these properties in the noisy, evolutionary environment of Tierra long enough to accomplish the subversive computation. Clearly this would not be easy, but it is conceivable that it could be done, and, after all, it would be fun to try.

However, Tom suggested that the task could be accomplished by setting up some fake node(s) on the net, and having these nodes flood the reserve with introduced organisms. In this way, the population of these organisms could be maintained in spite of their selective disadvantage.

Tsutomu and Tom disagreed on the viability of subverting the system by simply introducing an engineered organism, but everyone agreed that it could be accomplished by flooding the system. There was a sense that this kind of subversion would be very difficult to mount, and might still fail to harness the system for the subversive computation. Even so, it could be attempted, and that is a threat in itself. We should probably develop monitoring tools that can recognize a flooding attack, so that defensive action could be taken if one occurs; a rough sketch of such a monitor appears below. Simple introductions would be much harder, probably impossible, to detect, but the matter deserves more thought. Perhaps we could look for the pattern of communications that would be required to collect the results of the distributed computation.
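
As a starting point, here is a minimal sketch in C of what such a flood monitor might look like. It is not part of the Tierra code; it assumes that each immigrating organism can be attributed to the address of the node it came from, and the observation window, thresholds, and node names are all illustrative.

/*
 * A minimal sketch of a flood monitor, assuming each immigrating organism
 * can be attributed to the network address of the node it arrived from.
 * The window length, thresholds, and node names are illustrative only.
 */
#include <stdio.h>
#include <string.h>

#define MAX_NODES   256     /* distinct source nodes tracked per window       */
#define WINDOW_SECS 3600    /* length of one observation window, in seconds   */
#define MIN_BURST   50      /* ignore nodes sending fewer organisms than this */
#define FLOOD_RATIO 10.0    /* how far above the other nodes' mean is suspect */

struct source {
    char addr[64];          /* address of the sending node                    */
    long arrivals;          /* organisms received from it in this window      */
};

static struct source nodes[MAX_NODES];
static int n_nodes = 0;

/* Record one immigrating organism from the given source address. */
static void record_arrival(const char *addr)
{
    int i;

    for (i = 0; i < n_nodes; i++) {
        if (strcmp(nodes[i].addr, addr) == 0) {
            nodes[i].arrivals++;
            return;
        }
    }
    if (n_nodes < MAX_NODES) {
        strncpy(nodes[n_nodes].addr, addr, sizeof(nodes[n_nodes].addr) - 1);
        nodes[n_nodes].arrivals = 1;
        n_nodes++;
    }
}

/* At the end of each window, flag any node whose immigration count is far
 * above the mean for the other nodes, then reset the counters. */
static void check_window(void)
{
    long total = 0;
    int i;

    for (i = 0; i < n_nodes; i++)
        total += nodes[i].arrivals;
    for (i = 0; i < n_nodes; i++) {
        double mean_others = (n_nodes > 1)
            ? (double) (total - nodes[i].arrivals) / (n_nodes - 1)
            : 0.0;
        if (nodes[i].arrivals >= MIN_BURST &&
            nodes[i].arrivals > FLOOD_RATIO * mean_others)
            printf("possible flooding from %s: %ld arrivals in %d seconds\n",
                   nodes[i].addr, nodes[i].arrivals, WINDOW_SECS);
    }
    n_nodes = 0;
}

int main(void)
{
    int i;

    /* Simulated window: one node sends far more organisms than the others. */
    record_arrival("node-a.example.net");
    record_arrival("node-b.example.net");
    for (i = 0; i < 200; i++)
        record_arrival("flooder.example.net");
    check_window();
    return 0;
}

A flooding node stands out because its immigration count in one window far exceeds that of every other node; a real monitor would presumably also want to examine the arriving genomes themselves, since a large burst of near-identical organisms from one place is itself suspicious.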

The second security issue concerns the information returned by the tping function. Among other things, this function reports how fast the Tierra virtual machine at a remote node is running, which reflects the load on the real machine and therefore the activity patterns of its users. From this information it might be possible to infer whether the users are at the machine, or to gather some picture of their activity patterns. This might be unacceptable to some potential users.
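
To illustrate the kind of inference involved, the following hypothetical C fragment takes a series of hourly samples of a remote virtual machine's speed, such as repeated tping calls might yield, and guesses whether that machine's users are present: when the virtual machine runs fast, the real machine is otherwise idle, so its users are probably away. The sample values, the threshold, and the sampling schedule are invented, and nothing here reflects the actual format of the tping reply.

/*
 * An illustrative sketch only: the actual content and format of the tping
 * reply are not specified here, so this assumes we already hold a series
 * of hourly samples of the remote virtual machine's speed.  The sample
 * values and the threshold are invented for illustration.
 */
#include <stdio.h>

struct sample {
    int    hour;    /* hour of day at the remote site                    */
    double mips;    /* reported Tierra VM speed, millions of instr./sec  */
};

int main(void)
{
    /* Hypothetical hourly samples.  A fast virtual machine means the real
     * machine is otherwise idle, i.e. its users are probably away. */
    struct sample day[] = {
        {  9, 0.4 }, { 12, 0.6 }, { 15, 0.5 }, { 18, 1.1 },
        { 21, 2.3 }, {  0, 2.5 }, {  3, 2.6 }, {  6, 2.4 },
    };
    double busy_threshold = 1.0;   /* below this, the host looks loaded */
    int i, n = sizeof(day) / sizeof(day[0]);

    for (i = 0; i < n; i++)
        printf("%02d:00  %.1f M instr/sec  -> users probably %s\n",
               day[i].hour, day[i].mips,
               day[i].mips < busy_threshold ? "present" : "away");
    return 0;
}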

Because the definition of security depends on what is acceptable to the user, it was felt that these two issues should be discussed in the Tierra documentation, so that users can make an educated decision about whether they consider them acceptable risks. It was also felt that we do not need to document general network security issues that are not unique to Tierra, although it remains a goal of network Tierra to be as secure as the typical network-oriented Internet programs in use today.

Actually, the most common fear related to the network Tierra project is that the digital organisms will escape and run wild on the net. However, there was unanimous agreement that this Terminator 2 / Jurassic Park scenario is not a security issue. Nobody at the workshop considered this to be a realistic possibility, because the digital organisms are securely confined within the virtual net of virtual machines.





