10.3 Barriers and Points of Entry
In general, a barrier is what stops users from exceeding their authorized access to a container through its points of entry. The points of entry into a container through the barrier are its physical and network-based openings. Physical points of entry are the entrances to the physical location which houses the container, such as the lock on the door of the room in which a computer resides. Network-based points of entry into a computer include its TCP ports, the openings which accept TCP packets into the computer, as well as a limited number of other openings, such as its UDP ports.
While it is possible to enumerate the points of entry into the containers available now, the idea is to extend the concept to containers which we have not yet identified. If containers can be divided into physical and virtual, then the theoretically possible points of entry can be described as a function of that distinction. Physical containers have both physical and software points of entry; virtual containers have software points of entry but no physical ones, as a particular virtual container may have no physical manifestation.
Having identified the points of entry into different types of containers, it is now possible to define the barriers preventing unauthorized entry through each of them. The technical and legal concepts of encryption, trusted agents, and signaling are reinforced by market pressures for encryption products and by social forces favoring the use of encryption in general. The following are technical options; it is assumed that for any given container, one or more of the options will be utilized. Market forces create demand for encryption products and for architectures such as trusted agents. Market forces also create incentives for owners[145] to erect barriers to the entry of third parties into containers or to erect barriers controlling the behavior of authorized users.[146]
Technical components: Encryption as a container
The first technical concept is that of encryption as a virtual container, or more specifically as the barrier of the container. Encryption is used primarily when the other options, such as the trusted agents outlined below, are unavailable. A virtual container cannot defend itself against attack, so it must be externally protected; here, that protection is encryption.
The architecture proposed in this whitepaper relies heavily on the use of encryption, primarily because encryption is the very thing which enables virtual containers. An encryption key thus defines two realms: the data encrypted with that key, which can be considered inside the container, and the data not encrypted with the key, which can be considered outside it. Different types of encryption (public versus secret key encryption) define different permissions, as the next section will show. In addition, it seems that encryption will become more widespread and less controlled by law as time goes on.
Public key encryption defines a public key and a private key, used in the encryption and decryption, respectively, of data. In a world where a public key encryption infrastructure is established, one can imagine an Internet whitepages of sorts, where a sender of email looks up a recipient's public encryption key. The sender has just learned how to write to the recipient's container: encrypt a message with the recipient's public key. Even out on the network, an encrypted email or other entity is protected, assuming the encryption is strong enough.
Once the email is encrypted with the recipient's public key and sent out onto the network, it becomes the property of the recipient, within the confines of the container called his mailbox. Anyone who can find the recipient's public key can write to the container. The only person who can read from the container, however, is the owner, because he is the only one with the private key. In this way, encryption is a container which anyone can create and write to, but which only the owner can read or access.
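The write/read asymmetry just described can be sketched with textbook RSA, using deliberately tiny numbers. These primes and exponents are for illustration only and offer no real security:

```python
# Toy "textbook RSA" illustrating the encryption container.
# Anyone holding the public key (e, n) can "write" into the container;
# only the holder of the private key (d, n) can "read" from it.
# Numbers are far too small for real use -- illustration only.

p, q = 61, 53
n = p * q            # modulus, part of both keys
e = 17               # public exponent: the "write" key
d = 2753             # private exponent: the "read" key (e*d = 1 mod lcm(p-1, q-1))

def write_to_container(message: int) -> int:
    """Encrypt with the public key: anyone can do this."""
    return pow(message, e, n)

def read_from_container(ciphertext: int) -> int:
    """Decrypt with the private key: only the owner can do this."""
    return pow(ciphertext, d, n)

sealed = write_to_container(65)
assert sealed != 65                       # contents are opaque in transit
assert read_from_container(sealed) == 65  # only the private key recovers them
```

The two exponents make the container's permissions concrete: possession of `e` grants write access, possession of `d` grants read access, and the barrier is the computational difficulty of deriving one from the other.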
Due to the security inherent in certain encryption algorithms, an encrypted connection to a container would not be considered a point of entry into the container as much as an extension of the container to include the connection. An unencrypted connection or unencrypted virtual container, however, is totally unsafe, as a postcard is in today's postal system.
Technical components: Trusted agents
The second technical concept, that of the trusted agent, applies more to the physical container than the virtual one. The concept is most easily explained using an example. A government computer containing the most classified information will likely be in a very secure environment. Physically, it is probably located in a room protected by security guards, electronic identification devices, and mechanical locks. It is probably not connected to any network at all; if it is, then most of the ports are shut off and all allowed connections are made into ports responding with software daemons using encrypted connections. The number of people allowed to access the computer is probably quite low. All in all, the security of the computer is entrusted to different sorts of agents: whether they be security guards, identification devices, locks, or secure software daemons. The access to the machine is controlled by guards of all kinds: hardware and software.
In a general sense, a trusted agent is something which the container trusts to admit users into the container and to prevent them from exceeding their authority once inside. The "guard at the gate," whether a piece of software, hardware, or even a person, has been trusted for decades by different operating systems to do this very job. It is not a grandiose or new idea. The Unix operating system, for example, offers a salient example of this guard: telnetd. A user can telnet into a computer in an attempt to access his account. If he presents the proper authentication information, he will be allowed inside the system to complete only those operations consistent with his level of authority. While the trusted agent is helpful in understanding permissions, it also creates a barrier between the outside of the container and the inside. Assuming that the owner employs reasonable software agents, such as fingerd on the appropriate port of his computer, he will be protecting his container in a legal sense. If someone were to exploit the finger daemon in order to obtain improper access to the computer, the owner would have total justification in claiming improper access.
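The guard's two duties, admitting users and then constraining them, can be sketched as follows. The account records and operation names here are hypothetical; a real daemon such as telnetd is far richer:

```python
# A minimal sketch of the "guard at the gate," assuming hypothetical
# user records and operation names.

ACCOUNTS = {
    # username: (password, set of operations the user is authorized for)
    "alice": ("s3cret", {"read", "write"}),
    "guest": ("guest", {"read"}),
}

def admit(username: str, password: str) -> set:
    """Authentication: the guard checks credentials at the point of entry."""
    record = ACCOUNTS.get(username)
    if record is None or record[0] != password:
        raise PermissionError("access denied at the barrier")
    return record[1]

def perform(granted_ops: set, op: str) -> str:
    """Authorization: once inside, a user may not exceed his authority."""
    if op not in granted_ops:
        raise PermissionError(f"operation {op!r} exceeds authorization")
    return f"{op} permitted"

ops = admit("guest", "guest")
assert perform(ops, "read") == "read permitted"
try:
    perform(ops, "write")    # guest may enter, but not exceed his authority
except PermissionError:
    pass
```

The split between `admit` and `perform` mirrors the two legal questions in the text: was entry authorized at all, and did the user stay within his authority once inside.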
Technical components: Sandboxing
Sandboxing, a third component of the technology, is the technique in which a container reinforces itself by creating a thicker barrier around itself. The Java programming language has used sandboxing to prevent Java applets from, for example, erasing a user's hard drive. Unlike Microsoft's ActiveX (TM) technology, which relies on the legal liability of the authors of malicious code, sandboxing actually prevents a program from leaving its container and extending into the parent container: for example, preventing a program carried in an email from affecting the recipient's hard drive.
At present, Java is essentially the only practical way to implement sandboxing. Should sandboxing of all containers take off, it is entirely conceivable that more programming languages and software products would be created to meet the need for sandboxing in a more general sense.
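The sandboxing idea can be sketched as an allowlist at the container boundary. This sketch assumes untrusted "programs" arrive as simple operation requests; Java's applet sandbox, the real-world model, enforces this at the language runtime level:

```python
# A sketch of sandboxing: only operations that stay inside the container
# are executed; operations that would escape into the parent container
# (e.g. touching the hard drive or the network) are blocked.

SANDBOX_ALLOWED = {"draw", "compute"}   # cannot leave the container

def run_in_sandbox(requests):
    """Execute only operations that cannot leave the container."""
    results = []
    for op in requests:
        if op in SANDBOX_ALLOWED:
            results.append((op, "executed"))
        else:
            results.append((op, "blocked by sandbox"))
    return results

outcome = run_in_sandbox(["draw", "delete_file"])
assert outcome == [("draw", "executed"), ("delete_file", "blocked by sandbox")]
```

Note the contrast with the ActiveX approach described above: here the malicious request is never executed, rather than being executed and litigated afterward.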
Technical components: The whole picture
While the whitepaper presents several options for creating barriers between protected entities and the rest of the Internet, it is assumed that, for each container scenario, one or more of the options will be chosen. Sending email securely, for example, requires only the use of encryption as a container. To maintain the sanctity of one's mailbox, one may employ trusted agents to access the mailbox securely as well as sandboxing to prevent improper access outside of that container. Maintaining the security of a file server account requires the use of software agents as well as hardware protection in the form of a safe location for the mainframe.
On the whole, some permutation of the three options presented here needs to be used to provide a secure container on a generic level. In addition, some form of signaling must be used to inform users of their right to enter a container, whether that means providing the appropriate header information to a web browser or signaling through some other mechanism.
Legal components
Preventing third parties from accessing private containers in cyberspace or private spaces on a computer not connected to the Internet requires users to take affirmative steps to protect themselves, such as the use of encryption, trusted agents, or sandboxing.
Under this regime, hackers are subject to criminal sanctions and punitive damages in civil suits, whether or not damage to an owner's system or data occurs. This may mean beneficial information currently produced by hackers must be obtained from other sources, such as contractors with an owner's permission to test a system for security flaws. Unsolicited commercial email would not be affected by these rules if recipients do not take steps to bar access by these solicitations. Unsolicited commercial email which enters private space by circumventing access barriers subjects the sender to the criminal and civil sanctions above.
A barrier erected by an owner must be reasonably difficult to defeat. For instance, if an encryption algorithm is not strong enough, then the barrier is not one that the law respects. Government encryption standards could be used to set a minimum standard for algorithms which the law will enforce when owners erect barriers to access by third parties.
But there are always holes in programs which provide heretofore unforeseen entry points. Consequently, one must distinguish between circumvention of architecture, as occurred in the Robert Morris incident, and "holes" in architecture or bugs in software which provide a point of entry to a container. Our regime uses an unreasonable entry standard, implemented by statute, which looks at the following factors: (1) did the third party use a program outside of its common or reasonable use in order to gain access to the container? (2) did the third party "try to beat the system" to get into the container (e.g., try to crack a password or give false credentials to a trusted agent)? (3) did the third party steal an authorized user's password or assume the identity of an authorized user? If any of the three factors exists, then the third party's access to the private container is illegal and the owner has a civil cause of action against him or her, regardless of whether the third party caused any damage.
The unreasonable entry standard implemented in a criminal statute would require the specific intent of intentional, purposeful, or reckless acts before the elements of the crime are met. No criminal liability exists for negligent acts because our regime places the burden on owners to erect a barrier and prevent unauthorized access. Unintentional access is not a violation under our regime. Unintentional access by a third party suggests that the owner did not erect a sufficient access barrier.
Although the unreasonable entry standard interferes with the goal of ensuring that all users know the rule for accessing containers, the messiness here corresponds to the messiness of real life. In difficult cases, a reasonableness standard applied by judges and juries is required.
Other components
There are other tools at our disposal, outside of legal and technical ones. Social norms and market forces can be used to influence public behavior.[147] In that spirit, this whitepaper recommends the following actions toward the use of the container concept and its associated technical innovations. First, social pressure can create an environment friendly to the widespread use of encryption. If the attitude toward unencrypted email moves toward the current attitude about the security of a postcard, then people will be more likely to use encryption to safeguard their email. Second, education can increase social awareness of the standard rules of access: you are allowed to access a container if you do not have to hack any part of its security mechanism to gain that access. Third, market demand for encryption will, it is hoped, produce freely available and easy-to-use encryption software, along with increasingly stable and bug-free trusted software agents. Overall, the legal and technical proposals create a market need for, and social pressure to use, the technologies suggested.
10.4 Permissions
What are permissions? What makes a container public or private? If a container is physically secured and only the owner can access it, the container is fully private. A container is fully public if any user can access and modify its contents via any connection to it.
Most containers, it seems, fall somewhere in between fully private and fully public. The owner will probably define a limited set of users who can modify the entities within the container and a larger set of users who can view them. In defining the permissions for the container in the traditional sense, the owner will also have the option to define what he requires from each user in order for that user to interact with the container. Perhaps the owner doesn't want users to send anonymous email into his mailbox, or wants only animated graphics but no Java applets accepted into his mailbox. Law and code can be used to encourage the use of permissions as outlined here. The code can provide three solutions, described below.
Technical components: Encryption defines permissions
To some extent, the use of encryption as a container, which was described in great detail above, defines the permissions of that container. That is, anyone in the world can write to the container (see Figure 2) using the public encryption key and only the owner can read from it. While this may be a simple definition of permissions, it is also the most secure one, since "hacking" into the container in most cases is a problem even for government agencies and certainly for the local 10-year-old.
Figure 2: How the encryption container works
Technical components: Operating system style permissions, or trusted agent-defined permissions
The second option for the definition of permissions is the trusted-agent or operating-system style of permissions, as defined above in the ownership section. This relies on the use of guards at each point of entry into a container. Each guard can have a list of users and their associated permissions with respect to each container. The system could be more complex, of course, but the general idea is that the guard is entrusted to decide, at each step, whether a user has the right to perform a particular access on a particular entity within a container. If the agent is corrupted, then there clearly is a problem; however, reliable agents exist today in many forms.
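The guard's per-container list of users and permissions can be sketched as an access-control list. The container names and entries here are hypothetical:

```python
# A sketch of trusted-agent-defined permissions: the guard consults an
# access-control list (ACL) per container. Containers and entries are
# hypothetical examples.

ACL = {
    "mailbox": {"owner": {"read", "write", "delete"},
                "world": {"write"}},     # anyone may deposit mail
    "homepage": {"owner": {"read", "write"},
                 "world": {"read"}},     # anyone may view the page
}

def check(container: str, user: str, op: str) -> bool:
    """The trusted agent decides each access against the container's ACL."""
    entries = ACL.get(container, {})
    role = user if user in entries else "world"
    return op in entries.get(role, set())

assert check("mailbox", "stranger", "write")     # anyone can write to the mailbox
assert not check("mailbox", "stranger", "read")  # only the owner can read it
assert check("homepage", "stranger", "read")
```

The mailbox entry reproduces the asymmetry of the encryption container above, but here it is the agent, not mathematics, that enforces it.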
Technical components: Labels and filtering of certain interactions
The third option, and the only one not previously described, relies on labels and on filters based on those labels. For each access into a container, the user wishing access would label his access in certain predefined ways. The owner of the container can set up filters in code to allow or disallow the access based on the labels. The law, described later, will protect users from fraudulent labeling.
A clear example of this labeling and filtering scheme deals with spam email. A standards organization can establish a set of labels, such as those identifying "commercial" versus "non-commercial" emails. A container owner can specify that, in order to be admitted into his mailbox container, an email must carry a "commercial" header, a boolean value indicating that status. The owner of the container can set up a filter to deny access into the container if the label contains the wrong value. A response email can be sent, if the recipient desires, informing the sender why his email was denied access.
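The mailbox filter just described can be sketched as follows. The boolean "Commercial" header is a hypothetical label of the kind a standards organization might define:

```python
# A sketch of label-based filtering at the mailbox barrier, assuming a
# hypothetical boolean "Commercial" header defined by a standards body.

def admit_to_mailbox(headers: dict, accept_commercial: bool = False) -> bool:
    """Filter at the barrier based on the sender's own labels."""
    if headers.get("Commercial", False) and not accept_commercial:
        return False     # denied; a response email could explain why
    return True

spam = {"From": "seller@example.com", "Commercial": True}
note = {"From": "friend@example.com", "Commercial": False}

assert not admit_to_mailbox(spam)   # honestly labeled commercial mail is kept out
assert admit_to_mailbox(note)
```

The filter's correctness depends entirely on honest labeling, which is why the legal component below attaches sanctions to fraudulent labels.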
Legal components: Preventing falsification of labeling information
The enforcement of barriers erected by owners covers most of the legal issues regarding permissions. In our regime, a state or federal statute would make intentional, purposeful, or reckless falsification of identification credentials given to a container a criminal violation.
11 Evaluation
The architecture put forward in this document suggests a number of important changes to the technical landscape of the Internet. Such changes should not be taken lightly, and they need to be evaluated independently of the reasons that led to the conception of this new architecture. This architecture is not expected to deal perfectly with every issue at hand. One can expect, however, that it will prove compatible with existing practice in a number of ways, and that it will progressively and subtly change the world to the point of making the architecture quite adequate.
11.1 Examples of Trespass Revisited
In the evaluation of the proposed architecture, it is important to ask whether there has been an improvement in the treatment of potential problems. That is to say, has it addressed those issues identified as difficult potential trespasses in previous sections?
Spam Email
The architecture was designed specifically with spam in mind, as spam was one of the largest problems dealt with in the arena of trespass. Following the philosophy diagram put forth in the Architecture section, it is possible to apply the same line of reasoning, and hence the same questions.
First, is the spam unwanted access? In some cases, the answer is no. Certain people are unbothered by spam, and using the proposed architecture would not force them to implement a solution stopping the spam.
What if the spam is indeed unwanted, by someone downloading mail on a slow, expensive connection? In this case, does the code attempt to stop it? Yes: the owner of the mailbox can set up a filtering mechanism allowing him to filter out "bulk" or "commercial" email labeled as such. While such a labeling infrastructure is not currently available, one can easily imagine a small but utilitarian set of labels, such as "commercial," "bulk," "text-only," and "contains-programs," as described here.
In this case, the owner can, with relatively little time, cost, and effort, set up labels blocking commercial email from entering his mailbox. A sender is required to label his email correctly. In the ideal case, then, the recipient would never receive email which he considers spam. While a certain amount of effort is required in the owner-side labeling and filtering, this maintains the goal of default access that is so important to the architecture.
What if the sender of the spam succeeds in getting email into the recipient's mailbox, and thus the code has failed? As the scenario progresses down the flow chart of the philosophy, it arrives at the place of greatest weakness: the unauthorized access has already occurred.
It is hoped that the architecture has correctly done its job, and that it is the sender of the email which got through who actually made fraudulent claims about the content and nature of his emails. In most of those cases, it is possible to track down the sender of the email by one of three techniques: (A) the content of the email points to a particular commercial organization, (B) the sender can be identified from the labels in the email, or (C) the recipient can use the return headers of the email to trace down the sender. Anonymous and pseudonymous remailers make certain situations virtually untraceable, but such is the risk that the architecture takes in maintaining the Internet as a free place. In most cases, however, it will be possible to find at least the organization from which the fraudulent email came, and prosecute as necessary for fraudulent labeling and identification.
Overall, the architecture is more successful than the status quo at keeping out unwanted spam, and more successful in prosecuting senders who falsify return information.
Active Email
The second scenario, active email, relies on the same labeling scheme used in the spam scenario. However, it adds to its arsenal the use of sandboxing, if desired, to create a safe place for the inclusion of some programs.
Is the active email unwanted access? In some cases, no. It is possible to imagine a recipient downloading a program to run on his local machine, and needing to give it access outside of his mailbox to his local files. In this case, it is possible for the recipient to correctly identify the sender of the program (subject to the same legal ramifications for false identification) and to open the sandbox to a particular sender. The technique is generally called the "signing" of particular programs, and it can be used in conjunction with sandboxing. In this case, it is assumed that the recipient knows the sender well and trusts that the sender will not create malicious code.
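The idea of opening the sandbox only to identified senders can be sketched as follows. Real code signing uses public-key signatures; as a simplifying assumption, a shared-secret HMAC stands in here, and the sender names and secrets are hypothetical:

```python
import hmac
import hashlib

# A sketch of "signing": the sandbox is opened only for programs whose
# signature verifies against a sender the recipient already trusts.
# Shared-secret HMAC is a stand-in for real public-key signatures.

TRUSTED_SENDERS = {"alice@example.com": b"shared-secret-with-alice"}

def sandbox_open_for(sender: str, program: bytes, signature: bytes) -> bool:
    """Allow the program outside the sandbox only if its signature verifies."""
    key = TRUSTED_SENDERS.get(sender)
    if key is None:
        return False                       # unknown senders stay sandboxed
    expected = hmac.new(key, program, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

program = b"print('hello')"
sig = hmac.new(b"shared-secret-with-alice", program, hashlib.sha256).digest()
assert sandbox_open_for("alice@example.com", program, sig)
assert not sandbox_open_for("mallory@example.com", program, sig)
```

A forged or tampered program fails verification and stays confined, which is exactly the division of labor the text describes: code confines the untrusted, and law punishes the fraudulent.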
What if the active email is indeed unwanted at the outset? Then the recipient can (A) set up a sandbox around his email inbox in addition to (B) using labels to defend against incoming mail which is labeled as "containing a program." Should he wish to receive "safe" programs but not "unsafe" ones, he can use only (A) but not (B). This will allow him a relative amount of security in the knowledge that it is unlikely that his computer will be accessed. And he has done essentially nothing costly or time-consuming in this defense of his computer.
What if the code fails, and the unwanted, malicious email gets into his computer? First, this condition is unlikely given the current state of sandboxing technologies such as Java, which prevent most unwanted access. In the quite rare case, however, it is possible to download code which harms one's computer and which was, at the same time, labeled incorrectly. In this case, unfortunately, the only recourse is once again to rely on punitive law to deter people from doing this to other people's containers.
Overall, the architecture is more likely than the status quo to prevent damage to a computer from unwanted access. In the small chance that it does not adequately prevent it, the architecture relies on law to punish the sender of the email, and to determine whether the email was indeed malicious. Since the email was incorrectly labeled, there is a good signal that it was.
Hacking a Web Page
The concepts of web page hacking and computer-system hacking fall into largely the same category, because the techniques of access are largely the same.
Is the "hacking," or access to the computer followed by changing of the information presented there, unwanted? Perhaps not. Perhaps the computer is left somewhat open under a "gentlemen's agreement" not to harm it. This falls into a different category altogether, and the law needs to deal with it appropriately.
What if the hacking is unwanted? Then, did the owner of the computer make reasonable attempts to keep the computer's points of entry safe? Did he keep the computer physically in a safe place, with all of the open ports protected by widely used software? If so, then he has probably protected himself from hacking.
What if the code fails, and the reasonable protection on the points of entry is not enough? Then, having attempted to protect himself, the container owner has the legal right to try to prosecute the enterer, should he (A) want to do so, and (B) have the proper identification information, frequently found in access logs. And in the end, the money and time spent went toward preventing unwanted hacking into the computer.
Overall, the architecture clarifies the legal recourse possible where hacking is involved. As well, it does not seem to harm the ideal of free exchange of ideas on the Internet, and it still allows privacy and commerce to flourish.
Other Cases
It is important to note that the architecture also deals with borderline cases of computers such as wristwatches and so-called "palmtops." Since the main point of entry there is someone's peeking into the viewable area of one of these small devices, it becomes incumbent upon the owner to protect that point of entry. If he leaves the device open on his desk in a public space, displaying private information, he has not done enough. If he uses it on a subway without checking who can see it, he has not done enough. If he tries legitimately to protect that point of entry and fails because of the sneakiness of someone desiring access to his palmtop, then he has tried hard enough and has legal recourse as necessary, at no cost and very little effort on his part.
11.2 Feasibility
Usability