Computer security

Computer security is the effort to create a secure computing platform, designed so that agents (users or programs) cannot perform actions that they are not allowed to perform, but can perform the actions that they are allowed to. This involves specifying and implementing a security policy. The actions in question can be reduced to operations of access, modification and deletion. Computer security can be seen as a subfield of security engineering, which looks at broader security issues in addition to computer security.
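
As an illustration only, a security policy of this kind can be modelled as an allow-list mapping each agent to the operations it may perform on each object; the agent and object names below are hypothetical, and the sketch assumes every action reduces to access, modification or deletion, as above.

    # Minimal sketch of a security policy as an explicit allow-list.
    POLICY = {
        ("alice", "payroll.db"): {"access", "modification"},
        ("backup", "payroll.db"): {"access"},
    }

    def is_permitted(agent: str, obj: str, operation: str) -> bool:
        """Allow an operation only if the policy explicitly grants it."""
        return operation in POLICY.get((agent, obj), set())

    # alice may read and modify but not delete; unknown agents may do nothing.
    assert is_permitted("alice", "payroll.db", "access")
    assert not is_permitted("alice", "payroll.db", "deletion")
    assert not is_permitted("mallory", "payroll.db", "access")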

It is important to understand that in a secure system, the legitimate users of that system are still able to do what they should be able to do. A computer system sequestered in a vault, with no power supply or means of communication, is 'secure' only in a trivial sense, since it is also useless.

It is also important to distinguish the techniques employed to increase a system's security from the issue of that system's security status. In particular, systems which contain fundamental flaws in their security designs cannot be made secure without compromising their utility. Consequently, most computer systems cannot be made secure even after the application of extensive "computer security" measures.

Computer security by design

There are two different approaches to security in computing. One focuses mainly on external threats, and generally treats the computer system itself as a trusted system. This philosophy is discussed in the computer insecurity article.

The other, discussed in this article, regards the computer system itself as largely an untrusted system, and redesigns it to make it more secure in a number of ways.

This approach enforces privilege separation, where an entity has only the privileges that are needed for its function. That way, even if an attacker subverts one part of the system, fine-grained security ensures that it is just as difficult for them to subvert the rest.
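
As a rough sketch of this idea (the component names are invented for illustration), the untrusted part of a program can be handed only the narrow interface it needs, rather than the whole object:

    # Privilege separation sketch: the untrusted parser receives only an
    # append function, so even if it is subverted it cannot read or delete
    # the messages already stored in the spool.
    class Spool:
        def __init__(self):
            self._messages = []

        def append(self, message: str) -> None:
            self._messages.append(message)

        def read_all(self):  # privileged operation, never given to the parser
            return list(self._messages)

    def untrusted_parser(raw: bytes, deliver) -> None:
        # 'deliver' is the only authority this code holds.
        deliver(raw.decode("utf-8", errors="replace"))

    spool = Spool()
    untrusted_parser(b"hello", spool.append)
    print(spool.read_all())  # ['hello']; only trusted code can read the spool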

Furthermore, by breaking the system up into smaller components, the complexity of individual components is reduced, opening up the possibility of using techniques such as automated theorem proving to prove the correctness of crucial software subsystems. Where formal correctness proofs are not possible, rigorous code review and unit testing can be applied to make modules as secure as possible.
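
Where unit testing is used in this way, the tests typically encode the security assumptions a module must uphold. The following sketch uses Python's standard unittest module to test a hypothetical input-validation function; the validation rule itself is invented for illustration.

    import re
    import unittest

    def validate_username(name: str) -> bool:
        """Accept only short, purely alphanumeric user names (hypothetical rule)."""
        return bool(re.fullmatch(r"[A-Za-z0-9]{1,16}", name))

    class TestValidateUsername(unittest.TestCase):
        def test_accepts_simple_names(self):
            self.assertTrue(validate_username("alice01"))

        def test_rejects_metacharacters(self):
            # Characters that could reach a shell or a query must be rejected.
            self.assertFalse(validate_username("alice; rm -rf /"))

        def test_rejects_empty_and_overlong(self):
            self.assertFalse(validate_username(""))
            self.assertFalse(validate_username("a" * 17))

    if __name__ == "__main__":
        unittest.main()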

The design should use "defense in depth", where more than one subsystem needs to be compromised to compromise the security of the system and the information it holds. Subsystems should default to secure settings, and wherever possible should be designed to "fail secure" rather than "fail insecure" (see fail safe for the equivalent in safety engineering). Ideally, a secure system should require a deliberate, conscious, knowledgeable and free decision on the part of legitimate authorities in order to make it insecure.
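
A minimal sketch of "fail secure" behaviour, assuming a hypothetical policy lookup that can fail, is to deny access whenever the decision cannot actually be made:

    # Fail-secure access check: any error while consulting the policy
    # results in denial rather than access.
    def lookup_policy(agent: str, resource: str) -> bool:
        raise ConnectionError("policy server unreachable")  # simulated outage

    def check_access(agent: str, resource: str) -> bool:
        try:
            return lookup_policy(agent, resource)
        except Exception:
            return False  # when in doubt, deny

    print(check_access("alice", "payroll.db"))  # False, because the check failed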

In addition, security should not be an all-or-nothing issue. The designers and operators of systems should assume that security breaches are inevitable in the long term. Full audit trails should be kept of system activity, so that when a security breach occurs, the mechanism and extent of the breach can be determined. Finally, full disclosure helps to ensure that when bugs are found the "window of vulnerability" is kept as short as possible.
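
As one possible (and deliberately simplified) way of keeping such an audit trail, the sketch below records every access decision using Python's standard logging module; in practice the records would go to append-only or remote storage so that an intruder cannot erase them.

    import logging

    # Audit trail sketch: every access decision is written to audit.log.
    audit = logging.getLogger("audit")
    handler = logging.FileHandler("audit.log")
    handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))
    audit.addHandler(handler)
    audit.setLevel(logging.INFO)

    def record_decision(agent: str, operation: str, obj: str, allowed: bool) -> None:
        audit.info("agent=%s op=%s object=%s allowed=%s", agent, operation, obj, allowed)

    record_decision("alice", "access", "payroll.db", True)
    record_decision("mallory", "deletion", "payroll.db", False)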

Early history of security by design

The early Multics operating system was notable for its emphasis on computer security by design, and was possibly the very first operating system to be designed as a secure system from the ground up. In spite of this, Multics' security was broken not once, but repeatedly. This led to further work on computer security that prefigured modern security engineering techniques.

Techniques for creating secure systems

The following techniques can be used in engineering secure systems. Note that these techniques, while useful, do not by themselves ensure security: a system is only as secure as its weakest link.

Cryptographic techniques involve transforming information, scrambling it so that it becomes unreadable during transmission. The intended recipient can unscramble the message, but eavesdroppers cannot.
  • Cryptographic techniques can be used to defend data in transit between systems, reducing the probability that data exchanged between systems can be intercepted or modified.
  • Strong authentication techniques can be used to ensure that communication end-points are who they say they are (a minimal sketch of message authentication appears after this list).
  • Secure cryptoprocessors can be used to leverage physical security techniques into protecting the security of the computer system.
  • Chain of trust techniques can be used to attempt to ensure that all software loaded has been certified as authentic by the system's designers.
  • Mandatory access control can be used to ensure that privileged access is withdrawn when privileges are revoked. For example, deleting a user account should also stop any processes that are running with that user's privileges.
  • Capability and access control list techniques can be used to ensure privilege separation and mandatory access control. The next sections discuss their use.
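
As a minimal sketch of one technique from the list above, strong authentication of messages can be provided with an HMAC; the example below uses Python's standard hmac and hashlib modules and assumes the two end-points already share the secret key by some other means.

    import hashlib
    import hmac
    import secrets

    key = secrets.token_bytes(32)  # assumed to be shared securely in advance

    def sign(message: bytes) -> bytes:
        return hmac.new(key, message, hashlib.sha256).digest()

    def verify(message: bytes, tag: bytes) -> bool:
        # compare_digest avoids leaking information through timing differences.
        expected = hmac.new(key, message, hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)

    msg = b"transfer 10 to bob"
    tag = sign(msg)
    print(verify(msg, tag))                        # True: message is authentic
    print(verify(b"transfer 10 to mallory", tag))  # False: tampering is detected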

Capabilities vs. ACLs

Within computer systems, the two fundamental means of enforcing privilege separation are access control lists (ACLs) and capabilities. The semantics of ACLs have been proven to be insecure in many situations (for example, the confused deputy problem). It has also been shown that ACLs' promise of giving access to an object to only one person can never be guaranteed in practice. Both of these problems are resolved by capabilities.

Unfortunately, for various historical reasons, capabilities have been mostly restricted to research operating systems, and commercial OSes still use ACLs. Capabilities can, however, also be implemented at the language level, leading to a style of programming that is essentially a refinement of standard object-oriented design. An open source project in the area is the E language[1].
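
A rough sketch of this capability style in Python (not the E language itself; the class and file names are invented for illustration) is to convey authority purely by handing out an object reference:

    import os
    import tempfile

    # Capability sketch: holding a ReadOnlyFile grants the ability to read
    # one particular file and nothing more; there is no ambient authority.
    class ReadOnlyFile:
        def __init__(self, path: str):
            self._path = path

        def read(self) -> str:
            with open(self._path, "r", encoding="utf-8") as f:
                return f.read()

    def word_count(doc: ReadOnlyFile) -> int:
        # This function can read only the file it was handed.
        return len(doc.read().split())

    # Demonstration with a temporary file standing in for a real document.
    fd, path = tempfile.mkstemp(text=True)
    with os.fdopen(fd, "w") as f:
        f.write("capabilities convey authority by reference")
    print(word_count(ReadOnlyFile(path)))  # 5
    os.remove(path)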

The Cambridge CAP computer demonstrated the use of capabilities, both in hardware and software, in the 1970s, so this technology is hardly new. A reason for the lack of adoption of capabilities may be that ACLs appeared to offer a 'quick fix' for security without pervasive redesign of the operating system and hardware.

A good example of a current secure system is EROS, but see also the article on secure operating systems. TrustedBSD is an example of an open-source project with a goal, among other things, of building capability functionality into the FreeBSD operating system. Much of the work is already done.

The term "trusted" is often applied to operating systems that meet different levels of the Common Criteria, some of which incorporate the techniques for creating secure systems discussed above. Unfortunately, a computer industry group led by Microsoft, in an attempt to market a different set of products and services, has taken the term "trusted system" and extended it to cover computer hardware that prevents the user from having full control over their own system. They call their project the Trusted Computing Platform Alliance (TCPA). See also: Palladium.

Further reading

Computer security is a highly complex field, and it is relatively immature. The ever-greater amounts of money dependent on electronic information make protecting it a growing industry and an active research topic.

Notable persons in computer security

For additional persons, see also: Category:Cryptographers.

See also

See Category:Computer security for a complete list of all related articles.

References

