Computer security

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by Ark~enwiki (talk | contribs) at 10:26, 10 June 2002. The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

A secure computing platform is one designed so that those agents who should not be able to perform certain actions cannot do them while those agents who should be able to perform certain actions can do them. The actions in question can be reduced to operations of access, modification and deletion.
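The definition above can be sketched as a policy lookup: a table maps (agent, object) pairs to the set of operations that agent may perform, and every request is checked against it. This is a minimal illustrative model with hypothetical names, not the design of any real system.

```python
# Illustrative sketch of the definition above: a policy maps
# (agent, object) pairs to the set of permitted operations, and a
# request is allowed only if the policy explicitly grants it.

OPERATIONS = {"access", "modify", "delete"}

# Hypothetical policy table for two agents and one object.
policy = {
    ("alice", "report.txt"): {"access", "modify"},
    ("bob",   "report.txt"): {"access"},
}

def is_permitted(agent: str, obj: str, operation: str) -> bool:
    """An operation is permitted only if the policy explicitly grants it."""
    assert operation in OPERATIONS, "unknown operation"
    return operation in policy.get((agent, obj), set())

print(is_permitted("alice", "report.txt", "modify"))  # True
print(is_permitted("bob", "report.txt", "delete"))    # False
```

Note the default: an (agent, object) pair absent from the table yields the empty set, so anything not explicitly granted is denied.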

It is important to understand that in a secure system, the legitimate users of that system are still able to do what they should be able to do. A computer system sequestered in a vault, without any means of power or communication, is 'secure' only in a trivial and useless sense.

It is also important to distinguish the techniques employed to increase a system's security from the issue of that system's security status. In particular, systems whose security designs are fundamentally flawed cannot be made secure without compromising their utility. Consequently, most computer systems today cannot be made secure even after the application of extensive "computer security" measures.

There are two different cultures of security in computing. One focuses mainly on external threats, and generally treats the computer system itself as a trusted system. See the article computer insecurity for a description of the current state of the art in this approach. The other regards the computer system itself as largely an untrusted system, and redesigns it to make it secure. This way, even if an attacker has subverted one part of the system, fine-grained security ensures that subverting the rest remains just as difficult. A system with a rigorous security design should require a deliberate, conscious, knowledgeable and free decision on the part of legitimate authorities in order to make it insecure.

Within computer systems, the two fundamental means of making operations secure are access control lists (ACLs) and capabilities. The semantics of ACLs have been shown to be insecure in many situations (e.g., the confused deputy problem). It has also been shown that ACLs' promise of restricting access to an object to a single person can never be guaranteed in practice. Both of these problems are resolved by capabilities.
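The confused deputy problem can be illustrated with a small sketch (hypothetical names throughout, not any real system's API). Under the ACL model, a deputy such as a compiler acts with its own ambient authority, so a caller can name a file the caller itself may not touch. Under the capability model, the caller must hand the deputy an unforgeable object reference it already holds, so it can only delegate authority it actually has.

```python
# Sketch of the confused deputy problem and its capability-based resolution.

class File:
    """A file-like object; holding a reference to it is the capability."""
    def __init__(self, name: str):
        self.name = name
        self.data: list[str] = []

    def append(self, line: str) -> None:
        self.data.append(line)

# --- ACL style: authority comes from the acting principal's identity -----
ACL = {"billing.log": {"deputy"}, "output.obj": {"deputy", "user"}}

files = {"billing.log": File("billing.log"), "output.obj": File("output.obj")}

def acl_write(principal: str, filename: str, line: str) -> None:
    if principal not in ACL[filename]:
        raise PermissionError(filename)
    files[filename].append(line)

def deputy_compile(output_name: str) -> None:
    # The deputy writes wherever the caller says, but with the DEPUTY'S
    # authority -- this is the confusion an attacker exploits.
    acl_write("deputy", output_name, "compiled code")

# The user cannot write billing.log directly, but the deputy can be
# tricked into doing it on the user's behalf:
deputy_compile("billing.log")
print(files["billing.log"].data)  # ['compiled code'] -- log corrupted

# --- Capability style: authority is the object reference itself ----------
def cap_deputy_compile(output_file: File) -> None:
    # The caller must pass a File it already holds; it cannot name
    # billing.log unless it was granted that object.
    output_file.append("compiled code")

user_output = File("output.obj")
cap_deputy_compile(user_output)  # user can only delegate what it holds
```

The key difference is that the capability version has no name-to-authority lookup at all: designation (which file) and authorization (may you write it) travel together in the reference, so the deputy cannot be confused into misusing its own privileges.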

Unfortunately, for various historical reasons, capabilities have been restricted to research operating systems, while commercial operating systems still use ACLs.

The Cambridge CAP computer demonstrated the use of capabilities, both in hardware and software, in the 1970s, so this technology is hardly new. The reason for the lack of adoption of capabilities may be that ACLs appeared to offer a 'quick fix' for security without pervasive re-design of the operating system and hardware.

Further reading

Computer security is a highly complex field, and is relatively immature. The ever-greater amounts of money dependent on electronic information make protecting it a growing industry and an active research topic.

There is a large culture surrounding electronic security, known as the Electronic Underground Community.

Related topics: security engineering, cryptology, cryptography, physical security, hacking, secure coding practice, full disclosure.

References:

  • Ross J. Anderson, Security Engineering: A Guide to Building Dependable Distributed Systems, ISBN 0471389226
  • Bruce Schneier, Secrets & Lies: Digital Security in a Networked World, ISBN 0471253111

See also: Security-Enhanced Linux