Security Mechanisms and
Fundamental Security Design Principles
     Ms. Fatima Zahra Sajid
       3rd CS R Information Security
Security Mechanisms:
      Security Mechanisms are used to implement security services.
• Encipherment
• Digital signature
• Access Control
• Data Integrity
• Authentication Exchange
• Traffic Padding
• Routing Control
• Notarization
 Encipherment: The use of mathematical algorithms to transform data into a
form that is not readily intelligible. The transformation and subsequent
recovery of the data depend on an algorithm and zero or more encryption keys.
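As a minimal sketch of this idea, the toy XOR cipher below (deliberately insecure, for illustration only; the function name is hypothetical) shows how both the transformation and its reversal depend on an algorithm plus a key:

```python
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy encipherment: XOR each data byte with a repeating key byte.
    # The same transformation both enciphers and deciphers.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

ciphertext = xor_cipher(b"attack at dawn", b"secret")
recovered = xor_cipher(ciphertext, b"secret")   # recovery requires the same key
```

Real systems use vetted algorithms such as AES; XOR with a short repeating key is trivially breakable.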
Digital signature: Data appended to, or a cryptographic transformation of, a
data unit that allows a recipient of the data unit to prove the source and
integrity of the data unit and protect against forgery (e.g., by the recipient).
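A hedged sketch of the sign/verify flow, using textbook RSA with deliberately tiny parameters (never usable in practice), illustrates how a recipient can check source and integrity with only the public values:

```python
import hashlib

# Toy textbook-RSA parameters: n = 61 * 53, e*d = 1 (mod 3120).
# Far too small for real use; for illustration only.
n, e, d = 3233, 17, 2753

def digest(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def sign(msg: bytes) -> int:
    return pow(digest(msg), d, n)          # signer uses the private key d

def verify(msg: bytes, sig: int) -> bool:
    return pow(sig, e, n) == digest(msg)   # anyone can check with public (n, e)

sig = sign(b"pay Bob 10")
assert verify(b"pay Bob 10", sig)          # genuine signature checks out
assert not verify(b"pay Bob 10", sig + 1)  # a tampered signature is rejected
```

Production signatures use standardized schemes (e.g., RSA-PSS or ECDSA) with proper padding and key sizes.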
Access Control: A variety of mechanisms that enforce access rights to
resources.
Data Integrity: A variety of mechanisms used to assure the integrity of a data
unit or stream of data units.
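One common integrity mechanism is a cryptographic checksum stored alongside the data unit; this small sketch (variable names are illustrative) shows how any modification is detected on verification:

```python
import hashlib

def checksum(data: bytes) -> str:
    # SHA-256 digest as an integrity tag for a data unit
    return hashlib.sha256(data).hexdigest()

stored = b"transfer 100 to account 42"
tag = checksum(stored)   # computed when the data unit is created

unmodified_ok = checksum(stored) == tag                          # verifies
tampered_ok = checksum(b"transfer 900 to account 42") == tag     # caught
```

Note that a bare hash only detects accidental or unauthenticated modification; detecting deliberate tampering requires a keyed mechanism such as an HMAC or a digital signature.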
Authentication Exchange: A mechanism intended to ensure the identity of an
entity by means of information exchange.
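A classic authentication exchange is challenge-response: the verifier sends a fresh random challenge, and the claimant proves knowledge of a shared secret without sending it. A minimal sketch, assuming a pre-shared key:

```python
import hashlib
import hmac
import secrets

SHARED_KEY = b"pre-shared secret"   # known to both parties in advance

def respond(challenge: bytes, key: bytes) -> bytes:
    # Claimant's response: keyed MAC over the verifier's challenge
    return hmac.new(key, challenge, hashlib.sha256).digest()

challenge = secrets.token_bytes(16)        # fresh per exchange (replay defense)
response = respond(challenge, SHARED_KEY)
authentic = hmac.compare_digest(response, respond(challenge, SHARED_KEY))
```

Using a fresh random challenge each time prevents an eavesdropper from replaying an old response.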
Traffic Padding: The insertion of bits into gaps in a data stream to frustrate
an eavesdropper's traffic analysis attempts.
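The idea can be sketched as padding every message to a fixed frame size with random filler, so an observer learns nothing from message lengths (frame format here is an illustrative assumption):

```python
import os

BLOCK = 64   # every frame on the wire is exactly BLOCK bytes

def pad(msg: bytes) -> bytes:
    # 2-byte length prefix + message + random filler up to BLOCK bytes
    assert len(msg) <= BLOCK - 2
    return len(msg).to_bytes(2, "big") + msg + os.urandom(BLOCK - 2 - len(msg))

def unpad(frame: bytes) -> bytes:
    n = int.from_bytes(frame[:2], "big")
    return frame[2:2 + n]

short_frame = pad(b"yes")
long_frame = pad(b"attack at dawn")   # both frames are indistinguishable in size
```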
Routing Control: Enables selection of particular physically secure routes for
certain data and allows routing changes, especially when a breach of security
is suspected.
Notarization: The use of a trusted third party to assure certain properties of a
data exchange.
Fundamental Security Design Principles:
Despite years of research and development, it has not been
possible to develop security design and implementation
techniques that systematically exclude security flaws and
prevent all unauthorized actions. In the absence of such
foolproof techniques, it is useful to have a set of widely
agreed design principles that can guide the development of
protection mechanisms. The National Centers of Academic
Excellence in Information Assurance/Cyber Defense, jointly
sponsored by the U.S. National Security Agency and the U.S.
Department of Homeland Security, list the following as
fundamental security design principles.
• Economy of mechanism
• Fail-safe defaults
• Complete mediation
• Open design
• Separation of privilege
 • Least privilege
• Least common mechanism
• Psychological acceptability
• Isolation
• Encapsulation
• Modularity
• Layering
• Least astonishment
Economy of mechanism:
means the design of security measures embodied in both hardware
and software should be as simple and small as possible. The
motivation for this principle is that a relatively simple, small design is
    easier to test and verify thoroughly. With a complex design, there
    are many more opportunities for an adversary to discover subtle
    weaknesses to exploit that may be difficult to spot ahead of time.
    The more complex the mechanism is, the more likely it is to possess
    exploitable flaws. Simple mechanisms tend to have fewer
    exploitable flaws and require less maintenance. Furthermore,
    because configuration management issues are simplified, updating
    or replacing a simple mechanism becomes a less intensive process.
    In practice, this is perhaps the most difficult principle to honor.
    There is a constant demand for new features in both hardware and
    software, complicating the security design task. The best that can
    be done is to keep this principle in mind during system design to try
    to eliminate unnecessary complexity.
Fail-safe defaults:
   means access decisions should be based on permission rather than
    exclusion. That is, the default situation is lack of access, and the
    protection scheme identifies conditions under which access is
    permitted. This approach exhibits a better failure mode than the
    alternative approach, where the default is to permit access. A design
    or implementation mistake in a mechanism that gives explicit
    permission tends to fail by refusing permission, a safe situation that
    can be quickly detected. On the other hand, a design or
    implementation mistake in a mechanism that explicitly excludes
    access tends to fail by allowing access, a failure that may long go
    unnoticed in normal use. For example, most file access systems work
    on this principle and virtually all protected services on client/server
    systems work this way.
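The principle above reduces to a simple pattern in code: grant access only on an explicit permission, so an unknown user or resource falls through to denial. A minimal sketch with illustrative names:

```python
# Explicit grants; anything not listed is implicitly denied
permissions = {("alice", "report.txt"): {"read"}}

def access_allowed(user: str, resource: str, right: str) -> bool:
    # Fail-safe default: the absence of an entry means NO access
    return right in permissions.get((user, resource), set())

known_user_ok = access_allowed("alice", "report.txt", "read")
unknown_user_ok = access_allowed("mallory", "report.txt", "read")   # denied
```

A mistake here (a missing grant) fails safely as a refusal the legitimate user will report, rather than as silent unauthorized access.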
Complete mediation:
means every access must be checked against the access control
 mechanism. Systems should not rely on access decisions retrieved
    from a cache. In a system designed to operate continuously, this
    principle requires that, if access decisions are remembered for future
    use, careful consideration be given to how changes in authority are
    propagated into such local memories. File access systems appear to
    provide an example of a system that complies with this principle.
    However, typically, once a user has opened a file, no check is made
    to see if permissions have changed. To fully implement complete mediation,
    every time a user reads a field or record in a file, or a data item in a
    database, the system must exercise access control. This resource-
    intensive approach is rarely used.
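A sketch of the contrast described above: the access check runs on every read rather than being cached at open time, so a revocation takes effect immediately (names are illustrative):

```python
permissions = {("alice", "salary.csv"): {"read"}}

def read_record(user: str, resource: str) -> str:
    # Complete mediation: consult the access control mechanism on EVERY
    # access, rather than caching the decision made at open time.
    if "read" not in permissions.get((user, resource), set()):
        raise PermissionError(f"{user} may not read {resource}")
    return f"record from {resource}"

first = read_record("alice", "salary.csv")             # allowed while authorized
permissions[("alice", "salary.csv")].discard("read")   # authority is revoked...
try:
    read_record("alice", "salary.csv")
    revocation_enforced = False
except PermissionError:
    revocation_enforced = True    # ...and the very next access is refused
```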
Open design:
   means the design of a security mechanism should be open rather than
    secret. For example, although encryption keys must be secret,
    encryption algorithms should be open to public scrutiny. The
    algorithms can then be reviewed by many experts, and users can
    therefore have high confidence in them. This is the philosophy behind
    the National Institute of Standards and Technology (NIST) program of
    standardizing encryption and hash algorithms, and has led to the
    widespread adoption of NIST-approved algorithms.
Separation of privilege:
means that multiple privilege attributes are required to achieve access
    to a restricted resource. A good example of this is multifactor user
    authentication, which requires the use of multiple techniques, such
    as a password and a smart card, to authorize a user. The term is
    also now applied to any technique in which a program is divided
    into parts that are limited to the specific privileges they require in
    order to perform a specific task. This is used to mitigate the
    potential damage of a computer security attack. One example of
    this latter interpretation of the principle is removing high privilege
    operations to another process and running that process with the
    higher privileges required to perform its tasks. Day-to-day
    interfaces are executed in a lower privileged process.
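The multifactor interpretation can be sketched as requiring both factors to verify, so possessing either one alone grants nothing (the secrets and names here are purely illustrative):

```python
import hashlib
import hmac

PASSWORD_HASH = hashlib.sha256(b"hunter2").hexdigest()
CARD_SECRET = b"smart-card secret"   # stands in for a smart-card credential

def authenticate(password: str, card_token: bytes) -> bool:
    # Separation of privilege: BOTH attributes must check out, not either one
    pw_ok = hashlib.sha256(password.encode()).hexdigest() == PASSWORD_HASH
    card_ok = hmac.compare_digest(card_token, CARD_SECRET)
    return pw_ok and card_ok

both_factors = authenticate("hunter2", CARD_SECRET)
password_only = authenticate("hunter2", b"no card")   # one factor is not enough
```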
Least privilege:
   means every process and every user of the system should operate using the
    least set of privileges necessary to perform the task. A good example of the
    use of this principle is role-based access control, as will be described in
    Chapter 4. The system security policy can identify and define the various roles
    of users or processes. Each role is assigned only those permissions needed to
    perform its functions. Each permission specifies a permitted access to a
    particular resource (such as read and write access to a specified file or
    directory, and connect access to a given host and port). Unless permission is
    granted explicitly, the user or process should not be able to access the
    protected resource. More generally, any access control system should allow
    each user only the privileges that are authorized for that user. There is also a
    temporal aspect to the least privilege principle. For example, system
    programs or administrators who have special privileges should have those
    privileges only when necessary; when they are doing ordinary activities the
    privileges should be withdrawn. Leaving them in place just opens the door to
    accidents.
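The role-based access control example mentioned above can be sketched as a mapping from each role to only the permissions its function requires (role and resource names are illustrative):

```python
# Each role is assigned only the permissions needed for its function
ROLE_PERMISSIONS = {
    "auditor": {("ledger.db", "read")},
    "payroll_clerk": {("ledger.db", "read"), ("ledger.db", "write")},
}

def can(role: str, resource: str, right: str) -> bool:
    # Least privilege: no role accumulates rights beyond its task
    return (resource, right) in ROLE_PERMISSIONS.get(role, set())

auditor_reads = can("auditor", "ledger.db", "read")
auditor_writes = can("auditor", "ledger.db", "write")   # auditors need not write
```

The temporal aspect would correspond to adding and later removing a role from a user's active session, rather than leaving elevated rights in place.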
Least common mechanism:
   means the design should minimize the functions shared by different
    users, providing mutual security. This principle helps reduce the
    number of unintended communication paths and reduces the amount
    of hardware and software on which all users depend, thus making it
    easier to verify if there are any undesirable security implications.
Psychological acceptability:
   implies the security mechanisms should not interfere unduly with the
    work of users, and at the same time meet the needs of those who
    authorize access. If security mechanisms hinder the usability or
    accessibility of resources, users may opt to turn off those
    mechanisms. Where possible, security mechanisms should be
    transparent to the users of the system or at most introduce minimal
    obstruction. In addition to not being intrusive or burdensome,
    security procedures must reflect the user's mental model of
    protection. If the protection procedures do not make sense to the user,
    or if the user must translate his or her image of protection into a
    substantially different protocol, the user is likely to make errors.
Isolation:
   is a principle that applies in three contexts. First, public access systems
    should be isolated from critical resources (data, processes, etc.) to prevent
    disclosure or tampering. In cases where the sensitivity or criticality of the
    information is high, organizations may want to limit the number of systems on
    which that data are stored and isolate them, either physically or logically.
    Physical isolation may include ensuring that no physical connection exists
    between an organization’s public access information resources and an
    organization’s critical information. When implementing logical isolation
    solutions, layers of security services and mechanisms should be established
    between public systems and secure systems that are responsible for protecting
    critical resources. Second, the processes and files of individual users should
    be isolated from one another except where it is explicitly desired. All modern
    operating systems provide facilities for such isolation, so individual users have
    separate, isolated process space, memory space, and file space, with
    protections for preventing unauthorized access. And finally, security
    mechanisms should be isolated in the sense of preventing access to those
    mechanisms. For example, logical access control may provide a means of
    isolating cryptographic software from other parts of the host system and for
    protecting cryptographic software from tampering and the keys from
    replacement or disclosure.
Encapsulation:
   can be viewed as a specific form of isolation based on object-oriented
    functionality. Protection is provided by encapsulating a
    collection of procedures and data objects in a domain of its own so
    that the internal structure of a data object is accessible only to the
    procedures of the protected subsystem and the procedures may be
    called only at designated domain entry points.
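A small object-oriented sketch of this idea: the internal balance is reachable only through designated entry points. (Python's underscore convention marks but does not strictly enforce privacy; languages with access modifiers enforce it.)

```python
class Account:
    """Internal state is reachable only through designated entry points."""

    def __init__(self, balance: int):
        self._balance = balance   # internal structure, not touched directly

    def deposit(self, amount: int) -> None:
        # Entry point enforces the domain's own invariants
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    def balance(self) -> int:
        return self._balance

acct = Account(100)
acct.deposit(50)
current = acct.balance()
```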
Modularity:
   in the context of security refers both to the development of security
    functions as separate, protected modules, and to the use of a modular
    architecture for mechanism design and implementation. With respect to
    the use of separate security modules, the design goal here is to provide
    common security functions and services, such as cryptographic functions,
    as common modules. For example, numerous protocols and applications
    make use of cryptographic functions. Rather than implementing such
    functions in each protocol or application, a more secure design is provided
    by developing a common cryptographic module that can be invoked by
    numerous protocols and applications. The design and implementation
    effort can then focus on the secure design and implementation of a single
    cryptographic module, including mechanisms to protect the module from
    tampering. With respect to the use of a modular architecture, each
    security mechanism should be able to support migration to new
    technology or upgrade of new features without requiring an entire system
    redesign. The security design should be modular so that individual parts of
    the security design can be upgraded without the requirement to modify
    the entire system.
Layering:
   refers to the use of multiple, overlapping protection approaches
    addressing the people, technology, and operational aspects of
    information systems. By using multiple, overlapping protection
    approaches, the failure or circumvention of any individual
    protection approach will not leave the system unprotected. We will
    see throughout this book that a layering approach is often used to
    provide multiple barriers between an adversary and protected
    information or services. This technique is often referred to as
    defense in depth.
Least astonishment:
   means a program or user interface should always respond in
    the way that is least likely to astonish the user. For example,
    the mechanism for authorization should be transparent
    enough to a user that the user has a good intuitive
    understanding of how the security goals map to the provided
    security mechanism.