NIST Checklist

Family                                 Type      Requirement   Control
Access Control                         Basic     3.1.1         03.01.01
Awareness and Training                 Basic     3.2.1         03.02.01
Configuration Management               Basic     3.4.1         03.04.01
Identification and Authentication      Basic     3.5.1         03.05.01
Incident Response                      Basic     3.6.1         03.06.01
Incident Response                      Basic     3.6.2         03.06.02
Maintenance                            Basic     3.7.1         03.07.01
Media Protection                       Basic     3.8.1         03.08.01
Personnel Security                     Basic     3.9.1         03.09.01
Personnel Security                     Basic     3.9.2         03.09.02
Physical Protection                    Basic     3.10.1        03.10.01
Risk Assessment                        Basic     3.11.1        03.11.01
Risk Assessment                        Derived   3.11.2        03.11.02
Security Assessment                    Basic     3.12.1        03.12.01
System and Communications Protection   Basic     3.13.1        03.13.01
System and Communications Protection   Basic     3.13.2        03.13.02
System and Information Integrity       Basic     3.14.1        03.14.01
Access control policies (e.g., identity- or role-based policies, control matrices, and
cryptography) control access between active entities or subjects (i.e., users or
processes acting on behalf of users) and passive entities or objects (e.g., devices,
files, records, and domains) in systems. Access enforcement mechanisms can be
employed at the application and service level to provide increased information
security. Other systems include systems internal and external to the organization.
This requirement focuses on account management for systems and applications.
The definition and enforcement of access authorizations, other than those
determined by account type (e.g., privileged versus non-privileged), are
addressed in requirement 3.1.2.
Organizations may choose to define access privileges or other attributes by
account, by type of account, or a combination of both. System account types
include individual, shared, group, system, anonymous, guest, emergency,
developer, manufacturer, vendor, and temporary. Other attributes required for
authorizing access include restrictions on time-of-day, day-of-week, and point-of-
origin. In defining other account attributes, organizations consider system-related
requirements (e.g., scheduled maintenance, system upgrades) and mission or
business requirements (e.g., time zone differences, customer requirements, remote
access to support travel requirements).
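To make the attribute checks above concrete, here is a minimal Python sketch of enforcing a time-of-day and day-of-week restriction on an account. The account name, maintenance window, and policy structure are invented for illustration; they are not defined by the NIST text.

```python
from datetime import time

# Hypothetical policy: one system account restricted to a weekday
# maintenance window. All names and values are illustrative.
ACCOUNT_POLICY = {
    "svc-backup": {
        "type": "system",
        "allowed_days": {0, 1, 2, 3, 4},            # Mon-Fri
        "allowed_hours": (time(1, 0), time(5, 0)),  # 01:00-05:00 window
    },
}

def access_permitted(account: str, weekday: int, now: time) -> bool:
    """Return True only if the account exists and the request falls
    inside its permitted day-of-week and time-of-day window."""
    policy = ACCOUNT_POLICY.get(account)
    if policy is None:
        return False
    start, end = policy["allowed_hours"]
    return weekday in policy["allowed_days"] and start <= now <= end
```

In practice such checks sit alongside, not in place of, the account-type authorizations discussed above.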
Information flow control regulates where information can travel within a system and
between systems (versus who can access the information) and without explicit
regard to subsequent accesses to that information. Flow control restrictions include
the following: keeping export-controlled information from being transmitted in the
clear to the Internet; blocking outside traffic that claims to be from within the
organization; restricting requests to the Internet that are not from the internal web
proxy server; and limiting information transfers between organizations based on
data structures and content. Organizations commonly use information flow control
policies and enforcement mechanisms to control the flow of information between
designated sources and destinations (e.g., networks, individuals, and devices)
within systems and between interconnected systems. Flow control is based on
characteristics of the information or the information path. Enforcement occurs in
boundary protection devices (e.g., gateways, routers, guards, encrypted tunnels,
firewalls) that employ rule sets or establish configuration settings that restrict
system services, provide a packet-filtering capability based on header information,
or message-filtering capability based on message content (e.g., implementing key
word searches or using document characteristics). Organizations also consider the
trustworthiness of filtering and inspection mechanisms (i.e., hardware, firmware,
and software components) that are critical to information flow enforcement.
Transferring information between systems representing different security domains
with different security policies introduces risk that such transfers violate one or
more domain security policies. In such situations, information owners or stewards
provide guidance at designated policy enforcement points between interconnected
systems. Organizations consider mandating specific architectural solutions when
required to enforce specific security policies. Enforcement includes: prohibiting
information transfers between interconnected systems (i.e., allowing access only);
employing hardware mechanisms to enforce one-way information flows; and
implementing trustworthy regrading mechanisms to reassign security attributes and
security labels.
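The header-based rule sets described above can be sketched as a first-match packet filter with a default-deny posture. The addresses, ports, and rules below are illustrative assumptions, modeling the example of restricting Internet-bound requests to the internal web proxy.

```python
import ipaddress

# First match wins; anything unmatched is denied.
# (source network, destination network, dest port, action) - illustrative.
RULES = [
    ("10.0.5.10/32", "0.0.0.0/0", 80, "allow"),  # internal web proxy only
    ("10.0.0.0/8",   "0.0.0.0/0", 80, "deny"),   # block direct web egress
]

def filter_packet(src: str, dst: str, port: int) -> str:
    """Return the action of the first matching rule; default-deny."""
    for rule_src, rule_dst, rule_port, action in RULES:
        if (ipaddress.ip_address(src) in ipaddress.ip_network(rule_src)
                and ipaddress.ip_address(dst) in ipaddress.ip_network(rule_dst)
                and port == rule_port):
            return action
    return "deny"
```

Real boundary devices match on many more header fields (protocol, flags, direction), but the rule-ordering and default-deny logic is the same.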
Separation of duties addresses the potential for abuse of authorized privileges and
helps to reduce the risk of malevolent activity without collusion. Separation of
duties includes dividing mission functions and system support functions among
different individuals or roles; conducting system support functions with different
individuals (e.g., configuration management, quality assurance and testing, system
management, programming, and network security); and ensuring that security
personnel administering access control functions do not also administer audit
functions. Because separation of duty violations can span systems and application
domains, organizations consider the entirety of organizational systems and system
components when developing policy on separation of duties.
Organizations employ the principle of least privilege for specific duties and
authorized accesses for users and processes. The principle of least privilege is
applied with the goal of authorized privileges no higher than necessary to
accomplish required organizational missions or business functions. Organizations
consider the creation of additional processes, roles, and system accounts as
necessary, to achieve least privilege. Organizations also apply least privilege to the
development, implementation, and operation of organizational systems. Security
functions include establishing system accounts, setting events to be logged, setting
intrusion detection parameters, and configuring access authorizations (i.e.,
permissions, privileges). Privileged accounts, including super user accounts, are
typically described as system administrator for various types of commercial off-the-
shelf operating systems. Restricting privileged accounts to specific personnel or
roles prevents day-to-day users from having access to privileged information or
functions. Organizations may differentiate in the application of this requirement
between allowed privileges for local accounts and for domain accounts provided
organizations retain the ability to control system configurations for key security
parameters and as otherwise necessary to sufficiently mitigate risk.
This requirement limits exposure when operating from within privileged accounts or
roles. The inclusion of roles addresses situations where organizations implement
access control policies such as role-based access control and where a change of role
provides the same degree of assurance in the change of access authorizations for
the user and all processes acting on behalf of the user as would be provided by a
change between a privileged and non-privileged account.
Privileged functions include establishing system accounts, performing system
integrity checks, conducting patching operations, or administering cryptographic
key management activities. Non-privileged users are individuals who do not possess
appropriate authorizations. Circumventing intrusion detection and prevention
mechanisms or malicious code protection mechanisms are examples of privileged
functions that require protection from non-privileged users. Note that this
requirement represents a condition to be achieved by the definition of authorized
privileges in 3.1.2. Misuse of privileged functions, either intentionally or
unintentionally by authorized users, or by unauthorized external entities that have
compromised system accounts, is a serious and ongoing concern and can have
significant adverse impacts on organizations. Logging the use of privileged
functions is one way to detect such misuse, and in doing so, help mitigate the risk
from insider threats and the advanced persistent threat.
This requirement applies regardless of whether the logon occurs via a local or
network connection. Due to the potential for denial of service, automatic lockouts
initiated by systems are, in most cases, temporary and automatically release after a
predetermined period established by the organization (i.e., a delay algorithm). If a
delay algorithm is selected, organizations may employ different algorithms for
different system components based on the capabilities of the respective
components. Responses to unsuccessful logon attempts may be implemented at
the operating system and application levels.
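A delay algorithm of the kind described, a temporary lockout that releases automatically after a predetermined period, can be sketched as follows. The attempt threshold and base delay are invented organization-defined values, not NIST-mandated ones.

```python
# Illustrative organization-defined parameters.
MAX_ATTEMPTS = 3        # failures tolerated before lockout begins
BASE_DELAY_SECONDS = 30

def lockout_delay(failed_attempts: int) -> int:
    """Seconds the account stays locked after this many consecutive
    failures; 0 means no lockout yet. The delay doubles with each
    failure past the threshold (a simple delay algorithm)."""
    if failed_attempts < MAX_ATTEMPTS:
        return 0
    return BASE_DELAY_SECONDS * 2 ** (failed_attempts - MAX_ATTEMPTS)
```

Because the lockout releases on its own, repeated bad-password attempts slow an attacker down without letting them deny service to the legitimate user indefinitely.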
System use notifications can be implemented using messages or warning banners
displayed before individuals log in to organizational systems. System use
notifications are used only for access via logon interfaces with human users and are
not required when such human interfaces do not exist. Based on a risk assessment,
organizations consider whether a secondary system use notification is needed to
access applications or other system resources after the initial network logon. Where
necessary, posters or other printed materials may be used in lieu of an automated
system banner. Organizations consult with the Office of General Counsel for legal
review and approval of warning banner content.
Session locks are temporary actions taken when users stop work and move away
from the immediate vicinity of the system but do not want to log out because of the
temporary nature of their absences. Session locks are implemented where session
activities can be determined, typically at the operating system level (but can also
be at the application level). Session locks are not an acceptable substitute for
logging out of the system, for example, if organizations require users to log out at
the end of the workday. Pattern-hiding displays can include static or dynamic
images, for example, patterns used with screen savers, photographic images, solid
colors, clock, battery life indicator, or a blank screen, with the additional caveat that
none of the images convey controlled unclassified information.
This requirement addresses the termination of user-initiated logical sessions in
contrast to the termination of network connections that are associated with
communications sessions (i.e., disconnecting from the network). A logical session
(for local, network, and remote access) is initiated whenever a user (or process
acting on behalf of a user) accesses an organizational system. Such user sessions
can be terminated (and thus terminate user access) without terminating network
sessions. Session termination terminates all processes associated with a user’s
logical session except those processes that are specifically created by the user (i.e.,
session owner) to continue after the session is terminated. Conditions or trigger
events requiring automatic session termination can include organization-defined
periods of user inactivity, targeted responses to certain types of incidents, and
time-of-day restrictions on system use.
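Two of the trigger events named above, user inactivity and time-of-day restrictions, can be sketched together. The 15-minute limit and the 06:00-22:00 usage window are illustrative organization-defined values.

```python
from datetime import datetime

# Illustrative organization-defined limits.
INACTIVITY_LIMIT_SECONDS = 15 * 60
ALLOWED_HOURS = range(6, 22)  # system use permitted 06:00-21:59

def should_terminate(last_activity: datetime, now: datetime) -> bool:
    """True if the logical session must be automatically terminated,
    either for inactivity or for violating the time-of-day window."""
    idle = (now - last_activity).total_seconds()
    return idle > INACTIVITY_LIMIT_SECONDS or now.hour not in ALLOWED_HOURS
```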
Remote access is access to organizational systems by users (or processes acting on
behalf of users) communicating through external networks (e.g., the Internet).
Remote access methods include dial-up, broadband, and wireless. Organizations
often employ encrypted virtual private networks (VPNs) to enhance confidentiality
over remote connections. The use of encrypted VPNs does not make the access
non-remote; however, the use of VPNs, when adequately provisioned with
appropriate control (e.g., employing encryption techniques for confidentiality
protection), may provide sufficient assurance to the organization that it can
effectively treat such connections as internal networks. VPNs with encrypted
tunnels can affect the capability to adequately monitor network communications
traffic for malicious code. Automated monitoring and control of remote access
sessions allows organizations to detect cyber-attacks and help to ensure ongoing
compliance with remote access policies by auditing connection activities of remote
users on a variety of system components (e.g., servers, workstations, notebook
computers, smart phones, and tablets). [SP 800-46], [SP 800-77], and [SP 800-113]
provide guidance on secure remote access and virtual private networks.
Cryptographic standards include FIPS-validated cryptography and NSA-approved
cryptography. See [NIST CRYPTO]; [NIST CAVP]; [NIST CMVP]; National Security
Agency Cryptographic Standards.
Routing remote access through managed access control points enhances explicit,
organizational control over such connections, reducing the susceptibility to
unauthorized access to organizational systems resulting in the unauthorized
disclosure of CUI.
A privileged command is a human-initiated (interactively or via a process operating
on behalf of the human) command executed on a system involving the control,
monitoring, or administration of the system including security functions and
associated security-relevant information. Security-relevant information is any
information within the system that can potentially impact the operation of security
functions or the provision of security services in a manner that could result in failure
to enforce the system security policy or maintain isolation of code and data.
Privileged commands give individuals the ability to execute sensitive, security-
critical, or security-relevant system functions. Controlling such access from remote
locations helps to ensure that unauthorized individuals are not able to execute such
commands freely with the potential to do serious or catastrophic damage to
organizational systems. Note that the ability to affect the integrity of the system is
considered security-relevant as that could enable the means to by-pass security
functions although not directly impacting the function itself.
Establishing usage restrictions and configuration/connection requirements for
wireless access to the system provides criteria for organizations to support wireless
access authorization decisions. Such restrictions and requirements reduce the
susceptibility to unauthorized access to the system through wireless technologies.
Wireless networks use authentication protocols which provide credential protection
and mutual authentication. [SP 800-97] provides guidance on secure wireless
networks.
Organizations authenticate individuals and devices to help protect wireless access
to the system. Special attention is given to the wide variety of devices that are part
of the Internet of Things with potential wireless access to organizational systems.
See [NIST CRYPTO].
A mobile device
is a computing device that has a small form factor such that it can
easily be carried by a single individual; is designed to operate without a physical
connection (e.g., wirelessly transmit or receive information); possesses local, non-
removable or removable data storage; and includes a self-contained power source.
Mobile devices may also include voice communication capabilities, on-board sensors
that allow the device to capture information, or built-in features for synchronizing
local data with remote locations. Examples of mobile devices include smart phones,
e-readers, and tablets. Due to the large variety of mobile devices with different
technical characteristics and capabilities, organizational restrictions may vary for
the different types of devices. Usage restrictions and implementation guidance for
mobile devices include: device identification and authentication; configuration
management; implementation of mandatory protective software (e.g., malicious
code detection, firewall); scanning devices for malicious code; updating virus
protection software; scanning for critical software updates and patches; conducting
primary operating system (and possibly other resident software) integrity checks;
and disabling unnecessary hardware (e.g., wireless, infrared). The need to provide
adequate security for mobile devices goes beyond this requirement. Many controls
for mobile devices are reflected in other CUI security requirements. [SP 800-124]
provides guidance on mobile device security.
Organizations can employ full-device encryption or container-based encryption to
protect the confidentiality of CUI on mobile devices and computing platforms.
Container-based encryption provides a more fine-grained approach to the
encryption of data and information including encrypting selected data structures
such as files, records, or fields. See [NIST CRYPTO].
[23] Mobile devices and computing platforms include, for example, smartphones
and tablets.
Potential indicators and possible precursors of insider threat include behaviors such
as: inordinate, long-term job dissatisfaction; attempts to gain access to information
that is not required for job performance; unexplained access to financial resources;
bullying or sexual harassment of fellow employees; workplace violence; and other
serious violations of the policies, procedures, directives, rules, or practices of
organizations. Security awareness training includes how to communicate employee
and management concerns regarding potential indicators of insider threat through
appropriate organizational channels in accordance with established organizational
policies and procedures. Organizations may consider tailoring insider threat
awareness topics to the role (e.g., training for managers may be focused on specific
changes in behavior of team members, while training for employees may be
focused on more general observations).
An event is any observable occurrence in a system, which includes unlawful or
unauthorized system activity. Organizations identify event types for which a logging
functionality is needed as those events which are significant and relevant to the
security of systems and the environments in which those systems operate to meet
specific and ongoing auditing needs. Event types can include password changes,
failed logons or failed accesses related to systems, administrative privilege usage,
or third-party credential usage. In determining event types that require logging,
organizations consider the monitoring and auditing appropriate for each of the CUI
security requirements. Monitoring and auditing requirements can be balanced with
other system needs. For example, organizations may determine that systems must
have the capability to log every file access both successful and unsuccessful, but
not activate that capability except for specific circumstances due to the potential
burden on system performance. Audit records can be generated at various levels of
abstraction, including at the packet level as information traverses the network.
Selecting the appropriate level of abstraction is a critical aspect of an audit logging
capability and can facilitate the identification of root causes to problems.
Organizations consider in the definition of event types, the logging necessary to
cover related events such as the steps in distributed, transaction-based processes
(e.g., processes that are distributed across multiple organizations) and actions that
occur in service-oriented or cloud-based architectures. Audit record content that
may be necessary to satisfy this requirement includes time stamps, source and
destination addresses, user or process identifiers, event descriptions, success or fail
indications, filenames involved, and access control or flow control rules invoked.
Event outcomes can include indicators of event success or failure and event-specific
results (e.g., the security state of the system after the event occurred). Detailed
information that organizations may consider in audit records includes full text
recording of privileged commands or the individual identities of group account
users. Organizations consider limiting the additional audit log information to only
that information explicitly needed for specific audit requirements. This facilitates
the use of audit trails and audit logs by not including information that could
potentially be misleading or could make it more difficult to locate information of
interest. Audit logs are reviewed and analyzed as often as needed to provide
important information to organizations to facilitate risk-based decision making.
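The audit record content listed above (time stamps, source and destination addresses, identifiers, event descriptions, success or fail indications) can be sketched as a structured JSON-line record. The field names and values are illustrative, not a prescribed schema.

```python
import json
from datetime import datetime, timezone

def make_audit_record(event: str, user: str, src: str, dst: str,
                      outcome: str) -> str:
    """Serialize one audit record as a JSON line, carrying the content
    elements discussed above. Values here are invented examples."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event,           # event description
        "user": user,             # user or process identifier
        "source": src,            # source address
        "destination": dst,       # destination address
        "outcome": outcome,       # success / failure indication
    }
    return json.dumps(record)
```

One record per line keeps the log easy to ship, filter, and correlate with records from other components.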
This requirement ensures that the contents of the audit record include the
information needed to link the audit event to the actions of an individual to the
extent feasible. Organizations consider logging for traceability including results
from monitoring of account usage, remote access, wireless connectivity, mobile
device connection, communications at system boundaries, configuration settings,
physical access, nonlocal maintenance, use of maintenance tools, temperature and
humidity, equipment delivery and removal, system component inventory, use of
mobile code, and use of Voice over Internet Protocol (VoIP).
The intent of this requirement is to periodically re-evaluate which logged events will
continue to be included in the list of events to be logged. The event types that are
logged by organizations may change over time. Reviewing and updating the set of
logged event types periodically is necessary to ensure that the current set remains
necessary and sufficient.
Audit logging process failures include software and hardware errors, failures in the
audit record capturing mechanisms, and audit record storage capacity being
reached or exceeded. This requirement applies to each audit record data storage
repository (i.e., distinct system component where audit records are stored), the
total audit record storage capacity of organizations (i.e., all audit record data
storage repositories combined), or both.
Correlating audit record review, analysis, and reporting processes helps to ensure
that they do not operate independently, but rather collectively. Regarding the
assessment of a given organizational system, the requirement is agnostic as to
whether this correlation is applied at the system level or at the organization level
across all systems.
Audit record reduction is a process that manipulates collected audit information and
organizes such information in a summary format that is more meaningful to
analysts. Audit record reduction and report generation capabilities do not always
emanate from the same system or organizational entities conducting auditing
activities. Audit record reduction capability can include, for example, modern data
mining techniques with advanced data filters to identify anomalous behavior in
audit records. The report generation capability provided by the system can help
generate customizable reports. Time ordering of audit records can be a significant
issue if the granularity of the time stamp in the record is insufficient.
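Audit record reduction can be sketched as a simple roll-up: collapsing raw records into a per-user summary that is more meaningful to an analyst than the raw stream. The record schema below is an invented example.

```python
from collections import Counter

def failed_logons_by_user(records: list[dict]) -> Counter:
    """Reduce raw audit records to a count of failed logons per user,
    the kind of summary an analyst would review for anomalies."""
    return Counter(r["user"] for r in records
                   if r["event"] == "logon" and r["outcome"] == "failure")
```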
Internal system clocks are used to generate time stamps, which include date and
time. Time is expressed in Coordinated Universal Time (UTC), a modern
continuation of Greenwich Mean Time (GMT), or local time with an offset from UTC.
The granularity of time measurements refers to the degree of synchronization
between system clocks and reference clocks, for example, clocks synchronizing
within hundreds of milliseconds or within tens of milliseconds. Organizations may
define different time granularities for different system components. Time service
can also be critical to other security capabilities such as access control and
identification and authentication, depending on the nature of the mechanisms used
to support those capabilities. This requirement provides uniformity of time stamps
for systems with multiple system clocks and systems connected over a network.
See [IETF 5905].
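The time-stamp convention described above can be illustrated with a short sketch that normalizes an offset-aware local time stamp to UTC, so that records from components in different time zones order consistently.

```python
from datetime import datetime, timedelta, timezone

def to_utc(local: datetime) -> datetime:
    """Normalize an offset-aware local time stamp to UTC."""
    return local.astimezone(timezone.utc)

# Example: 07:00 local at UTC-5 corresponds to 12:00 UTC.
local = datetime(2024, 1, 1, 7, 0, tzinfo=timezone(timedelta(hours=-5)))
```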
Audit information includes all information (e.g., audit records, audit log settings,
and audit reports) needed to successfully audit system activity. Audit logging tools
are those programs and devices used to conduct audit and logging activities. This
requirement focuses on the technical protection of audit information and limits the
ability to access and execute audit logging tools to authorized individuals. Physical
protection of audit information is addressed by media protection and physical and
environmental protection requirements.
Individuals with privileged access to a system and who are also the subject of an
audit by that system, may affect the reliability of audit information by inhibiting
audit logging activities or modifying audit records. This requirement specifies that
privileged access be further defined between audit-related privileges and other
privileges, thus limiting the users with audit-related privileges.
Baseline configurations are documented, formally reviewed, and agreed-upon
specifications for systems or configuration items within those systems. Baseline
configurations serve as a basis for future builds, releases, and changes to systems.
Baseline configurations include information about system components (e.g.,
standard software packages installed on workstations, notebook computers,
servers, network components, or mobile devices; current version numbers and
update and patch information on operating systems and applications; and
configuration settings and parameters), network topology, and the logical
placement of those components within the system architecture. Baseline
configurations of systems also reflect the current enterprise architecture.
Maintaining effective baseline configurations requires creating new baselines as
organizational systems change over time. Baseline configuration maintenance
includes reviewing and updating the baseline configuration when changes are made
based on security risks and deviations from the established baseline configuration.
Organizations can implement centralized system component inventories that
include components from multiple organizational systems. In such situations,
organizations ensure that the resulting inventories include system-specific
information required for proper component accountability (e.g., system association,
system owner). Information deemed necessary for effective accountability of
system components includes hardware inventory specifications, software license
information, software version numbers, component owners, and for networked
components or devices, machine names and network addresses. Inventory
specifications include manufacturer, device type, model, serial number, and
physical location. [SP 800-128] provides guidance on security-focused configuration
management.
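A single component inventory entry carrying the accountability information listed above might be sketched as follows; the required field names and example values are invented for illustration.

```python
def inventory_record(**fields) -> dict:
    """Build one component inventory record, insisting that the
    accountability fields discussed above are all present."""
    required = {"system", "owner", "manufacturer", "device_type",
                "model", "serial_number", "machine_name", "ip_address"}
    missing = required - fields.keys()
    if missing:
        raise ValueError(f"missing inventory fields: {sorted(missing)}")
    return fields
```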
Configuration settings are the set of parameters that can be changed in hardware,
software, or firmware components of the system that affect the security posture or
functionality of the system. Information technology products for which security-
related configuration settings can be defined include mainframe computers,
servers, workstations, input and output devices (e.g., scanners, copiers, and
printers), network components (e.g., firewalls, routers, gateways, voice and data
switches, wireless access points, network appliances, sensors), operating systems,
middleware, and applications. Security parameters are those parameters impacting
the security state of systems including the parameters required to satisfy other
security requirements. Security parameters include: registry settings; account, file,
directory permission settings; and settings for functions, ports, protocols, and
remote connections. Organizations establish organization-wide configuration
settings and subsequently derive specific configuration settings for systems. The
established settings become part of the systems' configuration baseline. Common
secure configurations (also referred to as security configuration checklists,
lockdown and hardening guides, security reference guides, security technical
implementation guides) provide recognized, standardized, and established
benchmarks that stipulate secure configuration settings for specific information
technology platforms/products and instructions for configuring those system
components to meet operational requirements. Common secure configurations can
be developed by a variety of organizations including information technology product
developers, manufacturers, vendors, consortia, academia, industry, federal
agencies, and other organizations in the public and private sectors. [SP 800-70]
and [SP 800-128] provide guidance on security configuration settings.
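Checking a system's current security parameters against the organization-established settings can be sketched as a simple baseline diff; the setting names and values below are illustrative, not drawn from any published checklist.

```python
# Illustrative organization-wide baseline of security parameters.
BASELINE = {
    "password_min_length": 12,
    "remote_root_login": "no",
    "telnet_port_open": False,
}

def deviations(current: dict) -> dict:
    """Return the settings whose current value differs from the
    established baseline (missing settings count as deviations)."""
    return {k: current.get(k) for k, v in BASELINE.items()
            if current.get(k) != v}
```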
Tracking, reviewing, approving/disapproving, and logging changes is called
configuration change control. Configuration change control for organizational
systems involves the systematic proposal, justification, implementation, testing,
review, and disposition of changes to the systems, including system upgrades and
modifications. Configuration change control includes changes to baseline
configurations for components and configuration items of systems, changes to
configuration settings for information technology products (e.g., operating systems,
applications, firewalls, routers, and mobile devices), unscheduled and unauthorized
changes, and changes to remediate vulnerabilities. Processes for managing
configuration changes to systems include Configuration Control Boards or Change
Advisory Boards that review and approve proposed changes to systems. For new
development systems or systems undergoing major upgrades, organizations
consider including representatives from development organizations on the
Configuration Control Boards or Change Advisory Boards. Audit logs of changes
include activities before and after changes are made to organizational systems and
the activities required to implement such changes. [SP 800-128] provides guidance
on configuration change control.
The process used to identify software programs that are not authorized to execute
on systems is commonly referred to as blacklisting. The process used to identify
software programs that are authorized to execute on systems is commonly referred
to as whitelisting. Whitelisting is the stronger of the two policies for restricting
software program execution. In addition to whitelisting, organizations consider
verifying the integrity of whitelisted software programs using, for example,
cryptographic checksums, digital signatures, or hash functions. Verification of
whitelisted software can occur either prior to execution or at system startup. [SP
800-167] provides guidance on application whitelisting.
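Integrity verification of whitelisted software using cryptographic checksums, as described above, can be sketched with a SHA-256 comparison. The whitelist structure and the program path are invented for illustration.

```python
import hashlib

# Hypothetical whitelist: program path -> SHA-256 hex digest recorded
# when the program was approved.
WHITELIST = {}

def is_execution_allowed(path: str, contents: bytes) -> bool:
    """Allow execution only if the program is whitelisted AND its
    current digest matches the recorded one (detecting tampering)."""
    expected = WHITELIST.get(path)
    if expected is None:
        return False
    return hashlib.sha256(contents).hexdigest() == expected
```

The same check can run before each execution or once at system startup, as the text notes.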
Users can install software in organizational systems if provided the necessary
privileges. To maintain control over the software installed, organizations identify
permitted and prohibited actions regarding software installation through policies.
Permitted software installations include updates and security patches to existing
software and applications from organization-approved “app stores.” Prohibited
software installations may include software with unknown or suspect pedigrees or
software that organizations consider potentially malicious. The policies
organizations select governing user-installed software may be organization-
developed or provided by some external entity. Policy enforcement methods
include procedural methods, automated methods, or both.
Common device identifiers include Media Access Control (MAC), Internet Protocol
(IP) addresses, or device-unique token identifiers. Management of individual
identifiers is not applicable to shared system accounts. Typically, individual
identifiers are the user names associated with the system accounts assigned to
those individuals. Organizations may require unique identification of individuals in
group accounts or for detailed accountability of individual activity. In addition, this
requirement addresses individual identifiers that are not necessarily associated with
system accounts. Organizational devices requiring identification may be defined by
type, by device, or by a combination of type/device. [SP 800-63-3] provides
guidance on digital identities.
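Managing device identifiers usually begins with normalizing them so that the same device is not recorded under several spellings. A minimal Python sketch, using only the standard library:

```python
import ipaddress
import re

def canonical_mac(mac):
    """Normalize a MAC address ('AA-BB-CC-DD-EE-FF', 'aabb.ccdd.eeff', ...)
    to lowercase colon-separated form; raise ValueError otherwise."""
    digits = re.sub(r"[^0-9a-fA-F]", "", mac)
    if len(digits) != 12:
        raise ValueError("not a MAC address: %r" % mac)
    return ":".join(digits[i:i + 2] for i in range(0, 12, 2)).lower()

def canonical_ip(addr):
    """Normalize an IPv4/IPv6 address to its canonical textual form."""
    return str(ipaddress.ip_address(addr))
```

With identifiers in one canonical form, an inventory can detect duplicates and tie each device identifier to an accountable owner.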
Individual authenticators include the following: passwords, key cards, cryptographic
devices, and one-time password devices. Initial authenticator content is the actual
content of the authenticator, for example, the initial password. In contrast, the
requirements for authenticator content specify criteria or characteristics, such as a
minimum password length.
Developers ship system components with factory default authentication credentials
to allow for initial installation and configuration. Default authentication credentials
are often well known, easily discoverable, and present a significant security risk.
Systems support authenticator management by organization-defined settings and
restrictions for various authenticator characteristics including minimum password
length, validation time window for time synchronous one-time tokens, and number
of allowed rejections during the verification stage of biometric authentication.
Authenticator management includes issuing and revoking, when no longer needed,
authenticators for temporary access such as that required for remote maintenance.
Device authenticators include certificates and passwords. [SP 800-63-3] provides
guidance on digital identities.
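Organization-defined settings such as minimum password length and rejection of well-known default credentials can be enforced programmatically. The following Python sketch uses illustrative values; SP 800-171 does not prescribe these specific numbers or words.

```python
# Hypothetical organization-defined authenticator policy.
MIN_LENGTH = 12
DEFAULT_CREDENTIALS = {"admin", "password", "changeme", "letmein"}

def password_issues(candidate):
    """Return a list of policy violations for a proposed password."""
    issues = []
    if len(candidate) < MIN_LENGTH:
        issues.append("shorter than the %d-character minimum" % MIN_LENGTH)
    if candidate.lower() in DEFAULT_CREDENTIALS:
        issues.append("matches a well-known default credential")
    return issues
```

An empty result means the candidate satisfies this (minimal) policy; real deployments typically also check breached-password lists and composition rules.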
Multifactor authentication requires the use of two or more different factors to
authenticate. The factors are defined as something you know (e.g., password,
personal identification number [PIN]); something you have (e.g., cryptographic
identification device, token); or something you are (e.g., biometric). Multifactor
authentication solutions that feature physical authenticators include hardware
authenticators providing time-based or challenge-response authenticators and
smart cards. In addition to authenticating users at the system level (i.e., at logon),
organizations may also employ authentication mechanisms at the application level,
when necessary, to provide increased information security. Access to organizational
systems is defined as local access or network access. Local access is any access to
organizational systems by users (or processes acting on behalf of users) where such
access is obtained by direct connections without the use of networks. Network
access is access to systems by users (or processes acting on behalf of users) where
such access is obtained through network connections (i.e., nonlocal accesses).
Remote access is a type of network access that involves communication through
external networks. Encrypted virtual private network connections between
organization-controlled and non-organization-controlled endpoints may be treated as
internal networks with regard to protecting the confidentiality of
information. [SP 800-63-3] provides guidance on digital identities.
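The "something you have" factor is often a time-based one-time password (TOTP) generator. The algorithm (RFC 6238, built on RFC 4226 dynamic truncation) can be sketched in a few lines of standard-library Python; this is an illustration of the mechanism, not a hardened implementation.

```python
import hashlib
import hmac
import struct
import time

def totp(secret, for_time=None, step=30, digits=6):
    """RFC 6238 time-based one-time password using HMAC-SHA-1."""
    if for_time is None:
        for_time = int(time.time())
    counter = for_time // step                     # elapsed 30-second time steps
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

The verifier recomputes the code from the shared secret and the current time; the validation time window mentioned above corresponds to accepting codes from one or two adjacent time steps.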
System media includes digital and non-digital media. Digital media includes
diskettes, magnetic tapes, external and removable hard disk drives, flash drives,
compact disks, and digital video disks. Non-digital media includes paper and
microfilm. Protecting digital media includes limiting access to design specifications
stored on compact disks or flash drives in the media library to the project leader
and any individuals on the development team. Physically controlling system media
includes conducting inventories, maintaining accountability for stored media, and
ensuring procedures are in place to allow individuals to check out and return media
to the media library. Secure storage includes a locked drawer, desk, or cabinet, or a
controlled media library. Access to CUI on system media can be limited by
physically controlling such media, which includes conducting inventories, ensuring
procedures are in place to allow individuals to check out and return media to the
media library, and maintaining accountability for all stored media. [SP 800-111]
provides guidance on storage encryption technologies for end user devices.
Access can be limited by physically controlling system media and secure storage
areas. Physically controlling system media includes conducting inventories,
ensuring procedures are in place to allow individuals to check out and return
system media to the media library, and maintaining accountability for all stored
media. Secure storage includes a locked drawer, desk, or cabinet, or a controlled
media library.
This requirement applies to all system media, digital and non-digital, subject to
disposal or reuse. Examples include: digital media found in workstations, network
components, scanners, copiers, printers, notebook computers, and mobile devices;
and non-digital media such as paper and microfilm. The sanitization process
removes information from the media such that the information cannot be retrieved
or reconstructed. Sanitization techniques, including clearing, purging, cryptographic
erase, and destruction, prevent the disclosure of information to unauthorized
individuals when such media is released for reuse or disposal. Organizations
determine the appropriate sanitization methods, recognizing that destruction may
be necessary when other methods cannot be applied to the media requiring
sanitization. Organizations use discretion on the employment of sanitization
techniques and procedures for media containing information that is in the public
domain or publicly releasable or deemed to have no adverse impact on
organizations or individuals if released for reuse or disposal. Sanitization of non-
digital media includes destruction, removing CUI from documents, or redacting
selected sections or words from a document by obscuring the redacted sections or
words in a manner equivalent in effectiveness to removing the words or sections
from the document. NARA policy and guidance control sanitization processes for
controlled unclassified information. [SP 800-88] provides guidance on media
sanitization.
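As a narrow illustration of the "clearing" technique, a file's contents can be overwritten before deletion. This sketch makes a strong assumption that overwriting in place actually reaches the underlying blocks, which does not hold on SSDs or on copy-on-write and journaling filesystems; method selection remains governed by SP 800-88, and purging or destruction may be required.

```python
import os

def clear_and_delete(path, passes=1):
    """Overwrite a file's contents with random bytes, then delete it.

    Illustrative only: on SSDs and copy-on-write filesystems, in-place
    overwrites do not guarantee the old blocks are unrecoverable.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())   # push the overwrite to stable storage
    os.remove(path)
```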
The term security marking refers to the application or use of human-readable
security attributes. System media includes digital and non-digital media. Marking of
system media reflects applicable federal laws, Executive Orders, directives, policies,
and regulations. See [NARA MARK].
[27] The implementation of this requirement is per marking guidance in [32 CFR
2002] and [NARA CUI]. Standard Form (SF) 902 (approximate size 2.125” x 1.25”)
and SF 903 (approximate size 2.125” x .625”) can be used on media that contains
CUI such as hard drives, or USB devices. Both forms are available from
https://www.gsaadvantage.gov.
Controlled areas are areas or spaces for which organizations provide physical or
procedural controls to meet the requirements established for protecting systems
and information. Controls to maintain accountability for media during transport
include locked containers and cryptography. Cryptographic mechanisms can
provide confidentiality and integrity protections depending upon the mechanisms
used. Activities associated with transport include the actual transport as well as
those activities such as releasing media for transport and ensuring that media
enters the appropriate transport processes. For the actual transport, authorized
transport and courier personnel may include individuals external to the
organization. Maintaining accountability of media during transport includes
restricting transport activities to authorized personnel and tracking and obtaining
explicit records of transport activities as the media moves through the
transportation system to prevent and detect loss, destruction, or tampering.
This requirement applies to portable storage devices (e.g., USB memory sticks,
digital video disks, compact disks, external or removable hard disk drives). See
[NIST CRYPTO]. [SP 800-111] provides guidance on storage encryption technologies
for end user devices.
In contrast to requirement 3.8.1, which restricts user access to media, this
requirement restricts the use of certain types of media on systems, for example,
restricting or prohibiting the use of flash drives or external hard disk drives.
Organizations can employ technical and nontechnical controls (e.g., policies,
procedures, and rules of behavior) to control the use of system media.
Organizations may control the use of portable storage devices, for example, by
using physical cages on workstations to prohibit access to certain external ports, or
disabling or removing the ability to insert, read, or write to such devices.
Organizations may also limit the use of portable storage devices to only approved
devices including devices provided by the organization, devices provided by other
approved organizations, and devices that are not personally owned. Finally,
organizations may control the use of portable storage devices based on the type of
device, prohibiting the use of writeable, portable devices, and implementing this
restriction by disabling or removing the capability to write to such devices.
Requiring identifiable owners (e.g., individuals, organizations, or projects) for
portable storage devices reduces the overall risk of using such technologies by
allowing organizations to assign responsibility and accountability for addressing
known vulnerabilities in the devices (e.g., insertion of malicious code).
Organizations can employ cryptographic mechanisms or alternative physical
controls to protect the confidentiality of backup information at designated storage
locations. Backed-up information containing CUI may include system-level
information and user-level information. System-level information includes system-
state information, operating system software, application software, and licenses.
User-level information includes information other than system-level information.
Physical access devices include keys, locks, combinations, and card readers.
Alternate work sites may include government facilities or the private residences of
employees. Organizations may define different security requirements for specific
alternate work sites or types of sites depending on the work-related activities
conducted at those sites. [SP 800-46] and [SP 800-114] provide guidance on
enterprise and user security when teleworking.
Clearly defined system boundaries are a prerequisite for effective risk assessments.
Such risk assessments consider threats, vulnerabilities, likelihood, and impact to
organizational operations, organizational assets, and individuals based on the
operation and use of organizational systems. Risk assessments also consider risk
from external parties (e.g., service providers, contractors operating systems on
behalf of the organization, individuals accessing organizational systems,
outsourcing entities). Risk assessments, either formal or informal, can be conducted
at the organization level, the mission or business process level, or the system level,
and at any phase in the system development life cycle. [SP 800-30] provides
guidance on conducting risk assessments.
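Informal risk assessments often reduce to a likelihood-by-impact matrix. The following Python sketch uses an illustrative 1-5 scale and thresholds that are not taken from SP 800-30; organizations define their own scales and banding.

```python
def risk_level(likelihood, impact):
    """Map likelihood and impact (each 1-5) to a qualitative risk level.

    The 1-5 scales and the band thresholds are illustrative assumptions.
    """
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be 1-5")
    score = likelihood * impact
    if score >= 15:
        return "high"
    if score >= 6:
        return "moderate"
    return "low"
```

A scoring function like this also supports the remediation prioritization discussed under 3.11.3: higher-rated findings receive remediation effort first.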
Organizations determine the required vulnerability scanning for all system
components, ensuring that potential sources of vulnerabilities such as networked
printers, scanners, and copiers are not overlooked. The vulnerabilities to be
scanned are readily updated as new vulnerabilities are discovered, announced, and
scanning methods developed. This process ensures that potential vulnerabilities in
the system are identified and addressed as quickly as possible. Vulnerability
analyses for custom software applications may require additional approaches such
as static analysis, dynamic analysis, binary analysis, or a hybrid of the three
approaches. Organizations can employ these analysis approaches in a variety of
tools (e.g., static analysis tools, web-based application scanners, binary
analyzers) and in source code reviews. Vulnerability scanning
includes: scanning for patch levels; scanning for functions, ports, protocols, and
services that should not be accessible to users or devices; and scanning for
improperly configured or incorrectly operating information flow control mechanisms.
To facilitate interoperability, organizations consider using products that are
Security Content Automation Protocol (SCAP)-validated, scanning tools that express
vulnerabilities in the Common Vulnerabilities and Exposures (CVE) naming
convention, and that employ the Open Vulnerability Assessment Language (OVAL)
to determine the presence of system vulnerabilities. Sources for vulnerability
information include the Common Weakness Enumeration (CWE) listing and the
National Vulnerability Database (NVD). Security assessments, such as red team
exercises, provide additional sources of potential vulnerabilities for which to scan.
Organizations also consider using scanning tools that express vulnerability impact
by the Common Vulnerability Scoring System (CVSS). In certain situations, the
nature of the vulnerability scanning may be more intrusive or the system
component that is the subject of the scanning may contain highly sensitive
information. Privileged access authorization to selected system components
facilitates thorough vulnerability scanning and protects the sensitive nature of such
scanning. [SP 800-40] provides guidance on vulnerability management.
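One of the scan comparisons described above, finding ports and services that should not be accessible, amounts to diffing scan results against an approved baseline. A minimal Python sketch (host names and port sets here are illustrative):

```python
def unexpected_services(scan_results, approved):
    """Compare scan results ({host: set of open ports}) against an
    approved baseline and report ports that should not be open."""
    findings = {}
    for host, open_ports in scan_results.items():
        extra = set(open_ports) - set(approved.get(host, ()))
        if extra:
            findings[host] = sorted(extra)
    return findings
```

In practice the scan results would come from an SCAP-validated scanner, and each finding would be enriched with CVE identifiers and CVSS scores before prioritization.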
Vulnerabilities discovered, for example, via the scanning conducted in response to
3.11.2, are remediated with consideration of the related assessment of risk. The
consideration of risk influences the prioritization of remediation efforts and the level
of effort to be expended in the remediation for specific vulnerabilities.
Organizations assess security controls in organizational systems and the
environments in which those systems operate as part of the system development
life cycle. Security controls are the safeguards or countermeasures organizations
implement to satisfy security requirements. By assessing the implemented security
controls, organizations determine if the security safeguards or countermeasures are
in place and operating as intended. Security control assessments ensure that
information security is built into organizational systems; identify weaknesses and
deficiencies early in the development process; provide essential information needed
to make risk-based decisions; and ensure compliance to vulnerability mitigation
procedures. Assessments are conducted on the implemented security controls as
documented in system security plans. Security assessment reports document
assessment results in sufficient detail as deemed necessary by organizations, to
determine the accuracy and completeness of the reports and whether the security
controls are implemented correctly, operating as intended, and producing the
desired outcome with respect to meeting security requirements. Security
assessment results are provided to the individuals or roles appropriate for the types
of assessments being conducted. Organizations ensure that security assessment
results are current, relevant to the determination of security control effectiveness,
and obtained with the appropriate level of assessor independence. Organizations
can choose to use other types of assessment activities such as vulnerability
scanning and system monitoring to maintain the security posture of systems during
the system life cycle. [SP 800-53] provides guidance on security and privacy
controls for systems and organizations. [SP 800-53A] provides guidance on
developing security assessment plans and conducting assessments.
The plan of action is a key document in the information security program.
Organizations develop plans of action that describe how any unimplemented
security requirements will be met and how any planned mitigations will be
implemented. Organizations can document the system security plan and plan of
action as separate or combined documents and in any chosen format. Federal
agencies may consider the submitted system security plans and plans of action as
critical inputs to an overall risk management decision to process, store, or transmit
CUI on a system hosted by a nonfederal organization and whether it is advisable to
pursue an agreement or contract with the nonfederal organization. [NIST CUI]
provides supplemental material for Special Publication 800-171 including templates
for plans of action.
Continuous monitoring programs facilitate ongoing awareness of threats,
vulnerabilities, and information security to support organizational risk management
decisions. The terms continuous and ongoing imply that organizations assess and
analyze security controls and information security-related risks at a frequency
sufficient to support risk-based decisions. The results of continuous monitoring
programs generate appropriate risk response actions by organizations. Providing
access to security information on a continuing basis through reports or dashboards
gives organizational officials the capability to make effective and timely risk
management decisions. Automation supports more frequent updates to hardware,
software, firmware inventories, and other system information. Effectiveness is
further enhanced when continuous monitoring outputs are formatted to provide
information that is specific, measurable, actionable, relevant, and timely. Monitoring
requirements, including the need for specific monitoring, may also be referenced in
other requirements. [SP 800-137] provides guidance on continuous monitoring.
System security plans relate security requirements to a set of security controls.
System security plans also describe, at a high level, how the security controls meet
those security requirements, but do not provide detailed, technical descriptions of
the design or implementation of the controls. System security plans contain
sufficient information to enable a design and implementation that is unambiguously
compliant with the intent of the plans and subsequent determinations of risk if the
plan is implemented as intended. Security plans need not be single documents; the
plans can be a collection of various documents including documents that already
exist. Effective security plans make extensive use of references to policies,
procedures, and additional documents (e.g., design and implementation
specifications) where more detailed information can be obtained. This reduces the
documentation requirements associated with security programs and maintains
security-related information in other established management/operational areas
related to enterprise architecture, system development life cycle, systems
engineering, and acquisition. Federal agencies may consider the submitted system
security plans and plans of action as critical inputs to an overall risk management
decision to process, store, or transmit CUI on a system hosted by a nonfederal
organization and whether it is advisable to pursue an agreement or contract with
the nonfederal organization. [SP 800-18] provides guidance on developing security
plans. [NIST CUI] provides supplemental material for Special Publication 800-171
including templates for system security plans.
[28] There is no prescribed format or specified level of detail for system security
plans. However, organizations ensure that the required information in 3.12.4 is
conveyed in those plans.
Organizations apply systems security engineering principles to new development
systems or systems undergoing major upgrades. For legacy systems, organizations
apply systems security engineering principles to system upgrades and modifications
to the extent feasible, given the current state of hardware, software, and firmware
components within those systems. The application of systems security engineering
concepts and principles helps to develop trustworthy, secure, and resilient systems
and system components and reduce the susceptibility of organizations to
disruptions, hazards, and threats. Examples of these concepts and principles
include developing layered protections; establishing security policies, architecture,
and controls as the foundation for design; incorporating security requirements into
the system development life cycle; delineating physical and logical security
boundaries; ensuring that developers are trained on how to build secure software;
and performing threat modeling to identify use cases, threat agents, attack vectors
and patterns, design patterns, and compensating controls needed to mitigate risk.
Organizations that apply security engineering concepts and principles can facilitate
the development of trustworthy, secure systems, system components, and system
services; reduce risk to acceptable levels; and make informed risk-management
decisions. [SP 800-160-1] provides guidance on systems security engineering.
[29] Dedicated video conferencing systems, which rely on one of the participants
calling or connecting to the other party to activate the video conference, are
excluded.
Mobile code technologies include Java, JavaScript, ActiveX, Postscript, PDF, Flash
animations, and VBScript. Decisions regarding the use of mobile code in
organizational systems are based on the potential for the code to cause damage to
the systems if used maliciously. Usage restrictions and implementation guidance
apply to the selection and use of mobile code installed on servers and mobile code
downloaded and executed on individual workstations, notebook computers, and
devices (e.g., smart phones). Mobile code policy and procedures address controlling
or preventing the development, acquisition, or introduction of unacceptable mobile
code in systems, including requiring mobile code to be digitally signed by a trusted
source. [SP 800-28] provides guidance on mobile code.
VoIP has different requirements, features, functionality, availability, and service
limitations when compared with the Plain Old Telephone Service (POTS) (i.e., the
standard telephone service). In contrast, other telephone services are based on
high-speed, digital communications lines, such as Integrated Services Digital
Network (ISDN) and Fiber Distributed Data Interface (FDDI). The main distinctions
between POTS and non-POTS services are speed and bandwidth. To address the
threats associated with VoIP, usage restrictions and implementation guidelines are
based on the potential for the VoIP technology to cause damage to the system if it
is used maliciously. Threats to VoIP are similar to those inherent with any Internet-
based application. [SP 800-58] provides guidance on Voice Over IP Systems.
Periodic scans of organizational systems and real-time scans of files from external
sources can detect malicious code. Malicious code can be encoded in various
formats (e.g., UUENCODE, Unicode), contained within compressed or hidden files, or
hidden in files using techniques such as steganography. Malicious code can be
inserted into systems in a variety of ways including web accesses, electronic mail,
electronic mail attachments, and portable storage devices. Malicious code insertions
occur through the exploitation of system vulnerabilities.
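A very simple form of the scanning described above is signature matching: hashing files and comparing the digests against a list of known-malicious hashes. The sketch below is Python with only a placeholder signature set; real malicious code protection relies on vendor-maintained signature databases, heuristics, and behavioral detection, not a static hash list.

```python
import hashlib
from pathlib import Path

# Known-malicious SHA-256 digests. The EICAR test file digest is the
# only real entry; deployments would load a maintained database.
KNOWN_BAD = {
    "275a021bbfb6489e54d471899f7db9d1663fc695ec2fe2a2c4538aabf651fd0f",
}

def scan_tree(root):
    """Return paths under root whose SHA-256 digest is known-malicious."""
    hits = []
    for path in Path(root).rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            if digest in KNOWN_BAD:
                hits.append(str(path))
    return hits
```

Note that hashing alone misses compressed, encoded, or steganographically hidden payloads, which is why the discussion above pairs periodic scans with real-time scanning of files from external sources.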
System monitoring includes external and internal monitoring. External monitoring
includes the observation of events occurring at the system boundary (i.e., part of
perimeter defense and boundary protection). Internal monitoring includes the
observation of events occurring within the system. Organizations can monitor
systems, for example, by observing audit record activities in real time or by
observing other system aspects such as access patterns, characteristics of access,
and other actions. The monitoring objectives may guide determination of the
events. System monitoring capability is achieved through a variety of tools and
techniques (e.g., intrusion detection systems, intrusion prevention systems,
malicious code protection software, scanning tools, audit record monitoring
software, network monitoring software). Strategic locations for monitoring devices
include selected perimeter locations and near server farms supporting critical
applications, with such devices being employed at managed system interfaces. The
granularity of monitoring information collected is based on organizational
monitoring objectives and the capability of systems to support such objectives.
System monitoring is an integral part of continuous monitoring and incident
response programs. Output from system monitoring serves as input to continuous
monitoring and incident response programs. A network connection is any
connection with a device that communicates through a network (e.g., local area
network, Internet). A remote connection is any connection with a device
communicating through an external network (e.g., the Internet). Local, network, and
remote connections can be either wired or wireless. Unusual or unauthorized
activities or conditions related to inbound/outbound communications traffic include
internal traffic that indicates the presence of malicious code in systems or
propagating among system components, the unauthorized exporting of information,
or signaling to external systems. Evidence of malicious code is used to identify
potentially compromised systems or system components. System monitoring
requirements, including the need for specific types of system monitoring, may be
referenced in other requirements. [SP 800-94] provides guidance on intrusion
detection and prevention systems.
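One monitoring objective named above, detecting signaling to external systems, can be approximated by counting outbound connections to destinations outside an allowlist. The Python sketch below assumes a stream of destination addresses already extracted from audit records; the threshold and addresses are illustrative.

```python
from collections import Counter

def flag_beaconing(destinations, allowlist, threshold=10):
    """Flag destination hosts in outbound connection events that are not
    on the allowlist and recur at least `threshold` times -- a crude
    indicator of malware signaling to an external system."""
    counts = Counter(dst for dst in destinations if dst not in allowlist)
    return sorted(host for host, n in counts.items() if n >= threshold)
```

Findings from such a check feed the incident response program, consistent with the role of monitoring output described above.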
System monitoring includes external and internal monitoring. System monitoring
can detect unauthorized use of organizational systems. System monitoring is an
integral part of continuous monitoring and incident response programs. Monitoring
is achieved through a variety of tools and techniques (e.g., intrusion detection
systems, intrusion prevention systems, malicious code protection software,
scanning tools, audit record monitoring software, network monitoring software).
Output from system monitoring serves as input to continuous monitoring and
incident response programs. Unusual/unauthorized activities or conditions related
to inbound and outbound communications traffic include internal traffic that
indicates the presence of malicious code in systems or propagating among system
components, the unauthorized exporting of information, or signaling to external
systems. Evidence of malicious code is used to identify potentially compromised
systems or system components. System monitoring requirements, including the
need for specific types of system monitoring, may be referenced in other
requirements. [SP 800-94] provides guidance on intrusion detection and prevention
systems.
DIGITAL SECURITY ASSESSMENT CHECKLIST
01. Use a custom administrator account and disable the default “admin” and
“guest” accounts
02. Enable 2-step verification
03. Change the system default ports, e.g. port 8000/8001 for the management
interface if you use Synology Router Manager (SRM), to new custom ports
04. Enable IP Auto Block against brute-force attacks
05. Enable HTTPS for services running on SRM with a valid SSL certificate
06. Enable email, SMS or push notifications to stay on top of critical events
07. Enable automatic update for the router’s firmware and all built-in security
databases
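The IP auto-block feature in item 04 works by counting failed logins per source address within a sliding window. A Python sketch of the idea follows; the thresholds are examples, not Synology's defaults, and this is a model of the mechanism rather than SRM's implementation.

```python
from collections import defaultdict, deque

class AutoBlocker:
    """Block a source IP after too many failed logins in a time window.

    max_failures and window_seconds are illustrative defaults.
    """

    def __init__(self, max_failures=5, window_seconds=300):
        self.max_failures = max_failures
        self.window = window_seconds
        self.failures = defaultdict(deque)   # ip -> timestamps of failures
        self.blocked = set()

    def record_failure(self, ip, now):
        q = self.failures[ip]
        q.append(now)
        while q and now - q[0] > self.window:  # drop entries outside window
            q.popleft()
        if len(q) >= self.max_failures:
            self.blocked.add(ip)

    def is_blocked(self, ip):
        return ip in self.blocked
```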
NAS protection
01. Use a custom administrator account and disable the default “admin” and
“guest” accounts
02. Enable 2-step verification
03. Apply password strength rules to all your users
04. Restrict users’ access privileges to only the shared folders and services they
need
05. Change the system default ports, e.g. port 5000/5001 for the DSM
management interface, to new custom ports
06. If port forwarding is enabled for your NAS, use custom public ports on the
router instead of well-known ports (e.g. 5000/5001)
Endpoint protection
01. Keep your operating system up-to-date
02. Run a reliable antivirus software and regularly conduct full scans
Endpoint protection
01. Use a strong password
02. Block devices (e.g. IP cameras, printers, phones, etc.) from
accessing the internet unless the device requires communication
with the server in order to function
Data backup
03. Enable Hyper Backup to back up shared folders, LUNs and system/package
configurations
04. Configure an alert threshold in Hyper Backup for file changes between two
backup versions, so that it automatically notifies you of abnormal behavior
and prevents all intact versions from being silently overwritten