
Insider Threats

An insider threat means that someone who has approved access to your systems, network, and data (usually an employee or consultant) negatively affects one or more of the CIA aspects of your systems, data, and/or network. This can be malicious (on purpose) or accidental. Here are some examples of malicious threats and the parts of the CIA Triad they affect:

■ An employee downloading intellectual property onto a portable drive, leaving the building, and then selling the information to your competitors (confidentiality)
■ An employee deleting a database and its backup on their last day of work because they are angry that they were dismissed (availability)
■ An employee programming a back door into a system so they can steal from your company (integrity and confidentiality)
■ An employee downloading sensitive files from another employee's computer and using them for blackmail (confidentiality)
■ An employee accidentally deleting files, then changing the logs to cover their mistake (integrity and availability)
■ An employee not reporting a vulnerability to management in order to avoid the work of fixing it (potentially all three, depending upon the type of vulnerability)

Here are some examples of accidental threats and the parts of the CIA Triad they affect:

■ Employees using software improperly, causing it to fall into an unknown state (potentially all three)
■ An employee accidentally deleting valuable data, files, or even entire systems (availability)
■ An employee accidentally misconfiguring software, the network, or other software in a way that introduces security vulnerabilities (potentially all three)
■ An inexperienced employee pointing a web proxy/dynamic application security testing (DAST) tool at one of your internal applications, crashing the application (availability) or polluting your database (integrity)

We will cover how to avoid this in later chapters to ensure all of your security testing is performed safely.

Requirements Checklist

Below is a checklist that you can use for all your web application projects. All of these requirements can apply to any web application, and I suggest you include them all as a minimum, plus add your own that fit your unique business needs.

■ Encrypt all data at rest (while in the database).
■ Encrypt all data in transit (on its way to and from the user, the database, an API, etc.).
■ Trust no one: validate (and sanitize if special circumstances apply) all data, even from your own database.
■ Encode (and escape if need be) all output.
■ Scan all libraries and third-party components for known vulnerabilities before use, and regularly after use (new vulnerabilities and versions are released all the time).
■ Use all applicable security headers.
■ Use appropriate secure cookie settings.
■ Classify and label all data that your application will store, collect, or create.
■ Hash and salt all user passwords. Make the salt at least 28 characters.
■ Store all application secrets in a secret store.
■ Ensure all accounts used within the application are service accounts (not a human being's account).
■ Have all people on your team use password managers and never reuse passwords.
■ Turn on MFA for all important accounts.
■ Do not force password changes on a schedule, but only after a breach or suspicious activity.
■ Only allow public-facing (internet) sites to be accessible via HTTPS. Redirect from HTTP to HTTPS. Ideally this would be applied to both internal and external applications.
■ Ensure you are using the latest version of TLS for encryption (currently 1.3).
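The TLS requirement above can be enforced in client code as well as on the server. A minimal sketch using Python's standard `ssl` module (one reasonable approach, not the only one):

```python
import ssl

def strict_tls_context() -> ssl.SSLContext:
    """Build a client-side TLS context that refuses anything below TLS 1.3.

    create_default_context() already enables certificate verification and
    hostname checking; we only tighten the minimum protocol version.
    """
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3  # reject TLS 1.2 and older
    return ctx

# Usage: pass the context to http.client, urllib, or a wrapped socket, e.g.
#   urllib.request.urlopen(url, context=strict_tls_context())
```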

■ Never hard code anything. Ever.
■ Never put sensitive information in comments. This includes connection strings and passwords; those belong in a secret store.
■ Use the security features within your framework; for instance, cryptography/encryption, session management features, or input sanitization functions. Never write your own if your framework provides them.
■ Use only the latest version (or the one before) of your framework and keep it up to date. Technical debt = security debt.
■ If performing a file upload, ensure you are following the advice from OWASP for this highly risky activity. This includes scanning all uploaded files with a scanner such as AssemblyLine, available for free from the Communications Security Establishment of Canada (CSE).
■ Ensure all errors are logged (but do not log sensitive information), and if any security errors happen, trigger an alert.
■ Ensure all input validation (and sanitization) is performed server side, using an approved list or accepted list (not a block list) approach.
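The accepted-list validation requirement can be sketched as follows; the field names and patterns here are illustrative assumptions, not a prescription:

```python
import re

# Accepted-list (allow-list) patterns: input must match exactly or it is
# rejected. These field names and formats are examples only; define your
# own for each project requirement.
ACCEPTED = {
    "username": re.compile(r"^[A-Za-z0-9_]{3,32}$"),
    "invoice_id": re.compile(r"^INV-[0-9]{6}$"),
}

def validate(field: str, value: str) -> bool:
    """Server-side validation: unknown fields and non-matching values both fail."""
    pattern = ACCEPTED.get(field)
    return bool(pattern and pattern.fullmatch(value))
```

Note the default is rejection: anything not explicitly permitted fails, which is the opposite of a block list that tries to enumerate bad input.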
■ Ensure security testing is performed on your application before it is released (more on this in later chapters).
■ Perform threat modeling on your application before it is released. Learn more in Chapter 3 in the "Threat Modeling" section.
■ Perform code review (specifically of security functions) on your application before it is released.
■ Ensure the application catches all errors and fails safe or fails closed (never failing into an unknown state).
■ Ensure all errors provide generic information to the user, never information from a stack trace, a failed query, or other technically specific details.
■ Define specifics on role-based access in the project requirements.
■ Ensure specifics on authentication methods and identity systems are defined in the project requirements.
■ Only use parameterized queries, never inline SQL/NoSQL.
■ Do not pass variables of any importance in URL parameters.
■ Ensure the application enforces the security principle of least privilege, especially in regard to accessing the database and APIs.
■ Minimize your attack surface whenever possible.
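The parameterized-query requirement, sketched with Python's built-in sqlite3 module (the table and column names are illustrative):

```python
import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    """Look up a user with a parameterized query.

    The ? placeholder keeps user input as data, so a value such as
    "x' OR '1'='1" cannot change the structure of the statement.
    """
    cur = conn.execute(
        "SELECT id, username FROM users WHERE username = ?",
        (username,),
    )
    return cur.fetchone()

# Never build the statement by string concatenation:
#   conn.execute("... WHERE username = '" + username + "'")  # injectable
```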
■ Allow users to cut and paste into the password field, which allows for the use of password managers. Disable password autocomplete features in browsers, to ensure users do not save their passwords into the browser.
■ Disable caching on pages that contain sensitive information. While the Cache-Control HTTP header is not technically a security header, it can be used to enforce this requirement.
■ Ensure passwords for your application's users are long, but not necessarily complex. The longer the better; encourage the use of passphrases.
■ Do not force users to change their passwords after a certain amount of time, unless a breach is suspected.
■ Verify that new users' passwords have not previously been in a breach by using a service designed for such a task.

Depending upon what your application does, you may want to add more requirements or remove some. The point of this chapter is to get you thinking about security while you are writing your project requirements. If developers know from the beginning that they need to adhere to these requirements, you are already on your way to creating more secure software.

Example #1

Bob has worked in many IT shops during his time in the Canadian government. Some of them were quite modern, while others seemed to lag behind, collecting more and more technical debt as time went on. When he started his first project management position, he worked somewhere that used a programming language he had never heard of; even when he searched for it online he found almost nothing. They also used .NET, and the shop was split between the two languages. When a project came up for a new software product, Bob had assumed that they would use the newest .NET Framework version, and either VB.NET or C# as the programming language. When he spoke to the tech lead for the project, he was informed they were going to use the strange, ancient programming language because they "need to make sure those programmers had work to do." He was told that some of the programmers were insecure in their jobs because they didn't know .NET. Bob was shocked. Why would you intentionally create new technical debt? That language wasn't even supported anymore. Bob spoke to upper management and came up with a compromise: they would add training for the programmers on that project to learn .NET and pair them with other programmers who already knew it, to ensure they didn't make any serious mistakes during the project. This way he updated his team's skills and got his project done, with no new technical debt! The programmers were thrilled to find out they would have paid training and on-the-job mentoring. It was a bit more expensive up front, but in the long run it was a great investment in their employees. Bob was particularly proud of how the project worked out.

Example #3

Bob briefly worked on a top-secret project, leading a team that was building a tool for in-house analysts to dissect malware. The team wanted to write it in C, rather than Rust. Bob had heard that Rust was more secure because it is memory safe, and he argued they should use Rust instead. The team explained to him that as part of their "Buy, Borrow, Build" new-tech mantra they were repurposing a tool from another government with which they had good relations. That tool was already written in C; they would have to start again if their tool were in Rust. They also explained that since it would only be used in-house, by their own experts, it would be safe from external attackers. Lastly, they explained that they had no one on staff with any Rust experience, so training would be required. After weighing all of these factors, Bob agreed that they should proceed with C instead of Rust and documented the decision as part of the project design.
Application Security--The Front End

Chapter 10: Application Security Fundamentals

Coding Standards
The Software Development Process
Models and Selection
Cohesion and Coupling
Development, Test, and Production
Client and Server
Side Effects of a Bad Security in Software
Fixing the SQL Injection Attacks
Evaluate User Input
Do Back-End Database Checks
Change Management—Speaking the Same Language
Secure Logging In to Applications, Access to Users
Summary
Chapter 10 Questions
Answer to Chapter 10 Questions

Chapter 11: The Unseen Back End

Back-End DB Connections in Java/Tomcat


Connection Strings and Passwords in Code
Stored Procedures and Functions
File Encryption, Types, and Association
Implementing Public Key Infrastructure and Smart Card
Examples of Key Pairs on Java and Linux
Symmetric Encryption
Asymmetric Encryption
Vulnerabilities, Threats, and Web Security
Attack Types and Mitigations
Summary
Chapter 11 Questions
Answers to Chapter 11 Questions

Excerpt From
Database and Application Security: A Practitioner’s Guide
R. Sarma Danturthi
This material may be protected by copyright.

Chapter 12: Securing Software--In-House and Vendor


Internal Development Versus Vendors

Vendor or COTS Software
Action Plan
In-House Software Development
Initial Considerations for In-House Software
Code Security Check
Fixing the Final Product—SAST Tools
Fine-tuning the Product—Testing and Release
Patches and Updates
Product Retirement/Decommissioning
Summary
Chapter 12 Questions
Answers to Chapter 12 Questions

Application security (AppSec) encompasses the practices and technologies used to
protect applications from threats throughout their lifecycle, ensuring the confidentiality,
integrity, and availability of application data and code. This proactive approach is
essential in today's digital landscape, where applications are frequent targets for
cyberattacks.

Key Concepts of Application Security


1. Definition and Importance
Application security involves implementing measures to safeguard applications against
various threats, including unauthorized access, data breaches, and exploitation of
vulnerabilities. It is crucial for organizations handling sensitive information, as security
breaches can lead to significant financial losses, legal repercussions, and damage to
reputation.
2. Lifecycle Integration
Application security should be integrated into every phase of the software development
lifecycle (SDLC), from design through deployment and maintenance. This integration
helps identify and mitigate vulnerabilities early in the development process, reducing the
risk of exploitation in production environments.

Common Threats
Applications face numerous security threats, including:
● Injection Attacks: Exploiting vulnerabilities to inject malicious code.
● Cross-Site Scripting (XSS): Allowing attackers to execute scripts in users' browsers.
● SQL Injection: Manipulating databases through insecure input fields.
● Misconfigurations: Errors in application setup that expose vulnerabilities.
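Cross-site scripting from the list above is blunted by encoding output before it reaches the browser. Python's standard library illustrates the idea (template engines usually perform this encoding automatically):

```python
import html

def render_comment(user_input: str) -> str:
    """Encode user-supplied text before placing it in an HTML context,
    so a script payload renders as inert text instead of executing."""
    return "<p>" + html.escape(user_input, quote=True) + "</p>"
```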

Security Measures and Best Practices


To effectively protect applications, organizations should adopt a multi-layered security
strategy that includes:
● Authentication Controls: Implementing strong authentication mechanisms, such as multi-factor authentication (MFA), to verify user identities.
● Access Controls: Limiting user permissions based on roles to prevent unauthorized access to sensitive data.
● Encryption: Using encryption techniques for data at rest and in transit to protect sensitive information from unauthorized access.
● Regular Security Testing: Conducting static application security testing (SAST), dynamic application security testing (DAST), and penetration testing to identify vulnerabilities before they can be exploited.
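The MFA item above is often implemented with time-based one-time passwords. A minimal RFC 6238 sketch using only the standard library; a production deployment should use a maintained library rather than this hand-rolled version:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HOTP driven by a time-derived counter, so the code
    changes every `step` seconds."""
    return hotp(secret, unix_time // step, digits)
```

The server stores the shared secret and compares the code the user submits against `totp(secret, time.time())`, usually allowing one step of clock drift in either direction.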

Application Security Testing Tools


Several tools play a critical role in application security:
● Static Application Security Testing (SAST): Analyzes source code for vulnerabilities without executing the program.
● Dynamic Application Security Testing (DAST): Tests running applications for exploitable vulnerabilities by simulating attacks.
● Interactive Application Security Testing (IAST): Combines SAST and DAST techniques for real-time analysis during application execution.
● Web Application Firewalls (WAF): Monitor and filter HTTP traffic between web applications and the internet to block common attacks.
