Here are detailed answers to questions based on the provided slides:
From lec2.pptx (Data Outsourcing, AES, RSA)
    1. What is data outsourcing in cloud computing?
       Data outsourcing in cloud computing refers to the practice of entrusting data storage,
       management, and processing tasks to a third-party cloud service provider. This approach
       enables organizations to benefit from cloud advantages such as scalability, cost-effectiveness,
       and operational efficiency without investing in and maintaining their own infrastructure. The
       cloud provider assumes responsibility for running and maintaining managed services,
       guaranteeing availability, security, regular updates, and technical support to clients. Clients
       typically pay a subscription fee for access to these services. While data outsourcing provides
       many operational benefits, it raises potential concerns regarding data security, privacy, and
       integrity because data is stored outside the organization's direct control. Therefore,
       organizations must implement strong security measures and carefully assess the cloud
       provider’s capabilities before outsourcing. Additionally, cloud migration can be conducted
       without disrupting daily business activities by carefully planning and implementing strategies
       tailored to the organization's needs. Overall, data outsourcing offers a powerful means for
       organizations to focus on core business functions while leveraging advanced cloud
       technologies.
       Slide No: 1, Page No: 1
    2. Describe the main steps involved in AES encryption.
       AES encryption works by processing data in fixed-size blocks of 128 bits (16 bytes) through
       multiple rounds, with the number of rounds depending on the key length: 10 rounds for 128-
       bit keys, 12 rounds for 192-bit keys, and 14 rounds for 256-bit keys. Each round comprises
       four main operations. First, the SubBytes step substitutes each byte of the input block
       independently using a substitution box (S-box), which is designed to provide non-linearity in
       the cipher. Second, the ShiftRows step cyclically shifts bytes in each row of the 4x4 byte grid:
       the first row remains unchanged, the second shifts one byte to the left, the third shifts two
       bytes, and the fourth row shifts three bytes. Third, the MixColumns step mixes each column of
       the matrix through matrix multiplication with a fixed polynomial matrix, diffusing the bytes
       across the columns; this step is skipped in the last round. Finally, AddRoundKey performs a
       bitwise XOR between the data block and a round key derived from the initial cipher key
       through a key schedule algorithm. These steps repeat for all rounds, ensuring high diffusion
       and confusion of the input data, securing it against cryptanalytic attacks. The result is a
       ciphertext of the same size as the block, transforming the plaintext into a secure format that is
       computationally infeasible to reverse without the key.
       Slide No: 4, Page No: 5
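       As a concrete illustration, the following Python sketch implements just the ShiftRows
       step on a 4x4 byte grid (shown row-major here for readability; this is an illustrative
       simplification, not a full AES implementation):

           def shift_rows(state):
               # Rotate row r of the 4x4 state left by r bytes:
               # row 0 unchanged, row 1 by one, row 2 by two, row 3 by three.
               return [row[r:] + row[:r] for r, row in enumerate(state)]

           state = [
               [0x00, 0x01, 0x02, 0x03],   # row 0: unchanged
               [0x10, 0x11, 0x12, 0x13],   # row 1: shifted left by 1
               [0x20, 0x21, 0x22, 0x23],   # row 2: shifted left by 2
               [0x30, 0x31, 0x32, 0x33],   # row 3: shifted left by 3
           ]
           print(shift_rows(state))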
    3. How are round keys generated in AES encryption?
       The generation of round keys in AES encryption is accomplished by a Key Schedule
       algorithm. This algorithm starts with the initial cipher key provided for encryption and
       expands it to produce multiple different round keys, each used in a specific round of the
       encryption process. The number of round keys generated is one more than the number of
       rounds, because an initial AddRoundKey is applied before the first round: 11 keys for 10
       rounds (128-bit key), 13 keys for 12 rounds (192-bit key), and 15 keys for 14 rounds
       (256-bit key). The expansion process involves operations such as byte
   substitution, rotation, and XOR with round constants to ensure the keys used in each round
   differ significantly from one another, which enhances the security of the cipher. By using
   different round keys in each step, the encryption process becomes resistant to key recovery
   attacks that exploit similarities in the key schedule. The round keys are arranged as 16-byte
   blocks to match the data block size and are XORed in the AddRoundKey step during
   encryption. This systematic derivation of keys minimizes key management complexity while
   maintaining robust security properties.
   Slide No: 3, Page No: 4
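       A quick Python sketch of the arithmetic above (the full key schedule with its byte
       substitutions, rotations, and round constants is omitted; this only tabulates how much
       round-key material each key size produces):

           for key_bits, rounds in [(128, 10), (192, 12), (256, 14)]:
               round_keys = rounds + 1   # one extra key for the initial AddRoundKey
               print(f"{key_bits}-bit key: {rounds} rounds, {round_keys} round keys, "
                     f"{round_keys * 16} bytes of expanded key material")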
4. Explain the RSA key generation process.
    RSA key generation is a foundational step in asymmetric cryptography where two large prime
    numbers, p and q, are chosen secretly by the key generator. These primes are multiplied
    to produce n = p × q, which forms part of both the public and private keys.
    Euler's totient function Φ(n) = (p − 1)(q − 1) is then
    computed, which is essential for determining the key exponents. Next, an encryption exponent
    e is selected such that 1 < e < Φ(n) and e is coprime with
    Φ(n), meaning gcd(e, Φ(n)) = 1. After that, the corresponding
    decryption exponent d is calculated as the modular multiplicative inverse of e modulo
    Φ(n), which means d × e ≡ 1 (mod Φ(n)). The public key consists of the pair (n, e) and is made
    publicly available, while the private key (n, d) must be kept secret by the owner.
    RSA security depends heavily on the difficulty of factoring n back into p and q, which
    is computationally infeasible when these primes are large enough, ensuring secure
    communication.
   Slide No: 15, Page No: 17
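    The steps above can be traced end to end with a toy Python example (the primes here are
    deliberately tiny for illustration; real keys use primes hundreds of digits long and a
    vetted library):

        from math import gcd

        p, q = 61, 53                 # two secret primes (toy sizes)
        n = p * q                     # n = 3233, part of both keys
        phi = (p - 1) * (q - 1)       # Euler's totient: 3120

        e = 17                        # 1 < e < phi and gcd(e, phi) == 1
        assert gcd(e, phi) == 1
        d = pow(e, -1, phi)           # modular inverse (Python 3.8+): d = 2753

        public_key, private_key = (n, e), (n, d)
        m = 65                        # any message m < n
        c = pow(m, e, n)              # encryption: c = m^e mod n
        assert pow(c, d, n) == m      # decryption recovers m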
5. What are the major security concerns and attacks related to RSA?
   RSA, despite being widely used and secure when properly implemented, faces several
   security challenges and attacks. Plaintext attacks can occur when attackers have access to both
   the ciphertext and some known plaintext, potentially allowing reverse-engineering of the key.
   Short message attacks exploit the vulnerability of encrypting small messages without
    appropriate padding. The cycling attack repeatedly re-encrypts the ciphertext until the
    original plaintext reappears, exploiting the cyclic structure of modular exponentiation.
    RSA is also vulnerable to factorization attacks because its security depends on
    the difficulty of factoring large integers; if the modulus n can be factored into primes p
    and q, the private key can be derived. Key-related vulnerabilities arise if short or weak keys
   are used or if the random number generators produce predictable primes. Timing attacks
   analyze the time a system takes to perform cryptographic operations to infer information
   about the keys. The advent of quantum computing presents a major future threat, as
   algorithms like Shor’s algorithm can factorize large numbers efficiently, potentially rendering
   RSA insecure. To counter these problems, using sufficiently large keys (2048 bits or higher),
   secure padding, and rotating keys regularly is recommended.
   Slide No: 18, Page No: 21
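    The factorization threat can be made concrete with the toy key from the previous answer:
    trial division recovers p and q, and with them the private exponent, in microseconds. The
    same attack is computationally infeasible for a properly sized 2048-bit n, which is
    exactly the point:

        def factor(n):
            # Naive trial division: feasible only for tiny moduli.
            f = 3
            while f * f <= n:
                if n % f == 0:
                    return f, n // f
                f += 2
            return None

        n, e = 3233, 17               # the toy public key from above
        p, q = factor(n)
        phi = (p - 1) * (q - 1)
        d = pow(e, -1, phi)
        print(p, q, d)                # 53 61 2753 -> private key recovered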
From Lec4.pptx (Provable Data Possession and Proofs of Retrievability)
    6. What is Provable Data Possession (PDP) and its primary purpose?
       Provable Data Possession (PDP) is a cryptographic protocol designed to enable clients to
       verify that their data stored on untrusted storage servers remains intact and unaltered without
       needing to download the entire dataset. In a typical PDP scheme, the client preprocesses the
       original data, generating metadata such as tags or hashes, and then uploads the processed data
       to the server. Later, the client can issue random challenges to the server to prove possession of
       the intact data. The server responds with proofs generated based on the stored data, which the
       client verifies against the saved metadata. The primary goal of PDP is to ensure data integrity
       efficiently with minimal communication overhead and metadata storage on the client’s side.
       This is particularly useful in cloud storage scenarios where data may be altered, deleted, or
       corrupted without the client’s knowledge. However, traditional PDP schemes mainly support
       static data, requiring reinitialization of the process upon any data modification. PDP protocols
       thus provide a cost-effective and secure solution for data integrity verification in outsourced
       storage environments.
       Slide No: 1, Page No: 1
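        The challenge-response flow can be sketched in a few lines of Python. Note this toy
        uses one HMAC tag per block and returns whole blocks as the proof; the actual PDP
        construction uses homomorphic tags so the server can aggregate a constant-size proof:

            import hashlib, hmac, os, random

            key = os.urandom(32)                      # stays with the client
            blocks = [os.urandom(4096) for _ in range(8)]

            # Preprocessing: tag every block; the client keeps only the key.
            def tag(i, b):
                return hmac.new(key, i.to_bytes(4, "big") + b, hashlib.sha256).digest()
            tags = [tag(i, b) for i, b in enumerate(blocks)]
            # blocks and tags are uploaded; the client deletes its local copy.

            # Audit: challenge random indices, verify the returned blocks.
            challenge = random.sample(range(len(blocks)), 3)
            proof = [(i, blocks[i], tags[i]) for i in challenge]      # server side
            ok = all(hmac.compare_digest(tag(i, b), t) for i, b, t in proof)
            print("possession verified:", ok)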
    7. What are the limitations of the original PDP scheme?
       The original PDP scheme, while effective for static data integrity verification, suffers from
       several limitations. It requires significant computational resources or communication
       bandwidth for verifying large files since the verification often involves processing data
        proportional to the entire file size. In addition, the scheme mandates linear storage of
       verification metadata on the client, which can become impractical as data size grows.
       Additionally, traditional PDP protocols do not guarantee security against all possible data
       possession attacks, sometimes lacking robustness concerning certain manipulations by
       malicious storage providers. A critical drawback is that these schemes only support static
       data, meaning if the client wishes to modify, insert, or delete data, the PDP process must be
       restarted from scratch. This leads to inefficiencies when handling dynamic or frequently
       updated data. Furthermore, the high overhead can limit their practical deployment for real-
       world large-scale cloud storage scenarios. Thus, to overcome these issues, enhanced or
       dynamic PDP protocols have been proposed, although with their distinct trade-offs.
       Slide No: 3, Page No: 3
    8. How does Proofs of Retrievability (PoR) differ from PDP?
       Proofs of Retrievability (PoR) share the goal of verifying data possession like PDP but extend
       it by also ensuring that the entire file can be retrieved successfully even if small parts are
       corrupted or missing. PoR schemes embed extra check blocks called sentinels and employ
       error-correcting codes to detect and repair minor corruptions. Unlike PDP, which only
       confirms the presence and integrity of data, PoR guarantees data availability and
       recoverability, providing stronger assurance to clients about their outsourced data. PoR also
       encrypts the file and makes sentinels indistinguishable from regular data blocks, increasing
       security against malicious providers. The client sends challenges asking for sentinel values for
       verification, and failure to respond correctly indicates data tampering or loss. While PoR
       introduces computational and storage overhead due to encoding and encryption, it suits
       archival storage where data integrity and retrievability are critical. Thus, PoR schemes
       provide a more comprehensive security solution for cloud storage environments.
       Slide No: 5, Page No: 5
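        A much-simplified sentinel check might look like the following Python sketch (the
        real scheme encrypts the file first so sentinels are indistinguishable from data
        blocks, and the key-derived placement policy here is a toy assumption):

            import hashlib, os, random

            def sentinel(key, i):
                # Derive the i-th sentinel value from the client's secret key.
                return hashlib.sha256(key + i.to_bytes(4, "big")).digest()

            key = os.urandom(32)
            blocks = [os.urandom(32) for _ in range(20)]   # stands in for ciphertext

            rng = random.Random(key)                       # key-derived placement
            positions = sorted(rng.sample(range(24), 4))   # 20 blocks + 4 sentinels
            stored = list(blocks)
            for i, pos in enumerate(positions):
                stored.insert(pos, sentinel(key, i))       # server stores `stored`

            # Audit: spot-check one sentinel; a wrong value signals tampering or loss.
            i = random.randrange(4)
            print("sentinel intact:", stored[positions[i]] == sentinel(key, i))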
    9. What role do error-correcting codes play in PoR schemes?
       Error-correcting codes in Proofs of Retrievability (PoR) schemes enable the detection and
       correction of small file corruptions that may otherwise go unnoticed by sentinel checks alone.
       These codes are added to the file blocks before storage, allowing the server to fix minor
       damages during data retrieval. The combination of inner and outer codes, as used in advanced
       schemes, plays complementary roles at different protocol layers, with inner codes computed
       by the server and outer codes embedded with the file. While the inner code adds computation
       overhead during proof generation, it does not increase storage size, whereas the outer code
       increases file size but has minimal computational impact on the server. This error tolerance
       enhances data reliability and availability, especially important for large-scale storage where
       occasional corruption or partial failures are expected. By enabling automatic repair of
       corrupted data, error-correcting codes reduce the probability of permanent data loss.
       Ultimately, they strengthen the overall guarantees provided by PoR protocols for secure
       outsourced storage.
       Slide No: 7, Page No: 7
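        As a minimal illustration of the error-correction idea, the sketch below uses a 3x
        repetition code; practical PoR schemes use far more space-efficient Reed-Solomon-style
        codes, but the recovery principle is the same:

            def encode(bits):
                # Triple each bit (rate-1/3 repetition code).
                return [b for bit in bits for b in (bit, bit, bit)]

            def decode(coded):
                # Majority vote within each 3-bit group corrects any single flip.
                return [1 if sum(coded[i:i + 3]) >= 2 else 0
                        for i in range(0, len(coded), 3)]

            msg = [1, 0, 1, 1]
            coded = encode(msg)
            coded[4] ^= 1                 # simulate one corrupted stored bit
            assert decode(coded) == msg   # the corruption is repaired on retrieval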
    10. What is Scalable PDP and what are its limitations?
       Scalable Provable Data Possession (PDP) is an advancement over the original PDP scheme
       that supports limited dynamic data operations such as appending, modifying, and deleting
       blocks. It uses a setup phase where multiple future verification challenges and corresponding
       answers are precomputed and stored as metadata on the client side. This precomputation
       increases efficiency by reducing online computational burden during verification. However,
       this design introduces limitations, including a fixed number of verifications and updates
       allowed, as the number of challenges is determined at setup time. Additionally, it disallows
       inserting blocks anywhere except at the end, restricting flexibility in dynamic data
       management. If more updates or verifications are needed, the entire setup and preprocessing
       must be redone, which can be impractical for large files. Thus, while Scalable PDP improves
       performance for some dynamic scenarios, it imposes trade-offs in terms of update flexibility
       and scalability.
       Slide No: 4, Page No: 4
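        The precomputed-token idea can be sketched as follows in Python (a loose illustration,
        not the published protocol: real tokens are keyed so the server cannot learn the
        challenged indices before audit time):

            import hashlib, os, random

            blocks = [os.urandom(64) for _ in range(16)]

            # Setup: derive a fixed batch of challenge/answer tokens up front.
            tokens = []
            for _ in range(5):                       # only 5 audits possible after setup
                idx = sorted(random.sample(range(len(blocks)), 2))
                nonce = os.urandom(8)
                answer = hashlib.sha256(nonce + b"".join(blocks[i] for i in idx)).digest()
                tokens.append((idx, nonce, answer))  # kept by the client

            # One audit consumes one token; the server recomputes over its copy.
            idx, nonce, expected = tokens.pop()
            proof = hashlib.sha256(nonce + b"".join(blocks[i] for i in idx)).digest()
            print("audit passed:", proof == expected, "| audits left:", len(tokens))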
From lec6.pptx (Secure On-Premise Internet Access, IDS, IPS)
    1. What is secure on-premise internet access and why is it important?
       Secure on-premise internet access involves protecting an organization's internal network
       infrastructure and endpoints when users connect to the internet from within the organization’s
       premises. It ensures malicious traffic is identified and blocked, sensitive data is protected, and
       internal systems remain shielded from external cyber threats. Key components include
       firewalls that monitor inbound and outbound traffic, Intrusion Detection and Prevention
       Systems (IDS/IPS) that detect and block attacks, Secure Web Gateways that filter harmful
       web content, and DNS security measures to prevent phishing and command-and-control
       attacks. This security is crucial because network intrusions can steal valuable resources and
       data, potentially causing severe damage and loss of trust. Threat actors can exploit
   vulnerabilities such as outdated software, mobile device exploits, and unencrypted data
   storage to infiltrate networks. Therefore, a combination of these tools and continuous
   monitoring is necessary to maintain a robust security posture. Secure on-premise internet
   access allows organizations to safely utilize online services while minimizing risk.
   Lec No: 6, Page No: 1
2. Explain how an Intrusion Detection System (IDS) works and its types.
   An Intrusion Detection System (IDS) is a network security technology designed to detect
   vulnerability exploits and unauthorized access attempts by monitoring network traffic
   patterns. IDS generally operates in a passive mode, placed on a network tap or span port to
   analyze copies of traffic without impacting actual data flow. There are three primary detection
   methods: Signature-based detection looks for known attack patterns or byte sequences;
   Heuristic or anomaly-based detection uses machine learning to profile normal network
   behavior and flags deviations; Reputation-based detection checks file reputations to identify
   suspicious content. IDS can be categorized mainly as Network-based IDS (NIDS), which
   monitors traffic across the network, and Host-based IDS (HIDS), which examines data and
    events on an individual endpoint. Signature-based detection identifies known threats
    reliably, while heuristic detection can catch new and unknown threats but may produce
    false positives. An IDS alerts
   administrators of suspicious activity but does not block the traffic by itself, distinguishing it
   from Intrusion Prevention Systems.
   Lec No: 6, Page No: 3
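    Signature-based detection, the simplest of the three methods, can be sketched as pattern
    matching over payload bytes (the signatures below are hypothetical; production engines
    such as Snort or Suricata use far richer rule languages):

        # Hypothetical byte-pattern signatures mapped to human-readable names.
        SIGNATURES = {
            b"' OR '1'='1": "SQL injection probe",
            b"\x90\x90\x90\x90\x90\x90\x90\x90": "possible NOP sled",
        }

        def inspect(payload: bytes):
            # Passive inspection: report matches, never touch the traffic.
            return [name for sig, name in SIGNATURES.items() if sig in payload]

        print(inspect(b"GET /login?u=admin' OR '1'='1 HTTP/1.1"))
        # -> ['SQL injection probe']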
3. What is an Intrusion Prevention System (IPS) and how does it differ from IDS?
   An Intrusion Prevention System (IPS) actively monitors, detects, and prevents network
   attacks by being placed inline between source and destination traffic, unlike IDS which is
   passive. The IPS inspects every packet in real-time and can automatically take actions such as
   terminating TCP sessions, blocking malicious IP addresses, reconfiguring firewalls, or
   removing malicious content. Like IDS, IPS uses signature-based, anomaly-based, and policy-
   based approaches to detect threats but adds the ability to act immediately on such detection.
   IPS helps prevent denial of service attacks, worms, viruses, and various exploits by stopping
   attacks before they cause damage. There are multiple types of IPS, including network-based
   (NIPS), wireless (WIPS), network behavior, and host-based (HIPS). NIPS analyze protocol
   packets on the network, while HIPS operate on individual hosts tracking system and
   application behavior. The combination of detection and prevention makes IPS a vital part of
   modern network security infrastructure.
   Lec No: 6, Page No: 6
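    The placement difference can be captured in two small functions: an IDS observes a copy
    of the traffic and alerts, while an inline IPS sits on the forwarding path and can drop
    (a sketch only; real systems hook kernel or wire-level packet paths):

        def ids_monitor(packet: bytes, looks_malicious) -> None:
            # Passive tap: alert, but the original packet is already flowing.
            if looks_malicious(packet):
                print("IDS alert: suspicious packet logged")

        def ips_inline(packet: bytes, looks_malicious):
            # Inline hop: the packet is forwarded only if it passes inspection.
            if looks_malicious(packet):
                print("IPS action: packet dropped")
                return None
            return packet

        pkt = b"payload with ' OR '1'='1 inside"
        bad = lambda p: b"' OR '1'='1" in p
        ids_monitor(pkt, bad)          # alert only, traffic unaffected
        print(ips_inline(pkt, bad))    # None: the packet never reaches its target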
4. Can IDS and IPS work together, and what are the benefits?
   Yes, IDS and IPS can work together and are often integrated into a single appliance known as
   a Next-Generation Firewall (NGFW) or Unified Threat Management (UTM) system. IDS
   provides the detection capability, monitoring network traffic for suspicious patterns, while
   IPS offers prevention by blocking detected threats in real time. Integration began around
   2005, enabling more efficient security operations by consolidating monitoring and response
   functions. This combination allows administrators to configure systems to either detect (alert
   only) or prevent (block) suspicious activity depending on the business needs. Such integration
   simplifies network security management and enhances overall protection. Additionally,
   combined systems can integrate with Security Information and Event Management (SIEM)
       solutions to correlate event data and improve threat response. This unified approach reduces
       the need for multiple devices, lowers operational complexity, and improves security posture.
       Lec No: 6, Page No: 8
From lec7.pptx (SSL, TLS, IPsec)
    5. What is SSL and how does it work to secure website communication?
       Secure Sockets Layer (SSL) is a security protocol that establishes an encrypted connection
       between a web browser and a web server. It ensures sensitive data such as login credentials,
       credit card numbers, and personal information are transmitted securely over the internet.
       Initially, a website owner purchases an SSL certificate from a Certificate Authority (CA),
       which contains the site’s public key and identity authentication information. When a user
       visits the website, the browser initiates an SSL handshake by requesting the certificate and
       verifying its authenticity. Once verified, asymmetric encryption is used during the handshake
       to securely exchange keys between the browser and server. Afterward, a symmetric session
       key is created, which encrypts all further communications for the duration of the session. This
       process prevents attackers from intercepting or tampering with data. Browsers display a
       padlock icon to indicate a secure connection, helping users identify HTTPS-secured websites.
       Lec No: 7, Page No: 1
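        The handshake described above can be observed directly with Python's standard ssl
        module (example.com is used here as an arbitrary HTTPS host; network access assumed):

            import socket, ssl

            ctx = ssl.create_default_context()    # verifies the CA-signed certificate
            with socket.create_connection(("example.com", 443)) as sock:
                with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
                    print(tls.version())          # e.g. 'TLSv1.3'
                    print(tls.cipher())           # the negotiated cipher suite
                    cert = tls.getpeercert()      # the certificate the browser checks
                    print(cert["subject"])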
    6. Differentiate between SSL and TLS.
       SSL (Secure Sockets Layer) is the predecessor of TLS (Transport Layer Security) and is now
       considered deprecated with TLS being the modern and secure replacement. Both protocols
       provide encryption and secure communication over networks like the internet, but TLS offers
       a more robust handshake process, stronger encryption algorithms, and improved cipher suites.
       TLS uses advanced message authentication codes such as HMAC compared to SSL’s older
       MAC function, enhancing data security and integrity. TLS can send multiple alert messages to
       indicate errors, whereas SSL only supports a single alert. Despite these differences, the terms
       SSL and TLS are often used interchangeably because TLS evolved directly from SSL 3.0.
       TLS versions range from 1.0 to the latest 1.3, with newer versions recommended for enhanced
       security. The adoption of TLS has led to improved security, authentication, and data integrity
       for encrypted communications on the web.
       Lec No: 7, Page No: 8
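        In practice, "newer versions recommended" translates into configuration like the
        following (Python ssl module sketch):

            import ssl

            ctx = ssl.create_default_context()
            ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse SSL 3.0, TLS 1.0/1.1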
    7. What is a Cipher Suite and what are its main components?
       A Cipher Suite is a set of cryptographic algorithms that dictate how secure communication is
       conducted over protocols like TLS and SSL. It defines how the keys are exchanged, the
       encryption method for the bulk data, authentication mechanisms, and ensures message
       integrity. Typically, a cipher suite includes four components: a key exchange algorithm (e.g.,
       RSA, Diffie-Hellman), a bulk encryption algorithm to encrypt the data (e.g., AES, 3DES), an
       authentication algorithm (e.g., RSA, DSA), and a message authentication code (MAC)
       algorithm (e.g., SHA, MD5). During the TLS handshake, the client and server agree upon
       which cipher suite to use based on supported algorithms and security preferences. The
       strength of each component affects the overall security of the communication session. For
       example, AES with Galois/Counter Mode (AES-GCM) offers both confidentiality and
       integrity protection. Choosing a secure cipher suite is essential to protect data against
        interception and manipulation on the network.
        Lec No: 7, Page No: 10
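        The four components can be read off real suite names by listing what a default
        context offers (Python ssl sketch); a name like ECDHE-RSA-AES256-GCM-SHA384 spells
        out key exchange, authentication, bulk cipher, and MAC in order:

            import ssl

            ctx = ssl.create_default_context()
            for suite in ctx.get_ciphers()[:5]:
                print(suite["name"], "|", suite["protocol"])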
    8. Explain how IP Security (IPSec) works and its importance.
       IP Security (IPSec) is a suite of protocols that secures Internet Protocol (IP) communications
       by authenticating and encrypting each IP packet during data transmission. It is crucial for
       protecting data exchanged over public networks, such as the internet, ensuring confidentiality,
       integrity, and authenticity of the communication. IPSec operates primarily in two modes:
       Transport Mode, which encrypts only the payload of the packet, and Tunnel Mode, which
       encrypts the entire IP packet and encapsulates it for secure routing. Two main protocols
       within IPSec are the Authentication Header (AH), which verifies the data’s origin and
       integrity without encryption, and Encapsulating Security Payload (ESP), which provides
       encryption and optional authentication. Keys used for encryption are established using
       protocols such as the Internet Key Exchange (IKE), ensuring secure mutual authentication
       between devices. IPSec is widely used to create Virtual Private Networks (VPNs), allowing
       remote or branch office users to securely access corporate networks. It enhances cybersecurity
       by preventing unauthorized data access and eavesdropping on network communications.
       Lec No: 7, Page No: 16
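        The difference between the two modes reduces to what gets encrypted and which header
        travels in the clear, as in this byte-level Python sketch (a toy only: real ESP adds
        SPI, sequence number, padding, and an integrity check value, and keys come from IKE):

            def transport_mode(ip_header: bytes, payload: bytes, encrypt) -> bytes:
                # Original IP header stays visible; only the payload is protected.
                return ip_header + encrypt(payload)

            def tunnel_mode(inner_packet: bytes, outer_header: bytes, encrypt) -> bytes:
                # The entire original packet is protected and re-wrapped for routing.
                return outer_header + encrypt(inner_packet)

            # Stand-in "cipher" just to make the sketch runnable:
            demo = tunnel_mode(b"inner-packet", b"outer-header|", lambda b: b[::-1])
            print(demo)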
From lec8.pptx (Cloud Distributed Denial-of-Service (DDoS) Protection)
Question 9:
What is a DDoS attack, and how does a SYN flood attack specifically disrupt network services? (Lec
8, Slides 2-14)
Answer:
A Distributed Denial-of-Service (DDoS) attack aims to disrupt normal traffic to a targeted server or
network by overwhelming it with excessive traffic from multiple compromised devices (bots). In a
SYN flood attack, the attacker exploits the TCP handshake process by sending numerous SYN
packets with spoofed IPs to a server. The server responds with SYN/ACK packets and waits for the
final ACK to complete the handshake. However, the attacker never sends this final ACK, keeping the
server’s connection table full with half-open connections. Once all available connections are
consumed, the server cannot process legitimate requests, causing denial of service. This attack targets
the protocol layer by exhausting server resources, rendering the service unavailable or sluggish.
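The exhaustion mechanism can be simulated in a few lines of Python (illustrative backlog size;
real kernels also use timeouts and defenses such as SYN cookies):

    BACKLOG_MAX = 128
    half_open = {}                           # connection table: src -> state

    def on_syn(src_ip):
        if len(half_open) >= BACKLOG_MAX:
            return "SYN dropped"             # legitimate clients now refused
        half_open[src_ip] = "SYN_RECEIVED"   # SYN/ACK sent, awaiting final ACK
        return "SYN/ACK sent"

    for i in range(200):                     # spoofed sources never send the ACK
        on_syn(f"198.51.100.{i}")
    print(len(half_open), "|", on_syn("203.0.113.5"))   # 128 | SYN dropped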
Question 10:
Explain volumetric DDoS attacks, such as the DNS amplification attack, and how anycast routing
helps mitigate DDoS attacks. (Lec 8, Slides 15-28)
Answer:
Volumetric DDoS attacks try to consume all available bandwidth between the victim and the Internet
by amplifying traffic. A DNS amplification attack is an example where an attacker sends small UDP
queries to open DNS resolvers with the victim’s IP spoofed as the source. The resolvers reply with
large responses to the victim, generating a massive traffic surge that overwhelms the target's network.
To mitigate such traffic floods, anycast routing is used. Anycast distributes incoming requests to
multiple geographically dispersed data centers or nodes. By routing traffic to the nearest or least
congested node, anycast reduces latency and spreads the attack traffic, preventing any single data
center from being overwhelmed. This selective routing enhances resilience and helps maintain service
availability during DDoS attacks.
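Back-of-the-envelope numbers show why amplification is so effective (all figures below are
illustrative assumptions: a small spoofed query can elicit a response dozens of times larger):

    query_bytes = 60                  # small spoofed UDP query
    response_bytes = 3000             # large answer sent to the victim
    bots = 1000                       # compromised senders
    queries_per_second = 100          # per bot

    factor = response_bytes / query_bytes
    flood = bots * queries_per_second * response_bytes * 8 / 1e9
    print(f"amplification: {factor:.0f}x, victim sees ~{flood:.1f} Gbit/s")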