VAPT Unit 2 Notes
1. ATTACKS TO PRIVACY
Spyware and backdoors are insidious tools designed to infiltrate systems, extract data, or
maintain unauthorized access. They differ in intent (spyware focuses on surveillance,
backdoors on persistence) but share stealth as a core trait.
Spyware
      Definition: Malicious software that secretly monitors and collects user data—
       keystrokes, files, browsing history, or multimedia (audio/video).
      Delivery Methods:
          o Bundling: Hidden in freeware or pirated software (e.g., codec packs laced
              with spyware like Zango).
          o Phishing: Links or attachments in emails/SMS (e.g., a fake “invoice.pdf”
              dropping spyware).
          o Exploits: Drive-by downloads via browser or OS vulnerabilities (e.g., Adobe
              Flash exploits pre-2020).
          o Physical Access: USB drops or infected peripherals in public spaces.
      Mechanisms:
          o Keyloggers: Record every keystroke, capturing passwords or chats. Example:
              HawkEye logs to an FTP server.
          o Screen Scraping: Periodic screenshots or video grabs (e.g., DarkComet
              RAT).
          o Network Sniffing: Intercepts unencrypted traffic (e.g., Wi-Fi data on open
              networks).
          o Data Exfiltration: Sends stolen data via HTTPS, SMTP, or disguised
              protocols (e.g., DNS tunneling).
      Advanced Examples:
          o Pegasus (NSO Group): Zero-click spyware exploiting iOS/Android
              vulnerabilities (e.g., iMessage flaws). Accesses encrypted chats, activates
              mic/camera, and self-destructs to evade forensics.
          o FinFisher: Government-grade spyware sold to regimes, using rootkits to hide
              in system memory.
      Stealth Techniques:
          o Masquerades as legitimate processes (e.g., “chrome_helper.exe”).
          o Polymorphic code mutates to dodge signature-based antivirus.
          o Rootkit integration buries it in the OS kernel.
      Impact: Identity theft, corporate espionage, or blackmail (e.g., webcam footage).
Countermeasures
      Detection: Behavioral analysis (e.g., Sysmon for unusual process activity) and network
       monitoring (e.g., Zeek for odd traffic); see the sketch after this list.
      Prevention: Regular patching, avoiding untrusted downloads, endpoint protection
       (e.g., CrowdStrike, Malwarebytes).
      Mitigation: Sandboxing, air-gapped systems for sensitive data.
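As a minimal sketch of the behavioral-detection idea above (assuming the third-party psutil
package; the binary names and install paths in EXPECTED_DIRS are illustrative, not a real
allowlist), the snippet below flags processes whose names mimic well-known binaries but run
from unexpected directories, one simple masquerade heuristic.

```python
# Hypothetical masquerade check: flag processes whose names imitate common
# binaries but run from unexpected locations. Requires the psutil package.
import psutil

# Illustrative allowlist of expected install directories (Windows-style paths
# chosen as an assumption; adjust for the environment under test).
EXPECTED_DIRS = {
    "chrome.exe": r"c:\program files\google\chrome\application",
    "svchost.exe": r"c:\windows\system32",
}

def suspicious_processes():
    """Yield (pid, name, exe) for processes running outside their expected directory."""
    for proc in psutil.process_iter(attrs=["pid", "name", "exe"], ad_value=""):
        name = (proc.info["name"] or "").lower()
        exe = (proc.info["exe"] or "").lower()
        expected = EXPECTED_DIRS.get(name)
        if expected and exe and not exe.startswith(expected):
            yield proc.info["pid"], name, exe

if __name__ == "__main__":
    for pid, name, exe in suspicious_processes():
        print(f"[!] {name} (pid {pid}) running from unexpected path: {exe}")
```

A real EDR combines many such heuristics (parent-child process chains, network beacons,
persistence artifacts); this only shows the path-mismatch idea.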
Browsers are the internet’s front door, making them prime targets for privacy attacks. These
threats exploit browser features, user habits, or unpatched flaws.
Countermeasures
      Browser Settings: Disable unneeded plugins (e.g., Flash), block third-party cookies,
       use private modes.
      Tools: uBlock Origin (ad/script blocking), HTTPS Everywhere, anti-fingerprinting
       extensions (e.g., Privacy Badger).
      Updates: Patch browsers/OS promptly—auto-updates are key.
      Developer Side: Content Security Policy (CSP), input sanitization, secure headers
       (e.g., X-Frame-Options).
Email’s ubiquity and trust make it a juicy target for privacy breaches, from interception to
impersonation.
      Definition: Fraudulent emails tricking users into revealing data or installing malware.
      Types:
          o Mass Phishing: Generic lures (e.g., “Your PayPal account is locked”).
          o Spear Phishing: Targeted, using personal info (e.g., “Hey John, here’s the Q1
              report”).
          o Whaling: Aims at execs (e.g., CEO fraud).
      Mechanisms:
          o Links to fake sites (e.g., typosquatted domains like “g00gle.com”).
          o Attachments with macros (e.g., Word docs running PowerShell).
          o HTML forms mimicking logins.
      Examples: Emotet spread via phishing, evolving into a malware loader.
      Email Bombs: Floods inboxes with junk, overwhelming users or hiding legit mail.
          o Tools: Scripts hitting SMTP servers or subscription bots.
      Spoofing: Fakes sender address.
           o Mechanisms: Forged “From” headers, no SPF/DKIM checks (see the header-check
               sketch after this list).
          o Examples: Spoofed HR emails with malware-laden “resumes.”
      Impact: Bypasses filters, extracts replies with sensitive data.
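To make the spoofing indicators concrete, here is a minimal standard-library sketch; the raw
message, domains, and Authentication-Results values are invented for illustration. It compares
the From and Return-Path domains and looks for SPF/DKIM failures, two common signs of a
forged sender.

```python
# Minimal sketch: spot common spoofing indicators in a raw email.
# The sample message and its header values are fabricated for illustration.
from email import message_from_string
from email.utils import parseaddr

RAW = """\
Return-Path: <bounce@mailer.example.net>
Authentication-Results: mx.example.com; spf=fail smtp.mailfrom=mailer.example.net; dkim=none
From: "HR Department" <hr@company.example.com>
Subject: Updated resume attached

Please see the attached resume.
"""

msg = message_from_string(RAW)
from_domain = parseaddr(msg.get("From", ""))[1].split("@")[-1].lower()
return_domain = parseaddr(msg.get("Return-Path", ""))[1].split("@")[-1].lower()
auth_results = msg.get("Authentication-Results", "").lower()

# Heuristics only: mismatched domains or failed SPF/DKIM suggest a forged sender.
if from_domain != return_domain:
    print(f"[!] From domain ({from_domain}) differs from Return-Path domain ({return_domain})")
if "spf=fail" in auth_results or "dkim=fail" in auth_results:
    print("[!] SPF/DKIM check reported a failure for this message")
```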
HTTP cookies are small pieces of data stored by a web browser on a user’s device, sent by a
website via HTTP headers. They’re primarily used for session tracking—keeping you
logged into a site, remembering your preferences, or maintaining a shopping cart across
pages. A server assigns a unique identifier (like a session ID) to a cookie, which the browser
sends back with each subsequent request to that site, allowing the server to recognize you.
Privacy risks arise because cookies can store more than just session data—like your
browsing habits or personal details if the site chooses to encode them. They’re vulnerable to
theft via techniques like cross-site scripting (XSS), where attackers snag cookies to hijack
sessions. Even without theft, cookies enable tracking of user behavior within a single site, and
if not secured (e.g., missing the HttpOnly flag, which blocks script access, or the Secure flag,
which restricts cookies to HTTPS), they can be stolen by injected scripts or intercepted over
unencrypted connections.
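A minimal sketch of how a server can set the flags just mentioned, using Python's standard
http.cookies module; the cookie name and value are placeholders and no web framework is
shown.

```python
# Sketch: building a Set-Cookie header value with privacy-relevant flags.
from http import cookies

jar = cookies.SimpleCookie()
jar["session_id"] = "abc123"            # value assigned by the server
jar["session_id"]["httponly"] = True    # block JavaScript access (limits XSS theft)
jar["session_id"]["secure"] = True      # send only over HTTPS (limits interception)
jar["session_id"]["samesite"] = "Lax"   # restrict cross-site sending
jar["session_id"]["path"] = "/"

# Prints a Set-Cookie value including the Path, Secure, HttpOnly and SameSite attributes.
print(jar["session_id"].OutputString())
```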
Third-party cookies are set by domains other than the one you’re visiting—think ad networks
or analytics providers embedded in a site via scripts or iframes. They’re the backbone of
cross-site tracking, letting companies like Google or Facebook stitch together your activity
across unrelated websites. For example, an ad widget on Site A and Site B can drop the same
third-party cookie, linking your visits into a profile for targeted ads.
The method relies on browsers sending these cookies to the third-party domain whenever it’s
referenced, regardless of the top-level site. Privacy-wise, this is a bigger deal than first-party
cookies because it creates a sprawling, often invisible web of surveillance. Browsers like
Safari (with Intelligent Tracking Prevention) and Firefox block or partition these by default,
and Google has proposed phasing them out in Chrome in favor of alternatives such as the
Topics API (the successor to the abandoned FLoC proposal).
Browser fingerprinting skips cookies entirely, identifying users by collecting unique traits of
their device and browser. Think screen resolution, installed fonts, timezone, user agent string,
WebGL capabilities, or even subtle differences in how JavaScript executes. Sites use scripts
to gather this data, hashing it into a near-unique identifier—no storage needed on your
device.
It’s stealthier than cookies because it’s passive (no opt-in) and harder to block. Even privacy
tools like VPNs or incognito mode don’t fully stop it unless you spoof hardware-level details.
The trade-off? It’s less reliable for long-term tracking—change your browser settings or
device, and your fingerprint shifts. Still, companies like ad trackers love it as a fallback when
cookies are blocked.
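The toy sketch below illustrates the hashing step described above: a handful of invented traits
are serialized deterministically and hashed into a compact identifier. Real fingerprinting
scripts collect many more signals (canvas rendering, audio stack, plugin lists), but the
principle is the same.

```python
# Toy illustration: hashing browser/device traits into a fingerprint-like identifier.
# The attribute values below are invented; real scripts gather far more signals.
import hashlib
import json

traits = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) Gecko/20100101 Firefox/115.0",
    "screen": "1920x1080x24",
    "timezone": "Asia/Kolkata",
    "fonts": ["Arial", "DejaVu Sans", "Noto Sans"],
    "webgl_renderer": "ANGLE (Intel, Mesa Intel(R) UHD Graphics)",
}

# Serialize deterministically, then hash into a compact identifier.
canonical = json.dumps(traits, sort_keys=True).encode("utf-8")
fingerprint = hashlib.sha256(canonical).hexdigest()
print(f"fingerprint: {fingerprint[:16]}...")  # change any trait and the hash shifts
```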
Content Security Policy (CSP) is a browser security feature that lets websites define rules
about where scripts, images, or other resources can load from. It’s delivered via an HTTP
header (e.g., Content-Security-Policy: script-src 'self') and acts like a whitelist for content
origins. For tracking, CSP can block third-party scripts or cookies by restricting domains—
like stopping an ad network’s tracker from executing. It also mitigates malicious content, like
XSS attacks, by preventing unauthorized script injection.
It’s not a silver bullet—poorly configured CSPs are common, and it doesn’t stop
fingerprinting or first-party tracking directly. But when paired with other defenses (like
SameSite cookies or blocking third-party requests), it shrinks the attack surface for both
privacy invasions and exploits.
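As a hedged illustration, the sketch below serves a page with a restrictive CSP and an
X-Frame-Options header using Python's built-in http.server; the policy string, port, and page
content are illustrative, and production sites would normally set these headers in their web
server or framework configuration instead.

```python
# Minimal sketch of serving Content-Security-Policy and related security headers.
from http.server import BaseHTTPRequestHandler, HTTPServer

CSP = "default-src 'self'; script-src 'self'; frame-ancestors 'none'"

class CSPHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body><h1>CSP demo</h1></body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Security-Policy", CSP)   # whitelist of content origins
        self.send_header("X-Frame-Options", "DENY")        # legacy clickjacking defense
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), CSPHandler).serve_forever()
```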
      Definition: Privacy controls are browser features or settings that allow users to
       regulate how their data is collected, stored, and shared, focusing on cookies and
       tracking mechanisms.
      Cookie Management:
           o What Are Cookies? Small text files (typically <4KB) stored by websites in
               the browser to maintain state (e.g., session IDs, preferences) or track behavior
               (e.g., visited pages).
           o Types:
                    Session Cookies: Temporary, erased when the browser closes; used for
                        short-term tasks like logins.
                    Persistent Cookies: Stored with an expiration date (days to years);
                        used for tracking or remembering settings.
                    First-Party Cookies: Set by the visited domain (e.g., example.com).
                    Third-Party Cookies: Set by external domains (e.g., doubleclick.net
                        for ads).
           o Mechanisms:
                    Cookies are sent with HTTP requests via the Cookie header and stored
                        via the Set-Cookie response header.
                     Example: Set-Cookie: user_id=12345; Expires=Fri, 13 Mar 2026
                        12:00:00 GMT; Path=/
           o Privacy Risks:
                  Persistent cookies enable long-term profiling (e.g., ad retargeting).
                  Third-party cookies allow cross-site tracking, linking user behavior
                   across unrelated sites.
                Lack of consent violates regulations like GDPR or CCPA.
       o Control Options:
                Block All Cookies: Disables all cookie storage; breaks functionality
                   (e.g., logins, carts).
                Block Third-Party Cookies: Prevents cross-site tracking while
                   preserving site usability.
                Clear on Exit: Deletes cookies automatically when the browser closes.
                Selective Blocking: Allows exceptions for trusted sites (e.g., banking).
       o Implementation:
                Firefox: Settings > Privacy & Security > Cookies and Site Data >
                   Choose “Delete cookies and site data when Firefox is closed” or
                   “Block third-party cookies.”
                Chrome: Settings > Privacy and Security > Cookies and other site data
                   > Select “Block third-party cookies” or “Clear cookies on exit.”
       o Practical Example: Blocking third-party cookies stops doubleclick.net from
           tracking you on news.com while allowing news.com to keep you logged in.
   Tracker Blocking:
       o What Are Trackers?: Scripts, pixels, or iframes embedded by third parties
           (e.g., Google Analytics, Facebook Pixel) to monitor user actions (e.g., page
           views, clicks).
       o Mechanisms:
                Tracking Pixels: 1x1 invisible images that log requests to a third-party
                   server.
                Social Widgets: Buttons (e.g., Twitter Share) that report interactions
                   back to the provider.
                Ad Networks: Use JavaScript to track impressions and clicks across
                   sites.
       o Privacy Risks: Builds detailed user profiles for advertising, often without
           explicit consent.
       o Tools:
                Firefox Enhanced Tracking Protection (ETP):
                        Uses Disconnect’s blocklist to identify and block trackers.
                        Modes:
                                Standard: Blocks trackers in private browsing and
                                    known malicious scripts.
                                Strict: Blocks all detected trackers, may break some
                                    sites.
                                Custom: User-defined rules (e.g., block trackers and
                                    cookies from specific domains).
                        Enable via Settings > Privacy & Security > Enhanced Tracking
                            Protection.
                Other Browsers: Chrome’s Tracking Protection (in development),
                   Safari’s Intelligent Tracking Prevention (ITP).
       o Practical Example: Visiting a blog with ETP Strict mode blocks google-
           analytics.com scripts, preventing page view tracking.
   Benefits:
       o Reduces data leakage to third parties.
          o   Enhances user control over personal information.
      Limitations:
          o Blocking all cookies/trackers may disrupt site functionality (e.g., payment
              gateways).
          o Some trackers evade basic blocking via first-party proxies or fingerprinting.
      Best Practices:
          o Use Strict mode for sensitive browsing, Standard for general use.
           o Regularly clear cookies manually or automate deletion; a toy blocklist-matching
               sketch follows this list.
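As noted above, here is a toy sketch of the blocklist matching that underlies tools like ETP or
uBlock Origin; the domains and request URLs are illustrative, not taken from a real blocklist.

```python
# Toy blocklist matcher: decide whether an outgoing request targets a known tracker domain.
from urllib.parse import urlparse

BLOCKLIST = {"doubleclick.net", "google-analytics.com", "facebook.net"}

def is_blocked(request_url: str) -> bool:
    """Return True if the request's host matches a blocklisted domain or one of its subdomains."""
    host = urlparse(request_url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in BLOCKLIST)

requests = [
    "https://news.example.com/article.html",           # first-party content: allowed
    "https://www.google-analytics.com/analytics.js",   # known tracker: blocked
    "https://stats.doubleclick.net/pixel.gif",          # ad-network subdomain: blocked
]
for url in requests:
    print(("BLOCK" if is_blocked(url) else "ALLOW"), url)
```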
4.3 Tor Limitations
      Performance Constraints:
          o Cause: Multi-hop routing (e.g., Australia → Canada → India) adds latency;
              volunteer nodes have limited bandwidth.
          o Metrics: Average page load time ~5–15 seconds vs. <2 seconds on clearnet.
          o Impact: Poor for real-time apps (e.g., video calls, gaming).
          o Example: Streaming Netflix via Tor buffers excessively, often failing.
      Exit Node Risks:
          o Mechanism: Traffic exits unencrypted unless the destination uses HTTPS;
              exit nodes see raw data.
          o Threats:
                   Malicious exit nodes (est. <1% of total) log plaintext (e.g., HTTP form
                       submissions).
                   Governments monitor exit traffic (e.g., NSA tapping known nodes).
          o Real-World Case: In 2014, researchers found exit nodes injecting malware
              into HTTP downloads.
          o Mitigation: Use HTTPS everywhere; avoid sensitive actions over Tor unless
              end-to-end encrypted (e.g., .onion sites).
      Correlation and Timing Attacks:
          o How It Works: An adversary controlling entry and exit nodes (or tapping
              network endpoints) matches traffic patterns (e.g., packet timing, sizes).
          o Probability: Requires significant resources (e.g., 10% of Tor nodes or ISP-
              level access); feasible for nation-states, not casual attackers.
           o Example: If 1MB enters at 12:00:00 and exits at 12:00:02 consistently,
               correlation is possible (a toy illustration appears at the end of this section).
          o Defense: Increase noise (e.g., random delays), though Tor lacks built-in
              padding.
      User-Induced Vulnerabilities:
          o Behavior: Logging into Facebook over Tor links the session to a real identity.
          o Protocol Leaks: Torrenting over Tor leaks IP via DHT (Distributed Hash
              Table) outside the Tor network.
          o Example: A user torrents a file, and their real IP is exposed despite Tor usage.
          o Mitigation: Use Tor Browser’s isolated profile; avoid non-Tor traffic.
      Practical and Legal Hurdles:
          o Blocking: Sites like Reddit or Wikipedia may block Tor exit IPs due to abuse
              (e.g., spam), requiring CAPTCHAs or VPNs.
          o Perception: Tor use flags users for scrutiny (e.g., flagged by corporate IT or
              law enforcement).
          o Example: A student accessing a university portal via Tor is denied due to IP
              blacklisting.
      Advanced Challenges:
          o Sybil Attacks: Flooding the network with malicious nodes to control circuits
              (mitigated by guard nodes).
            o  Deanonymization Studies: The 2014 “relay early” traffic-confirmation attack
               attributed to Carnegie Mellon researchers allegedly unmasked Tor users for the
               FBI (disputed).
      Mitigations:
          o Use bridges or meek transports (e.g., Azure disguise) for restricted regions.
          o Pair with VPN before Tor (not after) for added entry protection.
          o Stick to .onion for maximum security.
      Best Practices:
          o Never resize Tor Browser window (breaks fingerprint resistance).
          o Avoid plugins (e.g., Flash) that bypass Tor.
          o Test anonymity with tools like check.torproject.org.
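The toy script below illustrates the timing-correlation logic described earlier: an observer who
sees both the entry and exit side pairs flows with similar sizes and plausible delays. All
timestamps, sizes, and thresholds are invented; real attacks use far richer traffic features.

```python
# Toy traffic-correlation illustration (all values fabricated).
ENTRY_FLOWS = [("alice", 0.00, 1_000_000), ("bob", 5.30, 200_000)]    # (user, t_seconds, bytes)
EXIT_FLOWS = [("dest-A", 2.05, 1_000_000), ("dest-B", 7.41, 200_000)]  # (destination, t, bytes)

MAX_DELAY = 3.0        # assumed upper bound on circuit latency (seconds)
SIZE_TOLERANCE = 0.05  # allow 5% difference in observed volume

def correlate(entries, exits):
    """Pair entry and exit flows with matching size and plausible timing."""
    for user, t_in, size_in in entries:
        for dest, t_out, size_out in exits:
            close_in_time = 0 <= t_out - t_in <= MAX_DELAY
            similar_size = abs(size_out - size_in) <= SIZE_TOLERANCE * size_in
            if close_in_time and similar_size:
                yield user, dest

for user, dest in correlate(ENTRY_FLOWS, EXIT_FLOWS):
    print(f"[correlation] {user} likely talked to {dest}")
```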
Enhanced Summary
      4.1 Anonymity Basics: Vital for privacy, safety, and freedom, but fragile—requires
       masking IP, metadata, and behavior against diverse threats.
      4.2 Tor Overview: Uses onion routing with layered encryption and a volunteer
       network to provide strong anonymity for browsing and hidden services.
      4.3 Limitations: Slow speeds, exit node risks, and potential deanonymization demand
       careful usage and awareness of technical and practical trade-offs.
5. Internet Email
      Definition: Email architecture encompasses the protocols, servers, and clients that
       facilitate the creation, transmission, storage, and retrieval of electronic messages
       across networks.
      Core Protocols:
           o SMTP (Simple Mail Transfer Protocol):
                     Purpose: The foundational protocol for sending emails from a client to
                        a server or between servers.
                     Technical Details:
                             Operates on TCP port 25 (unencrypted), 587 (STARTTLS), or
                                465 (SMTPS with SSL/TLS).
                             Uses ASCII-based commands in a client-server dialogue:
                                     HELO domain.com (or EHLO for extended SMTP)
                                       initiates the session.
                                     MAIL FROM:<sender@domain.com> specifies the
                                       sender.
                                     RCPT TO:<recipient@domain.com> identifies the
                                       recipient.
                                     DATA introduces the email content (headers + body),
                                        terminated by a line containing a single period (.).
                                     QUIT ends the session.
                              Relies on DNS MX (Mail Exchange) records to locate recipient
                                 servers (e.g., domain.com MX 10 mail.domain.com).
       Flow Example: Alice’s client (smtp.domain.com) → Bob’s server
          (smtp.gmail.com) via SMTP relay.
       Security: No native encryption; STARTTLS upgrades to TLS mid-
          session (e.g., EHLO response includes 250-STARTTLS).
       Real-World Example: Sending an email from Outlook to Gmail
          involves SMTP routing through smtp-mail.outlook.com to
          smtp.gmail.com.
       Limitations: No retrieval mechanism; vulnerable to interception
          without TLS.
o   IMAP (Internet Message Access Protocol):
       Purpose: A protocol for retrieving and managing emails, designed for
          server-side storage and multi-device synchronization.
       Technical Details:
               Operates on TCP port 143 (unencrypted) or 993 (SSL/TLS).
               Commands include LOGIN, SELECT "INBOX", FETCH
                  (retrieve email parts), STORE (set flags like \Seen), LOGOUT.
               Supports hierarchical folders (e.g., INBOX.Sent), UID (unique
                  identifiers), and real-time push (IMAP IDLE).
               Example: FETCH 1:10 (FLAGS BODY[HEADER]) retrieves
                  headers for messages 1-10.
       Flow Example: Bob’s phone connects to imap.gmail.com, marks an
          email read, and his laptop reflects this instantly.
       Security: Encrypted with SSL/TLS; plaintext IMAP is rare today.
       Real-World Example: Gmail’s IMAP keeps emails synced across a
          user’s phone, tablet, and web client.
       Benefits: Flexible, preserves server state, supports large mailboxes.
       Limitations: Requires constant connectivity; server storage can fill up.
o   POP3 (Post Office Protocol 3):
       Purpose: Downloads emails from a server to a client, typically
          removing them from the server afterward.
       Technical Details:
               Operates on TCP port 110 (unencrypted) or 995 (SSL/TLS).
               Commands: USER username, PASS password, LIST (message
                  list), RETR n (retrieve message n), DELE n (delete), QUIT.
               Example: RETR 1 downloads the first email’s full text.
               Optional “leave on server” setting retains copies.
       Flow Example: Alice’s Thunderbird pulls emails from
          pop.domain.com to her PC, deleting server copies unless configured
          otherwise.
       Security: SSL/TLS encrypts modern POP3; older setups were
          plaintext.
       Real-World Example: A rural user with limited internet uses POP3 to
          download emails offline via pop.googlemail.com.
       Benefits: Simple, lightweight, good for single-device use.
                    Limitations: No folder sync; deleted server copies disrupt multi-
                     device access.
      Full Architecture:
          o Sender’s MUA → Outgoing SMTP → Recipient’s SMTP → MDA →
              Recipient’s MUA (via IMAP/POP3).
          o Example: alice@domain.com → smtp.domain.com → smtp.hotmail.com →
              imap.hotmail.com → Bob’s Outlook.
      Edge Cases:
          o SMTP relay abuse (open relays) enables spam; modern servers require
              authentication.
           o IMAP vs. POP3 choice depends on use case (sync vs. offline); a minimal
               client-side sketch of these protocols follows.
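A minimal client-side sketch of the flow above using Python's standard smtplib and imaplib:
submission over port 587 with STARTTLS, then retrieval over IMAPS (port 993). Hostnames,
credentials, and addresses are placeholders; many providers additionally require app-specific
passwords or OAuth.

```python
# Hedged sketch of the client side of the mail flow: SMTP submission, then IMAP retrieval.
import imaplib
import smtplib
from email.message import EmailMessage

SMTP_HOST, IMAP_HOST = "smtp.example.com", "imap.example.com"   # placeholders
USER, PASSWORD = "alice@example.com", "app-password-here"        # placeholders

# --- Sending (MUA -> MTA over SMTP, port 587 + STARTTLS) ---
msg = EmailMessage()
msg["From"], msg["To"], msg["Subject"] = USER, "bob@example.net", "Q1 report"
msg.set_content("Hi Bob, the report follows in a separate mail.")

with smtplib.SMTP(SMTP_HOST, 587) as smtp:
    smtp.starttls()              # upgrade the session to TLS before authenticating
    smtp.login(USER, PASSWORD)
    smtp.send_message(msg)       # wraps the MAIL FROM / RCPT TO / DATA dialogue

# --- Retrieval (MUA <- MDA over IMAPS, port 993) ---
with imaplib.IMAP4_SSL(IMAP_HOST, 993) as imap:
    imap.login(USER, PASSWORD)
    imap.select("INBOX")
    status, data = imap.search(None, "UNSEEN")      # IDs of unread messages
    for num in data[0].split():
        status, parts = imap.fetch(num, "(BODY[HEADER.FIELDS (FROM SUBJECT)])")
        print(parts[0][1].decode(errors="replace"))
```

A POP3 client would look similar but use poplib (USER/PASS, LIST, RETR) and typically
delete server copies after download.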
5.2 Agents & Standards: Mail Flow and Protocols (MIME, PGP)
      Definition: Agents are software components handling email tasks; standards define
       the rules and formats for interoperability.
      Agents:
           o MUA (Mail User Agent):
                    Role: User interface for composing, sending, and reading emails (e.g.,
                       Gmail web, Apple Mail).
                    Example: Alice uses Thunderbird to draft an email and send it via
                       SMTP.
           o MTA (Mail Transfer Agent):
                    Role: Routes emails between servers using SMTP (e.g., Exim,
                       Microsoft Exchange).
                    Example: smtp.domain.com forwards Alice’s email to
                       smtp.gmail.com.
           o MDA (Mail Delivery Agent):
                    Role: Places emails into user mailboxes, serving IMAP/POP3 (e.g.,
                       Dovecot, Cyrus).
                    Example: imap.gmail.com stores Bob’s email in his inbox.
           o Detailed Flow:
                    MUA submits to MTA via SMTP (port 587).
                    MTA resolves MX records, relays to recipient MTA.
                    Recipient MTA hands off to MDA, which stores the email.
                    MUA retrieves via IMAP/POP3.
      Standards & Protocols:
           o MIME (Multipurpose Internet Mail Extensions):
                    Purpose: Extends SMTP’s ASCII-only limitation to support rich
                       content.
                    Technical Details:
                            Headers: Content-Type (e.g., text/html), Content-Transfer-
                              Encoding (e.g., base64), Content-Disposition (e.g., attachment;
                              filename="doc.pdf").
                            Multipart: multipart/mixed combines text and attachments;
                              boundaries (e.g., --boundary123) separate parts.
                            Example: Content-Type: multipart/mixed; boundary="xyz"
                              with text and an image.
                    Encoding: Binary data (e.g., JPGs) encoded in Base64 (e.g.,
                       /9j/4AAQSkZJRg==).
                  Real-World Example: An email with a PDF uses Content-Type:
                   application/pdf; name="file.pdf".
                 Limitations: Increases size (Base64 adds ~33% overhead); no
                   security.
          o PGP (Pretty Good Privacy):
                 Purpose: Encrypts and signs emails for confidentiality and
                   authenticity.
                 Technical Details:
                        Hybrid encryption: Public key (e.g., 2048-bit RSA) encrypts a
                           session key; session key (e.g., AES-256) encrypts the message.
                         Signing: Sender’s private key creates a signature; the sender’s
                            public key (held by the recipient) verifies it.
                        Key management: Keys stored in keyrings (e.g.,
                           ~/.gnupg/pubring.gpg).
                        Example: -----BEGIN PGP MESSAGE----- encapsulates
                           encrypted content.
                 Implementation: Tools like GnuPG or plugins (e.g., Enigmail for
                   Thunderbird).
                 Real-World Example: A lawyer encrypts a sensitive contract using
                   Bob’s PGP public key, ensuring only Bob can read it.
                 Benefits: End-to-end security; non-repudiation via signatures.
                  Limitations: Complex setup; requires both parties to use PGP; the
                    S/MIME alternative is more common in corporate settings.
      Flow Example: Alice’s MUA (MIME-encoded attachment) → MTA (SMTP relay)
       → Bob’s MDA → Bob’s MUA (PGP decryption); a minimal MIME-construction sketch
       follows.
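A minimal sketch of the MIME structure described above, built with Python's
email.message.EmailMessage: a text body plus a binary attachment that the library
Base64-encodes automatically. The addresses and attachment bytes are placeholders.

```python
# Sketch: constructing a multipart/mixed message (text body + binary attachment).
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@domain.com"
msg["To"] = "bob@hotmail.com"
msg["Subject"] = "Contract draft"
msg.set_content("Hi Bob, the signed contract is attached.")   # text/plain part

# add_attachment sets Content-Type, Content-Disposition and
# Content-Transfer-Encoding: base64 for binary payloads.
pdf_bytes = b"%PDF-1.4 (placeholder bytes)"
msg.add_attachment(pdf_bytes, maintype="application", subtype="pdf",
                   filename="contract.pdf")

print(msg["Content-Type"])    # multipart/mixed; boundary="..."
print(msg.as_string()[:400])  # headers plus the start of the encoded parts
```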
6. Introduction to Email Forensics
6.3 Privacy Balance: Ethics and Legal Frameworks
      Definition: Email forensics must navigate the tension between investigative needs
       and individual privacy rights, guided by ethical principles and legal frameworks.
      Ethical Considerations:
       o   Consent: Accessing emails without permission risks violating privacy unless
           authorized (e.g., employee consent via company policy).
                Example: Monitoring personal Gmail on a work device without notice
                   is unethical.
       o Proportionality: Investigations should be narrowly scoped to relevant emails,
           avoiding unnecessary intrusion.
                Example: Searching an entire mailbox for one fraud email vs. targeting
                   specific dates.
       o Transparency: Subjects should be informed of monitoring when feasible
           (e.g., workplace email policies).
                Example: Employees notified that work emails may be audited for
                   security.
       o Data Minimization: Only collect/process data essential to the case, deleting
           irrelevant findings.
                Example: Redacting personal emails unrelated to a corporate
                   investigation.
       o Confidentiality: Protect sensitive data uncovered (e.g., health info, personal
           photos) from misuse.
                Example: Encrypting forensic reports to prevent leaks.
   Legal Frameworks:
       o GDPR (EU):
                Requires a lawful basis (e.g., legal obligation, consent) for processing
                   email data (Article 6).
                Mandates data protection (e.g., encryption) and subject rights (e.g.,
                   access, erasure) (Articles 5, 15-17).
                Example: A German firm needs a court order to forensically analyze
                   employee emails.
       o CCPA (California Consumer Privacy Act):
                Grants consumers rights to know what email data is collected and
                   request deletion.
                Example: A California resident demands a company delete forensic
                   copies of their emails.
       o ECPA (Electronic Communications Privacy Act, US):
                Protects emails in transit (Title I) and stored emails (Title II) from
                   unauthorized access.
                Allows employer access to business emails with notice or consent.
                Example: FBI needs a warrant to access a suspect’s Gmail under
                   ECPA.
       o Fourth Amendment (US):
                Guards against unreasonable searches; private email access requires
                   probable cause and a warrant.
                Example: Police can’t seize a personal email server without judicial
                   approval.
       o Local Laws: Vary globally (e.g., India’s IT Act allows email interception with
           government approval).
   Practical Scenarios:
       o Workplace: A firm investigates insider trading via email headers but limits
           scope to work accounts, notifying staff per policy.
                Ethical: Notice given; legal under ECPA with business justification.
       o   Criminal: Police trace a blackmail email with a warrant, ensuring chain of
           custody for court.
                Ethical: Judicial oversight; proportional to crime.
       o Civil: Divorce proceedings uncover emails via discovery, but personal data is
           redacted.
                Ethical: Relevant data only; privacy respected.
   Privacy Risks:
       o Overreach: Collecting unrelated personal emails (e.g., family correspondence
           in a fraud case).
       o Exposure: Mishandling forensic data (e.g., unencrypted reports leaked).
       o Bias: Misinterpreting intent without context (e.g., sarcastic email taken as a
           threat).
   Best Practices:
       o Obtain legal authorization (e.g., warrant, consent).
       o Use forensic tools with logging (e.g., EnCase) to document actions.
       o Anonymize non-relevant data in reports.
       o Train investigators on privacy laws (e.g., GDPR compliance).
   Real-World Example: In 2018, Facebook’s email forensics in a data breach probe
    complied with GDPR by limiting scope and securing findings.
   Balancing Act:
       o Investigative need (e.g., catching a hacker) vs. privacy rights (e.g., avoiding
           collateral intrusion).
        o Example: Tracing a phishing email stops at the sender’s IP, avoiding unrelated
            mailbox contents (see the header-parsing sketch below).
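As referenced above, here is a minimal sketch of scoped header analysis: only the Received
chain and its originating IPs are extracted from a suspect message, leaving the body untouched,
which keeps the collection proportional. The raw message below is fabricated for illustration.

```python
# Sketch: extract the Received chain (and bracketed IPs) from a raw email, ignoring the body.
import re
from email import message_from_string

RAW = """\
Received: from mail.victim-corp.example (mx1.victim-corp.example [203.0.113.20])
\tby mx.example.net with ESMTPS; Tue, 11 Mar 2025 09:14:02 +0000
Received: from unknown (client.attacker.example [198.51.100.77])
\tby mail.victim-corp.example with ESMTP; Tue, 11 Mar 2025 09:13:58 +0000
From: "Accounts" <billing@victim-corp.example>
Subject: Invoice overdue - action required

(body omitted from analysis)
"""

msg = message_from_string(RAW)
ip_pattern = re.compile(r"\[(\d{1,3}(?:\.\d{1,3}){3})\]")

# Received headers are prepended by each hop, so the last one is closest to the origin.
for hop, header in enumerate(msg.get_all("Received", []), start=1):
    ips = ip_pattern.findall(header)
    print(f"hop {hop}: {ips}")
```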