Memory safety

From Wikipedia, the free encyclopedia

Memory safety is the state of being protected from various software bugs and security vulnerabilities when dealing with memory access, such as buffer overflows and dangling pointers.[1] For example, Java is said to be memory-safe because its runtime error detection checks array bounds and pointer dereferences.[1] In contrast, C and C++ allow arbitrary pointer arithmetic with pointers implemented as direct memory addresses with no provision for bounds checking,[2] and thus are potentially memory-unsafe.[3]
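
For illustration, a minimal C sketch of the kind of unchecked access described above (the variable names are arbitrary, and the precise effect of the out-of-bounds write is undefined and depends on compiler and platform):

    #include <stdio.h>

    int main(void) {
        int balance = 100;
        int buf[4];
        for (int i = 0; i <= 4; i++)   /* off-by-one: valid indices are 0 to 3 */
            buf[i] = -1;
        /* Undefined behavior: the out-of-bounds write to buf[4] may silently
         * overwrite 'balance' or other adjacent memory; no error is raised. */
        printf("%d\n", balance);
        return 0;
    }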

History

Memory errors were first considered in the context of resource management and time-sharing systems, in an effort to avoid problems such as fork bombs.[4] Developments were mostly theoretical until the Morris worm, which exploited a buffer overflow in fingerd.[5] The field of computer security developed quickly thereafter, escalating with multitudes of new attacks such as the return-to-libc attack and defense techniques such as the non-executable stack[6] and address space layout randomization. Randomization prevents most buffer overflow attacks and requires the attacker to use heap spraying or other application-dependent methods to obtain addresses, although its adoption has been slow.[5] However, deployments of the technology are typically limited to randomizing libraries and the location of the stack.

Impact

In 2019, a Microsoft security engineer reported that 70% of all security vulnerabilities were caused by memory safety issues.[7] In 2020, a team at Google similarly reported that 70% of all "severe security bugs" in Chromium were caused by memory safety problems. Many other high-profile vulnerabilities and exploits in critical software have ultimately stemmed from a lack of memory safety, including Heartbleed[8] and a long-standing privilege escalation bug in sudo.[9] The pervasiveness and severity of vulnerabilities and exploits arising from memory safety issues have led several security researchers to describe identifying memory safety issues as "shooting fish in a barrel".[10]

Approaches

Some modern high-level programming languages are memory-safe by default[citation needed], though not completely, since they only check their own code and not the systems they interact with. Automatic memory management in the form of garbage collection is the most common technique for preventing many memory safety problems, since it eliminates errors such as use-after-free for all data allocated within the language runtime.[11] When combined with automatic bounds checking on all array accesses and no support for raw pointer arithmetic, garbage-collected languages provide strong memory safety guarantees (though the guarantees may be weaker for low-level operations explicitly marked unsafe, such as use of a foreign function interface). However, the performance overhead of garbage collection makes these languages unsuitable for certain performance-critical applications.[1]

For languages that use manual memory management, memory safety is not usually guaranteed by the runtime. Instead, memory safety properties must either be guaranteed by the compiler via static program analysis and automated theorem proving or carefully managed by the programmer at runtime.[11] For example, the Rust programming language implements a borrow checker to ensure memory safety,[12] while C and C++ provide no memory safety guarantees. The substantial amount of software written in C and C++ has motivated the development of external static analysis tools like Coverity, which offers static memory analysis for C.[13]
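
As an illustration of the kind of defect such tools target, the following C sketch (the function broken_counter is invented for this example) returns the address of a local variable, a pattern that many compilers and static analyzers can report without ever running the program:

    #include <stdio.h>

    /* Hypothetical example: returns the address of a local variable, which
     * ceases to exist as soon as the function returns. */
    static int *broken_counter(void) {
        int count = 0;
        return &count;        /* the returned pointer dangles immediately */
    }

    int main(void) {
        int *p = broken_counter();
        printf("%d\n", *p);   /* undefined behavior: reads a dead object */
        return 0;
    }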

DieHard,[14] its redesign DieHarder,[15] and the Allinea Distributed Debugging Tool are special heap allocators that allocate objects in their own random virtual memory page, allowing invalid reads and writes to be stopped and debugged at the exact instruction that causes them. Protection relies upon hardware memory protection and thus overhead is typically not substantial, although it can grow significantly if the program makes heavy use of allocation.[16] Randomization provides only probabilistic protection against memory errors, but can often be easily implemented in existing software by relinking the binary.
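
A rough sketch of the underlying idea, though not of DieHard's or DieHarder's actual design, is an allocator that places each object against an inaccessible guard page so that the first out-of-bounds access triggers a hardware fault (POSIX-specific; the function guarded_alloc is illustrative and ignores alignment for brevity):

    /* Illustrative only: a debugging allocator that relies on hardware memory
     * protection. Each object sits at the end of its own mapping, followed by
     * an inaccessible guard page, so an out-of-bounds write faults at the
     * exact offending instruction. */
    #define _DEFAULT_SOURCE           /* for MAP_ANONYMOUS on glibc */
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static void *guarded_alloc(size_t size) {
        size_t page = (size_t)sysconf(_SC_PAGESIZE);
        size_t data_pages = (size + page - 1) / page;
        size_t total = (data_pages + 1) * page;      /* +1 guard page */

        unsigned char *base = mmap(NULL, total, PROT_READ | PROT_WRITE,
                                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (base == MAP_FAILED)
            return NULL;

        /* Revoke all access to the last page; touching it raises SIGSEGV. */
        if (mprotect(base + data_pages * page, page, PROT_NONE) != 0) {
            munmap(base, total);
            return NULL;
        }
        /* Place the object flush against the guard page so that even an
         * off-by-one write is trapped immediately. */
        return base + data_pages * page - size;
    }

    int main(void) {
        char *buf = guarded_alloc(64);
        if (buf == NULL)
            return 1;
        memset(buf, 0, 64);           /* in bounds: fine */
        buf[64] = 'X';                /* one byte past the end: faults here */
        return 0;
    }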

The memcheck tool of Valgrind uses an instruction set simulator and runs the compiled program in a memory-checking virtual machine, providing guaranteed detection of a subset of runtime memory errors. However, it typically slows the program down by a factor of 40,[17] and furthermore must be explicitly informed of custom memory allocators.[18][19]
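
For example, a small C program containing an out-of-bounds heap read and a memory leak, both of which memcheck reports when the program is run under Valgrind (the file name is illustrative):

    /* Build and run, e.g.:
     *   gcc -g example.c -o example
     *   valgrind --tool=memcheck --leak-check=full ./example */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        int *a = malloc(10 * sizeof *a);
        if (a == NULL)
            return 1;
        for (int i = 0; i < 10; i++)
            a[i] = i;
        printf("%d\n", a[10]);   /* "Invalid read of size 4": one element past the end */
        return 0;                /* 'a' is never freed: reported by --leak-check=full */
    }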

Where source code is available, libraries exist that collect and track legitimate values for pointers ("metadata") and check each pointer access against that metadata for validity, such as the Boehm garbage collector.[20] In general, memory safety can be assured using tracing garbage collection and the insertion of runtime checks on every memory access; this approach has overhead, but less than that of Valgrind. All garbage-collected languages take this approach.[1] For C and C++, many tools perform a compile-time transformation of the code that inserts memory safety checks executed at runtime, such as CheckPointer[21] and AddressSanitizer, which imposes an average slowdown factor of 2.[22]
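
As a sketch of how such a tool is used, the following C program overflows a heap buffer; when compiled with the -fsanitize=address option of Clang or GCC, AddressSanitizer aborts it with a heap-buffer-overflow report at the offending line (the file name is illustrative):

    /* Build, e.g.: clang -g -fsanitize=address overflow.c -o overflow */
    #include <stdlib.h>

    int main(void) {
        char *p = malloc(16);
        if (p == NULL)
            return 1;
        p[16] = 'x';   /* one byte past the 16-byte block: reported as "heap-buffer-overflow" */
        free(p);
        return 0;
    }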

BoundWarden is a spatial memory safety enforcement approach that combines compile-time transformation with runtime concurrent monitoring techniques.[23]

Fuzz testing is well-suited for finding memory safety bugs and is often used in combination with dynamic checkers such as AddressSanitizer.
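
For example, a minimal fuzz target for libFuzzer, one widely used fuzzing engine (the function parse_record and its bug are invented for illustration); built together with AddressSanitizer, the overflow is reported as soon as the fuzzer generates an input longer than the buffer:

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Hypothetical function under test: copies the input into a fixed-size
     * buffer and overflows it for inputs longer than 64 bytes. */
    static void parse_record(const uint8_t *data, size_t size) {
        char buf[64];
        memcpy(buf, data, size);      /* bug: 'size' is not bounded */
    }

    /* libFuzzer entry point: the fuzzer calls this repeatedly with generated
     * inputs. Build with: clang -g -fsanitize=address,fuzzer harness.c */
    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
        parse_record(data, size);
        return 0;   /* values other than 0 are reserved by libFuzzer */
    }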

Classification of memory safety errors

Many different types of memory errors can occur:[24][25]

  • Spatial – out-of-bounds accesses, such as buffer overflows (out-of-bounds writes that can corrupt adjacent memory) and buffer over-reads (out-of-bounds reads that can expose data)
  • Temporal – accesses made outside an object's lifetime or before it has been initialized (two of these errors are sketched in the example after this list):
    • Use after free – dereferencing a dangling pointer storing the address of an object that has been deleted.
    • Double free – repeated calls to free may prematurely free a new object at the same address. If the exact address has not been reused, other corruption may occur, especially in allocators that use free lists.
    • Uninitialized variables – a variable that has not been assigned a value is used. It may contain sensitive information or bits that are not valid for the type.
      • Wild pointers arise when a pointer is used prior to initialization to some known state. They show the same erratic behaviour as dangling pointers, though they are less likely to stay undetected.
    • Invalid free – passing an invalid address to free can corrupt the heap.
    • Mismatched free – when multiple allocators are in use, attempting to free memory with a deallocation function of a different allocator.[26]
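
A minimal C sketch of the first two temporal errors above (illustrative only; both operations are undefined behavior, and their visible effects vary by allocator and platform):

    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        /* Use after free: 'p' becomes a dangling pointer once freed. */
        char *p = malloc(32);
        if (p == NULL)
            return 1;
        free(p);
        strcpy(p, "hello");   /* undefined behavior: the block may already be reused */

        /* Double free: freeing the same address twice can corrupt allocator
         * bookkeeping (for example, free lists), as described above. */
        char *q = malloc(32);
        if (q == NULL)
            return 1;
        free(q);
        free(q);              /* undefined behavior */

        return 0;
    }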

Contributing bugs

Depending on the language and environment, other types of bugs can contribute to memory unsafety:

  • Stack exhaustion – occurs when a program runs out of stack space, typically because of too deep recursion. A guard page typically halts the program, preventing memory corruption, but functions with large stack frames may bypass the page, and kernel code may not have the benefit of guard pages.
  • Heap exhaustion – the program tries to allocate more memory than the amount available. In some languages, including C, this condition must be checked for manually after each allocation (a minimal check is sketched after this list).
  • Memory leak – Failing to return memory to the allocator may set the stage for heap exhaustion (above). Failing to run the destructor of an RAII object may lead to unexpected results,[27][28] but is not itself considered a memory safety error.
  • Null pointer dereference – A null pointer dereference causes an exception or program termination in most environments, but can cause corruption in operating system kernels, in systems without memory protection, or when the dereference involves a large or negative offset. In C and C++, because dereferencing a null pointer is undefined behavior, compiler optimizations may cause other checks to be removed, leading to vulnerabilities elsewhere in the code.[29][30]
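
A minimal C sketch of the manual check mentioned above (the requested size is chosen only to make the allocation likely to fail):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        /* A deliberately huge request, intended to provoke failure on most systems. */
        size_t n = (size_t)-1 / 2;
        char *big = malloc(n);
        if (big == NULL) {                 /* heap exhaustion detected */
            fprintf(stderr, "allocation of %zu bytes failed\n", n);
            return 1;
        }
        /* Without the check above, the next line would dereference a null
         * pointer whenever the allocation fails. */
        big[0] = 0;
        free(big);
        return 0;
    }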

Some lists may also include race conditions (concurrent reads/writes to shared memory) as being part of memory safety (e.g., for access control). The Rust programming language prevents many kinds of memory-based race conditions by default, because it ensures that, at any given time, a piece of memory has either a single writer or any number of readers, but not both. Many other programming languages, such as Java, do not automatically prevent memory-based race conditions, yet are still generally considered "memory safe" languages. Countering race conditions is therefore generally not regarded as necessary for a language to be considered memory safe.

References

  1. ^ a b c d Dhurjati, Dinakar; Kowshik, Sumant; Adve, Vikram; Lattner, Chris (1 January 2003). "Memory safety without runtime checks or garbage collection" (PDF). Proceedings of the 2003 ACM SIGPLAN conference on Language, compiler, and tool for embedded systems. ACM. pp. 69–80. doi:10.1145/780732.780743. ISBN 1581136471. S2CID 1459540. Retrieved 13 March 2017.
  2. ^ Koenig, Andrew. "How C Makes It Hard To Check Array Bounds". Dr. Dobb's. Retrieved 13 March 2017.
  3. ^ Akritidis, Periklis (June 2011). "Practical memory safety for C" (PDF). Technical Report - University of Cambridge. Computer Laboratory. University of Cambridge, Computer Laboratory. ISSN 1476-2986. UCAM-CL-TR-798. Retrieved 13 March 2017.
  4. ^ Anderson, James P. "Computer Security Planning Study" (PDF). 2. Electronic Systems Center. ESD-TR-73-51.
  5. ^ a b van der Veen, Victor; dutt-Sharma, Nitish; Cavallaro, Lorenzo; Bos, Herbert (2012). "Memory Errors: The Past, the Present, and the Future" (PDF). Research in Attacks, Intrusions, and Defenses. Lecture Notes in Computer Science. Vol. 7462. pp. 86–106. doi:10.1007/978-3-642-33338-5_5. ISBN 978-3-642-33337-8. Retrieved 13 March 2017.
  6. ^ Wojtczuk, Rafal. "Defeating Solar Designer's Non-executable Stack Patch". insecure.org. Retrieved 13 March 2017.
  7. ^ "Microsoft: 70 percent of all security bugs are memory safety issues". ZDNET. Retrieved 21 September 2022.
  8. ^ "CVE-2014-0160". Common Vulnerabilities and Exposures. Mitre. Archived from the original on 24 January 2018. Retrieved 8 February 2018.
  9. ^ Goodin, Dan (4 February 2020). "Serious flaw that lurked in sudo for 9 years hands over root privileges". Ars Technica.
  10. ^ "Fish in a Barrel". fishinabarrel.github.io. Retrieved 21 September 2022.
  11. ^ a b Crichton, Will. "CS 242: Memory safety". stanford-cs242.github.io. Retrieved 22 September 2022.
  12. ^ "References". The Rustonomicon. Rust.org. Retrieved 13 March 2017.
  13. ^ Bessey, Al; Engler, Dawson; Block, Ken; Chelf, Ben; Chou, Andy; Fulton, Bryan; Hallem, Seth; Henri-Gros, Charles; Kamsky, Asya; McPeak, Scott (1 February 2010). "A few billion lines of code later". Communications of the ACM. 53 (2): 66–75. doi:10.1145/1646353.1646374. S2CID 2611544.
  14. ^ Berger, Emery D.; Zorn, Benjamin G. (1 January 2006). "DieHard: Probabilistic memory safety for unsafe languages" (PDF). Proceedings of the 27th ACM SIGPLAN Conference on Programming Language Design and Implementation. ACM. pp. 158–168. doi:10.1145/1133981.1134000. ISBN 1595933204. S2CID 8984358. Retrieved 14 March 2017.
  15. ^ Novark, Gene; Berger, Emery D. (1 January 2010). "DieHarder: Securing the heap" (PDF). Proceedings of the 17th ACM conference on Computer and communications security. ACM. pp. 573–584. doi:10.1145/1866307.1866371. ISBN 9781450302456. S2CID 7880497. Retrieved 14 March 2017.
  16. ^ "Memory Debugging in Allinea DDT". Archived from the original on 2015-02-03.
  17. ^ Gyllenhaal, John. "Using Valgrind's Memcheck Tool to Find Memory Errors and Leaks". computing.llnl.gov. Archived from the original on 7 November 2018. Retrieved 13 March 2017.
  18. ^ "Memcheck: a memory error detector". Valgrind User Manual. valgrind.org. Retrieved 13 March 2017.
  19. ^ Kreinin, Yossi. "Why custom allocators/pools are hard". Proper Fixation. Retrieved 13 March 2017.
  20. ^ "Using the Garbage Collector as Leak Detector". www.hboehm.info. Retrieved 14 March 2017.
  21. ^ "Semantic Designs: CheckPointer compared to other safety checking tools". www.semanticdesigns.com. Semantic Designs, Inc.
  22. ^ "AddressSanitizerPerformanceNumbers". GitHub.
  23. ^ Dhumbumroong, Smith (2020). "BoundWarden: Thread-enforced spatial memory safety through compile-time transformations". Science of Computer Programming. 198: 102519. doi:10.1016/j.scico.2020.102519. S2CID 224925197.
  24. ^ Gv, Naveen. "How to Avoid, Find (and Fix) Memory Errors in your C/C++ Code". Cprogramming.com. Retrieved 13 March 2017.
  25. ^ "CWE-633: Weaknesses that Affect Memory". Community Weakness Enumeration. MITRE. Retrieved 13 March 2017.
  26. ^ "CWE-762: Mismatched Memory Management Routines". Community Weakness Enumeration. MITRE. Retrieved 13 March 2017.
  27. ^ "Destructors - the Rust Reference".
  28. ^ "Leaking - the Rustonomicon".
  29. ^ "Security flaws caused by compiler optimizations". www.redhat.com. Retrieved 2024-06-26.
  30. ^ "NVD - CVE-2009-1897". nvd.nist.gov. Retrieved 2024-06-26.