Alina Oprea
Person information
- affiliation: Northeastern University, Boston, MA, USA
2020 – today
- 2024
- [j12] John Abascal, Stanley Wu, Alina Oprea, Jonathan R. Ullman: TMI! Finetuned Models Leak Private Information from their Pretraining Data. Proc. Priv. Enhancing Technol. 2024(3): 202-223 (2024)
- [c66] Lisa Oakley, Steven Holtzen, Alina Oprea: Synthesizing Tight Privacy and Accuracy Bounds via Weighted Model Counting. CSF 2024: 449-463
- [c65] Nikhil Kandpal, Krishna Pillutla, Alina Oprea, Peter Kairouz, Christopher A. Choquette-Choo, Zheng Xu: User Inference Attacks on Large Language Models. EMNLP 2024: 18238-18265
- [c64] Galen Andrew, Peter Kairouz, Sewoong Oh, Alina Oprea, Hugh Brendan McMahan, Vinith Menon Suriyakumar: One-shot Empirical Privacy Estimation for Federated Learning. ICLR 2024
- [c63] Harsh Chaudhari, Giorgio Severi, Alina Oprea, Jonathan R. Ullman: Chameleon: Increasing Label-Only Membership Leakage with Adaptive Poisoning. ICLR 2024
- [c62] Andrew Yuan, Alina Oprea, Cheng Tan: Dropout Attacks. SP 2024: 1255-1269
- [i55] Lisa Oakley, Steven Holtzen, Alina Oprea: Synthesizing Tight Privacy and Accuracy Bounds via Weighted Model Counting. CoRR abs/2402.16982 (2024)
- [i54] Harsh Chaudhari, Giorgio Severi, John Abascal, Matthew Jagielski, Christopher A. Choquette-Choo, Milad Nasr, Cristina Nita-Rotaru, Alina Oprea: Phantom: General Trigger Attacks on Retrieval Augmented Language Generation. CoRR abs/2405.20485 (2024)
- [i53] Ethan Rathbun, Christopher Amato, Alina Oprea: SleeperNets: Universal Backdoor Poisoning Attacks Against Reinforcement Learning Agents. CoRR abs/2405.20539 (2024)
- [i52] Giorgio Severi, Simona Boboila, John T. Holodnak, Kendra Kratkiewicz, Rauf Izmailov, Alina Oprea: Model-agnostic clean-label backdoor mitigation in cybersecurity environments. CoRR abs/2407.08159 (2024)
- [i51] Evan Rose, Hidde Lycklama, Harsh Chaudhari, Anwar Hithnawi, Alina Oprea: UTrace: Poisoning Forensics for Private Collaborative Learning. CoRR abs/2409.15126 (2024)
- [i50] Ethan Rathbun, Christopher Amato, Alina Oprea: Adversarial Inception for Bounded Backdoor Poisoning in Deep Reinforcement Learning. CoRR abs/2410.13995 (2024)
- [i49] Aditya Vikram Singh, Ethan Rathbun, Emma Graham, Lisa Oakley, Simona Boboila, Alina Oprea, Peter Chin: Hierarchical Multi-agent Reinforcement Learning for Cyber Network Defense. CoRR abs/2410.17351 (2024)
- 2023
- [j11] Alesia Chernikova, Nicolò Gozzi, Nicola Perra, Simona Boboila, Tina Eliassi-Rad, Alina Oprea: Modeling self-propagating malware with epidemiological models. Appl. Netw. Sci. 8(1): 52 (2023)
- [j10] Antonio Emanuele Cinà, Kathrin Grosse, Ambra Demontis, Sebastiano Vascon, Werner Zellinger, Bernhard Alois Moser, Alina Oprea, Battista Biggio, Marcello Pelillo, Fabio Roli: Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning. ACM Comput. Surv. 55(13s): 294:1-294:39 (2023)
- [j9] Matthew Jagielski, Stanley Wu, Alina Oprea, Jonathan R. Ullman, Roxana Geambasu: How to Combine Membership-Inference Attacks on Multiple Updated Machine Learning Models. Proc. Priv. Enhancing Technol. 2023(3): 211-232 (2023)
- [j8] Han Wang, David Eklund, Alina Oprea, Shahid Raza: FL4IoT: IoT Device Fingerprinting and Identification Using Federated Learning. ACM Trans. Internet Things 4(3): 17:1-17:24 (2023)
- [c61] Giorgio Severi, Simona Boboila, Alina Oprea, John T. Holodnak, Kendra Kratkiewicz, Jason Matterer: Poisoning Network Flow Classifiers. ACSAC 2023: 337-351
- [c60] Achyut Reddy, Sridhar Venkatesan, Rauf Izmailov, Alina Oprea: An Improved Nested Training Approach to Mitigate Clean-label Attacks against Malware Classifiers. MILCOM 2023: 703-709
- [c59] Krishna Pillutla, Galen Andrew, Peter Kairouz, H. Brendan McMahan, Alina Oprea, Sewoong Oh: Unleashing the Power of Randomization in Auditing Differentially Private ML. NeurIPS 2023
- [c58] Joshua Bundt, Michael Davinroy, Ioannis Agadakos, Alina Oprea, William K. Robertson: Black-box Attacks Against Neural Binary Function Detection. RAID 2023: 1-16
- [c57] Harsh Chaudhari, Matthew Jagielski, Alina Oprea: SafeNet: The Unreasonable Effectiveness of Ensembles in Private Collaborative Learning. SaTML 2023: 176-196
- [c56] Harsh Chaudhari, John Abascal, Alina Oprea, Matthew Jagielski, Florian Tramèr, Jonathan R. Ullman: SNAP: Efficient Extraction of Private Properties with Poisoning. SP 2023: 400-417
- [i48] Gökberk Yar, Cristina Nita-Rotaru, Alina Oprea: Backdoor Attacks in Peer-to-Peer Federated Learning. CoRR abs/2301.09732 (2023)
- [i47] Galen Andrew, Peter Kairouz, Sewoong Oh, Alina Oprea, H. Brendan McMahan, Vinith M. Suriyakumar: One-shot Empirical Privacy Estimation for Federated Learning. CoRR abs/2302.03098 (2023)
- [i46] Krishna Pillutla, Galen Andrew, Peter Kairouz, H. Brendan McMahan, Alina Oprea, Sewoong Oh: Unleashing the Power of Randomization in Auditing Differentially Private ML. CoRR abs/2305.18447 (2023)
- [i45] John Abascal, Stanley Wu, Alina Oprea, Jonathan R. Ullman: TMI! Finetuned Models Leak Private Information from their Pretraining Data. CoRR abs/2306.01181 (2023)
- [i44] Giorgio Severi, Simona Boboila, Alina Oprea, John T. Holodnak, Kendra Kratkiewicz, Jason Matterer: Poisoning Network Flow Classifiers. CoRR abs/2306.01655 (2023)
- [i43] Andrew Yuan, Alina Oprea, Cheng Tan: Dropout Attacks. CoRR abs/2309.01614 (2023)
- [i42] Harsh Chaudhari, Giorgio Severi, Alina Oprea, Jonathan R. Ullman: Chameleon: Increasing Label-Only Membership Leakage with Adaptive Poisoning. CoRR abs/2310.03838 (2023)
- [i41] Nikhil Kandpal, Krishna Pillutla, Alina Oprea, Peter Kairouz, Christopher A. Choquette-Choo, Zheng Xu: User Inference Attacks on Large Language Models. CoRR abs/2310.09266 (2023)
- 2022
- [j7] Alina Oprea, Anoop Singhal, Apostol Vassilev: Poisoning Attacks Against Machine Learning: Can Machine Learning Be Trustworthy? Computer 55(11): 94-99 (2022)
- [j6] Nathalie Baracaldo, Alina Oprea: Machine Learning Security and Privacy. IEEE Secur. Priv. 20(5): 11-13 (2022)
- [j5] Alesia Chernikova, Alina Oprea: FENCE: Feasible Evasion Attacks on Neural Networks in Constrained Environments. ACM Trans. Priv. Secur. 25(4): 34:1-34:34 (2022)
- [c55] Afsah Anwar, Yi Hui Chen, Roy Hodgman, Tom Sellers, Engin Kirda, Alina Oprea: A Recent Year On the Internet: Measuring and Understanding the Threats to Everyday Internet Devices. ACSAC 2022: 251-266
- [c54] Giorgio Severi, Matthew Jagielski, Gökberk Yar, Yuxuan Wang, Alina Oprea, Cristina Nita-Rotaru: Network-Level Adversaries in Federated Learning. CNS 2022: 19-27
- [c53] Lisa Oakley, Alina Oprea, Stavros Tripakis: Adversarial Robustness Verification and Attack Synthesis in Stochastic Systems. CSF 2022: 380-395
- [c52] Alesia Chernikova, Nicolò Gozzi, Simona Boboila, Priyanka Angadi, John Loughner, Matthew Wilden, Nicola Perra, Tina Eliassi-Rad, Alina Oprea: Cyber Network Resilience Against Self-Propagating Malware Attacks. ESORICS (1) 2022: 531-550
- [c51] Giorgio Severi, Will Pearce, Alina Oprea: Bad Citrus: Reducing Adversarial Costs with Model Distances. ICMLA 2022: 307-312
- [c50] Samson Ho, Achyut Reddy, Sridhar Venkatesan, Rauf Izmailov, Ritu Chadha, Alina Oprea: Data Sanitization Approach to Mitigate Clean-Label Attacks Against Malware Detection Systems. MILCOM 2022: 993-998
- [i40] Antonio Emanuele Cinà, Kathrin Grosse, Ambra Demontis, Sebastiano Vascon, Werner Zellinger, Bernhard Alois Moser, Alina Oprea, Battista Biggio, Marcello Pelillo, Fabio Roli: Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning. CoRR abs/2205.01992 (2022)
- [i39] Matthew Jagielski, Stanley Wu, Alina Oprea, Jonathan R. Ullman, Roxana Geambasu: How to Combine Membership-Inference Attacks on Multiple Updated Models. CoRR abs/2205.06369 (2022)
- [i38] Harsh Chaudhari, Matthew Jagielski, Alina Oprea: SafeNet: Mitigating Data Poisoning Attacks on Private Machine Learning. CoRR abs/2205.09986 (2022)
- [i37] Talha Ongun, Simona Boboila, Alina Oprea, Tina Eliassi-Rad, Jason Hiser, Jack W. Davidson: CELEST: Federated Learning for Globally Coordinated Threat Detection. CoRR abs/2205.11459 (2022)
- [i36] Alesia Chernikova, Nicolò Gozzi, Simona Boboila, Priyanka Angadi, John Loughner, Matthew Wilden, Nicola Perra, Tina Eliassi-Rad, Alina Oprea: Cyber Network Resilience against Self-Propagating Malware Attacks. CoRR abs/2206.13594 (2022)
- [i35] Alesia Chernikova, Nicolò Gozzi, Simona Boboila, Nicola Perra, Tina Eliassi-Rad, Alina Oprea: Modeling Self-Propagating Malware with Epidemiological Models. CoRR abs/2208.03276 (2022)
- [i34] Joshua Bundt, Michael Davinroy, Ioannis Agadakos, Alina Oprea, William Robertson: Attacking Neural Binary Function Detection. CoRR abs/2208.11667 (2022)
- [i33] Harsh Chaudhari, John Abascal, Alina Oprea, Matthew Jagielski, Florian Tramèr, Jonathan R. Ullman: SNAP: Efficient Extraction of Private Properties with Poisoning. CoRR abs/2208.12348 (2022)
- [i32] Giorgio Severi, Matthew Jagielski, Gökberk Yar, Yuxuan Wang, Alina Oprea, Cristina Nita-Rotaru: Network-Level Adversaries in Federated Learning. CoRR abs/2208.12911 (2022)
- [i31] Giorgio Severi, Will Pearce, Alina Oprea: Bad Citrus: Reducing Adversarial Costs with Model Distances. CoRR abs/2210.03239 (2022)
- [i30] Harsh Chaudhari, Matthew Jagielski, Alina Oprea: SafeNet: Mitigating Data Poisoning Attacks on Private Machine Learning. IACR Cryptol. ePrint Arch. 2022: 663 (2022)
- 2021
- [j4] Jialin Wen, Benjamin Zi Hao Zhao, Minhui Xue, Alina Oprea, Haifeng Qian: With Great Dispersion Comes Greater Resilience: Efficient Poisoning Attacks and Defenses for Linear Regression Models. IEEE Trans. Inf. Forensics Secur. 16: 3709-3723 (2021)
- [c49] Matthew Jagielski, Giorgio Severi, Niklas Pousette Harger, Alina Oprea: Subpopulation Data Poisoning Attacks. CCS 2021: 3104-3122
- [c48] Xianrui Meng, Dimitrios Papadopoulos, Alina Oprea, Nikos Triandopoulos: Private Hierarchical Clustering and Efficient Approximation. CCSW 2021: 3-20
- [c47] Talha Ongun, Oliver Spohngellert, Benjamin A. Miller, Simona Boboila, Alina Oprea, Tina Eliassi-Rad, Jason Hiser, Alastair Nottingham, Jack W. Davidson, Malathi Veeraraghavan: PORTFILER: Port-Level Network Profiling for Self-Propagating Malware Detection. CNS 2021: 182-190
- [c46] Sridhar Venkatesan, Harshvardhan Sikka, Rauf Izmailov, Ritu Chadha, Alina Oprea, Michael J. De Lucia: Poisoning Attacks and Data Sanitization Mitigations for Machine Learning Models in Network Intrusion Detection Systems. MILCOM 2021: 874-879
- [c45] Talha Ongun, Jack W. Stokes, Jonathan Bar Or, Ke Tian, Farid Tajaddodianfar, Joshua Neil, Christian Seifert, Alina Oprea, John C. Platt: Living-Off-The-Land Command Detection Using Active Learning. RAID 2021: 442-455
- [c44] Alina Oprea: Machine Learning Integrity and Privacy in Adversarial Environments. SACMAT 2021: 1-2
- [c43] Giorgio Severi, Jim Meyer, Scott E. Coull, Alina Oprea: Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers. USENIX Security Symposium 2021: 1487-1504
- [c42] Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom B. Brown, Dawn Song, Úlfar Erlingsson, Alina Oprea, Colin Raffel: Extracting Training Data from Large Language Models. USENIX Security Symposium 2021: 2633-2650
- [i29] Molly Buchanan, Jeffrey W. Collyer, Jack W. Davidson, Saikat Dey, Mark Gardner, Jason D. Hiser, Jeffry Lang, Alastair Nottingham, Alina Oprea: On Generating and Labeling Network Traffic with Realistic, Self-Propagating Malware. CoRR abs/2104.10034 (2021)
- [i28] Talha Ongun, Simona Boboila, Alina Oprea, Tina Eliassi-Rad, Alastair Nottingham, Jason Hiser, Jack W. Davidson: Collaborative Information Sharing for ML-Based Threat Detection. CoRR abs/2104.11636 (2021)
- [i27] Lisa Oakley, Alina Oprea, Stavros Tripakis: Adversarial Robustness Verification and Attack Synthesis in Stochastic Systems. CoRR abs/2110.02125 (2021)
- [i26] Talha Ongun, Jack W. Stokes, Jonathan Bar Or, Ke Tian, Farid Tajaddodianfar, Joshua Neil, Christian Seifert, Alina Oprea, John C. Platt: Living-Off-The-Land Command Detection Using Active Learning. CoRR abs/2111.15039 (2021)
- [i25] Talha Ongun, Oliver Spohngellert, Benjamin A. Miller, Simona Boboila, Alina Oprea, Tina Eliassi-Rad, Jason Hiser, Alastair Nottingham, Jack W. Davidson, Malathi Veeraraghavan: PORTFILER: Port-Level Network Profiling for Self-Propagating Malware Detection. CoRR abs/2112.13798 (2021)
- 2020
- [c41] Matthew Jagielski, Jonathan R. Ullman, Alina Oprea: Auditing Differentially Private Machine Learning: How Private is Private SGD? NeurIPS 2020
- [c40] Ahmet Salih Buyukkayhan, Can Gemicioglu, Tobias Lauinger, Alina Oprea, William Robertson, Engin Kirda: What's in an Exploit? An Empirical Analysis of Reflected Server XSS Exploitation Techniques. RAID 2020: 107-120
- [i24] Giorgio Severi, Jim Meyer, Scott E. Coull, Alina Oprea: Exploring Backdoor Poisoning Attacks Against Malware Classifiers. CoRR abs/2003.01031 (2020)
- [i23] Matthew Jagielski, Jonathan R. Ullman, Alina Oprea: Auditing Differentially Private Machine Learning: How Private is Private SGD? CoRR abs/2006.07709 (2020)
- [i22] Matthew Jagielski, Giorgio Severi, Niklas Pousette Harger, Alina Oprea: Subpopulation Data Poisoning Attacks. CoRR abs/2006.14026 (2020)
- [i21] Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom B. Brown, Dawn Song, Úlfar Erlingsson, Alina Oprea, Colin Raffel: Extracting Training Data from Large Language Models. CoRR abs/2012.07805 (2020)
2010 – 2019
- 2019
- [c39] Indranil Jana, Alina Oprea: AppMine: Behavioral Analytics for Web Application Vulnerability Detection. CCSW 2019: 69-80
- [c38] Lisa Oakley, Alina Oprea: QFlip: An Adaptive Reinforcement Learning Strategy for the FlipIt Security Game. GameSec 2019: 364-384
- [c37] Matthew Jagielski, Michael J. Kearns, Jieming Mao, Alina Oprea, Aaron Roth, Saeed Sharifi-Malvajerdi, Jonathan R. Ullman: Differentially Private Fair Learning. ICML 2019: 3000-3008
- [c36] Alesia Chernikova, Alina Oprea, Cristina Nita-Rotaru, BaekGyu Kim: Are Self-Driving Cars Secure? Evasion Attacks Against Deep Neural Networks for Steering Angle Prediction. IEEE Symposium on Security and Privacy Workshops 2019: 132-137
- [c35] Ambra Demontis, Marco Melis, Maura Pintor, Matthew Jagielski, Battista Biggio, Alina Oprea, Cristina Nita-Rotaru, Fabio Roli: Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks. USENIX Security Symposium 2019: 321-338
- [i20] Xianrui Meng, Dimitrios Papadopoulos, Alina Oprea, Nikos Triandopoulos: Privacy-Preserving Hierarchical Clustering: Formal Security and Efficient Approximation. CoRR abs/1904.04475 (2019)
- [i19] Alesia Chernikova, Alina Oprea, Cristina Nita-Rotaru, BaekGyu Kim: Are Self-Driving Cars Secure? Evasion Attacks against Deep Neural Networks for Steering Angle Prediction. CoRR abs/1904.07370 (2019)
- [i18] Lisa Oakley, Alina Oprea: Playing Adaptively Against Stealthy Opponents: A Reinforcement Learning Strategy for the FlipIt Security Game. CoRR abs/1906.11938 (2019)
- [i17] Talha Ongun, Timothy Sakharaov, Simona Boboila, Alina Oprea, Tina Eliassi-Rad: On Designing Machine Learning Models for Malicious Network Traffic Classification. CoRR abs/1907.04846 (2019)
- [i16] Talha Ongun, Oliver Spohngellert, Alina Oprea, Cristina Nita-Rotaru, Mihai Christodorescu, Negin Salajegheh: The House That Knows You: User Authentication Based on IoT Data. CoRR abs/1908.00592 (2019)
- [i15] Indranil Jana, Alina Oprea: AppMine: Behavioral Analytics for Web Application Vulnerability Detection. CoRR abs/1908.01928 (2019)
- [i14] Alesia Chernikova, Alina Oprea: Adversarial Examples for Deep Learning Cyber Security Analytics. CoRR abs/1909.10480 (2019)
- 2018
- [c34] Alina Oprea, Zhou Li, Robin Norris, Kevin D. Bowers: MADE: Security Analytics for Enterprise Threat Detection. ACSAC 2018: 124-136
- [c33] Jiayi Duan, Ziheng Zeng, Alina Oprea, Shobha Vasudevan: Automated Generation and Selection of Interpretable Features for Enterprise Security. IEEE BigData 2018: 1258-1265
- [c32] Talha Ongun, Alina Oprea, Cristina Nita-Rotaru, Mihai Christodorescu, Negin Salajegheh: The House That Knows You: User Authentication Based on IoT Data. CCS 2018: 2255-2257
- [c31] Matthew Jagielski, Alina Oprea, Battista Biggio, Chang Liu, Cristina Nita-Rotaru, Bo Li: Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning. IEEE Symposium on Security and Privacy 2018: 19-35
- [i13] Matthew Jagielski, Alina Oprea, Battista Biggio, Chang Liu, Cristina Nita-Rotaru, Bo Li: Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning. CoRR abs/1804.00308 (2018)
- [i12] Ambra Demontis, Marco Melis, Maura Pintor, Matthew Jagielski, Battista Biggio, Alina Oprea, Cristina Nita-Rotaru, Fabio Roli: On the Intriguing Connections of Regularization, Input Gradients and Transferability of Evasion and Poisoning Attacks. CoRR abs/1809.02861 (2018)
- [i11] Matthew Jagielski, Michael J. Kearns, Jieming Mao, Alina Oprea, Aaron Roth, Saeed Sharifi-Malvajerdi, Jonathan R. Ullman: Differentially Private Fair Learning. CoRR abs/1812.02696 (2018)
- 2017
- [c30] Trishita Tiwari, Ata Turk, Alina Oprea, Katzalin Olcoz, Ayse K. Coskun: User-profile-based analytics for detecting cloud security breaches. IEEE BigData 2017: 4529-4535
- [c29] Chang Liu, Bo Li, Yevgeniy Vorobeychik, Alina Oprea: Robust Linear Regression Against Training Data Poisoning. AISec@CCS 2017: 91-102
- [c28] Ahmet Salih Buyukkayhan, Alina Oprea, Zhou Li, William K. Robertson: Lens on the Endpoint: Hunting for Malicious Software Through Endpoint Data Analysis. RAID 2017: 73-97
- 2016
- [c27] Sumayah A. Alrwais, Kan Yuan, Eihal Alowaisheq, Xiaojing Liao, Alina Oprea, XiaoFeng Wang, Zhou Li: Catching predators at watering holes: finding and understanding strategically compromised websites. ACSAC 2016: 153-166
- [c26] Zhou Li, Alina Oprea: Operational Security Log Analytics for Enterprise Breach Detection. SecDev 2016: 15-22
- [c25] Alina Oprea, Ata Turk, Cristina Nita-Rotaru, Orran Krieger: MOSAIC: A Platform for Monitoring and Security Analytics in Public Clouds. SecDev 2016: 69-70
- [i10] Chang Liu, Bo Li, Yevgeniy Vorobeychik, Alina Oprea: Robust High-Dimensional Linear Regression. CoRR abs/1608.02257 (2016)
- 2015
- [c24] Alina Oprea, Zhou Li, Ting-Fang Yen, Sang H. Chin, Sumayah A. Alrwais: Detection of Early-Stage Enterprise Infection by Mining Large-Scale Log Data. DSN 2015: 45-56
- 2014
- [c23] Ting-Fang Yen, Victor Heorhiadi, Alina Oprea, Michael K. Reiter, Ari Juels: An Epidemiological Study of Malware Encounters in a Large Enterprise. CCS 2014: 1117-1130
- [c22] Alina Oprea, Reihaneh Safavi-Naini: CCSW 2014: Sixth ACM Cloud Computing Security Workshop. CCS 2014: 1560-1561
- [e1] Gail-Joon Ahn, Alina Oprea, Reihaneh Safavi-Naini: Proceedings of the 6th edition of the ACM Workshop on Cloud Computing Security, CCSW '14, Scottsdale, Arizona, USA, November 7, 2014. ACM 2014, ISBN 978-1-4503-3239-2 [contents]
- [i9] Alina Oprea, Zhou Li, Ting-Fang Yen, Sang H. Chin, Sumayah A. Alrwais: Detection of Early-Stage Enterprise Infection by Mining Large-Scale Log Data. CoRR abs/1411.5005 (2014)
- 2013
- [j3] Ari Juels, Alina Oprea: New approaches to security and availability for cloud data. Commun. ACM 56(2): 64-73 (2013)
- [j2] Marten van Dijk, Ari Juels, Alina Oprea, Ronald L. Rivest: FlipIt: The Game of "Stealthy Takeover". J. Cryptol. 26(4): 655-713 (2013)
- [c21] Ting-Fang Yen, Alina Oprea, Kaan Onarlioglu, Todd Leetham, William K. Robertson, Ari Juels, Engin Kirda: Beehive: large-scale log analysis for detecting suspicious activity in enterprise networks. ACSAC 2013: 199-208
- 2012
- [j1] Jianqiang Luo, Kevin D. Bowers, Alina Oprea, Lihao Xu: Efficient software implementations of large finite fields GF(2^n) for secure storage applications. ACM Trans. Storage 8(1): 2:1-2:27 (2012)
- [c20] Emil Stefanov, Marten van Dijk, Ari Juels, Alina Oprea: Iris: a scalable cloud file system with efficient integrity checks. ACSAC 2012: 229-238
- [c19] Marten van Dijk, Ari Juels, Alina Oprea, Ronald L. Rivest, Emil Stefanov, Nikos Triandopoulos: Hourglass schemes: how to prove that cloud files are encrypted. CCS 2012: 265-280
- [c18] George Amvrosiadis, Alina Oprea, Bianca Schroeder: Practical scrubbing: Getting to the bad sector at the right time. DSN 2012: 1-12
- [c17] Kevin D. Bowers, Marten van Dijk, Robert Griffin, Ari Juels, Alina Oprea, Ronald L. Rivest, Nikos Triandopoulos: Defending against the Unknown Enemy: Applying FlipIt to System Security. GameSec 2012: 248-263
- [i8] Marten van Dijk, Ari Juels, Alina Oprea, Ronald L. Rivest: FlipIt: The Game of "Stealthy Takeover". IACR Cryptol. ePrint Arch. 2012: 103 (2012)
- [i7] Kevin D. Bowers, Marten van Dijk, Robert Griffin, Ari Juels, Alina Oprea, Ronald L. Rivest, Nikos Triandopoulos: Defending Against the Unknown Enemy: Applying FlipIt to System Security. IACR Cryptol. ePrint Arch. 2012: 579 (2012)
- 2011
- [c16] Kevin D. Bowers, Marten van Dijk, Ari Juels, Alina Oprea, Ronald L. Rivest: How to tell if your cloud files are vulnerable to drive crashes. CCS 2011: 501-514
- [c15] Yinqian Zhang, Ari Juels, Alina Oprea, Michael K. Reiter: HomeAlone: Co-residency Detection in the Cloud via Side-Channel Analysis. IEEE Symposium on Security and Privacy 2011: 313-328
- [i6] Emil Stefanov, Marten van Dijk, Alina Oprea, Ari Juels: Iris: A Scalable Cloud File System with Efficient Integrity Checks. IACR Cryptol. ePrint Arch. 2011: 585 (2011)
- 2010
- [c14] Alina Oprea, Ari Juels: A Clean-Slate Look at Disk Scrubbing. FAST 2010: 57-70
- [i5] Kevin D. Bowers, Marten van Dijk, Ari Juels, Alina Oprea, Ronald L. Rivest: How to Tell if Your Cloud Files Are Vulnerable to Drive Crashes. IACR Cryptol. ePrint Arch. 2010: 214 (2010)
2000 – 2009
- 2009
- [c13] Kevin D. Bowers, Ari Juels, Alina Oprea: Proofs of retrievability: theory and implementation. CCSW 2009: 43-54
- [c12] Kevin D. Bowers, Ari Juels, Alina Oprea: HAIL: a high-availability and integrity layer for cloud storage. CCS 2009: 187-198
- [c11] Alina Oprea, Kevin D. Bowers: Authentic Time-Stamps for Archival Storage. ESORICS 2009: 136-151
- [i4] Alina Oprea, Kevin D. Bowers: Authentic Time-Stamps for Archival Storage. IACR Cryptol. ePrint Arch. 2009: 306 (2009)
- 2008
- [c10] Constantin Vertan, Alina Oprea, Corneliu Florea, Laura Florea: A Pseudo-logarithmic Image Processing Framework for Edge Detection. ACIVS 2008: 637-644
- [i3] Kevin D. Bowers, Ari Juels, Alina Oprea: Proofs of Retrievability: Theory and Implementation. IACR Cryptol. ePrint Arch. 2008: 175 (2008)
- [i2] Kevin D. Bowers, Ari Juels, Alina Oprea: HAIL: A High-Availability and Integrity Layer for Cloud Storage. IACR Cryptol. ePrint Arch. 2008: 489 (2008)
- 2007
- [c9] Laura Florea, Constantin Vertan, Corneliu Florea, Alina Oprea: Dynamic range enhancement of consumer digital camera acquired hip prosthesis X-ray images. EUSIPCO 2007: 1103-1106
- [c8] Alina Oprea, Michael K. Reiter: Integrity Checking in Cryptographic File Systems with Constant Trusted Storage. USENIX Security Symposium 2007
- 2006
- [c7] Michael Backes, Christian Cachin, Alina Oprea: Secure Key-Updating for Lazy Revocation. ESORICS 2006: 327-346
- [c6] Alina Oprea, Michael K. Reiter: On Consistency of Encrypted Files. DISC 2006: 254-268
- 2005
- [c5] Alina Oprea, Michael K. Reiter: Space-Efficient Block Storage Integrity. NDSS 2005
- [c4] Michael Backes, Christian Cachin, Alina Oprea: Lazy Revocation in Cryptographic File Systems. IEEE Security in Storage Workshop 2005: 1-11
- [i1] Michael Backes, Christian Cachin, Alina Oprea: Secure Key-Updating for Lazy Revocation. IACR Cryptol. ePrint Arch. 2005: 334 (2005)
- 2004
- [c3] Lea Kissner, Alina Oprea, Michael K. Reiter, Dawn Xiaodong Song, Ke Yang: Private Keyword-Based Push and Pull with Applications to Anonymous Communication. ACNS 2004: 16-30
- [c2] Alina Oprea, Dirk Balfanz, Glenn Durfee, Diana K. Smetters: Securing a Remote Terminal Application with a Mobile Trusted Device. ACSAC 2004: 438-447
- 2003
- [c1] Philip D. MacKenzie, Alina Oprea, Michael K. Reiter: Automatic generation of two-party computations. CCS 2003: 210-219