The physical security industry stands at a crossroads. Video surveillance and analytics have rapidly transitioned to the cloud over the past decade, bringing enhanced connectivity and intelligence. But these same innovations also enable new potential for mass data collection, profiling and abuse.
As one of the sector’s leading cloud-based providers, Verkada, which offers physical security products including AI-equipped remote monitoring cameras, access controllers and wireless locks, is attempting to chart a privacy-first path forward amid these emerging tensions.
The San Mateo-based company, which has brought over 20,000 organizations into the cloud security era, plans to roll out features focused on protecting identities and validating footage authenticity.
Set to launch today, the updates come at a pivotal moment for society and the way we exist in public and private places. Verkada has drawn significant backlash for past security lapses and controversial incidents; how it balances innovation with ethics will determine how it navigates the turbulent physical security industry.
Obscuring identities, validating authenticity
In an interview, Verkada founder and CEO Filip Kaliszan outlined the motivation and mechanics behind the new privacy and verification features.
“Our mission is protecting people and property in the most privacy sensitive way possible,” Kaliszan said. “[The feature release] is about that privacy sensitive way of accomplishing our goal.”
The first update focuses on obscuring identities in video feeds. Verkada cameras will gain the ability to automatically “blur faces and video streams” using principles similar to augmented reality filters on social media apps. Kaliszan noted security guards monitoring feeds “don’t really need to see all these details” about individuals until an incident occurs.
Making blurring the “default path” where possible is a priority, with the goal being “most videos washed with identities obfuscated.”
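Verkada has not published implementation details for its blurring pipeline. As a rough illustration of the general idea, obscuring a detected face region in a frame can be sketched as a pixelation pass in pure NumPy; the bounding box is assumed to come from a separate face detector, which is omitted here:

```python
import numpy as np

def blur_region(frame: np.ndarray, box: tuple, block: int = 8) -> np.ndarray:
    """Pixelate a bounding box (x, y, w, h) in a video frame by averaging
    over block x block tiles, obscuring the identity inside the box while
    leaving the rest of the frame sharp."""
    x, y, w, h = box
    out = frame.copy()
    region = out[y:y + h, x:x + w]
    for ty in range(0, h, block):
        for tx in range(0, w, block):
            tile = region[ty:ty + block, tx:tx + block]
            # Replace every pixel in the tile with the tile's mean color
            tile[...] = tile.mean(axis=(0, 1), keepdims=True).astype(frame.dtype)
    return out

# Example: blur a hypothetical face box in a random 160x120 RGB frame
frame = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)
blurred = blur_region(frame, (40, 30, 48, 48))
```

Because the original frame is copied rather than overwritten, a deployment could retain unblurred footage for authorized incident review while showing only the obscured stream by default, which is the "default path" behavior described above.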
In addition to blurring based on facial recognition, Verkada plans to implement “hashing of the video that we’re capturing on all of our devices…So we’re creating, you can think of it like a signature of the contents of the video as it is captured,” Kaliszan explained.
This creates a tamper-evident digital fingerprint for each video that can be used to validate authenticity.
Such a feature helps address growing concerns around generative AI, which makes it easier to fake or alter footage.
“We can say this video is real. It came out of one of our sensors and we have proof of when it was captured and how, or hey there is no match,” Kaliszan said.
For Kaliszan, adding privacy and verification capabilities aligns both with ethical imperatives and Verkada’s competitive strategy.
“It’s a win-win strategy for Verkada because on the one hand, you know, we’re doing what we believe is right for society,” he argued. “But it’s also very wise for us,” in terms of building customer trust and preference, he said.
Questions raised about protecting privacy
While Kaliszan positioned Verkada’s new features as a step toward protecting privacy, civil society critics argue the changes do not go nearly far enough.
“If you’re doing it where it can be undone — you can undo it later — you’re still collecting that very intrusive information,” said Merve Hickok, president of the independent nonprofit Center for AI and Digital Policy.
Rather than merely blurring images temporarily, Hickok believes companies like Verkada should embrace a “privacy enhancing approach where you’re not collecting the data in the first place.” Once collected, even obscured footage enables tracking via “location data, license plate readers, heatmapping.”
Hickok argued Verkada’s incremental changes reflect an imbalance of priorities. “The security capabilities are so good, so it’s like yeah, go ahead and collect it all, we’ll blur it for now,” she said. “But then the individual rights of the people walking around are not protected.”
Without stronger regulations, Hickok believes we are on a “slippery slope” toward ubiquitous public surveillance. She advocated for legal prohibitions on “real time biometric identification systems in public spaces,” similar to those being debated in the European Union.
A collision of perspectives on ethics and tech
Verkada finds itself at the center of these colliding perspectives on ethics and technology. On one side, Kaliszan aims to show security can be “privacy sensitive” through features like blurring.
On the other, civil society critics like Hickok question whether Verkada’s business model can ever fully align with individual rights.
The answer holds major implications not just for Verkada, but the broader security industry. As physical security transitions to the cloud, companies like Verkada are guiding thousands of organizations into new technological terrain. The choices they make today around data practices and defaults will ripple far into the future.
That power comes with obligation, Hickok argues. “We’re way closer to enabling the fully surveyed society than we are from a fully private and protected society,” she said. “So I think we do need to have that security measure but maybe the takeaway here is the companies just need to be very cogent.”
For Verkada, cogency means advancing security while avoiding mass surveillance. “When all of it comes together, that privacy consideration further increases, right?” Kaliszan said. “And so thinking through how do we maintain privacy, how do we tie identity locally, doing the processing on the edge and not building a mass surveillance system.”