September 13, 2024
Olympics’ AI Security Stokes Backlash Over Mass Surveillance

The Paris Summer Olympics that begin Friday will feature one of the most public and controversial rollouts yet of algorithmic video surveillance, an AI event-security technology that uses machine learning to analyze video footage in real time to detect, and even predict, threats and other anomalies.

In Paris, video cameras around the city will watch millions of visitors to detect weapons, people moving against the flow of crowds, and other behavior that could be seen as a precursor to an attack. Security personnel will then decide whether to notify authorities, including local and national police.
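In broad strokes, such a system pairs an ordinary camera feed with a machine-learning detection model and keeps a human in the loop. The Python sketch below is only a schematic illustration of that pipeline; the detect_anomalies function, the stream URL, and the alert threshold are hypothetical stand-ins, not details of the system deployed in Paris.

```python
import cv2  # OpenCV reads frames from an ordinary camera feed

def detect_anomalies(frame):
    """Hypothetical stand-in for a trained detection model.

    A real system would run inference here and return labels such as
    'weapon' or 'counter-flow movement' with confidence scores.
    """
    return []  # placeholder: this sketch bundles no model

ALERT_THRESHOLD = 0.8  # assumed confidence cutoff for flagging an event

cap = cv2.VideoCapture("rtsp://camera.example/feed")  # hypothetical stream
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    for label, score in detect_anomalies(frame):
        if score >= ALERT_THRESHOLD:
            # The software only flags; a human operator decides whether
            # to notify authorities.
            print(f"ALERT: {label} (confidence {score:.2f})")
cap.release()
```

The notable design point is the last step: the model surfaces candidate events, but escalation remains a human judgment call.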

While French lawmakers call the tool a security measure aimed at shielding the multi-week event from violence, privacy advocates on both sides of the Atlantic have sounded alarms.

“The things that these tools are supposed to achieve are something like those pre-cognitive efforts from that Tom Cruise movie some years ago,” University of California, Irvine law professor Ari Ezra Waldman said, referencing the 2002 sci-fi film Minority Report.

Some privacy advocates say the technology’s threats, including civil liberties infringement, built-in bias, false positives, and biometric data collection, will inevitably carry over if, or when, the technology is used at upcoming US mega-events like the 2026 FIFA World Cup, the 2028 Summer Olympics in Los Angeles, and the just-awarded 2034 Winter Olympics in Salt Lake City.

Waldman and other privacy advocates acknowledge that the massive crowds at such large-scale events bring heightened worries of violence and terrorism.

“It’s rational for people to be concerned,” Waldman said. “I actually just think that there are better tools that we have available that are less privacy-invasive.”

He noted stadiums do a good job of managing crowds with metal detectors, pat-downs, and bag regulations. The push for AI surveillance, he said, is the result of the narrative that new technology is necessarily better.

The French law allowing ramped-up AI surveillance at the Olympics was passed for a limited period. While the technology also will be used around the 2024 Paralympics that start Aug. 28, the law is set to lapse next spring. Still, increased use of AI-enhanced video security at such high-profile events, some privacy advocates say, could create a larger appetite for mass surveillance outside the venues’ walls.

“They always use these as Trojan horses to try and implement more widespread use of the technologies,” Ella Jakubowska, head of policy at European Digital Rights, said.

Weighty Concerns

While the French legislation permitting the technology for a trial period decrees the systems won’t process biometric information, privacy groups are expressing skepticism.

European Digital Rights and nearly three dozen other civil society organizations said in an open letter last year that while the systems won’t implement facial recognition techniques under the French law, they will still single out individuals in crowds. That constitutes “unique identification” and triggers protections under the European Union’s General Data Protection Regulation, they said.

Moreover, detecting the kinds of incidents the tool is meant to find requires biometric data, according to Laura Lazaro Cabrera, counsel and director of the equity and data program at the Center for Democracy & Technology.

“That will necessarily capture and analyze physiological features or behaviors of individuals who are present in those spaces, including their body positions, their movements, gestures, et cetera,” Lazaro Cabrera said.

Surveillance solutions are thrown at emotionally charged issues despite incomplete evidence of the tools’ efficacy, Leila Nashashibi, a campaigner for the nonprofit Fight for the Future, said.

A 2023 study from The Markup and Wired found predictive-policing software Geolitica to be poor at predicting crime, with a success rate as low as 0.1% for some types of crimes. Research featured last year in Scientific American found that law enforcement agencies’ use of automated facial recognition tools disproportionately leads to arrests of Black people, a result analysts attributed to homogeneous training data sets.

Incomplete or skewed training data can affect results, and seemingly erratic behavior can be defined differently depending on the context, said Christoph Lütge, director of the Institute for Ethics in Artificial Intelligence at the Technical University of Munich.

For instance, jaywalking isn’t a concept in Germany because it’s not considered a problem, Lütge said.

To make algorithms used for surveillance at international events comprehensive, he said, they can’t be trained on a single culture’s customs alone but must instead cover a diverse range.

Catherine Crump, director of University of California, Berkeley’s Samuelson Law, Technology & Public Policy Clinic, pushed for companies to obtain third-party efficacy assessments.

She said there needs to be more transparency about how effective the tools are, so people aren’t “just relying on the goodwill of private corporations and their technology” to protect their rights.

US Use

While many US-based companies have to comply with the EU’s privacy rules, there isn’t a direct US legal or regulatory equivalent.

There’s no comprehensive federal privacy law, though proposals have been introduced in Congress, and attempts to regulate AI have been scattershot. The Biden administration’s October 2023 AI executive order directed federal agencies across the government to vet the technology for health, safety, and security risks.

“Whereas in some other countries, the default is that you can’t use technology like video analytics unless you get permission, here the default is that you can use it unless it’s specifically prohibited,” Crump said.

A number of US sectors, including the security industry, have folded AI into their products.

Scylla is a security system supplier that markets AI video analytics to sports venues. The company says its technology can detect slips and falls, guns, and fights. If the company’s facial recognition tools are activated, the system can also spot individuals from a watchlist and find missing children in stadiums, according to its website. Among Scylla’s clients is Major League Baseball’s Chicago Cubs, which uses the technology at Wrigley Field.

Scylla’s AI system monitors a venue’s camera network in real time and watches for anomalies, according to Kris Greiner, the company’s vice president of sales. He likened the technology to a Roku stick plugged into a decade-old television to enhance its streaming capabilities.

When the AI detects a fight or weapon, it pings security personnel. Unlike people, the AI “never blinks” and can watch numerous cameras simultaneously, he said.
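That many-eyes-at-once claim maps onto a worker-per-camera pattern, with every feed funneling flagged events into a single alert channel. The sketch below illustrates the general pattern only; it is not Scylla’s actual architecture, and detect_threats and the feed names are hypothetical.

```python
import queue
import threading

alert_queue = queue.Queue()  # central channel security staff would watch

def detect_threats(frame):
    """Hypothetical model call; a real system returns labels like 'gun'."""
    return []

def monitor_camera(camera_id, frame_source):
    """One worker per feed, so no camera goes unwatched between frames."""
    for frame in frame_source:
        for label in detect_threats(frame):
            alert_queue.put((camera_id, label))  # the 'ping' to personnel

# Hypothetical feeds; in practice these would wrap the venue's existing
# camera network rather than replace it.
feeds = {"gate-3": iter([]), "concourse-A": iter([])}
for cam_id, source in feeds.items():
    threading.Thread(
        target=monitor_camera, args=(cam_id, source), daemon=True
    ).start()
```

The queue decouples detection from response: the software flags, and a person watching the queue decides what, if anything, to escalate.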

Greiner said the company doesn’t log, store, or collect any private personal information. When asked about concerns over biometric data collection, Greiner said facial recognition is always deactivated by default, unless an organization chooses to turn it on. He added that the AI is never trained to watch for specific ethnicities, genders, or races.

“So if somebody’s doing gun detection, for example, that’s all we’re watching for, gun detection—nothing else,” he said. “All we look for is the human holding a weapon.”
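A posture like the one Greiner describes, where facial recognition ships disabled and only the detectors a client opts into actually run, amounts to an allow-list configuration. The sketch below is a minimal illustration under those assumptions; the class and field names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class DeploymentConfig:
    # Only detectors on this allow-list run; everything else is ignored.
    enabled_detectors: set = field(default_factory=lambda: {"gun"})
    # Disabled by default; an organization must explicitly opt in.
    facial_recognition: bool = False

cfg = DeploymentConfig()
assert "gun" in cfg.enabled_detectors and not cfg.facial_recognition
```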

Next Stop: California

California, host of the 2028 Summer Olympics and home to two venues slated for use in the 2026 World Cup, has been a frontrunner among states regulating consumer privacy and technology. While the Fourth Amendment’s protection from unreasonable searches and seizures doesn’t apply to the private sector, California’s state-constitutional right to privacy does, according to Nicole Ozer, the technology and civil liberties director of the American Civil Liberties Union of Northern California.

The protection has been used in courts since it was added to the state constitution’s list of inalienable rights in 1972 to challenge privacy intrusions like indiscriminate body searches and government spying, according to the ACLU chapter.

“There needs to definitely be some really, really deep questions asked if systems are going to be contemplated because the constitutional right to privacy is a very robust protection,” Ozer said. “It’s a fundamental right in California.”

Under the California Consumer Privacy Act, residents have a right to know what personal information businesses collect about them and how it’s used. The California Privacy Protection Agency, a first-in-the-nation regulator, gained enforcement powers last July. Lawmakers and regulators in the state appear eager to ensure their privacy protections keep pace with rapidly advancing AI technologies.

This year, the state’s legislature proposed dozens of bills to regulate AI. This month, a California legislative panel approved a measure that would ban discrimination by AI tools. Some cities in California, like future World Cup host San Francisco, have even banned city agencies’ use of facial recognition.

Going forward, “if any state is going to be able to pass a law to regulate or restrict the use of biased, insufficient, and very often incorrect outputs from AI, it’s going to be California,” Waldman said.

While Olympics and World Cups are temporary, the erosion of privacy could be longer-lasting, said American University law professor Andrew Guthrie Ferguson, likening surveillance technologies piloted during the events to the vast physical infrastructure they bring with them.

“We build all these things and then they stay. We don’t tear them down because it seems sort of wasteful,” Ferguson said. “The problem is that same argument would then erode privacy rights going forward when there’s not an Olympic Game and there’s not extra millions of people.”

