Critics claim Paris using 2024 Games to introduce Big Brother video surveillance

France’s National Assembly is due to adopt a law on Tuesday ahead of the 2024 Olympic Games in Paris. Article 7 is the most controversial aspect of this law, as it will allow AI video surveillance to be used to detect abnormal behaviour. Human rights organisations and the French left have condemned the measure.  

The wide-ranging law that France’s National Assembly is due to adopt on March 28, ahead of the 2024 Paris Olympic Games, will allow shops to open on Sundays, establish a health centre in the Seine-Saint-Denis department (northeast of Paris) and permit the French state to vet people who will be accredited for the Games. Article 7 of the law is particularly controversial, however: it states that AI video surveillance may be used, on a trial basis, to ensure the safety of the Olympic Games. Human rights groups say the use of this technology will set a dangerous precedent.

During the preliminary examination, Article 7 was adopted with the votes of the presidential majority, France’s right-wing party Les Républicains and the far-right National Rally; the New Ecological and Social People’s Union (NUPES), a coalition of left-wing parties, opposed it. The article allows algorithm-driven video surveillance technology to be used, on a trial basis, to ensure the safety of large-scale “sporting, recreational or cultural events”.

New technology in question 

“Algorithmic video surveillance is a new form of technology that uses computer software to analyse images captured by surveillance cameras in real time,” explains Arnaud Touati, a lawyer specialising in digital law. “The algorithms used in the software are notably based on machine learning technology, which allows AI video surveillance, over time, to continue to improve and adapt to new situations.” 

Proponents claim the technology can anticipate crowd movements and spot abandoned luggage or other potentially dangerous incidents. Unlike traditional video surveillance, the analysis is fully automated and handled by algorithms, which, supporters argue, limits human error. 
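
The law does not specify which software would be deployed, but one common building block behind claims such as spotting abandoned luggage is background subtraction: the software learns what the static scene looks like, flags regions that differ from it, and raises an alert when a flagged object stays put for too long. The sketch below, which assumes the OpenCV library and a hypothetical video file, only illustrates that general principle; it is not the system envisaged by Article 7.

```python
# Illustrative sketch only: a toy "stationary object" detector built on OpenCV
# background subtraction. It is NOT the software referred to in the French law;
# it merely shows the principle of automated analysis of camera footage.
import cv2

STILL_FRAMES_THRESHOLD = 150   # ~5 seconds at 30 fps before an alert is raised
MIN_AREA = 500                 # ignore tiny foreground blobs (noise)

def monitor(video_path: str) -> None:
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    still_counter = {}  # quantised position -> consecutive frames with a detection

    while True:
        ok, frame = cap.read()
        if not ok:
            break

        # Foreground mask: pixels that differ from the learned background model.
        mask = subtractor.apply(frame)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        seen = set()
        for contour in contours:
            if cv2.contourArea(contour) < MIN_AREA:
                continue
            x, y, w, h = cv2.boundingRect(contour)
            key = (x // 50, y // 50)  # coarse position so one object keeps one key
            seen.add(key)
            still_counter[key] = still_counter.get(key, 0) + 1
            if still_counter[key] == STILL_FRAMES_THRESHOLD:
                print(f"Possible abandoned object near x={x}, y={y} - flag for human review")

        # Drop counters for positions with no detection in this frame.
        still_counter = {k: v for k, v in still_counter.items() if k in seen}

    cap.release()

if __name__ == "__main__":
    monitor("station_camera.mp4")  # hypothetical file name
```

Real deployments would replace this kind of hand-written rule with trained models, which is precisely where the questions about training data and bias discussed below arise.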

“While France promotes itself as a champion of human rights globally, its decision to legalize AI-powered mass surveillance during the Olympics will lead to an all-out assault on the rights to privacy, protest, and freedom of assembly and expression,” Amnesty International said in a statement after the article was passed. 

A herald of future video surveillance across Europe? 

Katia Roux, the NGO’s technology and human rights specialist, explains why this technology raises so many fears. “Under international law, legislation must respect the strict principles of necessity and proportionality. In this case, however, the legislator has not demonstrated this,” she says. “We are talking about assessment technology, which has to evaluate behaviours and categorise them as at risk so that measures can be taken afterwards.”  

“This technology is not legal today. In France, experiments have been done, but not within the legal framework that this law proposes to create,” she said. “Nor is it legal at the European level. It is even being debated in the European Parliament as part of discussions on the regulation of artificial intelligence systems. The legislation could therefore also violate the European regulation currently being drafted.” 

“By adopting this law, France would become the champion of video surveillance in the EU and set an extremely dangerous precedent. It would send an extremely worrying signal to countries that might be tempted to use this technology against their own population,” she continued. 

Discriminatory? 

One fear is that the seemingly cold and infallible algorithm may in fact contain discriminatory biases. “These algorithms are going to be trained using a set of data decided and designed by human beings. They will therefore be able to incorporate the discriminatory biases of the people who conceived and designed them,” says Roux. 

“AI video surveillance has already been used for racist purposes, notably by China, in the exclusive surveillance of the Uighurs, a Muslim minority present in the country,” says Touati. “Because ethnic minorities are under-represented in the data provided to the algorithms for learning purposes, there are significant discriminatory and racist biases. According to an MIT study, while the facial recognition error rate is 1% for White men, it is 34% for Black women.” 
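
The kind of disparity Touati cites is typically established by auditing a model’s error rate separately for each demographic group. The snippet below is a generic illustration of such a per-group audit, with placeholder group names and counts chosen to mirror the quoted figures; it is not the MIT study’s code or data.

```python
# Generic illustration of a per-group error-rate audit. The group names and
# numbers are placeholders mirroring the figures quoted above; this is not
# the MIT study's code or data.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted_label, true_label) tuples."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# 1 error in 100 predictions for group_a, 34 errors in 100 for group_b.
sample = (
    [("group_a", "match", "match")] * 99 + [("group_a", "match", "no_match")] * 1
    + [("group_b", "match", "match")] * 66 + [("group_b", "match", "no_match")] * 34
)
print(error_rates_by_group(sample))  # {'group_a': 0.01, 'group_b': 0.34}
```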

Touati, however, wants to see the glass as half full. “Using AI video surveillance during events of this magnitude could also highlight the algorithm’s discriminatory, misogynistic and racist biases by identifying, at too high a frequency to be accurate, people from minority ethnic groups as potential suspects,” he explains. 

When asked by members of the left-wing opposition coalition NUPES what kind of people AI video surveillance would be targeting, French Interior Minister Gérald Darmanin said, “Not [ones wearing] hoodies.” The French government believes that the limits set by the law – no facial recognition, and data protection requirements – will be enough to prevent discriminatory practices.  

“We have put safeguards in place so that tenders are only reserved for companies that respect a certain number of rules, including hosting data on national territory, respecting the CNIL [National Commission on Informatics and Liberty, the independent French administrative body responsible for ensuring that data privacy law is applied to the collection, storage and use of personal data] and the GDPR [General Data Protection Regulation, the EU’s data protection law],” says MP Philippe Latombe, a member of the pro-Europe and centre-right political party Democratic Movement. He co-signed an amendment with the National Rally so that the call for tenders would give priority to European companies. “Clearly, we don’t want it to be a Chinese company that does data processing in China and uses the data to do something else.” 

“We are not reassured by the government’s guarantees. In reality, no real amendment is possible, and this technology is, in itself, problematic and dangerous for human rights,” says Roux. “It will remain so until a serious evaluation has been conducted, the necessity and proportionality of its use has been demonstrated, and a real debate has been held with civil society’s different actors on this issue.” 

Sports events and tech experiments

Although the Olympic Games are clearly the target event, this technological experiment can begin as soon as the law is implemented and will end on December 31, 2024, four months after the Paralympic Games finish. It could therefore be applied to a wide range of events, starting with the Rugby World Cup from September 8 to October 28.  

Opponents of AI video surveillance fear that its initially exceptional use will eventually become commonplace. After all, sports events are often used as a testing ground for policing, security and new technology. The 2012 London Olympics, for example, led to the widespread use of video surveillance in the British capital. 

“We are afraid that this exceptional period will become the norm,” explains Roux, who adds that voice recognition technology, which was deployed on an experimental basis during the 2018 World Cup in Russia, has since been used to repress the opposition.  

Finally, Amnesty International is concerned that video surveillance will eventually lead to biometric or voice surveillance. “Facial recognition is just a feature waiting to be activated,” says Roux. 

The law on the 2024 Olympic Games has not yet completed its legislative journey. Following Tuesday’s formal vote in the National Assembly, the text will continue to shuttle between the Assembly and the Senate, which has already amended it, until the two chambers agree on a version to adopt.  

Tech 24’s Peter O’Brien contributed to this article. 

This article has been translated from the original in French. 
