Ukrainian military gets facial recognition AI. We are worried

The more precise the tool, the more likely it is to be integrated into autonomous weapon systems that can be directed not only against invading armies but also against political opponents, members of particular ethnic groups, and so on. Far from being reassuring, the technology's improved reliability makes it all the more sinister and dangerous. This applies not only to privately developed technology, but also to efforts by states such as China to develop facial recognition tools for security purposes.

The use of facial recognition AI in the Ukraine war also carries significant risks outside of combat. Where facial recognition is used in the EU for border control and migration purposes – and this is largely the case – it is public authorities who collect the sensitive biometric data essential for facial recognition, the data subject knows this is happening, and EU law strictly regulates the process. Clearview, on the other hand, has already violated the EU's General Data Protection Regulation (GDPR) on several occasions and has been heavily sanctioned by data protection authorities in Italy and France.

If private facial recognition technologies were used to identify Ukrainian citizens within the EU, or in border areas, in order to offer them some form of protective status, a gray area would be created between military and civilian use within the EU itself. Any such system would, in effect, be deployed on civilian populations inside the EU. A company like Clearview could promise to keep its civilian and military databases separate, but that would require additional regulation – and even then it would raise the question of how a single company can be entrusted with civilian data that it can easily reuse for military purposes. This is in fact what Clearview is already offering the Ukrainian government: it is building its military frontline reconnaissance operation on civilian data harvested from Russian social media profiles.

This raises the question of state power. Once ready for use, facial recognition may prove simply too tempting for European security agencies to put back in the box. This has already been reported in the United States, where members of the New York Police Department allegedly used Clearview's tool to circumvent the department's data protection and privacy rules, installing Clearview's app on private devices in violation of NYPD policy.

This is a particular risk associated with deployment and testing in Ukraine. If Ukraine's accession to the European Union is fast-tracked, as many believe it should be, the EU would inherit the use of Clearview's AI as an established practice for military and potentially civilian purposes. Both uses may have been adopted without malice or intent to abuse, but they would set what we believe to be a disturbing precedent.

The Russian invasion of Ukraine is extraordinary in its scale and brutality. But throwing caution to the wind is not a legitimate doctrine under the laws of war or the rules of engagement, especially where powerful new technologies are concerned. The defense of Ukraine may well involve tools and methods that, if normalized, will end up undermining the peace and security of European citizens at home and on future fronts. European politicians should be wary of this. The EU must use all the tools at its disposal to end the conflict in Ukraine and the Russian aggression, but it must do so while guaranteeing the rule of law and the protection of its citizens.

openDemocracy asked Clearview to comment on the specific issues raised in this article, but the company did not respond.
