As artificial intelligence (AI) technology proliferates, it is no surprise that law enforcement organizations are seeking to incorporate its advances into their never-ending efforts to keep citizens safe and criminals at bay, all while taking ethical concerns into account. The EU's Europol is one of them.
Indeed, Europol's Innovation Lab has analyzed the use of AI, among other technologies, in law enforcement from an ethical perspective and presented the results in its report 'Assessing technology in law enforcement: a method for ethical decision-making,' published on February 20.
Using AI to serve and protect – ethical concerns
One of the areas Europol focused on was the online sexual abuse of children. The researchers considered the possibility of using a chatbot to prevent it by detecting sexualized speech, estimating the age and gender of participants, performing sentiment analysis, and detecting linguistic fingerprints, allowing a human operator to intervene.
The main moral concern here was the risk of excessive surveillance, as the tool would require processing all chat data across domains, forums, and chat rooms. At the same time, real-life testing remains problematic, and there is also the black box problem of deep learning to take into account.
Notably, the 'black box problem' refers to the inability of humans to see how deep learning systems reach their conclusions and decisions, which makes such systems difficult to correct when they deliver undesired outcomes. Consequently, AI systems can make judgment errors with ethical implications.
That said, the analysts concluded that using a limited version of the chatbot with a large age threshold (the age difference between the interlocutors) is acceptable.
Can chatbots protect children online?
Per the study, a European law enforcement authority (LEA) has considered implementing 'PrevBot,' a machine learning tool for natural language processing, to prevent child sexual abuse (CSA) online. In theory, it would detect grooming taking place in chat channels and send a warning to the adult in the chat with the aim of making them stop.
PrevBot's training allows it to identify sexually charged conversations and predict participants' age and gender. Additionally, it can carry out sentiment analysis and author identification by computing 'linguistic fingerprints' and matching them against those of previous CSA convicts in a database.
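To make the idea of a 'linguistic fingerprint' concrete, here is a minimal, hypothetical sketch of one common stylometric approach: building a character n-gram frequency profile of a text and comparing profiles with cosine similarity. The report does not disclose PrevBot's actual features, matching method, or thresholds; the function names and the 0.8 cutoff below are assumptions for illustration only.

```python
from collections import Counter
from math import sqrt

def fingerprint(text: str, n: int = 3) -> Counter:
    """Build a crude stylometric profile: character n-gram frequencies."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Compare two fingerprints; 1.0 means identical n-gram distributions."""
    dot = sum(a[g] * b[g] for g in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical usage: compare a live chat message against a stored profile.
known_profile = fingerprint("sample text attributed to a previously convicted author")
live_message = fingerprint("sample text captured from an ongoing chat")
if cosine_similarity(known_profile, live_message) > 0.8:  # assumed cutoff
    print("Flag conversation for review by a human operator")
```

A real system would use far richer features and a vetted reference database, but even this toy version shows why the privacy concern arises: every conversation must be processed to compute a profile at all.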
However, the authors of the study have found at least two main moral problems in PrevBot’s implementation.
The first is the intrusiveness of the measure: to identify conversations that present a risk of CSA, PrevBot must process the data of all conversations in a chat room, inevitably including those of harmless users, which constitutes a threat to privacy and freedom of speech.
Second, the transparency and effectiveness of the measure are still open to debate: a warning does not prevent a perpetrator from simply continuing in a different forum. Moreover, the black box problem and the lack of real-world testing make its application challenging.
Having said that, certain tests in an environment that included both adults and teenagers have delivered satisfactory results in terms of accuracy. Furthermore, the LEA has taken part in a sandbox process with the National Data Protection Authority to examine all privacy issues connected to the tool.
Is there a middle ground?
Finally, the researchers presented three possible options: option 1, not using PrevBot at all (rejected); option 2, using PrevBot to the full extent of its capabilities and recalibrating it with use; and option 3, using a limited version of PrevBot with the age threshold set to 30 (the preferred choice, albeit less efficient than option 2). A rough sketch of how such a threshold could gate intervention follows below.
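As a rough illustration of option 3, the sketch below only escalates a conversation to a human operator when the predicted age gap between interlocutors exceeds the threshold of 30 mentioned in the report. The classifier score, field names, and confidence cutoff are assumptions; the report does not specify how the limited variant would be wired together.

```python
AGE_THRESHOLD = 30  # option 3: required age gap between interlocutors, per the report

def should_escalate(predicted_age_adult: int, predicted_age_minor: int,
                    grooming_score: float, grooming_cutoff: float = 0.9) -> bool:
    """Hypothetical gate for the limited PrevBot variant: alert a human
    operator only when the classifier is confident AND the predicted
    age gap between interlocutors meets the threshold."""
    age_gap = predicted_age_adult - predicted_age_minor
    return grooming_score >= grooming_cutoff and age_gap >= AGE_THRESHOLD

# Hypothetical usage with assumed model outputs:
print(should_escalate(predicted_age_adult=45, predicted_age_minor=13, grooming_score=0.95))  # True
print(should_escalate(predicted_age_adult=22, predicted_age_minor=16, grooming_score=0.95))  # False
```

The trade-off the report describes is visible here: a high threshold reduces false accusations of harmless users but also lets some risky conversations pass unflagged, which is why option 3 is less efficient than option 2.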
All things considered, the right to privacy and freedom of speech remain important concerns when trying to detect and prevent crimes like the sexual abuse of children, especially when AI is enlisted to help, given the problems that still plague this budding technology. Further analysis and careful weighing of the pros and cons are necessary before AI starts accusing people of being child abusers.