New AI Model Developed to Detect Extremist Users

A team of researchers at the University of California, Berkeley has developed a new AI model to detect and flag extremist users and ISIS-related content on social media platforms. The research has been published in the journal Nature Machine Intelligence.

The AI model uses natural language processing and machine learning algorithms to analyze text, images, videos, and other multimedia content shared by users on platforms such as Twitter, Facebook, YouTube, and Instagram. It has been trained on hundreds of thousands of examples of extremist rhetoric, symbolism, and multimedia content created and shared by known extremist groups such as ISIS.
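
The researchers have not released their code or architecture, but as a rough sketch of the kind of supervised text classification described above, the following Python example trains a tiny classifier with scikit-learn. Everything here is an invented stand-in, not the team's actual pipeline: the example texts, labels, features (TF-IDF over character n-grams) and model choice (logistic regression) are purely illustrative.

```python
# Minimal sketch of a supervised text classifier for flagging posts.
# All training examples and labels below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = extremist rhetoric, 0 = benign.
texts = [
    "join the caliphate and fight the unbelievers",    # extremist (invented)
    "new recruitment video from the brothers",         # extremist (invented)
    "great recipe for homemade bread",                 # benign
    "panel discussion on countering online extremism", # benign (counter-speech)
]
labels = [1, 1, 0, 0]

# Character n-grams are one common way to cope with obfuscated spellings
# and multiple languages without per-language tokenizers.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

# Score a new post at upload time; a platform would compare this against
# a chosen review threshold.
post = "watch the brothers' new recruitment video"
risk = model.predict_proba([post])[0][1]  # probability of the extremist class
print(f"risk={risk:.2f}; flag for review if above a chosen threshold")
```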

“Social media platforms are struggling to keep up with the massive volumes of content being shared daily. Our AI model can proactively identify and take down extremist content at the point of upload itself before it has a chance to spread,” explained Dr. Amanda Smith, lead researcher of the project.

“By analyzing the social networks, posting patterns, and linguistic styles of users, our model can also determine if the user is likely to be an extremist. This provides an early warning system to block their accounts before they have amplified their reach,” she added.
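
The paper's user-level model is not described in reproducible detail, but the general idea of combining network and behavioral signals into a single risk score could be sketched as below. The features, weights, and threshold are all hypothetical assumptions, chosen only to illustrate the "early warning" scoring Dr. Smith describes.

```python
# Illustrative user-level risk scoring from activity features.
# Features, weights, and the review threshold are invented stand-ins.
from dataclasses import dataclass

@dataclass
class UserActivity:
    flagged_post_ratio: float     # share of the user's posts flagged by the content model
    flagged_contact_ratio: float  # share of the user's contacts already flagged
    burst_posting_score: float    # 0..1, how bursty/coordinated the posting pattern is

def risk_score(u: UserActivity) -> float:
    """Weighted combination of signals; the weights here are hypothetical."""
    return (0.5 * u.flagged_post_ratio
            + 0.3 * u.flagged_contact_ratio
            + 0.2 * u.burst_posting_score)

user = UserActivity(flagged_post_ratio=0.4,
                    flagged_contact_ratio=0.6,
                    burst_posting_score=0.8)
if risk_score(user) > 0.5:  # 0.54 for this example
    print("queue account for human review")  # early warning, not automatic blocking
```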

The model has already been tested by some social media platforms and has achieved an accuracy rate of over 90% in identifying extremist content in languages including English, Arabic, French, German, and Spanish. Its accuracy continues to improve as it is fed more training data.

“Extremist content spreads much faster on social media today. It is impossible for human content moderators to manually review the billions of posts created daily. Our AI system can flag content in real-time for immediate takedown or review while understanding context to differentiate extremism from counter-speech,” said Dr. Amir Khan, lead data scientist on the project.
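
As an illustration of the real-time triage Dr. Khan describes, routing each post by score into takedown, human review, or release could look something like the following. The thresholds and the stand-in scoring function are assumptions for the sketch, not published values from the project.

```python
# Hedged sketch of real-time post triage: score at upload, route by threshold.
# Both thresholds are placeholders, not the team's actual operating points.
AUTO_REMOVE = 0.95   # near-certain extremist content: take down immediately
HUMAN_REVIEW = 0.60  # ambiguous (e.g., possible counter-speech): human decides

def triage(post_text: str, score_fn) -> str:
    score = score_fn(post_text)
    if score >= AUTO_REMOVE:
        return "remove"
    if score >= HUMAN_REVIEW:
        return "review"  # context check separates extremism from counter-speech
    return "allow"

# Example with a stand-in scoring function; a real system would call the
# trained classifier from the earlier sketch.
print(triage("sample post", score_fn=lambda text: 0.7))  # -> "review"
```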

Civil liberty groups have expressed concerns that such AI systems could end up unfairly censoring innocent conversations about sensitive geopolitical topics. The researchers emphasized that their model focuses strictly on known terrorist rhetoric and symbols, with rigorous human oversight and ongoing training.

Benefits of the AI Model:

  • Proactively detects and removes extremist multimedia, such as beheading videos and propaganda posters, at the point of upload
  • Identifies high-risk users from their online activity patterns and social associations
  • Understands context in multiple languages to flag extremism accurately and avoid false positives
  • Evolves rapidly with new data to catch emerging extremist trends and coded language
  • Saves significant time and cost compared to manual content moderation

The research team is currently trialing the model's integration with several social media platforms. Wider adoption could help curb the spread of extremist ideologies and online recruitment. However, experts caution that technology alone cannot solve this complex issue without a broader strategy to counter violent extremism in society.
