On September 12 (Beijing time), Facebook announced a new artificial intelligence (AI) system that can be used to detect hate speech. US technology outlet CNET believes this "weapon" has arrived at the right time, and that it will prove useful as Facebook faces content-moderation challenges.
The system, code-named "Rosetta", helps computers read and understand the billions of images and videos posted to the social network every day. With it, Facebook can more easily discover which content on the platform violates its rules against hate speech.
Typically, computers use a method called optical character recognition (OCR) to identify text in pictures or videos, but the sheer volume of Facebook content (2.2 billion people use the social network every month) exposes OCR's shortcomings at that scale. Facebook therefore announced it was building a larger system.
The system works on both Facebook and Instagram and can be used to improve how content surfaces in photo search and the news feed. Rosetta monitors site content by extracting text in many languages, in real time, from more than a billion images and video frames.
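Facebook has not published Rosetta's internals, but the screening step that such text extraction enables can be sketched in a few lines of Python. The function name and phrase list below are illustrative assumptions, not Facebook's actual pipeline or API:

```python
# Hypothetical sketch: once an OCR step has pulled text out of an image or
# video frame, that text can be screened against a list of banned phrases.
# The function name and the phrases are illustrative, not Facebook's.

def flag_extracted_text(text: str, banned_phrases: list[str]) -> list[str]:
    """Return the banned phrases found in OCR-extracted text (case-insensitive)."""
    lowered = text.lower()
    return [phrase for phrase in banned_phrases if phrase.lower() in lowered]

# Example: text pulled from an image by an OCR step (simulated here)
extracted = "Join us to SPREAD HATE against group X"
print(flag_extracted_text(extracted, ["spread hate", "incite violence"]))
# → ['spread hate']
```

In practice the hard part is the extraction itself, at the scale of a billion frames; the rule-matching shown here is only the final, simplest stage.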
As Facebook reviews content on its social network, the system will certainly come in handy. Facebook has previously been accused of bearing some responsibility for violence in Myanmar, Sri Lanka and India. Last month, the social network said that, having discovered false information on its platform, it was taking action to stop the "spread of hate" in Myanmar. Ethnic violence against Rohingya Muslims in Myanmar continues, and Facebook has deleted 18 accounts and 52 pages linked to the Myanmar military.
In July of this year, Facebook said it would begin deleting false information designed to incite or exacerbate violence, including written posts and manipulated images. Previously, Facebook banned only directly violent content; the new policy also covers false information that may provoke physical harm.
Last week, Facebook Chief Operating Officer Sheryl Sandberg and Twitter CEO Jack Dorsey testified before Congress, explaining their companies' content-review policies and the measures they take to protect users.
Facebook CEO Mark Zuckerberg has said he wants to use artificial intelligence to proactively detect offensive content on the platform, rather than waiting for people to flag and report it. But while the company develops that technology, he added, the social network is employing 20,000 human moderators to monitor harmful content on the platform.
In addition, Facebook said on Tuesday that it is adding new languages to its automated translation toolset. The 24 newly added languages include Hausa, Urdu and Nepali, bringing the total number of supported languages to more than 125.