On the morning of June 22, Facebook announced that it would expand the scope of its content verification program to crack down on false news and malicious messages on its social network, and to minimize the harm caused by false news through appropriate countermeasures and techniques.
Facebook revealed in a blog post that false news and maliciously tampered content have become commonplace in some countries. This makes it difficult for readers to distinguish true information from false, and even to determine whether a picture or video attached to a news event is genuine. Facebook will work with third-party fact-checking organizations and visual verification experts to extend fact-checking to images and videos. Misleading photographs posted on the platform will also be labeled, alerting users to question the accuracy and authenticity of the information.
When a Facebook user tries to share fake news, Facebook will display a prompt reminding the user that the message has been confirmed as false, along with a link to the official debunking article. Clicking the link shows the flagged suspicious content alongside the original news report. The blog post noted that more than 1 billion pictures, videos, and other pieces of content are uploaded to the platform every day, a volume far too large for manual review teams to handle. Facebook will therefore introduce machine learning mechanisms that use AI to identify false information circulating online. The tool can help the company find links that are still being shared but have already been proven false.
Tessa Lyons, a Facebook product manager, explained that the AI tags plagiarized text, suspicious advertisements, and information that maliciously targets other users. Because such content is typically republished in different forms across different domains, once a piece of false news is debunked, the system records its key identifying information in a background identification library so that the AI can recognize similar content later.
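Facebook has not published the details of this matching system, but the idea of comparing new posts against a library of already-debunked text can be illustrated with a common near-duplicate detection technique. The sketch below (a hypothetical, simplified stand-in, not Facebook's actual algorithm) breaks text into overlapping character fragments ("shingles") and flags a post if its overlap with any known-false entry exceeds a threshold:

```python
# Illustrative sketch only: Facebook has not disclosed its matching
# algorithm. This demonstrates one common approach -- character
# shingling plus Jaccard similarity -- for catching lightly reworded
# copies of text that fact-checkers have already debunked.

def shingles(text: str, k: int = 5) -> set:
    """Break text into overlapping k-character fragments."""
    text = " ".join(text.lower().split())  # normalize case and whitespace
    return {text[i:i + k] for i in range(max(1, len(text) - k + 1))}

def jaccard(a: set, b: set) -> float:
    """Overlap ratio between two shingle sets (0.0 to 1.0)."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

# A tiny "identification library" of already-debunked text (made-up entry).
known_false = [
    "Scientists confirm chocolate cures the common cold overnight",
]

def looks_like_known_false(post: str, threshold: float = 0.6) -> bool:
    """Flag a post if it closely matches any debunked library entry."""
    post_shingles = shingles(post)
    return any(jaccard(post_shingles, shingles(entry)) >= threshold
               for entry in known_false)

# A lightly altered copy of the debunked claim is still caught,
# while unrelated text passes through unflagged.
print(looks_like_known_false(
    "Scientists confirm chocolate cures the common cold overnight!"))
print(looks_like_known_false("Local library extends weekend hours"))
```

At Facebook's scale, an exact pairwise comparison like this would be far too slow; production systems typically use hashing schemes (such as MinHash or learned embeddings) to approximate the same similarity check efficiently.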
Facebook said it has now expanded its content review program to 14 countries, and verification of pictures and videos has been extended to four of them; the company plans to reach more countries by the end of the year. Facebook stated that, thanks to these efforts, the fact-checking system has reduced false news on its platform by 80%.
Facebook has been stepping up its efforts to combat false news since December 2016, after a large volume of false news spread on social platforms during the U.S. presidential election and the company was criticized for failing to do enough to stop it. Facebook subsequently partnered with media organizations in countries such as Germany and France to limit the harm of false news uploaded to its platform.
Facebook has now been fighting false and fake news for a year and a half. The company stated that it will further improve its handling of fake accounts and false information, along with the related accountability systems, adding: "The project to combat false information will never end. We need to do more."
However, even as the move cracks down on fake news, Facebook's inspection of pictures and videos has worried some users, who fear it will infringe on personal information security and privacy. Some users have asked: Who gets to define whether an event is authentic? Does the scope of review extend to personal privacy? Could this feature be used by governments to monitor users' personal views?
In recent months, Facebook has been condemned by users and society for its data privacy. Earlier this year, political consultancy Cambridge Analytica illegally accessed the personal information of millions of Facebook users and used it for unfair purposes. Since then, Facebook has experienced several rounds of questions from Europe and the United States. Now the company promises greater transparency in handling user data.