Facebook grilled on Britain First page by MPs

Facebook has said it is reviewing the future of Britain First's profile page, following the removal of its leaders' pages from Twitter.

The social network said it was "very cautious" about removing political speech.

The details emerged as the Home Affairs Committee grilled Facebook, Google and Twitter on what they were doing to combat hate speech.

MPs said the firms had made progress but were still not doing enough.

Google promised an annual transparency report on the issue. Facebook and Twitter said they were looking at a similar course of action but did not commit to it.

Asked about Britain First, a far-right group, Facebook's director of public policy, Simon Milner, said the company was reviewing the page's future.

"Clearly there are issues with the pages but we are very cautious about political speech," he told MPs.

He added that, until recently, Britain First had been registered as a political party.

'Doing very little'

Conservative MP Tim Loughton accused technology giants of inciting violence through inaction.

"This is not about taking away somebody's rights to criticise somebody whose politics they don't agree with," he said.

"It's about not providing a platform - whatever the ills of society you want to blame it on - for placing stuff that incites people to kill, harm, maim, incite violence against people because of their political beliefs."

"You are profiting from the fact that people use your platforms and you are profiting, I'm afraid, from the fact that people are using your platforms to further the ills of society and you're allowing them to do it and doing very little, proactively to prevent them," he added.

Committee chairwoman Yvette Cooper said that as three of the "richest companies in the world", the firms "needed to do more" on hate speech.

She accused YouTube of failing to remove a racist video she had repeatedly flagged to it.

Ms Cooper described how, over the course of eight months, she repeatedly checked whether a propaganda video from far-right organisation National Action had been taken down, after Google agreed that it violated its policies.

She found that it remained on the platform for more than half a year.

"It took eight months of the chair of the select committee raising it with the most senior people in your organisation to get this down," Ms Cooper said. "Even when we raise it and nothing happens, it is hard to believe that enough is being done."

She added that the video remained on Facebook and Twitter even after it was flagged to Google, calling it "incomprehensible" that the information had not been shared.

Global terrorism

In response, Google's vice-president of public policy, Dr Nicklas Lundblad, said the firm had seen a "sea change" in the way it dealt with such content over the past year and was now turning to machine learning - a type of artificial intelligence - which it hoped would become "five times" more effective than human moderators and do the work of thousands of them.

Ms Cooper also told Google that, as a result of her repeated searches for the YouTube video, she had been recommended "vile" content.

"Is it not simply that you are actively recommending racist material into people's timelines? Your algorithms are doing the job of grooming and radicalising," the Labour MP said.

In response, Dr Lundblad said Google did not want people to "end up in a bubble of hate" and was working on identifying such videos and using machine learning to limit their features, so they would not be recommended to others or have any comments on them.

On the matter, Facebook's Simon Milner said: "Our focus has been on global terrorist organisations. One of the issues with this is that content from videos like this can be used by news organisations to highlight their activities.

"With this material, context really matters," he said. "There is a chance that we are taking down important journalism."

Cleaning up

He was also asked whether the social media firm would support legislation, of the kind being brought in by Germany, that will impose huge fines on social networks if they do not delete illegal content, including hate speech.

"The German legislation is not yet in action," he said. "It is asking us to decide what is illegal, not courts, and we think that is problematic."

Ms Cooper also grilled Sinead McSweeney, Twitter's vice-president of public policy, on why a series of abusive tweets - including racist comments aimed at MP Diane Abbott and death threats aimed at MP Anna Soubry - remained on Twitter.

Ms McSweeney said that the firm was increasing the number of people moderating its content, but declined to give a figure.

She said that Twitter provided dedicated teams that worked with parliamentarians. "Where we see someone getting a lot of abusive content, we are increasingly communicating to them within the platform," she said.

But she was unable to guarantee that all the tweets referred to by Ms Cooper had been removed.

"Right now, I can't say what you'd see. You can clean a street in the morning and it can still be full of rubbish by 22:00."

None of the three firms was prepared to answer a question about how much moderators were paid, saying it varied from country to country and depended on the skills and specialism of staff.

Ms Cooper said there had been a "shift in attitude" for the better since the three firms were last questioned.

All three admitted they still needed to "do better".

Image copyright: Reuters. Image caption: Yvette Cooper read out a series of abusive threats made to MPs on Twitter