
Nextdoor moderators scramble to address QAnon after Capitol attack

For months, Nextdoor moderators have struggled to address QAnon content on the platform's neighborhood sites. After last week's deadly attack on the Capitol, tensions between moderators and the company's policy team may have reached a breaking point.

Moderators have been asking Nextdoor to ban QAnon content since at least October, according to forum screenshots obtained by The Verge. Last week, moderators began pressuring the company directly in the National Leads Forum, a private forum for moderators on the site, posting concerns that Nextdoor's misinformation policies did not fully bar discussions of conspiracy theories like QAnon.

Following last week's pro-Trump riot at the Capitol, one user returned to an earlier QAnon thread, writing, "I am bumping this up. It's January 8th. Any policies yet? After the past week, we need some. I also wrote an email to Next Door Leadership about this three months ago and got no response."

It wasn't until five days after the riot that Nextdoor finally responded to the request, referring moderators back to the company's policy on violent content. In the post, Caty K., Nextdoor's head of community, wrote, "I want to reiterate that the broader Nextdoor team is committed to the safety of all members and communities on the platform." She continued, "The violent events that took place at the US Capitol last week are no exception."

But some Nextdoor moderators say that the company's misinformation policies don't meaningfully address QAnon and haven't been communicated well enough to help communities deal with the conspiracy theory. The company's misinformation policy asks moderators to report individuals who distribute "misinformation related to the Election and COVID-19," but it does not directly address conspiracy theories like QAnon. After the attack on the Capitol, many QAnon theories carry an implicit risk of inciting violence, yet moderators find it hard to justify removing them as straightforwardly violent content. At the same time, current Nextdoor moderation policies do not ban discussions of conspiracy theories outright.

“The problem is this policy is written so specific to election and Covid-19 information and does not mention any violation that can be used for things like misinformation around politics and inciting fear in the community,” one moderator wrote in the thread.

"Facebook has announced that it will be automatically removing content with the phrase 'Stop The Steal' and #StopTheSteal," Steve C., a California lead, responded. "Does Nextdoor plan to do the same?"

On Monday, Caty wrote that "Nextdoor views QAnon as a hate group" in response to a thread titled "FB has banned all QAnon Content – what is ND policy?" Caty continued, "If you see content or members advocating these ideologies, please report them to our team and we will handle. I recognize we do not have a list of groups available for you all to reference, and I will work on that to make things clearer, but for now this comment serves the purpose of confirming that QAnon content should be removed."

On Wednesday, Nextdoor confirmed to The Verge that it classifies QAnon as a hate group. Still, there has been no effort to communicate the QAnon policy to everyday users, and as of publication, Nextdoor has not updated the misinformation policies on its website to reflect the classification. "Right now we don't have plans to email it out [to moderators]," Caty said in response to a post asking whether the decision would be communicated beyond the forum.

Nextdoor also referred The Verge to its misinformation and violent content policies. "Any post or content on Nextdoor that organizes or calls for violence will be immediately taken down," a Nextdoor spokesperson told The Verge. "Nextdoor's Neighborhood Operations Team also uses a combination of technology and member reports to proactively identify and remove content."

Nextdoor has struggled to establish clear moderation policies in the past. Nextdoor neighborhoods are primarily self-governed, and unpaid "community leads" are in charge of reporting and removing content in their communities. This has led to content being wrongfully removed or wrongfully left up. Last June, The Verge reported that posts supporting the Black Lives Matter movement were being wrongly taken down by Nextdoor moderators.

In October, Recode reported that QAnon-related content flourished on the platform in the weeks leading up to the 2020 US presidential election. In one instance, Recode said, a user bombarded Nextdoor on Twitter for weeks before the platform removed a post "containing QAnon talking points."

According to Nextdoor’s rules, discussions of national politics are banned on the main community feed. As a result, public and private groups have grown to house these discussions. In forum posts obtained by The Verge, community moderators expressed worry over private groups that could be housing violent or extremist posts.

"How can we ensure locked Groups are not participating in harmful discussions?" Jennifer V., an Arizona moderator, wrote in a forum post Tuesday. "We have a LOT of pro-Trump/Patriot Groups that I worry about. I also worry about other Leads or Community Reviewers seeing me report the Groups and the backlash."

"My concern is QAnon content, as well as other content with conspiracy theories, promotions of violence, etc., that is in *private* groups that won't get reported because the members of the group WANT that content," Carol C., a Colorado moderator, wrote in the forums last week. "I saw some of this type of content in the public political groups that have since gone private."


Ex-PornHub moderators reveal life inside explicit video site being sued for $80m

Former PornHub moderators have lifted the lid on life at the world's largest online adult entertainment website, which is currently being sued for $80 million over alleged 'sex trafficking'.

Workers at MindGeek, PornHub's parent company based in Montreal, Canada, would allegedly watch up to 1,200 videos a day, spending hours categorizing and tagging sex acts and fetishes as well as reviewing 'suspicious' content, from puppies being kicked to death to child abuse, rape and incest.

Pornhub, which had 42 billion views in 2019, was considered the 'breadwinner' in a group of adult sites owned by MindGeek, including YouPorn and RedTube.

The ex-staffers claimed that PornHub’s guidelines were the most lenient and their job was ‘to find weird excuses not to remove’ videos.

They spoke of managers who felt 'untouchable', boasting of soaring profits at all-staff meetings, and who took too long to report and remove concerning content.

Mandatory quotas of 400 videos a day were introduced in the spring, leading to 'panic attacks' among some staff, who were under 'surveillance' and felt unable to watch so much questionable content.

Moderators who didn't meet the quota could be terminated, while others say they suffered 'burnout' from watching 'a lifetime' of pornography and upsetting content; some reported a lower sex drive as a result, and others a higher one.

‘As the leaders in the porn industry, in adult entertainment, I thought it was their moral obligation to do a lot more’, one whistleblower told Business Insider.   


PornHub is being sued for $80 million by 40 women who claim it 'profited' from 'sex trafficking' related to videos from former PornHub partner GirlsDoPorn

The revelations are the latest in a series of blows to the controversial adult entertainment company in the last few weeks.  

On December 15 MindGeek was sued for $80 million by 40 women who say the site profited from the 'sex trafficking' operation GirlsDoPorn.

In a complaint filed in California, the alleged victims say MindGeek ‘knew it was partnering with and profiting from a sex trafficking venture for years’.

On December 14 the site was forced to remove 10 million videos from unverified users, the majority of its content, following a New York Times report which said it was profiting from videos of child sexual exploitation, rape and revenge porn.

Even after the videos were flagged and removed, downloaded copies continued to circulate, often with severe personal consequences, the paper said.

PornHub now says it will only allow content from verified users, something other social media giants like Facebook, Instagram, TikTok, YouTube, Snapchat and Twitter have yet to implement.

Credit card companies Visa, Mastercard and Discover severed ties with the company soon afterwards, blocking customers from making purchases on Pornhub.

Users can still pay in cryptocurrency.  

In January a judge found that the owners and operators of GirlsDoPorn must pay $12.7 million for lying to women about how their explicit videos would be distributed.

PornHub said that it doesn’t knowingly allow images of sexual abuse of children.

In the latest revelations, whistleblower 'Brian' (a pseudonym, as he signed a non-disclosure agreement) told Business Insider that when he worked at MindGeek's Montreal offices a few years ago, employees were required to review a minimum of 1,200 videos a day, a number the company has disputed.

Brian also revealed the strange, internal logic that his team used to allow suspicious content through. 

If an incest video title said ‘his mother’, it was taken down, he said, but if it said ‘mother f—s son’, then they could argue she was a generic mother, of no relation to the son, and the video was approved. 

‘Our job was to find weird excuses to keep videos on our sites,’ said Brian.  

He added: ‘At one point we were debating what creatures were OK to be stepped on. Crickets, insects, and crawfish were OK, but not goldfish, because people have goldfish as pets.’

Later, when his team reviewed videos of women kicking puppies to death, MindGeek decided the rules were too lenient, and they prohibited stepping on any living creature.

MindGeek eventually made some of the changes Brian's team recommended, but he said some recent changes, like reporting suspicious videos to local authorities and adding more mental-health services, were too slow to arrive.

Brian quit several years ago after suffering burnout from all the videos.  

‘We joked that in a month we’d see more porn than someone would see in a lifetime,’ he said.

‘I talked about that with people I was working with — how doing this work affected them in their personal lives. Some people had a lack of sex drive, some people said it didn’t affect them, and some people said it increased their sex drive.’  

Pornhub was launched by Canadian college grad Matt Keezer in 2007 after he bought the domain name for a mere $2,750. Fellow Canadian Feras Antoon is currently the CEO of Pornhub's parent company MindGeek and, of the founders, the only one who retains a formal operational role. Keezer now runs a travel booking company.

In March, when video shoots halted due to COVID-19, some non-moderator teams were moved over to review content, another former employee, ‘John’, claimed. 

The new members allegedly underwent two training sessions before being handed an Excel document with tens of thousands of suspicious videos to check, with a mandatory quota of 400 a day.  

John said: ‘It was the first time ever that anyone was under this much type of surveillance, at least on my team, because we didn’t have to log hours. I had a coworker who was having panic attacks every week.’

He added that increased scrutiny on the company later caused its guidelines to become stricter, with some videos being flagged that wouldn't have been before, including performers simulating sex scenes involving a person who was asleep, some BDSM content, and videos with visible bottles of alcohol.

John added that the lack of industry consensus on 'best practices' among Facebook and other social media sites was no excuse, and reported hearing the CEO bragging at the annual February all-staff meeting about how much money MindGeek was making.

‘You know it’s an issue. You always have to deal with it,’ John said. ‘The fact that they didn’t shows you how untouchable they felt.’

Adult performer Allie Awesome was critical of Pornhub’s decision to remove amateur videos, saying it’s a ‘crusade against the sex industry and the workers who comprise it’

A spokesman for MindGeek, which is fiscally based in Luxembourg, said safety is the company’s top priority.

In an email, the spokesman highlighted steps the company has taken over the past several years and in the past week.

The company said it was targeted by anti-pornography groups, one of which got more than 2 million signatures on a petition to shut down Pornhub.  

MindGeek’s financial accounts, filed in Luxembourg, show that profit plummeted in 2018, the most recent year it filed results.

The company, which is privately held, earned $29 million in profit in 2018, compared with $102 million in 2017. 

PornHub was founded in 2007 by college friends Matt Keezer, Stephane Manos, Ouissam Youssef and Feras Antoon, who were branded 'the kings of smut'.

They scaled it quickly between 2007 and 2010 before selling it for $140 million to a German billionaire.

Antoon bought back the billionaire's stake in 2013 and continues to serve as the CEO of PornHub's parent company MindGeek.

MindGeek is privately held, so its valuation is unknown, but it has 1,200 global employees.


Facebook content moderators demand better coronavirus protections


More than 200 content moderators at Facebook have signed an open letter to Mark Zuckerberg demanding better COVID-19 protections. They say management has needlessly put their lives at risk by forcing them back into the office, even as full-time employees work from home until July 2021.

On October 12th, content moderators working for the third-party contracting firm Accenture in Austin, Texas, were asked to return to the office. The company implemented additional cleaning measures and asked employees to wear masks. Despite these efforts, a contractor tested positive for COVID-19 shortly after returning to work, according to The Intercept.

Facebook has been under intense pressure to stop the spread of viral misinformation and take down incitements to violence, particularly around the 2020 US election. During the pandemic, it relied more heavily on artificial intelligence to detect content that violated its policies. “The AI wasn’t up to the job,” content moderators say in the letter, which was published by the law firm Foxglove. “Important speech got swept into the maw of the Facebook filter — and risky content, like self-harm, stayed up.”

While high-risk workers do not have to return to the office, contractors say the policy doesn’t extend to those who live with high-risk individuals. They’re asking Facebook and Accenture to allow moderators to work from home if they live with someone who is high risk.

Workers are also demanding hazard pay of 1.5x their typical hourly wage and asking Facebook to stop outsourcing their work. “Facebook should bring the content moderation workforce in house, giving us the same rights and benefits as full Facebook staff,” the letter says.

The demands reflect longstanding tensions between content moderators and the big tech companies for which they contract. While these workers are asked to look at some of the most vile content on the internet, their jobs often lack the pay and benefits of full-time employees. Some moderators at Google and YouTube have developed PTSD from their work.

Sixty-three workers signed the letter to Facebook by name. Foxglove says another 171 across the US and Europe signed anonymously. "This is the biggest joint international effort of Facebook content moderators yet," the law firm tweeted. "Many more moderators in other sites wanted to sign, but were too intimidated by Facebook – these people are risking their livelihood to speak out."

In a statement emailed to The Verge, a Facebook spokesperson pushed back on the idea that content moderators aren’t able to work from home and don’t have sufficient protection. “We appreciate the valuable work content reviewers do and we prioritize their health and safety. While we believe in having an open internal dialogue, these discussions need to be honest,” they wrote. “The majority of these 15,000 global content reviewers have been working from home and will continue to do so for the duration of the pandemic. All of them have access to health care and confidential wellbeing resources from their first day of employment, and Facebook has exceeded health guidance on keeping facilities safe for any in-office work.”







Former Facebook moderators worried for the upcoming US election


When Viana Ferguson was a content moderator for Facebook, she came across a post that she immediately recognized as racist: a photo of a white family with a Black child that had a caption reading “a house is not a home without a pet.” But she had a hard time convincing her manager that the picture was not just an innocent photo of a family.

“She didn’t seem to have the same perspective, there was no reference I could use,” Ferguson said. She pointed out that there was no pet in the photo, but the manager also told her, “Well, there’s also no home in the picture.”

Ferguson said it was one of several examples of the lack of structure and support Facebook moderators face in their day-to-day jobs, the vast majority of which are performed at third-party consultancies. Ferguson spoke on a call organized by a group that calls itself the Real Facebook Oversight Board, along with Color of Change, a progressive nonprofit that led the call for a Facebook advertiser boycott over the summer, and UK-based nonprofit technology justice organization Foxglove.

"In 2020, on the world's largest social network, clickbait still rules, lies and hate still travel on Facebook like a California wildfire," said Cori Crider, co-founder of Foxglove. "Things are still so bad that in two days, Mark Zuckerberg will testify once again to the Senate about what Facebook is doing to address this problem, and protect American democracy."

Crider said Facebook points to its massive workforce of content moderators as evidence it takes the issues seriously. “Content moderators are the firefighters on the front lines guarding our elections,” she said. “They’re so critical to Facebook’s work that Facebook has hauled them back into their offices during the pandemic and kept them in the offices.”

The challenges of working as a Facebook moderator, both in the US and overseas, have been well-documented, and years of consistent complaints about viewing traumatic content for hours on end led to the company agreeing to pay $52 million to current and former US-based moderators to compensate them for mental health issues developed on the job.

Former moderator Alison Trebacz said on the call she remembered the day after the 2017 mass shooting at Las Vegas’ Mandalay Bay casino, her work queue was full of videos of injured and dying shooting victims. But to mark a video as “disturbing,” moderators had to verify that a person was completely incapacitated, something that was nearly impossible to do in a timely way. “We end up as moderators and agents trying to make these big decisions on popular content without having full direction and guidance within five minutes of the event happening,” she said.

As part of her job, Trebacz said she and other moderators regularly had to view graphic content, and she felt mentally drained by the nature of the work. She was paid $15 an hour and said while she was there, from 2017 to 2018, there was little mental health support. The company used nondisclosure agreements, which limited moderators from being able to talk about their jobs with people outside the company, adding to the overall stress of the job. The moderators are independent contractors, and most don’t receive benefits or sick leave, noted Jade Ogunnaike of Color of Change.

“When companies like Facebook make these grand statements about Black Lives Matter, and that they care about equity and justice, it is in direct contrast to the way that these content moderators and contractors are treated,” Ogunnaike said.

The group wants to see Facebook make moderators full-time employees who would receive the same rights as other Facebook staff, and to provide them with adequate training and support. While the company relies on artificial intelligence to help root out violent and problematic content, that's not sufficient to address more nuanced instances of racism like the one Ferguson mentioned.

But Trebacz pointed out that human moderators aren’t going away; rather, they’re becoming even more important. “If Facebook wants valuable feedback from the people doing the bulk of the work, they would benefit by bringing them in house.”

Ferguson said she saw a sharp uptick in hate speech on Facebook following the 2016 US presidential election. She said the platform was ill-equipped to handle newly emboldened people posting more and more hateful content. If a moderator removed a piece of content later found not to be against Facebook rules, they could be disciplined or even fired, she added.

Trebacz said she hoped Facebook would provide more real-time communication with moderators about content decisions, and that more decisions would be made preemptively rather than reactively. But she said she expects the next few weeks will be "outrageously difficult" for current content moderators.

“I think it’s going to be chaos,” she said. “Truly.”

Facebook did not immediately reply to a request for comment Monday. The Wall Street Journal reported Sunday that the company is bracing for possible chaos around next week’s election with plans to implement internal tools it’s used in at-risk countries. The plans may include slowing the spread of posts as they begin to go viral, altering the News Feed algorithm to change what content users see, and changing the rules for what kind of content should be removed.





Facebook moderators in Dublin reportedly forced to work in office despite lockdown


Facebook moderators working as independent contractors in Dublin say they’re required to work in the office, despite a new nationwide lockdown across Ireland, The Guardian reports. The moderators, employed by contractor CPL, say they were told they’re considered essential workers and therefore not bound by Ireland’s Level 5 restrictions, which require people to work at home unless they’re “providing an essential purpose for which your physical presence is required.”

Earlier this week, Ireland became the biggest country to implement a strict lockdown to try to contain a new spike in coronavirus cases. As of Thursday, the country had recorded more than 54,000 cases of COVID-19 and 1,871 deaths, Ireland's Department of Health reported.

Facebook said in a statement its “partners have started to bring some content reviewers back to offices” in recent months. “Our focus has always been on how this content review can be done in a way that keeps our reviewers safe.” A moderator considered vulnerable can work from home, according to the statement, and the company is “working with our partners to ensure strict health and safety measures are in place and any confirmed cases of illness are disclosed.”

Facebook requires physical distancing and reduced capacity at its workplaces, as well as mandatory temperature checks and mask-wearing. It also says it conducts daily deep cleaning.

But according to The Guardian, workers said when they returned to offices in Ireland in July, they were told a confirmed COVID-19 case would result in a 72-hour shutdown of the office. The office has remained open, despite three new cases since the end of September.

In May, Facebook announced it was shifting its remote work policy to allow most of its 48,000 employees to work from home permanently. But Facebook has about 15,000 paid contractors employed by third-party companies, and the workers are ineligible for most of the benefits that corporate employees receive.

And it’s not the first time the company’s contractors have required moderators to work in-office during the pandemic. Earlier this month, The Verge reported that Facebook moderators in Austin, Texas employed by third-party contractor Accenture were being forced to return to the office there as well.

Facebook moderators can spend much of their time reviewing graphic videos, hate speech, and other disturbing material, and many have developed post-traumatic stress disorder. In May, Facebook reached a $52 million settlement with current and former moderators that recognized the job's severe impact on workers' mental health.





Facebook Moderators Pressured to Return to Office Despite Pandemic: Report


Facebook moderators were reportedly pressured by their employer Genpact to return to the office despite safety concerns over the ongoing coronavirus pandemic. Genpact, a third-party contracting firm that handles Facebook's moderation operations, has reportedly been asking its employees to return to its offices in Hyderabad since as early as July. Genpact claimed that most parts of Facebook's moderation services needed to be performed in the office due to privacy issues and other technical hurdles. The report also cited some senior moderators from the firm who said that their jobs were at risk if they chose not to return to the workplace.

According to a report by Rest of World, Genpact asked employees to return to the office located in Hyderabad’s tech hub, HITEC City. Genpact is one of the firms that Facebook outsources to take care of its moderation operations around the world. The firm employs close to 1,600 Facebook moderators, as per the report.

While maintaining that Facebook's moderation services had to be performed from the office due to privacy and technical concerns, Genpact told Rest of World that in-office work was done voluntarily. "To make this manageable, safe, and clear, employees need to sign a weekly form that asks them to voluntarily agree to this," a Genpact spokesperson said.

The publication also spoke to four current and former Genpact employees, who said anonymously that moderators were asked to return to the office as early as July "to tackle sensitive content, including posts involving child exploitation, suicide, and other matter that could lead to real-world harm." India had imposed strict lockdown rules between March and May as a preventive measure against COVID-19. By July, the country had entered its second phase of reopening, Unlock 2.0, when certain restrictions were eased but lockdown measures were still in place in containment zones and night curfews were still in effect in most areas.

The report also cited a senior moderator at Genpact who said that employees were informed that their jobs could be at risk should they choose not to return to office. “The operations team told them these are important orders. There’s a threatening factor behind [it]. People are forced to go to work, even if they are not happy to,” said the moderator.

Gadgets 360 has reached out to Genpact for a response. This report will be updated when we hear back.

Back in August, Facebook said its corporate (non-third-party) employees could continue to work from home until July 2021 due to the pandemic and that it would give them $1,000 (roughly Rs. 75,000) for home office needs. Other tech giants including Google have also taken similar steps to ensure employee safety.

A Facebook spokesperson told Gadgets 360, “Our focus for reopening any office is on how it can be done in a way that keeps our reviewers safe. To do this we are putting strict health and safety measures in place, making sure they’re followed, and addressing and disclosing any confirmed cases of illness.”

Update: Facebook issued a clarification disputing the report. The company told Gadgets 360 that "all reviewers currently working in-office in India have done so on an opt-in basis. Anyone wishing to continue working from home has been able to do so with no impact on their pay and benefits. Facebook and our partners are meeting or exceeding guidance on maintaining a safe workspace and a wide variety of additional health and safety measures are put in place for anyone returning to the office."

It also added that “some types of content can only be done by reviewers in-office and that review is important to keeping our platforms safe.”







Facebook moderators in India were pressured to return to the office despite COVID-19 concerns


Facebook moderators located in India were pressured by their employer, third-party contracting firm Genpact, to return to the office despite safety concerns over the COVID-19 pandemic, according to a new report from nonprofit publication Rest of World.

Genpact, one of many firms Facebook outsources moderation to around the world, employs roughly 1,600 moderators in India, where employees analyze offensive and disturbing content posted in large volumes to Facebook’s platforms for potential rule violations. The company was pressuring employees to return to its offices in Hyderabad as early as July, Rest of World reports, with Genpact claiming key parts of its moderation services had to be performed in the office due to privacy issues and other technical hurdles.

Genpact claims any in-office work was done voluntarily. "To make this manageable, safe, and clear, employees need to sign a weekly form that asks them to voluntarily agree to this," a Genpact spokesperson told Rest of World. But according to interviews with employees, Genpact management allegedly told some employees that their jobs could be at risk if they chose not to perform in-office duties.

Rest of World also reports that India’s IT industry was deemed essential in the earliest days of coronavirus-related lockdowns throughout the country. This meant many of the firms providing outsourced labor for US technology companies were able to circumvent restrictions on office work to keep employees coming in.

Facebook did not immediately respond to a request for comment.

Facebook employs more than 15,000 content moderators around the globe, the vast majority of whom are contractors without access to many of the same benefits as corporate employees. Those contractors are also subjected to working conditions that involve viewing child exploitation, violence, terrorism videos, and other material that may cause post-traumatic stress disorder and related mental health issues.

Meanwhile, corporate Facebook employees are able to work from anywhere they like until July of next year, after which they can choose to remain permanently remote so long as they agree to a potential wage reduction depending on location. Corporate employees also receive benefits like stock options and a $1,000 stipend to cover remote work costs and office equipment, in addition to the free transportation and on-site food perks they enjoyed prior to the pandemic.

Earlier this month, The Verge reported that one of Facebook's primary moderation partners, the consulting firm Accenture, began telling its US-based employees in Austin, Texas, that they would have to return to the office despite concerns about COVID-19 safety precautions. Accenture initially refused to answer questions from employees worried about their health and safety, and the firm said it would not be providing increased pay.

In a statement emailed to The Verge at the time, Accenture wrote: “We are gradually returning people to client offices in cases where there is a critical business reason to do so. We prioritize the safety and well-being of our people, and only return people to offices when we are comfortable that the right measures and protocols are in place, properly evaluated for each country or local situation.”

Facebook says its moderators are still being paid regardless of how their jobs may be affected by COVID-19 safety measures, office closures and cleanings, and other work disruptions. But the company has been using financial incentives to get content moderators at third-party firms to resume working full time in the office, The Washington Post reported in May.





Facebook moderators at Accenture are being forced back to the office, and many are scared for their safety


Facebook moderators employed by third-party contracting firm Accenture and based in Austin, Texas, are being forced to return to the office on October 12th, The Verge has learned.

Employees, almost all of whom are contractors, were informed of the new policy at a company-wide town hall meeting today, say multiple people familiar with Accenture's plans. Accenture, which has allowed its workforce of hundreds of moderators to work from home since March due to the COVID-19 pandemic, has not given employees a reason why they must return to the office. Accenture did not take questions at the town hall, telling concerned employees that it would schedule a second call to answer COVID-specific questions regarding matters like sick leave and time off. High-risk workers are being asked to make alternate arrangements and will not have to come in.

Facebook has an estimated 15,000 paid contractors, almost entirely employed by third-party firms and therefore not eligible for many of the same benefits as corporate employees. These contractors often spend their days looking at graphic videos, hate speech, and other disturbing material posted to the social network in large volumes every day. Some Facebook moderators, including those employed by Accenture, have developed post-traumatic stress disorder, and in May Facebook settled with current and former moderators for $52 million in an agreement that acknowledged the job's severe negative mental health effects.

Since the start of the COVID-19 pandemic, however, tech employees and a large swath of those companies' contract workforces have shifted to remote work. Some platform owners, including Facebook and YouTube, say this shift has hindered their moderation work because of privacy requirements that oblige employees to work in the office on protected machines that handle sensitive user data.

YouTube said in March it would rely more on artificial intelligence to fill the gap, but last month the company said its AI moderation failed to match the accuracy of humans after a huge swell in video removals and incorrect takedowns, and it decided to bring back more human moderators to address the issue. Facebook has not openly talked about how it has handled moderation during the pandemic, but one Accenture contractor confirms the firm has allowed its workforce to work from home since March.

Facebook did say at the onset of the pandemic that contract workers would receive full salaries even in the event they could not perform all of their required duties, but the company denied those contractors access to a $1,000 bonus it paid to all full-time corporate employees to purchase remote work equipment like office chairs and desks, according to TechCrunch.

One Accenture contractor tells The Verge the company will require everyone to wear masks, and only four people at a time may use the elevators in the building. Accenture also says it will clean high-contact surfaces every two hours and the entire office every 24 hours, as well as take employee temperatures and enforce socially distanced seating.

“However, there is no staggering of our return. The entire community ops teams in Austin and California will be returning all at once, and will be given no other options,” says one concerned Accenture contractor. “We [are] expected to keep our regular shifts, even though this will lead to everyone arriving at the office at the same time and creating a bottleneck. We were told in a separate town hall on Monday that they would be returning ‘only essential employees’ to the office ‘slowly,’ and their message today directly contradicts that.”

The employee tells The Verge that there has been no mention of added sick time or changes to the attendance policy, which they fear will mean employees come into work sick. "Many of us asked questions about this in the town hall question box through Microsoft Teams, and our questions were ignored," the employee says. "We're being told we're having to return because the 'essential' work we do must be done in an office setting."

In a statement, a Facebook spokesperson said, “A lot of the work done by the Accenture Austin team involves work streams that can’t be done from home.” The company is currently under intense scrutiny to enforce policies banning incitements to violence and election misinformation — policies human moderators are much better at enforcing than algorithms.

In an internal letter, Accenture contractors asked for increased pay and benefits due to the risks associated with returning to the office during a pandemic. “Because much of our work is too sensitive to be done at home, there has been a recent push from management to return to working in the office,” they wrote. “Accenture has a responsibility to take care of employees that put themselves at risk.”

In a list of demands, the moderators asked for hourly wages to be increased by 50 percent, and for the company to cover all costs associated with the testing and treatment of COVID-19. They also asked for paid time off should they get sick with the virus.

According to another individual with knowledge of Accenture’s town hall meeting, a contract worker for the company contracted COVID-19 from the office just two weeks ago. “They are mandating that over 300 contract workers are to return to the office on October 12th,” the individual tells The Verge. “They were asked multiple times during this meeting what medical data they had used to confirm that this would be safe for their employees. They refused to answer. The majority of the employees are in extreme distress over this news and worried for their safety.”

Meanwhile, a vast majority of Facebook’s full-time corporate workforce now enjoys a permanent remote work policy, with employees not mandated to inform Facebook of where they’re going to be permanently working from until July of 2021. That’s when the company intends to fully reopen its headquarters in Menlo Park, California. An Accenture contractor says full-time Facebook employees in Austin are not being asked to return to the office.

Lawyers involved in Facebook's settlement with content moderators earlier this year determined that as many as half of all Facebook moderators may develop mental health issues on the job. Also central to the settlement was Cognizant, a competitor of Accenture and a similarly structured third-party contractor, which decided to exit the moderation business last fall following two investigations from The Verge into the company's working conditions.

“So no sick days, no hazard pay, no staggering of the return to the office, and only just a week and a half ago, Texas saw the largest spike in COVID cases since the pandemic began,” says one of the Accenture moderators. “They are confident that they can handle this rush of people back to the office because they’ve had a handful working from a different office voluntarily, so now they’re being cavalier with our lives.”

In a statement emailed to The Verge, Accenture wrote: “We are gradually returning people to client offices in cases where there is a critical business reason to do so. We prioritize the safety and well-being of our people, and only return people to offices when we are comfortable that the right measures and protocols are in place, properly evaluated for each country or local situation.”

Update October 1st, 6:22PM ET: Added a statement from Accenture.
Update October 1st, 6:54PM ET: Added a statement from Facebook.


