More than one hundred thousand people work as online content moderators, viewing and evaluating the most violent, disturbing, and exploitative content on social media. In a new book, “Behind the Screen,” Sarah T. Roberts, a professor of information studies at U.C.L.A., describes how this work shapes the moderators’ professional and personal lives. Roberts, who conducted interviews with current and former content moderators, found that many work in Silicon Valley, but she also travelled as far as the Philippines, where some of the work has been outsourced. From her research, we learn about the emotional toll, low wages, and poor working conditions that characterize most of this labor. Roberts never disputes that the work is crucial, but she raises the question of how highly companies like Facebook and Google actually value it.

I recently spoke by phone with Roberts. During our conversation, which has been edited for length and clarity, we discussed why finding and deleting offensive content is so tricky, why the job is so psychologically taxing, and the fixes that could help these workers and make them better at their jobs.

What is it about the lives of people doing this work that you thought was important for people to understand?

I came to this topic in 2010. The nature of the work demanded total psychic engagement and commitment from the moderators in a way that was disturbing, because the flow of content was one they could not predict, and they were always open to anything at any time. People were flocking to these platforms, in no small part, at least in the American context, because they were being led to believe, either tacitly or, in some cases, overtly, that being online in this way would allow them to express themselves. What they were being told was that you have your thought, or you have your thing you want to express, you share it on the platform, and you can broadcast it to the world. YouTube’s tagline was “Broadcast yourself.”

They were being sold on this notion that you could blast your thoughts around the world in this unfettered way, and in fact it turned out that there were big buildings filled with people making really important decisions about whether or not that self-expression was appropriate. And the platforms themselves were saying nothing about it. They weren’t saying who those people were, where they were, and they weren’t really saying on what grounds they were making their decisions. There was just total opacity in every direction. I was curious about the difficult nature of the work, and then the secrecy and the lack of communication, and what that meant for the world’s orientation toward social media as a substitute for the public square.

What are the people who hold these jobs doing on an average day?

I was looking at the rank-and-file people, who would be fairly new not only to this particular work of commercial content moderation but also to the tech industry. It’s typically considered an entry-level job, which meant that a lot of young people were doing it, though not exclusively. In all the cases I’ve looked at, they’ve tended to be fairly well-educated people, college grads. Again, these were people working at élite Silicon Valley firms. But, instead of coming into those firms as full-badge employees with a career trajectory in front of them, they were coming in through contract labor, third-party sourcing. They were coming in at relatively low wages, especially in relation to any peers they might be working side by side with in such a place. And, in the case of the United States, they didn’t have health care provided to them through this arrangement, which matters when we think about the psychological issues or other health issues that come up on the job, because the way that people in this country get health care is through their employment.

The work tended to be something that you would likely not do for a long time. You would probably do it for a couple of years and either be term-limited, because of the contractual arrangements that the firms had with these third-party contractors, or you would just burn out. You would either no longer be able to take it, or you would become so desensitized that you were no longer any good at the job.

This burnout, or desensitization—what is it that’s going on in their work that’s causing this?

What these people were doing was really a front-line decision-making process, where they would sit in front of the screen and jack into a queue system that would serve up to them content that someone else, someone like you or me, had encountered on the platform and had a problem with: we found it offensive, we found it disturbing, maybe it was really, really bad, or illegal activity, or somebody being harmed. And someone like us would report that.

That material would get aggregated and triaged into queues, and the people on the other side of the queue were these moderators, who would then look at it, usually just for a matter of seconds because of the sheer volume. They were expected to have memorized the company’s internal policies about what that company allowed or disallowed. Then they would apply that policy to the particular piece of content, and ultimately it was really a binary decision: this should stay up, or this should come down, and it should come down on these grounds. Then they’d be on to the next piece of content.

Sometimes what they would see would be totally innocuous. But the next piece could be something very shocking and abhorrent. And these people were called upon to make sometimes thousands of these decisions per shift.
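Roberts is describing a human workflow rather than a piece of software, but, for readers who think in code, here is a minimal sketch of the report-triage-review loop she outlines. It is purely illustrative: the names (UserReport, triage, review, POLICY_GROUNDS) and the reduction of a policy manual to a lookup table are assumptions for the sake of the example, not any platform’s actual system.

```python
from collections import deque
from dataclasses import dataclass


@dataclass
class UserReport:
    """A piece of content flagged by an ordinary user (hypothetical schema)."""
    content_id: str
    reason: str  # e.g., "violence", "hate-speech", "spam"


# Reports are aggregated and triaged into queues, here keyed by reason.
queues: dict = {}


def triage(report: UserReport) -> None:
    """Route an incoming report into the queue for its reported reason."""
    queues.setdefault(report.reason, deque()).append(report)


# The internal policy that moderators memorize, reduced here to a table
# mapping a report reason to the grounds for removal (an assumption).
POLICY_GROUNDS = {
    "violence": "graphic-violence policy",
    "hate-speech": "hate-speech policy",
}


def review(report: UserReport) -> tuple:
    """The moderator's binary decision: leave the content up, or take it
    down on specific policy grounds."""
    if report.reason in POLICY_GROUNDS:
        return (False, POLICY_GROUNDS[report.reason])  # take down
    return (True, "no policy violated")  # leave up


# Moderators work through a queue item by item, thousands of times a shift.
triage(UserReport("vid-123", "violence"))
while queues.get("violence"):
    stays_up, grounds = review(queues["violence"].popleft())
    print(stays_up, grounds)
```

The point the sketch makes concrete is the shape of the job: the moderator never chooses what arrives, only renders the up-or-down verdict, item after item.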

What did you find are the psychological costs of doing this? And, second, did you find that the company cared at all that this is happening to these workers?

To the first point, obviously I’m not a trained psychologist. But, in their own words, a lot of the workers would claim that they didn’t have any ill effects. And I would take that at face value, and then we would continue talking and engaging, and they would say things to me, like, “You know, since I took this job, I’ve really been drinking a lot. I just come home at night, I don’t really want to talk to anyone.” Or, “I find myself avoiding social situations.”

One case that really stood out to me was a person who told me over and over again that he could handle it, that the work had no impact on him personally. Then he told me about one night when he was at home in San Francisco with his girlfriend, and they were on the couch. She scooted over to become intimate with him; they were making out, getting close. And suddenly he just stiff-armed her, like the football move, and shoved her away. And he said to me, “You know, I couldn’t tell her that the reason I did that is because an image of something I’d seen at work that day popped into my mind in this intimate moment, and it just shut me down.” That’s a good stand-in for the sorts of things that people disclosed.

One woman used to work for MySpace, back when it was a big thing, and she told me that she’s now a bookkeeper. She got as far away from dealing with humans as she could and into the dealing-with-numbers business. She told me that she wouldn’t shake people’s hands for about three years after she quit her job. I asked her why, and she said it was because she knew people were disgusting.

Did you get a sense that the social-media companies had clear policies on the way they wanted this content looked at? And, if not, do you think that’s because it’s a really complicated issue or because they didn’t care that much?

That went both ways. They often had really, really, really specific policies that most people might consider a little bit ridiculous or absurd, that would remove the agency of the moderator to make a decision. But then there was another phenomenon where there would be a particular type of content that one of these front-line people, who saw it all the time, felt violated the spirit if not the letter of some greater principle that was supposed to be at play.

The example I’ll give there is blackface. One person that I talked with said time and again he would see these videos that were filled with blackface, and he would go and argue with his supervisor, saying, “This is racist, we supposedly don’t allow racist or hate speech on our platform,” and he could get no traction. So the policies that were in place almost parodied themselves. They were so specific on the one hand and totally missing the forest for the trees on the other that you really had to embed yourself into the logic of the particular platform, and of course every platform has its own set of policies that it makes up.

I think they cared enough that they had an entire apparatus devoted to creating, designing, and thinking through their policies, but what became clear to me through the course of this work was that the primary function of people doing commercial content moderation at these platforms was brand management for the social-media platform itself. There would be a great side benefit of keeping some bad stuff out of people’s way, of “cleaning up” the platform. But ultimately this was in the service of the brand, so that the brand could continue to function as a site where advertisers might want to come. And so I feel that this whole practice really laid that bare for me.

Did you get some larger sense of how important these issues were to the company based on how the workers were treated?

Yes, but again it was sort of this talking out of both sides of their mouths, where on the one hand anyone who was engaged with these practices at these companies knew they were so important, maybe one of the few sorts of gate-keeping mechanisms that could really be relied upon. But, on the other hand, bringing in crews of contractors said something very different, and the moderators themselves were really aware of that, and felt pretty disposable in most cases.

You also go to the Philippines in this book, and you talk to people from other countries, such as Mexico. What are the consequences of outsourcing these jobs for the quality of the work being done? And I don’t ask that to imply that people abroad can’t do the job as well.

I think there is a precedent for outsourcing this type of service work, and we see it in the call-center industry. The same kinds of problems that are present in that work are present in this particular context: cultural, linguistic, contextual, and political dissonance and distance, for a group of people being asked to adjudicate and make decisions about material that emanates from one part of the world and is destined for another, and that may have absolutely nothing to do with their day-to-day lives.

I think a second thing is that the marketplace has chased a globalization solution for the same reasons it has in other industries, which are the questions of: Where can we get the cheapest labor? Which countries are lax in terms of labor protections? Where is labor organizing weak? Where is there a huge pool of people for whom this job might be appealing because it’s better than the other jobs on offer? It’s not a simple case of everyone in the Philippines who does this work being exploited, and I was trying hard not to make that claim in the book. But, at the same time, the United States sends the work to the Philippines for a reason. It sends the work there because Filipino people have a long-standing relationship, so to speak, with the United States, which means they have a better facility for understanding the American context. That familiarity has actually worked in the favor of the people in the Philippines who do this work.

It’s worrisome to see those kinds of colonial traditions and practices picked up again, especially in this digital marketplace, this marketplace of the mind that was supposed to be deliverance from so many of the difficult working conditions of the twentieth century. So I think that’s the big thing about the way that this plays out on the global stage. The companies have a problem that they don’t have enough people to do the work. And so they are pulling out all the stops in a way to find people to do the work, but it’s still not nearly enough.

What could be done to make the lives of these workers better, given that this is a job that needs to be done? And it needs to be done by smart people doing it well, who need to be very well-trained.

This is a question that I’ve often posed to the workers themselves, because I certainly am not possessed of the answers on my own. They want better pay. And I think we can read that in a lot of ways: they want better pay, and they want to be respected. The nature of the way the work has been designed has been for the work to be secret. In many cases, their N.D.A. precludes them from even talking about the work. And the industry itself, in that sense, treated the job as a source of shame, something to be hidden. The companies were not eager to tout the efforts of these people, and so instead they hid them in the shadows. If nothing else, that was a business decision and a value judgment that could have gone another way. I think there’s still a chance that we could understand the work of these people in a different way and value it differently, collectively. And we could ask that the companies do that as well.

There’s a rich history of labor organizing and worker-led, or worker-informed, movements, and in this case it might have to be region by region or specific to particular parts of the world. Or it could be something that crossed geographic and cultural boundaries where workers learn to identify with each other despite where they’re located.

We talk a lot about automation, and that is the tech companies’ answer, or at least the answer that gets foregrounded. But if you talk to anyone in the industry who’s in the know, the likelihood of humans going away anytime soon is pretty much nil. We also need to support these workers with mental-health care. And there are things we can do technologically to make some of the content less difficult to look at.

Are there any major companies in the realm of social media that you think take this seriously, or that have taken major steps toward taking it seriously?

I do think that since the time I started looking at this issue, there has been a sea change, and I don’t think anybody would accuse me of being a champion of industry around this. But I have seen a shift such that many of the big tech firms that I think about and am familiar with, which are the big American ones, have seriously taken up this issue, or they’ve shown up at the table to have conversations with civil society and with advocacy organizations and other people like me.

Facebook, about ten days ago, announced a major initiative to raise the base pay of all of its content moderators. I was thrilled about that. On the other hand, we could read between the lines of such an announcement to learn that, until now, these people were probably making minimum wage, or close to it. And we could also read the deafening silence from other firms as a sign that they haven’t done that and aren’t really willing to do it yet. Because, if they were, they’d be issuing a press release, too. We’ve got a ways to go on that.