Monday, 20 March 2017

User Responsibility in a Time of Misinformation


A couple of weeks ago I wrote about Google’s new API, Perspective, which filters hurtful comments from the open comments sections on publishers’ websites. The goal of the tool is to improve user experience by making comments sections more suitable for productive, diverse, troll-free discussion. The API is no doubt a response to the growing problems of online trolling and the widespread “fake news” pandemonium across the internet. That’s likely the same motivation behind Google’s recent effort to flag upsetting and offensive content online.
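For context, Perspective is exposed through Google’s Comment Analyzer API, which returns a probability score for attributes such as TOXICITY. Below is a minimal sketch of how a publisher might score an incoming comment, assuming an API key with Perspective access and the v1alpha1 endpoint; the 0.8 threshold is an arbitrary value chosen purely for illustration.

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder; requires access to the Perspective API
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       "comments:analyze?key=" + API_KEY)

def toxicity_score(comment_text):
    """Ask Perspective for a TOXICITY probability (0.0 to 1.0) for one comment."""
    payload = {
        "comment": {"text": comment_text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=payload)
    response.raise_for_status()
    scores = response.json()
    return scores["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# A publisher might hold high-scoring comments for human moderation.
if toxicity_score("Example comment text") > 0.8:
    print("Hold for moderation")
```

How a site acts on that score is up to the publisher; Perspective only supplies the number.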

The Problem of Misinformation

A growing number of users seek out information online that reinforces their existing opinions. That isn’t necessarily a bad thing, but it becomes problematic when it is used to reinforce radical, untrue, or discriminatory opinions. This has been especially true of user behavior over the last couple of years, as the global climate has grown tense over terrorism and aggressively polarized political affairs (think: the U.S. presidential election of 2016).

As a direct result of this polarization, users began producing and seeking out an influx of information that reinforced their opinions, arguments, and beliefs. While there has never been a greater need for accurate content online, the reality is that the production of and user demand for wildly polarized, radicalized, one-sided content has crowded out factual, proven, trustworthy information.

Think of it like this: in the month leading up to the presidential election, you likely saw content in your Facebook feed with titles like “Proof that the Majority of Muslims in the U.S. are Working for ISIS” or “Hillary Clinton Kicked a Navy Seal.” Unfortunately, and embarrassingly, highly polarized and clearly false news headlines like these were among the most shared and viewed content pieces online.

As a consequence, that is the type of information that ranks at the top of search results. This is a problem because, in response to user behavior, the algorithms end up ranking extremely low-quality content ahead of the most reliable content.

Google Adds to Guidelines for Quality Raters

Google has had quality raters (more than 10,000 contracted individuals) for a long time. They conduct searches based on the real queries Google sees from its users and try to identify issues with the websites that come back. More likely than not, Google uses the quality raters’ findings to inform future updates and algorithms. When the quality raters evaluate web page content, they follow the Search Quality Evaluator Guidelines provided by Google. The comprehensive guidelines, roughly 160 pages long, outline in detail how the quality raters are to evaluate websites. They account for nearly every aspect that factors into the quality of a website, including reviews, advertisements, quality of page content, and much more. Based on these guidelines, the quality raters flag different issues on websites and pages.

Now, the Search Quality Evaluator Guidelines also include an “Upsetting-Offensive” content flag. As defined in the guidelines, upsetting and/or offensive content typically includes:

  • Content that promotes hate or violence against a group of people based on criteria including (but not limited to) race or ethnicity, religion, gender, nationality or citizenship, disability, age, sexual orientation, or veteran status.
  • Content with racial slurs or extremely offensive terminology.
  • Graphic violence, including animal cruelty or child abuse.
  • Explicit how-to information about harmful activities (e.g., how-tos on human trafficking or violent assault).
  • Other types of content which users in your locale would find extremely upsetting or offensive.

These examples from the Search Quality Evaluator Guidelines demonstrate how the Upsetting-Offensive content flag works:

[Screenshot: Holocaust-related example from the Search Quality Evaluator Guidelines showing the Upsetting-Offensive flag applied]

Again, the end goal is to improve user experience and point users toward the most reliable, factual, highest-quality pages in response to search queries. At the end of the day, Google just wants to make the internet a safer place. Just like Perspective, this effort is not about prioritizing certain beliefs, opinions, or views over others, nor is it a form of censorship. Rather, it’s about improving the experience for everyone by improving the kind of content we are presented with when we conduct searches.

The “Highly Meets” Rating

Sometimes users deliberately search for offensive content, perhaps to understand another side of an argument or simply to see the offensive content. In such cases, the goal is to return relevant, authoritative, and credible information. Here’s an example of how that works:

[Screenshot: Holocaust-related example from the Search Quality Evaluator Guidelines illustrating how such a query is rated]

Per the guidelines, when the user’s query seems to either ask for or tolerate potentially upsetting, offensive, or sensitive content, the raters call it an “Upsetting-Offensive tolerant query.” Google and the quality raters assume that users have a dominant educational/informational intent for Upsetting-Offensive tolerant queries, and the results are rated on the “Needs Met” scale under that assumption of a genuine educational/informational intent.

The Impact on SEO

It’s important to note that flagging a page as having upsetting or offensive content doesn’t immediately result in a ban or penalty. The quality raters’ findings are used the same way they always have been: as a collection of “training data” that Google will use for future algorithms. When the quality raters flag a lot of recurring problems within a specific industry or in response to certain searches, Google’s engineers use that information in future updates and algorithms so that the search engine can automatically identify upsetting-offensive material.
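Google hasn’t published how rater flags actually feed its ranking models, but the general idea of turning human labels into training data can be illustrated with a toy classifier. The sketch below is purely hypothetical: the page snippets, the binary flag labels, and the scikit-learn model are illustrative stand-ins, not Google’s system.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical rater output: page text paired with a binary Upsetting-Offensive flag.
pages = [
    "Educational overview of World War II history sources",
    "Post promoting violence against a religious group",
    "Recipe blog with product reviews",
    "Page full of racial slurs and harassment",
]
flags = [0, 1, 0, 1]  # 1 = rater applied the Upsetting-Offensive flag

# Train a simple text classifier on the human labels.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(pages, flags)

# The trained model can then score unseen pages automatically.
print(model.predict_proba(["Article denying a documented genocide"])[0][1])
```

The point of the illustration is only that human judgments become labels a model can generalize from; the scale, features, and models Google uses are not public.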

That being said, it’s entirely possible that some flagged websites will later be reviewed internally by Google and end up with a penalty or demotion. Manual penalties, for instance, have come about as a result of quality rater findings. Though the purpose of those findings is to accumulate data for future updates, it is by no means a stretch to suspect that some Upsetting-Offensive content flags will impact the SEO of certain websites.

User Responsibility in a Time of Misinformation

These steps towards having more credible and reliable comments sections and search results are exciting. As users, we all stand to benefit from having access and exposure to the best possible content out there, so we can continue to learn, share, and produce ever-improved ideas.

However, the need for such tools speaks volumes about current user and search behavior. If the top search results we see are “news sources” that are eye-level with tabloids, what does that say about the kind of content we consume? Google has contracted more than 10,000 people to flag racial slurs, graphic violence, and content that promotes hate and violence against entire groups of people based on race, religion, gender, citizenship, and more. What does the overwhelming prevalence of that material say about what users are putting online?

All users have a responsibility, as both consumers and producers of content, to be pragmatic and respectful in their online communities. We can’t blame online platforms for blind partisanship, irresponsible speech, or inappropriate content; the algorithms of Google and Facebook would have nothing to sort and rank without our behavior.


The post User Responsibility in a Time of Misinformation (https://www.highervisibility.com/blog/user-responsibility-in-a-time-of-misinformation/) appeared first on HigherVisibility (https://www.highervisibility.com).
