What Is Mechanical Turk? A Summit Series on Amazon’s MTurk

November 13, 2020 Kami Ehrich


With contributions by Teresa Kline

In this three-part series, we’ll explore Amazon’s Mechanical Turk (“MTurk”) and how researchers can use it. In the first installment, we’ll cover what MTurk is and how it works. Next week, we’ll look at specific examples of how MTurk has been used in academia and in litigation. Finally, we’ll wrap up with a discussion of potential concerns with data collection via MTurk and how to mitigate those issues so they don’t compromise the data.


What is MTurk?

MTurk is an online crowdsourced labor market where registered workers, called “Turkers,” complete “human-intelligence tasks,” or “HITs,” that computers are unable to do. HITs range from data validation, surveys, and content moderation to image tagging, audio transcription, and adult-content filtering.

Turkers span a wide variety of ages, ethnicities, socioeconomic statuses, languages, and countries of origin. Academic researchers have noted that, unlike the ebb and flow of university students over an academic year, the pool of Turkers remains consistent year-round. Finally, because MTurk operates online, it offers easy access to subjects whom researchers could not reach through traditional recruitment tools.

How does it work?

Similar in some ways to Uber or TaskRabbit, MTurk matches people looking for work with people who need work done. After requesters post their HITs to the site, Turkers choose which tasks to complete. HITs either use one of Amazon’s templates (an “internal” request) or link to the researcher’s own website (an “external” request), as sketched below.
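For researchers who prefer to script this workflow, Amazon exposes MTurk through its standard AWS SDKs. Below is a minimal sketch in Python using boto3 that posts an “external” HIT pointing at a researcher’s own survey page; the survey URL, reward, and other values are illustrative, and the sandbox endpoint is used so no real payments occur.

```python
import boto3

# Connect to the MTurk *sandbox* so no real money changes hands while
# testing; dropping endpoint_url targets the live marketplace instead.
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# An "external" request: the HIT embeds the researcher's own survey page
# (illustrative URL) using MTurk's ExternalQuestion XML schema.
external_question = """\
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.com/survey</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>"""

response = mturk.create_hit(
    Title="Short research survey (about 5 minutes)",
    Description="Answer a brief questionnaire for a research study.",
    Keywords="survey, research, questionnaire",
    Reward="0.75",                        # USD, passed as a string
    MaxAssignments=100,                   # how many Turkers may complete it
    LifetimeInSeconds=24 * 60 * 60,       # HIT stays listed for one day
    AssignmentDurationInSeconds=30 * 60,  # time allowed per Turker
    Question=external_question,
)
print("Created HIT:", response["HIT"]["HITId"])
```

An “internal” request works the same way, except the Question payload uses one of Amazon’s built-in question templates instead of linking out to an external page.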

HITs are short tasks that typically pay less than $1. Turkers can be paid in U.S. dollars, Indian rupees, or Amazon credits. Between the short duration of tasks and the number of Turkers on the site, researchers often find that their HITs are completed very quickly. For example, Summit recently conducted a survey on MTurk that collected 1,206 responses in less than 24 hours.

After a Turker has completed a HIT, the requester reviews the work and accepts or rejects it; rejected work is not paid. Turkers build a reputation based on the number of tasks they have completed and their approval rate (the share of their submitted work that requesters accept), and requesters may limit participation in their HITs to Turkers who meet or exceed a specific reputation threshold.
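This review loop can also be scripted. The sketch below, again using boto3 against the sandbox, shows a reputation gate on approval rate alongside an approve-or-reject pass over submitted work; `passes_quality_checks` is a hypothetical placeholder for a researcher’s own validation logic, and the HIT ID is illustrative.

```python
import boto3

mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# Reputation gate: passed to create_hit via its QualificationRequirements
# parameter so only Turkers with a >= 95% approval rate can accept the HIT.
# "000000000000000000L0" is MTurk's built-in PercentAssignmentsApproved
# qualification type.
approval_rate_requirement = {
    "QualificationTypeId": "000000000000000000L0",
    "Comparator": "GreaterThanOrEqualTo",
    "IntegerValues": [95],
}

def passes_quality_checks(answer_xml: str) -> bool:
    """Hypothetical placeholder for the researcher's own validation logic,
    e.g. checking attention-check answers inside the answer XML."""
    return True

hit_id = "EXAMPLE_HIT_ID"  # illustrative; returned by the earlier create_hit call

# Review submitted work: approving pays the Turker, rejecting does not.
submitted = mturk.list_assignments_for_hit(
    HITId=hit_id,
    AssignmentStatuses=["Submitted"],
)
for assignment in submitted["Assignments"]:
    if passes_quality_checks(assignment["Answer"]):
        mturk.approve_assignment(AssignmentId=assignment["AssignmentId"])
    else:
        mturk.reject_assignment(
            AssignmentId=assignment["AssignmentId"],
            RequesterFeedback="Response failed an attention check.",
        )
```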

Analyses have shown that Turkers are motivated by skill variety, task autonomy, and enjoyment more than monetary gain. In general, studies evaluating data quality have had positive results, but we’ll discuss potential concerns with data quality in the third installment of our series.

Who uses it?

Launched in 2005, MTurk was originally designed for the human-intelligence tasks we noted earlier, such as audio transcription or adult-content filtering. As MTurk grew, social-science researchers began using it for online behavioral experiments and surveys. Fifteen years later, there is an increasingly robust body of academic literature on using MTurk for research.

In our next post, we’ll dive into some examples of how researchers are using MTurk and look specifically at academia and litigation.
