Week Three: Classification, Continued; Research Techniques


Alexis C. Madrigal’s article “How Netflix Reverse Engineered Hollywood” was fascinating to read. As an avid Netflix user, I used to take these genre titles at face value. I recognized that the Netflix system probably tracked my watching patterns and suggested similar titles accordingly, but I was shocked to learn how the back end of this categorization system actually works. Not only does Madrigal’s unique research technique illustrate the complexity of rationalizing such a gigantic database, it also suggests the ideological effects of various systems of classification.

In Sorting Things Out, authors Geoffrey C. Bowker and Susan Leigh Star define classification as “a set of boxes (metaphorical or literal) into which things can be put to then do some kind of work – bureaucratic or knowledge production”. They identify three key characteristics of an ideal classification system: “There are unique classificatory principles in operation…These categories are mutually exclusive…The system is complete” (10-11). However, Bowker and Star continue their argument to say that “no real-world working classification system that we have looked at meets these ‘simple’ requirements and we doubt that any ever could” (11). The Netflix genre generator does indeed have “literal” boxes: tags that are checked off on a rating system. Its classification system produces knowledge for the company, informing it of its consumers’ likes and dislikes, an obvious advantage in gaining and retaining viewers.

Madrigal explains Netflix’s tagging system in layman’s terms: “Using large teams of people specially trained to watch movies, Netflix deconstructed Hollywood. They paid people to watch films and tag them with all kinds of metadata. This process is so sophisticated and precise that taggers receive a 36-page training document that teaches them how to rate movies on their sexually suggestive content, goriness, romance levels, and even narrative elements like plot conclusiveness…they even rate the moral status of characters” (Madrigal). While there is a human input to this system, the Netflix genre generator acts as an unprecedented catalyst between man and machine. Madrigal observes, “There’s something in the Netflix personalized genres that I think we can tell is not fully human, but is revealing in a way that humans alone might not be”. In this way, Netflix is “a tool for introspection”. Its unique categorization system sheds light on humans’ reliance on machines to even tell us what we like. Can a machine capture the innate, complex human tendency to feel emotionally drawn to something?
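To make the idea concrete, here is a minimal sketch (with invented field names and tag values, not Netflix’s actual schema) of how analyst-entered metadata might be composed into a personalized genre label, following the adjective-plus-genre-plus-“About…” grammar Madrigal describes:

```python
# Hypothetical illustration of Netflix-style microgenre assembly.
# Field names ("region", "adjective", "genre", "topic") are assumptions
# for illustration, not Netflix's real tagging vocabulary.

def microgenre(tags):
    """Assemble a genre label from a movie's metadata tags."""
    parts = []
    if tags.get("region"):          # optional regional qualifier
        parts.append(tags["region"])
    if tags.get("adjective"):       # optional descriptive adjective
        parts.append(tags["adjective"])
    parts.append(tags["genre"])     # the noun genre is always present
    if tags.get("topic"):           # optional "About ..." clause
        parts.append("About " + tags["topic"])
    return " ".join(parts)

print(microgenre({"adjective": "Critically-acclaimed",
                  "genre": "Dramas",
                  "topic": "Royalty"}))
# → Critically-acclaimed Dramas About Royalty
```

Even this toy version shows how a modest set of human-entered tags can combinatorially generate the tens of thousands of oddly specific genres Madrigal catalogued.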

A similar project that came to mind (which was actually mentioned in the article) is Pandora’s Music Genome Project. Much like Netflix, Pandora analyzed millions of songs “using up to 450 distinct musical characteristics by a trained musical analyst. These attributes capture not only the musical identity of a song, but also the many significant qualities that are relevant to understanding the musical preferences of listeners” (Pandora.com). Before really reading anything about the Music Genome Project specifically, I suspected that categorizing music would be much harder than categorizing movies. Movies tend to follow relatively recent trends, while music has a longer history and many, many iterations. While the project attempts something similar to the Netflix personalized genres, it is far more ambitious for Pandora to distill this medium. For example, critics of the Music Genome Project pointed out the social aspect of music: “Music is traditionally a more collective experience…that aspect shows itself very powerfully in the way we consume music in society. We want what other people are having” (Wilkinson). Although Pandora is invested in advertising to its listeners in a similar way to Netflix, the medium of music definitely has its limitations.
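A rough way to picture what Pandora does with those analyst ratings: each song becomes a vector of attribute scores, and similar songs are ones whose vectors point in similar directions. The sketch below uses cosine similarity with made-up attributes and scores; the real Music Genome Project’s attributes and matching method are proprietary, so this is only an illustration of the general vector-comparison idea.

```python
import math

# Toy illustration of attribute-vector similarity in the spirit of the
# Music Genome Project. Attribute names and scores are invented; the
# real project uses up to 450 analyst-rated characteristics.

def cosine_similarity(a, b):
    """Cosine of the angle between two attribute vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical scores for: vocal intensity, distortion, tempo
song_a = [0.9, 0.2, 0.7]
song_b = [0.8, 0.3, 0.6]

print(round(cosine_similarity(song_a, song_b), 3))
```

Two songs can score as near-identical on such a vector yet feel completely different in a social setting, which is exactly the gap Wilkinson’s criticism points at: the math captures the song, not the collective experience of hearing it.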