While trying to look up more information on algorithmic filtering, I chanced upon the Wikipedia article for the term “filter bubble”. The term refers to the process by which an algorithm “selectively guesses what information a user would like to see based on information about the user”, thus “effectively isolating them in their own cultural or ideological bubbles”.
Based on my experience in my entrepreneurship fraternity, these filters and algorithms are part of a larger discipline called machine learning. This is a “scientific discipline that explores the construction and study of algorithms that can learn from data”. Because these algorithms operate by building a “model from example inputs and using that to make predictions or decisions”, the system attempts to learn to make human choices using precedent. This in itself is not flawed- but the way we make choices often is. We rely on our cultural background and circumstances to make choices, and those choices are often biased- in the very same way these programs are accused of functioning unfairly. But if the goal of technology is to imitate that human decision-making process (however flawed), then we are in no position to say it has not achieved its aim.
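To make the “learning from precedent” idea concrete, here is a deliberately simplified sketch of how a feed might be personalized. The function names, topics, and data structures are all hypothetical illustrations, not any platform’s actual algorithm: the model counts which topics a user has clicked in the past and ranks new articles accordingly, which is exactly the feedback loop that narrows a feed into a bubble.

```python
from collections import Counter

def learn_preferences(click_history):
    """Build a toy model of a user's interests from past clicks.

    click_history: list of topic strings the user has clicked on.
    Returns a Counter mapping each topic to its click count.
    """
    return Counter(click_history)

def rank_feed(candidate_articles, preferences):
    """Order candidate articles by how often the user clicked their topic.

    Topics the user has never clicked score zero and sink to the
    bottom, so past behavior increasingly shapes what is shown next.
    """
    return sorted(candidate_articles,
                  key=lambda article: preferences[article["topic"]],
                  reverse=True)

# A user whose history is mostly tech stories:
history = ["tech", "tech", "tech", "sports", "tech", "politics"]
prefs = learn_preferences(history)

feed = rank_feed(
    [{"title": "Election update", "topic": "politics"},
     {"title": "New phone released", "topic": "tech"},
     {"title": "Local charity drive", "topic": "community"}],
    prefs,
)
# The tech story rises to the top; the never-clicked "community"
# story falls to the bottom of the feed.
```

The sketch has no notion of fairness or neutrality built in; it simply replicates whatever bias the click history contains, which is the point of the paragraph above.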
However, this does warrant a more concerning inquiry into the way we make choices. There seems to be a tension between equating the advancement of technology with human simulation on the one hand, and with promoting ideals of democracy and a globalized, interconnected world of information on the other. Because people are not automatically conditioned to be impartial and all-accommodating, machine learning’s attempt to capture the complexities of human thought will necessarily fail at upholding complete neutrality.
However, the differences in the way various social media platforms choose to formulate their algorithms show promise for the field. Examining the difference between Facebook and Twitter feeds in the Ferguson article, it is apparent that the philosophies of each platform weigh heavily on how, and how effectively, information is filtered. In the case of Facebook, its way of simulating relationships and building networks is based on a person’s life and profile. Like a customizable AI device in the movie Her, we very much customize Facebook to suit our personality. In contrast, while Twitter is very focused on which accounts you follow and the type of news you are interested in, it also keeps the big picture in view by alerting users to trending topics. It seems more like a conversation starter than a platform for gathering information about an acquaintance’s life. Because of the etiquette and practices of each platform, we have also come to expect certain things from each one. We are more likely to get information about a party on Facebook, and real-time updates on news issues on Twitter. These differences, and our knowledge of them, hopefully make us more discerning and savvy users- at least until developers figure things out on their end.
That said, at this point, I don’t think we can quite support the assertion that net neutrality and algorithmic filtering are human rights issues. It is, however, something worth revisiting if/when we are able to reconcile machine learning’s trend toward human simulation with broader expectations that information on the web be credible and fairly filtered.