In 2018 it can be hard to recall what life was like before the algorithms – before content was selected for you by the unseen and all-powerful artificial intelligences that mediate daily life. For businesses, algorithms increasingly dictate whether customers are aware of you or your competition. For the rest of us, they make sure that we have a constant stream of the freshest memes and content humanity has to offer.
And while algorithms are immensely useful – enriching our experiences and helping us discover valuable information amid a constant, endless deluge that no person could ever review – they provide their services on behalf of powerful corporations with interests of their own, and at a cost their users may not be aware they are paying.
The price of an algorithm is tied in many ways to the user’s relationship to it. Are you a consumer searching for a new product on Amazon, or a seller trying to make sure your product is found by people who would like it? Mediating both sides of that relationship, and many others just like it, is a complicated, obtuse, and well-hidden algorithm.
Google, Facebook, Twitter, YouTube, Netflix, Hulu, and others all use their own proprietary algorithms to make sure that people enjoy using their service and keep coming back.
For some of these platforms, that enjoyment would seem to derive from getting what we came for. You went to Google to find out what time a store in your area closes, and Google Maps gave you the answer or a number to call.
Purely utilitarian. No apparent threat. Need met.
So you go back the next time you are looking for a place to get pizza. Instead of searching for a specific pizza restaurant, you just search for “pizza” and are given a ranked list of results. You are now trusting that the algorithms and engine behind your search understand what they are being asked, understand the context in which the question is being asked, and are delivering answers based on criteria that align with your own.
And that’s just one example.
What seems to be at the heart of the most powerful algorithms today is an attempt to discern value better than the individual is presumed to be able to on their own. Looking for a can opener? Let the algorithm help you find the most “relevant” result for that search. Looking for the best catering service in your area for a wedding? Let the algorithm find the “best” results for you.
Inside these searches, and their execution by the algorithms they rely on, are explicit decisions about value and worth. Implicitly, they capture the biases and best efforts of their designers (whether human or otherwise) to assess the value of something to someone. Algorithms try to improve their results by considering contextual factors, for example: location, search history, browsing history, previous engagement, relationships, demographic data, time, and many more.
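To make that concrete, here is a toy sketch – entirely hypothetical, not any real platform’s ranking – of how a few contextual signals might be folded into a single “relevance” score. The signals, weights, and field names are all assumptions chosen for illustration; real systems use many more signals and learned models. The point is only that “relevant” is whatever the designer’s weighting says it is.

```python
# A toy, hypothetical relevance ranker. The weights below are arbitrary
# design choices -- which is exactly the essay's point: someone decided
# how much "closeness" or "past behavior" should count.

def score(result, context):
    """Combine a few contextual signals into one relevance number."""
    s = result["base_match"]                      # how well the text matches the query
    s += 2.0 / (1.0 + result["distance_km"])      # closer places score higher
    if result["category"] in context["history"]:  # echo the user's past searches
        s += 0.5
    return s

def rank(results, context):
    """Return results ordered from most to least 'relevant'."""
    return sorted(results, key=lambda r: score(r, context), reverse=True)

results = [
    {"name": "Pizza Palace", "base_match": 0.9, "distance_km": 5.0, "category": "pizza"},
    {"name": "Luigi's",      "base_match": 0.7, "distance_km": 0.5, "category": "pizza"},
]
context = {"history": {"pizza"}}

# The nearby restaurant outranks the better textual match,
# purely because of how the weights were chosen.
print(rank(results, context)[0]["name"])  # → Luigi's
```

Change one weight and the “best” result changes with it – the ordering is a product of design decisions, not an objective fact about the restaurants.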
They aren’t wrong, but something is lost in the mix. These platforms provide an impressive and useful service, but they can also become a crutch by removing complexity and nuance from the algorithmic consideration.
Absent from their contextual background is doubt or some nagging sense that perhaps this is all leading to some dreaded, horrible end.
And that sucks.
The ease of access, and the self-assured confidence that comes from being told you were given the most relevant or “best” result, is dangerous. There is no room for doubt because the platforms themselves present none. And why should they?
Doubt and skepticism are intimately human tools. They are, by their very nature, uncomfortable. But they are necessary to a good life. Modern platforms provide us with avenues to evade relying on those anxious tools.
Because the goal of the algorithms is to keep us using them, we should recognize that the decisions they make are not necessarily those that are best for us, but those that are best for them.
Nowhere is this more readily apparent than in the search for news and media regarding what many would consider important events. It can be very difficult for individuals to determine whether what they are being shown is fact or carefully calibrated opinion.
In the end though, we have the power to choose. We can choose to engage with platforms like Facebook, Amazon, Twitter, YouTube, and others, and we can choose not to. We can choose to look at the results they give us to our questions as “The Results” or as “some results.”
Perhaps the cure for algorithms is a more skeptical view of the results they produce. Maybe when we’re told that we’re looking at the most “relevant” or “best” result, we should question just who or what those best answers and relevant results are best for. And maybe, as cliché as it sounds, we should check their sources.