Recommendation engines are great. They help you discover movies, music, books and all manner of other products and content that you might like. But they also perpetuate something called the “you loop”.
The “you loop” was described in Eli Pariser’s The Filter Bubble: you watch, buy or even click on something, and it gets recorded as interest. You are then shown similar topics, which you watch, buy or click on – and on and on. The problem is that your preferences – your ‘identity’ – can easily become misrepresented because of the amplification this loop causes.
These algorithms currently don’t push the boundaries of your filter bubble nearly enough. They should constantly prove their models wrong by introducing the opposite choice instead of perpetuating the sort of confirmation bias that exists today.
Imagine that. Despite your propensity for watching horror films on Netflix, it recommends that you might like “The Fault in Our Stars”. Because, you know what… you just might.
Algorithms that only provide recommendations you will like, based on your historical clicks or views, are really a form of inverse censorship. You like war movies or crime novels because that is all you are allowed to see – at some point, perhaps, you never find out that anything else exists. Harmless with war movies and crime novels, but concerning if it happened to news, politics and other forms of opinion.
Perhaps if they don’t introduce an “opposite choice”, they simply need to insert some randomness… some serendipity. Your history states that you prefer news from the UK and business, but here’s a story on fruit flies and here’s one on national yoga day. Not all the time, but say 15% of the time. How could that be a bad thing?
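That “15% of the time” idea is essentially an exploration rule. As a rough illustration only – the function, item names and scoring model here are all hypothetical, not how Netflix or Amazon actually work – it might look something like this:

```python
import random

def recommend(predicted_scores, catalog, serendipity_rate=0.15):
    """Return one recommendation.

    predicted_scores: hypothetical model output mapping item -> predicted
    interest, covering only items inside the user's existing bubble.
    catalog: the full set of items available.
    serendipity_rate: fraction of the time we ignore the model entirely.
    """
    if random.random() < serendipity_rate:
        # Serendipity branch: pick uniformly from items the model
        # would never surface, i.e. anything outside the bubble.
        outside_bubble = [item for item in catalog if item not in predicted_scores]
        if outside_bubble:
            return random.choice(outside_bubble)
    # Default branch: the model's highest-scoring item, as today.
    return max(predicted_scores, key=predicted_scores.get)
```

The serendipity rate is a dial: at 0 you get today’s pure confirmation loop, at 1 you get noise. Somewhere in between is where the bubble starts to leak.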
So recommendation engines are immensely useful – but they also limit choice. I would really like to see the companies that use them (Amazon, Netflix, etc.) start thinking outside of the bubble before our preferences become our only reality.