Beyond the Data: A Human Approach to User Engagement

Customer Engagement
May 23, 2018

In 2018, we live in the world of the predictive algorithm.

Facebook matches us with articles and advertisements it predicts we want to see. A user logs in, changes his status from “In a Relationship” to “Engaged,” and within minutes there are advertisements for tuxedos in his feed. A user adding “The Hunger Games” to her Netflix watch list is soon paired with other films “Starring Strong Female Leads.” Amazon greets users with “Deals Recommended for You,” based on what they have previously purchased, viewed, and added to their wish lists. These companies, and many more, constantly study, refine, A/B test, and iterate on their predictive algorithms, all with the goal of keeping users engaged, keeping them coming back, and ultimately, generating revenue in one way or another.

In the current tech landscape, it may seem like the key to strong user engagement is the illustrious predictive algorithm. If the giants are doing it, shouldn’t all companies follow suit? No, not necessarily. In addition to practical considerations such as cost, team size, funding, and allocation of resources, a simple question looms—does the predictive algorithm work? And more importantly, is it the best solution to your problem?

Facebook has famously learned how to turn a significant profit on advertising using a predictive algorithm.[1] At the same time, its newsfeed algorithm has unintentionally siloed its user base into echo chambers, in which people largely see content and opinions that align with their own.[2] For a company whose mission statement is to “give people the power to build community and bring the world closer together,”[3] it is clear that the algorithms are not always working. In this case, the newsfeed algorithm appears to be in direct conflict with the company’s core mission.

But they aren’t alone—running into trouble with a predictive algorithm happens all the time, even at the largest tech companies.

Predicting Wrong: A Case Study

In October 2010, the photo-sharing app Instagram was released. The original feature set was simple: Sign up. Create a profile. Upload and filter your photos. Follow and get followed. Like and comment. After initial sign-up, home base was a chronologically organized photo feed, with the most recently posted photo at the top. Depending on your lens, Instagram was a Twitter for the visual, or a Facebook for the singularly focused.

And it worked. Just two months after launch, one million people had signed up for the photo-sharing platform. Within five years, Instagram had hit 400 million monthly active users.[4] While the product had added many new features during that time, the core functionality and home base remained the same—the chronological feed.

In 2016, the algorithmically ordered photo feed was introduced. The change was referred to as “best posts first.”[5] If a user liked certain friends’ posts frequently, those posts would rank higher. If a post already had a lot of likes, it would also rank higher. Recency would still be a factor, but one less heavily weighted.
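Instagram has never published its exact formula, but the description above suggests the general shape of such a ranking. The Python sketch below is purely illustrative; the weights, field names, and scoring function are hypothetical assumptions, not Instagram’s actual algorithm.

```python
import math
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Post:
    author_id: str
    like_count: int
    posted_at: datetime

def feed_score(post: Post, likes_given_to: dict[str, int]) -> float:
    """Hypothetical 'best posts first' score: affinity and popularity
    outweigh recency. All weights here are invented for illustration."""
    # Affinity: how often the viewer has liked this author's posts before.
    affinity = math.log1p(likes_given_to.get(post.author_id, 0))
    # Popularity: posts that already have many likes rank higher.
    popularity = math.log1p(post.like_count)
    # Recency still counts, but is weighted less heavily than engagement.
    hours_old = (datetime.now(timezone.utc) - post.posted_at).total_seconds() / 3600
    recency = 1.0 / (1.0 + hours_old)
    return 3.0 * affinity + 2.0 * popularity + 1.0 * recency

# feed = sorted(posts, key=lambda p: feed_score(p, viewer_like_history), reverse=True)
```

Notice that every input is capturable data: like counts, like history, timestamps. A photo the viewer loves but never taps “like” on contributes nothing to the affinity term, which is exactly the failure mode described next.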

When the change went live, there was widespread outcry from many of Instagram’s most frequent users.[6] Of the many arguments as to why the new algorithmic feed was frustrating, one in particular stood out—Instagram had failed to predict the unpredictable, to account for the fact that a user’s preferences and a user’s data are not one and the same. Let me explain.

Instagram has always been a community of users who not only follow real-life friends, but also find and follow accounts based on areas of interest. For instance, a user who is passionate about home design might follow her sister, her close friends, some acquaintances, as well as a number of locally- to globally-known interior designers, architects, and home-design hobbyists. Perhaps this same user is much more likely to “like” and comment on the posts of her sister, who takes poorly lit photos of her dog and her meals. Why? Because it’s her sister. And while this user loves the beautifully curated home-focused accounts that she follows, she rarely likes or comments on those photos. She doesn’t have a personal relationship with the people who manage those accounts, so why leave a like?

For this hypothetical user, the rollout of the new algorithm would have meant all those photos she actually liked (but didn’t “like”) were buried in her feed. And what was at the top? Poorly lit photos of her sister’s meals and dog. (Ugh.) As it turned out, this hypothetical user wasn’t hypothetical at all. Her behavior matched that of millions of other users, none of whom could find the content they actually wanted to see.

Instagram’s new algorithm relied on capturable data such as likes and comments (as predictive algorithms often do), while the human component of user behavior was either ignored or forgotten. Instagram had run right into one of the great fallacies of our digital age—that users’ data accurately represents who users are, what they care about, and most importantly, the motivations behind their actions.

This is not a story of the beginning of the end of Instagram. Far from it. Armed with its large and loyal user base, the company weathered the bad PR and Twitter rants from users and tech blogs alike just fine. In fact, Instagram has more than doubled its monthly active users since the change. That said, for a less robust company, this story might easily have gone the other way.

Opting Against the Algorithm: An Internal Case Study

At Life.io, we expanded our platform to include a library of articles, recipes, and expert advice for staying physically and mentally healthy. The goal of this feature is to provide value to our users, help them along their journey to a healthier life, and keep them coming back to our platform. In planning this feature, we came to a vital question—“How do we match our users to content they will want to read?”

In the world of the predictive algorithm, the solution seemed obvious. We have a lot of data about our users, including gender, age, life events, and actions taken on our platform. Why not pair users who are tracking their workouts regularly with content about exercise? And wouldn’t it make sense to pair users who track their diet with articles about nutrition? At first glance, the answer appeared to be “yes,” but only because we were thinking through the lens of data, and more specifically, the data we currently had. Once we shifted to a more human lens, the answer became much less clear.

Who is the user tracking workouts regularly? Does she track because she loves exercise? Or might there be another reason? Maybe she exercises and tracks begrudgingly, knowing it’s good for her. Maybe she’s in an exercise challenge with friends and tracks out of a sense of social obligation. Maybe she tracks her workouts because her Life.io program offers incentives, and she wants to increase her chances of winning. The more you consider the possible motivations of a single individual, the more questions arise. Attempting to predict her interests from those motivations is rarely straightforward.

So, what’s the solution? In our case, a seemingly novel idea emerged. What if we started by asking our users what topics are of interest to them? What if we let them tell us what they want to see? Instead of assuming a person enjoys exercising based on his data, that person could simply tell us whether or not he enjoys exercise. This solution would not only allow us to learn more about our user base, but also allow us to bring more value to our users, be more strategic when sourcing content, and improve how we prioritize iterations and enhancements to our platform. It could even become the foundation of an algorithm, but one driven by the explicit wants of our users rather than the assumptions of our team.
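As a minimal sketch, here is what that explicit, ask-first matching might look like in Python. The topic list, article fields, and function names are hypothetical illustrations, not Life.io’s actual implementation.

```python
def ask_for_interests(available_topics: list[str]) -> set[str]:
    """Ask the user directly which topics they want to see."""
    print("Which topics interest you? (comma-separated)")
    print("Options:", ", ".join(available_topics))
    raw = input("> ")
    chosen = {t.strip().lower() for t in raw.split(",")}
    # Keep only topics we actually have content for.
    return chosen & set(available_topics)

def match_articles(interests: set[str], articles: list[dict]) -> list[dict]:
    """Rank articles by overlap with the user's stated interests."""
    return sorted(articles,
                  key=lambda a: len(interests & a["topics"]),
                  reverse=True)

topics = ["exercise", "nutrition", "sleep", "mindfulness", "recipes"]
articles = [
    {"title": "Five-Minute Morning Stretches", "topics": {"exercise"}},
    {"title": "Meal-Prep Basics", "topics": {"nutrition", "recipes"}},
    {"title": "Winding Down Without Screens", "topics": {"sleep", "mindfulness"}},
]

if __name__ == "__main__":
    interests = ask_for_interests(topics)
    for article in match_articles(interests, articles):
        print(article["title"])
```

Because the signal is explicit rather than inferred, those same stated interests can later seed a recommendation algorithm driven by what users said they wanted, not by guesses about their motivations.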

Similar to Life.io, many digital platforms take this approach. Foursquare asks what kind of food you like before recommending restaurants. Pinterest asks about areas of interest before populating a user’s dashboard with suggested pins. As a user of platforms that think to ask what I want to see and what I care about, I find the approach refreshing.

A Human-First Approach

When approaching new challenges, the user engagement team at Life.io always comes back to the same question—what is the most human solution to this problem? Beyond their data, clicks, form submissions, and likes, who are our users? What is the data that cannot be seen—the omissions? In writing we say, “trust your reader.” Wouldn’t it be nice if we could bring that kind of trust into the space of technology and the products we build therein? Even if it’s just a starting point, why not trust users enough to tell us what they want?

Ethan Zuckerman, an MIT scholar and the inventor of the pop-up ad, asked similar questions in the April 2018 article “The Internet Apologizes”:[7] “As soon as you’re saying ‘I need to put you under surveillance so I can figure out what you want and meet your needs better,’ you really have to ask yourself the questions ‘Am I in the right business? Am I doing this the right way?’”

Two years after the release of the “best posts first” photo feed, even Instagram appears to be questioning its data-driven approach. In March 2018, the company announced it would be “introducing changes to give you more control over your feed and ensure the posts you see are timely.”[8] It is calling the change “New Posts”—a return to chronology, to simplicity, to what users liked about Instagram in the first place.

As technologists, let’s challenge ourselves to set aside the trends in technology, the desire to implement the most interesting solution or the most data-driven solution, and instead ask the human questions first to see what solutions emerge. Our technology is only as good as a user’s experience of it—a human’s experience of it. So let’s start there, with them.

Endnotes

1. Theweek.com. (2018). How Facebook became so profitable. [online] Available at: http://theweek.com/articles/622508/how-facebook-became-profitable [Accessed 16 May 2018].

2. Emba, C. (2018). Opinion | Confirmed: Echo chambers exist on social media. So what do we do about them? [online] Washington Post. Available at: https://www.washingtonpost.com/news/in-theory/wp/2016/07/14/confirmed-echo-chambers-exist-on-social-media-but-what-can-we-do-about-them/?noredirect=on&utm_term=.55a4db386cab [Accessed 16 May 2018].

3. Facebook.com. (2018). Facebook. [online] Available at: https://www.facebook.com/pg/facebook/about/ [Accessed 16 May 2018].

4. En.wikipedia.org. (2018). Timeline of Instagram. [online] Available at: https://en.wikipedia.org/wiki/Timeline_of_Instagram [Accessed 16 May 2018].

5. TechCrunch. (2018). Instagram is switching its feed from chronological to best posts first. [online] Available at: https://techcrunch.com/2016/03/15/filteredgram/ [Accessed 16 May 2018].

6. Hunt, E. (2018). New algorithm-driven Instagram feed rolled out to the dismay of users. [online] The Guardian. Available at: https://www.theguardian.com/technology/2016/jun/07/new-algorithm-driven-instagram-feed-rolled-out-to-the-dismay-of-users [Accessed 16 May 2018].

7. Kulwin, N. (2018). An Apology for the Internet — From the Architects Who Built It. [online] Select All. Available at: http://nymag.com/selectall/2018/04/an-apology-for-the-internet-from-the-people-who-built-it.html [Accessed 16 May 2018].

8. Instagram Press. (2018). Instagram. [online] Available at: https://instagram-press.com/blog/2018/03/22/changes-to-improve-your-instagram-feed/
