Enough. It’s time. You’ve decided to reclaim your morning commute by spending it on something substantive. No more bottomless Instagram feeds and auto-playing YouTube videos for you! So out the door you stride with that week’s New Yorker wedged beneath your arm, a new episode of Flash Forward playing in your ear, or the latest Jesmyn Ward novel cued up on your Kindle app. So far so substantive. But it doesn’t last. You’ve nearly reached the bus stop when the assault on your attention begins with a notification about… notifications:
Check out the notifications you have on Twitter.
In a more vulnerable moment, you’d have tapped the push alert reflexively. But not today. You’ve no way of knowing what awaits you on the other side (experience has taught you that “your” Twitter notifications are occasionally about personal mentions, but they’re usually about other people’s activity), so you leave Schrödinger’s tweet alone and move to put your phone away.
But it’s too late. Before you can pocket your device, something catches your eye. Was it a Slack from your boss? One of the tiny red badges dotting the corners of your apps? Or maybe a Snap from your brother? You’re ensnared. You spend your commute flitting from app to app, feed to feed, one notification to the next. You even catch yourself scrolling through Twitter (turns out WIRED and two others retweeted Chelsea Clinton). Next thing you know you’ve arrived at work, your Kindle app unopened, your podcast unlistened to, your longread unread.
Scenarios like this one contribute to the growing sense among tech critics, policymakers, and the public that technology companies hold too much sway over attention, well-being, and our very democracy—even as they disagree over the extent to which tech giants have overstepped. To some, our phones and apps are little more than a distraction; to others, they’re nothing short of an existential threat. But the vast majority of critics—and more and more companies—agree: People could use help deciding where to place their attention, to ensure that their time with technology is—to borrow an increasingly fashionable phrase—time well spent.
And make no mistake: We users do need help. Better still, that help can be subtle and effective.
It is tempting to blame our failure to resist our phones, apps, and feeds on a lack of self-control. As with so many things in life, the recipe for a healthy relationship with technology seems to boil down to a command of one’s impulses.
But how you use your phone, and the apps on it, is ultimately about decisions—and decisions hinge on more than self-control. They’re also informed by rational and irrational judgments, subconscious biases, and information gaps (among other factors), all of which contribute to a quirk of human behavior that has long fascinated psychologists, philosophers, and economists: People will often make a decision at one point in time that becomes inconsistent—or works against their apparent interests—at a later point in time.
Behavioral economists have a name for the tension between our present and future selves. They call it time-inconsistency. It colors countless human decisions, from the trivial to the momentous: Eat that cookie—or stick to your diet? Put a chunk of your paycheck toward a new outfit—or your 401k? Just think: How often has your preference in the moment (say, for a delicious snack or a nice jacket) come to contradict your later preference (for a flatter stomach or a more robust retirement fund)?
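Behavioral economists often formalize this tension with the quasi-hyperbolic (or “beta-delta”) discounting model, in which every future reward carries an extra one-time present-bias penalty that immediate rewards escape. Here is a minimal sketch of how the preference reversal falls out of the arithmetic; the function name and the parameter values are illustrative assumptions, not figures from any particular study:

```python
# Illustrative sketch of time-inconsistency via quasi-hyperbolic
# ("beta-delta") discounting. Parameter values are made up.

def present_value(reward, delay, beta=0.5, delta=0.95):
    """Discounted value of a reward `delay` periods in the future.
    Immediate rewards (delay == 0) are undiscounted; every future
    reward carries an extra one-time present-bias factor `beta`."""
    if delay == 0:
        return reward
    return beta * (delta ** delay) * reward

# The choice: a small reward now (the cookie) versus a slightly
# larger one a period later (the flatter stomach).
small, large = 10, 11

# A week in advance, both rewards are in the future, so both carry
# the beta penalty and the agent prefers the larger-later reward.
week_ahead = present_value(small, 7) < present_value(large, 8)

# When the moment arrives, the small reward is immediate and escapes
# the present-bias penalty, so the preference flips.
in_the_moment = present_value(small, 0) > present_value(large, 1)

print(week_ahead, in_the_moment)  # True True: a preference reversal
```

The same parameters produce opposite choices at the two points in time, which is exactly the inconsistency the term names: no change in the rewards, only in how far away they are.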
This tension now defines many people’s relationships with their phones, as well. Science doesn’t have a definitive answer about the effect technology is having on our brains, or on society. But evidence does suggest that the ways in which we use our devices on a minute-to-minute basis often contradict how we wish we used them—or didn’t.
When Moment, an app that helps people track their screen time, asked 200,000 of its users to rate the ways they engage with their phones, apps like Facebook, Reddit, Instagram, and Snapchat fared notably worse than those like Google Calendar, Headspace, and MyFitnessPal; the latter left people feeling happy, while the former had the opposite effect. And yet, with few exceptions, survey respondents spent far more time in the apps they later regretted spending so much time on. The survey reveals what many of us know intuitively: The way we use our phones is not time-consistent.
Today, through default settings like push notifications and autoplaying videos, tech companies like Google and Facebook take advantage of this time-inconsistency. They exploit our tendency to procrastinate and our susceptibility to inertia to grab our attention, no matter how it makes us feel.
But our susceptibilities also make us receptive to something Harvard legal scholar Cass Sunstein calls libertarian paternalism, a term he coined with Nobel Prize-winning economist Richard Thaler to describe “nudges” by which institutions help people make better choices (as judged by themselves) while preserving their freedom to choose differently at little or no cost. A hallmark example: Employers that automatically enroll their workers in tax-deferred retirement plans (while allowing them to opt out) dramatically improve the contribution rates of their employees.
The question for tech giants, Sunstein says, is not whether they should engage in libertarian paternalism, but the ends to which they do so. “For companies like Facebook and Apple, there is a pressing need for a lot more thought on the goals of choice architecture,” he says. “Once we specify the goals, we can identify an assortment of freedom-preserving tools, such as reminders and warnings, that can help users.”
Policymakers, designers, and former employees from the likes of Google and Facebook have begun to imagine what those tools might look like. One suggestion: Companies could use their stockpiles of personalized information to detect and notify users who are spending more time than they’d like on their platforms—or even identify risky behavior. “If you’re an alcohol manufacturer, you have no way of knowing who’s abusing your product,” says Nir Eyal, author of Hooked: How to Build Habit-Forming Products. “But if you’re a tech company, you can actually reach out and say, hey, you might have a predisposition for technology addiction.”
In an essay titled “Choicemaking and the Interface,” social scientist Joe Edelman proposes alternative interfaces that would give users more nuanced, up-front choices about how to spend their time on a given platform. Typing “facebook” into your address bar, for example, might prompt you to select whether you intend to visit for a “Quick Break,” “Easy Reading,” or to “Organize an Event.” The proposed interface would also show you how well these uses of Facebook had panned out for people who selected them in the past.
“In order to display these kinds of cues, we’d need to build a giant database about people, about their choices, and about the outcomes of those choices,” Edelman writes. Compiling such a database might seem unfeasible, were it not for Facebook’s more than decade-long practice of polling users on everything from their satisfaction with the platform to how trustworthy they find the sources in their newsfeeds. A survey from February went so far as to ask users whether they agreed or disagreed that “Facebook is good for the world.” But to be good for the world, Facebook and other tech giants first have to be good for us.