By Colin Horgan, Medium | September 27, 2018

An insurance company now requires policyholders to use fitness trackers. What does that mean for the future of surveillance?

If you’re looking to get life insurance from John Hancock, one of North America’s largest insurers, you’d better be prepared to strap a Fitbit or Apple Watch to your wrist. Last week, the company announced that from now on, it will issue only “interactive” policies, which require the insured to wear a fitness-tracking device.

The program is billed as an incentive for policyholders to be more health-conscious: it rewards people who track their workouts and healthy food purchases with premium discounts and gift vouchers.

At the same time, it could make John Hancock a lot of money. “The longer people live, the more money we make,” Brooks Tingle, president and chief executive of John Hancock Insurance, told the New York Times.

As for giving an insurance company even more information about your personal life? Marianne Harrison, chief executive of John Hancock, offered assurances that the information will be secure. “We get medical records on people every day,” she told the Times. “That’s more confidential than physical fitness data.”

More confidential, maybe. But perhaps not as capable of shaping our lives. Beyond security, there’s something else policyholders — or any of us — might worry about when it comes to feeding more data into programs like this one: how fitness data can modify the way we interpret our world.

The Times spoke with Brian and Carla Restid, a couple in their mid-sixties from Ohio who claim the John Hancock insurance program has changed their lives for the better.

“It provided a way for me to be accountable to myself,” Carla Restid told the Times. “It provided me a way to get going and keep going. I was exercising before, but it wasn’t at the forefront of my mind. This set me on a life-changing program.” The Restids say they’ve benefited financially, too, saving money on travel and consumer products, thanks to the kickback scheme.

It’s a life-changing program, alright.

The Restids’ happiness with the data-driven lives they’re now leading, thanks to this insurance program, reflects a recent trend. As advertisers have perfected ways to compile data about our online activities and offline lives, many of us have turned the same logic on ourselves.

Nowhere is this trend more obvious than in the popularity of just the thing John Hancock is promoting — personal fitness devices like the Fitbit, Garmin trackers, and the Apple Watch. They are tools for self-improvement, yes; but just as much, they are tools of surveillance.

These devices reveal a lot about us. They tell us how far we’ve walked or run, and they can track our meals and calories. They learn far more personal information about us, too — things few of us might have ever known about ourselves in the past. They can monitor our heart rate. They can tell us how much deep sleep we got. They track it all, and keep it on file. And the numbers they generate are alluring, even addictive. They’re presented in colorful charts and available at the tap of a finger. We are entranced by our own data.

We want others to be entranced, too. So, we invite friends, family, and followers to see the numbers, to track our progress — to share in our surveillance. Eventually, it’s easy for us to accept that a company, like one that issues life insurance, is monitoring us as well.

And once we’ve started, it’s difficult to stop. Data tracking becomes a way of life. We’ll even send someone the most personal information of all: our DNA, the core data of our lives, the stuff that makes us who we are. It all seems, as tech critic Jacob Silverman put it, like “a potentially brave act of radical transparency.” This, we tell the online world, is who we are.

But what if it’s not who the world sees?

Nearly two decades ago, Kevin D. Haggerty and Richard V. Ericson introduced the concept of what they called “the surveillant assemblage.” Before Facebook and Twitter, and before the iPhone or YouTube, the Apple Watch or the Fitbit, Haggerty and Ericson saw the potential for a new frontier in surveillance: data.

“We are witnessing a convergence of what were once discrete surveillance systems to the point that we can now speak of an emerging ‘surveillant assemblage’,” they wrote. “This assemblage operates by abstracting human bodies from their territorial settings and separating them into a series of discrete flows. These flows are then reassembled into distinct ‘data doubles’ which can be scrutinized and targeted for intervention.”

There are now, as you read this, two of you. There is the real-life you, sitting at your computer or scrolling on a mobile device, and then there is the other you — your data double — an amalgam assembled entirely by algorithms and computer programs. These programs analyze the trail of information left behind as the real you shops online, chats with friends, or posts fitness information on social media.

This information is what feeds the persuasion architectures — the myriad surveillance technology apparatuses — that surround us, whether we’re actively engaged online or just walking around with our phones in our pockets. Endless and unseen, these connections are constantly being made, each one a guess as to what we might like to do or see next, based on the digital persona we’re steadily creating: our data double.

In the most benign sense, this might mean we’ll see more ads for running shoes or fitness apps. But the algorithms’ connections don’t operate only within the parameters of a single type of interest. They go on and on, searching for more and more data points with which to connect our own.

As Zeynep Tufekci put it in a TED talk last year, the problem is not so much that people might, as a function of the algorithms, see advertising they don’t want. The problem, Tufekci said, is that “we no longer really understand how these complex algorithms work.”

“We don’t understand how [the algorithms] are doing this categorization,” she said. “It’s like we’re not programming anymore, we’re growing intelligence that we don’t truly understand.”

We may yet come to a point where we are unable to understand something else: life beyond what the algorithm decides we want.

The more faith we put in the devices that, like little pocket-sized, wrist-worn oracles, reveal such mesmerizingly accurate details about our bodies and our lives, the more we will come to trust them as sources of ultimate truth. We will live, and in some cases already do, inside the reality they create for us. Along the way, experiences outside that device-driven reality will, to paraphrase Jaron Lanier, become as opaque as the algorithms that drive those inside it.

When the information we see — the people and news we are exposed to — is decided by our data double, we begin to lose a part of ourselves. Something else happens, too. We begin to lose track of which self is which. Data doubles seem accurate — as surveillance critic David Lyon once put it, they can appear as “more real to the surveillance system than the bodies and daily lives from which the data have been drawn” — but they’re ultimately out of our control.

Instead, power resides with the algorithms that build the connections that create our networked shadow. What happens when they’re wrong? The results can be frustrating, or even devastating. Algorithmic faults “have seen voters expunged from electoral rolls without notice, small businesses labeled as ineligible for government contracts, and individuals mistakenly identified as ‘deadbeat’ parents,” Wired summarized back in 2014. And even when the algorithms are accurate, recognizing us perfectly, we might wish they weren’t.

But there’s no turning back. Each of us is now a split person — one human, one digital. As we become more and more reliant on the devices in our pockets or on our wrists, we — not to mention the companies and governments that rely on our accumulated data to determine our identity — may become less and less certain of which version of us is really us.

Earlier this year, in the wake of revelations about how Cambridge Analytica allegedly targeted Facebook users with political advertising, some hoping to escape omnipresent surveillance adopted the hashtag #DeleteFacebook. It was a form of rebellion against the narrowly curated worldview the platform presents to its users.

But big as Facebook is, the network of persuasion tools and surveillance architecture stretches well beyond it. That architecture is everywhere, and we work at all times to sharpen the image it has of us, to bring into focus the shadows that haunt us in its digital realm — our data doubles. We feed the data that creates our double every time we write something on Facebook, yes. But also every time we record a workout. With every run we log. With every moment of our sleep that’s monitored. With every beat of our heart.

Escaping the machine and experiencing a reality free from monitoring was, before last week, a mere matter of taking the device off, or leaving it in a drawer for good. Now, for some, life — at least, the insurance of their real life — will depend on it.
