Fight neglect before it's a clear and present danger

Listen to "Is Facebook a clear and present danger?" from the March 24, 2018 podcast episode

Last week, I was following a pickup truck pulling a trailer down the Interstate. I was maybe a half-mile behind him -- hanging back because I could see he had a trailer, which is usually the kind of thing that makes me wary.

Good thing I hung back. It was dark, so I could only make out what was in my headlights, plus a little bit of illumination from the moon. Then he dropped a chair off the back of his truck.

Literally. A chair came off the truck and landed mostly in the left-hand lane.

A chair in the roadway is a clear and present danger to other motorists traveling at 75 mph (I forgot to mention -- I was in Nebraska).

In my concept of the world -- one that depends on personal responsibility but also on certain civic practices -- that driver is responsible for not leaving a mess that can damage property or harm others. There wasn't any intention to cause harm, but there was a certain amount of neglect: a lot of stuff was hanging out of the pickup bed and off the trailer, and it wasn't adequately strapped down.

Once I could do it safely, I reported the debris to the state patrol. And, by chance, I did so from a gas station where the culprit happened to have pulled off to refuel. Having noted where the debris had landed, I was able to tell him what mile marker to revisit to pick up his missing chair -- assuming it hadn't already been smashed to smithereens.

The problem when we move to the digital realm is that we don't know enough about what debris is being left behind, nor how it will be used. Cambridge Analytica is accused of collecting data on tens of millions of Facebook users under false pretenses.

This is a two-part problem:

1. Was it collected and used in violation of the rules?

2. Was it used to manipulate election outcomes?

Even if your answer to #2 is "no" (and it's premature to say that), that still leaves the problem of #1.

Moreover, even if there hadn't been a problem with #1, we still have the ultimate issue that we are leaking valuable information about ourselves at every turn online. And sometimes, it's being leaked for us -- one of the key accusations in the Cambridge Analytica case.

The problem is that even second-order information about you can be used to build a profile. Photographs of you can be used to deduce your sexual orientation, and your basic Facebook "likes" can deliver a highly accurate picture of your race, sex, age, and political preferences.
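To make that point concrete: the published research behind claims like these used little more than off-the-shelf statistics. Here's a minimal sketch in Python, with synthetic data standing in for real "likes" -- every user, page, and number below is made up for illustration, not drawn from any actual study:

```python
# Sketch: predicting a personal trait from Facebook-style "likes".
# Synthetic data only -- this illustrates the technique, not any real dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_users, n_pages = 5000, 300
likes = rng.integers(0, 2, size=(n_users, n_pages))  # 1 = user liked the page

# Pretend a handful of pages correlate with a binary trait (e.g., party lean).
signal_pages = [3, 41, 97, 150, 262]
score = likes[:, signal_pages].sum(axis=1) + rng.normal(0, 1, n_users)
trait = (score > score.mean()).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    likes, trait, test_size=0.25, random_state=0)

# An ordinary logistic regression recovers the trait from clicks alone.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Accuracy from likes alone: {model.score(X_test, y_test):.0%}")
```

The unsettling part is how ordinary this is: no exotic AI, just a linear model over things you clicked.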

That's what can be done now.

Cambridge Analytica claimed it could do a lot more with psychographics. Maybe it could, maybe it couldn't.

But someone probably will be able to in the very near future.

Take a step back: If someone had taken a blood sample from you 20 years ago, they could've figured out some basic things, like your blood type and your cholesterol levels. But that same blood sample taken today could be used to run a full genome sequence on you -- at a cost of about $1,000, a price that has crashed in recent years. Or you can pay about $100 to any of several services to find out what your genetic ancestry looks like.

That same blood sample that was pretty innocuous in 1998 is a super-powered tool in 2018.

The same will go for the technology to build a psychological profile of you. For now, it can predict your romantic preferences and whether you align with a political party. And if you give it some more information (like you do for Amazon or Netflix all the time), it can pick books, movies, and TV shows you'd like -- also with pretty high accuracy.

We're not that far from artificial intelligence being able to build what I've called a "personality engine" based on you -- like a search engine, but one that generates the answers to questions someone might ask you.
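To be clear about what I mean, here's a purely hypothetical sketch of my own -- not anyone's actual product. It "answers" a question by retrieving the most similar thing a person has said before; a real system would generate new sentences rather than quote old ones, but the raw material is the same:

```python
# Hypothetical "personality engine" sketch: answer a question by retrieving
# the most similar past statement (TF-IDF vectors + cosine similarity).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Stand-in corpus: things this person has posted, written, or "liked".
statements = [
    "Drivers ought to strap down their loads before getting on the highway.",
    "Personal responsibility matters, but so do basic civic practices.",
    "I reported the debris to the state patrol as soon as it was safe.",
    "Technology is value-neutral; the people using it make it good or bad.",
]

vectorizer = TfidfVectorizer()
corpus_vectors = vectorizer.fit_transform(statements)

def personality_engine(question: str) -> str:
    """Return the past statement most similar to the question asked."""
    q_vector = vectorizer.transform([question])
    best = cosine_similarity(q_vector, corpus_vectors).argmax()
    return statements[best]

print(personality_engine("What should drivers do with a loaded trailer?"))
```

The more text and clicks you leave behind, the less this has to guess.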

Most of us are more predictable than we think, and the more you know about the things that have influenced a person, the more accurately you can forecast what that person is going to think.

You might say "Who wants that?" Obviously, advertisers and marketers do. So do political teams (why do you think they build voter databases and run focus groups?). But there will be recreational, personal interest in this, too.

I never got to know my grandparents on my mom's side. They died before I was born. But if you gave me the chance to "meet" one of them through an artificial intelligence tool designed to mimic their personality (a "personality engine"), you'd pique my interest.

Don't pretend that any of us is immune to the attraction of this idea.

If you wore a "WWJD" bracelet, you were training yourself to ask what Jesus would have done. If you could have an app on your phone that let you describe your situation and get a Biblical response, would you use it?

If you've ever filled out a college application, sat through an interview, or talked with someone on a first date, there's a good chance you've been asked to name a historical figure you'd like to meet. My favorite version of this question was which three figures I'd want to have at a dinner party for a single night. (I had to answer it to get into the honors program at UNI. I think my answers were Teddy Roosevelt, Alexander Hamilton, and Robert La Follette. Looking back on it, that group could have used some gender balance.)

The point is that we routinely ask ourselves what our predecessors might have said or thought or done in a given situation. It's a natural human question, and technology is right on the cusp of being able to answer that. It'll be easy for prolific writers like Hamilton and Roosevelt. It'll be harder for the more obscure masses of history. But the more we leave behind a trail of "likes" and "favorites" in the present day, the easier it will be to generate exactly those kinds of profiles for even the most ordinary of people living today.

Your grandkids might treasure it. So will people trying to influence what you do.

So even though it's quite possible that the promises of data-driven electioneering in 2016 (and even 2012) were wildly overblown, there's a good chance they won't be by 2024 or 2028. The decisions and choices we make now about how to grapple with this will affect future election outcomes.

And, by the way, we all need to be clear that there's a difference between aggregate effects and marginal effects. A lot of us already have our minds made up about these things, and we aren't going to be triggered to behave one way or another by something we see on social media. In aggregate, it probably won't sway many of us in two-party elections...not by much, at least.

But where it gets interesting is whether it has a marginal effect -- the ability to swing certain people to vote or not to vote. To become radicalized or to stay close to the middle. To turn into loud, shouty activists or to stay on the couch eating potato chips and watching reruns. No matter how you slice it, the 2016 election was decided at the state level, in a few states, by a thin margin of votes. Marginal effects matter already, and that's not likely to change soon. So if we get distracted by the thought that "No, I wouldn't change MY vote just because of something I saw on Facebook," we could miss the point that it might not be a mass effect anyone has in mind -- just a marginal one. You simply may not be the target.
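Here's the back-of-the-envelope arithmetic, sketched in Python. The state margins are rounded approximations of the official 2016 results; the targeting percentages are purely hypothetical numbers I picked for illustration:

```python
# Why a marginal effect can decide an election: approximate 2016 margins in
# three states versus a hypothetical targeted nudge. Figures are rounded.
margins = {  # winning margin, in votes (approximate)
    "Michigan": 10_700,
    "Wisconsin": 22_700,
    "Pennsylvania": 44_300,
}
electorate = {  # total votes cast (approximate)
    "Michigan": 4_800_000,
    "Wisconsin": 2_975_000,
    "Pennsylvania": 6_165_000,
}

# Hypothetical: a campaign targets 10% of voters and shifts the turnout or
# choice of 3% of that slice -- a 0.3% effect on the whole state.
targeted_share, shift_within_target = 0.10, 0.03

for state, margin in margins.items():
    swung = electorate[state] * targeted_share * shift_within_target
    print(f"{state}: ~{swung:,.0f} votes swung vs. a {margin:,} margin "
          f"({swung / margin:.0%} of the margin)")
```

A three-tenths-of-a-percent shift statewide -- far too small to show up in any poll -- more than covers the Michigan margin. That's the scale at which "marginal" operates.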

By going small, these effects could become very big. I've been banging this drum for a while, and I'm not going to stop: The technology itself may be value-neutral, but the people using it decide whether it gets used for good or for bad. Mark Zuckerberg has been on a public-relations mission this week, but to reiterate what I said at the start of the year: If he wants his brainchild to be used for good rather than evil, he has to start making decisions now about the rules his company sets in place. To borrow a line from Kori Schake, "The arc of history bends only when people grab onto it and wrench it [in] the direction they insist it go."

It's time for Facebook to start wrenching. And the same goes for all of us, too.

 
Brian Gongol
