Welcome to the Machine

“Welcome my son, Welcome to the machine. Where have you been? It’s all right – we know where you’ve been…”

~ Pink Floyd, “Welcome to the Machine” (1975)

For prophetic visions of where we’re headed, forget the economists, philosophers, historians, politicos – and especially climatologists (30 years ago, they predicted an impending ice age, for Pete’s sake). It’s only every once in a while that they get something right about what’s waiting for us around the corner…

But for my money, society’s real seers are the novelists and short story writers.

Look at how today’s America mirrors Aldous Huxley’s vision in Brave New World of a hedonistic, classist, high-tech future world where consumerism is civic duty – and where relentless promiscuity and legalized drug use (the author’s euphoria-inducing “soma” equating to modern-day Prozac, Percocet, OxyContin, etc.) are standard measures of what’s normal and healthy…

See how Vonnegut’s vision of a 2081 U.S. government that codifies and enforces equality in the brilliantly comedic Harrison Bergeron resonates in both the modern American education system and its tax code – both of which punish or ignore excellence, while overlooking or rewarding failure and mediocrity…

Consider how H.G. Wells’ The Island of Dr. Moreau foreshadowed Nazism, eugenics, and the human genetic meddling and embryonic selection (now called pre-natal “health screening” – but, perhaps soon, prenatal “enhancement”) we’re increasingly accepting as a normal part of reproduction…

And of course, everyone’s aware of how American society is creeping ever more toward a PC surveillance state, where both privacy and dissent are borderline criminal – à la the “Thought Police” and “Big Brother” from Orwell’s 1984…

But as unsettlingly accurate as these quasi-prophecies have proved, what’s next for America may be even more terrifying: A dehumanized cyber-world more akin to Asimov’s I, Robot.

I’m talking about a world where robots – and I’m considering any combination of hardware and software that can detect, assess, and classify human actions or events as such a machine – compete directly with humans, and where the most critical decisions in our society are increasingly made by artificial intelligence.

Don’t scoff, it’s already beginning.

I, Robot Witness

In past Whiskey & Gunpowder essays, I’ve written about the explosion of warrantless civilian surveillance in our society in the wake of Sept. 11. Cameras are everywhere nowadays – in the store, at the ATM, in the bank, on bridges over the highway, in cops’ cars, on street corners, at stoplights, in parking garages, at the airport, and on almost everybody’s cellular phone.

It’s getting so that you can’t steal a smooch (or whatever) from your lover at a stoplight anymore for fear of some bored government employee in some office with beige-painted cinderblock walls zooming in on you to get his kicks. Not that this is currently happening in “real time” whenever you’re at a stoplight. As it stands, footage from the cameras that watch us in intersections and on street corners usually only gets looked at in review – to better gather facts in case a crime has been committed. But using the stoplights as an example: What if a car runs the light in the other direction right when you’re in some manner in flagrante delicto? The shutters snap from every direction and…

Surprise! You’re on (very) candid camera.

Same with changing your clothes in a parked car outside the mall (who hasn’t done this at least once?) or hurriedly stuffing a chili dog into your face while walking down the street on your way to some meeting. All it takes is for the wrong thing to happen in the foreground while you’re in the background and your mug (or again, your whatever) is on display in some crime lab, court room, and no doubt someday on the Internet.

The point being this: Awkward, vulnerable or risqué moments happen in any life worth living – and now, they’re happening on camera…

It isn’t just the population centers, public areas, and highways that are under round-the-clock surveillance in America, either. Space-based satellite imaging covers every square inch of this country – albeit with varying degrees of resolution. However, that’s all but certain to soon change. I don’t know if you’ve heard about this or not (it hasn’t made the headlines in any mainstream information outlet that I know of), but just over a year ago, Lockheed Martin landed a $149 million contract to study the overall feasibility and to produce a prototype of its High Altitude Airship (HAA), known as a stratospheric platform system.

Ostensibly as part of a missile defense system, the feds are planning to soon have 11 or more of these in constant flight at around 70,000 feet, blanketing the entire U.S. with real-time, high-resolution surveillance. Each one of these unmanned behemoth blimps would be about 20 times the size of the one Goodyear floats over football games, and would monitor a patch of American soil 750 miles in diameter – with cameras that are no doubt capable of detail many times greater than those on satellites.
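A quick back-of-the-envelope check (my own arithmetic from the figures above, not anything published by Lockheed or the feds) suggests why roughly a dozen such platforms could plausibly cover the country:

$$
A_{\text{patch}} = \pi r^{2} = \pi\,(375\ \text{mi})^{2} \approx 4.4 \times 10^{5}\ \text{mi}^{2},
\qquad
11 \times A_{\text{patch}} \approx 4.9 \times 10^{6}\ \text{mi}^{2} \;>\; 3.1 \times 10^{6}\ \text{mi}^{2} \approx A_{\text{contiguous U.S.}}
$$

That leaves a healthy margin for the overlap you’d need, since circular footprints don’t tile neatly.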

Understandably, I could find no specs on these. However, I’m certain that given the resolution of current space-based lenses, these cameras would easily be able to discern fine detail like individual human faces, license plates, etc. Which means forget about pulling over to the side of a remote stretch of highway for a quick whiz or that midnight skinny-dip in the pool at your condo complex. They’ll be able to identify you by your birthmarks, tattoos – or, uh, dimensions. But I digress…

The point of me rehashing all that’s old and new in the arena of today’s questionably constitutional monitoring of American citizens is to get to what’s every bit as disturbing as the omnipresence of prying eyes: the fact that robot technology may soon allow Big Broth – er, I mean the government – to CONSTANTLY MONITOR these channels in “real time,” instead of simply reviewing images after the fact in an evidentiary capacity.

This is bad.

I, Robot Cop

You may remember a surprise semi-blockbuster movie from 1987 called RoboCop. Although this movie’s “bad guy” was actually a mega-corporation that effectively privatized the police for its own ends, the “good guy” was someone who really resonated with audiences: a robotic cop who doled out justice without fear, emotion, prejudice, vice, corruption, or ulterior motives.

In other words, he was the ideal enforcer.

But of course, this was just a movie. The reality behind the likely progression of robotic justice is far less cheer-worthy. Tomorrow’s robocops will not be armed enforcers, just omni-prying watchers. And they won’t be infallible…

According to a recent Reuters article, “intelligent video” is the next big development in law enforcement surveillance. Basically, this is cutting-edge computer software that’ll be employed by various agencies of the government from the local police on up to monitor everyday actions – picked up 24/7 by both cameras and microphones – in order to identify and sound the alarm about “suspicious” behaviors.

Yes, you read that right: Soon, everything you do AND SAY in almost any public setting could be filmed, taped, and checked by artificial intelligence against a list of behaviors and speech that a bunch of pointy-headed G-men have determined are threats to public safety or national security.

Things like loitering, circling a location, or walking away from a package – or simply uttering words like “bomb” or “explosive” – would constitute alarm-worthy actions in the eyes of intelligent video, according to the Reuters piece.
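To make concrete just how literal-minded such a system could be, here’s a minimal sketch of a rules-based behavior-and-speech flagger. This is purely my own illustration, with made-up event names, keywords, and thresholds; it is not anything described in the Reuters piece or used by any real product:

```python
# Purely illustrative sketch of a naive "intelligent video" rules engine.
# Every event type, keyword, and threshold below is hypothetical.

SUSPICIOUS_EVENTS = {"loitering", "circling_location", "abandoned_package"}
SUSPICIOUS_WORDS = {"bomb", "explosive"}

def flag_behavior(event_type: str, duration_seconds: int = 0) -> bool:
    """Flag a detected behavior if it matches the watchlist."""
    if event_type == "loitering":
        # Hypothetical threshold: linger more than 5 minutes and you're "suspicious."
        return duration_seconds > 300
    return event_type in SUSPICIOUS_EVENTS

def flag_speech(transcript: str) -> bool:
    """Flag any utterance containing a watchlisted word -- with no sense of context,
    so slang and jokes trip the alarm just as surely as genuine threats."""
    words = {w.strip(".,!?").lower() for w in transcript.split()}
    return bool(words & SUSPICIOUS_WORDS)

if __name__ == "__main__":
    print(flag_behavior("circling_location"))             # True
    print(flag_speech("Those Cinnabons are the bomb!"))   # True -- a false positive
    print(flag_speech("Lovely weather we're having"))     # False
```

Notice that a matcher like this has no notion of context whatsoever, which is exactly why the scenarios below are so easy to imagine…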

Which means if you’ve made three low passes over that watch in the jewelry store window over the span of an evening’s shopping, the fuzz might just swarm down on you on pass number four…

And if you say to your friend in the food court that those delicious Cinnabons – or those women three tables over who are eating them in seemingly orgasmic ecstasy – are “the bomb,” the Men in Black might take you down…

And if you accidentally drive away without that “Sharper Image” bag you set on the ground as you fumbled for your keys in the parking lot, the copters might descend on you with their huge spotlights on your way home…

What scares the hell out of me about this isn’t simply the fact that we may soon be watched and scanned with high-performance computer technology – that’s already happening every time we go to the airport. Like most Americans, I’m willing to submit to this because heightened airport security is certainly called for in the post-Sept. 11 world. Besides, everyone who flies knows what to expect when they go to board a plane nowadays. It’s not the same thing at the local mall.

No, what bothers me about this Orwellian inevitability is that machines will be making the call about what constitutes probable cause for detention, search, or arrest. Now, I’m no lawyer, but it seems to me that this standard has been shifting lately from meaning roughly “a reasonable suspicion that a crime has been committed” to more or less meaning “anything that could indicate a crime might soon be committed.”

Do we want this?

Think about it for a minute. Errors made by human cops in the establishment of probable cause can be remedied or nullified in court. Cops can be cross-examined for prejudicial behavior, interpretations of their words and actions can be disputed, records can be expunged, and reputations can be restored if a person is found to have been wrongfully (or at least unlawfully) detained…

But how would people secure justice when these prying robotic eyes made the wrong call and sounded a false alarm? Would innocent people wrongfully detained on some machine’s say-so be able to get justice in courts?

Who would people be able to sue for wrongful arrest? How would one sue a machine for damages? Would the accountability fall to the software engineer – or to the agency that implemented the system?

Beyond this: What if a surveillance machine “learned” to profile people based on race or ethnicity, or other discriminatory factors? Machines know nothing of political correctness, you know…

Or would they have to be designed to overlook perfectly logical criteria that typically fall under the heading of “profiling”? More critically, how could this be done in a way that would be unassailable in court?

And if it were done, how could that machine’s judgment then be considered impartial? We would have TAUGHT it to be partial…

Oh, and what about this: Once we make the jump to machines deciding what constitutes probable cause, will human cops still be allowed to do this on their own in places where the cameras aren’t looking? Or will their judgment – as nonmachines – all of a sudden be considered less than impartial or credible in the eyes of the law?

In other words: Will cops ultimately be prohibited from making arrests unless a machine “sees” a crime (or the likelihood of one) and gives them the OK?

I, Robot Catalyst

There’s only one way that “intelligent video” could be legitimized…

And that’s if Americans resigned themselves to the necessity of it and were willing to submit to the supposed impartiality of machines in yet another invasion of our privacy and subversion of our rights.

Of course, we’ll do it.

Our leaders will tout the system as an end to wrongful arrests, when, in fact, it could be the beginning of many more of them. They’ll say the impartiality of machines will make the criminal justice system less corrupt and less prone to abuses and brutality – by making it less vulnerable to the prejudices of individual cops…

They’ll sell it to us on the grounds that it’ll make our streets, roads, shopping centers, and neighborhoods safer without draining the public coffers on more police…

They’ll say it could foil terrorist attacks by spotting suspicious behavior patterns mere humans could never detect (but without profiling, of course)…

And once more, we’ll cave and sacrifice yet another huge chunk of our freedom and privacy on the altar of safety and security. The fact that we might be trading one kind of danger for another won’t even enter into the equation. Like they always do, the machines will have become the catalyst for seismic change, and we’ll be left with the aftermath, which is always the same:

Less liberty and the illusion of more security.

For those of you who think I’m a nut case for extrapolating all of this, I want you to think about something for me:

Just six years ago, it probably would’ve been inconceivable to you – and very likely an outrage – that soon your face would be photographed, computer-enhanced, recorded, and checked against a database of criminals while you were waiting to pass through what at that time must’ve seemed like a cumbersome and inconvenient amount of security (a simple metal detector) before boarding an airplane…

And 12 years ago, it would’ve seemed unlikely to you that in the near future, cameras at stoplights and on highways would be clicking away and issuing you tickets for traffic violations without ever involving an officer of the law…

There's this friend of mine…

He works in the financial industry, lives in a well-appointed, relatively new home in an idyllic suburb of a booming Maryland city, is married to a lovely, charming woman, and has an adorable year-old son. He's from a fine, relatively soap-opera-free family in which there are no head cases or drug addicts, and in which everyone's educated and more or less gainfully employed.

He works out at the gym several times a week, drives a Lexus, and plays golf or goes clay-bird shooting when he gets a chance. Like most people, he's saving too little and juggling his fair share of debt. Bottom line: I consider this friend of mine reasonable, sane, and well adjusted – and very likely quite representative of the typical middle-class American professional…

And judging by a surprising amount of the reader mail we got in response to Part 1 of this series, my friend is VERY typical in the sense that he doesn't mind if everything he does and everywhere he goes is recorded and scrutinized by the eyes of law enforcement. He's not the least bit worried about living his life on camera.

"I'm not a criminal," he said over lunch recently. "I have nothing to hide, so what do I have to fear from being on camera?"

Again, this friend of mine is a reasonable man, and his thinking is reasonable – given one very UNreasonable (yet seemingly universal) supposition…

That our safety is the only aim of our government's omni-prying eyes.

I, Robot Revenuer

A little more than a year and a half ago (Whiskey & Gunpowder, June 15, 2005), I wrote about the absurd number of laws that govern every aspect of our lives. There are literally tens of thousands of pages of legislation on the books – laws we don't know we're violating until we're caught running afoul of them…

Now, stay with me here. The vast majority of laws are typically punishable by fines, not incarceration – which means that lawbreaking is a major source of revenue for all levels of government. This is no great revelation, but it does explain WHY there are tens of thousands of pages of laws we'll never know about until we're caught breaking them. It's part of a vast revenue and control machine, the bottom line of which is this:

Everyone's guilty of something. We just don't know it until after the fact. Or more accurately, until we get the bill. There are literally so many laws that one cannot help but break a bunch of them. I've even seen cases of laws that contradict each other – so that no matter what you do, you're a criminal.

But because in the eyes of the court ignorance of the law is no excuse for breaking it (if you ask me, it's the ONLY valid excuse), all it takes for the revenue machine to grind onward is for an appropriate agent of authority who knows which laws we're breaking to see/catch/detect us in the act, and we're slapped with a fine that goes directly into Big Brother's pocket.

The only thing currently protecting us from being nickel-and-dimed to death with fines for our inadvertent lawlessness is the fact that there aren't enough cops, IRS auditors, ATF agents, etc. who actually know all the laws and are in a position to catch us every time we accidentally break one of them…

Seriously, very few people in positions of authority really know much about the law. The average cop on the street, for instance, has a very poor grasp of it. The example of this I love to cite is the one about the pocketknife. Here it is, excerpted from my June 15, 2005, Whiskey essay:

"I asked a dozen or more police officers this very simple question: Is it legal to carry a knife? I got the following answers, or variations of them:

  1) No.
  2) Yes.
  3) Yes, as long as it's concealed.
  4) Yes, as long as it's NOT concealed.
  5) Not for the purpose of self-defense, only for utility.
  6) Yes, as long as the blade isn't more than 2 inches long.
  7) Yes, as long as the blade isn't more than 3 inches long.
  8) Yes, as long as the blade isn't more than 4 inches long.
  9) Yes, as long as it is a folding knife, and not spring-loaded or of a "butterfly" configuration (whatever that means).
  10) Yes, as long as it would not be construed by any police officer as a threat during a routine search (this is entirely subjective, of course).

"See what I mean?"

What's so ironic is that citizens' ignorance of the law is NOT an excuse for breaking it, yet enforcers' ignorance of the law is the very thing that keeps many of us from routinely being caught breaking it! Right now, only two things give us a safety net against an unwitting career in crime – or a bankruptcy at the hands of petty fines:

  1. The fact that the mere mortals who enforce the laws can't keep track of the hundreds of thousands of them that govern us.
  2. The fact that the eyes of authority aren't on us frequently enough to catch us breaking the million or so laws we don't know about.

Now here's the $64 trillion question:

How will this dynamic change once the detection of unlawful behavior is no longer the job of humans – but officially delegated to omnipresent, camera-eyed, software-brained, mega-memoried robots that CAN keep track of hundreds of thousands of laws simultaneously?

Think we won't all of a sudden find out just how inadvertently criminal we really are – even those of us who "have nothing to hide"? Think we won't find out very quickly just how many ways our elected officials have legislatively transformed us from upstanding citizens into petty outlaws?

If you think I'm nuts, just consider the ways in which this is already happening. Cameras on highways and at stoplights are handling the enforcement of moving violations now, instead of flesh-and-blood cops. Think this isn't aimed purely at revenue creation – or do you really believe it's about safer roads?

If things go the way they're shaping up, it won't be long until robots are calling in the cavalry on us – or simply mailing us fines and citations – for all kinds of stuff we'd never have known or imagined we were doing wrong…

And we'll be lucky if it doesn't bankrupt or ruin every one of us.

I, Robot Voyeur

One of the more disturbing aspects of this coming quagmire is something I touched on earlier in this essay: the likelihood that soon, robots (camera-equipped computers) will be making the determination of what constitutes suspicious behavior. Basically, they'll decide what warrants the scrutiny/investigation of a law enforcement officer under the "probable cause" standard…

This bothers me because it's possible that robotic surveillance from all sources may NOT fall under the same regulatory control as manned surveillance by humans. Again, I'm no lawyer, but this seems to me to be uncharted legal waters.

For instance: It's currently illegal for the police, FBI, etc. to set up targeted surveillance on any American without a court order or similar official permission (which isn't to say this kind of thing doesn't happen anyway). In other words, there's a legal process that has to take place before anything other than generalized surveillance can occur…

Increasingly, the line demarcating "generalized" surveillance is creeping toward "omnipresent" surveillance – especially with the approaching implementation of HAA (High Altitude Airship) technology. Soon, the government's people-watching activities won't be confined to street corners with a history of drug dealing or potential terrorist targets like train stations or busy shopping centers. It'll be EVERYWHERE.

What I'm wondering is this: Once the monitoring duties are transferred to robots, are the legal limitations on when and where we can be peeped on still valid? Think about this for a minute. If heartless, soulless, emotionless machines are watching us instead of real people – people with all their fantasies and agendas and corrupt urges and senses of humor – does this constitute an unconstitutional breach of privacy rights?

Put another way: Can MACHINES invade a person's privacy – especially if they're programmed to block all external access to their images and call in the cavalry only if they detect crimes being planned or committed?

I'm betting that the feds would argue no.

By virtue of their lack of humanity, I'm betting it will one day be determined that robots could watch us in every room of our houses, every moment of the day, and never invade our privacy in the eyes of the law – as long as we don't do anything illegal (this is easier in some states than others)…

However, if we did do something that met some cyber-definition of "probable cause" for illegality, the cops these robots summon would have every right to invade our privacy based on that determination.

But here's an interesting question: What if, upon a court's review, a surveillance machine were found to have misinterpreted innocent actions and wrongfully determined probable cause – yet the cops that busted in the door found evidence of law-breaking anyway?

Example: Let's say you and your teenage son start roughhousing in the clubroom downstairs. You're trying to teach him wrestling moves you used to pin people with in college. He's using his youth and surprising speed to slip away from you. You're having a grand old male bonding session, the likes of which are all too rare these days…

All of a sudden, the cops bust down the front door. It seems that your innocent horseplay was watched through a small basement window and misread as child abuse by the robot on the telephone pole outside your home. You're detained until a representative of Social Services arrives and interrogates your child…

However, in the process of sorting out the mistake and establishing your innocence, the cops in your home – called there justifiably by a machine, mind you – spot a dog without a license lounging on the hearth (a stray your daughter brought home a month ago) and an antique firearm over your mantel without a trigger lock on it.

Unbeknownst to you, these are violations of the law. Punishable by FINES.

The question is: Is this evidence "fruit of the poisonous tree," since the robot made the wrong call on probable cause? Or are these violations still prosecutable under the "plain view" doctrine? How do you think your state's courts would rule on this, given the fact that a lot of money in fines hangs in the balance?

I'm telling you, what's coming is yet another legal and financial quagmire for ordinary people and a bonanza of cash for governments.

But that's not even the worst wrinkle in the coming "eye, robot" reality…

I, Robot Citizen

Lest you think I'm either a closet criminal or just a militia-joining paranoiac, let me share with you what I'm most afraid of, should (when) omnipresent robotic surveillance become a reality…

It's not getting caught and fined to death for laws I unknowingly break that really worries me. Nor is it being filmed in embarrassing or compromising positions and having these images somehow reach the Internet or some government database.

No, what I'm really worried about is that, if we accept or rationalize as necessary that everything we do is watched and scrutinized, we'll inevitably stop doing the right thing simply for its own sake.

We'll only do it out of fear of being seen doing the wrong thing.

Think about this for a second. It's easy to do what's right when someone's watching. What's hard – and what leads to character, honor, and all the other things good people (and good leaders) should be made of – is doing the right thing when NOBODY is looking.

And if we're on camera every minute, we'll no longer have a need for such inner integrity. Doing what's right will be simply an exercise in hassle-free living, not an impulse that comes from within. So we won't teach integrity to our kids or develop it for ourselves anymore. We'll only teach them how to avoid appearing guilty of anything on camera…

The most laughable and terrifying irony of all is that when this happens, we will have become the robots, and the robots us! We'll have literally switched places.

The robots will have become pseudo-citizens in the sense that they'll be making more and more of society's most crucial decisions (like determining probable cause for arrest). And citizens will have become robot-like, in that they're no longer directed from within by their hearts, consciences, and sensibilities – but from without by the need to integrate themselves with the dictates and perceptions of a cyber-system.

In this world, we won't wrestle and tickle and roughhouse with our kids anymore, for fear of drawing child abuse charges…

We won't make love any way but "by the book," lest our rambunctious role-playing be construed as domestic violence…

We won't make more than one pass over that engagement ring in the store window, for fear of being seen as casing the place…

We won't leave that box of clothes and blankets on the street corner where the homeless guys congregate every night, for fear our "abandoned parcel" would trigger a bomb alert…

In other words, we'll be programmed (literally) not to do anything spontaneous or nutty or risky or gutsy or sexy or romantic or valiant or compassionate or humorous or creative or HUMAN anymore, for fear that the machines will see it as criminal in their literal ones-and-zeros minds.

I don't know about you, but that's a reality I'll risk any terrorist plot, any drunk driver, any hazardous neighborhood, any crazed mall gunman, any sexual predator, any crime of passion, or any serial killer to avoid living in…

Because if my life isn't discernibly human, what's the point of protecting it?

March 1, 2007