The Military Origins of Facebook

Facebook’s growing role in the ever-expanding surveillance and “precrime” apparatus of the national-security state demands new scrutiny of the company’s origins and its products as they relate to a controversial, now-defunct DARPA-run surveillance program that was essentially analogous to what is currently the world’s largest social network.

In mid-February, Daniel Baker, a US veteran described by the media as “anti-Trump, anti-government, anti-white supremacists, and anti-police,” was charged by a Florida grand jury with two counts of “transmitting a communication in interstate commerce containing a threat to kidnap or injure.”

The communication in question had been posted by Baker on Facebook, where he had created an event page to organize an armed counter-rally to one planned by Donald Trump supporters in the Florida capital of Tallahassee on January 6. “If you are afraid to die fighting the enemy, then stay in bed and live. Call all of your friends and Rise Up!,” Baker had written on his Facebook event page.

Baker’s case is notable as it is one of the first “precrime” arrests based entirely on social media posts—the logical conclusion of the Trump administration’s, and now Biden administration’s, push to normalize arresting individuals for online posts to prevent violent acts before they can happen. From the increasing sophistication of US intelligence/military contractor Palantir’s predictive policing programs to the formal announcement of the Justice Department’s Disruption and Early Engagement Program in 2019 to Biden’s first budget, which contains $111 million for pursuing and managing “increasing domestic terrorism caseloads,” the steady advance toward a precrime-centered “war on domestic terror” has been notable under every post-9/11 presidential administration.

This new so-called war on domestic terror has already produced arrests based on many of these types of posts on Facebook. And, while Facebook has long sought to portray itself as a “town square” that allows people from across the world to connect, a deeper look into its apparent military origins and continual military connections reveals that the world’s largest social network was always intended to act as a surveillance tool to identify and target domestic dissent.

Part 1 of this two-part series on Facebook and the US national-security state explores the social media network’s origins and the timing and nature of its rise as it relates to a controversial military program that was shut down the same day that Facebook launched. The program, known as LifeLog, was one of several controversial post-9/11 surveillance programs pursued by the Pentagon’s Defense Advanced Research Projects Agency (DARPA) that threatened to destroy privacy and civil liberties in the United States while also seeking to harvest data for producing “humanized” artificial intelligence (AI).

As this report will show, Facebook is not the only Silicon Valley giant whose origins coincide closely with this same series of DARPA initiatives and whose current activities are providing both the engine and the fuel for a hi-tech war on domestic dissent.

DARPA’s Data Mining for “National Security” and to “Humanize” AI

In the aftermath of the September 11 attacks, DARPA, in close collaboration with the US intelligence community (specifically the CIA), began developing a “precrime” approach to combatting terrorism known as Total Information Awareness or TIA. The purpose of TIA was to develop an “all-seeing” military-surveillance apparatus. The official logic behind TIA was that invasive surveillance of the entire US population was necessary to prevent terrorist attacks, bioterrorism events, and even naturally occurring disease outbreaks.

The architect of TIA, and the man who led it during its relatively brief existence, was John Poindexter, best known for being Ronald Reagan’s National Security Advisor during the Iran-Contra affair and for being convicted of five felonies in relation to that scandal. A less well-known activity of Iran-Contra figures like Poindexter and Oliver North was their development of the Main Core database to be used in “continuity of government” protocols. Main Core was used to compile a list of US dissidents and “potential troublemakers” to be dealt with if the COG protocols were ever invoked. These protocols could be invoked for a variety of reasons, including widespread public opposition to a US military intervention abroad, widespread internal dissent, or a vaguely defined moment of “national crisis” or “time of panic.” Americans were not informed if their name was placed on the list, and a person could be added to the list for merely having attended a protest in the past, for failing to pay taxes, or for other, “often trivial,” behaviors deemed “unfriendly” by its architects in the Reagan administration.

In light of this, it was no exaggeration when New York Times columnist William Safire remarked that, with TIA, “Poindexter is now realizing his twenty-year dream: getting the ‘data-mining’ power to snoop on every public and private act of every American.”

The TIA program met with considerable citizen outrage after it was revealed to the public in early 2003. TIA’s critics included the American Civil Liberties Union, which claimed that the surveillance effort would “kill privacy in America” because “every aspect of our lives would be catalogued,” while several mainstream media outlets warned that TIA was “fighting terror by terrifying US citizens.” As a result of the pressure, DARPA changed the program’s name to Terrorist Information Awareness to make it sound less like a national-security panopticon and more like a program aiming specifically at terrorists in the post-9/11 era.

The TIA projects were not actually closed down, however, with most moved to the classified portfolios of the Pentagon and US intelligence community. Some became intelligence-funded and -guided private-sector endeavors, such as Peter Thiel’s Palantir, while others resurfaced years later under the guise of combatting the COVID-19 crisis.

Soon after TIA was initiated, a similar DARPA program was taking shape under the direction of a close friend of Poindexter’s, DARPA program manager Douglas Gage. Gage’s project, LifeLog, sought to “build a database tracking a person’s entire existence” that included an individual’s relationships and communications (phone calls, mail, etc.), their media-consumption habits, their purchases, and much more in order to build a digital record of “everything an individual says, sees, or does.” LifeLog would then take this unstructured data and organize it into “discrete episodes” or snapshots while also “mapping out relationships, memories, events and experiences.”

LifeLog, per Gage and supporters of the program, would create a permanent and searchable electronic diary of a person’s entire life, which DARPA argued could be used to create next-generation “digital assistants” and offer users a “near-perfect digital memory.” Gage insisted, even after the program was shut down, that individuals would have had “complete control of their own data-collection efforts” as they could “decide when to turn the sensors on or off and decide who will share the data.” In the years since then, analogous promises of user control have been made by the tech giants of Silicon Valley, only to be broken repeatedly for profit and to feed the government’s domestic-surveillance apparatus.

The information that LifeLog gleaned from an individual’s every interaction with technology would be combined with information obtained from a GPS transmitter that tracked and documented the person’s location, audio-visual sensors that recorded what the person saw and said, as well as biomedical monitors that gauged the person’s health. Like TIA, LifeLog was promoted by DARPA as potentially supporting “medical research and the early detection of an emerging epidemic.”

Critics in mainstream media outlets and elsewhere were quick to point out that the program would inevitably be used to build profiles on dissidents as well as suspected terrorists. Whereas TIA surveilled individuals at multiple levels, LifeLog went further by “adding physical information (like how we feel) and media data (like what we read) to this transactional data.” One critic, Lee Tien of the Electronic Frontier Foundation, warned at the time that the programs that DARPA was pursuing, including LifeLog, “have obvious, easy paths to Homeland Security deployments.”

At the time, DARPA publicly insisted that LifeLog and TIA were not connected, despite their obvious parallels, and that LifeLog would not be used for “clandestine surveillance.” However, DARPA’s own documentation on LifeLog noted that the project “will be able . . . to infer the user’s routines, habits and relationships with other people, organizations, places and objects, and to exploit these patterns to ease its task,” which acknowledged its potential use as a tool of mass surveillance.

In addition to the ability to profile potential enemies of the state, LifeLog had another goal that was arguably more important to the national-security state and its academic partners—the “humanization” and advancement of artificial intelligence. In late 2002, just months prior to announcing the existence of LifeLog, DARPA released a strategy document detailing development of artificial intelligence by feeding it with massive floods of data from various sources.

The post-9/11 military-surveillance projects—LifeLog and TIA being only two of them—offered quantities of data that had previously been unthinkable to obtain and that could potentially hold the key to achieving the hypothesized “technological singularity.” The 2002 DARPA document even discusses DARPA’s effort to create a brain-machine interface that would feed human thoughts directly into machines to advance AI by keeping it constantly awash in freshly mined data.

One of the projects outlined by DARPA, the Cognitive Computing Initiative, sought to develop sophisticated artificial intelligence through the creation of an “enduring personalized cognitive assistant,” later termed the Personalized Assistant that Learns, or PAL. PAL, from the very beginning, was tied to LifeLog, which was originally intended to grant an AI “assistant” human-like decision-making and comprehension abilities by spinning masses of unstructured data into narrative format.

The would-be main researchers for the LifeLog project also reflect the program’s end goal of creating humanized AI. For instance, Howard Shrobe at the MIT Artificial Intelligence Laboratory and his team at the time were set to be intimately involved in LifeLog. Shrobe had previously worked for DARPA on the “evolutionary design of complex software” before becoming associate director of the AI Lab at MIT and has devoted his lengthy career to building “cognitive-style AI.” In the years after LifeLog was cancelled, he again worked for DARPA as well as on intelligence community–related AI research projects. In addition, the AI Lab at MIT was intimately connected with the 1980s corporation and DARPA contractor called Thinking Machines, which was founded by and/or employed many of the lab’s luminaries—including Danny Hillis, Marvin Minsky, and Eric Lander—and sought to build AI supercomputers capable of human-like thought. All three of these individuals were later revealed to be close associates of and/or sponsored by the intelligence-linked pedophile Jeffrey Epstein, who also generously donated to MIT as an institution and was a leading funder of and advocate for transhumanist-related scientific research.

Soon after the LifeLog program was shuttered, critics worried that, like TIA, it would continue under a different name. For example, Lee Tien of the Electronic Frontier Foundation told VICE at the time of LifeLog’s cancellation, “It would not surprise me to learn that the government continued to fund research that pushed this area forward without calling it LifeLog.”

Along with its critics, one of the would-be researchers working on LifeLog, MIT’s David Karger, was also certain that the DARPA project would continue in a repackaged form. He told Wired that “I am sure such research will continue to be funded under some other title . . . I can’t imagine DARPA ‘dropping out’ of such a key research area.”

The answer to these speculations appears to lie with the company that launched the exact same day that LifeLog was shuttered by the Pentagon: Facebook.
