Advances in Artificial Intelligence have always been penciled in, moving forward in tentative scribbles like squiggly lie detector graphs. Advances that take the form of “AI” are cloaked in a mystic veil, one that lifts as they make it to production.

Once-new AI technologies like machine vision, circuit optimization, or voice recognition throw away their AI badge when they successfully do something. They wave so long to failed AIs that continue a long trudge through academic research labs, questing for the holy grail that usually amounts to “predicting the future.”

Then they sprout anew.

The much-discussed algorithms of Amazon, Google, Facebook and Cambridge Analytica have a pre-history and are cases in point. Notably, their makers don’t mind if some of the mystic veil remains.

This is what it looks like today — Amazon predicts what book you’ll buy, Google predicts which ad to serve you, and Facebook predicts which meme of the moment will hold your attention.

Most insidious and mystical, perhaps, was the now-defunct Cambridge Analytica. The purpose of Cambridge Analytica’s algorithm was to predict which profiles could be pinged to move an election. To the extent this may have influenced history, it might be the most powerful algorithm of all.

In any case, all these wonks’ algorithms had precursors in Collaborative Filtering and Software Agents — AI efforts of yore. Today those techniques prowl the Web inside Recommendation and Personalization engines.
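For readers who want a feel for what collaborative filtering actually computes, here is a minimal user-based sketch: find the rater most similar to you, then suggest what they liked that you haven’t seen. The ratings matrix, user names and book titles below are invented for illustration; this is a bare-bones teaching example, not any company’s actual engine.

```python
# Minimal user-based collaborative filtering sketch.
# All data here is invented for illustration.
from math import sqrt

ratings = {
    "ann":  {"book_a": 5, "book_b": 3, "book_c": 4},
    "bob":  {"book_a": 4, "book_b": 3, "book_d": 5},
    "cara": {"book_b": 2, "book_c": 5, "book_d": 1},
}

def cosine_sim(u, v):
    """Cosine similarity over the items two users both rated."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[i] * v[i] for i in shared)
    norm_u = sqrt(sum(u[i] ** 2 for i in shared))
    norm_v = sqrt(sum(v[i] ** 2 for i in shared))
    return dot / (norm_u * norm_v)

def recommend(user, ratings):
    """Suggest items the most similar other user rated but `user` has not."""
    others = {name: r for name, r in ratings.items() if name != user}
    nearest = max(others, key=lambda n: cosine_sim(ratings[user], others[n]))
    unseen = set(ratings[nearest]) - set(ratings[user])
    return sorted(unseen)

print(recommend("ann", ratings))  # → ['book_d']
```

Real engines scale this same idea with sparse matrix factorization and implicit signals, but the “people like you liked this” core is unchanged since the 1990s.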

The fact is such precursors are forgotten in the rushing currents of news. But behind the algorithms is a history populated by people with the usual human defects, which a recent book by Harvard historian Jill Lepore uncovers in engaging detail.

In “If Then: How the Simulmatics Corporation Invented the Future” [W.W. Norton, 2020], Lepore explores a 1960s corporation that sought to apply (what is now called) data science to politics and other domains, doing the algorithms by hand, but then programming the mainframe computers of the day to predict social outcomes.

A central figure in her narrative is Ithiel de Sola Pool, a Cold War Era MIT political scientist and the co-founder of Simulmatics Corp. Like more than a few of today’s machine learning seers, his predictions were couched in bias, hyped with unwarranted certitude, and leavened with shortcuts. Mispredictions eventually earned Simulmatics a reputation as an oversold under-deliverer.

It all makes for a good read. In Lepore’s exploration, de Sola Pool flies too close to the sun and falls victim to a pseudoscience of prediction. The caution in the tale should not be lost on the masters of Silicon Valley today.

De Sola Pool’s company took its name from the combination of ‘simulation’ and ‘automatic.’ Its AI lineage lies in the simulation of models, and the quest for automation.

The title “If Then” derives from Simulmatics’ focus on automated reasoning and data-driven, computer-based behavioral simulations — ‘what if’ analyses and game theory that remain central to the algorithms of today’s Big Data giants.

During World War II de Sola Pool had worked for the U.S. government, studying propaganda, combining statistical and psychological analysis, and with others setting the stage for future Natural Language Processing. Like these others, he wondered how new computer technology, first applied to artillery firing tables and nuclear reaction modeling, could be more broadly applied to other problems.

So, Simulmatics was not alone. There was the Ford Foundation, the Rand Corp., the Army Math Research Center at the University of Wisconsin, and others. Simulmatics applied early computer techniques to prediction in social and, eventually, military realms, and was quick to gain attention; the company’s first and most vivid appearance came in its work on behalf of the Kennedy campaign in 1960.

Kennedy’s use of computers to target voter blocs was controversial, arguably more so than Cambridge Analytica’s work was in 2016. Generally, automation was no more welcome in the ’60s than it is now. Bad ink and some bad predictions caused a smooth-talking de Sola Pool to lead Simulmatics away from political polling and toward the greener pastures of military planning and operations efficiencies for the DoD in South Vietnam, as war raged and the company’s political polling work dried up.

In the middle ’60s, the company’s staff would interview Vietnamese peasants, give them Rorschach tests, churn data simulations in computers, predict likely futures and file reports — all to little obvious immediate effect. Meanwhile, the times they were a-changin’. Later, back at MIT, as anti-war sentiment swelled, de Sola Pool was the object of a mock war-crimes trial staged by protesters. By then, Simulmatics was bankrupt.

On this and other points author Lepore is perceptive. The Simulmatics principals thought the mathematics of targeting messages derived from the same mainframe-based modeling used to target missiles, she writes. She does not miss opportunities to highlight the researchers’ missteps and hubris along the way.

It is not easy to measure the life work of de Sola Pool, but this is more than a start. At one point Lepore turns to famed whistle-blower and one-time Rand data analyst Daniel Ellsberg for a verdict on de Sola Pool, and Ellsberg is not shy:

“He was a very charming guy. Still I thought of him as the most corrupt social scientist I had ever met, without question.”

Lepore’s “If Then” includes a heaping helping of humanist viewpoint. She is highly aware of the mystical aura to which some technologists aspire. It’s a short plank’s walk to conceit, she suggests, as in her writing here on the then incredibly influential Ford Foundation:

“…Ford asked social scientists to make prediction the entire object of their research, to claim knowledge — empirical, probabilistic, mathematical knowledge — of what would happen next. This however was no less mystical than the ancient art of prophecy.”

With “If Then,” Lepore weaves a sometimes off-beat tale — this is the ’60s, right? — of what I would call the Roots of Recommendation. Things — relationships, particularly — come together and fall apart in fairly quick order. Some bits are like Quentin Tarantino’s “Once Upon a Time in Hollywood” while others are more firmly attached to the tone of your more usual history of technology.

The extent of contemporary backlash against Simulmatics’ activity is among the personal discoveries Lepore recounts in a Zoom interview she did with RPI and Harvard computer science researcher Fran Berman for Harvard’s Data Science Initiative. I concur with many of the points she makes in the discussion, especially when she suggests that, for tomorrow’s data scientists, studying computer history might be more valuable than straight-up studying “ethics.”

I found “If Then” informative and, in the light of the zest for AI so common today, especially compelling. The Progressive Gauge Recommendation Engine recommends it! — Jack Vaughan



Jack Vaughan

As a computer trade press reporter, online editor and advanced technology writer for over 20 years, Vaughan has covered big data and software development.