Although ChatGPT and other AI systems have received massive media attention since late 2022, I only began dipping my toe in those waters about a year ago.
At that time, I released a series of chatbots for many of the individual authors on our website, each one built by focusing the AI upon the total body of content we’d published from that particular writer.
These notably included the Ron Unz Chatbot, drawing upon roughly a million and a half words of my own articles, and some of our readers have since used it to simulate my reactions on various issues. For example, earlier this month a commenter used this Unz chatbot to produce a 10,000-word piece on some aspects of McCarthyism, which he published as a very long comment.
- American Pravda: Laminated Mouse-Brains and the Magic of AI
Ron Unz • The Unz Review • June 10, 2024 • 15,000 Words
- Questioning Our Writers, Whether Living or Dead
Ron Unz • The Unz Review • June 15, 2024 • 6,600 Words
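The articles linked above describe these chatbots in detail, but for readers curious about the basic mechanics, here is a minimal sketch of how a chatbot grounded in a single author's writings might be assembled. It assumes the OpenAI Python SDK, a hypothetical articles/ directory of that author's pieces as plain-text files, and a crude keyword-overlap retrieval step; the model name and prompts are placeholders, and the site's actual system may work quite differently.

```python
# Illustrative sketch only: a chatbot grounded in one author's published articles.
# Assumes the OpenAI Python SDK (`pip install openai`) and a hypothetical
# articles/ directory containing that author's pieces as .txt files.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def load_paragraphs(folder: str = "articles") -> list[str]:
    """Split every article in the folder into paragraphs."""
    paragraphs = []
    for path in Path(folder).glob("*.txt"):
        paragraphs.extend(p.strip() for p in path.read_text().split("\n\n") if p.strip())
    return paragraphs


def most_relevant(question: str, paragraphs: list[str], k: int = 8) -> list[str]:
    """Crude keyword-overlap retrieval; a production system would likely use embeddings."""
    q_words = set(question.lower().split())
    scored = sorted(paragraphs, key=lambda p: len(q_words & set(p.lower().split())), reverse=True)
    return scored[:k]


def ask_author_bot(question: str) -> str:
    """Answer a question using only excerpts drawn from the author's own articles."""
    excerpts = "\n\n".join(most_relevant(question, load_paragraphs()))
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": "Answer in the voice of this author, relying only on the excerpts provided.\n\n" + excerpts},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(ask_author_bot("What is your view of the Zebra killings?"))
```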
Several months ago, I added another AI feature, having the ChatGPT system automatically produce short summaries and outlines for every article we published that was longer than 1,000 words. This allowed readers to get a rough sense of pieces they lacked the time or interest to read in full:
- Providing AI Summaries of Website Articles
Ron Unz • The Unz Review • February 14, 2025 • 1,400 Words
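That summarization feature is simple in outline: any article over the 1,000-word threshold is handed to the ChatGPT API with a request for a short summary and outline. A minimal sketch along those lines, again assuming the OpenAI Python SDK, with the model name and prompt as placeholders rather than the settings actually used on the site:

```python
# Illustrative sketch only: auto-summarize articles longer than 1,000 words.
# Assumes the OpenAI Python SDK; the model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

WORD_THRESHOLD = 1_000  # per the description above, shorter pieces are skipped


def summarize_article(title: str, body: str) -> str | None:
    """Return a short summary plus outline, or None if the article is too short."""
    if len(body.split()) <= WORD_THRESHOLD:
        return None
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": "Write a one-paragraph summary of the article, then a brief bullet-point outline."},
            {"role": "user", "content": f"{title}\n\n{body}"},
        ],
    )
    return response.choices[0].message.content
```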
However, with hundreds of billions of dollars of ongoing capital spending on AI, these software systems have continued to rapidly improve, and I recently learned that OpenAI had released a new and especially powerful version of ChatGPT called Deep Research.
Whereas ChatGPT and most other chatbots are designed to respond within seconds, the Deep Research AI may spend up to 30 minutes working on an assigned topic, using that time to produce remarkably advanced results. For example, on a standard benchmark test the GPT-4o system scored only 3.3% and DeepSeek’s R1 model did much better at 9.4%, but Deep Research rated a vastly superior 26.6%.
Once I began testing the Deep Research AI, these numbers seemed quite plausible to me. I discovered that the system can very effectively be used to fact-check long, complex articles of the sort that I often write. After a couple of such tests, I was so impressed that I have now had dozens of my American Pravda articles fact-checked by Deep Research.
I was hardly surprised that such a powerful new AI system was also very resource-intensive, so basic ChatGPT users have been limited to twenty-five Deep Research runs per month, with only the first ten being at full power and quality. Moreover, since the system is so new, these runs sometimes fail, with those failures still being counted towards that monthly limit.
But despite those minor inconveniences, I was absolutely astonished by the analytical quality of what Deep Research produced, results that fully validated the dramatic claims made in media accounts.
It’s been widely recognized that all of these recent AI systems have easily blown past the decades-old “Turing Test” of machine intelligence, but the output of the Deep Research AI was entirely on a different level. Many or most of its full-power analysis runs seemed as if they had been written by an exceptionally intelligent individual who had read nearly everything published on the entire Internet and also had almost total recall.
Although I’m still not entirely convinced that the hundreds of billions of dollars currently being invested in AI will ever produce an adequate financial return, the AI systems created are certainly one of the most amazing things I’ve ever encountered, being closer to magic than software technology, and doing things I never would have believed possible in a million years. If some company had invented a practical teleportation device, I probably would have regarded such a product as much less remarkable.
Regardless of any future advances, I have already found the current fact-checking capabilities of the Deep Research AI extremely useful, especially with regard to the very controversial and non-mainstream content of so many of my own articles.
As a trivial example of the power of the system, it easily caught several small factual mistakes that I’d made in some of my articles, mistakes that for many years had escaped my own notice and that of all my readers. Reacting in exemplary fashion, Deep Research flagged these as minor, careless errors that should be corrected while also dismissing the notion that they seriously detracted from the broader accuracy of those articles.
At the top of each fact-checking run, I have included a selection of the AI’s remarks together with my own responses, and here’s an example of one such exchange regarding my first 2018 article on the JFK Assassination:
AI: “Thomas Burnett” writing in a French newsweekly – this appears to be a reference to journalist Thomas G. Buchanan…His minor error is the name “Thomas Burnett” – the person in question was Thomas G. Buchanan, who wrote in L’Express.
Unz: Corrected.
Deep Research found another such careless mistake in an article I had published a couple of weeks later on some aspects of World War II:
AI: Claim: Unz recounts that in July 1940, Britain attacked and sank its former ally’s fleet at Mers-el-Kébir, killing “up to 2,000 Frenchmen,” an event he likens to Pearl Harbor in surprise and effect…this is an overestimate; most sources put fatalities around 1,200–1,300.
Unz: Corrected.
But the vast majority of the analysis text produced by the Deep Research AI, totaling around 400,000 words across nearly fifty fact-checking runs, provided carefully written critiques of my articles, focusing on factual information, logical inferences, and even how my sources were represented. Once again, I found it absolutely astonishing that this material was written by an AI rather than an extremely intelligent human researcher with oceans of factual information at his fingertips.
Articles that have such fact-checking runs available now carry a button labeled “AI Fact Check” near the top, and clicking it opens a saved, annotated copy of the Deep Research fact-checking run in a new browser tab.
Although there were many differences of opinion or other disputes, the Deep Research AI was so powerful and accurate that I was actually surprised at its occasional errors, much as I would be surprised if a super-powerful, all-seeing magical genie sometimes made such mistakes.
Indeed, many of the AI’s own errors turned out to be claims of mistakes in my work that proved unfounded.
For example, the first of my articles that I tested mostly dealt with the long-suppressed story of the “Zebra killings,” a large wave of random racial murders that occurred in San Francisco and other parts of California during the early 1970s. Deep Research seemed to find a number of significant errors in my article, but all of these claims turned out to be mistaken, apparently because the AI lacked access to the full texts of the books that I had relied upon.
AI: The first Zebra murders in SF occurred in October 1973, not 1972…The exact date “1972” is off by a year. So this claim is partly accurate…
Unz: The first SF Zebra killings described in detail did indeed take place in October 1973, but according to p. 34 of the Clark Howard book, some 270 other racial killings of whites in California had already occurred by that date.
AI: This “9 kills” rule appears only in Unz’s summary (and likely the extremist blog, see below). We could find no historical record of a formal quota of nine. Thus this claim is inaccurate and unsupported.
Unz: The requirement to kill nine white men first appears on p. 28 of the Clark Howard book and in numerous other places in the text.
AI: There is no evidence that police officially estimated “over 70” murders or that Howard thought “270.” In fact, Clark Howard’s book details 23 victims identified in San Francisco during 1973–74. The figure “270” appears only on an extremist blog (the Zebra Project)…Unz’s numbers wildly exceed documented totals. We conclude this claim is inaccurate/hyperbolic.
Unz: The following statement appears on p. 34 of the hardcover edition of the Clark Howard book:
At that time, there were fifteen accredited Death Angels in California. To achieve their collective membership, they had already quietly killed throughout the state 135 white men, 75 white women, 60 white children…
Thus, the total number of alleged white victims was 270 by that point.
AI: “The Zebra killings were the single greatest instance of racially-motivated killing in modern American history… [with] possibly greater victims than the combined total of all other such examples over the last 100 years.” Even if we accept 73 possible Zebra victims, numerous other racial violence events in U.S. history (e.g. massacres like Wounded Knee 1890, Tulsa Race Massacre 1921, etc.) involved far more deaths, though those predate the “last 100 years”…We mark this inaccurate/unsupported.
Unz: I explicitly said “the last 100 years,” and in any case most of the contrary examples cited were not necessarily racially-motivated killings, but rather military massacres or violent, deadly riots.
AI: “To this day, except on obscure websites, the Zebra killings have been virtually erased from public memory”…David Talbot’s 2012 Season of the Witch does include the story…The claim that only a “white supremacist blog” records it is false…So this claim is exaggerated/misleading.
Unz: I specifically mentioned that it was discussed in the Talbot book, but emphasized that Talbot had been forced “to cite an obscure white racialist blogsite” as one of his only sources of information.
AI: Unz claims Talbot “included a discussion of the Zebra killings” and attributes to him citing the extremist blog. We could not access the book’s text. If Talbot did mention Zebras, he may or may not have cited that blog; it’s unconfirmed. Thus Unz’s statement about Talbot is hearsay. It serves his narrative of a “media blackout.” Because we cannot verify, we label that use as unsubstantiated speculation.
Unz: On p. 420 of his book, Talbot cites that extremist blogsite as his main documentary source for the Zebra killings.
- American Pravda: The KKK and Mass Racial Killings
Ron Unz • The Unz Review • September 19, 2016 • 3,200 Words
- Fact-Checking American Pravda: The KKK and Mass Racial Killings • 4,800 Words
A similar sort of serious AI error appeared regarding another one of my 2016 American Pravda articles, describing the nefarious activities of British intelligence agents during the 1940 American presidential election:
AI: British intelligence “had played a massive hidden role in getting America involved in WWII” and may have murdered a top Republican Party official while secretly steering the GOP nominating process…the specific claim of murdering a top Republican official has no solid evidence. Unz phrases it as Cockburn’s uncertain phrasing…Mahl’s book (and other sources) do not document any confirmed British assassination of a U.S. politician. What appears relevant is that a Republican convention manager (a Taft supporter) died suddenly in mid-1940 (just before Willkie’s nomination), which Mahl implies was suspicious….The Unz claim seems to exaggerate Mahl’s hint that British agents were even “warned” murder might be needed…but the specific “murdered GOP official” assertion is unsubstantiated speculation.
No mainstream account reports outright murder plots. Conclusion: Unverified/unsupported by mainstream sources. Willkie’s nomination was unusual, but the alleged sabotage is not documented outside Mahl/Unz. At most, there was a sudden death of a Taft organizer (not proven as foul play). We rate this claim inaccurate (no credible evidence of British spy rigging the convention in the wild manner described).
Unz: On p. 160, Mahl describes the sudden death of Ralph E. Williams, the pro-Taft Republican Arrangements Convention Manager, that “allowed the Willkie forces to take control of the mechanics of the convention” and thereby nominate their candidate. Later in that same paragraph, Mahl explains that the British operatives recruited to gain control of the Republican Party and nominate Willkie were warned that they “mustn’t be afraid of murder.” In footnote n. 19, Mahl explains that he unsuccessfully attempted to gain access to the autopsy report on Williams, which he believes “deserves a thorough review by a forensic specialist knowledgeable in the intelligence techniques of the time.” All of this supports my careful phrasing that Mahl suggests “that British spies…very possibly murdered a top Republican Party official.”
Thomas Mahl is a reputable historian and his monograph was his Kent State Ph.D. dissertation, later published in a fully respectable academic series, so Mahl himself is certainly a mainstream source. Admittedly, his research seems to provide the only coverage of these extremely controversial claims, but that’s another matter entirely. Indeed, the AI itself mentions that “Trusted histories (e.g. Gil Troy’s biography of Willkie) note that Willkie barely campaigned and won, surprising many.”
- American Pravda: Alexander Cockburn and the British Spies
Ron Unz • The Unz Review • August 29, 2016 • 2,700 Words
- Fact-Checking American Pravda: Alexander Cockburn and the British Spies • 4,000 Words