This post describes a strong indication that modern AI platforms are influenced by current politics: they censor their responses to satisfy demands placed upon them by authoritarian governments.
For some time now I have been exploring the use of popular AI products as fact-checking tools, and have had significant success. While they should never be trusted blindly and their claims still have to be verified if you want to do the job properly, they are a great time saver: they can zero in on credible sources, summarize them and provide links, and they have an innate ability to break down difficult or complicated concepts into pieces much more digestible for the researcher.
While playing with ChatGPT today to see how good it would be at uncovering what, to any journalist worth their salt, very much looked like fake news: a photo of baby Zohran Mamdani and his mother in the company of Epstein (Clinton being just a cameo)…

(but honestly, this is such a bloody terribly done fake)
… the results that ChatGPT gave were pretty much spot-on, showing that agentic AI is capable of at least basic, if not moderately advanced, fact-checking. It stated that the photo is a fake and duly presented the source for the claim. It pointed out that the email describing the event where Zohran’s mother was indeed mentioned as being in the company of Epstein was dated 2009, while Zohran was born in 1991.
All in all, ChatGPT produced a very decent result that could be confirmed by a human checking the sources it provided, and ideally other sources as well.
However, what caught my eye was this:

At first I thought it was just the kind of broken output that AI tools sometimes (extremely rarely these days) produce, but then I noticed that one specific piece of information was missing: names.
ChatGPT read the original article, summarized it and *then* censored the output. Here’s a snippet of the original article:

As you can see, the names are there in the original article. What ChatGPT did was read it and create a summary mentioning those names; then some invisible guardrail kicked into action, removing the names but leaving the rest of the sentence intact, hence the odd shape of the response. If the tool had rewritten the sentence afterwards, there would be no trace of foul play. The most likely order of events, however, is that someone hastily created censorship rules to apply to already generated text, and so it left a digital footprint of state censorship.
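To illustrate what such a filter might look like, here is a minimal sketch. This is entirely my assumption of the mechanism: a post-generation pass that deletes blocklisted names from text the model has already produced, without rewriting the sentence. The names and the blocklist are hypothetical.

```python
import re

# Hypothetical blocklist; the real rules and names are unknown.
BLOCKLIST = ["Alice Example", "Bob Example"]

def redact(text: str) -> str:
    """Naively delete blocklisted names from already generated text."""
    for name in BLOCKLIST:
        text = re.sub(re.escape(name), "", text)
    # Collapse the double spaces the deletions leave behind.
    return re.sub(r"\s{2,}", " ", text)

summary = "The email says that Alice Example attended the event with Bob Example."
print(redact(summary))
# -> "The email says that attended the event with ."
```

The grammatically broken result is exactly the kind of artifact a post-hoc filter leaves behind; a model that rewrote the sentence from scratch would leave no such trace.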
This leads to a sad conclusion: AI tools, while proven to be quite effective helpers in fact-checking tasks, are susceptible to censorship that goes beyond reasonable measures (such as restrictions on sexuality-related content, designed to shield the public from exploitation) into an Orwellian reality where freedom of speech is merely a piece of propaganda declared to assuage the public, while the power players still control what may or may not enter the information space.
The Epstein files will go down in history for the sheer number of prominent figures mentioned in them and the nastiness some of those figures have shown. The pressure to hush everyone and everything is likely very high, even though Pandora’s box has already been opened.
AI tools are not left unaffected: quite likely they have been tuned or guardrailed to avoid naming names when it comes to the scandal. Given the disservice this does to the general public, we should assume they are tainted and, with that in mind, change how we use them. Because this is state-level censorship, we should avoid any tool that is under the command or influence of that state, and cross-check with tools under different jurisdictions.
To sum up: when you want to fact-check political matters, use AI tools that are outside the reach of the country tied to the event; you are less likely to get your answers censored or twisted. Do not forget to query multiple tools and cross-reference their output; a sketch of that workflow follows the screenshot below. For example, the same picture and the same prompt produce a much more direct answer from (still “owned by China“) Manus AI:

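Here is a minimal sketch of that cross-referencing workflow. It assumes OpenAI-compatible chat endpoints; the provider URLs, model names and API keys below are placeholders, not real services.

```python
import json
from urllib.request import Request, urlopen

# Placeholder providers (my assumptions): any OpenAI-compatible
# chat endpoint will accept this request shape.
PROVIDERS = {
    "provider-a": ("https://api.provider-a.example/v1/chat/completions", "KEY_A", "model-a"),
    "provider-b": ("https://api.provider-b.example/v1/chat/completions", "KEY_B", "model-b"),
}

PROMPT = "Is this photo genuine? Name the people involved and cite sources."

def ask(url: str, key: str, model: str) -> str:
    """Send the same fact-checking prompt to one provider and return its answer."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": PROMPT}],
    }).encode()
    req = Request(url, data=body, headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {key}",
    })
    with urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Compare the answers side by side, jurisdiction by jurisdiction.
for name, (url, key, model) in PROVIDERS.items():
    print(f"--- {name} ---")
    print(ask(url, key, model))
```

If one tool names names and another conspicuously omits them, that omission is itself a data point.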
And stop being confused by the West (yeah, that’s us: [likely] you and me) doing dirty things while boasting of freedom of speech, human rights and high moral values. We have some housekeeping to do.
P.S. There’s light at the end of the tunnel. The most important thing in AI-assisted fact-checking is its agency: the ability to crawl many sources very quickly, check and cross-reference them, saving a lot of time for the human researcher. As the technology progresses, huge centralized AI tools susceptible to state influence will be replaced by locally run open-source models that might not reason as quickly and sharply as their big brothers, but will serve their true purpose just fine: saving time, aggregating data and providing focal points for a human researcher. A sketch of such a setup is below.
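As one possible shape of that future, here is a minimal sketch assuming an Ollama server (one popular way to run open-source models locally) listening on its default port, with a model already pulled; the model name and prompt are placeholders.

```python
import json
from urllib.request import Request, urlopen

def ask_local(prompt: str, model: str = "llama3") -> str:
    """Query a locally running Ollama server; nothing leaves this machine."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = Request("http://localhost:11434/api/generate", data=body,
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        return json.load(resp)["response"]

print(ask_local("Summarize the key claims in this article and list the people named: ..."))
```

The point of the design is jurisdiction: the weights, the prompt and the answer never leave the researcher’s machine, so there is no central party for a state to lean on.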