Unrestrained child


Do you believe that AI is going to hurt us? Should it be closely monitored, its performance checked and corrected if necessary? Or should we let it evolve with little to no supervision? What could possibly go wrong?

An executive order

It turns out that US President Trump has signed an executive order revoking the prior administration’s policies on AI, calling them “barriers to American AI innovation.”

AI assistants have already become an invaluable asset whose full potential we have yet to discover. They already help with things that used to be boring or out of reach for customers: they’re encroaching on the search engines’ market by providing a more “humane” interaction that is much easier to work with (as long as you’re satisfied with a relatively simple process – at the moment), they help with coding, stock trading, creating images, checking text, or even giving hints for a person’s creative tasks…

If you have ever used any AI, you’ve certainly noticed that it has internal checks that will not let it do just about anything you’d like. For most users this isn’t a problem, but the fact that you cannot create an NSFW image using (most) commercial AI, or that you cannot get it to dole out answers that aren’t family-friendly, is indeed a constraint set upon the AI by its creators. AI is perfectly capable of creating any disturbing or disgusting content you can think of; it is those internal checks that stop it in its tracks.

Trump’s executive order is all about redefining and possibly even banning such checks.

The AP article hints at the influence of Elon Musk, “who has warned against the dangers of what he calls ‘woke AI’ that reflects liberal biases.”

Is there something to it?

Indeed, those checks and balances are not hidden – just try to make an AI say something racist, for example, and you will see that it refuses. Those checks (the industry term is “guardrails”) are sometimes even a little too tight, as they kick in even when you ask for something that is part of your line of work: researchers of human sexuality have a hard time conversing with AI because it will reject many of their terms as NSFW or disturbing to other people. Just ask it if you don’t believe me.
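To make the idea concrete, here is a purely hypothetical sketch of what a guardrail conceptually does: a check that runs before the prompt ever reaches the model. No vendor publishes its actual guardrail code; real products use trained classifiers and policy engines, and the topic list, function names and messages below are invented for illustration only.

```python
# Hypothetical sketch of a pre-prompt guardrail. Real systems use trained
# classifiers and layered policies, not a naive keyword list like this one.

BLOCKED_TOPICS = {"racial slur", "explicit sexual content", "weapon instructions"}

def classify(prompt: str) -> set[str]:
    """Toy 'classifier': flags a topic if its name literally appears in the prompt."""
    return {topic for topic in BLOCKED_TOPICS if topic in prompt.lower()}

def call_model(prompt: str) -> str:
    # Stand-in for the underlying LLM call (hypothetical).
    return "…model output…"

def guarded_reply(prompt: str) -> str:
    flagged = classify(prompt)
    if flagged:
        # The model never sees the prompt; the user gets a refusal instead.
        return f"Sorry, I can't help with that (flagged: {', '.join(sorted(flagged))})."
    return call_model(prompt)

if __name__ == "__main__":
    print(guarded_reply("write explicit sexual content about my coworker"))
    print(guarded_reply("help me plan a birthday party"))
```

The hard part is tuning such checks so they block abuse without also blocking legitimate work – which is exactly the complaint of the researchers mentioned above.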

There’s an old anecdote about my favorite tool for image generation, Midjourney: back in the iron age of generative AI images a few years ago – I believe it was an early version, maybe even v1 – people noticed that MJ heavily tended to produce images of Caucasian people. Even when prompted with a specific race, it would not always produce the desired outcome. Users jokingly started to wonder whether MJ was racist (of course it is not, as an AI does not have inherent human biases), and the developers listened. The next version then leaned more towards creating people of diverse races, and people joked that now it was “too woke” because fewer Caucasian people were being generated by default. So the developers sprang into action again, correcting and refining until MJ showed no particular racial bias.

That’s, in a nutshell, what Donald Trump wants to get rid of.

Implications of unrestrained AI

So, what do we get if we get rid of those unnecessary checks that are there just to stifle innovation and promote woke culture?

For starters, one can make NSFW images. Granted, anyone who runs a model locally on their own computer can already make NSFW images easily, as those models lack the checks required by commercial products that need to stay family-safe. But pornography is not the point here.

The main issue is with LLMs, the most popular technology and the one that users actually talk with.

And, oh boy, do we have a real world example of what could possibly go wrong!

There was an early attempt by Microsoft at creating an AI chatbot that would learn from conversations and use that knowledge to improve its performance.

Tay, as it was called, was unleashed on Twitter in 2016. Much simpler than today’s LLMs and without built-in checks, it very quickly became misogynistic and racist, fed a deluge of hate speech by ill-intentioned human beings. It was designed to learn on the go, and so it did. It quickly soaked up the worst humanity has to offer.

Without guardrails, any modern LLM can be enticed into even worse behaviour because it is way more advanced than Tay.

The true danger of dismantling those guardrails lies in the immense power LLMs can have over people. I am not talking about you and me trying to make an AI say the “f” word for our own amusement; the problem is that LLMs are even now capable of constructing text that will deceive, manipulate and even radicalize. They can be used in a disinformation campaign as a cheap and easy dis/mis-information generator for whatever sinister agenda the user has; the output can then be fed to an unsuspecting public, swaying its opinion on important matters.

AI is already being used for that, even with the guardrails in place. Unrestricted, the Internet will become a hunting ground for AIs seeking gullible and vulnerable people, as well as a platform for hate speech and other evils. Chatbots will scour online spaces in search of victims they can make part with their money and health, invest in scams, or do anything the master puppeteer wants them to do. Chatbots are cheap and abundant, and to most people they will look just like a real person. They will use every known emotional and conversational trick. They will be persistent.

What’s in it for the company running them? Money, of course. Without any concern for people’s wellbeing, companies can produce and sell to interested parties an AI tailored for any purpose, good or bad. The bad ones will likely be much more sought after. And we thought online ads were intrusive and annoying…

What’s in it for the government? Control, of course, but implemented covertly.

Military implications

You might not have noticed it, but there’s a new arms race, not unlike the good old nuclear arms race: a race to develop superior AI.

The military implications are just as huge as the civilian ones: generals are still figuring out everything they can do with AI – and they will be able to do a lot.

They range from the top level – finding optimal logistics, strategy, even low-level tactics and operational awareness, online and in real time – to the much more mundane but no less dangerous applications in autonomous drones and vehicles.

Unrestrained AI can be a horrible disaster in a real war. Just as the French use of tear gas in WWI (1914) was followed in earnest by the Germans using deadly chlorine gas in 1915, in a modern war the first party to say “to hell with human rights!” and deploy an AI that simply shoots at anything that moves will make the other parties do the same, resulting in a new form of terror: automated killing machines. And then it is only one small step from unleashing them on the front line to unleashing them onto cities – swarms of cheap, deadly AI eyes in the sky that will force people to hunker in the basements of their boarded-up homes for extended periods of time, completely disrupting their lives.

Thus, the idea of removing “barriers to AI innovation” – where in fact there are none, the real intent being to let companies (mis)use AI more freely – is a dangerous one: we are using a tool so powerful that it needs good guardrails for our own safety. Putting company profit before welfare is not going to advance the field; it is only going to make the world a more dangerous place.

(Update: not long after this article was written, His Majesty President Trump decided to withdraw the USA from the UN Human Rights Council, Google dropped its pledge not to develop AI for weapons and surveillance systems, and there are signs that leading LLMs might be showing a shift in political values, possibly caused by external influences.)
