AI critic


I’ve just recently created a Writers Club, a copy of a club I was a member of about nine years ago. As the first topic I suggested “scientific discovery” (and that for a reason, but that’s another story); the rules are simple: on a given topic, create a short story in any style you see fit. I chose a silly little take on the Flintstones.

The story is just five pages long and I might post it on this blog later on. I’m here to show you something else…

Google’s Notebook LM is a fascinating AI tool that can take any material and create a podcast about it. The podcast is a fairly standard dialogue between two speakers, a man and a woman, and they always use the same voices. This is somewhat more limiting than other generative speech AI services that offer tens or hundreds of different voices, but that is not the important thing. The important thing is that Notebook LM created a podcast that was almost spot-on in terms of the accuracy and relevance of the ideas expressed in my story.

It is an astonishing feat for AI, albeit not an unexpected one. This is not revolution, this is evolution. And it sounds seriously convincing. Uncanny.

Well, hear it for yourself.

Update: I’ve created another story and let the AI discuss it. The first one was a relatively straightforward funny piece, but this one is much more mature, and it asks the reader to use theory of mind and understand cultural nuances. When I submitted the story for the AI to analyze, I expected it to fail at figuring out the true plot of the story. It did not disappoint me: there are 17 minutes of talk about the narrator, and even though the AI pointed out a simple but significant turn of events that actually defines the whole story, it failed to understand the gravity of it. The end result is a relatively lengthy discussion (it could easily be half as long, given the time spent reiterating the same points) that incorrectly focused on the narrator’s selfish motives. All that being said, the analysis is not that bad and it does get some things quite right, but the main plot twist was outside the scope of the AI’s understanding, as theory of mind is still something AI cannot emulate beyond a superficial level.

Listen to it here.

Update: it took some prompting, in the sense of “take a look at exactly this thing there, tell me how it relates to the protagonists”, to make the AI produce a result that is much closer to how a human being would see it.

It is worth noting that this is one of the ways we can spot students cheating with AI: if a task requires understanding and application of theory of mind, the AI is more likely to fail at it (at least for the intermediate future), because current models are, relatively speaking, incapable of “twisting” their own process to accommodate variables that do not emerge from within their own framework.
