prAIvate tutor


(TL;DR update: you can listen to AI discussing this article here)

Everyone knows it: AI is here to stay. It’s still clunky, power-hungry, and unwieldy – but it’s improving fast. We can certainly look forward to having a very decent local LLM (Large Language Model, the hyped variety of AI we use today) running on our computers or phones.

Don’t wait for the next big thing – start using it today. It might mean a non-negligible subscription fee if you don’t have a really beefy computer to run AI locally, but that money can be very well spent, especially if you have school kids.

Using AI in education is still highly controversial because it is a powerful tool that can be misused and might be difficult to trace. A big part of the issue is that the educational system is absolutely unprepared for such a powerful tool: teachers do have a point when they say it will help kids cheat in a lot of (creative, I’d say) ways, and that it will make assessing their true knowledge harder.

That is true.

But let me remind you of something else: another disruptive technology that emerged decades ago. Teachers were afraid of it then just like they’re afraid of AI today. One argument was that it would make their pupils intellectually lazy and that they would never properly learn how to solve problems. And my favorite argument: “They believe they will always have that device with them”.

Calculators.

Not the fancy ones with color displays and graphics and programming languages and hundreds of built-in mathematical functions. Calculators like this one:

(and just to digress, the db 800 was the first calculator made in Europe, produced by a company in Croatia (then Yugoslavia))

They were very simple devices capable of just four basic operations – but they were perceived as a menace to education, a dystopian technology that would cause kids to forget how to calculate and ultimately lead to the doom of humanity.

Time has proven those teachers of yore wrong: not only do kids still know how to calculate, but we now have much more powerful tools, and nobody argues that they are ruining civilization.

I believe the same will happen with AI: this is a tool that frees up your mind for better thinking. Yes, it is so advanced that you might offload a lot of critical cognitive functions onto it, and some people are likely to do that: “let the computer think for me.” Some other people will ask it to read their palms. But for the vast majority out there, it will be a helpful assistant that fetches information for you, be it a recipe, a weather update, or a quick reply to an email.

And it certainly can be used to augment your knowledge or your learning process. With a little imagination, an AI can be used to facilitate faster and even deeper learning. The best thing is that this tool is so malleable: one can shape it into whatever is needed at the moment, then easily reshape it for some other task. It’s like having a private tutor in your home.

Well, almost.

There’s still a big difference between a real teacher and a piece of software. There’s still no AI that can replace a living person in all aspects of the relationship between a tutor and a pupil.

The biggest difference is that AI isn’t really trained to work with kids of different ages. While AI is very polite, positive, and helpful, it cannot easily adapt to the individual needs of a pupil – something a real teacher does instantly: gauge the level of comprehension and adapt the method on the fly to suit the pupil’s knowledge or mood at that moment.

Do not forget: AI is a tool. It is not a teacher. AI is a beautiful mirage – a complex tangle of statistical computations. It cannot think, is not sentient, and cannot truly understand things… but it can parse your input and create output that will amaze you with its lifelikeness. Still, it’s a machine. It is not alive. It does not think. It’s an insanely complex db 800, nothing more than that.

If you’re about to try out the tricks described in this article, please keep this in mind and make sure your child understands it as well: AI isn’t a teacher – it’s a highly advanced program. It can mimic human interaction, but it lacks understanding, emotion, and awareness.

AI makes mistakes – so-called ‘hallucinations.’ It handles conversation well, but struggles with much of what we take for granted. Mathematics is harder for it, and relatively complex logic or reasoning can throw it off track. Sometimes it just invents things out of thin air. Because of that, information received from AI should always be checked for accuracy.

Having dealt with the [non-existent] soul of the machine, let’s see how it can help us with traditional education.

For starters, an AI tutor is incredibly cheap. At the time of writing, the monthly cost is about $20 for most LLM tools. That sum can get you many hours of interaction – though it can get quite expensive if you overdo it.

Another nice thing is that an AI tutor is always there – in your computer, in your phone, even in your car. You can use it anywhere, any time. You can communicate by typing on the keyboard or by talking to it. You can start and stop it whenever you want, as many times as you want.

AI is absolutely patient – there’s no spoiled brat with any arsenal of psychological manipulations who could make an AI tutor lose its nerve or give up. AI cannot get agitated or insulted. AI will not quit.

The idea of using it to help my child learn faster and better came to me when I realized, after a few years of using AI to generate images, that I could combine two hot AI tools into one smart educational game: my daughter likes creating AI-generated images. We were lucky to have good tutors in kindergarten, so she learned English at a very young age, effectively becoming bilingual. She can easily hold a conversation with an adult, and she can (to our surprise) read English quite proficiently. Yet she can’t really write, because Croatian and English are very different at the core: whereas in Croatian each letter represents exactly one immutable phoneme, in English things are much more complicated.

So I devised a little game: I instructed ChatGPT (but this should work with many other LLMs) to take her input, check it for syntax and grammar, and, if there are issues, point them out and refuse to execute the prompt until those mistakes are corrected. Only after letting the user fail three times may it suggest a correct prompt.

The magic that does all that is this small set of instructions:

Before executing any instruction, strictly check the user's input for grammar, syntax, capitalization, and punctuation. If errors are found, list them and refuse to execute the prompt until the input is corrected. Do not suggest corrections unless the user makes three incorrect attempts in a row, after which provide a grammatically correct version.

If you haven’t used an AI tool before, you might wonder how you could do that. How do you “program” an AI at the user level?

It’s surprisingly simple: because these tools are designed to hold a conversation, there’s no hidden option to click and no computer code or configuration file to edit. You simply tell the AI your instructions.

That’s it – you can copy and paste the above into the chat window, and it will be accepted and executed by the AI. Some AIs can “remember” this instruction and save it for future use; others might forget it the moment you finish the conversation, and you will have to give the instructions again the next time you “talk” to it.
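For the technically inclined, the gating logic of that directive can also be sketched in a few lines of plain Python. This is only a toy simulation: the `check_grammar` function here is a trivial stand-in for the real grammar check the LLM performs, and all names are my own invention – the point is just to make the three-attempt rule concrete.

```python
# A minimal sketch of the three-attempt rule from the directive above.
# The "grammar check" is a trivial stand-in for what the LLM actually
# does; only the gating logic is illustrated.

def check_grammar(text: str) -> list[str]:
    """Toy checker: flags missing capitalization and final punctuation."""
    issues = []
    if text and not text[0].isupper():
        issues.append("sentence should start with a capital letter")
    if text and text[-1] not in ".!?":
        issues.append("sentence should end with punctuation")
    return issues

def tutor_gate(attempts: list[str], correct_version: str) -> str:
    """Refuse to 'execute' until input is clean; reveal the fix after 3 failures."""
    issues: list[str] = []
    for i, attempt in enumerate(attempts, start=1):
        issues = check_grammar(attempt)
        if not issues:
            return f"Executing prompt: {attempt}"
        if i >= 3:
            return f"Here is a corrected version: {correct_version}"
    return "Please try again: " + "; ".join(issues)

print(tutor_gate(["draw a cat", "draw a Cat", "draw a cat!"], "Draw a cat."))
# → Here is a corrected version: Draw a cat.
```

The same shape – check, refuse, and only reveal the answer after repeated failures – is what the plain-language instruction asks the AI to enact.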

What happens once you instruct the AI to follow those rules?

(please note that this is a simulated conversation; my daughter knows better than this – yes, I had to say this because she will eventually stumble upon her father’s work)

The AI will wait for the prompt (user input), check its syntax and grammar, and then provide feedback. Both gross and subtle errors will be pointed out, and the user will be encouraged to give it another try. If the user fails three times, the AI will provide a correct answer and ask whether it should create the image anyway. This rule ensures that the user (a child, or an impatient adult) will not lose interest in the process after many mistakes, since the image will be created eventually. And by asking the user to repeat the corrected sentence, the AI reinforces the learning process.

The nice advantage is that this process is universal: once the grammar-check directive is in place, any conversation with the AI [within that chat window] will be checked and corrected in a similar manner, not just prompts asking for an image. The directive can then be merged with any other directive to create a multi-layered tool that helps with, and trains, different things at the same time.

What else can be taught? Basically, anything that resembles conversation. Let’s create an example of simple CS training for binary arithmetic:

You are a CS teacher and will now ask me the basics of binary arithmetic, then check my answers for validity and provide feedback.

The AI will jump into the role of a CS teacher and start grilling the user with binary mathematics:

There’s a very interesting observation to make here: even though I provided a correct answer to a question, the AI mistakenly marked it as incorrect, only to correct itself later on while offering an explanation of how to calculate the subtraction. It even found the reason it was confused by my answer: I did not include a leading zero – something a human teacher would spot instantly.

This very nice example gives us insight into how an LLM works: apparently it proceeds step by step, focused on the immediate task at hand, only to recognize my answer as correct later, after doing more math. Somehow this actually adds to its ability to simulate a conversation like a real person: we’re used to computers providing either a straight, emotionless answer or an error. Correcting oneself is eminently human.

More importantly, this serves to remind us that this AI (an LLM, to be more precise) is not a know-it-all, superior mind: it can fail miserably at more complex tasks that need deep domain knowledge – or at simple mathematics. This will eventually be ironed out, but for now it is a good reminder that we really do have to check the information the AI gives us.

Here’s a nice example of AI blindness: notice how 1010 in the example is shifted two positions to the left, not one. This is a simple misalignment, but a clear oversight by the AI, and one likely to confuse the student.

We can tell it to re-check the alignment:

And it will fail miserably again. Here we have demonstrated a weak point of current AI solutions. Keep that in mind and check the process from time to time. This is the main reason your child must understand that this is just a fancy computer program and that it can make mistakes: if something like this happens, you should be ready to step in as a parent and work with your child to reach the correct answer. Or you might want to take up the challenge and make the AI give the correct answer with some helpful guidance from your offspring:

You’ll also find that AI can’t do everything equally easily or reliably. It’s still pretty bad at mathematics, sometimes struggling with basic mathematical operations. That is specific domain knowledge requiring a different approach from the one LLMs use, and it will likely be resolved in the near future by multi-agent solutions that use two or more separate AI agents, each trained to be highly efficient within its focused scope of functionality.
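Until then, a few lines of Python are enough to double-check the AI’s binary arithmetic yourself, using nothing but the language’s built-in base-2 support:

```python
# Double-checking binary arithmetic with Python's built-ins:
# int(s, 2) parses a binary string, bin() formats a number in base 2.

a = int("1010", 2)   # 10 in decimal
b = int("0111", 2)   # 7 in decimal

print(bin(a + b))    # addition:    0b10001 (17)
print(bin(a - b))    # subtraction: 0b11    (3)
print(bin(a << 1))   # shift left by exactly one position: 0b10100 (20)
```

A quick check like this is a good habit to teach alongside the AI session itself: trust, but verify.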

If we stick to things that AI understands better, we can do very nice things. It is amazingly easy to gamify boring school tasks. For example:

You're playing Quiz host.

In each round, the user will be shown two historical English or British kings.
Each king comes with a short description – but no dates.

The user's task is to pick the king who began his reign earlier.
After each guess, the correct answer and both kings’ reign years will be revealed.

This prompt will create an interesting quiz game that helps the kid remember the reigns of monarchs; you can substitute some other dynasty for the British monarchs, tailored to the country of your choosing.
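The core of that quiz round can be sketched offline in a few lines of Python. This is only an illustration of the logic the prompt describes, not anything the AI generates: the four kings and their reign start years below are a small hand-picked sample.

```python
# A minimal offline sketch of one quiz round: show two kings,
# take a guess, and reveal both reign start years afterwards.
REIGNS = {
    "William the Conqueror": 1066,
    "Henry VIII": 1509,
    "Elizabeth I": 1558,
    "Victoria": 1837,
}

def play_round(king_a: str, king_b: str, guess: str) -> tuple[bool, str]:
    """Return whether the guess picked the earlier reign, plus the reveal."""
    earlier = king_a if REIGNS[king_a] < REIGNS[king_b] else king_b
    reveal = f"{king_a}: {REIGNS[king_a]}, {king_b}: {REIGNS[king_b]}"
    return guess == earlier, reveal

correct, reveal = play_round("Henry VIII", "Victoria", "Henry VIII")
print("Correct!" if correct else "Not quite.", reveal)
```

The LLM version does exactly this, plus the descriptions, the banter, and the scorekeeping – which is precisely what makes it worth delegating.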

What would this quiz look like? Slightly different from one LLM to another, as they “comprehend” your instructions in somewhat different ways. Let’s see how ChatGPT creates the quiz:

And this is how Gemini would create the quiz using the same prompt:

As you can see, ChatGPT created a significantly more gamified quiz with colorful icons and a nice trivia-like layout, while Gemini produced more of a scholarly question. However, both LLMs created a quiz that follows our prompt. It is up to the user to decide which layout is better (and don’t forget that you can fine-tune it further to add your own touches) – or which one is cheaper to run.

Let’s raise our expectations and see if we can use AI to help us memorize a defined set of data: a set of pages from a book. It would be very useful to focus learning on the exact information that was given to the student. The general outline of this exercise would be:

1. attach the PDF of the content you want to practice
2. instruct the AI that it should question the student about the uploaded content

For this example I uploaded a short story I wrote and let the AI ask me about the story:

Using a story for the example serves two purposes: it shows how AI can comprehend a variety of data, and how it can be used to encourage deeper thinking.

I used my own story to test how accurate the AI is at interpreting meaning and actions in a document it has digested. For the very basic questions it did a remarkably good job, almost on par with a human tutor.

There’s a slight error in comprehension regarding time, because the AI misinterpreted this part of the story:

“It’s a scientific device that can be used for a number of things”, Blarney began his lecture, ”you can see the time, you can measure distances, you can use it to create a standard measure of any two or even three dimensional object…”

What any human tutor would notice immediately is that the only way to use a gnomon to tell the time is to stick it in the ground and observe the shadow. The AI failed to notice this little nuance and interpreted the text literally: you can use a gnomon to measure time. This is not incorrect per se, as you really can measure time by tracking the change in the shadow’s length [with another gnomon] over time, but that level of thought is out of reach for today’s LLMs. Here it simply made a small mistake that a human tutor would likely avoid.

In the second round I deliberately made a mistake, and the AI corrected me by citing the exact place in the text, with a correct explanation of why my answer was wrong.

Round 3 was pretty much nailed: its handling of the incorrect answer about Fredd’s worries showed that the AI comprehended the story exactly as planned: Fredd was indeed worried about science changing society in a way he might not like. As I wrote the story with exactly that punchline in mind, I can say that the AI digested it really well.

To get a better insight into this process (and to save some space), you can follow these steps:

  1. Read the story
  2. Read the conversation with AI:

This is how you can upload some text and let the AI churn through it and be of some help. This isn’t limited to literary works – it could be anything. But be careful: AI still struggles with specialized knowledge. Complex math, or literary analysis requiring introspection and theory of mind, is often beyond its reach. The AI might or might not fail at such a task; we can never be sure.

By now I think you have a pretty good grasp of how to talk to an AI so it can help you or your kids with school work, so let’s summarize:

  1. You (and your child) should keep in mind that this is just a glorified computer program and no Deus Ex Machina – it is not sentient;
  2. This isn’t a replacement for a human tutor: if there’s a persistent issue with learning the school curriculum or particular subjects, this tool is unlikely to help, and finding a human tutor is what you need to do;
  3. You should look at AI as a sort of smart intern: it has wide general knowledge, but you still have to tell it what to do and how to do it;
  4. Be specific: the more specific and less ambiguous your prompt is, the better the chances that the AI will produce a good response;
  5. Use a well-defined prompt structure for the LLM you’re using: in general, this would look like “You are [what you want the AI to be]. Your task is to [clearly explained goals], and you should do it like this: [step-by-step, specific instructions to follow].” You can use bold or CAPITAL WORDS to further emphasize important aspects of your prompt;
  6. Be imaginative: not all LLMs react the same way to your prompts. Experiment with everything until you get a good grip on what works and what doesn’t. You will learn as you go.

Now go out there and be amazed.

(and if you still don’t know how to do it – just ask your child for help)
