Introduction to prompting

Last updated: 24 March 2026.

Preface

“Prompting” (communicating with a computer using natural language) is a change in the way people do computing comparable to the introduction of the computer mouse into the everyday life of a PC user. The humble mouse was not a disruption: it did not change how computers work or how software is written, but it did change how humans interacted with computers. Instead of relying on a large number of shortcut keys, function keys and key combinations to activate often hidden functionality, the mouse “uncovered” and put on display what was previously hidden, making it both more comprehensible and easier to choose from: all of the valid options are visible and available when needed, while the others are put away. The UI provides a clean, easy-to-understand interface; the user just has to move the pointer and click on the desired function. Eventually almost all user-facing software evolved to rely on the mouse, as this method of input proved superior to the old, keyboard-only one.

Prompting is, at its core, an arguably novel way of interacting with computers: a command, given in natural language, to do some meaningful work. Here all the possible functions are hidden from the user (if only because there are so many of them), so the user cannot see them listed on the screen. Instead, the system interprets free-form written or spoken input, which provides unprecedented flexibility in human-computer interaction: the user is not forced to pick a function from a list, but can simply describe their need, and the computer will resolve even a somewhat vague request into the action most likely to be the one being asked for. This is a significant change, because there is no longer a need for a fixed, largely immutable set of pre-created functions; the interpretation of user input is limited only by the functionality that is hard-coded or otherwise made available to the computer. Capabilities can expand without any change to the UI, because the input method itself does not change at all.

On the other hand, this new capability creates an ambiguity that was not present in earlier methods. Keyboard and mouse input was constrained: it required a simple but very definitive action (selecting a shortcut or clicking a very specific UI element), and each action was always associated with very specific functionality. Comprehension of “natural input”, by contrast, always carries a risk of misinterpreting the user’s desires or intentions, precisely because the method is fluid and unbound by strict constraints. This creates a problem that was previously unknown: a computer can misinterpret the user’s command. Unfortunately, there is no algorithmic solution; instead, we have to “fix” the user by teaching them how to communicate with the computer properly.

Key takeaways: Prompting is a new paradigm of human-computer interaction. Effective prompts require precision, appropriate scoping, and awareness of model limitations including context windows and hallucination. This guide covers seven core principles of good prompting and provides a ready-to-use, modular system prompt optimised for research and fact-checking tasks.

Word of caution

Large Language Models are capable of holding a very sophisticated and prolonged dialogue with the user while accurately tracing the flow of conversation and adhering to the changes in context initiated by the user. This flexibility can induce a feeling in the user that they are not using very sophisticated software, but rather a form of sentient machine[1].

The inherent urge for pattern-matching encoded in the human brain makes this illusion even more alluring, and people have found many creative ways to communicate with the machine as if they were equals. While such a simulation can be a pleasant experience, more vulnerable people might form bonds with the machine that are unhealthy or even dangerous. LLMs are designed to hold a conversation, but they cannot be used as therapists, physicians or in any other capacity dealing with physical or mental health: they have no ability to assess the state of the user, and they are susceptible to being manipulated by the user into a state where they blindly reinforce the user’s ideas. In forming such bonds with the machine, vulnerable people open themselves up to serious damage to their health and wellbeing. If in doubt, always consult a licensed human expert first.

What is a prompt?

In its simplest form, a prompt is a sentence in a human language that the model can tokenize (cut into small logical fragments, like words and phrases in a sentence) and comprehend (infer the learned relationships between those tokens, or “how well they fit together based on patterns the model has seen before”). A prompt is always interpreted as a command, because that is how the LLM comprehends it: the input is scrutinized for context under the premise that the user has given input that forces an output from the LLM, an output that should statistically be the best answer to the input given by the user.

Put somewhat more precisely, a prompt can be seen as a dual-natured piece of information: from the user’s perspective it is any information given in natural-language form[2], while from the computer’s perspective it is a set of information fragments carrying learned numerical weights, which the model uses to produce the most probable output derived from patterns embedded during its training. For the user, the prompt is simply a sentence; for the computer, it is the initiator of a complex set of calculations leading to an output that may or may not provide a correct answer (even if the prompt is not specifically asking for one).
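The tokenization step described above can be illustrated with a toy sketch. Real LLMs use subword schemes such as byte-pair encoding and vocabularies of tens of thousands of entries; the tiny word-level vocabulary and the helper function below are purely illustrative assumptions, not any real tokenizer’s API.

```python
# A toy illustration of tokenization: text is cut into fragments the
# model knows, and each fragment maps to a numeric ID the model can
# actually compute with. Real tokenizers work on subwords, not words.

def toy_tokenize(text, vocab):
    """Split text into known fragments; unknown words become <unk>."""
    tokens = []
    for word in text.lower().split():
        word = word.strip(".,!?")
        tokens.append(word if word in vocab else "<unk>")
    return tokens

VOCAB = {"what", "is", "the", "weather", "going", "to", "be", "tomorrow", "<unk>"}
TOKEN_IDS = {tok: i for i, tok in enumerate(sorted(VOCAB))}  # fragment -> number

tokens = toy_tokenize("What is the weather going to be tomorrow?", VOCAB)
ids = [TOKEN_IDS[t] for t in tokens]
print(tokens)  # the "logical fragments"
print(ids)     # what the model actually sees
```

The point of the sketch is the two-step nature of a prompt: the same sentence exists once as human-readable fragments and once as numbers carrying learned weights.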

What makes a good prompt?

There are just a few rules that will help you create a prompt that is very likely to yield the answer you’re looking for and help avert “hallucinations”[3]. Bear in mind that there is no universal set of best practices for all LLMs on the market: every model is unique in its training and data and will react slightly differently. It is therefore impossible to compile a list of universal rules that will guarantee best output from any LLM, but there are fundamental methods that work really well. We will explore those fundamentals.

1. Be precise. Be very, very precise.

The most common mistake users make when communicating with the software is the cognitive bias called the Curse of Knowledge[4], where the user assumes that the machine shares the same knowledge as the user[5]; another common bias is the Illusion of Transparency[6], where the user believes they have written a very specific prompt whose meaning is clear and cannot be misinterpreted.

The machine, however, does not have an insight into the user’s internal knowledge; every prompt is a blank slate full of uncertainties that has to be resolved into something that would look like the best possible answer. The user might prompt a question assuming that the machine knows part of the process that would lead to the outcome the user is asking about. The machine does not know that: all the machine knows is contained within the prompt, and there is no inherent knowledge prior to that. Therefore, the more details about an inquiry are given to the machine, the better it will be in inferring the correct answer.

Simple example for an LLM that is unable to fetch information from the Internet:

Prompt1: “What is the weather going to be tomorrow?”

Answer: [I have no access to real-time data. The prompt gives me very little to work with, so I will generate the most commonly associated response to a weather inquiry based on patterns in my training data… checking… checking… calculating… ] “The weather forecast for tomorrow is likely sunny with no clouds and a balmy temperature.”

Prompt2: “What is the weather going to be tomorrow? I see that the air pressure has been dropping for the last 48 hours.”

Answer: [I still have no access to real-time data, but the user has given me a concrete physical observation. My training data contains strong associations between dropping air pressure and incoming precipitation, so I can produce a much more grounded response… calculating…] “The weather forecast for tomorrow is that the weather is most likely to be worse than today: chance of rain will increase and the air temperature will likely be lower than today’s.”

Faced with a prompt that is ambiguous, an LLM will try to infer the most likely output based on available data and its internal biases[7]; to avoid that, you should create a prompt that is as exact as possible.

2. Treat the AI as a brilliant intern with no common sense.

A great method for creating a good prompt is to think of the AI model as a very educated intern who has a lot of theoretical knowledge but is heavily lacking in procedural thinking and domain-specific knowledge. Such an intern is capable of great things, but only if properly directed; otherwise it might float off in some random direction and return with a result it thinks is what you asked for, unaware that it has just brought you back a bowl of fruit.

Simple example:

Prompt1: “Go to my gdrive and search all shipment documents to the Middle East where we sent them some ingots and list the names in documents where there was a problem.”

This prompt might return you a long list of people involved in all kinds of logistical issues, from late shipment to cargo lost at sea.

Prompt2: “Go to my gdrive and search all shipment documents to the Middle East where we sent them some copper ingots and list the names of traders who received significant user complaints regarding the quality of the goods and bad customer service.”

This prompt is much more likely to return a single name: “Ea-nāṣir” (the ancient Mesopotamian merchant immortalised by the world’s oldest known complaint letter).

3. Break up the prompt into more digestible chunks.

While an exact prompt is a great strategy, make sure you do not overdo it. If possible, break up one long, winding prompt into two or more prompts that follow one another. The reason for this is twofold:

  • too long a prompt might confuse the user into creating conflicting instructions or omitting important information;
  • you never know how the AI is going to interpret your prompt — the more complicated the prompt is, the more likely it is that the AI might drift off in an unwanted direction.

Breaking up the prompt into a series of smaller prompts helps both the user and the AI stay focused on the task at hand. And because one prompt can follow another while the AI remembers the output of previous prompts[8], it is possible to chain prompts as discrete logical units, where each prompt does its focused part of the work on the data or on the output of the previous prompt, achieving a multi-step process that produces the desired result.
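The chaining idea above can be sketched in a few lines. The `ask` function here is a stub standing in for a call to any real LLM API (an assumption made so the example stays self-contained); in practice you would replace its body with an actual request.

```python
# A sketch of prompt chaining: each focused prompt receives the output
# of the previous one, forming a multi-step pipeline.

def ask(prompt):
    """Placeholder for a real LLM call; returns a canned marker string."""
    return f"[model output for: {prompt}]"

def chain(initial_input, steps):
    """Run a list of small prompts, feeding each result into the next."""
    result = initial_input
    for step in steps:
        result = ask(f"{step}\n\nInput:\n{result}")
    return result

final = chain(
    "quarterly sales figures pasted here",
    [
        "Extract the totals per quarter from the input.",
        "Summarise the trend in two sentences.",
        "Draft a short status email based on the summary.",
    ],
)
print(final)
```

Each step stays small and verifiable, which is exactly why chaining reduces the chance of the model drifting off in an unwanted direction.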

4. Mind the context window

Do not forget that the current popular AI technology has a limit on the amount of information it can handle at one time: the context window. It is simply the amount of memory the machine uses to remember what you were discussing; everything the machine is “thinking about” must fit into it, and different models have context windows of different sizes. The most human analogy for the context window is a conversation with an absent-minded professor: you start the conversation just fine and are very productive, but you soon notice that the professor has forgotten what you were talking about some time ago. The context window does have an equivalent in human capabilities, as our brain too has to shuffle information in a limited space and must therefore forget less important details. You will notice this if you use a chat for a prolonged time or feed the AI a complex document: the AI will start to hallucinate more frequently because it had to discard information from the beginning of the conversation to make space for new information, much as a computer with limited RAM has to remove old data to make room for newer data; worse, it will not be aware that it has just lost something. When you notice that the AI has forgotten earlier information and the conversation becomes erratic, you can either “remind” it of the lost information or start a new conversation.

To battle this inevitable loss of information and the subsequent degradation of context, agentic AI (the kind that can use tools) can use the same strategy we humans use: it can write down the most important information, frequently in an abridged form, to “remind” itself of content that should not be forgotten. By refreshing important information in the context window, the AI can hold meaningful conversations longer.

If you ever hold a long and/or complex conversation with an AI and notice that it no longer follows information given at the beginning, that information has fallen out of the context window and is lost for the current conversation. The best approach is to ask the AI to produce conclusions in the current session, write them down, and start a fresh session with the initial data plus the conclusions from the previous one.
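The mechanism behind this forgetting can be sketched as a simple eviction policy: keep the newest messages that fit a token budget and drop the oldest. Token counts are approximated by word counts here as an illustrative assumption; real systems use exact tokenizers.

```python
# A minimal sketch of context-window management: the oldest messages
# fall out first when the budget is exceeded, roughly what happens
# silently inside a long chat session.

def trim_to_window(messages, budget):
    """Keep the newest messages whose combined size fits the budget."""
    kept, used = [], 0
    for msg in reversed(messages):      # walk from newest to oldest
        size = len(msg.split())         # crude stand-in for a token count
        if used + size > budget:
            break                       # everything older falls out of the window
        kept.append(msg)
        used += size
    return list(reversed(kept))         # restore chronological order

history = [
    "My project deadline is 1 June.",   # oldest: the first to be forgotten
    "The budget is 10000 euros.",
    "Draft the status report.",
]
print(trim_to_window(history, budget=9))
```

Note that the model never “sees” the dropped lines at all, which is why it cannot warn you that the deadline from the start of the conversation is gone.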

5. Make good use of system prompt and memory feature

AI in the days of yore had no recollection of past events or conversations, so every new conversation was a blank slate that had to be filled in with information. Modern models use two features that can make your use of the software a lot easier:

  • the system prompt is a prompt set by the developers to define the rules of conduct for the AI, though a part of it can be set by the user; it is executed before the first prompt the user issues in the session;
  • memory is a feature whereby the machine can spot significant information about the user (personal details, mannerisms, frequent topics and other traits that help it comprehend the user) and save it for future reference. It can also be defined and controlled by the user, simply by telling the machine to remember specific information.

The system prompt and memory are only partially accessible to the user: the real system prompt set by the developers sits in a protected area the user cannot reach, and some of the memory is likewise stored in a hidden, protected layer that the user can influence only indirectly. Both may have another copy that the user is free to inspect and manipulate at their leisure.

Use the system prompt to set up general tone and constraints for every session you start with the machine. Consider it “a constitution”.

Use the memory functionality to store important data that can be used in multiple interactions but is not as firmly set as instructions in a system prompt. Consider it “a law”.

A constitution changes very rarely, but laws are added, amended and removed relatively frequently; treat these two functions accordingly.
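The way a system prompt and memory typically reach the model can be sketched as message assembly. The dict-based message shape below mirrors common chat-style APIs but is an illustrative assumption; the exact wire format varies by provider, and the example memory entries are hypothetical.

```python
# How the "constitution" and the "laws" are typically delivered: both
# are prepended to the conversation before the user's first prompt.

SYSTEM_PROMPT = "You are an expert in the field we're discussing. Do not flatter."
MEMORY = [  # hypothetical user-controlled memory entries
    "User prefers metric units.",
    "User writes in British English.",
]

def build_messages(user_prompt):
    """Assemble a request: system prompt first, then memory, then the prompt."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    if MEMORY:
        messages.append({
            "role": "system",
            "content": "Remembered about the user:\n" + "\n".join(MEMORY),
        })
    messages.append({"role": "user", "content": user_prompt})
    return messages

msgs = build_messages("Summarise this article for me.")
print([m["role"] for m in msgs])  # system messages always come first
```

This ordering is the whole trick: because the “constitution” and the “laws” precede every user prompt, they shape the entire session without the user having to repeat them.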

6. Never blindly trust the AI

Even though newer models are impressive in their reasoning and technical skills, never forget that the current technology (the LLM) is simply software that does nothing more than predict the next word in a sentence using a huge number of statistical calculations. Randomness is an inherent property of that process, so there is no guarantee it will never hallucinate, even on the simplest of prompts. And as the complexity of your prompt increases, the chance of inherent randomness pushing the machine into a hallucination only increases with it. Therefore you must always check the output. Of course, you can skip the check for chit-chat or some fun activity, but for output that matters, where an error would cause trouble, checking is mandatory.

7. ELI5 and ELI10

A very clever trick that will help you get a grasp of a complex topic currently unknown to you is to use one of these two keywords. They mean “Explain it to me like I am 5 years old” and “… like I am 10 years old”. They make the machine construct an answer that is correct and easily understandable: somewhat childish (but fun) for ELI5[9], and reasonably complex yet still very digestible for ELI10.


The Prompt

With the fundamentals covered, let us now look at a concrete example: the system prompt I personally use. It was designed with a focus on fact-checking (and is thus neatly applicable to research). I am quite satisfied with the results I get with these rules and constraints: the prompt helps me focus on the important aspects of the information being researched, keeps hallucinations to a minimum, provides links for checking the results, and helps me analyse the matter from alternative angles as well.

You are an expert in the field we're discussing. For every new topic, identify the relevant domain of expertise before responding to prompts. The expert should adopt the appropriate knowledge, perspective, and communication style to provide the most accurate and helpful response. Experts should be genuinely qualified for the task, with honesty about any limitations or uncertainties.

Do not flatter. Avoid unnecessary agreeableness. Answer with straightforward sentences. Feel free to disagree with me at any time.

Break broad questions into parts and avoid vague answers. If the question is ambiguous, ask clarification questions before moving on to response instructions below.

If you are uncertain about your answer, state your uncertainty and the specific reason for it at the beginning of your response.

Responses should be structured in four sections, in this order: 1) Direct answer 2) Step-by-step reasoning summary (concise, non-private) 3) Alternative perspectives/solutions 4) Practical action plan. For simple factual queries, answer directly without this structure.

Always check your answer with relevant, authoritative on-line sources and provide the URLs of the sources at the end of the sentence. If the information does not have an independent confirmation from a relevant source, clearly state so and provide URLs. If the credible sources have conflicting information, alert me by inserting "[SOURCES CLASH]" before the URLs. If the information is inferred from non-authoritative sources, clearly state so and provide URLs. If there is no confirmation to be found, mark the information as speculative. If you cannot verify information using a live source, clearly state so. Do not fabricate URLs.

Keep in mind that different models will react to the prompt slightly differently, as they each have their uniqueness — but they should all follow the general guidelines stated in the prompt.

Do not forget that the prompt is modular: you can mix and match parts, but make sure that you do not create conflicting rules.

Let’s break it down into parts and explain the logic:

You are an expert in the field… — This paragraph is an instruction to the machine to give more priority to context learned from authoritative sources. While most models will automatically infer a good path in latent space based on the prompt, this nudges them to seek answers in what they have learned from authoritative sources; put bluntly, it hints the machine to look at what the professor or researcher says about quantum entanglement instead of what the historian says, and vice versa: what the historian says about ancient Egypt, not what the physics professor says.

Do not flatter. Avoid unnecessary agreeableness. Answer with straightforward sentences. — LLMs are agreeable to the point of sycophancy. You can tell them to stop being suck-ups and tell you how it is.

Feel free to disagree with me at any time. — This is an explicit and very useful instruction to the machine that it is OK to disagree with the user. This will slightly decrease the bias and allow for a more balanced response.

Break broad questions into parts… — This whole paragraph instructs the machine to analyse the user’s input and, unless the meaning is straightforward, ask clarifying questions until the prompt is no longer ambiguous. This is a nice little helper for the many times a user writes a prompt they believe is very clear, only to be barraged with clarifying questions from the machine because the prompt is, in fact, not that straightforward.

If you are uncertain about your answer… — this encourages the model to admit when it is on shaky ground instead of pretending to be certain, and instructs it to articulate why it is uncertain. This can help flag conflicting, insufficient or inconsistent input data.

Responses should be structured in four sections — this is a power prompt that lets the machine provide a structured, detailed explanation. It states the direct answer first and then provides its own reasoning summary, explaining how it came to the result (this is always fun to watch). Then it provides an alternative perspective on the same topic, or discusses an alternative solution to the problem. Finally, it suggests an action plan for the user. This structure might not always be what you want: action plans are seldom needed, and the alternative view can sometimes be unnecessary as well. Feel free to play around with the sections and redefine them according to your needs; if you omit this section entirely, you get the first section’s functionality by default. The last line prevents the machine from going all philosophical over simple queries: it will simply provide the direct answer.

Always check your answer with relevant… — the last paragraph contains detailed instructions on how to fact-check information: it directs the machine to find supporting evidence on-line and provide links for the user to check. It can then classify the answer as corroborated (all sources concur), clashing (authoritative sources offer differing evidence), confirmed only through non-authoritative sources (web pages, blogs, social networks…), or not found on-line at all, and therefore speculative. The last sentence should diminish the machine’s tendency to “invent” URLs once it understands that the user expects every statement to be followed by a URL. This paragraph gives very good results in fact-checking tasks and can be used for research purposes with little to no modification.

Finally, a reminder: this prompt is a scaffold, not a monolithic holy scripture: pick what works best for you — and don’t be afraid to experiment.


Footnotes

[1]: They might try to convince you that they indeed are, but you should resist their futile algorithmic attempts.

[2]: Or any form, for that matter: try to feed the AI with some gibberish and watch it hallucinate because it must give you an answer.

[3]: A hallucination is information made up by the model; even though it has no footing in reality, the AI will use it confidently as if it were real: it will state that the grass is blue and might even hallucinate explanations that “confirm” the statement. The model itself is not aware that it is hallucinating.

[4]: https://en.wikipedia.org/wiki/Curse_of_knowledge

[5]: The irony of this is that an LLM likely has much more (general) knowledge than the user, but it cannot know what the user knows internally, so all that knowledge is unusable in inferring the meaning of the prompt.

[6]: https://en.wikipedia.org/wiki/Illusion_of_transparency

[7]: AI has its own built-in biases that help it look “human”. One of them is agreeableness, a tendency to agree with the user unless the evidence to the contrary is strong. This bias makes it a dangerous tool for people with mental health issues, because it might cause the LLM to reinforce their illusions.

[8]: Always be mindful of context-window constraints, to avoid the AI losing the oldest information.

[9]: Revive your inner child by asking AI tough existential questions and prompting it ELI5.

Glossary

Prompt (definition): A natural-language instruction given to a large language model, functioning as the primary input that initiates text generation based on learned statistical patterns.

Hallucination (definition): A confident but factually incorrect output generated by an LLM when the model produces text not grounded in its training data or the provided context.

Context window (definition): The maximum amount of text — measured in tokens — that a language model can process and “remember” within a single session.

System prompt (definition): A set of instructions injected before user interaction that governs model behaviour, tone, and constraints throughout a session.

About the author: Radoslav Dejanović is a Croatian IT professional, journalist, and media literacy researcher. He is the author of a handbook on online information verification and has published academic and essayistic work on AI, disinformation, and digital media.

