Tuesday, March 10, 2026

Prompting Techniques explained

 

The core differences between prompting techniques lie in how they structure instructions, the volume of examples provided, the specific mechanism used to trigger reasoning, and how they define the model's interaction with external information.

Instruction vs. Example Based Techniques

The most fundamental distinction exists between prompts that rely solely on description and those that utilize pattern recognition through examples.


Zero-Shot Prompting

This is the simplest technique, relying entirely on a description of the task without providing any examples. It depends on the model's pre-existing training data to understand instructions like "Classify movie reviews".

One-Shot and Few-Shot Prompting

These techniques differ from zero-shot by providing demonstrations. One-shot provides a single example to help the model imitate a task, while few-shot provides multiple examples (generally three to five) to establish a pattern. The core difference here is that few-shot prompting conditions the model to follow a specific output structure or reasoning style for the current inference, rather than relying solely on its general training. For classification tasks, mixing up the order of classes in few-shot examples is recommended to prevent the model from overfitting to a specific sequence.
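As a rough illustration, here is how a few-shot classification prompt might be assembled in Python. The reviews and labels are invented for the example; the trailing "Sentiment:" cue invites the model to complete the pattern.

```python
# Sketch: assembling a few-shot sentiment-classification prompt.
# The example reviews and labels below are made up for illustration.
EXAMPLES = [
    ("A masterpiece from start to finish.", "POSITIVE"),
    ("Two hours of my life I will never get back.", "NEGATIVE"),
    ("Fine, I suppose, if nothing else is on.", "NEUTRAL"),
]

def build_few_shot_prompt(review: str) -> str:
    """Turn labeled examples plus a new review into a single prompt string."""
    lines = ["Classify each movie review as POSITIVE, NEGATIVE, or NEUTRAL.", ""]
    for text, label in EXAMPLES:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    # End on an unanswered "Sentiment:" so the model continues the pattern.
    lines.append(f"Review: {review}")
    lines.append("Sentiment:")
    return "\n".join(lines)

print(build_few_shot_prompt("I laughed, I cried, I bought the soundtrack."))
```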


Contextual and Persona-Based Techniques

These techniques differ in which aspect of the model's generation they primarily influence: its fundamental purpose, its immediate knowledge base, or its stylistic voice.

System Prompting

This sets the "big picture" context and defines the model's overarching purpose and capabilities (e.g., defining the model as a code translator). It is often used to enforce safety or specific output requirements like JSON formats.

Contextual Prompting

Unlike system prompting, which is broad, contextual prompting provides immediate, task-specific background information necessary for the current interaction.

Role Prompting

While system prompting defines *what* the model does, role prompting defines *who* the model is. It assigns a specific character or identity (e.g., "act as a travel guide" or "act as a confrontational debater") to frame the output's tone, style, and personality.


Reasoning and Logic Techniques

Several techniques are designed to improve performance on complex tasks by altering the model's cognitive process. The differences lie in the structure of that process—whether it is linear, abstract, or branching.


Chain of Thought (CoT)

This technique forces the model to generate intermediate reasoning steps before providing a final answer. It differs from standard prompting by breaking down the "black box" of the model's processing into a linear sequence of thoughts. It is particularly effective for math or logic tasks where a direct answer might fail.
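As a minimal sketch, a zero-shot CoT-style wrapper can be as simple as appending a step-by-step instruction to the question; the exact wording below is just one common variant, not the only one.

```python
def make_cot_prompt(question: str) -> str:
    """Wrap a question with a chain-of-thought instruction so the model
    lays out its intermediate steps before committing to an answer."""
    return (
        f"Question: {question}\n"
        "Think through this step by step, then state the final answer "
        "on its own line prefixed with 'Answer:'."
    )

print(make_cot_prompt("A train leaves at 9:40 and arrives at 11:05. How long is the trip?"))
```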

Step-Back Prompting

Unlike CoT, which works through the specific details immediately, step-back prompting asks the model to first answer a high-level, general question related to the task. This abstraction allows the model to retrieve relevant principles and background knowledge *before* applying them to the specific problem, reducing errors rooted in specific details.

Tree of Thoughts (ToT)

While CoT follows a single linear path, ToT allows the model to explore multiple reasoning paths simultaneously. It generalizes CoT by maintaining a "tree" where the model can branch out to explore different possibilities, making it superior for tasks requiring exploration rather than just linear execution.
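To make the branching concrete, here is a toy sketch of the ToT control flow. The proposer and scorer are stand-in functions (a real implementation would call the LLM for both); only the branch, evaluate, and prune loop is the point.

```python
# Toy Tree-of-Thoughts sketch. The "proposer" and "scorer" are stubs rather
# than LLM calls, so only the branch/evaluate/prune control flow is real.
# Toy goal: build a number close to a target by repeatedly appending digits.
TARGET = 42

def propose(thought: str) -> list[str]:
    """Stand-in for an LLM call that suggests candidate next steps."""
    return [thought + d for d in "0123456789"]

def score(thought: str) -> float:
    """Stand-in for an LLM self-evaluation: closer to TARGET is better."""
    return -abs(int(thought) - TARGET)

def tree_of_thoughts(depth: int = 2, beam_width: int = 3) -> str:
    frontier = [str(d) for d in range(1, 10)]  # initial one-digit thoughts
    for _ in range(depth - 1):
        # Branch: expand every surviving thought into its children.
        candidates = [c for t in frontier for c in propose(t)]
        # Prune: keep only the best-scoring branches (beam search).
        frontier = sorted(candidates, key=score, reverse=True)[:beam_width]
    return max(frontier, key=score)

print(tree_of_thoughts())
```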


Consensus and Action-Based Techniques

These advanced techniques differ by introducing verification mechanisms or external interactions.


Self-Consistency

This technique addresses the limitation of a single reasoning path in CoT. It involves submitting the same prompt multiple times (often with a higher temperature to encourage diversity) and selecting the most consistent answer (majority voting) [18]. It essentially prioritizes the *reliability* of the reasoning over a single attempt [19].
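The voting step itself is tiny. Assuming the final answers have already been extracted from several sampled runs (the answers below are invented), the selection might look like:

```python
from collections import Counter

def majority_vote(answers: list[str]) -> str:
    """Pick the most common final answer across several sampled runs."""
    return Counter(answers).most_common(1)[0][0]

# Imagine the same prompt was submitted five times at a higher temperature
# and these final answers were extracted from each chain of thought:
sampled_answers = ["12", "14", "12", "12", "11"]
print(majority_vote(sampled_answers))  # the consensus answer, "12"
```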

ReAct (Reason & Act)

This paradigm differentiates itself by allowing the model to interact with the outside world. It combines reasoning with the ability to perform actions, such as querying external APIs or search engines [20]. It operates in a "Thought-Action-Observation" loop, whereas other techniques rely solely on the model's internal parameters [21, 22].
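Here is a deliberately minimal version of that loop. The "model" and the search tool are hard-coded stubs (a real agent would call an LLM and a real API), so only the loop mechanics are genuine.

```python
# Minimal ReAct-style loop with a stubbed model policy and one fake tool.

def search_tool(query: str) -> str:
    """Stand-in for an external API or search engine."""
    facts = {"capital of France": "Paris"}
    return facts.get(query, "no results")

def fake_model(history: list[str]) -> str:
    """Stand-in policy: decide the next step from the transcript so far."""
    if not any(line.startswith("Observation:") for line in history):
        return "Action: search[capital of France]"
    return "Answer: The capital of France is Paris."

def react(question: str, max_turns: int = 5) -> str:
    history = [f"Question: {question}"]
    for _ in range(max_turns):
        step = fake_model(history)
        history.append(step)
        if step.startswith("Answer:"):
            return step
        if step.startswith("Action: search["):
            query = step[len("Action: search["):-1]
            # Feed the tool result back into the transcript as an Observation.
            history.append(f"Observation: {search_tool(query)}")
    return "Answer: (gave up)"

print(react("What is the capital of France?"))
```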


Structural Frameworks

Finally, there are differences in how users are advised to organize prompts conceptually:

  • The Rhetorical Approach: Focuses on the rhetorical situation, explicitly defining the audience, author ethos, pathos (emotional appeal), and logos (logic) [23].

  • The C.R.E.A.T.E. Framework: A specific acronym-based structure (Character, Request, Examples, Additions, Type, Extras) that emphasizes treating the AI as a distinct "character" [24].

  • The Structured Approach: Emphasizes a formulaic breakdown: Role and Goal, Context, Task, and Reference Content [25].

Wednesday, March 4, 2026

Prompt Engineering and its Wily Ways

This week I’ll take some time out from AIO and talk about some basics that I’ve been getting to grips with in my day job, particularly over the last year. Prompt Engineering has appeared from nowhere, and the more you dig in, the more you find that there is just a ton of techniques and methods that can really make a difference in what you get back from AI. Sure, you can treat it just like a Google search, but it can do a whole lot more…

What is an AI Prompt?

In the context of generative AI, a “prompt” is most often text, but it can also take other forms such as images or voice commands, provided to an AI model to elicit a specific response or prediction. It serves as the primary interface for interacting with Large Language Models (LLMs), acting as a form of "coding in English" where the user defines the task, context, and constraints for the AI to process.

In other words, it's not just something you'd type into a Google search; it can be a whole lot more. It could review your resume and rewrite it in a particular style, summarize a website article, or even produce something out of a hat like a unique poem or story.

Why Take Time to Develop Them?

It’s easy to use this like a standard Google search, and that’s totally fine too. However, you can really unleash the power of AI by investing some time in “prompt engineering”, which is described as more of an art than a science, often requiring experience and intuition to master. This iterative process is necessary for several reasons:

To Ensure Accuracy. LLMs function as prediction engines, generating the next most likely text based on their training data. Without a high-quality prompt to guide this prediction, the model may produce ambiguous, inaccurate, or irrelevant outputs.

It forces you to write very accurate instructions to ensure a more predictable result and this is a good practice for all walks of life.

To Navigate Sensitivity. Models are highly sensitive to word choice, tone, structure, and context; even small differences in phrasing or formatting can lead to significantly different results.

To Define Boundaries. A well-developed prompt helps the user understand the model's capabilities and limitations, allowing them to improve safety and reduce the likelihood of "hallucinations" (fabricated information). AI can lie very effectively, so don't give it a fraction of a chance to do it.

To Optimize Resources. Poorly designed prompts can lead to excessive token generation, which increases latency and computational costs. Refined prompts can enforce conciseness and specific output structures (like JSON) that make the data more usable.

Ultimately it’s always best to be absolutely clear on what you are asking the AI to do, giving it no possibility to go off and get creative with its answer.

Prompt Design

Designing high-quality prompts is an iterative process that blends art and engineering. The best practices for prompt engineering can be categorized into structural frameworks, instructional strategies, technical configuration, and process management.

Structural Frameworks

To maximize effectiveness, prompts should follow a logical structure rather than being a loose collection of sentences. Several frameworks are recommended:

The Structured Approach

This formula involves four key components:

    1.  Role and Goal - Broadly describe the aim and the persona the model should adopt.

    2.  Context - Provide background information.

    3.  Task - Make expectations explicit and detailed.

    4.  Reference Content - Supply the data or text the AI needs to process.
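Putting the four components together might look like this in Python. The role, context, task, and reference strings are just placeholders, and the `---` fences around the reference content are one common way to keep it visually separate.

```python
def build_prompt(role_and_goal: str, context: str, task: str, reference: str) -> str:
    """Assemble the four components into one clearly delimited prompt."""
    return (
        f"{role_and_goal}\n\n"
        f"Context:\n{context}\n\n"
        f"Task:\n{task}\n\n"
        f"Reference content:\n---\n{reference}\n---"
    )

prompt = build_prompt(
    role_and_goal="You are a release-notes editor aiming for clarity.",
    context="The notes go to non-technical customers.",
    task="Summarize the changes below in three short bullet points.",
    reference="Fixed crash on login. Added dark mode. Faster sync.",
)
print(prompt)
```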

The C.R.E.A.T.E. Framework

A mnemonic for drafting prompts that stands for Character (role), Request (specific task), Examples, Additions (style/POV refinements), Type of Output, and Extras (context/reference text).

The Rhetorical Approach

This focuses on the "rhetorical situation," defining the audience, context, author ethos (credentials), pathos (desired emotional response), logos (logical points), and arrangement.

Instructional Strategies

How you phrase your request significantly impacts the model's performance.

Be Specific and Simple

Simplicity is a design principle; if a prompt is confusing to a human, it will likely confuse the AI model. You must be specific about the desired output to ensure the model focuses on what is relevant. Leave as little to interpretation as possible.

Use Instructions Over Constraints

It is generally more effective to give positive instructions (telling the model what to do) rather than constraints (telling it what not to do). Constraints should be reserved for safety purposes or specific formatting limits.

Provide Examples (Few-Shot)

Giving the model one or more examples (input and output pairs) is highly effective. It acts as a teaching tool, allowing the model to imitate the desired pattern, style, and tone. This can be as simple as laying out a plain-text example with a heading, a block of body text, and some bullet points; the model will then use that format in its response. I will explore prompting techniques further in my next post.

Tip: For classification tasks, use at least six examples and mix up the classes (e.g., positive, negative, neutral) to prevent the model from overfitting to a specific order.
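One way to apply that tip is to shuffle the labeled examples before building the prompt, so no class ordering pattern sneaks in. The six examples below are invented.

```python
import random

# Six labeled examples (made up), deliberately stored grouped by class.
examples = [
    ("Loved it", "positive"), ("Brilliant", "positive"),
    ("Terrible", "negative"), ("Awful", "negative"),
    ("It exists", "neutral"), ("Watchable", "neutral"),
]

def shuffled_examples(seed: int = 0):
    """Return the examples in a mixed order so no class pattern emerges.
    A fixed seed keeps the prompt reproducible between runs."""
    mixed = examples.copy()
    random.Random(seed).shuffle(mixed)
    return mixed

for text, label in shuffled_examples():
    print(f"Review: {text} -> Sentiment: {label}")
```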

Break Tasks Down

For complex requests, split the task into smaller steps. For instance, instruct the model to first extract factual claims and then, in a second prompt, verify them, rather than doing both in one pass.
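A sketch of that two-pass approach, with `call_llm` as a placeholder you would swap for a real client call:

```python
def call_llm(prompt: str) -> str:
    """Placeholder for a real model call; echoes the instruction it got."""
    return "(model response to: " + prompt.splitlines()[0] + ")"

def fact_check(paragraph: str) -> str:
    # Pass 1: extraction only — pull out the claims.
    claims = call_llm(
        "List every factual claim in the text below, one per line.\n" + paragraph
    )
    # Pass 2: verification, fed the output of pass 1.
    return call_llm(
        "For each claim below, say whether it is accurate and why.\n" + claims
    )

print(fact_check("The Eiffel Tower is in Berlin and was built in 1889."))
```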

Define the Role

Assigning a specific persona (e.g., "Technical Product Manager", "News Anchor", or "Industry Journalist") helps frame the output's voice and focused expertise.


Formatting and Syntax

The physical layout and syntax of the prompt help the model parse intent.


Use Clear Syntax

Utilize punctuation, headings, and section markers (like `---` or XML tags) to differentiate between instructions, context, and reference data.

Combat Recency Bias

Models can be influenced more heavily by information at the end of a prompt. It is often helpful to repeat instructions at the end of the prompt or place the primary instructions before the data content.

Prime the Output (Cues)

You can "jumpstart" the model's response by providing the first few words of the desired output. For example, ending a prompt with "Here is a bulleted list of key points:" guides the model to immediately start listing items.

Structured Output (JSON/XML)

Requesting output in specific formats like JSON limits hallucinations and creates structured data that is easier to integrate into applications. For the real techies out there, if the JSON output is truncated or malformed, libraries like json-repair can help salvage the data.
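To illustrate the idea (this is a toy sketch, not the json-repair library itself), here is a naive repairer that closes brackets left open by a truncated response:

```python
import json

def naive_repair(s: str) -> str:
    """Toy sketch of truncated-JSON repair: close any brackets left open.
    Real libraries like json-repair handle many more failure modes."""
    stack, in_string, escape = [], False, False
    for ch in s:
        if in_string:
            if escape:
                escape = False
            elif ch == "\\":
                escape = True
            elif ch == '"':
                in_string = False
        elif ch == '"':
            in_string = True
        elif ch in "{[":
            stack.append("}" if ch == "{" else "]")
        elif ch in "}]":
            stack.pop()
    # Append the missing closers in reverse order of opening.
    return s + "".join(reversed(stack))

truncated = '{"items": [{"name": "a"}, {"name": "b"}'  # cut off mid-stream
print(json.loads(naive_repair(truncated)))
```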


Technical Configuration

Beyond the text, model settings play a crucial role in the output quality.


Temperature and Top-P (controlling randomness)

These are known as hyperparameters, and the difference between them is quite subtle.

The temperature parameter controls the randomness of the generated text: how much the model should take low-probability words into account when generating the next token in the sequence. For tasks requiring factual accuracy (like math or code), set the temperature to 0 or a very low number. For creative tasks, higher temperatures (e.g., 0.9) encourage diversity.

The top_p parameter can also be used to control the randomness of the outputs. Top_p sampling is also called nucleus sampling, in which a probability threshold is set (default value of 1 in the API). This threshold represents the proportion of the probability distribution to consider for the next word. In other words, it consists of selecting the top words from the probability distribution whose probabilities add up to the given threshold.

For example, if we set a top_p of 0.05, the model, once it has generated the probability distribution, will only consider the tokens with the highest probabilities that together sum to 5%. It will then randomly select the next token from among those tokens, according to their likelihood. The effect of top_p sampling is highly correlated with the quality and size of the dataset used to train the model: for well-covered subjects like machine learning, where the training data is large and high quality, the answers do not differ much when you modify the value of top_p.

It is generally recommended to alter only one of these parameters (Temperature or Top-P) at a time, not both.

Note : Don't ask me to repeat that after a few beers.
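If you'd rather see the mechanics than take my word for it, here is a small pure-Python sketch of temperature scaling and top-p filtering over a toy three-token distribution. The logit values are made up for illustration.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw scores to probabilities; low temperature sharpens them."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def top_p_filter(probs, top_p=1.0):
    """Keep the smallest set of highest-probability tokens whose combined
    mass reaches top_p; zero out the rest, then renormalize."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cumulative = set(), 0.0
    for i in order:
        kept.add(i)
        cumulative += probs[i]
        if cumulative >= top_p:
            break
    total = sum(probs[i] for i in kept)
    return [probs[i] / total if i in kept else 0.0 for i in range(len(probs))]

logits = [2.0, 1.0, 0.2]                  # toy scores for tokens A, B, C
print(softmax(logits, temperature=0.1))   # near-deterministic: A dominates
print(top_p_filter(softmax(logits), top_p=0.05))  # tiny nucleus: A only
```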


Token Limits

Be mindful of output length. Generating excessive tokens increases cost and latency. You can control this via configuration settings or by explicitly instructing the model to be concise (e.g., "Explain this in a tweet-length message").


Process Management

Prompt engineering is rarely perfect on the first try.


Iterate and Document

You should document every version of your prompt, including the model used, temperature settings, and the resulting output. This helps in debugging and refining performance over time. Keep them in a Google doc or simple text file.

Experiment with Variables

Use variables (e.g. `{city}`) in your prompts to make them dynamic and reusable across different inputs.
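For example, a template with a `{city}` variable (plus an invented `{month}` one) can be filled with Python's built-in `str.format`:

```python
# A reusable prompt template with named variables.
prompt_template = "Act as a travel guide. Suggest three things to do in {city} in {month}."

def fill(template: str, **values: str) -> str:
    """Substitute the named variables into the template."""
    return template.format(**values)

print(fill(prompt_template, city="Lisbon", month="May"))
```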

Collaborate

Have multiple people attempt to design prompts for the same goal; variance in phrasing can lead to discovering more effective techniques.


Next up...

In the next post I will try and outline some prompting techniques, of which there are many.

Tuesday, February 24, 2026

Measuring Success in the Age of GEO

I am back after missing a week due to the day job! So, you devised your perfect GEO/AEO strategy and started writing your product content in conformance with the methodologies outlined in previous posts. Now comes the million-dollar question: Is it actually working?

Auditing your performance in the age of AI is tricky because the old scoreboard (Google Analytics) might be lying to you. Traffic might go down while your brand awareness goes up—simply because the AI answered the customer’s question without them ever needing to visit your site.

Here is a no-nonsense, friendly guide on how to audit your GEO and AEO efforts, the tools you can use, and how to fix the cracks in your strategy.


1. The "Ego Surf" Audit (Ask the AI)

The simplest way to audit your standing is to go directly to the source. You need to see if the "Generative Engines" (ChatGPT, Perplexity, Gemini, Claude) actually know who you are. Also, bear in mind that the AI models don’t reindex as often as the Google Search Index, so this is a long game.

  • The Action: Treat the AI like a potential customer.

  • Brand Audit: Ask, "What is {Your Company Name}?" or "What does {Your Company} sell?" If the AI hallucinates or says "I don't have enough information," you have an AIO (AI Optimization) problem. It means your digital footprint is too small or inconsistent.

  • Category Audit: Ask, "Who provides the best {Service} in {City}?" or "Compare {Your Product} vs {Competitor}".

  • The Goal: You aren't just looking for a mention; you are looking for sentiment and accuracy. Does the AI recommend you? Does it cite the right features? If it recommends a competitor, analyze why—is their pricing clearer? Do they have more reviews?

2. The Metric Shift: From Clicks to "Inclusion"

In traditional SEO, we obsess over Click-Through Rates (CTR). In AEO and GEO, we care about Source Inclusion and Visibility Scores.

  • Zero-Click Visibility: You need to track how often you appear in "Featured Snippets," "People Also Ask" boxes, or AI overviews. Tools like AIOSEO (for WordPress) or SEMrush can help track these specific SERP features.

  • Position-Adjusted Visibility: This is a fancy term for a simple concept: Did the AI mention you early in its answer? Research suggests that visibility is measured not just by if you were cited, but where and how much of your content was used. You want to be in the first paragraph of the AI’s script, not a footnote at the bottom.

3. The Toolkit: What to Use

You don't need to invent new technology to do this, but you do need to use existing tools differently.

  • AIOSEO (All In One SEO): If you are on WordPress, this plugin has a "Search Statistics" module. It helps you track keyword rankings specifically for content performance and identifies "content decay" (when your old posts stop ranking and need a refresh).

  • AIClicks and Profound: Tools such as these track AEO performance and monitor which products appear in AI citations, which content gets extracted most often, and what language patterns work best. Once you identify effective AEO patterns, use those insights to refine your content templates, adjust attribute structures, and improve descriptions across similar products.

  • Question Research Tools: Use AnswerThePublic, SEMrush, or even your own customer support tickets. These tell you exactly what questions people are asking. If you aren't answering these specific questions on your site, you are invisible to the Answer Engine.

  • GPT-4 (as an Auditor): You can actually feed your content into ChatGPT and ask it to evaluate it against Google’s E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness) standards. Ask it, "How would you rate this article’s authority compared to Competitor {URL}?".

4. Corrective Actions: How to Fix Your Strategy

So, you audited your site and the AI is ignoring you. Here is how to get its attention.

Fix #1: The "Answer First" Adjustment (AEO)

If you aren't winning featured snippets or voice search results, your content is likely buried.

  • The Fix: Rewrite your headers as questions (e.g., "How long does a drill battery last?") and provide the answer immediately in a concise, 40–60 word paragraph directly underneath. No fluff, no backstory. Just the answer.

  • Technical Boost: Use Schema Markup (like FAQPage schema). This is code that screams to the robot, "Here is the answer!" Tools like AIOSEO can generate this for you without you needing to code.
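As an illustration, FAQPage structured data can be generated with a few lines of Python; the question and answer below are made up, and a plugin like AIOSEO produces equivalent markup for you without any coding.

```python
import json

def faq_schema(qa_pairs):
    """Build schema.org FAQPage JSON-LD from (question, answer) pairs."""
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in qa_pairs
        ],
    }

markup = faq_schema([
    ("How long does a drill battery last?",
     "Most 18V drill batteries last 30-60 minutes of continuous use."),
])
# Embed the result in the page inside a <script type="application/ld+json"> tag.
print(json.dumps(markup, indent=2))
```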

Fix #2: The "Citation Magnet" Move (GEO)

If the AI summarizes the topic but doesn't mention you, your content lacks authority signals.

  • The Fix: Add hard data. Don't say "Our software is fast." Say, "Our software processes data 30% faster than the industry average," and cite a source or internal study. Adding citations and statistics can increase your visibility in AI answers by 30-40%.

  • Quote Experts: Include direct quotations from industry leaders or your own experts. AI loves to pull quotes to build its "script".

Fix #3: The "Consensus" Cleanup (Off-Page Audit)

This is the big one. AI doesn't just trust your website; it trusts what the rest of the internet says about you. If you have great content but terrible reviews on Yelp or G2, the AI might skip you.

  • The Fix: Audit your N.A.P. (Name, Address, Phone) across all directories. Inconsistency confuses the AI. Then, actively drive happy customers to leave reviews on third-party sites. The AI looks for "consensus" across the web to verify you are a legitimate recommendation.

Summary Checklist

  1. Ask the AI: regularly prompt ChatGPT/Perplexity to see how it describes your brand.

  2. Track Snippets: Monitor how often you appear in "People Also Ask" or AI Overviews.

  3. Inject Facts: Audit your top pages—if they are full of fluff, replace them with stats, tables, and direct answers.

  4. Check the Vibe: Ensure your off-site reviews and directory listings are squeaky clean.

If you do this, you stop chasing clicks and start building the "influence" that gets you cited as the expert, but remember that this is built over time. Be patient!

