### Instruction- vs. Example-Based Techniques
The most fundamental distinction is between prompts that rely solely on a description of the task and those that leverage pattern recognition through examples.
#### Zero-Shot Prompting
This is the simplest technique, relying entirely on a description of the task without providing any examples. It depends on the model's pre-existing training data to understand instructions like "Classify movie reviews".
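A zero-shot prompt can be sketched as a single task description with no demonstrations. The wording and label set below are illustrative, not a fixed standard:

```python
# Hypothetical zero-shot prompt: the task is described in plain language,
# and no labeled examples are provided.
def zero_shot_prompt(review: str) -> str:
    return (
        "Classify the following movie review as POSITIVE, NEUTRAL, or NEGATIVE.\n"
        f"Review: {review}\n"
        "Sentiment:"
    )

prompt = zero_shot_prompt("A masterpiece from start to finish.")
```

The model must infer the meaning of "classify" and the label set entirely from its training.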
#### One-Shot and Few-Shot Prompting
These techniques differ from zero-shot by providing demonstrations. One-shot provides a single example to help the model imitate a task, while few-shot provides multiple examples (generally three to five) to establish a pattern. The core difference here is that few-shot prompting conditions the model to follow a specific output structure or reasoning style for the current inference, rather than relying solely on its general training. For classification tasks, mixing up the order of classes in few-shot examples is recommended to prevent the model from overfitting to a specific sequence.
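A minimal few-shot sketch, with invented example reviews and a seeded shuffle to mix up the class order as recommended above:

```python
import random

# Hypothetical few-shot classification prompt. The labeled examples are
# invented for illustration; shuffling them mixes up the class order so the
# model does not overfit to a fixed label sequence.
EXAMPLES = [
    ("An instant classic.", "POSITIVE"),
    ("I walked out halfway through.", "NEGATIVE"),
    ("It was fine, nothing special.", "NEUTRAL"),
]

def few_shot_prompt(review: str, seed: int = 0) -> str:
    shots = EXAMPLES[:]
    random.Random(seed).shuffle(shots)  # mix class order across examples
    lines = ["Classify each movie review as POSITIVE, NEUTRAL, or NEGATIVE.", ""]
    for text, label in shots:
        lines.append(f"Review: {text}\nSentiment: {label}\n")
    lines.append(f"Review: {review}\nSentiment:")
    return "\n".join(lines)
```

The three completed demonstrations condition the model to emit a single label in the same `Review:`/`Sentiment:` format.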
### Contextual and Persona-Based Techniques
#### System Prompting
This sets the "big picture" context and defines the model's overarching purpose and capabilities (e.g., defining the model as a code translator). It is often used to enforce safety or specific output requirements like JSON formats.
#### Contextual Prompting
Unlike system prompting, which is broad, contextual prompting provides immediate, task-specific background information necessary for the current interaction.
#### Role Prompting
While system prompting defines *what* the model does, role prompting defines *who* the model is. It assigns a specific character or identity (e.g., "act as a travel guide" or "act as a confrontational debater") to frame the output's tone, style, and personality.
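The three layers can be combined. A sketch in the common chat-message convention, where the system/role instructions sit in a system message and the task-specific context travels with the user's question (all wording here is illustrative):

```python
# Sketch of layering system, role, and contextual prompts in a chat-style
# message list. The scenario details are invented for illustration.
def build_messages(user_question: str) -> list[dict]:
    system = (
        "You are a friendly travel guide. "   # role: who the model is
        "Always respond in valid JSON."       # system: output requirement
    )
    # contextual: immediate background for this interaction only
    context = "The user is visiting Amsterdam in winter with two children."
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": f"{context}\n\n{user_question}"},
    ]

messages = build_messages("What should we see in one afternoon?")
```

The system message persists across turns, while the contextual line would change with each new task.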
### Reasoning and Logic Techniques
Several techniques are designed to improve performance on complex tasks by altering the model's cognitive process. The differences lie in the structure of that process—whether it is linear, abstract, or branching.
#### Chain of Thought (CoT)
This technique forces the model to generate intermediate reasoning steps before providing a final answer. It differs from standard prompting by breaking down the "black box" of the model's processing into a linear sequence of thoughts. It is particularly effective for math or logic tasks where a direct answer might fail.
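In its simplest zero-shot form, CoT only requires appending a trigger phrase that elicits intermediate reasoning before the final answer (the phrasing below is one common variant, not the only one):

```python
# Minimal zero-shot chain-of-thought sketch: a trigger phrase appended to
# the question prompts the model to emit its reasoning steps first.
def cot_prompt(question: str) -> str:
    return f"{question}\nLet's think step by step."

prompt = cot_prompt(
    "A train leaves at 9:15 and the trip takes 2 hours 50 minutes. "
    "When does it arrive?"
)
```

A few-shot variant would instead include worked examples whose answers show the reasoning written out.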
#### Step-Back Prompting
Unlike CoT, which works through the specific details immediately, step-back prompting asks the model to first answer a high-level, general question related to the task. This abstraction allows the model to retrieve relevant principles and background knowledge *before* applying them to the specific problem, reducing errors rooted in specific details.
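Step-back prompting is naturally a two-stage exchange. A sketch, where the model's answer to the general question (here a placeholder string) is folded into the second, specific prompt:

```python
# Two-stage step-back sketch. In practice the model would answer
# step_back_question() first, and that answer would be passed as
# `principles` to grounded_prompt().
def step_back_question(task: str) -> str:
    return (
        "Before solving, what general principles or background knowledge "
        f"apply to this kind of task?\nTask: {task}"
    )

def grounded_prompt(task: str, principles: str) -> str:
    return (
        f"Relevant principles:\n{principles}\n\n"
        f"Using these principles, now solve the specific task:\n{task}"
    )
```

The abstraction step surfaces relevant knowledge before the model commits to the details of the specific problem.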
#### Tree of Thoughts (ToT)
While CoT follows a single linear path, ToT allows the model to explore multiple reasoning paths simultaneously. It generalizes CoT by maintaining a "tree" where the model can branch out to explore different possibilities, making it superior for tasks requiring exploration rather than just linear execution.
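The branching control flow can be sketched as a small beam search. Here `propose()` and `score()` are deterministic stubs standing in for model calls that generate candidate thoughts and evaluate them:

```python
# Toy tree-of-thoughts search. propose() and score() are stubs for LLM
# calls; the point is the branching structure: expand several candidate
# thoughts per step and keep only the most promising ones.
def propose(thought: str) -> list[str]:
    return [thought + "a", thought + "b"]  # two candidate continuations

def score(thought: str) -> int:
    return thought.count("a")              # stub evaluator

def tree_of_thoughts(root: str, depth: int = 2, beam: int = 2) -> str:
    frontier = [root]
    for _ in range(depth):
        candidates = [c for t in frontier for c in propose(t)]
        candidates.sort(key=score, reverse=True)  # rank the branches
        frontier = candidates[:beam]              # prune to the beam width
    return frontier[0]

best = tree_of_thoughts("")  # → "aa"
```

With `beam=1` and a single proposal per step, this degenerates back to the linear path of CoT.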
### Consensus and Action-Based Techniques
These advanced techniques differ by introducing verification mechanisms or external interactions.
* **Self-Consistency:** This technique addresses the limitation of a single reasoning path in CoT. It involves submitting the same prompt multiple times (often with a higher temperature to encourage diversity) and selecting the most consistent answer (majority voting) [18]. It essentially prioritizes the *reliability* of the reasoning over a single attempt [19].
* **ReAct (Reason & Act):** This paradigm differentiates itself by allowing the model to interact with the outside world. It combines reasoning with the ability to perform actions, such as querying external APIs or search engines [20]. It operates in a "Thought-Action-Observation" loop, whereas other techniques rely solely on the model's internal parameters [21, 22].
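The majority-voting step of self-consistency is straightforward to sketch. The sampled answers below are invented placeholders for the final answers extracted from repeated model calls:

```python
from collections import Counter

# Self-consistency aggregation sketch: the same prompt is sampled several
# times (at a higher temperature), the final answer is extracted from each
# response, and the most frequent answer wins.
def majority_vote(answers: list[str]) -> str:
    return Counter(answers).most_common(1)[0][0]

samples = ["42", "42", "41", "42", "40"]  # invented sampled final answers
winner = majority_vote(samples)           # "42"
```

A full implementation would also need a way to normalize answers (e.g. stripping whitespace or units) so that equivalent responses vote together.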
### Structural Frameworks
Finally, there are differences in how users are advised to organize prompts conceptually:
* **The Rhetorical Approach:** Focuses on the rhetorical situation, explicitly defining the audience and the author's ethos (credibility), pathos (emotional appeal), and logos (logic) [23].
* **The C.R.E.A.T.E. Framework:** A specific acronym-based structure (Character, Request, Examples, Additions, Type, Extras) that emphasizes treating the AI as a distinct "character" [24].
* **The Structured Approach:** Emphasizes a formulaic breakdown: Role and Goal, Context, Task, and Reference content [25].
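The structured approach lends itself to a simple template. A sketch following its four-part breakdown, with illustrative field wording:

```python
# Prompt template following the structured breakdown: Role and Goal,
# Context, Task, Reference content. Section labels are illustrative.
def structured_prompt(role_goal: str, context: str, task: str, reference: str) -> str:
    return (
        f"Role and Goal: {role_goal}\n"
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Reference content:\n{reference}"
    )

prompt = structured_prompt(
    role_goal="You are an editor helping tighten technical prose.",
    context="The draft is a blog post for a general developer audience.",
    task="Shorten the paragraph below without losing its meaning.",
    reference="(draft paragraph goes here)",
)
```

Whichever framework is used, the common thread is forcing the prompt author to state each component explicitly rather than leaving it implicit.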