Podcast Transcript
Welcome to our third episode of EVO AI, titled What is Prompt Engineering?
Join us today as we look at what makes a prompt good or bad, what happens when a prompt contains too much or too little information, how and why a large language model weights different words in a prompt differently, and what decomposition means for crafting a suite of prompts that can solve complex problems.
If you like our podcast and want to know more, please remember to subscribe on our website, evoai.ai, and to follow and subscribe to our Spotify, YouTube and Apple channels.
Alright, let’s get down to business. One of the terms you may have heard buzzing around the tech world (and not only) is “Prompt Engineering.” But what exactly does it mean? And why should you, dear listener, even care about it? Well, sit tight, because I’m about to demystify this intriguing subject.
Now, most of you are familiar with the idea of a ‘prompt’ as something that elicits a response. In school, you get essay prompts. In a command-line interface on your computer, you get prompts to enter commands. But in the context of artificial intelligence, and more specifically language models like Bard or GPT-4, a ‘prompt’ has a richer, deeper layer of importance.
We have developed this incredible tool—a machine that can write like a human, calculate like a supercomputer, and generate insights like a professional analyst.
That’s what a modern large language model can do. But there’s a catch. You have to know how to ask it to do what you want, and that’s where prompt engineering comes in. It’s the art and science of crafting the perfect question or statement that tells the AI precisely what you need.
You see, prompt engineering isn’t just about formulating a random question and throwing it at the AI, hoping for a good answer. Oh no, it’s much more nuanced than that. We’re talking about strategic thinking, design, and yes, sometimes even iterative testing. In essence, you are engineering your query to get the most useful, accurate, or creative output from the machine.
The prompt is essentially your user interface for the AI system, your means of communication with this incredibly powerful tool.
Now, why is this important? Well, in a world increasingly driven by AI, how we interact with these systems is crucial. The more effectively you can communicate your needs to an AI, the more value you can extract from it. Whether you’re a business owner looking to automate certain aspects of your operations, a writer seeking creative inspiration, or even a researcher looking to sort through mountains of data, the ability to create an effective prompt can substantially impact the quality of your results.
Think of it as having a conversation where you’re not just hoping to be understood; you’re aiming for maximum clarity and effectiveness. It’s not about speaking the AI’s language; it’s about refining our own questions and statements to align perfectly with what the AI is capable of delivering. In a nutshell, a well-engineered prompt can be the difference between getting a generic, vague or even wrong answer and obtaining a detailed, insightful, and even transformative response.
Believe it or not, this is far from a niche skill. Prompt engineering is becoming increasingly important across various sectors. From automating customer service and generating high-quality content to conducting complex data analysis, knowing how to prompt an AI effectively is quickly turning into an invaluable skill. So if you’re intrigued by what AI has to offer, understanding the art and science of prompt engineering should be high on your learning list.
Section 2: Applications of Prompt Engineering
Now that we’ve grasped what prompt engineering is, let’s turn our attention to its applications. I assure you, this isn’t some arcane technology hidden away in research labs. No, prompt engineering is at work in countless technologies that we use every day, often without even realizing it.
First on the list—customer support bots (as in chatbots). Ah yes, the little chat boxes that magically appear on websites offering to assist you. Ever wonder why some seem to understand exactly what you need, while others flounder? That’s prompt engineering at its finest. These bots have to ask the right questions in the right sequence to lead you to a solution or an answer, and the difference between a well-crafted prompt and a poorly constructed one can be the difference between a happy customer and an annoyed one.
But it’s not just customer service; prompt engineering significantly impacts our experiences with search engines too. Google, Bing—you name it, they all rely heavily on the concept. When you type in a search query, you’re essentially giving the search engine a prompt. The phrasing, the keywords, the context—all of these things matter. And let’s flip the coin; search engines themselves are masters of inverse prompt engineering. They continually optimize algorithms to understand the myriad ways people ask for the same information.
Now, for the creatives out there, AI-driven content generation is another fascinating arena where prompt engineering shines. Whether we’re talking about machine-aided journalism, scriptwriting, or even something as specific as poetry generation, the prompt is king. For example, for automated journalism, a well-designed prompt like “Summarize the key takeaways from the latest Apple product launch” could generate a draft that requires little to no human editing.
And we’re not done yet. Let’s dive into some specialized fields like healthcare and education. In medical diagnostics, for instance, the way a question is phrased to an AI analyzing medical imagery can drastically impact the diagnosis. Similarly, in educational platforms that use AI to create personalized learning experiences, prompt engineering helps fine-tune the questions and exercises that the AI presents to students. Experts in these fields work hand-in-hand with AI specialists to ensure that the prompts are not just technically sound, but also meaningful and effective in a real-world context.
Finally, consider financial modelling. Investment firms are leveraging AI to analyse market trends, but the quality of that analysis boils down to—you guessed it—the prompt. A poorly designed prompt might return generic investment advice that you could find on a blog, while a well-engineered one could yield deep insights, revealing high-potential investment opportunities that could genuinely move the needle.
So, to conclude, prompt engineering is everywhere, interwoven into the fabric of our increasingly digital lives. From the customer service you receive to the search results that guide your decisions, to the automated content that informs or entertains you, prompt engineering plays an indispensable role in shaping the quality and effectiveness of these experiences.
Section 3: Usability and Examples
So we’ve unpacked what prompt engineering is, and we’ve toured the landscape of its applications. But how usable is this technology really, and what are some concrete examples that demonstrate its value? That’s what we’re going to delve into now.
First, let’s talk about usability. In the early days of language models and AI, you practically had to be a coder or a data scientist to get the most out of these tools. But the paradigm has shifted dramatically. With platforms simplifying the user interface and the emergence of ‘no-code’ solutions, even non-technical users are beginning to wield the power of sophisticated AI. And it all comes down to effective prompt engineering.
Imagine a journalist with no coding background using a tool like GPT-4 to write articles. By understanding how to construct an effective prompt, they can get drafts that are nearly ready for publication. They could ask the AI, “Write an introductory paragraph about the impact of climate change on polar bears,” and get something that aligns perfectly with their story’s tone and content. It democratizes the technology, making it accessible and usable for a broader audience.
Okay, let’s pivot a bit and explore some specific examples. Ever used a fitness app that tailors workout routines based on your responses to a few simple questions? You’ve been the beneficiary of prompt engineering! These apps use machine learning algorithms trained to interpret your answers to a set of prompts like ‘What’s your fitness level?’ or ‘Any specific areas you want to focus on?’ Based on how you respond, you get a customized workout plan.
Here’s another example that many of us can relate to—streaming services. When you first sign up for platforms like Netflix or Spotify, you’re often prompted to select genres, artists, or types of shows you like. Your responses, essentially prompts for the AI, help tailor your recommendations. So the next time you discover a hidden gem of a show or a new favourite song, remember, that’s prompt engineering working behind the scenes to enhance your experience.
But it’s not just consumer-facing scenarios; businesses benefit hugely from prompt engineering as well. Take market research. Companies are now using AI models to scrape and analyze vast amounts of data. But the questions they ask—the prompts—are where the real magic happens. A prompt like, “Analyze consumer sentiments on electric cars from social media data in the last quarter,” could give actionable insights that shape product development or marketing strategies.
And before we wrap up this section, let’s touch on emergency response services. Imagine a natural disaster like a hurricane or an earthquake. Agencies could use AI models to sift through enormous amounts of data to pinpoint which areas need immediate assistance. But again, the success of these operations is significantly influenced by how well the prompt is engineered. A vague prompt could lead to delays, while a well-thought-out prompt could potentially save lives.
So there you have it. From everyday applications like fitness apps and streaming services to critical operations like market research and emergency response, the usability and effectiveness of AI technologies are heavily influenced by the quality of the prompts engineered into them.
Section 3.5: Top 10 Types of Prompt Engineering
Before we dive into what makes a good or bad prompt, let’s do a quick run-through of the top 10 types of prompt engineering. These are the methodologies that shape the way AI interacts with us and are key to enhancing its performance and usability.
Zero-Shot Prompting
- This is the most basic form: you give the model a single instruction with no examples at all and expect it to understand and respond appropriately.
- Example: Asking a chatbot, “What’s the weather like?”
Few-Shot Prompting
- This involves providing a couple of examples to help the model understand the context better.
- Example: Before asking the real question, you show the model a couple of worked examples, for instance: "Review: 'Great service, friendly staff.' → Positive. Review: 'My order arrived cold.' → Negative.", and then ask it to classify a new review (a short code sketch after this run-through shows the difference).
Chain-of-Thought (CoT) Prompting
- This involves prompting the model to work through a problem step by step, laying out its intermediate reasoning before it gives a final answer.
- Example: A legal analysis tool being asked to walk through the relevant case law step by step before stating its interpretation.
Natural Language Interface (NLI) Prompting
- Here, the aim is to allow more organic, conversational prompts, much like speaking to a human.
- Example: Saying to your smart speaker, “Hey, could you play some relaxing music?”
Generative Prompts
- These are prompts designed for tasks that require the model to generate content, like writing or creating art.
- Example: “Write a poem about autumn.”
Selective Prompts
- These are used when you want the model to focus on specific elements within a larger dataset or text.
- Example: “Extract all mentions of financial figures from the annual report.”
Contextual Prompts
- These are designed to take into account the user’s previous interactions or known preferences.
- Example: A fitness app asking, “Would you like to continue with your strength training routine today?”
Prompt Templating
- This involves creating a basic structure for a prompt whose blanks are filled in dynamically with real-time data (a short code sketch after this run-through shows the idea).
- Example: A stock analysis tool that fills in the name of the company and date range dynamically in a query prompt.
Visual Prompts
- These are not text-based but instead rely on visual cues to guide machine learning models.
- Example: Using a symbol to initiate a function in design software.
Multi-Modal Prompts
- These prompts combine different types of data—like text and images—to guide the AI.
- Example: Asking a model to “Describe the emotion conveyed in this photograph.”
These methods of prompt engineering have their own strengths and weaknesses, and they can be mixed and matched depending on what you’re looking to achieve. Knowing when and how to use these types can dramatically improve your interaction with AI systems.
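To make the first few of these types concrete, here is a minimal sketch of how zero-shot, few-shot and chain-of-thought prompts might be assembled in code. The sentiment-classification task is just an illustration, and ask_model is a hypothetical placeholder for whichever chat-completion API you actually call; nothing here is a fixed or official format.

```python
# Minimal sketch: zero-shot vs. few-shot vs. chain-of-thought prompts.
# ask_model() is a hypothetical placeholder for a real chat-completion call.

def ask_model(prompt: str) -> str:
    raise NotImplementedError("Connect this to your LLM provider of choice.")

# Zero-shot: a single instruction, no worked examples.
zero_shot = "Classify the sentiment of this review as Positive or Negative: 'The soup was cold.'"

# Few-shot: a couple of worked examples first, then the new case.
few_shot = """Classify the sentiment of each review as Positive or Negative.

Review: "Great service and friendly staff." -> Positive
Review: "Waited an hour and the order was wrong." -> Negative
Review: "The soup was cold." ->"""

# Chain-of-thought: ask the model to reason step by step before answering.
chain_of_thought = (
    zero_shot
    + " Think through the clues in the review step by step, then give the final label."
)

# In a real application you would send these to the model, e.g. ask_model(few_shot).
for name, prompt in [("zero-shot", zero_shot), ("few-shot", few_shot), ("chain-of-thought", chain_of_thought)]:
    print(f"--- {name} ---\n{prompt}\n")
```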
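Prompt templating is just as easy to sketch. The snippet below follows the spirit of the stock-analysis example above; the template wording and the build_stock_prompt helper are assumptions made for illustration.

```python
from datetime import date

# A minimal prompt-templating sketch: a fixed structure whose blanks are
# filled in dynamically at request time (company name, date range).
STOCK_ANALYSIS_TEMPLATE = (
    "Summarize the share-price performance of {company} "
    "between {start_date} and {end_date}, and list the three "
    "news events most likely to have driven it."
)

def build_stock_prompt(company: str, start_date: date, end_date: date) -> str:
    return STOCK_ANALYSIS_TEMPLATE.format(
        company=company,
        start_date=start_date.isoformat(),
        end_date=end_date.isoformat(),
    )

print(build_stock_prompt("Apple", date(2023, 7, 1), date(2023, 9, 30)))
```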
Section 4: Examples
Having delved into what Prompt Engineering is, its applications, and its usability, let’s make things a bit more tangible with some real-world examples. The devil is, as they say, in the details.
Good Prompt for a Financial Advisor Bot
- Picture yourself using an AI-driven financial advisor. A solid prompt would be: “Recommend the top 3 performing tech stocks of the last quarter.”
- Why It Works: This prompt is targeted, specifies the sector, and limits the timeframe, helping the AI focus on delivering the most relevant information.
Bad Prompt for a Financial Advisor Bot
- A bad example would be simply asking, “Tell me where to invest.”
- Why It’s Inadequate: Too vague. It lacks context about your financial goals, risk tolerance, and time horizon, leading to a potentially irrelevant answer.
Good Prompt for a Language Translation App
- If you’re using an AI-based translation app, a good prompt might be, “Translate the following English text to conversational French.”
- Why It Works: It is specific about the source and target languages and even gives context to the tone or style (conversational).
Bad Prompt for the Language Translation App
- A bad example would be, “Translate this.”
- Why It’s Inadequate: This lacks information. What language are we starting with, and where are we going?
Good Prompt for a Travel Recommendation Engine
- Imagine querying a travel bot for your next vacation. A well-framed prompt could be, “Suggest a 7-day itinerary for a family vacation in Japan, focusing on cultural experiences.”
- Why It Works: This prompt specifies the duration, the type of trip, and even the kind of experiences you’re interested in.
Bad Prompt for the Travel Recommendation Engine
- A poorly crafted example might be, “What’s a good vacation spot?”
- Why It’s Inadequate: This is far too general. It could lead to suggestions that don’t match your interests, budget, or time availability.
In each of these examples, you can see how a good prompt versus a bad one can be the difference between a helpful, accurate answer and a confusing or irrelevant one. Crafting effective prompts is not just about asking questions; it’s about asking the right questions in the right way. And that’s the essence of prompt engineering—creating that optimal interface between human needs and computational capabilities.
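One way to internalise the difference between these good and bad prompts is to treat each missing detail as a required parameter. The sketch below is a purely illustrative helper (the function name and wording are assumptions) that builds the translation prompt from the example above so the source language, target language and tone can never be left out.

```python
def build_translation_prompt(text: str, source_lang: str, target_lang: str, tone: str = "conversational") -> str:
    """Build a fully specified translation prompt.

    Making source_lang and target_lang explicit arguments means the
    under-specified "Translate this." prompt simply cannot be produced.
    """
    if not text.strip():
        raise ValueError("Nothing to translate.")
    return (
        f"Translate the following {source_lang} text into {tone} {target_lang}. "
        f"Preserve the original meaning and register.\n\n{text}"
    )

print(build_translation_prompt("Where is the nearest train station?", "English", "French"))
```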
Section 5: Good Prompt vs. Bad Prompt
As we segue from the types of prompt engineering into the intricacies, one question that naturally comes up is: What makes a good prompt and what makes a bad one? In essence, the quality of your prompts can either leverage the capabilities of an AI or render it quite ineffective.
So, let’s start with the attributes of a good prompt:
- Clarity: A good prompt is unambiguous. You want to ask questions or make statements that don’t leave room for misinterpretation.
- Example: Instead of asking, “Tell me about that big event,” specify what you’re looking for: “Summarize the key outcomes of the 2020 Climate Change Conference.”
- Context: Especially for more complex queries, providing context can steer the AI in the direction you intend.
- Example: Instead of asking, “How do I get there?” say, “How do I get to the downtown library from Main Street?”
- Conciseness: While context is important, brevity is equally valuable. Overloading a prompt can lead to less focused answers.
- Example: Instead of asking, “Can you write me a story that involves a cat, a mysterious neighbour, and a full moon, and make it suspenseful?” simplify to, “Write a suspenseful story featuring a cat and a mysterious neighbour.”
On the flip side, here are the attributes of a bad prompt:
- Vagueness: This can lead to answers that may technically be correct but are not what you were looking for.
- Example: Asking, “Tell me something interesting,” could result in a myriad of unrelated facts.
- Assumptions: Sometimes, prompts can unknowingly include assumptions that skew the results.
- Example: Asking, “Why do people choose to smoke?” assumes that smoking is always a choice, neglecting factors like social pressure or addiction. This example highlights how a poorly crafted prompt can introduce biases or assumptions into the responses generated by AI systems.
- Over-Complexity: An overly complicated prompt can confuse the model, leading to subpar or incorrect responses.
- Example: Asking, “What are the socio-economic and cultural implications of the rise in gig economy jobs among Millennials vis-a-vis post-industrial economic theories?” This might result in a response that misses the nuance of the multiple factors involved.
So the key takeaway here is: Good prompts are the backbone of effective interaction with AI systems. They are clear, provide sufficient context, and are as concise as possible. Avoid being vague, making assumptions, or overcomplicating the question.
Section 6: Decomposing Complex Problems for Better Prompt Engineering
So we’ve talked about what makes a good prompt and what doesn’t, but what about when you’re facing a problem so large, so multifaceted, that you don’t even know where to start with your prompt? This brings us to an important technique in problem-solving and, by extension, in prompt engineering: decomposition.
Decomposition involves breaking down a complex problem into smaller, more manageable parts. Once these smaller issues are solved, the solutions can be combined to solve the original, complex problem.
Introduction to Problem Decomposition
- Problem Decomposition is the art and science of breaking down complex problems into smaller, more manageable tasks.
- Explanation: It allows us to deal with complicated situations in a structured and more manageable way.
Iterative Questioning
- The first technique in Problem Decomposition is Iterative Questioning.
- Explanation: You start with a broad question. Let’s say, “How can we reduce carbon emissions?” The AI gives you general categories: transportation, agriculture, industry, etc. Then you dig deeper. “Tell me about the most effective ways to reduce emissions in agriculture.”
- Further Iteration: You keep refining and questioning, peeling back layers, until you’re down to minutiae like, “Compare the carbon footprint of traditional vs. vertical farming.”
Prioritizing Key Issues
- But what if the problem has multiple facets? That’s where our second technique comes in—Prioritizing Key Issues.
- Explanation: You identify what’s urgent, what’s important, and what’s neither. Only then do you craft your prompts, aiming them like precision missiles at the most critical aspects of the problem.
- Example: If you’re tackling urban pollution, perhaps air quality is more urgent than waste management, depending on the city. So you’d focus your prompts accordingly.
- Now, let’s combine these techniques in an example. The problem? The dissemination of accurate information about climate change.
- Initial Prompt: You might start off by asking, “What are the main channels currently used for disseminating climate change information?”
- Refined Prompt: After sifting through the initial information, you zero in on a more refined question like, “How effective are social media campaigns in combating climate misinformation?”
- Final, Focused Prompt: Finally, you ask, “Generate a step-by-step guide on how to use Twitter to debunk common climate change myths.”
Through Problem Decomposition, we’ve moved from a broad, nebulous question to a targeted, actionable task that can lead to real-world solutions.
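As a rough sketch of what iterative questioning looks like when automated, the snippet below chains the three climate-information prompts from the example above. ask_model is a hypothetical placeholder for a real LLM call; the point is the structure, with each answer informing the next, more focused prompt.

```python
# A sketch of iterative questioning: broad -> refined -> focused.
# ask_model() is a hypothetical placeholder for a real LLM call.

def ask_model(prompt: str) -> str:
    print(f"[prompt] {prompt}")
    return "<model answer would appear here>"

broad = ask_model(
    "What are the main channels currently used for disseminating climate change information?"
)

# In practice you would read `broad` and decide where to dig deeper.
refined = ask_model(
    "How effective are social media campaigns in combating climate misinformation?"
)

focused = ask_model(
    "Generate a step-by-step guide on how to use Twitter to debunk common climate change myths."
)
```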
Case Study: Healthcare
To drive this home, let’s think about healthcare.
- Initial Prompt: “How can technology improve healthcare outcomes?”
- Mid-Level Prompts: “Describe AI applications in early cancer detection.”
- Final Prompt: “List ways machine learning can improve the accuracy of MRI readings.”
Again, we’ve cut down an overwhelming challenge into chewable bits that can be solved step-by-step.
So, there you have it. Problem Decomposition allows us to divide and conquer, using the power of Prompt Engineering to dissect even the most challenging problems into something we can tackle. It’s all about asking the right questions in the right sequence to get to the solution.
Let’s look at another healthcare example.
- Let’s say you’re designing a prompt to help healthcare professionals identify risk factors for heart disease. The problem is complicated; it involves genetics, lifestyle, and environmental factors.
- Decomposition: Instead of crafting a single, unwieldy prompt, you could break it down into questions that address each aspect.
“Identify genetic markers linked to heart disease.”
“List common lifestyle choices that increase heart disease risk.”
“Describe environmental factors contributing to heart disease.”
Example in Business Analysis
- Or let’s consider a business wanting to understand why a product isn’t selling.
- Decomposition: Rather than asking, “Why isn’t this product successful?” you could ask:
“What are the customer reviews saying about this product?”
“How does the price point compare to competitors?”
“Is the marketing reaching the target demographic?”
Example in Environmental Science
- Imagine you’re a policy advisor, and you need to tackle the enormous issue of climate change. This is a complex problem that has social, economic, technological, and natural dimensions.
- Decomposition: Instead of creating a vague or overarching prompt like, “How can we solve climate change?”, you might break it down into:
“What are the most effective renewable energy sources available?”
“How can we reduce carbon emissions in large cities?”
“What policy changes can encourage sustainable farming?”
“Identify public awareness campaigns that have positively impacted climate change.”
By decomposing the problem, you make it easier to craft effective prompts that can be fed into an AI system. The AI can then focus on each smaller problem, providing more accurate and actionable insights. Once you have all these ‘mini-solutions,’ you can piece them together for a comprehensive answer to your big, complex problem.
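If you wanted to script this decompose-then-recombine workflow, it might look something like the sketch below. The sub-prompts are the climate-policy questions listed above; ask_model is the same hypothetical placeholder, and the final synthesis prompt is an illustrative assumption.

```python
# A sketch of decomposition: run the sub-prompts independently,
# then feed the mini-solutions into one synthesis prompt.

def ask_model(prompt: str) -> str:
    """Hypothetical placeholder for a real LLM call."""
    return f"<answer to: {prompt}>"

sub_prompts = [
    "What are the most effective renewable energy sources available?",
    "How can we reduce carbon emissions in large cities?",
    "What policy changes can encourage sustainable farming?",
    "Identify public awareness campaigns that have positively impacted climate change.",
]

mini_solutions = {p: ask_model(p) for p in sub_prompts}

synthesis_prompt = (
    "Using the findings below, draft a coherent climate policy briefing:\n\n"
    + "\n".join(f"- {question}\n  {answer}" for question, answer in mini_solutions.items())
)

print(ask_model(synthesis_prompt))
```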
Decomposition doesn’t only apply to mathematical equations or computer algorithms; it’s a life skill, a business tool, and an essential aspect of effective Prompt Engineering.
Section 7: Information Sufficiency in Prompt Engineering
As we wind down our exploration of Prompt Engineering, there’s one more critical topic we can’t overlook: Information Sufficiency. Essentially, this is all about striking the right balance between too much information and too little information in a prompt.
Enough Information
- We’ve talked about crafting precise and well-defined prompts, but what constitutes ‘enough’ information?
- Example: For instance, if you’re using a machine learning model to predict weather patterns, a sufficient prompt could be: “Predict the likelihood of rainfall in San Francisco for the next seven days based on historical weather data from the last ten years.”
- Why It Works: This is explicit about the what, where, when, and how, which allows the AI to generate a focused, useful answer.
Inadequate Information
- On the flip side, we have prompts that lack enough detail.
- Example: “Tell me if it’s going to rain.”
- Why It’s Inadequate: This prompt is missing essential parameters like the location and time frame, leading to either a very generic answer or a request for more information.
Too Much Information
- Yes, you can err on the side of too much detail, creating prompts that are overly complicated.
- Example: “Predict the likelihood of rainfall in San Francisco for the next seven days based on historical weather data from the last ten years, considering lunar phases, ocean currents, and El Niño cycles.”
- Why It’s Problematic: Overloading a prompt with unnecessary detail can make it difficult for the AI to focus on what’s actually important, possibly resulting in a less accurate or relevant output.
The Role of Variable Weighting
- When a model processes a prompt, each token is effectively assigned numerical weights, values like 0.0018, 0.0197, or 0.288, by its attention mechanism. These weights are part of how the AI decides where to focus its computational effort.
- Explanation: In our weather prediction prompt, for example, terms like "San Francisco" or "next seven days" end up carrying weights that reflect their influence on the output. A weight of 0.288 is far more significant than a weight of 0.0018.
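To see where figures like 0.0018 and 0.288 can come from, here is a toy illustration rather than the model's actual internals: made-up relevance scores for a few tokens from the weather prompt, normalised with a softmax, which is the same normalisation used inside attention layers. Only that normalisation step mirrors what a real model does; the raw scores themselves are invented.

```python
import math

# Toy illustration: invented relevance scores turned into normalized weights
# via a softmax, the same normalisation used in attention layers.
raw_scores = {
    "San Francisco": 4.0,
    "next seven days": 3.2,
    "historical weather data": 2.1,
    "the": -1.5,
}

exp_scores = {token: math.exp(score) for token, score in raw_scores.items()}
total = sum(exp_scores.values())
weights = {token: value / total for token, value in exp_scores.items()}

for token, weight in weights.items():
    print(f"{token:>25s}: {weight:.4f}")  # informative tokens end up with larger weights
```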
The key takeaway here is to aim for the Goldilocks Zone of information sufficiency—enough detail to get a useful, accurate response but not so much that you confuse or dilute the AI’s capabilities. Remember, a prompt is a question or a command, but it’s also a guide, steering the AI in the direction you want it to go.
So, when you’re crafting a prompt, it’s essential to consider not just what you’re asking but how each piece of information will guide the AI to your desired output. This is why ‘Information Sufficiency’ isn’t just about the quantity of information but also about its quality: each piece of information in your prompt should serve a purpose, steering the AI towards the answer you’re looking for without sending it down unnecessary rabbit holes, and each added detail carries its own weight in the final answer.
That’s it for today’s episode of EVO AI. We hope you’ve found it as enlightening as we did putting it together for you.
Host: If you enjoyed this episode and want to hear more about the fascinating world of technology and artificial intelligence, make sure you subscribe to our website evoai.ai and leave a review.
Teaser for the Next Episode
On our next episode, we’ll be diving into the world of Artificial Neural Networks—those mysterious algorithms that make all this AI magic possible. You won’t want to miss it!
Thank you for tuning in and remember: the future is shaped by the questions we dare to ask today.
About The Author

Bogdan Iancu
Bogdan Iancu is a seasoned entrepreneur and strategic leader with over 25 years of experience in diverse industrial and commercial fields. His passion for AI, Machine Learning, and Generative AI is underpinned by a deep understanding of advanced calculus, enabling him to leverage these technologies to drive innovation and growth. As a Non-Executive Director, Bogdan brings a wealth of experience and a unique perspective to the boardroom, contributing to robust strategic decisions. With a proven track record of assisting clients worldwide, Bogdan is committed to harnessing the power of AI to transform businesses and create sustainable growth in the digital age.