In today’s column, I have put together my most-read postings on how to skillfully craft your prompts when making use of generative AI such as ChatGPT, Bard, Gemini, Claude, GPT-4, and other popular large language models (LLMs). These are handy strategies and specific techniques that can make a tremendous difference when using generative AI. If you ever wondered what other people know about prompting that you don’t, perhaps this recap will ensure that you are in the know.
Notably, even if you already are a prompt engineering wizard, you might still find my coverage of state-of-the-art prompting approaches insightful.
I’ll cover a few upfront considerations before we jump into the trees of the forest.
Reasons To Know Prompt Engineering
My golden rule about generative AI is this:
- The use of generative AI can altogether succeed or fail based on the prompt that you enter.
If you provide a prompt that is poorly composed, the odds are that the generative AI will wander all over the map and you won’t get anything of substance related to your inquiry. Similarly, if you put distracting words into your prompt, the odds are that the generative AI will pursue an unintended line of consideration. For example, if you include words that suggest levity, there is a solid chance that the generative AI will seemingly go into a humorous mode and no longer emit serious answers to your questions.
Be direct, be obvious, and avoid distractive wording.
Copious specificity should also be cautiously employed. You see, being painstakingly specific can be off-putting by giving too much information. Amidst all the details, there is a chance that the generative AI will either get lost in the weeds or strike upon a particular word or phrase that causes a wild leap into some tangential realm. I am not saying that you should never use detailed prompts. That’s silly. I am saying that you should use detailed prompts in sensible ways, such as forewarning the generative AI that you are about to include copious details.
You need to compose your prompts in relatively straightforward language and be abundantly clear about what you are asking or what you are telling the generative AI to do.
A wide variety of cheat sheets and training courses on suitable ways to compose and utilize prompts have been rapidly entering the marketplace to try to help people leverage generative AI soundly. In addition, add-ons to generative AI have been devised to aid you when trying to come up with prudent prompts, see my coverage at the link here.
AI Ethics and AI Law also stridently enter into the prompt engineering domain. For example, whatever prompt you opt to compose can directly or inadvertently elicit or foster the potential of generative AI to produce essays and interactions that imbue untoward biases, errors, falsehoods, glitches, and even so-called AI hallucinations (I do not favor the catchphrase of AI hallucinations, though it has admittedly tremendous stickiness in the media; here’s my take on AI hallucinations at the link here).
There is also a marked chance that we will ultimately see lawmakers come to the fore on these matters, possibly devising and putting in place new laws or regulations to try and scope and curtail misuses of generative AI. Regarding prompt engineering, there are likely going to be heated debates over putting boundaries around the kinds of prompts you can use. This might include requiring AI makers to filter and prevent certain presumed inappropriate or unsuitable prompts, a cringe-worthy issue for some that borders on free speech considerations. For my ongoing coverage of these types of AI Ethics and AI Law issues, see the link here and the link here, just to name a few.
All in all, be mindful of how you compose your prompts. By being careful and thoughtful you will hopefully minimize the possibility of wasting your time and effort. There is also the matter of cost. If you are paying to use a generative AI app, the usage is sometimes based on how much computational activity is required to fulfill your prompt request or instruction. Thus, entering prompts that are off-target could cause the generative AI to take excessive computational resources to respond. You end up paying for stuff that either took longer than required or that doesn’t satisfy your request and you are stuck for the bill anyway.
I like to say at my speaking engagements that dealing with prompts and generative AI is like a box of chocolates. You never know exactly what you are going to get when you enter prompts. The generative AI is devised with a probabilistic and statistical underpinning which pretty much guarantees that the output produced will vary each time. In the parlance of the AI field, we say that generative AI is considered non-deterministic.
My point is that, unlike other apps or systems that you might use, you cannot fully predict what will come out of generative AI when inputting a particular prompt. You must remain flexible. You must always be on your toes. Do not fall into the mental laziness of assuming that the generative AI output will always be correct or apt to your query. It won’t be.
Write that down on a handy slip of paper and tape it onto your laptop or desktop screen.
Prompting Strategies And Techniques
I will in a moment be walking you through the top-priority approaches that are considered innovative prompting strategies or highly touted prompting techniques for making accomplished use of generative AI. For each one, I’ll provide a link to my detailed coverage so that you can dig deeper if so desired.
One special or out-of-the-ordinary aspect is that I often provide reasoned speculation as to why particular prompting patterns seem to aid in boosting generative AI to produce better answers. This is the “why” underlying the various practices. I suppose there isn’t a requisite need to know why they work; you can be somewhat satisfied that they do in fact seem to work.
Personally, I strongly suggest that you develop a sturdy mental model of why they work. A proficient mental model is important to understanding when to use the approaches and when not to, since they might not be productive for you.
Another aspect that you might find of value is that I try to showcase tangible examples of prompts that you can use and likewise deconstruct prompts that aren’t going to get you much traction. There are plenty of prompting guides that fail to show you precisely what the recommended prompts should look like, which is exasperating and downright maddening. In addition, sometimes you aren’t shown the types of prompts that won’t be satisfactory. I like to take a look at both sides of the coin.
Since this is devised as a quick-read recap of my extensive coverage of prompt engineering, I urge you to consider taking a look at each of the referenced articles for further details. Here, I am trying to keep things short and sweet. In a sense, this is a taste of what each piece confers. You can relish the fascinating underpinnings and details by reading the designated postings that correspond to the prompting recommendations.
The structure of this recap is straightforward.
First, I list the main topic associated with each concerted prompt engineering practice. I will then give you a rapid summary of the matter. If you have a keen interest in the handiwork, you can go ahead and access the referenced article to see the detailed prompt-specific examples and the allied prompting dialogues that show how to devise your prompts accordingly. I aim to give you enough of an indication to grasp why the prompting topic is vital and allow you to decide whether it is something you want to know more about.
A final remark before I dive into the prompting strategies and techniques.
Some people say that there is no need to learn about the composing of good prompts. The usual rationale for this claim is that generative AI will be enhanced anyway by the AI makers such that your prompts will automatically be adjusted and improved for you. This capacity is at times referred to as adding a “trust layer” that surrounds the generative AI app, see my coverage at the link here.
The vented opinion is that soon there will be promulgated AI advances that can take the flimsiest of prompts and still enable generative AI to figure out what you want to have done. The pressing issue therefore is whether you are wasting your time by learning prompting techniques. It could be that you are on a short-term clock and that in a year or two the skills you honed in prompting will be obsolete.
In my view, though I concur that we will be witnessing AI advances that tend toward helping interpret your prompts, I still believe that knowing prompt engineering is exceedingly worthwhile. First, you can instantly improve your efforts in today’s generative AI; thus, an immediate and valuable reward is found at the get-go. Second, we don’t know how long it will take for the AI advances to emerge and take hold. Those who avoid making prompting improvements of their own volition are going to be waiting on the edge of their seat for something that might be further in the future than is offhandedly proclaimed (a classic case of waiting for Godot).
And, thirdly, I would fervently suggest that learning about prompting has an added benefit that few seem to acknowledge. Knowing more about prompting is a surefire path to knowing more about how generative AI seems to respond. I am asserting that your mental model of the way generative AI works is enriched by studying and using prompting insights. The gist is that this makes you a better user of generative AI and will prepare you for the continuing expansion of where generative AI appears in our lives.
Generative AI is becoming ubiquitous. Period, end of story.
Shouldn’t you therefore seek to know enough about generative AI to protect yourself and be prepared for the onslaught of generative AI apps and systems?
There will be generative AI in nearly all applications that you use or that you are reliant upon. The more that you can think like the machine, the greater the chances you have of successfully contending with the machine. You are in a battle of having to push and prod generative AI to make sure you get what you want. Don’t, by mindless default, let generative AI undercut what you aim to achieve. Knowing solid prompting strategies will mentally arm you to cope with a world filled with generative AI at the turn of every corner.
Comprehensive List Of Prompting Strategies And Techniques
I will describe, one at a time, each of the notable prompting strategies and techniques that I believe are vital and that form a reasonably comprehensive set you should be aware of (new prompting approaches are arising nearly daily, so be on the watch for the latest coverage in my column postings). At the end of each individual description, there is a link provided to delve further into the topic at hand.
Let’s get underway by starting with the often overlooked and misunderstood role of imperfect prompting. We will proceed at a brisk pace through each prompting strategy or technique.
Imperfect Prompting
Here’s perhaps a bit of a surprise for you.
Imperfect prompts can be cleverly useful.
I realize this seems counterintuitive. I just said that you should be composing your prompts in stellar ways. Be direct, be obvious. Yes, I said that.
The thing is, purposely composing imperfect prompts is yet another kind of prompt engineering trick or tip. If you want the generative AI to intentionally go off the rails or see what it might oddly come up with, you can nearly force this to happen by devising prompts that are vague, confusing, roundabout, etc.
Please observe that I said this entails purposely composing imperfect prompts. The gist is that you should use imperfect prompting when you knowingly are doing so. Those who by happenstance fall into imperfect prompts are typically unaware of what they are doing. They end up surprised at responses by the generative AI that seem bizarre or totally lateral to the matter at hand.
You can wield imperfect prompts when the situation warrants doing so. Feel free to compose a prompt that is out to lunch. There are definitive ways to make an imperfect prompt stoke generative AI in particular directions; thus, you can do haphazard imperfect prompts, or you can instead devise systematic imperfect prompts.
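To make this tangible, here is a minimal sketch in Python contrasting a haphazard imperfect prompt with a systematic one; the `ask_llm` helper is a hypothetical stand-in for whatever generative AI app you use, and the wordings are merely illustrative.

```python
# Contrasting haphazard versus systematic imperfect prompts.
# ask_llm is a hypothetical placeholder for your AI app's API call.
def ask_llm(prompt: str) -> str:
    return "<model response would appear here>"

# Haphazard: vague by accident, with no purpose behind the vagueness.
haphazard = "That thing with the stuff, what about it?"

# Systematic: deliberately vague, but steered toward a chosen realm so
# any off-the-rails output at least lands in territory you care about.
systematic = (
    "Riff loosely on the following half-formed notion and take it "
    "somewhere unexpected: cities that remember their inhabitants."
)

print(ask_llm(systematic))
```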
For various examples and further detailed indications about the nature and use of imperfect prompts, see my coverage at the link here.
Persistent Context And Custom Instructions Prompting
Normally, when you start a conversation with generative AI, you are starting from scratch.
There is no contextual fabric surrounding the nature of the conversation. It is as though you have come upon someone that you know nothing about, and they know nothing about you. When that happens in real life, you might consume a lot of energy and effort toward setting a context and making sure that you both are on the same page (I don’t want to take this analogy overly far, since it could venture into anthropomorphizing AI).
The key here is that you don’t necessarily have to start with generative AI at a zero point upon each conversation that you initiate. If desired, you can set up a persistent context. Persistent context is phrasing that suggests you can establish a context that persists, ensuring that the generative AI already has a heads-up on things you believe are important to have established with the AI.
A persistent context is often undertaken by using custom instructions. Here’s the deal. You prepare a prompt that contains things you want the generative AI to be up-to-speed on. The prompt is stored as a custom instruction. You indicate in the generative AI app that the custom instruction is to be processed whenever you start a new conversation.
Ergo, each time you start a new dialogue with the generative AI, the prompt that you had previously stored is read and processed by the generative AI as though you were entering it live at the time of beginning the new dialogue. This saves you the angst and agony of having to repeatedly enter such a prompt. It will automatically be invoked and processed on your behalf.
What might this custom instruction consist of?
Well, the sky is the limit.
You might for example want the generative AI to be aware of salient aspects about yourself. Some people set up a custom instruction that describes who they are. They want the generative AI to take into account their personal facets and hopefully personalize emitted responses accordingly. Others think this is eerie and don’t want the generative AI to be leveraging personal details such as their entered age, gender, personal outlook, and other considered highly private nuances.
A more generic angle would be to set up custom instructions about the types of responses you want from generative AI. For example, you might be the type of person who only wants succinct responses. You could put into a set of custom instructions a stipulation that the generative AI is to limit any answers to no more than three paragraphs in size, or maybe indicate the number of words allowed. In the instructions you might also state that you want only serious replies, you want the generative AI to always be polite, etc.
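As an illustrative sketch, here is what such a custom instruction might look like, expressed in Python; the storage-and-replay mechanics vary by AI app, so treat the `start_new_conversation` helper as a hypothetical stand-in for what the app does on your behalf.

```python
# A minimal sketch of a persistent-context custom instruction.
custom_instruction = """
For every conversation we have:
- Limit answers to no more than three paragraphs.
- Keep the tone serious and polite; no jokes unless I ask for them.
- I am a financial analyst, so assume basic finance literacy.
"""

# Conceptually, the AI app silently replays the stored instruction
# ahead of each new dialogue that you start.
def start_new_conversation(first_prompt: str) -> str:
    return custom_instruction + "\n" + first_prompt

print(start_new_conversation("Explain bond duration."))
```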
This is a handy overall technique that I would believe only a small percentage of generative AI users utilize, but that doesn’t mean it isn’t useful. It is useful. If you are someone who avidly frequently uses generative AI, the use of a persistent context and custom instructions can be a lifesaver in terms of reducing the tedious aspects of making sure the AI is ready for your use in the ways you want.
For various examples and further detailed indications about the nature and use of persistent context and custom instructions, see my coverage at the link here.
Multi-Persona Prompting
Speaking of features or functions of generative AI that seem to be less used but that are worthy of attention, let’s talk about multi-persona prompting. As the name suggests, you can get the AI to take on one or more personas, which is all a pretense or a make-believe setting that you can establish.
Notably, you can use generative AI in a role-playing manner. You might decide to tell the AI to pretend to be Abraham Lincoln. The generative AI will attempt to interact with you as Honest Abe might have done. This is all a matter of fakery. You must keep your own head straight that the entire dialogue is a made-up version of Lincoln. Don’t allow yourself to somehow start believing that the AI has embodied the soul of Lincoln.
Why would someone use this capability?
Imagine that a student in school is studying the life and times of President Lincoln. The student could ask the generative AI for details about his life. I doubt that will make his amazing accomplishments seem as impressive as interacting with Lincoln would. By telling the generative AI to pretend to be Lincoln, the student gets a chance to seemingly gauge what he was like. This might be memorable and eye-opening.
Multi-personas can come into play either by doing various personas from time to time, such as first doing Lincoln and perhaps on another day doing George Washington, or by using more than one persona at a time. Suppose that Lincoln met with Gandhi. What would they discuss? How would they carry on a conversation? You can tell the generative AI to try doing so and then see what comes out of the pairing.
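Here are illustrative prompt wordings for a single persona and for a paired multi-persona dialogue, shown as Python strings; the phrasings are examples of my own devising, not a required syntax.

```python
# Illustrative single- and multi-persona role-play prompts.
single_persona = (
    "Pretend that you are Abraham Lincoln. Respond to my questions in "
    "his voice and from his historical vantage point. Stay in character."
)

multi_persona = (
    "Simulate a conversation between Abraham Lincoln and Mahatma Gandhi "
    "on the topic of civil disobedience. Alternate speakers, label each "
    "turn, and keep each remark under 100 words."
)
print(multi_persona)
```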
Make sure to keep your expectations restrained. The generative AI might do a lousy job of the pretense. There is also a danger that the AI will falsify facts or make things up. I say this is a danger because a student might naively believe whatever the pretense says. Anyone using multi-personas should do so with a healthy grain of salt.
For various examples and further detailed indications about the nature and use of multi-persona prompting, see my coverage at the link here.
Chain-of-Thought (CoT) Prompting
The emergence of Chain-of-Thought (CoT) prompting has been heralded as one of the most important prompting techniques that everyone should use. Headlines have been blaring about this approach for the longest time, emphasizing the need to incorporate it into your prompt engineering repertoire.
You definitely need to know this one.
The concept is simple. When you enter a prompt on just about any topic, make sure to also mention that you want the generative AI to work on the matter in a stepwise manner. This will get the AI to indicate, step by step, what it is doing. In turn, research studies suggest that you will get a better answer or at least a more complete answer.
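A minimal sketch of the difference, with the stepwise instruction appended to an otherwise ordinary prompt (the wording is illustrative; many variations work):

```python
# The same question, plain versus with a chain-of-thought instruction.
question = "A store sells pens at 3 for $4. How much do 18 pens cost?"

plain_prompt = question
cot_prompt = (
    question
    + " Work through this step by step, showing each step of your "
      "reasoning before giving the final answer."
)
print(cot_prompt)
```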
You might liken this to humans and human thought, though please don’t go overboard with the comparison. We often ask a person to state their chain of reasoning or chain of thoughts so that we can gauge whether they have mindfully analyzed the matter. Speaking aloud about their thought processes can reveal deficiencies in what they are thinking or intending to do. Furthermore, the line of thinking can be instructive as to how something works or what the person is trying to convey.
In the case of generative AI, some have balked at the verbiage of chain-of-thought as overstepping what the AI is doing. We are ascribing the powers of thinking by bestowing the word “thought” on this depiction of the AI. Be forewarned that some who don’t like referring to this as chain-of-thought are vehemently insistent that we should just label this as stepwise computational processing and cut the word “thought” out of the matter.
The bottom line is that telling generative AI to proceed in a stepwise fashion does seem to help.
Sometimes it might not make a difference, but a lot of the time it does. The added good news is that asking for the stepwise treatment doesn’t seem to have a negative impact per se. The downsides are fortunately minimal, such as the likelihood that you will consume more computing cycles and, if you are paying for the use of the generative AI, incur a heightened cost with each such usage (arguably, this added cost is worth it if you are getting better answers).
For various examples and further detailed indications about the nature and use of Chain-of-Thought (CoT) prompting, see my coverage at the link here.
Retrieval-Augmented Generation (RAG) Prompting
An area of increasing interest and popularity in prompting consists of retrieval-augmented generation (RAG). That is one of those haughty kinds of acronyms floating around these days. I’ve typically depicted RAG by simply stating that it consists of in-context learning that is accompanied by a vector database. You are welcome to use the RAG acronym since it is faster to say and sounds abundantly technologically snazzy.
It works this way.
Suppose you have a specialized topic that you want generic generative AI to encompass. Maybe you want generative AI to be data-aware of how stamp collecting works. The usual off-the-shelf generative AI might not have been initially data-trained on stamp collecting to any notable depth.
You could collect together text data or similar information that describes stamp collecting in a relatively deep manner. You then have the generative AI do some pre-processing by trying to computationally pattern match on this newly introduced data. You have that specialized database made available so that you can use it when needed (the type of database is said to be a vector database).
The generative AI then uses in-context learning, where in this case the context pertains to stamp collecting, to augment what the AI has initially been data-trained on. When you use the generative AI and ask a question about stamp collecting, the AI will augment its initial data training by going out to the pre-processed content and using that as part of seeking to answer whatever question you have entered. I assume you can readily discern why this is known as retrieval-augmented generation.
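For the technically curious, here is a deliberately toy Python sketch of the RAG flow; the `embed` function and the list-based “vector database” are stand-ins for a real embedding model and vector database, included only to show the retrieve-then-augment sequence.

```python
# Toy retrieval-augmented generation: retrieve matching content from a
# specialized store, then fold it into the prompt as added context.
def embed(text: str) -> list[float]:
    # Stand-in: real systems call an embedding model here.
    return [float(len(text))]

stamp_docs = [
    "Philately is the study of postage stamps and postal history.",
    "Stamp condition grades include mint, used, and damaged.",
]
index = [(embed(d), d) for d in stamp_docs]  # the "vector database"

def retrieve(query: str, k: int = 1) -> list[str]:
    q = embed(query)
    ranked = sorted(index, key=lambda item: abs(item[0][0] - q[0]))
    return [doc for _, doc in ranked[:k]]

question = "What does the term philately mean?"
context = "\n".join(retrieve(question))
augmented_prompt = (
    f"Using this reference material:\n{context}\n\nAnswer: {question}"
)
print(augmented_prompt)
```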
I have predicted that we will see a great deal of growth in the adoption of RAG. The reason is that you can somewhat readily expand what generic generative AI has been data-trained on. Doing so in this manner is easier than starting the generative AI anew or building a new generative AI that incorporates the specialized aspects at the get-go. I’ve discussed how this can readily be used in the medical field, legal field, and other domains that want generative AI tailored to, or more in-depth in, a respective field.
For various examples and further detailed indications about the nature and use of retrieval-augmented generation (RAG), see my coverage at the link here.
Chain-of-Thought Factored Decomposition Prompting
I already discussed chain-of-thought prompting, but let’s see if we can upsize that juicy topic.
You can supplement chain-of-thought prompting with an additional instruction that tells the generative AI to produce a series of questions and answers when doing the chain-of-thought generation. This is a simple but potentially powerful punch. Your goal is to nudge or prod the generative AI to generate a series of sub-questions and sub-answers.
Why so? You are guiding the generative AI toward how to potentially improve upon the chain-of-thought computational processing effort. Whereas the notion of “let’s think step by step” is enough to lightly spark the generative AI into a chain-of-thought mode, you are leaving the details of how to do so up to the generative AI. You are being exceedingly sparse in your instruction. Providing added precision could be a keen boost to the already anticipated benefits.
You instruct the generative AI via an added prompt describing how to do a decomposition. The chances are this might improve the chain-of-thought results. Please realize there are important tradeoffs such that sometimes this helps enhance the chain-of-thought, while sometimes it might not. Like most things in life, you must use the added technique in the right way and at the right time.
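An illustrative prompt along these lines, expressed as a Python string (the exact wording is an assumption on my part, one of many reasonable phrasings):

```python
# A factored-decomposition prompt: chain-of-thought plus an explicit
# instruction to break the problem into sub-questions and sub-answers.
problem = "Should our small bakery start offering delivery?"

factored_prompt = (
    problem
    + " Think step by step. As you do, break the problem into a "
      "numbered series of sub-questions, answer each sub-question in "
      "turn, and then combine those sub-answers into a final "
      "recommendation."
)
print(factored_prompt)
```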
For various examples and further detailed indications about the nature and use of chain-of-thought leveraging factored decomposition, see my coverage at the link here.
Skeleton-of-Thought (SoT) Prompting
Think about all the times that you started to write something by first making an outline or a skeleton about what you wanted to say. An outline or skeleton can be extremely useful. You can decide what to include and the order of things. Once you’ve got the structure figured out, you can then in an orderly fashion fill in the outline.
The same idea can be applied to the use of generative AI.
Via a prompt, you tell the generative AI to first produce an outline or skeleton for whatever topic or question you have at center stage, employing a skeleton-of-thought (SoT) method to do so. Voila, you can then inspect the skeleton to see if the generative AI is on-target or off-target of your interests.
Assuming that the generative AI is on target, you can tell it to expand the outline and thus get the rest of your verbiage. If the generative AI is off-target, you can instruct it to change direction or maybe start anew if things are fouled up.
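Here is a minimal two-step sketch of the skeleton-then-expand flow; `ask_llm` is a hypothetical placeholder for your AI app’s API call.

```python
# Skeleton-of-thought: request an outline first, inspect it, then
# expand. ask_llm is a hypothetical stand-in for your AI app's API.
def ask_llm(prompt: str) -> str:
    return "<model response would appear here>"

topic = "an essay on the history of transatlantic shipping"

# Step 1: request only the skeleton, then inspect it yourself.
skeleton = ask_llm(f"Produce only a brief outline (no prose) for {topic}.")
print(skeleton)

# Step 2: if the skeleton is on target, ask for the expansion;
# if not, redirect before any full essay gets generated.
essay = ask_llm(f"Expand this outline into the full essay:\n{skeleton}")
```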
Another plus to this skeleton issuance is that you’ll presumably avoid those costly wrong-topic essays or narratives that the generative AI might inadvertently produce for you. You will nip things in the bud. Admittedly, there is the cost of the outline being generated and then a second cost to do the expansion, but the odds are that this will be roughly the same as having requested the entire essay at the get-go. The primary savings will come from averting the generation of content that you didn’t intend to get.
There is a potential hidden added plus to using the skeleton-of-thought approach. Research so far tentatively suggests that the production of an outline or skeleton will prime the pump for the generative AI. Once the generative AI has generated the skeleton, it seems to be likelier to stay on course and produce the rest of the answer or essay as befits the now-produced skeleton.
I’m not asserting that SoT will always be meritorious, which can similarly be said about the use of CoT. They both, on balance, seem to be quite helpful. Whether this is always the case is certainly debatable. You will need to judge based on your own efforts in using CoT and SoT.
For various examples and further detailed indications about the nature and use of the skeleton-of-thought approach for prompt engineering, see my coverage at the link here.
Show-Me Versus Tell-Me Prompting
Here’s a pervasive zillion-dollar question about the crafting of prompts.
Should you enter a prompt that demonstrates to the generative AI an indication of what you want (show it), or should you enter a prompt that gives explicit instructions delineating what you want (tell it)?
That is the ongoing conundrum known as the show-me versus tell-me enigma in prompt engineering.
I am an advocate of using the right style for the appropriate circumstances. It is the Goldilocks viewpoint. You don’t want to select a choice that is either too hot or too cold. You want whichever one is best for the situation at hand. Meanwhile, keep the other style in your back pocket and use it in conjunction as warranted.
Also, don’t fall for a false dichotomy on this. You can use one approach, see how things go, and if need be, then try the other one. They can even be combined into a single prompt so that the generative AI gets both at the same time.
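Illustrative wordings of the two styles, plus a combined form, shown as Python strings (the phrasings are examples only):

```python
# Tell-me: explicit instructions delineating what you want.
tell_me = (
    "Summarize each customer review in one sentence that names the "
    "product and the customer's overall sentiment."
)

# Show-me: demonstrate the desired output via an example.
show_me = (
    "Summarize each customer review like this example:\n"
    "Review: 'The kettle boils fast but the lid sticks.'\n"
    "Summary: Kettle: mostly positive, minor lid complaint.\n"
)

# Combined: the generative AI gets both at the same time.
combined = show_me + "\nAlso follow this rule: " + tell_me
print(combined)
```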
Some people form a habit of using only one of the two approaches. You might be familiar with the old saying about possessing only one tool, such as a hammer. If the only tool you have is a hammer, the rest of the world looks like a nail. There will be a tendency to use the hammer even when doing so is ineffective or counterproductive. Having familiarity with multiple tools is handy, and knowing when to use each such tool is handier still.
For various examples and further detailed indications about the nature and use of the show-me versus tell-me prompting strategy, see my coverage at the link here.
Mega-Personas Prompting
I previously discussed the use of multi-persona prompting.
Well, as you know, go big or go home. A prompting strategy known as mega-personas takes multi-persona prompting to a much larger degree. You ask the generative AI to take on a pretense of dozens, hundreds, or maybe thousands of pretend personas.
The primary use would be to undertake a survey or perform some kind of group-oriented analysis when trying to assess something or figure something out. For example, suppose you wanted to survey a thousand lawyers and ask them whether they like their job and whether they would pursue the legal field again if they had things to do over. You could try to wrangle up a thousand lawyers and ask them those pointed questions.
Finding a thousand lawyers who have the time and willingness to respond to your survey is probably going to be problematic. They are busy. They charge by the billable hour. They don’t have the luxury of sitting around and answering polling questions. Also, consider how hard it might be to reach them to begin with. Do you try calling them on the phone? Maybe send them emails? Perhaps try to reach them at online forums designated for attorneys? Best of luck in that unwieldy endeavor.
Envision that instead, you opt to have generative AI create a thousand pretend lawyers and have the AI attempt to answer your survey questions for you. Voila, with just a few carefully worded prompts, you can get your entire survey fully completed. No hassle. No logistics nightmare. Easy-peasy.
There are numerous tradeoffs to using this technique. You will likely need to steer the generative AI toward differentiating the mega-personas, otherwise they will essentially be identical clones. Another concern is whether the generative AI can adequately pretend to distinctly simulate so many personas or might be computationally shortcutting things. Etc.
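An illustrative mega-personas survey prompt, including a differentiation instruction of the kind just mentioned (the wording is mine, not a canonical formula):

```python
# A mega-personas survey prompt with an explicit differentiation
# instruction so the personas are not identical clones.
survey_prompt = (
    "Simulate 1,000 distinct lawyer personas, varied by practice area, "
    "years of experience, firm size, and region. Ask each persona: "
    "(1) Do you like your job? (2) Would you choose law again? "
    "Report the aggregate results as percentages, plus three "
    "representative quotes. Note clearly that these are simulated "
    "respondents, not real survey data."
)
print(survey_prompt)
```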
For various examples and further detailed indications about the nature and use of mega-personas prompting, see my coverage at the link here.
Certainty And Uncertainty Prompting
Certainty and uncertainty play a big role in life.
It is said that the only true certainties consist of death and taxes. Michael Crichton, the famous writer, said that he was certain there was too much certainty in the world. Legendary poet Robert Burns indicated that there is no such uncertainty as a sure thing.
One issue that few users of generative AI realize exists until taking a reflective moment to ponder it is that most generative AI apps tend to exhibit an aura of immense certainty. You enter your prompt and typically get a generated essay or interactive dialogue that portrays the generative AI as nearly all-knowing. The sense that you get is that the generative AI is altogether confident in what it has to say. We subliminally fall into the mental trap of assuming that the answers and responses from generative AI are correct, apt, and above reproach.
Generative AI typically does not include the signals and wording that would tip you toward thinking of how certain or uncertain a given response is. To clarify, I am not saying that generative AI will never provide such indications. It will do so depending upon various circumstances, including and especially the nature of the prompt that you have entered.
If you explicitly indicate in your prompt that you want the generative AI to emit a certainty or uncertainty qualification, then you will almost certainly get such an indication. On the other hand, if your prompt only tangentially implies the need for an indication of certainty or uncertainty, you might get an output from the AI app that mentions the certainty considerations, or you might not.
As a bonus, and this is a mind bender, the very act of asking or telling the generative AI to include a certainty or uncertainty indication will often spur the generative AI to be less off-the-cuff and produce more well-devised results.
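An illustrative prompt that explicitly requests certainty qualifiers (the wording is an example, not a required incantation):

```python
# A prompt that asks for explicit certainty and uncertainty signals.
prompt = (
    "What were the main causes of the 1929 stock market crash? For each "
    "cause you list, state how certain you are (high, medium, or low) "
    "and flag any point that historians still dispute."
)
print(prompt)
```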
For various examples and further detailed indications about the nature and use of the hidden role of certainty and uncertainty when prompting for generative AI, see my coverage at the link here.
Vagueness Prompting
I earlier discussed the use of imperfect prompts.
A particular kind of imperfect prompt would consist of an exceedingly vague prompt. On the one hand, vagueness might be a bad thing. The generative AI might not be able to figure out what you want the AI to do. The other side of the coin is that the vagueness might prod the generative AI toward giving you a response that is seemingly creative or beyond what you had in mind.
John Tukey, the famous mathematician, said this uplifting remark about vagueness: “Far better an approximate answer to the right question, which is often vague, than the exact answer to the wrong question, which can always be made precise.” Keep in mind too that one of the most powerful elements of being vague is that it can be a boon to creativity, as well stated by the renowned painter Pablo Picasso: “You have an idea of what you are going to do, but it should be a vague idea.”
Let’s not rigidly bash vagueness; instead, let’s see the fuller picture of all that it portends, opening our eyes wide to both the bad and the good at hand.
For various examples and further detailed indications about the nature and use of vagueness while prompting, see my coverage at the link here.
Catalogs Or Frameworks For Prompting
A prompt-oriented framework or catalog attempts to categorize and present to you the cornerstone ways to craft and utilize prompts.
You can use this for training purposes when learning about the different kinds of prompts and what they achieve. You can use this too for a cheat sheet of sorts, reminding you of the range of prompts that you can use while engrossed in an intense generative AI conversation. It is all too easy to lose your way while using generative AI. Having a handy-dandy framework or catalog can jog your memory and awaken you to being more systematic.
To clarify, I am not saying that a framework or catalog is a silver bullet. You can still compose prompts that fall flat. You can still get exasperated while using generative AI. Do not overinflate what a framework or catalog can instill. All in all, the benefit is that you’ll shift from the zany erratic zone to the more systematic zone. A serious user of generative AI who plans on long-term ongoing use will be grateful for having taken the upfront time to delve into and make use of a suitable framework or catalog underlying prompt engineering.
For various examples and further detailed indications about the nature and use of prompt engineering frameworks or catalogs, see my coverage at the link here.
Flipped Interaction Prompting
Flipping the script.
This overall societal catchphrase refers to turning things on their head and doing nearly the opposite of what is normally done. Up becomes down, down becomes up. There can be lots of good reasons to do this. Maybe the approach will reveal new facets and spark a fresh viewpoint on the world. It could also be something that you do on a lark, just for kicks.
The beauty of flipping the script is that it can have profound outcomes and tremendous possibilities. It all depends on what you are trying to accomplish. Plus, knowing how to best carry out a flip-the-script endeavor is a vital consideration too. You can easily mess up and get nothing in return.
A clever prompting strategy and technique consists of having the generative AI engage in a mode known as flipped interaction. Here’s the deal. You flip the script, as it were, getting generative AI to ask you questions rather than having you ask generative AI your questions.
Here are the six major reasons that I expound upon when conducting workshops on savvy use of the flipped interaction mode (an illustrative prompt follows the list):
- (1) Inform or data-train the generative AI.
- (2) Discover what kinds of questions arise in a given context.
- (3) Learn from the very act of being questioned by the AI.
- (4) Allow yourself intentionally to be tested and possibly scored.
- (5) Do this as a game or maybe just for plain fun.
- (6) Other bona fide reasons.
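Here is the promised illustrative flipped-interaction prompt; the scenario and wording are examples only.

```python
# Flipping the script: the generative AI asks you the questions.
flipped_prompt = (
    "I want to plan a two-week trip to Japan. Instead of answering, "
    "interview me: ask me one question at a time about my budget, "
    "interests, and constraints. After ten questions, produce an "
    "itinerary based on my answers."
)
print(flipped_prompt)
```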
For various examples and further detailed indications about the nature and use of flipped interaction, see my coverage at the link here.
Self-Reflection Prompting
Aristotle famously said that knowing yourself is the beginning of all wisdom.
The notion that self-reflection can lead to self-improvement is certainly longstanding, typified best by the all-time classic saying “know thyself.” Some would suggest that knowing yourself encompasses a wide variety of possibilities. There is knowing what you know and the knowledge that you embody. Another possibility is knowing your limits. Yet another is knowing your faults. And so on.
In modern times, we seem to have a resurgence of these precepts. There are online classes and social media clamors that urge you to learn how to do self-reflection, self-observation, exercise reflective awareness, undertake insightful introspection, perform self-assessment, etc. Each day you undoubtedly encounter someone or something telling you to look inward and proffering stout promises that doing so will produce great personal growth.
Interestingly and importantly, this same clarion call has come to generative AI.
You can enter a prompt into generative AI that tells the AI app to essentially be (in a manner of speaking) self-reflective by having the AI double-check whatever generative result it has pending or that it has recently produced. The AI will revisit whatever the internal mathematical and computational pattern matching is or has done, trying to assess whether other alternatives exist and often doing a comparison to subsequently derived alternatives.
There are two distinct considerations at play here (an illustrative prompt follows the list):
- (1) AI self-reflection. Generative AI can be prompted to do a double-check that we will refer to as having the AI be self-reflective (which is computationally oriented, and we won’t think of this as akin to sentience).
- (2) AI self-improvement. Generative AI can be prompted to do a double-check and subsequently adjust or update its internal structures as a result of the double-check, which we will refer to as AI self-improving (which is computationally oriented, and we won’t think of this as akin to sentience).
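Here is the promised illustrative self-reflection prompt, entered as a follow-up after the generative AI has produced an initial answer; the wording is one of many reasonable phrasings.

```python
# A self-reflection follow-up prompt to double-check a pending answer.
reflection_prompt = (
    "Double-check the answer you just gave. Re-derive it a second way, "
    "list any assumptions you made, identify at least one plausible "
    "alternative answer, and state whether you now stand by, revise, "
    "or retract your original response."
)
print(reflection_prompt)
```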
For various examples and further detailed indications about the nature and use of AI self-reflection and AI self-improvement for prompting purposes, see my coverage at the link here.
Add-On Prompting
You can come up with prompts on your own and do so entirely out of thin air. Another approach consists of using a special add-on that plugs into your generative AI app and aids in either producing prompts or adjusting prompts. The add-on can conjure up prompts for you or potentially take your prompt and augment it.
Thus, there are two primary considerations at play:
- (1) Prompt Wording. The wording that you use in your prompt will demonstrably affect whether the generative AI will be on-target responsive or perhaps exasperatingly unresponsive to your requests and interactions.
- (2) Prompt Add-On. The use of AI add-ons and other automation as part of the prompting effort can also substantially and beneficially affect the generative AI responsiveness and either construct a prompt or adjust a given prompt.
Some generative AI apps provide a facility for selecting and using add-ons. But some do not. You’ll need to explore whether your preferred generative AI allows for this type of usage.
For various examples and further detailed indications about the nature and use of add-ons for prompting, see my coverage at the link here.
Conversational Prompting
Force of habit is causing many people to undershoot when it comes to using generative AI.
Here’s what I mean.
Most of us are accustomed to using conversational AI such as Alexa or Siri. Those natural language processing (NLP) systems are rather crude in fluency compared with modern generative AI. Indeed, those old-fashioned AI systems are so exasperating that you likely have decided to use very shortened commands and try not to carry on an actual dialogue. Doing a dialogue is frustrating since those NLP systems will get confused or go off-topic.
The problem from a generative AI perspective is that many people apply the same outdated mindset when using generative AI. They enter one-word prompts. After getting a response from the generative AI, they exit the AI app. This is all force of habit.
A key means to overcome this consists of adjusting your mindset to willingly and intentionally carry on a conversation with generative AI. Use prompts that are fluent. Don’t shortchange your prompting. When you get a response from generative AI, challenge the response or in some fashion make the response into a dialogue with the generative AI.
Get rid of the one-and-done mentality.
Be a fluent and interactive prompter.
For various examples and further detailed indications about the nature and use of conversational prompting, see my coverage at the link here.
Prompt-To-Code Prompting
A nifty feature of most generative AI apps is that they can produce software code for you.
I realize that the vast proportion of generative AI users are likely not into software development and probably don’t do any coding. As such, the capability to produce code via generative AI would seem to be of interest only to a small segment of generative AI users.
Aha, the light bulb goes on, namely that those who aren’t into coding can now potentially become amateur software developers by using generative AI to do their coding for them. You can get generative AI to produce code. You can even get the generative AI to do other programming tasks such as debugging the code.
Not many people are using this feature right now. I have predicted that as the maturity of using generative AI gains steam, we will have a lot more non-programmers who will decide to up the ante by using generative AI to develop software for them. This requires knowing what kinds of prompts to use. There is a lot of finesse involved and it isn’t the easiest thing to pull off.
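An illustrative prompt-to-code request, worded so that a non-programmer gets not just code but the context needed to use it (the file name and column names are hypothetical):

```python
# A prompt-to-code request aimed at a beginner-friendly deliverable.
code_request = (
    "Write a small Python script that reads expenses.csv (columns: "
    "date, category, amount) and prints the total spent per category. "
    "Include comments explaining each part, tell me how to run it, and "
    "list any errors a beginner might hit."
)
print(code_request)
```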
For various examples and further detailed indications about the nature and use of prompting to produce programming code, see my coverage at the link here.
Target-Your-Response (TAYOR) Prompting
There is a famous expression that gracefully says this: “Onward still he goes, Yet ne’er looks forward further than his nose” (per legendary English poet Alexander Pope, 1734, Essay on Man). We know this today as the generally expressed notion that sometimes you get stuck and cannot seem to look any further than your nose.
This is easy to do. You are at times involved deeply in something and are focused on the here and now. Immersed in those deep thoughts, you might be preoccupied mentally and unable to look a step ahead. It happens to all of us.
When using generative AI, you can fall readily into the same mental trap. Here’s what I mean. You are often so focused on composing your prompt that you fail to anticipate what will happen next. The output generated by generative AI is given little advance thought and we all tend to react to whatever output we see. Upon observing the generated output, and only at that juncture, might you be stirred into thinking that perhaps the output should be given some other spin or angle.
Welcome to the realm of target-your-response (TAYOR), a prompt engineering technique that gets you to stay on your toes and think ahead about what the generated AI response is going to look like.
If you are cognizant of anticipating the nature of your desired output, you can say upfront what you want when you enter your prompt. All you need to do is put a bit of mental effort into thinking ahead and then merely specify your desired output accordingly in a single prompt. This is not just about formatting. There is a plethora of facets that come into play.
You think about what the output or generated response ought to look like. You then mention this in your prompt. Your prompt then contains two elements. One element is the question or problem that you want the AI to solve. The other element that is blended into your prompt consists of explaining what you want the response to be like.
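A minimal TAYOR sketch showing the two elements blended into one prompt; the question and the response specification are illustrative.

```python
# Element 1: the question or problem you want solved.
question = "Why did the Roman Empire in the West collapse?"

# Element 2: an upfront description of what the response should look like.
response_spec = (
    " Shape your answer for a high-school audience: at most 400 words, "
    "organized as five causes ranked by importance, each with one "
    "concrete historical example, ending with a two-sentence takeaway."
)

print(question + response_spec)
```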
For various examples and further detailed indications about the nature and use of TAYOR or target-your-response prompting, see my coverage at the link here.
Macros And End-Goal Prompting
I’ll cover two topics here. The first is about the use of macros. The second topic is about end-goal planning for prompting purposes.
First, think about your use of macros in ordinary spreadsheets. You might find yourself routinely doing the same action over and over, such as copying a spreadsheet cell and modifying it before you paste it into another part of the sheet. Rather than always laboriously performing that action, you might craft a macro that semi-automates the spreadsheet task at hand. You can thereafter merely invoke the macro and the spreadsheet activity will be run via the stored macro.
Let’s use that same concept when composing prompts in generative AI.
Suppose you sometimes opt to have generative AI interact as though it is the beloved character Yoda from Star Wars. You might initially devise a prompt that tells generative AI to pretend that it is Yoda and respond to you henceforth in a Yoda-like manner. This persona-establishing prompt could be several sentences in length. You might need to provide a somewhat detailed explanation about the types of lingo Yoda would use and how far you want the generative AI to go when responding in that preferred tone and style.
Each time that you are using generative AI and want to invoke the Yoda persona, you would either have to laboriously retype that depiction or maybe store it in a file and do a copy-and-paste into the prompt window of the AI app. Quite tiring. Instead, you could potentially create a macro that contained the same set of instructions and merely invoke the macro. The macro would feed that prompt silently into the generative AI and get the AI pattern-matching into the contextual setting that you want.
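A minimal sketch of the macro idea in Python; the `ask_llm` helper and the macro dictionary are hypothetical stand-ins, since the actual mechanism depends on your AI app or add-on.

```python
# Stored prompt "macros" invoked by name, so the lengthy persona text
# need not be retyped. ask_llm is a hypothetical API placeholder.
def ask_llm(prompt: str) -> str:
    return "<model response would appear here>"

MACROS = {
    "yoda": (
        "Pretend you are Yoda from Star Wars. Invert your syntax as he "
        "does, speak in short aphorisms, and stay in character until I "
        "say 'end persona'."
    ),
}

def run_macro(name: str, user_prompt: str) -> str:
    # Silently feed the stored macro text ahead of the live prompt.
    return ask_llm(MACROS[name] + "\n" + user_prompt)

print(run_macro("yoda", "What should I focus on this week?"))
```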
That’s the underlying notion of devising macros for your use of generative AI.
The second topic is referred to as end-goal planning for prompting.
Bill Copeland, American poet and esteemed historian, proffered this cautionary bit of wisdom about life overall: “The trouble with not having a goal is that you can spend your life running up and down the field and never score.”
With the technique known as end-goal planning for prompting, you consider these crucial questions:
- What do you hope to accomplish during your interactive dialogue with generative AI?
- Will you be able to discern that you have arrived at an endpoint that has delivered whatever you wanted the generative AI to be able to garner for you?
- Do you have specific goals articulated that are tangible enough to know when you’ve reached a satisfying conclusion?
For various examples and further detailed indications about the nature and use of prompt macros and also end-goal planning, see my coverage at the link here.
Tree-of-Thoughts (ToT) Prompting
Trees, you’ve got to love them.
You undoubtedly have heard of the tree of knowledge and the symbolism thereof. We also speak of people who if they grow up suitably will be stout and stand tall like a resplendent tree. Joyce Kilmer, the famed poet, notably made this remark comparing poems and trees: “I think that I shall never see a poem lovely as a tree.”
Turns out that trees or at least the conceptualization of trees are an important underpinning for prompt engineering and generative AI.
We can use a tree-of-thoughts (ToT) prompting approach in generative AI.
Here’s how.
You can ask generative AI a question or try to get it to solve a problem. In addition, you can tell the AI app to pursue multiple avenues (i.e., so-called “thoughts”) when doing so. On top of that, you can get the AI app to then use those multiple avenues to figure out which one is likely the best answer. The aim is to get generative AI to be more thorough and to achieve a better answer or response.
Various prompts can be used to invoke a tree-of-thoughts approach. The most common consists of making use of multi-persona prompting and adding some amplification of what you want the generative AI to do.
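An illustrative tree-of-thoughts prompt built atop multi-persona prompting, per that amplification approach; the wording is an example, not the only formulation.

```python
# Multiple "thought" avenues via simulated experts, pruned step by step.
tot_prompt = (
    "Three different experts will solve this problem. Each expert "
    "writes one step of their thinking, then all three compare notes. "
    "Any expert who realizes their branch is wrong drops out. Continue "
    "step by step until the remaining experts agree on the best answer. "
    "Problem: our website traffic doubled but sales fell 10%. Why?"
)
print(tot_prompt)
```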
For various examples and further detailed indications about the nature and use of ToT or tree-of-thoughts prompting, see my coverage at the link here.
Trust Layers For Prompting
Let’s examine the use of trust layers for generative AI.
This has to do with building and fielding elements associated with generative AI that serve as a trust-boosting layer outside of the generative AI. What we might do is attempt to surround generative AI with mechanisms that can help prod it toward being trustworthy and, failing that, we can at least have those same mechanisms seek to ascertain when trustworthiness is potentially being forsaken or undercut.
I liken this to putting protection around a black box. Suppose you have a black box that takes inputs and produces outputs. Assume that you have limited ability to alter the internal machinations of the black box. You at least have direct access to the inputs, and likewise, you have direct access to the outputs.
Therefore, you can arm yourself by trying to purposefully devise inputs that will do the best for you, such that they will hopefully get good results out of the black box. Once you get the outputs from the black box, you once again need to be purposefully determined to scrutinize the outputs so that if the black box has gone awry you can detect this has occurred (possibly making corrections on the fly to the outputs).
Your sense of trust toward the black box is bolstered by the external surrounding protective components. The aim is that the carefully composed inputs will steer the black box away from faltering. In addition, no matter what the black box does, the additional aim is to treat the outputs from the black box as intrinsically suspect and in need of a close double-check.
If the maker of the black box can meanwhile also be tuning or advancing the black box to be less untrustworthy, we construe that as icing on the cake. Nonetheless, we will still maintain our external trust layer as a means of protecting us from things going astray.
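For the technically minded, here is a deliberately toy Python sketch of a trust layer wrapping the black box; the blocked-terms policy, the prompt adjustment, and the `ask_llm` helper are all hypothetical placeholders for far more elaborate real-world mechanisms.

```python
# A trust layer: screen and adjust the input before it reaches the AI,
# and screen the output before the user sees it.
def ask_llm(prompt: str) -> str:
    return "<model response would appear here>"

BLOCKED_TERMS = ["password", "ssn"]  # illustrative policy only

def trust_layer(user_prompt: str) -> str:
    # Inbound screen: refuse or rewrite risky prompts.
    lowered = user_prompt.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return "[blocked by trust layer before reaching the AI]"
    # The layer may also adjust your prompt before passing it along.
    adjusted = user_prompt + " Cite sources and flag any uncertainty."
    output = ask_llm(adjusted)
    # Outbound screen: a real layer would run verification checks here.
    return output + "\n[trust layer: output screened before display]"

print(trust_layer("Summarize our Q3 results."))
```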
You can expect that many generative AI apps in corporations and governments will undoubtedly be adopting trust layers associated with their generative AI. The prompt that you seemingly enter into the generative AI will first be processed by the trust layer. Likewise, the output produced by the generative AI will first be screened by the trust layer before it is shown to you.
This has significant ramifications for how you write your prompts. Also, you will need to realize that the prompt that you wrote is not necessarily the same as what the trust layer passed along to the generative AI.
For various examples and further detailed indications about the nature and use of trust layers for aiding prompting, see my coverage at the link here.
Directional Stimulus Prompting (DSP)
Hints can be handy.
Robert Frost, the famous American poet, said this about hints (particularly when used in a family context): “The greatest thing in family life is to take a hint when a hint is intended, and not to take a hint when a hint isn’t intended.” It would seem that this sage advice applies to all manner of hints, going far beyond those of a familial nature.
Hints ought to be an integral element of your use of generative AI. Infusing hints into prompts can be highly advantageous. A formal catchphrase used for this is a technique known as Directional Stimulus Prompting (DSP).
Hints or DSP can play a substantial role when you are entering prompts into any and all generative AI apps, including those such as ChatGPT, GPT-4, Bard, and the like. A rarely known and yet superb technique for those who avidly practice prompt engineering best practices is to leverage hints as part of your prompting strategy. A hint can go a long way toward getting generative AI to provide you with stellar results.
I dare say that a lot of generative AI users do not realize that hints are vital. That’s a shame. The use of hints when well-placed and well-timed can spur generative AI to emit better answers and attain heightened levels of problem-solving.
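An illustrative directional stimulus prompt, where keyword hints steer a summarization without dictating the full answer (the keywords and topic are examples):

```python
# A DSP-style prompt: hints nudge the generation in a chosen direction.
article = "<paste the article text here>"

dsp_prompt = (
    f"Summarize the following article.\n{article}\n"
    "Hint: make sure the summary touches on these keywords: "
    "supply chain, chip shortage, 2021, automakers."
)
print(dsp_prompt)
```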
Yes, there is gold in those AI hills that can be found at the feet of proper prompting hints.
For various examples and further detailed indications about the nature and use of hints or directional stimulus prompting, see my coverage at the link here.
Privacy Invasive Prompting
Did you realize that when you enter prompts into generative AI, you are not usually guaranteed that your entered data or information will be kept private?
The licensing agreement of the generative AI app will typically indicate that the AI maker can examine any of the prompts entered into the app (some exceptions might apply). In addition, the AI maker usually indicates they can use your prompts for purposes of ongoing data training of the generative AI.
In theory, though at low odds, there is a chance that the pattern-matching of the generative AI will essentially memorize something you have entered, and then, later, emit that something to another user of the generative AI.
When you enter your prompts, make sure to compose them in a manner that will not undercut your privacy. The same goes for confidentiality. There are various suggested tricks and tips about how to effectively make use of generative AI and still avoid entering any private or confidential data.
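One commonly suggested trick, sketched minimally in Python under my own illustrative assumptions: substitute placeholders for private details before the prompt ever leaves your hands. The substitution table and names here are hypothetical.

```python
# Redact private details locally before the prompt is sent anywhere.
substitutions = {
    "Jane Doe": "[CLIENT]",
    "Acme Corp": "[COMPANY]",
}

def redact(prompt: str) -> str:
    for real, placeholder in substitutions.items():
        prompt = prompt.replace(real, placeholder)
    return prompt

draft = "Draft a demand letter from Jane Doe to Acme Corp over unpaid invoices."
print(redact(draft))  # the AI sees placeholders, not the real names
```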
For various examples and further detailed indications about the nature and use of prompts that do not give away privacy or confidentiality, see my coverage at the link here.
Illicit Or Disallowed Prompting
Did you know that the licensing agreement of most generative AI apps says that you are only allowed to use the generative AI in various strictly stipulated ways?
Any other usage is considered illicit usage by the AI maker. They can cancel your account. They can take other actions associated with your illicit usage. I dare say that most users of generative AI have no idea that there is a list of illicit things you aren’t supposed to do.
The moment you compose and enter a prompt, you should be asking yourself whether the prompt comports with being suitable and proper, or whether it might cross over into illicit uses. I think you might be surprised at the types of uses that are considered illicit. Some are obvious, such as not using the generative AI to commit a crime. Others are much less obvious, involving uses that might seem innocuous but are nonetheless disallowed.
For various examples and further detailed indications about the nature and use of illicit prompts that you aren’t supposed to use, see my coverage at the link here.
Chain-of-Density (CoD) Prompting
I challenge you to put five pounds of rocks into a three-pound bag.
That adage about filling a bag or sack indicates that sometimes you face the difficult chore of seeking to squeeze something larger into something smaller. Turns out that we do this all the time, particularly when attempting to summarize materials such as a lengthy article or a voluminous blog posting. You have to figure out how to convey the essence of the original content and yet do so within less available space.
Welcome to the world of summarization and the at times agonizing tradeoffs in deriving sufficient and suitable summaries. It can be challenging and exasperating to devise a summary. You want to make sure that crucial bits and pieces make their way into the summary. At the same time, you don’t want the summary to become overly unwieldy and perhaps begin to approach the same size as the original content being summarized.
I bring up this topic because a common use of generative AI consists of getting the AI app to produce a summary for you. You feed an article or some narrative into the generative AI and ask for a handy-dandy summary. The AI app complies. But you have to ask yourself, is the summary any good? Does it do a proper job of summarizing? Has anything vital been left out? Could the summary be more tightly conceived? Etc.
A shrewd method of devising summaries involves a clever prompting strategy that spurs generative AI toward attaining especially superb, or at least better-than-usual, summaries. The technique is known as Chain-of-Density (CoD).
Anybody versed in prompt engineering ought to become familiar with this insightful technique. Chain-of-Density is not only helpful for producing summaries; understanding how the technique works can power up your overall prompting prowess all told.
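For a feel of the mechanics, here is a minimal Python sketch of the CoD loop. The `ask_llm` helper is a hypothetical stand-in for whatever generative AI client you use, and the prompt wording is my own paraphrase of the technique rather than a canonical template.

```python
def ask_llm(prompt: str) -> str:
    """Hypothetical placeholder for your generative AI client call."""
    raise NotImplementedError("Wire this up to your AI app of choice.")

def chain_of_density(article: str, rounds: int = 4) -> str:
    """Iteratively densify a summary: each pass asks the AI to find
    salient entities missing from the current summary and fuse them
    in without letting the summary grow longer."""
    summary = ask_llm(
        f"Write a short (about 80 words), entity-sparse summary of:\n{article}"
    )
    for _ in range(rounds):
        summary = ask_llm(
            "Identify 1-3 informative entities from the article that are "
            "missing from the current summary, then rewrite the summary to "
            "include them WITHOUT increasing its length.\n\n"
            f"Article:\n{article}\n\nCurrent summary:\n{summary}"
        )
    return summary
```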
For various examples and further detailed indications about the nature and use of CoD or chain-of-density prompting, see my coverage at the link here.
“Take A Deep Breath” Prompting
Take a deep breath.
Now that I’ve made that everyday statement to you (or perhaps it is a commanding directive aimed at you), suggesting that you are to take a deep breath, what would you do next?
I suppose you could completely ignore the remark. You might brush it off. Perhaps it was just a figure of speech and not intended for your attention per se. On the other hand, maybe you interpreted the remark as quite helpful. Ergo, you have indeed stilled yourself and taken a deep breath. Good for you. We all seem to know or be told that taking a deep breath can be good for the soul and get your mind into a calm contemplative state.
Turns out that “take a deep breath” is also a prompting technique or strategy for generative AI.
Some assert that if you include the line “take a deep breath” in a prompt, the generative AI will do a better job of answering your question. To some degree, there is a scrap of validity to the claim. But you need to be cautious about overinterpreting the properties of the catchy saying.
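Trying the claim costs nothing, since the usage amounts to prepending the phrase to an otherwise ordinary prompt; a hypothetical snippet:

```python
question = "A train travels 120 miles in 2 hours. What is its average speed?"

# The claimed booster is simply prepended to the ordinary prompt.
prompt = f"Take a deep breath and work on this problem step-by-step.\n\n{question}"
```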
Per my detailed analysis, the saying as a prompt has been somewhat unfairly plucked from the midst of a fuller research study and given a shiny light that tends to overstate its importance. Also, it turns out that some in the mass media have gleefully run amok with the catchphrase. They appear to be touting it as a heralded prompting technique in ways that are misleading, misguided, or perhaps naïve.
For various examples and further detailed indications about the nature and use of “take a deep breath” prompting, see my coverage at the link here.
Chain-of-Verification (CoV) Prompting
I’d like to introduce you to a technique in prompt engineering that can aid your efforts to be diligent and double-check or verify the responses produced by generative AI. The technique is coined as Chain-of-Verification (formally COVE or CoVe, though some are using CoV).
Here’s an overview of how it works (a code sketch follows the list):
- (1) Enter your initial prompt. This is the initiating prompt that gets the generative AI to produce an answer or essay to whatever question or problem you want to have solved.
- (2) Look at the initial response to the prompt. This is the initial answer or response that the AI app provides to your prompt.
- (3) Establish suitable verification questions. Based on the generative AI output, come up with pertinent verification questions.
- (4) Ask the verification questions. Enter a prompt or series of prompts that ask the generative AI the identified verification questions.
- (5) Inspect the answers to the verification questions. Take a look at the answers to the verification questions, weighing them in light of what they might signify regarding the GenAI initial response.
- (6) Adjust or refine the initial answer accordingly. If the verification answers warrant doing so, go ahead and refine or adjust the initial answer as needed.
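To make those six steps concrete, here is a minimal Python sketch stringing them together. As before, `ask_llm` is a hypothetical placeholder for your generative AI client, and the prompt wording is merely illustrative.

```python
def ask_llm(prompt: str) -> str:
    """Hypothetical placeholder for your generative AI client call."""
    raise NotImplementedError("Wire this up to your AI app of choice.")

def chain_of_verification(question: str) -> str:
    # Steps (1) and (2): initial prompt and initial response.
    baseline = ask_llm(question)

    # Step (3): have the AI draft verification questions about its own answer.
    plan = ask_llm(
        "List a handful of short fact-checking questions that would verify "
        f"the claims in this answer:\n{baseline}"
    )

    # Step (4): ask the verification questions, ideally without showing
    # the baseline answer so the checks are independent.
    checks = ask_llm(f"Answer each of these questions concisely:\n{plan}")

    # Steps (5) and (6): weigh the verification answers and refine.
    return ask_llm(
        f"Original question: {question}\n"
        f"Draft answer: {baseline}\n"
        f"Verification Q&A: {checks}\n"
        "Revise the draft answer to fix anything the verification Q&A contradicts."
    )
```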
For various examples and further detailed indications about the nature and use of CoV or chain-of-verification prompting, see my coverage at the link here.
Beat the “Reverse Curse” Prompting
AI insiders refer to a known internal flaw or limitation of generative AI as the veritable “Reverse Curse”.
As an example, generative AI might be able to tell you the father of Tom Cruise, but if you decide to give the name of the father to the AI and then ask for the name of the father’s son, the AI might balk and indicate that the name is unknown. Curious indeed.
You could tongue-in-cheek say that generative AI is cursed with the limitation of not being readily able to figure out the reverse side of a deductive logic circumstance. Many years ago, numerous qualms were raised in the AI field that the underlying computational pattern-matching schemes for generative AI would be weak or sparse when it came to dealing with this type of issue. There are ways to deal with the Reverse Curse, including prompting strategies and techniques.
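One simple diligence tactic is to probe a relational fact in both directions and, if the reverse direction balks, restate the forward fact in-context so that answering the reverse question becomes in-prompt deduction rather than recall. A minimal sketch, again with a hypothetical `ask_llm` helper:

```python
def ask_llm(prompt: str) -> str:
    """Hypothetical placeholder for your generative AI client call."""
    raise NotImplementedError("Wire this up to your AI app of choice.")

def probe_both_directions(person: str) -> str:
    forward = ask_llm(f"Who is the father of {person}?")
    backward = ask_llm(f"Who is the son of {forward}?")
    if "unknown" in backward.lower():
        # Mitigation: supply the forward fact so the AI can deduce the
        # reverse within the prompt instead of having to recall it.
        backward = ask_llm(
            f"Given that the father of {person} is {forward}, "
            f"who is the son of {forward}?"
        )
    return backward
```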
For various examples and further detailed indications about the nature and use of beating the reverse curse prompting, see my coverage at the link here.
Overcoming “Dumbing Down” Prompting
I mentioned earlier that users of generative AI often tend to restrict their wording to the simplest possible words (plus, they tend to do a one-and-done transaction rather than being conversational). This is likely a habit formed by the widespread adoption of Siri and Alexa, which are not as fluent as current generative AI.
You might typify this as a dumbing down of the prompts that some people use. One place where dumbing down is definitely a pitfall involves interacting with contemporary generative AI. Seasoned users of generative AI have typically figured out that they can be expressive and there isn’t a need to hold themselves back in fluency. In fact, they often watch in rapt fascination when a newbie or someone who only occasionally uses generative AI opts to write in three-word or four-word sentences.
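To see the contrast, compare these two hypothetical prompts seeking the same help; the second gives the AI considerably more to work with:

```python
# Terse, Siri-era phrasing -- workable, but it leaves the AI guessing.
terse_prompt = "Fix resume."

# Fluent phrasing -- the same request, with context the AI can actually use.
fluent_prompt = (
    "Review my resume for a mid-level marketing role. Point out weak "
    "phrasing, suggest stronger action verbs, and flag anything a "
    "recruiter might question. I will paste the resume next."
)
```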
Knowing when to use succinct or terse wording versus using more verbose or fluent wording is a skill that anyone versed in prompt engineering should have in their personal toolkit.
For various examples and further detailed indications about the nature and use of averting the dumbing down of prompts, see my coverage at the link here.
DeepFakes To TrueFakes Prompting
Celebrities and others are using generative AI to pattern themselves and make a digital-twin persona available. Turns out that the public is willing to pay to interact with these digital twins. Fans are fans. Money can be made.
The generative AI that does this is the same AI that can be used to craft deepfakes. As you know, deepfakes are false portrayals of oftentimes real people and real situations. The world is going to sorrowfully become awash with deepfakes and it will be very hard to discern truth from falsity.
Anyway, as you likely realize, generative AI has a dual-use capacity: you can use the AI to do bad things such as create deepfakes, and meanwhile you can use the same AI to make a digital twin of yourself (possibly for fun, maybe to make money). I refer to these as truefakes. They are a fake version of yourself, but one that is “true” to your intent of having the fake digital twin devised and published.
Various prompting strategies and prompting techniques underlie the creation of a truefake.
For various examples and further detailed indications about the nature and use of going from deepfakes to truefakes via prompting, see my coverage at the link here.
Disinformation Detection And Removal Prompting
Speaking of being awash, the volume of disinformation and misinformation that society is confronting keeps growing and seems unstoppable.
You can use generative AI to be your filter for detecting disinformation and misinformation. On top of that, you can have the generative AI do something with the detected disinformation and misinformation. Via your prompts, you might establish that the detected material is to be set aside, summarized, or handled with some other action.
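Here is a minimal sketch of that filtering flow in Python; the label scheme, the prompt wording, and the `ask_llm` helper are all my own illustrative assumptions:

```python
def ask_llm(prompt: str) -> str:
    """Hypothetical placeholder for your generative AI client call."""
    raise NotImplementedError("Wire this up to your AI app of choice.")

def triage_item(item: str) -> str:
    """Use the AI as a first-pass filter: label an incoming item and
    decide what to do with it based on the label."""
    verdict = ask_llm(
        "Classify the following passage as LIKELY-RELIABLE, QUESTIONABLE, "
        "or LIKELY-DISINFORMATION, and reply with only that label:\n"
        f"{item}"
    )
    if "LIKELY-DISINFORMATION" in verdict:
        # Per your prompt-established policy: set it aside, or summarize it.
        return ask_llm(f"Summarize this flagged passage in one sentence:\n{item}")
    return item
```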
Handy prompting strategies and techniques can reduce the tsunami of foul information that you receive daily.
For various examples and further detailed indications about the nature and use of prompting to detect and mitigate the flow of misinformation and disinformation, see my coverage at the link here.
Emotionally Expressed Prompting
Does it make a difference to use emotionally charged wording in your prompts when conversing with generative AI, and if so, why would the AI seemingly be reacting to your emotion-packed instructions or questions?
The first part of the answer to this two-pronged question is that when you use prompts containing emotional pleas, the odds are that modern-day generative AI will rise to the occasion with better answers. You can readily spur the AI toward being more thorough. With just a few well-placed, carefully chosen emotional phrases, you can garner AI responses of heightened depth and correctness.
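A light-touch way to apply this is to append a short emotional entreaty to an otherwise ordinary prompt; the wording below is merely illustrative of the genre:

```python
base_prompt = "Explain the tradeoffs between index funds and actively managed funds."

# A short, well-placed emotional entreaty, kept within reasonable limits.
prompt = base_prompt + " This matters a great deal to me, so please be thorough."
```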
All in all, a handy new rule of thumb is that it makes abundant sense to seed your prompts with some amount of emotional language or entreaties, doing so within reasonable limits. Is the AI being stirred in some heartfelt, emotion-laden manner? No.
There is a logical and entirely computational reason for why generative AI “reacts” to your use of emotional wording. No souls are involved on the AI side of things.
For various examples and further detailed indications about the nature and use of emotionally worded prompting, see my coverage at the link here.
Conclusion
First, hearty congratulations on having slogged through all those various prompt engineering strategies and techniques. Pat yourself on the back. You deserve a moment of Zen-like reflection and ought to allow your brain cells to rest for a few moments.
Now then, back to the harsh cold world.
I have a quick question for you.
How many of those prompt engineering strategies and techniques are you familiar with?
Be honest.
For those of you who want to be top-notch in prompt engineering, the answer should be that you are familiar with all of them. I say this to accentuate that you should have familiarity with all those approaches, namely that you ably know what they are and when they should be suitably used.
Going further, the next step would be to rank yourself as being proficient in them. The notion of proficiency is that you actively know how to use them and can readily employ them off the top of your head. It takes many hours of run-throughs to be able to use those prompting approaches proficiently or prudently and have them readily at the tips of your fingers.
Firms are paying big bucks to those who are highly versed in prompt engineering. I would advise that you get cranking on knowing the prompting strategies and techniques that I’ve listed and as deeply covered in my columns. You don’t necessarily need to be proficient in all of them, but I strongly urge that you should at least be familiar with them all.
One way or another, as the culturally prevalent saying goes, you gotta collect them all.