In today’s column, I reveal a vital prompting technique known as logic-of-thought (LoT) that can noticeably improve your results when using generative AI. Logic-of-thought involves telling generative AI to work through a question or problem in a highly logical fashion and ergo lean into logical reasoning as much as possible.

You might be thinking that generative AI ought to already be employing an extensive logic-based approach, and thus it seems odd to have to outrightly tell the AI to do so. At first glance, LoT seems like an unnecessary redundancy. But the reality is that AI makers explicitly tune generative AI for fast or quick-and-dirty result generation rather than deriving deeper, much more computationally involved results. They believe that people want speed of response over depth or reliability of response (well, admittedly, people often do want that).

Logic-of-thought as a prompt engineering strategy helps overcome that default condition and pushes generative AI to heighten priority for producing logic-based answers. Anyone using generative AI on a regular basis needs to be familiar with and always ready to employ the LoT prompt when relevant.

Let’s talk about it.

This analysis of an innovative proposition is part of my ongoing Forbes.com column coverage on the latest in AI including identifying and explaining various impactful AI complexities (see the link here). For those of you specifically interested in prompting and prompt engineering, you might want to take a look at my comprehensive analysis and description of over fifty notable prompting techniques at the link here.

Getting AI To Maximize Logic Over Speed

Before we jump into logic-of-thought, there is a close cousin known as chain-of-thought (CoT) that is worthy of an overall rundown.

I’ve frequently covered chain-of-thought, see the link here and the link here, since it is an essential prompting technique that certainly is near the top of the list for proficient prompt engineers and avid users of generative AI. The chain-of-thought method is very straightforward. When you enter a question or problem into AI, you merely also state that you want the AI to work on a stepwise basis to solve the matter at hand. By telling the AI to proceed on a step-by-step basis, you tend to get better results out of generative AI.

Why does chain-of-thought make a substantive difference?

Easy-peasy, it is partially because you get the AI to take more cautious steps to generate your results, plus this allows the AI to take more time and consume more computational resources to derive an answer. You are giving the AI permission to take time that most AI makers have otherwise decided would be exasperating to users. Users are apt to want the soonest possible results and are unwilling to wait even a few extra seconds to see their responses. That’s why o1, the successor to GPT-4o in ChatGPT, can produce better answers, namely that it takes more time, works deeper, and users are forewarned upfront that they will need to be patient while awaiting responses. The AI maker OpenAI has baked chain-of-thought directly into o1; see my detailed analysis of how o1 works at the link here and the link here.

Shifting gears, we can leverage a chain-of-thought premise for purposes of solving logic-based problems or questions with generative AI. Rather than making use of chain-of-thought, we will instead employ logic-of-thought. As a side note, there isn’t any concern per se about using both CoT and LoT together, which you can do if you wish and sometimes makes ample sense to do.

How To Do Logic-of-Thought In Generative AI

Here’s the deal.

If you have a question or problem for generative AI that you know is best solved via heightened use of logic, that’s when you readily pull the logic-of-thought prompt out of your prompt engineering toolkit or skillset.

Please note that I am also suggesting that if your question or problem doesn’t seem to need extraordinarily logical reasoning, there isn’t much value in employing the LoT. The odds are that you’ll be wasting computational resources and essentially giving a false signal to the AI about what it takes to solve the matter. Use LoT when you either know for sure that logic is required or when you have a hunch that logic will at least likely improve the outcome.

Use LoT with grace and aplomb.

When might an explicit boost of logical reasoning be valuable?

The obvious use case consists of logic-based questions, such as trying to figure out a propositional series: if A implies B, and B implies C, what can you say about A and C? You might remember propositional logic from your days in high school or college. Logic-based questions frequently appear on national proficiency tests such as the SAT or ACT. Many students dreaded those bleak and at times unfathomable propositional logic questions.

Anyway, real life can involve similar logic-based considerations. Even if the question or problem doesn’t lay out directly as a logic-based situation, there can be a hidden semblance that by using heightened logical reasoning there is a greater chance of a better answer.

There are three steps that you need to instruct generative AI to undertake when wanting the AI to proceed on a heightened logic solving basis:

  • (1) Logic Extraction: You tell the AI to identify all the possible logic-based propositions that are embedded in the question or problem to be solved and that the AI is to display the propositions via use of any conventional format commonly used to express propositional logic.
  • (2) Solution Using Propositions: You tell the AI that based on those extracted propositions, go ahead and solve the problem or answer the question, making use of strident logical reasoning to arrive at a suitable conclusion.
  • (3) Explain In Plain Language: You tell the AI that in addition to showing the propositional logical reasoning that was utilized, also explain in everyday plain language what the reasoning consisted of.
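As a minimal illustration, those three instructions can be assembled into a prompt programmatically. Here is a sketch in Python; the function name and the exact wording of each step are my own phrasing of the three instructions, not a definitive implementation:

```python
def build_lot_prompt(question: str) -> str:
    """Assemble a logic-of-thought (LoT) prompt from the three steps:
    logic extraction, solution using propositions, plain-language explanation."""
    steps = [
        "Step 1 (Logic Extraction): Identify all logic-based propositions "
        "embedded in the question and express each one in conventional "
        "propositional-logic notation.",
        "Step 2 (Solution Using Propositions): Using the extracted "
        "propositions, solve the question via rigorous logical reasoning.",
        "Step 3 (Explain In Plain Language): Show the propositional "
        "reasoning you used, then restate it in everyday plain language.",
    ]
    return (
        "Solve the following logic-based question by performing three steps.\n"
        + "\n".join(steps)
        + f"\n\nQuestion: {question}"
    )

prompt = build_lot_prompt("If A implies B and B implies C, what follows about A and C?")
print(prompt)
```

You could then paste the resulting text into any generative AI chat interface, swapping in whatever question you want solved.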

I will next provide you with a sample prompt that combines those three explicit instructions into one straightforward prompt.

Logic-of-Thought Prompting Is Easy To Invoke

Let’s look at the logic-of-thought prompt in action. I’ll provide an LoT prompt template that you can reuse for your own daily efforts while using generative AI.

I logged into ChatGPT to do this. You can use just about any generative AI app such as ChatGPT, GPT-4o, o1, Claude, Gemini, or Llama, and the chances are that the logic-of-thought prompt will be useful there too. It works well for nearly all the major generative AI and large language model designs.

First, here’s my prompt that invokes LoT.

  • My entered prompt: “I want you to solve the following logic-based question by performing three crucial steps. The first step entails doing logic extraction from the given question. You are to determine all possible logic-based propositions that exist in the question, and you should express each proposition in conventional propositional language. The second step entails your using the extracted and determined logic-based propositions to solve the question logically. The third step consists of showing the logic-based propositions and how you solved the question, along with then explaining the logic in natural language so that I can plainly see how you solved the question. Do you understand these instructions?”
  • ChatGPT generated response: “Yes, I understand the instructions.”

You are welcome to use my above prompt as a kind of template, or you can devise your own version.

Some thoughts on this. People often prefer to be less detailed and just tell the AI to be logical. I don’t find that quite as useful. The issue is that if you are overly terse or lacking in specificity, the AI might not get the drift of what you want to have undertaken. I realize that my recommended prompt is a bit long, but I have found this useful and usually just do a copy-paste of the prompt from a document that contains my favorite prompts.
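If you drive generative AI through an API rather than a chat window, the same two-turn flow (instructions first, the question second) can be packaged as a message list. Here is a sketch assuming the OpenAI Python client; the model name and my abbreviated template wording are illustrative placeholders:

```python
# Sketch: reusing the LoT template programmatically, mirroring the flow
# shown above (instructions, confirmation, then the question).
# The instruction wording here is an abbreviated paraphrase of my template.

LOT_TEMPLATE = (
    "I want you to solve the following logic-based question by performing "
    "three crucial steps: (1) extract all logic-based propositions from the "
    "question and express each in conventional propositional language, "
    "(2) use those propositions to solve the question logically, and "
    "(3) show the propositions and the solution, explaining the logic in "
    "natural language. Do you understand these instructions?"
)

def lot_conversation(question: str) -> list[dict]:
    """Build the message list: LoT instructions, the AI's confirmation,
    then the actual question as a follow-up turn."""
    return [
        {"role": "user", "content": LOT_TEMPLATE},
        {"role": "assistant", "content": "Yes, I understand the instructions."},
        {"role": "user", "content": question},
    ]

messages = lot_conversation(
    "If A implies B, and B implies C, what can you say about A and C?"
)

# To actually send this (requires an API key and the openai package):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(model="gpt-4o", messages=messages)
# print(reply.choices[0].message.content)
```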

Example Of Logic-Based Question For LoT

We shall try out the LoT, now that we’ve got ChatGPT ready to go.

I decided to see if generative AI can solve a logic-based question that might be seen on the Law School Admission Test (LSAT). The LSAT is developed and administered to gauge whether prospective lawyers have what it takes to achieve success in law school. You might have heard from attorney acquaintances that the LSAT is a tough test. Indeedy do. Those attorneys probably shudder when you mention the LSAT. There are four sections consisting of multiple-choice questions and an additional unscored writing portion.

I went to the LSAT website where they post sample questions. Of the sample questions presented, I picked one that has been on the LSAT previously and was often incorrectly answered by those taking the demanding test. We can assume therefore that it is an especially harder logic-based problem.

If you want to have some fun, go ahead and read the problem that I show in the prompt below and mull over what you think the correct answer is. The good news is that since it is a multiple-choice question, you have a one-out-of-five chance of picking the right answer. Good luck.

Here is the sample question from the LSAT website:

  • My entered prompt: “In jurisdictions where the use of headlights is optional when visibility is good, drivers who use headlights at all times are less likely to be involved in a collision than are drivers who use headlights only when visibility is poor. Yet Highway Safety Department records show that making use of headlights mandatory at all times does nothing to reduce the overall number of collisions. Which one of the following, if true, most helps to resolve the apparent discrepancy in the information above? a. In jurisdictions where the use of headlights is optional when visibility is good, one driver in four uses headlights for daytime driving in good weather. b. A law making use of headlights mandatory at all times is not especially difficult to enforce. c. Only very careful drivers use headlights when their use is not legally required. d. There are some jurisdictions in which it is illegal to use headlights when visibility is good. e. The jurisdictions where the use of headlights is mandatory at all times are those where daytime visibility is frequently poor.”

Before I show you the answer, take a moment and select your choice.

I’ll wait.

Turns out the correct answer is choice “c” – did you get it right? I’m sure you did, congrats.

Let’s see what ChatGPT comes up with.

  • ChatGPT generated response:

“To solve this question, I’ll follow the three-step process you outlined.”

“Step 1: Logic Extraction of Propositions. Define two propositions. P1: In jurisdictions where headlights are optional in good visibility, drivers who use headlights at all times are less likely to be in a collision than those who use headlights only when visibility is poor. P2: Highway Safety Department records show that making headlights mandatory at all times does not reduce the overall number of collisions. Let’s define additional terms to clarify these propositions: H = Headlights are used at all times. C = Involvement in a collision. O = Headlights are optional in good visibility. M = Headlights are mandatory at all times.”

“Step 2: Solution Using Propositions. The propositions can then be rephrased as: P1: O⇒(H⇒¬C). In jurisdictions where headlights are optional, using headlights at all times decreases collision risk. P2: M⇒¬(H⇒¬C). Making headlights mandatory does not decrease the overall number of collisions.”

“Step 3: Analysis of Answer Choices and Explanation in Natural Language. The best answer is (c). Answer (c) resolves the discrepancy by highlighting the role of driver behavior. When headlight use is optional, only very careful drivers choose to use headlights at all times. These drivers tend to be more vigilant and cautious in general, so their lower collision rates may be due to their careful driving habits, not just their use of headlights. When headlight use becomes mandatory, all drivers, regardless of their caution level, are required to use headlights. This group includes both careful and less careful drivers. As a result, the average collision rate does not decrease simply due to mandatory headlight use, since less careful drivers are still involved in collisions at the same rate as before. Thus, the apparent contradiction is resolved: the lower collision rate observed among voluntary headlight users is likely due to their cautious nature, not merely the act of using headlights.”

End of ChatGPT response

Aha, ChatGPT came up with the correct answer.

You might keenly observe that ChatGPT abided by my LoT instructions and performed all three steps. I always like to include the third step that tells the AI to explain the answer in plain language. This is especially useful if the problem entails arcane propositional logic. The LSAT question wasn’t deeply involved in propositional logic, so it was admittedly easy to otherwise understand. A more obscure problem might be nearly inscrutable as to reading the propositional logic and thus a natural language exposition could be quite handy.
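Taking the response’s formalization at face value, you can even check it mechanically with a quick truth-table enumeration. Here is a minimal sketch in plain Python (the proposition names mirror the ChatGPT response; this is an illustrative check of mine, not part of the LoT prompt). It shows that P1 and P2 can both hold at once, and that they only clash in the impossible case where headlights are simultaneously optional and mandatory, which is why extra information such as answer (c) is needed to resolve the apparent discrepancy:

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    """Material implication: p => q."""
    return (not p) or q

# Propositions as formalized in the response above:
# H = headlights used at all times, C = involvement in a collision,
# O = headlights optional in good visibility, M = headlights mandatory.
def p1(O: bool, H: bool, C: bool) -> bool:
    # P1: O => (H => not C)
    return implies(O, implies(H, not C))

def p2(M: bool, H: bool, C: bool) -> bool:
    # P2: M => not (H => not C)
    return implies(M, not implies(H, not C))

# Enumerate all 16 truth assignments, keeping those where both premises hold.
consistent = [
    (O, M, H, C)
    for O, M, H, C in product([False, True], repeat=4)
    if p1(O, H, C) and p2(M, H, C)
]

print(len(consistent), "assignments satisfy both P1 and P2")
# The premises never clash except where O and M would both be true:
print(any(O and M for O, M, H, C in consistent))  # prints False
```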

Empirical Research Supports Logic-of-Thought

My preference is to only adopt prompts that have been empirically analyzed to ascertain that they make a difference. I say this because there are lots of people with lots of off-the-top-of-their-head ideas for prompts, but those often have not been scrutinized for their potency. Hunches are fine, but I like to see the proof of the pudding via in-depth probing and testing.

Logic-of-thought has been experimentally examined, as exemplified in the research paper entitled “Logic-of-Thought: Injecting Logic Into Contexts For Full Reasoning In Large Language Models” by Tongxuan Liu, Wenjiang Xu, Weizhe Huang, Xingyu Wang, Jiaxing Wang, Hailong Yang, and Jing Li, arXiv, September 26, 2024, which makes these salient points (excerpts):

  • “Large Language Models (LLMs) have demonstrated remarkable capabilities across various tasks but their performance in complex logical reasoning tasks remains unsatisfactory.”
  • “To this end, we propose Logic-of-Thought (LoT) prompting which employs propositional logic to generate expanded logical information from input context and utilizes the generated logical information as an additional augmentation to the input prompts, thereby enhancing the capability of logical reasoning.”
  • “The LoT is orthogonal to existing prompting methods and can be seamlessly integrated with them.”
  • “Extensive experiments demonstrate that LoT boosts the performance of various prompting methods with a striking margin across five logical reasoning tasks.”

Good stuff.

I have personally made use of LoT in my ongoing use of generative AI and can directly attest to the improvements for logic-based questions or problems. As such, I have added logic-of-thought to the classes that I teach on prompt engineering and urge others to add LoT to their skillset.

Logic-of-Thought Has Its Place And Time

Is the logic-of-thought a silver bullet that solves all kinds of problems and must be used at all times?

Nope.

It is a niche player. When you have a question or problem that bears on logic and logical reasoning, that’s your clue to use the LoT prompt. The rest of the time, just keep it in the back of your mind, ready to unleash when needed.

The final word on logic goes to the esteemed Albert Einstein: “Logic will get you from A to B. Imagination will take you everywhere.” I suppose that cheekily implies that you are to use your imagination and determine when to use logic-of-thought to your advantage.

Make your choice wisely and logically.
