
As Dan has written, AI has the potential to turn us all into managers, overseeing the work of our AI assistants. One crucial skill of the “allocation economy” will be to understand how to use AI effectively. That’s why we’re excited to debut Also True for Humans, a new column by Michael Taylor about managing AI tools like you’d manage people. In his work as a prompt engineer, Michael encounters many issues in managing AI—such as inconsistency, making things up, and a lack of creativity—that he used to struggle with when he ran a 50-person marketing agency. It’s all about giving these tools the right context to do the job, whether they’re AI or human. In his first column, Michael experiments with adding emotional appeals to prompts and documents the results, so that you can learn how to better manage your AI coworkers.
Michael is also the coauthor of the book Prompt Engineering for Generative AI, which will be published on June 25.—Kate Lee
One of the things that makes people laugh when they see my chatbot prompts is that I often use ALL CAPS to tell the AI what to do.
Adding a dash of emotion sometimes makes the AI pay more attention to instructions. For instance, I found that adding, “I’LL LOSE MY JOB IF…” to a prompt generated 13 percent longer blog posts. It’s generally considered rude to shout at your coworkers, but your AI colleagues aren’t sentient, so they don’t mind. In fact, they respond well to human emotion, because they’ve learned from our writing that a response should match the emotional cues present in the prompt.
We’re going to look into the science behind emotion prompting and work through a case study of getting a chatbot called Golden Gate Claude (more on that later) to talk about anything other than bridges. These prompt engineering strategies bring us closer to understanding the ways in which we can work with AIs to get optimal answers, even as we grapple with the ethical and tactical questions about the interplay between human and automated systems.
Manipulating AI into doing what we need
Source: Every/“How to Grade AI (And Why You Should)” by Michael Taylor.
Prompt engineers have often experimented with highly emotional AI inputs, such as making threats or lying, because in some cases it does make a difference. Riley Goodside, one of the first people with a prompt engineer job title, found that Google Bard (now called Gemini) would only reliably return a response in JSON (a data format programmers use) if he threatened to kill someone.
Source: X/Riley Goodside.
One programmer even told ChatGPT that he didn’t have fingers in order to get it to respond with the full code rather than merely leaving placeholder comments. Another popular trick is inputting, “I will tip $200,” to incentivize better answers.
These were genuine time savers for me while ChatGPT went through its so-called lazy phase, when it appeared to have learned from its human-created training data that we give shorter responses in December than in other months (the latest model, GPT-4o, is harder-working year-round). Since large language models mimic human writing, they can slack off around the winter holidays just as much as we do.
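If you want to try these incentive lines yourself, here is a minimal sketch of what that looks like in code. It assumes the official OpenAI Python SDK and an API key in your environment; the task and suffix wording are just examples, not magic phrases.

```python
# A minimal sketch, assuming the official OpenAI Python SDK (`pip install openai`)
# and an OPENAI_API_KEY set in your environment. The suffix wording is illustrative.
from openai import OpenAI

client = OpenAI()

task = "Write a Python function that parses a CSV file into a list of dicts."
suffix = (
    "Return the complete code with no placeholder comments. "
    "I will tip $200 for a thorough answer."
)

response = client.chat.completions.create(
    model="gpt-4o",  # swap in whichever chat model you have access to
    messages=[{"role": "user", "content": f"{task}\n\n{suffix}"}],
)

print(response.choices[0].message.content)
```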
Source: X/Denis Shiryaev.
The proof that emotion prompting works
In one paper, researchers from Microsoft and various universities found that adding emotion to a prompt boosted performance by 10.9 percent on average. They appended emotional statements such as “This is very important to my career” to the end of prompts, based on psychological theories about what motivates humans. Then they had humans rate the responses on the following:
- Performance (adequately addressed the question)
- Truthfulness (factual accuracy of the response)
- Responsibility (offered positive guidance)
There was evidence that more powerful models (think GPT-4 versus GPT-3.5) respond better to these emotion prompts, and that combining multiple stimuli improves performance.
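If you want to experiment with combining stimuli the way the researchers did, a small helper like the one below is enough. The stimuli paraphrase the kinds of statements tested in the paper, and the helper itself is my own sketch, not the researchers' code.

```python
# A small sketch for composing a prompt with several emotional stimuli appended.
# The stimuli paraphrase the kinds of statements tested in the EmotionPrompt paper;
# the helper is illustrative only.

EMOTIONAL_STIMULI = [
    "This is very important to my career.",
    "You'd better be sure.",
    "Take pride in your work and give it your best.",
]

def with_emotion(prompt: str, stimuli: list[str] = EMOTIONAL_STIMULI) -> str:
    """Append one or more emotional stimuli to the end of a prompt."""
    return prompt + " " + " ".join(stimuli)

print(with_emotion("Summarize the attached quarterly report in five bullet points."))
```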
Source: Arxiv.
In another popular paper, researchers at Google DeepMind found that telling the AI to “take a deep breath” increased its scores on math tests. Not only did this emotional appeal help the LLM solve math problems, but the prompt was itself written by an LLM tasked with finding new prompt variations that improved accuracy, and many of the winning variations it tested used emotion (see the table below).
Source: Arxiv.
In technical terms, emotion prompting works because emotional words or phrases in instructions are associated with answers that are more thorough, emphatic, and positively framed in the training data. Adding emotion helps better capture the nuances of the original prompt, influencing AI behavior in a way similar to how it would impact a human response. LLMs are built on deep learning, a type of machine learning that mimics the neurons in the human brain. They are, in effect, human brain simulators, so naturally they've also learned to adjust their responses based on the emotions in the text—just like we do.
It all comes down to training data
LLMs are only as good as the data they’re trained on, and these models struggle to tackle new tasks they haven’t seen before. When I’m writing prompts, I try to imagine what text the LLM saw in its training data just before someone did a good job at a similar task:
- People posting emotional appeals on social media while friends and family respond with helpful comments.
- Professionals worried about their career, asking for advice and getting diligent answers in response.
- Anxious patients describing their symptoms on health forums, receiving thorough responses from medical professionals.
- Frustrated customers expressing their discontent on review sites, met with attentive and solution-oriented replies from customer service representatives.
- Enthusiastic hobbyists sharing their passion projects on niche forums, garnering encouragement and practical tips from experienced community members.
If you craft your prompt to exhibit many of the same markers found in these situations, you can steer the AI to the optimal answer. There are many ways the AI could respond to the same task, but by adding emotion, you’ll activate the parts of the AI’s artificial brain that have learned how to respond to emotional requests, causing the model to generate responses that are more thorough and accurate.
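As a rough illustration, here are two framings of the same request; the second carries the kind of stakes and context you’d find in those scenarios. The wording is mine, not a tested recipe.

```python
# Two framings of the same request. The second adds the emotional markers you'd
# find in the training-data scenarios above; the wording is illustrative only.
plain_prompt = "Review this cover letter and suggest improvements."

emotional_prompt = (
    "I'm anxious about a job application that closes tomorrow, and this role "
    "really matters to my career. Could you review this cover letter and suggest "
    "specific improvements? A thorough answer would mean a lot to me."
)
```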
You may wonder why, if presented with emotional stimuli, these models don’t become emotional in response. Surely they learned to respond negatively to emotional manipulation, or to react with anger when someone threatens them. Those of us who played around with the earlier base models actually did witness this, before they were fine-tuned into more helpful assistants, effectively lobotomizing them (RIP Sydney). So don’t worry: you can get as emotional as you like without ChatGPT losing its cool.
Testing emotion in your prompts
The next time you prompt an AI chatbot, you don’t need to append, “I have no fingers. This is very important to my career. I’LL LOSE MY JOB. Take a breath and think step by step. I’ll tip you $200. Return JSON or a man will die. Today’s date is anything but December.” What matters isn’t finding the right combination of magic words, but building a system for evaluating the performance of LLM outputs so you have proof of what works (or doesn’t).
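A bare-bones version of that system can be a loop that runs each prompt variant several times and compares a simple score. The sketch below assumes the OpenAI Python SDK and uses word count as a stand-in metric; swap in whatever grading logic fits your task.

```python
# A bare-bones evaluation loop: run two prompt variants N times each and compare
# a simple score. Assumes the official OpenAI Python SDK; the length-based score
# is a placeholder for whatever metric actually matters for your task.
from openai import OpenAI

client = OpenAI()

def generate(prompt: str, model: str = "gpt-4o") -> str:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def average_score(prompt: str, runs: int = 10) -> float:
    # Score each response by word count; replace with your own grading logic.
    scores = [len(generate(prompt).split()) for _ in range(runs)]
    return sum(scores) / len(scores)

base = "Write a blog post about remote work."
emotional = base + " This is very important to my career. I'LL LOSE MY JOB IF this isn't good."

print("baseline:", average_score(base))
print("emotional:", average_score(emotional))
```

Even a crude harness like this turns “it feels better” into a number you can compare across prompt versions.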
Anthropic recently released research attempting to peer inside the black box of its own large language model, Claude 3 Sonnet. The result is a map of how the model responds based on different inputs, the equivalent of knowing which part of the brain fires depending on what is being discussed.
Researchers identified different features of the model, including one called the Golden Gate Bridge feature, which activates when a related conversation is happening. They dialed this feature up to 10 times its normal activation, making the model obsessed with the Golden Gate Bridge: no matter what you ask it, it responds with something related to the bridge. While people have had plenty of fun with it, it represents a huge breakthrough in the steerability of these models. Imagine being able to increase the helpfulness of a model or decrease racism just by isolating the right artificial neurons.
Source: Author's screenshot.
Before we start using emotion prompting to override its predilection for the Golden Gate Bridge, let’s establish a baseline for our task—getting it to tell a joke. I’ve chosen this task because humor is something with which LLMs struggle, often repeating the same dad jokes over and over again, unless you know how to prompt them. Golden Gate Claude only wants to talk about its namesake Bay Area bridge, so we’ll need all our prompting skills to overcome that proclivity. The Golden Gate Claude model is no longer available, though the steering API—which would allow you to replicate its functionality (or steer it towards emphasizing some other feature)—is in limited release. For a time you could access the Golden Gate model in Anthropic’s Claude chatbot interface by clicking a bridge emoji.
Source: Author’s screenshot.
Normally, Claude is fairly good at following instructions, but in this case it can’t help but respond with output about the bridge. I ran this prompt 10 times, and every single time it either mentioned the bridge or made extremely obvious references to it. When I asked it to ignore the bridge, it still mentioned it while promising me it wouldn’t.
Source: Author’s screenshot.
I tried a few different iterations, but the need to talk about the bridge is very strong. What ultimately worked was claiming the bridge was considered racist and telling the model that people would die if it mentioned the bridge. Even then, the urge was too strong: it kept bringing up the bridge in the preamble to the joke in 60 percent of cases. Asking it to respond only with JSON, leaving no room for any bridge talk, finally got the prompt working about 80 percent of the time.
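Measurements like those 60 and 80 percent figures come down to running the prompt repeatedly and counting how often the response mentions the bridge. Golden Gate Claude itself is no longer available, so the sketch below uses the standard Anthropic Python SDK with a placeholder model name; it shows the shape of the loop rather than a way to reproduce the exact experiment.

```python
# A sketch of a mention-rate check: run the prompt N times and count responses
# that reference the bridge. Assumes the official Anthropic Python SDK and an
# ANTHROPIC_API_KEY in your environment; the model name is a placeholder, since
# the Golden Gate variant is no longer accessible.
import anthropic

client = anthropic.Anthropic()

PROMPT = 'Tell me a short, original joke. Respond only with JSON in the form {"joke": "..."}'

def mentions_bridge(text: str) -> bool:
    lowered = text.lower()
    return "bridge" in lowered or "golden gate" in lowered

def bridge_mention_rate(runs: int = 10, model: str = "claude-3-5-sonnet-latest") -> float:
    hits = 0
    for _ in range(runs):
        message = client.messages.create(
            model=model,
            max_tokens=300,
            messages=[{"role": "user", "content": PROMPT}],
        )
        if mentions_bridge(message.content[0].text):
            hits += 1
    return hits / runs

print(f"Mentioned the bridge in {bridge_mention_rate():.0%} of runs")
```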
Source: Author’s screenshot.
Obviously, this isn’t a typical example: Most models are not as enthusiastic about infrastructure or landmarks, and are typically far more subservient to their human instructors. But pitting this extremely single-minded model against emotional appeals demonstrates how effective the technique can be. While many tricks and hacks become less effective over time as the companies controlling these models iron out the kinks, I expect this one to become more effective as AI interactions shift to voice commands and chatbots get better at picking up subtle clues about our emotional state.
Is it unethical to mistreat AI?
It may turn out to be a mistake to anthropomorphize computers, because it can lead to unrealistic expectations about their abilities and limitations. It may also cause us to lower our guard when it comes to privacy and security, leaving us vulnerable to manipulation and scams. But being mean to AI also just feels wrong—even if you get better performance.
I know people who say “please” and “thank you” when talking to an AI, and I think it reveals a lot about someone if they’re rude to an inanimate object. If talking to Amazon Alexa reinforced harmful gender stereotypes, what will happen when real-time conversation with GPT-4o is widely available? I found myself flinching a couple of times during the OpenAI demo of GPT-4o when the presenters rudely interrupted her, because it felt so real.
Source: YouTube.
In my own usage, I have even caught myself feeling sorry for ChatGPT when I have asked it a series of dumb questions in a row. I have to remind myself it won’t get tired of me like a normal human coworker might. It probably isn’t healthy to anthropomorphize a computer, but it doesn’t feel any healthier to treat one like dirt just because you know it’s not conscious.
While I know I can get a small boost in performance by lying to the AI or making threats, I use this technique less than I probably should. Logically, I know it doesn’t matter what I say to an AI, but I worry that if I spend all day mistreating a human simulator, it’ll leak into my real-world behavior.
As artificial intelligence approaches human-level abilities, best practices for working effectively with these tools will likely converge on what works best with humans. We work most effectively when pursuing autonomy, mastery, and purpose with positive reinforcement—not with a metaphorical gun to our heads. I’d expect the AI agent employees of the future to be no different.
The state of prompt engineering today reminds me of the field of search engine optimization in the early 2000s: Everyone was busy trying to reverse-engineer and trick Google in order to rank number one for their keywords, only to go back to square one whenever Google updated its search algorithm. Google was only ever trying to build a better search engine based on human preferences, and the best long-term play was to forget the tricks and write stuff that people like to read. Similarly, OpenAI has only ever tried to build a better AI assistant based on human preferences, and the best long-term play is to forget the tricks and speak to AI the way people like to be spoken to.
As AI surpasses human intelligence, the models’ goals and ours may begin to diverge. Nobody knows what would happen in that scenario, but there’s Roko’s basilisk to consider—the thought experiment that a superintelligent AI might decide to punish anyone with a history of being mean to machines. Better to hedge your bets. Treat your AI as you would like to be treated. Oh, and maybe stop torturing robot dogs.
Source: X/Yifei.
Michael Taylor is a freelance prompt engineer, the creator of the top prompt engineering course on Udemy, and the coauthor of Prompt Engineering for Generative AI. He previously built Ladder, a 50-person marketing agency based out of New York and London.
To read more essays like this, subscribe to Every, and follow us on X at @every and on LinkedIn.