How to use chatbots to make you smarter, not dumber

Use AI thoughtlessly and it dulls your mind. But with a strategic approach and the right prompts, it can be a powerful tool

by Nick Kabrél, researcher on human-centred AI



I bet you’re hearing about AI, especially the chatbots, from every corner nowadays: the news, social media, colleagues at work, perhaps even from your grandmother. All of us are. In fact, some might even have developed an allergic reaction to the AI hype, with its overpromising and doom-mongering. One day it’s ‘AGI [AI as smart as humans] is here,’ the next it’s ‘AI will take your job,’ and the day after that it’s ‘AI is better than your therapist.’

I’m not here as an AI enthusiast, but as a guide to help you navigate this new world. According to surveys, 78 per cent of organisations, 81 per cent of researchers, 86 per cent of students, and nearly two-thirds of physicians now use AI in some way. Whether we like it or not, chatbots are here to stay. It’s not necessarily a problem, but it risks becoming one if people use chatbots in harmful ways. I’m going to help you avoid that.

Early findings suggest that excessive and thoughtless engagement with chatbots can lead to deleterious cognitive effects. For example, research led by the Wharton School at the University of Pennsylvania showed that, while a chatbot improved students’ mathematics performance, the benefit was like a crutch – when the AI was taken away, the students’ performance was worse than a control group. In another study, Michael Gerlich at the SBS Swiss Business School found that the more students used chatbots, the more their critical thinking abilities suffered. Besides that, a recent brain-imaging study by researchers at MIT revealed that students who wrote an essay with a chatbot couldn’t remember the contents minutes later, and their brain activity was lower and less coherent than in the group without a chatbot.

Whether your preferred AI is ChatGPT, Gemini, Claude or Grok, you might be concerned about these kinds of harms to your own critical thinking and creativity. To help you avoid this, I’m going to share recommendations for using AI wisely. I’ll focus on intellectual and project work, such as writing, research and idea development, rather than emotional support, life advice or coding.

The good news is, it’s not the technology itself that risks making us more stupid, but the way we use it. To protect yourself and potentially gain benefits, you simply need to put a little more effort into designing smarter interactions between your mind and chatbots.

Key points

  1. When we use AI chatbots excessively or mindlessly, they can hinder our intellectual performance. The good news is that the problem isn’t with the technology but with the way we use it. To protect yourself and potentially gain benefits, you simply need to put a little more effort into designing smarter interactions between your mind and chatbots.
  2. Reflect and set boundaries. Step back and ask yourself what matters for your personal and professional development over the next three to 10 years. This can help you better judge whether using a chatbot will help or hinder your longer-term aims. Consider keeping a decision tree near your workspace to guide you, in the moment, on whether to use AI.
  3. Use AI strategically. The most important rule is to always start your thinking without AI. When you do get input from a chatbot, always treat it with scepticism and verify important claims. Use creative ‘prompts’ (instructions) to avoid generic answers or advice – for instance, ask the bot to critique your work like an ancient philosopher. Also, keep track of how much you rely on AI for a task, to avoid overdependence.
  4. Design effective prompts. There is an important distinction between ‘directive mode’ prompts that encourage the chatbot to be like a work supervisor who critiques your work, and ‘non-directive mode’ prompts that instruct it to be more like an intellectual tutor helping you to develop your thoughts and ideas. The former is ideal for when you have a product (an article or developed idea) already; the latter is ideal for when your thoughts and ideas are only half-baked.

Reflect and set boundaries

Without reflection, it’s easier to fall into using chatbots automatically or thoughtlessly, which can undermine your personal growth in the long run. Before we get into specific ways to use AI, I recommend you take a step back and ask yourself: what matters for my personal and professional development over the next three, five or 10 years? Write down your key objectives, describe your ideal self, or simply list the skills you want to cultivate. This will give you a clear reference point against which you can judge your AI use. Here’s an example:

Goal: In five years, I want to become a business consultant.
Abilities needed: Generating creative solutions, decision-making for complex scenarios, critical evaluation of trade-offs, persuasive presentation, flexibility across contexts, etc.

Having reflected in this way, before you begin any work task, you can better judge whether using a chatbot will help or hinder your longer-term aims. For example, if you got into the habit of using a chatbot to design creative strategies from scratch, it could harm your acquisition of skills in strategic decision-making. On reflection you might decide it makes sense to reserve chatbots only for routine or monotonous tasks that don’t directly impact your professional development.

To streamline this process, you could draw a decision tree and keep it near your workspace. I’ve shared my own decision tree below (I explain some of the terms in it such as ‘directive mode’ later in this Guide). Feel free to adapt my tree for your own use.

A decision tree for chatbot usage, covering task importance, data sensitivity and learning outcomes. Source: Nick Kabrel (created in Draw.io)
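If you prefer something more explicit than a diagram, the logic of a tree like this can be sketched in a few lines of code. The sketch below is purely illustrative – the questions, their order and the advice are my assumptions about what a personal decision tree might contain, not the exact contents of my own tree:

```python
# Illustrative sketch of a personal decision tree for chatbot use.
# The criteria and ordering are assumptions, not a fixed rule.
def should_use_chatbot(builds_key_skill: bool,
                       data_is_sensitive: bool,
                       task_is_routine: bool) -> str:
    if data_is_sensitive:
        # Sensitive or confidential material stays out of chatbots.
        return "Don't use AI: keep sensitive data out of chatbots."
    if builds_key_skill:
        # Tasks tied to long-term skills: struggle first, then get feedback.
        return "Try alone first; then use AI in non-directive mode."
    if task_is_routine:
        # Routine, monotonous work that doesn't shape your development.
        return "Use AI freely; directive mode is fine."
    return "Use AI with scepticism and verify important claims."

print(should_use_chatbot(builds_key_skill=True,
                         data_is_sensitive=False,
                         task_is_routine=False))
# → Try alone first; then use AI in non-directive mode.
```

Adapt the questions and answers to your own goals; the point is to make the decision explicit rather than automatic.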

Use AI strategically

Always start without AI

Now let’s drill down to specific ways to use AI wisely. If you follow only one rule, make it this: for any task where thinking matters, always try on your own first, and only then use chatbots. You can think of this strategy as a sandwich:

  • Bottom layer = your ‘raw material’. Whatever the task, be it drafting an article, crafting an argument or idea, or planning a presentation, start by producing something on your own.
  • Middle layer = chatbot support. Ask a chatbot to critique your work, point out blind spots, challenge your assumptions, or suggest different angles. Reflect on that feedback.
  • Top layer = integration. Selectively incorporate the feedback into your original work. Decide for yourself what strengthens your thinking and discard what doesn’t.

This strategy not only preserves your authenticity but also helps you learn. When you struggle first, you build your own understanding of the problem and how to approach it. Using a chatbot afterward allows you to refine and expand that understanding. In contrast, relying on ready-made solutions leaves the AI’s ideas disconnected from your thinking, making them harder to apply later.

Adopt a sceptical mindset

When receiving an answer from chatbots, always remain sceptical and question the incoming information. This is important because AI is prone to ‘hallucinations’ (generating false information with high confidence). Here’s how to minimise hallucinations and deal with them:

  • Verify important claims with sources. For factual or evidence-based queries, use systems that integrate verified sources (such as ChatGPT with ‘web search’ enabled, Perplexity or Scite). Many chatbots now also offer a ‘deep research’ mode. For research tasks, this is usually the better choice because it relies less on the model’s internal training data and more on live internet searches. These modes also show you the steps the system took to reach its answer, making the process less of a black box and easier for you to evaluate. Click through the links it provides for specific claims, locate the exact claim in the source, and confirm it for yourself. You could also use specific ‘prompts’ to help (prompts are the instructions you give the AI – more on these later) such as: List three to five relevant sources and link directly to where this claim appears. If no reliable source exists, say so clearly. If uncertain, respond with: ‘I don’t know.’ Do not guess.
  • Investigate biases and blind spots. Sometimes, even if the answer is technically correct, it may be narrow, generic or biased due to limitations of the AI’s training data. To avoid this, build a habit of actively probing for what’s missing with prompts like: What potential biases might shape this response? or Suggest two to three alternative perspectives or creative solutions that go beyond conventional answers.

Push your creativity

Don’t settle for predictable prompts such as summarise this text or rewrite my paragraph. Almost everyone uses such conventional prompts, which explains why the outputs are often generic. Instead, train your creativity and treat chatbots like a playground for your imagination. Here are a few concrete ways that I’ve used this approach myself:

  • Role-playing. When testing an idea for a talk on AI in education, I asked: Critique this idea as if you were an ancient philosopher sceptical of technology. Then critique it again as if you were a venture capitalist seeking commercial potential. The combination of two different views helped make my argument more comprehensive.
  • Search for metaphors. At one point, I was struggling to explain in simple language why relying on chatbots for shortcuts undermines authentic learning. I asked the chatbot: Suggest metaphors that capture why using AI as a shortcut is harmful. It came up with a ‘teleportation versus map-based navigation’ metaphor. That concept later helped me explain the idea to students clearly and in an engaging manner.
  • Search for new angles. When I was developing a concept from cognitive neuroscience, I asked: What are similar processes in physics or mathematics that might shed light on this idea? The comparison it generated revealed surprising parallels, including a more technical description of the process than I had initially articulated. That not only enriched my own understanding, but also gave me language to communicate the concept more rigorously.

Note that, in general, more advanced models (as of September 2025, those are GPT-5, Grok 4, Gemini 2.5 Pro and Claude Sonnet 4.5) tend to be more creative because they capture a wider range of information and can bridge greater distances between concepts.

Track and balance your own contributions against the AI’s

It’s easy to get lazy and lapse into accepting too much input from the AI. To retain agency and authorship over your work, use this tracking exercise:

  • Create a table with two columns – one for your input (ideas, arguments, outlines, reasoning) and one for the chatbot’s input (rephrasing, examples, metaphors, critiques).
  • Score each contribution from 1 to 10 based on how central it was to shaping the work. For example, if your original outline set the whole direction, you might score it 9. If the chatbot suggested a metaphor that you barely used, maybe 2.
  • Add up the scores to see the balance. Did you do most of the core intellectual work, or did the chatbot carry the major weight?
  • If you notice the chatbot scoring higher than you’d like on key tasks, change how you prompt it. For example, limit it to providing you with hints instead of answers, or only ask for supportive tasks such as grammar polishing, while keeping the major contributions to yourself.
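If a table on paper feels too cumbersome, the same tally can be sketched in a few lines of code. Everything below – the example contributions, their scores and the 60 per cent threshold – is illustrative, not a prescribed formula:

```python
# Illustrative sketch of the contribution-tracking exercise.
# Scores (1-10) reflect how central each contribution was to the work.
my_contributions = {
    "original outline": 9,
    "core argument": 8,
    "examples from my own experience": 6,
}
chatbot_contributions = {
    "suggested metaphor (barely used)": 2,
    "grammar polishing": 3,
    "critique of one section": 5,
}

my_total = sum(my_contributions.values())
bot_total = sum(chatbot_contributions.values())
share = my_total / (my_total + bot_total)

print(f"You: {my_total}, chatbot: {bot_total}")
print(f"Your share of the core work: {share:.0%}")
if share < 0.6:  # the threshold is a personal choice, not from research
    print("Consider limiting the chatbot to hints or polishing.")
```

Run over a few projects, a tally like this makes a creeping overdependence visible before it becomes a habit.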

Design effective prompts

I already mentioned a few prompts, but as these are so key to how you interact with AI, let’s dig deeper. Systems like ChatGPT, Claude, Grok or Gemini are created to be agreeable and pleasant, not to help you grow – unless you prompt them appropriately. Two distinct strategies are effective: prompting chatbots to provide ‘directive’ or ‘non-directive’ guidance. In the former mode, the chatbot stays close and actively directs your thinking; in the latter, it stands back and avoids directing you too much. Let’s go through both and see when it makes sense to use each.

Directive mode

Use this mode whenever you already have a tangible ‘product’ such as an article draft or a developed idea. Essentially, here you use a chatbot as a kind of work supervisor to provide feedback, evaluate your arguments or ideas, identify and critique the weak spots, etc. Since my work requires producing a lot of written content, I sometimes ask chatbots to critically evaluate my drafts. The standard prompt I use looks close to this:

Act as a critical reviewer. Evaluate the clarity of my argument, the logic of my structure, and the persuasiveness of my evidence. Point out weaknesses or gaps, and suggest ways to improve the flow and coherence. Do not rewrite the text yourself. Focus only on critical feedback.

Non-directive mode

Use this mode when you want to minimise the chatbot’s influence – you want it to act less like a supervisor and more like a tutor bringing out the best in you. The key difference from the directive mode is that you don’t want to receive direct instructions on what to fix, but instead you want the AI to point vaguely to areas that might potentially warrant your attention. The rest of the work, such as identifying a concrete issue and fixing it, is your own. For example, for my creative writing tasks, I often use a prompt as follows:

Never tell me directly what to fix and avoid strongly imposing your own vision. Instead, point neutrally and vaguely to the areas that might need more consideration. For example, raise potential ambiguities, confusing phrasing, or ideas that could be strengthened. If you see a concrete issue, don’t write ‘Argument X is weak, you need to add Y.’ Instead, write something like ‘Are there any weak spots in argument X? What points could be criticised by a sceptic?’ Or if specific sentences are unclear, don’t point that out directly and don’t rewrite them. Instead, write something like ‘Some sentences in the third paragraph might not be the best.’

The non-directive mode is especially useful when you have only a half-baked idea or an intuition that needs to be articulated more explicitly. You want the AI to trigger your own reflection but never define what the exact output of this reflection should be. To give a real example, I had wanted to write this Guide for a long time, but the idea was tangled in my head. Before starting, I prompted a chatbot along these lines:

I have an idea for an article on the topic X. However, I don’t yet have a clearly defined message and an article structure. Act as a Socratic sparring partner: ask me questions that challenge my assumptions, clarify my goals, and make me consider aspects I haven’t articulated yet. Do not provide any answers, opinions or suggestions. Stay as neutral as possible. Your role is only to provoke my thinking and help me understand myself better.

Based on that prompt, it asked me questions like: ‘What’s the real message you want to convey?’, ‘What do you mean that we need to avoid being influenced too much?’ and ‘In what specific ways can chatbots help develop critical thinking?’

The key is that it helped me uncover my own implicit thoughts and understand how I want this piece to look, not something imposed by the chatbot.

Final notes

The developmental psychologist Jean Piaget reportedly said: ‘The principal goal of education is to create men who are capable of doing new things, not simply of repeating what other generations have done … The second goal of education is to form minds which can be critical, can verify, and not accept everything they are offered.’ He didn’t say it yesterday: the quote originates from a conference in 1964. Just think about it: if Piaget warned us about this more than 60 years ago, when no one even had a mobile phone, how seriously should we take it now, when chatbots can literally think for us? I hope some of the strategies I’ve described will help you in the quest to preserve your critical thinking, creativity and authenticity, at a time when doing so is more urgent than ever.

