
AI, Critical Thinking and People

The future of critical thinking will not be decided by circuits alone, but by the everyday choices students and schools make.

Syrymbet Shertay

Imagine asking ChatGPT a question, receiving a convincing answer, and submitting it without second-guessing yourself. This is becoming all too common. AI provides fast, confident responses, but that very speed and confidence can conceal misinformation. So the question must be asked: does using AI help people think, or does it cause them to neglect the process of thought altogether?

AI enhances access to information, but it may also demotivate people from genuinely understanding that information or putting effort into the thinking behind it.

Context relevance
Generative AI tools are becoming the new norm. They have become so popular that many people stop asking whether the work produced is genuine. These tools have spread worldwide, and quickly: we see them in classrooms, workplaces, and research, and they have changed the way people think, investigate, and judge. Surveys and studies paint a mixed picture. Some users report saving considerable time and reaching ideas they could not have found otherwise. Educators and researchers, on the other hand, warn that constant use of AI, with its quick and effortless answers, can encourage purely surface-level work. At the same time, purpose-built tools show promise for guiding students to deeper thinking when AI is used as a tutor rather than a shortcut.

Definitions
AI: In this article, "artificial intelligence" refers mostly to generative text systems such as chat assistants and language models, which produce text based on patterns in the data they were trained on.
Critical thinking: In this article, it means cognitive skills such as checking evidence, weighing choices and options, reasoning, and justifying conclusions; in short, the work of forming ideas.

Evidence
Studies show:
- Shifts in reasoning when relying on AI (Lee, 2025): a Microsoft survey of 319 knowledge workers found that higher trust in AI often corresponded with lower self-reported cognitive effort.
- In contrast, people with stronger self-confidence tended to use AI while still applying critical checks (Lee, 2025).
- A systematic review of 355 studies reports an absence of guidance on how to use AI appropriately: educators have not yet adapted to its extensive use, and there are not enough validated frameworks that teachers can use to guide students (Ogunleye et al., 2024).
Together, these studies show a pattern: AI can reduce reflective effort and the double-checking of information unless users or instructors deliberately require both.

Example-1
Education:
There are reports of students misusing AI, but mixed-methods classroom studies show both risks and remedies. In a study of 40 university students comparing guided and unguided use of AI (Nasr et al., 2025):
- When unguided, students skipped the deeper "resolution" step and accepted AI outputs as given.
- When guided by Socratic questioning, students showed higher engagement and improved quiz scores in an online computer science course (Lee et al., 2025).
So tools used with guidance and scrutiny boost critical thinking; tools used without them corrode it.

Example-2
Journalism and workplaces (Larson, 2024):
Researchers warn of overreliance on technology: a tendency to accept machine outputs uncritically, with reduced cognitive reflection. Lee's survey of knowledge workers reported new roles emerging (Lee, 2025):
- Instead of doing every step themselves, people increasingly manage and verify AI outputs, combining AI answers with human judgment.
- This can raise overall quality when AI acts as support rather than a crutch, but it can also introduce errors if teams lack the training or time to verify AI claims.

How does it work? (mechanisms by which AI overuse erodes thinking)
1. Bias accumulation (García-López et al., 2025; Larson, 2024)
AI models inherit bias from their training data and can amplify errors. When users accept an AI's confident answer at face value, the biased content goes unchecked; it then circulates, feeds back, and accumulates with every unexamined response. Multiple studies raise concerns about algorithmic fairness for exactly this reason.

2. Authority bias (Lee, 2025)
Grammatically polished, fluent AI output seems authoritative. That fluency invites authority bias: the tendency to equate fluency with correctness. Surveys show that greater trust in AI corresponds with less critical effort spent on checking.

3. Cognitive task transfer (Nasr et al., 2025; Mohale & Suliman, 2025)
AI makes it convenient for users to simply hand over their tasks. Offloading (transferring tasks) can free time for more in-depth work, but only if the user reinvests the time saved. Without instruction, many students have been shown to skip deep analysis altogether.

4. Speed vs depth (Lee et al., 2025; dos Santos, 2023)
AI increases the speed of getting answers, but often at the cost of depth of understanding. It puts the user in a trade-off: faster replies versus slower reflection. Getting an answer quickly can help brainstorming and rapid inquiry, but it can also create the illusion that a problem is solved when only a sketch exists. Design and prompts that force deeper questioning can preserve in-depth understanding.

Counterargument
AI can be a tool rather than a replacement for thinking. To be sure, AI also aids critical thinking in multiple cases. For example:
- When built as a tutor, AI improves engagement, prompts reflection, and raises quiz performance (Lee et al., 2025; dos Santos, 2023).
- When users combine multiple AI outputs and apply their own checks, analytical skill can also increase (Lee, 2025).
These findings carry limitations, such as the lack of long-term evidence and of agreed-upon best practices and methods. So while AI can enhance thinking, evidence about its larger impacts is still to come.

How to use AI without weakening critical thinking:
- Use AI to generate options, not final answers. By asking for multiple approaches and picking one, you encourage comparison and choice.
- Request sources from AI and check them. This prevents accepting unsupported claims.
- Ask follow-up questions. After an AI reply, ask "What would contradict this?" or "What assumptions underlie this claim?" Methods designed around probing questions improve engagement and learning.
- Combine multiple AI tools with human review. Compare outputs and weigh the differences; divergent answers signal uncertainty and possible error (see the sketch after this list).
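To make the last two habits concrete, here is a minimal Python sketch of a cross-checking routine. Everything in it is a hypothetical placeholder: ask_model stands in for whatever chat API you actually use, and the model names and probing prompts are illustrative, not prescriptions. The point is the shape of the workflow: gather several answers, surface their differences, and force probing follow-ups before accepting anything.

# A toy cross-checking routine. ask_model is a hypothetical stand-in
# for a real chat API (OpenAI, Anthropic, a local model, and so on).

PROBING_QUESTIONS = [
    "What would contradict this answer?",
    "What assumptions underlie this claim?",
    "What sources support this, and how would I verify them?",
]

def ask_model(model: str, prompt: str) -> str:
    """Hypothetical placeholder: swap in a real chat-API call here."""
    return f"[{model}'s answer to: {prompt[:50]}...]"

def cross_check(question: str, models: list[str]) -> None:
    # 1. Collect an answer from each tool instead of trusting just one.
    answers = {m: ask_model(m, question) for m in models}

    # 2. Surface the differences: divergent answers signal uncertainty.
    for model, answer in answers.items():
        print(f"--- {model} ---\n{answer}\n")

    # 3. Ask probing follow-ups before accepting anything.
    for model, answer in answers.items():
        for q in PROBING_QUESTIONS:
            print(f"[{model}] {q}")
            print(ask_model(model, f"{q}\n\nAnswer under review: {answer}"), "\n")

    # The human still decides: verify the cited sources, then choose.

cross_check("What drives authority bias in AI use?", ["model-a", "model-b"])

None of this replaces human judgment; the routine only makes disagreement visible so that the final choice is deliberate rather than automatic.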

Conclusion:
This challenge extends beyond classrooms. Institutions need clear rules, training, and assessment methods that encourage verification and original reasoning rather than excessive reliance on AI. We must also address equity and privacy, so that access to helpful AI, and protection from its harms, is not unevenly distributed. Employers and news organizations must train people to manage AI outputs, making verification an explicit part of the work. The future of critical thinking will not be decided by circuits alone, but by the everyday choices students and schools make.

Bibliography:
Nasr, N. R., et al. “Exploring the Impact of Generative AI ChatGPT on Critical Thinking in Higher Education.” Education Sciences, vol. 15, no. 9, 2025, https://www.mdpi.com/2227-7102/15/9/1198.
Mixed-methods study of ~40 students; used for classroom evidence on modes of AI use and effects on “resolution”.
Lee, H. P. “The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort.” Microsoft Research, 2025, https://www.microsoft.com/en-us/research/wp-content/uploads/2025/01/lee_2025_ai_critical_thinking_survey.pdf.
Survey of 319 workers showing links between AI trust and reduced effort; used for claims about verification and stewardship.
Lee, Jeonghyun, et al. “Socratic Mind: Impact of a Novel GenAI-Powered Assessment Tool on Student Learning and Higher-Order Thinking.” arXiv, 2025, https://arxiv.org/abs/2509.16262.
Pilot of a Socratic AI tutor showing engagement and score gains; used for examples of designed AI that boosts thinking.
Salido, A. “Integrating Critical Thinking and Artificial Intelligence in Education.” ScienceDirect, 2025, https://www.sciencedirect.com/science/article/pii/S2590291125006527.
Review of integration challenges; used for institutional and instructional risk points.
García-López, I. M., et al. “Ethical and Regulatory Challenges of Generative AI in Education.” Frontiers in Education, 2025, https://www.frontiersin.org/journals/education/articles/10.3389/feduc.2025.1565938/full.
Covers privacy, bias, inequality; used for policy and equity implications.
Larson, B. Z. “Critical Thinking in the Age of Generative AI.” Academy of Management Learning & Education, 2024, https://journals.aom.org/doi/10.5465/amle.2024.0338.
Theoretical piece on automation bias and reduced metacognition; used for risk framing.
Mohale, N. E., and Z. Suliman. “The Influence of Generative AI and Its Impact on Critical Cognitive Engagement in an Open-Access, Distance-Learning University.” ResearchGate, 2025, https://www.researchgate.net/publication/395953150_The_Influence_of_Generative_AI_and_Its_Impact_on_Critical_Cognitive_Engagement_In_an_Open_Access_Distance_Learning_University.
Shows outsourcing of deep analysis in distance learning contexts; used for risk examples.
Ogunleye, B., et al. “A Systematic Review of Generative AI for Teaching and Learning Practice.” arXiv, 2024, https://arxiv.org/abs/2406.09520.
Systematic review identifying gaps and lack of frameworks; used to argue for more tested guidelines.
dos Santos, R. P. “Enhancing Chemistry Learning with ChatGPT and Bing Chat as Agents to Think With.” arXiv, 2023, https://arxiv.org/abs/2305.11890.
Case study where chat agents provoked reflection in chemistry tasks; used for examples of AI as a thinking partner.
