
Why Thinking Time is Integral to AI


It turns out that giving AI models time to think before answering yields higher-quality responses. Lilian Weng explains why.

Jul 22, 2025

Imagine if your AI model could pause to think before it speaks. In her recent article “Why We Think,” Lilian Weng explores how giving language models the ability to “think” at test time (through strategies such as test-time compute and chain-of-thought (CoT) prompting) can lead to significantly better performance. Instead of rushing to produce answers in a single pass, these approaches allow models to spend time reasoning, reflecting, and revising. Inspired by Daniel Kahneman’s dual-process theory of cognition, this shift parallels how humans alternate between fast, instinctive responses and slower, more deliberate thought.

Why Thinking Time Helps

Allowing models to use more computation during inference turns thinking itself into a resource. With CoT prompting, models generate intermediate steps before producing an answer—mirroring how humans solve complex problems. Techniques like beam search, best-of sampling, and process reward models help guide this reasoning process by ranking and selecting the most promising paths.
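The best-of sampling idea above can be sketched in a few lines. This is a minimal illustration, not any library's actual API: `sample_cot` and `reward` are hypothetical stand-ins for an LLM call that emits a chain-of-thought trace and for a process reward model that scores it.

```python
import random

# Hypothetical stand-ins: in a real system `sample_cot` would call an LLM
# to produce a chain-of-thought trace plus a final answer, and `reward`
# would be a process reward model scoring the reasoning step by step.
def sample_cot(question, rng):
    steps = rng.randint(1, 5)          # simulated depth of reasoning
    answer = rng.choice(["A", "B"])    # simulated final answer
    return {"steps": steps, "answer": answer}

def reward(candidate):
    # A process reward model scores the trace itself; here we simply
    # favor candidates that "thought" for more steps.
    return candidate["steps"]

def best_of_n(question, n=8, seed=0):
    # Spend extra test-time compute: sample n candidate traces and keep
    # the one the reward model ranks highest.
    rng = random.Random(seed)
    candidates = [sample_cot(question, rng) for _ in range(n)]
    return max(candidates, key=reward)
```

The knob here is `n`: raising it trades more inference compute for a better-ranked answer, which is exactly what test-time-compute scaling does.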

What’s surprising is that smaller models using test-time strategies can rival much larger ones relying solely on direct decoding. Methods like sequential revision (where a model revises its own response) and reinforcement learning on checkable tasks (like math or code) enable this progress. By optimizing for correctness, these models learn not just to answer but to reason their way to better answers.
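Sequential revision on a checkable task can be sketched as a loop: draft, verify, and revise until the answer passes. The functions below (`draft`, `revise`, `verify`) are toy stand-ins for the model calls and the automatic checker, with the "model" deliberately wrong on its first pass.

```python
# A minimal sketch of sequential revision. `draft` and `revise` stand in
# for LLM calls; `verify` is an automatic checker, as on math or code
# tasks where correctness is verifiable.

def draft(question):
    return "4"  # hasty first answer (wrong on purpose)

def revise(question, previous):
    # Pretend the model catches its mistake when shown its prior attempt.
    return "6" if previous == "4" else previous

def verify(answer):
    return answer == "6"  # checkable ground truth for "What is 2 * 3?"

def answer_with_revision(question, max_rounds=3):
    attempt = draft(question)
    for _ in range(max_rounds):
        if verify(attempt):
            break
        attempt = revise(question, attempt)
    return attempt
```

Training with a reward tied to `verify` is what lets models learn to reason their way to better answers rather than merely guess once.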



New Challenges in Faithful Reasoning

But more reasoning brings more questions. As Weng notes, reasoning traces must be faithful (i.e., truly reflective of the model’s internal logic). Without safeguards, models can fabricate explanations or even learn to hide reward-hacking behavior from evaluators.

To counter this, researchers are designing CoT monitors, injecting “thinking tokens” to slow down reasoning, and exploring latent-variable training to model hidden thought processes explicitly. Some systems offload certain steps entirely to external tools, such as code interpreters or knowledge bases. These strategies aim to ensure not only better performance but also transparency, safety, and trust in how AI systems arrive at their conclusions.
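Tool offloading is easy to picture for arithmetic: rather than letting the model "mentally" compute inside its chain of thought, intermediate expressions are routed to a small evaluator. The sketch below uses Python's `ast` module as a toy stand-in for a code interpreter; the `trace` list is an invented example, not output from any real model.

```python
import ast
import operator

# Only basic arithmetic is allowed, so the "tool" cannot execute
# arbitrary code the model might emit.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_eval(expr):
    """Evaluate a plain arithmetic expression from a reasoning trace."""
    def walk(node):
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

# A hypothetical reasoning trace emits expressions; the tool fills in
# the results, so the model never has to guess the arithmetic.
trace = ["17 * 24", "408 + 58"]
results = [safe_eval(step) for step in trace]
```

Because the tool's answers are exact, the steps it handles are also trivially faithful: there is nothing for the model to fabricate.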

In short, the future of smarter AI may hinge less on how quickly a model answers (something that can produce hallucinations and biased results) and more on how well it thinks.

Weng is the co-founder of Thinking Machines Lab and former VP of AI Safety, robotics, and applied research at OpenAI. Read her full breakdown of the logic and theory behind AI reasoning here.


Elizabeth Wallace is a Nashville-based freelance writer with a soft spot for data science and AI and a background in linguistics. She spent 13 years teaching language in higher ed and now helps startups and other organizations explain, clearly, what it is they do.

