Emergent Reasoning Patterns in Chain-of-Thought Prompted Language Models
Abstract
Chain-of-thought (CoT) prompting has demonstrated remarkable effectiveness in eliciting complex reasoning capabilities from large language models (LLMs). In this work, we systematically investigate the emergent reasoning patterns that arise when LLMs are prompted to generate intermediate reasoning steps. Through extensive experiments across arithmetic, symbolic, and commonsense reasoning benchmarks, we identify three distinct phases of reasoning emergence as a function of model scale. We introduce the Reasoning Density Score (RDS), a novel metric quantifying the information-theoretic efficiency of intermediate reasoning steps, and reveal that reasoning emergence depends critically on the interaction between pretraining data diversity, prompt structure, and attention head specialization.
1. Introduction
The discovery that large language models can perform complex multi-step reasoning when prompted with intermediate steps [1] has fundamentally altered our understanding of what autoregressive models can achieve. Chain-of-thought prompting, first formalized by Wei et al. (2022), demonstrates that supplying exemplars with worked intermediate steps — or, in the zero-shot variant, simply appending "Let's think step by step" — can unlock reasoning capabilities that appear absent under standard prompting.
However, a central question remains open: what computational mechanisms underlie the emergence of reasoning in CoT-prompted models? Prior work has largely treated CoT as a black-box technique, focusing on downstream accuracy rather than characterizing the structure of the reasoning traces themselves.
In this paper, we address this gap through three contributions:
- A formal taxonomy of reasoning primitives extracted from over 50,000 CoT traces across five model families.
- The Reasoning Density Score (RDS), an information-theoretic metric that captures how efficiently a model utilizes intermediate steps.
- An empirical phase diagram mapping the emergence of reasoning capabilities as a function of model scale, prompt complexity, and pretraining composition.
2. Related Work
The capacity of LLMs to perform reasoning has been explored along several axes. Wei et al. (2022) [1] introduced chain-of-thought prompting, demonstrating significant gains on arithmetic and commonsense benchmarks. Kojima et al. (2022) [2] showed that zero-shot CoT ("Let's think step by step") is surprisingly effective, suggesting that reasoning pathways are latent in pretrained models rather than solely learned from few-shot exemplars.
Wang et al. (2023) [3] introduced self-consistency decoding, sampling multiple reasoning paths and selecting the majority answer, improving robustness. More recently, work on Tree-of-Thought [4] and Graph-of-Thought [5] has extended the linear chain paradigm to branching and cyclic reasoning structures.
Our work differs from prior studies in that we focus not on improving downstream task performance but on characterizing the internal structure of emergent reasoning. We draw on information-theoretic tools and mechanistic interpretability to provide a principled account of how reasoning patterns emerge.
3. Methodology
3.1 Reasoning Primitive Taxonomy
We define a set of atomic reasoning operations observed in CoT traces:
- Decomposition ($D$): Breaking a complex problem into subproblems.
- Retrieval ($R$): Invoking factual knowledge from parametric memory.
- Transformation ($T$): Applying a mathematical or logical operation.
- Verification ($V$): Checking the validity of an intermediate result.
- Synthesis ($S$): Combining partial results into a final answer.
Each CoT trace is annotated as a sequence of primitives $P = (p_1, p_2, \dots, p_n)$, where $p_i \in \{D, R, T, V, S\}$.
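As a concrete illustration of this annotation scheme, a trace can be represented as a labeled primitive sequence. The `Primitive` names mirror the taxonomy above; the `Trace` container and its fields are a hypothetical convenience, not the authors' data format:

```python
from dataclasses import dataclass
from enum import Enum

class Primitive(Enum):
    """Atomic reasoning operations from the taxonomy in Sec. 3.1."""
    D = "decomposition"   # break the problem into subproblems
    R = "retrieval"       # invoke factual knowledge from parametric memory
    T = "transformation"  # apply a mathematical or logical operation
    V = "verification"    # check the validity of an intermediate result
    S = "synthesis"       # combine partial results into a final answer

@dataclass
class Trace:
    """A CoT trace annotated as a primitive sequence (hypothetical container)."""
    steps: list[Primitive]  # annotated primitive sequence p_1 ... p_n
    n_tokens: int           # token length of the raw trace

trace = Trace(
    steps=[Primitive.D, Primitive.R, Primitive.T, Primitive.V, Primitive.S],
    n_tokens=120,
)
```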
3.2 Reasoning Density Score
We introduce the Reasoning Density Score (RDS) to measure the efficiency of a reasoning trace. Let $H(P)$ denote the Shannon entropy of the primitive sequence $P$ and $N$ the number of tokens in the trace:

$$\mathrm{RDS}(P) = \frac{H(P)}{N}\,\bigl(1 + f_T(P) + f_V(P)\bigr),$$

where $f_T$ and $f_V$ denote the fractions of transformation and verification primitives in $P$.
The first factor captures the diversity of reasoning operations relative to trace length, while the second factor weights traces that contain more transformation and verification steps — the primitives most strongly correlated with correct answers in our preliminary analysis.
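Under one plausible reading of this definition (entropy of the empirical primitive distribution per token, reweighted by the transformation and verification fractions), RDS can be computed as follows; the published form may differ in detail:

```python
import math
from collections import Counter

def rds(steps: list[str], n_tokens: int) -> float:
    """Reasoning Density Score: (H(P) / N) * (1 + f_T + f_V).

    steps    -- primitive labels, e.g. ["D", "R", "T", "V", "S"]
    n_tokens -- token length of the trace (N)
    """
    counts = Counter(steps)
    n = len(steps)
    # Shannon entropy (bits) of the empirical primitive distribution
    h = -sum((c / n) * math.log2(c / n) for c in counts.values())
    f_t = counts["T"] / n  # fraction of transformation steps
    f_v = counts["V"] / n  # fraction of verification steps
    return (h / n_tokens) * (1 + f_t + f_v)
```

A trace that repeats a single primitive has zero entropy and hence zero RDS, matching the intuition that repetitive traces carry little reasoning per token.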
3.3 Experimental Setup
We evaluate five model families across a range of scales:
| Model Family | Parameters | Variants Tested |
|---|---|---|
| LLaMA-3 | 8B, 70B, 405B | Base, Instruct |
| Qwen-2 | 7B, 72B | Base, Chat |
| Mistral | 7B, 8x7B (MoE) | Base, Instruct |
| GPT-4 class | ~1.8T (est.) | API |
| Claude-3 | Undisclosed | API |
Benchmarks include GSM8K (arithmetic), BIG-Bench Hard (symbolic/logical), StrategyQA (commonsense), and MATH (competition mathematics). We generate 10 CoT traces per problem using nucleus sampling and annotate primitives using a combination of GPT-4-based classification and human validation.
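Agreement between the GPT-4 classifier and human annotators is conventionally scored with a chance-corrected statistic such as Cohen's kappa. A minimal self-contained sketch (not the authors' evaluation code):

```python
from collections import Counter

def cohens_kappa(a: list[str], b: list[str]) -> float:
    """Cohen's kappa between two annotators' primitive labels.

    Corrects raw agreement for the agreement expected by chance
    given each annotator's marginal label distribution.
    """
    assert len(a) == len(b) and a, "need equal-length, non-empty label lists"
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n            # observed agreement
    ca, cb = Counter(a), Counter(b)
    p_e = sum((ca[l] / n) * (cb[l] / n) for l in ca | cb)  # chance agreement
    return (p_o - p_e) / (1 - p_e)
```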
4. Results and Discussion
4.1 Three Phases of Reasoning Emergence
Our experiments reveal a clear phase transition pattern in reasoning capability:
Phase I — Pattern Mimicry (< 10B parameters): Models produce CoT traces that superficially resemble reasoning but lack logical coherence. The primitive distribution is dominated by retrieval ($R$) with minimal verification ($V$), and the average RDS is the lowest of the three phases.
Phase II — Structured Decomposition (10B–70B): Models begin to reliably decompose problems and apply transformations. The primitive sequence follows predictable patterns (e.g., $D \to R \to T \to S$), and RDS increases accordingly. Accuracy on GSM8K jumps from 18% to 64%.
Phase III — Adaptive Strategy Selection (> 70B): Models dynamically select reasoning strategies based on problem characteristics. We observe spontaneous verification loops — the model checks intermediate results and backtracks upon detecting errors — in 23.7% of traces. RDS reaches its highest values, and GSM8K accuracy rises to 89%.
The transition between phases is sharp, consistent with the hypothesis that reasoning is an emergent capability [6].
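The Phase III verification-loop statistic can be operationalized over annotated primitive sequences. The detection rule below (a verification step followed by a return to transformation, i.e. a backtrack) is a hypothetical operationalization, not the paper's exact criterion:

```python
def has_verification_loop(steps: list[str]) -> bool:
    """True if the trace verifies (V) and later resumes transforming (T),
    i.e. the model checked an intermediate result and backtracked."""
    for i, s in enumerate(steps):
        if s == "V" and "T" in steps[i + 1:]:
            return True
    return False

def loop_rate(traces: list[list[str]]) -> float:
    """Fraction of traces exhibiting a verification loop."""
    return sum(has_verification_loop(t) for t in traces) / len(traces)
```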
4.2 Attention Head Specialization
Using activation patching on LLaMA-3-70B, we identify a subset of attention heads in layers 45–58 that are causally responsible for verification operations. Ablating these heads reduces the frequency of verification ($V$) primitives by 78% and GSM8K accuracy by 14 percentage points, while leaving the remaining primitives largely intact.
This suggests that verification is implemented by specialized circuits that emerge during pretraining, rather than being distributed across the network.
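Mechanically, this kind of head ablation amounts to zeroing the selected heads' contribution to the attention output before the output projection recombines them. A framework-agnostic NumPy sketch (head indices and shapes illustrative):

```python
import numpy as np

def ablate_heads(attn_out: np.ndarray, heads: list[int]) -> np.ndarray:
    """Zero out specific attention heads.

    attn_out -- per-head attention output of shape (seq_len, n_heads, head_dim),
                taken before the output projection concatenates the heads
    heads    -- indices of heads to ablate
    """
    out = attn_out.copy()       # leave the original activations untouched
    out[:, heads, :] = 0.0      # silence the selected heads' contribution
    return out
```

In practice this would be applied via a forward hook at the relevant layers; the causal effect is then read off from the change in downstream behavior.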
4.3 Effect of Pretraining Data Composition
We find a strong positive correlation between the fraction of mathematical and code content in pretraining data and the RDS of the resulting model. Code-rich models exhibit Phase II reasoning at 7B parameters — a scale at which code-poor models remain in Phase I. This aligns with the hypothesis that code pretraining instills structured problem-solving heuristics [7].
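The reported association is a standard Pearson correlation between per-model code/math fraction and RDS. A self-contained sketch, with synthetic placeholder numbers that are not the paper's data:

```python
import math

def pearson_r(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Synthetic illustration only -- NOT the paper's measurements:
code_frac = [0.05, 0.10, 0.20, 0.30]  # fraction of code/math in pretraining
model_rds = [0.11, 0.14, 0.21, 0.27]  # RDS of the resulting model (placeholder)
```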
4.4 Prompt Structure Effects
We vary prompt structure along three dimensions: (a) number of exemplars (0, 1, 4, 8), (b) exemplar complexity, and (c) instruction specificity. Key findings:
- Zero-shot CoT achieves 82% of few-shot CoT performance at the 70B+ scale, but only 41% at the 7B scale.
- Increasing exemplar complexity beyond the target task difficulty degrades performance for Phase I/II models but improves it for Phase III models.
- Instruction specificity (e.g., "First identify the unknowns, then set up equations") provides the largest gains for Phase II models, with the biggest improvement on MATH.
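The three prompt dimensions can be realized with a simple prompt builder; the instruction strings and exemplar layout here are illustrative, not the paper's exact templates:

```python
def build_prompt(question: str,
                 exemplars: list[tuple[str, str]],
                 n_shots: int = 0,
                 specific_instruction: bool = False) -> str:
    """Assemble a CoT prompt varying shot count and instruction specificity.

    exemplars -- (question, worked chain-of-thought) pairs; exemplar
                 complexity is controlled by what the caller supplies
    """
    if specific_instruction:
        parts = ["First identify the unknowns, then set up equations."]
    else:
        parts = ["Let's think step by step."]
    for q, cot in exemplars[:n_shots]:
        parts.append(f"Q: {q}\nA: {cot}")
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)
```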
5. Conclusion
We have presented a systematic analysis of emergent reasoning in chain-of-thought prompted language models. Our three-phase model of reasoning emergence — pattern mimicry, structured decomposition, and adaptive strategy selection — provides a principled framework for understanding how reasoning capabilities scale. The Reasoning Density Score offers a practical metric for evaluating reasoning quality beyond task accuracy.
Our findings carry implications for model development: (1) verification capabilities emerge from specialized attention circuits, suggesting targeted training objectives could accelerate reasoning emergence; (2) code-heavy pretraining mixtures lower the scale threshold for structured reasoning; (3) prompt engineering strategies should be calibrated to the model's reasoning phase.
Future work will extend this analysis to multi-turn reasoning, tool-augmented settings, and the interaction between chain-of-thought and reinforcement learning from human feedback (RLHF).
References
[1] J. Wei et al., "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models," NeurIPS, 2022.
[2] T. Kojima et al., "Large Language Models are Zero-Shot Reasoners," NeurIPS, 2022.
[3] X. Wang et al., "Self-Consistency Improves Chain of Thought Reasoning in Language Models," ICLR, 2023.
[4] S. Yao et al., "Tree of Thoughts: Deliberate Problem Solving with Large Language Models," NeurIPS, 2023.
[5] M. Besta et al., "Graph of Thoughts: Solving Elaborate Problems with Large Language Models," AAAI, 2024.
[6] J. Wei et al., "Emergent Abilities of Large Language Models," TMLR, 2022.
[7] R. Li et al., "Code Pretraining Improves Mathematical Reasoning in Language Models," ICML, 2024.