Inside the Mind of Claude: How Large Language Models Actually “Think”
This issue persists even in newer models like Claude 3.7 Sonnet, which can generate long "chain of thought" explanations. In some cases these are faithful, reflecting real internal computation. But on harder problems, such as estimating the cosine of a large number, Claude may generate confident reasoning steps without ever performing the actual calculation. Researchers describe this behavior as "bullshitting" in the philosophical sense: producing plausible-sounding reasoning with no concern for whether it is true.
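To make the cosine example concrete, here is a minimal Python sketch of what a faithful calculation involves, and how claimed intermediate values could be checked against it. The numbers in `claimed_steps` are invented placeholders standing in for figures a model might assert in its chain of thought; they are not taken from the actual research.

```python
import math

x = 23423  # an arbitrary large input, chosen for illustration

# A faithful calculation reduces the angle modulo 2*pi, then takes the cosine.
reduced = math.fmod(x, 2 * math.pi)
true_value = math.cos(x)

print(f"x mod 2*pi = {reduced:.6f}")
print(f"cos({x})   = {true_value:.6f}")

# An unfaithful chain of thought asserts plausible-looking numbers without
# doing that reduction. Comparing each claimed value against the real one
# exposes the gap. These claimed values are hypothetical.
claimed_steps = {"x mod 2*pi": 1.234567, "cos(x)": 0.33}
actual_steps = {"x mod 2*pi": reduced, "cos(x)": true_value}

for name, claimed in claimed_steps.items():
    actual = actual_steps[name]
    faithful = math.isclose(claimed, actual, abs_tol=1e-2)
    print(f"{name}: claimed {claimed}, actual {actual:.6f}, faithful={faithful}")
```

The point is not the arithmetic itself but the asymmetry: the real computation is cheap to verify, so a reasoning trace that never actually touches it can be caught.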
Sounds like me. I’m also bullshitting my way through life.