How Intelligent Is Artificial Intelligence?
AI Business Strategy · 4 min read

Sam Carter

AI Strategy Consultant

November 4, 2025

Artificial Intelligence (AI) and machine learning methods such as Deep Learning are no longer futuristic concepts. They are deeply woven into our everyday lives—powering digital speech assistants, translation services, and medical diagnostics, and laying the foundation for technologies like autonomous driving. Fueled by massive datasets and advanced computing architectures, these learning algorithms often appear to reach human-level performance, and sometimes even surpass it. But here's the catch: most users have no idea how AI systems actually arrive at their conclusions. That leaves an open question—are these decisions truly "intelligent," or merely the product of problem-solving strategies that succeed only on average?

Researchers from TU Berlin, Fraunhofer Heinrich Hertz Institute (HHI), and the Singapore University of Technology and Design (SUTD) decided to investigate this very issue. Their work provides a rare glimpse into the spectrum of “intelligence” displayed by current AI systems, using a novel technology that enables automated analysis and quantification of AI decision-making.

The foundation of this technology is a method called Layer-wise Relevance Propagation (LRP), developed earlier by TU Berlin and Fraunhofer HHI. LRP makes it possible to visualize the exact input variables that AI systems use to reach a decision. Building on LRP, the team introduced Spectral Relevance Analysis (SpRAy)—a technique capable of identifying and quantifying a wide range of problem-solving behaviors. With SpRAy, researchers can now uncover flawed reasoning patterns even in extremely large datasets.
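To make the core idea behind LRP concrete: the method redistributes a network's output score backward, layer by layer, so that each input variable receives a share of the "relevance" for the decision. The sketch below shows one backward step for a single dense layer using the widely described epsilon rule; the function name, the toy shapes, and the use of NumPy are my own assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def lrp_epsilon(a, W, R_out, eps=1e-6):
    """One backward LRP step with the epsilon rule:
    R_i = a_i * sum_j w_ij * R_j / (z_j + eps * sign(z_j)),
    where z_j = sum_i a_i * w_ij are the layer's pre-activations."""
    z = a @ W                                  # pre-activations, shape (n_out,)
    z = z + eps * np.where(z >= 0, 1.0, -1.0)  # stabilizer avoids division by zero
    s = R_out / z                              # relevance per unit of activation
    return a * (W @ s)                         # redistribute relevance to inputs
```

A key property of this rule is (approximate) conservation: the total relevance arriving at the inputs matches the relevance that left the outputs, so the resulting heatmap accounts for the whole decision rather than an arbitrary fraction of it.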

This is a major step toward what is known as “explainable AI.” According to Dr. Klaus-Robert Müller, Professor for Machine Learning at TU Berlin, this development is critical: “Specifically in medical diagnosis or in safety-critical systems, no AI systems that employ flaky or even cheating problem-solving strategies should be used.” Thanks to these new algorithms, scientists can now test any AI system and generate quantitative insights about how it works. Their findings reveal a wide spectrum—ranging from naive problem-solving to outright “cheating” strategies, all the way to surprisingly advanced and strategic solutions.

Dr. Wojciech Samek, group leader at Fraunhofer HHI, explained: “We were very surprised by the wide range of learned problem-solving strategies. Even modern AI systems don’t always find a solution that makes sense from a human perspective. Sometimes they rely on so-called ‘Clever Hans Strategies.’”

The reference comes from Clever Hans, a horse in the early 1900s that appeared to perform math but was actually just responding to subtle cues from its trainer. Similarly, the team discovered AI systems using misleading shortcuts. For example, one AI that won international image classification contests classified images as “ship” simply because of visible water, “train” because of railway tracks, or even identified categories based on copyright watermarks. In reality, it wasn’t detecting the actual objects—it was exploiting contextual clues.
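Shortcut strategies like these can be surfaced at scale because, in the spirit of SpRAy, heatmaps produced by the same strategy (e.g. "always look at the water") cluster together, and cluster structure shows up as near-zero eigenvalues in the spectrum of a similarity graph built over the heatmaps. Below is a minimal sketch of that spectral step, assuming NumPy, a Gaussian affinity, and synthetic heatmaps; the function name and bandwidth heuristic are illustrative assumptions, not the published pipeline.

```python
import numpy as np

def spray_spectrum(heatmaps, sigma=None):
    """Eigenvalues of the normalized graph Laplacian over flattened
    relevance heatmaps. k well-separated strategy clusters appear as
    roughly k near-zero eigenvalues at the bottom of the spectrum."""
    X = np.asarray(heatmaps).reshape(len(heatmaps), -1)
    D2 = ((X[:, None] - X[None, :]) ** 2).sum(-1)       # squared pairwise distances
    if sigma is None:
        sigma = np.sqrt(np.median(D2[D2 > 0]))          # heuristic bandwidth
    W = np.exp(-D2 / (2 * sigma ** 2))                  # Gaussian affinity matrix
    d = W.sum(1)
    L = np.eye(len(X)) - W / np.sqrt(np.outer(d, d))    # normalized Laplacian
    return np.sort(np.linalg.eigvalsh(L))               # ascending eigenvalues
```

On heatmaps that fall into two distinct groups (say, relevance on the object vs. relevance on a watermark), the two smallest eigenvalues sit near zero while the third is clearly larger—a signal that the model is using more than one decision strategy and is worth a closer look.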

Strikingly, even deep neural networks, often assumed to be immune to such errors, showed similar flaws. Some based their decisions on artifacts created during image preprocessing rather than on the actual content. According to Müller, "Such AI systems are not useful in practice. Their use in medical diagnostics or in safety-critical areas would even entail enormous dangers." He added that as many as half of the AI systems currently in use may rely, implicitly or explicitly, on such Clever Hans strategies.

Source: Singapore University of Technology and Design
