Flapping Airplanes AI Lab: Why This $180M Startup Wants to Build a Radically Different Kind of AI:
The next generation of AI research labs isn't just trying to build bigger models — it's trying to build smarter ones. Meet Flapping Airplanes, the ambitious new AI lab betting that data efficiency, not raw scale, is the real key to the future of artificial intelligence.
If you've been watching the AI startup landscape closely, you've likely noticed a new wave of research-focused AI labs challenging the dominant "scale at all costs" paradigm.
Among the most intriguing is Flapping Airplanes, a freshly funded AI research lab with $180 million in seed funding, a team of exceptionally young researchers, and a bold thesis: that the way we currently train large language models (LLMs) is fundamentally wasteful, and that there's a smarter path forward.
We sat down with the lab's three co-founders — brothers Ben and Asher Spector, and Aidan Smith (formerly of Neuralink) — to find out why they believe this is the most exciting moment in history to start a new foundation model company, and why they keep coming back to one unlikely source of inspiration: the human brain.
Why Launch a New AI Lab in 2025? The Case for Data-Efficient AI:
At first glance, the move seems quixotic. OpenAI, Google DeepMind, Anthropic, and Meta AI have spent billions scaling their foundation models to unprecedented sizes. Why would three young founders believe they can carve out a meaningful position in such a competitive AI research landscape?
"There's just so much to do," said Ben Spector, co-founder and CEO of Flapping Airplanes. "The advances we've gotten over the last five to ten years have been spectacular. We love the tools. We use them every day. But the question is, is this the whole universe of things that needs to happen? And our answer was no — there's a lot more to do."
The core problem Flapping Airplanes is focused on is data efficiency — specifically, the staggering gap between how much data current AI models require to learn versus how little data human beings need to acquire new skills and knowledge. Today's frontier AI models are trained on what amounts to the entire sum of human-written knowledge available on the internet. A human child learns to recognize a cat from a handful of examples. Current transformer-based models need millions.
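To put rough numbers on that gap, here is a back-of-envelope comparison in Python. Both figures are ballpark public estimates rather than numbers from the founders, but the order of magnitude is the point.

    # Rough sizing of the data gap described above. Both constants are
    # ballpark public estimates, not figures from the interview.
    llm_training_tokens = 1.5e13   # frontier training runs are commonly reported at ~10T+ tokens
    human_lifetime_words = 5e8     # a person hears and reads perhaps a few hundred million words, ever

    ratio = llm_training_tokens / human_lifetime_words
    print(f"A frontier model sees roughly {ratio:,.0f}x more text than a person")
    # -> roughly 30,000x: a gap of four to five orders of magnitude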
"We find it really, really perplexing that you need to use all the internet to really get this human-level intelligence," said Asher Spector, co-founder. "You should be able to use all the internet — but you shouldn't need to."
What Is Flapping Airplanes Building? Understanding the AI Research Thesis:
Flapping Airplanes is making a concentrated three-part bet on the future of AI development. First, that data-efficient AI training is one of the most important unsolved problems in machine learning research. Second, that cracking it will be enormously commercially valuable — potentially unlocking entirely new AI verticals like robotics, scientific discovery, and enterprise AI applications that are currently bottlenecked by data constraints. Third, that solving it requires a fresh, creative team willing to challenge the foundational assumptions of modern deep learning.
"We don't really see ourselves as competing with the other labs," explained Aidan Smith, co-founder and former researcher at Neuralink, "because we think we're looking at a very different set of problems. LLMs have an incredible ability to memorize and draw on a great breadth of knowledge, but they can't really pick up new skills very fast. It takes rivers and rivers of data to adapt."
The name Flapping Airplanes itself encodes the lab's philosophy. As Ben Spector explains it: think of current AI systems — GPT-4, Claude, Gemini — as Boeing 787s: powerful, efficient, and highly optimized for what they do. A bird sits at the other extreme, and Flapping Airplanes isn't trying to build one; that's a step too far. They're trying to build something in between the two, a "flapping airplane": a system that takes genuine inspiration from how biological intelligence works without being slavishly constrained by it.
The Neuroscience Connection: How Brain-Inspired AI Differs from Standard Deep Learning:
One of the most distinctive aspects of Flapping Airplanes' research agenda is its deep engagement with neuroscience and brain-inspired computing — a field sometimes called neuromorphic AI. Aidan Smith's background at Neuralink, where he worked directly with neural interface technology and brain data, gives the team a rare lens through which to examine the limitations of current AI architectures.
"The way I look at the brain is as an existence proof," Smith said. "We see it as evidence that there are other algorithms out there — that there isn't just one orthodoxy. The brain has some crazy constraints. It takes a millisecond to fire an action potential. In that time, your computer can do just so many operations. And so realistically, there's probably an approach that's actually much better than the brain out there — and also very different from the transformer."
Smith's constraint is easy to quantify: at a roughly 3 GHz clock, a computer executes on the order of three million cycles in the millisecond a neuron needs for a single action potential. Yet the brain learns new skills from a sliver of the data our models require. The key insight driving Flapping Airplanes' research is that the algorithms the brain uses to learn are fundamentally different from gradient descent — the optimization technique that underpins virtually all modern deep learning and large language model training. Understanding why those differences exist, and what advantages they confer, may hold the key to dramatically more data-efficient machine learning systems.
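Flapping Airplanes hasn't published its alternative, so the sketch below is purely illustrative rather than the lab's method. It contrasts the two families of learning rules at issue: gradient descent, where every weight is updated against a single global loss, and a Hebbian-style local rule, often used as a cartoon of biological synapses, where each weight changes using only the activity available at its own connection.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=(64, 8))        # toy inputs
    w_true = rng.normal(size=(8, 1))
    y = x @ w_true                      # toy regression targets

    # 1) Gradient descent: one global objective, every weight updated
    #    against its gradient.
    w = np.zeros((8, 1))
    for _ in range(200):
        err = x @ w - y                 # prediction error on the whole batch
        grad = x.T @ err / len(x)       # gradient of the mean squared error
        w -= 0.01 * grad                # step every weight downhill

    # 2) A Hebbian-flavored local rule: each weight changes using only the
    #    pre- and post-synaptic activity at that connection, with no global
    #    loss and no backpropagated signal.
    w_local = np.zeros((8, 1))
    for xi, yi in zip(x, y):
        pre = xi[:, None]               # presynaptic activity
        post = yi[None, :]              # (teacher-clamped) postsynaptic activity
        w_local += 0.001 * pre @ post   # "fire together, wire together"

The local rule needs no backward pass and no stored activations, which changes the costs of compute and of moving data entirely.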
"When the substrate is so different," Ben Spector added, "and you have genuinely very different trade-offs about the cost of compute, the cost of locality and moving data, you actually expect these systems to look a little bit different. But just because they will look somewhat different does not mean we should not take inspiration from the brain and use the parts we think are interesting to improve our own systems."
$180 Million in Seed Funding: What It Means for AI Research Startups:
The funding environment for AI research startups has changed dramatically over the past 18 months, and Flapping Airplanes' $180 million seed round is a striking example of just how much investor appetite has grown for research-first AI labs willing to take long-horizon bets.
"I would say it was a mixture of knowing and discovering," Ben Spector said about the fundraising process. "The market has been hot for many months. But you never quite know how the fundraising environment will respond to your particular ideas. We were somewhat surprised by how well our message resonated, because it was very clear to us internally — but you never know whether your ideas will turn out to be things that other people believe as well."
The round reflects a broader trend in AI venture capital — a growing recognition that the next major breakthroughs in artificial intelligence may not come from simply scaling up existing transformer architectures, but from genuinely new approaches to how AI systems learn, reason, and generalize. For investors, Flapping Airplanes represents a high-conviction bet on that thesis.
"A thirst for the age of research has kind of been in the water for a little bit now," Smith observed. "And more and more, we find ourselves positioned as the player to pursue the age of research and really try these radical ideas."
Can Data-Efficient AI Unlock New Frontiers? Three Hypotheses:
So what actually becomes possible if Flapping Airplanes succeeds in training AI models that learn from dramatically less data? Asher Spector outlined three compelling hypotheses — while being careful to note that, as scientists, they're genuinely uncertain which will prove true.
The first hypothesis involves the depth versus breadth trade-off in AI intelligence. Current large language models exist somewhere on a spectrum between pure statistical pattern matching and genuine deep understanding. Forcing a model to learn from far less data might push it much further toward deep understanding — resulting in a model that knows fewer facts, but reasons significantly better.
The second hypothesis focuses on post-training efficiency — the expensive and data-intensive process of teaching existing AI models new capabilities or adapting them to new domains. A vastly more data-efficient training paradigm could mean that with just a handful of examples, an AI model could be rapidly and cheaply adapted to entirely new verticals, making enterprise AI deployment dramatically more accessible and cost-effective.
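The founders haven't described a mechanism, but one familiar way to make the economics of that idea concrete is a low-rank adapter in the spirit of LoRA: freeze the pretrained weights and fit a tiny correction from a handful of examples. The sketch below illustrates that general pattern only; it is not Flapping Airplanes' approach.

    import numpy as np

    rng = np.random.default_rng(1)
    d = 16
    W_pretrained = rng.normal(size=(d, d))     # frozen "pretrained" layer (toy stand-in)
    x_few = rng.normal(size=(8, d))            # just 8 examples from the new domain
    W_target = W_pretrained + rng.normal(scale=0.1, size=(d, d))
    y_few = x_few @ W_target                   # what the adapted model should output

    # Fit a rank-2 correction A @ B instead of retraining all d*d weights.
    r = 2
    A = rng.normal(scale=0.1, size=(d, r))
    B = np.zeros((r, d))
    lr = 0.05
    for _ in range(500):
        err = x_few @ (W_pretrained + A @ B) - y_few
        gA = x_few.T @ err @ B.T / len(x_few)  # gradient w.r.t. adapter factor A
        gB = A.T @ x_few.T @ err / len(x_few)  # gradient w.r.t. adapter factor B
        A -= lr * gA
        B -= lr * gB
    # The base model is untouched; only 2 * d * r = 64 adapter weights moved.

Adapters like this already make fine-tuning cheap; the bet here is on a training paradigm where a handful of examples is actually enough for the adaptation to be reliable.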
The third hypothesis may be the most transformative: that data-efficient AI training unlocks entirely new commercial verticals that are currently blocked by data constraints. Robotics is the clearest example — a domain where hardware capability is arguably sufficient, but where the data required to train AI systems to perform reliably remains prohibitively scarce. Scientific discovery is another. As Asher Spector put it: "My opinion is that it's a limited data problem, not a hardware problem."
The AGI Question: Where Does Flapping Airplanes Stand?
The question of artificial general intelligence (AGI) — and how close we might be to achieving it — is one of the most hotly debated topics in the AI research community. Flapping Airplanes' founders have a characteristically grounded take.
"I really don't exactly know what AGI means," Asher Spector said frankly. "It's clear that capabilities are advancing very quickly. It's clear that there's tremendous economic value being created. But I don't think we're very close to God-in-a-box. I don't think that within two months or even two years, there's going to be a singularity where suddenly humans are completely obsolete."
Aidan Smith offered a more forward-looking framing, one that reflects the lab's core philosophy: "The brain is not the ceiling — it's the floor. I see no evidence that the brain is not a knowable system that follows physical laws. We would expect to be able to create capabilities that are much, much more interesting and potentially better than the brain in the long run."
Ben Spector articulated the vision that animates the entire project: "The most exciting vision of AI is not one where it just automates a bunch of jobs and makes existing work cheaper. The most exciting vision is one where there's all kinds of new science and technologies that we can construct that humans aren't smart enough to come up with — but other systems can. That's more than just, 'Let's go fire a bunch of people from their jobs.'"
Hiring at Flapping Airplanes: Why They're Betting on Young, Unconventional Researchers:
One of the most distinctive aspects of Flapping Airplanes as an AI research lab is its hiring philosophy. The team has deliberately sought out exceptionally young researchers — including people still in college or even high school — prioritizing raw creativity and unconventional thinking over traditional credentials and experience.
"It's when you talk to someone and they just dazzle you," Smith explained. "They have so many new ideas and they think about things in a way that many established researchers just can't — because they haven't been polluted by the context of thousands and thousands of papers. The number one thing we look for is creativity."
Ben Spector added a simple but powerful litmus test for evaluating potential research hires: "Do they teach me something new when I spend time with them? If they teach me something new, the odds that they're going to teach us something new about what we're working on is also pretty good."
The approach isn't about dismissing experience — the lab has also hired seasoned researchers who have worked on large-scale AI systems. But the underlying philosophy is clear: Flapping Airplanes wants people who aren't afraid to challenge the dominant paradigm, imagine entirely new architectures, and chase ideas that most established researchers would consider too radical to touch.
The Bottom Line: Is Flapping Airplanes the Future of AI Research?
Flapping Airplanes represents one of the most intellectually honest and genuinely differentiated bets in the current AI startup landscape. Rather than competing head-to-head with OpenAI, Anthropic, or Google DeepMind on the dimension of scale, the lab is pursuing a fundamentally different research agenda — one focused on data efficiency, brain-inspired learning algorithms, and the kind of radical architectural innovation that large, established labs are often too cautious or too resource-committed to pursue.
Whether they'll succeed remains genuinely uncertain — as the founders themselves freely admit. They're doing science, not engineering to a known specification. The research could fail. The hypotheses could be wrong. But the intellectual case for what they're attempting is compelling, the funding is substantial, and the team is, by all accounts, exceptionally talented.
"We want to try really, really radically different things," Smith said, "and sometimes radically different things are just worse than the paradigm. We're exploring a set of different trade-offs. It's our hope that they will be different in the long run."
In a field increasingly dominated by incremental scaling, that kind of intellectual courage may be exactly what the AI research community needs.
Want to learn more or get in touch with Flapping Airplanes? Reach out at hi@flappingairplanes.com — or if you want to push back on their ideas, they also welcome that at disagree@flappingairplanes.com. They're also actively hiring exceptional, unconventional thinkers who want to change the field.
As Ben Spector puts it: "You don't need two PhDs. We really are looking for folks who think differently."