
The superintendent was enthusiastic. In April 2024, Alberto Carvalho told an education conference about "Ed," an AI-powered chatbot from a Boston startup called AllHere that could do something remarkable: truly "know students" at a scale no human administrative staff could match.
Ed could text students about missing assignments. Answer parent questions about bus routes in multiple languages. Flag the early warning signs of chronic absence before they became dropout statistics. For a district the size of Los Angeles Unified, America's second-largest school system with an $18 billion budget, this was the kind of personalized attention that seemed impossible without AI.
LAUSD's board had approved the contract the previous June: $6.2 million for a two-year engagement running through July 2025, with three additional one-year renewal options. Potentially five years of partnership with AllHere. The future looked bright.
By fall, AllHere was gone.
CEO Joanna Smith-Griffin had departed. Most staff were furloughed. A former executive alleged that student data had been mishandled. The chatbot never fully launched. And LAUSD had already paid roughly $2 million for something that would never materialize.
What happened in those few months? And why does this keep happening in ed-tech?
AllHere wasn't some fly-by-night operation. Founded in 2016, the company had raised approximately $12 million from venture capitalists who saw the same potential Carvalho did. TIME magazine featured them. Inc. magazine wrote about them. The narrative was compelling: AI could finally solve the chronic communication gaps between schools, students, and families.
And in demos, Ed probably did look magical. That's how these deals start, with a presentation that makes the impossible seem not just possible, but obvious. Why shouldn't AI be able to do this? Every other industry was being transformed by machine learning. Education was simply catching up.
For LAUSD administrators evaluating the contract, AllHere checked every psychological box. External validation from prestigious VCs? Check. Media coverage suggesting momentum and credibility? Check. A demo that addressed real pain points? Check. A CEO who could articulate the vision? Check.
What's harder to evaluate in a conference room: Can this company actually deliver at our scale? Does their engineering team have the depth to integrate with our Student Information System? What happens to our data if they run out of money?
These questions require technical expertise most districts don't have in-house. They require skepticism during a procurement process that rewards enthusiasm. They require asking the vendor to prove things that, if they're being honest, the vendor might not be able to prove yet.
Here's what most people outside ed-tech don't understand: building a working demo of an AI chatbot is genuinely hard. Building one that actually operates reliably across hundreds of schools with tens of thousands of students is exponentially harder.
A demo can use curated data. Production has to handle the reality of incomplete SIS records, inconsistent data formats across schools, parents who speak languages your training data didn't include, and edge cases your testing never encountered.
A demo can gracefully dodge questions it can't answer. Production has to give parents accurate information about their child's bus route or risk real-world consequences.
A demo can run on your infrastructure. Production has to meet district security requirements, comply with FERPA and state privacy laws, handle authentication systems you don't control, and keep running when your servers crash.
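To make the data problem concrete, here is a minimal sketch of the kind of defensive normalization a production pipeline needs before a chatbot can answer anything at all. The field names, date formats, and schema here are hypothetical illustrations, not AllHere's or LAUSD's actual systems.

```python
from datetime import datetime
from typing import Optional

def normalize_record(raw: dict) -> Optional[dict]:
    """Clean one incoming SIS record; return None if it's unusable."""
    # Demo data always has an ID; real exports often don't.
    student_id = str(raw.get("student_id", "")).strip()
    if not student_id:
        return None

    # Each school's SIS exports dates in its own format.
    dob = None
    for fmt in ("%Y-%m-%d", "%m/%d/%Y", "%d-%b-%Y"):
        try:
            dob = datetime.strptime(str(raw.get("dob", "")).strip(), fmt).date()
            break
        except ValueError:
            continue

    # Home-language codes are inconsistent across schools. Don't crash,
    # but flag the record so a human reviews it before a parent gets a
    # message in the wrong language.
    lang = str(raw.get("home_language", "")).strip().lower() or "unknown"

    return {
        "student_id": student_id,
        "dob": dob,  # may legitimately be None
        "home_language": lang,
        "needs_review": dob is None or lang == "unknown",
    }
```

Every branch in that sketch is a failure mode a curated demo never encounters. Multiply it across hundreds of schools and dozens of source systems, and you get a sense of the gap between a demo and a deployment.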
AllHere had a working demo. What they apparently didn't have was the operational capacity to bridge that gap at LAUSD's scale. Or the runway to build it before the money ran out.
LAUSD's procurement team wasn't naive. These are professionals managing a budget on par with the revenues of some of America's largest corporations. So why did this happen?
The answer is pressure: institutional, political, and psychological.
School boards read about AI in education. Parents ask why their district isn't using these tools. Superintendents attend conferences where innovation is the currency of credibility. The administrator who says "we're waiting to see proven results" risks looking like they're holding students back.
Meanwhile, ed-tech vendors are optimizing for this psychology. They know that closing deals fast matters more than running staged pilots. They know that VC backing and media coverage create social proof that's easier to evaluate than operational readiness. They know that districts want to believe the future is possible right now.
The contract structure LAUSD agreed to reveals this mindset: not a pilot, but a multi-year engagement with renewal options. Not staged payments tied to measurable results, but a funding commitment that put cash in AllHere's hands before the company had proven it could deliver.
What would alternative contract terms have looked like? Payments tied to daily active user metrics. Independent audits before student data entered the system. Code escrow and data extraction guarantees in case of vendor failure. These aren't exotic clauses; they're standard enterprise software protections.
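As a concrete illustration, a usage-based payment clause can reduce to a check as simple as the sketch below. The threshold numbers and function names are hypothetical, not terms from the actual LAUSD contract.

```python
from datetime import date

def tranche_payable(dau_by_day: dict[date, int],
                    enrolled_students: int,
                    min_dau_share: float = 0.20,
                    min_qualifying_days: int = 30) -> bool:
    """Release a payment tranche only if the tool actually got used:
    daily active users must clear a share of enrollment on enough days."""
    threshold = enrolled_students * min_dau_share
    qualifying_days = sum(1 for dau in dau_by_day.values() if dau >= threshold)
    return qualifying_days >= min_qualifying_days
```

The specific numbers matter less than the structure: the vendor gets paid for measured adoption, which shifts the risk of the demo-to-production gap back onto the party claiming it can bridge it.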
But they require saying "prove it first" in an environment that rewards saying "let's go."
After AllHere collapsed, something predictable happened: the investors went quiet.
The 74, an education news site, reported that the VCs who had previously touted AllHere as the future of family engagement went completely silent. They moved on to their next ed-tech bet. For them, AllHere was a failed experiment in a portfolio of experiments. Some work, some don't. That's venture capital.
For LAUSD, the calculus is different. The district is out $2 million. Students never received the personalized support that was promised. And every administrator who approved the contract now has to explain what happened to the school board, to parents, to taxpayers.
More importantly, they've learned a lesson that will shape future procurement: AI vendors can't be trusted. Not because AllHere was malicious, but because the gap between promise and operational reality is so large that even well-intentioned startups can't bridge it.
That's the real cost of these failures. Not just wasted money, but wasted trust in the possibility of innovation itself.
The lesson from LAUSD isn't "don't buy AI for education." It's "understand what you're actually buying."
When a vendor shows you a demo, you're not buying the demo. You're buying their capacity to operate that technology at your scale, under your constraints, with your legal obligations. Most vendors honestly don't know if they can do that until they try. Your job isn't to believe them; it's to structure contracts that protect you when they discover they can't.
For vendors using tools like Nationgraph to identify district buying signals, LAUSD's failure should be sobering. The fastest way to destroy credibility, both your own and that of every AI vendor who comes after you, is to overpromise your operational readiness. If you can't honestly say "yes, we're ready to operate at your scale right now," you're setting up both parties for disaster.
For districts, the solution is straightforward: change what you demand before you sign. Staged payments. Measurable KPIs. Independent audits. Contingency planning. These aren't innovation killers; they're the difference between betting on potential and buying proven capacity.
The irony is that somewhere right now, another superintendent is sitting in a conference room watching a demo of an AI tool that looks amazing. Another procurement team is evaluating a contract with a well-funded startup that has impressive credentials.
And everyone involved sincerely hopes this time will be different.
What would it take to make sure it actually is?