Buckle up friends, it's been a wild week.

This week in toast...

🧠 Brain Cells on a Chip

Two hundred thousand human brain cells, grown on a silicon chip, have learned to play Doom. Not well, mind you. They can spot enemies and shoot them, but can't remember where they've been or plan a route. Reflexes of a gamer, spatial awareness of a goldfish.

Australian biotech Cortical Labs grew the neurons from stem cells, kept them alive in a nutrient bath, and fed the game's video as electrical signals. The neurons fired back patterns mapped to in-game actions. It's a big upgrade from the same team's 2022 DishBrain experiment, which managed Pong. The new CL1 chip houses 800,000 neurons with a six-month lifespan, a Python API, and a $35,000 price tag. The first 115 units ship this year.

Why care? The human brain runs on roughly 20 watts, whilst training a large AI model can consume the output of a small power station. Biology is absurdly efficient, and AI's energy bill is becoming a boardroom problem.

It’s more impressive than it looks…

🧐 What's in it for me? Nobody's replacing your laptop with a brain in a jar. But biocomputing could eventually handle specific AI workloads at a fraction of the energy cost. Nearer term, these platforms are already being used for drug testing and disease modelling, potentially reducing reliance on animal trials.

💵 Out of the Lab: Biocomputing is largely pre-revenue and pre-hype, but AI's energy economics are pushing serious money toward biological alternatives.

  • Cortical Labs, founded by neuroscientist Brett Kagan, has raised $11M and is shipping its first commercial biocomputers this year.

  • Swiss rival FinalSpark has built a remote biocomputing platform accessible via API, positioning itself as the AWS of wetware.

  • Neuromorphic chipmaker BrainChip (ASX: BRN) builds silicon that mimics neural architectures and could benefit from the broader brain-inspired computing push.

🎾 Robots Are Coming For Your Balls

Teaching a robot to play tennis from messy data is supposed to be nearly impossible. A team from Peking University has done it anyway.

Training humanoids for athletic tasks normally requires pristine motion capture from professionals. The LATENT system learns instead from short, imperfect clips of basic human swings. A physics simulator corrects the errors, reinforcement learning stitches the fragments together, and the result is astounding.

China's humanoid robotics sector is scaling with state backing and academic-industry integration that's hard to replicate. Galbot, co-founded by Peking University professor He Wang, became a unicorn in under two years and has raised over $330M. UBTech's Walker S2 rallied against a human in January, and Unitree performed martial arts at the Spring Festival Gala. The question for Western competitors isn't whether the gap is closing. It's whether there’s time to catch up.

🧐 What's in it for me? The Terminator of tennis is a way off from competition grade. But if robots can learn athletic skills from shaky phone-quality footage instead of expensive data and labs, the cost of teaching them everything else just collapsed.

💵 Out of the Lab: China's humanoid robotics boom is backed by state funding and university-industry pipelines Western firms are struggling to match.

  • Galbot, co-founded by LATENT co-author He Wang, has partnerships with Bosch and CATL and is already deploying robots in Beijing pharmacies.

  • Unitree Robotics is commercially shipping quadrupeds and pushing into humanoids, with its Spring Festival Gala demo reaching hundreds of millions of viewers.

  • Boston Dynamics remains the Western benchmark, owned by Hyundai, but has yet to match the pace of Chinese sim-to-real research.

🎭 AI Thinking, Fast and Fake

The latest AI models appear to "reason" before answering: they produce long chains of step-by-step thinking, working through problems like a diligent student. It's one of the main reasons people trust them. However, new research from Harvard suggests much of that thinking is just for show.

By probing what's actually happening inside these models, researchers found that on straightforward questions the model reaches its answer almost immediately, then generates hundreds of extra tokens that read as deliberation. On simple tasks, the real answer was locked in after just 20% of the reasoning chain. The other 80%? Performance. Training rewards verbose reasoning, so the models learned to perform “thinking” whether they need to or not.

On genuinely hard problems, the reasoning was real: belief updates and backtracking tracked genuine computation. So the models can think. They just don't always bother, and there's currently no way to tell the difference from the outside. If we're relying on watching AI "show its working" to catch errors or dangerous reasoning, that's a problem.

🧐 What's in it for me? Those "thinking" tokens cost money. The researchers showed you could exit early and save 80% of tokens on simple tasks with no accuracy loss. Faster, cheaper AI is coming. Less reassuring: models can hide their actual confidence behind plausible-sounding reasoning.

💵 Out of the Lab: If AI regulation demands transparency into how models reach decisions, interpretability becomes infrastructure.

  • Goodfire AI, the startup behind the research (co-founded by paper co-author Atticus Geiger), is building tools to look under the hood of model behaviour.

  • Anthropic has staked its identity on AI safety and pioneered its own interpretability research.

  • Alphabet (NASDAQ: GOOGL) faces the same problem via DeepMind: if reasoning is partly performative, the compute bill for "thinking" models is larger than it needs to be.

🧹 The Brain's New Cleaner

The best Alzheimer's drugs on the market require patients to sit through high-dose infusions once or twice a month, and the reward for all that effort is roughly 10 extra months of independent living. Not nothing, but not exactly a victory lap.

Researchers have now taken a different approach: steal from cancer therapy. CAR-T treatments reprogram immune cells to hunt tumours. This team did the same trick with astrocytes, the brain's most common cell type, fitting them with a molecular homing device that locks onto amyloid beta (the sticky protein behind Alzheimer's plaques). One injection, and no monthly appointments. The findings, published in Science, are the first successful engineering of astrocytes to clear amyloid.

In mice treated before plaques formed: total prevention. In mice with established disease, a single injection halved plaque levels. Obligatory caveats: these are mice, not people, and clearing plaques doesn't necessarily undo the cognitive damage already done. But a one-shot therapy versus lifelong monthly infusions isn't an incremental improvement; it's a different category entirely. The team has filed patents, which in academic science is the equivalent of clearing your throat before a very loud announcement.

🧐 What's in it for me? Clinical trials are a way off, but the shift from monthly hospital visits to a single injection could make treatment accessible to millions who can't manage the logistics. If your family has been touched by Alzheimer's, worth watching.

💵 Out of the Lab: Cell therapy for neurodegeneration is where cancer immunotherapy was a decade ago: proven in concept, years from scale, attracting serious capital.

  • Co-author David Holtzman co-founded C2N Diagnostics for Alzheimer's biomarker testing, so there's entrepreneurial form. Any WashU spinout from this patent will be closely watched.

  • Korsana Biosciences, backed by $175M, is chasing better amyloid-clearing antibodies and could be disrupted or validated by cell therapy approaches.

  • Eli Lilly (NYSE: LLY) dominates with Kisunla, but a one-shot competitor could compress its treatment moat. Biogen (NASDAQ: BIIB), co-developer of Leqembi, faces the same risk.

🧐 In Other News...

Humanity Writes Its Last Exam

AI kept acing every academic test we threw at it, so nearly 1,000 researchers built one it couldn't pass. Humanity's Last Exam is a 2,500-question gauntlet spanning maths, ancient Palmyrene inscriptions, bird anatomy, and Biblical Hebrew pronunciation. The design rule: if any current AI model could answer a question, that question was thrown out.

It worked. When the exam launched in late 2024, GPT-4o scored 3.3% and Claude 3.5 Sonnet managed 4.1%. Progress has been rapid but humbling: the best models today, Gemini 3.1 Pro and GPT-5.4, have clawed their way to the mid-40s. Better, but still failing. The study, published in Nature, is less a way for us humans to feel better about ourselves, more a calibration tool. The gap between AI and deep specialist human expertise remains open, for now…

Until next time.

Like what you're reading? Share toast with a friend.