by Adam Jonas

Categories

  • ai
  • science
  • fulcrum

For the past seven years, I’ve been stuck on a question: how do you build systems that nobody controls but everybody benefits from?

This is what I love about Bitcoin. Not the price. The protocol. How do you maintain critical financial infrastructure with no company behind it, no CEO to fire, no single point of failure? I spent years thinking about contributor pipelines, review bottlenecks, and how to help developers make meaningful contributions to build the future of money.

I’m proud of that work. Bitcoin still matters. But something else has my attention.

The gap

According to OpenAI, AI systems score 77% on olympiad-level science problems but only 25% on open-ended PhD-level research. That’s a big gap. The models can do real work; the bottleneck is deployment. Closing it requires people who can translate between what AI can do and what scientists actually need.

AI can now compress years of work into days for tasks like protein folding, drug-candidate screening, and literature review. But these tools haven’t diffused down to the wet labs and research groups doing the work. The people building AI systems aren’t talking to the people running experiments. The pipelines don’t exist.

Fulcrum

So I want to help build them.

Fulcrum is my new project. The idea: connect strong technical talent with researchers.

We’re starting with fellowships: recruit engineers and ML researchers who want to work on real problems, then embed them with research teams.

It’s early. We’re still figuring out what works. But the urgency feels real. If we want AI pointed at scientific discovery instead of ad targeting and toy projects, we need infrastructure to do the pointing.

What carries over

My time at Chaincode has taught me a few things I’m bringing along:

Education compounds. The programs we built still produce a significant share of Bitcoin’s open-source contributors years later. I want something similar for AI in science: structured pathways that turn curious engineers into effective research accelerators.

Open beats closed. Bitcoin’s resilience comes from transparency. Anyone can audit the code, anyone can contribute. I think scientific AI benefits from similar openness. Shared benchmarks. Reproducible results. Tools that researchers can actually trust.

Talent is the bottleneck. In Bitcoin, we always had more funding than qualified developers. I suspect the same is true here. The models are good. The researchers are eager. What’s missing are people who can work at the intersection.

Still around

I’m not walking away from Bitcoin. That work continues. But I’m splitting my attention, which is its own kind of bet.