Build the Future of AI. Don't Buy It.
RISC‑V Summit North America 2025
Makeljana Shkurti
Business Strategy
Makeljana leads business strategy at VRULL and chairs the RISC‑V International AI & ML Market Development Committee. A frequent speaker on technology strategy, she connects the commercial reality of AI silicon with the ecosystem decisions that make it viable.
A few weeks ago, I had the privilege of joining Ed Doran, VP of Strategy at the EdgeAI Foundation, on the keynote stage at RISC‑V Summit North America 2025 for a talk called “RISC‑V Opportunities at the Edge of AI.” It was one of those conversations that felt bigger than the room — and the response afterwards confirmed I wasn’t imagining it.
Let me share what we talked about, and why I think it matters beyond the Summit.
Setting the stage: Ed on why the hyperscaler alone is not enough
Ed opened with something I always appreciate — a reminder that technology alone doesn’t change the world. Communities do. The technologies that create lasting value are the ones that get adopted, applied to real daily challenges, and forced to evolve by the people using them. His point was that the RISC‑V and EdgeAI communities are exactly that kind of community.
From there, he painted the landscape of edge AI with real clarity. The hyperscaler is a remarkable engine — immensely powerful — but it is not sufficient on its own. Delivering intelligence into the real world means putting AI in factories, vehicles, sensors at the bottom of the ocean, wearables on your wrist, and satellites overhead. Those use cases cannot all make round trips to a data centre. The physics won’t allow it, and increasingly neither will the economics.
Ed also made a point that doesn’t get said enough: the resource demands of hyperscale computing are real constraints that communities feel. Data centres are embedding their own power generation to keep up with electricity demands. Water consumption is significant. The hyperscaler is powerful, but it has inherent limitations that edge computing is uniquely positioned to complement.
He outlined four core advantages that edge AI brings: bandwidth and latency savings for time-critical decisions, economic efficiency from processing data where it originates, reliability through distributed systems without single points of failure, and privacy by keeping intelligence local. These are not incremental improvements over cloud AI. Together they amount to a genuinely different, complementary capability.
“The companies that will do the best are the ones who understand the unique value their implementation on the edge provides — but also seamlessly orchestrate across from hyperscaler to on-prem to the individual edge instance.” — Ed Doran, VP of Strategy, EdgeAI Foundation
The cloud fuelled the revolution. The edge is where it lives.
The next phase of AI growth is happening where data is actually created — at the edge — where latency, cost, and sustainability define success, not raw compute benchmarks. With that shift comes something genuinely exciting: new opportunities to innovate faster, reach new markets, and empower players who couldn’t compete before to shape what comes next.
The question I wanted to answer on stage was: how do we actually deliver on that opportunity in a way that’s scalable, efficient, and sustainable? The answer, for me, is RISC‑V — because right now, RISC‑V and edge AI are not just heading in the same direction. They are sharing the same path.
Why the old architectures can’t get us there
To understand why RISC‑V is the right foundation, it helps to be honest about what the alternatives actually are. Legacy architectures were designed for a world of fixed workloads. They are closed — any innovation you want to pursue has to align with someone else’s roadmap. They are rigid — built for general-purpose cores that cannot adapt to the diverse, fast-moving nature of AI workloads. And they are licensed — payment and permission at every step, which slows experimentation, disadvantages smaller innovators, and at scale can lock entire nations out of compute sovereignty.
Edge AI is none of those things. It is distributed, dynamic, and alive. It needs an architecture that can keep up — one that is open, modular, and designed to evolve.
“Closed means innovation is controlled by others. Rigid means it cannot adapt. Licensed means payment and permission at every step. Edge AI needs none of those constraints — and RISC‑V imposes none of them.”
What edge AI actually demands
At the edge, every millisecond counts. Power is the new performance. Privacy defines security. And determinism — the ability to guarantee a response in time — can be the difference between a system that works and one that fails when it matters most.
RISC‑V is built for exactly this. But I didn’t want to leave it there, because I think the most powerful thing you can do with a technical argument is make it human. So I walked through five use cases:
- Automotive. A car that cannot wait for the cloud to make a safety decision must see, decide, and act in milliseconds. Deterministic compute and real-time custom extensions — RISC‑V delivers both, natively.
- Industrial automation. Thousands of machines learning and operating together demand reliability and power efficiency at scale. RISC‑V’s domain-specific accelerators are compact and upgradable.
- Healthcare. AR-guided surgery and real-time diagnostics cannot tolerate latency or send sensitive data to a remote server. RISC‑V vector and tensor extensions run inference directly on-device.
- Wearables. Health data should never leave the device. RISC‑V microcontrollers run AI locally, designed for strict privacy.
- Drones. Connectivity may drop at any moment, but the logic must not. RISC‑V enables real-time control and AI inference on a single flexible platform.
What struck me preparing these examples was how consistent the underlying requirement is across all of them: purpose-built compute that is adaptable, efficient, and free to evolve.
Who this actually unlocks
One of the things I care most about, both at VRULL and as Chair of the RISC‑V AI & ML Market Development Committee, is making sure we talk about who this technology serves beyond the obvious beneficiaries.
Startups can develop edge processors in months, not years. Enterprises can deploy AI solutions while retaining full control over their stack. And nations — this is the part that genuinely moves me — can build real compute sovereignty, free from vendor lock-in, by investing in their own citizens’ technological future.
RISC‑V turns all of these from aspirations into real, scalable opportunities.
“The future of AI doesn’t have to be bought. It can be built. And you all have the opportunity to build it on RISC‑V.”
What happened after
The response after the talk meant a lot to me. People came up to say the message resonated — that there is real, growing appetite for more RISC‑V designs and processors built specifically for edge AI. Engineers who are already building it. Investors looking for where to put their conviction. Ecosystem builders who see the same window I see.
That energy is what drives this work. It is why VRULL is investing in the compiler infrastructure, the ISA extensions, and the platform bring-up that turns RISC‑V’s promise into production reality. And it is why I will keep making this argument — on stages, in standards bodies, and in conversations — for as long as it takes.
The next phase of compute will be AI-native. It will be built at the edge. And if we do our jobs well, it will be built on RISC‑V — openly, sustainably, and together.