At DePIN Day Buenos Aires, a panel of leading voices from across the decentralized compute ecosystem came together to unpack one of the most critical infrastructure questions facing Web3 and AI today. Moderated by Adam Wozney (Akash Network), the discussion featured Evgeny Ponomarev (Fluence), Dylan Bane (Messari), Nate Fikru (Consensys), and Thomas Abraham (Zeeve), each bringing a distinct perspective from protocol design and research to enterprise infrastructure and developer tooling.
The panel explored how the global compute stack is evolving under the pressure of exploding AI demand, geopolitical constraints, and the need for verifiable, scalable alternatives to hyperscalers.
Why Traditional Cloud Infrastructure Is Reaching Its Limits
The panel opened with a stark reality: global compute demand is accelerating faster than centralized cloud infrastructure can sustainably support. Estimates suggest trillions of dollars in data center investment will be required by 2030 to keep pace with AI, blockchain, and global-scale applications.
Yet simply adding more hyperscaler capacity does not solve deeper structural issues. Centralized compute concentrates power geographically, economically, and politically. It also creates single points of failure, opaque pricing, and growing barriers to access.
Decentralized compute networks are emerging in response, introducing diversity, resilience, and market-driven pricing into the compute economy.
Compute Is the New Oil
A recurring theme across the panel was that compute has become the most strategic resource of the modern digital economy. Where previous generations fought over oil and data, today’s competition increasingly centers on GPUs, accelerators, and access to large-scale compute.
Export controls, regional quotas, and supply chain constraints have turned compute into a geopolitical issue. For many countries and organizations, access to state-of-the-art hardware is no longer guaranteed.
Decentralized compute offers a different model: aggregating globally distributed hardware into open marketplaces where access is determined by demand, not political alignment. This model enables AI sovereignty, lowers dependency on a small set of providers, and democratizes participation in the AI economy.
Training vs. Inference: Where Decentralized Compute Wins Today
The panel was clear-eyed about current limitations. Competing with frontier AI labs on trillion-parameter model training remains extremely challenging for decentralized systems. Latency, bandwidth constraints, and hardware coordination still favor tightly coupled, centralized clusters.
Where decentralized compute already shines is inference, smaller models, and elastic workloads. As AI adoption matures, demand increasingly shifts toward cost-efficient inference, regional execution, and on-demand capacity rather than constant frontier training.
In this context, decentralized compute does not need to be the fastest—it needs to be good enough, cheaper, and globally accessible.
Democratizing Access to GPUs and Compute Markets
One of the strongest arguments for decentralized compute is access. By allowing independent operators to bring GPUs online into open marketplaces, decentralized networks unlock underutilized capacity across the world.
This approach bypasses supply bottlenecks and reduces reliance on centralized vendors. While total supply is still growing, the trajectory is clear: as more hardware connects to decentralized marketplaces, they become increasingly viable for enterprise and public-sector use cases.
The panel compared this moment to the rise of open-source software — initially dismissed as inferior, later becoming foundational to the internet and cloud itself.
The Hard Problems: Regulation, Compliance and Trust
Scaling decentralized compute globally introduces non-technical challenges that cannot be ignored. Enterprises care deeply about compliance, certifications, data locality, and security guarantees.
For decentralized networks to serve serious workloads, they must meet standards such as SOC 2, operate within regulated data centers, and provide verifiable assurances around data handling. These requirements are expensive, slow, and operationally complex.
Several panelists emphasized that success will come from boring execution: certifications, audits, enterprise-grade operations, and clear accountability across decentralized actors.
Incentives, Markets and Sustainable Economics
At the heart of decentralized compute lies incentive design. Hardware providers must be rewarded reliably, networks must maintain high utilization, and customers must receive predictable pricing and performance.
Rather than relying solely on unused capacity, decentralized compute will likely require dedicated investment into new hardware over time. Market mechanisms such as staking, rewards, and pricing curves will determine which networks attract the most supply.
The long-term vision resembles energy markets: providers contribute resources, consumers pay for usage, and pricing emerges dynamically based on demand.
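As a purely illustrative sketch of that energy-market analogy (not any specific network's mechanism; the function name, parameters, and elasticity curve are all invented here), demand-driven spot pricing for compute could look like:

```python
# Toy demand-driven pricing curve for a compute marketplace.
# All names and numbers are hypothetical, for illustration only.

def spot_price(base_price: float, demanded_gpu_hours: float,
               available_gpu_hours: float, elasticity: float = 1.5) -> float:
    """Price rises super-linearly as utilization approaches capacity."""
    if available_gpu_hours <= 0:
        raise ValueError("no supply available")
    utilization = demanded_gpu_hours / available_gpu_hours
    return base_price * (1 + utilization ** elasticity)

# Low utilization keeps the price near the base rate...
idle = spot_price(base_price=1.0, demanded_gpu_hours=100, available_gpu_hours=1000)
# ...while demand near capacity pushes the price up sharply.
busy = spot_price(base_price=1.0, demanded_gpu_hours=950, available_gpu_hours=1000)
```

In a real marketplace the curve would emerge from bids and asks rather than a fixed formula, but the dynamic is the same: as utilization climbs, price signals pull new supply online.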
Financialization of Compute: What Comes Next
Looking ahead, the panel explored how compute itself may become a financial asset. Potential futures include compute futures, hedging instruments, revenue-sharing models tied to specific GPU types, and aggregators that source the best-priced compute across multiple networks.
As AI companies seek cost predictability, financial abstraction layers may sit on top of decentralized compute markets, hiding complexity while preserving open access underneath.
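The aggregator idea above can be sketched in a few lines. This is a hypothetical illustration, assuming quotes can be collected from multiple networks; the network names, GPU labels, and prices are invented:

```python
# Hypothetical aggregator that sources the best-priced compute
# across several decentralized networks. Data here is invented.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Quote:
    network: str
    gpu_type: str
    price_per_hour: float

def cheapest_quote(quotes: list[Quote], gpu_type: str) -> Optional[Quote]:
    """Return the lowest-priced quote matching the requested GPU type."""
    matching = [q for q in quotes if q.gpu_type == gpu_type]
    if not matching:
        return None
    return min(matching, key=lambda q: q.price_per_hour)

quotes = [
    Quote("network-a", "A100", 1.80),
    Quote("network-b", "A100", 1.45),
    Quote("network-b", "H100", 2.90),
]
best = cheapest_quote(quotes, "A100")
```

A production aggregator would also weigh latency, region, compliance posture, and reliability, but the core abstraction is the same: the buyer sees one interface, while open markets compete underneath.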
This evolution would further integrate decentralized compute into global economic systems.
What Will Determine the Future of Decentralized Compute
Despite differing perspectives, the panel converged on a clear conclusion: decentralized compute will only succeed if it delivers real value.
That means:
- competitive performance for real workloads
- verifiable, transparent revenue generation
- clear ROI for customers and providers
- visible success stories operating at scale
Decentralized compute is not about ideology or tokens. It is about shipping infrastructure that people want to use and proving it through adoption, revenue, and growth.
A Hybrid Vision for Infrastructure
The new compute stack is not a binary choice between hyperscalers and decentralized networks. It is a hybrid future, where decentralized compute fills gaps that centralized systems cannot: cost, access, resilience, and sovereignty.
As AI and Web3 continue to expand, decentralized, verifiable, and scalable compute will increasingly move from the margins to the core of global infrastructure.
The race is no longer theoretical. It is already underway.