“Our next big challenge is to mainstream confidential computing with blockchain and AI. Centralized AI is broken, and we need to get decentralized AI right or it will be even more broken,” she said. She went on to describe in her presentation how personal AI models that use sensitive information for services like healthcare must strike a critical balance between data sharing and the need for confidentiality.
Yannick Schrade, CEO of Arcium, captured this emerging reality, noting that "decentralized confidential computing is the missing link for distributed systems." With AI models increasingly central to decision-making across sectors, the ability to conduct computations in fully encrypted environments has become not only desirable but essential.
Schrade highlighted the potential of confidential AI and confidential decentralized finance (DeFi) as examples of how such technologies could "improve applications by an order of magnitude." Yet, for all its promise, confidential computing still grapples with a persistent challenge: how to seamlessly integrate rigorous privacy measures without compromising user accessibility.
Companies pioneering privacy-enhancing technologies are operating in a complex landscape. As Schrade observed, traditional Web2 enterprises are acutely aware of the importance of confidentiality, driven by both regulatory mandates and business logic. By contrast, the Web3 ecosystem has only recently begun to adopt privacy-centric solutions at scale.
This divergence underscores a critical tension: while privacy safeguards are becoming non-negotiable, they often come at the cost of user experience. Schrade emphasized that for confidential computing to reach mass adoption, "the end user should never notice they are using confidential computing technology." Efficiency, low latency, and intuitive design are indispensable, particularly in AI-powered applications where computational demands are high.
Shahaf Bar-Geffen, CEO of COTI, also spoke to the pivotal role of AI in shaping the future of privacy technologies. Bar-Geffen highlighted the growing importance of federated learning, which allows organizations to train AI models on decentralized datasets without ever sharing raw data. This innovation is particularly relevant for sensitive sectors like healthcare and finance, where regulatory compliance and data security are paramount. Bar-Geffen also emphasized that AI’s reliance on vast data sets makes privacy-preserving technologies indispensable: "As models grow, the need for private learning increases. It’s about enabling innovation while safeguarding individual and institutional secrets."
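The federated learning Bar-Geffen describes can be sketched in a few lines: each client fits a model on data that never leaves its machine, and a coordinator averages only the resulting weights. This is a toy illustration with a synthetic one-parameter linear model, not COTI's actual stack.

```python
# Minimal federated-averaging sketch: raw records stay on each client;
# only model weights travel to the aggregator. Data here is synthetic.
import random

def local_train(weight, data, lr=0.05, epochs=20):
    """One client fits y = w*x on its private data via gradient descent."""
    w = weight
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w  # only this scalar is shared, never the data

def federated_average(client_weights):
    """The server aggregates updates without ever seeing the datasets."""
    return sum(client_weights) / len(client_weights)

# Four clients, each holding private points near the true line y = 3x.
clients = [[(x, 3 * x + random.uniform(-0.1, 0.1)) for x in range(1, 6)]
           for _ in range(4)]

global_w = 0.0
for _ in range(5):  # a few federation rounds
    updates = [local_train(global_w, data) for data in clients]
    global_w = federated_average(updates)
print(round(global_w, 2))  # converges near 3
```

In a real deployment the updates themselves would also be protected (e.g. via secure aggregation), since gradients can leak information about training data.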
In an era where privacy and usability often seem at odds, Henry de Valence, founder of Penumbra Labs, offered a compelling approach to balancing both. De Valence highlighted the importance of aligning cryptographic systems with user expectations. "For a blockchain, the cryptography is the product," he explained, emphasizing that when users encounter unexpected security gaps, risks like hacks and financial losses emerge.
Drawing parallels to Signal, the secure messaging app, de Valence stressed that effective cryptography should be invisible to the user. Similarly, Penumbra aims to make privacy effortless, allowing users to interact seamlessly without needing a technical background.
“As AI becomes more prevalent, we need to emphasize the importance of privacy-first infrastructure. AI’s ability to analyze and exploit data only heightens the need for private systems that align with user behavior and protect sensitive information. Penumbra’s work offers lessons that could be applied across industries to safeguard interactions in an increasingly digital world,” he said.
For privacy-focused companies, the journey toward balancing security and usability is fraught with complexity. Martin Leclerc of iEXEC described the task as akin to navigating a "privacy quadrilemma," encompassing builders, users, regulatory bodies, and technology providers. This multifaceted challenge requires reconciling the seemingly contradictory goals of robust privacy, seamless user interfaces, and compliance with an evolving regulatory landscape.
The integration of confidential computing into identity verification systems offers a potential model for overcoming these obstacles. Lasha Antadze of Rarimo explained how their zero-knowledge proof technology enables users to "prove verifiable anonymity" without revealing sensitive data. For instance, users can demonstrate their eligibility to vote or purchase age-restricted goods by simply tapping their passport on a phone. This approach, Antadze noted, abstracts the complexity of blockchain interactions, making it accessible even to non-technical users.
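Rarimo's passport scheme relies on production-grade zero-knowledge circuits, but the core idea, proving you know a secret without revealing it, can be shown with a classic Schnorr-style sigma protocol. The parameters below are tiny and purely illustrative; real systems use large standardized groups and non-interactive proofs.

```python
# Toy Schnorr proof of knowledge: the prover convinces the verifier it
# knows the secret x behind y = g^x mod p, without ever sending x.
import random

p, q, g = 23, 11, 2    # g generates a subgroup of prime order q in Z_p*

x = 7                  # prover's secret (stand-in for a private credential)
y = pow(g, x, p)       # public value the verifier already knows

# Commit: prover sends t, keeping the fresh nonce r hidden.
r = random.randrange(q)
t = pow(g, r, p)

# Challenge: verifier replies with a random c.
c = random.randrange(q)

# Respond: s blends r and x; on its own it reveals nothing about x.
s = (r + c * x) % q

# Verify: g^s == t * y^c (mod p) holds exactly when the prover knows x.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof accepted; the secret never left the prover")
```

The same shape, commit, challenge, respond, underlies the proofs that let a passport holder show "I am eligible" without showing who they are.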
New entrants to the privacy space have also taken lessons from established players. Dominik Schmidt, co-founder of Polygon Miden, outlined how zero-knowledge proofs (ZKPs) allow users to execute transactions without exposing underlying data. Schmidt acknowledged that while ZKPs represent a breakthrough in scalability and privacy, they require users to manage their own state and data—a burden that legacy systems like Ethereum do not impose. To mitigate these challenges, Schmidt’s team is collaborating with decentralized storage providers and refining their Rust compiler to improve developer and user experiences.
Schmidt highlighted the broader implications of such advances, particularly for regulatory frameworks. Contrary to expectations, regulators have shown significant interest in privacy-preserving technologies. Schmidt cited discussions with the European Union, which is exploring ZKP chains as a potential foundation for privacy-respecting digital currencies. "If you want to put the financial system of a country on-chain," Schmidt remarked, "current solutions are not sufficient, privacy-wise or scalability-wise."
A recurring theme among the companies interviewed is the necessity of simplification. Bar-Geffen underscored the importance of "selective disclosure," enabling users to determine what information to share and with whom. By abstracting privacy features into existing ecosystems, COTI aims to ensure that users interact with privacy technology as intuitively as they would with mainstream applications like WhatsApp.
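Selective disclosure can be sketched with salted hash commitments: an issuer commits to each attribute separately, and the user later opens only the attributes they choose. This is a simplified stand-in for the credential schemes a COTI-style system would use, not its actual implementation.

```python
# Sketch of selective disclosure via per-attribute salted commitments.
import hashlib
import os

def commit(value: str, salt: bytes) -> str:
    """Bind a value without revealing it; the salt prevents guessing."""
    return hashlib.sha256(salt + value.encode()).hexdigest()

# The issuer commits to each attribute separately (and would sign the set).
attributes = {"name": "Alice", "age": "34", "country": "NL"}
salts = {k: os.urandom(16) for k in attributes}
credential = {k: commit(v, salts[k]) for k, v in attributes.items()}

# The user chooses to disclose only 'country', opening that one commitment.
key, value, salt = "country", attributes["country"], salts["country"]

# The verifier checks the opening; 'name' and 'age' stay hidden.
assert credential[key] == commit(value, salt)
print(f"verified {key} = {value}; other attributes remain concealed")
```

Production schemes replace the plain hashes with signatures over commitments or zero-knowledge proofs, so the verifier can also check the issuer's endorsement of the hidden attributes.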
Jovonni Smith-Martinez, Developer Relations Engineer at Agoric, views the balance between security and usability as key to blockchain adoption. Smith-Martinez highlighted Agoric’s unique approach: leveraging JavaScript-based smart contracts to make privacy-preserving technologies more accessible to developers.
"Developers shouldn’t need to become cryptography experts to build secure applications," Smith-Martinez explained. Agoric’s system allows for asynchronous execution and integration of AI capabilities, simplifying the development process while ensuring robust security.
“It’s important to collaborate with regulatory bodies, as compliance must be a priority for the industry to achieve mass adoption. We love the Cypherpunk ideology, but to be taken seriously, we have to work with nation-states and regulators," he said.
Similarly, Adrienne Youngman of Partisia Blockchain pointed out that self-custody of data can paradoxically enhance user experience. By leveraging multi-party computation (MPC), users can complete complex processes—such as credit scoring or anti-money laundering checks—with a single click. This approach not only reduces friction but also aligns with regulatory demands for data protection.
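The simplest building block behind such MPC flows is additive secret sharing: each party splits its input into random-looking shares, and the parties jointly compute a sum that reveals only the total. This is a minimal sketch of the primitive, not Partisia's protocol.

```python
# Additive secret sharing: three parties learn the sum of their private
# inputs, but no single party ever sees another's value.
import random

MOD = 2 ** 61 - 1  # all arithmetic is done modulo a public prime

def share(secret, n=3):
    """Split a secret into n shares that sum to it mod MOD."""
    shares = [random.randrange(MOD) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % MOD)
    return shares

inputs = [42_000, 55_500, 61_250]        # each party's private value
all_shares = [share(v) for v in inputs]  # each row: one input, split in 3

# Party i receives the i-th share of every input and sums them locally.
partial_sums = [sum(col) % MOD for col in zip(*all_shares)]

# Combining the public partial sums reveals only the total.
total = sum(partial_sums) % MOD
print(total)  # 158750, with no input ever disclosed
```

Real MPC systems layer multiplication, comparisons, and malicious-security checks on top of this, which is what makes a "one-click" credit or AML check possible without centralizing the underlying data.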
Leclerc advocated for hybrid technologies that combine multiple privacy-enabling methods, such as secure enclaves, ZKPs, and MPC. "We cannot solve everything with one technology," he observed, emphasizing the need for interoperability and collaboration among privacy tech providers. This sentiment was echoed by Smith-Martinez of Agoric, who highlighted the importance of building "layers around complexity" to make advanced privacy features more accessible.
Smith-Martinez also pointed to the potential of JavaScript-based smart contracts as a game-changer for privacy and usability, bridging the gap between technical sophistication and user-centric design.
AI’s role in shaping the future of privacy technologies cannot be overstated. Companies like Arcium and COTI are already integrating confidential AI into their offerings, leveraging techniques such as secure multi-party computation and federated learning. These innovations enable organizations to process sensitive data without exposing it, thereby unlocking new possibilities in healthcare, finance, and beyond.
However, the integration of AI also raises questions about compliance and governance. Mark Bungard of Partisia Blockchain emphasized the importance of "privacy and transparency as stepping stones" for ethical AI deployment. Bungard cited examples from Japan, where identity solutions are being piloted to enable selective disclosure of information across healthcare and financial systems.
The regulatory environment is another key determinant of success. Antadze argued that clearer guidelines for permissionless networks would encourage innovation by reducing uncertainty for founders. This sentiment was echoed across the summit, with many participants calling for a collaborative approach between technology providers and policymakers.
As the landscape of confidential computing evolves, the companies at the forefront are proving that privacy and usability are not mutually exclusive. Through innovations in ZKPs, MPC, and hybrid technologies, they are demonstrating how rigorous privacy protections can coexist with intuitive user experiences.
Schrade’s vision of a seamless, plug-and-play future for confidential computing may still be aspirational, but the foundations are being laid. As AI continues to reshape industries, the ability to balance privacy with usability will be a defining challenge—and opportunity—for the next generation of technology leaders.