Welcome to Coinchange’s guest AMA number 16. Today we have Alex Shkor on the show, CEO of Collective Intelligence Labs, also known as CIL. In brief, CIL has been working on an innovative bridging solution, or should I say an omnichain solution. Alex joined CIL in 2021; he is a board director of Paralect, an accelerator and venture studio, and has been a think-tank member of Blockchain for Science since 2017.
The topic of our discussion today is bridges and how CIL's work fits within the interoperability space across blockchains. If you have not already, you can get a higher-level overview of what bridges are, why we need them, and how we can make them more secure by reading our long-form report on our website, under Resources, Crosschain Interoperability.
Telegram chat: https://t.me/+MTqt9OhBMho3NTYy
Jerome Ostorero: Coinchange recently published a research report offering a high-level overview of bridges - what they are, why they're needed in this field, and how their security can be enhanced. You can find this report on our website, under the resources section.
Could you briefly describe what CIL has been working on and tell us about the main innovation that you are developing?
Alex Shkor: We prefer to describe our solution as an omnichain infrastructure rather than a bridging solution. We've given it a name - CILA, which translates to 'power' in Russian. We find the coincidence quite fitting. CILA is a general-purpose omnichain smart contract infrastructure, enabling the creation of smart contracts that exist on multiple chains simultaneously.
As a user, developer, or application, you need not concern yourself with which specific chain hosts the smart contract. A unique routing system ensures that each transaction finds the most efficient path for execution. This makes traditional bridge solutions, and even all-chain solutions, obsolete. It's no longer necessary for one smart contract to communicate directly with another - CILA handles this as part of its foundational infrastructure.
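To make the routing idea concrete, here is a minimal sketch of how a router might score candidate chains before dispatching a transaction. The class names and the scoring function are hypothetical illustrations, not CILA's actual API:

```python
from dataclasses import dataclass

@dataclass
class ChainQuote:
    """A hypothetical per-chain quote a router could collect."""
    chain: str
    gas_cost: float    # estimated execution cost
    latency_ms: float  # estimated confirmation latency

def route_transaction(quotes: list, latency_weight: float = 0.01) -> str:
    """Pick the chain with the lowest combined cost score.

    The scoring function is illustrative; a real router would weigh
    many more factors (contract locality, memory usage, liquidity).
    """
    best = min(quotes, key=lambda q: q.gas_cost + latency_weight * q.latency_ms)
    return best.chain
```

The key point the sketch captures is that the caller never names a chain; the router resolves the execution venue from quotes.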
Does CILA fit within the current bridge categorization of Centralized, Decentralized and Hybrid?
Jerome Ostorero: Before discussing your omnichain infrastructure, CILA, we should first understand the role of current bridges. In the report we published, we identified two main types of bridges: centralized bridges, which are managed by various validators, and decentralized versions. There are also hybrid bridges that combine elements of both centralized and decentralized validation mechanisms.
Does CILA fit within this framework or does it represent a complete departure from this traditional understanding? What about validation processes and mechanisms - how do they function within CILA?
Alex Shkor: Firstly, I want to commend you on the excellent report. I strongly recommend everyone read it as it effectively highlights the issues with bridges and the current landscape. However, I'd like to debate your categorizations of centralized, hybrid, and decentralized. I view it more as a spectrum, ranging from 0% to 100%, with full decentralization yet to be achieved.
In this context, I agree that our solution, CILA, does fit within your framework. It introduces a module similar to existing solutions, but as part of a broader infrastructure. Despite our aim to gradually decentralize our solution, we haven't jumped straight into full decentralization. After implementing proof-of-concepts with some centralized parts, we have a roadmap for gradually replacing these with decentralized elements.
How does validating blockchain states work in the case of an omnichain solution?
Jerome Ostorero: In the context of bridges, we're largely discussing blockchains. Omnichain smart contracts enable a contract to be deployed and executed across all chains simultaneously. You only need to deploy the contract once, and it can execute functions across all chains. However, with bridges, there's a challenge in validating the state of these blockchains. This complexity often leads to breaches, as achieving consensus between different states and blockchains, particularly dissimilar ones, is difficult. So, how does this work in the case of omnichain contracts?
Alex Shkor: That's an excellent question and it leads us to the primary industry challenge we've managed to address. Why haven't omni chain smart contracts been possible until now? The reason is the difficulty in achieving full data consistency between smart contracts on different chains. Our solution applies an 'eventually consistent model' which ensures that even if data is temporarily inconsistent, it eventually synchronizes.
This model is crucial in our technology stack, along with our command query responsibility segregation (CQRS) approach, a standard pattern in distributed systems and conventional data systems where strong consistency is required. This architecture is fundamental in financial systems, where data inconsistency or fund loss isn't tolerable.
We've worked extensively with this architecture and adapted it to allow deployment of a specific execution environment on any chain, regardless of its virtual machine or existing environment. We've established two layers: an aggregation layer under the blockchain, where the router and aggregator reside, and a relay layer above the blockchain, which transmits events between chains. This innovative approach is the core solution to the challenge, and we're continually building on this technology to enhance its capabilities.
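The CQRS/event-sourcing model Alex describes can be sketched in a few lines: commands change state only by emitting events, and a replica on another chain converges by replaying the same event stream - eventual consistency. The class and event names here are illustrative assumptions, not CILA's implementation:

```python
class TokenAggregate:
    """Toy event-sourced aggregate: state changes only via events."""

    def __init__(self):
        self.balances = {}
        self.events = []

    # --- command side ---
    def mint(self, dst, amount):
        self._apply({"type": "Minted", "dst": dst, "amount": amount})

    def transfer(self, src, dst, amount):
        if self.balances.get(src, 0) < amount:
            raise ValueError("insufficient balance")
        self._apply({"type": "Transferred", "src": src,
                     "dst": dst, "amount": amount})

    def _apply(self, event):
        if event["type"] == "Minted":
            self.balances[event["dst"]] = (
                self.balances.get(event["dst"], 0) + event["amount"])
        elif event["type"] == "Transferred":
            self.balances[event["src"]] -= event["amount"]
            self.balances[event["dst"]] = (
                self.balances.get(event["dst"], 0) + event["amount"])
        self.events.append(event)

    # --- replication side (what a relayer would carry between chains) ---
    def replay_into(self, replica):
        """Forward events the replica has not seen; after replay the
        replica's state equals ours, even if it lagged temporarily."""
        for e in self.events[len(replica.events):]:
            replica._apply(e)
```

The relay layer in CILA's design would transmit such event streams between chains, while the aggregation layer maintains the authoritative event log.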
Jerome Ostorero: Essentially, the CQRS (Command Query Responsibility Segregation) infrastructure and its state validation approach, which ensures eventual data consistency through event validation, is the primary innovation.
Have I understood correctly that these aggregators and relayers form new layers, similar to layer one blockchains like Ethereum and Cosmos, and function as network nodes as you've mentioned?
Alex Shkor: The network comprises nodes and relayers. The relayer architecture allows for independent solutions on each relayer, which compete according to a specific protocol. By creating this infrastructure, we enable everyone to deploy their own relayer, aggregator, and router. This creates a competitive market for optimization, driving innovation in the development of more efficient relays, routers, and aggregators.
Would it be accurate to say that your relayers will operate similarly to MEV (Maximal Extractable Value) bots and relayers on layer-one chains, with their role being to optimize the routing of functions to different execution layers, and that, as such, these relayers would have an incentive?
Alex Shkor: Indeed, this will foster healthy competition among blockchains for efficient execution. Currently, blockchains compete for user attention and, effectively, buyers of their native currencies. However, there isn't a common conduit through which transactions are distributed to all blockchains based on their performance. With our solution, the most efficient blockchains will rise to the top.
Could we argue that this adds a layer of centralization atop existing blockchains?
Alex Shkor: I wouldn't characterize it as a centralization layer. It's more of a synchronization layer. There can be many of these layers, with each execution chain potentially connected to multiple relays. Each relay can be linked to a different set of chains, and not every chain necessarily needs to connect to every relay.
Jerome Ostorero: We've discussed synchronizing the stream of events across chains, but decentralization remains a concern. You mentioned starting off somewhat centralized.
So, what are the plans to increase the decentralization of this new layer you're creating over time?
Alex Shkor: Our plan is straightforward. We aim to create an open-source protocol, and potentially open-source the solution, enabling people to develop various relay solutions. There will be at least two types of relays, pessimistic and optimistic. Each relay will optimize itself for the set of chains it's connected to, as different chains might require different optimization techniques. This will create an open market where relay developers will need to decide the most profitable way to implement their relays.
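The pessimistic/optimistic distinction can be illustrated roughly as follows: a pessimistic relay verifies a proof before forwarding an event, while an optimistic relay forwards immediately and allows events to be challenged within a window. All class names and methods are invented for illustration:

```python
class PessimisticRelay:
    """Verifies every event proof before forwarding it."""

    def __init__(self, verify):
        self.verify = verify      # caller-supplied proof checker
        self.forwarded = []

    def submit(self, event, proof):
        if not self.verify(event, proof):
            raise ValueError("invalid proof")
        self.forwarded.append(event)


class OptimisticRelay:
    """Forwards immediately; events can be challenged before finality."""

    def __init__(self):
        self.pending = []

    def submit(self, event):
        self.pending.append(event)

    def challenge(self, event):
        # A successful fraud proof removes the event before finalization.
        self.pending.remove(event)
```

The trade-off is the usual one: pessimistic relays pay verification cost up front, optimistic relays are cheaper and faster but need a challenge period.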
Let's consider Coinchange as a company wanting to deploy a smart contract using the omnichain contract infrastructure. How does that work? What language do we deploy the contract in?
Do we need to route it directly to the relay, or go through a node infrastructure provider? What's the process for an end user like Coinchange?
Alex Shkor: Great question. When you deploy a smart contract using the omnichain infrastructure, you do it the same way as any transaction: a special transaction deploys the smart contract through the router. The router finds the most efficient place to deploy the contract based on various factors, such as memory usage, the collections used in the smart contract, or even which other smart contracts you want to connect to.
You won't need to know exactly where the smart contract resides as the router should provide optimal execution. In some cases, routers may provide a feature to force deployment on specific chains if desired. Initially, a limited set of predefined smart contracts will be available for users to select a template and deploy with set parameters. Later, a domain-specific language (DSL) will be released to allow the implementation of any custom smart contract to be deployed just once. This effectively simplifies the entire process, making it more accessible and efficient for users.
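A rough sketch of template-based deployment through a router, under the assumption that early deployments pick from predefined templates with set parameters and may optionally force a specific chain. The template registry, function names, and parameter format are all invented for illustration:

```python
from typing import Optional

# Hypothetical template registry for the initial, pre-DSL stage.
TEMPLATES = {
    "erc20": {"required": {"name", "symbol", "supply"}},
}

def deploy(template: str, params: dict, router,
           force_chain: Optional[str] = None) -> dict:
    """Validate parameters against the template, then let the router
    choose the host chain unless one is explicitly forced."""
    spec = TEMPLATES[template]
    missing = spec["required"] - params.keys()
    if missing:
        raise ValueError(f"missing parameters: {sorted(missing)}")
    chain = force_chain or router.best_chain_for(template, params)
    return {"template": template, "chain": chain, "params": params}
```

The point is that the deployer never hardcodes a chain; the `force_chain` escape hatch mirrors the optional forced-deployment feature mentioned above.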
Jerome Ostorero: That leads me to a follow-up question.
How would this work for more complex transactions, particularly when dealing with a significant number, like 10, 15, or even 30 calls within a single transaction?
Alex Shkor: As I tried to clarify in my previous answer, we use the SAGA pattern to address this complexity. This pattern is prevalent in event-sourced systems. Essentially, the problem of managing cross-aggregate calls is already resolved in event-sourced systems by the nature of their architecture: it's designed in such a way that you can't approach it differently. Consequently, mistakes are avoided, and you can't force the contract into an inconsistent state. This simply isn't possible with the infrastructure.
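The SAGA pattern's rollback behavior can be sketched generically: each step carries a compensating action, and a failure part-way through undoes the completed steps in reverse order, so the overall state never ends up half-committed. This is a textbook sketch of the pattern, not CILA's code:

```python
def run_saga(steps):
    """Execute (action, compensate) pairs in order.

    If any action raises, run the compensations of the already
    completed steps in reverse order and report failure.
    """
    done = []
    try:
        for action, compensate in steps:
            action()
            done.append(compensate)
    except Exception:
        for compensate in reversed(done):
            compensate()
        return False
    return True
```

A multi-call cross-chain transaction (10, 15, or 30 calls) then becomes a chain of such steps; either all commit, or the completed prefix is compensated.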
Jerome Ostorero: Certainly
Let's step back and explore the meaning of Collective Intelligence Labs (CIL). I've gone through your white paper, and it appears that you're proposing a "layer zero" to act as a governance mechanism for Web3.
Could you provide more information on this? What's the vision behind this concept?
Alex Shkor: Essentially, the idea is to create a "layer minus one," if you will. This isn't about numbering but rather the roles different layers play. The core layer, as we call it, is designed for synchronization. As the ecosystem expands, not every relay can maintain a comprehensive state of all the chains it's connected to, so it becomes necessary to connect only to subsets of data.
This results in various relays, each with a subset of all the data in Web3, and these must be synchronized. Technically, it's feasible because they need to merge their event streams and merge the Merkle tree of all aggregate states. But conflicts between relays can occur, and we need a system to resolve these conflicts. That's when we introduce the term epoch and the concept of a new layer.
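The merge-and-conflict idea can be illustrated with a toy Merkle-style root over aggregate states: matching roots mean two relays agree on the data they share, while any shared aggregates with differing states are conflicts for the core layer to resolve. The hashing scheme here is illustrative only:

```python
import hashlib

def _leaf(aggregate_id: str, state: str) -> str:
    return hashlib.sha256(f"{aggregate_id}:{state}".encode()).hexdigest()

def merkle_root(states: dict) -> str:
    """Root hash over a relay's aggregate states (sorted for determinism)."""
    hashes = [_leaf(k, v) for k, v in sorted(states.items())]
    while len(hashes) > 1:
        if len(hashes) % 2:          # duplicate last leaf on odd counts
            hashes.append(hashes[-1])
        hashes = [hashlib.sha256((a + b).encode()).hexdigest()
                  for a, b in zip(hashes[::2], hashes[1::2])]
    return hashes[0]

def find_conflicts(relay_a: dict, relay_b: dict) -> list:
    """Shared aggregates where the two relays disagree on state."""
    shared = relay_a.keys() & relay_b.keys()
    return sorted(k for k in shared if relay_a[k] != relay_b[k])
```

Comparing roots is a cheap equality check; only on a mismatch does a relay need to drill down to the specific conflicting aggregates.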
This new layer does not have a native token because it could lead to infinite layers of synchronization, merely transferring the problem from relays to the next layer. So, we need to halt this somewhere. But this layer must still be decentralized, which prompts us to delve deeper into the meaning of decentralization.
Decentralization is not just about multiple nodes each having its own token; it's about the ability of people with knowledge and expertise to influence decisions. This led us to the realization that the core layer needs a different model than the previous layers. Here, DAOs start to compete, with the most decentralized DAO governing the core layer. This DAO is not necessarily the one we created; it could be any other. The DAO that garners the most support and creates an effective incentive and voting system will govern.
We proposed a three-chamber parliament, but it could be any other model. Essentially, it's about making a bet on a specific DAO design that is the most decentralized.
Jerome Ostorero: Indeed, it's about reintegrating people into governance. We're moving beyond mere numbers and tokens, or machines doing all the work. Actual experts are contemplating the implications of decentralization across various financial aspects and different blockchains.
Could you elaborate on the DAO selection process? Is it indeed a selection?
Or is it more spontaneously created? You mentioned taking the most decentralized DAO, but surely we want a variety of DAOs, each with diverse people and expertise. How would such a model be created? And how would it foster a multi-influential governance process, incorporating different perspectives?
Alex Shkor: Yes, this brings us to the concept of collective intelligence. Our initial DAO is quite simple, but its core principle is leveraging people's expertise in decision-making. We've created numerous committees for specific purposes and fields of expertise.
However, it's not an entirely inclusive model. The council itself decides who to include, so it's more of an expertise-based system rather than a fully inclusive one. At some point, we'll need to expand the initial number of experts in this DAO.
Additionally, we have two other equally significant chambers - a trade chamber and a liquid-democracy chamber. In the trade chamber, voting is capital-based, and every decision must pass through this chamber. In the liquid-democracy chamber, each person has one vote, ensuring wider inclusion.
Proposals originate from experts, but they must be accepted by both the capital and the people to be implemented. This structure promotes balance between expertise, capital, and popular vote.
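The three-chamber gate can be sketched as follows, with purely illustrative thresholds and function names: a proposal originates with the experts and must then clear both the capital-weighted trade chamber and the one-person-one-vote chamber:

```python
def capital_chamber_passes(votes, threshold=0.5):
    """votes: list of (capital_weight, approve) pairs."""
    total = sum(w for w, _ in votes)
    yes = sum(w for w, ok in votes if ok)
    return total > 0 and yes / total > threshold

def popular_chamber_passes(votes, threshold=0.5):
    """votes: list of bool, one vote per person."""
    return len(votes) > 0 and sum(votes) / len(votes) > threshold

def proposal_passes(expert_approved, capital_votes, popular_votes):
    """A proposal needs all three chambers: experts, capital, people."""
    return (expert_approved
            and capital_chamber_passes(capital_votes)
            and popular_chamber_passes(popular_votes))
```

Any single chamber rejecting acts as the veto Jerome describes next: the proposal must be reworked and resubmitted.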
Jerome Ostorero: Right, it operates much like a veto system. If the two chambers don't agree, the proposal must be reworked and resubmitted.
This is indeed insightful. It seems we've been missing this approach as new DAOs, or rather 'metaDAOs', as they're sometimes called, emerge. These still need to be clearly defined. Very intriguing indeed.
Where can people find more information about CIL and learn about your current activities?
Alex Shkor: Firstly, you can visit our website, collectiveintelligence.dev, to learn more about our activities and governance structures. We've also recently started sharing updates publicly on Twitter and Telegram. We'll post the links in the comments.
Jerome Ostorero: We'll include those links in the YouTube description.
I'm curious about your current status. Have you already started deploying on the mainnet?
And which are the first few blockchains that you've integrated?
Alex Shkor: Just last week, we completed our first proof of concept (POC), which we showcased to our advisors and private network this week. This was a significant milestone, as we spent about five months solely on architecture. It's the most extensive design I've worked on in my 14-year career. But thanks to the well-defined architecture, we were able to deliver the POC quickly - in about two months.
This first version consists of four layers. We didn't need the fifth layer to demonstrate its functionality, as that will come into play in the next stage when we need to synchronize the release. For this initial POC, we built it with EVM chains, specifically the Goerli Ethereum testnet and the Aurora testnet.
Jerome Ostorero: Alex, your insights, although quite technical, were very enlightening. Thank you for joining us on today's podcast. We look forward to seeing more from CIL and witnessing your vision come to fruition.
Alex Shkor: I appreciate the invitation. It was wonderful discussing this with you. I'm very pleased with our conversation. Stay tuned, follow us, and perhaps we'll have another chat in the future when everything has launched and is operational.
Jerome Ostorero: Absolutely, we'll circle back when there's more progress and exciting news to share. Thanks, Alex.
Alex Shkor: Thank you.