
Vitalik Buterin’s Keynote at Devcon 4

(The video is shared for learning purposes only. The copyright belongs to the organizer.)

Okay. Hello everyone. How are you?

Happy anniversary of Satoshi's white paper. Ten years!

Yeah. That's 10 in binary counting.

Okay. So today I'm going to talk about Ethereum 2.0, but not just from a technical point of view — more from the point of view of why Ethereum 2.0, what is Ethereum 2.0, and how we got here.

Right, so what is Ethereum 2.0? First of all, Ethereum 2.0 is a combination of a bunch of different features that we've been talking about for several years, researching for several years, and actively building for several years, which are finally going to come together into one coherent whole. These features include:

  • Proof of stake (Casper)
  • Scalability (Sharding)
  • Virtual Machine improvements (eWASM)
  • Improvements to cross-shard contract logic
  • Improvements to protocol economics

And really the list goes on and on, and a lot of this work is happening in parallel. So lots of great stuff.

Now, how did we get here?

Right. So the road to proof of stake actually started way back in January 2014 with this blog post I published describing an algorithm called Slasher. It introduced what is really the most basic concept in a lot of proof-of-stake algorithms: the idea that if you get caught doing something wrong, this can be proven and you can be penalized for it, and how this can be used to increase security.
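That core idea — misbehavior produces cryptographic evidence that anyone can submit to trigger a penalty — can be sketched as a toy. This is not the actual Slasher protocol; the keyed hash below stands in for a real digital signature, and the names are made up for illustration:

```python
import hashlib

# Toy "signature": a keyed hash standing in for a real digital signature.
# (Illustrative only -- a real Slasher-style protocol uses proper signatures.)
def sign(secret: str, height: int, block_hash: str) -> str:
    return hashlib.sha256(f"{secret}|{height}|{block_hash}".encode()).hexdigest()

def is_slashable(vote_a, vote_b) -> bool:
    """Evidence of misbehavior: the same validator signed two
    different blocks at the same height."""
    validator_a, height_a, hash_a, _ = vote_a
    validator_b, height_b, hash_b, _ = vote_b
    return validator_a == validator_b and height_a == height_b and hash_a != hash_b

# A validator equivocates: it signs two competing blocks at height 100.
vote1 = ("validator9", 100, "0xaaa", sign("k9", 100, "0xaaa"))
vote2 = ("validator9", 100, "0xbbb", sign("k9", 100, "0xbbb"))

print(is_slashable(vote1, vote2))  # True -> the deposit can be penalized
```

The key property is that the two signed votes together are self-contained proof of wrongdoing: no one has to be online at the time of the fault to witness it.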

But at the time, as you can see from this slide, I believed that, quote, "Slasher is a useful construct to have in our war chest in case proof of stake mining becomes substantially more popular or a compelling reason is provided to switch," but we weren't doing that yet. So at the time it was not even clear that proof of stake was the direction we were going — but as we know now, over time that changed quite a lot.

So what happened in 2014?

So first of all, we went through a bunch of interesting but ultimately aborted ideas:

"Proof of proof of work" was a suggestion to try to improve scalability. "Hub-and-spoke chains" — basically you have one chain in the middle and a bunch of chains on the edges. This was a very early scalability and sharding proposal that tried to improve scalability for local transactions, but not for global transactions — transactions that jump from one shard to another.

Hypercubes. Basically hub-and-spoke, except the cube has 12 dimensions instead of 3, so we could get even more scalability. Unfortunately, for various reasons this idea ended up getting abandoned, but someone has a big ICO trying to make it work, so I'm happy someone's trying it out.

So in 2014, there was still some progress, right? There was this concept of weak subjectivity that we came up with — a semi-formal security model that tries to capture under what conditions proof-of-stake deposits and slashing and all of these concepts are actually secure.

We also grew more and more certain that algorithms with much stronger properties than the proof-of-stake algorithms that existed at the time — things like Peercoin and all of its derivatives — were actually possible. There was also a growing understanding that there was some kind of proof-of-stake scalability strategy you could somehow do through random sampling, but we had no idea how. And we had a roadmap.

So there was this nice blog post from Vinay Gupta in March 2015 where he outlined the four big stages of Ethereum’s roadmap at the time.

Stage 1, Frontier: Ethereum launching, yeah!

Stage 2, Homestead: going from alpha to beta.

Stage 3, Metropolis, which at the time was supposed to be about Mist, user interfaces, and improving user experience. Since then, the focus has switched more toward enabling stronger cryptography, but the interface work is still going forward in parallel.

And stage 4, Serenity: proof of stake. Right. So from now on we're not going to call it Ethereum 2.0. I will also refuse to use the word "Shasper" because I find it insanely ugly. We'll call it Serenity.

So after this came a bit of a winter. We had a bunch of abortive attempts at solving some of the core problems in proof of stake and some of the core problems with scalability. Research on Casper continued quietly, and Vlad quietly began all of his work on Casper CBC.

One of the first interesting ideas was Consensus by Bet, where people would bet on which block becomes finalized next, and once more people bet on a block, that itself becomes information that gets fed into other people's bets. So the idea was that you would have this recursive formula where more and more people bet more and more strongly on a block over time, and after a logarithmic number of rounds everyone would be betting their entire money on a block — and that would be called finality.
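The "recursive formula" dynamic can be illustrated with a toy simulation. The actual consensus-by-bet formula was more involved; here I just assume, for illustration, that each round the crowd's confidence compounds on itself by squaring the odds, which is enough to show the logarithmic-rounds convergence:

```python
def bet_round(p: float) -> float:
    """One round of amplification: seeing the crowd leaning toward a block,
    each bettor bets more strongly on it (here: squaring the odds).
    Toy model only -- not the actual consensus-by-bet formula."""
    odds = p / (1 - p)
    odds *= odds  # confidence compounds on itself each round
    return odds / (1 + odds)

p, rounds = 0.6, 0  # the crowd starts leaning 60/40 toward one block
while p < 0.9999:   # "finality": essentially everyone all-in on the block
    p = bet_round(p)
    rounds += 1

print(rounds)  # convergence after only a handful of rounds
```

Because the odds square each round, the distance to certainty shrinks doubly exponentially, which is exactly why only a logarithmic number of rounds is needed.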

We actually took this idea really far. We created an entire proof of concept for it, and you could see it finalizing — you can see the signature function here. We spent a lot of time on this, but the whole idea ended up going away once we realized how to make proper BFT-inspired consensus actually work sanely.

Rent. This is the idea that instead of charging a big one-time fee for filling storage, we would charge fees over time: for every day, or every block, or whatever, that some storage slot is filled, you would have to pay some amount of ether for it.

There was this one EIP, number 35, that I tried to call EIP 103 — but really it was EIP 35, because the issue number takes precedence. This was one of the really early attempts to formalize the concept, and we've had many iterations since then on how to implement rent maximally nicely.
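The basic accounting behind rent is simple, and can be sketched as follows. This is a hypothetical illustration in the spirit of the early rent proposals, not the contents of any specific EIP; the rate and numbers are made up:

```python
RENT_PER_BLOCK = 2  # wei charged per occupied slot per block (made-up rate)

class StorageSlot:
    """Sketch of per-block storage rent: a slot prepays a balance
    and is evicted once accumulated rent exhausts it."""
    def __init__(self, balance: int):
        self.balance = balance

    def charge(self, blocks_elapsed: int) -> bool:
        """Deduct rent for the elapsed blocks; return False if evicted."""
        due = blocks_elapsed * RENT_PER_BLOCK
        if due >= self.balance:
            self.balance = 0
            return False  # rent exhausted -> the slot is cleared
        self.balance -= due
        return True

slot = StorageSlot(balance=100)
print(slot.charge(30))  # 60 wei due -> survives with 40 wei left
print(slot.charge(30))  # another 60 wei due -> evicted
```

The design question the later iterations wrestled with is exactly what this sketch glosses over: who tops up the balance, and what "eviction" means for contracts that other contracts depend on.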

There was also this scalability paper that we wrote back in 2015, which tried to formalize the idea of quadratic and super-quadratic sharding. But it was very complicated. It had these very complicated escalation games, a lot of them inspired by how escalation works in court systems — an analogy that I know Joseph really loves to use. But we tried to use it at the base layer, with deep reversions: if something goes wrong, then potentially large portions of the state could get reverted, even fairly far into the future. So there was a bunch of complexity, right?

Now, one of the fundamental problems that we couldn't quite capture, but were slowly inching towards, was this idea of the fisherman's dilemma. This is a very fundamental concept in sharding research that describes the difference between scaling execution of programs and scaling availability of data. With execution of programs, you can have people commit to what the answer is, and you can later play a game — binary-searching your way to who actually made a mistake — and penalize everyone who made a mistake after the fact.

The problem with data availability is that whatever the game is, you can cheat it: you can just not publish any data at all until the mechanism tries to check whether you published it, and only then publish just the data that the mechanism checked for. This turns out to be a fairly large flaw in a fairly large class of scalability algorithms.
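The eventual answer, sketched in the erasure-coding post mentioned below, is to make withholding expensive: erasure-code the block so an attacker must withhold at least half the chunks to actually hide the data, then have each client sample a few random chunks. A simplified simulation of why this works (the parameters here are illustrative, not protocol values):

```python
import random

def detect_withholding(total_chunks: int, withheld_fraction: float,
                       samples: int, rng: random.Random) -> bool:
    """A light client asks for `samples` random chunks; if any is missing,
    the block is rejected as unavailable. With erasure coding, an attacker
    must withhold >= 50% of chunks to actually hide the data, which is
    exactly what makes random sampling effective. (Simplified sketch.)"""
    withheld = set(rng.sample(range(total_chunks),
                              int(total_chunks * withheld_fraction)))
    return any(rng.randrange(total_chunks) in withheld for _ in range(samples))

rng = random.Random(42)
# Attacker hides 50% of 1024 chunks; each client samples 30 random chunks,
# so the chance of a single client missing the fraud is about 0.5**30.
trials = 1000
caught = sum(detect_withholding(1024, 0.5, 30, rng) for _ in range(trials))
print(caught, "of", trials, "clients detected the withholding")
```

Without the erasure-coding step, an attacker could withhold just one critical chunk, and a sampler would almost never hit it — which is precisely the fisherman's dilemma.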

And I wrote a blog post about this — you can search for it. It's called "A Note on Erasure Coding and Data Availability," and it describes some of the issues in more detail. This was one of the things that delayed us. But even still, we were happily making progress. Ethereum was moving forward. We were on our way.

Wait, then this happened (referring to the DAO attack). Okay, no more problems. Oh wait, then this happened (referring to the Shanghai DoS attacks). So the DAO hack, the DoS attacks — all of that ended up consuming a lot of people's time and attention, delaying us by potentially up to six months.

But even still, work moved forward. eWASM moved forward, the work on the virtual machine moved forward, work on alternatives like EVM 1.5 moved forward, and people continued to get a better and better idea of what a more optimal blockchain algorithm would look like, from many different angles.

So after this, we started making huge progress, and very quickly. During all of this time there were these different strands of research going on:

Some of them were around proof of stake, trying to do base-layer consensus more efficiently.

Some of them were around scalability, trying to shard base-layer consensus.

Some of them were around improving the efficiency of the virtual machine.

Some of them were around things like abstraction, which would allow people to use whatever signature algorithm they wanted for their accounts — this could provide post-quantum security and make it easier to build privacy solutions, among a bunch of other benefits — and around protocol economic improvements. And all of these things were happening all the way through.

So at some point around the beginning of 2017, we finally came up with this protocol that we gave the very unambitious name of Minimal Slashing Conditions. Minimal Slashing Conditions was basically a translation of PBFT-style traditional Byzantine fault-tolerant consensus — the same sort of great work done by Lamport, Shostak, and all those wonderful people back in the 1980s — simplified and carried forward into more of a blockchain context.

So the idea is that in a blockchain you just keep having new blocks appear over time, and you can gain these nice pipelining efficiencies by merging sequence numbers and view numbers: every time a new round starts, you add new data into the round. You can also have the second round of confirmation for one piece of data be the first round of confirmation for the next piece of data, and you can get a huge amount of efficiency gain out of all of that.

So the first step was Minimal Slashing Conditions, which had six slashing conditions. Then it went down to four, and finally, about half a year later, we ended up merging prepares and commits. And this gave us Casper the Friendly Finality Gadget (Casper FFG).
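After merging prepares and commits, a validator's vote links a source checkpoint epoch to a target epoch, and only two slashing conditions remain. A minimal sketch of the resulting check (epochs simplified to plain integers):

```python
from typing import NamedTuple

class Vote(NamedTuple):
    """A Casper FFG vote links a source checkpoint epoch to a target epoch."""
    source: int
    target: int

def slashable(a: Vote, b: Vote) -> bool:
    """The two Casper FFG slashing conditions (post prepare/commit merge):
    1. Double vote: two distinct votes with the same target epoch.
    2. Surround vote: one vote's span strictly surrounds the other's."""
    double = a.target == b.target and a != b
    surround = (a.source < b.source and b.target < a.target) or \
               (b.source < a.source and a.target < b.target)
    return double or surround

print(slashable(Vote(3, 4), Vote(2, 4)))  # True: double vote on target 4
print(slashable(Vote(1, 5), Vote(2, 3)))  # True: (1,5) surrounds (2,3)
print(slashable(Vote(1, 2), Vote(2, 3)))  # False: a consistent chain of votes
```

Going from six conditions to these two is exactly the "minimal" in Minimal Slashing Conditions: any pair of votes an honest validator would cast passes, and any pair that could cause two conflicting checkpoints to finalize fails.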

So last year at Devcon I presented this new sharding design that basically kept the main chain and created sharding as a kind of layer-2 system on top of the existing main chain, which would then get upgraded to being the layer 1 once it got solid enough. From Vlad came the Casper CBC paper, and there was the Casper FFG concept. Then: December 31st, 2017, 23:40, Bangkok time — because we happened to be in Thailand at the time.

Basically, what happened there is that we pretty much nailed down the spec of what a version of hybrid proof of stake would look like. This version would use the ideas from Casper FFG — these traditional Byzantine-fault-tolerance-inspired ideas — to do proof of stake on top of the existing proof-of-work chain.

So this would be a mechanism that would allow us to get to hybrid proof of stake fairly quickly, with a fairly minimal level of disruption to the existing blockchain. And the theory was that we would then be able to upgrade to full proof of stake over time. We got really far along this direction: there was a testnet, there were Python clients, there were messages going between different VPSs, different servers, different computers. It got very far.

And at the same time, we were making a lot of progress on sharding. We continued working on the sharding spec. Eventually we had this retreat in Taipei in March, and around then a lot of the ideas around how to implement a sharded blockchain seemed to solidify.

So in June we made this very difficult but, I think, in the long term really beneficial and valuable decision. We said: hey, over here we have a bunch of teams trying to implement hybrid proof of stake — trying to do the Casper FFG thing, build the Casper FFG implementation as a smart contract inside the existing blockchain, and make a few tweaks to the fork choice rule.

Then over here we had a completely separate group trying to make a sharding system — trying to make a validator manager contract (later renamed the sharding manager contract) on the main chain, and to build a sharding system on top of that.

These groups were not talking to each other very much. Then, on the sharding side, it eventually became clear that we would get much more efficiency by making the core of the sharding system not a contract on the proof-of-work chain, but its own proof-of-stake chain.

That way we could make validation much more efficient: we would not have to deal with EVM overhead, we would not have to deal with gas, we would not have to deal with unpredictable proof-of-work block times, and we could make block times faster, along with a whole bunch of other efficiencies. And we realized: hey, we're working on proof of stake here, and we're working on proof of stake over there — why are we doing two separate proof-of-stake things again? So we decided to just merge the two together.

This did end up nullifying a lot of the work that came before. But it meant that instead of working on two separate implementations, we were working together on one spec — one protocol that would get us the benefits of Casper proof of stake and sharding essentially at the same time.

Right, so basically, instead of going to one destination, then another, and later doing the massive work of figuring out how to merge the two, we would take a path that takes a little longer at the beginning, but that actually arrives at a proof-of-stake, sharded blockchain with the properties we're looking for.

So in the meantime, we spent a lot of time arguing about fork choice rules. We got closer and closer to realizing that fork choice rules based on GHOST — the Greedy Heaviest Observed SubTree algorithm, originally designed for proof of work but repurposed by us for proof of stake — made a huge amount of sense.
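The core of GHOST fits in a few lines: starting from genesis, repeatedly descend into the child whose entire subtree carries the most weight (work in proof of work, attestations in proof of stake). The block tree and weights below are hypothetical, just to show the mechanics:

```python
# Minimal GHOST fork choice over a hypothetical block tree.
# Per-block weights stand in for work (PoW) or attestations (PoS).
children = {"genesis": ["A", "B"], "A": ["A1"], "A1": ["A2"], "A2": [],
            "B": ["B1"], "B1": []}
weight = {"genesis": 0, "A": 1, "A1": 1, "A2": 1, "B": 3, "B1": 1}

def subtree_weight(block: str) -> int:
    """Total weight of a block plus all of its descendants."""
    return weight[block] + sum(subtree_weight(c) for c in children[block])

def ghost_head(block: str = "genesis") -> str:
    """Greedily descend into the heaviest-observed subtree."""
    while children[block]:
        block = max(children[block], key=subtree_weight)
    return block

print(ghost_head())  # "B1": B's subtree (weight 4) beats A's longer chain (weight 3)
```

Note the contrast with a longest-chain rule: the A branch is longer, but the B branch carries more accumulated weight, so GHOST picks B1 — which is what makes the rule robust when many validators attest in parallel.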

We were just at the start of doing research on Verifiable Delay Functions. We had this workshop at Stanford, made a lot of progress on Verifiable Delay Functions there, and are still collaborating with a lot of researchers there. More ideas on how to do abstraction — the idea that individual users can choose their own signature algorithms for their accounts. More ideas on rent, which we decided to rename "storage maintenance fees" for political reasons.

And research continued.

So there's been a lot of work on cross-shard transactions. For example, there is this proposal I made on cross-shard contract yanking, which generalizes the traditional distributed-systems concept of locking into something that makes sense in this asynchronous, cross-shard context.

I also wrote this paper on resource pricing, which includes ideas on how to build a much more optimized and much more efficient fee market, along with how to do storage maintenance fees and why, and the trade-offs between different ways of setting them. And Cdetrio wrote this post on doing synchronous cross-shard transactions.

Of course, in the meantime, Casper CBC research also expanded into Casper CBC's own brand of sharding, which is totally not called Vlading, because Vlad absolutely hates that term.

So, development. One of the key strategies that we've really tried to push forward on the Ethereum 2.0 path is multi-client decentralized development. And this isn't just because we have an ideological belief in decentralization; it's also a very pragmatic strategy: first of all, it hedges our bets against the possibility that any one software development team doesn't perform well.

Second, we already have plenty of experience from the Shanghai DoS attacks: there are plenty of cases where, if one client has a bug, having other clients available allows the network to continue running. We also want to make the development ecosystem less dependent on the Foundation itself.

So the client the Ethereum Foundation works on is actually the Python client, and it has plenty of use cases. But Python, as a language, has inherent performance limitations, so there's always going to be an incentive to run the stuff built by the wonderful folks at Prysmatic, Lighthouse, Status, PegaSys, and all the other teams that are popping up seemingly every month.

So soon, something which is totally not going to be called Shasper — Serenity — begins. Yeah!

What is Serenity?

So first of all, it's the fourth stage, after Frontier, Homestead, and Metropolis — where Metropolis is broken down into Byzantium and Constantinople, with Constantinople coming very soon as well.

And it's the realization of all of these different strands of research that we have been spending all of our time on for the last four years. This includes Casper (not just hybrid Casper — 100% organic, genuine, pure Casper), sharding, eWASM, and all of these other protocol research ideas.

This is a new blockchain in the sense of being a new data structure, but it has a link to the existing proof-of-work chain: the proof-of-stake chain is aware of the block hashes of the proof-of-work chain, and you can move ether from the proof-of-work chain into the proof-of-stake chain. So it's a new system, but a connected system. And the long-term goal is that once this new system is stable enough, all of the applications on the existing blockchain can be folded into a contract on one shard of the new system — basically an EVM interpreter written in eWASM. That's not finalized, but this seems to be roughly where the roadmap is going at this point.

Serenity is also the world computer as it's really meant to be — not a smartphone from 1999 that can process 15 transactions per second and maybe potentially play Snake.

And it's still decentralized — we hope that by many metrics it can be even more decentralized than today. For example, as a beacon chain validator, your storage requirements at this point seem like they'll be under one gigabyte, as compared to something like 8 gigabytes of state storage today, and the 1.8 terabytes that trolls on the internet seem to think the Ethereum blockchain requires for some stupid reason.

Expected phases.

So, phase 0: beacon chain proof of stake. The beacon chain is a blockchain that doesn't really hold any application information — it's kind of like a dummy chain. All you have is validators, and these validators are running the proof-of-stake algorithm.

So this is kind of halfway between a testnet and a mainnet. It's not quite a testnet, because you would be able to stake real ether and earn real rewards on it. But it's also not quite a mainnet, because it doesn't have applications — so if it breaks, people are hopefully not going to cry as badly as they did when the Shanghai DoS attacks made everyone's ICOs go slowly.

Phase 1: shards as data chains. Basically, this is where the sharding part turns on. It's a simplified version that doesn't do sharding of state — it does sharding of data. You can throw data on the chain; you could even try to build a state execution engine on top of it yourself, but the simplest thing to use it for is data. So if you want to do decentralized whatever on a blockchain, you'll now have the scalability to do it. But you won't yet have the full state-execution capability to build smart contract applications and all of the really fancy, complex stuff.

Phase 2: enable state transitions. This includes enabling the virtual machine, enabling accounts and contracts, ether moving between shards — all of this cool stuff.

Phase 3 and beyond: iterate, improve, add technology.

So, expected features: pure proof of stake; faster time to synchronous confirmation (about 8–16 seconds). Notice that, because of how the fork choice rule and the signing mechanism work in the beacon chain, one confirmation in the beacon chain involves messages from hundreds of validators — so from a probabilistic point of view, it's actually equivalent to hundreds of confirmations on the Ethereum proof-of-work chain. So under a synchronous model, you should be able to treat one block as being close to final.
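To see why hundreds of validator messages behind one confirmation give such strong guarantees, consider the probability that an attacker controls a majority of a randomly sampled committee. The committee size and attacker share below are illustrative, not the exact Serenity parameters, and the simple binomial model assumes sampling with replacement:

```python
from math import comb

def committee_capture_prob(n: int, attacker_share: float) -> float:
    """Probability that an attacker controlling `attacker_share` of all
    validators gets a strict majority of a randomly sampled n-validator
    committee (binomial model; illustrative, not exact protocol math)."""
    p = attacker_share
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# With a committee of 128 and an attacker holding 1/3 of the stake,
# the chance of the attacker controlling a majority is already tiny:
print(committee_capture_prob(128, 1/3) < 1e-3)  # True
```

With committees of hundreds of validators, the tail probability shrinks further still, which is the sense in which one beacon chain confirmation behaves like hundreds of proof-of-work confirmations.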

Economic finality and safety under asynchrony come after 10 to 20 minutes; fast virtual machine execution via eWASM; and hopefully a thousand times higher scalability. We will see.

Post-Serenity innovation: improvements in privacy

So there's already been a lot of work done. For example, in Byzantium we activated the precompiles for elliptic curve operations and elliptic curve pairings, and Barry Whitehat has been doing great work on building layer-2 solutions for privacy-preserving coin transfers, voting, reputation systems — and all of this work could be carried over.

Cross-shard transactions.

Semi-private chains

So the idea here is that if you want to build some application where the data is kept private among a few users, you could still dump all the data on the public chain, but dump it in encrypted form — or you could dump hashes of it and then use zero-knowledge proofs. It's your choice.
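The "dump hashes of it" variant is just a hash commitment. A minimal sketch, using a salted SHA-256 as the commitment scheme (the data and application here are hypothetical):

```python
import hashlib, os

def commit(data: bytes, salt: bytes) -> str:
    """Publish only a salted hash of private data on the public chain;
    parties holding (data, salt) can later prove what was committed."""
    return hashlib.sha256(salt + data).hexdigest()

def verify(commitment: str, data: bytes, salt: bytes) -> bool:
    return commit(data, salt) == commitment

salt = os.urandom(16)
onchain = commit(b"private agreement between three users", salt)

# Outsiders see only the hash; the parties can always prove the contents.
print(verify(onchain, b"private agreement between three users", salt))  # True
print(verify(onchain, b"tampered agreement", salt))                     # False
```

The salt prevents outsiders from brute-forcing low-entropy data against the published hash; a zero-knowledge proof would go further, letting the parties prove properties of the committed data without revealing it at all.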

Proof-of-stake improvements

There is definitely a place in our hearts, and in the roadmap's heart, for Casper CBC — once it becomes clear that there is a version of it that makes sense from an overhead point of view.

Post-Serenity innovation

At some point, once we have everything we want, we have a door open to upgrade everything with STARKs: using STARKs for signature aggregation, for verifying erasure codes, for data availability checks, maybe eventually for checking correctness of state execution; maybe stronger forms of cross-shard transactions; faster single confirmations — getting the confirmation time down from 8 seconds to even lower.

Medium-term goals

Eventually, stabilize at least the functionality of layer 1; think about issuance; think about fees; agree more and more over time on what specific guarantees people expect from the protocol, and what features people expect the protocol to have for a long time; think about governance.

Now what’s next immediately?

What happens before the big launch? Well, first of all, stabilizing the protocol spec. For those who have been watching github.com/ethereum/eth2.0-specs — the beacon chain spec, beacon-chain.md, something like that — this spec has been moving fairly quickly, but it will stabilize fairly soon. Then, continued development and testing: there are something like 8 implementations of the Ethereum 2.0 protocol happening now.

Cross-client testnets

So I believe Alfred made a statement that he hopes to see cross-client work really picking up in Q1 next year. I mean, we'll see. It would definitely be nice to see a testnet working between two implementations.

Honestly, it would be nice to see a testnet working with even one implementation. As a quick historical aside: Ethereum 1.0's development time, between the conception of the white paper and launch, was 19 months. Part of the reason it took so long is that we tried to get cross-client compatibility way before the spec was even finished. So we had to agree, test, release a testnet, wait for protocol changes; agree, test, release a testnet, wait for more protocol changes. We went through about five cycles of this.

This time around, we have the luxury of learning from that lesson, and we don't really need to focus on cross-client compatibility until we have something close to a release candidate of the spec. And I think we're actually not that far from a release candidate — at least for the limited portions that don't include state execution. So we'll see.

Security audits

Who here thinks security audits are important? Who here thinks security audits are not important? Who here thinks the world is literally controlled by Lizardmen? Okay — more people for the third than the second; that's good to hear. And once we're done with that: launch. Who here thinks that launching is not important? Okay. Who here thinks that your favorite political candidate is literally a Lizardman?

Okay. So, launch. There we go. That's basically the milestone that we've all been waiting for, that we've been working toward for the last four to five years — a milestone which is really no longer so far away. Thank you.

Speaker: Vitalik Buterin, founder of Ethereum

Link of the speech video:

Tencent link: https://v.qq.com/x/page/v07911ymk3z.html

Youtube link: https://www.youtube.com/watch?v=Km9BaxRm1wA

Edited by: Jhonny & Echo

【The copyright of this article belongs to Unitimes. Please contact us at editor@unitimes.io or add WeChat unitimes2017 if you want to repost the article, and please indicate the source for any authorized repost. Opinions expressed by contributors belong to themselves.】