Some time ago, Tesla CEO Elon Musk tweeted that Dogecoin could ideally speed up block confirmation time by 10x, increase block size by 10x, and reduce transaction fees by 100x, and would then easily win. The statement drew criticism from many KOLs in the crypto industry. Ethereum co-founder Vitalik Buterin has now written an article in response, arguing that simply cranking up a blockchain network's parameters causes more trouble than it solves, and laying out the problems and limitations that any attempt to improve blockchain performance must face. Chain Catcher has translated the article, with deletions that do not affect the original intent.

Just how far can you push the scalability of a blockchain? Can you really, as Musk hopes, "speed up block confirmation time by 10x, increase block size by 10x and reduce transaction fees by 100x" without leading to extreme centralization and compromising the fundamental properties that make a blockchain what it is? If not, how far can you go? What if you change the consensus algorithm? More importantly, what happens if you change the technology to introduce features like ZK-SNARKs or sharding? It turns out that, sharded or not, there are important and rather subtle technical factors that limit the scalability of blockchains. In many cases these limitations have solutions, but even with the solutions, there are limits. This article will explore these questions.

It's 2:35 in the morning and you receive an urgent call from your partner on the other side of the world who helps manage your mining pool (or possibly staking pool). Starting about 14 minutes ago, your partner tells you, your pool and a few others split off from the chain, which still carries 79% of the network. According to your node, the majority chain's blocks are invalid. Here's the balance error: the key block appears to have mistakenly assigned 4.5 million extra coins to an unknown address.

An hour later, you are in a Telegram chat with the two other small pools, and you finally see someone paste a link to a tweet. The tweet begins with "Announcing new on-chain sustainable protocol development fund."

By morning, arguments are everywhere on Twitter and the community forums. But by then a significant portion of the 4.5 million coins has been converted on-chain into other assets, and billions of dollars of DeFi transactions have taken place. 79% of the consensus nodes, along with all the major blockchain explorers and light-wallet endpoints, are following the new chain. Perhaps the new developer fund will fund some development, or perhaps it will all be swallowed up by the leading exchanges. But whatever the outcome, the fund is, for all intents and purposes, a fait accompli, and ordinary users have no way to fight back.

Can this happen on your blockchain?
The elite of your blockchain community, including mining pools, block explorers, and custodial nodes, is probably quite well coordinated; most likely they are all in the same Telegram channels and WeChat groups. If they really want to make a sudden change to the protocol rules to further their own interests, they probably can. The only reliable way to neutralize such a coordinated social attack is passive defense by the one constituency that actually is decentralized: the users.

Imagine how the story would play out if users were running nodes that validate the blockchain and automatically reject blocks that break the protocol rules, even if more than 90% of miners or stakers support them. If every user runs a validating node, the attack fails quickly: a few mining pools and exchanges fork off and look rather foolish in the process. If only some users run validating nodes, the attack does not give the attacker a clean victory; instead, it leads to chaos, with different users seeing different views of the blockchain. At the very least, the ensuing market panic and likely persistent chain split greatly reduce the attacker's profits. The mere thought of navigating such a protracted conflict is itself a deterrent to most attacks.

If you have a community of 37 node runners and 80,000 passive listeners who only check signatures and block headers, the attacker wins. If everyone in your community runs a node, the attacker loses. We don't know the exact threshold of herd immunity against coordinated attacks, but one thing is absolutely clear: more nodes good, fewer nodes bad, and we definitely need more than a few dozen or a few hundred.

To maximize the number of users who can run a node, we focus on regular consumer hardware. There are three key constraints on a full node's ability to process a large volume of transactions:

- Computing power: what percentage of a CPU can we safely demand to run a node?
- Bandwidth: given the realities of current internet connections, how many bytes can a block contain?
- Storage: how many GB of disk can we ask users to store? Also, how fast must it be to read? (i.e. is a hard drive enough, or do we need an SSD?)

Many mistaken views about how far a blockchain can scale with "simple" techniques stem from overly optimistic answers to these questions. We can look at these three factors one by one:

1) Computing power

Wrong answer: 100% of CPU power can be spent on block verification.
Correct answer: about 5-10% of CPU power is available for block verification.

There are four main reasons the limit is so low:

- We need a safety margin to cover the possibility of DoS attacks (transactions crafted by an attacker to exploit weaknesses in the code take far longer to process than regular transactions);
- Nodes need to be able to sync the blockchain after being offline: if I drop off the network for a minute, I should be able to catch up within a few seconds;
- Running a node should not drain the battery quickly or slow down all other applications;
- Nodes also have other non-block-production work to do, mostly around verifying and responding to incoming transactions and requests on the p2p network.

Note that, until recently, most explanations of "why only 5-10%?" focused on a different problem: because PoW blocks arrive at random times, long block verification times increase the risk that multiple blocks get created at the same time. There are many fixes for this problem (e.g. Bitcoin NG, or simply using proof-of-stake). But these fixes do not address the other four reasons, so they do not deliver the huge scalability gains many originally expected.

Parallelism is also not a panacea. In general, even seemingly single-threaded blockchain clients are already parallelized: signatures can be verified by one thread while execution happens on other threads, and a separate thread handles transaction-pool logic in the background. And the closer all threads get to 100% utilization, the more energy running a node consumes and the lower the safety margin against DoS.
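To make the 5-10% rule concrete, here is a rough back-of-the-envelope sketch in Python. The verification throughput, block time, and safety fraction are illustrative assumptions, not measurements of any real client:

```python
# Rough sketch of the 5-10% rule. All numbers are illustrative assumptions,
# not measurements of any real client.

FULL_CPU_GAS_PER_SEC = 50_000_000  # hypothetical gas/sec a node verifies at 100% CPU
BLOCK_TIME_SEC = 12                # assumed block interval
SAFE_CPU_FRACTION = 0.05           # only ~5-10% of CPU is safely usable

# Naive limit: what a block "could" contain if verification used the whole CPU.
naive_gas_per_block = FULL_CPU_GAS_PER_SEC * BLOCK_TIME_SEC

# Safe limit: leave headroom for DoS margins, catching up after downtime,
# battery life / other applications, and p2p housekeeping.
safe_gas_per_block = int(naive_gas_per_block * SAFE_CPU_FRACTION)

print(f"naive gas limit: {naive_gas_per_block:,}")  # 600,000,000
print(f"safe gas limit:  {safe_gas_per_block:,}")   # 30,000,000: 20x smaller
```

With these assumed numbers, the safe per-block budget is an order of magnitude or two below what raw CPU throughput would suggest, which is the whole point of the "wrong answer / correct answer" contrast above.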
2) Bandwidth

Wrong answer: if we have 10 MB blocks every 2-3 seconds, and most users have network speeds above 10 MB/s, then of course they can handle it.
Correct answer: maybe we can handle 1-5 MB blocks every 12 seconds, though it is difficult.

These days we often hear advertised statistics for how much bandwidth an internet connection can provide: figures of 100 Mbps or even 1 Gbps are common. However, there is a large discrepancy between advertised bandwidth and actual usable bandwidth, for several reasons:

- "Mbps" means "millions of bits per second", and a bit is 1/8 of a byte, so you need to divide the advertised number of bits by 8 to get the number of bytes;
- Like all companies, internet providers often lie;
- There are always multiple applications sharing the same internet connection, so a node cannot hog the entire bandwidth;
- p2p networks inevitably introduce their own overhead: nodes often download and re-upload the same block multiple times (not to mention transactions broadcast through the mempool before being included in a block).

When Starkware ran an experiment in 2019, publishing 500 kB blocks for the first time after the reduction in transaction data gas costs made this possible, several nodes were actually unable to process blocks of that size. The ability to handle large blocks has improved since then and will continue to improve. But no matter what we do, we are still far from being able to naively take the average bandwidth in MB/s, convince ourselves that 1-second latency is acceptable, and have blocks of that size.
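The same kind of estimate shows why the advertised number is so misleading. In this sketch the discount factors are assumptions chosen only to illustrate the direction of each effect listed above:

```python
# Why "100 Mbps" does not translate into 100 MB of blocks per second.
# The discount factors below are illustrative assumptions.

advertised_mbps = 100
bytes_per_sec = advertised_mbps * 1_000_000 / 8  # bits -> bytes: 12.5 MB/s

isp_honesty = 0.5     # providers often overstate real-world throughput
sharing = 0.5         # other applications share the same connection
p2p_overhead = 0.25   # blocks/txs get downloaded and re-uploaded several times

usable_bytes_per_sec = bytes_per_sec * isp_honesty * sharing * p2p_overhead
block_time_sec = 12

print(f"usable p2p budget: {usable_bytes_per_sec / 1e6:.2f} MB/s")  # ~0.78 MB/s
per_block = usable_bytes_per_sec * block_time_sec / 1e6
print(f"per 12 s block:    {per_block:.1f} MB")  # ~9.4 MB ceiling, before safety margins
```

Even under these generous assumptions, a "100 Mbps" connection supports only single-digit megabytes per 12-second block, which is why the correct answer above is 1-5 MB rather than 10 MB every 2-3 seconds.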
3) Storage

Wrong answer: 10 TB.
Correct answer: 512 GB.

As you can probably guess, the main argument here is the same as elsewhere: the difference between theory and practice. In theory, you can buy an 8 TB SSD on Amazon. In practice, the laptop I am using to write this post has 512 GB, and if you make people buy their own hardware, many of them will get lazy (or cannot afford $800 for an 8 TB SSD) and use a centralized provider instead. And even if you can get a blockchain node running on some spare disk, a high level of activity can easily burn through the disk quickly, forcing you to keep buying new ones.

In addition, storage size determines the time it takes for a new node to come online and start participating in the network: any data that existing nodes must store is data that new nodes must download. This initial sync time (and bandwidth) is also a major hurdle for users running nodes. At the time of writing this post, syncing a new geth node took me about 15 hours. Today, running an Ethereum node has already become a challenge for many users.

So we are hitting a bottleneck, and storage size is core developers' biggest concern. Hence, at present, efforts to address the computation and data bottlenecks, and even changes to the consensus algorithm, are unlikely to yield large gas-limit increases. Even solving Ethereum's biggest outstanding DoS vulnerability would only raise the gas limit by 20%.

The only solutions to the storage-size problem are statelessness and state expiry. Statelessness allows a class of nodes to verify the chain without maintaining permanent storage. State expiry pushes out state that has not been recently accessed, forcing users to manually provide proofs to renew it. Both of these paths have been worked on for a long time, and proof-of-concept implementations of statelessness have already begun. Combined, these two improvements can greatly ease these concerns and open up room for a substantial gas-limit increase. But even after statelessness and state expiry are implemented, the gas limit can probably only be safely increased by about 3x before other limits start to dominate.

Sharding fundamentally gets around the above limitations, because it decouples the data contained on the blockchain from what a single node needs to process and store. Instead of nodes verifying blocks by personally downloading and executing them, they use advanced mathematical and cryptographic techniques to verify blocks indirectly. As a result, sharded blockchains can safely have a level of transaction throughput that non-sharded blockchains cannot. It does take a lot of cryptographic ingenuity to create efficient, simple full verification that successfully rejects invalid blocks, but it can be done: the theory is well established, and proof-of-concepts based on draft specifications are already underway.

Ethereum plans to use quadratic sharding, where total scalability is limited by the fact that a node must be able to process both a single shard and the beacon chain, which has to perform a fixed amount of management work for each shard. If shards are too big, nodes can no longer process individual shards; if there are too many shards, nodes can no longer process the beacon chain. The product of these two constraints forms the upper bound.
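As a rough sketch of that upper bound (with a made-up node-capacity figure): if a node can do c units of work per slot, the beacon chain's per-shard bookkeeping caps the shard count at about c, and each shard's throughput is also capped at about c, so total capacity grows as c squared:

```python
# Sketch of the quadratic-sharding bound. `c` is a hypothetical measure of
# how much work one node can do per slot; the numbers are illustrative.

c = 1_000                  # node capacity: units of work per slot
beacon_work_per_shard = 1  # management work the beacon chain adds per shard

max_shards = c // beacon_work_per_shard  # more shards -> beacon chain too heavy
max_txs_per_shard = c                    # bigger shards -> one shard too heavy

total = max_shards * max_txs_per_shard   # product of the two constraints: O(c^2)
print(f"total capacity: {total:,} txs/slot")  # 1,000,000 with these assumptions
```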
It is conceivable to go further with cubic sharding, or even exponential sharding. In such a design, data availability sampling would certainly become much more complex, but it can be done. However, Ethereum will not go further than quadratic, because the extra scalability gains from sharding-of-shards cannot actually be realized without other risks becoming unacceptably high.

So what are these risks?

1) Minimum number of users

Conceivably, a non-sharded blockchain can keep running as long as there is even one user willing to participate.