
Ethereum Foundation’s 13th AMA Highlights: Native Rollup, Blob Fee Model, DA Value Capture, etc.


Compiled by GaryMa, Wu Blockchain

The Ethereum Foundation research team held its 13th AMA on the Reddit forum on February 25, 2025. Community members left comments and questions in the post, and research team members answered them, covering topics such as the EXECUTE precompile, native Rollups, the Blob fee model, DA value capture, the endgame of block construction, reflections on the L2 strategy, the Verge, VDFs, encrypted mempools, and academic funding. Wu Blockchain's summary and compilation of the relevant questions and technical points from this AMA follows:

Question 1: Native Rollups and the EXECUTE Precompile

Question:

You may have seen Martin Köppelmann's talk proposing the concept of "native Rollups", similar to the "execution shards" we envisioned years ago.

In addition, Justin Drake has proposed a "native Rollup" design that would integrate some L2 functionality into the consensus layer.

This is important to me because current L2s cannot provide what I expect from Ethereum; for example, they have issues such as administrator backdoors. Nor do I see them solving these problems in the future, because if they cannot be upgraded they will sooner or later become obsolete. How are these proposals progressing? Is there community consensus on these ideas, or is the prevailing view that Rollups should remain organizationally independent from Ethereum? Are there any other related proposals?

Answer (Justin Drake-Ethereum Foundation):  

To avoid confusion, I suggest calling Martin's proposal simply "execution sharding", a concept that has been around for nearly a decade. The main difference between execution sharding and native Rollups is flexibility. Execution sharding gives you a single preset template chain, such as a complete replica of the L1 EVM, with a fixed number of shards typically created top-down through a hard fork. Native Rollups are customizable chains with flexible sequencing, data availability, governance, bridging, and fee settings, created bottom-up and permissionlessly through a programmable precompile. I think native Rollups better fit Ethereum's spirit of programmability.

We need to give EVM-equivalent L2s a path to shed their security councils while retaining full L1 security and EVM equivalence across L1 hard forks. Execution sharding is too inflexible to meet the needs of existing L2s. Native Rollups may open up new design space by introducing precompiles like EXECUTE (and possibly an auxiliary DERIVE precompile to support derivation functions).

About “Community Consensus”:

The discussion of native Rollups is still in its early stages, but I have found it easy to pitch the concept to developers of EVM-equivalent Rollups. If a Rollup can choose to become "native", which is almost a free upgrade provided by L1, why not take it? Notably, founders of top Rollups such as Arbitrum, Base, Namechain, Optimism, Scroll, and Unichain expressed interest at the 17th sequencing call and on other occasions.

By contrast, I think promoting native Rollups is at least 10 times easier than promoting based Rollups. Based Rollups do not look like a free upgrade at first glance: they give up MEV revenue, and the 12-second block time can hurt user experience. In fact, with incentive-compatible sequencing and preconfirmation mechanisms they can offer a better experience, but that takes more time to explain and digest.

Technically, the EXECUTE precompile sets a gas limit and uses a dynamic pricing mechanism similar to EIP-1559 to prevent DoS attacks. For optimistic L2s this is not an issue, because EXECUTE is only called when a fraud proof is raised. For ZK (validity-proof) Rollups, data availability (DA) is more of a bottleneck than execution, because verifiers can check SNARKs cheaply while home network bandwidth is a fundamental limitation.
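
As a rough illustration of that pricing idea, here is a minimal Python sketch of EIP-1559-style metering applied to a precompile. The names and constants are hypothetical (no EXECUTE gas schedule has been specified); only the update rule follows the standard EIP-1559 pattern:

```python
# Hypothetical sketch of EIP-1559-style metering for an EXECUTE precompile.
# All names and constants below are illustrative, not from any spec.

EXECUTE_GAS_TARGET = 10_000_000   # assumed per-block target for EXECUTE gas
UPDATE_FRACTION = 8               # bounds each move to ~12.5%, as in EIP-1559

def next_execute_base_fee(prev_fee: int, gas_used: int) -> int:
    """Nudge the EXECUTE base fee toward equilibrium after each block."""
    delta = prev_fee * (gas_used - EXECUTE_GAS_TARGET) \
            // (EXECUTE_GAS_TARGET * UPDATE_FRACTION)
    return max(prev_fee + delta, 1)   # never fall below 1 wei
```

Sustained over-target usage compounds the fee upward until demand backs off, which is what makes this style of metering a DoS deterrent.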

Regarding the “status quo”:

Looking back at history, Vitalik proposed an EXECTX precompile in 2017, before the terms "native" or "Rollup" existed. It was too early then, but in 2025, amid the "native Rollup" wave, the idea of giving the EVM introspection has attracted renewed attention.

Regarding “Whether Rollup should be organizationally separated from Ethereum”:

An ideal endgame model treats native Rollups and based Rollups as smart contracts on L1, just cheaper: they enjoy L1's network effects and security while remaining scalable.

For example, ENS is currently an L1 smart contract. In the future, I expect Namechain to become a native and based appchain, essentially a scalable L1 smart contract. It can retain organizational independence (such as token economics and governance) while being deeply integrated into the Ethereum ecosystem.

Embedded questions:

Question: Many people would see execution sharding as an advantage, yet native L2s now seem to be the suboptimal choice, or rather the only choice, since enshrined execution sharding is not on offer as an option.

Answer (Justin Drake):

The EXECUTE precompile is more flexible and powerful than execution sharding. In fact, it can emulate execution sharding, but not the other way around. If someone wants an exact copy of the L1 EVM, native Rollups offer that option too.

Question: The problem I want solved is the need for a neutral, credible Rollup carrying the Ethereum brand; outsourcing that responsibility to corporate operators does not seem to meet the need.

Answer (Justin Drake):

This can be achieved with the EXECUTE precompile. As a preliminary idea, the Ethereum Foundation could use it to deploy 128 "shards".

Question: You said native L2s are customizable chains created bottom-up through a precompile, more in line with Ethereum's spirit of programmability; you also said EVM-equivalent L2s need a path to shed their security councils. But if the base layer does not enshrine sequencing, bridging, and some governance mechanism, can we really get rid of security councils? Failing to keep up with EVM changes simply means becoming obsolete. Under execution sharding we would solve these issues through hard-fork upgrades, benefiting from subjective governance. But if these chains are built on top, the base layer will not interfere with them; if a bug occurs, we will not risk a fork to rescue the application layer. Have the teams you contacted made it clear that if Ethereum ships EXECUTE, they will fully remove their security councils and become completely trustless?

Answer (Max Gillett):

The main reason security councils exist is that fraud-proof and validity-proof systems are so complex that even a single implementation bug in a verifier can be catastrophic. If this complex logic (at least for fraud proofs) is enshrined in L1 consensus, client diversity can reduce the risk, which is an important step toward removing security councils. I think that if the EXECUTE precompile is properly designed, most of the remaining "Rollup application logic" (bridging, messaging, and so on) can be easily audited to the standard of DeFi smart contracts, which typically do not need a security council.

Subjective governance is indeed a clean way to upgrade, but it is only practical when there is little competition among shards. Part of the point of programmable native Rollups is to let existing L2s keep experimenting with dimensions such as sequencing and governance, with the market ultimately deciding. I expect a spectrum of native Rollups, from zero-governance community-deployed versions (tracking the L1 EVM as closely as possible) to versions with token governance and experimental precompiles.

Answer (Justin Drake):

Regarding "whether teams are committed to full trustlessness":

What I can be sure of is:

1. Many L2 teams want to achieve full trustlessness.

2. Mechanisms such as EXECUTE are necessary to achieve that goal.

3. For some applications (such as the minimal execution sharding Martin wants), EXECUTE alone is sufficient for full trustlessness.

These three points are enough to justify pursuing EXECUTE. Of course, EXECUTE may not be enough for certain L2s, which is why an auxiliary DERIVE precompile came up in early discussions.

Question 2: Optimizing the Blob Fee Model

Question:

The Blob fee model seems imperfect and overly simple: the minimum fee is only 1 wei (the smallest unit of ETH). Combined with the EIP-1559 pricing mechanism, if Blob capacity expands significantly we may not see Blob fees rise for a long time. That is not ideal. We want to encourage the use of blobs, but we also do not want the network to carry this data for free. Are there plans to adjust the Blob fee model? If so, how will it change? What alternatives or adjustments are being considered?

Answer (Vitalik Buterin):

I think the protocol should stay simple and avoid over-optimizing for short-term scenarios, while applying the same market logic uniformly to gas and blob gas. EIP-7706 is one main direction (another is adding a separate gas dimension for calldata).

I support introducing super-exponential base fee adjustment, an idea that has been proposed repeatedly in different contexts. If over-full blocks keep appearing, the fee grows at a super-exponential rate and quickly reaches a new equilibrium. With well-chosen parameters, almost any gas price spike can return to stability within minutes.
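
A toy model of the difference (illustrative only, not a concrete EIP): under EIP-1559 a full block moves the fee by at most 12.5%, while a super-exponential rule compounds the multiplier as the streak of full blocks grows:

```python
# Toy comparison: bounded EIP-1559 update vs. a super-exponential rule
# whose per-block multiplier grows while blocks stay full. Constants are
# illustrative, not proposed values.

def eip1559_step(fee: float, utilization: float) -> float:
    # utilization in [0, 2], 1.0 = at target; step bounded at +/-12.5%
    return fee * (1 + 0.125 * (utilization - 1.0))

def super_exponential_step(fee: float, full_streak: int) -> float:
    # the multiplier compounds with the streak of consecutive full blocks,
    # so total growth is ~1.125**(1+2+...+n): super-exponential in the streak
    return fee * (1.125 ** full_streak)

fee_a = fee_b = 1.0
for streak in range(1, 11):            # ten consecutive 100%-full blocks
    fee_a = eip1559_step(fee_a, 2.0)   # standard: x1.125 per block
    fee_b = super_exponential_step(fee_b, streak)
print(f"EIP-1559: {fee_a:.1f}x, super-exponential: {fee_b:.1f}x")
```

After ten full blocks the standard rule has repriced by only a few multiples, while the compounding rule has repriced by hundreds, which is the "reach a new equilibrium within minutes" property described above.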

Another independent idea is to raise the minimum Blob fee directly. This shortens peak usage periods (which helps network stability) and produces more consistent fee burn.

Answer (Ansgar Dietrich-Ethereum Foundation):  

Your concerns about the Blob fee model are legitimate, especially during this efficiency-focused phase. This is indeed a big part of the "L1 value accrual" question, but I want to focus on efficiency first.

During the development of EIP-4844 we discussed this issue and ultimately set the minimum fee to 1 wei as a "neutral value" for the initial implementation. Later observation showed that this does pose challenges for L2s during the transition from non-congested to congested. Max Resnick proposed an option in EIP-7762: keep the minimum fee very low in absolute terms during non-congested periods, but let the fee ramp up much faster when demand appears.
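
For reference, EIP-4844 computes the blob base fee as an exponential of the accumulated excess blob gas, floored at MIN_BASE_FEE_PER_BLOB_GAS; EIP-7762 would raise that floor parameter. The helper below follows the EIP-4844 specification:

```python
# Blob base fee computation per EIP-4844. While blob usage runs below
# target, excess_blob_gas stays at 0 and the fee sits at the floor, which
# is why fees can linger at 1 wei for long stretches.
MIN_BASE_FEE_PER_BLOB_GAS = 1           # the 1-wei floor discussed above
BLOB_BASE_FEE_UPDATE_FRACTION = 3338477

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    """Integer approximation of factor * e**(numerator / denominator)."""
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator

def get_base_fee_per_blob_gas(excess_blob_gas: int) -> int:
    return fake_exponential(MIN_BASE_FEE_PER_BLOB_GAS,
                            excess_blob_gas,
                            BLOB_BASE_FEE_UPDATE_FRACTION)
```

Raising the floor does not change the exponential dynamics; it only shortens the climb from idle pricing to meaningful price discovery.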

That proposal came late in the development of the Pectra fork, and implementing it could have delayed the fork. We discussed it at RollCall #9 (an L2 feedback forum) to gauge whether a delay was warranted. Feedback from L2s indicated it was no longer an urgent issue, so we decided to keep the status quo in Pectra. If ecosystem demand is strong, future forks may adjust it.

Answer (Barnabé Monnot-Ethereum Foundation):  

Thank you for your question. Indeed, pre-EIP-4844 research (done by u/dcrapis) showed that the transition from 1 wei to a reasonable market price can be problematic and can disrupt the market during congestion, which we see every time blobs become congested. Hence the proposal in EIP-7762 to raise the minimum Blob base fee.

However, even a 1 wei base fee does not mean blobs "ride for free" on the network. First, blobs usually pay priority fees to compensate block proposers. Second, to judge whether something is free, we have to ask whether blobs consume resources that are not reasonably priced. Some have argued that the extra reorg risk blobs introduce (affecting liveness) is uncompensated; I responded to that view on X.

I think the discussion should focus on compensating for liveness risk. Some people link the Blob base fee to value accrual because the base fee is burned (EIP-1559). If the base fee is low and less value accrues to the network, should the base fee be raised to draw more tax from L2? I think that is short-sighted: first, the network would have to define a "reasonable tax rate" (as in fiscal policy); second, I believe the growth of the Ethereum economy will bring more value, and unjustified increases in the cost of blobs (the raw material that grows that economy) will backfire.

Answer (Dankrad Feist-Ethereum Foundation):  

I want to be clear that concerns about blobs being too cheap are exaggerated and short-sighted. Over the next 2-3 years the crypto space may grow substantially. At this stage we should think as little as possible about fee extraction and pay more attention to long-term growth.

That said, I do not think Ethereum's current pure congestion-pricing resource model is ideal, either for price stability or for long-term ETH value accrual. Once Rollup usage stabilizes, a floor-price model that occasionally degenerates into congestion pricing would be better. In the short term, I also support a higher minimum price for blobs; that would be the better choice.

Answer (Justin Drake-Ethereum Foundation):  

Regarding “Whether to plan to redesign”:

Yes. EIP-7762 proposes raising the minimum base fee from 1 wei to a higher value, such as 2^25 wei.

Answer (Davide Crapis-Ethereum Foundation):  

I support raising the minimum base fee, which I mentioned in my original 4844 analysis, although core developers had objections at the time. The consensus now seems more inclined to think it would work. I think a minimum base fee (even a slightly low one) is meaningful and not short-sighted. Demand will grow in the future, but so will supply, and we may once again see the long stretches of rock-bottom Blob fees we saw over the past year.

More broadly, blobs also consume network bandwidth and mempool resources that are currently unpriced. We are studying upgrades that may optimize Blob pricing in this direction.

Embedded questions:

Question: I want to emphasize that this is not an attempt to extract maximum value from L2, an accusation that tends to come up whenever Blob pricing is questioned.

Answer:

Thanks for the clarification; exactly right. The point is not to maximize extraction, but to design a fee mechanism that encourages adoption while pricing resources fairly so that a functioning fee market can develop.

Question 3: DA and L1/L2 value capture

Question:

The expansion of L2 has significantly reduced value accrual to L1 (the Ethereum mainnet), which has affected the value of ETH. Beyond the claim that "Layer 2 will eventually burn more ETH and process more transactions," what specific plans do you have to address this problem?

Answer (Justin Drake-Ethereum Foundation):  

Blockchain revenue (whether L1 or L2) comes mainly from two components: congestion fees (i.e., "base fees") and contention fees (i.e., MEV, maximal extractable value).

Take contention fees first. As application and wallet design improves, I expect MEV to be captured increasingly upstream (by apps, wallets, or users) and eventually almost entirely by entities close to the source of order flow, with downstream infrastructure (L1 and L2) picking up only scraps. In the long run, chasing MEV may be futile for L1s and L2s.

Now congestion fees. Historically, L1's bottleneck has been EVM execution: the hardware requirements of consensus participants (such as disk I/O and state growth) capped execution gas. But once modern designs scale execution with SNARKs or fraud proofs, execution resources enter a "post-scarcity era" and the bottleneck shifts to data availability (DA). Because validators rely on limited home network bandwidth, DA is fundamentally scarce. Data availability sampling (DAS) provides only a roughly 100x linear scaling, unlike the near-unbounded scaling of SNARKs or fraud proofs.

So we focus on DA economics, which I see as L1's only sustainable source of income. EIP-4844 (which increased DA supply through blobs) has been live for less than a year. Blob demand has grown over time (largely induced demand), from an average of 1 blob per block to 2 and then 3. Now that supply is saturated, price discovery is just beginning, and low-value "junk" transactions are being squeezed out by more economically dense ones.

If DA supply stays flat for a few months, I expect hundreds of ETH per day to be burned through DA. But L1 is currently in "growth mode": the upcoming Pectra hard fork (expected in a few months) will raise the blob target from three to six. That will crush the Blob fee market, and demand will take months to catch up. Over the next few years, as full Danksharding rolls out, DA supply and demand will play cat and mouse.

In the long run, I think DA demand will exceed supply. Supply is limited by home network bandwidth, and the throughput of roughly 100 home network connections may not satisfy global demand; humans always find new ways to consume bandwidth. I expect Ethereum to stabilize around 10 million TPS within the next 10 years (about 100 transactions per person per day); even at just US$0.001 per transaction, that is roughly US$1 billion of revenue per day.
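
A quick back-of-the-envelope check of those figures, using the numbers quoted above and assuming a world population of about 8 billion:

```python
# Sanity-checking the quoted throughput and revenue figures.
tps = 10_000_000              # projected long-run throughput
fee_usd = 0.001               # assumed average fee per transaction
daily_tx = tps * 86_400       # transactions per day
per_person = daily_tx / 8e9   # assuming ~8 billion people
revenue = daily_tx * fee_usd
print(f"{per_person:.0f} tx/person/day, ${revenue / 1e9:.2f}B/day")
# ~108 tx/person/day and ~$0.86B/day, i.e. roughly $1B of revenue per day
```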

Of course, DA revenue is only part of ETH value accrual. Circulating supply and monetary premium are also critical. I suggest checking out my 2022 Devcon talk.

Embedded questions:

Question: You said, "If DA supply stays flat for a few months, hundreds of ETH will be burned through DA every day." What is this prediction based on? Data from the past four months of saturated blob targets do not seem to support such growth in paying demand. How do you infer from these data that "high-paying demand" will rise significantly within a few months?

Answer (Justin Drake):

My rough model is that "real" economic transactions (such as users trading tokens) can afford small fees, say $0.01 per transaction. My guess is that many "junk" (bot-generated) transactions are now being displaced by real ones. Once real transaction demand exceeds DA supply, price discovery kicks in.

Answer (Vitalik Buterin):

Many L2s currently either use off-chain DA or delay launching, because using on-chain DA as planned would single-handedly fill the Blob space and send fees soaring. L1 transactions are daily decisions made by many small participants, while L2 blob-space usage reflects long-term decisions by a few large participants, so you cannot simply extrapolate from the daily market. I think that even if Blob capacity increases significantly, there is a good chance of substantial demand willing to pay a reasonable fee.

Question: 10 million TPS? This seems unrealistic. Can you explain how it is possible?

Answer (Justin Drake):

I recommend watching my 2022 Devcon talk.

Simply put, the factors compound (multiplied out in the sketch after this list):

● L1 raw throughput: 10 TPS

● Rollups: 100x

● Danksharding: 100x

● Nielsen's Law (10 years): 100x
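
Multiplied out, these factors give the headline figure:

```python
# Compounding the four factors quoted above.
l1_tps = 10
rollups = danksharding = nielsens_law = 100
print(l1_tps * rollups * danksharding * nielsens_law)  # 10,000,000 TPS
```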

Question: I believe the supply side can do it, but what about the demand side?

Answer (Dankrad Feist-Ethereum Foundation):  

All blockchains have a value-accrual problem, and there are no perfect answers. If Visa charged a fixed fee per transaction regardless of amount, its revenue would collapse, yet that is the status quo in blockchains. The execution layer is slightly better off than the data layer: it can extract priority fees that reflect urgency, while the data layer has only flat fees.

My suggestion is to create value first; without value creation there is nothing to accrue. To that end, we should maximize the Ethereum data layer so that DA alternatives become unnecessary, scale L1 so that high-value applications can run on L1, and encourage projects like EigenLayer that expand the use of ETH as (non-financial) collateral. (Purely financial collateral is harder to scale and could exacerbate death-spiral risk.)

Question: Is it not contradictory to "encourage EigenLayer" and "make alternative DA unnecessary"? If DA is the only sustainable source of revenue, does supporting EigenLayer not risk ceding that potential 10 million TPS, or $1 billion a day, of revenue to EIGEN stakers? As a solo validator and EigenLayer operator, this feels like inviting in a Trojan horse, which seems contradictory.

Answer (Dankrad Feist):

I see EigenLayer more as a decentralized insurance product backed by ETH (EigenDA is just one use of it). I hope Ethereum DA scales enough to make EigenDA unnecessary for financial use cases.

Justin may be wrong in thinking DA will be Ethereum's main source of revenue. Ethereum has something more valuable: a highly liquid execution layer, of which DA is only a small part (though useful for white-label Ethereum and high-scale applications). DA has a moat, but it prices far lower than execution, so it needs to offer much more scale.

Answer (Justin Drake):

Haha, Dankrad and I have been arguing about this for years. I think the execution layer is not defensible: MEV will be captured by applications, and SNARKs will make execution no longer a bottleneck. Time will tell.

Answer (Dankrad Feist):

SNARKs do not change this. Synchronous state access is the foundation of both the value and the limits of the execution layer, and SNARKs have nothing to do with where execution happens. I am not saying DA has no value accrual, but the pricing power of the execution layer and of DA may be 2-3 orders of magnitude apart. What can charge high prices is DA bundled with sequencing, not generic DA.

Answer (Justin Drake):

You believe "contention" (state-access limits or ordering constraints) is valuable. I agree it has value, but I do not think it will pay off for L1s or L2s in the long run. Applications, wallets, and users close to the source of order flow will recapture contention value.

L1 DA has no substitute for applications that want top-tier security and composability. EigenDA is the "best-fit" alternative DA, often used as a "spillover" option for high-volume, low-value applications such as games.

Question 4: The Endgame of Block Construction

Question:

How will Ethereum's endgame block construction work? Justin's proposed trusted gateway model looks like a centralized sequencer and may not be compatible with the APS + ePBS (attester-proposer separation plus enshrined proposer-builder separation) that we expect. The current FOCIL (fork-choice-enforced inclusion list) design is not suited to MEV-carrying transactions, so block construction seems to favor non-financial applications on L1, potentially pushing applications to run on fast, centrally sequenced L2s.

More deeply: can we design a sequencing system that is efficient without maximizing MEV extraction on L1? Do all efficient, low-extraction designs require a principal agent (such as a centralized sequencer or a preconfirmation gateway)? Are multiple-concurrent-proposer (MCP) designs like BRAID still being explored?

Answer (Justin Drake-Ethereum Foundation):  

I do not quite follow what you mean, so let me clarify a few points:

1. APS (attester-proposer separation) and ePBS (enshrined proposer-builder separation) are different design areas; this is probably the first time I have seen them combined as "APS ePBS".

2. A gateway, as I understand it, is similar to a "preconfirmation relay." Just as ePBS eliminates the relay middleman, APS eliminates the need for gateways: under APS, a sufficiently professional L1 execution proposer can offer preconfirmations directly without delegating to a gateway.

3. Saying "gateways are incompatible with APS" is like saying "relays are incompatible with ePBS": eliminating the middleman is the whole point! Gateways are just a temporary complication until APS arrives.

4. Even before APS, I do not see why gateways are equated with centralized sequencing. Centralized sequencing is permissioned, while the gateway market (and the set of L1 proposers delegating to gateways) is permissionless. Is the objection that only a single gateway sequences per slot? By that logic L1 is also centralized sequencing, since each slot has a single proposer. The core of decentralized sequencing is rotating ephemeral sequencers drawn from a permissionless set.

I think MCP (multiple concurrent proposers) is a suboptimal design, for several reasons: it introduces centralizing multi-block games, complicates fee handling, and requires complex infrastructure (such as VDFs, verifiable delay functions) to prevent last-moment bidding games.

If MCP is as good as Max Resnick says, we will see results on Solana soon: Max now works full time on Solana, Anatoly also backs MCP for latency reduction, and Solana Iterates Fast™. Incidentally, L2s can experiment with MCP permissionlessly, and I would be happy to see it. But when Max was at ConsenSys (the company behind MetaMask), he could not persuade its in-house L2, Linea, to switch to MCP.

Answer (Barnabé Monnot-Ethereum Foundation):  

I want to offer an alternative perspective on the endgame. My preliminary roadmap, which is already a big challenge, is as follows:

● Deploy FOCIL to ensure censorship resistance and begin decoupling scaling limits from local block-building limits.

● Deploy SSF (single-slot finality) as soon as possible and shorten slot times as much as possible. This requires deploying Orbit so that the validator set size is consistent with the SSF and slot-time goals.

Meanwhile, I believe application-level improvements (such as BuilderNet, the various Rollups, and based Rollups) can keep block building innovative and support new applications.

At the same time, we should seriously study different architectures for L1 block building, including BRAID. Perhaps the endgame will never be decided, who knows. But after FOCIL and SSF/shorter slots are deployed, the next step will be more "based".

Question 5: Do you regret focusing on L2?

Question:

Do you regret focusing on L2?

Answer (Ansgar Dietrich-Ethereum Foundation):  

My view is that Ethereum's strategy has always been to pursue principled architectural solutions. In the long run, Rollups are the only principled way to scale a blockchain into the foundation of the global economy. A monolithic chain requires every participant to verify everything, while Rollups dramatically reduce the verification burden through "execution compression." Only the latter can scale to billions of users (and possibly AI agents).

In hindsight, I feel we paid too little attention to the path toward the end state and to the interim user experience. Even in a Rollup-dominated world, L1 still needs to scale significantly, as Vitalik recently noted. We should have realized that promoting L2 while continuing to scale L1 would deliver more value to users during the transition.

I think Ethereum long lacked real rivals and grew a little complacent. Fierce competition is now exposing those misjudgments and pushing us to deliver better "products," not just theoretically correct solutions.

But to reiterate, Rollups are crucial to the scaling endgame. The specific architecture is still evolving (Justin's exploration of native Rollups, for example, shows the approach is still being tuned), but the general direction is clearly right.

Answer (Dankrad Feist-Ethereum Foundation):  

I disagree in some respects. If a Rollup is defined as "scaled DA plus verified execution," how is it different from execution sharding?

In practice, we treat Rollups more as "white-label Ethereum." To be fair, this has unleashed a great deal of energy and money. Had we focused only on sharding in 2020, we would not have today's progress in zkEVM and interoperability research.

Technically, we can now build anything: a highly scaled L1, an extremely scaled sharded chain, or a base layer for Rollups. The best path for Ethereum is to combine the first and the third.

Question 6: ETH economic security risks

Question:

If the dollar price of ETH falls below a certain level, will it threaten the economic security of Ethereum?

Answer (Justin Drake-Ethereum Foundation):  

If we want Ethereum to withstand attacks, including attacks at the nation-state level, high economic security is crucial. Currently, Ethereum has approximately US$80 billion of slashable economic security (33,644,183 ETH staked at roughly US$2,385 per ETH), the highest of any blockchain. By comparison, Bitcoin has only about US$10 billion of (non-slashable) economic security.
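
The quoted figure checks out arithmetically:

```python
# Verifying the quoted economic-security figure.
staked_eth = 33_644_183
usd_per_eth = 2_385
print(f"${staked_eth * usd_per_eth / 1e9:.1f}B slashable")  # about $80.2B
```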

Question 7: Mainnet Scaling and Fee Reduction Plans

Question:

In the next few years, what plans does the Ethereum Foundation have to improve the scalability of the main network and reduce transaction fees?

Answer (Vitalik Buterin):

1. Scale L2: add more blobs (for example, PeerDAS in Fusaka) to further increase data capacity.

2. Improve interoperability and user experience: better interaction across L2s, such as the recent Open Intents Framework.

3. Moderately increase the L1 Gas limit.

Question 8: L1/L2 Collaboration in Future Application Scenarios

Question:

What applications and use cases do you envision for Ethereum over the following time horizons:

● Short-term (1 year)

● Medium term (1 – 3 years)

● Long term (4+ years)

And how do L1 and L2 activity work together over these horizons?

Answer (Ansgar Dietrich-Ethereum Foundation):  

This is a broad question; I will offer some observations focused on overall trends:

● Short term (1 year): focus on stablecoins. Facing relatively few regulatory restrictions, they are already the pioneer real-world application. Smaller-scale cases such as Polymarket are also starting to gain traction.

● Medium term (1-3 years): expand to more real-world assets (such as stocks and bonds) interoperating seamlessly with DeFi modules, plus innovations such as on-chain business processes, governance, and prediction markets.

● Long term (4+ years): realize "real-world Ethereum" (DC Posch's vision), building real products for billions of users and AI agents, with crypto as an enabler rather than a selling point.

● L1/L2 relationship: the original vision of "L1 only for settlement and rebalancing" needs updating. L1 scaling remains important, L2 remains the main scaling vehicle, and the relationship will keep evolving in the coming months.

Answer (Carl Beekhuizen-Ethereum Foundation):  

We focus on scaling the entire stack rather than designing for specific applications. Ethereum's strength is that it stays neutral toward what runs in the EVM and provides the best platform for developers. The core theme is scale: how to build the most capable system while remaining decentralized and censorship-resistant.

● Short term (1 year): the focus is shipping PeerDAS to significantly raise the number of blobs per block, while also improving the EVM, for example by shipping EOF (EVM Object Format) as soon as possible. Research also continues on statelessness, gas repricing, zkEVMs, and more.

● Medium term (1-3 years): further expand Blob throughput and advance early-stage projects such as ethproofs.org and the zkEVM program.

● Long term (4+ years): scale the EVM substantially (L2s benefit too), greatly increase Blob throughput, improve censorship resistance through measures such as FOCIL, and use zero-knowledge technology for acceleration.

Question 9: Verge Choices and Hash Functions

Question:

Vitalik mentioned in a recent post about the Verge that we will soon face three options: (i) Verkle trees, (ii) STARK-friendly hash functions, and (iii) conservative hash functions. Has a direction been decided?

Answer (Vitalik Buterin):

This is still being hotly debated. My personal sense is that the mood has tilted slightly toward (ii) over the past few months, but nothing has been finalized.

I believe these options should be considered in the context of the overall roadmap. The realistic options are roughly:

● Option A:

● 2025: Pectra, possibly with EOF

● 2026: Verkle trees

● 2027: L1 execution optimizations (delayed execution, multidimensional Gas, repricing)

● Option B:

● 2025: Pectra, possibly with EOF

● 2026: L1 execution optimizations (delayed execution, multidimensional Gas, repricing)

● 2027: Initial Poseidon rollout (at first, encouraging only a small number of clients to run stateless, to limit risk)

● 2028: Gradually expand stateless clients

Option B is also compatible with conservative hash functions, but I would still prefer a gradual rollout. Even if the hash function is less risky than Poseidon, the proof system itself is still risky early on.

Answer (Justin Drake-Ethereum Foundation):  

As Vitalik said, the near-term choice is still under discussion. But from a long-term, fundamentals perspective, (ii) is clearly the direction, because (i) is not post-quantum secure and (iii) is inefficient.

Question 10: VDF progress

Question:

What is the latest on VDFs (verifiable delay functions)? I remember a 2024 paper that pointed out some fundamental issues.

Answer (Dmitry Khovratovich-Ethereum Foundation):  

We currently lack ideal VDF candidates. The situation may change as new models (for analysis) and new constructions (heuristic or not) develop. But at the current state of the art, we cannot confidently say that any scheme cannot be accelerated, say, by 5x. The consensus is therefore to shelve VDFs for now.

Question 11: Block Time and Finality Before SSF

Question:

From a protocol developer's perspective, would you rather gradually shorten block times, reduce time to finality, or keep both unchanged until single-slot finality (SSF) arrives?

Answer (Barnabé Monnot-Ethereum Foundation):  

I am not sure there is a worthwhile intermediate path between the status quo and SSF for shortening finality. I think shipping SSF is the best opportunity to shorten both finality latency and slot time. We could tweak the existing protocol, but if SSF can land in the short term, the effort spent on the current protocol may not be worth it.

Answer (Francesco D’Amato-Ethereum Foundation):  

Before SSF, we can certainly shorten the block time (say, to 6-9 seconds), but it is best to first check that this is compatible with SSF and other elements of the roadmap (such as ePBS). My current understanding is that it would be compatible, but that does not mean we should do it right away; the SSF design is not yet settled.

Question 12: FOCIL and Encrypted Mempools

Question:

Why not skip FOCIL (fork-choice-enforced inclusion lists) and go straight to encrypted mempools?

Answer (Justin Drake-Ethereum Foundation):  

Unfortunately, encrypted mempools are not sufficient to guarantee forced inclusion. This is already visible in BuilderNet, which runs on mainnet on TEEs (trusted execution environments): Flashbots, for example, filters OFAC-sanctioned transactions out of its BuilderNet blocks, and a TEE (which can see decrypted transaction content) can filter easily. More advanced mempools based on MPC (multi-party computation) or FHE (fully homomorphic encryption) have an analogous problem: sequencers can demand zero-knowledge proofs that exclude transactions they do not want to include.

More broadly, encrypted mempools and FOCIL are orthogonal and complementary. Encrypted mempools are about private inclusion, while FOCIL is about forced inclusion. They also live at different layers of the stack: FOCIL is enshrined L1 infrastructure, while encrypted mempools sit off-chain or at the application layer.

Answer (Julian Ma-Ethereum Foundation):  

Although FOCIL and encrypted mempools both aim to improve censorship resistance, they are complements, not substitutes, so FOCIL is not a stepping stone toward encrypted mempools. The main reason there is no encrypted mempool today is the lack of a satisfactory proposal, despite ongoing efforts; deploying one now would add honesty assumptions to Ethereum's liveness.

FOCIL should be deployed because it has a robust proposal, community confidence, and a relatively lightweight implementation. Combined, encrypted transactions inside FOCIL could limit the financial harm users suffer from reordering.
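
A heavily simplified sketch of the forced-inclusion idea (illustrative only; the actual FOCIL design uses a committee of inclusion-list proposers and more nuanced exemption rules):

```python
# Illustrative sketch, not the FOCIL spec: attesters reject a block that
# omits a listed, still-valid transaction while block gas remained.

def satisfies_inclusion_list(block_txs: set, inclusion_list: set,
                             gas_remaining: int) -> bool:
    missing = inclusion_list - block_txs
    # an omission is only excusable if the block genuinely ran out of gas
    return not missing or gas_remaining == 0
```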

Question 13: Staker Voting on Gas and Blob Limits

Question:

Will you let the number of blobs be voted on by stakers, as the gas limit is? Large players might collude to raise the limits and crowd out small home stakers with insufficient hardware or bandwidth, centralizing staking and undermining decentralization. Moreover, if such increases are unbounded, would opposing them through a hard fork become difficult? And if hardware and bandwidth requirements are determined by vote, what is the point of setting requirements at all? Stakers' interests may not align with the network as a whole; is a vote appropriate?

Answer (Vitalik Buterin):

I personally think it would be a good idea to (i) have blob counts voted on by stakers like the gas limit, and (ii) have clients coordinate more frequent updates of the default voting parameters. This is equivalent to "blob-parameter-only (BPO) forks" but more robust: if a client fails to upgrade in time or ships a bug, it does not cause a consensus failure. Many supporters of BPO forks actually mean this idea.
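
For context, the existing gas-limit vote works roughly as sketched below (a simplified rendering of execution-client behavior); the blob-count version Vitalik describes would nudge blob targets per block in the same style, with clients shipping coordinated default parameters:

```python
# Sketch of Ethereum's existing gas-limit "vote": each proposer may move
# the limit by at most 1/1024 of the parent limit toward its own target.
GAS_LIMIT_BOUND_DIVISOR = 1024

def next_gas_limit(parent_limit: int, proposer_target: int) -> int:
    max_step = parent_limit // GAS_LIMIT_BOUND_DIVISOR - 1
    if proposer_target > parent_limit:
        return min(proposer_target, parent_limit + max_step)
    return max(proposer_target, parent_limit - max_step)

# Example: moving from a 30M limit toward a 60M target takes many blocks.
limit = 30_000_000
for _ in range(3):
    limit = next_gas_limit(limit, 60_000_000)
print(limit)  # creeps up by <0.1% per block
```

Because each proposer can only nudge the value slightly, no single actor (or small colluding group within one slot) can jump the limit; sustained majority support is needed, which is the property the blob-vote proposal wants to inherit.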

Question 14: Fusaka and Glamsterdam upgrades

Question:

What features should the Fusaka and Glamsterdam upgrades include to significantly advance the roadmap?

Answer (Francesco D’Amato-Ethereum Foundation):  

As mentioned, Fusaka will significantly improve data availability (DA). I hope Glamsterdam makes a similar leap on the execution layer (EL), which has the most room for improvement (there is more than a year to settle the direction). The current repricing efforts could be the big Glamsterdam change, but they are not the only option.

In addition, FOCIL can be seen as a scalability EIP that better separates local block construction from validator requirements. Combined with its censorship-resistance goals and its reduced reliance on altruistic behavior, it will push Ethereum forward. These are my current priorities, but by no means all of them.

Answer (Barnabé Monnot-Ethereum Foundation):  

Fusaka focuses on PeerDAS, which is critical to L2 scaling, and few want other features to delay it. I hope Glamsterdam includes FOCIL and Orbit, paving the way for SSF.

The above leans toward the consensus layer (CL) and DA, but Glamsterdam should also include execution-layer (EL) work that significantly advances L1 scaling. Discussion of the specific feature set is ongoing.

Question 15: Forced decentralization of L2

Question:

Given the slow progress of L2 decentralization, could an EIP be used to "force" L2s to adopt Stage 1 or Stage 2 decentralization?

Answer (Vitalik Buterin):

Native Rollups (e.g., the EXECUTE precompile) achieve this to some extent. L2s would still be free to ignore it and code in their own backdoors, but they could instead use L1's enshrined, simple, high-security proof system. L2s pursuing EVM compatibility are likely to choose this option.

Question 16: The biggest risk to Ethereum’s survival

Question:

What are the biggest survival risks facing Ethereum?

Answer (Vitalik Buterin):

Super-intelligent AI could lead to a single entity controlling most of the world’s resources and power, making blockchain irrelevant.

Question 17: Impact of Alt-DA on ETH holders

Question:

Is alt-DA (DA outside the ETH mainnet) a bug or a feature for ETH holders in the short, medium, and long term?

Answer (Vitalik Buterin):

I still stubbornly hope for a dedicated R&D team to develop ideal Plasma-like designs, so that chains anchored to Ethereum L1 can still give users stronger (albeit imperfect) security when using alternative DA. There are many overlooked opportunities here that can increase user safety and would be valuable to DA teams.

Question 18: Future prospects for hardware wallets

Question:

What are your visions for the future of hardware wallets?

Answer (Justin Drake-Ethereum Foundation):  

In the future, most hardware wallets will be based on secure enclaves in phones rather than standalone devices like Ledger USB sticks. Account abstraction has made infrastructure such as passkeys usable. I hope to see native integrations (for example, in Apple Pay) within this decade.

Answer (Vitalik Buterin):

Hardware wallets need to be “truly safe” in several aspects:

1. Secure hardware: built on open-source, verifiable stacks (such as [IRIS](https://media.ccc.de/v/38c3-iris-non-destructive-inspection-of-silicon)) to reduce the risk of backdoors and side-channel attacks.

2. Interface security: display enough transaction information to prevent a compromised computer from tricking users into signing something unintended.

3. Ubiquity: ideally, a device that doubles as a crypto wallet and serves other security purposes, encouraging more people to obtain and use one.

Question 19: 2025 L1 Gas Limit Targets

Question:

What is the Gas Limit Target for L1 in 2025?

Answer (Toni Wahrstätter-Ethereum Foundation):  

Opinions vary on the gas limit, but the core question is: should we scale L1 by raising the gas limit, or should we focus on L2 and add blobs with technologies such as DAS?

Vitalik's recent blog post lays out the case for moderately scaling L1. But raising the gas limit involves trade-offs:

● State and history growth, increasing the burden on nodes

● Greater bandwidth requirements

The Rollup-centric vision, by contrast, aims to increase scalability without raising node requirements. PeerDAS (short term) and full DAS (medium to long term) will unlock significant capacity while keeping resource needs under control.

I would not be surprised if validators push the gas limit to 60 million after the Pectra hard fork (April). But in the long run, the focus of scaling will likely be DAS solutions rather than simply raising the gas limit.

Question 20: Beam client transition

Question:

If the Ethereum Beam client experiment (or whatever it gets renamed to) succeeds and several implementations are available in 2-3 years, will there be a phase in which the current PoS and Beam PoS run in parallel, both earning staking rewards, as in the PoW-to-PoS transition?

Answer (Vitalik Buterin):

I think we can do a direct in-place upgrade.

The reasons for running two chains during the Merge were:

● PoS was not yet fully battle-tested, and the ecosystem needed time to prepare for a safe switchover.

● PoW blocks can be reorged, so the switchover mechanism had to be robust.

PoS has finality, and most infrastructure (such as staking) carries over. A hard fork can switch the validation rules from the beacon chain to the new design. Finality might lapse briefly at the transition point, but that is an acceptable small price.

Answer (Justin Drake-Ethereum Foundation):  

I expect the upgrade from the Beacon Chain to Beam to be handled like a normal fork, with no need for a "Merge 2.0." A few thoughts:

1. Consensus participants (ETH stakers) are the same on both sides of the fork, unlike the Merge, which changed the participant set and risked miner interference.

2. The "clocks" on both sides of the fork are the same, unlike the probabilistic-slot-to-fixed-slot transition from PoW to PoS.

3. Infrastructure such as libp2p, SSZ, and slashing-protection databases is mature and reusable.

4. This time there is no rush to disable PoW to avoid extra issuance, so we can take our time on due diligence and quality assurance (running testnets multiple times) to ensure a smooth mainnet fork.

Question 21: 2025 Academic Grants Program

Question:

The Ethereum Foundation has launched a US$2 million academic grants program for 2025. Which research areas should be prioritized? And how will results be integrated into the Ethereum roadmap?

Answer (Fredrik Svantes-Ethereum Foundation):  

The protocol security team is interested in:

● P2P security: many vulnerabilities relate to network-layer DoS attacks (such as on libp2p or devp2p), and improvements in this area are valuable.

● Fuzzing: EVM and consensus-layer clients are well covered, but areas such as the networking layer can be explored in depth.

● Supply chain risk: understanding Ethereum's current dependency risks.

● LLM applications: how large language models can improve protocol security (such as code auditing and automated fuzzing).

Answer (Alexander Hicks-Ethereum Foundation):  

On integration: we keep at it by engaging academia, funding research, and participating in it. The Ethereum system is unique, and academic research does not always feed the roadmap directly (consensus protocols, for instance, are bespoke, and academic results are hard to transplant), but the impact is obvious in areas such as zero-knowledge proofs.

The academic grants program complements our internal and external research, and this round explores topics that are interesting but may not directly affect the roadmap. For example, I added formal verification and AI-related topics. The practical value of AI for Ethereum's needs still has to be proven, but I want to drive progress over the next year or two. This is a good opportunity to assess the state of the art and improve our methods, and it can also attract researchers from other fields who know little about Ethereum but are interested in it.

