With the new release, solx fixes solc’s notorious stack-too-deep failure without altering contract semantics: your contract behaves exactly as if compiled with solc, just cheaper and with fewer compilation failures. Since our first release in May, solx has also tightened the byte-code-size and compile-time gap while further reducing runtime gas consumption.
With that, we believe solx is ready for mainnet deployment of non-critical, well-tested contracts. We’re still raising the security bar, so please don’t rush into gas-saving migrations just yet.
solx ships two pipelines, default and `--via-ir`, the latter applying rewrites that can change semantics. solx leaves both untouched: `solx` (default) matches solc’s default semantics, and `solx --via-ir` mirrors `solc --via-ir`. If something looks off, double-check your Foundry or other configuration to ensure the expected pipeline is in use before filing a bug.
solx Beta eliminates the stack-too-deep error that has topped every Solidity-developer pain-point survey for years.
Like the `solc --via-ir` pipeline, solx resolves the issue by spilling selected stack values into a region of memory that starts right after `0x80` and is sized exactly to fit the spilled variables. The free-memory pointer (`0x40`) is then bumped so user-level allocations never collide with that region.
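The effect of that bump is directly observable from Solidity. A minimal sketch (the contract name is illustrative, and the exact offset under solx is an assumption following the layout described above, not taken from the solx sources):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract FreePointerProbe {
    // Returns where Solidity's allocator will place the next allocation.
    // Under solc this is 0x80 at function entry; under solx, when spilling
    // occurs, it would be 0x80 plus the compile-time-known spill-region size.
    function freeMemStart() external pure returns (uint256 p) {
        assembly ("memory-safe") {
            p := mload(0x40) // the free-memory pointer slot
        }
    }
}
```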
The approach carries the same inherent limits as `--via-ir`:
- The compiler can’t spill inside recursion, because it must know the total spill size at compile time.
- Memory-unsafe inline assembly disables spilling for the entire contract.
Where solx differs is in how it achieves the fix: solx plugs the spilling logic into the default compiler pipeline, so no semantic changes are introduced. Your contract behaves exactly as it would under the legacy pipeline—just without the stack error.
Heads-up on inline assembly
Whenever possible, prefer high-level Solidity; solx is steadily eroding the old performance edge of hand-rolled assembly.
If you do need assembly, be honest: mark only truly safe blocks with `assembly ("memory-safe")` (preferred, future-proof) or `/// @solidity memory-safe-assembly` (legacy form, kept only for old compilers). Mislabelling unsafe code lets both your logic and the spill logic write to the same slots, silently corrupting data. In doubtful cases, skip the annotation and let the compiler disable spilling rather than risk undefined behavior.
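As a sketch of that rule of thumb (a hypothetical contract, not from any audited codebase): annotate only blocks that stay inside scratch space or properly allocated memory, and leave anything doubtful unannotated.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract AssemblyHygiene {
    // Safe to annotate: only touches the scratch space (0x00–0x3f)
    // that Solidity reserves for short-lived uses such as hashing.
    function hashPair(bytes32 a, bytes32 b) internal pure returns (bytes32 h) {
        assembly ("memory-safe") {
            mstore(0x00, a)
            mstore(0x20, b)
            h := keccak256(0x00, 0x40)
        }
    }

    // NOT memory-safe: writes to a hard-coded offset that the compiler
    // (or the spill region) may already be using. Leaving it unannotated
    // lets the compiler disable spilling instead of corrupting data.
    function unsafeWrite(uint256 v) internal pure returns (uint256 r) {
        assembly {
            mstore(0x80, v)
            r := mload(0x80)
        }
    }
}
```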
We’ll publish an in-depth blog post on the algorithm later. For now remember: enjoy the automatic fix—just keep your assembly minimal, accurate, and genuinely memory-safe.
A proper benchmark that weights each function by real-world call frequency and uses representative inputs is still on the to-do list. For now we rely on raw `forge test --gas-report` data. Across 2,057 cases drawn from popular projects, solx consumes less gas than solc in 1,888 of them, about 92%. The advantage ranges from small fractions of a percent to double-digit cuts, depending on the code path. You can browse the full snapshot at https://matter-labs.github.io/solx/dashboard/. Pick any project from the drop-down to see a visual, test-by-test comparison of `forge test --gas-report` for solc vs. solx.
Recent compiler changes
- EVM constant folding (PR #835): `add`, `mulmod`, `exp`, `signextend`, and `byte` now fold to literals when inputs are known at compile time, removing whole instruction sequences in the legacy pipeline.
- Sharper branch lowering (PR #817): solx now emits fewer instructions to set a `JUMPI` condition.
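To make the constant-folding item concrete, here is an illustrative Python sketch of what folding those five opcodes means, using the standard EVM semantics for each. This is not solx’s actual code, just a model of the transformation:

```python
MASK = (1 << 256) - 1  # 256-bit word

def fold(op, *args):
    """Fold an EVM opcode with compile-time-known operands to a literal."""
    if op == "add":
        a, b = args
        return (a + b) & MASK                      # wraps mod 2^256
    if op == "mulmod":
        a, b, n = args
        return 0 if n == 0 else (a * b) % n        # full-width intermediate
    if op == "exp":
        a, b = args
        return pow(a, b, 1 << 256)                 # a^b mod 2^256
    if op == "signextend":
        k, x = args                                # extend sign from byte k
        if k >= 31:
            return x & MASK
        bit = 8 * (k + 1) - 1                      # sign bit position
        if x & (1 << bit):
            return (x | (MASK ^ ((1 << (bit + 1)) - 1))) & MASK
        return x & ((1 << (bit + 1)) - 1)
    if op == "byte":
        i, x = args                                # i-th byte, MSB first
        return 0 if i >= 32 else (x >> (8 * (31 - i))) & 0xFF
    raise ValueError(f"unsupported opcode: {op}")
```

When both operands of, say, `exp` are literals in the source, the compiler can replace the whole instruction sequence with the single folded constant.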
These wins were partly offset when we restored explicit local function calls and let LLVM decide what to inline (PR #1048). The refactor produced smaller byte-code and faster builds, but runtime gas ticked up slightly because LLVM’s default inliner is too cautious for the EVM cost model. We’re retuning those heuristics and expect to regain the lost gas efficiency.
Path forward
LLVM underpins both the default and `--via-ir` modes, so every EVM-specific optimization that lives only in `--via-ir` can migrate into the main LLVM path. Constant folding was the first step; more moves are underway, so the default pipeline will ultimately include all LLVM optimizations and every `--via-ir` improvement.
ERC-20 tokens (USDT, USDC, DAI, …) dominate Ethereum traffic, so we measured gas on the lean, audited Solady implementation. The same contract was compiled with solc 0.8.30 and with the current solx beta, then run through Foundry’s `forge test --gas-report` 200 times each (Solady’s harness injects randomness, so single runs are noisy). Average gas per call:
(Negative Δ means solx is cheaper.)
Why Solady?
It mirrors real-world ERC-20 patterns while staying easy to benchmark. Solady ships its own gas-benchmarking harness, making large-sample runs easy. OpenZeppelin’s implementation doesn’t include comparable tooling out of the box, so Solady was the quicker path; quick hand-written trials on OpenZeppelin actually showed larger gas cuts, making the table above a conservative lower bound on real-world savings.
Ethereum sees about 1.5 million mainnet transactions a day, and roughly 60% of them are simple ERC-20 transfers: close to 900,000 token moves every 24 hours. Shaving 255 gas off each call with solx (-0.56% versus solc) removes around 230 million gas daily. In the quieter 2025 market, where gas has hovered near 2.6 Gwei, that’s roughly 0.60 ETH saved every day, about $1,600, or $0.6 million a year at today’s $2,800/ETH. During 2024’s busier spell, with averages nearer 15 Gwei and frequent spikes, the same 230 million gas translated to about $10,000 per day, or $3–4 million over the year; slicing 0.56% off the $2.48 billion users paid in 2024 fees would have left roughly $14 million in their wallets. Even if only half of new tokens adopted solx, annual network-wide savings would still run well into seven figures, without touching a single line of Solidity.
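The back-of-the-envelope arithmetic behind those figures can be reproduced directly (the inputs are the estimates quoted above, not measurements of our own):

```python
# Inputs, as estimated in the text above.
TRANSFERS_PER_DAY = 900_000       # ~60% of ~1.5M daily mainnet txs
GAS_SAVED_PER_CALL = 255          # solx vs. solc on Solady's transfer
GWEI = 1e-9                       # 1 Gwei in ETH

daily_gas = TRANSFERS_PER_DAY * GAS_SAVED_PER_CALL      # ~230 million gas/day

# 2025 conditions: ~2.6 Gwei gas price, ~$2,800/ETH.
daily_eth_2025 = daily_gas * 2.6 * GWEI                 # ~0.60 ETH/day
daily_usd_2025 = daily_eth_2025 * 2_800                 # ~$1,600/day
yearly_usd_2025 = daily_usd_2025 * 365                  # ~$0.6M/year

# 2024 conditions: averages nearer 15 Gwei (dollar figure then depends
# on the ETH price assumed for that period).
daily_eth_2024 = daily_gas * 15 * GWEI                  # ~3.4 ETH/day
```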
Please don’t rush critical contracts to mainnet.
solx already cuts gas and removes stack-too-deep, but we’re still hardening security. Treat these numbers as a preview of what full-scale adoption could unlock; test thoroughly before deploying anything mission-critical.
When we first spot-checked solx binaries earlier this year they were a little over 20% larger than solc’s. In the current beta that overhead has fallen to about 6%. The code-size dashboard (https://matter-labs.github.io/solx/codesize/0.1.0/) shows the details contract by contract; at this point only two default-pipeline builds, Aave’s StataTokenV2 and Beefy’s StrategyAuraGyroMainnet, still exceed Ethereum’s 24,576-byte limit.
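The 24,576-byte cap is Ethereum’s EIP-170 limit on deployed runtime code; a contract over it is rejected at deployment. A trivial check (helper name is our own, for illustration):

```python
EIP170_LIMIT = 24_576  # max deployed-code size in bytes on Ethereum mainnet

def exceeds_limit(runtime_bytecode_hex: str) -> bool:
    """Return True if this runtime bytecode would be rejected at deployment."""
    raw = bytes.fromhex(runtime_bytecode_hex.removeprefix("0x"))
    return len(raw) > EIP170_LIMIT
```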
Recent compiler changes
- Inlining shifted to LLVM (PR #1048), already covered in the previous section.
- Large-constant unfolding (PR #833): the backend now emits shorter sequences for small negatives and other literals when the gas trade-off is minimal, avoiding full-width `PUSH` instructions.
The default pipeline remains gas-first — it adopts size tweaks only when they cost little or no runtime gas. For the few outlier contracts that still exceed the cap, we plan to offer an aggressive size-optimization fallback that prioritizes byte-code shrinkage over gas efficiency.
Right now the default solx pipeline still lags solc’s by a few-fold on wall-clock time, although it is already noticeably faster than `solc --via-ir --optimize`. We don’t have a public dashboard for this metric yet, but every pull request in the solx repo now gets a compile-time report from CI, so progress is tracked continuously.
Looking ahead, our MLIR pipeline is showing encouraging numbers: with LLVM optimizations enabled it compiles only a few tens of percent slower than solc with optimizations off. Once that pipeline is production-ready, we expect compilation speed to tighten further while keeping all current gas and size wins.
Our May-2025 pre-alpha already cleared the Solidity compiler test-suite and real-world projects such as Uniswap V2 and Solmate. A detailed write-up (“Solx: The New Solidity Compiler — Why It Matters to You”) covers those results.
With the stack-too-deep issue resolved, we have widened our test pool. Every project below builds cleanly, and all unit tests pass on solx beta:
† Beefy note: inline assembly in Beefy lacks the `memory-safe-assembly` tag; adding it turns the suite green.
Totals: ≈ 2 000 contracts, ≈ 6 400 unit tests — all green.
That is far from all the testing we have run. While solx may look new, its core is not: the same LLVM fork has powered zksolc (Solidity) and zkvyper (Vyper) on ZKsync since 2021. The only delta is about 9,000 LoC in the EVM code generator. Across contracts deployed to ZKsync we have seen zero post-deployment miscompilations. The lone genuine LLVM issue, the AAVE miscompilation, was caught in testing and patched before release. Five additional miscompilations were introduced in our own ZKsync VM (EraVM) specific code; each was discovered during testing or via a bug-bounty report and is now fixed.
Today we believe solx is comparable in safety to `solc --via-ir --optimize`. The legacy solc pipeline, though ultra-conservative, delivers far fewer optimizations and often forces developers to hand-tune code. As compiler engineers, we don’t think people at scale can optimize more securely than a thoroughly tested compiler. Nevertheless, the 401 open LLVM “miscompilation” tickets (label = miscompilation, 11 Jul 2025) remain a concern, and we do not assume LLVM is secure enough by default. Our first triage suggests that only 42 of them could realistically surface in solx:
Before tackling those 42, we are defining a permanent triage and tracking process so we never repeat this snapshot exercise. Reported LLVM miscompilations will be kept out of our next releases.
Recommendations
Maintain 100 % test coverage in your own codebases; well-tested code is far less likely to conceal an undetected optimization bug.
Feel free to battle-test solx on non-critical contracts today. For mission-critical deployments, wait until we raise our security bar further.
Two months after our first public release, the beta compiles every contract that previously tripped over stack-too-deep—the error that has topped Solidity-developer pain charts for years.
So is the problem gone for good? Partly.
Stack-too-deep is eliminated in optimizing builds, meaning you can point solx at production code today and ship byte-code that solc’s default pipeline could never produce. What remains is a smooth development-time experience: lightning-fast, non-optimizing builds with source maps and coverage. Delivering that requires a “debug” pipeline, but, unlike solc’s `--via-ir` detour, solx doesn’t need a separate code generator: turning off most LLVM passes is enough, because the spill strategy already lives in the backend.
That puts a developer-friendly, non-optimizing pipeline well within reach. Once it lands, teams will compile fast, debug smoothly, and then flip the optimizer on for deployment — all without ever seeing stack-too-deep again.
This is our best idea for what to tackle next, but if you think we’re wrong — or have any feedback at all — please reach out on Telegram or email. Above all else we want solx to meet your needs. And if you haven’t tried it yet, head over to solx.zksync.io for an easy start.