solx is a new optimizing compiler for Ethereum smart contracts that could change the way you write and optimize Solidity code. Built on LLVM, solx focuses on improving runtime gas efficiency while reducing the need for manual workarounds.
If you're a smart contract developer and gas costs matter to you, solx could help - some low-level tweaks you’re used to making might already be unnecessary, and more improvements are on the way.
If you're a tooling or compiler engineer, or someone thinking about the future of Ethereum’s developer tooling, Part 2 explores solx as a modular compiler infrastructure — ready to support new languages, EVM extensions, or even compiling Solidity to RISC-V. We still suggest starting here to see what solx delivers today, in practice.
solx was built to improve runtime gas efficiency - but there’s no single number that captures the effect, as different contracts benefit differently. The best way to see the impact is to try solx yourself using the demo we prepared. It’s streamlined for quick gas measurements, and adding new contracts is easy. The picture below shows some improvements from the demo - if it looks convincing, feel free to pause here and give it a try.
solx is currently in pre-alpha. It passes our internal test suite, including all tests from the Solidity compiler repository and a number of real-world contracts such as Uniswap V2 and Solmate. We haven’t observed code generation issues except for stack-too-deep errors in larger projects. A breakdown is included below. The numbers indicate how many contracts compiled successfully out of the total for each project.
Basic Foundry integration already works. solx can be used as a drop-in replacement for solc in Foundry workflows, although contract linking is not fully supported yet. Hardhat integration is more challenging and requires further work.
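As a minimal sketch of what this looks like in practice: Foundry lets you point builds at an alternative compiler binary via the solc setting in foundry.toml. The path below is hypothetical - adjust it to wherever you installed solx.

```toml
# foundry.toml - hypothetical install path; point this at your solx binary
[profile.default]
solc = "/usr/local/bin/solx"  # use solx as a drop-in replacement for solc
```

After that, forge build and forge test run through solx with no other changes.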
Still, don’t assume solx can be dropped into production just because it works as a solc replacement in Foundry. You will still need to retest your contracts. Even if they have worked reliably with solc, solx may expose different edge cases - and vice versa. When switching back from solx to solc, especially its modern optimizing pipeline solc --via-ir --optimize, retesting is just as essential.
Aiming for 100% code coverage goes a long way toward ruling out compiler bugs affecting your project. Fuzzing and other forms of testing are welcome too - in our experience, when developers rigorously test their own code, compiler bugs simply don’t go unnoticed. Still, until we address the remaining stack-too-deep issues and validate solx more broadly against real-world contracts, we recommend holding off on production use.
Or, if you're interested in doing more than just waiting - let us know about your project and your intent to use solx. We can add your contracts to our benchmarking suite and start optimizing for your use case. You can reach us via Telegram, or by email at solx@matterlabs.dev.
The next section explains when solx works well, where it might fall short, and how it compares to solc across common usage patterns.
Don't worry - we won't explain every classical compiler optimization here. Instead, this section will walk you through four simple examples that highlight what solx can do in practice.
The examples are intentionally simple and focus on extremes to make the patterns obvious. But the broader guidance is:
- The more computation your contract performs relative to storage operations, the more likely you’ll benefit from solx.
- The more branches and loops your code contains, the more likely you’ll benefit from solx.
- The cleaner your code and the fewer manual optimizations it contains, the more solx can help.
If you’ve manually inlined functions, precomputed constants, or reordered expressions to save gas, you probably did a good job. But solx might make that manual optimization unnecessary.
For each example, we share a link to Compiler Explorer, a tool that lets you inspect the generated EVM assembly and compare outputs from different compilers side by side. To keep things simple, we compare solc --optimize with solx using default parameters, or solc --via-ir --optimize with solx --via-ir. You’re welcome to experiment with other options or LLVM-specific flags.
Note: solx emits a slightly different textual format for EVM assembly, but we’ve tried to make it comprehensible without needing to explain the syntax here.
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract FactorialStorage {
    uint256 private result;

    // 57! fits in uint256, 58! does not
    uint256 constant MAX_SAFE_N = 57;

    function computeFactorial(uint256 n) external {
        if (n > MAX_SAFE_N) {
            revert("Overflow: n too large");
        }
        result = 1;
        unchecked {
            for (uint256 i = 2; i <= n; ++i) {
                result *= i;
            }
        }
    }

    function getResult() external view returns (uint256) {
        return result;
    }
}
In the FactorialStorage example (view on Compiler Explorer), both of solc's optimizing pipelines - the legacy path and --via-ir --optimize - keep storage accesses (sstore and sload) inside the loop. Specifically, solc emits:
- an initial sstore before the loop,
- and a sload/sstore pair on every loop iteration.
In contrast, solx recognizes the pattern and restructures the code:
- it uses a temporary variable for the loop computation,
- and lifts the sstore out of the loop.
Note that unchecked is essential - without it, the compiler cannot assume the multiplication won't overflow, so updating storage on each iteration becomes necessary for correctness. Still, solx will eliminate the sload from the loop even in that case.
The initial sstore before the loop remains, even though it could be optimized away. Apart from that, the generated binary behaves like the following loop:
uint256 tmp = 1;
for (uint256 i = 2; i <= n; ++i) {
    tmp *= i;
}
result = tmp;
// SPDX-License-Identifier: MIT
pragma solidity >=0.4.16;

contract Foldable {
    function entry() public pure returns (uint64) {
        return test() + test() + test();
    }

    function test() private pure returns (uint64) {
        for (uint8 i = 0; i < 2; i++) {
            uint8 j = 1;
            while (j < 4) {
                uint8 p = 0;
                do {
                    p += 2;
                    if (p == 8)
                        break;
                    for (uint8 h = 1; h <= 4; h++) {
                        if (h > 2)
                            break;
                        for (uint8 k = 10; k < 12; k++) {
                            uint8 x = 6;
                            do {
                                x -= 1;
                                if (x == 0)
                                    break;
                                uint8 y = 10;
                                while (y < 17) {
                                    y += 1;
                                }
                            } while (true);
                        }
                    }
                } while (true);
                j *= 2;
            }
        }
        return 1;
    }
}
The Foldable example (view on Compiler Explorer) shows how LLVM can precompute even complex expressions at compile time. While solc with --via-ir --optimize performs most of the computations written in the source, solx --via-ir detects that the final result is 3 and replaces the entire computation with that constant.
If you had previously baked precomputed numbers into your contract, you can now leave the computations written explicitly - that is often more robust than comments explaining what the magic numbers mean.
More importantly, constant folding is not limited to precomputing standalone constants. Constant expressions often appear as a side effect of other optimization passes or logic lowering, such as addressing an array element by index. solx can fold such expressions even across branches and loops whenever they are statically determinable.
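As a small illustrative sketch (not taken from our benchmarks), a result that depends only on compile-time constants can be left as readable code:

```solidity
// Hypothetical example: the loop's result depends only on
// compile-time constants, so solx --via-ir can fold the whole
// body down to `return 1024;` with no loop in the bytecode.
function scale() internal pure returns (uint256 r) {
    r = 1;
    for (uint256 i = 0; i < 10; ++i) {
        r *= 2;
    }
}
```

The source stays self-documenting while the runtime pays for a single constant.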
We observed that general-purpose sorting algorithms with no storage interaction work significantly better in solx than in Solidity’s current optimizers.
In particular, in our benchmarks on random input, Bubble Sort used 64% less gas and Quick Sort 51% less gas compared to solc --via-ir --optimize (with solc --optimize performing even worse).
One reason is that solc recomputes the base address of an array on every iteration, while solx hoists loop-counter-independent computations out of the loop and folds constants.
If computations with arrays are in your hot code, solx can automatically optimize them without requiring manual restructuring.
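A minimal sketch of the pattern (illustrative, not one of the benchmarked sorting algorithms):

```solidity
// Summing a memory array: computing the address of a[i] involves the
// array's base offset. solc recomputes that offset on every iteration,
// while solx hoists the loop-invariant part out of the loop.
function sum(uint256[] memory a) internal pure returns (uint256 s) {
    for (uint256 i = 0; i < a.length; ++i) {
        s += a[i];
    }
}
```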
Manual loop unrolling is common in Solidity libraries. In Uniswap V4’s TickMath.getSqrtPriceAtTick, 19 steps are unrolled explicitly instead of using a loop (view on Compiler Explorer). We rewrote the logic using a for loop and an array of constants (view on Compiler Explorer):
- With solc, gas usage rose from 1365 to 3524 - a 2.5× increase.
- With solx, gas increased from 965 to 2451 - again about 2.5×.
But solx can automatically unroll loops when the number of iterations is known at compile time.
To enable this, use:
--llvm-options='--unroll-count=20'
This brings gas down to 1197 - just a 24% increase instead of 150%, and already better than solc’s manually unrolled version.
We don’t enable this unrolling by default because it involves a trade-off: unrolling reduces runtime gas but increases bytecode size. Since Solidity contracts have a size cap, applying this optimization indiscriminately could backfire. That’s why solx leaves it up to you.
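The shape that qualifies is a loop whose trip count is known at compile time - a hypothetical sketch, not the TickMath code itself:

```solidity
// The trip count (20) is a compile-time constant, so with
// --llvm-options='--unroll-count=20' solx can replicate the body
// 20 times and remove the counter and branch entirely.
function accumulate(uint256 x) internal pure returns (uint256 acc) {
    unchecked {
        for (uint256 i = 0; i < 20; ++i) {
            acc = acc * x + i;
        }
    }
}
```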
Some of the remaining overhead comes from inefficient constant array initialization - another area we’re targeting for improvement in our upcoming MLIR-based frontend.
Here are the areas we currently consider most important - some are already in progress, others are next on our list:
Stack-too-deep resolution is underway. This is our top priority and will remove one of the last blockers for solx adoption in large contracts.
MLIR-based Solidity IR is in active development. Currently, solx reuses parts of solc’s frontend up to intermediate representation emission. This helped us deliver a pre-alpha faster, but it comes at the cost of bloated binaries and missed optimization opportunities. By replacing these components with our own frontend, we aim to unlock optimizations solc can’t support - like constant array folding (see Example 4) and eliminating redundant heap allocations, as shown in the example below.
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

contract HeapLoopExample {
    struct Item {
        uint256 x;
        uint256 y;
    }

    function compute() public pure returns (uint256 sum) {
        for (uint256 i = 0; i < 100; ++i) {
            // This allocates memory for a struct in each iteration
            Item memory item = Item({x: i, y: i * 2});
            sum += item.x + item.y;
        }
    }
}
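With redundant-allocation elimination, the loop above could be lowered as if the struct had never been materialized - a sketch of the intended equivalent, not generated output:

```solidity
// Equivalent loop once the per-iteration struct allocation is
// eliminated: the fields are used directly and memory is untouched.
function computeOptimized() public pure returns (uint256 sum) {
    for (uint256 i = 0; i < 100; ++i) {
        sum += i + i * 2; // item.x + item.y, with x = i and y = i * 2
    }
}
```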
Our priorities aren’t set in stone. Your input could help us focus on features that are truly in demand. If we’ve missed something - don’t hesitate to tell us.
If you’ve made it this far, chances are you’re interested in solx. Here’s how you can get started and contribute:
🔥 Try the Demo: If you haven’t already, take a moment to try the demo. It includes everything you need to start working with solx. But don’t stop at the bundled examples — compile your own contracts, compare gas usage, bytecode, and behavior. It’s the fastest way to see what solx can do.
📝 Share Feedback: As you experiment, let us know what you find. Did solx reduce gas for a specific function? Did it behave unexpectedly in some edge case? We’re listening. The best way to reach us is via our Telegram channel. You can also send questions or feedback via email: solx@matterlabs.dev.
📣 Spread the Word: If you found solx interesting or useful, share it with your team, post your findings, or just mention it in developer forums. The more people experiment with it, the faster we can identify real-world needs and improve. We also promote blog posts, benchmarks, and feedback shared by the community - so if you write or record something about solx, let us know!