From Integration to Verification: Completing the First Steps in ZKsync’s Prover Network

Today we are launching the Prover API, a significant milestone in the prover decentralization initiative that began in June 2024. The API allows anyone to generate proofs and verify their correctness against the proofs generated for Era: you can download batch inputs from the API and submit a batch proof to be verified. The Prover API covers Phase 0 and Phase 1 of the initial proposal. Below is a summary of the current roadmap.

  • Phase 0: Test Integration - an example of setting up prover subsystem and running the proving process for a batch

  • Phase 1: Proof Verification - users can get access to real batch inputs and verify generated proofs against ZKsync via endpoints

Prover API is the testing ground for generating proofs for Era’s mainnet and verifying their validity against ZKsync-generated proofs. The system is fully permissionless: anyone willing to play, test, benchmark, and integrate is welcome to do so. We’ll gauge interest during the testing phase and use it to drive priorities for the next phases.

Prover API launches for Era’s mainnet today, but the code is open source and readily available for any other ZKsync chain.

Now, let’s get into technical details.

The API exposes batch proof inputs that operators can fetch and generate proofs from. These inputs are indistinguishable from the ones ZKsync’s own prover uses. The API has two endpoints:

  • /proof_generation_data — retrieve batch proof input

  • /verify_proof — verify a proof (byte-by-byte diff against proofs generated on ZKsync)

For convenience, the first endpoint is split into two variants:

  • /proof_generation_data — get the latest batch proof inputs (very similar to how ZKsync’s provers work internally)

  • /proof_generation_data/{batch_number} — get inputs for a specific batch proof; great for testing or benchmarking purposes

💡 Note: Not all batches have proof inputs available. Batch data is available for a finite amount of time (at the time of writing, 30 days), after which it is deleted. This means ZKsync can’t verify proofs for batches older than the data deletion time frame.
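As a quick sketch of how the two variants fit together, the download URL can be built like this (the helper name and the batch number are made up for illustration; wget’s --content-disposition keeps the server-supplied file name):

```shell
# Build the URL for the proof input endpoints described above.
# With no argument it targets the latest batch; with a batch number
# it targets that specific batch.
proof_input_url() {
  local base="https://prover-api.zksync.io/proof_generation_data"
  if [ -n "$1" ]; then
    echo "$base/$1"
  else
    echo "$base"
  fi
}

# Example with a hypothetical batch number:
# wget --content-disposition "$(proof_input_url 100)"
```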

As outlined in the first post, the roadmap has three follow-up phases:

  • Phase 2: Real-Time Proof Verification — introduction of authentication

  • Phase 3: Live Proving Under Test — a way to measure performance of specific proof providers, ratings & more

  • Phase 4: Live Proving — a proof provider becomes an integral part of ZKsync’s proving

Let's look at how to start making ZKsync's prover network more scalable and robust.

Here's how to run a basic prover:

What you will need

Hardware

  • GPU (Graphics Card): NVIDIA A100 with at least 40 GB VRAM

  • RAM: at least 85 GB

  • CPU: at least 6 cores, but more are advised

  • Operating System: Ubuntu 22.04 LTS

  • Storage: at least 300 GB of SSD

💡 Note: These are the requirements for running everything on a single machine. Advanced users may run one process at a time or distribute the workload across multiple machines, fine-tuning the requirements accordingly. Feel free to refer to the docs for finer-grained control.
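As a rough pre-flight check against the list above (thresholds copied from it; the output format is just an example):

```shell
# Print each resource next to the requirement from the hardware list.
echo "CPU cores: $(nproc) (need >= 6)"
free -g | awk '/^Mem:/ {print "RAM: " $2 " GB (need >= 85 GB)"}'
df -BG --output=avail / | awk 'NR==2 {gsub(/[ G]/, ""); print "Free disk: " $0 " GB (need >= 300 GB)"}'
```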

Step 1: Getting Everything Ready

Duration: ~20-30m

Other notes: Check out the docs for more detailed information.

Before setting up the system, you’ll need to install some tools and software. Follow these steps:

💡 Tip: If you are asked to reload your terminal, or a dependency is reported missing even though you have installed it, close the terminal and open it again, or run source ~/.<SHELL>rc.

1.1 Install Rust

Rust is the programming language the provers are written in. To install it, open your terminal (the command-line tool in Ubuntu) and copy-paste this command:

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

1.2 Install Node.js, NVM, and Yarn

These tools are needed for the JavaScript part of the ecosystem.

  1. Install NVM (Node Version Manager), which makes it easy to install and manage different versions of Node.js:

    curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.5/install.sh | bash
    
  2. Install Node.js (version 20) using NVM:

    nvm install 20
    
  3. Finally, install Yarn, which helps with managing software packages:

    npm install -g yarn
    yarn set version 1.22.19
    

1.3 Install Essential Libraries

These are basic tools and libraries needed by the system. Just copy and paste the following into your terminal:

sudo apt-get update
sudo apt-get install -y build-essential pkg-config cmake clang lldb lld libssl-dev postgresql apt-transport-https ca-certificates curl software-properties-common
cargo install sqlx-cli --version 0.8.1

1.4 Install Docker

Install Docker with:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu jammy stable"
sudo apt install -y docker-ce
sudo usermod -aG docker ${USER}

After installing Docker, reboot your system (sudo reboot) or log out and back in so the group change takes effect.

1.5 Install Foundry

curl -L https://foundry.paradigm.xyz | bash

After installing, you might need to reload your terminal again (see tip).

foundryup --branch master

1.6 Adjust Services: Disable PostgreSQL and Enable Docker

ZKsync runs PostgreSQL inside Docker, so the system-wide PostgreSQL service needs to be stopped to avoid port conflicts:

sudo systemctl stop postgresql
sudo systemctl disable postgresql
sudo systemctl start docker

1.7 Install CUDA drivers

  • Install CMake version 3.24 or higher with these instructions

  • To run Docker images of the prover components, make sure you have the specific CUDA driver versions: driver version 535 and CUDA version 12.2 (if you have other versions, remove them and install the correct ones)

    wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
    sudo dpkg -i cuda-keyring_1.1-1_all.deb
    sudo apt-get update
    
    sudo apt-get install -y cuda-drivers-535
    sudo apt-get install -y cuda-toolkit-12-2
    
  • Add the following lines into your .<SHELL>rc file (and don’t forget to source it!):

    export CUDA_HOME=/usr/local/cuda
    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64
    export PATH=$PATH:$CUDA_HOME/bin
    
  • Also, for Docker images to work with the GPU, you need to install nvidia-container-toolkit version 1.14.0 and configure it for Docker.

    curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
      && curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
        sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
        sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
        
    sudo apt-get update
    sudo apt-get install -y nvidia-container-toolkit=1.14.0-1 nvidia-container-toolkit-base=1.14.0-1
    sudo nvidia-ctk runtime configure --runtime=docker
    sudo systemctl restart docker
    

Reboot for the drivers to kick in.
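After the reboot, it’s worth confirming the driver actually matches the required 535 branch before pulling any images. A minimal sketch (the helper name is made up; nvidia-smi’s --query-gpu=driver_version query is standard):

```shell
# Returns success when the major driver branch matches the required one.
driver_branch_ok() {  # usage: driver_branch_ok <version-string> <required-branch>
  [ "${1%%.*}" = "$2" ]
}

# Check the live driver against the 535 branch this guide requires:
# driver_branch_ok "$(nvidia-smi --query-gpu=driver_version --format=csv,noheader)" 535 \
#   && echo "driver OK" || echo "wrong driver branch; need 535"

# And confirm containers can see the GPU (the image tag is an example):
# docker run --rm --gpus all nvidia/cuda:12.2.2-base-ubuntu22.04 nvidia-smi
```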

Step 2: Setting Up the Prover Subsystem

Duration: ~10m

Now that everything is installed, let’s set up the prover subsystem.

2.1 Install ZKsync Tools

We need to install specific tools (zk_inception, zk_supervisor, prover_cli) to manage our prover subsystem. Run this command:

cargo +nightly-2024-08-01 install --git https://github.com/matter-labs/zksync-era/ --locked zk_inception zk_supervisor prover_cli --force

💡 Note: If you don’t have the toolchain installed, you can install it with rustup install nightly-2024-08-01

Step 3: Initializing the System

Duration: ~10m

Next, let’s initialize & start up the prover subsystem:

3.1 Create an Ecosystem

An "ecosystem" in this context is just the environment where the prover subsystem operates. Create one with:

zk_inception ecosystem create --l1-network=localhost --prover-mode=gpu --wallet-creation=localhost --l1-batch-commit-data-generator-mode=rollup --start-containers=true

This sets up everything the prover needs. Use default values if prompted, but feel free to choose the names you prefer.

To enter the new ecosystem folder:

cd {ECOSYSTEM_NAME}

3.2 Initialize the Prover Subsystem

Now, initialize the prover with:

zk_inception prover init --shall-save-to-public-bucket=false --setup-database=true --use-default=true --dont-drop=false --setup-keys=false

You can accept the default values if any prompts appear.

Step 4: Preparing the System for Actual Proving

Duration: ~30m

4.1 Prepare Witness Inputs

Witness inputs are files containing the data the prover needs to generate a proof for a batch. To get the latest one, use:

wget --content-disposition https://prover-api.zksync.io/proof_generation_data

Place these downloaded files into the folder: {ecosystem_path}/zksync-era/prover/artifacts/witness_inputs.
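If you prefer to script the placement, here is a minimal sketch, assuming the wget above left the file in your current directory (ECOSYSTEM_PATH is a stand-in for your ecosystem directory):

```shell
# Move any downloaded witness inputs into the folder the prover reads from.
ECOSYSTEM_PATH=${ECOSYSTEM_PATH:-$PWD}
DEST="$ECOSYSTEM_PATH/zksync-era/prover/artifacts/witness_inputs"
mkdir -p "$DEST"
for f in witness_inputs_*.bin; do
  [ -e "$f" ] && mv "$f" "$DEST"/ || true
done
```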

4.2 Set Up the Database

Now you need to tell the database that you want to prove this exact batch. For that, you need the following info:

  • BATCH_NUMBER — encoded in the witness_inputs_{number}.bin file name.
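Pulling the number out of the file name can be scripted, for instance like this (the helper name and the example batch number are made up; the file-name pattern is the one above):

```shell
# Extract the batch number from a witness_inputs_{number}.bin path.
batch_number() {
  local name
  name=$(basename "$1")
  name=${name#witness_inputs_}
  echo "${name%.bin}"
}

# Example with a hypothetical batch:
batch_number zksync-era/prover/artifacts/witness_inputs/witness_inputs_497000.bin  # prints 497000
```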

Now you need to insert information about the protocol version and the batch into the database. You can do it with these commands:

zk_supervisor prover insert-version --default
zk_supervisor prover insert-batch --default --number <BATCH_NUMBER>

4.3 Getting Setup Keys

Now you have 2 options:

  • Copy the keys from our bucket. It can take some time (~25m depending on your connection speed; ~2m inside a GCP VM instance), but it is generally the fastest way.

    💡 Note: To download setup keys, you need to install gcloud tool (unless you are running a GCP VM instance, where it comes by default). You can find instructions for installation here.

    zk_inception prover setup-keys --mode=download --region={region}
    # For region you have 3 options: us, europe and asia; choose the one that suits you best
    
  • Generating keys is also possible, but will require further setup. Please see the docs for details.

Step 5: Running the Prover Subsystem

Duration: It depends heavily on the type of machine you use and on the size of the batch. We’ve observed that A100s with 6 CPUs can take tens of hours, but with a machine with more CPUs you can expect closer to 1h.

Now, let's run the prover to start verifying data:

zk_inception prover run --component=prover-job-monitor --docker=true
zk_inception prover run --component=witness-generator --round=all-rounds --docker=true
zk_inception prover run --component=witness-vector-generator --threads=8 --docker=true
zk_inception prover run --component=prover --max-allocation=17179869184 --docker=true
zk_inception prover run --component=compressor --docker=true

The system will process the data and store the results in the database.

💡 Note: You can run the witness-vector-generator with multiple threads if you have more cores on the machine. A single prover can work with up to ~15 witness vector generators (depending on CPU speed). Running with 1 thread will make it slower, but it will still work; more threads (up to 15) means more speed.

Step 6: Verifying the Proof

To check if everything worked, you can verify the proof using the API:

  1. The proof will be stored in {ecosystem}/zksync-era/prover/artifacts/proofs_fri under the name l1_batch_proof_{number}_{protocol_version}.bin

  2. Send the proof file to the verification endpoint with this command:

    curl -v -F proof=@{ecosystem}/zksync-era/prover/artifacts/proofs_fri/l1_batch_proof_{number}_{protocol_version}.bin https://prover-api.zksync.io/verify_proof/{l1_batch_number}
    

If the proof is correct, you’ll get a status 200 message back; otherwise, it will show one of the following errors:

  • Serialization - The proof you submitted could not be deserialized

  • Invalid proof - The proof you submitted is not correct

  • Batch not ready - There is no proven batch with that number in the ZKsync system yet

  • Invalid File - The request you submitted has an incorrectly attached file

  • Proof is gone - The batch you want to verify is too old and it was purged from our system

Appendix

GPU requirements

Both the compressor and prover components require around 20 GB of GPU memory, so if you have less than 40 GB you can’t run the prover and compressor simultaneously. You can still generate a proof, though: wait until every proof is generated and only the compressor is pending. You can check the status with this query against the database (you can find the database’s URL by running zk_supervisor prover info):

SELECT count(*) FROM prover_jobs_fri WHERE l1_batch_number = <BATCH_NUMBER> AND aggregation_round = 4 AND status = 'successful';
-- returns 1 when proving for the batch is done, 0 when it's not

After that, you can stop the prover and run the compressor with:

zk_inception prover run --component=compressor --docker=true

Proving testnet batches

In this guide we provided the URL of our mainnet API, but you can also get inputs and verify batches from testnet. The testnet Prover API URL is https://prover-api.testnet-sepolia.era.zksync.dev.
