This document describes the Lasso proof system from an engineering perspective. Of course the most accurate and up-to-date view can be found in the code itself.
The following section paraphrases Lasso Figure 7.
Params:

- $N$: virtual table size
- $C$: subtable dimensionality
- $m$: subtable/memory size
- $s$: sparsity, i.e. the number of lookups
- $\alpha$: number of subtables/memories
Reduces the check that each of the $s$ lookups reads the claimed entry of the virtual table to:

1. Prover commits to the multilinear polynomials:
   - $\text{dim}_1, ..., \text{dim}_C$: purported subtable lookup indices
   - $E_1, ..., E_\alpha$: purported values read from each subtable
   - $\text{read\_counts}_{1}, ..., \text{read\_counts}_{\alpha}$: read counts used for memory checking
   - $\text{final\_counts}_{1}, ..., \text{final\_counts}_{\alpha}$: final per-cell counts used for memory checking
2. Verifier provides $\tau, \gamma \in \mathbb{F}$.
3. Prover and verifier run the sumcheck protocol for grand products (Tha13) to reduce the check to an equality between multiset hashes: $\mathcal{H}_{\tau, \gamma}(WS) = \mathcal{H}_{\tau, \gamma}(RS) \cdot \mathcal{H}_{\tau, \gamma}(S)$
4. Sumcheck reduces the check to the following evaluation claims, for $r''_i \in \mathbb{F}^{\log(m)}$ and $r'''_i \in \mathbb{F}^{\log(s)}$:
   - $E_{i}(r^{'''}_{i}) \stackrel{?}{=} v_{E_{i}}$
   - $\text{dim}_i(r'''_i) \stackrel{?}{=} v_i$
   - $\text{read\_counts}_{i}(r^{'''}_{i}) \stackrel{?}{=} v_{\text{read\_counts}_{i}}$
   - $\text{final\_counts}_{i}(r^{''}_{i}) \stackrel{?}{=} v_{\text{final\_counts}_{i}}$
5. Verifier checks that the equations above hold, with the RHS provided by sumcheck and the LHS provided by oracle queries to the commitments in Step 1.
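The multiset-hash identity in Step 3 can be checked concretely on a toy trace. The sketch below is illustrative only: the field, table, trace, and function names are invented for this document, not taken from the codebase. It fingerprints each (address, value, count) tuple as $\gamma^2 \cdot a + \gamma \cdot v + t - \tau$ and verifies $\mathcal{H}_{\tau,\gamma}(WS) = \mathcal{H}_{\tau,\gamma}(RS) \cdot \mathcal{H}_{\tau,\gamma}(S)$, where $WS$ is the initial state plus the write operations, $RS$ is the read operations, and $S$ is the final state.

```rust
// Toy illustration of the multiset-hash identity behind memory checking.
// All names and the toy field are invented for this sketch.

const P: u128 = 2305843009213693951; // 2^61 - 1, a Mersenne prime

fn fingerprint(a: u128, v: u128, t: u128, gamma: u128, tau: u128) -> u128 {
    // h_gamma(a, v, t) - tau = gamma^2 * a + gamma * v + t - tau (mod P)
    (gamma * gamma % P * a + gamma * v + t + P - tau) % P
}

fn hash(set: &[(u128, u128, u128)], gamma: u128, tau: u128) -> u128 {
    // H_{tau,gamma}(multiset) = product of shifted fingerprints
    set.iter().fold(1u128, |acc, &(a, v, t)| acc * fingerprint(a, v, t, gamma, tau) % P)
}

fn main() {
    let table: [u128; 4] = [10, 20, 30, 40]; // one materialized subtable
    let dim: [usize; 3] = [1, 3, 1];         // lookup indices (s = 3 reads)

    // Replay the reads, tracking per-cell counts.
    let mut counts = [0u128; 4];
    let mut read_set = Vec::new();  // RS: (addr, value, count at read time)
    let mut write_set = Vec::new(); // W:  (addr, value, count + 1)
    for &a in &dim {
        read_set.push((a as u128, table[a], counts[a]));
        counts[a] += 1;
        write_set.push((a as u128, table[a], counts[a]));
    }
    // I: initial state with count 0; S: final state with final counts.
    let init_set: Vec<_> = table.iter().enumerate()
        .map(|(a, &v)| (a as u128, v, 0u128)).collect();
    let final_set: Vec<_> = table.iter().enumerate()
        .map(|(a, &v)| (a as u128, v, counts[a])).collect();

    // Stand-ins for verifier-sampled challenges.
    let (gamma, tau) = (7919u128, 104729u128);

    // WS = I ∪ W, so the identity H(WS) = H(RS) * H(S) becomes:
    let lhs = hash(&init_set, gamma, tau) * hash(&write_set, gamma, tau) % P;
    let rhs = hash(&read_set, gamma, tau) * hash(&final_set, gamma, tau) % P;
    assert_eq!(lhs, rhs);
    println!("multiset hashes match");
}
```

If any read had returned a value other than the table entry at its address, the two products would disagree with overwhelming probability over the choice of $\tau, \gamma$.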
We convert a $C$-sized vector of lookup indices into a `DensifiedRepresentation`, which handles the construction of:

- $\text{dim}_i \ \forall i=1,...,C$
- $E_i \ \forall i=1,...,C$
- $\text{read\_counts}_i \ \forall i=1,...,C$
- $\text{final\_counts}_i \ \forall i=1,...,C$
Each of these is stored as a (dense) multilinear polynomial in its Lagrange basis representation.
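The counting pass behind this densification, for a single dimension, can be sketched as below. The function name, types, and toy data are invented for illustration; they are not the crate's actual API.

```rust
// Sketch of densification for one dimension i: given lookup indices into a
// subtable of size m, produce the values read plus the read/final counts
// used by memory checking. Illustrative only.

fn densify(dim: &[usize], subtable: &[u64]) -> (Vec<u64>, Vec<u64>, Vec<u64>) {
    let mut final_counts = vec![0u64; subtable.len()];
    let mut e = Vec::with_capacity(dim.len());           // E_i: values read
    let mut read_counts = Vec::with_capacity(dim.len()); // count at read time
    for &a in dim {
        e.push(subtable[a]);
        read_counts.push(final_counts[a]);
        final_counts[a] += 1;
    }
    (e, read_counts, final_counts)
}

fn main() {
    let subtable = [10u64, 20, 30, 40];
    let dim = [1usize, 3, 1];
    let (e, read_counts, final_counts) = densify(&dim, &subtable);
    assert_eq!(e, vec![20, 40, 20]);
    assert_eq!(read_counts, vec![0, 0, 1]);
    assert_eq!(final_counts, vec![0, 2, 0, 1]);
    // Each output vector is then interpreted as the evaluations of a dense
    // multilinear polynomial over the boolean hypercube (Lagrange basis).
}
```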
Finally, we merge all of these polynomials into two combined multilinear polynomials, grouped by variable count.

Now we can commit these two merged multilinear polynomials via any (dense) multilinear polynomial commitment scheme. This code is handled by `SparsePolynomialCommitment` -> `SparsePolyCommitmentGens` -> `PolyEvalProof` -> `DotProductProofLog` -> ....

Initially we use Hyrax from Spartan as the dense PCS, but this could be swapped down the road for different performance characteristics.
After the initial commitment, `SparsePolynomialEvaluationProof::<_, _, _, SubtableStrategy>::prove(dense, ...)` is called. `SubtableStrategy` describes which table collation function $g$ will be used and which set of subtables $T_i$ to materialize.
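To make the decomposition concrete, here is a toy strategy for a virtual *identity* table, where $g$ is simply base-$m$ recombination of the chunk reads. The constants and function names are invented for this sketch; a real `SubtableStrategy` defines table-specific subtables and a table-specific $g$.

```rust
// Toy SubtableStrategy-style decomposition: a virtual identity table of size
// N = m^C is never materialized. Each lookup index is split into C base-m
// chunks, each chunk is looked up in a small identity subtable, and the
// collation function g reassembles the full entry. Illustrative names only.

const M: usize = 16; // subtable size m
const C: usize = 4;  // dimensionality, so N = m^C = 65536

/// Split an index into C base-m chunks, most significant first.
fn decompose(index: usize) -> [usize; C] {
    let mut chunks = [0usize; C];
    let mut rem = index;
    for i in (0..C).rev() {
        chunks[i] = rem % M;
        rem /= M;
    }
    chunks
}

/// Collation g for the identity table: weigh each chunk's subtable entry
/// by its base-m place value.
fn g(entries: [u64; C]) -> u64 {
    entries.iter().fold(0u64, |acc, &e| acc * (M as u64) + e)
}

fn main() {
    // Materialize one small identity subtable (all C dimensions share it).
    let subtable: Vec<u64> = (0..M as u64).collect();

    let index = 54321usize; // a lookup into the virtual table of size N
    let chunks = decompose(index);
    let entries: [u64; C] = chunks.map(|c| subtable[c]);

    // g collates the C subtable reads back into the virtual table entry.
    assert_eq!(g(entries), index as u64);
    println!("recovered T[{}] from {} subtable lookups", index, C);
}
```

Nontrivial tables (e.g. bitwise AND in a zkVM) follow the same shape, with per-dimension subtables and a table-specific $g$.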
`Subtables::new()`: First we materialize the subtables and read the entries at their respective lookup indices. These entries determine (via Lagrange interpolation) the dense multilinear polynomials $E_i$.

`Subtables::compute_sumcheck_claim()`: Computes the combined evaluations of the $E_i$ polynomials under the collation function $g$, i.e. the claim for the primary sumcheck.
Run a generic `SumcheckInstanceProof::prove_arbitrary` on this claim, assuming the lookup polynomials $E_i$ are well formed; their well-formedness is proven separately via memory checking below.
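The round structure of sumcheck can be sketched for the simplest case, a single multilinear polynomial given by its evaluation table. This toy (invented names, toy field, fixed stand-in challenges) is far simpler than `prove_arbitrary`, which handles arbitrary-degree combinations of multilinear polynomials, but the per-round check is the same.

```rust
// Minimal sumcheck sketch for one multilinear polynomial over {0,1}^n.
// Illustrative only; not the codebase's implementation.

const P: u64 = 2147483647; // toy field: 2^31 - 1

/// Round polynomial g_j is linear; represent it by (g_j(0), g_j(1)).
fn prove_round(evals: &[u64]) -> (u64, u64) {
    let half = evals.len() / 2;
    let g0 = evals[..half].iter().fold(0, |a, &x| (a + x) % P); // sum, x_j = 0
    let g1 = evals[half..].iter().fold(0, |a, &x| (a + x) % P); // sum, x_j = 1
    (g0, g1)
}

/// Bind variable x_j to challenge r: the evaluation table halves.
fn bind(evals: &[u64], r: u64) -> Vec<u64> {
    let half = evals.len() / 2;
    (0..half)
        .map(|k| {
            let (lo, hi) = (evals[k] as u128, evals[half + k] as u128);
            // lo + r * (hi - lo) mod P
            ((lo + r as u128 * ((hi + P as u128 - lo) % P as u128)) % P as u128) as u64
        })
        .collect()
}

fn main() {
    // Evaluations of a 3-variate multilinear polynomial on {0,1}^3.
    let mut evals: Vec<u64> = vec![3, 1, 4, 1, 5, 9, 2, 6];
    let mut claim: u64 = evals.iter().sum::<u64>() % P;
    let challenges = [5u64, 11, 23]; // stand-ins for verifier randomness

    for &r in &challenges {
        let (g0, g1) = prove_round(&evals);
        // Verifier's round check: g_j(0) + g_j(1) must equal the prior claim.
        assert_eq!((g0 + g1) % P, claim);
        // New claim: g_j(r) for the linear g_j.
        claim = ((g0 as u128 + r as u128 * ((g1 + P - g0) % P) as u128) % P as u128) as u64;
        evals = bind(&evals, r);
    }
    // The final claim must equal the polynomial at the challenge point; in
    // the real protocol this value comes from a PCS opening, not the prover.
    assert_eq!(claim, evals[0]);
    println!("sumcheck verified");
}
```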
`CombinedTableEvalProof::prove`: Create the combined opening proof from the dense PCS.

The valid formation of the $E_i$ polynomials is proven via memory checking. This step gets a bit messy because we combine each dimension of the memory checking sumcheck into a single sumcheck via a random linear combination of the input polynomials. The idea is to use homomorphic multiset hashes to ensure set equality.
- `MemoryCheckingProof::prove()`
  - `Subtables::to_grand_products()`: Create the Reed-Solomon fingerprints from each set
    - `GrandProducts::new()`
      - `GrandProducts::build_grand_product_inputs()`
  - `ProductLayerProof::prove()`: Prove the product (multiset hash) of each set's Reed-Solomon fingerprints
    - `BatchedGrandProductArgument::prove()`
  - `HashLayerProof::prove()`: Prove the Reed-Solomon evaluations directly
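The layered structure a grand product argument works over can be sketched as a binary tree of pairwise products. The sketch below (invented names, toy field) only builds the tree; in the real `BatchedGrandProductArgument`, each internal layer becomes one batched sumcheck instance relating it to the layer below.

```rust
// Sketch of the layered circuit behind a grand product argument: the product
// of all leaves (the Reed-Solomon fingerprints of one set) is computed as a
// binary tree of pairwise products. Illustrative names and toy field only.

const P: u64 = 2147483647; // 2^31 - 1

/// Build all layers of the product tree, from the leaves up to the root.
fn product_layers(leaves: Vec<u64>) -> Vec<Vec<u64>> {
    assert!(leaves.len().is_power_of_two());
    let mut layers = vec![leaves];
    while layers.last().unwrap().len() > 1 {
        let prev = layers.last().unwrap();
        let next: Vec<u64> = prev
            .chunks(2)
            .map(|pair| ((pair[0] as u128 * pair[1] as u128) % P as u128) as u64)
            .collect();
        layers.push(next);
    }
    layers
}

fn main() {
    // Pretend these are the shifted fingerprints of one set (e.g. RS).
    let fingerprints = vec![3u64, 7, 2, 9, 4, 1, 8, 5];
    let layers = product_layers(fingerprints.clone());

    // The root is the multiset hash of the set.
    let root = layers.last().unwrap()[0];
    let direct = fingerprints.iter().fold(1u128, |a, &x| a * x as u128 % P as u128) as u64;
    assert_eq!(root, direct);

    // log2(n) product layers sit above the leaves, one sumcheck per layer.
    assert_eq!(layers.len(), 4);
    println!("grand product = {}", root);
}
```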
See imgs/memory-checking.png for a visual explainer of the memory checking process.