Filecoin — Deep into Precommit2 logic

Trapdoor-Tech
Aug 9, 2020


The Sector computation is divided into two parts: Precommit1 and Precommit2. Together, these two parts form the SDR algorithm, whose computation we have introduced in previous articles.

This article focuses on the computation logic of Precommit2. Precommit2 consists of two parts: 1/ the Column Hash calculation and the construction of its Merkle tree, and 2/ the Replica function and the construction of its Merkle tree. For the detailed logic, please refer to the transform_and_replicate_layers function in rust-fil-proofs/storage-proofs/porep/src/stacked/vanilla/proof.rs.

Column Hash Calculation

The Column Hash calculation is implemented in generate_tree_c. Two versions exist, and which one is used depends on whether the work is done on the CPU or on the GPU:

if settings::SETTINGS.lock().unwrap().use_gpu_column_builder {
    Self::generate_tree_c_gpu::<ColumnArity, TreeArity>(
        layers,
        nodes_count,
        tree_count,
        configs,
        labels,
    )
} else {
    Self::generate_tree_c_cpu::<ColumnArity, TreeArity>(
        layers,
        nodes_count,
        tree_count,
        configs,
        labels,
    )
}

The GPU version is the more complicated one, so let's walk through its logic.

To perform the column calculation, we need to read data from all 11 layers and assemble it into columns. In the GPU version of the Column Hash calculation, the columns are processed batch by batch: a portion of the columns is read and arranged, then sent through a channel to the GPU processing step (Column Hash calculation and Merkle Tree construction). There are essentially two threads in the code: one reads data from the layers and assembles the columns; the other handles the GPU processing. Each batch by default contains 400000 nodes, which is about 135 megabytes (400000 nodes × 11 layers × 32 bytes). Once the column calculation is done, the GPU constructs the Merkle Tree.
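
As a rough sketch of this producer/consumer pattern (not the actual rust-fil-proofs code: Node, hash_column, and nodes_count below are simplified placeholders, and the real implementation hashes each column with Poseidon on the GPU via neptune):

use std::sync::mpsc::sync_channel;
use std::thread;

const LAYERS: usize = 11;
const COLUMN_BATCH_SIZE: usize = 400_000; // FIL_PROOFS_MAX_GPU_COLUMN_BATCH_SIZE default

// Stand-in for a 32-byte node; the real code uses field elements.
type Node = [u8; 32];

// Hypothetical stand-in for the Poseidon column hash that runs on the GPU.
fn hash_column(column: &[Node; LAYERS]) -> Node {
    column[LAYERS - 1] // placeholder only
}

fn main() {
    let nodes_count = 1_000_000; // illustrative; a real sector has many more nodes
    let (tx, rx) = sync_channel::<Vec<[Node; LAYERS]>>(2);

    // Thread 1: read each node's label from all 11 layers and arrange them into columns.
    let producer = thread::spawn(move || {
        let mut start = 0;
        while start < nodes_count {
            let end = (start + COLUMN_BATCH_SIZE).min(nodes_count);
            let batch: Vec<[Node; LAYERS]> = (start..end)
                .map(|_node| [[0u8; 32]; LAYERS]) // real code: read the node from each layer file
                .collect();
            tx.send(batch).expect("receiver dropped");
            start = end;
        }
    });

    // Thread 2 (here the main thread): hash every column -- on the GPU in the real code --
    // and collect the resulting leaves of tree_c.
    let mut tree_c_leaves: Vec<Node> = Vec::with_capacity(nodes_count);
    for batch in rx {
        tree_c_leaves.extend(batch.iter().map(hash_column));
    }
    producer.join().expect("producer thread panicked");
    assert_eq!(tree_c_leaves.len(), nodes_count);
    // ... the Merkle tree for tree_c would then be built over `tree_c_leaves` ...
}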

Replica Calculation

The Replica is the result of encoding the labels of the last layer with the original data. Each time, we encode a portion of the Replica and send the result through a channel to the GPU (for Merkle Tree construction). Every batch by default has 700000 nodes, which is about 22 megabytes (700000 × 32 bytes). Note that each batch is the output of the Encoding step.
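
A minimal sketch of the encoding step, using u64 and wrapping addition as stand-ins for the 32-byte field elements and field addition used by rust-fil-proofs:

// Placeholders: `Node` stands in for a 32-byte field element, and wrapping_add
// stands in for addition in the BLS12-381 scalar field.
type Node = u64;

const TREE_BATCH_SIZE: usize = 700_000; // FIL_PROOFS_MAX_GPU_TREE_BATCH_SIZE default

// encode(key, data) = key + data (field addition); the key is the last-layer label.
fn encode(key: Node, data: Node) -> Node {
    key.wrapping_add(data)
}

fn main() {
    // One batch worth of last-layer labels and original sector data (all zeros here).
    let last_layer_labels = vec![0u64; TREE_BATCH_SIZE];
    let original_data = vec![0u64; TREE_BATCH_SIZE];

    // The encoded batch is the Replica fragment that is sent through a channel
    // to the GPU tree builder to become leaves of tree_r_last.
    let replica_batch: Vec<Node> = last_layer_labels
        .iter()
        .zip(original_data.iter())
        .map(|(key, data)| encode(*key, *data))
        .collect();

    assert_eq!(replica_batch.len(), TREE_BATCH_SIZE);
}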

Merkle Tree Construction

We use the merkletree library to construct the Merkle Trees. This library defines a general Merkle Tree structure and its functions. "General" here means the tree is not necessarily the binary tree we usually see; instead, it is divided into three sections: top, sub, and base.

For example, such a compound tree might have a top tree with one child, sub trees with 3 children each, and base trees with 4 children each. In the Precommit2 calculation, tree_c and tree_r_last are Octrees:

type Tree = storage_proofs::merkle::OctMerkleTree<DefaultTreeHasher>;
pub type OctMerkleTree<H> = DiskTree<H, U8, U0, U0>;
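
As a small illustration of how the three sections compose (the helper below is my own, not part of the merkletree API): the leaf capacity is the product of the base-tree leaves, the sub arity, and the top arity, where an arity of 0 means that section is absent, as with the U0 parameters above.

// Illustrative helper, not part of the merkletree crate: how many leaves a
// compound tree can hold, given the arities of its three sections. An arity of 0
// means the section is not used (as with the U0 sub/top arities above).
fn leaf_capacity(base_arity: usize, base_height: u32, sub_arity: usize, top_arity: usize) -> usize {
    let base_leaves = base_arity.pow(base_height);
    let sub_trees = if sub_arity == 0 { 1 } else { sub_arity };
    let top_trees = if top_arity == 0 { 1 } else { top_arity };
    base_leaves * sub_trees * top_trees
}

fn main() {
    // The example tree: base trees with 4 leaves each, 3 base trees per sub tree,
    // and a top tree with a single sub tree -> 12 leaves in total.
    assert_eq!(leaf_capacity(4, 1, 3, 1), 12);

    // A plain octree such as DiskTree<H, U8, U0, U0>: 8^height leaves, no sub/top sections.
    assert_eq!(leaf_capacity(8, 10, 0, 0), 8usize.pow(10));
}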

GPU Acceleration

In the Precommit2 calculation, both the Column Hash calculation and the Merkle Tree construction use GPU acceleration. The related code is located in the neptune library. Interestingly, this part of the code is not written directly in CUDA or OpenCL, but in a newer, higher-level language: Futhark.

Related Environment Variables

  • FIL_PROOFS_USE_GPU_COLUMN_BUILDER — use the GPU for the Column Hash calculation
  • FIL_PROOFS_MAX_GPU_COLUMN_BATCH_SIZE — the GPU column batch size, 400000 by default
  • FIL_PROOFS_COLUMN_WRITE_BATCH_SIZE — the batch size used when writing out column data, 262144 by default
  • FIL_PROOFS_USE_GPU_TREE_BUILDER — use the GPU to build the Merkle Trees
  • FIL_PROOFS_MAX_GPU_TREE_BATCH_SIZE — the batch size for the Encoding/tree-building step, 700000 by default
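
These settings are read from the environment by rust-fil-proofs through its settings module; the snippet below is only a sketch of the name-to-default mapping (with simplified boolean parsing), not the actual settings code.

use std::env;

// Sketch only: read a numeric setting from the environment, falling back to a default.
fn env_usize(name: &str, default: usize) -> usize {
    env::var(name)
        .ok()
        .and_then(|v| v.parse().ok())
        .unwrap_or(default)
}

// Sketch only: treat "1" or "true" as enabling a feature.
fn env_bool(name: &str) -> bool {
    matches!(env::var(name).as_deref(), Ok("1") | Ok("true"))
}

fn main() {
    let use_gpu_column_builder = env_bool("FIL_PROOFS_USE_GPU_COLUMN_BUILDER");
    let use_gpu_tree_builder = env_bool("FIL_PROOFS_USE_GPU_TREE_BUILDER");
    let max_gpu_column_batch_size = env_usize("FIL_PROOFS_MAX_GPU_COLUMN_BATCH_SIZE", 400_000);
    let column_write_batch_size = env_usize("FIL_PROOFS_COLUMN_WRITE_BATCH_SIZE", 262_144);
    let max_gpu_tree_batch_size = env_usize("FIL_PROOFS_MAX_GPU_TREE_BATCH_SIZE", 700_000);

    println!(
        "gpu column builder: {}, gpu tree builder: {}",
        use_gpu_column_builder, use_gpu_tree_builder
    );
    println!(
        "column batch: {}, write batch: {}, tree batch: {}",
        max_gpu_column_batch_size, column_write_batch_size, max_gpu_tree_batch_size
    );
}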

Summary:

At the Precommit2 stage, the main work is the Column Hash calculation and the Replica generation, together with building the corresponding Merkle Trees. GPU acceleration can be used both for the Column Hash calculation and for building the Merkle Trees. Lastly, the GPU acceleration is implemented in a newer, higher-level language: Futhark.
