Share Chain

A P2Pool-style side-chain that records every unit of work.

What is it?

Like Monero's P2Pool, the share chain is a lightweight side-chain that records compute shares — proof that a node performed a forward pass. Shares are bundled into blocks every ~10 seconds by the daemon node.

Each block contains a list of ComputeShare entries, the block's hash, the parent hash, the miner's node ID, and a timestamp.

ComputeShare structure

{
  "node_id": "0xAbC123...",
  "shard_index": 1,
  "tokens_processed": 42,
  "weight": 1.0,
  "timestamp": 1739500000.0
}

The weight field determines payout proportionality. Default weight is 1.0 per forward pass; vision shards may have different weights.
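A minimal sketch of how weighted shares could translate into payout fractions (illustrative only; the function and field names here are assumptions, not the project's API):

```python
# Illustrative sketch: aggregate per-node share weights and normalize
# to payout fractions for one block of ComputeShare entries.
from collections import defaultdict

def payout_fractions(shares):
    """Sum per-node weights and normalize to payout fractions."""
    totals = defaultdict(float)
    for s in shares:
        totals[s["node_id"]] += s["weight"]
    grand = sum(totals.values())
    return {node: w / grand for node, w in totals.items()}

shares = [
    {"node_id": "0xA", "weight": 1.0},
    {"node_id": "0xA", "weight": 1.0},
    {"node_id": "0xB", "weight": 2.0},  # e.g. a heavier vision shard pass
]
# Node 0xA did two unit-weight passes, 0xB one double-weight pass.
print(payout_fractions(shares))  # -> {'0xA': 0.5, '0xB': 0.5}
```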

Share-chain progression: blocks link by parent hash (N-2 -> N-1 -> N tip -> N+1 pending), and each tip block aggregates compute-share records from active nodes (node ID, shard index, token count, weight, timestamp).
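The parent-hash linking can be sketched as follows (field names and serialization are assumptions, not the daemon's actual block format):

```python
# Sketch of parent-hash chaining: each block's hash commits to its parent,
# so rewriting any historical block changes every descendant hash.
import hashlib
import json

def block_hash(parent_hash, miner_id, timestamp, shares):
    payload = json.dumps(
        {"parent": parent_hash, "miner": miner_id,
         "ts": timestamp, "shares": shares},
        sort_keys=True,
    ).encode()
    return hashlib.sha256(payload).hexdigest()

genesis = "00" * 32
b1 = block_hash(genesis, "0xAbC123", 1739500000.0, [])
b2 = block_hash(b1, "0xAbC123", 1739500010.0, [])  # links to b1
```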

Verification Tickets

Random spot-checks that keep nodes honest without checking every computation.

1. Sampling

During prefill forward passes, each node randomly decides whether to record a verification ticket. Default sampling rate: 5%. This is configurable via --sampling-rate.
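The per-pass decision can be sketched like this (the RNG source and flag plumbing are assumptions):

```python
# Sketch of the per-forward-pass sampling decision at the default 5% rate.
import random

SAMPLING_RATE = 0.05  # default; overridable via --sampling-rate

def should_record_ticket(rng=random):
    """Decide, per prefill forward pass, whether to record a ticket."""
    return rng.random() < SAMPLING_RATE
```

Because the draw happens independently on every pass, a node cannot predict which computations will be checked.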

2. Ticket Contents

A ticket records: shard index, input activation tensor (the hidden states that entered this shard), and output activation hash (SHA-256 of the hidden states that left this shard). No user identity, session ID, or original prompt text is included.
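A hypothetical ticket shape matching the fields described above (the real wire format may differ):

```python
# Sketch of a verification ticket: shard index, raw input activations,
# and a SHA-256 hash of the output activations. Deliberately carries no
# user identity, session ID, or prompt text.
import hashlib
from dataclasses import dataclass

@dataclass
class VerificationTicket:
    shard_index: int
    input_activations: bytes  # hidden states that entered this shard
    output_hash: str          # SHA-256 of hidden states that left it

def make_ticket(shard_index, input_activations, output_activations):
    return VerificationTicket(
        shard_index=shard_index,
        input_activations=input_activations,
        output_hash=hashlib.sha256(output_activations).hexdigest(),
    )
```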

3. Submission

Tickets are submitted to the registry via the SubmitTicket gRPC method. The registry queues them for the verifier to pull. Tickets are stored without any association to the client session.

Context-free by design: Verification tickets contain only the mathematical inputs and outputs of a shard computation. The verifier has no way to determine who sent the original request or what the user asked.
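Because the verifier holds the same shard weights, it can recompute the output from the ticket's input activations and compare hashes. A minimal sketch, where `forward_pass` stands in for the real shard computation:

```python
# Sketch of verifier-side ticket checking: rerun the shard on the recorded
# input activations and compare the hash of the result against the claim.
import hashlib

def verify_ticket(forward_pass, shard_index, input_activations, claimed_hash):
    recomputed = forward_pass(shard_index, input_activations)
    return hashlib.sha256(recomputed).hexdigest() == claimed_hash
```

Note that nothing in this check requires knowing who issued the request or what was asked.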

Output Privacy

Full output-stage 2PC replaces the retired server-side sidecar path.

Output privacy path: the penultimate compute shard (N-1) hands off to output MPC nodes A and B with bound output-2PC metadata; the MPC pair returns a sampled-token artifact to the client, bound to the session, step, key, and payload hash to resist replay and substitution.

Mode semantics

  • off: plaintext token path
  • decode_client_sample: encrypted top-k for client sampling
  • full_output_2pc: output MPC A/B artifact path
  • server_sample is retired and rejected at runtime
  • Applies to both text-only and multimodal token generation stages
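A validation sketch consistent with the semantics above (the actual flag plumbing and error messages are assumptions):

```python
# Sketch of output-privacy mode validation: the three live modes pass
# through, and the retired server_sample mode is rejected at runtime.
VALID_MODES = {"off", "decode_client_sample", "full_output_2pc"}

def validate_output_privacy_mode(mode: str) -> str:
    if mode == "server_sample":
        raise ValueError("server_sample is retired; use full_output_2pc")
    if mode not in VALID_MODES:
        raise ValueError(f"unknown output-privacy mode: {mode}")
    return mode
```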

Leakage hardening

  • Session/step/key binding checks on request and response payloads
  • Payload hash is bound into forward attestation and racing checks
  • Adversarial tests cover replay, substitution, and payload mutation

To run the adversarial and leakage suites locally:

source .venv/bin/activate
python -m pytest tests/test_mpc_output_adversarial.py \
  tests/test_mpc_output_leakage.py -q

Vision MPC note: current vision privacy includes MPC handling for image-side share processing and text-stage output privacy controls. Full parity coverage across all vision internals remains an active hardening area.

Registry Admission + Control Auth

How staking identity, node identity, and lifecycle RPCs are now bound and verified.

1. Signed Registration Binding

For compute/vision/MPC/daemon in on-chain mode, registration includes an EVM signature over a canonical payload containing node ID, address, model/shard, share-signing key, timestamp, and nonce. The registry recovers the signer and requires it to match node_id.
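A sketch of the canonical payload construction (field ordering and encoding are assumptions; the actual EVM signature and signer recovery use keccak/secp256k1 and are omitted here):

```python
# Sketch: a deterministic byte serialization of the registration fields,
# so signer and registry hash and verify the exact same bytes.
import json

def canonical_registration_payload(node_id, address, model, shard,
                                   share_signing_key, timestamp, nonce):
    return json.dumps({
        "address": address,
        "model": model,
        "node_id": node_id,
        "nonce": nonce,
        "shard": shard,
        "share_key": share_signing_key,
        "timestamp": timestamp,
    }, sort_keys=True, separators=(",", ":")).encode()
```

Sorting keys and fixing separators makes the serialization deterministic, which is what lets the registry reproduce the signed bytes and recover the signer.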

2. Replay Defense

Registration, heartbeat, and unregister all include timestamp + nonce inputs. The registry enforces max clock skew and one-time nonce usage to reject replayed control-plane messages. To reduce abuse surface, auth nonce caches are bounded and auth RPCs are rate-limited per peer/node.
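The replay checks described above can be sketched as a max-skew test plus a bounded one-time nonce cache (the limits below are assumptions, not the registry's configured values):

```python
# Sketch of replay defense: reject stale/future-dated timestamps and
# previously seen nonces, with the nonce cache bounded by eviction.
import time
from collections import OrderedDict

MAX_SKEW_SECONDS = 30       # assumed max clock skew
NONCE_CACHE_LIMIT = 10_000  # assumed cache bound

class ReplayGuard:
    def __init__(self):
        self._seen = OrderedDict()  # nonce -> first-seen time

    def accept(self, timestamp: float, nonce: str, now=None) -> bool:
        now = time.time() if now is None else now
        if abs(now - timestamp) > MAX_SKEW_SECONDS:
            return False  # outside allowed clock skew
        if nonce in self._seen:
            return False  # replayed nonce
        self._seen[nonce] = now
        while len(self._seen) > NONCE_CACHE_LIMIT:
            self._seen.popitem(last=False)  # evict oldest entry
        return True
```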

3. Ongoing Stake Eligibility

Admission is not one-and-done. During operation, ineligible staked node types are filtered from discovery and model-health views, and can be evicted during heartbeat checks until eligibility is restored.

Fraud Proofs

How the verifier detects and reports cheating.

Fraud-proof lifecycle: compute node submits a signed suspicion report -> registry validates signatures and evidence bindings -> confirmed anomalies trigger slashing policy; non-confirmed reports are retained for audit history only.

What counts as cheating?

  • Returning random activations instead of computing the actual forward pass
  • Using lower-precision weights to save compute
  • Injecting noise or bias into outputs
  • Skipping layers in the transformer

Why it works

  • Cheating nodes don't know which forward passes will be sampled
  • The verifier holds the same shard weights and can exactly reproduce results
  • Even a small corruption produces a completely different output hash
  • The cost of getting caught (slashing) far exceeds the cost of computing honestly
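The "completely different output hash" point is just SHA-256 avalanche behavior, easy to see directly:

```python
# Demonstration: a single-bit corruption of the output activations yields
# an unrelated hash, so there is no "close enough" for a cheating node.
import hashlib

honest = bytes(1024)          # toy activation buffer (all zeros)
cheat = bytearray(honest)
cheat[0] ^= 1                 # flip one bit

h1 = hashlib.sha256(honest).hexdigest()
h2 = hashlib.sha256(bytes(cheat)).hexdigest()
assert h1 != h2
```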

On-Chain Settlement

Share windows are posted, disputed if needed, then finalized with safety gates.

1. Build a Settlement Window

The share-chain emits a SettlementSummary for each window (weighted shares per node, token usage, block range, settlement hash). The registry only accepts windows with validated shares and required audits, then computes payout splits for that settlement.

Racing winners are reported to the registry and signed as coordinator receipts. During settlement, winner nodes get a bounded share-weight bonus while all other validated shares remain payable.

Winner reports are submitted in batches to reduce control-plane overhead under high token rates, while preserving per-hop receipt signing and replay-safe idempotency semantics.
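The split described above, with a bounded winner bonus on top of base weighted shares, could look like this (the bonus multiplier is an assumption, not the registry's actual policy):

```python
# Sketch of a settlement payout split: every validated share is payable,
# and racing winners get a bounded share-weight bonus (assumed +10%).
def settlement_amounts(weighted_shares, winners, pool, bonus=0.10):
    """weighted_shares: node_id -> validated share weight for the window."""
    boosted = {
        node: w * (1.0 + bonus if node in winners else 1.0)
        for node, w in weighted_shares.items()
    }
    total = sum(boosted.values())
    return {node: pool * w / total for node, w in boosted.items()}
```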

2. Post On-Chain + Challenge Window

With escrow enabled (default path), the registry posts the payout map via postSettlement(hash, nodes, amounts). This opens the configurable challenge window (challenge_window_seconds, default: 60s). During this period, settlements can be challenged and fraudulent nodes can be slashed.

3. Finalize When Safe

After the challenge window expires, the registry finalizes using finalizeSettlement(hash) when verifier/daemon safety gates are healthy. Rewards are allocated from measured token usage according to weighted shares (including bounded winner bonuses), and become claimable from escrow.

Why this flow? The registry avoids per-request on-chain overhead by batching work into settlement windows, while still preserving accountability through disputes, slashing, and delayed finalization.

Gossip Discovery

How clients find clusters and registries find each other.

Seed List

Clients start with a hardcoded or configurable seed list of known registry addresses. This bootstraps the initial connection, similar to Bitcoin's DNS seeds.

seeds:
  - registry.cluster-a.example:50050
  - registry.cluster-b.example:50050
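Bootstrapping is then just walking the seed list until a registry answers. A sketch, where `connect` stands in for the real gRPC channel setup:

```python
# Sketch of seed-list bootstrap: try each known registry in order and
# return the first live connection.
def bootstrap(seeds, connect):
    for addr in seeds:
        try:
            return connect(addr)
        except ConnectionError:
            continue  # dead seed; try the next one
    raise RuntimeError("no seed registry reachable")
```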

Peer Exchange

Registries periodically gossip with each other via the ExchangePeers RPC, sharing their list of known registries. This allows the network to grow organically without a central directory.

# Automatic peer discovery
Registry A <--gossip--> Registry B
Registry B <--gossip--> Registry C
# A now knows about C
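The merge step of each gossip round is a simple set union, which is what makes knowledge propagate transitively (assuming `ExchangePeers` exchanges full peer lists, as described above):

```python
# Sketch of one ExchangePeers round: both registries leave with the union
# of their peer sets, so A learns C via B without contacting C directly.
def exchange_peers(a_peers, b_peers):
    merged = a_peers | b_peers
    return merged, merged.copy()

a = {"registry.cluster-a.example:50050", "registry.cluster-b.example:50050"}
b = {"registry.cluster-b.example:50050", "registry.cluster-c.example:50050"}
a, b = exchange_peers(a, b)
# A now knows about cluster-c.
```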