Why This Matters

🚫

Censorship Resistance

Centralized AI providers filter, log, and refuse requests at will. UNFED AI distributes inference across independent operators — no single entity can censor or shut down the network.

👁

Privacy by Design

Your prompts never exist in plaintext on any single node. Multi-Party Computation at the embedding layer means the most sensitive step of inference, turning your words into numbers, is cryptographically protected.

No Single Point of Failure

Models are sharded across multiple nodes run by different operators. If one node goes down, the network can route around it. No company can pull the plug.
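
The routing-around-failures step can be sketched as per-shard failover: each stage of the model pipeline has several interchangeable operators, and a request falls through to the next live one. A minimal sketch (the shard map and operator names are hypothetical, not real network data):

```python
# Hypothetical shard map: each pipeline stage lists interchangeable operators.
SHARD_OPERATORS = {
    0: ["alice", "bob"],    # embedding stage (runs as an MPC pair in practice)
    1: ["carol", "dave"],
    2: ["erin", "frank"],
}

DOWN = {"carol"}  # operators currently unreachable (simulated outage)

def route(shard_operators, down):
    """Pick one live operator per shard; fail only if an entire shard is dark."""
    path = []
    for shard, ops in sorted(shard_operators.items()):
        live = [op for op in ops if op not in down]
        if not live:
            raise RuntimeError(f"no live operator for shard {shard}")
        path.append(live[0])
    return path

# With carol down, shard 1 falls through to dave; the request still completes.
assert route(SHARD_OPERATORS, DOWN) == ["alice", "dave", "erin"]
```

A single failed operator only degrades one shard's candidate pool; the request fails only when every operator for some shard is offline at once.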

Centralized AI vs UNFED AI

The fundamental difference in who controls your data.

Centralized AI (status quo)

  • One company holds the full model
  • Your prompts are logged and stored
  • Content policies filter what you can ask
  • Single point of failure and censorship
  • Pricing is opaque and monopolistic
  • Provider can shut down service at any time

UNFED AI

  • Model split across independent operators
  • No node sees the full input (MPC + sharding)
  • No content filtering at the protocol level
  • Decentralized — no single point of failure
  • Transparent per-token pricing set by operators
  • Open-source, permissionless participation

Key Properties

🔒

No node sees the full input

The embedding layer (shard 0) runs as an MPC pair. Your token IDs are secret-shared before any computation happens. Subsequent shards only see intermediate activations, not your original text.
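
Because an embedding lookup is linear (a one-hot vector times the embedding table), each MPC node can compute on its additive share locally, and neither node ever sees the token ID. A toy sketch of that property, assuming a 5-token vocabulary and integer-encoded embeddings (real protocols use fixed-point encodings and production-grade MPC, not this simplification):

```python
import secrets

P = 2**61 - 1  # toy prime modulus for the share arithmetic

def share(vec):
    """Additively secret-share an integer vector mod P."""
    r = [secrets.randbelow(P) for _ in vec]
    return r, [(x - ri) % P for x, ri in zip(vec, r)]

# Toy public embedding table: 5-token vocab, 3-dim integer embeddings.
E = [[v * 3 + d for d in range(3)] for v in range(5)]

token_id = 3
one_hot = [1 if i == token_id else 0 for i in range(5)]

# The client splits the one-hot vector; neither share alone reveals token_id.
s0, s1 = share(one_hot)

def local_lookup(s):
    # Embedding lookup is one_hot . E, which is linear, so each MPC node
    # computes on its own share without learning the token.
    return [sum(s[v] * E[v][d] for v in range(5)) % P for d in range(3)]

y0, y1 = local_lookup(s0), local_lookup(s1)

# Recombining the output shares (which no single machine does) yields E[token_id].
recovered = [(a + b) % P for a, b in zip(y0, y1)]
assert recovered == E[token_id]
```

Each share on its own is uniformly random mod P; only the sum of the two nodes' outputs reconstructs the embedding row that downstream shards consume as activations.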

Economic incentives align nodes

Nodes must stake tokens on-chain to participate. A verifier network randomly spot-checks computations. Cheating results in automatic slashing — dishonesty costs real money.
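
The incentive loop can be pictured as stake minus slashes: a verifier re-runs sampled jobs against a reference computation and burns stake on any mismatch. All parameters below (stake size, slash fraction, audit cadence) are illustrative, not the protocol's actual values:

```python
STAKE = 1_000         # tokens a node locks on-chain to serve inference (illustrative)
SLASH_FRACTION = 0.5  # portion of remaining stake burned per failed check
AUDIT_EVERY = 10      # sampled deterministically here; a real verifier samples at random

class Node:
    def __init__(self, honest):
        self.stake = STAKE
        self.honest = honest

    def compute(self, x):
        # A dishonest node skips the real work and returns a bogus result.
        return x * 2 if self.honest else -1

def spot_check(node, x, job_index):
    """Re-run sampled jobs against a reference and slash on mismatch."""
    if job_index % AUDIT_EVERY == 0 and node.compute(x) != x * 2:
        node.stake -= int(node.stake * SLASH_FRACTION)

honest, cheater = Node(honest=True), Node(honest=False)
for job in range(100):
    spot_check(honest, job, job)
    spot_check(cheater, job, job)

# Ten audits over 100 jobs wipe out most of the cheater's stake;
# the honest node's stake is untouched.
assert honest.stake == STAKE and cheater.stake < STAKE
```

Even a low audit rate makes sustained cheating ruinous, because each detection compounds against the remaining stake.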

🌐

Cluster marketplace competition

Registries (clusters) compete like Monero mining pools. Each operator sets their own pricing, staking rules, and model offerings. Nodes and clients choose freely based on economics and reputation.
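
Client-side cluster selection might then reduce to a simple filter over advertised price and observed reputation. The fields, thresholds, and cluster names below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Cluster:
    name: str
    price_per_token: float  # operator-set, advertised on-chain (illustrative)
    reputation: float       # 0..1, e.g. fraction of passed spot-checks

def pick_cluster(clusters, max_price, min_reputation=0.95):
    """Choose the cheapest cluster that clears the reputation floor."""
    eligible = [c for c in clusters
                if c.price_per_token <= max_price and c.reputation >= min_reputation]
    return min(eligible, key=lambda c: c.price_per_token, default=None)

clusters = [
    Cluster("alpha", price_per_token=0.002, reputation=0.99),
    Cluster("beta",  price_per_token=0.001, reputation=0.80),  # cheap but flaky
    Cluster("gamma", price_per_token=0.003, reputation=0.999),
]

best = pick_cluster(clusters, max_price=0.005)
assert best.name == "alpha"  # beta is cheaper but fails the reputation floor
```

Because every client runs its own policy like this, clusters that overprice or fail spot-checks lose traffic automatically, which is the marketplace pressure the paragraph above describes.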