No new chain
Launch a GPU growth program and product layer on top of Aleph Cloud and LibertAI.
- Public GPU/operator catalog
- Useful-capacity scoring
- Direct-host UX
- LoRA/adapter marketplace
- Usage and royalty dashboards
Ecosystem evolution proposal
A practical plan to grow Aleph Cloud and LibertAI: use the infrastructure that already exists, attract useful GPU supply, add transparent receipts and royalties, and only build a chain if it becomes load-bearing.
The first proposal tried to fuse Bittensor, Akash, Phala, Pearl, and a LoRA marketplace into a new chain. That is too much surface area before demand and supply are proven.
Aleph Cloud already has CCNs, CRNs, VMs, confidential VMs, GPU instances, credits, and payment rails. The next layer should strengthen this ecosystem.
At current token prices, emission rewards are either too small to attract serious operators or too dilutive once they are large enough to matter.
TEE privacy, customer-verifiable attestation, model identity, and output correctness are different claims. Keep the public story precise.
The proposal should tie into these existing pieces instead of replacing them.
Most developers use the LibertAI API. They want a clean endpoint, stable pricing, and no infra complexity.
Advanced users can discover hosts and connect directly. This is the decentralization escape hatch and the right path for stronger privacy/verification.
These are not mutually exclusive forever. The decision is order and evidence.
Launch a GPU growth program and product layer on top of Aleph Cloud and LibertAI.
Make the existing ecosystem more auditable without launching a sovereign chain.
Only if demand proves it: a GPU coordination layer for Aleph + LibertAI.
| Criterion | Path A (product layer) | Path B (auditable receipts) | Path C (coordination chain) |
|---|---|---|---|
| Speed | High | Med | Low |
| GPU pull | Medium, demand-dependent | Medium-high, more trust | High if rewards are credible |
| Token dilution | Low | Low to medium | High risk if done too early |
| Best use | Now | After receipts matter | Only after demand gates |
The network should pay for useful capacity, not for the mere existence of a registered GPU.
Operators with better uptime, latency, price, attestation, and model availability get more LibertAI/Aleph traffic.
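The routing rule above can be sketched as a weighted score. The fields, weights, and thresholds below are illustrative assumptions, not a spec:

```python
from dataclasses import dataclass

@dataclass
class OperatorStats:
    uptime: float          # 0..1 fraction over the scoring window
    p95_latency_ms: float  # 95th-percentile inference latency
    price_per_mtok: float  # price per million tokens, in credits
    attested: bool         # passed TEE attestation checks
    models_served: int     # requested models this operator actually hosts

def capacity_score(s: OperatorStats,
                   max_latency_ms: float = 2000.0,
                   max_price: float = 10.0) -> float:
    """Blend reliability, performance, price, and trust into one routing weight."""
    latency_term = max(0.0, 1.0 - s.p95_latency_ms / max_latency_ms)
    price_term = max(0.0, 1.0 - s.price_per_mtok / max_price)
    attestation_term = 1.0 if s.attested else 0.0
    availability_term = min(1.0, s.models_served / 5.0)
    return (0.35 * s.uptime
            + 0.20 * latency_term
            + 0.15 * price_term
            + 0.15 * attestation_term
            + 0.15 * availability_term)

def route_weights(operators: dict[str, OperatorStats]) -> dict[str, float]:
    """Normalize scores into traffic shares."""
    scores = {name: capacity_score(s) for name, s in operators.items()}
    total = sum(scores.values()) or 1.0
    return {name: score / total for name, score in scores.items()}
```

Normalizing into traffic shares keeps routing proportional rather than winner-take-all, so second-tier operators still receive enough traffic to build a track record.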
Guarantee selected GPU operators a revenue floor for 30-90 days, then require paid utilization to keep earning. Treat the floor as supply acquisition, not a permanent emission.
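The floor mechanics reduce to a simple payout rule: during the guarantee window the operator is topped up to the floor, after it they earn only what paid utilization brings in. Function name and parameters are illustrative:

```python
def floor_payout(earned: float, floor: float, day: int,
                 floor_days: int = 90) -> float:
    """During the guarantee window, top the operator up to the floor;
    after it ends, pay only what paid utilization actually earned."""
    if day <= floor_days:
        return max(earned, floor)
    return earned
```

Because the program pays `max(earned, floor)` rather than `earned + floor`, the subsidy shrinks automatically as real demand arrives.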
Non-transferable points based on served work, uptime, and useful capacity. Future upside without immediate liquid token sell pressure.
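A points ledger in this spirit is easy to prototype. The accrual formula and coefficients below are placeholders; non-transferability is enforced simply by never exposing a transfer operation:

```python
from collections import defaultdict

class OperatorPoints:
    """Non-transferable operator points: accrue from served paid work,
    uptime, and useful capacity. No transfer method exists by design."""

    def __init__(self) -> None:
        self._points: defaultdict[str, float] = defaultdict(float)

    def accrue(self, operator: str, paid_tokens_served: int,
               uptime: float, useful_gpu_hours: float) -> float:
        """Illustrative formula: 10 pts per million paid tokens,
        plus uptime-weighted credit for useful GPU hours."""
        earned = (paid_tokens_served / 1_000_000 * 10.0
                  + uptime * useful_gpu_hours * 0.5)
        self._points[operator] += earned
        return earned

    def balance(self, operator: str) -> float:
        return self._points[operator]
```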
Use partner financing, revenue share, reserved capacity, and customer prepayments to bring GPUs online without relying on emissions.
If used, mining should mean useful-capacity mining: earn by serving paid workloads and passing checks, not synthetic work forever.
Airdrops for registered capacity will attract farmers, idle GPUs, fake supply, and short-term token sellers.
| Mechanism | Token cost | GPU pull | Farmability |
|---|---|---|---|
| Paid demand routing | None | High if demand exists | Low |
| USDC/credit utilization floors | None to low | High | Medium |
| Future operator points | Deferred | Medium-high | Medium |
| Hardware financing | None to low | High | Low |
| Immediate ALEPH/LTAI emissions | High | Medium | High |
| Registration airdrop | High | Low-quality | Very high |
Aleph/LibertAI already cover generic private inference. The differentiated layer is a marketplace for models and adapters that creates new demand for GPUs.
Commodity GPU rental is a price war. Adapters create differentiated demand. Operators earn more when they host the models and adapters users actually want.
Per-token royalties across permissionless hosts are hard to enforce off-chain. If the marketplace works, on-chain settlement and slashing become more valuable.
A chain should be a GPU coordination layer for Aleph + LibertAI. It should not become a second cloud or a weak TAO clone.
Operator identity, linked CRNs/hosts, GPU commitments, region, tier, bond, reputation root.
Epoch rewards weighted mostly by paid usage, successful jobs, uptime, and verified availability.
Fake capacity, forged/stale attestation, receipt fraud, double claims, and reserved-job refusal.
Hashes, roots, signatures, payment references, and model/adapter IDs; never raw prompts or outputs.
Adapter/base-model author splits across permissionless hosts.
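A receipt that commits to a job without revealing it, plus a royalty split, can be sketched in a few lines. All names are illustrative, and HMAC stands in for a real operator signature scheme:

```python
import hashlib
import hmac
import json

def make_receipt(operator_key: bytes, payment_ref: str,
                 model_id: str, adapter_id: str,
                 prompt: bytes, output: bytes) -> dict:
    """Publish only hashes, IDs, and a payment reference; the raw prompt
    and output never leave the host."""
    body = {
        "payment_ref": payment_ref,
        "model_id": model_id,
        "adapter_id": adapter_id,
        "prompt_hash": hashlib.sha256(prompt).hexdigest(),
        "output_hash": hashlib.sha256(output).hexdigest(),
    }
    msg = json.dumps(body, sort_keys=True).encode()
    body["signature"] = hmac.new(operator_key, msg, hashlib.sha256).hexdigest()
    return body

def split_royalty(amount: float, splits: dict[str, float]) -> dict[str, float]:
    """Divide one payment among base-model author, adapter author, and host.
    Fractions in `splits` must sum to 1."""
    assert abs(sum(splits.values()) - 1.0) < 1e-9
    return {party: amount * share for party, share in splits.items()}
```

Anyone holding the receipt and the payment reference can verify the split was honored without ever seeing the underlying prompt.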
Customers should still use credits, API keys, x402, or direct-host payment. Do not make them bridge to call a model.
Cloud substrate economics, CRN/CCN alignment, credit conversion, and infrastructure value capture.
Application-layer alignment, marketplace incentives, useful GPU operator upside, and possibly insurance/slashing for LibertAI jobs.
Spend engineering and incentives only when the previous step produces evidence.
Document customer-verifiable TEE flow, direct-host discovery, managed vs direct-host trust model, GPU onboarding, and x402 examples.
Recruit strategic GPU operators, launch useful-capacity scoring, offer strict floors, and route LibertAI traffic by reliability.
Launch manifests, commission 20-50 adapters, add royalty ledger, and pre-warm top adapters on selected GPUs.
Add signed inference receipts, public operator scorecards, and optional EVM anchoring for roots.
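One way to make "EVM anchoring for roots" concrete: batch signed receipts off-chain and periodically publish only a Merkle root on-chain. A minimal sketch with SHA-256 and duplicate-last padding; a real deployment would pick a scheme compatible with its verifier contract:

```python
import hashlib

def merkle_root(receipt_hashes: list[bytes]) -> bytes:
    """Fold a batch of receipt hashes into one 32-byte root. Individual
    receipts can later be proven against the anchored root with a
    Merkle path, without publishing the batch itself."""
    if not receipt_hashes:
        return hashlib.sha256(b"").digest()
    level = [hashlib.sha256(h).digest() for h in receipt_hashes]
    while len(level) > 1:
        if len(level) % 2:              # odd count: duplicate the last node
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]
```

Anchoring one root per epoch keeps on-chain cost flat no matter how many receipts an operator serves.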
Prototype a chain/appchain only if demand exceeds supply, serious operators want permissionless onboarding, and legal review supports token-linked rewards.
The simplest version to share internally.
Source research lives in ref/proposal-v1/ and ref/landscape/aleph-libertai.md.