Nimble AI OrderBook
The network supports both AI training and inference, each of which relies on three critical resources: compute power (GPUs), data, and developers. Each of these pillars faces its own set of challenges, from accessibility and scalability to efficiency. The inference process, like training, relies on compute power, together with the application calls that invoke it.
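As a rough illustration of this resource model (the names `Resource`, `TrainingJob`, and `InferenceJob` are hypothetical, not part of the Nimble protocol), the two job types and their dependencies might be modeled like this:

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class Resource(Enum):
    """The three resource pillars of the network."""
    COMPUTE = auto()      # GPU power
    DATA = auto()         # datasets consumed by training
    DEVELOPERS = auto()   # people building models and applications


@dataclass
class TrainingJob:
    """Training consumes compute and data."""
    model_id: str
    requires: set = field(
        default_factory=lambda: {Resource.COMPUTE, Resource.DATA}
    )


@dataclass
class InferenceJob:
    """Inference consumes compute and is triggered by an application call."""
    model_id: str
    caller_app: str  # the application making the inference call
    requires: set = field(default_factory=lambda: {Resource.COMPUTE})
```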
We refer to this integrated system as the "AI Orderbook," a one-stop solution that connects all components on-chain, covering both training and inference.
The Nimble Matrix standardizes complex data structures into DSLs that flow across the network. For each Nimble Matrix, the AI Orderbook sequentially handles matching, pricing, and transaction settlement.
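The docs do not specify the matching or pricing rules, so the following is a minimal sketch of a sequential match-price-settle loop, assuming a simplified model in which a Nimble Matrix expresses a compute demand and providers post standing offers. All names here (`NimbleMatrix` fields, `ComputeOffer`, `process`) are illustrative assumptions, not the actual on-chain interface:

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class NimbleMatrix:
    """Hypothetical standardized work unit flowing through the network."""
    task_id: str
    task_type: str     # "training" or "inference"
    gpu_hours: float   # compute demand expressed by the DSL
    max_price: float   # highest total price the requester will pay


@dataclass
class ComputeOffer:
    """A provider's standing offer of GPU capacity."""
    provider_id: str
    gpu_hours: float
    price_per_hour: float


def process(matrix: NimbleMatrix, offers: List[ComputeOffer]) -> Optional[dict]:
    """Sequentially match, price, and settle a single Nimble Matrix."""
    # 1. Matching: find the cheapest offer with enough remaining capacity.
    candidates = [o for o in offers if o.gpu_hours >= matrix.gpu_hours]
    if not candidates:
        return None  # no provider can serve this task
    best = min(candidates, key=lambda o: o.price_per_hour)

    # 2. Pricing: compute the total cost and check the requester's limit.
    cost = best.price_per_hour * matrix.gpu_hours
    if cost > matrix.max_price:
        return None  # price exceeds what the requester will pay

    # 3. Transaction: reserve capacity and record the settlement.
    best.gpu_hours -= matrix.gpu_hours
    return {"task": matrix.task_id, "provider": best.provider_id, "cost": cost}
```

In the actual network these three steps would execute on-chain rather than in a local loop, but the sequence — match, then price, then transact — mirrors the ordering described above.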
For a deeper dive into the design of the AI Orderbook and Nimble Matrix, refer to the Nimble architecture. The following sections outline how compute, data, and developers are orchestrated within the network, and the benefits this brings to participants.