Cost-Effective Onsite LLM Training and Better Inferencing
By offloading expensive HBM & GDDR memory to cost-effective flash memory, the need for large numbers of high-cost and power-hungry GPU cards is significantly reduced.
aiDAPTIV+ allows businesses to scale-up or scale-out nodes to increase training data size, reduce training time and improve inferencing, even on-premises.
Phison aiDAPTIV+ LLM Training Integrated Solution
Seamless Integration with GPU Memory
The optimized middleware extends GPU memory by an additional 320GB (for PCs) up to 8TB (for workstations and servers) using aiDAPTIVCache. This added memory is used to support LLM training with low latency. Furthermore, the high endurance feature offers an industry-leading 100 DWPD, utilizing a specialized SSD design with an advanced NAND correction algorithm.
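The tiering idea described above, keeping the hot working set in fast memory while spilling the rest to flash, can be sketched in plain Python using a NumPy memmap as a stand-in for the flash cache. This is a conceptual illustration only, not Phison's middleware; the class and method names are hypothetical.

```python
import os
import tempfile
import numpy as np

class FlashTierCache:
    """Toy two-tier store: a small in-RAM dict backed by a flash-resident memmap.

    Stand-in for the idea of spilling model/optimizer state that does not fit
    in GPU memory out to NVMe flash (all names here are hypothetical).
    """

    def __init__(self, path, n_slots, slot_size, dtype=np.float32):
        self.slot_size = slot_size
        # One large file on the "flash" tier, addressed in fixed-size slots.
        self.backing = np.memmap(path, dtype=dtype, mode="w+",
                                 shape=(n_slots, slot_size))
        self.hot = {}  # slot_id -> in-RAM copy (the fast tier)

    def put(self, slot_id, array):
        # Write-through: keep a hot copy and persist to the flash tier.
        self.backing[slot_id, :] = array
        self.hot[slot_id] = np.array(array)

    def evict(self, slot_id):
        # Drop the hot copy; the data survives on flash.
        self.hot.pop(slot_id, None)

    def get(self, slot_id):
        # Serve from RAM if hot, otherwise fault it back in from flash.
        if slot_id not in self.hot:
            self.hot[slot_id] = np.array(self.backing[slot_id, :])
        return self.hot[slot_id]
```

In a real system the eviction policy, slot sizing and I/O scheduling are where the engineering lives; the sketch only shows the capacity split between the two tiers.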

Ease of Use
Use the command line or the intuitive all-in-one aiDAPTIVPro Suite to perform LLM training. This AI toolset enables everything from data ingest and RAG to fine-tuning and inference through an intuitive graphical user interface, and deploys in your home, office, classroom or data center using commonplace power and cooling.
- Transparent drop-in
- No need to change your AI Application
- Reuse existing HW or add nodes
Train and Inference Any Model Size On-Premises
aiDAPTIV+ allows businesses to scale-up or scale-out nodes to increase training data size, reduce training time and improve inferencing.
Keeps Data in Your Control
Enables LLM training behind your firewall. Gives you full control over your private data and peace of mind over data sovereignty compliance.
Built-in Memory Management Solution
Experience seamless PyTorch compatibility that eliminates the need to modify your AI application, and effortlessly add nodes as needed. System vendors have access to the AI100E SSD, middleware library licenses, and full Phison support to facilitate smooth system integration.
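What "transparent drop-in" means in practice is that the calling application keeps using its objects unchanged while the layer underneath decides which tier they live on. A minimal sketch of that pattern in plain Python (hypothetical names, not the aiDAPTIV+ middleware API):

```python
import os
import pickle
import tempfile

class DiskOffloadProxy:
    """Transparently spills a large object to disk and reloads it on access.

    Illustrates the drop-in idea: calling code uses the proxy exactly like
    the original object, while the proxy manages residency behind the scenes.
    Hypothetical sketch, not the aiDAPTIV+ middleware.
    """

    def __init__(self, obj):
        fd, self._path = tempfile.mkstemp(suffix=".spill")
        with os.fdopen(fd, "wb") as f:
            pickle.dump(obj, f)  # authoritative copy lives on the disk tier
        self._obj = None         # not RAM-resident until first use

    def _load(self):
        if self._obj is None:
            with open(self._path, "rb") as f:
                self._obj = pickle.load(f)
        return self._obj

    def release(self):
        # Drop the in-RAM copy; the disk copy remains authoritative.
        self._obj = None

    def __getattr__(self, name):
        # Forward every other attribute, faulting the object in on demand.
        return getattr(self._load(), name)
```

A caller that wraps its data as `DiskOffloadProxy([1, 2, 3])` can still invoke list methods such as `.count()` without any code changes, which is the property the "no need to change your AI application" bullet is claiming at a much larger scale.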
Supported Models
- Llama, Llama-2, Llama-3, CodeLlama
- Vicuna, Falcon, Whisper, Clip Large
- Metaformer, Resnet, Deit base, Mistral, TAIDE
- And many more being continually added
Affordable
Offloads expensive HBM and GDDR memory to cost-effective flash memory, eliminating the need for large numbers of high-cost and power-hungry GPU cards. It also keeps AI processing where the data is collected or created, saving the cost of transmitting data to and from the public cloud.
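The GPU-count argument can be made concrete with back-of-envelope arithmetic. The figures below are hypothetical, not from the source: without offload, GPUs must be provisioned for memory capacity rather than compute.

```python
def gpus_needed(state_gb, gpu_mem_gb):
    """Minimum GPU count if all training state must fit in GPU memory.

    Hypothetical sizing sketch: ceiling division of state size by per-GPU memory.
    """
    return -(-state_gb // gpu_mem_gb)  # ceiling division

# Hypothetical example: 700 GB of model + optimizer state, 80 GB per GPU.
baseline = gpus_needed(700, 80)      # capacity-bound: 9 GPUs
with_offload = gpus_needed(80, 80)   # flash holds the remaining 620 GB: 1 GPU
```

The point of the sketch is only that once flash absorbs the capacity requirement, the GPU count is set by compute needs instead of memory size.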
High Endurance
Industry-leading 100 DWPD with 5-year warranty
SLC NAND with advanced NAND correction algorithm
| PCN | AI100E | | |
|---|---|---|---|
| Capacity | 320 GB | 1 TB | 2 TB |
| Form Factor | M.2 2280 | M.2 2280, U.2 | |
| Interface | PCIe 4.0 x4 | | |
| Flash Type | SLC NAND Flash | | |
| Endurance | 80 DWPD | 100 DWPD | |
| Warranty | 5 Years | | |
For further details and inquiries, please reach out to the Zstor Sales Team via email at:

