
Cost-Effective Onsite LLM Training and Better Inferencing

aiDAPTIV+ provides a turnkey solution for organizations to train and run inference on large language models on-site at a price they can afford. It enhances foundation LLMs by incorporating an organization’s own data, enabling better decision-making and innovation.
By offloading data from expensive HBM and GDDR memory to cost-effective flash memory, aiDAPTIV+ significantly reduces the number of high-cost, power-hungry GPU cards required.
aiDAPTIV+ allows businesses to scale up or scale out nodes to increase training data size, reduce training time and improve inferencing - even on-premises.

 


 

 

Phison Pascari AI100E aiDAPTIVCache: extreme-endurance GPU memory offload

 

Phison aiDAPTIV+ LLM Training Integrated Solution

 

Seamless Integration with GPU Memory

The optimized middleware extends GPU memory by an additional 320GB (for PCs) up to 8TB (for workstations and servers) using aiDAPTIVCache. This added memory supports LLM training with low latency. Furthermore, the high-endurance design offers an industry-leading 100 DWPD, using a specialized SSD with an advanced NAND correction algorithm.
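To see why offloading matters, it helps to estimate how much memory full fine-tuning actually needs. The sketch below uses a common rule of thumb of roughly 16 bytes per parameter (fp16 weights and gradients plus fp32 Adam optimizer state); this figure is a general community estimate, not a Phison-published number.

```python
# Rough rule-of-thumb sizing for full fine-tuning, and how much must be
# held outside GPU memory (e.g. in a flash-backed cache such as
# aiDAPTIVCache). Assumption: ~16 bytes/parameter for mixed-precision
# training with Adam -- an estimate, not a vendor specification.

GB = 10**9  # decimal gigabyte

def training_memory_gb(num_params: int, bytes_per_param: int = 16) -> float:
    """Approximate memory footprint of full fine-tuning, in GB."""
    return num_params * bytes_per_param / GB

def offload_gb(num_params: int, gpu_memory_gb: float) -> float:
    """Portion that cannot fit in GPU HBM/GDDR and must live elsewhere."""
    return max(0.0, training_memory_gb(num_params) - gpu_memory_gb)

# A 70B-parameter model needs roughly 1120 GB for full fine-tuning;
# a single 80 GB GPU would leave about 1040 GB to offload.
print(offload_gb(70 * 10**9, 80.0))  # -> 1040.0
```

Under this estimate, even a multi-GPU node cannot hold a large model's full training state in HBM alone, which is the gap the flash-backed cache is meant to fill.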

Phison Pascari AI model parameter overview for desktops, workstations and servers

 

Ease of Use

Use the command line or the intuitive all-in-one aiDAPTIVPro Suite to perform LLM training. This AI toolset covers everything from data ingest and RAG to fine-tuning and inference through an intuitive graphical user interface. It deploys in your home, office, classroom or data center using commonplace power and cooling.

  • Transparent drop-in
  • No need to change your AI Application
  • Reuse existing HW or add nodes

 

Train and Inference Any Model Size On-Premises

aiDAPTIV+ allows businesses to scale up or scale out nodes to increase training data size, reduce training time and improve inferencing.

 

Keeps Data in Your Control

aiDAPTIV+ enables LLM training behind your firewall, giving you full control over your private data and peace of mind over data sovereignty compliance.

 

Built-in Memory Management Solution

Experience seamless PyTorch compatibility that eliminates the need to modify your AI application. You can effortlessly add nodes as needed. System vendors have access to the AI100E SSD, middleware library licenses, and full Phison support to facilitate smooth system integration.
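The core idea behind an SSD-backed memory extension can be sketched as a two-tier store: hot buffers stay in fast memory up to a budget, and the least-recently-used ones spill to flash, transparently to the caller. This is an illustrative toy in pure Python, not Phison's actual middleware.

```python
import os
import tempfile
from collections import OrderedDict

# Illustrative sketch (NOT Phison's middleware): a two-tier store that keeps
# hot buffers in a fast tier up to a byte budget and spills the
# least-recently-used ones to files on disk, the same basic idea behind
# extending GPU memory with an SSD-backed cache.

class TieredStore:
    def __init__(self, fast_budget_bytes: int):
        self.budget = fast_budget_bytes
        self.fast = OrderedDict()           # name -> bytes (hot tier, LRU order)
        self.spill_dir = tempfile.mkdtemp() # stands in for the flash tier

    def _spill_path(self, name: str) -> str:
        return os.path.join(self.spill_dir, name)

    def put(self, name: str, data: bytes):
        self.fast[name] = data
        self.fast.move_to_end(name)
        self._evict()

    def get(self, name: str) -> bytes:
        if name in self.fast:
            self.fast.move_to_end(name)     # mark as recently used
            return self.fast[name]
        with open(self._spill_path(name), "rb") as f:
            data = f.read()
        os.remove(self._spill_path(name))
        self.put(name, data)                # promote back to the hot tier
        return data

    def _evict(self):
        while sum(len(v) for v in self.fast.values()) > self.budget:
            name, data = self.fast.popitem(last=False)  # evict LRU entry
            with open(self._spill_path(name), "wb") as f:
                f.write(data)

store = TieredStore(fast_budget_bytes=8)
store.put("a", b"xxxx")   # 4 bytes, fits
store.put("b", b"yyyy")   # 8 bytes total, still fits
store.put("c", b"zzzz")   # over budget: "a" spills to disk
print("a" in store.fast)  # -> False (spilled)
print(store.get("a"))     # -> b'xxxx' (transparently reloaded)
```

Because callers only ever use `put` and `get`, the spill-and-reload machinery stays invisible to them, which is the same property that lets an unmodified PyTorch application benefit from the cache.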

 

Supported Models

  • Llama, Llama 2, Llama 3, CodeLlama
  • Vicuna, Falcon, Whisper, CLIP-Large
  • MetaFormer, ResNet, DeiT-Base, Mistral, TAIDE
  • And many more being continually added

 

Affordable

Offloads expensive HBM and GDDR memory to cost-effective flash memory, eliminating the need for large numbers of high-cost and power-hungry GPU cards. It also keeps AI processing where the data is collected or created, saving data transmission costs to and from the public cloud.

 

High Endurance

  • Industry-leading 100 DWPD with a 5-year warranty
  • SLC NAND with an advanced NAND correction algorithm
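To put the DWPD rating in absolute terms, the total write volume a drive is rated for is simply capacity × DWPD × days of warranty. A quick illustrative calculation:

```python
# What a DWPD (drive writes per day) rating means over the warranty period.
# Illustrative arithmetic using decimal TB/PB; not a vendor endurance sheet.

def lifetime_writes_pb(capacity_tb: float, dwpd: float, years: int = 5) -> float:
    """Total rated write volume in petabytes over the warranty period."""
    return capacity_tb * dwpd * 365 * years / 1000

# A 2 TB drive at 100 DWPD can absorb 200 TB of writes per day --
# about 365 PB over a 5-year warranty.
print(lifetime_writes_pb(2, 100))  # -> 365.0
```

For comparison, mainstream TLC data-center SSDs are typically rated in the range of 1 to 3 DWPD, which is why write-heavy training caches call for an SLC-based design.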

 

 

Phison AI100E Specifications

  • Capacity: 320GB, 1TB, 2TB
  • Form Factor: M.2 2280, U.2
  • Interface: PCIe 4.0 x4
  • Flash Type: SLC NAND Flash
  • Endurance: 80 to 100 DWPD (varies by model)
  • Warranty: 5 Years

 

 

For further details and inquiries, please reach out to the Zstor Sales Team via email at sales@zstor.de.

 

Zstor Office

Gutenbergstr. 18
41564 Kaarst
Deutschland

Start a Conversation

+49-2131-3867640
sales@zstor.de

Copyright

Copyright © 2026 Zstor GmbH - Open Storage. All Rights Reserved.