Frequently Asked Questions
Does PanaAI provide machine learning pipelines for training large language models (LLMs)?
Yes. We provide NVIDIA AI Enterprise (NVAIE) for end-to-end development, which includes NVIDIA NeMo. NeMo is an end-to-end platform with tools for data curation, distributed training, model customization, accelerated inference, retrieval-augmented generation (RAG), and guardrails.
Do you support multi-node parallel training?
Yes, we support multi-node parallel training. Shakti Cloud supports NeMo-Megatron for training large language models across multiple nodes.
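In multi-node data-parallel training, every GPU across every node is assigned a unique global rank. As a generic illustration (not specific to Shakti Cloud or NeMo-Megatron), the rank bookkeeping that a distributed launcher performs can be sketched as:

```python
def global_rank(node_rank: int, local_rank: int, gpus_per_node: int) -> int:
    """Unique 0-based rank of a GPU across all nodes."""
    return node_rank * gpus_per_node + local_rank

def world_size(num_nodes: int, gpus_per_node: int) -> int:
    """Total number of participating GPU workers in the job."""
    return num_nodes * gpus_per_node

# Example: 4 nodes with 8 GPUs each gives 32 workers;
# GPU 3 on node 2 has global rank 2*8 + 3 = 19.
assert world_size(4, 8) == 32
assert global_rank(2, 3, 8) == 19
```

Launchers such as torchrun compute these values for you and expose them to each worker process via environment variables.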
Can cloud-based AI hardware be utilized?
Certainly. Cloud-based AI hardware is a cost-effective alternative to procuring or building your own infrastructure. Shakti Cloud offers GPU-powered, AI-specific hardware solutions on a flexible, on-demand basis.
Can I resize a GPU instance?
Yes, GPU instances can be upgraded to a higher model after a reboot. However, they cannot be downgraded to a lower model.
How does GPU cloud computing differ from traditional cloud computing?
Traditional cloud computing typically runs on CPUs, whereas GPU cloud computing leverages GPUs, whose massively parallel architecture suits high-performance tasks such as model training, inference, and scientific computing.
How do I choose the right GPU instance for my AI project?
Match the instance to your workload: consider GPU memory (does your model fit?), compute throughput, interconnect bandwidth for multi-GPU jobs, and storage capacity for datasets and checkpoints.
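One common sizing rule of thumb is to check whether the model weights fit in GPU memory: fp16 inference needs roughly 2 bytes per parameter, plus headroom for activations. A minimal sketch, using a hypothetical instance catalog (the names and specs below are illustrative, not a real price list):

```python
# Hypothetical catalog: total GPU memory per instance type (illustrative only).
CATALOG = {
    "gpu.small":  {"gpu_mem_gb": 16, "gpus": 1},
    "gpu.medium": {"gpu_mem_gb": 40, "gpus": 1},
    "gpu.large":  {"gpu_mem_gb": 80, "gpus": 8},
}

def pick_instance(model_params_billions: float, bytes_per_param: int = 2) -> str:
    """Pick the smallest instance whose total GPU memory fits the model weights.

    Rule of thumb: fp16 weights take ~2 bytes/parameter; we add 20% headroom
    for activations and framework overhead. Billions of bytes ~ GB.
    """
    need_gb = model_params_billions * bytes_per_param * 1.2
    by_total_mem = sorted(CATALOG.items(),
                          key=lambda kv: kv[1]["gpu_mem_gb"] * kv[1]["gpus"])
    for name, spec in by_total_mem:
        if spec["gpu_mem_gb"] * spec["gpus"] >= need_gb:
            return name
    raise ValueError("no single instance fits; consider multi-node sharding")

# A 7B-parameter model needs ~16.8 GB, so the 16 GB instance is too small:
print(pick_instance(7))   # gpu.medium
print(pick_instance(70))  # gpu.large
```

Training needs considerably more memory than inference (optimizer states and gradients), so treat this only as a lower bound.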
How do I monitor and manage the performance of my GPU instance?
Track GPU utilization, memory usage, temperature, and throughput with tools such as nvidia-smi or NVIDIA DCGM, and set alerts on sustained under- or over-utilization so you can right-size your instance.
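As a minimal sketch, nvidia-smi can emit machine-readable stats that a monitoring script polls. The parser below is testable offline; running the actual query requires an instance with the NVIDIA driver installed:

```python
import subprocess

# Real nvidia-smi flags: CSV output without header text or units.
QUERY = ["nvidia-smi",
         "--query-gpu=utilization.gpu,memory.used,memory.total",
         "--format=csv,noheader,nounits"]

def parse_gpu_stats(csv_text: str) -> list[dict]:
    """Parse nvidia-smi CSV output into one dict per GPU."""
    stats = []
    for line in csv_text.strip().splitlines():
        util, used, total = (float(x) for x in line.split(","))
        stats.append({"util_pct": util,
                      "mem_used_mib": used,
                      "mem_total_mib": total})
    return stats

def read_gpu_stats() -> list[dict]:
    """Query the local GPUs (only works on a machine with the NVIDIA driver)."""
    out = subprocess.run(QUERY, capture_output=True, text=True, check=True).stdout
    return parse_gpu_stats(out)

# Example of the CSV shape nvidia-smi produces with the flags above:
sample = "87, 32010, 40960\n12, 1024, 40960"
print(parse_gpu_stats(sample)[0]["util_pct"])  # 87.0
```

For cluster-wide monitoring, the same data can be exported to dashboards via NVIDIA DCGM rather than polled per instance.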
Can I install my own software on the GPU Cluster?
Yes, you can install your own software on our GPU clusters, bare-metal instances, and serverless GPUs.