You can run most open-source models available on the Ollama platform, including models with up to 16B parameters. Some users have successfully run models of up to 34B parameters, but performance at that size varies with the use case. For reliable inference speed, we recommend staying at 16B parameters or below; that said, feel free to experiment with larger models if you wish.
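As a quick way to check which models are on your server and whether they fall within that guidance, the minimal sketch below queries Ollama's local REST API. It assumes Ollama is already installed and listening on its default port 11434, and uses Ollama's documented /api/tags endpoint, which lists pulled models with their on-disk size.

```python
import requests

# Ollama's local API listens on port 11434 by default.
OLLAMA_URL = "http://localhost:11434"

# /api/tags lists the models currently pulled onto this server.
resp = requests.get(f"{OLLAMA_URL}/api/tags", timeout=10)
resp.raise_for_status()

for model in resp.json().get("models", []):
    size_gb = model["size"] / 1e9  # size is reported in bytes
    print(f"{model['name']}: {size_gb:.1f} GB on disk")
```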
The ideal plan for your LLM depends on several factors, including the specific model, its quantization, the number of concurrent requests, and the inference latency you expect. For 7B/8B models where inference speed is not critical, we recommend our Standard VPS Server. For 16B models, choose our Pro VPS Server. For complex reasoning, large-scale inference, or mission-critical applications with stricter requirements, select our Enterprise VPS Server.
We use Kernel-based Virtual Machine (KVM) virtualization, a robust full-virtualization technology that delivers near-native performance and complete isolation between all VPS instances on our platform.
We do not currently offer VPS servers with GPU acceleration, but we understand that some workloads require this level of power. If yours does, we can offer a dedicated server solution; please contact us with your specific requirements for a custom quote.
Certainly! Your VPS is a full Linux environment, allowing you to install and configure software as needed. For security and compliance, ensure all installations adhere to our Acceptable Use Policy.
Yes. You can point your own domain at your VPS by updating the domain's DNS records (e.g., an A record with your VPS IP address). If you wish to update the reverse DNS (rDNS) for your IP address, contact our support team.
Yes. We offer discounts for annual plans. Check the prices on our website or contact our sales team for details on bulk pricing or custom contracts.
Our VPS servers are unmanaged: you are responsible for administering your server, while we remain responsible for any hardware issues.
To use our service, you should have basic technical knowledge: logging into a Linux server via SSH, running commands in the Linux shell (e.g., managing passwords or installing software), and ideally some familiarity with Ollama.
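If you are new to Ollama, the following sketch shows what basic usage looks like once you are logged in: it sends a single prompt to the Ollama API running on your VPS. It assumes a model has already been pulled (here "llama3" is just an example name; substitute whatever you run) and uses Ollama's documented /api/generate endpoint on the default port 11434.

```python
import requests

OLLAMA_URL = "http://localhost:11434"

# Send one non-streaming prompt to a model that has already been pulled.
payload = {
    "model": "llama3",  # example model name; replace with your own
    "prompt": "Explain KVM virtualization in one sentence.",
    "stream": False,  # return the full completion in a single JSON response
}

resp = requests.post(f"{OLLAMA_URL}/api/generate", json=payload, timeout=120)
resp.raise_for_status()

print(resp.json()["response"])
```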
The organizations that build open LLMs usually do not offer direct technical support, though many provide documentation or community forums. For general troubleshooting, try platforms like Reddit or Stack Overflow; for complex issues, consider hiring an AI/ML consultant.
We provide a 7-day money-back guarantee for our VPS servers. If you are not satisfied with our service, contact us within 7 days of purchase and we’ll issue a full refund.
Yes. All plans adhere to our Terms of Service and Acceptable Use Policy.