Q: Machine Specification and Cost
What are the machine specs of each tier (Mini/Medium/Large/Large Pro/Ultra)?
Also, how many credits does each cost per hour?
Right now I subscribe to a Windows VM with 28GB VRAM for 58 EUR/month. I wonder whether Mimic PC would be a good product to add to my toolset, especially if I need to run resource-intensive LLMs from Hugging Face.

Claire_MimicPC
Sep 9, 2024
A: Here's a detailed breakdown of the machine specifications and hourly costs for the Mimic PC tiers based on your needs:
Mini
Specs: CPU 4 cores, 16GB RAM, No VRAM
Cost: $0.19/h
Use case: Best for basic tasks and non-GPU-based workloads.
Medium
Specs: NVIDIA T4, 16GB VRAM, 16GB RAM
Cost: $0.49/h
Use case: Suitable for moderate workloads, AI tasks, and mid-range applications.
Large
Specs: NVIDIA A10G, 24GB VRAM, 16GB RAM
Cost: $0.99/h
Use case: Ideal for machine learning, model training, and high-performance applications.
Large Pro
Specs: NVIDIA A10G, 24GB VRAM, 32GB RAM
Cost: $1.19/h
Use case: Perfect for demanding AI workloads and professional-grade tasks.
Ultra
Specs: NVIDIA L40S, 48GB VRAM, 32GB RAM
Cost: $2.19/h
Use case: Designed for the most resource-intensive applications, including large LLMs such as Llama 3.1 70B.
Comparison with your current setup:
Your current Windows VM has 28GB VRAM for 58 EUR/month (~$63 USD/month). Depending on your usage, Mimic PC can provide a more flexible and scalable solution. For example:
Large Pro offers 24GB VRAM at $1.19/h, slightly less memory than your current 28GB, but with pay-per-hour control over your compute costs.
Ultra provides 48GB VRAM, well above your current 28GB, making it ideal for running heavy models with headroom to spare.
With Mimic PC, you can scale up or down as needed, making it a powerful addition to your current setup for running resource-heavy LLMs from Hugging Face.
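As a rough illustration of that flexibility, here is a small sketch of the break-even arithmetic: how many hours per month each pay-per-hour tier can run before it matches a roughly $63 USD/month flat-rate VM. The $63 figure is an approximation of 58 EUR and depends on the exchange rate, so treat these numbers as estimates, not quotes.

```python
# Break-even estimate: hours/month at which each pay-per-hour tier
# equals a ~$63 USD/month flat-rate VM (58 EUR, exchange rate assumed).
flat_monthly_usd = 63.0  # approximate conversion of 58 EUR/month

tiers = {
    "Mini": 0.19,
    "Medium": 0.49,
    "Large": 0.99,
    "Large Pro": 1.19,
    "Ultra": 2.19,
}

for name, rate_per_hour in tiers.items():
    breakeven_hours = flat_monthly_usd / rate_per_hour
    print(f"{name}: ~{breakeven_hours:.0f} h/month before it costs more than the flat VM")
```

For example, Ultra at $2.19/h breaks even at roughly 29 hours per month: below that, pay-per-hour is cheaper than the flat subscription; above it, the flat VM wins on price (though not on VRAM).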

Verified purchaser
In short, I wanted to ask how much Ultra costs per hour, because I might only be interested in using Ultra, as a complement to my existing environment.
The Ultra tier costs $2.19 per hour. This tier provides 48GB VRAM and 32GB RAM, ideal for resource-intensive tasks and large models. It can be a great complementary solution to your existing setup for handling high-demand applications.