Q: Question about DeepSeek R1 Model Parameters
Hi, I have a question. Your plan mentions that the Ultra reasoning model DeepSeek R1 is distilled. Could you let me know which parameter version it is? Because if it’s not the 671B version, DeepSeek R1 might not actually be on par with OpenAI’s O1.

Sam_BindAI
Feb 16, 2025
A: Below is the US-hosted model we're currently supporting in Bind AI. It is on par with or better than O1-mini.
This model responds almost instantaneously; you can try it out, and the full response arrives up to 5X faster than with any other model. In addition, the model is hosted on US servers, and we're not directly using DeepSeek's APIs.
https://groq.com/groqcloud-makes-deepseek-r1-distill-llama-70b-available/

Verified purchaser
Thank you for your reply. So you currently use the DeepSeek R1 Distill 70B version. I hope you can replace it with the full 671B version in the future.

Verified purchaser
I was testing it and was wondering the same thing. This clarifies why the model didn't feel the same as the native DeepSeek experience.

Verified purchaser
I don't care about instant answers...I care about good answers.