Q: Question about DeepSeek R1 Model Parameters

Hi, I have a question. Your plan mentions that the Ultra reasoning model DeepSeek R1 is distilled. Could you let me know which parameter version it is? Because if it’s not the 671B version, DeepSeek R1 might not actually be on par with OpenAI’s O1.

Chakotay (PLUS), Feb 15, 2025

Sam_BindAI (Founder Team), Feb 16, 2025

A: Below is the US-hosted model we're currently supporting in Bind AI. It is on par with or better than o1-mini.
This model responds nearly instantly; you can try it out, and its responses are up to 5x faster than any other model's. In addition, the model is hosted on US servers, and we are not using DeepSeek's APIs directly.
https://groq.com/groqcloud-makes-deepseek-r1-distill-llama-70b-available/
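For anyone who wants to compare the Groq-hosted distill against the native DeepSeek experience themselves, Groq exposes it through an OpenAI-compatible chat completions endpoint. A minimal sketch of building such a request (endpoint URL and model ID taken from Groq's announcement; the API key and prompt are placeholders):

```python
import json

# Groq's OpenAI-compatible chat completions endpoint (as of early 2025).
GROQ_URL = "https://api.groq.com/openai/v1/chat/completions"

def build_request(prompt: str, api_key: str):
    """Build the HTTP headers and JSON body for a chat completion
    against the DeepSeek-R1-Distill-Llama-70B model hosted on Groq."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        # Groq's model ID for the distilled 70B variant --
        # note this is NOT the full 671B DeepSeek R1.
        "model": "deepseek-r1-distill-llama-70b",
        "messages": [{"role": "user", "content": prompt}],
    }
    return headers, json.dumps(body)

headers, payload = build_request("Why is the sky blue?", "YOUR_API_KEY")
```

The resulting `headers` and `payload` can be POSTed to `GROQ_URL` with any HTTP client (e.g. `requests.post(GROQ_URL, headers=headers, data=payload)`).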

Verified purchaser, posted Feb 16, 2025

Thank you for your reply. So you currently use the DeepSeek R1 Distill 70B version. I hope you can replace it with the full 671B version in the future.

Verified purchaser, posted Feb 18, 2025

I was testing it and was wondering the same thing. This clarified why the model didn't seem the same as the native DeepSeek experience.

Verified purchaser, edited Feb 19, 2025

I don't care about instant answers...I care about good answers.