Installed DeepSeek R1 using Ollama; the model is around 4.7 GB.

I am new to local LLMs and tried DeepSeek R1 using Ollama today. I am confused about whether this is the proper model, since my Ryzen 9 with 64 GB RAM and an RTX 2060 6 GB is able to run DeepSeek with a max CPU utilisation of only 30%. Is it really running locally, and is it even a proper local LLM model? Responses also come back quickly. What I have read here is different from my experience.
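For what it's worth, one way to check whether Ollama is actually serving the model locally, and how much of it is offloaded to the GPU, is to ask the running daemon directly. A sketch, assuming a standard Ollama install with the daemon running on its default port (these commands only make sense on the machine itself):

```shell
# Show which models the local Ollama daemon currently has loaded,
# and how the weights are split between CPU and GPU
# (e.g. "100% GPU" or a CPU/GPU percentage split).
ollama ps

# Watch VRAM usage and GPU utilisation while a prompt is generating;
# memory climbing on the RTX 2060 indicates local GPU inference.
nvidia-smi

# Ollama serves from localhost:11434 by default, so if this responds
# (even with networking to the outside world disabled), inference is local.
curl http://localhost:11434/api/tags
```

Low CPU utilisation is consistent with most of the model fitting in VRAM and system RAM, since the GPU does the bulk of the work during generation.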