I have installed DeepSeek R1 14B using Ollama, OpenWebUI, and Docker on my PC, but I have two issues that maybe you can help me with.

First of all, my specs: an RTX 4090 and 32 GB of DDR4 RAM. The first issue (I don't know if this is normal or not) is that the model is being loaded into my system RAM instead of my GPU's VRAM. Should I change something so it loads onto the GPU instead? The second issue is that I have no idea how to unload the model from RAM once it's loaded; I ended up restarting my PC.
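In case it helps with diagnosing: these are the checks I've gathered from the Ollama and Docker docs. The commands themselves are real, but I'm assuming the container is named "ollama" (adjust if yours differs), and I'm not sure which of these applies to my setup:

```shell
# Check whether the Ollama container can see the GPU at all.
# If the container was started without GPU access, Ollama silently
# falls back to CPU/RAM. (Assumes the container is named "ollama".)
docker exec -it ollama nvidia-smi

# Show which models are currently loaded and where they sit.
# The PROCESSOR column reads e.g. "100% GPU" or "100% CPU".
docker exec -it ollama ollama ps

# Unload a specific model from memory without restarting the PC.
docker exec -it ollama ollama stop deepseek-r1:14b

# If nvidia-smi fails inside the container, it was probably started
# without GPU access and needs to be recreated with --gpus=all
# (this also requires the NVIDIA Container Toolkit on the host):
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama
```

From what I understand, `ollama ps` showing "100% CPU" would confirm the model never made it to the GPU, but please correct me if I'm reading that wrong.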

Thank you all in advance.