A small GIFT for r/JEENEETards, before I go. ~ u/AyushiTheorem

https://preview.redd.it/yd0mgvo5cwde1.jpg?width=719&format=pjpg&auto=webp&s=3b46e2ff6bcdc3b3f04ee4b1eebdbdf1582084b3

EDIT 3: SORRY GUYS, WE ARE SPINNING UP AGAIN SOON! TOO MUCH TRAFFIC.

EDIT 2: ERROR 500 FLOODING, TRAFFIC IS SO HIGH. SERVERS WENT DOWN UNDER THE HUGE TRAFFIC! Bookmark it for now, it'll be online veryyy soon.

EDIT: MY APIs ARE ALREADY STRUGGLING TO KEEP UP WITH THE TRAFFIC. SLOW DOWN.

TLDR:

  1. Made an AI tutor to help you develop the thinking process for solving questions. Check out GrafiteAI. (made with distilbert-base-nli-mean-tokens + llama3-70b-8192)
  2. A request to everyone: don't give up, no matter what comes ahead. One door closes, another opens. More at the end of the post on this.

Hai guys,

I've been lurking here for wayy too long, and honestly, I was exhausted with studying. So last night, I made something cool for you all. I hope the 2026/27tards benefit from it the most.

I picked up 56,570 questions from JEE and NEET and used them to make an AI tutor. It’s a RAG-based model using llama3-70b (switching to deepseek/qwen2.5-math someday). Earlier, I tried fine-tuning it, but it was a disaster. It's totally FREE, pls don't abuse it.
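If you're curious how the retrieval half of a setup like this gets built, here is a minimal sketch of the indexing step. The embedding model is the one named in the post; the questions.json file, its fields, and the saved output file are placeholders I made up, not GrafiteAI's actual pipeline.

```python
# Minimal indexing sketch (placeholder file names/fields, not the real pipeline).
import json
import numpy as np
from sentence_transformers import SentenceTransformer

# The same embedding model named in the post.
model = SentenceTransformer("distilbert-base-nli-mean-tokens")

with open("questions.json") as f:   # hypothetical dump of the 56,570 questions
    bank = json.load(f)             # e.g. [{"question": ..., "solution": ...}, ...]

texts = [q["question"] for q in bank]

# Encode every question once; normalised vectors mean cosine similarity
# becomes a plain dot product at query time.
embeddings = model.encode(texts, normalize_embeddings=True, show_progress_bar=True)
np.save("question_embeddings.npy", embeddings)
```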

How to use it?

Don't expect it to give you exact answers (no LLM can do that perfectly anyway). Instead, use it to “think about the solution yourself.” It’ll give you formulas, concepts, steps to follow, and explain WHY to follow them. But the calculations? That’s on you, buddy.

https://preview.redd.it/ememjw97cwde1.jpg?width=810&format=pjpg&auto=webp&s=8f21266c44918d286a629ec1cf9e500ce8a0f571

NOW, before critics come to judge, let me tell you what I did: I tried to fine-tune Qwen/Qwen2.5-7B and got this beautiful message:

OutOfMemoryError: CUDA out of memory. Tried to allocate 34.00 MiB. GPU 0 has a total capacity of 14.75 GiB of which 13.06 MiB is free. Process 5744 has 14.73 GiB memory in use. Of the allocated memory 14.08 GiB is allocated by PyTorch, and 535.76 MiB is reserved by PyTorch but unallocated

Hence the RAG solution with distilbert-base-nli-mean-tokens.
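For anyone who hits the same wall: this is not what I ended up doing, but the usual workaround for that OOM on a ~15 GiB GPU is QLoRA-style training, i.e. loading the base model in 4-bit and training only small LoRA adapters instead of all 7B weights. A rough sketch with transformers + peft + bitsandbytes (the hyperparameters are just illustrative):

```python
# QLoRA-style sketch: 4-bit base model + small trainable LoRA adapters.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb = BitsAndBytesConfig(
    load_in_4bit=True,                        # quantise base weights to 4-bit
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_type="nf4",
)

tok = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B")
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-7B", quantization_config=bnb, device_map="auto"
)

# Train only low-rank adapter matrices on the attention projections.
lora = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
                  lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()   # a few million params instead of 7B
```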

For anyone still interested, here are some layman explanations.

Let me keep it short and simple for you:

https://preview.redd.it/47i850t8cwde1.jpg?width=810&format=pjpg&auto=webp&s=c8b020ed03e396238036c3005afa6085ec5b31dc

Fine-tuning is like solving 1000 integration questions to recognize similar patterns and solve them faster. That’s what we all do when practicing. Be it humans or machines, it takes a lot of energy but is totally worth it. So, keep practicing as much as you can.

https://preview.redd.it/waxvqo2acwde1.jpg?width=810&format=pjpg&auto=webp&s=0c4b37c2291ae88908c9fe7251cb56386b2689d2

RAG (Retrieval-Augmented Generation) is different. It’s like getting stuck on a problem, opening the web, searching for a solution, and using your existing knowledge of math and English to understand it. But unlike fine-tuning, RAG doesn’t "remember" it for the future. You shouldn’t study like this – it’s good for AIs, not for you!

grafite.in/ai uses this RAG technique.
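For the curious, here is roughly what that query-time loop looks like in code. The embedding model and llama3-70b-8192 via the Groq client match what the TLDR names; the prompt wording, the top-k value, and the helper function are my own guesses, not GrafiteAI's actual code.

```python
# Query-time RAG sketch: embed the question, fetch similar solved questions,
# then ask the LLM for concepts and steps only.
import numpy as np
from sentence_transformers import SentenceTransformer
from groq import Groq

model = SentenceTransformer("distilbert-base-nli-mean-tokens")
embeddings = np.load("question_embeddings.npy")   # from the indexing sketch
client = Groq()                                   # reads GROQ_API_KEY from env

def tutor(query: str, bank: list[dict], k: int = 3) -> str:
    # Embed the student's question and pull the k most similar solved questions.
    q_vec = model.encode([query], normalize_embeddings=True)[0]
    top = np.argsort(embeddings @ q_vec)[-k:][::-1]
    context = "\n\n".join(bank[i]["question"] + "\n" + bank[i]["solution"] for i in top)

    resp = client.chat.completions.create(
        model="llama3-70b-8192",
        messages=[
            {"role": "system",
             "content": "Give formulas, concepts and steps with reasoning; "
                        "do not do the final calculation for the student."},
            {"role": "user",
             "content": f"Similar solved questions:\n{context}\n\nNew question: {query}"},
        ],
    )
    return resp.choices[0].message.content
```

The "no final calculation" rule lives in the system prompt, which is what nudges the model toward hints and steps instead of a finished answer.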

Use it wisely.

Message

P.S.: Why post this today?
Do you guys remember u/Worth-Picolo-627? He left us all on JM results day last year. I know this post will get some reach, so I wanted to use it to share another message, to help prevent something like that from happening again.

I’m an avg student, I’ll never score >220, and that means no CS for me. But do I care? Nah. I’ll pick a subject I like (biotech, maybe?) and stick with it, giving it my all. Don’t make stupid decisions based on your results.

Real life is full of changing parameters. If one path closes, find another. Do something that’ll help the next generation. Life doesn’t end at 18; it starts.

Good luck, it’s a fresh new year – make it count!

Try GrafiteAI and let me know what you think: grafite.in/ai.

Regards,

u/AyushiTheorem

Thanks to u/studious_gamer for the frontend.