Leveling Up Head Movement Detection: My Graduation Project Adventure
Hey there, tech enthusiasts and curious minds! 👋
So, I've been knee-deep in this wild ride called my university graduation project, and boy, do I have a story to tell! We've been on a mission to detect head movements using machine learning. Sounds simple, right? Well, buckle up, because it's been quite the rollercoaster! 🎢
The Initial Game Plan
We started off thinking, "Hey, let's use MediaPipe to grab some facial landmarks and train a model to figure out head movements!" Seemed straightforward enough. We were all pumped to detect tilting, turning, nodding, you name it!
Here's roughly how we initially extracted facial landmarks (a simplified sketch, not our exact code):
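The Face Mesh calls below are real MediaPipe APIs, but the `extract_landmarks` helper and the one-frame webcam grab are just illustrative:

```python
import cv2
import mediapipe as mp

mp_face_mesh = mp.solutions.face_mesh

def extract_landmarks(frame_bgr, face_mesh):
    """Flatten the 468 Face Mesh landmarks into [x0, y0, z0, x1, y1, z1, ...]."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)  # MediaPipe expects RGB
    results = face_mesh.process(rgb)
    if not results.multi_face_landmarks:
        return None  # no face in this frame
    face = results.multi_face_landmarks[0]
    return [c for lm in face.landmark for c in (lm.x, lm.y, lm.z)]

# Grab a single frame from the webcam and extract its landmarks
with mp_face_mesh.FaceMesh(max_num_faces=1,
                           min_detection_confidence=0.5) as face_mesh:
    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    cap.release()
    if ok:
        features = extract_landmarks(frame, face_mesh)
```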
But here's where it gets interesting (or frustrating, depending on how you look at it 😅):
Houston, We Have a Problem
Our model was struggling harder than me trying to wake up for an 8 AM class. It just couldn't tell the difference between subtle movements. Tilting, turning: it was all the same to our poor, confused AI.
It almost felt like I needed to crank my head all the way over just to make it register a tilt. 🤦‍♂️
The Lightbulb Moment 💡
Then, out of nowhere, while I was probably procrastinating on TikTok, it hit me: "Why are we trying to make one model do everything? That's like asking your cat to fetch, do your taxes, and make you coffee!"
Our Cool New Approach
Here's what we cooked up:
- Use MediaPipe as our base model (it's like the foundation of a house, but for faces)
- Create a bunch of smaller models, each with its own specialty (like having a different chef for each course in a fancy restaurant)
- Combine all their outputs for the final prediction (teamwork makes the dream work, right?)
It's kind of like how Apple does their AI magic, or how LoRA works in those fancy diffusion models. We're basically creating a boy band of AI models, each with its own special talent! 🕺🕺🕺🕺
The Secret Sauce (a.k.a. How It Actually Works)
- MediaPipe: Our facial landmark detective 🕵️
- Specialized Models:
  - Tilt Detective
  - Turn Tracker
  - Nod Spotter
  - (and more, we got the whole squad!)
- The Mastermind: A module that takes all this info and makes the final call
Here's a sneak peek at how we're combining our models (a simplified sketch, not our exact code):
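The model objects here are stand-ins for our trained specialists; any object with a `predict()` method returning a confidence between 0 and 1 would do:

```python
def combine_predictions(features, models, threshold=0.5):
    """The 'mastermind': ask every specialist, then pick the winner.

    `models` maps a movement name ("tilt", "turn", "nod", ...) to a
    model whose predict() returns a confidence in [0, 1] for that
    one movement.
    """
    scores = {name: model.predict(features) for name, model in models.items()}
    best = max(scores, key=scores.get)
    # If nobody is confident enough, say "neutral" instead of guessing.
    if scores[best] < threshold:
        return "neutral", scores[best]
    return best, scores[best]

# Usage with our specialists:
# label, confidence = combine_predictions(
#     features, {"tilt": tilt_model, "turn": turn_model, "nod": nod_model}
# )
```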
Each specialized model (tilt_model, turn_model, nod_model) focuses on its specific task, making it much more accurate at detecting subtle differences.
Did It Actually Work?
Short answer: Heck yeah! 🎉
Long answer: We saw some serious improvements:
- Better at catching those sneaky subtle movements
- Could actually tell the difference between tilting and turning (finally!)
- Worked like a charm in different lighting and head positions
How did we check? We ran the old single model and the new multi-model setup over the same labeled test clips and compared them side by side.
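If you want to run the same kind of comparison, a confusion matrix makes the tilt-vs-turn mix-up painfully obvious. Note that the data below is placeholder, not our actual results:

```python
from sklearn.metrics import classification_report, confusion_matrix

LABELS = ["tilt", "turn", "nod", "neutral"]

# Placeholder data; swap in your real test labels and predictions.
y_true   = ["tilt", "turn", "nod", "neutral", "tilt", "turn"]
y_single = ["turn", "turn", "nod", "neutral", "turn", "turn"]
y_multi  = ["tilt", "turn", "nod", "neutral", "tilt", "turn"]

for name, y_pred in [("single model", y_single), ("multi-model", y_multi)]:
    print(f"--- {name} ---")
    print(confusion_matrix(y_true, y_pred, labels=LABELS))
    print(classification_report(y_true, y_pred, labels=LABELS, zero_division=0))
```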
What's Next?
Well, we're not stopping here! We've got big dreams:
- Make our AI combo even smarter
- Teach it to spot even more movements (maybe even eye rolls for when I tell bad jokes)
- See if we can make it work super fast in real-time
We're also thinking about optimizing our models for mobile devices. Imagine having this running smoothly on your smartphone! 📱✨
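We haven't actually done the mobile port yet, but if our specialists end up as Keras models, post-training quantization with TensorFlow Lite would be the obvious first step. Here's a sketch with a tiny stand-in model (the real thing would be trained on landmark features):

```python
import tensorflow as tf

# Tiny stand-in for one of our specialists; just for the demo.
tilt_model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1404,)),   # 468 landmarks * (x, y, z)
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(tilt_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # post-training quantization
tflite_bytes = converter.convert()

with open("tilt_model.tflite", "wb") as f:
    f.write(tflite_bytes)
```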
The Big Takeaway
Breaking down big problems into smaller, manageable chunks? Total game-changer. It's like when you're faced with a huge pizza: tackle it slice by slice, and before you know it, you've conquered the whole thing! 🍕
That's all for now, folks! Remember, in the world of AI and machine learning, sometimes the best solution is to divide and conquer. Stay curious, keep experimenting, and who knows? Your next crazy idea might just be the next big thing! ✌️
If you're curious, you can check out our multi-model approach here: https://huggingface.co/suko/Janus/