The Idea
What if AI could actually feel what you're feeling, not just read your words?
That was the spark behind EmpathIQ. Built solo in 24
hours for the Replit 10 Buildathon.
What It Does
EmpathIQ combines:
- Facial emotion detection via webcam
- Vocal emotion analysis via Hume EVI
- Claude API responses calibrated to both signals
The result? An AI that responds to how you actually feel, not just what you type.
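For the curious, the shape of the Claude call is roughly this. It's a sketch, not the app's actual code: the faceEmotion/voiceEmotion inputs and the prompt wording are placeholders I made up, but the @anthropic-ai/sdk Messages call is the standard one.

```ts
import Anthropic from "@anthropic-ai/sdk";

// Hypothetical shape for a single emotion reading (0-1 confidence per label).
type EmotionReading = { label: string; score: number };

// Server-side client; reads ANTHROPIC_API_KEY from the environment.
const anthropic = new Anthropic();

async function respondWithEmpathy(
  userText: string,
  faceEmotion: EmotionReading,  // e.g. from face-api.js expression scores
  voiceEmotion: EmotionReading, // e.g. from Hume EVI prosody scores
) {
  // Fold both signals into the system prompt so the reply's tone is calibrated.
  const system =
    `You are an empathetic assistant. ` +
    `The user's face reads ${faceEmotion.label} (${faceEmotion.score.toFixed(2)}) ` +
    `and their voice reads ${voiceEmotion.label} (${voiceEmotion.score.toFixed(2)}). ` +
    `Respond to how they actually feel, not just what they typed.`;

  const msg = await anthropic.messages.create({
    model: "claude-3-5-sonnet-latest", // swap in whichever Claude model you use
    max_tokens: 300,
    system,
    messages: [{ role: "user", content: userText }],
  });
  return msg.content;
}
```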
The Feature That Surprised Me Most
Smart Glasses mode
Point the camera at someone ELSE. EmpathIQ reads
THEIR emotion and gives YOU real-time coaching
on what to say.
Angry person in front of you?
→ "Lower your voice and acknowledge their concern without arguing"
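Conceptually it's "their emotion in, a coaching line out." Here's a toy version of that mapping; the labels and tips are illustrative stand-ins, not the app's actual copy, and the real suggestions come from the Claude side rather than a lookup table.

```ts
// Illustrative only: map the other person's detected emotion to a coaching tip.
const coachingTips: Record<string, string> = {
  angry: "Lower your voice and acknowledge their concern without arguing.",
  sad: "Slow down, soften your tone, and ask an open question.",
  happy: "Match their energy and build on what's going well.",
  neutral: "Stay conversational and check in on how they're feeling.",
};

function coach(detectedEmotion: string): string {
  return coachingTips[detectedEmotion] ?? coachingTips.neutral;
}
```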
The future vision is Meta Ray-Ban integration: real-time emotional coaching in every room you walk into.
The Tech
- React + Vite
- face-api.js: facial emotion detection (sketch below)
- Hume EVI: vocal emotion AI
- Claude API: emotionally calibrated responses
- Recharts: emotion timeline chart
- Tailwind CSS
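If you haven't used face-api.js before, the webcam side looks roughly like this: load two models once, then poll the <video> element for per-frame expression scores. The model path and polling cadence here are my placeholders, not the project's.

```ts
import * as faceapi from "face-api.js";

// Load the lightweight detector + expression model once (served from /models).
export async function loadFaceModels() {
  await faceapi.nets.tinyFaceDetector.loadFromUri("/models");
  await faceapi.nets.faceExpressionNet.loadFromUri("/models");
}

// Call this on an interval (e.g. every 500 ms) with the webcam <video> element.
export async function readFaceEmotion(video: HTMLVideoElement) {
  const detection = await faceapi
    .detectSingleFace(video, new faceapi.TinyFaceDetectorOptions())
    .withFaceExpressions();
  if (!detection) return null; // no face in frame this tick

  // expressions is a score map like { happy: 0.91, angry: 0.02, sad: 0.01, ... }
  const scores = detection.expressions as unknown as Record<string, number>;
  const [label, score] = Object.entries(scores).sort((a, b) => b[1] - a[1])[0];
  return { label, score };
}
```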
The Hardest Part
Combining two real-time emotion signals (face + voice)
into one coherent reading without lag or conflicts.
The fusion panel took several iterations to get right.
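The shape it converged on, heavily simplified (the weights, staleness cutoff, and labels below are illustrative, not the panel's actual values): keep the latest reading from each signal, drop anything stale, and blend whatever is left.

```ts
type EmotionReading = { label: string; score: number; at: number }; // at = Date.now()

const STALE_MS = 2000; // readings older than this are ignored (arbitrary cutoff)

// Blend the most recent face and voice readings into one fused label.
function fuseEmotions(
  face: EmotionReading | null,
  voice: EmotionReading | null,
  now = Date.now(),
): string {
  const fresh = [
    face && now - face.at < STALE_MS ? { ...face, weight: 0.5 } : null,
    voice && now - voice.at < STALE_MS ? { ...voice, weight: 0.5 } : null,
  ].filter((r): r is EmotionReading & { weight: number } => r !== null);

  if (fresh.length === 0) return "neutral"; // nothing recent to go on

  // Sum weighted scores per label and pick the strongest one.
  const totals = new Map<string, number>();
  for (const r of fresh) {
    totals.set(r.label, (totals.get(r.label) ?? 0) + r.score * r.weight);
  }
  return [...totals.entries()].sort((a, b) => b[1] - a[1])[0][0];
}
```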
What I'd Do Differently
Start with the voice mode earlier. EVI integration took longer than expected and nearly didn't make the 24-hour deadline.
What's Next
- Apple Watch pulse + biometric fusion
- Meta smart glasses integration
- Clinical/therapy version (HIPAA compliant)
- Mobile app
Try It
Live: https://empathiq-studio--varundasharadhi.replit.app
Demo: https://www.loom.com/share/ee3177d34b40404487115fca5f8366ed
GitHub: https://github.com/VarunDasharadhi/Empathiq-Studio