Varun Dasharadhi

How I built an emotion-reading AI in 24 hours using Claude + Hume EVI

The Idea

What if AI could actually feel what you're feeling,
not just read your words?

That was the spark behind EmpathIQ, built solo in 24
hours for the Replit 10 Buildathon.

What It Does

EmpathIQ combines:

  • πŸ‘οΈ Facial emotion detection via webcam
  • πŸŽ™οΈ Vocal emotion analysis via Hume EVI
  • πŸ€– Claude API responses calibrated to both signals

The result? An AI that responds to how you actually
feel, not just what you type.
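
Under the hood the idea is simple: take the latest
face and voice readings and bake them into Claude's
system prompt. Here's a minimal sketch of that idea,
assuming the Anthropic TypeScript SDK (the model name,
data shapes, and prompt wording are illustrative, not
the exact production code):

```typescript
import Anthropic from "@anthropic-ai/sdk";

// One combined reading from the webcam + Hume EVI pipelines (illustrative shape).
interface EmotionReading {
  face: { label: string; score: number };  // e.g. { label: "sad", score: 0.72 }
  voice: { label: string; score: number }; // e.g. { label: "anxious", score: 0.61 }
}

const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

// Ask Claude to answer with the user's emotional state in its system prompt.
async function empatheticReply(userText: string, reading: EmotionReading) {
  const response = await anthropic.messages.create({
    model: "claude-3-5-sonnet-latest", // placeholder model name
    max_tokens: 300,
    system:
      `The user's face reads ${reading.face.label} (${Math.round(reading.face.score * 100)}%) ` +
      `and their voice reads ${reading.voice.label} (${Math.round(reading.voice.score * 100)}%). ` +
      `Match your tone to that emotional state before answering.`,
    messages: [{ role: "user", content: userText }],
  });

  const block = response.content[0];
  return block.type === "text" ? block.text : "";
}
```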

The Feature That Surprised Me Most

Smart Glasses mode 🥽

Point the camera at someone ELSE. EmpathIQ reads
THEIR emotion and gives YOU real-time coaching
on what to say.

Angry person in front of you?
→ "Lower your voice and acknowledge their concern
without arguing"

The future vision is Meta Ray-Ban integration:
real-time emotional coaching in every room you
walk into.
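
The coaching itself is just another Claude call with a
different framing: describe the other person's emotion
and ask for one actionable line. A hypothetical helper
for that prompt (wording is illustrative, not the exact
one in the repo):

```typescript
// Hypothetical helper: turn the OTHER person's dominant emotion into a coaching request.
function coachingPrompt(theirEmotion: string, confidence: number): string {
  return (
    `The person in front of me appears ${theirEmotion} ` +
    `(about ${Math.round(confidence * 100)}% confidence). ` +
    `In one short sentence, tell me what to say so they feel heard.`
  );
}

// e.g. coachingPrompt("angry", 0.83) feeds the same Claude call as the regular chat mode.
```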

The Tech

  • React + Vite
  • face-api.js – facial emotion detection
  • Hume EVI – vocal emotion AI
  • Claude API – emotionally calibrated responses
  • Recharts – emotion timeline chart
  • Tailwind CSS
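
For the face side, face-api.js does the heavy lifting
straight from the webcam video element. Roughly (model
paths and setup simplified):

```typescript
import * as faceapi from "face-api.js";

// Load the lightweight detector + expression classifier (weights served from /models).
await faceapi.nets.tinyFaceDetector.loadFromUri("/models");
await faceapi.nets.faceExpressionNet.loadFromUri("/models");

// Read the dominant expression from the live webcam <video> element.
async function readFaceEmotion(video: HTMLVideoElement) {
  const detection = await faceapi
    .detectSingleFace(video, new faceapi.TinyFaceDetectorOptions())
    .withFaceExpressions();

  if (!detection) return null;

  // expressions is a map like { happy: 0.91, sad: 0.02, angry: 0.01, ... }
  const [label, score] = Object.entries(detection.expressions).sort(
    (a, b) => b[1] - a[1]
  )[0];

  return { label, score };
}
```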

The Hardest Part

Combining two real-time emotion signals (face + voice)
into one coherent reading without lag or conflicts.
The fusion panel took several iterations to get right.
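
What eventually worked, roughly, was weighting each
signal by how fresh it is and letting the stronger,
fresher one win on conflicts. A simplified sketch of
that fusion logic (thresholds and weights are
illustrative, not the exact panel code):

```typescript
interface Signal {
  label: string; // e.g. "angry"
  score: number; // 0..1 confidence from face-api.js or Hume EVI
  at: number;    // Date.now() timestamp of the reading
}

// Blend face + voice into one reading, discounting whichever signal is stale.
function fuseSignals(face: Signal, voice: Signal, now = Date.now()) {
  const freshness = (s: Signal) => Math.max(0, 1 - (now - s.at) / 3000); // fade over ~3s
  const wFace = face.score * freshness(face);
  const wVoice = voice.score * freshness(voice);

  if (face.label === voice.label) {
    // Agreement: report the shared label with the stronger of the two weights.
    return { label: face.label, score: Math.max(wFace, wVoice) };
  }

  // Conflict: the fresher, more confident signal wins.
  return wFace >= wVoice
    ? { label: face.label, score: wFace }
    : { label: voice.label, score: wVoice };
}
```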

What I'd Do Differently

Start on the voice mode earlier: EVI integration took
longer than expected and nearly didn't make the
24-hour deadline.

What's Next

  • Apple Watch pulse + biometric fusion
  • Meta smart glasses integration
  • Clinical/therapy version (HIPAA compliant)
  • Mobile app

Try It

🔗 Live: https://empathiq-studio--varundasharadhi.replit.app
🎬 Demo: https://www.loom.com/share/ee3177d34b40404487115fca5f8366ed
⭐ GitHub: https://github.com/VarunDasharadhi/Empathiq-Studio

Would love your feedback! 🙏
