
My Summer as an EMPath Trainee @ Affectiva

You know those robots that can tell exactly how you're feeling? The ones that totally don't exist yet? Well, those robots are coming into existence, with the help of an MIT Media Lab startup in Boston called Affectiva. This company, founded in 2009, specializes in Affective Computing, the area of AI focused on recognizing and responding to human emotion. Their research has applications in Autonomous Vehicles, Virtual Conferencing, Media Analytics, and Biometrics. I had the great pleasure of participating in Affectiva's 5-week EMPath (Emotion Machine Pathway) Intern Program this summer. For the first half of the program, we learned about Machine Learning and Affective Computing through online resources and quick projects. At the end, we got into groups and competed in a Makeathon, where our goal was to create a product using ML and pitch it to Affectiva investors.

In my previous post, I explained Convolutional Neural Networks and one example application of a CNN: facial expression recognition. I mentioned that I would be coding a similar CNN for my Affectiva project about driver aggression. Now that our project is complete, I'd love to share it with you all!


Our model was a binary classifier that decided whether a driver's face was aggressive or not. First, we set out to locate a database of facial images that separated aggressive images from neutral ones. This proved to be one of the toughest parts: the databases we found were either too small, unsorted, in faulty formats, or not labeled at all!

Eventually, we came across the CK+ dataset, which sorts images into groups based on the seven universal facial expressions. According to our surveys, aggression comes in three forms: fear, surprise, and anger, so we combined these folders into one labeled "aggressive". For our "neutral" folder, we gathered images from several different databases, and in the end, we had our perfect dataset of ~300 images with two categories: aggressive and neutral.
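If you're curious what this folder-merging step looks like in practice, here's a minimal sketch. The folder names follow CK+-style expression labels; the function name and paths are my own illustrative choices, not our exact script:

```python
import shutil
from pathlib import Path

# Expression folders our surveys flagged as aggressive.
AGGRESSIVE = ("anger", "fear", "surprise")

def merge_aggressive(src_root, dst_root):
    """Copy images from the aggressive expression folders into a single
    dst_root/aggressive folder, prefixing filenames to avoid collisions."""
    src = Path(src_root)
    dst = Path(dst_root) / "aggressive"
    dst.mkdir(parents=True, exist_ok=True)
    for folder in src.iterdir():
        if folder.is_dir() and folder.name.lower() in AGGRESSIVE:
            for img in folder.glob("*.png"):
                shutil.copy(img, dst / f"{folder.name}_{img.name}")
```

The "neutral" class was assembled the same way, just pulling from the other databases we collected.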


Next, we coded our algorithm and trained it on this set! Shreif, an Engineering student in Egypt, preprocessed the images before I defined the CNN and trained the model. We utilized Keras and PyTorch in the process. Our wonderful mentor Mohammed, also from Egypt, recommended and showed us how to use VGG16, a popular CNN architecture from Oxford's Visual Geometry Group that would give us higher accuracy in our results!
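The core idea of using VGG16 here is transfer learning: keep its pretrained convolutional layers frozen and train only a small binary-classification head on our ~300 images. Below is a minimal Keras sketch of that setup; the head's layer sizes are illustrative assumptions, not our exact configuration:

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

# Pretrained convolutional base (ImageNet weights), no classifier head.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze pretrained features; our dataset is tiny

# Small trainable head for the binary aggressive/neutral decision.
model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # P(aggressive)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```

Freezing the base is what makes a few hundred images workable: only the head's weights are learned, while VGG16's general-purpose visual features come along for free.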


Our final step was to use OpenCV (cv2) so that the model could run in real time on our devices with a live camera feed. Here are a few pictures of the results :)


A little dramatic, I agree :) Throughout the final week of the Makeathon, we prepared our Pitch, putting together mock Dashboard images, explanatory diagrams, logos, and solid selling points. Nouran, another student from Egypt, set the stage with a heart-wrenching story of the time her uncle nearly killed her friend due to aggressive driving. After I explained our goal and product, Shreif walked through the live demo, and Lorenz (from California) ended with an overview of our competitive analysis. Closing with a cheesy joke, we wowed our judges and audience members (I hope)!



The EMPath program was an incredible, influential part of my summer that taught me a lot, not just about Machine Learning, but also about how to promote myself within my workplace, sell an idea, and spread positivity with every step I take. I want to say thank you to Rana and Taniya, who organized this whole program for us. Also to my team members, team mentors, and Eve, my mentor for the first few weeks. Talking to you all was more enriching than any other experience could've been.


So I'll end this with the same cheesy line that wrapped up our pitch...

With an amazing team like GeeksVision and an even more amazing product like AggreVision, we can ensure that our roads can go from this...


to this!

Here is a link to our GitHub Repo and to our presentation!


