Red Hen Lab

Research on Multimodal Communication

Technologies

machine learning
opencv
high performance computing
audio processing
multimodal analysis
python
tensorflow
singularity
scikit-learn
syntaxnet
nlp
asr
data science
big data science
computer vision

Topics

natural language processing
deep learning
multimedia
co-speech gesture
big data visualization
machine learning
artificial intelligence
video processing
audio processing
big data
ai
communication
cognitive science
data science
metadata
media
language
multimodal communication
gesture
http://www.redhenlab.org
Chat
Email
Mailing List / Forum
Twitter
Blog

Projects

Contributor

Abhinav Mehta

Gesture Recognition Using Machine Learning

Gesture recognition using template matching, motion history images, and machine learning. The project is divided into three phases involving...

View project details · View code
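The motion history image (MHI) technique named in the description above can be sketched in a few lines. This is a minimal NumPy illustration, not the contributor's code; the decay value `tau`, the difference threshold, and the toy frames are invented for the example:

```python
import numpy as np

def update_mhi(mhi, frame, prev_frame, tau=10, threshold=30):
    """Update a motion history image from two grayscale frames.

    Pixels that changed are stamped with tau; all others decay by 1,
    so recent motion appears brighter than older motion.
    """
    motion = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16)) > threshold
    return np.where(motion, tau, np.maximum(mhi - 1, 0))

# toy example: a bright square moves one pixel to the right
prev = np.zeros((8, 8), dtype=np.uint8)
prev[2:5, 2:5] = 255
curr = np.zeros((8, 8), dtype=np.uint8)
curr[2:5, 3:6] = 255

mhi = np.zeros((8, 8), dtype=np.float32)
mhi = update_mhi(mhi, curr, prev)
print(mhi.max())  # newest motion is stamped with tau
```

Template matching over such images then compares the decayed motion silhouette against stored gesture templates.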

Contributor

mozin

Gesture recognition using multimodal deep learning

Use video and text data to recognise the gestures of speakers on TV using LSTMs.

View project details · View code
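For readers unfamiliar with the LSTMs mentioned above, a single LSTM cell step can be written out directly in NumPy. This is a generic sketch of the recurrence, not the project's model; the weight shapes and the random stand-in "frame features" are illustrative assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, b):
    """One LSTM step; W maps [x; h] to the four stacked gate pre-activations."""
    z = W @ np.concatenate([x, h]) + b
    i, f, o, g = np.split(z, 4)
    i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
    c = f * c + i * g          # cell state: forget old memory, add new input
    h = o * np.tanh(c)         # hidden state emitted at this timestep
    return h, c

rng = np.random.default_rng(0)
n_in, n_hid = 6, 4             # e.g. per-frame pose features, hidden size
W = rng.standard_normal((4 * n_hid, n_in + n_hid)) * 0.1
b = np.zeros(4 * n_hid)

h, c = np.zeros(n_hid), np.zeros(n_hid)
for t in range(5):                        # a short "video" of 5 frames
    x = rng.standard_normal(n_in)         # stand-in for frame features
    h, c = lstm_step(x, h, c, W, b)
print(h.shape)
```

The final hidden state `h` would then feed a classifier head; a multimodal variant runs one such recurrence per modality (video, text) and fuses the hidden states.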

Contributor

Xi-Jin Zhang (mfs6174)

Computer Vision and Machine Learning Applications on Artwork Images

The proposal is inspired by idea G on Red Hen's GSoC 2016 ideas page. The main purpose is to develop models and code that help domain experts to...

View project details · View code

Contributor

Aswin kumar J

To construct Bootstrapping Human Motion Data for Gesture Analysis

The project aims at detecting human gestures with the help of classifiers. The project consists of: 1) a database containing the segmented frames of...

View project details · View code
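A classifier over per-frame feature vectors, as described above, can be as simple as k-nearest neighbours. This NumPy sketch uses synthetic features rather than the project's database; the class means, dimensionality, and value of k are assumptions for illustration:

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Classify each test vector by majority vote among its k nearest neighbours."""
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)   # Euclidean distances
        nearest = y_train[np.argsort(d)[:k]]      # labels of the k closest points
        preds.append(np.bincount(nearest).argmax())
    return np.array(preds)

rng = np.random.default_rng(42)
# stand-ins for feature vectors extracted from segmented frames
gesture = rng.normal(3.0, 1.0, size=(20, 5))
no_gesture = rng.normal(-3.0, 1.0, size=(20, 5))
X = np.vstack([gesture, no_gesture])
y = np.array([1] * 20 + [0] * 20)

test = np.vstack([rng.normal(3.0, 1.0, size=(1, 5)),
                  rng.normal(-3.0, 1.0, size=(1, 5))])
preds = knn_predict(X, y, test)
print(preds)  # almost surely [1 0] given the wide class separation
```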

Contributor

Soumitra Agarwal

Gestures, Machine learning and other things

The proposal aims to identify elements of co-speech gestures in a massive dataset of television news. The steps will include building a flawed dataset,...

View project details · View code
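The idea of starting from a small, imperfect labelled set and growing it by pseudo-labelling confident predictions (a common reading of "bootstrapping" a dataset, and an assumption here, not a description of this proposal's actual pipeline) can be sketched as self-training with a nearest-centroid classifier. All data and thresholds below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic features: two clusters standing in for gesture / no-gesture
pos = rng.normal(loc=2.0, size=(50, 3))
neg = rng.normal(loc=-2.0, size=(50, 3))
X = np.vstack([pos, neg])
y_true = np.array([1] * 50 + [0] * 50)

# hand-label only a handful of examples to seed the loop
labeled = np.zeros(100, dtype=bool)
labeled[[0, 1, 50, 51]] = True
y = np.where(labeled, y_true, -1)       # -1 marks unlabelled points

def centroids(X, y):
    return {c: X[y == c].mean(axis=0) for c in (0, 1)}

for _ in range(3):                      # a few self-training rounds
    cen = centroids(X, y)
    d0 = np.linalg.norm(X - cen[0], axis=1)
    d1 = np.linalg.norm(X - cen[1], axis=1)
    pred = (d1 < d0).astype(int)        # closer centroid wins
    conf = np.abs(d0 - d1)              # margin as a confidence proxy
    # pseudo-label only the most confident unlabelled points
    take = (y == -1) & (conf > np.median(conf))
    y[take] = pred[take]

acc = (pred == y_true).mean()
print(acc)
```

Each round enlarges the training set with high-margin predictions, so later centroids are estimated from far more points than the initial seed labels.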