Emotive Virtual Patient

The Emotive Virtual Patient project aims to develop efficient, cognition-based evolutionary responses and behaviors for virtual humans that remain consistent with psychological factors when interacting with a human subject.

Dr. Marjorie Zielke leads the research work at VHSS on the Emotive Virtual Reality Patient, a platform offering high-quality simulations, known as emotive Virtual Reality Patients, that can exhibit medical symptoms to help medical students improve their verbal and nonverbal communication skills. The project is a cornerstone in creating virtual humans that can work in tandem with, or in some cases replace, the medical mannequins often used in educational scenarios. The advantage of such a training simulation is its potential to emulate the symptoms a patient is physically presenting. The research has received two grants, one each from the Southwestern Medical Foundation and the National Institutes of Health, to fuel ongoing research into virtual reality-based medical experiences.

Accomplishments:

  • Winner – 2017 Richardson US IGNITE Smart Cities Gigabit Challenge
  • Best in Show – 2017 International Meeting on Simulation in Healthcare (IMSH)
  • Best Poster – 2017 UT System-Health Science Education Innovations Conference

More detail on the project:

My role:

  • Contribute to the exploratory study for the creation of an advanced neural-network-based cognitive framework that powers the research and moves the team closer to the goal of creating world-class virtual humans.
  • Assist Dr. Zielke in the creation of an enhanced learning theory based on the Connectivism theory of learning and cognitive science, utilizing deep neural networks and mixed reality (work in progress).
  • Led a team of programmers to research and develop neural-network-based natural language processing and human action recognition systems, compiling data on body language, facial cues, and other physiological information on VR (Oculus, HTC Vive) and AR (Microsoft HoloLens) platforms. All neural networks are developed in-house from scratch.
  • Bridge the gap between technical and research requirements by collaborating with other teams, including 3D modelers, animators, research designers, and other programmers.
  • Assisted undergraduate researchers in the lab with the development and maintenance of cognition-based research work, software, and technological resources.

My Contributions:

  • Researched and developed state-of-the-art neural-network-based natural language processing and human action recognition systems for the Emotive Virtual Patient project, enabling efficient, cognition-based evolutionary responses and behaviors consistent with psychological factors when interacting with a human subject.
  • Developed a deep-learning-based NLP toolkit without any third-party packages (compatible with the Unity game engine and the Microsoft Mixed Reality Toolkit (MRTK)), consisting of:
    • Recurrent Neural Network (RNN) models (forward RNN, bidirectional RNN) that perform Named Entity Recognition (NER) and Question Answering (QA).
    • The models use Long Short-Term Memory (LSTM) cells trained with Backpropagation Through Time (BPTT) in the hidden layer, and softmax, negatively sampled softmax, or a recurrent Conditional Random Field (CRF) in the output layer.
    • The NER model achieved a token error of 4.97% and a sentence error of 13.83% on the named entity task with the Groningen Meaning Bank (GMB) dataset.
    • The QA task is evaluated on the Stanford Question Answering Dataset (SQuAD) and has achieved an Exact Match (EM) score of 0.59185 and an F1 score of 0.69732 (work in progress).
    • Led the development of a Hidden Markov Model with rule-based logic to update transition and emission probabilities, yielding a part-of-speech tagger that performed exceptionally well on the GMB and CoNLL 2003 shared task datasets.
    • Developed a word-embedding model similar to Google's word2vec.
  • In the process of developing an enhanced version of the Faster Region-based Convolutional Network (Faster R-CNN) for human action recognition, with better generalization and frames-per-second processing capability.
  • Part of the team that won the 2017 Richardson US IGNITE Smart Cities Gigabit Challenge.
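The toolkit itself is in-house and not shown here, but the LSTM building block it relies on can be sketched in a few lines. The following is a minimal, illustrative pure-Python forward step of a single LSTM cell with a softmax output, not the project's actual code; all names, dimensions, and weight values are hypothetical.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def softmax(v):
    # Numerically stable softmax over a list of scores.
    m = max(v)
    exps = [math.exp(x - m) for x in v]
    s = sum(exps)
    return [e / s for e in exps]

def lstm_step(x, h_prev, c_prev, params):
    """One LSTM forward step over the concatenation [x; h_prev].

    params maps gate name ('i', 'f', 'o', 'g') to (weight_rows, biases),
    where each weight row has len(x) + len(h_prev) entries.
    """
    z = x + h_prev  # concatenate input and previous hidden state
    def gate(name, act):
        rows, biases = params[name]
        return [act(sum(w * zi for w, zi in zip(row, z)) + b)
                for row, b in zip(rows, biases)]
    i = gate("i", sigmoid)    # input gate
    f = gate("f", sigmoid)    # forget gate
    o = gate("o", sigmoid)    # output gate
    g = gate("g", math.tanh)  # candidate cell update
    c = [fj * cj + ij * gj for fj, cj, ij, gj in zip(f, c_prev, i, g)]
    h = [oj * math.tanh(cj) for oj, cj in zip(o, c)]
    return h, c

# Toy usage: 3-dim input, 2-dim hidden state, small random weights.
random.seed(0)
def init(rows, cols):
    return ([[random.uniform(-0.1, 0.1) for _ in range(cols)] for _ in range(rows)],
            [0.0] * rows)

params = {name: init(2, 5) for name in "ifog"}
h, c = lstm_step([0.5, -0.2, 0.1], [0.0, 0.0], [0.0, 0.0], params)
probs = softmax(h)  # softmax output layer over the hidden state
```

In practice, unrolling `lstm_step` over a token sequence and backpropagating the loss through every step gives the BPTT training described above.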
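For the HMM-based part-of-speech tagger, decoding the most likely tag sequence is typically done with the Viterbi algorithm. Below is a hedged, self-contained sketch with a toy two-tag model; the tag names and probability tables are invented for illustration and are not the project's trained values.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely state sequence for an observation sequence under an HMM."""
    UNK = 1e-8  # floor probability for unseen transitions/emissions
    # V[t][s] = (best probability of a path ending in state s at time t, predecessor)
    V = [{s: (start_p[s] * emit_p[s].get(obs[0], UNK), None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p].get(s, UNK) * emit_p[s].get(obs[t], UNK), p)
                for p in states)
            V[t][s] = (prob, prev)
    # Backtrack from the best final state.
    best = max(states, key=lambda s: V[-1][s][0])
    path = [best]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return list(reversed(path))

# Toy two-tag model (noun/verb); all numbers are illustrative only.
states = ["NOUN", "VERB"]
start_p = {"NOUN": 0.6, "VERB": 0.4}
trans_p = {"NOUN": {"NOUN": 0.3, "VERB": 0.7},
           "VERB": {"NOUN": 0.8, "VERB": 0.2}}
emit_p = {"NOUN": {"dogs": 0.8, "bark": 0.1},
          "VERB": {"bark": 0.7, "dogs": 0.1}}
tags = viterbi(["dogs", "bark"], states, start_p, trans_p, emit_p)  # ['NOUN', 'VERB']
```

The rule-based logic mentioned above would sit on top of this: adjusting the `trans_p` and `emit_p` tables as new evidence is observed, while Viterbi decoding stays unchanged.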

Note: The research was supported by the National Institute on Drug Abuse of the National Institutes of Health under Award Number R34DA040954. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.