About Me

Pras

Hello and welcome to my webpage! I'm a doctoral candidate in the Computer Science department at Rensselaer Polytechnic Institute in Troy, NY. My advisor is Prof. Chris Carothers and my research deals with Neuromorphic Computing and Deep Learning. As part of my doctoral research, I've worked on several interesting projects, which I intend to showcase through this webpage. I'm actively seeking internship and job opportunities in the areas of Neuromorphic Computing and Deep Learning. If you'd like to know more about my work, please have a look at my research projects or glance through my resume. If you'd like to get in touch, feel free to contact me.

If I'm not getting results, writing papers or attending conferences, you'll most likely find me on the cricket field with the RPI Cricket Club or jamming and hanging out with my rock band Sonic Fractals! I'm also an independent musician, and SoundCloud is where you can listen to my music. Apart from research, music and sports, I enjoy outdoor activities like hiking, camping, kayaking, skiing and a handful of others. From time to time, I indulge in adventure sports like skydiving, river rafting, paragliding, parasailing and scuba diving.


Research Projects

Design Index for Deep Neural Networks July 2016

This was my first project after I started my PhD program in Computer Science at RPI. At the time, I was exploring the field of Deep Learning - understanding the underlying math and the design process. I realized that designing Deep Neural Networks (DNNs) was considered an art rather than a science. It was governed by certain "rules of thumb", and understanding why these rules existed required a few years' worth of experience in designing DNNs. So, I began a scientific study of the process of designing DNNs using a factorial design approach. Out of this study came the idea of a Design Index for DNNs. The Design Index is a performance metric that quantifies the accuracy and overfitting of a DNN model.
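
To give a flavor of what such a metric might look like, here is a minimal Python sketch of a design-index-style score that rewards validation accuracy and penalizes the train-validation gap. The names and the weighting below are illustrative assumptions for this page, not the actual formulation from the project.

# Illustrative sketch only: a design-index-style metric that rewards
# validation accuracy and penalizes overfitting (the train/validation gap).
# The weighting and names are assumptions, not the published definition.

def design_index(train_acc: float, val_acc: float, overfit_penalty: float = 1.0) -> float:
    """Higher is better: high validation accuracy and a small train/val gap."""
    overfit_gap = max(0.0, train_acc - val_acc)
    return val_acc - overfit_penalty * overfit_gap

# Example: a model with 99% train / 85% validation accuracy scores lower
# than one with 90% train / 88% validation accuracy.
print(design_index(0.99, 0.85))  # 0.71
print(design_index(0.90, 0.88))  # 0.86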

Node Failure Prediction on Supercomputers: A Comparative Analysis of Machine Learning Techniques July 2017

Imagine a setting where you have a supercomputer! In addition to CPUs, GPUs and/or FPGAs, each node of the supercomputer also has a Neuromorphic Processing Unit (NPU). By the way, "neuromorphic" literally means "brain-like". So, an NPU is a chip that performs computation by emulating the brain. Next-generation supercomputer designs are already exploring the possibility of having an NPU on each node. In such a setting, we would like to have an application running on the NPU that can predict when a node is about to fail - so that we can minimize the load on the failing node and fix it. Now, whenever you talk about predicting something, Machine Learning (ML) is the first thing that comes to mind. That's what we have done in this work! We have explored the possibility of using an ML predictor, running on an NPU (a piece of hardware tailor-made for ML tasks), to predict node failures on supercomputers.
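
To give a concrete flavor of the kind of predictor involved, here is a minimal Python sketch of a node failure classifier. The telemetry features and data below are made up for illustration; the actual work compares several ML techniques on real supercomputer data.

# Illustrative sketch: a binary classifier that predicts whether a node is
# about to fail from per-node telemetry. Features and data are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Hypothetical telemetry: [temperature, fan speed, corrected memory errors, load]
X = rng.normal(size=(1000, 4))
# Hypothetical label: 1 if the node fails within the next hour, else 0
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))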

Integer Programming approach to obtain Integral Weights on Neuromorphic Hardware August 2016

Machine Learning (ML) and Deep Learning (DL) applications dominate today's computational landscape, so much so that it seems fairly reasonable to have a dedicated piece of hardware for running ML/DL tasks. Neuromorphic (brain-like) chips do precisely this. They are tailor-made to run ML/DL tasks extremely fast and in an energy-efficient manner. Present-day neuromorphic hardware operates under the constraint that the learned weights of the ML/DL model are integers. This work explores the use of integer programming techniques to train deep neural networks in order to obtain more accurate integral weights that can be deployed directly onto neuromorphic hardware.
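
As a toy illustration of the idea (not the formulation used in the actual work), here is a Python sketch that fits a single linear neuron with integer-valued weights by minimizing the sum of absolute errors as a mixed-integer program, using the PuLP solver:

# Toy sketch: fit y ~ w1*x1 + w2*x2 + b with integer weights by minimizing
# the total absolute error. Data, variable ranges and formulation are
# illustrative only.
import pulp

# Tiny made-up dataset: (x1, x2) -> y
data = [((1, 2), 5), ((2, 1), 4), ((3, 3), 9), ((0, 1), 2)]

prob = pulp.LpProblem("integral_weights", pulp.LpMinimize)

# Integer weights constrained to a hardware-friendly range, e.g. [-8, 7]
w1 = pulp.LpVariable("w1", lowBound=-8, upBound=7, cat="Integer")
w2 = pulp.LpVariable("w2", lowBound=-8, upBound=7, cat="Integer")
b = pulp.LpVariable("b", lowBound=-8, upBound=7, cat="Integer")

# Auxiliary variables e_i >= |prediction_i - y_i|
errors = []
for i, ((x1, x2), y) in enumerate(data):
    e = pulp.LpVariable(f"e_{i}", lowBound=0)
    pred = w1 * x1 + w2 * x2 + b
    prob += pred - y <= e
    prob += y - pred <= e
    errors.append(e)

prob += pulp.lpSum(errors)  # objective: total absolute error
prob.solve()
print("w1 =", w1.value(), "w2 =", w2.value(), "b =", b.value())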

On-Chip Training of Deep Neural Networks on Simulated Neuromorphic Hardware September 2017

The word "neuromorphic" is derived from the Greek roots "neuro", referring to the brain and nervous system, and "morph", meaning "form"; it literally means "brain-like". Neuromorphic Computing refers to a new paradigm in which computation is performed by units called "neurons" that exist at the hardware level and emulate biological neurons. Neuromorphic devices are tailor-made to run deep learning or machine learning models, which are techniques for learning from data. The TrueNorth chip developed by IBM is a neuromorphic device that can deploy an already trained deep learning model, but as of today, it is not possible to train a new model on the chip. Our research tries to enable on-chip training of networks on simulated versions of neuromorphic chips.


Articles

From Manufacturing Engineering to Computer Science July 2017

Prasanna Date

To start things off and give you a little idea about my background, let me take you back to my high school days in India. In those days (the first decade of the twenty-first century), everybody and their brother was preparing for engineering entrance exams. It was almost as if they did not give themselves the luxury of a different career path. I had signed up for the Engineering rat race as well. But for me, things were slightly different. I actually enjoyed studying Physics and Math, and somewhere deep down I knew that Engineering was certainly my calling.

I ended up getting a good enough score to take up Manufacturing Engineering at BITS Pilani. At the time, the general perception about choosing an Engineering branch was simple - choose the branch in decreasing order of demand. So, it was Computer Science and Engineering, followed by Electrical Engineering, followed by Mechanical Engineering, followed by everything else. If given complete freedom, every Engineering student would take up one of these "top" three branches. No thought was spared for which Engineering branch was best suited for me, or which one I was best suited for. By these lofty standards, mine was a very mediocre branch. Although I did develop a keen interest in Manufacturing Engineering, my mind was too cluttered to spare a thought for whether it was the right branch for me or not. When the time came to pick a thesis topic in my final year, I involuntarily picked one that involved applying an intelligent performance evaluation model to a manufacturing setting.

At the time, I was also applying to PhD programs in Industrial Engineering all over the US. RPI and a few other schools were generous enough to consider my application worthy. I chose RPI, and in hindsight, I realize that my decision was governed by nothing but an extremely strong gut feeling. When I joined the program in Fall 2014, I was faced with the decision of choosing an advisor and a research area. The two choices for me at the time were 3D Printing or Deep Learning. Without an ounce of knowledge about the field, I chose Deep Learning. To this day, I cannot fathom why I did that. Again, this decision too was purely based on gut feeling and instinct.

Then came the summer of 2015, and I was searching with all guns blazing for an internship or summer research position that would pay my summer bills. I approached Prof. Carothers, with whom I had taken a course the previous semester. Keep in mind that my background was Manufacturing Engineering and I was approaching a big shot in High Performance Computing. These fields are poles apart, and from where I was standing, there was absolutely no overlap! I was certain that there was no way he was going to even consider me for the position, but then again, I just had to give it a shot...

In a meeting that would change the entire course of my subsequent professional life, he heard what I had to say, glanced once through my resume and offered me a research position - just like that! I was flabbergasted, excited and relieved, all at the same time. To this day, I have no clue what he saw in my profile. Sometimes I tell myself that all my stars perfectly aligned that day and I just got lucky. Sometimes I tell myself that I worked hard in his course and maybe deserved it to a certain extent. Maybe it was neither, maybe it was both. Whatever it was, so began my journey into this beautiful world of Computer Science, every moment of which I cherish to this day!

First Look at High Performance Computing July 2017

Prasanna Date

Coming soon...


Publications

Conference Publications

Journal Publications


Contact Info

GitHub

@prasannadate


Google Scholar

Prasanna Date


ResearchGate

Prasanna_Date


LinkedIn

@prasannadate


Facebook

@prasdate


Twitter

@PrasannaDate


Google Plus

+PrasannaDate