Minje Kim
Minje Kim, an assistant professor of Intelligent Systems Engineering in the School of Informatics and Computing, has received a gift from Intel to pursue a method of lowering the power and computing cost of deep learning processes in artificial intelligence. Intel sought a portfolio of research projects focused on compelling new human-computer interaction advancements that could put HCI on the precipice of a breakthrough.
As smart devices have become more ubiquitous, advances in deep learning have brought AI to near-human performance. Deep learning allows complicated intelligence tasks, such as computer vision, near real-time language translation, and music recognition, to be performed quickly, but such computing comes at a cost. Because neural networks represent each of the millions of parameters in a computation in up to 64-bit form, the computations required are both sizeable and power-hungry.
Such computations aren’t a problem on laptop or desktop computers, but smart devices are limited by battery life and processing power, and offloading the work to the cloud introduces delays that slow down the process. If an application drains a battery too quickly or is slow to present results, it will be of limited use.
Kim’s project, Bitwise Deep Recurrent Neural Networks (BDRNN) for Efficient Context-Aware Pervasive Systems, focuses on changing the parameters from multi-bit encoding to a single bit while retaining nearly all of the information. The resulting computations would require less power and memory, allowing processes to run locally on smart devices without losing the near-human-level performance of deep learning.
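To give a sense of the savings involved, the sketch below is a hypothetical illustration (not Kim's actual BDRNN code) of the general idea behind single-bit parameters: each floating-point weight is reduced to its sign, so a million 64-bit parameters can be packed into one bit apiece.

```python
import numpy as np

# Hypothetical illustration: reduce each 64-bit weight to its sign (+1/-1),
# then pack one bit per parameter. The names here are made up for the example.
rng = np.random.default_rng(0)
weights = rng.standard_normal(1_000_000)        # ordinary 64-bit parameters

binary = np.where(weights >= 0, 1, -1).astype(np.int8)  # single-bit codebook

# Packed at one bit per parameter, storage shrinks by roughly 64x.
packed = np.packbits(binary > 0)
print(weights.nbytes)   # 8,000,000 bytes at 64 bits per weight
print(packed.nbytes)    # 125,000 bytes at 1 bit per weight
```

How nearly all of the information survives this reduction is, of course, the substance of the research itself; the sketch only shows the memory arithmetic that motivates it.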
The research plans to accomplish three things. First, the work aims to devise a proper training mechanism for BDRNN by transplanting recent successful training policies from ordinary bitwise neural networks. Second, it will measure the success of the training method by checking whether the performance of BDRNN catches up to the upper bound defined by state-of-the-art comprehensive DRNN systems, which tend to require far more resources. Finally, the run-time efficiency of BDRNN systems will be assessed in a proper mockup hardware implementation.
“The idea is to somehow come up with a way to streamline the procedure while still keeping the high-performance aspect,” Kim says. “If you can rely on a single-bit logical gate for computation (e.g. XNOR), there will be a great savings in the amount of power and memory required.”
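One way to picture the XNOR trick Kim mentions (a standard device in bitwise networks, sketched here with made-up values rather than anything from the project): when both inputs and weights are constrained to +1/-1, each multiplication becomes an XNOR on the underlying bits, and the accumulation becomes a bit count.

```python
import numpy as np

# Illustrative sketch: with values constrained to +1/-1, multiply-accumulate
# reduces to XNOR (same sign -> +1, different sign -> -1) plus a popcount.
x = np.array([1, -1, 1, 1, -1], dtype=np.int8)
w = np.array([1, 1, -1, 1, -1], dtype=np.int8)

# Ordinary multiply-accumulate, for comparison.
reference = int(np.dot(x, w))

# Bitwise version: encode +1 as bit 1, -1 as bit 0, then XNOR and count.
xb, wb = x > 0, w > 0
matches = np.count_nonzero(~(xb ^ wb))   # XNOR: positions where signs agree
bitwise = 2 * matches - len(x)           # bit count maps back to the dot product

print(reference, bitwise)  # both 1
```

On real hardware the XNOR and the bit count are each a single cheap instruction over 64 packed weights at a time, which is where the power and memory savings Kim describes come from.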
Intel requested proposals addressing technological areas such as robotics and autonomous machines; virtual and augmented reality; visualization, simulation, and modeling; advanced imaging and displays; wearables and human activity or state monitoring systems; and other areas. The computing giant also looks to address application domains such as smart spaces, whole-home personal assistants, person-to-person interaction, group conflict resolution, and personal coaching.