Does Deep Learning AI deserve all the hype it is getting? Or are its limits catching up to reality?

Ronald Schmelzer, managing partner and founder of AI firm Cognilytica, explains in greater detail in this excerpt from CTOvision.com:

Deep learning is being applied successfully in a wide range of situations, such as natural language processing, computer vision, machine translation, bioinformatics, gaming, and many other applications where classification, pattern matching, and automatically tuned deep neural networks work well. However, the same approach that delivers these advantages comes with a number of disadvantages.

The most notable of these disadvantages is that, because a deep learning network consists of many layers, each with many interconnected nodes and each configured with different weights and other parameters, there's no way to inspect the network and understand how any particular decision, clustering, or classification is actually made. It's a black box, which means deep learning networks are inherently unexplainable. As we've detailed in some of our other research on Explainable AI (XAI), any system that's being used to make decisions of significance will eventually need explainability to satisfy issues of trust, compliance, verifiability, and understandability. While DARPA and others are working on ways to explain deep learning neural networks, the lack of explainability remains a significant drawback for many applications.
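To get a concrete sense of why inspection fails, consider the minimal sketch below (it assumes PyTorch, and the layer sizes are arbitrary and purely illustrative). Even a toy feedforward classifier has on the order of ten thousand weights and biases, and none of them corresponds to a human-readable rule you could point to when asked why a particular prediction was made.

```python
# A minimal sketch (assuming PyTorch) of why even a small deep network
# is hard to inspect: its "knowledge" is spread across thousands of
# individually meaningless weights.
import torch
import torch.nn as nn

# A deliberately small classifier: 20 input features, three hidden
# layers of 64 units each, 3 output classes (all sizes are made up).
model = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 3),
)

# Count every trainable weight and bias in the network.
total = sum(p.numel() for p in model.parameters())
print(f"Trainable parameters: {total}")  # roughly 10,000 for this toy model

# A "decision" is just this stack of matrix multiplications; the output
# logits carry no trace of *why* one class scored higher than another.
x = torch.randn(1, 20)
print(model(x))
```

Scale that toy model up to the millions or billions of parameters in production networks and the inspection problem only gets worse, which is why post-hoc explainability techniques are an active research area rather than a solved problem.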

The second disadvantage is that deep learning networks are very good at classification and clustering of information, but not nearly as good at other decision-making or learning scenarios. Not every learning situation is one of classifying something into a category or grouping information together into a cluster. Sometimes you have to deduce what to do based on what you've learned before. Deduction and reasoning are not a forte of deep learning networks.
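The contrast can be made concrete with a short Python sketch. The classification half assumes scikit-learn and its bundled iris dataset purely for illustration; the deduction half uses made-up family facts. A trained classifier maps an input to a label, while deduction chains known facts into a new conclusion, which is not the kind of task a classifier is built for.

```python
# Illustrative sketch: classification vs. deduction.
# Assumes scikit-learn is installed; the data and facts are toy examples.
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier

# Classification: the kind of task neural networks handle well --
# map feature vectors to one of a fixed set of categories.
X, y = load_iris(return_X_y=True)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000,
                    random_state=0).fit(X, y)
print(clf.predict(X[:1]))  # outputs a category label, nothing more

# Deduction: chaining known facts to reach a new conclusion.
# A trained classifier has no mechanism for this multi-step reasoning;
# a few lines of symbolic code do it directly.
parents = {"Bob": "Alice", "Carol": "Bob"}  # child -> parent facts

def grandparent(child):
    parent = parents.get(child)
    return parents.get(parent) if parent else None

print(grandparent("Carol"))  # "Alice" -- deduced by chaining facts, not classified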