Is deep learning always the right way to go?

From self-driving cars to virtual assistants to fraud detection, deep learning holds mass appeal. With pre-trained models in place, organisations can apply deep learning algorithms to huge datasets to generate tremendous data-driven insight. However, that doesn’t mean deep learning is the answer to every machine learning problem.

The right use cases for deep learning come down to individual business requirements, short- and long-term objectives, the data available to the organisation, and its existing AI maturity. In some instances, classical machine learning may be a more appropriate and straightforward solution to a given business requirement.

If you’re unsure whether deep learning will solve a given business problem, it’s time to consider your data, proposed business use cases, time, resources, and methods for assessing success.

Check the label

One of the main strengths of deep learning lies in its ability to process complex data and relationships. To do this, however, you need properly labelled data, which means preparing your dataset so that machines can understand the images, text, videos or audio contained within it. A lack of sufficient, precisely labelled, high-quality data is one of the main reasons why deep learning may deliver disappointing results.

Unlabelled or poorly labelled data can significantly affect the training of the algorithm used to refine your machine learning model, which will ultimately decrease the business value of any solution you create.

Alexis Fournier, Director of AI Strategy, Dataiku

Cleaning, preparing and labelling data often takes up a massive amount of time and resources from teams, who would no doubt rather be building the next machine learning models or pushing models into production. Here there are two choices: one, opt for traditional machine learning algorithms and models instead of diving into deep learning – they may take up less of your data team’s time and resources. Or two, look into active learning: a process that uses machine learning to select which unlabelled data points are most valuable to label next, making annotation far more efficient.

A last note on data: unfortunately, it’s not just a matter of quality. Quantity is also key, and if you want to extract complex patterns from your data, you will need thousands – or even hundreds of thousands – of labelled data points for any given classification task.

Consider machine learning versus deep learning

Despite the fact that deep learning is becoming increasingly accessible, in practice it’s still a complicated and expensive endeavour. Because of their many layers and the amount of data required, deep learning models take a long time to train and demand a lot of computational power, making them very time- and resource-intensive. Deep networks are typically trained on Graphics Processing Units (GPUs) to reach high performance, but GPUs are often prohibitively expensive and present a real barrier to deep learning for many organisations.

On the other hand, classical machine learning algorithms can be trained easily on a decent CPU (Central Processing Unit), without elite, expensive hardware. Because they aren’t so computationally expensive, data teams usually find they can also iterate machine learning models faster and try out different techniques in a shorter period of time.

There is also the issue of interpretability surrounding deep learning. While deep learning reduces the human effort of feature engineering (as this is automatically done by the machine), it also increases complexity, making it more challenging for humans to understand and interpret models.

While deep networks usually achieve higher accuracy than classical machine learning methods across most domains, the trade-off between accuracy and interpretability can be a tough call.

Model interpretability can end up being one of deep learning’s biggest challenges: the highly complex, non-linear relationships between variables make deep neural networks exceptionally difficult to interpret. There is also the issue of trust: because deep learning models are harder to interpret, it’s more difficult for people to ‘believe’ in the results, which can hold back the adoption of solutions based on them.

By contrast, because classical machine learning is built on direct feature engineering, its algorithms are usually quite straightforward to interpret and understand – often because data scientists and engineers have been directly involved in feature selection and engineering.
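As a small, hypothetical illustration of that point (the dataset and model here are our choices, not the article’s): with a linear model trained on engineered features, each learned coefficient maps directly back to a named feature, so the model’s behaviour can be read off:

```python
# Illustrative sketch: interpreting a linear model via its coefficients.
# Assumes scikit-learn is installed; the dataset is a stand-in example.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(data.data, data.target)

# Rank features by the magnitude of their (standardised) coefficients:
# one weight per named input feature, directly inspectable.
coefs = model.named_steps["logisticregression"].coef_[0]
ranked = sorted(zip(data.feature_names, coefs), key=lambda t: -abs(t[1]))
for name, weight in ranked[:5]:
    print(f"{name:25s} {weight:+.2f}")
```

A deep network offers no such one-to-one mapping between inputs and parameters, which is precisely the interpretability gap described above.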

It’s up to you

The notable performance and success of deep learning algorithms, combined with the increasing availability of pre-trained models built on publicly available data, has made headlines for deep learning over the past few years. However, deep learning is not a silver bullet. Whether you embrace deep learning or more classical machine learning for an upcoming data project, the most important thing is to make the decision in an informed way, so that you achieve tangible business results with enterprise AI.

