Reflections and predictions on the state of artificial intelligence & machine learning, by Jorge Garcia de Bustos, Technical Presales Consultant at Godel Technologies.
Open the pages of business and technology magazines and everyone has a view on what artificial intelligence and machine learning will mean not only for commercial entities, but for the wider world we live in. Here we asked Godel Technical Presales Consultant Jorge Garcia de Bustos for his thoughts on the subject and for a prediction of where things may be heading.
1. What has led us to our current position in artificial intelligence and machine learning?
Four key factors have made the adoption of machine learning possible in the last decade:
a. Availability of data. Collectively we are producing quintillions of bytes of data (a quintillion is 1 followed by 18 zeroes!) every single day.
b. The cost of data storage has fallen by several orders of magnitude in the last 30 years, and public cloud providers offer multiple options to store ever-increasing volumes of data without having to own a costly data centre.
c. ML toolkits and APIs have become more mature, with many well-documented open-source libraries for languages like Python, R and Java (see the short sketch after this list).
d. Computing power increased for many years following Moore’s Law, and we now have specialised hardware such as GPUs and TPUs to accelerate model training and inference.
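To give a flavour of that toolkit maturity, here is a minimal sketch using scikit-learn, one of the well-documented open-source Python libraries mentioned above. The dataset and model are illustrative choices made for the example, not a recommendation from the article.

```python
# A few lines of scikit-learn are enough to train and evaluate a working model,
# something that would have required far more plumbing a decade or two ago.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Load a small built-in dataset and hold back a test split
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# Train a model and check how well it generalises to unseen data
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```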
2. What are the challenges?
We must de-mystify machine learning. In the same way that police forensics is not quite like the ‘CSI’ TV series, data science and ML are not like what you see in films. Laypeople often expect magic, but we must avoid selling smoke and mirrors and set the right expectations about what the tools can and can’t do.
As we move forward it’s crucial that development teams and data scientists start to work together. Trained ML models are beginning to appear in IT systems, fulfilling very specific purposes. We’re already a long way down the road in terms of the concepts of test automation, continuous integration and continuous deployment within software development. Now, these processes must be applied to training and deploying machine learning models too.
This can only be achieved if the teams collaborate – in much the same way that QA and development teams have generally grown to work integrally with one another. Data scientists can’t sit in isolation without a view of the right tools and processes needed to build robust applications, because machine learning models that sit inside core applications need to be built just as correctly as the applications themselves.
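As a concrete illustration of bringing those processes to models, here is a minimal sketch of a model quality gate written as an automated test, so that continuous integration treats a trained model like any other artefact. The dataset, model and threshold are assumptions made for the example; a real pipeline would load the team’s own training artefacts.

```python
# A pytest-style quality gate: if the candidate model falls below an agreed
# accuracy threshold on held-out data, the test fails and CI blocks deployment,
# just as a failing unit test would.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

MIN_ACCEPTABLE_ACCURACY = 0.90  # quality bar agreed between data scientists and developers


def train_candidate_model():
    """Train the candidate model. In a real pipeline this might instead load
    an artefact produced by an earlier training job."""
    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0
    )
    model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
    return model, X_test, y_test


def test_model_meets_quality_gate():
    """Runs in continuous integration: a regression in model quality is caught
    before the model is deployed."""
    model, X_test, y_test = train_candidate_model()
    accuracy = accuracy_score(y_test, model.predict(X_test))
    assert accuracy >= MIN_ACCEPTABLE_ACCURACY, f"Accuracy {accuracy:.3f} is below the gate"
```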
3. What’s your biggest prediction for AI and ML in 2020?
At some point soon, there’s going to be a machine learning crash. The tools we have today are pretty unintelligent compared with what we aspire for them to be. They need data that is rigidly similar to the datasets they were trained on, which makes them brittle and prone to error when unexpected events occur. If we apply machine learning to scenarios it wasn’t meant for – especially where outcomes are sensitive and have the potential for disaster if things go wrong – we’ll create disillusionment around the ‘magic’ abilities of artificial intelligence and machine learning technologies. The technology has extremely valuable uses, and it saves costs massively when applied correctly. But until it’s revolutionised to be as robust as we need it to be, it can’t extend beyond those specific purposes. And we can’t predict when, or if, that will happen.
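The brittleness described above can be shown with a toy example. The sketch below is an illustration constructed for this piece, not a real deployment: a simple model fits its training range well, then fails badly once inputs drift outside the data it was trained on.

```python
# A model captures only the pattern it saw during training; inputs from
# outside that range produce errors that are orders of magnitude larger.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Training data: inputs from a narrow range, with a non-linear relationship
X_train = rng.uniform(0, 10, size=(500, 1))
y_train = X_train[:, 0] ** 2 + rng.normal(0, 1, size=500)

# "Unexpected" data: the same relationship, but inputs outside the training range
X_shifted = rng.uniform(20, 30, size=(500, 1))
y_shifted = X_shifted[:, 0] ** 2 + rng.normal(0, 1, size=500)

model = LinearRegression().fit(X_train, y_train)

print("Error on familiar inputs:  ", mean_absolute_error(y_train, model.predict(X_train)))
print("Error on unfamiliar inputs:", mean_absolute_error(y_shifted, model.predict(X_shifted)))
```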