Machine Learning and AI are two of the biggest buzzwords in the world of tech today. They appear regularly alongside terms such as Big Data, Deep Learning and The Cloud. Some might even argue that they have been used so often that they have lost all meaning. Well, not quite yet!

It’s about time to unpick the difference and see how IoT fits in.

 

What is Artificial Intelligence (AI)?

Artificial Intelligence (AI) is the term applied to the broad concept of ‘intelligent’ machines. To summarise it neatly, an AI system can perceive its environment and act in a way that maximises its chance of successfully achieving its goals. In other words, it acquires knowledge and then learns how to apply it.
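The ‘perceive, then act to maximise a goal’ loop above can be sketched in a few lines. Everything here (a thermostat-style agent, its actions and their scoring) is invented purely for illustration:

```python
# A minimal sketch of the perceive-then-act loop described above:
# a toy thermostat agent whose goal is to keep a room at 21 °C.

TARGET = 21.0

def perceive(environment):
    """Read the only sensor this toy agent has."""
    return environment["temperature"]

def act(temperature):
    """Choose the action that best moves the room toward the goal."""
    actions = {"heat": +1.0, "cool": -1.0, "idle": 0.0}
    # Score each action by how close it would leave us to the target
    return min(actions, key=lambda a: abs(temperature + actions[a] - TARGET))

print(act(perceive({"temperature": 18.5})))  # heats when too cold
print(act(perceive({"temperature": 21.2})))  # idles when near target
```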

The notion of a created object being given a human-like consciousness has been around for thousands of years. The Greek myth of the golden talking handmaidens of Hephaestus is one example of AI: created beings mimicking human responses. Most of us are also familiar with the tale of Pandora, but how many of us have connected the story of the created ‘first woman’ with artificial intelligence?

To give a little background, the field of artificial intelligence research was founded in 1956, inspired by research into the brain’s neural network, which demonstrated that the brain is an electrical network of neurons firing all-or-nothing pulses. Combined with theories such as Alan Turing’s theory of computation, which showed that any form of computation could be described digitally, this provided a feasible foundation for the construction of an electronic brain.

We’ve come on in leaps and bounds since 1956. According to Bloomberg’s Jack Clark, 2015 was a landmark year for artificial intelligence, thanks in part to improvements dating from 2012, when error rates in image-processing tasks began to fall sharply. Things have only got better from there.

To put a modern spin on it: replace Pandora with Alexa, and that’s roughly where we are today.

Amazon Alexa

 

What is the difference between narrow and general AI?

AI is usually classified into two groups: narrow and general. Narrow AI is increasingly common and is designed to perform a single task intelligently, such as manoeuvring an autonomous vehicle or trading stocks.

AI is now involved in multiple aspects of our lives and is being rapidly incorporated into a number of different verticals. Taking an example from the world of agri-tech, John Deere is putting data-driven analytical tools and automation into the hands of farmers. It offers automated farm vehicles that plough and sow using pinpoint-accurate GPS systems, alongside its Farmsight system, designed to help with agricultural decision-making.

According to Professor Noel Sharkey, we are still a way away from generalised AI, which is theoretically designed to handle any task. After Google’s DeepMind AI beat the reigning Go champion, he stated: ‘It is an incredible achievement, and most experts thought an AI winning at Go was 20 years away, so DeepMind is leading the field, but this AI doesn’t have general intelligence. It doesn’t know that it is playing a game and it can’t make you a cup of tea afterwards.’ Narrow AI is narrow by design; it cannot adapt to a dynamic situation.

 

What is Machine Learning (ML)?

Machine Learning is a subfield of Artificial Intelligence, which might be why the terms are so often used interchangeably. In one sentence: Machine Learning uses experience to look for patterns. It is a set of algorithms that learn through experience, relying on large data sets to find common patterns.
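As a toy illustration of ‘using experience to look for a pattern’, here is an ordinary least-squares line fit in plain Python: the ‘experience’ is a handful of noisy samples of y = 2x + 1, and the learned pattern can then be reused for prediction. The data points are made up:

```python
# Learning from experience, in miniature: fit a line to noisy (x, y)
# samples with ordinary least squares, no libraries required.

def fit_line(xs, ys):
    """Learn the slope and intercept that minimise squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# "Experience": noisy samples drawn from the pattern y = 2x + 1
xs = [0, 1, 2, 3, 4]
ys = [1.1, 2.9, 5.2, 6.8, 9.0]

slope, intercept = fit_line(xs, ys)
print(slope, intercept)  # close to the true values 2 and 1
```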

The algorithm mines data, such as that from images, to identify patterns that exist. For example, at the 2017 Mobile World Congress, a specific version of IBM’s machine learning system, Watson, was given hundreds of images of the architect Antoni Gaudí’s work along with other complementary material to help the machine learn the possible influences for his work. Architects then used two different aspects of Watson’s capabilities, Visual Recognition and AlchemyLanguage, to create The First Thinking Sculpture. Data included images of Barcelona, details of its culture, biographies, historical articles and song lyrics.

It wasn’t that long ago that Google’s AlphaGo defeated the world’s best Go player. Go is thought to be one of the world’s most complex games and is far more difficult for computers than chess. To quote the BBC, ‘AlphaGo has built up its expertise by studying older matches and playing thousands of games against itself’. To get into the real nitty-gritty, AlphaGo and its more recent successors select moves via a Monte Carlo tree search algorithm, guided by knowledge previously ‘learnt’ through machine learning. Specifically, they use an artificial neural network trained to predict AlphaGo’s own moves as well as the winners of its games.

The neural net, a deep learning method, strengthens the tree search, resulting in higher-quality move selection and stronger self-play in the next iteration.
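For a flavour of the tree-search half, here is the standard UCT (Upper Confidence bounds applied to Trees) selection rule commonly used in Monte Carlo tree search. This is a generic sketch, not AlphaGo’s actual code, which uses a neural-network-guided variant; the move statistics below are invented:

```python
import math

def uct_score(wins, visits, parent_visits, c=1.4):
    """UCT: balance exploitation (win rate) against exploration
    (trying moves we know little about)."""
    if visits == 0:
        return float("inf")  # always try unvisited moves first
    return wins / visits + c * math.sqrt(math.log(parent_visits) / visits)

# Choosing among three candidate moves with toy statistics
stats = {"a": (6, 10), "b": (3, 4), "c": (0, 0)}  # move: (wins, visits)
parent_visits = 14
best = max(stats, key=lambda m: uct_score(*stats[m], parent_visits))
print(best)  # the unvisited move "c" wins on pure exploration
```

In a full search, the chosen move is simulated, the win/visit counts are updated, and the loop repeats thousands of times before a real move is played.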

 

What about changes in data set?

For Machine Learning to work at its best, any change in the underlying data set must be captured: the algorithm has to be fed new data reflecting that change in order to keep predicting the future accurately.
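A crude sketch of what ‘capturing a change in the data set’ can look like in practice: compare incoming data against the training distribution and flag when retraining is due. The threshold rule here is deliberately simple and the numbers invented; real pipelines use proper statistical tests such as Kolmogorov–Smirnov:

```python
# Flag when incoming data has drifted away from the training set,
# a common trigger for retraining a model on fresh data.

def has_drifted(train, recent, threshold=2.0):
    """Flag drift when the recent mean sits more than `threshold`
    standard deviations away from the training mean."""
    n = len(train)
    mean = sum(train) / n
    std = (sum((x - mean) ** 2 for x in train) / n) ** 0.5
    recent_mean = sum(recent) / len(recent)
    return abs(recent_mean - mean) > threshold * std

train = [10, 11, 9, 10, 12, 10, 9, 11]
print(has_drifted(train, [10, 11, 10]))   # False: same regime
print(has_drifted(train, [25, 27, 26]))   # True: time to retrain
```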

Sometimes, you get wonderful occurrences where there are still a few kinks to iron out, but you can laugh them off.

In other instances, the problems created by poor data sets can be far more severe, with much larger ramifications. Amazon’s AI recruiting tool, in use between 2014 and 2017, helped review resumes and make recommendations.

However, it had been trained using resumes submitted to Amazon over the previous decade, when far more male than female candidates had been hired.

The result? The software reportedly downgraded resumes that contained the word “women” or implied that the applicant was female. Amazon has since abandoned the software.

It just goes to show that you’re only as good as your data!

 

Machine Learning and IoT

It won’t come as a surprise that there is a lot of interest in advancing existing ML-driven IoT to boost both customer experience and industrial processes. Yet what exactly ML means in the context of IoT is currently a bit of a grey area. Ericsson foresees the future of ML and IoT as a mutually beneficial partnership in which ML’s intelligent algorithms equip IoT end-devices.
It’s all about ML inference capabilities.

You may have come across the phrase TinyML and wondered what on earth it means and how it relates to IoT. TinyML is all about running Machine Learning inference on Ultra-Low-Power (ULP, ~1 mW) microcontrollers found on IoT devices. At the moment, a number of challenges still limit the implementation of TinyML in the embedded IoT world.
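To give a feel for what TinyML inference boils down to, here is a sketch of an 8-bit quantised dense layer evaluated with integer arithmetic only, the kind of operation a ULP microcontroller without floating-point hardware can manage. The weights, biases and scale below are invented for illustration:

```python
# TinyML in miniature: an int8-quantised dense layer using only
# integer maths, as a float-free MCU would run it.

WEIGHTS = [[12, -34, 56], [-7, 89, -21]]   # int8 weights, 2 outputs x 3 inputs
BIAS = [100, -50]                          # int32 biases
SCALE_NUM, SCALE_SHIFT = 3, 10             # fixed-point scale ≈ 3 / 2**10

def dense_int8(inputs):
    """Integer-only dense layer: y = scale * (W·x + b)."""
    outputs = []
    for row, b in zip(WEIGHTS, BIAS):
        acc = b + sum(w * x for w, x in zip(row, inputs))  # int32 accumulate
        outputs.append((acc * SCALE_NUM) >> SCALE_SHIFT)   # fixed-point rescale
    return outputs

print(dense_int8([5, -3, 7]))
```

Frameworks aimed at this space pre-compute the quantisation scales at conversion time, so the device never touches a floating-point number.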

If we look at the hardware side of IoT, the growth of artificial intelligence has led to Machine Learning tasks being optimised for existing chip designs such as graphics processing units (GPUs), and to the design of new dedicated hardware such as application-specific integrated circuits (ASICs): chips designed exclusively to execute specific Machine Learning operations. However, at the moment, more investment is needed in the embedded world.

Software can make up for some of the hiccups that hardware creates. In the web domain there are tonnes of ML-oriented applications. However, porting these applications to embedded devices is not so straightforward. Ericsson points out that high-level programming languages such as Python, and the large size of the software runtime, can make porting painful, and occasionally impossible.

This isn’t to say that it cannot be done. One of our top developers was of the opinion that it is indeed possible to run ML algorithms, as long as they are embedded on the device itself.

As Mark Hung, research VP at Gartner states, “We’re definitely still in the very early stages.” He continues, “It’s only fairly recently that people understand the value of that data and are finding out what’s the best way to extract that value.”

So, in our minds it’s time to roll up those sleeves and head to the drawing board. We have a feeling that this is a definite possibility, so let’s see how far we can run with it!

 

PS: Get cracking with Machine Learning on your Pycom devices. Our Machine Learning has already launched on Pybytes. Check it out here:

Machine Learning screens on Pybytes

Head over to http://pybytes.pycom.io to try it out for yourself.