Artificial intelligence is the new buzzword in the tech world. But this isn’t a new phenomenon, or even a new technology. In this article, I’d like to explore the role of neural networks within AI, their history, and the benefits – and challenges – they bring.
The role of neural networks in AI
We rely on AI without even realising it. Touch ID, facial recognition and speech recognition all use AI. In these instances, the technology being utilised is often based on artificial neural networks, part of the subset of AI commonly referred to as machine learning.
Neural networks learn by being fed data – far more than a human could process – and making connections, spotting patterns in much the same way our brains do subconsciously over time.
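As a toy illustration of this learning-from-examples idea, here is a single artificial neuron nudging its weights whenever it gets an example wrong, until it has learned a simple pattern (logical AND). Real networks have many layers and millions of weights, but the principle is the same; the data, learning rate and pass count here are illustrative.

```python
# Training examples: ((input1, input2), expected output) for logical AND.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # connection weights, adjusted during training
b = 0.0          # bias term
lr = 0.1         # learning rate: how big each correction is

def predict(x):
    """Fire (output 1) if the weighted sum of inputs exceeds zero."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Training: repeatedly show the examples and correct the weights on errors.
for _ in range(20):
    for x, target in data:
        error = target - predict(x)
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in data])  # [0, 0, 0, 1] – the pattern is learned
```

The network is never told the rule explicitly; it converges on it purely from the examples, which is exactly what makes the approach useful when no one can write the rule down.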
Developments in neural networks
Neural networks have been around for some time. In 1995, my M.Eng dissertation used a neural network to identify damaged roller bearings. We fed the neural network with vibration traces of different roller bearings: some damaged, some not. It processed this massive amount of data in a way that a human couldn’t possibly, and learned to predict when another (unknown) roller bearing was likely to fail.
In 1995 our neural network application needed to run overnight, or even over the weekend, to train itself on the data. However, this is no longer the case, and increased processing speeds have brought AI into the mainstream. When you get a new phone, it can process the data and recognise your fingerprint in seconds rather than hours.
“Increased processing speeds have brought AI into the mainstream. When you get a new phone, it can process the data and recognise your fingerprint in seconds rather than hours”
The limitations of neural networks
Neural networks are incredibly useful for analysing vast amounts of data where there is no immediately apparent pattern. But their use is still limited, and there are numerous examples of them going wrong. For example, when Amazon tried to train a neural network to distinguish good CVs from bad, it was found to favour men significantly over women, and the project was brought to a close.
At the end of the day, we still train these networks with 1s and 0s. A neural network is only as good as the data it is trained on (hence ‘garbage in, garbage out’). An experienced human can spot a blagger’s CV most of the time, but it may be a while yet before a neural network can.
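A toy sketch of ‘garbage in, garbage out’: if the labels in the training data happen to correlate with an irrelevant attribute, even a deliberately simple frequency-based ‘model’ will faithfully learn that correlation. The data here is entirely made up for illustration.

```python
from collections import Counter

# Hypothetical training set of (irrelevant_attribute, label) pairs.
# The labels are skewed: group "A" was historically marked "good".
training = ([("A", "good")] * 8 + [("A", "bad")] * 2 +
            [("B", "good")] * 2 + [("B", "bad")] * 8)

def train(data):
    """For each attribute value, learn the most common label seen."""
    counts = {}
    for attr, label in data:
        counts.setdefault(attr, Counter())[label] += 1
    return {attr: c.most_common(1)[0][0] for attr, c in counts.items()}

model = train(training)
print(model)  # {'A': 'good', 'B': 'bad'} – the historical bias, reproduced
```

The model has learned nothing about merit, only about the skew in its training data – which is essentially what happened with the CV-screening example above.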
Are there alternatives to neural networks?
AI doesn’t have to rely on neural networks. Indeed, the same ability to process large datasets allows us to analyse them statistically, too. Huge improvements in processor speed mean such datasets can be analysed rapidly, and the resulting statistical measures can then be used to make decisions much as a human would. No training is needed, because we as humans already have an idea of what the outcome should be – we are simply creating algorithms that make the same decisions we would, but more quickly and more impartially than we can ourselves.
This logic-based approach, often known as algorithm-based decision making, is what is more commonly used in EdTech. However, many don’t consider it true AI, as the output is constrained by the developer.
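A minimal sketch of this kind of statistical, no-training decision making, using fraud detection as the scenario: a human writes the decision rule (here, “flag anything more than two standard deviations from the mean”), and the computer merely applies it quickly. The transaction amounts and the two-standard-deviation threshold are illustrative assumptions, not taken from any real system.

```python
import statistics

# Hypothetical transaction amounts; in practice this would be a huge dataset.
amounts = [12.5, 9.9, 11.2, 10.4, 13.1, 9.5, 10.8, 250.0, 11.7, 10.1]

mean = statistics.mean(amounts)
stdev = statistics.stdev(amounts)

# Human-written decision rule: flag anything more than two standard
# deviations from the mean. No training step is involved.
flagged = [a for a in amounts if abs(a - mean) > 2 * stdev]
print(flagged)  # [250.0]
```

The output is entirely predictable from the rule – which is exactly why some argue this is automation of human judgement rather than true AI.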
Is AI – in the form of machine learning – better than the use of algorithms, logic and basic data analysis? To make that judgement, I would look at the problem that is being solved.
“If analysing a CT scan to determine early-stage lung cancer, then AI could be (and is) a good tool to help”
For example, if analysing a CT scan to determine early-stage lung cancer, then AI could be (and is) a good tool to help. In this instance, a doctor will use judgement – and judgements differ, as there is no obvious threshold for deciding whether the lung is cancerous or not, just millions of data points in the form of an image.
Contrast this with looking at a child’s results from their latest maths test to determine what they need to study next. This is an instance where there are only a few data points (the time taken per question and the number of attempts per question, for example). A teacher would aggregate this data and make a decision based on it and their pedagogical experience. There is likely to be a much more obvious threshold. This is a process that can be replicated through data analysis and a good algorithm.
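A rule-based recommender of the kind just described might be sketched as follows. The topics, timings and thresholds are all hypothetical – the point is that a teacher’s aggregate-and-compare process translates directly into a few lines of logic.

```python
# Per topic: list of (seconds_taken, attempts) for each question answered.
results = {
    "fractions": [(95, 3), (120, 2), (110, 3)],
    "algebra":   [(40, 1), (35, 1), (50, 2)],
}

# Assumed thresholds a teacher might choose.
SLOW_SECONDS = 90    # average time per question considered "slow"
MANY_ATTEMPTS = 2    # average attempts per question considered "struggling"

def topics_to_revise(results):
    """Recommend topics where the pupil was slow or needed many retries."""
    revise = []
    for topic, questions in results.items():
        avg_time = sum(t for t, _ in questions) / len(questions)
        avg_attempts = sum(a for _, a in questions) / len(questions)
        if avg_time > SLOW_SECONDS or avg_attempts > MANY_ATTEMPTS:
            revise.append(topic)
    return revise

print(topics_to_revise(results))  # ['fractions']
```

No training data, no neural network – just aggregation and thresholds, which is why this style of system dominates in EdTech.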
The future of neural networks in AI
It’s fair to say that we are only a tiny fraction of the way through our journey with AI. We are years away from AI being able to replicate or create its own AI, which is where much of the scaremongering and hype surrounding AI originates.
Instead, that hype needs to be balanced against the massive benefits AI brings. Beyond iPhone security and self-driving cars, AI is powering huge improvements in breast cancer screening, early-warning systems for tsunamis, and fraud detection, to name just a few.