The limitations (and ridiculous power) of ANN creativity
Despite a lot of marketing talk about 'Cognitive Computing', ANNs are in many ways Artificial Intuition rather than intelligence per se. They can creatively fill in gaps and make intuitive leaps to produce an appropriate response to a given situation (System 1 thinking, à la Kahneman).
They are so powerful that they can, in effect, take over any human activity that takes no longer than about one second, or a series of such moments strung together (driving a car, recognising faces, reading handwriting, understanding and labelling objects in a scene).
- So, we have traditional computers, which are great for calculation.
- ANNs, particularly Deep Learning, give us Artificial Intuition and potentially super-human pattern spotting.
- That leaves us with the question of true intelligence – true reasoning about things.
Reinforcement learning can create AI that learns about situations but cannot conceptualise them. Conceptualisation is something we don't have, and will not have until another revolution in AI – one that could take a few years or a few decades.
Some of the latest developments from MIT can combine multiple discrete elements of something to synthesise something new, which seems promising. OpenCog and Wolfram Alpha are built on a top-down model, whereby a system is explicitly taught things rather than inferring properties from data (bottom-up). In theory, this can lead to a more reasoning-like process.
However, a lot of human grunt work is required to build such systems, and they are not optimised for performance. Marcus Hutter's AIXI design would be a near-ideal AI system if it could be implemented; unfortunately, it is considered computationally intractable.
There has been recent progress in generalising learning between ANNs, which seems very promising (although it could potentially introduce more biases or misconceptions through the back door), as well as in learning from a single example without needing a large curated dataset.
My hope is that some of the recent leaps in bottom-up approaches, and the GPUs, FPGAs, and ASICs now powering AI systems, can be translated into making this top-down reasoning process faster and easier.
To sum up, machine intelligence can do a lot of creative things: it can mash up existing content, reframe it to fit a new context, fill in gaps in an appropriate fashion, or generate potential solutions given a range of parameters.
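The "mash up existing content" kind of creativity can be illustrated with a deliberately tiny sketch: a bigram Markov chain that recombines words it has already seen. It never produces a genuinely new concept, only plausible rearrangements of its training data. The corpus and function names here are invented purely for illustration.

```python
import random
from collections import defaultdict

random.seed(1)

# A toy corpus: the only "knowledge" the model will ever have.
corpus = "the cat sat on the mat the dog sat on the log".split()

# Build bigram transitions: each word maps to the words that followed it.
transitions = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    transitions[a].append(b)

def mash_up(start, length=8):
    """Generate text by repeatedly sampling a seen-before next word."""
    word, out = start, [start]
    for _ in range(length - 1):
        nxt = transitions.get(word)
        if not nxt:
            break  # dead end: no observed continuation
        word = random.choice(nxt)
        out.append(word)
    return " ".join(out)

print(mash_up("the"))  # e.g. a plausible but unoriginal remix of the corpus
```

Every output word comes from the corpus; the model fills in gaps plausibly but cannot step outside what it was shown, which is the essence of the limitation described above.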
Outside of a few potential hints at something deeper, ANNs do not appear to be generating purely original concepts or ideas, or performing abstract reasoning, at this time. However, surprisingly few human tasks or roles actually require this kind of mental function. Most people are Cooks rather than Chefs, businesspeople rather than entrepreneurs, and most have never been taught to reason from First Principles either.
One area where some kind of reasoning is generally required, however, is ethics. This is why OpenEth.org, a project I have co-founded, is working to create ethical constraint solutions for narrow AI in this niche but crucial area.
Bias in how a machine intelligence perceives something can indeed come from the algorithm, but it can also come from the data. An algorithm will generally be tweaked over time by an engineer to extract better sense from the data that is available.
However, an incorrectly weighted algorithm can reinforce existing biases that lie within the data. This means that stereotypes can be reinforced, or implicit discrimination can occur without warrant: certain individuals are not shown a job ad, for example, because they don't fit the standard pattern of past hires. The worst abuses may occur within the justice system, as decision-support and probabilistic engines are increasingly used to calculate things like bail.
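The job-ad example can be made concrete with a minimal sketch. All groups, rates, and function names here are hypothetical: we invent a biased hiring history, then show that a naive model which simply mirrors the pattern of past hires will treat equally qualified candidates differently based on group alone.

```python
import random
from collections import defaultdict

random.seed(0)

def make_record():
    """One hypothetical historical record: (group, qualified, hired)."""
    group = random.choice(["A", "B"])
    qualified = random.random() < 0.5
    if qualified:
        # Biased history: qualified A-candidates were usually hired,
        # equally qualified B-candidates were often passed over.
        hired = random.random() < (0.9 if group == "A" else 0.4)
    else:
        hired = random.random() < 0.1
    return group, qualified, hired

history = [make_record() for _ in range(10_000)]

# Naive "pattern of past hires" model: estimate the hire rate per
# (group, qualified) cell and show the ad only where that rate is high.
counts = defaultdict(lambda: [0, 0])  # cell -> [hired, total]
for group, qualified, hired in history:
    cell = counts[(group, qualified)]
    cell[0] += hired
    cell[1] += 1

def show_job_ad(group, qualified, threshold=0.5):
    hired, total = counts[(group, qualified)]
    return hired / total >= threshold

# Two equally qualified candidates, different outcomes:
print(show_job_ad("A", True))  # True: historically favoured group
print(show_job_ad("B", True))  # False: the historical bias is reproduced
```

No one wrote "discriminate by group" into the model; it simply learned the shape of a biased history, which is exactly how implicit discrimination enters data-driven systems.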
Indeed, many of the most powerful machine learning algorithms available today are engineered within a small geographic zone, by demographically similar individuals, and this situation isn't likely to change much anytime soon.
I'm therefore proud to serve as an advisor to Diversity.AI, an organisation that fights for better, more open, and more accountable use of machine learning. Machines are intended to help liberate us; let's help ensure they take our society in the right direction.