Conjurors of Content

Formative AI technologies are those that can be directly applied to optimise processes based upon rapidly changing variables. Such situations can be found almost anywhere, in everything from healthcare and business to autonomous vehicles and the curation of personalised content.

Such technologies have the capability to transform practically every sector and domain of the economy, because they can make existing processes so much more efficient. Those who embrace these technologies and master their deployment can enjoy very strong advantages over competitors. We see this in how Big Tech has eclipsed all other domains of the economy, thanks to first-mover advantages in the application of AI: they already possessed vast stores of data, compute, and algorithmic engineers.

The transformative nature of such AI technologies can be compared to how electricity and motive power changed every sector a century ago – the creation of power drills and tractors in place of mule-driven ploughs. All businesses must now learn to recognise the advantages that AI is bringing to their sector, and plan for how they can bring such optimisations into their own processes.

One of the most exciting and immediately applicable areas of machine learning is generative AI. Generative AI techniques involve multiple neural networks competing against each other: some networks try to make a plausible yet fake piece of content, while others try to detect that content as fake. If one sets up a loop between them, one can breed successively more accurate and plausible representations of something – human faces, for example.
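
To make that competitive loop concrete, here is a minimal sketch in PyTorch (my choice of framework); the toy Gaussian "real" data, network sizes, and training settings are illustrative assumptions only, not anything from a production system.

    # A minimal sketch of the adversarial loop described above, in PyTorch.
    import torch
    import torch.nn as nn

    latent_dim = 8

    # Generator: turns random noise into a fake "sample".
    G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
    # Discriminator: scores how plausible a sample looks (real vs fake).
    D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

    opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
    loss_fn = nn.BCELoss()

    for step in range(5000):
        real = torch.randn(64, 1) * 0.5 + 3.0  # "real" samples drawn from N(3, 0.5)
        fake = G(torch.randn(64, latent_dim))

        # Train the discriminator to tell real from fake.
        opt_D.zero_grad()
        d_loss = (loss_fn(D(real), torch.ones(64, 1))
                  + loss_fn(D(fake.detach()), torch.zeros(64, 1)))
        d_loss.backward()
        opt_D.step()

        # Train the generator to fool the updated discriminator.
        opt_G.zero_grad()
        g_loss = loss_fn(D(fake), torch.ones(64, 1))
        g_loss.backward()
        opt_G.step()

After enough iterations, the generator's samples cluster around the "real" distribution – the same principle that, at vastly greater scale, breeds photorealistic faces.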

Generative techniques can turn a simple sketch into a painting in the style of a great artist at the touch of a button. They can restore damaged, lost, or obscured content. They can massively upscale images or video from very low resolutions, and transfer an aesthetic je ne sais quoi from one object onto another. Given just a few examples, machine learning can instinctively follow underlying patterns and correlations that one would be hard-pressed to describe in words or mathematics. They can even transform a video taken in winter into a summer scene, or vice versa.

In many ways, this generative form of artificial intelligence can be described as the closest thing to magic in the world today. Such technologies are being widely deployed to restore and upscale older pre-HD content in movies, TV, and games, as well as for video filters in Zoom or Snapchat. The earliest applications have focussed on visual content, but recent developments building upon these generative techniques are about to unleash a great step forward.

GPT-3 (Generative Pre-trained Transformer 3) by OpenAI is the latest and greatest evolution of these generative AI techniques. It builds on promising previous work by taking it to a massive scale, ingesting a huge swathe of the public internet, with a stupendous number of parameters (the relative strengths of connections between things).

GPT-2 had 1.5 billion parameters, whereas GPT-3 uses 175 billion. To the surprise of many researchers, the massive increase in scale made the model a great deal more capable. The same model with 10 billion parameters completes math problems at a D- level, whereas 100 billion performs to a B- grade, and 175 billion to an A+. It illustrates the power of using lots and lots of compute, just as deep learning showed the power of having lots and lots of data ten years ago.
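
For a sense of that scale, one can count the "connections" of a small public model directly. The sketch below (my own illustration, using the Hugging Face transformers library) loads the smallest GPT-2 variant, which weighs in at roughly 124 million parameters – more than a thousand times smaller than GPT-3.

    # Counting the parameters ("connection strengths") of the small public GPT-2
    # model, for perspective against GPT-3's 175 billion.
    # Requires: pip install transformers torch
    from transformers import AutoModel

    model = AutoModel.from_pretrained("gpt2")
    print(f"{model.num_parameters():,} parameters")  # roughly 124 million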

GPT-3 is accessible only via an API to a remote server for now, due to the hardware requirements (it cost OpenAI around $5 million to train). However, this is not a deal-breaker for using it in a business context – in fact, it makes it even easier to start applying these techniques in minutes instead of weeks.
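
As a rough sketch of how quickly one can start, the snippet below uses the original openai Python client's completions endpoint; the prompt, parameters, and placeholder key are my own illustrative assumptions.

    # A sketch of calling GPT-3 through the API, using the original openai
    # Python client's completions endpoint.
    # Requires: pip install openai
    import openai

    openai.api_key = "sk-..."  # placeholder: your own secret API key

    response = openai.Completion.create(
        engine="davinci",  # the largest GPT-3 model exposed at launch
        prompt="Write a short product description for a solar-powered kettle:",
        max_tokens=80,
        temperature=0.7,
    )
    print(response.choices[0].text)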

Recent developments in hardware will make such costs a great deal cheaper for those who wish to build their own private version anyway. The human brain has an estimated 100 trillion synaptic connections – loosely, its parameters – and we will see models of such complexity achievable for the same $5 million cost before the end of this decade. I expect the same pattern of increasing capability to hold as parameter counts and complexity increase further. This is as worrying as it is exciting.

Right now, GPT-3 can be applied to a very wide range of creative endeavours. The same intelligence can translate poetry from Chinese to English, play chess, work through math problems, function as a hilarious dungeon master, and suggest appropriate treatment regimens and dosages of medicines... a massive breadth of flexible capability. Bloggers have even experimented with using GPT-3 to generate new posts based upon their existing content. Unbeknownst to their readers, the generated articles have proven surprisingly popular.

It's still closer to human intuition than human intelligence per se, but it's very adaptable, very flexible, and very capable (within limitations). GPT-3 is a significant step in AI, though not an intelligence panacea: its multifunctionality is formidable, but it still lacks executive function and logical reasoning, and it is restricted to working with text. It adapts well, so long as humans clearly define the problem to be solved. One can think of it as a babbling savant genie – but that is still incredibly valuable. The next version will be even more flexible, so much so that entire creative industries may be made obsolete overnight. I strongly recommend that businesses in all sectors experiment with GPT-3 and similar generative AI technologies, and become familiar with their application. Those who embrace this new wave will be as well-positioned as Big Tech has been to reap the benefits of the previous wave of deep learning.


Wizards of Math and Stats

ML is definitely one of the hottest careers, and demand is likely to increase even further. Deep learning has emerged in the past ten years or so, enabling amazing new predictive processes that can find patterns within patterns, and make order out of chaos. This has transformed industry, but has had less immediate effect upon the office. That's about to change, thanks to revolutionary new architectures – Transformer-based large language models and diffusion models, sometimes described as Foundation Models. These are very large statistical models that can be dynamically reconfigured to solve thousands of different problems with nothing more than a simple natural language request, typically described as a 'prompt'.
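
As an illustrative sketch of that reconfigurability, the snippet below drives one small, freely downloadable generative model through several unrelated tasks purely by swapping the prompt. The tasks and prompts are invented for illustration, and GPT-2 is only a stand-in here – a genuine foundation model performs them far more capably.

    # "Reconfiguring" a single generative model purely by prompt.
    # Requires: pip install transformers torch
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    prompts = [
        "Translate English to French: 'Good morning' =>",
        "Summarise in one sentence: The quarterly report shows revenue rose 12%.",
        "Write a polite email declining a meeting invitation:",
    ]

    # Same model, same weights - only the prompt changes between tasks.
    for prompt in prompts:
        result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
        print(result[0]["generated_text"], "\n---")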

With this new technology, ML is finally accessible to the masses: we no longer require much skill beyond asking a simple question to obtain quick and reasonable assistance with almost any digital office task one can imagine. The latest models even generate computer code, video, 3D models, virtual personalities, and music on demand from nothing more than a description of the desired output. One might think that ML skills will be less needed as a result. On the contrary: by making the power of state-of-the-art ML clear to the public, these models will greatly increase the desire for improved machine learning capabilities to optimise almost any problem we can conceive of.

Machine learning and statistics are related disciplines. Statistics is about analysing data and constructing models to explain and predict phenomena, whereas ML is about creating automated data-analysis pipelines that construct their own internal models to make a prediction. Statistics is human-focussed and easily explainable. ML is machine-focussed and less explainable, but may be more powerful in circumstances where there are very complex patterns, perhaps with too many variables for a human being to manage.

Both occupations require a solid grounding in mathematics and statistics. Statistics has a procedural focus – how to clean up data so it can be used, and how to analyse it, usually coded in R – versus a technical focus in ML on which model to apply, and the computing code required to implement it (usually Python).
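
To make the contrast concrete, here is a small sketch in Python (standing in for both workflows, though statisticians would often reach for R): an explainable linear model beside a black-box ensemble, fitted to invented data. The dataset and model choices are my own illustrative assumptions.

    # Contrasting the two workflows on the same synthetic dataset.
    # Requires: pip install numpy scikit-learn
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))  # three predictor variables
    y = (2.0 * X[:, 0] - 1.5 * X[:, 1] + np.sin(3 * X[:, 2])
         + rng.normal(scale=0.1, size=500))

    # Statistics-style: explicit coefficients a human can read and defend.
    lin = LinearRegression().fit(X, y)
    print("coefficients:", lin.coef_, "intercept:", lin.intercept_)

    # ML-style: a black-box ensemble that captures the nonlinear term
    # automatically, at the cost of easy explanation.
    rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
    print("linear R^2:", lin.score(X, y), "forest R^2:", rf.score(X, y))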

However, there is a lot of mixing up of terms, sometimes due to honest confusion, and sometimes due to wilful misrepresentation. Because the term has been in use for so long, 'AI' can refer to anything from a hand-written chess algorithm to a sophisticated transformer that can turn a simple natural language request into a masterfully executed output. Often, rather basic data science is sexed up into being described as ML or AI. On the other hand, companies often apply the jackhammer of AI to crack a peanut of a problem when good old-fashioned data science would be far cheaper and quicker. Data science holds up the modern economy far more than ML does, yet it remains an unsung hero. Data science is still often a prerequisite for ML, as it provides the grounding that helps ensure accuracy and robustness in ML models, reducing the risk of unethical outcomes such as disproportionate treatment or other statistical biases.

Already, academia has been plundered by industry for top ML talent, lured away by large salaries and generous research budgets, and new graduates can hardly come soon enough. There will be increasing demand for skills in building and applying Foundation Models in particular, as well as in other, more specialised areas such as machine vision, which helps embedded systems such as robots understand the physical world with ever-greater precision. New search engines for prompts are also emerging, as the art of constructing prompts to manifest unseen potential from existing models becomes another hot commodity – like sorcerers figuring out the pronunciation of words written in a spellbook. ML is here to stay, and it's the closest thing to magic.