GenAI: Creativity will be the force of the future

In my article, Who Will Make Money from the Generative AI Gold Rush? Part I, I discuss who will profit from, control, and benefit from this technology. With OpenAI’s launch of GPT-4, the future is already here. Although BigTech dominates the Infrastructure and Foundational Model layers of GenAI so far, the application layer will be much more of a level playing field.

The race is on among start-ups and existing enterprise software companies to incorporate GenAI into their offerings. Enabled by these Foundational Models, incumbents and “full stack” startups will offer new GenAI applications and weave them seamlessly into their products. GenAI winners will achieve scale and defensibility by implementing the following:

  • Strong ROI — a clear return on investment for their use case and a short time to proof of concept/value proposition.
  • Proprietary and customised Foundational Models — “fine-tuned” for specific audiences using localised proprietary enterprise-led data.
  • Workflows — providing usability and deep integration into customer processes, making the product difficult to remove once installed.
  • Feedback loops — creating reinforcement learning from human feedback (RLHF) to improve model alignment with user intent (i.e. learning from all those crazy conversations we are having with GPT-3/4); a minimal sketch of this kind of loop follows this list.
  • Flywheel dynamics — the more RLHF feedback, the better the model performs through fine-tuning; the better the performance, the greater the usage, and thus momentum grows. The flywheel should drive performance improvements at scale.
  • Scale and speed of investment — with lower profit margins (i.e. the core IP belongs to the Foundational Model providers), the game becomes even more about scaling. Those who can quickly build their brand and reach mass adoption to get the flywheel spinning will thrive as category leaders.
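To make the feedback-loop and flywheel ideas above a little more concrete, here is a minimal, purely illustrative sketch in Python of the data capture involved: logging users’ thumbs-up/thumbs-down reactions to model responses and grouping them into preference pairs, the shape of data typically used to train an RLHF reward model. The names (FeedbackStore, Interaction, preference_pairs) are hypothetical, not any vendor’s API, and a real system would add persistence, privacy controls, and far richer feedback signals.

    from dataclasses import dataclass, field
    from typing import Dict, List, Tuple


    @dataclass
    class Interaction:
        prompt: str        # what the user asked
        response: str      # what the model answered
        thumbs_up: bool    # the human feedback signal


    @dataclass
    class FeedbackStore:
        interactions: List[Interaction] = field(default_factory=list)

        def record(self, prompt: str, response: str, thumbs_up: bool) -> None:
            # Log one piece of human feedback on a model response.
            self.interactions.append(Interaction(prompt, response, thumbs_up))

        def preference_pairs(self) -> List[Tuple[str, str, str]]:
            # Group feedback by prompt into (prompt, preferred, rejected) triples,
            # the kind of data used to train a reward model for RLHF fine-tuning.
            by_prompt: Dict[str, Dict[str, List[str]]] = {}
            for it in self.interactions:
                bucket = by_prompt.setdefault(it.prompt, {"good": [], "bad": []})
                bucket["good" if it.thumbs_up else "bad"].append(it.response)
            pairs: List[Tuple[str, str, str]] = []
            for prompt, b in by_prompt.items():
                for good in b["good"]:
                    for bad in b["bad"]:
                        pairs.append((prompt, good, bad))
            return pairs


    if __name__ == "__main__":
        store = FeedbackStore()
        store.record("Summarise this contract", "Clear three-bullet summary", True)
        store.record("Summarise this contract", "Rambling answer that misses a key clause", False)
        print(store.preference_pairs())

The more of these preference pairs a product collects, the more signal there is for fine-tuning — which is exactly the momentum the flywheel point above describes.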

In many ways, traditional metrics will matter more than ever to achieve the scale and user adoption necessary to make money from GenAI. In the B2C GenAI consumer space, horizontal players with speed and massive consumer acquisition budgets are likely to win their race.

Along with all the amazing possibilities of this technology, we need to talk about Foundational Models and their biases: they reflect the darker sides of the internet, including racist, violent, misogynistic, and gender-biased content. Training models to address these issues is a mammoth task, similar to content moderation across social media platforms, which has proved hard to achieve in practice.

Ethics for AI

One of the most important issues facing AI and GenAI is the need to create a universal framework of principles to enable AI to be used equitably and transparently to protect businesses, consumers and societies. There are four main ethical principles that everyone can agree on:

  1. AI needs to be transparent and explainable, i.e. how it works, where its information comes from, and how it can accurately credit sources of information.
  2. Bias must be addressed and reduced as much as possible through better model training and content moderation frameworks.
  3. There must be accountability to humans, not algorithms.
  4. AI and GenAI must be safe for humans to use and work with, especially concerning fact-checking and verification. It should do no ‘harm’.

Billions of humans will be using or consuming GenAI services and content as it becomes ubiquitous and woven into the fabric of business, government, and society. The marginal cost of creativity will move towards zero, making us all creators.

The combination of human and algorithmic creativity will be the force of the future. It will unleash huge changes to how we work, live, and interact with other humans, and with AI itself. This is just the beginning of the next AI evolution.

You can email Simon directly or find him on LinkedIn or Twitter.


Simon Greenman

Simon loves technology and its applications in the business world. He runs his advisory firm Best Practice AI, helping enterprises, and sits on the World Economic Forum’s Global AI Council.