Over two decades working in various software and technology industries, I have bootstrapped and raised funding, mentored startups, written a book on the future, and done everything else in between. So I can safely say that we are at the beginning of the next evolution of humanity, driven by the rapid growth and adoption of AI. Are we, as humans, ready?
AI is not a novelty; the sheer degree of its impact on knowledge, society, and truth is only starting to become clear. AI is like the Ouroboros, the emblematic serpent of ancient Egypt and Greece, depicted with its tail in its mouth, continually devouring itself and being reborn. It is the perfect analogy for AI: LLMs are eating the world and are paradoxically reborn, again and again, through our data and the humans who train them.
One of my favourite quotes, from Neo in The Matrix, sums up where we are now:
"I don't know the future. I didn't come here to tell you how this is going to end. I came here to tell you how it's going to begin."
Neo - The Matrix
Although AI could lead to a renaissance of humanity, helping people solve the dilemma of space travel, mitigate Earth’s climate crisis, and accelerate medical research, not to mention delivering 10x productivity improvements across almost every industry, humanity must be the arbiter of its own future. Societies need to be human-centric rather than technology-centric, even in the name of scientific progress.
We've been speeding towards AI for decades, and now that the cliff is behind us, we want to put the brakes on? It's too late; the genie is out of the bottle. What do we say to those currently in higher education, studying what they believed to be future-proofed jobs, only to face something bleak at the end of 4, 5, or 6 years in college or university? Let’s be honest: university courses offer woeful levels of industry realism and real-world preparation, and we are leaving graduates with little time to pivot towards something less at risk of disruption. This is borderline irresponsible and dangerous in a society where inequality between age groups is already so large, and where the gap between what younger and older generations can afford continues to widen. History teaches us that inequality leads to political and economic instability, and eventually to revolutions.
Everyone needs to study the humanities, such as ethics and philosophy; they are what separate us from AI and machines. But we also need to be activists, ensuring that humans are protected and informed about using AI in an ‘ethical’ way, because the wider tech industry, and every developer rushing to build GenAI solutions, isn't interested while there's profit to be made right now. We need to examine the ethical questions posed by AI and the trade-offs between developing this technology and its effects on our society’s future.
We have only recently brought most of humanity into the digital economy of Spotify, social media, and subscriptions. When AI becomes ubiquitous and bundled into every app we already use, why would we ‘need’ to pay for it? Capitalism always needs a revenue model, and AI feels like a race to the bottom in my book.
Maybe decentralisation and Web3 were the answer, but the question should have been asked 20 years ago, when we started building what would become AI. As I said in 2015: “Blockchain, web3, and decentralisation have nothing to do with the creation of AI but everything to do with data sovereignty.”
We can use Web3 technology to create data self-sovereignty and new methods of direct, community-led governance (think DAOs, NFTs, and decentralised cloud storage) outside BigTech’s domain. This is what humans need now. This is how it begins.