BY Wesley Diphoko

Over the weekend we witnessed one of the most dramatic moments in the history of technology startup management. It reminded many of what happened to Steve Jobs when he was fired from Apple, the company he founded. This time around it was Sam Altman, the man who co-founded OpenAI, the company that brought us ChatGPT. He was fired by the four-member board, allegedly for not being "consistently candid" with it. Although there were attempts, backed by shareholders, to bring him back, the board appointed Emmett Shear, the former CEO of Twitch, as interim chief executive.

Why do developments at OpenAI matter? To get a sense of the significance of this development, one has to look at what I understand to be the root of the conflict. According to reports, at the centre of the OpenAI conflict are the company's Chief Scientist, Ilya Sutskever, and its former CEO, Sam Altman. In the debate about whether to curb the powers of AI or let it advance unchecked, Sutskever favoured curbing those powers while Altman wanted to move ahead swiftly. In technical terms, that means Altman was pushing ahead to release Artificial General Intelligence (AGI), a form of AI whose capabilities would supersede those of human beings. Sutskever, on the other hand, was of the view that society is not ready for AGI.

As of Monday (20 November 2023), part of the settlement included a move by Sam Altman to Microsoft, where he is to set up a new internal AI unit. At this stage it is not clear how much power Sam Altman will have under the wing of Microsoft CEO Satya Nadella.

We can, however, conclude that the world is currently at war over what is to be done about AI. Should it be allowed to go rogue, or should it be controlled? This is one of the most important issues after climate change, and it is something that will probably feature heavily in the battle between global powers such as China and the US.

While the Sam Altman firing may seem trivial, it should serve as a wake-up call for the global community. The battle over AI should not be left to individuals and corporations alone. If its potential harms are what we have come to understand them to be, then there is a need for a consensus-building process around how AI will be enabled in society. Tech needs a multilateral institution that will facilitate the deployment of technology in line with human values. If developments at OpenAI are anything to go by in terms of how prepared we are to deal with what is coming from AI, we are in trouble. Our situation is such that we have allowed kids to play with a bomb while we watch as bystanders. If AI is going to have a significant impact on society, then society has to decide how it should be governed and how it should function.

South Africa has been vocal about current wars and quiet about AI. Such a reaction is understandable, considering that there is little local understanding (at least amongst lawmakers) of its impact. The time is now to develop a full understanding of the technology and to begin the process of having a voice on what should be done. The issue may seem far from South Africa (or from countries beyond the US) at this stage because of where the technology is being developed, but its impact will be felt globally, and therefore there should be a global response. Currently, commercial interests are shaping everything in the AI space, and everyone is at the mercy of current investors. That is not an ideal situation for human society. One can only hope that there will be a change of heart amongst the companies leading the AI race. The leadership of tech leaders matters now more than ever.

WESLEY DIPHOKO