By Adam Smith | Tech correspondent
OpenAI rolled out GPT-4 last week, the new artificial intelligence model that powers its ChatGPT chatbot.
The main addition in the new model is that it can now “understand” images – writing captions and descriptions, and explaining the jokes behind memes – as well as write computer code.
Sam Altman, OpenAI’s co-founder and chief executive, said GPT-4 “is less biased”, but it has the same ethics safeguards built in as ChatGPT to prevent it from being used to create racist or sexist output.
And the release comes as OpenAI’s partners pull back from their work on AI ethics.
Microsoft has laid off its entire ethics and society team within its AI organisation as part of its recent job cuts, Platformer reported last week.
Microsoft says that its Office of Responsible AI remains active, but the move does create a distinct separation between ethical oversight and product design.
This disconnect has been visible in recent months, with Microsoft limiting Bing’s GPT-powered chat function after it gave answers that were potentially dangerous.
Altman recently expressed his own fears about GPT-4’s development.
“I’m particularly worried that these models could be used for large-scale disinformation,” he told ABC News.
“Now that they’re getting better at writing computer code, [they] could be used for offensive cyberattacks.”