Weekly AI recap: Musk sues OpenAI, Google (sorta) apologizes for Gemini controversy
Also, IPG becomes the latest advertising holding company to boost its investments in AI.
A lawsuit filed by Elon Musk claims that “GPT-4 is an AGI algorithm.” / Adobe Stock
Elon Musk files lawsuit against OpenAI and Sam Altman
Elon Musk filed a lawsuit on Thursday against OpenAI and its co-founder and CEO, Sam Altman, claiming that the company has abandoned its founding mission to build AI that benefits humanity and has instead chosen to prioritize its own financial interests.
Musk co-founded OpenAI in 2015 and originally served as its primary financial backer. He stepped down from the company’s board in 2018 and has since been vocally critical of its decision to launch a for-profit arm, describing the move as contrary to the company’s original mission. Not long after Musk’s departure, Microsoft began pouring billions of dollars into OpenAI, securing the right to incorporate the AI company’s technology into its platforms and products. Following Altman’s recent firing and rehiring by the OpenAI board, Microsoft also secured a non-voting board seat.
OpenAI was created as an open source (which is why I named it “Open” AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft.
Not what I intended at all.
— Elon Musk (@elonmusk) February 17, 2023
The lawsuit argues that this close partnership with Microsoft – the world’s most valuable tech company – has effectively turned OpenAI and its most powerful large language model, GPT-4, into a commercial enterprise.
“Although developed by OpenAI using contributions from [Musk] and others that were intended to benefit the public, GPT-4 is now a de facto Microsoft proprietary algorithm, which it has integrated into its Office software suite,” the filing reads.
It further argues – citing statements from Microsoft researchers – that “GPT-4 is an AGI algorithm.” (Artificial general intelligence, or AGI, is the level at which an AI model can match or exceed human competence in virtually any reasoning task.)
Musk has long been outspoken in his belief that unchecked AI could lead to the downfall of civilization. He signed an open letter published last March that called for a six-month pause on the development of advanced AI models and suggested that the technology, if not developed with sufficient guardrails, could pose an existential threat to humanity. He also launched his own AI company, xAI, in July of last year, positioning it as a direct competitor to OpenAI.
Altman has spoken publicly about the need to develop AI safely and responsibly while defending OpenAI’s decision to launch a for-profit arm, underscoring the huge costs of building AI models and what he views as the enormous benefits that properly commercialized AI can bring to humanity.
The lawsuit accuses OpenAI, Altman and Greg Brockman – another OpenAI co-founder who currently serves as the company’s president – of breach of contract and breach of fiduciary duty, among other charges. It also seeks to compel OpenAI to publicly release all of its code, and to force Altman to pay back the wealth that he’s accumulated through his company’s allegedly illegal commercial endeavors.
Google responds to Gemini image-generation controversy
Last week, a slew of reports published on social media and in the press showed that Gemini – the multimodal large language model that Google launched in December and that has since replaced its AI-powered chatbot Bard – has a bias towards diversity that distorts historical realities. When asked to create an image of the US founding fathers (all of whom were white), for example, the model conjured up images depicting these 18th-century patriarchs as people of color. There have also been documented cases of the model refusing to generate images of white people, prompting some to claim that an anti-white bias had been built into the model.
Critics, including Elon Musk, have pounced on the issue, condemning Gemini’s “woke” inclinations and describing them as an extension of what they see as a deeper cultural problem at Google: a tendency to prioritize diversity, equity and inclusion over truth. In response, Google announced on Thursday that it had paused Gemini’s ability to generate images of people. The following day, February 23, Google senior vice-president Prabhakar Raghavan published a blog post that attempted to explain Gemini’s behavior, offering a quasi-apology while shifting responsibility away from Google and onto the model itself.
“So what went wrong? In short, two things,” Raghavan wrote. “First, our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range. And second, over time, the model became way more cautious than we intended and refused to answer certain prompts entirely – wrongly interpreting some very anodyne prompts as sensitive. These two things led the model to overcompensate in some cases and be over-conservative in others, leading to images that were embarrassing and wrong.”
Google DeepMind CEO Demis Hassabis said during a Monday panel at the Mobile World Congress in Barcelona that the feature enabling Gemini to generate images of people would be relaunched within the coming weeks.
This isn’t the first time Google has faced controversy over the rollout of an AI tool. Just over a year ago, in its first-ever public demo, Bard made a factual error about the James Webb Space Telescope, claiming that “JWST took the very first pictures of a planet outside of our own solar system.” (The first exoplanet was in fact photographed 17 years before the telescope’s launch.) The blunder caused the valuation of Google’s parent company, Alphabet, to plummet by around $100bn.
IPG unveils new partnership with Adobe
Interpublic Group (IPG) – owner of several prominent ad agencies, including Golin and McCann – announced on Thursday that it has launched a new partnership with Adobe geared towards delivering AI-generated content tools to clients. The announcement closely follows similar efforts from WPP and Publicis to ramp up their own investments in AI.
The new partnership – which brings Adobe’s AI-powered GenStudio suite of tools to IPG engine, the holdco’s internal marketing platform – is aimed at increasing both efficiency and oversight of customer data among IPG agencies. “We’re living in a world where clients are asking for efficiency, transparency, integrity,” says IPG chief solutions architect Jayna Kothary.
“When we’ve got multiple agencies working on one client, they want a full view of the asset, full view of the production, what’s being used where – one single workflow tool. So Adobe’s GenStudio now is going to power every IPG agency across the entire content supply chain.”
Adobe has in turn incorporated IPG-owned brand Acxiom’s massive audience data resources into its Customer Data Platform (CDP) and Adobe Experience Platform (AEP), both of which are designed to help brands build more personalized marketing campaigns.
Microsoft announces multi-year partnership with Mistral AI
Microsoft announced Monday that it has entered into a multiyear partnership with Mistral AI, an AI firm headquartered in Paris and founded last spring by former employees of Meta and Google DeepMind.
We're announcing a multi-year partnership with @MistralAI, as we build on our commitment to offer customers the best choice of open and foundation models on Azure. https://t.co/k1L7lfFeES — Satya Nadella (@satyanadella) February 26, 2024
The announcement coincided with Mistral AI’s release of Mistral Large, described on the company’s website as a “new cutting-edge text generation model” that “can be used for complex multilingual reasoning tasks, including text understanding, transformation, and code generation.”
Mistral Large – which is being positioned as a competitor to OpenAI’s GPT-4, currently the most powerful large language model on the market – is now available through Microsoft Azure via the new collaboration between the two companies. It’s also available through La Plateforme, Mistral AI’s own developer platform, which is hosted in Europe. The partnership will also focus on developing and commercializing increasingly advanced large language models.
As concerns continue to mount around the social and political ramifications of AI – heightened by the fact that many countries around the world are approaching key elections – Microsoft is using its new partnership with Mistral AI as an opportunity to reaffirm its commitment to safety.
“This partnership with Mistral AI is founded on a shared commitment to build trustworthy and safe AI systems and products,” Eric Boyd, corporate vice-president of Azure AI Platform, wrote in a company blog post published on Monday.
Microsoft’s new partnership with Mistral, according to Bloomberg, has drawn the attention of regulators in the European Union, who fear that the tech giant is building a monopoly in the AI industry. Microsoft is already facing an antitrust investigation from regulators in the UK and the EU over its partnership with OpenAI.
Google pays publishers to create and publish AI-generated content
Google is in the early stages of an experimental program that pays small publishers a five-figure annual sum to create regular content using generative AI, according to a report published on Tuesday by Adweek. In return, Google hopes to collect data and feedback that will enable it to refine its AI tools.
Participating publishers are reportedly required to produce three articles per day, one newsletter per week and one advertising campaign per month using a suite of Google-owned AI tools.
An extension of the Google News Initiative (GNI), which launched in 2018 and is aimed at helping newsrooms adopt AI, the new program began recruiting participants in January and officially kicked off earlier this month, according to Adweek.
Google has pushed back against the Adweek report’s implication that the new program essentially recycles content from other online publishers. “This speculation about this tool being used to republish other outlets’ work is inaccurate,” a Google spokesperson told The Drum, referring to the Adweek article. “The experimental tool is being responsibly designed to help small, local publishers produce high-quality journalism using factual content from public data sources – like a local government’s public information office or health authority. Publishers remain in full editorial control of what is ultimately published on their site. These tools are not intended to, and cannot, replace the essential role journalists have in reporting, creating, and fact-checking their articles.”
For more on the latest happenings in AI, web3 and other cutting-edge technologies, sign up for The Emerging Tech Briefing newsletter.