It’s been a whirlwind four days for OpenAI, the poster child for generative AI behind the hugely successful ChatGPT app.
Over the weekend, OpenAI’s board of directors ousted CEO and co-founder Sam Altman and demoted president and co-founder Greg Brockman, who then resigned, paving the way for what looked like a mutiny by employees insisting the founders be reinstated post-haste. Microsoft then moved to hire Altman and Brockman to head up a new internal AI unit, though nothing had been signed, and rumors soon emerged suggesting the ousted leaders might return to OpenAI after all, in some capacity at least.
The situation remains fluid, and any number of outcomes is still on the table. But the whole debacle has put a spotlight on the forces controlling the burgeoning AI revolution, leading many to ask: what happens if you bet everything on a single, centrally controlled player, and things then go south?
“The OpenAI/Microsoft drama highlights one of the big near-term risks for AI – that this next wave of technology is controlled by the same small group of players that shaped the last era of the internet,” Mark Surman, president and CEO of the Mozilla Foundation, told TechCrunch. “We may have a chance to avoid this if GPT-X is responsibly open sourced, giving researchers and startups a chance to make this technology safer, more useful, and trustworthy for people everywhere.”
Open and close
In an open letter published by Mozilla a few weeks ago, Yann LeCun, chief AI scientist at Meta, joined about 70 other signatories in calling for more openness in AI development; the letter has since attracted more than 1,700 signatures. The backdrop is that big tech companies such as OpenAI and Google’s DeepMind have been calling for more regulation, warning of catastrophic consequences if AI tools fall into the wrong hands — in other words, they argue that proprietary AI is safer than open source.
LeCun and others disagree with this view.
“Yes, openly available models come with risks and vulnerabilities – AI models can be misused by malicious actors or deployed by ill-equipped developers,” the letter acknowledged. “However, we have seen time and time again that the same is true of proprietary technologies – and that increased public access and scrutiny makes technology safer, not more dangerous. The idea that tight and proprietary control of foundational AI models is the only path to protecting us from society-scale harm is naive at best, dangerous at worst.”
On a personal level, LeCun has accused big-name AI players of trying to secure “regulatory capture of the AI industry” by lobbying against open AI R&D. At the company level, Meta is doing everything it can to encourage collaboration and “openness,” recently partnering with Hugging Face to launch a new startup accelerator designed to spur the adoption of open source AI models.
But OpenAI was, at least until last week, still the darling AI vendor that everyone wanted to dance with. Countless startups and scale-ups have built businesses on top of OpenAI’s proprietary GPT-X large language models (LLMs), and over the weekend hundreds of OpenAI customers reportedly began contacting OpenAI’s rivals, including Anthropic, Google, and Cohere, concerned that their businesses could suffer if OpenAI were to fall apart overnight.
Excessive dependence
The panic was palpable. But there are precedents from elsewhere in technology, perhaps most notably the cloud computing industry, which has become notorious for the way it locks companies into centralized silos.
“Part of the panic around the future of OpenAI is due to the abundance of startups that are overly reliant on its proprietary models,” Luis Ceze, University of Washington computer science professor and OctoML CEO, told TechCrunch in an email statement. “It’s dangerous to put all your chips in one basket – we saw this in the early days of the cloud, which led to companies shifting to multi-cloud and hybrid environments.”
On the surface, Microsoft currently looks like the big winner amid the OpenAI turmoil, and it has seemingly been looking to reduce its dependence on OpenAI even though it remains one of its major shareholders. But Meta, Facebook’s parent company, may also benefit as companies pursue multi-model strategies or embrace models with a more “open” ethos.
“Open source today fundamentally offers a wide range of models for companies to diversify,” Ceze added. “By doing so, these startups can pivot quickly and reduce risk. There is also a huge upside, as many of these models already outperform the likes of OpenAI on price, performance and speed.”
A leaked internal Google memo earlier this year seemed to express concerns that, despite the tremendous progress made by proprietary LLMs from the likes of OpenAI, open source AI would eventually outflank them all. “We have no moat, and neither does OpenAI,” the document noted.
The memo in question was referring to a foundational language model of Meta’s that leaked in March and gained significant traction in a short period of time. That episode highlighted the power and scalability of a more open approach to AI development, which enables collaboration and experimentation at a level that is not easy to replicate with closed models.
It is worth noting, though, that despite Meta’s claims, its Llama-branded LLM family is not as “open source” as people might like to think. Yes, it is available for both research and commercial use cases, but it prohibits developers from using Llama to train other models, and app developers with more than 700 million monthly users must request a special license from Meta, which it may grant at its “sole discretion” — basically, anyone but Meta’s Big Tech brethren can use Llama without permission.
Meta is certainly not the only company to flaunt its “open” approach to AI development — witness the likes of Hugging Face, Mistral AI and 01.AI, which have all raised significant sums at lofty valuations with similar goals in mind. But as a $900 billion juggernaut with a long history of courting developers through open source endeavors, Meta may be better positioned to capitalize on the chaos OpenAI has created for itself. Its decision to pursue “openness” over “closedness” seems vindicated at the moment, and regardless of whether Llama is truly open source, it is probably “open enough” for most people.
It is still too early to make any firm claims about how the OpenAI fallout will affect LLM development and uptake going forward. Altman and Brockman may well end up building a commercial AI unit at Microsoft, or they may return to run OpenAI. But some might argue that it is unhealthy for so much to hinge on just a handful of people, given that their departure could wreak such widespread havoc.