
OpenAI’s CEO: The Sam Altman Saga. A Tragicomedy in Four Acts and Five Days
We cannot start our weekly newsletter without covering the most talked-about event in the AI industry:
-
ACT #1: The sacking of Sam Altman as CEO of OpenAI
It all started on Friday, November 17th, when the Twittersphere became inundated with news of Sam Altman’s abrupt departure from OpenAI after an alleged dispute with the Board of Directors. Shortly afterwards, OpenAI released a statement saying that Mr. Altman would depart as CEO and leave the board of directors, and that Mira Murati, the company’s chief technology officer, would serve as interim CEO, effective immediately.
As part of the transition, Greg Brockman was to step down as chairman of the board. However, shortly after the announcement was published, Mr. Brockman decided to leave the company as well and join Sam in whatever venture he planned next.
At that time, OpenAI’s board of directors consisted of OpenAI chief scientist Ilya Sutskever, independent directors Quora CEO Adam D’Angelo, technology entrepreneur Tasha McCauley, and Georgetown Center for Security and Emerging Technology’s Helen Toner.
-
ACT #2: OpenAI announces a new interim CEO: Emmett Shear, co-founder of Twitch
Less than 48 hours later, in a tale of twists and turns, investors such as Microsoft and Thrive Capital began pushing for OpenAI’s board to reinstate Sam Altman as CEO. There were reports that Altman was “ambivalent” about returning and that, if he were to return, he would demand governance changes.
On November 19th, a source close to Altman reported that the Board had agreed in principle to resign so that Altman and Brockman could return. The majority of OpenAI’s staff had set a 5 PM deadline that day for the Board of Directors to resign, threatening otherwise to quit and join Sam at an (alleged) new company.
A day later, Sam Altman returned to OpenAI to discuss a way forward with the Board. However, despite significant efforts from OpenAI’s executives, and following discussions with the Board as well as with major investors, Sam decided against returning to the company. Ilya Sutskever then announced a new interim CEO: Emmett Shear, co-founder of Twitch.
-
ACT #3: Sam Altman and Greg Brockman join Microsoft
Later, on November 20th, Satya Nadella, CEO of Microsoft, announced on Twitter (now X): “Sam Altman and Greg Brockman, together with colleagues, will be joining Microsoft to lead a new advanced AI research team.” It looked like a chess game in which the most significant investor (Microsoft), which already had access to the GPT model weights and currently provides the servers OpenAI runs on, had secured both the founder and the co-founder, along with the majority of the employees, effectively relegating OpenAI overnight from the dominant force in AI to, at best, a second-tier player. Rumours started swirling that other prominent investors were considering their options to sue both OpenAI and Microsoft.
-
ACT #4: The triumphant return of Sam Altman as CEO of OpenAI and the appointment of a new Board
Then, on November 22nd, the story reached its climax and conclusion when OpenAI announced on X: “We have reached an agreement in principle for Sam Altman to return to OpenAI as CEO with a new initial board of Bret Taylor (Chair), Larry Summers, and Adam D’Angelo. We are collaborating to figure out the details.”
Satya, Sam, and Greg all confirmed the news, ending a five-day rollercoaster that dented OpenAI’s reputation yet at the same time consolidated Sam’s grip on the company. It has also served as a PR campaign that will have long-lasting effects on the company and its shareholders.
So, who are the new Board members?
Bret Taylor, the newly appointed Chairman of OpenAI, is known for co-creating Google Maps and holding prominent roles at Facebook, including the position of CTO. He also founded Quip, a company later acquired by Salesforce, where he went on to become co-CEO.
Adam D’Angelo, the co-founder and CEO of Quora and a former CTO of Facebook, joined the OpenAI board in 2018. He has been deeply involved in AI and machine learning since his high-school days, when he collaborated with Mark Zuckerberg on the Synapse Media Player, which used ML algorithms for song recommendations.
Larry Summers, a prominent economist and influential figure in Washington, D.C., has expressed belief in the potential of artificial intelligence. Summers previously served as Secretary of the Treasury under President Bill Clinton and offers valuable expertise in navigating the political landscape as AI faces increasing scrutiny. He also has experience in the startup world and sits on the board of Block, the payments company co-founded by Twitter’s Jack Dorsey.
-
The Lingering Questions
What led to this strange course of events and this extremely unorthodox story with its bittersweet “happy ending”?
There are a few theories around, and it is all speculative at the moment. One is that Sam Altman raised OpenAI’s risk profile a hundredfold during DevDay when he announced that the company would fully indemnify those who were developing GPTs using OpenAI’s APIs: “We will step in and defend our customers and pay the costs incurred if you face legal claims around copyright infringement”. Bold. But crazy. We can all imagine what the directors might have thought, given their fiduciary responsibilities, especially if they were blindsided and knew nothing about this before DevDay. We can also safely bet that the insurance premiums must have skyrocketed after that statement.
The other theory (as reported exclusively by Reuters) is that Sam Altman’s sacking was triggered in part by a letter from staff researchers to the board warning about a powerful AI discovery with potential risks to humanity. According to the report, this letter and the discovery were among the factors leading to Altman’s removal, alongside broader board concerns about the hasty commercialization of advances. Reuters could not access the letter, and the staff who wrote it did not respond to inquiries. OpenAI acknowledged the existence of a project called Q* and of the letter to the board, but did not comment on the accuracy of the reporting. Some believe Q* could be a breakthrough in the quest for artificial general intelligence (AGI), as it reportedly showed promise in solving math problems.
Regardless, many questions linger. Are these the reasons why he was sacked, or was it something completely different? What exactly is Q*, and what are the potential risks? Why hasn’t Microsoft sought a board seat (after all, it has a 40% stake in OpenAI)? Why did they appoint someone like Larry Summers as a board member? Surely a prominent economist who believes that AI will replace all white-collar jobs is not the obvious choice for the board of the most forward-thinking, bullish AI company on the planet? Or is he there to temper Sam? Has the governance of the board actually changed, and if so, how?…
I guess we will all get to know some of the answers one day.

Introducing Inflection-2: The Next Step Up in LLMs
Introduction:
Inflection-2 is the latest language model developed by Inflection AI, positioned by the company as the most advanced model in its compute class and the second most capable large language model (LLM) globally
Features:
- Enhanced Capabilities: Compared to its predecessor, Inflection-1, this new model showcases superior factual knowledge, improved stylistic control, and significantly better reasoning abilities
- Advanced Training: Trained on 5,000 NVIDIA H100 GPUs for approximately 10²⁵ FLOPs in fp8 mixed precision, placing it in the same training-compute class as Google’s PaLM 2 Large model (see the back-of-envelope estimate after this list)
- Efficiency and Cost-effectiveness: Despite being substantially larger, Inflection-2 is designed for higher serving efficiency and speed, with reduced operational costs, thanks to optimized inference implementations and a transition from A100 to H100 GPUs
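To put those training-compute figures in perspective, here is a back-of-envelope estimate of wall-clock training time. It is a minimal sketch only: the per-GPU fp8 throughput and the sustained utilization figure are assumptions of mine, not numbers from Inflection’s announcement.

```python
# Rough wall-clock estimate for ~1e25 FLOPs of training on 5,000 H100s.
# Assumed values (NOT from Inflection's announcement):
#   - ~2e15 FLOP/s of dense fp8 throughput per H100
#   - ~40% sustained utilization across the cluster
total_flops = 1e25
num_gpus = 5_000
peak_fp8_flops_per_gpu = 2e15   # assumption
utilization = 0.40              # assumption

seconds = total_flops / (num_gpus * peak_fp8_flops_per_gpu * utilization)
print(f"Estimated training time: ~{seconds / 86_400:.0f} days")  # roughly a month
```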
Benefits:
- Performance: Inflection-2 outperforms Google’s PaLM 2 Large model on various standard AI benchmarks, such as MMLU, TriviaQA, HellaSwag, and GSM8k
- Safety and Security: Emphasizing the importance of safety, security, and trustworthiness, Inflection AI has implemented rigorous evaluation and alignment protocols, contributing to the development of global alignment and governance mechanisms for AI technologies
Other Technical Details:
- Benchmarking: Inflection-2 shows commendable performance across various benchmarks, including MMLU, HellaSwag, TriviaQA, and NaturalQuestions, often outperforming other leading models
- Coding and Mathematical Reasoning: While not a primary focus, Inflection-2 demonstrates strong capabilities in code and mathematical reasoning benchmarks, with potential for further enhancement through fine-tuning on code-heavy datasets
Conclusion:
Inflection-2 represents a significant advancement in the field of AI language models, offering enhanced capabilities, efficiency, and a commitment to safety and security. Its superior performance across various benchmarks highlights its potential as a leading tool in personal AI development and various AI applications.
Inflection-2 is positioned as the second most capable large language model in the world, surpassing other models like PaLM-2, Claude-2, and LLaMA-2 in various benchmarks. However, it still ranks behind GPT-4, which remains the most performant model. This indicates that while Inflection-2 has made significant strides in AI language modeling, GPT-4 continues to be the leading model in terms of overall capabilities

Anthropic introduces Claude 2.1: the latest iteration of its Claude family of LLMs
Introduction:
Claude 2.1, an advancement over the Claude 2.0 model, is now available, offering significant improvements for enterprise applications.
Features:
- 200K Token Context Window: Doubled information capacity (compared to Claude 2.0) to 200,000 tokens, equivalent to roughly 150,000 words or about 500 pages, enabling extensive data processing, from technical documentation to long literary works (see the quick conversion check after this list).
- Enhanced Honesty and Accuracy: Achieved a 2x decrease in hallucination rates (compared to Claude 2.0), boosting reliability and trust in AI applications
- Improved Comprehension and Summarization: Demonstrated a 30% reduction in incorrect answers (compared to Claude 2.0), especially beneficial for complex documents like legal texts and financial reports
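As a quick sanity check on those figures, here is the arithmetic behind the tokens-to-words-to-pages conversion. The ratios used (about 0.75 English words per token and about 300 words per page) are common rules of thumb, not numbers from Anthropic’s announcement.

```python
# Rough conversion from tokens to words and pages.
# Assumed rules of thumb (not from Anthropic): ~0.75 words/token, ~300 words/page.
context_tokens = 200_000
words_per_token = 0.75     # assumption
words_per_page = 300       # assumption

words = context_tokens * words_per_token
pages = words / words_per_page
print(f"~{words:,.0f} words, ~{pages:,.0f} pages")   # ~150,000 words, ~500 pages
```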
Benefits:
- Advanced Processing Ability: Capable of handling tasks that typically require hours of human effort in a matter of minutes
- Versatile Tool Integration: Allows integration with existing processes, products, and APIs for diverse operational applications
Other Technical Details:
- Tool Use Feature: A new beta feature enables Claude 2.1 to execute actions such as complex calculations, structured API calls, database searches, and actions in software via private APIs (a minimal dispatch sketch follows this list)
- Improved Developer Experience: The Workbench product facilitates prompt testing and optimization, enhancing the developer Console experience for Claude API users
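To illustrate the tool-use pattern described above, here is a minimal, model-agnostic dispatch sketch in Python. The tool name, its schema, and the simulated tool-call payload are illustrative assumptions on my part; they are not Claude 2.1’s actual beta wire format, which Anthropic documents separately.

```python
import json

# A registry of local "tools" the application is willing to expose to the model.
# The tool name and signature here are hypothetical examples.
def search_orders(customer_id: str) -> list[dict]:
    """Pretend database lookup; a real app would query its own backend."""
    return [{"order_id": "A-1001", "customer_id": customer_id, "status": "shipped"}]

TOOLS = {"search_orders": search_orders}

def dispatch(tool_call: dict) -> str:
    """Run the tool the model asked for and return a JSON string result
    that would be sent back to the model as the tool's output."""
    fn = TOOLS[tool_call["name"]]
    result = fn(**tool_call["input"])
    return json.dumps(result)

# Simulated model output requesting a tool invocation (illustrative only).
tool_call = {"name": "search_orders", "input": {"customer_id": "C-42"}}
print(dispatch(tool_call))
```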
Conclusion:
Claude 2.1 represents a significant advancement in AI technology, offering enhanced data processing capabilities, increased accuracy, and versatile tool integration, making it a valuable asset for a wide range of enterprise applications. Its commitment to improved developer experience further solidifies its position as a cutting-edge solution in AI.

Stability AI introduces Stable Video Diffusion: the Text-to-Video Creator
Introduction:
Stability AI introduces Stable Video Diffusion, a pioneering generative video AI model derived from the image model Stable Diffusion. This model, in its research preview stage, signifies a substantial advancement towards creating versatile AI models accessible to everyone
Features:
- Adaptability: Tailored for a variety of video applications, including multi-view synthesis from single images, with potential for expansion
- Text-to-Video Interface: An upcoming web tool demonstrates practical applications in sectors like advertising, education, and entertainment
- Model Capacity: Offers two image-to-video models, generating 14 and 25 frames respectively, at customizable frame rates between 3 and 30 frames per second (see the usage sketch after this list)
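As a concrete starting point, the sketch below shows image-to-video generation through the Hugging Face diffusers integration of the model. This is an assumption on my part rather than something in Stability AI’s announcement: it presumes the StableVideoDiffusionPipeline class, the stabilityai/stable-video-diffusion-img2vid-xt checkpoint, a local input.png as the conditioning image, and a CUDA GPU with sufficient memory.

```python
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

# Load the 25-frame image-to-video checkpoint in half precision.
pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.to("cuda")

# Conditioning image; SVD expects roughly 1024x576 input. "input.png" is a placeholder.
image = load_image("input.png").resize((1024, 576))

# Generate frames and write them out at 7 fps (within the 3-30 fps range noted above).
frames = pipe(image, decode_chunk_size=8).frames[0]
export_to_video(frames, "generated.mp4", fps=7)
```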
Benefits:
- Performance: Outperforms leading closed models in user preference studies, establishing its competitive edge
- Research Focus: Currently aimed at research applications, with plans for updates and improvements based on user feedback
- Diverse Portfolio: Part of Stability AI’s extensive range of open-source models spanning image, language, audio, 3D, and code modalities

Microsoft introduces Orca 2: a pair of 7B and 13B parameter LLMs that pack a punch
Introduction:
Microsoft Research has announced the release of ORCA 2, a pair of small language models that demonstrate remarkable capabilities. These models operate on a leaner parameter scale of 7 billion and 13 billion but are comparable in performance to much larger models with up to 70 billion parameters
Features:
- Sophisticated Reasoning Abilities: ORCA 2 models excel in nuanced understanding, logical deduction, and contextual grasp, challenging the notion that only large models can achieve advanced reasoning
- Diverse Solution Strategies: They employ various strategies like step-by-step processing, recall-then-generate, and direct answer methods, showcasing versatile problem-solving approaches
Benefits:
- Competitive Performance: In comparison with larger models such as LLaMA-2-Chat and WizardLM, ORCA 2 has shown superior or equivalent performance in advanced general intelligence and multi-step reasoning tasks
- High Benchmark Scores: ORCA 2 has achieved impressive scores in tasks like causal judgment, understanding dates, geometric shapes, and sports understanding, reflecting its advanced reasoning capabilities
Other Technical Details:
- Training Data: The models are trained with high-quality synthetic data and instructions from a more capable teacher model, enhancing their learning and innovation abilities
- Performance Across Tasks: ORCA 2 has been tested across a range of tasks, scoring particularly well in temporal sequence and sports understanding, among others
- Accessibility for Developers: ORCA 2 is accessible via the Ollama command-line interface and API, facilitating easy integration and use in various applications (a minimal local-inference sketch follows this list)
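A minimal local-inference sketch, assuming a running Ollama server on its default port with the orca2 model already pulled (for example via `ollama pull orca2`); the prompt is purely illustrative.

```python
import requests

# Ollama exposes a local REST API on port 11434 by default.
response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "orca2",                      # assumes `ollama pull orca2` was run
        "prompt": "In one sentence, what is step-by-step reasoning?",
        "stream": False,                       # return a single JSON object
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["response"])
```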
Conclusion:
Microsoft’s ORCA 2 language models represent a significant shift in AI model development. By combining efficiency with advanced reasoning, ORCA 2 sets a new standard for smaller language models. Its robust performance, adaptability, and ease of integration mark a step forward in the application of AI for both everyday tasks and complex problem-solving scenarios. ORCA 2 stands as a testament to Microsoft Research’s innovative approach, heralding a new era where smaller AI systems can effectively compete with larger counterparts in complex tasks
About The Author

Bogdan Iancu
Bogdan Iancu is a seasoned entrepreneur and strategic leader with over 25 years of experience in diverse industrial and commercial fields. His passion for AI, Machine Learning, and Generative AI is underpinned by a deep understanding of advanced calculus, enabling him to leverage these technologies to drive innovation and growth. As a Non-Executive Director, Bogdan brings a wealth of experience and a unique perspective to the boardroom, contributing to robust strategic decisions. With a proven track record of assisting clients worldwide, Bogdan is committed to harnessing the power of AI to transform businesses and create sustainable growth in the digital age.