Jeremy Howard & The OpenAI Debacle

Interview Recap + Microsoft's Perspective

Welcome to the awesome people who have joined us since last week! If you aren’t subscribed, join 3,307 curious AI folks. View this post online.

I came across two pieces of AI content that, if I had your number, I would text you about.


They're the type of niche content whose share-to-value ratio is underrated.

We're going to cover two items today:

  1. A recap of an interview with Jeremy Howard
  2. Behind the scenes of the OpenAI Saga from Microsoft's point of view (Spoiler: They called it a “Turkey-Shoot Clusterf*ck” - tons of juicy quotes at the bottom)

First up, let’s chat about Jeremy Howard’s interview with Hugo Bowne-Anderson, “Making Large Language Models Uncool Again.”

Jeremy Howard on OpenAI and Open Source Models

Quick intro on Jeremy - he is an Australian data scientist, entrepreneur, and educator. He is the co-founder of fast.ai and previously the President and Chief Scientist of Kaggle.

Jeremy's thoughts on the OpenAI weekend

Jeremy was critical of OpenAI just days before the debacle.

So it was coincidental that, just a few days later, OpenAI’s board ousted Sam as CEO (more on this later from Microsoft’s point of view).

He said there were signs that OpenAI was hitting a wall: pausing ChatGPT Plus signups and Dev Day disappointments (like releasing a cheaper & faster GPT-4, but no intelligence upgrades).

Jeremy continued with his reaction to the OpenAI drama weekend:

  • Board Power - One of the board's only real levers of power is firing the CEO. Given their mission to uphold OpenAI’s charter, Jeremy understood why the board fired Sam (not that he necessarily agreed with them).
  • Money made this messy - OpenAI had billions from Microsoft, employees were promised millions, and Thrive Capital was about to put in even more. This put the for-profit side of the company at odds with the non-profit arm.
  • The Breaking Point For OpenAI’s Board - When Microsoft said they’d take OpenAI’s CEO, CTO, and all the staff, the board's hands were tied. They had to back down.
  • Was it worth it? - “Whatever they [OpenAI’s board] get paid, it’s not worth the sh*t they went through”

LLMs and Artificial Super Intelligence (ASI)

Jeremy acknowledges that LLMs are not the path to ASI because they are expensive, slow, and not great at planning. When asked about Q*, his response:

Something everybody, I think, pretty much agrees on, including Sam Altman and Yann LeCun, is LLMs aren't going to make it. The current LLMs are not a path to ASI. They're getting more and more expensive, they're getting more and more slow.

The more we use them, the more we realize their limitations. We're also getting better at taking advantage of them, and they're super cool and helpful, but they appear to be behaving as extremely flexible, fuzzy, compressed search engines.

The thing you can really see missing here is this planning piece. - Source


He says Q* may help with this, as hinted at by the fact that OpenAI released a paper on step-by-step mathematical reasoning in May 2023.
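To make that concrete, here’s a minimal sketch of what step-by-step reasoning looks like in practice: the same question asked directly versus with an instruction to show intermediate steps, which gives a verifier (or a process-supervised reward model, as in OpenAI’s paper) something to check. The model name and prompts are illustrative, not from the interview.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

QUESTION = (
    "A bat and a ball cost $1.10 together. The bat costs $1.00 "
    "more than the ball. How much does the ball cost?"
)

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Direct answer: the model may pattern-match to the tempting (wrong) "$0.10"
print(ask(QUESTION))

# Step-by-step: intermediate reasoning can be checked one step at a time
print(ask(QUESTION + "\nThink through the problem step by step, then answer."))
```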

Open Sourced Models

Jeremy thinks that GPT-4 “has been nerfed,” pointing to evidence of degrading performance. Meanwhile, Sam Altman says it’s getting better. So either the evaluations of GPT-4 are off, or Sam is not being honest. “They’re both a problem.”
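One way to turn “has it been nerfed?” into a measurable question is to pin a fixed eval set and re-run it against the model over time. Here’s a minimal sketch of that idea; the questions, grading, and log file are hypothetical stand-ins, not anyone’s actual eval harness.

```python
import json
from datetime import date

from openai import OpenAI

client = OpenAI()

# Hypothetical fixed question/answer pairs; never change these between runs
EVAL_SET = [
    {"q": "What is 17 * 24?", "a": "408"},
    {"q": "What is the capital of Australia?", "a": "Canberra"},
]

def run_eval(model: str = "gpt-4") -> float:
    correct = 0
    for item in EVAL_SET:
        reply = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": item["q"]}],
            temperature=0,  # reduce randomness so drift reflects the model
        ).choices[0].message.content
        correct += item["a"].lower() in reply.lower()  # crude string grading
    return correct / len(EVAL_SET)

# Append today's score; a falling trend over weeks is evidence, not vibes
with open("gpt4_eval_log.jsonl", "a") as f:
    f.write(json.dumps({"date": str(date.today()), "score": run_eval()}) + "\n")
```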

Jeremy’s Top Open Sourced Models by Tier:

Killer Robots

Jeremy is a big fan of having ASI in the hands of everyone as opposed to a centralized group.

He steelmans two ways AI-driven destruction could happen:

  • Agentic AIs - Sentient AIs that try to preserve their own existence. Humans pulling the plug would be the biggest threat to them, so humans would need to go
  • Super Powerful Humans - If a human has access to super powerful AI, that person becomes super powerful themselves. In the wrong hands, this would lead to loss of life

Of these two options, he thinks super powerful humans will come first - humans will always be able to do something better than AI.

The best defense against this is super powerful good humans sharing knowledge and increasing the collective intelligence.

See the full interview

The OpenAI weekend debacle from Microsoft’s point of view

I saw an article from The New Yorker floating around Twitter - not only was it great background on the Microsoft + OpenAI relationship, it also had a few more behind-the-scenes tidbits on the OpenAI debacle weekend.

My Favorites

  • After Satya got the call (20 min before the press release went out), someone at Microsoft said it was a “Turkey-Shoot Clusterf*ck”
  • After Satya found out the news, he called Adam D’Angelo (OpenAI board member) for answers, but he didn't get any. Adam was as tight-lipped with him as he was with the public
  • Microsoft devised three plans, each of increasing severity:
    • Plan A: Support Mira Murati (the new CEO) and see if the board would reverse their decision
    • Plan B: Use Microsoft's leverage and money to get Sam reappointed
    • Plan C: Hire Sam @ Microsoft
  • “Every step we get closer to A.G.I., everybody takes on, like, ten insanity points.” - someone familiar with the board
  • “Two people familiar with the board’s thinking say that the members felt bound to silence by confidentiality constraints. Moreover, as Altman’s ouster became global news, the board members felt overwhelmed and ‘had limited bandwidth to engage with anyone, including Microsoft.’”
  • Plan A wasn’t working, so Microsoft went to Plan B: work with Mira directly to get Sam back in
  • “OpenAI’s board asked Murati to join them, alone, for a private conversation. They told her that they’d been secretly recruiting a new C.E.O.—and had finally found someone willing to take the job.”
  • Plan B didn’t work either, so Microsoft went to Plan C and offered jobs to everyone at OpenAI
  • “There will be a full and independent investigation, and rather than putting a bunch of Sam’s cronies on the board we ended up with new people who can stand up to him,” the person familiar with the board’s discussions told me. “Sam is very powerful, he’s persuasive, he’s good at getting his way, and now he’s on notice that people are watching.” Toner told me, “The board’s focus throughout was to fulfill our obligation to OpenAI’s mission.”

See the full article

In case you missed it

  • I’m on the hunt to find out how AI is making a difference at your work. Not which AI products you’re building, but rather how AI is having a tangible impact on efficiency or revenue gains. If you have a good example, I’d love for you to fill out a 3-min survey.
  • Aymeric Roucher (Hugging Face) extended the Needle In A Haystack analysis to test it against Retrieval Augmented Generation (RAG).
    • Instead of having the model search 128K tokens of context for a "needle" statement, he first does a semantic search, then asks the model (see the sketch after this list).
    • It’s almost not a fair fight, as RAG reduces your token count from ~128K down to ~10K.
    • As expected, RAG performed great while also being cheaper and faster.
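
Here’s a rough sketch of the RAG side of that comparison, under my own assumptions rather than Aymeric’s exact setup: embed the document chunks, retrieve the few closest to the question, and send only those (~10K tokens or less) to the model instead of the full ~128K context.

```python
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

def rag_answer(question: str, chunks: list[str], k: int = 5) -> str:
    chunk_vecs = embed(chunks)   # in practice, precompute and cache these
    q_vec = embed([question])[0]
    # OpenAI embeddings are unit-normalized, so a dot product is cosine similarity
    scores = chunk_vecs @ q_vec
    top = [chunks[i] for i in np.argsort(scores)[-k:]]
    prompt = (
        "Answer using only this context:\n"
        + "\n---\n".join(top)
        + f"\n\nQuestion: {question}"
    )
    reply = client.chat.completions.create(
        model="gpt-4",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content
```

The model now searches ~10K tokens of retrieved text for the needle instead of 128K, which is where the cost and speed wins come from.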


Greg Kamradt

Twitter / LinkedIn / Youtube / Work With Me

