Get the LinkedIn stats of Ethan Mollick and many LinkedIn influencers by Taplio.
I research the factors that lead to success in entrepreneurship and innovation, and how to teach people to become more effective leaders and innovators. Most recently, I have focused on AI, and especially how it affects education and work. I co-founded Wharton Interactive, an effort to democratize education using games and simulations, while applying rigorous approaches to pedagogy at scale. I have worked with a lot of different organizations (from startups to government agencies), co-founded a technology company, and have a mysterious past.
o3 is far more agentic than people realize. It is worth playing with it more than you would a typical new model to see what it can do. You can get remarkably complex work out of a single prompt. It just does things. (Of course, that makes checking its work even harder, especially for non-experts.)
Interestingly, Perplexity’s version of Deep Research prioritizes verbosity in its replies. Is it to make people feel good (long reports make it seem like they have a thorough answer)? Does it have something to do with actual result quality? Either way, it is why it is worth knowing the system prompts companies are using.
There seems to be a real bifurcation growing between the IT-facing API side of OpenAI & the user-facing ChatGPT side. Increasingly, you can only access the high-power, high-cost models (GPT-4.5, Deep Research) through ChatGPT, while the API supports cheap, fast models (the new GPT-4.1). That creates some tension for organizational use, as IT departments and companies making wrappers will not be able to provide the same capabilities that users can access on their own. Giving employees direct access to frontier models is going to remain a big consideration.
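For teams working on the API side of that split, the call itself is straightforward. Here is a minimal sketch using the OpenAI Python SDK; the "gpt-4.1" model identifier is taken from the post, and actual model availability depends on your account and tier.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Single chat completion against an API-available model (identifier assumed from the post).
response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[
        {
            "role": "user",
            "content": "Summarize the tradeoffs of giving employees direct ChatGPT access "
                       "versus routing everything through an internal API wrapper.",
        },
    ],
)
print(response.choices[0].message.content)
```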
One thing the GPT-4o personality issue demonstrates is that treating AI like every other online product by maximizing for engagement & likeability will have unintended consequences that could cause real problems, both for the usefulness of the models & for the people using them. The issue, as this thread by the former leader of Microsoft’s Bing shows, is that we also do not want models that tell us the brutal truth about ourselves.
Worth noting that in their latest federal filing, Anthropic is still saying that AIs with "intellectual capabilities matching or exceeding that of Nobel Prize winners across most disciplines—including biology, computer science, mathematics & engineering" are expected in the next 2-3 years. (You don't have to believe them, of course, but they haven't moved their time frames, which is interesting.)
o3 now cracks new Harvard Business School cases from the PDF, in one shot. I blurred the figures so as not to ruin the case, but I asked the AI to work out the financials, which required incorporating data scattered throughout the case. More interestingly, when I asked it to compare its work to the case's answer, it was within 4%, and the differences were legitimate points of interpretation (I think the AI was generally more correct). There were no hallucinations here (surprisingly), but o3 does still hallucinate, so I wouldn't make it my sole accountant or analyst, yet. But how it gathered the information and coherently built out models like an MBA would is really interesting.
Our Generative AI Lab at Wharton is releasing its first Prompt Engineering Report, empirically testing prompting approaches. In this first report we find: 1) Prompting “tricks” like saying “please” do not help consistently or predictably. On some questions they improved outcomes, on others they hurt. 2) How you measure against benchmarks matters a lot, and you need to pick benchmarks that match your use case (Link in comments)
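To illustrate what this kind of empirical test looks like in practice (this is not the Lab's actual protocol, just a hedged sketch assuming the OpenAI Python SDK, a placeholder model name, and a toy two-question benchmark), one could score a plain prompt against a "please" variant like this:

```python
# Illustrative sketch only: compare a plain prompt template against a "polite"
# variant over a tiny benchmark and report accuracy for each. The model name
# and the benchmark items are assumptions, not the report's methodology.
from openai import OpenAI

client = OpenAI()

BENCHMARK = [  # hypothetical question/answer pairs standing in for a real benchmark
    {"question": "What is 17 * 24?", "answer": "408"},
    {"question": "What is the capital of Australia?", "answer": "Canberra"},
]

VARIANTS = {
    "plain": "{question}\nAnswer with only the final answer.",
    "polite": "Please answer the following question: {question}\nAnswer with only the final answer, thank you.",
}

def accuracy(template: str) -> float:
    """Run every benchmark item through one prompt template and score exact-answer hits."""
    correct = 0
    for item in BENCHMARK:
        reply = client.chat.completions.create(
            model="gpt-4.1",  # assumed model identifier
            messages=[{"role": "user", "content": template.format(question=item["question"])}],
        )
        text = reply.choices[0].message.content.strip()
        correct += int(item["answer"].lower() in text.lower())
    return correct / len(BENCHMARK)

for name, template in VARIANTS.items():
    print(f"{name}: {accuracy(template):.0%}")
```

The point of the sketch is the comparison loop: the same benchmark and the same scoring rule, with only the prompt template varying, so any difference is attributable to the "trick" rather than to the mix of questions, which is also why the choice of benchmark matters so much.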
What cyborg work looks like as an academic, by Robert Ghrist, a mathematician and associate dean of undergraduate education at the University of Pennsylvania.
I don't think people realize how much even a mildly agentic AI system like o3 can do on its own. For example, this prompt works in o3, zero shot: "Come up with 20 clever ideas for marketing slogans for a new mail-order cheese shop. Develop criteria and select the best one. Then build a financial and marketing plan for the shop, revising as needed and analyzing competition. Then generate an appropriate logo using image generator and build a website for the shop as a mockup, making sure to carry 5-10 cheeses that fit the marketing plan." All the outputs in the image. With that single prompt, in less than two minutes, the AI not only provided a list of slogans, but ranked and selected an option, did web research, developed a logo, built marketing and financial plans, and launched a demo website for me to react to. The fact that my instructions were vague, and that common sense was required to make decisions about how to address them, was not a barrier. More here (including cases where o3 fails): https://lnkd.in/e97DKUSF
In our new paper we ran an experiment at Procter and Gamble with 776 experienced professionals solving real business problems. We found that individuals randomly assigned to use AI did as well as a team of two without AI. And AI-augmented teams produced more exceptional solutions. The teams using AI were happier as well. Even more interesting: AI broke down professional silos. R&D people with AI produced more commercial work, and commercial people with AI had more technical solutions. The standard model of "AI as productivity tool" may be too limiting. Today’s AI can function as a kind of teammate, offering better performance, expertise sharing, and even positive emotional experiences. This was a massive team effort, with work led by Fabrizio Dell'Acqua, Charles Ayoubi, and Karim Lakhani, along with Hila Lifshitz, Raffaella Sadun, Lilach M., me, and our partners at P&G: Yi Han, Jeff Goldman, Hari Nair, and Stewart Taub. Substack about the work here: https://lnkd.in/ehJr8CxM Paper: https://lnkd.in/e-ZGZmW9
I don’t mean to be a broken record, but AI development could stop at the o3/Gemini 2.5 level and we would still have a decade of major changes across entire professions & industries (medicine, law, education, coding…) as we figure out how to actually use it & adapt our systems and organizations to what it can do. AI disruption is baked in.
Other LinkedIn influencers tracked by Taplio:
Amelia Sordell 🔥 (@ameliasordell) - 228k followers
Vaibhav Sisinty ↗️ (@vaibhavsisinty) - 449k followers
Richard Moore (@richardjamesmoore) - 105k followers
Daniel Murray (@daniel-murray-marketing) - 150k followers
Sam G. Winsbury (@sam-g-winsbury) - 49k followers
Shlomo Genchin (@shlomogenchin) - 49k followers
Matt Gray (@mattgray1) - 1m followers
Luke Matthews (@lukematthws) - 187k followers
Ash Rathod (@ashrathod) - 73k followers
Izzy Prior (@izzyprior) - 81k followers
Tibo Louis-Lucas (@thibaultll) - 6k followers
Wes Kao (@weskao) - 107k followers
Justin Welsh (@justinwelsh) - 1m followers
Sabeeka Ashraf (@sabeekaashraf) - 20k followers
Andy Mewborn (@amewborn) - 212k followers
Austin Belcak (@abelcak) - 1m followers