Get the Linkedin stats of Ethan Mollick and many LinkedIn Influencers by Taplio.
I research the factors that lead to success in entrepreneurship and innovation, and how to teach people to become more effective leaders and innovators. Most recently, I have focused on AI, and especially how it affects education and work. I co-founded Wharton Interactive, an effort to democratize education using games and simulations, while applying rigorous approaches to pedagogy at scale. I have worked with a lot of different organizations (from startups to government agencies), co-founded a technology company, and have a mysterious past.
Check out Ethan Mollick's verified LinkedIn stats (last 30 days)
“I believe now is the right time to start preparing for AGI” The same warnings are now appearing with increasing frequency from smart outside observers of the AI industry who do not benefit from hyping what it can do, like Kevin Roose (below) & Ezra Klein. I think ignoring the possibility they are right is a real mistake.
The irony of working with AI right now is that it eliminates the need for basic expertise yet benefits real experts the most. Using a few prompts, I build a 3D game, create an elaborate course & write an academic paper to test the state of "vibeworking."
In our new paper on using AI to solve real business problems at Procter and Gamble, where we found AI helped individuals, even inexperienced ones, perform as well as teams of two, we gave participants a set of default prompts to work with. They are in the Appendix of the paper, but I attached them as images here. The experiments were done with GPT-4/4o. We would likely prompt the AI a little differently now, as Reasoners and more recent models are less sensitive to personas and CoT (and probably less sensitive to prompting technique overall). And of course, Deep Research tools have changed the capabilities of AIs by even more, and didn't exist this summer. https://lnkd.in/ehJr8CxM
Really in-depth paper on AI hallucinations in medicine, with lots of discussion and analysis about addressing them & what is appropriate for medical use. But I found this bit on how much more accurate the latest models have gotten to be interesting (though more study is needed, I also wonder how it compares to humans).
City builder game in the styles of Cezanne, Piranesi, the Voynich Manuscript, Van Gogh, Rembrandt, Rothko, and Seurat. I made this all in Veo 2.
The past 18 months have seen the most rapid change in human written communication ever. By September 2024, 18% of financial consumer complaints, 24% of press releases, 15% of job postings & 14% of UN press releases showed signs of LLM writing. And the method undercounts true use because it can only detect unsophisticated AI.
What is now trivial with multimodal image generation (these are the exact prompts): "ChatGPT show me a photorealistic drone shot of a fantasy city, the walls are white alabaster streaked with gold, while massive brass trellises built into the towers allow vines of maroon to climb them. At the center is a tower with a balcony and a hooded figure on it" "in the next shot we move closer to the balcony, keeping the rest of the details, we cannot yet see a face in the cloak's darkness, but see a gleaming clasp to the cloak, worked with red stones, we also can see the figure holds a map case in one hand" (note how the buildings keep consistency) "the scene is the same in all of its details, and remains photorealistic, but the cloak is pulled back to reveal a middle aged man, his eyes set with determination, his hair is woven with golden threads, he holds the open case in one hand and, in the other, a short wand of obsidian." (really good detail consistency overall, though you can see some changes to the towers and vines... and the belt) "we zoom in on his face, which fills the frame, one side is illuminated by a sickly green light from the wand, which is covered with a network of fine glowing lines." (really impressive character consistency, including the outfit)
The amount of capability overhang in current AI systems is hard to overstate, whether discussing LLMs in general or just narrow areas like vision & image creation. If AI development stopped today (and no indication that is happening), we have a couple decades of figuring out how to integrate it into work, education, & life.
Snake games are a bad test of AI because there are so many online so it can just be copyi- "Claude 3.7, make a snake game, but the snake is self-aware it is in a game and trying to escape and interesting things happen as a result" This is all AI (one prompt + a request to make special things happen faster). Take a look at Matrix mode at 0:55
One reason I wish more humanities-oriented people would engage with AI is that models are writers, trained on words, producing words. There are strengths & weaknesses in the models that can only be seen if you engage deeply with them as writers. They do not show up in coding benchmarks or tests, but they are often far more important for many of the things that AI is used for.
I regret to announce that the meme Turing Test has been passed: LLMs produce funnier memes than the average human, as judged by humans. Humans working with AI get no real boost (a serious finding worth considering, and one that is coming up often in AI-creativity work). The best human meme-makers still beat AI, however.
Randomized trial of AI for legal work finds Reasoning models are a big deal for lawyers: Law students using o1-preview (the first available reasoner) saw the quality of their work on most tasks increase (up to 28%) & time savings of 12-28%. There were a few hallucinations, but a RAG-based AI with access to legal material (Vincent) reduced those to human level. Combining both will be the future. Big changes to law appear to be coming: "Our findings demonstrate that reasoning models improve not only the clarity, organization, and professionalism of legal work but also the depth and rigor of legal analysis itself."
When using an AI Deep Research tool for the first time, you need to review the output with a critical eye: follow every link to make sure things are really cited, read every line for hallucination, etc. You aren't going to keep that attention to detail long, so get an idea right away. You need to do the work, and you need to decide how much to trust the tool at the start. I see people taken in by good-looking output all the time. Start by being skeptical, then you can embrace it. Don't trust the assessment of anyone else - AI can be great in one area and bad in another. And for goodness sake, start by testing it in areas where you are an expert. Don't ask it to predict the future (it isn't made for that) but ask it for something analytical and research-y that you know, where you can judge its opinion and the quality of the sources it found.
I wish more people were taking seriously the possibility that Ezra Klein and the leadership of the top AI labs are raising: that AGI, a machine better than most humans at most intellectual tasks, is a real possibility in the near future. You don't have to buy into this yourself, but leaders & policymakers need to consider the possibility it is true and think about how to mitigate risks and take advantage of opportunities. An addition after seeing the comments: to be clear, no one knows whether AGI is possible, let alone in the near term. It may not be. But quite a few serious experts, including many of the leaders in the space, seem to think it is imminent. For people who need to consider possible futures, it seems a mistake to assume they must be wrong.
In our new paper we ran an experiment at Procter and Gamble with 776 experienced professionals solving real business problems. We found that individuals randomly assigned to use AI did as well as a team of two without AI. And AI-augmented teams produced more exceptional solutions. The teams using AI were happier as well. Even more interesting: AI broke down professional silos. R&D people with AI produced more commercial work and commercial people with AI had more technical solutions. The standard model of "AI as productivity tool" may be too limiting. Today’s AI can function as a kind of teammate, offering better performance, expertise sharing, and even positive emotional experiences. This was a massive team effort with work led by Fabrizio Dell'Acqua, Charles Ayoubi, and Karim Lakhani along with Hila Lifshitz, Raffaella Sadun, Lilach M., me and our partners at P&G: Yi Han, Jeff Goldman, Hari Nair and Stewart Taub Substack about the work here: https://lnkd.in/ehJr8CxM Paper: https://lnkd.in/e-ZGZmW9
This is the AI graph that big companies (and many startups) haven’t yet absorbed. Models are getting both better and cheaper at a very fast rate. You either need to skate towards where the puck is going, or else make a bet on when AI will hit a wall. Don’t assume a static world. This graph shows how quickly this trend has advanced, mapping the capability of AI on the y axis and the logarithmically decreasing costs on the x axis. When GPT-4 came out it was around $50 per million tokens (a token is roughly a word), now it costs around 12 cents per million tokens to use Gemini 1.5 Flash, an even more capable model than the original GPT-4.
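To make the scale of that price decline concrete, here is a minimal back-of-the-envelope sketch using the approximate prices quoted in the post ($50 vs. $0.12 per million tokens); the figures are rough, not official pricing:

```python
# Approximate API prices from the post, in USD per million tokens.
gpt4_launch_price = 50.00      # original GPT-4, at launch (approx.)
gemini_flash_price = 0.12      # Gemini 1.5 Flash (approx.)

# How many times cheaper a token has become between the two models.
drop_factor = gpt4_launch_price / gemini_flash_price

# Cost of processing a 100,000-token workload (~100k words) at each price.
workload_tokens = 100_000
old_cost = gpt4_launch_price * workload_tokens / 1_000_000
new_cost = gemini_flash_price * workload_tokens / 1_000_000

print(f"Price fell roughly {drop_factor:.0f}x")
print(f"100k tokens: ${old_cost:.2f} then vs ${new_cost:.3f} now")
```

At these quoted prices, the per-token cost dropped by a factor of roughly 400 while capability improved, which is the "don't assume a static world" point of the graph.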