Get the LinkedIn stats of Pascal Biese and many LinkedIn influencers by Taplio.
As the founder of LLM Watch, my goal is to curate the most relevant Large Language Model (LLM) research and make it accessible to a broad audience. I distill complex technical topics into summaries that respect your time. In addition, I offer advisory services to a select group of partners, leveraging my expertise in the field. Before embarking on this journey, I led Deep Learning R&D at one of Europe's largest telecom companies. There, I spearheaded numerous AI projects with a focus on Natural Language Processing (NLP) and Automatic Speech Recognition (ASR). With more than 6 years of experience in AI, I was there before the hype. I try to offer a healthy balance between shiny new things and what really matters.
Check out Pascal Biese's verified LinkedIn stats (last 30 days)
Use Taplio to search all-time best posts
A new generalist AI just beat Manus & DeepResearch - and it's not even close.

A new paper shows that stripping away complexity might be the key to breakthrough performance. Researchers from Princeton, Tsinghua, and other institutions just introduced Alita, a generalist AI agent that challenges everything we thought we knew about building capable AI systems. While most agent frameworks rely on extensive pre-programmed toolsets and workflows, Alita takes a radically minimalist approach.

The core insight: instead of trying to anticipate every tool an agent might need, Alita starts with just a basic web agent and generates its own capabilities on demand. It does this through Model Context Protocols (MCPs) - standardized ways to provide context to LLMs - which the agent can create, test, and reuse automatically.

Despite having almost no predefined tools, Alita achieves 87.27% accuracy on the challenging GAIA benchmark, outperforming OpenAI's Deep Research and other heavily engineered systems. The agent can even share its self-generated MCPs with other frameworks, boosting their performance by over 20%.

This work suggests we may have been overthinking AI agent design. Perhaps the path to truly capable AI assistants isn't through meticulous engineering, but through systems that can evolve their own solutions. For more on Alita and other AI highlights, check out this week's LLM Watch:
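The create-test-reuse loop described above can be sketched in a few lines. This is a toy illustration, not Alita's actual code: the tool source is hard-coded here, whereas in the real system an LLM would write it, and the cached callables stand in for self-generated MCPs.

```python
# Toy sketch of an agent that creates, sanity-tests, and caches its own
# tools on demand, in the spirit of Alita's self-generated MCPs.
# Names (MinimalAgent, _generate_tool) are illustrative, not from the paper.

class MinimalAgent:
    def __init__(self):
        self.tools = {}  # name -> callable, reusable like cached MCPs

    def _generate_tool(self, name):
        # Stand-in for LLM code generation: return source code for the tool.
        sources = {
            "word_count": "def tool(text):\n    return len(text.split())",
        }
        return sources[name]

    def get_tool(self, name):
        if name not in self.tools:        # create the capability on demand
            src = self._generate_tool(name)
            ns = {}
            exec(src, ns)                 # "build" the generated tool
            tool = ns["tool"]
            assert tool("a b") == 2       # test before trusting/reusing it
            self.tools[name] = tool       # cache so later tasks reuse it
        return self.tools[name]

agent = MinimalAgent()
counter = agent.get_tool("word_count")
print(counter("self generated tools on demand"))  # -> 5
```

Because the cached tools are plain callables keyed by name, another agent could import the same registry, which loosely mirrors how Alita's MCPs can be shared across frameworks.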
LLMs are learning to think better by doing the opposite of humans.

While we typically think fast first and only slow down when stuck, new research shows AI reasoning models perform best with the reverse approach. Researchers from UIUC and UC Berkeley just introduced AlphaOne (α1), a framework that teaches Large Reasoning Models like OpenAI's o1 to modulate their thinking speed at test time. The key insight? These models suffer from either overthinking (wasting compute on simple problems) or underthinking (rushing through complex ones).

The solution is elegantly simple: α1 uses an "α moment" to scale the thinking phase and schedules slow-to-fast reasoning transitions using stochastic processes. During the initial phase, the model thinks slowly and carefully, then switches to fast thinking for efficient answer generation. This approach improved accuracy by up to 6% while reducing computational costs by 14%.

What's fascinating is that this slow-first, fast-later pattern directly contradicts human cognition as described by dual process theory, suggesting that optimal AI reasoning may follow fundamentally different principles than biological intelligence. The implications go beyond efficiency gains: as reasoning models become central to AI applications, understanding how to optimize their thinking patterns could unlock new capabilities we haven't yet imagined.

↓ Want to keep up? Join my newsletter with 50k+ readers and be the first to learn about the latest AI research: llmwatch.com 💡
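The α-moment idea can be illustrated with a toy schedule. This is a minimal sketch under my own assumptions, not the paper's implementation: before the α moment the model is stochastically nudged to keep deliberating (e.g. by continuing slow-thinking tokens with some probability), and after it deliberation stops so the answer gets produced quickly. All parameter names here are illustrative.

```python
import random

def alpha_one_schedule(total_budget, alpha=0.4, p_slow=0.8, seed=0):
    """Toy slow-then-fast thinking schedule.

    Returns one decision per reasoning step: True means "keep thinking
    slowly", False means "switch to fast answer generation". The alpha
    moment (alpha * total_budget) marks the end of the slow phase.
    """
    rng = random.Random(seed)
    alpha_moment = int(alpha * total_budget)   # where slow thinking ends
    decisions = []
    for step in range(total_budget):
        if step < alpha_moment:
            # Stochastic slow phase: mostly continue deliberating.
            decisions.append(rng.random() < p_slow)
        else:
            # Fast phase: no more slow thinking, wrap up the answer.
            decisions.append(False)
    return decisions

plan = alpha_one_schedule(10)
print(plan)  # slow-biased decisions for steps 0-3, then all False
```

Raising `alpha` lengthens the careful phase (useful for hard problems); lowering it cuts compute on easy ones, which is the overthinking/underthinking trade-off the post describes.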
Other LinkedIn influencers on Taplio:

Justin Welsh (@justinwelsh): 1m followers
Izzy Prior (@izzyprior): 82k followers
Vaibhav Sisinty ↗️ (@vaibhavsisinty): 451k followers
Richard Moore (@richardjamesmoore): 105k followers
Daniel Murray (@daniel-murray-marketing): 150k followers
Matt Gray (@mattgray1): 1m followers
Shlomo Genchin (@shlomogenchin): 49k followers
Sam G. Winsbury (@sam-g-winsbury): 49k followers
Ash Rathod (@ashrathod): 73k followers
Sabeeka Ashraf (@sabeekaashraf): 20k followers
Wes Kao (@weskao): 107k followers
Tibo Louis-Lucas (@thibaultll): 6k followers
Sahil Bloom (@sahilbloom): 1m followers
Austin Belcak (@abelcak): 1m followers
Luke Matthews (@lukematthws): 188k followers
Andy Mewborn (@amewborn): 215k followers