"But logic is not all, one needs one’s heart to follow an idea.” (Richard P. Feynman) I am deeply involved in the AI community, where my expertise in decision science, particularly at the intersection of Large Language Models (LLM), Natural Language Processing (NLP), Knowledge Graphs, and Graph Theory, is applied to foster innovation and strategic development. This specialization has positioned me as a leading voice in the construction of Retrieval-Augmented Generation (RAG) and reasoning engines, regarded by many, including myself, as the state-of-the-art approach in our field. I am also an avid writer, sharing daily insights on AI applications in business and decision-making with my 30,000+ followers on Medium. My articles aim to demystify the complexities of AI and provide actionable advice for leveraging technology in business contexts. You can follow my writings and engage with the community on Medium and LinkedIn. This dedication to AI and its potential in transforming industries led me to share my knowledge and insights as a speaker, including a recent lecture at Oxford on the integration of LLMs and graphs for contemporary business practices. Engaging in discussions about AI and its multifaceted applications in business and beyond is not just my profession but a true passion. I look forward to connecting with fellow enthusiasts and professionals keen on exploring the vast possibilities that AI offers.
Long-term coherence refers to a large-language-model agent’s ability to pursue goals consistently, remember relevant facts, and recover from errors over many thousands of interactions. Such coherence is a precondition for true agentic systems, in which an agent enjoys both reliable access to the right knowledge (high context control) and enough workflow autonomy to operate without constant human shepherding. Recent empirical work, however, shows that today’s LLM agents rarely sustain coherent behaviour for long, limiting their usefulness as enterprise “digital co-workers”.

First, benchmarks built expressly to test longevity, most prominently Vending-Bench, demonstrate that even state-of-the-art models degenerate after roughly a hundred simulated business days, exhibiting wide performance variance and catastrophic failure cascades. Second, the degradation is not caused by running out of context-window space; enlarging memory sometimes worsens outcomes, signalling deeper architectural shortcomings. Third, separate experiments on “rule fatigue” reveal that as the number of simultaneous constraints rises, rule adherence falls precipitously and predictably, with later instructions effectively ignored. Fourth, traditional retrieval pipelines that feed raw documents or similarity-based vectors to an agent amplify these weaknesses, because they deliver either too little or too much information and force the model to infer relationships that are not explicitly encoded. Fifth, design patterns that externalise structure, graph-based knowledge representations for context and declarative grammars for output, promise to mitigate these failure modes by constraining both what the model sees and how it can act.

Vending-Bench uncovers the core limitation behind the first two findings: models trained purely to predict the next token build only ephemeral internal states. When such a model misinterprets an event (e.g., believing stock has arrived when it has not), the error propagates through later steps, because the model lacks an intrinsic mechanism to overwrite the faulty belief with ground truth. The rule-fatigue results echo this, showing that models allocate attention unevenly across long prompts; once the cognitive “budget” is exhausted, subsequent instructions lose salience. Together, these studies imply that long-term coherence fails because of how knowledge is represented and updated, not because of how much of it fits in the prompt.

Both long-term coherence and robust structured output ultimately hinge on a well-designed domain ontology. In practical terms, the ontology acts as the single source of semantic truth: it tells the retrieval layer which nodes and edges constitute “relevant context,” it defines the output schema against which Pydantic models or logit-constrained generation validate, and it provides the conceptual anchors that keep an agent’s beliefs aligned over thousands of turns.
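To make these ideas concrete, start with the belief-overwrite gap that Vending-Bench exposes. Below is a minimal sketch of an external belief store that reconciles an agent's assertions against observed ground truth each turn; the BeliefStore class, its keys, and its values are hypothetical illustrations, not anything taken from the benchmark itself.

```python
from dataclasses import dataclass, field

@dataclass
class BeliefStore:
    """External memory holding the agent's current beliefs.

    Unlike the model's transient internal state, beliefs here persist
    across turns and can be forcibly overwritten by observed facts.
    """
    beliefs: dict[str, object] = field(default_factory=dict)

    def assert_belief(self, key: str, value: object) -> None:
        # The agent records what it currently thinks is true.
        self.beliefs[key] = value

    def reconcile(self, ground_truth: dict[str, object]) -> list[str]:
        # Overwrite any belief contradicted by an observed fact and
        # report the corrections so they can be surfaced to the agent.
        corrections = []
        for key, observed in ground_truth.items():
            if self.beliefs.get(key) != observed:
                corrections.append(f"{key}: {self.beliefs.get(key)!r} -> {observed!r}")
                self.beliefs[key] = observed
        return corrections

# Example: the agent wrongly believes a stock delivery has arrived.
store = BeliefStore()
store.assert_belief("stock_arrived", True)
print(store.reconcile({"stock_arrived": False}))
# ['stock_arrived: True -> False']
```

Feeding the returned corrections back into the next prompt turns a silent, propagating error into an explicit repair step.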
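For rule fatigue, one plausible countermeasure (my sketch, not something the cited studies prescribe) is to move constraints out of the prompt and enforce them programmatically on each draft, so that later rules cannot lose salience as the prompt grows; the rule names and predicates below are invented for illustration.

```python
# Each rule is a (name, predicate) pair applied to a draft response.
# Violations are returned to the model as targeted repair feedback
# instead of relying on the prompt to keep every rule salient.
RULES = [
    ("no_pricing_promises", lambda text: "guaranteed price" not in text.lower()),
    ("max_length", lambda text: len(text) <= 1200),
    ("mentions_source", lambda text: "[source]" in text),
]

def violated_rules(draft: str) -> list[str]:
    return [name for name, check in RULES if not check(draft)]

draft = "We offer a guaranteed price on all items."
print(violated_rules(draft))  # ['no_pricing_promises', 'mentions_source']
```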
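The graph-based context pattern might look like the following sketch, here using networkx with a toy ontology and made-up entity names: the retrieval layer returns an explicitly encoded, radius-bounded neighbourhood of facts rather than similarity-ranked text chunks, so the model never has to infer relationships on its own.

```python
import networkx as nx

# Toy knowledge graph; nodes and relations are illustrative only.
G = nx.DiGraph()
G.add_edge("VendingMachine-7", "Supplier-A", relation="restocked_by")
G.add_edge("Supplier-A", "Invoice-112", relation="issued")
G.add_edge("VendingMachine-7", "Site-Berlin", relation="located_at")
G.add_edge("Site-Berlin", "Region-EU", relation="part_of")

def graph_context(graph: nx.DiGraph, entity: str, radius: int = 2) -> list[str]:
    """Return explicit (subject, relation, object) facts near an entity.

    Bounding the radius keeps the context both small and relevant,
    unlike feeding raw documents or top-k similarity chunks.
    """
    neighbourhood = nx.ego_graph(graph.to_undirected(), entity, radius=radius)
    return [
        f"{u} --{d['relation']}--> {v}"
        for u, v, d in graph.edges(data=True)
        if u in neighbourhood and v in neighbourhood
    ]

print(graph_context(G, "VendingMachine-7"))
```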
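Finally, on the output side, here is a minimal sketch of Pydantic-based validation, assuming a hypothetical RestockOrder schema derived from the domain ontology: malformed or ontology-violating output is caught and surfaced for repair instead of silently corrupting downstream state.

```python
from pydantic import BaseModel, Field, ValidationError

class RestockOrder(BaseModel):
    """Output schema; in practice it would be derived from the ontology."""
    machine_id: str
    supplier: str
    quantity: int = Field(gt=0)  # assume the ontology forbids empty orders

# A (fabricated) raw model output that violates the schema.
llm_output = '{"machine_id": "VendingMachine-7", "supplier": "Supplier-A", "quantity": 0}'

try:
    order = RestockOrder.model_validate_json(llm_output)
except ValidationError as exc:
    # The validation error can be fed back to the model for a retry,
    # turning an invalid action into an explicit repair step.
    print(exc.errors()[0]["msg"])  # e.g. "Input should be greater than 0"
```

The same ontology thus constrains both ends of the loop: it selects the subgraph the agent sees, and it defines the schema the agent's actions must satisfy.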