Get the Linkedin stats of Armand Ruiz and many LinkedIn Influencers by Taplio.
As the Director of AI Engineering at IBM, I lead the watsonx team, a global group of over 1,000 AI Engineers who work on cutting-edge Generative AI solutions for various industries. With more than 12 years of experience in AI, I have a passion for democratizing practical AI skills and empowering professionals to leverage AI to transform their business. Previously, I managed IBM's Data Science and AI Elite, delivering complex AI implementations for hundreds of clients. I also led the product management of IBM Watson Studio and Watson Machine Learning, an AI platform used by hundreds of customers and SaaS apps powered by AI. I hold a Master of Science in Telecommunications from Universitat Politècnica de Catalunya and speak four languages: Spanish, Catalan, French, and English. Connect with me!

✉️ JOIN MY NEWSLETTER: Every Saturday morning, you'll get 1 actionable tip to launch and grow your AI strategies in business in less than 4 minutes. → GO HERE: https://www.nocode.ai/
📧 Email: armand@nocode.ai
X: @armand_ruiz

"It is kind of fun to do the impossible"

All opinions are my own and don't represent my employer.
Today I'm attending LlamaCon, and Meta is coming with some incredible announcements once again. Most importantly, I’m so happy to see them build a real developer ecosystem around Llama. In today’s AI market, it’s not enough to just drop models on Hugging Face. To create real gravity, moat, and stickiness, you need platforms, tools, and community, and that’s exactly what Meta is doing. Here’s my take on the biggest announcements for developers, ranked by impact:

🔥 𝟭. 𝗟𝗹𝗮𝗺𝗮 𝗔𝗣𝗜 (𝗽𝗿𝗲𝘃𝗶𝗲𝘄)
A hosted API to compete directly with OpenAI, Anthropic, and Gemini. It provides:
- One-click API key creation
- Interactive playgrounds
- Fine-tuning and evaluation tools
- Full model portability (you own your models)
- OpenAI SDK compatibility
- Faster inference powered by Cerebras and Groq
This is a critical move to standardize developers on their APIs and drive platform growth. There's also a toolbox with a practical collection of tools and recipes to make building and customizing Llama applications easier.

🧠 𝟮. 𝗡𝗲𝘄 𝗟𝗹𝗮𝗺𝗮 𝟰 𝟴𝗕 𝗠𝗼𝗱𝗲𝗹 (𝗖𝗼𝗺𝗶𝗻𝗴 𝗦𝗼𝗼𝗻)
Meta is preparing an efficient, lightweight Llama 4 model with 8 billion parameters.

🛡️ 𝟯. 𝗡𝗲𝘄 𝗟𝗹𝗮𝗺𝗮 𝗣𝗿𝗼𝘁𝗲𝗰𝘁𝗶𝗼𝗻 𝗧𝗼𝗼𝗹𝘀
- Llama Guard 4: a 12B-parameter multimodal safety classifier, classifying risks across violence, privacy, IP, elections, and more.
- Prompt Guard 2: two models (86M and 22M) to detect prompt injections and jailbreaks, now faster, smaller, and more accurate.
Meta also announced the Llama Defenders Program, with a big focus on security and responsible open-source AI development.

It’s clear: open source is winning. Llama is no longer just a set of models; it's a full-stack AI platform with real developer momentum behind it.

Learn more here: https://lnkd.in/gr_jjZV4
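The OpenAI SDK compatibility point means migration can be mostly a configuration change: an existing OpenAI-style payload should work against the Llama API with a new base URL and model name. A minimal sketch, where the endpoint URL and model id are illustrative assumptions rather than real Llama API values:

```python
# Sketch of OpenAI-SDK-compatible usage. The endpoint URL and model id
# below are hypothetical placeholders, not documented Llama API values.

def build_chat_request(model: str, user_prompt: str,
                       system_prompt: str = "You are a helpful assistant.") -> dict:
    """Build an OpenAI-style chat-completions payload."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    }

payload = build_chat_request("llama-4-maverick", "Summarize MoE in one line.")

# With the OpenAI SDK installed, the call itself would look like:
#   from openai import OpenAI
#   client = OpenAI(base_url="https://api.llama.example/v1",  # hypothetical URL
#                   api_key="...")
#   resp = client.chat.completions.create(**payload)
```

Because the payload shape is unchanged, the same request dict can be pointed at OpenAI, the Llama API, or a self-hosted OpenAI-compatible server just by swapping `base_url` and `model`.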
If you still don’t get what AI Agents are all about, now’s the time to fix that. I created a free 10-day course to teach you the fundamentals. You won’t become an expert overnight, but you will walk away with a solid grasp of what they are, why they matter, and what’s behind all the hype. These are the topics I cover:

- 𝗗𝗮𝘆 𝟭: 𝗜𝗻𝘁𝗿𝗼𝗱𝘂𝗰𝘁𝗶𝗼𝗻 𝘁𝗼 𝗔𝗜 𝗔𝗴𝗲𝗻𝘁𝘀
An overview of what AI Agents are, how they differ from traditional AI systems, and why they are crucial for modern businesses.
- 𝗗𝗮𝘆 𝟮: 𝗥𝗲𝗮𝗹-𝗪𝗼𝗿𝗹𝗱 𝗔𝗜 𝗔𝗴𝗲𝗻𝘁 𝗨𝘀𝗲 𝗖𝗮𝘀𝗲𝘀
Showcasing practical, real-world examples of AI Agents at work: customer support automation, supply chain optimization, predictive maintenance, and more.
- 𝗗𝗮𝘆 𝟯: 𝗞𝗲𝘆 𝗖𝗼𝗺𝗽𝗼𝗻𝗲𝗻𝘁𝘀 𝗼𝗳 𝗮𝗻 𝗔𝗜 𝗔𝗴𝗲𝗻𝘁 𝗔𝗿𝗰𝗵𝗶𝘁𝗲𝗰𝘁𝘂𝗿𝗲
Understanding the foundational elements (perception, reasoning, memory, planning, and action) and how these components work in unison.
- 𝗗𝗮𝘆 𝟰: 𝗧𝗼𝗼𝗹𝘀 𝗮𝗻𝗱 𝗙𝗿𝗮𝗺𝗲𝘄𝗼𝗿𝗸𝘀 𝗳𝗼𝗿 𝗕𝘂𝗶𝗹𝗱𝗶𝗻𝗴 𝗔𝗜 𝗔𝗴𝗲𝗻𝘁𝘀
An introduction to popular libraries, platforms, and frameworks (e.g., CrewAI, LangGraph, LlamaIndex, Bee, ReAct frameworks) that simplify agent development.
- 𝗗𝗮𝘆 𝟱: 𝗜𝗻𝘁𝗲𝗴𝗿𝗮𝘁𝗶𝗻𝗴 𝗟𝗟𝗠𝘀 𝘄𝗶𝘁𝗵 𝗔𝗜 𝗔𝗴𝗲𝗻𝘁𝘀
Exploring how large language models like GPT can serve as the “brain” of your agent, enabling more natural interaction and richer context understanding.
- 𝗗𝗮𝘆 𝟲: 𝗠𝗲𝗺𝗼𝗿𝘆 𝗮𝗻𝗱 𝗖𝗼𝗻𝘁𝗲𝘅𝘁 𝗠𝗮𝗻𝗮𝗴𝗲𝗺𝗲𝗻𝘁
Diving into techniques for storing, retrieving, and updating context to make your AI Agents more reliable, adaptable, and accurate over time.
- 𝗗𝗮𝘆 𝟳: 𝗘𝘃𝗮𝗹𝘂𝗮𝘁𝗶𝗻𝗴 𝗮𝗻𝗱 𝗠𝗲𝗮𝘀𝘂𝗿𝗶𝗻𝗴 𝗔𝗴𝗲𝗻𝘁 𝗣𝗲𝗿𝗳𝗼𝗿𝗺𝗮𝗻𝗰𝗲
Key metrics and evaluation techniques for assessing your AI Agent’s accuracy, efficiency, and overall effectiveness.
- 𝗗𝗮𝘆 𝟴: 𝗧𝗵𝗲 𝗔𝗜 𝗔𝗴𝗲𝗻𝘁 𝗘𝗻𝗴𝗶𝗻𝗲𝗲𝗿’𝘀 𝗦𝗸𝗶𝗹𝗹 𝗦𝗲𝘁
What it takes to excel in this new professional domain: essential coding skills, knowledge of ML/LLM models, system design, and a strategic mindset.
- 𝗗𝗮𝘆 𝟵: 𝗘𝘁𝗵𝗶𝗰𝗮𝗹 𝗮𝗻𝗱 𝗥𝗲𝘀𝗽𝗼𝗻𝘀𝗶𝗯𝗹𝗲 𝗔𝗴𝗲𝗻𝘁 𝗗𝗲𝗽𝗹𝗼𝘆𝗺𝗲𝗻𝘁
Examining ethical considerations, privacy concerns, and fairness to ensure responsible creation and use of AI Agents.
- 𝗗𝗮𝘆 𝟭𝟬: 𝗙𝘂𝘁𝘂𝗿𝗲 𝗧𝗿𝗲𝗻𝗱𝘀 𝗶𝗻 𝗔𝗜 𝗔𝗴𝗲𝗻𝘁𝘀
A look ahead at emerging trends, including more autonomous multi-agent systems, deeper integrations, and the evolving regulatory landscape.

Sign up for FREE here: https://lnkd.in/grh2wADd
Introducing the book 𝐀𝐈 𝐕𝐚𝐥𝐮𝐞 𝐂𝐫𝐞𝐚𝐭𝐨𝐫𝐬, and for a limited time, you can get a digital copy for 𝐅𝐑𝐄𝐄!

First, what's the book about? It is a practical guide for turning AI into real business value:
✅ Real-world success stories
✅ Ethical and strategic insights
✅ Actionable steps to lead with AI

It’s written for all levels, but especially for 𝐛𝐮𝐬𝐢𝐧𝐞𝐬𝐬 𝐥𝐞𝐚𝐝𝐞𝐫𝐬 ready to seize this generative AI moment. It's written by the visionaries behind IBM's enterprise strategy who are helping businesses master GenAI:
- Rob Thomas, SVP Software and Chief Commercial Officer at IBM
- Paul Zikopoulos, VP Skills and Enablement at IBM
- Kate Soule, Director of Technical Product Management for Granite Models at IBM Research

You can get the hardcover in May 2025 here: https://lnkd.in/g9qhPS54 and...
🎁 IBM has secured licensing rights so you can download the PDF for 𝐅𝐑𝐄𝐄 right here: https://lnkd.in/gsUzs39n

Enjoy!
After years working in AI, here's what I’ve seen work and what doesn’t in enterprises making AI real. If you want to move beyond pilots and into production:

𝟭/ 𝗔𝗜 𝗶𝘀 𝗮 𝘀𝘆𝘀𝘁𝗲𝗺𝘀 𝗽𝗿𝗼𝗯𝗹𝗲𝗺, 𝗻𝗼𝘁 𝗮 𝗺𝗼𝗱𝗲𝗹 𝗽𝗿𝗼𝗯𝗹𝗲𝗺
The most successful deployments aren’t about plugging in the latest model. They’re about orchestrating models inside secure, privacy-preserving workflows, with clear ownership and deterministic behavior. Build compound systems:
- Think orchestration layers, not chat interfaces
- Handle PII internally; only send safe inputs to models
- Keep business logic and computation on your end

𝟮/ 𝗗𝗮𝘁𝗮 𝗽𝗿𝗶𝘃𝗮𝗰𝘆 𝗶𝘀𝗻’𝘁 𝗼𝗽𝘁𝗶𝗼𝗻𝗮𝗹, 𝗶𝘁’𝘀 𝘁𝗵𝗲 𝗳𝗼𝘂𝗻𝗱𝗮𝘁𝗶𝗼𝗻
No matter how good the model is, if you don’t design for privacy from day one, you’ll stall out before production. You need to build systems where nothing sensitive ever touches the LLM, especially if it is a 3rd-party API call. That’s the bar.
✅ Local pre-processing
✅ Sensitive-data detection using internal SLMs
✅ The model only sees what it needs, never raw data

𝟯/ 𝗠𝗼𝗱𝗲𝗹-𝗮𝗴𝗻𝗼𝘀𝘁𝗶𝗰 = 𝗹𝗼𝗻𝗴-𝘁𝗲𝗿𝗺 𝗹𝗲𝘃𝗲𝗿𝗮𝗴𝗲
While leading AI platform strategy, we always aimed to be multi-model and multi-cloud. Why? Because the performance gap between top models is closing, and pricing, licensing, and latency really matter.

𝟰/ 𝗠𝘂𝗹𝘁𝗶-𝗮𝗴𝗲𝗻𝘁 𝘀𝘆𝘀𝘁𝗲𝗺𝘀 𝗮𝗿𝗲 𝘄𝗵𝗲𝗿𝗲 𝘁𝗵𝗲 𝗺𝗮𝗴𝗶𝗰 𝗵𝗮𝗽𝗽𝗲𝗻𝘀
We’re seeing real traction with agentic designs. I’ve seen teams deploy internal AI agents that:
- Extract, validate, and match data
- Trigger downstream actions
- Work in autonomous flows, with humans in the loop only at the end
This isn’t science fiction. It’s happening now for some real workflows.

𝟱/ 𝗟𝗮𝘁𝗲𝗻𝗰𝘆 𝗮𝗻𝗱 𝗰𝗼𝘀𝘁 𝗮𝗿𝗲 𝗲𝗻𝘁𝗲𝗿𝗽𝗿𝗶𝘀𝗲 𝗸𝗶𝗹𝗹𝗲𝗿𝘀
You can have the smartest model in the world, but if it takes too long or costs too much, it won’t make it past your CFO or your ops team. I always advise teams to:
- Benchmark for latency and accuracy
- Monitor token costs like cloud spend
- Stay lean, especially in customer-facing apps

Don’t get distracted by the model-of-the-month.
The real differentiator? How you integrate AI into your systems.
AI skills aren’t just for engineers. They’re for everyone. I like LinkedIn’s AI Upskilling Framework and their upskilling pyramid. From leaders to developers, the path to value looks like this:

𝟭/ 𝗨𝗻𝗱𝗲𝗿𝘀𝘁𝗮𝗻𝗱𝗶𝗻𝗴: AI literacy, generative AI, and responsible use
𝟮/ 𝗔𝗽𝗽𝗹𝘆𝗶𝗻𝗴: Prompt engineering, strategy, copilots, AI productivity
𝟯/ 𝗕𝘂𝗶𝗹𝗱𝗶𝗻𝗴: No/low-code apps, GPTs, APIs, and databases
𝟰/ 𝗧𝗿𝗮𝗶𝗻𝗶𝗻𝗴 & 𝗠𝗮𝗶𝗻𝘁𝗮𝗶𝗻𝗶𝗻𝗴 𝗠𝗼𝗱𝗲𝗹𝘀: Adapting models, training supervised models, fine-tuning. (I have to be honest, this one is a bit off; not many people will do it.)
𝟱/ 𝗗𝗲𝗲𝗽 𝗦𝗽𝗲𝗰𝗶𝗮𝗹𝗶𝘇𝗮𝘁𝗶𝗼𝗻: AIOps, MLOps, AI security, cloud AI

Every role has a place in this pyramid. So where are YOU on the pyramid?
🚀 New in watsonx.ai: Synthetic Unstructured Data Generation

Tuning foundation models just got easier. You can now programmatically generate high-quality, domain-specific synthetic text datasets using your own seed data, directly via the watsonx.ai API. Why this matters:
1/ Realistic training data without real data exposure
2/ Accelerated tuning and eval cycles for your LLMs
3/ Pipelines for tool use, text-to-SQL, and Q&A generation

This is an exciting new capability to enable scalable GenAI development inside enterprises, especially where real data is hard to use due to privacy or availability.
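The seed-data idea can be sketched generically: template a few real examples into a generation prompt and collect model outputs as new training pairs. This is NOT the watsonx.ai API; the `call_llm` name, the seed records, and the field names are all hypothetical illustrations:

```python
# Generic seed-driven synthetic data generation sketch (not the watsonx.ai
# API). A few real examples are templated into a prompt that asks the model
# for new, distinct pairs in the same style.

SEEDS = [
    {"question": "What is our refund window?", "answer": "30 days from delivery."},
    {"question": "Do you ship internationally?", "answer": "Yes, to 40+ countries."},
]

def build_generation_prompt(seeds, n_new=5):
    """Format seed Q&A pairs as few-shot examples and request n_new more."""
    shots = "\n".join(f"Q: {s['question']}\nA: {s['answer']}" for s in seeds)
    return (
        "Here are example Q&A pairs from a customer-support domain:\n"
        f"{shots}\n\n"
        f"Generate {n_new} new, distinct Q&A pairs in the same style, "
        "as JSON objects with 'question' and 'answer' fields."
    )

prompt = build_generation_prompt(SEEDS, n_new=3)
# synthetic_pairs = json.loads(call_llm(prompt))  # hypothetical completion call
```

A real pipeline would add deduplication and quality filtering on the generated pairs before they feed a tuning job, since synthetic data inherits the seed distribution only loosely.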
🚨 Struggling to get great results when switching from OpenAI to Llama? You’re not alone. Most teams face degraded performance when migrating prompts from proprietary models to self-hosted LLMs like Llama. I've also heard that from customers upgrading from Llama 2 or Llama 3 to the latest versions. Why? Because what works for one model often fails silently on another.

Enter llama-prompt-ops, a powerful new Python package that automatically adapts and optimizes prompts for Llama models. Here's why I think it is a very big deal:

𝟭/ 𝗣𝗿𝗼𝗺𝗽𝘁 𝗠𝗶𝗴𝗿𝗮𝘁𝗶𝗼𝗻: Easily convert prompts from OpenAI/GPT to Llama with high reliability
𝟮/ 𝗗𝗲𝘃𝗲𝗹𝗼𝗽𝗲𝗿 𝗙𝗿𝗶𝗰𝘁𝗶𝗼𝗻: Say goodbye to trial-and-error across Llama versions
𝟯/ 𝗢𝗽𝘁𝗶𝗺𝗶𝘇𝗮𝘁𝗶𝗼𝗻 𝗖𝗵𝗮𝗹𝗹𝗲𝗻𝗴𝗲: Manual tuning is slow, inconsistent, and hard to scale; this tool makes it systematic
𝟰/ 𝗠𝗲𝗮𝘀𝘂𝗿𝗮𝗯𝗹𝗲 𝗥𝗲𝘀𝘂𝗹𝘁𝘀: Built-in evaluation lets you see what actually improves

How it works: Other LLM prompt ➝ llama-prompt-ops ➝ Llama-optimized prompt

The benchmarks are impressive, demonstrating substantial improvements over baseline prompts across various model sizes. It’s the kind of tooling that makes migrating to open models actually practical and helps your devs focus on building, not babysitting prompts.

Now available on PyPI as llama-prompt-ops, with the repo here: https://lnkd.in/gsQDpJC7
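The "measurable results" idea, scoring a prompt against a small labeled set so improvements are visible rather than guessed, can be sketched generically. This is not the llama-prompt-ops API; `query_model` is a hypothetical model call, simulated below so the harness itself runs:

```python
# Generic prompt-scoring harness (not the llama-prompt-ops API). Score a
# prompt template on a tiny labeled eval set; run it before and after a
# migration to see whether the new prompt actually improved.

EVAL_SET = [
    {"input": "2+2", "expected": "4"},
    {"input": "capital of France", "expected": "Paris"},
]

def score_prompt(prompt_template: str, query_model, eval_set) -> float:
    """Fraction of eval examples the model answers exactly right."""
    hits = 0
    for ex in eval_set:
        answer = query_model(prompt_template.format(input=ex["input"]))
        hits += answer.strip() == ex["expected"]
    return hits / len(eval_set)

# Simulated model for demonstration; a real harness would call Llama here.
fake_answers = {"2+2": "4", "capital of France": "Paris"}
def fake_model(prompt):
    return fake_answers.get(prompt.split(": ")[-1], "")

baseline = score_prompt("Answer briefly: {input}", fake_model, EVAL_SET)
```

Exact-match scoring is the crudest possible metric; real migrations would also track latency and cost, and use fuzzier scoring (embeddings, LLM-as-judge) for free-form outputs.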
🔥 The new Granite 3.3 is here. And it's our most multimodal release yet. This release introduces three major updates:

𝟭/ 𝗚𝗿𝗮𝗻𝗶𝘁𝗲 𝗦𝗽𝗲𝗲𝗰𝗵 𝟯.𝟯 𝟴𝗕 - Our first speech-to-text model for enterprise use:
- Accurate ASR + AST across 7+ languages
- Outperforms Whisper, GPT-4o, and Gemini 2.0 Flash on key benchmarks
- No 30s limit: supports long-form audio

𝟮/ 𝗚𝗿𝗮𝗻𝗶𝘁𝗲 𝟯.𝟯 𝗜𝗻𝘀𝘁𝗿𝘂𝗰𝘁
- New FIM (fill-in-the-middle) capabilities, great for code repair
- Enhanced reasoning via GRPO: 70%+ on MATH500
- CoT toggle: deeper reasoning only when needed

𝟯/ 𝗥𝗔𝗚-𝗼𝗽𝘁𝗶𝗺𝗶𝘇𝗲𝗱 𝗟𝗼𝗥𝗔 𝗮𝗱𝗮𝗽𝘁𝗲𝗿𝘀
- Query rewriting, hallucination detection, citations, answerability, and uncertainty scoring
- Activated LoRAs (aLoRAs) = adapter swapping without recompute

All under the Apache 2.0 license 😘

Learn all the details here: https://lnkd.in/gX-XQ3ZX

Try it out!
• Granite Playground: https://lnkd.in/gY93aw6p
• Available on watsonx.ai, LM Studio, Ollama, Replicate
• All models on Hugging Face: https://lnkd.in/gisdvn8w

Can’t wait to see what you build. Next stop: Granite 4.0. The model is in training: faster, longer context, more efficient. Coming later this quarter. Still small, practical, and cost-effective. 💸
This isn’t just an AI demo. It’s live. And it's what the future of every customer interaction could look like.

Every year, the Masters feels like a tradition frozen in time. But behind the scenes, it's a glimpse of the future. This year, over 20,000 golf shots at Augusta National are captured, analyzed, and narrated by AI within minutes. Not just stat lines, but real insights, like:
🟢 “Shots from this location have an 82% chance of birdie.”
🟠 “Hole 14 is playing the toughest today, with 25% bogeys.”
🔮 “No. 9 is projected to be the third most difficult hole today.”

That’s Hole Insights 2.0, built on watsonx and tuned on 180,000+ historical shots and expert input, including past Masters caddies. AI narration returns in English and Spanish, now also available through the Masters app on Apple Vision Pro, complete with 3D course views and multi-stream live play.

It’s easy to dismiss this as "just sports tech." But I see something bigger here. This is what happens when AI is tuned, grounded, and embedded into a real-world experience:
- Real-time context
- Predictive intelligence
- Multimodal delivery (text, voice, AR)
- Personalization at scale

For any business leader thinking about AI transformation: don’t just look at dashboards and copilots. Look at how we partnered with the Masters to turn data into decisions, and decisions into experiences. AI is not a plug-in. It enables full reinvention.
The most secure and scalable GenAI systems won't run in the cloud; they'll run on mainframes. They run on IBM Z... meet the new Z17.

AI that runs: Quietly. Reliably. Relentlessly. It's being deployed inside the transactional heart of the Fortune 100. AI on Z17 isn’t just about speed. It’s about sovereignty, security, and total control. Try getting that from a GPU you rented in someone else's cloud.

These are some of the technical specs:
1/ On-chip GenAI: Z17’s Spyre Accelerator runs watsonx for Z natively, eliminating GPU/cloud dependencies for GenAI.
2/ AI-optimized Db2: Telum II AI and vector DB support enable hybrid structured/unstructured queries via enhanced SQL Data Insights.
3/ AI for Dev/Test: watsonx Code Assistant expands to more languages; Test Accelerator for Z adds NL-driven test generation and AI-powered refactoring.
4/ Smarter Ops: Z Operations Unite and IntelliMagic Vision use AI + OpenTelemetry for 9x faster incident resolution and Spyre I/O insights.
5/ Secure data sync: Integrated watsonx.data + Data Gate, with Vault-based secrets management, ensures real-time, secure hybrid data access.

Learn more in yesterday's press release: https://lnkd.in/gmgcK7tr
The operating principles of Enterprise AI:

1/ Enterprise AI won’t be centralized; it’ll be a choreography of agents across your stack.
2/ AI adoption won’t fail because of models. It’ll fail because of interoperability.
3/ MCPs and Agent-to-Agent standards will become the TCP/IP of enterprise AI.
4/ Agent-to-agent coordination is the enterprise glue of the AI era.
5/ Orchestration will shift from rule-based to context-based: dynamic, adaptive, truly intelligent.
6/ Agent networks will decide who leads based on intent, not hierarchy.
7/ Salesforce, Workday, Box... each will own its workflow, but not the full customer journey.
8/ The monolith is dead. Long live the mesh of intelligent agents.
9/ Agents are not products. They’re participants in workflows.
10/ Composable AI is like Lego for workflows. You bring your blocks. The system will build itself.
11/ AI is no longer a layer; it’s the fabric stitching the enterprise together.
12/ AgentOps will become the new DevOps.
13/ You won’t debug code; you’ll debug conversations between agents.
14/ Legacy IT is already struggling. Agent-based architectures will widen the gap.
15/ Building an agent is easy. Getting 50 to work together is not.
16/ Enterprise IT isn’t ready. Most data isn’t even accessible, let alone AI-ready.
17/ Agent networks will force a reckoning with your data infrastructure.
18/ Horizontal agent orchestration will emerge when no clear system owns the workflow.
19/ Agent interactions will need the same auditability and traceability as financial systems.
20/ You’ll need governance not just over data, but over agent behavior.
21/ How your agents reason will be subject to compliance.
22/ An agent is only as trustworthy as the data it’s trained on.
23/ The battle for AI supremacy will be won in orchestration, not inference.
24/ Vertical agents will dominate first. Horizontal orchestration will follow.
AI Agents 101 - Let's learn the basics:

𝗪𝗵𝗮𝘁 𝗜𝘀 𝗮𝗻 𝗔𝗴𝗲𝗻𝘁?
An AI agent is a system designed to reason through complex problems, create actionable plans, and execute these plans using a suite of tools. It follows this continuous cycle:
1/ Think – The agent processes available data and context.
2/ Plan – It decides on a strategy to achieve a goal or answer a question.
3/ Act – It executes the chosen plan (e.g., makes an API call, retrieves data, or interfaces with a user).
4/ Reflect – It evaluates the outcome of its actions, checks for errors or new insights, and uses those reflections to inform the next iteration of thought.
These four steps form a feedback loop that enables agents to adapt their approach as new information arrives.

𝗧𝗵𝗲 𝗥𝗼𝗹𝗲 𝗼𝗳 𝗟𝗟𝗠𝘀
Under the hood, agents leverage LLMs as their brains. These LLMs can be:
1/ Open: Harnessing publicly available models or open-source alternatives.
2/ Small: Optimized or compressed versions that can run on local hardware more efficiently.
3/ Trained & Tuned Anywhere: Thanks to modern tooling, LLMs can now be developed, fine-tuned, and deployed in various environments, from the public cloud to on-premises servers.
LLMs provide the core reasoning, comprehension, and language generation abilities that agents require.

𝗛𝗼𝘄 𝗔𝗜 𝗔𝗴𝗲𝗻𝘁𝘀 𝗖𝗼𝗺𝗽𝗹𝗲𝘁𝗲 𝗧𝗮𝘀𝗸𝘀
Putting everything together, the workflow of an AI agent system might look like this:
1/ A user or system triggers a request.
2/ The LLM-based agent uses an internal reasoning loop (think → plan → act → reflect) to interpret the request and plan next steps.
3/ The agent makes calls to GenAI APIs or enterprise data/services (via GraphQL, OpenAPI, etc.) to gather more information.
4/ The agent processes the gathered data and formulates a response or an action.
5/ It executes that action or generates a final output to the user or system.
6/ Logging and monitoring systems record metrics (accuracy, compliance, fairness, relevance) so that developers and administrators can refine the agent’s behavior. The business world is not yet ready to fully adopt agents; they are still in the very early experimental stage, and that's why everyone is implementing RAG like crazy, as it is a safer bet. But believe me, AI agents are where it's at, and at the current pace of innovation, we will soon start collaborating with AI agents instead of humans for some specific tasks. ____ Please repost it ♻️ and follow me, Armand Ruiz, for more similar posts.
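The think → plan → act → reflect cycle described above can be sketched as a minimal loop. The "LLM" and the tool are scripted stand-ins so the sketch runs on its own; a real agent would put a model behind each step and route tool calls through real APIs:

```python
# Minimal, illustrative think → plan → act → reflect loop. The LLM and the
# tools below are scripted stand-ins, not real model or API calls.

def run_agent(goal, llm, tools, max_iters=3):
    context = [f"goal: {goal}"]
    result = None
    for _ in range(max_iters):
        thought = llm("think", context)        # 1/ Think: digest the context
        tool_name, arg = llm("plan", context)  # 2/ Plan: choose a tool + argument
        result = tools[tool_name](arg)         # 3/ Act: execute the plan
        context += [thought, f"result: {result}"]
        if llm("reflect", context):            # 4/ Reflect: stop if goal is met
            break
    return result

# Scripted stand-ins for demonstration only.
def scripted_llm(step, context):
    if step == "think":
        return "I should look this up."
    if step == "plan":
        return ("search", "Llama 4 release date")
    return True  # reflect: satisfied after one pass

tools = {"search": lambda query: f"search results for '{query}'"}
out = run_agent("find Llama 4 news", scripted_llm, tools)
```

The loop structure is the point: because each iteration appends thoughts and results to the shared context, the reflect step can send the agent back around with more information, which is exactly the feedback behavior described above.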
🛰 Introducing 𝗧𝗲𝗿𝗿𝗮𝗠𝗶𝗻𝗱, a next-generation foundation model for Earth observation, co-developed by IBM and the European Space Agency (ESA), now open-sourced and available on Hugging Face.

Unlike typical geospatial models, 𝗧𝗲𝗿𝗿𝗮𝗠𝗶𝗻𝗱 understands Earth using 9 complementary data types: from satellite imagery and radar to elevation, vegetation, land use, and even natural language. Its unique dual-scale architecture (pixel + token) enables intuitive, AI-native comprehension of our planet.

This “any-to-any” generative model can predict or simulate any of the input modalities, for example generating radar data from optical images, or land cover classifications from elevation maps. Researchers call this breakthrough "Thinking in Modalities" (TiM), a new tuning method that boosts accuracy by generating synthetic training signals internally.

TerraMind was trained on 9 million globally distributed samples and evaluated across key Earth science benchmarks like PANGAEA, outperforming 12 other foundation models with up to 8% accuracy gains in tasks like flood mapping, land cover classification, and vegetation monitoring.

🌍 Built with open science in mind, TerraMind comes with:
📖 Paper published on arXiv: https://lnkd.in/gfTB4dvp
🤗 Models available on Hugging Face: https://lnkd.in/gP76Ad3G
🌐 Blog with details explained here: https://lnkd.in/gwjEZZJT

Let’s unlock the full potential of satellite data and help the planet. I look forward to seeing what you build with it.
Excited to announce that Meta’s newest models, Llama 4 Scout and Llama 4 Maverick, are now available on the IBM watsonx platform. Starting today, our clients can tap into the power of Meta’s most advanced open models yet, with the added trust, governance, and scalability they’ve come to expect from watsonx. Live today in our Dallas data center, and rolling out to Frankfurt and London tomorrow. Try it here: watsonx.ai
Very excited to announce a partnership with Credo AI and a new addition to the watsonx.governance family: meet the Compliance Accelerator! As AI regulations and frameworks grow at an unprecedented pace, enterprises need a simple, repeatable way to keep every project on the right side of the law. The 𝘄𝗮𝘁𝘀𝗼𝗻𝘅.𝗴𝗼𝘃𝗲𝗿𝗻𝗮𝗻𝗰𝗲 𝗖𝗼𝗺𝗽𝗹𝗶𝗮𝗻𝗰𝗲 𝗔𝗰𝗰𝗲𝗹𝗲𝗿𝗮𝘁𝗼𝗿 brings up-to-date regulatory content straight into your governance console, so both use case owners and compliance officers can work from the same playbook.

𝗧𝗵𝗲 𝗖𝗵𝗮𝗹𝗹𝗲𝗻𝗴𝗲
- Rapidly evolving rules: From the EU AI Act to ISO 42001 and NIST AI RMF, global frameworks are multiplying and maturing.
- Distributed responsibilities: Use case owners must assess each AI deployment against dozens of obligations, while compliance teams need enterprise-wide visibility into overall risk posture.

𝗢𝘂𝗿 𝗦𝗼𝗹𝘂𝘁𝗶𝗼𝗻 & 𝗩𝗮𝗹𝘂𝗲
By integrating our Compliance Accelerator as a data-as-a-service add-on to watsonx.governance, you can now:
1/ Map AI use cases to regulations such as the EU AI Act, ISO 42001, and NIST AI RMF
2/ Boost efficiency with curated research and automated reporting workflows
3/ Strengthen assessments by giving both business owners and compliance officers the same rich, contextual guidance
It also gets regular updates to stay current as global regulations evolve.

Empower your teams to navigate the expanding world of AI rules with confidence. Learn more here: https://lnkd.in/gCCzf3PR
Most people vastly underestimate what AI can do, not because the tech isn’t ready, but because most aren’t using it.

There’s this illusion that everyone is running at the same speed. We see viral demos, wild headlines, and assume AI is now “everywhere.” But the truth is: the majority of individuals, teams, and even companies are still on the sidelines. The real gap right now isn’t between the tools that exist and those that don’t; it’s between the people who are putting the tools to work and those who aren’t. And that gap is widening.

See the story below: a single person can build tools that used to take an entire product team. You can automate tedious workflows, stand up agents, tune models, even spin up lightweight personal SaaS just for your family or friends. The infra is there. The interfaces are finally usable. And the leverage is unprecedented. But only a tiny fraction of companies and people are doing it.

We’re seeing a new kind of builder emerge, quietly but unmistakably. Not just developers. Not just founders. But power users. People embracing AI not as a novelty, but as a second brain, a co-pilot, a full-stack productivity engine. They’re not posting about it. They’re not raising money for it. They’re just... getting things done with it. And the impact is insane. More output. Fewer bottlenecks. Faster iteration. Less waste. These people aren’t waiting for enterprise tools to catch up.

The real competitive edge today isn’t about having the best AI; it’s about being the one who actually uses it. In the meantime, most orgs are stuck. Security concerns. IT policies. Change management. Risk aversion. That’s understandable, but it also creates a massive opening. If you’re even a little proactive, a little bold, you can be 100x more effective than someone with the same role who’s still copying and pasting into spreadsheets or toggling through legacy tools.
This is especially true in fields like healthcare, finance, logistics, education, where the cost of inefficiency has always been high, but the tools to fix it were too complex or expensive. AI changes that. You can now build solutions that were unthinkable a year ago. But only if you’re looking for them. The real unlock isn’t the tech, it’s the mindset. It’s realizing you no longer need permission to build. Your workflow is yours to redesign. The excuses of the past - too hard, slow or expensive - don’t apply anymore. We’re still early. Most people haven’t made the shift. But those who have? They’re not waiting. They’re compounding. And in five years, the gap between those who embraced this era and those who didn’t will look like a canyon. If you’re even a little curious, now is the time to lean in. Because the future isn’t coming, it’s already here.
The future of AI is Agentic. Let's learn the basics:

𝗗𝗲𝗳𝗶𝗻𝗶𝘁𝗶𝗼𝗻
An AI agent is a system designed to reason through complex problems, create actionable plans, and execute these plans using a suite of tools. These agents exhibit advanced reasoning capabilities, memory retention, and task execution abilities.

𝗖𝗼𝗺𝗽𝗼𝗻𝗲𝗻𝘁𝘀 𝗼𝗳 𝗔𝗜 𝗔𝗴𝗲𝗻𝘁𝘀
1. Agent Core: The central processing unit that integrates all functionalities.
2. Memory Module: Stores and retrieves information to maintain context and continuity over time.
3. Tools: External resources and APIs the agent can use to perform specific tasks.
4. Planning Module: Analyzes problems and devises strategies to solve them.

𝗖𝗮𝗽𝗮𝗯𝗶𝗹𝗶𝘁𝗶𝗲𝘀
1. Advanced Problem Solving: AI agents can plan and execute complex tasks, such as generating project plans, writing code, running benchmarks, and creating summaries.
2. Self-Reflection and Improvement: AI agents can analyze their own output, identify problems, and provide constructive feedback. By incorporating this feedback and repeating the criticism/rewrite process, agents can continually improve their performance across various tasks, including code production, text writing, and answering questions.
3. Tool Utilization: AI agents can use tools to evaluate their output, such as running unit tests on code to check for correctness or searching the web to verify text accuracy. This allows them to reflect on errors and propose improvements.
4. Collaborative Multi-Agent Framework: Implementing a multi-agent framework, where one agent generates outputs and another provides constructive criticism, leads to enhanced performance through iterative feedback and discussion.

The business world is not yet ready to fully adopt agents; they are still in the very early experimental stage, and that's why everyone is implementing RAG like crazy, as it is a safer bet.
But believe me, AI agents are where it's at—and at the current pace of innovation, we will soon start collaborating with AI agents instead of humans for some specific tasks. ____ Please repost it ♻️ and follow me, Armand Ruiz, for more similar posts.
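The generator/critic pattern from the "Collaborative Multi-Agent Framework" capability above can be sketched as a small refinement loop. Both agents here are scripted stand-ins purely for illustration; in practice each would be an LLM call with its own prompt:

```python
# Sketch of the generator/critic multi-agent pattern: one agent drafts,
# a second critiques, and the draft is revised until the critic passes it.
# Both "agents" are scripted stand-ins, not real model calls.

def refine(task, generator, critic, max_rounds=3):
    draft = generator(task, feedback=None)
    for _ in range(max_rounds):
        feedback = critic(task, draft)
        if feedback is None:  # critic is satisfied
            return draft
        draft = generator(task, feedback=feedback)
    return draft

# Scripted agents for demonstration.
def toy_generator(task, feedback=None):
    return f"{task} (revised)" if feedback else f"{task} (draft)"

def toy_critic(task, draft):
    return "too rough" if draft.endswith("(draft)") else None

final = refine("summary of Q3 results", toy_generator, toy_critic)
```

The `max_rounds` cap matters in real systems: without it, a strict critic and a weak generator can loop indefinitely, burning tokens on every round.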
Heard of MCP? Well, meet ACP! 😅 Our talented IBM Research team is tackling AI agent-to-agent communication challenges, and we've just published our initial draft of ACP (Agent Communication Protocol).

𝗪𝗵𝗮𝘁 𝗶𝘀 𝗔𝗖𝗣?
ACP standardizes agent interactions, enabling automation, collaboration, UI integration, and developer tools. Instead of enforcing strict rules upfront, ACP prioritizes practical features, ensuring adoption before formal standardization.

𝗠𝗼𝘁𝗶𝘃𝗮𝘁𝗶𝗼𝗻
Agent systems often face integration issues due to varied communication standards. ACP addresses this by streamlining interactions for agents that process natural language and rely on external models, fostering easier integration and collaboration.

𝗪𝗵𝘆 𝗦𝘁𝗮𝗻𝗱𝗮𝗿𝗱𝗶𝘇𝗮𝘁𝗶𝗼𝗻 𝗠𝗮𝘁𝘁𝗲𝗿𝘀
Standardization ensures reliable and widely adopted features. ACP evolves based on real-world use, offering:
1/ Interoperability: Smooth interaction between agents and systems.
2/ Simplified Development: Less complexity, more innovation.
3/ Efficiency: Reduced redundancy through reusable solutions.

ACP also enhances agent-specific capabilities, such as:
- Natural Language Flexibility: Optimized for handling ambiguous language.
- Model Dependency Management: Simplifies external model integration and resource management.

I invite you to join the discussion here or on GitHub about the ideas we're implementing: https://lnkd.in/gYkbbK5Y. Link to the initial draft: https://lnkd.in/gMeWegfb
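To make the interoperability point concrete: a standardized agent-to-agent message is essentially an agreed envelope that any agent can parse. None of the field names below come from the ACP draft (the GitHub spec defines the actual schema); this is purely an illustration of why a shared wire format helps:

```python
import json

# Purely illustrative agent-to-agent message envelope. These field names
# are NOT taken from the ACP draft; they just show the shape of the idea.

def make_message(sender, recipient, intent, body):
    return {
        "sender": sender,
        "recipient": recipient,
        "intent": intent,   # e.g. "request", "inform", "delegate"
        "body": body,       # natural-language payload
    }

msg = make_message("billing-agent", "crm-agent", "request",
                   "Fetch the last three invoices for account 42.")
wire = json.dumps(msg)  # the interoperable on-the-wire form
```

Once every agent speaks the same envelope, routing, logging, and auditability (who asked whom for what, and why) fall out of the protocol instead of being rebuilt per integration, which is the standardization argument made above.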
New job titles are emerging: Agent Architects and Agent Orchestrators Their job? Design and optimize the network of AI workers that run your company. Just like cloud architects reshaped infrastructure, these new roles will redefine how companies operate in the era of AI-native workflows. The org chart of the future won’t just have humans. It’ll have agents. And someone needs to be in charge of them.