
Taplio

Paul Iusztin's LinkedIn Analytics

Get the LinkedIn stats of Paul Iusztin and many other LinkedIn influencers with Taplio.


Paul Iusztin


I am a senior machine learning engineer and contractor with 6+ years of experience. I design and implement modular, scalable, and production-ready ML systems for startups worldwide. My central mission is to build data-intensive AI/ML products that serve the world.

Since training my first neural network in 2017, I have had two passions that fuel my mission:
→ Designing and implementing production AI/ML systems using MLOps best practices.
→ Teaching people about the process.

I currently develop production-ready Deep Learning products at Metaphysic, a leading GenAI platform. In the past, I built Computer Vision and MLOps solutions for CoreAI, Everseen, and Continental.

I am also the Founder of Decoding ML, a channel for battle-tested content on learning how to design, code, and deploy production-grade ML and MLOps systems. I write articles and posts each week on:
- LinkedIn: 29k+ followers
- Medium: 2.5k+ followers ~ https://medium.com/@pauliusztin
- Substack (newsletter): 6k+ followers ~ https://decodingml.substack.com/

If you want to learn how to build an end-to-end production-ready LLM & RAG system using MLOps best practices, you can take Decoding ML's self-guided free course:
→ LLM Twin Course: Building Your Production-Ready AI Replica ~ https://github.com/decodingml/llm-twin-course

If you need machine learning solutions for your business, let's discuss! Only open to full remote positions as a contractor.

Contact:
Phone: +40 732 509 516
Email: p.b.iusztin@gmail.com
Decoding ML: https://linktr.ee/decodingml
Personal site & socials: https://www.pauliusztin.me/

Check out Paul Iusztin's verified LinkedIn stats (last 30 days)

Followers: 54,582
Posts: 20
Engagements: 5,858
Likes: 4,809

What is Paul talking about?

AI (ranked by posting frequency and engagement)

Who is engaging with Paul

Banias Baabe
Samanth Koduru
Jason Quick
Piotr Malicki
Alexandru Razvant
Saba Hesaraki
Alex Vesa
Tales Marra
Ashok Kumar Murugesan
Daniel Svonava
Silvestre Pérez O.
Priyanka Kamath
Hendy Fergus Atheri Hura
ORLANDO MONTENEGRO REYES
Sujoy Chabri
Maria Vechtomova
Achutha Subhash
Jai Singh
Ankit Srivastava
Joshua Rio-Ross
Ajeet Singh
Prashant K Dhingra
Mattheus Chediak
Ray Florez
Anirban Datta
Rutuja Surve
Kaan Kabalak
Abby Morgan
Raphaël Hoogvliets
zahra sadeghi
Li Yin
Ali Alemi Matin Pour
Bharat Raghunathan
bùi xuân hùng
Victor Robles
Shamal De Silva
Leon Jose
Koyelia Ghosh Roy
Aniruddh Durga Naga Gilakamsetty
Pascal Biese
Diego Marinho
Diego Miranda
Thiago Costa
Bruna Lemberck

Paul Iusztin's Best Posts (last 30 days)



Fine-tuning isn't hard. Here's where most pipelines fall apart: integrating it into a full LLM system.

So here's how we architected our training pipeline:

Inputs and outputs
The training pipeline has one job:
→ Input:
• A dataset from the data registry
• A base model from the model registry
→ Output:
• A fine-tuned model registered in a model registry and ready for deployment

In our case:
• Base: Llama 3.1 8B Instruct
• Dataset: Custom summarization data generated from web documents
• Output: A specialized model that summarizes web content

Pipeline steps
1. Load base model → apply LoRA adapters
2. Load dataset → format using Alpaca-style instructions
3. Fine-tune with Unsloth AI on T4 GPUs (via Colab)
4. Track training + eval metrics with Comet
5. If performance is good → push to Hugging Face model registry
6. If not → iterate with new data or hyperparameters

Most research happens in notebooks (and that's okay). So we kept our training pipeline in a Jupyter Notebook on Colab. Why?
→ Let researchers feel at home
→ No SSH friction
→ Visualize results fast
→ Enable rapid iteration
→ Plug into the rest of the system via registries

Just because it's manual doesn't mean it's isolated. Here's how it connects:
• Data registry: feeds in the right fine-tuning set
• Model registry: stores the fine-tuned weights
• Inference service: serves the fine-tuned model solely through the model registry
• Eval tracker: logs metrics + compares runs in real time

The notebook is completely decoupled from the rest of the LLM system.

Can it be automated? Yes... and we're almost there. With ZenML already managing our offline pipelines, the training code can be converted into a deployable pipeline. The only barrier? Cost and compute. That's why continuous training (CT) in the LLM space is more of a dream than something you actually want to do in practice.

TL;DR: If you're thinking of training your own LLMs, don't just ask "how do I fine-tune this?" Ask:
• How does it integrate?
• What data version did I use?
• Where do I store the weights?
• How do I track experiments across runs?
• How can I detach the fine-tuning from deployment?

That's what separates model builders from AI engineers.

Full breakdown here: https://lnkd.in/de_ndNbQ
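A minimal sketch of steps 1-3 and 5 above, assuming the usual Unsloth + TRL APIs (exact arguments vary by version); the model name, dataset file, template, and hyperparameters are illustrative, not the course's exact values:

from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# 1. Load the base model in 4-bit (fits a T4) and attach LoRA adapters.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="meta-llama/Llama-3.1-8B-Instruct",  # illustrative base model id
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model, r=16, lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# 2. Load the dataset and format it with an Alpaca-style instruction template.
def to_alpaca(sample):
    return {"text": ("### Instruction:\nSummarize the document.\n\n"
                     f"### Input:\n{sample['document']}\n\n"
                     f"### Response:\n{sample['summary']}")}

dataset = load_dataset("json", data_files="summarization.jsonl", split="train").map(to_alpaca)

# 3-4. Fine-tune; Comet picks up the metrics when report_to="comet_ml" is set.
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=2,
                           num_train_epochs=1, report_to="comet_ml"),
)
trainer.train()

# 5. If the metrics look good, push the adapter to the Hugging Face model registry.
model.push_to_hub("my-org/llama-3.1-8b-summarizer")  # illustrative repo id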


307

Fine-tuning should NEVER be the first step when building an AI system. Here's the only time you should do it: when nothing else works.

But let's face it... most teams jump straight into fine-tuning. Why? Because it feels technical. Custom. Smart. In reality, it's often just unnecessary complexity.

Before you spend hours generating synthetic data and burning through GPUs, you must ask yourself three questions:
→ Can I solve this with smart prompt engineering?
→ Can I improve it further by adding RAG?
→ Have I even built an evaluatable system yet?

If the answer to those isn't a solid "YES," you have no business fine-tuning anything.

I say this all the time: "You don't need your own model; you need better system design."
- Prompt engineering handles ~30-50% of cases
- RAG handles another ~30-40%
- Fine-tuning? Reserve it for the last 10% (when the problem demands it)

For example, in our work at Decoding ML, we only fine-tune when:
- The context window is too small for RAG to help
- The task requires domain-specific tone, behavior, or reasoning
- The system is mature enough to warrant the extra complexity

Anything sooner is overkill.

Thanks to Maxime Labonne for helping sharpen this thinking during our work on The LLM Engineer's Handbook (especially when mapping tradeoffs between fine-tuning, prompting, and RAG).

Want to learn more? Check out Lesson 4 of the Second Brain AI Assistant course. Link in the comments.


292

90% of RAG systems struggle with the same bottleneck (and better LLMs are not the solution): retrieval.

And most teams don't realize it because they rush to build without proper evaluation. Before I tell you how to fix this, let me make something clear: naive RAG is easy. You chunk some docs, embed them, drop a top_k retriever on top, and call it a pipeline. Getting it production-ready? That's where most teams stall.
→ They get hallucinations.
→ They miss key info.
→ Their outputs feel... off.

Why? Because the quality of generation is downstream of the quality of context... and naive RAG often pulls in irrelevant or partial chunks that confuse the LLM.

If you're serious about improving your system, here's the progression that actually works:

Step 1: Fix the basics
These "table-stakes" upgrades outperform fancy models most of the time:
→ Smarter chunking - dynamic over fixed-size; respect document structure.
→ Chunk size tuning - too long = lost in the middle; too short = fragmented context.
→ Metadata filtering - boosts precision by narrowing scope semantically and structurally.
→ Hybrid search - combine vector + keyword filtering.

Step 2: Layer on advanced retrieval
When basic techniques aren't enough:
→ Re-ranking (learned or rule-based)
→ Small-to-big retrieval: retrieve sentences, synthesize larger windows.
→ Recursive retrieval (e.g., LlamaIndex)
→ Multi-hop + agentic retrieval: when you need reasoning across documents.

Step 3: Evaluate or die trying
There's no point iterating blindly. Do the following:
→ End-to-end eval - is the output good? Ground truths, synthetic evals, user feedback.
→ Component-level eval - does the retriever return the right chunks? Use ranking metrics like MRR, NDCG, success@k.

Step 4: Fine-tuning = last resort
Don't start here. Do this only when:
→ Your domain is so specific that general embeddings fail.
→ Your LLM is too weak to synthesize even when the context is correct.
→ You've squeezed all the juice from prompt + retrieval optimizations.

Fine-tuning adds cost, latency, and infra complexity. It's powerful, but only when everything else is dialed in.

Note: these notes are from a talk over a year old. And yet... most teams are still stuck in Step 0. That tells you something: the surface area of RAG is small, but building good RAG is still an unsolved craft. Let's change that.

Want to learn to implement advanced RAG systems yourself? The link is in the comments.

Image credit: LlamaIndex and Jerry Liu
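As a concrete illustration of the hybrid search idea from Step 1, here is a minimal sketch that blends a keyword (BM25) score with a dense vector score; the corpus, embedding model, and weighting are placeholders, not the setup from the talk:

import numpy as np
from rank_bm25 import BM25Okapi
from sentence_transformers import SentenceTransformer

docs = [
    "LoRA adapters make fine-tuning cheaper.",
    "Hybrid search combines keyword and vector retrieval.",
    "MongoDB can serve as a vector store.",
]

# Keyword side: BM25 over whitespace-tokenized docs.
bm25 = BM25Okapi([d.lower().split() for d in docs])

# Vector side: dense embeddings (model choice is illustrative).
encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = encoder.encode(docs, normalize_embeddings=True)

def hybrid_search(query: str, alpha: float = 0.5, k: int = 2):
    """Blend normalized BM25 and cosine scores; alpha weights the vector side."""
    kw = np.array(bm25.get_scores(query.lower().split()))
    kw = kw / (kw.max() + 1e-9)              # scale keyword scores to [0, 1]
    qv = encoder.encode([query], normalize_embeddings=True)[0]
    dense = doc_vecs @ qv                    # cosine similarity (vectors are normalized)
    scores = alpha * dense + (1 - alpha) * kw
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

print(hybrid_search("how do I combine keyword and vector search?"))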


280

Everyone's building agents. But very few are building them for production... And that's the gap we wanted to close with Lesson 2 of the PhiloAgents course.

Too often, agentic demos look impressive - until you try scaling them. Then comes the hidden complexity:
→ Orchestrating LLM calls
→ Managing memory
→ Debugging emergent behavior
→ Building in retrieval without breaking the flow

That's why this lesson doesn't stop at "toy demos." We show you how to build a real, production-ready RAG agent inside a gaming simulation. An agent that can impersonate philosophers, carry context-aware conversations, and dynamically adapt to user input. Not just an NPC, but a character.

Here's what you'll build:
→ An agentic RAG system powered by LangGraph
→ A memory architecture backed by MongoDB
→ Persona-specific prompt templates streamed via Groq LLM APIs
→ Observability and evaluation pipelines instrumented with Opik (by Comet)
→ A system designed to scale, recover, and impersonate in real time

This is how you go from scripts to systems. From chatbots to characters.

Lesson 2 is live. (Link in the comments)

P.S. A massive shout-out and thanks to Miguel Otero Pedrido for the collab 👊


242

Finding the right open-source LLMs to work with is a pain in the backside.

98% of LLM leaderboards are bloated. Too many closed models. Too many broken repos. Too little clarity on what actually works in production. It's frustrating.

Fortunately, I found something to help mitigate this issue... If you're looking for open-source LLMs that just run - for fine-tuning, quantization, and deployment - Unsloth AI has done the hard work for you.

They've compiled a list of all the popular, supported, and production-viable models that:
→ Fine-tune easily (with Unsloth + QLoRA)
→ Quantize to GGUFs for local inference (Ollama, llama.cpp, OpenWebUI)
→ Play well with Hugging Face and Python
→ Come with working code and notebook examples
→ Are easy to deploy to Hugging Face Inference Endpoints, AWS, GCP, Modal, and more

No more jumping between broken GitHub repos or guessing which models will survive a production pipeline. It's the fastest way to stay current without losing your mind.

If you're working with open-source LLMs, just bookmark this list. Link in the comments!


236

90% of AI engineers are dangerously abstracted from reality.

They work with:
→ Prebuilt models
→ High-level APIs
→ Auto-magical cloud tools

But here's the thing: if you don't understand how these tools actually work, you'll always be guessing when something breaks. That's why the best AI engineers I know go deeper...

They understand how Git actually tracks changes. How Redis handles memory. How Docker isolates environments. If you're serious about engineering, you should build the tools you use.

And that's why I recommend CodeCrafters.io (YC S22). You won't just learn tools. You'll rebuild them (from scratch).
→ Git, Redis, Docker, Kafka, SQLite, Shell...
→ Step by step, test by test
→ In your favorite language (Rust, Python, Go, etc.)

It's perfect for AI engineers who want to:
→ Level up their backend + system design skills
→ Reduce debugging time in production
→ Build apps that actually scale under load

And most importantly...
→ Stop being a model user
→ Start being a systems thinker

If I had to level up my engineering foundations today, CodeCrafters is where I'd start. The link is in the comments.

P.S. We only promote tools we use or would personally take.
P.P.S. Subscribe with my affiliate link to get a 40% discount :)


225

The difference between RAG and Agentic RAG isn't technical. It's philosophical...

RAG assumes answers are linear. Agentic RAG assumes thinking is iterative. That single belief changes how you architect everything. Let me explain.

Most RAG pipelines follow this recipe:
→ Embed a bunch of documents
→ Retrieve top-k chunks
→ Slam them into a prompt
→ Pray the model gets it right

It works until the query gets complex. Then the whole thing falls apart. Why? Because RAG is passive. It retrieves once and hopes for the best. But real questions aren't solved in one shot. They evolve. They require clarification, follow-ups, and refined context.

That's where Agentic RAG comes in... Agentic RAG doesn't just retrieve, it also reasons:
→ Do I have enough context?
→ Should I re-query with a better search?
→ Should I ask the user for clarification?
→ Which tool should I use next?

The result? A system that thinks before it speaks.

If you're building copilots, assistants, or long-form Q&A tools, this matters. Because reliability comes from better decisions. Agentic RAG introduces that decision loop. It turns workflows into systems. It trades static pipelines for dynamic reasoning. And that mindset shift is where real GenAI builders separate themselves from the hype.

Want to see what Agentic RAG looks like in action? We break it down with code, graphs, and production use cases in the Second Brain AI Assistant course.

Link: https://lnkd.in/dA465E_J
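To make the decision loop concrete, here is a minimal sketch of an agentic retrieval loop; llm_decide, search, and the action names are hypothetical placeholders, not the course's actual implementation:

from dataclasses import dataclass, field

@dataclass
class AgentState:
    question: str
    context: list[str] = field(default_factory=list)

def llm_decide(state: AgentState) -> str:
    """Hypothetical LLM call that returns one of: 'retrieve', 'clarify', 'answer'."""
    return "retrieve" if not state.context else "answer"

def search(query: str) -> list[str]:
    """Hypothetical retrieval call (vector or hybrid search)."""
    return [f"doc matching: {query}"]

def run_agent(question: str, max_steps: int = 4) -> str:
    state = AgentState(question)
    for _ in range(max_steps):              # bounded loop instead of one-shot retrieval
        action = llm_decide(state)
        if action == "retrieve":            # agent decides it needs more context
            state.context += search(state.question)
        elif action == "clarify":           # agent decides to ask the user a question
            return "Could you clarify what you mean?"
        else:                               # enough context: generate the final answer
            return f"Answer grounded in {len(state.context)} retrieved chunk(s)."
    return "Gave up after max_steps."

print(run_agent("What did Aristotle say about habit?"))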


175

The #1 mistake in building LLM agents? Thinking the project ends at reasoning.

Here's when it actually ends: when your agent can talk to the world securely, reliably, and in real time. And that's what Lesson 4 of the PhiloAgents course is all about.

Up to this point, we focused on making our agents think:
→ Philosophical worldviews
→ Context-aware reasoning
→ Memory-backed conversations

But intelligence alone isn't enough. To be useful, agents need a voice. To be deployable, they need an interface. To be real, they need to exist as APIs. This lesson is the bridge from the local prototype to the live system.

Here's what you'll learn:
→ How to deploy your agent as a REST API using FastAPI
→ How to stream responses token-by-token with WebSockets
→ How to wire up a clean backend-frontend architecture using FastAPI (web server) + Phaser (game interface)
→ How to think about agent interfaces in real-world products (not just demos)

In short: this is how you ship an agent that reasons AND responds in production.

Shout-out to Anca-Ioana Martin for helping shape this lesson and write the deep-dive article. And of course... big thanks to my co-creator Miguel Otero Pedrido for the ongoing collab.

Link to Lesson 4 in the comments.
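For the token-by-token streaming part, here is a minimal FastAPI WebSocket sketch; the generate_tokens function stands in for the real agent and is purely illustrative:

import asyncio
from fastapi import FastAPI, WebSocket

app = FastAPI()

async def generate_tokens(message: str):
    """Hypothetical stand-in for the agent: yields the reply one token at a time."""
    for token in f"Echoing your question about {message}".split():
        await asyncio.sleep(0.05)   # simulate LLM latency
        yield token + " "

@app.websocket("/ws/chat")
async def chat(websocket: WebSocket):
    await websocket.accept()
    while True:
        user_message = await websocket.receive_text()   # message from the game frontend
        async for token in generate_tokens(user_message):
            await websocket.send_text(token)            # stream tokens as they arrive
        await websocket.send_text("[END]")              # simple end-of-turn marker

# Run locally with: uvicorn main:app --reload   (assuming this file is main.py)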


172

Everyone chunks documents for retrieval. But what if that's the wrong unit? Let me explain...

In standard RAG, we embed small text chunks and pass those into the LLM as context. It's simple, but flawed. Why? Because small chunks are great for retrieval precision, but terrible for generation context.

That's where parent retrieval comes in (aka small-to-big retrieval). Here's how it works:
→ You split your documents into small chunks
→ You embed and retrieve using those small chunks
→ But you don't pass the chunk to the LLM...
→ You pass the parent document that the chunk came from

The result?
→ Precise semantic retrieval (thanks to small, clean embeddings that encode a single entity)
→ Rich generation context (because the LLM sees the broader section)
→ Fewer hallucinations
→ Less tuning needed around chunk size and top-k

It's one of the few advanced RAG techniques that work in production. No fancy agents. No latency bombs. No retraining.

We break it all down (with diagrams and code examples) in Lesson 5 of the Second Brain AI Assistant course.

Link to the full lesson in the comments.
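Here is a minimal sketch of the small-to-big idea: retrieve by chunk embedding, but hand the parent section to the LLM. The in-memory store, documents, and embedding model are placeholders for whatever vector DB and data you actually use:

import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")   # illustrative embedding model

# Parent documents (e.g., full sections of your notes).
parents = {
    "doc1": "Full section about LoRA fine-tuning, its memory savings, and trade-offs ...",
    "doc2": "Full section about hybrid retrieval combining BM25 with dense vectors ...",
}

# Small chunks, each remembering which parent it came from.
chunks = [
    {"parent_id": "doc1", "text": "LoRA trains low-rank adapters instead of full weights."},
    {"parent_id": "doc2", "text": "Hybrid retrieval blends keyword and vector scores."},
]
chunk_vecs = encoder.encode([c["text"] for c in chunks], normalize_embeddings=True)

def retrieve_parent(query: str) -> str:
    """Score small chunks, but return the parent document for generation context."""
    qv = encoder.encode([query], normalize_embeddings=True)[0]
    best_chunk = int(np.argmax(chunk_vecs @ qv))    # cosine similarity on normalized vectors
    return parents[chunks[best_chunk]["parent_id"]]

print(retrieve_parent("how does LoRA reduce training cost?"))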


161

Here's the problem with most AI books: they teach the model, not the system. Which is fine... until you try to deploy that model in production.

That's where everything breaks:
- Your RAG pipeline is duct-taped together
- Your eval framework is an afterthought
- Your prompts aren't versioned
- Your architecture can't scale

That's why Maxime and I wrote the LLM Engineer's Handbook... We wanted to create a practical guide for AI engineers who build real-world AI applications. This isn't just another guide... it's a practical roadmap for designing and deploying real-world LLM systems.

In the book, we cover:
→ Efficient fine-tuning workflows
→ RAG architectures
→ Evaluation pipelines with LLM-as-judge
→ Scaling strategies for serving + infra
→ MLOps + LLMOps patterns baked in

Whether you're building your first assistant or scaling your 10th RAG app... this book gives you the mental models and engineering scaffolding to do it right.

Here's the link to get your copy: https://lnkd.in/dVgFJtzF


151

Back in 2023, I was struggling to keep track of my notes. So I did something the Black Mirror producers would be proud of... I built a second brain.

All I wanted was an AI-powered assistant connected to my knowledge base. Something I could use to recall notes, surface ideas, and help me think. But making it real wasn't as simple as connecting a chatbot to Notion.

To get it working, I had to build a full system:
→ A modular RAG pipeline to retrieve from custom notes
→ Ingestion, crawling, and cleaning for all my noisy resources, regardless of their form
→ Real-time APIs to stream responses as I typed
→ A memory layer to track context across conversations
→ Observability and evaluation to measure what worked

No hacks. No hardcoded prompts. Just an LLM agent that understood my notes and helped me reason through them.

After building it, I open-sourced the entire thing - code and lessons. And over the past year, thousands of engineers have cloned, forked, and built on top of it. This week, the GitHub repo passed 1,000 stars.

I just want to say a massive thank you to everyone who tried it, shared it, or built something new with it. And to those who haven't seen it yet - the link's in the comments.

P.S. Let me know what you'd create with it.


139

98% of people consume AI content. But only 2% are actually building with it (and we wanted to change that)...

So we created 5 open-source, project-based AI courses that teach you how to go from zero to production. Each course is built with developers in mind, backed by best practices from MLOps, LLMOps, and modern software engineering. And 100% free.

Here's what's inside:

PhiloAgents (with The Neural Maze)
Build a character simulation engine that brings AI agents to life with memory, retrieval, and real-time dialogue, powered by:
- Groq
- LangGraph (by LangChain)
- Opik (by Comet)
→ Learn agents, RAG, persona design, and modular LLM architecture.

Second Brain AI Assistant
Build an AI assistant that chats with your personal knowledge base.
→ Learn end-to-end agentic RAG pipelines, fine-tuning, modular design, and full-stack AI integration.

Amazon Tabular Semantic Search
Master vector search over structured data by building a natural-language search engine for e-commerce products.
→ Learn how to embed, index, and retrieve relevant product data using semantic search.

LLM Twin: Your Production-Ready AI Replica
Create your own LLM-powered twin from scratch, designed to reflect your knowledge and communication style.
→ Learn fine-tuning, embedding, vector databases, and serving production-grade AI.

H&M Real-Time Recommender System
Deploy a neural recommender system for fashion items with real-time serving using Hopsworks and KServe.
→ Learn feature engineering, MLOps, Kubernetes deployment, and retrieval-augmented recsys.

Just:
→ Clone the repo
→ Open the Substack lesson
→ Follow the guide + run the code
→ Remix it, fork it, and make it your own

If you're tired of learning in isolation and want to actually build production AI, these courses are for you.

Link to all 5 courses: https://lnkd.in/d8gP9cxC


206

Evaluation is the bottleneck of every serious GenAI system. And 90% of teams are still treating it as an afterthought...

If you're building LLM apps, especially with RAG or agentic systems, you've probably hit the same wall:
→ Messy prompt changes with zero version control
→ Vector search that "feels" right, but fails silently
→ Outputs that kinda work, but you have no way to quantify why
→ No strategy to measure the impact of new features

So ahead of my upcoming Open Data Science Conference (ODSC) 2025 webinar, I'm releasing the full open-source evaluation playbook. If you want to explore the code before the talk drops, here's your chance... Note: you don't have to attend the webinar to use it - the README is detailed enough to guide you.

Here's what you'll get:

Module 1: Prompt Monitoring + Versioning
Track every LLM call and prompt change using Opik by Comet. Visualize agent traces, compare versions, and finally debug with confidence.

Module 2: Retrieval Evaluation for RAG
Use UMAP/t-SNE to visualize embeddings. Compute retrieval recall/precision with LLM-as-a-judge.

Module 3: Application-Level Metrics
Detect hallucinations, moderation issues, and quality drops with custom judges. Log everything into Opik to track iterations across builds.

Module 4: Collecting Real User Feedback
Capture structured feedback from users to fuel future eval splits or fine-tuning jobs (e.g., preference alignment).

Why am I doing this? Because evaluation is hard. And most teams don't have a mental model for how to think about these moving parts - let alone code for it. This project brings structure, tooling, and clarity to that chaos.

Link to the repo in the comments.

P.S. If you're joining my session at ODSC, keep it bookmarked - we'll walk through the full stack live.
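As a rough illustration of the LLM-as-a-judge idea behind Modules 2-3, here is a minimal sketch that scores whether an answer is grounded in the retrieved context; call_llm and the 1-5 rubric are placeholders, not the playbook's actual prompts or tooling:

import json

JUDGE_PROMPT = """You are an evaluator. Given a context and an answer, rate from 1 to 5
how well the answer is grounded in the context. Reply as JSON: {{"score": <int>, "reason": "<short>"}}.

Context:
{context}

Answer:
{answer}
"""

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; swap in your provider's client (OpenAI, Groq, etc.)."""
    return '{"score": 4, "reason": "Mostly supported by the context."}'

def judge_groundedness(context: str, answer: str) -> dict:
    raw = call_llm(JUDGE_PROMPT.format(context=context, answer=answer))
    verdict = json.loads(raw)                 # parse the judge's structured verdict
    verdict["pass"] = verdict["score"] >= 3   # simple threshold for CI-style gating
    return verdict

print(judge_groundedness(
    context="LoRA trains small low-rank adapter matrices on top of frozen weights.",
    answer="LoRA fine-tunes only small adapter matrices, keeping the base model frozen.",
))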


196

Here's why 98% of agent demos break after 3 turns (hint: it's not because the prompts are bad):

The agent doesn't remember what just happened.

Without memory, you don't get reasoning. Without reasoning, you don't get believable agents... You just get brittle demos that fall apart under pressure. That's why we made Lesson 3 of the PhiloAgents course all about memory.

In this lesson, we cover how memory enables:
→ Conversational flow via short-term memory
→ Grounded reasoning via long-term memory (with agentic RAG)
→ Semantic vs. episodic vs. procedural long-term memory
→ Scalable architecture across threads, users, and interactions
→ Fast, focused context handling through smart summarization

We also break down the critical design choices:
→ What kind of memory structures you actually need
→ How to avoid bloated infra with a single vector DB
→ Why long-term memory ≠ just sticking RAG on top

Interested? Lesson 3 is now live. You'll build all this directly into a philosopher NPC simulation. (Link in the comments)

P.S. Huge thanks to Miguel Otero Pedrido for the collab on this one. This was one of the most fun to build, and it's a piece most agentic builders overlook.
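To make the short-term-memory-plus-summarization idea tangible, here is a minimal sketch of a conversation buffer that summarizes and trims older turns once it grows too long; summarize is a hypothetical LLM call and the thresholds are arbitrary:

from dataclasses import dataclass, field

def summarize(messages: list[str]) -> str:
    """Hypothetical LLM call that compresses old turns into one summary string."""
    return f"(summary of {len(messages)} earlier messages)"

@dataclass
class ShortTermMemory:
    max_turns: int = 6                      # keep only the most recent turns verbatim
    summary: str = ""                       # rolling summary of everything older
    turns: list[str] = field(default_factory=list)

    def add(self, message: str) -> None:
        self.turns.append(message)
        if len(self.turns) > self.max_turns:
            old, self.turns = self.turns[:-self.max_turns], self.turns[-self.max_turns:]
            self.summary = summarize(([self.summary] + old) if self.summary else old)

    def as_context(self) -> str:
        """What actually goes into the prompt: compact summary + recent turns."""
        parts = ([f"Summary so far: {self.summary}"] if self.summary else []) + self.turns
        return "\n".join(parts)

memory = ShortTermMemory()
for i in range(10):
    memory.add(f"user/agent turn {i}")
print(memory.as_context())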


192

Writing a book felt like a gamble. But looking back, it was one of the best decisions I've ever made.

As of today, the LLM Engineer's Handbook has:
- Sold 12,000+ copies
- Become an Amazon bestseller
- Given me the freedom to build without pressure

When I completely renounced my social life to focus on writing, I didn't know if anyone would read it. I didn't know if it would open any doors. I didn't know if it would be worth the effort. Fortunately, it all paid off.

The book gave me breathing room to focus, reinvest, and go all-in on what I love:
→ Content
→ AI & software
→ Building Decoding ML

But the impact went far beyond the numbers...
→ It gave me the confidence that my content is good
→ It led to speaking invites at QCon, ODSC, and DataCamp
→ It connected me to incredible collaborators like [@whats-ai], which sparked our next course on agents
→ And it directly led to my current consulting role (plus many more I've had to turn down)

In short: it's been the catalyst for almost everything I'm building today.

I'm extremely grateful to Maxime Labonne for co-authoring this journey and to Gebin George for trusting me with the opportunity.

TL;DR: If you're thinking about writing a book, do it. You're not just publishing words... you're publishing proof of who you are and what you stand for.


267

Here's the best piece of advice you need to build real-world agents: "Stop thinking in prompts; start thinking in graphs."

Because under the hood, serious agentic systems aren't just string manipulation. They're structured, dynamic workflows. And that's exactly how the PhiloAgent works... It's not a prompt wrapped in a Python script. It's a full agentic RAG system. Let's break it down...

We use a stateful execution graph to drive our philosopher NPCs. Here's how:

1. Conversation Node
Handles the primary logic. It merges incoming messages, current state, and philosopher identity (style, tone, perspective) to generate the next reply.

2. Retrieval Tool Node
If the agent needs more information, it calls a MongoDB-powered vector search to fetch relevant facts about the philosopher's life and work. This turns simple RAG into agentic RAG, since the LLM dynamically chooses tool calls.

3. Summarize Context Node
We summarize long retrieved passages before injecting them into the prompt. This keeps prompts clean and focused, avoiding dumping in whole Wikipedia pages.

4. Summarize Conversation Node
If the conversation gets long, we summarize and trim earlier messages and keep only recent context, while preserving meaning. We need the summary for the agent to stay consistent and reference early topics from the conversation. This keeps the context window short and focused, lowering costs and latency and improving accuracy.

5. End Node
Wraps up the cycle. Memory is updated, context evolves, and the agent grows with every message.

Here are the implementation details:
- The short-term memory is kept as a Pydantic in-memory state: the PhilosopherState
- Tool orchestration with LangChain
- Low-latency LLMs, such as Llama 70B, served by Groq
- Smaller 8B models used for summarization tasks
- Prompt templates dynamically generated per philosopher
- Served as a real-time REST API through FastAPI & WebSockets to power the game UI
- Monitoring + evaluation wired through Opik by Comet

In short: agents come alive through structure, memory, tools, and flow control. You can adapt this exact system to build:
→ Context-aware assistants
→ Multi-turn RAG copilots
→ NPCs, tutors, or internal tools that think and retrieve

We walk through every step (with code) in Lesson 2 of the PhiloAgents course.

Link in the comments.
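Here is a minimal LangGraph-style sketch of such a graph, assuming the current langgraph StateGraph API; the node functions and state fields are simplified stand-ins, not the course's actual PhilosopherState or logic:

from typing import TypedDict
from langgraph.graph import StateGraph, END

class PhilosopherState(TypedDict):
    messages: list[str]
    context: str

def conversation(state: PhilosopherState) -> dict:
    # Merge messages + persona and draft the next reply (LLM call omitted).
    return {"messages": state["messages"] + ["agent reply"]}

def retrieve(state: PhilosopherState) -> dict:
    # Stand-in for the MongoDB vector search tool call.
    return {"context": "facts about the philosopher"}

def summarize_conversation(state: PhilosopherState) -> dict:
    # Trim old turns, keeping only recent context.
    return {"messages": state["messages"][-4:]}

def needs_retrieval(state: PhilosopherState) -> str:
    # Route to the retrieval tool until some context has been fetched.
    return "retrieve" if not state["context"] else "summarize_conversation"

builder = StateGraph(PhilosopherState)
builder.add_node("conversation", conversation)
builder.add_node("retrieve", retrieve)
builder.add_node("summarize_conversation", summarize_conversation)
builder.set_entry_point("conversation")
builder.add_conditional_edges("conversation", needs_retrieval)
builder.add_edge("retrieve", "conversation")
builder.add_edge("summarize_conversation", END)

graph = builder.compile()
print(graph.invoke({"messages": ["Who was Aristotle?"], "context": ""}))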


266

Here's the most annoying thing about MLOps pipelines (and it's contrary to popular belief):

Most break at the last mile. Not during training. Not during evaluation. But at the moment of deployment - more specifically, when testing your local models in production.

It's the part where:
- DevOps gets looped in late
- ML engineers get blocked by infra
- Debugging takes forever

And worst of all? You might wait 20 minutes just to find out your endpoint doesn't work. These long cycles to test your ML deployments kill productivity and, most importantly, the inspiration and experimentation speed critical to building AI solutions.

But there's a better way to approach this... Instead of treating deployment as someone else's problem, ML teams can take control by testing their models locally before handing them off.

Here's what that looks like in practice:
1. Train and log your model using MLflow
2. Wrap it with a custom class that defines your prediction logic (e.g., convert labels to readable outputs)
3. Download the model artifact using MLflow's CLI
4. Serve it locally using the MLflow inference server
5. Test the /invocations endpoint with real requests to ensure contract correctness
6. Validate edge cases (e.g., malformed input) to catch failures early

This flow ensures your deployment logic works **before** involving production infra, speeding up the development cycle by 10x.

A simple shift in mindset. A massive win in practice.

Thanks to Maria Vechtomova and Başak Tuğçe Eskili for outlining this workflow so clearly in their latest article.

P.S. I highly recommend their course, End-to-end MLOps with Databricks.
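A minimal sketch of steps 1-2 and 4-5, assuming the standard MLflow pyfunc workflow; the wrapper class, serving flags, and request payload are illustrative rather than the article's exact code:

import mlflow
import mlflow.pyfunc

class ReadableClassifier(mlflow.pyfunc.PythonModel):
    """Wraps prediction logic so raw label ids come back as readable strings."""
    labels = {0: "negative", 1: "positive"}

    def predict(self, context, model_input):
        # Toy logic: positive when the text contains 'good'; replace with your real model.
        return [self.labels[int("good" in str(text).lower())] for text in model_input["text"]]

with mlflow.start_run():
    info = mlflow.pyfunc.log_model(artifact_path="model", python_model=ReadableClassifier())
    print("Logged model at:", info.model_uri)

# 4. Serve the logged model locally (shell):
#    mlflow models serve -m <model_uri> -p 5001 --no-conda
#
# 5. Hit the local /invocations endpoint with a real request (shell):
#    curl -X POST http://127.0.0.1:5001/invocations \
#         -H "Content-Type: application/json" \
#         -d '{"dataframe_split": {"columns": ["text"], "data": [["good product"]]}}'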


254

If you're thinking about consulting in AI, think twice. Here's what it looks like:

In week 1, one of my existing clients shared the product vision and roadmap, then dropped a task list that said:
→ Deploy the product to AWS with CI/CD
→ Support multiple deployment modes
→ Add LLM observability
→ Optimize and stabilize the core system

That's it. No onboarding, no long handovers. From there, it was all on me to:
→ Reverse-engineer the code and architecture
→ Identify missing pieces in the infra
→ Learn whatever tool or system I needed on the fly
→ Ship fast in an environment with low resources and zero handholding

This is the reality of working as a contractor in early-stage AI teams. You don't get to ask for the perfect setup. You make decisions in ambiguity. You learn fast, adapt faster, and ship before you feel ready.

Here's what I've learned:
✅ You'll never know everything going in
✅ Mastering fundamentals matters more than mastering tools
✅ You need to balance speed with systems thinking
✅ Your job isn't to follow process - it's to create one that works under fire

If you're looking to freelance or consult in AI, prepare to be thrown into the fire. Your value is in how fast you find clarity, not how much you already know.

No better prep than building, breaking, and repeating.


417

The most underestimated part of building LLM applications? Evaluation.

Evaluation can take up to 80% of your development time (because it's HARD). Most people obsess over prompts. They tweak models. Tune embeddings. But when it's time to test whether the whole system actually works? That's where it breaks.

Especially in agentic RAG systems, where you're orchestrating retrieval, reasoning, memory, tools, and APIs into one seamless flow. Implementation might take a week. Evaluation takes longer. (And it's what makes or breaks the product.)

Let's clear up a common confusion: LLM evaluation ≠ RAG evaluation.

LLM eval tests reasoning in isolation - useful, but incomplete. In production, your model isn't reasoning in a vacuum. It's pulling context from a vector DB, reacting to user input, and being shaped by memory + tools. That's why RAG evaluation takes a system-level view. It asks: did this app respond correctly, given the user input and the retrieved context?

Here's how to break it down:

Step 1: Evaluate retrieval.
→ Are the retrieved docs relevant? Ranked correctly?
→ Use LLM judges to compute context precision and recall
→ If ranking matters, compute NDCG and MRR metrics
→ Visualize embeddings (e.g., UMAP)

Step 2: Evaluate generation.
→ Did the LLM ground its answer in the right info?
→ Use heuristics, LLM-as-a-judge, and contextual scoring.

In practice, treat your app as a black box and log:
- User query
- Retrieved context
- Model output
- (Optional) Expected output

This lets you debug the whole system, not just the model.

How many samples are enough? 5-10? Too few. 30-50? A good start. 400+? Now you're capturing real patterns and edge cases. Still, start with however many samples you have available, and keep expanding your evaluation split. It's better to have an imperfect evaluation layer than nothing.

Also track latency, cost, throughput, and business metrics (like conversion or retention).

Some battle-tested tools:
→ RAGAS (retrieval-grounding alignment)
→ ARES (factual grounding)
→ Opik by Comet (end-to-end open-source eval + monitoring)
→ LangSmith, Langfuse, Phoenix (observability + tracing)

TL;DR: Agentic systems are complex. Success = making evaluation part of your design from day 0.

We unpack this in full in Lesson 5 of the PhiloAgents course.

Check it out here: https://lnkd.in/dA465E_J
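For the component-level side of Step 1, here is a minimal sketch of MRR and recall@k computed over a toy evaluation split; the query and relevance data are invented purely to show the arithmetic:

def mrr(results: list[list[str]], relevant: list[set[str]]) -> float:
    """Mean Reciprocal Rank: 1/rank of the first relevant doc, averaged over queries."""
    total = 0.0
    for ranked, rel in zip(results, relevant):
        total += next((1.0 / (i + 1) for i, doc in enumerate(ranked) if doc in rel), 0.0)
    return total / len(results)

def recall_at_k(results: list[list[str]], relevant: list[set[str]], k: int = 3) -> float:
    """Fraction of relevant docs that show up in the top-k results, averaged over queries."""
    scores = [len(set(ranked[:k]) & rel) / len(rel) for ranked, rel in zip(results, relevant)]
    return sum(scores) / len(scores)

# Toy eval split: retriever output per query + ground-truth relevant doc ids.
retrieved = [["d2", "d7", "d1"], ["d4", "d9", "d3"]]
ground_truth = [{"d1"}, {"d9", "d3"}]

print("MRR:", mrr(retrieved, ground_truth))                # (1/3 + 1/2) / 2 ≈ 0.42
print("recall@3:", recall_at_k(retrieved, ground_truth))   # (1/1 + 2/2) / 2 = 1.0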


322

95% of agents never leave the notebook. And it's not because the code is bad... It's because the system around them doesn't exist.

Here's my point: anyone can build an agent that works in isolation. The real challenge is shipping one that survives real-world conditions (e.g., live traffic, unpredictable users, scaling demands, and messy data). That's exactly what we tackled in Lesson 1 of the PhiloAgents course.

We started by asking, "What does an agent need to survive in production?" and decided on 4 things: an LLM that runs in real time, a memory to understand what just happened, a brain that can reason and retrieve factual information, and a monitor to ensure it all works under load. So we designed a system around those needs.

The frontend is where the agent comes to life. We used Phaser to simulate a browser-based world. But more important than the tool is the fact that this layer is completely decoupled from the backend (so game logic and agent logic evolve independently).

The backend, built in FastAPI, is where the agent thinks. We stream responses token-by-token using WebSockets. All decisions, tool calls, and memory management happen server-side.

Inside that backend sits the agentic core: a dynamic state graph that lets the agent reason step-by-step. The agent is orchestrated by LangGraph and powered by Groq for real-time inference speeds. It can ask follow-up questions, query external knowledge, or summarize what's already been said (all in a loop).

When the agent needs facts, it queries long-term memory. We built a retrieval system that mixes semantic and keyword search, using cleaned, de-duplicated philosophical texts crawled from the open web. That memory lives in MongoDB and gets queried in real time. Meanwhile, short-term memory tracks the conversation thread across turns. Without it, every new message would be a reset. With it, the agent knows what's been said, what's been missed, and how to respond.

But here's the part most people skip: observability. If you want to improve your system, you need to see and measure what it's doing. Using Opik (by Comet), we track every prompt, log every decision, and evaluate multi-turn outputs using automatically generated test sets.

Put it all together and you get a complete framework that remembers, retrieves, reasons, and responds in a real-world environment.

Oh... and we made the whole thing open source.

Link: https://lnkd.in/d8-QbhCd

P.S. Special shout-out to my co-creator Miguel Otero Pedrido.


309


Famous LinkedIn Creators to Check Out

Sabeeka Ashraf

@sabeekaashraf

On January 8th my "one day" became DAY ONE ... 7 days earlier I downgraded my life into a suitcase....

20k

Followers

Sahil Bloom

@sahilbloom

Sahil Bloom is the New York Times Bestselling author of The 5 Types of Wealth: A Transformative Guid...

1m

Followers

Izzy Prior

@izzyprior

No matter how outrageously amazing your mission is, it's likely you're not seeing the results you ne...

82k

Followers

Richard Moore

@richardjamesmoore

โฉYou know how all the clients you'll ever work with are on LinkedIn, right? But you struggle to gene...

105k

Followers

Shlomo Genchin

@shlomogenchin

Hey! Here are 3 ways I can help you: 1️⃣ Talks and Workshops: I'll show your team, or students, how...

49k

Followers

Sam G. Winsbury

@sam-g-winsbury

We turn entrepreneurs into credible thought leaders through personal branding so they can scale thei...

49k

Followers

Matt Gray

@mattgray1

Over the last decade, I've built 4 successful companies and a community of over 14 million people. ...

1m

Followers

Hi! Iโ€™m Daniel. Iโ€™m the creator of The Marketing Millennials and the founder of Authority, a B2B Lin...

150k

Followers

Ash Rathod

@ashrathod

You already know storytelling is essential for your business and brand. But storytelling is much m...

73k

Followers

Amelia Sordell 🔥

@ameliasordell

Klowt builds personal brands. I founded the business after realising that the best leads came throu...

228k

Followers

Vaibhav Sisinty ↗️

@vaibhavsisinty

I'm an engineer turned marketer, now a founder. I've worked at Uber and Klook, focusing on marketi...

451k

Followers

Wes Kao

@weskao

Wes Kao is an entrepreneur, coach, and advisor who writes at newsletter.weskao.com. She is co-founde...

107k

Followers

Austin Belcak

@abelcak

CultivatedCulture.com/Coaching // I teach people how to land jobs they love in today's market withou...

1m

Followers

Justin Welsh

@justinwelsh

Over the last decade, I helped build two companies past a $1B valuation and raise over $300M in vent...

1m

Followers

Luke Matthews

@lukematthws

LinkedIn has changed. You need to change too. Hey I'm Luke, I've been marketing for 5+ years on ...

188k

Followers

Tibo Louis-Lucas

@thibaultll

Founder Prev Taplio & Tweet Hunter (sold) Building Typeframes & revid.ai Invested in animstats.com ...

6k

Followers

Andy Mewborn

@amewborn

I used to be young & cool. Now I do B2B SaaS. Husband. Dad. Ironman. Founder of Distribute // Co-fo...

213k

Followers

Guillaume Moubeche

@-g-

If you're here, that's because you know that your personal growth will drive your business growth 🚀...

80k

Followers