Nate Cooper's LinkedIn Analytics

Get the LinkedIn stats of Nate Cooper and many other LinkedIn influencers, by Taplio.


I help brands transform ideas into digital products and e-commerce platforms using human-centered design that drives real business results.

So many ambitious e-commerce ideas start out as innovative ways to serve customers but end up as just another website. Our approach to digital product creation considers human needs, not just a few customer touchpoints, to create a customer-centric experience that drives results. We partner with our clients to truly understand how a digital product fits their business goals and their customers’ experience, helping them create features, products, and user experiences that increase engagement, traffic, and sales.

We pay attention to every aspect of the digital experience. Whether it’s making sure a site is in compliance with the ADA, building a new mobile app, or customizing an e-learning platform for your business, we have you covered. You won’t end up with just another shiny new object, either. We’ll work with you to make sure the project grows your sales and user base and exceeds your business objectives. We’ve helped hundreds of businesses with development and digital product experience, navigating challenging technical hurdles and building repeatable, seamless processes for selling online.

CLIENT RESULTS:
+ Clinton Health Access Initiative - Helped this leading nonprofit build a responsive, interactive site that met the WCAG international accessibility standards, and provided the support the organization needed to make sure the new site met its objectives.
+ People TV - Improved People Magazine’s video delivery website, increasing overall time on site and reducing bounce rates by optimizing the WordPress installation.
+ Stone Breaker - Helped this boutique glove company modernize its e-commerce sites using Shopify and supported a pivot into PPE supplies in the EU market.

WHAT CLIENTS ARE SAYING:
“If your organization is new to mobile development, work with SWARM to determine your needs. They can help you build out your mobile project and meet your goals in an agile and iterative manner.” - Mobile Product Manager @ a Data Insights Company
“We were probably at 100,000 downloads before [working with SWARM] and now we’re at 400,000.” - CEO @ an App Startup
“As someone unfamiliar with the mobile world it was nice to have SWARM help guide our strategy." - Founder @ Admitted.ly

If you’re interested in chatting about a project, send me a message and let's find a time to talk.

Check out Nate Cooper's verified LinkedIn stats (last 30 days)

Followers: 13,321
Posts: 20
Engagements: 135
Likes: 107

Nate Cooper's Best Posts (last 30 days)



This is a great post from Emanuele Fabbiani pushing back on the “consulting is dead” narrative. Yes, AI is rewriting the rulebook—but let’s not confuse automation with expertise. If your consulting firm’s entire value prop is a prettier deck of common knowledge, AI should eat your lunch. But real consulting isn’t just writing slides—it’s knowing which slides matter, for whom, when, and why. AI can draft. But it can’t yet: • Read a boardroom’s mood • Diagnose a political bottleneck • Reframe the problem upstream of the prompt • Spot when the wrong metric is driving the conversation The future belongs to consultants who can blend machine precision with human perspective—not those who pretend the work is just prettier prose.


Emanuele Fabbiani


No, it is not. And here's why. 👇 🔴 We’re far from a world where AI can fully handle creative work on its own. Human input is still essential to craft ideas, review outputs, and guide AI decisions. But some things will change. 🟢 Repetitive tasks will be automated. 🟢 Creative work will become more efficient. 🟢 The focus will shift from "how" to do a task, to "what" task should be done, and from finding “the answer” to asking the right questions. Professionals across all industries will feel the pressure to master AI tools—and eventually, they will. As Fiverr’s famous campaign said: AI won’t take our jobs. It’ll take them to the next level.


4

This hits hard—and not just for researchers. Anyone working with data, strategy, or decision-making should be paying attention. This wasn’t just a breakdown in vendor oversight. It was a system optimized for speed and scale at the expense of trust. And now we’re seeing the downstream effects: organizations making decisions based on manipulated inputs, and practitioners left questioning their own work. As we integrate AI deeper into research workflows, we can’t afford to treat it as a patch. It has to be part of the rebuild—one that centers accountability, transparency, and the human judgment behind every data point. Thank you for raising the bigger questions. We need them.


Sam Ward


Yikes. 🐜 "To execute the scheme, some of the defendants recruited “ants”, who pretended to be legitimate survey takers but instead were paid a nominal fee for completing surveys that produced fraudulent market research data. Some of the defendants even served as “ants” and fraudulently took large quantities of surveys themselves and received significant payment for their “ant” work." https://lnkd.in/gW4DZdXz


2

🔥 This is the kind of clarity we need more of in AI discourse—especially from folks who actually understand how these systems function. Too many marketers and product teams still treat LLMs like search engines with ranking algorithms, when in reality they’re predictive summarization engines pulling from retrieved content. Kristine Schachinger nails the key point: if you want your content to surface, don’t game nonexistent “ranking signals”—just structure it well, write it clearly, and focus on being part of the knowledge the models draw from. #LLMs #SEO #AIliteracy #StructuredData #Copilot #ContentStrategy


Kristine Schachinger


I really wish people in my industry would learn how large language models work so they stop putting out misinformation about the supposed search function. You do not rank in a large language model; there's no such thing as a ranking signal in a large language model. Every large language model has a search engine that it uses to bring back documents TO SUMMARIZE (THEY ARE SUMMARIZATION ENGINES) when you put in a supposed query, to help ground the large language model so it doesn't write things that are fictitious. It will still write things that are fictitious. It is still a predictive text model. Yes, that's a simplification; there's a lot more that goes into it. But the model doesn't keep a bunch of documents on hand; the search engine brings back the documents. Also, generative AI search (again, not really search; it's a summarization engine) is less than 1% of all traffic to websites, so it's not something you really have to stress about. Do your SEO well so that you're one of the documents it summarizes from, and hopefully they pull your link as a citation. That does not mean it will always pull your link as a citation, because they can pull a hundred documents or more to summarize from, but that is your best way to appear in the summarization engine. Oh, and yes, you need structured data, because that is how OpenAI will pull products into its new product engine, and Bing uses it for its generative AI; Bing also feeds Copilot and OpenAI.
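The retrieve-then-summarize loop described above can be sketched in a few lines. This is an illustrative toy under stated assumptions, not any vendor's actual pipeline: the corpus, the keyword scorer, and the concatenation "summarizer" are all stand-ins for a real search index and a real LLM.

```python
# Toy sketch of the flow described above: a query goes to a search step,
# documents come back, and a "summarization engine" produces an answer
# with citations. Every name and document here is invented.

CORPUS = {
    "https://example.com/a": "LLMs are predictive text models that generate likely next words",
    "https://example.com/b": "Retrieval grounds a model by fetching documents for it to summarize",
    "https://example.com/c": "Structured data helps engines identify products and entities",
}

def retrieve(query, corpus, k=2):
    """Toy keyword search: rank documents by overlap with the query terms."""
    terms = set(query.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda item: len(terms & set(item[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer(query, corpus):
    """'Summarize' (here: concatenate) the retrieved documents, citing their URLs."""
    docs = retrieve(query, corpus)
    summary = ". ".join(text for _, text in docs) + "."
    citations = [url for url, _ in docs]
    return summary, citations

summary, cites = answer("how do LLMs summarize documents", CORPUS)
```

The point the sketch makes is the same one made above: there is no "ranking signal" inside the model; being retrievable and clearly written is what gets a document into the summary, and a citation is drawn from whatever was retrieved.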


2

"This series is a field guide for people who are still building in a time of fracture — people who feel the collapse but haven’t stopped moving." ❌ What This Series Isn’t It’s not collapse-core. It’s not institutional nostalgia. It’s not about finding “the next big thing.” This is about frameworks and patterns that help you do the work without losing the thread. Check out my *new* substack: https://lnkd.in/efFYkVSD


1

This is such a clear example of why public trust in climate science (or any science) depends not just on having the facts, but on designing systems that surface them clearly and early. The Rogan/Peterson model thrives on ambiguity, cherry-picked frames, and platforms that reward performance over substance. But good design work—like what’s being done here—asks, “How do we build processes that make truth legible at scale?” It’s not just a communication challenge. It’s a systems one.


Shachar Hatan


“You should listen to opinions that don’t necessarily align with yours,” said my friend and shared a link to the latest Joe Rogan. Fair point - I listened (1:08-1:26 in the episode). And as someone who works in this field daily, I want to clarify a few things—because it’s too important to let misinformation slide. Here’s what I have to say:   1. “I’m not a climate scientist, but…” That’s exactly why we listen to climate scientists. Dismissing climate action as a “social contagion", led by "narcissistic psychopaths” is an insult to the science, and to the people already facing the consequences of climate change. Climate science is not about 'tyrants using fear and compulsion'. It’s about informed, measured risk management built on rigorous, peer-reviewed research. 2. “The Earth is in a cooling period” Misquoting a Washington Post article doesn’t change the science. The referenced article actually warns about unprecedented rates of warming, comparing today’s human-driven climate shift to the asteroid that wiped out the dinosaurs. This isn’t “natural fluctuation”—it’s a crisis moving faster than ecosystems and societies can adapt (read it yourself, link in comment). 3. “Carbon levels are only high if you pick your starting point” Carbon levels rose 35 parts per million over 1,000 years coming out of the last ice age 20,000 years ago. We've done the same in just 14 years. And we’re 8 billion people now, not hunter-gatherers. We're pushing our planetary systems at a breakneck pace, far beyond historic norms. 4. “But the Earth is 20% greener than it was 30 years ago” Yes—but not for the reasons claimed by Peterson. NASA’s most recent research shows this greening is largely due to human intervention, particularly massive tree-planting programs in China and agricultural efforts in India—not simply CO₂ fertilization. It’s a positive trend, but it doesn’t offset the loss of natural vegetation in biodiversity-rich regions like the Amazon and Indonesia. 
The gains are real, but so are the losses—and the costs of deforestation and habitat collapse are immense. 5. “Developing countries need fossil fuels to grow” Actually, many countries are leapfrogging straight to cleaner, decentralised energy systems. This isn’t charity—it’s smart economics. Clean energy brings jobs, lower energy costs, and improved health outcomes. Energy access and climate action go hand in hand.   Rogan and Peterson challenge the foundations of climate science with a blend of scepticism and conspiracy, arguing that climate action is about control, guilt, and fear. In my view, it’s the opposite. It’s about taking responsibility, not burying our heads in the sand, and using the best science, tools, and innovation we have to protect communities, ecosystems, and our future. Curious to hear what others think…


1

Embracing the journey of aging is feeling ok with being older than the actor playing Superman yet still younger than the one playing Pa Kent #Superman #DC #Movies #Aging


2

We’ve been sold a lie about innovation. For two decades, “disruption” has been held up as the highest virtue in business and tech. But when that same mindset spills into governance—valorizing speed over strategy, and contrarianism over care—we don’t get progress. We get collapse. This brilliant visual thread dismantles the myth that government should run like a startup. It’s a must-read if you care about democracy, leadership, or what we teach the next generation about impact. Let’s redefine innovation not as destruction, but as responsibility.


Craighton Berman


Some notes on "innovation" and the "Disrupted States of America"


3

This is a powerful example of how vague policy decisions can harm the people doing critical, thoughtful work. Casey’s project wasn’t about censorship—it aimed to equip students with tools to think critically about AI where they already learn: on social media and YouTube. We talk a lot about “AI literacy,” but what happens when the funding vanishes before the first lesson? This feels like another case of a system built to be “neutral” (like a keyword scan) producing real harm. The grant didn’t fail on merit—it likely failed on a Ctrl+F for “misinformation.” This is exactly why context—and thoughtful, human-centered policy—matters.


Casey Fiesler


My National Science Foundation grant on AI education was cancelled on Friday. Why? Not that I was actually told, but if I had to guess, it's because of this sentence in the grant's abstract: "Informal learning for young people increasingly takes place on social media, which is also full of myths, misconceptions, and misinformation around AI." This research is--was, I suppose--focused on interdisciplinary groups of students as learners and educators to create high-quality, accurate content about AI for the general public with a focus on the topics most important for young people. The government announcement on Friday cited blanket reasoning that "NSF will not support research with the goal of combating ‘misinformation,’ ‘disinformation,’ and ‘malinformation’ that could be used to infringe on the constitutionally protected speech rights." Which, though an incredibly important area of research that absolutely did not deserve to be cut regardless, is not what my grant is actually about. What we appear to have here is a simple word search for the word "misinformation" and without further consideration for context, the clawing back of money for research in an area that in theory is important to the current administration (AI workforce development). The loss of this funding means that this work will not happen, a PhD student will not have research funding next year, and the undergraduates who would have been involved will not get this research funding and experience. (Also, selfishly, I will not have summer salary that I was counting on.) We are also having to scramble a bit to make sure that we can pay the stipends to students that were set to be paid out at the end of the semester in two weeks, for the work that they already did on the project this semester. (They WILL be paid even if it has to come out of my own discretionary funds, though I'm hoping it doesn't come to that.) 
Feel free to share this as an example of the material impact and utter carelessness that we're seeing in these cuts to federal research funding in the U.S.
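The context-blind review Casey infers (a word search for "misinformation" with no reading of the abstract) is easy to reproduce. A minimal sketch, with one abstract quoted from the post above and one invented for contrast, shows why it misfires:

```python
# Illustrative sketch of a context-free keyword filter, the kind of
# "Ctrl+F for 'misinformation'" review suspected above. The second
# abstract is invented for contrast.

FLAG_TERMS = {"misinformation", "disinformation", "malinformation"}

def naive_flag(abstract: str) -> bool:
    """Flag an abstract if any term appears anywhere, ignoring context."""
    text = abstract.lower()
    return any(term in text for term in FLAG_TERMS)

# Sentence quoted from the cancelled grant's abstract:
ai_education = ("Informal learning for young people increasingly takes place "
                "on social media, which is also full of myths, misconceptions, "
                "and misinformation around AI.")
robotics = "We study reinforcement learning for robotic grasping."

flags = {"ai_education": naive_flag(ai_education), "robotics": naive_flag(robotics)}
```

A grant that merely mentions misinformation as context and a project aimed at combating it look identical to this filter; only a human reading for context can tell them apart, which is exactly the failure described above.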


2

We’re seeing more attempts to “fix” LLM hallucinations by making models reason more like humans—layering in memory, selectors, and even executive control. It’s impressive. But it raises a core design question: Are we building copilots to extend human reasoning—or stand-ins to replace it? This proposal is a thoughtful take on truthfulness in AI systems. But the endgame still matters. What kind of intelligence are we optimizing for—and who benefits? #AI #LLM #copilot #humaninloop #reasoning #AIethics


Lloyd Watts


I've been working on the Trillion-Dollar LLM Hallucination Problem for about a year and a half. I've noticed that the Big Tech companies (OpenAI, Google, Anthropic, Meta, Apple) have not solved the problem; in fact, they are recently reporting that the hallucination problem is getting worse as they try to advance into reasoning and planning, and both OpenAI and Anthropic are saying that they need to do further research in this area. There are many vocal critics of LLMs (Gary Marcus, Yann LeCun, Dr. Jeffrey Funk, Denis O., Richard Self.) The common refrain is that present-day LLMs are a dead end on the path to human-like intelligence, and that some kind of new architecture is needed. I agree with that. I have a dozen friends on LinkedIn who have alternative architectures that could be candidates. (Kyrtin Atreides, Vincent Granville, John Ball, Srini Pagidyala, Peter Voss, Walter C., Luciano Zorzin, Hristo Georgiev, etc.), and I am following them and cheering for them to succeed. And of course there is Yann LeCun's JEPA architecture and Fei-Fei Li's World Model, and whatever mysterious ideas are being developed by Ilya Sutskever and Mira Murati at their new Billion-Dollar startups. For me, the Trillion-Dollar Question is: Can we find a new architecture, that can build on the remarkable successes of LLMs, while overcoming their serious problems? I have found it productive to focus on the Hallucination Problem. LLMs hallucinate because they are sampled auto-regressive next-word predictors, trained on internet-scale text data. Once they are trained to predict next words very well, they do not use their training data at inference time. They are able to predict next words very well, to make syntactically correct language, but they do not explicitly remember any statements from their training data. They discard any memory of statements which could be considered true. They predict likely next words. 
They are not constrained to make true statements, and they have no way of reliably doing so. To remedy the problem, I have had to develop Hallucination Detector and Deep Attribution Networks, to allow an LLM to natively remember and refer to its training data. This is a separate, integrated subsystem that functions as a compressed memory of factual statements. And finally, an Executive Selector can override the LLM when it is veering into a hallucination, and substitute a key next word that is directly derived from the relevant true statements in the training data. The language part of the human brain (Broca's Area) is NOT the memory controller (Hippocampus), and it is not the Executive Selector (Frontal Networks). This new architecture uses an LLM for what it is good for: producing syntactically correct language. It augments the LLM with memory to get the facts right, and a Hallucination Detector and Executive Selector to satisfy the joint objectives of producing valid language and factual statements. #ai #llm
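As a rough illustration of the division of labor Lloyd describes (a fluent generator, a separate factual memory, a hallucination detector, and an executive selector that can override), here is a deliberately tiny sketch. Every name and the single stored fact are invented for illustration; this is not Lloyd's implementation.

```python
# Toy illustration of the architecture sketched above. All names,
# signatures, and data are invented stand-ins.

FACT_MEMORY = {"capital of australia": "Canberra"}  # stand-in for a memory of factual statements

def generator(topic):
    """Stand-in for an LLM: fluent and confident, but optimizes for likely, not true."""
    return "Sydney"  # a plausible-sounding answer that happens to be wrong

def hallucination_detector(topic, candidate):
    """Flag candidates that contradict a remembered fact about the topic."""
    fact = FACT_MEMORY.get(topic)
    return fact is not None and candidate != fact

def executive_selector(topic, candidate):
    """Substitute the stored fact when the detector flags a hallucination."""
    return FACT_MEMORY[topic] if hallucination_detector(topic, candidate) else candidate

topic = "capital of australia"
final = executive_selector(topic, generator(topic))  # "Canberra", not "Sydney"
```

The sketch mirrors the brain analogy in the post: the generator only produces language, a separate subsystem holds the facts, and a third component decides which of the two gets the last word.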


2

This made me laugh—and then made me nod. The “vibe coding” era is real. We’re generating more code, faster than ever—but speed alone doesn’t equal progress. 🧠 You still need strong architecture instincts 🧪 You still need real testing discipline ⚖️ And most of all, you need discernment: What not to build is just as important AI makes the entry curve feel smoother. But building useful, maintainable software? Still a craft. Anyone else feeling this tension? #AI #vibecoding #devlife #buildwithcare #softwarecraft


Rod Rivera


The more I vibe-code, the more I realize: this isn’t a tool for newbies or non-coders to replace no-code or low-code tools. It’s something much different — a powerful but highly unstable technology, best suited for expert coders. From the outside, vibe-coding can look like magic. It feels as if Ralph Wiggum from The Simpsons could suddenly build the next Google all by himself. But that’s not going to happen — not now, and maybe not ever. A few observations from my recent vibecoding journeys: • You need a strong mental model of what you’re building. Vibe-coding can generate thousands of lines of code in minutes. But is it the code you actually need? Where should the tool evolve further? To give a mental analogy: imagine that building software is like drawing a disk — and over time, your disk should steadily grow larger. With vibe-coding, you can draw very long lines very fast. But if you want a disk, you must maintain a clear mental picture, carefully guiding the tool to fill in adjacent lines until the circle emerges. • You need deep software instincts to avoid bloat and overengineering. Models are naturally eager to produce code. What should have been a simple script can easily explode into an “enterprise application” — bloated with unnecessary layers and abstractions. A trained eye quickly notices that much of the generated code is not only redundant but actively harmful to maintainability. • You need to become serious about software testing. As someone in ML, I was rarely exposed to real-world testing. I knew about testing in theory — the techniques, the best practices — but not how to implement or prioritize them. Vibecoding changed that. If you look at the QuackVerse, our open-source project at AIProduct.Engineer the number of lines dedicated to tests is now almost equal to the lines for business logic. You either learn to test well or drown in your own code. Is all this transitory? 
Some optimists say: It will never get worse than this — it can only get better. I’m not so sure. Progress in machine learning isn’t linear — it’s stepwise. You can see it in the latest reports on Llama 4: it excels at some tasks but regresses in others where earlier Llama models were strong. Improvements aren’t guaranteed across the board. Sometimes, progress stalls — for years, even decades. Think about chatbots: they were laughably bad for a long time, until suddenly they became astonishingly good. It’s entirely possible that, within our lifetime, we get stuck in a world where AI remains good enough to help, but not good enough to replace coders — and where Ralph Wiggum still can’t build the next Google on his own. In this age of vibe-coding, do you feel like Ralph Wiggum, suddenly enlightened by the ability to summon thousands of lines of code in minutes? Or has it made you realize that software development remains a deeply challenging craft — just like everyone can draw a horse, but not everyone gets to be Picasso?
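Rod's point about testing discipline can be made concrete with a small example: treat a vibe-coded helper as untrusted until assertions pin its intended behavior down. `slugify` here is a hypothetical generated function, not from QuackVerse or any real project.

```python
# Sketch: tests written against the *intent*, not against whatever the
# model happened to generate. `slugify` stands in for vibe-coded output.

import re

def slugify(title: str) -> str:
    """Lowercase the title and collapse runs of non-alphanumerics into single hyphens."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

# The contract, stated as assertions the generated code must satisfy:
assert slugify("Hello, World!") == "hello-world"
assert slugify("  --spaced--  ") == "spaced"
assert slugify("") == ""
```

If a regenerated version of the helper breaks any of these, the tests catch it; that is the "learn to test well or drown in your own code" trade-off in miniature.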


3

“Democracy erodes quietly. One workflow, one narrative, one shortcut at a time.” This list hit me hard—not just because of the political reality it points to, but because I’ve seen echoes of these patterns inside institutions meant to serve the public. If you work in civic tech, advocacy, or mission-driven operations, you’ve probably felt it too: • The pressure to silence dissenting ideas “for speed” • The creeping normalization of chaos framed as urgency • The fear of speaking up because it feels easier to just keep moving We need better systems. But even more than that, we need clarity—so that we’re not just moving fast, but moving in the right direction. Thanks Terry Szuplat


Terry Szuplat


I asked ChatGPT for 10 ways that autocrats around the world undermine democracy. Here's what it said (unedited)... "That’s a thoughtful and important question. Understanding how democracies can erode is crucial for preventing it. 1. Undermine the Free Press: Discredit independent media as “fake news,” take over public broadcasters, and intimidate journalists to limit dissent and control narratives. 2. Attack the Judiciary: Undermine judicial independence by appointing loyal judges, removing or intimidating uncooperative ones, and limiting judicial review powers. 3. Weaken Checks and Balances: Expand executive power, bypass legislatures, or co-opt them through patronage or intimidation to reduce oversight. 4. Undermine Electoral Integrity: Change election laws, gerrymander districts, limit access to voting, and use disinformation to manipulate outcomes. 5. Exploit Emergencies: Use real or manufactured crises (like terrorism or pandemics) to justify emergency powers and bypass democratic procedures. 6. Silence Dissent: Criminalize protest, restrict civil society organizations, monitor citizens, and use police or intelligence services to harass opponents. 7. Co-opt or Crush the Opposition: Buy off political opponents, ban parties, or jail opposition leaders on dubious charges. 8. Use Nationalism and Scapegoating: Stir up ethnic, religious, or cultural divisions to unite supporters and justify repressive measures. 9. Undermine Truth and Facts: Flood public discourse with disinformation and conspiracy theories to erode trust in institutions and shared reality. 10. Extend Rule Indefinitely: Eliminate term limits, rig referendums, or rewrite constitutions to stay in power indefinitely. These steps don’t always happen in the same order, and sometimes they unfold gradually. Would you like examples of countries where some of these steps have been observed?"


7

We need to talk about Sarah. This isn’t just a story about cheating—it’s a story about disengagement, disillusionment, and design failure. Sarah used ChatGPT to speed through assignments she didn’t value, in a system that didn’t value her learning. Professors, overwhelmed and under-equipped, graded her ability to simulate effort—not engage deeply. And the scary part? She’s not an outlier—she’s a bellwether. The real crisis isn’t AI. It’s what AI reveals. • Students optimizing for survival, not mastery. • Faculty grading performance theater, not growth. • Institutions that reward efficiency over inquiry. • A system where cheating is indistinguishable from coping. We built an education system that rewards outputs, and then we act surprised when students find faster ways to produce them. Sarah didn’t break the rules—she followed the incentives. If we want to fix this, we need to stop treating AI as a threat to education and start treating it as a mirror. Tagging educators, designers, and civic builders: How might we reimagine learning systems that earn trust, restore meaning, and center real work in an age of simulation? #AI #Education #LearningDesign #Assessment #HigherEd #CivicTech #DigitalTrust

Luiza Jarovsky


🚨 I've been hearing from university professors that ChatGPT is completely destroying learning and that we'll soon have an AI-powered semi-illiterate workforce. Here's a glimpse into the educational apocalypse: "Sarah, a freshman at Wilfrid Laurier University in Ontario, said she first used ChatGPT to cheat during the spring semester of her final year of high school. (...) After getting acquainted with the chatbot, Sarah used it for all her classes: Indigenous studies, law, English, and a “hippie farming class” called Green Industries. “My grades were amazing,” she said. “It changed my life.” Sarah continued to use AI when she started college this past fall. Why wouldn’t she? Rarely did she sit in class and not see other students’ laptops open to ChatGPT. Toward the end of the semester, she began to think she might be dependent on the website. She already considered herself addicted to TikTok, Instagram, Snapchat, and Reddit, where she writes under the username maybeimnotsmart. “I spend so much time on TikTok,” she said. “Hours and hours, until my eyes start hurting, which makes it hard to plan and do my schoolwork. With ChatGPT, I can write an essay in two hours that normally takes 12.” - "By November, Williams estimated that at least half of his students were using AI to write their papers. Attempts at accountability were pointless. Williams had no faith in AI detectors, and the professor teaching the class instructed him not to fail individual papers, even the clearly AI-smoothed ones. “Every time I brought it up with the professor, I got the sense he was underestimating the power of ChatGPT, and the departmental stance was, ‘Well, it’s a slippery slope, and we can’t really prove they’re using AI,’” Williams said.
“I was told to grade based on what the essay would’ve gotten if it were a ‘true attempt at a paper.’ So I was grading people on their ability to use ChatGPT.” - It would be great to hear more from teachers and educators to understand what might be potential solutions. I'm also adding below the essay I published today about the hidden dangers of AI chatbots.


7

Is AI at risk of consuming itself? A recent TechCrunch article highlights a pressing issue: AI models trained on their own generated data can experience “model collapse,” leading to a decline in performance and originality. This isn’t just a technical concern—it has profound implications for how we build and rely on AI systems in civic and public sectors. As we integrate AI into public services, ensuring the integrity and reliability of these models becomes paramount. We must ask: Are we prioritizing short-term efficiency over long-term sustainability? Let’s discuss how we can implement strategies to prevent such pitfalls and promote responsible AI development. https://lnkd.in/eq7ubtYG #AI #ModelCollapse #CivicTech #ResponsibleAI #PublicSectorInnovation
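The feedback loop behind "model collapse" can be simulated with no machine learning at all. In this toy sketch the "model" is just a Gaussian repeatedly re-fit to its own samples; with small sample sizes, estimation error compounds across generations and the distribution's spread collapses toward zero. All parameters are arbitrary choices for illustration.

```python
# Toy simulation of model collapse: a "model" (here, a plain Gaussian)
# trained on each previous generation's output loses diversity over time.

import random
import statistics

random.seed(0)

def fit_and_resample(mean, stdev, n=10):
    """Draw n samples from the current model, then re-fit the model to them."""
    samples = [random.gauss(mean, stdev) for _ in range(n)]
    return statistics.mean(samples), statistics.stdev(samples)

mean, stdev = 0.0, 1.0          # generation-0 model
history = [stdev]
for generation in range(200):   # each generation trains on the previous one's output
    mean, stdev = fit_and_resample(mean, stdev)
    history.append(stdev)

# With n this small, the re-estimated spread drifts toward zero: later
# generations produce ever more uniform, less original samples.
```

Real collapse in LLMs involves far more machinery, but the core mechanism is the same: each re-training step slightly underrepresents the tails of the previous distribution, and the loss compounds.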


4

I’ve been prototyping similar AI workflows—and this prompt design hits the sweet spot between coaching and systems thinking. What I appreciate most about Liz Bradford’s approach is that it respects the user’s agency. It doesn’t prescribe; it provokes. That’s the difference between AI-as-tool vs. AI-as-gimmick. The right prompt isn’t a shortcut—it’s a compass. If you're exploring reflective design, personal systems, or augmented coaching practices, this is worth your time.

Liz Bradford


I tested AI coaching for 30 days. The results shocked me. As the founder of a peak performance coaching community, I'm about to share something that might put me out of business. It’s a topic we debate plenty in the Transform Perform Coach team chat. But the truth is, some simple prompts, with an LLM, can help you level up in life. Start with this prompt, courtesy of Zain Kahn's Superhuman newsletter (link in comments👇). Prompt: "Act as a purpose-driven business coach. Help me discover my Ikigai — the intersection of what I love, what I’m good at, what the world needs, and what I can be paid for — and guide me in turning it into a business or career path. Start by asking me reflective questions in each of the four Ikigai dimensions to uncover my passions, strengths, marketable skills, and potential value to others. Once we’ve mapped those, suggest business ideas or career directions that align with my Ikigai. For each suggestion, explain how it connects to the four elements and provide examples of real-world businesses or roles that reflect similar paths. Include prompts for journaling, self-assessment tools, and small experiments I can run to test alignment. End with a one-page Ikigai roadmap summarizing my core themes, possible paths, and first action steps toward building a purpose-driven life or business." This simple prompt can help you to... Discover your purpose through these core elements: 👉 Align your passions with marketable skills 👉 Identify where your talents meet global needs 👉 Test small experiments in your chosen direction 👉 Track insights through guided self-reflection Transform your career path: 👉 Start with 15 minutes of daily journaling 👉 Map your strengths against market opportunities 👉 Create micro-projects to validate your direction 👉 Build evidence through real-world testing So why would I encourage you to start experimenting with AI coaching in this way?
The ultimate paradox of coaching: ✨ My role is to help you not need me ✨ Your success means becoming your own guide ✨ True growth happens when you lead yourself So, are you ready to begin? ♻️ Share this with someone who needs an empowering high five 👉 Follow Liz Bradford for insights to boost your wellbeing, career and augment your business.


8

Is AI transforming the way we think about ethics in design? In the age of rapid technological advancement, AI offers incredible potential for innovation. However, with this power comes a responsibility to integrate ethical considerations into our design processes. We must ask ourselves: How do we ensure AI-driven tools respect user privacy? How do we avoid biases in algorithms that could perpetuate inequities? At SWARM, we're constantly exploring questions like this. We believe it's not just about building smarter tools but creating solutions that empower and respect every individual. How are you integrating ethical practices into your work with AI? Let's discuss how together we can create a more inclusive tech culture.


          7

          "We celebrate shipping features, not stitching systems." That line hit hard. Too often, product and ops teams are stuck in parallel conversations—each solving for their own world. But most breakdowns don’t happen inside the feature. They happen in the gaps between them. Service design isn’t just a method—it’s a mindset that asks, "What are we missing?" and "Who else is part of this?"


Service design thinking aligns people, processes, and systems. We agree with Aishwarya Dwivedi that great products don’t just come from designing good features. She argues they come from understanding and designing the whole system behind them. Many teams focus too much on the interface and overlook the people, tools, and processes that make everything work. Service design thinking helps teams spot problems early, avoid launch chaos, and build smoother user experiences. Check out her article: https://lnkd.in/gzWPDUfD

To address these challenges, she encourages teams to:
→ Map the entire user journey, including behind-the-scenes processes.
→ Identify all stakeholders and systems involved at each step.
→ Recognize data dependencies and potential failure points.

This approach helps product teams spot issues early and build smoother user experiences.

💬 We asked Aishwarya why she wrote the article: “I still remember the moment it happened. We had just launched a feature when a critical bug surfaced, and the PM on my team couldn’t solve it because he only understood his piece of the puzzle. That’s when I saw it clearly: in the name of efficiency, most teams work in silos. I’ve seen it in companies I’ve been part of, and I’ve heard the same from friends and people I’ve coached. We celebrate shipping features, not stitching systems. That’s when I realized the real need to understand how the entire system works. And for me, service blueprints became more than just a framework—they became a way to see the whole picture.”

Love it. Helio helps you test ideas with users early so that you can spot problems before launch. It connects user feedback and UX metrics to the experience, not just the interface, making it easier to build great products.

#uxresearch #productdiscovery #marketresearch #productdesign


            7

            “Not everything that can be built should be.” In both education and civic tech, there’s often pressure to ship fast and prove impact through output. But what actually builds trust, longevity, and clarity isn’t speed—it’s discernment. This piece is a sharp reminder that creative direction isn’t just about aesthetics. It’s about knowing what not to ship. What to protect. When to pause. As AI continues to commodify execution, brand—and by extension, vision—is becoming the last real moat. That makes the creative director not a “nice-to-have,” but a necessary strategist for teams who want to build with meaning.


            R.J. Abbott


PSA to Tech Founders + VCs: The Creative Director Is the Most Underrated Hire in Tech. In a world where AI has commoditized execution, taste is the new growth lever. And the Creative Director? They’re your moat. Not the prompt engineer. Not the growth hacker. Not the product manager. The creative director. In an age of AI, they’re no longer a “nice-to-have.” They’re as critical to your business as your CTO or Head of Product. Because AI didn’t just change how we work. It commoditized it. Design. Code. Strategy. Content. Automated. Outsourced. Replicated. In a world where software is infinite, brand is your final moat. And the creative director is the architect of that moat.

⸻

Welcome to the era of the Creative Polymath. Not the designer with the best Figma shortcuts. Not the marketer chasing shallow engagement and fleeting trends. But the one who:
• Thinks like a director
• Writes like a screenwriter
• Imagines like a set designer
• Consumes film, architecture, fashion, subcultures…
• And knows what feels iconic, long before the data proves it.
They don’t just make decks. They design new realities.

⸻

Because in a world where anyone can generate content… the value isn’t in creation anymore. It’s in the curation. And that requires impeccable taste. Taste you can’t teach with mood boards, dashboards or spreadsheets. This is why the right creative director is no longer a cost center. They’re your cultural strategist, your growth multiplier, your maker of meaning, your 10x creative engineer.

⸻

The most overlooked strategy in tech? Hiring someone who innately knows what not to ship. Because that’s the creative director’s superpower. Discernment. They will be the soul of your story. The architect of your world. The person your CEO will turn to and ask: “Do you believe in this?”

⸻

And here’s the shift no one is prepared for: As AI levels the playing field, brand becomes the business strategy. Not simply a function of it. And brand won’t be led by committees or CMOs obsessed with data and alignment. It will be led by creatives with vision. With taste. With range. With teeth.

⸻

So if you’re building the next great company… Don’t just hire someone who can make it look good. Hire someone who can make it feel inevitable. Because in the coming decade, your brand will be the only thing they can’t copy. And the creative director? They’ll be the one designing what can’t be commoditized.

⸻

Investors hunt for visionary founders. Startups chase engineering unicorns. But if you’re not also hunting for a creative savant and world-builder, you’re building a castle with no walls. Read that last part again.

⸻

My advice: take a note from David Carradine’s 1970s hit show, Kung Fu. Wander the earth. Find yours. Create your moat. Because in the decade ahead, taste will build empires.


              11

“Master AI or get replaced.” Aakash Gupta is right—the gap is widening fast. But what we master—and why—matters just as much. Yes, product managers need fluency. Yes, AI will reshape the work. But if we only teach people to optimize for outputs, we’ll miss the bigger shift.

At SWARM, we’re working with community colleges and civic teams to build AI literacy that’s grounded in purpose:
• How to use AI to understand systems
• How to design with (not just for) communities
• How to turn data into meaningful stories

Because the future isn’t just AI-native. It has to be ethics-native, context-native, equity-native too. The next 12 months won’t just separate AI-masters from AI-users. They’ll separate builders of extractive tools from builders of trustworthy systems.


              Aakash Gupta


Some of the smartest CEOs we know are saying it: Master AI… or get replaced. So here’s how. BTW, it’s not just CEOs. I’ve been talking to top product leaders, and they have the same expectations of their PMs. And here’s the thing: the gulf between those who just use AI vs those who are masterfully using it widens every day. To keep you on the right side of the curve, I put together a full roadmap.

PART 1 - USING AI AS A PM
1. The AI PM's Playbook: How Top Product Managers Are 10x-ing Their Impact in 2025 - https://lnkd.in/ef8TkEdQ
2. The Ultimate Guide to AI Prototyping - https://lnkd.in/eJujDhBV
3. Tutorial of Top 5 AI Prototyping Tools With Colin Matthews - https://lnkd.in/eit9MnaR
4. How to Use ChatGPT as a PM - https://lnkd.in/eyfUi8ZN
5. We Built an AI Product Manager in 58 mins (Claude, ChatGPT, Loom + Notion AI) - https://lnkd.in/eAZZ8MSb
6. We Built an AI Agent to Automate PM in 73 mins (ZERO CODING) with Anthony Maggio - https://lnkd.in/ebF8hhvx
7. How to PM with AI at Early Stage Startups with Sergio Pereira - https://lnkd.in/eWt7UciA

PART 2 - AI PM SKILLS
1. Complete Course: AI Product Management With Paweł Huryn - https://lnkd.in/eN2_-6xi
2. Use AI To Draft Your Email and Slack Messages with MCP - https://lnkd.in/ejMMjAAV
3. 21 Harsh Truths about Product Management in AI with Scott Jones - https://lnkd.in/esYZcN7N

PART 3 - BUILDING AI PRODUCTS
1. How to Build an AI Startup - https://lnkd.in/e4MaHeKM
2. Most People are Building AI Products Wrong - Here's How to do it Right - https://lnkd.in/eDGmsvZ5
3. Product Management in the Era of AI with Sam Kawsarani - https://lnkd.in/eGjTrvQe

PART 4 - BECOMING AN AI PM
1. How to Become an AI Product Manager with No Experience - https://lnkd.in/e85tyhxy
2. How to become an AI Product Manager - https://lnkd.in/emUdezbS
3. How to Write a Killer AI Product Manager Resume (With Examples) - https://lnkd.in/ehAbcRGy
4. How to Become an AI PM With Marily Nika, Ph.D - https://lnkd.in/eid66mjS
5. 2025 AI PM Job Search Masterclass with Dr. Nancy Li - https://lnkd.in/eqqMD_Ne

The next 12 months will separate the AI-masters from the AI-users. Which will you be?


                10

UC Berkeley saw a 1,106% increase in CS majors from 2011 to 2021. MIT, Caltech, and Cornell weren’t far behind. Bootcamps promised six-figure salaries in 12 weeks. Policymakers warned of a tech talent shortage. And yet:
📉 There are fewer software developers employed in the U.S. today than in 2018.
🎓 Even top CS grads from Berkeley report struggling to find jobs.
🏚️ Dozens of bootcamps have quietly closed their doors.
This isn’t just about tech. It’s about how easily we reduce complex challenges to simplistic solutions. “Learn to code” was never just advice. It was an ideology.


                15

                I’m hosting a free webinar to share more about my courses on UX and Human-Centered Design at Kingsborough Community College. If you’re curious about the field—or know someone who might be—feel free to join or pass it along!


                  9

