Get the Linkedin stats of Nate Cooper and many LinkedIn Influencers by Taplio.
I help brands transform ideas into digital products and e-commerce platforms using human-centered design that drives real business results.

So many ambitious e-commerce ideas start out as innovative ways to serve customers but end up as just another website. Our approach to digital product creation considers human needs, not just a few customer touchpoints, to create a customer-centric experience that drives results. We partner with our clients to truly understand how a digital product fits their business goals and their customers' experience, helping them create features, products, and user experiences that increase engagement, traffic, and sales.

We pay attention to every aspect of the digital experience. Whether it's making sure a site complies with the ADA, building a new mobile app, or customizing an e-learning platform for your business, we have you covered. You won't end up with just another shiny new object, either: we'll work with you to make sure the project grows your sales and users and exceeds your business objectives. We've helped hundreds of businesses with development and digital product experience, navigating challenging technical hurdles and building repeatable, seamless processes for selling online.

CLIENT RESULTS:
+ Clinton Health Access Initiative - Helped this leading nonprofit build a responsive, interactive site that meets WCAG international accessibility standards, and provided the ongoing support the organization needed to make sure the new site met its objectives.
+ People TV - Improved People Magazine's video delivery website, increasing time on site and reducing bounce rates by optimizing the WordPress installation.
+ Stone Breaker - Helped this boutique glove company modernize its e-commerce site on Shopify and support a pivot into PPE supplies in the EU market.

WHAT CLIENTS ARE SAYING:
"If your organization is new to mobile development, work with SWARM to determine your needs. They can help you build out your mobile project and meet your goals in an agile and iterative manner." - Mobile Product Manager @ a Data Insights Company
"We were probably at 100,000 downloads before [working with SWARM] and now we're at 400,000." - CEO @ an App Startup
"As someone unfamiliar with the mobile world it was nice to have SWARM help guide our strategy." - Founder @ Admitted.ly

If you're interested in chatting about a project, send me a message and let's find a time to talk.
Check out Nate Cooper's verified LinkedIn stats (last 30 days)
The first rung is breaking—and we’re not just talking about internships. This piece hits hard. As someone who works at the intersection of design, tech, and education, I see the impact of this erosion every day. My students—many first-gen, many navigating systemic barriers—are doing the hard work. But the systems meant to connect talent to opportunity? They’re brittle, fragmented, often inaccessible. It’s not just a pipeline problem—it’s an infrastructure problem. At SWARM, we’ve been rethinking what it means to build readiness. Not just for students, but for the orgs that serve them. Because when your backend sends the wrong signals—unclear roles, missing data, duct-taped tech—you risk being overlooked, no matter how powerful your mission. We can’t afford to treat this as an individual hustle story. We need new scaffolding—shared, stable, human-centered—to support the next generation’s climb.
Lucia Velasco
‘If entry-level roles evaporate, those lacking elite networks or privileged backgrounds will face even steeper barriers to finding their footing in the workplace. Plus, the fallout from large-scale economic shifts ripples through entire communities. When manufacturing jobs vanished across America’s heartland, the result wasn’t just lost income but also social and political upheaval.’ Worth reading this op-ed today by Aneesh Raman in The New York Times
We’ve been sold a lie about innovation. For two decades, “disruption” has been held up as the highest virtue in business and tech. But when that same mindset spills into governance—valorizing speed over strategy, and contrarianism over care—we don’t get progress. We get collapse. This brilliant visual thread dismantles the myth that government should run like a startup. It’s a must-read if you care about democracy, leadership, or what we teach the next generation about impact. Let’s redefine innovation not as destruction, but as responsibility.
Craighton Berman
Some notes on "innovation" and the "Disrupted States of America"
The future won’t reward the loudest or the fastest—it will reward the ones who notice what others overlook. In a world of infinite content and AI-generated sameness, the irrational act of caring deeply is no longer fluff. It's infrastructure. Designers who obsess over the friction, the silence, the felt sense of coherence? They’re not going anywhere. If anything—they’re finally being revealed. 🧠💥 Curious: What’s the last product that made you feel something? #DesignMatters #EmotionalDesign #AIandCreativity #FutureOfWork #HumanCentered #CraftNotScale #DesignThinking #UXDesign
R.J. Abbott
Everyone’s scared AI will replace designers. Wrong. It’s going to expose them. Brutally. The next era won’t belong to the fastest, cheapest, or most scalable. It will belong to the most obsessive. The most irrational. And dare I say, the most human. And here’s what the AI bros will never understand… At Stripe Sessions, Jony Ive told a story about something as simple as an iPhone cable. When someone opens that Apple box, pulls out the cable, and notices the way it’s designed, something primal happens: “Damn. Somebody gave a damn about me.” That moment? It’s not packaging. It’s not design. ⸻ It’s what I would call emotional transference at scale. It’s care. But in the AI age, it’s the last true moat left. ⸻ In a world where anyone can produce anything. Design, video, products, campaigns, all done in seconds. The real battleground is no longer what you make. It’s how you make people feel. This is the part no one’s talking about. This is where AI won’t flatten the field, it will simply widen the gap. ⸻ AI will annihilate mediocre work. It will call the bluff on anyone who never gave a damn in the first place. But the irrational? The hand-crafted? The meticulously obsessed? They’re the designers that will become more valuable than ever. That last 5%? Those invisible moments no spreadsheet can measure, no algorithm can predict? That’s where cult brands are built. That’s where belonging is born. That’s where you turn a product into a belief system. ⸻ In the AI era: Care is the only thing that scales. Craft is the only thing that cuts through. And belonging is the only thing that sticks. No machine can replicate it. No shortcut can fake it. No competitor can copy it. ⸻ So here’s your rallying cry: Designers. Founders. Creators. This is your moment. It will not come again. Double down on the details. Obsess over the unseen. Make them FEEL it. Because the future won’t belong to the fastest. Or the cheapest. Or the ones who chase scale. 
It will belong to the ones who care so much, it’s irrational. Because no AI will ever out-care you. That’s your moat. That’s your edge. That’s your moment. ⸻ Design for humans. Stop building brands. Start building cults. 🙏
Earlier this month we hired Erika Hall to do a workshop with SWARM to help us become better communicators—especially in how we deliver value to our clients. Weeks later, our team is still referencing "like in the Erika workshop" when discussing how we deliver work. It was completely transformational in how the team values its own work and how they see what they bring into each client engagement. Erika is a seasoned pro at understanding the value of design and, just as importantly, at explaining and communicating that value to clients. Working with Erika really helped us break through and operate better as a design team. Thanks to Audrey Crane for connecting us! If you are a designer or design lead who feels the frustration of having to validate your work only to feel every time like it's not really landing—Erika can give you the language and confidence to turn it around and re-orient around the value you bring.
Erika Hall
Stop wasting time doing research that gets ignored. Find out how to pick the right tools and techniques. Help your organization learn more with less effort. Details and sign-up: https://lnkd.in/geU_xdr4 Whether you're just starting out or an experienced specialist, this will help.
"The structure is falling down." Not just because of who’s in charge—but because the systems can’t hold their weight anymore. This was one of the best conversations I’ve heard in a while on what’s really behind the global slide into spectacle politics, strongmen, and executive overreach. The diagnosis? Politics has become performative because our institutions are frozen, slow, and hollowed out by decades of overreach and underinvestment. Rory Stewart’s call for patience, humility, and better governance—not just better politicians—is one worth hearing. Highly recommend this episode if you’re wrestling with where this moment is heading. Relevant clip: https://lnkd.in/e36526r7 Full Episode: https://lnkd.in/ea_DkrmU #geopolitics #democracy #institutions #jonstewart #rorystewart
🔍 AI detection isn’t a solution—it’s a stall. The real opportunity? Redesign assessments so students want to engage honestly. This post nails it: ✅ AI tools are here to stay ✅ Detection is unreliable ✅ The answer is better design, not better surveillance Curious how others are thinking about this shift in civic, public, or higher ed spaces. Who’s rethinking the assignment, not just the tech? #edtech #aiineducation #assessmentdesign #systemschange
Sohan Choudhury
A tech director emailed me last week: “Our English chair wants a tool that flags AI-written essays. What should we buy?” My answer? Buy nothing. ❌ The fix isn’t better policing—it’s assessment design. With Flint, schools are shifting from “catch the cheat” to “make cheating pointless”: 🎙️ Live AI dialogues. An essay prompt becomes a real-time conversation where students defend their ideas on the spot. 🧠 Voice reflections. Students explain their thinking verbally; AI probes deeper. Hard to outsource, easy to grade. 🤔 Process portfolios. Drafts, revision notes, and AI coaching all count toward the mark, rewarding growth over copy-paste. Our partner schools report lower plagiarism anxiety, higher engagement, and way less whack-a-mole with detection tools. Detection is a dead-end race; redesign is the opportunity. How is your school tackling AI cheating? #ai #edtech #aiedtech #flint
Not all tariffs are created equal. Bernie’s not wrong: used strategically, tariffs can protect workers and industries. But blanket, erratic trade policies aren’t strategy—they’re signals of a deeper problem: governing without a clear vision. This part stood out: “Selective utilization of tariffs to protect American workers and American industry is a good thing. But that is very different than coming up with some crazy proposal based on nothing… imposing arbitrary tariffs on virtually every country on Earth.” Economic tools aren’t the problem. Misuse is. And so is a leadership vacuum that treats disruption as direction. We don’t just need better tactics—we need a better map.
We don’t need a slower AI revolution. We need a smarter, more accountable one. Cassidy’s piece in The New Yorker hit hard. Not because it’s alarmist—but because it maps out the real civic risks of letting incentive structures—not human values—drive the next wave of tech deployment. As someone who works at the intersection of UX, public infrastructure, and workforce readiness, I’m less worried about the tools than the scaffolding we (fail to) build around them. A few key principles I keep coming back to: • Augment, don’t automate. Especially in public and mission-driven sectors, tech should deepen trust—not shortcut it. • Design for civic resilience. Democracy is a UX problem. AI can help—but only if we build interfaces that center participation, transparency, and equity. • Change the incentives, not just the tools. Procurement, accountability, and governance matter more than flash. We’re not powerless in this transition. But we are responsible for how we frame it—and who we bring with us. Would love to hear how others are approaching this moment. How to Survive the A.I. Revolution
🔥 This is the kind of clarity we need more of in AI discourse—especially from folks who actually understand how these systems function. Too many marketers and product teams still treat LLMs like search engines with ranking algorithms, when in reality they’re predictive summarization engines pulling from retrieved content. Kristine Schachinger nails the key point: if you want your content to surface, don’t game nonexistent “ranking signals”—just structure it well, write it clearly, and focus on being part of the knowledge the models draw from. #LLMs #SEO #AIliteracy #StructuredData #Copilot #ContentStrategy
Kristine Schachinger
I really wish people in my industry would learn how large language models work so they stop putting out misinformation about the supposed search function. You do not rank in a large language model; there is no such thing as a ranking signal in a large language model. Every large language model has a search engine that it uses to bring back documents TO SUMMARIZE (THEY ARE SUMMARIZATION ENGINES) when you put in a supposed query, to help ground the large language model so it doesn't write things that are fictitious. It will still write things that are fictitious. It is still a predictive text model. Yes, that's a simplification, and there's a lot more that goes into it, but the model doesn't hold a bunch of documents itself; the search engine brings back the documents. Also, generative AI search (again, not really search; it's a summarization engine) is less than 1% of all traffic to websites, so it's not something you really have to stress about. Do your SEO well so that you're one of the documents it summarizes from, and hopefully it pulls your link as a citation. That does not mean it will always pull your link as a citation, because it can pull a hundred documents or more to summarize from, but that is your best way to appear in the summarization engine. Oh, and yes, you need structured data, because that is how OpenAI will pull products into its new product engine, and Bing uses it for its generative AI. Bing also feeds Copilot and OpenAI.
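The retrieve-then-summarize flow described in the post can be sketched in a few lines. Everything here is a toy stand-in (the corpus, the keyword-overlap scorer, and the stitched-together "summary"), not any vendor's actual API; real systems use a production search index and a generative model in place of these stubs:

```python
# Minimal sketch of "grounding": retrieve documents first, then summarize
# from them and cite the sources. All components are toy stand-ins.

def retrieve(query, corpus, k=2):
    """Rank documents by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def summarize_with_citations(query, corpus):
    """Ground the output in retrieved documents and cite their indices."""
    docs = retrieve(query, corpus)
    # A real LLM would generate text conditioned on these documents;
    # stitching them together just shows where citations come from.
    summary = " ".join(docs)
    citations = [corpus.index(d) for d in docs]
    return summary, citations

corpus = [
    "Structured data helps engines understand product pages.",
    "LLMs predict the next token from training data.",
    "Good SEO makes your page retrievable by search engines.",
]
summary, cites = summarize_with_citations("how does SEO help search engines", corpus)
print(cites)  # the SEO document wins the retrieval step
```

The point the sketch makes is the same as the post's: there is no "ranking signal" inside the model; being retrievable (good SEO, clear structure) is what gets a document into the summarization step at all.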
In today’s funding landscape, your digital presence isn’t just a brochure—it’s your frontline ambassador. An outdated or inefficient website can silently erode trust, confuse stakeholders, and even disqualify you from critical funding opportunities. Consider these real-world examples
We’re seeing more attempts to “fix” LLM hallucinations by making models reason more like humans—layering in memory, selectors, and even executive control. It’s impressive. But it raises a core design question: Are we building copilots to extend human reasoning—or stand-ins to replace it? This proposal is a thoughtful take on truthfulness in AI systems. But the endgame still matters. What kind of intelligence are we optimizing for—and who benefits? #AI #LLM #copilot #humaninloop #reasoning #AIethics
Lloyd Watts
I've been working on the Trillion-Dollar LLM Hallucination Problem for about a year and a half. I've noticed that the Big Tech companies (OpenAI, Google, Anthropic, Meta, Apple) have not solved the problem; in fact, they are recently reporting that the hallucination problem is getting worse as they try to advance into reasoning and planning, and both OpenAI and Anthropic are saying that they need to do further research in this area. There are many vocal critics of LLMs (Gary Marcus, Yann LeCun, Dr. Jeffrey Funk, Denis O., Richard Self.) The common refrain is that present-day LLMs are a dead end on the path to human-like intelligence, and that some kind of new architecture is needed. I agree with that. I have a dozen friends on LinkedIn who have alternative architectures that could be candidates. (Kyrtin Atreides, Vincent Granville, John Ball, Srini Pagidyala, Peter Voss, Walter C., Luciano Zorzin, Hristo Georgiev, etc.), and I am following them and cheering for them to succeed. And of course there is Yann LeCun's JEPA architecture and Fei-Fei Li's World Model, and whatever mysterious ideas are being developed by Ilya Sutskever and Mira Murati at their new Billion-Dollar startups. For me, the Trillion-Dollar Question is: Can we find a new architecture, that can build on the remarkable successes of LLMs, while overcoming their serious problems? I have found it productive to focus on the Hallucination Problem. LLMs hallucinate because they are sampled auto-regressive next-word predictors, trained on internet-scale text data. Once they are trained to predict next words very well, they do not use their training data at inference time. They are able to predict next words very well, to make syntactically correct language, but they do not explicitly remember any statements from their training data. They discard any memory of statements which could be considered true. They predict likely next words. 
They are not constrained to make true statements, and they have no way of reliably doing so. To remedy the problem, I have had to develop Hallucination Detector and Deep Attribution Networks, to allow an LLM to natively remember and refer to its training data. This is a separate, integrated subsystem that functions as a compressed memory of factual statements. And finally, an Executive Selector can override the LLM when it is veering into a hallucination, and substitute a key next word that is directly derived from the relevant true statements in the training data. The language part of the human brain (Broca's Area) is NOT the memory controller (Hippocampus), and it is not the Executive Selector (Frontal Networks). This new architecture uses an LLM for what it is good for: producing syntactically correct language. It augments the LLM with memory to get the facts right, and a Hallucination Detector and Executive Selector to satisfy the joint objectives of producing valid language and factual statements. #ai #llm
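The pattern the post describes (an LLM for fluent language, a separate fact memory, and an executive selector that overrides hallucinated continuations) can be sketched abstractly. This is a hypothetical illustration of the general idea, not Lloyd Watts's actual architecture or code; the fact store, proposal model, and override rule are all toy stand-ins:

```python
# Hypothetical sketch of the "LLM + fact memory + executive selector"
# pattern. All three components are toy stand-ins for illustration only.

FACT_MEMORY = {
    # (subject, relation) -> factual object, playing the role of a
    # compressed memory of true statements from the training data.
    ("water", "boils at"): "100C",
    ("earth", "orbits"): "the sun",
}

def llm_propose(subject, relation):
    """Stand-in for a next-word predictor: fluent but not fact-constrained."""
    guesses = {("water", "boils at"): "90C"}  # a plausible-sounding error
    return guesses.get((subject, relation), "something")

def executive_select(subject, relation):
    """Override the LLM's proposal when it contradicts stored facts."""
    proposed = llm_propose(subject, relation)
    known = FACT_MEMORY.get((subject, relation))
    if known is not None and proposed != known:
        return known, True   # hallucination detected, fact substituted
    return proposed, False

print(executive_select("water", "boils at"))  # ('100C', True)
```

The division of labor mirrors the brain analogy in the post: the predictor produces fluent candidates, the memory stores what is true, and the selector arbitrates between them.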
We need to talk about Sarah. This isn’t just a story about cheating—it’s a story about disengagement, disillusionment, and design failure. Sarah used ChatGPT to speed through assignments she didn’t value, in a system that didn’t value her learning. Professors, overwhelmed and under-equipped, graded her ability to simulate effort—not engage deeply. And the scary part? She’s not an outlier—she’s a bellwether. The real crisis isn’t AI. It’s what AI reveals.
• Students optimizing for survival, not mastery.
• Faculty grading performance theater, not growth.
• Institutions that reward efficiency over inquiry.
• A system where cheating is indistinguishable from coping.
We built an education system that rewards outputs, then we act surprised when students find faster ways to produce them. Sarah didn’t break the rules—she followed the incentives. If we want to fix this, we need to stop treating AI as a threat to education and start treating it as a mirror. Tagging educators, designers, and civic builders: How might we reimagine learning systems that earn trust, restore meaning, and center real work in an age of simulation? #AI #Education #LearningDesign #Assessment #HigherEd #CivicTech #DigitalTrust
Luiza Jarovsky
🚨 I've been hearing from university professors that ChatGPT is completely destroying learning and that we'll soon have an AI-powered semi-illiterate workforce. Here's a glimpse into the educational apocalypse: "Sarah, a freshman at Wilfrid Laurier University in Ontario, said she first used ChatGPT to cheat during the spring semester of her final year of high school. (...) After getting acquainted with the chatbot, Sarah used it for all her classes: Indigenous studies, law, English, and a “hippie farming class” called Green Industries. “My grades were amazing,” she said. “It changed my life.” Sarah continued to use AI when she started college this past fall. Why wouldn’t she? Rarely did she sit in class and not see other students’ laptops open to ChatGPT. Toward the end of the semester, she began to think she might be dependent on the website. She already considered herself addicted to TikTok, Instagram, Snapchat, and Reddit, where she writes under the username maybeimnotsmart. “I spend so much time on TikTok,” she said. “Hours and hours, until my eyes start hurting, which makes it hard to plan and do my schoolwork. With ChatGPT, I can write an essay in two hours that normally takes 12.” - "By November, Williams estimated that at least half of his students were using AI to write their papers. Attempts at accountability were pointless. Williams had no faith in AI detectors, and the professor teaching the class instructed him not to fail individual papers, even the clearly AI-smoothed ones. “Every time I brought it up with the professor, I got the sense he was underestimating the power of ChatGPT, and the departmental stance was, ‘Well, it’s a slippery slope, and we can’t really prove they’re using AI,’” Williams said. 
“I was told to grade based on what the essay would’ve gotten if it were a ‘true attempt at a paper.’ So I was grading people on their ability to use ChatGPT.” - It would be great to hear more from teachers and educators to understand what might be potential solutions. I'm also adding below the essay I published today about the hidden dangers of AI chatbots.
Most nonprofit tech stacks weren’t designed. They were inherited. Duct-taped. Built under pressure and patched on the fly. But funders are starting to notice the cracks—laggy reporting, duplicated data, impact that’s hard to trace. And in 2025, that’s not just a headache. It’s a risk to your next grant. We pulled together 3 quiet signs your tech stack might be quietly working against you—plus what others did to turn it around. → Is your backend helping you win funding—or quietly sinking the pitch? 👇 Does this feel familiar? #NonprofitOps #GrantReadiness #TechForGood #ScrappyStewardship #DigitalInfrastructure #TrustIsInfrastructure
This is a great post from Emanuele Fabbiani pushing back on the “consulting is dead” narrative. Yes, AI is rewriting the rulebook—but let’s not confuse automation with expertise. If your consulting firm’s entire value prop is a prettier deck of common knowledge, AI should eat your lunch. But real consulting isn’t just writing slides—it’s knowing which slides matter, for whom, when, and why. AI can draft. But it can’t yet: • Read a boardroom’s mood • Diagnose a political bottleneck • Reframe the problem upstream of the prompt • Spot when the wrong metric is driving the conversation The future belongs to consultants who can blend machine precision with human perspective—not those who pretend the work is just prettier prose.
Emanuele Fabbiani
No, it is not. And here's why. 👇 🔴 We’re far from a world where AI can fully handle creative work on its own. Human input is still essential to craft ideas, review outputs, and guide AI decisions. But some things will change. 🟢 Repetitive tasks will be automated. 🟢 Creative work will become more efficient. 🟢 The focus will shift from "how" to do a task, to "what" task should be done, and from finding “the answer” to asking the right questions. Professionals across all industries will feel the pressure to master AI tools—and eventually, they will. As Fiverr’s famous campaign said: AI won’t take our jobs. It’ll take them to the next level.
Most nonprofit tech stacks weren’t designed—they were inherited. Outdated. Overextended. And duct-taped together just to get by. But funders aren’t just funding impact—they’re scanning your systems. If your backend screams “not ready,” that next grant might quietly disappear. We pulled together 3 quick case studies of orgs who flipped the script—and used tech to send a different signal: We’re built to scale. We’re built for trust. Curious where your stack stands? 👇 Read the post and let me know what resonates. #nonprofittech #grantfunding #digitalreadiness #impactdriven #socialsector #trustinfrastructure
I’ve been prototyping similar AI workflows—and this prompt design hits the sweet spot between coaching and systems thinking. What I appreciate most about Liz Bradford’s approach is that it respects the user’s agency. It doesn’t prescribe; it provokes. That’s the difference between AI-as-tool vs. AI-as-gimmick. The right prompt isn’t a shortcut—it’s a compass. If you're exploring reflective design, personal systems, or augmented coaching practices, this is worth your time.
Liz Bradford
I tested AI coaching for 30 days. The results shocked me. As the founder of a peak performance coaching community, I'm about to share something that might put me out of business. It’s a topic we debate plenty in the Transform Perform Coach team chat. But the truth is, some simple prompts, with an LLM, can help you level up in life. Start with this prompt, courtesy of Zain Kahn's Superhuman newsletter (link in comments👇). Prompt: "Act as a purpose-driven business coach. Help me discover my Ikigai — the intersection of what I love, what I’m good at, what the world needs, and what I can be paid for — and guide me in turning it into a business or career path. Start by asking me reflective questions in each of the four Ikigai dimensions to uncover my passions, strengths, marketable skills, and potential value to others. Once we’ve mapped those, suggest business ideas or career directions that align with my Ikigai. For each suggestion, explain how it connects to the four elements and provide examples of real-world businesses or roles that reflect similar paths. Include prompts for journaling, self-assessment tools, and small experiments I can run to test alignment. End with a one-page Ikigai roadmap summarizing my core themes, possible paths, and first action steps toward building a purpose-driven life or business." This simple prompt can help you to... Discover your purpose through these core elements: 👉 Align your passions with marketable skills 👉 Identify where your talents meet global needs 👉 Test small experiments in your chosen direction 👉 Track insights through guided self-reflection Transform your career path: 👉 Start with 15 minutes of daily journaling 👉 Map your strengths against market opportunities 👉 Create micro-projects to validate your direction 👉 Build evidence through real-world testing So why would I encourage you to start experimenting with AI coaching in this way? 
The ultimate paradox of coaching: ✨ My role is to help you not need me ✨ Your success means becoming your own guide ✨ True growth happens when you lead yourself So, are you ready to begin? ♻️ Share this with someone who needs an empowering high five 👉 Follow Liz Bradford for insights to boost your wellbeing, career and augment your business.
Is AI at risk of consuming itself? A recent TechCrunch article highlights a pressing issue: AI models trained on their own generated data can experience “model collapse,” leading to a decline in performance and originality. This isn’t just a technical concern—it has profound implications for how we build and rely on AI systems in civic and public sectors. As we integrate AI into public services, ensuring the integrity and reliability of these models becomes paramount. We must ask: Are we prioritizing short-term efficiency over long-term sustainability? Let’s discuss how we can implement strategies to prevent such pitfalls and promote responsible AI development. https://lnkd.in/eq7ubtYG #AI #ModelCollapse #CivicTech #ResponsibleAI #PublicSectorInnovation
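The failure mode the article describes can be demonstrated in miniature: repeatedly fit a simple model to its own samples and watch its variance (its diversity) decay across generations. This is a toy statistical sketch of the mechanism, not the article's actual experiment:

```python
# Toy illustration of "model collapse": a Gaussian model repeatedly refit
# to its own finite samples loses variance generation after generation,
# a miniature version of models degrading on their own outputs.
import random
import statistics

random.seed(0)
mu, sigma = 0.0, 1.0   # the "model": a simple Gaussian
n = 20                 # synthetic samples drawn per generation

variances = []
for generation in range(200):
    # Train each new model only on the previous model's own outputs.
    samples = [random.gauss(mu, sigma) for _ in range(n)]
    mu = statistics.fmean(samples)
    sigma = statistics.pstdev(samples)   # MLE refit is biased slightly low
    variances.append(sigma ** 2)

# Later generations are far less diverse than the original distribution.
print(variances[0], variances[-1])
```

Each refit slightly underestimates the spread of the data it sees, and the errors compound: after enough generations the model produces nearly identical outputs. The civic-sector takeaway is the same as the article's: provenance of training data is a sustainability question, not a detail.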
“Master AI or get replaced.” Aakash Gupta is right—the gap is widening fast. But what we master—and why—matters just as much. Yes, product managers need fluency. Yes, AI will reshape the work. But if we only teach people to optimize for outputs, we’ll miss the bigger shift. At SWARM, we’re working with community colleges and civic teams to build AI literacy that’s grounded in purpose: • How to use AI to understand systems • How to design with (not just for) communities • How to turn data into meaningful stories Because the future isn’t just AI-native. It has to be ethics-native, context-native, equity-native too. The next 12 months won’t just separate AI-masters from AI-users. They’ll separate builders of extractive tools from builders of trustworthy systems.
Aakash Gupta
Some of the smartest CEOs we know are saying it: Master AI… or get replaced. So here’s how:

BTW, it’s not just CEOs. I’ve been talking to top product leaders, and they have the same expectations of their PMs. And here’s the thing: the gulf between those who just use AI vs those who are masterfully using it widens every day. To keep you on the right side of the curve, I put together a full roadmap.

PART 1 - USING AI AS A PM
1. The AI PM's Playbook: How Top Product Managers Are 10x-ing Their Impact in 2025 - https://lnkd.in/ef8TkEdQ
2. The Ultimate Guide to AI Prototyping - https://lnkd.in/eJujDhBV
3. Tutorial of Top 5 AI Prototyping Tools With Colin Matthews - https://lnkd.in/eit9MnaR
4. How to Use ChatGPT as a PM - https://lnkd.in/eyfUi8ZN
5. We Built an AI Product Manager in 58 mins (Claude, ChatGPT, Loom + Notion AI) - https://lnkd.in/eAZZ8MSb
6. We Built an AI Agent to Automate PM in 73 mins (ZERO CODING) with Anthony Maggio - https://lnkd.in/ebF8hhvx
7. How to PM with AI at Early Stage Startups with Sergio Pereira - https://lnkd.in/eWt7UciA

PART 2 - AI PM SKILLS
1. Complete Course: AI Product Management With Paweł Huryn - https://lnkd.in/eN2_-6xi
2. Use AI To Draft Your Email and Slack Messages with MCP - https://lnkd.in/ejMMjAAV
3. 21 Harsh Truths about Product Management in AI with Scott Jones - https://lnkd.in/esYZcN7N

PART 3 - BUILDING AI PRODUCTS
1. How to Build an AI Startup - https://lnkd.in/e4MaHeKM
2. Most People are Building AI Products Wrong - Here's How to do it Right - https://lnkd.in/eDGmsvZ5
3. Product Management in the Era of AI with Sam Kawsarani - https://lnkd.in/eGjTrvQe

PART 4 - BECOMING AN AI PM
1. How to Become an AI Product Manager with No Experience - https://lnkd.in/e85tyhxy
2. How to become an AI Product Manager - https://lnkd.in/emUdezbS
3. How to Write a Killer AI Product Manager Resume (With Examples) - https://lnkd.in/ehAbcRGy
4. How to Become an AI PM With Marily Nika, Ph.D - https://lnkd.in/eid66mjS
5. 2025 AI PM Job Search Masterclass with Dr. Nancy Li - https://lnkd.in/eqqMD_Ne

The next 12 months will separate the AI-masters from the AI-users. Which will you be?
So proud of my former student Ahmer A. Ahmer was in my first semester of teaching UX at Kingsborough Community College, The City University of New York. He was so determined and focused. I really knew from the start he was going places. Turns out that place was NYU. Congratulations, Ahmer!
Ahmer A.
I officially graduated with my bachelor’s degree from MY DREAM SCHOOL, New York University with Honors! These past few years have been filled with growth, challenges, and unforgettable memories. From studying abroad and traveling through over 30 cities, 5 countries, and two continents to exploring entrepreneurship and forming lifelong friendships, this journey has shaped me more than I could have imagined. It all started in 9th grade when I asked my favorite teacher Naomi O'Reilly at the end of class “How do I become like you?” She told me she graduated from NYU with a degree in education. I went home later that day and researched everything I could find about NYU, and it instantly became my dream school. I began college not knowing what to do, so I did what any brown kid with immigrant parents would do: I started as a premed student, and I hated it! I wanted to help people but I just didn’t know how to until I stumbled across a poster about how user experience design can solve real-life problems. This little poster changed the trajectory of my academic career and LIFE. As a child of deaf parents, I witnessed firsthand the barriers that my deaf parents had to face when engaging with digital technologies. I made it my mission to make tech more accessible and inclusive for everyone. Throughout these years, I managed to intern at companies focusing on accessibility and improving user experiences, built my startup from scratch, and created cool projects involving AI and accessibility for the Deaf community. A special thanks to my parents, who have sacrificed so much in a world that isn’t built for them. I will build it for you. I would like to thank New York University, NYU School of Professional Studies, Matt Shelosky, PMP, Mechthild Schmidt Feist, Cyril Tsiboulski, Johnson Tang, Nate Cooper, My family Syeda Alina R., Abid Ali, Hussain Ali, The Knowledge House, My friends Talha Gondal, Usman Abdullah Waqas Arain, and more who supported me through this journey. 
Your belief in me has been the foundation of everything I’ve achieved. “Indeed, God is the best of planners” 8:30 California, I’m coming for you! 💛👻 #NYU #NYU2025 #NYUGRAD #ClassOf2025
UC Berkeley saw a 1,106% increase in CS majors from 2011 to 2021. MIT, Caltech, and Cornell weren’t far behind. Bootcamps promised six-figure salaries in 12 weeks. Policymakers warned of a tech talent shortage. And yet: 📉 There are fewer software developers employed in the U.S. today than in 2018. 🎓 Even top CS grads from Berkeley report struggling to find jobs. 🏚️ Dozens of bootcamps have quietly closed their doors. This isn’t just about tech. It’s about how easily we reduce complex challenges to simplistic solutions. “Learn to code” was never just advice. It was an ideology.
Amelia Sordell 🔥 (@ameliasordell) - 228k followers
Vaibhav Sisinty ↗️ (@vaibhavsisinty) - 451k followers
Richard Moore (@richardjamesmoore) - 105k followers
Sam G. Winsbury (@sam-g-winsbury) - 49k followers
Shlomo Genchin (@shlomogenchin) - 49k followers
Daniel Murray (@daniel-murray-marketing) - 150k followers
Matt Gray (@mattgray1) - 1m followers
Austin Belcak (@abelcak) - 1m followers
Ash Rathod (@ashrathod) - 73k followers
Izzy Prior (@izzyprior) - 82k followers
Wes Kao (@weskao) - 107k followers
Luke Matthews (@lukematthws) - 188k followers
Sahil Bloom (@sahilbloom) - 1m followers
Sabeeka Ashraf (@sabeekaashraf) - 20k followers
Tibo Louis-Lucas (@thibaultll) - 6k followers
Andy Mewborn (@amewborn) - 213k followers