LinkedIn stats and posts of Ahmet Acar (via Taplio).
Your business is eating up all your time, but you can't just keep hiring people? You're looking to improve productivity, but online, agile frameworks, and digital transformation have failed you?

Imagine developing and running your products in half the time you put in today. Imagine making informed decisions in days, not weeks. Imagine running 20+ experiments per month instead of 2 per year.

If you are struggling to:
- create digital products your customers love
- improve productivity in your business
- spend time working on the business instead of in it

Let's talk and I'll show you how you can automate your workflows and processes, and attain modern product operations with AI and automation.

Past customers: McKinsey, Amazon, Google, Facebook, SAP, Maersk, TUI, Standard Bank, DB, Haniel, Haufe, Mercedes, Volkswagen, Sanofi, Zurich, Deutsche Bank, +30 more. Ask me for the full list and 1-page case studies.

----------

Customers I helped had the following challenges:
☒ Manual production of products & services, with no established systems
☒ Products that don't perform and lack an effective product strategy
☒ Inefficient product development and delivery processes

Here is how I helped them:
☑︎ Build systems that create new products and services reliably
☑︎ Develop high-performing products with an effective product strategy
☑︎ Introduce best practices in product operations
How to use AI tools well. Here are some dos and don'ts based on the past 3 years of experimenting:

DO
1) use (gen)AI tools in a domain you understand, to speed up your work and increase its quality
2) review everything that is generated in detail before you use the content
3) make sure you understand what is generated and see if you can replicate it
4) use these tools in a complementary way, not as replacements

DON'T
1) use them in a domain you don't understand or are unfamiliar with. Learn the basics at least and have the results reviewed by an expert
2) just copy and paste or use without review and revision
3) just accept the answer as is without checking sources
4) try to use them in ways they aren't intended
5) see them as able to reason or understand

There are huge benefits to using these tools, but you actually need to understand the underlying AI approaches and their limitations. It doesn't take a lot to do that; there are good courses on this. Here are two I can recommend:
https://lnkd.in/ePBhrWGi - by the University of Helsinki
https://lnkd.in/en4D6a67 - by 2 US professors
Do no evil. ...but provide the IDF with GCP AI tools to target and kill people in Gaza. It's not just Google. All of the cloud and AI providers are in on it. Then again, Hugo Boss made some nice uniforms for you know who.
"LLM make computing radically more accessible, by allowing untrained users to converse with the machine in their own natural language. Look for places where that ability solves real problems." - Carl Bergstrom, the bullshit machines Instead of trying to replace specialists with LLM and wrapping products with these AI systems at their core, a more sensible approach is to wrap an LLM around a complex software to make their use easier. Think of domains where people aren't paid to learn to use IT systems but just need these systems to quickly get to the results they want. An example of this is medicine. My in-law Sebastian Frees often complained about the awful UI of the IT systems he has to use in his urology practice. Most of these systems are basically just documentation, reporting, and accounting software, but with an insane amount of menus, options, and configurations to go through. They're like super extensive paper forms. Most doctors waste an hour or two every day dealing with these tools because of the complexity of their work. They also have to learn these systems in the first place and spend a lot of money on their annual licenses. What if you could just talk to the system instead? What if it could listen in on your conversation with the patient and fill out the relevant details? What if instead of searching and selecting the medicine from a long list, the doctor could simply say what they are prescribing, and the system would find it? There are many examples of complex IT systems in other domains. Most can not be fixed by a simple UI overhaul. But one could wrap an LLM around them to let untrained users to converse with the machine in their own natural language. By untrained, I mean untrained in the system, not clueless of the domain. If the above holds true, then 99% of AI startups out there are BS and will crash and burn because they are putting the LLM at the core instead.
LLMs make it possible for us to use computers in our natural language instead of learning languages that are built for human-computer interaction. What might this allow us to do that isn't possible today?

Today, if you want to organize hundreds of files of a certain size on your computer, you need to know the right commands in PowerShell or Bash. Most of you reading this won't know what I'm talking about. If you're not a developer and don't love spending time in the operating system of your computer, chances are you've never used a command line interface.

But if your computer used an LLM-based interface, you could just tell it what you want it to do, have it reply with what it is going to do, and then run the existing tools on your system. This is what people should mean by "agentic systems".
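As an illustration of that "tell it, confirm it, run it" loop, here is a minimal Python sketch. The ask_llm() helper is hypothetical (any chat-completion API would fit), and the file-organizing request is the example from the post; the key design choice is that nothing executes until the user has read the proposed command and explicitly confirmed it.

```python
import subprocess

# Hypothetical helper: send a prompt to whatever LLM API you use and return its text reply.
def ask_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your chat-completion API of choice")

def agentic_shell(request: str) -> None:
    """Translate a natural-language request into a shell command, confirm, then run it."""
    command = ask_llm(
        "Propose a single Bash command for this request. "
        "Reply with the command only, no explanation.\n\n"
        f"Request: {request}"
    ).strip()

    print(f"Proposed command:\n  {command}")
    if input("Run it? [y/N] ").lower() != "y":
        print("Aborted.")
        return

    # The LLM only proposes; the existing tools on the system do the actual work.
    subprocess.run(command, shell=True, check=False)

# Example: agentic_shell("move every file larger than 100 MB in ~/Downloads into ~/big-files")
```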
Thanks to LLMs and diffusion models, we're entering a new era of information warfare. WW3 is being fought with misinformation.

In 2023, an image of the Pentagon in flames spread through social media. While truth was putting its pants on, that lie had already galloped through the world economy and sent jitters through the stock market. Imagine a trader spreading fake information like this to then benefit from the short dip that is likely to follow. Or imagine one government taking action like this against another to attack its economy. As these tools become more sophisticated, we'll see this at all levels. In the case of the fake Pentagon attack, the US economy dropped half a trillion dollars within days.

If you're old enough, you'll remember the "weapons of mass destruction" narrative used to initiate a war in the Middle East. It's no wonder that 1984 now reads like an instruction manual. And we, the public, are at risk.
Don't throw the baby out with the bathwater. Current genAI approaches have their strengths. Here's a story.

Back in 2004, I was doing research for Klaus Rebensburg on the use of and interaction with multimedia. One of the challenges we were looking at was trying to recognize and differentiate images, mainly painted portraits. This was for an art project we were running with Uli Weinberg, combining portrait paintings by an artist with projected copies that we would manipulate with AI. Suffice it to say, the project (mostly) failed. One of the reasons was that we couldn't figure out how to differentiate the face in the painting from the background. We went to conferences and listened to papers with titles such as "high accuracy edge detection with blurred edge models". Basically, people were trying to "teach computers how to see" by programming a bunch of rules into software. This approach didn't work.

What did work was machine learning and neural networks. Today, your phone can recognize you in different photos under different lighting conditions. It can distinguish your cat from your hat. As someone who worked on this stuff 20 years ago, I can tell you that this is brutally hard to do. And yet, we have it today.

Back then, another professor of mine, in computational linguistics, told me that "computers will never recognize human language well" because we simply did not have the data sets and the computing power. Boy, was he wrong.

Today, people have a hard time keeping in mind that ChatGPT is just a probability engine trained on language. It feels real, it feels sentient. Of course, it is anything but. There is no "reasoning model" there, but the mere fact that you can use your everyday language to interact with an IT system is already a huge achievement.

So there's no need to inflate what these systems can do. But there is no sense in ignoring what they can do, either. People who know better should educate others, not just criticize these systems.
I stopped using LLMs in my design thinking process, specifically in ideation. Here is why:

1) The Anchoring Effect
In helping me create, discuss, and evolve ideas, the AI gives results that feel productive, but in reality it is just creating an anchoring effect. "Anchoring bias is a process whereby people are influenced by specific information given before a judgement." There's been substantial research on this, including a meta-study of the existing literature. In short: the LLM will convince you that something is a good idea although it isn't.

2) Design Fixation
In discussing potential solutions, the AI limits the output of my design because it blindly sticks to its training data, or in other words, to what it knows. That's highly problematic because many studies show this fixation to be "a measurable barrier in the conceptual design process." The results are more likely to be boring and unoriginal (but it won't feel like it, because of anchoring).

3) Solution Space Limitation
Yes, LLMs have been trained on everything under the sun. The problem is, not every approach is extensively studied or written about. Some lesser-known techniques are drowned out by the common (and boring) advice. This means that the solutions and designs are typically based on just a few creativity theories. You can ask an LLM about design thinking, TRIZ, etc., or to explain different creativity theories. Once you instruct it to stick to one approach, however, things get muddled up. The solution space is limited by whatever is strongest in the training data of the LLM, which is difficult to know as a user.

4) Biased Studies
I've followed up on the various innovation, ideation, and creativity studies with LLMs posted here and elsewhere, e.g. by people like Mollick. I haven't found ANY studies that addressed any of the issues above. None. Zip. Nada. I'd expect scientists to apply some critical thinking before they gush out with "this is gonna revolutionize everything" type statements. Alas, here we are.

I still use AI tools, but I am careful not to undermine processes and intuition built over 3 decades of work. So should you.

---

Sources on the above: David G. Jansson, Steven M. Smith, Adrian Furnham, Hua Chu Boo, Carl Bergstrom, Hung-Fu Chang, Tong Li
Right now, using LLM-based systems like Perplexity to search for information seems to be the sensible thing to do. But here is the problem: if a tool like Perplexity points to a credible source, like the WHO website for health information on a certain virus, you can check the source to see if the tool is providing the correct answer. But what will happen once the organizations hosting those sites start to use LLMs to create the websites? What if you can no longer trust the sources? Or if you don't know which sources to trust? This is already happening, with major organizations reducing or firing their web and content teams and relying on AI-generated text instead.

Once bullshit is no longer distinguishable from facts, the value of the internet will rapidly decline. It's already happening on social media, which is why more and more people are opting out of platforms like this one. LinkedIn used to be a great source of industry information and networking. But in the past 2 years, the value of the platform has rapidly declined. It's why I've canceled my premium subscription.

Right now, on our ad- and attention-driven platforms, this doesn't seem to be a mass problem yet. But I suspect that in the next few years, all of these platforms will run into major challenges, with people being irritated and frustrated by the content and interactions they're getting. The alternative? No clue, but I'm guessing that there will be a premium on information sources and search that are validated and grounded in facts, while the rest of the population will just get BS.
China is leading AI now. You might want to rethink your $200/month ChatGPT subscription.

Some stats on AI research in China:
- 2023: leader in AI research publications with 23.2%, vs 15.2% for the EU and 12% for the USA
- 2024: 69.7% of AI patents granted, vs 14.2% for the USA
- 2025: 36% of global AI research output, vs 17% for the EU and 16% for the USA

Stats on investment:
- China: ~$55–$100+ billion/year, focused on infrastructure and R&D
- USA: ~$109–$500+ billion/year, focused on the private sector and data centers
- EU: ~€40 billion/year, focused on industrial applications and ethical AI

As for performance: as of 2025, AI systems built in China outperform other models on benchmarks. With the USA moving towards fascism, the argument against models created in an autocratic system no longer holds. So now you're free to pick whatever models and tools you want. Make of that what you will.